
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
2 answers
142 views
Theoretical writes/sec limit
If I have SolarWinds with an alert set to fire whenever we exceed 300 writes per second, and I know 300 writes per second is fine (writes per second often go over 6,500), how can I work out the theoretical limit of writes per second, so I can adjust the alert and make it actionable? If "the number of writes should not exceed 85% of the disk capacity", how can I work out the number of writes that are possible with my current disk configuration?
jack (419 rep)
Feb 3, 2020, 03:08 PM • Last activity: Aug 2, 2025, 05:04 AM
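If "writes should not exceed 85% of the disk capacity" is the rule, the threshold follows from the disk configuration. A minimal sketch of the arithmetic, assuming illustrative per-disk IOPS figures and a RAID write penalty (substitute your hardware's real numbers):

```python
def max_write_iops(disks: int, iops_per_disk: float, raid_write_penalty: int) -> float:
    """Raw array write IOPS: per-disk IOPS summed, divided by the RAID
    write penalty (e.g. 2 for RAID 1/10, 4 for RAID 5)."""
    return disks * iops_per_disk / raid_write_penalty

def alert_threshold(disks, iops_per_disk, raid_write_penalty, fraction=0.85):
    """85%-of-capacity alert level, per the rule quoted in the question."""
    return max_write_iops(disks, iops_per_disk, raid_write_penalty) * fraction

# Illustrative numbers: 8 x 15k-rpm drives (~180 IOPS each) in RAID 10
print(max_write_iops(8, 180, 2))    # 720.0
print(alert_threshold(8, 180, 2))   # 612.0
```

With those assumed numbers the alert would move from 300 to roughly 612 writes/sec; a write-back controller cache or SSDs would change the per-disk figure substantially.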
3 votes
2 answers
411 views
Tracking stored procedure performance metrics
I’m trying to track the impact of stored procedure executions - CPU time, execution duration, logical/physical reads, memory grants, etc. - in order to visualize them. In particular, I'm looking for results for a stored procedure as a whole, not for any individual statements within it. I’ve explored several options - wrappers, DMVs, sp_WhoIsActive, custom logging procedures, etc. - but I’m still not fully convinced by the approaches I’ve used so far. How can I reliably capture stored procedure performance metrics over time?
Kaaniche Emna (31 rep)
Jul 24, 2025, 02:51 PM • Last activity: Jul 29, 2025, 05:33 PM
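One common pattern (not specific to any of the tools mentioned in the question) is to sample a cumulative per-procedure DMV such as sys.dm_exec_procedure_stats on a schedule and diff successive snapshots to get per-interval metrics. A sketch of the delta arithmetic, with illustrative field names rather than the exact DMV schema:

```python
# Cumulative counters only become chartable once you diff successive
# snapshots; dividing the deltas by the execution delta gives
# per-execution averages for the interval.

def interval_metrics(prev: dict, curr: dict) -> dict:
    """Per-interval averages from two snapshots of cumulative counters."""
    execs = curr["execution_count"] - prev["execution_count"]
    if execs <= 0:
        return {"executions": 0}
    return {
        "executions": execs,
        "avg_cpu_ms": (curr["total_worker_time_us"] - prev["total_worker_time_us"]) / execs / 1000,
        "avg_reads": (curr["total_logical_reads"] - prev["total_logical_reads"]) / execs,
    }

prev = {"execution_count": 100, "total_worker_time_us": 5_000_000, "total_logical_reads": 40_000}
curr = {"execution_count": 160, "total_worker_time_us": 9_800_000, "total_logical_reads": 70_000}
print(interval_metrics(prev, curr))
# {'executions': 60, 'avg_cpu_ms': 80.0, 'avg_reads': 500.0}
```

The caveat with plan-cache DMVs is that counters reset when the plan is evicted, which the `execs <= 0` guard papers over by discarding that interval.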
0 votes
2 answers
152 views
Track database slowdown in real time
To minimize downtime and to detect database slowdown early, we query the processlist table every second, and if we find queries slowing down, we throttle the lower-priority queries and take other measures.
SELECT avg(time) FROM INFORMATION_SCHEMA.PROCESSLIST 
WHERE db = 'mydb'
	AND user = 'app_user'
	AND state != ''
	AND info NOT LIKE '%ALTER TABLE%'
	AND info NOT LIKE '%CREATE INDEX%'
	AND info NOT LIKE '%CREATE UNIQUE INDEX%'
	AND info NOT LIKE '%performance_summary%'
	AND info NOT LIKE '%certain_table%'
	AND info NOT LIKE '%certain_other_table%';
However, we realise that querying the processlist table is in itself pretty expensive and are looking for an alternative. We already tried performance_schema.threads, but found that the numbers reported by it were not that accurate and it showed low numbers, even when we were having obvious database issues.
SELECT AVG(PROCESSLIST_TIME) FROM performance_schema.threads
Another alternative is to just have our application track average times itself, but we were wondering whether there may be another database-specific solution. EDIT: To clarify, the above query is run in real time with no human intervention. If the application detects degrading performance, it immediately throttles other queries to try to bring the overall performance back up. Everything is automated. Granted, the process list contains many idle queries, but this system has worked for us for many months now, and we are just trying to implement a "no temporary table on disk" solution.
Shahid Thaika (223 rep)
Oct 6, 2022, 02:59 PM • Last activity: Jul 16, 2025, 12:00 AM
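Whatever the counters come from, reacting to a single 1-second sample is noisy; smoothing the signal with an exponential moving average before throttling is one option. A sketch under assumed threshold and smoothing parameters (the class and the values are hypothetical, not part of the question's setup):

```python
class SlowdownDetector:
    """Hypothetical helper: throttle only when the *smoothed* average
    query time crosses a threshold, not on a single noisy sample."""

    def __init__(self, threshold_s: float = 5.0, alpha: float = 0.1):
        self.threshold_s = threshold_s   # smoothed seconds that trigger throttling
        self.alpha = alpha               # EWMA weight given to the newest sample
        self.ewma = 0.0

    def observe(self, avg_time_s: float) -> bool:
        """Feed one per-second sample; True means start throttling."""
        self.ewma = self.alpha * avg_time_s + (1 - self.alpha) * self.ewma
        return self.ewma > self.threshold_s

d = SlowdownDetector()
print(any(d.observe(s) for s in [1, 1, 30, 1, 1]))   # False: a lone spike is ignored
d2 = SlowdownDetector()
print(any(d2.observe(s) for s in [10] * 20))         # True: a sustained slowdown trips it
```

A smaller `alpha` tolerates longer spikes at the cost of reacting later; the same smoothing works regardless of whether the samples come from the processlist, performance_schema, or the application itself.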
2 votes
2 answers
3871 views
How do you monitor PostgreSQL queries?
I have an application which uses PostgreSQL. I want to monitor all the queries that the application sends to the database. Is there a way to monitor the queries sent?
Axel (121 rep)
Nov 24, 2020, 02:18 PM • Last activity: Jul 15, 2025, 10:18 PM
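One widely used approach for exactly this (capturing every statement an application sends) is PostgreSQL's built-in statement logging. A minimal postgresql.conf sketch; the zero threshold logs all completed statements with their durations, so raise it in production to limit log volume:

```ini
# postgresql.conf
logging_collector = on              # write logs to files (requires restart)
log_min_duration_statement = 0      # 0 = log every statement with its duration
```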
5 votes
1 answer
149 views
How to calculate required resources from a baseline, and what should be measured at all?
I have to deal with the following situation. Currently we have:

- 1 cluster with 5 nodes running 10 instances (SQL Server 2008 Enterprise Edition), and
- 1 cluster with 2 nodes running 5 instances (SQL Server 2014 Standard Edition).

All servers are virtual servers running in VMware. We want to set up a new cluster (SQL Server 2014 Enterprise). In a first step, 1 instance from the 2008 Enterprise cluster and 1 instance from the 2014 Standard cluster are to be migrated. Therefore, my boss asked 2 questions:

1. How many cores do we need (aim: minimize license costs)?
2. How much RAM do we need?

My answer was: "It depends ..." Now I have to deliver hard facts by monitoring over the next few weeks. Great! (beware of irony) My approach for question number 1: using perfmon.exe I plan to monitor

- **Processor\% Processor Time** (_total and single cores),
- **Processor\% User Time** (_total and single cores),
- **Processor\% Interrupt Time** (_total and single cores) - is this really necessary? - and
- **System\Processor Queue Length**.

The question is, where to get these data from? From the node? From the SQL Server? In the first case it should be easy: the first instance in question - *vsql2008ent-1\instanceX* for the sake of simplicity - is currently running on a node, let's name it *sql2008NodeA*. No other instances or servers should run on this node under normal conditions, so it should not matter where I get the data from, should it? In case of a disaster other instances will be running on this node too, but we want a baseline for normal operation. The second instance - *vsql2014stan-1\instanceY* - shares its node - *sql2014NodeA* - with 2 other instances. In this case I can never be sure how many cores the instance will truly need for smooth operation, right? So I can monitor the instance. But what does the result mean? It shows only the CPU resources actually used by this instance. But would more cores have been used if they were available?
So what would be the answer to the question mentioned above? RAM is the other question. Due to several disasters in the past, when all instances landed on the same node, I have set an upper limit on the maximum server memory for each instance. This limit depends on the available memory of the node (currently 100 GB or 120 GB respectively). So how to monitor this? If all memory is used up, everything seems clear: insufficient memory. If everything goes slowly: insufficient memory. But how much memory do I really need? Let me summarize my questions:

1. Where should I get the measurements from (node vs. server)?
2. Do I need to monitor the interrupt time if I want to know the number of cores required?
3. What should I monitor under the given circumstances to answer the question of how much RAM I need (I know: "the more the better")?

Thank you very much for your help! Best regards!
D.C. (63 rep)
Feb 13, 2017, 11:27 AM • Last activity: Jul 15, 2025, 11:08 AM
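For question 1, one common rule of thumb (an assumption here, not the only valid method) is to size to a high percentile of the baseline utilisation plus headroom, rather than to the average. A sketch with an illustrative 70% target utilisation:

```python
import math

def cores_needed(cpu_pct_samples, current_cores, target_util=0.70, percentile=0.95):
    """Estimate a core count from baseline %-CPU samples of the node."""
    s = sorted(cpu_pct_samples)
    idx = min(len(s) - 1, int(percentile * len(s)))
    peak_pct = s[idx]                                  # high-percentile utilisation, %
    used_cores = current_cores * peak_pct / 100.0      # cores actually consumed at peak
    return math.ceil(used_cores / target_util)         # leave headroom

# e.g. a 16-core node whose 95th-percentile CPU sits around 50%
samples = [20, 25, 30, 35, 40, 40, 45, 50, 35, 30]
print(cores_needed(samples, current_cores=16))         # 12
```

Note this deliberately cannot answer the "would more cores have been used?" problem from the question: a baseline taken on a saturated or shared node only shows what the instance got, not what it wanted, which is why the queue-length counter matters alongside % Processor Time.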
0 votes
1 answer
186 views
"The slave is lagging behind"-mystery
I have a testing scenario with one dedicated server (6 vCore x 3.6 GHz AMD Ryzen 5 Pro 3600, RAM: 32 GB, HDD: 2 x 1000 GB software RAID 1, if that matters) and a MySQL (MariaDB) replication setup with 1 master and 2 replicas on the same machine, just to see how it works. For a production setup this does not make too much sense, but for testing it is OK. It may be interesting to note that I do not use the InnoDB engine for this replication scenario. The total data managed by MySQL is 109 GB now, but not all databases are run by the master (49 GB) and not all of the master data is replicated (19 GB each). For example, there are databases for a mail server and a search engine which are not replicated, the former not even served by the master. The master manages 3 main databases, with 87 tables in one database to be replicated and 2 others which are not replicated, as anything in those can be and is regularly recreated. I run a cron job mysql_repl_monitor.sh every minute based on http://datacharmer.blogspot.de/2011/04/refactored-again-poor-mans-mysql.html which works fine with the little traffic in testing mode, but I expect this scenario to handle high loads just as well. Sometimes I get errors, which I first tried to catch with specific repair mechanisms, but after a while and some more experience I reckoned that this was not worth the effort, so I fall back to another script, mysql_rsync_vx_s.sh, which brutally synchronizes the replica in trouble with the master via rsync. Of course, I have to lock all tables to do this, but rsync may only have to fix some tables, which may only take a few seconds depending on the size of the differing files; it seldom takes up to a minute. In order not to get into trouble when it takes more than a minute, I set a semaphore to prevent invoking mysql_rsync_vx_s.sh more than once.
I have some very big tables and partitioned some of these, so I can manage the size of the files somehow, and this may help rsync, as instead of one or two big files it may only have to cope with a bunch of smaller files. So far I didn't take notes about which tables are affected, which is a pity, I admit. I write a log file but never looked at it except for debugging the script. I just added a permanent log file for rsync to gather this data. By far the most frequent error is **The slave xx is lagging behind**. I guess this accounts for 95% or even more of all errors. Some days I get none, on other days maybe a dozen, without me being able to see a reason. Due to my self-repair mechanism this is not fatal, but it is still annoying. I have experienced this behaviour for a long time now and cannot get a hint of what may be the source of the problem. Remember, it all happens on one dedicated machine running a bunch of Docker containers. mysql_repl_monitor.sh looks at a LAG_THRESHOLD, which was, as far as I remember, originally set to 1000 by data charmer Giuseppe Maxia. I raised it to 14589, but I recorded a difference of 3750449 for $Master_Position-$Read_Master_Log_Pos, which seems to signify that this huge surplus difference is built up within a minute, so very fast. Every couple of days I take a dump of my 3 databases of about 41 GB as a simple copy of files, at a time when those errors do not occur. My machine mostly runs at extremely minimal loads, but even loads of 15 do not trigger this behavior. Often this lagging affects both replicas, but not always. Sometimes it occurs every couple of minutes for a dozen minutes, but mostly the fix is quick and lasting. I am really puzzled. Does anybody have an idea of where to look or what to record? I run a log table of all my queries which is reset at intervals, so I could have a look at what is happening if I knew what and when. #### Addendum @danblack First the master, then one slave:
M:1231014 [ci4]>select version();
+-------------------------------------------+
| version()                                 |
+-------------------------------------------+
| 10.5.10-MariaDB-1:10.5.10+maria~focal-log |
+-------------------------------------------+
1 row in set (0.012 sec)

M:1231014 [ci4]>SHOW VARIABLES LIKE 'binl%';
+-----------------------------------------+--------+
| Variable_name                           | Value  |
+-----------------------------------------+--------+
| binlog_annotate_row_events              | ON     |
| binlog_cache_size                       | 32768  |
| binlog_checksum                         | CRC32  |
| binlog_commit_wait_count                | 0      |
| binlog_commit_wait_usec                 | 100000 |
| binlog_direct_non_transactional_updates | OFF    |
| binlog_file_cache_size                  | 16384  |
| binlog_format                           | ROW    |
| binlog_optimize_thread_scheduling       | ON     |
| binlog_row_image                        | FULL   |
| binlog_row_metadata                     | NO_LOG |
| binlog_stmt_cache_size                  | 32768  |
+-----------------------------------------+--------+
12 rows in set (0.001 sec)
I deleted 346 rows whose Value was OFF, NONE, or 0 from the following list:
S:8120528 [ci4]>show global status;
+-----------------------------------------------+----------------------+
| Variable_name                                 | Value                |
+-----------------------------------------------+----------------------+
| Acl_database_grants                           | 11                   |
| Acl_proxy_users                               | 2                    |
| Acl_table_grants                              | 11                   |
| Acl_users                                     | 23                   |
| Aria_pagecache_blocks_unused                  | 15647                |
| Aria_pagecache_blocks_used                    | 13                   |
| Aria_pagecache_read_requests                  | 48                   |
| Aria_pagecache_reads                          | 35                   |
| Aria_pagecache_writes                         | 15                   |
| Aria_transaction_log_syncs                    | 1                    |
| Binlog_snapshot_file                          | mysql-bin.001631     |
| Binlog_snapshot_position                      | 4662                 |
| Binlog_bytes_written                          | 1620                 |
| Busy_time                                     | 0.000000             |
| Bytes_received                                | 46531353094          |
| Bytes_sent                                    | 34903772             |
| Com_alter_table                               | 2                    |
| Com_begin                                     | 389902               |
| Com_change_db                                 | 11                   |
| Com_change_master                             | 11                   |
| Com_commit                                    | 389897               |
| Com_create_user                               | 1                    |
| Com_delete                                    | 1052                 |
| Com_flush                                     | 10                   |
| Com_grant                                     | 1                    |
| Com_insert                                    | 39856                |
| Com_purge_before_date                         | 19                   |
| Com_reset                                     | 1                    |
| Com_select                                    | 24                   |
| Com_set_option                                | 13                   |
| Com_show_slave_status                         | 12283                |
| Com_show_status                               | 4                    |
| Com_start_slave                               | 15                   |
| Com_stop_slave                                | 14                   |
| Com_unlock_tables                             | 9                    |
| Com_update                                    | 348991               |
| Compression                                   | OFF                  |
| Connections                                   | 18504                |
| Cpu_time                                      | 0.000000             |
| Created_tmp_files                             | 26                   |
| Created_tmp_tables                            | 2                    |
| Feature_check_constraint                      | 3                    |
| Handler_commit                                | 10                   |
| Handler_delete                                | 1255714              |
| Handler_icp_attempts                          | 26                   |
| Handler_icp_match                             | 26                   |
| Handler_read_first                            | 3                    |
| Handler_read_key                              | 1604764              |
| Handler_read_next                             | 339                  |
| Handler_read_rnd                              | 26                   |
| Handler_read_rnd_next                         | 503                  |
| Handler_tmp_write                             | 457                  |
| Handler_update                                | 348992               |
| Handler_write                                 | 1237458              |
| Key_blocks_not_flushed                        | 0                    |
| Key_blocks_unused                             | 52745                |
| Key_blocks_used                               | 105613               |
| Key_blocks_warm                               | 40905                |
| Key_read_requests                             | 53155707             |
| Key_reads                                     | 1041728              |
| Key_write_requests                            | 12641012             |
| Key_writes                                    | 4407716              |
| Last_query_cost                               | 0.000000             |
| Max_used_connections                          | 2                    |
| Memory_used                                   | 52306256             |
| Memory_used_initial                           | 33979576             |
| Open_files                                    | 34                   |
| Open_table_definitions                        | 11                   |
| Open_tables                                   | 12                   |
| Opened_files                                  | 4056                 |
| Opened_plugin_libraries                       | 2                    |
| Opened_table_definitions                      | 242                  |
| Opened_tables                                 | 186                  |
| Qcache_free_blocks                            | 1                    |
| Qcache_free_memory                            | 1031304              |
| Qcache_total_blocks                           | 1                    |
| Queries                                       | 1195771              |
| Questions                                     | 24770                |
| Rows_read                                     | 1605109              |
| Rows_sent                                     | 503                  |
| Rows_tmp_read                                 | 454                  |
| Rpl_semi_sync_master_status                   | OFF                  |
| Rpl_semi_sync_slave_status                    | OFF                  |
| Rpl_status                                    | AUTH_MASTER          |
| Select_scan                                   | 2                    |
| Slave_heartbeat_period                        | 30.000               |
| Slave_received_heartbeats                     | 4878                 |
| Slave_running                                 | ON                   |
| Slaves_running                                | 1                    |
| Sort_range                                    | 1                    |
| Sort_rows                                     | 26                   |
| Syncs                                         | 4632                 |
| Table_locks_immediate                         | 786550               |
| Table_locks_waited                            | 731                  |
| Table_open_cache_active_instances             | 1                    |
| Table_open_cache_hits                         | 787177               |
| Table_open_cache_misses                       | 157832               |
| Threads_cached                                | 1                    |
| Threads_connected                             | 1                    |
| Threads_created                               | 2                    |
| Threads_running                               | 3                    |
| Uptime                                        | 736783               |
| Uptime_since_flush_status                     | 736783               |
| wsrep_cluster_conf_id                         | 18446744073709551615 |
| wsrep_cluster_status                          | Disconnected         |
| wsrep_local_index                             | 18446744073709551615 |
+-----------------------------------------------+----------------------+
454 rows in set (0.001 sec)
Which of these values are significant?
kklepper (101 rep)
Mar 23, 2022, 12:01 AM • Last activity: Jul 8, 2025, 12:06 PM
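One thing worth separating in the monitoring script: a byte-position gap and a time lag are not the same signal, and a single large transaction under ROW-based binlogging (binlog_format = ROW with FULL row images, per the variables above) can open a multi-megabyte gap within one polling interval without the replica being meaningfully behind in time. A sketch of that distinction, assuming fields gathered from SHOW MASTER STATUS / SHOW SLAVE STATUS and illustrative thresholds:

```python
def lag_state(status: dict, byte_threshold=14_589, seconds_threshold=60):
    """Classify replica state from (assumed) SHOW MASTER STATUS /
    SHOW SLAVE STATUS fields; the thresholds are illustrative."""
    byte_gap = status["Master_Log_Pos"] - status["Read_Master_Log_Pos"]
    secs = status["Seconds_Behind_Master"]
    if secs is None:
        return "broken"               # IO/SQL thread not running
    if secs > seconds_threshold:
        return "lagging"              # genuinely behind in time
    if byte_gap > byte_threshold:
        return "large-transaction"    # big byte gap, but nearly caught up in time
    return "ok"

print(lag_state({"Master_Log_Pos": 3_750_549,
                 "Read_Master_Log_Pos": 100,
                 "Seconds_Behind_Master": 2}))   # large-transaction
```

Under this reading, a 3,750,449-byte gap against a 14,589-byte threshold would fire the alert even while the replica is seconds from catching up, which matches the "fix is quick and lasting" pattern described above.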
0 votes
1 answer
175 views
Client-side SQL monitor for Oracle database applications
We have a custom application that connects to an Oracle database. We need a tool that is capable of running on the client machine (Windows 10/11), able to capture all SQL sent to the database, and able to show how long it takes to get a response. In the Windows 7 environment we were able to use Quest Software SQL Monitor (freeware); however, this tool does not work on Windows 10/11. The main question: is there a tool capable of capturing the application-side SQL requests and showing whether a response is slow to arrive?
Janis T (103 rep)
Mar 10, 2022, 10:13 AM • Last activity: Jul 7, 2025, 06:06 PM
0 votes
1 answer
11695 views
Which MySQL variable shows CPU usage?
I am monitoring a MySQL server using phpMyAdmin's Status => Monitor function, and it shows a number of useful graphs. **Which variables (or which mechanism) is phpMyAdmin using to pull this data?** P.S. I was wondering if it is calling the SHOW STATUS query? But when I look through the bunch of variables returned by SHOW STATUS, I cannot spot anything related to CPU, memory, or even active connections...
Dimitry K (235 rep)
May 18, 2019, 02:23 PM • Last activity: Jul 7, 2025, 04:04 PM
0 votes
1 answer
169 views
What is the y-axis of this Oracle ASH Viewer?
I am investigating different monitoring software for my database, and I stumbled upon this open-source ASH Viewer. I noticed that in the main graph, the y-axis for resource usage is labeled "Active Sessions", with varying integers on the axis. Link: https://github.com/akardapolov/ASH-Viewer#how-it-works I don't understand what "Active Sessions" is supposed to mean. Is this the CPU % utilization of the database?
theGreenCabbage (111 rep)
Mar 10, 2020, 12:41 AM • Last activity: Jul 5, 2025, 07:07 PM
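For what it's worth, in ASH-style tools that axis is generally Average Active Sessions (AAS): the number of sessions observed active (on CPU or waiting) per sample, averaged over the window. It is a session count, not a CPU percentage, so it can exceed the number of CPUs. A sketch of how the estimate falls out of the samples:

```python
def average_active_sessions(samples: list[int]) -> float:
    """AAS ~ DB time / wall-clock time, estimated by averaging the
    count of active sessions seen at each (e.g. 1-second) sample tick."""
    return sum(samples) / len(samples)

# 10 one-second samples of how many sessions were active at each tick
print(average_active_sessions([0, 1, 3, 2, 0, 4, 5, 2, 1, 2]))  # 2.0
```

An AAS of 2.0 means that, on average, two sessions were doing work at any instant; sustained AAS above the CPU count is a common saturation signal.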
7 votes
1 answer
278 views
Find a previous statement or held locks in a blocking situation
I'm using the XE blocked_process_report to detect and analyse blocking. But since it's a point-in-time snapshot, I only see the blocking leader's currently running statement and the incompatible locks the blocked session is trying to acquire. So if the blocking leader ran several batches/statements in a transaction, I cannot find out which of the previous statements causes the blocking. Repro script:
/* Set up tables */
CREATE TABLE dbo.FirstQuery (Id int PRIMARY KEY)
CREATE TABLE dbo.SecondQuery (Id int PRIMARY KEY)

INSERT INTO dbo.FirstQuery (Id)
OUTPUT Inserted.Id INTO dbo.SecondQuery ( Id )
VALUES (1), (2), (3)

/* set up the blocked process event */

EXEC sys.sp_configure
        @configname = 'blocked process threshold (s)' -- varchar(35)
      , @configvalue = 10 -- int
    
RECONFIGURE 

CREATE EVENT SESSION [blocked_process_report] ON SERVER 
ADD EVENT sqlserver.blocked_process_report
(
    ACTION(sqlserver.session_id,sqlserver.sql_text,sqlserver.tsql_stack)
)
GO
ALTER EVENT SESSION [blocked_process_report] ON SERVER STATE = START

/* Session 1 - run first and don't commit or rollback */
BEGIN TRANSACTION
    DELETE FROM dbo.FirstQuery
GO
    DELETE FROM dbo.SecondQuery
-- ROLLBACK

/* Session 2 - run second */
BEGIN TRANSACTION
    DELETE FROM dbo.FirstQuery
The blocked process report shows only the current input buffer of each session:

/* blocked process (session 2) */
BEGIN TRANSACTION
    DELETE FROM dbo.FirstQuery

/* blocking process (session 1) - only its most recent batch */
    DELETE FROM dbo.SecondQuery
-- ROLLBACK
I can only see the currently running batch in the input buffer. I could find the held-lock info in sys.dm_tran_locks, but if the blocking is short, I won't capture it in time.
; -- Previous statement must be properly terminated 
WITH LockInfo
AS
(
    SELECT
        dtl.request_session_id
	  , dtl.resource_type
      , dtl.resource_description
      , dtl.resource_associated_entity_id
      , dtl.request_mode
      , dtl.request_status
    FROM 
        sys.dm_tran_locks AS dtl
)
SELECT 
	*
FROM LockInfo AS blcked
JOIN LockInfo AS blcker
    ON blcker.request_session_id = 121 /* blocker session ID */
    AND blcker.resource_type = blcked.resource_type
    AND blcker.resource_description = blcked.resource_description
    AND blcker.resource_associated_entity_id = blcked.resource_associated_entity_id
WHERE 
    blcked.request_session_id = 125 /* blocked session ID */ 
    AND blcked.request_status = N'WAIT'
Is there any automatic and efficient way to find which statement is causing the blocking and which locks are being held? I hope to find anything that helps me track down the earlier blocker from session 1 (in this case, DELETE FROM dbo.FirstQuery) in the code base, or to set up additional monitoring:

- TSQL stack
- Query hash
- ObjectId (if part of a procedure)
- sql_text

I can probably infer the held locks from the statement.
Zikato (5724 rep)
Sep 22, 2022, 10:18 AM • Last activity: Jun 3, 2025, 06:56 PM
2 votes
1 answer
1657 views
MySQL innodb_row_lock_time is more than wall clock time
We recently started graphing the **innodb_row_lock_time** status variable in our monitoring system. I noticed something I can't understand on a graph for a busy server: Between 04:55PM and 04:57PM, the value of this counter changed from **488,101,000** to **488,504,000**. The MySQL documentation says: > innodb_row_lock_time > The total time spent in acquiring row locks, in milliseconds. How is it possible that during those 2 minutes, the server spent **403,000 ms** in acquiring row locks? 2 minutes is only 120,000 ms The only explanation that comes to mind is that this value is a sum of all threads but I cannot find any documentation to confirm this theory. The server is MySQL 5.1
bodgix (121 rep)
Apr 28, 2018, 06:04 PM • Last activity: Apr 9, 2025, 08:09 PM
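The suspicion at the end of the question is correct: the counter accumulates lock-wait time across all sessions, so several sessions waiting simultaneously each contribute in parallel and the total can outpace wall-clock time. The figures from the question even imply roughly how many sessions were waiting concurrently:

```python
# innodb_row_lock_time sums lock-wait milliseconds across *all* sessions,
# so dividing its growth by elapsed wall-clock time estimates the average
# number of sessions stuck in row-lock waits over the interval.
delta_lock_ms = 488_504_000 - 488_101_000   # 403,000 ms accumulated
wall_ms = 2 * 60 * 1000                     # 120,000 ms of wall-clock time
avg_waiting_sessions = delta_lock_ms / wall_ms
print(round(avg_waiting_sessions, 2))       # 3.36
```

So during those 2 minutes, on average about 3.4 sessions were simultaneously waiting on row locks.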
0 votes
1 answer
25 views
Can you 'monitor' curl/ HTTP requests to Snowflake API to see issues?
I'm using a 3rd-party tool that can 'connect' to the Snowflake API but is throwing errors. They are translating/botching something on their end, as I can get the Snowflake API easily working with about 5 other tools. As a Snowflake admin, can I see the 'failed curl' or whatever text is being sent to Snowflake in raw form, like the raw HTTPS/curl request? Would this be a monitor/trace of some kind? (I'm not the main admin.)
user45867 (1739 rep)
Mar 3, 2025, 04:32 PM • Last activity: Apr 1, 2025, 06:45 PM
-1 votes
1 answer
32 views
Collect OS metrics in Idera
I use Idera for monitoring database servers, but recently I noticed that some servers (not all of them) are not monitoring the disk part properly. The error I get is:

TITLE: Monitored SQL Server Properties
------------------------------
The connection test failed.
------------------------------
ADDITIONAL INFORMATION:
Error validating WMI connection:
------------------------------
Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)) (mscorlib)
------------------------------

The tests I have executed are:

- Restart the winmgmt service
- Check the integrity of the repository
- Modify the WMI rules in the firewall

It is important to mention that the Idera connection is made with a SQL Server user who has sysadmin permissions. Could you advise me about the permissions or configuration that could be missing, please?
Carolina (47 rep)
Mar 27, 2025, 07:24 PM • Last activity: Mar 27, 2025, 08:50 PM
2 votes
1 answer
122 views
pg_stat_statements displays only 9 useless queries despite tens of queries being executed per second
In PostgreSQL v15.10 I have installed the pg_stat_statements extension, and \dx shows it is at version 1.10. I restarted the Postgres cluster after installing the extension and configuring its settings. I reset statistics with: SELECT pg_stat_statements_reset(); Looking into active transactions: SELECT query FROM pg_stat_activity WHERE state = 'active'; I see a few tens of SQL statements (SELECT, UPDATE...) executed per second. But when I check the executed SQL: SELECT query FROM pg_stat_statements; all I get is only 9 rows:

query
-------------------------------------------------
SET extra_float_digits = 3
COMMIT
SET application_name = 'backend-prod'
ROLLBACK
SET application_name = 'PostgreSQL JDBC Driver'
BEGIN
SET application_name = 'black-prod'
show all
SHOW TRANSACTION ISOLATION LEVEL

It looks like SELECT, UPDATE, DELETE and INSERT statements are not recorded at all. QUESTION: Why are these SQL statements not recorded? A little diagnosis: 1. Display extensions: \dx
List of installed extensions                                                    
        Name        | Version |   Schema   |                                      Description                                      
--------------------+---------+------------+---------------------------------------------------------------------------------------
 pg_stat_statements | 1.10    | public     | track planning and execution statistics of all SQL statements executed                
 pgstattuple        | 1.5     | public     | show tuple-level statistics                                                           
 plpgsql            | 1.0     | pg_catalog | PL/pgSQL procedural language                                                          
 timescaledb        | 2.18.0  | public     | Enables scalable inserts and complex queries for time-series data (Community Edition)
2. List extension settings: psql -c "show all" | grep pg_stat_statements
pg_stat_statements.max                                  | 5000                              
pg_stat_statements.save                                 | on                                
pg_stat_statements.track                                | top                               
pg_stat_statements.track_planning                       | off                               
pg_stat_statements.track_utility                        | on                                
shared_preload_libraries                                | timescaledb,pg_stat_statements
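One setting worth checking that is not shown in the output above (so this is an assumption, not a confirmed diagnosis): pg_stat_statements can only record normal statements when query identifiers are computed. On PostgreSQL 14+, if compute_query_id is off and no other module computes IDs, only utility statements (SET, BEGIN, COMMIT, SHOW, ...) are tracked via track_utility, which is exactly the nine rows seen here. A minimal diagnostic sketch:

```sql
-- Check whether query IDs are being computed; 'auto' or 'on' is
-- required for pg_stat_statements to track SELECT/INSERT/UPDATE/DELETE.
SHOW compute_query_id;

-- If it is 'off', enable it and reload the configuration (no restart needed):
ALTER SYSTEM SET compute_query_id = 'on';
SELECT pg_reload_conf();
```

If compute_query_id already reads auto or on, this is not the cause and the problem lies elsewhere.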
folow (523 rep)
Feb 27, 2025, 08:32 AM • Last activity: Mar 5, 2025, 07:11 AM
0 votes
1 answers
3013 views
Replication Monitor Missing Subscriptions in Watch List
In Replication Monitor, for one of our publications (transactional replication), none of the subscriptions are showing up in the Subscription Watch List. This is new behavior that just started recently. The other publications associated with this distributor don't exhibit this behavior, the subscrip...
In Replication Monitor, for one of our publications (transactional replication), none of the subscriptions are showing up in the Subscription Watch List. This is new behavior that started recently. The other publications associated with this distributor don't exhibit this behavior; their subscriptions are visible under the Subscription Watch List tab. I can reproduce similar behavior using the sp_replmonitorhelpsubscription stored procedure. When executing it with the @publisher that returns nothing in Replication Monitor, no results are returned; but with a @publisher that does show subscriptions, results are returned. I've confirmed in the subscription database for the affected publisher that transactional replication is functioning as expected: commands are being delivered and row counts are in sync. Is there anything that can be done to get the subscriptions to appear again for the affected publisher? The only thing I can think of is deleting the subscription and publication and starting over, since some metadata must be missing, but that requires downtime and seems unnecessary. Version: Microsoft SQL Server 2008 R2 (SP2) - 10.50.4000.0
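Before dropping anything, it may be worth forcing the distributor's monitoring cache to rebuild, since the watch list is driven by cached metadata in the distribution database. A sketch (the publisher name is a placeholder; sp_replmonitorrefreshjob is the documented cache-refresh procedure):

```sql
-- Run on the distributor, in the distribution database.
-- Forces one synchronous refresh of the Replication Monitor cache:
EXEC sp_replmonitorrefreshjob @iterations = 1;

-- Then re-check what the monitor sees for the affected publisher:
EXEC sp_replmonitorhelpsubscription
    @publisher = 'YourPublisher',   -- hypothetical publisher name
    @publication_type = 0;          -- 0 = transactional
```

If the cache refresh brings the subscriptions back, no teardown is needed.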
slachterman (373 rep)
Dec 21, 2013, 05:50 PM • Last activity: Feb 21, 2025, 02:04 PM
2 votes
2 answers
1571 views
Could you monitor all changes made within a postgresql database by somehow subscribing to the replication log?
I've been reading about postgresql replication and was wondering if it was somehow possible to listen to the replication binary log and push that to some tool like sumologic or splunk so that any changes made could be monitored without having to build update triggers on every table. Is this possible...
I've been reading about postgresql replication and was wondering if it was somehow possible to listen to the replication binary log and push that to some tool like sumologic or splunk so that any changes made could be monitored without having to build update triggers on every table. Is this possible?
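Yes — this is what logical decoding (PostgreSQL 9.4+) provides. A minimal sketch using the built-in test_decoding output plugin; it assumes wal_level is set to logical, and the slot name is arbitrary:

```sql
-- Requires wal_level = logical in postgresql.conf (restart needed).
-- Create a logical replication slot with the built-in test_decoding plugin:
SELECT pg_create_logical_replication_slot('audit_slot', 'test_decoding');

-- Read (and consume) the change stream; every committed
-- INSERT/UPDATE/DELETE across all tables appears here:
SELECT * FROM pg_logical_slot_get_changes('audit_slot', NULL, NULL);

-- Drop the slot when done, or WAL will be retained indefinitely:
SELECT pg_drop_replication_slot('audit_slot');
```

In practice a consumer such as pg_recvlogical or Debezium would read the slot continuously and forward the stream to Splunk or Sumo Logic, with no triggers required.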
slim (121 rep)
Dec 4, 2019, 06:23 PM • Last activity: Jan 1, 2025, 06:15 PM
-2 votes
1 answers
266 views
How can I figure if a query is CPU-bound or I/O-bound?
I have a query that runs regularly. It runs in under a second, but I consider it too slow and I would like to tune it. I can either monitor it as it runs live or copy the query text and run it myself. The query plan is a classic "[Wide, per-index updates](https://www.sql.kiwi/2013/01/optimizing-t-sq...
I have a query that runs regularly. It runs in under a second, but I consider it too slow and I would like to tune it. I can either monitor it as it runs live or copy the query text and run it myself. The query plan is a classic "[Wide, per-index updates](https://www.sql.kiwi/2013/01/optimizing-t-sql-queries-that-change-data/#1.-wide%2C-per-index-updates) " plan. It is almost identical to the one below, with the only difference being that I have inserts rather than deletes. *(query plan image omitted)* I am sure that my bottleneck is during the index-modifying steps. It is either due to CPU usage (the indexes are PAGE compressed) or the reads required to insert into the indexes (they are billions of rows). But I would like to know which it is! I can easily find the total time the query runs for as well as the time spent on CPU (sp_WhoIsActive and SET STATISTICS TIME ON both present this readily), but how can I specifically find the **time** spent on logical reads during this query? Ideally, I would specifically like to know the time spent on logical reads during the index modification steps. SQL Server 2022. Compatibility level 140.
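One way to split the time is session-level wait stats: PAGEIOLATCH_* waits approximate time blocked on physical page reads, while logical reads from cache consume CPU and show up as CPU time rather than waits. A sketch, assuming the statement is run from a dedicated session (sys.dm_exec_session_wait_stats is available from SQL Server 2016 onward):

```sql
-- Snapshot this session's waits, run the statement, then diff.
SELECT wait_type, wait_time_ms
INTO #before
FROM sys.dm_exec_session_wait_stats
WHERE session_id = @@SPID;

-- <run the INSERT being tuned here>

-- Large PAGEIOLATCH_* deltas => I/O-bound; if waits are small but
-- elapsed time ~= CPU time, the work (e.g. PAGE decompression) is CPU-bound.
SELECT w.wait_type,
       w.wait_time_ms - ISNULL(b.wait_time_ms, 0) AS wait_ms_delta
FROM sys.dm_exec_session_wait_stats AS w
LEFT JOIN #before AS b ON b.wait_type = w.wait_type
WHERE w.session_id = @@SPID
  AND w.wait_time_ms - ISNULL(b.wait_time_ms, 0) > 0
ORDER BY wait_ms_delta DESC;
```

Note that purely logical reads never wait on I/O, so "time spent on logical reads" is CPU time by definition; only the physical-read portion surfaces as waits.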
J. Mini (1237 rep)
Dec 12, 2024, 07:45 PM • Last activity: Dec 24, 2024, 11:27 PM
0 votes
0 answers
42 views
Is there any top-similar command to monitor mysql processes?
I would like to know if exists any command to live monitor processes of a mysql server. I currently use `show processlist;` but this is a mysql command (not a bash command) and it doesn't refresh automatically. Is there any tool for this purpose? I need it to check long queries and locks.
I would like to know whether there is any command to live-monitor the processes of a MySQL server. I currently use show processlist;, but this is a MySQL command (not a bash command) and it doesn't refresh automatically. Is there any tool for this purpose? I need it to check long queries and locks.
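There are purpose-built curses tools for this (mytop, innotop), but a plain query over information_schema can also be refreshed externally, for example with watch from bash. A sketch (the 5-second threshold is an arbitrary choice):

```sql
-- Long-running statements, longest first; tune the threshold to taste.
SELECT id, user, host, db, time, state, LEFT(info, 80) AS query
FROM information_schema.PROCESSLIST
WHERE command <> 'Sleep'
  AND time > 5
ORDER BY time DESC;
```

Wrapped as watch -n1 'mysql -e "..."' this behaves like a poor man's top; for lock investigation specifically, SHOW ENGINE INNODB STATUS or the sys.innodb_lock_waits view (MySQL 5.7+) is usually more informative.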
Tobia (211 rep)
Dec 21, 2024, 03:48 PM
5 votes
4 answers
1077 views
Recommendation for free event monitoring software
I'm seeking recommendations for free / open event monitoring tools. HPOV costs continue to multiply like so many wet gremlins, and we've had enough. There are some interesting suggestions in [this article][1] and the subsequent comments, but before I set about trying many of these I'm sure you all h...
I'm seeking recommendations for free / open event monitoring tools. HPOV costs continue to multiply like so many wet gremlins, and we've had enough. There are some interesting suggestions in this article and the subsequent comments, but before I set about trying many of these I'm sure you all have found a few that you like that may or may not be in this list. There is also this thread, but we're really shooting for a no-cost solution. The primary requirements are:

1. Perform a basic ping test.
2. Check the up-state of the SQL Server service.
3. Send an email alert if 1 or 2 fail.
4. Works as far back as SQL Server 2005 to present. (This is where PBM comes up a bit short.)

Additional desired-not-required functionality would be:

1. Ability to pass a synthetic transaction, which if failed triggers an alert.
2. Ability to measure said transaction for duration, not just completion, and alert if a threshold is exceeded.
3. Alert on other measurable (logged) events.

Thanks much for any recommendations! Edit: Currently investigating SpiceWorks
Eric Higgins (2689 rep)
Aug 13, 2012, 03:12 AM • Last activity: Nov 21, 2024, 11:46 AM
2 votes
2 answers
230 views
Azure MS SQL: grant sys.* and msdb.dbo.* select permissions for Grafana monitoring
I'm trying to monitor Azure MS SQL databases using [this Grafana dasboard](https://grafana.com/grafana/dashboards/21378-microsoft-sql-server-dashboard/). It executes several queries to tables whose names start with sys and msdb.dbo. I've created a dedicated `grafana` user for this and tried adding t...
I'm trying to monitor Azure MS SQL databases using [this Grafana dashboard](https://grafana.com/grafana/dashboards/21378-microsoft-sql-server-dashboard/) . It executes several queries against tables whose names start with sys and msdb.dbo. I've created a dedicated grafana user for this and tried adding the user to the db_datareader role, as well as GRANT SELECT ON SCHEMA :: sys TO [grafana]; and GRANT VIEW DATABASE STATE TO [grafana];, but I'm still getting permission errors. Trying to directly grant select on specific tables (e.g. GRANT SELECT ON sys.dm_exec_query_stats TO grafana;) in the master database, besides being tedious, seems not to be allowed even for the server admin user, as I get SQL Error [S0001]: Permissions for system stored procedures, server scoped catalog views, and extended stored procedures cannot be changed in this version of SQL Server.. What's the right way of doing this?
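On Azure SQL Database the permission model differs from on-prem SQL Server: GRANT VIEW SERVER STATE is not available, and VIEW DATABASE STATE must be granted separately in each database the dashboard queries (granting it in master only covers master's DMVs). A sketch, assuming a contained database user named grafana and a placeholder password:

```sql
-- Run in EACH user database to be monitored
-- (and in master, if the dashboard queries master's DMVs):
CREATE USER [grafana] WITH PASSWORD = 'StrongPasswordHere!';  -- contained user
GRANT VIEW DATABASE STATE TO [grafana];
```

One caveat worth noting: that dashboard was built for on-prem SQL Server, and msdb does not exist in Azure SQL Database (only in Managed Instance), so its msdb.dbo.* panels cannot be fixed by permissions alone — they would need to be removed or rewritten.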
barmanthewise (141 rep)
Nov 12, 2024, 07:52 AM • Last activity: Nov 13, 2024, 12:39 AM