Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
2
votes
1
answers
1861
views
MariaDB Inner join is slow
Here is my SQL (with explain):

EXPLAIN SELECT * FROM GBI gbi INNER JOIN VF_KRED kred ON gbi.vendor = kred.vendor;

The outcome is below:

[screenshot of the EXPLAIN output]

Now, the `select` SQL takes ~10 seconds to produce the outcome.
Below are the indexes on the tables:
- VF_KRED table:


Akshay Lokur
(121 rep)
Jun 25, 2019, 04:06 PM
• Last activity: Aug 5, 2025, 04:05 PM
0
votes
1
answers
150
views
DB2 Workfile scan
We just migrated from LUW to z/OS, and we're finding that on z/OS the access-path explains show a new node named WFSCAN (workfile scan); it appears whenever there is a join, a GROUP BY, or any kind of sort.
The question is: is there any performance issue with WFSCAN? Should I worry about the size of the pool it uses, or about any other costs, in a production environment?
What should I do to make workfile scans perform better?
Hana Bzh
(101 rep)
Oct 17, 2018, 09:35 AM
• Last activity: Aug 2, 2025, 07:09 AM
0
votes
2
answers
145
views
How to check the source of analysis task in Oracle Performance Analyzer?
We have provided a SQL tuning set as input to the analysis task of Performance Analyzer:
variable l_task_id VARCHAR2(64);
exec :l_task_id := dbms_sqlpa.create_analysis_task(sqlset_name => '', sqlset_owner => '', task_name => '');
Now I want to list all the tasks associated with the SQL tuning set. I have tried the query below, but it is not working:
select parameter_name,parameter_value
from dba_advisor_exec_parameters
where task_name='' and parameter_name='SQLSET_NAME';
But the result shows "UNUSED" in the parameter_value column:

| parameter_name | parameter_value |
| -------------- | --------------- |
| SQLSET_NAME    | UNUSED          |
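One hedged way to list which tasks reference the tuning set (a sketch; 'MY_STS' is a hypothetical name, and whether SQLSET_NAME appears at task level or only per execution may vary by version): DBA_SQLSET_REFERENCES records the advisor clients holding a reference on an STS, and DBA_ADVISOR_PARAMETERS holds task-level parameters, as opposed to the per-execution DBA_ADVISOR_EXEC_PARAMETERS.

```sql
-- Which clients/tasks hold a reference on the tuning set ('MY_STS' is hypothetical):
SELECT sqlset_name, sqlset_owner, owner, description
FROM   dba_sqlset_references
WHERE  sqlset_name = 'MY_STS';

-- Task-level parameters may be visible here rather than in the per-execution view:
SELECT task_name, parameter_name, parameter_value
FROM   dba_advisor_parameters
WHERE  parameter_name = 'SQLSET_NAME'
AND    parameter_value <> 'UNUSED';
```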
pavankumar
(1 rep)
Jul 1, 2021, 09:04 AM
• Last activity: Jul 20, 2025, 06:06 PM
3
votes
1
answers
110
views
Query performance guidance
I am working on a query that runs for reports and causes timeouts, as it tends to run just over 30 seconds. I feel the UDFs are causing the issue, but I am not sure what would get me a real gain in performance.
I have put the plan at:
https://www.brentozar.com/pastetheplan/?id=tUOdeVhykH
for review.
Any thoughts or help would be great.
ALTER function [dbo].[ISM_fnGetUID_TS] ()
returns @Results table
(
UserID int --primary key
)
as -- body of the function
begin
insert @Results
select dbo.ISM_fnGetUID_Lite()
return
end
ALTER FUNCTION [dbo].[ISM_fnGetUID_Lite] ()
RETURNS int
AS
BEGIN
return (
select UserID = case
when app_name() like '%@App' then cast(left(app_name(), charindex('@', app_name()) - 1) as int)
else coalesce(
convert(smallint,substring(context_info(),1,2)),
(select UserID from dbo.TS_User u where (u.UserName = system_user and u.Disabled = 0)))
end
)
END
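For what it's worth, multi-statement table functions like ISM_fnGetUID_TS above carry a fixed cardinality guess and hide cost from the plan. One commonly suggested rewrite (a sketch, untested against this schema; the function name is made up) is an inline table-valued function, which the optimizer can expand into the calling query:

```sql
CREATE OR ALTER FUNCTION dbo.ISM_fnGetUID_TS_Inline ()  -- hypothetical name
RETURNS TABLE
AS
RETURN
(
    SELECT UserID = CASE
        WHEN APP_NAME() LIKE '%@App'
            THEN CAST(LEFT(APP_NAME(), CHARINDEX('@', APP_NAME()) - 1) AS int)
        ELSE COALESCE(
            CONVERT(smallint, SUBSTRING(CONTEXT_INFO(), 1, 2)),
            (SELECT u.UserID FROM dbo.TS_User u
             WHERE u.UserName = SYSTEM_USER AND u.Disabled = 0))
    END
);
```

Callers would select from or CROSS APPLY this function exactly as before; the difference is that the body is inlined instead of being materialized into a table variable on every call.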
Garry Bargsley
(603 rep)
Jul 16, 2025, 12:42 PM
• Last activity: Jul 18, 2025, 12:31 PM
0
votes
1
answers
3273
views
Tuning MySQL 8.0.33 - increasing innodb_buffer_pool_size not working?
I'm trying to tune MySQL 8.0.33 on Ubuntu 22.04 (32G RAM, 8 cores). (The same machine is also a web server, with 4 apache/php-fpm virtual hosts running.)
I'm not sure if this is the right thing to do, but I used mysqlslap to compare the performance before and after the config alterations.
As advised by mysqltuner, I raised the innodb_buffer_pool_size and innodb_redo_log_capacity values, but it seems that the performance hasn't changed (the average time to run the mysqlslap queries was 0.386 seconds before, and 0.387 seconds after, the configuration changes).
Shouldn't the queries be faster with an innodb_buffer_pool_size almost 40x bigger? What's the correct way to measure the before/after performance?
What am I doing wrong? Any advice?
This was my original /etc/mysql/mysql.conf.d/mysqld.cnf:
[mysqld]
user = mysql
bind-address = *
mysqlx-bind-address = 127.0.0.1
key_buffer_size = 8M
max_allowed_packet = 128M
myisam-recover-options = BACKUP
log_error = /var/log/mysql/error.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 2
max_binlog_size = 100M
log_timestamps=SYSTEM
skip_name_resolve=ON
I tested using mysqlslap:
mysqlslap --concurrency=100 --iterations=20 --number-int-cols=30 --number-char-cols=30 --auto-generate-sql --auto-generate-sql-execute-number=10
That resulted in:
Benchmark
Average number of seconds to run all queries: 0.386 seconds
Minimum number of seconds to run all queries: 0.345 seconds
Maximum number of seconds to run all queries: 0.531 seconds
Number of clients running queries: 100
Average number of queries per client: 10
Then I ran ./mysqltuner.pl, with the following results:
>> MySQLTuner 2.2.7
* Jean-Marie Renouard
* Major Hayden
>> Bug reports, feature requests, and downloads at http://mysqltuner.pl/
>> Run with '--help' for additional options and output filtering
[--] Skipped version check for MySQLTuner script
[--] Using mysql to check login
[OK] Operating on 64-bit architecture
-------- Storage Engine Statistics -----------------------------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MEMORY +MRG_MYISAM +MyISAM +PERFORMANCE_SCHEMA
[--] Data in InnoDB tables: 4.4G (Tables: 942)
[OK] Total fragmented tables: 0
[OK] Currently running supported MySQL version 8.0.33-0ubuntu0.22.04.4
-------- Log file Recommendations ------------------------------------------------------------------
[OK] Log file /var/log/mysql/error.log exists
[--] Log file: /var/log/mysql/error.log (986B)
[OK] Log file /var/log/mysql/error.log is not empty
[OK] Log file /var/log/mysql/error.log is smaller than 32 MB
[!!] Log file /var/log/mysql/error.log isn't readable.
-------- Analysis Performance Metrics --------------------------------------------------------------
[--] innodb_stats_on_metadata: OFF
[OK] No stat updates during querying INFORMATION_SCHEMA.
-------- Views Metrics -----------------------------------------------------------------------------
-------- Triggers Metrics --------------------------------------------------------------------------
-------- Routines Metrics --------------------------------------------------------------------------
-------- Security Recommendations ------------------------------------------------------------------
[--] Skipped due to unsupported feature for MySQL 8.0+
-------- CVE Security Recommendations --------------------------------------------------------------
[OK] NO SECURITY CVE FOUND FOR YOUR VERSION
-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 30m 4s (74K q [41.427 qps], 2K conn, TX: 14G, RX: 62M)
[--] Reads / Writes: 78% / 22%
[--] Binary logging is enabled (GTID MODE: OFF)
[--] Physical Memory : 31.3G
[--] Max MySQL memory : 704.7M
[--] Other process memory: 0B
[--] Total buffers: 168.0M global + 1.9M per thread (151 max threads)
[--] Performance_schema Max memory usage: 248M
[--] Galera GCache Max memory usage: 0B
[OK] Maximum reached memory usage: 613.2M (1.91% of installed RAM)
[OK] Maximum possible memory usage: 704.7M (2.20% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (0/74K)
[OK] Highest usage of available connections: 68% (103/151)
[OK] Aborted connections: 0.00% (0/2704)
[--] Query cache has been removed since MySQL 8.0
[OK] Sorts requiring temporary tables: 4% (122 temp sorts / 2K sorts)
[!!] Joins performed without indexes: 159
[OK] Temporary tables created on disk: 0% (0 on disk / 1K total)
[!!] Thread cache hit rate: 31% (1K created / 2K connections)
[OK] Table cache hit rate: 94% (69K hits / 73K requests)
[OK] table_definition_cache (2000) is greater than number of tables (1271)
[OK] Open file limit used: 0% (3/10K)
[OK] Table locks acquired immediately: 100% (182 immediate / 182 locks)
[OK] Binlog cache memory access: 99.99% (15178 Memory / 15180 Total)
-------- Performance schema ------------------------------------------------------------------------
[--] Performance_schema is activated.
[--] Memory used by Performance_schema: 248.8M
[--] Sys schema is installed.
-------- ThreadPool Metrics ------------------------------------------------------------------------
[--] ThreadPool stat is disabled.
-------- MyISAM Metrics ----------------------------------------------------------------------------
[--] General MyIsam metrics:
[--] +-- Total MyISAM Tables : 0
[--] +-- Total MyISAM indexes : 0B
[--] +-- KB Size :8.0M
[--] +-- KB Used Size :1.5M
[--] +-- KB used :18.2%
[--] +-- Read KB hit rate: 0% (0 cached / 0 reads)
[--] +-- Write KB hit rate: 0% (0 cached / 0 writes)
[--] No MyISAM table(s) detected ....
-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDb Buffer Pool size ( 128.0M ) under limit for 64 bits architecture: (17179869184.0G )
[!!] InnoDB buffer pool / data size: 128.0M / 4.4G
[!!] Ratio InnoDB redo log capacity / InnoDB Buffer pool size (75%): 100.0M / 128.0M should be equal to 25%
[OK] InnoDB buffer pool instances: 1
[--] Number of InnoDB Buffer Pool Chunk: 1 for 1 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 99.95% (10397117 hits / 10402401 total)
[!!] InnoDB Write Log efficiency: 77.34% (100765 hits / 130280 total)
[OK] InnoDB log waits: 0.00% (0 waits / 29515 writes)
-------- Aria Metrics ------------------------------------------------------------------------------
[--] Aria Storage Engine not available.
-------- TokuDB Metrics ----------------------------------------------------------------------------
[--] TokuDB is disabled.
-------- XtraDB Metrics ----------------------------------------------------------------------------
[--] XtraDB is disabled.
-------- Galera Metrics ----------------------------------------------------------------------------
[--] Galera is disabled.
-------- Replication Metrics -----------------------------------------------------------------------
[--] Galera Synchronous replication: NO
[--] No replication slave(s) for this server.
[--] Binlog format: ROW
[--] XA support enabled: ON
[--] Semi synchronous replication Master: Not Activated
[--] Semi synchronous replication Slave: Not Activated
[--] This is a standalone server
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
MySQL was started within the last 24 hours: recommendations may be inaccurate
We will suggest raising the 'join_buffer_size' until JOINs not using indexes are found.
See https://dev.mysql.com/doc/internals/en/join-buffer-size.html
(specially the conclusions at the bottom of the page).
Be careful, increasing innodb_redo_log_capacity means higher crash recovery mean time
Variables to adjust:
join_buffer_size (> 256.0K, or always use indexes with JOINs)
thread_cache_size (> 9)
innodb_buffer_pool_size (>= 4.4G) if possible.
innodb_redo_log_capacity should be (=32M) if possible, so InnoDB Redo log Capacity equals 25% of buffer pool size.
On running ./tuning-primer.sh, the results were:
-- MYSQL PERFORMANCE TUNING PRIMER --
- By: Matthew Montgomery -
MySQL Version 8.0.33-0ubuntu0.22.04.4 x86_64
Uptime = 0 days 0 hrs 33 min 13 sec
Avg. qps = 40.26
Total Questions = 80240
Threads Connected = 1
Warning: Server has not been running for at least 48hrs.
It may not be safe to use these recommendations
To find out more information on how each of these
runtime variables effects performance visit:
http://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html
Visit https://github.com/BMDan/tuning-primer.sh for the latest version of
this script, or to suggest improvements.
SLOW QUERIES
The slow query log is enabled.
Current long_query_time = 2.000000 sec.
Since startup, 0 out of 80261 queries have taken longer than 2.000000 sec. to complete.
Your long_query_time seems reasonable.
BINARY UPDATE LOG
The binary update log is enabled
The expire_logs_days is not set.
The mysqld will retain the entire binary log until RESET MASTER or PURGE MASTER LOGS commands are run manually
Setting expire_logs_days will allow you to remove old binary logs automatically
See http://dev.mysql.com/doc/refman/8.0/en/purge-master-logs.html
WORKER THREADS
Current thread_cache_size = 9
Current threads_cached = 8
Current threads_per_sec = 0
Historic threads_per_sec = 0
Your thread_cache_size is fine
MAX CONNECTIONS
Current max_connections = 151
Current threads_connected = 1
Historic max_used_connections = 103
The number of used connections is 68% of the configured maximum.
Your max_connections variable seems to be fine.
INNODB STATUS
Current InnoDB index space = 749 M
Current InnoDB data space = 3.64 G
Current InnoDB buffer pool free = 25 %
Current innodb_buffer_pool_size = 128 M
Depending on how much space your innodb indexes take up it may be safe
to increase this value to up to 2 / 3 of total system memory
MEMORY USAGE
Max Memory Ever Allocated : 348 M
Configured Max Per-thread Buffers : 287 M
Configured Max Global Buffers : 152 M
Configured Max Memory Limit : 439 M
Physical Memory : 31.30 G
Max memory limit seem to be within acceptable norms
KEY BUFFER
No key reads. If you aren't using MyISAM, this is normal. If you are
using MyISAM, this is very, very bad.
Current MyISAM index space = 0 bytes
Current key_buffer_size = 8 M
Key cache miss rate is 1 : 0
Key buffer free ratio = 81 %
Your key_buffer_size seems to be fine
QUERY CACHE
Your server does not support the query cache. That's probably a good thing.
SORT OPERATIONS
Current sort_buffer_size = 256 K
Current read_rnd_buffer_size = 256 K
Sort buffer seems to be fine
JOINS
Current join_buffer_size = 260.00 K
You have had 187 queries where a join could not use an index properly
You should enable "log-queries-not-using-indexes"
Then look for non indexed joins in the slow query log.
If you are unable to optimize your queries you may want to increase your
join_buffer_size to accommodate larger joins in one pass.
Note! This script will still suggest raising the join_buffer_size when
ANY joins not using indexes are found.
OPEN FILES LIMIT
Current open_files_limit = 10000 files
The open_files_limit should typically be set to at least 2x-3x
that of table_cache if you have heavy MyISAM usage.
Your open_files_limit value seems to be fine
TABLE CACHE
Current table_open_cache = 4000 tables
Current table_definition_cache = 2000 tables
You have a total of 1090 tables
You have 2109 open tables.
The table_cache value seems to be fine
TEMP TABLES
Current max_heap_table_size = 16 M
Current tmp_table_size = 16 M
Of 1244 temp tables, 0% were created on disk
Created disk tmp tables ratio seems fine
TABLE SCANS
Current read_buffer_size = 128 K
Current table scan ratio = 84 : 1
read_buffer_size seems to be fine
TABLE LOCKING
Current Lock Wait ratio = 0 : 80525
Your table locking seems to be fine
Based on the mysqltuner and tuning-primer results, I added these lines at the end of /etc/mysql/mysql.conf.d/mysqld.cnf:
innodb_buffer_pool_size=5G
innodb_redo_log_capacity=1250M
Restarted with sudo systemctl restart mysql.service, and verified the values:
mysql> SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
+-------------------------+------------+
| Variable_name | Value |
+-------------------------+------------+
| innodb_buffer_pool_size | 5368709120 |
+-------------------------+------------+
1 row in set (0.00 sec)
mysql> SHOW VARIABLES LIKE 'innodb_redo_log_capacity';
+--------------------------+------------+
| Variable_name | Value |
+--------------------------+------------+
| innodb_redo_log_capacity | 1310720000 |
+--------------------------+------------+
1 row in set (0.01 sec)
And ran mysqlslap again:
Benchmark
Average number of seconds to run all queries: 0.387 seconds
Minimum number of seconds to run all queries: 0.349 seconds
Maximum number of seconds to run all queries: 0.552 seconds
Number of clients running queries: 100
Average number of queries per client: 10
Cintya
(1 rep)
Aug 11, 2023, 05:11 PM
• Last activity: Jul 16, 2025, 05:08 PM
0
votes
0
answers
33
views
LIKE Operator optimization String % customer %
In MySQL, I have a SELECT query that usually fetches data from a table of 4M records. I'm using the LIKE operator with a leading and trailing %, so it cannot use an index and reads all 4M records to fetch 2. How can I optimize it?
SELECT
r_ss.siteId,
r_ss.referrer,
r_ss.supplierhash,
min(r_ss.bid_floor) as bid_floor,
sum(r_ss.adCall_yesterday) as adCall_yesterday,
sum(r_ss.adCall_today) as adCall_today
FROM reporting.total_data_for_site r_ss
WHERE (r_ss.adCall_today > 1 OR r_ss.adCall_yesterday > 100)
AND CASE WHEN ( '1367,1397,1624' IS NOT NULL AND '1367,1397,1624' <> '' ) THEN r_ss.siteId IN ( SELECT * FROM tmp_site ) ELSE 1 = 1 END
AND r_ss.deviceid in (38,39,41)
AND r_ss.referrer LIKE CONCAT('%','watchfree', '%')
group by r_ss.siteId,r_ss.referrer ;
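A leading-wildcard LIKE ('%...%') can never seek a B-tree index, so some rewrite is needed. If the searched terms are whole words inside referrer, one hedged option is a FULLTEXT index (a sketch; the index name is made up, and fulltext search matches word tokens or word prefixes, not arbitrary substrings):

```sql
ALTER TABLE reporting.total_data_for_site
  ADD FULLTEXT INDEX ft_referrer (referrer);  -- hypothetical index name

-- A word-token search can use the fulltext index instead of scanning 4M rows:
SELECT siteId, referrer
FROM reporting.total_data_for_site
WHERE MATCH(referrer) AGAINST ('+watchfree' IN BOOLEAN MODE);
```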
Aravind
(11 rep)
Jul 16, 2025, 01:09 PM
1
votes
2
answers
153
views
Efficient approach for filtering, paging and sorting across multiple tables with multiple key columns
I have a query that retrieves data from a complex data structure involving multiple tables. The data can be filtered using a tree component that allows selection of different keys from different tables, sometimes filtering across two or three tables at once (for example filtering on OrganisationID and the LocationID within this organisation).
In addition to filtering, the data must be paged and sorted. I'm using a table view in the UI that allows sorting by a single selected column (only one ORDER BY column at a time).
The filtering targets specific columns from different tables. For example:
- OrganisationID (from the Organisation table)
- LocationID (from the Location table)
- FolderID (from the Folder table)
Sometimes the filter must apply across multiple of these keys simultaneously, for example filtering all related OrganisationIDs and LocationIDs and retrieving a paged result set.
**Current approach:**
I considered creating multiple indexed views based on these key columns, each with many indexes to support different sorting options. This would lead to about 6 views, each with around 26 indexes. Since indexes are pre-sorted, this does provide the ability to quickly filter and sort. But having such a large number of indexes on a single table, let alone on an indexed view spanning multiple tables, seems very contrary to indexing best practice.
**Question:**
What are efficient strategies to implement filtering, paging, and sorting across multiple fixed keys from different tables at once?
Here is a dbfiddle which illustrates what I am trying to describe.
It is a bit simplified, but it captures enough of the gist of what I am trying to accomplish.
**Additional Information:**
Filtering, sorting, and paging are all currently implemented. However, due to the nature of filtering and sorting across different tables, any sort operation that lacks an appropriate index severely impacts paging. Without an index, SQL Server must scan the entire filtered set to apply sorting and paging. For some filter combinations, this results in scanning over 500,000 rows.
Although most individual queries run reasonably fast, the system executes them frequently and concurrently for many users. This leads to significant memory grants, resulting in high memory pressure and a drastic drop in the Page Life Expectancy (PLE), affecting overall performance.
For the worst-case scenario I can find a query costing 166,233,968 logical reads.
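One strategy that caps the scan cost for deep pages, under the assumption that each sortable column can get an index leading with it, is keyset (seek-based) paging instead of OFFSET: each page seeks past the last row of the previous page. A sketch with hypothetical names (dbo.Folder, Name, FolderID), not the actual schema:

```sql
-- Assumes a hypothetical index:
-- CREATE INDEX IX_Folder_Org_Name ON dbo.Folder (OrganisationID, Name, FolderID);
DECLARE @OrgId int = 1, @LastName nvarchar(200) = N'Alpha', @LastId int = 42;

SELECT TOP (50) f.FolderID, f.Name
FROM dbo.Folder AS f
WHERE f.OrganisationID = @OrgId
  AND (f.Name > @LastName
       OR (f.Name = @LastName AND f.FolderID > @LastId))  -- resume after the previous page's last row
ORDER BY f.Name, f.FolderID;
```

The trade-off is that this only supports next/previous navigation efficiently, not jumping to an arbitrary page number, but the memory grant stays proportional to the page size rather than the filtered set.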
Sotem
(19 rep)
Jun 2, 2025, 11:53 AM
• Last activity: Jul 16, 2025, 07:51 AM
0
votes
1
answers
639
views
How to fix checkpoint requests problems?
On PostgreSQL v15, heavy inserts start in our database every morning at 5:15 AM and finish around 7:00 AM. I am trying to see if there are some parameters that can be tuned, like the checkpoint settings.
I have written a script that executes the following SQL:
SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter
to get "checkpoints timed" vs. "checkpoints requested"; in other words, to see whether requested checkpoints appear during the morning heavy inserts.
I have set the log_checkpoints=on parameter in postgresql.conf so that checkpoint info is written to the database server log file. I see the following related to checkpoints:
2024-09-09 05:22:34 LOG: checkpoint starting: time
2024-09-09 05:33:37 LOG: checkpoint complete: wrote 779881 buffers (8.9%); 0 WAL file(s) added, 483 removed, 664 recycled; write=652.802 s, sync=0.165 s, total=662.212 s; sync files=170, longest=0.059 s, average=0.001 s; distance=12326973 kB, estimate=12326973 kB
2024-09-09 05:34:27 LOG: checkpoint starting: wal
2024-09-09 05:48:27 LOG: checkpoint complete: wrote 1818364 buffers (20.7%); 0 WAL file(s) added, 0 removed, 1009 recycled; write=837.323 s, sync=0.097 s, total=839.269 s; sync files=175, longest=0.024 s, average=0.001 s; distance=16533226 kB, estimate=16533226 kB
2024-09-09 05:50:24 LOG: checkpoint starting: wal
2024-09-09 06:06:02 LOG: checkpoint complete: wrote 1302347 buffers (14.8%); 0 WAL file(s) added, 0 removed, 1010 recycled; write=938.102 s, sync=0.068 s, total=938.629 s; sync files=301, longest=0.011 s, average=0.001 s; distance=16547328 kB, estimate=16547328 kB
2024-09-09 06:08:20 LOG: checkpoint starting: wal
2024-09-09 06:35:20 LOG: checkpoint complete: wrote 1015915 buffers (11.6%); 0 WAL file(s) added, 0 removed, 880 recycled; write=1619.413 s, sync=0.038 s, total=1619.779 s; sync files=488, longest=0.020 s, average=0.001 s; distance=16548651 kB, estimate=16548651 kB
2024-09-09 06:38:20 LOG: checkpoint starting: time
2024-09-09 07:05:20 LOG: checkpoint complete: wrote 320834 buffers (3.7%); 0 WAL file(s) added, 0 removed, 376 recycled; write=1619.469 s, sync=0.004 s, total=1619.906 s; sync files=755, longest=0.001 s, average=0.001 s; distance=13792920 kB, estimate=16273078 kB
Throughout the day and night I see _checkpoint starting: time_ events (just like in the first line); I assume those are the _checkpoints timed_ events. The exception is at 5:34, 5:50 and 6:08 (see the log above), where _checkpoint starting: wal_ appears three times in sequence; I assume those are the _checkpoints requested_ events. According to theory, those events slow down database performance and should be avoided.
If we look at line 4, there is:
- 1818364 buffers. If multiplied by 8192 (block_size) and divided by 67 * 1024 * 1024 * 1024 (shared_buffers) and we get 20.7%
- 20.7% is the percentage of memory buffers changed (dirty blocks from checkpoint starting point to checkpoint completed events) that are requested to be saved by checkpoint to $PGDATA/base/"database"/ Data Files.
- 0 WAL file(s) added: There was no need to add additional WAL files, because all are already added (according to max_wal_size parameter)
- 1009 recycled - number of old WAL files renamed and reused
- write=837.323 s - is 13 min and 57 seconds - time how long checkpoint needed to complete (also the time from "checkpoint starting: wal" to "checkpoint complete")
- sync=0.097 s - probably some disk fsync or similar (negligible in our case, because of very small number)
- total=839.269 s - probably "write + sync + 'something'"
- sync files=175 - is number of Data Files written by checkpoint in $PGDATA/base/"database"/ directory from checkpoint "start" to "complete".
- longest=0.024 s - maximal time for individual Data File to be written
- average=0.001 s - average time for Data File to be written
Now there are two values whose meaning I don't understand:
- distance=16533226 kB - 15.7 GB
- estimate=16533226 kB - 15.7 GB
QUESTIONS:
1. What do distance/estimate mean?
2. Why, in my case, are requested checkpoints being fired?
3. Is increasing max_wal_size recommended? If yes, how do I calculate the new value?
4. Is some other change recommended, like decreasing checkpoint_completion_target to e.g. 0.8?
PostgreSQL instance settings:
shared_buffers: 67GB
wal_buffers: 32MB
full_page_writes: on
max_wal_size: 30GB
checkpoint_timeout: 30min
checkpoint_completion_target: 0.9
min_wal_size: 512MB
block_size: 8192
wal_segment_size: 16MB
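For question 3, a rough sizing sketch (a rule of thumb, not an official formula): the 05:48 log line shows distance=16533226 kB of WAL over total=839 s, and for timed checkpoints to remain the norm, max_wal_size should comfortably exceed the WAL produced per checkpoint_timeout (30 min here). The arithmetic below puts that volume above the current 30GB:

```sql
-- WAL rate and volume per checkpoint_timeout, from the 05:48 checkpoint line
SELECT 16533226 / 1024.0 / 839                  AS wal_mb_per_sec,   -- ~19 MB/s during the insert window
       16533226 / 1024.0 / 839 * 1800 / 1024.0  AS wal_gb_per_30min; -- ~34 GB, already above max_wal_size = 30GB
```

So with ~34 GB of WAL per 30-minute timeout, the wal-triggered checkpoints in the log are expected; raising max_wal_size to roughly 2x that volume is a common starting point, at the cost of a longer crash recovery time.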
Regards
folow
(523 rep)
Sep 9, 2024, 12:29 PM
• Last activity: Jul 13, 2025, 10:03 AM
0
votes
1
answers
165
views
Is this Parameter Sniffing?
I have a particular stored procedure that takes 1 parameter.
This proc is executed once a day with the **same parameter** value every day. The row count is also the same with each run of the proc.
Day 1 - Runs for 10 minutes.
Day 2 - Runs for 8 hours.
Each run produces the exact same query plan.
In an attempt to remedy this, I added **OPTION (RECOMPILE)** to the problematic statement and it seems to have fixed it. It's been 5 days and each run has been 10 minutes.
My understanding of bad parameter sniffing is when the cached query plan is not optimal for certain parameter values.
So, how do I actually define this problem?
What would cause this drastic change in runtimes if the query plan is exactly the same and the parameter value is the same?
Edit:
Query Store is enabled on the database. Using this, I was able to determine that the query plans are identical between good and bad runs.
This is the part of the query plan that does the bulk of the work.
Thanks

Sicilian-Najdorf
(381 rep)
Nov 6, 2023, 08:01 AM
• Last activity: Jul 12, 2025, 04:03 PM
0
votes
1
answers
154
views
Understanding the REF column in MySQL EXPLAIN
I have a few questions about the REF column in the EXPLAIN output. MySQL version: 5.7.31 Community Server.
The table structure:
CREATE TABLE `table1` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `cp_id` bigint(20) NOT NULL,
  `ms_id` varchar(32) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
  `bu_id` int(8) DEFAULT NULL,
  `create_time` datetime DEFAULT NULL,
  `update_time` datetime DEFAULT NULL,
  `status` int(8) DEFAULT NULL,
  `sync_time` date NOT NULL DEFAULT '1970-01-01',
  PRIMARY KEY (`id`),
  KEY `idx_update_time` (`update_time`),
  KEY `idx_ms_st_sy_cp` (`ms_id`,`status`,`sync_time`,`cp_id`)
) ENGINE=InnoDB AUTO_INCREMENT=40264018 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
explain select cp_id, max(sync_time) as sync_time from table1 where ms_id = '678645347824' and status = 1 and sync_time >= '2024-02-09' group by cp_id;
+----+-------------+--------+------------+-------+-----------------+-----------------+---------+------+------+----------+-----------------------------------------------------------+
| id | select_type | table  | partitions | type  | possible_keys   | key             | key_len | ref  | rows | filtered | Extra                                                     |
+----+-------------+--------+------------+-------+-----------------+-----------------+---------+------+------+----------+-----------------------------------------------------------+
|  1 | SIMPLE      | table1 | NULL       | range | idx_ms_st_sy_cp | idx_ms_st_sy_cp | 107     | NULL |  900 |   100.00 | Using where; Using index; Using temporary; Using filesort |
+----+-------------+--------+------------+-------+-----------------+-----------------+---------+------+------+----------+-----------------------------------------------------------+
1 row in set, 1 warning (0.00 sec)
In the first EXPLAIN plan, the ref column is NULL.
This article https://www.linkedin.com/pulse/mysql-explain-explained-gowrav-vishwakarma-%E0%A4%97-%E0%A4%B0%E0%A4%B5-%E0%A4%B5-%E0%A4%B6-%E0%A4%B5%E0%A4%95%E0%A4%B0-%E0%A4%AE-/ says that if the ref column is NULL, the index is not being used. The MySQL documentation says the same thing:
https://dev.mysql.com/doc/refman/5.7/en/explain-output.html#explain_ref
1. What exactly is the index doing here?
2. Is it only being used as a covering index? Using index in the Extra column tells me that the index is being used as a covering index (please correct me if I'm wrong).
3. The key_len of 107 tells me that all three columns (ms_id, status, sync_time) are being used. As various sources explain, key_len can be used to determine which columns are in play: 107 = 99 (ms_id) + 5 (status) + 3 (sync_time), which can also be derived from the EXPLAINs below. If key_len tells me that the index is being used, then why is the ref column NULL?
4. Why is the type RANGE? Does range access mean that only the sync_time column is being used as a range? Can I conclude so? key_len = 107 together with type = range is a bit confusing.
5. Are the columns type, key and ref related to each other? I read that key and ref are related and that ref shows the kind of index access, so they are probably somehow connected.
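The key_len arithmetic in question 3 can be checked directly (a sketch; it assumes the column character set is MySQL's 3-byte utf8/utf8mb3 as the DDL says, and that int(8) is still a 4-byte int, since (8) is only a display width):

```sql
SELECT 32 * 3 + 2 + 1         AS ms_id_len,     -- varchar(32) utf8: 3 bytes/char + 2 length bytes + 1 null flag = 99
       4 + 1                  AS status_len,    -- int is 4 bytes + 1 null flag = 5
       3                      AS sync_time_len, -- DATE is 3 bytes; NOT NULL, so no null flag
       (32*3+2+1) + (4+1) + 3 AS key_len;       -- 99 + 5 + 3 = 107
```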
explain select cp_id, max(sync_time) as sync_time from table1 where ms_id = '678645347824' group by cp_id;
+----+-------------+--------+------------+------+-----------------+-----------------+---------+-------+------+----------+-----------------------------------------------------------+
| id | select_type | table  | partitions | type | possible_keys   | key             | key_len | ref   | rows | filtered | Extra                                                     |
+----+-------------+--------+------------+------+-----------------+-----------------+---------+-------+------+----------+-----------------------------------------------------------+
|  1 | SIMPLE      | table1 | NULL       | ref  | idx_ms_st_sy_cp | idx_ms_st_sy_cp | 99      | const |  905 |   100.00 | Using where; Using index; Using temporary; Using filesort |
+----+-------------+--------+------------+------+-----------------+-----------------+---------+-------+------+----------+-----------------------------------------------------------+
1 row in set, 1 warning (0.00 sec)
In the second EXPLAIN, where I use only ms_id, the TYPE is REF and the REF column is CONST, meaning that the index is used with the equality operator, and the key_len is 99 for ms_id.
explain select cp_id, max(sync_time) as sync_time from table1 where ms_id = '678645347824' and status = 1 group by cp_id;
+----+-------------+--------------------+------------+------+-------------------------------------------------------------------+-----------------------------------------+----
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------------------+------------+------+-------------------------------------------------------------------+-----------------------------------------+----
| 1 | SIMPLE | table1 | NULL | ref | idx_ms_st_sy_cp | idx_ms_st_sy_cp | 104 | const,const | 905 | 100.00 | Using where; Using index; Using temporary; Using filesort |
+----+-------------+--------------------+------------+------+-------------------------------------------------------------------+-----------------------------------------+----
1 row in set, 1 warning (0.01 sec)
In the third EXPLAIN, the TYPE is REF and the REF column is CONST,CONST because of the two = operators in the WHERE clause, on ms_id and status.
explain select cp_id, max(sync_time) as sync_time from table1 where ms_id = '678645347824' and status = 1 and sync_time >= '2024-02-09' and cp_id = '1' group by cp_id;
+----+-------------+--------------------+------------+------+-------------------------------------------------------------------+---------+---------+-------------------+------
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+--------------------+------------+------+-------------------------------------------------------------------+---------+---------+-------------------+------
| 1 | SIMPLE | table1 | NULL | ref | idx_ms_st_sy_cp | idx_com | 112 | const,const,const | 1 | 33.33 | Using index condition; Using where |
+----+-------------+--------------------+------------+------+-------------------------------------------------------------------+---------+---------+-------------------+------
1 row in set, 1 warning (0.00 sec)
In the last EXPLAIN it gets even more interesting: the REF column shows CONST,CONST,CONST, meaning that the sync_time column is not being used. Am I correct?
Avinash Pawar
(216 rep)
Aug 9, 2024, 03:18 PM
• Last activity: Jul 12, 2025, 04:04 AM
0
votes
1
answers
168
views
Simple count of id in MySQL table is taking too long
I have two tables with 65.5 million rows:
1)
CREATE TABLE RawData1 (
cdasite varchar(45) COLLATE utf8_unicode_ci NOT NULL,
id int(20) NOT NULL DEFAULT '0',
timedate datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
type int(11) NOT NULL DEFAULT '0',
status int(11) NOT NULL DEFAULT '0',
branch_id int(20) DEFAULT NULL,
branch_idString varchar(64) COLLATE utf8_unicode_ci DEFAULT NULL,
PRIMARY KEY (id,cdasite,timedate),
KEY idx_timedate (timedate,cdasite)
) ENGINE=InnoDB;
2) Same table with partition (call it RawData2)
PARTITION BY RANGE ( TO_DAYS(timedate))
(PARTITION p20140101 VALUES LESS THAN (735599) ENGINE = InnoDB,
PARTITION p20140401 VALUES LESS THAN (735689) ENGINE = InnoDB,
.
.
PARTITION p20201001 VALUES LESS THAN (738064) ENGINE = InnoDB,
PARTITION future VALUES LESS THAN MAXVALUE ENGINE = InnoDB);
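The boundary values in the partition clauses above come from TO_DAYS(), which counts days from year 0; they can be sanity-checked outside MySQL (a sketch, using Python's year-1-based toordinal()):

```python
from datetime import date

def to_days(d):
    # MySQL's TO_DAYS() counts days from year 0; Python's toordinal() counts
    # from year 1, so the two differ by the 365 days of "year 0".
    return d.toordinal() + 365

print(to_days(date(2014, 1, 1)))   # 735599 -> PARTITION p20140101 boundary
print(to_days(date(2014, 4, 1)))   # 735689 -> PARTITION p20140401 boundary
print(to_days(date(2020, 10, 1)))  # 738064 -> PARTITION p20201001 boundary
```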
I'm using the same query:
SELECT count(id) FROM RawData1
where timedate BETWEEN DATE_FORMAT(date_sub(now(),INTERVAL 2 YEAR),'%Y-%m-01') AND now();
Two problems:
1. Why does the partitioned table run longer than the regular table?
2. The regular table returns 36380217 in 17.094 sec. Is that normal? All R&D leaders think it is not fast enough; it needs to return in ~2 sec.
What do I need to check / do / change? Is it realistic to scan 35732495 rows and retrieve a count of 36380217 in less than 3-4 sec?
Shay.R
(1 rep)
Nov 8, 2017, 10:07 AM
• Last activity: Jul 11, 2025, 08:05 AM
-3
votes
1
answers
371
views
slow postgres performance with simple querys
After a lot of reading I found multiple sources saying Postgres should be able to handle ~100 million rows.
I set up a simple table with:
CREATE TABLE trade
(
id INT PRIMARY KEY NOT NULL,
symbol VARCHAR,
open_time TIMESTAMP,
end_price INT
)
I have 12 million of these records.
My simple query of
SELECT * FROM trade WHERE symbol = x AND open_time > start_time AND open_time < end_time
This query always returns fewer than 1000 rows, yet it takes 1100 milliseconds. That seems like a lot for a simple table with 10x fewer rows than it should be able to handle.
I don't have any indexes yet, though, because I don't know what the best indexes for this table would be.
Is it possible to get this down to 100ms?
Any help writing a more performant query or database would be appreciated.
EDIT 1:
After reading the comments I put the following indexes on the table
CREATE INDEX open_time ON trade(open_time);
CREATE INDEX symbol ON trade(symbol);
CREATE INDEX end_price ON trade(end_price);
After adding these, the query time is 240 ms. Is this the best achievable?
Although I have noticed that when querying the first 40k rows the query time drops to 60 ms, and after that it rises back to 240 ms. What is causing this?
sir snibbles
(11 rep)
May 19, 2022, 07:53 PM
• Last activity: Jul 9, 2025, 02:43 PM
0
votes
1
answers
190
views
PostgreSQL: using stale statistics instead of current ones because stats collector is not responding
We are running PostgreSQL 13 on an Azure CentOS VM and found this problem in the logs, followed by some slow COMMIT and SET statements. The slow statement logs only lasted for a period of less than 20 sec. System metrics are normal before and during the problem, with a small spike in I/O wait of up to 4%.
> 2024-08-15 16:21:59.048 CST,,,33328,,62b10ea0.8230,14,,2022-06-21 08:19:44 CST,1/0,0,LOG,00000,"using stale statistics instead of current ones because stats collector is not responding",,,,,,,,,"","autovacuum launcher"
2024-08-15 16:22:09.203 CST,,,58821,,66bdbaa7.e5c5,1,,2024-08-15 16:21:59 CST,148/0,0,LOG,00000,"using stale statistics instead of current ones because stats collector is not responding",,,,,,,,,"","autovacuum worker"
2024-08-15 16:22:09.253 CST,"user_w","user_db",53133,"10.0.0.85:58698",66bdb747.cf8d,1,"COMMIT",2024-08-15 16:07:35 CST,46/0,0,LOG,00000,"duration: 21525.916 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.14:34356","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",48595,"10.0.0.68:33334",66bdb4d3.bdd3,1,"COMMIT",2024-08-15 15:57:07 CST,15/0,0,LOG,00000,"duration: 21383.608 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.18:36088","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",50680,"10.0.0.68:33714",66bdb5a9.c5f8,1,"COMMIT",2024-08-15 16:00:41 CST,25/0,0,LOG,00000,"duration: 20137.894 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.18:36400","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",42490,"10.0.0.68:60644",66bdb2d6.a5fa,1,"COMMIT",2024-08-15 15:48:38 CST,63/0,0,LOG,00000,"duration: 18201.579 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.18:36274","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",52468,"10.0.0.68:34266",66bdb6e0.ccf4,1,"COMMIT",2024-08-15 16:05:52 CST,30/0,0,LOG,00000,"duration: 20438.055 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.16:52796","client backend"
2024-08-15 16:22:09.269 CST,"user_w","user_db",55877,"10.0.0.52:47198",66bdb8e6.da45,2,"SET",2024-08-15 16:14:30 CST,57/0,0,LOG,00000,"duration: 3843.296 ms statement: SET application_name='app - 10.0.0.4:38932';",,,,,,,,,"app - 10.0.0.4:38932","client backend"
2024-08-15 16:22:09.269 CST,"user_w","user_db",55278,"10.0.0.70:59560",66bdb890.d7ee,1,"SET",2024-08-15 16:13:04 CST,43/0,0,LOG,00000,"duration: 20042.606 ms statement: SET application_name='app - 10.0.0.16:52848';",,,,,,,,,"app -10.0.0.16:52848","client backend"
From what I can check, the collector is using IPv6; IPv6 is enabled as of now and stats are getting updated. We only log slow statements, and the first slow COMMIT entry took 20 sec and completed at **2024-08-15 16:22:09.253 CST**, which by calculation means it might have started before the first stats collector log entry at **2024-08-15 16:21:59.048 CST**. We are unable to conclude whether the problem actually started with the stats collector or with the transactions, or what the cause of the issue is. The issue resolves itself in 10-20 sec.
**UPDATE**
I have noticed that when the system is working normally there is no UDP socket for postmaster if I run netstat -n -u -p. However, files under the pg_stat_temp directory are getting updated and I can see the stats collector process in the process list. Why is there no visible UDP socket under postmaster?
goodfella
(595 rep)
Aug 19, 2024, 04:05 AM
• Last activity: Jun 27, 2025, 12:04 PM
0
votes
1
answers
89
views
Gather Streams plan operator bad performance
How do I explain and troubleshoot the "final" Gather Streams query plan operator, which takes seconds, while the preceding operators finish within one second?
These are the left-most plan operators. I can't share the full plan for security reasons (sorry about that)
The plan metrics I follow don't look suspicious to me:
- Threads are pretty evenly distributed
- No prevalent wait type (as far as the plan info goes)
- Duration captured via Extended Event fits the Duration inside the plan stats
Is it that the gathering itself is so time-consuming for that number of rows ("just" 100k+)?
**Additional info:**
Here is the obfuscated query plan
SQL Sentry Plan Explorer gives a different view of the problematic operator, with costs attributed to CPU, so this could be a wrong interpretation of the numbers (which numbers?) by SSMS.
**Additional info #2:**
The obfuscated query plan I provide comes from the *query_post_execution_showplan* XE.
The query typically (in most executions) runs for about 1 second. Only some executions go beyond 5 seconds (my XE session is filtered for Duration > 5 s).



jericzech
(977 rep)
Jun 17, 2025, 11:40 AM
• Last activity: Jun 20, 2025, 07:06 PM
2
votes
1
answers
204
views
MySQL abnormal connection peak and page cleaner warnings
I'm running MySql 5.7.42 on Ubuntu 18.04 on VM with 12 Core, 64GB Ram and SSD storage.
This server is dedicated as DB server, some scripts and maintenance (backups) runs nightly but no other services are provided.
Recently, as data keeps growing, I have started to see a lot of messages like:
2024-01-12T07:00:24.883652Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 8686ms. The settings might not be optimal. (flushed=998 and evicted=0, during the time.)
2024-01-12T08:02:03.366693Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 6949ms. The settings might not be optimal. (flushed=6118 and evicted=0, during the time.)
2024-01-12T08:22:43.747484Z 0 [Note] InnoDB: page_cleaner: 1000ms intended loop took 6208ms. The settings might not be optimal. (flushed=10013 and evicted=0, during the time.)
and randomly I see strange connection peaks of 60-80 simultaneous connections.
Typically simultaneous connections ranged from 5 to 15, so I left the standard value of 151.
One time all 151 connections were in use (for a few seconds), so no other connections were allowed.
First of all I investigated a request flood from the frontend server, but I didn't find anything strange. I monitored the situation for a week, so I'm pretty sure. Indeed, I have seen that the problem occurs even in times of relatively low traffic!
mysql conf
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
key_buffer_size = 1400M
max_allowed_packet = 64M
thread_stack = 192K
thread_cache_size = 8
myisam-recover-options = BACKUP
log_error = /var/log/mysql/error.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
expire_logs_days = 10
max_binlog_size = 100M
innodb_buffer_pool_size = 42G # (adjust value here, 50%-70% of total RAM)
innodb_log_file_size = 2000M
innodb_log_buffer_size = 512M
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
innodb_io_capacity = 10000
innodb_io_capacity_max = 20000
innodb_buffer_pool_instances=36
query_cache_limit = 32M
query_cache_size = 0
query_cache_type = 0
innodb_read_io_threads = 16
innodb_write_io_threads = 16
innodb_lru_scan_depth=256
tmp_table_size = 64M
max_heap_table_size = 64M
innodb_autoinc_lock_mode =2
sync_binlog=0
bulk_insert_buffer_size=512M
join_buffer_size = 3M
sort_buffer_size = 512K
table_open_cache=4000
innodb_open_files=4000
max_connections=151
wait_timeout=7200
interactive_timeout=7200
mysqltuner.pl:
>> MySQLTuner 2.5.0
* Jean-Marie Renouard
* Major Hayden
>> Bug reports, feature requests, and downloads at http://mysqltuner.pl/
>> Run with '--help' for additional options and output filtering
[--] Skipped version check for MySQLTuner script
[OK] Logged in using credentials from Debian maintenance account.
[OK] Operating on 64-bit architecture
-------- Storage Engine Statistics -----------------------------------------------------------------
[--] Status: +ARCHIVE +BLACKHOLE +CSV -FEDERATED +InnoDB +MEMORY +MRG_MYISAM +MyISAM +PERFORMANCE_SCHEMA
[--] Data in InnoDB tables: 55.7G (Tables: 259)
[--] Data in MyISAM tables: 17.2G (Tables: 1)
[OK] Total fragmented tables: 0
[OK] Currently running supported MySQL version 5.7.33-0ubuntu0.18.04.1-log
-------- Log file Recommendations ------------------------------------------------------------------
[OK] Log file /var/log/mysql/error.log exists
[--] Log file: /var/log/mysql/error.log (174B)
[OK] Log file /var/log/mysql/error.log is not empty
[OK] Log file /var/log/mysql/error.log is smaller than 32 MB
[OK] Log file /var/log/mysql/error.log is readable.
[OK] /var/log/mysql/error.log doesn't contain any warning.
[OK] /var/log/mysql/error.log doesn't contain any error.
[--] 0 start(s) detected in /var/log/mysql/error.log
[--] 0 shutdown(s) detected in /var/log/mysql/error.log
-------- Analysis Performance Metrics --------------------------------------------------------------
[--] innodb_stats_on_metadata: OFF
[OK] No stat updates during querying INFORMATION_SCHEMA.
-------- Views Metrics -----------------------------------------------------------------------------
-------- Triggers Metrics --------------------------------------------------------------------------
-------- Routines Metrics --------------------------------------------------------------------------
-------- Security Recommendations ------------------------------------------------------------------
[OK] No Role user detected
[OK] There are no anonymous accounts for any database users
[OK] All database users have passwords assigned
[!!] There is no basic password file list!
-------- CVE Security Recommendations --------------------------------------------------------------
[--] Skipped due to --cvefile option undefined
-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 11d 20h 17m 50s (346M q [338.255 qps], 7M conn, TX: 858G, RX: 101G)
[--] Reads / Writes: 57% / 43%
[--] Binary logging is disabled
[--] Physical Memory : 64.9G
[--] Max MySQL memory : 47.7G
[--] Other process memory: 0B
[--] Total buffers: 46.9G global + 4.1M per thread (151 max threads)
[--] Performance_schema Max memory usage: 172M
[--] Galera GCache Max memory usage: 0B
[OK] Maximum reached memory usage: 47.7G (73.55% of installed RAM)
[OK] Maximum possible memory usage: 47.7G (73.54% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (8K/346M)
[!!] Highest connection usage: 100% (152/151)
[OK] Aborted connections: 0.00% (32/7732702)
[!!] Name resolution is active: a reverse name resolution is made for each new connection which can reduce performance
[OK] Query cache is disabled by default due to mutex contention on multiprocessor machines.
[OK] Sorts requiring temporary tables: 0% (144K temp sorts / 18M sorts)
[!!] Joins performed without indexes: 248445
[OK] Temporary tables created on disk: 18% (2M on disk / 14M total)
[OK] Thread cache hit rate: 98% (88K created / 7M connections)
[OK] Table cache hit rate: 99% (397M hits / 398M requests)
[OK] table_definition_cache (1609) is greater than number of tables (540)
[OK] Open file limit used: 0% (3/5K)
[OK] Table locks acquired immediately: 100% (45K immediate / 45K locks)
-------- Performance schema ------------------------------------------------------------------------
[--] Performance_schema is activated.
[--] Memory used by Performance_schema: 172.6M
[--] Sys schema is installed.
-------- ThreadPool Metrics ------------------------------------------------------------------------
[--] ThreadPool stat is disabled.
-------- MyISAM Metrics ----------------------------------------------------------------------------
[!!] Consider migrating 1 following tables to InnoDB:
[--] * InnoDB migration request for gilog.gestionale_events_log Table: ALTER TABLE gilog.gestionale_events_log ENGINE=InnoDB;
[--] General MyIsam metrics:
[--] +-- Total MyISAM Tables : 1
[--] +-- Total MyISAM indexes : 3.5G
[--] +-- KB Size :1.4G
[--] +-- KB Used Size :264.4M
[--] +-- KB used :18.9%
[--] +-- Read KB hit rate: 84.4% (7K cached / 1K reads)
[--] +-- Write KB hit rate: 0% (0 cached / 0 writes)
[!!] Key buffer used: 18.9% (264.4M used / 1.4G cache)
[!!] Key buffer size / total MyISAM indexes: 1.4G/3.5G
[!!] Read Key buffer hit rate: 84.4% (7K cached / 1K reads)
-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDB Buffer Pool size ( 45.0G ) under limit for 64 bits architecture: (17179869184.0G )
[!!] InnoDB buffer pool / data size: 45.0G / 55.7G
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (8.68055555555556%): 2.0G * 2 / 45.0G should be equal to 25%
[!!] InnoDB buffer pool instances: 36
[--] Number of InnoDB Buffer Pool Chunk: 360 for 36 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 100.00% (114108975883 hits / 114111666996 total)
[!!] InnoDB Write Log efficiency: 27% (23815946 hits / 88191482 total)
[OK] InnoDB log waits: 0.00% (0 waits / 64375536 writes)
-------- Aria Metrics ------------------------------------------------------------------------------
[--] Aria Storage Engine not available.
-------- TokuDB Metrics ----------------------------------------------------------------------------
[--] TokuDB is disabled.
-------- XtraDB Metrics ----------------------------------------------------------------------------
[--] XtraDB is disabled.
-------- Galera Metrics ----------------------------------------------------------------------------
[--] Galera is disabled.
-------- Replication Metrics -----------------------------------------------------------------------
[--] Galera Synchronous replication: NO
[--] No replication slave(s) for this server.
[--] Binlog format: ROW
[--] XA support enabled: ON
[--] Semi synchronous replication Master: Not Activated
[--] Semi synchronous replication Slave: Not Activated
[--] This is a standalone server
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=ON
We will suggest raising the 'join_buffer_size' until JOINs not using indexes are found.
See https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_join_buffer_size
MyISAM engine is deprecated, consider migrating to InnoDB
Be careful, increasing innodb_log_file_size / innodb_log_files_in_group means higher crash recovery mean time
Variables to adjust:
max_connections (> 151)
wait_timeout ( 3.0M, or always use indexes with JOINs)
key_buffer_size (~ 277M)
key_buffer_size (> 3.5G)
innodb_buffer_pool_size (>= 55.7G) if possible.
innodb_log_file_size should be (=5G) if possible, so InnoDB total log file size equals 25% of buffer pool size.
innodb_buffer_pool_instances(=45)
innodb_log_buffer_size (> 512M)
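The log-file ratio mysqltuner flags above is plain arithmetic over the settings shown earlier (two 2000M log files against the 45 GiB buffer pool it reports); a quick sketch:

```python
# Settings from the config above: two innodb_log_file_size = 2000M files,
# innodb_buffer_pool_size reported as 45.0 GiB by mysqltuner.
log_file_gib = 2000 / 1024          # 2000M expressed in GiB
buffer_pool_gib = 45.0
ratio_pct = 2 * log_file_gib / buffer_pool_gib * 100
print(round(ratio_pct, 2))          # 8.68 -- the percentage mysqltuner reports
# The 25% rule of thumb would size each of the two log files at:
print(buffer_pool_gib * 0.25 / 2)   # 5.625 GiB; mysqltuner rounds this to 5G
```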
server params:
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 265423
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 265423
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I guess the first thing is solving the InnoDB: page_cleaner warning, because it appears many times a day. The parameter innodb_lru_scan_depth is set to 256; I don't think any lower value could help.
So basically 2 ways remain:
1. cleaning database from older unused records
2. increases resource as mysqltuner suggests
Any advice?
Thanks
Sbraaa
(21 rep)
Jan 12, 2024, 09:33 AM
• Last activity: Jun 18, 2025, 06:05 PM
1
votes
2
answers
1982
views
SQL Query, how to know if is resource intensive? I/O
I am troubleshooting an issue with our marketing platform as it has become sluggish and slow, I've requested the top 50 heavy/resource intensive queries executed on our sql server to correlate with our marketing workflows.
Here is an example of some of the top queries. Based on the average I/O, would you say these queries are consuming too many resources? What is a normal, acceptable I/O for a query?
If I take the first query's execution plan, it is as follows.
Plan Link https://www.brentozar.com/PasteThePlan/?id=HJtGXHxUc



David Garcia
(367 rep)
May 5, 2022, 11:33 AM
• Last activity: Jun 12, 2025, 09:31 AM
3
votes
1
answers
4107
views
SQL Server-- Calculating Optimal Number of CPU Cores
We're soon going to rebuild the SQL Server running our production ERP. Our SAN Admin issued me the following challenge: > *Assume I could give you as many Intel Xeon Gold 6240 CPU @ 2.6 GHz cores as you need for optimal SQL Server performance, as long as the > ROI is reasonable. We don't want to was...
We're soon going to rebuild the SQL Server running our production ERP. Our SAN Admin issued me the following challenge:
> *Assume I could give you as many Intel Xeon Gold 6240 CPU @ 2.6 GHz cores as you need for optimal SQL Server performance, as long as the
> ROI is reasonable. We don't want to waste money, but are willing to
> splurge a bit as long as you're getting tangible performance
> improvements. How many cores do you want?*
On our current production box, we think we have MaxDOP and CTP set effectively, and expensive queries are going parallel, but we still hit very high numbers quite regularly. We're regularly getting SOS_SCHEDULER_YIELD and CXPACKET/CXCONSUMER as top wait stats. I'm pretty confident that we're under CPU pressure, and I'd love the new server to work better.
After doing a bunch of reading, I've found quite a few articles (including by Glenn Berry) talking about *which* CPUs to select. What I've not had success finding are articles talking about how to calculate the optimal number of cores to allocate.
Assuming cost matters but is secondary to tangible performance, what kind of metrics can I take from my production ERP SQL Server, and how can I compare them to a specific known processor, to determine how many cores to allocate for the best ROI in terms of performance:cost?
EDIT-- Since someone may ask-- we're on SQL Server Enterprise Edition. The production instance is SQL Server 2017 but we'll likely be upgrading to 2019 on the new server/instance.
Eluros
(75 rep)
Mar 25, 2022, 02:19 PM
• Last activity: Jun 11, 2025, 12:06 PM
0
votes
2
answers
927
views
Multiple Schemas for increasing MySQL (5.7) Performance
Let's start with my question first:
Can having multiple schemas (1 main and 1 daily, the daily one having 5-6 tables with the same structure as the main one but only holding the current day's work, synced back into the main schema every night) in a MySQL (5.7) server impact its performance?
More background:
The problem I'm facing with my database is that the application was working fine under a certain load (it still does, but LA goes up to 8), but for the past few days I've been fighting CPU usage... I dug into the problem and the reason is that there are locks going on and my workers get stuck trying to insert/update new data.
To go even deeper: I'm doing at least 1K inserts per minute, up to 2K updates per minute (some on previously inserted records), and more than 4-5K selects per minute.
And I'm not talking about simple integer inserts... lots of strings, timestamps, integers, and 6 indexes (the table has 32 columns in total).
My server is fairly powerful (huh, fairly).
- 16GB RAM
- 1TB SSD
- 8 Core, Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
- Software: MySQL 5.7
Part of my my.cnf
innodb_buffer_pool_size = 12G
innodb_buffer_pool_instances = 8
innodb_log_file_size = 30G
innodb_log_buffer_size = 16M
innodb_flush_log_at_trx_commit = 2
innodb_thread_concurrency = 8
and still I'm getting LA of up to 8.00
The problem with all this is that all my workers start every minute and they sort of confront each other.
What I came up with is the following scenario:
- create separate schema with my 'daily' working tables (mostly the SELECTS) and do the inserts/updates in my original schema.
Will that decrease the load on the server?
And one off-topic question: is an LA of up to 8.00 healthy for the server?
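On the LA question, a common heuristic (a rule of thumb, not a hard limit) is to compare load average against core count; a sustained value at or near 1.0 per core suggests CPU saturation:

```python
cores = 8        # CPU cores on this server
load_avg = 8.0   # observed peak load average
print(load_avg / cores)  # 1.0 -> the run queue roughly saturates every core
```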
Thanks
P.S. If you need any further info, just let me know I'll share it.
user3570930
(1 rep)
Dec 19, 2017, 02:27 PM
• Last activity: Jun 3, 2025, 01:02 PM
1
votes
1
answers
241
views
A MySQL EXPLAIN number of rows discrepancy
MySQL 5.5.49-log
More questions on the query in https://dba.stackexchange.com/q/175377/5061 (the query is the same but the question is different):
I have the following table (filled with many rows):
CREATE TABLE SectorGraphs2 (
  Kind tinyint(3) UNSIGNED NOT NULL COMMENT '1 - producer, 2 - genre, 3 - region',
  Criterion tinyint(3) UNSIGNED NOT NULL,
  Period tinyint(3) UNSIGNED NOT NULL,
  PeriodStart date NOT NULL,
  SectorID int(10) UNSIGNED NOT NULL,
  Value float NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
ALTER TABLE SectorGraphs2
  ADD UNIQUE KEY Producer2 (Kind, Criterion, Period, PeriodStart, SectorID) USING BTREE,
  ADD KEY SectorID (SectorID);
then I run:
EXPLAIN
SELECT SectorID, SUM(Value)
FROM SectorGraphs2
WHERE Kind = 1 AND Criterion = 7
AND Period = 1
AND PeriodStart >= ? AND PeriodStart < ? + INTERVAL 1 WEEK
GROUP BY SectorID
and it produces:
+----+-------------+---------------+-------+---------------+-----------+---------+------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | extra |
+----+-------------+---------------+-------+---------------+-----------+---------+------+------+----------------------------------------------+
| 1 | SIMPLE | SectorGraphs2 | range | Producer2 | Producer2 | 6 | NULL | 1 | Using where; Using temporary; Using filesort |
+----+-------------+---------------+-------+---------------+-----------+---------+------+------+----------------------------------------------+
See a nicely formatted explanation here .
My question: why does it use a temporary table and filesort but report only 1 row examined? It seems that, because a temporary table is used, it should process more than one row. How can I determine the real number of rows processed? How do I resolve this discrepancy in the number of processed rows?
Note that the task I have been assigned is to eliminate heavy queries (those involving too many rows), and right now I do not know how to do that.
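For what it's worth, the key_len of 6 reported in the plan is consistent with a range scan over the first four parts of the Producer2 key; a quick check of the byte arithmetic (standard MySQL key-part sizes):

```python
# Key-part sizes for the Producer2 prefix (Kind, Criterion, Period, PeriodStart);
# all four columns are NOT NULL, so there are no extra null-flag bytes.
kind = criterion = period = 1   # TINYINT UNSIGNED -> 1 byte each
period_start = 3                # DATE -> 3 bytes
print(kind + criterion + period + period_start)  # 6 -> key_len in the EXPLAIN
```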
porton
(745 rep)
Jun 5, 2017, 11:52 AM
• Last activity: May 25, 2025, 03:03 PM
0
votes
1
answers
266
views
Query Locks Table And Takes Long Time to Run
I have a query that can take a long time to run. I tried running the same query on a test server and it runs very quickly.
However, in production it seems to hang and causes blocking of other queries.
Where do I begin to find a solution for this? Below is my code and the plan...
https://www.brentozar.com/pastetheplan/?id=ryAQWfL2X
The issue seems to be with the CTE statement. Is there a more efficient way of writing this?
-- table to store break details
CREATE TABLE #PositionReconciliationCurrentBreaks
(
TransactionKey INT,
BreakOriginationDate DATE,
BreakAge DECIMAL (18,0)
);
TRUNCATE TABLE #PositionReconciliationCurrentBreaks;
WITH PreviousRecBreaks_CTE -- get breaks from previous rec
AS
(
SELECT
MAX(i.InvestmentID) AS InvestmentID,
PRT.PositionDate,
PRT.PositionReconciliationDetailKey,
MAX(PRT.PositionReconciliationTransactionKey) AS PositionReconciliationTransactionKey,
PRT.InvestmentKey,
MAX(PRT.GenevaInvestmentKey) AS GenevaInvestmentKey,
MAX(PRT.BrokerInvestmentKey) AS BrokerInvestmentKey,
PRT.PaymentCurrencyKey,
PRT.AccountKey,
CASE WHEN MAX(PRT.BreakOriginationDate) = '2100-01-01' THEN NULL ELSE MAX(PRT.BreakOriginationDate) END AS BreakOriginationDate,
MAX(PRT.BreakAge) AS BreakAge,
SUM(ISNULL(PRT.GenevaQuantity,0)) AS GenevaQuantity,
SUM(ISNULL(PRT.BrokerQuantity,0)) AS BrokerQuantity
FROM [Fund.Accounting.Model.PositionReconciliation].PositionReconciliationTransaction PRT -- get transactions from previous rec
INNER JOIN
[Fund.Accounting.Model.PositionReconciliation].PositionReconciliationDetail TRD -- check details to get Action on transaction
ON TRD.PositionReconciliationDetailKey = PRT.PositionReconciliationDetailKey -- join on Detail Key
INNER JOIN [Fund.Accounting.Model.TradeReconciliation].Investment i -- add in investmentID for debugging purpose
ON i.InvestmentKey = PRT.InvestmentKey
INNER JOIN
[Fund.Accounting.Model].[ReconciliationActions] RA -- actions
ON RA.ReconciliationActionID=TRD.ReconciliationActionID
AND (TRD.PositionCommentKey IS NOT NULL) --approved breaks must have a comment.
AND RA.Name <> 'Suppress' -- ensure not suppressed
AND RA.Name <> 'Auto Match' -- ensure not perfect match - we do want to include cross references with a qty diff
WHERE PRT.PositionReconciliationID = @PreviousPositionReconciliationId -- previous rec
GROUP BY -- added grouping for quantity breaks as x-refs will be 2 separate lines in PositionReconciliationTransaction tbl; We group to force this as 1 line
PRT.PositionDate,
PRT.PositionReconciliationDetailKey,
PRT.InvestmentKey,
PRT.PaymentCurrencyKey,
PRT.AccountKey
),CurrentRecBreaks_CTE -- get breaks from current rec
AS
(
SELECT
MAX(i.InvestmentID) AS InvestmentID,
PRT.PositionDate,
PRT.PositionReconciliationDetailKey,
MAX(PRT.PositionReconciliationTransactionKey) AS PositionReconciliationTransactionKey,
PRT.InvestmentKey,
MAX(PRT.GenevaInvestmentKey) AS GenevaInvestmentKey,
MAX(PRT.BrokerInvestmentKey) AS BrokerInvestmentKey,
PRT.PaymentCurrencyKey,
PRT.AccountKey,
CASE WHEN MAX(PRT.BreakOriginationDate) = '2100-01-01' THEN NULL ELSE MAX(PRT.BreakOriginationDate) END AS BreakOriginationDate,
MAX(PRT.BreakAge) AS BreakAge,
SUM(ISNULL(PRT.GenevaQuantity,0)) AS GenevaQuantity,
SUM(ISNULL(PRT.BrokerQuantity,0)) AS BrokerQuantity
FROM [Fund.Accounting.Model.PositionReconciliation].PositionReconciliationTransaction PRT -- get transactions from previous rec
INNER JOIN
[Fund.Accounting.Model.PositionReconciliation].PositionReconciliationDetail TRD -- check details to get Action on transaction
ON TRD.PositionReconciliationDetailKey = PRT.PositionReconciliationDetailKey -- join on Detail Key
INNER JOIN [Fund.Accounting.Model.TradeReconciliation].Investment i -- add in investmentID for debugging purpose
ON i.InvestmentKey = PRT.InvestmentKey
LEFT JOIN
[Fund.Accounting.Model].[ReconciliationActions] RA -- actions
ON RA.ReconciliationActionID=TRD.ReconciliationActionID
AND RA.Name <> 'Suppress' -- ensure not suppressed
AND RA.Name <> 'Auto Match' -- ensure not perfect match - we do want to include cross references with a qty diff
WHERE PRT.PositionReconciliationID = @PositionReconciliationID -- current rec
GROUP BY -- added grouping for quantity breaks as x-refs will be 2 separate lines in PositionReconciliationTransaction tbl; We group to force this as 1 line
PRT.PositionDate,
PRT.PositionReconciliationDetailKey,
PRT.InvestmentKey,
PRT.PaymentCurrencyKey,
PRT.AccountKey
)
INSERT INTO #PositionReconciliationCurrentBreaks
(
TransactionKey,
BreakOriginationDate,
BreakAge
)
SELECT -- get breaks in current rec that also appeared in previous rec
C.PositionReconciliationTransactionKey
,ISNULL(P.BreakOriginationDate,@PreviousPositionReconciliationPeriod) -- set to the previous date if this is the break's first appearance
,CASE DATEDIFF(DAY, P.PositionDate, C.PositionDate)
WHEN 0 THEN 1 --set to 1 day old, if it is 1st time
ELSE ISNULL(P.BreakAge,0) + DATEDIFF(DAY, P.PositionDate, C.PositionDate) -- get no. of days difference between current rec and previous one
END AS BreakAge
FROM PreviousRecBreaks_CTE P
INNER JOIN
CurrentRecBreaks_CTE C
ON P.AccountKey = C.AccountKey
AND P.PaymentCurrencyKey=C.PaymentCurrencyKey
AND P.InvestmentKey=C.InvestmentKey
--AND (P.GenevaInvestmentKey=C.GenevaInvestmentKey
-- OR P.BrokerInvestmentKey=C.BrokerInvestmentKey)
AND P.GenevaQuantity=C.GenevaQuantity
AND P.BrokerQuantity=C.BrokerQuantity
WHERE NOT EXISTS(
-- exclude breaks that are now perfect matches (or suppressed???)
SELECT
D.PositionReconciliationDetailKey
FROM
[Fund.Accounting.Model.PositionReconciliation].PositionReconciliationDetail D
INNER JOIN -- join on AggregatePositionReconciliationTransaction so we can get the Qty Diff value
[Fund.Accounting.Facade.PositionReconciliation].[AggregatePositionReconciliationTransaction] (@PositionReconciliationID,NULL,NULL) A
ON A.PositionReconciliationDetailKey = D.PositionReconciliationDetailKey
WHERE D.PositionReconciliationID = @PositionReconciliationID -- current rec
AND A.QuantityDifference = 0 -- perfect matches
AND D.PositionReconciliationDetailKey = C.PositionReconciliationDetailKey -- join details to transactions
);
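Since both CTEs are expanded into one large plan every time they are referenced, a common rewrite (a sketch only, not tested against this schema) is to materialize each CTE into its own indexed temp table first, so that the final join and the `NOT EXISTS` probe run against small, indexed sets:

```sql
-- Sketch: materialize the previous-rec breaks into an indexed temp table;
-- the column list, FROM clause, and GROUP BY are the same as in
-- PreviousRecBreaks_CTE above. Repeat for the current rec.
SELECT ...           -- same select list as PreviousRecBreaks_CTE
INTO #PreviousRecBreaks
FROM ...             -- same joins and WHERE clause as the CTE
GROUP BY ...;

-- Index on the join keys used by the final INSERT ... SELECT.
CREATE CLUSTERED INDEX IX_PrevBreaks
    ON #PreviousRecBreaks (AccountKey, PaymentCurrencyKey, InvestmentKey);
```

This also lets the optimizer get accurate row counts for each intermediate set, which often changes the join strategy it picks in production.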
Kevin
(533 rep)
Oct 30, 2018, 05:19 PM
• Last activity: May 18, 2025, 06:03 AM
Showing page 1 of 20 total questions