Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
20
votes
1
answers
16267
views
what is "planSummary: IDHACK"?
This query scans only one document and returns only one document, but it is very slow:
2017-05-22T07:13:24.548+0000 I COMMAND [conn40] query databasename.collectionname query: { _id: ObjectId('576d4ce3f2d62a001e84a9b8') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8009ms
2017-05-22T07:13:24.549+0000 I COMMAND [conn10] query databasename.collectionname query: { _id: ObjectId('576d4db35de5fa001ebdd77a') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8010ms
2017-05-22T07:13:24.553+0000 I COMMAND [conn47] query databasename.collectionname query: { _id: ObjectId('576d44b7ea8351001ea5fb22') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8014ms
2017-05-22T07:13:24.555+0000 I COMMAND [conn52] query databasename.collectionname query: { _id: ObjectId('576d457ceb82a0001e205bfa') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8015ms
2017-05-22T07:13:24.555+0000 I COMMAND [conn41] query databasename.collectionname query: { _id: ObjectId('576d457ec0697c001e1e1779') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8015ms
2017-05-22T07:13:24.555+0000 I COMMAND [conn39] query databasename.collectionname query: { _id: ObjectId('576d44b8ea8351001ea5fb27') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8015ms
2017-05-22T07:13:24.561+0000 I COMMAND [conn34] query databasename.collectionname query: { _id: ObjectId('576d44b55de5fa001ebdd31e') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8022ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn32] query databasename.collectionname query: { _id: ObjectId('576d4df6d738a7001ef2a235') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn51] query databasename.collectionname query: { _id: ObjectId('576d48165de5fa001ebdd55a') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8024ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn17] query databasename.collectionname query: { _id: ObjectId('576d44c19f2382001e953717') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn8] query databasename.collectionname query: { _id: ObjectId('576d45d256c22e001efdb382') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn42] query databasename.collectionname query: { _id: ObjectId('576d44bd57c75e001e6e2302') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn6] query databasename.collectionname query: { _id: ObjectId('576d44b394e731001e7cd530') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.571+0000 I COMMAND [conn31] query databasename.collectionname query: { _id: ObjectId('576d4dbcb7289f001e64e32b') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8032ms
This looks like very slow disk I/O. What does `planSummary: IDHACK` mean? Is there any more info on IDHACK?
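For reference, a minimal way to see which plan a given `_id` lookup gets is the mongo shell's explain (collection name and id here are taken from the log above):
// a plain lookup by _id should report the IDHACK fast path in the plan output
db.collectionname
  .find({ _id: ObjectId('576d4ce3f2d62a001e84a9b8') })
  .explain("executionStats")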
Sybil
(2578 rep)
May 23, 2017, 03:36 PM
• Last activity: Jul 10, 2025, 01:38 AM
0
votes
3
answers
279
views
Identifying root cause of slow query outliers from pt-query-digest?
I've run pt-query-digest and noticed many outliers with very high query times. Many of these queries do not perform slowly when I run them myself, and I'm having trouble figuring out the root cause of these sporadic slow queries. Is there another tool I might use to shed more light on this situation? What might cause the same query to only be slow *some* of the time?
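One low-tech approach, sketched below on the assumption that a short burst of extra log volume is acceptable: temporarily log every statement, so a fresh pt-query-digest run can show what else was running around each outlier (lock waits, flushes, and I/O stalls caused by neighbouring queries are common causes of sporadic slowness).
-- log everything for a few minutes, then restore the usual threshold
SET GLOBAL long_query_time = 0;
SET GLOBAL slow_query_log = 'ON';
-- ... wait for an outlier to recur, then:
SET GLOBAL long_query_time = 10;  -- assumed previous value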
Eric R.
(133 rep)
Oct 3, 2014, 04:26 PM
• Last activity: May 14, 2025, 08:07 PM
1
votes
3
answers
4608
views
How to manage a growing MySQL slow_log table?
I see MySQL supports storing the slow_log as a table (internally using a mysql/slow_log.CSV file), and this appears convenient, but is it a feature I should use in production? I found online articles on the importance of temporarily disabling slow query logging while rotating the file log, but what's the best practice for managing slow_log.CSV? E.g. copy and truncate the table, or rotate slow_log.CSV just like db-slow.log?
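For the TABLE destination, the pattern in the MySQL manual's log-table maintenance notes is an atomic swap rather than copy-and-truncate, which avoids having to disable logging; a sketch (the archive table name is my own):
USE mysql;
CREATE TABLE slow_log_new LIKE slow_log;
RENAME TABLE slow_log TO slow_log_archive, slow_log_new TO slow_log;
-- slow_log_archive can now be dumped, rotated away, or dropped at leisure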
Piers C
(111 rep)
Mar 26, 2018, 04:52 PM
• Last activity: Apr 21, 2025, 07:15 PM
3
votes
2
answers
166
views
Extra metadata in mysql slow query log - meaning of Id?
I'm running MySQL 5.7 with slow query logging enabled, and the log includes more metadata than is mentioned by the docs; these three lines are printed before each query:
# Time: 2025-02-18T14:25:16.286041Z
# User@Host: root[root] @ localhost [] Id: 261083
# Query_time: 3.677193 Lock_time: 0.001193 Rows_sent: 1 Rows_examined: 4334365
The docs explain that third line only:
> each statement written to the log is preceded by a line that begins with a # character and has these fields (with all fields on a single line):
> * Query_time
> * Lock_time
> * Rows_sent
> * Rows_examined
Of the other two lines, most of the fields are obvious; but does anybody know what the **Id** field represents? It is different for most of the entries in my slow log, but in some cases there are several which have the same Id.
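A quick experiment to test one plausible reading — that Id is the connection (thread) id, which would also explain several entries sharing the same value within one session; `SLEEP()` is just a cheap way to force a slow entry (assuming long_query_time < 5):
SELECT CONNECTION_ID();  -- note this session's number
SELECT SLEEP(5);         -- gets written to the slow log
-- then compare the Id in the new slow-log entry with the noted number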
Vince Bowdren
(439 rep)
Feb 18, 2025, 04:12 PM
• Last activity: Feb 19, 2025, 12:53 PM
2
votes
2
answers
274
views
MySQL slow log deletion while slow-query-log is ON
I'm creating a tool that sets `slow-query-log` to `ON` with `long_query_time = 0` for 5 minutes, to log all queries currently running.
The only thing I'm afraid of is that the slow_log will get too big very quickly.
From what I see, if I delete the slow-query-log file, MySQL doesn't create it back until I run `FLUSH SLOW LOGS`.
The question is: is it safe to delete it while `slow-query-log` is on? And do I need to run `FLUSH SLOW LOGS` before doing it? I tried to read the documentation but did not understand how `FLUSH SLOW LOGS` is relevant and what its exact purpose is.
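A sketch of the usual rotation for the FILE destination (paths are assumptions): move the file aside first, then flush, so that mysqld closes the old handle and reopens the configured filename.
mv /var/lib/mysql/host-slow.log /var/lib/mysql/host-slow.log.1
mysql -e 'FLUSH SLOW LOGS;'   # mysqld recreates host-slow.log here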
idan ahal
(131 rep)
Aug 27, 2024, 11:40 AM
• Last activity: Aug 27, 2024, 02:12 PM
1
votes
2
answers
137
views
Why is this query running so slowly?
My system (OpenMage / Magento 1) is running some queries like the following, and even though they don't touch any large tables they seem to take an abnormally long time to execute. Sometimes they take a few minutes; sometimes they time out after a day.
For example
# Time: 240405 7:37:19
# User@Host: db_example3[db_example3] @ localhost [127.0.0.1]
# Thread_id: 8345298 Schema: db_example3 QC_hit: No
# Query_time: 140.075668 Lock_time: 0.000176 Rows_sent: 1 Rows_examined: 15
# Rows_affected: 0 Bytes_sent: 148
# Tmp_tables: 1 Tmp_disk_tables: 0 Tmp_table_sizes: 253976
# Full_scan: No Full_join: No Tmp_table: Yes Tmp_table_on_disk: No
# Filesort: Yes Filesort_on_disk: No Merge_passes: 0 Priority_queue: No
#
# explain: id select_type table type possible_keys key key_len ref rows r_rows filtered r_filtered Extra
# explain: 1 SIMPLE ram_idx ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE 4 const 2 2.00 100.00 100.00 Using where; Using index; Using temporary; Using filesort
# explain: 1 SIMPLE gdiktia_idx ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,idx_attribute_store_value_entity IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID 8 db_example3.ram_idx.entity_id,const,const 1 1.00 100.00 100.00 Using index
# explain: 1 SIMPLE primary_camera_idx range PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE 4 NULL 2 2.00 75.00 25.00 Using where; Using index; Using join buffer (flat, BNL join)
# explain: 1 SIMPLE screen_resolution_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.ram_idx.entity_id,const,const,const 1 1.00 100.00 100.00 Using index
# explain: 1 SIMPLE memory_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.ram_idx.entity_id,const,const,const 1 1.00 100.00 100.00 Using index
# explain: 1 SIMPLE second_camera_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.ram_idx.entity_id,const,const,const 1 1.00 100.00 100.00 Using index
# explain: 1 SIMPLE nfcsosto_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.ram_idx.entity_id,const,const,const 1 1.00 100.00 100.00 Using index
# explain: 1 SIMPLE price_index eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_PRICE_CUSTOMER_GROUP_ID,IDX_CATALOG_PRODUCT_INDEX_PRICE_WEBSITE_ID,IDX_CAT_PRD_IDX_PRICE_WS_ID_CSTR_GROUP_ID_MIN_PRICE PRIMARY 8 db_example3.ram_idx.entity_id,const,const 1 1.00 100.00 100.00 Using index
# explain: 1 SIMPLE core_count_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.ram_idx.entity_id,const,const,const 1 1.00 100.00 100.00 Using index
# explain: 1 SIMPLE megethossim_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.ram_idx.entity_id,const,const,const 1 1.00 100.00 100.00 Using index
# explain: 1 SIMPLE cat_index eq_ref PRIMARY,IDX_CAT_CTGR_PRD_IDX_PRD_ID_STORE_ID_CTGR_ID_VISIBILITY,15D3C269665C74C2219037D534F4B0DC PRIMARY 10 const,db_example3.ram_idx.entity_id,const 1 1.00 100.00 100.00 Using where
# explain: 1 SIMPLE e eq_ref PRIMARY PRIMARY 4 db_example3.ram_idx.entity_id 1 1.00 100.00 100.00 Using index
#
SET timestamp=1712291839;
SELECT gdiktia_idx.value, COUNT(gdiktia_idx.entity_id) AS count
FROM catalog_product_entity AS e
INNER JOIN catalog_category_product_index AS cat_index
  ON cat_index.product_id = e.entity_id AND cat_index.store_id = 1 AND cat_index.visibility IN (2, 4) AND cat_index.category_id = '17'
INNER JOIN catalog_product_index_price AS price_index
  ON price_index.entity_id = e.entity_id AND price_index.website_id = '1' AND price_index.customer_group_id = 0
INNER JOIN catalog_product_index_eav AS screen_resolution_idx
  ON screen_resolution_idx.entity_id = e.entity_id AND screen_resolution_idx.attribute_id = 188 AND screen_resolution_idx.store_id = 1 AND screen_resolution_idx.value = '234'
INNER JOIN catalog_product_index_eav AS core_count_idx
  ON core_count_idx.entity_id = e.entity_id AND core_count_idx.attribute_id = 193 AND core_count_idx.store_id = 1 AND core_count_idx.value = '41'
INNER JOIN catalog_product_index_eav AS ram_idx
  ON ram_idx.entity_id = e.entity_id AND ram_idx.attribute_id = 196 AND ram_idx.store_id = 1 AND ram_idx.value = '54'
INNER JOIN catalog_product_index_eav AS memory_idx
  ON memory_idx.entity_id = e.entity_id AND memory_idx.attribute_id = 197 AND memory_idx.store_id = 1 AND memory_idx.value = '62'
INNER JOIN catalog_product_index_eav AS primary_camera_idx
  ON primary_camera_idx.entity_id = e.entity_id AND primary_camera_idx.attribute_id = 198 AND primary_camera_idx.store_id = 1 AND primary_camera_idx.value = '315'
INNER JOIN catalog_product_index_eav AS second_camera_idx
  ON second_camera_idx.entity_id = e.entity_id AND second_camera_idx.attribute_id = 200 AND second_camera_idx.store_id = 1 AND second_camera_idx.value = '90'
INNER JOIN catalog_product_index_eav AS megethossim_idx
  ON megethossim_idx.entity_id = e.entity_id AND megethossim_idx.attribute_id = 229 AND megethossim_idx.store_id = 1 AND megethossim_idx.value = '164'
INNER JOIN catalog_product_index_eav AS nfcsosto_idx
  ON nfcsosto_idx.entity_id = e.entity_id AND nfcsosto_idx.attribute_id = 258 AND nfcsosto_idx.store_id = 1 AND nfcsosto_idx.value = '377'
INNER JOIN catalog_product_index_eav AS gdiktia_idx
  ON gdiktia_idx.entity_id = e.entity_id AND gdiktia_idx.attribute_id = 232 AND gdiktia_idx.store_id = '1'
GROUP BY gdiktia_idx.value;
or
# Time: 240405 7:34:29
# User@Host: db_example3[db_example3] @ localhost [127.0.0.1]
# Thread_id: 8344334 Schema: db_example3 QC_hit: No
# Query_time: 74.418149 Lock_time: 0.000100 Rows_sent: 0 Rows_examined: 0
# Rows_affected: 0 Bytes_sent: 142
# Tmp_tables: 1 Tmp_disk_tables: 0 Tmp_table_sizes: 0
# Full_scan: No Full_join: No Tmp_table: Yes Tmp_table_on_disk: No
# Filesort: Yes Filesort_on_disk: No Merge_passes: 0 Priority_queue: No
#
# explain: id select_type table type possible_keys key key_len ref rows r_rows filtered r_filtered Extra
# explain: 1 SIMPLE extra_specs_idx ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,idx_attribute_store_value_entity IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID 2 const 1 0.00 100.00 100.00 Using where; Using index; Using temporary; Using filesort
# explain: 1 SIMPLE manufacturer2_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.extra_specs_idx.entity_id,const,const,const 1 NULL 100.00 NULL Using index
# explain: 1 SIMPLE primary_camera_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.extra_specs_idx.entity_id,const,const,const 1 NULL 100.00 NULL Using index
# explain: 1 SIMPLE second_camera_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.extra_specs_idx.entity_id,const,const,const 1 NULL 100.00 NULL Using index
# explain: 1 SIMPLE ram_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.extra_specs_idx.entity_id,const,const,const 1 NULL 100.00 NULL Using index
# explain: 1 SIMPLE nfcsosto_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.extra_specs_idx.entity_id,const,const,const 1 NULL 100.00 NULL Using index
# explain: 1 SIMPLE operating_system_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.extra_specs_idx.entity_id,const,const,const 1 NULL 100.00 NULL Using index
# explain: 1 SIMPLE megethossim_idx eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity PRIMARY 12 db_example3.extra_specs_idx.entity_id,const,const,const 1 NULL 100.00 NULL Using index
# explain: 1 SIMPLE price_index eq_ref PRIMARY,IDX_CATALOG_PRODUCT_INDEX_PRICE_CUSTOMER_GROUP_ID,IDX_CATALOG_PRODUCT_INDEX_PRICE_WEBSITE_ID,IDX_CAT_PRD_IDX_PRICE_WS_ID_CSTR_GROUP_ID_MIN_PRICE PRIMARY 8 db_example3.extra_specs_idx.entity_id,const,const 1 NULL 100.00 NULL Using where
# explain: 1 SIMPLE cat_index eq_ref PRIMARY,IDX_CAT_CTGR_PRD_IDX_PRD_ID_STORE_ID_CTGR_ID_VISIBILITY,15D3C269665C74C2219037D534F4B0DC PRIMARY 10 const,db_example3.extra_specs_idx.entity_id,const 1 NULL 100.00 NULL Using where
# explain: 1 SIMPLE e eq_ref PRIMARY PRIMARY 4 db_example3.extra_specs_idx.entity_id 1 NULL 100.00 NULL Using index
# explain: 1 SIMPLE screen_type_idx range PRIMARY,IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID,IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE,idx_attribute_store_value_entity IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE 4 NULL 2 NULL 75.00 NULL Using where; Using index; Using join buffer (flat, BNL join)
#
SET timestamp=1712291669;
SELECT extra_specs_idx.value, COUNT(extra_specs_idx.entity_id) AS count
FROM catalog_product_entity AS e
INNER JOIN catalog_category_product_index AS cat_index
  ON cat_index.product_id = e.entity_id AND cat_index.store_id = 1 AND cat_index.visibility IN (2, 4) AND cat_index.category_id = '3'
INNER JOIN catalog_product_index_price AS price_index
  ON price_index.entity_id = e.entity_id AND price_index.website_id = '1' AND price_index.customer_group_id = 0
INNER JOIN catalog_product_index_eav AS manufacturer2_idx
  ON manufacturer2_idx.entity_id = e.entity_id AND manufacturer2_idx.attribute_id = 186 AND manufacturer2_idx.store_id = 1 AND manufacturer2_idx.value = '6'
INNER JOIN catalog_product_index_eav AS screen_type_idx
  ON screen_type_idx.entity_id = e.entity_id AND screen_type_idx.attribute_id = 189 AND screen_type_idx.store_id = 1 AND screen_type_idx.value = '37'
INNER JOIN catalog_product_index_eav AS operating_system_idx
  ON operating_system_idx.entity_id = e.entity_id AND operating_system_idx.attribute_id = 195 AND operating_system_idx.store_id = 1 AND operating_system_idx.value = '48'
INNER JOIN catalog_product_index_eav AS primary_camera_idx
  ON primary_camera_idx.entity_id = e.entity_id AND primary_camera_idx.attribute_id = 198 AND primary_camera_idx.store_id = 1 AND primary_camera_idx.value = '309'
INNER JOIN catalog_product_index_eav AS second_camera_idx
  ON second_camera_idx.entity_id = e.entity_id AND second_camera_idx.attribute_id = 200 AND second_camera_idx.store_id = 1 AND second_camera_idx.value = '87'
INNER JOIN catalog_product_index_eav AS megethossim_idx
  ON megethossim_idx.entity_id = e.entity_id AND megethossim_idx.attribute_id = 229 AND megethossim_idx.store_id = 1 AND megethossim_idx.value = '164'
INNER JOIN catalog_product_index_eav AS ram_idx
  ON ram_idx.entity_id = e.entity_id AND ram_idx.attribute_id = 196 AND ram_idx.store_id = 1 AND ram_idx.value = '52'
INNER JOIN catalog_product_index_eav AS nfcsosto_idx
  ON nfcsosto_idx.entity_id = e.entity_id AND nfcsosto_idx.attribute_id = 258 AND nfcsosto_idx.store_id = 1 AND nfcsosto_idx.value = '377'
INNER JOIN catalog_product_index_eav AS extra_specs_idx
  ON extra_specs_idx.entity_id = e.entity_id AND extra_specs_idx.attribute_id = 213 AND extra_specs_idx.store_id = '1'
WHERE ((price_index.min_price - (price_index.min_price/(1+(CASE price_index.tax_class_id WHEN 2 THEN 0.2400 ELSE 0 END))*CASE price_index.tax_class_id WHEN 2 THEN 0.2400 ELSE 0 END) + ((price_index.min_price-(price_index.min_price/(1+(CASE price_index.tax_class_id WHEN 2 THEN 0.2400 ELSE 0 END))*CASE price_index.tax_class_id WHEN 2 THEN 0.2400 ELSE 0 END))*CASE price_index.tax_class_id WHEN 2 THEN 0.2400 ELSE 0 END)) >= 99.995000)
  AND ((price_index.min_price - (price_index.min_price/(1+(CASE price_index.tax_class_id WHEN 2 THEN 0.2400 ELSE 0 END))*CASE price_index.tax_class_id WHEN 2 THEN 0.2400 ELSE 0 END) + ((price_index.min_price-(price_index.min_price/(1+(CASE price_index.tax_class_id WHEN 2 THEN 0.2400 ELSE 0 END))*CASE price_index.tax_class_id WHEN 2 THEN 0.2400 ELSE 0 END))*CASE price_index.tax_class_id WHEN 2 THEN 0.2400 ELSE 0 END)) < 199.995000)
GROUP BY extra_specs_idx.value;
As far as I can see the tables are using indexes, and generally nothing is written to disk; everything happens in memory, so I'm not sure how to optimize these queries and make them run faster.
My MariaDB version is:
Server version: 10.6.17-MariaDB-1:10.6.17+maria~ubu2004-log mariadb.org binary distribution
Here are the table definitions:
create table db_example3.catalog_category_product_index
(
category_id int unsigned default 0 not null comment 'Category ID',
product_id int unsigned default 0 not null comment 'Product ID',
position int null comment 'Position',
is_parent smallint unsigned default 0 not null comment 'Is Parent',
store_id smallint unsigned default 0 not null comment 'Store ID',
visibility smallint unsigned not null comment 'Visibility',
primary key (category_id, product_id, store_id),
constraint FK_CATALOG_CATEGORY_PRODUCT_INDEX_STORE_ID_CORE_STORE_STORE_ID
foreign key (store_id) references db_example3.core_store (store_id)
on update cascade on delete cascade,
constraint FK_CAT_CTGR_PRD_IDX_CTGR_ID_CAT_CTGR_ENTT_ENTT_ID
foreign key (category_id) references db_example3.catalog_category_entity (entity_id)
on update cascade on delete cascade,
constraint FK_CAT_CTGR_PRD_IDX_PRD_ID_CAT_PRD_ENTT_ENTT_ID
foreign key (product_id) references db_example3.catalog_product_entity (entity_id)
on update cascade on delete cascade
)
comment 'Catalog Category Product Index' row_format = COMPRESSED;
create index 15D3C269665C74C2219037D534F4B0DC
on db_example3.catalog_category_product_index (store_id, category_id, visibility, is_parent, position);
create index IDX_CAT_CTGR_PRD_IDX_PRD_ID_STORE_ID_CTGR_ID_VISIBILITY
on db_example3.catalog_category_product_index (product_id, store_id, category_id, visibility);
create table db_example3.catalog_product_index_eav
(
entity_id int unsigned not null comment 'Entity ID',
attribute_id smallint unsigned not null comment 'Attribute ID',
store_id smallint unsigned not null comment 'Store ID',
value int unsigned not null comment 'Value',
primary key (entity_id, attribute_id, store_id, value),
constraint FK_CATALOG_PRODUCT_INDEX_EAV_STORE_ID_CORE_STORE_STORE_ID
foreign key (store_id) references db_example3.core_store (store_id)
on update cascade on delete cascade,
constraint FK_CAT_PRD_IDX_EAV_ATTR_ID_EAV_ATTR_ATTR_ID
foreign key (attribute_id) references db_example3.eav_attribute (attribute_id)
on update cascade on delete cascade,
constraint FK_CAT_PRD_IDX_EAV_ENTT_ID_CAT_PRD_ENTT_ENTT_ID
foreign key (entity_id) references db_example3.catalog_product_entity (entity_id)
on update cascade on delete cascade
)
comment 'Catalog Product EAV Index Table' row_format = COMPRESSED;
create index IDX_CATALOG_PRODUCT_INDEX_EAV_ATTRIBUTE_ID
on db_example3.catalog_product_index_eav (attribute_id);
create index IDX_CATALOG_PRODUCT_INDEX_EAV_ENTITY_ID
on db_example3.catalog_product_index_eav (entity_id);
create index IDX_CATALOG_PRODUCT_INDEX_EAV_STORE_ID
on db_example3.catalog_product_index_eav (store_id);
create index IDX_CATALOG_PRODUCT_INDEX_EAV_VALUE
on db_example3.catalog_product_index_eav (value);
create index idx_attribute_store_value_entity
on db_example3.catalog_product_index_eav (attribute_id, store_id, value, entity_id);
create table db_example3.catalog_product_index_price
(
entity_id int unsigned not null comment 'Entity ID',
customer_group_id smallint unsigned not null comment 'Customer Group ID',
website_id smallint unsigned not null comment 'Website ID',
tax_class_id smallint unsigned default 0 null comment 'Tax Class ID',
price decimal(12, 4) null comment 'Price',
final_price decimal(12, 4) null comment 'Final Price',
min_price decimal(12, 4) null comment 'Min Price',
max_price decimal(12, 4) null comment 'Max Price',
tier_price decimal(12, 4) null comment 'Tier Price',
group_price decimal(12, 4) null comment 'Group price',
primary key (entity_id, customer_group_id, website_id),
constraint FK_CAT_PRD_IDX_PRICE_CSTR_GROUP_ID_CSTR_GROUP_CSTR_GROUP_ID
foreign key (customer_group_id) references db_example3.customer_group (customer_group_id)
on update cascade on delete cascade,
constraint FK_CAT_PRD_IDX_PRICE_ENTT_ID_CAT_PRD_ENTT_ENTT_ID
foreign key (entity_id) references db_example3.catalog_product_entity (entity_id)
on update cascade on delete cascade,
constraint FK_CAT_PRD_IDX_PRICE_WS_ID_CORE_WS_WS_ID
foreign key (website_id) references db_example3.core_website (website_id)
on update cascade on delete cascade
)
comment 'Catalog Product Price Index Table' row_format = COMPRESSED;
create index IDX_CATALOG_PRODUCT_INDEX_PRICE_CUSTOMER_GROUP_ID
on db_example3.catalog_product_index_price (customer_group_id);
create index IDX_CATALOG_PRODUCT_INDEX_PRICE_MIN_PRICE
on db_example3.catalog_product_index_price (min_price);
create index IDX_CATALOG_PRODUCT_INDEX_PRICE_WEBSITE_ID
on db_example3.catalog_product_index_price (website_id);
create index IDX_CAT_PRD_IDX_PRICE_WS_ID_CSTR_GROUP_ID_MIN_PRICE
on db_example3.catalog_product_index_price (website_id, customer_group_id, min_price);
The row counts of each table:
select count(*) from catalog_product_index_eav; #418
select count(*) from catalog_product_index_price; #84
select count(*) from catalog_category_product_index; # 314
The server has an AMD Ryzen 9 3900 12-Core Processor with 128GB of RAM and Nvme disks.
Any help is appreciated.
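Two MariaDB diagnostics worth trying, sketched under the assumption that stale InnoDB statistics on these tiny tables are misleading the optimizer: rebuild the statistics, then run the real statement under `ANALYZE FORMAT=JSON` to get measured rather than estimated row counts and per-step timings.
ANALYZE TABLE catalog_product_index_eav,
              catalog_product_index_price,
              catalog_category_product_index;
-- then prefix the full slow query from above with:
ANALYZE FORMAT=JSON SELECT ...;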
gabtzi
(181 rep)
Apr 5, 2024, 05:06 AM
• Last activity: Apr 21, 2024, 11:58 PM
2
votes
2
answers
816
views
Where to get the error/info logs of what happens when committing several statements?
I had a problem with an index missing on a table and the only way I had to gather clues was this way:
BEGIN TRANSACTION;
-- OVER 100 DELETES
COMMIT;
In DBeaver, I selected only the first line and the subsequent deletes, and executed that. Then I selected the commit line, executed that, and waited until I felt it was taking too long.
Then DBeaver gave me this message, when I canceled the transaction:
SQL Error : ERROR: canceling statement due to user request
Where: SQL statement "SELECT 1 FROM ONLY "schema"."table" x WHERE $1 OPERATOR(pg_catalog.=) "id_model" AND $2 OPERATOR(pg_catalog.=) "id_property1" AND $3 OPERATOR(pg_catalog.=) "id_property2" FOR KEY SHARE OF x"
`id_model` is the WHERE clause for all the DELETEs in question. The query mentioned in the message is definitely not code written by me, anywhere in any function/procedure/trigger I can think of.
I took from this that a missing index on `table` with columns `id_property1` and `id_property2` was the problem, and apparently it was.
Are there any logs explaining what is happening "under the hood" when the server is processing several statements queued in the transaction where I could have more easily found that statement?
There are several ON DELETE foreign key constraints configured in the model, so I guess that when indexes are missing, the server takes a while to find its way around?
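Those `SELECT 1 ... FOR KEY SHARE` statements are PostgreSQL's internal foreign-key checks, and the stock way to surface them with their plans is the `auto_explain` contrib module with nested-statement logging enabled; a sketch:
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '250ms';
SET auto_explain.log_analyze = on;
SET auto_explain.log_nested_statements = on;  -- include statements fired by triggers / FK checks
BEGIN;
-- ... the DELETEs ...
COMMIT;
-- slow internal statements now show up in the server log with their plans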
Francis Ducharme
(123 rep)
Nov 1, 2023, 09:11 PM
• Last activity: Nov 2, 2023, 04:13 PM
4
votes
2
answers
1374
views
mysqldump select queries appearing in slow query log after upgrade
We recently upgraded from MariaDB 5.5 to MariaDB 10.5
Since doing so, we have a couple of `select` queries appearing in the slow query log when we back up databases using mysqldump.
One of the tables is an InnoDB table with 125,000 records. The other is a MyISAM table with 220,000 records.
I guess this might not seem like any great concern; however, for many years we never had anything in the slow query log related to mysqldump, so I am wondering if we should be worried. Did the behaviour of mysqldump change between versions 5.5 and 10.5, or could there be some other local setting that we perhaps had optimised previously, that now isn't?
Our slow query time is set to 5 seconds, which is what it was previously. The offending queries are taking just under 6 seconds.
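mysqldump reads each table with a full scan of the form `SELECT /*!40001 SQL_NO_CACHE */ * FROM tbl`, so a hedged first check is to confirm the new entries really are the dump's scans (log path is an assumption):
grep -B3 'SQL_NO_CACHE' /var/log/mysql/slow.log | grep 'Query_time'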
MrCarrot
(141 rep)
Feb 2, 2021, 07:45 PM
• Last activity: Nov 1, 2023, 01:50 PM
0
votes
2
answers
102
views
Why is this Postgres query so slow?
I'm trying to debug the slow query below but I'm struggling to understand why it is slow. I can see that both plan and subplan do an index scan, including an "Index only scan" for the subplan so both should be fast. Yet it's taking 7 seconds for this particular query.
Any idea from this EXPLAIN output where the problem might be?
select "id", "item_id", "item_name", "type", "updated_time" from "changes"
where (
((type = 1 OR type = 3) AND user_id = 'USER_ID')
or type = 2 AND item_id IN (SELECT item_id FROM user_items WHERE user_id = 'USER_ID')
) and "counter" > '35885954' order by "counter" asc limit 100;
Limit (cost=8409.70..8553.44 rows=100 width=101) (actual time=7514.730..7514.731 rows=0 loops=1)
-> Index Scan using changes_pkey on changes (cost=8409.70..2387708.44 rows=1655325 width=101) (actual time=7514.728..7514.729 rows=0 loops=1)
Index Cond: (counter > 35885954)
Filter: ((((type = 1) OR (type = 3)) AND ((user_id)::text = 'USER_ID'::text)) OR ((type = 2) AND (hashed SubPlan 1)))
Rows Removed by Filter: 11378536
SubPlan 1
-> Index Only Scan using user_items_user_id_item_id_unique on user_items (cost=0.56..8401.57 rows=3030 width=24) (actual time=0.085..3.011 rows=3589 loops=1)
Index Cond: (user_id = 'USER_ID'::text)
Heap Fetches: 2053
Planning Time: 0.245 ms
Execution Time: 7514.781 ms
(11 rows)
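One common rewrite for this shape — a sketch, not something from the post — is to split the `OR` into two branches so each can use its own selective index (e.g. one on `(user_id, counter)`) and stop after 100 rows, instead of walking the `counter` index and discarding 11 million rows:
(SELECT id, item_id, item_name, type, updated_time, counter
   FROM changes
  WHERE type IN (1, 3) AND user_id = 'USER_ID' AND counter > 35885954
  ORDER BY counter LIMIT 100)
UNION ALL
(SELECT id, item_id, item_name, type, updated_time, counter
   FROM changes
  WHERE type = 2 AND counter > 35885954
    AND item_id IN (SELECT item_id FROM user_items WHERE user_id = 'USER_ID')
  ORDER BY counter LIMIT 100)
ORDER BY counter
LIMIT 100;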
laurent
(191 rep)
Oct 16, 2023, 06:16 PM
• Last activity: Oct 18, 2023, 06:41 AM
0
votes
1
answers
230
views
Set log-slow-verbosity=explain was not working
I am trying to enable the slow query log with extended verbosity options in MariaDB 10.6.*, but I am unable to set the `explain` option for `log-slow-verbosity` in a MySQL session:
https://mariadb.com/kb/en/explain-in-the-slow-query-log/
Other options such as `query_plan` and `innodb` can be set in the session:
> set log_slow_verbosity=explain;
ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'explain' at line 1
> set log_slow_verbosity=query_plan;
Query OK, 0 rows affected (0.009 sec)
> set log_slow_verbosity=innodb;
Query OK, 0 rows affected (0.038 sec)
> show variables like '%slow%';
+------------------------------+------------------------------------+
| Variable_name | Value |
+------------------------------+------------------------------------+
| ::: | |
| log_slow_verbosity | query_plan
| slow_query_log | ON
| slow_query_log_file | /var/opt/na/log/mysql-1/slow-query.log
+------------------------------+------------------------------------+
Meanwhile, the same thing works fine when the variable is set in the my.cnf file:
> cat /etc/na/my-1.cnf | grep log-slow
log-slow-verbosity=query_plan,explain
> show variables like '%log_slow%';
+------------------------------+------------------------------------+
| Variable_name | Value |
+------------------------------+------------------------------------+
| log_slow_verbosity | query_plan,explain |
+------------------------------+--------------------------------------------------------------------------------------------------------------------------------------+
6 rows in set (0.001 sec)
Is this option not supported in 10.6 (something not mentioned in the docs), or am I missing anything?
https://mariadb.com/kb/en/slow-query-log-extended-statistics/
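A hedged way to check which values your exact build accepts for the variable is MariaDB's `information_schema.SYSTEM_VARIABLES` view:
SELECT VARIABLE_NAME, ENUM_VALUE_LIST
FROM information_schema.SYSTEM_VARIABLES
WHERE VARIABLE_NAME = 'LOG_SLOW_VERBOSITY';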
ragul rangarajan
(111 rep)
Jan 20, 2023, 01:32 PM
• Last activity: Feb 15, 2023, 07:23 AM
0
votes
0
answers
135
views
MariaDB SELECT query keeps stuck in one system but not in another
This is the reduced query showing the problem:
SELECT
t.*
FROM
transactions t
LEFT JOIN assignments a ON a.transaction_id = t.id
LEFT JOIN users u2 ON u2.id = a.user_id
WHERE (
u2.operational_type IS NULL
);
On the production server (Ubuntu 21.04; mariadb Ver 15.1 Distrib 10.5.13-MariaDB) it executes in 0.0179 seconds. EXPLAIN on the production server returns:
+------+-------------+-------+--------+----------------+----------------+---------+------------------+-------+-------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+--------+----------------+----------------+---------+------------------+-------+-------------------------+
| 1 | SIMPLE | t | ALL | NULL | NULL | NULL | NULL | 56542 | |
| 1 | SIMPLE | a | ref | transaction_id | transaction_id | 4 | dbname.t.id | 1 | |
| 1 | SIMPLE | u2 | eq_ref | PRIMARY | PRIMARY | 4 | dbname.a.user_id | 1 | Using where; Not exists |
+------+-------------+-------+--------+----------------+----------------+---------+------------------+-------+-------------------------+
However, on the development server (Ubuntu 22.04.1 LTS; mariadb Ver 15.1 Distrib 10.6.7-MariaDB) it gets stuck. In the processlist its state is "Sending data", and it has been like this for more than 30 minutes. Its EXPLAIN is as follows:
+------+-------------+-------+--------+----------------+---------+---------+------------------+-------+-------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+--------+----------------+---------+---------+------------------+-------+-------------------------------------------------+
| 1 | SIMPLE | t | ALL | NULL | NULL | NULL | NULL | 48735 | |
| 1 | SIMPLE | a | ALL | transaction_id | NULL | NULL | NULL | 1 | Using where; Using join buffer (flat, BNL join) |
| 1 | SIMPLE | u2 | eq_ref | PRIMARY | PRIMARY | 4 | dbname.a.user_id | 1 | Using where; Not exists |
+------+-------------+-------+--------+----------------+---------+---------+------------------+-------+-------------------------------------------------+
What is the cause? How to troubleshoot?
EDIT: `SHOW TABLE STATUS` for the production server:
+-----------------------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+--------------------+----------+----------------+---------+------------------+-----------+
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment | Max_index_length | Temporary |
+-----------------------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+---------------------+------------+--------------------+----------+----------------+---------+------------------+-----------+
| assignments | InnoDB | 10 | Dynamic | 50175 | 52 | 2637824 | 0 | 3178496 | 4194304 | 51273 | 2022-07-08 08:18:28 | 2022-08-29 12:40:28 | NULL | utf8mb4_spanish_ci | NULL | | | 0 | N |
| transactions | InnoDB | 10 | Dynamic | 37315 | 1505 | 56180736 | 0 | 3719168 | 7340032 | 41549 | 2022-07-13 11:18:23 | 2022-09-03 15:34:44 | NULL | utf8mb4_spanish_ci | NULL | | | 0 | N |
| users | InnoDB | 10 | Dynamic | 243 | 741 | 180224 | 0 | 81920 | 0 | 267 | 2022-08-20 12:33:16 | 2022-09-08 11:12:24 | NULL | utf8mb4_spanish_ci | NULL | | | 0 | N
`SHOW TABLE STATUS` for the development server:
+-----------------------+--------+---------+------------+-------+----------------+-------------+-----------------+--------------+-----------+----------------+---------------------+-------------+------------+--------------------+----------+----------------+---------+------------------+-----------+
| Name | Engine | Version | Row_format | Rows | Avg_row_length | Data_length | Max_data_length | Index_length | Data_free | Auto_increment | Create_time | Update_time | Check_time | Collation | Checksum | Create_options | Comment | Max_index_length | Temporary |
| assignments | InnoDB | 10 | Dynamic | 0 | 0 | 16384 | 0 | 32768 | 4194304 | 75369 | 2022-09-11 08:05:19 | NULL | NULL | utf8mb4_spanish_ci | NULL | | | 0 | N |
| transactions | InnoDB | 10 | Dynamic | 48735 | 1497 | 72974336 | 0 | 4767744 | 6291456 | 62236 | 2022-09-11 08:05:15 | NULL | NULL | utf8mb4_spanish_ci | NULL | | | 0 | N |
| users | InnoDB | 10 | Dynamic | 275 | 714 | 196608 | 0 | 98304 | 0 | 296 | 2022-09-11 12:09:11 | NULL | NULL | utf8mb4_spanish_ci | NULL | | | 0 | N |
I don't understand why the assignments table shows 0 rows. `SELECT COUNT(*) FROM assignments WHERE 1` returns 74061.
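The 0-row estimate for assignments on the development server points at InnoDB statistics that were never rebuilt after the import, which would explain why the optimizer skips the `transaction_id` index and falls back to the BNL join. A sketch of the first thing to try:
ANALYZE TABLE assignments, transactions, users;
-- then re-check whether the indexed plan returns:
EXPLAIN SELECT t.*
FROM transactions t
LEFT JOIN assignments a ON a.transaction_id = t.id
LEFT JOIN users u2 ON u2.id = a.user_id
WHERE u2.operational_type IS NULL;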
NeDark
(103 rep)
Sep 11, 2022, 04:01 PM
• Last activity: Sep 12, 2022, 11:49 AM
0
votes
2
answers
114
views
Mysql server running as replica doesn't record the slow queries in the slow query log, even though it's enabled
I am running a MySQL 8.0.29 server serving as a replica for another server on an Ubuntu 20.04.4 LTS box.
'slow_query_log','ON'
'slow_query_log_file','/var/log/mysql/mysql-slow.log'
'long_query_time', '10.000000'
I ran a query that took approx. 20s, however, the query doesn't show up in mysql-slow.log. The file is empty (size 0).
Any ideas where I can look to find out why the query is not recorded in the slow log? I checked the error logs, and syslog and there is nothing related to this. File permissions are ok as well.
Thanks
Update: it worked for a while after flushing the slow log, then it stopped logging again. Not sure what is going on...
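Two hedged checks worth running on a replica: whether the slow log is actually pointed at a file at all, and whether replicated statements are being skipped (they are not written to the slow log unless the replica-statements switch is on):
SHOW GLOBAL VARIABLES LIKE 'log_output';                  -- must include FILE for the file to be written
SHOW GLOBAL VARIABLES LIKE 'log_slow_replica_statements'; -- log_slow_slave_statements before 8.0.26
SET GLOBAL log_slow_replica_statements = ON;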
boggy
(427 rep)
Jun 14, 2022, 09:54 PM
• Last activity: Jun 21, 2022, 05:15 AM
3
votes
3
answers
6210
views
SQL Server freezes (because of application), need logging
We have an application running ontop of our SQL Server 2005 instance and a couple of times a week this application (unannounced) will cause the SQL Server to freeze. I can't even restart the SQL Server service; I have to restart the entire machine.
Needless to say, I can't open a query window to run `sp_who2` to find the cause. It could be days before the issue shows up again. Is there any type of logging I can put in place to better track down what is causing the SQL Server to freeze up?
`exec xp_readerrorlog` only shows me what has happened after the restart, so it isn't much help.
At the time it freezes, CPU is pegged at 90-97% and memory is maxed at 8 GB. Server has 12 GB but the max is set to 8192 for SQL Server.
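One option worth setting up in advance, sketched here: the Dedicated Administrator Connection (DAC) runs on its own reserved scheduler, so it often still responds when all normal connections hang. Connect with `sqlcmd -A` (or `admin:servername` in SSMS) and inspect the live requests:
SELECT session_id, status, command, wait_type,
       blocking_session_id, total_elapsed_time
FROM sys.dm_exec_requests
WHERE session_id > 50;  -- skip system sessions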
BamBamBeano
(231 rep)
Jul 1, 2013, 08:51 PM
• Last activity: Apr 8, 2022, 08:01 AM
2
votes
2
answers
1273
views
Slow Query logging everything
I have a MySQL instance that is logging slow queries. I have it set to 1 second, but I am getting back lots of entries like this (example):
Query_time: 0.000237 Lock_time: 0.000064 Rows_sent: 0 Rows_examined: 109
On another server I get these:
Query_time: 0 Lock_time: 0 Rows_sent: 0 Rows_examined: 0
Either way, both servers seem to be logging other queries... or am I reading something wrong?
set-variable = log-slow-queries="/tmp/mysql-slow-query.log"
set-variable = long_query_time=1
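Sub-second entries like these usually mean something other than `long_query_time` is triggering the logging; a hedged check for the usual suspect:
SHOW GLOBAL VARIABLES LIKE 'log_queries_not_using_indexes';
-- if ON, every query that uses no index is logged regardless of duration;
-- disable it in my.cnf (or with SET GLOBAL on versions where it is dynamic)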
mike628
(171 rep)
Jul 28, 2011, 06:16 PM
• Last activity: Dec 31, 2021, 12:16 AM
14
votes
3
answers
39819
views
How do I output MySQL logs to syslog?
I want to use syslog for logging MySQL(5.1.41) on Ubuntu(10.04 LTS).
I found information on sending the error log to syslog:
[mysqld_safe]
syslog
But I want to use syslog for logging general logs and slow query logs.
Please advise me on how to write the config file.
I could not find how to do that in the reference manual.
> http://dev.mysql.com/doc/refman/5.1/en/log-destinations.html
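In 5.1 only the error log can be sent to syslog (via mysqld_safe, as above); the general and slow logs can only target FILE or TABLE. A sketch with assumed paths is to write them to files and ship those to syslog externally (e.g. with rsyslog's imfile module):
[mysqld]
log_output          = FILE
general_log         = 1
general_log_file    = /var/log/mysql/general.log
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log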
inohiro
(345 rep)
Jun 29, 2011, 06:30 AM
• Last activity: Dec 31, 2021, 12:04 AM
12
votes
3
answers
34289
views
MySQL replication - slave is continuously lagging behind master
I am using MySQL-5.1.50 with a Master-slave replication setup.
Most of the time the slave is lagging behind the master.
When I run `show processlist;`, there is no query that's taking a long time. I enabled the `slow_log` as well; however, it does not find any slow running query.
The slave is continuously giving alerts that replication is seconds behind the master. Sometimes, the lag time increases.
How do I diagnose the cause of the problem?
I need urgent help, since this problem has persisted for the last 20 days.
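A starting point for diagnosis, sketched below: the replica's status output distinguishes a lagging SQL thread (single-threaded in 5.1, so one long statement or large transaction stalls everything behind it) from an I/O or network problem:
SHOW SLAVE STATUS\G
-- compare Read_Master_Log_Pos with Exec_Master_Log_Pos (is the SQL thread falling behind?)
-- watch Seconds_Behind_Master over time, and check Slave_SQL_Running and Last_Error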
adeela sahar
(121 rep)
Sep 24, 2012, 06:55 AM
• Last activity: Dec 29, 2021, 06:20 PM
7
votes
3
answers
2896
views
Replay (re-execute) MySQL SELECT queries from a log file
MySQL Benchmarking
===================
I would like to evaluate the performance of a different MySQL instance by re-executing real queries from log files. I am aware of tools like mysqlslap, that produces random and autogenerated queries, but I would rather like to test the system with realistic queries.
In order to benchmark the new system, I would ideally collect all queries either using the slow log or the general log and then replay the queries to the new system.
It should be possible to filter queries; for instance, I would like to replay SELECT statements only. I am wondering if there exists an actively maintained procedure or a best practice for achieving this warm-up or benchmark functionality.
Requirements
--------------
- Replay (re-execute) real queries collected from a running MySQL instance
- Filter statements by statement type (INSERT, UPDATE, DELETE) and DML
- Get statistics (how long did the query run before, after the migration etc)
Problem: Tools are outdated
-------------------------
Percona offered several tools that seemed to be designed for that purpose. All of them have been removed from the current Percona Toolkit. These tools were:
- pt-log-player
- The replay option for pt-query-digest
- query-playback
Potential solutions
----------------------
Using a Debian Wheezy docker container gives easy access to an older Percona Toolkit version where the pt-log-player is available. Running a container with
docker run -it --network="host" --name wheezy debian:wheezy /bin/bash
and installing the legacy versions inside works:
apt-get update
apt-get install percona-toolkit mysql-client
I also tried percona-playback, but it does not seem to be very actively maintained either. That would be a great tool, but it is hardly documented and I could not find a method for filtering SELECT statements only, as I do not want to run other DML statements. I only saw that Percona Server offers a flag for enabling a read only option, but this is not available with vanilla MySQL or AWS RDS.
However I would rather like to use a procedure that does not rely on unmaintained tools.
Imagined Workflow
------------------------
1. Enable slow query log or general log
2. Parse and analyze log file
3. Filter relevant queries
4. Re-execute queries
5. Compare results
Are there any tools with which I could achieve this? What is the best way to filter the slow query log? How do I replay the queries sanely?
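For the filtering step at least, pt-query-digest (which is still maintained) can do this without the removed tools: `--filter` takes a Perl expression over each parsed event, and `--output slowlog` re-emits the surviving events in slow-log format, ready to be inspected or replayed. A sketch with assumed paths:
pt-query-digest --no-report --output slowlog \
  --filter '$event->{arg} =~ m/^\s*select/i' \
  /var/log/mysql/slow.log > selects-only.log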
Stefan
(231 rep)
Sep 27, 2018, 02:32 PM
• Last activity: Oct 22, 2021, 07:03 PM
0
votes
2
answers
30
views
MySQL slowlog optimising for Having and Group By Keywords
I have the below entry in the slow log. I just wonder how I can optimise the query; it looks like such a simple query that I don't see any way of optimising it.
# Query_time: 32.797535 Lock_time: 0.000000 Rows_sent: 0 Rows_examined: 512871
use dbname;
SET timestamp=1629724372;
SELECT *
FROM screening
JOIN referrals ON referrals.a_id = screening.screen_id
JOIN referrals_file ON referrals_file.a_id = referrals.a_id
GROUP BY screening.screen_id
HAVING (MONTH(MIN(referrals_file.datereceived)) = 12 AND YEAR(MIN(referrals_file.datereceived)) = 2021);
(Please mind that I am not a database administrator, just a developer, so my knowledge is limited.)
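The statement itself has little to rewrite, so the usual lever — an assumption on my part, not something taken from the log — is a composite index that lets `MIN(referrals_file.datereceived)` be read straight off the index for each group instead of examining ~513k rows:
ALTER TABLE referrals_file ADD INDEX idx_aid_datereceived (a_id, datereceived);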
user1179459
(143 rep)
Aug 25, 2021, 07:00 AM
• Last activity: Aug 25, 2021, 11:19 PM
0
votes
1
answers
242
views
queryHash field not available in mongos slow query logs
MongoDB v4.2 introduced queryHash in slow query logs to identify slow queries with the same query shape. While this `queryHash` is available in the slow query entries in the `mongod` logs (shard server), it is not present in any of the slow query logs on my `mongos` server.
**Mongod log**
{"t":{"$date":"2021-01-09T01:45:27.821+00:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn143447","msg":"Slow query","attr":{"type":"command","ns":"xxxx.collection","command":{"find":"collection","filter":{"entityId":"xxxx.ru"},"projection":{"crawlerDataId":0},"sort":{"sourceUpdatedAt":-1},"readConcern":{"provenance":"implicitDefault"},"limit":100,"runtimeConstants":{"localNow":{"$date":"2021-01-09T01:45:27.689Z"},"clusterTime":{"$timestamp":{"t":1610156727,"i":93}}},"_use44SortKeys":true,"shardVersion":[{"$timestamp":{"t":89277,"i":1}},{"$oid":"5cb57c9eafd72c1e06b5c433"}],"clientOperationKey":{"$uuid":"56e5a665-9162-4e1d-a368-165e592d59de"},"lsid":{"id":{"$uuid":"a15a9b9c-bb4f-4d1f-8770-08b64b28ad2b"},"uid":{"$binary":{"base64":"6ikS+z/HSNSwWB1ARioMqqzdlxOaNtFe+gViCoYwELs=","subType":"0"}}},"$clusterTime":{"clusterTime":{"$timestamp":{"t":1610156727,"i":93}},"signature":{"hash":{"$binary":{"base64":"7GLR2+4NQu2cWHpgqx31zmi28NE=","subType":"0"}},"keyId":6875251194940358707}},"$audit":{"$impersonatedUsers":[{"user":"xxxx","db":"xxxx"}],"$impersonatedRoles":[{"role":"readWrite","db":"xxxx"}]},"$client":{"driver":{"name":"mongo-java-driver","version":"3.9.1"},"os":{"type":"Linux","name":"Linux","architecture":"amd64","version":"5.4.0-1021-aws"},"platform":"Java/Private Build/1.8.0_265-8u265-b01-0ubuntu2~20.04-b01","mongos":{"host":"xxxx:27017","client":"xxxxx","version":"4.4.1"}},"$configServerState":{"opTime":{"ts":{"$timestamp":{"t":1610156725,"i":79}},"t":12}},"$db":"xxxx"},"planSummary":"IXSCAN { entityId: \"hashed\" }","keysExamined":73,"docsExamined":73,"hasSortStage":true,"cursorExhausted":true,"numYields":8,"nreturned":73,"queryHash":"F305D754","planCacheKey":"42D383C3","reslen":397160,"locks":{"ReplicationStateTransition":{"acquireCount":{"w":9}},"Global":{"acquireCount":{"r":9}},"Database":{"acquireCount":{"r":9}},"Collection":{"acquireCount":{"r":9}},"Mutex":{"acquireCount":{"r":3}}},"readConcern":{},"storage":{"data":{"bytesRead":5215788,"timeReadingMicros":129685}},"protocol":"op_msg","durationMillis":132}}
**Mongos log**
{"t":{"$date":"2020-12-31T02:19:33.680+00:00"},"s":"I", "c":"COMMAND", "id":51803, "ctx":"conn128","msg":"Slow query","attr":{"type":"command","ns":"xxxx.companyPriority","command":{"find":"companyPriority","filter":{"_id":{"$in":[{"$oid":"5319df10e4b0f7e16602a313"}]}},"$db":"xxxx","$clusterTime":{"clusterTime":{"$timestamp":{"t":1609381171,"i":2}},"signature":{"hash":{"$binary":{"base64":"C2cNRe4vAhiBs97T6VYXIYYU1Ck=","subType":"0"}},"keyId":6877905033757720578}},"lsid":{"id":{"$uuid":"ba72a15f-2568-43e9-8c9a-abd7bd2aa6be"}}},"nShards":1,"cursorExhausted":true,"numYields":0,"nreturned":1,"reslen":278,"protocol":"op_msg","durationMillis":2007}}
"queryHash":"F305D754"
is present in mongod
log and no queryHash
key in mongos
The documentation nowhere mentions about the queryHash
availability (or the absence of it) on mongos
server logs separately. Is this expected behaviour or some configuration changes are required to get queryHash
in mongos
logs?
Note: mongos
and mongod
both are running on version 4.4.1
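A hedged workaround for correlating shapes from the mongos side: explain output on 4.2+ also carries `queryHash`, and when run through a mongos it appears inside the per-shard planner sections, e.g.:
// drill into the shard sub-plans for the queryHash field
db.companyPriority
  .find({ _id: ObjectId("5319df10e4b0f7e16602a313") })
  .explain().queryPlanner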
ankshah
(143 rep)
Jan 18, 2021, 09:21 AM
• Last activity: Feb 20, 2021, 11:09 AM
3
votes
2
answers
1047
views
How To Exclude Queries Returned From mysqldumpslow Command
I would like to have `mysqldumpslow` ignore the queries in the mysql-slow log that originate from running `mysqldump`, by using `SQL_NO_CACHE` as the trigger word.
My experience with regexes is limited at best, and I'm unable to find any examples in the docs.
I've tried a few variations of the following:
mysqldumpslow -g '!(SQL_NO_CACHE)' /path/to/my/log
mysqldumpslow -g '^!(SQL_NO_CACHE)$' /path/to/my/log
... but always get back:
> Died at /usr/bin/mysqldumpslow line 162
What is wrong with my regex?
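`mysqldumpslow`'s `-g` is applied as a plain pattern match with no built-in negation, so the exclusion has to happen elsewhere. A hedged alternative is pt-query-digest, whose `--filter` accepts a full Perl expression and therefore supports exclusion directly (same log path as above):
pt-query-digest --filter '$event->{arg} !~ m/SQL_NO_CACHE/' /path/to/my/log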
KDrewiske
(131 rep)
Feb 9, 2015, 08:21 PM
• Last activity: Feb 19, 2021, 02:49 PM