Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
2
votes
1
answers
2204
views
DROP TABLE IF EXISTS hangs on Waiting for table metadata lock
I'm using a dockerized MySQL 5.7.13 as part of an integration test of an application. The way the system works is as follows:
1. I start up the main application. It opens up a connection to MySQL, but does not perform any queries.
2. I start the integration test setup and drop the old table if it was left behind by the previous run:
DROP TABLE IF EXISTS unit_device
3. I then go on and create the table like this:
`CREATE TABLE IF NOT EXISTS unit_device (
id CHAR(36) NOT NULL,
unit_id CHAR(36),
device_id CHAR(36),
deleted_at DATETIME,
PRIMARY KEY (id)
)`
4. I then insert some values into the table and trigger work in the main application which makes one SELECT:
SELECT unit_id, device_id FROM unit_device WHERE deleted_at IS NULL
5. Then I drop the table:
DROP TABLE IF EXISTS unit_device
However, when I try to run this query, it hangs seemingly forever for some reason. If I close the connection from the main application, the query resumes. One thing that seems suspicious is that once the main application has made its select, I can see the following information in the output of `show engine innodb status`: `1 read views open inside InnoDB`
I must be doing something seriously wrong here, but after hours of research I was still unable to figure out what's causing the hang, or whether the problem is in the test or the main application.
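A likely culprit here is the main application's idle session: a transaction that has executed even a single consistent-read SELECT holds an open read view and metadata locks on every table it touched until it commits, and `DROP TABLE` must wait on those. On MySQL 5.7 the holder can be tracked down roughly like this (a sketch; it assumes the bundled `sys` schema is installed, and the thread id in the comment is made up):

```sql
-- who is waiting on whom for a table metadata lock
SELECT * FROM sys.schema_table_lock_waits;

-- long-running / idle transactions that still hold a read view
SELECT trx_mysql_thread_id, trx_started, trx_state
FROM information_schema.innodb_trx;

-- last resort: kill the idle holder (42 is a hypothetical thread id)
-- KILL 42;
```

The cleaner fix is for the main application to COMMIT (or run with autocommit enabled) so its read view and metadata locks are released between queries.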
JohnEye
(193 rep)
Nov 9, 2018, 04:50 PM
• Last activity: Jul 28, 2025, 12:06 AM
1
votes
1
answers
204
views
Locking a table and disabling the indexes hangs mysql - 100% CPU
CREATE TABLE `runs` (
  `RunID` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`RunID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
--
-- Dumping data for table `runs`
--
-- WHERE: runid=430864838
LOCK TABLES `runs` WRITE;
ALTER TABLE `runs` DISABLE KEYS;
INSERT INTO `runs` VALUES (430864838);
As soon as `ALTER TABLE runs DISABLE KEYS;` runs, it seems to hang the database - a core is at 100% usage and it will not respond.
What is the cause of this behavior?
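One detail worth checking in this setup: `ALTER TABLE ... DISABLE KEYS` only affects MyISAM's non-unique indexes; on an InnoDB table like this one it should be a fast no-op that merely raises a note. A quick sanity check (a sketch):

```sql
LOCK TABLES `runs` WRITE;
ALTER TABLE `runs` DISABLE KEYS;
SHOW WARNINGS;   -- on InnoDB, expect a note saying the storage engine
                 -- doesn't have this option
UNLOCK TABLES;
```

If even this minimal sequence spins a core, the problem lies in the server rather than in the schema.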
Chris Stryczynski
(173 rep)
Feb 8, 2018, 10:56 AM
• Last activity: Jul 20, 2025, 11:00 PM
0
votes
1
answers
147
views
Is it possible to READ LOCK all databases and then UNLOCK one by one?
I need to take a backup of databases from a live server for replication. There could be 2-5 databases, each 1-5 GB in size. The master is a live server, so I cannot stop the mysql service. Since the data is big, backing it up in read-only mode could be dangerous too. I'm wondering, is it possible to:
**1. READ LOCK all databases and stop bin-log writing**
**2. Take backup of a database and UNLOCK only that one.**
MySQL Version : 5.1.61
Storage Engine : MyISAM
Thanks in advance.
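For context, there is no per-database read lock to release one at a time: `FLUSH TABLES WITH READ LOCK` is global, and a single `UNLOCK TABLES` drops everything it acquired at once. A common MyISAM pattern instead holds the global lock only long enough to record the binlog position (a sketch; database names are assumed):

```sql
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- note File/Position for CHANGE MASTER TO
UNLOCK TABLES;
-- then dump each database with per-table locks:
-- mysqldump --lock-tables db1 > db1.sql
```

Alternatively, `mysqldump --master-data --lock-all-tables --all-databases` performs the lock-and-record step in one go, at the cost of holding the read lock for the whole dump.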
Hasan Rumman
(101 rep)
May 14, 2018, 02:33 PM
• Last activity: Jul 19, 2025, 05:03 PM
3
votes
1
answers
606
views
Write-lock a whole table during transaction
I need to perform a delicate operation on my table in which I will solely `insert`, `delete` and `select` upon all of my rows, and no God may interfere with the table during this operation: the table will be in an inconsistent state and any concurrent MySQL session shall not be allowed to modify it until `commit`-ed.
The use of `SELECT ... FOR UPDATE | LOCK IN SHARE MODE` is not suitable because, while it may potentially be used to lock all the rows in the table, it won't prevent the insertion of further rows by a concurrent session. Basically, I need to `LOCK TABLES my_table WRITE` within the body of a transaction.
The table contains about 20,000 rows, and a master-slave, mixed-format replication is in place over a slow connection, so for any workaround I'd prefer to avoid using temporary tables, which may faint the slave, and the amount of data dumped into the binlog should ideally be minimized.
The engine is InnoDB on MySQL 5.6, for both master and slave.
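For the record, the manual's supported way to combine `LOCK TABLES` with a transaction is to disable autocommit rather than issue `START TRANSACTION` (which would implicitly release the table locks), roughly:

```sql
SET autocommit = 0;           -- START TRANSACTION would release table locks
LOCK TABLES my_table WRITE;   -- blocks reads and writes from other sessions
-- ... the delicate INSERT / DELETE / SELECT work ...
COMMIT;                       -- make the changes durable first,
UNLOCK TABLES;                -- then release the table lock
```

The COMMIT-before-UNLOCK order matters: releasing the lock first would let other sessions see the table mid-operation.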
davide
(131 rep)
Apr 27, 2015, 04:19 AM
• Last activity: Jul 13, 2025, 11:09 PM
0
votes
1
answers
175
views
Advice for applying locks in my data processing pipeline
I currently have a Python program that enters rows into a Postgres table that essentially works as a list of data I need to process. These processes create on-disk files and trigger other behavior so I only want it to run once for each row.
I then have another script that takes the rows from that table then begins to do the processing. So, for example, there might be 100 rows and each row might take 10-20 minutes to complete and each produces a few output files.
I am currently running into the problem where I can only run this script one at a time, for fear that running two in parallel might end up with them processing the same data twice.
If I create a boolean field that I flip within the application when it's 'busy', I fear having a stale lock due to an abruptly killed process that doesn't end gracefully. If I use locks as built into Postgres, it seems they disappear upon the connection/session ending; since I'm on an unstable connection, I'm not quite sure what the behavior would be or how I can get the behavior I want. Given these are 10-20 minute processes, I foresee the connection being lost within that time frame and thus the lock being lost. Thanks for any advice on where to go. I'm using a Python library called psycopg2 to connect to the Postgres database.
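One pattern that fits this failure mode (PostgreSQL 9.5+) is claiming rows with `FOR UPDATE SKIP LOCKED`: the claim is an ordinary row lock tied to the worker's transaction, so a killed process or dropped connection releases it automatically and no stale 'busy' flag is left behind. A sketch with hypothetical table and column names:

```sql
BEGIN;
-- each worker grabs a different unprocessed row; rows locked by other
-- workers are skipped, not waited on
SELECT id, payload
FROM jobs
WHERE processed = false
ORDER BY id
LIMIT 1
FOR UPDATE SKIP LOCKED;
-- ... run the 10-20 minute job for the claimed id ...
UPDATE jobs SET processed = true WHERE id = 123;  -- 123 = the claimed id
COMMIT;  -- a crash before COMMIT releases the row instead of marking it done
```

The trade-off is a transaction held open for the duration of the job; if that is unacceptable, a `claimed_at` timestamp plus a reaper that resets expired claims is the usual alternative.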
Pensw
(1 rep)
May 21, 2022, 06:06 PM
• Last activity: Jul 6, 2025, 08:07 PM
0
votes
1
answers
174
views
why innodb generate table-lock when two sql using different index?
I know the InnoDB engine row-locks by adding locks to index entries. But I don't understand the following scenario.
Prepare SQL:
DROP TABLE IF EXISTS `tbl_order`;
CREATE TABLE tbl_order (
  `order_id` BIGINT NOT NULL,
  `product_name` VARCHAR(16) NOT NULL,
  KEY `order_id_idx`(`order_id`),
  KEY `product_name_idx`(`product_name`)
) Engine=InnoDB CHARSET=utf8mb4;
INSERT INTO tbl_order(`order_id`, `product_name`) VALUES(1, 'prod1'), (2, 'prod2');
Session-1 execute sql:
SET autocommit=0;
SELECT @@autocommit;
SELECT `order_id`, `product_name` FROM tbl_order WHERE order_id=1 FOR UPDATE;
Session-2 execute sql:
SET autocommit=0;
SELECT @@autocommit;
SELECT `order_id`, `product_name` FROM tbl_order WHERE product_name='prod1' FOR UPDATE;
I know session-1 uses the `order_id_idx` index entry and session-2 uses the `product_name_idx` index entry. I don't understand why session-1 blocks session-2.
The GEN_CLUST_INDEX suggests InnoDB uses a table lock. Could anyone help me?
The locks information:

study_20160808
(13 rep)
Nov 3, 2019, 12:47 PM
• Last activity: Jul 3, 2025, 12:07 AM
0
votes
2
answers
718
views
Explicit locks in RedShift
I'm new to AWS Redshift and I wanted to know how I can implement my use-case.
I have a Redshift table which gets a lot of inserts/selects.
In some cases I need to delete some data from that table. I don't want my delete to impact other writes and reads.
Can someone suggest the best locking mode available in Redshift to implement my use case?
I don't want to end up with deadlocks or locks hanging in my application.
I tried without locks and I'm observing the following violation:
Serializable isolation violation on table - 156300, transactions forming the cycle are: 6416378, 6416913 (pid:3770);
Also, I went through the Redshift explicit locks but couldn't find any official documentation for them.
I know that Redshift is a wrapper on PostgreSQL, so can I use that documentation for implementing?
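Redshift does have an explicit `LOCK` command: it takes an exclusive lock on the listed tables until the end of the transaction, which serializes the conflicting writers and avoids the serializable-isolation violation at the cost of some blocking. A minimal sketch (table name assumed):

```sql
BEGIN;
LOCK my_table;                       -- exclusive; released at COMMIT/ROLLBACK
DELETE FROM my_table WHERE col = 1;  -- no other write can interleave with this
COMMIT;
```

Because the lock blocks concurrent writers until commit, it is best kept in short transactions that do only the delete.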
user3865748
(21 rep)
Dec 9, 2019, 12:14 PM
• Last activity: Jul 1, 2025, 05:03 AM
3
votes
1
answers
185
views
Row Locks in Partitioned Aurora MySQL Table
I'm experiencing many row locks in AWS Aurora MySQL:
MySQL version = 8.0
Aurora version = 3.04.0
The table is pretty big, close to 1 billion rows, and is partitioned into 24 partitions based on the created_at column.
In one production issue we found that 26 rows had been locked.
---TRANSACTION 19670234666, ACTIVE 299 sec
53 lock struct(s), heap size 8312, 26 row lock(s), undo log entries 1
MySQL thread id 7015846, OS thread handle 70369603010432, query id 12207362320
IP username
We update only one row of the table using an `id` condition; we do not pass `created_at` in the update query.
What could have been the reason of 26 rows being locked?
DB query
UPDATE OrderEntity e SET e.state = :state, e.metadata= :metadata WHERE e.id = :id
Table Structure
```
id: BIGINT
state: INT
metadata: TEXT
created_at: DATETIME
updated_at: DATETIME
...
Primary Key: id, created_at
```
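One thing to verify: with the primary key `(id, created_at)` and a WHERE clause on `id` alone, the update is a range scan on the PK prefix, and InnoDB may examine (and lock) matching index ranges in each partition it has to check. If the application can supply the partition column, the statement prunes to one partition and should lock far fewer entries. A sketch mirroring the query above (the bind parameters are the application's):

```sql
-- prunes to a single partition and does a point lookup on the full PK
UPDATE OrderEntity e
SET e.state = :state, e.metadata = :metadata
WHERE e.id = :id AND e.created_at = :created_at;
```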
sonus21
(131 rep)
Dec 20, 2023, 08:30 AM
• Last activity: Jul 1, 2025, 04:02 AM
16
votes
2
answers
5086
views
Massive INSERTs blocking SELECTs
I have a problem with a massive amount of INSERTs that are blocking my SELECT operations.
#Schema#
I have a table like this:
CREATE TABLE [InverterData](
[InverterID] [bigint] NOT NULL,
[TimeStamp] [datetime] NOT NULL,
[ValueA] [decimal](18, 2) NULL,
[ValueB] [decimal](18, 2) NULL
CONSTRAINT [PrimaryKey_e149e28f-5754-4229-be01-65fafeebce16] PRIMARY KEY CLUSTERED
(
[TimeStamp] DESC,
[InverterID] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF
, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON
, ALLOW_PAGE_LOCKS = ON)
)
I also have this little helper procedure, that allows me to insert or update (update on conflict) with MERGE command:
CREATE PROCEDURE [InsertOrUpdateInverterData]
@InverterID bigint, @TimeStamp datetime
, @ValueA decimal(18,2), @ValueB decimal(18,2)
AS
BEGIN
MERGE [InverterData] AS TARGET
USING (VALUES (@InverterID, @TimeStamp, @ValueA, @ValueB))
AS SOURCE ([InverterID], [TimeStamp], [ValueA], [ValueB])
ON TARGET.[InverterID] = @InverterID AND TARGET.[TimeStamp] = @TimeStamp
WHEN MATCHED THEN
UPDATE
SET [ValueA] = SOURCE.[ValueA], [ValueB] = SOURCE.[ValueB]
WHEN NOT MATCHED THEN
INSERT ([InverterID], [TimeStamp], [ValueA], [ValueB])
VALUES (SOURCE.[InverterID], SOURCE.[TimeStamp], SOURCE.[ValueA], SOURCE.[ValueB]);
END
#Usage#
I now have service instances running on multiple servers that perform massive updates by calling the [InsertOrUpdateInverterData] procedure rapidly.
There is also a website that does SELECT queries on the [InverterData] table.
#Problem#
If I do SELECT queries on the [InverterData] table they proceed in different timespans, depending on the INSERT usage of my service instances.
If I pause all service instances the SELECT is lightning-fast; if the instances perform rapid inserts the SELECTs get really slow or are even cancelled by a timeout.
#Attempts#
I've done some SELECTs on the [sys.dm_tran_locks] table to find locking processes, like this:
SELECT
tl.request_session_id,
wt.blocking_session_id,
OBJECT_NAME(p.OBJECT_ID) BlockedObjectName,
h1.TEXT AS RequestingText,
h2.TEXT AS BlockingText,
tl.request_mode
FROM sys.dm_tran_locks AS tl
INNER JOIN sys.dm_os_waiting_tasks AS wt ON tl.lock_owner_address = wt.resource_address
INNER JOIN sys.partitions AS p ON p.hobt_id = tl.resource_associated_entity_id
INNER JOIN sys.dm_exec_connections ec1 ON ec1.session_id = tl.request_session_id
INNER JOIN sys.dm_exec_connections ec2 ON ec2.session_id = wt.blocking_session_id
CROSS APPLY sys.dm_exec_sql_text(ec1.most_recent_sql_handle) AS h1
CROSS APPLY sys.dm_exec_sql_text(ec2.most_recent_sql_handle) AS h2
This is the result:

S = Shared. The holding session is granted shared access to the resource.
#Question#
Why are the SELECTs blocked by the [InsertOrUpdateInverterData] procedure that is only using MERGE commands?
Do I have to use some kind of transaction with a defined isolation mode inside of [InsertOrUpdateInverterData]?
##Update 1 (related to question from @Paul)##
Based on MS-SQL Server's internal reporting, [InsertOrUpdateInverterData] has the following statistics:
+ Average CPU-Time: 0.12ms
+ Average Read processes: 5.76 per/s
+ Average Write processes: 0.4 per/s
Based on this, it looks like the MERGE command is mostly busy with read operations that lock the table!(?)
##Update 2 (related to question from @Paul)##
The [InverterData] table has the following storage stats:
+ Data space: 26,901.86 MB
+ Row count: 131,827,749
+ Partitioned: true
+ Partition count: 62
Here is the (almost) complete **sp_WhoIsActive** result set:
###SELECT command###
+ dd hh:mm:ss.mss: 00 00:01:01.930
+ session_id: 73
+ wait_info: (12629ms)LCK_M_S
+ CPU: 198
+ blocking_session_id: **146**
+ reads: 99,368
+ writes: 0
+ status: suspended
+ open_tran_count: 0
###Blocking [InsertOrUpdateInverterData] command###
+ dd hh:mm:ss.mss: 00 00:00:00.330
+ session_id: 146
+ wait_info: NULL
+ CPU: 3,972
+ blocking_session_id: NULL
+ reads: 376,95
+ writes: 126
+ status: sleeping
+ open_tran_count: 1
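One common fix for this pattern on SQL Server is row versioning: with `READ_COMMITTED_SNAPSHOT` enabled, the website's SELECTs read the last committed row version instead of waiting on the `LCK_M_S` shared-lock wait shown above, so rapid MERGE activity no longer stalls them. A sketch (the database name is assumed; run in a quiet window):

```sql
ALTER DATABASE [MyDatabase]
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- rolls back open transactions so the switch applies
```

This trades blocking for tempdb version-store overhead, so it is worth testing under the real insert load first.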
Steffen Mangold
(744 rep)
Jan 8, 2016, 02:04 PM
• Last activity: Jun 30, 2025, 12:40 PM
1
votes
1
answers
179
views
Process possessing lock crashes
I have two machines, `M1` and `M2`, through which I am connecting to a **MySQL** database on machine `M3` using a Perl script (DBI.pm).
So, here is the scenario that I am confused about:
`M1` places a table-level lock and performs some operations. `M2` also does the same. The table-level locking mechanism guarantees that both of them will not be locking the table at the same time.
But what if `M1` has locked the table and, suddenly, after performing half the updates, the machine crashes or the Perl script process crashes: what happens to the lock then? My updates were performed mid-way; do those get rolled back? And what is the best way to deal with these kinds of locks?
ATP
(111 rep)
Jul 19, 2016, 03:50 AM
• Last activity: Jun 29, 2025, 05:07 AM
6
votes
4
answers
1095
views
Does the new locking optimization make batching unnecessary?
Optimized Locking (2025-05-19) makes it sound like batching would be unnecessary if I were to, for example, try to delete 20 thousand rows from a million row table in one statement (DELETE FROM foo WHERE Status = 1). Is that true?
Mark Freeman
(2293 rep)
Jun 23, 2025, 03:31 PM
• Last activity: Jun 25, 2025, 12:18 PM
-1
votes
1
answers
140
views
MySQL 8.0.41 - 83 % Waiting for row lock on AO_319474_QUEUE
Environment
We run three Jira Service Management DC clusters that all share the same topology:
* MySQL 8.0.36 on Ubuntu 22.04 – 32 vCPU, 100 GB RAM, buffer-pool now 80 GB (only ~33 GB in use), NVMe storage.
* Each Jira node holds a 50-connection pool (3 nodes × 50 = 150 sessions per cluster).
* Workload is a steady 500 TPS.
A week ago we removed an unrelated update bottleneck and suddenly discovered a hot-row issue in a Jira table.
- 83 % of total wait time is Waiting for row lock (Percona PMM, 6-hour window).
- Slow-log is 90 % UPDATE AO_319474_QUEUE … WHERE ID = ?.
- SHOW ENGINE INNODB STATUS always shows 50-70 transactions waiting on the same PK row.
- CPU, disk I/O, redo, buffer-pool, latches — all look healthy.
mysql> SELECT ID, NAME
-> FROM AO_319474_QUEUE
-> WHERE ID = 11459545;
+----------+---------------------------------------------+
| ID | NAME |
+----------+---------------------------------------------+
| 11459545 | servicedesk.base.internal.processing.master |
+----------+---------------------------------------------+
Query details
mysql> explain SELECT ID, NAME
-> FROM AO_319474_QUEUE
-> WHERE ID = 11459545;
+----+-------------+-----------------+------------+-------+---------------+---------+---------+-------+------+----------+-------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-----------------+------------+-------+---------------+---------+---------+-------+------+----------+-------+
| 1 | SIMPLE | AO_319474_QUEUE | NULL | const | PRIMARY | PRIMARY | 8 | const | 1 | 100.00 | NULL |
+----+-------------+-----------------+------------+-------+---------------+---------+---------+-------+------+----------+-------+
1 row in set, 1 warning (0.01 sec)
Update Query
update AO_319474_QUEUE
set CLAIMANT_TIME = 1748437310646
where AO_319474_QUEUE.ID = 11459545
and (AO_319474_QUEUE.CLAIMANT = 'c.a.s.plugins.base.internal.events.runner.async.XXX'
and AO_319474_QUEUE.CLAIMANT_TIME >= 1748437010646)
Update Query Details
mysql> explain update AO_319474_QUEUE
-> set CLAIMANT_TIME = 1748437310373
-> where AO_319474_QUEUE.ID = 11459545
-> and (AO_319474_QUEUE.CLAIMANT = 'c.a.s.plugins.base.internal.events.runner.async.XXX'
-> and AO_319474_QUEUE.CLAIMANT_TIME >= 1748437010373);
+----+-------------+-----------------+------------+-------+----------------------------------------+---------+---------+-------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-----------------+------------+-------+----------------------------------------+---------+---------+-------+------+----------+-------------+
| 1 | UPDATE | AO_319474_QUEUE | NULL | range | PRIMARY,index_ao_319474_queue_claimant | PRIMARY | 8 | const | 1 | 100.00 | Using where |
+----+-------------+-----------------+------------+-------+----------------------------------------+---------+---------+-------+------+----------+-------------+
Warnings
mysql> SHOW WARNINGS LIMIT 10;
+-------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Level | Code | Message |
+-------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Note | 1003 | update `scr_518f3268`.`AO_319474_QUEUE` set `scr_518f3268`.`AO_319474_QUEUE`.`CLAIMANT_TIME` = 1748437310373 where ((`scr_518f3268`.`AO_319474_QUEUE`.`CLAIMANT` = 'c.a.s.plugins.base.internal.events.runner.async.XXX') and (`scr_518f3268`.`AO_319474_QUEUE`.`ID` = 11459545) and (`scr_518f3268`.`AO_319474_QUEUE`.`CLAIMANT_TIME` >= 1748437010373)) |
+-------+------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)
So the DB is idle, but everybody’s queuing on one hot row.
Mainly we were looking at stats from `SHOW ENGINE INNODB STATUS\G`.
Already tried / confirmed
- Increased innodb_buffer_pool_size from 16 G → 80 G; buf-pool hit stays 1000/1000, so I/O really is not the issue.
- No I/O or redo backlog.
- Hinted Atlassian support — they’re still digging on the AO side.
**Indexes**
Index Summary
mysql> SELECT
-> INDEX_NAME,
-> COLUMN_NAME,
-> SEQ_IN_INDEX,
-> NON_UNIQUE
-> FROM INFORMATION_SCHEMA.STATISTICS
-> WHERE TABLE_SCHEMA = 'scr_518f3268'
-> AND TABLE_NAME = 'AO_319474_QUEUE'
-> ORDER BY INDEX_NAME, SEQ_IN_INDEX;
+--------------------------------+-------------+--------------+------------+
| INDEX_NAME | COLUMN_NAME | SEQ_IN_INDEX | NON_UNIQUE |
+--------------------------------+-------------+--------------+------------+
| index_ao_319474_queue_claimant | CLAIMANT | 1 | 1 |
| index_ao_319474_queue_topic | TOPIC | 1 | 1 |
| PRIMARY | ID | 1 | 0 |
| U_AO_319474_QUEUE_NAME | NAME | 1 | 0 |
+--------------------------------+-------------+--------------+------------+
4 rows in set (0.00 sec)
Analyse Table
mysql> ANALYZE TABLE AO_319474_QUEUE;
+-----------------------------------------+---------+----------+----------+
| Table | Op | Msg_type | Msg_text |
+-----------------------------------------+---------+----------+----------+
| scr_518f3268.AO_319474_QUEUE | analyze | status | OK |
+-----------------------------------------+---------+----------+----------+
1 row in set (0.00 sec)
**Create table info (SHOW CREATE)**
mysql> SHOW CREATE TABLE AO_319474_QUEUE\G
*************************** 1. row ***************************
Table: AO_319474_QUEUE
Create Table: CREATE TABLE `AO_319474_QUEUE` (
  `CLAIMANT` varchar(127) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL,
  `CLAIMANT_TIME` bigint DEFAULT NULL,
  `CREATED_TIME` bigint NOT NULL,
  `ID` bigint NOT NULL AUTO_INCREMENT,
  `MESSAGE_COUNT` bigint NOT NULL,
  `MODIFIED_TIME` bigint NOT NULL,
  `NAME` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL,
  `PURPOSE` varchar(450) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin NOT NULL,
  `TOPIC` varchar(255) CHARACTER SET utf8mb4 COLLATE utf8mb4_bin DEFAULT NULL,
  PRIMARY KEY (`ID`),
  UNIQUE KEY `U_AO_319474_QUEUE_NAME` (`NAME`),
  KEY `index_ao_319474_queue_topic` (`TOPIC`),
  KEY `index_ao_319474_queue_claimant` (`CLAIMANT`)
) ENGINE=InnoDB AUTO_INCREMENT=12485939 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
1 row in set (0.01 sec)
**Questions**
Mainly interested in ways to resolve or further troubleshoot this.
- Are there any known MySQL 8.0.41+ bugs that make single-row updates stall longer than expected?
- Any ideas how to mitigate the problem? We can't change the query, as it comes from the product (Atlassian Jira Service Management / Data Center).
- Any low-hanging fruit we can try for a quick fix?
- If it's a virtualisation issue, what do we need to ask for / capture when talking to our provider in the DC?
I’m a Java engineer, not a full-time DBA - feel free to point out obvious RTFM gaps.
Attachments - Percona




user340962
(1 rep)
May 27, 2025, 09:00 AM
• Last activity: Jun 24, 2025, 09:07 PM
35
votes
4
answers
5038
views
Is NOLOCK always bad?
I am a Report Developer who wants to make my queries as efficient as possible. I used to work with a DBA who told me - I believe because I was always dealing with reports on a Production Server - to use `NOLOCK` in every single query.
Now, I work with a DBA who has banned `NOLOCK` under any circumstance - even when a report of mine (due to a considerable lack of indexes on a couple of tables) is stopping replication and system updates. In my opinion, in this case, a `NOLOCK` would be a good thing.
Since most of my SQL training has come from various DBAs with very different opinions, I wanted to ask this of a wide variety of DBAs.
DataGirl
(475 rep)
Jan 13, 2012, 06:01 PM
• Last activity: Jun 24, 2025, 07:39 AM
0
votes
1
answers
941
views
MySQL Lock Wait Time Out Exceeded upon Delete
I have a DELETE statement which fails with the below exception:
com.mysql.cj.jdbc.exceptions.MySQLTransactionRollbackException: Lock wait timeout exceeded; try restarting transaction
It waits 50 seconds for the lock and gives up. I wanted to know which SQL statements can lock a table.
LOCK WAIT 6017 lock struct(s), heap size 376, 137376 row lock(s), undo log entries 22011
MySQL thread id 8242, OS thread handle 0x2b019e749700, query id 13811010 10.103.89.37 administrator updating
DELETE from CONFIGSTOREQASTAGINGREL.EVENT_DETAILS where UPDATE_DATE < 20191013 AND COMPONENT_NAME = 'health'
------- TRX HAS BEEN WAITING 25 SEC FOR THIS LOCK TO BE GRANTED:
RECORD LOCKS space id 194 page no 9414 n bits 17 index `GEN_CLUST_INDEX` of table `CONFIGSTOREQASTAGINGREL`.`EVENT_DETAILS` trx id 38176269 lock_mode X waiting
Record lock, heap no 17 PHYSICAL RECORD: n_fields 14; compact format; info bits 0
0: len=6; bufptr=0x2b01d5f29030; hex= 0000001ef5fb; asc ;;
1: len=6; bufptr=0x2b01d5f29036; hex= 0000024674c5; asc Ft ;;
2: len=7; bufptr=0x2b01d5f2903c; hex= af000004900110; asc ;;
3: len=30; bufptr=0x2b01d5f29043; hex= 333532613463313736333863613932393935336262653631646135306137; asc 352a4c17638ca929953bbe61da50a7; (total 32 bytes);
4: len=30; bufptr=0x2b01d5f29063; hex= 316663346166393133353164633731393062363833393134386335326535; asc 1fc4af91351dc7190b6839148c52e5; (total 32 bytes);
5: len=6; bufptr=0x2b01d5f29083; hex= 6865616c7468; asc health;;
6: len=30; bufptr=0x2b01d5f29089; hex= 36436c74773144627644656948474472756e774d4f362020202020202020; asc 6Cltw1DbvDeiHGDrunwMO6 ; (total 32 bytes);
7: len=4; bufptr=0x2b01d5f290a9; hex= 81341726; asc 4 &;;
8: len=4; bufptr=0x2b01d5f290ad; hex= 800003aa; asc ;;
9: len=16; bufptr=0x2b01d5f290b1; hex= 4f494d2d534552564943452020202020; asc OIM-SERVICE ;;
10: len=4; bufptr=0x2b01d5f290c1; hex= 80000000; asc ;;
11: len=0; bufptr=0x2b01d5f290c5; hex= ; asc ;;
12: len=30; bufptr=0x2b01d5f290c5; hex= 436f6e6e656374697669747920746f20536563757265204167656e742069; asc Connectivity to Secure Agent i; (total 168 bytes);
13: len=4; bufptr=0x2b01d5f2916d; hex= 5da441f8; asc ] A ;;
I wanted to know what a record lock is and when it gets taken. I would be very thankful if someone could explain the meaning of the above statements I got from `SHOW ENGINE INNODB STATUS`.
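Independent of what held the lock, a DELETE that builds up 137,376 row locks in one transaction is itself a timeout and deadlock magnet. Deleting in small committed batches keeps each transaction's lock footprint tiny (a sketch; the batch size is arbitrary):

```sql
-- repeat until it affects 0 rows; each pass locks at most ~1000 rows
DELETE FROM CONFIGSTOREQASTAGINGREL.EVENT_DETAILS
WHERE UPDATE_DATE < 20191013 AND COMPONENT_NAME = 'health'
LIMIT 1000;
COMMIT;
```

Also note the lock is on `GEN_CLUST_INDEX`: the table has no primary key, so the DELETE scans the hidden clustered index and locks every row it examines; adding a primary key and an index on `(COMPONENT_NAME, UPDATE_DATE)` would shrink that footprint dramatically.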
user3865748
(21 rep)
Dec 27, 2019, 09:07 AM
• Last activity: Jun 23, 2025, 08:05 PM
0
votes
2
answers
192
views
Mysql locking - How to acquire a lock in the DB for each userId
Hi I have a J2EE Server(clustered) that executes requests from multiple users.
I would like that each request of a given user will be handled like so:
{
acquireLockForUser(userId); // i.e. insert a lock row for a dedicated table
doSomeStuffInDbForUser(user);
releaseLock(userId); // i.e. delete a lock row from the dedicated table
}
This will make sure that each user request will be handled by a single thread of my servers.
What's the way to achieve that? (Is it possible to insert into a table 'userLock' by id and lock it? How to insert/fetch from this "lock" table?)
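MySQL's named advisory locks (`GET_LOCK` / `RELEASE_LOCK`) do exactly this without a hand-rolled lock table, and the lock is released automatically if the holding connection dies. A sketch (the lock name is derived from a hypothetical userId of 42):

```sql
SELECT GET_LOCK('user-42', 10);     -- 1 = acquired, 0 = timed out after 10 s
-- ... doSomeStuffInDbForUser ...
SELECT RELEASE_LOCK('user-42');
```

One caveat: before MySQL 5.7.5 a session can hold only one `GET_LOCK` lock at a time; acquiring a second silently releases the first.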
Urbanleg
(375 rep)
Mar 30, 2016, 10:01 AM
• Last activity: Jun 21, 2025, 04:09 AM
0
votes
1
answers
198
views
Does Postgresql apply the same type of locks on UPDATE with range condition and on UPDATE with in-set condition?
Does PostgreSQL apply the same type of locks on `UPDATE` with a range condition and on `UPDATE` with an in-set condition?
E.g. will the queries:
1. `UPDATE table WHERE id IN (1,2,3,4,5,6,7,8,9,10);`
2. `UPDATE table WHERE id >= 1 AND id < 11;`
cause the same type of locks on `table`?
origaminal
(193 rep)
Jun 10, 2020, 05:19 PM
• Last activity: Jun 17, 2025, 01:01 AM
0
votes
1
answers
217
views
locking question around INSERT during UPDATE query
Say I have a large table ***products*** (id, color, cost) and I want to run a query to update the cost of all 'red' items:

    UPDATE products
    SET cost = cost + 1
    WHERE color = 'red'

Let's pretend that there are so many rows that the query takes 5 minutes to finish.

What sort of lock will be placed on the table? Am I right in thinking that as the query executes, write locks are acquired on all the rows where `color = 'red'`, so these records cannot be updated until the `UPDATE` query finishes? They can still be read, however (and reading them will always return the old values until the entire `UPDATE` transaction finishes).

What happens while the `UPDATE` query is still scanning the table to work out which rows to lock? Would it still allow updates to some records with the color `red`?

But it won't lock the whole table, so other records can be written or updated while the `UPDATE` query is going on? And what happens if a new record is inserted with a color of `red`? Does the price of that product get updated? Surely not?
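The exact behavior is engine- and isolation-level-specific, but it can be observed directly with two sessions. A sketch in PostgreSQL syntax (the row ids and values here are made up):

```sql
-- Session 1: the long-running update, not yet committed
BEGIN;
UPDATE products SET cost = cost + 1 WHERE color = 'red';

-- Session 2: probe whether a given row is already write-locked;
-- NOWAIT errors immediately instead of blocking
SELECT id FROM products WHERE id = 42 FOR UPDATE NOWAIT;

-- Session 2: plain reads are not blocked and keep returning the
-- old cost until session 1 commits
SELECT cost FROM products WHERE id = 42;

-- Session 2: inserting a new red row is allowed; under READ COMMITTED
-- the running UPDATE works from its own snapshot and will not touch it
INSERT INTO products (id, color, cost) VALUES (999, 'red', 10);
```

So under MVCC engines the scan locks rows as it reaches them, a row not yet reached can still be modified by others (the UPDATE then re-checks the predicate when it gets there), and a newly inserted red row is invisible to the already-running statement.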
user1063998
(101 rep)
Jan 8, 2019, 09:39 PM
• Last activity: Jun 14, 2025, 01:04 PM
0
votes
1
answers
523
views
Does UPDATE FROM VALUES lock rows in the order they are listed?
I'm bulk updating some rows using `UPDATE ... FROM (values ...` as described in [this answer][1]. Will the rows will be locked in the order that they appear in the values? Or do I need to do an explicit `SELECT FOR UPDATE` first? Here's an example statement ``` UPDATE stats as t set "individualCount...
I'm bulk updating some rows using `UPDATE ... FROM (VALUES ...` as described in this answer.
Will the rows be locked in the order that they appear in the values? Or do I need to do an explicit `SELECT FOR UPDATE` first?
Here's an example statement:

    UPDATE stats AS t SET "individualCount" = new_values."individualCount"
    FROM (VALUES (6::int,7::int),(3::int,15::int))
      AS new_values("individualCount","id")
    WHERE new_values."id" = t."id"

(`id` is the primary key of the table)
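The join against the `VALUES` list carries no ordering guarantee: the planner locks rows in whatever order the join produces them. If a deterministic lock order matters (e.g. to avoid deadlocks between concurrent batches), one option is to lock the target rows in primary-key order first. A sketch, not the only way to do it:

```sql
BEGIN;

-- Lock the target rows in a fixed (id) order first
SELECT id FROM stats WHERE id IN (7, 15) ORDER BY id FOR UPDATE;

-- Then apply the bulk update; the rows are already locked
UPDATE stats AS t
SET "individualCount" = new_values."individualCount"
FROM (VALUES (6::int, 7::int), (3::int, 15::int))
  AS new_values("individualCount", "id")
WHERE new_values."id" = t."id";

COMMIT;
```

If every concurrent writer acquires its locks in ascending `id` order, two batches touching overlapping rows serialize cleanly instead of deadlocking.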
ChrisJ
(621 rep)
Jan 31, 2023, 02:39 AM
• Last activity: Jun 13, 2025, 07:10 AM
0
votes
1
answers
224
views
How repeatable read isolation level and others are implemented in distributed/replicated databases?
I'm studying distributed systems/DBs and I'm struggling understanding isolation levels when we talk about distributed systems. Avoiding problems like dirty read, non-repeatable read, phantom reads, write-skew, etc. when we have a single DB is pretty straightforward with the introduction of optimisti...
I'm studying distributed systems/DBs and I'm struggling to understand isolation levels in the context of distributed systems.
Avoiding problems like dirty reads, non-repeatable reads, phantom reads, write skew, etc. on a single DB is pretty straightforward with optimistic or pessimistic concurrency control algorithms.
Nevertheless, I really don't understand how the same problems are avoided when we deal with distributed systems.
## Example
Let's say that we have three total nodes (*N = 3*) for our DB and we want strong consistency for some reason (*R = 2* and *W = 2*, so *R + W > N*).
Let's say now that we have two transactions: T1, T2.
- T1:

      SELECT * FROM X WHERE X.field = 'something'
      ... DO WORK ...
      SELECT * FROM X WHERE X.field = 'something'

- T2:

      INSERT INTO X VALUES(..,..,..) -- impacts T1's search criteria
T2 will commit while T1 is in the "DO WORK" phase, so we will have a *phantom read* problem.
## Question
How is this situation handled in the illustrated system above?
Do systems like this use a 2PC-like algorithm and rely on the fact that one transaction will fail on one node due to the R+W>N constraint? If so, is that solution actually used? I would say it is complex (when we have to roll back a transaction already committed on Node_X) and probably slow as well.
Do you have any useful material I can check to continue studying this topic? I really cannot find much about it; there is very little material discussing isolation levels in distributed systems.
Feel free to correct the above if I made a mistake.
Thank you.
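For reference, here is how the interleaving plays out on a single node, where snapshot isolation already settles the question; the open problem above is how replicated systems coordinate the same guarantee across nodes. PostgreSQL syntax, with made-up values for the INSERT:

```sql
-- T1 (single node):
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT * FROM X WHERE field = 'something';
-- ... DO WORK ... (T2 commits its INSERT here)
SELECT * FROM X WHERE field = 'something';  -- same snapshot: no phantom
COMMIT;

-- T2:
BEGIN;
INSERT INTO X VALUES (1, 'something');  -- matches T1's search criteria
COMMIT;
```

In PostgreSQL, `REPEATABLE READ` pins a snapshot for the whole transaction, so the second `SELECT` cannot see T2's row; engines without MVCC snapshots may need `SERIALIZABLE` (predicate or next-key locking) to get the same effect.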
Dev
(1 rep)
Aug 14, 2022, 04:20 PM
• Last activity: Jun 11, 2025, 07:07 PM
0
votes
1
answers
729
views
MariaDB large number of sessions waiting on index_tree_rw_lock
I have a MariaDB RDS instance that is mostly saving batched insert statements to 4 tables. These inserts have batches of an average of 200/batch and each statement saves to a single table. However, the db keeps getting overloaded handling this load. From the performance schema, all of the sessions a...
I have a MariaDB RDS instance that is mostly saving batched INSERT statements to 4 tables. The batches average 200 rows each, and each statement writes to a single table. However, the db keeps getting overloaded handling this load.
From the performance schema, all of the sessions are waiting on `synch/sxlock/innodb/index_tree_rw_lock` and, to a lesser extent, `synch/mutex/innodb/buf_pool_mutex`. From the CPU statistics, utilization never goes above 10%, which makes me think the size of the instance is sufficient.
Is there any way to tune the db to help with this, or what else can be done to further diagnose the issue?
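For the diagnosis side, the per-event wait totals in `performance_schema` are a reasonable first stop. A sketch using standard `performance_schema` table and column names, assuming the instrumentation is enabled:

```sql
-- Which synchronization waits dominate, converted to seconds
-- (timer values are in picoseconds)
SELECT event_name,
       count_star,
       sum_timer_wait / 1e12 AS wait_seconds
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE event_name LIKE 'wait/synch/%'
ORDER BY sum_timer_wait DESC
LIMIT 10;
```

If `index_tree_rw_lock` still tops the list, one common cause is the number of secondary indexes maintained per insert; with the seven secondary keys in the schema below, dropping any that are rarely used is a cheap first experiment.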
The schema for the tables (the actual schema has around 30 columns, but I truncated the non-key ones):
    CREATE TABLE `table_name` (
        `id` varchar(191) NOT NULL,
        `datacenter` varchar(191) NOT NULL,
        `date` date NOT NULL,
        `type` varchar(50) DEFAULT NULL,
        `version` smallint(6) DEFAULT NULL,
        `shared` tinyint(1) DEFAULT NULL,
        `region` varchar(15) DEFAULT NULL,
        `country` varchar(15) DEFAULT NULL,
        `city` varchar(100) DEFAULT NULL,
        `zip_code` varchar(15) DEFAULT NULL,
        ...
        PRIMARY KEY (`id`,`datacenter`,`date`),
        KEY `idx_duid_daily_summary_date` (`date`),
        KEY `idx_duid_daily_summary_type` (`type`),
        KEY `idx_duid_daily_summary_version` (`version`),
        KEY `idx_duid_daily_summary_region` (`region`),
        KEY `idx_diod_daily_summary_country` (`country`),
        KEY `idx_duid_daily_summary_city` (`city`),
        KEY `idx_duid_daily_summary_zip_code` (`zip_code`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1 COLLATE=latin1_swedish_ci
This is the result of `show engine innodb status` with the transactions truncated (the remainder of the output was lost in extraction):

    =====================================
    2023-06-26 16:11:05 0x14fcd4e5b700 INNODB MONITOR OUTPUT
    =====================================
    Per second averages calculated from the last 1 seconds
    -----------------
    BACKGROUND THREAD
    -----------------
    srv_master_thread loops: 0 srv_active, 0 srv_shutdown, 1711200 srv_idle
    srv_master_thread log flush and writes: 1711074
    ----------
    SEMAPHORES
    ----------
    ------------
    TRANSACTIONS
    ------------
    Trx id counter 116627587
    Purge done for trxs n:o [rest of the output lost]

The two queries being compared are a plain aggregate and the same aggregate wrapped in a derived table (only its trailing `) AS subquery;` survived extraction). The surviving text of the plain variant, with its constants lost:

    SELECT
        AVG(price)
    FROM
        table_name
    WHERE
        city =
        AND date >= DATE_SUB(CURDATE(), INTERVAL 1 WEEK)
        AND date

Their EXPLAIN plans:

- Sub-query variant (the leading cells of the first row were lost):

| id | select\_type | table | type | possible\_keys | key | key\_len | ref | rows | Extra |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| | | | ALL | null | null | null | null | 1801110 | |
| 2 | DERIVED | table\_name | ref | idx\_city | idx\_city | 103 | const | 1801110 | Using where; Using index |

- Non-sub-query variant:

| id | select\_type | table | type | possible\_keys | key | key\_len | ref | rows | Extra |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| 1 | SIMPLE | table\_name | ref | idx\_city | idx\_city | 103 | const | 1801110 | Using where; Using index |
Running these queries multiple times, the sub-query version averaged 550 ms execution time and 15 ms fetch time, while the non-sub-query version averaged 800 ms execution time and 15 ms fetch time.
In total, across 4 tables, the space used is 780GB with 331GB data and 447GB index.
ryanrasa
(1 rep)
Jun 26, 2023, 04:41 PM
• Last activity: Jun 1, 2025, 11:03 PM
Showing page 1 of 20 total questions