MySQL process stuck on “cleaning up”, locking table forever
4 votes · 1 answer · 4796 views
It was suggested I post here rather than on Stack Overflow.
I'm new to debugging MySQL/InnoDB issues, so here is the best I have:
I'm running an integration test, so the results are near-identical each time: the test consistently hangs at the same spot while trying to execute the same UPDATE, because a conflicting process has locked the table the UPDATE needs. If I just let it run I get a lock wait timeout exceeded exception, and when I kill the offending process things do start moving again, though an exception is then thrown in my test: "Caused by: org.mariadb.jdbc.internal.util.dao.QueryException: Could not send query: Last stream not finished".
Here is the relevant information I can find, from:
SHOW ENGINE INNODB STATUS;
------------
TRANSACTIONS
------------
---TRANSACTION 2609485, ACTIVE 36 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT 9 lock struct(s), heap size 1136, 4 row lock(s), undo log entries 2
MySQL thread id 231, OS thread handle 162200, query id 10577 localhost 127.0.0.1 root updating
/* update com.ipl.inspire.imp.radio.entities.entity.circuit.Circuit */
update abstract_asset_network
set lastUpdated='2016-01-14 15:34:10.2', leg=0, networkStatus_id=4,
parent_id=NULL, xCoordinate=0, yCoordinate=0 where id=33
Trx read view will not see trx with id >= 2609485, sees = 2609468, sees < 2609468
There are two transactions here, and it seems that trx 2609468 holds a lock that blocks the UPDATE which trx 2609485 tries to execute two seconds later. The lock seems to last forever (or until innodb_lock_wait_timeout is hit, which I have tried increasing).
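For reference, this is roughly how I have been checking and raising that timeout between runs (the 300 seconds is just a value I tried, not a recommendation):
-- Check the current lock wait timeout (seconds)
SHOW VARIABLES LIKE 'innodb_lock_wait_timeout';
-- Raise it for new connections (existing sessions keep their old value)
SET GLOBAL innodb_lock_wait_timeout = 300;
-- Or just for the current session
SET SESSION innodb_lock_wait_timeout = 300;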
SELECT * FROM INFORMATION_SCHEMA.INNODB_TRX
trx_id: 2609485
trx_state: LOCK WAIT
trx_started: 2016-01-14 15:34:17
trx_requested_lock_id: 2609485:89945:3:3
trx_wait_started: 2016-01-14 15:34:18
trx_weight: 11
trx_mysql_thread_id: 231
trx_query: update abstract_asset_network set lastUpdated='2016-01-14 15:34:10.2', leg=0, networkStatus_id=4, parent_id=NULL, xCoordinate=0, yCoordinate=0 where id=33
trx_operation_state: starting index read
trx_tables_in_use: 1
trx_tables_locked: 5
trx_lock_structs: 9
trx_locks_memory_bytes: 1136
trx_rows_locked: 4
trx_rows_modified: 2
trx_concurrency_tickets: 0
trx_isolation_level: REPEATABLE READ

trx_id: 2609468
trx_state: RUNNING
trx_started: 2016-01-14 15:34:15
trx_requested_lock_id: NULL
trx_wait_started: NULL
trx_weight: 13
trx_mysql_thread_id: 228
trx_query: NULL
trx_operation_state: NULL
trx_tables_in_use: 0
trx_tables_locked: 6
trx_lock_structs: 10
trx_locks_memory_bytes: 1136
trx_rows_locked: 4
trx_rows_modified: 3
trx_concurrency_tickets: 0
trx_isolation_level: REPEATABLE READ
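If it helps, I can also join against INFORMATION_SCHEMA.INNODB_LOCK_WAITS (which this MariaDB version has) to show the waiting/blocking pair more directly; the aliases are just mine:
SELECT r.trx_id              AS waiting_trx,
       r.trx_mysql_thread_id AS waiting_thread,
       r.trx_query           AS waiting_query,
       b.trx_id              AS blocking_trx,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM information_schema.innodb_lock_waits w
JOIN information_schema.innodb_trx r ON r.trx_id = w.requesting_trx_id
JOIN information_schema.innodb_trx b ON b.trx_id = w.blocking_trx_id;
From the data above this should come back with thread 228 (trx 2609468) blocking thread 231 (trx 2609485), and blocking_query empty because 228 is idle inside its transaction.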
This suggests much the same thing as the InnoDB status output above, I believe, though maybe someone else can make more sense of it.
The last query executed on process id 228 was (from the general_log):
228 Query /* select generatedAlias0 from Imp as generatedAlias0 */
select imp0_.id as id1_67_, imp0_.black as black2_67_, imp0_.current as current3_67_,
imp0_.name as name4_67_, imp0_.redMessageHost as redMessa5_67_,
imp0_.redMessagePort as redMessa6_67_, imp0_.site_id as site_id7_67_
from imp imp0_
and then later:
231 Query update abstract_asset_network
set lastUpdated='2016-01-14 15:34:10.2', leg=0, networkStatus_id=4,
parent_id=NULL, xCoordinate=0, yCoordinate=0 where id=33
.
.
.
8 Query kill 228
231 Query COMMIT
231 Query set autocommit=1
231 Query SELECT 1
231 Query set autocommit=0
231 Query /* select ...
I don't think that last part is relevant, though; it looks more like the connection being reused for "cleaning up" something else. However, this same scenario, with the same processes, timing, lock and queries, plays out identically every single time, with only the ids changing.
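For what it's worth, while the update is hanging I also check the process list to see what thread 228 is doing (this is where I see the "cleaning up" state from the title; the ids 228/231 are just from this particular run):
SELECT id, user, command, time, state, info
FROM information_schema.PROCESSLIST
WHERE id IN (228, 231);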
Asked by user2819101
(41 rep)
Jan 14, 2016, 04:46 PM
Last activity: Jun 10, 2017, 04:57 PM