AWS Aurora MySQL table archive running slow for one table
I'm working on archiving a bunch of tables in an environment where archiving was never done, with some data going back 10 years. I've written a script to perform the work, which loops through the primary key (an auto-increment `bigint`) *n* rows at a time, calling a procedure that archives the data to a separate table and then deletes that same data from the main table. I'm doing it in small batches to prevent any long-term locking of the main tables, and the script also sleeps between loop iterations. Batch size and sleep time are configurable via a config file. On my test system, for this table, I'm using a batch size of 1000 and a sleep time of 0. The instance class is r7g.4xl.
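The driver script itself isn't shown, but the loop described above can be sketched roughly as follows (Python, hypothetical: `batch_windows`, `run_archive`, and their argument names are mine, not from the actual script, and the real script presumably re-selects the next minimum id from the table after each delete rather than incrementing it):

```python
import time

def batch_windows(pk_start, pk_end, batch_size):
    """Yield (pkmin, pkmax, querymax) argument triples for the archive
    procedure: each call covers ids in [pkmin, querymax), capped at pkmax."""
    pkmin = pk_start
    while pkmin <= pk_end:
        yield (pkmin, pk_end, pkmin + batch_size)
        pkmin += batch_size

def run_archive(call_proc, pk_start, pk_end, batch_size=1000, sleep_s=0.0):
    """Call the archive procedure one small window at a time, sleeping
    between iterations so the main table is never locked for long."""
    for pkmin, pkmax, querymax in batch_windows(pk_start, pk_end, batch_size):
        call_proc(pkmin, pkmax, querymax)  # e.g. CALL archive_inbox_item_proc(...)
        if sleep_s:
            time.sleep(sleep_s)
```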
Most tables archive at several thousand rows per second, which is acceptable. But I have one table whose archiving is going very slowly, averaging under 550 rows/sec. There is no other activity in the database (there are other archives running against other DBs in the cluster at the same time, but killing them didn't improve the performance of this one). Here's the table schema (the schema for the archive table is identical):
```sql
CREATE TABLE `inbox_item` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `user_id` bigint NOT NULL,
  `template_id` bigint NOT NULL,
  `url` varchar(4000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `created_at` datetime NOT NULL,
  `hash` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `parameters` varchar(4000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `hash_uidx` (`hash`),
  KEY `template_id_idx` (`template_id`),
  KEY `user_id_created_at_idx` (`user_id`, `created_at`)
) ENGINE=InnoDB AUTO_INCREMENT=442872663 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
```
Note that while there are two large `varchar`s, the total actual data width is under 300 bytes. Here's the procedure that's being called:
```sql
CREATE DEFINER=`root`@`%` PROCEDURE `archive_inbox_item_proc`
    (IN pkmin bigint, IN pkmax bigint, IN querymax bigint)
begin
    declare exit handler for sqlexception
    begin
        get diagnostics condition 1
            @err = MYSQL_ERRNO, @msg = MESSAGE_TEXT;
        select -1;
        select concat('Error ', cast(@err as char), ': ', @msg) 'Error';
        rollback;
    end;

    start transaction;
    insert ignore into `inbox`.`inbox_item_archive`
        select arctable.* from `inbox`.`inbox_item` as arctable
        where arctable.id >= pkmin
          and arctable.id < querymax
          and arctable.id <= pkmax;
    select row_count();
    commit;
end
```
`pkmin` is always the actual minimum pkey value. There are no foreign keys or triggers referencing the table. Here's the table status:
```
           Name: inbox_item
         Engine: InnoDB
        Version: 10
     Row_format: Dynamic
           Rows: 299879061
 Avg_row_length: 243
    Data_length: 72988737536
Max_data_length: 0
   Index_length: 126937300992
      Data_free: 45770342400
 Auto_increment: 442872663
    Create_time: 2025-03-28 06:15:36
    Update_time: 2025-08-05 18:04:55
     Check_time: NULL
      Collation: utf8mb4_unicode_ci
       Checksum: NULL
 Create_options:
        Comment:
```
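For scale, here's some back-of-the-envelope arithmetic on those status numbers (my own reading, not a confirmed diagnosis): the secondary indexes are substantially larger than the row data itself, so every batch delete maintains roughly 1.7 bytes of index per byte of row data, and the `hash_uidx` entries presumably fall in effectively random order relative to the primary key:

```python
# Figures taken from the SHOW TABLE STATUS output above.
rows = 299_879_061
data_length = 72_988_737_536     # clustered index (the rows themselves)
index_length = 126_937_300_992   # secondary indexes

data_per_row = data_length / rows    # ~243 bytes, matching Avg_row_length
index_per_row = index_length / rows  # ~423 bytes of secondary-index entries per row
ratio = index_length / data_length   # ~1.74x as much index as data
```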
Any ideas on what's causing this table to archive so slowly relative to other tables in other databases?
Asked by Swechsler
(153 rep)
Aug 5, 2025, 06:05 PM