Aurora MySQL index creation challenges on large tables

1 vote
1 answer
475 views
I have a table of yearly data (~500 GB, ~1 billion rows) in Aurora MySQL 3.02.2. The requirement is to migrate this table to another Aurora MySQL cluster (PROD) using DMS. We initiated the DMS task with data validation enabled and with the indexes and PK already present on the target, which slowed other production processes due to load on the DB. So I did some research and am looking for some suggestions here, please.

*Note: this new table in PROD will **NOT** be used by the application until this work is completed and the table is renamed to the current table name.*

1. Is loading the data first using DMS and creating the indexes manually afterwards the correct approach on a large table like this? **OR**
2. Should I use DMS task filtering on the "datetime" bigint column to load the data month by month into the new table, so the index build happens as the data is written? This could be done over the course of a few days per month (assuming we run the DMS task for a few hours each day). **OR** is there any better method?
3. Does index creation generate a lot of temporary files on such a large table, which could lead to memory issues, **OR** should I use something like "ALGORITHM=INPLACE" in the CREATE INDEX statement?
4. The current parameter settings I see that look relevant are "innodb_ddl_buffer_size=1MB" and "innodb_file_per_table=ON". Are there any other parameters I should check?

The target production DB has data written to it 24x7, and I am looking for a way to avoid any slowness or performance issues while this table with its indexes is being copied over.
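For reference, if option 1 is taken (load first, index later), the post-load index build can be requested as an online operation so concurrent DML is not blocked. A minimal sketch, assuming a hypothetical table `yearly_data` with a bigint column `event_time` (names are placeholders, not from the original post):

```sql
-- Hypothetical post-load index build on the target cluster.
-- ALGORITHM=INPLACE avoids a table copy; LOCK=NONE requests that
-- concurrent reads and writes remain allowed during the build.
ALTER TABLE yearly_data
  ADD INDEX idx_event_time (event_time),
  ALGORITHM=INPLACE, LOCK=NONE;

-- Optional: watch the progress of the online index build (MySQL 8.0+),
-- assuming the relevant stage instruments are enabled in Performance Schema.
SELECT EVENT_NAME, WORK_COMPLETED, WORK_ESTIMATED
FROM performance_schema.events_stages_current;
```

If either option is unsupported for the statement, MySQL raises an error rather than silently falling back, which makes the clauses a safe way to verify the build will be online before running it on PROD.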
Asked by JollyRoger (11 rep)
Nov 30, 2023, 05:15 AM
Last activity: Aug 1, 2025, 08:04 AM