Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
0
answers
16
views
AWS Aurora MySQL table archive running slow for one table
I'm working on archiving a bunch of tables in an environment where archiving was never done, with some data going back 10 years. I've written a script to perform the work, which loops through the primary key (an autoincrement `bigint`) *n* rows at a time, calling a procedure to archive the data to a separate table and then deleting that same data from the main table. I'm doing it in small batches to prevent any long-term locking of the main tables. It also sleeps in between each loop iteration. Batch size and sleep time are configurable via a config file. On my test system, for this table, I'm using a batch size of 1000 and a sleep time of 0. Instance class is r7g.4xl.
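For illustration, a minimal sketch of one driver iteration; the values here are placeholders, since the real script reads the bounds, batch size, and sleep time from the config file:
```
-- one batch of the archive loop (placeholder values)
SET @pkmin    := 1;                   -- current batch start (actual minimum pkey)
SET @pkmax    := 442000000;           -- hypothetical overall ceiling for the run
SET @querymax := @pkmin + 1000;       -- batch size 1000, as configured for this table
CALL inbox.archive_inbox_item_proc(@pkmin, @pkmax, @querymax);
-- the script then sleeps, sets @pkmin := @querymax, and repeats until @pkmin > @pkmax
```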
Most tables archive at several thousand rows per second, which is acceptable. But I have one table whose archiving is going very slowly; averaging under 550 rows/sec. There is no other activity in the database (there are other archives running against other DBs in the cluster at the same time, but killing them didn't improve the performance of this one). Here's the table schema (the schema for the archive table is identical):
CREATE TABLE `inbox_item` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `user_id` bigint NOT NULL,
  `template_id` bigint NOT NULL,
  `url` varchar(4000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `created_at` datetime NOT NULL,
  `hash` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `parameters` varchar(4000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `hash_uidx` (`hash`),
  KEY `template_id_idx` (`template_id`),
  KEY `user_id_created_at_idx` (`user_id`,`created_at`)
) ENGINE=InnoDB AUTO_INCREMENT=442872663 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Note that while there are two large `varchar`s, total actual data width is under 300 bytes. Here's the procedure that's being called:
CREATE DEFINER=`root`@`%` PROCEDURE `archive_inbox_item_proc`(IN pkmin bigint, IN pkmax bigint, IN querymax bigint)
begin
    declare exit handler for sqlexception
    begin
        get diagnostics condition 1 @err = MYSQL_ERRNO, @msg = MESSAGE_TEXT;
        select -1;
        select concat('Error ', cast(@err as char), ': ', @msg) 'Error';
        rollback;
    end;
    start transaction;
    insert ignore into `inbox`.`inbox_item_archive`
        select arctable.* from `inbox`.`inbox_item` as arctable
        where created_at = pkmin and arctable.id = pkmin
          and arctable.id < querymax and arctable.id <= pkmax;
    select row_count();
    commit;
end
`pkmin` is always the actual minimum pkey value. There are no foreign keys or triggers referencing the table. Here's the table status:
Name: inbox_item
Engine: InnoDB
Version: 10
Row_format: Dynamic
Rows: 299879061
Avg_row_length: 243
Data_length: 72988737536
Max_data_length: 0
Index_length: 126937300992
Data_free: 45770342400
Auto_increment: 442872663
Create_time: 2025-03-28 06:15:36
Update_time: 2025-08-05 18:04:55
Check_time: NULL
Collation: utf8mb4_unicode_ci
Checksum: NULL
Create_options:
Comment:
Any ideas on what's causing this to run so slowly relative to other tables in other databases?
Swechsler
(153 rep)
Aug 5, 2025, 06:05 PM
0
votes
1
answers
343
views
Unknown source of high number of connections
In MySQL/Aurora I see from time to time a spike in the number of connections (from about 100 to 4000):
The MySQL doc says:
> Connections
>
> The number of connection attempts (successful or not) to the MySQL server.
So I was thinking those were coming from failed connection attempts; however, I don't see any failures:
MySQL [(none)]> SHOW GLOBAL STATUS like 'Conn%';
+-----------------------------------+----------+
| Variable_name | Value |
+-----------------------------------+----------+
| Connection_errors_accept | 0 |
| Connection_errors_internal | 0 |
| Connection_errors_max_connections | 0 |
| Connection_errors_peer_address | 0 |
| Connection_errors_select | 0 |
| Connection_errors_tcpwrap | 0 |
| Connections | 34388970 |
+-----------------------------------+----------+
7 rows in set (0.00 sec)
MySQL [(none)]> show global variables like '%conn%';
+-----------------------------------------------+-------------------+
| Variable_name | Value |
+-----------------------------------------------+-------------------+
| aurora_max_connections_limit | 16000 |
| connect_timeout | 10 |
| disconnect_on_expired_password | ON |
| init_connect | |
| max_connect_errors | 100 |
| max_connections | 4000 |
| max_user_connections | 0 |
| performance_schema_session_connect_attrs_size | 512 |
+-----------------------------------------------+-------------------+
I would very much like to understand what those connections are and where they are coming from.
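A minimal sketch of how I could try to narrow this down, assuming the Performance Schema is enabled on the instance, is to see which hosts and accounts the connection attempts are attributed to:
```
-- connection attempts per host and per account since the last reset
SELECT host, current_connections, total_connections
FROM performance_schema.hosts
ORDER BY total_connections DESC;

SELECT user, host, current_connections, total_connections
FROM performance_schema.accounts
ORDER BY total_connections DESC;
```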

Nir
(529 rep)
Oct 4, 2020, 08:45 AM
• Last activity: Aug 4, 2025, 03:10 AM
0
votes
1
answers
149
views
Can set command cause more CPU usage?
I'm running AWS Aurora 5.7. The application stack is PHP. After moving to Aurora, I noticed that the CPU consumption is really high. When I checked Performance Insights, it shows that the SET command is consuming the most. (Green colour is CPU)
I also checked the Aurora Billed Read IOPS, which is almost zero; I think most of the data is in the InnoDB buffer pool. I didn't see any slow queries; all of them execute within a second.
So what I'm thinking is that the number of SQL queries on this server is a bit high, and that's why the SET command is showing more CPU.
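As a rough sketch of how that theory could be checked, assuming statement digests are being collected in the Performance Schema, something like this would show how often SET statements run and how much time they account for:
```
-- how often SET statements run and their cumulative latency (timers are in picoseconds)
SELECT digest_text,
       count_star,
       ROUND(sum_timer_wait / 1e12, 2) AS total_latency_s
FROM performance_schema.events_statements_summary_by_digest
WHERE digest_text LIKE 'SET%'
ORDER BY sum_timer_wait DESC
LIMIT 10;
```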
Can someone help to understand this?


TheDataGuy
(1986 rep)
Oct 23, 2021, 05:46 AM
• Last activity: Jul 17, 2025, 03:05 PM
0
votes
2
answers
177
views
Autovacuum unable to vacuum, which is causing DB restarts of Aurora Postgres DB
(Moved question from stackoverflow to dba.stackexchange)
I am using AWS Aurora Postgres 13.8. We have around 35-40 replication slots in the DB and we dropped a few of them. The reason for dropping them was that we were seeing restarts of the DB. CPU, memory and write/read IOPS are normal. We are currently on db.r6i.24xlarge. We have verified that there are no long-running transactions and no blocked pids.
We dropped around 12 (first 7 and then 5) replication slots in the DB. The xmin horizon is increasing, but autovacuum is still not completely able to vacuum the table. Below is from the logs.
First, we dropped 7 slots, logs after that were like
2024-10-03 14:20:19 UTC::@::LOG: automatic vacuum of table ".pg_catalog.pg_statistic": index scans: 0
pages: 0 removed, 4175180 remain, 0 skipped due to pins, 649889 skipped frozen
tuples: 0 removed, 63044551 remain, 62974732 are dead but not yet removable, oldest xmin: 850469227
buffer usage: 7064601 hits, 0 misses, 0 dirtied
avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s
system usage: CPU: user: 9.86 s, system: 0.02 s, elapsed: 50.84 s
WAL usage: 0 records, 0 full page images, 0 bytes
Then we dropped 5 more slots; the logs then looked like:
2024-10-03 17:02:12 UTC::@::LOG: automatic vacuum of table ".pg_catalog.pg_statistic": index scans: 0
pages: 0 removed, 4175180 remain, 0 skipped due to pins, 2299157 skipped frozen
tuples: 0 removed, 33025155 remain, 32953376 are dead but not yet removable, oldest xmin: 883545046
buffer usage: 3757715 hits, 0 misses, 0 dirtied
avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s
system usage: CPU: user: 4.93 s, system: 0.00 s, elapsed: 15.72 s
WAL usage: 0 records, 0 full page images, 0 bytes
I do understand that the number of dead tuples has dropped significantly, but how many more replication slots do we need to drop in order to remove this error?
We have also started seeing the log entry below, in addition to the one above.
2024-10-04 07:03:15 UTC::@::LOG: automatic vacuum of table "..": index scans: 1
pages: 0 removed, 5949 remain, 0 skipped due to pins, 557 skipped frozen
tuples: 5485 removed, 79016 remain, 0 are dead but not yet removable, oldest xmin: 903322403
buffer usage: 3757819 hits, 0 misses, 0 dirtied
avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s
system usage: CPU: user: 2.03 s, system: 0.00 s, elapsed: 13.15 s
WAL usage: 0 records, 0 full page images, 0 bytes
As per the latest comment by Laurenz Albe on Stack Overflow, I was asked to **drop all stale replication slots (where restart_lsn in pg_replication_slots is old)**.
I ran this query
select slot_name, active, catalog_xmin, restart_lsn, pg_current_wal_insert_lsn()-restart_lsn as diff_lsn from pg_replication_slots order by diff_lsn desc;
The first 2 slots have a huge difference, with diff_lsn of 793117723072 and 427503896816; all the remaining slots are at 92440376. I understand that the first 2 definitely need to be deleted. However, what about the remaining ones, since their difference is also large?
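For reference, a slightly expanded version of that check, plus how a confirmed-stale slot would be dropped; this is only a sketch, and the slot name in the drop call is hypothetical:
```
-- retained WAL per slot, largest first
SELECT slot_name, active, catalog_xmin, restart_lsn,
       pg_size_pretty(pg_current_wal_insert_lsn() - restart_lsn) AS retained_wal
FROM pg_replication_slots
ORDER BY pg_current_wal_insert_lsn() - restart_lsn DESC;

-- drop a slot once it is confirmed stale (fails if the slot is still active)
SELECT pg_drop_replication_slot('stale_slot_name');
```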
Also, how can I prevent such scenarios in the future?
Nik
(101 rep)
Oct 4, 2024, 11:06 AM
• Last activity: Jul 17, 2025, 02:00 AM
0
votes
1
answers
158
views
Adding FK to a newly attached partition slow
We have a partitioned table from which we attach and detach partitions for archiving purposes. When attempting to add a foreign key on the partitioned table, the process is taking a long time (up to 20 minutes) which is causing many problems.
We have discussed using a `NOT VALID` option, but unfortunately this is not an option for partitioned tables, so I just wanted to see if there were any options for adding a foreign key to a partition.
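One idea we are evaluating, sketched below with hypothetical table and column names: validate the foreign key on the partition while it is still a standalone table, then attach it. This assumes Postgres will reuse an equivalent constraint that already exists on the partition rather than re-validating it, which we still need to verify on our version.
```
-- validate the FK on the standalone table, so the long scan happens outside the parent
ALTER TABLE measurements_2024_q1
    ADD CONSTRAINT measurements_2024_q1_device_fk
    FOREIGN KEY (device_id) REFERENCES devices (id) NOT VALID;

ALTER TABLE measurements_2024_q1
    VALIDATE CONSTRAINT measurements_2024_q1_device_fk;

-- then attach it to the partitioned table
ALTER TABLE measurements
    ATTACH PARTITION measurements_2024_q1
    FOR VALUES FROM ('2024-01-01') TO ('2024-04-01');
```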
Krishnp92
(19 rep)
Jul 16, 2024, 12:50 PM
• Last activity: Jul 10, 2025, 10:06 PM
3
votes
2
answers
3817
views
How to performance tune high /wait/io/redo_log_flush values in "commit"
According to the "Top SQL" view in *AWS Performance Insights*, `commit` tops the list of SQL statements in terms of *average active session (AAS)*. And most of the time seems to be spent in `wait/io/redo_log_flush`. I have read the [docs on io/aurora_redo_log_flush](https://docs.aws.amazon.com/Amazo...
According to the "Top SQL" view in *AWS Performance Insights*,
commit
tops the list of SQL statements in terms of *average active session (AAS)*.
And most of the time seems to be spent in wait/io/redo_log_flush
. I have read the [docs on io/aurora_redo_log_flush](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/ams-waits.io-auredologflush.html) - which I assume to be the name of wait/io/redo_log_flush prior to Aurora 3 - but they don't really help me, because there are apparently only 0.25 commits per second, but the load is close to 15 AAS. So it is not a problem of too many small commits.
So why would such a low commit rate lead to such a high AAS? How can I dig deeper into this?
N.B. We are using *8.0.mysql_aurora.3.02.1*

Evgeniy Berezovsky
(151 rep)
Nov 14, 2022, 08:26 AM
• Last activity: Jul 9, 2025, 02:51 PM
1
votes
1
answers
1360
views
Aurora Serverless alter table Error 1114 table is full
Using Aurora Serverless, when attempting to `alter table mytable add column mynewcolumn decimal(10,3) after myoldcolumn` I receive "ERROR 1114 (HY000): The table 'mytable' is full". The table has hundreds of millions of rows. Normally this error results from a full partition, but since I'm using Aurora Serverless, I can't manually provision partition space. What should I do to avoid this error?
enharmonic
(203 rep)
Oct 4, 2019, 02:56 PM
• Last activity: Jul 3, 2025, 06:06 PM
3
votes
1
answers
185
views
Row Locks in Partitioned Aurora MySQL Table
I'm experiencing many row locks in AWS Aurora MySQL.
MySQL version = 8.0
Aurora version = 3.04.0
The table is pretty big, close to 1 billion rows. The table is partitioned into 24 partitions based on the created_at column.
In one of the production issues we found that 26 rows had been locked.
---TRANSACTION 19670234666, ACTIVE 299 sec
53 lock struct(s), heap size 8312, 26 row lock(s), undo log entries 1
MySQL thread id 7015846, OS thread handle 70369603010432, query id 12207362320
IP username
We update only one row of the table using an `id` condition; we do not pass `created_at` in the update query.
What could be the reason for 26 rows being locked?
DB query
UPDATE OrderEntity e SET e.state = :state, e.metadata= :metadata WHERE e.id = :id
Table Structure
```
id: BIGINT
state: INT
metadata: TEXT
created_at: DATETIME
updated_at: DATETIME
...
Primary Key: id, created_at
```
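One thing I plan to check is whether partition pruning plays a role, by comparing the plans with and without the partition key in the WHERE clause; this is only a sketch and the literal values are placeholders:
```
-- which partitions does the statement touch without the partition key?
EXPLAIN UPDATE OrderEntity SET state = 2 WHERE id = 12345;

-- and with the partition key included, so pruning can kick in
EXPLAIN UPDATE OrderEntity
SET state = 2
WHERE id = 12345
  AND created_at >= '2023-06-01' AND created_at < '2023-07-01';
```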
sonus21
(131 rep)
Dec 20, 2023, 08:30 AM
• Last activity: Jul 1, 2025, 04:02 AM
0
votes
0
answers
30
views
Copying millions of rows to a new table
I have a MySQL table that has 100+ million rows and we need to add a new column to this table. `ALTER TABLE` will lock the table and I can't have that, so my next idea was to make a new table that has the new field and then just move the contents over using a script of some kind.
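What I'm picturing for that script is roughly the following sketch; the table, column, and variable names are hypothetical, and it assumes an auto-increment integer primary key named id:
```
-- copy one batch; the driving script records the highest id copied,
-- sets @last_id to it, and repeats until no rows are inserted
SET @last_id := 0;

INSERT INTO new_table (id, col_a, col_b, new_col)
SELECT id, col_a, col_b, NULL
FROM old_table
WHERE id > @last_id
ORDER BY id
LIMIT 10000;
```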
However, I wanted to see if I had some other (possibly better/faster) options available to me.
The MySQL database is an RDS on AWS (so it's an Aurora database that's MySQL-flavored), so if someone knows of some AWS magic, I'm all ears.
There aren't many indexes or any relationships set up (this is a legacy system that never had a DBA design things).
I have considered using mysqldump to extract the data, but wasn't sure if this is a worthwhile route to go.
Any suggestions/ideas would be appreciated.
eman86
(1 rep)
Jun 30, 2025, 11:07 PM
• Last activity: Jun 30, 2025, 11:24 PM
0
votes
1
answers
200
views
Why is there a writable instance inside a secondary region for MySQL AWS Aurora Replica?
I have created two Aurora MySQL clusters with two nodes in two regions using Terraform. The secondary region is set up to replicate from the primary region using binlog. This seems to be set up correctly: `describe-db-clusters` shows replication info and `Replication source` is displayed on the secondary cluster in the Console.
What confuses me is that in the secondary region, there is a write node. I can connect to it, add data to table and it gets propagated back to primary region—and vice versa. I am specifically not using multimaster cluster (which is single-region only anyway) and I am not using global database as I want to scale to more regions. Based on AWS docs, [only global database supports write forwarding](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/aurora-global-database-write-forwarding.html) .
I only tried with manually connecting to the instances using mysql CLI as I want to understand the characteristics before using this with an application.
How is this possible? Is there any documentation? How does this setup handle update conflicts?
Almad
(101 rep)
Feb 10, 2021, 10:44 AM
• Last activity: Jun 17, 2025, 07:09 AM
0
votes
1
answers
239
views
Can PostgreSQL bulk update cause issue in Logical replication?
I use AWS Aurora PostgreSQL, version 13 (updated the question).
We use logical slots (wal2json) to replicate to the data warehouse. Recently one of the bulk update queries updated 40 million rows.
After that, my cluster's IOPS went too high and the queue depth went to 80. All operations got blocked.
Then we removed the replication slots and everything went back to normal. I know the bulk update should have been done in batches. I'm just trying to understand whether this is a bottleneck in logical replication, and how we should handle such use cases in the future. Would pgoutput make sense here?
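For future runs, I'm assuming the batching would look roughly like this sketch, with hypothetical table and column names, so that each transaction emits a bounded amount of WAL:
```
-- update one batch; repeat until 0 rows are affected
UPDATE big_table
SET    flag = true
WHERE  id IN (
    SELECT id
    FROM big_table
    WHERE flag = false
    ORDER BY id
    LIMIT 50000
);
```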
TheDataGuy
(1986 rep)
Nov 28, 2022, 10:41 AM
• Last activity: Jun 12, 2025, 10:00 AM
1
votes
1
answers
745
views
How to clear AWS RDS Aurora MySQL local storage?
I got a warning message about running out of AWS RDS local storage.
> The free storage capacity for DB Instance: {MY INSTANCE NAME} is low
> at 2% of the provisioned storage [Provisioned Storage: 157.36 GB, Free
> Storage: 3.35 GB]. You may want to increase the provisioned storage to
> address this issue.
According to the AWS documentation, local storage depends on the instance type and can be increased by scaling up: https://aws.amazon.com/premiumsupport/knowledge-center/aurora-mysql-local-storage/?nc1=h_ls
But I would like to clear the local storage instead of scaling up the instance, because only one instance of the read cluster has insufficient local storage.
I restarted the Aurora instance but the local storage is still almost full.
How can I clear local storage?
Its MySQL version is 5.6.
BingbongKim
(111 rep)
May 23, 2022, 04:47 AM
• Last activity: Jun 5, 2025, 08:09 AM
1
votes
0
answers
53
views
Aurora PostgreSQL Severe Performance Degradation Under Concurrent Load
**Environment:**
- Database: AWS Aurora PostgreSQL
- ORM: SQLAlchemy
- API Framework: Python FastAPI
**Issue:**
I'm experiencing significant query performance degradation when my API receives concurrent requests. I ran a performance test comparing single execution vs. concurrent execution of the same query, and the results are concerning.
**Real-World Observations:**
When monitoring our production API endpoint during load tests with 100 concurrent users, I've observed concerning behavior:
- When running the same complex query through PGAdmin without concurrent load, it consistently completes in ~60ms
- However, during periods of high concurrency (100 simultaneous users), response times for this same query become wildly inconsistent:
  - Some executions still complete in 60-100ms
  - Others suddenly take up to 2 seconds
  - No clear pattern to which queries are slow
**Test Results:**
Single query execution time: 0.3098 seconds
Simulating 100 concurrent clients - all requests starting simultaneously...
Results Summary:
Total execution time: 32.7863 seconds
Successful queries: 100 out of 100
Failed queries: 0
Average query time: 0.5591 seconds (559ms)
Min time: 0.2756s, Max time: 1.9853s
Queries exceeding 500ms threshold: 21 (21.0%)
50th percentile (median): 0.3114s (311ms)
95th percentile: 1.7712s (1771ms)
99th percentile: 1.9853s (1985ms)
With 100 concurrent threads:
- Each query takes ~12.4x longer on average (3.62s vs 0.29s)
- Huge variance between fastest (0.5s) and slowest (4.8s) query
- Overall throughput is ~17.2 queries/second (better than sequential, but still concerning)
**Query Details:**
The query is moderately complex, involving:
Several JOINs across multiple tables, a subquery using EXISTS, ORDER BY and LIMIT clauses.
**My Setup**
**SQLAlchemy Configuration:**
engine = create_async_engine(
    settings.ASYNC_DATABASE_URL,
    echo=settings.SQL_DEBUG,
    pool_pre_ping=True,
    pool_use_lifo=True,
    pool_size=20,
    max_overflow=100,
    pool_timeout=30,
    pool_recycle=30,
)

AsyncSessionLocal = async_sessionmaker(
    bind=engine,
    class_=AsyncSession,
    expire_on_commit=False,
    autocommit=False,
    autoflush=False,
)
**FastAPI Dependency:**
async def get_db() -> AsyncGenerator[AsyncSession, None]:
    """Get database session"""
    async with AsyncSessionLocal() as session:
        try:
            yield session
            await session.commit()
        except Exception:
            await session.rollback()
            raise
**Questions:**
- **Connection Pool Settings:** Are my SQLAlchemy pool settings appropriate for handling 100 concurrent requests? What would be optimal?
- **Aurora Configuration:** What Aurora PostgreSQL parameters should I tune to improve concurrent query performance?
- **Query Optimization:** Is there a standard approach to optimize complex queries with JOINs and EXISTS subqueries for better concurrency?
- **ORM vs Raw SQL:** Would bypassing SQLAlchemy ORM help performance?
Any guidance or best practices would be greatly appreciated. I'd be happy to provide additional details if needed.
**Update:**
**Hardware Configuration**
1. Aurora regional cluster with 1 instance
2. Capacity Type: Provisioned (Min: 0.5 ACUs (1GiB), Max: 16 ACUs (32 GiB))
3. Storage Config: Standard
**Performance Insights**
1. Max ACU utilization: 70%
2. Max CPU Utilization: 45%
3. Max DB connection: 111
4. EBS IO Balance: 100%
5. Buffer Cache Hit Ratio: 100%
Abhishek Tyagi
(11 rep)
May 20, 2025, 07:18 PM
• Last activity: May 21, 2025, 02:50 PM
1
votes
2
answers
831
views
How to enable Cloud Watch logs for AWS DMS task?
I am creating an AWS DMS task for migrating data from GCP SQL Server to AWS Aurora MySQL. How can I attach the log group which I created? I was able to enable CloudWatch logs for the task, but when I click on "View CloudWatch logs" it says the log group doesn't exist. Details in the image.
Pand005
(151 rep)
Dec 17, 2020, 10:10 PM
• Last activity: May 14, 2025, 02:08 AM
1
votes
1
answers
307
views
Can't use performance_schema in my mysql server
I am using AWS RDS Aurora MySQL 5.7 Serverless, and in RDS I have enabled the performance_schema parameter. I have connected my RDS instance to MySQL Workbench version 6.3; in Workbench, when I check the performance_schema parameter it shows as disabled, and I have also checked that performance_schema has no data in it. I want to check the unused indexes in the schema, and for that I have to use the Performance Schema.
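A sketch of the checks I want to run once the Performance Schema is actually collecting data; the schema name here is a placeholder:
```
-- via the sys schema bundled with MySQL 5.7, if it is available on the instance
SELECT * FROM sys.schema_unused_indexes WHERE object_schema = 'mydb';

-- or directly against the Performance Schema
SELECT object_schema, object_name, index_name
FROM performance_schema.table_io_waits_summary_by_index_usage
WHERE index_name IS NOT NULL
  AND index_name <> 'PRIMARY'
  AND count_star = 0
ORDER BY object_schema, object_name;
```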

Paras Sahu
(11 rep)
Aug 14, 2023, 07:58 AM
• Last activity: May 10, 2025, 02:05 AM
0
votes
0
answers
49
views
AWS DMS table mapping remove specific columns and rename
I'm trying to run AWS DMS with specific table mapping where I have my source table, then I want to rename the table to a new name that matches the target table name as well as not migrate certain columns that I don't need anymore and don't want in the target table. How can I do that?
I tried this but it doesn't work. Is this the correct approach?
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "select-team_assignments",
      "object-locator": {
        "schema-name": "source_schema",
        "table-name": "proj_team_assignments"
      },
      "rule-action": "include"
    },
    {
      "rule-type": "transformation",
      "rule-id": "2",
      "rule-name": "rename-table",
      "rule-action": "rename",
      "object-locator": {
        "schema-name": "source_schema",
        "table-name": "proj_team_assignments"
      },
      "value": "team_assignments"
    },
    {
      "rule-type": "transformation",
      "rule-id": "3",
      "rule-name": "drop-columns",
      "rule-action": "remove-columns",
      "object-locator": {
        "schema-name": "source_schema",
        "table-name": "proj_team_assignments"
      },
      "value": [
        "team_relation_type",
        "issue_tracking_id",
        "analysis_skills",
        "communication_oral",
        "coaching_others",
        "problem_solving",
        "written_communication",
        "standard_work_mgmt",
        "recommendation_synthesis"
      ]
    }
  ]
}
Also, do column types have to match one to one, or can they be a bit different? For example, a JSON field on the target instead of text, or a string instead of an int.
Thanks!
Foobar
(1 rep)
May 6, 2025, 11:35 PM
1
votes
1
answers
323
views
AWS Mysql unknown variable Warning Log
I upgraded AWS Aurora MySQL 5.7 to 8, but the Aurora log outputs warnings like this.
2024-01-19T00:24:16.179185Z 0 [Warning] [MY-000067] [Server] unknown variable 'loose_authentication_kerberos_service_principal=mysql/private-write-instance.cq2fuqzvcb8i.ap-northeast-2.rds.amazonaws.com'.
2024-01-19T00:24:16.179215Z 0 [Warning] [MY-000067] [Server] unknown variable 'loose_aurora_fast_restart_caspian_enable=1'.
2024-01-19T00:24:16.179223Z 0 [Warning] [MY-000067] [Server] unknown variable 'loose_innodb_trx_commit_allow_data_loss=0'.
2024-01-19T00:24:16.179230Z 0 [Warning] [MY-000067] [Server] unknown variable 'loose_eit_enabled=1'.
2024-01-19T00:24:16.179238Z 0 [Warning] [MY-000067] [Server] unknown variable 'loose_authentication_kerberos_service_key_tab=/rdsdbdata/activedirectory/keytab'.
2024-01-19T00:24:16.179250Z 0 [Warning] [MY-000067] [Server] unknown variable 'loose_authentication_kerberos_caseins_cmp=0'.
2024-01-19T00:24:16.179258Z 0 [Warning] [MY-000067] [Server] unknown variable 'loose_aurora_fast_restart_enable=1'.
2024-01-19T00:24:16.179266Z 0 [Warning] [MY-000067] [Server] unknown variable 'loose_rpc_v2_enabled=1'.
How can I remove or resolve these warning log entries?
B_B_Tod
(11 rep)
Jan 22, 2024, 01:00 AM
• Last activity: May 5, 2025, 05:07 PM
0
votes
1
answers
1165
views
Why a query takes too long in statistics thread state in AWS Aurora MySQL?
The following query spends too long in the statistics state and I couldn't figure out why.
DB engine - 5.7.mysql_aurora.2.07.2
DB Size - db.r5.4xlarge
Sample Query Profile output:
+--------------------------------+----------+
| Status | Duration |
+--------------------------------+----------+
| starting | 0.000023 |
| checking query cache for query | 0.000155 |
| checking permissions | 0.000009 |
| checking permissions | 0.000002 |
| checking permissions | 0.000003 |
| checking permissions | 0.000002 |
| checking permissions | 0.000009 |
| Opening tables | 0.000035 |
| init | 0.000102 |
| System lock | 0.000035 |
| optimizing | 0.000004 |
| optimizing | 0.000003 |
| optimizing | 0.000011 |
| statistics | 0.224528 |
| preparing | 0.000030 |
| Sorting result | 0.000017 |
| statistics | 0.000041 |
| preparing | 0.000013 |
| Creating tmp table | 0.000023 |
| optimizing | 0.000013 |
| statistics | 0.064207 |
| preparing | 0.000035 |
| Sorting result | 0.000025 |
| statistics | 0.000098 |
| preparing | 0.000018 |
| executing | 0.000011 |
| Sending data | 0.000007 |
| executing | 0.000003 |
| Sending data | 0.000251 |
| executing | 0.000007 |
| Sending data | 0.000003 |
| executing | 0.000002 |
| Sending data | 0.000526 |
| end | 0.000007 |
| query end | 0.000013 |
| removing tmp table | 0.000007 |
| query end | 0.000004 |
| closing tables | 0.000003 |
| removing tmp table | 0.000004 |
| closing tables | 0.000002 |
| removing tmp table | 0.000005 |
| closing tables | 0.000002 |
| removing tmp table | 0.000004 |
| closing tables | 0.000010 |
| freeing items | 0.000050 |
| storing result in query cache | 0.000007 |
| cleaned up | 0.000004 |
| cleaning up | 0.000017 |
+--------------------------------+----------+
Query
select xo.ITEM, xo.VALUE
from (
select pi.ITEM, pi.ITEM_GROUP, pi.VALUE
from TABLE_2 pi
inner join (select max(ps.EXPORTED_DATE) as max_expo, ps.ITEM
from TABLE_2 ps
inner join (
select max(pp.EFFECTIVE_DATE) max_eff_TABLE_2, pp.ITEM
from TABLE_2 pp
where pp.EFFECTIVE_DATE ...
Table 1
mysql> SHOW INDEX FROM T1;
+-------+------------+--------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+-------+------------+--------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| T1 | 0 | PRIMARY | 1 | CUSTOMER_ID | A | 3297549 | NULL | NULL | | BTREE | | |
| T1 | 0 | PRIMARY | 2 | ITEM | A | 687374784 | NULL | NULL | | BTREE | | |
| T1 | 0 | PRIMARY | 3 | EFFECTIVE_DATE | A | 1314196480 | NULL | NULL | | BTREE | | |
| T1 | 1 | t1_ix_item | 1 | ITEM | A | 2151649 | NULL | NULL | | BTREE | | |
+-------+------------+--------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
Table 2
mysql> SHOW INDEX FROM TABLE_2;
+-------+------------+-----------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_T2rt | T2cked | Null | Index_type | Comment | Index_comment |
+-------+------------+-----------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| T2 | 0 | PRIMARY | 1 | ITEM | A | 1 | NULL | NULL | | BTREE | | |
| T2 | 0 | PRIMARY | 2 | ITEM_GROUP | A | 14265 | NULL | NULL | | BTREE | | |
| T2 | 0 | PRIMARY | 3 | EFFECTIVE_DATE | A | 63663076 | NULL | NULL | | BTREE | | |
| T2 | 0 | PRIMARY | 4 | EXPORTED_DATE | A | 62464764 | NULL | NULL | | BTREE | | |
| T2 | 1 | t2_ix_item_expo | 1 | ITEM | A | 115823 | NULL | NULL | | BTREE | | |
| T2 | 1 | t2_ix_item_expo | 2 | EXPORTED_DATE | A | 13766454 | NULL | NULL | | BTREE | | |
| T2 | 1 | t2_ix_item_eff_date | 1 | ITEM | A | 115823 | NULL | NULL | | BTREE | | |
| T2 | 1 | t2_ix_item_eff_date | 2 | EFFECTIVE_DATE | A | 13766454 | NULL | NULL | | BTREE | | |
| T2 | 1 | t2_ix_item_eff_ig | 1 | ITEM | A | 115823 | NULL | NULL | | BTREE | | |
| T2 | 1 | t2_ix_item_eff_ig | 2 | EFFECTIVE_DATE | A | 13766454 | NULL | NULL | | BTREE | | |
| T2 | 1 | t2_ix_item_eff_ig | 3 | ITEM_GROUP | A | 68216912 | NULL | NULL | | BTREE | | |
| T2 | 1 | t2_idx_effective_date | 1 | EFFECTIVE_DATE | A | 79406 | NULL | NULL | | BTREE | | |
+-------+------------+-----------------------+--------------+----------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
According to this: https://dba.stackexchange.com/questions/55969/statistics-state-in-mysql-processlist
I checked the innodb_buffer_pool_size.
mysql> SHOW VARIABLES LIKE "innodb_buffer_pool_size";
+-------------------------+-------------+
| Variable_name | Value |
+-------------------------+-------------+
| innodb_buffer_pool_size | 96223625216 |
+-------------------------+-------------+
In the EXPLAIN output the row counts are minimal (it depends on the item count in the query; if the item count is 10, the number of rows is 20). Even though the row counts are minimal, why does the query spend so long in the statistics state?
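Since the time is spent while the optimizer gathers index statistics, one thing I still plan to try is refreshing the persistent statistics and checking the sampling depth; a sketch:
```
-- rebuild persistent index statistics for the tables involved
ANALYZE TABLE T1, TABLE_2;

-- how many pages are sampled when persistent stats are recalculated
SHOW VARIABLES LIKE 'innodb_stats_persistent_sample_pages';
```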
Vithulan
(101 rep)
Jun 29, 2020, 06:21 AM
• Last activity: Apr 29, 2025, 04:04 PM
0
votes
1
answers
366
views
Charset Questions on Aurora-MySQL 5.7 as a Replica to a Aurora-MySQL 8.0 Primary
We are planning to upgrade from Aurora MySQL 2.10.2 (MySQL 5.7 equivalent) to Aurora MySQL 3.03.0 (MySQL 8 equivalent). One of the rollback plans that we are considering is having another Aurora MySQL 2.10.2 replicating from Aurora MySQL 3.03.0. It looks like this:
The reason for this is that in case there are performance issues with Aurora MySQL 3.03.0, we can roll back to the Aurora MySQL 2.10.2 replica. But based on this article, we have to change the character_set_server/character_set_database settings to `utf8` and collation_server/collation_database to `utf8_unicode_ci`, because when we tried to insert data directly on Aurora MySQL 3.03.0, the Aurora MySQL 2.10.2 replication breaks with this error: `Character set '#255' is not a compiled character...`.
So after we do our upgrade, and assuming we do not need to roll back, we plan to change the settings of our Aurora MySQL 3.03.0 back to the defaults (character_set_server to `utf8mb4` and collation_server to `utf8mb4_0900_ai_ci`), possibly after a week.
I am worried about changing the default charset/collation related settings on our new Aurora MySQL 3.03.0. Is there any downside to this approach? For example, from the time of the upgrade to the time when we change back our charset/collation settings, would there be any possible data corruption/truncation because the default charset on Aurora MySQL 3.03.0 is utf8? We tested this and it seems there is none, but we wanted to ask if people have seen any issue with this. But again, I'm not sure, which is why I wanted to ask about this.
Brian Estrada
(114 rep)
Jun 7, 2023, 04:37 PM
• Last activity: Apr 18, 2025, 10:02 PM
0
votes
2
answers
1695
views
Hints on bulk loading 3.6 billion rows to InnoDB on Aurora MySQL
Hi everyone!
I'm struggling for a week to bulk load 3.6 billion rows into an InnoDB table on Aurora MySQL 5.6.10a. This table has one FK to a "Main" table and has 12 columns. The first 1.4 billion were loaded overnight, but right now my insert rate is dropping quickly.
I **disabled** `unique_check` and ... but let ... **on**.
I split the file into 506 files of around 3.84 GB each (7,000,000 rows each) and I'm using `LOAD DATA FROM S3` to load them into the table.
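Each file is loaded with a statement shaped like the sketch below; the bucket, path, and field/line delimiters are placeholders:
```
LOAD DATA FROM S3 's3://my-bucket/movement/part-001.csv'
INTO TABLE Movement
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(idMovement, idLawSuit, content, movementDate, captureDate, isReportContent,
 isDocument, contentInS3, contentS3Url, uniqueConcatId, captureOrder, movementExtraInfo);
```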
Any hints to improve this task?
Thank you very much!
**Additional details**
All other tables in my SCHEMA use InnoDB as Engine and it works fine since they are much smaller than this one. Is it a good idea to change only this table to MyISAM? What would be the implications of doing so?
My files are ordered by PK and the PK is a `BIGINT`.
CREATE TABLE `Movement` (
  `idMovement` bigint(20) NOT NULL AUTO_INCREMENT,
  `idLawSuit` bigint(20) NOT NULL,
  `content` mediumtext NOT NULL,
  `movementDate` datetime NOT NULL,
  `captureDate` datetime NOT NULL,
  `isReportContent` tinyint(4) DEFAULT NULL,
  `isDocument` tinyint(4) DEFAULT NULL,
  `contentInS3` tinyint(4) DEFAULT NULL,
  `contentS3Url` text,
  `uniqueConcatId` varchar(255) NOT NULL,
  `captureOrder` bigint(20) DEFAULT NULL,
  `movementExtraInfo` text,
  PRIMARY KEY (`idMovement`),
  KEY `idLawSuit10` (`idLawSuit`),
  CONSTRAINT `idLawSuit10` FOREIGN KEY (`idLawSuit`) REFERENCES `LawSuit` (`idLawSuit`) ON DELETE CASCADE ON UPDATE NO ACTION
) ENGINE=InnoDB AUTO_INCREMENT=1470000001 DEFAULT CHARSET=utf8
These are my InnoDB parameters:
innodb_adaptive_flushing ON
innodb_adaptive_flushing_lwm 10
innodb_adaptive_hash_index OFF
innodb_adaptive_max_sleep_delay 150000
innodb_additional_mem_pool_size 8388608
innodb_api_bk_commit_interval 5
innodb_api_disable_rowlock OFF
innodb_api_enable_binlog OFF
innodb_api_enable_mdl OFF
innodb_api_trx_level 0
innodb_aurora_enable_auto_akp OFF
innodb_autoextend_increment 64
innodb_autoinc_lock_mode 2
innodb_buffer_pool_dump_at_shutdown OFF
innodb_buffer_pool_dump_now OFF
innodb_buffer_pool_dump_pct 100
innodb_buffer_pool_filename ib_buffer_pool
innodb_buffer_pool_instances 8
innodb_buffer_pool_load_abort OFF
innodb_buffer_pool_load_at_startup OFF
innodb_buffer_pool_load_now OFF
innodb_buffer_pool_size 96223625216
innodb_change_buffer_max_size 25
innodb_change_buffering none
innodb_checksum_algorithm none
innodb_checksums OFF
innodb_cmp_per_index_enabled OFF
innodb_commit_concurrency 0
innodb_compression_failure_threshold_pct 5
innodb_compression_level 6
innodb_compression_pad_pct_max 50
innodb_concurrency_tickets 5000
innodb_data_file_path ibdata1:12M:autoextend
innodb_data_home_dir
innodb_disable_sort_file_cache OFF
innodb_doublewrite OFF
innodb_fast_shutdown 1
innodb_file_format Antelope
innodb_file_format_check ON
innodb_file_format_max Antelope
innodb_file_per_table ON
innodb_flush_log_at_timeout 1
innodb_flush_log_at_trx_commit 1
innodb_flush_method O_DIRECT
innodb_flush_neighbors 1
innodb_flushing_avg_loops 30
innodb_force_load_corrupted OFF
innodb_force_recovery 0
innodb_ft_aux_table
innodb_ft_cache_size 8000000
innodb_ft_enable_diag_print OFF
innodb_ft_enable_stopword ON
innodb_ft_max_token_size 84
innodb_ft_min_token_size 3
innodb_ft_num_word_optimize 2000
innodb_ft_result_cache_limit 2000000000
innodb_ft_server_stopword_table
innodb_ft_sort_pll_degree 2
innodb_ft_total_cache_size 640000000
innodb_ft_user_stopword_table
innodb_io_capacity 200
innodb_io_capacity_max 2000
innodb_large_prefix OFF
innodb_lock_wait_timeout 50
innodb_locks_unsafe_for_binlog OFF
innodb_log_buffer_size 8388608
innodb_log_file_size 50331648
innodb_log_files_in_group 2
innodb_log_group_home_dir ./
innodb_lru_scan_depth 1024
innodb_max_dirty_pages_pct 75
innodb_max_dirty_pages_pct_lwm 0
innodb_max_purge_lag 0
innodb_max_purge_lag_delay 0
innodb_mirrored_log_groups 1
innodb_monitor_disable
innodb_monitor_enable
innodb_monitor_reset
innodb_monitor_reset_all
innodb_old_blocks_pct 37
innodb_old_blocks_time 1000
innodb_online_alter_log_max_size 134217728
innodb_open_files 6000
innodb_optimize_fulltext_only OFF
innodb_page_size 16384
innodb_print_all_deadlocks OFF
innodb_purge_batch_size 900
innodb_purge_threads 3
innodb_random_read_ahead OFF
innodb_read_ahead_threshold 56
innodb_read_io_threads 32
innodb_read_only OFF
innodb_replication_delay 0
innodb_rollback_on_timeout OFF
innodb_rollback_segments 128
innodb_shared_buffer_pool_uses_huge_pages ON
innodb_sort_buffer_size 1048576
innodb_spin_wait_delay 6
innodb_stats_auto_recalc ON
innodb_stats_method nulls_equal
innodb_stats_on_metadata OFF
innodb_stats_persistent ON
innodb_stats_persistent_sample_pages 20
innodb_stats_sample_pages 8
innodb_stats_transient_sample_pages 8
innodb_strict_mode OFF
innodb_support_xa ON
innodb_sync_array_size 1
innodb_sync_spin_loops 30
innodb_table_locks ON
innodb_thread_concurrency 0
innodb_thread_sleep_delay 10000
innodb_undo_directory .
innodb_undo_logs 128
innodb_undo_tablespaces 0
innodb_use_native_aio OFF
innodb_use_sys_malloc ON
innodb_version 1.2.10
innodb_write_io_threads 4
Yago Carvalho
(3 rep)
Aug 13, 2019, 02:59 PM
• Last activity: Apr 15, 2025, 07:07 AM
Showing page 1 of 20 total questions