Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
1326
views
Amazon RDS change collate for mysql database in production without downtime
I saw a solution like this below:
1. create a new table like your source table.
2. alter that new table the way you want.
3. insert your data into the new table.
4. create indexes etc. as needed on the new table.
5. rename your old table to something like ..._old or whatever.
6. rename your new table to the former name of the old one.
7. copy any missing rows from the _old table to the new one.
[Reference for an above solution](https://dba.stackexchange.com/questions/6131/mysql-speed-up-execution-of-alter-table)
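For illustration, a minimal SQL sketch of the steps above, assuming a hypothetical table `orders` with a monotonically increasing `id` and `utf8mb4_unicode_ci` as the target collation (all names here are illustrative, not from the question):

```sql
-- Steps 1-4: build the converted copy.
CREATE TABLE orders_new LIKE orders;
ALTER TABLE orders_new CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
INSERT INTO orders_new SELECT * FROM orders;

-- Steps 5-6: atomic swap; both renames happen in one statement,
-- so clients never observe a missing table.
RENAME TABLE orders TO orders_old, orders_new TO orders;

-- Step 7: backfill rows written to the old table during the copy.
SET @max_id = (SELECT COALESCE(MAX(id), 0) FROM orders);
INSERT INTO orders SELECT * FROM orders_old WHERE id > @max_id;
```

Note that updates and deletes landing between the copy and the swap are not captured by an id-based backfill, which is why online-schema-change tools capture changes with triggers or the binlog instead.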
But the above solution might cause data unavailability if a large amount of data is added before the missing rows are copied from the _old table to the new one.
Is there any better solution than this, using AWS DMS, etc.?
I also want to change the collation of all tables present in the database. Is it possible to have all data replicated between two RDS DBs continuously, so that new data entered in database A gets into database B and vice versa?
Since I have around 50-60 GB of data, any good way to solve this is appreciated.
**Update:**
- I have around **50-60GB** of data.
- MySQL version: **5.7**
- I need to change collation on all my tables
Msv
(101 rep)
Jul 16, 2022, 03:45 AM
• Last activity: Aug 2, 2025, 10:03 AM
0
votes
2
answers
411
views
How to down size the storage of AWS RDS MySQL instance?
I have an RDS MySQL instance (db.m5.4xlarge) with 8.3 TB of allocated storage, and I want to shrink it to 4 TB to save costs, because it actually holds only 4 TB of data. What are the viable options?
I've tried several methods, aiming to complete it within 8 hours:
1. DMS (Database Migration Service):
When using DMS, there were index issues. If I add the indexes before the migration, it reports duplicate IDs, but I checked and couldn't find any duplicates. If I add the indexes back after the migration, the data doesn't sync.
2. Mysqldump:
It takes too long. For 450GB, it's already taking nearly 6 hours.
3. Take a snapshot and restore to smaller storage:
The options in the AWS console do not allow restoring to a volume smaller than the original size ([screenshot](https://i.sstatic.net/kEyNI6ib.png)).
user26127523
(11 rep)
Jul 6, 2024, 07:06 AM
• Last activity: Aug 2, 2025, 02:02 AM
0
votes
1
answers
1578
views
Importing huge table exhausts UNDO extents in Oracle RDS (ORA-01628)
I'm attempting to do an impdp on RDS, Oracle 12c. I'm importing only one table for this particular impdp job, but every time I try to import it, UNDO usage gets to about 50% and then the logs just say `Resumable error: ORA-01628: max # extents (32765) reached for rollback segment`.
Since this is RDS I cannot manually manage undo. I created a fresh RDS instance with a new 4TB UNDO tablespace to perform the import of just this table. I've read about creating one giant rollback segment and also about creating lots of small rollback segments to solve this problem. I've also read I can split the import into multiple parts, but I'd rather not do that if possible. Is there anything more I can do here to maybe stop the UNDO tablespace from running out of extents?
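For what it's worth, a hedged sketch of how a table-mode import can be driven through `DBMS_DATAPUMP` with the data access method pinned to direct path, which typically generates far less undo than an external-table load; the dump file name and `DATA_PUMP_DIR` directory are placeholders, not details from the question:

```sql
DECLARE
  h     NUMBER;
  state VARCHAR2(30);
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'TABLE');
  DBMS_DATAPUMP.ADD_FILE(handle    => h,
                         filename  => 'big_table.dmp',          -- placeholder
                         directory => 'DATA_PUMP_DIR',          -- placeholder
                         filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  -- Force direct-path loading instead of letting Data Pump choose.
  DBMS_DATAPUMP.SET_PARAMETER(h, 'DATA_ACCESS_METHOD', 'DIRECT_PATH');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
END;
/
```

Whether direct path is usable depends on the table (some LOB and constraint layouts force the external-table path), so treat this as an experiment rather than a guaranteed fix.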
user3150146
(1 rep)
Nov 12, 2020, 01:35 PM
• Last activity: Aug 1, 2025, 12:07 PM
0
votes
1
answers
141
views
MariaDB (RDS) going temporarily offline (with memory drop)
We have an issue with a database of ours on RDS running MariaDB 10.4.8 on a t3-large instance type.
For the past week or so, it's been sporadically inaccessible with no errors I can find. During the minute or two that it's down I can't run `SHOW PROCESSLIST`; it just hangs. It fixes itself and is back working within a few minutes. When it comes back, it looks like it has a lot of connections and queries to catch up on.
Changed params from default:
ft_min_word_len 1
innodb_buffer_pool_size {DBInstanceClassMemory*17/20}
innodb_ft_enable_stopword 1
innodb_ft_min_token_size 1
log_bin_trust_function_creators 1
max_statement_time 30
sql_mode NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
## Freeable Memory:
## Swap space:
## Performance Insights:
(graphs for the three metrics above are not preserved)
Is there any way to find out what's causing this? Is it not enough free memory? What's a normal threshold to be running at?
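A few things worth capturing from a client session the next time the stall starts (standard MySQL/MariaDB commands, nothing RDS-specific):

```sql
SHOW GLOBAL STATUS LIKE 'Threads_%';                     -- thread/connection pile-up
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_wait_free';  -- sessions waiting on free buffer pages
SHOW ENGINE INNODB STATUS;                               -- pending I/O, semaphore waits, long transactions
SHOW FULL PROCESSLIST;                                   -- what every connection is doing
```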



Edd Turtle
(113 rep)
Nov 16, 2020, 02:54 PM
• Last activity: Aug 1, 2025, 09:01 AM
0
votes
1
answers
706
views
Parameter innodb_flush_log_at_trx_commit not available in AWS Aurora 5.7
I'm trying to modify the parameter `innodb_flush_log_at_trx_commit` on an RDS Aurora 5.7 instance from the AWS RDS web console, but when I look through the parameters in the database I cannot find it.
How can I modify this parameter?
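For reference, the running value can still be read from any session even when the console's parameter list doesn't include it, which at least confirms what the engine is using:

```sql
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';
```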
JohnnyAce
(101 rep)
Sep 20, 2021, 01:08 PM
• Last activity: Jul 28, 2025, 08:01 PM
0
votes
0
answers
36
views
Can you downgrade provisioned iops on gp3 for postgresql RDS?
Doing an occasional analysis on a PostgreSQL RDS instance with gp3 storage, we are running out of provisioned IOPS ([Provisioned IOPS limit](https://i.sstatic.net/COuKETrk.png)).
Is it possible to temporarily raise the provisioned IOPS limit, e.g. just for a day or this week?
Bart Jonk
(357 rep)
Nov 25, 2024, 04:57 PM
• Last activity: Jul 28, 2025, 04:10 PM
0
votes
1
answers
38
views
How do I create a restricted postgres role for a text-2-sql app?
I'm using AGI to generate Postgres SQL (AWS RDS Postgres), but I want to prevent users asking questions about underlying tables, such as "Show me all databases" or "Show me all users", that result in queries like `SELECT * FROM pg_database` and `SELECT * FROM pg_roles`.
The working script I have for this is:

Connect to `postgres` as `postgres` (superuser):
```sql
CREATE ROLE client_creator WITH LOGIN PASSWORD '' CREATEDB CREATEROLE;
```

Connect to `postgres` as `client_creator`:
```sql
CREATE DATABASE "client_db";
CREATE ROLE "client_reader" WITH LOGIN PASSWORD '';
```

Connect to `client_db` as `client_creator`:
```sql
GRANT CONNECT, TEMPORARY ON DATABASE "client_db" TO "client_reader";
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "client_reader";
GRANT USAGE ON SCHEMA public TO "client_reader";
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO "client_reader";
REVOKE ALL PRIVILEGES ON SCHEMA pg_catalog FROM client_reader;
REVOKE ALL PRIVILEGES ON SCHEMA pg_catalog FROM public;
```
The issue is that the final two `REVOKE` commands seem to have no effect, since I can still run `SELECT * FROM pg_roles` and receive all rows.
I've also tried revoking `SELECT` on the tables in `pg_catalog` too.
Question is:
* Can I do this?
* Should I do this? Thoughts...
* I have seen comments in other posts suggesting this is a bad idea, but since each database has its own copy of pg_catalog, I thought it would not be an issue.
* I believe the generated queries should not rely on pg_catalog objects, but I may be wrong.
Stokedout
(103 rep)
Dec 29, 2024, 12:05 AM
• Last activity: Jul 28, 2025, 04:09 PM
0
votes
1
answers
65
views
RDS postgres slow read IO
We are running Postgres 14.12 in RDS and experience very slow IO reads: around 30 MB/s on index scans. We can't figure out what might be the cause. Any ideas as to what we should or could check?
**configuration**
- instance class: `db.m6idn.8xlarge` (which should support 3125 MB/s throughput)
- RAM: 128 GB
- vCPU: 32
- storage type: gp3, with 25000 IOPS (we only reach 18K) and 4000 MiB/s throughput

Most of our slow queries are due to slow IO reads.
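One way to confirm the time really goes to storage reads is `EXPLAIN (ANALYZE, BUFFERS)` with I/O timing enabled; a sketch, where the table and predicate are hypothetical and on RDS `track_io_timing` may need to be enabled via the parameter group or an `rds_superuser` session:

```sql
SET track_io_timing = on;
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM big_table            -- hypothetical table
WHERE indexed_col = 42;   -- hypothetical indexed predicate
-- In the output, "shared read=N" is blocks fetched from storage and
-- "I/O Timings: read=..." is the time spent fetching them.
```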
edena
(1 rep)
Mar 4, 2025, 04:36 PM
• Last activity: Jul 28, 2025, 04:09 PM
0
votes
1
answers
164
views
Best way to achieve HIPAA compliance in a PostgresSQL for Patient Data
So basically we are designing an application to store PII patient data. We are currently encrypting the data in AWS RDS using a KMS key; by that I mean encrypting the actual data itself in the columns. Can anyone suggest a better (less expensive) and more efficient way of storing PII data? Is the way I am currently doing it correct?
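If column-level encryption stays, one commonly used alternative to doing the crypto in the application is `pgcrypto` inside the database. A minimal sketch, where the `patients` table is hypothetical and `'key-material'` stands in for a KMS-supplied key (key management remains the hard part either way):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE patients (id int PRIMARY KEY, ssn_enc bytea);

-- Encrypt on write, decrypt on read.
INSERT INTO patients (id, ssn_enc)
VALUES (1, pgp_sym_encrypt('123-45-6789', 'key-material'));

SELECT pgp_sym_decrypt(ssn_enc, 'key-material') FROM patients WHERE id = 1;
```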
Salman Khan
(1 rep)
Mar 27, 2025, 05:14 AM
• Last activity: Jul 28, 2025, 04:09 PM
0
votes
0
answers
24
views
Is it possible to use RDS for Postgres read-replica only in direct-connection?
I couldn't find an answer to this question in the official docs, so I hope it isn't a stupid question. I have RDS for Postgres, used by multiple micro-services.
One such service is the BFF, and it requires much better performance in terms of read latency, as it is used to serve data to the UI (it only reads), while the other services do background jobs and are much more tolerant of slow queries and such.
I had a thought, which I'm not sure makes sense: add a read replica and have my BFF service connect directly to it. This would isolate the BFF from the other services so they can't affect its performance, and it would also let me give more resources (i.e. a stronger read instance) only to the service that really needs them.
I think that in DocumentDB this approach won't work: any application connected via the cluster endpoint, i.e. to the primary (and I must have at least one such app, as it is the only way to perform writes), might end up running its read queries on any read replica, so I can't really dedicate a read replica to one service.
I wonder if the same is true for RDS for Postgres, or if that isn't the case and I can have a certain application use a single read replica exclusively.
Noam
(101 rep)
Apr 13, 2025, 04:46 PM
• Last activity: Jul 28, 2025, 04:09 PM
1
votes
1
answers
67
views
SQL Server Shrinkfile Stops at 76% Due to LOB and System Data—Is Partial Shrink Possible?
**Problem:**
I am working on a SQL Server 2019 database where I deleted a large amount of user data (~500 GB).
After deletion, I attempted to shrink the data file using:
```
DBCC SHRINKFILE (N'web_Data', TARGET_SIZE_MB);
```
However, the shrink process stopped at 76% and could not proceed further.
**Analysis:**
Using `sys.dm_db_database_page_allocations` and `sys.allocation_units`, I found that:
- **LOB data and system objects** are allocated toward the **end of the file.**
- Since these allocations are not movable, the shrink operation cannot release the remaining space.
**Question:**
**How can I perform a shrink operation that reclaims only the movable free space, without getting stuck on unmovable LOB and system data?**
**What I Tried:**
- `SHRINKFILE` with a target size
- `SHRINKFILE` with its other options
- I understand that shrinking can cause fragmentation. I’m **not asking whether I should shrink**, I’m asking how to reclaim only **the free space that is actually reclaimable** without getting blocked.
- Rebuilding the database is not part of this question—I’m focusing on whether **SQL Server allows partial shrink in this scenario.**
**Environment:**
- **SQL Server Version:** 2019
- **Database Size Before Deletion:** ~1 TB
- **Deleted Data:** ~500 GB from a single large table
- **File:** Single data file in PRIMARY filegroup
**Goal:**
- Reclaim as much space as possible using `SHRINKFILE`, even if system LOB data cannot be moved.
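A hedged sketch of the usual workaround: shrink in small decrements toward the first unmovable allocation instead of one large jump, then release only the trailing free space. The sizes below are illustrative, not derived from the question:

```sql
-- Step the file down gradually; each step only has to move pages below the new boundary.
DBCC SHRINKFILE (N'web_Data', 900000);  -- target in MB, illustrative
DBCC SHRINKFILE (N'web_Data', 850000);  -- next step, illustrative
-- TRUNCATEONLY releases free space at the end of the file without moving any pages,
-- so it never gets stuck on unmovable LOB/system allocations.
DBCC SHRINKFILE (N'web_Data', TRUNCATEONLY);
```

The file ends up only as small as the last unmovable page allows, which is the "partial shrink" the question asks about.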
Shehzad Malik
(11 rep)
Jul 21, 2025, 11:28 AM
• Last activity: Jul 21, 2025, 12:11 PM
0
votes
1
answers
179
views
Where can I find more detailed MySQL deadlock logs?
We have a MySQL database in AWS RDS that was recently migrated from 5.7 to 8. One of our microservices running in ECS executes bootstrapping code at launch that iterates through a large list of table names and issues a `CREATE TABLE IF NOT EXISTS` statement for each one.
Since the migration, this bootstrapping code will predictably cause a deadlock during this loop:
```
*************************** 1. row ***************************
     ERROR_NUMBER: 1213
       ERROR_NAME: ER_LOCK_DEADLOCK
        SQL_STATE: 40001
 SUM_ERROR_RAISED: 30
SUM_ERROR_HANDLED: 0
       FIRST_SEEN: 2024-11-25 15:12:29
        LAST_SEEN: 2024-11-25 21:11:51
```
The deadlocks do not show up in the InnoDB status output, so I'm assuming this may be a result of metadata locking. I would appreciate if someone could please point me in the direction of where I could find any kind of additional logging about these deadlocks.
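A few places where MySQL 8.0 records more detail than the error summary above; note that on RDS `innodb_print_all_deadlocks` is changed through the parameter group rather than `SET GLOBAL`:

```sql
-- Write every InnoDB deadlock (not just the latest) to the error log:
SET GLOBAL innodb_print_all_deadlocks = ON;

-- If the waits are metadata locks rather than row locks, they show up here
-- (the wait/lock/metadata/sql/mdl instrument is enabled by default in 8.0):
SELECT object_type, object_schema, object_name, lock_type, lock_status, owner_thread_id
FROM performance_schema.metadata_locks;

-- InnoDB data-lock waits, new in 8.0:
SELECT * FROM performance_schema.data_lock_waits;
```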
GS-Scooter
Nov 25, 2024, 09:28 PM
• Last activity: Jul 20, 2025, 02:09 AM
2
votes
1
answers
531
views
High Write Count/sec in my RDS server
Recently we have encountered a serious problem since we moved our MySQL database from EC2 to RDS.
We did have high CPU usage before moving to RDS, but our website was fast for our customers. After setting up the instance in RDS, our speed dropped dramatically for no apparent reason.
We checked our metrics on RDS and found almost 100 write counts/sec at any time of day; even though we do not have many customers at night, the write count/sec still remains around 100.
We have also checked our general_log and slow_log files, but we cannot tell where the write operations come from.
Has anyone experienced similar problems?
Our RDS configuration: MySQL 5.6.23, db.r3.large
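On 5.6, the statement digest table in `performance_schema` is one way to see which statements account for the writes (assuming the statement instruments are enabled):

```sql
SELECT digest_text,
       count_star,
       sum_rows_affected
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_rows_affected DESC
LIMIT 10;
```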
**Our metrics snapshot:** (graphs not preserved)
Many Thanks
tim
(29 rep)
Aug 25, 2015, 07:20 AM
• Last activity: Jul 18, 2025, 03:02 AM
0
votes
1
answers
152
views
Oracle AWS package errors when removing user
We are getting the following error when attempting to remove a user from our Oracle 11g database. A dump from an Amazon RDS database was imported (and failed), and now we see this:
```
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 2
ORA-04045: errors during recompilation/revalidation of {USER}.RDSADMIN
ORA-04067: not executed, package body "{USER}.RDSADMIN" does not exist
ORA-06508: PL/SQL: could not find program unit being called: "{USER}.RDSADMIN"
ORA-06512: at line 2
ORA-06508: PL/SQL: could not find program unit being called: "{USER}.RDSADMIN"
ORA-06512: at line 2
```
How do we remove this {USER} and its RDSADMIN package?
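A hedged sketch of the usual cleanup, run as a privileged user: ORA-04067 says the package body is missing while the spec still exists and fails revalidation, so dropping the orphaned spec first should unblock dropping the user.

```sql
-- {USER} is the schema name from the errors above; substitute the real name.
DROP PACKAGE {USER}.RDSADMIN;  -- DROP PACKAGE removes the spec (and body, if any)
DROP USER {USER} CASCADE;
```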
Thanks in advance
C
KS1
(201 rep)
Mar 12, 2019, 02:47 PM
• Last activity: Jul 17, 2025, 09:02 PM
1
votes
1
answers
5039
views
AWS RDS Postgres: How to diagnose CheckpointLag and potential slowups using AWS' Monitoring suite?
We are currently hosting a Postgres RDS database and our team is noticing a slowdown in our querying service. I'm noticing a spike in the metric `CheckpointLag`, and I've been tasked with trying to find where this occurs specifically on the AWS side of things.
In monitoring detailed performance, we've seen that our queries are much below (20%) what our expected average active sessions (AAS) are said to reach. I also monitored the queries individually with `EXPLAIN ANALYZE`, and the most extreme query takes 0.5s to compute. This leads me to believe there's something else taking too long.
After checking other potential metrics (CPU, BurstBalance, etc., all appear normal), there is one metric, `CheckpointLag`, which spikes under use and which I can't find documentation on. I can't find what it means or the *acceptable* value we should expect with a `db.m4.xlarge`. With no to low usage, it appears to be ~140 seconds; under normal, expected usage it jumps to ~400 seconds.
I'm asking what this metric really means, whether these values are *expected* or *normal*, and whether there are any other ways I can see if my RDS instance is the cause of my slowdown?
**EDIT:**
Checkpoint lag is defined as a metric here: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-metrics.html with the description "The amount of time since the most recent checkpoint". That is fairly vague and hard to decipher. For my metrics, it appears RDS is pulling from this pre-defined metric, but if there's a way to dive deeper into how it is queried from the instance, please let me know.
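The metric can be reproduced from inside Postgres, which makes its meaning concrete: it is simply the age of the last completed checkpoint.

```sql
SELECT now() - checkpoint_time AS checkpoint_lag
FROM pg_control_checkpoint();
```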
**Follow-Up**
I ended up editing queries to group results and reduce the number of rows being exported at one time, as our team was querying far too many rows to begin with. With this, CheckpointLag went down, and I associated it with the time taken to reach or perform queries on RDS (duh!), but I still have not pinpointed its exact meaning. There must have been some bottleneck in outputting all of the rows, causing the "lag" to rise...
Andrew Narvaez
(11 rep)
Dec 15, 2023, 09:42 PM
• Last activity: Jul 16, 2025, 03:35 PM
0
votes
1
answers
159
views
MySQL: Reason for Degraded performance of a single inner join
We have two tables in our MYSQL 5.7 Aurora database: CUSTOMER_ORDER and BATCH. Customer order can have only one batch associated and it is not mandatory to have one.
Create table statement of CUSTOMER_ORDER table:
```sql
CREATE TABLE `CUSTOMER_ORDER` (
  `CLIENT_ID` varchar(32) COLLATE utf8mb4_bin NOT NULL,
  `ORDER_ID` varchar(64) COLLATE utf8mb4_bin NOT NULL,
  `ORDER` json NOT NULL,
  `ORDER_DATE` date GENERATED ALWAYS AS (cast(json_unquote(json_extract(`ORDER`, '$.date')) as date)) VIRTUAL,
  `TEAM_ID` varchar(32) COLLATE utf8mb4_bin GENERATED ALWAYS AS (json_unquote(json_extract(`ORDER`, '$.teamId.teamId'))) VIRTUAL,
  `ORDER_SOURCE` varchar(32) COLLATE utf8mb4_bin GENERATED ALWAYS AS (json_unquote(json_extract(`ORDER`, '$.orderSource'))) VIRTUAL,
  `ORDER_STATUS` varchar(32) COLLATE utf8mb4_bin GENERATED ALWAYS AS (json_unquote(json_extract(`ORDER`, '$.status.status'))) VIRTUAL,
  `EFFECTIVE_STATUS` varchar(32) COLLATE utf8mb4_bin GENERATED ALWAYS AS (json_unquote(json_extract(`ORDER`, '$.effectiveStatus'))) VIRTUAL,
  `CREATED_ON` timestamp(6) NOT NULL,
  `UPDATED_ON` timestamp(6) NOT NULL,
  `ADDED_ON` timestamp(6) NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6),
  `BATCH_ID` varchar(128) COLLATE utf8mb4_bin GENERATED ALWAYS AS (json_unquote(json_extract(`ORDER`, '$.batchId.batchId'))) VIRTUAL,
  PRIMARY KEY (`CLIENT_ID`, `ORDER_ID`),
  KEY `order_date_team_idx` (`CLIENT_ID`, `ORDER_DATE`, `TEAM_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
```
Create table statement for BATCH table:
```sql
CREATE TABLE `BATCH` (
  `CLIENT_ID` varchar(32) COLLATE utf8mb4_bin NOT NULL,
  `BATCH_ID` varchar(128) COLLATE utf8mb4_bin NOT NULL,
  `BATCH_DATE` date NOT NULL,
  `BATCH_STATUS` varchar(32) COLLATE utf8mb4_bin NOT NULL,
  `BATCH_SLA` varchar(32) COLLATE utf8mb4_bin NOT NULL,
  `BATCH` json NOT NULL,
  `EMPLOYEE_ID` varchar(32) COLLATE utf8mb4_bin DEFAULT NULL,
  `EMPLOYEE_PERSONA_ID` varchar(32) COLLATE utf8mb4_bin DEFAULT NULL,
  `VEHICLE_ID` varchar(32) COLLATE utf8mb4_bin DEFAULT NULL,
  `VEHICLE_MODEL_ID` varchar(32) COLLATE utf8mb4_bin DEFAULT NULL,
  `RECORD_VERSION` int(11) NOT NULL,
  `CREATED_ON` timestamp(3) NOT NULL,
  `UPDATED_ON` timestamp(3) NOT NULL,
  `ADDED_ON` timestamp(3) NULL DEFAULT CURRENT_TIMESTAMP(3) ON UPDATE CURRENT_TIMESTAMP(3),
  `MINIMAL_BATCH` json DEFAULT NULL,
  `BATCH_ID` varchar(64) COLLATE utf8mb4_bin GENERATED ALWAYS AS (json_unquote(json_extract(`MINIMAL_BATCH`, '$.batch.planId.sourceId'))) VIRTUAL,
  `PLAN_ID` varchar(64) COLLATE utf8mb4_bin GENERATED ALWAYS AS (json_unquote(json_extract(`MINIMAL_BATCH`, '$.batch.planId.planId'))) VIRTUAL,
  PRIMARY KEY (`CLIENT_ID`, `BATCH_ID`),
  KEY `date_rider_idx` (`CLIENT_ID`, `BATCH_DATE`, `EMPLOYEE_ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
```
And I am using the following query to find out the count of customer orders for a given client for a given date:
```sql
SELECT COUNT(1)
FROM CUSTOMER_ORDER AS customer_order
INNER JOIN BATCH AS batch
    ON customer_order.CLIENT_ID = batch.CLIENT_ID
   AND customer_order.BATCH_ID = batch.BATCH_ID
WHERE customer_order.CLIENT_ID = 'clientA'
  AND ORDER_DATE = '2021-05-01';
```
The reason I am doing this join is to further filter customer orders based on the batch. The problem I am facing with this query is that it takes on the order of minutes to execute for clients who have a large number of customer orders (~20k-100k) for a given date, even without any extra filters on the batch table.
The output of the EXPLAIN statement for the query is given below:
```
id,select_type,table,partitions,type,possible_keys,key,key_len,ref,rows,filtered,Extra
1,SIMPLE,customer_order,NULL,ref,"PRIMARY,order_date_team_idx,batch_idx",PRIMARY,130,const,1,10.00,"Using where"
1,SIMPLE,batch,NULL,eq_ref,"PRIMARY,date_rider_idx,team_id_idx",PRIMARY,644,"const,locus_devo.customer_order.BATCH_ID",1,100.00,"Using index"
```
Can you please help me identify the root cause of underperformance of this query?
PaulDaviesC
(101 rep)
Jun 2, 2021, 06:41 AM
• Last activity: Jul 15, 2025, 11:04 PM
1
votes
1
answers
160
views
Match binlog position in 2 different MySQL replicas
I'm trying to implement a MySQL upgrade approach that would require either no downtime at all or downtime counted in seconds. One of the problems is that these databases are RDS MySQL databases, so there are no SUPER permissions, which gives less control over what you can do with replicas. Anyway, this is how I see it can work.
Let's have simplified cluster schema:
M-->R-->--DMS-->--RE
Where:
* M - main read/write database
* R - read replica that used for external replication via AWS DMS
* RE - external replication database where we put that data
We need to upgrade database `M` (to `M1`). That would also require updating the read replica `R` (to `R1`), but we don't want to re-sync everything for `DMS`, so we would like to continue that replication to `RE` from the position we were at before the upgrade; the final schema should look like `M1-->R1-->--DMS-->--RE`.
So, to achieve that, we start a new read replica from `M`, promote it, and create a new DMS replication (`DMS1`) that syncs changes from `M` into `M1`, so the schema will look like:
M-->R-->--DMS-->--RE
|
DMS1-->M1
Then from `M1` I can start an `R1` replica and then continue the `DMS` replication from `R1`, so we will have something like:
M-->R
|
DMS1-->M1-->R1-->--DMS-->--RE
and after that we can get rid of `M` and `R`.
Now the question: since in the new cluster I'll continue from the `R1` binlog, which has completely different positions than `R`, how can I get the position in `R1` that represents the same point as a given position in `R`?
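One way this mapping gets sidestepped entirely is GTID-based replication: a GTID set names the same transactions on every server, so "the same position" can be compared directly between `R` and `R1`. A sketch, assuming GTIDs are enabled on the cluster (the wait function requires MySQL 5.7+, and the GTID set shown is a made-up example value):

```sql
-- On R: note the set of executed transactions.
SELECT @@GLOBAL.gtid_executed;

-- On R1: wait until the same set has been applied, then R1's own binlog
-- coordinates correspond to that point.
SELECT WAIT_FOR_EXECUTED_GTID_SET('3e11fa47-71ca-11e1-9e33-c80aa9429562:1-5', 60);
SHOW MASTER STATUS;
```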
Dmitrij Kultasev
(191 rep)
Aug 24, 2022, 02:17 PM
• Last activity: Jul 14, 2025, 06:03 PM
1
votes
1
answers
154
views
DROP and TRUNCATE table on MySQL 8.0.35 freezes the DB
We recently upgraded from MySQL 5.7 to MySQL 8.0.35 in AWS RDS.
There is a log table that services write to, and we rotate it daily using the `RENAME TABLE` command.
Later, when we perform a `DROP` or `TRUNCATE` on this table (~10-15 GB size), the DB freezes for 5-6 minutes. The table is not used by any services when we run the `DROP` or `TRUNCATE`.
During those minutes
* DB does not report any metrics
* Clients get timeouts and disconnect
* OS metrics show significant drop in IO and CPU
Table:
```sql
CREATE TABLE `test_test_test` (
  `rowId` bigint NOT NULL DEFAULT '0',
  `ruId` varchar(200) NOT NULL,
  `sessionId` varchar(200) NOT NULL,
  `transSource` varchar(255) DEFAULT NULL,
  `dcsnType` varchar(75) DEFAULT NULL,
  `dcsnTypeManual` varchar(75) DEFAULT NULL,
  `dcsnReason` varchar(255) DEFAULT NULL,
  `dcsnPlcRsn` varchar(100) DEFAULT NULL,
  `firstDcnRule` varchar(100) DEFAULT NULL,
  `payload` text,
  `insertTime` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
  `status` varchar(50) DEFAULT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
```
## OS Metrics
Notice the DB metrics (bottom 3 graphs) are extrapolated (no measurements) during 07:33-07:41

This happens with both `DROP` and `TRUNCATE`.
Would appreciate any theories, explanations or directions to explore.
Thanks
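One frequently suggested experiment for stalls when dropping very large InnoDB tables is ruling out the adaptive hash index, which historically forced `DROP TABLE` to scan the buffer pool. This is only a hypothesis to test, not a confirmed cause here; on RDS the variable is changed through the parameter group, shown below as plain SQL for reference:

```sql
SET GLOBAL innodb_adaptive_hash_index = OFF;  -- then retry the DROP/TRUNCATE and compare
```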
AlexV
(125 rep)
Feb 18, 2024, 10:57 AM
• Last activity: Jul 10, 2025, 08:02 PM
1
votes
1
answers
800
views
How to reduce MySQL performance_schema memory usage caused by thousands of unique hosts?
I have a problem with some performance_schema tables (`*_by_host_*` and `*_by_account_*`) taking up too much RAM. It is caused by clients connecting from a /19 subnet (8190 unique addresses). The connections aren't long-lasting: clients connect, perform a task and quit. Which settings can I tweak to limit the memory usage, and what are the downsides of each limit?
The DB is on AWS RDS and I would like to keep Performance Insights with as much data as possible.
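The tables named above are bounded by two sizing variables; both are read-only at runtime, so on RDS they go in the parameter group. A value of 0 disables per-host/per-account aggregation entirely (the `Performance_schema_*_lost` counters then grow instead), while a small positive value caps the row count:

```sql
SHOW VARIABLES LIKE 'performance_schema_hosts_size';      -- -1 means autosized
SHOW VARIABLES LIKE 'performance_schema_accounts_size';

-- How many host/account rows could not be recorded under the current limits:
SHOW GLOBAL STATUS LIKE 'Performance_schema_hosts_lost';
SHOW GLOBAL STATUS LIKE 'Performance_schema_accounts_lost';
```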
Karol Jędrzejczyk
(11 rep)
Jun 4, 2021, 10:06 AM
• Last activity: Jul 9, 2025, 10:00 PM
0
votes
2
answers
177
views
MYSQL tuning to optimize CPU usage
I sometimes have a spike in CPU to 70% (usually it sits around 20%) and am looking at whether there is anything I can do to improve this. I am using MySQL RDS 5.6.44 on an m4.large instance on AWS.
Variables and version: https://pastebin.com/1b1WQTk3
Here are my settings:
mysql> show global status;
| Aborted_clients | 4
| Aborted_connects | 1068
| Binlog_cache_disk_use | 189
| Binlog_cache_use | 216687
| Binlog_stmt_cache_disk_use | 0
| Binlog_stmt_cache_use | 58
| Bytes_received | 10392043028
| Bytes_sent | 298689136353
| Com_admin_commands | 329463
| Com_assign_to_keycache | 0
| Com_alter_db | 0
| Com_alter_db_upgrade | 0
| Com_alter_event | 0
| Com_alter_function | 0
| Com_alter_procedure | 0
| Com_alter_server | 0
| Com_alter_table | 0
| Com_alter_tablespace | 0
| Com_alter_user | 0
| Com_analyze | 0
| Com_begin | 9635
| Com_binlog | 0
| Com_call_procedure | 0
| Com_change_db | 91419
| Com_change_master | 0
| Com_check | 0
| Com_checksum | 0
| Com_commit | 9366
| Com_create_db | 0
| Com_create_event | 0
| Com_create_function | 0
| Com_create_index | 0
| Com_create_procedure | 0
| Com_create_server | 0
| Com_create_table | 0
| Com_create_trigger | 0
| Com_create_udf | 0
| Com_create_user | 0
| Com_create_view | 0
| Com_dealloc_sql | 0
| Com_delete | 40905
| Com_delete_multi | 24
| Com_do | 0
| Com_drop_db | 0
| Com_drop_event | 0
| Com_drop_function | 0
| Com_drop_index | 0
| Com_drop_procedure | 0
| Com_drop_server | 0
| Com_drop_table | 0
| Com_drop_trigger | 0
| Com_drop_user | 0
| Com_drop_view | 0
| Com_empty_query | 0
| Com_execute_sql | 0
| Com_flush | 476
| Com_get_diagnostics | 0
| Com_grant | 0
| Com_ha_close | 0
| Com_ha_open | 0
| Com_ha_read | 0
| Com_help | 0
| Com_insert | 80355
| Com_insert_select | 0
| Com_install_plugin | 0
| Com_kill | 0
| Com_load | 0
| Com_lock_tables | 0
| Com_optimize | 0
| Com_preload_keys | 0
| Com_prepare_sql | 0
| Com_purge | 0
| Com_purge_before_date | 427
| Com_release_savepoint | 0
| Com_rename_table | 0
| Com_rename_user | 0
| Com_repair | 0
| Com_replace | 21869
| Com_replace_select | 3
| Com_reset | 0
| Com_resignal | 0
| Com_revoke | 0
| Com_revoke_all | 0
| Com_rollback | 2
| Com_rollback_to_savepoint | 0
| Com_savepoint | 0
| Com_select | 12146517
| Com_set_option | 3210675
| Com_signal | 0
| Com_show_binlog_events | 0
| Com_show_binlogs | 1
| Com_show_charsets | 0
| Com_show_collations | 0
| Com_show_create_db | 0
| Com_show_create_event | 0
| Com_show_create_func | 0
| Com_show_create_proc | 0
| Com_show_create_table | 90174
| Com_show_create_trigger | 0
| Com_show_databases | 4
| Com_show_engine_logs | 0
| Com_show_engine_mutex | 0
| Com_show_engine_status | 0
| Com_show_events | 0
| Com_show_errors | 0
| Com_show_fields | 90967
| Com_show_function_code | 0
| Com_show_function_status | 740
| Com_show_grants | 1
| Com_show_keys | 0
| Com_show_master_status | 60
| Com_show_open_tables | 0
| Com_show_plugins | 0
| Com_show_privileges | 0
| Com_show_procedure_code | 0
| Com_show_procedure_status | 740
| Com_show_processlist | 18217
| Com_show_profile | 0
| Com_show_profiles | 0
| Com_show_relaylog_events | 0
| Com_show_slave_hosts | 0
| Com_show_slave_status | 2187
| Com_show_status | 2544
| Com_show_storage_engines | 0
| Com_show_table_status | 89942
| Com_show_tables | 536996
| Com_show_triggers | 89942
| Com_show_variables | 72924
| Com_show_warnings | 0
| Com_slave_start | 0
| Com_slave_stop | 0
| Com_stmt_close | 1
| Com_stmt_execute | 132208
| Com_stmt_fetch | 1
| Com_stmt_prepare | 25
| Com_stmt_reprepare | 20
| Com_stmt_reset | 0
| Com_stmt_send_long_data | 0
| Com_truncate | 78
| Com_uninstall_plugin | 0
| Com_unlock_tables | 712
| Com_update | 170586
| Com_update_multi | 0
| Com_xa_commit | 0
| Com_xa_end | 0
| Com_xa_prepare | 0
| Com_xa_recover | 0
| Com_xa_rollback | 0
| Com_xa_start | 0
| Compression | OFF
| Connection_errors_accept | 0
| Connection_errors_internal | 0
| Connection_errors_max_connections | 0
| Connection_errors_peer_address | 0
| Connection_errors_select | 0
| Connection_errors_tcpwrap | 0
| Connections | 4450022
| Created_tmp_disk_tables | 414618
| Created_tmp_files | 6582
| Created_tmp_tables | 3908039
| Delayed_errors | 0
| Delayed_insert_threads | 0
| Delayed_writes | 0
| Flush_commands | 1
| Handler_commit | 12464012
| Handler_delete | 16467
| Handler_discover | 0
| Handler_external_lock | 33320128
| Handler_mrr_init | 0
| Handler_prepare | 575726
| Handler_read_first | 96176049
| Handler_read_key | 435827791
| Handler_read_last | 7963
| Handler_read_next | 517813660
| Handler_read_prev | 1679848
| Handler_read_rnd | 25957740
| Handler_read_rnd_next | 722327597
| Handler_rollback | 3880
| Handler_savepoint | 0
| Handler_savepoint_rollback | 0
| Handler_update | 1355412
| Handler_write | 122420271
| Innodb_buffer_pool_dump_status | not started
| Innodb_buffer_pool_load_status | Buffer pool(s) load
| Innodb_buffer_pool_pages_data | 130272
| Innodb_buffer_pool_bytes_data | 2134376448
| Innodb_buffer_pool_pages_dirty | 4
| Innodb_buffer_pool_bytes_dirty | 65536
| Innodb_buffer_pool_pages_flushed | 868226
| Innodb_buffer_pool_pages_free | 800
| Innodb_buffer_pool_pages_misc | 0
| Innodb_buffer_pool_pages_total | 131072
| Innodb_buffer_pool_read_ahead_rnd | 0
| Innodb_buffer_pool_read_ahead | 1000051
| Innodb_buffer_pool_read_ahead_evicted | 56726
| Innodb_buffer_pool_read_requests | 2288688850
| Innodb_buffer_pool_reads | 10633702
| Innodb_buffer_pool_wait_free | 0
| Innodb_buffer_pool_write_requests | 3262668
| Innodb_data_fsyncs | 526541
| Innodb_data_pending_fsyncs | 0
| Innodb_data_pending_reads | 0
| Innodb_data_pending_writes | 0
| Innodb_data_read | 191240474624
| Innodb_data_reads | 11672411
| Innodb_data_writes | 1275287
| Innodb_data_written | 29277500416
| Innodb_dblwr_pages_written | 868226
| Innodb_dblwr_writes | 119388
| Innodb_have_atomic_builtins | ON
| Innodb_log_waits | 0
| Innodb_log_write_requests | 1485837
| Innodb_log_writes | 262034
| Innodb_os_log_fsyncs | 286824
| Innodb_os_log_pending_fsyncs | 0
| Innodb_os_log_pending_writes | 0
| Innodb_os_log_written | 814815744
| Innodb_page_size | 16384
| Innodb_pages_created | 18711
| Innodb_pages_read | 11672394
| Innodb_pages_written | 868226
| Innodb_row_lock_current_waits | 0
| Innodb_row_lock_time | 194
| Innodb_row_lock_time_avg | 5
| Innodb_row_lock_time_max | 20
| Innodb_row_lock_waits | 36
| Innodb_rows_deleted | 16468
| Innodb_rows_inserted | 78261
| Innodb_rows_read | 1187774810
| Innodb_rows_updated | 216800
| Innodb_num_open_files | 11
| Innodb_truncated_status_writes | 0
| Innodb_available_undo_logs | 128
| Key_blocks_not_flushed | 0
| Key_blocks_unused | 13396
| Key_blocks_used | 370
| Key_read_requests | 47246510
| Key_reads | 162
| Key_write_requests | 17276565
| Key_writes | 0
| Last_query_cost | 0.000000
| Last_query_partial_plans | 0
| Max_used_connections | 196
| Not_flushed_delayed_rows | 0
| Open_files | 6
| Open_streams | 0
| Open_table_definitions | 10000
| Open_tables | 10000
| Opened_files | 1951917
| Opened_table_definitions | 104279
| Opened_tables | 503015
| Performance_schema_accounts_lost | 0
| Performance_schema_cond_classes_lost | 0
| Performance_schema_cond_instances_lost | 0
| Performance_schema_digest_lost | 829187
| Performance_schema_file_classes_lost | 0
| Performance_schema_file_handles_lost | 0
| Performance_schema_file_instances_lost | 1893050
| Performance_schema_hosts_lost | 0
| Performance_schema_locker_lost | 0
| Performance_schema_mutex_classes_lost | 0
| Performance_schema_mutex_instances_lost | 0
| Performance_schema_rwlock_classes_lost | 0
| Performance_schema_rwlock_instances_lost | 14905846
| Performance_schema_session_connect_attrs_lost | 0
| Performance_schema_socket_classes_lost | 0
| Performance_schema_socket_instances_lost | 0
| Performance_schema_stage_classes_lost | 0
| Performance_schema_statement_classes_lost | 0
| Performance_schema_table_handles_lost | 0
| Performance_schema_table_instances_lost | 72450
| Performance_schema_thread_classes_lost | 0
| Performance_schema_thread_instances_lost | 0
| Performance_schema_users_lost | 0
| Prepared_stmt_count | 4
| Qcache_free_blocks | 530
| Qcache_free_memory | 412104
| Qcache_hits | 43263242
| Qcache_inserts | 11619205
| Qcache_lowmem_prunes | 8809420
| Qcache_not_cached | 526837
| Qcache_queries_in_cache | 23516
| Qcache_total_blocks | 72253
| Queries | 64820122
| Questions | 64490244
| Rsa_public_key |
| Select_full_join | 112891
| Select_full_range_join | 648
| Select_range | 144436
| Select_range_check | 20
| Select_scan | 1417117
| Slave_heartbeat_period | 1800.000
| Slave_last_heartbeat |
| Slave_open_temp_tables | 0
| Slave_received_heartbeats | 0
| Slave_retried_transactions | 0
| Slave_running | OFF
| Slow_launch_threads | 0
| Slow_queries | 58
| Sort_merge_passes | 3382
| Sort_range | 1551341
| Sort_rows | 27351653
| Sort_scan | 2838100
| Ssl_accept_renegotiates | 0
| Ssl_accepts | 0
| Ssl_callback_cache_hits | 0
| Ssl_cipher |
| Ssl_cipher_list |
| Ssl_client_connects | 0
| Ssl_connect_renegotiates | 0
| Ssl_ctx_verify_depth | 18446744073709551615
| Ssl_ctx_verify_mode | 5
| Ssl_default_timeout | 0
| Ssl_finished_accepts | 0
| Ssl_finished_connects | 0
| Ssl_server_not_after |
| Ssl_server_not_before |
| Ssl_session_cache_hits | 0
| Ssl_session_cache_misses | 0
| Ssl_session_cache_mode | SERVER
| Ssl_session_cache_overflows | 0
| Ssl_session_cache_size | 128
| Ssl_session_cache_timeouts | 0
| Ssl_sessions_reused | 0
| Ssl_used_session_cache_entries | 0
| Ssl_verify_depth | 0
| Ssl_verify_mode | 0
| Ssl_version |
| Table_locks_immediate | 16649877
| Table_locks_waited | 0
| Table_open_cache_hits | 16611281
| Table_open_cache_misses | 503015
| Table_open_cache_overflows | 492791
| Tc_log_max_pages_used | 0
| Tc_log_page_size | 0
| Tc_log_page_waits | 0
| Threads_cached | 95
| Threads_connected | 9
| Threads_created | 196
| Threads_running | 2
| Uptime | 128056
| Uptime_since_flush_status | 128056
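One concrete thing the dump above points at: `Opened_tables` is 503,015 with `Table_open_cache_overflows` at 492,791, i.e. nearly every cache miss overflowed the table cache, and reopening tables costs CPU. Checking and raising the cache (via the parameter group on RDS) is a cheap experiment:

```sql
SHOW VARIABLES LIKE 'table_open_cache';
SHOW GLOBAL STATUS LIKE 'Table_open_cache%';
SHOW GLOBAL STATUS LIKE 'Opened_tables';
```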
Chris Muench
(711 rep)
Sep 30, 2020, 05:28 PM
• Last activity: Jul 6, 2025, 09:05 AM