Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
1044
views
How to install keyring_aws plugin in MySQL?
I am looking for a way to install the keyring_aws plugin in MySQL Community Edition. Is it possible? I have gone through a couple of docs, > https://dev.mysql.com/doc/refman/5.7/en/keyring-aws-plugin.html > https://dev.mysql.com/doc/refman/5.7/en/keyring-installation.html And also tried to find keyring_aws....
I am looking for a way to install the keyring_aws plugin in MySQL Community Edition.
Is it possible?
I have gone through a couple of docs:
> https://dev.mysql.com/doc/refman/5.7/en/keyring-aws-plugin.html
> https://dev.mysql.com/doc/refman/5.7/en/keyring-installation.html
I also tried to find the keyring_aws.so file, but couldn't find it anywhere.
I checked in the MySQL plugin dir as well:
root@server:~# ls -l /usr/lib/mysql/plugin/
total 644
-rw-r--r-- 1 root root 21224 Jan 22 17:26 adt_null.so
-rw-r--r-- 1 root root 6288 Jan 22 17:26 auth_socket.so
-rw-r--r-- 1 root root 44144 Jan 22 17:26 connection_control.so
-rw-r--r-- 1 root root 108696 Jan 22 17:26 innodb_engine.so
-rw-r--r-- 1 root root 88608 Jan 22 17:26 keyring_file.so
-rw-r--r-- 1 root root 154592 Jan 22 17:26 libmemcached.so
-rw-r--r-- 1 root root 9848 Jan 22 17:26 locking_service.so
-rw-r--r-- 1 root root 10840 Jan 22 17:26 mypluglib.so
-rw-r--r-- 1 root root 6288 Jan 22 17:26 mysql_no_login.so
-rw-r--r-- 1 root root 56064 Jan 22 17:26 rewriter.so
-rw-r--r-- 1 root root 56936 Jan 22 17:26 semisync_master.so
-rw-r--r-- 1 root root 14768 Jan 22 17:26 semisync_slave.so
-rw-r--r-- 1 root root 27568 Jan 22 17:26 validate_password.so
-rw-r--r-- 1 root root 31296 Jan 22 17:26 version_token.so
Can we build this .so file from scratch? If yes, where can I find the source?
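For reference, a minimal check (a sketch, assuming a stock MySQL 5.7 Community install) of which keyring plugins the server actually knows about:
SELECT PLUGIN_NAME, PLUGIN_STATUS, PLUGIN_LIBRARY
FROM INFORMATION_SCHEMA.PLUGINS
WHERE PLUGIN_NAME LIKE 'keyring%';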
karthikeayan
(193 rep)
Feb 28, 2019, 12:33 PM
• Last activity: Aug 6, 2025, 02:04 AM
0
votes
0
answers
16
views
AWS Aurora MySQL table archive running slow for one table
I'm working on archiving a bunch of tables in an environment where archiving was never done, with some data going back 10 years. I've written a script to perform the work, which loops through the primary key (an autoincrement `bigint`) *n* rows at a time, calling a procedure to archive the data to a...
I'm working on archiving a bunch of tables in an environment where archiving was never done, with some data going back 10 years. I've written a script to perform the work, which loops through the primary key (an autoincrement bigint) *n* rows at a time, calling a procedure to archive the data to a separate table and then deleting that same data from the main table. I'm doing it in small batches to prevent any long-term locking of the main tables. It also sleeps in between each loop iteration. Batch size and sleep time are configurable via a config file. On my test system, for this table, I'm using a batch size of 1000 and a sleep time of 0. Instance class is r7g.4xl.
Most tables archive at several thousand rows per second, which is acceptable. But I have one table whose archiving is going very slowly; averaging under 550 rows/sec. There is no other activity in the database (there are other archives running against other DBs in the cluster at the same time, but killing them didn't improve the performance of this one). Here's the table schema (the schema for the archive table is identical):
CREATE TABLE `inbox_item` (
  `id` bigint NOT NULL AUTO_INCREMENT,
  `user_id` bigint NOT NULL,
  `template_id` bigint NOT NULL,
  `url` varchar(4000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `created_at` datetime NOT NULL,
  `hash` varchar(128) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `parameters` varchar(4000) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `hash_uidx` (`hash`),
  KEY `template_id_idx` (`template_id`),
  KEY `user_id_created_at_idx` (`user_id`,`created_at`)
) ENGINE=InnoDB AUTO_INCREMENT=442872663 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Note that while there are two large varchars, the total actual data width is under 300 bytes. Here's the procedure that's being called:
CREATE DEFINER=`root`@`%` PROCEDURE `archive_inbox_item_proc`(IN pkmin bigint, IN pkmax bigint, IN querymax bigint)
begin
    declare exit handler for sqlexception
    begin
        get diagnostics condition 1
            @err = MYSQL_ERRNO, @msg = MESSAGE_TEXT;
        select -1;
        select concat('Error ', cast(@err as char), ': ', @msg) 'Error';
        rollback;
    end;
    start transaction;
    insert ignore into `inbox`.`inbox_item_archive`
        select arctable.* from `inbox`.`inbox_item` as arctable
        where created_at = pkmin and arctable.id = pkmin and arctable.id < querymax and arctable.id <= pkmax;
    select row_count();
    commit;
end
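As a sanity check, a hedged sketch of confirming that the batch SELECT really does a primary-key range scan on inbox_item (the literal id values below are hypothetical stand-ins for the procedure parameters):
EXPLAIN
SELECT arctable.*
FROM inbox.inbox_item AS arctable
WHERE arctable.id >= 440000000   -- pkmin (hypothetical)
  AND arctable.id <  440001000;  -- batch upper bound (hypothetical)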
pkmin is always the actual minimum pkey value. There are no foreign keys or triggers referencing the table. Here's the table status:
Name: inbox_item
Engine: InnoDB
Version: 10
Row_format: Dynamic
Rows: 299879061
Avg_row_length: 243
Data_length: 72988737536
Max_data_length: 0
Index_length: 126937300992
Data_free: 45770342400
Auto_increment: 442872663
Create_time: 2025-03-28 06:15:36
Update_time: 2025-08-05 18:04:55
Check_time: NULL
Collation: utf8mb4_unicode_ci
Checksum: NULL
Create_options:
Comment:
Any ideas on what's causing this to run so slowly relative to other tables in other databases?
Swechsler
(153 rep)
Aug 5, 2025, 06:05 PM
0
votes
1
answers
1326
views
Amazon RDS change collate for mysql database in production without downtime
I saw a solution like this below: 1. create a new table like your source table. 2. alter that new table the way you want. 3. insert your data into the new table. 4. create indexes etc. as needed on the new table. 5. rename your old table to something like ..._old or whatever. 6. rename your new tabl...
I saw a solution like this below:
1. create a new table like your source table.
2. alter that new table the way you want.
3. insert your data into the new table.
4. create indexes etc. as needed on the new table.
5. rename your old table to something like ..._old or whatever.
6. rename your new table to the former name of the old one.
7. copy any missing rows from the _old table to the new one.
[Reference for an above solution](https://dba.stackexchange.com/questions/6131/mysql-speed-up-execution-of-alter-table)
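For illustration, a minimal sketch of steps 1-6 above for a single table (the table name and target collation are hypothetical; CREATE TABLE ... LIKE also copies the indexes):
CREATE TABLE orders_new LIKE orders;
ALTER TABLE orders_new CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
INSERT INTO orders_new SELECT * FROM orders;
RENAME TABLE orders TO orders_old, orders_new TO orders;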
But the above solution might cause data unavailability if there is a huge amount of data added before copying any missing rows from the _old table to the new one.
Is there any better solution than this, using AWS DMS, etc?
I also want to change the collation of all tables present in the database. Is it possible to get all the data replicated between two RDS DBs continuously, so that new data entered in database A gets into database B and vice versa?
Since I have around 50-60GB of data, any good way to solve this is appreciated.
**Update:**
- I have around **50-60GB** of data.
- Mysql version: **5.7**
- I need to change collation on all my tables
Msv
(101 rep)
Jul 16, 2022, 03:45 AM
• Last activity: Aug 2, 2025, 10:03 AM
0
votes
1
answers
716
views
Problem with mysql/data folder is too big and took the whole place
For the first time I am faced with a problem that has puzzled everyone. My hosting (AWS Lightsail) is completely full at 78GB, and after a few analyses I realized that 90% of the space is occupied by the `mysql/data` folder. My WordPress website weighs about 6GB and I removed all binary logs before the p...
For the first time I am faced with a problem that has puzzled everyone.
My hosting (AWS Lightsail) is completely full at 78GB, and after a few analyses I realized that 90% of the space is occupied by the mysql/data folder.
My WordPress website itself weighs about 6GB, and I removed all binary logs before the problem appeared, so the problem is not connected with that.
I can no longer log into *phpmyadmin*, because there is no free space left on the disk. But when I could still get in, before the problem, I saw that the wp-option table had grown to a crazy number of GB. No plugin was able to clear this and I decided to put it off.
Now that the disk is completely full, most commands cannot be executed, especially the mysql commands. I am using SSH. Help if possible.
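Once enough space has been freed for MySQL to start again, a hedged way to see which tables actually dominate the data directory (sizes reported in GB):
SELECT table_schema, table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
FROM information_schema.tables
ORDER BY (data_length + index_length) DESC
LIMIT 10;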
Edgar Poe
(11 rep)
Aug 30, 2020, 07:38 AM
• Last activity: Jul 30, 2025, 04:08 AM
0
votes
1
answers
65
views
RDS postgres slow read IO
We are running postgres 14.12 in RDS and experience very slow IO reads.. around 30MB/s on index scans. We can't figure out what might be the cause of it. Any ideas on what we should / could check? **configuration** instance class: `db.m6idn.8xlarge` (which should support 3125MB/s throughput) RAM: `12...
We are running Postgres 14.12 in RDS and experience very slow IO reads: around 30 MB/s on index scans. We can't figure out what might be the cause of it. Any ideas on what we should/could check?
**configuration**
instance class: db.m6idn.8xlarge (which should support 3125 MB/s throughput)
RAM: 128GB
vCPU: 32
storage type: gp3, with 25000 IOPS (we only reach 18K) and 4000 MiB/s throughput.
Most of our slow queries are due to slow IO reads.
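One hedged way to confirm where the time goes on an individual index scan, assuming track_io_timing is enabled (the query and table below are placeholders), is to look at the I/O Timings line that BUFFERS output then includes:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM events WHERE account_id = 42;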
edena
(1 rep)
Mar 4, 2025, 04:36 PM
• Last activity: Jul 28, 2025, 04:09 PM
1
votes
1
answers
205
views
Redshift (Serverless) SVV_TABLE_INFO shows huge storage sizes for tiny tables
My Redshift serverless shows massive storage size usage for tiny tables that so far have had only a couple DDL statements, only inserts, and are overall tiny tables. When I run `select "schema", "table", "tbl_rows", "size" FROM SVV_TABLE_INFO where "schema" = 'x' and "table = 'y'` I get 36 rows with...
My Redshift serverless shows massive storage size usage for tiny tables that so far have had only a couple DDL statements, only inserts, and are overall tiny tables.
When I run
select "schema", "table", "tbl_rows", "size" FROM SVV_TABLE_INFO where "schema" = 'x' and "table" = 'y'
I get back 36 tbl_rows and a "size" of 2580, which according to the Redshift docs is counted in 1 MB blocks, so 2.6 GB of storage used for 36 rows. The SVV_TABLE_INFO columns empty, unsorted, and vacuum_sort_benefit are all 0.
The fun part is that when I run select * from x.y I can copy the entire result set to my clipboard and it comes to ~23 kb total.
The AWS Redshift Serverless web GUI similarly reports a whopping 1.1 TB of storage used in the cluster (the same as when running sum("size") against the SVV table, btw). There is at most 100 GB used in total.
Can someone help me figure out how/where those huge storage numbers come from?
## EDIT - Full SVV Dump for one table
{
"database": "xxx",
"schema": "x",
"table_id": 2234810,
"table": "y",
"encoded": "Y",
"diststyle": "KEY(id)",
"sortkey1": "received_at",
"max_varchar": 65535,
"sortkey1_enc": "none",
"sortkey_num": 1,
"size": 2580,
"pct_used": 0.0040,
"empty": 0,
"unsorted": 0.00,
"stats_off": 0.00,
"tbl_rows": 36,
"skew_sortkey1": 1.00,
"skew_rows": 100.00,
"estimated_visible_rows": 36,
"risk_event": null,
"vacuum_sort_benefit": 0.00,
"create_time": "2024-08-05T07:48:07.454Z"
}
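For comparison, a hedged way to see where the reported storage concentrates, in the same 1 MB-block units as the size column:
SELECT "schema", SUM("size") AS used_1mb_blocks
FROM SVV_TABLE_INFO
GROUP BY "schema"
ORDER BY used_1mb_blocks DESC;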
Killerpixler
(131 rep)
Sep 3, 2024, 03:02 PM
• Last activity: Jul 25, 2025, 03:48 AM
1
votes
1
answers
149
views
Copy postgis layer from S3 to Heroku
I have a dump of a postgis layer (layer.dump), which I am trying to add to my heroku database (mydatabase). The postgis layer is stored on S3 (https://s3.amazonaws.com/layer.dump). I would like to add the layer to to the heroku database and previously used `heroku pgbackups:restore DATABASE 'https:/...
I have a dump of a PostGIS layer (layer.dump), which I am trying to add to my Heroku database (mydatabase). The PostGIS layer is stored on S3 (https://s3.amazonaws.com/layer.dump). I would like to add the layer to the Heroku database, and previously used heroku pgbackups:restore DATABASE 'https://s3.amazonaws.com/layer.dump'. However, the new heroku pg:backups restore 'https://s3.amazonaws.com/layer.dump' DATABASE deletes all data from the target database before restoring the backup (https://devcenter.heroku.com/articles/heroku-postgres-backups). Is there still a way to only restore a single table and leave the remaining tables in the database untouched?
Anne
(143 rep)
Sep 2, 2015, 06:44 PM
• Last activity: Jul 18, 2025, 07:06 PM
0
votes
1
answers
148
views
Prevent failure in conditional insert in mysql database
The infrastructure of our system looks like this. An AWS lambda function receives requests such as (accountId, .....). It creates an entry in the MySQL database using a newly generated UUID as caseId. (caseId, accountId, ....). The insert is a conditional insert operation discussed in detail below....
The infrastructure of our system looks like this.
An AWS lambda function receives requests such as (accountId, .....). It creates an entry in the MySQL database using a newly generated UUID as caseId. (caseId, accountId, ....).
The insert is a conditional insert operation discussed in detail below.
I am able to avoid the race condition by setting the transaction isolation to SERIALIZABLE. However, the issue is that I do not have any control over how many concurrent requests will be successfully processed.
For example, consider the following concurrent requests.
request | accountId | field1 | ...
1 a1 value1 .... true --- create a new entry with caseId Idxxx
2 a1 value2 .... false --- update existing entry with caseId Idxxx
3 a1 value3 .... false --- update existing entry with caseId Idxxx
4 a1 value4 .... false --- update existing entry with caseId Idxxx
With our current implementation we are getting CannotAquireLockException.
What are the ways in which I can avoid retry failures (CannotAquireLockException) ?
The detailed table schema and condition are described below:
The database is a mysql database system with the following table schema.
Table1: case table
|caseId(PK) | accountId | status | .....
Table2: case reopen table
|caseId(FK)| casereopenId(PK)| caseReopenTime|
Table3: Alert table
Id (incrementing id) | alertId | accountId |
The lambda function tries to "create" a case in the database.
The create wrapper generates a UUID for caseId.
The goal is:
- check if an accountId already exists in the case table.
- if it does, then
  - check if the status is OPEN
  - get the caseId for the accountId.
  - check if the caseId is present in the case reopen table.
- if the above condition is false, then add an entry into the case table.
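For illustration, a minimal sketch of serializing this check-then-insert per accountId with a row lock instead of SERIALIZABLE isolation (table and column names follow the schema sketched above and are otherwise hypothetical; assumes an index on accountId):
start transaction;
select caseId, status from case_table where accountId = 'a1' for update;
-- application logic: if no row (or no OPEN case) was returned, insert the new case;
-- otherwise update / reopen the existing caseId
commit;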
Thanks!
Swagatika
(101 rep)
Feb 13, 2020, 01:10 AM
• Last activity: Jul 17, 2025, 08:04 PM
1
votes
1
answers
5036
views
AWS RDS Postgres: How to diagnose CheckpointLag and potential slowups using AWS' Monitoring suite?
We are currently hosting a postgres RDS database and our team is noticing slowup in our querying service. I'm noticing a spike in the metric, `CheckpointLag` and I've been tasked in trying to find where this occurs specifically on the AWS side of things. In monitoring detailed performance, we've see...
We are currently hosting a Postgres RDS database and our team is noticing a slowdown in our querying service. I'm noticing a spike in the metric CheckpointLag, and I've been tasked with trying to find where this occurs specifically on the AWS side of things.
In monitoring detailed performance, we've seen that our queries are well below (20%) what our expected average active sessions (AAS) are said to reach. I also monitored the queries individually with EXPLAIN ANALYZE, and the most extreme query takes 0.5s to compute. This leads me to believe there's something else taking too long.
After checking other potential metrics (CPU, BurstBalance, etc.), which all appear normal, there is one metric, CheckpointLag, which appears to spike under use and which I can't find documentation on. I can't work out what this means or what *acceptable* value we should expect with a db.m4.xlarge. With no to low usage it appears to be ~140 seconds. Under normal, expected usage it jumps to ~400 seconds.
I'm asking what this metric really means, whether the values are *expected* or *normal*, and if there are any other ways I can see whether my RDS instance is the cause of my slowdown?
**EDIT:**
Checkpoint lag is defined as a metric here: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-metrics.html with the description "The amount of time since the most recent checkpoint". It is fairly vague and hard to decipher the true meaning. For my metrics, it appears to be pulling from this already pre-defined metric, but if there's a way to dive deeper into how it queries the instance, please let me know.
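On the Postgres side, one hedged way to see checkpoint behaviour directly is the standard pg_stat_bgwriter view (write/sync times are in milliseconds):
SELECT checkpoints_timed, checkpoints_req,
       checkpoint_write_time, checkpoint_sync_time
FROM pg_stat_bgwriter;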
**Follow-Up**
I ended up editing queries to group results and reduce the number of rows being exported at one time, as our team was querying way too many rows to begin with. With this, CheckpointLag went down, and I associated it with the time taken to either reach or perform queries on RDS (duh!), but I still have not pinpointed the exact meaning. There must have been some bottleneck in outputting all of the rows, causing the "lag" to rise...
Andrew Narvaez
(11 rep)
Dec 15, 2023, 09:42 PM
• Last activity: Jul 16, 2025, 03:35 PM
0
votes
0
answers
25
views
How to send a mail notification if a pg_cron job run fails?
How can I configure AWS services to notify me of failed pg_cron job runs in the cron.job_run_details table using only the AWS Management Console or PostgreSQL settings, without writing any code?
How can I configure AWS services to notify me of failed pg_cron job runs in the cron.job_run_details table using only the AWS Management Console or PostgreSQL settings, without writing any code?
Balaji Ogiboina
(1 rep)
Jul 11, 2025, 05:46 AM
• Last activity: Jul 11, 2025, 07:15 AM
0
votes
1
answers
189
views
RDS Postgres v14.4 CPU utilization not going below 5% when idle
I have an RDS Postgres instance on a t3.small and am trying to drive down the CPU usage of my clients connecting to it (I have a few API services connected). I've stopped all my services connecting to my DB to see how low the CPU usage will go and it does not go below 5% CPU Utilization, even after...
I have an RDS Postgres instance on a t3.small and am trying to drive down the CPU usage of my clients connecting to it (I have a few API services connected).
I've stopped all my services connecting to my DB to see how low the CPU usage will go and it does not go below 5% CPU Utilization, even after a reboot (see attached image, right at the end is where I stopped my services and rebooted the DB server).
I have run SELECT * FROM pg_stat_activity and can confirm that there are no clients connected, but there are the usual AWS monitors and other wait_event_type=Activity rows.
Does RDS have a number of background services that will keep the CPU Utilization hovering around 5%?
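For what it's worth, a hedged way to list only real client sessions and filter out the background/monitoring workers:
SELECT pid, usename, application_name, state
FROM pg_stat_activity
WHERE backend_type = 'client backend';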

PGT
(101 rep)
Feb 14, 2023, 07:00 AM
• Last activity: Jun 26, 2025, 07:06 PM
0
votes
1
answers
199
views
AWS RDS MySQL limit connection
We have multiple databases in a single RDS instance. If one database comes under heavy load for any reason, it stops all the databases. I'm not able to move any of the databases for the following reasons: 1. In MySQL triggers we are calling other databases, and as they are on the same server it works. 2...
We have multiple databases in a single RDS instance. If one database comes under heavy load for any reason, it stops all the databases.
I'm not able to move any of the databases for the following reasons:
1. In MySQL triggers we are calling other databases, and as they are on the same server this works.
2. Whatever we have written in the triggers we could also write in the app server, but that would be a tedious task.
3. I'm not using Aurora, so Lambda functions are out of scope for me.
So currently I'm looking for a solution where I can limit a database, so that even if there is too much DB load it does not bring down all the databases.
AWS RDS MySQL 5.7
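One hedged option, assuming each application connects with its own MySQL account (the user name below is hypothetical), is to cap that account so a single workload cannot exhaust the whole instance:
ALTER USER 'app_db1'@'%'
  WITH MAX_USER_CONNECTIONS 50
       MAX_QUERIES_PER_HOUR 100000;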
Thank you in advance.
Rahul
(115 rep)
Aug 12, 2022, 05:18 AM
• Last activity: Jun 19, 2025, 11:01 AM
2
votes
4
answers
4289
views
MongoDB: Server sockets closed after a few minutes
I am working with multiple `AWS instances` connected to the same `MongoDB` database (inside a `Compose.io` Elastic deployment). But I am getting the error `server : sockets closed` after a few minutes. Can anyone give me any hint about what may be wrong with the connection code? **CONNECTION CODE** var...
I am working with multiple AWS instances connected to the same MongoDB database (inside a Compose.io Elastic deployment), but I am getting the error server : sockets closed after a few minutes. Can anyone give me any hint about what may be wrong with the connection code?
**CONNECTION CODE**
var url = "mongodb://<user>:<password>@<host1>:<port1>,<host2>:<port2>/<database>?replicaSet=<replica set name>";
var options = {
    server : {"socketOptions.keepAlive": 1},
    replSet : { "replicaSet": <replica set name>, "socketOptions.keepAlive": 1 }
};
MongoClient.connect(url, options, function(err, db) { ... });
**ERROR MESSAGE**
Potentially unhandled rejection MongoError: server : sockets closed
    at null.<anonymous> (/var/app/current/node_modules/mongodb/node_modules/mongodb-core/lib/topologies/server.js:328:47)
    at g (events.js:199:16)
    at emit (events.js:110:17)
    at null.<anonymous> (/var/app/current/node_modules/mongodb/node_modules/mongodb-core/lib/connection/pool.js:101:12)
    at g (events.js:199:16)
    at emit (events.js:110:17)
    at Socket.<anonymous> (/var/app/current/node_modules/mongodb/node_modules/mongodb-core/lib/connection/connection.js:142:12)
    at Socket.g (events.js:199:16)
    at Socket.emit (events.js:107:17)
    at TCP.close (net.js:485:12)
santihbc
(21 rep)
Aug 4, 2015, 01:57 PM
• Last activity: Jun 9, 2025, 12:05 PM
1
votes
2
answers
2717
views
Reading SQL Server .sqlaudit Files Stored in S3
I set up an audit in SQL Server that saves audit logs (.sqlaudit files) to RDS, which we then copy over to S3 using a standard process. All of the documentation I can find on reading these audit files uses the following query to read them from RDS: SELECT * FROM msdb.dbo.rds_fn_get_audit_file ('D:\r...
I set up an audit in SQL Server that saves audit logs (.sqlaudit files) to RDS, which we then copy over to S3 using a standard process. All of the documentation I can find on reading these audit files uses the following query to read them from RDS:
SELECT *
FROM msdb.dbo.rds_fn_get_audit_file
('D:\rdsdbdata\SQLAudit\transmitted\*.sqlaudit'
,default,default)
RDS has an option setting called RETENTION_TIME for SQL Server Audit, which allows you to specify a maximum of 840 hours before the files are removed from the RDS instance. I need to be able to read the audit files for a longer period of time, so I am looking for a way to query the audit files in S3 instead, where they are retained.
One way may be to use sys.fn_get_audit_file, but I cannot do this with any user attached to this server that my organization has access to, even the admin user. None of the users have proper permissions for this and I don't see any way they can be granted.
With the .sqlaudit files sitting in the S3 bucket, how could I go about reading the files? Perhaps S3 Select could do it somehow, or maybe there's another obvious solution I'm overlooking?
quarry-marbles
(21 rep)
Feb 2, 2022, 09:23 PM
• Last activity: Jun 6, 2025, 02:07 AM
1
votes
1
answers
745
views
How to clear AWS RDS Aurora MySQL local storage?
I got a warning message about running out of AWS RDS local storage. > The free storage capacity for DB Instance: {MY INSTANCE NAME} is low > at 2% of the provisioned storage [Provisioned Storage: 157.36 GB, Free > Storage: 3.35 GB]. You may want to increase the provisioned storage to > address this...
I got a warning message about running out of AWS RDS local storage.
> The free storage capacity for DB Instance: {MY INSTANCE NAME} is low
> at 2% of the provisioned storage [Provisioned Storage: 157.36 GB, Free
> Storage: 3.35 GB]. You may want to increase the provisioned storage to
> address this issue.
According to the AWS documentation, local storage depends on the instance type and can be increased by scaling up: https://aws.amazon.com/premiumsupport/knowledge-center/aurora-mysql-local-storage/?nc1=h_ls
But, I would like to clear the local storage instead of scaling up the instance because only one instance of the read cluster has insufficient local storage.
I restarted Aurora instance but the local storage is still almost full.
How can I clear local storage?
Its MySQL version is 5.6.
BingbongKim
(111 rep)
May 23, 2022, 04:47 AM
• Last activity: Jun 5, 2025, 08:09 AM
0
votes
1
answers
1084
views
What is the difference between Aurora Mysql 5.7.x and Aurora 2.x.x
I have an RDS instance with Aurora Mysql 5.7.12 and I noticed that there are other versions of Aurora Mysql 5.7, but I don't understand what is the difference between them. Some of the version listed are: - Aurora (MySQL)-5.7.12 <- The one that I have. - Aurora (MySQL 5.7)-2.03.2 - Aurora (MySQL 5.7...
I have an RDS instance with Aurora Mysql 5.7.12 and I noticed that there are other versions of Aurora Mysql 5.7, but I don't understand what is the difference between them.
Some of the version listed are:
- Aurora (MySQL)-5.7.12 <- The one that I have.
- Aurora (MySQL 5.7)-2.03.2
- Aurora (MySQL 5.7)-2.03.3
- Aurora (MySQL 5.7)-2.03.4
- Aurora (MySQL 5.7)-2.04.1
- Etc...
I attach an image of the AWS Console:
I want to choose the right version (the one that has the most bug fixes, in order to have better stability in my system).
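For reference, a hedged way to see both numbers from inside an Aurora MySQL instance: VERSION() returns the MySQL compatibility version (e.g. 5.7.12), while AURORA_VERSION() returns the Aurora engine version (e.g. 2.04.1):
SELECT VERSION(), AURORA_VERSION();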

Roberto Briones Argüelles
(1 rep)
Apr 22, 2020, 12:12 AM
• Last activity: Jun 4, 2025, 01:04 AM
0
votes
1
answers
270
views
Amazon Aurora MySQL conditional comment queries
How does Amazon Aurora handle conditional queries based on MySQL version? I was unable to find any documentation on this. For example, the code below inserts into a database for MySQL 5.6.4 or newer. What is result in Aurora? I could spin up an instance but wanted to read the documentation about thi...
How does Amazon Aurora handle conditional queries based on MySQL version? I was unable to find any documentation on this.
For example, the code below inserts into a database for MySQL 5.6.4 or newer. What is the result in Aurora?
I could spin up an instance but wanted to read the documentation about this to understand what incompatible features there are.
/*!50604 REPLACE INTO `phppos_app_config` (`key`, `value`) VALUES ('supports_full_text', '1')*/;
Chris Muench
(711 rep)
Jan 13, 2018, 06:27 PM
• Last activity: May 23, 2025, 03:04 PM
0
votes
1
answers
309
views
high cpu spikes on postgresql (rds)
I have a PG 15(.5) RDS instance, which is experiencing intermittent random(?) periods of high cpu; that do not seem to, afaict, correlate with an obvious cause: [![perf insights - high cpu][1]][1] Here is one example. The queries being made before/during & after are the same; and if run with the sam...
I have a PG 15(.5) RDS instance which is experiencing intermittent, random(?) periods of high CPU that do not seem to, afaict, correlate with an obvious cause.
Here is one example. The queries being made before/during/after are the same, and if run with the same params once the CPU drops back down, they are as quick as before. I also don't see any increase in transactions or tuples during that period, or any large vacuum. The only graph that seems to line up is the buffer-cache-hits one; according to the pg docs, this is:
> Number of times disk blocks were found already in the buffer cache, so that a read was not necessary
so I'm not clear why that would be related.
The instance is a db.r7g.8xlarge, with 1758 GiB of gp2, and isn't running particularly hot, as the graph shows. Is there something I can do to find out what the cpu is actually doing? Do I need to catch it in the act?
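One hedged way to see what the CPU is spending its time on, assuming the pg_stat_statements extension is enabled on the instance:
SELECT query, calls, total_exec_time, shared_blks_hit
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;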
grahamrhay
(101 rep)
Jun 6, 2024, 02:33 PM
• Last activity: May 21, 2025, 09:01 PM
1
votes
1
answers
362
views
Intermittent SQLSTATE[HY000] [2003] Can't connect to MySQL server on 'server.here' (110)
I'm suddenly having intermittent 2003 connection errors and I can't find anything that has changed that can explain it. These servers have been running smoothly for years and suddenly two days ago these errors started showing up in my logs. The application successfully connects to and queries the da...
I'm suddenly having intermittent 2003 connection errors and I can't find anything that has changed that can explain it. These servers have been running smoothly for years and suddenly two days ago these errors started showing up in my logs. The application successfully connects to and queries the database almost all the time but sporadically hangs when trying to connect after several queries in quick succession. The connection timeouts always happen at random intervals. It worked just fine before two days ago and the code hasn't changed in 4 years.
Things I've tried that didn't work:
- Checked all InnoDB tables for corruption
- Increased the back_log value in my.cnf from 50 to 1000 (and restarted MySQL)
- Increased the max_connections value in my.cnf from 100 to 200 (and restarted MySQL)
- Increased net.core.somaxconn from 128 to 1024
- Increased net.ipv4.tcp_max_syn_backlog from 1024 to 8192
- Verified there are no firewall rules blocking or throttling connections
- Flushed the MySQL query cache
- Cleared the MySQL query cache
- Restarted the application server and database server (they are two separate servers)
- Asked it nicely to simply start working again
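While the timeouts are happening, a few server-side counters worth watching (a hedged suggestion, not a diagnosis):
SHOW GLOBAL STATUS LIKE 'Aborted_connects';
SHOW GLOBAL STATUS LIKE 'Connection_errors%';
SHOW GLOBAL STATUS LIKE 'Max_used_connections';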
My servers are EC2 instances in AWS and nothing about their configuration has been changed. There's no unusual traffic and the CPU, memory, and disk usage are well below 80%. Please help!
aecend
(111 rep)
Sep 15, 2021, 12:09 AM
• Last activity: May 14, 2025, 06:04 AM
1
votes
2
answers
831
views
How to enable Cloud Watch logs for AWS DMS task?
I am creating an AWS DMS task for migrating data from GCP SQL Server to AWS Aurora MySQL. How can I attach the log group which I created? I was able to enable CloudWatch logs for the task, but when I click on "View CloudWatch logs" it shows that the log group doesn't exist. Details in image -
Pand005
(151 rep)
Dec 17, 2020, 10:10 PM
• Last activity: May 14, 2025, 02:08 AM
Showing page 1 of 20 total questions