
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answers
706 views
Parameter innodb_flush_log_at_trx_commit not available in AWS Aurora 5.7
I'm trying to modify the parameter `innodb_flush_log_at_trx_commit` on an RDS Aurora 5.7 instance from the AWS RDS web console, but when I look through the parameters in the database I cannot find it. How can I modify this parameter?
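As far as I understand, in Aurora MySQL this setting is managed at the cluster level, so it normally appears in the DB *cluster* parameter group rather than the DB instance parameter group. A quick way to confirm the value currently in effect from a SQL session (a check only, not a way to change it):

SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';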
JohnnyAce (101 rep)
Sep 20, 2021, 01:08 PM • Last activity: Jul 28, 2025, 08:01 PM
1 votes
1 answers
145 views
Aurora PostgreSQL fail over process
Aurora PostgreSQL: how does the failover process work? I have a primary DB (prod) for the transactional database and a read replica for reporting. Where will reporting be pointed if the primary fails? A similar question appears in the Aurora FAQ: "If I have a primary database and an Amazon Aurora Replica actively taking read traffic and a failover occurs, what happens?" https://aws.amazon.com/rds/aurora/faqs/
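As a side note, one way an application can tell whether it is connected to the writer or a reader is to check recovery status; this is standard PostgreSQL, and to the best of my knowledge Aurora PostgreSQL readers report it the same way (treat that as an assumption to verify on your cluster):

SELECT pg_is_in_recovery();  -- false on the writer, true on a reader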
user152829 (11 rep)
Jun 5, 2018, 10:17 PM • Last activity: Jul 22, 2025, 04:03 PM
3 votes
2 answers
3820 views
How to performance tune high /wait/io/redo_log_flush values in "commit"
According to the "Top SQL" view in *AWS Performance Insights*, `commit` tops the list of SQL statements in terms of *average active session (AAS)*. And most of the time seems to be spent in `wait/io/redo_log_flush`. I have read the [docs on io/aurora_redo_log_flush](https://docs.aws.amazon.com/Amazo...
According to the "Top SQL" view in *AWS Performance Insights*, commit tops the list of SQL statements in terms of *average active session (AAS)*. And most of the time seems to be spent in wait/io/redo_log_flush. I have read the [docs on io/aurora_redo_log_flush](https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/ams-waits.io-auredologflush.html) - which I assume to be the name of wait/io/redo_log_flush prior to Aurora 3 - but they don't really help me, because there are apparently only 0.25 commits per second, but the load is close to 15 AAS. So it is not a problem of too many small commits. So why would such a low commit rate lead to such a high AAS? How can I dig deeper into this? N.B. We are using *8.0.mysql_aurora.3.02.1* enter image description here
Evgeniy Berezovsky (151 rep)
Nov 14, 2022, 08:26 AM • Last activity: Jul 9, 2025, 02:51 PM
3 votes
1 answers
9482 views
Is it possible to add Postgres extensions to AWS Aurora?
I am trying to futureproof a Postgres database so it can be transferred to AWS Aurora. I am using the extension `uuid-ossp`. Is it possible to add Postgres extensions to AWS Aurora? In particular, I am interested in whether the extension `uuid-ossp` can be used in Aurora.
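Aurora PostgreSQL supports a list of approved extensions, and `uuid-ossp` is generally among them. A quick way to check availability on a given engine version and enable it (assuming the privileges of the default master user):

SELECT name, default_version FROM pg_available_extensions WHERE name = 'uuid-ossp';
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
SELECT uuid_generate_v4();

On PostgreSQL 13 and later, `gen_random_uuid()` is also built in, which avoids the extension entirely if random UUIDs are all you need.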
mikelus (343 rep)
Jun 11, 2020, 04:18 PM • Last activity: Jun 26, 2025, 11:06 PM
1 votes
0 answers
53 views
Aurora PostgreSQL Severe Performance Degradation Under Concurrent Load
**Environment:** - Database: AWS Aurora PostgreSQL - ORM: SQLAlchemy - API Framework: Python FastAPI **Issue:** I'm experiencing significant query performance degradation when my API receives concurrent requests. I ran a performance test comparing single execution vs. concurrent execution of the sam...
**Environment:**
- Database: AWS Aurora PostgreSQL
- ORM: SQLAlchemy
- API Framework: Python FastAPI

**Issue:** I'm experiencing significant query performance degradation when my API receives concurrent requests. I ran a performance test comparing single execution vs. concurrent execution of the same query, and the results are concerning.

**Real-World Observations:** When monitoring our production API endpoint during load tests with 100 concurrent users, I've observed concerning behavior. When running the same complex query through PGAdmin without concurrent load, it consistently completes in ~60ms. However, during periods of high concurrency (100 simultaneous users), response times for this same query become wildly inconsistent: some executions still complete in 60-100ms, others suddenly take up to 2 seconds, with no clear pattern to which queries are slow.

**Test Results:**
Single query execution time: 0.3098 seconds
Simulating 100 concurrent clients - all requests starting simultaneously...
Results Summary:
Total execution time: 32.7863 seconds
Successful queries: 100 out of 100
Failed queries: 0
Average query time: 0.5591 seconds (559ms)
Min time: 0.2756s, Max time: 1.9853s
Queries exceeding 500ms threshold: 21 (21.0%)
50th percentile (median): 0.3114s (311ms)
95th percentile: 1.7712s (1771ms)
99th percentile: 1.9853s (1985ms)

With 100 concurrent threads:
- Each query takes ~12.4x longer on average (3.62s vs 0.29s)
- Huge variance between fastest (0.5s) and slowest (4.8s) query
- Overall throughput is ~17.2 queries/second (better than sequential, but still concerning)

**Query Details:** The query is moderately complex, involving several JOINs across multiple tables, a subquery using EXISTS, and ORDER BY and LIMIT clauses.

**My Setup**

**SQLAlchemy Configuration:**
from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine

engine = create_async_engine(
    settings.ASYNC_DATABASE_URL,
    echo=settings.SQL_DEBUG,
    pool_pre_ping=True,
    pool_use_lifo=True,
    pool_size=20,
    max_overflow=100,
    pool_timeout=30,
    pool_recycle=30,
)

AsyncSessionLocal = async_sessionmaker(
    bind=engine,
    class_=AsyncSession,
    expire_on_commit=False,
    autocommit=False,
    autoflush=False,
)
**FastAPI Dependency:**
from typing import AsyncGenerator

async def get_db() -> AsyncGenerator[AsyncSession, None]:
    """Get database session"""
    async with AsyncSessionLocal() as session:
        try:
            yield session
            await session.commit()
        except Exception:
            await session.rollback()
            raise
**Questions:**
- **Connection Pool Settings:** Are my SQLAlchemy pool settings appropriate for handling 100 concurrent requests? What would be optimal?
- **Aurora Configuration:** What Aurora PostgreSQL parameters should I tune to improve concurrent query performance?
- **Query Optimization:** Is there a standard approach to optimize complex queries with JOINs and EXISTS subqueries for better concurrency?
- **ORM vs Raw SQL:** Would bypassing the SQLAlchemy ORM help performance?

Any guidance or best practices would be greatly appreciated. I'd be happy to provide additional details if needed.

**Update:**

**Hardware Configuration**
1. Aurora regional cluster with 1 instance
2. Capacity Type: Provisioned (Min: 0.5 ACUs (1GiB), Max: 16 ACUs (32 GiB))
3. Storage Config: Standard

**Performance Insights**
1. Max ACU utilization: 70%
2. Max CPU Utilization: 45%
3. Max DB connections: 111
4. EBS IO Balance: 100%
5. Buffer Cache Hit Ratio: 100%
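For what it's worth, a quick way to see what those 100 concurrent sessions are actually waiting on during a load test is to sample `pg_stat_activity`; this is plain PostgreSQL and should behave the same on Aurora:

SELECT state, wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE datname = current_database()
GROUP BY 1, 2, 3
ORDER BY count(*) DESC;

If many sessions show up waiting on locks or IO rather than actively running, that points at contention rather than per-query cost.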
Abhishek Tyagi (11 rep)
May 20, 2025, 07:18 PM • Last activity: May 21, 2025, 02:50 PM
1 votes
1 answers
721 views
Aurora MySQL: how do I keep binlogs around after reboot?
We have a request to enable binlogs on an Aurora MySQL instance and to keep those logs around for a certain amount of time. To test, we enabled binlogging with `binlog_format=ROW` and set our retention time to 1 hour.
call mysql.rds_set_configuration('binlog retention hours', 1);
However, we are seeing inconsistent behavior when we reboot the instance. A new binlog gets created and the old one *sometimes* drops after only a couple of minutes. It was my understanding that setting the retention hours would keep the old file(s) around for an hour. There is also the global config `expire_logs_days=0`; would that affect keeping the old file(s) around?
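For reference, the retention value actually in effect can be read back with the RDS-provided procedure, which also makes it easy to confirm the setting survived a reboot (the 1-hour value simply mirrors the test above):

CALL mysql.rds_set_configuration('binlog retention hours', 1);
CALL mysql.rds_show_configuration;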
Nate H (31 rep)
Aug 18, 2021, 04:23 PM • Last activity: Apr 16, 2025, 07:01 PM
0 votes
1 answers
416 views
Upgrade Aurora from 5.6 to 5.7.mysql_aurora.2.11.1 getting tons of deadlocks and lock wait timeouts, maybe 20-30x what they were before the upgrade
After upgrading from MySQL 5.6 to 5.7 in AWS Aurora, our application logs are showing an excessive number of lock wait timeouts and deadlocks for queries that previously had no issues. Are there any settings I can adjust to reduce the deadlocks and lock wait timeouts? Variables and settings here: https://pastebin.com/cmu11Dwh Running on db.r5.12xlarge
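As a first diagnostic step (not a fix), MySQL can report the most recent deadlock and can be told to log every deadlock to the error log; on Aurora the second setting is normally changed through the parameter group rather than `SET GLOBAL`:

SHOW ENGINE INNODB STATUS;  -- see the "LATEST DETECTED DEADLOCK" section
SET GLOBAL innodb_print_all_deadlocks = ON;  -- or set it via the cluster/instance parameter group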
user2966697 (1 rep)
Mar 21, 2023, 06:34 PM • Last activity: Apr 16, 2025, 09:05 AM
0 votes
2 answers
1695 views
Hints on bulk loading 3.6 billion rows to InnoDB on Aurora MySQL
everyone! I'm struggling for a week to bulk load 3.6 billion rows into an InnoDB table on Aurora MySQL 5.6.10a. This table has one FK to a "Main" table and has 12 columns. The first 1.4 billion were loaded overnight, but right now my insert rate is dropping quickly. I **disabled** `unique_check` and … but left … **on**. I split the file into 506 files with around 3.84GB each (7,000,000 rows each) and I'm using `LOAD DATA FROM S3` to load them into the table. Any hints to improve this task? Thank you very much!

**Additional details** All other tables in my schema use InnoDB as the engine and work fine, since they are much smaller than this one. Is it a good idea to change only this table to MyISAM? What would be the implications of doing so? My files are ordered by PK and the PK is a `BIGINT`
. CREATE TABLE Movement ( idMovement bigint(20) NOT NULL AUTO_INCREMENT, idLawSuit bigint(20) NOT NULL, content mediumtext NOT NULL, movementDate datetime NOT NULL, captureDate datetime NOT NULL, isReportContent tinyint(4) DEFAULT NULL, isDocument tinyint(4) DEFAULT NULL, contentInS3 tinyint(4) DEFAULT NULL, contentS3Url text, uniqueConcatId varchar(255) NOT NULL, captureOrder bigint(20) DEFAULT NULL, movementExtraInfo text, PRIMARY KEY (idMovement), KEY idLawSuit10 (idLawSuit), CONSTRAINT idLawSuit10 FOREIGN KEY (idLawSuit) REFERENCES LawSuit (idLawSuit) ON DELETE CASCADE ON UPDATE NO ACTION ) ENGINE=InnoDB AUTO_INCREMENT=1470000001 DEFAULT CHARSET=utf8 These are my InnoDB parameters: innodb_adaptive_flushing ON innodb_adaptive_flushing_lwm 10 innodb_adaptive_hash_index OFF innodb_adaptive_max_sleep_delay 150000 innodb_additional_mem_pool_size 8388608 innodb_api_bk_commit_interval 5 innodb_api_disable_rowlock OFF innodb_api_enable_binlog OFF innodb_api_enable_mdl OFF innodb_api_trx_level 0 innodb_aurora_enable_auto_akp OFF innodb_autoextend_increment 64 innodb_autoinc_lock_mode 2 innodb_buffer_pool_dump_at_shutdown OFF innodb_buffer_pool_dump_now OFF innodb_buffer_pool_dump_pct 100 innodb_buffer_pool_filename ib_buffer_pool innodb_buffer_pool_instances 8 innodb_buffer_pool_load_abort OFF innodb_buffer_pool_load_at_startup OFF innodb_buffer_pool_load_now OFF innodb_buffer_pool_size 96223625216 innodb_change_buffer_max_size 25 innodb_change_buffering none innodb_checksum_algorithm none innodb_checksums OFF innodb_cmp_per_index_enabled OFF innodb_commit_concurrency 0 innodb_compression_failure_threshold_pct 5 innodb_compression_level 6 innodb_compression_pad_pct_max 50 innodb_concurrency_tickets 5000 innodb_data_file_path ibdata1:12M:autoextend innodb_data_home_dir innodb_disable_sort_file_cache OFF innodb_doublewrite OFF innodb_fast_shutdown 1 innodb_file_format Antelope innodb_file_format_check ON innodb_file_format_max Antelope innodb_file_per_table ON innodb_flush_log_at_timeout 1 innodb_flush_log_at_trx_commit 1 innodb_flush_method O_DIRECT innodb_flush_neighbors 1 innodb_flushing_avg_loops 30 innodb_force_load_corrupted OFF innodb_force_recovery 0 innodb_ft_aux_table innodb_ft_cache_size 8000000 innodb_ft_enable_diag_print OFF innodb_ft_enable_stopword ON innodb_ft_max_token_size 84 innodb_ft_min_token_size 3 innodb_ft_num_word_optimize 2000 innodb_ft_result_cache_limit 2000000000 innodb_ft_server_stopword_table innodb_ft_sort_pll_degree 2 innodb_ft_total_cache_size 640000000 innodb_ft_user_stopword_table innodb_io_capacity 200 innodb_io_capacity_max 2000 innodb_large_prefix OFF innodb_lock_wait_timeout 50 innodb_locks_unsafe_for_binlog OFF innodb_log_buffer_size 8388608 innodb_log_file_size 50331648 innodb_log_files_in_group 2 innodb_log_group_home_dir ./ innodb_lru_scan_depth 1024 innodb_max_dirty_pages_pct 75 innodb_max_dirty_pages_pct_lwm 0 innodb_max_purge_lag 0 innodb_max_purge_lag_delay 0 innodb_mirrored_log_groups 1 innodb_monitor_disable innodb_monitor_enable innodb_monitor_reset innodb_monitor_reset_all innodb_old_blocks_pct 37 innodb_old_blocks_time 1000 innodb_online_alter_log_max_size 134217728 innodb_open_files 6000 innodb_optimize_fulltext_only OFF innodb_page_size 16384 innodb_print_all_deadlocks OFF innodb_purge_batch_size 900 innodb_purge_threads 3 innodb_random_read_ahead OFF innodb_read_ahead_threshold 56 innodb_read_io_threads 32 innodb_read_only OFF innodb_replication_delay 0 innodb_rollback_on_timeout OFF innodb_rollback_segments 128 
innodb_shared_buffer_pool_uses_huge_pages ON innodb_sort_buffer_size 1048576 innodb_spin_wait_delay 6 innodb_stats_auto_recalc ON innodb_stats_method nulls_equal innodb_stats_on_metadata OFF innodb_stats_persistent ON innodb_stats_persistent_sample_pages 20 innodb_stats_sample_pages 8 innodb_stats_transient_sample_pages 8 innodb_strict_mode OFF innodb_support_xa ON innodb_sync_array_size 1 innodb_sync_spin_loops 30 innodb_table_locks ON innodb_thread_concurrency 0 innodb_thread_sleep_delay 10000 innodb_undo_directory . innodb_undo_logs 128 innodb_undo_tablespaces 0 innodb_use_native_aio OFF innodb_use_sys_malloc ON innodb_version 1.2.10 innodb_write_io_threads 4
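For illustration, a typical Aurora `LOAD DATA FROM S3` session for this kind of bulk load disables the redundant checks per session and loads one file at a time; the bucket/key below is a hypothetical placeholder, the field/line terminators have to match the actual files, and the cluster is assumed to already have the IAM role for S3 access configured:

SET SESSION unique_checks = 0;
SET SESSION foreign_key_checks = 0;

LOAD DATA FROM S3 's3-us-east-1://my-bucket/movement/part-001.csv'
INTO TABLE Movement
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';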
Yago Carvalho (3 rep)
Aug 13, 2019, 02:59 PM • Last activity: Apr 15, 2025, 07:07 AM
4 votes
1 answers
1485 views
AWS Aurora Mysql seemingly not picking PRIMARY or secondary index
Relevant System Info: Aurora MySQL 8.0.mysql_aurora.3.03, r6g.xl instances (1 writer, 2 read replicas), total size 5.5TB (all databases combined, looking at the most recent snapshot). I have been working on migrating MariaDB databases (deprecated TokuDB engine) to RDS Aurora MySQL. With some fine tuning of the parameters in RDS, there has been one behavior I cannot understand.

**Scenario** The main database houses tables created for each year, so let's use TABLE_2023 as an example. In this table there are two indexes, the primary and the secondary: PRIMARY (ID, DATE) and secondary (DATE). If I take a query like the following and throw an
`EXPLAIN` on it, the output shows NULL for the key column. I found this odd, since if I run the same query against the existing MariaDB server, it will output one of the indexes in that column. If I force the query to use an index (SELECT * FROM TABLE_2023 FORCE INDEX(PRIMARY) WHERE ID = '3' AND TIME >= 202301010000 AND TIME <= 202304170000), the plan still comes back as a table scan:

Filter: ((TABLE_2023.TIME >= 202304170000) and (TABLE_2023.TIME …))
Table scan on TABLE_2023 [Using parallel query (6 columns)] (cost=404.85 rows=2296760704) (actual time=18271.459..244721.615 rows=1822 loops=1) [parallel query actual (total time= 244722.063, rows returned= 1822)]

[screenshot: EXPLAIN output]

select count(*) on this year's table brings back 1809962901 rows.

**CREATE TABLE Results**

CREATE TABLE TABLE_2023 (
  SOMETHING_ID mediumint NOT NULL,
  TIME bigint unsigned NOT NULL,
  COLUMN3 tinyint DEFAULT NULL,
  COLUMN4 smallint DEFAULT NULL,
  COLUMN5 smallint DEFAULT NULL,
  JSON varchar(2048) CHARACTER SET latin1 COLLATE latin1_swedish_ci DEFAULT NULL,
  PRIMARY KEY (SOMETHING_ID,TIME),
  KEY TIME (TIME)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

**API Behaviors** Users of the API can request a range of data. However, the query behavior is almost always asking the database for time-series data, for either a single ID or multiple IDs over the same time range. Currently, the code behind the API uses "TIME>='' and TIME<=''" for a time range, not BETWEEN. Nor does it use "ID='1'" for a single station, but rather "in". I have recommended the developers change the code to use "=" if one station is used, to avoid the table scan.

**Final Thoughts** Can we get away with forcing the index we want the query to use? Of course. But the behavior we are seeing, with Aurora MySQL not being able to pick the index without being forced, bothers me. Especially when I run the same queries against our current database environment (which will use an index, regardless of being forced). The only time I am able to get an index used is with simple queries:

EXPLAIN select * from TABLE_2023 where SOMETHING_ID in (1206); --index PRIMARY "used"
EXPLAIN select * from TABLE_2023 where SOMETHING_ID = '1206'; --index PRIMARY "used"
EXPLAIN SELECT * FROM TABLE_2023 where TIME = '202304170000'; --index TIME "used"

**Update as of 05/01/2023** I dropped the year table (2023) and reimported all the data from our source database using DMS (again). After this was done, I added the secondary index (TIME), since DMS will usually only grab the primary index. After this was done, I took the same example query and still my EXPLAIN results show no usage of an index. https://docs.aws.amazon.com/dms/latest/userguide/CHAP_BestPractices.html I came across this document when reviewing the methods I was using to load data into tables. Turns out AWS recommends dropping the primary index if doing a full table load. Going to give this a try and load the data, then re-add just the PRIMARY index.

**Update as of 05/02/2023** After creating a new table w/o the PRIMARY index and secondary index, I loaded the data into the table via DMS. Once completed, I waited 6+ hours for the PRIMARY key index to create. Unfortunately, the behavior still exists. However, I do have a scheduled call with AWS reps that I hope will get an answer for this behavior. Once an answer is vetted, I will post here.

**Update as of 05/05/2023** After meeting with AWS account managers, they will be relaying technical information to their engineers to take a look at.
In the meantime, I conducted a few more tests:

*Scenario 1:* Create table with primary key index only. Load 38,000 rows, run EXPLAIN with the example query - primary index would have been used.
*Scenario 2:* Create table with primary key index only. Load 23 million rows, run EXPLAIN with the example query - primary index identified, but not used.
*Scenario 3:* Drop primary index in table from Scenario 2. Re-add primary index, run EXPLAIN with the example query - primary index identified, but not used.
*Scenario 4:* Create table without primary key index. Load 23 million rows, add primary key index, run EXPLAIN with the example query - primary index identified, but not used.
*Scenario 5:* Create table without primary key index. Load 38,000 rows, add primary key, run EXPLAIN with the example query - primary index identified and used.
*Scenario 6:* Create table without primary key index. Load around 12 million rows, add primary key, run EXPLAIN with the example query - primary index identified but not used.

**Update as of 05/08/2023** Tried the suggested answer, same behavior. Ended up doing the following, but AWS Aurora MySQL behaved the same:

ALTER TABLE TABLE_2023 ADD PRIMARY KEY (SOMETHING_ID,TIME), ADD UNIQUE KEY TIME_SOMETHING_ID (TIME,SOMETHING_ID);

**Update as of 05/10/2023** As suggested by Rolando, I dropped the TIME index and added the following:

ALTER TABLE TABLE_2023 ADD UNIQUE INDEX TIME_SOMETHING_ID (TIME,SOMETHING_ID);

I then ran the example query from the beginning of this post. Here are the results:

[screenshot: EXPLAIN output]

Query results: 1822 rows in set (52.513 sec)

So, again, same behavior. Before I reverse the primary key order (as I did before), I'm waiting to see a response from Rolando for any further suggestions.

**Update as of 05/16/2023** I have another meeting planned with AWS, but this time with an Aurora MySQL specialist. Before the meeting, they suggested turning off aurora_parallel_query. Upon doing this, I ran the same example query and was shocked to see the results. The EXPLAIN results showed:

|id|select_type|table|partitions|type|possible_keys|key|key_len|ref|rows|filtered|Extra|
|--|-----------|-----|----------|----|-------------|---|-------|---|----|--------|-----|
|1|SIMPLE|TABLE_2023||range|PRIMARY,time_Something_ID|PRIMARY|11||1823|100.0|Using where|

And the query itself, when run, completed extremely quickly. [screenshot: results after disabling parallel query] However, before I write this off as the answer, I am curious why this is the solution. Amazon markets Aurora Parallel Query as a benefit of moving to Aurora, so my use case must not benefit from it. I will post the details of the meeting here when I have them.

**Update as of 05/25/2023** Same behavior with 3.03.1. Sending AWS a snapshot of our TABLE_2023 with a bug report.

**Update as of 06/12/2023** AWS internally identified the "bug" and the fix is set to be released in the public versions of 3.04.0 and 3.03.2. These are projected to come out at the end of this quarter, or the beginning of next quarter.
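For anyone hitting the same thing before the fixed Aurora versions land, the workaround described above can be applied per session, which makes it easy to compare plans with and without parallel query (the variable name is as used in Aurora MySQL 3; the query is the example from the post):

SET SESSION aurora_parallel_query = OFF;
EXPLAIN SELECT * FROM TABLE_2023
WHERE SOMETHING_ID = 1206
  AND TIME >= 202301010000 AND TIME <= 202304170000;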
Randoneering (135 rep)
Apr 27, 2023, 09:34 PM • Last activity: Mar 18, 2025, 07:03 PM
0 votes
1 answers
394 views
Fast Select from a table with ~300 columns with about 200 mil static rows
I have a table with around 300+ columns which contains demographics data with various attributes like Name, Address, State, Zipcode, Sex, education etc. The table has around 200 million rows. The table never gets updates or inserts and the data is always static unless it is refreshed, which happens once in 6 months, for which we delete the entire table and re-import it from an AWS S3 bucket. Right now, the table is on PostgreSQL 13.6, launched on AWS RDS (Aurora Serverless). The only operation performed on this table/database is SELECT, which is extremely slow (5-8 mins) when 4 or more conditions are applied. There is one index applied and the table is partitioned, by range, into 22 tables using Zip as the range. All columns have text as their datatype. I have tried using appropriate data types but there is no improvement in query time. EXPLAIN ANALYZE always shows a sequential scan even when an index scan could be used.
AWS Aurora size : Serverless v2 (30 - 128 ACUs)
Engine version : 13.6
All other AWS parameters set to default
I need help in determining whether there would be a better approach to storing this data for millisecond access, and whether going with PostgreSQL on AWS was the right choice. Unfortunately I cannot paste the full table structure or any example query.

**Update**: Added table schema. I have removed some columns, but they are of the same type. Please follow the link below (thanks @Vérace): https://paste.depesz.com/s/bn

Select count(*) from pubdata takes 11 seconds and uses an index scan; count: 255116641. The query below uses only 3 parallel workers and takes about 5 mins (which seems unacceptable): https://explain.depesz.com/s/1Gam

**Update 2** I have made the following changes to PostgreSQL parameters:
max_worker_processes : 20
max_parallel_workers : 10
max_parallel_workers_per_gather : 10
force_parallel_mode : true
I am still not seeing significant performance improvements. The RDS Aurora minimum config is 30 ACUs (60 GiB RAM) and it can go up to 128 ACUs (256 GiB).
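Two knobs that are often relevant for a read-only, scan-heavy table like this are the per-table parallel worker hint and the per-session gather limit; this is generic PostgreSQL, only a sketch, and the planner may still choose a different plan. The table name `pubdata` comes from the question and the WHERE clause is just an illustrative filter:

ALTER TABLE pubdata SET (parallel_workers = 10);
SET max_parallel_workers_per_gather = 10;
EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pubdata WHERE state = 'CA';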
Rohit Agre (101 rep)
Oct 30, 2022, 02:55 PM • Last activity: Feb 3, 2025, 11:07 PM
0 votes
0 answers
29 views
RDS - MySQL Aurora Upgrade Precheck Error
We are trying to upgrade our MySQL RDS instance from v5.7 to 8. The precheck fails with a few errors, of which we are unable to solve the one below about temporary tables. Since the tables are not present in the schema, we can't run a DROP TABLE command. We also tried a reboot, which didn't help. Any pointer here is appreciated.
"id": "schemaInconsistencyCheck",
"title": "Schema inconsistencies resulting from file removal or corruption",
"status": "OK",
"description": "Error: Following tables show signs that either table datadir directory or frm file was removed/corrupted. Please check server logs, examine datadir to detect the issue and fix it before upgrade",
"detectedProblems": [
    {
        "level": "Error",
        "dbObject": "tmp.table_name_test",
        "description": "present in INFORMATION_SCHEMA's INNODB_SYS_TABLES table but missing from TABLES table"
    },
    {
        "level": "Error",
        "dbObject": "tmp.table_name_temp",
        "description": "present in INFORMATION_SCHEMA's INNODB_SYS_TABLES table but missing from TABLES table"
    }
    ]
}
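As a diagnostic sketch, the orphaned dictionary entries the precheck complains about can usually be listed from the InnoDB data dictionary views in 5.7; the query below is generic (it just filters on the `tmp` schema named in the report) and does not by itself fix anything:

SELECT NAME, TABLE_ID FROM information_schema.INNODB_SYS_TABLES WHERE NAME LIKE 'tmp/%';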
user3651204 (1 rep)
Jan 28, 2025, 08:52 AM • Last activity: Jan 28, 2025, 11:07 AM
0 votes
1 answers
1452 views
AWS Aurora Mysql different instance types for writer/reader
I have a db.r5.large for the master/writer. I want to add some readers. Do they need to be the same instance type as the master? Or can i add for example 3x db.t3.medium ?
AFRC (101 rep)
Nov 27, 2020, 11:59 PM • Last activity: Jan 16, 2025, 11:04 PM
1 votes
0 answers
71 views
Why are Amazon Aurora RDS Postgres queries with SET session variables taking forever to run?
I am trying to pass some extra metadata from my dotnet core API app alongside each DB query via a couple of SET session variables (aka customized options), e.g.:
SET app.name = 'Test';
SET app.user_id = '';
SET app.session_id = '';

SELECT now();
This works great locally when run from my API container in Docker against a local Postgres 16.4 instance, also running in Docker. However, when deployed to our CI/CD AWS environment, which is connected to an Amazon Aurora Postgres 16.4 RDS instance, the performance is terrible. It seems like each SET statement adds a variable ~1-4 seconds of delay, making even the most basic SQL statement take 10-20 seconds to complete. If I comment out the SET statements the impact disappears and the simple queries are instantaneous. This feels like either some major downside of AWS Aurora's flavor of Postgres, or some magic setting that I am missing that controls how these values are processed. I can't find anything on the AWS side that warns against this usage, and the Postgres articles I find seem to promote using SET vars in this manner.
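One variation worth testing (a sketch, not a confirmed fix) is collapsing the three SETs into a single statement with the standard set_config() function, so the metadata travels in one round trip instead of several; the values here are just placeholders:

SELECT set_config('app.name', 'Test', false),
       set_config('app.user_id', '42', false),
       set_config('app.session_id', 'abc-123', false);

SELECT now();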
Ian (11 rep)
Oct 29, 2024, 09:32 PM
1 votes
1 answers
107 views
When is it safe to drop redundant or duplicated indexes on AWS RDS MySQL?
I want to identify redundant and duplicate indexes, ignoring primary and foreign keys, and check their cardinality. What is the best practice for dropping indexes on AWS RDS MySQL databases in the AWS RDS environment? My environment (context): AWS RDS Aurora MySQL 8.0. The purpose is to reduce storage costs and improve INSERT/UPDATE/DELETE transactions.
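For the identification step, MySQL 8.0 ships sys-schema views that list candidate indexes; they are a starting point only, and anything they flag should be checked against real query plans (and ideally tested by making the index invisible first) before being dropped. The table/index names in the last statement are hypothetical:

SELECT * FROM sys.schema_redundant_indexes;
SELECT * FROM sys.schema_unused_indexes;
ALTER TABLE mytable ALTER INDEX idx_candidate INVISIBLE;  -- hypothetical names; lets you observe impact before DROP INDEX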
Jayron Soares (13 rep)
Aug 2, 2024, 12:10 AM • Last activity: Aug 2, 2024, 09:12 PM
1 votes
1 answers
48 views
Decrease high response time when reading large table in MySQL
I have a table with around 80 million rows called *parts*. I have a simple query like this:

SELECT line_code, part_number, compressed_part_number, ...[other columns]
FROM parts
WHERE (@partNumber = part_number OR @partNumber = compressed_part_number)
AND region = @region
AND @lineCode = line_code

I have separate indices on both columns *part_number* and *compressed_part_number*. The EXPLAIN for this query shows:

| # | id | select_type | table | partitions | type | key | key_len | ref | rows | filtered | Extra |
|---|----|-------------|-------|------------|------|-----|---------|-----|------|----------|-------|
| 1 | 1 | SIMPLE | parts | NULL | index_merge | partnumber_idx, linecode_comprpartnumber_region_parttype_idx | 123, 123 | NULL | 64706 | 0.00 | Using sort_union(partnumber_idx, linecode_comprpartnumber_region_parttype_idx); Using where |

The EXPLAIN FORMAT=JSON output shows the following cost information: [screenshot: cost info]

Usually, the number of rows returned ranges between 4k and 6k. However, the first time the query is executed (for each combination of @partNumber and @lineCode), the response time takes more than 20s (subsequent executions of the same query finish in less than 500ms). If I add a limit of, for example, 500, the query performs a bit better (~3s).

I have two clusters running in AWS RDS:
1. Cluster A: db.r5.large (80M+ rows in parts table)
2. Cluster B: db.r6g.2xlarge (300M+ rows in parts table)

The same behavior is reproducible in both clusters. The part_number and compressed_part_number columns are varchar(50). I assume the main bottleneck is the I/O of reading and retrieving the data, as the read cost overwhelmingly exceeds the evaluation cost (I could be wrong interpreting this, though).

1. Is it normal that MySQL struggles to return 2 to 5 thousand rows from large tables?
2. Is there a way to improve the performance in this aspect? (I'm open to new ideas for structuring the information.)
3. Are there other options in RDS to use faster disks, or could changing the InnoDB configuration help?

Note: We need all the information, because we are doing other processing with all the rows. Pagination is not an option either.
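One common rewrite for an OR across two differently indexed columns is to split it into a UNION ALL so each branch can use a single composite index; this is a generic sketch using the columns from the question (index names and the full column list are up to your schema), and the extra predicate in the second branch avoids returning the same row twice:

SELECT line_code, part_number, compressed_part_number
FROM parts
WHERE part_number = @partNumber AND region = @region AND line_code = @lineCode
UNION ALL
SELECT line_code, part_number, compressed_part_number
FROM parts
WHERE compressed_part_number = @partNumber
  AND (part_number <> @partNumber OR part_number IS NULL)
  AND region = @region AND line_code = @lineCode;

Composite indexes such as (region, line_code, part_number) and (region, line_code, compressed_part_number) would be the natural companions to this shape.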
fluid undefined (111 rep)
Jul 12, 2024, 06:16 PM • Last activity: Jul 12, 2024, 08:44 PM
0 votes
1 answers
94 views
MySQL Amazon Aurora - why is CPU low while max vCPU is high?
In MySQL Amazon Aurora, is it normal for CPU to be low while other metrics are above the max vCPU line? [screenshot: DB metrics]
mstrom (143 rep)
Mar 6, 2024, 07:00 PM • Last activity: Mar 7, 2024, 09:51 AM
2 votes
1 answers
542 views
MySQL Amazon Aurora - a few slow queries causing extreme performance issues
I'm a dev, not a DBA, so pardon my ignorance. I'm running on MySQL Amazon Aurora, single-instance (with a replica for failover). A few days ago, my application had a handful of *extremely* slow queries (15min+) run simultaneously. I'm planning on optimizing those queries, but looking at the Aurora metrics for the waits, I'm curious about a couple of things. The spike from 18:30-19:00 is when the db was barely keeping up, slowing down the entire application.
- Is it normal for a handful of long-running queries to cause an extreme bottleneck across the db?
- Is there some way to terminate long-running queries after N number of seconds?
- Does the wait/io/table/sql/handler indicate that the db was busy running queries?

[screenshot: wait metrics]
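On the "terminate after N seconds" question, MySQL 5.7+/8.0 has a statement timeout for read-only SELECTs that can be set globally (on Aurora, typically through the parameter group) or per query with an optimizer hint; the 30-second value and the `orders` table below are just illustrative:

SET GLOBAL max_execution_time = 30000;  -- milliseconds, applies to SELECT statements
SELECT /*+ MAX_EXECUTION_TIME(30000) */ count(*) FROM orders;  -- 'orders' is a hypothetical table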
mstrom (143 rep)
Feb 29, 2024, 09:07 PM • Last activity: Feb 29, 2024, 09:41 PM
0 votes
0 answers
246 views
Why reading data in Postgres requires lock?
Using AWS Postgres, I am seeing some issues with LWLock:lock_manager. While investigating the issues, I've found this article. I can't understand **why READING data requires any kind of lock**, and why, during spikes in traffic, these locks cause my DB to throttle.
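For a bit of background, even a plain SELECT takes an ACCESS SHARE lock on every relation it touches (so the table can't be dropped mid-query), and a session can see this for itself; this is standard PostgreSQL, with `some_table` as a hypothetical name:

BEGIN;
SELECT count(*) FROM some_table;
SELECT locktype, relation::regclass AS relation, mode, granted
FROM pg_locks
WHERE pid = pg_backend_pid();
COMMIT;

The LWLock:lock_manager wait concerns the internal structures that track those locks, which can become a hotspot when many sessions touch many relations (for example, heavily partitioned tables) at once.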
Maciej Pszczolinski (101 rep)
Feb 21, 2024, 07:21 AM • Last activity: Feb 21, 2024, 07:42 AM
1 votes
0 answers
324 views
Aurora RDS - difference between ConnectionAttempts and DatabaseConnections
So I have read this a few times about these two metrics: ConnectionAttempts and DatabaseConnections. Based on the above, is RDS rejecting lots of connections? Here are the graphs of these two metrics. Or perhaps I have misunderstood? [screenshot: metric graphs] The max_connections parameter is set to 1609 for each instance.
Abs (11 rep)
Feb 9, 2024, 04:59 AM • Last activity: Feb 13, 2024, 02:57 AM
1 votes
1 answers
33 views
Aurora RDS async replication to historical MySQL cluster
I was trying to follow https://dev.mysql.com/doc/refman/8.0/en/replication-howto-masterstatus.html until the command to pause all writes got access denied on my test db:

FLUSH TABLES WITH READ LOCK;
Error Code 1045. Access denied for user 'admin'@'%'

So in Aurora RDS, how do I pause all writes, grab the binary log filename and offset, take a snapshot, and then unpause all writes? Then I can restore the snapshot, and I hope to set up replication from that offset to catch up to the source database.
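For what it's worth, the pieces that do work with RDS-level privileges are reading the current coordinates and making sure the binlogs are retained long enough to start the replica; note that `SHOW MASTER STATUS` by itself does not freeze writes, so the coordinates are only a consistent starting point if writes are paused by other means or the snapshot comes from a quiesced clone:

CALL mysql.rds_set_configuration('binlog retention hours', 24);
SHOW MASTER STATUS;  -- current binlog file and position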
Dean Hiller (115 rep)
Feb 7, 2024, 09:03 AM • Last activity: Feb 7, 2024, 10:23 PM
Showing page 1 of 20 total questions