Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
2
votes
2
answers
1939
views
MariaDB 11.2 using too much RAM
I'm running MariaDB 11.2 in Docker Swarm on Ubuntu 20.04, and its memory usage keeps growing until the database is killed by the OOM killer.
According to MySQLTuner, max memory should be around 7.1GB, but after 1-2 days memory usage grows to 30GB. So when a new task is started on the server and there is not enough memory, the OOM killer selects mariadbd and kills it.
Server:
- Ubuntu 20.04.1 (5.4.0-169-generic Kernel)
- 64 GB RAM
- Docker 24.0.6
- MariaDB 11.2.3 (in container)
- Also Apache 2 (in container)
Here's `docker-compose.yaml`:
```yaml
version: "3.8"
services:
  database:
    image: mariadb:11.2
    deploy:
      restart_policy:
        condition: any
    networks:
      - database
    ports:
      - "3306:3306"
    ulimits:
      memlock: 2097152 # 2MB
    volumes:
      - "db_data:/var/lib/mysql"
      - "db_log:/var/log/mysql"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
      - "./my.cnf:/etc/mysql/my.cnf"
networks:
  database:
    driver: overlay
    attachable: true
    name: database
...
```
Here's `my.cnf`:
```ini
[server]
table_definition_cache = 2048
innodb_buffer_pool_size = 6G
innodb_log_file_size = 1536M
innodb_log_buffer_size = 256M
log_error = /var/log/mysql/error.log

[client-server]
socket = /run/mysqld/mysqld.sock

!includedir /etc/mysql/mariadb.conf.d/
!includedir /etc/mysql/conf.d/
```
*Includes are from my.cnf template.*
Content of `/etc/mysql/mariadb.conf.d/`:
```ini
# 05-skipcache.cnf:
[mariadb]
host-cache-size=0
skip-name-resolve

# 50-server.cnf:
[mariadbd]
pid-file = /run/mysqld/mysqld.pid
basedir = /usr
expire_logs_days = 10
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
```
The directory `/etc/mysql/conf.d/` is empty.
## What I tried
### Update database
I had the same problem with version 10.6, so I tried upgrading to 11.2, but that didn't fix the problem.
### Performance schema
I enabled the performance schema with memory instruments:
```ini
...
performance_schema = ON
performance-schema-instrument='memory/%=COUNTED'
...
```
The sum from `sys.x$memory_global_by_current_bytes` is around 7.1GB, but the memory used by the mariadbd process is much higher.
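For reference, a minimal sketch of that kind of check (assuming the sys schema and its format_bytes helper are installed, as they are in stock MariaDB 11.x):
```sql
-- Sum everything the memory instruments can see. Any gap between this
-- figure and the resident set size of the mariadbd process is memory
-- the instruments do not cover (uninstrumented allocations, allocator
-- fragmentation, etc.).
SELECT sys.format_bytes(SUM(current_alloc)) AS instrumented_total
FROM sys.x$memory_global_by_current_bytes;
```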
### Increase memlock in container to 2MB
I found a warning in `mysql/error.log`:
```
2024-02-11 21:09:42 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOMEM: try larger memory locked limit, ulimit -l, or https://mariadb.com/kb/en/systemd/#configuring-limitmemlock under systemd (262144 bytes required)
2024-02-11 21:09:42 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
```
So I increased the memory lock limit to 2MB. The warning is gone, but the memory problem persists.
---
## Update 1
I had to reduce innodb_buffer_pool_size to 1GB to reduce the number of crashes. I also adjusted innodb_log_file_size to 256MB.
MySQLTuner output when MariaDB takes up 20GB of RAM (total system usage 48GB, ~70%):
...
-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 2d 21h 59m 30s (142M q [564.348 qps], 336K conn, TX: 531G, RX: 107G)
[--] Reads / Writes: 80% / 20%
[--] Binary logging is disabled
[--] Physical Memory : 62.8G
[--] Max MySQL memory : 2.0G
[--] Other process memory: 0B
[--] Total buffers: 1.5G global + 2.9M per thread (151 max threads)
[--] Performance_schema Max memory usage: 0B
[--] Galera GCache Max memory usage: 0B
[OK] Maximum reached memory usage: 1.6G (2.59% of installed RAM)
[OK] Maximum possible memory usage: 2.0G (3.11% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (612/142M)
[OK] Highest usage of available connections: 25% (38/151)
[OK] Aborted connections: 0.00% (0/336034)
[OK] Query cache is disabled by default due to mutex contention on multiprocessor machines.
[OK] Sorts requiring temporary tables: 0% (113 temp sorts / 1M sorts)
[!!] Joins performed without indexes: 1341
[OK] Temporary tables created on disk: 4% (16K on disk / 358K total)
[OK] Thread cache hit rate: 99% (38 created / 336K connections)
[OK] Table cache hit rate: 99% (143M hits / 143M requests)
[OK] table_definition_cache (2048) is greater than number of tables (441)
[OK] Open file limit used: 0% (32/32K)
[OK] Table locks acquired immediately: 100% (468 immediate / 468 locks)
...
-------- MyISAM Metrics ----------------------------------------------------------------------------
[--] General MyIsam metrics:
[--] +-- Total MyISAM Tables : 0
[--] +-- Total MyISAM indexes : 0B
[--] +-- KB Size :128.0M
[--] +-- KB Used Size :23.3M
[--] +-- KB used :18.2%
[--] +-- Read KB hit rate: 0% (0 cached / 0 reads)
[--] +-- Write KB hit rate: 0% (0 cached / 0 writes)
[--] No MyISAM table(s) detected ....
-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[OK] InnoDB File per table is activated
[OK] InnoDB Buffer Pool size ( 1.0G ) under limit for 64 bits architecture: (17179869184.0G )
[!!] InnoDB buffer pool / data size: 1.0G / 34.4G
[OK] Ratio InnoDB log file size / InnoDB Buffer pool size: 256.0M * 1/1.0G should be equal to 25%
[--] Number of InnoDB Buffer Pool Chunk: 64 for 1 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 99.34% (42353558185 hits / 42633926388 total)
[OK] InnoDB Write Log efficiency: 91.45% (130133891 hits / 142298603 total)
[OK] InnoDB log waits: 0.00% (0 waits / 12164712 writes)
-------- Aria Metrics ------------------------------------------------------------------------------
[--] Aria Storage Engine is enabled.
[OK] Aria pagecache size / total Aria indexes: 128.0M/2.6M
[OK] Aria pagecache hit rate: 99.9% (507M cached / 385K reads)
...
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
You are using an unsupported version for production environments
Upgrade as soon as possible to a supported version !
We will suggest raising the 'join_buffer_size' until JOINs not using indexes are found.
See https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_join_buffer_size
Performance schema should be activated for better diagnostics
Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: https://bit.ly/2TcGgtU
Variables to adjust:
join_buffer_size (> 256.0K, or always use indexes with JOINs)
performance_schema=ON
innodb_buffer_pool_size (>= 34.4G) if possible.
---
## Update 2
I found a warning in the log file:
```
[Warn] [Entrypoint]: /sys/fs/cgroup/pids:/docker/
11:net_cls,net_prio:/docker/
10:cpuset:/docker/
9:memory:/docker/
8:cpu,cpuacct:/docker/
7:hugetlb:/docker/
6:freezer:/docker/
5:devices:/docker/
4:blkio:/docker/
3:rdma:/docker/
2:perf_event:/docker/
1:name=systemd:/docker/
0::/docker//memory.pressure not writable, functionality unavailable to MariaDB
```
Jakub Marek
(21 rep)
Feb 16, 2024, 12:16 PM
• Last activity: Aug 5, 2025, 09:08 AM
0
votes
1
answers
141
views
MariaDB (RDS) going temporarily offline (with memory drop)
We have an issue with a database of ours on RDS running MariaDB 10.4.8 on a t3-large instance type.
For the past week or so, it's been sporadically inaccessible with no errors I can find. During the minute or two that it's down, I can't even run SHOW PROCESSLIST; it just hangs. It fixes itself and is back working within a few minutes. When it comes back, it looks like it has a lot of connections and queries to catch up on.
Parameters changed from their defaults:
```
ft_min_word_len                  1
innodb_buffer_pool_size          {DBInstanceClassMemory*17/20}
innodb_ft_enable_stopword        1
innodb_ft_min_token_size         1
log_bin_trust_function_creators  1
max_statement_time               30
sql_mode                         NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
```
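As an aside (my arithmetic, not from the post): a t3.large has 8 GiB of RAM, so `{DBInstanceClassMemory*17/20}` resolves to roughly 6.8 GiB for the buffer pool, leaving little headroom for per-connection buffers. A quick sanity check of the value RDS actually applied:
```sql
-- Effective InnoDB buffer pool size in GiB on the running instance.
SELECT @@innodb_buffer_pool_size / POW(1024, 3) AS buffer_pool_gib;
```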
## Freeable Memory:
## Swap space:
## Performance Insights:
Is there any way to find out what's causing it? Could it be a lack of free memory? What's a normal free-memory threshold to be running at?



Edd Turtle
(113 rep)
Nov 16, 2020, 02:54 PM
• Last activity: Aug 1, 2025, 09:01 AM
4
votes
1
answers
1366
views
MySQL consumes much more memory than configured in innodb_buffer_pool_size
Long story short: I have the buffer pool size = 40G. I have 64GB RAM on my server. But MySQL actually uses 73G of memory (53G physical and the rest swapped).
MySQL version: 8.0.22-0ubuntu0.20.04.2
How is that possible? And how do I deal with it? I'd like MySQL not to swap; I think this degrades its performance significantly.
Just a note: I'm using MySQL in a non-standard way. There are no "users", just a couple of scripts running which perform very heavy calculations; most queries run for several hours.
My `mysqld.cnf`:
```ini
sync_binlog=0
innodb_buffer_pool_size=40G
innodb_buffer_pool_instances=4
innodb_buffer_pool_chunk_size=10G
innodb_flush_log_at_trx_commit=0
innodb_log_file_size=10G
innodb_lock_wait_timeout=18000 #5 hours
innodb_deadlock_detect=0
innodb_online_alter_log_max_size=512M
max_heap_table_size=2G
tmp_table_size=2G
max_allowed_packet=1G
ft_min_word_len=1
ft_stopword_file=''
innodb_ft_min_token_size=1
innodb_ft_enable_stopword='OFF'
innodb_ft_result_cache_limit=4000000000
sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,NO_ENGINE_SUBSTITUTION,NO_UNSIGNED_SUBTRACTION'
binlog_expire_logs_seconds=115000 #1.5 days
wait_timeout=86400 #24 hours
event_scheduler='OFF'
transaction_isolation='READ-UNCOMMITTED'
range_optimizer_max_mem_size=0
```
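As a rough sanity check (a back-of-the-envelope sketch, not an exact model of the server's allocator), the memory ceiling implied by settings like these can be estimated from the global buffers plus the per-connection buffers:
```sql
-- Global buffers plus per-connection buffers * max_connections.
-- Deliberately ignores performance_schema, the data dictionary and
-- InnoDB bookkeeping overhead, so the real footprint is higher.
SELECT sys.format_bytes(
         @@innodb_buffer_pool_size
       + @@innodb_log_buffer_size
       + @@key_buffer_size
       + @@max_connections * ( @@sort_buffer_size
                             + @@join_buffer_size
                             + @@read_buffer_size
                             + @@read_rnd_buffer_size
                             + @@thread_stack )
       ) AS theoretical_max;
```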
`SHOW ENGINE INNODB STATUS`:
=====================================
2023-01-06 09:03:14 0x7f3b5454f700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 42 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 5817581 srv_active, 0 srv_shutdown, 1176515 srv_idle
srv_master_thread log flush and writes: 0
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 2481543293
OS WAIT ARRAY INFO: signal count 2225015700
RW-shared spins 2256999997, rounds 3189111343, OS waits 930003374
RW-excl spins 2051400118, rounds 48471954436, OS waits 1345519963
RW-sx spins 2536334, rounds 3679029, OS waits 15196
Spin rounds per wait: 1.41 RW-shared, 23.63 RW-excl, 1.45 RW-sx
------------------------
LATEST FOREIGN KEY ERROR
------------------------
2022-12-21 14:40:46 0x7f45e015e700 Transaction:
TRANSACTION 7045075024, ACTIVE 0 sec updating or deleting
mysql tables in use 1, locked 1
11 lock struct(s), heap size 1136, 2 row lock(s), undo log entries 1
MySQL thread id 2964075, OS thread handle 139938089002752, query id 350171612 10.0.0.4 root updating
delete from users
where id
= 12
Foreign key constraint fails for table jokii
.matching_actions
:
,
CONSTRAINT matching_actions_user_id_foreign
FOREIGN KEY (user_id
) REFERENCES users
(id
)
Trying to delete in parent table, in index PRIMARY tuple:
DATA TUPLE: 11 fields;
0: len 8; hex 000000000000000c; asc ;;
1: len 6; hex 0001a3eb5050; asc PP;;
2: len 7; hex 0200001a561e07; asc V ;;
3: len 10; hex 4d617463686572363636; asc Matcher666;;
4: SQL NULL;
5: len 60; hex 2432792431302445326e484c77516a312f70742f504651786463676a2e303054734b35495a337a326a316b50576939312f756f72393167766842742e; asc $2y$10$E2nHLwQj1/pt/PFQxdcgj.00TsK5IZ3z2j1kPWi91/uor91gvhBt.;;
6: len 60; hex 473877353072535a363456334755694e486663763156355570756244525a774d46417a66344b516c63654c774c7049453730444552414f4f65474a6a; asc G8w50rSZ64V3GUiNHfcv1V5UpubDRZwMFAzf4KQlceLwLpIE70DERAOOeGJj;;
7: len 1; hex 02; asc ;;
8: len 1; hex 81; asc ;;
9: len 4; hex 63a31ad0; asc c ;;
10: len 4; hex 63a31ad0; asc c ;;
But in child table jokii
.matching_actions
, in index matching_actions_user_id_foreign, there is a record:
PHYSICAL RECORD: n_fields 2; compact format; info bits 0
0: len 8; hex 000000000000000c; asc ;;
1: len 8; hex 0000000000010f66; asc f;;
------------
TRANSACTIONS
------------
Trx id counter 7136759163
Purge done for trx's n:o = 7136654785, sees < 7136654785
--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
I/O thread 3 state: waiting for completed aio requests (read thread)
I/O thread 4 state: waiting for completed aio requests (read thread)
I/O thread 5 state: waiting for completed aio requests (read thread)
I/O thread 6 state: waiting for completed aio requests (write thread)
I/O thread 7 state: waiting for completed aio requests (write thread)
I/O thread 8 state: waiting for completed aio requests (write thread)
I/O thread 9 state: waiting for completed aio requests (write thread)
Pending normal aio reads: [0, 0, 0, 0] , aio writes: [0, 0, 0, 0] ,
ibuf aio reads:, log i/o's:, sync i/o's:
Pending flushes (fsync) log: 0; buffer pool: 85252
22562121354 OS file reads, 1883479287 OS file writes, 354203769 OS fsyncs
1 pending preads, 0 pending pwrites
10835.12 reads/s, 16384 avg bytes/read, 21.00 writes/s, 9.48 fsyncs/s
-------------------------------------
INSERT BUFFER AND ADAPTIVE HASH INDEX
-------------------------------------
Ibuf: size 15702, free list len 939386, seg size 955089, 152372692 merges
merged operations:
insert 494772961, delete mark 1167112219, delete 295481119
discarded operations:
insert 19938, delete mark 1, delete 1
Hash table size 10624987, node heap has 51992 buffer(s)
Hash table size 10624987, node heap has 1 buffer(s)
Hash table size 10624987, node heap has 1 buffer(s)
Hash table size 10624987, node heap has 11 buffer(s)
Hash table size 10624987, node heap has 15 buffer(s)
Hash table size 10624987, node heap has 1 buffer(s)
Hash table size 10624987, node heap has 2 buffer(s)
Hash table size 10624987, node heap has 13876 buffer(s)
607081.07 hash searches/s, 161045.98 non-hash searches/s
---
LOG
---
Log sequence number 143119571682666
Log buffer assigned up to 143119571682666
Log buffer completed up to 143119571682666
Log written up to 143119571682666
Log flushed up to 143119571682278
Added dirty pages up to 143119571682666
Pages flushed up to 143117988865165
Last checkpoint at 143117988865165
1177812297 log i/o's done, 6.95 log i/o's/second
----------------------
BUFFER POOL AND MEMORY
----------------------
Total large memory allocated 43956305920
Dictionary memory allocated 1494388
Buffer pool size 2621440
Free buffers 28
Database pages 2555398
Old database pages 943378
Modified db pages 4050
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 184186084759, not young 878129585715
0.00 youngs/s, 0.01 non-youngs/s
Pages read 22562341496, created 30581445, written 547935573
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 995 / 1000, young-making rate 1 / 1000 not 109 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 2555398, unzip_LRU len: 0
I/O sum:cur, unzip sum:cur
----------------------
INDIVIDUAL BUFFER POOL INFO
----------------------
---BUFFER POOL 0
Buffer pool size 655360
Free buffers 27
Database pages 638847
Old database pages 235844
Modified db pages 837
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 46030684170, not young 220509855311
0.00 youngs/s, 0.00 non-youngs/s
Pages read 5453016045, created 7695472, written 136263915
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 995 / 1000, young-making rate 1 / 1000 not 120 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 638847, unzip_LRU len: 0
I/O sum:cur, unzip sum:cur
---BUFFER POOL 1
Buffer pool size 655360
Free buffers 1
Database pages 638884
Old database pages 235857
Modified db pages 966
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 46060288135, not young 222302083098
0.00 youngs/s, 0.00 non-youngs/s
Pages read 5787215064, created 7628619, written 134915374
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 994 / 1000, young-making rate 1 / 1000 not 126 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 638884, unzip_LRU len: 0
I/O sum:cur, unzip sum:cur
---BUFFER POOL 2
Buffer pool size 655360
Free buffers 0
Database pages 638767
Old database pages 235814
Modified db pages 1065
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 45912035006, not young 218035144334
0.00 youngs/s, 0.00 non-youngs/s
Pages read 5544539793, created 7640673, written 134899528
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 996 / 1000, young-making rate 0 / 1000 not 94 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 638767, unzip_LRU len: 0
I/O sum:cur, unzip sum:cur
---BUFFER POOL 3
Buffer pool size 655360
Free buffers 0
Database pages 638900
Old database pages 235863
Modified db pages 1182
Pending reads 0
Pending writes: LRU 0, flush list 0, single page 0
Pages made young 46183077448, not young 217282502972
0.00 youngs/s, 0.00 non-youngs/s
Pages read 5777570594, created 7616681, written 141856756
0.00 reads/s, 0.00 creates/s, 0.00 writes/s
Buffer pool hit rate 996 / 1000, young-making rate 1 / 1000 not 101 / 1000
Pages read ahead 0.00/s, evicted without access 0.00/s, Random read ahead 0.00/s
LRU len: 638900, unzip_LRU len: 0
I/O sum:cur, unzip sum:cur
--------------
ROW OPERATIONS
--------------
0 queries inside InnoDB, 0 queries in queue
4 read views open inside InnoDB
Process ID=795, Main thread ID=139890893248256 , state=sleeping
Number of rows inserted 630429294, updated 2109116770, deleted 293525232, read 16421762044116
3.17 inserts/s, 0.07 updates/s, 0.00 deletes/s, 3431194.61 reads/s
Number of system rows inserted 197979, updated 1398, deleted 197627, read 592382
0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
----------------------------
END OF INNODB MONITOR OUTPUT
============================
`SHOW GLOBAL STATUS`: https://pastebin.com/raw/HWRKtpgC
Stalinko
(201 rep)
Jan 6, 2023, 09:32 AM
• Last activity: Jul 30, 2025, 10:05 AM
1
votes
2
answers
143
views
MySQL - Flush Memory after loading files
I am loading a large amount of data from CSV files using the LOAD FILE command. It works fine, and after loading the files it executes some data transformations.
The problem is that after loading, for example, 12 files (400 MB each), the mysql process consumes almost all available memory and does not release it, so the machine slows down and I need to restart mysql to continue loading data files.
MySQL 8.0.17
Any ideas how to solve this issue?
- Added stored procedure:
```sql
BEGIN
  DECLARE sequenceid INT;
  DECLARE _rollback BOOL DEFAULT 0;
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION SET _rollback = 1;

  START TRANSACTION;

  INSERT INTO filenamedt (filename, filedate, fileinitialid)
    VALUES (CAfilename, NOW(),
            (SELECT MAX(imp_sequence_id) + 1 FROM imp_siscori_sequence));

  INSERT INTO imp
    (SELECT imp, ordem, CONCAT(TRIM(data), '01') FROM temp_imp);

  INSERT INTO imp_data (imp_id, unid_comerc, incoterm)
    (SELECT imp_id, TRIM(unidadeComercial), TRIM(incoterm) FROM temp_imp);

  INSERT INTO imp_description (imp_id, imp_description)
    (SELECT imp_id, TRIM(descricaoProduto) FROM temp_imp);

  UPDATE imp_sequence
     SET imp_sequence_id = (SELECT MAX(imp_id) + 1 FROM temp_imp_siscori);

  -- NB: TRUNCATE performs an implicit commit in MySQL, ending the
  -- transaction started above.
  TRUNCATE TABLE temp_imp;

  SET @m = (SELECT MAX(imp_sequence_id) + 1 FROM imp_sequence);
  SET @s = CONCAT('ALTER TABLE temp_imp AUTO_INCREMENT=', @m);
  PREPARE stmt1 FROM @s;
  EXECUTE stmt1;
  DEALLOCATE PREPARE stmt1;

  IF _rollback THEN
    ROLLBACK;
  ELSE
    COMMIT;
  END IF;
END
```
Gustavo_Oliveira
(23 rep)
Jun 5, 2020, 10:52 PM
• Last activity: Jul 27, 2025, 11:04 AM
0
votes
1
answers
1492
views
how to prevent out of shared memory error when refreshing materialized views
*Postgresql14, Windows server 2012*
I get an `out of shared memory` error related to an insufficient `max_locks_per_transaction`; then the updates are aborted.
At no time did I see the `relation` field of a lock equal to a mat view.
I have two tables:
- `tbl1` gets inserts/updates every night.
- These actions trigger a function that updates `tbl2` (**28 000 rows**).
- These updates on `tbl2` trigger a function that refreshes **8** materialized views.
It seems that there are too many locks to complete the refreshes, because at some point I read in the log:
```
UPDATE "...ERROR: 25P02: current transaction is aborted, commands ignored until end of transaction block
```
It seems that I have to increase `max_locks_per_transaction`. The number could be 28 000 x 8 (did I understand the doc correctly?).
My `max_connections` is 80 and `max_locks_per_transaction` is 64 (the default), so 5120 (80*64) locks.
Should I set `max_locks_per_transaction` to 2800? Is that reasonable? Or are there other solutions?
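For orientation, a minimal sketch (using only stock PostgreSQL views, nothing specific to this setup) of how to compare the lock table's capacity with what is actually held during the refreshes:
```sql
-- The shared lock table holds roughly
-- max_locks_per_transaction * (max_connections + max_prepared_transactions)
-- entries in total.
SHOW max_locks_per_transaction;
SHOW max_connections;

-- Entries held right now; run while the refreshes are in flight
-- to see how close they get to that ceiling.
SELECT count(*) AS locks_held FROM pg_locks;
```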
**EDIT**: as suggested by Laurenz Albe, I'm adding some information.
- I manually updated one row in `tbl1` at 9:58.
- I copied and pasted the log between 9:58 and 10:03, the supposed end time of the mat view refreshes.
- During these 2 minutes, I took pictures of the `pg_locks` table.
The log:
2023-02-16 09:58:09.413 CET 57 db=activite,user=POSTGIS LOG: 00000: temporary file: path "base/pgsql_tmp/pgsql_tmp4824.5.sharedfileset/0.0", size 8192
2023-02-16 09:58:09.413 CET 58 db=activite,user=POSTGIS CONTEXT: SQL statement "refresh materialized view ge.prescription"
PL/pgSQL function activite.maj_vm_prescription() line 13 at EXECUTE
2023-02-16 09:58:09.413 CET 59 db=activite,user=POSTGIS LOCATION: ReportTemporaryFileUsage, fd.c:1448
2023-02-16 09:58:09.413 CET 60 db=activite,user=POSTGIS STATEMENT: UPDATE activite.prescription SET
numoa = '558'::character varying WHERE
gid = 77833;
2023-02-16 09:58:09.420 CET 61 db=activite,user=POSTGIS LOG: 00000: temporary file: path "base/pgsql_tmp/pgsql_tmp4824.5.sharedfileset/1.0", size 8192
2023-02-16 09:58:09.420 CET 62 db=activite,user=POSTGIS CONTEXT: SQL statement "refresh materialized view ge.prescription"
PL/pgSQL function activite.maj_vm_prescription() line 13 at EXECUTE
2023-02-16 09:58:09.420 CET 63 db=activite,user=POSTGIS LOCATION: ReportTemporaryFileUsage, fd.c:1448
2023-02-16 09:58:09.420 CET 64 db=activite,user=POSTGIS STATEMENT: UPDATE activite.prescription SET
numoa = '558'::character varying WHERE
gid = 77833;
2023-02-16 09:58:09.431 CET 65 db=activite,user=POSTGIS LOG: 00000: temporary file: path "base/pgsql_tmp/pgsql_tmp4824.4.sharedfileset/0.0", size 139264
2023-02-16 09:58:09.431 CET 66 db=activite,user=POSTGIS CONTEXT: SQL statement "refresh materialized view ge.prescription"
PL/pgSQL function activite.maj_vm_prescription() line 13 at EXECUTE
2023-02-16 09:58:09.431 CET 67 db=activite,user=POSTGIS LOCATION: ReportTemporaryFileUsage, fd.c:1448
2023-02-16 09:58:09.431 CET 68 db=activite,user=POSTGIS STATEMENT: UPDATE activite.prescription SET
numoa = '558'::character varying WHERE
gid = 77833;
2023-02-16 09:58:09.438 CET 69 db=activite,user=POSTGIS LOG: 00000: temporary file: path "base/pgsql_tmp/pgsql_tmp4824.4.sharedfileset/1.0", size 8192
2023-02-16 09:58:09.438 CET 70 db=activite,user=POSTGIS CONTEXT: SQL statement "refresh materialized view ge.prescription"
PL/pgSQL function activite.maj_vm_prescription() line 13 at EXECUTE
2023-02-16 09:58:09.438 CET 71 db=activite,user=POSTGIS LOCATION: ReportTemporaryFileUsage, fd.c:1448
2023-02-16 09:58:09.438 CET 72 db=activite,user=POSTGIS STATEMENT: UPDATE activite.prescription SET
numoa = '558'::character varying WHERE
gid = 77833;
2023-02-16 09:58:10.816 CET 73 db=activite,user=POSTGIS LOG: 00000: duration: 3310.880 ms statement: UPDATE activite.prescription SET
numoa = '558'::character varying WHERE
gid = 77833;
2023-02-16 09:58:10.816 CET 74 db=activite,user=POSTGIS LOCATION: exec_simple_query, postgres.c:1306
2023-02-16 09:58:26.166 CET 1 db=,user= LOG: 00000: automatic vacuum of table "activite.pg_catalog.pg_attribute": index scans: 1
pages: 0 removed, 230 remain, 0 skipped due to pins, 53 skipped frozen
tuples: 935 removed, 8707 remain, 0 are dead but not yet removable, oldest xmin: 199363
index scan needed: 88 pages from table (38.26% of total) had 2111 dead item identifiers removed
index "pg_attribute_relid_attnam_index": pages: 95 in total, 6 newly deleted, 30 currently deleted, 24 reusable
index "pg_attribute_relid_attnum_index": pages: 65 in total, 3 newly deleted, 19 currently deleted, 16 reusable
avg read rate: 8.772 MB/s, avg write rate: 3.869 MB/s
buffer usage: 429 hits, 297 misses, 131 dirtied
WAL usage: 304 records, 103 full page images, 451652 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.26 s
2023-02-16 09:58:26.166 CET 2 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:26.253 CET 3 db=,user= LOG: 00000: automatic analyze of table "activite.pg_catalog.pg_attribute"
avg read rate: 14.907 MB/s, avg write rate: 1.257 MB/s
buffer usage: 1020 hits, 166 misses, 14 dirtied
system usage: CPU: user: 0.03 s, system: 0.00 s, elapsed: 0.08 s
2023-02-16 09:58:26.253 CET 4 db=,user= LOCATION: do_analyze_rel, analyze.c:826
2023-02-16 09:58:26.322 CET 5 db=,user= LOG: 00000: automatic vacuum of table "activite.pg_catalog.pg_class": index scans: 1
pages: 0 removed, 33 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 59 removed, 1044 remain, 0 are dead but not yet removable, oldest xmin: 199364
index scan needed: 26 pages from table (78.79% of total) had 252 dead item identifiers removed
index "pg_class_oid_index": pages: 8 in total, 0 newly deleted, 0 currently deleted, 0 reusable
index "pg_class_relname_nsp_index": pages: 17 in total, 0 newly deleted, 0 currently deleted, 0 reusable
index "pg_class_tblspc_relfilenode_index": pages: 13 in total, 0 newly deleted, 3 currently deleted, 3 reusable
avg read rate: 3.586 MB/s, avg write rate: 4.611 MB/s
buffer usage: 153 hits, 21 misses, 27 dirtied
WAL usage: 87 records, 21 full page images, 118807 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.04 s
2023-02-16 09:58:26.322 CET 6 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:26.349 CET 7 db=,user= LOG: 00000: automatic analyze of table "activite.pg_catalog.pg_class"
avg read rate: 1.447 MB/s, avg write rate: 2.604 MB/s
buffer usage: 505 hits, 5 misses, 9 dirtied
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.02 s
2023-02-16 09:58:26.349 CET 8 db=,user= LOCATION: do_analyze_rel, analyze.c:826
2023-02-16 09:58:27.744 CET 9 db=,user= LOG: 00000: automatic vacuum of table "activite.ara.prescription": index scans: 0
pages: 0 removed, 797 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 4898 remain, 0 are dead but not yet removable, oldest xmin: 199365
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 4.554 MB/s, avg write rate: 4.554 MB/s
buffer usage: 860 hits, 801 misses, 801 dirtied
WAL usage: 798 records, 1 full page images, 55465 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 1.37 s
2023-02-16 09:58:27.744 CET 10 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:28.745 CET 11 db=,user= LOG: 00000: automatic analyze of table "activite.ara.prescription"
avg read rate: 6.352 MB/s, avg write rate: 0.273 MB/s
buffer usage: 9109 hits, 813 misses, 35 dirtied
system usage: CPU: user: 0.20 s, system: 0.01 s, elapsed: 0.99 s
2023-02-16 09:58:28.745 CET 12 db=,user= LOCATION: do_analyze_rel, analyze.c:826
2023-02-16 09:58:30.166 CET 13 db=,user= LOG: 00000: automatic vacuum of table "activite.cif.prescription": index scans: 0
pages: 0 removed, 805 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 4385 remain, 0 are dead but not yet removable, oldest xmin: 199366
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 2.641 MB/s, avg write rate: 4.768 MB/s
buffer usage: 1196 hits, 448 misses, 809 dirtied
WAL usage: 806 records, 1 full page images, 55937 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 1.32 s
2023-02-16 09:58:30.166 CET 14 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:31.163 CET 15 db=,user= LOG: 00000: automatic analyze of table "activite.cif.prescription"
avg read rate: 3.498 MB/s, avg write rate: 0.196 MB/s
buffer usage: 9905 hits, 446 misses, 25 dirtied
system usage: CPU: user: 0.17 s, system: 0.00 s, elapsed: 0.99 s
2023-02-16 09:58:31.163 CET 16 db=,user= LOCATION: do_analyze_rel, analyze.c:826
2023-02-16 09:58:32.889 CET 17 db=,user= LOG: 00000: automatic vacuum of table "activite.ge.prescription": index scans: 0
pages: 0 removed, 1101 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 6563 remain, 0 are dead but not yet removable, oldest xmin: 199367
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 0.018 MB/s, avg write rate: 5.063 MB/s
buffer usage: 2234 hits, 4 misses, 1105 dirtied
WAL usage: 1102 records, 1 full page images, 73401 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 1.70 s
2023-02-16 09:58:32.889 CET 18 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:33.972 CET 19 db=,user= LOG: 00000: automatic analyze of table "activite.ge.prescription"
avg read rate: 0.029 MB/s, avg write rate: 0.210 MB/s
buffer usage: 10688 hits, 4 misses, 29 dirtied
system usage: CPU: user: 0.28 s, system: 0.00 s, elapsed: 1.08 s
2023-02-16 09:58:33.972 CET 20 db=,user= LOCATION: do_analyze_rel, analyze.c:826
2023-02-16 09:58:34.202 CET 21 db=,user= LOG: 00000: automatic vacuum of table "activite.go.prescription": index scans: 0
pages: 0 removed, 404 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 2287 remain, 0 are dead but not yet removable, oldest xmin: 199370
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 0.154 MB/s, avg write rate: 4.042 MB/s
buffer usage: 844 hits, 4 misses, 105 dirtied
WAL usage: 405 records, 1 full page images, 32278 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.20 s
2023-02-16 09:58:34.202 CET 22 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:34.581 CET 23 db=,user= LOG: 00000: automatic analyze of table "activite.go.prescription"
avg read rate: 0.021 MB/s, avg write rate: 0.373 MB/s
buffer usage: 3970 hits, 1 misses, 18 dirtied
system usage: CPU: user: 0.09 s, system: 0.00 s, elapsed: 0.37 s
2023-02-16 09:58:34.581 CET 24 db=,user= LOCATION: do_analyze_rel, analyze.c:826
2023-02-16 09:58:35.201 CET 25 db=,user= LOG: 00000: automatic vacuum of table "activite.bfc.prescription": index scans: 0
pages: 0 removed, 347 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 2118 remain, 0 are dead but not yet removable, oldest xmin: 199372
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 4.692 MB/s, avg write rate: 4.692 MB/s
buffer usage: 379 hits, 351 misses, 351 dirtied
WAL usage: 348 records, 1 full page images, 28915 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.58 s
2023-02-16 09:58:35.201 CET 26 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:35.603 CET 27 db=,user= LOG: 00000: automatic analyze of table "activite.bfc.prescription"
avg read rate: 6.797 MB/s, avg write rate: 0.293 MB/s
buffer usage: 3664 hits, 348 misses, 15 dirtied
system usage: CPU: user: 0.07 s, system: 0.00 s, elapsed: 0.39 s
2023-02-16 09:58:35.603 CET 28 db=,user= LOCATION: do_analyze_rel, analyze.c:826
2023-02-16 09:58:35.686 CET 29 db=,user= LOG: 00000: automatic vacuum of table "activite.hdf.prescription": index scans: 0
pages: 0 removed, 381 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 1903 remain, 0 are dead but not yet removable, oldest xmin: 199375
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 0.509 MB/s, avg write rate: 0.509 MB/s
buffer usage: 794 hits, 4 misses, 4 dirtied
WAL usage: 382 records, 1 full page images, 30921 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.06 s
2023-02-16 09:58:35.686 CET 30 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:36.220 CET 31 db=,user= LOG: 00000: automatic analyze of table "activite.hdf.prescription"
avg read rate: 0.029 MB/s, avg write rate: 0.322 MB/s
buffer usage: 6119 hits, 2 misses, 22 dirtied
system usage: CPU: user: 0.09 s, system: 0.00 s, elapsed: 0.53 s
2023-02-16 09:58:36.220 CET 32 db=,user= LOCATION: do_analyze_rel, analyze.c:826
2023-02-16 09:58:36.343 CET 33 db=,user= LOG: 00000: automatic vacuum of table "activite.mm.prescription": index scans: 0
pages: 0 removed, 623 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 3820 remain, 0 are dead but not yet removable, oldest xmin: 199376
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 0.338 MB/s, avg write rate: 0.338 MB/s
buffer usage: 1279 hits, 4 misses, 4 dirtied
WAL usage: 624 records, 1 full page images, 45199 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.09 s
2023-02-16 09:58:36.343 CET 34 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:36.987 CET 35 db=,user= LOG: 00000: automatic analyze of table "activite.mm.prescription"
avg read rate: 0.037 MB/s, avg write rate: 0.243 MB/s
buffer usage: 6424 hits, 3 misses, 20 dirtied
system usage: CPU: user: 0.15 s, system: 0.01 s, elapsed: 0.64 s
2023-02-16 09:58:36.987 CET 36 db=,user= LOCATION: do_analyze_rel, analyze.c:826
2023-02-16 09:58:37.248 CET 37 db=,user= LOG: 00000: automatic analyze of table "activite.naom.prescription"
avg read rate: 0.101 MB/s, avg write rate: 0.572 MB/s
buffer usage: 2800 hits, 3 misses, 17 dirtied
system usage: CPU: user: 0.04 s, system: 0.01 s, elapsed: 0.23 s
2023-02-16 09:58:37.248 CET 38 db=,user= LOCATION: do_analyze_rel, analyze.c:826
2023-02-16 09:58:37.295 CET 39 db=,user= LOG: 00000: automatic vacuum of table "activite.pg_toast.pg_toast_5935083": index scans: 0
pages: 0 removed, 223 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 3348 remain, 0 are dead but not yet removable, oldest xmin: 199378
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 0.757 MB/s, avg write rate: 1.262 MB/s
buffer usage: 450 hits, 3 misses, 5 dirtied
WAL usage: 224 records, 1 full page images, 21542 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.03 s
2023-02-16 09:58:37.295 CET 40 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:37.326 CET 41 db=,user= LOG: 00000: automatic vacuum of table "activite.pg_toast.pg_toast_5937734": index scans: 0
pages: 0 removed, 80 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 1259 remain, 0 are dead but not yet removable, oldest xmin: 199378
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 1.578 MB/s, avg write rate: 2.105 MB/s
buffer usage: 164 hits, 3 misses, 4 dirtied
WAL usage: 81 records, 1 full page images, 13105 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.01 s
2023-02-16 09:58:37.326 CET 42 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:37.383 CET 43 db=,user= LOG: 00000: automatic vacuum of table "activite.pg_toast.pg_toast_5944587": index scans: 0
pages: 0 removed, 131 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 3231 remain, 0 are dead but not yet removable, oldest xmin: 199378
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 1.555 MB/s, avg write rate: 2.074 MB/s
buffer usage: 266 hits, 3 misses, 4 dirtied
WAL usage: 132 records, 1 full page images, 16114 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.01 s
2023-02-16 09:58:37.383 CET 44 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:37.429 CET 45 db=,user= LOG: 00000: automatic vacuum of table "activite.pg_toast.pg_toast_5947500": index scans: 0
pages: 0 removed, 210 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 3513 remain, 0 are dead but not yet removable, oldest xmin: 199378
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 0.786 MB/s, avg write rate: 1.048 MB/s
buffer usage: 424 hits, 3 misses, 4 dirtied
WAL usage: 211 records, 1 full page images, 20775 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.02 s
2023-02-16 09:58:37.429 CET 46 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:37.448 CET 47 db=,user= LOG: 00000: automatic vacuum of table "activite.pg_toast.pg_toast_5950402": index scans: 0
pages: 0 removed, 26 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 1012 remain, 0 are dead but not yet removable, oldest xmin: 199378
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 13.014 MB/s, avg write rate: 17.351 MB/s
buffer usage: 56 hits, 3 misses, 4 dirtied
WAL usage: 27 records, 1 full page images, 9919 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
2023-02-16 09:58:37.448 CET 48 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:37.462 CET 49 db=,user= LOG: 00000: automatic vacuum of table "activite.pg_toast.pg_toast_5951379": index scans: 0
pages: 0 removed, 28 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 1714 remain, 0 are dead but not yet removable, oldest xmin: 199378
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 19.306 MB/s, avg write rate: 25.741 MB/s
buffer usage: 60 hits, 3 misses, 4 dirtied
WAL usage: 29 records, 1 full page images, 10037 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
2023-02-16 09:58:37.462 CET 50 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:37.479 CET 51 db=,user= LOG: 00000: automatic vacuum of table "activite.pg_toast.pg_toast_5953084": index scans: 0
pages: 0 removed, 69 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 0 removed, 1881 remain, 0 are dead but not yet removable, oldest xmin: 199378
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 10.435 MB/s, avg write rate: 13.914 MB/s
buffer usage: 142 hits, 3 misses, 4 dirtied
WAL usage: 70 records, 1 full page images, 12456 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
2023-02-16 09:58:37.479 CET 52 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:58:58.912 CET 77 db=,user= LOG: 00000: checkpoint starting: time
2023-02-16 09:58:58.912 CET 78 db=,user= LOCATION: LogCheckpointStart, xlog.c:8782
2023-02-16 09:59:26.107 CET 1 db=,user= LOG: 00000: automatic vacuum of table "activite.pg_catalog.pg_statistic": index scans: 1
pages: 0 removed, 193 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 264 removed, 1419 remain, 0 are dead but not yet removable, oldest xmin: 199383
index scan needed: 64 pages from table (33.16% of total) had 134 dead item identifiers removed
index "pg_statistic_relid_att_inh_index": pages: 13 in total, 0 newly deleted, 0 currently deleted, 0 reusable
avg read rate: 3.215 MB/s, avg write rate: 4.649 MB/s
buffer usage: 408 hits, 83 misses, 120 dirtied
WAL usage: 286 records, 113 full page images, 478661 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.20 s
2023-02-16 09:59:26.107 CET 2 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 09:59:26.451 CET 3 db=,user= LOG: 00000: automatic vacuum of table "activite.pg_toast.pg_toast_2619": index scans: 1
pages: 0 removed, 395 remain, 0 skipped due to pins, 0 skipped frozen
tuples: 326 removed, 1278 remain, 0 are dead but not yet removable, oldest xmin: 199383
index scan needed: 93 pages from table (23.54% of total) had 331 dead item identifiers removed
index "pg_toast_2619_index": pages: 14 in total, 2 newly deleted, 5 currently deleted, 3 reusable
avg read rate: 3.575 MB/s, avg write rate: 4.267 MB/s
buffer usage: 636 hits, 150 misses, 179 dirtied
WAL usage: 365 records, 100 full page images, 154020 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.32 s
2023-02-16 09:59:26.451 CET 4 db=,user= LOCATION: heap_vacuum_rel, vacuumlazy.c:864
2023-02-16 10:03:28.196 CET 79 db=,user= LOG: 00000: checkpoint complete: wrote 4483 buffers (27.4%); 0 WAL file(s) added, 3 removed, 0 recycled; write=269.162 s, sync=0.088 s, total=269.284 s; sync files=144, longest=0.007 s, average=0.001 s; distance=46917 kB, estimate=46917 kB
2023-02-16 10:03:28.196 CET 80 db=,user= LOCATION: LogCheckpointEnd, xlog.c:8871
I was expecting a refresh of the 8 mat views, but I only see 4 refreshes of one mat view (`refresh materialized view ge.prescription`)... weird.
I see the other mat views, but only for vacuum.
Here are some `pg_locks` pictures I hope are relevant:


Leehan
(205 rep)
Feb 15, 2023, 02:57 PM
• Last activity: Jul 16, 2025, 12:04 PM
0
votes
1
answers
162
views
MySQL NDB Cluster Community - High memory utilization by mysqld only on source/replica nodes
I have two MySQL NDB community clusters (8.0.30), each with 3 VMs:
Cluster#1_VM1: mysqld + ndb_mgmd
Cluster#1_VM2: mysqld + ndb
Cluster#1_VM3: mysqld + ndb
Cluster#2_VM1: mysqld + ndb_mgmd
Cluster#2_VM2: mysqld + ndb
Cluster#2_VM3: mysqld + ndb
I have bidirectional replication between Cluster#1_VM2 and Cluster#2_VM2.
VM2/3 have identical parameters.
The choice of architecture might not be perfect, but I don't think it is relevant here.
The mysqld process consumes a lot of memory on the VMs with replication, and replication seems to be the only difference between those VMs.
(Though I'm not quite sure how the load is distributed among the mysqld processes.)
I would assume that ndbd would be the one using most of the RAM, since it has 8GB of DataMemory, but somehow mysqld is using a lot of RAM too.
I've checked several other questions here (e.g. https://dba.stackexchange.com/questions/286946/high-memory-usage-on-replica-mysql-8). Most of them relate to InnoDB, and the queries for e.g. buffer utilization show nothing that points to where the high memory utilization comes from.
So far I've checked:
- buffer utilization for InnoDB (I have NDB, I know)
- engine status for InnoDB and NDB - no hint of high memory usage, or at least not clearly visible
All I know for now is that it grows over time after a restart of the process, but I don't yet know how to limit/control the amount of memory consumed by mysqld in this case.
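One hedged aside: in an NDB cluster the data nodes' DataMemory lives in the ndbd/ndbmtd processes, not in mysqld, and the `ndbinfo` schema can confirm which side actually owns the growth (assuming ndbinfo is enabled, as it is by default):
```sql
-- Per-node usage of DataMemory and other pools on the data nodes;
-- mysqld's own footprint is separate from these figures.
SELECT node_id, memory_type, used, total
FROM ndbinfo.memoryusage;
```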
Marcin Stolzmann
(1 rep)
Sep 14, 2023, 11:34 AM
• Last activity: Jul 12, 2025, 08:02 AM
1
votes
1
answers
800
views
How to reduce MySQL performance_schema memory usage caused by thousands of unique hosts?
I have a problem with some performance_schema tables (\*\_by_host_* and \*\_by_account_*) taking up too much RAM. It is caused by clients connecting from a /19 subnet (8190 unique addresses). The connections aren't long-lasting: clients connect, perform a task and quit. Which settings can I tweak to limit the memory usage and what are the downsides of each limit?
The DB is on AWS RDS and I would like to keep Performance Insights with as much data as possible.
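For context, a minimal sketch of the knobs involved (standard MySQL variables, not RDS-specific; both are read-only at runtime, so changing them requires a restart):
```sql
-- -1 means autosized at startup. Lowering these caps the memory used by
-- the *_by_host_* and *_by_account_* tables; the trade-off is that once
-- a table fills up, new hosts/accounts are no longer tracked and the
-- Performance_schema_hosts_lost / _accounts_lost counters increment.
SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('performance_schema_hosts_size',
                        'performance_schema_accounts_size');
```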
Karol Jędrzejczyk
(11 rep)
Jun 4, 2021, 10:06 AM
• Last activity: Jul 9, 2025, 10:00 PM
0
votes
2
answers
614
views
Trying to find a way to fit an extremely large index into memory
**Description:**
I am having trouble with the ever-growing data size in my MySQL database.
I am using Ejabberd with its MAM function, which uses an `archive` table to store messages sent between users, and the table keeps growing. It now takes over 10 seconds to run a query like
```sql
SELECT COUNT(*) FROM archive
WHERE username = ''
  AND bare_peer = ''
  AND timestamp >= ''
  AND timestamp < ''
```
These are very common queries that execute thousands of times each day, and since the SQL is issued from within Ejabberd, I cannot change the syntax.
**Current Situation:**
- Instance specification: 8-core CPU, 64 GB RAM
- innodb_buffer_pool_size: 49392123904 bytes (roughly 49GB)
- With reference to this post, I got an estimated memory requirement of 1005383M (roughly 1TB).
- The archive table size: 700GB of data and ~200GB of indexes, around 0.9 billion rows (yes, a lot of rows).
- Here is the table creation SQL:
```sql
CREATE TABLE archive (
  username   varchar(191) COLLATE utf8mb4_unicode_ci NOT NULL,
  timestamp  bigint(20) unsigned NOT NULL,
  peer       varchar(191) COLLATE utf8mb4_unicode_ci NOT NULL,
  bare_peer  varchar(191) COLLATE utf8mb4_unicode_ci NOT NULL,
  xml        text COLLATE utf8mb4_unicode_ci NOT NULL,
  txt        text COLLATE utf8mb4_unicode_ci,
  id         bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  kind       varchar(10) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  nick       varchar(191) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  UNIQUE KEY id (id),
  KEY i_username (username) USING BTREE,
  KEY i_timestamp (timestamp) USING BTREE,
  KEY i_peer (peer) USING BTREE,
  KEY i_bare_peer (bare_peer) USING BTREE,
  FULLTEXT KEY i_text (txt)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
```
**Discussion:**
With the above information, one thing I could think of is to:
- Partition the archive table by primary key (RANGE, ~5M rows per partition). From my understanding, since MySQL doesn't support FULLTEXT indexes on partitioned tables, I would be required to drop the FULLTEXT index on the txt column, which I think is OK.
- Unfortunately, since MySQL can only partition on the primary key here, and I cannot change the SQL, I cannot make the queries use the partitions directly. What I could do is drop old partitions regularly and keep the remaining index size small enough to fit into memory as much as possible (see the sketch after the questions below).
I am posting to seek a second opinion on whether:
1. Is this the best I can do given the above limitations?
2. If so, how can I partition such a big table without downtime, possibly by using pt-online-schema-change?
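A hypothetical sketch of that plan (partition names and boundaries are made-up placeholders; MySQL requires the partitioning column to be part of every unique key, hence promoting the `id` key to a primary key first):
```sql
-- 1. Drop the FULLTEXT index: partitioned InnoDB tables cannot have one.
ALTER TABLE archive DROP INDEX i_text;

-- 2. Promote the unique key to a primary key so RANGE (id) is allowed.
ALTER TABLE archive DROP INDEX id, ADD PRIMARY KEY (id);

-- 3. Partition by id ranges (boundaries are placeholders).
ALTER TABLE archive PARTITION BY RANGE (id) (
    PARTITION p0   VALUES LESS THAN (300000000),
    PARTITION p1   VALUES LESS THAN (600000000),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);

-- Reclaiming old data later is then a cheap metadata operation:
-- ALTER TABLE archive DROP PARTITION p0;
```
pt-online-schema-change can apply the same ALTERs without long blocking, at the cost of rebuilding the table in the background.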
Thank you all for your time.
Vincent Lam
(3 rep)
Jun 7, 2020, 03:51 AM
• Last activity: Jun 27, 2025, 11:09 AM
2
votes
1
answers
2021
views
Out of memory in MySQL error log
What should I check to solve this error that appears in the error log?
Do I need to add RAM to the server or adjust some configuration variable?
Here is part of the log:
> 180102 10:45:53 InnoDB: Started; log sequence number 0 267632652
> mysql3: Out of memory (Needed 2299002880 bytes)
>
> 180102 10:45:54 [Note] mysql3: ready for connections. Version:
> '5.0.27-community-log' socket: '' port: 3307 MySQL Community
> Edition (GPL)
>
> 180102 10:46:14 [ERROR] Cannot find table
> test/cliconcepw from the internal data dictionary of InnoDB though the
> .frm file for the table exists. Maybe you have deleted and recreated
> InnoDB data files but have forgotten to delete the corresponding .frm
> files of InnoDB tables, or you have moved .frm files to another
> database?
>
> See http://dev.mysql.com/doc/refman/5.0/en/innodb-troubleshooting.html
> how you can resolve the problem.
>
> 180215 20:15:05 [ERROR] Out of memory; check if mysqld or some other
> process uses all available memory; if not, you may have to use
> 'ulimit' to allow mysqld to use more memory or you can add more swap
> space
-------------
Windows Server 2008 64 bits / 8 GB RAM
MySQL 5.0
-------------
Gaston
(21 rep)
Feb 16, 2018, 09:15 PM
• Last activity: Jun 24, 2025, 04:06 AM
1
votes
0
answers
89
views
PostgreSQL logical replication fails with memory allocation error
We have a geographically distributed database setup with seven servers. There are 62 master tables set up for logical replication with six subscribers.
This was working fine for a long time with PostgreSQL 15. Recently we migrated to PostgreSQL 17.5. Now the entire replication is failing often with an error message captured in all the servers' log files as
**"ERROR: invalid memory alloc request size 1294438032"**
The error is captured in the publisher first, and then all the subscribers go into catchup mode and never recover.
There are no network related issues and all the server configuration parameters are fine.
On a side note, all servers have streaming replication set up for one standby server each, and that is working quite fine.
In all the servers, memory utilization even at peak hours is less than 30%.
Google Gemini indicates this could be a PostgreSQL bug, but I can't find any related discussions online.
Can you point me in the right direction to solve this please?
I can share the values of any configuration parameters if it helps.
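For reference, a minimal sketch of the stock monitoring views I would watch while it happens (standard PostgreSQL catalogs, nothing specific to this setup):
```sql
-- On the publisher: how far each walsender has gotten per subscriber.
SELECT application_name, state, sent_lsn, replay_lsn
FROM pg_stat_replication;

-- On a subscriber: per-subscription apply-worker progress.
SELECT subname, received_lsn, latest_end_lsn, last_msg_receipt_time
FROM pg_stat_subscription;
```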
Thadeus Anand
(56 rep)
Jun 16, 2025, 11:26 AM
2
votes
2
answers
206
views
how to find out if a spid is active?
I have a transaction in a KILLED/ROLLBACK state and I want to make sure that SQL Server is no longer dealing with it, meaning there is no point in waiting for it to sort itself out.
The way I currently find out if the query is stuck is by using a query from our dear Pinal Dave; I'm not sure of the exact link, but I got this query from him:
```sql
print @@servername
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SET NOCOUNT OFF
SELECT spid
      ,kpid
      ,login_time
      ,last_batch
      ,status
      ,hostname
      ,nt_username
      ,loginame
      ,hostprocess
      ,cpu
      ,memusage
      ,physical_io
FROM sys.sysprocesses
WHERE cmd = 'KILLED/ROLLBACK'
```
This works, and through it I could conclude that my session id is stuck; nothing moves:
```sql
dbcc opentran
kill 61 with statusonly
```
Also, when I run this query, I get a better picture:
```sql
SELECT
    g.database_id,
    COUNT(vlf_sequence_number) AS VLF_Count
FROM sys.dm_db_log_info(NULL) g
WHERE g.database_id = DB_ID()
GROUP BY g.database_id
```
The tempdb transaction log is growing (basically I need to deal with it and restart the service), and I also checked tempdb. It is clear to me that I will have to restart the service.
Now it turns out that I would like to stop using sys.sysprocesses.
Question (finally):
How can I find out if a spid is active without using sys.sysprocesses?
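A minimal sketch of the DMV-based equivalent (standard SQL Server DMVs, offered as one possible replacement rather than a definitive answer):
```sql
-- sys.dm_exec_sessions shows whether the session still exists;
-- sys.dm_exec_requests shows whether it is still doing work
-- (for a rollback, percent_complete should keep moving).
SELECT s.session_id,
       s.status           AS session_status,
       r.status           AS request_status,
       r.command,
       r.percent_complete,
       r.wait_type
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r
       ON r.session_id = s.session_id
WHERE s.session_id = 61;   -- the spid killed above
```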
I did not include my main cause of concern in this question: how long it would take for the SQL Server service to come back online. In this case I am nearly convinced that it would be quick, as it was not a long transaction; however, there is an availability group (manual failover, asynchronous commit) in the mix.
None of this is part of this question; I'm just mentioning it for context.
Marcello Miorelli
(17274 rep)
Jun 12, 2025, 09:54 AM
• Last activity: Jun 12, 2025, 02:40 PM
0
votes
1
answers
214
views
Difference between current_allocated and total_allocated in MySQL sys.memory_by_thread_by_current_bytes
I'm trying to debug a memory leak in MySQL 5.7.37, and running the following query gives me insights into the running threads and the memory allocation for each thread:
```sql
SELECT * FROM sys.memory_by_thread_by_current_bytes;
```
The output shows some threads with a total_allocated value in GiB, while the current_allocated value is still in MiB. Some of the values I see are as follows:
> * current_count_used => 1783
> * current_allocated => 262.46MiB
> * current_avg_alloc => 150.74KiB
> * current_max_alloc => 259.49MiB
> * total_alloc => 16.07GiB
How should I interpret the above data? Does it mean the total_allocated memory is reserved in RAM for this thread?
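For what it's worth, a short sketch of how the two columns relate (per the sys schema's definitions: current_allocated is memory allocated and not yet freed, while total_allocated is the cumulative total ever allocated by the thread, most of which may have been freed since):
```sql
-- A thread with total_allocated >> current_allocated has churned through
-- a lot of memory over its lifetime but holds little of it now; only
-- current_allocated approximates what the thread still has outstanding.
SELECT thread_id, user, current_allocated, total_allocated
FROM sys.memory_by_thread_by_current_bytes
ORDER BY current_allocated DESC
LIMIT 5;
```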
Sri
(101 rep)
Mar 11, 2023, 05:17 AM
• Last activity: Jun 10, 2025, 06:04 PM
0
votes
1
answers
891
views
High Memory Usage on Replica - Mysql 8
Hope you're doing well.
We've recently moved to MySQL 8 (from MySQL 5.7).
We're using the AWS RDS service with MySQL. The AWS team is also looking for a solution, btw.
For a few days now, we've been facing several problems with replication. We never had these problems with the 5.7 version.
First, we had a very large lag between our master and our two replicas.
This was solved with specific params in the parameter group of the replica, like:
- slave_parallel_type in LOGICAL_CLOCK mode
- binlog_group_commit_sync_delay with a 10ms delay
- sync_binlog at 0
It seems that the latency is now gone, which is good news (I'll wait several days to be sure).
Nevertheless, we're still facing a massive problem with the RAM used on the replica.
We can't find the problem; perhaps it's the buffer, but that should be bounded by a threshold.
It's a cyclic pattern, like this:
In green, the production.
In orange, the small replica.
In blue, the most powerful replica (as you can see, we tried to upgrade the instance, but it's not a RAM limitation problem).
The problem is the same for both: the memory usage keeps increasing until the replica goes down.
By the way, swap is never used.
If you have any clue about this, it will help me a lot!
Thanks for reading!
Have a nice day :)
--- UPDATE ---
It seems that the buffer is full, but I don't know with what.
Is there any garbage collector running? It seems not; perhaps something to turn on in MySQL 8?
Here are some queries showing the size of the allocated memory and a zoom on the InnoDB memory breakdown. This replica has 12GB of RAM, which must be exhausted if the free memory keeps decreasing:
> SELECT SUBSTRING_INDEX(event_name,'/',2) AS
> code_area, FORMAT_BYTES(SUM(current_alloc))
> AS current_alloc
> FROM sys.x$memory_global_by_current_bytes
> GROUP BY SUBSTRING_INDEX(event_name,'/',2)
> ORDER BY SUM(current_alloc) DESC;
> SELECT *
> FROM sys.x$memory_global_by_current_bytes
> WHERE SUBSTRING_INDEX(event_name,'/',2) = "memory/innodb"
> ORDER BY current_alloc DESC
> LIMIT 10;



NicolasCab
(1 rep)
Mar 13, 2021, 09:33 AM
• Last activity: Jun 2, 2025, 09:03 AM
1
votes
1
answers
256
views
Sudden huge RAM usage with Firebird 4.0
Since about 2 weeks I have a sudden memory issue with 1 of our Firebird servers. For context: We have servers on 4 physical locations, each with a VM dedicated to Firebird and the connecting ERP pack. Each FireBird instance hosts 4-8 databases. The last change was about 4 weeks ago to all of these s...
For about two weeks I have had a sudden memory issue with one of our Firebird servers.
For context:
We have servers on 4 physical locations, each with a VM dedicated to Firebird and the connecting ERP pack.
Each Firebird instance hosts 4-8 databases.
The last change was about 4 weeks ago to all of these servers simultaneously, a "before insert/update" trigger that deletes a value on another table when the field on that table gains a certain value. No Firebird update, no ERP update.
Two weeks ago, one of these Firebird instances started acting up with very high load.
Thinking the database was just growing, we increased cores to 8 and RAM to 32GB.
This is our biggest instance, so we did not think much of it. However, for the past two weeks the server has been maxing out again every couple of days; today it took only about 8 hours to go from freshly started to 30GB of memory and 93% CPU usage!
For comparison, the next biggest instance, which is about 20% smaller with about 25% fewer users, is only using 5GB.
Again, to be clear, this is the only server having this issue.
It feels like there is a memory leak or some kind of loop.
I've tried doing a backup/restore and also deleting about 15K old records from a certain table that had been freezing the ERP system since that same day, but nothing seems to help.
Rebooting either Firebird or the entire VM makes it start at a normal level, but it always gradually increases until server performance goes down the drain, causing issues for all users.
In firebird.log there are daily entries with error 10054, but they have apparently been occurring daily since early 2022 (I wasn't on the team back then).
I also can't find anything in Windows Event Viewer related to Firebird.
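As a first diagnostic, Firebird's monitoring tables can show which attachments are holding the memory; a minimal sketch (assuming the connecting user is allowed to read the MON$ tables):
```
-- Memory held per attachment, biggest first.
SELECT a.MON$ATTACHMENT_ID,
       a.MON$REMOTE_ADDRESS,
       m.MON$MEMORY_USED,
       m.MON$MEMORY_ALLOCATED
FROM MON$ATTACHMENTS a
JOIN MON$MEMORY_USAGE m ON m.MON$STAT_ID = a.MON$STAT_ID
ORDER BY m.MON$MEMORY_ALLOCATED DESC;
```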
Jeroen van der Weyde
(11 rep)
Dec 5, 2023, 02:42 PM
• Last activity: May 30, 2025, 05:03 PM
1
votes
1
answers
291
views
High memory usage in Masters with MariaDB and Vitess
I'm running a Kubernetes Cluster with MariaDB `10.3.20` and Vitess 6. Everything is running fine, but the memory usage in the `Master` nodes is always around 90%. I don't have OOM right now, but I was keen to understand this high memory usage. Currently, each node (master and replica) has the same c...
I'm running a Kubernetes Cluster with MariaDB 10.3.20 and Vitess 6. Everything is running fine, but the memory usage in the Master nodes is always around 90%. I don't have OOM right now, but I was keen to understand this high memory usage.
Currently, each node (master and replica) has the same configuration of 10GB RAM and 10 CPU cores. The MariaDB configuration, for now, is the default one (innodb_buffer_pool_size is 128MB; I will try to update it to 70% of the node memory), except for max_connections, which is set to 15758.
The data inside the database is below 1GB across 5 tables. The SELECT peak is around 700 queries per second, the INSERT peak is around 200 QPS, and UPDATE around 100 QPS.
Masters are receiving only writes (INSERT, UPDATE, DELETE) and replicas are receiving only reads (SELECT).
Below are two charts of memory usage spanning 15 days (the top ones are masters, the ones at the bottom are replicas).
The master's memory follows an interesting "pattern" every week, not sure why.
I've tried to understand this behavior by reading MariaDB Memory Allocation, but couldn't find anything that could explain it.
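For what it's worth, innodb_buffer_pool_size is dynamic in MariaDB 10.2+, so the planned resize to ~70% of a 10GB node can be tried without a restart; a minimal sketch:
```
-- Resize the buffer pool online to 7GB (~70% of a 10GB node).
SET GLOBAL innodb_buffer_pool_size = 7 * 1024 * 1024 * 1024;
```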


David Magalhães
(121 rep)
Feb 22, 2021, 01:57 PM
• Last activity: May 18, 2025, 04:04 AM
0
votes
1
answers
278
views
Free node memory and low PLE
We run a SQL Server 2014 standard edition on a 94 GB RAM server with 82 GB RAM dedicated to SQL. I know PLE is not the answer to all and we should not focus on it too much. But if you look at the image below, on one of our nodes, the PLE is very low, and stays low too. But we also have A LOT of free...
We run a SQL Server 2014 standard edition on a 94 GB RAM server with 82 GB RAM dedicated to SQL.
I know PLE is not the answer to all and we should not focus on it too much. But if you look at the image below, on one of our nodes, the PLE is very low, and stays low too. But we also have A LOT of free memory on that node. This seems to be the case for us on most days.
I feel like free memory is wasted memory; is that a correct assumption?
Why doesn't SQL Server utilize this memory to store data pages, dropping them when the memory is needed for other applications?
Results
select * from sys.dm_os_process_memory
select * from sys.dm_os_memory_nodes
select @@version
> Microsoft SQL Server 2014 (SP2-CU18) (KB4500180) - 12.0.5687.1 (X64)
> Jul 20 2019 21:27:07 Copyright (c) Microsoft Corporation Standard
> Edition (64-bit) on Windows NT 6.3 (Build 9600: ) (Hypervisor)
exec sp_readerrorlog
select * from sys.dm_os_nodes
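As a side note, PLE can be read per NUMA node directly, which makes it easier to line up against node-level free memory; a minimal sketch:
```
-- 'Buffer Node' rows report PLE for each NUMA node separately;
-- the 'Buffer Manager' row is the server-wide figure.
SELECT [object_name], instance_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy';
```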
**Update on 2022/04/26**: We upgraded our SQL Server 2014 to SP3 CU 4 and for now, it seems to have resolved the issue.





KHP
(65 rep)
Mar 31, 2022, 07:54 AM
• Last activity: May 12, 2025, 12:07 PM
1
votes
1
answers
313
views
How to figure out what's happening with MongoDB?
I am facing an issue where our primary instance of mongodb shows high memory consumption every night. & i was going through the metrics and could see the cache size being reduced and used memory going up. I looked at the logs in mongodb but couldn't really figure the reason why memory kept on increa...
I am facing an issue where our primary MongoDB instance shows high memory consumption every night. Going through the metrics, I could see the cache size being reduced and used memory going up. I looked at the MongoDB logs but couldn't really figure out why memory kept increasing in the first place.
I'm attaching a memory consumption screenshot for your reference.
Any approaches/clarifications are welcome.
Thanks!


aakash bharti
(11 rep)
Jun 28, 2021, 01:36 PM
• Last activity: Apr 27, 2025, 11:04 PM
0
votes
2
answers
55
views
MySQL memory keeps growing after restart
Here is PMM Memory chart showing that used memory keep growing. 03/13 point at timeline is moment of restart. I checked all charts / metrics but cannot explain where does memory go. Maybe you can give me some advice what I can check? Here is my config ``` # Percona Toolkit MySQL Summary Report #####...
Here is a PMM memory chart showing that used memory keeps growing.
The 03/13 point on the timeline is the moment of restart. I checked all charts/metrics but cannot explain where the memory goes. Maybe you can give me some advice on what to check? Here is my config:
# Percona Toolkit MySQL Summary Report #######################
System time | 2025-03-14 14:37:17 UTC (local TZ: +03 +0300)
# Instances ##################################################
Port Data Directory Nice OOM Socket
===== ========================== ==== === ======
0 0
# MySQL Executable ###########################################
Path to executable | /usr/sbin/mysqld
Has symbols | No
# Slave Hosts ################################################
No slaves found
# Report On Port 3306 ########################################
User | pmm@127.0.0.1
Time | 2025-03-14 17:37:17 (+03)
Hostname | hostname
Version | 5.7.44-48 Percona Server (GPL), Release 48, Revision 497f936a373
Built On | Linux x86_64
Started | 2025-03-13 09:14 (up 1+08:23:13)
Databases | 7
Datadir | /var/lib/mysql/
Processes | 40 connected, 3 running
Replication | Is not a slave, has 0 slaves connected
Pidfile | /var/run/mysqld/mysqld.pid (exists)
# Processlist ################################################
Command COUNT(*) Working SUM(Time) MAX(Time)
------------------------------ -------- ------- --------- ---------
Query 3 3 1 1
Sleep 35 0 70000 17500
User COUNT(*) Working SUM(Time) MAX(Time)
------------------------------ -------- ------- --------- ---------
bitrix0 10 2 1 1
pmm 5 1 0 0
userdb1 25 0 0 0
Host COUNT(*) Working SUM(Time) MAX(Time)
------------------------------ -------- ------- --------- ---------
127.0.0.1 5 1 0 0
localhost 35 2 1 1
db COUNT(*) Working SUM(Time) MAX(Time)
------------------------------ -------- ------- --------- ---------
NULL 5 1 0 0
userdb1-b24 25 0 0 0
mydb 10 2 1 1
State COUNT(*) Working SUM(Time) MAX(Time)
------------------------------ -------- ------- --------- ---------
35 0 0 0
Sending data 2 2 1 1
starting 1 1 0 0
# Status Counters (Wait 10 Seconds) ##########################
Variable Per day Per second 11 secs
Aborted_clients 300
Aborted_connects 2
Bytes_received 30000000000 350000 300000
Bytes_sent 175000000000 2250000 1000000
Com_admin_commands 25000
Com_begin 12500
Com_change_db 8
Com_commit 12500
Com_delete 1250000 15 30
Com_delete_multi 80000 3
Com_empty_query 50000
Com_insert 1750000 20 20
Com_insert_select 350000 4 5
Com_replace 7000
Com_rollback 10
Com_select 50000000 600 450
Com_set_option 1000000 15 15
Com_show_binlogs 15
Com_show_create_db 25
Com_show_create_table 5
Com_show_databases 20
Com_show_engine_status 9000
Com_show_fields 4000
Com_show_keys 1000
Com_show_master_status 15
Com_show_plugins 1500
Com_show_processlist 15
Com_show_slave_hosts 15
Com_show_slave_status 9000
Com_show_status 17500
Com_show_table_status 4
Com_show_tables 45000
Com_show_variables 4500
Com_show_warnings 2
Com_stmt_execute 25
Com_stmt_close 25
Com_stmt_prepare 25
Com_truncate 2
Com_update 2000000 20 25
Com_update_multi 80000 3
Connections 225000 2 4
Created_tmp_disk_tables 3500000 40 15
Created_tmp_files 12500
Created_tmp_tables 6000000 70 50
Handler_commit 50000000 600 500
Handler_delete 1250000 15 8
Handler_external_lock 225000000 2500 2000
Handler_read_first 7000000 80 60
Handler_read_key 6000000000 70000 17500
Handler_read_last 60000
Handler_read_next 70000000000 800000 1250000
Handler_read_prev 225000000 2500 5
Handler_read_rnd 250000000 3000 700
Handler_read_rnd_next 2500000000 30000 1250
Handler_rollback 300
Handler_update 5000000 60 35
Handler_write 250000000 3000 300
Innodb_background_log_sync 90000 1
Innodb_buffer_pool_bytes_data 12500000000 150000 7000
Innodb_buffer_pool_bytes_dirty 450000000 5000 60000
Innodb_buffer_pool_pages_flushed 1250000 15
Innodb_buffer_pool_pages_made_not_young 1500000 20
Innodb_buffer_pool_pages_made_young 700000 7
Innodb_buffer_pool_pages_old 300000 3
Innodb_buffer_pool_read_ahead 60000
Innodb_buffer_pool_read_requests 100000000000 1250000 1500000
Innodb_buffer_pool_reads 700000 7
Innodb_buffer_pool_write_requests 250000000 3000 1250
Innodb_checkpoint_age 175000000 2000 45000
Innodb_checkpoint_max_age 1250000000 15000
Innodb_data_fsyncs 800000 8 1
Innodb_data_read 12500000000 150000 6000
Innodb_data_reads 800000 8
Innodb_data_writes 5000000 60 60
Innodb_data_written 30000000000 350000 100000
Innodb_dblwr_pages_written 700000 7
Innodb_dblwr_writes 300000 3
Innodb_ibuf_free_list 5000
Innodb_ibuf_segment_size 5000
Innodb_log_write_requests 6000000 70 125
Innodb_log_writes 3500000 40 60
Innodb_lsn_current 1250000000000 15000000 45000
Innodb_lsn_flushed 1250000000000 15000000 40000
Innodb_lsn_last_checkpoint 1250000000000 15000000
Innodb_master_thread_active_loops 70000 1
Innodb_master_thread_idle_loops 12500
Innodb_max_trx_id 2500000000 30000 125
Innodb_mem_adaptive_hash 1000000000 12500
Innodb_mem_dictionary 90000000 1000
Innodb_oldest_view_low_limit_trx_id 2500000000 30000 125
Innodb_os_log_fsyncs 80000 1
Innodb_os_log_written 6000000000 70000 100000
Innodb_pages_created 22500
Innodb_pages_read 800000 8
Innodb_pages0_read 1500
Innodb_pages_written 1250000 15
Innodb_purge_trx_id 2500000000 30000 125
Innodb_row_lock_time 1750
Innodb_row_lock_waits 50
Innodb_rows_deleted 1250000 15 8
Innodb_rows_inserted 250000000 3000 450
Innodb_rows_read 80000000000 900000 1250000
Innodb_rows_updated 1750000 20 30
Innodb_num_open_files 1500
Innodb_available_undo_logs 90
Innodb_secondary_index_triggered_cluster_reads 15000000000 175000 70000
Innodb_secondary_index_triggered_cluster_reads_avoided 125000 1
Innodb_buffered_aio_submitted 60000
Key_read_requests 30
Key_reads 3
Open_table_definitions 1250
Opened_files 17500
Opened_table_definitions 4500
Opened_tables 3500000 40 45
Performance_schema_digest_lost 600000 7 1
Queries 60000000 600 600
Questions 60000000 600 600
Select_full_join 200000 2 1
Select_full_range_join 450000 5
Select_range 5000000 60 35
Select_range_check 4500
Select_scan 3500000 40 50
Sort_merge_passes 70000
Sort_range 3000000 40 40
Sort_rows 225000000 2500 700
Sort_scan 6000000 70 45
Ssl_accepts 350 1
Ssl_finished_accepts 350 1
Table_locks_immediate 125000 1 1
Table_open_cache_hits 100000000 1250 900
Table_open_cache_misses 3500000 40 45
Table_open_cache_overflows 3500000 40 45
Threads_created 70
Uptime 90000 1 1
# Table cache ################################################
Size | 2395
Usage | 100%
# Key Percona Server features ################################
Table & Index Stats | Disabled
Multiple I/O Threads | Enabled
Corruption Resilient | Enabled
Durable Replication | Not Supported
Import InnoDB Tables | Not Supported
Fast Server Restarts | Not Supported
Enhanced Logging | Disabled
Replica Perf Logging | Disabled
Response Time Hist. | Enabled
Smooth Flushing | Not Supported
HandlerSocket NoSQL | Not Supported
Fast Hash UDFs | Unknown
# Percona XtraDB Cluster #####################################
# Plugins ####################################################
InnoDB compression | ACTIVE
# Query cache ################################################
query_cache_type | OFF
Size | 0.0
Usage | 0%
HitToInsertRatio | 0%
# Schema #####################################################
Specify --databases or --all-databases to dump and summarize schemas
# Noteworthy Technologies ####################################
SSL | Yes
Explicit LOCK TABLES | No
Delayed Insert | No
XA Transactions | No
NDB Cluster | No
Prepared Statements | Yes
Prepared statement count | 0
# InnoDB #####################################################
Version | 5.7.44-48
Buffer Pool Size | 24.0G
Buffer Pool Fill | 70%
Buffer Pool Dirty | 2%
File Per Table | ON
Page Size | 16k
Log File Size | 2 * 1.0G = 2.0G
Log Buffer Size | 128M
Flush Method | O_DIRECT
Flush Log At Commit | 2
XA Support | ON
Checksums | ON
Doublewrite | ON
R/W I/O Threads | 8 4
I/O Capacity | 200
Thread Concurrency | 0
Concurrency Tickets | 5000
Commit Concurrency | 0
Txn Isolation Level | READ-COMMITTED
Adaptive Flushing | ON
Adaptive Checkpoint |
Checkpoint Age | 212M
InnoDB Queue | 0 queries inside InnoDB, 0 queries in queue
Oldest Transaction | 0 Seconds
History List Len | 58
Read Views | 2
Undo Log Entries | 0 transactions, 0 total undo, 0 max undo
Pending I/O Reads | 0 buf pool reads, 0 normal AIO, 0 ibuf AIO, 0 preads
Pending I/O Writes | 0 buf pool (0 LRU, 0 flush list, 0 page); 0 AIO, 0 sync, 0 log IO (0 log, 0 chkp); 0 pwrites
Pending I/O Flushes | 0 buf pool, 0 log
Transaction States | 34xnot started
# MyISAM #####################################################
Key Cache | 16.0M
Pct Used | 20%
Unflushed | 0%
# Security ###################################################
Users | 8 users, 0 anon, 0 w/o pw, 0 old pw
Old Passwords | 0
# Encryption #################################################
No keyring plugins found
# Binary Logging #############################################
# Noteworthy Variables #######################################
Auto-Inc Incr/Offset | 1/1
default_storage_engine | InnoDB
flush_time | 0
init_connect | SET NAMES utf8 COLLATE utf8_unicode_ci
init_file |
sql_mode |
join_buffer_size | 2M
sort_buffer_size | 2M
read_buffer_size | 128k
read_rnd_buffer_size | 256k
bulk_insert_buffer | 0.00
max_heap_table_size | 1G
tmp_table_size | 1G
max_allowed_packet | 16M
thread_stack | 128k
log |
log_error | /var/log/mysql/error.log
log_warnings | 2
log_slow_queries |
log_queries_not_using_indexes | OFF
log_slave_updates | OFF
# Configuration File #########################################
Config File | /etc/my.cnf
[client]
port = 3306
socket = /var/lib/mysqld/mysqld.sock
default-character-set = utf8
[mysqld_safe]
nice = 0
socket = /var/lib/mysqld/mysqld.sock
[mysqld]
user = mysql
port = 3306
basedir = /usr
datadir = /var/lib/mysql
socket = /var/lib/mysqld/mysqld.sock
skip-external-locking
default-storage-engine = innodb
pid-file = /var/run/mysqld/mysqld.pid
transaction-isolation = READ-COMMITTED
max_allowed_packet = 16M
myisam-recover-options = BACKUP
explicit_defaults_for_timestamp = 1
expire_logs_days = 10
max_binlog_size = 100M
sql_mode = ""
query_cache_size = 32M
table_open_cache = 4096
thread_cache_size = 32
key_buffer_size = 16M
thread_stack = 128K
join_buffer_size = 2M
sort_buffer_size = 2M
tmpdir = /tmp
max_heap_table_size = 32M
tmp_table_size = 32M
innodb_file_per_table
innodb_buffer_pool_size = 32M
innodb_flush_log_at_trx_commit = 2
innodb_log_file_size = 64M
innodb_flush_method = O_DIRECT
innodb_strict_mode = OFF
character-set-server = utf8
collation-server = utf8_unicode_ci
init-connect = "SET NAMES utf8 COLLATE utf8_unicode_ci"
skip-name-resolve
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
default-character-set = utf8
[mysql]
[isamchk]
key_buffer = 16M
# /etc/mysql/conf.d/logging.cnf
[mysqld_safe]
log-error = /var/log/mysql/error.log
[mysqld]
log-error = /var/log/mysql/error.log
# /etc/mysql/conf.d/bvat.cnf
[mysqld]
query_cache_type = 1
query_cache_size = 128M
query_cache_limit = 16M
innodb_buffer_pool_size = 18432M
max_connections = 205
table_open_cache = 18432
thread_cache_size = 512
max_heap_table_size = 128M
tmp_table_size = 128M
key_buffer_size = 256M
join_buffer_size = 32M
sort_buffer_size = 32M
bulk_insert_buffer_size = 2M
myisam_sort_buffer_size = 32M
# /etc/mysql/conf.d/z_bx_custom.cnf
[mysqld]
query_cache_limit = 64M
max_connections = 200
max_heap_table_size = 1024M
tmp_table_size = 1Gb
query_response_time_stats = on
innodb_read_io_threads = 8
query_cache_type = off
query_cache_size = 0
join_buffer_size = 2M
sort_buffer_size = 2M
key_buffer_size = 16M
table_open_cache = 8192
innodb_buffer_pool_instances = 8
innodb_buffer_pool_size = 24g
innodb_log_file_size = 1g
innodb_log_buffer_size = 128M
max_digest_length = 20480
performance_schema_max_digest_length = 20480
performance_schema_max_sql_text_length = 20480
open_files_limit = 10000
# Memory management library ##################################
jemalloc is not enabled in mysql config for process with id 2817
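One thing worth verifying: the include files set conflicting values (innodb_buffer_pool_size appears as 32M, 18432M and 24g; tmp_table_size as 32M, 128M and 1Gb), and the last file read wins. A minimal sketch to confirm what the server is actually running with:
```
SHOW GLOBAL VARIABLES
WHERE Variable_name IN ('innodb_buffer_pool_size', 'tmp_table_size',
                        'max_heap_table_size', 'max_connections',
                        'table_open_cache', 'join_buffer_size');
```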


slesh
(101 rep)
Mar 14, 2025, 09:52 PM
• Last activity: Apr 24, 2025, 01:34 PM
2
votes
4
answers
2488
views
MySQL 8.0.21 long-lived prepared statements have a memory leak? Or are we doing something weird
-- updated the question because it was easy to initially assume I was asking for mysql tuning tips, although some of the tips did help me narrow down the problem -- -- further update. After doing only a partial upgrade on some servers, we found the problem is occurring on MySQL 5.7 as well, so guess...
-- updated the question because it was easy to initially assume I was asking for mysql tuning tips, although some of the tips did help me narrow down the problem --
-- further update. After doing only a partial upgrade on some servers, we found the problem is occurring on MySQL 5.7 as well, so guessing it was some sort of change in behaviour between 5.6 and 5.7. It's still a bit weird to us how it didn't cause a problem in MySQL 5.6 --
We recently upgraded from MySQL 5.6 to MySQL 8.0 on a few servers, one of the servers was fine, and has had no problems, but it
has significantly less load than one of our other servers which has been running out of memory.
Our server launches, then grabs 300 connections, and keeps them open with a C3P0 pool to the mysql server.
Over 4 days, this particular server accumulated 3500+ open prepared statements, some of them taking more than 300 MB in memory/sql/Prepared_statement::main_mem_root.
It's normal for our server profile to keep the connections / prepared statements open, and it wasn't a problem on MySQL 5.6 or 5.7.
Looking that up in the MySQL server internals documentation, https://dev.mysql.com/doc/dev/mysql-server/latest/classPrepared__statement.html#a999eb56b28a5202c0fb1f5df96db3c74 , I can see it's somehow related to prepared statements, but 'allocate parsed tree elements (instances of Item, SELECT_LEX and other classes)' doesn't tell me much. Is it caching results? Is that how it's growing?
We're using Connector/J 8.0.18. I've looked at the release notes for 8.0.18 -> 8.0.23 and there aren't any obvious memory-leak fixes.
Our connection parameters include:
cachePrepStmts = true
useServerPrepStmts = true
We were running these servers on AWS on MySQL 5.6 with the same overridden parameters on 8GB of RAM, when we upgraded to MySQL 8.0.21 we started running out of RAM in about 1 day. We grew the server to 32Gb but didn't change the parameters. It's gone over 15 GB used and still going up.
We're pretty sure it's related to the per connection thread memory, but not sure why.
Looking at memory_by_thread_by_current_bytes we have connections with over 200Mb
The server is running 8.0.21 on AWS RDS m4.large
I've audited the code and all result sets are being closed. The prepared statements are closed in code as well, but the cache behavior above is keeping them open.
normally the innodb buffer pool is 4GB, but we dropped it to 3GB when we were on the 8GB machine for a bit more room.
show global status / memory_summary_by_thread_by_event_name / MySQL tuner run / show engine innodb status / show variables / sys.memory_by_thread_by_current_bytes / sys.memory_global_by_current_bytes
https://gist.github.com/DaveB93/138f6bac254fee5bbbbb7ce2af7c2fef
-- update --
We 'fixed' this by connecting to our application via JMX and changing the C3P0 connection pool setting "maxIdleTimeExcessConnections" to 300 (up from 0).
This cleaned up all of the long-lived connections and freed up 10GB of RAM. This doesn't seem like the long-term solution I want, though.
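For anyone measuring the same thing, a minimal sketch that attributes currently held memory to the prepared-statement instrument per thread (MySQL 8.0 performance_schema):
```
-- Bytes currently held by Prepared_statement allocations, per thread.
SELECT t.processlist_user AS user,
       e.event_name,
       sys.format_bytes(e.current_number_of_bytes_used) AS current_used
FROM performance_schema.memory_summary_by_thread_by_event_name e
JOIN performance_schema.threads t USING (thread_id)
WHERE e.event_name LIKE 'memory/sql/Prepared_statement%'
ORDER BY e.current_number_of_bytes_used DESC
LIMIT 10;
```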
David B
(21 rep)
Jan 18, 2021, 07:29 PM
• Last activity: Apr 22, 2025, 06:04 AM
0
votes
1
answers
397
views
MySQL Causing OOM after Upgrading to 8.0.37
Recently, we faced severe & unrecoverable corruption of database (a bug from MySQL 8.0.27, but only hit us 2 years in). So we upgraded to MySQL 8.0.37, and everything's good so far. But there's a change that we did not notice it happen during 8.0.27. Occasionally, MySQL restarts due to OOM (Signal 9...
Recently, we faced severe and unrecoverable database corruption (a bug in MySQL 8.0.27 that only hit us two years in).
So we upgraded to MySQL 8.0.37, and everything's good so far. But there's a change we never noticed during 8.0.27:
occasionally, MySQL restarts due to OOM (signal 9 kills mysqld). Does anyone know if we're missing anything or misusing any flags?
A little info on my server: LAMP stack, Ubuntu 20.04 with 128GB RAM (it was 64GB back on 8.0.27; we have since doubled it). Below is my config. It's very old, passed down from about 10 years ago (the MySQL 5 days), and we always replicate the same config since it just "works".
[mysqld]
innodb_print_all_deadlocks = 1 #added on 1 sep 2022, want to get all deadlocks
disable_log_bin #added on 26 jul 2022, not tested. To not create bin.log files
bind-address = 0.0.0.0
mysqlx-bind-address = 127.0.0.1
symbolic-links=0
sql-mode = "NO_ENGINE_SUBSTITUTION"
key_buffer_size = 16M
myisam-recover-options = BACKUP
innodb_buffer_pool_size = 48G #it was 64G, but causing OOM, changed to 48G.
innodb_log_file_size = 6G
innodb_buffer_pool_instances = 32
innodb_autoinc_lock_mode = 2
interactive_timeout = 120 #to prevent too many sleep in processes
wait_timeout = 120
max_execution_time = 60000 #60s, its in ms
innodb_flush_log_at_trx_commit = 0 # commit once per second instead of per transaction; faster, might lose the last 1s of data, but no diff in tests
sync_binlog = 0
max_allowed_packet = 64M
max_connections = 512
innodb_io_capacity = 1000
innodb_io_capacity_max = 2000
optimizer_switch = derived_merge=off # don't know what this is, but it has been here since forever
skip-name-resolve # when a connection comes from a particular host, MySQL does a reverse DNS lookup; we don't need that
Notes to mention: the only thing changed after the upgrade was innodb_buffer_pool_size, from 32G (original) to 64G (new), since we doubled the RAM. We have now tweaked it down to 48G in the hope of solving the OOM issue, but we are still seeking help on why this happens now and not previously. A rough sanity check is sketched below.
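For that check, the classic tuner-style upper bound can be computed on the server itself from the live settings; a minimal sketch (a heuristic only: it ignores the data dictionary, temporary tables, and allocator overhead):
```
-- Global buffers + worst-case per-connection buffers.
SELECT sys.format_bytes(
         @@innodb_buffer_pool_size
       + @@key_buffer_size
       + @@innodb_log_buffer_size
       + @@max_connections * (@@sort_buffer_size + @@join_buffer_size
                              + @@read_buffer_size + @@read_rnd_buffer_size
                              + @@thread_stack)
       ) AS rough_max_memory;
```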
Thanks in advance.
Attached below are my HTOP output during a normal usage period as well as MySQLTuner's output (which states I have sufficient RAM for the current configuration).


Patrick Teng
(1 rep)
Jul 7, 2024, 09:01 AM
• Last activity: Apr 21, 2025, 12:04 AM
Showing page 1 of 20 total questions