PostgreSQL query stuck on LWLock:BufferMapping with high CPU and memory usage — how to debug further?
0 votes · 3 answers · 180 views
We’re experiencing frequent long-running queries (>43 secs) in our PostgreSQL production DB, and they often get stuck on:
wait_event_type = LWLock
wait_event = BufferMapping
This seems to indicate contention on shared buffers. The queries are usually simple SELECTs (e.g., on the level_asr_asrlog table), but during peak usage they slow down drastically and are sometimes auto-killed after 60 seconds (our statement_timeout).
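For context, sessions stuck on this wait can be listed with a query against pg_stat_activity (standard in PostgreSQL 14), e.g.:

```sql
-- Backends currently waiting on BufferMapping, longest-running first
SELECT pid,
       now() - query_start AS duration,
       state,
       wait_event_type,
       wait_event,
       left(query, 80) AS query
FROM pg_stat_activity
WHERE wait_event = 'BufferMapping'
ORDER BY duration DESC;
```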
Instance Configuration:
PostgreSQL version: 14
RAM: 104 GB (≈95 GB usable for Postgres)
vCPUs: 16
SSD Storage: GCP auto-scaled from 10TB → 15TB over the last year
Shared Buffers: 34.8 GB
work_mem: 4 MB
maintenance_work_mem: 64 MB
autovacuum_work_mem: -1 (i.e., it falls back to maintenance_work_mem)
temp_buffers: 8 MB
effective_cache_size: ~40 GB
max_connections: 800
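For a rough sense of scale, here is a back-of-envelope sketch (not measured usage) of what 800 connections could allocate on top of shared_buffers. Note work_mem is per sort/hash node, not per connection, so the real figure can be higher or much lower:

```sql
-- Worst-case sketch: every connection using one work_mem allocation
-- and a full set of temp_buffers at the same time
SELECT pg_size_pretty(800::bigint * 4 * 1024 * 1024) AS work_mem_total,     -- 800 x 4 MB
       pg_size_pretty(800::bigint * 8 * 1024 * 1024) AS temp_buffers_total; -- 800 x 8 MB
```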
Observations
VACUUM processes often take >10 minutes.
Memory is almost fully utilized (≈95% used), and this correlates with periods of memory pressure.
The system appears to be thrashing — swapping data instead of doing useful work.
The wait event BufferMapping implies the backend is stuck trying to
associate a block with a buffer, likely due to memory contention.
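To check the thrashing suspicion at the OS level, these Linux commands should show whether swap is actually being exercised (assuming a typical GNU/Linux image with vmstat installed):

```shell
# Swap-in/swap-out per second: sustained non-zero si/so columns indicate thrashing
vmstat 1 5

# How much swap is configured vs. free
grep -E 'Swap(Total|Free)' /proc/meminfo

# Top swap consumers, summed per process from /proc/*/status (kB, then process name)
for f in /proc/[0-9]*/status; do
  awk '/^Name:/{n=$2} /^VmSwap:/{print $2, n}' "$f"
done | sort -rn | head
```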
I need help with the following:
- How to further diagnose LWLock:BufferMapping contention?
- Is increasing work_mem or shared_buffers a safe direction?
- Should I implement PgBouncer to reduce max_connections impact on memory?
- How to confirm if the OS is thrashing, and if so, how to resolve it?
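On the first question: one starting point is to see what actually occupies shared buffers via the pg_buffercache extension (ships in PostgreSQL contrib; needs CREATE EXTENSION privileges). If a few large relations dominate with low usage counts, buffers are churning:

```sql
CREATE EXTENSION IF NOT EXISTS pg_buffercache;

-- Which relations dominate shared buffers, and how "hot" their buffers are
SELECT c.relname,
       count(*) AS buffers,
       pg_size_pretty(count(*) * 8192) AS size,
       round(avg(b.usagecount), 2) AS avg_usage
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase = (SELECT oid FROM pg_database
                      WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 20;
```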
Asked by pramod
(25 rep)
Mar 27, 2025, 04:20 AM
Last activity: Jul 22, 2025, 01:45 AM