We face issues with the execution of pg_basebackup once in a while.
One cure seems to be to increase `wal_keep_size`. If you go down that rabbit hole, the question of which value is suitable pops up naturally.
As the Cybertec Blog is one of the sources of truth I consult the most, I stumbled over this blog post on the matter. It says:
> When you for example see that your ‘find pg_xlog/ -cmin -60’ (file attributes changed within last hour) yields 3, you’ll know that you’re writing ca 1.2GB (3*16*24) per day and can set wal_keep_segments accordingly.
I think that means: if you find 3 files changed within the recent 60 minutes, a value of 3 * 24 * 16 MB would be suitable for `wal_keep_size`.
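If I read the rule of thumb correctly, it boils down to a quick calculation. Here is a minimal sketch, assuming the default 16 MB WAL segment size; the `segments_per_hour=3` value is the blog's example figure, not a measurement:

```shell
# Rule of thumb from the blog post: segments changed per hour,
# extrapolated to 24 hours, times 16 MB per WAL segment.
# Substitute your own count, e.g. from: find pg_wal/ -cmin -60 -type f | wc -l
segments_per_hour=3
echo $(( segments_per_hour * 24 * 16 ))   # daily WAL volume in MB, prints 1152
```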
Is that a correct interpretation? In one particular case I get 47 WAL segments from `find pg_wal -ctime -60 -type f ! -empty ! -name *.backup | wc -l`, which translates into `wal_keep_size` = 17.6 GiB (47 * 24 * 16 / 1024).
That seems like a lot to me, but maybe it's the reality we have to face. For context:
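For what it's worth, the conversion I did above can be reproduced like this (same figures as in my calculation: 47 segments per hour, 16 MB each, over 24 hours, MB converted to GiB):

```shell
# 47 segments/hour * 24 hours * 16 MB per segment, divided by 1024 MB/GiB
awk 'BEGIN { printf "%.1f\n", 47 * 24 * 16 / 1024 }'   # prints 17.6
```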
- The data_directory is 247 GiB in size.
- `lsmem` shows `Total online memory: 31G`.
- There are 6 other Postgres instances sharing that same machine, most of them significantly smaller than the one we are talking about here (data_directory sizes of 2.5 GiB, 100 MiB, ...).
Can anybody kindly comment on whether my understanding, and the conclusion I am drawing from it, are reasonable?
Asked by vrms
(269 rep)
Jan 22, 2025, 11:13 AM
Last activity: Jan 24, 2025, 02:48 PM