
Postgres file increment size

0 votes
2 answers
324 views
I'm COPYing 300 tables / 400 GB to PostgreSQL 10 on Windows Server 2016, using a few threads to speed things up. Because the table files are extended in roughly 64 kB increments (or thereabouts), they become fragmented on disk, which hurts read performance. Ideally I would like files to be extended in larger increments (e.g. 8 MB); an alternative would be to make the initial file size equal to the maximum segment size (1 GB). Is there any way to accomplish this?

I know a single-threaded copy would solve the issue at hand, but I would still have the same problem after the copy finishes, since the tables grow continually, and still only in small fragmented chunks.

EDIT: As a "proof" of sorts, I created a simple table and filled it:

```sql
CREATE TABLE testtable (
    id bigint PRIMARY KEY,
    somestring varchar(4000)
);

DO $$
BEGIN
    FOR i IN 1..2000 LOOP
        INSERT INTO testtable
        VALUES (i, repeat('Jeg kan en sang der kan drive dig til vanvid.', 40));
    END LOOP;
END $$;
```

This produces a table of 4096 kB, which contig.exe reports as consisting of 25 fragments (in this instance), or about 160 kB per fragment on average. For 4 MB this is not an issue; for 400 GB it is.
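For anyone wanting to reproduce the fragmentation measurement, the file backing the table can be located from SQL before pointing contig.exe at it; a minimal sketch (the OIDs in the comment are illustrative, they differ per cluster):

```sql
-- Path of the table's main data file, relative to the data directory.
SELECT pg_relation_filepath('testtable');
-- e.g. base/16384/24601 (actual OIDs vary)

-- Location of the data directory itself:
SHOW data_directory;
```

Joining the two paths gives the physical file to pass to `contig.exe -a` for a fragment count.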
Asked by mikkel (238 rep)
Oct 23, 2017, 01:13 PM
Last activity: May 11, 2025, 11:04 PM