Postgres - how to optimise a table for high-frequency INSERTs?
I need to load data into a table as fast as possible. So far I've done the following (a sketch of the setup follows the list).
- Created the table as UNLOGGED
- Removed all indexes
- Removed all triggers
- Created the table in a tablespace on an SSD
- Tuned the DML to write multiple records per INSERT statement
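For concreteness, here is a minimal sketch of that setup. The table name, tablespace name, path and columns are hypothetical placeholders, not the actual schema:

```sql
-- Hypothetical names throughout; the real schema is not shown in the question.
-- UNLOGGED skips WAL writes, trading crash safety for insert speed.
CREATE TABLESPACE fast_ssd LOCATION '/mnt/ssd/pgdata';

CREATE UNLOGGED TABLE stream_events (
    event_time  timestamptz NOT NULL,
    payload     jsonb
) TABLESPACE fast_ssd;

-- No indexes or triggers are created, so each INSERT avoids
-- index maintenance and trigger execution entirely.
```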
Some background info: the data is loaded into the system via a PowerShell script reading from a remote data stream. The data arrives at a variable rate 24/7, slowly during the evening, night and morning, increasing during office hours (up to 2,000 records/second or more). During busy periods the system focuses on just reading the stream and loading the data; during quieter periods it reads, processes and then deletes the data. The table gets truncated every night, and normally peaks at around 20 million rows on a busy day.
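To illustrate the multi-row DML and the nightly truncate, here is a hedged sketch against the same hypothetical stream_events table (the values are made up):

```sql
-- Hypothetical batch: one statement carrying several rows amortises
-- per-statement parsing, planning and network round-trips.
INSERT INTO stream_events (event_time, payload) VALUES
    ('2023-04-07 09:00:01+00', '{"id": 1}'),
    ('2023-04-07 09:00:01+00', '{"id": 2}'),
    ('2023-04-07 09:00:02+00', '{"id": 3}');

-- Nightly reset: TRUNCATE reclaims the table in constant time,
-- regardless of how many rows it holds.
TRUNCATE TABLE stream_events;
```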
The system can currently cope with a maximum of about 2,400 records/second. Are there any other techniques that might squeeze a little more speed out of it?
Asked by ConanTheGerbil (1303 rep), Apr 7, 2023, 09:05 AM