Is pg_attribute bloat from temp table creation still a potential issue in modern versions of PostgreSQL?
1 vote · 1 answer · 81 views
I am not an expert in PostgreSQL; however, I recently ran across some PostgreSQL code during a code review that creates and drops temporary tables in a manner that seems consistent with the typical way we do it in SQL Server, i.e.:
DROP TABLE IF EXISTS #temp_data;
CREATE TABLE #temp_data
(
    i int NOT NULL
);
The code review showed this code:
BEGIN;
DROP TABLE IF EXISTS temp_data;
COMMIT;
BEGIN;
CREATE TEMP TABLE IF NOT EXISTS temp_data (
    i int NOT NULL
);
COMMIT;
[This answer on Stack Overflow](https://stackoverflow.com/a/55580279/1595565) claims you shouldn't drop and recreate temp tables frequently because of pg_attribute bloat.
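For context, my understanding is that dropping a table deletes its rows from pg_attribute, leaving dead tuples behind until autovacuum cleans them up, so any churn should be observable. A minimal sketch of how I'd check (pg_stat_sys_tables is a standard statistics view; the actual numbers obviously depend on workload and autovacuum settings):

```sql
-- How much dead-row churn has the pg_attribute catalog accumulated?
SELECT n_live_tup,
       n_dead_tup,
       last_autovacuum
FROM pg_stat_sys_tables
WHERE relname = 'pg_attribute';
```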
ChatGPT (gah!) has this to say about pg_attribute bloat:
> In PostgreSQL, creating temporary tables frequently can cause bloat in pg_attribute, as each new temporary table adds metadata that persists in the system catalogs even after the table is dropped. To avoid excessive bloat, consider these best practices:
>
> 1. Use ON COMMIT DELETE ROWS Instead of Dropping Tables:
>
> > CREATE TEMP TABLE temp_data (
> >     i int NOT NULL
> > ) ON COMMIT DELETE ROWS;
>
> 2. Use pg_temp Schema for Session-Level Temporary Tables
>
> > CREATE TEMP TABLE pg_temp.temp_data (
> >     id SERIAL PRIMARY KEY,
> >     value TEXT
> > );
>
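If I understand the first suggestion, the idea is to pay the catalog cost once per session and let COMMIT clear only the rows, rather than dropping and recreating the table. A minimal sketch of what that would look like, reusing the temp_data table from the code review (ON COMMIT DELETE ROWS is standard PostgreSQL syntax):

```sql
-- Created once per session; the table definition (and its pg_attribute
-- rows) is reused, while ON COMMIT DELETE ROWS empties it at each commit.
CREATE TEMP TABLE IF NOT EXISTS temp_data (
    i int NOT NULL
) ON COMMIT DELETE ROWS;

BEGIN;
INSERT INTO temp_data (i) VALUES (1), (2), (3);
-- ... work with temp_data ...
COMMIT;  -- rows are cleared here, but the table itself survives
```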
Since I am very doubtful about the veracity of any claims made by any large language model, which approach should I choose? Indeed, is either approach even valid?
Asked by Hannah Vernon (70988 rep) on Mar 13, 2025, 10:21 PM
Last activity: Mar 14, 2025, 07:58 AM