Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
4
votes
1
answers
196
views
upgrade from PG12/Postgis3 to PG15/Postgis3 without pg_upgrade
What is a recommended way to upgrade a medium-size (3.1 TB) production database from PG12/PostGIS 3 to PG15/PostGIS 3 with a reasonable (up to an hour) downtime? pg_upgrade fails to perform the task with the following error: "old and new pg_controldata maximum TOAST chunk sizes are invalid or do not match".
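For context, a common pg_upgrade-free path that can fit this downtime budget is a logical-replication switchover: stand up the PG15 cluster, replicate into it while the old server stays live, then repoint clients. A minimal sketch, assuming a database named mydb and a replication user repuser (both hypothetical names), with the schema pre-loaded on the new server via pg_dump --schema-only:

-- On the old PG12 server (requires wal_level = 'logical'):
CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

-- On the new PG15 server:
CREATE SUBSCRIPTION upgrade_sub
    CONNECTION 'host=old-host port=5432 dbname=mydb user=repuser'
    PUBLICATION upgrade_pub;

-- When lag is near zero: stop writes on the old server, let the
-- subscription catch up, then detach it before the switchover:
ALTER SUBSCRIPTION upgrade_sub DISABLE;

-- Note: logical replication does not copy sequences or DDL, so
-- sequences must be resynced by hand before clients reconnect.

The actual cutover then costs only the catch-up time plus a sequence sync, well under the stated hour.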
user3159253
(143 rep)
Jun 19, 2025, 12:34 PM
• Last activity: Jun 19, 2025, 01:58 PM
4
votes
0
answers
199
views
Postgres: Queries are too slow after upgrading to PG17 from PG15
Most of the queries got slower after upgrading our Postgres from version 15 to 17 using pg_upgrade. I reconfirmed that "vacuum, analyze" were all taken care of.
To debug, instead of upgrade, I installed two instances one with postgres 15 and another postgres 17 with the same application dump restored.
Now, surprisingly, one of the queries I took from the application, which used to execute in 2 s on PG15, is now taking over a minute on PG17. I also observed that some DML operations slowed down in PG17 as well.
The explain plans of the two queries are almost the same; all the joins and paths used are exactly the same.
Could anybody please provide some insights here?
**PG15 Plan:**
https://explain.depesz.com/s/5PGX
----------------------------------------------------------------------
**PG17 Plan:**
https://explain.depesz.com/s/27vD
Update: I installed PG16 with the same dump and verified that everything seems normal there, in fact better than PG15, so I just want to rule out the possibility of a PG16 impact here.
**PG16 plan:**
https://explain.depesz.com/s/v0Gc
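For reference, the statistics refresh mentioned above matters because pg_upgrade does not carry planner statistics over to the new cluster; a sketch of the usual commands (vacuumdb ships with PostgreSQL):

-- From psql, a plain full pass over the current database:
VACUUM (VERBOSE, ANALYZE);
-- Or from the shell, staged so coarse statistics are available quickly:
--   vacuumdb --all --analyze-in-stages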
Sajith P Shetty
(312 rep)
May 13, 2025, 05:52 AM
• Last activity: May 14, 2025, 04:05 AM
0
votes
1
answers
880
views
pg_upgrade fails on pg_restore: error: connection to server at "localhost" port 5433 failed: FATAL: password authentication failed for user "postgres"
I have a PostgreSQL 13 installation on a Windows 10 server (on port 5432) that I want to migrate to a new PG 16.2 installation.
To this end, I downloaded the Windows x86-64 installer for PostgreSQL 16.2 here: https://www.enterprisedb.com/downloads/postgres-postgresql-downloads
During the run of the installer I manually set up:
- a password for the default postgres user named postgres,
- the port of the cluster to 5433,
- the locale to en_US.
The PostgreSQL folder was successfully created at "C:\Program Files\PostgreSQL\16\" with everything it needs. Then it proposed to execute the Stack Builder utility to install add-ons, which I used only to install PostGIS.
To test the newly installed PostgreSQL, I added an entry in the %APPDATA%\postgresql\pgpass.conf file as explained here: https://www.postgresql.org/docs/current/libpq-pgpass.html
The pgpass file looks as follows:
#hostname:port:database:username:password
localhost:5432:*:postgres:************************************
localhost:5433:*:postgres:************************************
Then I successfully connected to PG with this command (it correctly fetched the password from the pgpass file):
"C:\Program Files\PostgreSQL\16\bin\psql.exe" -d postgres://postgres@localhost:5433/postgres
psql (16.2)
WARNING: Console code page (437) differs from Windows code page (1252)
8-bit characters might not work correctly. See psql reference
page "Notes for Windows users" for details.
Type "help" for help.
postgres=# SELECT version();
version
------------------------------------------------------------
PostgreSQL 16.2, compiled by Visual C++ build 1937, 64-bit
(1 row)
I followed the steps as described here https://www.postgresql.org/docs/current/app-pgrestore.html to give pg_upgrade (16) a try:
1. I gracefully shut down the two PG services (both 13 and 16)
2. Because there is no postgres user at the operating system level, I created a folder at C:\tmp with read/write access for everyone, and from this folder I ran the following command in a cmd.exe terminal:
"C:\Program Files\PostgreSQL\16\bin\pg_upgrade" -b "C:\Program Files\PostgreSQL\13\bin" -B "C:\Program Files\PostgreSQL\16\bin" -d "C:\Program Files\PostgreSQL\13\data" -D "C:\Program Files\PostgreSQL\16\data" -p 5432 -P 5433 -U postgres --check
Performing Consistency Checks
-----------------------------
Checking cluster versions ok
Checking database user is the install user ok
Checking database connection settings ok
Checking for prepared transactions ok
Checking for system-defined composite types in user tables ok
Checking for reg* data types in user tables ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking for incompatible "aclitem" data type in user tables ok
Checking for user-defined encoding conversions ok
Checking for user-defined postfix operators ok
Checking for incompatible polymorphic functions ok
*Clusters are compatible*
I decided to run the exact same command again after removing the --check flag, but it failed:
Performing Consistency Checks
-----------------------------
Checking cluster versions ok
Checking database user is the install user ok
Checking database connection settings ok
Checking for prepared transactions ok
Checking for system-defined composite types in user tables ok
Checking for reg* data types in user tables ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking for incompatible "aclitem" data type in user tables ok
Checking for user-defined encoding conversions ok
Checking for user-defined postfix operators ok
Checking for incompatible polymorphic functions ok
Creating dump of global objects ok
Creating dump of database schemas ok
Checking for presence of required libraries ok
Checking database user is the install user ok
Checking for prepared transactions ok
Checking for new cluster tablespace directories ok
If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.
Performing Upgrade
------------------
Setting locale and encoding for new cluster ok
Analyzing all rows in the new cluster ok
Freezing all rows in the new cluster ok
Deleting files from new pg_xact ok
Copying old pg_xact to new server ok
Setting oldest XID for new cluster ok
Setting next transaction ID and epoch for new cluster ok
Deleting files from new pg_multixact/offsets ok
Copying old pg_multixact/offsets to new server ok
Deleting files from new pg_multixact/members ok
Copying old pg_multixact/members to new server ok
Setting next multixact ID and offset for new cluster ok
Resetting WAL archives ok
Setting frozenxid and minmxid counters in new cluster ok
Restoring global objects in the new cluster ok
Restoring database schemas in the new cluster
template1
*failure*
Consult the last few lines of "C:/Program Files/PostgreSQL/16/data/pg_upgrade_output.d/20240415T170235.602/log/pg_upgrade_dump_1.log" for
the probable cause of the failure.
Failure, exiting
The mentioned pg_upgrade_dump_1.log file contains:
command: "C:/Program Files/PostgreSQL/16/bin/pg_dump" --port 5432 --username postgres --schema-only --quote-all-identifiers --binary-upgrade --format=custom --file="C:/Program Files/PostgreSQL/16/data/pg_upgrade_output.d/20240415T173939.552/dump/pg_upgrade_dump_1.custom" ^"dbname^=template1^" >> "C:/Program Files/PostgreSQL/16/data/pg_upgrade_output.d/20240415T173939.552/log/pg_upgrade_dump_1.log" 2>&1
command: "C:/Program Files/PostgreSQL/16/bin/pg_restore" --port 5433 --username postgres --clean --create --exit-on-error --verbose --dbname postgres "C:/Program Files/PostgreSQL/16/data/pg_upgrade_output.d/20240415T173939.552/dump/pg_upgrade_dump_1.custom" >> "C:/Program Files/PostgreSQL/16/data/pg_upgrade_output.d/20240415T173939.552/log/pg_upgrade_dump_1.log" 2>&1
pg_restore: connecting to database for restore
pg_restore: error: connection to server at "localhost" (::1), port 5433 failed: FATAL: password authentication failed for user "postgres"
password retrieved from file "C:\Users\\AppData\Roaming/postgresql/pgpass.conf"
I'm wondering why this automatic tool fails at this stage, because if I remove the PG 16 entry in the pgpass file, the pg_upgrade command fails much faster, which means that the pgpass file is correctly used at least for dumping the schemas, but apparently not for restoring them in the new PG 16 cluster.
Can anyone explain why exactly pg_upgrade fails and how to solve this problem? I really ran both the PG 16 installer and the pg_upgrade tool in a vanilla way, without fancy options or setup, which means that the PG 16 config files (such as pg_hba.conf or postgresql.conf) were not touched after the installation.
The pg_hba.conf file of PG 16 is exactly the same as the one described here: https://stackoverflow.com/a/78258971/6630397
The most relevant part being:
# PostgreSQL Client Authentication Configuration File
# ===================================================
# comments (...)
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all scram-sha-256
# IPv4 local connections:
host all all 127.0.0.1/32 scram-sha-256
# IPv6 local connections:
host all all ::1/128 scram-sha-256
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all scram-sha-256
host replication all 127.0.0.1/32 scram-sha-256
host replication all ::1/128 scram-sha-256
The PG 13 pg_hba.conf file looks like this:
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all scram-sha-256
# IPv4 local connections:
host all all 127.0.0.1/32 md5
host all postgres 0.0.0.0/0 md5
# IPv6 local connections:
host all all ::1/128 md5
I'm therefore wondering whether the issue might be related to a different password encryption method (the PG 13 pg_hba.conf uses md5, the PG 16 one scram-sha-256)... But the documentation of the pg_upgrade tool doesn't say anything about it.
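A quick way to test that hypothesis is to check how the postgres role's password is actually stored on each cluster; a sketch (pg_authid requires superuser, so run it as postgres on both port 5432 and 5433):

SELECT rolname,
       CASE
           WHEN rolpassword LIKE 'SCRAM-SHA-256$%' THEN 'scram-sha-256'
           WHEN rolpassword LIKE 'md5%'            THEN 'md5'
           ELSE 'other/none'
       END AS password_format
FROM pg_authid
WHERE rolname = 'postgres';

If the stored hash is md5 but pg_hba.conf demands scram-sha-256, authentication fails exactly as in the log above; re-setting the password (e.g. \password postgres in psql) while password_encryption is scram-sha-256 re-stores it in the newer format.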
#### Side question: How could I properly (i.e. ending with the same state as a fresh install) re-initialize the PG 16 cluster according to the following pg_upgrade sentence: "If pg_upgrade fails after this point, you must re-initdb the new cluster before continuing."?
s.k
(424 rep)
Apr 15, 2024, 06:04 PM
• Last activity: May 3, 2025, 01:08 PM
1
votes
1
answers
334
views
Using docker containers to execute pg_upgrade
Using pg_upgrade when you have installed both the 'old' version and the 'new' version on a system is quite straightforward.
I tried to find a way to use pg_upgrade with Docker containers. This is a little more complicated because you need the 'old datadir', the 'old bindir', the 'new datadir' and the 'new bindir' of the two Postgres versions you want to upgrade from and to.
Because the 'old' directories are not present in the 'new' version's Docker container, you have to mount them into the 'new' container.
But since pg_upgrade seems to expect not only the 'old' bindir and datadir but also the 'old' libraries that postgres depends on, you also have to mount those into the new version's container.
So I ended up running an 'old-version' container and copying its bindir, datadir and lib dir to the local Docker host, then mounting them into the 'new-version' container.
When coming from postgres-12 this means copying the contents of /usr/lib/ to the local Docker host and remounting them into the 'new-version' container.
I mounted the 'old' bindir to /12-bindir/ and ran
ldd /12-bindir/postgres
to find out which libs postgres depends on.
After copying the 'old' libs to /usr/lib in the 'new-version' container, all dependencies could be found and I was able to use pg_upgrade and actually upgrade a database (e.g. from postgres:12-alpine to postgres:15.10-bookworm).
So... it worked.
The real question is: because it works, should I do it like this, or am I missing something and doing something stupid or silly?
I am glad about every hint and opinion.
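For the record, the procedure described above looks roughly like the following; a sketch only, with hypothetical host paths and container names, and it assumes /srv/pg15-data already holds an initdb'ed 15 cluster:

# Copy bindir, datadir and libs out of the old (alpine-based) container:
docker cp old-pg12:/usr/local/bin /srv/pg12-bin
docker cp old-pg12:/var/lib/postgresql/data /srv/pg12-data
docker cp old-pg12:/usr/lib /srv/pg12-lib

# Run pg_upgrade inside the new-version container with everything mounted:
docker run --rm -it --user postgres \
  -v /srv/pg12-bin:/pg12/bin \
  -v /srv/pg12-lib:/pg12/lib \
  -v /srv/pg12-data:/pg12/data \
  -v /srv/pg15-data:/var/lib/postgresql/data \
  -e LD_LIBRARY_PATH=/pg12/lib \
  postgres:15.10-bookworm \
  bash -c "cd /tmp && pg_upgrade -b /pg12/bin -B /usr/lib/postgresql/15/bin -d /pg12/data -D /var/lib/postgresql/data"

(cd /tmp because pg_upgrade writes its log files into the current directory.)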
D M
(445 rep)
Jan 20, 2025, 11:44 AM
• Last activity: Jan 21, 2025, 08:11 AM
0
votes
1
answers
1093
views
Postgres Database Performance is slow after upgrade from v15 to 17
After upgrading the Postgres server from version 15 to version 17 running on RHEL 9, a few of the SQL statements changed their plans and are running pretty slowly. I am primarily an Oracle DBA and am thinking of performing the following, but want to know which analyze options are best for calculating the stats:
1. Gather statistics
2. Rebuild the indexes
Besides the above two options, any further recommendations would be highly appreciated.
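As a point of reference for the two steps above, the usual PostgreSQL equivalents look like this (a sketch; my_slow_index is a placeholder name):

-- 1. Gather statistics: ANALYZE plays the role of Oracle's DBMS_STATS.
--    pg_upgrade does not migrate planner statistics, so this step is mandatory.
ANALYZE VERBOSE;
-- Or, staged from the shell: vacuumdb --all --analyze-in-stages

-- 2. Index rebuilds are rarely needed after pg_upgrade (on-disk index
--    formats are preserved), but when one is:
REINDEX INDEX CONCURRENTLY my_slow_index;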
Naveed Iftikhar
(1 rep)
Jan 11, 2025, 05:57 AM
• Last activity: Jan 14, 2025, 08:46 PM
1
votes
0
answers
51
views
Performance issue after upgrading Postgresql from v11 to v12 (AWS RDS)
We have just upgraded our RDS instance from v11.22 to v12.20, and we are seeing that some of the queries are performing much slower than on the v11 instance.
Would highly appreciate any help with this.
**new version**: PostgreSQL 12.20 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-17), 64-bit
**old version**: PostgreSQL 11.22 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 7.3.1 20180712 (Red Hat 7.3.1-12), 64-bit
For example, with the query:
explain (buffers, analyze)
select
"partner_id" as "partnerId",
"instance_id" as "instanceId",
"advertiser_id" as "advertiserId",
"campaign_id" as "campaignId",
"platform",
coalesce(SUM("total_budget_spent"), 0) as "totalBudgetSpent",
coalesce(SUM("total_budget_spent_advertiser_currency"), 0) as "totalBudgetSpentAdvertiserCurrency"
from
"dsp_hourly_campaign_analytics" as "dsp_hourly_campaign_analytics"
where
("dsp_hourly_campaign_analytics"."time" >= 1717596340 -- 1609372800
and "dsp_hourly_campaign_analytics"."time" < [the upper bound and the rest of the query were lost in the original post])
One plan (total actual time ≈ 878 ms; which plan belongs to which version was lost in the original post):
HashAggregate (cost=8516.52..8526.36 rows=656 width=104) (actual time=878.501..878.511 rows=13 loops=1)
  Group Key: dsp_hourly_campaign_analytics.partner_id, dsp_hourly_campaign_analytics.instance_id, dsp_hourly_campaign_analytics.advertiser_id, dsp_hourly_campaign_analytics.campaign_id, dsp_hourly_campaign_analytics.platform
  Buffers: shared hit=718 read=8521
  I/O Timings: read=851.931
  ->  Append (cost=0.00..8401.74 rows=6559 width=56) (actual time=0.087..875.336 rows=6542 loops=1)
        Buffers: shared hit=718 read=8521
        I/O Timings: read=851.931
        ->  Seq Scan on dsp_hourly_campaign_analytics (cost=0.00..0.00 rows=1 width=2596) (actual time=0.003..0.003 rows=0 loops=1)
              Filter: (("time" >= 1717596340) AND ("time" < [lost]))
        ->  Index Scan using dsp_hourly_campaign_analytics_202406_campaign_index_202406 on dsp_hourly_campaign_analytics_202406 dsp_hourly_campaign_analytics_1 (cost=0.43..2559.39 rows=1828 width=56) (actual time=0.084..37.129 rows=3536 loops=1)
              Index Cond: ((campaign_id)::text = ANY ('{247257,247259,249282,266304,272557,247258,247256,255462,255461,246096,275592,275593,275668,275681,270524,270557,269423,270838}'::text[]))
              Filter: (("time" >= 1717596340) AND ("time" < [lost]))
        ->  Index Scan using dsp_hourly_campaign_analytics_202407_campaign_index_202407 on dsp_hourly_campaign_analytics_202407 dsp_hourly_campaign_analytics_2 (cost=0.43..3682.56 rows=3229 width=56) (actual time=0.541..824.004 rows=2853 loops=1)
              Index Cond: ((campaign_id)::text = ANY ('{247257,247259,249282,266304,272557,247258,247256,255462,255461,246096,275592,275593,275668,275681,270524,270557,269423,270838}'::text[]))
              Filter: (("time" >= 1717596340) AND ("time" < [lost]))
        ->  Index Scan using dsp_hourly_campaign_analytics_202408_campaign_index_202408 on dsp_hourly_campaign_analytics_202408 dsp_hourly_campaign_analytics_3 (cost=0.42..2126.99 rows=1501 width=56) (actual time=0.050..13.398 rows=153 loops=1)
              Index Cond: ((campaign_id)::text = ANY ('{247257,247259,249282,266304,272557,247258,247256,255462,255461,246096,275592,275593,275668,275681,270524,270557,269423,270838}'::text[]))
              Filter: (("time" >= 1717596340) AND ("time" < [lost]))
The other plan (total actual time ≈ 8998 ms):
HashAggregate (cost=8413.26..8422.96 rows=647 width=104) (actual time=8997.789..8997.804 rows=11 loops=1)
  Group Key: dsp_hourly_campaign_analytics.partner_id, dsp_hourly_campaign_analytics.instance_id, dsp_hourly_campaign_analytics.advertiser_id, dsp_hourly_campaign_analytics.campaign_id, dsp_hourly_campaign_analytics.platform
  Buffers: shared hit=180 read=6096
  I/O Timings: read=8953.020
  ->  Append (cost=0.00..8300.00 rows=6472 width=56) (actual time=1.994..8990.356 rows=5656 loops=1)
        Buffers: shared hit=180 read=6096
        I/O Timings: read=8953.020
        ->  Seq Scan on dsp_hourly_campaign_analytics (cost=0.00..0.00 rows=1 width=2596) (actual time=0.008..0.008 rows=0 loops=1)
              Filter: (("time" >= 1717596340) AND ("time" < [lost]))
        ->  Index Scan using dsp_hourly_campaign_analytics_202406_campaign_index_202406 on dsp_hourly_campaign_analytics_202406 dsp_hourly_campaign_analytics_1 (cost=0.43..3421.11 rows=2471 width=56) (actual time=1.985..3781.430 rows=2666 loops=1)
              Index Cond: ((campaign_id)::text = ANY ('{247257,247259,249282,266304,272557,247258,247256,255462,255461,246096,275592,275593,275668,275681,270524,270557,269423,270838}'::text[]))
              Filter: (("time" >= 1717596340) AND ("time" < [lost]))
        ->  Index Scan using dsp_hourly_campaign_analytics_202407_campaign_index_202407 on dsp_hourly_campaign_analytics_202407 dsp_hourly_campaign_analytics_2 (cost=0.43..2730.11 rows=2386 width=56) (actual time=2.510..4979.345 rows=2837 loops=1)
              Index Cond: ((campaign_id)::text = ANY ('{247257,247259,249282,266304,272557,247258,247256,255462,255461,246096,275592,275593,275668,275681,270524,270557,269423,270838}'::text[]))
              Filter: (("time" >= 1717596340) AND ("time" < [lost]))
        ->  Index Scan using dsp_hourly_campaign_analytics_202408_campaign_index_202408 on dsp_hourly_campaign_analytics_202408 dsp_hourly_campaign_analytics_3 (cost=0.42..2116.42 rows=1614 width=56) (actual time=21.175..227.975 rows=153 loops=1)
              Index Cond: ((campaign_id)::text = ANY ('{247257,247259,249282,266304,272557,247258,247256,255462,255461,246096,275592,275593,275668,275681,270524,270557,269423,270838}'::text[]))
              Filter: (("time" >= 1717596340) AND ("time" < [lost]))
The table is partitioned by month via inheritance. Only the tail of one partition's DDL survived in the original post (the CHECK bounds are the epoch range for February 2024, matching the _202402 indexes below):
... CHECK (("time" >= 1706745600) AND ("time" < 1709251200))
)
INHERITS (public.dsp_hourly_campaign_analytics);
CREATE INDEX dsp_hourly_campaign_analytics_202402_advertiser_index_202402 ON public.dsp_hourly_campaign_analytics_202402 USING btree (advertiser_id);
CREATE INDEX dsp_hourly_campaign_analytics_202402_campaign_index_202402 ON public.dsp_hourly_campaign_analytics_202402 USING btree (campaign_id);
CREATE INDEX dsp_hourly_campaign_analytics_202402_instance_index_202402 ON public.dsp_hourly_campaign_analytics_202402 USING btree (instance_id);
CREATE INDEX dsp_hourly_campaign_analytics_202402_partner_index_202402 ON public.dsp_hourly_campaign_analytics_202402 USING btree (partner_id);
CREATE UNIQUE INDEX dsp_hourly_campaign_analytics_202402_pkey_202402 ON public.dsp_hourly_campaign_analytics_202402 USING btree ("time", advertiser_id, campaign_id, platform, publisher_channel, publisher_platform);
CREATE INDEX dsp_hourly_campaign_analytics_202402_platform_index_202402 ON public.dsp_hourly_campaign_analytics_202402 USING btree (platform);
CREATE INDEX idx_dsp_hourly_campaign_analytics_202402_time ON public.dsp_hourly_campaign_analytics_202402 USING btree ("time");
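A detail worth noting in the two plans above: the row counts and plan shapes are nearly identical, but the I/O read timings differ by roughly a factor of ten (851 ms vs. 8953 ms), which points at storage/cache behaviour rather than the planner; on RDS, volumes restored from snapshots lazy-load their blocks, so first reads after an upgrade can be slow. A hedged way to rule out a cold cache is the pg_prewarm contrib module (the partition names are taken from the plans above):

CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('dsp_hourly_campaign_analytics_202406');
SELECT pg_prewarm('dsp_hourly_campaign_analytics_202407');
SELECT pg_prewarm('dsp_hourly_campaign_analytics_202408');
-- Then re-run EXPLAIN (ANALYZE, BUFFERS) and compare "read" vs. "hit" buffers.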
Vinh Lam
(11 rep)
Aug 27, 2024, 03:24 PM
• Last activity: Aug 27, 2024, 03:29 PM
Showing page 1 of 6 total questions