Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
33
views
How to correctly dumpall a PostgreSQL cluster when in the newer version one extension is separated into 2 extensions
I have PostgreSQL version 10 with the PostGIS extension. In this old version, PostGIS also includes raster support, but in newer versions of PostGIS the raster support lives in a separate extension called postgis_raster.
Consequently, we won't have this line in our dumpall file:
CREATE EXTENSION postgis_raster;
And when I restore the dump, it tells me it does not recognize the raster type!
The file is very big unless I zip it before storing it on disk, but once zipped I can't manually edit it to add the extension to the dump file.
I was thinking of running pg_dumpall with the --globals-only flag and then dumping each database one by one with pg_dump.
However, I'm afraid I may miss some detail of my cluster that way.
Is there a way to tell pg_dumpall that postgis_raster is a separate extension and should be added to the dump file?
How can I safely dump and restore my cluster?
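One workaround that avoids ever materializing the uncompressed file is to stream-edit the dump while compressing it. A minimal sketch, assuming GNU sed and that the dump contains the usual `CREATE EXTENSION IF NOT EXISTS postgis WITH SCHEMA public;` line (hostnames are placeholders):
```
# inject the extra extension right after each postgis line, then compress on the fly
pg_dumpall -h oldserver -U postgres \
  | sed '/^CREATE EXTENSION IF NOT EXISTS postgis /a CREATE EXTENSION IF NOT EXISTS postgis_raster;' \
  | gzip > cluster_dump.sql.gz

# restore on the new server without decompressing to disk
gunzip -c cluster_dump.sql.gz | psql -h newserver -U postgres -d postgres
```
If the dump spells the line differently (no IF NOT EXISTS, or a different schema), adjust the sed pattern to match.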
milad
(101 rep)
Jul 29, 2025, 03:09 PM
• Last activity: Jul 31, 2025, 07:47 AM
0
votes
1
answers
146
views
Refreshing development environment from backup that contains FDW in Postgres
Background:
We recently added a foreign data wrapper from one of our Postgres (v14) databases to another Postgres (v14) database in a different system. It works fine in our production environment, but now when I try to refresh data in our development environment from a production backup, it fails because our firewall rules don't allow our dev environments to "talk" to prod environments.
I am using pg_dump for a nightly backup and pg_restore to refresh the development environment.
Question:
Periodically refreshing a development environment from a production backup seems like a pretty common task. How can I configure or script this so it doesn't fail? I need to change both the hostname and password so that my refreshed development database points to and connects to another development database. My only real idea is to run sed on the backup file before the restore to change the host (from prod to dev) and the password, but that seems very fragile. My only other idea is to use a superuser and not require a password, but I know the security guys are not going to allow that.
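One alternative to editing the dump, offered as a sketch with hypothetical object names (`my_fdw_server` and `app_user` stand in for whatever the actual FDW setup uses): restore the backup as-is, then repoint the foreign server and user mapping afterwards with plain SQL:
```
psql -d dev_db <<'SQL'
ALTER SERVER my_fdw_server
    OPTIONS (SET host 'dev-foreign-db.internal', SET dbname 'dev_db_target');
ALTER USER MAPPING FOR app_user SERVER my_fdw_server
    OPTIONS (SET password 'dev-password');
SQL
```
Note that `SET` only modifies an option already present in the dump; use `ADD` for one that isn't. This can be tacked onto the end of the existing refresh script, which avoids grepping through the backup file at all.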
David S
(284 rep)
Feb 2, 2023, 04:28 PM
• Last activity: Jul 30, 2025, 07:05 AM
10
votes
1
answers
3859
views
pg_restore --clean not working because cascade drop
I'm working with a copy (**DB-B**) of a database (**DB-A**) I keep up to date by running daily:
pg_restore -U postgres -d someDB --clean -j 2 -L WhatToImportList admin.dump
But I started noticing duplicate records in **DB-B** tables. Upon further investigation, it seems `--clean` is not dropping the tables in **DB-B** because that would require a cascade drop on other tables and views that exist in my **DB-B** but NOT in the origin **DB-A**.
1. Is it possible to force the import of data WITHOUT doing a cascade drop? I want to keep all my custom tables, views and functions!
2. If not, what would be a good duplication strategy where I import the data from **DB-A** to **DB-B** but keep all my functions, views and tables I need for my analysis in **DB-B**?
**Possible workaround**: truncating each table and then importing them... but I'd have to list EACH TABLE in the script.
**To help future readers**, this query generates the list of all the tables that had to be truncated:
SELECT 'TRUNCATE ' || quote_ident(table_schema) || '.' || quote_ident(table_name) || ';'
FROM information_schema.tables
WHERE table_schema = 'my_schema'
  AND table_type = 'BASE TABLE';
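Building on that, a sketch of the full truncate-then-restore cycle (assumes `my_schema` is the schema being refreshed; note the `CASCADE` also truncates referencing tables, which may or may not be acceptable): feed the generated statements straight back into psql, then restore data only so `--clean` never has to drop anything:
```
# empty the tables that will be re-imported
psql -d someDB -Atc "SELECT 'TRUNCATE TABLE ' || quote_ident(table_schema) || '.'
                            || quote_ident(table_name) || ' CASCADE;'
                     FROM information_schema.tables
                     WHERE table_schema = 'my_schema' AND table_type = 'BASE TABLE'" \
  | psql -d someDB

# data-only restore: no DROPs, so custom views/functions in DB-B survive
pg_restore -U postgres -d someDB --data-only -j 2 -L WhatToImportList admin.dump
```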
Gabriel
(387 rep)
Dec 11, 2020, 09:52 PM
• Last activity: Jun 22, 2025, 04:26 AM
1
votes
1
answers
212
views
Google Managed PGSQL DB migration to DigitalOcean Managed PGSQL DB
We are migrating from a Google Cloud managed PostgreSQL database to a managed PostgreSQL server on DigitalOcean.
The dilemma we face is that both the Google and DigitalOcean instances are managed, so we have no filesystem access and can't dump to a server directory and then restore. How would we go about this? Is there a command that copies the data directly from the Google Cloud instance to DigitalOcean?
DigitalOcean's documentation assumes we have access to a directory.
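The usual answer is that neither side needs filesystem access: pg_dump and pg_restore are client tools, so they can run on any machine that can reach both instances and pipe the data straight through. A sketch with placeholder hostnames and credentials:
```
# stream the dump directly into the target; nothing touches a server directory
pg_dump -h source.gcp.example -U src_user -Fc source_db \
  | pg_restore -h target.do.example -U doadmin -d target_db --no-owner --verbose
```
Reading a custom-format dump from a pipe rules out parallel restore (`-j`), so for large databases it may be faster to dump to a file on the intermediate machine first.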
sqwale
(231 rep)
Jul 7, 2020, 07:01 PM
• Last activity: Jun 13, 2025, 04:04 AM
1
votes
1
answers
271
views
Can I do a backup without extension functions?
I use the PostGIS extension on my database. The thing is when I do a backup and then a restore, all the PostGIS functions give a warning during the restore process, because the functions already exist, as the extension is already installed.
I have to scroll through all the warnings to make sure there isn't a real error.
So what is the right way to backup / restore, so I can have a clean restore process?
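Extension member objects are normally not dumped at all, so warnings about PostGIS functions already existing often mean the dump and restore styles don't match. A sketch of one clean cycle, assuming the functions really do belong to the extension: dump in custom format, restore with `--clean --if-exists`, and filter what's left:
```
pg_dump -Fc mydb -f mydb.dump
pg_restore --clean --if-exists -d mydb mydb.dump 2> restore.log
grep -v 'already exists' restore.log   # surface only the real errors
```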
Juan Carlos Oropeza
(427 rep)
Dec 2, 2019, 04:26 PM
• Last activity: May 20, 2025, 04:02 AM
1
votes
2
answers
828
views
Why isn't pg_restore --create working?
I backed up a database called `app_data` on a 9.3 server with this basic command:
pg_dump -n public -F custom app_data > app_data.pg.dump
Then I try to restore it on another server (running 9.4) like this:
pg_restore -C -d postgres app_data.pg.dump
But it puts all the tables in the `postgres` database. The man page says it will create and use a new database, `app_data`.
> -C --create
>
> Create the database before restoring into it. [...]
>
> When this option is used, the database named with -d is used only to issue the initial DROP DATABASE and CREATE DATABASE commands. All data is restored into the database name that appears in the archive.
That's not what it's doing. The name in the archive is `app_data`:
bash-4.2$ pg_restore -l app_data.pg.dump
;
; Archive created at Tue Dec 15 04:16:52 2015
; dbname: app_data
...
Am I doing something wrong?
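A workaround that sidesteps `--create` entirely (assuming you can create databases on the 9.4 server): make the target database yourself from `template0`, then restore into it directly:
```
createdb -T template0 app_data
pg_restore -d app_data app_data.pg.dump
```
This loses the automatic `DROP DATABASE`/`CREATE DATABASE` step, but the data lands in `app_data` rather than `postgres`.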
Rob N
(111 rep)
Dec 15, 2015, 04:47 AM
• Last activity: May 8, 2025, 09:02 PM
0
votes
1
answers
3983
views
Trying to restore data from new Postgres version to old Postgres version
I took a backup of a PostgreSQL version 15 database, and now I want to restore that data into PostgreSQL version 13.1.2. Is that possible?
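Downgrades aren't officially supported, and archives written by a newer pg_dump may not be readable by older client tools. A hedged sketch of what usually works: keep using the version-15 client tools end to end so the archive format matches, letting them connect to the version-13 server, or fall back to a plain-SQL dump that can be patched by hand if any statement uses 15-only syntax:
```
# option 1: v15 pg_restore talking to the v13 server (hostnames are placeholders)
pg_restore -h old13host -d mydb --no-owner mydb_v15.dump

# option 2: plain SQL, loaded with psql
pg_dump -Fp --no-owner mydb > mydb.sql
psql -h old13host -d mydb -f mydb.sql
```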
datascinalyst
(105 rep)
Sep 18, 2023, 04:02 PM
• Last activity: Apr 24, 2025, 06:03 PM
6
votes
1
answers
1501
views
Postgresql: How to avoid encoding issues when copying a schema from one server to another?
I'm using pg_dump and pg_restore to move a schema from one PostgreSQL 9.5 server to another. On the destination server:
$ pg_dump -h source.example.com -n my_schema -v --no-owner -F c -f my_schema.dump
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LC_CTYPE = "UTF-8",
LANG = (unset)
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
...
pg_dump: saving encoding = UTF8
(dump completes with no other errors or warnings)
$ pg_restore -h 127.0.0.1 -e -v --no-owner -d my_db my_schema.dump
...
perl: warning: Falling back to the standard locale ("C").
...
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 2211; 0 6549333 TABLE DATA mention chicken
pg_restore: [archiver (db)] COPY failed for table "mention": ERROR: invalid byte sequence for encoding "UTF8": 0xcd 0x2e
Any idea on how to solve this issue? What I want is an exact binary copy of the data. There seems to be some encoding problem which makes me nervous, since what is restored may not be exactly the same as the dump, even if I don't get any errors.
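The byte pair 0xcd 0x2e suggests the source data is not valid UTF-8, which often points at an SQL_ASCII source database (anything goes in, nothing is validated). A diagnostic sketch: compare the two servers' encodings first, and if the source really is SQL_ASCII, create the destination database to match so the copy stays byte-for-byte:
```
psql -h source.example.com -d my_db -c "SHOW server_encoding;"
psql -h 127.0.0.1          -d my_db -c "SHOW server_encoding;"

# if the source reports SQL_ASCII, match it on the destination
createdb -h 127.0.0.1 -T template0 -E SQL_ASCII --lc-collate=C --lc-ctype=C my_db
```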
David Tinker
(471 rep)
Sep 6, 2016, 08:59 AM
• Last activity: Mar 14, 2025, 11:07 AM
0
votes
0
answers
1577
views
pg_restore: error: unsupported version (1.16) in file header
baran@heaven:~$ pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
16 main 5432 online postgres /var/lib/postgresql/16/main /var/log/postgresql/postgresql-16-main.log
baran@heaven:~$ pg_dump --version
pg_dump (PostgreSQL) 16.6 (Ubuntu 16.6-0ubuntu0.24.04.1)
baran@heaven:~$ pg_dump --file "/home/baran/aa/aa.backup" --host "localhost" --port "5432" --username "postgres" --format=c --large-objects "zkbiov2"
Password:
baran@heaven:~$ file /home/baran/aa/aa.backup
/home/baran/aa/aa.backup: PostgreSQL custom database dump - v1.15-0
I don't have PostgreSQL 15 installed on my system, but when I try to back up or restore a file from the 16.6 server, it gives me this error:
pg_restore: error: unsupported version (1.16) in file header
I want to use PostgreSQL 16.6 only.
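Archive version 1.15 is exactly what the 16.6 pg_dump on this machine wrote, so a complaint about 1.16 suggests the file being restored was produced by an even newer client, or that a different binary is being picked up than expected. A diagnostic sketch for the standard Ubuntu/PGDG layout:
```
which pg_dump pg_restore              # what the shell actually resolves
pg_restore --version
ls /usr/lib/postgresql/*/bin/         # every client version installed
/usr/lib/postgresql/16/bin/pg_restore --version   # pin the 16.x binary explicitly
```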
Baran
(133 rep)
Feb 6, 2025, 12:04 PM
0
votes
1
answers
360
views
pg_restore with --no-acl --role=postgres still try set table owner to other role than postgres
I have put a lot of effort into understanding pg_dump and pg_restore, but there are a few key things I still don't understand.
My goal is to copy a schema from a production database with many roles to my local replica, where I only need a single postgres admin user.
I dump myschema with this syntax:
pg_dump -h "pgprod" -p "5432" -d prod_db -U "myuser" -Fc -C --verbose --schema "myschema" -f .\test.backup
Restore:
pg_restore -U postgres -h localhost -p 5432 -d localdb -C -v --no-acl --role=postgres test.backup
But I cannot work out how to ignore the roles that existed in prod_db. I also tried the --no-owner flag with the same result. I am completely stuck at this step, since I don't understand why this is happening. Running Postgres 16:
pg_restore: error: could not execute query: ERROR: role "user1" does not exist
Command was: ALTER TABLE public."mytable" OWNER TO user1;
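A sketch that usually avoids these role errors when the goal is simply a postgres-owned local copy: create the target database yourself and drop `-C` (which replays database-level ownership), letting `--no-owner` assign everything to the connecting role:
```
createdb -U postgres -h localhost localdb
pg_restore -U postgres -h localhost -p 5432 -d localdb \
    --no-owner --no-acl -v test.backup
```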
geogrow
(384 rep)
Jan 16, 2025, 01:01 PM
• Last activity: Jan 16, 2025, 01:34 PM
1
votes
1
answers
722
views
pg_dump on heavily written production db causing queries to hang
Trying to migrate our existing DB. There are no followers or existing backups available to us (they exist on our cloud solution, but are not available for download), so our only option is to run pg_dump against our prod database.
It is very heavily written to and is roughly 700GB in size.
pg_dump -d ${DB_URL} -Fc -Z 9 -f db.bak
Running the above begins the dump, but after a while it seems to cause every other query to hang. There are no schema changes occurring during the dump, only heavy writes (which I thought weren't an issue). I cannot stop any of the jobs (listeners just inserting data), as that would throw production out of sync for the entire duration of the backup.
Are there any ways to optimize this?
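Writes alone shouldn't block behind pg_dump's ACCESS SHARE locks, so a hedged diagnostic sketch: while a dump runs, look at what the hung queries are actually waiting on; separately, `-Z 9` burns a lot of client CPU, and a lighter level is cheap to try:
```
psql -d "$DB_URL" -c "
  SELECT pid, state, wait_event_type, wait_event, left(query, 60) AS query
  FROM pg_stat_activity
  WHERE state <> 'idle';"

pg_dump -d "$DB_URL" -Fc -Z 1 -f db.bak   # much less CPU than -Z 9
```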
Nicole Staline
(131 rep)
Apr 10, 2022, 04:27 AM
• Last activity: Dec 16, 2024, 10:03 PM
0
votes
0
answers
65
views
PostgreSQL Dump with replace statement
I have two PostgreSQL servers, A and B. Some tables from server A need to be replicated or synced to server B. If the data in these tables on server A is modified, I want those updates to be reflected on server B as well.
In other cases, there are tables where data is inserted or modified independently on both servers, A and B. In this scenario, I want to push the changes from Server B to Server A, ensuring the updates are synchronized.
When I was using MySQL, I achieved this with the following command:
mysqldump --login-path=xyz --no-create-info --replace db t1 t2 t3 > a.sql
This allowed me to create a dump where:
- new records were inserted, and
- existing records were updated (thanks to the --replace option).
After restoring this dump, it worked seamlessly.
I now want to achieve a similar workflow and impact with PostgreSQL. However, I am new to PostgreSQL and need guidance on how to set this up effectively.
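PostgreSQL has no pg_dump `--replace` equivalent, but the standard pattern is an upsert through a staging table using `INSERT ... ON CONFLICT`. A sketch, assuming a table `t1` with primary key `id` and a data column `col` (both hypothetical stand-ins):
```
# export the rows from server B as CSV
psql -h serverB -d mydb -c "\copy t1 TO 't1.csv' CSV"

# upsert them into server A via a temp staging table
psql -h serverA -d mydb <<'SQL'
CREATE TEMP TABLE t1_stage (LIKE t1 INCLUDING DEFAULTS);
\copy t1_stage FROM 't1.csv' CSV
INSERT INTO t1 SELECT * FROM t1_stage
ON CONFLICT (id) DO UPDATE SET col = EXCLUDED.col;
SQL
```
`ON CONFLICT` requires a unique or primary-key constraint on the conflict column, and each non-key column to keep must be listed in the `DO UPDATE SET` clause.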
Baran
(133 rep)
Nov 15, 2024, 09:52 AM
0
votes
1
answers
72
views
Postgres server killed by OOM during pg_restore
I'm running Postgres 16.3 in a container with a 1 GB memory limit. When I run `pg_restore` with a dump file that's about 1 GB, the server is killed by the OOM killer, and I'm guessing this comes from autovacuum using too much memory.
Here are the logs from one time it happened:
2024-09-19 21:01:58.441 UTC LOG: checkpoint starting: wal
2024-09-19 21:02:14.444 UTC LOG: checkpoint complete: wrote 4643 buffers (28.3%); 0 WAL file(s) added, 0 removed, 33 recycled; write=15.710 s, sync=0.218 s, total=16.003 s; sync files=39, longest=0.077 s, average=0.006 s; distance=537593 kB, estimate=545819 kB; lsn=1D/1138CA88, redo lsn=1C/F303F258
2024-09-19 21:02:15.706 UTC LOG: checkpoints are occurring too frequently (17 seconds apart)
2024-09-19 21:02:15.706 UTC HINT: Consider increasing the configuration parameter "max_wal_size".
2024-09-19 21:02:15.707 UTC LOG: checkpoint starting: wal
2024-09-19 21:02:25.152 UTC LOG: server process (PID 234) was terminated by signal 9: Killed
2024-09-19 21:02:25.152 UTC DETAIL: Failed process was running: COPY public.auth_user (id, password, last_login, is_superuser, username, first_name, last_name, email, is_staff, is_active, date_joined) FROM stdin;
2024-09-19 21:02:25.152 UTC LOG: terminating any other active server processes
2024-09-19 21:02:25.210 UTC LOG: all server processes terminated; reinitializing
2024-09-19 21:02:25.236 UTC LOG: database system was interrupted; last known up at 2024-09-19 21:02:14 UTC
2024-09-19 21:02:25.797 UTC LOG: database system was not properly shut down; automatic recovery in progress
2024-09-19 21:02:25.801 UTC LOG: redo starts at 1C/F303F258
2024-09-19 21:02:34.673 UTC LOG: unexpected pageaddr 1C/D5C24000 in WAL segment 000000010000001D00000021, LSN 1D/21C24000, offset 12730368
2024-09-19 21:02:34.673 UTC LOG: redo done at 1D/21C23FA0 system usage: CPU: user: 5.42 s, system: 3.34 s, elapsed: 8.87 s
2024-09-19 21:02:35.248 UTC LOG: checkpoint starting: end-of-recovery immediate wait
2024-09-19 21:02:35.687 UTC LOG: checkpoint complete: wrote 15986 buffers (97.6%); 0 WAL file(s) added, 0 removed, 46 recycled; write=0.162 s, sync=0.218 s, total=0.441 s; sync files=45, longest=0.210 s, average=0.005 s; distance=765843 kB, estimate=765843 kB; lsn=1D/21C24048, redo lsn=1D/21C24048
2024-09-19 21:02:35.697 UTC LOG: database system is ready to accept connections
Is that what appears to be causing the issue? Is there a way to disable or limit the auto-vacuum during a restore so it will be able to finish without issue?
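A sketch for testing the autovacuum hypothesis and easing checkpoint pressure during the restore (database and dump names are hypothetical; both settings take effect on reload and are reset afterwards):
```
psql -U postgres -c "ALTER SYSTEM SET autovacuum = off;"
psql -U postgres -c "ALTER SYSTEM SET max_wal_size = '4GB';"
psql -U postgres -c "SELECT pg_reload_conf();"

pg_restore -d mydb --verbose mydump

psql -U postgres -c "ALTER SYSTEM RESET autovacuum;"
psql -U postgres -c "ALTER SYSTEM RESET max_wal_size;"
psql -U postgres -c "SELECT pg_reload_conf();"
psql -d mydb -c "VACUUM ANALYZE;"   # catch up on the skipped housekeeping
```
That said, the killed process in the log is the `COPY` backend itself, so if this doesn't help, the usual suspects are `shared_buffers` and `maintenance_work_mem` being large relative to the 1 GB container limit.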
Dave Johansen
(121 rep)
Sep 24, 2024, 09:02 PM
• Last activity: Sep 26, 2024, 10:26 PM
2
votes
1
answers
438
views
PostgreSQL: pg_dump fails to correctly pg_restore due to missing or erroneous custom function, but present in dump
I am running PostgreSQL 13.15 in a local Docker container, with matching pg_dump/pg_restore versions. When dumping and restoring the database, one of the custom functions that was previously present appears to be missing when it comes time to create the table. The table of interest is skipped during its `COPY` command. After the database restore, the same function with the same input runs successfully without issue.
I've trimmed down the plaintext pg_dump to the minimal reproducible example.
Here is the error that occurs during restore:
CREATE TABLE
ALTER TABLE
COMMENT
WARNING: Value: Equipment
WARNING: TYPE: text
ERROR: function try_cast_jsonb(text) does not exist
LINE 1: SELECT try_cast_jsonb(raw_text)
^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
QUERY: SELECT try_cast_jsonb(raw_text)
CONTEXT: PL/pgSQL function public.content_aware_tsvector(regconfig,text[],character) line 18 at assignment
COPY searchable_items, line 1: "84 name {} {} {Equipment} A app_user"
Here is the function execution after restore is complete within the psql shell:
mydb=# SELECT try_cast_jsonb('Equipment');
try_cast_jsonb
----------------
The function returns null if the string cannot be cast to jsonb.
The database dump was performed with the following command:
~~~sh
pg_dump --host localhost --port 1487 --dbname mydb --clean --if-exists > backuptest.sql
~~~
The restore was performed as follows into a clean database:
~~~sh
pg_restore --host localhost --port 1487 --dbname mydb --clean --if-exists --no-owner backuptest.sql
~~~
This occurs in both the plain and custom (`-Fc`) formats.
Here is the pg_dump in plaintext:
--
-- PostgreSQL database dump
--
-- Dumped from database version 13.15 (Debian 13.15-1.pgdg120+1)
-- Dumped by pg_dump version 13.15 (Debian 13.15-1.pgdg120+1)
SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
-- DROP ALL CONSTRAINTS, INDEXES, FUNCTIONS, EXTENSIONS if existing
DROP FUNCTION IF EXISTS public.try_cast_jsonb(_text text);
DROP FUNCTION IF EXISTS public.content_aware_tsvector(_lang text, _raw_text text, _tsweight character);
DROP FUNCTION IF EXISTS public.content_aware_tsvector(_lang regconfig, _raw_text_arr text[], _tsweight character);
DROP EXTENSION IF EXISTS pg_trgm;
DROP EXTENSION IF EXISTS btree_gist;
--
-- Name: btree_gist; Type: EXTENSION; Schema: -; Owner: -
--
CREATE EXTENSION IF NOT EXISTS btree_gist WITH SCHEMA public;
--
-- Name: EXTENSION btree_gist; Type: COMMENT; Schema: -; Owner:
--
COMMENT ON EXTENSION btree_gist IS 'support for indexing common datatypes in GiST';
--
-- Name: pg_trgm; Type: EXTENSION; Schema: -; Owner: -
--
CREATE EXTENSION IF NOT EXISTS pg_trgm WITH SCHEMA public;
--
-- Name: EXTENSION pg_trgm; Type: COMMENT; Schema: -; Owner:
--
COMMENT ON EXTENSION pg_trgm IS 'text similarity measurement and index searching based on trigrams';
--
-- Name: try_cast_jsonb(text); Type: FUNCTION; Schema: public; Owner: app_user
--
CREATE FUNCTION public.try_cast_jsonb(_text text) RETURNS jsonb
LANGUAGE plpgsql STABLE
AS $$
DECLARE
trimmed_text text;
casted_json jsonb;
BEGIN
-- Remove leading whitespace
trimmed_text := ltrim(_text);
-- Check if the string doesn't start with '{' or '['
IF NOT (left(trimmed_text, 1) = '{' OR left(trimmed_text, 1) = '[') THEN
RETURN NULL;
END IF;
BEGIN
SELECT trimmed_text::jsonb INTO casted_json;
EXCEPTION WHEN OTHERS THEN
RETURN NULL;
END;
RETURN casted_json;
END;
$$;
ALTER FUNCTION public.try_cast_jsonb(_text text) OWNER TO app_user;
--
-- Name: content_aware_tsvector(regconfig, text[], character); Type: FUNCTION; Schema: public; Owner: app_user
--
CREATE FUNCTION public.content_aware_tsvector(_lang regconfig, _raw_text_arr text[], _tsweight character DEFAULT NULL::bpchar) RETURNS tsvector
LANGUAGE plpgsql IMMUTABLE
AS $_$
DECLARE
vec tsvector;
i integer;
raw_text text;
json_text jsonb;
debug text;
BEGIN
vec := '';
FOR i IN array_lower(_raw_text_arr, 1) .. array_upper(_raw_text_arr, 1) LOOP
raw_text := COALESCE(_raw_text_arr[i], '');
RAISE WARNING 'Value: %', raw_text;
SELECT pg_typeof(raw_text) INTO debug;
RAISE WARNING 'TYPE: %', debug;
json_text := try_cast_jsonb(raw_text);
IF json_text IS NOT NULL THEN
-- JSON aware tsvector only indexes contents and not structure.
vec := vec || jsonb_to_tsvector(_lang, json_text, '["string", "numeric", "boolean"]'::jsonb);
ELSEIF raw_text ~ '^[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{4}-[a-fA-F0-9]{12}$' THEN
-- Makes UUIDs a single element in the index for only exact matches.
vec := vec || array_to_tsvector(ARRAY[raw_text]);
ELSE
vec := vec || to_tsvector(_lang, raw_text);
END IF;
END LOOP;
IF _tsweight IS NOT NULL THEN
RETURN setweight(vec, _tsweight::"char");
END IF;
RETURN vec;
END;
$_$;
ALTER FUNCTION public.content_aware_tsvector(_lang regconfig, _raw_text_arr text[], _tsweight character) OWNER TO app_user;
--
-- Name: content_aware_tsvector(text, text, character); Type: FUNCTION; Schema: public; Owner: app_user
--
CREATE FUNCTION public.content_aware_tsvector(_lang text, _raw_text text, _tsweight character DEFAULT NULL::bpchar) RETURNS tsvector
LANGUAGE plpgsql IMMUTABLE
AS $$
DECLARE
raw_json json;
vec tsvector;
BEGIN
BEGIN
raw_json = _raw_text::json;
EXCEPTION
WHEN others THEN
raw_json = NULL;
END;
vec = (
CASE
WHEN raw_json IS NOT NULL THEN json_to_tsvector(_lang::regconfig, raw_json, '["string", "numeric", "boolean"]')
ELSE to_tsvector(_lang::regconfig, _raw_text)
END
);
IF _tsweight IS NOT NULL THEN
RETURN setweight(vec, _tsweight::"char");
ELSE
RETURN vec;
END IF;
END;
$$;
ALTER FUNCTION public.content_aware_tsvector(_lang text, _raw_text text, _tsweight character) OWNER TO app_user;
SET default_tablespace = '';
SET default_table_access_method = heap;
--
-- Name: searchable_items; Type: TABLE; Schema: public; Owner: app_user
--
CREATE TABLE public.searchable_items (
item_id integer NOT NULL,
item_type text NOT NULL,
pk_meta jsonb DEFAULT '{}'::jsonb NOT NULL,
meta jsonb DEFAULT '{}'::jsonb NOT NULL,
raw_text text[] NOT NULL,
tsweight character(1) DEFAULT NULL::bpchar,
tenant character varying DEFAULT CURRENT_USER NOT NULL,
var_id integer GENERATED ALWAYS AS ((((meta || pk_meta) ->> 'resource_var_id'::text))::integer) STORED,
val_id integer GENERATED ALWAYS AS ((((meta || pk_meta) ->> 'resource_val_id'::text))::integer) STORED,
search_tsv_english tsvector GENERATED ALWAYS AS (public.content_aware_tsvector('english'::regconfig, raw_text, tsweight)) STORED
);
ALTER TABLE public.searchable_items OWNER TO app_user;
--
-- Name: TABLE searchable_items; Type: COMMENT; Schema: public; Owner: app_user
--
COMMENT ON TABLE public.searchable_items IS 'Table for optimizing text based search';
--
-- Data for Name: searchable_items; Type: TABLE DATA; Schema: public; Owner: app_user
--
COPY public.searchable_items (item_id, item_type, pk_meta, meta, raw_text, tsweight, tenant) FROM stdin;
84 name {} {} {Equipment} A app_user
84 description {} {} {"core resource class for the scheduling of equipment"} C app_user
--
-- Name: searchable_items searchable_items_pkey; Type: CONSTRAINT; Schema: public; Owner: app_user
--
ALTER TABLE ONLY public.searchable_items
ADD CONSTRAINT searchable_items_pkey PRIMARY KEY (item_id, item_type, pk_meta);
--
-- Name: trgm_search_idx; Type: INDEX; Schema: public; Owner: app_user
--
CREATE INDEX trgm_search_idx ON public.searchable_items USING gin (((array_to_tsvector(array_remove(raw_text, NULL::text)))::text) public.gin_trgm_ops);
--
-- PostgreSQL database dump complete
--
What am I missing? What can cause a pg_dump to fail when restored into the same PostgreSQL and psql tool versions?
The table that is failing to restore is as follows (also embedded in the dump above):
CREATE TABLE public.searchable_items (
item_id integer NOT NULL,
item_type text NOT NULL,
pk_meta jsonb DEFAULT '{}'::jsonb NOT NULL,
meta jsonb DEFAULT '{}'::jsonb NOT NULL,
raw_text text[] NOT NULL,
tsweight character(1) DEFAULT NULL::bpchar,
tenant character varying DEFAULT CURRENT_USER NOT NULL,
var_id integer GENERATED ALWAYS AS ((((meta || pk_meta) ->> 'resource_var_id'::text))::integer) STORED,
val_id integer GENERATED ALWAYS AS ((((meta || pk_meta) ->> 'resource_val_id'::text))::integer) STORED,
search_tsv_english tsvector GENERATED ALWAYS AS (public.content_aware_tsvector('english'::regconfig, raw_text, tsweight)) STORED
);
The `search_tsv_english` generated column, derived from the `raw_text` and `tsweight` columns, causes the failure. This custom function should be defined before the `COPY` command that restores the table is executed, but the error reports that the function is either not present or has an argument type mismatch.
Executing the `COPY` command manually after the restore runs without issue, as if the function was not persisted before the `COPY` command.
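One explanation consistent with these symptoms, offered as a hypothesis rather than a confirmed diagnosis: the dump pins an empty search_path near the top (`SELECT pg_catalog.set_config('search_path', '', false);`), so when the generated column fires during `COPY`, the unqualified `try_cast_jsonb(raw_text)` call inside `content_aware_tsvector` cannot be resolved; a manual session has `public` on its search_path, which is why the same `COPY` succeeds afterwards. A sketch to reproduce, plus the one-line fix:
```
psql -d mydb <<'SQL'
-- reproduce the restore-time environment
SELECT pg_catalog.set_config('search_path', '', false);
-- expected to raise: function try_cast_jsonb(text) does not exist
SELECT public.content_aware_tsvector('english'::regconfig, ARRAY['Equipment'], 'A');
SQL

# the fix would be to schema-qualify the nested call inside content_aware_tsvector:
#     json_text := public.try_cast_jsonb(raw_text);
# then re-dump
```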
Justin Lowen
(68 rep)
Sep 13, 2024, 10:07 PM
• Last activity: Sep 17, 2024, 03:52 AM
0
votes
0
answers
31
views
pg_restore does not give back shell
I got a dump file. I'm trying to restore it, but after it finishes, pg_restore does not give the shell back and gets stuck. Is there any solution?
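A first diagnostic sketch (database and file names are placeholders): from a second terminal, check whether any backend is still running or waiting, and rerun the restore with `--verbose` to see where it stops:
```
psql -c "SELECT pid, state, wait_event_type, wait_event, left(query, 60)
         FROM pg_stat_activity WHERE state <> 'idle';"

pg_restore --verbose -d mydb mydump
```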
mahdi
(1 rep)
May 27, 2024, 01:25 PM
0
votes
1
answers
116
views
Command pg_restore does not import / restore any data, printing "depends on"
A database `origin` was dumped in Postgres *"custom"* format using: `pg_dump -d origin --data-only -Fc > file.dump`.
The dump was then used to restore all data in a **different database with the same table names** as in the `origin` db. Command: `pg_restore -l -d target --verbose -Fc file.dump`.
;
; Archive created at 2024-03-19 14:00:10 UTC
; dbname: origin
; TOC Entries: 215
; Compression: 0
; Dump Version: 1.13-0
; Format: CUSTOM
; Integer: 4 bytes
; Offset: 8 bytes
; Dumped from database version: 11.18 (Ubuntu 11.18-1.pgdg18.04+1)
; Dumped by pg_dump version: 11.18 (Ubuntu 11.18-1.pgdg18.04+1)
;
;
; Selected TOC Entries:
;
5357; 0 0 ENCODING - ENCODING
5358; 0 0 STDSTRINGS - STDSTRINGS
5359; 0 0 SEARCHPATH - SEARCHPATH
5360; 1262 16387 DATABASE - origin userA
5361; 0 0 DATABASE PROPERTIES - origin userA
; depends on: 5360
5145; 0 730750 TABLE DATA subpartitions backends_y2024w03 userA
; depends on: 237
5166; 0 738268 TABLE DATA subpartitions backends_y2024w04 userA
; depends on: 258
5189; 0 745798 TABLE DATA subpartitions backends_y2024w05 userA
; depends on: 281
.........
- **Problem:** Nothing was restored. As shown, the *"custom"* dump seems to have `depends on: xxx` remarks encoded inside, which do not exist in the target database. Converting the dump to a plain SQL file, containing only commands like `COPY subpartitions.backends_y2024w03 ("time", dbname, data, tag_data) FROM stdin;`, imported all data **without any issues**.
**Question:** How are the `depends on: xxx` missing entities detected, what does `xxx` refer to when restoring a *"custom"* backup file, and how can those checks be disabled? (All real dependencies existed in this example.)
FireEmerald
(101 rep)
Mar 21, 2024, 01:20 PM
• Last activity: May 2, 2024, 09:22 AM
1
votes
1
answers
8062
views
"Unsupported version" error restoring a schema backed up with the same version it was dumped
I am trying to back up and restore a schema from a remote server to a local system. Even though `pg_dump` and `pg_restore` report the same version, the latter fails with the familiar "unsupported version" error (log below). I have seen this error before, when `pg_restore` is an older version than `pg_dump`, but never when the versions match. What could be the cause?
$ psql -h server.somewhere -p 5432 -U user -d database
Password for user user:
psql (16.1 (Ubuntu 16.1-1.pgdg22.04+1), server 14.10)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.
database=> SELECT version();
version
----------------------------------------------------------------------------------------------------------
PostgreSQL 14.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-20), 64-bit
(1 row)
database=> \q
$ pg_dump --version
pg_dump (PostgreSQL) 14.10 (Ubuntu 14.10-1.pgdg22.04+1)
$ pg_restore --version
pg_restore (PostgreSQL) 14.10 (Ubuntu 14.10-1.pgdg22.04+1)
$ pg_dump -h server.somewhere -p 5432 -U user database -n schema_new -F t > ~/temp/schema_new_2024-01-29.backup
$ pg_restore -d wosis ~/temp/schema_new_2024-01-29.backup
pg_restore: error: unsupported version (1.15) in file header
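Archive format 1.15 is what a version-16 pg_dump writes (see the 16.6 example elsewhere on this page), and a 14.10 pg_restore cannot read formats newer than its own, so the dump was most likely produced by the 16.1 client that is also installed (the psql banner shows it); on Debian/Ubuntu the /usr/bin tools are wrappers that can dispatch to a different major version than expected. A sketch that pins both steps to the 14 binaries:
```
ls /usr/lib/postgresql/*/bin/pg_dump    # list the installed client versions
/usr/lib/postgresql/14/bin/pg_dump -h server.somewhere -p 5432 -U user \
    -n schema_new -F t database > ~/temp/schema_new.backup
/usr/lib/postgresql/14/bin/pg_restore -d wosis ~/temp/schema_new.backup
```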
Luís de Sousa
(674 rep)
Jan 29, 2024, 01:49 PM
• Last activity: Jan 30, 2024, 05:40 PM
0
votes
1
answers
138
views
How to pg_restore some tables with automatic index creation?
How do I restore some tables from a pg_dump custom-format dump and automatically create their indexes?
I just restored some tables from a database dump (custom format):
pg_restore -d database --table=table1 --table=table2 --table=table3 dumpdatabase.dmp
This results in tables with no indexes; I have to create each of them manually. How can I get pg_restore to create the indexes automatically?
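A sketch of the usual workaround (the grep patterns below are illustrative and worth eyeballing against your actual TOC before use): `--table` deliberately restores only the table itself, so build an explicit list with `-l`, keep the related index and constraint entries, and restore with `-L`:
```
pg_restore -l dumpdatabase.dmp \
  | grep -E ' (TABLE|TABLE DATA|INDEX|CONSTRAINT) ' \
  | grep -E 'table1|table2|table3' > items.list

pg_restore -d database -L items.list dumpdatabase.dmp
```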
Sapta
(11 rep)
Dec 12, 2023, 05:10 AM
• Last activity: Dec 12, 2023, 09:57 AM
1
votes
1
answers
91
views
pg_restore drops extension plpgsql
We face a situation where pg_restore drops the plpgsql extension and then recreates it. This happens in one of our CI jobs, but I could not reproduce it locally. These are the commands we use:
- pg_dump -d $DATABASE_URL -j 8 -Z 0 -Fd -f ${backupDir}
- psql -d $DATABASE_URL -c "DROP SCHEMA IF EXISTS public CASCADE;"
- psql $DATABASE_URL -c "CREATE SCHEMA public; GRANT ALL ON SCHEMA public TO public;"
- pg_restore --verbose --clean --no-owner -d $DATABASE_URL -j 8 --format=d ${backupDir}
And these are the logs:
[screenshot of the pg_restore log output omitted]
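If the aim is just to stop plpgsql from being dropped and recreated (which `--clean` does because the extension is recorded in the archive's TOC), a sketch: filter the TOC before restoring:
```
pg_restore -l "${backupDir}" | grep -vE 'EXTENSION.*plpgsql' > restore.list
pg_restore --verbose --clean --no-owner -d "$DATABASE_URL" -j 8 \
    --format=d -L restore.list "${backupDir}"
```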
Vasileios Giannakidis
(13 rep)
Sep 28, 2023, 09:48 AM
• Last activity: Sep 28, 2023, 10:23 AM
0
votes
1
answers
3648
views
`pg_restore` unable to read `toc.dat` created by `pg_dump -Fd`: `pg_restore: error: expected format (1) differs from format found in file (3)`
The `pg_dump -Fd mydb -f mydb.dump` command creates a `mydb.dump` directory with a `toc.dat` file. The `pg_dump` man page mentions that `pg_restore` is able to read this file, but after skimming its man page and stumbling upon a PostgreSQL mailing list thread, I get this cryptic error:
$ pg_restore -l mydb.dump/toc.dat
pg_restore: error: expected format (1) differs from format found in file (3)
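The fix is small but easy to miss: for directory-format dumps, pg_restore expects the directory itself, not the `toc.dat` inside it:
```
pg_restore -l mydb.dump        # pass the directory, not mydb.dump/toc.dat
pg_restore -d mydb mydb.dump   # likewise for an actual restore
```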
toraritte
(121 rep)
Sep 26, 2023, 06:32 PM
• Last activity: Sep 27, 2023, 06:28 PM