
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
2 answers
30 views
How to correctly pg_dumpall a PostgreSQL cluster when one extension is split into two extensions in the newer version
I have a PostgreSQL version 10 cluster with the PostGIS extension. In this old version, PostGIS also includes raster support, but in newer versions of PostGIS the raster support lives in a separate extension called postgis_raster. Thus, we won't have this line in our dumpall file:
CREATE EXTENSION postgis_raster;
And when I restore it, it tells me it does not recognize the raster type! My file is very big if I do not compress it before storing it on disk, and if I compress it, I won't be able to edit the file manually to add this extension to the dump. I was thinking of running pg_dumpall with the --globals-only flag and then dumping each database one by one with pg_dump. However, I was afraid I might miss some detail of my cluster. Is there a way to tell pg_dumpall that postgis_raster is a separate extension and should be added to the dump file? How can I safely dump and restore my cluster?
milad (101 rep)
Jul 29, 2025, 03:09 PM • Last activity: Jul 31, 2025, 07:47 AM
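A hedged sketch of the split-dump route the questioner is considering (database and file names are placeholders, and it assumes the target cluster already has the PostGIS 3+ packages installed): creating the now-separate extension before restoring means the raster type already exists when the data arrives.

```
# 1. Cluster-wide objects (roles, tablespaces) from the old cluster
pg_dumpall --globals-only > globals.sql

# 2. Each database separately, in compressed custom format
pg_dump -Fc -d mydb -f mydb.dump

# 3. On the new cluster: globals first, then database, extensions, data
psql -f globals.sql postgres
createdb mydb
psql -d mydb -c 'CREATE EXTENSION postgis;'
psql -d mydb -c 'CREATE EXTENSION postgis_raster;'
pg_restore -d mydb mydb.dump
```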
3 votes
2 answers
660 views
Getting Postgres CREATE TABLE statements
I created some tables (9, to be exact) with a Django (1.9.6) migration and now I'm trying to get simple `CREATE TABLE` statements for them. I tried [this answer](https://stackoverflow.com/a/2594564/3704831), but using `pg_dump` in this way gives me over 800 lines of output for the 9 tables. For example, part of the output creating the first table is

    --
    -- Name: popresearch_question; Type: TABLE; Schema: public; Owner: postgres
    --

    CREATE TABLE popresearch_question (
        id integer NOT NULL,
        created_date timestamp with time zone NOT NULL,
        modified_date timestamp with time zone NOT NULL,
        question_text character varying(500) NOT NULL,
        question_template_id integer,
        question_type_id integer,
        user_id integer
    );

    ALTER TABLE popresearch_question OWNER TO postgres;

    --
    -- Name: popresearch_question_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres
    --

    CREATE SEQUENCE popresearch_question_id_seq
        START WITH 1
        INCREMENT BY 1
        NO MINVALUE
        NO MAXVALUE
        CACHE 1;

    ALTER TABLE popresearch_question_id_seq OWNER TO postgres;

    --
    -- Name: popresearch_question_id_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: postgres
    --

    ALTER SEQUENCE popresearch_question_id_seq OWNED BY popresearch_question.id;

and then later on are more ALTER statements:

    --
    -- Name: popresearch_question id; Type: DEFAULT; Schema: public; Owner: postgres
    --

    ALTER TABLE ONLY popresearch_question ALTER COLUMN id SET DEFAULT nextval('popresearch_question_id_seq'::regclass);

and then later:

    --
    -- Name: popresearch_question popresearch_question_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres
    --

    ALTER TABLE ONLY popresearch_question
        ADD CONSTRAINT popresearch_question_pkey PRIMARY KEY (id);

    --
    -- Name: popresearch_question popresearch_question_question_text_key; Type: CONSTRAINT; Schema: public; Owner: postgres
    --

    ALTER TABLE ONLY popresearch_question
        ADD CONSTRAINT popresearch_question_question_text_key UNIQUE (question_text);

and after that there are at least a dozen more ALTER TABLE statements just for this one table scattered in the pg_dump output. Is there a way to get a simple, condensed CREATE TABLE statement that includes all the keys, constraints, etc.?
wogsland (416 rep)
May 9, 2017, 09:47 PM • Last activity: Jul 3, 2025, 07:00 PM
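For comparison, a minimal sketch of trimming pg_dump down to table definitions only ("mydb" is a placeholder; the table name comes from the question). pg_dump still emits constraints and defaults as separate ALTER TABLE statements, so this reduces the noise but does not yield one condensed CREATE TABLE; psql's \d is often the quicker way to see the whole definition in one place.

```
pg_dump --schema-only --no-owner -t popresearch_question mydb > question_table.sql
psql -d mydb -c '\d popresearch_question'
```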
1 vote
2 answers
607 views
pg_dump - table dependencies when using --table
The documentation says this when using `-t table`:

> **Note:** When -t is specified, pg_dump makes no attempt to dump any other database objects that the selected table(s) might depend upon. Therefore, there is no guarantee that the results of a specific-table dump can be successfully restored by themselves into a clean database.

However, when the table has a sequence whose OWNED BY is set to the table, the sequence is included in the dump. Shouldn't it be excluded according to this part of the documentation? (If so, why are sequences an exception?) What kinds of database objects are excluded in `-t table` mode, then?
user (113 rep)
Jan 25, 2020, 02:53 PM • Last activity: Jul 2, 2025, 01:18 PM
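A quick way to see exactly what -t pulls in for a given table (a sketch; "mydb" and "mytable" are placeholders):

```
pg_dump --schema-only -t mytable mydb | grep -E 'CREATE (TABLE|SEQUENCE)|OWNED BY'
```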
0 votes
1 answer
190 views
Small databases, large backup files
We have PostgreSQL 9.6 databases and a daily task running pg_dump for all databases, but these backups are getting "large" at this point. My database was 900 MB, so I tried shrinking it by deleting old history that is no longer necessary, and after that I ran VACUUM FULL. The statistics in pgAdmin say the database is now only 30 MB. When I run the pg_dump command manually from a command prompt, it creates a 22 MB file. When I run my daily task (Windows Task Scheduler), it still creates a 1 GB backup file. What am I missing at this point?
TimVK (111 rep)
May 9, 2018, 01:42 PM • Last activity: Jun 26, 2025, 10:07 PM
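One way to narrow this down, sketched with a placeholder database name: run the single-database dump and a whole-cluster dump by hand and compare their sizes with whatever the scheduled task produces. If the task runs pg_dumpall, or dumps a different database than the one that was vacuumed, the 1 GB file is explained.

```
pg_dump    -U postgres mydb > single_db.sql      # what the manual run produces (~22 MB here)
pg_dumpall -U postgres      > whole_cluster.sql  # what an "all databases" task often runs
```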
0 votes
1 answer
46 views
Is pg_dump as acceptable as pg_dumpall for upgrading a database across major PostgreSQL versions?
I recently migrated a Postgres database from one server running Ubuntu 20.04 and Postgres 12 to another server running Ubuntu 24.04 and Postgres 16. To move the database from one server to the other, I did the following:

    # On the Postgres 12 system
    $ sudo pg_dump -C sales_database > sales_databa...

    # On the Postgres 16 system
    $ sudo -u postgres psql ...

The Postgres 15 upgrade notes, covering part of the 12-to-16 jump I performed, state:

> A dump/restore using pg_dumpall or use of pg_upgrade or logical replication is required for those wishing to migrate data from any previous release. See [Section 19.6](https://www.postgresql.org/docs/15/upgrading.html) for general information on migrating to new major releases.

That is, the notes specifically cite pg_dumpall, not pg_dump, as necessary for the upgrade. The linked Section 19.6 continues to cite pg_dumpall without mentioning pg_dump. Of course, the two aren't that different: pg_dumpall calls pg_dump during operation. However, it's possible that pg_dumpall does something else that makes it different from pg_dump alone. I'm troubleshooting an error on the new system, and I want to make sure this isn't the issue. For the purposes of the upgrade, is pg_dump exactly equivalent to pg_dumpall for transferring a database in compliance with the upgrade requirements cited in the Postgres 15 upgrade notes?
Borea Deitz (151 rep)
Jun 3, 2025, 07:41 PM • Last activity: Jun 4, 2025, 04:49 AM
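A sketch of the combination the release notes imply (file names are placeholders): pg_dumpall additionally carries cluster-wide objects, i.e. roles and tablespaces, that a per-database pg_dump leaves out, so the usual equivalent is a globals-only pg_dumpall paired with pg_dump for each database.

```
# On the old server
pg_dumpall --globals-only > globals.sql
pg_dump -C sales_database > sales_database.sql

# On the new server
psql -U postgres -f globals.sql postgres
psql -U postgres -f sales_database.sql postgres
```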
4 votes
2 answers
203 views
Restore a full PostgreSQL server including passwords
I'm preparing a migration from one server to another. Actually, the same hardware, but a full reinstall of a newer OS version, so everything including PostgreSQL needs to be reinstalled. But the user data must remain the same. This question is about restoring a full Postgres database server with all databases, structure, data, roles, passwords and everything. The old version is Postgres 13, the new is version 16. I can make a full dump with this command:

    pg_dumpall -U postgres -h localhost --clean --if-exists --quote-all-identifiers | gzip > ~/full-postgres.sql.gz

On the new server, however, it can't be loaded. Currently this is in a test environment. Apparently I've created the default database with a new password, and the restore has now left the server in an unusable state. I did this:

    gunzip -c ~/full-postgres.sql.gz | psql -U postgres -h localhost

It asked for the password and started running some commands. Then the last two lines are these:
\connect: connection to server at "localhost" (127.0.0.1), port 5432 failed: FATAL:  password authentication failed for user "postgres"
connection to server at "localhost" (127.0.0.1), port 5432 failed: FATAL:  password authentication failed for user "postgres"
I cannot connect to the server anymore with the old or new password. I guess that attempt is lost and I'll have to restart my test with a fresh server installation. What's the correct procedure to get a totally complete dump of Postgres 13 and load all the data, passwords etc. into a fresh Postgres 16 installation? Running on Ubuntu 20.04/24.04. Needless to say, I cannot recreate users and passwords manually because I don't have all the passwords. Users must be able to connect with their existing password again on the new server, so wherever Postgres keeps the relevant data, it needs to be restored from the dump. While I'm thinking of it, a related configuration change is that the old server had pg_hba.conf entries with the "md5" method and the new one will have "scram-sha-256" everywhere. Would this prevent preserving the passwords, maybe? Would progress just leave me behind here? PS: I could also copy the raw database files on disk and restore those, if it helps to preserve all data. Just remember the version has changed from 13 to 16.
ygoe (303 rep)
May 24, 2025, 12:17 PM • Last activity: May 25, 2025, 11:14 AM
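A sketch of one way around the mid-restore lockout, assuming local access as the postgres OS user on the new Ubuntu server: loading the dump through peer authentication means psql never re-authenticates with a password, so the ALTER ROLE statements in the dump cannot lock the session out. On the pg_hba.conf point: an md5 hash restores fine but cannot be converted to SCRAM without the plaintext password, so either keep "md5" in pg_hba.conf or have users reset their passwords afterwards.

```
gunzip -c ~/full-postgres.sql.gz | sudo -u postgres psql
```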
0 votes
1 answer
247 views
Change MySQL commands to PostgreSQL in a bash script
I have a bash script to back up MySQL databases. I have pasted the relevant part below (ignore the variables). The two lines I think I need to change to make this work in PostgreSQL are:

    DBS="$(mysql --login-path=dbbkup -Bse 'show databases')"

and

    $MYSQLDUMP --login-path=dbbkup --add-drop-database --single-transaction --triggers --routines --events --set-gtid-purged=OFF $db | $GZIP > $FILE

Can anyone advise as to how I can get the same functionality with PostgreSQL?

    # get all database listing
    DBS="$(mysql --login-path=dbbkup -Bse 'show databases')"

    # start to dump database one by one
    for db in $DBS
    do
        DUMP="yes";
        if [ "$IGNOREDB" != "" ]; then
            for i in $IGNOREDB   # Store all value of $IGNOREDB ON i
            do
                if [ "$db" == "$i" ]; then   # If result of $DBS(db) is equal to $IGNOREDB(i) then
                    DUMP="NO";               # SET value of DUMP to "no"
                    #echo "$i database is being ignored!";
                fi
            done
        fi
        if [ "$DUMP" == "yes" ]; then        # If value of DUMP is "yes" then backup database
            FILE="$BACKUPDIR/$NOW-$db.sql.gz";
            echo "BACKING UP $db";
            $MYSQLDUMP --login-path=dbbkup --add-drop-database --single-transaction --triggers --routines --events --set-gtid-purged=OFF $db | $GZIP > $FILE
        fi
    done
eekfonky (207 rep)
Oct 18, 2016, 09:16 AM • Last activity: May 20, 2025, 09:03 PM
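A sketch of the PostgreSQL equivalents of those two lines (it assumes peer authentication or a ~/.pgpass file for the postgres user, since PostgreSQL has no --login-path):

```
# list databases, skipping templates
DBS="$(psql -U postgres -At -c 'SELECT datname FROM pg_database WHERE NOT datistemplate;')"

# dump one database, with DROP/CREATE statements, compressed
pg_dump -U postgres --clean --create "$db" | $GZIP > "$FILE"
```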
1 vote
2 answers
312 views
Migrating from PostgreSQL to MySQL (data with \r\n)
I'm trying to migrate a database from PostgreSQL to MySQL on an Ubuntu 23.04 system (latest version of both). So far, it looks like things are fine, except for one problem. In one of the character varying fields in PostgreSQL, there are \r\n characters. Yeah -- whoever developed this system made a front-end that allowed them. I'm saving the data using a command such as this:
pg_dump -U mylogin --data-only --file=foo.sql --no-owner --column-inserts --table foo --quote-all-identifiers
which:

- does not output the schema (I have separate CREATE TABLE statements for that)
- creates only INSERT statements
- does not associate an owner to the table

--quote-all-identifiers was something I tried, and it seems having it or not having it does not solve my problem. My problem is that I might have a row like

    [1, "hello", "hello\r\nthere"]

which looks like this in foo.sql:

    INSERT INTO TEST (id, comment1, comment2) VALUES (1, 'hello', 'hello
    there');

and MySQL does not like this (i.e. mysql -u mylogin -p …). When doing backups and restorations within PostgreSQL, there wasn't any problem, since pg_dump with default options uses COPY. I looked through the options for pg_dump and no obvious option jumps out at me that would solve this problem. I thought quoting would (i.e., --quote-all-identifiers), but it did not. I'm not sure if I should look into how PostgreSQL is exporting it or how MySQL is importing it... or some other solution I haven't yet thought of. I was thinking of doing this on the PostgreSQL database:
UPDATE TEST SET comment2 = regexp_replace(description, E'([\\n\\r]+)', ' ', 'g');
(Untested) But I guess I would rather not change the source data if I can. Does anyone have a suggestion about what I should do? Or perhaps I'm going about this wrong... Thank you!
Ray (113 rep)
Sep 29, 2023, 08:44 PM • Last activity: May 15, 2025, 05:02 AM
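A sketch of rewriting the newlines on the way out instead of editing the source table, by exporting the rows with \copy rather than pg_dump (the database name is a placeholder; table foo and the column names follow the question):

```
psql -U mylogin -d mydb -c \
  "\copy (SELECT id, comment1, regexp_replace(comment2, E'[\r\n]+', ' ', 'g') FROM foo) TO 'foo.csv' WITH (FORMAT csv)"
```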
0 votes
2 answers
1939 views
How can I use pg_dump to export a single table to CSV that can then be imported by Oracle SQL Loader?
Thanks in advance for any help on this.

**Problem Summary:** I'm looking for the most efficient way possible to export a single table from Postgres/Greenplum with a large number of records (100M+) so that it can be imported by Oracle SQL Loader.

**Research Background:** I know from research thus far that the pg_dump utility is more efficient than Postgres COPY, so I do NOT want to use the COPY command. Using pg_dump has many pluses; it can:

1. use multiple threads/cores
2. dump a single table of output
3. ...but to CSV?

**My Main Question:** The critical thing I can't figure out yet is how to get pg_dump to export to CSV or fixed-width plain-text output.

**A sidebar question:** I can't seem to find a detailed description (other than "the pg_dump --format=custom means the data is compressed") of what exactly the "custom" pg_dump format does to the data. The word "custom" implies that the output should follow a controllable schema, but I haven't been able to locate documentation of how this works.
zigmoo (9 rep)
Oct 12, 2022, 04:39 PM • Last activity: May 14, 2025, 01:05 PM
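One premise worth hedging: pg_dump has no CSV or fixed-width output mode at all; its plain format emits COPY or INSERT statements, and the custom format is an internal compressed archive meant for pg_restore rather than a user-controllable layout. A sketch of producing CSV for SQL*Loader instead (table, database, and file names are placeholders; \copy writes the file on the client side):

```
psql -d mydb -c "\copy (SELECT * FROM big_table) TO 'big_table.csv' WITH (FORMAT csv, HEADER)"
```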
1 vote
2 answers
828 views
Why isn't pg_restore --create working?
I backed up a database called app_data, on a 9.3 server, with this basic command:

    pg_dump -n public -F custom app_data > app_data.pg.dump

Then I try to restore it on another server (running 9.4) like this:

    pg_restore -C -d postgres app_data.pg.dump

But it puts all the tables in the postgres database. The man page says it will create and use a new database, app_data.

> -C --create
> Create the database before restoring into it. [...]
>
> When this option is used, the database named with -d is used only to issue the initial DROP DATABASE and CREATE DATABASE commands. All data is restored into the database name that appears in the archive.

That's not what it's doing. The name in the archive is app_data:

    bash-4.2$ pg_restore -l app_data.pg.dump
    ;
    ; Archive created at Tue Dec 15 04:16:52 2015
    ;     dbname: app_data
    ...

Am I doing something wrong?
Rob N (111 rep)
Dec 15, 2015, 04:47 AM • Last activity: May 8, 2025, 09:02 PM
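When -C misbehaves, a common fallback is to skip it entirely: create the target database yourself and restore straight into it. A sketch (connection options omitted; add -h/-U as in the question):

```
createdb app_data
pg_restore -d app_data app_data.pg.dump
```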
0 votes
1 answer
327 views
Does daily pg_dump mess up postgres cache?
I migrated my geospatial Postgres 12.5 database to another cloud provider. I use PostGIS and I have around 35 GB of data and 8 GB of memory. Performance is much worse than with my previous provider, and the new provider claims this is because the pg cache has to be "warmed up" every day after the automatic pg_dump backup operations that run during the night. Geospatial queries that would normally take 50 ms sometimes take 5-10 s on the first request, and some that would run in 800 ms take minutes. Is there something else looming, or is the technical support right? If so, should I disable daily backups? Or can I somehow use a utility function to restore the cache, such as pg_prewarm?
Pak (101 rep)
Feb 25, 2021, 10:04 AM • Last activity: Apr 27, 2025, 10:04 PM
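Since the question already names it: pg_prewarm is a contrib extension shipped with PostgreSQL, and explicitly loading the hot tables and indexes after the nightly backup window is a common middle ground, sketched here with placeholder relation and database names.

```
psql -d mydb -c "CREATE EXTENSION IF NOT EXISTS pg_prewarm;"
psql -d mydb -c "SELECT pg_prewarm('my_geometry_table');"
psql -d mydb -c "SELECT pg_prewarm('my_geometry_table_geom_idx');"
```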
0 votes
1 answer
3978 views
Trying to restore data from new Postgres version to old Postgres version
I took a backup of a Postgres 15 database, and now I want to restore that data to Postgres version 13.1.2. Is that possible?
datascinalyst (105 rep)
Sep 18, 2023, 04:02 PM • Last activity: Apr 24, 2025, 06:03 PM
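Restoring a dump into an older major version is not officially supported, but a sketch of the usual attempt (file and database names are placeholders): have the newer pg_restore emit plain SQL from the archive, feed it to the version 13 server, and fix whatever statements it rejects.

```
pg_restore -f backup.sql backup.dump   # run with the newer (v15) client tools
psql -d targetdb -f backup.sql         # run against the v13 server
```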
0 votes
1 answer
384 views
pg_dump -> import workflow using an "upsert" paradigm?
Say I have two physically separate databases and I wish to replace database B with data from A, except that I wish to keep the data in B which does not exist in A. So basically I'd go over each entry in A and use an insert-or-replace like:

    INSERT INTO table_name (name, value) VALUES ('hello', 'world')
    ON CONFLICT (name) DO NOTHING;

I.e. the following databases would be merged into the third:

A)

    name | value
    A    | 1
    B    | 2

B)

    name | value
    B    | 10
    C    | 20

MERGED)

    name | value
    A    | 1
    B    | 2
    C    | 20

If I use the -c option of pg_dump, it would just throw away the "C" value in the merged table.
paul23 (483 rep)
Mar 16, 2021, 02:56 PM • Last activity: Apr 24, 2025, 05:03 AM
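A sketch of one way to run the merge without pg_dump -c: export the table from A as CSV, load it into a staging table on B, and let ON CONFLICT keep B's extra rows (file and table names are placeholders, and "name" needs a unique constraint on B's table for the conflict clause to work).

```
psql -d db_b <<'SQL'
CREATE TEMP TABLE staging (LIKE table_name INCLUDING ALL);
\copy staging FROM 'table_name_from_a.csv' WITH (FORMAT csv)
INSERT INTO table_name (name, value)
SELECT name, value FROM staging
ON CONFLICT (name) DO NOTHING;
SQL
```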
6 votes
1 answer
1500 views
Postgresql: How to avoid encoding issues when copying a schema from one server to another?
I'm using pg_dump and pg_restore to move a schema from one Postgresql 9.5 server to another. On the destination server:

    $ pg_dump -h source.example.com -n my_schema -v --no-owner -F c -f my_schema.dump
    perl: warning: Setting locale failed.
    perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LC_CTYPE = "UTF-8",
        LANG = (unset)
        are supported and installed on your system.
    perl: warning: Falling back to the standard locale ("C").
    ...
    pg_dump: saving encoding = UTF8

(dump completes with no other errors or warnings)

    $ pg_restore -h 127.0.0.1 -e -v --no-owner -d my_db my_schema.dump
    ...
    perl: warning: Falling back to the standard locale ("C").
    ...
    pg_restore: [archiver (db)] Error while PROCESSING TOC:
    pg_restore: [archiver (db)] Error from TOC entry 2211; 0 6549333 TABLE DATA mention chicken
    pg_restore: [archiver (db)] COPY failed for table "mention": ERROR: invalid byte sequence for encoding "UTF8": 0xcd 0x2e

Any idea on how to solve this issue? What I want is an exact binary copy of the data. There seems to be some encoding problem which makes me nervous, since what is restored may not be exactly the same as the dump, even if I don't get any errors.
David Tinker (471 rep)
Sep 6, 2016, 08:59 AM • Last activity: Mar 14, 2025, 11:07 AM
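A cheap first check, sketched here without claiming it is the cause in this case: compare the declared encodings on both servers, since "invalid byte sequence" at restore time means bytes are arriving that the target's UTF8 validation rejects ("sourcedb" is a placeholder name).

```
psql -h source.example.com -d sourcedb -c 'SHOW server_encoding;'
psql -h 127.0.0.1 -d my_db -c 'SHOW server_encoding;'
```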
0 votes
1 answer
5542 views
Error message from server: ERROR: invalid memory alloc request size
I am running into the issue below while taking a pg_dump (PG 14) of a table with a bytea column.

**pg_dump: error: Error message from server: ERROR: invalid memory alloc request size 1460154641**
**The command was: COPY TO stdout;**

The table "ABC" in question is just 60 MB large (total size) and it has a bytea column, but the error says it is not able to allocate a request size of about 1.3 GB. What are we missing here? Could you please help? Thanks.

Update: I was able to take a backup of the table using the command below without error:
COPY public.abc TO stdout WITH (FORMAT binary);
--Successful execution
But the command below fails:
COPY public.abc TO stdout ;
ERROR:  invalid memory alloc request size 1480703501
Even a plain SELECT returns the same error:
select * from ABC
ERROR: invalid memory alloc request size 1480703501
How did it even allow a bytea value of more than 1 GB to be inserted? The table is just 60 MB large, with just one row.
Sajith P Shetty (312 rep)
Aug 2, 2023, 09:24 AM • Last activity: Feb 13, 2025, 11:04 AM
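A sketch for sizing the stored value ("mydb" and the column name "payload" are placeholders): the text/hex output of a bytea is roughly twice its binary size, and a single output field cannot exceed about 1 GB, which is consistent with the binary-format COPY succeeding while text-format COPY and plain SELECT fail.

```
psql -d mydb -c "SELECT octet_length(payload) AS value_bytes, pg_column_size(payload) AS on_disk_bytes FROM public.abc;"
```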
0 votes
0 answers
1574 views
pg_restore: error: unsupported version (1.16) in file header
baran@heaven:~$ pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
16  main    5432 online postgres /var/lib/postgresql/16/main /var/log/postgresql/postgresql-16-main.log
baran@heaven:~$ pg_dump --version
pg_dump (PostgreSQL) 16.6 (Ubuntu 16.6-0ubuntu0.24.04.1)
baran@heaven:~$ pg_dump --file "/home/baran/aa/aa.backup" --host "localhost" --port "5432" --username "postgres"  --format=c --large-objects "zkbiov2"
Password: 
baran@heaven:~$ file /home/baran/aa/aa.backup 
/home/baran/aa/aa.backup: PostgreSQL custom database dump - v1.15-0
I don't have PG 15 installed on my system, but when I try to back up or restore a file from the PG 16.6 server it gives me the error:

    pg_restore: error: unsupported version (1.16) in file header

I want to use Postgres 16.6 only.
Baran (133 rep)
Feb 6, 2025, 12:04 PM
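This error normally means the pg_restore binary reading the file is older than the pg_dump that wrote it, so a first check is which binaries are actually first on the PATH, calling the version-16 ones by full path if needed (the Debian/Ubuntu install path below is an assumption):

```
which pg_restore && pg_restore --version
/usr/lib/postgresql/16/bin/pg_restore --list /home/baran/aa/aa.backup
```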
2 votes
1 answer
2047 views
PG_DUMP on Replica DB server Error
I have a master-slave configuration on version 9.4, but there is no WAL streaming replication configured. The customer simply copies xlog files to a network share and the replica applies them. I need to run pg_dump on the replica, but I run into an error. What I do:

1. SELECT pg_xlog_replay_pause()
2. Run pg_dump on the replica server
3. SELECT pg_xlog_replay_resume()

Immediately after pg_xlog_replay_pause() I receive

    ERROR: recovery is not in progress

and when the pg_dump command starts I see:

    pg_dump: [archiver (db)] query failed: ERROR: cannot assign TransactionIds during recovery
    pg_dump: [archiver (db)] query was: SELECT pg_export_snapshot()

Questions:

1. Does it mean I can't run pg_dump from the replica? I have a feeling that with WAL streaming replication it would be possible. Correct me if I'm wrong.
2. I can't find any information on whether SELECT pg_xlog_replay_pause() / SELECT pg_xlog_replay_resume() work only with a WAL streaming setup. Can someone tell me if this is true?

Thanks in advance. The customer's setup: they copy xlog files to a network share, and the replica's recovery.conf consists of

    standby_mode = 'on'
    restore_command = 'if exist A:\\Logs\\From_Master_DB\\%f (copy A:\\Logs\\From_master_DB\\%f %p) else (exit /b 1)'
    archive_cleanup_command = '"C:\\Program Files\\PostgreSQL\\9.4\\bin\\pg_archivecleanup" A:\\Logs\\From_Master_DB %r && "C:\\Program Files\\PostgreSQL\\9.4\\bin\\pg_archivecleanup" D:\\PG-SQL\\data\\pg_xlog %r'
    recovery_target_timeline = 'latest'
ntdrv (21 rep)
Jul 26, 2018, 10:11 AM • Last activity: Jan 19, 2025, 08:04 PM
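"ERROR: recovery is not in progress" is what pg_xlog_replay_pause() raises on a server that is not in recovery, i.e. a primary, so a quick sanity check is to confirm which server the session is actually connected to (the host name is a placeholder):

```
psql -h replica-host -p 5432 -c 'SELECT pg_is_in_recovery();'
```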
5 votes
3 answers
6186 views
How to anonymize pg_dump output before it leaves server?
For development purposes, we dump the production database to local. It's fine because the DB is small enough. The company's growing and we want to reduce risks. To that end, we'd like to anonymize the data before it leaves the database server. One solution we thought of would be to run statements prior to pg_dump, but within the same transaction, something like this:

    BEGIN;
    UPDATE users
    SET email = 'dev+' || id || '@example.com'
      , password_hash = '/* hash of "password" */'
      , ...;
    -- launch pg_dump as usual, ensuring a ROLLBACK at the end
    -- pg_dump must run with the *same* connection, obviously
    -- if not already done by pg_dump
    ROLLBACK;

Is there a ready-made solution for this? Our DB is hosted on Heroku, and we don't have 100% flexibility in how we dump. I searched for [postgresql anonymize data dump before download](https://duckduckgo.com/?q=postgresql+anonymize+data+dump+before+download) and variations, but I didn't see anything highly relevant.
François Beausoleil (1473 rep)
Mar 23, 2017, 07:06 PM • Last activity: Jan 18, 2025, 01:55 AM
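One possible shape, sketched with the users table from the question and not tailored to Heroku's managed backup tooling: dump everything except the sensitive table's data, and export that table separately through an anonymizing query, so the raw values never leave the server inside the dump ("mydb" and the file names are placeholders).

```
pg_dump -d mydb -Fc --exclude-table-data=users -f mydb.dump
psql -d mydb -c "\copy (SELECT id, 'dev+' || id || '@example.com' AS email FROM users) TO 'users_anon.csv' WITH (FORMAT csv, HEADER)"
```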
0 votes
1 answer
1018 views
Backing up a DB on remote server
I need to back up a Postgres DB which is located on a remote server. This server also hosts a lot of other stuff, and that's why I don't have general access to the server itself. Therefore I believe that I cannot use ssh to access the remote server and run pg_dump from there. (Please correct me if I am wrong regarding ssh.) The only thing I can do is write and read the DB via a DB connection (which is allowed through the firewall). What is the best approach to backing up such a DB? I tried …, but I don't think it'll work under such conditions.
Sanyifejű (101 rep)
Jan 24, 2022, 04:39 PM • Last activity: Jan 9, 2025, 03:10 AM
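pg_dump is an ordinary client program: it only needs the database connection that is already allowed through the firewall, so it can run on a local machine against the remote host. A sketch, with placeholder host, user, and database names:

```
pg_dump -h remote.example.com -p 5432 -U myuser -Fc -d mydb -f mydb.dump
```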
0 votes
1 answer
4529 views
Are there limitations on using pg_dump on a materialized view such that the data won't be included in the results?
I have a database, call it **maps**, which contains 2 schemas, **a** & **b**. In each schema I have a number of tables: **a.t1**, **a.t2**, **b.t1**, **b.t2** (plus others). The column sets on each of these tables are different, but there are a number of columns in common. I have defined a materialised view, **a.mv**, which brings in the common columns of the 4 tables listed, including a geometry column (which represents a geographic outline). I want to back up the current contents of the view so that I can restore it on another server. To do this, I use this pg_dump command:

    pg_dump -h hostname -U username -t a.mv -f mv.sql maps

What I get as a result is the SQL to define the table, but no data. The view definitely has data in it, because I can select from it in pgAdmin (it was created multiple days ago and the underlying tables haven't been changed since). I can dump the underlying tables, including data, with (e.g.)

    pg_dump -h hostname -U username -t a.t1 -f t1.sql maps

but not the view. From the limited matches I've found by googling this, what I'm trying should work, but I'm wondering if the presence of a geometry column in the dump is causing the issue (or this might be a complete red herring). FWIW, the total data in the view is fairly substantial, probably around 1 GB. However, I've dumped all the underlying tables in schema **a** successfully (including the 2 tables referenced by the view, and others), which was larger (1.5 GB). Any ideas what the issue could be here? On the face of it, what I'm trying to do should work, but it just doesn't, with no indicated errors.
khafka (79 rep)
Feb 6, 2020, 11:36 AM • Last activity: Jan 8, 2025, 01:03 PM
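pg_dump intentionally does not dump the rows of a materialized view; it dumps the definition and relies on a REFRESH MATERIALIZED VIEW step at restore time, which is why the schema arrives without data. A sketch of exporting the current contents as plain data instead (the output file name is a placeholder):

```
psql -h hostname -U username -d maps -c "\copy (SELECT * FROM a.mv) TO 'mv_data.csv' WITH (FORMAT csv, HEADER)"
```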