
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

5 votes
1 answer
6421 views
Postgres Streaming Replication: How to re-sync data in master (with standby) after failover
I am absolutely new to Postgres 9.1 replication mechanisms. I have managed to set up streaming replication between a master and a standby (both have x records). At some point the master server fails and, via the trigger-file mechanism, the standby takes over and accumulates additional data (say it now has x+y records). When the original master comes back up, it still has x records (the standby is now the new master). Is there a way to fetch only the delta, i.e. the newly added y records, from the standby and restart as master? Or do I always have to take an entire base backup?
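For context: 9.1 predates pg_rewind, so the usual answers are a fresh base backup or an rsync of the data directory, which copies only changed files but trusts file sizes and timestamps. A sketch under those assumptions (hostnames and paths are hypothetical, and the old master must be stopped first):

# On the failed master, with PostgreSQL stopped (host and paths hypothetical):
rsync -av --delete newmaster:/var/lib/pgsql/9.1/data/ /var/lib/pgsql/9.1/data/
# Then add a recovery.conf pointing at the new master and start as a standby.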
Nandini (51 rep)
May 5, 2015, 08:30 AM • Last activity: Apr 17, 2025, 03:03 AM
15 votes
4 answers
23031 views
Maximum number of databases for single instance of PostgreSQL 9
Developing a multi-customer application, we plan to use a different database for each customer. But there could be more than 1000 customers (applications). Will PostgreSQL handle that without any problems? Has anybody tried something similar? Note: 35 tables for each customer, with up to 3000 records on average, for each database.
Juanin (251 rep)
Sep 9, 2012, 07:00 PM • Last activity: Mar 21, 2025, 07:26 PM
28 votes
3 answers
43288 views
How can I create readonly user for backups in PostgreSQL?
Is it true that it is IMPOSSIBLE to create a readonly backup user in PostgreSQL? I've been advised on an IRC channel that you simply can't have a backup-only user with no ownership privileges. I find that very strange, so I want to make sure I'm not missing something. Below is what I tried, but it doesn't give me the results I'm looking for. When I run pg_dump on a given table I get "Permission denied for relation ...":

GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO backup;
GRANT SELECT, USAGE ON ALL SEQUENCES IN SCHEMA public TO backup;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT, USAGE ON SEQUENCES TO backup;

Any help would be greatly appreciated!
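For reference, a sketch of the grants such a role usually needs (role, database, and owner names are placeholders). Note that ALTER DEFAULT PRIVILEGES only affects objects later created by the role that ran it, so tables owned by other roles need either explicit grants or a FOR ROLE clause:

CREATE ROLE backup LOGIN PASSWORD 'change-me';   -- placeholder password
GRANT CONNECT ON DATABASE mydb TO backup;        -- placeholder database name
GRANT USAGE ON SCHEMA public TO backup;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO backup;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA public TO backup;
ALTER DEFAULT PRIVILEGES FOR ROLE table_owner IN SCHEMA public
    GRANT SELECT ON TABLES TO backup;            -- table_owner is hypothetical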
milosgajdos (429 rep)
Nov 2, 2013, 01:05 AM • Last activity: Dec 19, 2024, 01:38 PM
6 votes
2 answers
2287 views
Idiom to treat empty result set as zero
This is obviously a SSCCE. I have an inventory table that holds the number of items (numOfItems) in a warehouse at any given date (inventoryDate). Both are integers to keep things simple:

CREATE TABLE inventory(inventoryDate INTEGER, numOfItems INTEGER);
ALTER TABLE inventory ADD PRIMARY KEY (inventoryDate);

Now I have some entries:

INSERT INTO inventory(inventoryDate, numOfItems) VALUES (1, 250), (2, 275);

What the above says is that at time 1 there were 250 items in the warehouse and at time 2 there were 275. Now I want to test whether, at the latest date, the number of items in the warehouse exceeded 1000:

SELECT 1000 < (SELECT numOfItems FROM inventory ORDER BY inventoryDate DESC LIMIT 1);

The above seems to work. However, in the edge case where there are no entries at all in the inventory table it doesn't:

DELETE FROM inventory;
SELECT 1000 < (SELECT numOfItems FROM inventory ORDER BY inventoryDate DESC LIMIT 1);

... returns:

 ?column?
 (null)

I would like to treat the edge case where there are no entries at all as denoting zero (0) items. I know I can always initialize the inventory table with a fake first entry of zero, but like I said this is an SSCCE (in my real case this table is really a view which I don't want to modify). So I end up writing the query like this:

WITH cte AS (SELECT numOfItems FROM inventory ORDER BY inventoryDate DESC LIMIT 1)
SELECT 1000 < (CASE WHEN NOT EXISTS (SELECT 1 FROM cte) THEN 0
               ELSE (SELECT numOfItems FROM cte) END);

The above does seem to work. But is there a more idiomatic way? Something like COALESCE, but for an empty result set instead of null?
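For what it's worth, the idiom exists: a scalar subquery that returns no rows evaluates to NULL, so plain COALESCE already covers the empty-table case (a sketch against the same inventory table):

SELECT 1000 < COALESCE(
    (SELECT numOfItems FROM inventory ORDER BY inventoryDate DESC LIMIT 1), 0);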
Marcus Junius Brutus (3409 rep)
Sep 12, 2014, 06:59 PM • Last activity: Nov 28, 2024, 01:28 AM
13 votes
1 answer
21899 views
Can I do VACUUM FULL to pg_largeobject table?
I have two tables (table1, table2) in a Postgres 9.1 database, both with a column of type oid and about 1 million records each, and the pg_largeobject table is around 40 GB. I removed 0.9 million records from each table and executed the commands below:

vacuum full analyze table1;
vacuum full analyze table2;

Still there is no change in the size of the pg_largeobject table (autovacuum is enabled). Do I need to run the same command on pg_largeobject too? Will it impact anything?
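A sketch of why the vacuums above change nothing (the column name oid_col is an assumption): deleting rows that merely reference large-object OIDs does not delete the large objects themselves, so pg_largeobject keeps its pages. The orphaned objects have to be unlinked first (the contrib utility vacuumlo automates this), and only then can a VACUUM FULL on pg_largeobject return space:

-- Unlink large objects no longer referenced by either table
-- (oid_col is a hypothetical column name):
SELECT lo_unlink(l.oid)
FROM pg_largeobject_metadata l
WHERE NOT EXISTS (SELECT 1 FROM table1 WHERE table1.oid_col = l.oid)
  AND NOT EXISTS (SELECT 1 FROM table2 WHERE table2.oid_col = l.oid);

-- Then reclaim the space (takes an exclusive lock on pg_largeobject):
VACUUM FULL pg_largeobject;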
RBB (815 rep)
Aug 29, 2015, 08:53 PM • Last activity: Aug 27, 2024, 09:40 PM
3 votes
2 answers
1970 views
SQL order by query results in any arbitrary (but reproducible) manner
For testing purposes I need to get some data from a set of database tables in an arbitrary (but reproducible) order, the idea being that I can later compare two runs using a textual diff tool. Is there an idiom for that? E.g. I can obviously do:

SELECT * FROM table_with_N_columns ORDER BY column_1, ..., column_N;

I am just asking whether there is an idiomatic way to achieve the same effect (for my purposes) without bothering to list every column in the ORDER BY clause. Any ordering will do so long as it is reproducible on subsequent runs of the query.
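One PostgreSQL idiom that fits (a sketch; it requires every column type to have a default sort ordering): order by the whole-row value, which compares all columns left to right:

SELECT * FROM table_with_N_columns t
ORDER BY t;   -- the bare alias refers to the whole row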
Marcus Junius Brutus (3409 rep)
Sep 6, 2014, 12:55 PM • Last activity: Jun 9, 2024, 11:31 PM
-1 votes
1 answer
332 views
To get rid of heavy archive logs in PostgreSQL 9.1
Our production server runs PostgreSQL 9.1 and generates very heavy archive logs, approximately 41 GB/day. Heavy updates, deletes and inserts are the cause. Can anyone help us reduce the amount of archive log generation on our production server?
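As a first step, a sketch for quantifying the rate (pg_current_xlog_location exists in 9.1; pg_xlog_location_diff only arrived in 9.2, so on 9.1 the two positions have to be compared by hand):

-- Snapshot the current WAL insert position, wait a known interval, repeat:
SELECT pg_current_xlog_location();
-- The byte difference between two snapshots is the WAL volume for that window.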
Arun Raut (55 rep)
Jan 22, 2018, 06:15 AM • Last activity: May 29, 2024, 02:06 PM
49 votes
2 answers
57384 views
Configuring PostgreSQL for read performance
Our system writes a lot of data (a kind of Big Data system). Write performance is good enough for our needs, but read performance is really too slow.

The primary key (constraint) structure is similar for all our tables: timestamp (Timestamp); index (smallint); key (integer). A table can have millions of rows, even billions of rows, and a read request is usually for a specific period (timestamp / index) and tag. It's common to have a query that returns around 200k rows. Currently we can read about 15k rows per second, but we need to be 10 times faster. Is this possible, and if so, how?

**Note:** PostgreSQL is packaged with our software, so the hardware is different from one client to another. It is a VM used for testing. The VM's host is Windows Server 2008 R2 x64 with 24.0 GB of RAM.

##Server Spec (Virtual Machine VMWare)

Server 2008 R2 x64
2.00 GB of memory
Intel Xeon W3520 @ 2.67GHz (2 cores)

###postgresql.conf optimisations

shared_buffers = 512MB (default: 32MB)
effective_cache_size = 1024MB (default: 128MB)
checkpoint_segments = 32 (default: 3)
checkpoint_completion_target = 0.9 (default: 0.5)
default_statistics_target = 1000 (default: 100)
work_mem = 100MB (default: 1MB)
maintenance_work_mem = 256MB (default: 16MB)

###Table Definition

CREATE TABLE "AnalogTransition"
(
  "KeyTag" integer NOT NULL,
  "Timestamp" timestamp with time zone NOT NULL,
  "TimestampQuality" smallint,
  "TimestampIndex" smallint NOT NULL,
  "Value" numeric,
  "Quality" boolean,
  "QualityFlags" smallint,
  "UpdateTimestamp" timestamp without time zone, -- (UTC)
  CONSTRAINT "PK_AnalogTransition" PRIMARY KEY ("Timestamp", "TimestampIndex", "KeyTag"),
  CONSTRAINT "FK_AnalogTransition_Tag" FOREIGN KEY ("KeyTag")
      REFERENCES "Tag" ("Key") MATCH SIMPLE
      ON UPDATE NO ACTION ON DELETE NO ACTION
)
WITH (
  OIDS=FALSE,
  autovacuum_enabled=true
);

###Query

The query takes about 30 seconds to execute in pgAdmin3, but we would like to have the same result under 5 seconds if possible.

SELECT
    "AnalogTransition"."KeyTag",
    "AnalogTransition"."Timestamp" AT TIME ZONE 'UTC',
    "AnalogTransition"."TimestampQuality",
    "AnalogTransition"."TimestampIndex",
    "AnalogTransition"."Value",
    "AnalogTransition"."Quality",
    "AnalogTransition"."QualityFlags",
    "AnalogTransition"."UpdateTimestamp"
FROM "AnalogTransition"
WHERE "AnalogTransition"."Timestamp" >= '2013-05-16 00:00:00.000'
  AND "AnalogTransition"."Timestamp" <= ...

Explain output:

"Index Scan Backward using "PK_AnalogTransition" on "AnalogTransition" (cost=0.00..389244.53 rows=1548698 width=33) (actual time=2.187..1893.283 rows=500000 loops=1)"
"  Index Cond: (("Timestamp" >= '2013-05-16 01:00:00-04'::timestamp with time zone) AND ("Timestamp" <= ...))"

"Index Scan using "PK_AnalogTransition" on "AnalogTransition" (cost=0.00..971400.46 rows=774511 width=35) (actual time=0.037..410088.960 rows=250001 loops=1)"
"  Index Cond: (("Timestamp" >= '2013-05-22 20:00:00-04'::timestamp with time zone) AND ("Timestamp" <= '2013-05-24 20:00:00-04'::timestamp with time zone) AND ("KeyTag" = 16))"
"Total runtime: 411044.175 ms"
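One direction suggested by the key's shape alone (a sketch, not a conclusion from the plans): the primary key leads with "Timestamp", so a single-tag query must crawl the entire time range; an index that leads with the tag keeps a tag-plus-period scan contiguous:

-- Sketch: lead with the tag, then time, for tag + period lookups.
CREATE INDEX "IX_AnalogTransition_KeyTag_Timestamp"
    ON "AnalogTransition" ("KeyTag", "Timestamp" DESC);

Whether this beats the existing PK scan would have to be verified with EXPLAIN ANALYZE on real data.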
JPelletier (593 rep)
May 15, 2013, 06:32 PM • Last activity: Apr 22, 2024, 05:21 PM
77 votes
7 answers
217352 views
How to list all views in SQL in PostgreSQL?
How do I list all views for a database using an SQL command in PostgreSQL? I would like something similar to the output of the psql \dv command, but preferably just a list of view names, e.g.,
SELECT ...;
my_view_1
my_view_2
my_view_3
I'm running PostgreSQL v9.1.4 on Ubuntu Linux.
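A sketch that should match the \dv output (system schemas filtered out):

SELECT table_name
FROM information_schema.views
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_name;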
Rob Bednark (2273 rep)
Sep 6, 2012, 06:14 PM • Last activity: Jan 20, 2024, 01:56 PM
5 votes
3 answers
17961 views
copy command in postgresql to append data
I am executing the copy command from a function:

execute 'copy (select * from tableName) to ''/tmp/result.txt''';

It works fine. But tableName is dynamic and there will be more tables (iterating in a loop), and result.txt ends up holding only the data of the last iteration (table). From the command line I could append to a file with \o or >>, but I have to use the function. I read http://shuber.io/reading-from-the-filesystem-with-postgres/ ; it suggests a solution using temporary tables. Can I do something similar, but without temp tables? (I must avoid archive logs. Will creating a temp table create archive logs?) Is there any way to append with the copy command in a function?
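COPY ... TO always overwrites its target file, so one workaround sketch (the table list and file paths here are assumptions) is to write each iteration to its own file and concatenate afterwards:

DO $$
DECLARE
    t text;
BEGIN
    -- Hypothetical table list; in the real function this is the loop's source.
    FOREACH t IN ARRAY ARRAY['table_a', 'table_b'] LOOP
        EXECUTE format('COPY (SELECT * FROM %I) TO %L',
                       t, '/tmp/result_' || t || '.txt');
    END LOOP;
END
$$;

The per-table files can then be joined outside the database, e.g. cat /tmp/result_*.txt > /tmp/result.txt.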
RBB (815 rep)
Sep 15, 2016, 05:18 AM • Last activity: Oct 17, 2023, 11:18 AM
450 votes
5 answers
883617 views
How do I list all schemas in PostgreSQL?
When using PostgreSQL v9.1, how do I list all of the schemas using SQL? I was expecting something along the lines of:

SELECT something FROM pg_blah;
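Two options close to that shape, both available in 9.1:

-- The PostgreSQL catalog:
SELECT nspname FROM pg_catalog.pg_namespace ORDER BY nspname;

-- Or the SQL-standard information schema:
SELECT schema_name FROM information_schema.schemata ORDER BY schema_name;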
Stéphane (5755 rep)
Apr 15, 2013, 07:19 PM • Last activity: Sep 24, 2023, 01:46 AM
0 votes
1 answer
1610 views
Cannot restore data using Postgres WAL files
I'm a newbie at PostgreSQL. I'm trying to set up streaming replication (PostgreSQL 9.1) and I want to test backup & restore locally. I made backup files successfully (I think so) by using pg_basebackup, but I cannot restore the data. Here is how I set my database up.

- Postgres on server1 (master), configuration file /var/lib/pgsql/9.1/data/postgresql.conf:

# WRITE AHEAD LOG
wal_level = hot_standby
# - Archiving -
archive_mode = on
archive_command = 'cp %p /var/lib/pgsql/9.1/wal-archive/%f'
# - Master Server -
max_wal_senders = 5
wal_keep_segments = 32
# - Standby Servers -
hot_standby = on

- Start the master and insert some data.

- Postgres on server2 (slave), configuration file /var/lib/pgsql/9.1/data/postgresql.conf (content is the same as the master's).

- Create a recovery.conf file on the slave:

// vi /var/lib/pgsql/9.1/data/recovery.conf
standby_mode = 'on'
primary_conninfo = 'host={master_hostname} port=5432 user=repl_user password={my_password}'

- Start the slave => data is automatically synced with the master.

- Create a WAL backup:

// slave
service postgresql-9.1 stop
rm -rf /var/lib/pgsql/9.1/data/*
su - postgres
pg_basebackup -h {master_hostname} -p 5432 -U repl_user -D /var/lib/pgsql/9.1/data --xlog --checkpoint=spread --progress

- WAL files are generated in /var/lib/pgsql/9.1/wal-archive on the master after pg_basebackup is executed on the slave:

// something like this
000000070000000000000013
000000070000000000000014
000000070000000000000015
000000070000000000000015.00000020.backup

- Drop one table (master):

DROP TABLE {table_name}

- Restore that dropped table using WAL files (master):

service postgresql-9.1 stop
vi /var/lib/pgsql/9.1/data/recovery.conf
// add this line to recovery.conf
restore_command = 'cp /var/lib/pgsql/9.1/wal-archive/%f %p'

- Start the master again:

service postgresql-9.1 start

But the dropped table is not restored on the master, and I don't know why. I also tried adding some other options, but nothing worked:

recovery_target_time = '2020-10-29 08:20:00 UTC'
recovery_target_inclusive = false

Please help on how to restore data using WAL files. If I accidentally drop a table on the master, how can I restore it using WAL files? Which configuration should I set up in the recovery.conf file? Is just writing restore_command in recovery.conf and starting Postgres again enough to make the server restore the latest backed-up data?
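A sketch of the recovery shape this is missing (the target time and paths are assumptions): WAL replay on top of the current data directory re-applies the DROP TABLE, so point-in-time recovery means restoring the base backup into the data directory first, then stopping replay just before the drop:

# recovery.conf placed in the restored base backup's data directory
restore_command = 'cp /var/lib/pgsql/9.1/wal-archive/%f %p'
recovery_target_time = '2020-10-29 08:00:00 UTC'   # hypothetical: a moment before the DROP
recovery_target_inclusive = false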
VitDuck (1 rep)
Nov 2, 2020, 10:27 AM • Last activity: Sep 18, 2023, 07:48 PM
4 votes
2 answers
14332 views
FATAL: catalog is missing 1 attribute(s) for relid 2662
I have a server with PostgreSQL 9.1-903 (Windows 7 Professional). After a power cut my database is inaccessible (I presume the database is corrupted). The PostgreSQL service starts, but the following message is displayed when I try to connect via pgAdmin:

> FATAL: catalog is missing 1 attribute(s) for relid 2662

I have tried the 'ignore_system_indexes' option, but the database is still inaccessible. I guess the problem is not with the indexes, but with the fact that some rows of the pg_attribute table have disappeared. I am interested in recovering the data because my last backup is a bit old. Is there any way to just connect to the database? After that I would use pg_dump to retrieve the data and load it into a new database.

**EDIT** The parameter fsync is commented out in postgresql.conf (I assume the default value is on). Below is the tuned part of postgresql.conf:

checkpoint_segments = 32    # in logfile segments, min 1, 16MB each
shared_buffers = 1GB        # min 128kB
work_mem = 8MB
maintenance_work_mem = 256MB
effective_cache_size = 1024MB
default_statistics_target = 200
autovacuum_vacuum_cost_delay = 50ms
Oscar (41 rep)
Nov 21, 2013, 12:47 AM • Last activity: Aug 22, 2023, 11:01 AM
4 votes
3 answers
687 views
How can I optimize a SELECT DISTINCT on a subselect?
I've got a very slow query, running over 30 seconds:

SELECT DISTINCT id10
FROM (SELECT j.id AS id10, j.modified, j.n_type AS n_type5
      FROM note j
      WHERE j.modified_date >= '2016-10-01 23:12:34.000000'
        AND j.clientid = 16049
        AND j.n_type IN ('n', 'n_1', 'n_custom', 'n_standard', 'n_status')
      ORDER BY j.id ASC) t2
ORDER BY id10 ASC
LIMIT 20;

Explain analyze: https://explain.depesz.com/s/DU4

Is there a way I can improve this query? I created a new index on the modified_date and n_type columns:

CREATE INDEX ix_n_type_modified ON notes (n_type, timezone('Etc/UTC'::text, modified_date))
WHERE n_type IN ('n_1','n_custom','n_standard','n_status');

New explain analyze: https://explain.depesz.com/s/RsTr

The query still takes >5 seconds. New explain analyze using the new query provided by Evan Carroll: https://explain.depesz.com/s/yP4S

Query:

SELECT id AS id10
FROM note j
WHERE j.modified_date >= '2015-12-07 23:12:34.000000'
  AND j.clientid = 16049
  AND j.n_type IN ('n', 'n_1', 'n_custom', 'n_standard', 'n_status')
ORDER BY id10 ASC
LIMIT 20;

The query is taking even longer now. note is a view with a join on the jobs table (note.jobid -> jobs.id); I can't do any other join between those tables, as there are no other "linked" columns between them. \d+ note:
Table "public.note"
Column          |           Type           |                          Modifiers                          | Storage  | Stats target |                                                                       Description                                                  
----------------+--------------------------+-------------------------------------------------------------+----------+--------------+----------------------------------------------------------------------------------------------------------------------------------------------------------
id              | bigint                   | not null default "nextval"('"note_id_seq"'::"regclass")     | plain    |              | Primary key
jobid           | bigint                   | not null default 0                                          | plain    |
description     | "text"                   | default ''::"text"                                          | extended |
modified_date   | timestamp with time zone | default "statement_timestamp"()                             | plain    |              | Modified date
n_type          | "n_type"                 |                                                             | plain    |
    
Indexes:
"note_pkey" PRIMARY KEY, "btree" ("id")
"ix_note_gsdi_pk" "btree" (("id"::"text"))
"ix_job_fk" "btree" ("jobid")
"ix_job_n_type" "btree" ("n_type")
"ix_note_jobid_type" "btree" ("jobid", "n_type")
"ix_note_jobid_type_2" "btree" ("jobid", "n_type", "timezone"('Etc/UTC'::"text", "modified_date"))
View note on schema note_user (this view is in a different schema than the table note):

SELECT r_30.id,
       r_30.jobid,
       r_30.description,
       timezone('Etc/UTC'::text, r_30.modified_date),
       cj.clientid
FROM public.note r_30
JOIN public.jobs cj ON cj.id = r_30.jobid;

Table note:

CREATE TABLE public.note
(
  id bigint NOT NULL DEFAULT nextval('note_id_seq'::regclass), -- Primary key
  jobid bigint NOT NULL DEFAULT 0,
  description text DEFAULT ''::text,
  n_type n_type,
  modified_date timestamp with time zone DEFAULT statement_timestamp(),
  CONSTRAINT note_pkey PRIMARY KEY (id)
);

Table jobs:

CREATE TABLE public.jobs
(
  id bigint NOT NULL DEFAULT nextval('jobs_id_seq'::regclass),
  clientid bigint NOT NULL DEFAULT 0,
  description text NOT NULL DEFAULT ''::text,
  modified_date timestamp without time zone DEFAULT statement_timestamp(),
  CONSTRAINT jobs_pkey PRIMARY KEY (id)
);

Custom n_type data type (\dT+ n_type):

 Schema |  Name  | Internal name | Size |  Elements
--------+--------+---------------+------+------------
 public | n_type | n_type        | 4    | n_1       +
        |        |               |      | n_custom  +
        |        |               |      | n_standard+
        |        |               |      | n_status  +
(1 row)
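One direction worth testing (a sketch derived from the view's join and the query's filters, not from a measured plan): give each base table an index on its own filter and join columns, so the predicates can be applied before the join:

-- jobs is filtered by clientid and joined on id:
CREATE INDEX ix_jobs_clientid ON public.jobs (clientid);

-- note is joined on jobid and filtered by modified_date:
CREATE INDEX ix_note_jobid_modified ON public.note (jobid, modified_date);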
Patrick B. (311 rep)
Jan 15, 2017, 10:23 PM • Last activity: Jul 26, 2023, 06:03 PM
6 votes
1 answer
2873 views
Monitoring the Postgres query parser
As I understand it, when you send a query to Postgres like:

SELECT id FROM table1 WHERE name = $1;

Postgres will create a query plan, and the plan will be cached for the same query in the same session. But if you create a function:

CREATE OR REPLACE FUNCTION get_table1 (parname text) RETURNS bigint
LANGUAGE plpgsql AS
$$BEGIN
    RETURN (SELECT id FROM table1 WHERE name = parname);
END$$;

the query plan will be cached for all future sessions. I'd like to verify this theory by checking how often the query parser is invoked. Is there a way to inspect the number of times Postgres parses a query? If possible, I'd like to monitor things like parses per second and the min/max/avg parse duration.
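One knob that should serve for the verification part (a sketch; these are superuser-only settings that write timing details to the server log rather than exposing counters):

-- Log parser and planner timings for subsequent statements in this session:
SET log_parser_stats = on;
SET log_planner_stats = on;
-- Run the query, then inspect the server log for the parser/planner
-- statistics blocks it emits per statement:
SELECT id FROM table1 WHERE name = 'some-name';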
Andomar (3515 rep)
Dec 19, 2014, 02:50 PM • Last activity: Jul 17, 2023, 10:35 PM
0 votes
0 answers
403 views
pg_restore fails with ERROR: could not create directory "pg_tblspc/16400/PG_11_201809051/16403": Invalid argument
Trying to migrate PG 9.1 data to PG 11.5, I ran initdb using the following command:

"\bin\initdb.exe" --locale="English_Malaysia.1252" -D "\pgupgrade" -U --pwfile

Command output:

The files belonging to this database system will be owned by user "Administrator".
This user must also own the server process.
The database cluster will be initialized with locale "English_Malaysia.1252".
The default database encoding has accordingly been set to "WIN1252".
The default text search configuration will be set to "english".

As part of pg_upgrade, the following is failing:

command: "/pg_restore" --port 50432 --username --create --exit-on-error --verbose --dbname template1 "pg_upgrade_dump_16388.custom" >> "pg_upgrade_dump_16388.log" 2>&1
pg_restore: connecting to database for restore
pg_restore: creating DATABASE "xyz"
pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC entry 4037; 1262 16388 DATABASE xyz
pg_restore: [archiver (db)] could not execute query: ERROR: could not create directory "pg_tblspc/16400/PG_11_201809051/16403": Invalid argument
Command was: CREATE DATABASE "xyz" WITH TEMPLATE = template0 ENCODING = 'UTF8' LC_COLLATE = 'English_Malaysia.1252' LC_CTYPE = 'English_Malaysia.1252' TABLESPACE = "xyz";

Can anyone tell me the possible cause of the failure? Is it an encoding issue? In the first case the DB was initialized with WIN1252, but during pg_restore it is running with UTF8. Also, can you tell me how I can troubleshoot this error further, as I don't understand what "Invalid argument" means here?
Neeraj Bansal (153 rep)
Jun 13, 2023, 03:28 PM
20 votes
2 answers
15495 views
PostgreSQL difference between VACUUM FULL and CLUSTER
I have a table with 200 GB of size occupied by data and 180 GB of size by the 6 indexes on it. It is 30% bloated, so I want to reclaim the unwanted space it occupies. It is clustered on the job_id_idx index. To reclaim the space, do I need to use the cluster command or the vacuum full command?

1. What is the difference between these two commands?
2. Is vacuum full ordered by some column, the same as the cluster command?
3. Are indexes recreated by both commands?
4. In my case, which one will be faster?

The version of the PostgreSQL database is 9.1.
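A minimal sketch of the two candidates side by side ('mytable' is a placeholder; both commands take an ACCESS EXCLUSIVE lock, rewrite the table, and rebuild its indexes):

-- Measure before and after (placeholder relation name):
SELECT pg_size_pretty(pg_total_relation_size('mytable'));

-- CLUSTER rewrites the table sorted by the given index:
CLUSTER mytable USING job_id_idx;

-- VACUUM FULL rewrites the table in its existing physical order:
VACUUM FULL mytable;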
Arun P (201 rep)
Jul 22, 2013, 01:07 PM • Last activity: May 2, 2023, 02:09 PM
12 votes
3 answers
12906 views
Store a formula in a table and use the formula in a function
I have a PostgreSQL 9.1 database, part of which handles agent commissions. Each agent has his/her own formula for calculating how much commission they get. I have a function that generates the amount of commission each agent should get, but it's becoming impossible to maintain as the number of agents grows: I am forced to write extremely long case statements and repeat code, which has made my function very big. All the formulas use constant variables:
d .. days worked that month
r .. new nodes accuired
l .. loyalty score
s .. subagent commission
b .. base rate
i .. revenue gained
The formula can be something like:

d*b + (l*4 + r) + (i/d) + s

Each agent negotiates the payment formula with the HR department. So can I store the formula in the agents table, then have a small function that fetches the formula from the table, substitutes the values, and computes the amount?
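A minimal sketch of that idea, assuming an agents table with a text column formula holding strings such as 'd*b+(l*4+r)+(i/d)+s'. The EXECUTE makes this dynamic SQL, so the formula column must only ever be writable by trusted staff:

CREATE OR REPLACE FUNCTION agent_commission(
    agent_id integer,
    d numeric, r numeric, l numeric, s numeric, b numeric, i numeric)
RETURNS numeric
LANGUAGE plpgsql AS
$func$
DECLARE
    f      text;
    amount numeric;
BEGIN
    -- agents(id, formula) is an assumed table layout.
    SELECT formula INTO f FROM agents WHERE id = agent_id;
    -- Expose the six inputs as columns named after the formula's variables,
    -- then let the stored expression reference them:
    EXECUTE 'SELECT ' || f || ' FROM (SELECT '
         || '$1::numeric AS d, $2::numeric AS r, $3::numeric AS l, '
         || '$4::numeric AS s, $5::numeric AS b, $6::numeric AS i) v'
    INTO amount
    USING d, r, l, s, b, i;
    RETURN amount;
END
$func$;

-- Usage (hypothetical values):
-- SELECT agent_commission(42, 20, 3, 5, 100, 8.5, 12000);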
indago (545 rep)
Jan 31, 2013, 10:30 AM • Last activity: Apr 1, 2023, 12:06 PM
34 votes
4 answers
67239 views
pg_restore: [archiver] did not find magic string in file header
I'm using PostgreSQL 9.1 and want to restore backup files generated with pg_dump:

sudo pg_dump -h 127.0.0.1 -U postgres --clean --inserts -E UTF8 -f out.sql database_name

This command generates a valid SQL file that starts by dropping any existing database objects, then creates all tables, indexes, sequences and so on, and finally inserts the data. When I try to restore the generated backup file with:

sudo pg_restore -d database_name -h 127.0.0.1 -U postgres --format=c --clean --create out.sql

it fails and prints:

pg_restore: [archiver] did not find magic string in file header

What is the reason for that?
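For reference, a sketch of the two matching pairs (connection flags copied from the question, file names assumed): pg_restore only understands archive formats (custom, tar, directory), while a plain-text dump like out.sql is replayed with psql:

# Plain-text dump (the pg_dump -f out.sql form above) is restored with psql:
psql -h 127.0.0.1 -U postgres -d database_name -f out.sql

# pg_restore needs an archive-format dump, e.g. custom format (-Fc):
pg_dump -h 127.0.0.1 -U postgres -Fc -f out.dump database_name
pg_restore -h 127.0.0.1 -U postgres --clean --create -d postgres out.dump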
maja (443 rep)
Aug 22, 2015, 10:06 AM • Last activity: Feb 28, 2023, 06:44 PM
3 votes
1 answer
1266 views
Use of integer instead of interval (of one type)
Our DB design presently has an interval column that will only ever store days (no other interval granularity), so it seems to make sense to use INT2 (smallint) instead of interval. Reference to documentation. Advantage: 2 bytes instead of 12 bytes (we have many such columns). Is this line of thinking OK, or am I overlooking something?
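A sketch of the round trip that makes the swap practical (table and column names are made up): an integer day count adds directly to dates and converts back to an interval on demand:

-- Hypothetical table using smallint days instead of an interval column:
CREATE TABLE subscription (started date, duration_days int2);
INSERT INTO subscription VALUES ('2014-03-01', 30);

-- date + integer arithmetic works directly:
SELECT started + duration_days AS expires FROM subscription;

-- Converting back to a genuine interval is cheap:
SELECT duration_days * interval '1 day' AS duration FROM subscription;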
vedic (180 rep)
Mar 8, 2014, 07:28 AM • Last activity: Feb 3, 2023, 05:17 PM