
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

5 votes
3 answers
2916 views
How can streaming replication be kept lagged by a certain 'delay' time?
Postgres has had [streaming replication](http://wiki.postgresql.org/wiki/Streaming_Replication) since 9.0, and since 9.1 it has been possible to pause and resume the application of the streamed WAL records using [pg_xlog_replay_pause() and pg_xlog_replay_resume()](http://www.postgresql.org/docs/9.3/static/functions-admin.html#FUNCTIONS-RECOVERY-CONTROL). As far as I can tell, there is no built-in way of keeping the application of WAL deliberately lagged by a specific time period (e.g. 24h). Have I missed a built-in way of achieving this, and if not, how should I approach doing it some other way?
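For reference, a minimal sketch of the built-in approach that later became available (this assumes PostgreSQL 9.4 or newer, where recovery_min_apply_delay was added to recovery.conf; on 9.3 the same effect has to be scripted around pg_xlog_replay_pause()/pg_xlog_replay_resume(), and the connection string below is a placeholder):

# recovery.conf on the delayed standby (9.4+)
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'
recovery_min_apply_delay = '24h'    # replay WAL no sooner than 24 hours after commit on the primary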
Jack Douglas (40537 rep)
Mar 10, 2014, 01:55 PM • Last activity: Jul 31, 2025, 06:11 AM
0 votes
1 answer
144 views
Trying to load a file into a database on a virtual machine
I have set up a Postgres database on a Linux VM and have had no issues using a GUI to connect to it. However, I am trying to load a large (32 GB) file onto it, so I am skeptical of the old way I was doing it, as it takes a lot of bandwidth. I set up a Dropbox folder to sync on the VM, which it has, and tried to COPY the file, but got this error message:

ERROR: could not open file "~/Dropbox/0ptimus-Jaspin/nation/VoterMapping--NH--03-17-2014-HEADERS.tab" for reading: No such file or directory

I used the following to try to do the copy:

COPY nh FROM '/Dropbox/0ptimus-Jaspin/VoterMapping--NH--03-17-2014-HEADERS.tab';

Thanks!
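COPY runs in the server process, which does not expand ~ and needs an absolute path readable by the postgres OS user; psql's \copy reads the file on the client instead. A hedged sketch of both (the home-directory prefix below is a guess at where Dropbox actually synced the file):

-- server-side COPY: absolute path, readable by the postgres system user
COPY nh FROM '/home/youruser/Dropbox/0ptimus-Jaspin/nation/VoterMapping--NH--03-17-2014-HEADERS.tab';

-- client-side alternative from psql, streaming the file over the connection
\copy nh FROM '/home/youruser/Dropbox/0ptimus-Jaspin/nation/VoterMapping--NH--03-17-2014-HEADERS.tab'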
Ron (1 rep)
Sep 20, 2014, 05:57 PM • Last activity: Jul 20, 2025, 03:09 AM
14 votes
3 answers
8142 views
How do I hide sensitive information like plaintext passwords from the logs?
I do not have access to a Postgres installation, so I cannot check. I am a security guy, and I'm seeing plaintext passwords in the logs:

create user user1 with password 'PLAINTEXT PASSWORD'

How can the DBAs change or create their passwords without the password appearing in the clear in the logs? I've seen this, which states you can use an md5 hash of the password, but then the hash is also in the clear. Is there a better way?
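As a hedged illustration of the common workaround: psql's \password command hashes the password on the client and sends only the hash, so the plaintext never reaches the server log (the md5 hash itself will still appear in the logged ALTER ROLE statement, but the plaintext will not):

-- run interactively in psql; prompts twice and issues ALTER ROLE ... PASSWORD 'md5<hash>'
\password user1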
schroeder (242 rep)
Mar 6, 2015, 08:55 PM • Last activity: Jul 17, 2025, 10:10 AM
1 vote
1 answer
156 views
PostgreSQL full-text search retrieval from id gap
I have a full-text search column on one of my tables in PostgreSQL 9.3 implemented based on this tutorial (without the South part).
There is a Postgres function where I do:

SELECT concat_ws(' ', a."my_column1") INTO c1;
SELECT concat_ws(' ', a."my_column2") INTO c2;
RETURN setweight(to_tsvector('pg_catalog.simple', c1), 'C') || setweight(to_tsvector('pg_catalog.english', c2), 'D');

This is being written into a column called *fts_document* in the same table. I have INSERT and UPDATE triggers on this table and a trigger function as in the tutorial. Everything worked fine at some point. After a dump from another database with an identical structure and a restore into an empty database, my table has a gap in the id field from 7306 to 21431. The main problem is that now the full-text search retrieves documents only from this missing id range, and their *my_column1* and *my_column2* values look perfectly valid. I have deleted the index on the *fts_document* column and rebuilt it, but with no result. Where does Postgres retrieve these weird ("non-existing") rows from? How can I fix the search so that it also retrieves the existing rows, and how can I get Postgres to show the missing documents?
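One debugging step that might help here, sketched with placeholder names (the real table, columns and index come from the poster's schema): recompute the stored tsvector for every existing row with the same expression the trigger uses, then rebuild the index on it.

UPDATE my_table   -- placeholder table name
SET fts_document = setweight(to_tsvector('pg_catalog.simple', coalesce(my_column1, '')), 'C')
                || setweight(to_tsvector('pg_catalog.english', coalesce(my_column2, '')), 'D');

REINDEX INDEX my_fts_index;   -- placeholder index name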
gevra (119 rep)
Aug 9, 2015, 01:20 AM • Last activity: Jul 11, 2025, 08:02 PM
0 votes
1 answer
180 views
Source of error on psql data load
Upon importing a file derived from a Rails setup, psql is hitting a foreign key constraint error:

ERROR: insert or update on table "documents" violates foreign key constraint "fk_rails_d4abdc7f58"
Key (typ_document_id)=(7) is not present in table "typ_documents".

Yet, querying afterwards with SELECT * from typ_documents; I get a result:

7 | Test Internal | 2013-07-04 08:36:16.026295 | 2013-07-04 08:36:16.026295

The only assumption I can come up with is that, when the table documents is being loaded, the load order is such that typ_documents comes after it, and the error arises. Yet this error does not occur consistently, and the data dump is in alphabetical order of table names, so this is a weak assumption. Removing the foreign key constraint from Rails would overcome the problem, but that is a weak reaction.

**Update**

Data from the pg_dump file (redacted to omit the long full text):

COPY documents (id, titolo, abstract, full_text, typ_document_id, idioma_id, competitor_id, created_at, updated_at) FROM stdin;
[...]
5 Conservazione Finocchio con lavaggio La prova è stata [...redacted...] 3 ppm\r\n15\t15\r\n0,75\r\n30 sec\t60 sec\t90 sec 7 1 \N 2013-07-08 10:49:53.393598 2013-07-11 16:07:31.540986

COPY typ_documents (id, nome, created_at, updated_at) FROM stdin;
[...]
7 Test Internal 2013-07-04 08:36:16.026295 2013-07-04 08:36:16.026295

How can this issue be debugged/overcome?
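If this really is a data-only import into a schema that already has the FK in place, one hedged workaround is to suspend trigger-based enforcement (including FK triggers) for the loading session; this needs superuser rights and assumes the data is internally consistent once both tables are loaded (the dump file name is a placeholder):

SET session_replication_role = replica;   -- FK triggers are not fired while this is set
\i data_dump.sql
SET session_replication_role = DEFAULT;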
Jerome (299 rep)
Feb 12, 2016, 11:19 AM • Last activity: Jul 1, 2025, 03:00 AM
0 votes
1 answer
542 views
KNN search two arrays with 2000 elements
**What I did:**

1. installed PostgreSQL 9.3.1
2. installed PostGIS 2.1.2
3. I tried to find the distance by selecting column input_variable_1 with value 50. I got the following result.

select input_variable_1 50 as distance from test_ata limit 10;

distance
48
2
17
29
44
37
45
17
49
2
(10 rows)

**What I want:**

*Input*: a column which stores array values, say 2000 elements in a single record. I came across the cube extension and PostGIS, where I can get the distance between two points by using tree and distance concepts for lower dimensions, up to 100.

1. How do I find the distance between two arrays having at least 2000 elements each in PostgreSQL?
2. Which tree is best for 2000-dimension indexing in PostgreSQL?
3. Finally, I want to know how to use KNN search in PostgreSQL.
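As a point of reference, a hedged sketch of how the cube extension expresses a distance query (the table and column names are made up); note that cube is built with a limit of 100 dimensions by default, so 2000-element arrays will not fit without raising that limit and rebuilding the extension:

CREATE EXTENSION IF NOT EXISTS cube;

-- vec is assumed to be a float8[] column
SELECT id, cube_distance(cube(vec), cube(ARRAY[1.0, 2.0, 3.0])) AS distance
FROM items
ORDER BY distance
LIMIT 10;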
harishankar (9 rep)
Mar 23, 2017, 02:15 PM • Last activity: May 23, 2025, 09:08 PM
0 votes
1 answer
295 views
Ubuntu PostgreSQL: two databases on two hard disks
I have two PostgreSQL databases on a single 1 TB hard disk. Each database is more than 300 GB in size. So now I face a disk space problem, and I recently bought another 1 TB hard disk. I need to move one database onto this new hard disk. How can I do that in Ubuntu? If I move one database to the other drive, can PostgreSQL work with both hard disks at the same time? Can I still access both databases in PostgreSQL from the main disk?
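A hedged sketch of the usual way to do this with a tablespace (the mount point and database name are placeholders; ALTER DATABASE ... SET TABLESPACE requires that nobody is connected to that database while it runs):

-- after mounting the new disk and chown-ing the directory to the postgres OS user:
CREATE TABLESPACE second_disk LOCATION '/mnt/disk2/pg_tblspc';
ALTER DATABASE mydb SET TABLESPACE second_disk;

Both databases stay inside the same PostgreSQL cluster afterwards, so they remain reachable through the same port and connections; the server simply reads one of them from the new disk.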
Nirmani Malee (1 rep)
Sep 22, 2016, 12:16 PM • Last activity: May 13, 2025, 10:06 AM
0 votes
1 answer
311 views
Postgres record locking issue - could asynchronous UPDATES be a solution?
Is it possible (and sensible) to issue asynchronous UPDATEs to a Postgres table?

Here's the issue. I have two processes, both of which access the same table.

Process “A” loads vast quantities of external records into a Postgres table; this involves a trigger that updates a summary table (the summary table is the one involved in the conflict). Process “A” is quite critical: by tweaking the number of records in each INSERT statement, and the number of INSERTs before issuing a COMMIT, I can insert about 4000 records in a single transaction that takes between 5 and 8 seconds.

Process “B” reads data from the database, does a lot of number crunching on the data and then UPDATEs the summary table. The problem is that although reading and processing the data can be achieved in milliseconds, updating the summary table typically takes 4 to 5 seconds, waiting for the process “A” transaction to release its locks on the summary table. This is very inefficient and dramatically slows down the overall process.

The following may (or may not) also be relevant.

- Process “A” updates only fields “A”, “B” and “C” in the summary table.
- Process “B” updates only fields “D”, “E” and “F” in the summary table.
- Both process “A” and “B” only update the summary table by explicitly identifying single rows via the unique primary key.
- No other processes read or update the summary table.
- Only the latest version (for any one record) of the values in “D”, “E” and “F” is relevant.

Bearing in mind the above, is it possible for process “B” to issue its UPDATEs to the summary table asynchronously (by this I mean issue the UPDATE, then carry on with the next iteration of the process without waiting for the response to the UPDATE)? If it is possible, how? And perhaps more importantly, is it sensible?

Thanks
ConanTheGerbil (1303 rep)
Jun 17, 2019, 06:13 PM • Last activity: May 8, 2025, 06:05 PM
32 votes
4 answers
68116 views
GRANT USAGE on all schemas in a database?
I want to [GRANT USAGE](http://www.postgresql.org/docs/9.4/static/sql-grant.html) to a user/role for a given database. The database has many schemas. I know there's an ON ALL TABLES IN SCHEMA, but I want "all schemas". I tried GRANT USAGE .. ON DATABASE, but that's obviously wrong (it doesn't actually *exist*). This is for Postgres 9.3 or 9.4; it happens to be a server on AWS RDS.
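A hedged sketch of the usual workaround, looping over the catalog and granting per schema (the role name is a placeholder; the caller must have sufficient privileges on each schema):

DO $$
DECLARE s name;
BEGIN
  FOR s IN SELECT nspname FROM pg_namespace
           WHERE nspname NOT LIKE 'pg\_%' AND nspname <> 'information_schema'
  LOOP
    EXECUTE format('GRANT USAGE ON SCHEMA %I TO some_role', s);
  END LOOP;
END $$;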
300D7309EF17 (431 rep)
Mar 21, 2015, 04:26 AM • Last activity: Mar 19, 2025, 06:55 PM
11 votes
4 answers
13627 views
How do I validate all my constraints in postgresql?
It seems like I have been bitten by the first bug described in the release notes of PostgreSQL 9.3.4: http://www.postgresql.org/docs/devel/static/release-9-3-4.html

I now have, e.g., duplicate primary keys. What is the best approach for doing a loop of re-checking all my constraints (pkeys, fkeys) -> fixing issues -> checking again, to make sure that my data is OK?

**UPDATE**

I decided to go with an approach of fixing the issue by deleting all constraints and then recreating them using the following method: http://blog.hagander.net/archives/131-Automatically-dropping-and-creating-constraints.html . However, I'm currently stuck on an error message while trying to recreate a pkey:

ERROR: failed to find parent tuple for heap-only tuple at (1192248,5) in table "fruits"
CONTEXT: SQL statement "ALTER TABLE "pm"."fruits" ADD CONSTRAINT "fruits_pkey" PRIMARY KEY (id)"

What does this mean, and how do I fix it (I can live with deleting the affected data, if that's an option)?

Another question: would my data actually be all good if I just got rid of any duplicate rows by deleting them, then did a pg_dump and restored the database from that dump? It would rebuild the data structure - right?
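As a first pass, a hedged sketch of how to spot rows violating the (supposedly unique) primary key, using the table named in the error; the same GROUP BY / HAVING pattern can be repeated per table before the constraints are recreated:

SELECT id, count(*)
FROM pm.fruits
GROUP BY id
HAVING count(*) > 1;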
Niels Kristian (971 rep)
Apr 8, 2014, 08:01 AM • Last activity: Mar 10, 2025, 09:55 PM
14 votes
3 answers
19070 views
How to do a minor upgrade of PostgreSQL on Windows, e.g. 9.3.0 to 9.3.1?
What is the recommended way to perform a minor upgrade from PostgreSQL e.g. 9.3.0 to 9.3.1 using the Enterprise DB built windows installer? Should I uninstall first or just install over the existing installation? The current installation was performed with postgresql-9.3.0-1-windows-x64.exe. Now I want to upgrade using postgresql-9.3.1-1-windows-x64.exe.
buzz3791 (396 rep)
Nov 4, 2013, 06:22 PM • Last activity: Feb 24, 2025, 12:10 AM
55 votes
7 answers
167141 views
How to get all roles that a user is a member of (including inherited roles)?
Let's say I have two PostgreSQL database groups, "authors" and "editors", and two users, "maxwell" and "ernest".

create role authors;
create role editors;
create user maxwell;
create user ernest;
grant authors to editors; --editors can do what authors can do
grant editors to maxwell; --maxwell is an editor
grant authors to ernest; --ernest is an author

I would like to write a performant function that returns a list of the roles (preferably their oids) that maxwell belongs to, something like this:

create or replace function get_all_roles() returns oid[] ...

It should return the oids for maxwell, authors, and editors (but not ernest). But I am not sure how to do it when there is inheritance.
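A hedged sketch of the membership walk over pg_auth_members (this returns the user's own oid plus every role reached through direct or nested membership; wrap it in a function body as needed):

WITH RECURSIVE membership AS (
  SELECT oid FROM pg_roles WHERE rolname = 'maxwell'
  UNION
  SELECT m.roleid
  FROM pg_auth_members m
  JOIN membership x ON m.member = x.oid
)
SELECT array_agg(oid) FROM membership;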
Neil McGuigan (8653 rep)
Jan 3, 2014, 10:07 PM • Last activity: Jan 16, 2025, 10:43 AM
2 votes
1 answer
3801 views
Postgresql: Partition table by hour
I have a system that inserts several million records over the course of 24 hours. Users don't care about the data that's older than 24 hours, and typically query data over a small time frame (5 to 60 minutes). Applications need to access small timespans of data very quickly (under a second). Partitioning the table by hour seemed like the best thing to do. I made a sample/test setup to make sure the partitioning would work correctly.

Here's my main table:

create table test_table(
    test_table_id bigserial primary key,
    "timestamp" timestamp without time zone default now(),
    did_pass boolean not null
);
create index timestamp_idx on test_table("timestamp");

And then I created a separate schema to hold the partition tables:

create schema test_table_parition;

And created the hourly partition tables. There's a total of 24 partition tables.

create table test_table_partition.h0( check((extract(hour from "timestamp")) >= 0 and (extract(hour from "timestamp")) = 1 and (extract(hour from "timestamp")) = 23 and (extract(hour from "timestamp")) '2015-02-20 11:00:30.152';

The query execution plan is going after the main table and all 24 of the partition tables. From my understanding of partitioning, the execution plan should only be going after the partition tables with an hour value greater than or equal to 11.

How can I partition tables based on hour?

**UPDATE**

I was playing around with it a bit more today, and I discovered that running a query directly against the hour value of the table will produce the expected query plan. For example:

explain select * from test_table where extract(hour from "timestamp") >= 11 and extract(hour from "timestamp") Seq Scan on test_table (cost=0.00..4.46 rows=1 width=17)"
" Filter: ((date_part('hour'::text, "timestamp") >= 11::double precision) AND (date_part('hour'::text, "timestamp") Seq Scan on h11 (cost=0.00..1.48 rows=1 width=17)"
" Filter: ((date_part('hour'::text, "timestamp") >= 11::double precision) AND (date_part('hour'::text, "timestamp") Seq Scan on h12 (cost=0.00..44.60 rows=9 width=17)"
" Filter: ((date_part('hour'::text, "timestamp") >= 11::double precision) AND (date_part('hour'::text, "timestamp") < 13::double precision))"

I think I need to change how the table partitioning is being set up.
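One hedged way to get pruning for plain "timestamp" comparisons is to define the child-table CHECK constraints on the timestamp range itself rather than on extract(hour ...), since constraint exclusion can only prune when the WHERE clause can be proven against the CHECK expression; the dates below are illustrative, and the constraints would need to be refreshed as the 24-hour window rolls forward:

CREATE TABLE test_table_partition.h11 (
  CHECK ("timestamp" >= '2015-02-20 11:00:00' AND "timestamp" < '2015-02-20 12:00:00')
) INHERITS (test_table);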
CurtisHx (123 rep)
Feb 20, 2015, 04:43 PM • Last activity: Dec 20, 2024, 08:01 AM
146 votes
8 answers
475615 views
How to turn JSON array into Postgres array?
I have a column data of type json that holds JSON documents like this:

{ "name": "foo", "tags": ["foo", "bar"] }

I would like to turn the nested tags array into a concatenated string ('foo, bar'). That would be easily possible with the array_to_string() function in theory. However, this function does not accept json input. So I wonder how to turn this JSON array into a Postgres array (type text[])?
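A hedged sketch of the conversion (this relies on json_array_elements_text(), which exists in PostgreSQL 9.4 and later; the table name is a placeholder):

-- yields text[]
SELECT array(SELECT json_array_elements_text(data->'tags')) AS tags
FROM mytable;

-- and the concatenated form the question asks for:
SELECT array_to_string(array(SELECT json_array_elements_text(data->'tags')), ', ')
FROM mytable;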
Christoph (1653 rep)
Dec 2, 2013, 08:48 PM • Last activity: Dec 16, 2024, 08:18 AM
4 votes
1 answer
11506 views
Permission for sequence in another schema
Postgres 9.3, Debian 7.0.

I created a specific schema for a specific user and created a view in this schema for this user, so it's the only table he knows exists. The problem is that this same user needs usage on the sequence of the primary key of this table, but it says "ERROR: permission denied for sequence".

The original table and its sequence belong to schema A. This user's schema B has an insertable view of this table T. I cannot grant usage on schema A for this user, otherwise he will be able to see the names and definitions of all my tables.

The question is: is there some way to create some kind of view for this sequence so he can call nextval() and currval()? The goal is making this sequence usable for this restricted user without giving him access to the main schema where the sequence actually belongs.
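One hedged pattern that fits this setup is a SECURITY DEFINER wrapper function living in the user's schema B, so the restricted user never needs USAGE on schema A (the schema, sequence and role names below are placeholders; the function owner must itself have rights on the sequence):

CREATE FUNCTION b.next_t_id() RETURNS bigint
LANGUAGE sql SECURITY DEFINER AS
$$ SELECT nextval('a.t_id_seq') $$;

REVOKE ALL ON FUNCTION b.next_t_id() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION b.next_t_id() TO restricted_user;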
Ivan De Sousa Paz (561 rep)
Sep 9, 2014, 06:57 PM • Last activity: Nov 20, 2024, 11:20 AM
11 votes
1 answer
6766 views
Reclaim disk space from dropped column without downtime
I have a heavily used table (with around 5 million rows) in a PostgreSQL database in which I want to drop a column and reclaim the space the column used. The docs suggest doing a table rewriting ALTER TABLE to force the space back, but that is not safe to run while the table is used and would cause downtime. Are there any practical options that don't require any downtime? I tried to run the pgcompact tool, but that did not change anything.
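For context, a hedged sketch of the usual online route: the pg_repack extension (a fork of pg_reorg) rebuilds the table while holding only brief locks and, unlike a plain VACUUM, actually returns the space to the operating system. The table and database names are placeholders, the flags are as I recall them (see pg_repack --help), and the extension has to be installed on the server first:

-- once, inside the database:
CREATE EXTENSION pg_repack;

-- then from the shell:
pg_repack --table=big_table mydb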
viblo (511 rep)
Oct 9, 2015, 04:53 AM • Last activity: Oct 26, 2024, 10:21 AM
8 votes
2 answers
26878 views
PgBouncer auth failing when trying to connect using psql
I'm trying to configure the latest PgBouncer to work with Postgres 9. I can connect to my database using psql with the right password, but when I use psql -p 6432 I can't connect; the error message is:

psql: ERROR: auth failed

This seems like it might be caused by my userlist.txt file, but I'm not sure of that. I checked, and all required files are fully owned by the postgres system user.

pgbouncer.ini:
[databases]
postgres = host=localhost port=5433 auth_user=postgres dbname=postgres

[pgbouncer]
pidfile = /usr/local/pgbouncer-1.9.0/pgbouncer.pid
logfile = /usr/local/pgbouncer-1.9.0/log/pgbouncer.log

user = postgres

listen_addr = *
listen_port = 6432

auth_type = md5
auth_file = /usr/local/pgbouncer-1.9.0/etc/userlist.txt
userlist.txt:

"postgres" "md5"

Command used to start PgBouncer:

./bin/pgbouncer -d etc/pgbouncer.ini

Log output showing the failure:
2019-08-20 13:46:01.080 16446 LOG C-0x1028ce0: postgres/postgres@127.0.0.1:43286 login attempt: db=postgres user=postgres tls=no
2019-08-20 13:46:01.080 16446 LOG C-0x1028ce0: postgres/postgres@127.0.0.1:43286 closing because: client unexpected eof (age=0)
2019-08-20 13:46:06.980 16446 LOG C-0x1028ce0: postgres/postgres@127.0.0.1:43414 login attempt: db=postgres user=postgres tls=no
2019-08-20 13:46:06.980 16446 LOG C-0x1028ce0: postgres/postgres@127.0.0.1:43414 closing because: auth failed (age=0)
2019-08-20 13:46:06.980 16446 WARNING C-0x1028ce0: postgres/postgres@127.0.0.1:43414 pooler error: auth failed
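With auth_type = md5, each line of userlist.txt has to carry the actual hash the server stores; a literal "md5" on its own will always fail authentication. A hedged way to generate a correct line, run as a superuser:

SELECT '"' || rolname || '" "' || rolpassword || '"'
FROM pg_authid
WHERE rolname = 'postgres';

The output (something like "postgres" "md5<32 hex chars>") goes verbatim into userlist.txt; alternatively, auth_query together with auth_user lets PgBouncer look the hash up itself.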
Josh Kirby (121 rep)
Aug 20, 2019, 02:02 PM • Last activity: Oct 2, 2024, 09:25 PM
37 votes
2 answers
42953 views
DELETE rows which are not referenced in other table
I have two tables in a PostgreSQL 9.3 database: table link_reply has a foreign key named which_group pointing to table link_group. I want to delete all rows from link_group for which no related row in link_reply exists. Sounds basic enough, but I've been struggling with it. Will it be something simple like this (not working)?

DELETE FROM link_group WHERE link_reply = NULL;
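A hedged sketch of the anti-join form (this assumes link_group's primary key column is named id; adjust to the real column names):

DELETE FROM link_group lg
WHERE NOT EXISTS (
    SELECT 1
    FROM link_reply lr
    WHERE lr.which_group = lg.id
);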
Hassan Baig (2079 rep)
Apr 3, 2016, 02:15 PM • Last activity: Sep 25, 2024, 01:48 AM
2 votes
2 answers
566 views
"Wrong affix file formate for flag" while loading dictionaries from LibreOffice in PostgreSQL 9.3
Trying to get good TEXT SEARCH results out of PostgreSQL in English, German and Dutch led me to download the proofing tools of LibreOffice (I tried versions 4.3 and 3.3) and iSpell. I placed the files in the "tsearch_data" dir and renamed them so PostgreSQL would find them. I now have: en_us.dict, en_us.affix, nl_nl.dict, nl_nl.affix, de_de_frami.dict, de_de_frami.affix.

All of them give the same sort of errors while creating a dictionary in PostgreSQL 9.3, like so:

CREATE TEXT SEARCH DICTIONARY test_ispell (template = ispell, Dictfile = en_us, affFile = en_us, stopwords = english);

> ERROR: wrong affix file format for flag
> CONTEXT: line 2428 of configuration file "/usr/share/postgresql/9.3/tsearch_data/en_us.affix": "COMPOUNDMIN 1"
>
> ERROR: wrong affix file format for flag
> CONTEXT: line 2533 of configuration file "/usr/share/postgresql/9.3/tsearch_data/en_us.affix": "SFX 123 N 1"
>
> ERROR: invalid byte sequence for encoding "UTF8": 0xc4 0x62
> CONTEXT: line 18 of configuration file "/usr/share/postgresql/9.3/tsearch_data/de_de_frami.dict"
>
> ERROR: wrong affix file format for flag
> CONTEXT: line 604 of configuration file "/usr/share/postgresql/9.3/tsearch_data/dutch.affix": "SFX Na N 1"

It was pointed out to me that the first error appears to be spurious, as can be seen in its man page, http://linux.die.net/man/4/hunspell :

COMPOUNDMIN num
Minimum length of words used for compounding. Default value is 3 letters.

**Update:** COMPOUND Hunspell functions can't work in PostgreSQL; see http://www.postgresql.org/docs/current/static/textsearch-dictionaries.html#TEXTSEARCH-ISPELL-DICTIONARY

The SFX syntax is beyond me. The invalid byte sequence must be fixable as well, I presume, if I somehow manage to tell PostgreSQL the file type, yet I have no clue how to do that. Does anyone have any idea how to load these files?

**Update**: The Dutch dictionaries provided with Debian (part of the MySpell package) still produce errors, and I can't get them to work. Has anyone had any success with that? For now I will try to file a bug report with Debian, since this is functionality that should be working.
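On the "invalid byte sequence" error specifically, a hedged sketch (this assumes the de_de_frami files are ISO-8859-1 encoded, which the 0xc4 byte suggests; PostgreSQL expects ispell dictionary files to be UTF-8):

iconv -f ISO-8859-1 -t UTF-8 de_de_frami.dict > de_de_frami_utf8.dict
iconv -f ISO-8859-1 -t UTF-8 de_de_frami.affix > de_de_frami_utf8.affix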
St. Jan (151 rep)
Oct 21, 2014, 04:30 PM • Last activity: Sep 11, 2024, 12:45 PM
2 votes
3 answers
5758 views
Performance of single vs multiple column unique constraint
I'm using PostgreSQL 9.3. I want to understand: given the option of making a constraint unique across the entire table vs. unique across a subset of the table (i.e. by using 2 columns in the unique constraint, I restrict the uniqueness), which one is better for lookups?

Consider this table, where a unique alphanumeric code is allotted to each student of the class.

CREATE TABLE sc_table (
  name text NOT NULL,
  code text NOT NULL,
  class_id integer NOT NULL,
  CONSTRAINT class_fk FOREIGN KEY (class_id) REFERENCES class (id),
  CONSTRAINT sc_uniq UNIQUE (code)
);

Currently the code is unique across the entire table. However, the specification says that it is sufficient for the code to be unique within the class only. For my design requirements there's no restriction either way. However, if I change the constraint to be unique for a given class only, how would it affect lookup by code? Or, in other words, which of the following combinations of constraint & lookup is the best speed-wise?

-- 1. unique across entire table, lookup by value
CONSTRAINT sc_uniq UNIQUE (code)
SELECT * FROM sc_table WHERE code='alpha-2-beta'

-- 2. unique across entire table, lookup by value & class
CONSTRAINT sc_uniq UNIQUE (code)
SELECT * FROM sc_table WHERE class_id=1 AND code='alpha-2-beta'

-- 3. unique per class, lookup by value
CONSTRAINT sc_uniq UNIQUE (code, class_id)
SELECT * FROM sc_table WHERE code='alpha-2-beta'

-- 4. unique per class, lookup by value & class
CONSTRAINT sc_uniq UNIQUE (code, class_id)
SELECT * FROM sc_table WHERE class_id=1 AND code='alpha-2-beta'

**Question**: My understanding is that 2 is better than 1 & 4 is better than 3. But which one's better between 1-vs-3 & 2-vs-4?

**Update**: Adding output of explain analyze. 3 is bad because there's no index for the lookup. 2 seems to be the best, but the table is too small to conclude that.

-- 1
"Index Scan using sc_uniq on sc_table (cost=0.15..8.17 rows=1 width=72) (actual time=0.041..0.044 rows=1 loops=1)"
" Index Cond: (code = 'alpha-2-beta'::text)"
"Total runtime: 0.096 ms"

-- 2
"Index Scan using sc_uniq on sc_table (cost=0.15..8.17 rows=1 width=72) (actual time=0.024..0.026 rows=1 loops=1)"
" Index Cond: (code = 'alpha-2-beta'::text)"
" Filter: (class_id = 1)"
"Total runtime: 0.056 ms"

-- 3
"Bitmap Heap Scan on sc_table2 (cost=4.18..12.64 rows=4 width=72) (actual time=0.052..0.053 rows=1 loops=1)"
" Recheck Cond: (code = 'alpha-2-beta'::text)"
" -> Bitmap Index Scan on sc_uniq2 (cost=0.00..4.18 rows=4 width=0) (actual time=0.039..0.039 rows=1 loops=1)"
" Index Cond: (code = 'alpha-2-beta'::text)"
"Total runtime: 0.121 ms"

-- 4
"Index Scan using sc_uniq2 on sc_table2 (cost=0.15..8.17 rows=1 width=72) (actual time=0.036..0.039 rows=1 loops=1)"
" Index Cond: ((code = 'alpha-2-beta'::text) AND (class_id = 1))"
"Total runtime: 0.093 ms"
user4150760 (1129 rep)
Dec 20, 2014, 06:24 PM • Last activity: Jun 27, 2024, 10:15 PM