Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
4
votes
1
answers
239
views
Create WAL at interval, but only if data changed
I have a PostgreSQL 8.4 installation, and I have configured the `archive_timeout` setting to 5 minutes to improve the odds that I always have a complete WAL file to apply in case of system failure.
However, this creates a LOT of big files, even if nothing has changed! Is there a way to tell PostgreSQL to only create these WAL files at the timeout interval IF something has changed?
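One possible approach, sketched here rather than verified against 8.4: drive the segment switch from an external job instead of relying on `archive_timeout`, and only force a switch when the WAL write position has actually advanced. Both functions used below exist in 8.4 and need superuser rights; how the previous position is stored between runs is left to the calling script.

```pgsql
-- Sketch only: record the current WAL write position on each run and compare
-- it with the value saved by the previous run (storage is up to the caller).
SELECT pg_current_xlog_location();   -- e.g. '0/16B3D80'

-- Only if the position has moved since the last run, force the segment switch
-- so the completed WAL file is handed to archive_command:
SELECT pg_switch_xlog();
```

With `archive_timeout` set back to 0, segments would then only be archived when they fill up or when this job forces a switch because something changed.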
TSG
(193 rep)
Mar 7, 2018, 11:06 PM
• Last activity: Jun 2, 2025, 03:09 PM
15
votes
1
answers
19612
views
What are the privileges required to execute a trigger function in PostgreSQL?
What are the privileges required to execute a trigger function in PostgreSQL 8.4?
It seems that the privileges granted to a role do not matter when executing a trigger function. I think I read somewhere that the privilege required to execute a trigger function is the `EXECUTE` privilege, but checked for the owner of the table, not the actual role performing the action that fires the trigger which calls the trigger function.
I cannot find the part of the documentation that explains this point. Any help?
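A small experiment (all object names below are made up for illustration) can show empirically whose `EXECUTE` privilege is checked at firing time: if the final insert succeeds even though the inserting role was never granted `EXECUTE` on the function, the check is evidently not made against that role.

```pgsql
-- Hypothetical objects, purely to probe the privilege check.
CREATE FUNCTION trg_noop() RETURNS trigger LANGUAGE plpgsql AS
  'BEGIN RETURN NEW; END;';
REVOKE EXECUTE ON FUNCTION trg_noop() FROM PUBLIC;

CREATE TABLE priv_test (x integer);
CREATE TRIGGER priv_test_trg BEFORE INSERT ON priv_test
  FOR EACH ROW EXECUTE PROCEDURE trg_noop();

CREATE ROLE app_user LOGIN;
GRANT INSERT ON priv_test TO app_user;

SET ROLE app_user;
INSERT INTO priv_test VALUES (1);  -- fires the trigger while acting as app_user
RESET ROLE;
```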
user18077
Jul 23, 2013, 08:23 AM
• Last activity: Apr 28, 2024, 06:35 PM
4
votes
1
answers
4290
views
Complex constraint across all data in a table
We have a table to record processing that occurs on a system and we need to ensure that only a single row has an 'in process' status.
I want to ensure that the result of this is always either zero or one:
select count(id) from jobs where status in ('P', 'G');
We're using explicit transactions so ideally this check would happen on commit and abort the transaction if the invariant doesn't hold. Handling any racy processing that throws the occasional error is much easier for us to deal with than suddenly ending up with more than one job that is 'in process'.
The solution only needs to work with Postgres so we're quite happy to take a non-standard solution to this. We're currently on 8.4, but we'll be upgrading at some point to 9.x if that makes any difference.
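One Postgres-specific construction that should work on 8.4 (both partial and expression indexes are available there) is a unique partial index over the status predicate, so at most one row can ever satisfy it. This is only a sketch, reusing the table and column names from the query above:

```pgsql
-- Every row with status 'P' or 'G' indexes the same value (true),
-- so a second such row violates the unique constraint.
CREATE UNIQUE INDEX jobs_single_in_process
    ON jobs ((status IN ('P', 'G')))
    WHERE status IN ('P', 'G');
```

Unlike a deferred check, this is enforced per statement rather than at COMMIT, but it does guarantee the invariant and raises an error for the racing transaction.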
KayEss
(145 rep)
Aug 9, 2013, 02:53 AM
• Last activity: Jan 22, 2024, 11:35 PM
30
votes
2
answers
34494
views
How to create an index to speed up an aggregate LIKE query on an expression?
I may be asking the wrong question in the title. Here are the facts:
My customer service folk have been complaining about slow response times when doing customer lookups on the administration interface of our Django-based site.
We're using Postgres 8.4.6. I started logging slow queries, and discovered this culprit:
SELECT COUNT(*) FROM "auth_user" WHERE UPPER("auth_user"."email"::text) LIKE UPPER(E'%deyk%')
This query is taking upwards of 32 seconds to run. Here's the query plan provided by EXPLAIN:
QUERY PLAN
Aggregate (cost=205171.71..205171.72 rows=1 width=0)
-> Seq Scan on auth_user (cost=0.00..205166.46 rows=2096 width=0)
Filter: (upper((email)::text) ~~ '%DEYK%'::text)
Because this is a query generated by the Django ORM from a Django QuerySet generated by the Django Admin application, I don't have any control over the query itself. An index seems like the logical solution. I tried creating an index to speed this up, but it hasn't made a difference:
CREATE INDEX auth_user_email_upper ON auth_user USING btree (upper(email::text))
What am I doing wrong? How can I speed up this query?
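For what it's worth, a B-tree expression index generally cannot help with a pattern that starts with `%`. A quick sanity check, sketched with the index created above, is to compare plans for an anchored and an unanchored pattern:

```pgsql
-- Left-anchored pattern: the planner can use auth_user_email_upper
-- (with a C locale, or an index declared with text_pattern_ops).
EXPLAIN SELECT count(*) FROM auth_user
WHERE upper(email::text) LIKE 'DEYK%';

-- Leading wildcard: no B-tree index applies, so a sequential scan is expected.
EXPLAIN SELECT count(*) FROM auth_user
WHERE upper(email::text) LIKE '%DEYK%';
```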
David Eyk
(537 rep)
Aug 9, 2011, 11:04 PM
• Last activity: Aug 1, 2023, 06:09 PM
12
votes
1
answers
7746
views
Postgresql Rename Table without Updating View Definitions
I found out today that renaming a table in PostgreSQL also automatically updates the VIEW definitions of views that use the renamed table. Is there any way to turn this off?
So I rename the table with:
ALTER TABLE xyz RENAME TO abc;
And my VIEW definition, for the sake of example, is:
CREATE VIEW foo AS SELECT * FROM xyz;
Next thing I know after I rename the table, the definition for foo has changed to point to table abc.
I'm using Postgresql 8.4
**--Updated: Dec 30, 2014 at 12:41pm--**
Here is a test case to illustrate what I mean:
CREATE TABLE tmp_test_a (num integer);
CREATE VIEW v_tmp_test_a AS SELECT * FROM tmp_test_a;
-- Now look at what the VIEW is using (now is using tmp_test_a, which is what it should be using):
SELECT pg_get_viewdef('v_tmp_test_a'::regclass, false);
-- Now create a second table, and swap them.
CREATE TABLE tmp_test_b (num integer);
ALTER TABLE tmp_test_a RENAME TO tmp_test_c;
ALTER TABLE tmp_test_b RENAME TO tmp_test_a;
ALTER TABLE tmp_test_c RENAME TO tmp_test_b;
-- Now look at what the VIEW is using again (now is using tmp_test_b with an alias of tmp_test_a)
SELECT pg_get_viewdef('v_tmp_test_a'::regclass, false);
-- Cleanup
DROP VIEW v_tmp_test_a;
DROP TABLE tmp_test_a;
DROP TABLE tmp_test_b;
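This happens because a view stores a reference to the table's OID rather than its name, so the definition follows the original table through the rename. A minimal workaround sketch, using the objects from the test case above (run it after the renames, before the cleanup), is to re-issue the view definition so it binds to whatever table currently carries the name `tmp_test_a`:

```pgsql
-- Re-bind the view to the table that is now named tmp_test_a.
CREATE OR REPLACE VIEW v_tmp_test_a AS SELECT * FROM tmp_test_a;

-- Should show tmp_test_a again rather than tmp_test_b.
SELECT pg_get_viewdef('v_tmp_test_a'::regclass, false);
```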
Brandon
(223 rep)
Dec 30, 2014, 04:04 AM
• Last activity: Nov 22, 2022, 06:53 PM
20
votes
7
answers
6423
views
Form groups of consecutive rows with same value
I have a situation that I think can be solved using a window function, but I'm not sure.
Imagine the following table:
CREATE TABLE tmp (
date timestamp
, id_type integer
) ;
INSERT INTO tmp (date, id_type)
VALUES
( '2017-01-10 07:19:21.0', 3 ),
( '2017-01-10 07:19:22.0', 3 ),
( '2017-01-10 07:19:23.1', 3 ),
( '2017-01-10 07:19:24.1', 3 ),
( '2017-01-10 07:19:25.0', 3 ),
( '2017-01-10 07:19:26.0', 5 ),
( '2017-01-10 07:19:27.1', 3 ),
( '2017-01-10 07:19:28.0', 5 ),
( '2017-01-10 07:19:29.0', 5 ),
( '2017-01-10 07:19:30.1', 3 ),
( '2017-01-10 07:19:31.0', 5 ),
( '2017-01-10 07:19:32.0', 3 ),
( '2017-01-10 07:19:33.1', 5 ),
( '2017-01-10 07:19:35.0', 5 ),
( '2017-01-10 07:19:36.1', 5 ),
( '2017-01-10 07:19:37.1', 5 );
I'd like to start a new group at each change of value in the column `id_type`.
E.g. the 1st group runs from 7:19:21 to 7:19:25, the 2nd starts and finishes at 7:19:26, and so on.
At this moment, using the query below ...
SELECT distinct
min(min(date)) over w as begin,
max(max(date)) over w as end,
id_type
FROM tmp
GROUP BY id_type
WINDOW w AS (PARTITION BY id_type)
ORDER BY begin;
I get the following result:
begin end id_type
2017-01-10 07:19:21.0 2017-01-10 07:19:32.0 3
2017-01-10 07:19:26.0 2017-01-10 07:19:37.1 5
While I'd like:
begin end id_type
2017-01-10 07:19:21.0 2017-01-10 07:19:25.0 3
2017-01-10 07:19:26.0 2017-01-10 07:19:26.0 5
2017-01-10 07:19:27.1 2017-01-10 07:19:27.1 3
2017-01-10 07:19:28.0 2017-01-10 07:19:29.0 5
2017-01-10 07:19:30.1 2017-01-10 07:19:30.1 3
2017-01-10 07:19:31.0 2017-01-10 07:19:31.0 5
2017-01-10 07:19:32.0 2017-01-10 07:19:32.0 3
2017-01-10 07:19:33.1 2017-01-10 07:19:37.1 5
Once that works, I want to include more criteria to define groups, and these others will be nullable.
Postgres version: 8.4. We have Postgres with PostGIS, so it is not easy to upgrade (PostGIS functions change names and there are other problems), but we are already rewriting everything and the new version will use 9.x with PostGIS 2.x.
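For reference, the usual gaps-and-islands pattern is available on 8.4, since window functions were introduced in that release. A sketch against the table above (not tested on 8.4 itself): flag each row where `id_type` changes, turn the running sum of the flags into a group number, then aggregate per group.

```pgsql
SELECT min(date) AS begin, max(date) AS "end", id_type
FROM (
    SELECT date, id_type,
           sum(chg) OVER (ORDER BY date) AS grp      -- running count of changes = group number
    FROM (
        SELECT date, id_type,
               CASE WHEN id_type = lag(id_type) OVER (ORDER BY date)
                    THEN 0 ELSE 1 END AS chg         -- 1 whenever id_type differs from the previous row
        FROM tmp
    ) flagged
) grouped
GROUP BY id_type, grp
ORDER BY begin;
```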
Lelo
(303 rep)
Mar 6, 2017, 08:40 PM
• Last activity: Sep 13, 2021, 12:53 AM
3
votes
2
answers
7091
views
Dump PostgreSQL without DROP DATABASE
I want to dump a database, but without the DROP DATABASE and CREATE DATABASE instructions, into a generated .backup / .sql file.
Currently I use this command:
pg_dump --format=c --no-privileges --no-owner --file=/tmp/`hostname`-`date +%d%m%Y`.backup --username=admin --host=127.0.0.1 database_name
But in the top lines of the file I have this:
DROP DATABASE...
CREATE DATABASE...
So, in my case, I want to duplicate this database, and if I use this dump with **pg_restore** it drops the other database (I don't want that).
Patrick Maciel
(143 rep)
Jul 12, 2013, 07:28 PM
• Last activity: Apr 19, 2021, 06:34 AM
2
votes
2
answers
10685
views
Postgresql - Segmentation fault (core dumped)
I'm working on postgresql code (server side).
I installed a version from source code following the official tutorial:
http://www.postgresql.org/docs/8.4/interactive/install-short.html
I made some modifications to the code, so I need to install PostgreSQL again, but this time my own version.
I renamed */usr/local/pgsql/* to */usr/local/pgsql2/* and ran:
make distclean
./configure
make
sudo make install
sudo mkdir /usr/local/pgsql/data
sudo chown darkcoffeelinux /usr/local/pgsql/data
Up to this point everything went well, but with the command
/usr/local/pgsql/bin/initdb -D /usr/local/pgsql/data/
I got this error:
The files belonging to this database system will be owned by user "darkcoffeelinux".
This user must also own the server process.
The database cluster will be initialized with locale en_US.UTF-8.
The default database encoding has accordingly been set to UTF8.
The default text search configuration will be set to "english".
fixing permissions on existing directory /usr/local/pgsql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 24MB
creating configuration files ... ok
creating template1 database in /usr/local/pgsql/data/base/1 ... ok
initializing pg_authid ... Segmentation fault (core dumped)
child process exited with exit code 139
initdb: removing contents of data directory "/usr/local/pgsql/data"
Additional information: I'm using kubuntu, postgresql 8.4.15
DarkCoffee
(517 rep)
Feb 10, 2013, 06:35 AM
• Last activity: Mar 18, 2021, 11:55 AM
89
votes
2
answers
153389
views
Create index if it does not exist
I am working on a function that allows me to add an index if it does not exist. I am running into the problem that I cannot get a list of indexes to compare to. Any thoughts?
This is a similar issue to the column creation one that is solved with this code:
https://stackoverflow.com/a/12603892/368511
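Since 8.4 has neither `CREATE INDEX IF NOT EXISTS` nor `DO` blocks, one sketch (the index and table names in the usage example are purely illustrative) is to wrap an existence check against the `pg_indexes` catalog view in a plpgsql function:

```pgsql
CREATE OR REPLACE FUNCTION create_index_if_not_exists(p_index name, p_ddl text)
RETURNS void LANGUAGE plpgsql AS $$
BEGIN
    -- pg_indexes lists one row per index; skip the DDL if the name already exists.
    IF NOT EXISTS (SELECT 1 FROM pg_indexes WHERE indexname = p_index) THEN
        EXECUTE p_ddl;
    END IF;
END;
$$;

-- Hypothetical usage:
SELECT create_index_if_not_exists(
    'idx_orders_customer_id',
    'CREATE INDEX idx_orders_customer_id ON orders (customer_id)');
```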
GuidoS
(1057 rep)
Feb 27, 2013, 08:39 PM
• Last activity: Feb 1, 2021, 10:51 PM
45
votes
3
answers
197184
views
Using PostgreSQL 8.4, how to convert bytea to text value in postgres?
In my application I insert data into the database using C code. Since the strings I receive come from an untrusted source, I have escaped them using `PQescapeByteaConn` from the libpq library, which is working perfectly fine, i.e. it results in an octal-escaped string. See the example below:
Input string: `\n\t\f\b\p\k\j\l\mestPrepared`
Output string: `\\012\\011\\014\\010pkjlmestPrepared`
The output string is inserted into the database. Now I retrieve that data from the database in Java code using JDBC. How can I unescape the string back to its original value?
I thought of two possible approaches:
1. Change the retrieval query and pass this field to a Postgres string-manipulation function, i.e. one that can convert bytea to text.
2. Do the decoding in Java code.
I understand that approach 1 would be more efficient. I have tried almost all the functions listed here but nothing is working. Please help!
I am using version 8.4 of postgres on a linux machine.
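For approach 1, two server-side functions that already exist in 8.4 can render a `bytea` column as text before it reaches JDBC. A sketch with illustrative table and column names:

```pgsql
-- Return the bytes as backslash-escaped text (the same octal-escape form shown above).
SELECT encode(payload, 'escape') FROM messages;

-- Or decode the bytes as a character string in a known encoding.
SELECT convert_from(payload, 'UTF8') FROM messages;
```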
Amit
(591 rep)
Nov 14, 2013, 10:20 AM
• Last activity: Dec 30, 2020, 10:47 PM
2
votes
1
answers
4317
views
Split two rows into two columns
I have the following table:
id | name | action | count
------------------------------
1 | Value1 | 0 | 27
1 | Value1 | 1 | 49
2 | Value2 | 0 | 7
2 | Value2 | 1 | 129
3 | Value3 | 0 | 9
3 | Value3 | 1 | 7
I need to make the 'action' column appear twice, with the count value of each line in it, something like this:
id | name | action1 | action2
---------------------------------
1 | Value1 | 27 | 49
2 | Value2 | 7 | 129
3 | Value3 | 9 | 7
How can I do this? Here's my script:
SELECT m.id,
t.name,
m.action,
count(m.id) AS count
FROM table1 m
LEFT JOIN table2 t ON (m.id = t.id)
WHERE m.status != '2'
GROUP BY m.id,
t.name,
m.action
ORDER BY 1, 3
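One common way to fold the two `action` rows into columns on 8.4 is conditional aggregation with `CASE`. A sketch reusing the tables and filter from the script above; it assumes `action` only takes the values 0 and 1, as in the sample data:

```pgsql
SELECT m.id,
       t.name,
       sum(CASE WHEN m.action = 0 THEN 1 ELSE 0 END) AS action1,  -- rows with action = 0
       sum(CASE WHEN m.action = 1 THEN 1 ELSE 0 END) AS action2   -- rows with action = 1
FROM table1 m
LEFT JOIN table2 t ON m.id = t.id
WHERE m.status != '2'
GROUP BY m.id, t.name
ORDER BY 1;
```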
Renan Lazarotto
(173 rep)
Jul 15, 2014, 12:50 PM
• Last activity: Jun 28, 2020, 12:53 AM
3
votes
1
answers
2551
views
PostgreSQL 8.4 no levenshtein fuzzymatching?
I'm using the Debian Squeeze package, and it says the `levenshtein` function does not exist. Why?
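On 8.4, `levenshtein()` lives in the `fuzzystrmatch` contrib module, which has to be loaded into each database by hand (`CREATE EXTENSION` only arrived in 9.1). A sketch, assuming the usual Debian path for the contrib scripts after installing `postgresql-contrib-8.4`; adjust the path if your layout differs:

```pgsql
-- Run from psql, connected to the target database.
\i /usr/share/postgresql/8.4/contrib/fuzzystrmatch.sql

SELECT levenshtein('kitten', 'sitting');  -- expected result: 3
```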
Moe
(39 rep)
Jan 18, 2013, 08:26 AM
• Last activity: Jul 8, 2019, 10:54 PM
24
votes
3
answers
59440
views
How do I install pgcrypto in PostgreSQL 8.4?
I'm using Ubuntu Server 10.10 and I have installed PostgreSQL 8.4 using `apt-get install postgresql`. I would like to use the built-in `sha1()` function, but it seems that I have to install `pgcrypto` first. But I don't know how to install it.
There is no `pgcrypto` if I try to install it using `apt-get install pgcrypto`, and I don't find any files starting with `pgcrypto` in my system (I tried `find / -name "pgcrypto*"`).
How do I install pgcrypto so I can use the `digest('word-to-hash','sha1')` function in my database queries?
-------------
**Update:** I'm struggling to install pgcrypto on another Ubuntu machine. After installing the package using `sudo apt-get install postgresql-contrib-8.4`, how do I install it to my current PostgreSQL database?
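For the update above: on 8.4 the contrib package only drops the SQL scripts on disk; the functions still have to be loaded into each database that needs them (`CREATE EXTENSION` does not exist until 9.1). A sketch, assuming the usual Ubuntu path for the contrib scripts; adjust the path if needed:

```pgsql
-- Run from psql, connected to the target database.
\i /usr/share/postgresql/8.4/contrib/pgcrypto.sql

-- digest() returns bytea; encode() renders it as a hex string.
SELECT encode(digest('word-to-hash', 'sha1'), 'hex');
```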
Jonas
(33975 rep)
Mar 24, 2011, 01:58 PM
• Last activity: Feb 13, 2019, 04:49 PM
0
votes
1
answers
692
views
Altering a parent table in Postgresql 8.4 breaks child table defaults
The problem: In PostgreSQL, if table `temp_person_two` inherits from `temp_person`, default column values on the child table are ignored if the *parent* table is altered.
How to replicate:
First, create a table and a child table. The child table should have one column that has a default value.
CREATE TEMPORARY TABLE temp_person (
person_id SERIAL,
name VARCHAR
);
CREATE TEMPORARY TABLE temp_person_two (
has_default character varying(4) DEFAULT 'en'::character varying NOT NULL
) INHERITS (temp_person);
Next, create a trigger on the parent table that copies its data to the child table (I know this appears like bad design, but this is a minimal test case to show the problem).
CREATE FUNCTION temp_person_insert() RETURNS trigger
LANGUAGE plpgsql
AS '
BEGIN
INSERT INTO temp_person_two VALUES ( NEW.* );
RETURN NULL;
END;
';
CREATE TRIGGER temp_person_insert_trigger
BEFORE INSERT ON temp_person
FOR EACH ROW
EXECUTE PROCEDURE temp_person_insert();
Then insert data into parent and select data from child. The data should be correct.
INSERT INTO temp_person (name) VALUES ('ovid');
SELECT * FROM temp_person_two;
person_id | name | has_default
-----------+------+-------------
1 | ovid | en
(1 row )
Finally, alter parent table by adding a new, unrelated column. Attempt to insert data and watch a "not-null constraint" violation occur:
ALTER TABLE temp_person ADD column foo text;
INSERT INTO temp_person(name) VALUES ('Corinna');
ERROR: null value in column "has_default" violates not-null constraint
CONTEXT: SQL statement "INSERT INTO temp_person_two VALUES ( $1 .* )"
PL/pgSQL function "temp_person_insert" line 2 at SQL statement
My version:
testing=# select version();
version
-------------------------------------------------------------------------------------------------------
PostgreSQL 8.4.17 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit
(1 row)
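A workaround sketch (it sidesteps, rather than explains, the underlying behaviour): list the parent columns explicitly in the trigger body instead of relying on the positional `NEW.*` expansion, so the child's `DEFAULT` still fills `has_default` after the parent gains extra columns.

```pgsql
CREATE OR REPLACE FUNCTION temp_person_insert() RETURNS trigger
LANGUAGE plpgsql
AS '
BEGIN
    -- Name the columns: has_default is omitted and therefore takes its default.
    INSERT INTO temp_person_two (person_id, name)
    VALUES (NEW.person_id, NEW.name);
    RETURN NULL;
END;
';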
Curtis Poe
(515 rep)
Oct 7, 2013, 03:32 PM
• Last activity: Aug 29, 2018, 07:15 AM
11
votes
2
answers
23484
views
TOAST Table Growth Out of Control - FULLVAC Does Nothing
Recently, I've had a PostgreSQL 8.2.11 server upgraded to 8.4 in order to take advantage of autovacuum features and be in line with 30ish other PGSQL servers. This was done by a separate IT group who administers the hardware, so we don't have much choice on any other upgrades (won't see 9+ for a while). The server exists in a very closed environment (isolated network, limited root privileges) and runs on RHEL5.5 (i686). After the upgrade, the database has constantly been growing to the tune of 5-6 GB a day. Normally, the database, as a whole, is ~20GB; currently, it is ~89GB. We have a couple other servers which run equivalent databases and actually synchronize the records to each other via a 3rd party application (one whose inner workings I do not have access to). The other databases are ~20GB as they should be.
Running the following SQL, it's fairly obvious there's an issue with a particular table, and, more specifically, its TOAST table.
SELECT nspname || '.' || relname AS "relation",
pg_size_pretty(pg_relation_size(C.oid)) AS "size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(C.oid) DESC
LIMIT 20;
Which produces:
relation                | size
------------------------+---------
pg_toast.pg_toast_16874 | 89 GB
fews00.warmstates       | 1095 MB
...
(20 rows)

This TOAST table is for a table called "timeseries" which saves large records of blobbed data. A `SUM(LENGTH(blob)/1024./1024.)` of all the records in timeseries yields ~16GB for that column. There should be **no reason** this table's TOAST table should be as large as it is.
I've performed a `VACUUM FULL VERBOSE ANALYZE timeseries`, and the vacuum runs to completion with no errors.
>INFO: vacuuming "pg_toast.pg_toast_16874"
>INFO: "pg_toast_16874": found 22483 removable, 10475318 nonremovable row versions in 10448587 pages
>DETAIL: 0 dead row versions cannot be removed yet.
>Nonremovable row versions range from 37 to 2036 bytes long.
>There were 20121422 unused item pointers.
>Total free space (including removable row versions) is 0 bytes.
>4944885 pages are or will become empty, including 0 at the end of the table.
>4944885 pages containing 0 free bytes are potential move destinations.
>CPU 75.31s/29.59u sec elapsed 877.79 sec.
>INFO: index "pg_toast_16874_index" now contains 10475318 row versions in 179931 pages
>DETAIL: 23884 index row versions were removed.
>101623 index pages have been deleted, 101623 are currently reusable.
>CPU 1.35s/2.46u sec elapsed 21.07 sec.
I REINDEXed the table, which freed **some** space (~1GB). I can't CLUSTER the table as there isn't enough space on disk for the process, and I'm waiting to rebuild the table entirely as I'd like to find out why it is so much bigger than equivalent databases we have.
I ran the "Show Database Bloat" query from the PostgreSQL wiki, and this is what I get:
current_database | schemaname | tablename                  | tbloat | wastedbytes | iname                      | ibloat | wastedibytes
-----------------+------------+----------------------------+--------+-------------+----------------------------+--------+--------------
ptrdb04          | fews00     | timeseries                 | 1.0    | 0           | idx_timeseries_synchlevel  | 0.0    | 0
ptrdb04          | fews00     | timeseries                 | 1.0    | 0           | idx_timeseries_localavail  | 0.0    | 0
ptrdb04          | fews00     | timeseries                 | 1.0    | 0           | idx_timeseries_expirytime  | 0.0    | 0
ptrdb04          | fews00     | timeseries                 | 1.0    | 0           | idx_timeseries_expiry_null | 0.0    | 0
ptrdb04          | fews00     | timeseries                 | 1.0    | 0           | uniq_localintid            | 0.0    | 0
ptrdb04          | fews00     | timeseries                 | 1.0    | 0           | pk_timeseries              | 0.1    | 0
ptrdb04          | fews00     | idx_timeseries_expiry_null | 0.6    | 0           | ?                          | 0.0    | 0

It looks like the database doesn't consider this space as "empty" at all, but I just don't see where all the disk space is coming from! I suspect that this database server is deciding to use 4-5x as much disk space to save the same records pulled from the other data servers.

My question is this: Is there a way I can verify the physical disk size of a row? I'd like to compare the size of one row on this database to another "healthy" database. Thanks for any help you can provide!

**UPDATE 1**

I ended up rebuilding the table from a dumped schema due to its size (couldn't leave it alone for another day). After synchronizing the data via the software synch process, the TOAST table was ~35GB; however, I could only account for ~9GB of it from that blob column which should be the longest in terms of values. Not sure where the other 26GB is coming from. CLUSTERed, VACUUM FULLed, and REINDEXed to no avail. The **postgresql.conf** files between the local and remote data servers are **exactly** the same. Is there any reason this database might be trying to store each record with a larger space on disk?

**UPDATE 2 - Fixed**

I finally decided to just completely rebuild the database from the ground up, even going as far as to reinstall the PostgreSQL 8.4 packages on the system. The database path was reinitialized and tablespaces wiped clean. The 3rd party software synchronization process repopulated the tables, and the final size came out to be **~12GB**! Unfortunately, this, in no way, helps to solve what the exact source of the issue was here. I'm going to watch it for a day or two and see if there are any major differences with how the revitalized database is handling the TOAST table and post those results here.

Relation Size
ptrdb04=> SELECT nspname || '.' || relname AS "relation",
ptrdb04-> pg_size_pretty(pg_relation_size(C.oid)) AS "size"
ptrdb04-> FROM pg_class C
ptrdb04-> LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
ptrdb04-> WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ptrdb04-> ORDER BY pg_relation_size(C.oid) DESC
ptrdb04-> LIMIT 2;
relation | size
-------------------------+---------
pg_toast.pg_toast_17269 | 18 GB
fews00.warmstates | 1224 MB
(2 rows)
VACUUM VERBOSE ANALYZE timeseries;
INFO:  "timeseries": found 12699 removable, 681961 nonremovable row versions in 58130 out of 68382 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 105847 unused item pointers.
0 pages are entirely empty.
CPU 0.83s/2.08u sec elapsed 33.36 sec.
INFO:  vacuuming "pg_toast.pg_toast_17269"
INFO:  scanned index "pg_toast_17269_index" to remove 2055849 row versions
DETAIL:  CPU 0.37s/2.92u sec elapsed 13.29 sec.
INFO:  "pg_toast_17269": removed 2055849 row versions in 518543 pages
DETAIL:  CPU 8.60s/3.21u sec elapsed 358.42 sec.
INFO:  index "pg_toast_17269_index" now contains 7346902 row versions in 36786 pages
DETAIL:  2055849 index row versions were removed.
10410 index pages have been deleted, 5124 are currently reusable.
CPU 0.00s/0.00u sec elapsed 0.01 sec.
INFO:  "pg_toast_17269": found 1286128 removable, 2993389 nonremovable row versions in 1257871 out of 2328079 pages
DETAIL:  0 dead row versions cannot be removed yet.
There were 18847 unused item pointers.
0 pages are entirely empty.
CPU 26.56s/13.04u sec elapsed 714.97 sec.
INFO:  analyzing "fews00.timeseries"
INFO:  "timeseries": scanned 30000 of 68382 pages, containing 360192 live rows and 0 dead rows; 30000 rows in sample, 821022 estimated total rows

The only noticeable difference after the rebuild (other than disk usage) is

INFO: "pg_toast_17269": found 1286128 removable, 2993389 nonremovable row versions

as @CraigRinger mentioned in a comment. The nonremovable row count is **much** smaller than before.

**New question:** Can other tables affect the size of another table (via foreign keys and such)? Rebuilding the table did nothing, yet rebuilding the whole database proved to fix the problem.
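For the question above about verifying the physical disk size of a row, a comparison sketch that works on 8.4 (the column name is taken from the `SUM(LENGTH(blob))` query earlier): `pg_column_size()` reports the stored, possibly compressed/TOASTed size, while `octet_length()` reports the raw value size, so running this on both servers shows whether the same rows really occupy more space here.

```pgsql
SELECT pg_column_size(blob) AS stored_bytes,   -- on-disk (compressed/TOASTed) size
       octet_length(blob)   AS raw_bytes       -- uncompressed value size
FROM fews00.timeseries
ORDER BY stored_bytes DESC
LIMIT 10;
```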
BrM13
(113 rep)
Jul 3, 2013, 03:07 PM
• Last activity: Jul 11, 2018, 03:01 PM
0
votes
2
answers
204
views
Test/Verify Log-Shipping Backup
We recently suffered a power/connectivity outage at a data center housing two PostgreSQL 8.4 secondary servers, that remotely back up our two primaries via log-shipping replication. Backup power kept them alive, but they received no WAL files for a little over two days. The backlog on the two primaries approached 3500 before we got connectivity back again.
When I look at the logs in `~postgres/data/pg_log` on the two secondaries, I can see in the current log that the first WAL segment to be processed when we got back online is the numerically consecutive very next one after the last one that made it into the log file from the day of the start of the outage. It *looks* like we didn't miss anything, and the secondaries were caught up in less than a day.
But I'd like to verify this. What would most effectively assuage my paranoia would be to break connectivity again so the primaries would begin to buffer up WAL files, promote the secondary machines to primary in isolation, run enough strictly read-only queries to satisfy myself of the integrity of our backup data, and then put the secondaries back in continuous recovery mode and finally restore connectivity.
Is that doable? Or would the promotion and processing of queries, even read-only ones, alter the secondary database enough that it could not resume continuous recovery afterwards?
I realize that this would not be an issue with streaming replication, but 8.4 does not have that capability.
Clovis_Sangrail
(89 rep)
Apr 11, 2018, 03:54 PM
• Last activity: Apr 11, 2018, 05:45 PM
6
votes
1
answers
4960
views
Using TCP keepalive for terminating orphan postgres queries not working
**Problem**: A user of some application executes a report which involves a heavy SQL query. After some minutes, the user quits the application (or even worse, the app crashes, or the connection to the database server is lost). The query, however, continues its execution, *even though it is no longer needed*.
So I've read about these types of situations and how to handle them, and the suggestion is to use the TCP keepalive configuration.
The problem is that it looks like Postgres is ignoring this configuration. I've lowered the original configuration values both at the "Postgres level" and at the "OS level", with no luck:
- **postgresql.conf**
- tcp_keepalives_idle = 60
- tcp_keepalives_interval = 6
- tcp_keepalives_count = 10
- **Ubuntu**
- echo 60 > /proc/sys/net/ipv4/tcp_keepalive_time
- echo 10 > /proc/sys/net/ipv4/tcp_keepalive_intvl
- echo 6 > /proc/sys/net/ipv4/tcp_keepalive_probe
I even tried reloading/restarting Postgres, but nothing worked; the query keeps running past two minutes until the very end. Is there any additional step I am missing?
**EDIT**: Result of pg_stat_activity:
datid | datname | procpid | usesysid | usename | current_query | waiting | xact_start | query_start | backend_start | client_addr | client_port
----------+--------------------------------------------+---------+----------+----------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------+-------------------------------+-------------------------------+-------------------------------+-------------+-------------
467061608 | libertya_prod_hts_rg1_20161004_qafuncional | 16716 | 16384 | libertya | SELECT dt.signo_issotrx, dt.name as tipodoc, i.ad_org_id, i.ad_client_id, i.documentno, i.c_invoice_id as doc_id, i.c_order_id, i.c_bpartner_id, bp.name as bpartner, i.issotrx, i.dateacct, i.dateinvoiced as datedoc, p.netdays, i.dateinvoiced + (p.netdays::text || ' days'::text)::interval AS duedate, paymenttermduedays(i.c_paymentterm_id, i.dateinvoiced::timestamp with time zone, now()) AS daysdue, i.dateinvoiced + (p.discountdays::text || ' days'::text)::interval AS discountdate, round(i.grandtotal * p.discount * 0.01::numeric, 2) AS discountamt, i.grandtotal AS grandtotalmulticurrency, invoicepaid(i.c_invoice_id, i.c_currency_id, 1) AS paidamtmulticurrency, invoiceopen(i.c_invoice_id, 0, i.c_currency_id,i.c_conversiontype_id) AS openamtmulticurrency, currencybase(i.grandtotal,i.c_currency_id,i.dateinvoiced, i.ad_client_id, i.ad_org_id) AS grandtotal, invoicepaid(i.c_invoice_id, 118, 1) AS paidamt, invoiceopen(i.c_invoice_id, 0,118,i.c_conversiontype_id) AS openamt, i.c_currency_id, i.c_conversiontype_id, i. | f | 2017-04-07 08:06:23.896404-03 | 2017-04-07 08:06:24.513244-03 | 2017-04-07 08:06:02.817331-03 | 127.0.0.1 | 59354
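Two things worth checking from the SQL side (a sketch only; the 30-minute cutoff is an arbitrary example): confirm the keepalive values the server actually applied with `SHOW`, and, as a stop-gap, terminate orphaned backends with `pg_terminate_backend()`, which is available from 8.4 on. `procpid` and `current_query` are the 8.4-era column names in `pg_stat_activity`.

```pgsql
SHOW tcp_keepalives_idle;
SHOW tcp_keepalives_interval;
SHOW tcp_keepalives_count;

-- Kill backends that have been running the same query for more than 30 minutes.
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE current_query NOT IN ('<IDLE>', '<IDLE> in transaction')
  AND query_start < now() - interval '30 minutes';
```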
Federico Cristina
(173 rep)
Apr 6, 2017, 02:40 PM
• Last activity: Feb 12, 2018, 10:42 AM
4
votes
1
answers
15609
views
Where can I download an old version of PostgreSQL for Windows?
I need version 8.4 for a legacy system, but [EnterpriseDB (the official link) only shows 9.x](https://www.enterprisedb.com/downloads/postgres-postgresql-downloads)
On the [PostgreSQL FTP mirror](https://www.postgresql.org/ftp/binary/v8.4.22/binary/) I see the 8.4.22 release, but it only has binaries for Solaris.
Juan Carlos Oropeza
(427 rep)
Jan 31, 2018, 03:22 PM
• Last activity: Jan 31, 2018, 04:44 PM
6
votes
1
answers
5198
views
PostgreSQL 8.4: How to tell if a procedural language is installed or not?
I have a setup program that requires plpgsql to install stored procedures in a PostgreSQL 8.4 database. I need to make sure the language is installed or the app will fail. I don't want to drop the language and re-add it, as that could screw some other things up.
Is there a way to install the language "gently"?
`CREATE LANGUAGE IF NOT EXISTS` does not appear to be valid.
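A sketch of the "gentle" check: the `pg_language` catalog lists the languages installed in the current database, so the setup script can test it first and only issue `CREATE LANGUAGE` when the row is missing.

```pgsql
-- Returns 1 if plpgsql is already installed in this database, 0 otherwise.
SELECT count(*) FROM pg_catalog.pg_language WHERE lanname = 'plpgsql';

-- Only when the count is 0:
CREATE LANGUAGE plpgsql;
```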
Jeremy Holovacs
(1271 rep)
Feb 24, 2013, 04:21 PM
• Last activity: Jan 19, 2018, 02:00 AM
1
votes
1
answers
842
views
Postgres row count
I have a table with a row count of 985 records. But it is really slow to query this table.
# select count(*) from movie;
count
-------
985
Then I run the query with `EXPLAIN ANALYZE`.
The row count shown there is 1636712, which is very different from 985. Why does this happen and how can I fix it?
EXPLAIN ANALYZE SELECT count(*) FROM movie;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=194963.62..194963.63 rows=1 width=0) (actual time=21051.607..21051.607 rows=1 loops=1)
-> Seq Scan on movie (cost=0.00..190852.29 rows=1644529 width=0) (actual time=2813.737..21051.094 rows=985 loops=1)
Total runtime: 21053.506 ms
\d movie
Indexes:
"movie_pkey" PRIMARY KEY, btree (movie_id)
"index_movie_internal_id" btree (movie_internal_id)
"movie_release_date_index" btree (movie_release_date DESC NULLS LAST, movie_internal_title)
"movie_release_date_title_index" btree (movie_release_date, movie_internal_title)
# VACUUM FULL ANALYZE movie; EXPLAIN ANALYZE SELECT count(*) FROM movie;
VACUUM
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=194963.62..194963.63 rows=1 width=0) (actual time=21051.607..21051.607 rows=1 loops=1)
-> Seq Scan on movie (cost=0.00..190852.29 rows=1644529 width=0) (actual time=2813.737..21051.094 rows=985 loops=1)
Total runtime: 21053.506 ms
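A diagnostic sketch for this situation: compare the planner's estimate and page count from `pg_class` with the live/dead tuple counters in `pg_stat_user_tables`. A huge `relpages` or `n_dead_tup` next to 985 live rows points at table bloat that a plain `VACUUM` cannot shrink.

```pgsql
SELECT reltuples::bigint AS estimated_rows, relpages
FROM pg_class
WHERE relname = 'movie';

SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'movie';
```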
Jason
(113 rep)
Nov 30, 2017, 05:15 AM
• Last activity: Nov 30, 2017, 07:13 AM
Showing page 1 of 20 total questions