Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1 vote • 1 answer • 65 views
Do I need to CREATE LANGUAGE plpgsql in PostgreSQL 9+
I have a PHP web application using a PostgreSQL database.
Ideally, to install this web application should be easy and only require a database name and user.
Then, the web application loads an SQL file and imports the tables, indices, and functions.
Functions use the `plpgsql` language.
### PostgreSQL 8.4+
At first, the minimum required PostgreSQL version was 8.4.
In order to avoid the #102 error `language "plpgsql" does not exist`, I was using this at the top of the SQL file:
--
-- Name: create_language_plpgsql(); Type: FUNCTION; Schema: public; Owner: postgres
--
CREATE FUNCTION create_language_plpgsql()
RETURNS BOOLEAN AS $$
CREATE LANGUAGE plpgsql;
SELECT TRUE;
$$ LANGUAGE SQL;
SELECT CASE WHEN NOT (
SELECT TRUE AS exists FROM pg_language
WHERE lanname='plpgsql'
UNION
SELECT FALSE AS exists
ORDER BY exists DESC
LIMIT 1
) THEN
create_language_plpgsql()
ELSE
FALSE
END AS plpgsql_created;
DROP FUNCTION create_language_plpgsql();
This will create the `plpgsql` language if it does not exist.
### PostgreSQL 9+
Now, I want to drop PostgreSQL 8.4 support.
Judging from the wiki: https://wiki.postgresql.org/wiki/CREATE_OR_REPLACE_LANGUAGE
> For PostgreSQL 9.0 and newer [...] "CREATE OR REPLACE LANGUAGE" is the native syntax for installing a procedural language with no error if it's already installed.
So I thought I could replace the above SQL code with
CREATE OR REPLACE LANGUAGE plpgsql;
But when I run this SQL query on a database created on cPanel, I get the following error: `ERROR: must be owner of language plpgsql`.
Indeed, on cPanel, `plpgsql` exists, but the user is not the owner of the `plpgsql` language.
### Question
Should I roll back to the old SQL code, which was working fine on both PostgreSQL 8.4 and 9+?
Judging from this link, https://www.postgresql.org/docs/9.0/plpgsql-overview.html
> In PostgreSQL 9.0 and later, PL/pgSQL is installed by default. However it is still a loadable module, so especially security-conscious administrators could choose to remove it.
Or can I be safe without checking whether the `plpgsql` language exists at all, and assume `plpgsql` will _always_ be available?
François J.
(21 rep)
Mar 26, 2025, 05:42 PM
• Last activity: Mar 26, 2025, 07:15 PM
0 votes • 0 answers • 61 views
How do I successfully migrate a (very) old PostgreSQL database with PostGIS objects to a new version of PostgreSQL and PostGIS?
I've been trying to migrate a PostgreSQL database on version 9.0.1 with PostGIS objects to Postgres version 12.9 (both on Linux) and I've been encountering numerous errors. I believe the source version of PostGIS is 1.5; however, when I execute the query `SELECT PostGIS_Full_Version();` in the source db I get the following:
> ERROR: function postgis_full_version() does not exist
The only reason I suspect it is version 1.5 is because doing a **"regular"** pg_dump copy (as if there are no postgis objects) results in seeing the following error in the log:
> ERROR: could not access file "$libdir/postgis-1.5": No such file or
> directory
The destination PostGIS version is 3.0.3.
In previous migrations from this same source instance I've never had any issues using the available `postgis_restore.pl` script. One difference with this database is that there is a **postgis schema**. I also have a feeling this database might have been previously migrated from an even older version of PostgreSQL/PostGIS, as it has the word `_Legacy` in the database name.
Common examples of errors seen in the log include:
> ERROR: operator does not exist: geometry postgis.&& geometry
>
> ERROR: type "postgis.box2d" does not exist
>
> invalid command \N
Also, any tables with PostGIS datatypes do not copy over. I perform all the usual steps like adding the PostGIS extension and topology extension. I also tried migrating this database to PostgreSQL 10, but got the same result. So my hunch is maybe I should pg_dump the postgis schema first and then try the `postgis_restore.pl` script and see if all the tables/schemas copy over? I'm fairly new to PostgreSQL, so I am kind of at a loss here. I've also tapped my local Esri expert here and they didn't really know what to say.
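As a first diagnostic step, it may help to confirm which schema the PostGIS functions actually live in on the source database. This is only a sketch against the system catalogs, not a PostGIS-specific tool:

```sql
-- List the schemas that contain PostGIS functions, with a rough count of each.
-- On a stock install this is usually "public"; here it may be "postgis",
-- which would explain errors like: type "postgis.box2d" does not exist.
SELECT n.nspname AS schema, count(*) AS postgis_functions
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE p.proname LIKE 'postgis%'
GROUP BY n.nspname;
```

If everything lives in a `postgis` schema, the restore target would need that schema on the `search_path` (or the dump rewritten) before the geometry types resolve.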
Eric Swiggum
(694 rep)
Aug 2, 2024, 12:44 AM
0 votes • 1 answer • 66 views
How can I find or log inbound traffic connecting to a PostgreSQL database instance?
I have a Postgres instance where I need to find out who/what is connecting. The installed version of Postgres is 9.0.1. The instance sits on a Linux box, SUSE I believe. Where could I look if it is already logging this? If it isn't, what could I enable? I need the origin hostname and/or IP; I don't really need any other info outside of that.
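For current sessions, `pg_stat_activity` already records the client address, and historical logging can be switched on in `postgresql.conf`. A sketch for 9.0 (note that the `client_hostname` column only arrived in 9.1, so on 9.0 only the IP is available here):

```sql
-- Who is connected right now, from where, to which database
SELECT client_addr, client_port, usename, datname
FROM pg_stat_activity;

-- In postgresql.conf (then reload), to log every new connection with its host:
--   log_connections = on
--   log_line_prefix = '%h %u %d '
```

With `log_connections` on, each accepted connection is written to the server log with the remote host from `%h`.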
Eric Swiggum
(694 rep)
Jul 18, 2024, 09:28 PM
• Last activity: Jul 18, 2024, 10:14 PM
14 votes • 2 answers • 11428 views
Select longest continuous sequence
I am trying to construct a query in PostgreSQL 9.0 that gets the longest sequence of continuous rows for a specific column.
Consider the following table:
lap_id (serial), lap_no (int), car_type (enum), race_id (int FK)
Where `lap_no` is unique for each `(race_id, car_type)`.
I would like the query to produce the longest sequence for a given `race_id` and `car_type`, so it would return the highest such count as an `int` (or long).
With the following data:
1, 1, red, 1
2, 2, red, 1
3, 3, red, 1
4, 4, red, 1
5, 1, blue, 1
6, 5, red, 1
7, 2, blue, 1
8, 1, green, 1
For `car_type = red` and `race_id = 1` the query would return 5 as the longest sequence of the `lap_no` field.
I found a similar question here; however, my situation is a bit more straightforward.
(I would also like to know the longest sequence for a given `car_type` across all races, but was planning to work that out myself.)
DaveB
(319 rep)
Feb 25, 2013, 04:06 PM
• Last activity: Jul 29, 2022, 11:07 AM
0 votes • 1 answer • 124 views
Postgres 9.0 recovery from inadvertent drop-all-tables event
Due to an accidental config file check-in, a unit test wiped out our entire production database (it did a drop, then re-created the tables). Trying to restore from backup revealed the recent backups were corrupted, and the only valid backup we have is just over 1 year old.
What are my options for recovering the data (if any exist)? Autovacuum may have been turned on. I saw a tool, pg_dirtyread, but it appears to require Postgres 9.1 or higher, and from what I can tell it does not help with drop/create table scenarios.
John P
(411 rep)
Jan 26, 2015, 06:24 AM
• Last activity: Mar 18, 2021, 03:02 PM
5 votes • 2 answers • 104901 views
PostgreSQL Error: server closed the connection unexpectedly
I cannot connect to the PostgreSQL server from pgAdmin III; I get this error:
> Error connecting to the server: server closed the connection
> unexpectedly
>
> This probably means the server terminated abnormally before or while
> processing the request.
I get the same error when I try connecting from `psql`.
I got this error 2 days ago, and after restarting the computer the problem disappeared, but this time I have restarted it many times in vain.
I'm using PostgreSQL 9.0.5 on Windows 7 with ArcSDE 10.1 (ArcGIS spatial database add-on); I have deactivated the antivirus and firewall.
I have tried to reinstall PgSQL but the problem persists.
How can I connect to my PgSQL service and get rid of this error?
geogeek
(201 rep)
Dec 6, 2013, 09:25 AM
• Last activity: Jan 4, 2021, 02:13 PM
18 votes • 1 answer • 66014 views
Pass an array or record to a function in PostgreSQL?
I have a task to pass arrays, records, and in some cases array of records as a parameter to functions in PostgreSQL.
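As a minimal illustration of the array case (the function name here is mine, not from the question):

```sql
-- A function taking an int[] parameter; unnest() expands it back to rows
CREATE OR REPLACE FUNCTION sum_ints(vals int[]) RETURNS bigint AS $$
    SELECT sum(v) FROM unnest(vals) AS t(v);
$$ LANGUAGE sql;

SELECT sum_ints(ARRAY[1, 2, 3]);  -- 6
```

For records, a function can declare a parameter of a table's row type (e.g. `some_table`), and an array of that composite type (`some_table[]`) can be passed the same way.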
Worker
(519 rep)
May 17, 2011, 06:46 PM
• Last activity: Nov 4, 2020, 05:44 AM
5 votes • 3 answers • 4837 views
How to reclaim disk space after delete without rebuilding table?
Our PostgreSQL 9.0 Windows production server is running low on space.
In our 100GB database, we have a large table containing TOASTed binary data. We have deleted some rows and need to return the space to the O/S.
We do not have enough space to do a full rewrite of the table, and all my reading of `CLUSTER`, `VACUUM FULL` and `pg_repack` is that they need to rewrite the table. My google-fu has been insufficient so far to find anything else.
Some downtime is acceptable (~2 hours); however, backup/restoration is too slow for our purposes, nor am I keen on dropping the database in between steps.
QUESTION: How can I return the disk space to the O/S without doing a full table rewrite?
(This is a production server so any solutions must be reputable/recommended/supported etc).
(P.S. A separate much larger disk is available which could be used, if the table could be rebuilt on it somehow. Tablespaces?)
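On the P.S.: moving the table to a tablespace on the larger disk is still a rewrite, but one that lands on the other disk, so the full volume does not have to hold two copies. A sketch with hypothetical names and paths:

```sql
-- Requires superuser; the directory must already exist
-- and be owned by the postgres OS user
CREATE TABLESPACE spill LOCATION '/mnt/bigdisk/pgspill';

-- Rewrites big_table (including its TOAST data) onto the new disk,
-- holding an exclusive lock on the table for the duration
ALTER TABLE big_table SET TABLESPACE spill;
```

After the move, the space on the original volume is returned to the O/S, at the cost of the table now living on the second disk.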
Brendan Hill
(301 rep)
Apr 10, 2014, 11:34 PM
• Last activity: Apr 19, 2019, 11:00 PM
9 votes • 3 answers • 26138 views
Postgres ERROR: tuple concurrently updated
I have a large table, test, in which user_id 2 has 500,000 records. I want to delete these records in chunks of 100, but I get an error.
Here is my query:
delete from test where test_id in (select test_id
from test where User_id = 2 limit 100 )
> ERROR: tuple concurrently updated
What is the issue, and how can I solve it?
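If the error comes from several sessions running this delete concurrently, one way to serialize them is to lock the candidate rows in the subquery. A sketch (assuming the column is really named `user_id`; identifiers are folded to lowercase anyway):

```sql
-- FOR UPDATE row-locks the 100 selected rows, so a second session
-- running the same statement blocks instead of colliding
DELETE FROM test
WHERE test_id IN (
    SELECT test_id FROM test
    WHERE user_id = 2
    LIMIT 100
    FOR UPDATE
);
```

Repeating this in a loop until zero rows are affected deletes the whole set in chunks.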
Saddam Khan
(642 rep)
Jan 13, 2017, 05:27 AM
• Last activity: Jan 16, 2017, 04:41 AM
0 votes • 1 answer • 470 views
Postgres: Missing data after Binary Copy - Is VACUUM FULL required?
I'm running Postgres 9.0 on a relatively big (10TB) and old database. I must move that database to new server hardware, so I copied the whole `$PGDATA` directory via `rsync` of an LVM snapshot and used archive/restore commands to get all the changes after the copy process. I switched to the new server using the trigger_file.
Everything looked good so far, but after a few hours I'm seeing some tables where data is not showing up in `SELECT`s (I think that was the case right after copying, but I have no evidence).
I did `ANALYZE` and `VACUUM` on the table: no data there. But when I do a `VACUUM FULL` on the table, the data appears.
I couldn't find anything comparable in the Postgres documentation except the transaction ID wraparound. Autovacuum was disabled for half a year on the old server. A week ago I activated it, and autovacuum did its job, as far as I saw.
On the old database server, the data from my test table is returned by `SELECT`s.
Did anyone encounter such a problem? I'm looking for a query that can show me which of the tables needs such a `VACUUM FULL`.
Why I cannot copy the data directory at the file-system level is also hard for me to understand at the moment. Did I miss anything from the documentation that needs to be done after using the trigger file?
I don't think it's relevant, but the version at the old server was 9.0.18 and at the new one 9.0.23.
Schors
(21 rep)
Jun 13, 2016, 06:19 PM
• Last activity: Dec 26, 2016, 04:11 PM
2 votes • 1 answer • 201 views
Setup Master-Slave replication with DML changes in Slave
I have a master database (PostgreSQL 9.0) and I need to set up streaming replication to a slave. The slave db should be able to read/write and make schema-level changes too, which shouldn't be written back to the master. Can you please explain how I can do this?
user24908
(21 rep)
Jun 13, 2013, 05:06 AM
• Last activity: Nov 24, 2016, 06:35 AM
5 votes • 1 answer • 4400 views
Creating crosstab() pivot table in PostgreSQL 9.0
I have a rather complicated issue for myself which I hope someone can help me with.
I want to use PostgreSQL to enable me to create pivot tables from a geometry-less table.
To keep it simple I will just show the table structure I want to use for the pivot, and then use the methods given to create others, I hope :)
The fields I want to use are `postcode_nospace_` and `ageband`.
`ageband` will provide the columns across the top, with `postcode_nospace_` as the rows and a count of each postcode as the data.
There are 2132 records, of which some share a postcode, hence the count.
CREATE TABLE adult_social_care.activities_in_localities_asc
(
ogc_fid integer NOT NULL,
sort numeric(5,0),
ageband character(12),
postcode_nospace_ character(8),
wkb_geometry geometry,
CONSTRAINT activities_in_localities_asc_pkey PRIMARY KEY (ogc_fid)
);
UPDATE:
Here is the outcome I want to achieve, as shown in an Excel pivot table with the same data.
postcode 18_24 25_34 35_44 45_54 55_64 65_74 Total Count
----------------------------------------------------------------------------
BB115DE 1 2 2 3 8
FY38LZ 1 1 2
From looking around I have compiled the following SQL query. It groups by postcode and creates the field names required; however, the fields are blank. I would ideally also want a `total_count` column at the end of the table.
SELECT * FROM crosstab(
'SELECT postcode_nospace_, ageband, count(ageband) as total_count
FROM adult_social_care.activities_in_localities_asc
GROUP BY postcode_nospace_, ageband
ORDER BY postcode_nospace_'
,$$VALUES ('18-24'::text), ('25-34'), ('35-44'), ('45-54'), ('55-64'), ('65-74'), ('75-84'), ('85-94'), ('95 AND OVER')$$)
AS ct("postcode" text, "18-24" numeric, "25-34" numeric,"35-44" numeric, "45-54" numeric, "55-64" numeric, "65-74" numeric, "75-84" numeric, "85-94" numeric, "95 AND OVER" numeric);
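For the missing `total_count` column, one option (a sketch, assuming the crosstab query above has been wrapped in a hypothetical view `vw_ct`) is to compute per-postcode totals with a plain aggregate and join them back on:

```sql
SELECT ct.*, totals.total_count
FROM vw_ct ct
JOIN (
    -- char(8) may carry trailing padding, hence the explicit trim to text
    SELECT trim(postcode_nospace_) AS postcode, count(*) AS total_count
    FROM adult_social_care.activities_in_localities_asc
    GROUP BY trim(postcode_nospace_)
) totals USING (postcode)
ORDER BY postcode;
```

This keeps the crosstab itself unchanged and adds the row totals as a separate, easily verified aggregate.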
daniel franklin
(145 rep)
Sep 8, 2015, 08:15 AM
• Last activity: Sep 9, 2015, 08:23 AM
3 votes • 1 answer • 6442 views
Add column with a sum total to crosstab() query in PostgreSQL 9.0
Following on from my previous question:
https://dba.stackexchange.com/questions/114428/creating-crosstab-pivot-table-in-postgresql-9-0
I managed to create a pivot table for `ageband` using the `crosstab()` function. I can use this to create either a view or a table from the base geometry-less table.
However, this still isn't much use, as I need to link it to the `gazetteers_and_addresses.unit_postcode` table in order to assign geometries for further analysis.
I will attach the table structure for both tables and the original code that worked to create my crosstab.
CREATE OR REPLACE VIEW adult_social_care.vw_ageband AS (
SELECT * FROM crosstab(
'SELECT postcode_nospace_, ageband, count(ageband) as total_count
FROM adult_social_care.activities_in_localities_asc
GROUP BY postcode_nospace_, ageband
ORDER BY postcode_nospace_'
,$$VALUES ('18-24'::text), ('25-34'), ('35-44'), ('45-54'), ('55-64'), ('65-74'), ('75-84'), ('85-94'), ('95 AND OVER')$$)
AS ct("postcode" text, "18-24" numeric, "25-34" numeric,"35-44" numeric, "45-54" numeric, "55-64" numeric, "65-74" numeric, "75-84" numeric, "85-94" numeric, "95 AND OVER" numeric));
Table definitions:
`activities_in_localities_asc`:
CREATE TABLE adult_social_care.activities_in_localities_asc (
ogc_fid integer NOT NULL,
sort numeric(5,0),
ageband character(12),
postcode_nospace_ character(8),
wkb_geometry geometry,
CONSTRAINT activities_in_localities_asc_pkey PRIMARY KEY (ogc_fid)
);
`unit_postcode`:
CREATE TABLE gazetteers_and_addresses.unit_postcode (
oogc_fid serial NOT NULL,
pc_area character(10),
postcode_nospaces text,
wkb_geometry geometry
);
If possible too, add another field at the end which gives a sum across all the fields, a `total_count`.
If this can be done then I can create dynamic views on different factors using one geometry-less table and `unit_postcode`.
daniel franklin
(145 rep)
Sep 8, 2015, 02:52 PM
• Last activity: Sep 9, 2015, 02:54 AM
3 votes • 1 answer • 9713 views
Find substrings within between 2 string fragments
I am trying to populate a view in PostGIS in Postgres 9.0 and in this view I want it to contain a substring based on 2 string positions. See below for my code.
CREATE OR REPLACE VIEW vw_actions AS
SELECT ls.f_table_schema, ls.f_table_name,
(SELECT substr(ls.attribute_actions_text,
strpos(ls.attribute_actions_text, 'name="')+6,
strpos(ls.attribute_actions_text, '"/>') -
strpos(ls.attribute_actions_text, 'name="'))) AS actions
FROM layer_styles ls;
The outcome is that it doesn't like negative numbers when using strpos. I can get it to go forward 6 characters to remove the 'name="' from the returned substring, but cannot remove the '"/>'.
It returns the following:
View SHED Database"/>
where I want it return:
View SHED Database
Any suggestions would be greatly appreciated.
**ADDITION**: I have found out that if I were using 9.1 I could have used `strposrev`, and I think the following code would have worked:
CREATE OR REPLACE VIEW vw_actions AS
SELECT ls.f_table_schema, ls.f_table_name,
(SELECT substr(ls.attribute_actions_text::text,
strpos(ls.attribute_actions_text::text, 'name="'::text)+6,
strposrev(ls.attribute_actions_text::text, '"/>'::text)+3 -
strpos(ls.attribute_actions_text::text, 'name="'::text))) AS actions
FROM layer_styles ls;
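A simpler route than arithmetic on `strpos` positions, and one that already works on 9.0, is `substring()` with a POSIX regular expression: the parenthesised group is what gets returned. A sketch against the same column:

```sql
-- Extract whatever sits between name=" and the closing quote
SELECT ls.f_table_schema, ls.f_table_name,
       substring(ls.attribute_actions_text FROM 'name="([^"]*)"') AS actions
FROM layer_styles ls;
```

Applied to `... name="View SHED Database"/> ...` this returns `View SHED Database` with no trailing `"/>`.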
daniel franklin
(145 rep)
Jul 2, 2015, 01:58 PM
• Last activity: Jul 3, 2015, 03:09 PM
11 votes • 3 answers • 2305 views
Where does the magic column "name" come from?
I got this by accident:
db=> select name from site;
ERROR: column "name" does not exist
LINE 1: select name from site;
^
db=> select site.name from site;
name
---------------
(1,mysitename)
(1 row)
The second query returns a tuple containing a whole row. Using Postgres 9.0.1.
Edit: the definition of site, by request. It doesn't really matter; this quirk works for any table.
db=> \d site
Table "public.site"
Column | Type | Modifiers
--------+---------+---------------------------------------------------
id | integer | not null default nextval('site_id_seq'::regclass)
title | text | not null
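If I understand the mechanism correctly, there is no hidden column here: `name` is a built-in 63-byte identifier type, and when the parser cannot resolve `site.name` as a column it falls back to treating the expression as a conversion of the whole row to that type. A sketch of the roughly equivalent explicit form (later PostgreSQL versions reject the attribute notation with a "column does not exist" error):

```sql
-- The row is rendered through its text representation, then coerced
-- to the identifier type "name", giving output like (1,mysitename)
SELECT CAST(site AS name) FROM site;
```

The same fallback explains similar surprises with `.text` and other type names used in attribute position.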
hegemon
(905 rep)
Apr 17, 2011, 11:20 AM
• Last activity: Jun 13, 2014, 04:35 PM
2 votes • 3 answers • 2231 views
Vacuum settings for mostly append-only data
I have a table with the following characteristics:
1. We INSERT a few 100k rows each day
2. We never UPDATE the rows
3. We DELETE "old" data once a week
From my shallow knowledge of Postgres VACUUM, it seems like this table might be a good candidate for a non-standard VACUUM strategy. A few questions I have...
1. Apart from the weekly purging of old data, will a VACUUM _do_ anything? Seems to me there is nothing to compact/clean if we're only doing INSERTs.
2. Would it be a horrible idea to simply disable AUTOVACUUM on that table and manually VACUUM FULL after our weekly DELETE?
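On question 2: `VACUUM FULL` rewrites the table under an exclusive lock, so a plain `VACUUM` after the weekly `DELETE` is usually the better fit; it marks the dead space for reuse by the next week's inserts rather than returning it to the O/S. Disabling autovacuum for just this table is a per-table storage parameter; a sketch with a hypothetical table name:

```sql
-- Turn off autovacuum for this table only (global autovacuum stays on;
-- anti-wraparound vacuums will still run when required)
ALTER TABLE weekly_log SET (autovacuum_enabled = false);

-- After the weekly DELETE, reclaim dead rows and refresh statistics manually
VACUUM ANALYZE weekly_log;
```

With an insert-only workload between purges, the weekly manual vacuum plus the delete is typically all the maintenance the table needs.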
Larsenal
(123 rep)
Aug 13, 2013, 06:41 PM
• Last activity: Aug 14, 2013, 01:18 AM
1 vote • 1 answer • 3105 views
Postgresql querying trends
Firstly, apologies if this is a duplicate. I am fairly new to SQL, so I'm not sure of the correct terminology to use in my searches.
So I have a database which records motor races, with the following simplified schema:
race_table
==========
race_id PK
race_date timestamp
average_speed Decimal
max_speed Decimal
drivers
=======
driver_id PK
driver_name text
date_started timestamp
driver_races
============
driver_id FK
race_id FK
If each driver has ~1000 races spread over 2-3 years, how would I go about querying the overall % change (positive or negative) in their average race speed for a given date range? For example:
% Change in first 6 months
Joe Smith - 5% increase
Andy James - 4% increase
% Change in first 12 months
Joe Smith - 8% increase
Lewis May - 6% increase
UPDATE: More detail on % change
The % change could be calculated using linear regression (a least-squares fit would be suitable); the average change is effectively the y-difference on a line of best fit, where each point is a race, x is the race_date and y is the average_speed for that race.
Postgres's regr_slope will give the gradient of the line, which is effectively the same as the % change.
SELECT regr_slope(r.average_speed, EXTRACT(EPOCH FROM r.race_date)) as slope
FROM race_table as r, driver_races as dr
WHERE dr.race_id = r.race_id
AND dr.driver_id = 1
This gives the figure I want, but I now need to apply it against all drivers, sorted by 'slope'.
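Extending that query to all drivers is a matter of grouping by driver and ordering by the computed slope. A sketch, joining in `drivers` for the name:

```sql
SELECT d.driver_id, d.driver_name,
       regr_slope(r.average_speed, EXTRACT(EPOCH FROM r.race_date)) AS slope
FROM race_table r
JOIN driver_races dr ON dr.race_id = r.race_id
JOIN drivers d ON d.driver_id = dr.driver_id
GROUP BY d.driver_id, d.driver_name
ORDER BY slope DESC;
```

Restricting to, say, each driver's first six months would add `WHERE r.race_date < d.date_started + interval '6 months'` before the grouping.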
DaveB
(319 rep)
Apr 3, 2013, 02:31 PM
• Last activity: Apr 4, 2013, 11:03 PM
8 votes • 1 answer • 7949 views
How can I correctly choose maximum number of occurrences of a string while grouping by another field?
I am using PostgreSQL 9.0. I have the following fields in a table: `id, name`.
id name
1 John
1 Mary
1 Mary
1 Mary
1 John
1 Mary
3 Paul
3 Paul
3 George
. .
. .
For each `id`, I want to select the name that occurs the most. How can I do that?
I tried with the following query but it doesn't work:
select id, max(name)
from table
group by id;
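Since the `mode()` ordered-set aggregate only arrived in 9.4, one way on 9.0 (a sketch, using the question's column names with a placeholder table name) is to count per `(id, name)` pair and keep the top row per `id` with `DISTINCT ON`:

```sql
-- DISTINCT ON keeps the first row per id after the ORDER BY,
-- i.e. the name with the highest count (ties broken arbitrarily)
SELECT DISTINCT ON (id) id, name
FROM (
    SELECT id, name, count(*) AS cnt
    FROM mytable
    GROUP BY id, name
) counted
ORDER BY id, cnt DESC;
```

With the sample data this returns Mary for id 1 and Paul for id 3.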
Tudor
(191 rep)
Dec 15, 2012, 07:59 AM
• Last activity: Dec 16, 2012, 08:48 PM
Showing page 1 of 18 total questions