Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
3 votes • 1 answer • 226 views
Access points of geometric path data in PostgreSQL 8.1
Summary:
I would like to perform basic operations on the points of geometric path data contained in a PostgreSQL 8.1 server which I do not have administrative access to. In particular I would like to be able to iterate over these points using SQL and to access paths as arrays (or something functionally equivalent).
---
Background:
I'm a novice database user working with an unsupported legacy application that uses PostgreSQL 8.1 as its back-end database. I would like to run some manual sanity checks on the data in pure SQL (or PL/pgSQL), i.e. I'd like to avoid having to use an external programming language, which is what I'm doing at the moment.
Several tables in the database include columns containing geometric data. Here is a description of one such table:
psql -U "${appuser}" -d "${appdatabase}" -c '\d+ path_table'
Table "public.path_table"
Column | Type | Modifiers | Description
--------+---------+-----------+-------------
id | integer | not null |
path | path | not null |
Indexes:
"path_table_pkey" PRIMARY KEY, btree (id)
Check constraints:
"path_table_id_check" CHECK (id > 0)
Has OIDs: no
I consulted the section on [Geometric Functions and Operators](http://www.postgresql.org/docs/8.1/static/functions-geometry.html) from the [PostgreSQL 8.1 Manual](http://www.postgresql.org/docs/8.1/static/), but to no avail.
Since this might be relevant, I'll mention that I believe that PostGIS 1.3.6 is installed, i.e.:
$ psql -U "${appuser}" -d "${appdatabase}" -c 'SELECT PostGIS_full_version();'
postgis_full_version
----------------------------------------------------------------------------------------
POSTGIS="1.3.6" GEOS="3.1.1-CAPI-1.6.0" PROJ="Rel. 4.7.1, 23 September 2009" USE_STATS
(1 row)
My understanding is that I can get a list of all installed functions by using the psql \df meta-command, i.e.
$ psql -U "${appuser}" -d "${appdatabase}" -c '\df'
My understanding is also that all of the available PostGIS commands should be prefixed by the string 'ST_' (denoting spatial type), so that the following should produce a list of all the available PostGIS commands:
$ psql -U "${appuser}" -d "${appdatabase}" -Alt -c '\df' | cut -d'|' -f2 | grep -i '^st_'
Inspecting this list didn't produce any likely candidates besides the ST_Dump function, which doesn't appear to apply to a path type:
SELECT st_dump(path_column) FROM table;
ERROR: function st_dump(path) does not exist
HINT: No function matches the given name and argument types. You may need to add explicit type casts.
Apparently the st_dump argument needs to be of type geometry, so I tried following the hint and casting the path to a geometry type:
SELECT st_dump(path_column::geometry) FROM table;
ERROR: cannot cast type path to geometry
I also consulted the [Geometry Accessors](http://postgis.net/docs/manual-1.3/ch06.html#id438545) of the [PostGIS 1.3.6 Manual](http://postgis.net/docs/manual-1.3) but apparently I don't know enough to be able to cull a solution from the documentation.
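To make the goal concrete, here is a rough sketch (untested on 8.1) of the kind of iteration I have in mind, written against the textual form of a path such as ((0,1),(2,3),(4,5)). It assumes PL/pgSQL is available to me and that I can get at the text form of the path column in the first place, which may itself be the hard part on 8.1:
CREATE OR REPLACE FUNCTION path_text_points(p text) RETURNS SETOF point AS $$
DECLARE
    pts text[];
    pt  point;
    i   integer;
BEGIN
    -- '((0,1),(2,3))' -> {'0,1','2,3'}: strip the outer parentheses/brackets,
    -- then split between the points
    pts := string_to_array(btrim(p, '()[]'), '),(');
    FOR i IN 1 .. array_upper(pts, 1) LOOP
        -- rebuild each pair as '(x,y)'; PL/pgSQL converts the text value
        -- to a point on assignment
        pt := '(' || btrim(pts[i], '()') || ')';
        RETURN NEXT pt;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;
-- e.g.  SELECT * FROM path_text_points('((0,1),(2,3),(4,5))');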
igal
(345 rep)
Nov 6, 2015, 03:05 PM
• Last activity: Jun 9, 2025, 12:07 AM
0 votes • 1 answer • 567 views
How can I obtain continuous results with a PostgreSQL query?
I am using PostgreSQL version 8.1. I have a table of dates and tasks. The tasks have start times and end times in the table. The query below does not work as expected.
SELECT sites.abbrev,
(SELECT count(*)
FROM (
SELECT DISTINCT measurements.task
FROM measurements
JOIN tasks ON tasks.id = measurements.task
WHERE measurements.ztime >= '2016-10-10'
AND measurements.ztime <= '2016-10-15')
WHERE sites.abbrev = 'AA-10'
GROUP BY sites.id,
sites.abbrev,
measurements.ztime::date
ORDER BY measurements.ztime::date;
The results are shown below:
abbrev | num_tasks | total_time
--------+-----------+------------
AA-10 | 62 | 36
AA-10 | 62 | 5
AA-10 | 62 | 58
AA-10 | 62 | 28
AA-10 | 62 | 17
(5 rows)
Each row corresponds to one 24-hour period (e.g. 2016-10-10 to 2016-10-15). The problem is that I need num_tasks to show the count for each 24-hour period, not the total number as shown.
PostgreSQL 8.1 does not allow dates to be used with generate_series, and there are no window functions available either.
How can I modify my query so I can achieve the desired results?
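To make the desired output concrete, the per-day distinct task count would come from something like the sketch below; it uses only the column names shown above and leaves out the site filter and the total_time column:
SELECT measurements.ztime::date AS day,
       count(DISTINCT measurements.task) AS num_tasks
FROM measurements
WHERE measurements.ztime >= '2016-10-10'
  AND measurements.ztime <= '2016-10-15'
GROUP BY measurements.ztime::date
ORDER BY measurements.ztime::date;
-- one row per calendar day in the range, with that day's distinct task count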
GIS Student
(11 rep)
Oct 18, 2016, 09:00 PM
• Last activity: Oct 18, 2016, 09:33 PM
3 votes • 1 answer • 1609 views
Converting standard_conforming_strings from off to on when importing Postgres dump
I've got a Postgres 8.1 instance which has standard_conforming_strings set to off (i.e. backslashes in ordinary string literals are treated as escape characters). Recent Postgres versions have this setting on by default (for good reasons).
When importing a dump from 8.1 into 9.x, I could set this setting to off in the 9.x database, but I don't want to; I'd prefer to convert the dump into the proper format. The dump is ~100 GB.
How could I do that? Can I simply turn this setting on in the 8.1 instance? (Would that affect exporting the existing data? Could it corrupt or modify existing data?)
What doesn't work: simply importing data produced by 8.1 into 9.x without setting standard_conforming_strings to off in the 9.x database; the result is corrupted data.
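For context, this is the difference the setting makes to an ordinary string literal; the example below is meant to be run on a 9.x server, where the setting can be changed per session (the values are made up):
SET standard_conforming_strings = off;  -- the 8.1 behaviour
SELECT 'a\nb';   -- the backslash is an escape: a, newline, b
SET standard_conforming_strings = on;   -- the 9.x default
SELECT 'a\nb';   -- the backslash is kept verbatim: a\nb
SELECT E'a\nb';  -- escape behaviour now requires the E'' syntax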
Mate Varga
(139 rep)
Mar 19, 2016, 07:37 PM
• Last activity: Apr 3, 2016, 09:15 AM
2 votes • 0 answers • 1793 views
Data Corruption - ERROR: could not open segment 2 of relation
I am getting the error below on a production database. I am new to PostgreSQL and need help getting this fixed.
2015-10-28 10:25:04,154 [monitor-thread-pool1-thread-6] ERROR [org.hibernate.util.JDBCExceptionReporter] ERROR: could not open segment 2 of relation 1663/98913/98947 (target block 427542): No such file or directory
server1=> select oid::regclass from pg_class where relfilenode =98947;
oid
------------------------
alert_instance_history
(1 row)
server1=> select oid::regclass from pg_class where relfilenode =98913;
oid
-----
(0 rows)
server1=> select * from pg_database where oid=1663;
datname | datdba | encoding | datistemplate | datallowconn | datconnlimit | datlastsysoid | datvacuumxid | datfrozenxid | dattablespace | datconfig | datacl
---------+--------+----------+---------------+--------------+--------------+---------------+--------------+--------------+---------------+-----------+--------
(0 rows)
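If I'm reading the error format correctly, the three numbers 1663/98913/98947 are tablespace OID / database OID / relation filenode, so I believe the lookups should be along these lines:
SELECT spcname FROM pg_tablespace WHERE oid = 1663;           -- usually pg_default
SELECT datname FROM pg_database WHERE oid = 98913;            -- the database itself
SELECT oid::regclass FROM pg_class WHERE relfilenode = 98947; -- the damaged relation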
server1=> select version();
version
--------------------------------------------------------------------------------------------
PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.5 (Debian 1:3.3.5-13)
(1 row)
1. The error refers to two relations (98913 and 98947), but I can only find 98947. How can I check the other one?
2. I just ran a SELECT on the alert_instance_history table and I can see records in it. Do I need to check anything else?
server1=> select min(eventtimestamp) from alert_instance_history;
WARNING: could not write block 427542 of 1663/98913/98947
DETAIL: Multiple failures --- write error may be permanent.
ERROR: could not open segment 2 of relation 1663/98913/98947 (target block 427542): No such file or directory
CONTEXT: writing block 427542 of relation 1663/98913/98947
server1=>
server1=> reindex table alert_instance_history;
WARNING: could not write block 427542 of 1663/98913/98947
DETAIL: Multiple failures --- write error may be permanent.
ERROR: could not open segment 2 of relation 1663/98913/98947 (target block 427542): No such file or directory
CONTEXT: writing block 427542 of relation 1663/98913/98947
3. What options do I have for restoring? Can I restore just one table?
Backups of this database are taken using the pg_dump utility.
4. I checked several other tables at random; all of them produce the same error.
cmproddb=> select min(CREATEDATE) from checkpointdata;
WARNING: could not write block 427542 of 1663/98913/98947
DETAIL: Multiple failures --- write error may be permanent.
ERROR: could not open segment 2 of relation 1663/98913/98947 (target block 427542): No such file or directory
CONTEXT: writing block 427542 of relation 1663/98913/98947
user5499459
(21 rep)
Oct 28, 2015, 06:39 PM
• Last activity: Oct 28, 2015, 08:00 PM