
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

16 votes
3 answers
38084 views
Is it possible to configure PostgreSQL to automatically close idle connections?
Clients connect to our PostgreSQL 8.3 database but leave the connections open. Is it possible to configure PostgreSQL to close those connections after a certain amount of inactivity?
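A minimal sketch of the usual workaround on servers this old (there is no built-in idle timeout before idle_in_transaction_session_timeout in 9.6): find idle backends in pg_stat_activity from a scheduled job and terminate them externally. Column names below are the pre-9.2 ones (procpid, current_query), and the 10-minute cutoff is just an example.

    -- Idle client backends, oldest first (8.3-era catalog columns).
    SELECT procpid, usename, client_addr, query_start
    FROM   pg_stat_activity
    WHERE  current_query = '<IDLE>'
      AND  query_start < now() - interval '10 minutes'
    ORDER  BY query_start;
    -- On 8.4+ each procpid could be fed to pg_terminate_backend();
    -- on 8.3 the process has to be signalled at the OS level, or the
    -- connections routed through a pooler such as pgbouncer.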
Stephan (1513 rep)
Sep 12, 2012, 03:07 PM • Last activity: Jan 16, 2023, 06:09 PM
5 votes
1 answer
4422 views
Passing ROWTYPE parameter to EXECUTE
I am developing a function in Postgres which, for each record of a query, needs to recover the result of a check performed by one of a set of functions. Only one of these functions will return the correct value. The functions share the prefix 'fn_condition_' and receive an object of type 'my_table' as a parameter. Since the number of checking functions is unknown, I decided to consult the Postgres catalog, searching pg_catalog.pg_proc for functions with the prefix 'fn_condition_', and execute them dynamically with EXECUTE. My problem is the correct way to pass the parameter to EXECUTE.

    create or replace function test_conditions() returns void as $$
    declare
        v_record   my_table%rowtype;
        v_function pg_proc%rowtype;
    begin
        set search_path = 'pg_catalog';
        for v_record in (select * from my_table where id in (1,2,3)) loop
            for v_function in (
                SELECT p.proname
                FROM pg_namespace n
                JOIN pg_proc p ON p.pronamespace = n.oid
                WHERE n.nspname = 'operacional'
                  and p.proname like ('fn_condition\\_%')
                order by p.proname) loop
                -- execute 'select ' || v_function.proname || '(' || v_record || ')'; -- ???
            end loop;
        end loop;
    end;
    $$ language plpgsql;

How do I pass v_record properly in the commented EXECUTE command in the function above?

    execute 'select ' || v_function.proname || '(' || v_record || ')'; -- ???

Example function:

    create or replace function fn_condition_1(p_record my_table) returns bigint as $$
    begin
        if ($1.atributo1 > $1.atributo2) then
            return 1;
        end if;
        return null;
    end;
    $$ language plpgsql;
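A sketch of how this is commonly solved, not necessarily the asker's final code: on PostgreSQL 8.4 or later, EXECUTE accepts parameters via USING, so the record can be passed as a value instead of being concatenated into the SQL string. v_result is a hypothetical extra bigint variable for capturing the function's return value.

    -- Inside the inner loop (requires 8.4+ for EXECUTE ... USING):
    execute 'select ' || quote_ident(v_function.proname) || '($1)'
        into v_result
        using v_record;
    -- On servers without EXECUTE ... USING, one common trick is to pass the
    -- record's text form and cast it back to the composite type, e.g.
    --   quote_literal(v_record::text) || '::my_table'
    -- (verify the composite-to-text cast is available on your version).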
Geison Santos (133 rep)
Jan 22, 2016, 03:26 PM • Last activity: Nov 18, 2022, 12:18 AM
11 votes
6 answers
51109 views
PostgreSQL import CSV File causes Syntax Error
I'm attempting to import a CSV file into a database via the COPY command; however, I get the (seemingly common) error that I need to be a superuser and that I should use \copy instead. However, when using \copy, I get a syntax error:

    ERROR: syntax error at or near "\"
    LINE 1: \copy

with the caret pointing to "\". Here's my query:

    \copy tablename(column2, column3, column4, column5) from '/home/uploads/data.csv' WITH DELIMITER ',' CSV HEADER'

I tried both COPY and \copy; the first gives me the superuser error, the latter gives me that syntax error. Any idea how to fix it and make it work? I'm executing the command via myPgAdmin's SQL input field. The only other question I have concerns importing specific columns via tablename(column2, column3, and so on). Is that the correct syntax for that?
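For context, \copy is a psql meta-command rather than SQL, which is why a GUI SQL input field rejects it; also note the stray trailing quote after HEADER in the command as posted, which would be a syntax error on its own. A sketch of the two usual routes, reusing the question's file path and column list ("yourdb" is a placeholder):

    # From a shell on the client machine, letting psql interpret \copy:
    psql -d yourdb -c "\copy tablename(column2, column3, column4, column5) from '/home/uploads/data.csv' with delimiter ',' csv header"

    -- Or, connected as a superuser, server-side COPY (the file must live on the server):
    COPY tablename(column2, column3, column4, column5)
    FROM '/home/uploads/data.csv' WITH DELIMITER ',' CSV HEADER;

The column list in parentheses is valid syntax for both COPY and \copy.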
antjanus (253 rep)
Aug 23, 2012, 07:13 PM • Last activity: Mar 8, 2022, 01:43 PM
0 votes
0 answers
777 views
PostgreSQL suddenly stopped working: "incomplete startup packet" error
I've searched and read a lot about this kind of error, but none of it answered my question. Why does this error happen all of a sudden? Could it be a memory problem on the server? (I checked the disk space and RAM, but they didn't seem to show any related problem.)

PostgreSQL version: 8.3
OS: CentOS

    CLST LOG: incomplete startup packet
    CLST LOG: received fast shutdown request
    CLST LOG: aborting any active transactions
    CLST FATAL: terminating connection due to administrator command
    CLST LOG: autovacuum launcher shutting down
    CLST FATAL: terminating connection due to administrator command
    (a lot of these errors follow, about 15 times)
    CLST LOG: shut down
    CLST FATAL: the database system is shutting down
    CLST LOG: the database system was shut down on 2021-12-29 11:12:00 CLST
    CLST LOG: autovacuum launcher started
    CLST LOG: Database system is ready to accept connections

Then it started to work just fine.
EmiliOrtega (1 rep)
Dec 30, 2021, 12:23 AM
1 vote
1 answer
46 views
Import Postgres 8.3 database in Postgres 10.5
I have a database in Postgres 8.3 (and PostGIS 4.2), which I need to import into Postgres 10.5. I am a DB newbie and would appreciate any pointers to get started.
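A sketch of the standard dump-and-restore route (host names, user names, database names and file name are placeholders): dump the old cluster using the new version's pg_dump (PostgreSQL 10's pg_dump can still talk to an 8.3 server), then restore into 10.5. The PostGIS objects typically need the PostGIS extension installed in the target database before restoring, and very old PostGIS dumps sometimes need the postgis_restore.pl helper shipped with PostGIS.

    # Run from the new (10.5) installation so its pg_dump connects to the 8.3 server:
    pg_dump -h old-host -p 5432 -U olduser -Fc -f olddb.dump olddb

    # Create the target database on 10.5 and restore into it:
    createdb -h new-host -U newuser newdb
    pg_restore -h new-host -U newuser -d newdb olddb.dump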
mitchus (113 rep)
Oct 8, 2018, 07:28 PM • Last activity: Jul 15, 2021, 01:39 PM
0 votes
1 answer
1421 views
Postgres 8.3 correct deletion of files in pgsql_tmp
We are talking about the PostgreSQL 8.3 RDBMS here, so no pg_terminate_backend() is possible with this version. Sometimes we have to kill running processes at the operating-system level (kill -9 PID) to resolve issues with the max_connections value being reached. In such cases, we targeted long-running SELECT queries to kill. As a result, we found that our filesystem quickly fills up to 98%, showing 1500+ files in the pgsql_tmp directory. Some orphan files are the expected result of this kind of maneuver, since temp files should be deleted during proc_exit processing and aggressively terminating running processes is not the best option. So, to get rid of this "trash", what is our best option:

1. Restart the postmaster and expect the RDBMS to clean out all the temp directories by itself; or
2. stop the postmaster, manually delete the files in $PGDATA/pgsql_tmp/ and then start the postmaster again; or
3. without stopping the server, manually delete the files in $PGDATA/pgsql_tmp/ which are older than the current day.

Please justify your answer(s).
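For reference, a hedged sketch of how option 3 is usually made safe: 8.x temp file names embed the owning backend's PID (pgsql_tmp<PID>.<N>), so a file whose PID no longer matches a running postgres process is orphaned and can be removed without stopping the server. Verify the actual directory first; in many layouts it is $PGDATA/base/pgsql_tmp rather than $PGDATA/pgsql_tmp.

    # List candidate temp files and note the PID embedded in each name:
    ls $PGDATA/base/pgsql_tmp/
    # A file such as pgsql_tmp12345.0 belongs to backend PID 12345; if
    #   ps -p 12345
    # shows no postgres process, the file is orphaned and safe to delete.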
dnaranjor (5 rep)
Feb 18, 2019, 09:34 PM • Last activity: Feb 19, 2019, 03:39 PM
2 votes
3 answers
248 views
PostgreSQL: 8.3/8.4 interface changes?
I would like to upgrade a PostgreSQL 8.3 server to PostgreSQL 8.4. There are many software components that interact with the server over the network, and I am concerned with compatibility issues. Is there any interface change between the different versions of PostgreSQL, as far as an external component is concerned?
Adam Matan (12079 rep)
Aug 21, 2011, 05:57 AM • Last activity: Nov 30, 2017, 07:04 AM
3 votes
2 answers
1075 views
PostgreSQL 8.3 - issues with autovacuum
I posted this on StackOverflow and it was suggested the question was better suited here. I'm trying to encourage the use and monitoring of autovacuum in some PostgreSQL 8.3 databases. One objection I often hit is that people don't "trust" autovacuum, or that there are bugs in autovacuum in 8.3, which means it's ignored in favour of scheduled vacuuming. Mostly our tables are small and this approach appears to work. However, with our larger (and heavily updated) tables this really doesn't work: dead tuple counts increase, exceed max_fsm_pages, the tables don't get cleaned up, and so on. I'm just wondering if anyone has a reference for autovacuum in 8.3 being buggy or not working. My own experience has shown that autovacuum works fine and, where necessary, adding entries to the pg_autovacuum table does the trick. I'd like to understand the problem with autovacuum (if one exists).
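A small monitoring query that helps make this discussion concrete (these pg_stat_user_tables columns already exist in 8.3): it shows dead-tuple counts and when autovacuum last touched each table. A sketch only; the ordering and LIMIT are arbitrary.

    SELECT relname,
           n_live_tup,
           n_dead_tup,
           last_vacuum,
           last_autovacuum
    FROM   pg_stat_user_tables
    ORDER  BY n_dead_tup DESC
    LIMIT  20;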
azp74 (133 rep)
Sep 26, 2011, 10:15 AM • Last activity: Mar 17, 2016, 10:48 AM
11 votes
1 answer
5175 views
Unexpected Seq Scan when doing query against boolean with value NULL
I have a database column called auto_renew whose column type is boolean. There is an index on that field, created using the ActiveRecord ORM:

    CREATE INDEX index_table_on_auto_renew ON table USING btree (auto_renew);

When I query the field for a boolean value, PG uses the index as expected.

    EXPLAIN for: SELECT "table".* FROM "table" WHERE "table"."auto_renew" = 'f'
                                          QUERY PLAN
    ----------------------------------------------------------------------------------------------
     Bitmap Heap Scan on table  (cost=51.65..826.50 rows=28039 width=186)
       Filter: (NOT auto_renew)
       ->  Bitmap Index Scan on index_domains_on_auto_renew  (cost=0.00..44.64 rows=2185 width=0)
             Index Cond: (auto_renew = false)
    (4 rows)

When the value is NULL, a sequential scan is used.

    EXPLAIN for: SELECT "table".* FROM "table" WHERE "table"."auto_renew" IS NULL
                                QUERY PLAN
    ----------------------------------------------------------------
     Seq Scan on table  (cost=0.00..1094.01 rows=25854 width=186)
       Filter: (auto_renew IS NULL)
    (2 rows)

I'm curious to know the reason behind this choice.
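A likely explanation and a hedged sketch: the row estimates suggest most of the table is NULL in auto_renew (rows=25854 versus rows=2185 for false), so a sequential scan is simply cheaper than any index scan; on top of that, older releases had limited or no support for using btree indexes with IS NULL conditions at all. If NULL rows were rare and queried often, a partial index is the usual tool (index name below is made up):

    -- Partial index covering only the NULL rows:
    CREATE INDEX index_table_on_auto_renew_null
        ON "table" (auto_renew)
     WHERE auto_renew IS NULL;
    -- Only pays off when few rows are NULL; with most rows NULL, a
    -- sequential scan is probably the right plan anyway.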
Simone Carletti (341 rep)
Oct 26, 2012, 03:22 PM • Last activity: Mar 3, 2016, 01:00 AM
2 votes
1 answer
4779 views
Trying to start a newly created cluster in PostgreSQL on Windows
I just created a new cluster using PostgreSQL 8.3, and it looks like everything worked. What I am having issues with is starting the server for this new cluster. My two clusters are on different ports. Looking online for how to start the new cluster, I tried executing the following line:

    postgres -D C:\my\new\cluster\data\directory

This is the error I get:

> The server must be started under an unprivileged user ID to prevent
> possible system security compromises. See the documentation for more
> information on how to properly start the server.

I know I am running the command as administrator, but I don't know how to run it from an "unprivileged user ID". Any help would be great!
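PostgreSQL refuses to run under an account with administrator rights. Two hedged sketches of common workarounds on Windows; service name, account name and password are placeholders (an existing 8.3 installation usually already has a "postgres" service account that can be reused):

    REM Register the new cluster as a Windows service (run once from an admin prompt),
    REM then start it; the service runs under the unprivileged account given with -U:
    pg_ctl register -N pgsql-newcluster -U postgres -P secret -D "C:\my\new\cluster\data\directory"
    net start pgsql-newcluster

    REM Or start it interactively under a non-admin account:
    runas /user:postgres "pg_ctl -D C:\my\new\cluster\data\directory start"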
user972276 (309 rep)
Feb 19, 2013, 10:20 PM • Last activity: Aug 7, 2015, 12:57 PM
1 vote
2 answers
6784 views
How to simulate postgresql recovery mode?
I would like to simulate putting PostgreSQL into "recovery mode". A few times I have had situations where PostgreSQL put itself into recovery mode. In the logs I had:

    Oct 18 06:37:35 xx postgres: [19-1] 2014-10-18 06:37:35.441 CEST|5738|xx|yy|FATAL: the database system is in recovery mode

It was caused by the OOM killer killing one of the processes. But when I try to simulate it in a test environment, PostgreSQL restarts. How can I prevent PostgreSQL from restarting and make it stay in recovery mode?
undefine (113 rep)
Oct 20, 2014, 07:54 AM • Last activity: Oct 20, 2014, 03:49 PM
3 votes
1 answer
29971 views
Using dynamic column names in PostgreSQL
I am using PostgreSQL 8.3. I want to execute this query:

    select COALESCE(col_fr, col, 'Unknown') from my_table where id = 'abc';

The twist in the story is that the column name col_fr should be generated dynamically. my_table has different columns for different languages, something like:

    id | col | col_ja | col_fr

I am using this query in Crystal Reports, where I can pass a string parameter for the language:

    select COALESCE(col_||{?parameter}, col, 'Unknown') from my_table where id = 'abc';

which will internally be converted to something like the following, if the {?parameter} value is fr:

    select COALESCE(col_||'fr', col, 'Unknown') from my_table where id = 'abc';

which will never work. I don't want to use CASE expressions to make it dynamic. As an alternative solution I also tried to create a stored procedure:

    CREATE OR REPLACE FUNCTION get_policy_name ( id text, lang text, def_value text )
    RETURNS SETOF record AS
    $body$
    DECLARE
        sql text;
    BEGIN
        sql := 'SELECT COALESCE(col_'||quote_ident(lang)||',col,'||quote_literal(def_value)||') FROM my_table WHERE id ='||quote_literal(id);
        RETURN QUERY EXECUTE sql
    END;
    $body$
    LANGUAGE plpgsql;

It should return a single record, but it is not working. What am I missing? Does PostgreSQL 8.3 support RETURN QUERY EXECUTE?
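For context: RETURN QUERY EXECUTE only arrived in PostgreSQL 8.4, and the missing semicolon after the EXECUTE line would also be a syntax error. A sketch of an 8.3-compatible body, keeping the question's names; rec is a hypothetical loop variable:

    CREATE OR REPLACE FUNCTION get_policy_name(id text, lang text, def_value text)
    RETURNS SETOF record AS
    $body$
    DECLARE
        sql text;
        rec record;
    BEGIN
        sql := 'SELECT COALESCE(col_' || quote_ident(lang) || ', col, '
               || quote_literal(def_value) || ') FROM my_table WHERE id = '
               || quote_literal(id);
        FOR rec IN EXECUTE sql LOOP
            RETURN NEXT rec;
        END LOOP;
        RETURN;
    END;
    $body$ LANGUAGE plpgsql;

    -- A SETOF record function needs a column definition list when called:
    -- SELECT * FROM get_policy_name('abc', 'fr', 'Unknown') AS t(name text);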
user36935 (31 rep)
Apr 10, 2014, 11:08 AM • Last activity: Apr 11, 2014, 12:01 AM
0 votes
1 answer
1487 views
Postgres 8.3 faster than 9.3?
Our software product currently ships on Windows 7 with Postgres 8.3 as its database. On a busy site, there may be 24 automated systems generating 100 rows (x 100 columns) per minute, with 3-10 human clients viewing subsets of around 1000 rows – all retrieved at once, with incremental updates querying pk + timestamp every minute or so and retrieving new rows that are pertinent. There are a few auxiliary tables, but this table sees the primary activity.

As a first step towards a limited multi-master system (to help with geographically separated teams), we implemented an upgrade to 9.3. Performance wasn't the first priority, so it wasn't really profiled. As release time has come, management has decided to drop 9.3 for now, citing fear of possible performance degradation and lack of testing resources.

I was sure the performance issue was preposterous, so I did some pgbench testing. Using 9.3's pgbench, I alternated between connecting to the local 8.3 and 9.3 installations (different port numbers). I've captured my results in a Google Drive spreadsheet, but the summary is that 8.3 generally beat 9.3; 9.3 only won in raw insert performance.

We have some customization of our postgresql.conf files, which I generally kept from 8.3 to 9.3. The non-default settings are:

    max_connections = 1000
    shared_buffers = 320MB
    temp_buffers = 80MB
    max_prepared_transactions = 50   # 8.3 only, 9.3 left at 0 (not sure why)
    max_fsm_pages = 204800           # 8.3 only, 9.3 doesn't have this setting
    autovacuum_max_workers = 30

So, is this just the price of progress, or is there something I should be doing in 9.3 to make it excel?
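For what it's worth, a sketch of how such a comparison is usually made reproducible so both clusters see identical load; scale, client counts, duration and ports are placeholders, and the 9.3 pgbench binary is assumed for both runs, as in the question.

    # Initialise a test database of scale 50 on each cluster, then run the same mixed load:
    pgbench -i -s 50 -p 5432 benchdb
    pgbench -i -s 50 -p 5433 benchdb
    pgbench -c 8 -j 2 -T 300 -p 5432 benchdb
    pgbench -c 8 -j 2 -T 300 -p 5433 benchdb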
Thomas (123 rep)
Jan 27, 2014, 05:24 PM • Last activity: Jan 28, 2014, 09:14 AM
1 vote
0 answers
300 views
postgresql 8.3 possible checkpoint and memory misconfiguration
I am using PostgreSQL 8.3 in a read/write-heavy application where data is written to the database by a background process while it is also being read and written by a web application. At peak hours we notice performance problems in the database, specifically high IO (mostly disk writes, not reads). I suspect checkpoints to be the problem. Our postgresql.conf file contains the following:

    shared_buffers = 3000MB
    work_mem = 256MB
    maintenance_work_mem = 100MB
    max_fsm_pages = 3448000
    max_fsm_relations = 10000
    wal_buffers = 1MB
    checkpoint_segments = 32
    checkpoint_completion_target = 0.9

while the rest are defaults. Stats from pg_stat_bgwriter are the following:

    checkpoints_timed: 103881
    checkpoints_requested: 16
    buffers_checkpoints: 92105172
    buffers_clean: 83695499
    max_written_clean: 430652
    buffers_backend: 33671343
    buffers_alloc: 9001524439

Our system also has 20 GB of memory and a RAID1 array of two drives. I turned checkpoint logging on today, though not during high traffic, so I haven't been able to test it fully (I will tomorrow). In the log file I see a checkpoint every ~5 minutes. Together with the fact that there are about 100,000 timed checkpoints, this makes me believe the server is performing a timed checkpoint every five minutes (which is the default?). Since there are only 16 requested checkpoints, it seems it never reaches the 32 x 16 MB limit that would trigger a checkpoint on request.

This leads me to speculate that our system is not properly configured. Should I increase the checkpoint timeout to something like 30 minutes, and the segments to an extent that causes a checkpoint each time ~6 GB are filled?

Aside from that, I am using redmine to monitor the machine's memory usage, and in the graph I see about 500-800 MB of memory used, with 6 GB allocated to swap and the rest to the filesystem cache. When issuing a top command, the Postgres processes are using about 4 GB of VIRT memory. Finally, free -m indicates that only 500 MB are used, the rest of the 20 GB being free.
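A sketch of the kind of change being discussed (values are illustrative, not recommendations): spacing checkpoints further apart and logging them so their cost becomes visible. All of these are plain postgresql.conf settings available in 8.3.

    # postgresql.conf -- illustrative values only
    checkpoint_segments = 64             # ~1 GB of WAL between requested checkpoints
    checkpoint_timeout = 30min           # default is 5min, matching the observed interval
    checkpoint_completion_target = 0.9
    log_checkpoints = on                 # logs each checkpoint with buffers written and timings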
John Papsi
Mar 14, 2013, 12:27 PM • Last activity: Mar 15, 2013, 01:33 PM
3 votes
2 answers
2020 views
PostgreSQL 8.3: Slow GROUP BY on large table
I have a table with about 10 million records. I want to do a simple group by, but it's using a sequential scan and is slow...

    select run_id, count(*) from result group by run_id;

I have an index defined on the run_id column. How can I speed this up?
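For context: 8.3 has no index-only scans, so a per-run_id count(*) genuinely has to read the whole heap; the usual remedies are more work_mem for the hash aggregate or, if the counts are read often, a trigger-maintained rollup table. A sketch of the latter with hypothetical names and types, and no handling of concurrent inserts racing on the same new run_id:

    CREATE TABLE result_run_counts (
        run_id integer PRIMARY KEY,
        n      bigint NOT NULL
    );

    CREATE OR REPLACE FUNCTION result_run_counts_trg() RETURNS trigger AS $$
    BEGIN
        IF TG_OP = 'INSERT' THEN
            UPDATE result_run_counts SET n = n + 1 WHERE run_id = NEW.run_id;
            IF NOT FOUND THEN
                INSERT INTO result_run_counts (run_id, n) VALUES (NEW.run_id, 1);
            END IF;
            RETURN NEW;
        ELSIF TG_OP = 'DELETE' THEN
            UPDATE result_run_counts SET n = n - 1 WHERE run_id = OLD.run_id;
            RETURN OLD;
        END IF;
        RETURN NULL;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER result_run_counts_trg
        AFTER INSERT OR DELETE ON result
        FOR EACH ROW EXECUTE PROCEDURE result_run_counts_trg();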
FogleBird (131 rep)
Jan 28, 2013, 04:05 PM • Last activity: Feb 9, 2013, 08:34 AM
3 votes
1 answer
6554 views
How can I create a dblink from Postgresql 8.3 to SQL Server (2000 and 2008)?
I have a PostgreSQL database that needs to read data from two SQL Servers (one SQL Server 2000 and one SQL Server 2008). I want to set up two dblinks to the SQL Servers. After googling, reading forums and documentation, I am struggling to find something production-ready. What tool can I use?
Stephan (1513 rep)
Oct 12, 2012, 02:06 PM • Last activity: Nov 13, 2012, 01:01 AM
3 votes
1 answer
367 views
A 'deeper' PostgreSQL autovacuum
We have a PostgreSQL 8.3.7 database suffering severe bloat after an algorithmic change. Unfortunately upgrading isn't an option at this time. For a particular group of partitioned count roll-up tables, we used to update them by selecting then either inserting for new counts or updating existing counts. To avoid network saturation we switched instead to updating, checking for failed updates and then inserting. I've read this is a bad scenario for PostgreSQL (at least circa 8.3.7) where the dead tuples from the updates are in the middle of the table rather than the end (as previously) and so are not being reclaimed by the autovacuum which works from the back of the table. It seems to me that autovacuum_vacuum_cost_limit is the most likely setting I should change. Currently it is set to the default of 200 - perhaps I should start at 2,000 and go up from there? I have a small window to make production changes to use trial and error, and no test database of equivalent size.
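A sketch of the knobs usually turned in this situation; the values are illustrative starting points, not recommendations. On 8.3 both settings can be changed cluster-wide in postgresql.conf and picked up with a reload (per-table overrides go through the pg_autovacuum catalog).

    # postgresql.conf -- illustrative values only
    autovacuum_vacuum_cost_limit = 2000    # question mentions an effective default of 200
    autovacuum_vacuum_cost_delay = 10ms    # default 20ms; lower also makes autovacuum more aggressive

    -- apply without a restart:
    SELECT pg_reload_conf();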
coltnz (33 rep)
Sep 25, 2012, 10:41 AM • Last activity: Sep 26, 2012, 01:38 PM
3 votes
2 answers
887 views
Bug in PL/pgSQL function creation
I don't know whether this question better suits here or on SO... This is a script that I'd like to launch (the code of the function was copied from a question on SO):

    \c mydb

    create or replace function truncate_tables(username in varchar) returns void as $$
    declare
        stmt RECORD;
        statements cursor for
            select tablename from pg_tables where tableowner = username;
    begin
        for stmt in statements loop
            execute 'truncate table ' || quote_ident(stmt.tablename) || ' CASCADE ;';
        end loop;
    end;
    $$ language 'plpgsql';

I get the following error:

    ERROR: syntax at or near "$1"
    LINE1: $1
    QUERY $1
    CONTEXT: SQL statement in PL/PgSQL function "truncate_tables" near line 5

I am new to Postgres and PL/pgSQL and don't know what this error message means.
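A simpler equivalent that iterates over the query directly, dropping the explicit cursor declaration; whether it avoids the reported error depends on what triggered it, but it is the more usual way to write this loop, and the intended behaviour is the same:

    create or replace function truncate_tables(username in varchar) returns void as $$
    declare
        stmt record;
    begin
        for stmt in
            select tablename from pg_tables where tableowner = username
        loop
            execute 'truncate table ' || quote_ident(stmt.tablename) || ' cascade';
        end loop;
    end;
    $$ language plpgsql;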
Stephane Rolland (8911 rep)
Aug 22, 2012, 02:28 PM • Last activity: Sep 5, 2012, 02:00 AM
Showing page 1 of 18 total questions