Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
1707
views
How to identify file for "The system cannot find the file specified." during a pg_ctl -D start?
I'm getting an odd message during `pg_ctl -D start` of the standby server. This is perhaps more of a general troubleshooting question.
This is what I'm seeing when I do a `pg_ctl -D start`:
D:\>pg_ctl -D d:\dev_data_standby start
server starting
D:\>2016-03-21 08:41:12 EDT 1016 LOG: loaded library "auto_explain"
1 file(s) copied.
The system cannot find the file specified.
The log file shows:
2016-03-21 08:41:12 EDT 3564 LOG: database system was interrupted; last known up at 2016-03-21 08:38:08 EDT
2016-03-21 08:41:16 EDT 3564 LOG: entering standby mode
2016-03-21 08:41:16 EDT 3564 LOG: restored log file "0000000100000002000000F2" from archive
2016-03-21 08:41:16 EDT 3564 LOG: redo starts at 2/F2000020
2016-03-21 08:41:16 EDT 3564 LOG: consistent recovery state reached at 2/F20000E0
2016-03-21 08:41:16 EDT 3816 LOG: streaming replication successfully connected to primary
How can I troubleshoot that error: "The system cannot find the file specified."?
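The "1 file(s) copied." line suggests these messages come from the restore_command in the standby's recovery.conf (on Windows, usually a `copy`). As a hedged sketch for narrowing down which file is missing, with purely hypothetical paths, the command's output can be redirected to a log:

```
# recovery.conf on the standby -- a sketch; archive and log paths are hypothetical
standby_mode = 'on'
primary_conninfo = 'host=primary port=5432'
# Redirecting the copy's output records each file name the server asks for,
# so the one triggering "The system cannot find the file specified." shows up:
restore_command = 'copy "d:\\wal_archive\\%f" "%p" >> d:\\restore_debug.log 2>&1'
```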
JC5577
(625 rep)
Mar 21, 2016, 01:23 PM
• Last activity: Jun 20, 2025, 12:08 PM
0
votes
1
answers
313
views
How to optimize query with order by
I have a query generated by the OpenERP ORM. The table has 100k rows.
SELECT "tbl".id
FROM "tbl"
WHERE (("tbl"."active" = 'True') AND ("tbl"."is_company" IS NULL or "tbl"."is_company" = false ))
ORDER BY "tbl"."display_name"
With indexes:
"ix_tbl_pkey" PRIMARY KEY, btree (id)
"ix_active" btree (active)
"ix_displayname" btree (display_name)
"ix_iscompany" btree (is_company)
The query with ORDER BY takes 57735.775 ms. The plan is:
Sort (cost=13031.73..13269.13 rows=94960 width=47) (actual time=57711.753..57725.079 rows=94967 loops=1)
Sort Key: display_name
Sort Method: quicksort Memory: 12918kB
-> Seq Scan on tbl (cost=0.00..5180.90 rows=94960 width=47) (actual time=0.009..57.056 rows=94967 loops=1)
Filter: (active AND ((is_company IS NULL) OR (NOT is_company)))
Rows Removed by Filter: 623
Total runtime: 57735.775 ms
(7 rows)
When I try it without ORDER BY, it takes 65.969 ms. The plan is:
Seq Scan on tbl (cost=0.00..5180.90 rows=94960 width=4) (actual time=0.026..60.782 rows=94967 loops=1)
Filter: (active AND ((is_company IS NULL) OR (NOT is_company)))
Rows Removed by Filter: 623
Total runtime: 65.969 ms
(4 rows)
With `set enable_sort = off;` it takes 1206.157 ms. The plan is:
Index Scan using ix_displayname on tbl (cost=0.00..21479.14 rows=94960 width=47) (actual time=29.912..1194.954 rows=94967 loops=1)
Filter: (active AND ((is_company IS NULL) OR (NOT is_company)))
Rows Removed by Filter: 623
Total runtime: 1206.157 ms
(4 rows)
Is there any way to optimize it with indexes? We can't change anything in the ORM.
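One avenue that does not require touching the ORM, offered only as a sketch (the index name is made up, and the planner may still prefer a sort): a partial index on display_name whose predicate mirrors the WHERE clause, so rows can be read in sorted order.

```sql
-- Sketch: the predicate matches the query's filter exactly; the btree is
-- ordered on display_name, so an index scan can satisfy ORDER BY without
-- a separate sort step.
CREATE INDEX ix_tbl_active_notcompany_name
    ON tbl (display_name)
    WHERE active AND (is_company IS NULL OR is_company = false);
```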
GeoVIP
(263 rep)
May 19, 2016, 07:48 AM
• Last activity: Apr 29, 2025, 06:42 AM
11
votes
2
answers
21822
views
Postgresql: How to add a role that inherits from another?
Pretty cut and dry, I am just a total noob. Is there a way to make a given role inherit from another?
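For illustration, a minimal sketch (role names are made up): membership is granted with GRANT, and a role marked INHERIT automatically uses the privileges of the roles it is a member of.

```sql
-- "parent" carries the privileges; "child" becomes a member and inherits them.
CREATE ROLE parent NOLOGIN;
CREATE ROLE child LOGIN INHERIT;  -- INHERIT is the default anyway
GRANT parent TO child;
```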
yburyug
(245 rep)
Mar 14, 2014, 02:02 PM
• Last activity: Jan 12, 2025, 09:03 PM
1
votes
2
answers
5383
views
Tuning Postgres config for new Database Server
I have finally gotten a purpose-built database machine for a project I'm working on.
The server is a 4-core Xeon with 64GB of RAM and a RAID 10 of 10k rpm drives.
I have just got the database moved over to the new machine; the performance with the same code is worse than when it was running on a VM.
I'm looking for suggestions on what settings to adjust to what values.
Currently, I've upped shared_buffers to 60GB (along with the kernel settings needed to make that change), temp_buffers is 32MB, and work_mem is 5MB. I'm working on some stuff I'd like to get loaded in quickly, so I have synchronous_commit set to off.
Can anyone point me in the right direction as to how to improve the speed? I had it running quite a bit faster on a slower machine with much less memory and drives shared with the machines that were making calls to it, so I'm not really sure what the issues are.
**Update 2013-03-06:** Performance is falling off a cliff shortly after a run starts. Not sure what to do. Settings:
shared_buffers = 12GB
temp_buffers=32MB
work_mem = 5MB
maintenance_work_mem = 256MB
fsync = off
synchronous_commit = off
wal_buffers = 32MB
checkpoint_segments = 256
checkpoint_completion_target = .9
effective_cache_size = 50GB
autovacuum = on
autovacuum_naptime = 1min
The task is a long script that's taking data from a copied in table and normalizing it into the database. So big reads occasionally to pick up 1000 rows or more, then lots of little reads to de-duplicate the record and find IDs etc, then some inserts along the way that are needed, and finally lots of inserts at the end. Then Repeat.
Any suggestions, or ideas on what's falling off?
This is one of my slower queries, I'd love ideas of how to speed it up.
EXPLAIN (ANALYZE, BUFFERS) select provider_id, count(list_alias.name_part_id)
from list_alias
where provider_id in (1,4,5,6,7,8)
and name_part_id in (5,7,8,3,111)
group by provider_id
order by count(list_alias.name_part_id) desc
limit(1)
The output.
"Limit (cost=31.62..31.62 rows=1 width=8) (actual time=0.157..0.157 rows=0 loops=1)"
" Buffers: shared hit=17 read=1"
" -> Sort (cost=31.62..31.62 rows=1 width=8) (actual time=0.153..0.153 rows=0 loops=1)"
" Sort Key: (count(name_part_id))"
" Sort Method: quicksort Memory: 25kB"
" Buffers: shared hit=17 read=1"
" -> GroupAggregate (cost=0.00..31.61 rows=1 width=8) (actual time=0.147..0.147 rows=0 loops=1)"
" Buffers: shared hit=17 read=1"
" -> Index Scan using "list_alias provider_id" on list_alias (cost=0.00..31.59 rows=1 width=8) (actual time=0.146..0.146 rows=0 loops=1)"
" Index Cond: (provider_id = ANY ('{1,4,5,6,7,8}'::integer[]))"
" Filter: (name_part_id = ANY ('{5,7,8,3,111}'::integer[]))"
" Buffers: shared hit=17 read=1"
"Total runtime: 0.238 ms"
Edit2: More Info:
"application_name";"pgAdmin III - Query Tool";"client"
"autovacuum";"on";"configuration file"
"autovacuum_naptime";"1min";"configuration file"
"checkpoint_completion_target";"0.9";"configuration file"
"checkpoint_segments";"256";"configuration file"
"DateStyle";"ISO, MDY";"configuration file"
"default_text_search_config";"pg_catalog.english";"configuration file"
"effective_cache_size";"50GB";"configuration file"
"external_pid_file";"/var/run/postgresql/9.2-main.pid";"configuration file"
"fsync";"off";"configuration file"
"lc_messages";"en_US.UTF-8";"configuration file"
"lc_monetary";"en_US.UTF-8";"configuration file"
"lc_numeric";"en_US.UTF-8";"configuration file"
"lc_time";"en_US.UTF-8";"configuration file"
"listen_addresses";"*";"configuration file"
"log_line_prefix";"%t ";"configuration file"
"maintenance_work_mem";"256MB";"configuration file"
"max_connections";"100";"configuration file"
"max_stack_depth";"2MB";"environment variable"
"port";"5432";"configuration file"
"shared_buffers";"12GB";"configuration file"
"ssl";"off";"configuration file"
"ssl_cert_file";"/etc/ssl/certs/ssl-cert-snakeoil.pem";"configuration file"
"ssl_key_file";"/etc/ssl/private/ssl-cert-snakeoil.key";"configuration file"
"synchronous_commit";"off";"configuration file"
"temp_buffers";"32MB";"configuration file"
"unix_socket_directory";"/var/run/postgresql";"configuration file"
"wal_buffers";"32MB";"configuration file"
"work_mem";"5MB";"configuration file"
Any ideas on why the database would be fine for a very short period of time when a process starts, then fall off a cliff very quickly? It's almost like it's spilling to disk, but it doesn't seem to be when these queries are EXPLAIN (ANALYZE, BUFFERS)'d.
DiscontentDisciple
(111 rep)
Mar 4, 2013, 10:28 PM
• Last activity: Dec 25, 2024, 11:02 AM
13
votes
2
answers
17321
views
is it possible to run postgres with no WAL files being produced?
I am looking at a way of running a 9.2 postgres database on redhat 6 without any WAL files being produced.
We have an end-of-day batch process which generates a large number of WAL files (and has filled the disk a couple of times). We also take a pg_dump backup both before the start of the end-of-day process and at the end. Should there be a problem during the batch process, we are happy to restore from the earlier pg_dump backup.
When I set wal_level to minimal we still get a large number of WAL files due to the type of database updates which occur. So is it possible to have no WAL files at all (basically a wal_level=none)?
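PostgreSQL has no wal_level=none, but as a hedged sketch of a partial workaround (table name hypothetical): tables that can be rebuilt from the pg_dump backups can be created UNLOGGED so their changes are not written to WAL at all.

```sql
-- Unlogged tables skip WAL for their data (they are truncated after a
-- crash, which may be acceptable here since pg_dump backups bracket the run).
CREATE UNLOGGED TABLE eod_staging (
    id      bigint,
    payload text
);
```

Note that ALTER TABLE ... SET UNLOGGED only exists from 9.5 on, so on 9.2 a table would have to be created unlogged from the start.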
thanks
dave
davegreen100
(572 rep)
Jun 3, 2015, 12:38 PM
• Last activity: Jun 24, 2024, 09:37 AM
0
votes
0
answers
52
views
Multiple but not all executions of the same Postgres SELECT query are getting locked endlessly
I am using Postgresql version **9.2.18** on Centos Version **CentOS Linux release 7.3.1611 (Core)**
In Postgres there is a select query on a single table which is executed multiple times per minute.
For this query, roughly the first 20 or 21 executions get locked endlessly, whereas the other executions run fine.
When I run the command below to check for queries running for more than a minute, these queries do not show up:
select pid, pg_stat_activity.query_start, now() - pg_stat_activity.query_start AS query_time, query, state from pg_stat_activity WHERE (now() - pg_stat_activity.query_start) > interval '1 minutes';
But on running the query `SELECT * FROM pg_locks;` the following is the result (a few lines pasted):
locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | g
ranted | fastpath
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+-----------------+--
-------+----------
relation | 16385 | 4614041 | | | | | | | | 23/15040 | 55538 | AccessShareLock | t
| t
relation | 16385 | 4233898 | | | | | | | | 23/15040 | 55538 | AccessShareLock | t
| t
relation | 16385 | 4207899 | | | | | | | | 23/15040 | 55538 | AccessShareLock | t
| t
relation | 16385 | 682008 | | | | | | | | 23/15040 | 55538 | AccessShareLock | t
| t
relation | 16385 | 17702 | | | | | | | | 23/15040 | 55538 | AccessShareLock | t
| t
relation | 16385 | 17309 | | | | | | | | 23/15040 | 55538 | AccessShareLock | t
| t
virtualxid | | | | | 23/15040 | | | | | 23/15040 | 55538 | ExclusiveLock | t
| t
relation | 16385 | 4614041 | | | | | | | | 22/15040 | 55537 | AccessShareLock | t
| t
relation | 16385 | 4233898 | | | | | | | | 22/15040 | 55537 | AccessShareLock | t
| t
relation | 16385 | 4207899 | | | | | | | | 22/15040 | 55537 | AccessShareLock | t
| t
relation | 16385 | 682008 | | | | | | | | 22/15040 | 55537 | AccessShareLock | t
| t
relation | 16385 | 17702 | | | | | | | | 22/15040 | 55537 | AccessShareLock | t
| t
relation | 16385 | 17309 | | | | | | | | 22/15040 | 55537 | AccessShareLock | t
| t
virtualxid | | | | | 22/15040 | | | | | 22/15040 | 55537 | ExclusiveLock | t
| t
relation | 16385 | 4614041 | | | | | | | | 21/15040 | 55536 | AccessShareLock | t
| t
relation | 16385 | 4233898 | | | | | | | | 21/15040 | 55536 | AccessShareLock | t
| t
relation | 16385 | 4207899 | | | | | | | | 21/15040 | 55536 | AccessShareLock | t
| t
relation | 16385 | 682008 | | | | | | | | 21/15040 | 55536 | AccessShareLock | t
| t
relation | 16385 | 17702 | | | | | | | | 21/15040 | 55536 | AccessShareLock | t
| t
relation | 16385 | 17309 | | | | | | | | 21/15040 | 55536 | AccessShareLock | t
| t
virtualxid | | | | | 21/15040 | | | | | 21/15040 | 55536 | ExclusiveLock | t
| t
**This result shows the locked processes, which are the locked queries. Next, the result of the command `ps -ef | grep SELECT` is as follows:**
postgres 55519 997 47 07:09 ? 01:52:23 postgres: pgdbuser dam_stage 15.206.133.156(33248) SELECT
postgres 55520 997 47 07:09 ? 01:52:16 postgres: pgdbuser dam_stage 15.206.133.156(33250) SELECT
postgres 55521 997 47 07:09 ? 01:52:16 postgres: pgdbuser dam_stage 15.206.133.156(33254) SELECT
postgres 55522 997 47 07:09 ? 01:52:18 postgres: pgdbuser dam_stage 15.206.133.156(33258) SELECT
postgres 55523 997 47 07:09 ? 01:52:18 postgres: pgdbuser dam_stage 15.206.133.156(33256) SELECT
postgres 55524 997 47 07:09 ? 01:52:27 postgres: pgdbuser dam_stage 15.206.133.156(33252) SELECT
postgres 55525 997 47 07:09 ? 01:52:23 postgres: pgdbuser dam_stage 15.206.133.156(33260) SELECT
postgres 55526 997 47 07:09 ? 01:52:16 postgres: pgdbuser dam_stage 15.206.133.156(33264) SELECT
postgres 55527 997 47 07:09 ? 01:52:19 postgres: pgdbuser dam_stage 15.206.133.156(33262) SELECT
postgres 55528 997 47 07:09 ? 01:52:21 postgres: pgdbuser dam_stage 15.206.133.156(33268) SELECT
postgres 55529 997 47 07:09 ? 01:52:26 postgres: pgdbuser dam_stage 15.206.133.156(33270) SELECT
postgres 55530 997 48 07:09 ? 01:52:37 postgres: pgdbuser dam_stage 15.206.133.156(33276) SELECT
postgres 55531 997 47 07:09 ? 01:52:21 postgres: pgdbuser dam_stage 15.206.133.156(33266) SELECT
postgres 55532 997 47 07:09 ? 01:52:24 postgres: pgdbuser dam_stage 15.206.133.156(33272) SELECT
postgres 55533 997 47 07:09 ? 01:52:23 postgres: pgdbuser dam_stage 15.206.133.156(33274) SELECT
postgres 55534 997 47 07:09 ? 01:52:17 postgres: pgdbuser dam_stage 15.206.133.156(33278) SELECT
postgres 55535 997 47 07:09 ? 01:52:19 postgres: pgdbuser dam_stage 15.206.133.156(33284) SELECT
postgres 55536 997 47 07:09 ? 01:52:32 postgres: pgdbuser dam_stage 15.206.133.156(33282) SELECT
postgres 55537 997 47 07:09 ? 01:52:20 postgres: pgdbuser dam_stage 15.206.133.156(33280) SELECT
postgres 55538 997 47 07:09 ? 01:52:20 postgres: pgdbuser dam_stage 15.206.133.156(33286) SELECT
root 81425 79781 0 11:03 pts/2 00:00:00 grep --color=auto SELECT
**If I kill these queries/processes then they are replaced by new instances of same queries.**
Please suggest what could be the cause of this and how to rectify the issue.
**EDIT:** The query is like this
*SELECT "id", "reference_code", "completed_at", "data_json" FROM "queue" WHERE "status" = 1 AND "reference_type" LIKE '%sorted%' ORDER BY "completed_at" DESC LIMIT 1*
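As a sketch only (the index name is invented, and the planner's choice depends on how selective status = 1 is): a partial index ordered on completed_at matches this query's filter and its ORDER BY ... DESC LIMIT 1 shape, while the leading-wildcard LIKE would still be applied as a filter on the few rows read.

```sql
-- Sketch: rows with status = 1 come back already ordered by completed_at,
-- so the LIMIT 1 can stop after the first matching row.
CREATE INDEX ix_queue_status1_completed_at
    ON queue (completed_at DESC)
    WHERE status = 1;
```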
**These processes are also being shown in the top command like this (PIDs are different, as the previous ones were killed):**
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
82894 postgres 20 0 236748 39812 37396 R 49.8 0.1 678:23.38 postgres
82890 postgres 20 0 236748 39800 37384 S 49.2 0.1 677:12.88 postgres
82892 postgres 20 0 236748 39764 37348 R 49.2 0.1 677:09.95 postgres
82895 postgres 20 0 236748 39840 37424 S 49.2 0.1 677:23.80 postgres
82896 postgres 20 0 236748 39744 37328 R 49.2 0.1 679:36.06 postgres
82899 postgres 20 0 236748 39816 37400 R 49.2 0.1 676:39.73 postgres
82900 postgres 20 0 236748 39800 37384 R 49.2 0.1 677:25.23 postgres
82901 postgres 20 0 236748 39800 37384 S 49.2 0.1 677:28.81 postgres
82902 postgres 20 0 236748 39776 37360 S 49.2 0.1 677:27.31 postgres
82903 postgres 20 0 236748 39788 37372 S 49.2 0.1 676:54.77 postgres
82906 postgres 20 0 236748 39904 37488 S 49.2 0.1 677:55.78 postgres
82907 postgres 20 0 236748 39896 37480 S 49.2 0.1 677:51.72 postgres
82908 postgres 20 0 236748 39780 37364 S 49.2 0.1 677:27.25 postgres
82891 postgres 20 0 236748 39844 37428 R 48.8 0.1 677:20.41 postgres
82893 postgres 20 0 236748 39812 37396 R 48.8 0.1 677:22.60 postgres
82898 postgres 20 0 236748 39800 37384 R 48.8 0.1 677:06.41 postgres
82904 postgres 20 0 236748 39800 37384 S 48.8 0.1 677:45.52 postgres
82905 postgres 20 0 236748 39696 37280 S 48.8 0.1 677:42.04 postgres
82909 postgres 20 0 236748 39884 37468 R 48.8 0.1 678:16.44 postgres
82897 postgres 20 0 236748 39708 37292 S 48.5 0.1 677:20.17 postgres
user2710961
(23 rep)
Apr 3, 2024, 11:55 AM
• Last activity: Apr 4, 2024, 10:44 AM
3
votes
2
answers
6194
views
psql, I try to connect to a database and I get "psql: FATAL: Peer authentication failed for user ", why?
I'm using **psql (9.2.23)** on my Dell Latitude 3540 laptop running **Linux CentOS 7**.
I want to do a simple thing: create a database username and log in as that username.
My operating system username is "davide", and the username I'd like to create and use in psql is "davide3". My laptop name is "laptop".
Here are the steps I followed, after having installed PostgreSQL:
> [davide@laptop ~]$ sudo su - postgres
> [sudo] password for davide:
> Last login: XXX
> -bash-4.2$ psql
> psql (9.2.23)
> Type "help" for help.
> `postgres=# CREATE USER davide3 WITH PASSWORD 'davide3' CREATEDB CREATEUSER;`
> CREATE ROLE
> postgres=# \q
> -bash-4.2$ exit
> logout
Then I try to login to the database by using the username "davide3":
> [davide@laptop ~]$ psql -d postgres -U davide3
>
> psql: FATAL: Peer authentication failed for user "davide3"
Here's the trouble: I was expecting a password prompt where I could enter the 'davide3' password, but instead I got none.
**What did I do wrong? How can I solve this issue?**
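For reference, "peer" authentication maps the operating-system user onto the database user, which is why OS user davide cannot log in as davide3. A hedged sketch of the usual fix (the pg_hba.conf path varies by distribution; on CentOS it is typically under /var/lib/pgsql/data) is to switch local connections to password authentication:

```
# pg_hba.conf -- change the "local" line from peer to md5, then reload the server
# TYPE  DATABASE  USER  METHOD
local   all       all   md5
```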
Thanks
DavideChicco.it
(327 rep)
Mar 12, 2018, 04:38 PM
• Last activity: Apr 2, 2024, 12:02 PM
18
votes
2
answers
16388
views
How to view the query of another session in pg_stat_activity without being super user?
I have a Postgresql 9.2 database. Two users are created on this database. When I perform the following query as superuser, I can see everything.
select * from pg_stat_activity
However, is it possible to achieve the same result without being connected as superuser?
Which privilege/role should I grant/create to see what a superuser can see?
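On 9.2 there is no built-in role for this (pg_monitor and pg_read_all_stats only arrive in PostgreSQL 10), so one commonly used sketch, shown here with a made-up role name, is a SECURITY DEFINER function created by a superuser:

```sql
-- Runs with its owner's (a superuser's) rights, so callers see all rows.
CREATE FUNCTION all_activity()
RETURNS SETOF pg_stat_activity
LANGUAGE sql SECURITY DEFINER
AS $$ SELECT * FROM pg_stat_activity $$;

REVOKE ALL ON FUNCTION all_activity() FROM PUBLIC;
GRANT EXECUTE ON FUNCTION all_activity() TO monitoring_user;  -- hypothetical role
```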
Stephan
(1513 rep)
Feb 5, 2014, 11:55 AM
• Last activity: Mar 9, 2024, 06:51 AM
0
votes
1
answers
488
views
Pgpool - 3.5 configuration
[enterprisedb@pgdvdb1dr bin]$ 2017-06-20 11:34:41: pid 103313: WARNING: could not open configuration file: "pgpool.conf"
2017-06-20 11:34:41: pid 103313: DETAIL: using default configuration parameter values
2017-06-20 11:34:41: pid 103313: FATAL: initializing pool password, failed to open file:"/mnt/hgfs/edb-postgres.auto/pgpool/staging/linux-x64/pgpool/etc/pool_passwd"
2017-06-20 11:34:41: pid 103313: DETAIL: file open failed with error:"No such file or directory"
+ Exit 3 ./pgpool -n
prakash ramakrishnan
(1 rep)
Jun 20, 2017, 06:11 AM
• Last activity: Feb 21, 2024, 05:04 PM
116
votes
7
answers
172904
views
Very slow DELETE in PostgreSQL, workaround?
I have a database on PostgreSQL 9.2 that has a main schema with around 70 tables and a variable number of identically structured per-client schemas of 30 tables each. The client schemas have foreign keys referencing the main schema and not the other way around.
I just started filling the database with some real data taken from the previous version. The DB had reached about 1.5 GB (it's expected to grow to several 10s GB within weeks) when I had to do a bulk delete in a very central table in the main schema. All concerned foreign keys are marked ON DELETE CASCADE.
It was no surprise that this would take a long time but after 12 hours it became clear that I was better off starting over, dropping the DB and launching the migration again. But what if I need to repeat this operation later when the DB is live and much larger? Are there alternative, faster methods?
Would it be much faster if I wrote a script that will browse the dependent tables, starting at the table furthest from the central table, deleting the dependent rows table by table?
An important detail is that there are triggers on some of the tables.
jd.
(1262 rep)
Mar 19, 2013, 04:44 PM
• Last activity: Sep 7, 2023, 04:57 PM
4
votes
3
answers
41360
views
POSTGRESQL error Could not create shared memory segment: Cannot allocate memory
Recently my client's database restarted and threw the error below.
FATAL: could not create shared memory segment: Cannot allocate memory
DETAIL: Failed system call was shmget(key=5433001, size=5616156672, 03600).
HINT: This error usually means that PostgreSQL's request for a shared memory segment
exceeded available memory or swap space, or exceeded your kernel's SHMALL
parameter. You can either reduce the request size or reconfigure the kernel with
larger SHMALL. To reduce the request size (currently 5616156672 bytes), reduce
PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or
max_connections.
The PostgreSQL documentation contains more information about shared memory configuration.
LOG: database system was interrupted; last known up at 2017-06-07 08:10:13 BST
FATAL: the database system is starting up
FATAL: the database system is starting up
How can I resolve this issue?
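For illustration only (values sized from the 5616156672-byte request in the HINT; adjust to the actual RAM), the kernel limits the HINT refers to live in /etc/sysctl.conf:

```
# /etc/sysctl.conf -- a sketch; apply with "sysctl -p", then start PostgreSQL
kernel.shmmax = 8589934592   # largest single segment, bytes (8 GB > the 5.6 GB request)
kernel.shmall = 2097152      # total shared memory, in 4 kB pages (8 GB)
# Alternatively, lower shared_buffers or max_connections in postgresql.conf.
```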
Raghavendra
(161 rep)
Jun 13, 2017, 08:45 AM
• Last activity: Aug 10, 2023, 04:53 AM
5
votes
3
answers
2825
views
Full Text Search With PostgreSQL
I have a table with these rows:
Stickers
------------------------------------------------------
ID | Title |Keywords (ts_vector)
------------------------------------------------------
01 | Sticker Case 580H |'580h':3 'cas':2 'stick':1
02 | Sticker Case 580L |'580l':3 'cas':2 'stick':1
03 | Sticker Case 580 |'580':3 'cas':2 'stick':1
04 | Sticker Case Plus 3000|'3000':4 'cas':2 'plus':3 'stick':1
Well, when I search using this script, only row 03 is returned. How do I also return rows 01 and 02?
SELECT
*
FROM
stickers
WHERE
keywords @@@ to_tsquery('case & 580');
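A sketch of one way to get rows 01 and 02 as well: the lexemes are stored normalized ('580h', '580l'), so a plain '580' term only matches row 03, while a prefix term matches all three:

```sql
-- ':*' makes '580' a prefix match, so '580', '580h' and '580l' all qualify.
SELECT *
FROM   stickers
WHERE  keywords @@ to_tsquery('cas & 580:*');
```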
Tommy
(63 rep)
Jan 25, 2013, 04:49 PM
• Last activity: Aug 1, 2023, 05:17 PM
4
votes
2
answers
2869
views
Trigram index for ILIKE patterns not working as expected
I've got a simple but slow query:
SELECT DISTINCT title
FROM ja_jobs
WHERE title ILIKE '%RYAN WER%'
AND clientid = 31239
AND time_job > 1457826264
ORDER BY title
LIMIT 10;
**Explain analyze:**
Limit (cost=5946.40..5946.41 rows=1 width=19) (actual time=2746.759..2746.772 rows=1 loops=1)
-> Unique (cost=5946.40..5946.41 rows=1 width=19) (actual time=2746.753..2746.763 rows=1 loops=1)
-> Sort (cost=5946.40..5946.41 rows=1 width=19) (actual time=2746.750..2746.754 rows=4 loops=1)
Sort Key: "title"
Sort Method: quicksort Memory: 25kB
-> Bitmap Heap Scan on "ja_jobs" (cost=49.02..5946.39 rows=1 width=19) (actual time=576.275..2746.609 rows=4 loops=1)
Recheck Cond: (("clientid" = 31239) AND ("time_job" > 1457826264))
Filter: (("title")::"text" ~~* '%RYAN WER%'::"text")
Rows Removed by Filter: 791
-> Bitmap Index Scan on "ix_jobs_client_times" (cost=0.00..49.02 rows=1546 width=0) (actual time=100.870..100.870 rows=795 loops=1)
Index Cond: (("clientid" = 31239) AND ("time_job" > 1457826264))
Total runtime: 2746.879 ms
Then, I created a trigram index:
CREATE INDEX ix_ja_jobs_trgm_gin ON public.ja_jobs USING gin (title gin_trgm_ops);
**Explain analyze after adding the index:** (Yes, I ran `ANALYZE`.)
Limit (cost=389.91..389.91 rows=1 width=20) (actual time=3720.511..3720.511 rows=0 loops=1)
-> Unique (cost=389.91..389.91 rows=1 width=20) (actual time=3720.507..3720.507 rows=0 loops=1)
-> Sort (cost=389.91..389.91 rows=1 width=20) (actual time=3720.505..3720.505 rows=0 loops=1)
Sort Key: "title"
Sort Method: quicksort Memory: 25kB
-> Bitmap Heap Scan on "ja_jobs" (cost=385.88..389.90 rows=1 width=20) (actual time=3720.497..3720.497 rows=0 loops=1)
Recheck Cond: (("clientid" = 31239) AND ("time_job" > 1457826264) AND (("title")::"text" ~~ '%RYAN WER%'::"text"))
Rows Removed by Index Recheck: 4
-> BitmapAnd (cost=385.88..385.88 rows=1 width=0) (actual time=3720.469..3720.469 rows=0 loops=1)
-> Bitmap Index Scan on "ix_jobs_client_times" (cost=0.00..50.00 rows=1644 width=0) (actual time=0.142..0.142 rows=795 loops=1)
Index Cond: (("clientid" = 31239) AND ("time_job" > 1457826264))
-> Bitmap Index Scan on "ix_ja_jobs_trgm_gin" (cost=0.00..335.63 rows=484 width=0) (actual time=3720.213..3720.213 rows=32 loops=1)
Index Cond: (("title")::"text" ~~ '%RYAN WER%'::"text")
Total runtime: 3720.653 ms
As you can see, the index did not work.
Table `public.ja_jobs`:
CREATE TABLE public.ja_jobs (
id bigint NOT NULL DEFAULT "nextval"('"ja_jobs_id_seq"'::"regclass"),
refnum character varying(100) NOT NULL DEFAULT ''::character varying,
clientid bigint NOT NULL DEFAULT 0,
customerid bigint,
time_job bigint,
priority smallint NOT NULL DEFAULT 0,
status character varying(255) NOT NULL DEFAULT 'active'::"bpchar",
title character varying(100) NOT NULL DEFAULT ''::character varying,
-- some other irrelevant columns
)
Indexes on `public.ja_jobs`:
Indexes:
"ja_jobs_pkey" PRIMARY KEY, "btree" ("id")
"ix_bill_customer_jobs" "btree" ("customerid", "bill_customer")
"ix_clientid_jobs" "btree" ("clientid")
"ix_customerid_job" "btree" ("customerid")
"ix_ja_jobs_clientid_modified_date_created_date" "btree" ("clientid", "modified_date", "created_date")
"ix_ja_jobs_gsdi_pk" "btree" (("id"::"text"))
"ix_ja_jobs_trgm_gin" "gin" ("title" "gin_trgm_ops")
"ix_job_customer_recent_jobs_lookaside_bill_customer" "btree" ("bill_customer", "modified_date")
"ix_job_customer_recent_jobs_lookaside_clientid" "btree" ("clientid", "modified_date")
"ix_job_customer_recent_jobs_lookaside_customer" "btree" ("customerid", "modified_date")
"ix_jobs_charges_and_parts_sort" "btree" (("charges_count" + "parts_count"))
"ix_jobs_client_times" "btree" ("clientid", "time_job", "time_arrival")
"ix_jobs_fts_description_en" "gin" ("full_text_universal_cast"("description"))
"ix_jobs_fts_full_address_en" "gin" ((((("full_text_universal_cast"("address"::"text") || "full_text_universal_cast"("suburb"::"text")) || "full_text_universal_cast"("city"::"text")) || "full_text_universal_cast"("stpr"::"text")) || "full_text_universal_cast"("postcode"::"text")))
"ix_jobs_fts_job_number_en" "gin" ("full_text_universal_cast"("job_number"::"text"))
"ix_jobs_fts_refnum_en" "gin" ("full_text_universal_cast"("refnum"::"text"))
"ix_jobs_fts_title_en" "gin" ("full_text_universal_cast"("title"::"text"))
"ix_jobs_full_address_street_first" "btree" (((((COALESCE("address"::character varying, ''::character varying)::"text" || COALESCE(' '::"text" || "suburb"::"text", ''::"text")) || COALESCE(' '::"text" || "city"::"text", ''::"text")) || COALESCE(' '::"text" || "postcode"::"text", ''::"text")) || COALESCE(' '::"text" || "stpr"::"text", ''::"text")))
"ix_jobs_paying_customers" "btree" ((COALESCE("bill_customer", "customerid")))
"ix_jobs_status_label_ids" "btree" ("status_label_id")
"ix_jobs_top_by_client" "btree" ("id", "clientid")
"ix_mobiuser_jobs" "btree" ("accepted_mobile_user")
"ix_recurrenceid_jobs" "btree" ("recurrenceid")
"ix_timejob_jobs" "btree" ("time_job")
"ja_jobs_client_type" "btree" ("clientid", "jobtype")
"ja_jobs_the_geom_idx" "gist" ("the_geom")
### Question:
What can I do to improve the query? Why is the trigram index not working as expected?
**UPDATE:** Re-ran the explain with (analyze, buffers):
Limit (cost=199669.37..199669.39 rows=10 width=20) (actual time=31523.690..31523.691 rows=1 loops=1)
Buffers: shared hit=26947 read=101574 dirtied=438
-> Sort (cost=199669.37..199669.40 rows=11 width=20) (actual time=31523.686..31523.686 rows=1 loops=1)
Sort Key: "title"
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=26947 read=101574 dirtied=438
-> Bitmap Heap Scan on "ja_jobs" (cost=4850.60..199669.18 rows=11 width=20) (actual time=11714.504..31523.640 rows=1 loops=1)
Recheck Cond: (("clientid" = 2565) AND ("time_job" > 1382496599))
Filter: (("title")::"text" ~~* '%Hislop%'::"text")
Rows Removed by Filter: 207654
Buffers: shared hit=26942 read=101574 dirtied=438
-> Bitmap Index Scan on "ix_jobs_client_times" (cost=0.00..4850.60 rows=155054 width=0) (actual time=11670.956..11670.956 rows=215142 loops=1)
Index Cond: (("clientid" = 2565) AND ("time_job" > 1382496599))
Buffers: shared hit=121 read=5772
Total runtime: 31524.874 ms
After removing DISTINCT and the left `%`:
explain (analyze, buffers)
SELECT title
FROM ja_jobs
WHERE title ILIKE 'Hislop 13035%'
AND clientid = 2565
AND time_job > 1382496599
ORDER BY title
LIMIT 10;
Limit (cost=2275.53..2275.55 rows=9 width=20) (actual time=3492.479..3492.483 rows=1 loops=1)
Buffers: shared hit=4940 read=448
I/O Timings: read=83.285
-> Sort (cost=2275.53..2275.55 rows=9 width=20) (actual time=3492.475..3492.477 rows=1 loops=1)
Sort Key: "title"
Sort Method: quicksort Memory: 25kB
Buffers: shared hit=4940 read=448
I/O Timings: read=83.285
-> Bitmap Heap Scan on "ja_jobs" (cost=391.62..2275.38 rows=9 width=20) (actual time=3492.460..3492.462 rows=1 loops=1)
Recheck Cond: (("title")::"text" ~~* 'Hislop Street Clinic 2513035%'::"text")
Filter: (("time_job" > 1382496599) AND ("clientid" = 2565))
Buffers: shared hit=4940 read=448
I/O Timings: read=83.285
-> Bitmap Index Scan on "ix_jobs_trgm_gin" (cost=0.00..391.62 rows=482 width=0) (actual time=3492.427..3492.427 rows=1 loops=1)
Index Cond: (("title")::"text" ~~* 'Hislop 13035%'::"text")
Buffers: shared hit=4939 read=448
I/O Timings: read=83.285
Total runtime: 3492.531 ms
As you can see, the query now hits the new trigram index, but it's still slow.
Then I removed `ORDER BY`, but the query is still slow.
I also tried `LIKE` (which is much faster), but `LIKE` is case-sensitive, so I got no rows back. I can't use it.
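For what it's worth, a case-insensitive *prefix* search can often avoid the trigram scan entirely by indexing a lowered expression with `text_pattern_ops`. A minimal sketch, assuming the prefix never starts with `%` (the index name here is hypothetical):

```sql
-- Sketch: btree expression index supporting LIKE 'prefix%' on lower(title).
CREATE INDEX ix_jobs_title_lower ON ja_jobs (lower(title) text_pattern_ops);

-- Rewrite the ILIKE prefix search as a case-folded LIKE:
SELECT title
FROM ja_jobs
WHERE lower(title) LIKE lower('Hislop 13035') || '%'
  AND clientid = 2565
  AND time_job > 1382496599
ORDER BY title
LIMIT 10;
```

This only helps anchored (prefix) patterns; a pattern with a leading `%` still needs the trigram index.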
user83914
May 12, 2016, 02:07 AM
• Last activity: Aug 1, 2023, 05:15 PM
4
votes
4
answers
32159
views
postgresql: permission denied for sequence <table>_i_seq
I have a schema of a table in PostgreSQL that looks like: CREATE TABLE IF NOT EXISTS data.pmacct ( i BIGSERIAL PRIMARY KEY, And there are users: - **A**: superuser privileges - **B**: read/write privileges - **C**: read privileges And I have made the following modifications in regards to the privile...
I have a schema of a table in PostgreSQL that looks like:
CREATE TABLE IF NOT EXISTS data.pmacct (
i BIGSERIAL PRIMARY KEY,
And there are users:
- **A**: superuser privileges
- **B**: read/write privileges
- **C**: read privileges
And I have made the following modifications in regards to the privileges (as user **A**)
CREATE SCHEMA data ;
ALTER DATABASE ip_spotlight SET search_path TO data ;
REVOKE EXECUTE ON ALL FUNCTIONS IN SCHEMA data FROM PUBLIC ;
ALTER DEFAULT PRIVILEGES IN SCHEMA data GRANT SELECT ON TABLES TO C ;
ALTER DEFAULT PRIVILEGES IN SCHEMA data GRANT ALL ON TABLES TO B ;
and, as user `postgres`:
GRANT USAGE ON SCHEMA data TO netops ;
GRANT USAGE ON SCHEMA data TO netopsapp ;
GRANT SELECT ON ALL TABLES IN SCHEMA data TO C ;
GRANT SELECT ON ALL SEQUENCES IN SCHEMA data TO C ;
GRANT EXECUTE ON ALL FUNCTIONS IN SCHEMA data TO C ;
GRANT ALL ON ALL TABLES IN SCHEMA data TO B ;
GRANT ALL ON ALL SEQUENCES IN SCHEMA data TO B ;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA data TO B ;
But I am getting the following error message as user **B**
> ERROR: permission denied for sequence pmacct_i_seq
Could you please enlighten me as to what I have done wrong, such that user **B** is not able to run an `INSERT` statement against the `data.pmacct` table?
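For context on what typically causes this error: `BIGSERIAL` creates a sequence behind the column, and `INSERT` needs `USAGE` (or `ALL`) on that sequence. Grants issued before the table existed won't reach it, and default privileges that only cover tables don't apply to sequences. A hedged sketch of the usual fix, using the role names from this question:

```sql
-- Grant on the specific sequence behind data.pmacct.i:
GRANT USAGE, SELECT ON SEQUENCE data.pmacct_i_seq TO B;

-- And make future sequences in the schema follow suit.
-- Note: ALTER DEFAULT PRIVILEGES only affects objects created later,
-- and only those created by the role that issues it.
ALTER DEFAULT PRIVILEGES IN SCHEMA data GRANT USAGE, SELECT ON SEQUENCES TO B;
```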
nskalis
(1721 rep)
Nov 8, 2017, 12:38 PM
• Last activity: Aug 1, 2023, 11:19 AM
7
votes
3
answers
39071
views
Search for current month data in postgresql 9.2
I'm trying to get the ticket data only from the current month on a PostgreSql 9.2 database. The field called `data_cadastro` is `DATETIME`: id_ticket | data_cadastro -----------+--------------------- 2521 | 2017-10-31 08:11:48 how should I do ?
I'm trying to get the ticket data only from the current month on a PostgreSql 9.2 database.
The field called `data_cadastro` is `DATETIME`:
id_ticket | data_cadastro
-----------+---------------------
2521 | 2017-10-31 08:11:48
How should I do that?
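One common pattern (not from the original thread) is to bound the month with `date_trunc`, which keeps the comparison on the raw column and therefore index-friendly; the table name below is hypothetical:

```sql
-- Sketch: rows whose data_cadastro falls within the current month.
SELECT id_ticket, data_cadastro
FROM tickets   -- hypothetical table name
WHERE data_cadastro >= date_trunc('month', now())
  AND data_cadastro <  date_trunc('month', now()) + interval '1 month';
```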
Flavio Junior
(71 rep)
Nov 13, 2017, 03:09 PM
• Last activity: May 2, 2023, 07:52 PM
40
votes
4
answers
60561
views
how to track progress of a large postgres dump
Is there a way to see the progress of a `pg_dump` operation on a big db (ie > 1GB)? adding the `-v` option just dumps text on the screen, but doesn't give me much meaningful tracking information.
Is there a way to see the progress of a `pg_dump` operation on a big database (i.e. > 1 GB)? Adding the `-v` option just dumps text on the screen, but doesn't give me much meaningful tracking information.
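One approach worth sketching (it assumes the third-party `pv` utility is installed; this is not something the question itself mentions) is to pipe the dump through `pv`, which prints byte count, throughput, and elapsed time:

```shell
# Sketch: watch pg_dump throughput with pv.
# For an ETA bar, pass the expected size, e.g. pv -s 1g.
pg_dump mydb | pv > mydb.sql
```

This only works for plain-format dumps written to stdout; custom-format dumps written with `-f` bypass the pipe.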
abbood
(503 rep)
Aug 24, 2016, 01:21 PM
• Last activity: Apr 1, 2023, 12:16 AM
6
votes
1
answers
13891
views
postgresql replication - pg_stat_replication is showing empty columns
I've a postgresql 9.2 streaming replication setup. It appears that the slave is getting the updates from master and is in sync. I've verified it by looking at pg_xlog dir and process list. $ ps aux | grep 'postgres.*rec' postgres 26349 2.3 42.9 38814656 18604176 ? Ss Sep20 24:06 postgres: startup pr...
I have a PostgreSQL 9.2 streaming replication setup. It appears that the slave is getting the updates from the master and is in sync; I've verified this by looking at the pg_xlog directory and the process list.
$ ps aux | grep 'postgres.*rec'
postgres 26349  2.3 42.9 38814656 18604176 ?  Ss  Sep20  24:06 postgres: startup process   recovering 000000010000026E00000073
postgres 26372  4.9  0.1 38959108    78880 ?  Ss  Sep20  51:27 postgres: wal receiver process   streaming 26E/731E05F0
And the startup logs on the slave also look alright:
2013-09-21 03:02:38 UTC LOG:  database system was shut down in recovery at 2013-09-21 03:02:32 UTC
2013-09-21 03:02:38 UTC LOG:  incomplete startup packet
2013-09-21 03:02:38 UTC FATAL:  the database system is starting up
2013-09-21 03:02:38 UTC LOG:  entering standby mode
2013-09-21 03:02:38 UTC LOG:  redo starts at 26E/71723BB8
2013-09-21 03:02:39 UTC FATAL:  the database system is starting up
2013-09-21 03:02:39 UTC LOG:  consistent recovery state reached at 26E/75059C90
2013-09-21 03:02:39 UTC LOG:  invalid xlog switch record at 26E/75059E98
2013-09-21 03:02:39 UTC LOG:  database system is ready to accept read only connections
2013-09-21 03:02:39 UTC LOG:  streaming replication successfully connected to primary
What worries me is the `pg_stat_replication` view on the master. It shows that there is a client connected, but doesn't show anything more:
archive=> select * from pg_stat_replication;
 pid | usesysid |  usename   | application_name | client_addr | client_hostname | client_port | backend_start | state | sent_location | write_location | flush_location | replay_location | sync_priority | sync_state
-----+----------+------------+------------------+-------------+-----------------+-------------+---------------+-------+---------------+----------------+----------------+-----------------+---------------+------------
 999 |    16384 | replicator | walreceiver      |             |                 |             |               |       |               |                |                |                 |               |
(1 row)
Is this the expected behavior? I remember seeing values for client_addr, sent_location, replay_location etc. when I did a test run some time back. Is there anything that I'm missing?
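A likely explanation (a permissions detail, not something stated in the question): the detail columns of `pg_stat_replication` are only populated when the querying role is a superuser (or the role the WAL sender runs as); other roles see just the identifying columns, exactly as in the output above. A quick way to check, as a sketch:

```sql
-- Run as a superuser (e.g. postgres) and compare with the earlier output;
-- the location columns should now be populated.
SELECT usename, client_addr, state, sent_location, replay_location
FROM pg_stat_replication;
```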
Anand Chitipothu
(208 rep)
Sep 21, 2013, 03:16 AM
• Last activity: Dec 17, 2022, 11:18 AM
15
votes
2
answers
31290
views
Postgresql function to create table
I want to create a function in order to create a table with a specific structure pasing part of the name of the table as an argument so the name of the table is t_ . Similar to this: CREATE OR REPLACE FUNCTION create_table_type1(t_name VARCHAR(30)) RETURNS VOID AS $$ BEGIN EXECUTE "CREATE TABLE IF N...
I want to create a function that creates a table with a specific structure, passing part of the table name as an argument, so the resulting table is named `t_` plus that argument. Similar to this:
CREATE OR REPLACE FUNCTION create_table_type1(t_name VARCHAR(30)) RETURNS VOID AS $$
BEGIN
EXECUTE "CREATE TABLE IF NOT EXISTS t_"|| t_name ||"
(
id SERIAL,
customerid INT,
daterecorded DATE,
value DOUBLE PRECISION,
PRIMARY KEY (id)
)"
END
$$ LANGUAGE plpgsql
Then call it like:
SELECT create_table_type1('one');
Is it possible?
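For reference, the sketch above won't compile as written: the double quotes make PostgreSQL treat the string as an identifier, and the `EXECUTE` statement and `END` are missing semicolons. A hedged corrected version using `format()` with `%I`, which quotes the generated name safely:

```sql
CREATE OR REPLACE FUNCTION create_table_type1(t_name varchar(30))
RETURNS void AS $$
BEGIN
    -- %I quotes the identifier, protecting against odd characters in t_name.
    EXECUTE format(
        'CREATE TABLE IF NOT EXISTS %I (
             id           SERIAL,
             customerid   INT,
             daterecorded DATE,
             value        DOUBLE PRECISION,
             PRIMARY KEY (id)
         )', 't_' || t_name);
END;
$$ LANGUAGE plpgsql;

-- Usage:
-- SELECT create_table_type1('one');   -- creates table t_one
```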
Alan Cor
(357 rep)
May 23, 2013, 04:19 PM
• Last activity: Dec 6, 2022, 09:13 AM
8
votes
2
answers
17907
views
PostgreSQL CREATE TABLE creates with incorrect owner
I'm using PostgreSQL 9.2.4. When I create a table as a non-superuser in a database owned by that non-superuser, it is owned by the `postgres` user, so I can't put any data into it unless I explicitly grant myself permission. I created the non-superuser like this: admin_user=# create role "test1" NOI...
I'm using PostgreSQL 9.2.4. When I create a table as a non-superuser in a database owned by that non-superuser, it is owned by the `postgres` user, so I can't put any data into it unless I explicitly grant myself permission.
I created the non-superuser like this:
admin_user=# create role "test1" NOINHERIT LOGIN ENCRYPTED PASSWORD 'wibble' CONNECTION LIMIT 10;
Then I created a database owned by `test1` like this:
admin_user=# create database "test1$db1" with owner "test1";
Then I started a new psql session as user `test1` and created a table:
test1$db1=> create table test_table (column_name varchar(50));
But I couldn't insert into it:
test1$db1=> insert into test_table values ('some data');
ERROR: permission denied for relation test_table
Checking the permissions shows that the table is owned by the `postgres` user:
test1$db1=> \dt
List of relations
Schema | Name | Type | Owner
--------+----------------+-------+----------
public | test_table | table | postgres
However, I can grant myself permissions and do stuff:
test1$db1=> grant insert, select on test_table to test1;
GRANT
test1$db1=> insert into test_table values ('some data');
INSERT 0 1
test1$db1=> select * from test_table;
column_name
-------------
some data
(1 row)
What's going on? I'm pretty sure this used to work. And the [PostgreSQL docs for CREATE TABLE](http://www.postgresql.org/docs/9.2/static/sql-createtable.html) say
> CREATE TABLE will create a new, initially empty table in the current
> database. The table will be owned by the user issuing the command.
Having to grant permissions to myself on my own tables doesn't sound like it's what I should have to do.
Any help much appreciated!
**[UPDATE]**
It looks like this might be caused by some kind of change to the PostgreSQL package at http://apt.postgresql.org/pub/repos/apt/
We have three machine images, all of which installed PostgreSQL from that source:
* Install at 23 July 2013, 18:47 UTC -- correct table ownership
* Install at 23 July 2013, 19:39 UTC -- correct table ownership
* Install at 24 July 2013, 13:16 UTC -- error
So perhaps something has been changed in the packaging. I'll drop a line to the mailing list for that.
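If anyone hits the same symptom, a couple of hedged diagnostics (not from the original post) that narrow down where the unexpected ownership is coming from:

```sql
-- Am I actually the role I think I am? SET ROLE or a security definer
-- wrapper can make current_user differ from session_user.
SELECT current_user, session_user;

-- Any ALTER DEFAULT PRIVILEGES entries in play? (psql shortcut: \ddp)
SELECT * FROM pg_default_acl;
```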
Giles Thomas
(231 rep)
Jul 25, 2013, 05:01 PM
• Last activity: Oct 31, 2022, 04:02 AM
0
votes
0
answers
116
views
Postgres 9.2 Change elements of a JSON object
If I were to perform the following script in newer versions of Postgres (13 and upward), then elements within the JSONB object can easily be added and amended using `||`. DO $$ declare j1 jsonb; declare j2 jsonb; declare j3 jsonb; declare j4 jsonb; begin j1 := ('{"A":'||false||',"B":'||false||',"C":...
If I were to perform the following script in newer versions of Postgres (13 and upward), then elements within the JSONB object can easily be added and amended using `||`.
DO $$
declare j1 jsonb;
declare j2 jsonb;
declare j3 jsonb;
declare j4 jsonb;
begin
j1 := ('{"A":'||false||',"B":'||false||',"C":'||false||',"D":'||false||',"E":'||false||',"F":'||false||'}')::jsonb;
j2 := ('{"G":'||true||',"H":'||true||'}')::jsonb;
j4 := j1||j2;
raise notice '%',j4;
--Required output: {"A":false,"B":false,"C":false,"D":false,"E":false,"F":false,"G":true,"H":true}
j3 := ('{"A":'||false||',"B":'||true||',"C":'||false||',"D":'||false||',"E":'||true||',"F":'||false||'}')::jsonb;
j4 := j4||j3;
raise notice '%',j4;
--Required output: {"A":false,"B":true,"C":false,"D":false,"E":true,"F":false,"G":true,"H":true}
end $$;
I am trying to achieve the same on Postgres 9.2. Since 9.2 only supports the `json` type, I have to use JSON instead of JSONB. (The client is not able to upgrade yet.) However, the same code with JSON raises errors.
DO $$
declare j1 json;
declare j2 json;
declare j3 json;
declare j4 json;
begin
j1 := ('{"A":'||false||',"B":'||false||',"C":'||false||',"D":'||false||',"E":'||false||',"F":'||false||'}')::json;
j2 := ('{"G":'||true||',"H":'||true||'}')::json;
j4 := j1||j2;
raise notice '%',j4;
--Required output: {"A":false,"B":false,"C":false,"D":false,"E":false,"F":false,"G":true,"H":true}
j3 := ('{"A":'||false||',"B":'||true||',"C":'||false||',"D":'||false||',"E":'||true||',"F":'||false||'}')::json;
j4 := j4||j3;
raise notice '%',j4;
--Required output: {"A":false,"B":true,"C":false,"D":false,"E":true,"F":false,"G":true,"H":true}
end $$;
Are there any functions that I can use to add/amend elements in a JSON object in Postgres 9.2? Ultimately achieving the same results in the second block of code as in the first block of code. Thanks.
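For clarity on the target semantics: `jsonb || jsonb` performs a right-biased key merge, i.e. keys from the right operand overwrite matching keys on the left. In Python terms (an illustration of the semantics being asked for, not a 9.2 workaround):

```python
# Right-biased merge: the semantics of jsonb || jsonb in the first block above.
def jsonb_concat(left: dict, right: dict) -> dict:
    """Return a new dict in which keys from `right` overwrite keys in `left`."""
    merged = dict(left)
    merged.update(right)
    return merged

j1 = {"A": False, "B": False, "C": False, "D": False, "E": False, "F": False}
j2 = {"G": True, "H": True}
j4 = jsonb_concat(j1, j2)

j3 = {"A": False, "B": True, "C": False, "D": False, "E": True, "F": False}
j4 = jsonb_concat(j4, j3)
# j4 now matches the second "Required output" in the script above.
```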
Ruan
(35 rep)
Oct 18, 2022, 06:47 AM
Showing page 1 of 20 total questions