Multiple, but not all, executions of the same Postgres SELECT query are getting locked endlessly
0 votes, 0 answers, 52 views
I am using PostgreSQL version **9.2.18** on **CentOS Linux release 7.3.1611 (Core)**.
In Postgres, a SELECT query on a single table is executed several times per minute.
Around the first 20 or 21 executions of this query get locked endlessly, whereas the other executions run fine.
When I run the command below to check for queries running for more than a minute, these queries do not show up:
SELECT pid, pg_stat_activity.query_start,
       now() - pg_stat_activity.query_start AS query_time,
       query, state
FROM pg_stat_activity
WHERE (now() - pg_stat_activity.query_start) > interval '1 minute';
But when I run the query SELECT * FROM pg_locks;
the result is as follows (a few lines pasted):
 locktype   | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction |  pid  |      mode       | granted | fastpath
------------+----------+----------+------+-------+------------+---------------+---------+-------+----------+--------------------+-------+-----------------+---------+----------
 relation   |    16385 |  4614041 |      |       |            |               |         |       |          | 23/15040           | 55538 | AccessShareLock | t       | t
 relation   |    16385 |  4233898 |      |       |            |               |         |       |          | 23/15040           | 55538 | AccessShareLock | t       | t
 relation   |    16385 |  4207899 |      |       |            |               |         |       |          | 23/15040           | 55538 | AccessShareLock | t       | t
 relation   |    16385 |   682008 |      |       |            |               |         |       |          | 23/15040           | 55538 | AccessShareLock | t       | t
 relation   |    16385 |    17702 |      |       |            |               |         |       |          | 23/15040           | 55538 | AccessShareLock | t       | t
 relation   |    16385 |    17309 |      |       |            |               |         |       |          | 23/15040           | 55538 | AccessShareLock | t       | t
 virtualxid |          |          |      |       | 23/15040   |               |         |       |          | 23/15040           | 55538 | ExclusiveLock   | t       | t
 relation   |    16385 |  4614041 |      |       |            |               |         |       |          | 22/15040           | 55537 | AccessShareLock | t       | t
 relation   |    16385 |  4233898 |      |       |            |               |         |       |          | 22/15040           | 55537 | AccessShareLock | t       | t
 relation   |    16385 |  4207899 |      |       |            |               |         |       |          | 22/15040           | 55537 | AccessShareLock | t       | t
 relation   |    16385 |   682008 |      |       |            |               |         |       |          | 22/15040           | 55537 | AccessShareLock | t       | t
 relation   |    16385 |    17702 |      |       |            |               |         |       |          | 22/15040           | 55537 | AccessShareLock | t       | t
 relation   |    16385 |    17309 |      |       |            |               |         |       |          | 22/15040           | 55537 | AccessShareLock | t       | t
 virtualxid |          |          |      |       | 22/15040   |               |         |       |          | 22/15040           | 55537 | ExclusiveLock   | t       | t
 relation   |    16385 |  4614041 |      |       |            |               |         |       |          | 21/15040           | 55536 | AccessShareLock | t       | t
 relation   |    16385 |  4233898 |      |       |            |               |         |       |          | 21/15040           | 55536 | AccessShareLock | t       | t
 relation   |    16385 |  4207899 |      |       |            |               |         |       |          | 21/15040           | 55536 | AccessShareLock | t       | t
 relation   |    16385 |   682008 |      |       |            |               |         |       |          | 21/15040           | 55536 | AccessShareLock | t       | t
 relation   |    16385 |    17702 |      |       |            |               |         |       |          | 21/15040           | 55536 | AccessShareLock | t       | t
 relation   |    16385 |    17309 |      |       |            |               |         |       |          | 21/15040           | 55536 | AccessShareLock | t       | t
 virtualxid |          |          |      |       | 21/15040   |               |         |       |          | 21/15040           | 55536 | ExclusiveLock   | t       | t
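Every lock entry above shows granted = t. To double-check whether any lock request is actually waiting rather than already granted, something like the following could be run (a sketch, assuming the standard 9.2 pg_locks and pg_stat_activity catalogs):

```
-- Sketch: list lock requests that have NOT been granted, together with the
-- waiting backend's state and query text (columns as they exist in 9.2).
SELECT l.pid,
       l.locktype,
       l.relation::regclass AS relation,
       l.mode,
       a.waiting,
       a.state,
       a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;
```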
**The pg_locks result above shows the processes holding the locks, which are the queries in question.
Next, the output of the command ps -ef | grep SELECT
is as follows:**
postgres 55519 997 47 07:09 ? 01:52:23 postgres: pgdbuser dam_stage 15.206.133.156(33248) SELECT
postgres 55520 997 47 07:09 ? 01:52:16 postgres: pgdbuser dam_stage 15.206.133.156(33250) SELECT
postgres 55521 997 47 07:09 ? 01:52:16 postgres: pgdbuser dam_stage 15.206.133.156(33254) SELECT
postgres 55522 997 47 07:09 ? 01:52:18 postgres: pgdbuser dam_stage 15.206.133.156(33258) SELECT
postgres 55523 997 47 07:09 ? 01:52:18 postgres: pgdbuser dam_stage 15.206.133.156(33256) SELECT
postgres 55524 997 47 07:09 ? 01:52:27 postgres: pgdbuser dam_stage 15.206.133.156(33252) SELECT
postgres 55525 997 47 07:09 ? 01:52:23 postgres: pgdbuser dam_stage 15.206.133.156(33260) SELECT
postgres 55526 997 47 07:09 ? 01:52:16 postgres: pgdbuser dam_stage 15.206.133.156(33264) SELECT
postgres 55527 997 47 07:09 ? 01:52:19 postgres: pgdbuser dam_stage 15.206.133.156(33262) SELECT
postgres 55528 997 47 07:09 ? 01:52:21 postgres: pgdbuser dam_stage 15.206.133.156(33268) SELECT
postgres 55529 997 47 07:09 ? 01:52:26 postgres: pgdbuser dam_stage 15.206.133.156(33270) SELECT
postgres 55530 997 48 07:09 ? 01:52:37 postgres: pgdbuser dam_stage 15.206.133.156(33276) SELECT
postgres 55531 997 47 07:09 ? 01:52:21 postgres: pgdbuser dam_stage 15.206.133.156(33266) SELECT
postgres 55532 997 47 07:09 ? 01:52:24 postgres: pgdbuser dam_stage 15.206.133.156(33272) SELECT
postgres 55533 997 47 07:09 ? 01:52:23 postgres: pgdbuser dam_stage 15.206.133.156(33274) SELECT
postgres 55534 997 47 07:09 ? 01:52:17 postgres: pgdbuser dam_stage 15.206.133.156(33278) SELECT
postgres 55535 997 47 07:09 ? 01:52:19 postgres: pgdbuser dam_stage 15.206.133.156(33284) SELECT
postgres 55536 997 47 07:09 ? 01:52:32 postgres: pgdbuser dam_stage 15.206.133.156(33282) SELECT
postgres 55537 997 47 07:09 ? 01:52:20 postgres: pgdbuser dam_stage 15.206.133.156(33280) SELECT
postgres 55538 997 47 07:09 ? 01:52:20 postgres: pgdbuser dam_stage 15.206.133.156(33286) SELECT
root 81425 79781 0 11:03 pts/2 00:00:00 grep --color=auto SELECT
**If I kill these queries/processes, they are replaced by new instances of the same queries.**
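For reference, here is a sketch of cancelling or terminating one of these backends from within Postgres instead of from the shell, using the standard pg_cancel_backend()/pg_terminate_backend() functions (the PID is just one taken from the ps listing above):

```
-- Sketch: cancel the running query on one backend, or terminate the backend
-- entirely (PID 55538 is taken from the ps/pg_locks output above).
SELECT pg_cancel_backend(55538);     -- cancels only the current query
SELECT pg_terminate_backend(55538);  -- terminates the whole backend/connection
```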
Please suggest what could be causing this and how to rectify the issue.
**EDIT:** The query is like this
*SELECT "id", "reference_code", "completed_at", "data_json" FROM "queue" WHERE "status" = 1 AND "reference_type" LIKE '%sorted%' ORDER BY "completed_at" DESC LIMIT 1*
**These processes are also shown in the top output like this (the PIDs are different because the previous ones were killed):**
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
82894 postgres 20 0 236748 39812 37396 R 49.8 0.1 678:23.38 postgres
82890 postgres 20 0 236748 39800 37384 S 49.2 0.1 677:12.88 postgres
82892 postgres 20 0 236748 39764 37348 R 49.2 0.1 677:09.95 postgres
82895 postgres 20 0 236748 39840 37424 S 49.2 0.1 677:23.80 postgres
82896 postgres 20 0 236748 39744 37328 R 49.2 0.1 679:36.06 postgres
82899 postgres 20 0 236748 39816 37400 R 49.2 0.1 676:39.73 postgres
82900 postgres 20 0 236748 39800 37384 R 49.2 0.1 677:25.23 postgres
82901 postgres 20 0 236748 39800 37384 S 49.2 0.1 677:28.81 postgres
82902 postgres 20 0 236748 39776 37360 S 49.2 0.1 677:27.31 postgres
82903 postgres 20 0 236748 39788 37372 S 49.2 0.1 676:54.77 postgres
82906 postgres 20 0 236748 39904 37488 S 49.2 0.1 677:55.78 postgres
82907 postgres 20 0 236748 39896 37480 S 49.2 0.1 677:51.72 postgres
82908 postgres 20 0 236748 39780 37364 S 49.2 0.1 677:27.25 postgres
82891 postgres 20 0 236748 39844 37428 R 48.8 0.1 677:20.41 postgres
82893 postgres 20 0 236748 39812 37396 R 48.8 0.1 677:22.60 postgres
82898 postgres 20 0 236748 39800 37384 R 48.8 0.1 677:06.41 postgres
82904 postgres 20 0 236748 39800 37384 S 48.8 0.1 677:45.52 postgres
82905 postgres 20 0 236748 39696 37280 S 48.8 0.1 677:42.04 postgres
82909 postgres 20 0 236748 39884 37468 R 48.8 0.1 678:16.44 postgres
82897 postgres 20 0 236748 39708 37292 S 48.5 0.1 677:20.17 postgres
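A sketch for looking up what these busy backends are actually doing, using a few PIDs from the top output above (pg_stat_activity columns as they exist in 9.2, where waiting is a boolean):

```
-- Sketch: inspect the busy backends from the top output by PID.
SELECT pid, state, waiting, xact_start, query_start, query
FROM pg_stat_activity
WHERE pid IN (82890, 82891, 82892, 82893, 82894);
```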
Asked by user2710961
(23 rep)
Apr 3, 2024, 11:55 AM
Last activity: Apr 4, 2024, 10:44 AM