
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
3405 views
PostgreSQL update column in one table with value from another, based on key from first table
Sorry about the title, I wasn't sure how best to word this. My situation is this: I have two tables: paper_author [paper_id, author_id, author_name] authors [author_id, author_name] Right now, the author_name column in paper_author is NULL. I would like to select the author name from the authors table and insert it into the author_name column in the paper_author table. I've written the following query:
UPDATE paper_author pa 
 SET author_name = (SELECT author_name FROM authors a WHERE a.author_id = pa.author_id);
I think this will do what I want, but it is taking a very long time to run (days). For reference, the table paper_author has ~900M rows and the table authors has ~200M rows. author_id is the primary key of the authors table, and author_id has an index in the table paper_author. The rows in paper_author are not unique on author_id (i.e. each author_id may appear multiple times, associated with different papers). Is there a more efficient way to write this? Have I missed something?
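For comparison, a join-style UPDATE ... FROM (standard PostgreSQL syntax) is often much faster than a correlated subquery for a backfill like this, because the two tables can be joined once rather than probing authors once per row. A minimal sketch, assuming the tables described above:

```sql
-- Hedged sketch: same effect as the correlated subquery, written as a join.
-- Rows in paper_author with no match in authors are left untouched here
-- (the original query would set author_name to NULL for them instead).
UPDATE paper_author pa
SET    author_name = a.author_name
FROM   authors a
WHERE  a.author_id = pa.author_id;
```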
Gabriel Getzie (1 rep)
Oct 6, 2020, 02:07 AM • Last activity: Jul 30, 2025, 12:04 AM
0 votes
1 answer
502 views
Need a function to run different queries based on parameters sent: "subquery must return one column" error
I am new to advanced SQL programming. I am storing constants. Constants have three possible scopes: local, user and global. There are two tables: s0constants_user holds the set values of constants, and s0constants holds the constant definitions and values.

CREATE TABLE public.s0constants_user (
    constant_name bpchar(15) NOT NULL,
    empfkey int2 NOT NULL DEFAULT 0,
    constant_value bpchar(255) NULL,
    locality_name bpchar(30) NOT NULL DEFAULT 1,
    CONSTRAINT s0constants_user_pk PRIMARY KEY (constant_name, empfkey, locality_name)
);

CREATE TABLE public.s0constants (
    constant_name bpchar(15) NOT NULL,
    constant_value bpchar(255) NULL,
    constant_data_type bpchar(1) NULL,
    cttype int4 NULL,
    displayname bpchar(30) NULL,
    actiontype int4 NULL,
    note text NULL,
    const_scopefk int4 NOT NULL DEFAULT 3, -- 1=local 2=user 3=global
    CONSTRAINT "S0CONSTANTS_pkey" PRIMARY KEY (constant_name),
    CONSTRAINT scope_fk FOREIGN KEY (const_scopefk) REFERENCES s0constants_scopelk(id)
);

I realize I could do this with separate tables for each scope, but I'd like to make this work even if it is just a learning exercise. I need a function that returns a constant's value and data type given the constant name and empfkey. There is a different lookup for each scope of constant:

local - constant_name, empfkey, locality_name
user - constant_name, empfkey
global - constant_name

Constant names are unique over the three scopes. I thought a CASE statement might do it, but I'm having trouble limiting it to one row of data and working out how to set up the sub-queries. This is my first (edit: second) pass at this, but now I have issues with "subquery must return only one column":

FUNCTION public.const_get( stname text, empid INT, locality text )
returns TABLE(cvalue text, ctype text)
LANGUAGE sql STABLE PARALLEL SAFE
AS $$
select c.const_scopefk, c.constant_data_type
      ,case when const_scopefk = 1 then --local
            ( select distinct scu.constant_value, c.constant_data_type
              from s0constants_user scu
              where (scu.constant_name = constant_name
                     and scu.empfkey = empid
                     and scu.locality_name = locality)
              limit 1)
            when const_scopefk = 2 then --user
            (select scu.constant_value, c.constant_data_type
             from s0constants_user scu
             where (scu.constant_name = constant_name
                    and scu.empfkey = empid
                    and scu.locality_name = 1)
             limit 1)
            when const_scopefk = 1 then --global
            (select scu.constant_value, c.constant_data_type
             from s0constants_user scu
             where (scu.constant_name = constant_name
                    and scu.empfkey = 0
                    and scu.locality_name = 1)
             limit 1)
       end
from public.s0constants c
where c.constant_name = UPPER(constname);
$$

EDIT: I've updated the table definitions.
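One way that avoids multi-column scalar subqueries altogether is to look up the scope first and then run exactly one query per scope. A hypothetical sketch in PL/pgSQL, assuming the table definitions above (names and casts are illustrative, not the asker's code):

```sql
CREATE OR REPLACE FUNCTION const_get(constname text, empid int, locality text)
RETURNS TABLE (cvalue text, ctype text)
LANGUAGE plpgsql STABLE AS
$$
DECLARE
  scope int;
BEGIN
  -- Scope of the constant decides which lookup to run.
  SELECT c.const_scopefk INTO scope
  FROM   s0constants c
  WHERE  c.constant_name = upper(constname);

  IF scope = 1 THEN      -- local
    RETURN QUERY
      SELECT scu.constant_value::text, c.constant_data_type::text
      FROM   s0constants_user scu
      JOIN   s0constants c USING (constant_name)
      WHERE  scu.constant_name = upper(constname)
        AND  scu.empfkey = empid
        AND  scu.locality_name = locality
      LIMIT 1;
  ELSIF scope = 2 THEN   -- user
    RETURN QUERY
      SELECT scu.constant_value::text, c.constant_data_type::text
      FROM   s0constants_user scu
      JOIN   s0constants c USING (constant_name)
      WHERE  scu.constant_name = upper(constname)
        AND  scu.empfkey = empid
      LIMIT 1;
  ELSE                   -- global
    RETURN QUERY
      SELECT c.constant_value::text, c.constant_data_type::text
      FROM   s0constants c
      WHERE  c.constant_name = upper(constname);
  END IF;
END;
$$;
```

Since each branch is its own query, every SELECT returns exactly the two columns declared in RETURNS TABLE and the "subquery must return only one column" problem does not arise.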
markb (1 rep)
Mar 20, 2020, 03:35 PM • Last activity: Jul 26, 2025, 08:09 PM
0 votes
1 answer
179 views
Migrate columns from table A to table B, and update A with a reference to B
I have a table A with some columns that need to be migrated to a new table B. I also need to add a column in table A that references the ids of the corresponding new rows in table B. I've tried doing it this way, but it's not working:
WITH cte AS (
INSERT INTO B (c1) 
	SELECT c1
	FROM A
	RETURNING id, A.id as "aId"
)

UPDATE A
SET bId = cte.id 
FROM cte
INNER JOIN A on cte."aId" = A.id
;
This is the error message:
ERROR:  missing FROM-clause entry for table A
LINE 5:  RETURNING id, A.id as "aId"
                       ^
SQL state: 42P01
Character: 168
I'm assuming this is because Postgres doesn't allow RETURNING to reference the table you are selecting from; it only returns columns of the inserted rows. So how can I accomplish this? I know it could probably work with a loop, but is there a cleaner way? I also tried updating table A directly from the results of the insertion, but that didn't work either, as Postgres doesn't allow an INSERT statement in a subquery of an UPDATE.
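A common workaround, sketched here under the assumption that c1 can be used to match rows back to A (if c1 is not unique the mapping is ambiguous and a different key is needed), is to return a natural key from the CTE along with the new id, since RETURNING can only see the inserted rows:

```sql
-- Hedged sketch: carry c1 back out of the INSERT and join on it.
WITH cte AS (
    INSERT INTO B (c1)
    SELECT c1 FROM A
    RETURNING id, c1
)
UPDATE A
SET    bId = cte.id
FROM   cte
WHERE  cte.c1 = A.c1;
```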
Warix3 (1 rep)
Jun 26, 2023, 07:04 PM • Last activity: Jul 4, 2025, 11:04 PM
0 votes
1 answer
948 views
Problem calling Postgres function
I wrote this function:
create or replace function field_summaries(gap interval , _tbl anyelement, _col text)                                
returns table(hour text, average numeric, minimum numeric, maximum numeric) as
$func$   
begin
 RETURN QUERY 
  EXECUTE format('select time_bucket($1, time)::text as hour,                                                                                      
                         avg(%I), 
                         min(%I),
                         max(%I)                                                                                                         
                  from %I d                                                                                                         
                  where d in (select device_id from sensors)                                                                                           
                  group by hour', _col, _col, _col, _tbl)
     USING gap; --<< this passes the parameter to the EXECUTE command
END;
$func$ 
language plpgsql
stable;
to which I pass a table name and a column name. I call it like this: select field_summaries('5minutes', NULL::m_09, 'current'); which works fine. I need to call this function from my node.js code, so I need to pass a string as an argument. When I pass "Null::m_09" as a string I get an error. Is there a way to alter the function so that I can call it like this: select field_summaries('5minutes', m_09, 'current'); Right now I get this when I do it:
field_summaries('5minutes', m_09, 'current');
ERROR:  column "m_09" does not exist
LINE 1: select field_summaries('5minutes', m_09, 'current');
                                           ^
Any help is much appreciated.
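One possible change (a sketch, not the only option) is to accept the table as regclass instead of anyelement, so the caller can pass an ordinary quoted string such as 'm_09' from node.js; a regclass value also interpolates safely with %s in format(). The body below keeps the asker's query unchanged apart from the parameter type:

```sql
create or replace function field_summaries(gap interval, _tbl regclass, _col text)
returns table(hour text, average numeric, minimum numeric, maximum numeric) as
$func$
begin
  return query execute format(
    'select time_bucket($1, time)::text as hour,
            avg(%I), min(%I), max(%I)
     from %s d
     where d in (select device_id from sensors)
     group by hour', _col, _col, _col, _tbl)
  using gap;
end;
$func$ language plpgsql stable;

-- called with the table name as a plain string:
-- select field_summaries(interval '5 minutes', 'm_09', 'current');
```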
Hedi Hadhri (3 rep)
Mar 14, 2021, 04:51 PM • Last activity: Jun 23, 2025, 06:09 AM
1 vote
1 answer
225 views
Speed up PostgreSQL logical decoding plugin
I'm writing a new logical decoding output plugin for PostgreSQL inspired by wal2json . But mine produces JSON by using SPI to repeatedly invoke to_json(...) on column values (instead of wal2json's simpler JSON output that doesn't work well for some data types). Testing shows that this is pretty CPU intensive. Is there a way to achieve the same result with better performance? Could you somehow directly invoke row_to_json on the tuple given to the change callback? Is there a way to directly invoke to_json on each column value without SPI? (My attempts to call those functions directly with DirectFunctionCall1Coll crashed the server, so any pointers are welcome!) ---- Here's what the functioning, but CPU-intensive, code looks like:
spiArgType = columnInfo->atttypid;
/* Prepare the SQL */
spiPlan = SPI_prepare("SELECT to_json($1)::TEXT", 1, &spiArgType);
/* Execute it */
spiReturnValue = SPI_execute_plan(spiPlan, &columnValue, NULL, false, 1);
/* And if it succeeded, append to output string */
if (spiReturnValue == SPI_OK_SELECT && SPI_processed > 0) {
  jsonDatum = SPI_getbinval(SPI_tuptable->vals[0], SPI_tuptable->tupdesc, 1, &isNull);
  appendStringInfo(out, "%s", TextDatumGetCString(jsonDatum));
}
SPI_freeplan(spiPlan);
---- A failed attempt using OidFunctionCall1Coll:
TupleDesc cols = ...;
HeapTuple tuple = ...;
Form_pg_attribute colInfo;
Datum colValue;
bool isNull;

for (i = 0; i < cols->natts; ++i) {
  colInfo = TupleDescAttr(cols, i);

  // (Omitted checks for dropped or system column)

  colValue = heap_getattr(tuple, i+1, cols, &isNull);
  if (! isNull) {
    // The following call fails, and emits this error:
    // ERROR:  could not determine input data type
    jsonDatum = OidFunctionCall1Coll(F_TO_JSON, InvalidOid, colValue);
    // This is never reached:
    appendStringInfo(out, "%s", TextDatumGetCString(jsonDatum));
  }
  // ...
}
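Not an answer to the direct-call question, but one incremental improvement that stays within SPI is to prepare the to_json plan once per column type and keep it across calls with SPI_keepplan(), instead of re-preparing it for every column of every change. A hedged sketch (the caching scheme and names are illustrative):

```c
#include "postgres.h"
#include "executor/spi.h"

/* One cached plan, reused as long as consecutive columns share a type. */
static SPIPlanPtr cached_plan = NULL;
static Oid        cached_type = InvalidOid;

static SPIPlanPtr
get_to_json_plan(Oid argtype)
{
    if (cached_plan == NULL || cached_type != argtype)
    {
        Oid        argtypes[1];
        SPIPlanPtr plan;

        argtypes[0] = argtype;
        plan = SPI_prepare("SELECT to_json($1)::TEXT", 1, argtypes);
        if (plan == NULL)
            elog(ERROR, "SPI_prepare failed: %s",
                 SPI_result_code_string(SPI_result));

        /* Move the plan out of the current SPI memory context so it survives. */
        if (SPI_keepplan(plan) != 0)
            elog(ERROR, "SPI_keepplan failed");

        if (cached_plan != NULL)
            SPI_freeplan(cached_plan);

        cached_plan = plan;
        cached_type = argtype;
    }
    return cached_plan;
}
```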
csd (700 rep)
May 19, 2020, 11:09 PM • Last activity: Jun 12, 2025, 08:10 AM
3 votes
1 answer
219 views
Why do two queries run faster than a combined subquery?
I'm running postgres 11 on Azure. If I run this query: select min(pricedate) + interval '2 days' from pjm.rtprices It takes 0.153 sec and has the following explain: "Result (cost=2.19..2.20 rows=1 width=8)" " InitPlan 1 (returns $0)" " -> Limit (cost=0.56..2.19 rows=1 width=4)" " -> Index Only Scan using rtprices_pkey on rtprices (cost=0.56..103248504.36 rows=63502562 width=4)" " Index Cond: (pricedate IS NOT NULL)" If I run this query: select pricedate, hour, last_updated, count(1) as N from pjm.rtprices where pricedate Sort (cost=738576.82..739570.68 rows=397541 width=16)" " Sort Key: pricedate DESC, hour, last_updated" " -> Index Scan using rtprices_pkey on rtprices (cost=0.56..694807.03 rows=397541 width=16)" " Index Cond: (pricedate Result (cost=2.19..2.20 rows=1 width=8)" " InitPlan 1 (returns $0)" " -> Limit (cost=0.56..2.19 rows=1 width=4)" " -> Index Only Scan using rtprices_pkey on rtprices rtprices_1 (cost=0.56..103683459.22 rows=63730959 width=4)" " Index Cond: (pricedate IS NOT NULL)" " -> Gather Merge (cost=3791454.84..4662729.67 rows=6316230 width=24)" " Workers Planned: 2" " Params Evaluated: $1" " -> Partial GroupAggregate (cost=3790454.81..3932679.99 rows=3158115 width=24)" " Group Key: rtprices.pricedate, rtprices.hour, rtprices.last_updated" " -> Sort (cost=3790454.81..3812583.62 rows=8851522 width=16)" " Sort Key: rtprices.pricedate DESC, rtprices.hour, rtprices.last_updated" " -> Parallel Seq Scan on rtprices (cost=0.00..2466553.08 rows=8851522 width=16)" " Filter: (pricedate Limit (cost=0.56..1629038.11 rows=3 width=4)" " -> Result (cost=0.56..105887441.26 rows=195 width=4)" " -> Unique (cost=0.56..105887441.26 rows=195 width=4)" " -> Index Only Scan using rtprices_pkey on rtprices rtprices_1 (cost=0.56..105725202.47 rows=64895517 width=4)" " -> Sort (cost=648411.43..649243.43 rows=332798 width=16)" " Sort Key: rtprices.pricedate DESC, rtprices.hour, rtprices.last_updated" " -> Nested Loop (cost=0.56..612199.22 rows=332798 width=16)" " -> CTE Scan on lastday (cost=0.00..0.06 rows=3 width=4)" " -> Index Scan using rtprices_pkey on rtprices (cost=0.56..202957.06 rows=110933 width=16)" " Index Cond: ((pricedate <= lastday.pricedate) AND (pricedate = lastday.pricedate))" This last one is all well and good but if my subquery wasn't extensible to this hack, is there a better way for my subquery to have similar performance to the one at a time approach?
Dean MacGregor (739 rep)
Jun 4, 2021, 09:10 PM • Last activity: Jun 7, 2025, 04:03 AM
0 votes
1 answer
249 views
How can I setup users/groups in PostgreSQL so that each user has privileges on objects created by other users in the same group?
I have created a group (role) called "employees" and I've created some users that are its member and that inherit its rights. I have a database owned by the group "employees". The goal: To setup things in a way that allows all of the users to work with all of the objects in the database. The problem: I can't expect the users to set the owner to "employees" when they create a new object, because they use various limited interfaces to work with the database. When they create a schema or a table, it gets created with the user as its owner, which means that the other users don't have rights on that schema/table. I'm using PostgreSQL 11.2.
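One common approach (a sketch only; the role names "employees" and the schema are taken from the question, the user name "alice" is illustrative) is to set default privileges for each creating user, so that objects they create in the future are automatically usable by the group, and to grant on existing objects once:

```sql
-- For each member (e.g. a hypothetical user "alice"): objects she creates
-- from now on are automatically granted to the group.
ALTER DEFAULT PRIVILEGES FOR ROLE alice GRANT ALL ON TABLES    TO employees;
ALTER DEFAULT PRIVILEGES FOR ROLE alice GRANT ALL ON SEQUENCES TO employees;

-- One-off grants for objects that already exist:
GRANT USAGE ON SCHEMA public TO employees;
GRANT ALL ON ALL TABLES    IN SCHEMA public TO employees;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO employees;
```

Note that default privileges cover tables, sequences and similar objects but not schemas created later by the users; those would still need an explicit grant.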
user183865
Jun 23, 2019, 09:45 PM • Last activity: May 21, 2025, 07:03 PM
0 votes
1 answer
360 views
Unstructured data field: Query all values from array of objects by key
I have a table that has a unique ID, and a second column named 'data' that contains simple key/value items like:

  "nickname": "value"
  "fullName": "value"
  "office": "value"
  "unity": "value"

and a few more elaborate structured items like:

  "address": {
    "city": "value",
    "state": "value",
  },

and

  "personalVehicle": [
    { "brand": "value", "model": "value", "plate": "value", "color": "value" },
    { "brand": "value", "model": "value", "plate": "value", "color": "value" }
  ]

where, as you can see, personalVehicle is a key that stores an array of objects, and every object has its own simple key/value items. I can query specific key values from address for all rows:

SELECT data->'address'->'city' as city FROM person

+------------+
| city       |
|------------|
| "city1"    |
| "city2"    |
| "city3"    |
+------------+

Here is the situation: I can query all info about the vehicles with

SELECT data->'personalVehicle' as vehicles FROM person

+------------------------------------------------------------------------------------------------------------------------------------------------+
| vehicles                                                                                                                                         |
|--------------------------------------------------------------------------------------------------------------------------------------------------|
| [ { "brand": "Toyota", "model": "Corolla", "plate": "AAA-1111", "color": "Red" }, { "brand": "Ford", "model": "Focus", "plate": "ZZZ-9999", "color": "Blue" } ] |
|                                                                                                                                                  |
| [ { "brand": "Hyundai", "model": "Tucson", "plate": "ABC-1212", "color": "Grey" } ]                                                             |
+------------------------------------------------------------------------------------------------------------------------------------------------+

But I cannot retrieve a specific key for all objects when the objects are inside an array; in that case, I need to specify the index:

SELECT data->personalVehicle->0->model as model from person

+-------------+
| model       |
|-------------|
| "Toyota"    |
|             |
| "Hyundai"   |
+-------------+

This is the first index of the array, that is, the first car. I need to get the models for all N cars that the person might have. How do I do that, without specifying the index?
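For reference, the usual pattern for this (a sketch, assuming the data column is jsonb; for plain json the equivalent function is json_array_elements) is to expand the array with a lateral call and read the key from every element:

```sql
SELECT p.id,
       v.elem->>'model' AS model
FROM   person p
CROSS JOIN LATERAL jsonb_array_elements(p.data->'personalVehicle') AS v(elem);
```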
Jhonatan Cruz (1 rep)
May 28, 2019, 06:02 PM • Last activity: May 8, 2025, 04:06 AM
1 vote
1 answer
4670 views
Count how many times a combination of columns occurs
I have the following table:
Mytable:
-------
col1 INT
col2 INT
col3 INT
And in PostgreSQL I want to count the number of occurrences of each combination of values in col1, col2, col3. For example, if Mytable has the values:

| col1 | col2 | col3 |
| ---- | ---- | ---- |
| 1    | 5    | 3    |
| 1    | 8    | 3    |
| 1    | 5    | 3    |
| 1    | 5    | 3    |
| 1    | 5    | 3    |
| 1    | 5    | 4    |
| 1    | 5    | 4    |
| 2    | 5    | 3    |
| 1    | 8    | 3    |

how can I generate the following result?

| col1 | col2 | col3 | count |
| ---- | ---- | ---- | ----- |
| 1    | 5    | 3    | 4     |
| 1    | 8    | 3    | 2     |
| 2    | 5    | 3    | 1     |
| 1    | 5    | 4    | 2     |
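For reference, a plain GROUP BY over all three columns produces exactly this shape (sketch):

```sql
SELECT col1, col2, col3, count(*) AS count
FROM   Mytable
GROUP  BY col1, col2, col3;
```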
Dimitrios Desyllas (873 rep)
Jul 27, 2022, 09:46 AM • Last activity: Apr 15, 2025, 10:03 AM
1 vote
1 answer
871 views
Unable to upgrade to postgresql 13
I'm getting the following error:

> postgres@:~/>cat loadable_libraries.txt
> could not load library "$libdir/plpython2": ERROR: could not access file "$libdir/plpython2": No such file or directory
> In database: db_name

when trying to run pg_upgrade from PostgreSQL 11 to PostgreSQL 13:

/usr/pgsql-13/bin/pg_upgrade -b /usr/pgsql-11/bin -B /usr/pgsql-13/bin -c -d /localpart0/db/postgres/11/db_name/data -D /localpart0/db/postgres/13/db_name/data -j 20 -r -v --clone

I don't have any plpython* extensions and already tried changing the pg_pltemplate entries for plpythonu and plpython2u to plpython3 and running pg_upgrade, but it fails either way. Installed packages:

> postgresql11-plpython.x86_64 11.7-1PGDG.rhel7 @postgres11.prd
> postgresql13-plpython3.x86_64 13.1-1PGDG.rhel7 @postgres13.prd

Any guidance would be greatly appreciated.
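The loadable_libraries check means something in the old cluster still references the Python-2-based language. One way to find it (a sketch; run it in each database of the old 11 cluster) is to look for functions whose language is plpythonu/plpython2u, and to list the registered languages themselves:

```sql
-- Functions still written in a Python-2-based PL:
SELECT n.nspname, p.proname, l.lanname
FROM   pg_proc p
JOIN   pg_language  l ON l.oid = p.prolang
JOIN   pg_namespace n ON n.oid = p.pronamespace
WHERE  l.lanname IN ('plpythonu', 'plpython2u');

-- Languages registered in this database:
SELECT lanname FROM pg_language;
```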
efrain berdecia (31 rep)
Dec 21, 2020, 02:27 PM • Last activity: Apr 10, 2025, 08:05 PM
1 vote
1 answer
506 views
Postgresql (11.9) LIKE query not using text_ops index
I've seen related questions, but I think this should be working and it isn't. I have a text_ops function-based index on a column to do LIKE searches, but the index is only used with an equality comparison:

CREATE INDEX foo ON mytable USING btree ((upper((street)::text), group_id, current_version, pending, storage_id);

[index description from DBeaver]

This query is lightning fast on my 113M row table:

select * from mytable where upper(street) = '104 LESHAWN COVE' and group_id = 5022352 and current_version = 1 and pending = 0

The corresponding LIKE takes a couple of minutes:

select * from mytable where upper(street) LIKE '104 LESHAWN COVE%' and group_id = 5022352 and current_version = 1 and pending = 0

The explain plans show the first one using the index and the other not. I thought the text_ops indexed column should allow the LIKE to work, but I must be missing something here.

show lc_collate
en_US.UTF-8

I don't know that it matters, but this is a partitioned table on the group_id column, which also exists in the index for unpartitioned environments. The index is on the partition in this case. So what am I missing? These operator classes are new to me, so I must not understand part of it.
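For reference, a prefix LIKE can only use a btree index under a non-C collation if the leading expression is indexed with the pattern operator class. A hedged sketch of what such an index could look like (name and column order are illustrative):

```sql
CREATE INDEX mytable_street_like_idx ON mytable
    ((upper(street::text)) text_pattern_ops,
     group_id, current_version, pending, storage_id);
```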
nfdavenport (121 rep)
Dec 7, 2021, 07:26 PM • Last activity: Mar 2, 2025, 10:04 PM
0 votes
4 answers
128 views
What is the cause of this intermittent slow query when counting rows over a single indexed text column in PostgreSQL?
I have a table with 2,395,015 rows, where a TEXT column has one of three values and is never NULL. I have intermittent query performance issues when counting the number of rows where the value matches most (>99%) of the rows. I want to fix this performance problem. These queries must return exact counts, so I can't use approximate counts.
corpus=# \d metadata
                             Table "public.metadata"
    Column     |            Type             | Collation | Nullable |    Default
---------------+-----------------------------+-----------+----------+----------------
 id            | text                        |           | not null |
 priority      | integer                     |           | not null | 10
 media_type    | text                        |           | not null |
 modified      | timestamp without time zone |           | not null | now()
 processed     | timestamp without time zone |           |          |
 status        | text                        |           | not null | 'QUEUED'::text
 note          | text                        |           |          |
 content       | text                        |           |          |
 resolved      | text                        |           |          |
 response_time | integer                     |           |          |
 luid          | integer                     |           | not null |
 jamo_date     | timestamp without time zone |           |          |
 audit_path    | text                        |           |          |
Indexes:
    "metadata_pkey" PRIMARY KEY, btree (id)
    "metadata_id_idx" btree (id)
    "metadata_luid_idx" btree (luid)
    "metadata_modified_idx" btree (modified DESC)
    "metadata_processed_idx" btree (processed DESC)
    "metadata_status_idx" btree (status)
Check constraints:
    "media_type_ck" CHECK (media_type = ANY (ARRAY['text/json'::text, 'text/yaml'::text]))
    "status_ck" CHECK (status = ANY (ARRAY['QUEUED'::text, 'PROCESSED'::text, 'ERROR'::text]))
Foreign-key constraints:
    "metadata_luid_fkey" FOREIGN KEY (luid) REFERENCES concept(luid) ON DELETE CASCADE

corpus=#
I have some simple queries that count the number of rows matching one of the three status codes (QUEUED, PROCESSED, ERROR). There are 0 rows matching QUEUED, 9,794 matching ERROR, and 2,385,221 matching PROCESSED. When I run an identical query against each of these status codes, I usually get a result set promptly:
corpus=# EXPLAIN ANALYZE VERBOSE SELECT COUNT(*) FROM metadata WHERE status='QUEUED';
                                                                          QUERY PLAN

---------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1947.17..1947.18 rows=1 width=8) (actual time=2.935..2.936 rows=1 loops=1)
   Output: count(*)
   ->  Index Only Scan using metadata_status_idx on public.metadata  (cost=0.43..1915.97 rows=12480 width=0) (actual time=2.932..2.933 rows=0 loops=1)
         Output: status
         Index Cond: (metadata.status = 'QUEUED'::text)
         Heap Fetches: 0
 Planning Time: 0.734 ms
 Execution Time: 2.988 ms
(8 rows)
corpus=# EXPLAIN ANALYZE VERBOSE SELECT COUNT(*) FROM metadata WHERE status='ERROR';
                                                                             QUERY PLAN

--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=1184.19..1184.20 rows=1 width=8) (actual time=1484.763..1484.764 rows=1 loops=1)
   Output: count(*)
   ->  Index Only Scan using metadata_status_idx on public.metadata  (cost=0.43..1165.26 rows=7569 width=0) (actual time=4.235..1484.029 rows=9794 loops=1)
         Output: status
         Index Cond: (metadata.status = 'ERROR'::text)
         Heap Fetches: 9584
 Planning Time: 0.072 ms
 Execution Time: 1484.786 ms
(8 rows)

corpus=#
corpus=# EXPLAIN ANALYZE VERBOSE SELECT COUNT(*) FROM metadata WHERE status='PROCESSED';
                                                                                          QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=261398.83..261398.84 rows=1 width=8) (actual time=741.319..749.026 rows=1 loops=1)
   Output: count(*)
   ->  Gather  (cost=261398.62..261398.83 rows=2 width=8) (actual time=741.309..749.020 rows=3 loops=1)
         Output: (PARTIAL count(*))
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial Aggregate  (cost=260398.62..260398.63 rows=1 width=8) (actual time=735.099..735.100 rows=1 loops=3)
               Output: PARTIAL count(*)
               Worker 0: actual time=730.871..730.872 rows=1 loops=1
               Worker 1: actual time=733.435..733.436 rows=1 loops=1
               ->  Parallel Index Only Scan using metadata_status_idx on public.metadata  (cost=0.43..257903.37 rows=998100 width=0) (actual time=0.065..700.529 rows=795074 loops=3)
                     Output: status
                     Index Cond: (metadata.status = 'PROCESSED'::text)
                     Heap Fetches: 747048
                     Worker 0: actual time=0.060..702.980 rows=670975 loops=1
                     Worker 1: actual time=0.076..686.946 rows=1010099 loops=1
 Planning Time: 0.085 ms
 Execution Time: 749.068 ms
(18 rows)

corpus=#
But occasionally, the count for the PROCESSED rows takes an excessive amount of time (sometimes several minutes):
corpus=# EXPLAIN ANALYZE VERBOSE SELECT COUNT(*) FROM metadata WHERE status='PROCESSED';
                                                                                           QUERY PLAN

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=261398.83..261398.84 rows=1 width=8) (actual time=30019.273..30019.336 rows=1 loops=1)
   Output: count(*)
   ->  Gather  (cost=261398.62..261398.83 rows=2 width=8) (actual time=30019.261..30019.326 rows=3 loops=1)
         Output: (PARTIAL count(*))
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial Aggregate  (cost=260398.62..260398.63 rows=1 width=8) (actual time=29967.734..29967.735 rows=1 loops=3)
               Output: PARTIAL count(*)
               Worker 0: actual time=29939.915..29939.916 rows=1 loops=1
               Worker 1: actual time=29944.395..29944.395 rows=1 loops=1
               ->  Parallel Index Only Scan using metadata_status_idx on public.metadata  (cost=0.43..257903.37 rows=998100 width=0) (actual time=75.385..29931.795 rows=795074 loops=3)
                     Output: status
                     Index Cond: (metadata.status = 'PROCESSED'::text)
                     Heap Fetches: 747151
                     Worker 0: actual time=128.857..29899.156 rows=916461 loops=1
                     Worker 1: actual time=28.609..29905.708 rows=854439 loops=1
 Planning Time: 421.203 ms
 Execution Time: 30019.440 ms
(18 rows)

corpus=#
While the above query is running slowly, I am able to query the same table for either of the other two codes, and those queries return within 1 second. I have looked for table locks (there are none). This happens even when there are no other queries or table inserts running.

- What are the possible causes for these intermittent slow queries?
- What additional debugging can I try to get some more information about these slow queries?
- Are there any relevant server settings?
- Is there a more efficient way to index / code these columns (e.g., should I use a CHAR(1)), or even a SMALLINT? If so, what index should be used for the column? If I use a CHAR(1), is there any difference between the following constraints:
  - ALTER TABLE jgi_metadata ADD CONSTRAINT status_code_ck CHECK (status_code = ANY (ARRAY['Q'::char(1), 'P'::char(1), 'E'::char(1)]));
  - ALTER TABLE jgi_metadata ADD CONSTRAINT status_code_ck CHECK (status_code IN ('Q', 'P', 'E'));
- Could a partial index be used for this column, even though it is never NULL?
- Should I split the PROCESSED off into a boolean column, and then use the status column only for the other codes and make it nullable with a partial index?

This is PostgreSQL 11 with default settings, running on Linux. Other things I have tried:

- Increased the work_mem to 100MB (via postgresql.conf). No change in performance.
- I tried creating a partial index on the status column.

---

**Update:** I have discovered that this performance issue has nothing to do with the status column, but rather the size of the table itself, as the following 2-minute query shows:
corpus=# EXPLAIN ANALYZE VERBOSE SELECT COUNT(*) FROM metadata;
                                                                                            QUERY PLAN

---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=196398.52..196398.53 rows=1 width=8) (actual time=118527.897..118554.762 rows=1 loops=1)
   Output: count(*)
   ->  Gather  (cost=196398.30..196398.51 rows=2 width=8) (actual time=118522.165..118554.756 rows=3 loops=1)
         Output: (PARTIAL count(*))
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial Aggregate  (cost=195398.30..195398.31 rows=1 width=8) (actual time=118491.043..118491.044 rows=1 loops=3)
               Output: PARTIAL count(*)
               Worker 0: actual time=118475.143..118475.144 rows=1 loops=1
               Worker 1: actual time=118476.110..118476.111 rows=1 loops=1
               ->  Parallel Index Only Scan using metadata_status_idx on public.metadata  (cost=0.43..192876.13rows=1008870 width=0) (actual time=71.797..118449.265 rows=809820 loops=3)
                     Output: status
                     Heap Fetches: 552630
                     Worker 0: actual time=75.877..118434.476 rows=761049 loops=1
                     Worker 1: actual time=104.872..118436.647 rows=745770 loops=1
 Planning Time: 592.040 ms
 Execution Time: 118554.839 ms
(17 rows)

corpus=#
This seems to be very similar to other questions now, so I am trying mitigation strategies from this answer:

- VACUUM ANALYZE metadata; The first COUNT(*) after this took 5 seconds, subsequent counts took 190ms.

Other thoughts:

- Would it help if the status column were split off into its own table, with a foreign key into the metadata table?

Note: I am becoming convinced that this question is a duplicate of several other questions here:

- https://dba.stackexchange.com/questions/280095/postgresql-extremely-slow-counts
- https://dba.stackexchange.com/questions/314371/count-queries-are-too-slow-even-with-an-index-only-scan
- https://dba.stackexchange.com/questions/235094/why-are-some-count-queries-so-slow
- https://dba.stackexchange.com/questions/317626/optimizing-select-count-result-in-postgresql
- https://stackoverflow.com/questions/58449716/postgres-why-does-select-count-take-so-long
- https://stackoverflow.com/questions/16916633/if-postgresql-count-is-always-slow-how-to-paginate-complex-queries
- https://stackoverflow.com/questions/7943233/fast-way-to-discover-the-row-count-of-a-table-in-postgresql/7945274#7945274

This answer may hold the best solution for this problem:

- https://stackoverflow.com/a/7945274/2074605

As requested, here is a query plan analysis with buffers:
EXPLAIN (ANALYZE, BUFFERS, VERBOSE) SELECT COUNT(*) FROM metadata;

                                                                                           QUERY PLAN

-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=80771.95..80771.96 rows=1 width=8) (actual time=26711.481..26716.494 rows=1 loops=1)
   Output: count(*)
   Buffers: shared hit=293915 read=19595 dirtied=282 written=12
   ->  Gather  (cost=80771.73..80771.94 rows=2 width=8) (actual time=26711.203..26716.488 rows=3 loops=1)
         Output: (PARTIAL count(*))
         Workers Planned: 2
         Workers Launched: 2
         Buffers: shared hit=293915 read=19595 dirtied=282 written=12
         ->  Partial Aggregate  (cost=79771.73..79771.74 rows=1 width=8) (actual time=26565.622..26565.623 rows=1 loops=3)
               Output: PARTIAL count(*)
               Buffers: shared hit=293915 read=19595 dirtied=282 written=12
               Worker 0: actual time=26530.890..26530.891 rows=1 loops=1
                 Buffers: shared hit=105264 read=6760 dirtied=145 written=5
               Worker 1: actual time=26530.942..26530.942 rows=1 loops=1
                 Buffers: shared hit=84675 read=7529 dirtied=46 written=2
               ->  Parallel Index Only Scan using metadata_status_idx on public.metadata  (cost=0.43..77241.05 rows=1012275 width=0) (actual time=42.254..26529.232 rows=809820 loops=3)
                     Output: status
                     Heap Fetches: 17185
                     Buffers: shared hit=293915 read=19595 dirtied=282 written=12
                     Worker 0: actual time=59.291..26494.376 rows=815113 loops=1
                       Buffers: shared hit=105264 read=6760 dirtied=145 written=5
                     Worker 1: actual time=31.165..26484.729 rows=1036972 loops=1
                       Buffers: shared hit=84675 read=7529 dirtied=46 written=2
 Planning Time: 98.400 ms
 Execution Time: 26716.529 ms
(25 rows)
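Given that VACUUM ANALYZE brought the count back down to roughly 190 ms, a plausible follow-up (a sketch only; the thresholds are illustrative, not tuned recommendations) is to make autovacuum visit this table more aggressively, so the visibility map stays current and the index-only scan does not fall back to large numbers of heap fetches:

```sql
ALTER TABLE metadata SET (
    autovacuum_vacuum_scale_factor  = 0.01,
    autovacuum_analyze_scale_factor = 0.01
);

-- One-off refresh of the visibility map and statistics:
VACUUM (ANALYZE) metadata;
```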
Autumn Skye (379 rep)
Aug 8, 2024, 01:50 PM • Last activity: Feb 27, 2025, 02:52 PM
0 votes
1 answer
1041 views
Calculate the duration in an interval and outside of events that can last over several days
I need to calculate the total duration of events in a given time interval and on a given day of the week with PostgreSQL 11 For an event on the same day no problem. But over several days I manage to approximate the result with "generate_series".u Is there another way faster and more precise? Interval In: Start---------------------------------------End E.begin_date ------ E.end_date -> E.end_date - E.begin_date Start--------------------------------End E.begin_date -------------------E.end_date -> End - E.begin_date Start--------------------------------End E.begin_date -----------------------E.end_date -> E.end_date - Start Start------------------End E.begin_date-------------------------------------E.end_date -> End - Start Interval In on several Days with generate_series Day 1 Day 2 etc.... | | Start-------End | Start---------End | Start---------End E.begin_date--------------------------------------------------------------E.end_date Interval Out: Start--------End E.begin_date ------------------- E.end_date -> (Start - E.begin_date) + (E.end_date - End) Start--------------------------------End E.begin_date -------------------E.end_date -> E.end_date - End Start--------------------------------End E.begin_date -----------------------E.end_date -> Start - .begin_date Start--------End E.begin_date--------E.end_date -> E.end_date - E.begin_date Interval Out on several Days with generate_series Day 1 Day 2 etc.... | | Start-------End | Start---------End | Start---------End E.begin_date--------------------------------------------------------------E.end_date This is my sql request: select * FROM (SELECT sum( case when (begin_date AT TIME ZONE 'Europe/Paris')::date = (end_date AT TIME ZONE 'Europe/Paris')::date -- same days then case -- ------time_begin------------------------------------------------------------------------time_end---- -- begin_date------------------------------------------------end_date when (begin_date AT TIME ZONE 'Europe/Paris')::time >= '08:00'::time AND (end_date AT TIME ZONE 'Europe/Paris')::time = '08:00'::time AND (begin_date AT TIME ZONE 'Europe/Paris')::time = '18:00'::time then '18:00'::time - (begin_date AT TIME ZONE 'Europe/Paris') ::time -- -----------------time_begin------------------------------------------------------------------------time_end---- -- begin_date-----------------------------------end_date when (end_date AT TIME ZONE 'Europe/Paris')::time >= '08:00'::time AND (end_date AT TIME ZONE 'Europe/Paris')::time = '18:00'::time then '18:00'::time - '08:00'::time END when (begin_date AT TIME ZONE 'Europe/Paris')::date = '08:00'::time AND (m::time AT TIME ZONE 'Europe/Paris') = '2019-12-01T00:00:00'::date AND ("event"."begin_date" AT TIME ZONE 'Europe/Paris')::date '18:00'::time)) )) order by total_included desc) as included, (SELECT sum( case when (begin_date AT TIME ZONE 'Europe/Paris')::date = (end_date AT TIME ZONE 'Europe/Paris')::date -- same day then case -- -----------------------------------------------------------------time_begin---------------------------------time_end---- -- begin_date-------------------------------end_date -- or -- -------------------------------time_begin---------------------------------time_end----------- -- begin_date----------------------------------------------------------------------end_date when ((begin_date AT TIME ZONE 'Europe/Paris')::time NOT BETWEEN '08:00'::time AND '18:00'::time) AND ((end_date AT TIME ZONE 'Europe/Paris')::time NOT BETWEEN '08:00'::time AND '18:00'::time) then case when ((begin_date AT TIME ZONE 
'Europe/Paris')::time '18:00'::time AND (end_date AT TIME ZONE 'Europe/Paris')::time > '18:00'::time then end_date::time - begin_date::time when (begin_date AT TIME ZONE 'Europe/Paris')::time '18:00'::time then ('08:00'::time - (begin_date AT TIME ZONE 'Europe/Paris')::time) + ((end_date AT TIME ZONE 'Europe/Paris')::time - '18:00'::time) end -- --------------------------------------time_begin---------------------------------time_end---- -- begin_date-----------------------------------end_date when ((begin_date AT TIME ZONE 'Europe/Paris')::time NOT BETWEEN '08:00'::time AND '18:00'::time) AND ((end_date AT TIME ZONE 'Europe/Paris')::time BETWEEN '08:00'::time AND '18:00'::time) then '08:00'::time - (begin_date AT TIME ZONE 'Europe/Paris')::time -- ----------time_begin---------------------------------time_end---- -- begin_date-----------------------------------end_date when ((begin_date AT TIME ZONE 'Europe/Paris')::time BETWEEN '08:00'::time AND '18:00'::time) AND ((end_date AT TIME ZONE 'Europe/Paris')::time NOT BETWEEN '08:00'::time AND '18:00'::time) then (end_date AT TIME ZONE 'Europe/Paris')::time - '18:00'::time end when (begin_date AT TIME ZONE 'Europe/Paris')::date = '08:00'::time AND (m AT TIME ZONE 'Europe/Paris')::time = '2019-12-01T00:00:00'::date AND ("event"."date_creation" AT TIME ZONE 'Europe/Paris')::date '08:00'::time) AND ((event.end_date AT TIME ZONE 'Europe/Paris')::time < '18:00'::time)) )) order by total_excluded desc) as excluded Data Sample for interval IN ('8am', '6pm') for Wednesday Thursday: |---------|------------------|------------------|------------------| | id | begin_date | end_date | Result(Duration) | |---------|------------------|------------------|------------------| | 1 | 2020-01-01 10:00 | 2020-01-01 12:00 | 02:00:00 | |---------|------------------|------------------|------------------| |---------|------------------|------------------|------------------| | 2 | 2020-01-01 10:00 | 2020-01-01 20:00 | 08:00:00 | |---------|------------------|------------------|------------------| |---------|------------------|------------------|------------------| | 3 | 2020-01-01 07:00 | 2020-01-01 14:00 | 06:00:00 | |---------|------------------|------------------|------------------| |---------|------------------|------------------|------------------| | 4 | 2020-01-01 07:00 | 2020-01-01 19:00 | 10:00:00 | |---------|------------------|------------------|------------------| |---------|------------------|------------------|------------------| | 5 | 2020-01-01 08:00 | 2020-01-03 18:00 | 20:00:00 | |---------|------------------|------------------|------------------| |---------|------------------|------------------|------------------| | 6 | 2020-01-01 09:00 | 2020-01-05 17:00 | 16:00:00 | |---------|------------------|------------------|------------------| Data Sample for interval OUT ('8am', '6pm') for Wednesday Thursday: |---------|------------------|------------------|------------------| | id | begin_date | end_date | Result(Duration) | |---------|------------------|------------------|------------------| | 1 | 2020-01-01 10:00 | 2020-01-01 12:00 | 00:00:00 | |---------|------------------|------------------|------------------| |---------|------------------|------------------|------------------| | 2 | 2020-01-01 10:00 | 2020-01-01 20:00 | 02:00:00 | |---------|------------------|------------------|------------------| |---------|------------------|------------------|------------------| | 3 | 2020-01-01 07:00 | 2020-01-01 14:00 | 01:00:00 | 
|---------|------------------|------------------|------------------| |---------|------------------|------------------|------------------| | 4 | 2020-01-01 07:00 | 2020-01-01 19:00 | 02:00:00 | |---------|------------------|------------------|------------------| |---------|------------------|------------------|------------------| | 5 | 2020-01-01 07:00 | 2020-01-03 18:00 | 02:00:00 | |---------|------------------|------------------|------------------| |---------|------------------|------------------|------------------| | 6 | 2020-01-01 09:00 | 2020-01-05 17:00 | 02:00:00 | |---------|------------------|------------------|------------------| The input parameters are: * start date and end date * start time and end time * a list of the days of the week To reach a result where we have the total duration of events within the interval, but also the total duration of events outside the interval of hours. All this for a list of days of the given week. Days of the week means that if an event lasts several days say from Monday to Wednesday, but in the filter there are only Monday and Tuesday, the duration of Wednesday will not be counted. The current query works but to handle events that take place over several days I use generate_series, which is not very efficient. The question is how to improve this query.
Hadrien (1 rep)
Apr 10, 2020, 11:07 AM • Last activity: Feb 17, 2025, 02:02 PM
1 vote
1 answer
767 views
Unable to create partition by trigger
1) I have this table:

CREATE TABLE tor (
    events_date timestamp without time zone NOT NULL,
    message text,
    CONSTRAINT tor_pkey PRIMARY KEY (events_date, message)
) PARTITION BY RANGE (events_date);

2) There is a default partition where the data goes if there is no partition it fits into:

CREATE TABLE tor_part_default PARTITION OF tor DEFAULT;

3) There is a trigger on INSERT into tor_part_default:

CREATE OR REPLACE FUNCTION create_next_part()
RETURNS trigger
LANGUAGE 'plpgsql'
COST 100
VOLATILE NOT LEAKPROOF
AS $BODY$
DECLARE
    partname  VARCHAR(11);
    startdate VARCHAR(10);
    enddate   VARCHAR(10);
BEGIN
    partname  = 'tor_'||to_char(NEW.events_date, 'YYYYMM');
    startdate = to_char(NEW.events_date, 'YYYY-MM-01');
    enddate   = to_char(TO_DATE(startdate,'YYYY-MM-DD') + interval '1 month','YYYY-MM-DD');
    RAISE NOTICE ' Start date: %', startdate;
    RAISE NOTICE ' End date: %', enddate;
    IF NOT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_name = partname) THEN
        RAISE NOTICE 'Partition created: %', partname;
        EXECUTE 'CREATE TABLE '||partname||' partition OF tor FOR VALUES FROM ('''||startdate||''') TO ('''||enddate||''')';
        EXECUTE 'INSERT INTO '||partname||' VALUES(NEW.*)';
    END IF;
    RETURN NEW;
END;
$BODY$;

CREATE TRIGGER tor_create_next_part
    BEFORE INSERT ON tor_part_default
    FOR EACH ROW EXECUTE PROCEDURE create_next_part();

When I try to insert data, I get this error:

ERROR: cannot CREATE TABLE .. PARTITION OF "tor" because it is being used by active queries in this session
CONTEXT: SQL statement "CREATE TABLE tor_201606 partition OF tor FOR VALUES FROM ('2016-06-01') TO ('2016-07-01')"
PL/pgSQL function create_next_part() line 16 at EXECUTE

What am I doing wrong, or is there another way to create partitions automatically?
Maxiko (131 rep)
Oct 4, 2019, 05:12 AM • Last activity: Feb 8, 2025, 09:04 PM
0 votes
1 answer
1524 views
How does Postgres decide whether to use additional workers?
I have the following table with an added BTREE-index on "captured_at".
CREATE TABLE datagram
(
  id            bigserial                 NOT NULL,
  src_re        integer                   NOT NULL,
  src_clt       integer                   NOT NULL,
  src_meter     integer                   NOT NULL,
  captured_at   timestamp with time zone  NOT NULL,
  captured_rssi smallint                  NOT NULL,
  oms_status    smallint                  NOT NULL,
  oms_enc       bytea,
  oms_dec       bytea
);
I have the following query:
EXPLAIN (ANALYZE true, BUFFERS true, VERBOSE true)
SELECT
  DISTINCT ON ("real_estate"."number", "flat"."number", "meter"."mfct_code", "meter"."reading_serial", "meter"."type") "real_estate"."number" AS "real_estate_nr",
  "flat"."number" AS "flat_nr",
  "datagram"."id" AS "datagram_id"
FROM "real_estate"
  JOIN "flat"       ON "real_estate"."id" = "flat"."real_estate"
  JOIN "meter_bcd"  ON "flat"."id" = "meter_bcd"."flat"
  JOIN "meter"      ON "meter_bcd"."id" = "meter"."meter_bcd"
  JOIN "datagram"   ON "datagram"."src_meter" = "meter"."id"
WHERE 
(
  "real_estate"."id"    IN ([...]) AND
  "meter"."id"          IN ([...]) AND
  "datagram"."captured_at" BETWEEN
  (
    CAST('2020-08-28T10:34:32.855+02:00' AS TIMESTAMP WITH TIME ZONE) 
    - CAST('P5D' AS INTERVAL)
  )
  AND
  (
    CAST('2020-08-28T10:34:32.855+02:00' AS TIMESTAMP WITH TIME ZONE) 
    + CAST('P0D' AS INTERVAL)
  )
)
ORDER BY
  "real_estate"."number" ASC,
  "flat"."number" ASC,
  "meter"."mfct_code" ASC,
  "meter"."reading_serial" ASC,
  "meter"."type" ASC,
  "datagram"."captured_at" DESC
When that query is applied to the above table with an index on "captured_at" only, it results in the following query plan. The important thing to note is that NO parallel workers are used.
->  Hash Join  (cost=246164.35..2004405.07 rows=11323 width=51) (actual time=93.802..5776.755 rows=104607 loops=1)
    Hash Cond: (meter.meter_bcd = meter_bcd.id)
    ->  Hash Join  (cost=246019.19..2003889.83 rows=68494 width=37) (actual time=93.067..5744.787 rows=104607 loops=1)
        Hash Cond: (datagram.src_meter = meter.id)
        ->  Index Scan using idx_datagram_captured_at_btree on datagram  (cost=0.57..1756571.73 rows=495033 width=20) (actual time=0.054..5451.417 rows=514369 loops=1)
              Index Cond: ((captured_at >= ('2020-08-28 10:34:32.855+02'::timestamp with time zone - '5 days'::interval)) AND (captured_at   Hash Join  (cost=245966.53..272335.67 rows=5419 width=51) (actual time=625.846..1560.103 rows=34869 loops=3)
    Hash Cond: (datagram_y2020_h2.src_meter = meter.id)
    ->  Parallel Append  (cost=4.19..25430.72 rows=236911 width=20) (actual time=2.827..863.298 rows=171456 loops=3)
          Subplans Removed: 23
          ->  Parallel Index Scan using datagram_y2020_h2_captured_at_idx on datagram_y2020_h2  (cost=0.44..24051.22 rows=236888 width=20) (actual time=2.826..848.388 rows=171456 loops=3)
                Index Cond: ((captured_at >= ('2020-08-28 10:34:32.855+02'::timestamp with time zone - '5 days'::interval)) AND (captured_at   Hash Join  (cost=264793.42..1666293.23 rows=4332 width=51) (actual time=96.080..638.802 rows=34869 loops=3)
    Hash Cond: (oms_rec.meter = meter.id)
    ->  Nested Loop  (cost=1.14..1400747.39 rows=189399 width=20) (actual time=0.145..496.366 rows=171456 loops=3)
    ->  Hash  (cost=264709.53..264709.53 rows=6620 width=39) (actual time=95.521..95.528 rows=40044 loops=3)
          Buckets: 65536 (originally 8192)  Batches: 1 (originally 1)  Memory Usage: 3016kB
          ->  Parallel Index Scan using idx_clt_rec_captured_at on clt_rec  (cost=0.57..14853.95 rows=189399 width=24) (actual time=0.098..81.556 rows=171456 loops=3)
          ->  Index Scan using pk_oms_rec on oms_rec  (cost=0.57..7.32 rows=1 width=12) (actual time=0.002..0.002 rows=1 loops=514369)
          ->  Hash Join  (cost=145.59..264709.53 rows=6620 width=39) (actual time=9.883..86.390 rows=40044 loops=3)
                Index Cond: (id = clt_rec.oms_rec)
                Index Cond: ((captured_at >= ('2020-08-28 10:34:32.855+02'::timestamp with time zone - '5 days'::interval)) AND (captured_at <= ('2020-08-28 10:34:32.855+02'::timestamp with time zone + '00:00:00'::interval)))
                Hash Cond: (meter.meter_bcd = meter_bcd.id)
**So, based on which facts does Postgres decide whether to use additional workers? Can I see those decisions explained somewhere? I don't see anything in the query plan. Thanks!**
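For context, the planner's choice is driven by cost settings and relation-size thresholds rather than by anything shown in the plan itself. A hedged sketch of the knobs usually involved (the values are illustrative and only meant for experimenting in a session):

```sql
-- Settings that influence whether a parallel plan is considered:
SHOW max_parallel_workers_per_gather;
SHOW min_parallel_table_scan_size;
SHOW min_parallel_index_scan_size;
SHOW parallel_setup_cost;
SHOW parallel_tuple_cost;

-- Session-level experiment to see whether a parallel plan is chosen at all:
SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
SET max_parallel_workers_per_gather = 4;
-- then re-run EXPLAIN (ANALYZE, VERBOSE) on the query above
```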
Thorsten Sch&#246;ning (123 rep)
Feb 9, 2021, 03:51 PM • Last activity: Feb 5, 2025, 09:03 PM
1 vote
1 answer
1998 views
Reset all sequences so that they continue after max(id) + 1?
It looks like I messed up a database migration and while all sequences are there, they would start at 1, which triggers errors like: > django.db.utils.IntegrityError: duplicate key value violates unique > constraint "django_admin_log_pkey" DETAIL: Key (id)=(2) already > exists. Is there a query/script I could run that would run across all tables in the database, look at any columns tied to a sequence and reset those sequences to max(column) + 1? Using PostgreSQL v11.
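A possible one-shot fix (a sketch, assuming every affected column's sequence is discoverable via pg_get_serial_sequence, i.e. it is an owned serial-style sequence) is to loop over those columns and setval each sequence so that the next value is max(column) + 1:

```sql
DO $$
DECLARE
  r record;
BEGIN
  FOR r IN
    SELECT n.nspname AS sch,
           c.relname AS tbl,
           a.attname AS col,
           pg_get_serial_sequence(quote_ident(n.nspname) || '.' || quote_ident(c.relname),
                                  a.attname) AS seq
    FROM   pg_class c
    JOIN   pg_namespace n ON n.oid = c.relnamespace
    JOIN   pg_attribute a ON a.attrelid = c.oid AND a.attnum > 0 AND NOT a.attisdropped
    WHERE  c.relkind = 'r'
      AND  pg_get_serial_sequence(quote_ident(n.nspname) || '.' || quote_ident(c.relname),
                                  a.attname) IS NOT NULL
  LOOP
    -- setval(..., false) makes the next nextval() return exactly max(col) + 1,
    -- or 1 for an empty table.
    EXECUTE format(
      'SELECT setval(%L, COALESCE((SELECT max(%I) + 1 FROM %I.%I), 1), false)',
      r.seq, r.col, r.sch, r.tbl);
  END LOOP;
END
$$;
```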
d33tah (429 rep)
Sep 3, 2019, 06:41 AM • Last activity: Jan 19, 2025, 06:02 AM
0 votes
1 answer
394 views
Converting ticks into range bars
You have a tick table with 3 columns CREATE TABLE tick ( symbol character(9) NOT NULL, price real NOT NULL, "time" timestamp with time zone NOT NULL ); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26175.9, '2019-04-11 10:32:00.361294-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26175.6, '2019-04-11 10:32:00.429346-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.7, '2019-04-11 10:32:01.03363-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.6, '2019-04-11 10:32:01.085657-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26175.6, '2019-04-11 10:32:01.141192-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.8, '2019-04-11 10:32:01.245584-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.6, '2019-04-11 10:32:01.353293-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.8, '2019-04-11 10:32:01.611854-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.9, '2019-04-11 10:32:02.784545-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.8, '2019-04-11 10:32:02.888302-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.9, '2019-04-11 10:32:02.947919-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.8, '2019-04-11 10:32:03.055718-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.9, '2019-04-11 10:32:03.158856-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174, '2019-04-11 10:32:03.262169-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.8, '2019-04-11 10:32:03.806712-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.6, '2019-04-11 10:32:04.3873-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.8, '2019-04-11 10:32:04.444778-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.6, '2019-04-11 10:32:04.504045-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.1, '2019-04-11 10:32:04.824775-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.6, '2019-04-11 10:32:04.880204-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.1, '2019-04-11 10:32:04.98412-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.1, '2019-04-11 10:32:05.546972-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.8, '2019-04-11 10:32:06.126979-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.1, '2019-04-11 10:32:06.180949-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.8, '2019-04-11 10:32:06.233617-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.9, '2019-04-11 10:32:06.53779-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.1, '2019-04-11 10:32:06.83216-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.1, '2019-04-11 10:32:07.099359-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.1, '2019-04-11 10:32:07.159276-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.1, '2019-04-11 10:32:07.220017-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.6, '2019-04-11 10:32:07.436384-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.1, '2019-04-11 10:32:07.492972-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.6, '2019-04-11 10:32:07.599031-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.1, '2019-04-11 10:32:08.528171-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) 
VALUES ( 'US30', 26172.9, '2019-04-11 10:32:08.803254-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.1, '2019-04-11 10:32:08.954501-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26172.9, '2019-04-11 10:32:09.056822-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.6, '2019-04-11 10:32:09.160093-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.4, '2019-04-11 10:32:09.813164-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.6, '2019-04-11 10:32:10.168529-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.4, '2019-04-11 10:32:10.223418-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.6, '2019-04-11 10:32:10.32763-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26173.4, '2019-04-11 10:32:10.383433-05'); INSERT INTO TICK (SYMBOL, PRICE, TIME) VALUES ( 'US30', 26174.6, '2019-04-11 10:32:10.444899-05'); Since Range Bars are driven by price movement, a new Range Bar is only created once the specified Range has been met. For example, if the specified Range amount is 10, it means that each Range Bar will have a range (High/Max to Low/Min) of 10. It is thus conceivable that a single Range Bar could represent several days if the movement throughout those minuets/hours/days was only within a 10 price range. Once a Range Bar is closed-out, the open of the next Range Bar will always be at exactly the same price as the Close of the prior Range Bar. The Range amount determines the size of the Range Bars. Suppose that within a given five minute period prices oscillate between 10 and 12, going from 10 to 12, then back to 10 and then back to 12. If the Range amount for this chart were set to 1, this would result in six Range Bars, two from 10 to 12, two going back down to 10, and two more going back up to 12, as shown in the example below: enter image description here Since all of the aforementioned oscillations occurred within a five minute period (the specified underlying interval in this example) when the Range Bar is built historically, that five minute period would be represented by only two Range Bars, as shown in the example below: enter image description here I think this gives a clear example of what I am trying to select from tick table. Would a window with sub select be the way to go about doing something like this? Or function? Any one up for showing a simple solution to this complex select ?? Result would look like US30, 26175.9, 2019-04-11 10:32:00 US30, 29174.9, 2019-04-11 10:32:01 US30, 29173.9, 2019-04-11 10:32:01 US30, 29174.9, 2019-04-11 10:32:03 US30, 29173.9, 2019-04-11 10:32:04 US30, 29172.9, 2019-04-11 10:32:08 Idea on what the results of select would look like if the range was 1
user10078199 (33 rep)
Sep 23, 2019, 11:41 AM • Last activity: Jan 9, 2025, 12:02 PM
-1 votes
2 answers
731 views
search path is getting updated automatically
In postgres 11.5 for a running session suddenly lot of errors reported in pg_logs "relation does not exist". After enabling extra logging we have found out that just before the errors the following statement is getting executed and feels same is the culprit. SELECT pg_catalog.set_config('search_path...
In Postgres 11.5, a running session suddenly reported a lot of "relation does not exist" errors in pg_log. After enabling extra logging we found that just before the errors the following statement is executed, and it appears to be the culprit:

SELECT pg_catalog.set_config('search_path', '', false)

We are not running it. Could anyone please help me understand by whom and under what circumstances it is getting executed?
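For context, client utilities such as pg_dump and pg_restore are known to issue exactly this set_config call to empty the search path, so a first step is usually to find out which application owns the offending session. A hedged sketch, assuming superuser access; the prefix format is only an example and log_statement = 'all' should be reverted once the culprit is found:

-- Make each log line show who/what issued the statement, then reload settings.
ALTER SYSTEM SET log_line_prefix = '%m [%p] user=%u db=%d app=%a ';
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();

-- While the errors are occurring, list the connected clients and their queries:
SELECT pid, usename, application_name, client_addr, query
FROM pg_stat_activity
WHERE state <> 'idle';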
Neeraj Bansal (153 rep)
Nov 20, 2020, 06:20 AM • Last activity: Jan 7, 2025, 07:04 PM
4 votes
5 answers
4959 views
PostgreSQL (big data): search by tags and order by timestamp
We need to add a search feature on a large table (200M+ rows): ``` item_id | tags | created_at | ... ------------------------------------------------------------------- 1 | ['tag1', 'bar2'] | 2020-01-06 12:43:32 | 2 | ['example5', 'tag9', 'foo2'] | 2020-01-10 10:40:00 | 3 | ['test1', 'tag5'] | 2020-...
We need to add a search feature on a large table (200M+ rows):
item_id | tags                          | created_at          | ...
-------------------------------------------------------------------
1       | ['tag1', 'bar2']              | 2020-01-06 12:43:32 |
2       | ['example5', 'tag9', 'foo2']  | 2020-01-10 10:40:00 |
3       | ['test1', 'tag5']             | 2020-01-11 12:43:32 |
...
The queries would be similar to this one:
SELECT * FROM items 
WHERE tags @> ARRAY['t2', 't5']::varchar[]
ORDER BY created_at DESC
LIMIT 100;
Basically it's like searching some logs by tags and ordering them by timestamp. Seems a common scenario... What index should we use? Have you ever tested something similar in production?

- Example 1: create a GIN index on tags. The problem is that the search may return millions of results, and in order to apply the order/limit you need to make millions of reads from the table on disk (in order to get the created_at value for each row).
- Example 2: add the btree_gin extension and create a composite index on created_at and tags (see the sketch below). The problem is the same as above: I think that PostgreSQL cannot use the ordering, since the index is declared as a GIN index and not as a btree.
- Example 3: create a btree index on created_at and tags. PostgreSQL needs to scan the whole index, since btree doesn't support array operators. I also fear that, due to the SELECT *, PostgreSQL will not use an index-only scan, thus resulting in millions of reads from disk (which would actually be useless, since it only needs 100 reads from disk).
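For reference, Examples 1 and 2 translate into something like the following; the index names are illustrative and this is only a sketch to compare with EXPLAIN, not a benchmarked recommendation:

-- Example 1: plain GIN on the array (fast containment checks, no help for ORDER BY).
CREATE INDEX items_tags_gin ON items USING gin (tags);

-- Example 2: btree_gin adds GIN operator classes for scalar types such as
-- timestamp, so created_at can be included, but the index is still unordered
-- and cannot serve ORDER BY created_at DESC by itself.
CREATE EXTENSION IF NOT EXISTS btree_gin;
CREATE INDEX items_created_tags_gin ON items USING gin (created_at, tags);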
collimarco (653 rep)
Jan 16, 2020, 11:32 AM • Last activity: Dec 31, 2024, 06:01 AM
0 votes
0 answers
72 views
Performance of queries decreased after upgrading from PostgreSQL 11 to 15
I updated the PostgreSQL server from version 11 to version 15 using pg_upgrade. There are several queries that have been affected after the update. Here is one of these queries, which took 15 seconds on version 11, but now takes 12 minutes on PostgreSQL 15. And here are two execution plans for compa...
I updated the PostgreSQL server from version 11 to version 15 using pg_upgrade. There are several queries that have been affected after the update. Here is one of these queries, which took 15 seconds on version 11, but now takes 12 minutes on PostgreSQL 15. And here are two execution plans for comparison. What I've already done is ANALYZE all tables and rebuild all indexes. The hardware and configuration are the same. Do you have any ideas? Best regards. Query Text: SELECT r.*, count(*) OVER() total_rows FROM (SELECT rb.id, rb.req_number, rb.req_status, rb.id_own_customer, rb.id_user_create, rb.date_create, rb.contact_person, rb.phone_number, rb.id_org_user_create, rb.responsible, rb.plan_weight, rb.fact_weight, (SELECT descr_org FROM customer_organizations WHERE rb.id_out_customer = id) descr_out_customer, (SELECT name FROM organizations WHERE rb.id_own_customer = id_org) descr_own_customer, qaverage.route_distance, qaverage.mileage, qaverage.mileage_in_route, qaverage.cnt_trip, qaverage.progress, p.reg_numbers, rc.max_plan_date_start, rc.max_plan_date_finish, rc.min_plan_date_start, rc.min_plan_date_finish, rc.contractors FROM request_ts_base rb LEFT JOIN LATERAL(SELECT re.id_request, string_agg(corg.descr_org, ' ') FILTER(WHERE re.contractor IS NOT NULL) contractors, max(re.date_start) max_plan_date_start, max(re.date_finish) max_plan_date_finish, min(re.date_start) min_plan_date_start, min(re.date_finish) min_plan_date_finish FROM request_ts_extension re LEFT JOIN customer_organizations corg ON corg.id = re.contractor WHERE re.id_request = rb.id GROUP BY re.id_request) rc ON TRUE LEFT JOIN LATERAL(SELECT avg(route_distance) route_distance, avg(mileage) mileage, avg(mileage_in_route) mileage_in_route, avg(cnt_trip) cnt_trip, avg(progress) progress FROM request_ts_transport WHERE request_ts_transport.id_request = rb.id) qaverage ON TRUE LEFT JOIN(SELECT string_agg(p1.reg_number, '') reg_numbers, p1.id_request FROM(SELECT p0.reg_number, rt0.id_request FROM passport p0 JOIN request_ts_transport rt0 ON rt0.id_ts = p0.id_mo UNION ALL SELECT tc1.reg_number, tc1.id_request FROM request_transport_cont tc1 LEFT JOIN request_ts_base rtb ON rtb.id = tc1.id_request) p1 GROUP BY p1.id_request) p ON p.id_request = rb.id) r WHERE r.date_create >= '2024-08-16 17:00:00' AND r.date_create WindowAgg (cost=1710222.66..1710828.12 rows=2 width=992) (actual time=1022776.649..1022776.933 rows=59 loops=1) Buffers: shared hit=78502606, temp read=1973575 written=4431431 -> Nested Loop Left Join (cost=1710222.66..1710818.10 rows=2 width=379) (actual time=15314.202..1022775.813 rows=59 loops=1) Buffers: shared hit=78502429, temp read=1973575 written=4431431 -> Nested Loop Left Join (cost=1710219.25..1710811.21 rows=2 width=219) (actual time=15314.161..1022772.853 rows=59 loops=1) Buffers: shared hit=78502252, temp read=1973575 written=4431431 -> Nested Loop Left Join (cost=1710218.52..1710730.01 rows=1 width=155) (actual time=15314.035..1022765.772 rows=59 loops=1) Join Filter: (rt0.id_request = rb.id) Rows Removed by Join Filter: 93259443 Buffers: shared hit=78501994, temp read=1973575 written=4431431 -> Nested Loop Semi Join (cost=0.72..505.20 rows=1 width=123) (actual time=0.076..2.931 rows=59 loops=1) Buffers: shared hit=241 -> Index Scan Backward using test1_idx_idu_dc on request_ts_base rb (cost=0.43..151.34 rows=142 width=123) (actual time=0.060..1.065 rows=59 loops=1) Index Cond: ((id_user_create = 450) AND (date_create >= '2024-08-22 19:00:00'::timestamp without time zone) AND (date_create Index Scan using 
child_org_id_idx on org_access (cost=0.29..2.48 rows=1 width=4) (actual time=0.018..0.018 rows=1 loops=59) Index Cond: (child_org_id = rb.id_org_user_create) Filter: (parent_org_id = '-1715906210'::integer) Rows Removed by Filter: 1 Buffers: shared hit=177 -> HashAggregate (cost=1710217.80..1710220.30 rows=200 width=36) (actual time=12708.608..17085.665 rows=1580669 loops=59) Group Key: rt0.id_request Batches: 1405 Memory Usage: 267505kB Disk Usage: 308560kB Buffers: shared hit=78501753, temp read=1973575 written=4431418 -> Append (cost=6620.38..1663672.09 rows=9309143 width=13) (actual time=0.539..8719.336 rows=8814315 loops=59) Buffers: shared hit=78501753 -> Hash Join (cost=6620.38..50290.01 rows=297593 width=13) (actual time=0.538..269.004 rows=222192 loops=59) Hash Cond: (rt0.id_ts = p0.id_mo) Buffers: shared hit=2360011 -> Seq Scan on request_ts_transport rt0 (cost=0.00..42885.98 rows=298498 width=8) (actual time=0.014..126.001 rows=222942 loops=59) Buffers: shared hit=2354159 -> Hash (cost=6193.50..6193.50 rows=34150 width=13) (actual time=29.997..30.013 rows=34150 loops=1) Buckets: 65536 Batches: 1 Memory Usage: 2111kB Buffers: shared hit=5852 -> Seq Scan on passport p0 (cost=0.00..6193.50 rows=34150 width=13) (actual time=0.011..22.974 rows=34150 loops=1) Buffers: shared hit=5852 -> Seq Scan on request_transport_cont tc1 (cost=0.00..1380653.50 rows=9011550 width=13) (actual time=0.011..7658.655 rows=8592123 loops=59) Buffers: shared hit=76141742 -> GroupAggregate (cost=0.73..80.59 rows=31 width=68) (actual time=0.098..0.098 rows=1 loops=59) Group Key: re.id_request Buffers: shared hit=258 -> Nested Loop Left Join (cost=0.73..79.73 rows=31 width=46) (actual time=0.082..0.086 rows=1 loops=59) Buffers: shared hit=258 -> Index Scan using request_ts_extension_d_rqst on request_ts_extension re (cost=0.43..11.34 rows=31 width=24) (actual time=0.055..0.058 rows=1 loops=59) Index Cond: (id_request = rb.id) Buffers: shared hit=249 -> Memoize (cost=0.29..2.26 rows=1 width=26) (actual time=0.006..0.006 rows=1 loops=72) Cache Key: re.contractor Cache Mode: logical Hits: 69 Misses: 3 Evictions: 0 Overflows: 0 Memory Usage: 1kB Buffers: shared hit=9 -> Index Scan using customer_organizations_pk on customer_organizations corg (cost=0.28..2.25 rows=1 width=26) (actual time=0.029..0.029 rows=1 loops=3) Index Cond: (id = re.contractor) Buffers: shared hit=9 -> Aggregate (cost=3.42..3.43 rows=1 width=160) (actual time=0.032..0.032 rows=1 loops=59) Buffers: shared hit=177 -> Index Scan using request_ts_transport_d_rqst on request_ts_transport (cost=0.42..3.34 rows=5 width=21) (actual time=0.020..0.020 rows=0 loops=59) Index Cond: (id_request = rb.id) Buffers: shared hit=177 SubPlan 1 -> Index Scan using customer_organizations_pk on customer_organizations (cost=0.28..2.50 rows=1 width=22) (actual time=0.001..0.001 rows=0 loops=59) Index Cond: (id = rb.id_out_customer) SubPlan 2 -> Index Scan using organizations_pk on organizations (cost=0.28..2.50 rows=1 width=34) (actual time=0.003..0.003 rows=1 loops=59) Index Cond: (id_org = rb.id_own_customer) Buffers: shared hit=177 Fast plan on PostgreSQL 11 Sort (cost=1714930.22..1714930.67 rows=178 width=992) (actual time=15279.127..15279.129 rows=45 loops=1) Sort Key: rb.date_create DESC, rb.req_number DESC, rb.id DESC Sort Method: quicksort Memory: 65kB Buffers: shared hit=1378008, temp read=19958 written=19958 -> WindowAgg (cost=1689686.72..1714914.25 rows=178 width=992) (actual time=15278.950..15279.055 rows=45 loops=1) Buffers: shared hit=1378008, temp 
read=19958 written=19958 -> Nested Loop Left Join (cost=1689686.72..1714022.47 rows=178 width=379) (actual time=14875.260..15278.758 rows=45 loops=1) Buffers: shared hit=1377873, temp read=19958 written=19958 -> Nested Loop Left Join (cost=1689683.32..1713411.82 rows=178 width=219) (actual time=14875.209..15278.472 rows=45 loops=1) Buffers: shared hit=1377738, temp read=19958 written=19958 -> Nested Loop Semi Join (cost=1689682.60..1712963.58 rows=6 width=155) (actual time=14875.071..15276.659 rows=45 loops=1) Buffers: shared hit=1376372, temp read=19958 written=19958 -> Hash Left Join (cost=1689678.10..1712907.38 rows=10 width=155) (actual time=14874.975..15275.858 rows=45 loops=1) Hash Cond: (rb.id = rt0.id_request) Buffers: shared hit=1376147, temp read=19958 written=19958 -> Index Scan using request_ts_base_datecreate_org on request_ts_base rb (cost=0.43..23229.68 rows=10 width=123) (actual time=1.066..65.947 rows=45 loops=1) Index Cond: ((date_create >= '2024-08-16 17:00:00'::timestamp without time zone) AND (date_create Hash (cost=1689675.17..1689675.17 rows=200 width=36) (actual time=14834.259..14834.259 rows=2766656 loops=1) Buckets: 1048576 (originally 1024) Batches: 4 (originally 1) Memory Usage: 74056kB Buffers: shared hit=1336291, temp written=16075 -> HashAggregate (cost=1689670.67..1689673.17 rows=200 width=36) (actual time=12228.531..14092.515 rows=2766656 loops=1) Group Key: rt0.id_request Buffers: shared hit=1336291 -> Append (cost=6620.24..1645668.18 rows=8800499 width=13) (actual time=28.843..7868.353 rows=8803484 loops=1) Buffers: shared hit=1336291 -> Hash Join (cost=6620.24..49330.96 rows=221825 width=13) (actual time=28.842..262.406 rows=222128 loops=1) Hash Cond: (rt0.id_ts = p0.id_mo) Buffers: shared hit=45753 -> Seq Scan on request_ts_transport rt0 (cost=0.00..42126.47 rows=222547 width=8) (actual time=0.010..77.281 rows=222877 loops=1) Buffers: shared hit=39901 -> Hash (cost=6193.44..6193.44 rows=34144 width=13) (actual time=28.669..28.669 rows=34150 loops=1) Buckets: 65536 Batches: 1 Memory Usage: 2111kB Buffers: shared hit=5852 -> Seq Scan on passport p0 (cost=0.00..6193.44 rows=34144 width=13) (actual time=0.009..22.607 rows=34150 loops=1) Buffers: shared hit=5852 -> Seq Scan on request_transport_cont tc1 (cost=0.00..1376324.74 rows=8578674 width=13) (actual time=0.014..6597.547 rows=8581356 loops=1) Buffers: shared hit=1290538 -> Bitmap Heap Scan on org_access (cost=4.50..5.61 rows=1 width=4) (actual time=0.013..0.013 rows=1 loops=45) Recheck Cond: ((child_org_id = rb.id_org_user_create) AND (parent_org_id = 1657375583)) Heap Blocks: exact=45 Buffers: shared hit=225 -> BitmapAnd (cost=4.50..4.50 rows=1 width=0) (actual time=0.011..0.011 rows=0 loops=45) Buffers: shared hit=180 -> Bitmap Index Scan on child_org_id_idx (cost=0.00..1.32 rows=5 width=0) (actual time=0.003..0.003 rows=3 loops=45) Index Cond: (child_org_id = rb.id_org_user_create) Buffers: shared hit=90 -> Bitmap Index Scan on org_access_parent_org_id_idx (cost=0.00..2.88 rows=199 width=0) (actual time=0.007..0.007 rows=199 loops=45) Index Cond: (parent_org_id = 1657375583) Buffers: shared hit=90 -> GroupAggregate (cost=0.72..74.15 rows=28 width=68) (actual time=0.038..0.038 rows=1 loops=45) Group Key: re.id_request Buffers: shared hit=1366 -> Nested Loop Left Join (cost=0.72..73.38 rows=28 width=46) (actual time=0.012..0.035 rows=7 loops=45) Buffers: shared hit=1366 -> Index Scan using request_ts_extension_d_rqst on request_ts_extension re (cost=0.43..9.98 rows=28 width=24) (actual 
time=0.008..0.019 rows=7 loops=45) Index Cond: (id_request = rb.id) Buffers: shared hit=415 -> Index Scan using customer_organizations_pk on customer_organizations corg (cost=0.28..2.26 rows=1 width=26) (actual time=0.001..0.001 rows=1 loops=317) Index Cond: (id = re.contractor) Buffers: shared hit=951 -> Aggregate (cost=3.40..3.41 rows=1 width=160) (actual time=0.005..0.005 rows=1 loops=45) Buffers: shared hit=135 -> Index Scan using request_ts_transport_d_rqst on request_ts_transport (cost=0.42..3.33 rows=5 width=21) (actual time=0.004..0.004 rows=0 loops=45) Index Cond: (id_request = rb.id) Buffers: shared hit=135 SubPlan 1 -> Index Scan using customer_organizations_pk on customer_organizations (cost=0.28..2.50 rows=1 width=22) (actual time=0.000..0.000 rows=0 loops=45) Index Cond: (rb.id_out_customer = id) SubPlan 2 -> Index Scan using organizations_pk on organizations (cost=0.28..2.50 rows=1 width=34) (actual time=0.002..0.002 rows=1 loops=45) Index Cond: (rb.id_own_cu stomer = id_org) Buffers: shared hit=135
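One visible difference between the two plans is that the grouped subquery over request_transport_cont is estimated at 200 rows in the v15 plan while it actually returns about 1.6 million, which pushes it under a nested loop that repeats the 8.8M-row append for every outer row. A hedged sketch of diagnostic steps under that reading (column choice is illustrative, and the SET commands are for session-level testing only, not a recommended permanent configuration):

-- The asker has already run ANALYZE; raising the per-column statistics target
-- on the badly misestimated grouping column may still improve the estimate.
ALTER TABLE request_transport_cont ALTER COLUMN id_request SET STATISTICS 1000;
ANALYZE request_transport_cont;

-- Session-level A/B tests to see which planner feature change triggers the regression:
SET enable_memoize = off;   -- Memoize did not exist in v11 and appears in the v15 plan
SET enable_nestloop = off;  -- pushes the planner back toward the v11-style hash join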
Fahrenheit (1 rep)
Aug 26, 2024, 08:42 PM