Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
150
views
How to get an explain plan from a PL/SQL function call?
Consider this working PL/SQL block:
declare
    o_result boolean;
    io_error varchar2(30000);
    tab1 someTable1;
    tab2 someTable2;
begin
    tab1(1).col1 := 'str';
    tab1(1).col2 := 1;
    tab2(1).col1 := 'str';
    tab2(1).col2 := 0.5;
    tab2(1).col3 := 6;
    o_result := some_function(io_error, 2, 'str', 5, to_date('01-JAN-23','DD-MON-YY'), to_date('31-DEC-23','DD-MON-YY'), 4, 'A', tab1, tab2, 'str', 'B', 'str');
end;
How do I get the explain plan for PL/SQL function calls whose inputs are dates and custom types?
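One possible approach (a sketch, not from the original post; it assumes SELECT privileges on V$SQL and that the function's statements are still in the cursor cache) is to run the block and then pull the cached plans of the recursive SQL the function issued:
-- locate the recursive statements (the LIKE filter is a hypothetical example)
SELECT sql_id, child_number, sql_text
FROM v$sql
WHERE lower(sql_text) LIKE '%sometable1%';
-- then display the cached plan for a statement of interest ('sql_id' is a placeholder)
SELECT * FROM TABLE(dbms_xplan.display_cursor('sql_id', 0));
This avoids having to reproduce the date and custom-type inputs in a standalone EXPLAIN PLAN statement, since the cached cursors already ran with them.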
gabriel119435
(119 rep)
Feb 20, 2024, 07:44 PM
• Last activity: Jul 18, 2025, 02:02 PM
0
votes
2
answers
164
views
Oracle query performs differently between prod and staging environments
So I have two Oracle 12c R2 databases, a prod and a staging environment; both are perfectly aligned and running on identical hardware.
The same select statement takes about 400ms on staging, while on prod it never takes less than 4 seconds.
Our expert DBA left the company, leaving me and my relatively thin skills to investigate this. I have no idea how to proceed, although I saw some major differences between the two explain plans (1st is prod, 2nd is staging) regarding cardinalities.
I don't really know where to start, even just to identify the source of the issue.
I rebuilt the index involved in the query with no effect. I later dropped it and created it again, with no improvement either.
Edit:
Following suggestions, I gathered the stats with sample = 100%, with no improvement.
However, the only parameter that differed between prod and staging was sga_max_size, which was much higher in staging. Aligning it in prod helped me regain a similar response time (the query now runs in 500ms), though the explain plans are still different.
I will mark this as resolved, as the main performance issue is now gone.
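A quick way to spot this kind of configuration drift (a sketch using the standard V$PARAMETER view; run it on both instances and diff the output):
-- list all parameters that have been changed from their defaults
SELECT name, value
FROM v$parameter
WHERE isdefault = 'FALSE'
ORDER BY name;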
Marc
(101 rep)
Sep 5, 2022, 01:40 PM
• Last activity: Jul 16, 2025, 10:04 AM
0
votes
1
answers
155
views
Understanding the REF column in MySQL EXPLAIN
I have a few questions about the REF column in the EXPLAIN output. MySQL version: 5.7.31 Community Server.
The table structure:
CREATE TABLE `table1` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `cp_id` bigint(20) NOT NULL,
  `ms_id` varchar(32) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL,
  `bu_id` int(8) DEFAULT NULL,
  `create_time` datetime DEFAULT NULL,
  `update_time` datetime DEFAULT NULL,
  `status` int(8) DEFAULT NULL,
  `sync_time` date NOT NULL DEFAULT '1970-01-01',
  PRIMARY KEY (`id`),
  KEY `idx_update_time` (`update_time`),
  KEY `idx_ms_st_sy_cp` (`ms_id`,`status`,`sync_time`,`cp_id`)
) ENGINE=InnoDB AUTO_INCREMENT=40264018 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
explain select cp_id, max(sync_time) as sync_time from table1 where ms_id = '678645347824' and status = 1 and sync_time >= '2024-02-09' group by cp_id;
+----+-------------+--------+------------+-------+-----------------+-----------------+---------+------+------+----------+-----------------------------------------------------------+
| id | select_type | table  | partitions | type  | possible_keys   | key             | key_len | ref  | rows | filtered | Extra                                                     |
+----+-------------+--------+------------+-------+-----------------+-----------------+---------+------+------+----------+-----------------------------------------------------------+
|  1 | SIMPLE      | table1 | NULL       | range | idx_ms_st_sy_cp | idx_ms_st_sy_cp | 107     | NULL |  900 |   100.00 | Using where; Using index; Using temporary; Using filesort |
+----+-------------+--------+------------+-------+-----------------+-----------------+---------+------+------+----------+-----------------------------------------------------------+
1 row in set, 1 warning (0.00 sec)
In the first EXPLAIN plan, the ref column is NULL.
This link https://www.linkedin.com/pulse/mysql-explain-explained-gowrav-vishwakarma-%E0%A4%97-%E0%A4%B0%E0%A4%B5-%E0%A4%B5-%E0%A4%B6-%E0%A4%B5%E0%A4%95%E0%A4%B0-%E0%A4%AE-/
says that if the ref column is NULL, the index is not being used. The MySQL documentation says the same thing:
https://dev.mysql.com/doc/refman/5.7/en/explain-output.html#explain_ref
1. What exactly is the index doing here?
2. Is it only being used as a covering index? Using index in the EXTRA column tells me that the index is being used as a covering index (please correct me if I am wrong).
3. The key_len of 107 tells me that all three columns (ms_id, status, sync_time) are being used. As various sources explain, key_len can be used to determine which columns are in play: a key_len of 107 for (ms_id, status, sync_time) = 99 + 5 + 3, which can also be derived from the EXPLAINs below (see the breakdown after this list). If key_len tells me the index is being used, then why is the REF column NULL?
4. Why is the TYPE range? Does this range access mean that only the sync_time column is being used? Can I conclude so? key_len = 107 together with TYPE = range is a bit confusing.
5. Are the columns TYPE, KEY and REF related to each other? I read that KEY and REF are related and that REF shows the kind of index access, so they are probably somehow connected.
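For reference, the key_len arithmetic relied on in question 3 can be reproduced from the column definitions (a sketch based on MySQL's usual rules: utf8 is up to 3 bytes per character, variable-length columns add a 2-byte length prefix, and nullable columns add 1 byte):
-- ms_id     varchar(32) utf8, NULL:  32*3 + 2 + 1 = 99
-- status    int(8), NULL:            4 + 1        =  5
-- sync_time date, NOT NULL:          3            =  3
--                                    total key_len 107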
explain select cp_id, max(sync_time) as sync_time from table1 where ms_id = '678645347824' group by cp_id;
+----+-------------+--------+------------+------+-----------------+-----------------+---------+-------+------+----------+-----------------------------------------------------------+
| id | select_type | table  | partitions | type | possible_keys   | key             | key_len | ref   | rows | filtered | Extra                                                     |
+----+-------------+--------+------------+------+-----------------+-----------------+---------+-------+------+----------+-----------------------------------------------------------+
|  1 | SIMPLE      | table1 | NULL       | ref  | idx_ms_st_sy_cp | idx_ms_st_sy_cp | 99      | const |  905 |   100.00 | Using where; Using index; Using temporary; Using filesort |
+----+-------------+--------+------------+------+-----------------+-----------------+---------+-------+------+----------+-----------------------------------------------------------+
1 row in set, 1 warning (0.00 sec)
In the second EXPLAIN, where I use only ms_id, the TYPE is ref and the REF column is const, meaning that the index is being accessed with an equality comparison, and key_len is 99 for ms_id.
explain select cp_id, max(sync_time) as sync_time from table1 where ms_id = '678645347824' and status = 1 group by cp_id;
+----+-------------+--------+------------+------+-----------------+-----------------+---------+-------------+------+----------+-----------------------------------------------------------+
| id | select_type | table  | partitions | type | possible_keys   | key             | key_len | ref         | rows | filtered | Extra                                                     |
+----+-------------+--------+------------+------+-----------------+-----------------+---------+-------------+------+----------+-----------------------------------------------------------+
|  1 | SIMPLE      | table1 | NULL       | ref  | idx_ms_st_sy_cp | idx_ms_st_sy_cp | 104     | const,const |  905 |   100.00 | Using where; Using index; Using temporary; Using filesort |
+----+-------------+--------+------------+------+-----------------+-----------------+---------+-------------+------+----------+-----------------------------------------------------------+
1 row in set, 1 warning (0.01 sec)
In the third EXPLAIN, the TYPE is ref and the REF column is const,const because of the two = comparisons in the WHERE clause, on ms_id and status.
explain select cp_id, max(sync_time) as sync_time from table1 where ms_id = '678645347824' and status = 1 and sync_time >= '2024-02-09' and cp_id = '1' group by cp_id;
+----+-------------+--------+------------+------+-----------------+---------+---------+-------------------+------+----------+------------------------------------+
| id | select_type | table  | partitions | type | possible_keys   | key     | key_len | ref               | rows | filtered | Extra                              |
+----+-------------+--------+------------+------+-----------------+---------+---------+-------------------+------+----------+------------------------------------+
|  1 | SIMPLE      | table1 | NULL       | ref  | idx_ms_st_sy_cp | idx_com | 112     | const,const,const |    1 |    33.33 | Using index condition; Using where |
+----+-------------+--------+------------+------+-----------------+---------+---------+-------------------+------+----------+------------------------------------+
1 row in set, 1 warning (0.00 sec)
In the last EXPLAIN it gets even more interesting: the REF column shows const,const,const, meaning that the sync_time column is not being used. Am I correct?
Avinash Pawar
(216 rep)
Aug 9, 2024, 03:18 PM
• Last activity: Jul 12, 2025, 04:04 AM
20
votes
1
answers
16267
views
What is "planSummary: IDHACK"?
This query scans only one document and returns only one document, but it is very slow:
2017-05-22T07:13:24.548+0000 I COMMAND [conn40] query databasename.collectionname query: { _id: ObjectId('576d4ce3f2d62a001e84a9b8') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8009ms
2017-05-22T07:13:24.549+0000 I COMMAND [conn10] query databasename.collectionname query: { _id: ObjectId('576d4db35de5fa001ebdd77a') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8010ms
2017-05-22T07:13:24.553+0000 I COMMAND [conn47] query databasename.collectionname query: { _id: ObjectId('576d44b7ea8351001ea5fb22') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8014ms
2017-05-22T07:13:24.555+0000 I COMMAND [conn52] query databasename.collectionname query: { _id: ObjectId('576d457ceb82a0001e205bfa') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8015ms
2017-05-22T07:13:24.555+0000 I COMMAND [conn41] query databasename.collectionname query: { _id: ObjectId('576d457ec0697c001e1e1779') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8015ms
2017-05-22T07:13:24.555+0000 I COMMAND [conn39] query databasename.collectionname query: { _id: ObjectId('576d44b8ea8351001ea5fb27') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8015ms
2017-05-22T07:13:24.561+0000 I COMMAND [conn34] query databasename.collectionname query: { _id: ObjectId('576d44b55de5fa001ebdd31e') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8022ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn32] query databasename.collectionname query: { _id: ObjectId('576d4df6d738a7001ef2a235') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn51] query databasename.collectionname query: { _id: ObjectId('576d48165de5fa001ebdd55a') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8024ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn17] query databasename.collectionname query: { _id: ObjectId('576d44c19f2382001e953717') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn8] query databasename.collectionname query: { _id: ObjectId('576d45d256c22e001efdb382') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn42] query databasename.collectionname query: { _id: ObjectId('576d44bd57c75e001e6e2302') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.564+0000 I COMMAND [conn6] query databasename.collectionname query: { _id: ObjectId('576d44b394e731001e7cd530') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8025ms
2017-05-22T07:13:24.571+0000 I COMMAND [conn31] query databasename.collectionname query: { _id: ObjectId('576d4dbcb7289f001e64e32b') } planSummary: IDHACK ntoskip:0 keysExamined:1 docsExamined:1 idhack:1 cursorExhausted:1 keyUpdates:0 writeConflicts:0 numYields:0 nreturned:1 reslen:42 locks:{ Global: { acquireCount: { r: 2 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { r: 1 } } } 8032ms
This looks like very slow disk I/O. What does planSummary: IDHACK mean? Is there any more information about IDHACK?
Sybil
(2578 rep)
May 23, 2017, 03:36 PM
• Last activity: Jul 10, 2025, 01:38 AM
2
votes
2
answers
2736
views
Postgresql explain alter table add column doesn't work
I'm using Postgresql 12 (AWS RDS) and pgAdmin4.
While the following statement works:
explain select * from table "Commands";
the next one:
explain alter table "Commands" drop column "test";
gives this error:
ERROR: syntax error at or near "alter"
LINE 1: explain alter table "Commands" drop column "test";
        ^
SQL state: 42601
Character: 9
I made sure the statement alter table "Commands" drop column "test" itself can be run successfully, so this cannot be an issue with that statement.
Does explain not work with alter, or am I missing something?
Tran Triet
(123 rep)
Sep 16, 2021, 10:05 AM
• Last activity: Jun 20, 2025, 11:01 AM
2
votes
1
answers
415
views
Different results from Postgres EXPLAIN ANALYZE
I was investigating the performance of a query that hit the top of my list in PMM and noticed strange behavior: the first time, the query's execution time is long, but subsequent runs are fast:
Aggregate (cost=1026.84..1026.85 rows=1 width=8) (actual time=17.657..17.659 rows=1 loops=1)
-> Nested Loop (cost=1.12..1026.80 rows=15 width=37) (actual time=0.844..17.625 rows=24 loops=1)
-> Index Scan using i01_recipients on recipients recipiente1_ (cost=0.56..32.94 rows=28 width=37) (actual time=0.129..0.994 rows=24 loops=1)
Index Cond: ((user_reference)::text = 'b8245f69-f429-4bc3-80c3-78de75ffb7b8'::text)
-> Index Scan using i01_messages on messages messageent0_ (cost=0.56..35.37 rows=12 width=74) (actual time=0.638..0.688 rows=1 loops=24)
Index Cond: ((recipient_id)::text = (recipiente1_.id)::text)
Filter: (((channel)::text = 'INTERNAL'::text) AND ((status)::text = ANY ('{SENT,DELIVERED,READ}'::text[])))
Rows Removed by Filter: 1
Planning Time: 0.639 ms
Execution Time: 17.713 ms
Aggregate (cost=1026.84..1026.85 rows=1 width=8) (actual time=0.790..0.792 rows=1 loops=1)
-> Nested Loop (cost=1.12..1026.80 rows=15 width=37) (actual time=0.092..0.779 rows=24 loops=1)
-> Index Scan using i01_recipients on recipients recipiente1_ (cost=0.56..32.94 rows=28 width=37) (actual time=0.031..0.078 rows=24 loops=1)
Index Cond: ((user_reference)::text = 'b8245f69-f429-4bc3-80c3-78de75ffb7b8'::text)
-> Index Scan using i01_messages on messages messageent0_ (cost=0.56..35.37 rows=12 width=74) (actual time=0.026..0.027 rows=1 loops=24)
Index Cond: ((recipient_id)::text = (recipiente1_.id)::text)
Filter: (((channel)::text = 'INTERNAL'::text) AND ((status)::text = ANY ('{SENT,DELIVERED,READ}'::text[])))
Rows Removed by Filter: 1
Planning Time: 0.505 ms
Execution Time: 0.839 ms
I'm not a big Postgres expert. Could you explain to me why this is happening and why this query is loading my system?
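One way to see where the time difference comes from (a sketch, not part of the original post): add BUFFERS to the EXPLAIN options so the plan reports how many pages came from disk/OS (read) versus the PostgreSQL buffer cache (hit):
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;  -- the query under investigation
A cold first run typically shows many "read" buffers, while later runs show mostly "hit", which would match the behavior above.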
Марат Бралиев
(21 rep)
Dec 8, 2022, 02:34 PM
• Last activity: Jun 1, 2025, 11:03 AM
1
votes
1
answers
250
views
See content of temporary table in Oracle's EXPLAIN
I'm executing a query in Oracle SQL Developer and eventually call
SELECT * FROM TABLE ( dbms_xplan.display );
to get an impression what Oracle just did. There's a column *Name* in the output that states the name of the index that is used in an operation or the name of the table that is being accessed.
Every once in a while it says something like *SYS_TEMP_0FD9D6B47_384FBF5*. I'm not sure what to make of it. I guess this is a temporary table created from a WITH clause. How can I see the content of, or the SQL code behind, this table?
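A hedged pointer (an assumption, not confirmed by the post): names like SYS_TEMP_... usually belong to a materialized WITH subquery, and dbms_xplan can annotate each plan line with query block and alias information that ties it back to its source text:
-- 'ALL' adds the projection, alias and query block sections to the plan output
SELECT * FROM TABLE(dbms_xplan.display(format => 'ALL'));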
Bernhard Döbler
(217 rep)
Oct 16, 2020, 04:55 PM
• Last activity: May 28, 2025, 01:05 PM
1
votes
1
answers
241
views
A MySQL EXPLAIN number of rows discrepancy
MySQL 5.5.49-log
More questions on the query in https://dba.stackexchange.com/q/175377/5061 (the query is the same but the question is different):
I have the following table (filled with many rows):
CREATE TABLE `SectorGraphs2` (
  `Kind` tinyint(3) UNSIGNED NOT NULL COMMENT '1 - producer, 2 - genre, 3 - region',
  `Criterion` tinyint(3) UNSIGNED NOT NULL,
  `Period` tinyint(3) UNSIGNED NOT NULL,
  `PeriodStart` date NOT NULL,
  `SectorID` int(10) UNSIGNED NOT NULL,
  `Value` float NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
ALTER TABLE `SectorGraphs2`
  ADD UNIQUE KEY `Producer2` (`Kind`,`Criterion`,`Period`,`PeriodStart`,`SectorID`) USING BTREE,
  ADD KEY `SectorID` (`SectorID`);
then I run:
EXPLAIN
SELECT SectorID, SUM(Value)
FROM SectorGraphs2
WHERE Kind = 1 AND Criterion = 7
AND Period = 1
AND PeriodStart >= ? AND PeriodStart < ? + INTERVAL 1 WEEK
GROUP BY SectorID
and it produces:
+----+-------------+---------------+-------+---------------+-----------+---------+------+------+----------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | extra |
+----+-------------+---------------+-------+---------------+-----------+---------+------+------+----------------------------------------------+
| 1 | SIMPLE | SectorGraphs2 | range | Producer2 | Producer2 | 6 | NULL | 1 | Using where; Using temporary; Using filesort |
+----+-------------+---------------+-------+---------------+-----------+---------+------+------+----------------------------------------------+
My question: why does it use a temporary table and a filesort while reporting only 1 row examined? It seems that, because a temporary table is used, it should process more than one row. How can I determine the real number of rows processed? How do I resolve this discrepancy in the number of processed rows?
Note that the task I have been assigned is to eliminate heavy queries (ones involving too many rows), and right now I do not know how to do this.
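One common way to measure the rows actually touched (a sketch using the standard session status counters, not from the original post):
FLUSH STATUS;
-- run the SELECT ... GROUP BY SectorID query from above here
SHOW SESSION STATUS LIKE 'Handler_read%';
The deltas in the Handler_read_* counters reflect the rows the storage engine really handled, independently of the EXPLAIN estimate.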
porton
(745 rep)
Jun 5, 2017, 11:52 AM
• Last activity: May 25, 2025, 03:03 PM
0
votes
1
answers
282
views
Any way to view the original query on top of views using Postgres
For example, I have a simple query:
SELECT * FROM information_schema.columns where table_schema = 'public';
which turns out to be very complex: information_schema.columns is a complex view.
Right now, to understand the query, I need to read the definitions of all the views involved.
Is there any way to "expand" the query and make it easier to read side by side with the query plan output?
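There is at least a building block for this (a sketch, not from the original post): each layer's definition can be printed with pg_get_viewdef and applied to every view the query touches:
-- second argument 'true' pretty-prints the definition
SELECT pg_get_viewdef('information_schema.columns'::regclass, true);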
nvcnvn
(101 rep)
Apr 25, 2021, 10:52 AM
• Last activity: May 15, 2025, 04:05 AM
1
votes
0
answers
38
views
How to make the execution plan time match the actual query in PostgreSQL 14
When I use EXPLAIN ANALYZE in PostgreSQL 14, I find the reported execution much faster than the actual query. This is how I generate the execution plan:
explain (analyze true, timing true) long sql...
The EXPLAIN ANALYZE run takes 16s, the actual query takes 23s, and the timing shown in the plan is 16s. Is it possible to make the EXPLAIN timing match the actual SQL? (If they differ, the actual run behaves differently from the explained one, which may mislead the user.) Maybe the plan used by EXPLAIN should be locked?
Dolphin
(939 rep)
May 10, 2025, 05:41 AM
• Last activity: May 10, 2025, 06:57 AM
0
votes
1
answers
542
views
ANALYZE or EXPLAIN for previous query
EXPLAIN SELECT ... shows the EXPLAIN for a query (without actually running it).
SHOW EXPLAIN FOR conn_id shows the EXPLAIN for a query that is currently running.
Is it possible to get the EXPLAIN for the last query that ran on a connection?
For example, if my application detected that a specific query took over 2 seconds to run for a user, I'd like to log the EXPLAIN or ANALYZE output with the query plan of that query.
I'm not sure that re-running the query with EXPLAIN or ANALYZE will give me the exact query plan that was used on its previous run, as MySQL/MariaDB could potentially use another query plan on the re-run.
(I am aware of log_slow_verbosity=query_plan,explain, but I don't want to log this for every query that takes over 1 second - only this specific query.)
Nuno
(829 rep)
Nov 3, 2022, 11:50 PM
• Last activity: Apr 25, 2025, 04:07 AM
0
votes
1
answers
762
views
Can EXPLAIN be used to get some insights about Common Table Expression (CTE)?
I need to find the ancestors of a row in a table given an ID value, and a Google search pointed me to recursive Common Table Expressions (CTEs). I wonder if EXPLAIN can be used to get some insight into how MySQL handles this and whether there is room for optimization.
Also, is this the best strategy to get ancestors in a table? I could do this in application code but don't want to, because that would lead me to loops and probably multiple SELECT statements sent to the DB.
The SQL I wrote following the previous article is as follows:
WITH RECURSIVE page_revisions_path (id, page_id, parent_id) AS
(
SELECT id, page_id, parent_id
FROM page_revisions
WHERE parent_id = 'some_id'
UNION ALL
SELECT c.id, c.page_id, c.parent_id
FROM page_revisions_path AS cp JOIN page_revisions AS c ON cp.id = c.parent_id
)
SELECT * FROM page_revisions_path;
Here is also the SHOW CREATE TABLE query result:
CREATE TABLE `page_revisions` (
  `id` varchar(26) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci NOT NULL,
  `page_id` varchar(26) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `parent_id` varchar(26) CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  `routable` tinyint(1) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `IDX1` (`page_id`),
  KEY `IDX2` (`parent_id`),
  CONSTRAINT `FK1` FOREIGN KEY (`parent_id`) REFERENCES `page_revisions` (`id`),
  CONSTRAINT `FK2` FOREIGN KEY (`page_id`) REFERENCES `pages` (`id`)
) ENGINE=InnoDB
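For what it's worth, EXPLAIN can be prepended to the whole statement (a sketch; FORMAT=TREE assumes MySQL 8.0.16+, which is implied anyway since recursive CTEs require 8.0):
EXPLAIN FORMAT=TREE
WITH RECURSIVE page_revisions_path (id, page_id, parent_id) AS
(
    SELECT id, page_id, parent_id
    FROM page_revisions
    WHERE parent_id = 'some_id'
    UNION ALL
    SELECT c.id, c.page_id, c.parent_id
    FROM page_revisions_path AS cp JOIN page_revisions AS c ON cp.id = c.parent_id
)
SELECT * FROM page_revisions_path;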
ReynierPM
(1888 rep)
Feb 7, 2022, 10:13 PM
• Last activity: Apr 18, 2025, 05:07 PM
0
votes
1
answers
38
views
Different plan and slower query on smaller Postgres table
Running the same query on two tables that only differ in row count (~7.8M vs ~1.4M) results in two different plans, which sounds reasonable. But the execution on the *smaller* table is 4 to 5 times slower and I would like to understand why.
The tables are defined as is:
Column | Type | Collation | Nullable | Default
------------+--------------------------+-----------+----------+---------
image_id | bigint | | not null |
h3_cell | h3index | | not null |
created_at | timestamp with time zone | | not null |
location | geometry(PointZ,4326) | | not null |
Indexes:
"images_a_pkey" PRIMARY KEY, btree (image_id)
"images_a_created_at_idx" btree (created_at)
"images_a_h3_cell_idx" btree (h3_cell)
The query is the following:
WITH h3_cells AS (
    SELECT UNNEST(h3_linestring_to_cells(:line_string, 13, 1)) AS cell
)
SELECT COUNT(*)
FROM images
JOIN h3_cells hc ON images.h3_cell = hc.cell
The h3_linestring_to_cells() function returns an array of h3index whose size can, in some cases, be in the tens of thousands of values. In the examples below it returns about 50,000.
On the table with 7.8M rows, the plan and execution entries are as such (array values redacted for brevity):
Aggregate (cost=347404.47..347404.48 rows=1 width=8) (actual time=74.311..74.312 rows=1 loops=1)
Buffers: shared hit=154681 read=328
I/O Timings: shared read=1.362
-> Nested Loop (cost=0.43..346724.23 rows=272093 width=0) (actual time=0.051..74.246 rows=833 loops=1)
Buffers: shared hit=154681 read=328
I/O Timings: shared read=1.362
-> ProjectSet (cost=0.00..256.90 rows=51377 width=8) (actual time=0.002..4.113 rows=51377 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.001 rows=1 loops=1)
-> Index Only Scan using images_a_h3_cell_idx on images_a (cost=0.43..6.68 rows=5 width=8) (actual time=0.001..0.001 rows=0 loops=51377)
Index Cond: (h3_cell = (unnest('{...}'::h3index[])))
Heap Fetches: 354
Buffers: shared hit=154681 read=328
I/O Timings: shared read=1.362
Planning Time: 139.421 ms
Execution Time: 74.345 ms
While on the smaller 1.4M rows table, the plan and execution are these:
Aggregate (cost=105040.78..105040.79 rows=1 width=8) (actual time=327.586..327.587 rows=1 loops=1)
Buffers: shared hit=148358 read=6315 written=41
I/O Timings: shared read=26.521 write=0.327
-> Merge Join (cost=4791.05..104802.14 rows=95455 width=0) (actual time=321.174..327.575 rows=118 loops=1)
Merge Cond: (ptilmi.h3_cell = (unnest('{...}'::h3index[])))
Buffers: shared hit=148358 read=6315 written=41
I/O Timings: shared read=26.521 write=0.327
-> Index Only Scan using images_b_h3_cell_idx on images_b ptilmi (cost=0.43..95041.10 rows=1415438 width=8) (actual time=0.026..245.897 rows=964987 loops=1)
Heap Fetches: 469832
Buffers: shared hit=148358 read=6315 written=41
I/O Timings: shared read=26.521 write=0.327
-> Sort (cost=4790.62..4919.07 rows=51377 width=8) (actual time=11.181..13.551 rows=51390 loops=1)
Sort Key: (unnest('{...}'::h3index[]))
Sort Method: quicksort Memory: 1537kB
-> ProjectSet (cost=0.00..256.90 rows=51377 width=8) (actual time=0.002..3.716 rows=51377 loops=1)
-> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.000..0.001 rows=1 loops=1)
Planning Time: 146.617 ms
Execution Time: 327.626 ms
In the case of a smaller source array, e.g. of size 25,000, the plan on the smaller table changes to the first one (nested loop) and its execution time becomes more in line with expectations (faster than on the bigger table).
I'm failing to understand what triggers this change in plan for a less efficient one.
Note that I'm using a CTE + JOIN instead of e.g. WHERE h3_cell = ANY(h3_linestring_to_cells(:line_string, 13, 1)), as the resulting array is often quite large and I've found the former to often be more efficient in this case. Interestingly, with a 50,000-entry array the = ANY() approach is faster on the smaller table, while at 25,000 it's slower.
Jukurrpa
(195 rep)
Apr 7, 2025, 03:16 PM
• Last activity: Apr 8, 2025, 06:06 AM
1
votes
1
answers
41
views
Greenplum/Postgres - Why not backward index scan?
I have this query running on Greenplum 6 (Postgres 9.4) and it is too slow, as it performs a sequential scan. But I am wondering why it does not perform a backward index scan to fetch the records, since there is an index on the same column.
explain analyse select * from tab1 where create_time > now() - interval '1d' order by create_time desc limit 20;
https://explain.depesz.com/s/UTA7#html
The index is a composite index (create_time, col2); could that be the reason behind the sequential scan? I have already updated the table's statistics, and bloat is only ≈5%.
goodfella
(595 rep)
Apr 3, 2025, 09:03 AM
• Last activity: Apr 4, 2025, 04:42 AM
0
votes
0
answers
43
views
Why does Postgres use filter instead of index scan when there are non-negligible number of null records?
This is my table:
learning_postgres=# create table r(id serial primary key, x int not null, y int);
CREATE TABLE
learning_postgres=# create index r_x_btree_idx on r using btree (x);
CREATE INDEX
learning_postgres=# create index r_y_btree_idx on r using btree (y);
CREATE INDEX
learning_postgres=# \d+ r
Table "public.r"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
--------+---------+-----------+----------+-------------------------------+---------+-------------+--------------+-------------
id | integer | | not null | nextval('r_id_seq'::regclass) | plain | | |
x | integer | | not null | | plain | | |
y | integer | | | | plain | | |
Indexes:
"r_pkey" PRIMARY KEY, btree (id)
"r_x_btree_idx" btree (x)
"r_y_btree_idx" btree (y)
Access method: heap
After inserting 5 million records, plus a small number of records with NULL y:
learning_postgres=# insert into r(x, y) select random() * 1000000, random() * 1000000 from generate_series(0, 5000000);
INSERT 0 5000001
learning_postgres=# insert into r(x) select random() * 1000000 from generate_series(0, 10);
INSERT 0 11
learning_postgres=# analyze r;
ANALYZE
learning_postgres=# select attname, null_frac from pg_stats where tablename = 'r';
attname | null_frac
---------+-----------
id | 0
x | 0
y | 0
(3 rows)
The query planner uses an index scan to find rows where y IS NULL:
learning_postgres=# explain analyze select * from r where y is null order by x asc limit 1;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
Limit (cost=8.46..8.46 rows=1 width=12) (actual time=0.031..0.032 rows=1 loops=1)
-> Sort (cost=8.46..8.46 rows=1 width=12) (actual time=0.030..0.030 rows=1 loops=1)
Sort Key: x
Sort Method: top-N heapsort Memory: 25kB
-> Index Scan using r_y_btree_idx on r (cost=0.43..8.45 rows=1 width=12) (actual time=0.016..0.018 rows=11 loops=1)
Index Cond: (y IS NULL)
Planning Time: 0.383 ms
Execution Time: 0.066 ms
(8 rows)
But after inserting more NULL records for column y, the query planner chooses a filter over an index scan:
learning_postgres=# insert into r(x) select random() * 1000000 from generate_series(0, 1000);
INSERT 0 1001
learning_postgres=# analyze r;
ANALYZE
learning_postgres=# select attname, null_frac from pg_stats where tablename = 'r';
attname | null_frac
---------+-----------
x | 0
id | 0
y | 0.0002
(3 rows)
learning_postgres=# explain analyze select * from r where y is null order by x asc limit 1;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.43..220.45 rows=1 width=12) (actual time=40.041..40.043 rows=1 loops=1)
-> Index Scan using r_x_btree_idx on r (cost=0.43..220017.03 rows=1000 width=12) (actual time=40.040..40.040 rows=1 loops=1)
Filter: (y IS NULL)
Rows Removed by Filter: 7962
Planning Time: 0.354 ms
Execution Time: 40.068 ms
(6 rows)
Why is this the case? Why doesn't it use an index scan?
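For this specific pattern (a few NULLs queried with IS NULL plus ORDER BY x LIMIT 1), a partial index is the usual tool (a sketch, not from the original post; the index name is made up):
-- only indexes the rows where y IS NULL, ordered by x
CREATE INDEX r_x_where_y_null_idx ON r (x) WHERE y IS NULL;
Such an index is tiny, already ordered by x, and contains only the NULL-y rows, so the LIMIT 1 can be answered without scanning either full index.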
Amir reza Riahi
(155 rep)
Mar 27, 2025, 09:32 AM
0
votes
1
answers
32
views
Why `MATCH (p:Person)-[:ACTED_IN]->(m) WHERE 'Neo' IN r.roles RETURN p.name` returns 3 rows for only 1 node?
In Neo4j, when I use the following query:
MATCH (p:Person)-[r:ACTED_IN]->(m) WHERE 'Neo' IN r.roles RETURN p
then it returns only one Person node.
But when I change the query to:
MATCH (p:Person)-[r:ACTED_IN]->(m) WHERE 'Neo' IN r.roles RETURN p.name
then it returns 3 rows.
This is strange to me, since I expected only one row to be returned.
[screenshot: Neo4j query results]
Kt Student
(115 rep)
Feb 7, 2025, 03:18 AM
• Last activity: Feb 7, 2025, 05:43 PM
0
votes
2
answers
102
views
How to correctly use a BRIN index in PostgreSQL?
My table is like this:
CREATE TABLE IF NOT EXISTS public.ticks_a_2507
(
tick_time timestamp(6) with time zone NOT NULL,
tick_nano smallint NOT NULL,
trade_day date NOT NULL,
-- other columns...
CONSTRAINT prk_ticks_a_2507 PRIMARY KEY (tick_time, tick_nano)
);
CREATE INDEX IF NOT EXISTS idx_ticks_a_2507_td
ON public.ticks_a_2507 USING brin(trade_day);
----------------------------------
Rows that are inserted on the same day always carry the same value in the "trade_day" column.
Hence I guessed that a BRIN index could be used for "idx_ticks_a_2507_td", so as to save disk space.
But when I issue a query on this column, it looks like the index is NOT used by PG 13.
explain select min(trade_day), max(trade_day) from ticks_a_2507;
> Finalize Aggregate (cost=18870.72..18870.73 rows=1 width=8)
>
> -> Gather (cost=18870.39..18870.70 rows=3 width=8)
>
> Workers Planned: 3
>
> -> Partial Aggregate (cost=17870.39..17870.40 rows=1 width=8)
>
> -> Parallel Seq Scan on ticks_a_2507 (cost=0.00..16672.93 rows=239493 width=4)
And the queries are much slower than ones with a B-tree index on the same column.
I have hundreds of tables with this structure, and every day each table gets thousands of rows inserted with the same "trade_day" value (the date of the insert).
If I use a B-tree, a large amount of disk space is wasted, even exceeding the data space.
Am I using BRIN in a wrong way? Or is this not a case where it should be used?
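One point worth noting (an assumption, not from the original post): BRIN can only narrow down ranges of blocks for a WHERE condition via a bitmap scan; it cannot serve MIN()/MAX() lookups the way a B-tree can, so an unfiltered aggregate over the whole table falls back to a seq scan. A query of this shape is where the BRIN index should kick in:
-- the date literal is an arbitrary example value
EXPLAIN SELECT count(*) FROM ticks_a_2507 WHERE trade_day = DATE '2025-01-17';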
Leon
(411 rep)
Jan 17, 2025, 09:53 AM
• Last activity: Jan 19, 2025, 08:02 AM
4
votes
1
answers
515
views
Postgres condition pushdown when using range conditions (>=, <= or BETWEEN)
I want to better understand the circumstances under which the Postgres planner pushes conditions down into the components of a join. I would be especially grateful for references to the source code parts responsible for this.
Let's say that you have a view whose body is a join between two tables on a date field, like this (a trivial made-up example with no real-life interpretation):
**Schema (PostgreSQL v13)**
create table measures (mdate date, measurement numeric);
create index measures_mdate on measures(mdate);
create view name_matches_surname as
select m1.*, m2.measurement as the_other_measurement
from measures m1 left join measures m2 on (m1.mdate = m2.mdate);
insert into measures values ('2021-05-11',1.0), ('2021-05-11',2.0);
---
If I select from this view with an equality condition on the date field, the condition would be pushed down into both sides of the join, as could be seen here:
**Query #1: (= 'now') is pushed down**
explain select * from name_matches_surname where mdate='now'::date;
| QUERY PLAN |
| --------------------------------------------------------------------------------------- |
| Nested Loop Left Join (cost=8.40..27.89 rows=36 width=68) |
| Join Filter: (m1.mdate = m2.mdate) |
| -> Bitmap Heap Scan on measures m1 (cost=4.20..13.67 rows=6 width=36) |
| Recheck Cond: (mdate = '2021-05-12'::date) |
| -> Bitmap Index Scan on measures_mdate (cost=0.00..4.20 rows=6 width=0) |
| Index Cond: (mdate = '2021-05-12'::date) |
| -> Materialize (cost=4.20..13.70 rows=6 width=36) |
| -> Bitmap Heap Scan on measures m2 (cost=4.20..13.67 rows=6 width=36) |
| Recheck Cond: (mdate = '2021-05-12'::date) |
| -> Bitmap Index Scan on measures_mdate (cost=0.00..4.20 rows=6 width=0) |
| Index Cond: (mdate = '2021-05-12'::date) |
As you can see, both sides of the join are index scans with condition (='now') being used to access the index. So far so good. Lets try range condition (BETWEEN) instead:
**Query #2: BETWEEN is pushed down into one side of the join only**
explain select * from name_matches_surname
where mdate between 'now'::date-5 and 'now'::date;
| QUERY PLAN |
| ------------------------------------------------------------------------------------------------- |
| Hash Right Join (cost=13.77..41.61 rows=38 width=68) |
| Hash Cond: (m2.mdate = m1.mdate) |
| -> Seq Scan on measures m2 (cost=0.00..22.70 rows=1270 width=36) |
| -> Hash (cost=13.70..13.70 rows=6 width=36) |
| -> Bitmap Heap Scan on measures m1 (cost=4.21..13.70 rows=6 width=36) |
| Recheck Cond: ((mdate >= '2021-05-07'::date) AND (mdate <= '2021-05-12'::date)) |
| -> Bitmap Index Scan on measures_mdate (cost=0.00..4.21 rows=6 width=0) |
| Index Cond: ((mdate >= '2021-05-07'::date) AND (mdate <= '2021-05-12'::date)) |
As you can see, BETWEEN was pushed down into the left side of the join only; for the right side, we have a Seq Scan. You can easily verify that neither the type of join (inner or left) nor the size of the table has any bearing on the outcome of the pushdown - the condition will not be pushed down into the right side of the join.
As far as I can see, this behaviour is consistent for all versions between Postgres 9.5 and 13.
What is the reason for this? Is it possible to rewrite the view in a different way so that BETWEEN would be pushed down "equally well", like (=)? Are there any references to the documentation or source code that explain the pushdown strategies in depth?
---
[View on DB Fiddle](https://www.db-fiddle.com/f/pT2PwUkhJWuX9skWiBWXoL/0)
ADEpt
(141 rep)
May 12, 2021, 08:09 AM
• Last activity: Nov 7, 2024, 03:39 AM
0
votes
2
answers
63
views
How to have a covering index used for the updates?
I have a db table with:
- id bigint NOT NULL AUTO_INCREMENT (primary key)
- customer_ref varchar(64) NOT NULL
- customer_counter int NOT NULL
- description varchar(255) NOT NULL
I also have the following indexes:
- PRIMARY KEY(id)
- UNIQUE KEY c_ref (customer_ref)
- KEY c_c (customer_counter)
- KEY c_desc (customer_ref, description, customer_counter)
I have the following update statements:
UPDATE CustomerReferences SET customer_counter=100, description='invalidate due to X123' WHERE customer_ref='1000001' AND description <> 'immutable';
UPDATE CustomerReferences SET customer_counter=100, description='invalidate due to X123' WHERE id=2 AND description <> 'immutable';
Problem: I am using EXPLAIN and I can't find a way to make the query filter the rows directly from the index. It always uses the index of the primary key or the unique key (c_ref). How can I index the table so that these updates filter/update directly via the index? Note that these particular updates should be no-ops: if I do a SELECT COUNT(*) with the same WHERE clause, I get 0 rows due to the condition description <> 'immutable'.
Jim
(123 rep)
Oct 20, 2024, 05:41 PM
• Last activity: Oct 21, 2024, 07:02 PM
-1
votes
1
answers
915
views
PostgreSQL is skipping indexes when using `order by` + `limit`
I have a PostgreSQL database with two big tables, connections and sessions:
select "c"."id" "id", "ch"."title" "channel", "ch"."name" "channel_name", "c"."nickname", "c"."duration", "c"."status", "c"."start_time", "c"."stop_time", "u"."nickname" "user", "u"."username" "username"
from "connections" "c"
left join "sessions" "sh" on "session_id" = "sh"."id"
left join "customers" "cs" on "sh"."customer_id" = "cs"."id"
left join "channels" "ch" on "channel_id" = "ch"."id"
left join "clients" "cl" on "client_id" = "cl"."id"
left join "users" "u" on "c"."user_id" = "u"."id"
where ("sh"."customer_id" = 13598)
order by "id" desc
limit 10 offset 0
When running this query without limit, execution takes about 800ms, but with limit 10 it takes 16000ms!
It seems that when the limit is added, Postgres skips indexes. How can I change that?
This is the EXPLAIN result of this query, with and without the limit:
**With limit:**
Limit (cost=30.93..30.94 rows=1 width=1202) (actual time=0.017..0.017 rows=0 loops=1)
-> Sort (cost=30.93..30.94 rows=1 width=1202) (actual time=0.015..0.015 rows=0 loops=1)
Sort Key: c.id DESC
Sort Method: quicksort Memory: 25kB
-> Nested Loop Left Join (cost=14.02..30.92 rows=1 width=1202) (actual time=0.009..0.009 rows=0 loops=1)
-> Nested Loop Left Join (cost=13.88..30.49 rows=1 width=850) (actual time=0.008..0.008 rows=0 loops=1)
-> Hash Join (cost=13.74..26.14 rows=1 width=306) (actual time=0.008..0.008 rows=0 loops=1)
Hash Cond: (c.session_id = sh.id)
-> Seq Scan on skyroom_connections c (cost=0.00..11.90 rows=190 width=310) (actual time=0.007..0.007 rows=0 loops=1)
-> Hash (cost=13.67..13.67 rows=6 width=12) (never executed)
-> Bitmap Heap Scan on skyroom_sessions sh (cost=4.20..13.67 rows=6 width=12) (never executed)
Recheck Cond: (customer_id = 13598)
-> Bitmap Index Scan on skyroom_sessions_customer_idx (cost=0.00..4.20 rows=6 width=0) (never executed)
Index Cond: (customer_id = 13598)
-> Index Scan using skyroom_channels_pkey on skyroom_channels ch (cost=0.14..4.16 rows=1 width=552) (never executed)
Index Cond: (id = sh.channel_id)
-> Index Scan using users_pkey on users u (cost=0.14..0.38 rows=1 width=360) (never executed)
Index Cond: (id = c.user_id)
Planning Time: 0.978 ms
Execution Time: 0.147 ms
**Without Limit:**
Sort (cost=30.93..30.94 rows=1 width=1202) (actual time=0.010..0.010 rows=0 loops=1)
Sort Key: c.id DESC
Sort Method: quicksort Memory: 25kB
-> Nested Loop Left Join (cost=14.02..30.92 rows=1 width=1202) (actual time=0.004..0.004 rows=0 loops=1)
-> Nested Loop Left Join (cost=13.88..30.49 rows=1 width=850) (actual time=0.004..0.004 rows=0 loops=1)
-> Hash Join (cost=13.74..26.14 rows=1 width=306) (actual time=0.004..0.004 rows=0 loops=1)
Hash Cond: (c.session_id = sh.id)
-> Seq Scan on skyroom_connections c (cost=0.00..11.90 rows=190 width=310) (actual time=0.003..0.003 rows=0 loops=1)
-> Hash (cost=13.67..13.67 rows=6 width=12) (never executed)
-> Bitmap Heap Scan on skyroom_sessions sh (cost=4.20..13.67 rows=6 width=12) (never executed)
Recheck Cond: (customer_id = 13598)
-> Bitmap Index Scan on skyroom_sessions_customer_idx (cost=0.00..4.20 rows=6 width=0) (never executed)
Index Cond: (customer_id = 13598)
-> Index Scan using skyroom_channels_pkey on skyroom_channels ch (cost=0.14..4.16 rows=1 width=552) (never executed)
Index Cond: (id = sh.channel_id)
-> Index Scan using users_pkey on users u (cost=0.14..0.38 rows=1 width=360) (never executed)
Index Cond: (id = c.user_id)
Planning Time: 0.425 ms
Execution Time: 0.097 ms
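A common workaround for this pattern (a sketch, not from the original post): a frequent cause of this symptom is the planner walking the id index backwards and filtering, hoping to find matching rows early; turning the sort key into an expression stops it from using that index for ordering, so the selective customer index is used first:
select "c"."id" "id", "ch"."title" "channel", "ch"."name" "channel_name", "c"."nickname", "c"."duration", "c"."status", "c"."start_time", "c"."stop_time", "u"."nickname" "user", "u"."username" "username"
from "connections" "c"
left join "sessions" "sh" on "session_id" = "sh"."id"
left join "customers" "cs" on "sh"."customer_id" = "cs"."id"
left join "channels" "ch" on "channel_id" = "ch"."id"
left join "clients" "cl" on "client_id" = "cl"."id"
left join "users" "u" on "c"."user_id" = "u"."id"
where ("sh"."customer_id" = 13598)
order by "c"."id" + 0 desc   -- "+ 0" prevents an index-ordered scan on id
limit 10 offset 0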
rostamiani
(191 rep)
Aug 3, 2020, 05:36 PM
• Last activity: Oct 12, 2024, 07:08 PM