Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
776
views
ERROR 1071 (42000): Specified key was too long; max key length is 1000 bytes
An online optimization tool offered to add indexes. One of the indexes was created successfully, but I can't create the second one.
Do you think the indexes are going to reduce the query run time?
Any ideas how I can get past this error?
The SQL query takes 10.1 secs and has over 380 executions in the slow log.
SELECT
l.ID,
post_title,
post_content,
post_name,
post_parent,
post_author,
post_status,
post_modified_gmt,
post_date,
post_date_gmt
FROM
(
SELECT
wp_posts.ID
FROM
wp_posts
WHERE
wp_posts.post_status IN ('publish')
AND wp_posts.post_type = 'post'
AND wp_posts.post_password = ''
AND wp_posts.post_date != '0000-00-00 00:00:00'
ORDER BY
wp_posts.post_modified ASC
LIMIT
100 OFFSET 214000
) o
JOIN wp_posts l ON l.ID = o.ID
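Separately from the indexes, OFFSET 214000 makes MySQL read and discard 214,000 rows on every execution. A keyset-pagination sketch, assuming the caller can remember the post_modified and ID of the last row it received (the literal values below are hypothetical placeholders):
SELECT ID, post_title, post_name, post_modified
FROM wp_posts
WHERE wp_posts.post_status = 'publish'
  AND wp_posts.post_type = 'post'
  AND wp_posts.post_password = ''
  AND (post_modified, ID) > ('2020-01-01 00:00:00', 12345)  -- last row seen (hypothetical)
ORDER BY post_modified ASC, ID ASC
LIMIT 100;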
Indexes that I need to create:
MariaDB [database]> ALTER TABLE wp_posts
    ADD INDEX wp_posts_idx_post_type_post_passw_post_statu
    (post_type, post_password, post_status);
> ERROR 1071 (42000): Specified key was too long; max key length is 1000
> bytes
MariaDB [database]> ALTER TABLE wp_posts
    ADD INDEX wp_posts_idx_post_modified (post_modified);
Query OK, 453289 rows affected (10.839 sec)
Records: 453289 Duplicates: 0 Warnings: 0
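A sketch of a way past the error itself: MyISAM keys are capped at 1000 bytes, and utf8mb4 can use 4 bytes per character, so post_password varchar(255) alone may need 1020 bytes. Indexing a prefix of it keeps the key under the cap (the 191-character prefix here is an assumption, borrowed from the table's existing post_name(191) key):
ALTER TABLE wp_posts
    ADD INDEX wp_posts_idx_post_type_post_passw_post_statu
    (post_type, post_password(191), post_status);
-- worst case: 4 * (20 + 191 + 20) = 924 bytes, under the 1000-byte MyISAM limit
Converting the table to InnoDB would also lift the per-key limit (up to 3072 bytes with large index prefixes) and is the engine WordPress uses by default nowadays.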
---
CREATE TABLE wp_posts (
  ID bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  post_author bigint(20) unsigned NOT NULL DEFAULT 0,
  post_date datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_date_gmt datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_content longtext COLLATE utf8mb4_unicode_ci NOT NULL,
  post_title text COLLATE utf8mb4_unicode_ci NOT NULL,
  post_excerpt text COLLATE utf8mb4_unicode_ci NOT NULL,
  post_status varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'publish',
  comment_status varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'open',
  ping_status varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'open',
  post_password varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  post_name varchar(200) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  to_ping text COLLATE utf8mb4_unicode_ci NOT NULL,
  pinged text COLLATE utf8mb4_unicode_ci NOT NULL,
  post_modified datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_modified_gmt datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_content_filtered longtext COLLATE utf8mb4_unicode_ci NOT NULL,
  post_parent bigint(20) unsigned NOT NULL DEFAULT 0,
  guid varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  menu_order int(11) NOT NULL DEFAULT 0,
  post_type varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'post',
  post_mime_type varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  comment_count bigint(20) NOT NULL DEFAULT 0,
  PRIMARY KEY (ID),
  KEY post_name (post_name(191)),
  KEY type_status_date (post_type, post_status, post_date, ID),
  KEY post_parent (post_parent),
  KEY post_author (post_author),
  KEY wp_posts_idx_post_modified (post_modified)
) ENGINE=MyISAM AUTO_INCREMENT=463265 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
execash
(1 rep)
May 8, 2020, 06:44 AM
• Last activity: Aug 6, 2025, 01:04 PM
0
votes
2
answers
140
views
Asking for suggestions regarding MySQL optimization
I'm running a dedicated server with around 20-25 sites, almost all of them WordPress installations, under a cPanel setup.
For a while now it's mostly MySQL eating most of the CPU and driving high load times:
mysql 0 61.69(cpu) 6.92(ram) /usr/sbin/mysqld
**Server config is**
Uptime 70 days
Operating System CentOS Linux 7 (Core) x64
File Handles 14560 of 6511967
Processes 342
CPU Model AMD Ryzen 5 3600 6-Core Processor
Ram 64GB
I am trying to improve this and came across MySQLTuner; here is what it has to say after running MySQL with the performance schema on for two days or more.
This isn't exactly my forte, so the my.cnf is just a blend of what I tried with an older MySQLTuner suggestion, but I've heard the tool's output still requires a human touch.
Would appreciate some help in optimizing the settings.
>> MySQLTuner 1.7.19 - Major Hayden
>> Bug reports, feature requests, and downloads at http://mysqltuner.com/
>> Run with '--help' for additional options and output filtering
[--] Skipped version check for MySQLTuner script
[OK] Currently running supported MySQL version 10.3.27-MariaDB-log
[OK] Operating on 64-bit architecture
-------- Log file Recommendations ------------------------------------------------------------------
[OK] Log file /var/lib/mysql/server.name.here.err exists
[--] Log file: /var/lib/mysql/server.name.here.err(4M)
[OK] Log file /var/lib/mysql/server.name.here.err is readable.
[OK] Log file /var/lib/mysql/server.name.here.err is not empty
[OK] Log file /var/lib/mysql/server.name.here.err is smaller than 32 Mb
[!!] /var/lib/mysql/server.name.here.err contains 31430 warning(s).
[!!] /var/lib/mysql/server.name.here.err contains 23132 error(s).
[--] 60 start(s) detected in /var/lib/mysql/server.name.here.err
[--] 1) 2020-12-07 7:35:16 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 2) 2020-12-01 14:35:35 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 3) 2020-11-30 16:10:14 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 4) 2020-11-30 16:07:53 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 5) 2020-11-01 1:57:12 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 6) 2020-10-10 19:28:45 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 7) 2020-10-10 19:28:32 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 8) 2020-09-30 3:36:14 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 9) 2020-09-28 17:58:16 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 10) 2020-09-25 18:38:33 0 [Note] /usr/sbin/mysqld: ready for connections.
[--] 51 shutdown(s) detected in /var/lib/mysql/server.name.here.err
[--] 1) 2020-12-07 7:35:07 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 2) 2020-12-01 14:35:27 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 3) 2020-11-30 16:09:53 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 4) 2020-11-30 16:07:33 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 5) 2020-11-01 1:57:09 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 6) 2020-10-10 19:28:39 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 7) 2020-10-10 19:28:26 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 8) 2020-09-30 3:34:34 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 9) 2020-09-28 17:56:38 0 [Note] /usr/sbin/mysqld: Shutdown complete
[--] 10) 2020-09-25 18:36:55 0 [Note] /usr/sbin/mysqld: Shutdown complete
-------- Storage Engine Statistics -----------------------------------------------------------------
[--] Status: +Aria +CSV +InnoDB +MEMORY +MRG_MyISAM +MyISAM +PERFORMANCE_SCHEMA +SEQUENCE
[--] Data in MyISAM tables: 2.1G (Tables: 1387)
[--] Data in InnoDB tables: 3.2G (Tables: 2207)
[--] Data in MEMORY tables: 586.4K (Tables: 3)
[OK] Total fragmented tables: 0
-------- Analysis Performance Metrics --------------------------------------------------------------
[--] innodb_stats_on_metadata: OFF
[OK] No stat updates during querying INFORMATION_SCHEMA.
-------- Security Recommendations ------------------------------------------------------------------
[OK] There are no anonymous accounts for any database users
[OK] All database users have passwords assigned
[!!] There is no basic password file list!
-------- CVE Security Recommendations --------------------------------------------------------------
[--] Skipped due to --cvefile option undefined
-------- Performance Metrics -----------------------------------------------------------------------
[--] Up for: 2d 5h 57m 24s (167M q [862.613 qps], 2M conn, TX: 21024G, RX: 379G)
[--] Reads / Writes: 98% / 2%
[--] Binary logging is disabled
[--] Physical Memory : 62.8G
[--] Max MySQL memory : 43.5G
[--] Other process memory: 0B
[--] Total buffers: 5.0G global + 260.7M per thread (151 max threads)
[--] P_S Max memory usage: 104M
[--] Galera GCache Max memory usage: 0B
[OK] Maximum reached memory usage: 26.5G (42.17% of installed RAM)
[OK] Maximum possible memory usage: 43.5G (69.35% of installed RAM)
[OK] Overall possible memory usage with other process is compatible with memory available
[OK] Slow queries: 0% (324K/167M)
[OK] Highest usage of available connections: 55% (84/151)
[OK] Aborted connections: 0.00% (55/2878495)
[!!] name resolution is active : a reverse name resolution is made for each new connection and can reduce performance
[!!] Query cache may be disabled by default due to mutex contention.
[OK] Query cache efficiency: 40.4% (102M cached / 254M selects)
[!!] Query cache prunes per day: 3479297
[OK] Sorts requiring temporary tables: 0% (11K temp sorts / 6M sorts)
[!!] Joins performed without indexes: 12813
[!!] Temporary tables created on disk: 66% (2M on disk / 3M total)
[OK] Thread cache hit rate: 98% (40K created / 2M connections)
[OK] Table cache hit rate: 95% (4K open / 4K opened)
[OK] table_definition_cache(2097152) is upper than number of tables(3862)
[OK] Open file limit used: 7% (2K/40K)
[OK] Table locks acquired immediately: 99% (13M immediate / 13M locks)
-------- Performance schema ------------------------------------------------------------------------
[--] Memory used by P_S: 104.0M
[--] Sys schema is installed.
-------- ThreadPool Metrics ------------------------------------------------------------------------
[--] ThreadPool stat is enabled.
[--] Thread Pool Size: 12 thread(s).
[--] Using default value is good enough for your version (10.3.27-MariaDB-log)
-------- MyISAM Metrics ----------------------------------------------------------------------------
[!!] Key buffer used: 24.9% (182M used / 734M cache)
[OK] Key buffer size / total MyISAM indexes: 700.0M/460.2M
[OK] Read Key buffer hit rate: 100.0% (1B cached / 41K reads)
[!!] Write Key buffer hit rate: 69.5% (804K cached / 558K writes)
-------- InnoDB Metrics ----------------------------------------------------------------------------
[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 4.0G/3.2G
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (0.78125 %): 16.0M * 2/4.0G should be equal to 25%
[!!] InnoDB buffer pool instances: 8
[--] Number of InnoDB Buffer Pool Chunk : 32 for 8 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 100.00% (102924116296 hits/ 102924220151 total)
[!!] InnoDB Write Log efficiency: 73.75% (4679039 hits/ 6344450 total)
[OK] InnoDB log waits: 0.00% (0 waits / 1665411 writes)
-------- AriaDB Metrics ----------------------------------------------------------------------------
[--] AriaDB is enabled.
[OK] Aria pagecache size / total Aria indexes: 128.0M/1B
[OK] Aria pagecache hit rate: 99.1% (277M cached / 2M reads)
-------- TokuDB Metrics ----------------------------------------------------------------------------
[--] TokuDB is disabled.
-------- XtraDB Metrics ----------------------------------------------------------------------------
[--] XtraDB is disabled.
-------- Galera Metrics ----------------------------------------------------------------------------
[--] Galera is disabled.
-------- Replication Metrics -----------------------------------------------------------------------
[--] Galera Synchronous replication: NO
[--] No replication slave(s) for this server.
[--] Binlog format: MIXED
[--] XA support enabled: ON
[--] Semi synchronous replication Master: OFF
[--] Semi synchronous replication Slave: OFF
[--] This is a standalone server
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
Control warning line(s) into /var/lib/mysql/server.name.here.err file
Control error line(s) into /var/lib/mysql/server.name.here.err file
Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
Increasing the query_cache size over 128M may reduce performance
We will suggest raising the 'join_buffer_size' until JOINs not using indexes are found.
See https://dev.mysql.com/doc/internals/en/join-buffer-size.html
(specially the conclusions at the bottom of the page).
When making adjustments, make tmp_table_size/max_heap_table_size equal
Reduce your SELECT DISTINCT queries which have no LIMIT clause
Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: https://bit.ly/2TcGgtU
Variables to adjust:
query_cache_size (=0)
query_cache_type (=0)
query_cache_size (> 128M) [see warning above]
join_buffer_size (> 2.0M, or always use indexes with JOINs)
tmp_table_size (> 32M)
max_heap_table_size (> 32M)
innodb_log_file_size should be (=512M) if possible, so InnoDB total log files size equals to 25% of buffer pool size.
innodb_buffer_pool_instances(=4)
**And here is what my.cnf currently has**
**Note:** This is now edited based on suggestions by Wilson and Rick. I will save and restart MySQL and get back to you with updates.
[client-server]
#
# include all files from the config directory
#
!includedir /etc/my.cnf.d
[mysqld]
# Logging and performance measurement here
log-error=/var/lib/mysql/ryzen.dogestream.com.err
# for enhanced slow query log
log_slow_verbosity=query_plan,explain
performance-schema=1
max_allowed_packet=268435456
max_heap_table_size=32M
tmp_table_size=32M
open_files_limit=40000
# Buffer sizes
join_buffer_size=2M
key_buffer_size=700M
sort_buffer_size=2M
# InnoDB stuff goes here
innodb_file_per_table=1
innodb_buffer_pool_size=15G
innodb_log_file_size=16M
local-infile=0
# from 1024 to conserve 90% of CPU cycles used for function
innodb_lru_scan_depth=100
# should always match table_open_cache
innodb_open_files=9000
# Query stuff goes here
# from 128M to conserve RAM for more useful purposes
query_cache_size=0
# from 1 for OFF to avoid 3 million+ query cache prunes per day
query_cache_type=0
# from 2M to conserve RAM
query_cache_limit=0
slow_query_log=1
slow_query_log_file=mysql-slow.log
long_query_time=0.1
# It says cache it is here
table_definition_cache=-1
# from 3000 to reduce tables opened_tables count
table_open_cache=9000
# from 16 to accomodate your 84 concurrent users
thread_cache_size=256
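For reference, here is a sketch of the tuner's remaining recommendations in my.cnf form. The 64M values are assumptions (the tuner only says "> 32M"), and skip-name-resolve is only safe once every account is defined by IP or subnet, as the recommendation above notes:
# reverse DNS is done for each new connection; disable it once accounts use IPs
skip-name-resolve=1
# tuner: raise both together and keep them equal
tmp_table_size=64M
max_heap_table_size=64M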
Sawada Tsunayoshi
(1 rep)
Dec 9, 2020, 12:52 PM
• Last activity: Aug 5, 2025, 03:06 PM
21
votes
4
answers
5278
views
SQL Server cardinality hint
Is there a way to 'inject' a cardinality estimate into the SQL Server optimizer (any version)?
I.e., something similar to Oracle's CARDINALITY hint.
My motivation is driven by the article How Good Are Query Optimizers, Really? [1], where they test the influence of the cardinality estimator on the selection of a bad plan. It would therefore be sufficient if I could force SQL Server to 'estimate' the cardinalities precisely for complex queries.
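As far as I know there is no direct SQL Server equivalent of Oracle's CARDINALITY hint, but one known workaround is to fence the input in TOP (@n) and tell the optimizer what value to assume for @n via OPTIMIZE FOR. A sketch with placeholder tables:
DECLARE @n BIGINT = 9223372036854775807; -- effectively "no limit" at run time

SELECT *
FROM (
    SELECT TOP (@n) *            -- TOP caps the estimate for the derived table
    FROM dbo.BigTable            -- placeholder table
) AS t
JOIN dbo.OtherTable AS o         -- placeholder table
    ON o.id = t.other_id
OPTION (OPTIMIZE FOR (@n = 5000)); -- optimizer plans as if t returns ~5000 rows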
---
[1] Leis, Viktor, et al. "How good are query optimizers, really?"
Proceedings of the VLDB Endowment 9.3 (2015): 204-215.
Radim Bača
(233 rep)
Mar 31, 2017, 07:17 AM
• Last activity: Aug 5, 2025, 12:16 PM
0
votes
2
answers
155
views
mysql query performance... got to be a better way
I have a query right now that, for lack of a better word, sucks. I'm looking for some help with other ways to approach writing this query. Here are the details.
I've got a temp table built up with the following columns:
- AssignmentSubType
- AssignmentValue
- OwnerUsergroupID
- AssociatedObjectID
The basic gist of what I want to do is to find AssociatedObjectIDs that have the same AssignmentSubType, AssignmentValue, and OwnerUsergroupID. This is because there is a ton of SQL down the road that really doesn't need to be run if I've got "duplicates" (same AssignmentSubType, AssignmentValue, and OwnerUsergroupID). Once I've got my list of duplicates, I run the next set of SQL on one of them; when I've got the info back, I join the rest of the duplicates back in for the final result set.
Example table data:
AssignmentSubType | AssignmentValue | OwnerUsergroupID | AssociatedObjectID
retailer | Dicks | 1 | 5
retailer | Dicks | 1 | 7
retailer | Dicks | 1 | 9
In this example I just want to do the calculations on AssociatedObjectID = 5, since 7 and 9 will be exactly the same, and then join 7 and 9 back in at the end.
I've got the following query, which works but is very inefficient:
SELECT firstObject, f2.AssociatedObjectID
FROM (
SELECT firstObject, dupAss AS dups
FROM (
SELECT min(AssociatedObjectID) AS firstObject, group_concat(concat('-',AssociatedObjectID,'-')) AS dupAss
FROM (
SELECT * FROM tableFromAbove
) innerRes GROUP BY AssignmentSubType, AssignmentValue, OwnerUsergroupID
) outR
) outR2
LEFT JOIN tableFromAbove f2
ON outR2.dups LIKE concat('%-',f2.AssociatedObjectID,'-%') ORDER BY firstObject
this query will give me back a result set like the following
firstObject | AssociatedObjectID
5 | 7
5 | 9
Like I said earlier in this post, I then use this result set to join back to the results from the rest of the queries for AssociatedObjectID = 5. Any thoughts on how I could restructure this so that it is more efficient?
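One possible restructuring (a sketch against the same temp table): compute the per-group minimum once and join back on the three grouping columns directly, so no group_concat/LIKE string matching is needed:
SELECT d.firstObject, t.AssociatedObjectID
FROM tableFromAbove t
JOIN (
    SELECT AssignmentSubType, AssignmentValue, OwnerUsergroupID,
           MIN(AssociatedObjectID) AS firstObject
    FROM tableFromAbove
    GROUP BY AssignmentSubType, AssignmentValue, OwnerUsergroupID
) d USING (AssignmentSubType, AssignmentValue, OwnerUsergroupID)
WHERE t.AssociatedObjectID <> d.firstObject   -- keep only the non-first duplicates
ORDER BY d.firstObject;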
Josh
(159 rep)
Apr 19, 2018, 02:57 PM
• Last activity: Aug 4, 2025, 08:04 AM
0
votes
2
answers
146
views
Moving tables to different database within same sql server
There is a SQL Server with around 100 databases on it. I have to query a few tables from one of the databases. When I query them, it's very slow, and I think CPU utilization is very high at that time. I have also noticed queries against tables used by other services, which affect the overall performance of querying the database.
I am thinking of moving these tables to a different database within the same SQL Server. Do you think this will solve the issue, or will it not improve query performance on my tables? (I only care about my tables.) Will it have no impact because the new database would still be on the same SQL Server? Please answer these questions in detail.
Vivek Nuna
(101 rep)
Jun 21, 2023, 07:00 AM
• Last activity: Aug 3, 2025, 07:06 AM
0
votes
1
answers
1561
views
Query with left join not using index
The below query is taking about 1 second to run
SELECT taxi_driver_mapping.*,
    concat("dpk00", taxi_driver_mapping.mapping_payment_plan_id) AS package_text,
    people.*, taxi.*,
    company.company_name, country.country_name, state.state_name, city.city_name,
    peoplecreated.name AS created_name,
    peopleupdated.name AS updated_name,
    device_owner
FROM taxi_driver_mapping
LEFT JOIN company ON taxi_driver_mapping.mapping_companyid = company.cid
LEFT JOIN taxi ON taxi_driver_mapping.mapping_taxiid = taxi.taxi_id
LEFT JOIN country ON taxi_driver_mapping.mapping_countryid = country.country_id
LEFT JOIN state ON taxi_driver_mapping.mapping_stateid = state.state_id
LEFT JOIN city ON taxi_driver_mapping.mapping_cityid = city.city_id
LEFT JOIN people ON taxi_driver_mapping.mapping_driverid = people.id
LEFT JOIN people AS peoplecreated ON taxi_driver_mapping.mapping_createdby = peoplecreated.id
LEFT JOIN people AS peopleupdated ON taxi_driver_mapping.mapping_updatedby = peopleupdated.id
LEFT JOIN driver_information ON taxi_driver_mapping.mapping_driverid = driver_information.driver_id
WHERE (people.name LIKE '%abm1173%'
    OR people.lastname LIKE '%abm1173%'
    OR people.email LIKE '%abm1173%'
    OR company.company_name LIKE '%abm1173%'
    OR people.phone LIKE '%abm1173%'
    OR people.id LIKE '%abm1173%'
    OR people.username LIKE '%abm1173%'
    OR taxi.taxi_no LIKE '%abm1173%')
ORDER BY mapping_id DESC
LIMIT 10 OFFSET 0
Below is the explain plan for it:
+----+-------------+---------------------+--------+------------------+---------+---------+----------------------------------------------+-------+----------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+---------------------+--------+------------------+---------+---------+----------------------------------------------+-------+----------------------------------------------------+
| 1 | SIMPLE | taxi_driver_mapping | ALL | NULL | NULL | NULL | NULL | 78718 | Using temporary; Using filesort |
| 1 | SIMPLE | company | eq_ref | PRIMARY | PRIMARY | 4 | Eztaxi.taxi_driver_mapping.mapping_companyid | 1 | NULL |
| 1 | SIMPLE | taxi | eq_ref | PRIMARY | PRIMARY | 4 | Eztaxi.taxi_driver_mapping.mapping_taxiid | 1 | NULL |
| 1 | SIMPLE | country | eq_ref | PRIMARY | PRIMARY | 4 | Eztaxi.taxi_driver_mapping.mapping_countryid | 1 | NULL |
| 1 | SIMPLE | state | ALL | PRIMARY,state_id | NULL | NULL | NULL | 3 | Using where; Using join buffer (Block Nested Loop) |
| 1 | SIMPLE | city | ALL | PRIMARY | NULL | NULL | NULL | 2 | Using where; Using join buffer (Block Nested Loop) |
| 1 | SIMPLE | people | eq_ref | PRIMARY | PRIMARY | 4 | Eztaxi.taxi_driver_mapping.mapping_driverid | 1 | Using where |
| 1 | SIMPLE | peoplecreated | eq_ref | PRIMARY | PRIMARY | 4 | Eztaxi.taxi_driver_mapping.mapping_createdby | 1 | NULL |
| 1 | SIMPLE | peopleupdated | eq_ref | PRIMARY | PRIMARY | 4 | Eztaxi.taxi_driver_mapping.mapping_updatedby | 1 | NULL |
| 1 | SIMPLE | driver_information | eq_ref | PRIMARY | PRIMARY | 4 | Eztaxi.taxi_driver_mapping.mapping_driverid | 1 | NULL |
+----+-------------+---------------------+--------+------------------+---------+---------+----------------------------------------------+-------+----------------------------------------------------+
Below is the table definition for taxi_driver_mapping table:
CREATE TABLE taxi_driver_mapping (
  mapping_id int(100) NOT NULL AUTO_INCREMENT,
  mapping_driverid int(100) NOT NULL,
  mapping_taxiid int(100) NOT NULL,
  mapping_taxi_model_id int(11) NOT NULL,
  mapping_companyid int(100) NOT NULL,
  mapping_countryid int(100) NOT NULL,
  mapping_stateid int(100) NOT NULL,
  mapping_cityid int(100) NOT NULL,
  mapping_startdate timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  mapping_enddate timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
  mapping_createdby int(100) NOT NULL,
  mapping_createdate timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  mapping_status varchar(3) NOT NULL DEFAULT 'A',
  mapping_updatedby int(11) DEFAULT NULL,
  mapping_updatedate timestamp NULL DEFAULT NULL,
  mapping_payment_plan_id int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (mapping_id),
  KEY ndx_mapping_driverid (mapping_driverid),
  KEY ndx_mapping_taxiid (mapping_taxiid),
  KEY ndx_driver_status_taxi_cid (mapping_driverid, mapping_status, mapping_taxiid, mapping_companyid),
  KEY idx_test (mapping_status, mapping_driverid, mapping_taxiid)
) ENGINE=InnoDB AUTO_INCREMENT=123027 DEFAULT CHARSET=latin1
Table definition for company table:
CREATE TABLE company (
  cid int(100) NOT NULL AUTO_INCREMENT,
  company_name varchar(250) NOT NULL,
  company_address varchar(250) NOT NULL,
  current_location text NOT NULL,
  latitude varchar(250) NOT NULL,
  longitude varchar(250) NOT NULL,
  bankname varchar(100) NOT NULL,
  bankaccount_no varchar(250) NOT NULL,
  company_country int(11) NOT NULL,
  company_state int(11) NOT NULL,
  company_city int(11) NOT NULL,
  header_bgcolor varchar(25) NOT NULL,
  menu_color varchar(25) NOT NULL,
  mouseover_color varchar(25) NOT NULL,
  time_zone varchar(250) NOT NULL,
  userid int(11) NOT NULL,
  company_status varchar(3) NOT NULL DEFAULT 'A',
  company_type enum('0','1','2','4') NOT NULL DEFAULT '0' COMMENT '0-Company , 1-Kiosk , 2-Corporate 4-Restaurant',
  drivers_count int(11) NOT NULL DEFAULT '500',
  created_by int(11) DEFAULT NULL,
  created_at datetime DEFAULT NULL,
  updated_by int(11) DEFAULT NULL,
  updated_at datetime DEFAULT NULL,
  PRIMARY KEY (cid)
) ENGINE=InnoDB AUTO_INCREMENT=4122 DEFAULT CHARSET=latin1
I thought the first LEFT JOIN caused the full table scan because the column mapping_companyid was not indexed, but even after adding an index on it, the query still takes 1 second. Can someone let me know what the issue could be here? Let me know if any more info is needed. Thanks
user5594148
(43 rep)
Jan 29, 2020, 06:36 PM
• Last activity: Aug 2, 2025, 03:02 PM
0
votes
1
answers
150
views
DB2 Workfile scan
We just migrated from LUW to z/OS, and we're finding that on z/OS the access-path explains show a new node named WFSCAN (workfile scan), which appears whenever there is a join, GROUP BY, or any kind of sort.
The question is: is there any performance issue with WFSCAN? Should I be worried about the size of the pool it uses, or any other costs, in a production environment?
What should I do for better performance of workfile scans?
Hana Bzh
(101 rep)
Oct 17, 2018, 09:35 AM
• Last activity: Aug 2, 2025, 07:09 AM
2
votes
2
answers
1916
views
Nested loop on primary key
CREATE TABLE request (
  id int(11) unsigned NOT NULL AUTO_INCREMENT,
  created_by int(11) unsigned DEFAULT NULL,
  content text,
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

CREATE TABLE user (
  id int(11) unsigned NOT NULL AUTO_INCREMENT,
  name varchar(255) DEFAULT '',
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
explain select * from request r
left join user u on (r.created_by = u.id)
+--+-----------+-----+----------+----+-------------+----+-------+----+----+--------+--------------------------------------------------+
|id|select_type|table|partitions|type|possible_keys|key |key_len|ref |rows|filtered|Extra |
+--+-----------+-----+----------+----+-------------+----+-------+----+----+--------+--------------------------------------------------+
|1 |SIMPLE |r |NULL |ALL |NULL |NULL|NULL |NULL|2 |100 |NULL |
|1 |SIMPLE |u |NULL |ALL |PRIMARY |NULL|NULL |NULL|1 |100 |Using where; Using join buffer (Block Nested Loop)|
+--+-----------+-----+----------+----+-------------+----+-------+----+----+--------+--------------------------------------------------+
Here is the content of the tables:
+--+----------+-------+
|id|created_by|content|
+--+----------+-------+
|1 |NULL |Test |
|2 |1 |Bar baz|
+--+----------+-------+
+--+------+
|id|name |
+--+------+
|1 |Thomas|
+--+------+
Why is it a nested loop?
If I add a condition to the join like this: (r.created_by > 0 AND r.created_by = u.id), there is no more nested loop.
My version is MySQL 5.7.34.
Thomas Corbisier
(21 rep)
Oct 11, 2021, 12:58 PM
• Last activity: Jul 28, 2025, 04:05 AM
1
votes
1
answers
735
views
Simple query with a single join very slow
I have this very slow, simple query that joins a large table (~180M rows) with a smaller table (~60k rows) with a foreign key, filtering an indexed column on the smaller table, ordering by the primary key in the larger table, and then taking the 25 latest rows.
The EXPLAIN shows Using index; Using temporary; Using filesort on the smaller table. Why?
Engine: MySQL 5.7.
Query:
SELECT
order.id,
order.company_id,
order.total
FROM
order
INNER JOIN
company ON company.id = order.company_id
WHERE
company.company_headquarter_id = 23133
ORDER BY order.id DESC
LIMIT 25;
+----+-------------+------------+------------+------+---------------------------------------+----------------------------+---------+-----------------------+------+----------+----------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+------------+------+---------------------------------------+----------------------------+---------+-----------------------+------+----------+----------------------------------------------+
| 1 | SIMPLE | company | NULL | ref | PRIMARY,company_headquarter_id_idx | company_headquarter_id_idx | 8 | const | 6 | 100.00 | Using index; Using temporary; Using filesort |
| 1 | SIMPLE | order | NULL | ref | company_id_idx | company_id_idx | 8 | company.id | 381 | 100.00 | NULL |
+----+-------------+------------+------------+------+---------------------------------------+----------------------------+---------+-----------------------+------+----------+----------------------------------------------+
CREATE TABLE `order` (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  company_id bigint(20) NOT NULL,
  total double(18,2) NOT NULL,
  PRIMARY KEY (id),
  KEY company_id_idx (company_id),
  CONSTRAINT company_id_fk FOREIGN KEY (company_id) REFERENCES company (id)
) ENGINE=InnoDB AUTO_INCREMENT=186518644 DEFAULT CHARSET=latin1

CREATE TABLE company (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  company_headquarter_id bigint(20) NOT NULL,
  name varchar(100) NOT NULL,
  PRIMARY KEY (id),
  KEY company_headquarter_id_idx (company_headquarter_id),
  CONSTRAINT company_headquarter_id_fk FOREIGN KEY (company_headquarter_id) REFERENCES company_headquarter (id)
) ENGINE=InnoDB AUTO_INCREMENT=60825 DEFAULT CHARSET=latin1

CREATE TABLE company_headquarter (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  name varchar(100) NOT NULL,
  phone varchar(10) DEFAULT NULL,
  address_id bigint(20) NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY name (name),
  KEY address_id_idx (address_id),
  CONSTRAINT address_id_fk FOREIGN KEY (address_id) REFERENCES address (id)
) ENGINE=InnoDB AUTO_INCREMENT=43862 DEFAULT CHARSET=latin1

CREATE TABLE address (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  street_address varchar(100) DEFAULT NULL,
  zip varchar(7) DEFAULT NULL,
  state varchar(2) DEFAULT NULL,
  city varchar(50) DEFAULT NULL,
  country varchar(10) DEFAULT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB AUTO_INCREMENT=147360955 DEFAULT CHARSET=latin1
The query becomes faster when I:
* Remove the ORDER BY clause.
* Filter company.company_headquarter_id with a company_headquarter_id that has a smaller number of orders (company_headquarter_id = 23133 has ~3M rows in the order table).
* Split it into two separate queries:
First:
SELECT
company.id
FROM
company
WHERE
company.company_headquarter_id = 23133;
Second:
SELECT
order.id,
order.company_id,
order.total
FROM
order
WHERE
order.company_id IN (20122, 50729, 50730, 50731, 50732, 50733) /* From first query */
ORDER BY order.id DESC
LIMIT 25;
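For what it's worth, the two steps can also be combined into a single statement; a sketch (whether MySQL 5.7 keeps the fast plan should be verified with EXPLAIN):
SELECT
    o.id,
    o.company_id,
    o.total
FROM
    `order` o
WHERE
    o.company_id IN (SELECT c.id
                     FROM company c
                     WHERE c.company_headquarter_id = 23133)
ORDER BY o.id DESC
LIMIT 25;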
Any ideas?
Thank you.
EDIT:
When I do:
SELECT STRAIGHT_JOIN
order.id,
order.company_id,
order.total
FROM
order
INNER JOIN
company ON company.id = order.company_id
WHERE
company.company_headquarter_id = 23133
ORDER BY order.id DESC
LIMIT 25;
The query is much faster and EXPLAIN shows a temporary table is not created.
flyingdutchman
(11 rep)
Apr 23, 2022, 05:42 PM
• Last activity: Jul 23, 2025, 04:07 PM
0
votes
2
answers
144
views
Fastest query to process data in small batches without repetition
I have java app that is using MySQL in the backend. I have the following table:
A = int, B = varchar, C = timestamp
A | B | C
1 | 100 | 2022-03-01 12:00:00
2 | 200 | 2022-03-01 12:00:01
3 | 100 | 2022-03-01 12:00:01
4 | 200 | 2022-03-01 12:00:02
5 | 600 | 2022-03-01 12:00:03
1 | 100 | 2022-03-01 12:00:06
5 | 700 | 2022-03-01 12:00:07
2 | 200 | 2022-03-01 12:00:08
9 | 100 | 2022-03-01 12:00:08
Every X seconds a query should run and process 5 records where column C > LAST_PROCESSED_TIMESTAMP; this LAST_PROCESSED_TIMESTAMP is updated after each run.
What I need is to select these 5 rows, but to exclude a row when its (A, B) pair is going to repeat in some fetch that happens in the future.
Example, for the table above. First run - select 5:
1 | 100 | 2022-03-01 12:00:00
2 | 200 | 2022-03-01 12:00:01
...
What I have tried:
SELECT a.*
FROM (
    SELECT A, B, C
    FROM TABLE
    WHERE C >= '2022-03-01 12:00:00'
    LIMIT 5
) a
LEFT JOIN (
    SELECT A, B
    FROM TABLE
    WHERE C >= '2022-03-01 12:00:00'
    LIMIT 5, 18446744073709551615
) b ON ( a.A = b.A
    AND a.B = b.B
)
WHERE b.A IS NULL;
and also (this one is probably NOT OK, since it will select MAX of C even if not in the first 5, so for my example it would include 2 | 200 | 2022-03-01 12:00:08 inside of the first run - not what I need):
SELECT A, B, MAX(C)
FROM TABLE
WHERE C >= '2022-03-01 12:00:00'
GROUP BY A, B ASC
LIMIT 5;
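A sketch of the same anti-join written with NOT EXISTS, which drops the LIMIT 5, 18446744073709551615 trick and states the intent directly (drop a batch row if the same (A, B) appears again later). `TABLE` stands in for the real table name, as in the question:
SELECT t.A, t.B, t.C
FROM (
    SELECT A, B, C
    FROM `TABLE`
    WHERE C >= '2022-03-01 12:00:00'
    ORDER BY C          -- make "the first 5" deterministic
    LIMIT 5
) AS t
WHERE NOT EXISTS (
    SELECT 1
    FROM `TABLE` AS x
    WHERE x.A = t.A
      AND x.B = t.B
      AND x.C > t.C     -- a later row repeats this (A, B)
);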
Bojan Vukasovic
(101 rep)
Mar 7, 2022, 05:48 PM
• Last activity: Jul 22, 2025, 10:01 PM
3
votes
2
answers
2331
views
Complex view becomes slow when adding ORDER BY in outer query with small LIMIT
I have a pretty large query in a view (let's call it a_sql) that is really fast unless I use ORDER BY in an outer SELECT with a small LIMIT:
SELECT
customs.id AS custom_id, customs.custom_name AS custom_name, customs.slug AS slug, customs.use_case AS custom_use_case,
SUM(CASE WHEN designers.id = orders.user_id AND orders.bulk = 't' THEN order_rows.quantity ELSE 0 END) AS sale_bulk,
SUM(CASE WHEN designers.id = orders.user_id AND orders.bulk = 'f' THEN order_rows.quantity ELSE 0 END) AS sale_not_bulk,
SUM(CASE WHEN designers.id = orders.user_id THEN order_rows.quantity ELSE 0 END) AS sale_total,
SUM(CASE WHEN designers.id <> orders.user_id AND orders.bulk = 't' THEN order_rows.quantity ELSE 0 END) AS buy_bulk,
SUM(CASE WHEN designers.id <> orders.user_id AND orders.bulk = 'f' THEN order_rows.quantity ELSE 0 END) AS buy_not_bulk,
SUM(CASE WHEN designers.id <> orders.user_id THEN order_rows.quantity ELSE 0 END) AS buy_total,
SUM(CASE orders.bulk WHEN 't' THEN order_rows.quantity ELSE 0 END) AS total_bulk,
SUM(CASE orders.bulk WHEN 'f' THEN order_rows.quantity ELSE 0 END) AS total_not_bulk,
COALESCE(SUM(order_rows.quantity), 0 ) AS total,
MIN(shoes.id) AS shoe_id,
MIN(shoe_models.id) AS shoe_model_id, MIN(shoe_models.name) AS shoe_model_name, MIN(shoe_models.title) AS shoe_model_title,
MIN(model_categories.id) AS model_category_id, MIN(model_categories.name) AS model_category_name,
MIN(business_orders.id) AS business_order_id, MIN(business_orders.state) AS business_order_state, MIN(business_orders.published_at) AS business_order_published_at,
MIN(designers.id) AS designer_id, MIN(designers.email) AS designer_email, MIN(designer_details.first_name) AS designer_first_name, MIN(designer_details.last_name) AS designer_last_name
FROM business_orders /* 10^6 rows */
LEFT JOIN users designers
ON designers.id = business_orders.user_id
/* 10^6 rows - business_orders has 0 or 1 users, users has n business_orders */
LEFT JOIN user_details designer_details
ON designers.id = designer_details.user_id
/* 10^6 rows - users has 0 or 1 user_details, user_details has 1 users */
INNER JOIN customs
ON business_orders.id = customs.business_order_id
/* 10^6 rows - business_orders has 1 customs, customs has 1 business_order */
LEFT JOIN shoes
ON shoes.product_id = customs.id
AND shoes.product_type = 'Custom'
/* 10^6 rows - customs has 1 shoes, shoes has 1 customs */
LEFT JOIN shoe_models
ON shoe_models.id = shoes.shoe_model_id
/* 10^2 rows - shoes has 1 shoe_models, shoe_models has n shoes */
LEFT JOIN model_categories
ON shoe_models.model_category_id = model_categories.id
/* 10^1 rows - shoe_models has 1 model_categories, model_categories has n models */
INNER JOIN sizes
ON shoes.id = sizes.shoe_id
/* 10^6 rows - sizes has 1 shoes, shoes has n sizes */
LEFT JOIN order_rows
ON order_rows.article_id = sizes.id
AND order_rows.article_type::text = 'Size'::text
/* 10^5 rows - sizes has n order_rows, order_rows has 0 or 1 size */
LEFT JOIN orders
ON orders.id = order_rows.order_id
/* 10^4 rows - order_rows has 1 orders, orders has n order_rows */
WHERE orders.state IN ('funded', 'confirmed', 'paid', 'delivered'
,'production', 'produced', 'ready_to_ship'
, 'shipped')
OR orders.id IS NULL
GROUP BY business_orders.id
Returns around 52.000 rows.
A query of the following type is executed in 12.728 ms:
SELECT * FROM A_SQL LIMIT 10
The related EXPLAIN output:
Limit (cost=3.51..145.53 rows=10 width=324) (actual time=1.545..12.468 rows=10 loops=1)
Buffers: shared hit=1652
-> Subquery Scan on x (cost=3.51..737218.84 rows=51911 width=324) (actual time=1.543..12.462 rows=10 loops=1)
Buffers: shared hit=1652
-> GroupAggregate (cost=3.51..736699.73 rows=51911 width=610) (actual time=1.542..12.455 rows=10 loops=1)
Group Key: business_orders.id
Buffers: shared hit=1652
-> Nested Loop Left Join (cost=3.51..716552.04 rows=270739 width=610) (actual time=0.090..4.073 rows=608 loops=1)
Filter: (((orders.state)::text = ANY ('{funded,confirmed,paid,delivered,production,produced,ready_to_ship,shipped}'::text[])) OR (orders.id IS NULL))
Rows Removed by Filter: 5
Buffers: shared hit=1652
-> Nested Loop Left Join (cost=3.23..408595.00 rows=448022 width=609) (actual time=0.087..3.264 rows=613 loops=1)
Buffers: shared hit=1547
-> Nested Loop (cost=2.94..264656.18 rows=448022 width=605) (actual time=0.082..1.227 rows=596 loops=1)
Buffers: shared hit=269
-> Nested Loop Left Join (cost=2.52..130221.18 rows=52594 width=601) (actual time=0.073..0.578 rows=14 loops=1)
Buffers: shared hit=197
-> Nested Loop Left Join (cost=2.23..104252.63 rows=51831 width=588) (actual time=0.066..0.478 rows=14 loops=1)
Join Filter: (shoe_models.model_category_id = model_categories.id)
Rows Removed by Join Filter: 79
Buffers: shared hit=155
-> Nested Loop Left Join (cost=2.23..101141.72 rows=51831 width=72) (actual time=0.055..0.413 rows=14 loops=1)
Buffers: shared hit=154
-> Nested Loop (cost=2.09..92396.06 rows=51831 width=52) (actual time=0.051..0.348 rows=14 loops=1)
Buffers: shared hit=126
-> Nested Loop Left Join (cost=1.80..65264.56 rows=51831 width=48) (actual time=0.033..0.209 rows=14 loops=1)
Buffers: shared hit=84
-> Merge Join (cost=1.38..21836.97 rows=51831 width=26) (actual time=0.022..0.109 rows=14 loops=1)
Merge Cond: (business_orders.id = customs.business_order_id)
Buffers: shared hit=28
-> Index Scan using business_orders_pkey on business_orders (cost=0.29..3688.80 rows=51911 width=22) (actual time=0.012..0.036 rows=14 loops=1)
Buffers: shared hit=14
-> Index Scan using index_customs_on_business_order_id on customs (cost=0.41..17371.39 rows=51831 width=8) (actual time=0.005..0.029 rows=14 loops=1)
Buffers: shared hit=14
-> Index Scan using users_pkey on users designers (cost=0.41..0.83 rows=1 width=26) (actual time=0.006..0.006 rows=1 loops=14)
Index Cond: (id = business_orders.user_id)
Buffers: shared hit=56
-> Index Scan using index_shoes_on_product_id_and_product_type on shoes (cost=0.29..0.51 rows=1 width=12) (actual time=0.007..0.008 rows=1 loops=14)
Index Cond: ((product_id = customs.id) AND ((product_type)::text = 'Custom'::text))
Buffers: shared hit=42
-> Index Scan using shoe_models_pkey on shoe_models (cost=0.14..0.16 rows=1 width=24) (actual time=0.003..0.003 rows=1 loops=14)
Index Cond: (id = shoes.shoe_model_id)
Buffers: shared hit=28
-> Materialize (cost=0.00..1.06 rows=4 width=520) (actual time=0.001..0.002 rows=7 loops=14)
Buffers: shared hit=1
-> Seq Scan on model_categories (cost=0.00..1.04 rows=4 width=520) (actual time=0.004..0.005 rows=7 loops=1)
Buffers: shared hit=1
-> Index Scan using index_user_details_on_user_id on user_details designer_details (cost=0.29..0.49 rows=1 width=17) (actual time=0.006..0.006 rows=1 loops=14)
Index Cond: (designers.id = user_id)
Buffers: shared hit=42
-> Index Scan using index_sizes_on_shoe_id on sizes (cost=0.42..2.00 rows=56 width=8) (actual time=0.006..0.030 rows=43 loops=14)
Index Cond: (shoe_id = shoes.id)
Buffers: shared hit=72
-> Index Scan using index_order_rows_on_article_id on order_rows (cost=0.29..0.31 rows=1 width=12) (actual time=0.003..0.003 rows=0 loops=596)
Index Cond: (article_id = sizes.id)
Filter: ((article_type)::text = 'Size'::text)
Rows Removed by Filter: 2
Buffers: shared hit=1278
-> Index Scan using orders_pkey on orders (cost=0.29..0.67 rows=1 width=18) (actual time=0.000..0.000 rows=0 loops=613)
Index Cond: (id = order_rows.order_id)
Buffers: shared hit=105
Planning time: 5.013 ms
Execution time: 12.728 ms
A query of the following type, instead, is executed in 9090.141ms
SELECT * FROM a_sql ORDER BY custom_id LIMIT 10
The related EXPLAIN output:
Limit (cost=328570.62..328570.64 rows=10 width=324) (actual time=8987.928..8987.929 rows=10 loops=1)
Buffers: shared hit=10412 read=12400, temp read=18319 written=18063
-> Sort (cost=328570.62..328700.40 rows=51911 width=324) (actual time=8987.926..8987.926 rows=10 loops=1)
Sort Key: x.business_order_id
Sort Method: top-N heapsort Memory: 27kB
Buffers: shared hit=10412 read=12400, temp read=18319 written=18063
-> Subquery Scan on x (cost=306105.20..327448.84 rows=51911 width=324) (actual time=3074.397..8978.470 rows=8004 loops=1)
Buffers: shared hit=10412 read=12400, temp read=18319 written=18063
-> GroupAggregate (cost=306105.20..326929.73 rows=51911 width=610) (actual time=3074.395..8975.492 rows=8004 loops=1)
Group Key: business_orders.id
Buffers: shared hit=10412 read=12400, temp read=18319 written=18063
-> Sort (cost=306105.20..306782.04 rows=270739 width=610) (actual time=3073.679..3411.919 rows=467218 loops=1)
Sort Key: business_orders.id
Sort Method: external merge Disk: 56936kB
Buffers: shared hit=10412 read=12400, temp read=18319 written=18063
-> Hash Right Join (cost=98065.48..133611.68 rows=270739 width=610) (actual time=1559.328..2325.275 rows=467218 loops=1)
Hash Cond: (order_rows.article_id = sizes.id)
Filter: (((orders.state)::text = ANY ('{funded,confirmed,paid,delivered,production,produced,ready_to_ship,shipped}'::text[])) OR (orders.id IS NULL))
Rows Removed by Filter: 3712
Buffers: shared hit=10412 read=12400, temp read=9442 written=9186
-> Hash Left Join (cost=813.00..1497.05 rows=7367 width=26) (actual time=9.566..22.691 rows=7367 loops=1)
Hash Cond: (order_rows.order_id = orders.id)
Buffers: shared hit=888
-> Seq Scan on order_rows (cost=0.00..509.08 rows=7367 width=12) (actual time=0.029..5.732 rows=7367 loops=1)
Filter: ((article_type)::text = 'Size'::text)
Rows Removed by Filter: 11199
Buffers: shared hit=277
-> Hash (cost=700.78..700.78 rows=8978 width=18) (actual time=9.507..9.507 rows=8993 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 470kB
Buffers: shared hit=611
-> Seq Scan on orders (cost=0.00..700.78 rows=8978 width=18) (actual time=0.009..7.142 rows=8993 loops=1)
Buffers: shared hit=611
-> Hash (cost=57087.20..57087.20 rows=448022 width=605) (actual time=1547.263..1547.263 rows=469413 loops=1)
Buckets: 1024 Batches: 128 Memory Usage: 567kB
Buffers: shared hit=9524 read=12400, temp read=1037 written=8932
-> Hash Left Join (cost=30955.54..57087.20 rows=448022 width=605) (actual time=496.442..1160.554 rows=469413 loops=1)
Hash Cond: (shoes.shoe_model_id = shoe_models.id)
Buffers: shared hit=9524 read=12400, temp read=1037 written=1035
-> Hash Join (cost=30938.67..52547.10 rows=448022 width=69) (actual time=496.300..964.720 rows=469413 loops=1)
Hash Cond: (sizes.shoe_id = shoes.id)
Buffers: shared hit=9509 read=12400, temp read=1037 written=1035
-> Seq Scan on sizes (cost=0.00..8642.10 rows=441710 width=8) (actual time=0.009..119.758 rows=441934 loops=1)
Buffers: shared hit=797 read=3428
-> Hash (cost=29664.25..29664.25 rows=52594 width=65) (actual time=496.056..496.056 rows=54329 loops=1)
Buckets: 4096 Batches: 2 Memory Usage: 2679kB
Buffers: shared hit=8712 read=8972, temp written=294
-> Hash Left Join (cost=15725.17..29664.25 rows=52594 width=65) (actual time=162.077..460.095 rows=54329 loops=1)
Hash Cond: (designers.id = designer_details.user_id)
Buffers: shared hit=8712 read=8972
-> Hash Join (cost=11607.65..22688.39 rows=51831 width=52) (actual time=124.442..362.315 rows=51846 loops=1)
Hash Cond: (customs.id = shoes.product_id)
Buffers: shared hit=6055 read=8972
-> Hash Left Join (cost=7908.32..17952.45 rows=51831 width=48) (actual time=83.756..251.381 rows=51846 loops=1)
Hash Cond: (business_orders.user_id = designers.id)
Buffers: shared hit=3652 read=8972
-> Hash Join (cost=1843.00..10720.93 rows=51831 width=26) (actual time=27.942..139.640 rows=51846 loops=1)
Hash Cond: (customs.business_order_id = business_orders.id)
Buffers: shared hit=3079 read=4919
-> Seq Scan on customs (cost=0.00..7841.31 rows=51831 width=8) (actual time=0.009..41.084 rows=51846 loops=1)
Buffers: shared hit=2404 read=4919
-> Hash (cost=1194.11..1194.11 rows=51911 width=22) (actual time=27.888..27.888 rows=51849 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 2513kB
Buffers: shared hit=675
-> Seq Scan on business_orders (cost=0.00..1194.11 rows=51911 width=22) (actual time=0.007..15.422 rows=51849 loops=1)
Buffers: shared hit=675
-> Hash (cost=5265.70..5265.70 rows=63970 width=26) (actual time=55.788..55.788 rows=63972 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 3679kB
Buffers: shared hit=573 read=4053
-> Seq Scan on users designers (cost=0.00..5265.70 rows=63970 width=26) (actual time=0.003..35.227 rows=63972 loops=1)
Buffers: shared hit=573 read=4053
-> Hash (cost=3051.16..3051.16 rows=51853 width=12) (actual time=40.654..40.654 rows=51846 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 2154kB
Buffers: shared hit=2403
-> Seq Scan on shoes (cost=0.00..3051.16 rows=51853 width=12) (actual time=0.009..28.311 rows=51846 loops=1)
Filter: ((product_type)::text = 'Custom'::text)
Buffers: shared hit=2403
-> Hash (cost=3306.12..3306.12 rows=64912 width=17) (actual time=37.610..37.610 rows=64670 loops=1)
Buckets: 8192 Batches: 1 Memory Usage: 2748kB
Buffers: shared hit=2657
-> Seq Scan on user_details designer_details (cost=0.00..3306.12 rows=64912 width=17) (actual time=0.007..19.790 rows=64670 loops=1)
Buffers: shared hit=2657
-> Hash (cost=16.19..16.19 rows=54 width=540) (actual time=0.121..0.121 rows=54 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 4kB
Buffers: shared hit=15
-> Hash Left Join (cost=1.09..16.19 rows=54 width=540) (actual time=0.034..0.101 rows=54 loops=1)
Hash Cond: (shoe_models.model_category_id = model_categories.id)
Buffers: shared hit=15
-> Seq Scan on shoe_models (cost=0.00..14.54 rows=54 width=24) (actual time=0.006..0.028 rows=54 loops=1)
Buffers: shared hit=14
-> Hash (cost=1.04..1.04 rows=4 width=520) (actual time=0.016..0.016 rows=7 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 1kB
Buffers: shared hit=1
-> Seq Scan on model_categories (cost=0.00..1.04 rows=4 width=520) (actual time=0.006..0.012 rows=7 loops=1)
Buffers: shared hit=1
Planning time: 4.263 ms
Execution time: 9090.141 ms
The table definitions are the following. No integrity constraints are defined in the database (an ORM is used):
CREATE TABLE business_orders (
id integer NOT NULL,
user_id integer,
published_at timestamp without time zone,
CONSTRAINT business_orders_pkey PRIMARY KEY (id)
);
CREATE INDEX index_business_orders_on_user_id
ON business_orders
USING btree
(user_id);
CREATE TABLE users
(
id serial NOT NULL,
email character varying(255) NOT NULL DEFAULT ''::character varying,
CONSTRAINT users_pkey PRIMARY KEY (id)
);
CREATE UNIQUE INDEX index_users_on_email
ON users
USING btree
(email COLLATE pg_catalog."default");
CREATE TABLE user_details
(
id serial NOT NULL,
user_id integer,
first_name character varying(255),
last_name character varying(255),
CONSTRAINT user_details_pkey PRIMARY KEY (id)
);
CREATE INDEX index_user_details_on_user_id
ON user_details
USING btree
(user_id);
CREATE TABLE customs
(
id serial NOT NULL,
shoes_assortment_id integer,
business_order_id integer,
CONSTRAINT customs_pkey PRIMARY KEY (id)
);
CREATE INDEX index_customs_on_business_order_id
ON customs
USING btree
(business_order_id);
CREATE TABLE shoes
(
id serial NOT NULL,
product_id integer,
product_type character varying(255),
CONSTRAINT shoes_pkey PRIMARY KEY (id)
);
CREATE INDEX index_shoes_on_product_id_and_product_type
ON shoes
USING btree
(product_id, product_type COLLATE pg_catalog."default");
CREATE INDEX index_shoes_on_shoe_model_id
ON shoes
USING btree
(shoe_model_id);
CREATE TABLE shoe_models
(
id serial NOT NULL,
name character varying(255) NOT NULL,
title character varying(255),
model_category_id integer,
CONSTRAINT shoe_models_pkey PRIMARY KEY (id)
);
CREATE INDEX index_shoe_models_on_model_category_id
ON shoe_models
USING btree
(model_category_id);
CREATE UNIQUE INDEX index_shoe_models_on_name
ON shoe_models
USING btree
(name COLLATE pg_catalog."default");
CREATE TABLE model_categories
(
id serial NOT NULL,
name character varying(255) NOT NULL,
sort_order integer,
created_at timestamp without time zone NOT NULL,
updated_at timestamp without time zone NOT NULL,
access_level integer,
CONSTRAINT model_categories_pkey PRIMARY KEY (id)
);
CREATE UNIQUE INDEX index_model_categories_on_name
ON model_categories
USING btree
(name COLLATE pg_catalog."default");
CREATE TABLE sizes
(
id serial NOT NULL,
shoe_id integer,
CONSTRAINT sizes_pkey PRIMARY KEY (id)
);
CREATE INDEX index_sizes_on_shoe_id
ON sizes
USING btree
(shoe_id);
CREATE TABLE order_rows
(
id serial NOT NULL,
order_id integer,
quantity integer,
article_id integer,
article_type character varying(255),
article_name character varying(255),
unit_taxed_cents integer,
CONSTRAINT order_rows_pkey PRIMARY KEY (id)
);
CREATE INDEX index_order_rows_on_article_id
ON order_rows
USING btree
(article_id);
CREATE INDEX index_order_rows_on_article_type
ON order_rows
USING btree
(article_type COLLATE pg_catalog."default");
CREATE INDEX index_order_rows_on_order_id
ON order_rows
USING btree
(order_id);
CREATE INDEX index_order_rows_on_quantity
ON order_rows
USING btree
(quantity);
CREATE INDEX index_order_rows_on_unit_taxed_cents
ON order_rows
USING btree
(unit_taxed_cents);
CREATE TABLE orders
(
id serial NOT NULL,
user_id integer,
state character varying(255),
bulk boolean DEFAULT false,
CONSTRAINT orders_pkey PRIMARY KEY (id)
);
CREATE INDEX index_orders_on_user_id
ON orders
USING btree
(user_id);
Because a_sql is a view, I can't insert the ORDER BY clause inside the view; I need to call it as a black box.
The use cases for this query are:
- With a limit of 10, ordered by custom_id
- With a limit of 10, ordered by total
- To filter all rows that have business_order.user_id = orders.id and business_orders.id = x (usually not more than 100 rows as a result)
The graphical explain of pgAdmin, even if I don't understand much of it, seems to be telling me that with no ordering the query uses indexes (doing "nested loop" joins), while with the ordering it doesn't (it uses "hash joins").
Are there any ways to increase performance?
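One cheap experiment, grounded in the slow plan above (its sort spills: "Sort Method: external merge Disk: 56936kB"): give the session more sort memory and re-check the plan; a sketch:
SET work_mem = '128MB';  -- session-level; the default is often only a few MB
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM a_sql ORDER BY custom_id LIMIT 10;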
Marco Marcianesi
(57 rep)
Jan 12, 2016, 11:58 AM
• Last activity: Jul 19, 2025, 11:17 AM
1
votes
1
answers
154
views
Access previous records from a table variable without using cursor in SQL Server 2014
In a stored procedure, I have to do some calculations based on the previous record and a running total. Currently I achieve this using a cursor or a table variable, but it's taking a lot of time because the number of records is huge.
How can I improve the performance?
A part of my stored procedure:
DECLARE @HoursTemp TABLE
(
ID [INT],
EmployeeName [VARCHAR](100),
WorkDate [DATE],
RegularHours [DECIMAL],
OvertimeHours [DECIMAL],
NightJob [BIT],
JobId [INT]
)
INSERT INTO @HoursTemp
SELECT ID, EmployeeName, WorkDate, RegularHours,OvertimeHours, NightJob, JobId
FROM @PayrollSheet
GROUP BY WorkDate, EmployeeName , JobIdType, NightJob, RegularHours, OvertimeHours, ID, JobId
ORDER BY WorkDate, EmployeeName, JobIdType, NightJob, JobId
SELECT @count = COUNT(ID) FROM @HoursTemp

WHILE (@i <= @count)
BEGIN
    -- ...
    IF (@SumOfHours >= 8 AND @SumOverTimeHours = 0)
BEGIN
SET @SumOverTimeHours = @OverTimeHour + (@SumOfHours - 8); -- Sum up the overtime hour.
SET @RegularHours = 8 - (@SumOfHours - @RegularHours); -- Get remainder regular hour.
UPDATE @PayrollSheet SET RegularHours = @RegularHours, OvertimeHours = @SumOverTimeHours Where ID = @PayRollID -- Update the temp table.
END
ELSE
BEGIN
--When sum of regular hours is greater than 8, further regular hours of the day will be considered as over time hours.
IF(@SumOfHours >=8)
UPDATE @PayrollSheet SET RegularHours = 0, OvertimeHours = @RegularHours Where ID = @PayRollID --Update the over time hours with regular hour and set regular hour with 0.
END
END
ELSE
BEGIN
SET @SumOfHours = @RegularHours; -- If Dates does not match then reset the regular hours sum variable to current regular hours so it will be summed with next regular hours when dates match.
SET @SumOverTimeHours = 0; -- Sum up Over time hours present in existing records.
--SET @SumOfNightHours = 0; -- Reset OverTime Sum and Night hours sum variable when dates does not match.
END
-- Get counts of record with same work date,Job ID and Night Job true and match it with total records with same work date.
SELECT @RecordsWithWorkDateAndJobID = COUNT(*) from @PayrollSheet where WorkDate = @WorkDate AND JobID = @PayRollJobID AND NightJob = 1 AND EmployeeName = @EmployeeName
SELECT @RecordsWithOnlyWorkDate = COUNT(*) from @PayrollSheet where WorkDate = @WorkDate AND EmployeeName = @EmployeeName --AND NightJob = 1
--If both count matches then night hours are entered in same job on a day. Else night hours are entered in more than one job. Also if sum of night hours is less than 8 then only update night hours.
IF(NOT(@RecordsWithWorkDateAndJobID > 0 AND @RecordsWithOnlyWorkDate > 0 AND @RecordsWithWorkDateAndJobID = @RecordsWithOnlyWorkDate)) --AND @SumOfNightHours < 8
BEGIN
UPDATE @PayrollSheet SET NightJob = 0 Where ID = @PayRollID -- Set regular hours as 0 and night hours as the regular hour.
--SET @SumOfNightHours = @SumOfNightHours + @RegularHours; -- Maintain sum of night hours, so that if it is greater than 8 then night hours will not be added.
END
SET @PreviousDate = @WorkDate --Store previous work date.
SET @PreviousJobID = @PayRollJobID -- Store previous job id.
SET @PreviousEmployeeName = @EmployeeName
-- Fetch next record from temp table.
SELECT @i = @i + 1
END
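If the goal is just the 8-hour regular/overtime split, SQL Server 2012+ window functions can usually replace the whole loop: a running total per employee and work date tells each row whether it falls before, across, or after the cap. A minimal sketch against @PayrollSheet (the night-job part would be a similar set-based UPDATE; treat this as an outline under those assumptions, not a drop-in replacement):

;WITH ordered AS (
    SELECT ID, RegularHours, OvertimeHours,
           -- running total of regular hours per employee and work date, in entry order
           SUM(RegularHours) OVER (PARTITION BY EmployeeName, WorkDate
                                   ORDER BY ID
                                   ROWS UNBOUNDED PRECEDING) AS RunningHours
    FROM @PayrollSheet
)
UPDATE o
SET OvertimeHours = CASE
        WHEN RunningHours <= 8 THEN OvertimeHours                               -- cap not reached yet
        WHEN RunningHours - RegularHours >= 8 THEN OvertimeHours + RegularHours -- cap was reached before this row
        ELSE OvertimeHours + (RunningHours - 8) END,                            -- this row straddles the cap
    RegularHours = CASE
        WHEN RunningHours <= 8 THEN RegularHours
        WHEN RunningHours - RegularHours >= 8 THEN 0
        ELSE 8 - (RunningHours - RegularHours) END
FROM ordered AS o;

Because the whole table is touched in one statement instead of one UPDATE per row, this tends to scale far better than the cursor/loop as the row count grows.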
Rishi Samadhiya
(21 rep)
Feb 23, 2017, 11:46 AM
• Last activity: Jul 18, 2025, 09:04 AM
0
votes
0
answers
33
views
LIKE Operator optimization String % customer %
In MYSQL, I have a select query which is usually fetch the data from 4M records. There I'm using like operator with % and % this is not using indexing filter and reading total 4M records to fetch 2 records. How can I optimize it. SELECT r_ss.siteId, r_ss.referrer, r_ss.supplierhash, min(r_ss.bid_flo...
In MySQL, I have a SELECT query that typically fetches data from 4M records. I'm using the LIKE operator with a leading and trailing %, so it cannot use an index and reads all 4M records to fetch 2. How can I optimize it?
SELECT
    r_ss.siteId,
    r_ss.referrer,
    r_ss.supplierhash,
    MIN(r_ss.bid_floor) AS bid_floor,
    SUM(r_ss.adCall_yesterday) AS adCall_yesterday,
    SUM(r_ss.adCall_today) AS adCall_today
FROM reporting.total_data_for_site r_ss
WHERE (r_ss.adCall_today > 1 OR r_ss.adCall_yesterday > 100)
    AND CASE WHEN ('1367,1397,1624' IS NOT NULL AND '1367,1397,1624' <> '')
             THEN r_ss.siteId IN (SELECT * FROM tmp_site)
             ELSE 1 = 1 END
    AND r_ss.deviceid IN (38, 39, 41)
    AND r_ss.referrer LIKE CONCAT('%', 'watchfree', '%')
GROUP BY r_ss.siteId, r_ss.referrer;
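A leading % makes a B-tree index on referrer unusable, so one option worth benchmarking is a FULLTEXT index. A sketch, assuming InnoDB on MySQL 5.6+ and that 'watchfree' appears as a whole token in the referrer (substring-of-a-word matches would still need LIKE):

ALTER TABLE reporting.total_data_for_site
    ADD FULLTEXT INDEX ft_referrer (referrer);

-- then replace the LIKE predicate with:
--   AND MATCH(r_ss.referrer) AGAINST ('+watchfree' IN BOOLEAN MODE)

Whether this helps depends on the tokenizer and the innodb_ft_min_token_size setting, so it should be tested against real referrer data.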
Aravind
(11 rep)
Jul 16, 2025, 01:09 PM
0
votes
2
answers
618
views
SQL server alwayson 3 Node Primary Down
I am using three node alwayson Availability Group SQL 2012 Enterprise Edition. 2 Node in DC with File Share Witness quorum also in DC and third node to be setup in DR I need to perform a DR switchover with all DC down And DR database up for applications. Can this solution work or I need to use Log S...
I am using a three-node AlwaysOn Availability Group on SQL Server 2012 Enterprise Edition:
two nodes in the DC, with a file share witness quorum also in the DC, and a third node to be set up in DR.
I need to perform a DR switchover with the whole DC down and the DR database brought online for applications.
Can this solution work, or do I need to use log shipping to the third node instead of an AlwaysOn AG replica?
Please help with this.
Thanks
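It can work, but with both DC nodes and the file share witness down the cluster loses quorum, so the switchover is manual: you first force quorum on the DR node (for example with the Failover Clustering cmdlet Start-ClusterNode -FixQuorum), then force the AG online there. A hedged sketch, with [AG1] as a placeholder name:

-- On the DR replica, after the cluster service has been forced up without quorum.
-- This is a forced failover: any transactions not yet sent to DR are lost.
ALTER AVAILABILITY GROUP [AG1] FORCE_FAILOVER_ALLOW_DATA_LOSS;

With an asynchronous DR replica this implies possible data loss, which is the trade-off to weigh against log shipping.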
mukki
(11 rep)
Jun 25, 2020, 07:20 AM
• Last activity: Jul 12, 2025, 11:03 PM
1
votes
1
answers
826
views
How to reduce redundant time-series data in MySQL into a function?
i have a question that actually does not fit here very much, but maybe it fits. So, i have data like this: [![enter image description here][1]][1] How can i reduce this 3 entries with same values? Sure, i can just delete the other 2, but in time-series data i can not just do this, this would hide th...
I have a question that may not fit here perfectly, but maybe it does. I have data like this (screenshot: three consecutive rows with identical values, logged between 15:19:45 and 15:19:55):
How can I reduce these 3 entries with the same values? Sure, I could just delete the other 2, but with time-series data I can't simply do that: it would hide the information that between 15:19:45 and 15:19:55 the value did not change. So I thought about saving the delta time and value in a separate table, keeping only the first entry and marking it, but I don't know if this is the best way.
<img src="https://i.sstatic.net/4RlBL.png" alt="enter image description here">
Rep
(11 rep)
Sep 9, 2016, 10:15 PM
• Last activity: Jul 8, 2025, 08:01 AM
1
votes
1
answers
196
views
Best solution for performance - to extract 400 over fields with varchar (300) for one single BI Data set
I have this requirement to extract about 450 fields with most of it's fields length with varchar(300) from multiple tables for one single BI report. For better performance, I tried to create one single table to store all the required fields. However, it exceed the SQL server max row size and table w...
I have a requirement to extract about 450 fields, most of them varchar(300), from multiple tables for one single BI report.
For better performance, I tried to create one single table to store all the required fields. However, that exceeds the SQL Server maximum row size: the table was created with errors, and inserts or updates fail once a row exceeds the limit. So it seems we have to split the table.
I am now concerned about extract performance. During the extract we also need to apply data-level security. Do you have any ideas for a design that performs better?
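Since the ~8,060-byte in-row limit is what forces the split, one common design is vertical partitioning: several 1:1 tables sharing the same key, stitched back together with a view so the BI extract (and any data-level security predicate) still targets one object. A rough sketch with illustrative names and column counts:

CREATE TABLE dbo.ReportPart1 (
    Id     INT NOT NULL PRIMARY KEY,
    Col001 VARCHAR(300),
    -- ... keeping each part to roughly 25 varchar(300) columns stays under the row limit
    Col025 VARCHAR(300)
);

CREATE TABLE dbo.ReportPart2 (
    Id     INT NOT NULL PRIMARY KEY REFERENCES dbo.ReportPart1 (Id),
    Col026 VARCHAR(300),
    Col050 VARCHAR(300)
);
GO

CREATE VIEW dbo.ReportAll
AS
SELECT p1.Id, p1.Col001, p1.Col025,
       p2.Col026, p2.Col050
FROM dbo.ReportPart1 AS p1
JOIN dbo.ReportPart2 AS p2 ON p2.Id = p1.Id;

Because the parts share a clustered primary key, the joins are merge-friendly, and the security filter only has to be applied once at the view level.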
cool
(11 rep)
Oct 4, 2021, 11:00 AM
• Last activity: Jul 7, 2025, 08:02 AM
0
votes
1
answers
782
views
How to optimize UPDATE with a nested SELECT subquery?
I wrote a complicated UPDATE query, and it works, but it looks menacing. Here's what I'm trying to do: In each topic user 'Bob123' posted anonymously. When you post anonymously in a topic, you get a unique anonymous index for that topic. Say I want to merge two topics together. Bob123 has a differen...
I wrote a complicated UPDATE query, and it works, but it looks menacing. Here's what I'm trying to do:
Say user 'Bob123' posted anonymously in each of two topics. When you post anonymously in a topic, you get a unique anonymous index for that topic.
Say I want to merge two topics together. Bob123 has a different anon index in both topics, so his unique anon index wouldn't be unique. I only have two pieces of data to work with: $topic_id, the topic id you are merging into, and $post_id_list, all the post ids that got merged over.
I want to update the anonymous_index entries for each distinct poster_id's posts in that topic. This anonymous_index needs to be the original index they had in the topic before the other topic was merged into it.
The innermost SELECT query selects the anon indices of the moved posts.
The outer SELECT query gets the first non-merged post's anon index (if it is > 0) of those merged posters in the topic, and selects a merged anon index from the first query.
Then, I update it. Wherever the anon index of those posters in that topic doesn't equal the old index, I update it.
Is there something simple that I'm missing here? I don't like the fact that I have a subquery in a subquery.
At first I was using HAVING MIN(anonymous_index) <> MAX(anonymous_index)
along with AND post_id NOT IN ($merged_post_list)
to select the poster id list that needed to be updated and an unmerged anon index, but it returned 0 rows. If the merged post is BEFORE all original posts (and has a larger anon index), then the minimum anon index will match the maximum index for that poster. So making another subquery fixed this...
$merged_post_list = implode(',', $post_id_list);
...
UPDATE " . POSTS_TABLE . " AS p
INNER JOIN ( SELECT p.post_id, p.anonymous_index AS old_index,
merged.poster_id, merged.anonymous_index AS new_index
FROM " . POSTS_TABLE . " AS p,
( SELECT poster_id, anonymous_index
FROM " . POSTS_TABLE . "
WHERE post_id IN ($merged_post_list)
AND topic_id = $topic_id
AND anonymous_index > 0
) AS merged
WHERE p.post_id NOT IN ($merged_post_list)
AND p.topic_id = $topic_id
AND p.anonymous_index > 0
AND p.poster_id = merged.poster_id
GROUP BY merged.poster_id
) AS postdata
SET p.anonymous_index = postdata.old_index
WHERE p.topic_id = $topic_id
AND anonymous_index > 0
AND anonymous_index <> postdata.old_index
AND p.poster_id = postdata.poster_id
post_id is the primary index, poster_id and topic_id are also indices.
Here's some sample behavior:
Before merge:

| post_id | poster_id | anonymous_index |
|---------|-----------|-----------------|
| 11      | 3         | 2               |
| 12      | 22        | 1               |
| 14      | 22        | 1               |
| 15      | 3         | 2               |

After merge:

| post_id | poster_id | anonymous_index |
|---------|-----------|-----------------|
| 10      | 22        | 4               |
| 11      | 3         | 2               |
| 12      | 22        | 1               |
| 13      | 3         | 4               |
| 14      | 22        | 1               |
| 15      | 3         | 2               |
| 16      | 22        | 4               |

After UPDATE (the above query):

| post_id | poster_id | anonymous_index |
|---------|-----------|-----------------|
| 10      | 22        | 1               |
| 11      | 3         | 2               |
| 12      | 22        | 1               |
| 13      | 3         | 2               |
| 14      | 22        | 1               |
| 15      | 3         | 2               |
| 16      | 22        | 1               |
EDIT: I've created the following index and an alternative SELECT query to avoid having two subqueries; how would these fare?
(topic_id, poster_id, anonymous_index, post_id)
SELECT merged.poster_id, merged.anonymous_index AS new_index,
old.post_id, old.anonymous_index AS old_index
FROM " . POSTS_TABLE . " AS merged,
" . POSTS_TABLE . " AS old
WHERE merged.post_id IN ($post_list)
AND merged.anonymous_index > 0
AND merged.anonymous_index <> old.anonymous_index
AND old.topic_id = $topic_id
AND old.post_id NOT IN ($post_list)
AND old.anonymous_index > 0
AND old.poster_id = merged.poster_id
GROUP BY merged.poster_id
ORDER BY NULL
EDIT AGAIN:
Here is my schema:
--
-- Table structure for table phpbb_posts
--

CREATE TABLE phpbb_posts (
  post_id int(10) UNSIGNED NOT NULL,
  topic_id int(10) UNSIGNED NOT NULL DEFAULT '0',
  forum_id mediumint(8) UNSIGNED NOT NULL DEFAULT '0',
  poster_id int(10) UNSIGNED NOT NULL DEFAULT '0',
  icon_id mediumint(8) UNSIGNED NOT NULL DEFAULT '0',
  poster_ip varchar(40) COLLATE utf8_bin NOT NULL DEFAULT '',
  post_time int(11) UNSIGNED NOT NULL DEFAULT '0',
  post_reported tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  enable_bbcode tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  enable_smilies tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  enable_magic_url tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  enable_sig tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  post_username varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  post_subject varchar(255) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  post_text mediumtext COLLATE utf8_bin NOT NULL,
  post_checksum varchar(32) COLLATE utf8_bin NOT NULL DEFAULT '',
  post_attachment tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  bbcode_bitfield varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  bbcode_uid varchar(8) COLLATE utf8_bin NOT NULL DEFAULT '',
  post_postcount tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  post_edit_time int(11) UNSIGNED NOT NULL DEFAULT '0',
  post_edit_reason varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  post_edit_user int(10) UNSIGNED NOT NULL DEFAULT '0',
  post_edit_count smallint(4) UNSIGNED NOT NULL DEFAULT '0',
  post_edit_locked tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  post_visibility tinyint(3) NOT NULL DEFAULT '0',
  post_delete_time int(11) UNSIGNED NOT NULL DEFAULT '0',
  post_delete_reason varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  post_delete_user int(10) UNSIGNED NOT NULL DEFAULT '0',
  sfs_reported tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  parent_id int(10) UNSIGNED DEFAULT '0',
  post_depth int(3) UNSIGNED NOT NULL DEFAULT '0',
  is_anonymous tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  anonymous_index mediumint(8) UNSIGNED NOT NULL DEFAULT '0'
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

--
-- Indexes for table phpbb_posts
--

ALTER TABLE phpbb_posts
  ADD PRIMARY KEY (post_id),
  ADD KEY forum_id (forum_id),
  ADD KEY topic_id (topic_id),
  ADD KEY poster_ip (poster_ip),
  ADD KEY poster_id (poster_id),
  ADD KEY post_username (post_username),
  ADD KEY tid_post_time (topic_id, post_time),
  ADD KEY post_visibility (post_visibility),
  ADD KEY parent_id (parent_id);

--
-- AUTO_INCREMENT for table phpbb_posts
--

ALTER TABLE phpbb_posts
  MODIFY post_id int(10) UNSIGNED NOT NULL AUTO_INCREMENT;
COMMIT;
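Since every non-merged post by a poster in the topic already carries that poster's original index, the double nesting can probably be avoided by grouping the non-merged posts once and joining that back. A sketch in the same style (it assumes, as in the samples above, that at least one non-merged post exists per affected poster):

UPDATE phpbb_posts AS p
INNER JOIN ( SELECT poster_id, MIN(anonymous_index) AS old_index
             FROM phpbb_posts
             WHERE topic_id = $topic_id
               AND post_id NOT IN ($merged_post_list)
               AND anonymous_index > 0
             GROUP BY poster_id
           ) AS orig ON orig.poster_id = p.poster_id
SET p.anonymous_index = orig.old_index
WHERE p.topic_id = $topic_id
  AND p.anonymous_index > 0
  AND p.anonymous_index <> orig.old_index;

MIN() is safe here because the non-merged posts of a poster all share the same original index, and grouping in the derived table avoids the nondeterministic GROUP BY pick in the original query. The (topic_id, poster_id, anonymous_index, post_id) index from the EDIT should cover the derived table entirely.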
Thrash Tech
(1 rep)
Jan 16, 2019, 01:49 AM
• Last activity: Jul 3, 2025, 10:04 AM
15
votes
1
answers
415
views
Why aggregate operator used after unique index scan
I have a table with a unique index filtered for non-nullable values. In the query plan there is use of distinct. Is there a reason for this? USE tempdb CREATE TABLE T1( Id INT NOT NULL IDENTITY PRIMARY KEY ,F1 INT , F2 INT ) go CREATE UNIQUE NONCLUSTERED INDEX UK_T1 ON T1 (F1,F2) WHERE F1 IS NOT NUL...
I have a table with a unique index filtered to non-null values. Even so, the query plan contains a distinct (aggregate) operator. Is there a reason for this?
USE tempdb
CREATE TABLE T1( Id INT NOT NULL IDENTITY PRIMARY KEY ,F1 INT , F2 INT )
go
CREATE UNIQUE NONCLUSTERED INDEX UK_T1 ON T1 (F1,F2) WHERE F1 IS NOT NULL AND F2 IS NOT NULL
GO
INSERT INTO T1(f1,F2) VALUES(1,1),(1,2),(2,1)
SELECT DISTINCT F1,F2 FROM T1 WHERE F1 IS NOT NULL AND F2 IS NOT NULL
SELECT F1,F2 FROM T1 WHERE F1 IS NOT NULL AND F2 IS NOT NULL
Query plan:

mordechai
(253 rep)
Aug 7, 2014, 09:49 AM
• Last activity: Jul 1, 2025, 05:34 PM
0
votes
2
answers
187
views
Best way to delete large set of rows knowing field to NOT delete
Coming from a Rails background, I have a large production database that I have a copy of where I only need records from 4 of the 2000+ companies. I'm trying to delete all the rows except the ones belonging to the 4, and I know the way I have it is not optimal. DELETE FROM appointments WHERE (appoint...
Coming from a Rails background, I have a copy of a large production database from which I only need the records belonging to 4 of the 2000+ companies. I'm trying to delete all the rows except the ones belonging to those 4, and I know the way I have it is not optimal.
DELETE FROM appointments
WHERE (appointments.company_id NOT IN (6, 753, 785, 1611))
Another example is when I have to delete records from a table where the company_id is on an associated table:
DELETE FROM mappings
WHERE mappings.id IN (SELECT mappings.id
FROM code_mappings
INNER JOIN codes ON codes.remote_id = mappings.code_remote_id
WHERE (codes.company_id NOT IN (6, 753, 785, 1611)))
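When almost every row is going away, it is often faster to copy the few rows you want into a new table and swap it in than to delete millions of rows. A sketch, assuming MySQL and that foreign keys and triggers are handled separately:

CREATE TABLE appointments_keep LIKE appointments;

INSERT INTO appointments_keep
SELECT * FROM appointments
WHERE company_id IN (6, 753, 785, 1611);

RENAME TABLE appointments TO appointments_old,
             appointments_keep TO appointments;

DROP TABLE appointments_old;

The copy touches only the rows you keep, and DROP TABLE is far cheaper than logging millions of individual row deletions.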
Tonys
(131 rep)
Oct 1, 2016, 03:39 AM
• Last activity: Jun 30, 2025, 02:05 PM
0
votes
1
answers
187
views
Help with SQL Query: Is there a cost-efficient way to join multiple tables to one main table?
I have a `main` table set up like this: |id |table_to_join | table_item_id | |:---|:-------------|:--------------| |1 |products |123 | |2 |products |577 | |3 |products |344 | |4 |products |1230 | |5 |images |14 | |6 |images |42 | |7 |video |555 | |8 |products |400 | |9 |video |9 | |10 |images |19 |...
I have a main table set up like this:
|id |table_to_join | table_item_id |
|:---|:-------------|:--------------|
|1 |products |123 |
|2 |products |577 |
|3 |products |344 |
|4 |products |1230 |
|5 |images |14 |
|6 |images |42 |
|7 |video |555 |
|8 |products |400 |
|9 |video |9 |
|10 |images |19 |
I have three other tables called products, images and videos. Their primary ids correspond to those of the table_item_id shown in the main table above, which is an indexed column. I would like to query the main table and select those 10 rows, and join those three tables on their corresponding table_item_id to produce one single list showing a variety of different media.
I am looking for an efficient query to do something like this, if it's even possible. I had thought of three JOINs, but that seems a bit messy, especially if I start adding more tables in the future. I had also thought of using individual AJAX queries on each table after the page displays, but again that seems a bit messy.
The query below is sort of what I'm looking for, but I'm not sure if there is a better, more cost-efficient method that best fits what I'm intending to do. Let me know what you think.
SELECT * FROM main
JOIN products ON main.table_item_id = products.id AND table_to_join = 'products'
JOIN images ON main.table_item_id = images.id AND table_to_join = 'images'
JOIN video ON main.table_item_id = video.id AND table_to_join = 'video'
LIMIT 10
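As written, the three inner JOINs only return rows that match all three tables at once, so each would likely need to be a LEFT JOIN with COALESCE over the columns. An alternative sketch that stays index-friendly and is easy to extend with a fourth table later (the title column on the media tables is an assumption):

SELECT m.id, 'products' AS media_type, p.title
FROM main m JOIN products p ON p.id = m.table_item_id
WHERE m.table_to_join = 'products'
UNION ALL
SELECT m.id, 'images', i.title
FROM main m JOIN images i ON i.id = m.table_item_id
WHERE m.table_to_join = 'images'
UNION ALL
SELECT m.id, 'video', v.title
FROM main m JOIN video v ON v.id = m.table_item_id
WHERE m.table_to_join = 'video'
ORDER BY id
LIMIT 10;

Each branch is a simple equality join, so every leg can use the table_item_id index, and adding a new media table is just one more UNION ALL branch.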
peppy
(45 rep)
Aug 27, 2023, 12:39 AM
• Last activity: Jun 30, 2025, 12:04 PM
Showing page 1 of 20 total questions