
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answers
776 views
ERROR 1071 (42000): Specified key was too long; max key length is 1000 bytes
An online optimization tool suggested adding two indexes. One of the indexes was created successfully, but I can't create the second one. Do you think these indexes will reduce the query run time? Any ideas how I can get past this error? The SQL query takes 10.1 seconds and has over 380 executions in the slow log.

SELECT l.ID, post_title, post_content, post_name, post_parent, post_author,
       post_status, post_modified_gmt, post_date, post_date_gmt
FROM (
    SELECT wp_posts.ID
    FROM wp_posts
    WHERE wp_posts.post_status IN ('publish')
      AND wp_posts.post_type = 'post'
      AND wp_posts.post_password = ''
      AND wp_posts.post_date != '0000-00-00 00:00:00'
    ORDER BY wp_posts.post_modified ASC
    LIMIT 100 OFFSET 214000
) o
JOIN wp_posts l ON l.ID = o.ID

Indexes that I need to create:

MariaDB [database]> ALTER TABLE wp_posts ADD INDEX wp_posts_idx_post_type_post_passw_post_statu (post_type, post_password, post_status);
ERROR 1071 (42000): Specified key was too long; max key length is 1000 bytes

MariaDB [database]> ALTER TABLE wp_posts ADD INDEX wp_posts_idx_post_modified (post_modified);
Query OK, 453289 rows affected (10.839 sec)
Records: 453289  Duplicates: 0  Warnings: 0

---

CREATE TABLE wp_posts (
  ID bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  post_author bigint(20) unsigned NOT NULL DEFAULT 0,
  post_date datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_date_gmt datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_content longtext COLLATE utf8mb4_unicode_ci NOT NULL,
  post_title text COLLATE utf8mb4_unicode_ci NOT NULL,
  post_excerpt text COLLATE utf8mb4_unicode_ci NOT NULL,
  post_status varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'publish',
  comment_status varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'open',
  ping_status varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'open',
  post_password varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  post_name varchar(200) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  to_ping text COLLATE utf8mb4_unicode_ci NOT NULL,
  pinged text COLLATE utf8mb4_unicode_ci NOT NULL,
  post_modified datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_modified_gmt datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_content_filtered longtext COLLATE utf8mb4_unicode_ci NOT NULL,
  post_parent bigint(20) unsigned NOT NULL DEFAULT 0,
  guid varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  menu_order int(11) NOT NULL DEFAULT 0,
  post_type varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'post',
  post_mime_type varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  comment_count bigint(20) NOT NULL DEFAULT 0,
  PRIMARY KEY (ID),
  KEY post_name (post_name(191)),
  KEY type_status_date (post_type,post_status,post_date,ID),
  KEY post_parent (post_parent),
  KEY post_author (post_author),
  KEY wp_posts_idx_post_modified (post_modified)
) ENGINE=MyISAM AUTO_INCREMENT=463265 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
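The failing key spans 295 utf8mb4 characters (post_type 20 + post_password 255 + post_status 20), i.e. up to 1180 bytes, which exceeds MyISAM's 1000-byte key limit. One possible workaround, sketched here with an illustrative prefix length rather than anything taken from the original post, is a prefix index on the long post_password column:

-- Sketch only: index a prefix of post_password so the key fits MyISAM's 1000-byte limit.
-- (20 + 191 + 20) characters * 4 bytes/char for utf8mb4 = 924 bytes < 1000.
ALTER TABLE wp_posts
  ADD INDEX wp_posts_idx_type_passw_status (post_type, post_password(191), post_status);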
execash (1 rep)
May 8, 2020, 06:44 AM • Last activity: Aug 6, 2025, 01:04 PM
1 votes
0 answers
31 views
Necessary to include filtered index filter column if value always NULL in PostgreSQL?
My question is about PostgreSQL. I found similar questions for MS SQL Server, but I don't know if the answers apply here. My table looks like this:

scores
======
| ID | UserID | ValidFrom  | ValidUntil | ScorePoints |
+----+--------+------------+------------+-------------+
| 1  | 1      | 2025-08-01 | 2025-08-02 | 80          |
| 2  | 1      | 2025-08-02 | NULL       | 85          |

There will be a lot of queries to find the currently valid score for a given UserID (= the row for UserID = ? AND ValidUntil IS NULL). I have a unique index like this:

CREATE UNIQUE INDEX ix_uq_scores ON scores (userid) WHERE validuntil IS NULL;

Or should it be:

CREATE UNIQUE INDEX ix_uq_scores ON scores (userid, validuntil) WHERE validuntil IS NULL;

A query might look like:

SELECT u.id, u.username, s.scorepoints
FROM users u
INNER JOIN scores s ON s.userid = u.id AND s.validuntil IS NULL
WHERE u.id = 123;

My filtered index will only include rows where validuntil **IS** NULL. So do I have to add this constant NULL value to the index tuple?
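One quick way to see which of the two candidate indexes the planner actually picks for this access pattern is to inspect the plan; a minimal sketch, assuming the scores table above is populated with realistic data:

EXPLAIN (ANALYZE, BUFFERS)
SELECT s.scorepoints
FROM   scores s
WHERE  s.userid = 123
  AND  s.validuntil IS NULL;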
MrSnrub (181 rep)
Aug 4, 2025, 02:32 AM
0 votes
3 answers
147 views
How does a DB index work internally at a high level?
I have around 500 GB of data in one MySQL table with around 5 billion records and around 15 columns. It has indexes on all the required columns. When I run

select * from big_table where index_column = some_value

it takes a couple of minutes to return the data. I am not sure how indexing works internally here. Here is my understanding:

1. The database first brings the complete data of the indexed column (not just the values in the where clause) into memory.
2. It finds the values from the where clause within the data fetched in step 1 to get the row locations in the actual table.
3. Another I/O call then goes to disk to fetch all required data based on the row locations from step 2.

Is that correct? I am mainly unsure about the first step, i.e. whether the complete data of the indexed column is fetched into memory, or whether the values are searched on disk without bringing the complete column into memory.
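Before reasoning about the internals, it can help to see what the optimizer actually plans for this lookup; a sketch using the placeholder names from the question:

-- Check that "key" shows the expected index and how many rows MySQL estimates it must examine.
EXPLAIN
SELECT * FROM big_table WHERE index_column = 'some_value';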
user3198603 (399 rep)
Mar 10, 2019, 02:07 AM • Last activity: Aug 3, 2025, 12:11 PM
1 votes
1 answers
697 views
Table is full even with innodb_file_per_table
I am trying to add an index to my table using an ALTER query. My my.cnf file:

innodb_data_home_dir = /usr/local/mysql5/data
innodb_data_file_path = ibdata1:60021538816;ibdata2:300M;ibdata3:30000M;ibdata4:10000M;ibdata5:10000M:autoextend
innodb_buffer_pool_instances = 3
innodb_buffer_pool_size = 3G
innodb_additional_mem_pool_size = 8M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 256M
innodb_additional_mem_pool_size = 128M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 100
innodb_file_per_table
innodb_flush_method=O_DIRECT

Still, every time I run my alter query

alter table user add unique index idx_emailHash (emailHash);

it fails with "Table 'user' is full". What am I missing? I am using MySQL 5.6. Some more info:

[root@db data]# ll | grep user
-rw-rw----. 1 mysql mysql       19551 Jun 10 14:33 user.frm
-rw-rw----. 1 mysql mysql 28412215296 Jun 10 22:58 user.ibd
[root@db data]# ll | grep ibd
-rwxr-xr-x. 1 mysql mysql 60021538816 Jun 10 22:58 ibdata1
-rw-rw----. 1 mysql mysql   314572800 Jun 10 22:20 ibdata2
-rw-rw----. 1 mysql mysql 31457280000 Jun 10 22:33 ibdata3
-rw-rw----. 1 mysql mysql 10485760000 Jun 10 22:51 ibdata4
-rw-rw----. 1 mysql mysql 10485760000 Jun 10 22:51 ibdata5
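A couple of checks that can help narrow down where the space is running out; this is only a sketch, not from the original post, using MySQL 5.6 names:

-- Is the table really stored in its own tablespace? (NAME is reported as 'dbname/tablename')
SELECT NAME, SPACE FROM information_schema.INNODB_SYS_TABLES WHERE NAME LIKE '%/user';

-- How much free space does InnoDB report for the table's tablespace?
SHOW TABLE STATUS LIKE 'user'\G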
Ankit Bansal (61 rep)
Jun 10, 2021, 05:32 PM • Last activity: Aug 3, 2025, 04:05 AM
2 votes
1 answers
219 views
Index Strategy for MariaDB Query that matches on one column and between two others
I have a table that contains a set of measurements for a continuous stream of processes. Although each process is individual, they are categorized into groups. The events have a start and end timestamp and a process group identifier. The table structure is as follows (InnoDB, MariaDB 10):

**Table Name: measurements**

CREATE TABLE measurements (
  row_id int(11) NOT NULL AUTO_INCREMENT,
  process_name varchar(100) COLLATE utf8_bin NOT NULL,
  process_id int(11) NOT NULL,
  process_group_id tinyint(4) NOT NULL,
  measurement_1 float NOT NULL,
  measurement_2 float NOT NULL,
  measurement_3 float NOT NULL,
  measurement_4 float NOT NULL,
  start_timestamp int(11) NOT NULL,
  end_timestamp int(11) NOT NULL,
  PRIMARY KEY (row_id),
  KEY process_group_id (process_group_id, start_timestamp, end_timestamp),
  KEY process_id (process_id)
) ENGINE=InnoDB AUTO_INCREMENT=7294932 DEFAULT CHARSET=utf8 COLLATE=utf8_bin

I'm designing a query to obtain the sum of measurements 1, 2, 3 & 4 for all processes running within a group at a particular point in time, so that the app can express each measurement for a specific process as a percentage of the total measurements in the group at that time. The start and end times of processes within a group are not synchronized and they are of variable length. So for a process running in group 5, at timestamp 1431388800:

SELECT SUM(measurement_1), SUM(measurement_2), SUM(measurement_3), SUM(measurement_4)
FROM measurements
WHERE process_group_id = 5
  AND 1431388800 BETWEEN start_timestamp AND end_timestamp

This query runs, but takes around 0.5s. The table has 8m records and grows by about 30,000 a day. I have an index on process_group_id, start_timestamp, end_timestamp. However, the query does not appear to use anything but the process_group_id part of the index. I created an additional index on process_group_id alone to check this, and once created, EXPLAIN showed the query using that index. After some searching, I saw a suggestion to modify the query and add an ORDER BY clause. Having done this, the query is accelerated to around 0.06s and it seems to use the full index. However, I'm unsure as to why:

SELECT process_group_id, SUM(measurement_1), SUM(measurement_2), SUM(measurement_3), SUM(measurement_4)
FROM measurements
WHERE process_group_id = 5
  AND 1431388800 BETWEEN start_timestamp AND end_timestamp
ORDER BY process_group_id ASC

With 30,000 new records a day that require their shares to be calculated, 0.06s is still not particularly fast. Is there a better way of structuring the table or designing the query to make it a few orders of magnitude quicker, or is a query that matches on one column and then does a range query on two others always going to be fairly slow to run?
Esteban (21 rep)
May 12, 2015, 08:51 PM • Last activity: Aug 1, 2025, 06:04 PM
1 votes
1 answers
475 views
aurora mysql index creation challenges on large tables
I have a table of yearly data [size 500GB = ~1 billion rows] in Aurora MySQL 3.02.2. The requirement is to migrate this table to another Aurora MySQL [PROD] using DMS. We initiated the DMS task with data validation enabled and with the indexes & PK already present on the target, which caused slowness for other production processes due to load on the DB. So I did some research and am looking for some suggestions here, please.

*Note: this new table in prod will **NOT** be used by the application until this work is completed and it is renamed to the current table name.*

1. Is loading the data first using DMS and creating the indexes manually later the correct approach on a large table like this? **OR**
2. Should I use DMS task filtering on the "datetime" bigint column to load the data month by month into the new table, where the index build happens as the data is written to the table? This can be done over the course of a few days for each month (assuming we run the DMS task for a few hours each day). **OR** is there any better method?
3. Does index creation generate a lot of temp files on such a large table, which could lead to memory issues, **OR** should I use something like "ALGORITHM=INPLACE" in the CREATE INDEX statement (see the sketch below)?
4. The current related parameter settings I see are "innodb_ddl_buffer_size=1MB" & "innodb_file_per_table=ON". Are there any other parameters I should check?

The target production DB has data written into it 24x7, and I am looking for a way to avoid any slowness or performance issues while this table with its indexes is being copied over.
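For point 3, the online-DDL form being referred to looks roughly like this; the table and column names here are hypothetical, not taken from the actual schema:

-- Sketch: build the secondary index online after the bulk load finishes
-- (InnoDB online DDL; hypothetical table/column names).
ALTER TABLE yearly_data
  ADD INDEX idx_event_datetime (event_datetime),
  ALGORITHM=INPLACE, LOCK=NONE;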
JollyRoger (11 rep)
Nov 30, 2023, 05:15 AM • Last activity: Aug 1, 2025, 08:04 AM
1 votes
2 answers
3309 views
REINDEX takes significantly more time on production system than on local snapshot
We have a table with time-series-like data (~200 million rows, 30 GB data size + index) in a PG 11 database, and for the first time since we started writing into it, we have to delete from it (~30%). To keep things performant after the delete we want to:

- rebuild the indexes
- run a full vacuum on it

To get a better understanding of how much time this will take, I created a dump of the given table, loaded it into a local database, deleted the data, rebuilt the index, and ran a full vacuum:
REINDEX TABLE my_schema.my_table;
VACUUM (FULL, VERBOSE, ANALYZE) my_schema.my_table;
To my surprise, both finished quite fast (2 x 30 min). I know that to get the exact time required to run these commands on the server I would need to back up the filesystem to get the exact same local snapshot as the server and run it on the same hardware. However, from my limited understanding, because we have never deleted or updated rows in the table, the restored local snapshot should be physically very similar to the table on the server, so the operations on the server should take a similar amount of time to finish. This is not what I observe. I deleted the data, and re-indexing took 6x longer than on the local system. Similarly, the vacuum took around 6x longer as well. This is a significant difference that I would never have expected. My local hardware is similar to the server (16 GB vs 20 GB), but with more CPU cores (2 vCPU vs i7 8 vCPU). I have an SSD on my local machine and I don't know what storage the server uses, but I would be surprised if it was non-SSD (we are using Azure Single Server for PostgreSQL). CPU utilization on the server stayed below 10% the whole time, so I think the operation was not CPU bound. (I am just guessing here.) The table contains 4 indexes: 3 of them are single-column BTREE indexes with mostly even distribution (date-time, synced source file ID, consumer ID), plus a compound index over these 3 columns together. Why is the difference so big? Was my local testing flawed?
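One sanity check worth doing before comparing runtimes (not something the original post mentions) is to confirm the restored local copy really has the same physical heap and index size as the production table; a sketch using the names from the post:

SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid)) AS table_size,
       pg_size_pretty(pg_indexes_size(c.oid))  AS index_size
FROM   pg_class c
JOIN   pg_namespace n ON n.oid = c.relnamespace
WHERE  n.nspname = 'my_schema'
  AND  c.relname = 'my_table';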
NoNameProvided (111 rep)
Jan 30, 2022, 04:44 PM • Last activity: Jul 30, 2025, 03:02 PM
0 votes
1 answers
392 views
MariaDB 10.2.22 Galera Cluster Using Bad Indexes After Analyze
I'm using Drupal on a MariaDB Galera 10.2.22 3-node cluster and I'm having some problems with one query. I have a table (InnoDB) called field_data_field_department which has a bunch of indexes. If I dump the 5 tables used in my query and load them into a new DB, the query runs fine using the index field_department_target_id. But once I analyze that table (or MariaDB does an internal stats rebuild), the query stops using that index and uses the PRIMARY index instead. I don't understand why it does this, and the query is MUCH slower using PRIMARY (90-120 seconds vs under 1/10th of a second).

Does anyone know how I can diagnose why the planner is picking the wrong index? Or why ANALYZE-ing the table makes it break? (I would think that would make things better.) The query is dynamically generated by Drupal, so I can't just go into the code and add FORCE INDEX.

Here's the EXPLAIN for the fast result:

id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | file_managed | ref | uri,status,file_type | status | 1 | const | 7403 | Using where; Using temporary; Using filesort
1 | SIMPLE | field_data_field_department | ref | PRIMARY,entity_type,deleted,entity_id | PRIMARY | 391 | const,drupal_authoring.file_managed.fid,const | 1 | Using where
1 | SIMPLE | taxonomy_term_data_field_data_field_department | eq_ref | PRIMARY | PRIMARY | 4 | drupal_authoring.field_data_field_department.field_department_target_id | 1 | Using where; Using index
1 | SIMPLE | taxonomy_term_data_field_data_field_department__field_data_field_department | ref | PRIMARY,entity_type,deleted,field_department_target_id | field_department_target_id | 390 | drupal_authoring.taxonomy_term_data_field_data_field_department.tid,const | 15 | Using where; Using index
1 | SIMPLE | file_usage | ref | PRIMARY,fid_count,fid_module | fid_count | 4 | drupal_authoring.file_managed.fid | 1 | Using index

And for the slow query:

id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
1 | SIMPLE | file_managed | ref | uri,status,file_type | file_type | 152 | const | 7592 | Using index condition; Using where; Using temporary; Using filesort
1 | SIMPLE | field_data_field_department | ref | PRIMARY,entity_type,deleted,entity_id | PRIMARY | 391 | const,drupal_authoring.file_managed.fid,const | 1 | Using where
1 | SIMPLE | taxonomy_term_data_field_data_field_department | eq_ref | PRIMARY | PRIMARY | 4 | drupal_authoring.field_data_field_department.field_department_target_id | 1 | Using where; Using index
1 | SIMPLE | taxonomy_term_data_field_data_field_department__field_data_field_department | ref | PRIMARY,entity_type,deleted,field_department_target_id | PRIMARY | 386 | const | 7985 | Using where
1 | SIMPLE | file_usage | ref | PRIMARY,fid_count,fid_module | fid_count | 4 | drupal_authoring.file_managed.fid | 1 | Using index

And here is the output of SHOW INDEXES:

Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type
field_data_field_department | 0 | PRIMARY | 1 | entity_type | A | 4 | NULL | NULL | | BTREE
field_data_field_department | 0 | PRIMARY | 2 | entity_id | A | 15742 | NULL | NULL | | BTREE
field_data_field_department | 0 | PRIMARY | 3 | deleted | A | 15742 | NULL | NULL | | BTREE
field_data_field_department | 0 | PRIMARY | 4 | delta | A | 15742 | NULL | NULL | | BTREE
field_data_field_department | 0 | PRIMARY | 5 | language | A | 15742 | NULL | NULL | | BTREE
field_data_field_department | 1 | entity_type | 1 | entity_type | A | 4 | NULL | NULL | | BTREE
field_data_field_department | 1 | bundle | 1 | bundle | A | 24 | NULL | NULL | | BTREE
field_data_field_department | 1 | deleted | 1 | deleted | A | 2 | NULL | NULL | | BTREE
field_data_field_department | 1 | entity_id | 1 | entity_id | A | 15742 | NULL | NULL | | BTREE
field_data_field_department | 1 | revision_id | 1 | revision_id | A | 15742 | NULL | NULL | YES | BTREE
field_data_field_department | 1 | language | 1 | language | A | 2 | NULL | NULL | | BTREE
field_data_field_department | 1 | field_department_target_id | 1 | field_department_target_id | A | 715 | NULL | NULL | | BTREE

Any ideas? Even if I can just do something to stop MariaDB from analyzing the table, that would help. I can dump and reload it as a temp fix, but that only lasts for a little while.
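One knob that relates to the "stop MariaDB from analyzing the table" part: InnoDB's persistent-statistics options can be set per table so that statistics are only rebuilt when ANALYZE TABLE is run explicitly. A sketch, not a tested fix for this schema:

-- Sketch: keep InnoDB from recalculating this table's persistent stats automatically,
-- so the plan only changes when ANALYZE TABLE is run by hand.
ALTER TABLE field_data_field_department STATS_PERSISTENT=1, STATS_AUTO_RECALC=0;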
Charles Tassell (9 rep)
Feb 23, 2019, 03:58 AM • Last activity: Jul 28, 2025, 10:04 AM
0 votes
1 answers
725 views
MongoDB Indexing - How many fields should I index?
I have a Collection in my db, called Post. Here's how it looks:

Post {
    user: ,
    school: ,
    hashtag: ,
    numberOfReports: ,
    viewRanking: ,
    type: ,
    isPollOfTheDay: ,
    createdAt: ,
    endsAt:
}

(Obviously it also contains other, unrelated fields.) This Post collection is queried by 5 different screens in my app: **School**, **Hashtag**, **All Polls**, **Discover** & **Profile**. All queries are very similar to each other, but they differ. Let's look at them individually:

> School

Here I have 2 queries:

1. I compare by school.name (equal to), numberOfReports (less than), user.id (for blocks checking (not contained in)) and lastly, we sort in descending order either by createdAt or viewRanking (depends on the user)
2. I compare by isPollOfTheDay (equal to true) and endsAt (greater than or equal to)

> Hashtag

I compare by hashtag.id (equal to), numberOfReports (less than), user.id (for blocks checking (not contained in)) and lastly, we sort in descending order either by createdAt or viewRanking (depends on the user)

> All Polls

I compare by type (equal to), isPollOfTheDay (equal to false), numberOfReports (less than), user.id (for blocks checking (not contained in)) and lastly, we sort in descending order either by createdAt or viewRanking (depends on the user)

> Discover

I compare by school.name (not equal to), type (equal to), numberOfReports (less than), user.id (for blocks checking (not contained in)) and lastly, we sort in descending order by createdAt

> Profile

I compare by user.id (equal to) and we sort in descending order by createdAt

These are all of my queries! I can say that they are called with roughly the same frequency. My question is: should I just index all 9 fields? Are they too many to index? Should I ignore the isPollOfTheDay field, since it's a Boolean? (I've read that we shouldn't index Booleans.)

EDIT: Every document occupies about 200 bytes. We currently have 25K documents, growing at a pace of ~300/day. The only fields that can change are viewRanking and numberOfReports; the first one changes often, whereas the second changes far less often! The selectivity of each query is high (I think), since the needed documents are found mainly by their first comparison. There are about 50 different school.names & another 50 hashtag.ids.
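For illustration only (not a tested recommendation), a compound index for one of these screens would look like the following in the mongo shell, roughly following the usual equality–sort–range ordering; the field names come from the question, but the choice and order of fields is an assumption:

// Sketch for the School screen: equality field first, then the sort field, then the range field.
db.Post.createIndex(
    { "school.name": 1, "createdAt": -1, "numberOfReports": 1 }
)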
Sotiris Kaniras (151 rep)
Jan 9, 2019, 08:09 PM • Last activity: Jul 26, 2025, 05:09 PM
1 votes
1 answers
735 views
Simple query with a single join very slow
I have this very slow, simple query that joins a large table (~180M rows) with a smaller table (~60k rows) with a foreign key, filtering an indexed column on the smaller table, ordering by the primary key in the larger table, and then taking the 25 latest rows. The EXPLAIN shows Using index; Using temporary; Using filesort on the smaller table. Why? Engine: MySQL 5.7. Query:
SELECT
    order.id,
    order.company_id,
    order.total
FROM
    order
INNER JOIN
    company ON company.id = order.company_id
WHERE
    company.company_headquarter_id = 23133
ORDER BY order.id DESC
LIMIT 25;
+----+-------------+------------+------------+------+---------------------------------------+----------------------------+---------+-----------------------+------+----------+----------------------------------------------+
| id | select_type | table      | partitions | type | possible_keys                         | key                        | key_len | ref                   | rows | filtered | Extra                                        |
+----+-------------+------------+------------+------+---------------------------------------+----------------------------+---------+-----------------------+------+----------+----------------------------------------------+
|  1 | SIMPLE      | company    | NULL       | ref  | PRIMARY,company_headquarter_id_idx    | company_headquarter_id_idx | 8       | const                 |    6 |   100.00 | Using index; Using temporary; Using filesort |
|  1 | SIMPLE      | order      | NULL       | ref  | company_id_idx                        | company_id_idx             | 8       | company.id            |  381 |   100.00 | NULL                                         |
+----+-------------+------------+------------+------+---------------------------------------+----------------------------+---------+-----------------------+------+----------+----------------------------------------------+
CREATE TABLE order (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  company_id bigint(20) NOT NULL,
  total double(18,2) NOT NULL,
  PRIMARY KEY (id),
  KEY company_id_idx (company_id),
  CONSTRAINT company_id_fk FOREIGN KEY (company_id) REFERENCES company (id)
) ENGINE=InnoDB AUTO_INCREMENT=186518644 DEFAULT CHARSET=latin1

CREATE TABLE company (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  company_headquarter_id bigint(20) NOT NULL,
  name varchar(100) NOT NULL,
  PRIMARY KEY (id),
  KEY company_headquarter_id_idx (company_headquarter_id),
  CONSTRAINT company_headquarter_id_fk FOREIGN KEY (company_headquarter_id) REFERENCES company_headquarter (id)
) ENGINE=InnoDB AUTO_INCREMENT=60825 DEFAULT CHARSET=latin1

CREATE TABLE company_headquarter (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  name varchar(100) NOT NULL,
  phone varchar(10) DEFAULT NULL,
  address_id bigint(20) NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY name (name),
  KEY address_id_idx (address_id),
  CONSTRAINT address_id_fk FOREIGN KEY (address_id) REFERENCES address (id)
) ENGINE=InnoDB AUTO_INCREMENT=43862 DEFAULT CHARSET=latin1

CREATE TABLE address (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  street_address varchar(100) DEFAULT NULL,
  zip varchar(7) DEFAULT NULL,
  state varchar(2) DEFAULT NULL,
  city varchar(50) DEFAULT NULL,
  country varchar(10) DEFAULT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB AUTO_INCREMENT=147360955 DEFAULT CHARSET=latin1
The query becomes faster when I:

* Remove the ORDER BY clause.
* Filter company.company_headquarter_id with a company_headquarter_id that has a smaller number of orders (company_headquarter_id = 23133 has ~3M rows in the order table).
* Split it into two separate queries:

First:
SELECT
    company.id
FROM
    company
WHERE
    company.company_headquarter_id = 23133;
Second:
SELECT
    order.id,
    order.company_id,
    order.total
FROM
    order
WHERE
    order.company_id IN (20122, 50729, 50730, 50731, 50732, 50733)  /* From first query */
ORDER BY order.id DESC
LIMIT 25;
Any ideas? Thank you.

EDIT: When I do:
SELECT STRAIGHT_JOIN
    order.id,
    order.company_id,
    order.total
FROM
    order
INNER JOIN
    company ON company.id = order.company_id
WHERE
    company.company_headquarter_id = 23133
ORDER BY order.id DESC
LIMIT 25;
The query is much faster and EXPLAIN shows a temporary table is not created.
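For what it's worth, the two-step workaround can also be written as a single statement; this is just a sketch of that rewrite, with no guarantee that 5.7 chooses a better plan for it than for the original join:

SELECT o.id, o.company_id, o.total
FROM `order` o
WHERE o.company_id IN (SELECT c.id
                       FROM company c
                       WHERE c.company_headquarter_id = 23133)
ORDER BY o.id DESC
LIMIT 25;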
flyingdutchman (11 rep)
Apr 23, 2022, 05:42 PM • Last activity: Jul 23, 2025, 04:07 PM
1 votes
2 answers
2709 views
Find MySQL InnoDB index corruption
I have a large MySQL InnoDB table on my localhost. In this table the primary key is id, and a unique index is set on fields a, b, c. When I run the query:

SELECT * FROM mytable WHERE a = 1 AND b = 2 AND c = 3;

the result is an empty set, but such a row certainly exists in the table. When I add "AND id = 1234" at the end (where 1234 is the id of the expected result):

SELECT * FROM mytable WHERE a = 1 AND b = 2 AND c = 3 AND id = 1234;

it returns 1 row, as expected. The only explanation is that the unique index is corrupted. But when I tried to check the table with:

mysqlcheck -c mydb mytable

or with:

ANALYZE table mydb.mytable;

both checks returned OK with no errors. So the question is: how can I identify this error in the index? Is there a way to find this error with some query? The MySQL version is 5.7.23, and there are no errors in error.log.

----------

**Edit:** It turned out that after creating an empty table mytable1 with the same structure and filling it with data by the query:

INSERT INTO mytable1 (SELECT * FROM mytable);

this error is present in mytable1 as well, which is impossible in the case of index corruption. After some research I found that the trouble is in field c, which has the timestamp type. The actual table structure is:

CREATE TABLE mytable (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  a int(10) unsigned NOT NULL,
  b int(10) unsigned NOT NULL,
  c timestamp NOT NULL DEFAULT '1999-01-01 00:00:00',
  d tinyint(1) NOT NULL DEFAULT '0',
  created timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  updated timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (id),
  UNIQUE KEY index2 (a,b,c,d)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

There are screenshots of the mysql queries: screenshot1 (the third query differs from the first by 'LIKE' instead of '='). I don't know why the first query returned an empty set; it looks like some bug in MySQL related to the timestamp field. I ran a query which fixed this error: screenshot2

UPDATE mytable SET c = FROM_UNIXTIME(UNIX_TIMESTAMP(c));

After this update the problem query began to work fine. But actually, this update should do nothing.

----------

**Edit 2:** I have found that the cause of this problem is the clock change in Russia on October 26, 2014, because my local settings were:

mysql> select @@global.time_zone, @@session.time_zone;
SYSTEM | SYSTEM

so on my localhost the datetime '2014-10-26 01:00:00' had two timestamps:

mysql> select from_unixtime(1414270800);
2014-10-26 01:00:00
mysql> select from_unixtime(1414274400);
2014-10-26 01:00:00

So a select by value returned an empty set:

mysql> select * from test_db.mytable where c = '2014-10-26 01:00:00';
Empty set

but a select by id returned correct data:

mysql> select c from test_db.mytable where id = 316572297;
2014-10-26 01:00:00

I changed default-time-zone in my.cnf to '+03:00', and everything began to work fine.

Note: to reproduce the error you must also have an index on field c. Without it, a select by value c = ... works fine.
dima.rus (119 rep)
Oct 17, 2018, 10:29 AM • Last activity: Jul 23, 2025, 09:07 AM
0 votes
1 answers
407 views
Poor performance on MySQL 5.7 on joined tables (scanning wrong table?)
I have a join over two tables (News, Publishers) with indexes. The query works fine on MySQL 5.5. After I upgraded one of the servers to MySQL 5.7, I started noticing high load, high CPU, and slow queries. A query taking almost 0.00 seconds on 5.5 takes 2 to 5 seconds on MySQL 5.7.

Query:

SELECT news.id
FROM news, publishers
WHERE news.publisher_id = publishers.id
  AND publishers.language = 'en'
ORDER BY date_added DESC
LIMIT 10;

I tried to figure out what happens with EXPLAIN, and here are my findings:

MySQL 5.5

+----+-------------+------------+--------+------------------+----------------+---------+---------------------------------+------+-------------+
| id | select_type | table      | type   | possible_keys    | key            | key_len | ref                             | rows | Extra       |
+----+-------------+------------+--------+------------------+----------------+---------+---------------------------------+------+-------------+
|  1 | SIMPLE      | news       | index  | idx_publisher_id | idx_date_added | 9       | NULL                            |   10 |             |
|  1 | SIMPLE      | publishers | eq_ref | PRIMARY          | PRIMARY        | 8       | klsescre_klse.news.publisher_id |    1 | Using where |
+----+-------------+------------+--------+------------------+----------------+---------+---------------------------------+------+-------------+

MySQL 5.7

+----+-------------+------------+------------+-------+------------------+------------------+---------+-----------------------------+------+----------+------------------------------------------------------------+
| id | select_type | table      | partitions | type  | possible_keys    | key              | key_len | ref                         | rows | filtered | Extra                                                      |
+----+-------------+------------+------------+-------+------------------+------------------+---------+-----------------------------+------+----------+------------------------------------------------------------+
|  1 | SIMPLE      | publishers | NULL       | index | PRIMARY          | NULL             | 277     | NULL                        |   47 |    10.00 | Using where; Using index; Using temporary; Using filesort  |
|  1 | SIMPLE      | news       | NULL       | ref   | idx_publisher_id | idx_publisher_id | 8       | klsescre_klse.publishers.id | 4962 |   100.00 | NULL                                                       |
+----+-------------+------------+------------+-------+------------------+------------------+---------+-----------------------------+------+----------+------------------------------------------------------------+

My guess is that in 5.7 MySQL scans the PUBLISHERS table before NEWS, thus not making use of the index I created for NEWS, which makes the query much slower. Can anyone help me with this? How can I make MySQL 5.7 scan the tables like 5.5 does?
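One way to pin the 5.5 join order (news first, then publishers) is STRAIGHT_JOIN; a sketch of the same query with the join order forced, not a guarantee that it restores the old performance:

SELECT STRAIGHT_JOIN news.id
FROM news
JOIN publishers ON publishers.id = news.publisher_id
WHERE publishers.language = 'en'
ORDER BY date_added DESC
LIMIT 10;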
neobie (149 rep)
May 30, 2017, 11:17 AM • Last activity: Jul 19, 2025, 08:02 PM
2 votes
1 answers
147 views
Performance, RAM improvement with partitions
I have a table like:
create table table_a
(
    id_a      mediumint unsigned   not null,
    id_b      tinyint unsigned     not null,
    id_c      char(10)             not null,
    value     tinyint unsigned     not null,
    TYPE      enum ('a', 'b', 'c') not null,
    DATE      date                 not null,
    constraint `unique`
        unique (id_b, id_c, id_a, TYPE, DATE),
    constraint fk_id_b_id_c
        foreign key (id_b, id_c) references table_b (id_b, id_c)
            on delete cascade,
    constraint id_a
        foreign key (id_a) references table_c (ID)
            on delete cascade
);

create index table_a_value_index
    on table_a (value);

create index table_a_new
    on table_a (id_b, id_c, DATE, id_a);

create index mks_date
    on table_a (DATE);
Now I would like to create partitions, but I am not sure how they will impact the RAM usage or the performance. I have tried the partitions like:
ALTER TABLE table_a
 PARTITION BY RANGE (to_days(date)) (
    PARTITION 2021_H1 VALUES LESS THAN (to_days('2021-07-01')),
    PARTITION 2021_H2 VALUES LESS THAN (to_days('2022-01-01')),
    PARTITION 2022_H1 VALUES LESS THAN (to_days('2022-07-01')),
    PARTITION current VALUES LESS THAN (MAXVALUE)
 ) ;
But that didn't bring the expected RAM or performance improvement. My research has shown that smaller partitions could be better, but that you should not have more than 50 partitions. If I partitioned by month, I could store the last 4 years (given the recommended maximum of 50 partitions), which would be enough. But how much would that impact my RAM usage and/or the performance? As far as I understand, the partitions are treated as separate tables; does that mean that each partition will have its own indexes? The table has a size of 20GB+, but the indexes are 40GB+. It would be beneficial to reduce the loaded index size. The most used indexes are *unique* and *table_a_new*. The filter for *date* is a specific date or a range of 6 months. It is fine that I will lose my foreign keys.
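In MySQL/MariaDB every index on a partitioned table is local, so each partition carries its own copy of every index tree. One way to see how data and index volume actually split across the partitions already created is the following; a sketch using the table name from the question:

SELECT PARTITION_NAME,
       TABLE_ROWS,
       ROUND(DATA_LENGTH  / 1024 / 1024) AS data_mb,
       ROUND(INDEX_LENGTH / 1024 / 1024) AS index_mb
FROM   information_schema.PARTITIONS
WHERE  TABLE_NAME = 'table_a';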
Zystrix (45 rep)
Dec 5, 2022, 11:16 AM • Last activity: Jul 17, 2025, 03:09 AM
1 votes
2 answers
153 views
Efficient approach for filtering, paging and sorting across multiple tables with multiple key columns
I have a query that retrieves data from a complex data structure involving multiple tables. The data can be filtered using a tree component that allows selection of different keys from different tables, sometimes filtering across two or three tables at once (for example filtering on OrganisationID a...
Sotem (19 rep)
Jun 2, 2025, 11:53 AM • Last activity: Jul 16, 2025, 07:51 AM
0 votes
1 answers
149 views
Why do insert statements get slow when there are too many indexes?
I am seeing slowness in my insert statements when working with MongoDB, and when I did a quick investigation, I thought this might be happening because the table has 11 indexes. I also know that when I insert one record into a table that has indexes, the database has to update all related indexes before informing me that the insertion is complete. But my question is: **why does the database only report the insert/update statement as completed after all indexes have also been inserted or updated? Why isn't insertion into the main table enough, with the indexes updated later (on a different thread, for example)?** I hope my question is clear enough, help please :)
سجاد أحمد (1 rep)
Oct 2, 2021, 09:37 AM • Last activity: Jul 16, 2025, 03:09 AM
0 votes
2 answers
175 views
Non-clustered index covering a clustered index
I have a table that has the following indexes:

**IXC**: a clustered index that includes a single column: A asc.
**IX1**: a nonclustered index that includes two columns: A asc, B asc.

In this case IXC seems like a duplicate of IX1. Is it a good idea to delete IXC and make IX1 the clustered index in that case? Or are there scenarios where this would be a bad idea?

DDL example:

CREATE TABLE [dbo].[foo]
(
    [A] NVARCHAR (20) NOT NULL,
    [B] INT NULL,
    [C] NVARCHAR (20) NOT NULL,
    [D] NVARCHAR (20) NOT NULL,
    [E] NVARCHAR (20) NOT NULL
);
GO
CREATE CLUSTERED INDEX [IXC] ON [dbo].[foo]([A] ASC);
GO
CREATE NONCLUSTERED INDEX [IX1] ON [dbo].[foo]([A] ASC, [B] ASC);
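For illustration only, this is roughly what the proposed change would look like in T-SQL; whether it is a good idea depends on how the table is queried, not on this sketch:

-- Sketch of the proposed change: drop both indexes and cluster on (A, B) instead.
DROP INDEX [IX1] ON [dbo].[foo];
DROP INDEX [IXC] ON [dbo].[foo];
GO
CREATE CLUSTERED INDEX [IX1] ON [dbo].[foo]([A] ASC, [B] ASC);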
Kimo (229 rep)
Jan 16, 2020, 07:10 AM • Last activity: Jul 15, 2025, 12:43 PM
0 votes
1 answers
172 views
Optimizing PostgreSQL Query with DISTINCT ON, Filtering, and Ordering
I'm working on optimizing a query for a PostgreSQL database used in an educational platform. The query retrieves distinct question_id values along with their is_seen status from the question_interaction table based on specific user, customer_id, student_id, and tag criteria. Here's the query:
SELECT DISTINCT ON (question_id) question_id, is_seen
FROM question_interaction
WHERE user_id = 'USR234342'
  AND customer_id = '39874513-8bb1-461e-a73f-9e73fa31870d'
  AND student_id = '8179483214'
  AND tags @> ARRAY ['history']::TEXT[]
  AND is_seen IS NOT NULL
ORDER BY question_id, timestamp DESC;
Here is the selectivity order of the columns:
question_id > student_id > user_id > customer_id
*Can someone provide guidance on how to create an effective index for this query?* Specifically, I'm looking for recommendations on:

- The choice of columns for the index
- The order of columns in the index
- Any additional conditions to consider
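For illustration, one possible shape for such an index puts the equality-filtered columns first, keeps the partial predicate from the query, and ends with the columns needed for the DISTINCT ON ordering; the exact column order here is an assumption rather than a recommendation, and the tags @> filter would still need a separate GIN index on tags if it is selective:

-- Sketch only: equality columns, then question_id and timestamp to serve
-- DISTINCT ON (question_id) ... ORDER BY question_id, timestamp DESC.
CREATE INDEX question_interaction_user_cust_student_q_ts_idx
    ON question_interaction (user_id, customer_id, student_id, question_id, "timestamp" DESC)
    WHERE is_seen IS NOT NULL;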
sujeet (257 rep)
Aug 21, 2023, 07:06 AM • Last activity: Jul 13, 2025, 09:04 PM
3 votes
0 answers
47 views
Do we need an index on "order by" if the "where" clause already has an index (small data set)?
Assume I have a very large table X and I want to run the SQL below.

select * from table X where type='X1234' order by time;

type is not unique and is indexed (with high cardinality), and there are very few rows per type (assume 1 to 5). The data size of each row is about 50 bytes to 50 KB. Do I still need an index for the time column?

I think that although the table is very large, the "where" clause should be executed before "order by", which will result in a small dataset, so no index is needed for the column time. Am I right?

----------------------

Update: I have tested with a composite index (type, time) and with a single index (type); the EXPLAIN reports are shown below.

Composite index:

+----+------+-------+------+----------+------------------------+
| id | type | ref   | rows | filtered | Extra                  |
+----+------+-------+------+----------+------------------------+
|  1 | ref  | const |    3 |   100.00 | Using index condition  |
+----+------+-------+------+----------+------------------------+

Single index:

+----+------+-------+------+----------+----------------------------------------+
| id | type | ref   | rows | filtered | Extra                                  |
+----+------+-------+------+----------+----------------------------------------+
|  1 | ref  | const |    3 |   100.00 | Using index condition; Using filesort  |
+----+------+-------+------+----------+----------------------------------------+

There is no noticeable difference in execution time. It is likely that the dataset is small and the filesort operation was performed entirely in the buffer cache, so it is still very fast. In case the dataset becomes larger, using the composite index should be a good idea.
user8100646 (101 rep)
May 30, 2018, 03:17 PM • Last activity: Jul 12, 2025, 06:39 PM
1 votes
1 answers
157 views
What and where to index on a rapidly increasing Postgres table
I work as a software engineer with Python and Django. Currently I am struggling with a design choice made before my time. We have a transaction table that logs all customer activity. Due to the success of the platform, the data in the table is rapidly increasing, and I have issues getting the query time down to a manageable size. I have this example of a query that runs extremely slowly. I guess some good indexing could do the job, but I don't really know where to start. I would love some tips on how to help myself (any quality posts/books or other resources) or how to solve this problem. If somehow possible, I would like to not make manual queries and just use the ORM. The * in the SELECT is there to make the query more readable.

SELECT *
FROM "customer_customer"
INNER JOIN "customer_transaction"
        ON ("customer_customer"."id" = "customer_transaction"."customer_id")
WHERE ("customer_customer"."status" = 1
       AND NOT ("customer_customer"."id" IN (
           SELECT U1."customer_id" AS Col1
           FROM "customer_transaction" U1
           WHERE U1."transaction_type" IN (30, 14)
       ))
       AND "customer_transaction"."date" >= '2018-05-11 11:01:43.598530+02:00')

As asked in the comments, here is additional info: I am currently running the commands on my local computer, and the query is generated by the ORM.

Create of the customer table:

-- PostgreSQL database dump
-- Dumped from database version 9.6.0
-- Dumped by pg_dump version 9.6.0

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;

-- Name: customer_customer; Type: TABLE; Schema: public; Owner: postgres
CREATE TABLE customer_customer (
    id integer NOT NULL,
    firstname character varying(63),
    phone character varying(31),
    terms_accepted boolean NOT NULL,
    user_id integer,
    cashed_vip_points integer NOT NULL,
    vip_points integer NOT NULL,
    receive_mail boolean NOT NULL,
    mailchimp_email character varying(254),
    image character varying(100),
    image_thumb character varying(100),
    favorite_hash character varying(31),
    has_accepted_favorite_hint boolean NOT NULL,
    address_id integer,
    blog_url character varying(200),
    instagram_username character varying(200),
    overrule_default_vip_points integer,
    status integer NOT NULL,
    signature boolean,
    signature_date date,
    store_id_id integer,
    shopping_mail boolean NOT NULL,
    CONSTRAINT customer_customer_overrule_default_vip_points_check CHECK ((overrule_default_vip_points >= 0))
);
ALTER TABLE customer_customer OWNER TO postgres;

-- Name: customer_customer_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres
CREATE SEQUENCE customer_customer_id_seq START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE CACHE 1;
ALTER TABLE customer_customer_id_seq OWNER TO postgres;

-- Name: customer_customer_id_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: postgres
ALTER SEQUENCE customer_customer_id_seq OWNED BY customer_customer.id;

-- Name: customer_customer id; Type: DEFAULT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_customer ALTER COLUMN id SET DEFAULT nextval('customer_customer_id_seq'::regclass);

-- Name: customer_customer customer_customer_address_id_key; Type: CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_customer_address_id_key UNIQUE (address_id);

-- Name: customer_customer customer_customer_favorite_hash_key; Type: CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_customer_favorite_hash_key UNIQUE (favorite_hash);

-- Name: customer_customer customer_customer_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_customer_pkey PRIMARY KEY (id);

-- Name: customer_customer customer_customer_user_id_key; Type: CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_customer_user_id_key UNIQUE (user_id);

-- Name: customer_customer_211f6852; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_customer_211f6852 ON customer_customer USING btree (store_id_id);

-- Name: customer_customer customer_custo_address_id_41aab9497590bc7_fk_address_address_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_custo_address_id_41aab9497590bc7_fk_address_address_id FOREIGN KEY (address_id) REFERENCES address_address(id) DEFERRABLE INITIALLY DEFERRED;

-- Name: customer_customer customer_custom_store_id_id_67b8071f917b6245_fk_stores_store_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_custom_store_id_id_67b8071f917b6245_fk_stores_store_id FOREIGN KEY (store_id_id) REFERENCES stores_store(id) DEFERRABLE INITIALLY DEFERRED;

-- Name: customer_customer customer_customer_user_id_482ced6557101913_fk_auth_user_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_customer_user_id_482ced6557101913_fk_auth_user_id FOREIGN KEY (user_id) REFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED;

-- PostgreSQL database dump complete

And of the transaction table:

-- PostgreSQL database dump
-- Dumped from database version 9.6.0
-- Dumped by pg_dump version 9.6.0

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SET check_function_bodies = false;
SET client_min_messages = warning;
SET row_security = off;
SET search_path = public, pg_catalog;
SET default_tablespace = '';
SET default_with_oids = false;

-- Name: customer_transaction; Type: TABLE; Schema: public; Owner: postgres
CREATE TABLE customer_transaction (
    id integer NOT NULL,
    points integer,
    transaction_type integer NOT NULL,
    customer_id integer NOT NULL,
    date timestamp with time zone NOT NULL,
    product_id integer,
    fotostream_entry_id integer,
    acommit_transaction_id character varying(36),
    amount numeric(6,2),
    has_storno_id integer,
    merged_customernumber_id integer,
    message_de character varying(255),
    message_fr character varying(255),
    points_befor_migration integer,
    store_id integer,
    storno_from_id integer,
    user_id integer,
    _transaction_type_messages_id integer,
    aac_import_row character varying(5000)
);
ALTER TABLE customer_transaction OWNER TO postgres;

-- Name: customer_transaction_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres
CREATE SEQUENCE customer_transaction_id_seq START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE CACHE 1;
ALTER TABLE customer_transaction_id_seq OWNER TO postgres;

-- Name: customer_transaction_id_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: postgres
ALTER SEQUENCE customer_transaction_id_seq OWNED BY customer_transaction.id;

-- Name: customer_transaction id; Type: DEFAULT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_transaction ALTER COLUMN id SET DEFAULT nextval('customer_transaction_id_seq'::regclass);

-- Name: customer_transaction customer_transaction_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_transaction ADD CONSTRAINT customer_transaction_pkey PRIMARY KEY (id);

-- Name: customer_transacti_acommit_transaction_id_71030b1b69b97709_like; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transacti_acommit_transaction_id_71030b1b69b97709_like ON customer_transaction USING btree (acommit_transaction_id varchar_pattern_ops);

-- Name: customer_transacti_acommit_transaction_id_71030b1b69b97709_uniq; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transacti_acommit_transaction_id_71030b1b69b97709_uniq ON customer_transaction USING btree (acommit_transaction_id);

-- Name: customer_transaction_7473547c; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_7473547c ON customer_transaction USING btree (store_id);

-- Name: customer_transaction_928570bc; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_928570bc ON customer_transaction USING btree (merged_customernumber_id);

-- Name: customer_transaction_9524d7ad; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_9524d7ad ON customer_transaction USING btree (_transaction_type_messages_id);

-- Name: customer_transaction_9bea82de; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_9bea82de ON customer_transaction USING btree (product_id);

-- Name: customer_transaction_b65a298f; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_b65a298f ON customer_transaction USING btree (fotostream_entry_id);

-- Name: customer_transaction_cb24373b; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_cb24373b ON customer_transaction USING btree (customer_id);

-- Name: customer_transaction_d9b62ea2; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_d9b62ea2 ON customer_transaction USING btree (storno_from_id);

-- Name: customer_transaction_date_bd33b3ac; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_date_bd33b3ac ON customer_transaction USING btree (date);

-- Name: customer_transaction_e8701ad4; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_e8701ad4 ON customer_transaction USING btree (user_id);

-- Name: customer_transaction_f2c0da2f; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_f2c0da2f ON customer_transaction USING btree (has_storno_id);

-- Name: customer_transaction_transaction_type_36582b63; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_transaction_type_36582b63 ON customer_transaction USING btree (transaction_type);

-- Name: customer_transaction_transaction_type_custome_3619995d_idx; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_transaction_type_custome_3619995d_idx ON customer_transaction USING btree (transaction_type, customer_id, date);

-- Name: customer_transaction_transaction_type_customer_id_3eb6f7d0_idx; Type: INDEX; Schema: public; Owner: postgres
CREATE INDEX customer_transaction_transaction_type_customer_id_3eb6f7d0_idx ON customer_transaction USING btree (transaction_type, customer_id);

-- Name: customer_transaction D4d691342aa107b3b4fb5a167936d123; Type: FK CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_transaction ADD CONSTRAINT "D4d691342aa107b3b4fb5a167936d123" FOREIGN KEY (merged_customernumber_id) REFERENCES customer_customernumber(id) DEFERRABLE INITIALLY DEFERRED;

-- Name: customer_transaction D6e0c79ad7a40ca02054ed28c4d6999c; Type: FK CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_transaction ADD CONSTRAINT "D6e0c79ad7a40ca02054ed28c4d6999c" FOREIGN KEY (_transaction_type_messages_id) REFERENCES customer_transactiontype(id) DEFERRABLE INITIALLY DEFERRED;

-- Name: customer_transaction D9460b882ac4401f6adf8077475229ed; Type: FK CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_transaction ADD CONSTRAINT "D9460b882ac4401f6adf8077475229ed" FOREIGN KEY (fotostream_entry_id) REFERENCES fotostream_fotostreamentry(id) DEFERRABLE INITIALLY DEFERRED;

-- Name: customer_transaction cust_storno_from_id_6a48315f632674fa_fk_customer_transaction_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_transaction ADD CONSTRAINT cust_storno_from_id_6a48315f632674fa_fk_customer_transaction_id FOREIGN KEY (storno_from_id) REFERENCES customer_transaction(id) DEFERRABLE INITIALLY DEFERRED;

-- Name: customer_transaction custo_has_storno_id_116b248645f7fd59_fk_customer_transaction_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_transaction ADD CONSTRAINT custo_has_storno_id_116b248645f7fd59_fk_customer_transaction_id FOREIGN KEY (has_storno_id) REFERENCES customer_transaction(id) DEFERRABLE INITIALLY DEFERRED;

-- Name: customer_transaction customer_product_id_428b30409c797b6b_fk_lookbook_productbase_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_transaction ADD CONSTRAINT customer_product_id_428b30409c797b6b_fk_lookbook_productbase_id FOREIGN KEY (product_id) REFERENCES lookbook_productbase(id) DEFERRABLE INITIALLY DEFERRED;

-- Name: customer_transaction customer_t_customer_id_7962b09af88fe147_fk_customer_customer_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_transaction ADD CONSTRAINT customer_t_customer_id_7962b09af88fe147_fk_customer_customer_id FOREIGN KEY (customer_id) REFERENCES customer_customer(id) DEFERRABLE INITIALLY DEFERRED;

-- Name: customer_transaction customer_transacti_store_id_4014f4c86692b54b_fk_stores_store_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_transaction ADD CONSTRAINT customer_transacti_store_id_4014f4c86692b54b_fk_stores_store_id FOREIGN KEY (store_id) REFERENCES stores_store(id) DEFERRABLE INITIALLY DEFERRED;

-- Name: customer_transaction customer_transaction_user_id_3497aca364c6a472_fk_auth_user_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres
ALTER TABLE ONLY customer_transaction ADD CONSTRAINT customer_transaction_user_id_3497aca364c6a472_fk_auth_user_id FOREIGN KEY (user_id) REFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED;

-- PostgreSQL database dump complete

And here is the execution plan:

Merge Join  (cost=14112.60..11422372541.38 rows=505000 width=671) (actual time=1351.692..16713328.367 rows=146436 loops=1)
  Merge Cond: (customer_customer.id = customer_transaction.customer_id)
  Buffers: shared hit=1922626 read=720356, temp read=98686908 written=1364
  ->  Index Scan using customer_customer_pkey on customer_customer  (cost=14096.99..11422109016.84 rows=81175 width=121) (actual time=1342.257..16649665.313 rows=35553 loops=1)
        Filter: ((status = 1) AND (NOT (SubPlan 1)))
        Rows Removed by Filter: 309213
        Buffers: shared hit=156156 read=72783, temp read=98686908 written=1364
        SubPlan 1
          ->  Materialize  (cost=14096.57..78342.29 rows=805641 width=4) (actual time=0.007..52.406 rows=356853 loops=161642)
                Buffers: shared hit=1667 read=25275, temp read=98686908 written=1364
                ->  Bitmap Heap Scan on customer_transaction u1  (cost=14096.57..71166.08 rows=805641 width=4) (actual time=147.297..485.822 rows=797943 loops=1)
                      Recheck Cond: (transaction_type = ANY ('{30,14}'::integer[]))
                      Heap Blocks: exact=24756
                      Buffers: shared hit=1667 read=25275
                      ->  Bitmap Index Scan on customer_transaction_transaction_type_customer_id_3eb6f7d0_idx  (cost=0.00..13895.16 rows=805641 width=0) (actual time=140.944..140.944 rows=797943 loops=1)
                            Index Cond: (transaction_type = ANY ('{30,14}'::integer[]))
                            Buffers: shared hit=1 read=2185
  ->  Index Scan using customer_transaction_cb24373b on customer_transaction  (cost=0.43..252918.54 rows=2144835 width=550) (actual time=0.039..63012.608 rows=2143881 loops=1)
        Filter: (date >= '2018-05-11 11:01:43.59853+02'::timestamp with time zone)
        Rows Removed by Filter: 1048039
        Buffers: shared hit=1766470 read=647573

Planning time: 16.013 ms
Execution time: 16713362.490 ms
Phteven (111 rep)
Nov 7, 2018, 10:38 AM • Last activity: Jul 12, 2025, 03:03 PM
0 votes
1 answers
157 views
MySQL Finding index size for compound unique indexes
I'm using the following query to get index and data sizes for the tables in a database. But unfortunately it doesn't show the index size for compound unique indexes. How can I get index sizes for compound unique indexes? Here is my query:

SELECT TABLE_NAME AS "Table",
       TABLE_ROWS AS "Rows",
       CONCAT((FORMAT((DATA_LENGTH) / POWER(1024,2),2)), ' Mb') AS "Data Size",
       CONCAT((FORMAT((INDEX_LENGTH) / POWER(1024,2),2)), ' Mb') AS "Index Size",
       CONCAT((FORMAT((DATA_LENGTH + INDEX_LENGTH) / POWER(1024,2),2)), ' Mb') AS "Total Size",
       TRIM(TRAILING ', ' FROM CONCAT_WS(', ', ENGINE, TABLE_COLLATION, CREATE_OPTIONS)) AS "Type"
FROM information_schema.TABLES
WHERE information_schema.TABLES.table_schema = "my_database_name_here"
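information_schema.TABLES only exposes a single INDEX_LENGTH total per table, so per-index numbers have to come from somewhere else. If the tables are InnoDB, one option (a sketch, MySQL 5.6+; approximate sizes based on persistent statistics) is:

-- Sketch: approximate per-index sizes, including compound unique indexes.
SELECT database_name,
       table_name,
       index_name,
       ROUND(SUM(stat_value) * @@innodb_page_size / POWER(1024, 2), 2) AS size_mb
FROM mysql.innodb_index_stats
WHERE stat_name = 'size'
  AND database_name = 'my_database_name_here'
GROUP BY database_name, table_name, index_name;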
she hates me (103 rep)
Oct 10, 2015, 04:52 PM • Last activity: Jul 11, 2025, 10:06 PM
Showing page 1 of 20 total questions