Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
776
views
ERROR 1071 (42000): Specified key was too long; max key length is 1000 bytes
An online optimization tool suggested adding indexes. One of the indexes was created successfully, but I can't create the second one.
Do you think the indexes are going to reduce the query run time?
Any ideas how I can get past this error?
The SQL query takes 10.1 secs and has over 380 executions in the slow log.
SELECT
l.ID,
post_title,
post_content,
post_name,
post_parent,
post_author,
post_status,
post_modified_gmt,
post_date,
post_date_gmt
FROM
(
SELECT
wp_posts.ID
FROM
wp_posts
WHERE
wp_posts.post_status IN ('publish')
AND wp_posts.post_type = 'post'
AND wp_posts.post_password = ''
AND wp_posts.post_date != '0000-00-00 00:00:00'
ORDER BY
wp_posts.post_modified ASC
LIMIT
100 OFFSET 214000
) o
JOIN wp_posts l ON l.ID = o.ID
Indexes that I need to create:
MariaDB [database]> ALTER TABLE wp_posts
    ADD INDEX wp_posts_idx_post_type_post_passw_post_statu
    (post_type, post_password, post_status);
> ERROR 1071 (42000): Specified key was too long; max key length is 1000 bytes
MariaDB [database]> ALTER TABLE wp_posts
    ADD INDEX wp_posts_idx_post_modified (post_modified);
Query OK, 453289 rows affected (10.839 sec)
Records: 453289 Duplicates: 0 Warnings: 0
---
CREATE TABLE wp_posts (
  ID bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  post_author bigint(20) unsigned NOT NULL DEFAULT 0,
  post_date datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_date_gmt datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_content longtext COLLATE utf8mb4_unicode_ci NOT NULL,
  post_title text COLLATE utf8mb4_unicode_ci NOT NULL,
  post_excerpt text COLLATE utf8mb4_unicode_ci NOT NULL,
  post_status varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'publish',
  comment_status varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'open',
  ping_status varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'open',
  post_password varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  post_name varchar(200) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  to_ping text COLLATE utf8mb4_unicode_ci NOT NULL,
  pinged text COLLATE utf8mb4_unicode_ci NOT NULL,
  post_modified datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_modified_gmt datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_content_filtered longtext COLLATE utf8mb4_unicode_ci NOT NULL,
  post_parent bigint(20) unsigned NOT NULL DEFAULT 0,
  guid varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  menu_order int(11) NOT NULL DEFAULT 0,
  post_type varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'post',
  post_mime_type varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  comment_count bigint(20) NOT NULL DEFAULT 0,
  PRIMARY KEY (ID),
  KEY post_name (post_name(191)),
  KEY type_status_date (post_type, post_status, post_date, ID),
  KEY post_parent (post_parent),
  KEY post_author (post_author),
  KEY wp_posts_idx_post_modified (post_modified)
) ENGINE=MyISAM AUTO_INCREMENT=463265 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
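For what it's worth, a hedged sketch of one common workaround for the 1000-byte MyISAM key limit (not from the original post; the prefix length of 100 is an assumption): with utf8mb4 each character can take 4 bytes, so post_type(20) + post_password(255) + post_status(20) is 295 characters, roughly 1180 bytes. A prefix on post_password brings the combined key back under the limit:

    -- (20 + 100 + 20) chars * 4 bytes/char = 560 bytes, under the 1000-byte cap
    ALTER TABLE wp_posts
        ADD INDEX wp_posts_idx_post_type_post_passw_post_statu
        (post_type, post_password(100), post_status);

Since post_password is almost always '' for published posts, an even shorter prefix would likely be just as selective.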
execash
(1 rep)
May 8, 2020, 06:44 AM
• Last activity: Aug 6, 2025, 01:04 PM
3
votes
2
answers
499
views
Partitioning heaps - Why?
I know, at least theoretically, that heap tables can be partitioned. As I learn more about table partitioning, I realise that the use case is very limited. Similarly, the use case for heaps in SQL Server is also very limited. Whenever I intersect my list of heap use cases and partitioning use cases, I find myself with an empty list.
Are there any sets of circumstances where a partitioned heap would appear strongly desirable?
J. Mini
(1225 rep)
Jul 26, 2025, 11:37 PM
• Last activity: Jul 28, 2025, 04:48 AM
1
votes
1
answers
168
views
Table filtered on multiple columns: can an EAV support table be the best solution for improving search performance?
I have **a table with 10~15 columns** (some varchar, some int, some double, and some date). **The table can contain tens of millions of records.**
**Users can filter on any column using any filter operator** (equals, starts with, contains, ends with, greater than, less than, ...) and can set multiple filters (e.g., filter on column1 and column2 and column5 and column8).
**I would like to implement a solution that optimizes search response time while keeping a good compromise on insert performance.**
**Solution 1**:
Create an index for each column.
I think this is not the best solution because it will compromise insert performance and will not cover all combinations of filters.
**Solution 2**:
Use an EAV support table that contains the column data allowed for filtering.
CREATE TABLE FlatTable
(
TableId UNIQUEIDENTIFIER NOT NULL,
Column1 VARCHAR(64),
Column2 INTEGER,
Column3 DATETIME,
Column4 FLOAT,
...
Column10 VARCHAR(512),
CONSTRAINT PK_FlatTable PRIMARY KEY (TableId)
)
CREATE TABLE EavTable
(
EavTableId UNIQUEIDENTIFIER NOT NULL,
TableId UNIQUEIDENTIFIER NOT NULL,
PropertyName VARCHAR (512) NOT NULL,
StringValue VARCHAR (512),
StringValueReverse AS REVERSE(StringValue),
StringValueFullText VARCHAR(MAX),
NumericValue MONEY,
DateValue DATETIME,
CONSTRAINT PK_EavTable PRIMARY KEY (EavTableId),
CONSTRAINT FK_EavTable FOREIGN KEY (TableId) REFERENCES FlatTable (TableId) ON DELETE CASCADE
)
CREATE UNIQUE INDEX UX_EavTable ON EavTable (TableId, StringValue, PropertyName)
CREATE UNIQUE INDEX UX2_EavTable ON EavTable (TableId, StringValueReverse, PropertyName)
CREATE UNIQUE INDEX UX3_EavTable ON EavTable (TableId, NumericValue, PropertyName)
CREATE UNIQUE INDEX UX4_EavTable ON EavTable (TableId, DateValue, PropertyName)
CREATE FULLTEXT INDEX ON EavTable(StringValueFullText)
KEY INDEX PK_EavTable
WITH STOPLIST = OFF;
**Remarks**:
The *StringValueReverse* column is used to filter string values with the "ends with" operator: it allows resolving the query with a LIKE 'reverseValue%' filter and performing an index seek on index UX2_EavTable (see the sketch below).
The *StringValueFullText* column contains the string value split into multiple words, allowing use of the full-text index in some circumstances when the filter operator is "contains".
The *NumericValue* column contains the numeric value of the property if it is a number, allowing an index seek on index UX3_EavTable.
The *DateValue* column contains the datetime value of the property if it is a date, allowing an index seek on index UX4_EavTable.
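A hedged illustration of the StringValueReverse remark (the property name and value here are made up): an "ends with" filter is rewritten as a prefix LIKE on the reversed column, which index UX2_EavTable can seek for each correlated TableId:

    SELECT *
    FROM FlatTable
    WHERE EXISTS (SELECT *
                  FROM EavTable
                  WHERE TableId = FlatTable.TableId
                    AND PropertyName = 'Property1'
                    -- "ends with '-X100'" becomes a seekable prefix filter:
                    AND StringValueReverse LIKE REVERSE('-X100') + '%')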
**With this schema I can filter on multiple columns using multiple EXISTS conditions, with an INDEX SEEK on all filters** (except some cases of filtering with the contains operator).
For example:
SELECT *
FROM FlatTable
WHERE EXISTS (SELECT *
FROM EavTable
WHERE TableId = FlatTable.TableId
AND PropertyName = 'Property1'
AND StringValue LIKE 'value%'
)
AND EXISTS (SELECT *
FROM EavTable
WHERE TableId = FlatTable.TableId
AND PropertyName = 'Property2'
AND StringValueReverse LIKE 'eulav%'
)
AND EXISTS (SELECT *
FROM EavTable
WHERE TableId = FlatTable.TableId
AND PropertyName = 'Property3'
AND CONTAINS(StringValueFullText, '"value*"')
)
AND EXISTS (SELECT *
FROM EavTable
WHERE TableId = FlatTable.TableId
AND PropertyName = 'Property4'
AND NumericValue > 100
)
AND EXISTS (SELECT *
FROM EavTable
WHERE TableId = FlatTable.TableId
AND PropertyName = 'Property5'
AND DateValue > '20240101'
)
How would you evaluate this solution?
Is there a better one?
Thanks
Claudio
(11 rep)
Dec 25, 2023, 07:27 PM
• Last activity: Jul 26, 2025, 10:03 AM
0
votes
2
answers
175
views
Non-clustered index covering a clustered index
I have a table that has the following indexes:
**IXC**: a clustered index that includes a single column: A asc.
**IX1**: a nonclustered index that includes two columns: A asc, B asc.
In this case IXC seems like a duplicate of IX1. Is it a good idea to delete IXC and make IX1 the clustered index in that case? Or are there scenarios where this would be a bad idea?
DDL example:
CREATE TABLE [dbo].[foo] (
[A] NVARCHAR (20) NOT NULL,
[B] INT NULL,
[C] NVARCHAR (20) NOT NULL,
[D] NVARCHAR (20) NOT NULL,
[E] NVARCHAR (20) NOT NULL
);
GO
CREATE CLUSTERED INDEX [IXC]
ON [dbo].[foo]([A] ASC);
GO
CREATE NONCLUSTERED INDEX [IX1]
ON [dbo].[foo]([A] ASC, [B] ASC);
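A hedged sketch of the consolidation being asked about, assuming it turns out to be safe for the workload (dropping both indexes and clustering on (A, B) instead):

    DROP INDEX [IX1] ON [dbo].[foo];
    -- dropping/recreating a clustered index rebuilds the table (and any nonclustered indexes):
    DROP INDEX [IXC] ON [dbo].[foo];
    CREATE CLUSTERED INDEX [IXC] ON [dbo].[foo]([A] ASC, [B] ASC);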
Kimo
(229 rep)
Jan 16, 2020, 07:10 AM
• Last activity: Jul 15, 2025, 12:43 PM
0
votes
1
answers
172
views
Optimizing PostgreSQL Query with DISTINCT ON, Filtering, and Ordering
I'm working on optimizing a query for a PostgreSQL database used in an educational platform.
The query retrieves distinct question_id values along with their is_seen status from the question_interaction table based on specific user, customer_id, student_id, and tag criteria. Here's the query:
SELECT DISTINCT ON (question_id) question_id, is_seen
FROM question_interaction
WHERE user_id = 'USR234342'
AND customer_id = '39874513-8bb1-461e-a73f-9e73fa31870d'
AND student_id = '8179483214'
AND tags @> ARRAY ['history']::TEXT[]
AND is_seen IS NOT NULL
ORDER BY question_id, timestamp DESC;
Here is the selectivity order of the columns:
question_id > student_id > user_id > customer_id
*Can someone provide guidance on how to create an effective index for this query?*
Specifically, I'm looking for recommendations on:
- The choice of columns for the index
- The order of columns in the index
- Any additional conditions to consider
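A hedged sketch of one candidate (column names taken from the query; whether it wins depends on the data): put the equality columns first, then the ORDER BY columns, so the DISTINCT ON can walk the index in order, and align the partial predicate with the query:

    CREATE INDEX question_interaction_seen_idx
        ON question_interaction (user_id, customer_id, student_id, question_id, "timestamp" DESC)
        WHERE is_seen IS NOT NULL;
    -- The tags @> filter cannot be served by this btree; a GIN index is the usual
    -- companion, at the cost of losing the index order when the planner uses it:
    CREATE INDEX question_interaction_tags_idx ON question_interaction USING gin (tags);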
sujeet
(257 rep)
Aug 21, 2023, 07:06 AM
• Last activity: Jul 13, 2025, 09:04 PM
0
votes
1
answers
157
views
MySQL Finding index size for compound unique indexes
I'm using the following query to get index and data sizes for the tables in a database. But unfortunately it doesn't show the index size for compound unique indexes. How can I get index sizes for compound unique indexes?
Here is my query:
SELECT TABLE_NAME AS "Table",
TABLE_ROWS AS "Rows",
CONCAT((FORMAT((DATA_LENGTH) / POWER(1024,2),2)), ' Mb') AS "Data Size",
CONCAT((FORMAT((INDEX_LENGTH) / POWER(1024,2),2)), ' Mb') AS "Index Size",
CONCAT((FORMAT((DATA_LENGTH+ INDEX_LENGTH) / POWER(1024,2),2)), ' Mb') AS "Total Size",
TRIM(TRAILING ', ' FROM CONCAT_WS(', ', ENGINE, TABLE_COLLATION, CREATE_OPTIONS)) AS "Type"
FROM information_schema.TABLES
WHERE information_schema.TABLES.table_schema = "my_database_name_here"
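For InnoDB tables (MySQL 5.6+), per-index sizes, including compound unique indexes, are also recorded in mysql.innodb_index_stats; a hedged sketch:

    -- stat_value for stat_name = 'size' is the number of pages in the index
    SELECT table_name, index_name,
           ROUND(stat_value * @@innodb_page_size / 1024 / 1024, 2) AS size_mb
    FROM mysql.innodb_index_stats
    WHERE stat_name = 'size'
      AND database_name = 'my_database_name_here';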
she hates me
(103 rep)
Oct 10, 2015, 04:52 PM
• Last activity: Jul 11, 2025, 10:06 PM
1
votes
2
answers
163
views
How to make Postgres use the correct index here?
Here's a test setup:
CREATE TABLE t (
filename int,
cropped bool not null default false,
resized bool not null default false,
create_date date not null default '1970-01-01'
);
INSERT INTO t
SELECT generate_series(1, 1000000);
UPDATE t SET cropped = true, resized = true
WHERE filename IN (SELECT filename FROM t ORDER BY random() LIMIT 900000);
UPDATE t SET resized = false
WHERE filename IN (SELECT filename FROM t WHERE cropped = true ORDER BY random() LIMIT 1000);
VACUUM FULL t;
ANALYZE t;
[Fiddle](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=f356fd56a920ea8a93c192f5a8c16b1c)
Data now looks like this:
SELECT cropped, resized, count(*)
FROM t
GROUP BY 1,2;
cropped | resized | count
--------+---------+--------
f       | f       | 100000
t       | f       |   1000
t       | t       | 899000
I create some partial indexes for other queries and one index specifically for the query below:
CREATE INDEX idx_cropped ON t(cropped) WHERE NOT cropped;
CREATE INDEX idx_resized ON t(resized) WHERE NOT resized;
CREATE INDEX specific ON t(cropped,resized) WHERE cropped AND NOT resized;
Now I run my query:
EXPLAIN ANALYZE
SELECT count(*) FROM t WHERE cropped AND NOT resized AND create_date <= '...';

Index Scan using idx_resized on t  (cost=0.29..3777.71 rows=89415 width=0) (actual time=478.177..478.480 rows=1000 loops=1)
  Filter: (cropped AND (create_date <= '...'))

This takes ~478 ms, whereas this plan:

Bitmap Heap Scan on t  (cost=35.50..11652.73 rows=89415 width=0) (actual time=0.054..0.250 rows=1000 loops=1)
  Recheck Cond: (cropped AND (NOT resized))
  Filter: (create_date <= '...')
  ->  Bitmap Index Scan on specific  (cost=0.00..13.15 rows=89415 width=0) (actual time=0.040..0.040 rows=1000 loops=1)

uses the index specific and completes in less than a ms on both dbfiddle.uk and my machine.
-----
Edit: Additional mystery - when I set the values not with an UPDATE but with a DEFAULT, then the correct index is chosen. What is going on? [Fiddle](https://dbfiddle.uk/?rdbms=postgres_14&fiddle=dc7d8aea14e90f08ab6537a855f34d8c)
AndreKR
(607 rep)
Apr 18, 2022, 01:20 AM
• Last activity: Jul 6, 2025, 04:06 PM
1
votes
1
answers
198
views
How to index a query with many joins
I have this query with [this schema](https://github.com/snurr-group/mofdb/blob/master/db/schema.rb) that takes ~8s on my laptop and longer on the vm running the service.
SELECT DISTINCT mofs.*
FROM mofs
INNER JOIN isotherms ON isotherms.mof_id = mofs.id
INNER JOIN isodata ON isodata.isotherm_id = isotherms.id
WHERE mofs.hidden = FALSE
  AND (isodata.gas_id in (24))
  AND (volumeA3 is not NULL and atomicMass is not NULL)
ORDER BY mofs.id ASC
LIMIT 100;
Essentially I join mofs->isotherms->isodata (all simple one-to-many relationships) and select only the mofs with isodata.gas_id present in a list supplied by the user: (24) in this case, but there could be 1-6 numbers here.
There are a few constant WHERE conditions, mofs.hidden = false, mofs.volumeA3 is not NULL and mofs.atomicMass is not NULL; these are the same for every query.
I've tried adding an index :mofs, [:volumeA3, :atomicMass, :hidden], where: 'volumeA3 IS NOT NULL and atomicMass IS NOT NULL and hidden is FALSE' to capture the static part of the query. This doesn't make it notably faster.
Query explainer:
mysql> EXPLAIN SELECT DISTINCT mofs.* FROM mofs INNER JOIN isotherms ON isotherms.mof_id = mofs.id INNER JOIN isodata ON isodata.isotherm_id = isotherms.id WHERE mofs.hidden = FALSE AND (isodata.gas_id in (24)) AND (volumeA3 is not NULL and atomicMass is not NULL) ORDER BY mofs.id ASC LIMIT 100;
+----+-------------+-----------+------------+------+-------------------------------------------------------------------------------------------------------------------------------+------------------------------+---------+--------------------------------+-------+----------+------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-----------+------------+------+-------------------------------------------------------------------------------------------------------------------------------+------------------------------+---------+--------------------------------+-------+----------+------------------------------+
| 1 | SIMPLE | mofs | NULL | ref | PRIMARY,index_mofs_on_hidden,index_mofs_on_volumeA3,index_mofs_on_atomicMass,index_mofs_on_volumeA3_and_atomicMass_and_hidden | index_mofs_on_hidden | 1 | const | 60373 | 25.00 | Using where; Using temporary |
| 1 | SIMPLE | isotherms | NULL | ref | PRIMARY,index_isotherms_on_mof_id | index_isotherms_on_mof_id | 9 | mofdb2_prod_dump2.mofs.id | 5 | 100.00 | Using index; Distinct |
| 1 | SIMPLE | isodata | NULL | ref | index_isodata_on_isotherm_id,index_isodata_on_gas_id | index_isodata_on_isotherm_id | 9 | mofdb2_prod_dump2.isotherms.id | 3 | 41.45 | Using where; Distinct |
+----+-------------+-----------+------------+------+-------------------------------------------------------------------------------------------------------------------------------+------------------------------+---------+--------------------------------+-------+----------+------------------------------+
How can I speed up this query?
Is there a way to add an index that captures the mof->isotherm->isodata.gas_id relationship? E.g., in PostgreSQL I could create a materialized view like this:
mof_id|gas_id
1|350
1|33
2|5
2|33
...
and then query that view. Can I achieve something similar with an index?
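Not from the original post, but the EXPLAIN above suggests one cheap experiment: a composite index so the isodata probe by isotherm_id can check gas_id from the index alone (the column order here is an assumption):

    ALTER TABLE isodata ADD INDEX idx_isodata_isotherm_gas (isotherm_id, gas_id);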
### Schema
mysql> show create table mofs;
CREATE TABLE mofs (
  id bigint NOT NULL AUTO_INCREMENT,
  hashkey varchar(255) DEFAULT NULL,
  name varchar(255) DEFAULT NULL,
  database_id bigint DEFAULT NULL,
  cif mediumtext,
  void_fraction float DEFAULT NULL,
  surface_area_m2g float DEFAULT NULL,
  surface_area_m2cm3 float DEFAULT NULL,
  pld float DEFAULT NULL,
  lcd float DEFAULT NULL,
  pxrd text,
  pore_size_distribution text,
  created_at datetime NOT NULL,
  updated_at datetime NOT NULL,
  pregen_json json DEFAULT NULL,
  mofid text,
  mofkey text,
  hidden tinyint(1) NOT NULL DEFAULT '0',
  atomicMass float DEFAULT NULL,
  volumeA3 float DEFAULT NULL,
  batch_id bigint DEFAULT NULL,
  PRIMARY KEY (id),
  KEY fk_rails_42b2867304 (database_id),
  KEY index_mofs_on_hashkey (hashkey),
  KEY index_mofs_on_name (name),
  KEY index_mofs_on_hidden (hidden),
  KEY index_mofs_on_pld (pld),
  KEY index_mofs_on_lcd (lcd),
  KEY index_mofs_on_void_fraction (void_fraction),
  KEY index_mofs_on_surface_area_m2g (surface_area_m2g),
  KEY index_mofs_on_surface_area_m2cm3 (surface_area_m2cm3),
  KEY mofid_exact_match_idx (mofid(768)),
  KEY mofkey_exact_match_idx (mofkey(768)),
  KEY index_mofs_on_volumeA3 (volumeA3),
  KEY index_mofs_on_atomicMass (atomicMass),
  KEY index_mofs_on_batch_id (batch_id),
  KEY index_mofs_on_volumeA3_and_atomicMass_and_hidden (volumeA3, atomicMass, hidden),
  CONSTRAINT fk_rails_42b2867304 FOREIGN KEY (database_id) REFERENCES databases (id),
  CONSTRAINT fk_rails_c2906db3ef FOREIGN KEY (batch_id) REFERENCES batches (id)
) ENGINE=InnoDB AUTO_INCREMENT=167396 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
mysql> show create table isotherms;
CREATE TABLE isotherms (
  id bigint NOT NULL AUTO_INCREMENT,
  doi varchar(255) DEFAULT NULL,
  digitizer varchar(255) DEFAULT NULL,
  temp float DEFAULT NULL,
  simin text,
  adsorbate_forcefield_id bigint DEFAULT NULL,
  molecule_forcefield_id bigint DEFAULT NULL,
  mof_id bigint DEFAULT NULL,
  adsorption_units_id bigint DEFAULT NULL,
  pressure_units_id bigint DEFAULT NULL,
  composition_type_id bigint DEFAULT NULL,
  created_at datetime NOT NULL,
  updated_at datetime NOT NULL,
  batch_id bigint DEFAULT NULL,
  PRIMARY KEY (id),
  KEY index_isotherms_on_mof_id (mof_id),
  KEY index_isotherms_on_adsorption_units_id (adsorption_units_id),
  KEY index_isotherms_on_pressure_units_id (pressure_units_id),
  KEY index_isotherms_on_composition_type_id (composition_type_id),
  KEY fk_rails_8886e0d88b (adsorbate_forcefield_id),
  KEY fk_rails_180e64ceb3 (molecule_forcefield_id),
  KEY index_isotherms_on_doi (doi),
  KEY index_isotherms_on_batch_id (batch_id),
  CONSTRAINT fk_rails_10527b19a8 FOREIGN KEY (composition_type_id) REFERENCES classifications (id),
  CONSTRAINT fk_rails_180e64ceb3 FOREIGN KEY (molecule_forcefield_id) REFERENCES forcefields (id),
  CONSTRAINT fk_rails_1b8cd34a98 FOREIGN KEY (pressure_units_id) REFERENCES classifications (id),
  CONSTRAINT fk_rails_7931af24f5 FOREIGN KEY (adsorption_units_id) REFERENCES classifications (id),
  CONSTRAINT fk_rails_8886e0d88b FOREIGN KEY (adsorbate_forcefield_id) REFERENCES forcefields (id),
  CONSTRAINT fk_rails_94b5964f6a FOREIGN KEY (mof_id) REFERENCES mofs (id),
  CONSTRAINT fk_rails_ea429d3060 FOREIGN KEY (batch_id) REFERENCES batches (id)
) ENGINE=InnoDB AUTO_INCREMENT=6368886 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
mysql> show create table isodata;
CREATE TABLE isodata (
  id bigint NOT NULL AUTO_INCREMENT,
  isotherm_id bigint DEFAULT NULL,
  gas_id bigint DEFAULT NULL,
  pressure float DEFAULT NULL,
  loading float DEFAULT NULL,
  bulk_composition float DEFAULT NULL,
  PRIMARY KEY (id),
  KEY index_isodata_on_isotherm_id (isotherm_id),
  KEY index_isodata_on_gas_id (gas_id),
  CONSTRAINT fk_rails_279fe04b2e FOREIGN KEY (isotherm_id) REFERENCES isotherms (id),
  CONSTRAINT fk_rails_b1baf3536c FOREIGN KEY (gas_id) REFERENCES gases (id)
) ENGINE=InnoDB AUTO_INCREMENT=17313801 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
nlta
(111 rep)
Feb 24, 2022, 09:03 AM
• Last activity: Jun 24, 2025, 05:05 PM
0
votes
1
answers
191
views
When using BlitzIndex, are the reads the number of times the index was used to read, or is that the op stats?
I'm using Blitz Index to identify some indexing issues. I'm concerned that I am reading the output incorrectly. In my output, I'm seeing reads compared to writes in the usage column, and then I'm seeing a lot more reads in the op stats column. Which column tells me how many times the index has been used for returning data?
(screenshot of the Blitz Index output omitted)
AgentKnipe
(1 rep)
Apr 24, 2020, 02:33 PM
• Last activity: Jun 19, 2025, 09:01 PM
0
votes
1
answers
27
views
Mongo DB Tree Constant Balance Due High Delete/Chunk Migration/Shard Adding
We have huge collections with multiple indexes in MongoDB, with multiple shards. Since the deletion process is time consuming, and MongoDB's TTL simply does not work for us, we have switched to a semi-automated process (initialized from the code).
During development we noticed that if we remove the indexes before the deletion starts and rebuild them after the delete completes, it works much faster than just deleting, even considering that the index build takes close to 13 hours (we rebuild indexes one after another).
The same problem happens when adding a new shard to the cluster: it receives the data very slowly, but if we delete all the indexes on all collections it works ultra fast.
When we investigated the problem further, we noticed massive read/write activity on the indexes, disproportionate to the data read/write (we have separate disks for data, indexes and oplog). The conclusion we reached is that the indexes are being updated (even when set to build in the background) after every record deletion/migration.
This is totally resource-inefficient. Is there any way to suspend index rebalancing/rebuilding after every operation for some time, or just manually? Or at least to decrease the number of such operations (e.g., not after every op, but after every 1 million records, etc.)?
Best regards and glory to Ukraine
Yulian Oifa
Yulian Oifa
(1 rep)
Jun 17, 2025, 12:43 PM
• Last activity: Jun 18, 2025, 06:10 AM
1
votes
1
answers
221
views
Calendar Event table - best practice setup for range queries and individual retrieval
This seems like a generic problem that should have been solved already, but I can't find anything about this. In general this question is - given a table where data is read by a date range, what is the best, most efficient setup?
We have a calendar event table that will quickly grow to millions of records.
The schema is something like:
CREATE TABLE [dbo].[CalendarEvent](
[Id] [uniqueidentifier] NOT NULL,
[DtStart] [datetime] NULL,
[DtEnd] [datetime] NULL,
[Created] [datetime] NULL,
[LastModified] [datetime] NULL,
[CalendarEventType] [nvarchar](255) NULL,
[CalendarId] [uniqueidentifier] NULL,
PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
Forget about recurring events, etc. as that doesn't bear on our problem.
Most queries will be of the type:
select * from CalendarEvent where CalendarId = 'b5d6338f-805f-4717-9c0a-4600f95ac515' AND dtStart > '01/01/2020' AND dtStart < '10/22/2020'
Notice no joins, etc.
But we will also have some that select for individual events, and include joins:
select * from CalendarEvent ce join tags t on ce.Id = t.CalendarEventId where ce.Id = '17606330-5486-496a-a91c-f5d0e123bfff'
Questions and ideas:
1. Should we keep the Id as the PK, but make the start date the clustered index?
2. Should we just make an index on dtStart?
3. Should we partition by month?
4. Should we denormalize a little and duplicate the dtStart data by including year and month columns that we can index and use in our range queries?
In general, when you do your querying on a table by date range, what is the best setup for this type of table?
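A hedged sketch of ideas 1/2 combined with the query's equality column (using the column names from the DDL above; keeping Id as the PK and clustering key, but giving the range query a covering path):

    CREATE NONCLUSTERED INDEX IX_CalendarEvent_CalendarId_DtStart
        ON dbo.CalendarEvent (CalendarId, DtStart)
        INCLUDE (DtEnd, CalendarEventType);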
Note: If you think this question could be improved to help more people, make it more generic and widely applicable, such as removing references to a Calendar Event table specifically, and making this just about date range querying in any type of table, please help me do that.
richard
(121 rep)
Oct 22, 2020, 05:53 PM
• Last activity: Jun 15, 2025, 07:03 PM
2
votes
1
answers
215
views
PostgreSQL imbalanced index query optimization
I am having performance issues with my query because the index is heavily imbalanced. I have a column family_id on a table. The table has about 150 million records. This family_id column identifies which records inside the table belong to the same family. A family has, on average, ~5 members. However, because the data is not clean, there exist edge cases in our data where the family_id is NULL (~600,000 records) or where the family_id is -1 (~3.2 million records).
I am building a data pipeline in Logstash which queries this data and incrementally forwards it to an Elasticsearch index. For my use case, I need to bring in the data on a per-family basis. The cases where family_id is -1 or NULL require special treatment, and therefore I need to filter based on that. However, such queries take extremely long for a batch of 10,000 records. To give an idea of the time difference: the normal case takes about 10-20 seconds, whereas the -1 or NULL case takes about 30 minutes.
Is there a proper solution for such cases such that I can speed up this query?
Thanks!
**Query and DDL**
As requested I have added the table DDL, the query, and EXPLAIN ANALYZE. The table DDL:
CREATE TABLE "xml".t_patent_document_values (
patent_document_value_id serial4 NOT NULL,
publication_id int4 NOT NULL,
ucid varchar(32) NULL,
lang bpchar(2) NULL,
country bpchar(2) NULL,
doc_number varchar(32) NULL,
kind varchar(4) NULL,
published date NULL,
produced date NULL,
withdraw bool NULL DEFAULT false,
family_id int4 NULL,
status varchar(16) NULL,
modified_load_id int4 NOT NULL,
created_load_id int4 NULL,
deleted_load_id int4 NULL,
CONSTRAINT t_patent_document_values_pkey PRIMARY KEY (patent_document_value_id)
);
CREATE INDEX family_id_publication_id_idx ON xml.t_patent_document_values USING btree (family_id, publication_id) WHERE (family_id = '-1'::integer);
CREATE INDEX family_id_publication_id_null_idx ON xml.t_patent_document_values USING btree (family_id, publication_id) WHERE ((family_id IS NULL) AND (publication_id > 0));
CREATE INDEX idx_country_published ON xml.t_patent_document_values USING btree (country, published);
CREATE INDEX idx_created_load_id ON xml.t_patent_document_values USING btree (created_load_id);
CREATE INDEX idx_created_load_id_modified_load_id ON xml.t_patent_document_values USING btree (created_load_id, modified_load_id);
CREATE INDEX idx_created_load_id_modified_load_id_family_id ON xml.t_patent_document_values USING btree (created_load_id, modified_load_id, family_id);
CREATE INDEX idx_ctry_dnum ON xml.t_patent_document_values USING btree (country, doc_number);
CREATE INDEX idx_family_id ON xml.t_patent_document_values USING btree (family_id);
CREATE INDEX idx_modified_load_id ON xml.t_patent_document_values USING btree (modified_load_id);
CREATE INDEX idx_patent_document_publication_id ON xml.t_patent_document_values USING btree (publication_id);
CREATE INDEX idx_patent_document_published ON xml.t_patent_document_values USING btree (published);
CREATE INDEX idx_patent_document_ucid ON xml.t_patent_document_values USING btree (ucid);
CREATE INDEX idx_patent_document_values_country ON xml.t_patent_document_values USING btree (country);
CREATE INDEX idx_published_month ON xml.t_patent_document_values USING btree (date_part('month'::text, published));
CREATE INDEX idx_published_year ON xml.t_patent_document_values USING btree (date_part('year'::text, published));
CREATE INDEX idx_t_patent_document_values_cmp ON xml.t_patent_document_values USING btree (publication_id, modified_load_id);
CREATE INDEX idx_withdrawn ON xml.t_patent_document_values USING btree (withdraw);
The fast query (family_id != -1 or family_id is not null):
select array_agg(tpdv.publication_id)
from
xml.t_patent_document_values tpdv
where
tpdv.family_id > 0
group by tpdv.family_id
order by tpdv.family_id
limit 1000
Its EXPLAIN ANALYZE:
Limit (cost=0.57..20178.18 rows=1000 width=36) (actual time=0.017..16.191 rows=1000 loops=1)
-> GroupAggregate (cost=0.57..473243036.65 rows=23453864 width=36) (actual time=0.016..16.123 rows=1000 loops=1)
Group Key: family_id
-> Index Scan using idx_family_id on t_patent_document_values tpdv (cost=0.57..472220861.17 rows=145800436 width=8) (actual time=0.012..15.608 rows=1002 loops=1)
Index Cond: (family_id > 0)
Planning time: 0.323 ms
Execution time: 16.259 ms
Note that in reality this query will do a lot more aggregations over the family_id, but this is a simplified version such that the problem might be easier to tackle.
The slow query:
select *
from
xml.t_patent_document_values tpdv
where (tpdv.family_id = -1 or tpdv.family_id is null) and publication_id > 0
order by publication_id
limit 1000
Its EXPLAIN ANALYZE:
Limit (cost=0.57..122889.16 rows=1000 width=73) (actual time=52632.236..1731648.507 rows=1000 loops=1)
-> Index Scan using idx_patent_document_publication_id on t_patent_document_values tpdv (cost=0.57..484776575.75 rows=3944846 width=73) (actual time=52632.235..1731648.326 rows=1000 loops=1)
Index Cond: (publication_id > 0)
Filter: ((family_id = '-1'::integer) OR (family_id IS NULL))
Rows Removed by Filter: 27646323
Planning time: 3.102 ms
Execution time: 1731657.620 ms
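Not part of the original question, but a hedged sketch of an index shaped exactly like the slow query: the existing partial indexes lead with family_id, so they cannot return rows in publication_id order and the LIMIT cannot stop early. An index ordered by publication_id with the same partial predicate should let the scan stop after ~1000 entries:

    CREATE INDEX idx_family_unknown_publication_id
        ON xml.t_patent_document_values (publication_id)
        WHERE (family_id = -1 OR family_id IS NULL) AND publication_id > 0;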
thijsvdp
(121 rep)
Dec 12, 2022, 11:51 AM
• Last activity: Jun 11, 2025, 05:06 AM
1
votes
1
answers
96
views
Creating efficient indexes on SQL Server with a big WHERE clause
I have a database that contains two tables, containing these columns (some irrelevant columns have been omitted):
1. Containers table
   - Id nvarchar(20) NOT NULL: primary key, value generated before insert
   - Category nvarchar(5) NOT NULL: a category with around 5 possible values
   - Status int NOT NULL: value between 0 and 3
2. Items table
   - Id uniqueidentifier NOT NULL: primary key
   - ContainerId nvarchar(20) NOT NULL: FK to the Containers table. Containers contain 1-n Items
   - PartId uniqueidentifier NULL: a part ID OR a part category must be provided; they are mutually exclusive. There are ~15000 unique part IDs
   - PartCategory nvarchar(50) NULL: ~1000 unique part categories
   - CustomerId uniqueidentifier NULL: a customer ID, category or area must be provided; they are mutually exclusive. There are ~2000 unique customer IDs
   - CustomerCategory nvarchar(10) NULL: ~50 unique values
   - CustomerArea nvarchar(10) NULL: ~30 unique values
   - StartDate date NOT NULL
   - EndDate date NOT NULL
   - MinQuantity int NOT NULL
   - MaxQuantity int NULL
Currently my Containers table contains 9000 rows, and my Items table contains 800000 rows.
I now need to create a query that retrieves all Items matching a set of criteria: the container category, the container status (I always need those with a value of 3), the item's part (through its ID or category), the item's customer (through its ID, category or area), with a valid date (the current date must be between StartDate and EndDate), and a valid quantity (a provided quantity must be between MinQuantity and MaxQuantity if not null).
I naively wrote the following query:
select Items.ColumnA, Items.ColumnB -- actually I project 13 columns here
from Items
inner join Containers on Items.ContainerId=Containers.Id
where
Containers.Category = 'Category1' and
Containers.Status = 3 and -- always 3 (not parametrized)
(Items.PartId = 'some guid' or Items.PartCategory = 'PartCategory') and
(Items.CustomerId = 'some other guid' or Items.CustomerCategory = 'CustCategory' or Items.CustomerArea='area') and
Items.StartDate <= 'some date' and Items.EndDate >= 'some date' and
Items.MinQuantity <= 10 and (Items.MaxQuantity is null or Items.MaxQuantity >= 10)
This works and runs in around 200 ms on our staging server. Not great, but that was acceptable for single queries.
Now, my problem is that we want to prepare catalogs to send to every customer. This means calling this query for hundreds of customers multiplied by around 70000 parts. And that, of course, takes a very long time. We have tried creating an index suggested by Management Studio in the Live Query Statistics, and while it did bring some improvement, we are still trying to do better.
Questions:
1. Are there any obvious pointers about what can be done to optimize this specific query? I have a developer background and I'm feeling overwhelmed by all the different indexing possibilities.
2. I'm also trying to think of a way to group all my queries into one call to the database, but even if I use something like a stored procedure to loop on inputs and execute my query 1000 times, I fear that the gain will be small, as the database will still have to seek on large indexes, which will still take most of the time. Is there any strategy to optimize grouping calls to this query?
*Addendum*: The index that we created after Management Studio's suggestion looks like this:
CREATE NONCLUSTERED INDEX [IX_SomeIndex] ON Items (
MinQuantity ASC,
MaxQuantity ASC,
StartDate ASC,
EndDate ASC
)
INCLUDE (PartId, PartCategory, CustomerId, CustomerCategory, CustomerArea, ContainerId, Column1, Column2)
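A hedged alternative sketch (not the Management Studio suggestion): because of the OR between PartId and PartCategory, a single composite index cannot serve both sides, but a pair of indexes, each leading with one part column, at least gives the optimizer two seekable paths it can union:

    CREATE NONCLUSTERED INDEX IX_Items_PartId_Dates ON Items (PartId, StartDate, EndDate)
        INCLUDE (CustomerId, CustomerCategory, CustomerArea, ContainerId, MinQuantity, MaxQuantity);
    CREATE NONCLUSTERED INDEX IX_Items_PartCategory_Dates ON Items (PartCategory, StartDate, EndDate)
        INCLUDE (CustomerId, CustomerCategory, CustomerArea, ContainerId, MinQuantity, MaxQuantity);
    -- to stay fully covering, the INCLUDE lists would also need the 13 projected columns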
Shtong
(111 rep)
Jun 6, 2025, 04:46 PM
• Last activity: Jun 7, 2025, 02:43 PM
0
votes
1
answers
532
views
postgresql reindex frequency and monitoring
What's up guys. I have a couple of tables where I insert a lot of data (and almost never delete). Periodically I see that an insert takes an ExclusiveLock on an index of the table, so I guess that sometimes an insert triggers a reindex of the table. I wonder how often this happens, and how can I see in the logs how much time it took and how often it is triggered? Maybe some blogs/docs on it? Thanks in advance.
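A small monitoring sketch (not from the original post), assuming the goal is to watch index size and usage over time; for what it's worth, plain INSERTs in PostgreSQL do not trigger a REINDEX, they only add entries (and occasionally split pages), so watching per-index growth is usually enough:

    SELECT relname, indexrelname,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size,
           idx_scan
    FROM pg_stat_user_indexes
    ORDER BY pg_relation_size(indexrelid) DESC;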
ArtemP
(69 rep)
Jul 6, 2016, 09:07 AM
• Last activity: Jun 7, 2025, 11:04 AM
1
votes
1
answers
233
views
Time based incremental load query optimization and index design
**Problem**
I am trying to optimize an extract/load query on a highly normalized table structure. The extract/load implements incremental loading, using a datetime column to determine only the records which have changed since the last extract/load. The extract/load query, after joins, selects hundreds of millions of records. Each individual table joined has a date column "last_modified" which indicates when a record was last modified. To determine the combined last-modified date of a joined record I select the greatest value of "last_modified" across the individual tables.
This solution seems not to scale: with hundreds of millions of records, the filter predicate on the combined "last_modified" does not use any index, requires full table access on all individual tables, and takes too long to complete (1h+) just to filter on the combined "last_modified" information.
**Background**
- Oracle database (multiple versions starting from 11g)
- I'm not a DBA and don't have much Oracle know-how
- The extract/load process (especially delta loading) is not time critical. If I can guarantee that data is up to date/extracted every 24h it's fine.
**Sample**
The following is a simplified sample setup. Assume there is an index on reference_key.
CREATE TABLE shipment (
key VARCHAR2(10) NOT NULL,
last_modified DATE, -- represents when a record in shipment was last modified
CONSTRAINT shipment_pk PRIMARY KEY (key)
);
CREATE TABLE invoice (
key VARCHAR2(10) NOT NULL,
reference_key VARCHAR2(10),
last_modified DATE, -- represents when a record in invoice was last modified
CONSTRAINT invoice_pk PRIMARY KEY (key),
CONSTRAINT invoice_fk_shipment FOREIGN KEY (reference_key) REFERENCES shipment(key)
);
CREATE TABLE line (
key VARCHAR2(10) NOT NULL,
reference_key VARCHAR2(10),
last_modified DATE, -- represents when a record in line was last modified
CONSTRAINT line_pk PRIMARY KEY (key),
CONSTRAINT line_fk_invoice FOREIGN KEY (reference_key) REFERENCES invoice(key)
);
--To get flat de-normalized records a join like the following is needed
SELECT
shipment.key || '-' || line.key || '-' || invoice.key AS key,
GREATEST(line.last_modified, invoice.last_modified, shipment.last_modified) AS last_modified
FROM
line
JOIN invoice ON line.reference_key = invoice.key
JOIN shipment ON invoice.reference_key = shipment.key;
To extract/load the next 100000 records changed since a given date I could run the following.
SELECT *
FROM (
SELECT
shipment.key || '-' || line.key || '-' || invoice.key AS key,
GREATEST(line.last_modified, invoice.last_modified, shipment.last_modified) AS last_modified
FROM
line
JOIN invoice ON line.reference_key = invoice.key
JOIN shipment ON invoice.reference_key = shipment.key)
WHERE last_modified > '${target_last_modified}'
ORDER BY last_modified
FETCH NEXT 100000 ROWS ONLY;
The problem: this query does not finish / is not feasible with hundreds of millions of records (it can even go up to 1-2 billion records).
**What I have tried**
- Indexes on the individual last_modified columns seem not to help, as they are (apparently) not considered when using GREATEST, and the query still does a full table access scan.
- A functional/computed column (with an index) seems also not to be an option, as it would need to incorporate multiple tables, which (as far as I know) is not possible in Oracle.
- Materialized views to preselect the combined last_modified column and create an index on it. If possible I would like to avoid those; I think I could not use a refresh-on-commit materialized view due to the high transaction volume on the tables used.
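One more direction worth sketching (my sketch, not from the original post): the GREATEST filter can be rewritten as a union of per-table watermark predicates, each of which is sargable against an index on that table's last_modified column; :since stands for the last extracted watermark:

    SELECT l.key
    FROM line l
    WHERE l.last_modified > :since
    UNION
    SELECT l.key
    FROM line l
    JOIN invoice i ON l.reference_key = i.key
    WHERE i.last_modified > :since
    UNION
    SELECT l.key
    FROM line l
    JOIN invoice i ON l.reference_key = i.key
    JOIN shipment s ON i.reference_key = s.key
    WHERE s.last_modified > :since;

Each branch can drive from a single-column last_modified index; UNION removes duplicate keys, and the full de-normalized rows can then be fetched by joining back on line.key.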
**Questions**
- In this scenario, are there any tricks using indexes which could speed up my query using the last_modified field?
- If materialized views are my only option, what kind of refresh strategies would be feasible? A full refresh would (very likely) take too long. Are there any incremental refresh strategies, and if yes, would I not just move my extract/load problem from the query to the materialized view?
Thanks for any input/hint given!
Clemens K
(11 rep)
Mar 8, 2023, 02:46 PM
• Last activity: Jun 4, 2025, 02:01 PM
1
votes
1
answers
37
views
working of index in join query
My question is regarding the EXPLAIN output. Please refer to the query below and its EXPLAIN command output:

    EXPLAIN update test.table1 t1
    join test.table2 t2
      on t1.field1 = t2.field1 and t1.field2 = t2.field2
    set t2.field3 = t1.field3;

Below is the result of the EXPLAIN command. (screenshot omitted)

I added a composite index on (field1, field2) to both table1 and table2, and then executed the query. After adding the index, the query performance improved significantly. Based on the output of the EXPLAIN command, I confirmed that the query is properly utilizing the index.
However, I have a question regarding the details of the EXPLAIN output. Specifically, there is no indication of index usage for t1. In the EXPLAIN output (please refer to the image), the key field for t1 is shown as NULL.
Does this mean that the index on t1 is not being used?
If so, how is the query executing so quickly? In other words, if no index is used on t1, what mechanism is allowing the matching process in t1 to perform so efficiently?
I am using MySQL 8.0.42
jithin giri
(209 rep)
Jun 3, 2025, 12:16 PM
• Last activity: Jun 3, 2025, 03:23 PM
0
votes
1
answers
112
views
Postgres Query Slow 0.05% of the time
I have the following tables:
CREATE TABLE IF NOT EXISTS users
(
id NUMERIC(20, 0) NOT NULL DEFAULT NEXTVAL('users_sequence') PRIMARY KEY,
list_id NUMERIC(20, 0) NOT NULL,
first_name VARCHAR(512) DEFAULT NULL NULL,
last_name VARCHAR(512) DEFAULT NULL NULL,
full_name VARCHAR(1024) GENERATED ALWAYS AS
(CASE
WHEN first_name IS NULL AND
last_name IS NULL THEN NULL
ELSE
(TRIM(COALESCE(first_name, '') || ' ' || COALESCE(last_name, ''))) END) STORED,
deleted_at TIMESTAMP DEFAULT NULL NULL,
-- Some ~20 columns
);
CREATE TABLE IF NOT EXISTS user_aliases
(
id NUMERIC(20, 0) NOT NULL DEFAULT NEXTVAL('user_aliases_sequence') PRIMARY KEY,
entry_id NUMERIC(20, 0) NOT NULL,
list_id NUMERIC(20, 0) NOT NULL,
first_name VARCHAR(512) DEFAULT NULL NULL,
last_name VARCHAR(512) DEFAULT NULL NULL,
full_name VARCHAR(1024) GENERATED ALWAYS AS
(CASE
WHEN first_name IS NULL AND
last_name IS NULL THEN NULL
ELSE
(TRIM(COALESCE(first_name, '') || ' ' || COALESCE(last_name, ''))) END) STORED,
deleted_at TIMESTAMP DEFAULT NULL NULL,
CONSTRAINT fk_user_aliases_entry_id FOREIGN KEY (entry_id) REFERENCES users (id) ON UPDATE CASCADE ON DELETE CASCADE
);
and the following indices:
CREATE INDEX users_full_name_idx ON users USING GIST (full_name gist_trgm_ops, list_id) WHERE deleted_at IS NULL AND full_name IS NOT NULL;
CREATE INDEX user_aliases_full_name_idx ON user_aliases USING GIST (full_name gist_trgm_ops, list_id) WHERE deleted_at IS NULL AND full_name IS NOT NULL;
and I have the following functions:
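Presumably these are built on the stock contrib extensions that provide the `%` operator, the `gist_trgm_ops` opclass, and `levenshtein_less_equal`; a minimal setup sketch under that assumption (not the poster's actual function definitions):
```sql
-- Assumed prerequisites; both ship with PostgreSQL contrib.
CREATE EXTENSION IF NOT EXISTS pg_trgm;        -- % operator, gist_trgm_ops
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;  -- levenshtein_less_equal()

-- % matches when trigram similarity exceeds this GUC (0.3 is the default):
SET pg_trgm.similarity_threshold = 0.3;
```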
And my query is:
SELECT id
FROM (
SELECT
e.id,
e.full_name
FROM users e
WHERE e.full_name % :value
AND e.deleted_at IS NULL
AND e.list_id IN (:lists)
UNION
SELECT
a.entry_id AS id,
a.full_name
FROM user_aliases a
WHERE a.full_name % :value
AND a.deleted_at IS NULL
AND a.list_id IN (:lists)
) filter_table
WHERE LEVENSHTEIN_LESS_EQUAL(full_name, :value, :threshold) <= :threshold
The query plan is:
Subquery Scan on filter_table (cost=285.91..286.23 rows=16 width=20) (actual time=238.323..238.327 rows=0 loops=1)
Output: filter_table.id
Buffers: shared hit=40038
-> HashAggregate (cost=285.91..286.07 rows=16 width=536) (actual time=238.322..238.325 rows=0 loops=1)
" Output: e.id, e.full_name"
" Group Key: e.id, e.full_name"
Batches: 1 Memory Usage: 24kB
Buffers: shared hit=40038
-> Append (cost=13.58..285.83 rows=16 width=536) (actual time=238.317..238.320 rows=0 loops=1)
Buffers: shared hit=40038
-> Bitmap Heap Scan on master.users e (cost=13.58..199.25 rows=11 width=27) (actual time=164.384..164.385 rows=0 loops=1)
" Output: e.id, e.full_name"
Recheck Cond: (((e.full_name)::text % 'hasan can saral'::text) AND (e.deleted_at IS NULL))
" Filter: ((e.list_id = ANY ('{1,3,5,6,7,10,20,30,40,50,60,70,80,90,100,110,120,130,140,150,160,170,180,190,200,210,220,230,240,250,260,270,280,300,310,320,330,340,350,360,370,380,390,400,410,420,430,440,450,460,470,480,490,500,510,520,530,540,550,560,570,580,590,600,610,620,630,640,650,660,670,680,690,700,710,720,730,740,750,760,770,780,790,800,810,820,830,840,850,860,870,880,890,900,910,920,930,940,950,960,970,980,990,1000,1010,1020,1030,1040,1050,1060,1070,1080,1090,1100,1110,1120,1130,1140,1150,1160,1170,1180,1190,1200,1210,1220,1230,1240,1250,1260,1270,1280,1290,1300,1310,1320,1330,1340,1350,1360,1370,1380,1390,1400,1410,1420,1430,1440,1450,1460,1470,1480,1490,1500,1510,1520,1530,1540,1550,1560,1570,1580,1590,1600,1610,1620,1630,1640,1650,1660,1670,1680,1690,1700,1710,1720,1730,1740,1750,1760,1770,1780,1790,1800,1810,1820,1830,1840,1850,1860,1870,1880,1890,1900,1910,1920,1930,1940,1950,1960,1970,1980,1990,2000,2010,2020,2030,2040,2050,2060,2070,2080,2090,2100,2110,2120,2130,2140,2150,2160,2170,2180,2190,2200,2210,2220,2230,2240,2250,2260,2270,2280,2290,2300,2310,2320,2330,2340,2350,2360,2370,2380,2390,2400,2410,2420,2430,2440,2450,2460,2470,2480,2490,2500,2510,2520,2530,2540,2550,2560,2570,2580,2590,2600,2610,2620,2630,2640,2650,2660,2670,2680,2690,2700,2710,2720,2730,2740,2750,2760,2770,2780,2790,2800,2810,2820,2830,2840,2850,2860,2870,2880,2890,2900,2910,2920,2930,2940,2950,2960,2970,2980,2990,3000,3010,3020,3030,3040,3050,3060,3070,3080,3090,3100,3110,3120,3130,3140,3150,3160,3170,3180}'::numeric[])) AND (levenshtein_less_equal((e.full_name)::text, 'hasan can saral'::text, 2) Bitmap Index Scan on users_full_name_idx (cost=0.00..12.77 rows=48 width=0) (actual time=163.986..163.987 rows=161 loops=1)
Index Cond: ((e.full_name)::text % 'hasan can saral'::text)
Buffers: shared hit=27035
-> Bitmap Heap Scan on master.user_aliases a (cost=9.36..86.34 rows=5 width=38) (actual time=73.928..73.928 rows=0 loops=1)
" Output: a.entry_id, a.full_name"
Recheck Cond: (((a.full_name)::text % 'hasan can saral'::text) AND (a.deleted_at IS NULL))
" Filter: ((a.list_id = ANY ('{1,3,5,6,7,10,20,30,40,50,60,70,80,90,100,110,120,130,140,150,160,170,180,190,200,210,220,230,240,250,260,270,280,300,310,320,330,340,350,360,370,380,390,400,410,420,430,440,450,460,470,480,490,500,510,520,530,540,550,560,570,580,590,600,610,620,630,640,650,660,670,680,690,700,710,720,730,740,750,760,770,780,790,800,810,820,830,840,850,860,870,880,890,900,910,920,930,940,950,960,970,980,990,1000,1010,1020,1030,1040,1050,1060,1070,1080,1090,1100,1110,1120,1130,1140,1150,1160,1170,1180,1190,1200,1210,1220,1230,1240,1250,1260,1270,1280,1290,1300,1310,1320,1330,1340,1350,1360,1370,1380,1390,1400,1410,1420,1430,1440,1450,1460,1470,1480,1490,1500,1510,1520,1530,1540,1550,1560,1570,1580,1590,1600,1610,1620,1630,1640,1650,1660,1670,1680,1690,1700,1710,1720,1730,1740,1750,1760,1770,1780,1790,1800,1810,1820,1830,1840,1850,1860,1870,1880,1890,1900,1910,1920,1930,1940,1950,1960,1970,1980,1990,2000,2010,2020,2030,2040,2050,2060,2070,2080,2090,2100,2110,2120,2130,2140,2150,2160,2170,2180,2190,2200,2210,2220,2230,2240,2250,2260,2270,2280,2290,2300,2310,2320,2330,2340,2350,2360,2370,2380,2390,2400,2410,2420,2430,2440,2450,2460,2470,2480,2490,2500,2510,2520,2530,2540,2550,2560,2570,2580,2590,2600,2610,2620,2630,2640,2650,2660,2670,2680,2690,2700,2710,2720,2730,2740,2750,2760,2770,2780,2790,2800,2810,2820,2830,2840,2850,2860,2870,2880,2890,2900,2910,2920,2930,2940,2950,2960,2970,2980,2990,3000,3010,3020,3030,3040,3050,3060,3070,3080,3090,3100,3110,3120,3130,3140,3150,3160,3170,3180}'::numeric[])) AND (levenshtein_less_equal((a.full_name)::text, 'hasan can saral'::text, 2) Bitmap Index Scan on user_aliases_full_name_idx (cost=0.00..8.56 rows=20 width=0) (actual time=73.735..73.735 rows=62 loops=1)
Index Cond: ((a.full_name)::text % 'hasan can saral'::text)
Buffers: shared hit=12809
"Settings: effective_cache_size = '16GB', search_path = 'master', work_mem = '32MB'"
Planning Time: 2.445 ms
Execution Time: 238.404 ms
Why does my query perform poorly, and how do I resolve it?
**Update:**
I got the slow query plan using auto_explain (the query took 25s to execute):
Limit (cost=10645.98..10646.18 rows=10 width=20) (actual time=47782.688..47782.691 rows=0 loops=1)
Output: filter_table.id
Buffers: shared hit=3749709
-> Subquery Scan on filter_table (cost=10645.98..10667.42 rows=1072 width=20) (actual time=47782.684..47782.687 rows=0 loops=1)
Output: filter_table.id
Buffers: shared hit=3749709
-> HashAggregate (cost=10645.98..10656.70 rows=1072 width=536) (actual time=47782.682..47782.684 rows=0 loops=1)
" Output: e.id, e.full_name"
" Group Key: e.id, e.full_name"
Batches: 1 Memory Usage: 73kB
Buffers: shared hit=3749709
-> Append (cost=1449.29..10640.62 rows=1072 width=536) (actual time=47782.666..47782.669 rows=0 loops=1)
Buffers: shared hit=3749709
-> Bitmap Heap Scan on master.users e (cost=1449.29..10533.25 rows=1068 width=27) (actual time=47636.308..47636.310 rows=0 loops=1)
" Output: e.id, e.full_name"
" Recheck Cond: (((e.full_name)::text % 'SOME RELATIVELY LONG VALUE HERE'::text) AND (e.list_id = ANY ('{1,3,5,6,7,10,20,30,40,50,60,70,80,90,100,110,120,130,140,150,160,170,180,190,200,210,220,230,240,250,260,270,280,300,310,320,330,340,350,360,370,380,390,400,410,420,430,440,450,460,470,480,490,500,510,520,530,540,550,560,570,580,590,600,610,620,630,640,650,660,670,680,690,700,710,720,730,740,750,760,770,780,790,800,810,820,830,840,850,860,870,880,890,900,910,920,930,940,950,960,970,980,990,1000,1010,1020,1030,1040,1050,1060,1070,1080,1090,1100,1110,1120,1130,1140,1150,1160,1170,1180,1190,1200,1210,1220,1230,1240,1250,1260,1270,1280,1290,1300,1310,1320,1330,1340,1350,1360,1370,1380,1390,1400,1410,1420,1430,1440,1450,1460,1470,1480,1490,1500,1510,1520,1530,1540,1550,1560,1570,1580,1590,1600,1610,1620,1630,1640,1650,1660,1670,1680,1690,1700,1710,1720,1730,1740,1750,1760,1770,1780,1790,1800,1810,1820,1830,1840,1850,1860,1870,1880,1890,1900,1910,1920,1930,1940,1950,1960,1970,1980,1990,2000,2010,2020,2030,2040,2050,2060,2070,2080,2090,2100,2110,2120,2130,2140,2150,2160,2170,2180,2190,2200,2210,2220,2230,2240,2250,2260,2270,2280,2290,2300,2310,2320,2330,2340,2350,2360,2370,2380,2390,2400,2410,2420,2430,2440,2450,2460,2470,2480,2490,2500,2510,2520,2530,2540,2550,2560,2570,2580,2590,2600,2610,2620,2630,2640,2650,2660,2670,2680,2690,2700,2710,2720,2730,2740,2750,2760,2770,2780,2790,2800,2810,2820,2830,2840,2850,2860,2870,2880,2890,2900,2910,2920,2930,2940,2950,2960,2970,2980,2990,3000,3010,3020,3030,3040,3050,3060,3070,3080,3090,3100,3110,3120,3130,3140,3150,3160,3170,3180}'::numeric[])) AND (e.deleted_at IS NULL))"
" Filter: (levenshtein_less_equal((e.full_name)::text, 'SOME RELATIVELY LONG VALUE HERE'::text, 2) Bitmap Index Scan on users_full_name_idx (cost=0.00..1448.22 rows=3203 width=0) (actual time=47634.248..47634.248 rows=75 loops=1)
" Index Cond: (((e.full_name)::text % 'SOME RELATIVELY LONG VALUE HERE'::text) AND (e.list_id = ANY ('{1,3,5,6,7,10,20,30,40,50,60,70,80,90,100,110,120,130,140,150,160,170,180,190,200,210,220,230,240,250,260,270,280,300,310,320,330,340,350,360,370,380,390,400,410,420,430,440,450,460,470,480,490,500,510,520,530,540,550,560,570,580,590,600,610,620,630,640,650,660,670,680,690,700,710,720,730,740,750,760,770,780,790,800,810,820,830,840,850,860,870,880,890,900,910,920,930,940,950,960,970,980,990,1000,1010,1020,1030,1040,1050,1060,1070,1080,1090,1100,1110,1120,1130,1140,1150,1160,1170,1180,1190,1200,1210,1220,1230,1240,1250,1260,1270,1280,1290,1300,1310,1320,1330,1340,1350,1360,1370,1380,1390,1400,1410,1420,1430,1440,1450,1460,1470,1480,1490,1500,1510,1520,1530,1540,1550,1560,1570,1580,1590,1600,1610,1620,1630,1640,1650,1660,1670,1680,1690,1700,1710,1720,1730,1740,1750,1760,1770,1780,1790,1800,1810,1820,1830,1840,1850,1860,1870,1880,1890,1900,1910,1920,1930,1940,1950,1960,1970,1980,1990,2000,2010,2020,2030,2040,2050,2060,2070,2080,2090,2100,2110,2120,2130,2140,2150,2160,2170,2180,2190,2200,2210,2220,2230,2240,2250,2260,2270,2280,2290,2300,2310,2320,2330,2340,2350,2360,2370,2380,2390,2400,2410,2420,2430,2440,2450,2460,2470,2480,2490,2500,2510,2520,2530,2540,2550,2560,2570,2580,2590,2600,2610,2620,2630,2640,2650,2660,2670,2680,2690,2700,2710,2720,2730,2740,2750,2760,2770,2780,2790,2800,2810,2820,2830,2840,2850,2860,2870,2880,2890,2900,2910,2920,2930,2940,2950,2960,2970,2980,2990,3000,3010,3020,3030,3040,3050,3060,3070,3080,3090,3100,3110,3120,3130,3140,3150,3160,3170,3180}'::numeric[])))"
Buffers: shared hit=3735945
-> Bitmap Heap Scan on master.user_aliases a (cost=9.36..91.29 rows=4 width=38) (actual time=146.350..146.351 rows=0 loops=1)
" Output: a.entry_id, a.full_name"
Recheck Cond: (((a.full_name)::text % 'SOME RELATIVELY LONG VALUE HERE'::text) AND (a.deleted_at IS NULL))
" Filter: ((a.list_id = ANY ('{1,3,5,6,7,10,20,30,40,50,60,70,80,90,100,110,120,130,140,150,160,170,180,190,200,210,220,230,240,250,260,270,280,300,310,320,330,340,350,360,370,380,390,400,410,420,430,440,450,460,470,480,490,500,510,520,530,540,550,560,570,580,590,600,610,620,630,640,650,660,670,680,690,700,710,720,730,740,750,760,770,780,790,800,810,820,830,840,850,860,870,880,890,900,910,920,930,940,950,960,970,980,990,1000,1010,1020,1030,1040,1050,1060,1070,1080,1090,1100,1110,1120,1130,1140,1150,1160,1170,1180,1190,1200,1210,1220,1230,1240,1250,1260,1270,1280,1290,1300,1310,1320,1330,1340,1350,1360,1370,1380,1390,1400,1410,1420,1430,1440,1450,1460,1470,1480,1490,1500,1510,1520,1530,1540,1550,1560,1570,1580,1590,1600,1610,1620,1630,1640,1650,1660,1670,1680,1690,1700,1710,1720,1730,1740,1750,1760,1770,1780,1790,1800,1810,1820,1830,1840,1850,1860,1870,1880,1890,1900,1910,1920,1930,1940,1950,1960,1970,1980,1990,2000,2010,2020,2030,2040,2050,2060,2070,2080,2090,2100,2110,2120,2130,2140,2150,2160,2170,2180,2190,2200,2210,2220,2230,2240,2250,2260,2270,2280,2290,2300,2310,2320,2330,2340,2350,2360,2370,2380,2390,2400,2410,2420,2430,2440,2450,2460,2470,2480,2490,2500,2510,2520,2530,2540,2550,2560,2570,2580,2590,2600,2610,2620,2630,2640,2650,2660,2670,2680,2690,2700,2710,2720,2730,2740,2750,2760,2770,2780,2790,2800,2810,2820,2830,2840,2850,2860,2870,2880,2890,2900,2910,2920,2930,2940,2950,2960,2970,2980,2990,3000,3010,3020,3030,3040,3050,3060,3070,3080,3090,3100,3110,3120,3130,3140,3150,3160,3170,3180}'::numeric[])) AND (levenshtein_less_equal((a.full_name)::text, 'SOME RELATIVELY LONG VALUE HERE'::text, 2) Bitmap Index Scan on user_aliases_full_name_idx (cost=0.00..8.56 rows=20 width=0) (actual time=146.099..146.099 rows=4 loops=1)
Index Cond: ((a.full_name)::text % 'SOME RELATIVELY LONG VALUE HERE'::text)
Buffers: shared hit=13703
"Settings: effective_cache_size = '16GB', search_path = 'master', work_mem = '32MB'"
Planning Time: 5.653 ms
Execution Time: 47783.054 ms
**Update #2:**
The query is slow for some specific, relatively long filter values, and fast for most.
Changing the query to:
SELECT id
FROM (
(SELECT
e.id,
e.full_name
FROM users e
WHERE e.full_name % :value
AND e.deleted_at IS NULL
AND e.list_id IN (:lists)
LIMIT :limit)
UNION
(SELECT
a.entry_id AS id,
a.full_name
FROM user_aliases a
WHERE a.full_name % :value
AND a.deleted_at IS NULL
AND a.list_id IN (:lists)
LIMIT :limit)
) filter_table
WHERE LEVENSHTEIN_LESS_EQUAL(full_name, :value, :threshold) <= :threshold
LIMIT :limit
basically, adding LIMIT clauses to every subquery of the UNION makes the query fast again. So there is a solution. But why?
Hasan Can Saral
(175 rep)
May 14, 2025, 02:17 PM
• Last activity: May 29, 2025, 05:57 PM
0
votes
4
answers
301
views
Index Maintenance on busy OLTP database
I have a database where constant read writes happening all the time. Searches are being done with wild card entries. The server is on SQL 2016 standard edition. There is no budget for enterprise nor any intention to not using the wild card searches, and there is no maintenance window. Since wildcard...
I have a database with constant reads and writes happening all the time. Searches are done with wildcard entries. The server is on SQL 2016 Standard edition.
There is no budget for Enterprise edition, no intention of dropping the wildcard searches, and no maintenance window.
Since the searches use wildcards, indexes are of little use: the queries do full table scans and thus create locking.
I am using Ola Hallengren's script for index maintenance, but it takes 10-12 hours to complete for a 300 GB database, as the system is busy 24/7. Below is the script:
EXECUTE dbo.IndexOptimize
@Databases = 'user_databases',
@FragmentationLow = NULL,
@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationLevel1 = 5,
@FragmentationLevel2 = 30,
@UpdateStatistics = 'ALL',
@OnlyModifiedStatistics = 'Y'
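For what it's worth, IndexOptimize also accepts @TimeLimit (seconds) and @MinNumberOfPages parameters, so a run can be capped instead of grinding for 10+ hours. A hedged variant of the call above (the specific values are illustrative assumptions, not recommendations):
```sql
EXECUTE dbo.IndexOptimize
@Databases = 'user_databases',
@FragmentationLow = NULL,
@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
@FragmentationLevel1 = 5,
@FragmentationLevel2 = 30,
@MinNumberOfPages = 1000,  -- skip small indexes (1000 is the script's default; raise it to skip more)
@TimeLimit = 14400,        -- stop issuing new commands after 4 hours
@UpdateStatistics = 'ALL',
@OnlyModifiedStatistics = 'Y'
```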
This has become a frustrating issue and I am about to quit.
Please advise what my options are here.
SQL_NoExpert
(1117 rep)
Mar 23, 2021, 02:28 PM
• Last activity: May 22, 2025, 02:03 PM
1
votes
2
answers
277
views
Is a Composite UNIQUE Constraint On an Already UNIQUE Column Redundant?
I'd just like to clarify something pretty simple regarding single and composite `UNIQUE` constraints that I can't find a clear answer to. Assume I have the following table: ```sql CREATE TABLE person ( id INT PRIMARY KEY, customer_id INT NOT NULL UNIQUE, name VARCHAR(20) NOT NULL ... ) ``` Would it...
I'd just like to clarify something pretty simple regarding single and composite
UNIQUE
constraints that I can't find a clear answer to.
Assume I have the following table:
CREATE TABLE person (
id INT PRIMARY KEY,
customer_id INT NOT NULL UNIQUE,
name VARCHAR(20) NOT NULL
...
)
Would it be safe to say that adding the following constraint:
UNIQUE(customer_id, name)
would be redundant and offer no benefit under any circumstance? In other words, is it useless to create a composite UNIQUE
constraint that includes a column which is already UNIQUE
?
This appears to make logical sense, but as I stated, I couldn't find an answer confirming it.
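A quick hypothetical demo against the table above makes the logic concrete:
```sql
INSERT INTO person (id, customer_id, name) VALUES (1, 42, 'Alice');  -- ok

-- Duplicates the (customer_id, name) pair, but the single-column
-- UNIQUE(customer_id) already rejects it; the composite never gets a say:
INSERT INTO person (id, customer_id, name) VALUES (2, 42, 'Alice');  -- fails

-- Any row violating UNIQUE(customer_id, name) necessarily repeats customer_id:
INSERT INTO person (id, customer_id, name) VALUES (3, 42, 'Bob');    -- fails too
```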
Twisted Tea
(111 rep)
Sep 5, 2023, 09:25 PM
• Last activity: May 21, 2025, 10:05 PM
0
votes
3
answers
2371
views
How to make MySQL use indexes in an integer range select query
I'm doing a select on a table looking for rows that match between an upper and lower bound SELECT * FROM ranges WHERE lolong <= 2091484391 AND hilong >= 2091484391; the ( simplified ) create table is: CREATE TABLE `ranges` ( `id` bigint(10) unsigned NOT NULL AUTO_INCREMENT, `hi` varchar(15) DEFAULT NULL, `hilong` bigint(20)...
I'm doing a select on a table looking for rows that match between an upper and lower bound:
SELECT * FROM ranges WHERE lolong <= 2091484391 AND hilong >= 2091484391;
the ( simplified ) create table is:
CREATE TABLE `ranges` (
  `id` bigint(10) unsigned NOT NULL AUTO_INCREMENT,
  `hi` varchar(15) DEFAULT NULL,
  `hilong` bigint(20) DEFAULT NULL,
  `lo` varchar(15) DEFAULT NULL,
  `lolong` bigint(20) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `hilong` (`hilong`),
  KEY `lolong` (`lolong`)
) ENGINE=InnoDB AUTO_INCREMENT=234447 DEFAULT CHARSET=utf8
explain shows that it's not using indexes:
mysql> explain SELECT * FROM ranges WHERE lolong <= 2091484391 AND hilong >= 2091484391;
+----+-------------+------------+------------+------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+------------+------+---------------+------+---------+------+------+----------+-------------+
| 1 | SIMPLE | ranges | NULL | ALL | hilong,lolong | NULL | NULL | NULL | 7232 | 24.83 | Using where |
+----+-------------+------------+------------+------+---------------+------+---------+------+------+----------+-------------+
and even when I try to force it, it only uses one index:
mysql> explain SELECT * FROM ranges force index (hilong,lolong) WHERE lolong <= 2091484391 AND hilong >= 2091484391;
+----+-------------+------------+------------+-------+---------------+--------+---------+------+------+----------+------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+------------+------------+-------+---------------+--------+---------+------+------+----------+------------------------------------+
| 1 | SIMPLE | ranges | NULL | range | hilong,lolong | hilong | 9 | NULL | 2757 | 65.14 | Using index condition; Using where |
+----+-------------+------------+------------+-------+---------------+--------+---------+------+------+----------+------------------------------------+
This doesn't seem right. How can I get MySQL to use both indexes? It seems like it's needlessly scanning rows for this simple-looking query.
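One avenue worth sketching (an assumption, not a verified fix): MySQL typically uses a single index per table access, index merge aside, so a composite index at least lets the second bound be checked inside the index via index condition pushdown before any row is fetched:
```sql
-- Hypothetical composite index: lolong drives the range scan, and the
-- hilong >= ... check is pushed down into the index (ICP) before row lookup.
ALTER TABLE ranges ADD INDEX idx_lo_hi (lolong, hilong);
```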
Joe Lipson
(1 rep)
Apr 24, 2017, 02:19 AM
• Last activity: May 14, 2025, 11:04 AM
Showing page 1 of 20 total questions