
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

2 votes
1 answer
924 views
Delete using join or using sub-query?
I need to delete rows from a table based on what is present in a temporary table. For me, both of these statements work:
DELETE FROM main_table WHERE id IN (SELECT deletable_id FROM temporary_table);
and
DELETE main_table FROM main_table JOIN temporary_table ON main_table.id = temporary_table.deletable_id;
Which of the two is advisable, given that main_table will have about a billion rows and the temporary table a few thousand?
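A minimal sketch for testing both forms yourself, assuming MySQL 5.6+ (which allows EXPLAIN on DELETE) and the names from the question; at a billion rows, a chunked delete also keeps each transaction and its undo log small:
-- Compare the plan the optimizer picks for each form
EXPLAIN DELETE FROM main_table
WHERE id IN (SELECT deletable_id FROM temporary_table);
EXPLAIN DELETE main_table
FROM main_table
JOIN temporary_table ON main_table.id = temporary_table.deletable_id;
-- Only the single-table form accepts LIMIT, so chunking must use it.
-- Repeat until 0 rows are affected.
DELETE FROM main_table
WHERE id IN (SELECT deletable_id FROM temporary_table)
LIMIT 10000;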
gaganbm (141 rep)
Jun 12, 2015, 11:51 AM • Last activity: Aug 1, 2025, 05:03 PM
0 votes
1 answer
1035 views
Detect duplicate entries with a LEFT JOIN, return the first value and the count of duplicates
Hi! I have a case with two tables that can have a lot of matching entries, and the fetch time increases a lot. The tables are:
Table A: Employees
-------------------------
| Name   | ID | Account |
-------------------------
| Nicole | 01 | 12345   |
| Alexis | 02 | 67890   |
-------------------------
And Table B: BankAccounts
--------------------------
| Name   | ID | Account |
--------------------------
| Nicole | 01 | 12345   |
| Nicole | 01 | 67890   | //duplicate accounts
| Alexis | 02 | 67890   | //duplicate accounts
--------------------------
And I want to do this with a LEFT JOIN on a table that can have more than 450,000 different entries.
Result Table C:
Column_A = does the account number exist in another register?
Column_B = if (NumberOfMatches > 1) // this means the account was found under another user, and I want the first value of all possible matches
-----------------------------------------------------------------------------
| Name   | ID | Account | Column_A | NumberOfMatches | Column_B     | BadID |
-----------------------------------------------------------------------------
| Nicole | 01 | 12345   | No       | 1               | Nicole (OK)  | null  |
| Alexis | 02 | 67890   | Yes      | 2               | Nicole (BAD) | 01    |
-----------------------------------------------------------------------------
Thanks and regards! Note: sorry for my English, I'm learning. :p
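A minimal sketch of one way to get the match count and a deterministic "first" match in a single pass, assuming MySQL and the table names above; MIN() stands in for "first", since "first" is only well defined once you pick an ordering rule:
SELECT e.Name,
       e.ID,
       e.Account,
       COUNT(b.Account) AS NumberOfMatches,
       MIN(b.Name)      AS FirstMatch,  -- "first" by name; substitute your own ordering
       MIN(CASE WHEN b.ID <> e.ID THEN b.ID END) AS BadID
FROM Employees e
LEFT JOIN BankAccounts b ON b.Account = e.Account
GROUP BY e.Name, e.ID, e.Account;
With 450,000+ rows, an index on BankAccounts(Account) is what keeps this join fast; grouping once avoids re-probing per output column.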
user186910
Feb 20, 2020, 05:38 AM • Last activity: Aug 1, 2025, 04:06 AM
0 votes
1 answer
383 views
Joining columns with different character sets
I am joining two tables on varchar columns, one with CHARACTER SET utf8 COLLATE utf8_bin and the other with CHARACTER SET latin1. The index is not used when I join the two tables on this column. I only store alphanumeric values: English characters a-z, A-Z and digits 0-9, no lengthy strings (Chinese characters/UTF-16) or characters that take more space. I am working with MySQL InnoDB. Is this the normal behaviour?
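It is expected, as a hedged explanation: when the character sets differ, MySQL converts the latin1 side to utf8 for the comparison, so the index on the latin1 column can no longer be used for lookups. A sketch of the usual fix, assuming a hypothetical table latin1_table with join column code that you are free to alter; converting both sides to the same charset and collation restores index use:
-- Convert just the join column to match the other side
ALTER TABLE latin1_table
  MODIFY code VARCHAR(32) CHARACTER SET utf8 COLLATE utf8_bin;
-- Or convert the whole table in one go
ALTER TABLE latin1_table CONVERT TO CHARACTER SET utf8 COLLATE utf8_bin;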
Manish Kumar (1 rep)
Jun 6, 2023, 10:07 AM • Last activity: Jul 31, 2025, 07:08 AM
0 votes
1 answer
509 views
Simple Update Join much slower than it should be (MYSQL)
This is a simple UPDATE join that updates only about 100 rows:
UPDATE A INNER JOIN B USING (id) SET A.active = 1 WHERE A.date > '2020'
This takes about 30 seconds to run, despite the fact that:
- This query updates the same 100 rows and takes milliseconds to run: UPDATE A SET active = 1 WHERE date > '2020'
- The join condition is fast; this query does the same join and takes less than a second: SELECT * FROM A INNER JOIN B USING (id) WHERE A.date > '2020'
- The field active is not part of any index
- Table A has an index on (id, date), and table B has an index on id.
I tried moving the WHERE condition into the join condition (... AND A.date > '2020'), but it didn't help. I'm absolutely stumped why this takes so long. Any help is appreciated.
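Two hedged rewrites worth testing, assuming the schema in the question; the first keeps the single-table UPDATE (which you showed is fast) and expresses the join as an IN subquery, the second forces B's ids to be materialized before the update touches A:
-- Semi-join form: single-table UPDATE plus an IN subquery
UPDATE A
SET active = 1
WHERE date > '2020'
  AND id IN (SELECT id FROM B);
-- Derived-table form: materialize B's ids, then join
UPDATE A
JOIN (SELECT id FROM B) b USING (id)
SET A.active = 1
WHERE A.date > '2020';
Comparing EXPLAIN output of each variant against the original should show whether the multi-table UPDATE picked a poor join order.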
Tod (1 rep)
Sep 1, 2020, 05:00 AM • Last activity: Jul 28, 2025, 07:01 AM
0 votes
2 answers
7660 views
Confusion over using LEFT JOIN on multiple tables
I may be misunderstanding something basic here, but I'm struggling to find this issue explained in my research. Let's say we have a users table and other tables that relate to the user:
- orders (contains a userId)
- reviews (contains a userId)
- vaccinations (contains a userId)
Each user can have many orders, reviews, or vaccinations. Now let's say that, for whatever code I'm writing, I want to get all users with all their orders, reviews and vaccinations. **Should I be writing one query that left joins everything together, or three separate queries?** I.e., should it be something like:
SELECT * FROM users
LEFT JOIN orders ON orders.userId = users.id
LEFT JOIN reviews ON reviews.userId = users.id
LEFT JOIN vaccinations ON vaccinations.userId = users.id
Or three completely separate queries like:
1. SELECT * FROM users LEFT JOIN orders ON orders.userId = users.id
2. SELECT * FROM users LEFT JOIN reviews ON reviews.userId = users.id
3. SELECT * FROM users LEFT JOIN vaccinations ON vaccinations.userId = users.id
## Some background ##
I think what's causing me confusion is that most of my time spent querying SQL is through the Node ORM Sequelize. It lets me happily query the database with a single query that on the face of it makes sense, something like:
return models.users.findAll({
  include: [{ model: models.orders, required: false },
            { model: models.reviews, required: false },
            { model: models.vaccinations, required: false }],
});
In code it returns the results in a really nicely ordered way that makes a lot of sense. However, what I realised when looking at the MySQL slow query log is that some of these joins were returning hundreds of thousands of rows per query. I guess this is because one extra row in any of the joined tables multiplies the number of rows the query returns. Just to repeat the question to end with: **Should I be writing one query that left joins everything together, or three separate queries?** Thank you so much for your help.
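If you go the separate-queries route, a common pattern (a generic sketch, not Sequelize-specific) is one query per child table keyed by the parent ids, which avoids the orders × reviews × vaccinations row multiplication of the chained LEFT JOINs:
-- 1. Fetch the users (or just the ids you need)
SELECT id, name FROM users;
-- 2-4. One query per child table, keyed by the ids from step 1
SELECT * FROM orders       WHERE userId IN (1, 2, 3 /* ids from step 1 */);
SELECT * FROM reviews      WHERE userId IN (1, 2, 3);
SELECT * FROM vaccinations WHERE userId IN (1, 2, 3);
The single-join version returns one row per combination of children, which is exactly why the slow log showed hundreds of thousands of rows; the per-table version returns each child row exactly once.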
pezza3434 (1 rep)
Dec 5, 2021, 08:10 PM • Last activity: Jul 25, 2025, 12:02 PM
0 votes
2 answers
1364 views
UNION or SELF JOIN?
I have a table that holds parent and child posts together.
+------+----------------+--------------+-------------+------------+
| p_id | parent_post_id | child_status | post_status | post_title |
+------+----------------+--------------+-------------+------------+
|    1 |              0 |            1 | publish     | New 1      |
|    2 |              1 |            0 | publish     | ab 1       |
|    3 |              1 |            0 | publish     | ab2        |
|    4 |              0 |            0 | publish     | new2       |
|    5 |              4 |            0 | publish     | ab3        |
+------+----------------+--------------+-------------+------------+
I want to show all parents from this table, along with the child posts if the parent's child_status is set to true. Currently I use a self join to accomplish this:
SELECT p1.* FROM wp_bw_post p1
LEFT OUTER JOIN wp_bw_post p2 ON p1.parent_post_id = p2.p_id
WHERE (p1.parent_post_id = 0 OR p2.child_status = 1)
AND p1.post_status = "publish";
which gives me the expected results:
+------+----------------+--------------+-------------+------------+
| p_id | parent_post_id | child_status | post_status | post_title |
+------+----------------+--------------+-------------+------------+
|    1 |              0 |            1 | publish     | New 1      |
|    2 |              1 |            0 | publish     | ab 1       |
|    3 |              1 |            0 | publish     | ab2        |
|    4 |              0 |            0 | publish     | new2       |
+------+----------------+--------------+-------------+------------+
4 rows in set (0.00 sec)
Performance is my main concern, as this data is presented with infinite scroll. The table is expected to have millions of records, and I need to present it as a single queue with some WHERE and ORDER BY conditions. Is this the efficient way to do it, or should I store the parent and child posts in different tables and fetch everything with a UNION?
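For reference, a UNION ALL version of the same result set (a sketch against the table above) that can be benchmarked against the self join; each branch is a simple predicate that can use its own index, and no row can qualify for both branches, so UNION ALL is safe:
SELECT p.* FROM wp_bw_post p
WHERE p.parent_post_id = 0 AND p.post_status = 'publish'
UNION ALL
SELECT p1.* FROM wp_bw_post p1
JOIN wp_bw_post p2 ON p1.parent_post_id = p2.p_id
WHERE p2.child_status = 1 AND p1.post_status = 'publish';
A hypothetical index such as (post_status, parent_post_id) would serve the first branch directly.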
Dency G B (109 rep)
Dec 3, 2015, 09:25 AM • Last activity: Jul 25, 2025, 10:09 AM
0 votes
1 answer
2725 views
Left Join with only one of multiple matches in the right table
I have a database where every entry may have multiple names, so I made a names table. While all names are displayed elsewhere, here I need a list of all main entries with one representative name each. All names are ranked, and the best-ranked name (the one with the smallest position greater than 0) shall be used. Unranked names are “0” or “-1” and should be ignored, and since the name-ranking system is bad, the one to use is not always “1”. If an entry has no usable name, the main entry should still be returned. In short: I need a left join that takes all entries of table “main” and joins each with the name that has the smallest position greater than 0, if there is one. main:
| main_ID | val_A | val_B |
+---------+-------+-------+
|       2 | some  | stuff |
|       3 | and   | more  |
|       4 | even  | more  |
names:
| name_ID | main_ID | name           | position |
+---------+---------+----------------+----------+
|       1 |       2 | best name      |        1 |
|       2 |       2 | some name      |        0 |
|       3 |       3 | alt name       |        3 |
|       4 |       2 | cool name      |        2 |
|       5 |       3 | abandoned name |       -1 |
|       6 |       3 | awesome name   |        2 |
what I want to get:
| main_ID | val_A | val_B | name         |
+---------+-------+-------+--------------+
|       2 | some  | stuff | best name    |
|       3 | and   | more  | awesome name |
|       4 | even  | more  |              |
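A minimal sketch of one standard way to do this in MySQL, using the tables above; the correlated subquery picks the smallest positive position per main_ID, and entries with no ranked name still come back because the whole condition sits in the LEFT JOIN's ON clause:
SELECT m.main_ID, m.val_A, m.val_B, n.name
FROM main m
LEFT JOIN names n
  ON  n.main_ID  = m.main_ID
  AND n.position = (SELECT MIN(n2.position)
                    FROM names n2
                    WHERE n2.main_ID = m.main_ID
                      AND n2.position > 0);
One caveat: if two names of the same entry share the same minimal position, the entry is returned twice; a tie-breaker (e.g. smallest name_ID) would be needed then.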
Benito (1 rep)
Jul 10, 2023, 10:01 PM • Last activity: Jul 24, 2025, 10:02 PM
1 vote
1 answer
735 views
Simple query with a single join very slow
I have this very slow, simple query that joins a large table (~180M rows) with a smaller table (~60k rows) with a foreign key, filtering an indexed column on the smaller table, ordering by the primary key in the larger table, and then taking the 25 latest rows. The EXPLAIN shows Using index; Using temporary; Using filesort on the smaller table. Why? Engine: MySQL 5.7. Query:
SELECT
    order.id,
    order.company_id,
    order.total
FROM
    order
INNER JOIN
    company ON company.id = order.company_id
WHERE
    company.company_headquarter_id = 23133
ORDER BY order.id DESC
LIMIT 25;
+----+-------------+------------+------------+------+---------------------------------------+----------------------------+---------+-----------------------+------+----------+----------------------------------------------+
| id | select_type | table      | partitions | type | possible_keys                         | key                        | key_len | ref                   | rows | filtered | Extra                                        |
+----+-------------+------------+------------+------+---------------------------------------+----------------------------+---------+-----------------------+------+----------+----------------------------------------------+
|  1 | SIMPLE      | company    | NULL       | ref  | PRIMARY,company_headquarter_id_idx    | company_headquarter_id_idx | 8       | const                 |    6 |   100.00 | Using index; Using temporary; Using filesort |
|  1 | SIMPLE      | order      | NULL       | ref  | company_id_idx                        | company_id_idx             | 8       | company.id            |  381 |   100.00 | NULL                                         |
+----+-------------+------------+------------+------+---------------------------------------+----------------------------+---------+-----------------------+------+----------+----------------------------------------------+
CREATE TABLE order (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  company_id bigint(20) NOT NULL,
  total double(18,2) NOT NULL,
  PRIMARY KEY (id),
  KEY company_id_idx (company_id),
  CONSTRAINT company_id_fk FOREIGN KEY (company_id) REFERENCES company (id)
) ENGINE=InnoDB AUTO_INCREMENT=186518644 DEFAULT CHARSET=latin1

CREATE TABLE company (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  company_headquarter_id bigint(20) NOT NULL,
  name varchar(100) NOT NULL,
  PRIMARY KEY (id),
  KEY company_headquarter_id_idx (company_headquarter_id),
  CONSTRAINT company_headquarter_id_fk FOREIGN KEY (company_headquarter_id) REFERENCES company_headquarter (id)
) ENGINE=InnoDB AUTO_INCREMENT=60825 DEFAULT CHARSET=latin1

CREATE TABLE company_headquarter (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  name varchar(100) NOT NULL,
  phone varchar(10) DEFAULT NULL,
  address_id bigint(20) NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY name (name),
  KEY address_id_idx (address_id),
  CONSTRAINT address_id_fk FOREIGN KEY (address_id) REFERENCES address (id)
) ENGINE=InnoDB AUTO_INCREMENT=43862 DEFAULT CHARSET=latin1

CREATE TABLE address (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  street_address varchar(100) DEFAULT NULL,
  zip varchar(7) DEFAULT NULL,
  state varchar(2) DEFAULT NULL,
  city varchar(50) DEFAULT NULL,
  country varchar(10) DEFAULT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB AUTO_INCREMENT=147360955 DEFAULT CHARSET=latin1
The query becomes faster when I:
* Remove the ORDER BY clause.
* Filter company.company_headquarter_id on a company_headquarter_id that has a smaller number of orders (company_headquarter_id = 23133 has ~3M rows in the order table).
* Split it into two separate queries. First:
SELECT
    company.id
FROM
    company
WHERE
    company.company_headquarter_id = 23133;
Second:
SELECT
    order.id,
    order.company_id,
    order.total
FROM
    order
WHERE
    order.company_id IN (20122, 50729, 50730, 50731, 50732, 50733)  /* From first query */
ORDER BY order.id DESC
LIMIT 25;
Any ideas? Thank you. EDIT: When I do:
SELECT STRAIGHT_JOIN
    order.id,
    order.company_id,
    order.total
FROM
    order
INNER JOIN
    company ON company.id = order.company_id
WHERE
    company.company_headquarter_id = 23133
ORDER BY order.id DESC
LIMIT 25;
The query is much faster and EXPLAIN shows a temporary table is not created.
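That matches the symptoms, as a hedged reading of the plans: STRAIGHT_JOIN pins the join order to order first, so MySQL can walk the primary key in descending id order and stop after 25 matches, instead of collecting all ~3M orders for the headquarter and sorting them. An alternative that often gets the same plan without the hint, assuming the schema above, is to fold the company lookup into a subquery, essentially the fast two-query version in one statement:
SELECT o.id, o.company_id, o.total
FROM `order` o
WHERE o.company_id IN (SELECT c.id
                       FROM company c
                       WHERE c.company_headquarter_id = 23133)
ORDER BY o.id DESC
LIMIT 25;
Whether the optimizer keeps the id-ordered scan here is worth verifying with EXPLAIN; it can still fall back to the slow plan for headquarters with few orders.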
flyingdutchman (11 rep)
Apr 23, 2022, 05:42 PM • Last activity: Jul 23, 2025, 04:07 PM
1 vote
2 answers
143 views
How to keep only the rows whose timestamp difference is greater than a given interval
I have a Postgres table with timestamps as entries (see the samples-table screenshot, omitted here). From this table, I would like to calculate a new one in which no consecutive entries have a timestamp difference shorter than 400 milliseconds. So, in the case of the screenshot, from the first 10 rows I would keep only [1,5,9]. I tried with joins, but I realised I would need the updated table before calculating the ON clause of later rows, because I would need to know which rows had already been deleted. Edit: I tried the following join to at least get an idea of the tokens I would like to delete:
select distinct on (s.token)
       s.token AS token1, s.timestamp AS tm1,
       s2.token AS token2, s2.timestamp AS tm2
from temporal.samples s
join temporal.samples s2
  on s2.timestamp > s.timestamp + interval '400000 microseconds'
Giving this result (screenshot omitted): here I see that the next token after the first one that satisfies the condition is the 5th, so I would like to delete 2, 3 and 4. Then the next one that is at least 400 ms after the 5th is the 9th, so I would like to delete 6, 7 and 8. Thanks in advance
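This "keep a row only if it is far enough from the last kept row" rule is inherently sequential, so a plain join or window function cannot express it, but a recursive CTE can. A sketch, assuming the temporal.samples table from the question with columns token and timestamp; each step jumps to the first row more than 400 ms after the previously kept one:
WITH RECURSIVE kept AS (
    (SELECT s.token, s.timestamp
     FROM temporal.samples s
     ORDER BY s.timestamp
     LIMIT 1)
  UNION ALL
    SELECT nxt.token, nxt.timestamp
    FROM kept k
    JOIN LATERAL (
        SELECT s.token, s.timestamp
        FROM temporal.samples s
        WHERE s.timestamp > k.timestamp + interval '400 milliseconds'
        ORDER BY s.timestamp
        LIMIT 1
    ) nxt ON true
)
SELECT token FROM kept;
For the example data this should return 1, 5, 9, ...; deleting is then "DELETE ... WHERE token NOT IN (SELECT token FROM kept)". An index on samples(timestamp) keeps each LATERAL probe to a single index seek.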
eddie_jung (11 rep)
Jun 10, 2023, 10:56 AM • Last activity: Jul 20, 2025, 08:04 PM
0 votes
1 answer
407 views
Poor performance on MySQL 5.7 on joined tables (scanning the wrong table?)
I have a joined query (news, publishers) with indexing. The query works fine on MySQL 5.5. After upgrading one of the servers to MySQL 5.7, I started noticing high load, high CPU, and slow queries. A query taking almost 0.00 seconds on 5.5 takes 2 to 5 seconds on MySQL 5.7. Query:
SELECT news.id FROM news, publishers
WHERE news.publisher_id = publishers.id
AND publishers.language = 'en'
ORDER BY date_added DESC LIMIT 10;
I tried to figure out what happens with EXPLAIN, and here are my findings:
MySQL 5.5
+----+-------------+------------+--------+------------------+----------------+---------+---------------------------------+------+-------------+
| id | select_type | table      | type   | possible_keys    | key            | key_len | ref                             | rows | Extra       |
+----+-------------+------------+--------+------------------+----------------+---------+---------------------------------+------+-------------+
|  1 | SIMPLE      | news       | index  | idx_publisher_id | idx_date_added | 9       | NULL                            |   10 |             |
|  1 | SIMPLE      | publishers | eq_ref | PRIMARY          | PRIMARY        | 8       | klsescre_klse.news.publisher_id |    1 | Using where |
+----+-------------+------------+--------+------------------+----------------+---------+---------------------------------+------+-------------+
MySQL 5.7
+----+-------------+------------+------------+-------+------------------+------------------+---------+-----------------------------+------+----------+------------------------------------------------------------+
| id | select_type | table      | partitions | type  | possible_keys    | key              | key_len | ref                         | rows | filtered | Extra                                                      |
+----+-------------+------------+------------+-------+------------------+------------------+---------+-----------------------------+------+----------+------------------------------------------------------------+
|  1 | SIMPLE      | publishers | NULL       | index | PRIMARY          | NULL             | 277     | NULL                        |   47 |    10.00 | Using where; Using index; Using temporary; Using filesort  |
|  1 | SIMPLE      | news       | NULL       | ref   | idx_publisher_id | idx_publisher_id | 8       | klsescre_klse.publishers.id | 4962 |   100.00 | NULL                                                       |
+----+-------------+------------+------------+-------+------------------+------------------+---------+-----------------------------+------+----------+------------------------------------------------------------+
My guess is that in 5.7 MySQL scans the publishers table before news, thus not making use of the index I created for news, making the query much slower. Can anyone help me with this? How can I make MySQL 5.7 scan the tables like 5.5 does?
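A hedged workaround while the root cause (5.7's changed cost model picking publishers first) is investigated: STRAIGHT_JOIN pins the join order as written, so news is read via idx_date_added and the scan can stop after 10 rows, like the 5.5 plan:
SELECT STRAIGHT_JOIN news.id
FROM news
JOIN publishers ON news.publisher_id = publishers.id
WHERE publishers.language = 'en'
ORDER BY news.date_added DESC
LIMIT 10;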
neobie (149 rep)
May 30, 2017, 11:17 AM • Last activity: Jul 19, 2025, 08:02 PM
0 votes
1 answer
153 views
MySQL Insert/Update across 3 tables with 1m+ rows
To start with, I know nothing of database design, so I apologise if this seems obvious to others. I have been researching up to 3NF over the last few weeks, and I think I have a layout that works. I have a database with 1m+ rows, currently organised as follows:
Table: MasterTable — Columns: ID, FirstName, LastName, PetName, PetAge
I would like to split it as follows:
Table: People — PersonID (PK), FirstName, LastName
Table: Pets — PetID (PK), PetName, PetAge
Table: Records — RecordID (PK), MasterTable.ID, People.PersonID, Pets.PetID
PKs in all cases auto-increment so that more records can be added later. The People and Pets tables have been populated using:
INSERT INTO Pets(PetName, PetAge) SELECT PetName, PetAge FROM MasterTable WHERE 1
INSERT INTO People(FirstName, LastName) SELECT FirstName, LastName FROM MasterTable WHERE 1
INSERT INTO Records(ID) SELECT ID FROM MasterTable WHERE 1
So I have three tables. When I try to populate the Records table, I can't get anything to work. I have tried:
INSERT INTO Records(PersonID, PetID, ID)
SELECT People.PersonID, Pets.PetID, MasterTable.ID
FROM MasterTable
LEFT JOIN People ON MasterTable.FirstName = People.FirstName AND People.LastName = MasterTable.LastName
LEFT JOIN Pets ON Pets.PetName = MasterTable.PetName AND Pets.PetAge = MasterTable.PetAge
WHERE 1
I think the WHERE clause might be the problem. I have tried WHERE Pets.PetName = MasterTable.PetName and almost every kind of WHERE I can think of. I have a few questions I'd really appreciate some help with, as I'm going out of my mind here.
1) Does the order of the LEFT JOIN clauses matter? Does it matter which table is specified first and which last?
2) I initially tried INNER JOIN, but I figured it would just join more rows than necessary — is that right?
3) If I am inserting FirstName and LastName, I can't match on FirstName and LastName, right? As in, create the FirstName/LastName entries and then use that ID to match in the next join?
It seems simple enough to split this into three tables, assign a PK to each, and then create a final table where PKs relate to PKs, but apparently it's not. When I add LIMIT 5 the SELECT returns the correct info. Without the LIMIT clause, all my attempts have run for over 24 hours and not finished: either they get stuck copying *everything* to temp tables, or the status just says "selecting data". Can someone please help? Sorry if something doesn't make sense, I'll clarify as I go.
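One hedged guess that would explain the 24h+ runtime: the lookup tables have no indexes on the columns being joined on, so every one of the 1M+ MasterTable rows triggers a full scan of People and of Pets. A sketch of the indexes to add before running the big INSERT, using the column names above:
-- Without these, each MasterTable row scans People and Pets in full
ALTER TABLE People ADD INDEX idx_people_name (FirstName, LastName);
ALTER TABLE Pets   ADD INDEX idx_pets_name_age (PetName, PetAge);
With the indexes in place, the order of the two LEFT JOINs does not change the result; MasterTable stays the driving table either way.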
ernoh (1 rep)
Sep 3, 2020, 09:07 PM • Last activity: Jul 18, 2025, 10:04 AM
0 votes
2 answers
335 views
Select Sum from two joined tables
Here are the structures:
CREATE TABLE invoices (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  date date NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;
INSERT INTO invoices VALUES (1,'2018-09-22');
CREATE TABLE products (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  invoice_id int(10) unsigned NOT NULL,
  amount decimal(10,2) unsigned NOT NULL,
  quantity smallint(5) unsigned NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;
INSERT INTO products VALUES (1,1,150.00,2),(2,1,60.00,3),(3,1,50.00,1);
CREATE TABLE payments (
  id int(10) unsigned NOT NULL AUTO_INCREMENT,
  invoice_id int(10) unsigned NOT NULL,
  amount decimal(10,2) unsigned NOT NULL,
  date date NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB;
INSERT INTO payments VALUES (1,1,400.00,'2018-09-23'),(2,1,80.00,'2018-09-23');
I have this query:
select i.id,
       sum(pr.amount * pr.quantity) as productAmount,
       sum(pm.amount) as paymentAmount
from invoices as i
left join products as pr on pr.invoice_id = i.id
left join payments as pm on pm.invoice_id = i.id
group by i.id
and it gives this result:
+----+---------------+---------------+
| id | productAmount | paymentAmount |
+----+---------------+---------------+
|  1 |       1060.00 |       1440.00 |
+----+---------------+---------------+
1 row in set (0,00 sec)
However, I want to get the following result:
+----+---------------+---------------+
| id | productAmount | paymentAmount |
+----+---------------+---------------+
|  1 |        530.00 |        480.00 |
+----+---------------+---------------+
1 row in set (0,00 sec)
I want the sum of the products and the sum of the payments, grouped by invoices.id. What should the query be in this case?
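The doubled numbers come from join fan-out: 3 product rows × 2 payment rows give 6 combined rows per invoice, so each SUM counts every value several times (530 × 2 = 1060, 480 × 3 = 1440). The standard fix, sketched against the tables above, is to aggregate each child table in its own derived table before joining:
SELECT i.id,
       pr.productAmount,
       pm.paymentAmount
FROM invoices i
LEFT JOIN (SELECT invoice_id, SUM(amount * quantity) AS productAmount
           FROM products
           GROUP BY invoice_id) pr ON pr.invoice_id = i.id
LEFT JOIN (SELECT invoice_id, SUM(amount) AS paymentAmount
           FROM payments
           GROUP BY invoice_id) pm ON pm.invoice_id = i.id;
Each derived table yields at most one row per invoice, so the sums can no longer multiply each other.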
abdulmanov.ilmir (101 rep)
Sep 24, 2018, 06:58 AM • Last activity: Jul 17, 2025, 09:08 AM
0 votes
1 answer
159 views
MySQL: Reason for Degraded performance of a single inner join
We have two tables in our MYSQL 5.7 Aurora database: CUSTOMER_ORDER and BATCH. Customer order can have only one batch associated and it is not mandatory to have one. Create table statement of CUSTOMER_ORDER table: CREATE TABLE 'CUSTOMER_ORDER' ( 'CLIENT_ID' varchar(32) COLLATE utf8mb4_bin NOT NULL, 'ORDER_ID' varchar(64) COLLATE utf8mb4_bin NOT NULL, 'ORDER' json NOT NULL, 'ORDER_DATE' date GENERATED ALWAYS AS ( cast(json_unquote(json_extract('ORDER', '$.date')) as date) ) VIRTUAL, 'TEAM_ID' varchar(32) COLLATE utf8mb4_bin GENERATED ALWAYS AS ( json_unquote(json_extract('ORDER', '$.teamId.teamId')) ) VIRTUAL, 'ORDER_SOURCE' varchar(32) COLLATE utf8mb4_bin GENERATED ALWAYS AS ( json_unquote(json_extract('ORDER', '$.orderSource')) ) VIRTUAL, 'ORDER_STATUS' varchar(32) COLLATE utf8mb4_bin GENERATED ALWAYS AS ( json_unquote(json_extract('ORDER', '$.status.status')) ) VIRTUAL, 'EFFECTIVE_STATUS' varchar(32) COLLATE utf8mb4_bin GENERATED ALWAYS AS ( json_unquote(json_extract('ORDER', '$.effectiveStatus')) ) VIRTUAL, 'CREATED_ON' timestamp(6) NOT NULL, 'UPDATED_ON' timestamp(6) NOT NULL, 'ADDED_ON' timestamp(6) NULL DEFAULT CURRENT_TIMESTAMP(6) ON UPDATE CURRENT_TIMESTAMP(6), 'BATCH_ID' varchar(128) COLLATE utf8mb4_bin GENERATED ALWAYS AS ( json_unquote(json_extract('ORDER', '$.batchId.batchId')) ) VIRTUAL, PRIMARY KEY ('CLIENT_ID', 'ORDER_ID'), KEY 'order_date_team_idx' ('CLIENT_ID', 'ORDER_DATE', 'TEAM_ID') ) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4 COLLATE = utf8mb4_bin Create table statement for BATCH table: CREATE TABLE 'BATCH' ( 'CLIENT_ID' varchar(32) COLLATE utf8mb4_bin NOT NULL, 'BATCH_ID' varchar(128) COLLATE utf8mb4_bin NOT NULL, 'BATCH_DATE' date NOT NULL, 'BATCH_STATUS' varchar(32) COLLATE utf8mb4_bin NOT NULL, 'BATCH_SLA' varchar(32) COLLATE utf8mb4_bin NOT NULL, 'BATCH' json NOT NULL, 'EMPLOYEE_ID' varchar(32) COLLATE utf8mb4_bin DEFAULT NULL, 'EMPLOYEE_PERSONA_ID' varchar(32) COLLATE utf8mb4_bin DEFAULT NULL, 'VEHICLE_ID' varchar(32) COLLATE utf8mb4_bin DEFAULT NULL, 'VEHICLE_MODEL_ID' varchar(32) COLLATE utf8mb4_bin DEFAULT NULL, 'RECORD_VERSION' int(11) NOT NULL, 'CREATED_ON' timestamp(3) NOT NULL, 'UPDATED_ON' timestamp(3) NOT NULL, 'ADDED_ON' timestamp(3) NULL DEFAULT CURRENT_TIMESTAMP(3) ON UPDATE CURRENT_TIMESTAMP(3), 'MINIMAL_BATCH' json DEFAULT NULL, 'BATCH_ID' varchar(64) COLLATE utf8mb4_bin GENERATED ALWAYS AS ( json_unquote(json_extract('MINIMAL_BATCH', '$.batch.planId.sourceId')) ) VIRTUAL, 'PLAN_ID' varchar(64) COLLATE utf8mb4_bin GENERATED ALWAYS AS ( json_unquote(json_extract('MINIMAL_BATCH', '$.batch.planId.planId')) ) VIRTUAL, PRIMARY KEY ('CLIENT_ID', 'BATCH_ID'), KEY 'date_rider_idx' ('CLIENT_ID', 'BATCH_DATE', 'EMPLOYEE_ID') ) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4 COLLATE = utf8mb4_bin And I am using the following query to find out the count of customer orders for a given client for a given date: SELECT COUNT(1) FROM CUSTOMER_ORDER AS customer_order INNER JOIN BATCH AS batch ON customer_order.CLIENT_ID = batch.CLIENT_ID AND customer_order.BATCH_ID = batch.BATCH_ID WHERE customer_order.CLIENT_ID = 'clientA' AND ORDER_DATE = '2021-05-01'; The reason I am doing this left outer join is to do further filtering of customer orders based on the batch. The problem I am facing with this query is that it takes in order of minutes to execute this query for clients who have large number of customer orders(~20k-100k) for a given date even without any extra filters on the batch table. 
The output of the EXPLAIN statement for the query is as given below: id,select_type,table,partitions,type,possible_keys,key,key_len,ref,rows,filtered,Extra 1,SIMPLE,customer_order,NULL,ref,"PRIMARY,order_date_team_idx,batch_idx",PRIMARY,130,const,1,10.00,"Using where" 1,SIMPLE,batch,NULL,eq_ref,"PRIMARY,date_rider_idx,team_id_idx",PRIMARY,644,"const,locus_devo.customer_order.BATCH_ID",1,100.00,"Using index" Can you please help me identify the root cause of underperformance of this query?
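One hedged avenue, given the schema above: BATCH_ID on CUSTOMER_ORDER is a virtual column extracted from JSON, so it is re-evaluated per row unless indexed; MySQL 5.7 InnoDB does support secondary indexes on virtual generated columns. If the batch_idx the plan mentions is only (CLIENT_ID, BATCH_ID), a composite index that also covers the date filter would let the count run from the index alone, a sketch:
-- Covers the CLIENT_ID + ORDER_DATE filter and exposes BATCH_ID for the join
ALTER TABLE CUSTOMER_ORDER
  ADD INDEX order_date_batch_idx (CLIENT_ID, ORDER_DATE, BATCH_ID);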
PaulDaviesC (101 rep)
Jun 2, 2021, 06:41 AM • Last activity: Jul 15, 2025, 11:04 PM
0 votes
3 answers
158 views
MySQL foreign key and primary key
Consider table A with columns id (primary key) and name, and table B with columns id, a_id (foreign key linked to table A's id column), and address. What will be the sequence of columns if the query is: SELECT * FROM B INNER JOIN A ON B.a_id = A.id;
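For what it's worth: with SELECT *, the columns come back in FROM-clause order, so B's columns first (id, a_id, address) and then A's (id, name), each in table-definition order. A small sketch with explicit aliases, which avoids relying on that ordering and disambiguates the two id columns:
SELECT B.id   AS b_id,
       B.a_id AS b_a_id,
       B.address,
       A.id   AS a_id,
       A.name
FROM B
INNER JOIN A ON B.a_id = A.id;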
Likita Chavali (9 rep)
Jul 19, 2020, 10:02 AM • Last activity: Jul 14, 2025, 09:05 AM
0 votes
1 answer
153 views
Selecting info from users who have been inactive for at least a week
I need to find the users who have not done a task in at least a week, and pull the relevant information for them: their id, names, and data1 from another table. This is what I have so far, but it's not working. Something to do with the joins, I think? Can someone help me out? :) Additional info: the user_identity table is a bit complicated, with multiple rows per userid, hence the DISTINCT.
SELECT DISTINCT ON (u.id) u.id, u.given_name, u.family_name, i.data1
FROM users u
JOIN task t_week ON u.id = t_week.user_id AND t_week.created_at > %s
JOIN user_identity i ON (u.id = i.user_id)
WHERE t_week.created_at IS NULL
  AND i.content_type = 'email_address'
  AND i.owner_id IS NULL
GROUP BY u.id, u.given_name, u.family_name, i.data1, i.created_at
ORDER BY u.id, i.created_at
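A hedged rewrite of the query above: the inner JOIN on task can never produce the NULL rows that WHERE t_week.created_at IS NULL looks for, because an inner join drops users with no matching task entirely. Turning it into a LEFT JOIN with the recency condition kept inside the ON clause gives the classic anti-join; %s stays the one-week-ago parameter:
SELECT DISTINCT ON (u.id)
       u.id, u.given_name, u.family_name, i.data1
FROM users u
LEFT JOIN task t_week
       ON t_week.user_id = u.id
      AND t_week.created_at > %s          -- one week ago
JOIN user_identity i ON i.user_id = u.id
WHERE t_week.user_id IS NULL              -- no task in the last week
  AND i.content_type = 'email_address'
  AND i.owner_id IS NULL
ORDER BY u.id, i.created_at;
DISTINCT ON plus the ORDER BY picks one identity row per user (the earliest by created_at here), so the GROUP BY is no longer needed.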
Wboy (103 rep)
Aug 12, 2016, 04:18 AM • Last activity: Jul 13, 2025, 05:08 PM
1 vote
1 answer
157 views
What and where to index on a rapidly growing table (Postgres)
I work as a software engineer with python and django. Currently I am struggling with a design choice made before my time. We have a transaction table that logs all customer activity. Due to the success of the platform the data in the table is rapidly increasing. I have issues getting the query time to a manageable size. I have this example of a query that runs extremely slow. I guess some good indexing could do the job but I don't really know where to start. I would love some tips on how to help myself (any quality posts/books or other resources) or how to solve this problem. If somehow possible i would like to not make manual queries and just use the ORM. The * in the select i placed to make the query more readable. SELECT * FROM "customer_customer" INNER JOIN "customer_transaction" ON ("customer_customer"."id" = "customer_transaction"."customer_id") WHERE ("customer_customer"."status" = 1 AND NOT ("customer_customer"."id" IN ( SELECT U1."customer_id" AS Col1 FROM "customer_transaction" U1 WHERE U1."transaction_type" IN (30, 14) ) ) AND "customer_transaction"."date" >= '2018-05-11 11:01:43.598530+02:00') As Asked in the comments here are additional infos: Currently I am running the commands on my local computer. The Query is generated by the orm. Create of the customer table: -- -- PostgreSQL database dump -- -- Dumped from database version 9.6.0 -- Dumped by pg_dump version 9.6.0 SET statement_timeout = 0; SET lock_timeout = 0; SET idle_in_transaction_session_timeout = 0; SET client_encoding = 'UTF8'; SET standard_conforming_strings = on; SET check_function_bodies = false; SET client_min_messages = warning; SET row_security = off; SET search_path = public, pg_catalog; SET default_tablespace = ''; SET default_with_oids = false; -- -- Name: customer_customer; Type: TABLE; Schema: public; Owner: postgres -- CREATE TABLE customer_customer ( id integer NOT NULL, firstname character varying(63), phone character varying(31), terms_accepted boolean NOT NULL, user_id integer, cashed_vip_points integer NOT NULL, vip_points integer NOT NULL, receive_mail boolean NOT NULL, mailchimp_email character varying(254), image character varying(100), image_thumb character varying(100), favorite_hash character varying(31), has_accepted_favorite_hint boolean NOT NULL, address_id integer, blog_url character varying(200), instagram_username character varying(200), overrule_default_vip_points integer, status integer NOT NULL, signature boolean, signature_date date, store_id_id integer, shopping_mail boolean NOT NULL, CONSTRAINT customer_customer_overrule_default_vip_points_check CHECK ((overrule_default_vip_points >= 0)) ); ALTER TABLE customer_customer OWNER TO postgres; -- -- Name: customer_customer_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres -- CREATE SEQUENCE customer_customer_id_seq START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE CACHE 1; ALTER TABLE customer_customer_id_seq OWNER TO postgres; -- -- Name: customer_customer_id_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: postgres -- ALTER SEQUENCE customer_customer_id_seq OWNED BY customer_customer.id; -- -- Name: customer_customer id; Type: DEFAULT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_customer ALTER COLUMN id SET DEFAULT nextval('customer_customer_id_seq'::regclass); -- -- Name: customer_customer customer_customer_address_id_key; Type: CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_customer_address_id_key UNIQUE (address_id); -- -- Name: 
customer_customer customer_customer_favorite_hash_key; Type: CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_customer_favorite_hash_key UNIQUE (favorite_hash); -- -- Name: customer_customer customer_customer_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_customer_pkey PRIMARY KEY (id); -- -- Name: customer_customer customer_customer_user_id_key; Type: CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_customer_user_id_key UNIQUE (user_id); -- -- Name: customer_customer_211f6852; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_customer_211f6852 ON customer_customer USING btree (store_id_id); -- -- Name: customer_customer customer_custo_address_id_41aab9497590bc7_fk_address_address_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_custo_address_id_41aab9497590bc7_fk_address_address_id FOREIGN KEY (address_id) REFERENCES address_address(id) DEFERRABLE INITIALLY DEFERRED; -- -- Name: customer_customer customer_custom_store_id_id_67b8071f917b6245_fk_stores_store_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_custom_store_id_id_67b8071f917b6245_fk_stores_store_id FOREIGN KEY (store_id_id) REFERENCES stores_store(id) DEFERRABLE INITIALLY DEFERRED; -- -- Name: customer_customer customer_customer_user_id_482ced6557101913_fk_auth_user_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_customer ADD CONSTRAINT customer_customer_user_id_482ced6557101913_fk_auth_user_id FOREIGN KEY (user_id) REFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED; -- -- PostgreSQL database dump complete -- And of the transaction table: -- -- PostgreSQL database dump -- -- Dumped from database version 9.6.0 -- Dumped by pg_dump version 9.6.0 SET statement_timeout = 0; SET lock_timeout = 0; SET idle_in_transaction_session_timeout = 0; SET client_encoding = 'UTF8'; SET standard_conforming_strings = on; SET check_function_bodies = false; SET client_min_messages = warning; SET row_security = off; SET search_path = public, pg_catalog; SET default_tablespace = ''; SET default_with_oids = false; -- -- Name: customer_transaction; Type: TABLE; Schema: public; Owner: postgres -- CREATE TABLE customer_transaction ( id integer NOT NULL, points integer, transaction_type integer NOT NULL, customer_id integer NOT NULL, date timestamp with time zone NOT NULL, product_id integer, fotostream_entry_id integer, acommit_transaction_id character varying(36), amount numeric(6,2), has_storno_id integer, merged_customernumber_id integer, message_de character varying(255), message_fr character varying(255), points_befor_migration integer, store_id integer, storno_from_id integer, user_id integer, _transaction_type_messages_id integer, aac_import_row character varying(5000) ); ALTER TABLE customer_transaction OWNER TO postgres; -- -- Name: customer_transaction_id_seq; Type: SEQUENCE; Schema: public; Owner: postgres -- CREATE SEQUENCE customer_transaction_id_seq START WITH 1 INCREMENT BY 1 NO MINVALUE NO MAXVALUE CACHE 1; ALTER TABLE customer_transaction_id_seq OWNER TO postgres; -- -- Name: customer_transaction_id_seq; Type: SEQUENCE OWNED BY; Schema: public; Owner: postgres -- ALTER SEQUENCE customer_transaction_id_seq OWNED BY customer_transaction.id; -- -- Name: 
customer_transaction id; Type: DEFAULT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_transaction ALTER COLUMN id SET DEFAULT nextval('customer_transaction_id_seq'::regclass); -- -- Name: customer_transaction customer_transaction_pkey; Type: CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_transaction ADD CONSTRAINT customer_transaction_pkey PRIMARY KEY (id); -- -- Name: customer_transacti_acommit_transaction_id_71030b1b69b97709_like; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transacti_acommit_transaction_id_71030b1b69b97709_like ON customer_transaction USING btree (acommit_transaction_id varchar_pattern_ops); -- -- Name: customer_transacti_acommit_transaction_id_71030b1b69b97709_uniq; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transacti_acommit_transaction_id_71030b1b69b97709_uniq ON customer_transaction USING btree (acommit_transaction_id); -- -- Name: customer_transaction_7473547c; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_7473547c ON customer_transaction USING btree (store_id); -- -- Name: customer_transaction_928570bc; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_928570bc ON customer_transaction USING btree (merged_customernumber_id); -- -- Name: customer_transaction_9524d7ad; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_9524d7ad ON customer_transaction USING btree (_transaction_type_messages_id); -- -- Name: customer_transaction_9bea82de; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_9bea82de ON customer_transaction USING btree (product_id); -- -- Name: customer_transaction_b65a298f; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_b65a298f ON customer_transaction USING btree (fotostream_entry_id); -- -- Name: customer_transaction_cb24373b; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_cb24373b ON customer_transaction USING btree (customer_id); -- -- Name: customer_transaction_d9b62ea2; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_d9b62ea2 ON customer_transaction USING btree (storno_from_id); -- -- Name: customer_transaction_date_bd33b3ac; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_date_bd33b3ac ON customer_transaction USING btree (date); -- -- Name: customer_transaction_e8701ad4; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_e8701ad4 ON customer_transaction USING btree (user_id); -- -- Name: customer_transaction_f2c0da2f; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_f2c0da2f ON customer_transaction USING btree (has_storno_id); -- -- Name: customer_transaction_transaction_type_36582b63; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_transaction_type_36582b63 ON customer_transaction USING btree (transaction_type); -- -- Name: customer_transaction_transaction_type_custome_3619995d_idx; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_transaction_type_custome_3619995d_idx ON customer_transaction USING btree (transaction_type, customer_id, date); -- -- Name: customer_transaction_transaction_type_customer_id_3eb6f7d0_idx; Type: INDEX; Schema: public; Owner: postgres -- CREATE INDEX customer_transaction_transaction_type_customer_id_3eb6f7d0_idx ON customer_transaction 
USING btree (transaction_type, customer_id); -- -- Name: customer_transaction D4d691342aa107b3b4fb5a167936d123; Type: FK CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_transaction ADD CONSTRAINT "D4d691342aa107b3b4fb5a167936d123" FOREIGN KEY (merged_customernumber_id) REFERENCES customer_customernumber(id) DEFERRABLE INITIALLY DEFERRED; -- -- Name: customer_transaction D6e0c79ad7a40ca02054ed28c4d6999c; Type: FK CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_transaction ADD CONSTRAINT "D6e0c79ad7a40ca02054ed28c4d6999c" FOREIGN KEY (_transaction_type_messages_id) REFERENCES customer_transactiontype(id) DEFERRABLE INITIALLY DEFERRED; -- -- Name: customer_transaction D9460b882ac4401f6adf8077475229ed; Type: FK CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_transaction ADD CONSTRAINT "D9460b882ac4401f6adf8077475229ed" FOREIGN KEY (fotostream_entry_id) REFERENCES fotostream_fotostreamentry(id) DEFERRABLE INITIALLY DEFERRED; -- -- Name: customer_transaction cust_storno_from_id_6a48315f632674fa_fk_customer_transaction_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_transaction ADD CONSTRAINT cust_storno_from_id_6a48315f632674fa_fk_customer_transaction_id FOREIGN KEY (storno_from_id) REFERENCES customer_transaction(id) DEFERRABLE INITIALLY DEFERRED; -- -- Name: customer_transaction custo_has_storno_id_116b248645f7fd59_fk_customer_transaction_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_transaction ADD CONSTRAINT custo_has_storno_id_116b248645f7fd59_fk_customer_transaction_id FOREIGN KEY (has_storno_id) REFERENCES customer_transaction(id) DEFERRABLE INITIALLY DEFERRED; -- -- Name: customer_transaction customer_product_id_428b30409c797b6b_fk_lookbook_productbase_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_transaction ADD CONSTRAINT customer_product_id_428b30409c797b6b_fk_lookbook_productbase_id FOREIGN KEY (product_id) REFERENCES lookbook_productbase(id) DEFERRABLE INITIALLY DEFERRED; -- -- Name: customer_transaction customer_t_customer_id_7962b09af88fe147_fk_customer_customer_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_transaction ADD CONSTRAINT customer_t_customer_id_7962b09af88fe147_fk_customer_customer_id FOREIGN KEY (customer_id) REFERENCES customer_customer(id) DEFERRABLE INITIALLY DEFERRED; -- -- Name: customer_transaction customer_transacti_store_id_4014f4c86692b54b_fk_stores_store_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_transaction ADD CONSTRAINT customer_transacti_store_id_4014f4c86692b54b_fk_stores_store_id FOREIGN KEY (store_id) REFERENCES stores_store(id) DEFERRABLE INITIALLY DEFERRED; -- -- Name: customer_transaction customer_transaction_user_id_3497aca364c6a472_fk_auth_user_id; Type: FK CONSTRAINT; Schema: public; Owner: postgres -- ALTER TABLE ONLY customer_transaction ADD CONSTRAINT customer_transaction_user_id_3497aca364c6a472_fk_auth_user_id FOREIGN KEY (user_id) REFERENCES auth_user(id) DEFERRABLE INITIALLY DEFERRED; -- -- PostgreSQL database dump complete -- And here is the execution plan: Merge Join (cost=14112.60..11422372541.38 rows=505000 width=671) (actual time=1351.692..16713328.367 rows=146436 loops=1) Merge Cond: (customer_customer.id = customer_transaction.customer_id) Buffers: shared hit=1922626 read=720356, temp read=98686908 written=1364 -> Index Scan using customer_customer_pkey on 
customer_customer (cost=14096.99..11422109016.84 rows=81175 width=121) (actual time=1342.257..16649665.313 rows=35553 loops=1) Filter: ((status = 1) AND (NOT (SubPlan 1))) Rows Removed by Filter: 309213 Buffers: shared hit=156156 read=72783, temp read=98686908 written=1364 SubPlan 1 -> Materialize (cost=14096.57..78342.29 rows=805641 width=4) (actual time=0.007..52.406 rows=356853 loops=161642) Buffers: shared hit=1667 read=25275, temp read=98686908 written=1364 -> Bitmap Heap Scan on customer_transaction u1 (cost=14096.57..71166.08 rows=805641 width=4) (actual time=147.297..485.822 rows=797943 loops=1) Recheck Cond: (transaction_type = ANY ('{30,14}'::integer[])) Heap Blocks: exact=24756 Buffers: shared hit=1667 read=25275 -> Bitmap Index Scan on customer_transaction_transaction_type_customer_id_3eb6f7d0_idx (cost=0.00..13895.16 rows=805641 width=0) (actual time=140.944..140.944 rows=797943 loops=1) Index Cond: (transaction_type = ANY ('{30,14}'::integer[])) Buffers: shared hit=1 read=2185 -> Index Scan using customer_transaction_cb24373b on customer_transaction (cost=0.43..252918.54 rows=2144835 width=550) (actual time=0.039..63012.608 rows=2143881 loops=1) Filter: (date >= '2018-05-11 11:01:43.59853+02'::timestamp with time zone) Rows Removed by Filter: 1048039 Buffers: shared hit=1766470 read=647573 Planning time: 16.013 ms Execution time: 16713362.490 ms
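The plan shows the pain point: the NOT IN subquery is materialized to ~357k rows and re-scanned for each of the 161k candidate customers. A hedged rewrite with NOT EXISTS, which Postgres can plan as a proper anti-join (and which also behaves sanely if the subquery could ever return NULLs, where NOT IN returns nothing at all):
SELECT c.*, t.*
FROM customer_customer c
JOIN customer_transaction t ON t.customer_id = c.id
WHERE c.status = 1
  AND t.date >= '2018-05-11 11:01:43.598530+02:00'
  AND NOT EXISTS (SELECT 1
                  FROM customer_transaction u
                  WHERE u.customer_id = c.id
                    AND u.transaction_type IN (30, 14));
The existing (transaction_type, customer_id) index already fits the anti-join probe; Django's ORM produces this shape via .exclude() on an Exists() subquery rather than an __in filter.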
Phteven (111 rep)
Nov 7, 2018, 10:38 AM • Last activity: Jul 12, 2025, 03:03 PM
1 vote
2 answers
163 views
Explain plan sorts the result after the join even though the column is included in an index
I am using SQL Server 2022 Developer, trying to get all AccessLog rows classified as type 1:
SELECT [t].[Time], [u].[UserName], [t].[Type], [t].[Message]
FROM [AccessLog] AS [t]
         LEFT JOIN [AppUser] AS [u] ON [t].[UserId] = [u].[Id]
WHERE EXISTS (SELECT 1
              FROM [LogCatalog] AS [c]
              WHERE [c].[Type] = 1
                AND [c].[Name] = [t].[Type])
ORDER BY [t].[Time] DESC
For 1M records it needs ~90s to execute on my computer, and most of the cost is the sort operator. I already have an index on AccessLog.Time DESC, but the plan still sorts again after the join. https://www.brentozar.com/pastetheplan/?id=HyXzc9UUp
I have these indexes on AccessLog:
1. PK [Id]
2. IX [Time] DESC
3. IX [Time] DESC, [Type] ASC
4. IX [Type] ASC, [Time] DESC
5. IX [Type] ASC
6. IX [UserId] ASC
7. IX [Time] DESC, [UserId] ASC, [Type] ASC
The query filters by [Type] and orders by [Time]; why can't the plan use the [Time], [Type] index without sorting again?
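A hedged rewrite to test, assuming LogCatalog.Name is unique per Type (otherwise the join could duplicate rows where EXISTS would not): expressing the semi-join as a plain join, ideally with a row limit, sometimes lets the optimizer drive from the [Time] DESC index and stop early instead of sorting the whole result:
SELECT TOP (1000)                       -- TOP is illustrative, not in the original
       t.[Time], u.[UserName], t.[Type], t.[Message]
FROM [AccessLog] AS t
JOIN [LogCatalog] AS c
  ON c.[Name] = t.[Type] AND c.[Type] = 1
LEFT JOIN [AppUser] AS u ON u.[Id] = t.[UserId]
ORDER BY t.[Time] DESC;
Without any row limit, returning all matching rows ordered by [Time] gives the optimizer little reason to avoid a full sort when it estimates the semi-join filter as selective.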
Uni (11 rep)
Dec 12, 2023, 07:16 AM • Last activity: Jul 10, 2025, 11:06 PM
0 votes
1 answer
181 views
Select table data with sums from another table
I am very new to MySQL DBA work, and I have searched, but I am not clearly understanding INNER JOIN with SUM.
user table: id, username, locationid
history table: id, username, locationid, spent
Example user table:
1, sarah12, 12
2, martin21, 12
3, sarah12, 13
Example history table:
1, sarah12, 12, 100
2, martin21, 12, 40
3, sarah12, 13, 500
4, sarah12, 12, 50
Now I want to list all users with the sum of their spending history for a specific location ID. For example, I want to list all users with locationid 12 along with the sum of their spending history, so I expect something like:
1, sarah12, 150
2, martin21, 40
Can you please guide me on how to do that? Edit: I tried a random solution from Stack Overflow; this was my last attempt, but it does not seem to work:
select user.*, history.spent as intotal
from user
inner join (select username, sum(spent) as total from history group by username) history
on user.username = history.username
where user.locationid = 12
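Your attempt is close; the missing piece is restricting the history rows to the same location before summing (your derived table sums spending across all locations, so sarah12 would get 650 instead of 150). A minimal sketch using the sample tables above:
SELECT u.id,
       u.username,
       COALESCE(SUM(h.spent), 0) AS total_spent
FROM user u
LEFT JOIN history h
       ON h.username   = u.username
      AND h.locationid = u.locationid
WHERE u.locationid = 12
GROUP BY u.id, u.username;
For the sample data this returns (1, sarah12, 150) and (2, martin21, 40); the LEFT JOIN plus COALESCE keeps users with no spending at that location, showing 0.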
Imran Ahmed (1 rep)
May 28, 2023, 11:53 AM • Last activity: Jul 10, 2025, 06:02 PM
0 votes
1 answer
199 views
Why am I getting error 1114 (the table is full)?
This query works fine:
SELECT MAX(customers.created_at) lst_trx_date, business_infos.id, business_infos.name as business_name, users.name as owner_name
FROM users
INNER JOIN business_infos ON users.id = business_infos.user_id
INNER JOIN customers ON business_infos.id = customers.businessId
GROUP BY customers.businessId
ORDER BY customers.created_at DESC
but this query gets an error:
SELECT MAX(orders.date) lst_trx_date, business_infos.id, business_infos.name as business_name, users.name as owner_name
FROM users
INNER JOIN business_infos ON users.id = business_infos.user_id
INNER JOIN orders ON business_infos.id = orders.businessId
GROUP BY orders.businessId
ORDER BY orders.created_at DESC
The first query runs over a small data set and the second over a large one. What do I need to consider?
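A hedged reading: error 1114 in a query like this is usually not about the base tables at all; the GROUP BY/ORDER BY combination forces an internal temporary table holding every joined row, and on the larger orders table that temp table outgrows the space available to it. Things to check, plus a rewrite that orders by the aggregate (the original ORDER BY orders.created_at references a column that is neither grouped nor aggregated, which also inflates the temp table):
-- How much room do implicit temp tables get, and where do they spill to disk?
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';
SHOW VARIABLES LIKE 'tmpdir';
-- Rewrite: group on full columns, order by the aggregate
SELECT MAX(o.date) AS lst_trx_date,
       b.id,
       b.name AS business_name,
       u.name AS owner_name
FROM users u
JOIN business_infos b ON b.user_id = u.id
JOIN orders o         ON o.businessId = b.id
GROUP BY b.id, b.name, u.name
ORDER BY lst_trx_date DESC;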
Nullified (25 rep)
Oct 26, 2022, 03:52 AM • Last activity: Jul 4, 2025, 01:03 AM
0 votes
1 answer
782 views
How to optimize UPDATE with a nested SELECT subquery?
I wrote a complicated UPDATE query, and it works, but it looks menacing. Here's what I'm trying to do: in each topic, user 'Bob123' posted anonymously. When you post anonymously in a topic, you get a unique anonymous index for that topic. Say I want to merge two topics together. Bob123 has a different anon index in both topics, so his unique anon index wouldn't be unique. I only have two pieces of data to work with: $topic_id, the topic id you are merging into, and $post_id_list, all the post ids that got merged over. I want to update the anonymous_index entries of each distinct poster_id's posts in that topic. This anonymous_index needs to be the original index they had in the topic before the other topic was merged into it. The first SELECT query selects the anon indices of the moved posts. The outer SELECT query gets the first non-merged post's anon index (if it is > 0) of those merged posters in the topic, alongside a merged anon index from the first query. Then I update: wherever the anon index of those posters in that topic doesn't equal the old index, I update it. Is there something simple that I'm missing here? I don't like the fact that I have a subquery in a subquery. At first I was using HAVING MIN(anonymous_index) <> MAX(anonymous_index) along with AND post_id NOT IN ($merged_post_list) to select the poster id list that needed to be updated and an unmerged anon index, but it returned 0 rows. If the merged post is BEFORE all original posts (and has a larger anon index), then the minimum anon index will match the maximum index for that poster. So making another subquery fixed this...
$merged_post_list = implode(',', $post_id_list);

...

UPDATE " . POSTS_TABLE . " AS p
INNER JOIN (    SELECT p.post_id, p.anonymous_index AS old_index,
                       merged.poster_id, merged.anonymous_index AS new_index
                FROM " . POSTS_TABLE . " AS p,
                (       SELECT poster_id, anonymous_index
                        FROM " . POSTS_TABLE . "
                        WHERE post_id IN ($merged_post_list)
                        AND topic_id = $topic_id
                        AND anonymous_index > 0
                ) AS merged
                WHERE p.post_id NOT IN ($merged_post_list)
                AND p.topic_id = $topic_id
                AND p.anonymous_index > 0
                AND p.poster_id = merged.poster_id
                GROUP BY merged.poster_id
) AS postdata
SET p.anonymous_index = postdata.old_index
WHERE p.topic_id = $topic_id
AND anonymous_index > 0
AND anonymous_index <> postdata.old_index
AND p.poster_id = postdata.poster_id
post_id is the primary index, poster_id and topic_id are also indices. Here's some sample behavior: Before merge:
|post_id_____poster_id_____anonymous_index|
| 11         | 3           | 2            |
| 12         | 22          | 1            |
| 14         | 22          | 1            |
| 15         | 3           | 2            |
After merge:
|post_id_____poster_id_____anonymous_index|
| 10         | 22          | 4            |
| 11         | 3           | 2            |
| 12         | 22          | 1            |
| 13         | 3           | 4            |
| 14         | 22          | 1            |
| 15         | 3           | 2            |
| 16         | 22          | 4            |
After UPDATE (the above query):
|post_id_____poster_id_____anonymous_index|
| 10         | 22          | 1            |
| 11         | 3           | 2            |
| 12         | 22          | 1            |
| 13         | 3           | 2            |
| 14         | 22          | 1            |
| 15         | 3           | 2            |
| 16         | 22          | 1            |
EDIT: I've made the following index and an alternative SELECT query to avoid having two subqueries, how would these fare?: (topic_id, poster_id, anonymous_index, post_id)
SELECT merged.poster_id, merged.anonymous_index AS new_index,
	   old.post_id, old.anonymous_index AS old_index
FROM " . POSTS_TABLE . " AS merged,
	 " . POSTS_TABLE . " AS old
WHERE merged.post_id IN ($post_list)
AND merged.anonymous_index > 0
AND merged.anonymous_index <> old.anonymous_index
AND old.topic_id = $topic_id
AND old.post_id NOT IN ($post_list)
AND old.anonymous_index > 0
AND old.poster_id = merged.poster_id
GROUP BY merged.poster_id
ORDER BY NULL
EDIT AGAIN: Here is my schema:
Table structure for table phpbb_posts
--

CREATE TABLE phpbb_posts (
  post_id int(10) UNSIGNED NOT NULL,
  topic_id int(10) UNSIGNED NOT NULL DEFAULT '0',
  forum_id mediumint(8) UNSIGNED NOT NULL DEFAULT '0',
  poster_id int(10) UNSIGNED NOT NULL DEFAULT '0',
  icon_id mediumint(8) UNSIGNED NOT NULL DEFAULT '0',
  poster_ip varchar(40) COLLATE utf8_bin NOT NULL DEFAULT '',
  post_time int(11) UNSIGNED NOT NULL DEFAULT '0',
  post_reported tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  enable_bbcode tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  enable_smilies tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  enable_magic_url tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  enable_sig tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  post_username varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  post_subject varchar(255) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL DEFAULT '',
  post_text mediumtext COLLATE utf8_bin NOT NULL,
  post_checksum varchar(32) COLLATE utf8_bin NOT NULL DEFAULT '',
  post_attachment tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  bbcode_bitfield varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  bbcode_uid varchar(8) COLLATE utf8_bin NOT NULL DEFAULT '',
  post_postcount tinyint(1) UNSIGNED NOT NULL DEFAULT '1',
  post_edit_time int(11) UNSIGNED NOT NULL DEFAULT '0',
  post_edit_reason varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  post_edit_user int(10) UNSIGNED NOT NULL DEFAULT '0',
  post_edit_count smallint(4) UNSIGNED NOT NULL DEFAULT '0',
  post_edit_locked tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  post_visibility tinyint(3) NOT NULL DEFAULT '0',
  post_delete_time int(11) UNSIGNED NOT NULL DEFAULT '0',
  post_delete_reason varchar(255) COLLATE utf8_bin NOT NULL DEFAULT '',
  post_delete_user int(10) UNSIGNED NOT NULL DEFAULT '0',
  sfs_reported tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  parent_id int(10) UNSIGNED DEFAULT '0',
  post_depth int(3) UNSIGNED NOT NULL DEFAULT '0',
  is_anonymous tinyint(1) UNSIGNED NOT NULL DEFAULT '0',
  anonymous_index mediumint(8) UNSIGNED NOT NULL DEFAULT '0'
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin;

--
-- Indexes for dumped tables
--

--
-- Indexes for table phpbb_posts
--
ALTER TABLE phpbb_posts
  ADD PRIMARY KEY (post_id),
  ADD KEY forum_id (forum_id),
  ADD KEY topic_id (topic_id),
  ADD KEY poster_ip (poster_ip),
  ADD KEY poster_id (poster_id),
  ADD KEY post_username (post_username),
  ADD KEY tid_post_time (topic_id,post_time),
  ADD KEY post_visibility (post_visibility),
  ADD KEY parent_id (parent_id);

--
-- AUTO_INCREMENT for dumped tables
--

--
-- AUTO_INCREMENT for table phpbb_posts
--
ALTER TABLE phpbb_posts
  MODIFY post_id int(10) UNSIGNED NOT NULL AUTO_INCREMENT;COMMIT;
Thrash Tech (1 rep)
Jan 16, 2019, 01:49 AM • Last activity: Jul 3, 2025, 10:04 AM