Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
559
views
MySQL query - Passing variable into subquery's derived table
Is there a way to pass a variable into a subquery, and use that variable in a derived table in the subquery?
I am trying to identify "active" organizations, and there are multiple actions that constitute "activity". One is creating an invoice, represented here (others not shown for the sake of readability). However, because there are more types of actions, I cannot take the simpler approach of just querying the `Invoice` table, as in the `thisWorks` column.
Therefore, I am trying to create a derived table where multiple sources can be queried, while also filtering on the `@start` and `@end` timestamps, as seen in the `thisDoesntWork` column.
Here is a simplified version of what the query looks like:
select
@row:=@row+1 as row,
@sixPrior:=date_sub(date_sub(date_sub(curdate(), interval dayofweek(curdate())-2 day), interval @row-1 week), interval 6 month) as sixMonthsPrior,
@start:=date_sub(date_sub(curdate(), interval dayofweek(curdate())-2 day), interval @row-1 week) as start,
@end:=date_sub(date_add(curdate(), interval dayofweek(curdate())+5 day), interval @row-1 week) as `end`,
@sixPrior as sixCheck,
@start as startCheck,
@end as endCheck,
( select count(distinct(organizationId))
from Invoice
where createdTime between @start and @end
) as thisWorks,
(
select count(distinct(organizationId))
from (
select organizationId
from Invoice
where createdTime between @start and @end
) d
) as thisDoesntWork
from Organization o
join (select @row:=0) r
limit 10;
In the example above, `thisDoesntWork` has the same value for every row (the first value). I assume it has something to do with `@start` and `@end` never getting updated values.
Is there any way to filter a derived table, with values that change each row, from an outer table?
I'm also open to alternative approaches, if derived table restrictions will never allow it to work.
I'm also using MySQL 5.7, so `LATERAL` joins are not a possibility.
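As an alternative approach (a sketch; the table names follow the question, while weekWindows, wkStart and wkEnd are illustrative): since a MySQL 5.7 derived table cannot see the outer row's variables, one workaround is to materialize the per-row date windows first, then join them to the activity sources with an ordinary join condition.
-- Step 1: build the week windows once (same variable arithmetic as above).
CREATE TEMPORARY TABLE weekWindows AS
SELECT @row := @row + 1 AS rowNum,
       date_sub(date_sub(curdate(), interval dayofweek(curdate()) - 2 day),
                interval @row - 1 week) AS wkStart,
       date_sub(date_add(curdate(), interval dayofweek(curdate()) + 5 day),
                interval @row - 1 week) AS wkEnd
FROM Organization o
JOIN (SELECT @row := 0) r
LIMIT 10;

-- Step 2: the derived table of activity sources is now filtered by a join
-- condition, not by outer variables.
SELECT w.rowNum, w.wkStart, w.wkEnd,
       count(distinct a.organizationId) AS activeOrgs
FROM weekWindows w
LEFT JOIN (
    SELECT organizationId, createdTime FROM Invoice
    -- UNION ALL the other activity sources here
) a ON a.createdTime BETWEEN w.wkStart AND w.wkEnd
GROUP BY w.rowNum, w.wkStart, w.wkEnd;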
Jared Anderton
(101 rep)
Dec 1, 2020, 06:05 PM
• Last activity: Aug 5, 2025, 10:03 AM
2
votes
1
answers
199
views
mysql group by on table from two derived non-indexable
Working with MySQL 5.7. I've got a query that ends up joining two derived tables and grouping on one of the columns. The query without the grouping runs pretty fast: about 0.5 seconds, returning 15K rows (in production I would expect that to possibly hit 100K+ rows). When I add the grouping to this query, it really kills the performance, resulting in a query that takes 10x longer.
I have tried to reorganize the query to push the GROUP BY logic to an earlier step, but logically that doesn't work. I've also tried creating a temp table and inserting the query's results into it: the query itself takes 0.5 seconds to run, but inserting into the temp table takes ~5 seconds (and I'm assuming that's exactly what is happening inside the original query, given what the EXPLAIN says). I've also tried modifying the session settings for the join and sort buffers, but that doesn't seem to change anything at all.
Does anybody have any advice on what else I could try, or on what I have tried but perhaps haven't tried "correctly"? Do I need to do something else besides
SET SESSION sort_buffer_size = 1024*1024*4;
SET SESSION join_buffer_size = 1024*1024*4;
to get those settings to take effect? They don't seem to do anything at all to my query performance. I've adjusted those values all over the place, from 1024*4 to 1024*1024*10.
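As a quick sanity check (not a fix), the session-scoped values can be confirmed on the connection that actually runs the query:
-- Verify the session settings took effect before re-testing the query.
SHOW SESSION VARIABLES LIKE 'sort_buffer_size';
SHOW SESSION VARIABLES LIKE 'join_buffer_size';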
Also, I'm a bit miffed as to why it takes ~5 seconds to add 15K rows to a temp table. I've tried the MEMORY engine, adding indexes, etc.; it always takes ~5 seconds.
Here is a very simple, boiled-down version of the query that may help get the gist of what I'm doing:
select group_concat(storeID), fileID
from (
    select ugfileToStores.storeID, ugfileToStores.fileID
    from (
        select tableOne.usergroupID, tableTwo.storeID, tableTwo.fileID
        from tableOne
        join ( select fileID, storeID from tableTwo ) tableTwo
          on tableOne.storeID = tableTwo.storeID
    ) ugfileToStores
) fileToStores
group by fileID
Without the group by at the end and the group_concat the query runs in ~.5 seconds... with it it runs in ~5 seconds.
Here is an example of the actual query:
SELECT sql_no_cache group_concat(usergroup0.storeid),
filebucket0.filebucketid
FROM (
SELECT en.id AS usergroupid,
st.storeid AS storeid,
1 AS assignmentcount
FROM entity en
CROSS JOIN
(
SELECT storeid
FROM masterentity_attachedstores
WHERE masterentityid = 156825) st
WHERE en.id IN ('156830') ) usergroup0
INNER JOIN
(
SELECT maxout.filebucketid,
maxout.storeid
FROM (
SELECT filebucketid,
storeid,
entityid
FROM (
SELECT stb.id AS filebucketid,
       mas.storeid AS storeid,
       1 AS assignmentcount
FROM masterentity_attachedstores mas
JOIN entity en
  ON mas.storeid = en.id
JOIN filebucket AS stb
WHERE en.isdeleted = 0
  AND mas.masterentityid = 156825
  AND stb.id IN (55,40)
UNION ALL
SELECT stb.id AS filebucketid,
stb_ret_s.id AS storeid,
count(stb_ret_a.id) AS assignmentcount
FROM assignment AS stb_ret_a
JOIN filebucket AS stb
ON stb.id = stb_ret_a.associatedobjectid
AND stb.id IN (69,50,68)
INNER JOIN entity AS stb_ret_e
ON stb_ret_e.id = stb_ret_a.assignmentvalue
AND stb_ret_e.classname = 'Retailer'
AND stb_ret_a.assignmentsubtype IN ('RetailerID')
JOIN store stb_ret_s
ON stb_ret_s.retailerid = stb_ret_e.id
WHERE stb_ret_a.associatedobjectclass = 'FileBucket'
AND stb_ret_a.isdeleted = 0
AND stb_ret_a.assignmentsubtype IN ('RetailerID')
AND stb_ret_e.isdeleted = 0
GROUP BY filebucketid,
storeid
UNION ALL
SELECT filebucket.id AS filebucketid,
stb.storeid AS storeid,
1 AS assignmentcount
FROM filebucket
CROSS JOIN
(
SELECT maxout.repid,
maxout.storeid
FROM (
SELECT repid,
storeid,
entityid
FROM (
SELECT mp.id AS repid,
mp_cf_csv.entityid AS storeid,
count(mp_cf_a.id) AS assignmentcount
FROM assignment AS mp_cf_a
JOIN rep AS mp
ON mp.id = mp_cf_a.associatedobjectid
JOIN
customfieldvalue
AS mp_cf_csv
ON mp_cf_csv.value
= REPLACE(REPLACE(substring_index(substring_index(mp_cf_a.assignmentvalue, ',"', -1), ':', -1), '"',''), '}','')
AND mp_cf_csv.customfieldid
= REPLACE(substring_index(substring_index(mp_cf_a.assignmentvalue, ',', 1), ':', -1), '"','')
JOIN entity AS mp_cf_e
ON mp_cf_e.id = mp_cf_csv.entityid
WHERE mp_cf_a.associatedobjectid IN (7400,7825,7780,7700)
AND mp_cf_a.associatedobjectclass = 'Rep'
AND mp_cf_a.isdeleted = 0
AND mp_cf_a.assignmentsubtype IN ('CustomFieldValue')
AND mp_cf_e.isdeleted = 0
GROUP BY repid,
storeid
UNION ALL
SELECT mp.id AS repid,
mp_ret_s.id AS storeid,
count(mp_ret_a.id) AS assignmentcount
FROM assignment AS mp_ret_a
JOIN rep AS mp
ON mp.id = mp_ret_a.associatedobjectid
JOIN store AS mp_ret_s
ON mp_ret_s.retailerid = mp_ret_a.assignmentvalue
AND mp_ret_a.assignmentsubtype IN ('RetailerID')
JOIN entity AS mp_ret_e
ON mp_ret_e.id = mp_ret_s.id
WHERE mp_ret_a.associatedobjectid IN (7700,7400,7780,7825)
AND mp_ret_a.associatedobjectclass = 'Rep'
AND mp_ret_a.isdeleted = 0
AND mp_ret_a.assignmentsubtype IN ('RetailerID')
AND mp_ret_e.isdeleted = 0
GROUP BY repid,
storeid) orouttie
JOIN masterentity_attachedstores mas
  ON orouttie.storeid = mas.storeid
 AND mas.masterentityid = 156825
JOIN
(
SELECT associatedobjectid,
ownerobjectid
FROM assignment
WHERE associatedobjectid IN (7400,7700,7780,7825)
AND associatedobjectclass='Rep'
GROUP BY associatedobjectid) creatorassignment
ON creatorassignment.associatedobjectid = orouttie.repid
JOIN entityuser creatorentity
ON creatorentity.id = creatorassignment.ownerobjectid ) maxout
JOIN asshelper0660552001475295480164_out outcreator
ON maxout.storeid = outcreator.storeid
AND outcreator.usergroupid = maxout.entityid ) stb
WHERE id IN (60,55,50)
UNION ALL
SELECT stb.id AS filebucketid,
       mas.storeid AS storeid,
       1 AS assignmentcount
FROM masterentity_attachedstores mas
JOIN entity en
  ON mas.storeid = en.id
JOIN filebucket AS stb
WHERE en.isdeleted = 0
  AND mas.masterentityid = 156825
  AND stb.id IN (40)) orouttie
JOIN masterentity_attachedstores mas2
  ON orouttie.storeid = mas2.storeid
 AND mas2.masterentityid = 156825
JOIN
(
SELECT associatedobjectid,
ownerobjectid
FROM assignment
WHERE associatedobjectid IN (40,50,55,60,65,67,68,69)
AND associatedobjectclass='FileBucket'
GROUP BY associatedobjectid) creatorassignment
ON creatorassignment.associatedobjectid = orouttie.filebucketid
JOIN entityuser creatorentity
ON creatorentity.id = creatorassignment.ownerobjectid ) maxout
JOIN asshelper0777657001475295479813_out outcreator
ON maxout.storeid = outcreator.storeid
AND outcreator.usergroupid = maxout.entityid ) filebucket0
ON filebucket0.storeid = usergroup0.storeid
GROUP BY filebucket0.filebucketid
Josh
(159 rep)
Oct 1, 2016, 04:05 PM
• Last activity: Jun 22, 2025, 11:08 AM
2
votes
1
answers
488
views
How does the use of derived tables speed up sorting?
In the article 45 Ways To Speed Up Firebird Database, it states that
> 30. Use derived tables to optimize `SELECT` with `ORDER BY`/`GROUP BY`
> Another way to optimize SQL query with sorting is to use derived
> tables to avoid unnecessary sort operations. Instead of
>
> SELECT FIELD_KEY, FIELD1, FIELD2, ... FIELD_N
> FROM T
> ORDER BY FIELD2
>
> use the following modification:
>
> SELECT T.FIELD_KEY, T.FIELD1, T.FIELD2, ... T.FIELD_N FROM
> (SELECT FIELD_KEY FROM T ORDER BY FIELD2) T2
> JOIN T ON T.FIELD_KEY = T2.FIELD_KEY
I assume `T.FIELD_KEY` means a primary key column.
In other words, it shunts the sorting to a subquery, and then 'self-joins' to get the rest of the necessary columns.
As I understand it, this will let Firebird do the sorting on a narrower resultset which would use less memory and be more cache-friendly, but surely it shouldn't give that much a boost? Especially when it needs to fetch the other columns afterwards, potentially in a random I/O pattern too.
On the other hand, this reeks of undefined behaviour in that there's no guarantee (in standard SQL) that the engine would retain the order after the join -- it could just join the other way around, couldn't it?
Are my guesses correct? Am I missing something?
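If that last concern holds, a standards-safe variant keeps the trick but repeats the sort key in the outer query (a sketch in the article's own notation; the narrow inner sort can still do the heavy lifting, while the outer ORDER BY pins down the final order):
SELECT T.FIELD_KEY, T.FIELD1, T.FIELD2, ... T.FIELD_N
FROM (SELECT FIELD_KEY, FIELD2 FROM T ORDER BY FIELD2) T2
JOIN T ON T.FIELD_KEY = T2.FIELD_KEY
ORDER BY T.FIELD2;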
oals
(121 rep)
May 24, 2016, 01:15 PM
• Last activity: Jan 7, 2025, 12:35 PM
5
votes
1
answers
446
views
Merge Delete on joined tables is allowed but has a bug
To start off, an updatable CTE, derived table or view may be the target of an `UPDATE` directly, even if it has multiple base tables, as long as all columns are from the same source table.
But they cannot be the target of a `DELETE`, even if columns from only one table are selected.
Msg 4405 Level 16 State 1
View or function 'x' is not updatable because the modification affects multiple base tables.
To get around this restriction, I attempted to use a `MERGE` against a dummy table. (Obviously this simplistic example could be written using a `DELETE ... WHERE EXISTS` or by putting one table as the merge source, but the point stands. The original used `ROW_NUMBER`, so these weren't possible.)
WITH Joined AS (
SELECT t1.*
FROM t1
JOIN t2 ON t2.id1 = t1.id1
)
MERGE Joined
USING (VALUES(0)) v(dummy) ON 1=0
WHEN NOT MATCHED BY SOURCE THEN DELETE;
**dbfiddle**
This was actually allowed. But what I found was that the table that was modified did not depend on the columns being selected, or their order. It depended purely on the **order** that the tables were **joined**.
**This, to my mind, seems completely buggy behaviour.**
Experimenting with `THEN UPDATE` shows far more sensible behaviour: it depends on which columns are used in the `THEN UPDATE` clause, in the same way as a normal `UPDATE` statement.
So, I think SQL Server should:
* Either continue allowing updatable CTEs to be deleted from, but ensure that only one table's columns are selected (like an `UPDATE`), ensuring no ambiguity.
* Or completely disallow `THEN DELETE` in `MERGE` when the source is an updatable CTE with multiple base tables.
**Do I have some misunderstanding in how updatable views work, or is there an actual bug here?**
____
**A bug report has now been filed on Azure Feedback. Please vote for it here.**
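For reference, the `DELETE ... WHERE EXISTS` form alluded to above would look like this for the simplistic example (a sketch using the same t1/t2 names; it sidesteps the issue because the delete target is a single base table):
DELETE FROM t1
WHERE EXISTS (SELECT 1 FROM t2 WHERE t2.id1 = t1.id1);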
Charlieface
(17545 rep)
Jan 18, 2024, 12:04 PM
• Last activity: Jan 22, 2024, 11:26 AM
1
votes
1
answers
186
views
Is it possible to reduce the duplication in this simple query?
I'm new to SQL, so this is probably a stupid question, but can I reduce the duplication in this query? My actual query involves more complex expressions. Right now I'm just doing string concatenation in the language I'm using to generate the long query, but I think there must be a way to do this with plain SQL?
select
x + y as a,
x + z as b,
(x + y) / (x + z) as c
from
integers
order by
z + (x + y) / (x + z) desc
limit
10;
What I'd like:
select
x + y as a,
x + z as b,
a / b as c
from
integers
order by
z + c desc
limit
10;
-- Fails with: Query Error: error: column "a" does not exist
Sample table:
create table integers as
select x, y, z
from
generate_series(1, 10) x,
generate_series(x, 10) y,
generate_series(y, 10) z;
Fiddle: https://www.db-fiddle.com/f/u3cVh7QsafWNE6ELWhvBc1/0
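One plain-SQL way to avoid the repetition (a sketch against the sample table above): compute each expression once in a derived table and refer to the aliases from the outer level; nesting twice lets `c` reuse `a` and `b` as well.
select a, b, c
from (
    select a, b, a / b as c, z   -- a and b come from the inner level
    from (
        select x + y as a, x + z as b, z
        from integers
    ) t1
) t2
order by z + c desc
limit 10;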
user225326
(13 rep)
Mar 6, 2021, 01:50 PM
• Last activity: Mar 7, 2021, 05:47 PM
13
votes
4
answers
6035
views
Is it REALLY possible that the order will not be guaranteed for this particular redundant derived table?
I stumbled upon this question in a Twitter conversation with Lukas Eder.
Although the correct behavior would be to apply the ORDER BY clause in the outermost query, since here we are not using DISTINCT, GROUP BY, JOIN or any WHERE clause in the outermost query, why wouldn't an RDBMS just pass the incoming data through as it was sorted by the inner query?
SELECT *
FROM (
SELECT * FROM table ORDER BY time DESC
) AS t
When running this example on PostgreSQL, at least, you get the same Execution Plan for both the inner query and this derived table example, as well as the same result set.
So, I would assume that the Planner will simply discard the outermost query because it's redundant or simply pass through the results from the inner table.
Does anyone think this might not be the case?
Vlad Mihalcea
(917 rep)
Aug 23, 2017, 09:11 AM
• Last activity: Oct 5, 2020, 11:48 PM
0
votes
2
answers
311
views
Order by is ignored if inside a derived table of a update (only in MYSQL 5.7)
I have an update to fix position values inside a table. The update worked fine in MySQL 5.5, but after upgrading to MySQL 5.7 it doesn't work anymore. Now the ORDER BY inside the derived table is ignored, so the rows are updated based on the primary key, not on the ORDER BY.
Can someone help me? Why is the ORDER BY ignored in MySQL 5.7, and how can I fix it?
Thanks!!!
# OLD POSITION VALUES
ImageId Position
389 0
390 6
391 4
392 1
# NEW POSITION VALUES, AFTER FIXED (MYSQL 5.5)
ImageId Position
389 1
390 4
391 3
392 2
# NEW POSITION VALUES, AFTER FIXED (MYSQL 5.7)
ImageId Position
389 1
390 2
391 3
392 4
# UPDATE
UPDATE Image as t1
INNER JOIN (
SELECT t.*, @rownum := @rownum + 1 AS newPosition
FROM Image t, (SELECT @rownum := 0) r
ORDER BY t.Position
) as td
ON t1.ImageId = td.ImageId SET t1.Position= td.newPosition
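A workaround often suggested for 5.7 (a sketch; note that numbering rows with user variables is itself not guaranteed behaviour, so verify the result): 5.7's derived_merge optimization folds the derived table into the outer query and discards its ORDER BY, and adding a LIMIT prevents that merge.
UPDATE Image AS t1
INNER JOIN (
    SELECT s.ImageId, @rownum := @rownum + 1 AS newPosition
    FROM (
        SELECT ImageId
        FROM Image
        ORDER BY Position
        LIMIT 18446744073709551615  -- the LIMIT blocks the derived-table merge
    ) AS s,
    (SELECT @rownum := 0) r
) AS td ON t1.ImageId = td.ImageId
SET t1.Position = td.newPosition;
Alternatively, SET optimizer_switch = 'derived_merge=off'; disables that merge for the session, which is the behaviour change between 5.5 and 5.7.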
user2431082
(1 rep)
Aug 1, 2019, 04:31 PM
• Last activity: Sep 4, 2019, 06:23 AM
0
votes
1
answers
93
views
mysql query with 1 derived table and 1 inner join takes forever to load
I tried removing some chunks of code little by little and found out that the condition inside the inner join is the devil. Can someone enlighten me on how to achieve this when I have 10,000+ rows of data? This works perfectly fine if I have 1-50 rows of data:
SELECT negotiations.id
FROM
(SELECT n.id,n.business_rank,db.status,
case status
when '11' then IF(DATEDIFF(IFNULL(apply_date, curdate()), curdate()) < 1, true, false)
when '12' then IF(DATEDIFF(IFNULL(db.contract_date, curdate()), curdate()) < 1, true, false)
when '13' then IF(DATEDIFF(IFNULL(db.settlement_date, curdate()), curdate()) < 1, true, false)
when '20' then IF(DATEDIFF(IFNULL(db.apply_cancel_date, curdate()), curdate()) < 1, true, false)
when '21' then IF(DATEDIFF(IFNULL(db.contract_cancel_date, curdate()), curdate()) < 1, true, false)
end as to_notify
FROM negotiations AS n
INNER JOIN deal_buildings AS db ON db.id = (
SELECT id
FROM deal_buildings AS db2
WHERE db2.deal_id = n.id
AND n.main_member_id = 79
AND db.status in (11,12,13,20,21)
ORDER BY status DESC
LIMIT 1
) ) as negotiations
WHERE ((
(business_rank = '17' and status = 11) or
(business_rank = '19' and status = 12) or
(business_rank = '22' and status = 13) or
(business_rank = '18' and status = 20) or
(business_rank = '20' and status = 21)) is false )
and to_notify
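One direction to try (a sketch, assuming the ORDER BY status DESC LIMIT 1 subquery is meant to pick the row with the highest qualifying status per deal): precompute that per-deal maximum once with a GROUP BY and join to it, instead of running a dependent subquery for every row.
SELECT db.id, db.deal_id, db.status
FROM deal_buildings AS db
JOIN (
    SELECT deal_id, MAX(status) AS max_status
    FROM deal_buildings
    WHERE status IN (11,12,13,20,21)
    GROUP BY deal_id
) m ON m.deal_id = db.deal_id
   AND m.max_status = db.status;
-- Note: ties on status would return more than one row per deal.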
usc_student
(1 rep)
Apr 12, 2019, 10:00 AM
• Last activity: Apr 23, 2019, 04:09 AM
1
votes
3
answers
273
views
Improve Query Performance of Select Statment inside IN Statment
My query is:
(3) UPDATE wp_postmeta set meta_value = 'outofstock' where meta_key = '_stock_status' and
post_id in
(
(2)SELECT post_id FROM
(
(1) SELECT A.post_id from wp_postmeta A
JOIN wp_postmeta B ON A.post_id = B.post_id
AND A.meta_key = 'attribute_pa_beden'
and A.meta_value in ('12yas','34yas','56yas','78yas','910yas','1112yas')
and B.meta_key = 'attribute_pa_renk'
and ((B.meta_value = 'bordo') OR ('bordo' = 'hepsi'))
JOIN wp_posts ON A.post_id = wp_posts.id
JOIN wp_term_relationships ON wp_posts.post_parent = wp_term_relationships.object_id
and term_taxonomy_id in ('2643','2304')
) AS DerivedTable
)
To improve the speed of this query, I analysed it using the EXPLAIN statement. Below are the results.
When I added `explain` at location (1) in the above query and ran that subquery, the results were as below:
(screenshot of EXPLAIN output)
When I added `explain` at location (2) and ran that subquery, the results were as below:
(screenshot of EXPLAIN output)
When I added `explain` at location (3) and ran the whole query, the results were as below:
(screenshot of EXPLAIN output)
My analysis is that there is no speed problem with the (1) subquery, but after I select the data from this subquery into a derived table (2), somehow 55,277,640 "rows" show up, which seems to be the reason why my query is so slow. How can I optimize it? What is wrong here?
**Edit**: The tables are standard WordPress WooCommerce tables; I have not modified them. Here are the `SHOW CREATE TABLE` results:
**wp_postmeta**
CREATE TABLE wp_postmeta (
  meta_id bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  post_id bigint(20) unsigned NOT NULL DEFAULT '0',
  meta_key varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
  meta_value longtext COLLATE utf8mb4_unicode_ci,
  PRIMARY KEY (meta_id),
  KEY post_id (post_id),
  KEY meta_key (meta_key(191))
) ENGINE=MyISAM AUTO_INCREMENT=11119572 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
**wp_posts**
CREATE TABLE wp_posts (
  ID bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  post_author bigint(20) unsigned NOT NULL DEFAULT '0',
  post_date datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_date_gmt datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_content longtext COLLATE utf8mb4_unicode_ci NOT NULL,
  post_title text COLLATE utf8mb4_unicode_ci NOT NULL,
  post_excerpt text COLLATE utf8mb4_unicode_ci NOT NULL,
  post_status varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'publish',
  comment_status varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'open',
  ping_status varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'open',
  post_password varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  post_name varchar(200) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  to_ping text COLLATE utf8mb4_unicode_ci NOT NULL,
  pinged text COLLATE utf8mb4_unicode_ci NOT NULL,
  post_modified datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_modified_gmt datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  post_content_filtered longtext COLLATE utf8mb4_unicode_ci NOT NULL,
  post_parent bigint(20) unsigned NOT NULL DEFAULT '0',
  guid varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  menu_order int(11) NOT NULL DEFAULT '0',
  post_type varchar(20) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT 'post',
  post_mime_type varchar(100) COLLATE utf8mb4_unicode_ci NOT NULL DEFAULT '',
  comment_count bigint(20) NOT NULL DEFAULT '0',
  PRIMARY KEY (ID),
  KEY post_name (post_name(191)),
  KEY type_status_date (post_type, post_status, post_date, ID),
  KEY post_parent (post_parent),
  KEY post_author (post_author)
) ENGINE=MyISAM AUTO_INCREMENT=352598 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
**wp_term_relationships**
CREATE TABLE wp_term_relationships (
  object_id bigint(20) unsigned NOT NULL DEFAULT '0',
  term_taxonomy_id bigint(20) unsigned NOT NULL DEFAULT '0',
  term_order int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (object_id, term_taxonomy_id),
  KEY term_taxonomy_id (term_taxonomy_id)
) ENGINE=MyISAM DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
Here is some information on what I am trying to do. On my website there are a lot of product variations. The wp_postmeta table has meta information for all variations (for example price, color, size, in stock, etc.).
My query is used to update stock information. So, for example, I have a product with 9 colors and 9 sizes, which means 81 variations. If I need to update the stock of color 'bordo' for all sizes, that means 9 rows have to be updated in wp_postmeta (all sizes for color 'bordo').
Here are some row counts if I change the above queries to `select count(*)`:
> wp_postmeta has 9,929,761 rows.
>
> The subquery I indicated as (1) returns 3612 rows.
>
> The subquery indicated as (2) returns 3612 rows as well.
>
> The query indicated as (3) returns 3612 rows too.
So basically there are 3612 "bordo" colored product variations on my website, and I need to update the stock information of all these variations.
When I checked my database, there are 227K rows with meta_key "_stock_status" (all colors and all product sizes). I know updating all that data is a little bit costly, but I am not using this function all the time.
When I add 20 new products, and when I get information from the production team about the latest stock status of products and colors, I click this "Update Stocks" button to update all stock information on the website to the latest state.
**Edit2**: After manually changing the tables to InnoDB, the explains changed as below.
Explain in location 1:
(screenshot of EXPLAIN output)
Explain in location 2:
(screenshot of EXPLAIN output)
Explain in location 3:
(screenshot of EXPLAIN output)
**Edit3**: I read the modification requirements for the wp_postmeta table from Rick. They all seemed helpful, but I didn't want to change the default structure of the wp_postmeta table, as I am not a DB expert and may not be able to revert future problems caused by modifying core WordPress tables.
So, to stay on the safe side, I read the other two answers, from **ypercubeᵀᴹ** and **Old Pro**. They suggested adding a new index to the wp_postmeta table, and this is my only change to this core WordPress table. I tested both of their queries, and they gave the same `explain` results, as below:
(screenshot of EXPLAIN output)
So basically their query is doing the same thing with a different structure. Now the query's performance has increased quite a lot. The only question in my mind is how safe it is to add a manual index to a WordPress core table.
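The answers themselves are not shown on this page; purely as an illustration (the index name and column choice here are assumptions, not the answers' exact definition), the kind of single composite index being described might look like:
-- Hypothetical composite index covering the meta_key = ... AND post_id IN (...) filter.
ALTER TABLE wp_postmeta ADD INDEX ix_meta_key_post_id (meta_key(191), post_id);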
HOY
(121 rep)
Feb 10, 2019, 05:34 PM
• Last activity: Feb 11, 2019, 07:15 PM
0
votes
1
answers
1029
views
SQL query does not recognise derived table
So I've made a simple SQL query:
SELECT companyname
FROM works w1, (
SELECT companyname,sum(salary) AS sumsal
FROM works
GROUP BY companyname
) w2
WHERE
w1.companyname=w2.companyname
It works fine so far, but then I add the following line to the WHERE clause:
AND w2.sumsal=(SELECT MIN(w2.sumsal)
FROM w2);
I get:
ERROR 1146 (42S02): Table 'companydb.w2' doesn't exist
I'm using MySQL 8.0.
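The derived table alias `w2` exists only inside the query where it is defined, which is why `companydb.w2` cannot be found from the extra subquery. Since this is MySQL 8.0, one fix (a sketch) is a common table expression, which can be referenced more than once:
WITH w2 AS (
    SELECT companyname, SUM(salary) AS sumsal
    FROM works
    GROUP BY companyname
)
SELECT companyname
FROM w2
WHERE sumsal = (SELECT MIN(sumsal) FROM w2);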
mrmagin
(3 rep)
Oct 28, 2018, 02:35 PM
• Last activity: Oct 29, 2018, 10:43 AM
3
votes
1
answers
243
views
Delete Joined to Derived Table Causes Index Scan and Table Spool
Our application uses a popular 3rd-party extension to Entity Framework that allows for valuable things that native Entity Framework is not capable of or designed to do, like bulk deletes and updates.
It seems that the query pattern produced by the bulk delete API method from this library is not optimized well by the SQL Server query optimizer.
I am using SQL Server 2014. Consider the following table and index (names have been changed to protect the innocent):
CREATE TABLE [dbo].[a_staging_table]
(
[staging_ky] [BIGINT] IDENTITY(1,1) NOT NULL,
[col_a] [INT] NOT NULL,
[col_b] [VARCHAR](15) NOT NULL,
[col_c] [VARCHAR](256) NOT NULL,
[col_d] [INT] NULL,
[col_e] [INT] NOT NULL,
[col_f] [VARCHAR](3) NOT NULL,
[col_g] [VARCHAR](3) NULL,
[col_h] [VARCHAR](20) NULL,
[col_i] [CHAR](4) NOT NULL,
[col_j] [INT] NOT NULL,
[col_k] [VARCHAR](10) NULL,
[col_l] [DATETIME] NOT NULL,
[col_m] [VARCHAR](25) NOT NULL,
[col_n] [DATETIME] NOT NULL,
[col_o] [VARCHAR](25) NOT NULL,
[col_p] [BIGINT] NOT NULL,
[col_q] [BIT] NOT NULL,
[col_r] [DATETIME] NULL,
[col_s] [VARCHAR](25) NULL,
PRIMARY KEY CLUSTERED ([staging_ky] ASC)
);
GO
CREATE NONCLUSTERED INDEX [IDX_dbo_a_staging_table_col_p]
ON [dbo].[a_staging_table]([col_p] ASC)
WITH (FILLFACTOR = 100);
GO
The application is trying to delete all records from the table where col_p is equal to a certain value, and the generated SQL query from Entity Framework looks like this:
DELETE [dbo].[a_staging_table]
FROM [dbo].[a_staging_table] AS j0 INNER JOIN (
SELECT
1 AS [C1],
[Extent1].[staging_ky] AS [staging_ky]
FROM [dbo].[a_staging_table] AS [Extent1]
WHERE [Extent1].[col_p] = @p__linq__0
) AS j1 ON (j0.[staging_ky] = j1.[staging_ky])
Looking at the query plan, SQL Server does not optimize away the join between the table and itself, and instead, the plan does an index scan with a nested loops join and seek to the same index, followed by a table spool, prior to the delete.
A more traditional and optimal delete statement is something like this statement, which just does an index seek, prior to the clustered index delete:
DELETE [dbo].[a_staging_table]
FROM [dbo].[a_staging_table] AS j0
WHERE [j0].[col_p] = @p__linq__0
The actual query plans for both statements are posted at: https://www.brentozar.com/pastetheplan/?id=SygZoD6um
Is this query pattern generated by this library a known query anti-pattern for a delete statement in SQL Server?
Bryan Rebok
(1219 rep)
Sep 17, 2018, 06:00 PM
• Last activity: Sep 18, 2018, 12:04 AM
9
votes
5
answers
33969
views
Difference between inline view and WITH clause?
Inline views allow you to select from a subquery as if it were a different table:
SELECT
    *
FROM /* Selecting from a query instead of a table */
(
    SELECT
        c1
    FROM
        t1
    WHERE
        c1 > 0
) a
WHERE
    a.c1 < 50;
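For comparison, the same query written with a `WITH` clause (CTE):
WITH a AS
(
    SELECT
        c1
    FROM
        t1
    WHERE
        c1 > 0
)
SELECT
    *
FROM
    a
WHERE
    a.c1 < 50;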
I've seen this referred to using different terms: inline views, WITH clause, CTE and derived tables. To me it seems they are different vendor specific syntax for the same thing.
Is this a wrong assumption? Are there any technical/performance differences between these?
Kshitiz Sharma
(3367 rep)
May 2, 2017, 02:07 PM
• Last activity: May 16, 2018, 08:33 PM
0
votes
2
answers
1061
views
Update statement using Case statement, derived table, and ROW_NUMBER
Running MS SQL 2008 R2. I am provided a CSV list of product-IDs, which I parse into individual rows and normalize. I then need to take those product-IDs (anywhere from 1 to 10) and write them to their corresponding columns in a table.
I'm using a combination of CASE statements, a derived table, and ROW_NUMBER in order to assign the correct product-ID value to each corresponding column. For some reason, it only seems to update the product_id_1 column, which is not right, since there are other values stored in the #contactProducts table.
Below please find the T-SQL to recreate my issue:
CREATE TABLE #contactProducts(
[contact_product_id] [int] IDENTITY(1,1) NOT NULL,
[temp_import_id] [int] NOT NULL,
[product_id] [varchar](50) NOT NULL,
[createDt] [datetime] NOT NULL,
CONSTRAINT [PK_#contactProducts] PRIMARY KEY CLUSTERED
(
[contact_product_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
INSERT INTO #contactProducts VALUES(3,'B6281-81',getDate())
INSERT INTO #contactProducts VALUES(3,'B100839-1',getDate())
INSERT INTO #contactProducts VALUES(3,'B101203-5',getDate())
INSERT INTO #contactProducts VALUES(3,'B101204-7',getDate())
INSERT INTO #contactProducts VALUES(3,'B101204-7',getDate())
SELECT * FROM #contactProducts
-- Temp_Import_2
CREATE TABLE #temp_import_2(
[temp_import_2_id] [int] IDENTITY(1,1) NOT NULL,
[temp_import_id] [int] NOT NULL,
[PRODUCT_IDS] [varchar](2000) NULL,
[PRODUCT_ID_1] [varchar](50) NULL,
[PRODUCT_ID_2] [varchar](50) NULL,
[PRODUCT_ID_3] [varchar](50) NULL,
[PRODUCT_ID_4] [varchar](50) NULL,
[PRODUCT_ID_5] [varchar](50) NULL,
[PRODUCT_ID_6] [varchar](50) NULL,
[PRODUCT_ID_7] [varchar](50) NULL,
[PRODUCT_ID_8] [varchar](50) NULL,
[PRODUCT_ID_9] [varchar](50) NULL,
[PRODUCT_ID_10] [varchar](50) NULL,
CONSTRAINT [temp_import_2] PRIMARY KEY CLUSTERED
(
[temp_import_2_id] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
INSERT INTO #temp_import_2 (temp_import_id, product_IDs)
VALUES(3, 'B6281-81,B100839-1,B101203-5,B101204-7,B102472-14')
SELECT * FROM [#temp_import_2]
-- update with derived table
-- even tho there are values for thisRowNum = 2,3,4, etc. they don't seem to update. only works for thisRowNum = 1
UPDATE #temp_import_2 SET
#temp_import_2.PRODUCT_ID_1 = CASE WHEN derivedTbl.thisRowNum = 1 THEN derivedTbl.product_id END
,#temp_import_2.PRODUCT_ID_2 = CASE WHEN derivedTbl.thisRowNum = 2 THEN derivedTbl.product_id ELSE '2' END
,#temp_import_2.PRODUCT_ID_3 = CASE WHEN derivedTbl.thisRowNum = 3 THEN derivedTbl.product_id ELSE '3' END
,#temp_import_2.PRODUCT_ID_4 = CASE WHEN derivedTbl.thisRowNum = 4 THEN derivedTbl.product_id ELSE '4' END
,#temp_import_2.PRODUCT_ID_5 = CASE WHEN derivedTbl.thisRowNum = 5 THEN derivedTbl.product_id ELSE '5' END
,#temp_import_2.PRODUCT_ID_6 = CASE WHEN derivedTbl.thisRowNum = 6 THEN derivedTbl.product_id ELSE '6' END
,#temp_import_2.PRODUCT_ID_7 = CASE WHEN derivedTbl.thisRowNum = 7 THEN derivedTbl.product_id ELSE '7' END
,#temp_import_2.PRODUCT_ID_8 = CASE WHEN derivedTbl.thisRowNum = 8 THEN derivedTbl.product_id ELSE '8' END
,#temp_import_2.PRODUCT_ID_9 = CASE WHEN derivedTbl.thisRowNum = 9 THEN derivedTbl.product_id ELSE '9' END
,#temp_import_2.PRODUCT_ID_10 = CASE WHEN derivedTbl.thisRowNum = 10 THEN derivedTbl.product_id ELSE '10' END
FROM (
select temp_import_id, product_id, ROW_NUMBER() OVER (PARTITION BY temp_import_id ORDER BY contact_product_id) AS thisRowNum
from #contactProducts
) derivedTbl INNER JOIN #temp_import_2 ON derivedTbl.temp_import_id = #temp_import_2.temp_import_id
SELECT * FROM [#temp_import_2]
DROP TABLE #contactProducts
DROP TABLE [#temp_import_2]
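A sketch of one way around this, using the same temp tables: an UPDATE ... FROM applies only one (arbitrary) matching source row per target row, which is why only one CASE branch ever sticks. Collapsing the numbered rows into a single row per temp_import_id with conditional aggregation first avoids that:
-- Collapse the numbered product rows to one row per temp_import_id,
-- then update every PRODUCT_ID_n column from that single row.
UPDATE t2 SET
    PRODUCT_ID_1 = p.p1,
    PRODUCT_ID_2 = p.p2,
    PRODUCT_ID_3 = p.p3   -- ...repeat the pattern up to PRODUCT_ID_10
FROM #temp_import_2 AS t2
INNER JOIN (
    SELECT temp_import_id,
           MAX(CASE WHEN thisRowNum = 1 THEN product_id END) AS p1,
           MAX(CASE WHEN thisRowNum = 2 THEN product_id END) AS p2,
           MAX(CASE WHEN thisRowNum = 3 THEN product_id END) AS p3
    FROM (
        SELECT temp_import_id, product_id,
               ROW_NUMBER() OVER (PARTITION BY temp_import_id
                                  ORDER BY contact_product_id) AS thisRowNum
        FROM #contactProducts
    ) AS numbered
    GROUP BY temp_import_id
) AS p ON p.temp_import_id = t2.temp_import_id;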
saoco
(3 rep)
Apr 20, 2017, 09:53 PM
• Last activity: Apr 20, 2017, 10:35 PM
1
votes
0
answers
84
views
Get unique rows based on max of a column in mysql
I came across many solutions, like the one in this question. However, they are all restricted to data from one table. I'm very new to SQL. I'd like to get the same results as in the linked question, with two tweaks.
1. The column 'User' is from table1, 'Date' from table2, and the rest from table3.
2. I need to input a list of 'User'(s) for which the data is to be extracted.
I have really huge tables to work on. I've tried DISTINCT and GROUP BY, but they take forever.
I really appreciate any help.
Sample data:
table1
Global_id User User_id table_id table_type Disease source
1 TT964 -1 555 1 Breast cancer 1
3 TT2345 -1 876 1 Lung cancer 2
5 TT083 TT083-01 123 1 Brain tumor 4
87 TT4456 -1 4843 1 Cardiac arrest 7
65 TT6688 -1 5850 1 Brain tumor 4
350 TT4678 -1 969 1 Cardiac arrest 7
678 TT4567 -1 3985 1 Lung cancer 2
table2
fid gid gene
3 1 BRCA1
6 23 TP53
7 4 APO
1 6 APC
6 72 HFE
4 96 KRT5
table3
table_id m_id f_id
555 1 3
876 1 1
123 1 6
875 2 6
409 1 4
3985 2 3
The latest query I tried:
SELECT distinct a.User, f.fid, table_type, f.gene, s.table_id, s.m_id, Disease
FROM data03.table3 s
inner JOIN data01.table1 a
inner join data01.table2 f
where f.fid in
(SELECT s.f_id FROM data03.table3 s
where s.table_id in
(SELECT a.table_id FROM data01.table1 a
where a.User in ("TT964","TT083","TT6688"))
and s.m_id in (1,2,3,4,5,6)
and table_type = 1
and Disease like "Brain%"
and fid = 623
and a.User in ("TT964","TT083","TT6688"))
Here, fid and f_id are the same. I need to get data where the table_ids match, for table_type 1 and disease "brain cancer", for the given users, plus a few other conditions mentioned in the query. My code could be quite messy, as I'm pretty new to SQL.
Here are the SHOW CREATE TABLE results:
CREATE TABLE table1 (
  Global_id bigint(20) NOT NULL AUTO_INCREMENT,
  User varchar(128) NOT NULL,
  User_id varchar(255) NOT NULL,
  table_id bigint(20) NOT NULL,
  table_type smallint(6) NOT NULL,
  Disease varchar(255) NOT NULL,
  source int(11) DEFAULT NULL,
  PRIMARY KEY (Global_id),
  KEY fk_table_id_table_type (table_id, table_type),
  KEY fk_User (User)
);
CREATE TABLE table2 (
  fid int(11) NOT NULL AUTO_INCREMENT,
  gid int(11) DEFAULT NULL,
  gene varchar(100) DEFAULT NULL,
  PRIMARY KEY (fid),
  KEY ix_gid (gid),
  KEY ix_gene (gene)
);
CREATE TABLE table3 (
  table_id bigint(20) NOT NULL,
  m_id smallint(6) DEFAULT NULL,
  f_id int(11) DEFAULT NULL,
  PRIMARY KEY (m_id),
  KEY idx_f_id_table_id (f_id, table_id)
);
abn
(111 rep)
Mar 16, 2015, 10:40 PM
• Last activity: Mar 17, 2015, 03:59 PM
0
votes
2
answers
370
views
How are derived tables implemented in MySQL
When MySQL makes a derived table as a result of `SELECT (SELECT ...)`, `SELECT * FROM a JOIN (SELECT * FROM b) ...`, or `SELECT * FROM (SELECT ...)`, are these derived temporary tables created in memory or on the file system? Does it depend on the size of the derived table?
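One way to observe what actually happened (a quick check rather than a full answer): compare these session counters before and after running the statement; `Created_tmp_disk_tables` increments when an internal temporary table had to go to disk.
-- Run before and after the query and compare the deltas.
SHOW SESSION STATUS LIKE 'Created_tmp_tables';
SHOW SESSION STATUS LIKE 'Created_tmp_disk_tables';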
Cratylus
(1013 rep)
Sep 14, 2014, 06:31 PM
• Last activity: Sep 15, 2014, 08:28 AM
8
votes
2
answers
22778
views
Are there advantages to using temporary tables over derived tables in SQL Server?
I have read that derived tables have better performance than temporary tables, but many SQL Server developers prefer the latter anyway. Why?
I must run queries over large data sets (millions of records) and I want to be sure I am using the best choice.
CREATE TABLE A(
id BIGINT IDENTITY(1,1) NOT NULL,
field1 INT NOT NULL,
field2 VARCHAR(50) NULL
);
CREATE TABLE B(
id INT IDENTITY(1,1) NOT NULL,
field1 VARCHAR(10) NULL,
field2 INT NULL
);
INSERT INTO A
(field1,field2)
VALUES
(1,'a'),(2,'b'),(3,'c'),(2,'d'),(5,'e'),
(6,'f'),(7,'g'),(8,'h'),(9,'i'),(2,'j');
INSERT INTO B
(field1,field2)
VALUES
('a',1),('b',2),('c',3),('d',4),('e',5),
('f',6),('g',7),('h',8),('i',9),('j',2),('k',3);
DECLARE @begin INT=0,@end INT=200;
**Derived tables**
/*derived tables*/
SELECT
C.id,C.field1,C.field2,C.field3
FROM
(
SELECT
A.id,A.field1,A.field2,B.field2 AS field3,
ROW_NUMBER() OVER (ORDER BY A.id) AS iRow
FROM
A INNER JOIN B ON A.field1=B.id
) C
WHERE iRow BETWEEN @begin AND @end;
**Temporary tables**
/*temporary tables*/
CREATE TABLE #C (
iRow INT IDENTITY(1,1),
id bigint,
field1 INT,
field2 VARCHAR(50),
field3 INT );
INSERT INTO #C
(id,field1,field2,field3)
SELECT TOP 1000
A.id,A.field1,A.field2,B.field2
FROM
A INNER JOIN B ON A.field1=B.id
ORDER BY
A.id;
SELECT id,field1,field2,field3
FROM #C
WHERE iRow BETWEEN @begin AND @end;
DROP TABLE #C;
norgematos
(241 rep)
Feb 11, 2014, 02:53 PM
• Last activity: Mar 17, 2014, 01:15 AM
Showing page 1 of 16 total questions