Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
883
views
How do different forms of EXISTS() affect performance in MS SQL Server?
I've been told that when checking for the existence of a row in a table, this will work:
EXISTS(SELECT 1 FROM....)
but that a "better" way in regards to speed performance of the query would be to use this:
EXISTS(SELECT TOP(1) NULL FROM......)
I've searched online for anything that references this comparison and the only information I can find says that this is a personal preference and there actually is no performance gain of one over the other. There's always the "case by case" aspect of a question like this, but in general, is there any performance gain from using the second implementation of EXISTS()
over the first one?
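For reference, the two forms being compared look like this; a minimal sketch against a hypothetical dbo.Orders table:

```sql
-- EXISTS() short-circuits on the first matching row, and SQL Server discards
-- the SELECT list inside an EXISTS subquery, so both forms should produce
-- identical plans (dbo.Orders and CustomerId are hypothetical names).
IF EXISTS (SELECT 1 FROM dbo.Orders WHERE CustomerId = 42)
    PRINT 'found';

IF EXISTS (SELECT TOP (1) NULL FROM dbo.Orders WHERE CustomerId = 42)
    PRINT 'found';
```

Comparing the estimated plans of the two statements is the quickest way to verify this on a specific version.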
Gharbad The Weak
(119 rep)
Mar 7, 2024, 06:20 PM
• Last activity: Jun 5, 2025, 11:09 AM
1
votes
1
answers
59
views
Database Tuning to address poor performance
I'm not a DB Administrator but am looking for advice on tuning our MySQL database. It's a small storefront that is running Prestashop, and lags out for several minutes when saving changes to a product pack. That is, a product that is made up of 2 or more other products.
I just ran MySQLTuner-Perl and it looks alright for the most part, but there are several points flagged for review.
[!!] Aborted connections: 6.83% (337899/4948004)
[!!] CPanel and Flex system skip-name-resolve should be on
[!!] Joins performed without indexes: 10213707
I suspect the 10M joins performed without indexes are the likely culprit here, but I'm not entirely sure what that means or how it should be addressed.
The server has plenty of spare memory, so I could increase the allocation 2-3x as a temporary band-aid, but I imagine this won't be resolved until I figure out the indexing issue. Any advice or clarity on these results would be greatly appreciated.
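As a first step, the offending joins can be found by logging them and then indexing the join columns; a hedged sketch (the table and column names below are illustrative, and Prestashop's actual schema may differ):

```sql
-- Ask MySQL to log joins that run without a usable index:
SET GLOBAL slow_query_log = ON;
SET GLOBAL log_queries_not_using_indexes = ON;

-- For each logged query, check the plan; "type: ALL" on a joined table
-- means a full table scan for every joined row:
EXPLAIN
SELECT p.id_product, pk.quantity
FROM   ps_product p
JOIN   ps_pack pk ON pk.id_product_item = p.id_product;

-- Then add an index on the join column:
ALTER TABLE ps_pack ADD INDEX idx_id_product_item (id_product_item);
```

Raising memory won't help here: a join without an index costs CPU and row scans regardless of how much of the table is cached.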

David Anderson
(13 rep)
Oct 4, 2024, 06:09 PM
• Last activity: Mar 4, 2025, 07:44 PM
0
votes
1
answers
74
views
How to tune a complex Oracle stored procedure?
I have an Oracle stored procedure that takes a bunch of parameters and heavily does the left outer join like this:
create or replace PROCEDURE "SOME_PROCEDURE" (
cRECORDSET OUT SYS_REFCURSOR,
iPHY_YEAR IN SOME_PROCEDURE_TYPE.PHYSICAL_YEAR % TYPE,
iPHYSICAL_DOC_TYPE IN SOME_PROCEDURE_TYPE.PHYSICAL_DOC_TYPE % TYPE,
iPHYSICAL_DOC_TYPE_NONSAP IN SOME_PROCEDURE_TYPE.PHYSICAL_DOC_TYPE_NONSAP % TYPE,
sREFERENCE_NO IN SOME_PROCEDURE_TYPE.REFERENCE_NO % TYPE,
sPHYSICAL_NO IN SOME_PROCEDURE_TYPE.PHYSICAL_NO % TYPE,
iLOCATION_ID IN SOME_PROCEDURE_TYPE.LOCATION_ID % TYPE,
iCONTAINER_ID IN SOME_PROCEDURE_TYPE.CONTAINER_ID % TYPE,
sCONTAINER_NUMBER IN SOME_PROCEDURE_TYPE.CONTAINER_NUMBER % TYPE,
dREGISTERED_DATE_FROM IN SOME_PROCEDURE_TYPE.REGISTERED_DATE_FROM % TYPE,
dREGISTERED_DATE_TO IN SOME_PROCEDURE_TYPE.REGISTERED_DATE_TO % TYPE,
iPHYSICAL_STATUS_ID IN SOME_PROCEDURE_TYPE.PHYSICAL_STATUS_ID % TYPE,
sDRAFT_FLAG IN SOME_PROCEDURE_TYPE.DRAFT_FLAG % TYPE,
sCANCEL_FLAG IN SOME_PROCEDURE_TYPE.CANCEL_FLAG % TYPE,
sPHYSICAL_REMARK IN SOME_PROCEDURE_TYPE.PHYSICAL_REMARK % TYPE,
sPHYSICAL_DOC_TITLE IN SOME_PROCEDURE_TYPE.PHYSICAL_DOC_TITLE % TYPE,
sFI_DOC_NO_FROM IN SOME_PROCEDURE_TYPE.FI_DOC_NO_FROM % TYPE,
sFI_DOC_NO_TO IN SOME_PROCEDURE_TYPE.FI_DOC_NO_TO % TYPE,
iCOMPANY_ID IN SOME_PROCEDURE_TYPE.COMPANY_ID % TYPE,
iFI_DOC_YEAR IN SOME_PROCEDURE_TYPE.FI_DOC_YEAR % TYPE,
sFI_DOC_TYPE IN SOME_PROCEDURE_TYPE.FI_DOC_TYPE % TYPE,
sIS_ASSET IN SOME_PROCEDURE_TYPE.IS_ASSET % TYPE,
sWORKFLOW_STATUS_CODE IN SOME_PROCEDURE_TYPE.WORKFLOW_STATUS_CODE % TYPE,
sPO_NO IN SOME_PROCEDURE_TYPE.PO_NO % TYPE,
sVENDOR_NAME IN SOME_PROCEDURE_TYPE.VENDOR_NAME % TYPE,
sCUSTOMER_NAME IN SOME_PROCEDURE_TYPE.CUSTOMER_NAME % TYPE,
dPOSTING_DATE_FROM IN SOME_PROCEDURE_TYPE.POSTING_DATE_FROM % TYPE,
dPOSTING_DATE_TO IN SOME_PROCEDURE_TYPE.POSTING_DATE_TO % TYPE,
sCLEARING_DOC_NO IN SOME_PROCEDURE_TYPE.CLEARING_DOC_NO % TYPE,
iCLEARING_DOC_YEAR IN SOME_PROCEDURE_TYPE.CLEARING_DOC_YEAR % TYPE,
dCLEARING_DOC_DATE IN SOME_PROCEDURE_TYPE.CLEARING_DOC_DATE % TYPE,
sPAYMENT_BLOCK IN SOME_PROCEDURE_TYPE.PAYMENT_BLOCK % TYPE,
iAMOUNT_INC_VAT IN SOME_PROCEDURE_TYPE.AMOUNT_INC_VAT % TYPE,
sFI_DOC_REFNO IN SOME_PROCEDURE_TYPE.FI_DOC_REFNO % TYPE,
sSEND_TO IN SOME_PROCEDURE_TYPE.SEND_TO % TYPE,
sSAP_REVERSE_FLAG IN SOME_PROCEDURE_TYPE.SAP_REVERSE_FLAG % TYPE,
sCLEARING_FLAG IN SOME_PROCEDURE_TYPE.CLEARING_FLAG % TYPE,
iPAGE_SIZE IN SOME_PROCEDURE_TYPE.PAGE_SIZE % TYPE,
iPAGE_INDEX IN SOME_PROCEDURE_TYPE.PAGE_INDEX % TYPE,
iUSERID IN USR_DETAIL.USER_ID % TYPE
) IS
LAST_INDEX NUMBER;
USR_PERM_TYPE GET_USR_PERM_TYPE;
FIRST_INDEX NUMBER;
BEGIN
LAST_INDEX := iPAGE_SIZE * iPAGE_INDEX;
FIRST_INDEX := LAST_INDEX - iPAGE_SIZE + 1;
SELECT FN_GET_USR_PERM(iUSERID) INTO USR_PERM_TYPE FROM DUAL;
OPEN cRECORDSET FOR
WITH CTE_PHD_DETAIL AS (
SELECT
*
FROM
PHD_DETAIL
WHERE
USR_PERM_TYPE.VIS_MODE = 'ALL'
OR (
USR_PERM_TYPE.VIS_MODE = 'UNT'
AND (UNIT_ID = USR_PERM_TYPE.USUB_ID)
)
OR (
USR_PERM_TYPE.VIS_MODE = 'SUB'
AND SUB_UNIT_ID = USR_PERM_TYPE.USUB_ID
)
)
SELECT
PHD_ITEMS.RECORD_NUMBER,
PHD.PHYSICAL_ID,
PHD.PHYSICAL_DOC_TYPE,
PHD.PHYSICAL_NO,
PHD.REGISTERED_DATE,
PHD.CANCEL_FLAG,
PHD.DRAFT_FLAG,
CASE
WHEN PHD.REQUESTED_NO IS NOT NULL THEN
PHD.REQUESTED_NO ELSE PHD.TEXT_FILE_NAME
END REFERENCE_NO,
PHDS.PHYSICAL_STATUS_ID,
PHDS.PHYSICAL_STATUS_NAME,
CNT.CONTAINER_ID,
CASE
WHEN PHD.PHYSICAL_STATUS_ID = 2 THEN
NULL ELSE CNT.CONTAINER_NUMBER
END CONTAINER_NUMBER,
CNT.CONTAINER_TYPE,
CNTU.IS_DOC_HOUSE CONTAINER_IS_DOC_HOUSE,
CNTU.UNIT_ABBR CONTAINER_UNIT_ABBR,
LOC.LOCATION_ID,
CASE
WHEN PHD.PHYSICAL_STATUS_ID = 2
OR PHD.PHYSICAL_STATUS_ID = 12
OR PHD.PHYSICAL_STATUS_ID = 14 THEN
(
SELECT
SEND_TO
FROM
( SELECT SEND_TO, PHYSICAL_ID FROM PHD_STATUS_LOGS T1 WHERE T1.PHYSICAL_STATUS_ID = 2 ORDER BY CREATE_DATE DESC )
WHERE
ROWNUM = 1
AND PHYSICAL_ID = PHD_ITEMS.PHYSICAL_ID
) --DECODE( PHDLG.SENT_TO,NULL , '',PHDLG.SENT_TO )
WHEN CNT.CONTAINER_TYPE = 1 THEN
CASE
WHEN CNT.CONTAINER_STATUS_ID = 2 THEN
CASE
WHEN CNT.CURRENT_UNIT_ID IS NOT NULL THEN
CNTU.UNIT_FULL_NAME ELSE CCOM.COMPANY_CODE || ' - ' || CCOM.COMPANY_NAME
END ELSE LOC.LOCATION_NAME
END ELSE LOC.LOCATION_NAME -- LOC.LOCATION_NAME
END LOCATION_NAME,
FIDS.WORKFLOW_STATUS_NAME,
FID.FI_DOC_ID,
FID.FI_DOC_NO,
FID.FI_DOC_YEAR,
CASE
WHEN COM_FID.COMPANY_ID IS NOT NULL THEN
COM_FID.COMPANY_CODE || ' - ' || COM_FID.COMPANY_NAME ELSE
CASE
WHEN COM_PHD.COMPANY_ID IS NOT NULL THEN
COM_PHD.COMPANY_CODE || ' - ' || COM_PHD.COMPANY_NAME ELSE ''
END
END AS COMPANY_FULL_NAME,
FID.FI_DOC_REFNO,
CASE
WHEN FID.FI_DOC_ID IS NULL THEN
PTEMP.AMOUNT_INC_VAT ELSE FID.AMOUNT_INC_VAT
END AMOUNT_INC_VAT,
CASE
WHEN FID.FI_DOC_ID IS NULL THEN
PTEMP.AMOUNT_CURRENCY ELSE FID.AMOUNT_CURRENCY
END AMOUNT_CURRENCY,
FID.IS_ASSET,
CASE
WHEN PHD_ITEMS.FI_DOC_ID IS NULL THEN
CASE
WHEN PTEMP.CUSTOMER_CODE IS NOT NULL THEN
PTCUS.CUSTOMER_FULL_NAME ELSE PTVEN.VENDOR_FULL_NAME
END ELSE
CASE
WHEN FID.FIRST_ITEMS = 'CUSTOMER' THEN
FID.CUSTOMER_NAME ELSE FID.VENDOR_NAME
END
END AS CUSTOMER_VENDOR_NAME,
PHD.PHYSICAL_REMARK
FROM
(
SELECT
ROW_NUMBER ( ) over ( ORDER BY PHD.PHYSICAL_NO DESC, FID_ID.FI_DOC_NO, PHDTEMP.PHD_TEMP_DATA_ID DESC ) RECORD_NUMBER,
PHD.PHYSICAL_ID,
PHD.PHYSICAL_NO,
FID_ID.FI_DOC_ID,
FID_ID.FI_DOC_NO,
PHDTEMP.PHD_TEMP_DATA_ID
FROM
(
SELECT DISTINCT
PHD.PHYSICAL_ID,
PHD.PHYSICAL_NO,
FID_ID.FI_DOC_ID,
PHDTEMP.PHD_TEMP_DATA_ID
FROM
CTE_PHD_DETAIL PHD
LEFT JOIN (
SELECT
t1.FI_DOC_ID,
t1.PHYSICAL_ID,
t2.fi_Doc_no,
t2.fi_doc_year
FROM
PHD_FID_RELATE_DOC t1
INNER JOIN FID_SAP_DETAIL t2 ON t1.FI_DOC_ID = t2.FI_DOC_ID UNION ALL
SELECT
t1.FI_DOC_ID,
t1.PHYSICAL_ID,
t2.fi_Doc_no,
t2.fi_doc_year
FROM
FID_SAP_PHYSICAL_HISTORY t1
INNER JOIN FID_SAP_DETAIL t2 ON t1.FI_DOC_ID = t2.FI_DOC_ID
) FID_ID ON FID_ID.PHYSICAL_ID = PHD.PHYSICAL_ID
LEFT JOIN PHD_TEMP_DATA PHDTEMP ON PHD.PHYSICAL_ID = PHDTEMP.PHYSICAL_ID
AND FID_ID.FI_DOC_ID
IS NULL LEFT JOIN PHD_STATUS_LOGS PHDSL ON PHD.PHYSICAL_ID = PHDSL.PHYSICAL_ID
LEFT JOIN PHD_TEMP_DOCTITLE PHDT ON PHD.PHYSICAL_ID = PHDT.PHYSICAL_ID
LEFT JOIN CNT_DETAIL CNT ON PHD.CURRENT_CONTAINER_ID = CNT.CONTAINER_ID
LEFT JOIN VEW_UNT_DETAIL CNTU ON CNT.CURRENT_UNIT_ID = CNTU.UNIT_ID
LEFT JOIN LOC_DETAIL LOC ON CNT.LOCATION_ID = LOC.LOCATION_ID
LEFT JOIN COM_DETAIL COM_PHD ON PHD.COMPANY_ID = COM_PHD.COMPANY_ID
LEFT JOIN FID_SAP_DETAIL FID ON FID_ID.FI_DOC_ID = FID.FI_DOC_ID
LEFT JOIN COM_DETAIL COM ON FID.COMPANY_CODE = COM.COMPANY_CODE
LEFT JOIN FID_SAP_CUSTOMER FCUS ON FID.FI_DOC_ID = FCUS.FI_DOC_ID
LEFT JOIN FID_SAP_VENDOR FVEN ON FID.FI_DOC_ID = FVEN.FI_DOC_ID
LEFT JOIN FID_SAP_CLEARING_DOC CLR ON FID.FI_DOC_ID = CLR.FI_DOC_ID
LEFT JOIN FID_SAP_PO PO ON FID.FI_DOC_ID = PO.FI_DOC_ID
LEFT JOIN FID_SAP_STATUS FIDS ON FID.WORKFLOW_STATUS_CODE = FIDS.WORKFLOW_STATUS_CODE
LEFT JOIN MAS_VENDOR PTVEN ON PHDTEMP.VENDOR_CODE = PTVEN.VENDOR_CODE
AND COM_PHD.SERVER_ID = COM_PHD.SERVER_ID
LEFT JOIN MAS_CUSTOMER PTCUS ON PHDTEMP.CUSTOMER_CODE = PTCUS.CUSTOMER_CODE
AND PTCUS.SERVER_ID = COM_PHD.SERVER_ID
WHERE
PHD.PHYSICAL_NO IS NOT NULL
AND (
(
(
( PHD.PHYSICAL_DOC_TYPE = 1 AND iPHYSICAL_DOC_TYPE = 1 )
OR ( ( PHD.PHYSICAL_DOC_TYPE = 2 OR PHD.PHYSICAL_DOC_TYPE = 4 ) AND iPHYSICAL_DOC_TYPE = 2 )
)
AND iPHYSICAL_DOC_TYPE_NONSAP IS NULL
)
OR ( PHD.PHYSICAL_DOC_TYPE = iPHYSICAL_DOC_TYPE_NONSAP AND iPHYSICAL_DOC_TYPE_NONSAP IS NOT NULL )
)
AND (
sREFERENCE_NO IS NULL
OR (
UPPER( PHD.REQUESTED_NO ) LIKE '%' || UPPER( sREFERENCE_NO ) || '%'
OR UPPER( PHD.TEXT_FILE_NAME ) LIKE '%' || UPPER( sREFERENCE_NO ) || '%'
)
)
AND ( sPHYSICAL_NO IS NULL OR ( PHD.PHYSICAL_NO = sPHYSICAL_NO ) )
AND ( iLOCATION_ID IS NULL OR ( LOC.LOCATION_ID = iLOCATION_ID ) )
AND ( iPHYSICAL_STATUS_ID IS NULL OR ( PHD.PHYSICAL_STATUS_ID = iPHYSICAL_STATUS_ID ) )
AND ( iCONTAINER_ID IS NULL OR ( CNT.CONTAINER_ID = iCONTAINER_ID AND CNT.CONTAINER_TYPE = 2 ) )
AND (
sCONTAINER_NUMBER IS NULL
OR ( UPPER( CNT.CONTAINER_NUMBER ) LIKE '%' || UPPER( sCONTAINER_NUMBER ) || '%' AND CNT.CONTAINER_TYPE = 1 )
)
AND ( TRUNC( dREGISTERED_DATE_FROM ) IS NULL OR ( TRUNC( PHD.REGISTERED_DATE ) >= TRUNC( dREGISTERED_DATE_FROM ) ) )
AND ( TRUNC( dREGISTERED_DATE_TO ) IS NULL OR ( TRUNC( PHD.REGISTERED_DATE ) <= TRUNC( dREGISTERED_DATE_TO ) ) )
AND ( sFI_DOC_NO_FROM IS NULL OR ( FID.FI_DOC_NO >= sFI_DOC_NO_FROM ) )
AND ( sFI_DOC_NO_TO IS NULL OR ( FID.FI_DOC_NO <= sFI_DOC_NO_TO ) )
AND ( dPOSTING_DATE_FROM IS NULL OR ( TRUNC( FID.POSTING_DATE ) >= TRUNC( dPOSTING_DATE_FROM ) OR TRUNC( PHDTEMP.CREATE_DATE ) >= TRUNC( dPOSTING_DATE_FROM ) )
)
AND ( sCLEARING_DOC_NO IS NULL OR ( UPPER( CLR.CLEARING_DOC_NO ) LIKE '%' || UPPER( sCLEARING_DOC_NO ) || '%' ) )
AND ( iCLEARING_DOC_YEAR IS NULL OR ( CLR.CLEARING_DOC_YEAR = iCLEARING_DOC_YEAR ) )
AND ( dCLEARING_DOC_DATE IS NULL OR ( CLR.CLEARING_DATE = dCLEARING_DOC_DATE ) ) --AND ( sPAYMENT_BLOCK IS NULL
-- OR ( UPPER(FID.PAYMENT_BLOCK) LIKE '%'
-- || UPPER(sPAYMENT_BLOCK)
-- || '%' ) )
AND ( sFI_DOC_REFNO IS NULL OR ( UPPER( FID.FI_DOC_REFNO ) LIKE '%' || UPPER( sFI_DOC_REFNO ) || '%' ) )
AND ( ( sSAP_REVERSE_FLAG IS NULL AND ( FID.SAP_REVERSE_FLAG IS NULL OR FID.FI_DOC_ID IS NULL ) ) OR ( sSAP_REVERSE_FLAG IS NOT NULL ) )
AND ( iAMOUNT_INC_VAT IS NULL OR ( FID.AMOUNT_INC_VAT = iAMOUNT_INC_VAT OR PHDTEMP.AMOUNT_INC_VAT = iAMOUNT_INC_VAT ) )
AND (
sCUSTOMER_NAME IS NULL
OR UPPER( PTCUS.CUSTOMER_FULL_NAME ) LIKE '%' || UPPER( sCUSTOMER_NAME ) || '%'
OR UPPER( FCUS.CUSTOMER_CODE || ' ' || FCUS.CUSTOMER_NAME ) LIKE '%' || UPPER( sCUSTOMER_NAME ) || '%'
)
AND (
sVENDOR_NAME IS NULL
OR UPPER( PTVEN.VENDOR_FULL_NAME ) LIKE '%' || UPPER( sVENDOR_NAME ) || '%'
OR UPPER( FVEN.VENDOR_CODE || ' ' || FVEN.VENDOR_NAME ) LIKE '%' || UPPER( sVENDOR_NAME ) || '%'
) --AND FID_ID.FI_DOC_ID IS NULL AND PHDTEMP.PHD_TEMP_DATA_ID IS NOT NULL
) PHD
LEFT JOIN FID_SAP_DETAIL FID_ID ON FID_ID.FI_DOC_ID = PHD.FI_DOC_ID
LEFT JOIN PHD_TEMP_DATA PHDTEMP ON PHD.PHD_TEMP_DATA_ID = PHDTEMP.PHD_TEMP_DATA_ID
AND PHD.FI_DOC_ID IS NULL
) PHD_ITEMS
INNER JOIN CTE_PHD_DETAIL PHD ON PHD_ITEMS.PHYSICAL_ID = PHD.PHYSICAL_ID
LEFT JOIN PHD_STATUS PHDS ON PHD.PHYSICAL_STATUS_ID = PHDS.PHYSICAL_STATUS_ID
LEFT JOIN CNT_DETAIL CNT ON PHD.CURRENT_CONTAINER_ID = CNT.CONTAINER_ID
LEFT JOIN VEW_UNT_DETAIL CNTU ON CNT.CURRENT_UNIT_ID = CNTU.UNIT_ID
LEFT JOIN LOC_DETAIL LOC ON CNT.LOCATION_ID = LOC.LOCATION_ID
LEFT JOIN COM_DETAIL CCOM ON CNT.CURRENT_COMPANY_ID = CCOM.COMPANY_ID
LEFT JOIN COM_DETAIL COM_PHD ON PHD.COMPANY_ID = COM_PHD.COMPANY_ID
LEFT JOIN FID_SAP_DETAIL FID ON PHD_ITEMS.FI_DOC_ID = FID.FI_DOC_ID
LEFT JOIN FID_SAP_STATUS FIDS ON FID.WORKFLOW_STATUS_CODE = FIDS.WORKFLOW_STATUS_CODE
LEFT JOIN COM_DETAIL COM_FID ON COM_FID.COMPANY_CODE = FID.COMPANY_CODE
LEFT JOIN PHD_TEMP_DATA PTEMP ON PHD_ITEMS.PHD_TEMP_DATA_ID = PTEMP.PHD_TEMP_DATA_ID
LEFT JOIN MAS_VENDOR PTVEN ON PTEMP.VENDOR_CODE = PTVEN.VENDOR_CODE
AND PHD_ITEMS.FI_DOC_ID IS NULL
AND PTVEN.SERVER_ID = COM_PHD.SERVER_ID
LEFT JOIN MAS_CUSTOMER PTCUS ON PTEMP.CUSTOMER_CODE = PTCUS.CUSTOMER_CODE
AND PHD_ITEMS.FI_DOC_ID IS NULL
AND PTCUS.SERVER_ID = COM_PHD.SERVER_ID
WHERE
PHD_ITEMS.RECORD_NUMBER BETWEEN FIRST_INDEX
AND LAST_INDEX
ORDER BY
RECORD_NUMBER;
END;
This query takes around 1 minute 30 seconds to return the ~4000-record result. It is hard to extract the execution plan for such a long query. What strategy should I use to get the plan?
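One way to capture the actual (not estimated) plan is to run the procedure once and then pull the cursor's runtime statistics from the shared pool; a sketch, assuming you can locate the statement's sql_id:

```sql
-- Find the cursor for the opened query (match on the leading text
-- of the ref cursor's statement):
SELECT sql_id, child_number
FROM   v$sql
WHERE  sql_text LIKE 'WITH CTE_PHD_DETAIL%';

-- Actual row counts per plan step (requires STATISTICS_LEVEL = ALL for
-- the session, or a /*+ GATHER_PLAN_STATISTICS */ hint in the query):
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', 0, 'ALLSTATS LAST'));

-- With the Tuning Pack licensed, a SQL Monitor report is even easier
-- for statements running longer than a few seconds:
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(sql_id => '&sql_id', type => 'TEXT')
FROM   dual;
```

Comparing estimated versus actual row counts per step usually points straight at the join that needs attention.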
In my opinion, I currently have 3 options:
## Dynamic Query
The first option is converting the procedure into a dynamically composed statement (e.g. via an ORM), with conditional logic that builds an inner join instead of a left outer join and skips the join entirely when the parameter is null. At first glance this option seems smart, but it might take me a very long time to accomplish.
## Materialized View
Another option I am considering is pulling the left join clauses out and rewriting them as a materialized view. Unfortunately, this stored procedure is used heavily by the application, so the materialized view would be refreshed frequently when combined with ON COMMIT or ON STATEMENT. Is this option considered dangerous in terms of deadlocks or bad performance?
## Remove some search criteria (stored procedure parameters)
If I could remove some search criteria, I could also remove the corresponding left outer join clauses. However, after discussing it with my teammates, this option is not possible.
How am I supposed to tackle this performance problem, and what other strategies or options do I have? Any advice is welcome.
Pranithan T.
(103 rep)
Jan 7, 2025, 10:22 AM
• Last activity: Jan 7, 2025, 07:42 PM
0
votes
0
answers
18
views
In which sections of AWR and Statspack can we identify how much CPU the database usually consumes?
We need to migrate a database, but we need to evaluate the possibility of reducing the number of resources (CPU) to decrease the number of licenses. In which sections of AWR and Statspack can we identify how much CPU the database usually consumes? I mean, if we have 16 cores in the server and usually it consumes only 12 cores, how can we check it?
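Besides the "Load Profile" and "Host CPU" sections near the top of the AWR report, the same history can be queried directly from the AWR views; a sketch (AWR requires the Diagnostics Pack license, and the metric name shown is the standard one):

```sql
-- Average and peak host CPU utilization per AWR snapshot:
SELECT snap_id,
       ROUND(average, 2) AS avg_cpu_pct,
       ROUND(maxval, 2)  AS max_cpu_pct
FROM   dba_hist_sysmetric_summary
WHERE  metric_name = 'Host CPU Utilization (%)'
ORDER  BY snap_id;
```

With 16 cores, a sustained average around 75% would correspond to roughly 12 cores busy; size the target from the sustained peaks, not the average alone.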
Astora
(841 rep)
Dec 13, 2024, 01:42 AM
0
votes
1
answers
517
views
AI Tools for Tuning SQL Server?
Are there any strong and accurate AI-based utilities out there that can help identify problems with and suggest better-tuned alternatives for deficient queries, and that can analyze and provide useful feedback for execution plans? I am a learning Administrator with responsibility for 2016 and 2022 databases. Any suggestions would be appreciated!
AccidentalDBA_CO
(157 rep)
Jun 24, 2024, 06:52 PM
• Last activity: Jun 24, 2024, 07:24 PM
0
votes
0
answers
39
views
Why is my PostgreSQL performance getting worse after changing default parameters?
I changed my PostgreSQL server parameters to tune it for a query, but I get worse results than with the default settings. Here are my changed parameters and the EXPLAIN results:
# pgTune suggested settings
shared_buffers = 2560MB
effective_cache_size = 7680MB
maintenance_work_mem = 640MB
work_mem = 12MB
max_wal_size = 4GB
and here are results:
# explain analyze result before tuning
Result (cost=11618.77..15868.77 rows=100000 width=50) (actual time=249.507..309.305 rows=109800 loops=1)
Buffers: shared hit=1, temp read=417 written=419
-> Sort (cost=11618.77..11868.77 rows=100000 width=18) (actual time=249.502..276.241 rows=109800 loops=1)
Sort Key: actor.actor_id, s1."time"
Sort Method: external merge Disk: 3336kB
Buffers: shared hit=1, temp read=417 written=419
-> Nested Loop (cost=0.00..1262.95 rows=100000 width=18) (actual time=0.551..42.765 rows=109800 loops=1)
Buffers: shared hit=1
-> Function Scan on generate_series s1 (cost=0.00..10.00 rows=1000 width=8) (actual time=0.524..0.703 rows=1098 loops=1)
-> Materialize (cost=0.00..3.20 rows=100 width=10) (actual time=0.000..0.011 rows=100 loops=1098)
Buffers: shared hit=1
-> Subquery Scan on actor (cost=0.00..2.70 rows=100 width=10) (actual time=0.020..0.086 rows=100 loops=1)
Buffers: shared hit=1
-> Limit (cost=0.00..1.70 rows=100 width=118) (actual time=0.019..0.064 rows=100 loops=1)
Buffers: shared hit=1
-> Seq Scan on actor actor_1 (cost=0.00..445666.24 rows=26213424 width=118) (actual time=0.018..0.046 rows=100 loops=1)
Buffers: shared hit=1
Planning Time: 0.251 ms
Execution Time: 319.925 ms
(19 rows)
# explain analyze result after changing parameters
Result (cost=9567.77..13817.77 rows=100000 width=50) (actual time=280.372..340.972 rows=109800 loops=1)
Buffers: shared hit=1
-> Sort (cost=9567.77..9817.77 rows=100000 width=18) (actual time=280.363..294.594 rows=109800 loops=1)
Sort Key: actor.actor_id, s1."time"
Sort Method: quicksort Memory: 11414kB
Buffers: shared hit=1
-> Nested Loop (cost=0.00..1262.95 rows=100000 width=18) (actual time=0.633..43.263 rows=109800 loops=1)
Buffers: shared hit=1
-> Function Scan on generate_series s1 (cost=0.00..10.00 rows=1000 width=8) (actual time=0.600..0.806 rows=1098 loops=1)
-> Materialize (cost=0.00..3.20 rows=100 width=10) (actual time=0.000..0.011 rows=100 loops=1098)
Buffers: shared hit=1
-> Subquery Scan on actor (cost=0.00..2.70 rows=100 width=10) (actual time=0.025..0.092 rows=100 loops=1)
Buffers: shared hit=1
-> Limit (cost=0.00..1.70 rows=100 width=118) (actual time=0.024..0.069 rows=100 loops=1)
Buffers: shared hit=1
-> Seq Scan on actor actor_1 (cost=0.00..445666.24 rows=26213424 width=118) (actual time=0.022..0.051 rows=100 loops=1)
Buffers: shared hit=1
Planning Time: 0.247 ms
Execution Time: 355.548 ms
(19 rows)
Here is my query on dvdrental sample database :
explain (analyze,buffers) SELECT
actor.first_name,
s1.time as time,
random() * (100-0) + 0 AS cpu_usage,
random() * (30-26) + 26 * 1024 AS average_mhs,
random() * (90-50) + 50 AS temperature,
random() * (100-0) + 0 AS fan
FROM generate_series(
'2018-10-14',
'2021-10-15',
INTERVAL '1 day') AS s1(time)
CROSS JOIN(
SELECT
actor_id,
first_name,
last_name
FROM actor limit 100
) actor
ORDER BY
actor.actor_id,
s1.time;
When I use limit 2000 or higher instead of limit 100, the results get worse too.
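Note that work_mem can also be raised for a single session rather than server-wide, which makes it easier to test one query's sort behavior without over-allocating memory across every backend; a minimal sketch:

```sql
SET work_mem = '32MB';   -- applies to this session only
EXPLAIN (ANALYZE, BUFFERS)
SELECT g FROM generate_series(1, 1000000) AS g ORDER BY g DESC;
RESET work_mem;
```

An in-memory quicksort is not guaranteed to beat an external merge by much when the data set is small and the disk files were cached anyway, which may explain the near-identical timings above.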
mahdi
(1 rep)
Apr 15, 2024, 12:01 PM
• Last activity: Apr 15, 2024, 01:13 PM
1
votes
1
answers
1846
views
How to use UNNEST hint in Oracle Database?
I have encountered Bug 34044661 - Poor Performance Due to SQL Not Unnesting in Oracle 19C (Doc ID 34044661.8).
Oracle provides some workarounds:
use UNNEST hint
or
alter session set "_optimizer_squ_bottomup" = false;
or
alter session set "_optimizer_cost_based_transformation" = OFF;
The current execution plan (estimate plan is 19h to execute).
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| E-Time |
-------------------------------------------------------------------------------------------------------------
| 143 | NESTED LOOPS | | 307K| 135M| | 1808M (2)| 19:37:40 |
|*144 | TABLE ACCESS FULL | XXXX | 2571K| 1054M| | 49880 (2)| 00:00:02 |
|*145 | TABLE ACCESS FULL | BBBB | 1 | 32 | | 703 (2)| 00:00:01 |
After change "_optimizer_squ_bottomup" or "_optimizer_cost_based_transformation" (estimate plan is 05 secs to execute)
-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | E-Rows |E-Bytes|E-Temp | Cost (%CPU)| E-Time |
-------------------------------------------------------------------------------------------------------------
|*146 | HASH JOIN | | 273K| 120M| 11M| 104K (1)| 00:00:05 |
|*147 | TABLE ACCESS FULL | BBBB | 273K| 8547K| | 703 (2)| 00:00:01 |
|*148 | TABLE ACCESS FULL | XXXX | 2571K| 1054M| | 49880 (2)| 00:00:02 |
But I'd like to use the UNNEST hint. Where can I place this hint in the query? So far I haven't had any success in reaching the second execution plan (without NESTED LOOPS).
The SQL is as follows (I've hidden some columns).
SELECT
some columns,
.....,
FROM XXXX AS FIN
WHERE NOT EXISTS (SELECT 1 FROM BBBBB R1 WHERE R1.VVVV = 'F' AND R1.NNNN = FIN.NNNN)
AND NOT EXISTS (SELECT 1 FROM BBBBB R1 WHERE R1.VVVV = 'E' AND R1.NNNN = FIN.DDDDD)
AND NOT (FIN.SSSS = 'S' AND FIN.EEEEEE IS NOT NULL AND FIN.VVVV='E')
UNION ALL
SELECT
some columns,
.....,
FROM XXXX AS FIN
,BBBBB CCC
WHERE (--
(CCC.VVVV = 'F' AND CCC.NNNN = FIN.NUFIN) OR
(CCC.VVVV = 'E' AND CCC.NNNN = FIN.DDDDD)
)
AND NOT (FIN.SSSS = 'S' AND FIN.EEEEEE IS NOT NULL AND FIN.VVVV='E');
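For placement, the UNNEST hint belongs to the subquery's query block, so it can either go inside each NOT EXISTS subquery or be applied from the outer query by naming the block; a hedged sketch using the obfuscated names from the query above:

```sql
-- Option 1: hint inside the subquery itself
SELECT ...
FROM   XXXX FIN
WHERE  NOT EXISTS (SELECT /*+ UNNEST */ 1
                   FROM   BBBBB R1
                   WHERE  R1.VVVV = 'F' AND R1.NNNN = FIN.NNNN);

-- Option 2: name the subquery's query block and hint it from the top level
SELECT /*+ UNNEST(@SQ1) */ ...
FROM   XXXX FIN
WHERE  NOT EXISTS (SELECT /*+ QB_NAME(SQ1) */ 1
                   FROM   BBBBB R1
                   WHERE  R1.VVVV = 'F' AND R1.NNNN = FIN.NNNN);
```

A hint placed only at the top level without the @block reference is silently ignored for the subquery, which is a common reason the plan does not change.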
Astora
(841 rep)
Jul 23, 2023, 02:24 AM
• Last activity: Jul 23, 2023, 02:57 PM
1
votes
1
answers
75
views
Explain plan not showing most columns
I am running an explain plan for a query and it only returns the following columns:
explain plan for SELECT ....
SET LINESIZE 130
SET PAGESIZE 0
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
Plan hash value: 3706016447
------------------------------------------------------------------------
| Id | Operation | Name |
------------------------------------------------------------------------
| 0 | SELECT STATEMENT | |
|* 1 | FILTER | |
| 2 | NESTED LOOPS OUTER | |
| 3 | NESTED LOOPS | |
| 4 | NESTED LOOPS | |
| 5 | NESTED LOOPS | |
| 6 | NESTED LOOPS | |
....
Why? If I try another simple query, it works fine...
SQL> explain plan for select * from dba_users;
Explained.
SET LINESIZE 130
SET PAGESIZE 0
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
Plan hash value: 2145154934
-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 137 | 37949 | 37 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | 137 | 37949 | 37 (0)| 00:00:01 |
|* 2 | HASH JOIN | | 41 | 10414 | 24 (0)| 00:00:01 |
| 3 | TABLE ACCESS FULL | TS$ | 8 | 88 | 5 (0)| 00:00:01 |
|* 4 | HASH JOIN | | 41 | 9963 | 19 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL | TS$ | 8 | 88 | 5 (0)| 00:00:01 |
**UPDATE**: this seems to happen because I am using the
/*+ RULE*/
hint. How do I get a full explain plan with the RULE hint? I need to compare the plan with the /*+ RULE*/
hint against the plan without the /*+ RULE*/
hint.
Querying dba_users
with the RULE hint comes back with the same truncated result:
SQL> explain plan for select /*+ RULE*/ * from dba_users;
Explained.
SQL> SET LINESIZE 130
SET PAGESIZE 0
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);SQL> SQL>
Plan hash value: 52995442
-------------------------------------------------------------------------
| Id | Operation | Name |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | |
| 1 | MERGE JOIN OUTER | |
| 2 | SORT JOIN | |
| 3 | MERGE JOIN | |
| 4 | SORT JOIN | |
| 5 | NESTED LOOPS | |
Astora
(841 rep)
Jul 18, 2023, 05:31 PM
• Last activity: Jul 23, 2023, 11:15 AM
2
votes
1
answers
600
views
sp_BlitzIndex -- 'High value" vs 'High Impact'
Sorry, I may be missing something super-obvious here, but I've googled this and asked some colleagues and am not finding a satisfactory answer.
I've been using sp_BlitzIndex for a few months now along with some of my other regular index tuning tools and have found it to be a super-useful tool to add to my SQL toolbox.
But I do not quite understand the difference between 'High value' and 'High Impact' as in this example below:
'Indexaphobia: High value missing index with High Impact'
At first I thought they meant roughly the same thing -- it was identifying a missing index that would be both highly valuable and have a great impact on query performance.
But that didn't quite make sense to me -- why have two separate terms to indicate what was pretty much the same thing? So a colleague suggested that the term 'impact' was referring to the impact on db write operations. So 'high impact' in this case meant the index might cost more than other indexes when it came to updates and deletes.
But I've read some of Brent's comments, and, if I'm understanding him correctly, I don't think my colleague's interpretation is quite right either.
Can someone please enlighten me on this? I'd really like to understand the difference before testing out some of the suggestions sp_BlitzIndex is making to me. Is 'high impact' good or bad? If it's good, then what's the difference between 'high value' and 'high impact'?
Thanks very much in advance!!
Robert
Robert H
(21 rep)
Feb 25, 2018, 06:59 PM
• Last activity: Feb 22, 2023, 07:08 AM
6
votes
2
answers
2489
views
Backup tuning for large databases
I am tuning SQL Server backups using compression, writing to multiple files, and changing the `BufferCount` and `MaxTransferSize` options. I have a SQL Server instance with a 4 TB database and several smaller databases from a few MB to 150 GB. My question is, can I start tuning with the 150 GB DB an...
I am tuning SQL Server backups using compression, writing to multiple files, and changing the
BufferCount
and MaxTransferSize
options. I have a SQL Server instance with a 4 TB database and several smaller databases from a few MB to 150 GB.
My question is, can I start tuning with the 150 GB DB and then apply those settings to the 4 TB backup with minimal changes, or does the 4 TB require its own "tuning session" from scratch? Since it usually takes several backup attempts to dial in the settings I'm trying to get a jump-start on tuning the 4 TB database if possible.
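The settings being tuned combine into a single BACKUP statement; a sketch with illustrative starting values (the database name and file paths are hypothetical):

```sql
BACKUP DATABASE [BigDB]
TO DISK = N'F:\Backup\BigDB_1.bak',
   DISK = N'G:\Backup\BigDB_2.bak',
   DISK = N'H:\Backup\BigDB_3.bak',
   DISK = N'I:\Backup\BigDB_4.bak'
WITH COMPRESSION,
     BUFFERCOUNT = 50,
     MAXTRANSFERSIZE = 4194304,  -- 4 MB, the maximum allowed
     STATS = 5;                  -- report progress every 5%
```

Since BUFFERCOUNT x MAXTRANSFERSIZE determines the memory the backup consumes, settings dialed in on the 150 GB database are generally a reasonable starting point for the 4 TB one, though throughput there will also depend on the target storage keeping up across all files.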
Dave M.
(103 rep)
Nov 12, 2014, 04:32 PM
• Last activity: Feb 6, 2023, 04:04 PM
12
votes
2
answers
28407
views
SQL Server: is "Reorganize Files Before Releasing Unused Space" better or worse?
I could not find a satisfactory answer. I fully understand that "Shrink" will cause index fragmentation, and that can worsen performance.
But isn't this option meant to reduce fragmentation?
So, is it better or worse to activate this option? Please do not add to my confusion. I ask you to give me a clear and simple answer: it is better, or it is worse.
Then, if you want, give me your explanation, which I will gladly read and appreciate.
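For context, the SSMS checkbox corresponds to the two modes of DBCC SHRINKDATABASE; a sketch (MyDb is a placeholder):

```sql
-- Checked ("reorganize"): pages are moved toward the front of the file
-- before free space is released. That page movement is what fragments
-- indexes.
DBCC SHRINKDATABASE (MyDb, 10);   -- leave 10% free space

-- Unchecked: only free space at the end of the file is released;
-- no pages move, so no new fragmentation is introduced.
DBCC SHRINKDATABASE (MyDb, TRUNCATEONLY);
```

So the "reorganize" in the option name refers to compacting the data file, not to defragmenting indexes; the two are different things.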

Diego Soto
(223 rep)
Aug 19, 2017, 08:01 PM
• Last activity: Dec 8, 2022, 03:56 AM
1
votes
2
answers
631
views
Tuning MySQL on GROUP REPLICATION
I'm close to switching from an old version of MySQL 5.6 (master/slave configuration) to MySQL 8 (3 nodes in a multi-master configuration) with GROUP REPLICATION.
The tables are 99% InnoDB.
The web farm just copied the previous InnoDB configuration variables to these new machines.
I was wondering how to improve performance; each machine has the configuration below:
1) Cpu:
12 core (Intel Xeon Processor (Skylake, IBRS))
2) RAM:
total used free shared buff/cache available
Mem: 49456252 5164100 11487392 18972 32804760 43676276
Swap: 1998844 6924 1991920
3) Disk:
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 54G 1.2G 50G 3% /
/dev/mapper/vg0-mysql 1004G 414G 590G 42% /var/lib/mysql
All my databases together hold 250 GB of data.
InnoDB variables:
*************************** 1. row ***************************
Variable_name: innodb_adaptive_flushing
Value: ON
*************************** 2. row ***************************
Variable_name: innodb_adaptive_flushing_lwm
Value: 10
*************************** 3. row ***************************
Variable_name: innodb_adaptive_hash_index
Value: ON
*************************** 4. row ***************************
Variable_name: innodb_adaptive_hash_index_parts
Value: 8
*************************** 5. row ***************************
Variable_name: innodb_adaptive_max_sleep_delay
Value: 150000
*************************** 6. row ***************************
Variable_name: innodb_api_bk_commit_interval
Value: 5
*************************** 7. row ***************************
Variable_name: innodb_api_disable_rowlock
Value: OFF
*************************** 8. row ***************************
Variable_name: innodb_api_enable_binlog
Value: OFF
*************************** 9. row ***************************
Variable_name: innodb_api_enable_mdl
Value: OFF
*************************** 10. row ***************************
Variable_name: innodb_api_trx_level
Value: 0
*************************** 11. row ***************************
Variable_name: innodb_autoextend_increment
Value: 64
*************************** 12. row ***************************
Variable_name: innodb_autoinc_lock_mode
Value: 2
*************************** 13. row ***************************
Variable_name: innodb_buffer_pool_chunk_size
Value: 134217728
*************************** 14. row ***************************
Variable_name: innodb_buffer_pool_dump_at_shutdown
Value: ON
*************************** 15. row ***************************
Variable_name: innodb_buffer_pool_dump_now
Value: OFF
*************************** 16. row ***************************
Variable_name: innodb_buffer_pool_dump_pct
Value: 25
*************************** 17. row ***************************
Variable_name: innodb_buffer_pool_filename
Value: ib_buffer_pool
*************************** 18. row ***************************
Variable_name: innodb_buffer_pool_in_core_file
Value: ON
*************************** 19. row ***************************
Variable_name: innodb_buffer_pool_instances
Value: 8
*************************** 20. row ***************************
Variable_name: innodb_buffer_pool_load_abort
Value: OFF
*************************** 21. row ***************************
Variable_name: innodb_buffer_pool_load_at_startup
Value: ON
*************************** 22. row ***************************
Variable_name: innodb_buffer_pool_load_now
Value: OFF
*************************** 23. row ***************************
Variable_name: innodb_buffer_pool_size
Value: 34359738368
*************************** 24. row ***************************
Variable_name: innodb_change_buffer_max_size
Value: 25
*************************** 25. row ***************************
Variable_name: innodb_change_buffering
Value: all
*************************** 26. row ***************************
Variable_name: innodb_checksum_algorithm
Value: crc32
*************************** 27. row ***************************
Variable_name: innodb_cmp_per_index_enabled
Value: OFF
*************************** 28. row ***************************
Variable_name: innodb_commit_concurrency
Value: 0
*************************** 29. row ***************************
Variable_name: innodb_compression_failure_threshold_pct
Value: 5
*************************** 30. row ***************************
Variable_name: innodb_compression_level
Value: 6
*************************** 31. row ***************************
Variable_name: innodb_compression_pad_pct_max
Value: 50
*************************** 32. row ***************************
Variable_name: innodb_concurrency_tickets
Value: 5000
*************************** 33. row ***************************
Variable_name: innodb_data_file_path
Value: ibdata1:12M:autoextend
*************************** 34. row ***************************
Variable_name: innodb_data_home_dir
Value:
*************************** 35. row ***************************
Variable_name: innodb_deadlock_detect
Value: ON
*************************** 36. row ***************************
Variable_name: innodb_dedicated_server
Value: OFF
*************************** 37. row ***************************
Variable_name: innodb_default_row_format
Value: dynamic
*************************** 38. row ***************************
Variable_name: innodb_directories
Value:
*************************** 39. row ***************************
Variable_name: innodb_disable_sort_file_cache
Value: OFF
*************************** 40. row ***************************
Variable_name: innodb_doublewrite
Value: ON
*************************** 41. row ***************************
Variable_name: innodb_fast_shutdown
Value: 1
*************************** 42. row ***************************
Variable_name: innodb_file_per_table
Value: ON
*************************** 43. row ***************************
Variable_name: innodb_fill_factor
Value: 100
*************************** 44. row ***************************
Variable_name: innodb_flush_log_at_timeout
Value: 1
*************************** 45. row ***************************
Variable_name: innodb_flush_log_at_trx_commit
Value: 1
*************************** 46. row ***************************
Variable_name: innodb_flush_method
Value: O_DIRECT
*************************** 47. row ***************************
Variable_name: innodb_flush_neighbors
Value: 0
*************************** 48. row ***************************
Variable_name: innodb_flush_sync
Value: ON
*************************** 49. row ***************************
Variable_name: innodb_flushing_avg_loops
Value: 30
*************************** 50. row ***************************
Variable_name: innodb_force_load_corrupted
Value: OFF
*************************** 51. row ***************************
Variable_name: innodb_force_recovery
Value: 0
*************************** 52. row ***************************
Variable_name: innodb_fsync_threshold
Value: 0
*************************** 53. row ***************************
Variable_name: innodb_ft_aux_table
Value:
*************************** 54. row ***************************
Variable_name: innodb_ft_cache_size
Value: 8000000
*************************** 55. row ***************************
Variable_name: innodb_ft_enable_diag_print
Value: OFF
*************************** 56. row ***************************
Variable_name: innodb_ft_enable_stopword
Value: ON
*************************** 57. row ***************************
Variable_name: innodb_ft_max_token_size
Value: 84
*************************** 58. row ***************************
Variable_name: innodb_ft_min_token_size
Value: 3
*************************** 59. row ***************************
Variable_name: innodb_ft_num_word_optimize
Value: 2000
*************************** 60. row ***************************
Variable_name: innodb_ft_result_cache_limit
Value: 2000000000
*************************** 61. row ***************************
Variable_name: innodb_ft_server_stopword_table
Value:
*************************** 62. row ***************************
Variable_name: innodb_ft_sort_pll_degree
Value: 2
*************************** 63. row ***************************
Variable_name: innodb_ft_total_cache_size
Value: 640000000
*************************** 64. row ***************************
Variable_name: innodb_ft_user_stopword_table
Value:
*************************** 65. row ***************************
Variable_name: innodb_io_capacity
Value: 200
*************************** 66. row ***************************
Variable_name: innodb_io_capacity_max
Value: 2000
*************************** 67. row ***************************
Variable_name: innodb_lock_wait_timeout
Value: 120
*************************** 68. row ***************************
Variable_name: innodb_log_buffer_size
Value: 134217728
*************************** 69. row ***************************
Variable_name: innodb_log_checksums
Value: ON
*************************** 70. row ***************************
Variable_name: innodb_log_compressed_pages
Value: ON
*************************** 71. row ***************************
Variable_name: innodb_log_file_size
Value: 268435456
*************************** 72. row ***************************
Variable_name: innodb_log_files_in_group
Value: 2
*************************** 73. row ***************************
Variable_name: innodb_log_group_home_dir
Value: ./
*************************** 74. row ***************************
Variable_name: innodb_log_spin_cpu_abs_lwm
Value: 80
*************************** 75. row ***************************
Variable_name: innodb_log_spin_cpu_pct_hwm
Value: 50
*************************** 76. row ***************************
Variable_name: innodb_log_wait_for_flush_spin_hwm
Value: 400
*************************** 77. row ***************************
Variable_name: innodb_log_write_ahead_size
Value: 8192
*************************** 78. row ***************************
Variable_name: innodb_lru_scan_depth
Value: 1024
*************************** 79. row ***************************
Variable_name: innodb_max_dirty_pages_pct
Value: 90.000000
*************************** 80. row ***************************
Variable_name: innodb_max_dirty_pages_pct_lwm
Value: 10.000000
*************************** 81. row ***************************
Variable_name: innodb_max_purge_lag
Value: 0
*************************** 82. row ***************************
Variable_name: innodb_max_purge_lag_delay
Value: 0
*************************** 83. row ***************************
Variable_name: innodb_max_undo_log_size
Value: 1073741824
*************************** 84. row ***************************
Variable_name: innodb_monitor_disable
Value:
*************************** 85. row ***************************
Variable_name: innodb_monitor_enable
Value:
*************************** 86. row ***************************
Variable_name: innodb_monitor_reset
Value:
*************************** 87. row ***************************
Variable_name: innodb_monitor_reset_all
Value:
*************************** 88. row ***************************
Variable_name: innodb_numa_interleave
Value: OFF
*************************** 89. row ***************************
Variable_name: innodb_old_blocks_pct
Value: 37
*************************** 90. row ***************************
Variable_name: innodb_old_blocks_time
Value: 1000
*************************** 91. row ***************************
Variable_name: innodb_online_alter_log_max_size
Value: 134217728
*************************** 92. row ***************************
Variable_name: innodb_open_files
Value: 3459
*************************** 93. row ***************************
Variable_name: innodb_optimize_fulltext_only
Value: OFF
*************************** 94. row ***************************
Variable_name: innodb_page_cleaners
Value: 4
*************************** 95. row ***************************
Variable_name: innodb_page_size
Value: 16384
*************************** 96. row ***************************
Variable_name: innodb_parallel_read_threads
Value: 4
*************************** 97. row ***************************
Variable_name: innodb_print_all_deadlocks
Value: OFF
*************************** 98. row ***************************
Variable_name: innodb_print_ddl_logs
Value: OFF
*************************** 99. row ***************************
Variable_name: innodb_purge_batch_size
Value: 300
*************************** 100. row ***************************
Variable_name: innodb_purge_rseg_truncate_frequency
Value: 128
*************************** 101. row ***************************
Variable_name: innodb_purge_threads
Value: 4
*************************** 102. row ***************************
Variable_name: innodb_random_read_ahead
Value: OFF
*************************** 103. row ***************************
Variable_name: innodb_read_ahead_threshold
Value: 56
*************************** 104. row ***************************
Variable_name: innodb_read_io_threads
Value: 4
*************************** 105. row ***************************
Variable_name: innodb_read_only
Value: OFF
*************************** 106. row ***************************
Variable_name: innodb_redo_log_archive_dirs
Value:
*************************** 107. row ***************************
Variable_name: innodb_redo_log_encrypt
Value: OFF
*************************** 108. row ***************************
Variable_name: innodb_replication_delay
Value: 0
*************************** 109. row ***************************
Variable_name: innodb_rollback_on_timeout
Value: OFF
*************************** 110. row ***************************
Variable_name: innodb_rollback_segments
Value: 128
*************************** 111. row ***************************
Variable_name: innodb_sort_buffer_size
Value: 1048576
*************************** 112. row ***************************
Variable_name: innodb_spin_wait_delay
Value: 6
*************************** 113. row ***************************
Variable_name: innodb_spin_wait_pause_multiplier
Value: 50
*************************** 114. row ***************************
Variable_name: innodb_stats_auto_recalc
Value: ON
*************************** 115. row ***************************
Variable_name: innodb_stats_include_delete_marked
Value: OFF
*************************** 116. row ***************************
Variable_name: innodb_stats_method
Value: nulls_equal
*************************** 117. row ***************************
Variable_name: innodb_stats_on_metadata
Value: OFF
*************************** 118. row ***************************
Variable_name: innodb_stats_persistent
Value: ON
*************************** 119. row ***************************
Variable_name: innodb_stats_persistent_sample_pages
Value: 20
*************************** 120. row ***************************
Variable_name: innodb_stats_transient_sample_pages
Value: 8
*************************** 121. row ***************************
Variable_name: innodb_status_output
Value: OFF
*************************** 122. row ***************************
Variable_name: innodb_status_output_locks
Value: OFF
*************************** 123. row ***************************
Variable_name: innodb_strict_mode
Value: ON
*************************** 124. row ***************************
Variable_name: innodb_sync_array_size
Value: 1
*************************** 125. row ***************************
Variable_name: innodb_sync_spin_loops
Value: 30
*************************** 126. row ***************************
Variable_name: innodb_table_locks
Value: ON
*************************** 127. row ***************************
Variable_name: innodb_temp_data_file_path
Value: ibtmp1:12M:autoextend
*************************** 128. row ***************************
Variable_name: innodb_temp_tablespaces_dir
Value: ./#innodb_temp/
*************************** 129. row ***************************
Variable_name: innodb_thread_concurrency
Value: 12
*************************** 130. row ***************************
Variable_name: innodb_thread_sleep_delay
Value: 0
*************************** 131. row ***************************
Variable_name: innodb_tmpdir
Value:
*************************** 132. row ***************************
Variable_name: innodb_undo_directory
Value: ./
*************************** 133. row ***************************
Variable_name: innodb_undo_log_encrypt
Value: OFF
*************************** 134. row ***************************
Variable_name: innodb_undo_log_truncate
Value: ON
*************************** 135. row ***************************
Variable_name: innodb_undo_tablespaces
Value: 2
*************************** 136. row ***************************
Variable_name: innodb_use_native_aio
Value: ON
*************************** 137. row ***************************
Variable_name: innodb_version
Value: 8.0.17
*************************** 138. row ***************************
Variable_name: innodb_write_io_threads
Value: 4
*************************** 139. row ***************************
Variable_name: max_connections
Value: 3072
*************************** 140. row ***************************
Variable_name: max_user_connections
Value: 3072
I have read a lot of articles about sizing the buffer pool ( the 80% rule ), but I'm still wondering whether there are other variables to increase as well.
On the current master server I have the following statistics:
In addition, I ran the mysqltuner Perl script; these are its suggestions:
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
Control warning line(s) into /var/log/mysql/error.log file
Control error line(s) into /var/log/mysql/error.log file
MySQL was started within the last 24 hours - recommendations may be inaccurate
Reduce your overall MySQL memory footprint for system stability
Dedicate this server to your database for highest performance.
Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: https://bit.ly/2TcGgtU
Variables to adjust:
*** MySQL's maximum memory usage is dangerously high ***
*** Add RAM before increasing MySQL buffer variables ***
table_definition_cache(2000) > 464470 or -1 (autosizing if supported)
innodb_buffer_pool_size (>= 241.7G) if possible.
innodb_log_file_size should be (=3G) if possible, so InnoDB total log files size equals to 25% of buffer pool size.
innodb_buffer_pool_instances(=24)
The line "innodb_buffer_pool_size (>= 241.7G) if possible." shocked me a bit: how is it possible to need such a huge amount of RAM and buffer pool?
I understand it's related to the 250 GB of data I mentioned before, but is such a big amount of RAM really advisable?
Feel free to ask for other details, and thanks for your help!
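For context on the tuner's numbers: the 80% rule and the log-file suggestion are simple arithmetic. Here is a rough sketch using the free(1) figure above (the multipliers are rules of thumb, not hard requirements):

```python
# Rough sizing arithmetic for the "80% rule", based on the free(1) output above.
total_ram_kib = 49456252              # "Mem: total" from free, in KiB
total_ram_gib = total_ram_kib / 1024**2

# Rule of thumb: buffer pool ~70-80% of RAM on a dedicated server.
# mysqltuner's ">= 241.7G" line instead reflects total data size (250 GB),
# i.e. the RAM needed to cache *everything*, which is rarely practical.
buffer_pool_gib = 0.75 * total_ram_gib

# Tuner suggestion: total redo log size ~25% of the buffer pool,
# split across innodb_log_files_in_group = 2 files.
log_file_gib = 0.25 * buffer_pool_gib / 2

print(f"RAM:           {total_ram_gib:.1f} GiB")                       # 47.2 GiB
print(f"buffer pool:   {buffer_pool_gib:.1f} GiB (currently 32 GiB)")  # 35.4 GiB
print(f"each log file: {log_file_gib:.1f} GiB (currently 0.25 GiB)")   # 4.4 GiB
```

In other words, the 241.7 GB figure answers "how much RAM would cache the whole data set", while the 80% rule answers "how much of this machine's RAM can InnoDB safely take".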
@danblack
Below is the MySQL config file of one of the GROUP REPLICATION machines; these machines are not used by my customers right now, and I'll switch to them when everything is ready and properly configured.
https://pastebin.com/dS1AeM2R
This is the full mysqltuner report on GR machine:
https://pastebin.com/XEFbpGUV
@Wilson Hauck
I don't see an SSD or NVMe among the devices:
pvdisplay
--- Physical volume ---
PV Name /dev/sdc
VG Name vg0
PV Size <651.93 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 166893
Free PE 493
Allocated PE 166400
PV UUID 4Xco4e-Es5d-LcZ2-FHxt-8PYw-B1E6-h9nwP8
cat /sys/block/sdc/queue/rotational
1
These are the outputs from one machine I'm going to switch to; again, these machines are not yet used by the application:
UPTIME: 25 hours
B)
https://pastebin.com/Uqtre6BS
C)
https://pastebin.com/3HfWdCTR
D)
https://pastebin.com/EazB2XWZ
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 193064
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 193064
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
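One thing that stands out in the ulimit output is open files (-n) 1024 next to max_connections = 3072. A back-of-the-envelope descriptor budget (the multipliers are assumptions, not exact MySQL accounting, and mysqld itself may run with a different limit than this interactive shell, e.g. via systemd's LimitNOFILE):

```python
# Very rough file-descriptor budget for mysqld; the per-connection and
# per-table multipliers are assumptions, not exact MySQL accounting.
max_connections   = 3072    # from SHOW VARIABLES above
innodb_open_files = 3459    # from SHOW VARIABLES above
table_open_cache  = 2000    # assumed typical default, not shown above

# At least one socket fd per connection, one fd per open table/file.
estimated_fds = max_connections + innodb_open_files + 2 * table_open_cache

ulimit_n = 1024             # "open files (-n)" from ulimit -a above
print(f"estimated fds needed: ~{estimated_fds}")   # ~10531
print(f"shell ulimit -n:      {ulimit_n}")
```

If the limit the mysqld process actually runs under is anywhere near 1024, it is a candidate for raising well before the other buffer variables.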
**In addition to this information, here is the output of the slave machine currently used by my customers**:
UPTIME: 26 days
MySQL config file:
https://pastebin.com/EztvxgYX
mysqltuner script + SHOW GLOBAL STATUS + SHOW GLOBAL VARIABLES + SHOW FULL PROCESSLIST ( I put everything here because I've used up the links my reputation allows )
https://pastebin.com/5rq8sLF1
Disk type:
pvdisplay
--- Physical volume ---
PV Name /dev/vdb1
VG Name vg0
PV Size 912.54 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 233611
Free PE 5259
Allocated PE 228352
PV UUID G8aSfO-Ktbg-UcfP-4yoL-jjH9-qjxe-OUN9ni
cat /sys/block/vdb/queue/rotational
1
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 459880
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 459880
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
iostat -xm 5 3
Linux 3.2.0-4-amd64 02/24/20 _x86_64_ (12 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
6.95 0.00 0.65 1.20 0.01 91.19
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.16 0.09 0.15 0.00 0.00 22.30 0.00 1.02 1.94 0.46 0.46 0.01
vdb 5.84 89.65 156.08 208.73 7.59 3.05 59.71 0.92 2.53 1.93 2.97 0.68 24.89
dm-0 0.00 0.00 161.92 269.31 7.59 3.05 50.51 1.21 2.81 2.52 2.98 0.58 24.96
avg-cpu: %user %nice %system %iowait %steal %idle
3.35 0.00 0.54 4.43 0.00 91.69
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.60 4.40 1.40 0.02 0.01 9.10 0.01 1.93 2.55 0.00 0.14 0.08
vdb 3.40 35.60 830.40 51.20 17.90 0.30 42.26 1.16 1.31 1.18 3.53 0.85 75.04
dm-0 0.00 0.00 833.80 74.60 17.90 0.30 41.01 1.54 1.69 1.53 3.47 0.83 75.20
avg-cpu: %user %nice %system %iowait %steal %idle
3.40 0.00 0.43 4.64 0.00 91.52
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.40 0.00 0.40 0.00 0.00 16.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb 5.80 30.80 810.60 46.80 14.65 0.27 35.63 1.39 1.59 1.61 1.32 1.04 89.36
dm-0 0.00 0.00 816.20 67.20 14.67 0.27 34.63 2.02 2.24 2.34 1.04 1.01 89.36
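Reading the last two iostat samples for vdb, the volume already looks fairly busy; the averages work out roughly like this (just arithmetic on the figures above):

```python
# Averages over the 2nd and 3rd iostat samples for vdb (values copied from above).
samples = [
    # (r/s, w/s, rMB/s, %util)
    (830.40, 51.20, 17.90, 75.04),
    (810.60, 46.80, 14.65, 89.36),
]

avg_r       = sum(s[0] for s in samples) / len(samples)
avg_read_mb = sum(s[2] for s in samples) / len(samples)
avg_util    = sum(s[3] for s in samples) / len(samples)

# Average size per read request, in KiB.
avg_read_kib = (avg_read_mb * 1024) / avg_r

print(f"avg reads/s:     {avg_r:.0f}")          # 820
print(f"avg read size:   {avg_read_kib:.1f} KiB")  # 20.3 (a bit over one 16 KiB InnoDB page)
print(f"avg utilization: {avg_util:.0f} %")     # 82
```

~820 read requests/s at ~82% utilization on a rotational device suggests the disk, not the InnoDB variables, is the first bottleneck to look at here.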
**UPDATE 2ND EDIT**
- SLAVE SERVER
( SHOW GLOBAL STATUS; + SHOW GLOBAL VARIABLES; + SHOW FULL PROCESSLIST; )
https://pastebin.com/nTeSuCjZ
- STAGE SERVER
SHOW GLOBAL VARIABLES;
https://pastebin.com/n369zpdB
( SHOW GLOBAL STATUS; + SHOW FULL PROCESSLIST; )
https://pastebin.com/8rDtkubv

Jung
(11 rep)
Feb 23, 2020, 07:27 PM
• Last activity: Dec 4, 2022, 11:05 AM
2
votes
1
answers
635
views
PostgreSQL 9.6.2 performance
I need help for server side optimization. I think it is a system file cache problem but I'm not sure, maybe I am wrong. I have 2 servers (S1, S2) with PostgreSQL 9.6.2 on both of them, the database contents are same. Profiling query: S1 Limit (cost=0.28..7.42 rows=1 width=32) (actual time=0.035..0.0...
I need help with server-side optimization.
I think it is a filesystem cache problem, but I'm not sure; maybe I am wrong.
I have 2 servers (S1, S2), both running PostgreSQL 9.6.2, with identical database contents.
Profiling query:
S1
Limit (cost=0.28..7.42 rows=1 width=32) (actual time=0.035..0.035 rows=1 loops=1)
-> Nested Loop (cost=0.28..14.56 rows=2 width=32) (actual time=0.035..0.035 rows=1 loops=1)
Join Filter: (djangocms_blog_blogcategory.id = djangocms_blog_post_categories.blogcategory_id)
Rows Removed by Join Filter: 10
-> Index Only Scan using djangocms_blog_post_categories_post_id_blogcategory_id_key on djangocms_blog_post_categories (cost=0.28..12.28 rows=2 width=4) (actual time=0.017..0.017 rows=1 loops=1)
Index Cond: (post_id = 20)
Heap Fetches: 1
-> Materialize (cost=0.00..1.45 rows=30 width=32) (actual time=0.012..0.014 rows=11 loops=1)
-> Seq Scan on djangocms_blog_blogcategory (cost=0.00..1.30 rows=30 width=32) (actual time=0.008..0.008 rows=11 loops=1)
Planning time: 0.539 ms
Execution time: 0.065 ms
S2
Limit (cost=0.28..7.42 rows=1 width=32) (actual time=0.059..0.060 rows=1 loops=1)
-> Nested Loop (cost=0.28..14.57 rows=2 width=32) (actual time=0.059..0.059 rows=1 loops=1)
Join Filter: (djangocms_blog_blogcategory.id = djangocms_blog_post_categories.blogcategory_id)
Rows Removed by Join Filter: 10
-> Index Only Scan using djangocms_blog_post_categories_post_id_blogcategory_id_key on djangocms_blog_post_categories (cost=0.28..12.29 rows=2 width=4) (actual time=0.032..0.032 rows=1 loops=1)
Index Cond: (post_id = 20)
Heap Fetches: 1
-> Materialize (cost=0.00..1.45 rows=30 width=32) (actual time=0.012..0.021 rows=11 loops=1)
-> Seq Scan on djangocms_blog_blogcategory (cost=0.00..1.30 rows=30 width=32) (actual time=0.006..0.008 rows=11 loops=1)
Planning time: 0.847 ms
Execution time: 0.126 ms
Why is the execution time above approximately 2 times slower on the S2 server?
System info:
S1: usage hardware storage like /dev/sda
OS: CoreOS 1185.3.0, Linux 4.7.3-coreos-r2
CPU: Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
RAM: 8 banks DIMM Synchronous 2133 MHz (0.5 ns), vendor: Micron, size: 16GiB
Motherboard: ASUSTeK COMPUTER INC. Z10PA-U8 Series
S2: usage software RAID 1 like /dev/md
OS: Ubuntu 16.04, Linux 4.8.0-49-generic
CPU: Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
RAM: 8 banks DIMM Synchronous 2400 MHz (0.4 ns), vendor: Micron, size: 31GiB
Motherboard: ASUSTeK COMPUTER INC. Z10PA-U8 Series
sysctl -a differences between S1 and S2
--- S1 2017-05-10 12:06:13.000000000 +0400
+++ S2 2017-05-10 12:06:17.000000000 +0400
@@ -2,0 +3,30 @@
+debug.kprobes-optimization = 1
@@ -3,0 +34,5 @@
+dev.mac_hid.mouse_button2_keycode = 97
+dev.mac_hid.mouse_button3_keycode = 100
+dev.mac_hid.mouse_button_emulation = 0
+dev.raid.speed_limit_max = 200000
+dev.raid.speed_limit_min = 1000
@@ -6 +41 @@
-fs.aio-nr = 16248
+fs.aio-nr = 0
@@ -8 +43 @@
-fs.dentry-state = 2479916 2429108 45 0 0 0
+fs.dentry-state = 801876 778676 45 0 0 0
@@ -10,5 +45,5 @@
-fs.epoll.max_user_watches = 27012874
-fs.file-max = 13160039
-fs.file-nr = 7936 0 13160039
-fs.inode-nr = 1219692 585566
-fs.inode-state = 1219692 585566 0 0 0 0 0
+fs.epoll.max_user_watches = 54067363
+fs.file-max = 131070
+fs.file-nr = 2944 0 131070
+fs.inode-nr = 743397 373
+fs.inode-state = 743397 373 0 0 0 0 0
@@ -19,0 +55 @@
+fs.mount-max = 100000
@@ -25,10 +60,0 @@
-fs.nfs.idmap_cache_timeout = 0
-fs.nfs.nfs_callback_tcpport = 0
-fs.nfs.nfs_congestion_kb = 262144
-fs.nfs.nfs_mountpoint_timeout = 500
-fs.nfs.nlm_grace_period = 0
-fs.nfs.nlm_tcpport = 0
-fs.nfs.nlm_timeout = 10
-fs.nfs.nlm_udpport = 0
-fs.nfs.nsm_local_state = 3
-fs.nfs.nsm_use_hostnames = 0
@@ -49 +75 @@
-fs.quota.syncs = 0
+fs.quota.syncs = 738
@@ -52,2 +77,0 @@
-fscache.object_max_active = 12
-fscache.operation_max_active = 6
@@ -62 +86 @@
-kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %e
+kernel.core_pattern = core
@@ -64 +88 @@
-kernel.core_uses_pid = 1
+kernel.core_uses_pid = 0
@@ -71,2 +95,3 @@
-kernel.hardlockup_panic = 1
-kernel.hostname = db01
+kernel.hardlockup_panic = 0
+kernel.hostname = db02
+kernel.hotplug =
@@ -77 +102 @@
-kernel.io_delay_type = 0
+kernel.io_delay_type = 1
@@ -81,0 +107 @@
+kernel.keys.persistent_keyring_expiry = 259200
@@ -86 +111,0 @@
-kernel.latencytop = 0
@@ -89,0 +115 @@
+kernel.moksbstate_disabled = 0
@@ -96 +122 @@
-kernel.ns_last_pid = 30114
+kernel.ns_last_pid = 5983
@@ -102 +128 @@
-kernel.osrelease = 4.7.3-coreos-r2
+kernel.osrelease = 4.8.0-49-generic
@@ -109 +135 @@
-kernel.panic_on_stackoverflow = 0
+kernel.panic_on_rcu_stall = 0
@@ -114 +140 @@
-kernel.perf_event_max_sample_rate = 50000
+kernel.perf_event_max_sample_rate = 32000
@@ -117 +143 @@
-kernel.perf_event_paranoid = 2
+kernel.perf_event_paranoid = 3
@@ -121 +147 @@
-kernel.printk = 7 4 1 7
+kernel.printk = 4 4 1 7
@@ -122,0 +149 @@
+kernel.printk_devkmsg = ratelimit
@@ -126 +153 @@
-kernel.pty.nr = 3
+kernel.pty.nr = 8
@@ -128,2 +155,2 @@
-kernel.random.entropy_avail = 825
+kernel.random.entropy_avail = 3413
@@ -133,2 +160,2 @@
-kernel.random.write_wakeup_threshold = 896
+kernel.random.write_wakeup_threshold = 1024
@@ -148 +175 @@
-kernel.sched_domain.cpu0.domain0.max_newidle_lb_cost = 7553
+kernel.sched_domain.cpu0.domain0.max_newidle_lb_cost = 12934
@@ -161 +188 @@
-kernel.sched_domain.cpu0.domain1.max_newidle_lb_cost = 8457
+kernel.sched_domain.cpu0.domain1.max_newidle_lb_cost = 14562
@@ -174 +201 @@
-kernel.sched_domain.cpu1.domain0.max_newidle_lb_cost = 5790
+kernel.sched_domain.cpu1.domain0.max_newidle_lb_cost = 8807
@@ -187 +214 @@
-kernel.sched_domain.cpu1.domain1.max_newidle_lb_cost = 7589
+kernel.sched_domain.cpu1.domain1.max_newidle_lb_cost = 11616
@@ -200 +227 @@
-kernel.sched_domain.cpu10.domain0.max_newidle_lb_cost = 5613
+kernel.sched_domain.cpu10.domain0.max_newidle_lb_cost = 16533
@@ -213 +240 @@
-kernel.sched_domain.cpu10.domain1.max_newidle_lb_cost = 5614
+kernel.sched_domain.cpu10.domain1.max_newidle_lb_cost = 15852
@@ -226 +253 @@
-kernel.sched_domain.cpu11.domain0.max_newidle_lb_cost = 5731
+kernel.sched_domain.cpu11.domain0.max_newidle_lb_cost = 9695
@@ -239 +266 @@
-kernel.sched_domain.cpu11.domain1.max_newidle_lb_cost = 6304
+kernel.sched_domain.cpu11.domain1.max_newidle_lb_cost = 12636
@@ -252 +279 @@
-kernel.sched_domain.cpu2.domain0.max_newidle_lb_cost = 4771
+kernel.sched_domain.cpu2.domain0.max_newidle_lb_cost = 12337
@@ -265 +292 @@
-kernel.sched_domain.cpu2.domain1.max_newidle_lb_cost = 6913
+kernel.sched_domain.cpu2.domain1.max_newidle_lb_cost = 11912
@@ -278 +305 @@
-kernel.sched_domain.cpu3.domain0.max_newidle_lb_cost = 5315
+kernel.sched_domain.cpu3.domain0.max_newidle_lb_cost = 8975
@@ -291 +318 @@
-kernel.sched_domain.cpu3.domain1.max_newidle_lb_cost = 12228
+kernel.sched_domain.cpu3.domain1.max_newidle_lb_cost = 12429
@@ -304 +331 @@
-kernel.sched_domain.cpu4.domain0.max_newidle_lb_cost = 7697
+kernel.sched_domain.cpu4.domain0.max_newidle_lb_cost = 11457
@@ -317 +344 @@
-kernel.sched_domain.cpu4.domain1.max_newidle_lb_cost = 7813
+kernel.sched_domain.cpu4.domain1.max_newidle_lb_cost = 12466
@@ -330 +357 @@
-kernel.sched_domain.cpu5.domain0.max_newidle_lb_cost = 5840
+kernel.sched_domain.cpu5.domain0.max_newidle_lb_cost = 6009
@@ -343 +370 @@
-kernel.sched_domain.cpu5.domain1.max_newidle_lb_cost = 6493
+kernel.sched_domain.cpu5.domain1.max_newidle_lb_cost = 12096
@@ -356 +383 @@
-kernel.sched_domain.cpu6.domain0.max_newidle_lb_cost = 6487
+kernel.sched_domain.cpu6.domain0.max_newidle_lb_cost = 9209
@@ -369 +396 @@
-kernel.sched_domain.cpu6.domain1.max_newidle_lb_cost = 11182
+kernel.sched_domain.cpu6.domain1.max_newidle_lb_cost = 13395
@@ -382 +409 @@
-kernel.sched_domain.cpu7.domain0.max_newidle_lb_cost = 6140
+kernel.sched_domain.cpu7.domain0.max_newidle_lb_cost = 9542
@@ -395 +422 @@
-kernel.sched_domain.cpu7.domain1.max_newidle_lb_cost = 5866
+kernel.sched_domain.cpu7.domain1.max_newidle_lb_cost = 15411
@@ -408 +435 @@
-kernel.sched_domain.cpu8.domain0.max_newidle_lb_cost = 8012
+kernel.sched_domain.cpu8.domain0.max_newidle_lb_cost = 7499
@@ -421 +448 @@
-kernel.sched_domain.cpu8.domain1.max_newidle_lb_cost = 7803
+kernel.sched_domain.cpu8.domain1.max_newidle_lb_cost = 14383
@@ -434 +461 @@
-kernel.sched_domain.cpu9.domain0.max_newidle_lb_cost = 5389
+kernel.sched_domain.cpu9.domain0.max_newidle_lb_cost = 9134
@@ -447 +474 @@
-kernel.sched_domain.cpu9.domain1.max_newidle_lb_cost = 6507
+kernel.sched_domain.cpu9.domain1.max_newidle_lb_cost = 12891
@@ -456 +483 @@
-kernel.sched_rr_timeslice_ms = 100
+kernel.sched_rr_timeslice_ms = 25
@@ -463,0 +491 @@
+kernel.secure_boot = 0
@@ -465,0 +494 @@
+kernel.sg-big-buff = 32768
@@ -473 +502 @@
-kernel.softlockup_panic = 1
+kernel.softlockup_panic = 0
@@ -476 +505 @@
-kernel.sysrq = 16
+kernel.sysrq = 176
@@ -478 +507 @@
-kernel.threads-max = 1030459
+kernel.threads-max = 2062506
@@ -482,0 +512,3 @@
+kernel.unprivileged_bpf_disabled = 0
+kernel.unprivileged_userns_apparmor_policy = 1
+kernel.unprivileged_userns_clone = 1
@@ -485 +517 @@
-kernel.version = #1 SMP Tue Nov 1 01:38:43 UTC 2016
+kernel.version = #52~16.04.1-Ubuntu SMP Thu Apr 20 10:55:59 UTC 2017
@@ -488,0 +521 @@
+kernel.yama.ptrace_scope = 1
@@ -499 +532 @@
-net.core.default_qdisc = fq_codel
+net.core.default_qdisc = pfifo_fast
@@ -514 +547 @@
-net.core.somaxconn = 65535
+net.core.somaxconn = 131070
@@ -2452 +2059 @@
-vm.lowmem_reserve_ratio = 256 256 32
+vm.lowmem_reserve_ratio = 256 256 32 1
@@ -2459 +2066 @@
-vm.mmap_min_addr = 4096
+vm.mmap_min_addr = 65536
HDParm tests
S1
hdparm -Tt /dev/sda9
/dev/sda9:
Timing cached reads: 23880 MB in 1.99 seconds = 11974.99 MB/sec
Timing buffered disk reads: 1316 MB in 3.00 seconds = 438.57 MB/sec
S2
hdparm -Tt /dev/md2
/dev/md2:
Timing cached reads: 20508 MB in 2.00 seconds = 10262.05 MB/sec
Timing buffered disk reads: 1532 MB in 3.00 seconds = 510.02 MB/sec
The hdparm tests show about 14% lower cached reads on S2, but I do not think this is the main reason for the 2 times slower execution time on S2.
All tests were repeated ~20 times on each server.
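For what it's worth, the two ratios in question work out as follows from the numbers above (a sanity check only):

```python
# Execution times from EXPLAIN ANALYZE above (ms) and hdparm cached reads (MB/s).
s1_exec_ms, s2_exec_ms = 0.065, 0.126
s1_cached, s2_cached   = 11974.99, 10262.05

slowdown   = s2_exec_ms / s1_exec_ms
cached_gap = (s1_cached - s2_cached) / s1_cached * 100

print(f"S2 execution slowdown: {slowdown:.2f}x")    # 1.94x
print(f"S2 cached-read gap:    {cached_gap:.1f} %") # 14.3 %
```

A ~14% memory-bandwidth gap cannot by itself explain a ~2x difference; at sub-millisecond timings, scheduling noise and cache state can easily dominate a gap this size.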
suquant
(121 rep)
May 11, 2017, 01:50 PM
• Last activity: May 23, 2022, 09:02 AM
3
votes
1
answers
869
views
Index usage Brent Ozar script
I'm using sp_blitzIndex to check the health of my database indexation, but I don't understand every column used in the output. I'm talking specifically about "Usage Stats" and "Op Stats". I find many indexes with zero Reads but a thousand writes (example: Reads: 0 Writes: 42,597). Does it mean that...
I'm using sp_blitzIndex to check the health of my database indexation, but I don't understand every column used in the output.
I'm talking specifically about "Usage Stats" and "Op Stats". I find many indexes with zero reads but thousands of writes (example: Reads: 0, Writes: 42,597).
Does it mean that the index isn't used at all, at least not for SELECT statements? Is it safe to remove it, or to disable it at first?
PS: My SQL instance hasn't been rebooted for a very long time.
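For context, the "Reads"/"Writes" figures come from the standard usage DMV. A minimal sketch of the same calculation (the filter and ordering here are illustrative, not sp_BlitzIndex's exact query):

```sql
-- Reads = user_seeks + user_scans + user_lookups; Writes = user_updates.
-- These counters reset on instance restart, so a long uptime is exactly
-- what makes "Reads: 0" a meaningful signal.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name                   AS index_name,
       s.user_seeks + s.user_scans + s.user_lookups AS reads,
       s.user_updates           AS writes
FROM   sys.dm_db_index_usage_stats AS s
JOIN   sys.indexes AS i
       ON i.object_id = s.object_id
      AND i.index_id  = s.index_id
WHERE  s.database_id = DB_ID()
ORDER  BY writes DESC;
```

Disabling first (`ALTER INDEX ... DISABLE`) rather than dropping is the reversible option the question hints at: the metadata stays, and the index can be rebuilt if something regresses.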
dba_maroc
(31 rep)
Oct 4, 2021, 02:37 PM
• Last activity: Oct 4, 2021, 03:02 PM
9
votes
1
answers
10301
views
How do you tune innodb_read_io_threads?
The default value of `innodb_read_io_threads` and `innodb_write_io_threads` is 4. How do you check if your server load needs a higher number of threads? As I regularly check `show engine innodb status \G`, there is no traceable activity: -------- FILE I/O -------- I/O thread 0 state: waiting for com...
The default value of
innodb_read_io_threads
and innodb_write_io_threads
is 4. How do you check if your server load needs a higher number of threads?
As I regularly check show engine innodb status \G, there is no traceable activity:
--------
FILE I/O
--------
I/O thread 0 state: waiting for completed aio requests (insert buffer thread)
I/O thread 1 state: waiting for completed aio requests (log thread)
I/O thread 2 state: waiting for completed aio requests (read thread)
...
I/O thread 32 state: waiting for completed aio requests (write thread)
I/O thread 33 state: waiting for completed aio requests (write thread)
Pending normal aio reads: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] , aio writes: [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] ,
Furthermore, what is the downside of increasing it to the maximum value of 64? If they are not used, are they just harmless idle threads?
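As a rough way to answer the "does my load need more threads" part: the pending-I/O counters, sampled repeatedly while the server is busy, are the usual signal. A sketch using the standard status variables (the interpretation thresholds are judgment calls):

```sql
-- Persistently non-zero pending counts under load suggest the current I/O
-- threads are saturated; steady zeros (as in the question's output) suggest
-- the default of 4 is already sufficient.
SHOW GLOBAL STATUS LIKE 'Innodb_data_pending_reads';
SHOW GLOBAL STATUS LIKE 'Innodb_data_pending_writes';
```

Note that `innodb_read_io_threads`/`innodb_write_io_threads` are not dynamic: changing them means editing my.cnf and restarting mysqld.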
Googlebot
(4551 rep)
Sep 12, 2021, 01:25 PM
• Last activity: Sep 12, 2021, 03:09 PM
0
votes
1
answers
255
views
Oracle 9i historical long queries since instance startup
please I would like to know any query that shows the queries or the top 10 queries that take more time to finish since the beginning of the instance with user and the sql sentence, I have seen several that show those that are currently running and active but can not find any that show a historical o...
Please, I would like to know a query that shows the top 10 queries that take the most time to finish since instance startup, with the user and the SQL text. I have seen several that show queries that are currently running and active, but I cannot find any that show a history of these queries since the instance was started. It is usually very simple from Toad with ADDM/AWR reports, but the database in question is Oracle 9i and does not offer that option. Thank you
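Since AWR/ADDM is unavailable on 9i, the closest substitute is the shared pool itself. A sketch against v$sql (cursors aged out of the library cache are lost, so this is an approximation, not a true since-startup history):

```sql
-- Top 10 statements by total elapsed time still cached in the shared pool,
-- with the parsing user. elapsed_time is in microseconds in v$sql (9i+).
SELECT *
  FROM (SELECT u.username,
               s.elapsed_time / 1000000 AS elapsed_seconds,
               s.executions,
               s.sql_text
          FROM v$sql s, all_users u
         WHERE s.parsing_user_id = u.user_id
         ORDER BY s.elapsed_time DESC)
 WHERE ROWNUM <= 10;
```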
Arnaldo Raxach
(145 rep)
Dec 5, 2018, 02:23 PM
• Last activity: Jul 20, 2021, 05:04 PM
0
votes
1
answers
463
views
Postgres SELECT query produces exact same explain analyze results but different execution times in the results
I have a SELECT query which when running produces the exact same plan in terms of indexes being used when doing an explain analyze however the duration of the run is different (2 seconds vs 30 seconds on average). Why would this be the case? Query ``` explain analyze SELECT SUM ((t0.item_cash_staked...
I have a SELECT query that produces the exact same plan (in terms of indexes used) each time I run EXPLAIN ANALYZE; however, the duration of the run differs greatly (2 seconds vs 30 seconds on average).
Why would this be the case?
Query
explain analyze SELECT SUM ((t0.item_cash_staked - t0.item_cash_won))
FROM item t0, product t1
WHERE (((( t0.item_rejection_code_id IS null)
AND (t0.item_created_on > '2019-08-01 17:38:33.613+01'))
AND (t1.customer_id = 123456))
AND (t1.product_id = t0.product_id));
Explain results
Aggregate (cost=1984710.64..1984710.65 rows=1 width=32) (actual time=26916.904..26916.905 rows=1 loops=1)
-> Nested Loop (cost=1.14..1983541.69 rows=233789 width=6) (actual time=4531.244..26765.752 rows=453812 loops=1)
-> Index Scan using product_idx_01 on product t1 (cost=0.57..172442.67 rows=539262 width=4) (actual time=6.213..4490.454 rows=500133 loops=1)
Index Cond: (customer_id = 123456)
-> Index Scan using item_idx_product_id on item t0 (cost=0.57..3.24 rows=12 width=10) (actual time=0.035..0.044 rows=1 loops=500133)
Index Cond: (product_id = t1.product_id)
Filter: ((item_rejection_code_id IS NULL) AND (item_created_on > '2019-08-01 17:38:33.613+01'::timestamp with time zone))
Rows Removed by Filter: 1
Planning time: 0.409 ms
Execution time: 26916.999 ms
(10 rows)
Explain Result Faster
Aggregate (cost=1984710.64..1984710.65 rows=1 width=32) (actual time=1786.816..1786.816 rows=1 loops=1)
-> Nested Loop (cost=1.14..1983541.69 rows=233789 width=6) (actual time=289.922..1687.398 rows=453812 loops=1)
-> Index Scan using product_idx_01 on product t1 (cost=0.57..172442.67 rows=539262 width=4) (actual time=0.013..202.082 rows=500133 loops=1)
Index Cond: (customer_id = 123456)
-> Index Scan using item_idx_product_id on item t0 (cost=0.57..3.24 rows=12 width=10) (actual time=0.002..0.003 rows=1 loops=500133)
Index Cond: (product_id = t1.product_id)
Filter: ((item_rejection_code_id IS NULL) AND (item_created_on > '2019-08-01 17:38:33.613+01'::timestamp with time zone))
Rows Removed by Filter: 1
Planning time: 0.275 ms
Execution time: 1786.866 ms
(10 rows)
I get that running a query once will be slower than running it again later, but I have had times where I ran the same query one after the other and it still took approximately 25 seconds. Why would this be?
Is there any way of improving the query to make it perform better?
Any help is much appreciated.
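Given identical plans, the usual culprit is cold versus warm cache rather than planning. The BUFFERS option of EXPLAIN makes this visible; a sketch using the question's query:

```sql
-- "shared read" = pages fetched from disk or the OS cache; "shared hit" =
-- pages already in shared_buffers. A slow run with the same plan typically
-- shows large read counts; a fast run shows mostly hits.
EXPLAIN (ANALYZE, BUFFERS)
SELECT SUM(t0.item_cash_staked - t0.item_cash_won)
FROM item t0
JOIN product t1 ON t1.product_id = t0.product_id
WHERE t0.item_rejection_code_id IS NULL
  AND t0.item_created_on > '2019-08-01 17:38:33.613+01'
  AND t1.customer_id = 123456;
```

If the node-level timings differ but the buffer counts are similar, concurrent load or I/O contention on the server is the next thing to rule out.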
rdbmsNoob
(459 rep)
May 21, 2021, 03:55 PM
• Last activity: May 25, 2021, 10:12 AM
3
votes
1
answers
131
views
Save CPU cycles on SQL Server Query Plans (Better without Stats than with on wide tables)
Assume that a query in question has a very wide set of columns. The query looks like the following: set statistics io,time on; Select a.Col1, a.Col2, b.Col3, b.Col4, c.Col5 From FirstWideTable a Join SecondWideTable b on a.Col6=b.Col7 Join ThirdWideTable c on b.Col8=c.Col9 WHERE a.Col10=@SomeVariabl...
Assume that a query in question has a very wide set of columns. The query looks like the following:
set statistics io,time on;
Select a.Col1, a.Col2, b.Col3, b.Col4, c.Col5
From FirstWideTable a
Join SecondWideTable b on a.Col6=b.Col7
Join ThirdWideTable c on b.Col8=c.Col9
WHERE a.Col10=@SomeVariable;
set statistics io,time off;
The above query selects 10 columns out of 100 or so (a more "selective" query). SQL Server created statistics on these tables, and the tables contain effective indexes addressing the predicate (Col10); while not shown here, assume the PKs are created on the three tables (see demo below):
CREATE NONCLUSTERED INDEX NC__a_Col1_Col2_Col6
ON FirstWideTable(Col1 ASC, Col2 ASC, Col6 ASC)
INCLUDE (Col10)
ON [PRIMARY];
and similar indexes are available for the Second and Third Wide Tables.
If the query is written as a subquery with only the 10 columns required:
set statistics io,time on;
Select a.Col1, a.Col2, b.Col3, b.Col4, c.Col5
From (SELECT Col1, Col2, Col6, Col10 FROM FirstWideTable) a
Join (SELECT Col3, Col4, Col7, Col8 FROM SecondWideTable) b on a.Col6=b.Col7
Join (SELECT Col5, Col9 FROM ThirdWideTable) c on b.Col8=c.Col9
WHERE a.Col10=@SomeVariable;
set statistics io,time off;
results improve. Elapsed time on the wide tables with statistics is terrible (around 30 seconds). Each time the query (without statistics against it) lists only the actual columns needed in a subquery, the elapsed time drops considerably.
QUERY RESULTS - Wide table has statistics - Wide table is the baseline.
The subquery results do not have statistics against them; the timing was taken after adding the subquery that examines only the necessary columns rather than the wide table.
CPU Time (ms) Elapsed Time (ms)
Wide table result 7265 35459
Subqueried narrow result table 1 5125 31271
Wide table result 6765 33446
Subqueried narrow result table 2 5391 27099
Wide table result 7203 34354
Subqueried narrow result table 3 5843 10321
Result from the theoretical data below:
CPU Time (ms) Elapsed Time (ms)
Wide table result 109 461
Subqueried narrow result table 1 32 441
Wide table result 78 441
Subqueried narrow result table 2 110 453
Wide table result 62 441
Subqueried narrow result table 3 63 458
Here is how the theoretical data was generated.
/* create a dummy source table */
select * into test from (select top 100000 NewID() A, NewID() B from sys.columns) x
Generate theoretical data:
DECLARE @test2 TABLE (
[ID] [bigint] NULL,
[A] [uniqueidentifier] NULL,
[B] [uniqueidentifier] NULL,
[C] [uniqueidentifier] NULL,
[D] [uniqueidentifier] NULL,
[E] [uniqueidentifier] NULL,
[F] [uniqueidentifier] NULL,
[G] [uniqueidentifier] NULL,
[H] [uniqueidentifier] NULL,
[I] [uniqueidentifier] NULL,
[J] [uniqueidentifier] NULL,
[K] [uniqueidentifier] NULL,
[L] [uniqueidentifier] NULL,
[M] [uniqueidentifier] NULL,
[N] [uniqueidentifier] NULL,
[O] [uniqueidentifier] NULL,
[P] [uniqueidentifier] NULL,
[Q] [uniqueidentifier] NULL,
[R] [uniqueidentifier] NULL,
[S] [uniqueidentifier] NULL,
[T] [uniqueidentifier] NULL,
[U] [uniqueidentifier] NULL,
[V] [uniqueidentifier] NULL,
[W] [uniqueidentifier] NULL,
[X] [uniqueidentifier] NULL,
[Y] [uniqueidentifier] NULL,
[Z] [uniqueidentifier] NULL,
[AA] [uniqueidentifier] NULL,
[AB] [uniqueidentifier] NULL,
[AC] [uniqueidentifier] NULL,
[AD] [uniqueidentifier] NULL,
[AE] [uniqueidentifier] NULL,
[AF] [uniqueidentifier] NULL,
[AG] [uniqueidentifier] NULL,
[AH] [uniqueidentifier] NULL,
[AI] [uniqueidentifier] NULL,
[AJ] [uniqueidentifier] NULL,
[AK] [uniqueidentifier] NULL,
[AL] [uniqueidentifier] NULL,
[AM] [uniqueidentifier] NULL,
[AN] [uniqueidentifier] NULL,
[AO] [uniqueidentifier] NULL,
[AP] [uniqueidentifier] NULL,
[AQ] [uniqueidentifier] NULL,
[AR] [uniqueidentifier] NULL
)
Make a dummy data set
insert into @test2
select Top 50000 ID=ROW_NUMBER()over(order by C.A),
NewID() AS A,NewID() AS B,NewID() AS C,NewID() AS D,NewID() AS E,NewID() AS F,
NewID() AS G,NewID() AS H,NewID() AS I,NewID() AS J,NewID() AS K,NewID() AS L,
NewID() AS M,NewID() AS N,NewID() AS O,NewID() AS P,NewID() AS Q,NewID() AS R,
NewID() as S,NewID() AS T,NewID() AS U,NewID() AS V,NewID() AS W,NewID() AS X,
NewID() AS Y,NewID() AS Z,NewID() AS AA,NewID() AS AB,NewID() AS AC,NewID() AS AD,
NewID() AS AE,NewID() AS AF,NewID() AS AG,NewID() AS AH,NewID() AS AI,NewID() AS AJ,
NewID() AS AK,NewID() AS AL,NewID() AS AM,NewID() AS AN,NewID() AS AO,NewID() AS AP,
NewID() AS AQ,NewID() AS AR
FROM test C
cross apply test D
cross apply test E
cross apply test F
cross apply test G
cross apply test H
cross apply test I
cross apply test J
cross apply test K
cross apply test L
cross apply test M
cross apply test N
cross apply test O
cross apply test P
cross apply test Q
cross apply test R
cross apply test S
cross apply test T
cross apply test U
cross apply test V
cross apply test W
cross apply test X
cross apply test Y
cross apply test Z
Create the tables:
Select * into [FirstWideTable] FROM @test2
Select * into [SecondWideTable] FROM @test2
Select * into [ThirdWideTable] FROM @test2
CREATE UNIQUE CLUSTERED INDEX [PK_TEST_A_ID] ON [dbo].[FirstWideTable]([ID] ASC)
CREATE UNIQUE CLUSTERED INDEX [PK_TEST_B_ID] ON [dbo].[SecondWideTable]([ID] ASC)
CREATE UNIQUE CLUSTERED INDEX [PK_TEST_C_ID] ON [dbo].[ThirdWideTable]([ID] ASC)
Run the query a few times allowing statistics data to be created
Select a.ID, a.A, a.AB, b.A, b.B, b.C, b.D, c.A, C.B, c.D, c.AA
From [FirstWideTable] a
LEFT Join [SecondWideTable] b on a.ID=b.ID
LEFT Join [ThirdWideTable] c on b.ID=c.ID
WHERE a.ID BETWEEN 1 and 50000;
Select a.ID, a.A, a.AB, b.A, b.B, b.C, b.D, c.A, C.B, c.D, c.AA
From [FirstWideTable] a
LEFT Join [SecondWideTable] b on a.ID=b.ID
LEFT Join [ThirdWideTable] c on b.ID=c.ID
WHERE a.ID BETWEEN 1 and 50000;
Select a.ID, a.A, a.AB, b.A, b.B, b.C, b.D, c.A, C.B, c.D, c.AA
From [FirstWideTable] a
LEFT Join [SecondWideTable] b on a.ID=b.ID
LEFT Join [ThirdWideTable] c on b.ID=c.ID
WHERE a.ID BETWEEN 1 and 50000;
Create the statistics
update statistics [FirstWideTable]
update statistics [SecondWideTable]
update statistics [ThirdWideTable]
Grab the results
set statistics io,time on
Select a.ID, a.A, a.AB, b.A, b.B, b.C, b.D, c.A, C.B, c.D, c.AA
From (Select ID, A, AB from [FirstWideTable]) a
LEFT Join(Select ID, A, B, C, D from[SecondWideTable]) b on a.ID=b.ID
LEFT Join(Select ID, A, B, D, AA from[ThirdWideTable]) c on b.ID=c.ID
WHERE a.ID BETWEEN 1 and 50000;
set statistics io,time off
set statistics io,time on
Select a.ID, a.A, a.AB, b.A, b.B, b.C, b.D, c.A, C.B, c.D, c.AA
From [FirstWideTable] a
LEFT Join [SecondWideTable] b on a.ID=b.ID
LEFT Join [ThirdWideTable] c on b.ID=c.ID
WHERE a.ID BETWEEN 1 and 50000;
set statistics io,time off
Showplan: https://www.brentozar.com/pastetheplan/?id=H15IMMYaS
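A narrower covering index is the standard alternative to sub-selecting columns by hand; a sketch against the demo tables (column choices mirror the demo query and are illustrative only):

```sql
-- With every queried column present in the index, the scan touches narrow
-- index pages instead of the 44-column clustered index, which is what the
-- subquery variant was approximating.
CREATE NONCLUSTERED INDEX IX_FirstWideTable_Cover
    ON dbo.FirstWideTable (ID) INCLUDE (A, AB);
CREATE NONCLUSTERED INDEX IX_SecondWideTable_Cover
    ON dbo.SecondWideTable (ID) INCLUDE (A, B, C, D);
CREATE NONCLUSTERED INDEX IX_ThirdWideTable_Cover
    ON dbo.ThirdWideTable (ID) INCLUDE (A, B, D, AA);
```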
Jamie
(141 rep)
Dec 5, 2019, 01:14 PM
• Last activity: Nov 13, 2020, 04:34 AM
1
votes
2
answers
893
views
Oracle slow query with different user
I have a question with an oracle query that takes almost 20 minutes when it is executed from a schema and yet when it runs as sys users it runs in 1 second maximum ... how is this possible? ... I do not know much about tuning plsql and I find it difficult to find the cause of this tremendous delay,...
I have a question about an Oracle query that takes almost 20 minutes when it is executed from a regular schema, yet when it runs as the SYS user it finishes in at most one second. How is this possible? I do not know much about tuning PL/SQL and find it difficult to find the cause of this tremendous delay; I would like to know if there is any tool to auto-tune queries. The statement in question is:
SELECT FC.ID_SER
FROM MIGER.FONT_COUNTER FC
WHERE TO_CHAR(FC.ID_SER) NOT IN (SELECT ID_SER FROM INAC.SERV)
-- AND (FC.STATE <> 'R' OR FC.STATE IS NULL)
AND FC.T_ACT='M'
Thank you
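For reference, one rewrite that is often worth testing here, sketched on the question's tables: NOT EXISTS avoids NOT IN's null-handling surprise (if INAC.SERV.ID_SER is ever NULL, the NOT IN form returns no rows at all) and gives the optimizer more join choices:

```sql
-- Sketch only: same intent as the original, expressed as an anti-join.
SELECT fc.id_ser
  FROM miger.font_counter fc
 WHERE fc.t_act = 'M'
   AND NOT EXISTS (SELECT 1
                     FROM inac.serv s
                    WHERE s.id_ser = TO_CHAR(fc.id_ser));
```

A SYS-versus-schema timing gap this large usually points at different session optimizer settings or stale statistics seen by the non-privileged user, not at the SQL text itself.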
miguel ramires
(169 rep)
Feb 7, 2019, 10:40 PM
• Last activity: Jun 5, 2020, 01:48 PM
1
votes
2
answers
63
views
A SQL query optimization issue with ORDER BY
An SQL query optimization issue with ORDER BY Current status I have a database of Chess games and their corresponding moves (stored as strings called fens). I have two main tables 'Game' and 'GamePosition'. GamePosition has an index on the fen column and Game has an index on white_elo. I currently ha...
An SQL query optimization issue with ORDER BY
Current status
I have a database of Chess games and their corresponding moves (stored as strings called fens). I have two main tables 'Game' and 'GamePosition'. GamePosition has an index on the fen column and Game has an index on white_elo. I currently have 170471 games and 14813401 positions. I'm running mysql 5.7.28.
Object
I'm trying to fetch top rated games based on elo rating of the players. I've simplified my query a bit here, but the point and performance problem is the same.
SELECT Game.id
FROM Game
JOIN GamePosition ON Game.id = game_id
WHERE fen = 'rnbqkbnr/pppppppp/8/8/3P4/8/PPP1PPPP/RNBQKBNR'
ORDER BY white_elo DESC
LIMIT 10
This query tends to be a bit slow (1.2 s) if I get a lot of results (typically the first move gives 67k+ results). Since I plan to expand the database at least 10x, I want to optimize my query. I've run an EXPLAIN, which shows that it needs to do a filesort over all the results before finding the top-rated games. This seems to be the issue. Removing the ORDER BY from the query makes it superfast (0.0008 s).
Any ideas if it's possible to optimize the query, or if I could store the data differently?
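One storage change that removes the filesort, sketched here purely as an assumption (it denormalizes the rating onto the position row, so it must be maintained whenever games are inserted): a composite index on (fen, white_elo) lets MySQL read the ten highest-rated matches in index order.

```sql
-- Hypothetical denormalization: copy white_elo into GamePosition and index it
-- together with fen; ORDER BY ... LIMIT then walks the index backwards
-- instead of sorting 67k+ joined rows.
ALTER TABLE GamePosition
    ADD COLUMN white_elo INT NULL,
    ADD INDEX idx_fen_white_elo (fen, white_elo);

SELECT game_id
FROM GamePosition
WHERE fen = 'rnbqkbnr/pppppppp/8/8/3P4/8/PPP1PPPP/RNBQKBNR'
ORDER BY white_elo DESC
LIMIT 10;
```

The trade-off is write-time maintenance and extra storage in exchange for an ORDER BY that never sorts.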
UPDATE:
Table: GamePosition
Create Table: CREATE TABLE `GamePosition` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `game_id` int(11) NOT NULL,
  `fen` varchar(100) NOT NULL,
  `move_color` char(1) NOT NULL,
  `castling_rights` varchar(4) NOT NULL,
  `en_passant_square` varchar(2) NOT NULL,
  `half_move` smallint(6) NOT NULL,
  `full_move` smallint(6) NOT NULL,
  `timestamp` time DEFAULT NULL COMMENT 'time in video',
  `move` varchar(10) DEFAULT NULL,
  `from_square` varchar(4) DEFAULT NULL,
  `to_square` varchar(4) DEFAULT NULL,
  `parent_position` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `position_ibfk_1` (`game_id`),
  KEY `fen` (`fen`),
  KEY `parent_position` (`parent_position`),
  CONSTRAINT `game_key` FOREIGN KEY (`game_id`) REFERENCES `Game` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
)
Table: Game
Create Table: CREATE TABLE `Game` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `pgn` text NOT NULL,
  `white_id` int(11) NOT NULL,
  `black_id` int(11) NOT NULL,
  `white_elo` int(11) DEFAULT NULL,
  `black_elo` int(11) DEFAULT NULL,
  `result` tinyint(4) DEFAULT NULL COMMENT '0=white, 1=draw 2=black',
  `date` date DEFAULT NULL,
  `event_id` int(11) NOT NULL,
  `eco` varchar(5) NOT NULL,
  PRIMARY KEY (`id`),
  KEY `event_key` (`event_id`),
  KEY `black_key` (`black_id`),
  KEY `white_key` (`white_id`),
  CONSTRAINT `black_key` FOREIGN KEY (`black_id`) REFERENCES `Player` (`id`),
  CONSTRAINT `event_key` FOREIGN KEY (`event_id`) REFERENCES `Event` (`id`),
  CONSTRAINT `white_key` FOREIGN KEY (`white_id`) REFERENCES `Player` (`id`)
)
Table: Player
Create Table: CREATE TABLE `Player` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(100) NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name_unique` (`id`)
)
Actual Query
SELECT GamePosition.id AS position_id, Game.id AS id,
white.name AS white, black.name AS black, Game.id,
white_elo, black_elo, result, date
FROM Game
JOIN GamePosition ON Game.id = game_id
JOIN Player white ON white.id = white_id
JOIN Player black ON black.id = black_id
WHERE fen = '$fen'
ORDER BY white_elo+black_elo DESC
LIMIT $limit
OFFSET $offset;
Kind Regards,
Bjorn
Oddbjørn Sjøgren
(11 rep)
May 13, 2020, 02:43 PM
• Last activity: May 24, 2020, 12:54 AM
Showing page 1 of 20 total questions