
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
0 answers
32 views
Rolling blocking on Compiling a SP
We are experiencing rolling blocking when a specific stored procedure runs. This SP runs multiple times a minute and makes heavy use of temp tables. It doesn't block all the time, but during peak times it's prevalent. Blocking lasts for 30 seconds, the SP times out, then the next in line blocks. When it happens, the blocked process shows as this: waitresource="OBJECT: 5:1371949573:0 [COMPILE]". The object is the SP itself. We don't see any specific statement inside the SP blocking; it's just this compile wait. Looking at the plan cache, this SP has about 2500+ different, single-use plans in cache. We have not been able to determine why this SP is not reusing a plan. Has anyone run into this situation before, and more importantly, how can I find out the reason that it is not getting plan reuse? Is there an Extended Events session that can track executions of this SP, and specifically the reasons why it's compiling so frequently? The only thing I can think of is that maybe the multiple updates, inserts, and deletes against a temp table in this stored procedure could cause this? The SP does make use of multiple instances of OPTION (KEEPFIXED PLAN) as well.
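A minimal Extended Events sketch for tracking why this SP keeps compiling; the session name and file target are placeholders, while the database_id (5) and object_id (1371949573) come from the waitresource above. The sql_statement_recompile event exposes a recompile_cause column that names the reason (statistics changed, temp table changed, SET options, and so on), which should show whether the temp table DML is the trigger.

    CREATE EVENT SESSION [TrackSPRecompiles] ON SERVER
    ADD EVENT sqlserver.sql_statement_recompile
    (
        ACTION (sqlserver.sql_text, sqlserver.session_id)
        -- database_id and object_id taken from the waitresource in the question
        WHERE (source_database_id = 5 AND object_id = 1371949573)
    )
    ADD TARGET package0.event_file (SET filename = N'TrackSPRecompiles')
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
    GO
    ALTER EVENT SESSION [TrackSPRecompiles] ON SERVER STATE = START;

Reading the file afterwards with sys.fn_xe_file_target_read_file should show whether the 2500+ single-use plans line up with one dominant recompile cause.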
DBA Greg14 (265 rep)
Apr 3, 2025, 02:39 PM
3 votes
1 answer
227 views
Warnings NoJoinPredicate
This morning, our SQL Server 2016 started using a high CPU percentage (90%) when it is normally about 30%. Reviewing the plan cache with sp_BlitzCache, I saw a new plan for an old query at the top of CPU time. The full query plan is here. This plan had a warning about no join predicate.

    (@TokenCentro nvarchar(36), @Tarjeta nvarchar(max), @TarjetaAux nvarchar(max))
    SELECT Socios.id, Socios.id AS IdSocio, Centros.id AS IDCentro, Centros.nombre,
           ISNULL(NombreWEB, Centros.Nombre) AS NombreWeb, Socios.Token, Foto,
           ISNULL(Socios.IdIdioma, 1) AS IdIdioma, AplicacionPropia,
           ISNULL(Socios.IdConf_Regional, 1) AS IdUnidad, Socios_Asignar_Centro.AplicacionWEB,
           CASE WHEN Socios.Email = '' THEN 1 ELSE CASE WHEN Socios.Email IS NULL THEN 1 ELSE 0 END END AS ConEmail,
           Centros.HorarioVisible, Centros.PerfilApp, TactilEntrenoSocio, Centros.ColorKiosco,
           Conf_Regional.PrimerDia, ISNULL(Centros.IdGrupoApp, 0) AS IdGrupoApp,
           Centros.requiere_LOPD, ISNULL(Centros.LOPD, '') AS LOPD, Socios.aceptada_lopd,
           Centros.IntegracionDieta, Empresas.Chat, Centros.field, Centros.valoracionApp,
           Empresas.p_Premios, Centros.Reservas, ISNULL(CustomCss, '') AS CustomCss,
           Socios_Asignar_Centro.PuedeReservar, zonasHorarias.Zona, zonasHorarias.ZonaIana,
           ISNULL(Socios.TokenExten, '') AS TokenExtern, Centros.field1, Centros.tipoServicioPush,
           Socios.Nombre AS NombreSocio, Socios.Apellidos AS ApellidosSocio, Centros_Opciones.forzarLogin
    FROM Socios
    INNER JOIN Socios_Asignar_Centro ON Socios_Asignar_Centro.IdSocio = Socios.Id
    INNER JOIN Conf_Regional ON Conf_Regional.Id = Socios.IdConf_Regional
    INNER JOIN Centros ON Centros.Id = Socios_Asignar_Centro.IdCentro AND Centros.FechaBaja IS NULL
    INNER JOIN Empresas ON Empresas.Id = Centros.IdEmpresa
    INNER JOIN zonasHorarias ON Socios.idzonahoraria = zonasHorarias.id
    INNER JOIN Centros_Opciones ON Centros_Opciones.IdCentro = Centros.Id
    INNER JOIN Centros_Tactiles ON Centros_Tactiles.IdCentro = Centros.Id
    WHERE Socios.tarjeta IS NOT NULL
      AND Socios.tarjeta <> ''
      AND Socios.tarjeta <> '0'
      AND (Socios.Tarjeta = @Tarjeta OR Socios.Tarjeta = @TarjetaExtend)
      AND Centros_Tactiles.Token = @Token
      AND (Socios_Asignar_Centro.FechaBaja IS NULL OR Socios_Asignar_Centro.FechaBaja > CONVERT(DATE, GETUTCDATE()))
    ORDER BY IdCentro, IdSocio DESC

After changing the length of the parameter from nvarchar(max) to nvarchar(50), the new plan was normal again. However, I cannot see where the problem is. Any ideas? (Screenshot: the no join predicate warning in the plan.)
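A hedged helper for catching this situation if it comes back: scan the plan cache for plans that carry the NoJoinPredicate warning and rank them by average CPU. This is a general sketch (it reads every cached plan, so it can be heavy on a busy server).

    WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT TOP (20)
           qs.total_worker_time / qs.execution_count AS avg_cpu_us,
           qs.execution_count,
           qp.query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    WHERE qp.query_plan.exist('//Warnings[@NoJoinPredicate="true" or @NoJoinPredicate="1"]') = 1
    ORDER BY avg_cpu_us DESC;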
Serafín (45 rep)
May 12, 2017, 02:57 PM • Last activity: Dec 22, 2024, 08:30 AM
15 votes
1 answer
746 views
What could cause a SQL Server non-cached temporary table to NOT trigger a recompile when a large amount of rows have changed?
I have observed (and reproduced) the following scenario with SQL Server 2022.

### The pattern in use

* Code is executed via sp_executesql (no stored procedure is involved)
* The first query selects data into a temporary table
* A DDL statement then creates a clustered index on the temporary table. The temporary table is definitely NOT cacheable -- first of all this isn't a module (sproc or function), but also we're creating an index after the temp table is populated. So I would not expect statistics left behind on a cached temporary object to be involved at all here.
* A query selects data from the temporary table. This query gets FULL optimization each time (not a TRIVIAL plan)

This batch can run for both small and larger datasets, so that temp table can have 1 row in it or many thousands of rows. This behavior normally occurs on a readable secondary. There is no writable query store and no automatic plan forcing as a factor. I have verified that I can reproduce the behavior against the primary replica as well. (Automatic plan correction was told to ignore the query and I confirmed no plan forcing on primary when reproduced.)

### Repro script

* [Setup script](https://gist.github.com/LitKnd/83130120683e669669c048a5e74c8145#file-setuprepro-sql) - I ran this on SQL Server 2022 CU15. This turns off query store and uses compat level 130.
* [Repro query](https://gist.github.com/LitKnd/83130120683e669669c048a5e74c8145#file-repro-sql) - I've been running this via SQL Query Stress so I can easily run it concurrently on one or more threads
* [Plan Generation Num and temp tables](https://gist.github.com/LitKnd/83130120683e669669c048a5e74c8145#file-plan_generation_num_and_temp_tables-sql) - A very simple query to observe the plan_generation_num in sys.dm_exec_query_stats ("A sequence number that can be used to distinguish between instances of plans after a recompile.") and the current list of temp tables

### What normally happens -- and the behavior I expect

Normally, changing large numbers of rows in the temporary table between query executions automatically causes recompiles, and I'll see that the query selecting the data from the temporary table has a row estimate matching the rows in the temp table. When this works as expected, performance is fine.

*With the repro query*: If I clear the plan cache, then run the repro query 40 iterations on a single thread in SQL Query Stress, plan_generation_num ends up being 82. When sampling query plans with sp_WhoIsActive, row estimates querying the temp table match the number of rows in the temp table, as expected.

### What sometimes happens -- and looks like a bug to me

On rare occasions, I see that a plan is in use where there is a 1-row estimate plan for the temp table, but a very large number of rows are actually in the temp table. A LOT of rows have changed, but it didn't automatically recompile. (Screenshot: a clustered index scan of a temporary table with a 1-row estimate, but 4.2 million rows have been scanned from it so far.) This leads to very slow performance because the low-estimate plan decides to use a nested loop without prefetching, which makes it a CPU burner.

*With the repro query*: If I clear the plan cache, then run the repro query 20 iterations on 2 threads in SQL Query Stress, plan_generation_num ends up less than 82 -- it varies by run, but might be 72 or 59, indicating fewer recompiles. While this is running, I can also sample occasions with sp_WhoIsActive where there is a single rowcount estimated but many more rows in the temporary table.
(Screenshot: not the query plan I expected -- why only a 1-row estimate?)

### I can only reproduce this when running repro code on multiple concurrent sessions

I have not been able to reproduce this behavior with a single session in SQL Server. The only way I can reproduce this is to set up a block of code that:

* Executes at least 1 iteration of the sp_executesql statement that has 1 row in the temp table
* Then executes 1 iteration of the sp_executesql statement that has a lot more rows in the temp table

If I run this in a single session, I have not been able to reproduce the problem. But if I run this concurrently in four or five sessions, I'll be able to occasionally get the "THAT DIDN'T RECOMPILE LIKE IT SHOULD HAVE" issue to pop up. (Note: using SQL Query Stress, I can repro this with only 2 sessions/iterations.) This feels like a bug to me; I'm curious if anyone else has seen it. Recompile and stats behavior with temp tables is super complex though, so there may be some nuance that I'm missing with how this works with non-cacheable temp tables. PS: I do think cacheable temp tables are generally better. I'm just trying to figure out why this behavior would happen in a non-cacheable temp table scenario at this point.

### Workarounds

After adding OPTION (RECOMPILE) to the query, I can no longer reproduce the reuse of the 1-row plan querying the temp table. This is sufficient, but I'm puzzled why it is necessary.
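A hedged sketch of the plan_generation_num check the third gist describes; the temp table name in the filter is a placeholder for whatever the repro batch actually creates.

    SELECT qs.plan_generation_num, qs.execution_count, qs.creation_time, st.text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.text LIKE N'%#ReproTempTable%'   -- placeholder filter for the repro batch
    ORDER BY qs.plan_generation_num DESC;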
Kendra Little (938 rep)
Dec 6, 2024, 03:32 PM • Last activity: Dec 10, 2024, 09:00 AM
0 votes
1 answer
185 views
Why doesn't DISCARD PLANS/ALL work for queries inside PLPGSQL functions?
I'm experiencing unexpected behavior with query plan caching in PostgreSQL when using different function types. I have a dashboard query that exhibits significantly different performance characteristics based on how it's implemented:

- Raw SQL query: ~200ms execution time
- SQL function: ~2s execution time
- PL/pgSQL function: first 5 calls ~200ms, subsequent calls ~2s

The most puzzling part is that DISCARD PLANS/ALL doesn't help with the PL/pgSQL function's performance degradation, even though it should clear the plan cache. Each implementation uses the same SQL query, but in the functions the query is parametrized. For example, in the functions there is a condition like this: (_types is null or type = any (_types)), which in the plain SQL tests translates to (null is null or type = any (null::int[])).

Environment:

- Database functions are used as a database interface for a Node REST API
- The API uses connection pooling in session mode (each query has its own session)
- After each query, the connection is returned to the pool and DISCARD ALL is executed
- PostgreSQL v13.13 running on CentOS 8

Questions:

- Why doesn't DISCARD PLANS/ALL help with the PL/pgSQL function's performance? It seems that it does not clear session state completely.
- Is there a way to maintain the original performance while keeping the query inside a PL/pgSQL function?
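The "first five calls fast, then slow" pattern matches PL/pgSQL's switch from custom to generic plans after roughly five executions of a cached statement. A hedged sketch for PostgreSQL 12 and later (so it applies to v13.13): force custom plans either per session or scoped to the function, whose name and signature below are placeholders.

    -- Force re-planning with actual parameter values for the whole session...
    SET plan_cache_mode = force_custom_plan;

    -- ...or scope it to the one function (hypothetical name/signature)
    ALTER FUNCTION dashboard_summary(int[])
        SET plan_cache_mode = force_custom_plan;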
DavidP (203 rep)
Nov 6, 2024, 12:57 PM • Last activity: Nov 6, 2024, 01:19 PM
0 votes
2 answers
112 views
why sql server is not reusing plan in my adhoc query?
It seems my understanding of query plans is not correct. If I run this query for the first time and check the query plan cache, I see one plan cached.

    declare @codes varchar(max)='BHVD7,BHVDE,BHVDF'

    SELECT OFST010020 AS [OFST010020], OFST730010 AS [OFST730010], Identifier AS [Identifier]
    FROM STATIC_FUNDSHARECLASSUNIT_NEXT as f WITH(NOLOCK)
    Inner join STATIC_FUNDSHARECLASSUNITEXTENSION_NEXT as fex WITH(NOLOCK)
        on fex.Identifier2 = f.Identifier
    WHERE OFST900174 in (select value from string_split(@codes,','))

If I just change the codes parameter and run the query again in the same SSMS window, for example:

    declare @codes varchar(max)='BHVD7,BHVDF'

    SELECT OFST010020 AS [OFST010020], OFST730010 AS [OFST730010], Identifier AS [Identifier]
    FROM STATIC_FUNDSHARECLASSUNIT_NEXT as f WITH(NOLOCK)
    Inner join STATIC_FUNDSHARECLASSUNITEXTENSION_NEXT as fex WITH(NOLOCK)
        on fex.Identifier2 = f.Identifier
    WHERE OFST900174 in (select value from string_split(@codes,','))

I thought the plan cache would have 1 row and the useCount would be 2. But in the query plan cache, I actually saw two plans. Each plan's SQL text is actually the full text (starting with declare @codes). Am I doing something wrong, or does an ad hoc query not really reuse a plan even when a parameter is involved?
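A hedged sketch of getting reuse here: an ad hoc batch is cached by the text of the entire batch, so the different literal inside the DECLARE makes each batch its own cache entry. Passing the list through sp_executesql as a real parameter keeps the batch text identical, so the usecounts climbs instead (query adapted from the question, NOLOCK hints dropped).

    EXEC sp_executesql
        N'SELECT OFST010020 AS [OFST010020], OFST730010 AS [OFST730010], Identifier AS [Identifier]
          FROM STATIC_FUNDSHARECLASSUNIT_NEXT AS f
          INNER JOIN STATIC_FUNDSHARECLASSUNITEXTENSION_NEXT AS fex
                  ON fex.Identifier2 = f.Identifier
          WHERE OFST900174 IN (SELECT value FROM string_split(@codes, '',''))',
        N'@codes varchar(max)',
        @codes = 'BHVD7,BHVDF';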
jerry xu (63 rep)
Nov 5, 2024, 09:38 PM • Last activity: Nov 6, 2024, 03:27 AM
-1 votes
1 answer
65 views
What is the purpose of parent_plan_handle in sys.dm_exec_cached_plans?
I am attempting to prove whether the plan cache is caching statements individually within multiple stored procedures with identical statements. I thought I would be able to use parent_plan_handle to see this relationship; however, it is always returning NULL. I've checked this on both SQL 2019 and 2022. When I consult the literature, the column is strangely absent. The question: what is the purpose of parent_plan_handle in sys.dm_exec_cached_plans?
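A hedged sketch of the check being described, for anyone who wants to reproduce the observation that the column never seems to be populated:

    SELECT cp.cacheobjtype, cp.objtype, cp.usecounts,
           cp.plan_handle, cp.parent_plan_handle
    FROM sys.dm_exec_cached_plans AS cp
    WHERE cp.parent_plan_handle IS NOT NULL;   -- returns no rows in the tests described above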
WiseTechGlobal CTO (198 rep)
Jul 13, 2024, 08:06 AM • Last activity: Jul 13, 2024, 05:51 PM
0 votes
2 answers
239 views
PostgreSQL plan cache and variable number of arguments
Query:

    select * from table where col in ( values (?), (?), (?) )

The query text is generated, and the number of (?) placeholders is variable, up to 10000. It works just fine; I'm just wondering whether Postgres caches the same plan instance for different numbers of args. In other words: what is the _key_ for the plan cache in Postgres? And how does it apply to the query above?
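For context, and hedged because the details vary by how the query is sent: PostgreSQL has no shared, server-wide plan cache. Plans are cached per session, and only for prepared statements and statements inside PL/pgSQL, keyed by the prepared statement itself, so a generated text with a different number of (?) placeholders is a different statement and cannot share a cached plan. Passing a single array parameter keeps the statement text stable; table and column names below are placeholders.

    PREPARE rows_by_col(int[]) AS
        SELECT * FROM my_table WHERE col = ANY($1);

    EXECUTE rows_by_col(ARRAY[1, 2, 3]);
    EXECUTE rows_by_col(ARRAY[4, 5, 6, 7]);   -- same prepared statement, same cached plan entry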
Mikhail Boyarsky (103 rep)
Feb 23, 2024, 12:16 AM • Last activity: Feb 24, 2024, 01:34 PM
1 vote
1 answer
90 views
Can the plan cache be used to diagnose a lack of RAM?
I have learned a few good ways to diagnose a lack of RAM on a SQL Server and I am wondering if I should add the plan cache to this toolbox. A sign of lacking RAM is that the plan cache doesn't live very long. You can easily find the oldest cached plans in sys.dm_exec_query_stats, and you can find how much RAM the plan cache is using from SELECT * FROM sys.dm_os_memory_clerks WHERE TYPE = 'CACHESTORE_SQLCP'. **Can either of these be turned into actionable information about your system lacking RAM?** My suspicion is that neither view provides useful metrics. If 2% or 20% of my server's memory is taken up by the plan cache, then so what? Similarly, if my oldest cached plan is a month old, then so what? For all I know, it could just be a very frequently used plan.
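A hedged sketch that rolls both checks into numbers that are easier to trend over time; CACHESTORE_OBJCP is added because CACHESTORE_SQLCP only covers ad hoc and prepared plans, not object plans. A single reading proves little, but a plan cache that keeps shrinking and whose oldest plan is always only minutes old is the pattern that suggests memory pressure.

    -- How much memory the plan cache stores are holding
    SELECT type, SUM(pages_kb) / 1024 AS cache_mb
    FROM sys.dm_os_memory_clerks
    WHERE type IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP')
    GROUP BY type;

    -- Age of the oldest plan that still has runtime stats in cache
    SELECT MIN(creation_time) AS oldest_cached_plan, COUNT(*) AS cached_statements
    FROM sys.dm_exec_query_stats;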
J. Mini (1237 rep)
Feb 22, 2024, 12:08 AM • Last activity: Feb 24, 2024, 07:21 AM
2 votes
1 answer
645 views
What are the advantages and disadvantages of inspecting the Plan Cache rather than the Query Store?
The Plan Cache and the Query Store are not the same, even [for the same query](https://www.scarydba.com/2018/11/19/query-store-and-plan-cache-plans-compared/) . When hunting for performance or monitoring information on a specific query or set of related queries, what are the advantages/disadvantages of each? The general impression that my online research indicates is that the Query Store can be queried faster than the Plan Cache (I'm not sure why) and that its entries tend to last much longer (this is configurable), but I have found nothing said about situations where the Plan Cache is superior to the Query Store. Assume that I do not care about the features that SQL Server 2017 and 2022 introduced that use Query Store for automatic performance tuning. Instead assume that I am comparing the Query Store and Plan Cache for the purposes of tasks that both can do.
J. Mini (1237 rep)
Jan 13, 2024, 08:45 PM • Last activity: Feb 1, 2024, 07:00 PM
0 votes
2 answers
175 views
A query I ran yesterday isn't in the plan cache, but older and newer ones are. Why?
Yesterday, I ran a big ad-hoc query that I didn't save. Today, I want to see it again. I thought that I'd hunt through the plan cache. To my surprise, I can find older queries and newer queries, but not the one that I ran. Why might this be? I know that I should have Query Store turned on, but I haven't done it yet.
J. Mini (1237 rep)
Jan 24, 2024, 11:01 AM • Last activity: Jan 24, 2024, 05:25 PM
0 votes
1 answer
107 views
Scalar Functions Causing Plan Cache Bloat
Our plan cache is showing a large number of duplicate entries for a scalar function that is called as per the code below:

    SELECT dbo.fnSomeFunction('12345678')

I have done some testing and found that it appears a new plan is compiled for each call, rather than a single plan with sniffed parameters similar to a stored procedure. Create an equivalent stored proc and scalar function:

    USE StackOverflow2010
    GO
    CREATE OR ALTER PROC dbo.sp_Test ( @id INT )
    AS
    SELECT Body FROM Posts WHERE Id = @id
    GO
    CREATE OR ALTER FUNCTION dbo.sfn_Test ( @id INT )
    RETURNS NVARCHAR(MAX)
    AS
    BEGIN
        DECLARE @body NVARCHAR(MAX)
        SELECT @body = Body FROM Posts WHERE Id = @id
        RETURN @body
    END

Clear the plan cache and call both with different parameters:

    USE StackOverflow2010
    DBCC FREEPROCCACHE
    EXEC dbo.sp_test @id = 4
    GO
    EXEC dbo.sp_test @id = 5
    GO
    EXEC dbo.sp_test @id = 6
    GO
    SELECT dbo.sfn_Test(4)
    GO
    SELECT dbo.sfn_Test(5)
    GO
    SELECT dbo.sfn_Test(6)
    GO

Run a query to view our plan cache:

    SELECT d.name, t.text AS TSQL_Text, s.creation_time, s.execution_count, p.query_plan
    FROM sys.dm_exec_query_stats s
    CROSS APPLY sys.dm_exec_sql_text(s.plan_handle) t
    CROSS APPLY sys.dm_exec_query_plan(s.plan_handle) p
    JOIN sys.databases d ON t.dbid = d.database_id
    ORDER BY TSQL_Text;

The results (screenshot of the plan cache query output): we can see that SQL Server has reused the stored procedure plan, but each scalar function call gets its own plan. Is there a way to reduce plan cache bloat as a result of this? The only option I can think of is to enable Optimize for Ad-Hoc Workloads. As for why SQL Server behaves this way, I assume this is just yet another example of why not to use scalar functions?
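A hedged sketch of the mitigation the question already names: with this server option enabled, the first occurrence of each single-use batch stores only a small compiled plan stub instead of the full plan, which keeps this kind of bloat from consuming much cache memory (it does not stop the per-call compiles themselves).

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;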
SE1986 (2182 rep)
Oct 4, 2023, 01:52 PM • Last activity: Oct 4, 2023, 02:44 PM
3 votes
0 answers
1731 views
how to minimize the effect of updating statistics on the plan cache?
**The issue I am trying to address:** After updating statistics there are slowdowns in running many of our processes here. Then, following the scary dba on this link below: http://www.sqlservercentral.com/Forums/Topic1310374-1550-1.aspx

> If updating the statistics slows down the performance of the query, it's likely because a new execution plan was created based on the updated statistics.
>
> You'll need to look at the plans before & after the statistics update to understand exactly why.
>
> Usually, in most circumstances, the reverse is true, out of date statistics leads to poor performance.
>
> But sometimes this does happen. The exact solution depends on what's causing the problem within your query and based on what's been posted so far, I can't be sure. It might simply be bad parameter sniffing.

Regarding When To Update Statistics? and the effects of doing this:

> Flushing a plan from cache is determined by memory pressure.
>
> Statistics updates cause plan recompilations

from this link: SQL Server: Do Statistics Updates cause query plans to be flushed? Should they? and from this link: Execution Plan Caching and Reuse

**Recompiling Execution Plans** The conditions that invalidate an execution plan include the following:

> 1 - Changes made to a table or view referenced by the query (ALTER TABLE and ALTER VIEW).
>
> 2 - Changes made to a single procedure, which would drop all plans for that procedure from the cache (ALTER PROCEDURE).
>
> 3 - Changes to any indexes used by the execution plan.
>
> 4 - Updates on statistics used by the execution plan, generated either explicitly from a statement, such as UPDATE STATISTICS, or generated automatically.
>
> 5 - Dropping an index used by the execution plan.
>
> 6 - An explicit call to sp_recompile.
>
> 7 - Large numbers of changes to keys (generated by INSERT or DELETE statements from other users that modify a table referenced by the query).
>
> 8 - For tables with triggers, if the number of rows in the inserted or deleted tables grows significantly.
>
> 9 - Executing a stored procedure using the WITH RECOMPILE option.

What I would like to achieve is, **BEFORE** I update the statistics of a table, I go through the plan cache and I find **ALL** the procedures that are related to myTable. I can find all stored procedures that are related to a table, AND all tables that are related to a stored procedure, using the script below:

    ---------------------------------------------------
    -- script to find:
    -- all the stored procedures that touch table @table
    -- all tables that are touched by @sp_name stored procedure
    -- when setting null to parameters show all
    -- marcelo miorelli 15-dec-2014
    ---------------------------------------------------
    print @@servername
    print db_name()

    DECLARE @sp_name NVARCHAR(128) = NULL -- 'Custom_CSLA_GetCategoryHierarchyAsXML'
           ,@table   NVARCHAR(128) = 'SizeGuide' --NULL--

    select [Table Name] = schema_name(o.schema_id) + '.' + o.Name,
           [Found In]   = sp.Name,
           sp.type_desc
    from sys.objects o
    inner join sys.sql_expression_dependencies sd on o.object_id = sd.referenced_id
    inner join sys.objects sp on sd.referencing_id = sp.object_id
    --and sp.type in ('P', 'FN')
    where ((@table IS NULL) OR (o.name = @table))
      AND ((@sp_name IS NULL) OR (sp.Name = @sp_name))
    order by sp.Name

The example above shows me all the stored procedures, and views, that touch a table called *'SizeGuide'*, on different schemas, as you can see in the picture (screenshot of the query results). Then, before I update the stats of table *'SizeGuide'*, I generate a query plan/execution plan for all procedures that touch that table AND had a plan in cache before I did the update stats. I would do this when the server is less busy, so the next day I have a plan cache exactly like I had PRIOR to the update stats, but all recompiled (only for the plans that were in the cache). Is this question clear? Basically, what I would like to do is:

1 - find all the query plans I have for stored procedures related to myTable
2 - update the statistics of myTable
3 - re-generate all the query plans I had before the update stats

Next day: no RECOMPILES.
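A hedged sketch for step 1, joining the same dependency information to the plan cache so it lists only the modules that currently have a plan cached, together with when each plan was created; the schema here is assumed, so adjust it to the schemas shown in the results. Running it before and after UPDATE STATISTICS shows exactly which plans were recompiled.

    SELECT DISTINCT
           OBJECT_SCHEMA_NAME(st.objectid, st.dbid) AS [schema],
           OBJECT_NAME(st.objectid, st.dbid)        AS [module],
           qs.creation_time
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    INNER JOIN sys.sql_expression_dependencies AS sd
            ON sd.referencing_id = st.objectid
    WHERE sd.referenced_id = OBJECT_ID(N'dbo.SizeGuide');   -- schema assumed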
Marcello Miorelli (17274 rep)
Sep 24, 2015, 12:44 PM • Last activity: Jul 29, 2023, 06:53 AM
6 votes
1 answer
2873 views
Monitoring the Postgres query parser
As I understand it, when you send a query to Postgres like: SELECT id FROM table1 WHERE name = $1; Postgres will create a query plan. The plan will be cached for the same query in the same session. But if you create a function: CREATE OR REPLACE FUNCTION get_table1 (parname text) RETURNS bigint LANGUAGE plpgsql AS $$BEGIN RETURN (SELECT id FROM table1 where name = parname); END$$; The query plan will be cached for all future sessions. I'd like to verify this theory by checking how often the query analyzer is invoked. Is there a way to inspect the number of times Postgres parses a query? If possible, I'd like to monitor things like #parses per second and the min/max/avg parse duration.
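There is no built-in "parses per second" counter, but a hedged sketch of one way to watch the parser directly: the developer settings below (superuser-only) make the backend write raw parser and planner timings to the server log for each statement, which can then be aggregated from the log; table1 follows the example above.

    SET log_parser_stats = on;    -- logs parser timing for each statement in this session
    SET log_planner_stats = on;   -- same idea for the planner
    SELECT id FROM table1 WHERE name = 'x';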
Andomar (3515 rep)
Dec 19, 2014, 02:50 PM • Last activity: Jul 17, 2023, 10:35 PM
1 vote
1 answer
115 views
Which SQL Server component is in charge with checking cached execution plans for a given query?
I understand that whenever SQL Server receives a query, it looks in the plan cache for any matching execution plans for that query. However, most of the resources that I encountered don't specify which component is in charge of this lookup. This article suggests that it's the Buffer Manager, but I failed to find any evidence for this in Microsoft's Query processing architecture guide. On the other hand, the speaker in this presentation (around 42:43) seems to suggest that it's the Cmd Parser. So, which one is it?
Mehdi Charife (131 rep)
May 12, 2023, 11:23 AM • Last activity: May 12, 2023, 07:31 PM
0 votes
1 answer
716 views
Clearing plan cache improves performance - bad plans in cache?
I have run into this situation a number of times on a VM with SQL Server Express 12.0.6205.1 (SQL Server 2014 SP3 CU1) Using SSMS, I'll run a stored procedure a few times. It consistently takes, say 4 seconds to run. ("Wait time on server replies" in "Client Statistics"). Then I clear the plan cache: DBCC FREEPROCCACHE Now the same stored procedure will consistently take 400 ms to run. This VM is under VERY light load. It's just me poking at it every few days. This VM is a copy of a production server that is having performance problems. So it seems like maybe bad plans are getting stuck in the plan cache? I would like the calls to be consistently fast, obviously. What is the problem and how to I fix it?
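A hedged sketch of a narrower first step than clearing the whole cache: look at the cached plan and runtime spread for just this procedure, so you can tell whether the slow runs are reusing one old plan (classic parameter sniffing) before evicting anything; dbo.MyProc is a placeholder for the procedure's real name. If one plan shows a huge gap between min and max elapsed time, evicting only that plan_handle with DBCC FREEPROCCACHE is a more targeted test than flushing everything.

    SELECT ps.cached_time, ps.execution_count,
           ps.total_elapsed_time / ps.execution_count AS avg_elapsed_us,
           ps.min_elapsed_time, ps.max_elapsed_time,
           ps.plan_handle, qp.query_plan
    FROM sys.dm_exec_procedure_stats AS ps
    CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
    WHERE ps.database_id = DB_ID()
      AND ps.object_id = OBJECT_ID(N'dbo.MyProc');   -- placeholder procedure name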
mhenry1384 (156 rep)
Jun 23, 2020, 08:43 PM • Last activity: Apr 27, 2023, 11:03 AM
10 votes
1 answer
6657 views
PL/pgSQL issues when function used twice (caching problem?)
I am facing an absolutely weird problem that feels much more like a Postgres bug than an algorithm problem. I have this function:

    CREATE FUNCTION sp_connect(mail character varying, passwd character varying, role character varying)
    RETURNS json
    LANGUAGE plpgsql STABLE
    AS $$
    DECLARE
        user_info record;
    BEGIN
        IF role = 'Role1' THEN
            SELECT u.id, r.name INTO user_info
            FROM users u
            INNER JOIN users_roles ur ON ur.user_id = u.id
            INNER JOIN roles r ON ur.role_id = r.id
            WHERE u.email = mail
              AND u.password = encode(digest(CONCAT(passwd, u.password_salt), 'sha512'), 'hex')
              AND r.name = 'Role1';
        ELSIF role = 'Role2' THEN
            SELECT h.id, 'Role1' AS name INTO user_info
            FROM history h
            WHERE h.email = mail
              AND h.password = encode(digest(CONCAT(passwd, h.password_salt), 'sha512'), 'hex');
        ELSE
            RAISE 'USER_NOT_FOUND';
        END IF;

        IF NOT FOUND THEN
            RAISE 'USER_NOT_FOUND';
        ELSE
            RETURN row_to_json(row) FROM (SELECT user_info.id AS id, user_info.name AS role) row;
        END IF;
    END;
    $$;

The problem I'm facing is that when I use this function to log in with a Role1 user and then use it with a Role2 user, I get this error message:

    type of parameter 7 (character varying) does not match that when preparing the plan (unknown)

Which is... well, I just don't understand where it comes from. If you wipe the database and change the login order (i.e. Role2 then Role1), this time Role1 gets the error. Strange issue, strange solutions... If I just use ALTER FUNCTION sp_connect without modifying anything inside the function, then magically the two roles can log in without any problem. I also tried this solution:

    IF NOT FOUND THEN
        RAISE 'USER_NOT_FOUND';
    ELSE
        IF role = 'Seeker' THEN
            RETURN row_to_json(row) FROM (SELECT user_info.id AS id, user_info.name AS role) row;
        ELSE
            RETURN row_to_json(row) FROM (SELECT user_info.id AS id, user_info.name AS role) row;
        END IF;
    END IF;

By adding an IF and ELSE that is absolutely useless and uses the same RETURN clause, this does not trigger any error. I know DBA StackExchange is not for developers, but this kind of problem seems to be more like a caching problem or whatever. Can somebody tell me if I am doing something wrong with PostgreSQL functions? Or where I may get help with this weird problem?
Eric Ly (1222 rep)
Jan 21, 2016, 05:57 PM • Last activity: Apr 18, 2023, 02:24 AM
3 votes
0 answers
50 views
Does sys.dm_exec_query_stats give information for "Compiled Plan Stub"?
I am investigating the SQL Server behaviour with the “Optimize for Ad hoc Workloads” option set to “True”; in particular, I am trying to understand which information we have in cache after the first execution of an ad hoc query, when consequently the stub is put in cache. I noticed that as soon as the query is executed the first time and the stub is in cache, we are able to get query statistics through sys.dm_exec_query_stats, but only for a limited period of time. On a production environment (SQL Server 2019 (RTM)) it seems that sys.dm_exec_query_stats maintains this information for less than one hour. Here is the query I ran:

    select sysdatetime() as [SysDateTime],
           case when qs.sql_handle is not null then 'Query Stats are here' end as [Query Stats are here],
           count(*) as Cnt,
           min(qs.last_execution_time) as MinLastExecTime
    from sys.dm_exec_cached_plans cp
    left join sys.dm_exec_query_stats qs on qs.plan_handle = cp.plan_handle
    where cp.objtype = 'Adhoc'
      and cp.cacheobjtype = 'Compiled Plan Stub'
    group by case when qs.sql_handle is not null then 'Query Stats are here' end
    go

And this is the result (screenshot of the query output). I am not able to find a good explanation. Is there anyone that can help?
Alessandro Mortola (83 rep)
Apr 11, 2023, 05:34 AM • Last activity: Apr 12, 2023, 02:54 PM
3 votes
1 answer
1738 views
Trace flag 2861 and what a 'zero-cost' plan actually means
I'm trying to get my head around trace flag 2861 and what it actually does with trivial queries. The blurb says:

> SQL Server typically won't cache plans for these trivial queries because the cost of caching the plan is higher than the cost of generating a new plan for such a simple query.

That seems to be patently untrue, since every 'trivial' query I run seems to get cached. So I'm wondering what the point of 2861 is, unless I'm misunderstanding what a trivial plan actually is. When I query the cached plan and it says it's ad hoc and trivial, then I have no reason to doubt it. Hoping someone can enlighten me.
YaHozna (357 rep)
May 26, 2016, 08:12 PM • Last activity: Mar 24, 2023, 06:35 PM
0 votes
1 answer
91 views
Can the plan-cache and/or statistics be copied from one instance to another
I restore production databases on top of a staging instance that I use for testing. Is it possible to also copy the plan-cache and/or statistics to the staging instance, to better simulate the production environment for testing purposes?
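The plan cache itself is in-memory, instance-scoped state and cannot be restored onto another instance, while statistics live in the data files and therefore already come across with the restore. A hedged sketch of the closest substitute for the plan cache: enable Query Store (SQL Server 2016 and later) on the production database, so captured plans and runtime stats are stored inside the database and travel with the backup; the database name is a placeholder.

    ALTER DATABASE [ProductionDb]
    SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);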
Dave Sims (333 rep)
May 8, 2017, 03:06 PM • Last activity: Feb 14, 2023, 06:40 PM
0 votes
0 answers
833 views
Force Microsoft SQL to recreate and use Specific execution plan?
I have a stored procedure that was working really well for about 3 weeks, then all of a sudden it basically ground to a complete stop. This was, I believe, due to the server creating an additional query plan that was far less efficient. (Screenshot: the good plan and the bad plan side by side.) For whatever reason, both of these plans were flushed from the server. Is there a way to get the server to recreate the good plan? I have a copy of the plan (the graphical format), the query hash and the plan hash, but wasn't sure if the plan hash still exists anywhere after being flushed. "Good" plan: https://www.brentozar.com/pastetheplan/?id=S1TLgATDo "Bad" plan: https://www.brentozar.com/pastetheplan/?id=Hy5cJATwi
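A hedged sketch of one way to pin the saved plan back on, assuming the .sqlplan file is available: a saved graphical plan contains the full ShowPlanXML, and that XML can be attached to the statement with a plan guide and a USE PLAN hint (the plan hash alone cannot bring a plan back once it has left the cache). The statement text, module name, and plan XML below are placeholders.

    EXEC sp_create_plan_guide
        @name            = N'pg_force_good_plan',
        @stmt            = N'SELECT ... FROM dbo.SomeTable WHERE ...',  -- exact statement text from the procedure
        @type            = N'OBJECT',
        @module_or_batch = N'dbo.MyStoredProcedure',                    -- placeholder procedure name
        @params          = NULL,
        @hints           = N'OPTION (USE PLAN N''<ShowPlanXML ...>'')'; -- paste the XML from the saved .sqlplan, with embedded quotes doubled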
Praxiom (101 rep)
Dec 7, 2022, 07:55 AM • Last activity: Dec 7, 2022, 11:20 AM