Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
214
views
Oracle 12cR2 RAC - ORA-06508
***Environment:***
- ***DB Version:*** Oracle Database 12cR2 EE on Exadata
- ***RAC:*** True
- ***Problem:*** ORA-06508 on procedure.
> One of our users had the DBA role; we no longer want such a broad
> privilege to be granted. When we revoked the DBA role from this user,
> all procedures, packages and triggers became invalid. This was something
> we expected at that moment and we accepted it as normal. When we
> compiled all of them, there was no problem, and we reassigned the DBA
> role to this user in order to postpone this work until later.
>
> The problem starts from this point. We get an error when one of the
> procedures tries to run. But the error does not occur consistently; for
> example, it ran correctly for the last 26 hours and then hit the error.
> When we drop/create or compile the procedure, the problem is solved.
> However, the strange part is that when this problem occurs, the
> procedure and its dependencies all appear to be valid. The error is as
> follows:
> Error raised in: PROCEDURE_NAME at line xxxx - ORA-06508: PL/SQL:
> could not find program unit being called
***Line-xxxx:***
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;
        R_CODE := 400;
        R_MESSAGE := 'Error raised in: ' || $$plsql_unit || ' at line ' || $$plsql_line || ' - ' || sqlerrm;
I could not find a suitable solution for this scenario on Oracle Support or on dba.stackexchange and I am still researching. Anyone have a good idea?
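For reference, a minimal sketch (not from the original post; APP_SCHEMA is a placeholder) of how invalid dependent objects can be listed and recompiled in bulk after a privilege change:
-- List anything left invalid in the affected schema
SELECT owner, object_name, object_type, status
FROM dba_objects
WHERE owner = 'APP_SCHEMA'
AND status = 'INVALID';
-- Recompile all invalid objects in that schema
EXEC UTL_RECOMP.RECOMP_SERIAL('APP_SCHEMA');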
Best Regards,
jrdba123
(29 rep)
Nov 20, 2021, 06:36 PM
• Last activity: Jun 19, 2025, 07:02 AM
1
votes
0
answers
32
views
Rolling blocking on Compiling a SP
We are experiencing rolling blocking when a specific stored procedure runs. This SP runs multiple times a minute and makes heavy use of temp tables. It doesn't block all the time, but during peak times it's prevalent. Blocking lasts for 30 seconds, the SP times out, then the next in line blocks.
When it happens the blocked process shows as this:
waitresource="OBJECT: 5:1371949573:0 [COMPILE]"
The object is the SP itself. We don't see any specific statement inside the SP blocking; it's just this compile wait.
Looking at the plan cache, this SP has about 2500+ different, single-use plans in cache.
We have not been able to determine why this SP is not reusing a plan.
Has anyone run into this situation before, and more importantly, how can I find out why it is not getting plan reuse? Is there an Extended Events session that can track executions of this SP, and specifically the reasons why it's compiling so frequently?
The only thing I can think of is that a temp table in this stored procedure gets multiple updates, inserts, and deletes, and maybe that causes this? The SP also makes use of multiple instances of OPTION (KEEPFIXED PLAN).
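For reference, a minimal sketch (session and procedure names are placeholders) of an Extended Events session that tracks recompiles of a single procedure; the captured sql_statement_recompile events include a recompile_cause column showing why each recompile happened:
CREATE EVENT SESSION [track_sp_recompiles] ON SERVER
ADD EVENT sqlserver.sql_statement_recompile(
    SET collect_object_name = (1), collect_statement = (1)
    ACTION(sqlserver.sql_text, sqlserver.session_id)
    WHERE ([object_name] = N'usp_MyProblemProc'))  -- placeholder procedure name
ADD TARGET package0.event_file(SET filename = N'track_sp_recompiles')
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
GO
ALTER EVENT SESSION [track_sp_recompiles] ON SERVER STATE = START;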
DBA Greg14
(265 rep)
Apr 3, 2025, 02:39 PM
1
votes
2
answers
291
views
Do temp tables cause Recompilations in SQL Server?
We have a database on SQL Server 2019 with compatibility level set to 110 (SQL Server 2012). We have a stored procedure that is being recompiled frequently, and I noticed we are using many temp tables. We create a temp table, insert data, and then add an index on it for further queries. I suspect this might be causing the recompilations. Do temp tables trigger recompiles, and if so, under what conditions? Thank you!
lifeisajourney
(751 rep)
Oct 16, 2024, 06:35 PM
• Last activity: Mar 22, 2025, 06:59 PM
0
votes
0
answers
50
views
Recompilations in stored procedure SQL Server
I have this piece of code inside a stored procedure. The stored procedure executes around 28k times, but the number of re-compilations is around 170k. I think this piece of code may be one of the reasons for the recompilations.
My guess is because an index is being created on the temp table after the data is inserted, it's causing a recompile. What do you think about it? What are some other reasons why this SP may be getting recompiled so many times?
-- Create a temporary table to store user activity logs
CREATE TABLE #tmpUserLogs (
log_id INT PRIMARY KEY IDENTITY(1,1),
session_id UNIQUEIDENTIFIER,
activity_status UNIQUEIDENTIFIER );
-- Create sample session data
DECLARE @UserSessions TABLE (
session_id UNIQUEIDENTIFIER PRIMARY KEY,
user_id INT,
status_category INT,
session_type_id INT,
approval_flag INT,
activity_status UNIQUEIDENTIFIER,
is_restricted BIT,
timestamp_last_updated DATETIME );
-- Create sample user profiles
DECLARE @UserProfiles TABLE (
user_id INT PRIMARY KEY,
session_id UNIQUEIDENTIFIER );
-- Create sample activity status data
DECLARE @ActivityStatus TABLE (
activity_status UNIQUEIDENTIFIER PRIMARY KEY );
-- Create sample status filter data
DECLARE @StatusFilter TABLE (
status_id UNIQUEIDENTIFIER PRIMARY KEY );
-- Insert mock data
INSERT INTO @UserSessions VALUES
(NEWID(), 101, 1, 2, 1, NEWID(), 0, GETDATE()),
(NEWID(), 102, 2, 1, 0, NEWID(), 1, GETDATE()),
(NEWID(), 103, 3, 3, 1, NEWID(), 0, GETDATE());
INSERT INTO @UserProfiles
VALUES (101, (SELECT TOP 1 session_id FROM @UserSessions));
INSERT INTO @ActivityStatus VALUES (NEWID());
INSERT INTO @StatusFilter VALUES (NEWID());
-- Define parameters
DECLARE @applySecurityCheck BIT = 0;
DECLARE @startIndex INT = 0;
DECLARE @batchSize INT = 10;
-- Insert user activity logs into the temp table
INSERT INTO #tmpUserLogs
SELECT u.session_id, a.activity_status
FROM @UserSessions u
INNER JOIN @UserProfiles up ON up.session_id = u.session_id
INNER JOIN @ActivityStatus a ON a.activity_status = u.activity_status
INNER JOIN @StatusFilter sf ON
(@applySecurityCheck = 0 AND sf.status_id = u.activity_status)
OR
(
@applySecurityCheck = 1
AND u.is_restricted = 1
AND sf.status_id = u.activity_status
)
ORDER BY u.timestamp_last_updated DESC
OFFSET @startIndex ROWS
FETCH NEXT @batchSize ROWS ONLY;
/* Create an index on session_id */
CREATE NONCLUSTERED INDEX idx_session
ON #tmpUserLogs(session_id) INCLUDE (activity_status);
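One variation sometimes suggested for this pattern (a sketch only, not the original code) is to declare the index inline when the temp table is created, which SQL Server 2014 and later support, so no DDL runs against the temp table after it is populated:
-- Sketch: same temp table, but with the nonclustered index declared inline
-- so there is no CREATE INDEX after the INSERT.
CREATE TABLE #tmpUserLogs (
    log_id INT PRIMARY KEY IDENTITY(1,1),
    session_id UNIQUEIDENTIFIER,
    activity_status UNIQUEIDENTIFIER,
    INDEX idx_session NONCLUSTERED (session_id)
);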
lifeisajourney
(751 rep)
Feb 3, 2025, 08:25 PM
• Last activity: Feb 5, 2025, 05:24 AM
15
votes
1
answers
746
views
What could cause a SQL Server non-cached temporary table to NOT trigger a recompile when a large amount of rows have changed?
I have observed (and reproduced) the following scenario with SQL Server 2022.
### The pattern in use
* code is executed via sp_executesql (no stored procedure is involved)
* The first query selects data into a temporary table
* A DDL statement then creates a clustered index on the temporary table. The temporary table is definitely NOT cacheable-- first of all this isn't a module (sproc or function), but also we're creating an index after the temp table is populated. So I would not expect statistics left behind on a cached temporary object to be involved at all here.
* A query selects data from the temporary table. This query gets FULL optimization each time (not a TRIVIAL plan)
This batch can run for both small and larger datasets, so that temp table can have 1 row in it or many thousands of rows.
This behavior normally occurs on a readable secondary. There is no writable query store and no automatic plan forcing as a factor.
I have verified that I can reproduce the behavior against the primary replica as well. (Automatic plan correction was told to ignore the query and I confirmed no plan forcing on primary when reproduced.)
### Repro script
* [Setup script](https://gist.github.com/LitKnd/83130120683e669669c048a5e74c8145#file-setuprepro-sql) - I ran this on SQL Server 2022 CU15. This turns off query store and uses compat level 130.
* [Repro query](https://gist.github.com/LitKnd/83130120683e669669c048a5e74c8145#file-repro-sql) - I've been running this via SQL Query Stress so I can easily run it concurrently on one or more threads
* [Plan Generation Num and temp tables](https://gist.github.com/LitKnd/83130120683e669669c048a5e74c8145#file-plan_generation_num_and_temp_tables-sql) - A very simple query to observe the plan_generation_num in sys query stats (" A sequence number that can be used to distinguish between instances of plans after a recompile.") and the current list of temp tables
### What normally happens-- and the behavior I expect
Normally, changing large amounts of rows in the temporary table between query executions automatically causes recompiles, and I'll see that the query selecting the data from the temporary table has a row estimate matching the rows in the temp table.
When this works as expected, performance is fine.
*With the repro query*: If I clear the plan cache, then run the repro query 40 iterations on a single thread in SQL Query Stress, plan_generation_number ends up being 82. When sampling query plans with sp_WhoIsActive, row estimates querying the temp table match the number of rows in the temp table, as expected.
### What sometimes happens -- and looks like a bug to me
On rare occasions, I see that a plan is in use where there is a 1 row estimate plan for the temp table, but a very large amount of rows are actually in the temp table. A LOT of rows have changed, but it didn't automatically recompile:
This leads to very slow performance because the low estimate plan decides to use a nested loop without prefetching, which makes it a CPU burner.
*With the repro query*: If I clear the plan cache, then run the repro query 20 iterations on 2 threads in SQL Query Stress, plan_generation_number ends up less than 82-- it varies by run, but might be 72 or 59, indicating fewer recompiles. While this is running, I can also sample occasions with sp_WhoIsActive where there is a single rowcount estimated but many more rows in the temporary table. Screenshot:
### I can only reproduce this when running repro code on multiple concurrent sessions
I have not been able to reproduce this behavior with a single session in SQL Server. The only way I can reproduce this is to set up a block of code that:
* Executes at least 1 iteration of the sp_executesql statement that have 1 row in the temp table
* Then executes 1 iteration of the sp_executesql statement that has a lot more rows in the temp table
If I run this in a single session, I have not been able to reproduce the problems. But if I run this concurrently in four or five sessions, I'll be able to occasionally get the "THAT DIDN'T RECOMPILE LIKE IT SHOULD HAVE" issue to pop up. (Note: using SQL Query Stress, I can repro this with only 2 sessions/iterations.)
This feels like a bug to me; I'm curious if anyone else has seen it. Recompile and stats behavior with temp tables is super complex though, so there may be some nuance that I'm missing with how this works with non-cacheable temp tables.
PS: I do think cachable temp tables are generally better. I'm just trying to figure out why this behavior would happen in a non-cacheable temp table scenario at this point.
### Workarounds
After adding an option (recompile) to the query, I can no longer reproduce the reuse of the 1 row plan querying the temp table. This is sufficient, but I'm puzzled why it is necessary.
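For reference, a minimal sketch of the kind of monitoring query described in the linked gist (the gist itself is not reproduced here): plan_generation_num per cached statement, plus the current temp tables:
SELECT qs.plan_generation_num,
       qs.execution_count,
       qs.creation_time,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%#%';  -- rough filter for statements touching temp tables

SELECT name, create_date
FROM tempdb.sys.tables
WHERE name LIKE N'#%';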
Kendra Little
(938 rep)
Dec 6, 2024, 03:32 PM
• Last activity: Dec 10, 2024, 09:00 AM
6
votes
1
answers
543
views
How to Determine if a Stored Procedure is Being Recompiled on Every Execution in SQL Server 2012
What query can I use to determine if a stored procedure is being recompiled every time it is executed in SQL Server 2012? I suspect that some of our stored procedures might be getting recompiled, but I'm not sure how to check. Thanks!
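One approach (a sketch; the procedure name is a placeholder) is to compare plan_generation_num with execution_count for the procedure's cached statements. A plan_generation_num that keeps climbing toward the execution count suggests the statement recompiles on most executions:
SELECT OBJECT_NAME(st.objectid, st.dbid) AS proc_name,
       qs.plan_generation_num,
       qs.execution_count,
       qs.last_execution_time,
       SUBSTRING(st.text, qs.statement_start_offset / 2 + 1, 80) AS statement_start
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.objectid = OBJECT_ID(N'dbo.usp_SuspectProc');  -- placeholder name
The sqlserver.sql_statement_recompile Extended Event can also capture the recompile cause for each occurrence.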
lifeisajourney
(751 rep)
Oct 22, 2024, 01:19 PM
• Last activity: Oct 23, 2024, 10:58 AM
2
votes
1
answers
184
views
What kinds of operations could cause a "Temp table changed" recompilation?
Frequent recompilation happens in one SP with the 'Temp table changed' cause. What kinds of operations on a temp table would trigger this? There are no ALTER or CREATE INDEX operations on the temp table, just DML, which shows up as 'Statistics changed' recompiles as below. Thanks.

Jason Liu
(43 rep)
Jul 20, 2024, 06:13 AM
• Last activity: Jul 21, 2024, 11:39 PM
1
votes
1
answers
188
views
SP getting recompiled with "Statistics changed" reason even after disabling auto_stats for all tables involved
I was troubleshooting a performance issue and I need an SP to stop getting recompiled because of auto update stats.
I don't want to disable auto update stats on the entire database, so I thought I could disable it on all the tables involved using the sp_autostats proc. But even after I disabled auto update stats on all the tables involved, it is still getting recompiled with the reason "Statistics changed".
I checked the last time the stats were updated and also used the auto_stats extended event to track whether the stats were getting updated, but they are not.
REPRO
USE [test]
GO
CREATE TABLE [dbo].[test](
[a] [int] IDENTITY(1,1) NOT NULL, [b] [int] NULL, [c] [int] NULL,
PRIMARY KEY CLUSTERED
(
[a] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = ON, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE NONCLUSTERED INDEX [idx] ON [dbo].[test]
(
[c] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = ON, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
GO
CREATE proc [dbo].[sp1] @a int
as
select * from test where c=@a order by b
GO
sp_autostats 'test','off'
go
--creating lots of changes.
--Session 1
set nocount on
while 1=1
begin
delete top(1) from test
end
--Session 2
set nocount on
while 1=1
insert into test select 1, FLOOR(RAND() * (10 - 1 + 1)) + 1;
--powershell 3. Calling the SP in a loop
while ($true) {
$sqlQuery = "EXEC test.dbo.sp1 @a=100;"
Invoke-Sqlcmd -ServerInstance 'sql1\s14' -Database 'test' -Query $sqlQuery -Encrypt Optional
Start-Sleep -Milliseconds 500
}
--Monitor sql_statement_recompile extended event. Should see a recompile event in a couple minutes with "Statistics changed" as recompile_cause
CREATE EVENT SESSION [test] ON SERVER
ADD EVENT sqlserver.sql_statement_recompile(SET collect_object_name=(1),collect_statement=(1)
ACTION(sqlserver.server_principal_name,sqlserver.sql_text))
ADD TARGET package0.ring_buffer
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=OFF)
GO
I suspect it might have something to do with stats on the worktable created for the sort? If we comment out the ORDER BY clause in the SP, the problem goes away. Periodically calling sp_recompile also seems to reset whatever it is tracking that triggers the recompile.
Any workaround for this, and has anyone else faced this issue?
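For reference, a sketch (using the repro table above) of checking the last stats update and modification counter, to confirm whether auto update stats is actually firing on dbo.test:
SELECT s.name AS stats_name,
       s.no_recompute,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.test');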
DMDM
(1750 rep)
Jun 10, 2024, 10:07 AM
• Last activity: Jun 13, 2024, 07:00 PM
2
votes
1
answers
3317
views
Is there any benefit to WITH RECOMPILE or OPTION (RECOMPILE) in a stored procedure that uses temp tables at every stage?
Suppose that I have a stored procedure. It does not call any other stored procedures and is not called by any others, nor is it called concurrently. Every step in this stored procedure either creates a local temp table (usually by SELECT INTO) or references one created earlier in this procedure. Theoretically, is there any benefit to including either WITH RECOMPILE or OPTION (RECOMPILE) in such a stored procedure? If it helps, assume that I am on a 2016 version of SQL Server.
I am ignorant about when and how the presence of temp tables in stored procedures causes recompiles. This question is entirely born of that ignorance. I know the benefits of recompilation in general, but not how or when temp tables cause it.
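For reference, a minimal sketch (hypothetical procedure and table names) of where each form sits: WITH RECOMPILE applies to the whole procedure, while OPTION (RECOMPILE) applies to a single statement:
-- Procedure-level: the whole procedure is compiled fresh on every execution.
CREATE PROCEDURE dbo.usp_Demo_WithRecompile
WITH RECOMPILE
AS
BEGIN
    SELECT * INTO #t FROM dbo.SomeTable;  -- hypothetical source table
    SELECT COUNT(*) FROM #t;
END;
GO
-- Statement-level: only the hinted statement is recompiled at each execution.
CREATE PROCEDURE dbo.usp_Demo_OptionRecompile
AS
BEGIN
    SELECT * INTO #t FROM dbo.SomeTable;
    SELECT COUNT(*) FROM #t OPTION (RECOMPILE);
END;
GO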
J. Mini
(1237 rep)
Dec 17, 2023, 12:23 PM
• Last activity: May 30, 2024, 07:02 PM
0
votes
2
answers
295
views
Aside from explicitly flushing the cache or demanding recompiles, what recompiles a full stored procedure in SQL Server 2019?
I recently finished reading [Plan Caching in SQL Server 2008](https://learn.microsoft.com/en-us/previous-versions/sql/sql-server-2008/ee343986(v=sql.100)?WT.mc_id=DP-MVP-5440) and I have become confused. It seems that, short of totally flushing the plan cache or explicitly demanding a stored procedure to be recompiled, recompilations of stored procedures from SQL Server 2008 onwards are done at the statement level rather than the stored-procedure level.
So, aside from explicitly flushing the cache or demanding recompiles (e.g. WITH RECOMPILE), what recompiles **a full stored procedure** in SQL Server 2019 rather than just recompiling individual statements?
To give an example of where I am confused, consider the following procedure.
CREATE PROCEDURE FOO AS
BEGIN
SELECT * INTO #temp1 FROM table1
INSERT BAR1 SELECT * FROM #temp1
INSERT BAR2 SELECT * FROM #temp1
END
I can think of plenty of things that could cause SELECT * INTO #temp1 FROM table1 to recompile, but it would be strange for that to recompile without the next lines also recompiling. This makes me think that there must be some things within SQL Server that will cause entire stored procedures to recompile.
J. Mini
(1237 rep)
Dec 27, 2023, 10:49 PM
• Last activity: Dec 28, 2023, 07:58 PM
0
votes
1
answers
107
views
Scalar Functions Causing Plan Cache Bloat
Our plan cache is showing a large number of duplicate entries for a scalar function that is called as per the code below:
SELECT dbo.fnSomeFunction('12345678')
I have done some testing and found that it appears a new plan is compiled for each call, rather than a single plan with sniffed parameters similar to a stored procedure.
Create an equivalent stored proc and scalar function:
USE StackOverflow2010
GO
CREATE OR ALTER PROC dbo.sp_Test
(
@id INT
)
AS
SELECT Body
FROM Posts
WHERE Id = @id
GO
CREATE OR ALTER FUNCTION dbo.sfn_Test
(
@id INT
)
RETURNS NVARCHAR(MAX)
AS
BEGIN
DECLARE @body NVARCHAR(MAX)
SELECT @body=Body
FROM Posts
WHERE Id = @id
RETURN @body
END
clear the plan cache and call both with different parameters:
USE StackOverflow2010
DBCC FREEPROCCACHE
EXEC dbo.sp_test @id = 4
GO
EXEC dbo.sp_test @id = 5
GO
EXEC dbo.sp_test @id = 6
GO
SELECT dbo.sfn_Test(4)
GO
SELECT dbo.sfn_Test(5)
GO
SELECT dbo.sfn_Test(6)
GO
Run a query to view our plan cache:
SELECT d.name,
t.text AS TSQL_Text,
s.creation_time,
s.execution_count,
p.query_plan
FROM sys.dm_exec_query_stats s
CROSS APPLY sys.dm_exec_sql_text(s.plan_handle) t
CROSS APPLY sys.dm_exec_query_plan(s.plan_handle) p
JOIN sys.databases d
ON t.dbid = d.database_id
ORDER BY TSQL_Text;
The results:
We can see that SQL Server has reused the stored procedure plan, but each scalar function call gets its own plan.
Is there a way to reduce plan cache bloat as a result of this? The only option I can think of is to enable Optimize for Ad-Hoc Workloads?
As for why SQL Server behaves this way, I assume this is just yet another example of why not to use scalar functions?
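For completeness, a sketch of the server-level setting mentioned above; whether it helps depends on whether these scalar-function plans really are single-use:
-- Single-use plans are stored as small stubs until a query is seen a second time.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;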

SE1986
(2182 rep)
Oct 4, 2023, 01:52 PM
• Last activity: Oct 4, 2023, 02:44 PM
2
votes
0
answers
350
views
Oracle 19C Auto Compile fails on Package With SQL_Macro but ALTER PACKAGE ... COMPILE Succeeds
We have a package SCHEMA.MYPACKAGE with a procedure SCHEMA.MYPACKAGE.GETDATA that calls a sql macro SCHEMA.SQLMACROFUNCTION that exists within the same schema but outside the package. The macro is called within a select statement and qualified by schema name even though it's within the same schema, like this:
SELECT col1,col2,col3
FROM SCHEMA.SQLMACROFUNCTION(param1, param2, param3)
ORDER BY col1 DESC;
The problem is that when anything invalidates the package (for example, adding a column to a table the package depends on) and the next procedure call tries to auto-compile the package as Oracle is supposed to do, the compile fails with
> PLS-00201: identifier 'SQLMACROFUNCTION' must be declared
leaving the package in an invalid state until it's manually compiled with
ALTER PACKAGE SCHEMA.MYPACKAGE COMPILE;
which always succeeds.
Anyone have an explanation for this behavior?
I considered permissions, but the package and the sql_macro function are in the same schema.
I considered function name qualification, but it's already called with schemaname.objectname.
Also (I can't corroborate this with evidence from Oracle documentation), I read somewhere that functions (in this case a sql_macro) have to be executed in order to validate the code, but an auto-compile only parses the code, which might explain it.
There are obvious workarounds to this issue (substituting the function code into the procedure in place of the function call resolves the auto-compile issue), but I'm curious to understand why it's happening and am interested in some documentation, if it's out there.
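For anyone reproducing this, a small sketch (assuming the object names above) of inspecting what the failed automatic recompilation left behind:
-- Current validity of the package and the macro function (run as the owning schema)
SELECT object_name, object_type, status
FROM user_objects
WHERE object_name IN ('MYPACKAGE', 'SQLMACROFUNCTION');
-- Compilation errors recorded for the package, e.g. the PLS-00201
SELECT line, position, text
FROM user_errors
WHERE name = 'MYPACKAGE'
ORDER BY sequence;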
Thanks in advance.
lightwing
(31 rep)
Jun 21, 2023, 06:57 PM
0
votes
1
answers
175
views
SQL Server 2014 Cardinality Estimator estimate final number of rows after OUTER JOIN is less than number of rows from initial table
I have a [SQL query] for SQL Server 2019.
It works fine with OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION')) and very badly without this hint.
I found out that the SQL Server 2014 Cardinality Estimator estimates the final number of rows after an OUTER JOIN **to be less than the number of rows from the initial table**, without any WHERE predicates and **with option (recompile)**.
New CE Plan
The legacy Cardinality Estimator estimates the final number of rows after the OUTER JOIN to be equal to or greater than the number of rows from the initial table (correct).
Old CE Plan
Is this a bug in the SQL Server 2014 Cardinality Estimator, or am I doing something wrong?
Андрей Ерёмин
(1 rep)
May 22, 2023, 08:05 AM
• Last activity: May 31, 2023, 04:46 AM
0
votes
1
answers
4415
views
Difference between with recompile and Option(recompile)
I've read online that with recompile will recompile the whole procedure whereas Option(recompile) will only recompile the specific statement it is used on. If a certain statement in the procedure is recompiled, won't it affect other parts of the procedure, because now the optimizer has better stats to make better decisions later on (assuming the option(recompile) produces better stats), thus causing a complete recompile of the procedure? How exactly does with recompile differ from option(recompile)?
stargazer77
(11 rep)
Jan 4, 2023, 06:15 PM
• Last activity: Jan 4, 2023, 10:42 PM
4
votes
1
answers
685
views
Execution plan with a lot of ComputeScalar operations
I executed the same query with and without OPTION (RECOMPILE). When I compare the 2 plans, visually the one main difference I see is that the plan without option recompile shows a lot of ComputeScalar operators and the other one doesn't have any.
Here are the two plans:
Without OPTION Recompile: https://www.brentozar.com/pastetheplan/?id=S1OcZGi85
With OPTION recompile: https://www.brentozar.com/pastetheplan/?id=ryJAfMsL5
Why does one plan use a ton of compute scalar and the other one doesn't? The query without option recompile takes nearly 4 minutes to execute. Are the compute scalar operations causing the slowness?
DBK
(378 rep)
May 12, 2022, 11:11 PM
• Last activity: May 13, 2022, 04:24 AM
9
votes
1
answers
1944
views
Why does OPTION RECOMPILE cause a predicate pushdown?
I have a SQL query that is made out of a nested mess of views and table-valued functions going on for at least 4 levels deep (*I didn't have the time or patience to go through it all, it's hundreds of lines of code at each level*).
I've been trying to understand why is it that when I run the base query with OPTION (RECOMPILE) it works very fast, however when I run it without this OPTION, it runs very slow.
I've made sure to clear the plan cache before this happens and even when generating a new plan, it's suboptimal, however, the OPTION (RECOMPILE) one is fast.
I've inspected both plans and noticed a difference in how the parameters that are passed are handled in the plan with OPTION (RECOMPILE).
SELECT [p].[Activity]
,[p].[ActivityType]
,[p].[Company]
,[p].[Flags]
,[p].[Id]
,[p].[Name]
,[p].[Priority]
,[p].[Filters]
,[p].[Priority]
,[p].[Classification]
,[p].[Number]
,[p].[TaskFilter]
,[p].[TaskType]
,[p].[User]
FROM (
SELECT *
FROM [ActivProdStatuses]('ProdJobTask', 0)
) AS [p]
WHERE (
( ([p].[User] = 'some_value') AND (([p].[Flags] & 8) = 0) )
AND ([p].[Activity] = 'unique_value')
)
AND
(CASE
WHEN ([p].[Flags] & 4) <> 0
THEN CAST(1 AS BIT)
ELSE CAST(0 AS BIT)
END = 1 )
ORDER BY [p].[Priority]
OPTION (RECOMPILE)
In the plan without OPTION RECOMPILE, I have a part in my plan where the bulk of the time is spent moving around useless data which is later filtered by the FILTER operator (you can see 0B coming out of the FILTER).
The FILTER operator has all the filtering parameters that come with the procedure (unique_value and some_value) and some other filtering parameters that are determined in the nested levels. The nested levels contain themselves other TVF's with parameters determined from OUTER APPLY'ed queries.
In the OPTION RECOMPILE version, those parameters are pushed down to the execution plan (I'm guessing it's called Predicate Pushdown) and filtered directly in the first step, when data is read from disk.
From what I looked at, this is my conclusion for why this is happening and why the OPTION RECOMPILE plan is more efficient. You can see the same data access for the same table, below and the part of the more efficient OPTION RECOMPILE plan.
---
Now, **my question is** why would the OPTION RECOMPILE plan behave differently than when just generating a new plan for a new query, with the same parameters / values passed to it. What does the OPTION RECOMPILE do?
I've tried to search online and see if it does "force" a predicate pushdown but I couldn't find anything specific about this.
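My working assumption (not confirmed by documentation I could find) is that OPTION (RECOMPILE) enables the parameter embedding optimization: at compile time the current parameter and variable values are treated like literals, so predicates can fold and unreachable branches can be removed. A contrived sketch with a hypothetical table:
DECLARE @flag int = 0;
-- Without the hint, the optimizer must plan for any value of @flag,
-- so this predicate cannot be folded away at compile time.
SELECT COUNT(*) FROM dbo.SomeTable WHERE @flag = 1;
-- With the hint, the current value (0) is embedded, the predicate folds to a
-- constant, and the filtering can be pushed all the way down or removed.
SELECT COUNT(*) FROM dbo.SomeTable WHERE @flag = 1 OPTION (RECOMPILE);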
You can find the **anonymized** plan for the **slow execution** here . Also, the **anonymized** plan for the **OPTION RECOMPILE execution** is here .
I tried running the query once with OPTION (RECOMPILE) and then removing the hint and immediately running the query again (same parameters both times). The first run is fast, the second is slow.
I did think to generate the "fast" plan in the hopes that it would be stored in the cache and get reused when the OPTION RECOMPILE is not specified anymore. However, I believe the query plan hashes differ and the cached plan will not be reused since there are more changes than just the values passed in to the filtering clauses.
Radu Gheorghiu
(983 rep)
Feb 25, 2022, 01:14 AM
• Last activity: Feb 25, 2022, 11:36 PM
5
votes
2
answers
868
views
Why does SQL Server not perform constant (UNION ALL) branch elimination with OPTION(RECOMPILE) when selecting the result into a scalar variable?
We use some 'aggregate' views to select from multiple tables using a discriminator (note: these views are *not* partitioned views, because the discriminator is not in the base tables). This normally works well when using option(recompile), as the query planner will *eliminate* the non-reachable union all paths before selection of a query plan.
However, this constant-folding optimization appears defeated when selecting the result into a scalar variable. Selecting the result into a temporary table variable does not de-optimize the recompilation.
Here is a reproduction case in SQL Server 2017:
-- A table, don't need any data.
create table [test].test_table (col1 int, primary key (col1));
-- A simple 'aggregate' view. Using the same table here is irrelevant and,
-- while the view shows the scenario, it might not be required to reproduce the issue.
create view [test].test_view as
select col1, descrim = 1 from [test].test_table
union all
select col1, descrim = 2 from [test].test_table
Normal query, which results in an optimized query plan touching only *one* of the union all branches:
declare @descrim int = 2;
select count(col1)
from [test].test_view
where descrim = @descrim
option (recompile) -- explicit recompile here "works"
However, as soon as a "select into scalar variable" is used, the plan becomes *de-optimized* as it does not eliminate the non-used union. (The plan is still correctly optimized when using a literal value in the query text.)
declare @descrim int = 2;
declare @brokeit int;
select @brokeit = count(col1)
from [test].test_view
where descrim = @descrim
option (recompile) -- explicit recompile here does NOT optimize plan for @descrim!
**1. Is this de-optimization "expected"?**
**2. Where is this *significant de-optimization* behavior with respect to option(recompile) and/or selecting into a scalar variable documented or otherwise discussed in depth?**
**3. Is there a simple way to get a recompile-optimized plan with select @x = .. without using a temporary table (variable)?**
While during query execution the union all will prevent actual IO access to the secondary artifact, such is still an issue with query plan generation. In the specific error case spawning this question, leaving in multiple tables for consideration prevents SQL Server from choosing an appropriate seek plan and the resulting plan options are very poor choices in the given domain.
The first "good" plan:
The second and "bad" plan:
This "bad" plan also has an implicit conversion warning, making me suspect that the select into a scalar variable might be bypassing many different optimizations - or even ignoring the option(recompile) hint entirely.
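Based on the observation above that a table variable does not defeat the optimization, one possible workaround sketch (using the repro objects) routes the scalar assignment through a table variable:
declare @descrim int = 2;
declare @result table (cnt int);

insert into @result (cnt)
select count(col1)
from [test].test_view
where descrim = @descrim
option (recompile); -- branch elimination still happens here

declare @answer int = (select top (1) cnt from @result);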
user2864740
(205 rep)
Mar 8, 2021, 11:41 PM
• Last activity: Apr 4, 2021, 03:59 PM
2
votes
0
answers
62
views
Does SQL Server really require fully qualified object references to match a statement to a cached execution plan
From SQL Server Docs, section Execution Plan Caching and Reuse :
> The algorithms to match new SQL statements to existing, unused execution plans in the cache require that all object references be fully qualified. For example, the first of these SELECT statements is not matched with an existing plan, and the second is matched:
>
> SELECT * FROM Contact
>
> SELECT * FROM Person.Contact
Is the above statement also true for object references that use the default dbo schema? The reason I ask is because I personally never fully qualify object references (and I assume most people don't), even when using a driver like JDBC. Does this mean that all of my SQL statements are being re-compiled every time they're executed?
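A quick way to check (a sketch; run each query form a few times first) is to compare usecounts of the cached entries. Separate entries with low usecounts for textually different forms of the same statement would indicate the plans are not being shared:
SELECT cp.usecounts, cp.objtype, cp.cacheobjtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%Contact%'  -- the docs example; substitute your own query text
  AND st.text NOT LIKE N'%dm_exec_cached_plans%';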
Archmede
(165 rep)
Jun 10, 2020, 12:12 AM
0
votes
1
answers
84
views
can't find column stats for newly created table
I am trying to simulate a plan recompilation due to auto update stats, which is triggered when the threshold (500 + 20 percent of modifications) is reached.
The database's current settings are:
I started with creating a simple table:
create table recomp(num int)
go
Then I insert some rows:
insert into recomp
values (5)
go 100
and run my query below 5 times (I used **where 1 = (select 1)**) for full optimization:
select * from recomp
where 1 = (select 1)
In my plan cache I have the following:
Now the interesting part: I again insert new rows in order to trigger auto update stats for the **num** column of the recomp table:
insert into recomp
values (8)
go 500
At this point the stats for the num column should have been updated. The next thing I am going to do is run my select query:
select * from recomp
where 1 = (select 1)
When I check my plan cache:
and an event which captures recompiles:
My question is: where can I find the **stats** for the **num** column?
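For reference, a sketch of where such a statistic would show up if it existed: auto-created column statistics appear in sys.stats (with names like _WA_Sys_...) and are linked to the column through sys.stats_columns:
SELECT s.name AS stats_name,
       s.auto_created,
       c.name AS column_name
FROM sys.stats AS s
JOIN sys.stats_columns AS sc
    ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
JOIN sys.columns AS c
    ON c.object_id = sc.object_id AND c.column_id = sc.column_id
WHERE s.object_id = OBJECT_ID(N'dbo.recomp');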





Rauf Asadov
(1313 rep)
May 27, 2020, 10:15 PM
• Last activity: May 28, 2020, 01:44 AM
0
votes
1
answers
100
views
Does plan recompilation happen to a plan guide?
Does plan recompilation happen for a plan guide due to, for example, auto update stats?
P.S. Do **not** take into account adding an **OPTION (RECOMPILE)** hint in the plan guide.
Rauf Asadov
(1313 rep)
May 27, 2020, 02:54 PM
• Last activity: May 27, 2020, 09:20 PM
Showing page 1 of 20 total questions