Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
339
views
Linked Server not connecting to SQL Server 2022 database
I have a SQL Server 2019 Enterprise database in a data center far, far away. I have a SQL Server 2019 developer edition database on my local laptop (A), and it is a Linked Server on the far away database (B). The security is based on a local SQL login on the A database. Everything works fine. I do development work and test the B database where I create JOINs back to my A database.
My IT group just issued a new laptop to me. I installed SQL Server 2022 developer edition and now I have the database (C) configured just like the one on my old laptop A. However, I cannot create a Linked Server connection to the C database from the B database. I have confirmed about 49 times that the SQL login account is exactly the same for both A and C databases. I have deleted and re-created the Linked Server from B to A multiple times today, and it works fine. But when I try to create a Linked Server from B to C, I receive an error message that the connection failed.
I have noticed in SQL Config Mgr that there are no Client Protocols on the C database. Apparently this went away with SQL 2022. I don't think this is the issue; I have made sure the port numbers on C match what is on A, which is 1433. I have restarted the SQL Browser service on C multiple times but this did not help.
I have laptops A and C sitting side by side, and I cannot see any difference in the way the databases are configured. What am I missing?
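One thing worth trying is to script the B-to-C linked server with an explicit `tcp:` prefix and port, which bypasses SQL Browser name resolution entirely, and then test it with the documented `sp_testlinkedserver` procedure to surface the underlying error. A sketch with placeholder server and login names (note that SQL Server 2022 setup no longer ships the old SQL Native Client, so the modern MSOLEDBSQL provider must be installed on B):

```sql
-- Hypothetical sketch: recreate the B -> C linked server with an explicit
-- tcp: prefix and port. Server, host, and login names are placeholders.
EXEC sp_addlinkedserver
    @server     = N'LAPTOP_C',
    @srvproduct = N'',
    @provider   = N'MSOLEDBSQL',
    @datasrc    = N'tcp:laptop-c.example.com,1433';

EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = N'LAPTOP_C',
    @useself     = N'FALSE',
    @rmtuser     = N'LocalSqlLogin',
    @rmtpassword = N'***';

-- Fails fast with the underlying connection error rather than a generic one.
EXEC sp_testlinkedserver N'LAPTOP_C';
```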

MarkF
(1 rep)
Jul 28, 2024, 02:16 AM
• Last activity: Sep 20, 2025, 12:06 AM
1
votes
1
answers
77
views
Need help determining ideal batch size for querying a large table
I have a table with maybe around 180-200k rows (~40 columns) that I'm trying to query. The premise is that there's a list of strings to compare an FK column against, totaling around 170k, passed as an API request in batches; each batch is then further chunked into smaller batches to filter in an `IN` clause:
SELECT *
FROM table
WHERE IsActive = 1 AND Fk IN ( .. list of batch size .. )
Testing on a local database with around 5k rows, I found that the optimizer switched from using *Nested Loops -> Index Seek -> Key Lookup*, to *Hash Match -> Constant Scan -> Clustered Index Scan* for batch sizes larger than 64.
I have seen that index seeks are generally said to be better, but in this case, where the output is likely to contain >60% of the table rows, how should I determine what batch size to use? Would index scans or seeks perform better here? Or is this method itself inefficient, and should I use a TVP instead?
Query plan for example batch size of 100: https://www.brentozar.com/pastetheplan/?id=3xy2ILe0K4
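For reference, the TVP alternative mentioned above might look something like this (type, procedure, and column names are placeholders, not from the actual schema):

```sql
-- Sketch of the table-valued parameter approach: the batch of FK strings is
-- passed as a table type and joined, instead of being inlined into an IN list.
CREATE TYPE dbo.FkList AS TABLE (Fk varchar(100) PRIMARY KEY);
GO
CREATE OR ALTER PROCEDURE dbo.GetActiveByFk
    @Fks dbo.FkList READONLY
AS
BEGIN
    SET NOCOUNT ON;
    SELECT t.*
    FROM dbo.[table] AS t
    JOIN @Fks AS f ON f.Fk = t.Fk   -- one join instead of a long IN list
    WHERE t.IsActive = 1;
END;
```

This also keeps the plan stable across batch sizes, since the query text no longer changes with the number of values.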
Kyrop
(11 rep)
Sep 13, 2025, 03:24 AM
• Last activity: Sep 14, 2025, 01:49 PM
0
votes
2
answers
67
views
gMSA account for a SSRS DataSource?
I'm trying to set SSRS to fully use a gMSA account.
The Microsoft documentation states that gMSAs are not supported.
However, I was able to set the service account to the gMSA without any problem. Everything runs fine, since the DataSource is still configured with the previous AD service account (a normal AD account).
Is there a way to set the execution account or datasource account as a gMSA?

Racer SQL
(7562 rep)
Sep 11, 2025, 02:53 PM
• Last activity: Sep 13, 2025, 12:43 PM
0
votes
0
answers
113
views
SQL Server 2022 Contained AGs and database mail
I have two SQL Server 2022 environments with a Contained AG configured with a listener that the users are connecting to. All three have Database Mail configured identically. The two servers can successfully send mail using Database Mail, but the listener cannot. This is an issue because the users are creating SQL Server Agent jobs on the listener and they want to be alerted on job failures. How do I configure the listener so that jobs created on it can send email notifications on failure?
As per Sean Gallardy's comments below:
I have two DB instances with Database Mail configured identically. Test emails succeed on both, and sysmail_allitems shows sent_status = sent.
I have made a contained connection via the listener and set up jobs. The jobs failed to notify on success. I configured Database Mail on the contained connection with the same configuration as on the instances, then sent a test email. sysmail_allitems shows sent_status = unsent even after days have passed. No error is generated.
As said in my comment below, the sysmail_event_log table was completely empty. Yesterday, while attempting to troubleshoot, I ran the sysmail process manually multiple times:
EXEC msdb.dbo.sysmail_stop_sp;
EXEC msdb.dbo.sysmail_start_sp;
Here is the resulting screenshot.
Here is the screenshot of the sysmail_allitems as of today showing the mail as unsent.
Database Mail XPs are enabled, the broker is enabled, and the mail queue is stuck in an INACTIVE State.
I am receiving the following error in the error log:
Database Mail is not enabled for agent notifications. Cannot send e-mail to DENTISTRY_SCHEDJOBS
However, mail has been configured and the agent does have access.
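For reference, the state of the mail system on the listener connection can be checked with the documented msdb procedures and views (nothing here is environment-specific):

```sql
-- Is the Database Mail external program started on this connection?
EXEC msdb.dbo.sysmail_help_status_sp;

-- State and length of the mail queue (the INACTIVE state mentioned above).
EXEC msdb.dbo.sysmail_help_queue_sp @queue_type = 'mail';

-- Any errors logged by Database Mail, newest first.
SELECT * FROM msdb.dbo.sysmail_event_log ORDER BY log_date DESC;

-- Sent status of recent items.
SELECT mailitem_id, sent_status, last_mod_date
FROM msdb.dbo.sysmail_allitems ORDER BY mailitem_id DESC;
```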




Lumpy
(2129 rep)
Aug 29, 2025, 03:12 PM
• Last activity: Sep 10, 2025, 04:13 PM
0
votes
1
answers
696
views
Connecting to SQL Server from externally with proxy server
I have a server running SQL Server 2022 with a couple of instances.
There is the need to grant access to an external supplier working with us on a project.
The idea is: the supplier connects to our SQL Server, on a specific database they need to access.
We have created a configuration on a proxy server running nginx that allows the connection to the local sql server.
However, I cannot reach the database when testing from outside.
Is there any configuration I need to do on the SQL Server to allow this behavior?
Thanks in advance!
nginx entry
-----------
server {
    access_log /var/log/nginx/porgreencloud1433.log upstream_time;
    error_log /var/log/nginx/porgreencloud1433_error.log;
    listen 1433;
    server_name porgreencloud.company.com;
    location / {
        proxy_connect_timeout 60s;
        proxy_socket_keepalive on;
        proxy_pass http://dbtcp;
    }
    ssl_certificate /etc/letsencrypt/live/porgreencloud.company.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/porgreencloud.company.com/privkey.pem; # managed by Certbot
    include /etc/nginx/sites-available/sslsettings.conf;
}
SQL Server settings
-------------------
Connection String
-----------------
SQLUser = "ourUser"
SQLPassword = "LetsConnect2DB!!"
SQLServer = "porgreencloud.company.com,1433"
DbConn = "Provider=SQLOLEDB.1;Persist Security Info=True;User ID=" & SQLUser & ";Password=" & SQLPassword & ";Initial Catalog=" & DBName & ";" & _
"Data Source=" & SQLServer & ";Use Procedure for Prepare=1;Auto Translate=True;Packet Size=4096;" & _
"Use Encryption for Data = False;Tag with column collection when possible=False"
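One thing worth noting: TDS (the SQL Server wire protocol) is not HTTP, so proxying it through an `http` `server {}` block with `proxy_pass http://...` will not work. A sketch of what the equivalent nginx `stream` (TCP layer) configuration typically looks like — the upstream name and internal address are assumptions, not taken from the actual environment:

```nginx
# Sketch only: TDS must be proxied at the TCP layer via the stream module,
# outside any http {} block. Host/port are placeholders.
stream {
    upstream dbtcp {
        server 10.0.0.5:1433;   # internal SQL Server instance
    }
    server {
        listen 1433;
        proxy_connect_timeout 60s;
        proxy_pass dbtcp;       # raw TCP pass-through, no http:// scheme
    }
}
```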



aganju82
(1 rep)
May 24, 2024, 08:39 AM
• Last activity: Sep 9, 2025, 10:03 AM
4
votes
0
answers
213
views
SQL Server 2022 vs 2019 Query Performance gap
I am facing a performance issue with an SQL query.
The query is generated dynamically via the webapp, I am testing with a static version to make things simpler.
The SQL query includes temp tables and columns which get their values recalculated with Scalar Valued functions.
To be more precise, the query loads data into temporary tables from a table.
The table has 350 columns; the number of rows does not matter in our scenario, as it is 0.
I have two identical machines, first one with SQL Server 2019 and second one with SQL Server 2022 and I have a huge gap in performance.
- 2019: takes 2 seconds to execute the query on first execution
- 2022: takes 14 seconds to execute the query on first execution
The query plans were huge (150 MB each). I removed a bunch of columns to get them under 2 MB and also anonymised them; the performance gap remains. Links below:
2019
https://www.brentozar.com/pastetheplan/?id=LVoA1mZyLx
2022
https://www.brentozar.com/pastetheplan/?id=eP7Q7f1msx
Database compatibility level is 100 in both cases.
SQL Server OS and specs are identical.
As you can see, it looks like the difference is in the compilation time where there is a huge gap.
----------
Things I tried:
1. Disable scalar UDF inlining - ALTER DATABASE SCOPED CONFIGURATION SET TSQL_SCALAR_UDF_INLINING = OFF;
2. Set Legacy Cardinality Estimator (CE) - ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
3. Trace Flags: 4199, 9481, 10204, 2312
4. Disable parameter sniffing
5. Disabling batch mode on rowstore - ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = OFF;
6. optimize for ad hoc workloads - true
7. Clear cache plans + updates stats
8. PARAMETERIZATION - tried SIMPLE and FORCE
9. Changing database compatibility level
10. Installed SQL Server 2025 Evaluation - same performance as 2022
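One quick way to confirm that the gap really is compile time rather than execution is to measure the two separately on a cold plan cache (a sketch; only run the cache flush on a test server):

```sql
-- "SQL Server parse and compile time" is reported separately from
-- "SQL Server Execution Times"; a ~12-second difference in the former
-- would point squarely at compilation.
SET STATISTICS TIME ON;
DBCC FREEPROCCACHE;   -- force a fresh compile (test server only!)
GO
-- ... run the problem query here ...
GO
SET STATISTICS TIME OFF;
```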
----------
What I'm looking for:
Any further suggestions on what I should look into. I have a lot of scalar-valued functions and can rewrite them if absolutely necessary; I just want to find out whether anyone has faced similar performance issues and whether there are any more settings I can adjust.
Both SQL servers are using the latest CUs:
SQL Server 2019 CU32 (15.0.4430.1, February 2025)
SQL Server 2022 CU20 (16.0.4205.1, July 2025)
**EDIT**
Statistics:
2019


faithY
(41 rep)
Aug 13, 2025, 02:54 PM
• Last activity: Sep 8, 2025, 12:31 PM
7
votes
4
answers
764
views
If all of the indexes of a table are partitioned with the same function and scheme but on different columns, is it still considered aligned?
[The documentation](https://learn.microsoft.com/en-us/sql/relational-databases/partitions/partitioned-tables-and-indexes?view=sql-server-ver16#aligned-index) says the following > ### Aligned index > > An index that is built on the same partition scheme as its corresponding table. When a table and it...
[The documentation](https://learn.microsoft.com/en-us/sql/relational-databases/partitions/partitioned-tables-and-indexes?view=sql-server-ver16#aligned-index) says the following
> ### Aligned index
>
> An index that is built on the same partition scheme as its corresponding table. When a table and its indexes are in alignment, the database engine can switch partitions in or out of the table quickly and efficiently while maintaining the partition structure of both the table and its indexes. An index doesn't have to participate in the same named partition function to be aligned with its base table. However, the partition function of the index and the base table must be essentially the same, in that:
>
> * The arguments of the partition functions have the same data type.
> * They define the same number of partitions.
> * They define the same boundary values for partitions.
To my shock, I cannot see anything in this definition saying that the index is not considered aligned if it is partitioned on a completely different column from the table.
So, is the index below considered aligned with the table?
CREATE PARTITION FUNCTION ByYear (DATETIME2(0))
AS RANGE RIGHT
FOR VALUES ('20000101', '20010101', '20020101', '20030101');
GO
CREATE PARTITION SCHEME AllToPrimary
AS PARTITION ByYear
ALL TO ([Primary]);
GO
CREATE TABLE Partitioned
(
    Junk NVARCHAR(MAX),
    TablePartitioningColumn DATETIME2(0),
    IndexPartitioningColumn DATETIME2(0)
) ON AllToPrimary(TablePartitioningColumn)
GO
CREATE NONCLUSTERED INDEX PartitionedDifferent
ON Partitioned (IndexPartitioningColumn)
ON AllToPrimary(IndexPartitioningColumn)
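One way to compare how the engine records the partitioning of the index versus the base table is through the catalog views (a sketch using only documented views and the names from the repro above):

```sql
-- For each index on the table, show its data space (partition scheme or
-- filegroup) and its partitioning column (partition_ordinal = 1).
SELECT i.name  AS index_name,
       ds.name AS data_space,
       c.name  AS partitioning_column
FROM sys.indexes AS i
JOIN sys.data_spaces AS ds
      ON ds.data_space_id = i.data_space_id
LEFT JOIN sys.index_columns AS ic
      ON ic.object_id = i.object_id
     AND ic.index_id = i.index_id
     AND ic.partition_ordinal = 1
LEFT JOIN sys.columns AS c
      ON c.object_id = ic.object_id
     AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID(N'dbo.Partitioned');
```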
J. Mini
(1269 rep)
May 18, 2025, 05:33 PM
• Last activity: Sep 8, 2025, 09:43 AM
0
votes
1
answers
92
views
Why does dm_hadr_database_replica_states show a different AOAG LAG time than the GUI?
When I'm querying `select * from sys.dm_hadr_database_replica_states`, I see 11k seconds (~3 hours) of lag.
But when I go to the AOAG dashboard and select the "Estimated Data Loss (time)" column, it shows 1 day.
Am I missing something here? Why the difference?
This is a query I'd like to use to create an alert if I knew the correct value:
SELECT
ag.name AS AvailabilityGroupName,
dr.name AS DatabaseName,
ars.replica_server_name AS ReplicaServerName,
drs.log_send_queue_size AS LogSendQueueSizeKB,
drs.redo_queue_size AS RedoQueueSizeKB,
drs.redo_rate AS RedoRateKBPerSec,
drs.secondary_lag_seconds
FROM
sys.availability_groups AS ag
JOIN
sys.availability_replicas AS ar ON ag.group_id = ar.group_id
JOIN
sys.dm_hadr_database_replica_states AS drs ON ar.replica_id = drs.replica_id
JOIN
sys.databases AS dr ON drs.database_id = dr.database_id
JOIN
sys.availability_replicas AS ars ON drs.replica_id = ars.replica_id
WHERE
drs.is_primary_replica = 0 -- Focus on secondary replicas
AND drs.secondary_lag_seconds > 3600

Racer SQL
(7562 rep)
Sep 3, 2025, 06:05 PM
• Last activity: Sep 6, 2025, 02:57 PM
0
votes
1
answers
92
views
How to Use Group Managed Service Account (GMSA) as a service account for SQL Server 2022
Help please!
I have a manual install of SQL Server 2022 built with the service running as the built-in account. I now want to change the service to run as the group managed service account that I now have (i.e. after the instance has been installed). When changing the Log On account in Configuration Manager, I get *The request failed or the service did not respond in a timely fashion. Consult the event log or other applicable error logs for details*, but I don't have any helpful information in the logs.
Has anyone had this experience? Is there a problem with using a gMSA to log in manually as I'm trying to do? If so, what is the way round it? How can I change the already-installed instance to run with the gMSA?
Thank you
PTL_SQL
(431 rep)
Aug 28, 2025, 07:28 AM
• Last activity: Sep 5, 2025, 10:18 AM
-1
votes
0
answers
60
views
SSIS packages fail with unexpected termination status on SQL server 2022
My SQL Agent job, which runs SSIS packages, was working fine with the same user and setup earlier. But today it failed with the status **"Unexpected Termination"** in SSMS/SSISDB. Nothing was changed in the package or user permissions. Any idea why this can suddenly happen?
Manthan Prabhu
(29 rep)
Aug 25, 2025, 06:54 AM
• Last activity: Sep 3, 2025, 12:59 AM
8
votes
1
answers
347
views
Alter table alter column does not work with user defined types
A user with db_datareader, db_datawriter, db_ddladmin and db_securityadmin roles can add columns with a user defined type, but cannot alter those.
For example: let's say we create a user-defined type called MyDateType based on DateTime. We want to add a MyDateType column to table Person, fill it with something, and then set it to NOT NULL.
ALTER TABLE Person ADD SomeDate MyDateType NULL; -- This works
UPDATE Person SET SomeDate = OtherDate; -- This works
ALTER TABLE Person ALTER COLUMN SomeDate MyDateType NOT NULL; -- User has no permission error
When we change that last statement to:
ALTER TABLE Person ALTER COLUMN SomeDate DateTime NOT NULL; -- This works!
So, using the base data type lets me change the column, but using our user-defined type does not, which is strange to me, because adding works.
I could also solve it by using a default value and adding the column as NOT NULL straight away, but I am more interested in why ALTER COLUMN behaves this way. Anyone know why?
The exact error is this:
> Msg 15247, Level 16, State 5, Line 1
> User does not have permission to perform this action.
**Full repro script**
REVERT;
GO
DROP TABLE IF EXISTS Person
DROP TYPE IF EXISTS MyDateType
DROP USER IF EXISTS MyUser;
CREATE TYPE MyDateType FROM DATETIME;
CREATE TABLE Person (OtherDate DATETIME);
CREATE USER MyUser WITHOUT LOGIN;
ALTER ROLE db_datareader ADD MEMBER MyUser
ALTER ROLE db_datawriter ADD MEMBER MyUser
ALTER ROLE db_ddladmin ADD MEMBER MyUser
ALTER ROLE db_securityadmin ADD MEMBER MyUser
GO
GRANT REFERENCES ON TYPE::MyDateType TO MyUser
--GRANT EXECUTE ON TYPE::MyDateType TO MyUser
ALTER TABLE Person ADD SomeDate MyDateType NULL; -- This works
GO
EXECUTE AS User='MyUser'
UPDATE Person SET OtherDate = SomeDate; -- This works
UPDATE Person SET SomeDate = OtherDate; -- This works
ALTER TABLE Person ALTER COLUMN SomeDate MyDateType NOT NULL; -- User has no permission error
Plekuz
(83 rep)
Aug 29, 2025, 11:22 AM
• Last activity: Aug 31, 2025, 06:36 AM
5
votes
1
answers
111
views
Why does SQL Server Full Text Search (FTS) rank go down when match count goes up due to highly rare word
Given this query:
declare
@aboutPredicateOpt nvarchar(4000) =
N'IsAbout(
PICCO weight(0.1),
IC228 weight(0.1)
)';
select RowId, BrandId, ObjectId, IsActive, OrderNumber, SearchableDescription, Rank
from QuickSearchProducts_Temp qsp
join Containstable(QuickSearchProducts_Temp, SearchableDescription, @aboutPredicateOpt) as ct
on ct.[key] = qsp.RowId
where qsp.IsActive = 1
order by iif(OrderNumber in ( '6403072'), 0, 1), ct.rank desc
A number of rows are returned that contain both terms. Note that the row of interest has been artificially moved to the top (using ORDER BY, even though its rank is the same).
Now add a very rare term to the aboutPredicateOpt:
declare
@aboutPredicateOpt nvarchar(4000) =
N'IsAbout(
15301530 weight(0.1),
PICCO weight(0.1),
IC228 weight(0.1)
)';
And the rank for the row of interest drops dramatically even though it is the only row (of 50k) that contains that (rare) string.
And for just the rare word:
declare
@aboutPredicateOpt nvarchar(4000) =
N'IsAbout(
15301530 weight(0.1)
)';
Only that row is returned which is great.
Chats with AI suggested the FTS formula likely involves overall rarity across all rows and concluded that high rarity should increase the rank when a match is found. It seems to be the opposite, though.
Note that each row's SearchableDescription contains a distinct set of terms, so the per-row frequency of any given matched term should always be one (ignoring wildcard searches for now).
Now a few ways to compensate for this are being considered:
- Increase weights for "longer" or more complex terms or rarity (from histogram).
- Calculate a ranking that counts matches using like and combine it into the main ranking.
But mainly I want to understand why the Rank **drops by 50%** from the top tier it was formerly in. (SQL Server 2022)
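To see the per-term statistics the ranking formula draws on, the documented full-text DMVs can be queried directly (a sketch using the table name from the repro below):

```sql
-- How many documents (rows) contain each indexed term.
SELECT keyword, display_term, document_count
FROM sys.dm_fts_index_keywords(DB_ID(), OBJECT_ID(N'QuickSearchProducts_Temp'))
ORDER BY document_count DESC;

-- Per-row occurrences of the rare term in question.
SELECT display_term, document_id, occurrence_count
FROM sys.dm_fts_index_keywords_by_document(DB_ID(), OBJECT_ID(N'QuickSearchProducts_Temp'))
WHERE display_term = N'15301530';
```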
Table Definition with FTS index:
CREATE TABLE [QuickSearchProducts_Temp] (
RowId int IDENTITY(1, 1) NOT NULL,
ObjectId bigint NOT NULL,
BrandId smallint NOT NULL DEFAULT ((0)),
OrderNumber nvarchar(200) NOT NULL,
IsActive bit NOT NULL,
SearchableDescription nvarchar(1000) NULL
);
ALTER TABLE QuickSearchProducts_Temp add constraint PK_QSP primary key (RowId);
CREATE FULLTEXT CATALOG QuickSearchFullTextCatalog AS DEFAULT;
CREATE FULLTEXT INDEX ON QuickSearchProducts_Temp ( SearchableDescription )
KEY INDEX PK_QSP ON QuickSearchFullTextCatalog WITH CHANGE_TRACKING AUTO;
set identity_insert QuickSearchProducts_Temp ON;
-- The following inserts only 10k rows of the original 50k
-- but still illustrates the point with similar results.
--
insert into QuickSearchProducts_Temp
(RowId, ObjectId, BrandId, OrderNumber, IsActive, SearchableDescription)
select Number as RowId,
Number+10000 as ObjectId,
Number % 5 as BrandId,
iif(Number = 0, '6403072', convert(varchar(30), 10000-Number)) as OrderNumber,
1 as IsActive,
SearchableDescription =
case
when Number = 0
then '15301530 Picco Ic228 bing bang boom'
when Number <= 410
then 'Picco Ic228 bing bang boom'
when Number <= 959
then 'Picco bing bang boom'
else 'lotta other rows ' + convert(varchar(10), Number)
end
-- for this use numbers table or adjust https://www.sqlservercentral.com/steps/bones-of-sql-the-calendar-table
from Common.NumberSequence_0_10000_View
where Number < 10000



crokusek
(2162 rep)
Aug 20, 2025, 06:45 AM
• Last activity: Aug 21, 2025, 08:37 PM
0
votes
0
answers
14
views
RML Utilities – DTCTransaction EndTime Column Missing in Trace
While learning RML Utilities, I’m trying to generate an .RML file using ReadTrace. The tool fails because the trace must include the DTCTransaction event and its EndTime column, but EndTime is not available in the Profiler GUI for this event.
Is there any way to capture EndTime for DTCTransaction in a trace, or is this a known limitation? Any workaround for RML load testing without it?
Manthan Prabhu
(29 rep)
Aug 14, 2025, 01:03 PM
• Last activity: Aug 19, 2025, 10:57 PM
0
votes
0
answers
22
views
DEA 2.6 Startup Error – MEF GetExportedValue Prerequisite Import Failure
Installing Microsoft Database Experimentation Assistant (DEA) v2.6 on Server 2022 fails to start with:
System.InvalidOperationException: GetExportedValue cannot be called before prerequisite import 'Microsoft.DEA.Logging.TraceLogger..ctor (Parameter="configManager", ContractName="Microsoft.DEA.ConfigurationManager.IConfigManager")' has been set.
Tried:
- Installed all dependencies (.NET 4.8, VC++ Redist, SQL Server 2022).
- Uninstalled/reinstalled DEA, cleaned %appdata% and Program Files folders.
- Ran as admin, updated Windows, checked Event Viewer.
- Verified no locked DLLs or AV blocks.
Question:
What causes this MEF import failure in DEA 2.6, and how can it be fixed? Is there any way to force MEF to reload/resolve IConfigManager?

Manthan Prabhu
(29 rep)
Aug 18, 2025, 04:32 AM
• Last activity: Aug 19, 2025, 10:56 PM
1
votes
2
answers
3102
views
New SQL Server 2022 slower than old 2014 server
I'm planning and testing a new SQL Server 2022 install to migrate from an older 2014 database. I've noticed during performance testing some major discrepancies with some bulky queries.
Here is a query I use for testing:
INSERT INTO @volume
SELECT max(SUBSTRING(dbo.CLEACUM.TRADE_DATE, 1, 4)) AS TRADEYEAR,
sum(CASE [CANCEL] WHEN '1' THEN - cast(abs([TOT_QTY]) as bigint)
ELSE cast(abs([TOT_QTY]) as bigint) END) AS TOT_QTY,0
FROM dbo.CLEACUM LEFT OUTER JOIN
dbo.IBM_SECMASTER ON dbo.CLEACUM.TDE_SYMBOL = dbo.IBM_SECMASTER.DCS_CUSIP LEFT OUTER JOIN
dbo.VOL_IBM_CODES_IDA ON dbo.IBM_SECMASTER.SEC_CLASS >= dbo.VOL_IBM_CODES_IDA.NUMMIN AND
dbo.IBM_SECMASTER.SEC_CLASS = dbo.VOL_IBM_CODES_IDA.TYPEMIN AND
dbo.IBM_SECMASTER.SEC_TYPE = ABS(CONVERT(float, DATEDIFF(day,
dbo.CLEACUM.MATURITY_DATE, dbo.CLEACUM.TRADE_DATE) / 365.25)) AND dbo.VOL_IBM_CODES_IDA.DATEMIN = ABS(CONVERT(float, DATEDIFF(day, dbo.CLEACUM.MATURITY_DATE, dbo.CLEACUM.TRADE_DATE)
/ 365.25)) AND dbo.VOL_MaturityLabels.DATEMIN datepart(yyyy,getdate())-5)
and (cast(TRADE_DATE as date)<= VOLUME_SAME_PERIOD_DAY)
group by SUBSTRING(dbo.CLEACUM.TRADE_DATE, 1, 4)
The CLEACUM table is quite big. Around 17M rows over 3.2GB disk space. Nothing insane but still big for such a query to scan the whole table.
When I run on 2014, fresh start or live for days, the query completes in at most 30 seconds. When I run on 2022, it takes over 50 minutes to complete; the next run completes in 30 seconds. The `DBCC DROPCLEANBUFFERS` command will also make it run slow again, without a restart or a long wait.
From troubleshooting, I can see the IO loading the table from disk (reading ~1MB from mdf file), which I assume is the delay and the subsequent run will use data from cache instead. I'm ok with that but...
How is 2014 not doing the same thing? If I run that same query on 2014 after a machine reboot, it still runs in 30s or less! But in 2022 it needs to load it from disk before it can use the cache. Both give me about the same stats when measured:
Table '#B59594BD'. Scan count 0, logical reads 5, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'VOL_MaturityLabels'. Scan count 1, logical reads 192958, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 1746, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Workfile'. Scan count 14, logical reads 2872, physical reads 338, read-ahead reads 2534, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'VOL_CUSIPEquivalence'. Scan count 1, logical reads 8, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'VOL_IBM_CODES_IDA'. Scan count 1, logical reads 1736622, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'IBM_SECMASTER'. Scan count 1, logical reads 65, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'VOL_ExcludedAccounts'. Scan count 1, logical reads 195636, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'CLEACUM'. Scan count 1, logical reads 1040096, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'VOL_BusinessDaysPerYear'. Scan count 1, logical reads 1, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 24008 ms, elapsed time = 28572 ms.
Table '#AAEDD858'. Scan count 0, logical reads 5, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'VOL_MaturityLabels'. Scan count 1, logical reads 192958, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, page server reads 0, read-ahead reads 1746, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'Workfile'. Scan count 14, logical reads 2872, physical reads 338, page server reads 0, read-ahead reads 2534, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'VOL_CUSIPEquivalence'. Scan count 1, logical reads 8, physical reads 5, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'VOL_IBM_CODES_IDA'. Scan count 1, logical reads 1736622, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'IBM_SECMASTER'. Scan count 1, logical reads 65, physical reads 0, page server reads 0, read-ahead reads 88, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'VOL_ExcludedAccounts'. Scan count 1, logical reads 195636, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'CLEACUM'. Scan count 1, logical reads 1040096, physical reads 17904, page server reads 0, read-ahead reads 305712, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
Table 'VOL_BusinessDaysPerYear'. Scan count 1, logical reads 1, physical reads 0, page server reads 0, read-ahead reads 0, page server read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob page server reads 0, lob read-ahead reads 0, lob page server read-ahead reads 0.
SQL Server Execution Times:
CPU time = 40063 ms, elapsed time = 3041555 ms.
For context, both servers run on essentially the same hypervisor/storage, with both VMs configured with identical resources. The original server had no indexes of any kind, and In-Memory OLTP wasn't in use on 2014 (nor is it enabled on our 2022). The HA feature is configured on 2022, but the database in question isn't replicated yet. I made both as identical as possible to limit variables.
The only difference I noticed is that 2014 allocates all of max memory (16 GB) for SQL right away at launch, whereas 2022 seems to be more dynamic depending on queries.
I know the query isn't optimal, but my job is just to migrate it all to a newer server.
What am I missing or not understanding? How can I achieve the same level of performance on 2022 as on our current 2014?
Execution plan 2014 and 2022
---
Session wait stats are all PAGEIOLATCH_SH (my interpretation: the session is waiting on I/O for data before it can continue, which matches the IO stats).
I set a memory reservation on the VM and rebooted. Still no change, not even a second of difference.
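Given the memory-allocation difference noted above, a quick check is whether the 2022 instance is actually holding less of the database in its buffer pool than 2014. A minimal sketch, to be run on both instances and compared:

```sql
-- Sketch: per-database buffer pool usage (8 KB pages converted to MB).
-- If 2022 shows far less of the database cached, that would explain
-- the PAGEIOLATCH_SH waits and the physical/read-ahead reads.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pool_mb DESC;
```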
JulioQc
(143 rep)
Dec 5, 2023, 08:12 PM
• Last activity: Aug 18, 2025, 01:53 AM
1
votes
2
answers
1934
views
Why did queries become slower after upgrading SQL Server 2012 to 2022?
I'm in the process of upgrading my current set of databases from SQL Server 2012 to 2022. I've ensured I've matched my resources/specs to ensure both Servers match. I am facing a huge performance decrease in the new SQL Server. Queries that took seconds on 2012 are now taking 10+ minutes in 2022. I'...
I'm in the process of upgrading my current set of databases from SQL Server 2012 to 2022. I've matched my resources/specs to ensure both servers are equivalent. I'm facing a huge performance decrease on the new SQL Server: queries that took seconds on 2012 now take 10+ minutes on 2022. I've restored the databases from 2012 to 2022, so the indexes/data are the same. I've also updated statistics on all the databases/tables, and set the compatibility level to 110 (SQL Server 2012).
I'm at a loss on how to proceed. I could try to tune every individual query, but that isn't practical.
I've run the Blitz scripts; they show warnings because my plan cache is unstable. This is expected, since the new server needs to build its cache over time.
It looks like my main wait statistic is parallelism. Since my server CPU matches the old spec, I'm unsure how to resolve this.
Thank you,
Output of sp_PressureDetector.sql
First screenshot shows the 2012 waits, as requested by Erik.
Second screenshot shows the 2022 waits. When running the scripts, I see CXCONSUMER is the number one wait while troubleshooting a particular query.
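For anyone comparing the two servers, a minimal sketch for pulling the top instance-level waits (the same numbers the screenshots show), so the 2012 and 2022 outputs can be diffed side by side:

```sql
-- Sketch: top waits since instance start. Note that CXCONSUMER on its
-- own is often benign; sustained CXPACKET alongside long elapsed times
-- is the more telling parallelism signal.
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
ORDER BY wait_time_ms DESC;
```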


Gautam
(41 rep)
Apr 9, 2024, 05:52 PM
• Last activity: Aug 18, 2025, 01:49 AM
2
votes
2
answers
126
views
How can I tell if my BPSORT waits are the result of spilling batch mode sorts?
BPSORT sort waits [mean that I have batch mode sorts][1]. This is okay. However, I'm aware that [batch mode sort spills are very slow][2]. How can I tell if a query that appears to have high BPSORT waits is suffering from batch mode sort spills? Assume that I am on a live production server. This mak...
BPSORT waits mean that I have batch mode sorts. This is okay. However, I'm aware that batch mode sort spills are very slow. How can I tell if a query that appears to have high BPSORT waits is suffering from batch mode sort spills?
Assume that I am on a live production server. This makes both Extended Events and getting actual execution plans awkward. Other tools, such as the plan cache and Query Store, are available.
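Since Query Store is available, one angle is its per-plan tempdb usage, which spilling sorts would drive up. A sketch, assuming Query Store is enabled on the database (the `avg_tempdb_space_used` column exists from SQL Server 2017 onward):

```sql
-- Sketch: plans ranked by average tempdb usage recorded in Query Store
-- (reported in 8 KB pages, converted to MB here). High tempdb usage on
-- a query with BPSORT waits is consistent with batch mode sort spills,
-- though it cannot distinguish sort spills from hash spills.
SELECT TOP (20)
       q.query_id,
       p.plan_id,
       rs.avg_tempdb_space_used * 8 / 1024 AS avg_tempdb_mb,
       rs.count_executions
FROM sys.query_store_runtime_stats AS rs
JOIN sys.query_store_plan  AS p ON p.plan_id  = rs.plan_id
JOIN sys.query_store_query AS q ON q.query_id = p.query_id
ORDER BY rs.avg_tempdb_space_used DESC;
```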
J. Mini
(1269 rep)
Aug 14, 2025, 09:02 PM
• Last activity: Aug 16, 2025, 03:01 PM
14
votes
2
answers
507
views
On SQL 2022, DECLARE statement causes evaluation of database availability
Ok, this is a head scratcher for us. Hopefully someone out there can shed some light. This behaviour is only occurring on SQL 2022 for us -- and not on 2016, 2017 or 2019. Let's say we have a database that is offline. We can execute this fine: IF 1=0 EXEC [An_Offline_DB].dbo.[Some_Stored_Proc] As yo...
Ok, this is a head scratcher for us. Hopefully someone out there can shed some light. This behaviour is only occurring on SQL 2022 for us -- and not on 2016, 2017 or 2019.
Let's say we have a database that is offline. We can execute this fine:
IF 1=0
EXEC [An_Offline_DB].dbo.[Some_Stored_Proc]
As you'd expect, there is no error, because it clearly is not going to try and execute the proc.
However, if we add in a completely unrelated DECLARE statement like this:
DECLARE @x int
IF 1=0
EXEC [An_Offline_DB].dbo.[Some_Stored_Proc]
GO
on SQL 2022 we now get this error:
Database 'An_Offline_DB' cannot be opened because it is offline.
But on older versions of SQL it works fine.
We have recreated this issue with readable secondaries in an AlwaysOn Availability Group, so the reason for the database being unavailable is not relevant.
So it appears the presence of the DECLARE statement is causing a full evaluation of the script, but only for SQL 2022.
One side note is that if the database doesn't exist, we don't get an error:
DECLARE @x int
IF 1=0
EXEC [Does_Not_Exist].dbo.[Some_Stored_Proc]
GO
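Not an explanation, but a workaround sketch: wrapping the call in dynamic SQL defers compilation of the inner batch until it actually executes, so on SQL 2022 the dead `IF` branch never touches the offline database:

```sql
DECLARE @x int;

IF 1 = 0
    -- The string is only compiled if this branch runs, so the
    -- offline database is never evaluated at batch-compile time.
    EXEC ('EXEC [An_Offline_DB].dbo.[Some_Stored_Proc];');
GO
```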
Mike
(892 rep)
Aug 8, 2025, 01:31 AM
• Last activity: Aug 14, 2025, 07:51 AM
0
votes
1
answers
256
views
The operation could not be performed because OLE DB provider "MSOLEDBSQL" for linked server "ServerB" was unable to begin a distributed transaction
A Tale As Old As Time... - I have a Linked Server setup to a 3rd party database which I have no ownership or access to otherwise. Lets call it `ServerB`. I have a stored procedure (`SomeStoredProcedure`) that selects from Linked Server `ServerB`. If I explicitly set the isolation level to `SERIALIZA...
A Tale As Old As Time...
-
I have a Linked Server set up to a 3rd-party database which I have no ownership of or access to otherwise. Let's call it `ServerB`. I have a stored procedure (`SomeStoredProcedure`) that selects from Linked Server `ServerB`. If I explicitly set the isolation level to `SERIALIZABLE` and then try to insert the results of `SomeStoredProcedure` into a local temp table, I get the following error:
> OLE DB provider "MSOLEDBSQL" for linked server "ServerB" returned message "The parameter is incorrect.".
>
> Msg 7399, Level 16, State 1, Line 1
>
> The OLE DB provider "MSOLEDBSQL" for linked server "ServerB" reported an error. One or more arguments were reported invalid by the provider.
>
> Msg 7391, Level 16, State 2, Line 1
>
> The operation could not be performed because OLE DB provider "MSOLEDBSQL" for linked server "ServerB" was unable to begin a distributed transaction.
If I just execute the procedure directly (without inserting the results into a local temp table) it works. If I don't use the `SERIALIZABLE` isolation level, it also works. (Other explicit isolation levels work as well.)
I have tried disabling `Enable Promotion of Distributed Transactions for RPC` as mentioned in other answers:


But no dice, same error.
I understand that the query wants to promote to a distributed transaction in the above scenario since a Linked Server is involved (I assume enforcing `SERIALIZABLE` isolation is more involved across a remote server). But is it possible to prevent it from promoting to a distributed transaction under these circumstances?
The same issue is reproducible using `sp_executesql` to select from the Linked Server as well. Repro code for example:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
DROP TABLE IF EXISTS #LocalTempTable;
CREATE TABLE #LocalTempTable (ID INT);
INSERT INTO #LocalTempTable (ID)
EXEC sp_executesql N'SELECT ID FROM ServerB.DatabaseName.SchemaName.SomeTable;';
*Reminder: I don't own this 3rd-party server and can't change any settings on it, such as actually enabling the MSDTC.*
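One workaround sketch, assuming the batch doesn't actually need `SERIALIZABLE` guarantees to cover the remote read: drop the isolation level back down just for the linked-server call, then restore it. This avoids the distributed-transaction promotion entirely, at the cost of weaker isolation on the remote read:

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

DROP TABLE IF EXISTS #LocalTempTable;
CREATE TABLE #LocalTempTable (ID INT);

-- READ COMMITTED for the remote call avoids promoting to a
-- distributed transaction (which would require MSDTC on ServerB).
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

INSERT INTO #LocalTempTable (ID)
EXEC sp_executesql N'SELECT ID FROM ServerB.DatabaseName.SchemaName.SomeTable;';

-- Restore SERIALIZABLE for the rest of the batch.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```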
J.D.
(41063 rep)
Jun 27, 2025, 06:06 PM
• Last activity: Aug 10, 2025, 10:03 AM
0
votes
0
answers
51
views
Why does enabling Snapshot Isolation use up CPU?
I'm currently enabling Snapshot Isolation with `ALTER DATABASE [MyDB] SET ALLOW_SNAPSHOT_ISOLATION ON;`. It was taking longer than I expected, so I ran `sp_WhoIsActive`. I was surprised to discover that it has used 81,519 units of CPU in 83 seconds. Why does this happen?
I'm currently enabling Snapshot Isolation with `ALTER DATABASE [MyDB] SET ALLOW_SNAPSHOT_ISOLATION ON;`. It was taking longer than I expected, so I ran `sp_WhoIsActive`. I was surprised to discover that it has used 81,519 units of CPU in 83 seconds. Why does this happen?
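While waiting, a minimal diagnostic sketch to see what the `ALTER DATABASE` session itself is doing (status, current wait, accumulated CPU), which can help separate "burning CPU" from "waiting on existing transactions to drain":

```sql
-- Sketch: find the running ALTER DATABASE request and inspect its
-- current state. session_id can be cross-checked against sp_WhoIsActive.
SELECT r.session_id,
       r.status,
       r.command,
       r.wait_type,
       r.cpu_time,
       r.total_elapsed_time
FROM sys.dm_exec_requests AS r
WHERE r.command LIKE 'ALTER DATABASE%';
```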
J. Mini
(1269 rep)
Aug 8, 2025, 09:14 PM
Showing page 1 of 20 total questions