Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
18472
views
Log-in as service account in SQL Server Management Studio
The stored procedure that performs a CTE in production is executed using a service account. However, the stored procedure returns empty results, and when I tried checking this using SSMS, my domain account has no execute permission in production.
I tried logging into SQL Server Management Studio using the service account but I ran through some problems. First was resolved by following this link - https://dba.stackexchange.com/questions/173785/the-certificate-chain-was-issued-by-an-authority-that-is-not-trusted
However, once that was resolved, it returns an error as if no user name was provided:
Question is, can I login using a service account in SQL Server Management Studio?
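A minimal sketch of an alternative that avoids launching SSMS as the account entirely, assuming a DBA grants you IMPERSONATE on the service login (account and procedure names are hypothetical):
EXECUTE AS LOGIN = N'DOMAIN\svc_app';  -- switch security context to the service account
SELECT SUSER_SNAME();                  -- confirm the impersonated context
EXEC dbo.usp_MyProc;                   -- run the procedure as the service account would
REVERT;                                -- switch back to your own login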

Patrick
(111 rep)
Oct 21, 2018, 07:27 AM
• Last activity: Aug 5, 2025, 11:07 PM
0
votes
0
answers
12
views
SQL Server Estimates don't use AVG_RANGE_ROWS for Uniqueidentifer Parameter
I'm trying to debug a very weird query row estimation.
The query is very simple. I have a table orders.OrderItem that contains, for each order (column OrderId), the items of that order.
SELECT count(*)
FROM orders.OrderItem
WHERE OrderId = '5a7e53c4-fc70-f011-8dca-000d3a3aa5e1'
According to the statistics from IX_OrderItem_FK_OrderId (a normal unfiltered foreign key index: CREATE INDEX IX_OrderItem_FK_OrderId ON orders.OrderItem(OrderId)), the density is 1.2620972E-06 with 7423048 rows, so about ~9.3 items per order (ignoring the items with OrderId = NULL; including them there are even fewer).
The statistics are created with FULLSCAN, and are only slightly out of date (around ~0.2% new rows since the last recompute).
| Name | Updated | Rows | Rows Sampled | Steps | Density | Average key length | String Index | Filter Expression | Unfiltered Rows | Persisted Sample Percent |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IX_OrderItem_FK_OrderId | Aug 3 2025 4:36PM | 7423048 | 7423048 | 198 | 0.1649756 |26.443027 | "NO " | NULL | 7423048 | 100 |
| All density | Average Length | Columns |
| --- | --- | --- |
| 1.2620972E-06 | 10.443027 | OrderId |
| 1.3471555E-07 | 26.443027 | OrderId, Id |
The query plan, however, estimates that the query returns 205.496 rows. In reality there are 0 results, because that OrderId doesn't exist.
Detailed Query Plan:
https://www.brentozar.com/pastetheplan/?id=hVKYNLmXSU
It probably uses the histogram to come up with the estimate.
The value should fall into the following bucket with RANGE_HI_KEY = 'a39932d8-aa2c-f011-8b3d-000d3a440098', but then the estimate should be 6.87 according to AVG_RANGE_ROWS.
It somehow looks like it uses the EQ_ROWS from the previous bucket (but 205 might also just be coincidence).
| RANGE_HI_KEY | RANGE_ROWS | EQ_ROWS | DISTINCT_RANGE_ROWS | AVG_RANGE_ROWS |
| --- | --- | --- | --- | --- |
| 9d2e2bea-aa6e-f011-8dca-000d3a3aa5e1 | 12889 | 205 | 2412 | 5.343698 |
| a39932d8-aa2c-f011-8b3d-000d3a440098 | 21923 | 107 | 3191 | 6.8702602 |
OPTION (RECOMPILE) does not help.
Can somebody explain how SQL Server (in particular Azure SQL) comes up with that number?
- Does it really think that the parameter is close enough to the bucket start, and just takes the EQ_ROWS value even though the AVG_RANGE_ROWS is a lot smaller?
- Does it not understand the parameter because it's defined as VARCHAR? If I replace it with DECLARE @OrderId UNIQUEIDENTIFIER = '5a7e...'; WHERE OrderId = @OrderId, the estimate drops to 6. But if that's the reason, where does the estimate of 205 come from?
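A sketch for checking the relevant histogram step programmatically, using the object and statistics names from the question (sys.dm_db_stats_histogram is available in Azure SQL and SQL Server 2016 SP1 CU2+):
SELECT h.step_number, h.range_high_key, h.range_rows, h.equal_rows, h.average_range_rows
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_histogram(s.object_id, s.stats_id) AS h
WHERE s.object_id = OBJECT_ID(N'orders.OrderItem')
  AND s.name = N'IX_OrderItem_FK_OrderId'
ORDER BY h.step_number;
Because the literal is a VARCHAR, it is implicitly converted to UNIQUEIDENTIFIER, and in some cases the estimator then falls back to coarser guesses; declaring the parameter with the column's exact type avoids that conversion, as the question already observes.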
Jakube
(101 rep)
Aug 5, 2025, 04:53 PM
21
votes
4
answers
5275
views
SQL Server cardinality hint
Is there a way to 'inject' a cardinality estimate into the SQL Server optimizer (any version)?
I.e., something similar to Oracle's cardinality hint.
My motivation is driven by the article How Good Are Query Optimizers, Really? \[1], where they test the influence of the cardinality estimator on the selection of a bad plan. It would therefore be sufficient if I could force SQL Server to 'estimate' the cardinalities precisely for complex queries.
---
\[1] Leis, Viktor, et al. "How good are query optimizers, really?"
Proceedings of the VLDB Endowment 9.3 (2015): 204-215.
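One well-known, if blunt, way to inject a cardinality into SQL Server is to fake the row and page counts on a table's statistics - a sketch with a hypothetical table name, not an exact equivalent of Oracle's per-operator hint:
UPDATE STATISTICS dbo.Facts WITH ROWCOUNT = 10000000, PAGECOUNT = 100000;
This only skews table-level cardinality, not join selectivity, so it is at best a coarse research tool rather than a production technique.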
Radim Bača
(233 rep)
Mar 31, 2017, 07:17 AM
• Last activity: Aug 5, 2025, 12:16 PM
4
votes
1
answers
1046
views
Different collation between master database and application database
I just installed SQL Server 2019. Unfortunately I didn't notice the collation while installing. Following are the collations:
SELECT name, collation_name
FROM sys.databases
| name | collation_name |
|--------------------|--------------------------------|
| master | Latin1_General_CI_AS |
| tempdb | Latin1_General_CI_AS |
| model | Latin1_General_CI_AS |
| msdb | Latin1_General_CI_AS |
| ReportServer | Latin1_General_100_CI_AS_KS_WS |
| ReportServerTempDB | Latin1_General_100_CI_AS_KS_WS |
| ApplicationDB | SQL_Latin1_General_CP1_CI_AS |
use ApplicationDB
GO
SELECT SERVERPROPERTY(N'Collation')
> SQL_Latin1_General_CP1_CI_AS
The application vendor recommends that the database be SQL_Latin1_General_CP1_CI_AS, but they're unsure about the system databases, as they're not database experts. Will there be any consequences? We would be running SSRS reports pulling data from the application database. Will this be an issue? I'm confused.
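Where this mismatch typically bites is joins between the application database and tempdb objects, since temp tables inherit the server/model collation. A sketch of the usual workaround, with hypothetical table and column names:
SELECT a.CustomerName
FROM ApplicationDB.dbo.Customers AS a
JOIN #staging AS s
  ON a.CustomerName = s.CustomerName COLLATE SQL_Latin1_General_CP1_CI_AS;
  -- the explicit COLLATE resolves "Cannot resolve the collation conflict" errors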
Master_Roshy
May 12, 2022, 05:46 PM
• Last activity: Aug 5, 2025, 07:09 AM
2
votes
1
answers
665
views
Split SSIS project in to multiple files to avoid merge hell
Since SSIS does not merge nicely, I was wondering how to split up a big SSIS package, with the idea of having multiple devs working on it simultaneously and a minimal chance of merge conflicts.
In SSIS 2016 I found these options:
- ***package parts***, but apparently they don't share connection managers. I don't want to have 100 different connection managers.
- ***subpackages***, but this doesn't look very clean and I also wonder if this is what they are intended for. Also, the debugger goes crazy opening the subpackages while running. Any other drawbacks I should know of?
I can't be the only person with this problem. Is there another way to achieve this?
Sam Segers
(129 rep)
Jan 19, 2017, 10:47 AM
• Last activity: Aug 5, 2025, 06:05 AM
0
votes
0
answers
17
views
How to update existing data in Master Data Services SQL Server 2022?
I am learning to use Master Data Services for the first time and am currently stuck on **updating** existing data. This is also my first time using SSIS, so I am currently learning how to use the SQL Command to update data.
**Overview data load workflow**
1. Data is being stored into a staging table (DQS_STAGING_DATA)
2. When the load is successful, data is then loaded from DQS_STAGING_DATA into each staging table in MDS with import type 0 (e.g. stg.Person).
**My current SSIS workflow**
[Loading data into MDS stg.Person and stg.Company](https://i.sstatic.net/LBhe3Ldr.png)
**What I have tried**
Changed the import type from 1 to 0:
> 1: Create new members only. Any updates to existing MDS data fail.
> 0: Create new members. Replace existing MDS data with staged data, but only if the staged data is not NULL.
How do I update data inside of the stg.Person and stg.Company using my current SSIS workflow and ensure that Master Data Excel Add-ins will reflect the new data? Both of these staging tables also have their own subscription view.
**My expectation**
1. A simple, beginner-friendly, step-by-step explanation of how to update existing data in Master Data Services.
2. Comment and feedback on my current SSIS pipeline.
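For entity-based staging, the usual pattern is to insert into the generated staging table with ImportType = 0 and then run the generated sweep procedure. A sketch assuming an entity named Person in version VERSION_1 (MDS generates stg.Person_Leaf and stg.udp_Person_Leaf; the batch tag is arbitrary):
INSERT INTO stg.Person_Leaf (ImportType, ImportStatus_ID, BatchTag, Code, Name)
SELECT 0, 0, N'PersonLoad_001', Code, Name   -- ImportType 0 = create new / update existing members
FROM dbo.DQS_STAGING_DATA;

EXEC stg.udp_Person_Leaf
     @VersionName = N'VERSION_1',
     @LogFlag     = 1,
     @BatchTag    = N'PersonLoad_001';
After the sweep runs, the subscription views and the Excel add-in should reflect the updated members.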
Amir Hamzah
(11 rep)
Aug 5, 2025, 05:18 AM
1
votes
1
answers
503
views
Connection to database causes SSPI context error
I have migrated a SQL Server from another server box that was decommissioned. They have the same name and the same IP address. However, when I connect to the server from an application using a trusted connection, I get the "SSPI context not generated" error.
What I have done:
* I have used setspn -X to confirm there is no duplicate SPN.
* I have changed the protocol order to: Shared Memory, Named Pipes, TCP/IP.
* I have verified that when I restart SQL Server, the service registers and deregisters its SPN. This was found in the SQL Server log.
* I have checked in SQL Server Configuration Manager that the TCP/IP network protocol has the right IP and is active and enabled for both 32-bit and 64-bit.
I am running out of ideas and am still getting the same error. I can't find any log entries that point to Kerberos.
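One quick check worth adding: whether connections are actually falling back to NTLM rather than Kerberos. A minimal sketch, run from an affected connection:
SELECT auth_scheme            -- 'KERBEROS' or 'NTLM'
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
If this returns NTLM for TCP connections, the SPN is likely missing or registered under the wrong account.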
damola
(11 rep)
Mar 21, 2016, 11:12 PM
• Last activity: Aug 4, 2025, 04:01 PM
0
votes
1
answers
144
views
Unexpected LOB_COMPACTION on DATETIME Column with Ola Hallengren's IndexOptimize
I'm using Ola Hallengren's IndexOptimize script for index maintenance on my SQL Server databases. We have a clustered and 3 non-clustered indexes on a big table with over 685 million rows.
A reorganize with LOB_COMPACTION = ON ran for more than 7 hours into the morning on one non-clustered index until it was killed explicitly. This index has only one key column, of DATETIME type, and no included columns.
There is one geography-typed column on this table, but it is not part of that index.
According to my understanding, LOB_COMPACTION should not be applicable to this index, since DATETIME is not a LOB data type.
I expect that when REORGANIZE runs on this non-clustered index, it should run without LOB_COMPACTION = ON.
Here is the script I'm running in my SQL Agent job:
EXECUTE dbo.IndexOptimize @Databases = 'Database1,Database2',
@LogToTable = 'Y',
@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE',
@FragmentationHigh = 'INDEX_REBUILD_ONLINE',
@FragmentationLevel1 = 15,
@FragmentationLevel2 = 50,
@FillFactor = 90,
@Indexes = 'ALL_INDEXES',
@SortInTempdb = 'Y',
@MaxNumberOfPages = 10000000,
@Execute = 'Y';
We did not specify the @LOBCompaction parameter, so it defaults to 'Y'.
Has anyone else encountered this issue? Is there a known problem with the script, or am I missing something in my configuration?
Any insights or suggestions would be greatly appreciated!
Thanks!
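If the intent is simply to skip LOB compaction for this job, the parameter can be set explicitly; a sketch of the relevant change, keeping the other parameters as in the call above:
EXECUTE dbo.IndexOptimize @Databases = 'Database1,Database2',
@FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE',
@FragmentationHigh = 'INDEX_REBUILD_ONLINE',
@LOBCompaction = 'N',   -- skip LOB compaction during REORGANIZE
@Execute = 'Y';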
Vikas Kumar
(11 rep)
Aug 30, 2024, 12:58 PM
• Last activity: Aug 4, 2025, 02:01 PM
-1
votes
1
answers
70
views
If Read Committed Snapshot Isolation is already enabled, what is the cost of enabling Snapshot Isolation?
Suppose that I have a database with Read Committed Snapshot Isolation already enabled. Is there any reason at all to not also enable Snapshot Isolation?
Intuitively, you would think that the row versions would be kept around for longer. [The documentation dismisses this](https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide?view=sql-server-ver17#behavior-when-reading-data) .
> Even though READ COMMITTED transactions using row versioning provides a transactionally consistent view of the data at a statement level, row versions generated or accessed by this type of transaction are maintained until the transaction completes.
So I am left without any ideas.
Assume SQL Server 2022. SQL Server 2025 brought with it Optimized Locking, which creates just enough uncertainty in my mind that I don't want to ask about it here.
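An empirical way to test the cost question on a non-production copy: enable the option and watch the version store under the same workload. A sketch with a hypothetical database name:
ALTER DATABASE [YourDb] SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Compare version store usage before and after enabling:
SELECT database_id, reserved_page_count, reserved_space_kb
FROM sys.dm_tran_version_store_space_usage;
If the documentation is right, the reserved figures should not grow merely because the option is on; only transactions actually running under SNAPSHOT would extend version retention.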
J. Mini
(1225 rep)
Aug 1, 2025, 08:05 PM
• Last activity: Aug 4, 2025, 01:07 PM
0
votes
1
answers
65
views
Sql Server 2019, page compression, pre-/post-application
I've got this ETL process that produces json blobs from a corpus once a week. I've been going through the process trying to optimize it.
I ran sp_estimate_data_compression_savings on the output table, and it said it was highly compressible, so I applied it. The savings took a 152 GB table down to a 22 GB table. I thought: great! I'll just declare the table with page compression when we load into it.
Now, I know that the compression isn't always as efficient when you load into an empty table as when you apply the compression afterwards, but to my surprise, when I loaded into the empty table with page compression declared, I got effectively ZERO compression. The index (which is not very compressible) got the expected compression, but the heap got none. sp_estimate_data_compression_savings still says it can get great compression by *reapplying* PAGE compression.
I read some documentation that said loading into an empty table with PAGE compression starts out doing ROW compression and then compresses to PAGE when it hits the end of a block and thinks it can do more compression. I took that as explaining why you don't always get as much compression as applying it post-load, but ZERO?
The additional post step takes a while, and I've often been willing to sacrifice 10-20% of the compression to spread the cost. I'm just surprised that pre-application got zero compression on load. Is there something I'm missing here?
CREATE TABLE [dbo].[JSONmaster2](
[ID] [int] NOT NULL,
[JSON] [varchar](8000) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
[InsertDate] [smalldatetime] NOT NULL
)
WITH(DATA_COMPRESSION = PAGE)
GO
CREATE UNIQUE NONCLUSTERED INDEX [ixJSONmaster-KeyID] ON [dbo].[JSONmaster2]
([ID] ASC)
WITH (DATA_COMPRESSION = PAGE)
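For comparison, the post-load application that does achieve the 22 GB result would be, as a sketch:
-- Rebuild the heap and its nonclustered index after the load, applying PAGE compression:
ALTER TABLE [dbo].[JSONmaster2] REBUILD WITH (DATA_COMPRESSION = PAGE);
ALTER INDEX [ixJSONmaster-KeyID] ON [dbo].[JSONmaster2] REBUILD WITH (DATA_COMPRESSION = PAGE);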
user1664043
(379 rep)
Jul 31, 2025, 11:46 PM
• Last activity: Aug 4, 2025, 10:27 AM
5
votes
1
answers
660
views
Reinitialize Table Values in SQL SSDT Unit Testing
I am creating SQL Server Unit Tests. We are testing various stored procedures.
In unit testing principles, it is good practice to set up a small database table, populate values, tear down (truncate/delete) the database tables, and set up again for each test. This way every unit test has a clean environment to validate sprocs that insert, select, update, delete, etc.
Does anyone know where or how to reinitialize the table values in SQL unit testing? Resources are pretty new for unit testing in SQL SSDT VS 2017, so I think a lot of people are trying to figure out and understand this.
Feel free to show or add pictures below.
http://www.sqlservercentral.com/articles/Unit+Testing/155651/
http://www.erikhudzik.com/2017/08/23/writing-sql-server-unit-tests-using-visual-studio-nunit-and-sqltest/
Pictures in Visual Studio SSDT:
Also, trying to review this class in SQLDatabaseSetup.cs:
using Microsoft.Data.Tools.Schema.Sql.UnitTesting;

[TestClass()]
public class SqlDatabaseSetup
{
    [AssemblyInitialize()]
    public static void InitializeAssembly(TestContext ctx)
    {
        // Set up the test database based on settings in the
        // configuration file
        SqlDatabaseTestClass.TestService.DeployDatabaseProject();
        SqlDatabaseTestClass.TestService.GenerateData();
    }
}
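Alongside the assembly-level initializer above, each SSDT test can carry its own pre-test and post-test T-SQL scripts; a minimal sketch of a pre-test reset of the kind pasted into the test's pre-test pane (table names hypothetical):
-- Pre-test script: restore a known small data set before each test run
DELETE FROM dbo.OrderItems;   -- children first, to satisfy foreign keys
DELETE FROM dbo.Orders;
INSERT INTO dbo.Orders (OrderId, CustomerName) VALUES (1, N'Test customer');
INSERT INTO dbo.OrderItems (OrderId, Sku, Qty) VALUES (1, N'SKU-1', 2);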

user162241
Oct 24, 2018, 05:09 AM
• Last activity: Aug 4, 2025, 09:09 AM
1
votes
1
answers
1493
views
Error: 9245, Severity: 16, State: 1. / "During the last time interval X query notification errors were suppressed"
We are receiving this notification several hundred times a day, and while we regularly check notifications from severity 16 errors, this isn't something we're really concerned about.
I've read the article and replies at https://dba.stackexchange.com/questions/33750/error-9245-severity-16-state-1-during-the-last-time-interval-xxx-query-n and while that's very informative, I'd like to know how we can suppress it.
We have an automated alert set up for severity 016 errors and we don't exactly categorize this one as an "action item", yet we are receiving hundreds of them daily.
Is there a way to suppress it?
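One approach that may fit, assuming sysadmin rights: severity-based alerts fire only for messages written to the error log, so disabling logging for message 9245 suppresses the alert without touching other severity 16 errors. A sketch:
EXEC sp_altermessage @message_id = 9245, @parameter = 'WITH_LOG', @parameter_value = 'FALSE';
Note this also stops the message from appearing in the SQL Server error log, so weigh that trade-off first.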
MyDoggieJessie
(19 rep)
Feb 6, 2018, 02:31 PM
• Last activity: Aug 4, 2025, 07:10 AM
0
votes
0
answers
13
views
Linked Server failure on clustered SQL Server
I have two clustered Microsoft SQL Servers (SQLA & SQLB) installed, and confirmed that both servers have an ODBC connector for a local PostgreSQL server.
From that ODBC connection, I have a linked server created for use in some stored procedures that fails at least once a fortnight with this error message:
> Cannot initialize the data source object of OLE DB provider "MSDASQL" for linked server "POSTGRESP23"
When troubleshooting the issue, the ODBC connector on both SQLA and SQLB tests successfully from the System DSN menu on the server; the error originates from the linked server.
Currently, to fix this with minimal downtime, I simply delete the linked server and recreate it, pointing it at the same ODBC object. However, this is not a sustainable process.
Can anyone suggest where to look when troubleshooting? I'm at a loss.
Linked Server settings:
EXEC master.dbo.sp_addlinkedserver @server = N'POSTGRESP23', @srvproduct=N'', @provider=N'MSDASQL', @datasrc=N'PSQLPROD'
EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'POSTGRESP23',@useself=N'False',@locallogin=NULL,@rmtuser=N'postgres',@rmtpassword='########'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'collation compatible', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'data access', @optvalue=N'true'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'dist', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'pub', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'rpc', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'rpc out', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'sub', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'connect timeout', @optvalue=N'0'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'collation name', @optvalue=null
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'lazy schema validation', @optvalue=N'false'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'query timeout', @optvalue=N'0'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'use remote collation', @optvalue=N'true'
EXEC master.dbo.sp_serveroption @server=N'POSTGRESP23', @optname=N'remote proc transaction promotion', @optvalue=N'true'
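In the meantime, a sketch of a probe that could run as an Agent job step to catch the failure early (sp_testlinkedserver raises an error when the connection cannot be established):
BEGIN TRY
    EXEC master.dbo.sp_testlinkedserver N'POSTGRESP23';
    PRINT 'Linked server OK';
END TRY
BEGIN CATCH
    PRINT 'Linked server failed: ' + ERROR_MESSAGE();  -- alert or log here
END CATCH;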
**Additional Information**
psqlODBC_X64 is installed on both machines already, from https://odbc.postgresql.org/
System DSN settings:


NathanM
Jul 30, 2025, 10:48 PM
• Last activity: Aug 4, 2025, 03:37 AM
1
votes
1
answers
462
views
How can I improve low IOPS performance for an Azure SQL Database Hyperscale elastic pool?
I'm performance testing a series of data migration scripts which mainly consist of large INSERTs (sometimes 100M-200M records) fed by often complex SELECTs with a lot of hash joins.
Configuration:
- **Hyperscale: Premium-series, 16 vCores**
- Max **4TB** of locally-redundant backup storage; however, for most of the tests only ~1TB was allocated
Problem: during one of the critical queries, which takes circa 2 hours and performs a 100M+ row insert into a heavily indexed table (7 indexes), there seem to be extremely wild swings in IOPS write performance:
I set **MAXDOP=16** manually for the query (1 DOP per vCPU)
Questions:
1. My best guess is that IOPS gets severely degraded when the local SSD (presumably with a maximum of 2560 IOPS per vCore) fills up and the shared storage instance becomes the severe bottleneck. Is this accurate? Note that in this case this happens roughly 50% of the time, and virtually everything grinds to a halt.
2. If this is the case, is there a way to calculate the effective minimum IOPS of the shared storage instance?
3. Is there a way to improve the minimum IOPS of the shared storage instance, e.g. by manually over-allocating the storage space? (this would be the standard approach for a lot of other configurations where e.g. you get 3 IOPS per GB)
4. How can I get the actual IOPS figures rather than %?
5. Any other suggestions/tips?
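For question 4, one sketch that yields absolute figures rather than percentages: sample the cumulative file-stats counters twice and take the delta (the 60-second interval and current-database scope are assumptions):
SELECT file_id, num_of_writes, num_of_bytes_written
INTO #t0
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL);

WAITFOR DELAY '00:01:00';  -- sampling interval: 60 seconds

SELECT t1.file_id,
       (t1.num_of_writes - t0.num_of_writes) / 60.0 AS write_iops,
       (t1.num_of_bytes_written - t0.num_of_bytes_written) / 60.0 / 1048576.0 AS write_mb_per_s
FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS t1
JOIN #t0 AS t0 ON t0.file_id = t1.file_id;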
Thanks in advance

Andrew G
(71 rep)
Mar 5, 2024, 03:39 AM
• Last activity: Aug 4, 2025, 02:46 AM
1
votes
1
answers
140
views
Does sql config manager changes (example: service account) on 1 FCI node auto update it onto the other node?
Link: https://learn.microsoft.com/en-us/sql/sql-server/failover-clusters/windows/always-on-failover-cluster-instances-sql-server?view=sql-server-ver15#FCIelements
> Each node in the resource group maintains a synchronized copy of the
> configuration settings and check-pointed registry keys to ensure full
> functionality of the FCI after a failover
My understanding is that service account and other sql config manager settings are stored in the registry.
Does the above quote imply that when any SQL Server Configuration Manager change is made on one FCI node (SQL Server service account, SQL Agent service account, SQL Integration Services service account, protocol changes [TCP/shared memory]), it will automatically be applied on node 2?
For example, the following link suggests that a password change made on one node gets communicated to the other nodes:
https://learn.microsoft.com/en-us/sql/sql-server/failover-clusters/windows/failover-cluster-instance-administration-and-maintenance?view=sql-server-ver15#changing-service-accounts
> You should not change passwords for any of the SQL Server service
> accounts when an FCI node is down or offline. If you must do this, you
> must reset the password again by using SQL Server Configuration
> Manager when all nodes are back online.
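A quick way to verify whether a given change did propagate: compare what each node reports after the change (run on both nodes):
SELECT servicename, service_account, status_desc
FROM sys.dm_server_services;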
variable
(3590 rep)
Oct 1, 2022, 05:51 AM
• Last activity: Aug 4, 2025, 12:07 AM
2
votes
1
answers
2865
views
SQL server suddenly backups are slow and async_io_completion shows suspended for long time
50 GB database - a full backup normally takes 14 minutes (10 minutes backup, 4 minutes to compress). Suddenly the backup execution time has been climbing upward to 3-4 hours. The backup destination is a network path.
To eliminate the network as an issue, I used backup to 'NUL:' - same result, 4 hrs.
During the full backup, the ASYNC_IO_COMPLETION wait shows the session suspended for long stretches; examples below:
(1161299ms)ASYNC_IO_COMPLETION 1pm / 22% complete
(2838894ms)ASYNC_IO_COMPLETION 2pm / 60% complete
(3969124ms)ASYNC_IO_COMPLETION 230pm / 77% complete
(4818588ms)ASYNC_IO_COMPLETION 245pm / 80% complete
(5816122ms)ASYNC_IO_COMPLETION 3pm / 86% complete
(6695511ms)ASYNC_IO_COMPLETION 315pm / 89% complete
Checked that the VLF count is low (less than 100). Tried SQL native backup - same results. DBCC CHECKDB - no errors.
**HERE IS THE KICKER**... I took a full backup of this 50 GB database and created a new db from it on the same SQL instance (alongside the original db). Executing a full backup on the newly created db, the backup time is back to the normal 14 minutes.
Does anyone have suggestions on why the new copy of the same db performs correctly? Is there something SQL-related that can be checked/verified to determine the root cause? I'm leaning towards a SAN or VM issue, but need proof.
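One place to look for proof: msdb's backup history gives duration and effective throughput per run, which can show exactly when the degradation started (database name hypothetical):
SELECT backup_start_date,
       DATEDIFF(SECOND, backup_start_date, backup_finish_date) AS duration_s,
       backup_size / 1048576.0 AS size_mb,
       backup_size / 1048576.0
           / NULLIF(DATEDIFF(SECOND, backup_start_date, backup_finish_date), 0) AS mb_per_s
FROM msdb.dbo.backupset
WHERE database_name = N'MyDb' AND type = 'D'   -- 'D' = full database backup
ORDER BY backup_start_date DESC;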
**Env info:**
VM environment
Windows 2016 32GB
SQL Standard 2017 CU16
jay
(31 rep)
Jun 10, 2020, 04:26 PM
• Last activity: Aug 3, 2025, 02:02 PM
0
votes
2
answers
146
views
Moving tables to different database within same sql server
There is a SQL Server with around 100 databases on it. I have to query a few tables from one of the databases. When I query, it's very slow, and I think CPU utilization is very high at that time. I have also noticed that there are queries against other tables from other services which affect the overall performance of querying the database.
I am thinking of moving these tables to a different database within the same SQL Server. Do you think this will solve the issue, or will it not improve the performance of querying my tables? (I only care about my tables.) Will it have no impact because the new database will also be on the same SQL Server? Please provide detailed answers to my questions.
Vivek Nuna
(101 rep)
Jun 21, 2023, 07:00 AM
• Last activity: Aug 3, 2025, 07:06 AM
0
votes
1
answers
139
views
How to execute complete database script without checks on objects existence
I'm working on a customer data archiving.
I created a complete database script to prepare the new database for archiving data with the same DB schema of the production one.
At the moment I'm preparing a Test environment to verify all the steps we will execute in production later, but the script is, obviously, raising a lot of errors because of missing external databases' views, functions and stored procedures being referred to.
Is it possible to switch off object-existence checks during the script execution, so I can create the complete database schema for my tests?
---
I decided to proceed with tests about restoring the whole production database and then clear all tables data.
To clear the data I will try suggestions I found in replies to this question on Stack Overflow, specifically these two:
- https://stackoverflow.com/a/12719464/21116580
- https://stackoverflow.com/a/22527902/21116580
I will come back with the results as soon as I complete my tests.
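For reference, the pattern from those two answers is roughly the following sketch (sp_MSforeachtable is undocumented but widely used):
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';           -- disable FKs
EXEC sp_MSforeachtable 'DELETE FROM ?';                                  -- clear all rows
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';  -- re-enable FKs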
El See
(1 rep)
May 20, 2024, 03:29 PM
• Last activity: Aug 3, 2025, 05:05 AM
0
votes
1
answers
542
views
Azure Dedicated SQL pool - group has db_datareader access but cannot login
We have created Test AD Group and they should have readonly access to the database (schemas and tables, views) in the Azure SQL dedicated pool.
Our DBA team set this up, but the users in the Test AD Group cannot log in unless they select the database as the default database in the connection dialog in SSMS.
Is there a way to allow them to log in while having ONLY read-only access to the database (including future schemas and tables that will be created)?
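A sketch of one common fix, using the group name from the question: create a user for the group in master (so connections without an explicit database succeed) as well as in the user database, where the role grant also covers future tables:
-- In master (no permissions granted; just allows the initial connection):
CREATE USER [Test AD Group] FROM EXTERNAL PROVIDER;

-- In the dedicated pool database:
CREATE USER [Test AD Group] FROM EXTERNAL PROVIDER;
EXEC sp_addrolemember 'db_datareader', 'Test AD Group';  -- read access, including future tables/views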
xmlapi
(11 rep)
Dec 6, 2022, 06:22 PM
• Last activity: Aug 2, 2025, 11:05 PM
0
votes
1
answers
115
views
Did performance when querying partitioned tables using min/max functions or TOP improve after SQL Server 2022?
With partitioned tables in SQL Server, there is a [notorious major performance issue](https://web.archive.org/web/20130412223317/https://connect.microsoft.com/SQLServer/feedback/details/240968/partition-table-using-min-max-functions-and-top-n-index-selection-and-performance) when using min/max functions or TOP. Microsoft documents workarounds for it [here](https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/performance/decreased-performance-run-aggregating-clause) . I am confident that this was not fixed in SQL Server 2022. Microsoft surely would have updated the workaround list if giving them more money were a workaround.
However, was this changed after SQL Server 2022? I am sure that I saw a working link to [this Connect item](https://connect.microsoft.com/SQLServer/feedback/details/240968/partition-table-using-min-max-functions-and-top-n-index-selection-and-performance) in 2024. Today, I cannot find it even on the modern [Azure suggestions thing](https://feedback.azure.com/d365community/forum/04fe6ee0-3b25-ec11-b6e6-000d3a4f0da0) that all of the Connect items were migrated to. This suggests to me that something has happened with this decade-old bug in the last few years.
I cannot answer this myself, since I do not have access to SQL Server 2025 or any bleeding-edge Azure offerings, but I hear that preview builds for SQL Server 2025 have been released.
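For context, the documented workaround family rewrites the aggregate per partition; a minimal sketch with hypothetical table, partition function, and column names:
-- Instead of: SELECT MAX(OrderDate) FROM dbo.Orders;
SELECT MAX(mx) AS max_order_date
FROM (SELECT MAX(OrderDate) AS mx
      FROM dbo.Orders
      GROUP BY $PARTITION.pfOrders(OrderDate)   -- one MAX per partition
     ) AS per_partition;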
J. Mini
(1225 rep)
Feb 2, 2025, 12:47 PM
• Last activity: Aug 2, 2025, 10:54 PM
Showing page 1 of 20 total questions