Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
4
votes
3
answers
141
views
Can Standard Developer Edition be used in Docker?
I hear that the new [Standard Developer](https://learn.microsoft.com/en-us/sql/sql-server/editions-and-components-of-sql-server-2025?view=sql-server-ver17#sql-server-editions) edition is available in the current SQL Server 2025 preview. But how can I get it in Docker? [SQL Server 2022's documentation](https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-docker-container-deployment?view=sql-server-ver17&pivots=cs1-bash#production) shows us how to specify specific editions when creating a container.
docker run --name sqlenterprise \
-e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=' \
-e 'MSSQL_PID=EnterpriseCore' -p 1433:1433 \
-d mcr.microsoft.com/mssql/server:2022-latest
But what -e value do I use for Standard Developer? In other words, what goes here?
docker pull mcr.microsoft.com/mssql/server:2025-latest
docker run -e "ACCEPT_EULA=Y" \
-e "MSSQL_SA_PASSWORD=YourPasswordMake~TSp3cial" \
-e "MSSQL_PID=?????????????????????????????????????????????" \
-p 1433:1433 \
--name sql2025P1 --hostname sql2025P1 \
-d mcr.microsoft.com/mssql/server:2025-latest
docker start sql2025P1
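Whichever MSSQL_PID value turns out to be right, a quick way to confirm which edition the container actually came up as (a minimal check, assuming you can connect to it with sqlcmd or SSMS):
-- Run against the container once it is up.
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;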
J. Mini
(1237 rep)
Jun 9, 2025, 10:17 PM
• Last activity: Jul 27, 2025, 01:45 PM
1
votes
1
answers
182
views
SQL Server 2019 and TDE
We are about to move away from SQL 2008R2 to a much newer version. We upgraded to the Enterprise Edition of 2008R2 just for TDE, but I was just browsing here: https://learn.microsoft.com/en-us/sql/sql-server/editions-and-components-of-sql-server-version-15?view=sql-server-ver15 and it appears to say that SS2019 includes TDE in the Standard Edition (the 2017 and older pages confirm it’s Enterprise Edition only).
Is this a typo or did I miss the memo and my life just got a LOT cheaper?
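If anyone wants to verify empirically before buying licences, TDE can be exercised end to end on a throwaway database on a Standard Edition 2019 instance (a sketch; TestDb and the certificate name are hypothetical):
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword1!>';
CREATE CERTIFICATE TDETestCert WITH SUBJECT = 'TDE test certificate';

USE TestDb;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDETestCert;
-- On editions without TDE, this step fails with an edition-related error.
ALTER DATABASE TestDb SET ENCRYPTION ON;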
Tifosi
(11 rep)
Dec 5, 2019, 07:31 AM
• Last activity: Jul 10, 2025, 02:05 AM
0
votes
1
answers
261
views
Error "No catalog entry found for partition ID ..."
When trying to drop a disabled index or enable the same disabled index on one heavily utilised table, I am getting an error:
No catalog entry found for partition ID 72057594967228416 in database 6. The metadata is inconsistent. Run DBCC CHECKDB to check for a metadata corruption.
The database has different filegroups (one for table, one for indexes, ...) but no partitioning, no partition schemes, no partition functions. I can flawlessly create different index in the same filegroup with no error.
DBCC CHECKDB('myDB') WITH ALL_ERRORMSGS, NO_INFOMSGS
returns exactly **0** errors.
My question: is the metadata damaged, as in this other DBA Stack Exchange question, or is there a way out?
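Before anything drastic, it may help to see what the engine still thinks lives at that partition ID (a diagnostic sketch; run it in the affected database):
SELECT p.partition_id,
       OBJECT_NAME(p.object_id) AS table_name,
       p.index_id,
       i.name                   AS index_name,
       p.rows
FROM sys.partitions AS p
LEFT JOIN sys.indexes AS i
       ON i.object_id = p.object_id
      AND i.index_id  = p.index_id
WHERE p.partition_id = 72057594967228416;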
Thanks!
Sranda
(360 rep)
Jun 28, 2024, 08:28 AM
• Last activity: Jun 5, 2025, 10:04 PM
0
votes
1
answers
521
views
SQL Server Standard and Enterprise editions in AlwaysOn Availability Groups
Can two different editions of SQL Server participate in AlwaysOn Availability Groups?
I am specifically talking about SQL Server 2019 Standard and SQL Server 2019 Enterprise.
Frank Martin
(451 rep)
Jun 9, 2024, 06:03 AM
• Last activity: Feb 26, 2025, 04:38 AM
0
votes
1
answers
360
views
Replicated transactions are waiting for next log backup or for mirroring partner to catch up
We have a partial transactional replication setup in our environment to make a data subset available to another application.
Application Server 1 -> Tables1.row1/row2/row3 -> Application Server 2
We are running the same versions of SQL Server everywhere; however, application server 1 is running SQL Server Standard Edition and application server 2 is running SQL Server Express Edition.
Server 1 is the publisher and running a push subscription to Server 2.
We have a few databases configured exactly as above and they work perfectly.
However, I have 2 databases that will not replicate; they get stuck on *"Replicated transactions are waiting for next Log backup or for mirroring partner to catch up."* We have tried trace flag 1448 and the log backup options.
Running debug on the distributor and subscriber agents shows the following:
OLE DB DISTOLE 'DBSERVER': sp_MSget_last_transaction
@publisher_id = 0, @publisher_db = N'DBNAME', @for_truncate = 0x1
Publisher: exec sp_replcounters N'DBNAME'
OLE DB Publisher 'DBSERVER': exec sp_replcounters N'DBNAME'
Status: 16384, code: 22522, text:
'Replicated transactions are waiting for next Log backup or for
mirroring partner to catch up.'.
The distributor is reporting:
The commit timestamp value is: 0x0000149100001f30000100000000
OLE DB Distributor 'DBSERVER': {call sp_MSget_subscription_guid(109)}
sp_MSget_repl_commands timestamp value is: 0x0x0000149100001f30000100000000
OLE DB Subscriber 'DBSERVER':
declare
@event_type SMALLINT = 1,
@distributor_major_version SMALLINT = 16,
@distributor_minor_version SMALLINT = 0,
@distributor_build_number SMALLINT = 1000,
@totalruntime INT = 23296,
@totalworktime INT = 108,
@totalnumtrans INT = 0,
@numtranspersec REAL = 0.00,
@totalnumcmds INT = 0,
@numcmdspersec REAL = 0.00,
@totalskippedcmds INT = 0,
@totalidletime INT = 20015,
@agent_id INT = 109,
@numtranssincelastevent INT = 0,
@numcmdssincelastevent INT = 0,
@totalnumbytes BIGINT = 0,
@numbytessincelastevent BIGINT = 0
if object_id(N'sys.sp_repl_generate_subscriber_event') is not null
exec sys.sp_repl_generate_subscriber_event
@event_state = @event_type,
@distributor_major_version = @distributor_major_version,
@distributor_minor_version = @distributor_minor_version,
@distributor_build_number = @distributor_build_number,
@totalruntime = @totalruntime,
@totalworktime = @totalworktime,
@totalnumtrans = @totalnumtrans,
@numtranspersec = @numtranspersec,
@totalnumcmd
Looking across the entire environment we have no errors. Communication works perfectly and we have this communication working with other databases. But this database does not want to behave. Any pointers?
---
There is nothing using either database, literally no open connections.
DBCC OPENTRAN
gives the following:
Transaction information for database 'DBNAME'.
Replicated Transaction Information:
Oldest distributed LSN : (5294:74672:1)
Oldest non-distributed LSN : (5294:74680:1)
DBCC execution completed.
If DBCC printed error messages, contact your system administrator.
Completion time: 2024-07-11T16:37:07.0583682+01:00
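For completeness, the often-suggested last resort for this wait message is sp_repldone, which marks everything in the log as already distributed. It is only a sketch here, because it can desynchronise subscribers and typically forces a reinitialisation afterwards:
-- Run in the publisher database. Parameters (positional):
-- xactid, xact_seqno, numtrans, time, reset.
EXEC sp_repldone NULL, NULL, 0, 0, 1;
EXEC sp_replflush;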
Rob Howe
(11 rep)
Jul 10, 2024, 03:01 PM
• Last activity: Feb 8, 2025, 10:29 AM
0
votes
2
answers
134
views
Migrate availability group listener to a different availability group
Hopefully a simple question to answer!
Can you point an existing availability group listener to a different availability group?
Current production environments are using Enterprise licenses and the desire is to reduce these to Standard.
There is a piece of work to create new basic availability groups for the databases in question. The hope was that at the point of switchover, the process would be to point the existing listeners to the new availability groups where appropriate.
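As far as I know there is no command to re-point a listener at a different availability group, so the switchover would be drop-and-recreate with the same DNS name and IP (a sketch; all names and addresses here are hypothetical):
-- On the old (Enterprise) AG: release the listener name.
ALTER AVAILABILITY GROUP [OldEnterpriseAG] REMOVE LISTENER 'AppListener';

-- On the new basic AG: recreate it with the same DNS name and IP.
ALTER AVAILABILITY GROUP [NewBasicAG]
    ADD LISTENER 'AppListener'
    (WITH IP (('10.0.0.50', '255.255.255.0')), PORT = 1433);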
AM28337
(1 rep)
Jan 28, 2025, 12:05 PM
• Last activity: Jan 28, 2025, 02:06 PM
6
votes
1
answers
2638
views
Partitioning a 4TB table while staying online (SQL Server Standard)
We have a table that is currently ~4TB in size. We wish to introduce partitioning on this table. The table is already clustered on the key we wish to partition on.
The options appear to be
- Use ALTER TABLE SWITCH to switch the data of the table to a range-partitioned table that has only one partition (zero cost). Then call ALTER PARTITION FUNCTION for each partition we wish to introduce (definitely not zero cost; see the sketch below)
- re-create the clustered index using DROP_EXISTING=ON
We're not using SQL Enterprise, so there's no ability to do this online.
Running these queries locks the entire table.
* Even if we accepted the lock, we have no real way of estimating how long this would take (and whether we could complete it in time over a weekend)
* Most of the data is archive data, there's only a small slice of data that's actually 'live' and would be a problem when it's locked. Trying to figure out if there's a strategy using ALTER PARTITION and ALTER TABLE SWITCH that allows us to do *most* of this with the data swapped out on a staging table.
* Can anyone suggest a way we can do this progressively?
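For reference, the first option in T-SQL looks roughly like this (a sketch with hypothetical object names; the SPLIT is the step whose cost is hard to predict on a non-empty partition):
-- Metadata-only: move the existing table into partition 1 of an
-- identically structured table on a one-partition scheme.
ALTER TABLE dbo.BigTable
    SWITCH TO dbo.BigTablePartitioned PARTITION 1;

-- Then introduce each boundary; splitting a non-empty partition
-- physically moves rows and locks the table on Standard Edition.
ALTER PARTITION SCHEME PS_BigTable NEXT USED [FG_Archive];
ALTER PARTITION FUNCTION PF_BigTable() SPLIT RANGE ('2016-01-01');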
Thanks!
James Crowley
(223 rep)
Nov 9, 2017, 11:44 AM
• Last activity: Jan 3, 2025, 11:01 AM
0
votes
3
answers
297
views
What methods exist for partitioning a multi-terabyte table on Standard Edition without hours of down time?
Much of what I find online about partitioning large tables is very old. Databases are larger now and the old methods rarely account for the constraints of Standard Edition (partitioning was Enterprise only until 2016). This question exists to find modern methods of partitioning large tables on Standard Edition.
I have the following constraints. What methods exist for satisfying them on Standard Edition? (One commonly described approach is sketched after the list.)
- 2 TB table. To keep things simple, assume that it has one ascending primary key and no other indexes/constraints/triggers.
- Must be partitioned
- The end of the table is hot. The rest is not.
- Standard Edition, so RAM and online operations are limited
- SQL Server 2022
- No data loss allowed
- Minimal down time allowed. 20 minutes at most. This likely rules out partitioning the table in place.
- Optimistic locking of any kind is not enabled
- This process can be done slowly, e.g. over weeks.
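For concreteness, the shape I keep seeing described (a rough sketch with hypothetical table and column names, not a recommendation): build a partitioned copy, trickle the cold history across over weeks, then swap inside a short outage.
-- 1) Create dbo.BigTable_New with the same schema and primary key,
--    built on the desired partition scheme.
-- 2) Backfill cold history in small keyed batches, resumable at any time:
DECLARE @done bit = 0;
WHILE @done = 0
BEGIN
    INSERT INTO dbo.BigTable_New (Id, Payload)
    SELECT TOP (50000) s.Id, s.Payload
    FROM dbo.BigTable AS s
    WHERE s.Id > (SELECT ISNULL(MAX(Id), 0) FROM dbo.BigTable_New)
    ORDER BY s.Id;
    IF @@ROWCOUNT = 0 SET @done = 1;
END;
-- 3) During the cutover window: copy the final hot slice, then swap names.
EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
EXEC sp_rename 'dbo.BigTable_New', 'BigTable';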
J. Mini
(1237 rep)
Dec 23, 2024, 10:34 PM
• Last activity: Jan 2, 2025, 03:51 PM
0
votes
1
answers
121
views
How do "Parallel query processing on partitioned tables and indexes" and "Partitioned table parallelism" differ?
In plain English, "Parallel query processing on partitioned tables and indexes" and "Partitioned table parallelism" sound like the same thing. Yet, in SQL Server 2022, [Standard Edition has "Partitioned table parallelism"](https://learn.microsoft.com/en-us/sql/sql-server/editions-and-components-of-sql-server-2022?view=sql-server-ver16#RDBMSSP) but not [Parallel query processing on partitioned tables and indexes](https://learn.microsoft.com/en-us/sql/sql-server/editions-and-components-of-sql-server-2022?view=sql-server-ver16#data-warehouse). How do the two features differ?
I have made a reasonable effort to search through the documentation, but have not yet read all of the white papers. I have not been successful in learning what either feature is.
J. Mini
(1237 rep)
Dec 20, 2024, 10:04 PM
• Last activity: Dec 22, 2024, 08:48 AM
5
votes
1
answers
178
views
Why does adding an empty columnstore index to my tables consistently make execution plans ignore bitmaps?
### Problem
I am using a Standard Edition SQL Server 2019 box. Given the edition, compatibility levels should not be relevant (the level-dependent batch mode features are disallowed on Standard). On this box, I have been experimenting with the hack of adding an empty non-clustered filtered columnstore index to a temp table before joining it on to other tables, thus allowing the queries with said join to use batch mode despite not using any columnstore indexes. When I do this, I have been regularly finding the following behaviour in the queries that previously did not use batch mode:
- The parts of the execution plans that now use batch mode run much faster. In particular, window functions become incredible compared to what they were in the plans that had no batch mode parts.
- The plans with batch mode parts will always have a much lower cost than the queries that do not, even when they take longer to run.
- Gather Streams sometimes becomes much slower, even though every operator is running evenly on two threads in the plans that have batch mode steps.
- Queries that previously built a bitmap on one table in order to cheaply scan and then hash join another table, by passing the bitmap's hash probe into the scan, become **much** slower. Instead of a bitmap inside an index scan, they have a very expensive Filter operator after the scan.
Is there any explanation for the final observation in my list? More importantly, **how can I prevent the introduction of batch mode removing excellent bitmaps?** That bold point is what I'm wanting answered here. If I could have both the bitmaps and the excellent benefits of batch mode elsewhere in the query (e.g. in the window functions), then my queries would be much faster. I accept that I could probably index the tables better, but I was already happy with the original hash joins.
### Reproducible Example
Erik Darling quite rightly requested that I paste the plan, but I thought I'd go one better and give a reproducible example. The following uses the [StackOverflow2010 database](https://www.brentozar.com/archive/2015/10/how-to-download-the-stack-overflow-database-via-bittorrent/). You can [load this in a docker container](https://sqlblog.org/2020/03/15/attaching-restoring-databases-inside-a-container) if you wish.
SET STATISTICS XML OFF;
USE StackOverflow2010;
DROP TABLE IF EXISTS #IDs;
SELECT TOP (10000)
Id
INTO
#IDs
FROM
Comments;
ALTER TABLE #IDs ADD CONSTRAINT RowStorePK
PRIMARY KEY CLUSTERED (Id);
SET STATISTICS XML ON;
;WITH [NumberedPosts] AS
(
SELECT
PostID
,ROW_NUMBER() OVER (PARTITION BY PostID ORDER BY Score) RN
FROM
Comments
WHERE
PostID IN (SELECT Id FROM #IDs)
)
SELECT
PostID
FROM
NumberedPosts
WHERE
RN = 1;
/*
Hack to get batch mode for these queries despite
my real machine being on Standard Edition.
*/
CREATE NONCLUSTERED COLUMNSTORE INDEX ColStoreNonClust ON #IDs (ID)
WHERE ID = 1 AND ID = 2;
-- Same query again.
;WITH [NumberedPosts] AS
(
SELECT
PostID
,ROW_NUMBER() OVER (PARTITION BY PostID ORDER BY Score) RN
FROM
Comments
WHERE
PostID IN (SELECT Id FROM #IDs)
)
SELECT
PostID
FROM
NumberedPosts
WHERE
RN = 1;
[This plan](https://www.brentozar.com/pastetheplan/?id=S17c3PgwR) is from before making the columnstore index. It uses a bitmap. [This plan](https://www.brentozar.com/pastetheplan/?id=SyMo2DgPC) is from after making the columnstore index. It does not use a bitmap and runs much slower than the bitmap-using query.
I have seen this so many times that I've become convinced that it's a pattern that must have some general explanation or prevention steps.
J. Mini
(1237 rep)
Jun 26, 2024, 08:15 PM
• Last activity: Jul 3, 2024, 11:56 AM
3
votes
1
answers
567
views
Can we downgrade edition from SQL Server 2016 Enterprise to Standard when we have compression feature enabled?
I have a production database running on SQL Server 2016 Enterprise. I ran the below query to check the edition-specific feature and see the compression feature in use. Can we still downgrade the edition?
Query
----------
SELECT feature_name
FROM sys.dm_db_persisted_sku_features;
GO
Output
----------
feature_name
----------
Compression
Vikas Kanoujiya
(31 rep)
Jun 11, 2024, 02:12 PM
• Last activity: Jun 11, 2024, 02:49 PM
-1
votes
1
answers
178
views
How is basic availability group able to auto failover without a cluster?
I have recently learnt that SQL Server Standard Edition's basic availability group doesn't require a Windows cluster.
A basic AG is considered equivalent to the deprecated mirroring technology. Like mirroring, a basic AG supports auto and manual failover.
However, mirroring supported auto failover only with the presence of a witness server.
In availability groups there is no concept of a witness server. So my question is: how is a basic AG able to support automatic failover without the Windows cluster? Or is the concept that a basic AG without a cluster only supports manual failover, and a basic AG with a cluster supports automatic and manual failover?
I couldn't find documentation on this so any guidance will help me here.
variable
(3590 rep)
Apr 21, 2024, 06:25 PM
• Last activity: Apr 21, 2024, 08:11 PM
0
votes
1
answers
126
views
What happens when you try to use more than 32 GB of memory-optimized data in a database on Standard Edition?
In SQL Server 2019, you cannot put more than 32 GB of [memory-optimized data in a database on Standard Edition](https://learn.microsoft.com/en-us/sql/sql-server/editions-and-components-of-sql-server-2019?view=sql-server-ver15&preserve-view=true#Cross-BoxScaleLimits). What happens if you try anyway?
I'd try it myself, but I don't want to risk discovering that the answer is a knock on the door from the Licencing Police™.
J. Mini
(1237 rep)
Apr 4, 2024, 10:35 PM
• Last activity: Apr 6, 2024, 04:18 PM
12
votes
1
answers
935
views
Is there a reliable way to check if a trigger being fired was the result of a DML action from another *specific* trigger?
Is there any way to tell from within a trigger, when it gets fired, that it is being fired as the result of a DML action that occurred within another *specific* trigger?
Any chance any information about the call stack is exposed in the [EVENTDATA()](https://learn.microsoft.com/en-us/sql/relational-databases/triggers/use-the-eventdata-function?view=sql-server-ver16) function? Or another function? I'd prefer not having to shred XML. 🙂
I'm ideally shooting for getting the name of the original trigger that executed the DML that caused the second trigger to fire, from within the scope of that second trigger. But I'm open to similar ways of identifying the source too.
I have full control over the code of both triggers in question.
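One partial workaround I'm aware of (a sketch, not a full answer): TRIGGER_NESTLEVEL can report whether a *specific* trigger is anywhere in the current chain, though not the full ordered call stack. The trigger name below is hypothetical:
-- Inside the second trigger: detect whether dbo.FirstTrigger appears
-- anywhere in the chain of triggers that led to this firing.
IF TRIGGER_NESTLEVEL(OBJECT_ID(N'dbo.FirstTrigger'), 'AFTER', 'DML') > 0
BEGIN
    -- We were fired (directly or indirectly) by dbo.FirstTrigger.
    PRINT 'Fired from dbo.FirstTrigger';
END;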
J.D.
(40893 rep)
Jun 2, 2023, 04:32 PM
• Last activity: Mar 19, 2024, 01:50 PM
7
votes
4
answers
1108
views
Is my queue table implementation race condition safe?
Hello people smarter than me! I've created a sort-of-a-queue table system, but it seems too simple to be safe from race conditions. Am I missing something or is the following race condition safe?
The Schema
-
I have a table, let's call it ProductQueue:
CREATE TABLE dbo.ProductQueue
(
SerialId BIGINT PRIMARY KEY,
QueuedDateTime DATETIME NOT NULL -- Only using this for reference, no functionality is tied to it
);
I have a procedure for adding to the queue called AddToProductQueue:
CREATE PROCEDURE dbo.AddToProductQueue (@SerialId BIGINT)
AS
BEGIN
INSERT INTO dbo.ProductQueue (SerialId, QueuedDateTime)
OUTPUT Inserted.SerialId
SELECT @SerialId, GETDATE();
END
I also have a procedure for removing from the queue called RemoveFromProductQueue:
CREATE PROCEDURE dbo.RemoveFromProductQueue (@SerialId BIGINT)
AS
BEGIN
DELETE FROM dbo.ProductQueue
OUTPUT Deleted.SerialId
WHERE SerialId = @SerialId;
END
Note: SerialId is globally unique for a Product in the source database / system, i.e. no two instances of a Product can ever have the same SerialId. That's the extent of it on the database side.
The Workflow
-
- I have an application process that runs hourly.
- That process gets a variable list of SerialIds from the source system.
- It iteratively calls the AddToProductQueue procedure on each SerialId in its list.
- If the procedure tries to insert a SerialId that exists in the ProductQueue table already, it throws a primary key violation error, and the application process catches that error and skips that SerialId (see the sketch after this list).
- Otherwise, the procedure successfully adds that SerialId to the ProductQueue table and returns it back to the application process.
- The application process then adds that successfully queued SerialId to a separate list.
- After the application process finishes iterating its list of all candidate SerialIds to enqueue, it then iterates its new list of successfully queued SerialIds and does external work on them, in a **separate thread** per SerialId. (This work is all unrelated to the database.)
- Finally, as each thread finishes its external work, the last step in that asynchronous thread is to remove that SerialId from the ProductQueue table by calling the RemoveFromProductQueue procedure. (Note that a new database context object is instantiated and a new connection is created for each asynchronous call to this procedure, so that it is thread-safe on the application side.)
Additional Information
-
- There aren't any indexes on the ProductQueue table, and it'll never have more than 1,000 rows in it at one time. (Actually, most times it'll literally only have a couple of rows.)
- The same SerialId can become a candidate again to be re-added to the queue table on a future execution of the application process.
- There are no safeguards preventing a second instance of the application process from concurrently running, either by accident or if the first instance took more than 1 hour to run, etc. (This is the concurrent part I'm most concerned about.)
- The transaction isolation level of the database (and connection being made) where the queue table and procedures live is the default isolation level of Read Committed.
Potential Problems
-
- The running instance of the application process crashes in an unhandled way, leaving SerialIds stuck in the queue table. This is acceptable for the business needs, and we plan to have exception reports to help us manually remediate this case.
- The application process gets executed multiple times concurrently and grabs some of the same SerialIds between instances in their initial source lists. I can't think of any negative ramifications of this case yet, since the enqueuing procedure is atomic, and the actual list of SerialIds that the application process will work on should be self-contained due to that atomic enqueuing procedure. We don't care which instance of the application process actually processes each SerialId as long as the same SerialId isn't processed concurrently by both process instances.
J.D.
(40893 rep)
Mar 13, 2024, 09:46 PM
• Last activity: Mar 16, 2024, 03:03 PM
23
votes
1
answers
5895
views
Is "+" slower than "CONCAT" for large strings?
I have always thought that the CONCAT function is actually a wrapper over the + (String Concatenation) operator with some additional checks in order to make our life easier. I have not found any internal details about how the functions are implemented. As to the performance, it seems that there is overhead for calling CONCAT when data is concatenated in a loop (which seems normal, as there is additional handling of NULLs). I wonder if someone can share any internals or explain the behavior, as it seems there might be a rule that it's better to concatenate large strings using CONCAT.
A few days ago, a dev modified some string concatenation code (from + to CONCAT) because he did not like the syntax, and told me it became faster. In order to check the case, I have used the following code:
DECLARE @V1 NVARCHAR(MAX)
,@V2 NVARCHAR(MAX)
,@V3 NVARCHAR(MAX);
DECLARE @R NVARCHAR(MAX);
SELECT @V1 = REPLICATE(CAST('V1' AS NVARCHAR(MAX)), 50000000)
,@V2 = REPLICATE(CAST('V2' AS NVARCHAR(MAX)), 50000000)
,@V3 = REPLICATE(CAST('V3' AS NVARCHAR(MAX)), 50000000);
where this is variant one:
SELECT @R = CAST('' AS NVARCHAR(MAX)) + '{some small text}' + ISNULL(@V1, '{}') + ISNULL(@V2, '{}') + ISNULL(@V3, '{}');
SELECT LEN(@R); -- 1200000017
and this is variant two:
SELECT @R = CONCAT('{some small text}',ISNULL(@V1, '{}'), ISNULL(@V2, '{}'), ISNULL(@V3, '{}'))
SELECT LEN(@R); -- 1200000017
For smaller strings, there are no differences. At some point, the CONCAT variant becomes faster:
(timing comparison chart omitted)
Version:
*Microsoft SQL Server 2022 (RTM-CU8) (KB5029666) - 16.0.4075.1 (X64)
Aug 23 2023 14:04:50
Copyright (C) 2022 Microsoft Corporation
Standard Edition (64-bit) on Windows Server 2019 Standard 10.0 (Build 17763: ) (Hypervisor)*
----------
The exact script looks like the following:
DECLARE @V1 NVARCHAR(MAX)
,@V2 NVARCHAR(MAX)
,@V3 NVARCHAR(MAX);
DECLARE @R NVARCHAR(MAX);
SELECT @V1 = REPLICATE(CAST('V1' AS NVARCHAR(MAX)), 50000000)
,@V2 = REPLICATE(CAST('V2' AS NVARCHAR(MAX)), 50000000)
,@V3 = REPLICATE(CAST('V3' AS NVARCHAR(MAX)), 50000000);
--SELECT @R = CAST('' AS NVARCHAR(MAX)) + '{some small text}' + ISNULL(@V1, '{}') + ISNULL(@V2, '{}') + ISNULL(@V3, '{}'); -- 00:00:45 -- 00:01:22 -- 00:01:20
--SELECT LEN(@R); -- 300000017
SELECT @R = CONCAT('{some small text}',ISNULL(@V1, '{}'), ISNULL(@V2, '{}'), ISNULL(@V3, '{}')) -- 00:00:11 -- 00:00:16 -- 00:00:10
SELECT LEN(@R); -- 300000017
where I am changing the last argument of the REPLICATE functions in order to generate larger strings for the concatenation. Then, I am executing each variant three times.
gotqn
(4348 rep)
Dec 28, 2023, 10:12 AM
• Last activity: Dec 30, 2023, 11:18 AM
2
votes
3
answers
2244
views
Backup Performance best practices
I'm in the process of re-writing our maintenance plans and at the moment the focus is backups. Whilst preparing this plan, I'd like to ensure that procedures are in place so that backup performance is optimal. I am aware of implementing the below; however, I'd like to get an idea of how I can improve beyond that scope.
Current Considerations:
- Backup Compression
- Multiple data files for larger databases - perhaps over 100GB?
- Writing backups to another I/O subsystem
- Defining values for parameters including BLOCKSIZE, MAXTRANSFERSIZE & BUFFERCOUNT
- Purging old files
- Deleting backup history
Beyond the above, is there anything that would be beneficial to implement? Also, whilst setting the values for BLOCKSIZE, MAXTRANSFERSIZE & BUFFERCOUNT, are there any considerations I should take / how do I go about defining the correct values? I appreciate it will be a bit of trial and error with testing, but it would be useful to get an idea of best practices.
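For context, this is the sort of command I have in mind for those parameters (a sketch with placeholder paths and values to be tuned per environment, not recommendations):
-- Striped, compressed full backup with explicit transfer tuning.
-- Paths and numbers are placeholders.
BACKUP DATABASE [MyDb]
TO DISK = N'X:\Backups\MyDb_1.bak',
   DISK = N'X:\Backups\MyDb_2.bak'
WITH COMPRESSION,
     CHECKSUM,
     BLOCKSIZE = 65536,
     MAXTRANSFERSIZE = 4194304,
     BUFFERCOUNT = 50,
     STATS = 10;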
To give an idea of my routine, I'm to run hourly transaction log backups, daily differentials and weekly full backups.
Thanks
Krishn
(383 rep)
Jul 18, 2018, 08:38 AM
• Last activity: Dec 15, 2023, 01:39 PM
3
votes
2
answers
428
views
How to sum up the distinct Total Time of an Event ignoring duplicate overlaps in Times?
I have the following EventTimes table:
DROP TABLE IF EXISTS dbo.EventTimes;
CREATE TABLE dbo.EventTimes
(
EventTimeKeyId INT IDENTITY(1,1) PRIMARY KEY,
EventId INT NOT NULL,
StartTime TIME NOT NULL,
EndTime TIME NOT NULL
);
With the following data:
-- Event 1
INSERT INTO dbo.EventTimes (EventId, StartTime, EndTime)
VALUES
(1, '04:00:00', '14:00:00'),
(1, '06:00:00', '11:00:00'),
(1, '09:00:00', '12:00:00'),
(1, '13:00:00', '14:00:00'), -- Gap between this row and the next row
(1, '02:30:00', '04:00:00'); -- Notice the half-hour on this one
-- Event 2
INSERT INTO dbo.EventTimes (EventId, StartTime, EndTime)
VALUES
(2, '00:00:00', '06:00:00'), -- Gap between this row and the next row
(2, '09:00:00', '13:00:00'),
(2, '11:00:00', '15:00:00');
Notice:
- The same Event can have two time ranges that overlap each other. E.g. same Event from 4 AM to 2 PM and also from 6 AM to 11 AM.
- There can also be gaps between the two time ranges. E.g. same Event from 1 PM to 2 PM and also from 2:30 PM to 4 PM.
End Goal:
I'm trying to calculate the TotalTime of a given Event ignoring the duplicate overlapping time. E.g. for the set of ranges of 9 AM to 1 PM and 11 AM to 3 PM, the distinct TotalTime should be 6 hours (9 AM to 3 PM). Conversely, I also don't want to count the time in the gaps between two time ranges. So for the set of ranges of 1 PM to 2 PM and 2:30 PM to 4 PM the TotalTime should be 2.5 hours. (Note these are just subsets of the full example above, and the final result should be the sum of all of these unique times per Event.)
The TotalTime should never exceed 24 hours; these times are all within a single day (as is the TIME data type).
Expected final results for the provided examples in the scripts above:
(screenshot of expected results omitted)
dbfiddle.uk for reference.
---
Other Info:
- If it's easier to work with datetimes, feel free to alter the data types from TIME to DATETIME. I can convert the results back, no problem.
- I believe some sort of recursion is needed to solve this. I feel I'm pretty close on a solution, but not quite there yet.
In a case such as 2:30 - 4:00 and 4:00 - 14:00, I'd expect the Total Time to be 11.5 hours.
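For what it's worth, the closest I've got so far is a non-recursive gaps-and-islands shape (a sketch against the table above; it treats touching ranges like 4:00/4:00 as one island, matching the 11.5-hour expectation):
-- A range starts a new island when it begins strictly after the running
-- maximum EndTime of all earlier ranges for the same EventId.
;WITH Ordered AS
(
    SELECT EventId, StartTime, EndTime,
           MAX(EndTime) OVER (PARTITION BY EventId
                              ORDER BY StartTime, EndTime
                              ROWS BETWEEN UNBOUNDED PRECEDING
                                       AND 1 PRECEDING) AS PrevMaxEnd
    FROM dbo.EventTimes
),
Islands AS
(
    SELECT EventId, StartTime, EndTime,
           SUM(CASE WHEN PrevMaxEnd IS NULL
                      OR StartTime > PrevMaxEnd THEN 1 ELSE 0 END)
               OVER (PARTITION BY EventId
                     ORDER BY StartTime, EndTime
                     ROWS UNBOUNDED PRECEDING) AS IslandNo
    FROM Ordered
)
SELECT EventId,
       SUM(IslandMinutes) / 60.0 AS TotalHours
FROM (
    SELECT EventId, IslandNo,
           DATEDIFF(MINUTE, MIN(StartTime), MAX(EndTime)) AS IslandMinutes
    FROM Islands
    GROUP BY EventId, IslandNo
) AS i
GROUP BY EventId;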
J.D.
(40893 rep)
Nov 17, 2023, 05:36 PM
• Last activity: Nov 20, 2023, 10:05 PM
0
votes
1
answers
555
views
Wait time on server replies is sometimes very long when running the query over and over
As seen in this screen shot, I'm running the same query over and over again; it comes back with 0 rows (as expected). For some reason I'm seeing very large wait times on server replies while running this on the server itself. I was wondering if someone could point me in the right direction as to why this might be occurring.
My guess is that it's a lack of resources or too many other things occurring, but I'm wondering how I can prove that.
SQL Server 2017 Standard Edition, 64-core machine, 240 GB of RAM.
EDIT: output of wait stats added, 42 rows. (Screenshots omitted.)
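One way to prove where the time is going (a minimal sketch, assuming SQL Server 2016 or later, which 2017 satisfies): compare per-session waits immediately after the slow run.
-- Run from the session executing the query (or filter on its session_id).
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_exec_session_wait_stats
WHERE session_id = @@SPID
ORDER BY wait_time_ms DESC;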
Steven M
(53 rep)
Oct 6, 2023, 07:18 PM
• Last activity: Oct 9, 2023, 04:36 PM
0
votes
2
answers
252
views
On update: Login failed for SQL Server authenticated sysadm
I ran into a weird bug?
A user complained to me that he can no longer update certain tables in a database. He had owner rights in the database and the login was a Windows login (AD login). I tried elevating him to sysadmin, checked all the rights, and even deleted and added the login again. Still the problem continued.
I tried creating a SQL Server sysadmin login, so I wouldn't have to think about any AD issue, and tried that. The same error persists. Both of the logins can log in to the SQL instance, SELECT data, and UPDATE certain tables, but cannot UPDATE other ones.
Older logins (like mine) work fine on everything, but the newer ones don't. I have no idea what could be causing this.
>Msg 18456, Level 14, State 1, Line 1
>Login failed for user 'xxxx'
Version: Microsoft SQL Server 2017 (RTM-CU31) (KB5016884) - 14.0.3456.2 (X64) Sep 2 2022 11:01:50 Copyright (C) 2017 Microsoft Corporation Standard Edition (64-bit) on Windows Server 2019 Standard 10.0 (Build 17763: ) (Hypervisor)
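Worth noting for anyone diagnosing this: the client-side 18456 "State 1" deliberately hides the real reason, and the server error log records the true state code. A diagnostic sketch (assuming permission to read the error log):
-- Search the current SQL Server error log for the failed-login entries;
-- the logged message carries the real state code for error 18456.
EXEC sys.xp_readerrorlog 0, 1, N'Login failed';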
user1054844
(101 rep)
Aug 11, 2023, 11:48 AM
• Last activity: Aug 16, 2023, 07:50 AM
Showing page 1 of 20 total questions