
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

6 votes
4 answers
1095 views
Does the new locking optimization make batching unnecessary?
Optimized Locking (2025-05-19) makes it sound like batching would be unnecessary if I were to, for example, try to delete 20 thousand rows from a million row table in one statement (DELETE FROM foo WHERE Status = 1). Is that true?
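For context, a minimal sketch of the batching pattern the question asks about. The `WHILE`/`@@ROWCOUNT` loop is the common idiom; `dbo.foo` and `Status = 1` come from the question, and the batch size of 2000 is an arbitrary illustrative value:

```sql
-- Classic batched delete (sketch). Each iteration touches few enough
-- rows to stay below the lock-escalation threshold (roughly 5,000 locks).
WHILE 1 = 1
BEGIN
    DELETE TOP (2000) FROM dbo.foo
    WHERE Status = 1;

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to delete
END;
```

The question is essentially whether optimized locking makes this loop unnecessary for a single 20,000-row delete.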
Mark Freeman (2293 rep)
Jun 23, 2025, 03:31 PM • Last activity: Jun 25, 2025, 12:18 PM
2 votes
1 answers
404 views
Does PostgreSQL have lock promotion or escalation?
We are seeing strange behaviour with `ExclusiveLock` in one of our databases running PostgreSQL 13. I am not able to dig into what leads to this lock, as the lock info comes from a monitoring tool. From what I see in the docs, `ExclusiveLock` is only acquired when refreshing a materialized view; however, we don't have any materialized views in our database. After a bit of research I ended up at this blog: https://blog.heroku.com/curious-case-table-locking-update-query . The case this person shares is similar to mine, as I can see a few `RowExclusiveLock`s during this period and a few queries updating the same row concurrently. However, I couldn't find any official PostgreSQL docs on lock escalation behaviour like other databases have. Does Postgres escalate locks in rare cases? What cases can lead to escalations?
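As a starting point for digging beyond the monitoring tool, `pg_locks` can be joined to `pg_stat_activity` to see which backend holds or waits for the `ExclusiveLock` and what it is running. A generic diagnostic sketch using standard PostgreSQL catalog views, not specific to this case:

```sql
-- Show who holds or waits for ExclusiveLock on relations, with query text.
SELECT l.locktype,
       l.relation::regclass AS relation,
       l.mode,
       l.granted,
       a.pid,
       a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.mode = 'ExclusiveLock'
ORDER BY a.pid;
```

Running this while the lock is visible in monitoring would reveal whether it is a table-level lock or (more likely, per the linked blog) a transient lock on a transaction ID or tuple.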
goodfella (595 rep)
Oct 29, 2024, 04:14 AM • Last activity: Oct 29, 2024, 11:43 AM
8 votes
2 answers
596 views
Blocking between two MERGE queries inserting into the same table
### Scenario

I have a large table partitioned on an `INT` column. When I run two different `MERGE` statements on two different partitions of this table, they seem to be blocking each other. Sample code to recreate the scenario:

#### 1. Preparation. Create tables and some dummy data
SET NOCOUNT ON
GO
--
--	Create partition function and partition scheme
--
	DROP FUNCTION IF EXISTS PF_Site_ID 
	GO
	CREATE PARTITION FUNCTION PF_Site_ID (INT)  
		AS RANGE RIGHT FOR VALUES 	(
										0,
										1, 2, 3, 4, 5, 6, 7, 8, 9, 10
									)
	GO  
	DROP PARTITION SCHEME PS_Site_ID
	GO
	CREATE PARTITION SCHEME PS_Site_ID
		AS PARTITION PF_Site_ID
		ALL TO ('PRIMARY')
	GO

--
-- Large table partitioned on Site_ID. Two STG tables. And some dummy data
--
	DROP TABLE IF EXISTS dbo.PartitionedLargeTable
	GO
	CREATE TABLE dbo.PartitionedLargeTable
	(
		  ID       INT          NOT NULL IDENTITY(1,1)
		, Site_ID  INT          NOT NULL
		, Name     VARCHAR(50)
	) ON PS_Site_ID (Site_ID)
	GO
	ALTER TABLE dbo.PartitionedLargeTable SET (LOCK_ESCALATION = AUTO)
	GO

--
--	STG tables
--
	DROP TABLE IF EXISTS dbo.STG_Test1
	GO
	CREATE TABLE dbo.STG_Test1
	(
		  ID       INT          NOT NULL IDENTITY(1,1)
		, Site_ID  INT          NOT NULL
		, Name     VARCHAR(50)
	) ON [PRIMARY]
	GO
	DROP TABLE IF EXISTS dbo.STG_Test2
	GO
	CREATE TABLE dbo.STG_Test2
	(
		  ID       INT          NOT NULL IDENTITY(1,1)
		, Site_ID  INT          NOT NULL
		, Name     VARCHAR(50)
	) ON [PRIMARY]
	GO

--
--	Dummy data
--
	INSERT INTO dbo.PartitionedLargeTable (Site_ID, Name) SELECT 1, NEWID()
	INSERT INTO dbo.PartitionedLargeTable (Site_ID, Name) SELECT 2, NEWID()
	GO 10000

	INSERT INTO dbo.PartitionedLargeTable (Site_ID, Name)
	SELECT Site_ID, Name FROM dbo.PartitionedLargeTable
	GO 5

	INSERT INTO dbo.STG_Test1(Site_ID, Name) SELECT 1, NEWID()
	GO 10000
	INSERT INTO dbo.STG_Test2(Site_ID, Name) SELECT 2, NEWID()
	GO 10000

	INSERT INTO dbo.STG_Test1 (Site_ID, Name)
	SELECT Site_ID, Name FROM dbo.STG_Test1
	GO 7

	INSERT INTO dbo.STG_Test2 (Site_ID, Name)
	SELECT Site_ID, Name FROM dbo.STG_Test2
	GO 7
#### 2. MERGE 1

In one SSMS window, run this MERGE statement:
MERGE dbo.PartitionedLargeTable AS TGT

USING (SELECT ID, Site_ID, Name FROM dbo.STG_Test1) AS SRC
    ON  SRC.Site_ID = TGT.Site_ID
    AND SRC.ID      = TGT.ID

WHEN MATCHED THEN
    UPDATE 
        SET TGT.Name = SRC.Name

WHEN NOT MATCHED THEN
    INSERT (Site_ID, Name)
    VALUES (SRC.Site_ID, SRC.Name);
#### 3. MERGE 2

In a second SSMS window, run this MERGE statement:
MERGE dbo.PartitionedLargeTable AS TGT

USING (SELECT ID, Site_ID, Name FROM dbo.STG_Test2) AS SRC
    ON  SRC.Site_ID = TGT.Site_ID
    AND SRC.ID      = TGT.ID

WHEN MATCHED THEN
    UPDATE 
        SET TGT.Name = SRC.Name

WHEN NOT MATCHED THEN
    INSERT (Site_ID, Name)
    VALUES (SRC.Site_ID, SRC.Name);
---

The two MERGE statements run on different *Site_IDs* (so two different partitions). One of the performance benefits of partitioned tables is that we can manipulate partitions independently of each other (within reason), so an INSERT or UPDATE on one partition should not block similar operations on other partitions.

Compare this to when the table is NOT partitioned: if we perform two large INSERT operations (or two large UPDATE operations), one blocks the other once the number of rows manipulated goes over a certain threshold (something like 3k or 5k rows), at which point the PAGE locks are escalated to a table lock. Hence INSERT blocks INSERT (or UPDATE blocks UPDATE).

To avoid such lock escalation to a table lock, this table was partitioned with LOCK_ESCALATION = AUTO, which limits escalation to the HoBT (partition) level and not the table. But with MERGE, the blocking still happens. Any ideas on how to prevent this blocking? We have 10 parallel MERGE statements running on 10 different partitions of this large table, and they are blocking each other.

When a table is partitioned, lock escalation is supposed to go only up to the partition (not the whole table). When these MERGE statements are running, I can see the HoBT IDs that each MERGE is querying (locking), and in some cases the HoBT ID does not match the partition IDs of this table. (Screenshot of the blocking omitted.)

The actual table I work with has a clustered COLUMNSTORE index on the partitioning scheme.
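One way to check whether a HoBT ID reported by the lock DMVs really belongs to a partition of the target table is to list the table's partitions directly. A generic diagnostic sketch against standard catalog views, using the demo table name from this question:

```sql
-- Map hobt_id values (as seen in sys.dm_tran_locks) to partition numbers
-- of the target table, to verify which partition each lock belongs to.
SELECT p.index_id,
       p.partition_number,
       p.hobt_id,
       p.rows
FROM sys.partitions AS p
WHERE p.object_id = OBJECT_ID('dbo.PartitionedLargeTable')
ORDER BY p.index_id, p.partition_number;
```

A HoBT ID in the lock output that does not appear here would point at a different object (for example, an internal work table or another index) rather than a partition of this table.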
ToC (727 rep)
Aug 14, 2024, 09:44 PM • Last activity: Aug 16, 2024, 08:57 AM
2 votes
1 answers
193 views
When do page locks happen under the Read Committed isolation level in SQL Server?
I have a clustered index table in an Azure SQL Database. Checking sys.dm_db_index_operational_stats, the column **page_lock_wait_count** is greater than 0, which means some page lock waits have happened on this table before, but no **row_lock_wait_count** was recorded. (Screenshot from dm_db_index_operational_stats omitted.)

I tried to simulate this scenario in SQL Server 2019, setting the isolation level to Read Committed Snapshot in my test db environment, same as Azure SQL. But I can only produce the lock situations below.

1. Begin transaction A to update **a few rows** in table T, then begin transaction B to update similar rows in the table. The waiting lock resource type of transaction B is **KEY** in sys.dm_tran_locks.
2. Begin transaction A to update **plenty of rows** in table T, then begin transaction B to also update a **large amount of rows** in the table. The wait resource type for transaction B is OBJECT; since transaction A holds X mode on the OBJECT, I guess this is because the lock has been escalated.

I tried the above scenarios many times in my testing environment. Since transaction A causes a page IX lock, the *row_lock_count/row_lock_wait_count/page_lock_count* counters increase, but no page_lock_wait_count occurred.

Questions:

1. Why can page_lock_wait_count be greater than 0 when row_lock_wait_count = 0?
2. What conditions or scenarios cause page lock waits?
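For reference, the counters in question can be read like this. A generic sketch against the documented DMV signature; `dbo.T` is a placeholder for the actual table:

```sql
-- Read row/page lock and lock-wait counters for one table.
-- Signature: sys.dm_db_index_operational_stats(db_id, object_id, index_id, partition_number).
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.index_id,
       s.row_lock_count,
       s.row_lock_wait_count,
       s.page_lock_count,
       s.page_lock_wait_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.T'), NULL, NULL) AS s;
```

Note these counters are cumulative since the index metadata was last cached, so a nonzero wait count may reflect activity long past.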
Jerry.Chen (23 rep)
Jul 22, 2024, 09:05 AM • Last activity: Aug 4, 2024, 10:14 AM
1 votes
1 answers
718 views
Why does parallelism cause lock escalation and where is the tipping point?
I use the customized Stack Overflow database (180GB) and run a simple update query (there is only a clustered index on the Users table):

Begin Tran
Update U set U.Reputation=100000
from StackOverflow.dbo.Users as U
where U.CreationDate = '2008-10-10 14:26:33.540'

(Query plan screenshot omitted.) This query causes lock escalation: I can't run a query using the same table in another window:
select * from StackOverflow.dbo.Users as U where U.id=11
If I add option (maxdop 1) at the end of the query to avoid parallelism, everything is fine. Lock escalation also does not occur in a smaller Stack Overflow DB (StackOverflow2013, 52GB). (Plan links omitted.) How do I identify what amount of data causes the escalation? I use SQL Server 2019; DB compatibility level is 150.

Table info:

* StackOverflow2013.dbo.Users -- 2 465 713 rows; 45 184 pages
* StackOverflow.dbo.Users -- 8 917 507 rows; 143 667 pages
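One way to confirm escalation empirically while the transaction is still open is to look at the lock summary for the database. A generic diagnostic sketch (the database name comes from the question); an OBJECT-level X lock here is the sign that escalation happened:

```sql
-- Run in another window while the UPDATE's transaction is open.
SELECT resource_type, request_mode, request_status, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('StackOverflow')
GROUP BY resource_type, request_mode, request_status
ORDER BY resource_type;
```

With parallelism, each worker thread accumulates locks, so the total can cross the escalation threshold sooner than a serial plan touching the same rows.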
Aksel Rohtsalu (11 rep)
Feb 16, 2023, 12:43 PM • Last activity: Feb 17, 2023, 12:43 PM
63 votes
4 answers
62535 views
What is lock escalation?
I was asked this question at an interview and had no answer. Can anyone here explain?
Kilhoffer
Oct 15, 2008, 06:50 PM • Last activity: Sep 6, 2022, 11:20 AM
0 votes
1 answers
2024 views
Deadlocks caused by key locks and page locks
I have a database on which select, update and insert queries are run. At times it gave me deadlocks (**key locks**) while updating and selecting data from the table; the select query was chosen as the victim. After some research and examination of our queries, we added and removed certain indexes that would help speed up the queries. Now, after adding the indexes, I'm getting deadlocks (**page locks**) for the same select and update query. Why is this happening? Is there any way to reduce these deadlocks? The deadlock is between:
SELECT COUNT(*) AS [Id Count ]
FROM   TABLE
WHERE  (
           ([name]) = 'abc'
           AND (([Id]) <> '4284167' OR [Id] IS NULL)
           AND ([Stage] <> 0)
           AND (
                   ([Result]) = 'P'
                   OR ([Result] IS NULL OR [Result] = '')
                   OR ([Result]) = 'Delay'
               )
       )
...and
UPDATE TABLE
SET    [Result] = 'C'
WHERE  [Id] = 4284027
       AND [Result] = 'P'
Before altering the indexes, these queries deadlocked on key locks (index keys).

### Indexes involved in deadlocks with key locks

1. Primary key, non-clustered, on column ID
2. Clustered index on columns [Date], [Time]
3. Non-clustered index on column [Result]
4. Non-clustered index on column [Stage]

Objects involved in the deadlocks were the clustered index on [Date], [Time] and the non-clustered index on [Result].

### Indexes involved in deadlocks with page locks

1. Primary key, clustered index, on column ID
2. Non-clustered index on columns [Date], [Time]
3. Non-clustered index on columns Reffd, Date, Time, typeQuery, Result, Stage, with included columns Answer and FollowUpdate

Objects involved in the deadlocks were the primary key on [Id] and a page.

### Query plans for key lock - page lock deadlock

1. https://www.brentozar.com/pastetheplan/?id=ByK26kfyj
2. https://www.brentozar.com/pastetheplan/?id=SyzygeGJi

### Query plans for page lock - page lock deadlock

1. https://www.brentozar.com/pastetheplan/?id=S1wSNXzkj
2. https://www.brentozar.com/pastetheplan/?id=H1WhXmGys
IT Researcher143 (105 rep)
Aug 19, 2022, 07:33 AM • Last activity: Aug 23, 2022, 12:55 PM
2 votes
2 answers
1218 views
What type of locks does BEGIN TRANSACTION cause against a specific table that's being read from?
In the following example, what kinds of locks are generated by the explicit transaction block that wraps the SELECT INTO query?

BEGIN TRANSACTION T1
SELECT Field1, Field2, Field3
INTO TableB
FROM TableA
WHERE Field3 > Value1
COMMIT TRANSACTION T1

Notes:

1) TableA has about 100 billion rows, but the filter causes the query to return only about 5 million rows.
2) Field3 is a clustered index on TableA, and the generated query plan is doing a clustered index seek.
3) The isolation level in my scenario is the default (READ COMMITTED).

Do explicit transaction blocks like this potentially cause more locking (or a higher chance of lock escalation) against the table than if the query wasn't wrapped in an explicit transaction?
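One way to answer this empirically is to inspect the session's own locks from inside the transaction, before the commit releases them. A sketch reusing the question's placeholder names (TableA, TableB, Field1-3, Value1):

```sql
-- Observe which locks this session holds mid-transaction.
BEGIN TRANSACTION T1;

SELECT Field1, Field2, Field3
INTO TableB
FROM TableA
WHERE Field3 > Value1;

-- Summarize this session's locks while they are still held.
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
GROUP BY resource_type, request_mode;

COMMIT TRANSACTION T1;
```

Comparing this summary with and without the explicit BEGIN/COMMIT wrapper would show directly whether the wrapper changes what is held.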
J.D. (40893 rep)
Dec 5, 2019, 10:14 PM • Last activity: Jul 21, 2022, 12:29 PM
5 votes
2 answers
16123 views
Can I resolve a deadlock with the ROWLOCK hint?
I have a large delete stored proc and I've reproduced a deadlock in a situation where the deletes would not have deleted anything. The part of the stored proc that hit the deadlock looks like this (table names changed):

DELETE d
FROM Table1 d
    inner join dbo.Table2 orc on orc.id = d.Table2Id
    inner join dbo.Table3 orr on orr.id = orc.Table3Id
    inner join Table4 oeh on oeh.id = orr.Table4Id
    inner join @deleteEntities de on de.id = oeh.EntityId

It looks to me like two deletes were running at the same time and deadlocked each other when trying to delete from this large table. For these items I know that there would have been no records in the very large tables Table1, Table2 and Table3. I am wondering if this might be fixed by changing to:

DELETE d
FROM Table1 d WITH(rowlock)
    inner join dbo.Table2 orc on orc.id = d.Table2Id
    inner join dbo.Table3 orr on orr.id = orc.Table3Id
    inner join Table4 oeh on oeh.id = orr.Table4Id
    inner join @deleteEntities de on de.id = oeh.EntityId

My thinking is that since Table1 is a large table, SQL Server locks whole pages, and this hint will make it lock only rows. Note that I have indexed foreign keys on Table2Id, Table3Id, Table4Id and EntityId.

I have enabled tracing as described here: [Link](https://web.archive.org/web/20160206200456/http://blogs.msdn.com:80/b/bartd/archive/2006/09/09/deadlock-troubleshooting_2c00_-part-1.aspx) with:

DBCC TRACEON (1222, -1)

Below is the log output, with "2011-08-29 15:46:57.78 spid15s" cut off the start of each line. From what I see, two usp_EntityFullDelete calls are deadlocking on the same statement: one deleting row 746946 and one deleting row 628302. Am I correct in my analysis of this trace output? Is there anything else that might help prevent this?
deadlock-list deadlock victim=process3e9ada8 process-list process id=processbaf048 taskpriority=0 logused=20022 waittime=3890 schedulerid=1 kpid=1304 status=suspended spid=59 sbid=0 ecid=1 priority=0 transcount=0 lastbatchstarted=2011-08-29T15:46:53.263 lastbatchcompleted=2011-08-29T15:46:53.263 clientapp=.Net SqlClient Data Provider hostname=RGDS hostpid=9108 isolationlevel=read committed (2) xactid=1135559188 currentdb=19 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056 executionStack frame procname=Bugfixes.dbo.usp_EntityFullDelete line=178 stmtstart=11180 stmtend=11776 sqlhandle=0x030013007b725817b4bfe400499f00000100000000000000 DELETE d FROM Table1 d inner join dbo.Table2 orc on orc.id = d.Table2Id inner join dbo.Table3 orr on orr.id = orc.Table3Id inner join Table4 oeh on oeh.id = orr.Table4Id inner join @deleteEntities de on de.id = oeh.EntityId frame procname=adhoc line=1 sqlhandle=0x01001300ef337933809a04fd000000000000000000000000 exec dbo.usp_EntityFullDelete 746946,0 inputbuf process id=processbaf588 taskpriority=0 logused=20022 waittime=3906 schedulerid=1 kpid=6244 status=suspended spid=62 sbid=0 ecid=3 priority=0 transcount=0 lastbatchstarted=2011-08-29T15:46:45.637 lastbatchcompleted=2011-08-29T15:46:45.637 clientapp=.Net SqlClient Data Provider hostname=RGDS hostpid=9108 isolationlevel=read committed (2) xactid=1135558120 currentdb=19 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056 executionStack frame procname=Bugfixes.dbo.usp_EntityFullDelete line=178 stmtstart=11180 stmtend=11776 sqlhandle=0x030013007b725817b4bfe400499f00000100000000000000 DELETE d FROM Table1 d inner join dbo.Table2 orc on orc.id = d.Table2Id inner join dbo.Table3 orr on orr.id = orc.Table3Id inner join Table4 oeh on oeh.id = orr.Table4Id inner join @deleteEntities de on de.id = oeh.EntityId frame procname=adhoc line=1 sqlhandle=0x01001300fc3e1016609402c4000000000000000000000000 exec dbo.usp_EntityFullDelete 628302,0 inputbuf process 
id=process3e9a868 taskpriority=0 logused=580 waitresource=PAGE: 19:1:1942004 waittime=3890 ownerId=1135558120 transactionname=user_transaction lasttranstarted=2011-08-29T15:46:53.053 XDES=0xf2512b30 lockMode=U schedulerid=3 kpid=8808 status=suspended spid=62 sbid=0 ecid=8 priority=0 transcount=0 lastbatchstarted=2011-08-29T15:46:45.637 lastbatchcompleted=2011-08-29T15:46:45.637 clientapp=.Net SqlClient Data Provider hostname=RGDS hostpid=9108 isolationlevel=read committed (2) xactid=1135558120 currentdb=19 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056 executionStack frame procname=Bugfixes.dbo.usp_EntityFullDelete line=178 stmtstart=11180 stmtend=11776 sqlhandle=0x030013007b725817b4bfe400499f00000100000000000000 DELETE d FROM Table1 d inner join dbo.Table2 orc on orc.id = d.Table2Id inner join dbo.Table3 orr on orr.id = orc.Table3Id inner join Table4 oeh on oeh.id = orr.Table4Id inner join @deleteEntities de on de.id = oeh.EntityId frame procname=adhoc line=1 sqlhandle=0x01001300fc3e1016609402c4000000000000000000000000 exec dbo.usp_EntityFullDelete 628302,0 inputbuf process id=process3e9ada8 taskpriority=0 logused=0 waitresource=PAGE: 19:1:1928384 waittime=3765 ownerId=1135559188 transactionname=user_transaction lasttranstarted=2011-08-29T15:46:53.263 XDES=0xf2512d70 lockMode=U schedulerid=3 kpid=9196 status=suspended spid=59 sbid=0 ecid=6 priority=0 transcount=0 lastbatchstarted=2011-08-29T15:46:53.263 lastbatchcompleted=2011-08-29T15:46:53.263 clientapp=.Net SqlClient Data Provider hostname=RGDS hostpid=9108 isolationlevel=read committed (2) xactid=1135559188 currentdb=19 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056 executionStack frame procname=Bugfixes.dbo.usp_EntityFullDelete line=178 stmtstart=11180 stmtend=11776 sqlhandle=0x030013007b725817b4bfe400499f00000100000000000000 DELETE d FROM Table1 d inner join dbo.Table2 orc on orc.id = d.Table2Id inner join dbo.Table3 orr on orr.id = orc.Table3Id inner join Table4 oeh 
on oeh.id = orr.Table4Id inner join @deleteEntities de on de.id = oeh.EntityId frame procname=adhoc line=1 sqlhandle=0x01001300ef337933809a04fd000000000000000000000000 exec dbo.usp_EntityFullDelete 746946,0 inputbuf process id=process3e9b198 taskpriority=0 logused=20006 waittime=3984 schedulerid=3 kpid=9212 status=suspended spid=59 sbid=0 ecid=0 priority=0 transcount=2 lastbatchstarted=2011-08-29T15:46:53.263 lastbatchcompleted=2011-08-29T15:46:53.263 clientapp=.Net SqlClient Data Provider hostname=RGDS hostpid=9108 loginname=sa isolationlevel=read committed (2) xactid=1135559188 currentdb=19 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056 executionStack frame procname=Bugfixes.dbo.usp_EntityFullDelete line=178 stmtstart=11180 stmtend=11776 sqlhandle=0x030013007b725817b4bfe400499f00000100000000000000 DELETE d FROM Table1 d inner join dbo.Table2 orc on orc.id = d.Table2Id inner join dbo.Table3 orr on orr.id = orc.Table3Id inner join Table4 oeh on oeh.id = orr.Table4Id inner join @deleteEntities de on de.id = oeh.EntityId frame procname=adhoc line=1 sqlhandle=0x01001300ef337933809a04fd000000000000000000000000 exec dbo.usp_EntityFullDelete 746946,0 inputbuf exec dbo.usp_EntityFullDelete 746946,0 process id=process46b0da8 taskpriority=0 logused=20006 waittime=4000 schedulerid=4 kpid=6596 status=suspended spid=62 sbid=0 ecid=0 priority=0 transcount=2 lastbatchstarted=2011-08-29T15:46:45.637 lastbatchcompleted=2011-08-29T15:46:45.637 clientapp=.Net SqlClient Data Provider hostname=RGDS hostpid=9108 loginname=sa isolationlevel=read committed (2) xactid=1135558120 currentdb=19 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056 executionStack frame procname=Bugfixes.dbo.usp_EntityFullDelete line=178 stmtstart=11180 stmtend=11776 sqlhandle=0x030013007b725817b4bfe400499f00000100000000000000 DELETE d FROM Table1 d inner join dbo.Table2 orc on orc.id = d.Table2Id inner join dbo.Table3 orr on orr.id = orc.Table3Id inner join Table4 oeh on 
oeh.id = orr.Table4Id inner join @deleteEntities de on de.id = oeh.EntityId frame procname=adhoc line=1 sqlhandle=0x01001300fc3e1016609402c4000000000000000000000000 exec dbo.usp_EntityFullDelete 628302,0 inputbuf exec dbo.usp_EntityFullDelete 628302,0 process id=process46b1048 taskpriority=0 logused=0 waitresource=PAGE: 19:1:1942003 waittime=3937 ownerId=1135559188 transactionname=user_transaction lasttranstarted=2011-08-29T15:46:53.263 XDES=0xd0224ab0 lockMode=U schedulerid=4 kpid=7892 status=suspended spid=59 sbid=0 ecid=8 priority=0 transcount=0 lastbatchstarted=2011-08-29T15:46:53.263 lastbatchcompleted=2011-08-29T15:46:53.263 clientapp=.Net SqlClient Data Provider hostname=RGDS hostpid=9108 isolationlevel=read committed (2) xactid=1135559188 currentdb=19 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056 executionStack frame procname=Bugfixes.dbo.usp_EntityFullDelete line=178 stmtstart=11180 stmtend=11776 sqlhandle=0x030013007b725817b4bfe400499f00000100000000000000 DELETE d FROM Table1 d inner join dbo.Table2 orc on orc.id = d.Table2Id inner join dbo.Table3 orr on orr.id = orc.Table3Id inner join Table4 oeh on oeh.id = orr.Table4Id inner join @deleteEntities de on de.id = oeh.EntityId frame procname=adhoc line=1 sqlhandle=0x01001300ef337933809a04fd000000000000000000000000 exec dbo.usp_EntityFullDelete 746946,0 inputbuf process id=process46b16d8 taskpriority=0 logused=580 waitresource=PAGE: 19:1:441708 waittime=3937 ownerId=1135558120 transactionname=user_transaction lasttranstarted=2011-08-29T15:46:53.053 XDES=0xd0224870 lockMode=U schedulerid=4 kpid=6676 status=suspended spid=62 sbid=0 ecid=6 priority=0 transcount=0 lastbatchstarted=2011-08-29T15:46:45.637 lastbatchcompleted=2011-08-29T15:46:45.637 clientapp=.Net SqlClient Data Provider hostname=RGDS hostpid=9108 isolationlevel=read committed (2) xactid=1135558120 currentdb=19 lockTimeout=4294967295 clientoption1=671088672 clientoption2=128056 executionStack frame 
procname=Bugfixes.dbo.usp_EntityFullDelete line=178 stmtstart=11180 stmtend=11776 sqlhandle=0x030013007b725817b4bfe400499f00000100000000000000 DELETE d FROM Table1 d inner join dbo.Table2 orc on orc.id = d.Table2Id inner join dbo.Table3 orr on orr.id = orc.Table3Id inner join Table4 oeh on oeh.id = orr.Table4Id inner join @deleteEntities de on de.id = oeh.EntityId frame procname=adhoc line=1 sqlhandle=0x01001300fc3e1016609402c4000000000000000000000000 exec dbo.usp_EntityFullDelete 628302,0 inputbuf resource-list pagelock fileid=1 pageid=1928384 dbid=19 objectname=Bugfixes.dbo.Table1 id=lockaef9db80 mode=U associatedObjectId=72057595211284480 owner-list owner id=process46b0da8 mode=U waiter-list waiter id=process3e9ada8 mode=U requestType=wait exchangeEvent id=port80128a00 nodeId=22 owner-list owner event=e_waitNone type=producer id=process3e9ada8 owner event=e_waitNone type=producer id=process46b1048 waiter-list waiter event=e_waitPortClose type=consumer id=processbaf048 exchangeEvent id=port80128e20 nodeId=6 owner-list owner event=e_waitNone type=producer id=processbaf048 waiter-list waiter event=e_waitPortOpen type=consumer id=process3e9b198 pagelock fileid=1 pageid=1942004 dbid=19 objectname=Bugfixes.dbo.Table1 id=lockf3d1f080 mode=U associatedObjectId=72057595211284480 owner-list owner id=process3e9b198 mode=U waiter-list waiter id=process3e9a868 mode=U requestType=wait exchangeEvent id=port80128ed0 nodeId=22 owner-list owner event=e_waitNone type=producer id=process46b16d8 owner event=e_waitNone type=producer id=process3e9a868 waiter-list waiter event=e_waitPortClose type=consumer id=processbaf588 exchangeEvent id=port80128320 nodeId=6 owner-list owner event=e_waitNone type=producer id=processbaf588 waiter-list waiter event=e_waitPortOpen type=consumer id=process46b0da8 pagelock fileid=1 pageid=1942003 dbid=19 objectname=Bugfixes.dbo.Table1 id=lockfbcc1680 mode=U associatedObjectId=72057595211284480 owner-list owner id=process46b0da8 mode=U waiter-list waiter 
id=process46b1048 mode=U requestType=wait pagelock fileid=1 pageid=441708 dbid=19 objectname=Bugfixes.dbo.Table1 id=lockfc628980 mode=U associatedObjectId=72057595211284480 owner-list owner id=process3e9b198 mode=U waiter-list waiter id=process46b16d8 mode=U requestType=wait
Adam Butler (1531 rep)
Aug 29, 2011, 11:55 PM • Last activity: Jun 23, 2022, 08:01 AM
0 votes
1 answers
362 views
Table Lock Escalation - Performance Questions
I have a few questions on table lock escalation; please confirm or disprove:

1. Does an Index Scan operator in a query result in table lock escalation? Always, or not on small tables?
2. When a shared lock is escalated to table level, does it block small updates and deletes? Inserts as well?
3. Is it possible to get counts of the lock_escalation event, grouped by login and then by table? For example:

   login1 - table1 - 11
   login1 - table2 - 5
   login2 - table1 - 29
   etc.

Extended Events seemed to allow only one level of grouping, if I am not mistaken.
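For question 3, one approach is to capture the raw `sqlserver.lock_escalation` events to a file target and do the two-level grouping afterwards in T-SQL rather than in the live-data viewer. A sketch; the session and file names are arbitrary:

```sql
-- Capture lock escalations with the login attached, for later grouping.
CREATE EVENT SESSION [TrackLockEscalation] ON SERVER
ADD EVENT sqlserver.lock_escalation
(
    ACTION (sqlserver.server_principal_name, sqlserver.database_name)
)
ADD TARGET package0.event_file (SET filename = N'TrackLockEscalation.xel');
GO
ALTER EVENT SESSION [TrackLockEscalation] ON SERVER STATE = START;
```

The captured .xel file can then be read with sys.fn_xe_file_target_read_file, the XML shredded into login and object columns, and grouped by both in a plain GROUP BY, which sidesteps the viewer's single-level grouping limit.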
Aleksey Vitsko (6195 rep)
Oct 23, 2021, 06:01 PM • Last activity: Oct 25, 2021, 04:31 PM
1 votes
1 answers
226 views
Does SQL Server upgrade to table locks at total locks or session locks?
I've a table that has both mass sequential inserts at the end of the clustered index and random (very distributed) reads + updates. Naturally, the mass inserts should not block the random access. RCSI is used, so the read-only queries shouldn't affect the lock count (?) in relation to the sequential insert.

My concern is that, even when limiting the maximum number of locks taken during the insert (e.g. inserting in batches), it is possible for one (or more) of the OLTP updates to push the session past this limit. If the lock count heuristic is per-session then it is less of a potential issue. Given the answer to the question in the title, then, what is the "best" way to prevent table lock escalation here?

My current approach/thought is to select a row count (e.g. an arbitrary 1-4k) during the mass inserts to allow "some slack", although this feels imprecise overall. While batches are essentially a way to deal with replication and such, it would be nice to specify a batch size of 5k rows and move on. (To be fair, quick table locks aren't really the issue: the intent of the question is more about _finding the edge_ such that table lock escalation doesn't happen.)

There has been DBA pushback on both 1) disabling row locks (to ensure page locks and thus reduce lock counts) and 2) disabling table lock escalation (with forced page locks to minimize the worst case). Are there any other relevant database properties to consider with respect to lock escalation? (Increasing the lock limit to, say, 10k would then allow a much larger "slack" batch size.)

Using WITH (PAGLOCK) in the batch inserts results in page-row deadlocks with the current instead-of triggers and generally appears to be a pain to get correct, although the instead-of triggers themselves are currently on the axing block for various technical complications. This hint also only increases the "slack". I don't suppose there is a NOINCRLOCKCOUNT hint that has been overlooked..
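For reference, the table-level escalation settings debated in the DBA pushback are documented ALTER TABLE options; `dbo.BigTable` is a placeholder:

```sql
-- TABLE (the default): escalate straight to a table lock.
ALTER TABLE dbo.BigTable SET (LOCK_ESCALATION = TABLE);

-- AUTO: on a partitioned table, escalate only to the partition (HoBT) level.
ALTER TABLE dbo.BigTable SET (LOCK_ESCALATION = AUTO);

-- DISABLE: do not escalate (with narrow internal exceptions).
ALTER TABLE dbo.BigTable SET (LOCK_ESCALATION = DISABLE);
```

None of these changes the per-statement lock-count heuristic itself; they only control what the escalation target is, which is why the question about per-session versus total counting still matters.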
user2864740 (205 rep)
Apr 18, 2021, 07:54 PM • Last activity: Apr 19, 2021, 07:39 PM
1 votes
2 answers
109 views
Lock escalation issue when using a DDL trigger
I have a problem: I am trying to create a log table for my database so I can keep track of changes. I have created the following trigger, which runs when a table is created, altered or dropped:
CREATE TRIGGER TableTrigger
ON DATABASE
FOR
	CREATE_TABLE,
	ALTER_TABLE,
	DROP_TABLE
AS
BEGIN
	SET NOCOUNT ON;
	INSERT INTO TableLog (
		EventDate,
		EventType,
		Existing_Table_Name,
		New_Table_Name,
		Changed_By
	)
	VALUES (
		GETDATE(),
		EVENTDATA().value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(100)'),
		EVENTDATA(),
		EVENTDATA(),
		USER
	);
END;
GO
But when, for example, I change the name of a column in a table, the event data CommandText XML shows this:
ALTER TABLE dbo.Languages SET (LOCK_ESCALATION = TABLE)
instead of the full command. How do I stop this, and how can I see the full command I actually ran?
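For what it's worth, EVENTDATA() exposes the statement text under the documented /EVENT_INSTANCE/TSQLCommand/CommandText path. A sketch of extracting the individual pieces explicitly; mapping ObjectName and CommandText onto the question's Existing_Table_Name and New_Table_Name columns is an assumption about that log schema:

```sql
-- Sketch for the trigger body: shred EVENTDATA() into typed columns.
-- EVENTDATA() is only non-NULL inside a DDL trigger.
DECLARE @e XML = EVENTDATA();

INSERT INTO TableLog (EventDate, EventType, Existing_Table_Name, New_Table_Name, Changed_By)
VALUES (
    GETDATE(),
    @e.value('(/EVENT_INSTANCE/EventType)[1]',  'NVARCHAR(100)'),
    @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)'),
    @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)'),
    USER
);
```

Note that an ALTER_TABLE event fires once per ALTER TABLE statement the engine executes, so tool-generated statements (such as SSMS issuing its own SET (LOCK_ESCALATION = ...)) appear as separate events alongside the one for your own command.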
Oliver Smith (11 rep)
Dec 3, 2020, 12:58 PM • Last activity: Dec 3, 2020, 04:41 PM
5 votes
1 answers
14211 views
What is Postgres lock behaviour when UPDATE WHERE clause does a table scan?
Say you have a large table with tens of millions of rows. You want to UPDATE large_table SET col=value WHERE col=other_value... but col is not indexed, and an EXPLAIN shows that this query will perform a seq scan over the whole table.

What is the lock behaviour here? According to most accounts Postgres only locks the affected rows of an UPDATE query and does not have lock escalation. So does it search for the rows to update first, then lock only the found rows? It seems like there would potentially be problems with other queries updating rows concurrently in that case, though. Or does it lock each row "as it finds them", i.e. locking rows progressively as it goes through the seq scan?

So I think the best case here is that it locks rows as it finds them, and the affected rows (only) will be locked for up to as long as the UPDATE query takes to complete. But I am worried that this query could instead end up blocking all writes to the table until it completes. I have read https://habr.com/en/company/postgrespro/blog/503008/ and I think the worst case will not happen, but https://blog.heroku.com/curious-case-table-locking-update-query is a possibly inaccurate representation of similar info that gives me some doubts.

The application only uses SELECT, SELECT FOR UPDATE and UPDATE queries (i.e. no other explicit locks are taken apart from those). The table has foreign keys to other tables, and other tables have foreign keys to this table. We're on Postgres 11.
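One way to watch this empirically from a second session while the UPDATE runs; `large_table` is the question's placeholder name:

```sql
-- From a second session during the seq-scan UPDATE: the updating backend
-- holds only a table-level RowExclusiveLock, which does not conflict with
-- other readers or writers. Individual row locks are stored on the tuples
-- themselves and only show up in pg_locks while some backend is actually
-- waiting on one.
SELECT relation::regclass AS relation, mode, granted
FROM pg_locks
WHERE relation = 'large_table'::regclass;
```

Because the table-level lock taken by UPDATE is RowExclusiveLock regardless of how many rows it touches, there is no count-based promotion to observe; contention appears only on the specific rows already modified.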
Anentropic (558 rep)
Oct 27, 2020, 12:15 PM • Last activity: Oct 28, 2020, 11:28 AM
7 votes
1 answers
2144 views
How to avoid Table Lock Escalation?
I've got a task to update 5 million rows in a production table, without locking down the whole table for extended time So, I used approach that helped me before many times - updating top (N) rows at a time with 1-N second interval between chunks This time started with update top (1000) rows at a tim...
I've got a task to update 5 million rows in a production table without locking the whole table for an extended time. So I used an approach that has helped me many times before: updating the top (N) rows at a time, with a 1-N second interval between chunks.

This time I started with update top (1000) rows at a time, monitoring the Extended Events session for lock_escalation events in the process. lock_escalation showed up during each update operation, so I started lowering the row count per chunk: 1000 -> 500 -> 200 -> 100 -> 50 rows, and so on down to 1. Previously (not with this table, and for delete operations rather than updates), lowering the row count to 200 or 100 was enough to get rid of lock_escalation events.

But this time, even with 1 row per update operation, table lock_escalation still shows up. The duration of each update operation is about the same whether it's 1 row or 1000 rows at a time. How do I get rid of table lock escalations in my case? @@TRANCOUNT is zero. Extended Events code:
set nocount on

declare 
	@ChunkSize				int = 1000,							-- rows to update per chunk
	@TimeBetweenChunks		char(8) = '00:00:01', 				-- interval between chunks
	
	@Start					datetime,
	@End					datetime,
	@Diff					int,
	
	@MessageText			varchar(500),
	
	@counter				int = 1,
	@RowCount				int = 1,
	@TotalRowsToUpdate		bigint,
	@TotalRowsLeft			bigint
	


-- total row count to update
set @TotalRowsToUpdate = (select count(*)
							from [Table1]
								join [Table2] on
									btid = tBtID
							where	btStatusID = 81)


set @TotalRowsLeft = @TotalRowsToUpdate
set @MessageText = 'Total Rows to Update = ' + cast(@TotalRowsLeft as varchar)
raiserror (@MessageText,0,1) with nowait
print ''



-- begin cycle
while @RowCount > 0 begin

	set @Start = getdate()

	-- update packages
	update top (@ChunkSize) bti
		set	btstatusid = 154,
			btType = 1
	from [Table1] bti
		join [Table2] on
			btid = tBtID
	where	btStatusID = 81
	

	set @RowCount = @@ROWCOUNT

	-- measure time
	set @End = getdate()
	set @Diff = datediff(ms,@Start,@End)

	set @TotalRowsLeft = @TotalRowsLeft - @RowCount
	set @MessageText = cast(@counter as varchar) + ' - Updated ' + cast(@RowCount as varchar) + ' rows in ' + cast(@Diff as varchar) + ' milliseconds - total ' + cast(@TotalRowsLeft as varchar) + ' rows left...'

	-- print progress message
	raiserror (@MessageText,0,1) with nowait


	set @counter += 1

	WAITFOR DELAY @TimeBetweenChunks

end
Plan: https://www.brentozar.com/pastetheplan/?id=SyozGWMLw
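Besides shrinking the chunk size, escalation can also simply be switched off for the affected table; a hedged sketch using the question's placeholder table name (this keeps locks at row/page granularity at the cost of more lock memory, so it is worth monitoring):

```sql
-- Keep locks at row/page granularity for this table only;
-- SQL Server will no longer escalate to a table lock for it.
ALTER TABLE [Table1] SET (LOCK_ESCALATION = DISABLE);
```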
Aleksey Vitsko (6195 rep)
Sep 30, 2020, 11:02 AM • Last activity: Oct 1, 2020, 03:43 PM
5 votes
1 answers
542 views
Lock Escalation and Count Discrepancy in lock_acquired Extended Event
I'm trying to understand why there is a discrepancy in lock count in `sys.dm_tran_locks` and `sqlserver.lock_acquired` extended event in certain cases. Here is my repro script, I'm using the `StackOverflow2013` database on SQL Server 2019 RTM, compat level 150. /* Initial Setup */ IF OBJECT_ID('dbo....
I'm trying to understand why there is a discrepancy in lock counts between sys.dm_tran_locks and the sqlserver.lock_acquired extended event in certain cases. Here is my repro script; I'm using the StackOverflow2013 database on SQL Server 2019 RTM, compat level 150.

/* Initial Setup */
IF OBJECT_ID('dbo.HighQuestionScores', 'U') IS NOT NULL
    DROP TABLE dbo.HighQuestionScores;

CREATE TABLE dbo.HighQuestionScores
(
    Id INT PRIMARY KEY CLUSTERED,
    DisplayName NVARCHAR(40) NOT NULL,
    Reputation BIGINT NOT NULL,
    Score BIGINT
)

INSERT dbo.HighQuestionScores (Id, DisplayName, Reputation, Score)
SELECT u.Id, u.DisplayName, u.Reputation, NULL
FROM dbo.Users AS u;

CREATE INDEX ix_HighQuestionScores_Reputation ON dbo.HighQuestionScores (Reputation);

Next I update the table statistics with a large fake row count:

/* Chaotic Evil. */
UPDATE STATISTICS dbo.HighQuestionScores WITH ROWCOUNT = 99999999999999;
DBCC FREEPROCCACHE WITH NO_INFOMSGS;

Then I open a transaction and update Score for Reputation, say 56:

BEGIN TRAN;

UPDATE dbo.HighQuestionScores
SET Score = 1
WHERE Reputation = 56 /* 8066 records */
AND 1 = (SELECT 1);

/* Source: https://www.erikdarling.com/sql-server/helpers-views-and-functions-i-use-in-presentations/ Thanks, Erik */
SELECT *
FROM dbo.WhatsUpLocks(@@SPID) AS wul
WHERE wul.locked_object = N'HighQuestionScores'

ROLLBACK;

I get a bunch of page locks (despite having an index on Reputation). I'm guessing the bad estimates really did a number on the optimizer there. (screenshot: page_locks) I also double-checked using sp_whoisactive and it returns the same information. Meanwhile I also have an extended event running on sqlserver.lock_acquired separately. When I look at the grouped data I see **8066** page locks instead of the initial **6159**. (screenshot: extended_event_lock_count) I definitely do not see a lock escalation (verified using the sqlserver.lock_escalation event), so I guess my question is: why does the extended event show a higher lock count?
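For a like-for-like comparison with the grouped Extended Events output, the DMV side can be aggregated the same way; a minimal sketch run from the session holding the open transaction:

```sql
-- Group the current session's locks by type and mode,
-- mirroring the grouped view of the lock_acquired events.
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
GROUP BY resource_type, request_mode
ORDER BY lock_count DESC;
```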
Arun Gopinath (522 rep)
Aug 31, 2020, 09:48 AM • Last activity: Aug 31, 2020, 12:09 PM
0 votes
1 answers
220 views
SQL Server Lock Escalation clarity required
I want to know whether Enabling the Trace Flag 1211 globally prevents the Lock Escalation even in the event of the Huge Memory Pressure. I have 2 Questions regarding this:- 1. After Enabling Trace Flag do we still need to alter each and every Table in the User Database to Disable Page Locks. 2. Supp...
I want to know whether enabling trace flag 1211 globally prevents lock escalation even under heavy memory pressure. I have 2 questions regarding this: 1. After enabling the trace flag, do we still need to alter each and every table in the user database to disable page locks? 2. If the answer is yes, would we also need to disable page locks on each and every nonclustered index on those tables to prevent page locks / lock escalation?
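For context on question 1: trace flag 1211 only disables escalation; page-level locking is a per-index setting and has to be turned off separately if that is the goal. A sketch with hypothetical object names:

```sql
-- Disabling page locks is done index by index, not via the trace flag.
ALTER INDEX IX_SomeIndex ON dbo.SomeTable
SET (ALLOW_PAGE_LOCKS = OFF);
```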
Jeetendra Pardeshi (27 rep)
Aug 18, 2020, 07:40 AM • Last activity: Aug 18, 2020, 05:42 PM
1 votes
0 answers
128 views
MSSQL - Lock escalation on partitioned indexed view
I'm using partitioned tables where I set `LOCK_ESCALATION` to `AUTO` to prevent lock escalation to table level, but keep it on partition level. On the top of that table I have indexed view which is partitioned on the same scheme, but updates on underlying table are escalating locks on whole indexed...
I'm using partitioned tables where I set LOCK_ESCALATION to AUTO to prevent lock escalation to the table level and keep it at the partition level. On top of that table I have an indexed view partitioned on the same scheme, but updates on the underlying table escalate locks on the whole indexed view, which basically cancels the effect of partition-level escalation on the underlying table... I would also like to escalate the indexed view's locks only to the partition level, but I can't find a way to set this on an indexed view, as ALTER VIEW doesn't allow SET options to be specified. I wasn't able to find any other way to enable it, so I'm assuming it's not possible. Or does anyone know of a way to set it?
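For reference, the partition-level setting described for the base table looks like the statement below; as the question notes, ALTER VIEW exposes no equivalent for the indexed view itself (table name hypothetical):

```sql
-- AUTO escalates to the partition (HoBT) level rather than
-- the whole table when the table is partitioned.
ALTER TABLE dbo.PartitionedTable SET (LOCK_ESCALATION = AUTO);
```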
Lukas.Navratil (355 rep)
Oct 11, 2019, 10:10 AM
2 votes
1 answers
1982 views
Queries from different tables locking each other (LCK_M_X)
Background: Was hired to work with JMeter for performance testing but I'm also getting more and more into SQL performance (because of time, and because I like it). I still suck, and will for a long time. I will try to explain the best I can. I have a table (`NUMERATION`) that is updated by many tran...
Background: Was hired to work with JMeter for performance testing but I'm also getting more and more into SQL performance (because of time, and because I like it). I still suck, and will for a long time. I will try to explain the best I can. I have a table (NUMERATION) that is updated by many transactions. It holds the counters used to generate most unique identifiers, so it gets a lot of updates. This query
UPDATE Numeration WITH (ROWLOCK) SET NumCounter = NumCounter + 1 WHERE Tid = @p0
...is sometimes blocked by queries that have no relationship to this one, and I have no idea why. There are no triggers on the tables. There is a foreign key that references a unique identifier in another table; I don't know how to map all the relationships, but I think both queries have foreign keys on this identifier. I used sp_WhoIsActive and sp_BlitzFirst to see the session IDs and the type of lock (LCK_M_X). The query plans are below: https://www.brentozar.com/pastetheplan/?id=r1ebGh7Lr Hope you can help me with (not) my homework.
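Alongside sp_WhoIsActive, a starting point for mapping which session blocks which is the standard DMVs; a minimal sketch:

```sql
-- List requests that are currently blocked, who is blocking them,
-- and the SQL text each blocked request is running.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS current_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```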
xicotaslb (31 rep)
Sep 9, 2019, 11:20 AM • Last activity: Sep 10, 2019, 01:06 PM
0 votes
2 answers
1515 views
Why Table Lock Escalation is happening in my scenario
I have a table Table1 that is updated in small chunks using below query: ``` update top (1000) Table1 set VarcharColumn1 = 'SomeValue' from Table1 where ID in (select ID from Table2) and VarcharColumn1 is NULL ``` Additional details: Table2 has 90000 rows, and total number of rows that need to be up...
I have a table Table1 that is updated in small chunks using below query:
update top (1000) Table1
    set VarcharColumn1 = 'SomeValue'
from Table1
where ID in (select ID from Table2)
      and VarcharColumn1 is NULL
Additional details: Table2 has 90000 rows, and the total number of rows that need to be updated in Table1 is also 90000 (a 1-to-1 relationship). Also, when Table1 rows are updated, a trigger on Table1 inserts the pre-update versions of the rows into table Table1History. So when I update 1000 rows in Table1, 1000 rows are inserted into Table1History.

Question: When I update the top 100 rows, no table lock escalation happens. I monitor this using the Extended Events "lock_escalation" event, and also in Performance Monitor - SQLServer:Access Methods - Table Lock Escalations/sec. When I update the top 1000 or 500 rows, table lock escalation IS happening on Table1.

So I wonder, what is the mechanism or formula SQL Server uses to escalate locking to the table level? Google says the threshold is 5000 locks, but obviously in my case 1000 or even 500 rows are enough to cause table lock escalation on Table1. (screenshot: Extended Events lock_escalation)
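For anyone reproducing this, the monitoring setup described above can be created roughly as follows; the session and file names are hypothetical:

```sql
CREATE EVENT SESSION [track_lock_escalation] ON SERVER
ADD EVENT sqlserver.lock_escalation
(
    ACTION (sqlserver.sql_text, sqlserver.session_id)
)
ADD TARGET package0.event_file (SET filename = N'lock_escalation.xel');

ALTER EVENT SESSION [track_lock_escalation] ON SERVER STATE = START;
```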
Aleksey Vitsko (6195 rep)
Aug 14, 2019, 09:56 AM • Last activity: Aug 14, 2019, 04:31 PM
1 votes
0 answers
236 views
DeadLock - Two Process Locked Same Index
We have met a scenario that two process acquired exclusive lock on same index ans the same time two are requesting for shared lock to other process which eventually leads to dead lock. I am still not sure how come **Select Statement acquires exclusive lock**!! [![enter image description here][1]][1]...
We have hit a scenario where two processes acquired exclusive locks on the same index, and at the same time each was requesting a shared lock held by the other, which eventually led to a deadlock. I am still not sure how a **SELECT statement acquires an exclusive lock**!! (deadlock graph screenshot) DeadLock XML :: Deadlock Graph for the Reference (deadlock XML screenshot) Additional code block:

IF (SELECT COUNT(1)
    FROM (SELECT b.Shop
          FROM [Batch] b
          INNER JOIN [Process] o ON o.procNo = b.procNo
          WHERE b.Ship = N'P') B) > 0
    UPDATE b4
    SET StringVal3 = (SELECT TOP 1 Shop
                      FROM (SELECT b.Shop
                            FROM [Batch] b
                            INNER JOIN [Process] o ON o.procNo = b.procNo
                            WHERE b.Ship = N'P') B)
    FROM [Batch] b4
    WHERE b4.Ship = N'P'
          AND b4.Unique = 0
          AND b4.Flag = 4
Parthi (11 rep)
Nov 26, 2018, 11:01 AM • Last activity: Nov 26, 2018, 04:17 PM
Showing page 1 of 20 total questions