
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

15 votes
3 answers
9336 views
What is the point of TRY CATCH block when XACT_ABORT is turned ON?
Code sample with XACT_ABORT ON:

SET XACT_ABORT ON;
BEGIN TRY
    BEGIN TRANSACTION
    -- do multiple lines of sql here
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    IF (@@TRANCOUNT > 0) ROLLBACK;
    -- maybe print/log/throw error
END CATCH

Since XACT_ABORT is ON, any error will automatically roll back the transaction. So what purpose does the TRY CATCH block serve?
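Even with XACT_ABORT ON, the CATCH block is what lets you observe the failure: without it the batch simply aborts, whereas with it you can roll back cleanly, capture diagnostics, and rethrow. A minimal sketch of that pattern (the UPDATE is a hypothetical placeholder):

SET XACT_ABORT ON;
BEGIN TRY
    BEGIN TRANSACTION;
    UPDATE dbo.SomeTable SET SomeCol = 1 WHERE Id = 42; -- hypothetical work
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    -- the CATCH block is the only place to log before the error escapes
    PRINT CONCAT('Error ', ERROR_NUMBER(), ': ', ERROR_MESSAGE());
    THROW; -- rethrow so the caller still sees the original error
END CATCH;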
variable (3590 rep)
Feb 1, 2022, 06:10 AM • Last activity: Sep 20, 2025, 02:18 AM
0 votes
3 answers
140 views
Extended Events sessions stops collection when edited while the session is on run
I created an Extended Events session based on the Standard SQL template, adding only a global filter on sqlserver.database_name for a particular DB, and started watching live data. I then edited the session to add a few more events and clicked OK; the session was running while the edit happened. After that, Watch Live Data stopped adding rows. I refreshed the 'Sessions' folder and the created session, but Watch Live Data didn't resume. I closed it and reopened it by right-clicking the session name and choosing 'Watch Live Data'; it now shows 'Retrieving event information from server' but never adds any events. When I checked the target data from the event file, the last event traced is from before the edit. Though a day has passed, the target data never gets updated, and I have to create a new session. Is this how Extended Events sessions behave, or am I doing something wrong? Kindly help; I can't keep creating a new session for every edit we need. Thank you! The Standard XEvent Profiler is also not working (screenshot omitted). Here is the script of the session (only the session name and DB name are changed):

CREATE EVENT SESSION [SessionName] ON SERVER
ADD EVENT sqlserver.database_xml_deadlock_report(
    ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
    WHERE ([sqlserver].[database_name]=N'DBName')),
ADD EVENT sqlserver.existing_connection(SET collect_database_name=(1),collect_options_text=(1)
    ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
    WHERE ([sqlserver].[database_name]=N'DBName')),
ADD EVENT sqlserver.lock_deadlock_chain(SET collect_database_name=(1),collect_resource_description=(1)
    ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)),
ADD EVENT sqlserver.rpc_starting(SET collect_data_stream=(1)
    ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
    WHERE ([package0].[equal_boolean]([sqlserver].[is_system],(0)) AND [sqlserver].[database_name]=N'DBName')),
ADD EVENT sqlserver.sp_statement_completed(SET collect_object_name=(1)
    ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
    WHERE ([sqlserver].[database_name]=N'DBName')),
ADD EVENT sqlserver.sql_batch_completed(
    ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
    WHERE ([sqlserver].[database_name]=N'DBName')),
ADD EVENT sqlserver.sql_batch_starting(
    ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
    WHERE ([package0].[equal_boolean]([sqlserver].[is_system],(0)) AND [sqlserver].[database_name]=N'DBName')),
ADD EVENT sqlserver.sql_statement_completed(SET collect_parameterized_plan_handle=(1)
    ACTION(package0.event_sequence,package0.process_id,sqlos.cpu_id,SQLSatellite.AppName,sqlserver.client_app_name,sqlserver.client_connection_id,sqlserver.client_hostname,sqlserver.client_pid,sqlserver.database_name,sqlserver.nt_username,sqlserver.query_hash,sqlserver.session_id,sqlserver.session_nt_username,sqlserver.sql_text,sqlserver.transaction_id,sqlserver.transaction_sequence,sqlserver.username)
    WHERE ([sqlserver].[database_name]=N'DBName'))
ADD TARGET package0.event_file(SET filename=N'E:\SessionName.xel')
WITH (MAX_MEMORY=8192 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=5 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=PER_CPU,TRACK_CAUSALITY=ON,STARTUP_STATE=ON)
GO

The global filter I mentioned is applied to all events (screenshot omitted).
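If this happens again, a stop/start of the session (rather than recreating it) may unblock the live-data stream and roll the event file over; a minimal sketch using the session name from the script above:

-- check whether the session is actually running
SELECT name FROM sys.dm_xe_sessions;

ALTER EVENT SESSION [SessionName] ON SERVER STATE = STOP;
ALTER EVENT SESSION [SessionName] ON SERVER STATE = START;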
Meera K (81 rep)
Aug 23, 2024, 07:08 AM • Last activity: Sep 20, 2025, 02:06 AM
1 votes
1 answers
56 views
Question on Index Fragmentation and Index Rebuilds
I am currently sitting with an Azure-managed SQL Server database, with all my table indexes sitting at >99% fragmentation. For some reason this instance was set to use DTUs; it's on a 250 DTU tier. These tables are constantly written to and read from, and data is deleted nightly. Is there a way I can rebuild the indexes while things are running, without the risk of any of the other queries timing out? Or do I need to schedule some downtime and kill the connections so that I can rebuild the indexes? Do I perhaps need to scale this DB up temporarily to something like 500 DTUs? I would appreciate any advice on what would be a good or correct way to handle this. (Yes, there is no DBA; no one was looking after this DB. I happened to notice this and now sit with this problem :(... )
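On Azure SQL Database, online (and optionally resumable) rebuilds are available, which lets you rebuild without a maintenance window at the cost of extra log and DTU usage; a minimal sketch (index and table names are placeholders):

ALTER INDEX IX_YourIndex ON dbo.YourTable
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- a resumable rebuild can be paused if it starts to hurt the workload
ALTER INDEX IX_YourIndex ON dbo.YourTable PAUSE;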
Chaos (21 rep)
Sep 18, 2025, 10:50 PM • Last activity: Sep 19, 2025, 10:24 PM
0 votes
1 answers
289 views
Can not Truncate the database because “secondaries has no log was added”
I am a newbie to SQL Server. I got an error when trying to truncate (shrink) the database:

> Database cannot shrink until all secondaries have moved past the point where the log was added.

When I check the status of the secondary on the primary machine (Always On High Availability Dashboard), it shows

> Health: good

and

> Synchronization state: Synchronizing

but the status of the database on the secondary is NULL, not ONLINE. When I check Always On High Availability on the secondary machine, its state is "No data available on secondary replicas". **Question:** What should I do, and what is going on with the database? (I want to shrink because there is not enough free space left. I use E:\ for all database files; it has 421.5 GB used and 78.5 GB free.) Here is how I checked the status, using the query below, with results from the primary machine (pic 1) and the secondary machine (pic 2):
SELECT d.name, r.replica_server_name, ars.role_desc, ars.operational_state_desc,
		drs.synchronization_state_desc, ars.connected_state_desc,
		drs.last_hardened_lsn, d.log_reuse_wait_desc
FROM sys.availability_databases_cluster AS adc 
JOIN sys.databases AS d ON adc.group_database_id = d.group_database_id 
JOIN sys.dm_hadr_database_replica_states AS drs ON d.database_id = drs.database_id 
JOIN sys.availability_replicas AS r ON drs.replica_id = r.replica_id 
JOIN sys.dm_hadr_availability_replica_states ars ON r.replica_id = ars.replica_id
This is the result on the primary machine (pic 1) and this is the result on the secondary machine (pic 2); screenshots omitted. I have tried suspending, taking offline, and re-joining the secondary replica databases on the secondary machine (APEX-SQL2), but nothing has changed...

> Additionally, I ran the 3 queries below (screenshots omitted).

*Thank you for reading. Please help.*
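When the log cannot be truncated because of a secondary, SQL Server reports the reason in log_reuse_wait_desc; a quick check worth running (database name is a placeholder):

SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDb';

A value of AVAILABILITY_REPLICA confirms that a secondary has not yet hardened or redone the log, which also blocks shrinking past that point.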
Anh Nguyen (1 rep)
Apr 10, 2023, 04:58 PM • Last activity: Sep 19, 2025, 09:06 PM
7 votes
4 answers
888 views
Using a filtered index when setting a variable
I know from other questions and posts that when SQL Server compiles a query plan, it can only use a filtered index if the filtered index is guaranteed to be usable every time the query runs. This means you can't use a variable in the WHERE clause, because sometimes the filtered index might be usable and sometimes not. One way around this is to use OPTION (RECOMPILE), so that when the index can be used, the plan gets it. Doing some testing, I found that this query can use a filtered index (note: I'm forcing the index just to prove a point):
SELECT MAX(table1.SomeDateField)
FROM		dbo.table1 WITH(INDEX(MyFilteredIndex))
WHERE		table1.filteredColumn = @variable
OPTION (RECOMPILE)
However, if I want to assign the result to a variable, I'm bang out of luck:
SELECT @OutputVariable = MAX(table1.SomeDateField)
FROM		dbo.table1 WITH(INDEX(MyFilteredIndex))
WHERE		table1.filteredColumn = @variable
OPTION (RECOMPILE)
results in:

> Msg 8622, Level 16, State 1, Line 15
> Query processor could not produce a query plan because of the hints defined in this query. Resubmit the query without specifying any hints and without using SET FORCEPLAN.

The query can clearly use the filtered index, as it runs fine when I don't save the output to a variable. I have ways of rewriting this query to hard-code @variable to remove the problem, but can someone explain why the first query can use the filtered index while the second query can't?
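One workaround consistent with the hard-coding approach mentioned above is dynamic SQL that embeds the filter value as a literal, so the compiled plan is guaranteed to match the filtered index; a minimal sketch (column types are assumed):

DECLARE @sql nvarchar(max) = N'
SELECT @out = MAX(t.SomeDateField)
FROM dbo.table1 AS t WITH (INDEX(MyFilteredIndex))
WHERE t.filteredColumn = ' + CONVERT(nvarchar(20), @variable) + N';';

EXEC sys.sp_executesql
    @sql,
    N'@out datetime OUTPUT',
    @out = @OutputVariable OUTPUT;

Because the literal is baked into the statement text, the usability of the filtered index no longer depends on a parameter value at compile time.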
Greg (3292 rep)
Dec 18, 2020, 01:18 AM • Last activity: Sep 19, 2025, 08:22 PM
-1 votes
1 answers
274 views
connection String and Listener
I am using SQL Server 2017. I would like to know how a connection string can be used to talk to databases via a listener rather than connecting directly to the DB.
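Assuming this refers to an Always On availability group listener, the application simply points its connection string at the listener name instead of a server name; a minimal sketch (listener, port, and database names are placeholders):

Server=tcp:MyAgListener,1433;Database=MyDb;Integrated Security=SSPI;MultiSubnetFailover=True;

MultiSubnetFailover=True makes the client try all listener IPs in parallel, which speeds up connections after a failover.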
KIBALA keys (1 rep)
Mar 13, 2020, 07:41 PM • Last activity: Sep 19, 2025, 07:07 PM
0 votes
0 answers
16 views
How to let multiple SQL Server instances use PolyBase (best practice with Linked Servers)?
I know PolyBase can only be installed on one SQL Server instance per host.

**In my setup:**
- Instance A already uses PolyBase for its own external data queries.
- Instance B (on the same server) also needs access to external data.

Since I can't install PolyBase twice, my plan is for B to query A via a Linked Server.
- Windows Server 2019 running SQL Server 2019 on-prem

**Questions:**
1. Is using Linked Servers in this way considered best practice, or are there cleaner alternatives?
2. Any security/authentication recommendations (Kerberos, least privilege)?
3. What about performance (four-part names vs. OPENQUERY, predicate pushdown)?
4. Should I expose views in A for B to query, or reference A's external tables directly?
5. Any known operational issues (timeouts, monitoring, transactions)?

I haven't tried it yet, so I'd appreciate any guidance or alternative approaches that work without adding a second server.
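On question 4, exposing views in A is the more common pattern, since it decouples B from the external table definitions; a minimal sketch of the pair of objects involved (all names are placeholders, and MSOLEDBSQL is assumed as the provider):

-- On instance A: wrap the external table in a view
CREATE VIEW dbo.vExternalSales
AS
SELECT SaleID, Amount FROM ext.Sales; -- ext.Sales is the PolyBase external table

-- On instance B: register A as a linked server, then query through OPENQUERY
EXEC master.dbo.sp_addlinkedserver
    @server = N'INSTANCE_A',
    @srvproduct = N'',
    @provider = N'MSOLEDBSQL',
    @datasrc = N'HOST\INSTANCEA';

SELECT *
FROM OPENQUERY([INSTANCE_A], 'SELECT SaleID, Amount FROM MyDb.dbo.vExternalSales');

OPENQUERY executes the whole inner statement on A, so any predicate pushdown PolyBase can do happens there rather than after the rows reach B.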
Jonasstadi (11 rep)
Sep 19, 2025, 08:42 AM
0 votes
1 answers
280 views
SQL Server 2017 - Read-Committed Snapshot isolation update deadlocks
I have a stored procedure that runs concurrently on a schedule and is causing hundreds of deadlocks a day. The transaction is managed by the application and the database uses read-committed snapshot isolation. The SQL that is deadlocking is

UPDATE dbo.Job
SET IsCalculated = 0
WHERE RfJobStatusID = 2
  AND LineID IN (SELECT LineID FROM dbo.Line WHERE PlantID = @nJobPlantID)
  AND IsCalculated = 1;

(Execution plan and deadlock graph screenshots omitted.) The deadlock XML identifies two instances of the same procedure (Database Id = 6, Object Id = 1160807643) running this UPDATE against each other. The index is

CREATE NONCLUSTERED INDEX IM_RfJobStatusID_IsCalculated
ON dbo.Job (RfJobStatusID ASC, IsCalculated ASC)
INCLUDE (WOID, EstimatedEndDT, LineID, EstimatedStartDT)

And the table structure is

CREATE TABLE dbo.Job(
    JobID bigint NOT NULL,
    WOID bigint NOT NULL,
    RfJobStatusID tinyint NOT NULL,
    JobQualityStatusID tinyint NULL,
    DeadlineDT datetimeoffset(7) NOT NULL,
    ActualStartDT datetimeoffset(7) NOT NULL,
    ActualEndDT datetimeoffset(7) NOT NULL,
    PlannedStartDT datetimeoffset(7) NOT NULL,
    PlannedEndDT datetimeoffset(7) NOT NULL,
    PlannedDuration float NOT NULL,
    EstimatedStartDT datetimeoffset(7) NOT NULL,
    EstimatedEndDT datetimeoffset(7) NOT NULL,
    EstimatedDuration float NOT NULL,
    ScheduledSequence smallint NOT NULL,
    LineID int NOT NULL,
    ProductionStartDT datetimeoffset(7) NOT NULL,
    ProductionEndDT datetimeoffset(7) NOT NULL,
    TargetCycleTime float NULL,
    TargetCycleTimeQuantity float NOT NULL,
    TargetManPower float NULL,
    TargetSetup float NULL,
    TargetTearDown float NULL,
    TargetFixedTime float NULL,
    QuantityOrdered float NOT NULL,
    IsCalculated tinyint NOT NULL,
    LineGroupID int NOT NULL,
    PassName nvarchar(50) NOT NULL,
    WorkOrderReleaseStateID int NOT NULL,
    ProductionCampaignID bigint NULL,
    CampaignSequence smallint NULL,
    IsFirstInSequence tinyint NULL,
    IsLastInSequence tinyint NULL,
    ProductionSequence smallint NULL,
    DeadlineLocalDT AS (CONVERT(datetime2,DeadlineDT)) PERSISTED,
    ActualStartLocalDT AS (CONVERT(datetime2,ActualStartDT)) PERSISTED,
    ActualEndLocalDT AS (CONVERT(datetime2,ActualEndDT)) PERSISTED,
    PlannedStartLocalDT AS (CONVERT(datetime2,PlannedStartDT)) PERSISTED,
    PlannedEndLocalDT AS (CONVERT(datetime2,PlannedEndDT)) PERSISTED,
    EstimatedStartLocalDT AS (CONVERT(datetime2,EstimatedStartDT)) PERSISTED,
    EstimatedEndLocalDT AS (CONVERT(datetime2,EstimatedEndDT)) PERSISTED,
    ProductionStartLocalDT AS (CONVERT(datetime2,ProductionStartDT)) PERSISTED,
    ProductionEndLocalDT AS (CONVERT(datetime2,ProductionEndDT)) PERSISTED,
    CONSTRAINT PK_Job PRIMARY KEY CLUSTERED (JobID ASC)) ON [PRIMARY]
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_RfJobStatusID DEFAULT ((1)) FOR RfJobStatusID
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_DeadlineDT DEFAULT (CONVERT(datetimeoffset,'2050-12-31T00:00:00+00:00',(126))) FOR DeadlineDT
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_ActualStartDT DEFAULT (CONVERT(datetimeoffset,'2050-12-31T00:00:00+00:00',(126))) FOR ActualStartDT
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_ActualEndDT DEFAULT (CONVERT(datetimeoffset,'2050-12-31T00:00:00+00:00',(126))) FOR ActualEndDT
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_PlannedStartDT DEFAULT (CONVERT(datetimeoffset,'2050-12-31T00:00:00+00:00',(126))) FOR PlannedStartDT
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_PlannedEndDT DEFAULT (CONVERT(datetimeoffset,'2050-12-31T00:00:00+00:00',(126))) FOR PlannedEndDT
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_PlannedDuration DEFAULT ((0)) FOR PlannedDuration
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_EstimatedStartDT DEFAULT (CONVERT(datetimeoffset,'2050-12-31T00:00:00+00:00',(126))) FOR EstimatedStartDT
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_EstimatedEndDT DEFAULT (CONVERT(datetimeoffset,'2050-12-31T00:00:00+00:00',(126))) FOR EstimatedEndDT
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_EstimatedDuration DEFAULT ((0)) FOR EstimatedDuration
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_ScheduledSequence DEFAULT ((0)) FOR ScheduledSequence
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_LineID DEFAULT ((1)) FOR LineID
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_ProductionStartDT DEFAULT (CONVERT(datetimeoffset,'2050-12-31T00:00:00+00:00',(126))) FOR ProductionStartDT
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_ProductionEndDT DEFAULT (CONVERT(datetimeoffset,'2050-12-31T00:00:00+00:00',(126))) FOR ProductionEndDT
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_TargetCycleTimeQuantity DEFAULT ((1)) FOR TargetCycleTimeQuantity
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_QuantityOrdered DEFAULT ((0)) FOR QuantityOrdered
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_IsCalculated DEFAULT ((0)) FOR IsCalculated
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_LineGroupID DEFAULT ((1)) FOR LineGroupID
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_PassName DEFAULT ('1') FOR PassName
GO
ALTER TABLE dbo.Job ADD CONSTRAINT DF_Job_WorkOrderReleaseStateID DEFAULT ((1)) FOR WorkOrderReleaseStateID
GO
ALTER TABLE dbo.Job WITH CHECK ADD CONSTRAINT FK_Job_Line FOREIGN KEY(LineID) REFERENCES dbo.Line (LineID) ON UPDATE CASCADE
GO
ALTER TABLE dbo.Job CHECK CONSTRAINT FK_Job_Line
GO
ALTER TABLE dbo.Job WITH CHECK ADD CONSTRAINT FK_Job_LineGroup FOREIGN KEY(LineGroupID) REFERENCES dbo.LineGroup (LineGroupID) ON UPDATE CASCADE
GO
ALTER TABLE dbo.Job CHECK CONSTRAINT FK_Job_LineGroup
GO
ALTER TABLE dbo.Job WITH CHECK ADD CONSTRAINT FK_Job_ProductionCampaign FOREIGN KEY(ProductionCampaignID) REFERENCES dbo.ProductionCampaign (ProductionCampaignID) ON UPDATE CASCADE
GO
ALTER TABLE dbo.Job CHECK CONSTRAINT FK_Job_ProductionCampaign
GO
ALTER TABLE dbo.Job WITH CHECK ADD CONSTRAINT FK_Job_RfJobStatus FOREIGN KEY(RfJobStatusID) REFERENCES dbo.RfJobStatus (RfJobStatusID)
GO
ALTER TABLE dbo.Job CHECK CONSTRAINT FK_Job_RfJobStatus
GO
ALTER TABLE dbo.Job WITH CHECK ADD CONSTRAINT FK_Job_WO FOREIGN KEY(WOID) REFERENCES dbo.WO (WOID)
GO
ALTER TABLE dbo.Job CHECK CONSTRAINT FK_Job_WO
GO
ALTER TABLE dbo.Job WITH CHECK ADD CONSTRAINT FK_Job_WorkOrderReleaseState FOREIGN KEY(WorkOrderReleaseStateID) REFERENCES dbo.WorkOrderReleaseState (WorkOrderReleaseStateID) ON UPDATE CASCADE
GO
ALTER TABLE dbo.Job CHECK CONSTRAINT FK_Job_WorkOrderReleaseState
GO

Is anyone able to point me in the right direction or give me some hints on how I may be able to fix, or at least reduce, the number of deadlocks?
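One common mitigation for update-vs-update deadlocks like this is to shrink each transaction's lock footprint by batching; a minimal sketch (the batch size is arbitrary, and @nJobPlantID is the procedure's existing parameter):

DECLARE @rows int = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (500) j
    SET IsCalculated = 0
    FROM dbo.Job AS j
    WHERE j.RfJobStatusID = 2
      AND j.IsCalculated = 1
      AND j.LineID IN (SELECT l.LineID FROM dbo.Line AS l WHERE l.PlantID = @nJobPlantID);

    SET @rows = @@ROWCOUNT;
END;

Smaller batches hold fewer locks for less time, which tends to turn deadlocks into short blocking waits instead.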
Mark Gibson (1 rep)
Aug 19, 2022, 10:46 AM • Last activity: Sep 19, 2025, 03:07 AM
0 votes
2 answers
134 views
On deleting a row, the update logic also runs
I am using an AFTER INSERT, UPDATE, DELETE trigger. When I insert a record in my application (called Clarity), the trigger inserts a row in the test table with the value 'Insert'. When I update the record in my application, it inserts a row in my test table with the value 'Update'. When I delete the record in my application, it adds two rows in my test table, with the values 'Update' and 'Delete', in that order. Why would deleting a row run the update logic of the trigger? My code is below.
USE [claritydb]
GO
/****** Object:  Trigger [niku].[TRIG_STATUSRPT]    Script Date: 11/11/2019 11:04:32 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

ALTER TRIGGER [niku].[TRIG_STATUSRPT]
ON [niku].[ODF_CA_COP_PRJ_STATUSRPT]
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
	DECLARE @STATUSID INT,
			@PROJECTID INT,
            @SCHEDULE_STATUS INT,
			@SCOPE_STATUS INT,
			@EFT_STATUS INT,
			@OVERALL_STATUS INT,
			@ACTION_TYPE VARCHAR(50),
			@MAXSTATUSIDVAR INT,
			@DelCount int,
            @InsCount int;

   SELECT @InsCount = Count(*) FROM INSERTED
   SELECT @DelCount = Count(*) FROM DELETED

---- Get data from inserted/ updated
SELECT @STATUSID= ID,
         @PROJECTID= ODF_PARENT_ID,
            @SCHEDULE_STATUS= COP_SCHEDULE_STATUS,
			@SCOPE_STATUS= COP_SCOPE_STATUS,
			@EFT_STATUS= COP_COST_EFT_STATUS 
           FROM inserted
 ---- Get data from deleted
SELECT @STATUSID= ID,
         @PROJECTID= ODF_PARENT_ID,
            @SCHEDULE_STATUS= COP_SCHEDULE_STATUS,
			@SCOPE_STATUS= COP_SCOPE_STATUS,
			@EFT_STATUS= COP_COST_EFT_STATUS 
           FROM deleted
SELECT TOP 1 @MAXSTATUSIDVAR= ID FROM ODF_CA_COP_PRJ_STATUSRPT op where op.ODF_PARENT_ID = @PROJECTID  ORDER BY ID DESC

   If @InsCount > 0 and @DelCount = 0
   Begin
      -- At least 1 row inserted. Your Insert Trigger logic here
	    INSERT INTO TEST_STATUS_REPORT(STATUS_ID,PROJECT_ID,COP_SCHEDULE_STATUS,COP_SCOPE_STATUS,COP_COST_EFT_STATUS,ACTION_TYPE,CREATED_DATE,MAXSTATUSID)
              Values( @STATUSID, @PROJECTID, @SCHEDULE_STATUS, @SCOPE_STATUS,@EFT_STATUS, 'Insert',GETDATE(),@MAXSTATUSIDVAR)
	  
   End
   Else If @DelCount > 0 and @InsCount = 0
   Begin
      -- at least 1 row deleted. Your Delete Trigger logic here 
	   INSERT INTO TEST_STATUS_REPORT(STATUS_ID,PROJECT_ID,COP_SCHEDULE_STATUS,COP_SCOPE_STATUS,COP_COST_EFT_STATUS,ACTION_TYPE,CREATED_DATE,MAXSTATUSID)
              Values( @STATUSID, @PROJECTID, @SCHEDULE_STATUS, @SCOPE_STATUS,@EFT_STATUS, 'Delete',GETDATE(),@MAXSTATUSIDVAR)
   End 
   Else If @DelCount > 0 and @InsCount > 0
   Begin
      -- old row deleted, new row inserted; both indicates an update.
      -- your update logic here.  
	          INSERT INTO TEST_STATUS_REPORT(STATUS_ID,PROJECT_ID,COP_SCHEDULE_STATUS,COP_SCOPE_STATUS,COP_COST_EFT_STATUS,ACTION_TYPE,CREATED_DATE,MAXSTATUSID)
              Values( @STATUSID, @PROJECTID, @SCHEDULE_STATUS, @SCOPE_STATUS,@EFT_STATUS, 'Update',GETDATE(),@MAXSTATUSIDVAR)
   End 
END
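Two things are worth noting about the trigger as written. First, it assigns scalar variables from inserted and deleted, so multi-row statements only log one row. Second, the 'Update' row followed by the 'Delete' row suggests the application issues an UPDATE statement before the DELETE; the trigger fires once per statement, so each statement logs its own row. A set-based sketch of the same logging that handles any row count (the MAXSTATUSID lookup is omitted for brevity):

INSERT INTO TEST_STATUS_REPORT (STATUS_ID, PROJECT_ID, COP_SCHEDULE_STATUS, COP_SCOPE_STATUS, COP_COST_EFT_STATUS, ACTION_TYPE, CREATED_DATE, MAXSTATUSID)
SELECT COALESCE(i.ID, d.ID),
       COALESCE(i.ODF_PARENT_ID, d.ODF_PARENT_ID),
       COALESCE(i.COP_SCHEDULE_STATUS, d.COP_SCHEDULE_STATUS),
       COALESCE(i.COP_SCOPE_STATUS, d.COP_SCOPE_STATUS),
       COALESCE(i.COP_COST_EFT_STATUS, d.COP_COST_EFT_STATUS),
       CASE WHEN i.ID IS NOT NULL AND d.ID IS NOT NULL THEN 'Update'
            WHEN i.ID IS NOT NULL THEN 'Insert'
            ELSE 'Delete'
       END,
       GETDATE(),
       NULL
FROM inserted AS i
FULL OUTER JOIN deleted AS d ON d.ID = i.ID;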
Joseph (1 rep)
Nov 11, 2019, 12:19 PM • Last activity: Sep 18, 2025, 09:06 PM
0 votes
1 answers
274 views
Bad query plan only on first execution, not in SSMS
We have a very customizable report query system where the user can pick and choose which columns to include in the results, as well as a lot of filters. I'll be mapping what we're doing onto an example orders system: customers, orders, line items of orders and the like. For purposes of the example problem, also assume we've got a very free-form chunk of customer information like "wife's name", "anniversary", "names of pets", etc. (stuff you might want around to personalize interactions with the customer). To avoid massive denormalization of the order details in our orders report (when they select that output), we're populating a DataSet with 2 queries - the first query is for the orders that meet the filter criteria; the second is for the order details that match the output from the first query. Our QA dept recently started running into some problems in the C# middleware because the output of the 2 queries is disjoint, even though the same filter criteria are used in both queries. Both queries start by asking for the top 20 orders that match the filter criteria. Further complicating things, the QA automation suite does a lot of database restores, setting a few data conditions, running a test, and then doing that again for the next test case. After a lot of back-and-forth, we found that they are only having the problem the first time this query is run after a restore. Every execution *after* the first one (with the same query) produces the correct results. I even got their seed database and started doing the restores myself. If I took the SQL code and ran it in SSMS immediately after the restore, I couldn't reproduce the problem. I can only reproduce it by running through the app code and ADO.NET. I used Profiler to capture the query plan on that first execution and, for reasons I can't fathom, only on that first execution it completely upends the query and starts by looking for that free-form blob of pet names as the seed kernel of the results. e.g.

SELECT TOP (20) bunch of configurable columns, xml blob of customer preferences, ...
FROM ORDERS
INNER JOIN CUSTOMERS ON CUSTOMERS.ID = ORDERS.CUSTOMERID
INNER JOIN (SELECT sums, avgs, etc FROM ORDERDETAILS GROUP BY ORDERDETAILS.ORDERID) orderavg ON orderavg.ORDERID = ORDERS.ORDERID
... whatever other joins needed based on configuration ...
INNER JOIN CUST_PREFS ON CUSTOMERS.ID = CUST_PREFS.CUSTOMERID
WHERE ORDERS.DATE BETWEEN @from AND @to
ORDER BY ORDERS.DATE

SELECT ORDERID, orderdetails
FROM (SELECT TOP (20) ORDERID
      FROM ORDERS
      WHERE ORDERS.DATE BETWEEN @from AND @to
      ORDER BY ORDERS.DATE) ORDERS
INNER JOIN ORDERDETAILS ON ORDERDETAILS.ORDERID = ORDERS.ORDERID

Seemed straightforward enough - start from the orders table and work your way down. But for this case QA ran into, *only* on the first execution after a db restore, and *only* through ADO.NET (not SSMS), the query plan comes out **starting** the execution with the CUST_PREFS "find me the random customer notes blob" join (the 7th join in the query). Re-run the same query in the app again, and it goes back to starting with the root table (ORDERS).

A) I'm at a loss to figure out why it decides the 7th join is the right place to start immediately after a restore. The stats don't bear it out, and it goes back to something more understandable on the 2nd execution.
B) I could try to encapsulate the core filter query in a subselect (like the 2nd query has) to make the filter evaluation more predictable and only join for detail after the right subset is selected, but that seems a lot of reshuffling.
C) I'm debating adding an OPTION (FORCE ORDER) hint to the first query to try and make this more predictable, but I thought I'd toss this out amongst a group with deeper knowledge than I before pulling on that thread.

Any pointers from those wiser than me would be appreciated.
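One thing worth ruling out before reaching for FORCE ORDER: SSMS and ADO.NET connect with different default SET options (notably ARITHABORT), so they compile and cache separate plans for the same text, which is the classic reason a problem reproduces from the app but not from SSMS. You can check whether two cache entries exist with something like this (the LIKE filter is a placeholder):

SELECT st.text, qs.creation_time, pa.value AS set_options
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) AS pa
WHERE pa.attribute = 'set_options'
  AND st.text LIKE N'%CUST_PREFS%';

If two rows come back with different set_options values, you are looking at two differently compiled plans rather than one plan that "heals" after the first run.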
user1664043 (381 rep)
Jan 30, 2019, 09:14 PM • Last activity: Sep 18, 2025, 08:05 PM
19 votes
1 answers
10366 views
SQL Server - Adding non-nullable column to existing table - SSDT Publishing
Due to business logic, we need a new column in a table that is critical to ensure is always populated, so it should be added to the table as NOT NULL. Unlike previous questions that explain how to do this *manually*, this needs to be managed by the SSDT publish. I have been banging my head against the wall for a while over this simple-sounding task due to some realizations:

1. A default value is not appropriate, and it cannot be a computed column. Perhaps it is a foreign key column, but for others we cannot use a fake value like 0 or -1 because those values might have significance (e.g. numeric data).
2. Adding the column in a pre-deployment script will fail the publish when it automatically tries to create the same column a second time, even if the pre-deployment script is written to be idempotent (this one is really aggravating, as I can otherwise think of an easy solution).
3. Altering the column to NOT NULL in a post-deployment script will be reverted each time the SSDT schema refresh occurs (so at the very least our codebase will mismatch between source control and what is actually on the server).
4. Adding the column as nullable now with the intention of changing to NOT NULL in the future does not work across multiple branches/forks in source control, as the target systems will not necessarily all have the table in the same state next time they are upgraded (not that this is a good approach anyway IMO).

The approach I have heard from others is to directly update the table definition (so the schema refresh is consistent), write a pre-deployment script that *moves* the entire contents of the table to a temporary table with the new column population logic included, then move the rows back in a post-deployment script. This seems risky as all hell, though, and still pisses off the Publish Preview when it detects a NOT NULL column being added to a table with existing data (since that validation runs before the pre-deployment scripting). How should I go about adding a new, non-nullable column without risking orphaned data, or moving data back and forth on every publish with lengthy migration scripts? Thanks.
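For completeness, the compromise most often seen in practice is sketched below: declare the column NOT NULL with a named scaffold DEFAULT in the table definition (so Publish Preview is satisfied), then backfill real values in the post-deployment script. This is a workaround rather than an SSDT feature, and all names here are placeholders:

-- In the .sql table definition:
-- NewColumn int NOT NULL CONSTRAINT DF_MyTable_NewColumn_Scaffold DEFAULT (0)

-- In the post-deployment script: replace scaffold values with real ones
UPDATE dbo.MyTable
SET NewColumn = 0 /* real business value computed here */
WHERE NewColumn = 0;

It conflicts with realization 1 whenever no safe scaffold value exists at all, which is why the table-copy approach keeps coming up despite its risk.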
Elaskanator (761 rep)
Jun 1, 2018, 08:07 PM • Last activity: Sep 18, 2025, 07:03 PM
0 votes
1 answers
296 views
Ola Hallengren running full backups and log backups at the same time -- restore issues?
I recently ran across an issue trying to restore a DB and log files. The DB restored without issues, but about 10 log files in, I received a message stating the LSN was out of sync and the restore could not finish. I used a tool that puts the files in order, so that's not my issue. I have my log backups set to run at 15-minute intervals. This large database takes a couple of hours to perform a full backup. Could the log backups happening simultaneously during the full backup cause this situation? The end result was not bad this time, but this is alarming. I am trying to maintain an RPO of 15 minutes; this little test resulted in an 8-hour RPO... unacceptable. Regards, Tim
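Concurrent log backups during a full backup do not break the log chain (the chain is defined by the log backups alone), so the usual culprit is a restored log file that is out of sequence or missing. You can verify the chain by LSN straight from msdb before restoring; a minimal sketch (database name is a placeholder):

SELECT bs.type, bs.backup_start_date, bs.first_lsn, bs.last_lsn
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'YourDb'
ORDER BY bs.backup_start_date;

Each log backup's first_lsn should equal the previous log backup's last_lsn; any gap points at the file that breaks the restore.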
Tim Shelton (1 rep)
Dec 14, 2020, 06:52 PM • Last activity: Sep 18, 2025, 04:01 PM
0 votes
0 answers
25 views
How to write a query from one table with dynamic row and column headings
Apologies if this is a basic question, but my google isn't googling, and SQL is not my strongest point. I'm trying to create a table output in SQL where the headings and rows are dynamic, showing whether users signed in on a certain date.
For example, let's look at this SQL table called 'signins' where all the records are kept.

| EmployeeName | SignInDate |
| ----- | ----- |
| Name1 | 15/05/2025 |
| Name2 | 15/05/2025 |
| Name1 | 16/05/2025 |
| Name2 | 17/05/2025 |

and I'm trying to output something like this:

| | 15/05/2025 | 16/05/2025 | 17/05/2025 |
| ------ | ------------ | ------------ | ----------- |
| Name1 | T | T | |
| Name2 | T | | T |

The dates and names could be different and may not be sequential, and I'm not worried at this stage if it's a letter T or something else. I know how to group by date, and I know how to group by name; it's combining them together and getting this output that I am struggling with. The date column is a date type. I use this to group:

SELECT SignInDate AS SignInDates FROM Signins GROUP BY SignInDate

SELECT EmployeeName FROM Signins GROUP BY EmployeeName

And this works for me to grab all the unique dates and names as two separate outputs. Can someone point me in the right general direction please to get me started? I think once I have the basics, I'll be able to expand it further to what I need. Thanks
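What's being described is a pivot with dynamic columns, which in T-SQL means building the column list first and then executing a dynamic PIVOT. A minimal sketch against the signins table from the question, assuming SQL Server 2017+ for STRING_AGG:

DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- build the delimited, quoted list of date columns, in date order
SELECT @cols = STRING_AGG(QUOTENAME(CONVERT(char(10), SignInDate, 103)), ',')
               WITHIN GROUP (ORDER BY SignInDate)
FROM (SELECT DISTINCT SignInDate FROM signins) AS d;

SET @sql = N'
SELECT EmployeeName, ' + @cols + N'
FROM (SELECT EmployeeName,
             CONVERT(char(10), SignInDate, 103) AS D,
             ''T'' AS Flag
      FROM signins) AS s
PIVOT (MAX(Flag) FOR D IN (' + @cols + N')) AS p;';

EXEC sys.sp_executesql @sql;

Each (name, date) cell shows T when a sign-in row exists and NULL otherwise; style 103 gives the dd/mm/yyyy headings shown above.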
TonyC (1 rep)
Sep 18, 2025, 02:24 PM • Last activity: Sep 18, 2025, 02:25 PM
-1 votes
0 answers
22 views
Ola Hallengren's Parallel job processing - Maximum number of parallel jobs
I've been working on running some of our SQL Agent maintenance jobs using Ola Hallengren's scripts, and they are fantastic. Thank you. Using parallel processing is my current focus, to reduce runtime. We have over 200 databases in total, ranging from about 300 GB to 3 TB. What would be the maximum value of **n** that I should be looking at for running parallel jobs? I'm currently testing different values, but just wonder if there's a maximum that it would not be advisable to go above.
PTL_SQL (431 rep)
Sep 18, 2025, 08:22 AM
1 votes
1 answers
269 views
Can a replication subscription also be used as a publication?
Some background: We are working with a vendor who makes a read-only copy of their source database available to us in the cloud via filtered transactional replication. The cloud database is a subscriber to the source database's publications. The publishing database is multi-tenant, so only our customer data is replicated to our read-only copy. The question is: could we publish from the subscriber/cloud database to a subscribing database in one of our own data centers? If so, how might that work, and would it be a management nightmare?
Mike Brule (103 rep)
Jan 11, 2019, 06:51 PM • Last activity: Sep 18, 2025, 08:06 AM
1 votes
1 answers
317 views
Testing the estimated query plan of a portion of a trigger that uses inserted/deleted
I'm performance tuning a SQL Server trigger that has several statements using the "inserted" and "deleted" tables. How can I break out those individual queries from the trigger for testing in order to see their estimated execution plans? The "inserted" and "deleted" tables have no context outside of a trigger. I want to make sure that it mimics the behavior of the trigger as closely as possible. The trigger I'm testing is INSTEAD OF DELETE. I was hoping not to actually delete the record so that I could easily test the before and after without deleting the row.
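One low-risk approach is to materialize the rows the DELETE would target into a table that stands in for "deleted", then run each trigger statement against it; a minimal sketch (table and key names are placeholders):

-- capture the rows the INSTEAD OF DELETE trigger would see, without deleting anything
SELECT *
INTO #deleted
FROM dbo.MyTable
WHERE Id = 42;

-- now paste a statement from the trigger here, with "deleted" replaced by "#deleted",
-- and inspect its estimated plan

One caveat: a temp table carries statistics while the real inserted/deleted pseudo-tables do not, so cardinality estimates can differ; a table variable often mimics the pseudo-tables' estimates more closely.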
skeletank (301 rep)
Nov 16, 2022, 03:52 PM • Last activity: Sep 18, 2025, 05:05 AM
2 votes
1 answers
137 views
Org chart CTE with exact index but plan shows expensive sort node
I've got a moderately large table (148 million rows) with pretty common org chart attributes. Names are changed, but it's the same idea. I was thinking of adding a hierarchyid column to the mix; then I could index that and hopefully make some of the queries faster. This query is modeled on the MS docs sample on hierarchyid. Before I ran it, I put indexes on (employeeid), (managerid, include employeeid), and (officeid, managerid, employeeid), hoping to make it as fast as possible. But the query took over 5 hours to run. I looked at the plan, and the (officeid, managerid, employeeid) index is used for both parts of the CTE. What I don't understand is that the ROW_NUMBER() windowing function also appears to have added a SORT node, which SSMS says is taking 94% of the query time. It's already using an index that is pre-ordered in the same way, so why is there a SORT node taking up 94% of the time? (Plan screenshot omitted.) Thanks

WITH paths(path, duns, extended_gu, extended_parent) AS (
    -- This section provides the value for the root of the hierarchy
    SELECT hierarchyid::GetRoot() AS path, employeeid, officeid, managerid
    FROM OrgChart
    WHERE officeid = 0
    -- This section provides values for all nodes except the root
    UNION ALL
    SELECT CAST(p.path.ToString() + CAST(ROW_NUMBER() OVER (PARTITION BY RD.officeid, RD.managerid ORDER BY RD.officeid, RD.managerid) AS varchar(30)) + '/' AS hierarchyid),
           RD.employeeid, RD.managerid, Rd.officeid
    FROM OrgChart rd
    JOIN paths AS p ON rd.managerid = P.employeeid
)
UPDATE rd
SET tree = P.Path
FROM Paths P
INNER JOIN OrgChart rd ON rd.duns = P.Duns

EDIT: Schema, added in response to a comment below. I don't see a "Paste The Plan" feature in the editor; where is that? Rather than NULL on the roots, those ids are 0.

CREATE TABLE [dbo].[OrgChart](
    [employeeid] [int] NOT NULL,
    [managerid] [int] NOT NULL,
    [officeid] [int] NOT NULL,
    ...
    [tree] [hierarchyid] NULL
) ON [PRIMARY]
GO
CREATE UNIQUE NONCLUSTERED INDEX [uixOrgChart_employeeids] ON [dbo].[OrgChart]
(
    [employeeid] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
CREATE UNIQUE NONCLUSTERED INDEX [uixOrgChart_officemgremployee] ON [dbo].[OrgChart]
(
    [officeid] ASC,
    [managerid] ASC,
    [employeeid] ASC
) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
CREATE NONCLUSTERED INDEX [uixOrgChart_mgremp] ON [dbo].[OrgChart]
(
    [managerid] ASC
) INCLUDE([employeeid]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
user1664043 (381 rep)
Feb 18, 2021, 04:23 PM • Last activity: Sep 18, 2025, 04:08 AM
0 votes
1 answers
429 views
How can I get AG dashboard values via SQL query across all supported versions of SQL Server
I am trying to ease the monitoring of availability groups across multiple SQL Servers. I came across a few DMVs which got me the data, but I cannot figure out some important columns which I am seeing in the SSMS AG Dashboard report. How can I get this data via SQL query? We cannot use PowerShell, unfortunately, as it's blocked from getting data remotely. The columns in the SSMS dashboard which I cannot find in the DMVs are below. Do we know where these metrics come from?

> Estimated Data Loss (seconds)
>
> Estimated Recovery Time (seconds)
>
> Synchronization Performance (seconds)
>
> Issues
>
> Connection state
>
> Last connection error no.
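The dashboard derives these from sys.dm_hadr_database_replica_states rather than exposing them directly: estimated data loss is the gap between the primary's and the secondary's last_commit_time, and estimated recovery time is roughly redo_queue_size divided by redo_rate. A minimal sketch of the raw ingredients (the per-replica comparison for data loss is left out for brevity):

SELECT ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.synchronization_state_desc,
       drs.last_commit_time,
       drs.log_send_queue_size,
       drs.redo_queue_size,
       drs.redo_rate,
       CASE WHEN drs.redo_rate > 0
            THEN drs.redo_queue_size / drs.redo_rate
       END AS est_recovery_time_seconds
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar ON ar.replica_id = drs.replica_id;

Connection state and the last connection error live in sys.dm_hadr_availability_replica_states (connected_state_desc, last_connect_error_number).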
Newbie-DBA (804 rep)
Jul 15, 2021, 01:27 AM • Last activity: Sep 18, 2025, 01:07 AM
0 votes
1 answers
312 views
Error: 823, Severity: 24, State: 2 after restart database server
> SQLSTATE[HY000]: [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]The operating system returned error 38 (Reached the end of the file.) to SQL Server during a read at offset 0x0000b081cbc000 in file 'D:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\DB_v4.mdf'. Additional messages in the SQL Server error log and operating system error log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately.

I got the error above after restarting the database server. The main page, which populates dropdowns with SELECTs from the database, is working, but I get an error when I try to save/log in. In SSMS I can log in to the database and run SELECT TOP 100 queries, but INSERT/COUNT gives an error:

> Error: 823, Severity: 24, State: 2.
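Error 823 with OS error 38 means Windows reported end-of-file on a read inside the data file, i.e. I/O-level damage rather than a query problem. The standard first diagnostics, sketched here, are to see what SQL Server has already flagged and then run a full integrity check (CHECKDB is I/O-heavy, so run it in a quiet window):

SELECT * FROM msdb.dbo.suspect_pages;

DBCC CHECKDB (N'DB_v4') WITH NO_INFOMSGS, ALL_ERRORMSGS;

If CHECKDB reports damage, restoring from a known-good backup is generally safer than any repair option.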
Insko Malaysia
Dec 11, 2023, 06:36 AM • Last activity: Sep 17, 2025, 08:02 PM
1 votes
1 answers
1208 views
Cannot drop the database encryption key because it is currently in use
I want to create a backup of my SQL managed instance database to blob storage. However, I am encountering the error below after trying to turn off encryption and drop the key, as follows:

USE master;
ALTER DATABASE [db1] SET ENCRYPTION OFF;

USE [db1];
DROP DATABASE ENCRYPTION KEY;
GO

> Msg 33105, Level 16, State 1, Line 6
> Cannot drop the database encryption key because it is currently in use. Database encryption needs to be turned off to be able to drop the database encryption key.

I have read all the MS documentation and everything points to what I have done above. What am I doing wrong here?
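SET ENCRYPTION OFF only starts the decryption scan; the key cannot be dropped until that scan finishes. You can watch the progress and only drop the key once the state reaches 1 (unencrypted); a minimal sketch:

SELECT DB_NAME(database_id) AS database_name,
       encryption_state, -- 1 = unencrypted, 3 = encrypted, 5 = decryption in progress
       percent_complete
FROM sys.dm_database_encryption_keys
WHERE database_id = DB_ID(N'db1');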
Immortal (121 rep)
Dec 8, 2023, 09:14 AM • Last activity: Sep 17, 2025, 07:02 PM
Showing page 1 of 20 total questions