
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
68 views
Sql Server 2019, page compression, pre-/post-application
I've got an ETL process that produces JSON blobs from a corpus once a week, and I've been going through it trying to optimize it. I ran sp_estimate_data_compression_savings on the output table, and it said it was highly compressible, so I applied it. The savings took a 152 gig table down to a 22 gig table. I thought: great, I'll just declare the table with page compression when we load into it.

Now, I know that the compression isn't always as efficient when you load into an empty table as it is applying the compression afterwards, but to my surprise, when I loaded into the empty table with page compression declared, I got effectively ZERO compression. The index (which is not very compressible) got the expected compression, but the heap got none. sp_estimate_data_compression_savings still says it can get great compression *reapplying* PAGE compression.

I read some documentation that said loading into an empty table with PAGE compression starts out doing ROW compression and then applies PAGE compression when it hits the end of a block and thinks it can do more compression. I took that as explaining why you don't always get as much compression as applying it post-load, but ZERO? The additional post step takes a while, and I've often been willing to sacrifice 10-20% of the compression to spread the cost. I'm just surprised that pre-application got zero compression on load. Is there something I'm missing here?

    CREATE TABLE [dbo].[JSONmaster2](
        [ID] [int] NOT NULL,
        [JSON] [varchar](8000) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL,
        [InsertDate] [smalldatetime] NOT NULL
    ) WITH (DATA_COMPRESSION = PAGE)
    GO
    CREATE UNIQUE NONCLUSTERED INDEX [ixJSONmaster-KeyID]
        ON [dbo].[JSONmaster2] ([ID] ASC)
        WITH (DATA_COMPRESSION = PAGE)
user1664043 (379 rep)
Jul 31, 2025, 11:46 PM • Last activity: Aug 4, 2025, 10:27 AM
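For the compression question above, a minimal sketch of the post-load approach the poster is weighing: load the heap first, then rebuild it with PAGE compression and check what each partition actually reports. The table name comes from the question; the catalog queries are generic and carry no assumptions about the workload.

```sql
-- Check what compression each partition actually has after the load.
SELECT o.name, p.index_id, p.data_compression_desc, ps.used_page_count
FROM sys.partitions AS p
JOIN sys.objects AS o ON o.object_id = p.object_id
JOIN sys.dm_db_partition_stats AS ps ON ps.partition_id = p.partition_id
WHERE o.name = 'JSONmaster2';

-- Re-apply PAGE compression to the heap after the load completes;
-- this is the "post-application" step that reclaims the space.
ALTER TABLE dbo.JSONmaster2
    REBUILD WITH (DATA_COMPRESSION = PAGE);
```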
1 vote
1 answer
140 views
Does sql config manager changes (example: service account) on 1 FCI node auto update it onto the other node?
Link: https://learn.microsoft.com/en-us/sql/sql-server/failover-clusters/windows/always-on-failover-cluster-instances-sql-server?view=sql-server-ver15#FCIelements

> Each node in the resource group maintains a synchronized copy of the configuration settings and check-pointed registry keys to ensure full functionality of the FCI after a failover

My understanding is that the service account and other SQL Configuration Manager settings are stored in the registry. Does the above quote imply that when any SQL Configuration Manager change is made on one FCI node (SQL Server service account, SQL Agent service account, SQL Integration Services service account, protocol changes [TCP/shared memory]), it will automatically be applied on node 2?

For example, the following link suggests that a password change made on one node gets communicated to the other nodes: https://learn.microsoft.com/en-us/sql/sql-server/failover-clusters/windows/failover-cluster-instance-administration-and-maintenance?view=sql-server-ver15#changing-service-accounts

> You should not change passwords for any of the SQL Server service accounts when an FCI node is down or offline. If you must do this, you must reset the password again by using SQL Server Configuration Manager when all nodes are back online.
variable (3590 rep)
Oct 1, 2022, 05:51 AM • Last activity: Aug 4, 2025, 12:07 AM
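A practical way to verify whether a Configuration Manager change actually reached the other FCI node is to compare what each node's running instance reports. A small sketch using the documented sys.dm_server_services DMV; run it on whichever node owns the instance before and after a failover.

```sql
-- Shows the service account and startup type the local node is using
-- for the Database Engine and SQL Agent services.
SELECT servicename,
       service_account,
       startup_type_desc,
       status_desc
FROM sys.dm_server_services;
```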
0 votes
1 answer
139 views
Impact of timezone change during sql server migration
We are migrating from our legacy SQL Server 2017 on Windows Server 2016 hosts to SQL Server 2019 on Windows Server 2019. The bigger change in this migration is the difference in timezone between the legacy and new Windows hosts. Earlier we had a mix of CST, EST and PST servers being built, but now everything is standardized on UTC. As the DBA, I'm trying to understand the possible impact of this change. A few questions:

1. What impact, if any, is expected for queries using getdate()? Will they be affected now that server time is UTC?
2. The AD team said they can force the application to send Eastern Time at their end, but how will that be handled by SQL queries once it lands in the database?
3. All the databases in question use partitioned tables with partition functions defined as date ranges, and a lot of ad-hoc queries are run on a read replica. Will there be any impact or change to the partitions?
Newbie-DBA (804 rep)
Feb 1, 2024, 03:47 AM • Last activity: Aug 2, 2025, 03:02 AM
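A short illustration of the moving parts behind questions 1 and 2: on a UTC-configured host, GETDATE() returns the server's local clock (now UTC), and AT TIME ZONE can translate values for applications that still reason in Eastern Time. This is a generic sketch, not tied to the poster's schema.

```sql
SELECT GETDATE()           AS server_local_time,    -- equals UTC once the host runs on UTC
       GETUTCDATE()        AS utc_time,
       SYSDATETIMEOFFSET() AS local_time_with_offset,
       SYSDATETIMEOFFSET() AT TIME ZONE 'Eastern Standard Time' AS eastern_time;
```

For question 3, partition function boundary values are stored as literals and do not shift on their own; what changes is only which partition newly written UTC-based values fall into.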
4 votes
1 answer
261 views
Index Rebuild with WAIT_AT_LOW_PRIORITY still blocks
I'm running...
ALTER INDEX PK_MyTable 
    ON fct.MyTable 
    REBUILD PARTITION = 261 
    WITH (
        ONLINE = ON (
            WAIT_AT_LOW_PRIORITY (
                MAX_DURATION = 30 minutes, 
                ABORT_AFTER_WAIT = SELF
            )
        ), 
        RESUMABLE = ON, 
        MAX_DURATION = 355 
    )
;
...and I expect it **not** to cause any blocking, but it does. It is asking for a LCK_M_SCH_M lock. My statement is blocked by an INSERT INTO statement, which is what I expect; but it is also blocking many other statements that are asking for a LCK_M_SCH_S lock, and I was not expecting that. At the same time there is a BACKUP running of a database that has FILESTREAM. Could this be the reason? What else could cause this blocking? Have I completely misunderstood WAIT_AT_LOW_PRIORITY? RCSI is on.
Henrik Staun Poulsen (2291 rep)
Jul 31, 2025, 08:54 AM • Last activity: Aug 1, 2025, 12:53 AM
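A hedged sketch for seeing, while the rebuild is queued, which session heads the blocking chain and which sessions are stacked behind it; only documented DMVs, nothing specific to the poster's tables.

```sql
-- Sessions that are blocked right now, what they are waiting on,
-- and the session at the head of each chain.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       SUBSTRING(t.text, 1, 200) AS query_start
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0
ORDER BY r.wait_time DESC;
```

For completeness: ABORT_AFTER_WAIT = SELF (as used in the question) cancels the rebuild when MAX_DURATION expires, while BLOCKERS kills the sessions in front of it instead.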
7 votes
2 answers
455 views
FOR XML PATH and OPTION() can't be together in query?
I have a weird edge case where trying to combine FOR XML PATH('') with OPTION(FORCE ORDER) does not parse as valid SQL for some reason, and I can't find any explanation so far with Google. The parse error just says "Incorrect syntax near the keyword 'OPTION'." but I'm not clear why. Anyone have an explanation?

I was asked to improve the performance of an ETL process that packs a bunch of transaction details into a string to insert into a JSON blob from a SQL Server 2019 database. This is a detail table with about 2.6 billion rows that get packed into about 325 million final results. I tried a number of things with incremental improvements (the original implementation used the DECLARE @result nvarchar(max) = '' SELECT @result = @result + ... concatenation method, which turned out to be the least efficient of all I could think of), but then I looked at table compression.

PAGE compression turned out to be very profitable. Not only did the details table go from 148 gig to 30 gig, in some query plans the order of the joins also made it take 1/3 the time. When this happened, the work table joined on ID to the details table first and then to the type definition table. So I set the table definition for the next build to use PAGE compression and, as might be expected, didn't get quite as good compression loading into an empty table. Instead of 30 gig, it was 45 gig - still a big space savings, but unless I got maximal compression, the plan produced wasn't optimal. If I rebuild the PAGE compressed primary key and get it back to max compression, the plan comes out fast, but anything else produces a plan that takes a lot longer. So I tried to add OPTION (FORCE ORDER) to the query to see if I could get it down the path I wanted.

    CREATE FUNCTION [dbo].[fnGetDetailsJSON](@ID int)
    RETURNS [nvarchar](max)
    WITH EXECUTE AS CALLER
    AS
    BEGIN
        DECLARE @ss nvarchar(max) =
           (SELECT N','+(N'"Type' + RTRIM(ST.TypeCode))+ N'":"' + D.DisplayValue + N'",'
                 + (N'"Type' + RTRIM(ST.TypeCode)) + N'IPCV":"' + CONVERT(varchar(20),d.IPCV) + N'"'
            FROM [Transactions].[dbo].[DetailsMaster] (NOLOCK) D
            INNER JOIN [Transactions].dbo.DetailTypes DT WITH(NOLOCK,INDEX([ucoixTransactionTypes-TypeID-Display]))
                ON D.TypeID = DT.SignalTypeID AND DT.DisplayInProduct = 1
            WHERE D.ID = @ID
            order by D.TypeID
            FOR XML PATH('')
            -- OPTION (FORCE ORDER) won't let FOR XML PATH() and OPTION(FORCE ORDER) exist on the same query
           )
        RETURN @SS
    END
    GO

used in a query like this:

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    DECLARE @MinRowNumber int = 0,
            @MaxRowNumber int = 30000,
            @StartTime Date = GETDATE()
    SELECT ID, dbo.fnGetDetailsJSON(ID), @StartTime
    FROM dbo.WorkList n
    where n.RowNumber > @MinRowNumber
      and n.RowNumber <= @MaxRowNumber
user1664043 (379 rep)
Jul 30, 2025, 05:43 PM • Last activity: Aug 1, 2025, 12:51 AM
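The parse error arises because a query hint has to be the last clause of the outermost statement; it cannot sit inside the parenthesized FOR XML subquery of a DECLARE. A sketch of the usual restructuring, shown as a standalone batch with a hypothetical @ID value, the original NOLOCK/index hints omitted, and the ST/DT alias mismatch from the question normalized to DT; whether SQL Server then accepts the OPTION clause inside the scalar UDF body itself is something I have not verified.

```sql
DECLARE @ID int = 1;              -- hypothetical test value
DECLARE @ss nvarchar(max);

SELECT @ss = (
    SELECT N',' + N'"Type' + RTRIM(DT.TypeCode) + N'":"' + D.DisplayValue + N'",'
         + N'"Type' + RTRIM(DT.TypeCode) + N'IPCV":"' + CONVERT(varchar(20), D.IPCV) + N'"'
    FROM Transactions.dbo.DetailsMaster AS D
    INNER JOIN Transactions.dbo.DetailTypes AS DT
        ON D.TypeID = DT.SignalTypeID
       AND DT.DisplayInProduct = 1
    WHERE D.ID = @ID
    ORDER BY D.TypeID
    FOR XML PATH('')
)
OPTION (FORCE ORDER);             -- the hint now closes the outer statement, so it parses

SELECT @ss;
```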
2 votes
0 answers
141 views
How can Last_Execution_time be before Creation_time?
sp_HealthParser (made by Erik Darling) crashed on me, because Last_Execution_time was way before Creation_time.
How is that possible? The column **last_execution_time** in sys.dm_exec_query_stats should, in my book, be on or after **creation_time** (from the same sys.dm_exec_query_stats).

The query fails at

    datediff ( SECOND, deqs.creation_time, deqs.last_execution_time ) , 0)

which I've fixed by adding a case statement like this:

    case when deqs.creation_time < deqs.last_execution_time
         then datediff ( SECOND, deqs.creation_time, deqs.last_execution_time )
         else 0
    end

I now get 1106 rows with last_execution_time equal to **1900-01-01 00:00:00.000**, all with plan_generation_num = 1 and execution_count = 1.
Henrik Staun Poulsen (2291 rep)
Oct 22, 2023, 05:22 PM • Last activity: Jul 31, 2025, 09:02 AM
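A small follow-up query to isolate the rows the poster describes; it reads only sys.dm_exec_query_stats and can be run as-is.

```sql
-- Plans whose recorded last execution predates their creation time.
SELECT TOP (50)
       creation_time,
       last_execution_time,
       execution_count,
       plan_generation_num,
       total_worker_time
FROM sys.dm_exec_query_stats
WHERE last_execution_time < creation_time
ORDER BY creation_time DESC;
```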
1 vote
2 answers
2543 views
SQL Server 2019 using Linked server to Oracle 19c (19.9) using Oracle 19c client on Windows - DBCOLUMNFLAGS_ISNULLABLE error
Having issues selecting data from some Oracle tables over the linked server from SQL Server:

    select type, count(*)
    from RMTDB..JOB1308.HOST_INTERFACE
    group by type
    order by 1 desc;

> Msg 7356, Level 16, State 1, Line 13
> The OLE DB provider "OraOLEDB.Oracle" for linked server "RMTDB" supplied inconsistent metadata for a column. The column "CREATE_TIME" (compile-time ordinal 3) of object ""JOB1308"."HOST_INTERFACE"" was reported to have a "DBCOLUMNFLAGS_ISNULLABLE" of 0 at compile time and 32 at run time.

The OPENQUERY version of it works:

    select * from OPENQUERY( RMTDB,
        'select type, count(*) from HOST_INTERFACE group by type order by 1 desc')

Result:

    TYPE    COUNT(*)
    10      10450
    8       6543
    6       21

Anyone run into this?
sherveyj (111 rep)
Mar 20, 2021, 11:12 PM • Last activity: Jul 31, 2025, 01:03 AM
1 vote
1 answer
167 views
Inconsistent execution times for stored procedure
I have a stored procedure that runs daily. Most days it completes in about 20 minutes, some days it takes an hour. I'm trying to find the cause so I can get it to be more consistent. I looked for other jobs/processes that could be running at the same time and competing for resources, but did not see anything promising.

Ran BlitzCache. First result is Plan Cache Instability with 100% plans created in the past 24 hours. But the server was just patched and rebooted last night so I think that is to be expected.

The second result is Plan Warnings (warning detected in execution plan). Checked the Execution Plan and saw a recommendation for a missing index. It wants an index on PaidDate and then Include all the other columns in the table. I find the sql recommendations for indexes to be hit/miss, plus this database is a backup from a vendor and I do not have direct control to add indexes at will. Also, as long as I'm not clearing the plan cache I'm doubtful that a missing index would cause this type of inconsistency.

Third is Non-SARGable queries. I've included what I believe is the most resource intensive part of the query. The final step is to INSERT into a table from the temp table. There is a CASE statement, but we use CASE statements a lot and do not have this kind of variability in other stored procedures.

    DROP TABLE IF EXISTS #PPNUnpivotCCO;

    SELECT CCOMemberID = xw.CCOMemberID, MemberID = e.PrimeID, RateCD = e.RateCode,
        ContractServiceAreaCD = e.ContractServiceAreaCode, EligCat = e.EligCat,
        ContractRateAMT = Convert(decimal(15,2),e.ContractRate), Method = e.Method,
        CCOPremiumAMT = Convert(decimal(15,2),e.PremiumAmount),
        ---Medical
        Medical_PartnerCD = med.PartnerCD, Medical_PartnerNM = med.PartnerNM,
        MedBaseAmt = Convert(decimal(15,2),e.MedBaseAmt),
        MedRiskRateAmt = Convert(decimal(15,6),e.MedRiskRate), --Numeric length extended to 6 past the decimal C.S.
        MedAdjAmt = Convert(decimal(15,2),e.MedAdjAmt),
        AdminWHMedAmt = Convert(decimal(15,2),e.AdminWHMed),
        StrategicWHMedAmt = Convert(decimal(15,2),e.StrategicWHMed),
        CommunityWHMedAmt = Convert(decimal(15,2),e.CommunityWHMed),
        ReserveWHMedAmt = Convert(decimal(15,2),e.ReserveWHMed),
        MedFinalAMT = Convert(decimal(15,2),e.MedFinal),
        ---Mental Health
        Behavioral_PartnerCD = beh.PartnerCD, Behavioral_PartnerNM = beh.PartnerNM,
        MHBaseAmt = Convert(decimal(15,2),e.MHBaseAmt),
        MHRiskRateAmt = Convert(decimal(15,6),e.MHRiskRate), --Numeric length extended to 6 past the decimal C.S.
        MHAdjAmt = Convert(decimal(15,2),e.MHAdjAmt),
        AdminWHMHAmt = Convert(decimal(15,2),e.AdminWHMH),
        StrategicWHMHAmt = Convert(decimal(15,2),e.StrategicWHMH),
        CommunityWHMHAmt = Convert(decimal(15,2),e.CommunityWHMH),
        ReserveWHMHAmt = Convert(decimal(15,2),e.ReserveWHMH),
        MHFinalAmt = Convert(decimal(15,2),e.MHFinal),
        ---NEMT
        NEMTBaseAmt = Convert(decimal(15,2),e.NEMTBaseAmt),
        NEMTWithholdAmt = Convert(decimal(15,2),e.NEMTWithhold),
        StrategicWHNEMTAmt = Convert(decimal(15,2),e.StrategicWHNEMT),
        CommunityWHNEMTAmt = Convert(decimal(15,2),e.CommunityWHNEMT),
        NEMTFinalAmt = Convert(decimal(15,2),e.NEMTFinal),
        ---Dental
        Dental_PartnerCD = den.PartnerCD, Dental_PartnerNM = den.PartnerNM,
        DenBaseAmt = Convert(decimal(15,2),e.DenBaseAmt),
        DenRiskRateAmt = Convert(decimal(15,2),e.DenRiskRate),
        DenAdjAmt = Convert(decimal(15,2),e.DenAdjAmt),
        AdminWHDenAmt = Convert(decimal(15,2),e.AdminWHDen),
        StrategicWHDenAmt = Convert(decimal(15,2),e.StrategicWHDen),
        CommunityWHDenAmt = Convert(decimal(15,2),e.CommunityWHDen),
        ReserveWHDenAmt = Convert(decimal(15,2),e.ReserveWHDen),
        performanceWHDenAmt = Convert(decimal(15,2),e.performanceWHDen),
        DenFinalAmt = Convert(decimal(15,2),e.DenFinal),
        MCOTaxAmt = Convert(decimal(15,2),e.MCOTax),
        HRAAmt = Convert(decimal(15,2),e.HRA),
        RevenueRAEFinalAMT = COALESCE(e.MedFinal, 0) + COALESCE(e.DenFinal, 0) + COALESCE(e.MHFinal, 0),
        MemberMonth = CASE WHEN e.RateCode COLLATE database_default IS NULL THEN 0
                           WHEN e.RateCode COLLATE database_default IN (SELECT RateCD FROM Reference.Reference.Rate r WHERE r.ExcludeFromMM = 'Y') THEN 0
                           WHEN e.Prorate > 1 THEN 1
                           ELSE ROUND(e.Prorate, 5) END,
        BenefitMonth = DATEFROMPARTS(YEAR(e.BenefitBegin), MONTH(e.BenefitBegin), 1),
        StartDT = Convert(date,e.BenefitBegin),
        EndDT = Convert(date,e.BenefitEnd),
        PaidDT = Convert(date,e.PaidDate),
        PremiumPaymentNotificationID
    INTO #PPNUnpivotCCO
    --select e.*
    FROM VendorDB.dbo.Allocation e
    INNER JOIN Reference.Reference.CCOIdToMemberID xw -- only active members (solves the issue of multiple MemberId for a single COMemberid
        ON xw.CCOMemberID = e.ccoMemberID
        AND xw.MemberID COLLATE database_default = e.PrimeID
        AND xw.ActiveFLG = 1
    LEFT JOIN Reference.Reference.Rate r ---updated to left join, for some of the kickers, we do not have the rate codes set up in reference.
        ON e.RateCode COLLATE database_default = r.RateCD
    LEFT JOIN Reference.Reference.Partners Med
        ON ( e.MedPlan COLLATE database_default = Med.PartnerNM
          OR e.MedPlan COLLATE database_default = Med.PartnerAlt1NM
          OR e.MedPlan COLLATE database_default = Med.PartnerAlt2NM )
        AND Med.ActiveFLG = 1
    LEFT JOIN Reference.Reference.Partners beh
        ON ( e.MHPlan COLLATE database_default = beh.PartnerNM
          OR e.MHPlan COLLATE database_default = beh.PartnerAlt1NM
          OR e.MHPlan COLLATE database_default = beh.PartnerAlt2NM )
        AND beh.ActiveFLG = 1
    LEFT JOIN Reference.Reference.Partners den
        ON ( e.DenPlan COLLATE database_default = den.PartnerNM
          OR e.DenPlan COLLATE database_default = den.PartnerAlt1NM
          OR e.DenPlan COLLATE database_default = den.PartnerAlt2NM )
        AND den.ActiveFLG = 1
    WHERE 1=1
      and Convert(date,e.PaidDate) >= Convert(date,DateAdd(Year,-2,GETDATE())) ---Only pulling paid the last 2 years
    ;

    DELETE Enrollment.PremiumCCOMemberMonth
    where PaidDT >= Convert(date,DateAdd(Year,-2,GETDATE()));

    INSERT INTO Enrollment.PremiumCCOMemberMonth
        (CCOMemberID, MemberID, RateCD, ContractServiceAreaCD, EligCat, ContractRateAMT, Method, CCOPremiumAMT,
         Medical_PartnerCD, Medical_PartnerNM, MedBaseAmt, MedRiskRateAmt, MedAdjAmt, AdminWHMedAmt, StrategicWHMedAmt,
         CommunityWHMedAmt, ReserveWHMedAmt, MedFinalAmt, Behavioral_PartnerCD, Behavioral_PartnerNM, MHBaseAmt,
         MHRiskRateAmt, MHAdjAmt, AdminWHMHAmt, StrategicWHMHAmt, CommunityWHMHAmt, ReserveWHMHAmt, MHFinalAmt,
         NEMTBaseAmt, NEMTWithholdAmt, StrategicWHNEMTAmt, CommunityWHNEMTAmt, NEMTFinalAmt, Dental_PartnerCD,
         Dental_PartnerNM, DenBaseAmt, DenRiskRateAmt, DenAdjAmt, AdminWHDenAmt, StrategicWHDenAmt, CommunityWHDenAmt,
         ReserveWHDenAmt, performanceWHDenAmt, DenFinalAmt, MCOTaxAmt, HRAAmt, RevenueRAEFinalAMT, MemberMonth,
         BenefitMonth, StartDT, EndDT, PaidDT, PremiumPaymentNotificationID)
    SELECT CCOMemberID, MemberID, RateCD, ContractServiceAreaCD, EligCat, ContractRateAMT, Method, CCOPremiumAMT,
         Medical_PartnerCD, Medical_PartnerNM, MedBaseAmt, MedRiskRateAmt, MedAdjAmt, AdminWHMedAmt, StrategicWHMedAmt,
         CommunityWHMedAmt, ReserveWHMedAmt, MedFinalAmt, Behavioral_PartnerCD, Behavioral_PartnerNM, MHBaseAmt,
         MHRiskRateAmt, MHAdjAmt, AdminWHMHAmt, StrategicWHMHAmt, CommunityWHMHAmt, ReserveWHMHAmt, MHFinalAmt,
         NEMTBaseAmt, NEMTWithholdAmt, StrategicWHNEMTAmt, CommunityWHNEMTAmt, NEMTFinalAmt, Dental_PartnerCD,
         Dental_PartnerNM, DenBaseAmt, DenRiskRateAmt, DenAdjAmt, AdminWHDenAmt, StrategicWHDenAmt, CommunityWHDenAmt,
         ReserveWHDenAmt, performanceWHDenAmt, DenFinalAmt, MCOTaxAmt, HRAAmt, RevenueRAEFinalAMT, MemberMonth,
         BenefitMonth, StartDT, EndDT, PaidDT, PremiumPaymentNotificationID
    FROM #PPNUnpivotCCO

Fourth is regarding implicit conversions.

> The column [xw].[MemberID] has a data type of varchar which caused implicit conversion on the column [e].[PrimeID].

They are both varchar, so I'm guessing the implicit conversion is a result of the COLLATE needed because this database has a different collation than the other databases on the server. Does any of this look like the probable cause or do I need to dig in other places?
Don (377 rep)
Nov 13, 2024, 10:50 PM • Last activity: Jul 30, 2025, 06:10 PM
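One concrete follow-up on the non-SARGable warning above: Convert(date, e.PaidDate) in the WHERE clause wraps the column, which rules out a seek on PaidDate, while converting the boundary once into a variable leaves the column untouched. A sketch of the rewrite, assuming PaidDate is a datetime-family column as the existing code implies:

```sql
DECLARE @cutoff date = DATEADD(YEAR, -2, GETDATE());

-- Sargable: the column is compared directly, so an index on PaidDate can seek.
SELECT COUNT(*)
FROM VendorDB.dbo.Allocation AS e
WHERE e.PaidDate >= @cutoff;
```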
0 votes
1 answer
399 views
Is there any option to enable Window NT logs in SQL Server Logs
I'm trying to view SQL Server logs on a specific server and the Windows NT section is missing; other servers do show that section. In the screenshot shared below, I highlighted the section that I'm not getting for one of the SQL Server instances.

**Question** - Is there any configuration or any required step for enabling the **Windows NT** section in SQL Server Logs?

Server details for the instance with the issue:

    Microsoft SQL Server 2019 (RTM) - 15.0.2000.5 (X64) Sep 24 2019 13:48:23
    Copyright (C) 2019 Microsoft Corporation
    Standard Edition (64-bit) on Windows Server 2019 Standard 10.0 (Build 17763: ) (Hypervisor)

Server details for an instance without the issue (the one in the screenshot):

    Microsoft SQL Server 2016 (SP2-CU11-GDR) (KB4535706) - 13.0.5622.0 (X64) Dec 15 2019 08:03:11
    Copyright (c) Microsoft Corporation
    Standard Edition (64-bit) on Windows Server 2012 R2 Datacenter 6.3 (Build 9600: ) (Hypervisor)
Harsimranjeet Singh (133 rep)
Dec 29, 2020, 03:09 PM • Last activity: Jul 29, 2025, 06:07 AM
0 votes
2 answers
888 views
query for report of user logins in MSSQL
Does anyone know how to get a report of user logins in MSSQL that will display the days and number of user logins and logouts in the output?

---

Version 2019. I ran `master.sys.syslogins` but I got this message:

> The request for procedure 'syslogins' failed because 'syslogins' is a view object.
Iraj AkbariSohi (1 rep)
Apr 18, 2023, 09:32 AM • Last activity: Jul 28, 2025, 03:00 PM
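SQL Server keeps no login history by default, so a report like this needs something recording logons as they happen, for example a server-level logon trigger or a SQL Server Audit on SUCCESSFUL_LOGIN_GROUP / LOGOUT_GROUP. The reporting shape is then an ordinary aggregate; the dbo.LoginAudit table below is hypothetical and stands in for wherever those events get written.

```sql
-- Daily login counts per login, from a hypothetical dbo.LoginAudit table
-- populated by a logon trigger or a SQL Server Audit on SUCCESSFUL_LOGIN_GROUP.
SELECT CAST(LoginTime AS date) AS LoginDay,
       LoginName,
       COUNT(*)                AS Logins
FROM dbo.LoginAudit
GROUP BY CAST(LoginTime AS date), LoginName
ORDER BY LoginDay, LoginName;
```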
0 votes
2 answers
71 views
How do I make a good database diagram if my database is missing a lot of foreign key relationships?
I would prefer not to have to do it by hand and am hoping that there is a good tool that can get me most of the way there and then allow me to keep adding relationships to it manually, since we are missing a ton of foreign key relationships. What is a good way to do this?

So far I've tried:

1. SSMS database diagram, but it isn't useful because we don't have all the FKs set up and because I can't manually add lines to show relationships without actually going ahead and creating the FKs
2. Doing it by hand in Excel - very time consuming
3. Visual Studio LINQ to SQL - I installed it, but it doesn't show up when I try to add new item and search for it
4. Visual Studio EF Core Power Tools - when I try to create a diagram, I get an error
TheMortiestMorty (153 rep)
Jul 25, 2025, 04:09 PM • Last activity: Jul 26, 2025, 01:12 PM
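Whichever diagramming tool ends up being used, the relationships that do exist can be pulled straight out of the catalog to seed it, leaving only the undeclared ones to add by hand; a sketch listing every declared foreign key with its parent and referenced columns:

```sql
SELECT fk.name                                      AS ForeignKeyName,
       OBJECT_SCHEMA_NAME(fk.parent_object_id)      AS ParentSchema,
       OBJECT_NAME(fk.parent_object_id)             AS ParentTable,
       cp.name                                      AS ParentColumn,
       OBJECT_SCHEMA_NAME(fk.referenced_object_id)  AS ReferencedSchema,
       OBJECT_NAME(fk.referenced_object_id)         AS ReferencedTable,
       cr.name                                      AS ReferencedColumn
FROM sys.foreign_keys AS fk
JOIN sys.foreign_key_columns AS fkc
    ON fkc.constraint_object_id = fk.object_id
JOIN sys.columns AS cp
    ON cp.object_id = fkc.parent_object_id AND cp.column_id = fkc.parent_column_id
JOIN sys.columns AS cr
    ON cr.object_id = fkc.referenced_object_id AND cr.column_id = fkc.referenced_column_id
ORDER BY ParentSchema, ParentTable, ForeignKeyName;
```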
0 votes
1 answer
146 views
SQL Server instance upgrade impacting on other instance maintenance plan jobs
I have two instances on a single server, named TEST and LIVE for simplicity. Both are running SQL Server 2019. Only the LIVE instance has SQL Agent jobs running (basic backup jobs via maintenance plans). SSIS is not installed on this environment.

I have just upgraded the TEST instance to SQL Server 2022. All was quite straightforward except for the ODBC and OLEDB components having to be uninstalled for it to upgrade correctly. All looks okay now on the TEST instance. However, there is an issue with the LIVE instance, which is still running SQL Server 2019. The backup jobs are failing with:

> Could not load package "Maintenance Plans\Backup" because of error 0x80070002. Description: The system cannot find the file specified.

I have determined that the issue is with the DTExec component by running the backup code from the SQL Agent job via the command prompt. If I attempt to run it with the 150\DTS\Binn version (i.e. the SQL Server 2019 version) it fails. However, when I run it with the 160\DTS\Binn version (i.e. the newly installed SQL Server 2022 version) it works. I would have assumed that, if anything, it would be the other way around. I cannot see any changes to files within the SQL Server 2019 folders. I am guessing some shared component has altered and is causing this, but I cannot track it down.

I am quite sure that when I upgrade the LIVE instance this problem will be resolved, but I would like to know why this upgrade has caused an issue. If there were multiple instances on a server then this could be much more of a problem. I have tried all the obvious sources of information (including ChatGPT) for help but nothing is telling me why this is happening. Does anyone have an idea?

Additional information: Reinstalling the old drivers makes no difference. I get the same results.
StuKay (101 rep)
Oct 23, 2024, 03:58 PM • Last activity: Jul 25, 2025, 07:03 PM
1 vote
2 answers
143 views
Tempdb log growth 3x as slow on new server, SP does not perform
We are running into the issue that our stored procedure on our new SQL Server performs a lot slower than on the old server. We looked into the problem and noticed the following on our tempdb: the incremental growth takes 3 times as long on the new server. The same happens for the log file of our database. What is causing this?
Esmee (29 rep)
Mar 19, 2021, 04:16 PM • Last activity: Jul 23, 2025, 11:06 PM
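A comparison worth running on both servers before digging deeper: per-file write-stall figures from sys.dm_io_virtual_file_stats, which usually show quickly whether the new storage is simply slower at log writes. Nothing here is specific to the poster's environment.

```sql
-- Average write stall (ms) per database file since the instance started.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name                  AS logical_file,
       mf.type_desc             AS file_type,
       vfs.num_of_writes,
       vfs.io_stall_write_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY avg_write_stall_ms DESC;
```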
1 vote
1 answer
67 views
SQL Server Shrinkfile Stops at 76% Due to LOB and System Data—Is Partial Shrink Possible?
**Problem:** I am working on a SQL Server 2019 database where I deleted a large amount of user data (~500 GB). After deletion, I attempted to shrink the data file using:

    DBCC SHRINKFILE (N'web_Data', TARGET_SIZE_MB);

However, the shrink process stopped at 76% and could not proceed further.

**Analysis:** Using sys.dm_db_database_page_allocations and sys.allocation_units, I found that:

- **LOB data and system objects** are allocated toward the **end of the file.**
- Since these allocations are not movable, the shrink operation cannot release the remaining space.

**Question:** **How can I perform a shrink operation that reclaims only the movable free space, without getting stuck on unmovable LOB and system data?**

**What I Tried:**

- SHRINKFILE with a target size
- SHRINKFILE with …

**Clarifications:**

- I understand that shrinking can cause fragmentation. I'm **not asking whether I should shrink**, I'm asking how to reclaim only **the free space that is actually reclaimable** without getting blocked.
- Rebuilding the database is not part of this question; I'm focusing on whether **SQL Server allows partial shrink in this scenario.**

**Environment:**

- **SQL Server Version:** 2019
- **Database Size Before Deletion:** ~1 TB
- **Deleted Data:** ~500 GB from a single large table
- **File:** Single data file in PRIMARY filegroup

**Goal:**

- Reclaim as much space as possible using SHRINKFILE, even if system LOB data cannot be moved.
Shehzad Malik (11 rep)
Jul 21, 2025, 11:28 AM • Last activity: Jul 21, 2025, 12:11 PM
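A common way to reclaim the movable space without waiting on the unmovable pages at the end of the file is to shrink toward the space actually in use rather than to the theoretical minimum, repeating with smaller targets as needed. A hedged sketch; the file name comes from the question and the 10% headroom is an arbitrary choice.

```sql
-- Shrink toward the space actually in use, leaving ~10% headroom,
-- rather than asking for the theoretical minimum in one shot.
DECLARE @used_mb   int,
        @target_mb int,
        @sql       nvarchar(400);

SELECT @used_mb = FILEPROPERTY(name, 'SpaceUsed') / 128   -- 8 KB pages -> MB
FROM sys.database_files
WHERE name = N'web_Data';

SET @target_mb = @used_mb + (@used_mb / 10);

SET @sql = N'DBCC SHRINKFILE (N''web_Data'', '
         + CAST(@target_mb AS nvarchar(20)) + N');';
EXEC sys.sp_executesql @sql;
```

Re-running this with progressively lower targets reclaims whatever is movable; the pages pinned at the end of the file simply stop the final step short rather than failing it.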
1 vote
1 answer
166 views
Restore of Database with CDC Failing
I'm taking a backup of a SQL Server 2019 database with CDC enabled, and trying to restore it to a different instance. I don't need CDC on the target database, so I don't have the "keep_cdc" option in the restore script. Despite this, I'm getting the following error:

> Could not upgrade the metadata for database mydbname that is enabled for Change Data Capture. The failure occurred when executing the action 'drop function fn_cdc_get_all_changes_'. The error returned was 3764: 'line 1, state 1, Cannot drop the table valued function 'cdc.fn_cdc_get_all_changes_dbo_mytablename' because it is being used for Change Data Capture.'. Use the action and error to determine the cause of the failure and resubmit the request.

The database seems to restore OK despite the error, but CDC is enabled when I look in sys.databases. When I try to disable CDC on the restored database I get the following error:

> Could not update the metadata that indicates database mydbname is not enabled for Change Data Capture. The failure occurred when executing the command 'DROP TABLE dbo.systranschemas'. The error returned was 3764: 'Cannot drop the table 'dbo.systranschemas' because it is being used for Change Data Capture.'. Use the action and error to determine the cause of the failure and resubmit the request.

Any thoughts as to why this is happening despite the fact that I'm not including keep_cdc in the restore script? Any potential solutions?
Brian Layden (11 rep)
Feb 14, 2025, 04:45 PM • Last activity: Jul 18, 2025, 05:07 AM
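One workaround that is sometimes suggested for this situation: restore WITH KEEP_CDC so the CDC metadata stays consistent, then disable CDC cleanly on the restored copy. A sketch with placeholder paths and logical file names; I have not verified it against this exact error.

```sql
RESTORE DATABASE mydbname
FROM DISK = N'\\backupshare\mydbname.bak'          -- placeholder path
WITH MOVE N'mydbname'     TO N'D:\Data\mydbname.mdf',    -- placeholder logical/physical names
     MOVE N'mydbname_log' TO N'L:\Log\mydbname_log.ldf',
     KEEP_CDC,
     REPLACE;
GO

USE mydbname;
GO
EXEC sys.sp_cdc_disable_db;   -- removes the CDC objects once the metadata is intact
GO
```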
5 votes
1 answer
160 views
Will unchanged database extents result in identical SQL backup bits on disk? What about with backup compression enabled?
(I'm trying to predict long-term SQL backup storage needs and if/where to enable compression, on a storage appliance with de-duplication. I know the best answer is "test both and see", which we are doing.) I seem to recall that SQL backups (and restores) are performed upon SQL database *extents* (8x 8kb data pages). My question is, *are those extents handled predictably and consistently by the SQL backup process* in a way that the storage de-duplication can rely on? By that I mean, if we do a full backup each night, *will the unchanged extents result in identical bits on disk (for those unchanged portions) from night to night?* Or is that not reliable/predictable? What if we enable SQL backup compression? Will identical extents result in identical compressed bits from night to night? Or is that not guaranteed?
BradC (10023 rep)
Jul 16, 2025, 08:43 PM • Last activity: Jul 17, 2025, 04:28 PM
0 votes
0 answers
21 views
Understanding commit_time in dm_tran_commit_table: Insert Time, Timezone, and Latency Concerns
I am currently working with SQL Server and exploring transaction management, specifically the dm_tran_commit_table view. I have a couple of questions regarding the commit_time and change tracking. Database has Delay Durability = Forced and Is Read Committed Snapshot = true 1. Does the commit_time represent the exact moment a record is inserted into a table or inserted to change tracking table, or does it indicate the time the transaction was committed, which may be different? 2. If the transaction is committed, what timezone does the commit_time reference? Is it based on the server’s local time, UTC, or another timezone? The reason I am asking is that I have a creation date in my system from which we are replicating data, and I noticed that the date from the system is about 25 minutes earlier than the date taken from commit_time. According to the statistics I checked, there should be a maximum latency of about 5 minutes. This discrepancy is concerning, and I’d like to understand if the commit_time could be contributing to this issue. I would greatly appreciate any insights or references to documentation that can clarify these points. Thank you for your assistance!
adam.g (465 rep)
Jul 15, 2025, 10:43 AM
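An empirical check that answers the timezone half of the question on any given instance: put the most recent commit_time values next to the local and UTC clocks and see which one they track.

```sql
SELECT TOP (5)
       ct.commit_ts,
       ct.commit_time,
       GETDATE()    AS server_local_now,
       GETUTCDATE() AS utc_now
FROM sys.dm_tran_commit_table AS ct
ORDER BY ct.commit_ts DESC;
```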
0 votes
1 answer
165 views
Is this Parameter Sniffing?
I have a particular stored procedure that takes 1 parameter. This proc is executed once a day with the **same parameter** value every day. The row count is also the same with each run of the proc. Day 1 - Runs for 10 minutes. Day 2 - Runs for 8 hours. Each run produces the exact same query plan.

In an attempt to remedy this, I added **OPTION (RECOMPILE)** to the problematic statement and it seems to have fixed it. It's been 5 days and each run has been 10 minutes. My understanding of bad parameter sniffing is when the cached query plan is not optimal for certain parameter values. So, how do I actually define this problem? What would cause this drastic change in runtimes if the query plan is exactly the same and the parameter value is the same?

Edit: Query Store is enabled on the database. Using this, I was able to determine that the query plans are identical between good and bad runs. This is the part of the query plan that does the bulk of the work.

Thanks
Sicilian-Najdorf (381 rep)
Nov 6, 2023, 08:01 AM • Last activity: Jul 12, 2025, 04:03 PM
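Since Query Store is enabled, the runtime history of the good and bad runs can be compared directly; when the plan shape really is identical, the difference tends to show up in the reads and row counts per execution rather than in the plan. A generic sketch; replace the LIKE filter with text that matches the statement in question.

```sql
SELECT q.query_id,
       p.plan_id,
       rs.first_execution_time,
       rs.count_executions,
       rs.avg_duration / 1000.0 AS avg_duration_ms,   -- avg_duration is in microseconds
       rs.avg_logical_io_reads,
       rs.avg_rowcount
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt    ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p           ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
WHERE qt.query_sql_text LIKE N'%YourProblemStatement%'   -- placeholder filter
ORDER BY rs.first_execution_time DESC;
```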
2 votes
1 answer
177 views
Synchronize 2 sql server 2019 database over intermittent connection (no replication)
I'm a PostgreSQL DBA learning SQL Server 2019.

**The setup:**

- 1 Desktop, 1 Laptop
- Both running SQL Server 2019 Developer
- Windows 10 Home (nothing else, no Active Directory, NAS, SAN, ...)
- a database in simple recovery model, with partial containment enabled

**The problem:**

- I want to be able to synchronize a database from my laptop to the desktop, or the other way around
- I don't want to use replication (log shipping or others) since I may not connect both servers for a long time and my laptop is very limited in disk space. (I'm in simple recovery model for the same reasons)
- both (desktop/laptop) databases must be writable (so no primary/secondary)
- if the database is modified on both sides, I'll lose modifications on one side. It's OK! I accept this.

**The solution I know:**

- Create a SQL Server dump, send it to the desktop over the shared network, apply the dump - or the other way around.

**Is there a way to export a database to another database, on demand? Or is the dump/transfer/restore the way to go?** (that's what I would do with PostgreSQL)

I don't know all the SQL Server keywords and terminology, and my google-fu was unsuccessful. Most solutions I find are obviously about how to set up replication over a reliable/permanent network, which doesn't fit my requirements.
ker2x (121 rep)
Jul 21, 2020, 07:28 PM • Last activity: Jul 11, 2025, 02:03 AM
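For the on-demand case described here, backup/copy/restore is effectively the SQL Server counterpart of the pg_dump workflow; a hedged sketch with placeholder paths and logical file names:

```sql
-- On the source machine:
BACKUP DATABASE MyDb
TO DISK = N'C:\Temp\MyDb.bak'             -- placeholder path
WITH COMPRESSION, COPY_ONLY, INIT;

-- Copy the .bak over the network share, then on the target machine:
RESTORE DATABASE MyDb
FROM DISK = N'C:\Temp\MyDb.bak'
WITH MOVE N'MyDb'     TO N'C:\SQLData\MyDb.mdf',      -- logical names are placeholders
     MOVE N'MyDb_log' TO N'C:\SQLData\MyDb_log.ldf',
     REPLACE;                                          -- overwrite the stale copy
```

COPY_ONLY keeps the ad-hoc backup from disturbing any existing backup chain, and COMPRESSION helps with the laptop's limited disk space.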
2 votes
1 answer
76 views
How do I maintain a specific cadence in a bi-weekly SQL Agent job schedule?
I have a SQL Agent job I need to run every two weeks on specific Saturdays (6/14, 6/28, 7/12, 7/26...). The job failed on 6/28 and when I query sysjobschedules it indicates the next scheduled run date is now 7/5. How do I get this to stay on the original schedule? I assumed it calculated from the Start Date in the schedule but this does not appear to be the case. I tried changing the Start Date in the schedule to 7/1/2025 to see if it would reset, but the next scheduled run date is still showing 7/5/2025.
Don (377 rep)
Jul 1, 2025, 05:56 PM • Last activity: Jul 10, 2025, 10:59 AM
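If the cadence has drifted, one commonly suggested way to re-anchor it is to update the schedule's active start date to the next intended Saturday with the documented msdb.dbo.sp_update_schedule procedure, then check what the Agent plans next. The schedule and job names below are placeholders, and I have not verified that this alone restores the original anchor in every case.

```sql
EXEC msdb.dbo.sp_update_schedule
     @name = N'BiWeekly Saturday',        -- placeholder schedule name
     @active_start_date = 20250712,       -- next intended run date (yyyymmdd)
     @active_start_time = 20000;          -- 02:00:00 encoded as hhmmss

-- Confirm what the Agent now plans to do.
SELECT j.name, s.next_run_date, s.next_run_time
FROM msdb.dbo.sysjobschedules AS s
JOIN msdb.dbo.sysjobs AS j ON j.job_id = s.job_id
WHERE j.name = N'YourJobName';            -- placeholder job name
```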