
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
139 views
Impact of timezone change during SQL Server migration
We are migrating from our legacy SQL Server 2017 instance on a Windows Server 2016 host to SQL Server 2019 on Windows Server 2019. The biggest change in this migration is the difference in time zone between the legacy and new Windows hosts. Previously we had a mix of CST, EST, and PST servers being built, but now everything is standardized on UTC. As the DBA, I'm trying to understand the possible impact of this change. A few questions (see the sketch after this list):
1. What impact, if any, is expected for queries that use getdate()? Will they be affected now that server time is UTC?
2. The AD team said they can force the application to send Eastern Time at their end, but how will that be handled once the queries land in the database?
3. All the databases in question use partitioned tables with date-range partition functions, and a lot of ad-hoc queries run on a read replica. Will the partitions be impacted or need to change?
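To make question 1 concrete, here is a minimal sketch (the literal date below is just an example) of how the two clock functions diverge and how a client-supplied Eastern time can be normalized at the boundary:

    -- On a UTC host, GETDATE() and GETUTCDATE() return the same value;
    -- on a CST/EST/PST host they differ by that zone's UTC offset.
    SELECT GETDATE()            AS server_local_time,
           GETUTCDATE()         AS utc_time,
           SYSDATETIMEOFFSET()  AS local_time_with_offset;

    -- If the application keeps sending Eastern Time, it can be converted
    -- explicitly rather than relying on the server's clock:
    SELECT CAST('2024-02-01 10:00' AS datetime2)
               AT TIME ZONE 'Eastern Standard Time'    -- interpret as ET
               AT TIME ZONE 'UTC' AS converted_to_utc; -- convert to UTC

Partition functions compare stored values against fixed boundary points, so existing rows and boundaries don't move; the practical risk is that new rows inserted near midnight now land in a different date partition than they would have under the old server time zone.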
Newbie-DBA (804 rep)
Feb 1, 2024, 03:47 AM • Last activity: Aug 2, 2025, 03:02 AM
1 vote
1 answer
155 views
Using Policy-Based Management to check backup history for an Availability Group database
I like using Policy-Based Management to do some simple "everything's okay" sanity checks and email me if something goes out of spec. Typically, I validate the time since the last full backup, both to verify backups are running on schedule, and also to make sure that newly created databases are being included in backups. This works perfectly fine on a standalone server. However, we recently deployed a two-node Always On Availability Group. It's configured to run backups on the current primary node. As you're probably aware, backup history is stored in msdb on the server that performs the backup. This causes a problem with the @LastBackupDate property, which only checks the local backup history for the server the policy is being evaluated on. After a failover, the backup policies almost immediately go out of compliance, as the backups have been running from the other server for however long it was primary (most likely for longer than the span that the policy is checking). Is there any reasonably simple way to make these policy checks Availability-Group-aware? Or am I going to have to look for some other backup monitoring solution?
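One possible direction, offered as a sketch rather than a tested policy: build the check on a PBM condition that uses ExecuteSql('Numeric', ...) so it can consult the AG's backup preference before trusting local msdb history. The database name and the 26-hour window below are placeholders:

    -- Returns 1 (compliant) when this replica isn't the preferred backup
    -- source, otherwise checks local backup history for a recent full.
    SELECT CASE
             WHEN sys.fn_hadr_backup_is_preferred_replica(N'YourAgDatabase') = 0
               THEN 1
             WHEN EXISTS (SELECT 1
                          FROM msdb.dbo.backupset
                          WHERE database_name = N'YourAgDatabase'
                            AND type = 'D'
                            AND backup_finish_date > DATEADD(HOUR, -26, GETDATE()))
               THEN 1
             ELSE 0
           END AS backup_ok;

This only partially sidesteps the failover problem: after a failback, the new primary's msdb history may still be stale for the span the other node was primary, so centralizing backup history (or monitoring from both nodes) may still be necessary.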
db2 (9708 rep)
Aug 14, 2019, 12:16 PM • Last activity: Jul 20, 2025, 07:05 AM
4 votes
1 answer
3252 views
Should I use the deprecated MD5 function in SQL Server?
We'd like to use **MD5** for our hashing function instead of **SHA_256**, but as of SQL Server 2016, MD5 is deprecated. We're using it for change detection (comparing which records have changed). We now face a dilemma: risk using the deprecated function, or incur the storage and performance overhead of SHA_256. It's frustrating that Microsoft decided to deprecate these functions even though they are still useful in certain scenarios. This project isn't a critical component of the business. We'll likely go with SHA_256, but is this the right choice? Should new development always avoid deprecated functions? For context: the daily load will be about 1-2 million rows, upserted into a 400-million-row table about 30 columns wide, comparing hashbytes on the fly. https://learn.microsoft.com/en-us/sql/t-sql/functions/hashbytes-transact-sql?view=sql-server-2017 https://dba.stackexchange.com/questions/35219/choosing-the-right-algorithm-in-hashbytes-function
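For what it's worth, HASHBYTES has no 'SHA_256' algorithm name; the T-SQL identifier is 'SHA2_256'. A quick sketch of the size difference that drives the storage overhead (the literal values stand in for your 30 concatenated columns):

    -- MD5 yields 16 bytes per row, SHA2_256 yields 32; over 400 million
    -- rows that's roughly 6.4 GB versus 12.8 GB of stored hash.
    SELECT HASHBYTES('MD5',      CONCAT('col1', '|', 'col2')) AS md5_hash,     -- 16 bytes
           HASHBYTES('SHA2_256', CONCAT('col1', '|', 'col2')) AS sha256_hash;  -- 32 bytes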
Gabe (1396 rep)
Feb 7, 2019, 02:38 PM • Last activity: Jul 19, 2025, 05:50 AM
4 votes
1 answer
167 views
Migrating a SQL Server 2014 AG to a new 2017 AG
Our current production server runs SQL Server 2014 with an AlwaysOn AG. We are planning to upgrade it to SQL Server 2017 AlwaysOn. Due to configuration issues with the present cluster, we cannot reuse it. The production server has 52 databases with a total size of 2 TB. The databases serve an online system, so minimal downtime is our core requirement. Our initial plan is a side-by-side approach:

1. Provision 3 SQL Server 2017 instances.
2. Create a new cluster.
3. Ship logs from the existing 2014 AG primary server to all the servers in the new 2017 group.

On the switchover day:

1. Disconnect all applications from the existing server.
2. Take a final log backup.
3. Apply logs to all servers in the new group WITH NORECOVERY, except WITH RECOVERY on Primary_new (see the sketch below).
4. Create the AG and add all databases to it.
5. Point all applications to the new cluster.
6. Shut down the old cluster.

Is this a good approach? Are there any other approaches? If I am missing anything, please guide me. Thanks.
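A minimal sketch of that final restore step, with placeholder database and share names rather than the actual 52 databases: the key detail is that every future secondary must stay in the RESTORING state so it can later be joined to the new AG.

    -- On every server in the new 2017 cluster except the new primary:
    RESTORE LOG [SalesDB]
        FROM DISK = N'\\backupshare\SalesDB_final.trn'
        WITH NORECOVERY;

    -- On the new primary only:
    RESTORE LOG [SalesDB]
        FROM DISK = N'\\backupshare\SalesDB_final.trn'
        WITH RECOVERY;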
user1716729 (693 rep)
Dec 27, 2018, 05:48 PM • Last activity: Jul 18, 2025, 05:03 PM
1 vote
1 answer
494 views
SQL Server on Linux: error restoring backup of DB from Windows with full-text data file
I'm trying to move a SQL Server database from Windows 10 to Linux, following the instructions in https://learn.microsoft.com/es-es/sql/linux/sql-server-linux-migrate-restore-database?view=sql-server-linux-2017 . The Linux instance is freshly installed on Ubuntu 16.04.4 LTS. The Windows backup already existed (I did not take it myself, but it's a full backup). When I try to restore, it generates an access-denied error on the full-text catalog file (database name changed to 'mydb' for privacy; error messages translated from Spanish):

    sqlcmd -S localhost -U SA -Q "RESTORE DATABASE mydb
        FROM DISK = '/var/opt/mssql/backup/mydb_backup_201804300000.bak'
        WITH MOVE 'mydb' TO '/var/opt/mssql/data/mydb.mdf',
             MOVE 'mydb_log' TO '/var/opt/mssql/data/mydb_log.ldf',
             MOVE 'sysft_appuser_catalog3' TO '/var/opt/mssql/data/catalog.ft'"

    Msg 7610, Level 16, State 1, Server irulan, Line 1
    Access is denied to '/var/opt/mssql/data/catalog.ft', or the path is invalid.
    Msg 3156, Level 16, State 50, Server irulan, Line 1
    File 'sysft_appuser_catalog3' cannot be restored to '/var/opt/mssql/data/catalog.ft'. Use WITH MOVE to identify a valid location for the file.

The other two files (mdf and ldf) are created without problems in the same folder. I have tried different file names, creating the file beforehand with touch, and so on, with no success. How can I restore this database? I'd be willing to restore it without the full-text index - is there a way to do that? This is the (abridged) output of RESTORE FILELISTONLY, to check the contents of the backup:

    LogicalName             PhysicalName                                                                  Type  FileGroupName
    ----------------------  ----------------------------------------------------------------------------  ----  -------------
    mydb                    D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mydb.mdf            D     PRIMARY
    mydb_log                D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Data\mydb_log.ldf        L     NULL
    sysft_appuser_catalog3  D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\FTData\appuser_catalog3  F     PRIMARY

    (3 rows affected)

Note: I have found this [Stack Overflow post](https://stackoverflow.com/questions/536182/mssql2005-restore-database-without-restoring-full-text-catalog); the poster had a different problem, but was also willing to restore the DB without the full-text data. It doesn't say how (or if!) he ever resolved his problem, so it doesn't really give me an answer.
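Two things worth checking, offered as a sketch rather than a confirmed fix. The Type 'F' row is a full-text catalog, which in backups from that era is a folder rather than a single file, and on Linux full-text support ships as a separate package; the catalog path below is illustrative:

    -- Full-text search is a separate install on Linux (the mssql-server-fts
    -- package); without it the catalog may not be restorable at all.
    -- Try moving the catalog to a directory-style path that the mssql user
    -- owns, rather than to a '.ft' file name:
    RESTORE DATABASE mydb
        FROM DISK = '/var/opt/mssql/backup/mydb_backup_201804300000.bak'
        WITH MOVE 'mydb' TO '/var/opt/mssql/data/mydb.mdf',
             MOVE 'mydb_log' TO '/var/opt/mssql/data/mydb_log.ldf',
             MOVE 'sysft_appuser_catalog3' TO '/var/opt/mssql/data/mydb_catalog';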
user155183
Jul 12, 2018, 07:00 PM • Last activity: Jul 15, 2025, 09:01 AM
0 votes
1 answer
161 views
Unable to create maintenance plan
When trying to create a maintenance plan locally on two prod servers through SSMS, I receive this error: [screenshot: https://i.sstatic.net/Wh84N.png] No such error occurs on my UAT system. They're all running SQL Server 2017 (14.0.3162.1), and they were all installed with the same feature set. The UAT machine is Windows 10, while the prod servers are Windows Server 2016. Which components/features specifically need to be installed so that I can create a maintenance plan?
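A hedged starting point rather than a confirmed fix: maintenance plans are SSIS packages executed through SQL Server Agent, so one quick check on the failing servers is whether the SSIS subsystem is registered and its DLL path is valid:

    -- A missing row, or a subsystem_dll path that doesn't exist on disk,
    -- would point at absent Integration Services components.
    SELECT subsystem, subsystem_dll
    FROM msdb.dbo.syssubsystems
    WHERE subsystem = N'SSIS';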
Franzl (101 rep)
Jun 30, 2019, 10:33 AM • Last activity: Jul 13, 2025, 04:05 PM
2 votes
1 answer
690 views
Handling new fields when merging in data using hashbytes?
We load data from stage into our ODS and check for differences using hashbytes. We calculate hashbytes from the stage tables, insert/update into the destination table, and also store the hashbytes value in the destination. The problem arises when there's a new field we need to bring in from a source system. We'll add a new column in the ODS table with a default value, but the calculated hashbytes differ because of this new field. As a result, everything gets updated even if nothing has changed. We'd have to update the hashbytes column in the large (300M, but could be 1B rows) table whenever we add a new field, which is not often but enough to be cumbersome. What are the best approaches to handle this? I'm thinking of removing the hashbytes column from the ODS table and just calculating it in the proc for stage and ODS values. I inherited this code, so I don't know why the hashbytes is stored in the ODS table - is that a best practice?

**Approach #1**

    UPDATE b
    SET Field1 = a.Field1,
        Field2 = a.Field2
    FROM source a
    INNER JOIN destination b
        ON a.PrimaryKeyField = b.PrimaryKeyField
    WHERE HASHBYTES('SHA1', CONCAT(a.[Field1], '|', a.[Field2]))
       <> HASHBYTES('SHA1', CONCAT(b.[Field1], '|', b.[Field2]))

**Approach #2**

    UPDATE b
    SET Field1 = a.Field1,
        Field2 = a.Field2
    FROM source a
    INNER JOIN destination b
        ON a.PrimaryKeyField = b.PrimaryKeyField
    WHERE (a.Field1 <> b.Field1
        OR ISNULL(a.Field2, '') <> ISNULL(b.Field2, ''))
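One way to avoid rehashing the whole table when a column is added, sketched here with a hypothetical new column Field3 defaulting to '': append new fields to the hash input only when they hold a non-default value, so every pre-existing row still produces its old hash.

    -- Rows where Field3 is still at its default hash exactly as before,
    -- so only genuinely changed rows get flagged by the comparison.
    SELECT HASHBYTES('SHA1',
               CONCAT(Field1, '|', Field2,
                      CASE WHEN ISNULL(Field3, '') = ''
                           THEN ''
                           ELSE CONCAT('|', Field3) END)) AS row_hash
    FROM dbo.stage_table;   -- table name is a placeholder

The trade-off: a row whose Field3 later changes back to the default hashes as if the column were absent, which is usually acceptable for change detection but worth stating explicitly.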
Gabe (1396 rep)
Dec 28, 2018, 08:15 PM • Last activity: Jul 2, 2025, 11:03 AM
3 votes
1 answer
397 views
How to lock an MSSQL (2017 Linux) account after N unsuccessful login attempts
I have to configure account lockout after N unsuccessful login attempts on SQL Server 2017 for Linux. It's a standalone server and is not in AD. Unfortunately, I couldn't find any valuable information so far, perhaps because the Linux platform is still quite fresh for SQL Server. Thank you for any advice.
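For SQL logins, the usual lever is CHECK_POLICY, which enables the password policy (including lockout). A sketch with a placeholder login name; note that on Linux the thresholds come from built-in defaults rather than a tunable Windows security policy, so verify the actual lockout behavior on your build:

    -- Enable password policy enforcement for a SQL login:
    ALTER LOGIN [app_user] WITH CHECK_POLICY = ON;

    -- Inspect lockout state afterwards:
    SELECT LOGINPROPERTY('app_user', 'IsLocked')         AS is_locked,
           LOGINPROPERTY('app_user', 'BadPasswordCount') AS bad_password_count;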
Radek Radek (31 rep)
Mar 25, 2019, 10:14 AM • Last activity: Jul 1, 2025, 09:06 AM
14 votes
2 answers
1816 views
Why is a temp table a more efficient solution to the Halloween Problem than an eager spool?
Consider the following query that inserts rows from a source table only if they aren't already in the target table:

    INSERT INTO dbo.HALLOWEEN_IS_COMING_EARLY_THIS_YEAR WITH (TABLOCK)
    SELECT maybe_new_rows.ID
    FROM dbo.A_HEAP_OF_MOSTLY_NEW_ROWS maybe_new_rows
    WHERE NOT EXISTS (
        SELECT 1
        FROM dbo.HALLOWEEN_IS_COMING_EARLY_THIS_YEAR halloween
        WHERE maybe_new_rows.ID = halloween.ID
    )
    OPTION (MAXDOP 1, QUERYTRACEON 7470);

One possible plan shape includes a merge join and an eager spool. The eager spool operator is present to solve the Halloween Problem: [plan image: merge join with eager spool] On my machine, the above code executes in about 6900 ms. Repro code to create the tables is included at the bottom of the question. If I'm dissatisfied with performance I might try to load the rows to be inserted into a temp table instead of relying on the eager spool. Here's one possible implementation:

    DROP TABLE IF EXISTS #CONSULTANT_RECOMMENDED_TEMP_TABLE;
    CREATE TABLE #CONSULTANT_RECOMMENDED_TEMP_TABLE (
        ID BIGINT,
        PRIMARY KEY (ID)
    );

    INSERT INTO #CONSULTANT_RECOMMENDED_TEMP_TABLE WITH (TABLOCK)
    SELECT maybe_new_rows.ID
    FROM dbo.A_HEAP_OF_MOSTLY_NEW_ROWS maybe_new_rows
    WHERE NOT EXISTS (
        SELECT 1
        FROM dbo.HALLOWEEN_IS_COMING_EARLY_THIS_YEAR halloween
        WHERE maybe_new_rows.ID = halloween.ID
    )
    OPTION (MAXDOP 1, QUERYTRACEON 7470);

    INSERT INTO dbo.HALLOWEEN_IS_COMING_EARLY_THIS_YEAR WITH (TABLOCK)
    SELECT new_rows.ID
    FROM #CONSULTANT_RECOMMENDED_TEMP_TABLE new_rows
    OPTION (MAXDOP 1);

The new code executes in about 4400 ms. I can get actual plans and use Actual Time Statistics™ to examine where time is spent at the operator level. Note that asking for an actual plan adds significant overhead for these queries so totals will not match the previous results.

    ╔═════════════╦═════════════╦══════════════╗
    ║ operator    ║ first query ║ second query ║
    ╠═════════════╬═════════════╬══════════════╣
    ║ big scan    ║        1771 ║         1744 ║
    ║ little scan ║         163 ║          166 ║
    ║ sort        ║         531 ║          530 ║
    ║ merge join  ║         709 ║          669 ║
    ║ spool       ║        3202 ║          N/A ║
    ║ temp insert ║         N/A ║          422 ║
    ║ temp scan   ║         N/A ║          187 ║
    ║ insert      ║        3122 ║         1545 ║
    ╚═════════════╩═════════════╩══════════════╝

The query plan with the eager spool seems to spend significantly more time on the insert and spool operators compared to the plan that uses the temp table. Why is the plan with the temp table more efficient? Isn't an eager spool mostly just an internal temp table anyway? I believe I am looking for answers that focus on internals. I'm able to see how the call stacks are different but can't figure out the big picture. I am on SQL Server 2017 CU 11 in case someone wants to know. Here is code to populate the tables used in the above queries:

    DROP TABLE IF EXISTS dbo.HALLOWEEN_IS_COMING_EARLY_THIS_YEAR;
    CREATE TABLE dbo.HALLOWEEN_IS_COMING_EARLY_THIS_YEAR (
        ID BIGINT NOT NULL,
        PRIMARY KEY (ID)
    );

    INSERT INTO dbo.HALLOWEEN_IS_COMING_EARLY_THIS_YEAR WITH (TABLOCK)
    SELECT TOP (20000000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM master..spt_values t1
    CROSS JOIN master..spt_values t2
    CROSS JOIN master..spt_values t3
    OPTION (MAXDOP 1);

    DROP TABLE IF EXISTS dbo.A_HEAP_OF_MOSTLY_NEW_ROWS;
    CREATE TABLE dbo.A_HEAP_OF_MOSTLY_NEW_ROWS (
        ID BIGINT NOT NULL
    );

    INSERT INTO dbo.A_HEAP_OF_MOSTLY_NEW_ROWS WITH (TABLOCK)
    SELECT TOP (1900000) 19999999 + ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM master..spt_values t1
    CROSS JOIN master..spt_values t2;
Joe Obbish (32976 rep)
Feb 26, 2019, 01:27 AM • Last activity: Jul 1, 2025, 08:56 AM
0 votes
1 answer
194 views
How to fail over an AG set up between two FCIs
Hope I can portray my question well, because I really get confused with AGs :) We have the setup below.

Windows cluster WKCLU01 with 4 nodes (Node1, 2, 3, 4) and a file share witness in DC3 - 5 votes total, so an odd number, as recommended. Of those 4 nodes:

In DC1 --> SQL FCI on shared storage between Node1 and Node2, as SQL1\inst1
In DC2 --> SQL FCI on shared storage between Node3 and Node4, as SQL2\inst2

Now we have to set up an AG between DC1 and DC2. Below is my understanding: an AG can be set up between the two replicas here, SQL1\inst1 and SQL2\inst2, in ASYNC mode only (per the limitation), and SYNC mode can't be used. Assuming this is correct:

1. Is it true that automatic failover will happen only within each of the FCIs, just like plain old Always On FCIs?
2. How can we do a planned failover between DC1 and DC2 for our bi-monthly activity? Do we have T-SQL or PowerShell to help us automate this (see the sketch below)? Some MSDN links confuse me: in some places, doing so will cause data loss, and elsewhere it won't if we change the sync mode? Please suggest.
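For question 2, the usual planned-failover pattern is sketched below with a placeholder AG name: temporarily switch the replicas to synchronous commit, wait until the target is SYNCHRONIZED, fail over without data loss, then switch back to async. Verify this against your topology, since FCI replicas only support manual AG failover:

    -- On the current primary: make both replicas synchronous for the window.
    ALTER AVAILABILITY GROUP [MyAG]
        MODIFY REPLICA ON N'SQL1\inst1'
        WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);
    ALTER AVAILABILITY GROUP [MyAG]
        MODIFY REPLICA ON N'SQL2\inst2'
        WITH (AVAILABILITY_MODE = SYNCHRONOUS_COMMIT);

    -- Wait until sys.dm_hadr_database_replica_states reports
    -- synchronization_state_desc = 'SYNCHRONIZED' for the target replica,
    -- then run this on the target replica (the new primary):
    ALTER AVAILABILITY GROUP [MyAG] FAILOVER;

    -- Afterwards, set both replicas back to ASYNCHRONOUS_COMMIT.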
Newbie-DBA (804 rep)
Jul 16, 2020, 07:42 PM • Last activity: Jul 1, 2025, 08:05 AM
3 votes
1 answer
972 views
How to install Distributed Replay (dreplay.exe) for SQL Server 2017
In SQL Server 2014, dreplay.exe was available under the "Management Tools – Basic" feature. However, in SQL Server 2017 that feature no longer exists (Enterprise edition), and it doesn't come with the SSMS installer either. Here is what's available in the SQL Server 2017 installer: [screenshot of the feature selection screen] Did they move dreplay.exe somewhere else? Any help is much appreciated!

**Update 1.** It seems there is a bit of confusion around Distributed Replay and what exactly I am looking for, so I thought to include a diagram from this article: [diagram of the Distributed Replay components]

**Administration tool** (dreplay.exe) is what I am looking for.
**Controller** (DReplayController.exe) is the Distributed Replay Controller from the feature selection screen. It is useless without the Administration tool.
**Client** (DReplayClient.exe) is the Distributed Replay Client from the feature selection screen. It is useless without the Controller.

**Update 2.**
OS: Windows Server 2016 Standard
SQL Server Management Studio version: 18.4 (as this post suggests, dreplay.exe came with SSMS 17.x but was removed from 18.x)
SQL Server version: 14.0.1000.169 (displayed at the Installation Type step, or found in MediaInfo.xml, Property Id="BaselineVersion", located in the root of the disk with the SQL Server installer)
Binn folder contents:

    Directory of C:\Program Files (x86)\Microsoft SQL Server\140\Tools\Binn
    18/11/2019  15:19    <DIR>          .
    18/11/2019  15:19    <DIR>          ..
    18/11/2019  15:19    <DIR>          DQ
    22/08/2017  18:39           697,528 DReplayCommon.dll
    22/08/2017  18:39            20,592 DReplayServer.tlb
    22/08/2017  18:40            32,952 DReplayServerPS.dll
    04/11/2019  15:25    <DIR>          Resources
    04/11/2019  15:28    <DIR>          schemas
    15/06/2019  11:09           602,848 SqlManager.dll
    22/08/2017  18:47            60,088 SQLPS.exe
    22/08/2017  18:48               379 SQLPS.exe.config
    15/06/2019  11:09            29,472 sqlresld.dll
    15/06/2019  11:09            29,264 SqlResourceLoader.dll
    15/06/2019  11:08            59,672 SQLSCM.DLL
    15/06/2019  11:08           133,200 SQLSVC.DLL
                  10 File(s)      1,665,995 bytes
                   5 Dir(s)  97,550,450,688 bytes free
Vladimirs (131 rep)
Nov 18, 2019, 05:40 PM • Last activity: Jun 30, 2025, 09:04 AM
0 votes
1 answer
181 views
Why can a select statement acquire more than 1 Sch-S lock on one table?
I have a recurring deadlock in my production environment that I cannot reproduce in staging. Script 1 is a scheduled data import procedure:

    -- Create and populate new_TableImported
    if object_id('old_TableImported') is not null
        drop table old_TableImported

    set transaction isolation level serializable;
    set xact_abort on;
    begin try
        begin tran
        if object_id('TableImported') is not null
            exec sp_rename 'TableImported', 'old_TableImported'
        exec sp_rename 'new_TableImported', 'TableImported'
        commit
    end try
    begin catch
        if (xact_state() <> 0) rollback tran;
        throw;
    end catch
    set transaction isolation level read committed;

    if object_id('old_TableImported') is not null
        drop table old_TableImported

Script 2 is a scheduled data export procedure: just a select from a view.

    CREATE VIEW [export].[vSomeDataNeededOutside]
    AS
    select t.*, e.SomeField
    from dbo.OneOfMyTables t
    left join (
        select ... from TableImported group by ...
    ) e on ...
    where ...

According to data collected with SQL Server Profiler:

1. Select (2) starts. A Sch-S lock is obtained on TableImported.
2. Data import (1) runs up to the sp_rename block. A Sch-M lock is requested, and the query waits.
3. (2) for some reason requests an additional Sch-S lock on TableImported.
4. Deadlock. (1) waits for (2) to release the first Sch-S lock; (2) waits for (1) so it can acquire the second Sch-S lock.

What is going on? Why does the select try to obtain a second lock of the same type on the same table?
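To see the behavior directly, you can watch the lock requests accumulate while the export select is running; a sketch against the standard locks DMV:

    -- Run in another session while the view is being queried to see how many
    -- Sch-S requests a single statement holds on the same object:
    SELECT request_session_id, resource_type, request_mode,
           request_status, COUNT(*) AS lock_count
    FROM sys.dm_tran_locks
    WHERE resource_database_id = DB_ID()
      AND request_mode IN ('Sch-S', 'Sch-M')
    GROUP BY request_session_id, resource_type, request_mode, request_status;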
aleyush (101 rep)
Jun 2, 2019, 06:20 PM • Last activity: Jun 29, 2025, 07:02 PM
0 votes
1 answer
224 views
SQL join unexpected result
I have the following table:

|item   | running | resourceid |
|-------|---------|------------|
|017510 | C1      | 43         |
|338877 | C4      | 44         |
|451233 | C1      | 45         |
|771225 | C4      | 41         |
|011212 | C4      | 47         |
|313366 | C3      | 34         |
|771226 | C4      | 48         |
|990000 | C4      | 46         |

For each "running" I need to get the max resourceid and its item, which gives me the expected result:

|running | resourceid | item   |
|--------|------------|--------|
| C1     | 45         | 451233 |
| C3     | 34         | 313366 |
| C4     | 48         | 771226 |

with this code:

    SELECT b.running, MAX(b.resourceid) as MaxResourceid, MAX(b.item) as item
    FROM runningResources as b
    inner join (
        SELECT running, MAX(resourceid) as MaxValue
        FROM runningResources
        GROUP BY running
    ) a ON a.running = b.running and a.MaxValue = b.resourceid
    group by b.running

Since the query

    SELECT b.running, MAX(b.resourceid) as MaxResourceid, MAX(b.item) as item
    FROM runningResources as b
    group by b.running

gives the result:

| running | resourceid | item       |
|---------|------------|------------|
| C1      | 45         | 451233     |
| C3      | 34         | 313366     |
| C4      | 48         | **990000** |

why is C4 in the final result showing **771226**? I would think that the join would give the item of the outer SELECT (C4, 48, 990000).
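What's happening: the inner join keeps only the rows of b whose resourceid equals the per-running maximum, so for C4 only the row (771226, 48) survives into the outer MAX(item); the grouped query without the join aggregates over all C4 rows, which is why it shows 990000 there. A more direct greatest-per-group pattern, sketched with ROW_NUMBER:

    -- One row per running: the row carrying the highest resourceid.
    SELECT running, resourceid, item
    FROM (
        SELECT running, resourceid, item,
               ROW_NUMBER() OVER (PARTITION BY running
                                  ORDER BY resourceid DESC) AS rn
        FROM runningResources
    ) AS ranked
    WHERE rn = 1;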
Enirdas (11 rep)
May 24, 2022, 05:18 PM • Last activity: Jun 25, 2025, 02:02 PM
0 votes
1 answer
212 views
If I am a sysadmin, does it matter if I have the MASTER KEY password or not?
I've recently faced some issues restoring databases with symmetric keys, master keys, SSISDB, etc., and I got curious. We didn't have the password for anything; all of that was set up in the past and stayed there. BUT I could still restore the DBs normally by regenerating passwords for the keys, moving the DBs, restoring them, opening the keys on the source server, and finally running ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY. So, if you are a sysadmin (keeping the passwords is obviously good practice), losing the keys is no trouble, since we can regenerate - right?
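The reason this works is that, by default, the database master key is also protected by the instance's service master key, so a sysadmin on that instance never needs the password; after moving a database, that protection can be re-established. A sketch (passwords are placeholders):

    -- If a password for the DMK is known, open it and re-add SMK protection
    -- so the key opens automatically on the new instance:
    OPEN MASTER KEY DECRYPTION BY PASSWORD = 'OldStrongPassword!1';
    ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY;
    CLOSE MASTER KEY;

    -- If the password is truly lost, a sysadmin can force-regenerate,
    -- accepting the loss of anything the old key protected that can no
    -- longer be decrypted:
    ALTER MASTER KEY FORCE REGENERATE
        WITH ENCRYPTION BY PASSWORD = 'NewStrongPassword!1';

So yes, sysadmin rights plus access to the instance generally let you recover, which is exactly why the service master key backup itself deserves protection.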
Racer SQL (7546 rep)
Aug 28, 2024, 12:40 PM • Last activity: Jun 25, 2025, 06:04 AM
3 votes
1 answer
4107 views
SQL Server: Calculating Optimal Number of CPU Cores
We're soon going to rebuild the SQL Server running our production ERP. Our SAN admin issued me the following challenge:

> Assume I could give you as many Intel Xeon Gold 6240 CPU @ 2.6 GHz cores as you need for optimal SQL Server performance, as long as the ROI is reasonable. We don't want to waste money, but are willing to splurge a bit as long as you're getting tangible performance improvements. How many cores do you want?

On our current production box, we think we have MAXDOP and CTP set effectively, and expensive queries are going parallel, but we still hit very high numbers quite regularly. We regularly see SOS_SCHEDULER_YIELD and CXPACKET/CXCONSUMER as top wait stats. I'm pretty confident that we're under CPU pressure, and I'd love the new server to work better. After doing a bunch of reading, I've found quite a few articles (including by Glenn Berry) talking about *which* CPUs to select. What I've not had success finding are articles about how to calculate the optimal *number* of cores to allocate. Assuming cost matters but is secondary to tangible performance, what kind of metrics can I take from my production ERP SQL Server, and how can I compare them to a specific known processor, to determine how many cores to allocate for the best ROI in terms of performance vs. cost?

EDIT: Since someone may ask, we're on SQL Server Enterprise Edition. The production instance is SQL Server 2017, but we'll likely upgrade to 2019 on the new server/instance.
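Two standard CPU-pressure measurements that can feed a sizing exercise like this, offered as a sketch (the thresholds quoted in the comments are common rules of thumb, not guarantees):

    -- 1. Signal waits as a share of all waits: high values (often quoted
    --    as over 20-25%) suggest threads are queuing for CPU.
    SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms)
                AS decimal(5, 2)) AS signal_wait_pct
    FROM sys.dm_os_wait_stats
    WHERE wait_time_ms > 0;

    -- 2. Runnable tasks per scheduler: sustained nonzero values mean
    --    queries are waiting for a core to come free.
    SELECT scheduler_id, current_tasks_count, runnable_tasks_count
    FROM sys.dm_os_schedulers
    WHERE status = 'VISIBLE ONLINE';

Trending these through your busiest periods gives a defensible answer to the SAN admin: add cores until runnable queues and signal waits flatten out, then stop.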
Eluros (75 rep)
Mar 25, 2022, 02:19 PM • Last activity: Jun 11, 2025, 12:06 PM
0 votes
1 answer
234 views
The ALTER TABLE statement conflicted with the FOREIGN KEY constraint
I had a relational database with various tables. What I wanted to do was copy two tables from a different DB located on a different server into my database, using the export feature in Microsoft SQL Server Management Studio 2017. The new tables were imported into my current database. However, the tables exported from the other database into mine don't have primary and foreign keys. So I deleted the old tables in my current database and am now adding the primary and foreign keys to the freshly copied ones. I was able to create the primary keys without any problems; however, when I tried to create the foreign key on the second table to link it to the first table, I got this error:

> The ALTER TABLE statement conflicted with the FOREIGN KEY constraint

I know the reason for the error: I have unmatched rows between my first and second tables. I ran this query:

    select UUT_RESULT from STEP_RESULT
    WHERE UUT_RESULT NOT IN (SELECT ID from UUT_RESULT)

and saw that 883 entries in my second table have no match in the first table. How can I solve this problem? I can't clean the tables manually because they have 15 million entries, so is there a way I can delete these 883 rows from my second table so the two tables match?
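The orphaned child rows can be deleted with the same anti-join used to find them; a sketch (NOT EXISTS is safer than NOT IN in case ID is ever NULL, and the constraint name is hypothetical):

    DELETE sr
    FROM STEP_RESULT AS sr
    WHERE NOT EXISTS (SELECT 1
                      FROM UUT_RESULT AS ur
                      WHERE ur.ID = sr.UUT_RESULT);

    -- After which the foreign key should create cleanly:
    ALTER TABLE STEP_RESULT WITH CHECK
        ADD CONSTRAINT FK_StepResult_UutResult
        FOREIGN KEY (UUT_RESULT) REFERENCES UUT_RESULT (ID);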
Nano (139 rep)
May 9, 2019, 01:23 PM • Last activity: Jun 10, 2025, 07:01 PM
0 votes
1 answer
221 views
Finding and locally deleting deleted records in a million-record table
I'm trying to incrementally load data from a remote server to a local one (using SSIS and a linked server). The remote table has 1.7 million records, increasing every hour. So far I have been able to load new records and update existing records using their RECID and LASTMODIFIEDDATEANDTIME fields. But when I try to find records that have been deleted since the last refresh, I face a never-ending operation:

    DELETE FROM localdb.dbo.INVENTTRANS
    WHERE RECID NOT IN (SELECT RECID FROM REMOTESERVER.remotedb.dbo.INVENTTRANS)

I tried running SELECT RECID FROM REMOTESERVER.remotedb.dbo.INVENTTRANS on its own and it loads the data in less than 10 seconds, hence there is no network/performance issue. But when I run the above DELETE query, it doesn't finish even after 15 minutes. I tried copying the RECIDs to a local table to prevent possible round trips between the local and remote server - no luck. Can someone guide me to improve the performance of such a query?
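Remote anti-joins like this often make SQL Server probe the linked server once per local row, which is why the standalone SELECT is fast but the DELETE never finishes. Staging the remote keys locally usually helps, and the index on the staged keys is the piece that's easy to miss; a sketch using the same object names:

    -- Pull the remote keys across in one pass:
    SELECT RECID
    INTO #remote_recids
    FROM REMOTESERVER.remotedb.dbo.INVENTTRANS;

    CREATE UNIQUE CLUSTERED INDEX ix_recid ON #remote_recids (RECID);

    -- Now the delete is a purely local, indexed anti-join:
    DELETE t
    FROM localdb.dbo.INVENTTRANS AS t
    WHERE NOT EXISTS (SELECT 1
                      FROM #remote_recids AS r
                      WHERE r.RECID = t.RECID);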
Mohammad Javahery (1 rep)
Jun 11, 2022, 11:28 AM • Last activity: Jun 8, 2025, 10:04 PM
0 votes
1 answer
991 views
Is it possible to add a column to a partition in a partitioned table?
I have a large partitioned table of over 16 billion records. I am currently updating one of the columns to NULL, using the partition number to break up the work. The process is cumbersome in that it takes a lot of time: one 70 GB partition took 11 hours to complete the update. So my question is, is it possible to create a column within a single partition so that I can do the update faster? Ideally I'd like to create the new column with NULL values, then delete the old column and rename the new one to the same name, rather than update billions of rows, i.e.:

    create NewColumn with null values
    rename CurrentColumn to CurrentColumn_delete
    rename NewColumn to CurrentColumn
    delete CurrentColumn_delete

but only on the partition, not the whole table.
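Columns belong to the table, not to individual partitions, so a per-partition column isn't possible; but the add/rename/drop sequence is cheap for the whole table anyway, because adding a nullable column (without a default) is a metadata-only change. A sketch with placeholder names:

    -- Metadata-only: no rows are rewritten.
    ALTER TABLE dbo.BigPartitionedTable ADD SomeColumn_new decimal(16, 2) NULL;

    EXEC sp_rename 'dbo.BigPartitionedTable.SomeColumn',     'SomeColumn_delete', 'COLUMN';
    EXEC sp_rename 'dbo.BigPartitionedTable.SomeColumn_new', 'SomeColumn',        'COLUMN';

    -- Also metadata-only, though the space is only reclaimed by a later
    -- index rebuild; blocked if indexes/constraints reference the column.
    ALTER TABLE dbo.BigPartitionedTable DROP COLUMN SomeColumn_delete;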
Crispin (81 rep)
Feb 2, 2022, 07:52 AM • Last activity: Jun 1, 2025, 08:05 PM
0 votes
2 answers
284 views
How to benchmark SQL Server performance after hardware replacement?
We are going to upgrade the server with new hardware, and I'm curious how much SQL Server performance will improve. I'm not a DBA and know only a little about performance monitoring. My only idea is to create some heavy queries, run them before the hardware replacement, run them again after the hardware replacement, and compare execution times. Yeah, I know that idea doesn't even sound good, but it's all I have. Is there a way to benchmark performance in a more common and efficient way?
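Even the simple before/after idea becomes much more useful if the timings are captured in a repeatable way rather than by wall clock; a minimal sketch (the query is a placeholder for something representative of your real workload, run several times on each server so warm-cache runs are compared to warm-cache runs):

    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;

    SELECT COUNT_BIG(*) AS row_count
    FROM dbo.SomeLargeTable;   -- placeholder workload query

    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;

Compare the CPU time, elapsed time, and logical reads reported in the Messages tab; identical logical reads with lower elapsed time is the cleanest evidence that the hardware, not the plan, changed.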
Kliver Max (155 rep)
Jun 19, 2023, 06:08 AM • Last activity: Jun 1, 2025, 08:04 AM
0 votes
1 answer
105 views
Changing column type from Decimal (16,2) to Decimal (32,2) on a 500-million-row table
Hi folks. I have a database that is part of an AG, and I need to change a column type from Decimal (16,2) to Decimal (32,2). Does anyone know the best way to do this without bloating the log? My thinking is:

1. Take the database out of the AG
2. Create a new table with the requested data type (32,2)
3. Insert the data into the new table

Any suggestions?
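Widening a decimal beyond its current storage size rewrites every row, so the log records get generated no matter how it's done; what keeps the log from bloating is bounding each transaction. A sketch of step 3 as a batched copy, assuming a numeric primary key named ID (all names are placeholders):

    -- Copy in bounded chunks; with FULL recovery (required in an AG),
    -- frequent log backups between batches let the log space recycle.
    DECLARE @batch bigint = 500000, @rows bigint = 1;

    WHILE @rows > 0
    BEGIN
        INSERT INTO dbo.NewTable (ID, Amount)
        SELECT TOP (@batch) s.ID, CAST(s.Amount AS decimal(32, 2))
        FROM dbo.OldTable AS s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.NewTable AS n WHERE n.ID = s.ID)
        ORDER BY s.ID;

        SET @rows = @@ROWCOUNT;
    END;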
Daniel (137 rep)
May 20, 2025, 10:34 AM • Last activity: May 21, 2025, 04:03 PM
Showing 20 questions (page 1)