Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0 votes, 2 answers, 157 views
Should I stop replication logreader and distribution jobs during database maintenance (reindexing, updating stats, and checkdb)?
I have transactional replication set up in SQL Server 2022. I have a distribution server running push subscriptions to subscribers. I'm trying to figure out if I need to stop the replication jobs (log reader and distribution jobs) when running maintenance jobs such as backups, re-indexing, stats updates, and DBCC checks. I'm trying to avoid causing any replication outage due to maintenance jobs.
The replication jobs are set up to run continuously when SQL Server starts. I did this as a best practice, because we were seeing latency when these jobs were scheduled instead of running continuously. I reset the jobs to run continuously, among a few other changes, and we no longer see latency.
If I do need to stop the jobs while maintenance runs, can someone recommend the best way to do this? As I said, I was trying to avoid scheduling these jobs. If needed, I was thinking I could add a step to the first scheduled maintenance job that stops the replication jobs, and restart them after the last step of the last maintenance job (see the sketch below). Any thoughts?
I haven't tried running the maintenance jobs yet, as this is a new production environment and I didn't want to have to apply another snapshot in case the maintenance jobs break replication.
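If it does turn out that the jobs need stopping, a minimal sketch of the step described above, using the standard msdb job procedures. The job names here are hypothetical placeholders; the real log reader and distribution job names are listed in msdb.dbo.sysjobs:

    -- First step of the first scheduled maintenance job (job names are placeholders)
    EXEC msdb.dbo.sp_stop_job  @job_name = N'LOGREAD-MyPublisher-MyDB';
    EXEC msdb.dbo.sp_stop_job  @job_name = N'DISTRIB-MyPublisher-MyDB-MySubscriber';

    -- ... maintenance steps (CHECKDB, index maintenance, stats updates) run in between ...

    -- Last step of the last maintenance job
    EXEC msdb.dbo.sp_start_job @job_name = N'LOGREAD-MyPublisher-MyDB';
    EXEC msdb.dbo.sp_start_job @job_name = N'DISTRIB-MyPublisher-MyDB-MySubscriber';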
dba_gal
(1 rep)
Mar 23, 2024, 03:59 PM
• Last activity: Jul 10, 2025, 04:02 PM
1 vote, 1 answer, 177 views
dbcc checkdb errors, or msg 601 when restoring SQL 2012 backup to SQL 2016
I need to move two databases, WES and WILL, from a Win2012/SQL2012 instance to a Win2016/SQL2016 instance for a software upgrade. I'm using the full backup files from the SQL2012 maintenance plans. I've verified that I can restore them to the original SQL2012 instance with no errors.
Restoring WES to SQL2016 reports no errors, but DBCC CHECKDB shows errors:
>Msg 8939, Level 16, State 98, Line 13
Table error: Object ID 60, index ID 1, partition ID 281474980642816, alloc unit ID 281474980642816 (type In-row data), page (1:453). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 133129 and -4.
...etc
Restoring WILL to SQL2016 stops partway through the upgrade steps with:
>Database 'WILL' running the upgrade step from version 805 to version 806.
Msg 601, Level 12, State 3, Line 2
Could not continue scan with NOLOCK due to data movement.
I thought the upgrade between versions was automatic. Is there anything else I need to do?
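One thing that may help narrow this down (a sketch, with hypothetical backup paths): take fresh backups on the SQL2012 side using WITH CHECKSUM and verify them before restoring, which helps rule out a backup file damaged in the maintenance-plan backup or during the copy, rather than something the upgrade step introduced:

    BACKUP DATABASE WES TO DISK = N'X:\Backup\WES_fresh.bak' WITH CHECKSUM, INIT;   -- path is hypothetical
    RESTORE VERIFYONLY FROM DISK = N'X:\Backup\WES_fresh.bak' WITH CHECKSUM;
    -- on the SQL2016 side, validate page checksums while restoring
    RESTORE DATABASE WES FROM DISK = N'X:\Backup\WES_fresh.bak' WITH CHECKSUM;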
JeffH
(11 rep)
Feb 25, 2022, 02:24 PM
• Last activity: Jul 1, 2025, 09:07 PM
0 votes, 2 answers, 203 views
What to attempt when REPAIR_ALLOW_DATA_LOSS fails at repairing a corrupted db?
I have a corrupted DB. I do not know the cause; it is the DB of an application whose distributor has never seen this kind of corruption.
So I tried DBCC with REPAIR_ALLOW_DATA_LOSS as suggested in:
https://stackoverflow.com/questions/22830952/an-inconsistency-was-detected-during-an-internal-operation-in-database/22831143#22831143
but it does not work.
What else can I try? I saw that tools which promise to fix a DB exist (the first one found in a Google search is https://www.stellarinfo.com/gdc/sql/sql-server-database-repair-tool.php), but I wonder: should I delegate the destiny of my DB to a black box? OK, I can take a backup first, of course, but before purchasing a tool in despair I'd like to ask for more advice. Or do these tools simply run DBCC and are therefore useless?
This is the type of error that

    DBCC CHECKDB('My_DB') WITH NO_INFOMSGS, ALL_ERRORMSGS

shows me (I have many of these):
> Msg 8939, Level 16, State 98, Line 7
> Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 3759437827725787136 (type Unknown), page (13558:888026339).
> Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 133129 and -14.
Trying to repair with data loss using:

    ALTER DATABASE TW SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    BEGIN TRANSACTION;
    DBCC CHECKDB ('My_DB', REPAIR_ALLOW_DATA_LOSS);
    ALTER DATABASE TW SET MULTI_USER;
    COMMIT;

I get:
> Msg 8939, Level 16, State 98, Line 3 Table error: Object ID 0, index
> ID -1, partition ID 0, alloc unit ID -9222773885349068800 (type
> Unknown), page (15657:-1933734559). Test (IS_OFF (BUF_IOERR,
> pBUF->bstat)) failed. Values are 2057 and -4.
> Repairing this error requires other errors to be corrected first.
What can I do?
I also realized, when querying specific table records, that some BLOBs are corrupted. Does deleting those records solve this kind of problem?
Thanks!
user193655
(145 rep)
Feb 19, 2025, 06:01 PM
• Last activity: Jun 27, 2025, 10:46 AM
1 vote, 2 answers, 241 views
DBCC Shrinkfile after full backup restoration but before differential restoration
My database full backup file size is 760 GB and the differential would be around 20 GB, but the database size is around 2 TB. As it is a production database and I want to keep downtime to a minimum, I want to run DBCC SHRINKFILE with TRUNCATEONLY before restoring the differential backup.
Please advise whether I can run DBCC SHRINKFILE with TRUNCATEONLY after the full backup has been restored but before the differential backup is restored.
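For reference, a sketch of the sequence being described (database, file, and path names are hypothetical). The full restore has to be left in NORECOVERY so the differential can be applied, and DBCC commands cannot run against a database that is still in the RESTORING state, so any shrink could only happen after the differential has been restored and the database recovered:

    RESTORE DATABASE MyDB FROM DISK = N'X:\Backup\MyDB_full.bak' WITH NORECOVERY;   -- database stays in RESTORING state
    RESTORE DATABASE MyDB FROM DISK = N'X:\Backup\MyDB_diff.bak' WITH RECOVERY;     -- database comes online here
    DBCC SHRINKFILE (N'MyDB_data', TRUNCATEONLY);                                   -- only possible once the database is online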
kalpna
(11 rep)
Jul 24, 2018, 05:22 AM
• Last activity: May 25, 2025, 06:04 PM
0 votes, 0 answers, 58 views
dbcc checkdb - how to work out the resources needed?
Integrity checks are important to run on a regular basis to find errors, even on servers involved in availability groups, and we can normally find out how long they took to run.
It is a known fact that DBCC CHECKDB uses a great deal of resources.
So I ran it against one of my databases (170 GB in size) with ESTIMATEONLY:

    dbcc checkdb(DEV_My_Database) with estimateonly
It came out:
> DBCC results for 'DEV_My_Database'. Estimated TEMPDB space (in KB)
> needed for CHECKDB on database DEV_My_Database = 44320313. DBCC
> execution completed. If DBCC printed error messages, contact your
> system administrator.
However, I never saw any extra usage of tempdb; everything seemed to fit in there for this purpose.
How can I work out the right usage of resources, including tempdb, for my DBCC run?
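One rough way to compare the estimate against reality (a sketch, run from a second session while the real CHECKDB executes, and assuming nothing else heavy is hitting tempdb at the time):

    -- Pages currently reserved in tempdb, converted to MB (8 KB per page)
    SELECT SUM(user_object_reserved_page_count + internal_object_reserved_page_count) * 8 / 1024.0 AS tempdb_reserved_mb
    FROM tempdb.sys.dm_db_file_space_usage;

Polling this during the run shows how much of the 44,320,313 KB estimate is actually consumed.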
Marcello Miorelli
(17274 rep)
Feb 13, 2025, 02:26 PM
• Last activity: Feb 13, 2025, 05:44 PM
2 votes, 1 answer, 1944 views
DBCC ShrinkFile EmptyFile fails due to system table SYSFILES1
In following a recommendation from our new storage vendor, I'm trying to split up some large & busy databases into multiple data files, each to live on a separate VDisk that has a separate controller. Yes, this is a virtual platform.
So let's say we have MyCoolDB and its primary MDF is at D:\Data\MyCoolDB.mdf, with logical name MyCoolDB_data. I add 3 new files: E:\Data2\MyCoolDB_2.ndf, F:\Data3\MyCoolDB_3.ndf, and G:\Data4\MyCoolDB_4.ndf. That works.
Now I want to re-distribute the data into these new files. So I use the DBCC SHRINKFILE command with the EMPTYFILE option. Namely: DBCC SHRINKFILE ('MyCoolDB_data', EMPTYFILE);
It runs for about 10-30 seconds, then FAILS, throwing the error as follows: SHRINKFILE: System table SYSFILES1 Page 1:32074080 could not be moved to other files because it only can reside in the primary file of the database.
This operation **worked fine** in the DEV/TEST environment, and I got no such errors. What could be going wrong or different in PROD? What are my next steps or other options? Because now we have 3 new NDF files sitting there with hardly anything in them (while, yes, "new data" is being distributed into them, the primary file is still huge in comparison), and the storage folks are going to be banging down my door about all that "wasted space".
Things I have tried:
1. Doing a CHECKPOINT beforehand.
Things I cannot try without downtime/slow-time (I think), and therefore have not yet:
2. Simple recovery -> do shrink -> Full recovery -> take a full backup to allow t-log backups to continue.
3. Creating a new FILEGROUP, adding files to the new FILEGROUP, and using index rebuilds to move data into that new FILEGROUP's files (see the sketch below). PS: Now that the new NDF files are there and have 'something' in them, I'm betting it's going to be nearly impossible to UNDO that action (i.e. to EMPTY them and remove them), so I'm not looking forward to that being a thing, if it is.
4. Creating a new DB with the desired file structure and actually migrating all the data over to it. This seems like the last-resort answer. As I said, the SHRINKFILE EMPTYFILE worked fine in another environment for the same database.
Config: SQL 2017 Standard, 16 cores (2x8), 124 GB RAM. Platform: Nutanix with VMWare.
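A sketch of option 3, with hypothetical filegroup, table, and index names: a clustered index rebuild with DROP_EXISTING can be pointed at the new filegroup, which moves that table's data there (on Standard Edition this is an offline rebuild):

    ALTER DATABASE MyCoolDB ADD FILEGROUP DATA2;
    ALTER DATABASE MyCoolDB ADD FILE (NAME = N'MyCoolDB_fg2_1', FILENAME = N'E:\Data2\MyCoolDB_fg2_1.ndf') TO FILEGROUP DATA2;

    CREATE UNIQUE CLUSTERED INDEX PK_BigTable ON dbo.BigTable (Id)
        WITH (DROP_EXISTING = ON)
        ON DATA2;   -- rebuilds the clustered index onto the new filegroup, moving the table's data with it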
NateJ
(824 rep)
Mar 12, 2020, 02:48 PM
• Last activity: Jan 20, 2025, 08:02 AM
5 votes, 3 answers, 4904 views
Should I be regularly shrinking my DB or at least my log file?
My question is: should I be running one or both of the shrink commands regularly, DBCC SHRINKDATABASE or DBCC SHRINKFILE?
Background:
SQL Server: the database is 200 GB, the logs are 150 GB.
Running this command:

    SELECT name, size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS AvailableSpaceInMB
    FROM sys.database_files;

produces this output:

    MyDB: 159.812500 MB free
    MyDB_Log: 149476.390625 MB free
So it seems there is some free space.
Our backup schedule is as follows:
1. Transaction Logs Once an hour
2. Full Backups twice a week
3. Differential Backups 5 times a week
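If the goal is a one-off correction after an unusual growth event rather than a scheduled shrink, a minimal sketch (the database name and target size are assumptions to adjust):

    -- Check that nothing is preventing log reuse before shrinking
    SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'MyDB';

    -- Shrink only the log file, to a deliberate target (here ~20 GB), not to zero
    DBCC SHRINKFILE (N'MyDB_Log', 20480);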
Tom DeMille
Apr 23, 2010, 04:19 PM
• Last activity: Jan 17, 2025, 01:27 PM
12 votes, 4 answers, 24580 views
Database consistency checker in postgresql
Are there any DBCC (database consistency checker) commands in PostgreSQL? I could find the SQL Server DBCC commands, but nothing equivalent for Postgres. I read that PostgreSQL has built-in performance-tuning functionality and that no DBCC commands are available for Postgres. Is that true?
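For context, PostgreSQL has no DBCC command family; in modern versions the closest built-in corruption check is the amcheck contrib extension, while VACUUM/ANALYZE/REINDEX cover the maintenance side rather than consistency checking. A sketch with a hypothetical index name:

    -- Requires PostgreSQL 10+ with the contrib packages installed
    CREATE EXTENSION IF NOT EXISTS amcheck;
    SELECT bt_index_check('my_index_name'::regclass);   -- verifies B-tree index structure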
user32207
(141 rep)
Dec 29, 2013, 01:03 PM
• Last activity: Jan 9, 2025, 08:05 PM
5 votes, 1 answer, 1473 views
What are the downsides of running DBCC FREESYSTEMCACHE ('TokenAndPermUserStore') hourly?
I have been seeing some concerning behavior in the SQL Server memory caches on one of our production servers. We are on SQL Server 2014 SP2 Enterprise Edition, 16 cores, 400GB RAM (300 GB dedicated to SQL Server).
Using this query (credits to [Glenn Berry](https://www.sqlskills.com/blogs/glenn/category/dmv-queries/)):

    SELECT TOP(10) mc.[type] AS [Memory Clerk Type],
           CAST((SUM(mc.pages_kb)/1024.0) AS DECIMAL (15,2)) AS [Memory Usage (MB)]
    FROM sys.dm_os_memory_clerks AS mc WITH (NOLOCK)
    GROUP BY mc.[type]
    ORDER BY SUM(mc.pages_kb) DESC OPTION (RECOMPILE);
I have discovered an issue. The USERSTORE_TOKENPERM values grow to a point where they seem to overwhelm the cache; as a result, almost all plans are compiled and none are cached. That is, CACHESTORE_SQLCP and CACHESTORE_OBJCP will sit at around 200 MB each, while the USERSTORE_TOKENPERM cache is upwards of 11 GB, with over 168,000 tokens stored, for about 100 concurrent/600 total users with 3,000-5,000 connections. These numbers seem way off.
Unfortunately, the vendor that provides this application doesn't have crazy things like "System Requirements" documentation, and they won't commit to supporting the application on untested Service Packs/CU's. So my hands are tied to SQL Server 2014 SP2, as [other threads](https://dba.stackexchange.com/questions/222734/tokenandpermuserstore-clear-decreases-cpu-usage-for-a-short-period-of-time) I've read suggest upgrading to SQL Server 2014 SP2 CU7 or higher. That is the long-term goal. So I needed another solution for the short term.
After running the command suggested in [this thread from 2009](https://troubleshootingsql.com/2009/12/30/performance-issues-due-to-userstore_tokenperm/), specifically:

    DBCC FREESYSTEMCACHE ('TokenAndPermUserStore')

I see our CPU usage dropping from 30%-80% utilization to under 30%. User complaints of "slowness" in the application have trailed off. Plan caches are now growing to 10 GB or more.
As the day goes on, the token cache grows again. Near the end of the day we start to see the problematic caching behavior again. I have scheduled the DBCC command to run nightly, outside of business hours.
I understand that I can apply [*trace flags 4610 and 4618*](https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-traceon-trace-flags-transact-sql?view=sql-server-ver15) here to limit the number of stored tokens.
> Increases the size of the hash table that stores the cache entries by
> a factor of 8. When used together with trace flag 4618 increases the
> number of entries in the TokenAndPermUserStore cache store to 8,192.
By the end of the day, we end up with about 40,000 tokens cached.
Does anyone have any experience (or knowledge) with this issue that can say whether using trace flags, or clearing the cache, is a more effective solution?
What are the downsides of clearing the TokenAndPermUserStore cache more frequently than once per day? Say, hourly? Is there a "too often"?
And what would be the downsides of limiting the token cache through trace flags? Will I have end users hit bottlenecks with token limits, or otherwise (thus causing another issue)?
My major concern here is that, this is medical data, and I do not want to lose patient data as a result of a problem associated with a cache flush.
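One middle ground between a nightly clear and an hourly unconditional clear is a guarded hourly Agent job step that only flushes when the store has actually grown large. A sketch (the 2 GB threshold is an assumption to tune):

    DECLARE @token_mb decimal(15,2);

    SELECT @token_mb = SUM(pages_kb) / 1024.0
    FROM sys.dm_os_memory_clerks
    WHERE [type] = N'USERSTORE_TOKENPERM';

    IF @token_mb > 2048   -- only flush once the store passes ~2 GB (tune to taste)
        DBCC FREESYSTEMCACHE ('TokenAndPermUserStore');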
Update for 2024:
Microsoft apparently has put out an article detailing the information from this post and related posts. It can be reviewed here:
https://learn.microsoft.com/en-us/troubleshoot/sql/database-engine/performance/token-and-perm-user-store-perf-issue
Jacob H
(709 rep)
Dec 20, 2019, 07:35 PM
• Last activity: Oct 22, 2024, 12:21 PM
1 vote, 1 answer, 42 views
SQL Server 2008R2 - data file grows back after shrinking and available to shrink again
After shrinking the data file there is around 180 GB of free space on the hard drive (prior to the operation, the shrink-file dialog also shows that it is possible to free up around 180 GB of space). After 1-2 days the data file grows back to its original size (before the shrink operation), and it is possible to shrink it again (the shrink-file dialog again shows that around 180 GB of space can be freed).
I am strictly required to maintain some free space on the hard drive, so avoiding the shrink operation is not an option.
I'm looking for advice on what could cause this issue and a possible solution.
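One way to see what is re-growing the file (a sketch; it assumes the default trace is still enabled, which it is unless someone turned it off) is to read the auto-grow events it records, along with the application that triggered them:

    SELECT te.name AS event_name, t.DatabaseName, t.[FileName], t.StartTime,
           t.IntegerData * 8 / 1024.0 AS growth_mb, t.ApplicationName
    FROM sys.traces AS st
    CROSS APPLY sys.fn_trace_gettable(st.[path], DEFAULT) AS t
    JOIN sys.trace_events AS te ON te.trace_event_id = t.EventClass
    WHERE st.is_default = 1
      AND te.name IN ('Data File Auto Grow', 'Log File Auto Grow')
    ORDER BY t.StartTime DESC;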
Andrew B
(113 rep)
Aug 1, 2024, 02:20 PM
• Last activity: Aug 1, 2024, 04:16 PM
0 votes, 1 answer, 53 views
Is DBCC CHECKFILEGROUP worthless if you already run DBCC CHECKDB on all databases?
DBCC CHECKDB makes many other integrity checks redundant, but I can't see anything in the documentation saying whether it makes DBCC CHECKFILEGROUP redundant. Is DBCC CHECKFILEGROUP worthless if you already run DBCC CHECKDB on all databases?
J. Mini
(1225 rep)
Jul 19, 2024, 11:34 AM
• Last activity: Jul 19, 2024, 02:30 PM
1 vote, 1 answer, 1151 views
How to fix table with corrupt rows
I was inserting records into a table when I suddenly got disconnected from the DB server.
When I tried doing a row count on the table, I got a fatal error 824.
I tried DBCC CHECKTABLE and it indicated that the table had consistency errors, and recommended repair_allow_data_loss.
The table has 49021 rows; I can still select 49013 of them, but accessing the rows beyond that yields the fatal error 824.
The records I was trying to insert into the table are not very important, and are just test data. I was wondering if it would be safe to perform repair_allow_data_loss in this case?
Or is there any workaround for removing the corrupted rows from the table?
We have a backup table, and I was wondering if doing a truncate on the table and inserting the data from the backup table would work?
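Whatever repair route is taken, it may be worth first copying out the rows that still read cleanly, since repair_allow_data_loss can deallocate whole pages. A sketch with hypothetical table names (49013 is the number of rows the question says are still readable):

    SELECT TOP (49013) *
    INTO dbo.MyTable_salvage      -- hypothetical names
    FROM dbo.MyTable;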
jambajuice
Jun 22, 2023, 11:04 AM
• Last activity: Apr 12, 2024, 07:17 AM
5 votes, 1 answer, 1482 views
how to kill a rogue spid that is already killed?
I have an obstinate SPID that I cannot kill, and it is preventing the transaction log of my tempdb from being truncated.
This is how I found this rogue SPID:
    if object_id('tempdb..#OpenTranStatus','U') is not null
        drop table #OpenTranStatus

    CREATE TABLE #OpenTranStatus (
        ActiveTransaction varchar(25),
        Details sql_variant
    );

    -- Execute the command, putting the results in the table.
    INSERT INTO #OpenTranStatus
    EXEC ('DBCC OPENTRAN (sqlwatch) with tableresults')

    SELECT * FROM #OpenTranStatus
This is the query it is running (or holding on to):

    select d.database_id , sd.sqlwatch_database_id, sd.sql_instance
    into #d
    from dbo.vw_sqlwatch_sys_databases d
    inner join [dbo].[sqlwatch_meta_database] sd
        on sd.[database_name] = d.[name] collate database_default
        and sd.[database_create_date] = case when d.name = 'tempdb' then '1970-01-01 00:00:00.000' else d.[create_date] end
        and sd.sql_instance = @sql_instance
    left join [dbo].[sqlwatch_config_exclude_database] ed
        on d.[name] like ed.database_name_pattern collate database_default
        and ed.snapshot_type_id = @snapshot_type_id
    where ed.snapshot_type_id is null
    option (keep plan)
It has been running for over 60 hours, and it has already been killed.
So, the things I have tried doing:

    alter database sqlwatch set single_user with rollback immediate

but it did not work.
    kill 54 with statusonly

> SPID 54: transaction rollback in progress. Estimated rollback
> completion: 0%. Estimated time remaining: 0 seconds.

Question: how can I stop SPID 54?



Marcello Miorelli
(17274 rep)
Dec 20, 2023, 12:28 PM
• Last activity: Dec 20, 2023, 01:05 PM
0 votes, 1 answer, 101 views
What is the difference between dbcc checks and suspect pages table in terms of data page corruption?
DBCC CHECKDB returns errors if it detects corruption on data pages.
Similarly, the suspect_pages table holds corruption info about data pages.
What is the difference between the DBCC checks and the suspect_pages table in terms of data-page corruption?
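For reference, suspect_pages is just a small table in msdb that the engine writes to when it encounters a damaged page (for example 823/824 errors or checksum failures), so it can be queried directly:

    SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
    FROM msdb.dbo.suspect_pages;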
variable
(3590 rep)
May 9, 2022, 03:22 AM
• Last activity: Apr 22, 2023, 10:45 AM
2 votes, 1 answer, 4947 views
The operating system returned error 665(The requested operation could not be completed due to a file system limitation) on SQL Server 2014
Have a SQL Server 2014 instance running on Windows 2012 R2 that was running Ola Hallengren's DB integrity script. This is the error that resulted:
> The operating system returned error 665(The requested operation could
> not be completed due to a file system limitation) to SQL Server during
> a write at offset 0x0001392f6ee000 in file
> 'H:\MSSQL12\Data\EDW_Data_6.ndf_MSSQL_DBCC15'. Additional messages in
> the SQL Server error log and system event log may provide more detail.
> This is a severe system-level error condition that threatens database
> integrity and must be corrected immediately. Complete a full database
> consistency check (DBCC CHECKDB). This error can be caused by many
> factors; for more information, see SQL Server Books Online.
I checked both the event log and the SQL Server error log but I'm not getting a lot of additional info that could help. I haven't rerun the DBCC check for this db yet. This database is pretty active, as it is a data warehouse database, and it's currently 10.5 TB. I've heard that if I rerun the DBCC it may clear out the MSSQL_DBCC15 file; however, I'm hesitant to do that on this server due to the level of activity. My preference would be to copy the backup to another server and run the DBCC there. If I do that, is there any way for me to clean up the DBCC15 files left on disk on the primary server?
Are there any other places I could find more info besides the event log and SQL Server error log?
Appreciate any thoughts on what could cause this error or the best option for running DBCC checkdb based on the situation.
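If a re-run on the busy server is unavoidable, one option (a sketch; the database name is assumed from the file name in the error) is to skip the internal snapshot entirely, since error 665 is hit by the sparse files that snapshot uses. The trade-off is that TABLOCK takes shared table locks and will block modifications while the check runs:

    DBCC CHECKDB (N'EDW') WITH TABLOCK, NO_INFOMSGS, ALL_ERRORMSGS;   -- database name assumed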
Toni
(21 rep)
Apr 6, 2015, 09:32 AM
• Last activity: Mar 27, 2023, 11:03 AM
10 votes, 2 answers, 20751 views
How to free the unused space for a table
This question is asked like tens of times, and to my surprise such a simple requirement comes so difficult. Yet I can't solve this problem.
I use SQL Server 2014 Express edition with 10GB limit of database size (not a filegroup size, database size).
I crawled news and inserted the HTML into a table. The table's schema is:

    Id bigint identity(1, 1) primary key,
    Url varchar(250) not null,
    OriginalHtml nvarchar(max),
    ...
The database ran out of size and I received an "insufficient disk space" error. As I understand the disk-usage picture, the unused space is now around 50 percent of the total size; that is, I now have 5 GB of unused space, but I can't reclaim it.
Rebuilding indexes didn't help. Of course, shrinking the database and the filegroup didn't help, and neither did DBCC SHRINKDATABASE. So I wrote a simple application to read each record and strip out some unwanted parts of the OriginalHtml, like the head section, asides, and the footer, to keep only the main body, and that is how I arrived at the disk-usage-by-top-tables figures below.
The truncateonly option won't help because, as I understand it, no record is deleted; only the size of each record is reduced.
I'm stuck at this point. Please help: what should I do?
The clustered index is on column Id.
This is the result of EXECUTE sys.sp_spaceused @objname = N'dbo.Articles', @updateusage = 'true';
name rows reserved data index_size unused
----------- -------- ------------ ----------- ------------ -----------
Articles 112258 8079784 KB 5199840 KB 13360 KB 2866584 KB
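Since OriginalHtml is nvarchar(max), most of the space freed by trimming the HTML lives in LOB pages, which an ordinary index rebuild often does not reclaim. A sketch of the operation aimed at that, followed by a shrink (the logical data file name is a placeholder):

    ALTER INDEX ALL ON dbo.Articles REORGANIZE WITH (LOB_COMPACTION = ON);
    DBCC SHRINKFILE (N'MyDatabase_data', TRUNCATEONLY);   -- logical file name is a placeholder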
Saeed Neamati
(1515 rep)
Sep 26, 2017, 04:34 AM
• Last activity: Mar 17, 2023, 09:57 AM
5 votes, 1 answer, 500 views
DBCC CLONEDATABASE & User Defined Table Types conflict
I am using DBCC CLONEDATABASE to create database copies without data. There are a number of databases that will not clone. The error message is:
>Cannot insert duplicate key row in object 'sys.sysschobjs' with unique index 'clst'. The duplicate key value is (565577053).
It is always one of three IDs: 245575913, 565577053, 885578193. The three IDs will always be the object_id of a user-defined table type in the database failing to clone. Most of my databases have user-defined table types; however, some will clone and others will not. In the model database, two of these IDs relate to the internal tables sys.plan_persist_context_settings & sys.plan_persist_query_hints. The third isn't matched in model.sys.objects.
I have seen other blogs mention dropping the objects in model. The model database doesn't have any user code and I was unable to drop those internal tables while logged in with DAC. The databases would have initially been created in SQL Server 2016 Std, the instance has been upgraded to 2019 Std CU17. The databases are restored to a sandbox server also 2019 Std CU17. This is where the clone fails. Prior to the upgrade (both production & sandbox were upgraded together), only 2 databases wouldn't clone, I now have 10!
Not sure if it's relevant, but the table type in sys.table_types is dbo.TableType, matching on type_table_object_id; in sys.objects it is sys.TT_TableType_34C8D9D1, with is_ms_shipped set to 1. They are created as dbo.TableType.
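For anyone reproducing the matching described above, a sketch of the lookup (the IDs are taken from the error messages):

    SELECT tt.name AS type_name, tt.type_table_object_id, o.name AS internal_name, o.is_ms_shipped
    FROM sys.table_types AS tt
    JOIN sys.objects AS o ON o.object_id = tt.type_table_object_id
    WHERE tt.type_table_object_id IN (245575913, 565577053, 885578193);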
Any help appreciated
beehive
(179 rep)
Dec 7, 2022, 07:24 PM
• Last activity: Dec 9, 2022, 11:35 AM
0 votes, 1 answer, 95 views
DBCC RESULTS - 1719 Errors - how best to proceed with repair
A production database which is continually updated with external data has reported 1719 errors following an integrity check.
These can be grouped into:
116 x

    Table error: Object ID 0, index ID -1, partition ID 0, alloc unit ID 14918418583781376 (type Unknown), page ID (1:358049) contains an incorrect page ID in its page header. The PageId in the page header = (45:3473461).

Summarised:

    CHECKDB found 0 allocation errors and 118 consistency errors not associated with any single object.

544 x

    Object ID 854294103, index ID 6, partition ID 72057594111721472, alloc unit ID 72057594157203456 (type In-row data): Page (1:358048) could not be processed. See other errors for details.

Summarised:

    CHECKDB found 0 allocation errors and 544 consistency errors in table 'Rpm.TrainRecord' (object ID 854294103).

1021 x

    Table error: Object ID 1327343793, index ID 4, partition ID 72057594114670592, alloc unit ID 72057594160611328 (type In-row data), page ID (1:7031943) contains an incorrect page ID in its page header. The PageId in the page header = (1:4795983).

Summarised:

    CHECKDB found 0 allocation errors and 1021 consistency errors in table 'Audit.AuditLog' (object ID 1327343793).

31 x

    Object ID 1349579846, index ID 7, partition ID 72057594109362176, alloc unit ID 72057594154647552 (type In-row data): Page (3:600863) could not be processed. See other errors for details.

Summarised:

    CHECKDB found 0 allocation errors and 31 consistency errors in table 'Rpm.AlarmEventRecord' (object ID 1349579846).

The end output is:

    repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB ([databasename]).
We have backups; it seems the issue came about following a move of the DB from one server to another (over last weekend). Checking the old server, there are no consistency errors at all.
The database functions fine and can be backed up / restored.
I'm guessing I need to repair with data loss, but can I do anything better than that?
Damo
(193 rep)
Nov 17, 2022, 10:08 AM
• Last activity: Nov 19, 2022, 10:35 PM
3 votes, 2 answers, 3366 views
How do I interpret the log when I run DBCC TRACEON (3502, 3504, 3605, -1)
I have been using DBCC TRACEON (3502, 3504, 3605, -1) because it was recommended in a blog for discovering performance issues related to I/O. I'm running MS SQL Server 2008 R2 SP1.
Results in my SQL Log file look something like this (numbers fudged a little):
> about to log checkpoint end
>
> last target outstanding 2, avgWriteLatency 40ms
>
> Average Throughput: 0.67 MB/sec, I/O Saturation: 79, Context Switches
> 201
>
> FlushCache: cleaned up 125 Bufs with 69 writes, in 1447ms (avoided 0
> new dirty bufs)
>
> Ckpt dbid 9 phase 1 ended (8)
>
> about to log checkpoint begin.
I don't really know how to read this, or break it down in a way that I get get anything truly meaningful out of it.
What does 'last target outstanding' mean?
Does the average write latency mean the overhead time it takes per write, or the time between writes? 40 ms seems high; the physical drive is 1 TB, and it's configured as RAID 5.
What is I/O saturation?
What does it have to do with the context switches? I'm assuming context switches have something to do with multitasking, switching between jobs/writes.
FlushCache: I realize this has to do with clearing out the cache. What are the Bufs? Are these pages of data that needed to be written? What are the dirty Bufs? Why would they be avoided?
A detailed breakdown would be appreciated.
meltdownmonk
(377 rep)
Jun 20, 2013, 12:55 AM
• Last activity: Nov 8, 2022, 09:03 AM
1 vote, 1 answer, 104 views
Can't drop and recreate FK in SQL
I have a DBCC check that will always fail with the following error:
> Msg 8992, Level 16, State 1, Line 2
> Check Catalog
>
> Msg 3853, State 1:
> Attribute (referenced_object_id=517752334,key_index_id=1) of row (object_id=660509732) in sys.foreign_keys does not have a matching row (object_id=517752334,index_id=1) in sys.indexes.
I have found the problem: the referenced_object_id is wrong and needs to be changed to 517785102 for FK_SytModuleStat_SytModule.
I can't change it in sys.foreign_keys, and I can't drop and recreate FK_SytModuleStat_SytModule on the SytModuleStat table to fix the problem. The FK is no longer correct: when I try to modify it, I see that the primary/unique key base and column fields are blank compared to a working one.
I have tried many suggestions with alter, drop create, set emergency etc. But I can't fix this one key!
I have tried to connect with the DAC in single-user mode, and it won't let me enable ad hoc updates even with the reconfigure override option. I get this error:
> Ad hoc updates to system catalogs are not allowed.
DBCC repair with data loss won't work, and there doesn't seem to be a repair catalog feature in SQL Server 2014 Express Edition.
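For what it's worth, the mismatch the catalog check reports can be listed directly from the metadata views, which at least confirms which foreign keys reference a missing index row. A sketch:

    SELECT fk.name AS fk_name, fk.object_id, fk.referenced_object_id, fk.key_index_id
    FROM sys.foreign_keys AS fk
    LEFT JOIN sys.indexes AS i
           ON i.object_id = fk.referenced_object_id
          AND i.index_id  = fk.key_index_id
    WHERE i.object_id IS NULL;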
Dominic Legendre
(13 rep)
Nov 2, 2022, 04:14 PM
• Last activity: Nov 6, 2022, 06:17 AM
Showing page 1 of 20 total questions