
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

3 votes
2 answers
1607 views
DBCC consistency error in SQL Server
I ran DBCC CheckDB on our Production SQL Server 2014. It reported back with one consistency error:

Failed:(-1073548784) Executing the query "DBCC CHECKDB(N'MYDB') WITH NO_INF..." failed with the following error: "Table error: Object ID 629577392, index ID 1, partition ID 72057594387039319, alloc unit ID 72057594045596759 (type LOB data). The off-row data node at page (1:6009), slot 24, text ID 15754068079 is not referenced. CHECKDB found 0 allocation errors and 1 consistency errors in table 'MYTABLE' (object ID 629577281). CHECKDB found 0 allocation errors and 1 consistency errors in database 'MYDB'. repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (MYDB).". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

Since we do not have a good backup for that DB, we had to resort to the scenario below.

What I did to troubleshoot:

1. I restored the recent backup file from Prod to UAT and ran DBCC CheckDB again to replicate the error. The same consistency error came back.

2. Ran DBCC CHECKTABLE (MYTABLE) — the same consistency error shows:
Table error: Object ID 629577392, index ID 1, partition ID 72057594387039319, alloc unit ID 72057594045596759 (type LOB data). The off-row data node at page (1:6009), slot 24, text ID 15754068079 is not referenced.
DBCC results for 'MYTABLE'.
There are 53635 rows in 2705 pages for object "MYTABLE".
CHECKTABLE found 0 allocation errors and 1 consistency errors in table 'MYTABLE' (object ID 629577281).
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKTABLE
3. Then implemented the following steps:

    -- step 1
    ALTER DATABASE [MYDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    -- step 2
    DBCC CHECKDB(N'[MYDB]', REPAIR_ALLOW_DATA_LOSS);
Msg 8964, Level 16, State 1, Line 3
Table error: Object ID 629577392, index ID 1, partition ID 72057594387039319, alloc unit ID 72057594045596759 (type LOB data). The off-row data node at page (1:6009), slot 24, text ID 15754068079 is not referenced.
        The error has been repaired.
There are 53635 rows in 2705 pages for object "MYTABLE".
    -- step 3: re-run to check for any additional errors = no errors reported
    DBCC CHECKDB ('[MYDB]');

    -- step 4: set it back to multi-user mode = users can access the DB
    ALTER DATABASE [MYDB] SET MULTI_USER;

My questions/concerns:

1. How do I know which data has been lost? From my understanding it doesn't seem like any data was lost, because before the repair the table had 53635 rows and after the repair it still has 53635 rows.
2. Do we need to schedule downtime/an outage for that exact site when setting the DB to single-user mode?
3. After doing the repair in Production, what are some best practices to be aware of?
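Since the pre-repair state still exists as the backup restored to UAT, one way to answer "which data has been lost" is to diff the repaired production table against that restored copy. A minimal sketch, assuming a hypothetical linked server UATSRV pointing at the UAT restore, a hypothetical key column Id and LOB column LobColumn, and the dbo schema:

    -- keys present in the pre-repair copy (UAT restore) but missing from the repaired table
    SELECT u.Id
    FROM [UATSRV].[MYDB].dbo.MYTABLE AS u            -- hypothetical linked server to the UAT restore
    WHERE NOT EXISTS (SELECT 1
                      FROM [MYDB].dbo.MYTABLE AS p   -- repaired production table
                      WHERE p.Id = u.Id);

    -- the repaired error was in off-row LOB data, so also compare LOB sizes per key
    SELECT u.Id
    FROM [UATSRV].[MYDB].dbo.MYTABLE AS u
    JOIN [MYDB].dbo.MYTABLE AS p ON p.Id = u.Id
    WHERE DATALENGTH(u.LobColumn) <> DATALENGTH(p.LobColumn);   -- LobColumn is hypothetical

The error message here describes an off-row LOB node that nothing references, which is consistent with the row count staying at 53635 after repair; a comparison like the above is how you would verify that no referenced data changed.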
sqllover2020 (73 rep)
Aug 17, 2021, 03:25 PM • Last activity: Jan 6, 2025, 09:04 PM
4 votes
1 answers
284 views
Do I really need a backup when using REPAIR_ALLOW_DATA_LOSS?
Reluctant DBA here. Do I really need a backup when using REPAIR_ALLOW_DATA_LOSS? I am well aware MS recommends taking a backup. What's the worst that could happen if I run DBCC CHECKTABLE using REPAIR_ALLOW_DATA_LOSS?

A bit of background. A client wants to migrate the DB used by our software from on-prem to Azure. Currently the DB size is roughly 1TB and he wants to delete a lot of data in order to get it below 500GB before migration. He has no DBA. There are no backups whatsoever, and not enough space for backups. I wrote a program that deletes data older than the agreed-upon threshold, but while deleting data from one of the tables for a certain date, I got a Fatal Error 824. DBCC CHECKDB output:

    Msg 8939, Level 16, State 98, Line 1
    Table error: Object ID 914102297, index ID 1, partition ID 72057594051493888, alloc unit ID 72057594057195520 (type In-row data), page (1:3692983). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 133129 and -4.
    Msg 8928, Level 16, State 1, Line 1
    Object ID 914102297, index ID 1, partition ID 72057594051493888, alloc unit ID 72057594057195520 (type In-row data): Page (1:3692983) could not be processed. See other errors for details.
    Msg 8976, Level 16, State 1, Line 1
    Table error: Object ID 914102297, index ID 1, partition ID 72057594051493888, alloc unit ID 72057594057195520 (type In-row data). Page (1:3692983) was not seen in the scan although its parent (1:3693476) and previous (1:3692476) refer to it. Check any previous errors.
    Msg 8978, Level 16, State 1, Line 1
    Table error: Object ID 914102297, index ID 1, partition ID 72057594051493888, alloc unit ID 72057594057195520 (type In-row data). Page (1:3693015) is missing a reference from previous page (1:3692983). Possible chain linkage problem.
    CHECKDB found 0 allocation errors and 4 consistency errors in table 'schema.FancyTableName' (object ID 914102297).
    CHECKDB found 0 allocation errors and 4 consistency errors in database 'FancyDatabaseName'.
    repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (FancyDatabaseName).

The table in question contains non-critical data. The client is OK with losing data for that day (it's older than the threshold). No table depends on it. The table itself has a foreign key dependency on another table. So, what's the worst that could happen when running DBCC CHECKTABLE('schema.FancyTableName', REPAIR_ALLOW_DATA_LOSS) WITH TABLOCK? I need to run it with TABLOCK since there's not enough space for the internal database snapshot. Thank you!
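For reference, the repair options require the database to be in single-user mode, and Microsoft's guidance is to re-validate constraints after any REPAIR_ALLOW_DATA_LOSS run, since repairs do not take constraints into account. A minimal sketch built around the exact command from the question (the surrounding statements are standard T-SQL, not something the asker has already run):

    ALTER DATABASE [FancyDatabaseName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    -- the command from the question; TABLOCK avoids the internal snapshot
    DBCC CHECKTABLE('schema.FancyTableName', REPAIR_ALLOW_DATA_LOSS) WITH TABLOCK;

    -- verify the table is now clean and that constraints on it still hold
    DBCC CHECKTABLE('schema.FancyTableName');
    DBCC CHECKCONSTRAINTS('schema.FancyTableName');

    ALTER DATABASE [FancyDatabaseName] SET MULTI_USER;

The usual caveat stands: REPAIR_ALLOW_DATA_LOSS typically fixes chain-linkage errors such as 8976/8978 by deallocating the affected pages, so every row on those pages is gone, not just the rows for the day being deleted.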
Edgar (143 rep)
Mar 29, 2023, 09:31 AM • Last activity: Mar 29, 2023, 10:58 AM
0 votes
1 answers
651 views
Is there any specific differences on table level integrity check between DBCC CHECKDB and DBCC CHECKTABLE?
I ran into an allocation error in one of our databases while running DBCC CHECKDB on it. It throws the error below:

> Msg 8947, Level 16, State 1, Line 5
> Table error: Multiple IAM pages for object ID 1277199566, index ID 1, partition ID 72057717676669456, alloc unit ID 72057708766647328 (type LOB data) contain allocations for the same interval. IAM pages (1:425664) and (1:1422669).
>
> CHECKDB found 1 allocation errors and 0 consistency errors in table 'schemaName.tableName' (object ID 1277199566).
> CHECKDB found 1 allocation errors and 0 consistency errors in database 'databaseName'.

The weird thing is that when I run DBCC CHECKTABLE on this table, it doesn't show any error. I've also tried running CHECKTABLE WITH EXTENDED_LOGICAL_CHECKS, and I didn't get any error either. Per the Microsoft documentation:

> To perform DBCC CHECKTABLE on every table in the database, use DBCC CHECKDB.
>
> For the specified table, DBCC CHECKTABLE checks for the following:
>
> * Index, in-row, LOB, and row-overflow data pages are correctly linked.
> * Indexes are in their correct sort order.
> * Pointers are consistent.
> * The data on each page is reasonable, including computed columns.
> * Page offsets are reasonable.
> * Every row in the base table has a matching row in each nonclustered index, and vice-versa.
> * Every row in a partitioned table or index is in the correct partition.
> * Link-level consistency between the file system and table when storing varbinary(max) data in the file system using FILESTREAM.

But as far as I can see, the opposite doesn't hold: we can't run CHECKTABLE for just one table and get the same result for that table. Does anyone have information on the differences between these two commands, in terms of checking integrity and consistency of a specific object?
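Msg 8947 is an allocation error, and allocation checks are the part of DBCC CHECKDB that DBCC CHECKTABLE never runs: per the documentation, CHECKDB is essentially DBCC CHECKALLOC on the database, plus DBCC CHECKTABLE on every table, plus DBCC CHECKCATALOG (and a few extra checks). So the granular command that should reproduce this particular error is CHECKALLOC, not CHECKTABLE. A sketch, using the names from the error output:

    -- allocation consistency for the whole database; this is where Msg 8947 should reappear
    DBCC CHECKALLOC ('databaseName') WITH NO_INFOMSGS;

    -- logical/structural checks for the single table (what was already tried and came back clean)
    DBCC CHECKTABLE ('schemaName.tableName') WITH NO_INFOMSGS, EXTENDED_LOGICAL_CHECKS;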
Elvin Ahmadov (240 rep)
Aug 3, 2021, 12:12 PM • Last activity: Aug 3, 2021, 01:09 PM
0 votes
2 answers
2482 views
What causes DBCC CHECKTABLE to return "session is in the kill state"?
A scheduled job step ran `DBCC CHECKTABLE` on a table but returned the following error:

> Cannot continue the execution because the session is in the kill state. [SQLSTATE HY000] (Error 596). The step failed.

What causes this error?
user2368632 (1133 rep)
Dec 18, 2020, 07:34 AM • Last activity: Dec 19, 2020, 10:08 AM
0 votes
1 answers
226 views
Overhead associated with DBCC CHECKTABLE?
I have a logshipped VLDB and don't run **DBCC CHECKDB** on the primary due to the long-running periods of high CPU and tempdb usage. I have read about **DBCC CHECKTABLE** as a more granular data-integrity-checking option. Does **DBCC CHECKTABLE** have overhead similar to **DBCC CHECKDB**? Can it be run during business hours without adversely affecting server CPU and tempdb performance?
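DBCC CHECKTABLE still reads every allocated page of the table and its indexes, by default still creates an internal snapshot, and still uses tempdb for its internal worktables, so the per-run cost scales with the size of the tables you pick rather than with the whole database. A minimal sketch of a lighter pass, with hypothetical table names; PHYSICAL_ONLY skips the expensive logical checks:

    -- cheaper page/checksum-level pass on a large table during business hours
    DBCC CHECKTABLE ('dbo.SomeBigTable') WITH PHYSICAL_ONLY, NO_INFOMSGS;

    -- full logical checks kept for smaller tables or a maintenance window
    DBCC CHECKTABLE ('dbo.SomeSmallTable') WITH NO_INFOMSGS;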
user2368632 (1133 rep)
Dec 13, 2020, 11:47 PM • Last activity: Dec 14, 2020, 10:54 AM
7 votes
1 answers
1046 views
DBCC CHECKTABLE taking 15+ minutes to run on an empty table
I have a database where DBCC CHECKTABLE on a number of small or empty tables takes over 15 minutes to run. When it finishes there are no failures or errors. Performance of everything else on the server is perfectly acceptable, and nothing else was running at the same time. I have also tried DBCC CLEANTABLE and updated stats with fullscan. I'm using SQL Server 2016 Enterprise Edition (13.0.5201.2). Example table:

    CREATE TABLE [Schema1].[Table1](
        [col1] [int] NOT NULL,
        [col2] [nvarchar](100) NOT NULL,
        [col3] [xml] NOT NULL,
        CONSTRAINT [PK_1] PRIMARY KEY CLUSTERED
        (
            [col1] ASC,
            [col2] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
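One thing worth verifying before blaming CHECKTABLE itself is whether the "empty" table really is small on disk: CHECKTABLE has to read every allocated page, and a table that once held a lot of XML/LOB data can still own many pages. A minimal sketch using standard DMVs (object name taken from the post):

    -- pages actually reserved by the table, including LOB allocations for the xml column
    SELECT ps.index_id,
           ps.row_count,
           ps.reserved_page_count,
           ps.lob_reserved_page_count
    FROM sys.dm_db_partition_stats AS ps
    WHERE ps.object_id = OBJECT_ID(N'Schema1.Table1');

If reserved_page_count is large while row_count is zero, the 15-minute runs are at least explained by page count rather than by anything wrong with the command itself.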
Russ960 (361 rep)
Oct 15, 2018, 10:00 PM • Last activity: Jun 13, 2019, 05:42 PM
1 votes
1 answers
986 views
Why does DBCC CHECKTABLE WITH TABLERESULTS only report Error 8990 on corrupt tables?
I have a production VLDB that I'm using Paul Randal's guidance on to spread the DBCC CHECKDB workload over a week. I'm logging the results of DBCC CHECKTABLE WITH TABLERESULTS and DBCC CHECKALLOC WITH TABLERESULTS to a table and noticed DBCC CHECKTABLE WITH TABLERESULTS was only returning a single row per table on non-corrupt tables. Here are the first few columns:

    Error  Level  State  MessageText
    2593   10     1      There are 57 rows in 1 pages for object "SmallTable".

But when I run DBCC CHECKTABLE WITH TABLERESULTS on a corrupt table in a test database, which I corrupted with this method, I get the following:

    Error  Level  State  MessageText
    8939   16     98     Table error: Object ID 565577053, index ID 1, partition ID 72057594041335808, alloc unit ID 72057594046971904 (type In-row data), page (1:312). Test (IS_OFF (BUF_IOERR, pBUF->bstat)) failed. Values are 133129 and -4.
    8928   16     1      Object ID 565577053, index ID 1, partition ID 72057594041335808, alloc unit ID 72057594046971904 (type In-row data): Page (1:312) could not be processed. See other errors for details.
    8980   16     1      Table error: Object ID 565577053, index ID 1, partition ID 72057594041335808, alloc unit ID 72057594046971904 (type In-row data). Index node page (0:0), slot 0 refers to child page (1:312) and previous child (0:0), but they were not encountered.
    2593   10     1      There are 0 rows in 0 pages for object "tblWhoops".
    8990   10     1      CHECKTABLE found 0 allocation errors and 3 consistency errors in table 'tblWhoops' (object ID 565577053).

I'm curious as to why **Error 8990** isn't returned with the non-corrupt table. DBCC CHECKALLOC WITH TABLERESULTS and DBCC CHECKDB WITH TABLERESULTS return this row when there aren't any allocation or consistency errors to report. Bonus question: why doesn't DBCC CHECKCATALOG have a WITH TABLERESULTS argument?
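Given that the 8990 summary row only appears when CHECKTABLE actually found something, one workable way to read the log is to treat "no message at severity 16 or higher" as a clean result rather than looking for 8990. A minimal sketch, assuming a hypothetical logging table dbo.CheckTableResults that stores the TABLERESULTS columns shown above (Error, Level, State, MessageText) plus a table name and run date added at logging time:

    -- tables whose most recent check logged anything at error severity
    SELECT r.TableName,                          -- hypothetical column added when logging
           MIN(r.Error) AS FirstError,
           MAX(r.Level) AS MaxLevel
    FROM dbo.CheckTableResults AS r              -- hypothetical log table
    WHERE r.RunDate = CAST(GETDATE() AS date)    -- hypothetical run-date column
      AND r.Level >= 16                          -- informational rows such as 2593 are Level 10
    GROUP BY r.TableName;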
S3S (3578 rep)
Oct 11, 2018, 12:46 AM • Last activity: Oct 11, 2018, 01:20 AM
2 votes
0 answers
1576 views
DBCC TABLE CHECK command won't stop
We have a SQL Server 2014 instance with 140 user databases, each of which has approx. 180 tables. A maintenance plan was scheduled for once a week, but it was discovered that the Check Database Integrity (DBCC CheckDB) job was taking too long. The schedule was disabled, the job stopped and the process killed. However, we are now seeing a DBCC TABLE CHECK command running on multiple threads. The MAXDOP configuration setting has been reduced to give other processes a chance, but the table check is still using as many threads as it can get hold of. The question is: how do we kill this process? BTW, the server has been rebooted (twice).
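The parallel DBCC worker threads all belong to a single session, so the first step is to find which session_id currently owns the DBCC request and kill that one session. A minimal sketch using standard DMVs:

    -- sessions currently executing a DBCC command, with progress and start time
    SELECT r.session_id, r.command, r.status, r.percent_complete, r.start_time
    FROM sys.dm_exec_requests AS r
    WHERE r.command LIKE 'DBCC%';

    -- then, for the offending session_id (57 here is purely hypothetical):
    -- KILL 57;

If the command keeps reappearing even after a restart, it is worth checking whether the maintenance plan's subplan or another Agent job schedule is simply starting it again.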
jock (21 rep)
Oct 4, 2018, 01:22 PM • Last activity: Oct 4, 2018, 02:01 PM
6 votes
1 answers
270 views
Table with persisted computed CLR originated data marked with has_unchecked_assembly_data = 1, but DBCC CHECKTABLE does not unset this same flag
We have an OrderLines table with columns:

    Quantity int not null
    QtyCancelled int not null
    QtyBackorder int not null
    QtyPicking int not null
    QtyPacking int not null
    QtyShiped int not null

with a computed and persisted column FulfillmentStatusId defined as:

    dbo.clr_GetFulfillmentStatusByLineQuantities(Quantity, QtyCancelled, QtyBackorder, QtyPicking, QtyPacking, QtyShiped)

"clr_GetFulfillmentStatusByLineQuantities", of course, being mapped through a scalar function to an ASSEMBLY in the database, with the associated method (C#) marked with:

    [SqlFunction(DataAccess = None, IsDeterministic = true, IsPrecise = true, SystemDataAccess = None)]

With reference to this question, I've run DBCC CHECKTABLE against this table *multiple times* and it still remains marked with has_unchecked_assembly_data=1. This was true when the column was first added to the table, but I believe the DBCC CHECKTABLE command succeeded prior to the migration to SQL 2016.

-----

While the table has been in use for a long time with has_unchecked_assembly_data=1, there seems to be no impact on the table's functionality. I *am* concerned about ALTER ASSEMBLY throwing an exception when not using WITH UNCHECKED DATA, then throwing *another* exception when I *did* use that option, but appearing to update the assembly anyway "according to MVID". Sorry if I ramble, I don't ask too many questions and I mostly lurk. Thanks in advance, however!

-----

Update (20180917T17:34-05:00): Thanks to Sean for suggesting an initial direction. The result of *DBCC CHECKTABLE ('OrderLines') WITH EXTENDED_LOGICAL_CHECKS* was two lines similar to:

    Msg 2537, Level 16, State 106, Line 1
    Table error: object ID 1486732449, index ID 1, partition ID 72057594403946496, alloc unit ID 72057594422755328 (type In-row data), page (1:1430554), row 5. The record check (valid computed column) failed. The values are 38 and 0.

So I executed a REBUILD on the primary key of the table (which is index ID 1 indicated above), which succeeded. However, rerunning *DBCC CHECKTABLE('OrderLines') WITH EXTENDED_LOGICAL_CHECKS* again returned similar errors, only this time with different partition IDs and allocation unit IDs (as one might expect). The extreme solution might be to drop, recreate, and repopulate the table with identity insert, "nuke the site from orbit" so to speak, but for obvious reasons, that's *extreme*. What is the correct solution to "there appears to be inconsistent computed column data in the table's clustered index"?
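Before anything as drastic as a drop-and-reload, it may help to see how widespread the inconsistency is by comparing the persisted value with a fresh evaluation of the same CLR function for every row. A minimal sketch using the column and function names from the post (the dbo schema for the table is assumed); note the optimizer can match the expression back to the persisted column, so if this returns nothing while CHECKTABLE still complains, force recomputation by first copying the six quantity columns into a temp table and calling the function there:

    -- rows where the persisted value no longer matches what the CLR function returns today
    SELECT ol.*
    FROM dbo.OrderLines AS ol
    WHERE ol.FulfillmentStatusId <> dbo.clr_GetFulfillmentStatusByLineQuantities(
              ol.Quantity, ol.QtyCancelled, ol.QtyBackorder,
              ol.QtyPicking, ol.QtyPacking, ol.QtyShiped);

Rows returned here are candidates for a targeted fix — an UPDATE touching one of the input columns forces the persisted value to be recomputed — which is considerably less extreme than rebuilding the table.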
JW-OP (141 rep)
Sep 17, 2018, 06:57 PM • Last activity: Sep 19, 2018, 06:12 PM
2 votes
1 answers
459 views
VLDB - Create SQL DB Snapshot And Run Granular DBCC CHECKDB Commands
I have a 25TB db (SQL2008 Enterprise SP4) that appears to have never had CHECKDB run on it in production. No idea if it was run against a restored backup somewhere at some time. Right now I don't have space to restore a copy on another server connected to the same storage. I need to be able to run CHECKCATALOG, CHECKALLOC and CHECKTABLE, but even for small tables it doesn't appear to be able to create the internal sparse-file DBCC snapshot. So I was wondering if I can create a SQL database snapshot on another drive, run those commands against that, and then drop the snapshot DB once complete. Thanks! --Todd
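A user-created database snapshot can live on whatever drive has room, and running the granular checks inside that snapshot means DBCC does not need to create its own internal sparse-file snapshot. A minimal sketch with hypothetical database, logical-file, table, and path names (a real 25TB database will have several data files, and each one needs an entry in the ON clause):

    -- create the snapshot on a drive with free space
    CREATE DATABASE MyVLDB_check
    ON ( NAME = MyVLDB_Data, FILENAME = 'E:\Snapshots\MyVLDB_Data.ss' )
    AS SNAPSHOT OF MyVLDB;
    GO

    USE MyVLDB_check;
    GO
    -- granular checks run against the snapshot instead of the live database
    DBCC CHECKCATALOG;
    DBCC CHECKALLOC WITH NO_INFOMSGS;
    DBCC CHECKTABLE ('dbo.SomeLargeTable') WITH NO_INFOMSGS;
    GO

    USE master;
    DROP DATABASE MyVLDB_check;   -- drop the snapshot when finished

Keep in mind the snapshot's sparse files grow as pages change in the source database, so the "other drive" still needs meaningful free space while the checks run.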
Todd Kleinhans (23 rep)
Mar 8, 2016, 11:13 PM • Last activity: Mar 8, 2016, 11:56 PM
Showing page 1 of 10 total questions