Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1
votes
1
answers
67
views
SQL Server Shrinkfile Stops at 76% Due to LOB and System Data—Is Partial Shrink Possible?
**Problem:**
I am working on a SQL Server 2019 database where I deleted a large amount of user data (~500 GB).
After deletion, I attempted to shrink the data file using:
DBCC SHRINKFILE (N'web_Data', TARGET_SIZE_MB);
However, the shrink process stopped at 76% and could not proceed further.
**Analysis:**
Using sys.dm_db_database_page_allocations and sys.allocation_units, I found that:
- **LOB data and system objects** are allocated toward the **end of the file.**
- Since these allocations are not movable, the shrink operation cannot release the remaining space.
**Question:**
**How can I perform a shrink operation that reclaims only the movable free space, without getting stuck on unmovable LOB and system data?**
**What I Tried:**
- SHRINKFILE with a target size
- SHRINKFILE with …
**Clarifications:**
- I understand that shrinking can cause fragmentation. I’m **not asking whether I should shrink**, I’m asking how to reclaim only **the free space that is actually reclaimable** without getting blocked.
- Rebuilding the database is not part of this question—I’m focusing on whether **SQL Server allows partial shrink in this scenario.**
**Environment:**
- **SQL Server Version:** 2019
- **Database Size Before Deletion:** ~1 TB
- **Deleted Data:** ~500 GB from a single large table
- **File:** Single data file in PRIMARY filegroup
**Goal:**
- Reclaim as much space as possible using SHRINKFILE, even if system LOB data cannot be moved.
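One commonly suggested workaround, sketched below with the question's own file name, is to shrink in small decrements rather than in one jump, so that each pass only relocates the movable pages near the current end of file. Truly unmovable LOB/system allocations will still stop progress, but every movable extent before them gets released. Dynamic SQL is used because DBCC SHRINKFILE does not accept a variable for the target size.

```sql
-- Sketch: shrink web_Data in 1 GB steps until a pass makes no progress.
DECLARE @size_mb int, @prev_mb int = 0, @sql nvarchar(200);

SELECT @size_mb = size / 128 FROM sys.database_files WHERE name = N'web_Data';  -- size is in 8 KB pages

WHILE @size_mb <> @prev_mb AND @size_mb > 1024
BEGIN
    SET @prev_mb = @size_mb;
    SET @sql = N'DBCC SHRINKFILE (N''web_Data'', '
             + CAST(@size_mb - 1024 AS nvarchar(20)) + N');';
    EXEC sys.sp_executesql @sql;
    SELECT @size_mb = size / 128 FROM sys.database_files WHERE name = N'web_Data';
END
```

The loop exits either when the file reaches the floor or when a full pass releases nothing, which is the point where only unmovable allocations remain.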
Shehzad Malik
(11 rep)
Jul 21, 2025, 11:28 AM
• Last activity: Jul 21, 2025, 12:11 PM
0
votes
1
answers
28
views
Shrinking log files on AG to manage VLFs
Some of our databases have excessive VLFs in their log files, and I plan to shrink and then grow them to an appropriate size. I'm going to use the Tiger Team script https://github.com/microsoft/tigertoolbox/tree/master/Fixing-VLFs
All the databases are in Availability Groups, so I won't be changing the recovery model, but an answer by Ted here
https://learn.microsoft.com/en-us/answers/questions/632536/shrink-a-database-log-file-which-is-synchronized-(
says "You MUST MAKE sure that you don't use the truncate the log with truncate only. That will kill the log info that is needed to keep the database synchronized to the secondary."
Is that right? Is using TRUNCATEONLY problematic on AGs? The Microsoft docs on SHRINKFILE don't say anything about that.
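Whichever script is used, the VLF count can be checked before and after the fix. A quick way, assuming SQL Server 2016 SP2 or later where sys.dm_db_log_info is available:

```sql
-- Count VLFs per database to find the worst offenders:
SELECT d.name, COUNT(*) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY vlf_count DESC;
```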
AlexP012
(53 rep)
May 20, 2025, 07:32 AM
• Last activity: Jun 5, 2025, 01:41 PM
1
votes
2
answers
241
views
DBCC Shrinkfile after full backup restoration but before differential restoration
My database's full backup file is 760 GB and the differential is around 20 GB, but the database size is around 2 TB. As it is a production database, I want to keep downtime to a minimum, so I want to run DBCC SHRINKFILE with TRUNCATEONLY before restoring the differential backup.
Please advise whether I can run DBCC SHRINKFILE with TRUNCATEONLY after the full backup is restored but before the differential backup is restored.
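For context, a sketch of the restore chain involved (database and path names are placeholders): until the final step completes WITH RECOVERY, the database sits in the RESTORING state, where no DBCC command can run against it.

```sql
-- The database is inaccessible between these two steps:
RESTORE DATABASE MyDB FROM DISK = N'D:\Backup\MyDB_full.bak' WITH NORECOVERY;
RESTORE DATABASE MyDB FROM DISK = N'D:\Backup\MyDB_diff.bak' WITH RECOVERY;

-- Only after recovery completes can the data file be shrunk:
DBCC SHRINKFILE (N'MyDB_data', TRUNCATEONLY);
```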
kalpna
(11 rep)
Jul 24, 2018, 05:22 AM
• Last activity: May 25, 2025, 06:04 PM
0
votes
3
answers
10334
views
What is the fastest way to shrink a data file?
I have a 4 TB SQL Server 2008 R2 database. I'm going to move this DB to a new server with SQL Server 2012. (The application does not support SQL Server versions newer than 2012, but that is not the subject of my question.)
Before that, I'm going to move almost all objects from the main 4 TB data file to multiple data files in the same filegroup, so I can get back the free space in the 4 TB file.
After moving the objects, the free space takes forever to be reclaimed with DBCC SHRINKFILE in 256 MB chunks.
What's the fastest way to get that space back to the OS?
Is there a way to use a backup restore to shrink the file?
dba_maroc
(31 rep)
Apr 27, 2021, 04:01 PM
• Last activity: May 22, 2025, 09:06 PM
0
votes
2
answers
407
views
How to shrink log file for Azure SQL Database?
My Azure Sql Database transaction log file is about 1 TB in size, but only 1 GB is in use:
SELECT file_id, name, type_desc,
CAST(FILEPROPERTY(name, 'SpaceUsed') AS decimal(19,4)) * 8 / 1024. AS space_used_mb,
CAST(size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS decimal(19,4)) AS space_unused_mb,
CAST(size AS decimal(19,4)) * 8 / 1024. AS space_allocated_mb,
CAST(max_size AS decimal(19,4)) * 8 / 1024. AS max_size_mb
FROM sys.database_files;
Results: (screenshot omitted)
I'm trying to cut it down to 50 GB in size, but neither of these seems to have any effect; the space_unused_mb and space_allocated_mb values remain the same:
DBCC SHRINKFILE (2, truncateonly);
DBCC SHRINKFILE (2, 50000);
Or am I barking up the wrong tree here? Does log file size count against storage cost?
Greg
(582 rep)
Oct 16, 2024, 09:53 PM
• Last activity: May 17, 2025, 04:06 AM
4
votes
1
answers
359
views
Best practise for reducing database file size during migration to new server
**Introduction**
I have been reading many articles about deleting from a database and the perils of using the shrink command. However, I have an opportunity to move a database to a new server with a big outage window, so I do want to give the database a reset and size reduction (to increase performance and save cost in a hosted environment).
**Background**
* Existing SQL Server is 2016 Web Edition.
* New SQL Server 2022 Standard Edition.
* Database size ~120GB.
* A few large tables make up the bulk of that as well as indexes on those large tables.
* I want to focus just on the database size reduction. I'm comfortable with the other parts of creating a new server.
**Proposed process**
_Before outage_
1. Run stored procedures which use DELETE FROM syntax. I have tested these carefully. They batch the deletes and will be run over a few days.
2. Monitor the log files and log disk to double check there is enough space.
3. Ensure delete processing is done by the time we get to the outage window.
_During outage_
4. Take a full backup of the database and put in a safe place.
5. Drop the largest indexes (there around about 3 or 4 that I will target).
6. Put the database into the simple recovery model.
7. Shrink the log and database files as much as possible.
8. Backup the database.
9. Copy the backup file to the new server.
10. Restore the database to correct file locations
11. Apply the indexes which were removed.
12. Rebuild all indexes.
13. Update statistics.
14. Put database back to full recovery model.
15. Testing and restarting applications
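Steps 6-8 (and step 14 on the new server) can be sketched as T-SQL; database, file, and path names below are placeholders, and the target sizes are illustrative:

```sql
-- During the outage window:
ALTER DATABASE MyDB SET RECOVERY SIMPLE;
DBCC SHRINKFILE (N'MyDB_log', 1024);    -- target size in MB
DBCC SHRINKFILE (N'MyDB_data', 1);      -- shrink as far as possible
BACKUP DATABASE MyDB TO DISK = N'D:\Backup\MyDB_migration.bak' WITH COMPRESSION;

-- On the new server, after restore, re-indexing, and statistics updates:
-- ALTER DATABASE MyDB SET RECOVERY FULL;
```

Remember that switching back to full recovery requires a fresh full backup before log backups can resume.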
ErikEV
(77 rep)
Apr 7, 2025, 11:59 PM
• Last activity: Apr 8, 2025, 11:49 AM
0
votes
2
answers
140
views
Reclaiming space after dropping varbinary columns
I have a table of almost one million rows with two columns of datatype varbinary. These two columns store binary data that causes the database to grow to 1 TB.
As this database is also restored in other QA environments and a dev environment, we now have the task of reclaiming space as much as possible to save costs.
After engaging the vendor, they advised that these two columns can be dropped as they are no longer in use. The application writing to these two columns has been modified accordingly.
I have followed two options to approach the situation in my dev environment but I need help on the right approach.
### Option 1
1. Drop the two binary columns
2. Run DBCC CLEANTABLE - This step took almost 24 hours. I had to stop it because I will not have that much time in the production environment.
3. Shrink the data file - A bit reluctant to do this because of fragmentation.
I got stuck on no. 2 above and then tried the second option below.
### Option 2
1. Drop the two binary columns
2. Create a new table and copy the data over using SSIS - This operation took almost 12 hours to complete (I will not have this much time in production)
3. Create a new filegroup and move all tables and other objects to it, except the old table in question, using the CREATE INDEX ... WITH (DROP_EXISTING = ON, ONLINE = ON) command
4. Drop the old table in the primary filegroup
5. Shrink the primary data file - I expect this to shrink faster since there are not many objects left in there.
Item 2 took almost 12 hours to complete. Does anyone know a better approach to getting rid of these two columns and reclaiming the space?
The production environment has AOAG (Always On), meaning that I have to be in full recovery.
Using DBCC CLEANTABLE took so long, and my log kept growing. After it had run for more than 24 hours, I stopped it, and it took a long time to roll back. It was at that point that I decided DBCC CLEANTABLE was not a good option for me.
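One detail worth checking: DBCC CLEANTABLE takes an optional batch size, and with a small batch each chunk commits in its own transaction, so log backups taken between batches keep the log from growing unchecked under FULL recovery, and the operation can be stopped without a long rollback. A sketch with placeholder names:

```sql
-- Process 5,000 rows per transaction instead of one giant transaction:
DBCC CLEANTABLE (N'MyDB', N'dbo.BigTable', 5000);
```

Smaller batches trade total elapsed time for restartability and a bounded log, which may matter more in the production window described above.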
dennis
(1 rep)
Feb 8, 2025, 02:35 PM
• Last activity: Feb 9, 2025, 05:17 PM
1
votes
1
answers
89
views
If I replace images in a table, will existing BLOB data be cleaned up and the database shrink?
I recently updated a table in my database by saving small thumbnail images as a byte[] to a varbinary(max) column.
This significantly improved the speed of loading thumbnails (previously I was loading them from the file system), but due to an error in my code where the thumbnail images were being resized but not compressed, the images were about 140-160KB in size and created a ton of BLOBs. The images should be about 5 or 6 KB. In testing of my new code, the images should be sized correctly at around 5 or 6 KB.
If I iterate through the table and replace all of the byte[] images in the varbinary(max) column, will this clean up all these unnecessary BLOBs?
My database understandably grew significantly as a result of my error, and I'm trying to get the database to go back down as its current size is creating issues with our backups.
---
What I understand of the BLOB data is that it is stored separately in the database and referenced from the varbinary cell in my table, so I wasn't sure whether the associated BLOB data would be deleted if I replaced it. If it is automatically deleted, why wouldn't the database automatically shrink?
I am testing it in the Dev Environment, but after the last mistake that I made, I wanted to double check. I saw in SSMS that there are options for Shrinking the Database, so I'm assuming that would be the task I should run after updating all of the image data. Of course there will be many backups throughout that process as well. I'm just trying to avoid any additional issues.
I have a database that was backed up at about 50 MB and grew to about 3.5 GB after my error. I can replace the data with smaller images, but it doesn't seem like that would be enough, because the database won't automatically shrink.
M_Lyons10
(13 rep)
Feb 6, 2025, 07:00 PM
• Last activity: Feb 7, 2025, 06:09 AM
4
votes
1
answers
488
views
SQL Server reduce unused space on a mostly heap table database
I’m working with a 2.3TB primary data file and currently have about 1TB of unused space. Recently, I performed row-level compression on the largest table in the database, which reduced the table’s size from 0.9TB to 0.4TB. However, after this compression, the size of the file grew, and while the space used by the table decreased, the overall unused space in the file did not shrink accordingly.
My issue is how to reclaim this unused space, considering this is a reporting-based database with no modelling, meaning there are no primary keys or clustered indexes.
My initial test was to run DBCC SHRINKFILE(..., TRUNCATEONLY), but no unallocated space was found at the end of the file. The next thing I was going to try was to go back to the massive table and create a clustered index, in the hope of moving the unallocated space from the compression operation to the end of the file, but there is no unallocated space on that table. Also, there is only about 7 GB of unallocated space across the tables.
This is what I found after checking the space usage of the file:
- File Size: 2,287 GB
- Space Used: 1,311 GB
- Unallocated Space: 976 GB
Here are the results from running exec sp_spaceused:
- Database Size: 2,372,346.56 MB
- Unallocated Space: 999,150.95 MB
- Reserved: 1,375,788,088 KB
- Data: 1,162,793,432 KB
- Index Size: 204,355,584 KB
- Unused: 8,639,072 KB
I am not a database administrator, so I have no idea what I could do. Does anyone have any suggestions?
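For a heap-heavy database like this, one option worth testing first in a non-production copy: rebuilding a heap rewrites its pages compactly and releases space trapped inside the table, after which a shrink has something to reclaim. Table and file names below are placeholders, and the rebuild needs free workspace of roughly the table's size:

```sql
-- Rewrite the heap's pages compactly (SQL Server 2008 and later):
ALTER TABLE dbo.BigReportTable REBUILD;

-- Then try to release the freed extents back to the OS:
DBCC SHRINKFILE (N'MyDB_data', 1400000);  -- target size in MB
```

Unlike creating and dropping a clustered index, ALTER TABLE ... REBUILD keeps the table a heap, which fits the "no clustered indexes" constraint described above.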
c_tames1998
(43 rep)
Jan 27, 2025, 10:11 AM
• Last activity: Feb 1, 2025, 02:05 AM
2
votes
1
answers
1944
views
DBCC ShrinkFile EmptyFile fails due to system table SYSFILES1
In following a recommendation from our new storage vendor, I'm trying to split up some large & busy databases into multiple data files, each to live on a separate VDisk that has a separate controller. Yes, this is a virtual platform.
So let's say we have MyCoolDB and its primary MDF is at D:\Data\MyCoolDB.mdf, with logical name MyCoolDB_data. I add 3 new files: E:\Data2\MyCoolDB_2.ndf, F:\Data3\MyCoolDB_3.ndf, and G:\Data4\MyCoolDB_4.ndf. That works.
Now I want to re-distribute the data into these new files. So I use the DBCC SHRINKFILE command with the EMPTYFILE option. Namely, DBCC SHRINKFILE ('MyCoolDB_data', EMPTYFILE);.
It runs for about 10-30 seconds, then FAILS, throwing the error as follows: SHRINKFILE: System table SYSFILES1 Page 1:32074080 could not be moved to other files because it only can reside in the primary file of the database.
This operation **worked fine** in DEV/TEST environment, and I got no such errors. What could be going wrong / different in PROD? What are my next-steps or other options? Because now we have 3 new NDF files sitting there with hardly anything in them (while, yes, "new data" is being distributed into them, the primary file is still huge in comparison), and storage folks are going to be banging down my door about all that "wasted space".
Things I have tried:
1. doing a CHECKPOINT beforehand
Things I cannot try without downtime/slow-time (I think), and therefore have not yet:
2. Simple recovery -> do shrink -> Full recovery -> take full-backup to allow tlog-backups to continue
3. Creating a new FILEGROUP, adding files to it, and using index rebuilds to move data into the new FILEGROUP's files. PS: Now that the new NDF files are there and have 'something' in them, I'm betting it's gonna be nearly impossible to UNDO that action (i.e. to EMPTY them and remove them), so I'm not looking forward to that being a thing, if it is.
4. Creating a new DB with the desired file structure and actually migrating all the data over to it. This seems like the last-resort answer. As I said, the SHRINKFILE EMPTYFILE approach worked fine in another environment for the same database.
Config: SQL 2017 Standard, 16 cores (2x8), 124 GB RAM. Platform: Nutanix with VMWare.
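Two observations worth sketching, using the file names from the question. EMPTYFILE can never fully drain a primary data file, because system allocations such as SYSFILES1 are only allowed to live there; but the same command does work on the secondary NDFs, so undoing them (point 3's PS) is possible, and the primary can still be reduced with a target-size shrink that leaves the unmovable pages behind:

```sql
-- Undoing a secondary file is possible: drain it, then remove it.
DBCC SHRINKFILE (N'MyCoolDB_2', EMPTYFILE);
ALTER DATABASE MyCoolDB REMOVE FILE MyCoolDB_2;

-- For the primary file, shrink to a target size instead of EMPTYFILE;
-- movable pages migrate out, system pages stay put:
DBCC SHRINKFILE (N'MyCoolDB_data', 500000);  -- target in MB, illustrative
```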
NateJ
(824 rep)
Mar 12, 2020, 02:48 PM
• Last activity: Jan 20, 2025, 08:02 AM
0
votes
1
answers
1884
views
Oracle 11g - Decreasing Big Table Size
I want to reduce the size of a table.
> I thought if I could do this with export / import, but when I check it
> with the ESTIMATE_ONLY option, it doesn't seem to work.
>
> I have no chance of using partitioning.
>
> If I do it with CTAS (create table as select), the archive production
> will peak and the disaster database will be affected.
>
> How can I reduce the size of this table? If I do shrink, will UNDO be
> enough and how much space will be enough to add UNDO tablespace to
> overcome this problem? Do you have an estimate of how long it will
> take if I shrink?
>
> I have to be very careful as it will be done directly in the
> production database.
> Version: Oracle 11g Release 2 - Standard Edition
>
> ASM option: yes
>
> Cluster: yes
>
> Partitioning option: no
>
> Compress data pump option: no
Table size information:
SQL> select segment_name,sum(bytes)/1024/1024/1024 GB from dba_segments where owner='OWNER_NAME' and segment_type='TABLE' and segment_name=upper('TABLE_NAME') group by segment_name;
SEGMENT_NAME GB
--------------------------------------------------------------------------------- ----------
TABLE_NAME 392.493164
Export information (it does not help in reducing the table size):
nohup expdp "'/ as sysdba'" directory=DP_DIR tables=OWNER_NAME.TABLE_NAME ESTIMATE_ONLY=Y &
Output:
Starting "SYS"."SYS_EXPORT_TABLE_01": "/******** AS SYSDBA" directory=DP_DIR tables=OWNER_NAME.TABLE_NAME ESTIMATE_ONLY=Y
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. estimated "OWNER_NAME"."TABLE_NAME" 392.4 GB
Total estimation using BLOCKS method: 392.4 GB
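Since the question asks about shrink specifically: Oracle's online segment shrink is a two-phase operation, and splitting the phases bounds the disruptive part. This is a sketch using the placeholder names from the question; it requires the table to be in an ASSM tablespace, and it generates undo and redo proportional to the rows it moves, so UNDO sizing still needs to be tested on a copy first.

```sql
-- Row movement must be enabled before a shrink:
ALTER TABLE OWNER_NAME.TABLE_NAME ENABLE ROW MOVEMENT;

-- Phase 1: long-running, online reorganization of the rows:
ALTER TABLE OWNER_NAME.TABLE_NAME SHRINK SPACE COMPACT;

-- Phase 2: short step that moves the high-water mark (briefly locks the table):
ALTER TABLE OWNER_NAME.TABLE_NAME SHRINK SPACE;
```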
Best regards,
jrdba123
(29 rep)
Mar 21, 2021, 10:27 AM
• Last activity: Aug 12, 2024, 01:06 AM
1
votes
1
answers
42
views
SQL Server 2008R2 - data file grows back after shrinking and available to shrink again
After shrinking the data file there is around 180 GB of free space on the hard drive (prior to the operation, the shrink file dialog also shows that it is possible to free up around 180 GB of space). After 1-2 days the data file grows back to its original size (before the shrink operation) and it is possible to shrink it again (the shrink file dialog again shows that around 180 GB can be freed).
I am strictly required to maintain some free space on the hard drive, so avoiding the shrink operation is not an option.
I'm looking for advice on what could cause this issue, and a possible solution.
Andrew B
(113 rep)
Aug 1, 2024, 02:20 PM
• Last activity: Aug 1, 2024, 04:16 PM
-5
votes
1
answers
151
views
WAIT_AT_LOW_PRIORITY doesn't work for shrinking a data file
When I try to shrink a data file like this:
DBCC SHRINKFILE (N'DataFileName' , 1500) WITH WAIT_AT_LOW_PRIORITY
I get the error:
'WAIT_AT_LOW_PRIORITY' is not a recognized option.
What can be the reason?
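The most likely reason: WAIT_AT_LOW_PRIORITY for DBCC SHRINKFILE was introduced in SQL Server 2022 and, as I understand it, also requires database compatibility level 160; on earlier versions the parser simply rejects the option with exactly this error. A quick check:

```sql
-- 16 corresponds to SQL Server 2022; anything lower cannot parse the option:
SELECT SERVERPROPERTY('ProductMajorVersion') AS major_version;

-- Also check the database's compatibility level:
SELECT name, compatibility_level FROM sys.databases WHERE name = DB_NAME();
```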
Maria
(1 rep)
Jul 22, 2024, 11:13 AM
• Last activity: Jul 22, 2024, 03:23 PM
-1
votes
1
answers
852
views
How to reduce big size database files (Data File ) SQL Server without SHRINK?
I found a couple of questions about shrinking big data files, but I don't think they are very close to my situation. So here is my question:
I have a big database with almost 8 TB of data file and almost 160 GB of log file.
The database is in offline status.
The shrink process, due to the big size of the data file, will take a long time, so I'm not sure how to proceed in order to avoid any problems on the instance.
One option I have in mind is to create a new data file, copy the tables to the new data file, and finally shrink the old data file.
Best regards
SakZepelin
(21 rep)
Sep 21, 2021, 02:28 PM
• Last activity: May 16, 2024, 10:03 PM
0
votes
0
answers
29
views
Reduce primary DATAFILE size for an SQL Server Database 2016
In an Always On AG configuration (primary) I have a database with a 10 TB data file, 50% of which is free space. I want to reduce this data file from 10 TB to 8 TB. How can I proceed without doing a shrink?
If there is no way other than shrink, how can I estimate the time it will take?
user2106896
Apr 25, 2024, 09:04 AM
• Last activity: Apr 25, 2024, 04:36 PM
299
votes
4
answers
357288
views
Why Does the Transaction Log Keep Growing or Run Out of Space?
This one seems to be a common question in most forums and all over the web, it is asked here in many formats that typically sound like this:
> In SQL Server -
>
> - What are some reasons the transaction log grows so large?
> - Why is my log file so big?
> - What are some ways to prevent this problem from occurring?
> - What do I do when I get myself on track with the underlying cause and want to put my transaction log file to a healthy size?
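Whatever the underlying cause, the standard first diagnostic for a growing log is to ask the server what is preventing log reuse right now:

```sql
-- log_reuse_wait_desc names the blocker: LOG_BACKUP, ACTIVE_TRANSACTION,
-- REPLICATION, AVAILABILITY_REPLICA, etc.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
ORDER BY name;
```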
Mike Walsh
(18278 rep)
Dec 5, 2012, 02:11 AM
• Last activity: Mar 11, 2024, 12:43 PM
0
votes
1
answers
177
views
SQL Server, Maintenance Plan, Shrinking DATA File
After 5 years of work on our SQL Server database, I plan to carry out maintenance on it. We plan to eliminate more than 170 GB of useless data by truncating some tables, and after that a shrink file operation will also be carried out. When I checked in SQL Server I found this:
Allocated Space: 713912.31 MB
Free Space Available: 710979.13 MB (99%)
Is that normal? Or is there some issue behind this? Is there some bad configuration that we have made?
Mohammed Amine BENAMARA
(11 rep)
Mar 6, 2024, 02:56 PM
• Last activity: Mar 7, 2024, 01:17 PM
1
votes
2
answers
956
views
Does rebuilding/reorganize an index in SQL Server free up space in the DB file due to compacting data pages?
If I have an index with a default fill factor of 100%, and modifications result in a large amount of internal fragmentation (page splits, free space, etc.) so that many pages end up around 60% full instead of 100%, would an index rebuild/reorganize compact the data into fewer data pages, thus resulting in more available space reported in the database?
If the previous is true, does it then make sense to reorganize indexes before performing a shrink operation to maximize the space gained by the shrink (followed by another reorganize after the shrink to reduce fragmentation caused by the shrink)?
Finally, does DBCC Shrinkfile compact data pages the way a rebuild does, or does it only move data pages around?
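The internal fragmentation the question describes can be measured directly before deciding whether a reorganize would help: page density is reported by sys.dm_db_index_physical_stats, though only when run in SAMPLED or DETAILED mode.

```sql
-- avg_page_space_used_in_percent near 60 means ~40% of each page is empty;
-- compacting those pages would return that space to the file's free space.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       ips.index_id,
       ips.avg_page_space_used_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
WHERE ips.page_count > 1000
ORDER BY ips.avg_page_space_used_in_percent;
```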
Marcus
(89 rep)
Jan 26, 2024, 01:44 PM
• Last activity: Jan 26, 2024, 02:06 PM
0
votes
1
answers
1102
views
Index Maintenance - Reorganize After BIG SQL Server Shrink
I'm planning to execute the below index maintenance created by Ola Hallengren in a 1TB Database.
EXECUTE [dbo].[IndexOptimize]
@Databases = 'USER_DATABASES',
@FragmentationLow = NULL,
@FragmentationMedium = 'INDEX_REORGANIZE',
@FragmentationHigh = 'INDEX_REORGANIZE',
@FragmentationLevel1 = 50,
@FragmentationLevel2 = 80,
@UpdateStatistics = 'ALL'
,@FillFactor = 100
So basically I know that shrinking is a very bad idea and not common practice; I'm shrinking because it is really needed, and it has been working well.
Due to the fragmentation that the shrinking causes, I'm planning to use the mentioned stored procedure to **Reorganize** all the affected indexes, and to avoid giving the freed space back with a **Rebuild**.
So the specific question is: does this look like a good approach? Have you faced something similar and used another approach? Any suggestions are highly appreciated.
user141153
Aug 22, 2018, 02:35 AM
• Last activity: Dec 7, 2023, 05:08 PM
-1
votes
1
answers
151
views
Alternatives to reclaim free space after Shrink activity
For couple of our SQL server databases on version SQL2017 APP team did purge to change retention on these partitioned tables which has released quite good amount of space.
Some background-
Before purge- DB size around 10 TB, free space 1.5 TB
After purge - DB size still shows 10 TB, free space 7 TB
We are planning to migrate to SQL 2019 and won't need the initial storage because, as mentioned above, the DB's actual size has reduced to well under 5 TB. However, due to the DB design we are unable to get much benefit from the purge: it's been a week and we have reclaimed hardly 200 GB of that free space by shrinking. We run into issues like blocking, or shrink operations that never complete and just seem too slow. We have used some guidance from https://eitanblumin.com/2020/04/07/troubleshooting-long-running-shrink-operations/ but still no luck.
We are exploring few options below-
> 1. Backup/Restore simply copies the database with free space on destination where we will not be provisioned with 10 TB
>
> 2. Still looking into transactional replication if it will work because TR might not be ideal to move entire database as part of DB
> migration
>
> 3. I am not sure on this one, but maybe one of you can guide me - the DB has been created with multiple files, approx 10-12 NDFs under the primary filegroup. Is it possible to create new files and simply make all transactions start going into the new files, and then later simply empty the old NDF files where shrinking is slow?
Please share your thoughts on how the above can be achieved.
Update based on comments
> Thanks, we have tested in a lower environment with the DBCC SHRINKFILE,
> EMPTYFILE option. That one seems to move data from the existing file in
> the FG to the other files in the same FG. It is for sure very slow
> (around 20 hrs for a 500 GB file), but it seems to mark the file
> INACTIVE. Does that mean I will get no more data in that file and it is
> safe to DELETE the file so as to reclaim the free space?
Thanks in advance !!
Newbie-DBA
(804 rep)
Sep 6, 2023, 04:03 PM
• Last activity: Sep 20, 2023, 08:05 PM