
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 vote
2 answers
143 views
Tempdb log growth 3x as slow on new server, SP does not perform
We are running into the issue that our stored procedure on our new SQL Server performs a lot slower than on the old server. We looked into the problem and noticed the following on our tempdb: the incremental growth takes 3 times as long on the new server. The same happens for the log file of our database. What is causing this?
Esmee (29 rep)
Mar 19, 2021, 04:16 PM • Last activity: Jul 23, 2025, 11:06 PM
1 vote
1 answer
3390 views
SQL Server - Best Practices for Growing Large Database Files
I know this kind of question has been asked previously, but I haven't found the answer I'm looking for. I'm reconsidering the Autogrowth setup for the DB primary & data files (partitioned table) and log files. I have been going through some articles, and what I conclude is: Microsoft suggests ***Autogrowth of 1/8th of the file size, which is 12.5%***; on the other hand, Brent Ozar and others suggest ***256MB for data files and 128MB for log files***. Now I'm confused, as my main database file is approximately 545GB and the log file is around 330GB, and there will be a big difference (either too big or too small) whichever option I choose. Currently, autogrowth is set to 500MB for the main data file, 10% for the log file, and 5MB or 64MB for the other NDF files. What would you recommend, and how can I calculate it?
Hakoo Desai (111 rep)
Jul 10, 2023, 02:33 AM • Last activity: Apr 13, 2025, 12:02 AM
0 votes
1 answer
59 views
How can I add the date and time to the autogrowth query below?
While trying to prevent autogrowth from happening, I got this nice script. However, if it does happen, I need to know how to find out, which I can do using this very nice script by Max Vernon.
print @@servername + ' - ' + SUBSTRING(@@version,1,COALESCE(CHARINDEX('Copyright',@@version,0)-1,108))
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SET NOCOUNT OFF


/*
    Description:    display growth events for all databases on the instance
    by:             Max Vernon
    date:           2014-10-01
*/
DECLARE @Version NVARCHAR(255);
DECLARE @VersionINT INT;
SET @Version = CONVERT(NVARCHAR(255),SERVERPROPERTY('ProductVersion'));
SET @VersionINT = CONVERT(INT, SUBSTRING(@Version,1 ,CHARINDEX('.',@Version)-1));
DECLARE @cmd NVARCHAR(2000);
SET @cmd = '';
IF @VersionINT >= 9
BEGIN
    SET @cmd = 
'
DECLARE @trcfilename VARCHAR(1000);

SELECT @trcfilename = path 
FROM sys.traces WITH(NOLOCK)
WHERE is_default = 1;

IF COALESCE(@trcfilename,'''') <> ''''
BEGIN
SELECT [Radhe]=
    '''''''' + @@SERVERNAME + '''''','''''' +
     DB_NAME(mf.database_id) + '''''','''''' +
     mf.name + '''''','' +
     CONVERT(VARCHAR(255), a.NumberOfGrowths) + '','' +
     CONVERT(VARCHAR(255), CAST(a.DurationOfGrowthsInSeconds AS decimal(38, 20)))
    FROM
    (
        SELECT
            tt.DatabaseID AS database_id,
            tt.FileName AS LogicalFileName,
            COUNT(*) AS NumberOfGrowths,
            SUM(tt.Duration / (1000 * 1000.0)) AS DurationOfGrowthsInSeconds
            FROM sys.fn_trace_gettable(@trcfilename, default) tt
            WHERE (EventClass IN (92, 93))
            GROUP BY
                tt.DatabaseID,
                tt.FileName
    ) a
    INNER JOIN sys.master_files mf ON
        (mf.database_id = a.database_id) AND
        (mf.name = a.LogicalFileName);
END
ELSE
BEGIN
    SELECT @@SERVERNAME, ''NO TRACE FILE'';
END
';
EXEC sp_executesql @cmd;
END
ELSE
BEGIN
    SELECT [SERVER NAME]=@@SERVERNAME, [Product Version]=SERVERPROPERTY('ProductVersion');
END
What is missing, and what I would like to add, is: when did the autogrowth happen? How can I find that out?
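One way to answer the "when" part: the rows returned by fn_trace_gettable already include StartTime and EndTime columns for each growth event (EventClass 92 and 93). The following is only an illustrative sketch, simplified from the script above, not part of Max Vernon's original:

```sql
-- Sketch: the default trace records StartTime/EndTime for every
-- data/log file auto grow event (EventClass 92/93).
DECLARE @trcfilename VARCHAR(1000);

SELECT @trcfilename = path
FROM sys.traces
WHERE is_default = 1;

SELECT tt.StartTime,                           -- when the growth began
       tt.EndTime,                             -- when it finished
       DB_NAME(tt.DatabaseID) AS DatabaseName,
       tt.FileName AS LogicalFileName,
       tt.Duration / 1000000.0 AS DurationSeconds
FROM sys.fn_trace_gettable(@trcfilename, DEFAULT) AS tt
WHERE tt.EventClass IN (92, 93)
ORDER BY tt.StartTime;
```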
Marcello Miorelli (17274 rep)
Jul 10, 2024, 11:37 AM • Last activity: Jul 10, 2024, 03:02 PM
299 votes
4 answers
357287 views
Why Does the Transaction Log Keep Growing or Run Out of Space?
This one seems to be a common question in most forums and all over the web. It is asked here in many formats that typically sound like this:

> In SQL Server:
>
> - What are some reasons the transaction log grows so large?
> - Why is my log file so big?
> - What are some ways to prevent this problem from occurring?
> - What do I do when I get myself on track with the underlying cause and want to put my transaction log file to a healthy size?
Mike Walsh (18278 rep)
Dec 5, 2012, 02:11 AM • Last activity: Mar 11, 2024, 12:43 PM
0 votes
0 answers
55 views
Database AutoGrowth For Logging
I have a SQL Server 2019 database that we use to collect logs from our software on machines across our network. We are required to hold these logs for about a year. The logging from our collection scripts loads up one of our databases, which is currently around 500,000MB. This is for our current network of 5 computers and servers, and we plan on adding many more. As the network grows, I have a few questions.
1. As we grow our network, what should we set our database maximum file size to, since best practice says autogrowth should not be unlimited?
2. Currently autogrowth is set to 10% with unlimited size; should I change it?
3. If I set a maximum size with autogrowth, will the database stop accepting data once it is full?
4. Should I turn on Instant File Initialization to help with file growth, and is it worth the security risk?
I am trying to get ahead of my network growing and figure out what I will need to do to manage the growth.
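For question 2, the kind of change being considered can be sketched like this (the database and logical file names are hypothetical, and the increments are illustrative, not a recommendation):

```sql
-- Sketch: replace 10% growth with fixed increments and explicit caps.
ALTER DATABASE LogCollection
MODIFY FILE (NAME = N'LogCollection_data', FILEGROWTH = 512MB, MAXSIZE = 2048GB);

ALTER DATABASE LogCollection
MODIFY FILE (NAME = N'LogCollection_log', FILEGROWTH = 256MB, MAXSIZE = 512GB);
```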
JukEboX (131 rep)
Feb 27, 2024, 08:16 PM
1 vote
1 answer
802 views
Database file reserved space
What are the possible reasons that a SQL Server database file reserved space is 95 percent of the total space? My database file has increased drastically and it shows that the actual space used is 5 percent while 95 percent is reserved.
LearningSQL (9 rep)
Dec 21, 2023, 04:10 PM • Last activity: Dec 28, 2023, 06:13 AM
3 votes
3 answers
619 views
Database Log File Growth change does not reflect on secondary replica sys.master_files
Environment: Microsoft SQL Server 2014 - 12.0.4100.1 (X64) Apr 20 2015 17:29:27 Copyright (c) Microsoft Corporation Enterprise Edition (64-bit) on Windows NT 6.3 (Build 9600: ) (Hypervisor)

When I change the log file growth rate on the primary replica, the secondary replica database gets that change, and I can verify it from the GUI and the sys.database_files view. But the same change does not reflect in the sys.master_files view.

Setup code (you must execute the following script in SQLCMD mode):

:Connect PrimaryNode
IF EXISTS(SELECT name FROM sys.databases WHERE name = 'FileGrowthTest')
    DROP DATABASE FileGrowthTest
GO
CREATE DATABASE [FileGrowthTest]
ON PRIMARY
( NAME = N'FileGrowthTest', FILENAME = N'L:\FileGrowthTest.mdf', SIZE = 4096KB, FILEGROWTH = 10%)
LOG ON
( NAME = N'FileGrowthTest_log', FILENAME = N'F:\FileGrowthTest_log.ldf', SIZE = 4096KB, FILEGROWTH = 10%)
GO
BACKUP DATABASE [FileGrowthTest] TO DISK = N'E:\Backup\FileGrowthTest.bak'
WITH NOFORMAT, NOINIT, NAME = N'FileGrowthTest-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

:Connect PrimaryNode
USE [master]
GO
ALTER AVAILABILITY GROUP [TestAG] ADD DATABASE [FileGrowthTest];
GO

:Connect PrimaryNode
BACKUP DATABASE [FileGrowthTest] TO DISK = N'\\backupshare\FileGrowthTest.bak'
WITH COPY_ONLY, FORMAT, INIT, SKIP, REWIND, NOUNLOAD, COMPRESSION, STATS = 5
GO

:Connect SecondaryNode
RESTORE DATABASE [FileGrowthTest] FROM DISK = N'\\backupshare\FileGrowthTest.bak'
WITH NORECOVERY, NOUNLOAD, STATS = 5
GO

:Connect PrimaryNode
BACKUP LOG [FileGrowthTest] TO DISK = N'\\backupshare\FileGrowthTest.trn'
WITH NOFORMAT, INIT, NOSKIP, REWIND, NOUNLOAD, COMPRESSION, STATS = 5
GO

:Connect SecondaryNode
RESTORE LOG [FileGrowthTest] FROM DISK = N'\\backupshare\FileGrowthTest.trn'
WITH NORECOVERY, NOUNLOAD, STATS = 5
GO

:Connect SecondaryNode
-- Wait for the replica to start communicating
begin try
    declare @conn bit
    declare @count int
    declare @replica_id uniqueidentifier
    declare @group_id uniqueidentifier
    set @conn = 0
    set @count = 30 -- wait for 5 minutes
    if (serverproperty('IsHadrEnabled') = 1)
        and (isnull((select member_state from master.sys.dm_hadr_cluster_members where upper(member_name COLLATE Latin1_General_CI_AS) = upper(cast(serverproperty('ComputerNamePhysicalNetBIOS') as nvarchar(256)) COLLATE Latin1_General_CI_AS)), 0) <> 0)
        and (isnull((select state from master.sys.database_mirroring_endpoints), 1) = 0)
    begin
        select @group_id = ags.group_id from master.sys.availability_groups as ags where name = N'TestAG'
        select @replica_id = replicas.replica_id from master.sys.availability_replicas as replicas where upper(replicas.replica_server_name COLLATE Latin1_General_CI_AS) = upper(@@SERVERNAME COLLATE Latin1_General_CI_AS) and group_id = @group_id
        while @conn <> 1 and @count > 0
        begin
            set @conn = isnull((select connected_state from master.sys.dm_hadr_availability_replica_states as states where states.replica_id = @replica_id), 1)
            if @conn = 1
            begin
                -- exit loop when the replica is connected, or if the query cannot find the replica status
                break
            end
            waitfor delay '00:00:10'
            set @count = @count - 1
        end
    end
end try
begin catch
    -- If the wait loop fails, do not stop execution of the alter database statement
end catch
ALTER DATABASE [FileGrowthTest] SET HADR AVAILABILITY GROUP = [TestAG];
GO

Both files have a 10% growth rate now. Changing to a fixed value:

USE [master];
GO
ALTER DATABASE FileGrowthTest MODIFY FILE (NAME='FileGrowthTest', FILEGROWTH = 256MB);
ALTER DATABASE FileGrowthTest MODIFY FILE (NAME='FileGrowthTest_log', FILEGROWTH = 128MB);
GO

The change is visible on the secondary replica node when using the sys.database_files view, for both data and log files:

USE [FileGrowthTest];
GO
SELECT name AS file_name, type_desc AS file_type, growth AS current_percent_growth
FROM sys.database_files
WHERE is_percent_growth = 1

But using the sys.master_files view, only the data file change is visible; the log file growth still shows 10%:

SELECT d.name AS database_name, mf.name AS file_name, mf.type_desc AS file_type, mf.growth AS current_percent_growth
FROM sys.master_files mf (NOLOCK)
JOIN sys.databases d (NOLOCK) ON mf.database_id = d.database_id
WHERE is_percent_growth = 1 AND d.name = 'FileGrowthTest'
GO

Why does sys.master_files not reflect the log file autogrowth rate change, whereas the data file autogrowth change is reflected correctly?
SqlWorldWide (13707 rep)
Oct 17, 2018, 04:53 PM • Last activity: Dec 4, 2023, 06:40 AM
0 votes
1 answer
443 views
Why does data file growth fail even with plenty of disk space?
We have a SQL Server 2016 (v13 SP3) Enterprise Edition server hosted in a Windows failover cluster / SQL Availability Group with two nodes (primary & secondary). The two nodes are running inside AWS EC2 instances running Windows Server 2012 R2 64-bit (NT 6.3). Earlier this week, the server started r...
We have a SQL Server 2016 (v13 SP3) Enterprise Edition server hosted in a Windows failover cluster / SQL Availability Group with two nodes (primary & secondary). The two nodes are running inside AWS EC2 instances running Windows Server 2012 R2 64-bit (NT 6.3). Earlier this week, the server started responding with this error:

> Could not allocate space for object 'dbo.Batches'.'pk_Batches_BatchID' in database 'XXX' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

At first this seemed pretty straightforward: we figured we'd been careless and allowed the data and/or log files to get too big. The files were definitely full, without any unallocated space left in them. We figured we just needed to grow the Windows (NTFS) drive (backed by AWS EBS under the hood). The 'XXX' database has one log file and two data files; the data files are set to unlimited growth (though only by 64MB at a time) and the database only has the default 'PRIMARY' filegroup, no other filegroups involved. The data files are on the 'D:' drive. **But the 'D:' drive has over 400 GB free, so why are the data files not growing?**

We spent a lot of time looking at the Windows Clustering and the SQL Availability Groups, as we were also seeing plenty of errors about the AG status going into "Recovering" and the clustering role not applying/synchronizing properly. Some changes allowed the primary node to come back up for a few minutes, but then it would crash again. (Because of this, our ability to inspect the 'XXX' database itself was limited.) We looked to see if EBS was having some sort of issue or outage, but could see no errors.

We realize that the servers are old and out of date. We realize that some would say that using/relying on autogrowth is a bad practice. But this question isn't about best practices - it's **how do we get this currently-down production server back on its feet?**
nateirvin (756 rep)
Sep 14, 2023, 11:52 PM • Last activity: Sep 15, 2023, 09:05 AM
0 votes
2 answers
728 views
What causes disk space growth from Informatica load onto SQL Server?
We've been dealing with huge disk growth on SQL Server from Informatica. After the load, the database grows to 2.4TB. After a database shrink, it goes to 1.05TB. What could likely cause this to happen? What settings can we check in Informatica and/or SQL Server for our next run to troubleshoot this or do a guess/check? EDIT: There are two ways to move data using an Informatica mapping: using SQL overrides (running straight SQL) with the SQL Transformation type, or using the built-in data streams with Informatica's out-of-the-box functionality. In this case, we are using data streams. When data streams are used, straight SQL is still used, but Informatica creates the SQL code behind the scenes. We are loading in 1,000,000-record increments. I thought perhaps Informatica might be telling SQL Server to allocate disk space as it loads, but I'm not even sure what command(s) to look for if it did.
JustBeingHelpful (2116 rep)
Mar 24, 2015, 08:43 PM • Last activity: Mar 24, 2023, 12:04 PM
0 votes
1 answer
105 views
What happens to proportional fill once data files reach their limit?
Taking the example of data files (user db or tempdb), assume the initial db size is 1GB and we have 3 data files. Proportional fill will spread incoming data across the 3 files in a way that allows maximum parallelism. However, once the data reaches 1GB in each file, and assuming auto growth is set to 100MB, will the proportional fill algorithm stop working, because only 1 file's 100MB is available (free) for use? Is there a way around this, for example by growing all data files together by 100MB even when only 1 of them is full?
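Worth noting: on SQL Server 2016 and later, the "grow all files together" idea exists as a per-filegroup setting (it replaced trace flag 1117 for user databases). A minimal sketch, assuming a hypothetical database name:

```sql
-- Sketch (SQL Server 2016+): when any file in the filegroup hits its
-- autogrow threshold, grow all files in the filegroup together.
ALTER DATABASE MyDb
MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;
```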
variable (3590 rep)
Mar 8, 2022, 07:34 AM • Last activity: Mar 9, 2022, 07:41 AM
0 votes
1 answer
71 views
Looking for help understanding what is happening in this SQL Script that shows key info about SQL Server Autogrowth Events
Here is the code below:
DECLARE @current_tracefilename VARCHAR(500);
DECLARE @0_tracefilename VARCHAR(500);
DECLARE @indx INT;
SELECT @current_tracefilename = path
FROM sys.traces
WHERE is_default = 1;
SET @current_tracefilename = REVERSE(@current_tracefilename);
SELECT @indx = PATINDEX('%\%', @current_tracefilename);
SET @current_tracefilename = REVERSE(@current_tracefilename);
SET @0_tracefilename = LEFT(@current_tracefilename, LEN(@current_tracefilename) - @indx) + '\log.trc';
SELECT DatabaseName, 
       te.name, 
       Filename, 
       CONVERT(DECIMAL(10, 3), Duration / 1000000e0) AS TimeTakenSeconds, 
       StartTime, 
       EndTime, 
       (IntegerData * 8.0 / 1024) AS 'ChangeInSize MB', 
       ApplicationName, 
       HostName, 
       LoginName
FROM ::fn_trace_gettable(@0_tracefilename, DEFAULT) t
     INNER JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
WHERE(trace_event_id >= 92
      AND trace_event_id <= 95)
ORDER BY t.StartTime;
I stumbled on this in SQL Shack when my team was experiencing autogrowth events but could not find the cause. This script was exceptionally helpful in tracking down what caused the autogrowths, but it uses many concepts I have not worked with, like the path column of sys.traces, ::fn_trace_gettable, and PATINDEX().
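A sketch of the REVERSE/PATINDEX idiom the script uses, with an illustrative path of my own (not from the question): reversing the string puts the last backslash first, so PATINDEX can find it and LEFT can keep only the directory.

```sql
-- Sketch: extract the directory from a trace file path.
-- REVERSE puts the last '\' first, PATINDEX finds it, LEFT trims the file name.
DECLARE @p VARCHAR(500) = 'C:\traces\log_123.trc';  -- illustrative path
DECLARE @indx INT = PATINDEX('%\%', REVERSE(@p));   -- chars from the end to the last '\'
SELECT LEFT(@p, LEN(@p) - @indx) + '\log.trc';      -- returns 'C:\traces\log.trc'
```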
statsGuy (3 rep)
Aug 25, 2021, 04:53 PM • Last activity: Aug 25, 2021, 08:19 PM
0 votes
0 answers
54 views
Shrinking log files in SQL Server 2014
I ran sp_Blitz on our production server over the weekend and it output a few results regarding log file growth. Example: The database [MYDB] has a 104 GB transaction log file, larger than the total data file sizes. This may indicate that transaction log backups are not being performed, or not performed often enough. This DB is part of an Always On Availability Group and the recovery model is Full. Full backups are taken nightly; log backups are taken every hour. I am working on shrinking the log files in small increments. In order to do so, I will first take a log backup manually. After shrinking, I need to set the log file to an appropriate size. Autogrowth is set to a percentage (bad practice). My questions are: How do I know what the appropriate size to reduce to is? Is there a special formula to keep in consideration when setting the size? Also, I have noticed that sometimes when I shrink, the log size will not shrink. What is the reason for this, and what should I do in this case? Thanks in advance.
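The incremental shrink step described above might look like the following sketch (the backup path, logical log file name, and target size are hypothetical, not taken from the question):

```sql
-- Sketch: back up the log first, then shrink in a small increment,
-- then replace percentage growth with a fixed increment.
BACKUP LOG [MYDB] TO DISK = N'\\backupshare\MYDB_log.trn';
GO
USE [MYDB];
GO
DBCC SHRINKFILE (N'MYDB_log', 65536);  -- target size in MB (64GB here)
GO
ALTER DATABASE [MYDB] MODIFY FILE (NAME = N'MYDB_log', FILEGROWTH = 512MB);
```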
sqllover2020 (73 rep)
Aug 19, 2021, 02:36 PM
-1 votes
1 answer
138 views
Data loss is happening due to auto growth of the MSSQL Database
I have a database that is 101 GB in size (as of now) and continuing to increase. How to stop the data loss that is happening due to the unavailability of the SQL server during the auto-growth event? I tried with 4 GB then 2 GB and now 500 MB, still I'm unsure it will work.
Chinmay Kalyankar (1 rep)
Jun 25, 2021, 10:15 AM • Last activity: Jun 25, 2021, 10:51 AM
0 votes
1 answer
186 views
DB autogrowth issue
I have a database that will grow in 1 month's time. The autogrowth was set to 500MB, but as I monitored it, the consumption is 1GB+ per day. Should I wait for it to auto grow, or is it better to set the initial size to a much bigger size? Or set the autogrowth to a much higher size? Another thing is the timing of expanding the size: is it OK to do it online during production, and will it have minimal impact on our production, or should I schedule downtime? Thanks for the response.
Neil de Rama (1 rep)
Jan 21, 2021, 02:15 PM • Last activity: Jan 21, 2021, 03:11 PM
1 vote
1 answer
512 views
Disk Usage Report SSMS - query to obtain same info
I would like to have a query that obtains the same info as the **Disk Usage** report, section **Data/Log files autogrowth/autoshrink events** (screenshot of the report). I would like to build a custom monitoring report, but I could not find where this report comes from. The only thing I could find is that this info comes from sys.fn_trace_gettable, but I don't know which fields or joins I must use to have the same functionality as the report above. EDIT: I need these fields (see screenshot), and also the size of the growth (for analytical purposes). Thanks for your help.
Eduardo Pivaral (139 rep)
Sep 28, 2017, 05:21 PM • Last activity: May 13, 2020, 05:21 PM
6 votes
2 answers
17523 views
Issues with TempDB mdf file ever increasing
I have a tempdb growth issue. Let me preface everything by giving my tempdb settings (screenshot: tempdb settings). Even with no queries running on the database/server, tempdb keeps increasing in size, at first rapidly and then slowly, without stopping. I've run many queries to figure out what is running; below is the result of the query at the bottom, which actually gave me results I could use (screenshot: dm_db_task_space_usage results). As can be seen, they are all internal spids. Is there any way to find out why tempdb continues to grow out of control, and how to mitigate it? Any help on this problem would be greatly appreciated.
--Query that returned the result set
SELECT session_id,
       SUM(internal_objects_alloc_page_count) AS task_internal_objects_alloc_page_count,
       SUM(internal_objects_dealloc_page_count) AS task_internal_objects_dealloc_page_count
FROM sys.dm_db_task_space_usage
GROUP BY session_id
HAVING SUM(internal_objects_alloc_page_count) > 0
nightmareivy (123 rep)
Jan 13, 2014, 07:56 PM • Last activity: May 7, 2020, 10:32 PM
2 votes
1 answer
456 views
Do I need to enable Trace Flag 1117 for equally sized data files?
I was reading about the proportional fill algorithm in SQL Server and then I recalled TF1117. BOL states:

> When a file in the filegroup meets the autogrow threshold, all files in the filegroup grow. This trace flag affects all databases and is recommended only if it is safe to grow all files in a filegroup by the same amount in every database.

What I can't understand is: if data files are filled proportionally, won't they also auto-grow proportionally? In that case, couldn't we omit using TF1117?

**EDIT** Here is the version of SQL Server (screenshot).
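For reference, the flag in question is global. A sketch of enabling it at runtime (a startup parameter, -T1117, is the usual persistent route):

```sql
-- Sketch: enable TF1117 server-wide so all files in a filegroup
-- grow together when one of them hits the autogrow threshold.
DBCC TRACEON (1117, -1);  -- -1 applies the flag server-wide
DBCC TRACESTATUS (1117);  -- verify the flag is active
```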
Rauf Asadov (1313 rep)
Jan 23, 2020, 01:51 PM • Last activity: Jan 23, 2020, 02:22 PM
2 votes
1 answer
83 views
Log Autogrowths bigger than log size, but no shrinks to explain how
I have a database with a log file size of 1GB. I implemented some logging of autogrowths, to eventually set up notifications for when a database autogrows. What I'm experiencing, though, is that SQL Server says there were autogrowths of the log file, but the file size of the log file is not actually changing. There are enough autogrowths to have grown my file to 12GB in size since the beginning of the year, but my file is still only 1GB. Does anyone know how/why this would happen? Microsoft SQL Server 2017 (RTM-CU16) (KB4508218) - 14.0.3223.3 (X64) Jul 12 2019 17:43:08 Copyright (C) 2017 Microsoft Corporation Enterprise Edition (64-bit) on Windows Server 2016 Standard 10.0 (Build 14393: ) (Hypervisor) (screenshot of the logged autogrowth events) There haven't been any autoshrinks, or manual shrinks, of the log file.
Todd Histed (31 rep)
Jan 13, 2020, 02:47 PM • Last activity: Jan 14, 2020, 09:29 PM
0 votes
1 answer
121 views
Is there a way to determine what caused the last growth operation of my database (MDF file)?
If I don't already have any logging in place, is there any way to use the DMVs or anything else on SQL Server to determine which query / event caused my database's last grow operation?
J.D. (40893 rep)
Dec 5, 2019, 06:56 PM • Last activity: Dec 10, 2019, 04:47 PM
1 vote
1 answer
806 views
autogrowth set higher than available disk
First, all our autogrow settings are either turned off or the file size is set to the max autogrow size; I guess same difference, as I came in after the setup. Although I've never had this problem, I've been looking for an answer to a possible condition: I need my file to grow a tiny amount, say 100 MB, for an insert to happen. However, my autogrow is set to, say, 500MB, and the disk has only 450 MB available. What would happen in that scenario? And say this insert happens once in a while at night, so I had no idea it was going to happen and couldn't (or didn't) pre-grow the file. I'm really just curious about how SQL Server would handle this.
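A hedged sketch of how to see this situation coming, assuming SQL Server 2008 R2 SP1 or later (where sys.dm_os_volume_stats is available): compare each file's next growth increment with the free space on its volume.

```sql
-- Sketch: next growth increment per file vs. free space on its volume.
SELECT DB_NAME(mf.database_id) AS database_name,
       mf.name                 AS logical_name,
       CASE WHEN mf.is_percent_growth = 1
            THEN mf.size * 8 * mf.growth / 100   -- percent growth: KB
            ELSE mf.growth * 8                   -- fixed growth: pages -> KB
       END                     AS next_growth_kb,
       vs.available_bytes / 1024 AS free_kb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
```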
user18843
Dec 2, 2019, 05:32 PM • Last activity: Dec 3, 2019, 03:38 PM
Showing page 1 of 20 total questions