
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

2 votes
2 answers
464 views
PostgreSQL: Long running auto vacuum and high wal rate
Recently a vacuum process triggered by the autovacuum daemon (PostgreSQL 13) on one of our tables took 1.5 hours to complete. During this period there was a high WAL rate (up to 7 MB/sec) and high disk I/O. The table is around 75 GB (200M rows), and the autovacuum count from the stats table is just 2 for the last 2 years. What are some autovacuum tuning possibilities for huge tables? My considerations are:
- Should I disable autovacuum for huge tables and vacuum manually?
- Can decreasing the threshold, and hence vacuuming more frequently, avoid long-running vacuums?
Any other solutions?
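On the second consideration (more frequent, smaller vacuums), per-table storage parameters are the usual lever; a minimal sketch, with the table name and values as placeholders to tune:

ALTER TABLE big_table SET (
    autovacuum_vacuum_scale_factor = 0.01,   -- trigger vacuum at ~1% dead rows instead of the 20% default
    autovacuum_vacuum_threshold    = 10000
);

Smaller, more frequent vacuums tend to keep each run's WAL burst shorter, though the total WAL written over time stays roughly the same.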
goodfella (595 rep)
Jun 28, 2024, 01:43 AM • Last activity: Jul 30, 2025, 03:00 AM
1 vote
2 answers
930 views
Can I get the created time of a PostgreSQL DB?
I'm using PostgreSQL-13.0. Is there a way I can get the created time or last modified time of a database? Thanks!
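PostgreSQL does not record a creation timestamp for databases. One common approximation (a sketch; it needs superuser or pg_read_server_files rights, and a restore or pg_upgrade resets the file times) reads the metadata of the database's PG_VERSION file, which is written when the database is created:

SELECT datname,
       (pg_stat_file('base/' || oid || '/PG_VERSION')).modification AS approx_created
FROM pg_database
WHERE datname = current_database();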
Leon (411 rep)
Dec 17, 2023, 04:19 PM • Last activity: Jul 18, 2025, 12:06 PM
0 votes
1 answer
197 views
SQL Server - Maintenance - How to obtain duration per database from the commandlog table
Attempting to obtain the duration for individual maintenance operations per database from the CommandLog table that ships with Ola Hallengren's maintenance scripts. If anyone has a better solution to this, I'd love to hear from you.

set dateformat mdy
DECLARE @months AS int = -3
/* DBCC_CHECKDB,UPDATE_STATISTICS,ALTER_INDEX,BACKUP_DATABASE,BACKUP_LOG */
DECLARE @operation AS varchar(50) = 'BACKUP_DATABASE'
DECLARE @datbasename as varchar(255) = 'Logs'
SELECT @operation
DECLARE @commandsql as varchar(max)
set @commandsql = '
;WITH reindex_table as (
    SELECT TOP (1000) [DatabaseName]
          ,[StartTime]
          ,[EndTime]
          ,DATEDIFF(MI,StartTime,EndTime) as duration
          ,convert(varchar, StartTime, 23) as operationdate
          ,ROW_NUMBER() over (PARTITION by convert(varchar, StartTime, 23) order by convert(varchar, StartTime, 23) desc) ''rownumber''
    FROM [_DBA_Tools].[dbo].[CommandLog]
    where 1=1
      And CommandType = '''+ @operation +'''
      And DatabaseName = ''' + @datbasename + '''
      and StartTime >= DATEADD(MONTH, '+cast(@months as varchar(100))+', GETDATE())
      and StartTime 1
group by operationdate, DatabaseName
order by operationdate
'
PRINT @commandsql
EXEC (@commandsql)

The CommandLog table doesn't have an id that groups together the operations that were all run from the same job execution.
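A non-dynamic sketch of a similar aggregation, assuming the CommandLog table lives in _DBA_Tools.dbo as in the question:

SELECT  DatabaseName,
        CommandType,
        CONVERT(date, StartTime) AS OperationDate,
        SUM(DATEDIFF(SECOND, StartTime, EndTime)) AS TotalDurationSeconds
FROM    [_DBA_Tools].[dbo].[CommandLog]
WHERE   CommandType = 'BACKUP_DATABASE'
  AND   StartTime >= DATEADD(MONTH, -3, GETDATE())
GROUP BY DatabaseName, CommandType, CONVERT(date, StartTime)
ORDER BY OperationDate;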
user4659
Aug 20, 2022, 01:45 PM • Last activity: Jun 28, 2025, 06:11 AM
22 votes
3 answers
34850 views
Why not rebuild indexes with page count <1000?
I use Ola Hallengren's script for index maintenance. Before I did that, I used the following query to see which indexes are fragmented most:

SELECT dbschemas.[name] as 'Schema',
       dbtables.[name] as 'Table',
       dbindexes.[name] as 'Index',
       indexstats.avg_fragmentation_in_percent,
       indexstats.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS indexstats
INNER JOIN sys.tables dbtables on dbtables.[object_id] = indexstats.[object_id]
INNER JOIN sys.schemas dbschemas on dbtables.[schema_id] = dbschemas.[schema_id]
INNER JOIN sys.indexes AS dbindexes ON dbindexes.[object_id] = indexstats.[object_id]
    AND indexstats.index_id = dbindexes.index_id
ORDER BY indexstats.avg_fragmentation_in_percent desc

In my case, the avg_fragmentation was over **70%** for **15** indexes and over **30%** for **28** indexes. So, I rebuilt every index using Ola Hallengren's solution. When I ran the query again, this was the result: fragmentation over **70%** for **12** indexes, over **30%** for **15** indexes. I figured the reason was the page_count, which was lower than 1000 for each of the indexes that were still very fragmented. For example, one of the indexes with a page_count of 967 has a fragmentation percentage of **98,98%**! To me, it seems worth rebuilding that index! I did, and afterwards, the fragmentation was **0%**. Also, an index with a page_count of 132 went from **95%** to **0%**.
So, my question is, what reasons would there be to NOT rebuild those indexes? One reason might be that rebuilding costs time and resources, but because the indexes are small, doesn't this mean it costs relatively few resources and it would still be beneficial to rebuild them anyway? There are multiple related questions on this site, but all of them answer why an index would not defragment, or whether small indexes are still useful if you don't defragment them, whereas here the statement DOES decrease fragmentation, with the question being: why not do it anyway?
user1261104 (415 rep)
Jan 2, 2015, 10:22 AM • Last activity: Jun 5, 2025, 11:06 AM
0 votes
4 answers
301 views
Index Maintenance on busy OLTP database
I have a database where constant reads and writes are happening all the time. Searches are being done with wildcard entries. The server is on SQL Server 2016 Standard Edition. There is no budget for Enterprise nor any intention to stop using the wildcard searches, and there is no maintenance window. Since wildcard searches are being made, indexes are also of little use: queries do full table scans, which creates locking. I am using Ola Hallengren's script for index maintenance, but it is taking 10-12 hours to complete for a 300 GB database, as it is busy 24/7. Below is the script:

EXECUTE dbo.IndexOptimize
    @Databases = 'user_databases',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @FragmentationLevel1 = 5,
    @FragmentationLevel2 = 30,
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y'

This has become a frustrating issue and I am about to quit. Please advise what my options are here.
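If the goal is to cap the run rather than let it go 10-12 hours, a sketch of a lighter configuration (the @TimeLimit and @MinNumberOfPages parameters come from Ola Hallengren's documented parameter list; verify them against the installed version, and treat the thresholds as placeholders):

EXECUTE dbo.IndexOptimize
    @Databases = 'user_databases',
    @FragmentationLow = NULL,
    @FragmentationMedium = 'INDEX_REORGANIZE',
    @FragmentationHigh = 'INDEX_REORGANIZE',    -- Standard Edition cannot rebuild online; avoid offline rebuilds on a 24/7 system
    @FragmentationLevel1 = 30,
    @FragmentationLevel2 = 50,
    @MinNumberOfPages = 5000,                   -- skip small indexes entirely
    @TimeLimit = 14400,                         -- stop starting new commands after 4 hours
    @UpdateStatistics = 'ALL',
    @OnlyModifiedStatistics = 'Y'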
SQL_NoExpert (1117 rep)
Mar 23, 2021, 02:28 PM • Last activity: May 22, 2025, 02:03 PM
0 votes
1 answer
278 views
Error message when running a SQL maintenance plan backup
I'm receiving the following error messages when my daily maintenance backup plans execute:
Error Message 1:
> [298] SQLServer Error: 407, Internal error. The string routine in file e:\sql11_main_t\sql\ntdbms\storeng\dmu\dbcc\source\dumplog.cpp, line 5473 failed with HRESULT 0x8007007a. [SQLSTATE 42000] (ConnExecuteCachableOp)
Error Message 2:
> SQLServer Error: 8630, Internal Query Processor Error: The query processor encountered an unexpected error during execution (HRESULT = 0x80040e19). [SQLSTATE 42000] (ConnExecuteCachableOp)
What can I do to fix these errors and get my backups to run successfully again? Thank you in advance.
Mike Gomes (1 rep)
Nov 7, 2023, 09:10 PM • Last activity: May 18, 2025, 04:06 PM
0 votes
1 answer
290 views
Oracle gather stats autotask being interrupted before maintenance window ends
A few months ago I managed to enlarge the maintenance windows for the autotasks on a pluggable database, this way:
BEGIN
  dbms_scheduler.disable(
    name  => 'WINDOW_NAME');
  dbms_scheduler.set_attribute(
    name      => 'WINDOW_NAME',
    attribute => 'DURATION',
    value     => numtodsinterval(6, 'hour'));
  dbms_scheduler.enable(
    name => 'WINDOW_NAME');
END;
/
This was what I got:

WINDOW_NAME      START_TIME                               DURATION
---------------- ---------------------------------------- ---------------
THURSDAY_WINDOW  08-JUL-21 10.00.00.241279 PM +02:00      +000 06:00:00
FRIDAY_WINDOW    09-JUL-21 10.00.00.241279 PM +02:00      +000 06:00:00
SATURDAY_WINDOW  10-JUL-21 10.00.00.241279 PM +02:00      +000 06:00:00
SUNDAY_WINDOW    11-JUL-21 10.00.00.241279 PM +02:00      +000 06:00:00
MONDAY_WINDOW    12-JUL-21 10.00.00.241279 PM +02:00      +000 06:00:00
TUESDAY_WINDOW   13-JUL-21 10.00.00.241279 PM +02:00      +000 06:00:00
WEDNESDAY_WINDOW 14-JUL-21 10.00.00.241279 PM +02:00      +000 06:00:00
But now, on that same pluggable db I'm noticing that since August 24th stats jobs are being stopped due to the end of the window:
TASK_NAME                 STATUS     CURRENT_JOB_NAME          LAST_TRY_DATE                       LAST_TRY_R
------------------------- ---------- ------------------------- ----------------------------------- ----------
gather_stats_prog         ENABLED                              12-SEP-21 03.00.09.858615 AM +02:00 STOPPED AT END OF MAINTENANCE WINDOW

auto_space_advisor_prog   ENABLED                              12-SEP-21 12.33.31.490404 AM +02:00 SUCCEEDED
AUTO_SQL_TUNING_PROG      ENABLED                              11-SEP-21 11.00.22.422336 PM +02:00 SUCCEEDED
At first I thought that it might be caused by some performance problem, but then I noticed that the jobs were still running for 4 hours (the default window duration):
CLIENT_NAME                         JOB_NAME                  JOB_STATUS JOB_START_TIME                             JOB_DURATION
----------------------------------- ------------------------- ---------- ------------------------------------------ ---------------
auto optimizer stats collection     ORA$AT_OS_OPT_SY_9601     STOPPED    07-SEP-21 10.00.19.825083 PM EUROPE/VIENNA +000 03:59:44
auto optimizer stats collection     ORA$AT_OS_OPT_SY_9621     STOPPED    08-SEP-21 10.00.11.786102 PM EUROPE/VIENNA +000 03:59:53
I then noticed that the maintenance windows duration was still set to 4 hours on the container database:
WINDOW_NAME      START_TIME                               DURATION
---------------- ---------------------------------------- ---------------
SUNDAY_WINDOW    12-SEP-21 10.00.00.235575 PM +02:00      +000 04:00:00
MONDAY_WINDOW    13-SEP-21 10.00.00.235575 PM +02:00      +000 04:00:00
TUESDAY_WINDOW   14-SEP-21 10.00.00.235575 PM +02:00      +000 04:00:00
WEDNESDAY_WINDOW 15-SEP-21 10.00.00.235575 PM +02:00      +000 04:00:00
THURSDAY_WINDOW  16-SEP-21 10.00.00.235575 PM +02:00      +000 04:00:00
FRIDAY_WINDOW    17-SEP-21 10.00.00.235575 PM +02:00      +000 04:00:00
SATURDAY_WINDOW  18-SEP-21 10.00.00.235575 PM +02:00      +000 04:00:00
Should I modify the duration on the container too? Thank you.
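If the container-level windows are the ones actually closing at the four-hour mark, the same change can be applied in the root; a sketch (window name is an example, repeat per window):

ALTER SESSION SET CONTAINER = CDB$ROOT;
BEGIN
  dbms_scheduler.disable(
    name  => 'MONDAY_WINDOW');
  dbms_scheduler.set_attribute(
    name      => 'MONDAY_WINDOW',
    attribute => 'DURATION',
    value     => numtodsinterval(6, 'hour'));
  dbms_scheduler.enable(
    name => 'MONDAY_WINDOW');
END;
/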
trustno1 (9 rep)
Sep 12, 2021, 03:13 PM • Last activity: May 8, 2025, 06:02 AM
2 votes
2 answers
93 views
When reading from a columnstore index, is PERSIST_SAMPLE_PERCENT ignored in UPDATE STATISTICS?
I have a partitioned table with a rowstore primary key and a nonclustered columnstore index. The leading key of both (not that key order matters for columnstore) is the same. When updating statistics, SQL Server regularly chooses to scan the columnstore index. This takes far too long for my taste, so I tried
UPDATE STATISTICS My_Table
WITH PERSIST_SAMPLE_PERCENT = ON, SAMPLE 3 PERCENT
To my surprise, the statistics updates still took a very long time. The execution plans very much suggest that the columnstore index is being scanned in the same way as it previously was. So, I have my question: when reading from a columnstore index, is PERSIST_SAMPLE_PERCENT ignored in UPDATE STATISTICS? Automatic statistics updates seem to share this issue. Update: I suspect that it might actually be the deleted bitmap that causes all of this to go wrong.
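One way to see what was actually persisted for each statistic on the table (a sketch; persisted_sample_percent is exposed by sys.dm_db_stats_properties on recent SQL Server builds, and My_Table is the table from the question):

SELECT s.name,
       sp.last_updated,
       sp.[rows],
       sp.rows_sampled,
       sp.persisted_sample_percent
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.My_Table');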
J. Mini (1237 rep)
Mar 29, 2025, 12:33 AM • Last activity: Apr 20, 2025, 06:39 PM
4 votes
1 answer
359 views
Best practise for reducing database file size during migration to new server
**Introduction**
I have been reading many articles about deleting from databases, and the perils of using the shrink command. However, I have an opportunity to move a database to a new server with a big outage window, so I do want to give the database a reset and size reduction (to increase performance and save cost in a hosted environment).
**Background**
* Existing SQL Server is 2016 Web Edition.
* New SQL Server is 2022 Standard Edition.
* Database size ~120GB.
* A few large tables make up the bulk of that, as well as indexes on those large tables.
* I want to focus just on the database size reduction. I'm comfortable with the other parts of creating a new server.
**Proposed process**
_Before outage_
1. Run stored procedures which use DELETE FROM syntax. I have tested these carefully. They batch the deletes and will be run over a few days.
2. Monitor the log files and log disk to double-check there is enough space.
3. Ensure delete processing is done by the time we get to the outage window.
_During outage_
4. Take a full backup of the database and put it in a safe place.
5. Drop the largest indexes (there are about 3 or 4 that I will target).
6. Put the database into simple recovery mode.
7. Shrink the log and database files as much as possible.
8. Backup the database.
9. Copy the backup file to the new server.
10. Restore the database to the correct file locations.
11. Apply the indexes which were removed.
12. Rebuild all indexes.
13. Update statistics.
14. Put the database back to the full recovery model.
15. Testing and restarting applications.
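For steps 6-7, a minimal sketch of the commands involved (database and logical file names are placeholders):

ALTER DATABASE [MyDb] SET RECOVERY SIMPLE;
DBCC SHRINKFILE (N'MyDb_log', 1024);           -- target log size in MB
DBCC SHRINKFILE (N'MyDb_data', TRUNCATEONLY);  -- release free space at the end of the data file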
ErikEV (77 rep)
Apr 7, 2025, 11:59 PM • Last activity: Apr 8, 2025, 11:49 AM
2 votes
2 answers
4274 views
MySQL db optimization error on innodb tables
I have a database named 'mysql' which is being used on a RHEL 8 server running MySQL 8.0.32. When trying to optimize the entire database using the mysqlcheck command 'mysqlcheck -o mysql', the errors below are returned. I know that some of this is informational messages and can be ignored, however I am wondering how to get more info about these errors. Is there a way to exclude tables from the mysqlcheck -o optimize (like the '--ignore-table' option with mysqldump)?

mysql.innodb_index_stats
note     : Table does not support optimize, doing recreate + analyze instead
error    : The used command is not allowed with this MySQL version
status   : Operation failed
mysql.innodb_table_stats
note     : Table does not support optimize, doing recreate + analyze instead
error    : The used command is not allowed with this MySQL version
status   : Operation failed
mysql.gtid_executed
Warning  : Please do not modify the gtid_executed table. This is a mysql internal system table to store GTIDs for committed transactions. Modifying it can lead to an inconsistent GTID state.
note     : Table does not support optimize, doing recreate + analyze instead
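As far as I know mysqlcheck has no direct equivalent of mysqldump's --ignore-table, so one workaround (a sketch) is to generate OPTIMIZE TABLE statements only for the tables you actually want, skipping the system schemas, and then run the generated output:

SELECT CONCAT('OPTIMIZE TABLE `', table_schema, '`.`', table_name, '`;') AS stmt
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('mysql', 'sys', 'information_schema', 'performance_schema');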
PirateRagnaros (21 rep)
Mar 24, 2023, 06:43 AM • Last activity: Feb 24, 2025, 06:03 PM
0 votes
2 answers
2833 views
How to reduce the size of a SQL Server-Database?
## Context ##
We have a Backup DB, where we normally store backups of tables from the production DB before doing any updates/deletes. If anything goes wrong, we can restore the data from the table created in the Backup DB.
## Problem ##
The size of the Backup DB is rapidly increasing and **I need a way to reduce its size**.
### Steps so far ###
I tried deleting old tables and shrinking the Backup DB, but shrinking takes too much time.
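Shrinking in small increments, rather than in one large operation, is the usual way to keep the shrink tolerable; a sketch with placeholder file names and sizes:

DBCC SHRINKFILE (N'BackupDB_data', 90000);  -- target size in MB; repeat with progressively smaller targets
DBCC SHRINKFILE (N'BackupDB_log', 1024);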
Bishal (1 rep)
Jan 15, 2022, 05:42 PM • Last activity: Feb 13, 2025, 06:03 PM
7 votes
2 answers
7013 views
IndexOptimize - Configuration
We recently switched to Ola Hallengren's maintenance script and automated the deployment of *MaintenanceSolution.sql* to our customers' SQL Server instances. We need to set these parameters for the job *IndexOptimize - USER_DATABASES*:
* @UpdateStatistics = 'ALL'
* @OnlyModifiedStatistics = 'Y'
I see that these parameters are present in *MaintenanceSolution.sql*. After I change the values of the above-mentioned parameters in *MaintenanceSolution.sql* to
@UpdateStatistics nvarchar(max) = 'ALL'

@OnlyModifiedStatistics nvarchar(max) = 'Y'
and then execute, I don't see either @UpdateStatistics = 'ALL' or @OnlyModifiedStatistics = 'Y' added in *Job Step properties - IndexOptimize - USER_DATABASES*. My questions are:
1. Why don't the "statistics" options appear in the job's command?
2. Is it wrong to edit *MaintenanceSolution.sql* directly for this?
3. Is there a way to add these parameters to the job using a query?
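On question 3, a sketch of updating the job step from T-SQL (the step id and the exact command text are assumptions — script out the existing step first and adapt it):

EXEC msdb.dbo.sp_update_jobstep
     @job_name = N'IndexOptimize - USER_DATABASES',
     @step_id  = 1,
     @command  = N'EXECUTE [dbo].[IndexOptimize] @Databases = ''USER_DATABASES'', @UpdateStatistics = ''ALL'', @OnlyModifiedStatistics = ''Y'', @LogToTable = ''Y''';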
William (73 rep)
Aug 21, 2019, 10:38 AM • Last activity: Feb 13, 2025, 09:58 AM
258 votes
6 answers
448503 views
Force drop db while others may be connected
I need to remove a database from a PostgreSQL DB cluster. How can I do it even if there are active connections? I need sort of a `-force` flag, that will drop all connections and then the DB. How can I implement it? I'm using `dropdb` currently, but other tools are possible.
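For reference, newer releases have this built in, and on older versions the other sessions can be terminated first; a sketch with mydb as a placeholder:

-- PostgreSQL 13 and later:
DROP DATABASE mydb WITH (FORCE);
-- or from the shell: dropdb --force mydb

-- Older versions: terminate other sessions, then drop
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'mydb' AND pid <> pg_backend_pid();
DROP DATABASE mydb;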
Alex (2775 rep)
Jan 30, 2012, 11:28 AM • Last activity: Feb 10, 2025, 09:43 PM
0 votes
2 answers
91 views
Performance Impact of PURGE BINARY LOGS on MySQL 5.7.32
I am currently facing disk space issues with a MySQL 5.7.32 server that has >500 binary logs of 1.1GB each. I have spent several days reading up on this, and found that PURGE BINARY LOGS is the right command to use, but I have not been able to find any information on the potential performance impact or whether the MySQL server locks up during the process. I have checked the server variables and found that binlog_gtid_simple_recovery and gtid_mode are both set to ON, and as far as I can tell this would affect the performance in a positive way. Can anyone shed some light on the performance impact or the locks I can expect when purging logs in small batches?
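A sketch of the batched purge itself (dates and file names are placeholders; if replicas read from this server, confirm they are past the files being removed first):

PURGE BINARY LOGS BEFORE '2024-12-01 00:00:00';
-- or by file name, keeping everything from the named log onwards:
PURGE BINARY LOGS TO 'mysql-bin.000400';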
Rag3Rac3r (3 rep)
Jan 29, 2025, 11:14 AM • Last activity: Feb 3, 2025, 10:42 AM
1 vote
1 answer
1998 views
Reset all sequences so that they continue after max(id) + 1?
It looks like I messed up a database migration and, while all the sequences are there, they start at 1, which triggers errors like:
> django.db.utils.IntegrityError: duplicate key value violates unique constraint "django_admin_log_pkey" DETAIL: Key (id)=(2) already exists.
Is there a query/script I could run that would go across all tables in the database, look at any columns tied to a sequence, and reset those sequences to max(column) + 1? Using PostgreSQL v11.
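A sketch of a query that generates one setval() call per sequence-backed column; it assumes the columns are wired up as serial/identity defaults so pg_get_serial_sequence can find the sequence, and the generated statements still need to be executed afterwards:

SELECT format('SELECT setval(%L, COALESCE((SELECT MAX(%I) FROM %I.%I), 0) + 1, false);',
              pg_get_serial_sequence(quote_ident(n.nspname) || '.' || quote_ident(c.relname), a.attname),
              a.attname, n.nspname, c.relname) AS fix_stmt
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum > 0 AND NOT a.attisdropped
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND pg_get_serial_sequence(quote_ident(n.nspname) || '.' || quote_ident(c.relname), a.attname) IS NOT NULL;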
d33tah (429 rep)
Sep 3, 2019, 06:41 AM • Last activity: Jan 19, 2025, 06:02 AM
5 votes
3 answers
4904 views
Should I be regularly shrinking my DB or at least my log file?
My question is: should I be running one or both of the shrink commands regularly, DBCC SHRINKDATABASE or DBCC SHRINKFILE?
=============================
Background
=============================
SQL Server: the database is 200 GB, the logs are 150 GB. Running this command

SELECT name
      ,size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS AvailableSpaceInMB
FROM sys.database_files;

produces this output:

MyDB: 159.812500 MB free
MyDB_Log: 149476.390625 MB free

So it seems there is some free space. Our backup schedule is as follows:
1. Transaction logs once an hour
2. Full backups twice a week
3. Differential backups 5 times a week
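Before shrinking, a quick check of why the log keeps its space is usually the first step; a sketch:

SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = N'MyDB';

With hourly log backups the value should normally be NOTHING or LOG_BACKUP; anything else (e.g. ACTIVE_TRANSACTION, REPLICATION) points at why the 150 GB stays allocated.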
Tom DeMille
Apr 23, 2010, 04:19 PM • Last activity: Jan 17, 2025, 01:27 PM
0 votes
0 answers
61 views
One step in Azure SQL job does not retry
I have an Azure SQL Database with a weekly maintenance job. The job scales the database up, performs various tasks, potentially including an index rebuild if fragmentation is high, and then scales the database down again. My issue is that, even though the 'scale-down' step of the job has 10 retries and a timeout of 12 hours configured, the step fails after 52 seconds and doesn't retry.
The step definition has retry_attempts = 10. An example execution: the step fails, with current_attempts = 1. It's a generic error, which is fine and expected:
> Command failed: A severe error occurred on the current command. The results, if any, should be discarded. A severe error occurred on the current command. The results, if any, should be discarded.
All other steps complete successfully - the 'scale-up' operation actually did need to retry before it succeeded, which is encouraging: that step succeeds, with current_attempts = 2. The error is the same as in the step which does not retry.
If you have any insight into why a certain step would not retry, when others in the same job do, then I'd really appreciate it - the same issue seems to occur in every execution of the job!
EDIT: It looks like markdown tables aren't supported here, so I swapped them out for screenshots.
yggdrasiljsp (1 rep)
Dec 4, 2024, 02:04 PM • Last activity: Dec 4, 2024, 02:09 PM
3 votes
2 answers
886 views
How can I run sp_refreshview across all views in my SQL Server database
I'm using SQL Server, and I received an error indicating that one or more of my SQL views is out of sync with its underlying SQL table.
> 'MySQLServerName' returned data that does not match expected data length for column 'MyColumnName'. The (maximum) expected data length is 50, while the returned data length is 52.
This occurs if you update the definition of the underlying SQL table but forget to update the associated SQL view(s). A quick fix for this error is to run sp_refreshview:

use MySQLDatabaseName
go
EXECUTE sp_refreshview N'MyViewName';

**But what if I have a long list of broken views, or I don't even know which views are broken, or I don't know which underlying table definitions no longer match those views? What if I want to save time by running sp_refreshview across every view in my database? What's an easy way to accomplish that task?**
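A minimal sketch that builds and runs one sp_refreshview call per non-schema-bound view in the current database:

DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'EXEC sp_refreshview N'''
             + QUOTENAME(SCHEMA_NAME(v.schema_id)) + N'.' + QUOTENAME(v.name)
             + N''';' + CHAR(10)
FROM sys.views AS v
WHERE OBJECTPROPERTY(v.object_id, 'IsSchemaBound') = 0;

EXEC sys.sp_executesql @sql;

Schema-bound views are skipped because sp_refreshview raises an error for them.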
Speedcat (349 rep)
Oct 22, 2024, 02:29 AM • Last activity: Oct 24, 2024, 02:30 AM
0 votes
2 answers
883 views
Finding tables & columns at risk of hitting max value of int
I just took over a medium-sized Postgres database, and the team has been surprised by many tables hitting their auto-incrementing id int max limits, which triggered unscheduled downtime to fix. Does anyone know of a simple tool to help check all tables/columns and easily find which are at risk of hitting the int max limits? I'm sure we can come up with a script to check, but I remember reading somewhere about a tool that helps identify potential issues exhaustively.
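A sketch for the sequence side of it (PostgreSQL 10+, where pg_sequences exposes last_value); columns that are plain int but not fed by a sequence would still need a separate MAX(column) check:

SELECT schemaname,
       sequencename,
       last_value,
       round(100.0 * last_value / 2147483647, 2) AS pct_of_int4_max
FROM pg_sequences
WHERE last_value IS NOT NULL
ORDER BY pct_of_int4_max DESC;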
leonsas (233 rep)
Oct 13, 2020, 01:30 PM • Last activity: Sep 18, 2024, 01:45 PM
2 votes
5 answers
695 views
Investigate why SQL Log file increased in size
We had one or more of our log files increase in size overnight, causing our SQL logs disk to almost fill up. I've been asked to look at what could have caused this. Can anyone suggest where to start, please? I have rough times for when the logs increased.
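One place to start is the default trace, which records file auto-growth events; a sketch (event class 93 is Log File Auto Grow):

DECLARE @trace nvarchar(260);
SELECT @trace = path FROM sys.traces WHERE is_default = 1;

SELECT DatabaseName, FileName, StartTime, Duration
FROM sys.fn_trace_gettable(@trace, DEFAULT)
WHERE EventClass = 93
ORDER BY StartTime DESC;

Note that the default trace rolls over, so it may only cover recent activity.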
AngryDog (61 rep)
Aug 21, 2024, 11:04 AM • Last activity: Aug 23, 2024, 09:36 PM