
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

1 votes
1 answers
697 views
Table is full even with innodb_file_per_table
I am trying to create an index on my table using an ALTER query. My my.cnf file contains:

innodb_data_home_dir = /usr/local/mysql5/data
innodb_data_file_path = ibdata1:60021538816;ibdata2:300M;ibdata3:30000M;ibdata4:10000M;ibdata5:10000M:autoextend
innodb_buffer_pool_instances = 3
innodb_buffer_pool_size = 3G
innodb_additional_mem_pool_size = 8M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 256M
innodb_additional_mem_pool_size = 128M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 2
innodb_lock_wait_timeout = 100
innodb_file_per_table
innodb_flush_method=O_DIRECT

Still, every time, my ALTER query

alter table user add unique index idx_emailHash (emailHash);

fails with "Table 'user' is full". What am I missing? I am using MySQL 5.6. Some more info:

[root@db data]# ll | grep user
-rw-rw----. 1 mysql mysql 19551 Jun 10 14:33 user.frm
-rw-rw----. 1 mysql mysql 28412215296 Jun 10 22:58 user.ibd
[root@db data]# ll | grep ibd
-rwxr-xr-x. 1 mysql mysql 60021538816 Jun 10 22:58 ibdata1
-rw-rw----. 1 mysql mysql 314572800 Jun 10 22:20 ibdata2
-rw-rw----. 1 mysql mysql 31457280000 Jun 10 22:33 ibdata3
-rw-rw----. 1 mysql mysql 10485760000 Jun 10 22:51 ibdata4
-rw-rw----. 1 mysql mysql 10485760000 Jun 10 22:51 ibdata5
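Not part of the original question, but a first sanity check for this error is whether the table actually lives in its own tablespace and how much free space the relevant tablespace reports. A minimal sketch, assuming MySQL 5.6's standard INFORMATION_SCHEMA views:

```sql
-- Is file-per-table actually in effect for new tables?
SHOW VARIABLES LIKE 'innodb_file_per_table';

-- Does the user table have its own tablespace (SPACE <> 0), or does it
-- still live in the shared ibdata* files because it predates the setting?
SELECT NAME, SPACE
FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES
WHERE NAME LIKE '%/user';

-- Free space reported for the tablespace holding the table.
SELECT TABLE_NAME, DATA_FREE / 1024 / 1024 AS data_free_mb
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'user';
```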
Ankit Bansal (61 rep)
Jun 10, 2021, 05:32 PM • Last activity: Aug 3, 2025, 04:05 AM
4 votes
1 answers
261 views
Index Rebuild with WAIT_AT_LOW_PRIORITY still blocks
I'm running...
ALTER INDEX PK_MyTable 
    ON fct.MyTable 
    REBUILD PARTITION = 261 
    WITH (
        ONLINE = ON (
            WAIT_AT_LOW_PRIORITY (
                MAX_DURATION = 30 minutes, 
                ABORT_AFTER_WAIT = SELF
            )
        ), 
        RESUMABLE = ON, 
        MAX_DURATION = 355 
    )
;
...and I expect it **not** to cause any blocking, but it does. It is asking for an LCK_M_SCH_M lock. My statement is blocked by an INSERT INTO statement, which is what I expect; but it is also blocking many other statements that are asking for an LCK_M_SCH_S lock, and I was not expecting that.

At the same time there is a BACKUP running of a database that has FILESTREAM. Could this be the reason? What else could cause this blocking? Have I completely misunderstood WAIT_AT_LOW_PRIORITY? RCSI is on.
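As a hedged aside (not from the asker): while the rebuild waits, the lock it is requesting and the sessions queued behind it can be inspected with standard DMVs. A rough sketch, run in the database that contains fct.MyTable:

```sql
-- Sessions that are blocked, and who blocks them.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_resource,
       r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0
   OR r.wait_type LIKE 'LCK_M_SCH%';

-- Lock requests against the table itself; WAIT_AT_LOW_PRIORITY requests
-- show up with request_status = 'LOW_PRIORITY_WAIT'.
SELECT request_session_id,
       request_mode,
       request_status
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND resource_associated_entity_id = OBJECT_ID(N'fct.MyTable');
```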
Henrik Staun Poulsen (2291 rep)
Jul 31, 2025, 08:54 AM • Last activity: Aug 1, 2025, 12:53 AM
2 votes
2 answers
581 views
How to determine if an ALTER statement will be a metadata-only operation O(1) or a size of data operation O(n)?
We are dealing with very large tables in SQL Azure. Size-of-data operations can lock tables for a very long time, so we must be careful to run these only on weekends. Metadata-only operations are quick and we can run them in production at any time.

Looking at a given ALTER TABLE statement, how can I determine whether it will be a metadata-only operation or a size-of-data operation? As far as I know it's not possible to view an ALTER's execution plan.
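One hedged, empirical way to answer this, sketched here with hypothetical object names: run the ALTER on a scratch copy inside a transaction and measure how much log it generates before rolling back.

```sql
BEGIN TRANSACTION;

ALTER TABLE dbo.SomeCopyOfTheTable      -- hypothetical scratch copy
    ALTER COLUMN SomeColumn int NULL;   -- the change being evaluated

-- Log generated by the open transaction: metadata-only changes log a few KB,
-- size-of-data changes log roughly in proportion to the table size.
SELECT database_transaction_log_bytes_used
FROM sys.dm_tran_database_transactions
WHERE database_id = DB_ID();

ROLLBACK TRANSACTION;
```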
Clement (576 rep)
Mar 30, 2020, 04:03 PM • Last activity: May 20, 2025, 03:37 PM
6 votes
1 answers
2638 views
Partitioning a 4TB table while staying online (SQL Server Standard)
We have a table that is currently ~4TB in size. We wish to introduce partitioning on this table. The table is already clustered on the key we wish to partition on. The options appear to be:

- Use ALTER TABLE SWITCH to switch the data of the table to a range-partitioned table that has only one partition (zero cost). Then call ALTER PARTITION FUNCTION for each partition we wish to introduce (definitely not zero cost).
- Re-create the clustered index using DROP_EXISTING=ON.

We're not using SQL Enterprise, so there's no ability to do this online. Running these queries locks the entire table.

* Even if we accepted the lock, we have no real way of estimating how long this would take (and whether we could complete it in time over a weekend).
* Most of the data is archive data; there's only a small slice of data that's actually 'live' and would be a problem when it's locked.

Trying to figure out if there's a strategy using ALTER PARTITION and ALTER TABLE SWITCH that allows us to do *most* of this with the data swapped out on a staging table.

* Can anyone suggest a way we can do this progressively? Thanks!
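For orientation only, a sketch (with made-up names) of the "switch into a one-partition table, then split" starting point mentioned above; the SWITCH itself is metadata-only, while each later SPLIT that separates existing rows is not.

```sql
-- One boundary far in the future, so the populated data all lands in partition 1.
CREATE PARTITION FUNCTION pf_BigTable (datetime2)
    AS RANGE RIGHT FOR VALUES ('9999-01-01');

CREATE PARTITION SCHEME ps_BigTable
    AS PARTITION pf_BigTable ALL TO ([PRIMARY]);

-- Target must match the source exactly (columns, clustered key, compression, filegroup).
CREATE TABLE dbo.BigTable_Partitioned
(
    EventDate datetime2    NOT NULL,
    Payload   varchar(100) NOT NULL,
    CONSTRAINT PK_BigTable_Partitioned PRIMARY KEY CLUSTERED (EventDate)
) ON ps_BigTable (EventDate);

-- Metadata-only: moves the existing data in without copying it.
-- (A trusted CHECK constraint proving all rows fall below the boundary may be required.)
ALTER TABLE dbo.BigTable SWITCH TO dbo.BigTable_Partitioned PARTITION 1;

-- Each SPLIT that divides existing rows physically moves data and is NOT metadata-only.
ALTER PARTITION SCHEME ps_BigTable NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pf_BigTable() SPLIT RANGE ('2016-01-01');
```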
James Crowley (223 rep)
Nov 9, 2017, 11:44 AM • Last activity: Jan 3, 2025, 11:01 AM
0 votes
1 answers
103 views
Why not make an online index rebuild resumable?
[The documentation](https://learn.microsoft.com/en-us/sql/relational-databases/indexes/guidelines-for-online-index-operations?view=sql-server-ver16#resumable-index-considerations) makes resumable index rebuilds sound like magic. For example, it claims

> Generally, there's no performance difference between resumable and nonresumable online index rebuild.

and

> Resumable index create or rebuild doesn't require you to keep open a long running transaction, allowing log truncation during this operation and a better log space management.

Personally, I've [had them save the day once before](https://dba.stackexchange.com/a/344483/277982). So, assuming:

- SQL Server 2022
- Enterprise Edition
- I have already decided to do my index maintenance online

why would I ever decide to rebuild an index without making said rebuild resumable?
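For context, a sketch (hypothetical index and table names) of the operational surface that only resumable operations expose, which is usually the argument for turning the option on:

```sql
-- Kick off a resumable online rebuild that pauses itself after an hour.
ALTER INDEX IX_SomeIndex ON dbo.SomeTable
    REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60 MINUTES);

-- From any session: pause it, check progress, then resume or abort later.
ALTER INDEX IX_SomeIndex ON dbo.SomeTable PAUSE;

SELECT name, state_desc, percent_complete, page_count
FROM sys.index_resumable_operations;

ALTER INDEX IX_SomeIndex ON dbo.SomeTable RESUME;
-- ALTER INDEX IX_SomeIndex ON dbo.SomeTable ABORT;
```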
J. Mini (1237 rep)
Jan 1, 2025, 03:05 PM • Last activity: Jan 2, 2025, 02:50 PM
1 votes
2 answers
189 views
Can a 5 terabyte index be rebuilt online without worrying about the transaction log?
### Context

Buried deep in [this documentation](https://learn.microsoft.com/en-us/sql/t-sql/statements/alter-database-transact-sql-file-and-filegroup-options?view=sql-server-ver16#arguments), we find

> a log file specified with unlimited growth has a maximum size of 2 TB

Suppose that I want to rebuild a single non-partitioned 5 TB index ONLINE that belongs to a table in a database with the FULL recovery model on my Enterprise Edition box. The recovery model cannot change because it is in an AAG and is also not my decision to make.

### The Problem

Does the limit on the size of the transaction log render it _impossible_ to rebuild such an index? If it is not impossible, then should any steps at all be taken to mitigate dramatically growing the transaction log?

### What I've Tried

I've tried to test this myself, but the non-production clones of this server are shared with hundreds of other people and don't have transaction log backups. This means I'm just as scared of this rebuild breaking them as I am of breaking the actual production box. Very early in my career, I saw production boxes broken by lack of care with the transaction log. What tests I've done certainly indicate that running such a rebuild causes the transaction log to grow rapidly. *I'm scared*. I also lack the hardware to test this on my personal machines. I don't have 10 TB of storage sitting on my desk!

I've tried to Google around the problem and found nothing to help. It is almost as if the problem that I'm worried about doesn't even exist.
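A hedged aside: whatever the answer, the usual mitigation is to watch log consumption and keep log backups running while the rebuild executes. A minimal monitoring sketch:

```sql
-- Current log size and how much of it is in use.
SELECT total_log_size_in_bytes / 1024.0 / 1024 / 1024 AS log_size_gb,
       used_log_space_in_bytes / 1024.0 / 1024 / 1024 AS used_log_gb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

-- What is currently preventing log truncation (ideally LOG_BACKUP, not ACTIVE_TRANSACTION).
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();
```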
J. Mini (1237 rep)
Dec 30, 2024, 10:36 PM • Last activity: Jan 2, 2025, 01:15 PM
0 votes
1 answers
386 views
Online schema changes for cloudsql DB
I'm new to Cloud SQL for MySQL. I have a table with a trigger. I need to do some DDL changes on that table. However, I can't take any downtime.

**What I have tried so far:** I tried pt-online-schema-change, because it works for our local databases. However, with managed Cloud SQL it wasn't working, and I opened an issue with Percona (PT-1964). Next, I checked gh-ost. However, as per their documentation (https://github.com/github/gh-ost/blob/master/doc/requirements-and-limitations.md), it doesn't support any table with triggers. Making changes with the shared-lock option isn't available to me, because we have a replica too.

Does anyone know any other method/suggestion to make DDL changes online on Cloud SQL for MySQL? Thanks in advance!
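One avenue worth checking, offered tentatively since it depends on the exact DDL involved: MySQL's native online DDL is unaffected by triggers, and requesting it explicitly makes the server error out rather than silently fall back to a locking table copy. A sketch with a placeholder change:

```sql
-- Ask for an in-place, non-locking change; MySQL raises an error instead of
-- silently falling back to a copying, locking ALTER if it cannot comply.
ALTER TABLE my_table
    ADD COLUMN new_col INT NULL,   -- placeholder for the real DDL change
    ALGORITHM = INPLACE,
    LOCK = NONE;
```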
Shiwangini (380 rep)
May 22, 2021, 07:06 AM • Last activity: Dec 30, 2024, 11:02 AM
15 votes
1 answers
30519 views
Changing foreign key to ON DELETE CASCADE with least amount of impact
I have an existing foreign key that has ON DELETE NO ACTION defined. I need to change this foreign key to ON DELETE CASCADE. I can do this within a transaction:

begin;
alter table posts drop constraint posts_blog_id_fkey;
alter table posts add constraint posts_blog_id_fkey foreign key (blog_id) references blogs (id) on update no action on delete cascade;
commit;

The problem is that the posts table is large (4 million rows), which means that validating the foreign key can take a non-trivial amount of time (I've tested this with a copy of the database). Dropping/adding the foreign key [acquires an ACCESS EXCLUSIVE lock](https://www.postgresql.org/docs/9.5/static/sql-altertable.html#AEN72289) on posts. So, adding the foreign key blocks *all* access to the posts table for a decent amount of time because the lock is held while constraint validation occurs. I need to perform an online migration (I don't have a dedicated downtime window).

I know that I can perform **2 transactions** to help with the check taking a long time:

begin;
alter table posts drop constraint posts_blog_id_fkey;
alter table posts add constraint posts_blog_id_fkey foreign key (blog_id) references blogs (id) on update no action on delete cascade not valid;
commit;

begin;
alter table posts validate constraint posts_blog_id_fkey;
commit;

The advantage of this approach is that the ACCESS EXCLUSIVE lock is held for a very short time for dropping/adding the constraint, and then validating the constraint only takes [a SHARE UPDATE EXCLUSIVE lock on posts and a ROW SHARE lock on blogs](https://www.postgresql.org/docs/9.5/static/sql-altertable.html), since I'm on Postgres 9.5.

Are there downsides to this? I know that adding NOT VALID to the constraints means that *existing* data is not validated, but any rows inserted/updated before the VALIDATE CONSTRAINT *will* be checked. Because the foreign key is dropped/added in the same transaction, is there any possibility of creating inconsistent data?
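A small, hedged addition: when the drop/add transaction has to queue for its ACCESS EXCLUSIVE lock behind a long-running query, everything else queues behind it too, so it is common to cap the wait and retry:

```sql
begin;
-- Give up quickly if the ACCESS EXCLUSIVE lock can't be acquired, so other
-- sessions don't pile up behind this transaction; retry later on failure.
set local lock_timeout = '2s';
alter table posts drop constraint posts_blog_id_fkey;
alter table posts add constraint posts_blog_id_fkey
    foreign key (blog_id) references blogs (id)
    on update no action on delete cascade not valid;
commit;
```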
TheCloudlessSky (337 rep)
Oct 27, 2017, 02:02 PM • Last activity: May 16, 2024, 08:16 PM
10 votes
1 answers
3031 views
How to add index to a big table with 60M records without downtime?
We have been struggling with one issue in the past few days. We want to add an index to a huge table with 60M records. At first we tried adding it with basic MySQL syntax, but it clogged our production DB. That table is used very frequently in production queries, so everything suffered. Our DB is hosted on AWS RDS. It's MySQL 5.7. We are using Laravel as our PHP framework.

The next thing we read about was that we could copy the current table into a new one, add the index to the new table, then shift the Laravel model to use the new table. We thought it made sense and would be easy enough. But copying the table data from one table to the new one was taking quite a lot of time; our calculations showed it would take days. We tried using Laravel as well as SQL commands, but it was too slow either way. Then we tried exporting the data as CSV and importing it, but again, too slow. The first few million records would insert fast, but then the table would become extremely slow at inserting.

Finally we tried mysqldump, and we realised it also locks the new table while inserting, so maybe that's why it's fast enough. It took around 6 hours to copy the table into the new one. BUT we were missing 2M records with this method. We also checked how many records came into the existing table while exporting/importing; it was only around 100K. So the exporting/importing was missing 1.9M records, and we couldn't figure out why.

After going through all these different ways, we have decided to put the app in downtime and add the index on the huge table.

I wanted to know, do others face this issue as well? Is there a way to add indexes on a huge table without causing downtime in production? Or is there a faster way to copy a big MySQL table without loss of data?
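A hedged note for readers in the same situation: on MySQL 5.7, adding a secondary index is an in-place, online operation, so the explicit form below (index and column names are placeholders) is usually worth trying before any copy-table scheme; the main production cost is extra I/O, not a long table lock.

```sql
-- In-place, online secondary index creation.
ALTER TABLE huge_table
    ADD INDEX idx_some_col (some_col),
    ALGORITHM = INPLACE,
    LOCK = NONE;
```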
Rohan (53 rep)
Dec 15, 2023, 12:19 PM • Last activity: Dec 20, 2023, 10:24 AM
9 votes
3 answers
1873 views
Online Index Rebuild of large table takes exclusive lock
I'm trying to rebuild the clustered index of a large table (77GB) on Azure SQL Database. There is high concurrent transactional activity on the table so I'm using the ONLINE=ON option. This has worked well for smaller tables; however, when I run it on this large table it seems to take exclusive locks on the table. I had to stop it after 5 minutes because all transactional activity was timing out. From session with SPID 199:
ALTER INDEX PK_Customer ON [br].[Customer] 
REBUILD WITH (ONLINE = ON, RESUMABLE = ON);
From another session (lock output screenshot omitted). A bit further below in the same results:

* Object 978102525 is the clustered index.
* Object 1125579048 is the table.

I understand that an online rebuild can take locks for a 'short' duration at the start and end of the process. However, these locks are taken for several minutes, which is not exactly a 'short' duration.

### Additional info

While the rebuild is running, I ran SELECT * FROM sys.index_resumable_operations; but it returned 0 rows, as if the rebuild hadn't started at all. The smaller tables also have a PK potentially > 900 bytes, and the same ALTER statement worked without any long blocking, so I don't think it's related to PK size. These smaller tables also had a similar number of nvarchar(max) columns. The only real difference I can think of is that this table has many more rows.

### Table definition

Here is the full definition of br.Customer. There are no foreign keys or nonclustered indexes.
CREATE TABLE [br].[Customer](
	[Id] [bigint] NOT NULL,
	[ShopId] [nvarchar](450) NOT NULL,
	[accepts_marketing] [bit] NOT NULL,
	[address1] [nvarchar](max) MASKED WITH (FUNCTION = 'partial(2, "XXX", 0)') NULL,
	[address2] [nvarchar](max) MASKED WITH (FUNCTION = 'partial(2, "XXX", 0)') NULL,
	[city] [nvarchar](max) NULL,
	[company] [nvarchar](max) NULL,
	[country] [nvarchar](max) NULL,
	[country_code] [nvarchar](max) NULL,
	[email] [nvarchar](max) MASKED WITH (FUNCTION = 'email()') NULL,
	[first_name] [nvarchar](max) MASKED WITH (FUNCTION = 'partial(2, "XXX", 0)') NULL,
	[last_name] [nvarchar](max) MASKED WITH (FUNCTION = 'partial(2, "XXX", 0)') NULL,
	[note] [nvarchar](max) NULL,
	[phone] [nvarchar](max) MASKED WITH (FUNCTION = 'partial(2, "XXX", 0)') NULL,
	[province] [nvarchar](max) NULL,
	[province_code] [nvarchar](max) NULL,
	[state] [nvarchar](max) NULL,
	[tax_exempt] [bit] NOT NULL,
	[verified_email] [bit] NOT NULL,
	[zip] [nvarchar](max) NULL,
	[multipass_identifier] [nvarchar](max) NULL,
	[created_at_local] [datetimeoffset](7) NOT NULL,
	[updated_at_local] [datetimeoffset](7) NOT NULL,
	[tags] [nvarchar](max) NULL,
	[address_phone] [nvarchar](max) MASKED WITH (FUNCTION = 'partial(2, "XXX", 0)') NULL,
	[address_firstname] [nvarchar](max) MASKED WITH (FUNCTION = 'partial(2, "XXX", 0)') NULL,
	[address_lastname] [nvarchar](max) MASKED WITH (FUNCTION = 'partial(2, "XXX", 0)') NULL,
 CONSTRAINT [PK_Customer] PRIMARY KEY CLUSTERED 
(
	[ShopId] ASC,
	[Id] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [br].[Customer] ADD  DEFAULT ('0001-01-01T00:00:00.000+00:00') FOR [created_at_local]
GO

ALTER TABLE [br].[Customer] ADD  DEFAULT ('0001-01-01T00:00:00.000+00:00') FOR [updated_at_local]
GO
### sp_WhoIsActive

I investigated further today (24 Sep) and ran SP_WHOISACTIVE @get_locks = 1, which clearly shows all UPDATE/INSERT/DELETE transactions blocked by the session running the ALTER INDEX. Locks held on the Customer table by the query running the ALTER INDEX:
Locks from session running UPDATE on same table:
Clement (576 rep)
Sep 23, 2020, 01:37 PM • Last activity: Nov 20, 2023, 11:32 AM
0 votes
1 answers
733 views
Optimize table on large table online DDL Feature
I am running OPTIMIZE TABLE on a MySQL table that is 340GB in size; the storage engine is InnoDB.
mysql> show variables like '%tmp%';
+----------------------------+------------------+
| Variable_name              | Value            |
+----------------------------+------------------+
| default_tmp_storage_engine | InnoDB           |
| innodb_tmpdir              |                  |
| max_tmp_tables             | 32               |
| slave_load_tmpdir          | /dbtmp/mysql_tmp |
| tmp_table_size             | 33554432         |
| tmpdir                     | /dbtmp/mysql_tmp |
+----------------------------+------------------+
The tmpdir directory is filling up:
/dev/mapper/vgdbtmp-lvdbtmp
                      296G  231G   51G  83% /dbtmp
What would be an alternative way to optimize the table without creating a copy of it? MySQL 5.6 version:
mysql> select version();
+------------+
| version()  |
+------------+
| 5.6.46-log |
+------------+
1 row in set (0.01 sec)
I have read the MySQL links below:

https://mysqlserverteam.com/mysql-5-6-17-improved-online-optimize-table-for-innodb-and-partitioned-innodb-tables/
https://dev.mysql.com/doc/refman/5.6/en/optimize-table.html
> Prior to MySQL 5.6.17, OPTIMIZE TABLE does not use online DDL. Consequently, concurrent DML (INSERT, UPDATE, DELETE) is not permitted on a table while OPTIMIZE TABLE is running, and secondary indexes are not created as efficiently.
But here no one is firing DDL/DML/SELECT statements against the table currently. It should run online without creating a copy of the table, right?
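A hedged aside: on 5.6.17+ the online OPTIMIZE path is equivalent to an in-place ALTER TABLE ... FORCE, and the temporary sort files written during index rebuilds can be redirected to a larger volume via innodb_tmpdir (which this 5.6.46 build exposes, per the SHOW VARIABLES output above). Paths and table name below are placeholders:

```sql
-- Point the online ALTER's temporary sort files at a volume with more space
-- (the directory must already exist).
SET SESSION innodb_tmpdir = '/bigdisk/mysql_alter_tmp';

-- Online equivalent of OPTIMIZE TABLE: rebuild the table in place.
-- Note: an intermediate table file is still written next to the original .ibd,
-- so roughly one table's worth of free space in the datadir is still needed.
ALTER TABLE big_table FORCE, ALGORITHM=INPLACE, LOCK=NONE;
```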
Adam Mulla (143 rep)
Feb 16, 2020, 09:51 AM • Last activity: Jun 26, 2023, 07:07 PM
3 votes
1 answers
365 views
During an online piecemeal restore, is it possible to restore to a point-in-time?
I have a database (full recovery model) with multiple filegroups, that each have full and diff backups. I am able to replicate an online piecemeal restore (SQL Server 2019 Enterprise Edition) as it is described here: Example: Piecemeal Restore of Database (Full Recovery Model)

However, when I add a STOPAT clause when restoring the last log backup, this only works for the primary filegroup. For all following filegroups I get either this message:

> Msg 4342, Level 16, State 1, Line 161 Point-in-time recovery is not possible unless the primary filegroup is part of the restore sequence. Omit the point-in-time clause or restore the primary filegroup.

When I try to restore the log without the STOPAT clause, I get this:

> The supplied backup is not on the same recovery path as the database, and is ineligible for use for an online file restore.

I was not able to find any restrictions about point-in-time recovery during online piecemeal restores, except for this part (found on Piecemeal Restores (SQL Server)):

> If a partial restore sequence excludes any FILESTREAM filegroup, point-in-time restore is not supported. You can force the restore sequence to continue. However the FILESTREAM filegroups that are omitted from your RESTORE statement can never be restored. To force a point-in-time restore, specify the CONTINUE_AFTER_ERROR option together with the STOPAT, STOPATMARK, or STOPBEFOREMARK option, which you must also specify in your subsequent RESTORE LOG statements. If you specify CONTINUE_AFTER_ERROR, the partial restore sequence succeeds and the FILESTREAM filegroup becomes unrecoverable.

Since the database does not contain any filestream filegroups, this should not apply. Does anyone know if point-in-time recovery is possible in my scenario?

---

Here is a minimal example to play with (not working):

------------------------------------------------------------------------
-- ONLINE PIECEMEAL RESTORE WITH POINT-IN-TIME RECOVERY
------------------------------------------------------------------------
---------------------------------
-- CREATE DB
-- log, 2 filegroups (primary + A) and one table each
---------------------------------
USE [master]
GO
CREATE DATABASE [RestoreTest]
ON PRIMARY( NAME = 'PRIMARY', FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\RestoreTest.mdf' ),
FILEGROUP A( NAME = 'RestoreTest_A', FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\RestoreTest_A.ndf' )
LOG ON( NAME = 'RestoreTest_log', FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\RestoreTest_Log.ldf' )
GO
USE [RestoreTest]
GO
CREATE TABLE [Table1]( [X] INT ) ON [PRIMARY];
CREATE TABLE [Table2]( [X] INT ) ON [A];
GO
---------------------------------
-- Backups
---------------------------------
-- full backups of each filegroup
BACKUP DATABASE [RestoreTest] FILEGROUP = 'PRIMARY' TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\RestoreTest_Full.bak' WITH INIT
BACKUP DATABASE [RestoreTest] FILEGROUP = 'A' TO DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\RestoreTest_A_Full.bak' WITH INIT
GO
-- store current time for later point-in-time recovery
WAITFOR DELAY '00:00:01'
DECLARE @now DATETIME = (SELECT GETDATE())
EXEC sp_set_session_context 'stopat', @now;
WAITFOR DELAY '00:00:01'
--insert some data
INSERT INTO [Table1] VALUES (1)
INSERT INTO [Table2] VALUES (1)
GO
-- then take log backup
BACKUP LOG [RestoreTest] TO DISK= 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\RestoreTest_Log.bak' WITH INIT
GO
---------------------------------
-- RESTORE (point-in-time)
---------------------------------
-- drop database
USE [master]
DROP DATABASE [RestoreTest]
GO
-- restore primary filegroup
DECLARE @stopat DATETIME = (SELECT CAST(SESSION_CONTEXT(N'stopat') AS DATETIME))
RESTORE DATABASE [RestoreTest] FILEGROUP = 'PRIMARY' FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\RestoreTest_Full.bak' WITH PARTIAL, NORECOVERY
RESTORE LOG [RestoreTest] FILEGROUP = 'PRIMARY' FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\RestoreTest_Log.bak' WITH RECOVERY, STOPAT = @stopat
GO
-- test
SELECT [name], [state_desc] FROM sys.master_files WHERE [database_id] = DB_ID('RestoreTest')
GO
-- restore filegroup A (with STOPAT)
DECLARE @stopat DATETIME = (SELECT CAST(SESSION_CONTEXT(N'stopat') AS DATETIME))
RESTORE DATABASE [RestoreTest] FILEGROUP = 'A' FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\RestoreTest_A_Full.bak' WITH NORECOVERY
RESTORE LOG [RestoreTest] FILEGROUP = 'A' FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\RestoreTest_Log.bak' WITH RECOVERY, STOPAT = @stopat
-- test
SELECT [name], [state_desc] FROM sys.master_files WHERE [database_id] = DB_ID('RestoreTest')
GO
-- try again, restore filegroup A (without STOPAT)
RESTORE DATABASE [RestoreTest] FILEGROUP = 'A' FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\RestoreTest_A_Full.bak' WITH NORECOVERY
RESTORE LOG [RestoreTest] FILEGROUP = 'A' FROM DISK = 'C:\Program Files\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\Backup\RestoreTest_Log.bak' WITH RECOVERY
-- test
SELECT [name], [state_desc] FROM sys.master_files WHERE [database_id] = DB_ID('RestoreTest')
GO
USE [master]
DROP DATABASE [RestoreTest]
GO

Output:

Processed 360 pages for database 'RestoreTest', file 'PRIMARY' on file 1.
Processed 7 pages for database 'RestoreTest', file 'RestoreTest_log' on file 1.
BACKUP DATABASE...FILE= successfully processed 367 pages in 0.021 seconds (136.346 MB/sec).
Processed 8 pages for database 'RestoreTest', file 'RestoreTest_A' on file 1.
Processed 3 pages for database 'RestoreTest', file 'RestoreTest_log' on file 1.
BACKUP DATABASE...FILE= successfully processed 11 pages in 0.011 seconds (7.457 MB/sec).
(1 row affected)
(1 row affected)
Processed 14 pages for database 'RestoreTest', file 'RestoreTest_log' on file 1.
BACKUP LOG successfully processed 14 pages in 0.005 seconds (21.093 MB/sec).
Processed 360 pages for database 'RestoreTest', file 'PRIMARY' on file 1.
Processed 7 pages for database 'RestoreTest', file 'RestoreTest_log' on file 1.
RESTORE DATABASE ... FILE= successfully processed 367 pages in 0.020 seconds (143.164 MB/sec).
Processed 0 pages for database 'RestoreTest', file 'PRIMARY' on file 1.
Processed 14 pages for database 'RestoreTest', file 'RestoreTest_log' on file 1.
RESTORE LOG successfully processed 14 pages in 0.006 seconds (17.578 MB/sec).
(3 rows affected)
Processed 8 pages for database 'RestoreTest', file 'RestoreTest_A' on file 1.
RESTORE DATABASE ... FILE= successfully processed 8 pages in 0.006 seconds (10.416 MB/sec).
Msg 4342, Level 16, State 1, Line 116
Point-in-time recovery is not possible unless the primary filegroup is part of the restore sequence. Omit the point-in-time clause or restore the primary filegroup.
Msg 3013, Level 16, State 1, Line 116
RESTORE LOG is terminating abnormally.
(3 rows affected)
Processed 8 pages for database 'RestoreTest', file 'RestoreTest_A' on file 1.
RESTORE DATABASE ... FILE= successfully processed 8 pages in 0.005 seconds (12.500 MB/sec).
Msg 3116, Level 16, State 1, Line 134
The supplied backup is not on the same recovery path as the database, and is ineligible for use for an online file restore.
Msg 3013, Level 16, State 1, Line 134
RESTORE LOG is terminating abnormally.
(3 rows affected)
Matthias L (93 rep)
Feb 10, 2023, 04:41 PM • Last activity: Feb 20, 2023, 10:22 AM
1 votes
1 answers
998 views
Run pt-online-schema-change with multiple ALTER queries synchronously
I want to run 3 ALTER operations with the pt-online-schema-change tool:

--alter "ADD INDEX userid_sid_ts_fid (user_id, scorecard_id, timestamp, factor_id), DROP INDEX uidts, RENAME INDEX userid_sid_ts_fid to uidts"

However, I get this error:

Error altering new table *****._scoring_basis_new: DBD::mysql::db do failed: Key 'userid_sid_ts_fid' doesn't exist in table '_scoring_basis_new'

So it looks like it is trying to run these 3 changes asynchronously rather than one by one. How can I prevent that?
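A tentative sketch (the table name is inferred from the error message, so treat it as an assumption): if the end goal is simply an index named uidts over the four columns, the error suggests the rename step is being resolved against the original table definition, and the rename can be avoided by dropping and re-adding the index under its final name in one statement. Only the clause list after the table name would go into pt-online-schema-change's --alter.

```sql
ALTER TABLE scoring_basis
    DROP INDEX uidts,
    ADD INDEX uidts (user_id, scorecard_id, timestamp, factor_id);
```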
Diego (113 rep)
Dec 15, 2022, 12:30 PM • Last activity: Dec 15, 2022, 04:11 PM
1 votes
1 answers
2810 views
Alter Table Drop Column INPLACE NoLock - until which point is the column being dropped accessible?
I am trying to run a MySQL ALTER ... INPLACE command to drop a few columns from a very large 90GB table. While the ALTER is running I am able to run SELECT statements on the same table, which confirms that the table is not locked. MySQL version 5.7 with InnoDB.

Questions:

1. While the ALTER command is running with the INPLACE algorithm and no lock, up to what point can the data in the columns being dropped still be accessed? E.g., at the point when the columns are almost dropped? I need to make this change in prod, so I need to be sure of this.
2. Can the application still update the table while the ALTER statement to drop the columns is running? Currently the columns are stored, and after dropping them we will be adding them back as virtual columns.
3. Will there be any downtime at all? I read somewhere that the table will be locked briefly at the end; correct me if I am wrong.
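For reference, a sketch with placeholder column names showing the explicit online form of such a change, plus a simple way to spot the brief metadata lock taken at the end:

```sql
ALTER TABLE big_table
    DROP COLUMN col_a,   -- placeholder column names
    DROP COLUMN col_b,
    ALGORITHM = INPLACE,
    LOCK = NONE;

-- During the brief table-definition swap at the end, blocked sessions show
-- "Waiting for table metadata lock" in the process list.
SHOW PROCESSLIST;
```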
Ali (345 rep)
Aug 20, 2021, 06:08 AM • Last activity: Aug 25, 2022, 08:07 PM
1 votes
1 answers
78 views
Index creation on partitioned table is not completely offline in Standard Edition
We have SQL Server 2019 Standard Edition

> Microsoft SQL Server 2019 (RTM-CU10) (KB5001090) - 15.0.4123.1 (X64) Mar 22 2021 18:10:24 Copyright (C) 2019 Microsoft Corporation Standard Edition (64-bit) on Windows Server 2016 Datacenter 10.0 (Build 14393: ) (Hypervisor)

where index creation is a completely offline operation, so every CREATE INDEX statement renders the table unavailable for reading/writing. I partitioned one table, and to my big surprise, when I was creating all the nonclustered indexes on the same partition scheme, the table was perfectly readable. Is this a documented feature, and where can I read about it? All data modifications of this table were still blocked, but there were no blocked readers at all. This database has RCSI enabled, if that matters.
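A hedged observation sketch: the difference is visible in the table-level lock the CREATE INDEX session holds, since an S (shared) lock still permits readers while Sch-M or X does not. The table name below is a placeholder:

```sql
-- Run from another session while the CREATE INDEX is executing,
-- in the database that holds the partitioned table.
SELECT request_session_id,
       resource_type,
       request_mode,     -- S still allows readers; X or Sch-M does not
       request_status
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND resource_associated_entity_id = OBJECT_ID(N'dbo.PartitionedTable');
```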
sepupic (11277 rep)
Aug 16, 2022, 12:37 PM • Last activity: Aug 16, 2022, 12:59 PM
3 votes
1 answers
1899 views
Postgres varchar(100)[] -> text[] column conversion
I need to change a column type from varchar(100)[] to text[]. The table has >100M rows and heavy write load, so I'd like to avoid taking an access exclusive lock and rewriting the index. I saw in [this question](https://dba.stackexchange.com/q/189890/112479) that non-array varchar to text changes manage this, so I was wondering: does the same apply with array columns? [The docs](https://www.postgresql.org/docs/13/sql-altertable.html#SQL-ALTERTABLE-NOTES) + [the mailing list](https://www.postgresql.org/message-id/flat/CAMp9%3DExXtH0NeF%2BLTsNrew_oXycAJTNVKbRYnqgoEAT01t%3D67A%40mail.gmail.com) say that this change will be quick if: > the old type is either binary coercible to the new type or an unconstrained domain over the new type) Is that the case for varchar(100)[] to text[]?
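A hedged addendum: whether a given ALTER rewrites the table can be tested cheaply on a throwaway copy by comparing pg_class.relfilenode before and after, since a rewrite always assigns a new one (table and column names below are placeholders):

```sql
-- On a scratch copy of the table:
SELECT relfilenode FROM pg_class WHERE relname = 'scratch_table';

ALTER TABLE scratch_table ALTER COLUMN vals TYPE text[];

-- Same relfilenode as before => no table rewrite happened.
SELECT relfilenode FROM pg_class WHERE relname = 'scratch_table';
```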
Felipe (317 rep)
Jun 8, 2021, 02:33 AM • Last activity: Jun 10, 2021, 01:42 AM
2 votes
2 answers
2602 views
Convert postgres varchar[] to text[] without downtime
As I discovered in researching [this question](https://dba.stackexchange.com/q/293904/112479), converting a column from varchar(100)[] to text[] requires a table rewrite and rebuilding indexes. Is there a way to manage this conversion without the locking and rewriting, through some catalog trickery like that used in [this question](https://stackoverflow.com/q/48693373/754254)? I'm particularly interested in version 10, if possible.
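A hedged aside: the "binary coercible" condition from the docs can be checked in the catalog for the element types; castmethod = 'b' means binary coercion. That entry is what makes the plain varchar -> text change cheap; the open question above is whether the same shortcut applies once a length modifier and the array wrapper are involved.

```sql
SELECT castsource::regtype AS source,
       casttarget::regtype AS target,
       castmethod          -- 'b' = binary coercible
FROM pg_cast
WHERE castsource = 'character varying'::regtype
  AND casttarget = 'text'::regtype;
```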
Felipe (317 rep)
Jun 8, 2021, 02:39 AM • Last activity: Jun 9, 2021, 03:44 AM
0 votes
0 answers
1085 views
Are online index REBUILD operation possible on every Azure SQL database?
I am trying to find information about the possibility of performing online index REORGANIZE/REBUILD operations on a database. What I actually found in the Microsoft documentation is:

> Reorganizing an index uses minimal system resources and is an online operation.

> Depending on the type of index and database engine version, a rebuild operation can be done online or offline.

The documentation also indicates:

> Online index operations are not available in every SQL Server edition. For more information, see Editions and supported features of SQL Server.

If I follow the link giving more details about the differences between the various SQL Server editions, it says that "Online index create and rebuild" is only supported by Enterprise Edition. But now I would like more information about the possibility of performing an online index REBUILD on an Azure SQL database, and I cannot find any explicit documentation page about it.

- Is there any such limitation on Azure SQL databases too?
- Or is an online index REBUILD possible for any Azure SQL database?

Does someone have a documentation link to share, or any more information about this specific point?
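A hedged note: one quick empirical check is to request the online behaviour explicitly, since the engine raises an error rather than silently falling back to an offline rebuild when ONLINE = ON cannot be honoured (object names below are placeholders):

```sql
-- Errors out if the service/edition cannot perform the rebuild online,
-- rather than silently doing an offline rebuild.
ALTER INDEX IX_SomeIndex ON dbo.SomeTable
    REBUILD WITH (ONLINE = ON);
```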
Alexandre D. (101 rep)
Apr 2, 2020, 12:31 PM • Last activity: Apr 2, 2020, 01:24 PM
1 votes
0 answers
802 views
Online Index Rebuild shows self blocking
I've been doing some testing on rebuilding some larger clustered indexes (with ONLINE and SORT_IN_TEMPDB on) and I'm observing something that I can't immediately explain. I have a hunch that this is some sort of intra-query blocking due to the rebuild going parallel, but I can't 100% confirm it. Running SQL 2016 EE - SP2 + CU8 (13.0.5426).

**The query I'm running:**

USE [UserDB]
GO
ALTER INDEX [PK_Order] ON [dbo].[Order] REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = ON)
GO

**Whoisactive says:** (screenshot omitted)

**Digging further into that session and its ecids, I see the following:** (screenshot omitted)

Now, with sort in tempdb (dbid 2) being on, my concern here is that the resource_description column is showing page locks on dbid 5 (the DB that I'm rebuilding the index on). This looks like a page lock on the user database that may cause blocking or disruptions. Can anyone shine some light on what exactly is happening here? Is this just unbalanced parallelism where threads have to sit on their hands until the others finish? Worth noting that these locks do clear after a while (5-10 mins) and we revert back to just CXCONSUMER waits.
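A hedged sketch rather than a diagnosis: if the self-blocking really is parallelism-related, constraining the rebuild's degree of parallelism is the usual first experiment:

```sql
-- Same rebuild, forced to run single-threaded.
ALTER INDEX [PK_Order] ON [dbo].[Order]
    REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = ON, MAXDOP = 1);
```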
Kris Gruttemeyer (3879 rep)
Feb 6, 2020, 01:19 PM • Last activity: Feb 6, 2020, 01:24 PM
2 votes
2 answers
748 views
How to make DDL changes to large tables and cause minimum server and object contention?
I work with a lot of large tables (billions to tens of billions of records in each) in a database I recently inherited. I see a few clear DDL changes that would benefit the use-cases of the database, but it's hard for me to implement them because the database can afford very minimal contention. (Essentially, if a heavy query runs for more than a minute or two, it has to be killed.) Even during a maintenance window, these changes would take far too long and would exceed my allocated time (at most 1 hour, since there aren't really any off hours).

The types of changes I want to make are: create indexes, add persisted computed columns, create indexed views, and general index tuning. If there were a way to do any of these operations iteratively and pause between iterations, then I could accept the total time taking a while, because at least I could let other processes run in between instead of a backlog building up.

The only idea I can think of is to maintain a copy of the database on a separate server where I can make DDL changes, then re-point my applications to that server. Then update the first server with the DDL changes so it's in sync, and the next time I need to make an update, repeat the process.

Edit: I'm on SQL Server 2016 Enterprise Edition.
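A sketch under stated assumptions (placeholder object names; resumable operations would need a newer version than the SQL Server 2016 mentioned in the edit): on Enterprise Edition, the building blocks for low-impact DDL are online operations plus low-priority lock waits, so the DDL yields instead of stalling the workload.

```sql
-- Online index creation: readers and writers keep working while it builds.
CREATE NONCLUSTERED INDEX IX_BigTable_SomeCol
    ON dbo.BigTable (SomeCol)
    WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);

-- For rebuilds, let the final schema-lock request give up instead of blocking everyone.
ALTER INDEX IX_BigTable_SomeCol ON dbo.BigTable
    REBUILD WITH (
        ONLINE = ON (
            WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5 MINUTES, ABORT_AFTER_WAIT = SELF)
        )
    );
```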
J.D. (40893 rep)
Jan 17, 2020, 03:37 PM • Last activity: Jan 17, 2020, 07:15 PM