
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
142 views
How can I manually backup a large Azure Flexible Postgres database?
I have a large (3 TB+) single database on an Azure Postgres Flexible Server. This is a managed service, so it takes backups automatically and the frequency can be adjusted. As a disaster recovery/emergency option, I'd like to periodically take a snapshot of the database separately from these incremental backups and store it somewhere else. I don't need ongoing connectivity for incremental backups (the current managed system does that); I just want a periodic backup that would be restorable to a new server. I would simply choose Azure's own Vaulted Backup option, which provides enough separation for this purpose, but according to the docs:

> Vaulted backups are supported for server size <= 1 TB. If backup is configured on server size larger than 1 TB, the backup operation fails.

So I'm looking for other options. I checked out Barman, which looks like it should be perfect, but the managed Postgres Flexible Server doesn't support ssh/rsync access, and I'm getting conflicting info on whether pg_basebackup is supported (it seems unlikely, along with other physical replication options). So: I can't use Azure's own tools to back up my server because it's too big, and I can't use external tools because they can't have shell access. The last option is plain pg_dump, which in my experience will take days to finish and may need to be run against a point-in-time restore so that performance on the primary isn't affected. I'd prefer to create and restore from a physical backup rather than a logical one, so this is a last resort. Is there a better option for a managed server this size than pg_dump?
zach (1 rep)
May 28, 2025, 11:05 PM • Last activity: Aug 5, 2025, 12:06 AM
0 votes
1 answer
190 views
Azure blob storage - Access tier change
We currently have a legacy blob storage account of 600 TB, configured as the Standard\Hot tier, which holds archived backups. As part of cost optimization, we want to change the access tier on the blob storage account to Cool. Does this incur data movement? The documentation says the change is instantaneous. Does that mean the hot-to-cool change involves no data transfer and incurs no additional charges?
yvr238 (129 rep)
Nov 2, 2021, 05:13 PM • Last activity: Jun 27, 2025, 03:04 PM
3 votes
1 answer
210 views
how to mirror an azure storage account to a non-paired zone to enable fast file_snapshot backup/restore and database file access?
Wow, that title was a mouthful.

# Scenario #

My dev server is restoring a copy of production data daily, anonymizing it, and creating a dev-grade backup for my developers to restore locally. We're running SQL 2019. Unfortunately, my servers are all in Azure East US, but almost all of my company is based in Spain. As of today, we have the server produce a standard backup that is downloaded by a central server here in Spain, so devs can restore without going to the cloud. Restores on developers' laptops are taking up to 180 minutes to complete. This is where BACKUP ... WITH FILE_SNAPSHOT comes in. In a few words, it allows backups and restores to go very fast by storing data directly in Azure storage accounts. There's more to it, but it's not the point here - [read more here](https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/file-snapshot-backups-for-database-files-in-azure?view=sql-server-ver15). So far so good, but...

# Issues #

- Restoring near the server will be fast, but will force developers to traverse the Atlantic when reading from what should be the local database.
- Restoring the snapshot from East US to a storage account in West Europe seems to copy the snapshot over (which is logical), but this means there's no restore performance benefit.
- My devs are really keen on restoring daily, weekly at most, to ensure they're on par with production database releases and can troubleshoot data issues that get escalated to the engineering team.
- Manually copying files from East US to West Europe is not a good option, since it requires daily attention from someone.

# What I tried #

- Restoring a file_snapshot backup from our East US storage account to a temporary West Europe storage account
- Restoring a file_snapshot backup within our East US storage account (performance was awful for large queries due to, at least, network latencies)
- Setting the storage to RA_GRS; when I tested it, it would only pair with West US - this was to speed up normal restores, but it's still meaningful.

# What I want to accomplish #

I want my devs to be able to restore quickly, using the FILE_SNAPSHOT technology, to the nearest storage location. I'll need them to be able to write there using SAS keys. To ensure they can restore quickly, I'm assuming the storage account must be mirrored with a copy that sits here in West Europe, but I don't know how to accomplish this.

## My plan ##

Currently, I'm planning to either try to store the backup in a West Europe-bound storage container, or try restoring to it after enabling Azure CDN. I'm doubtful about both of these options because

1. FILE_SNAPSHOT backup takes a snapshot of the files in Azure storage, so they will have to be copied across the wire through the Atlantic
2. Azure CDN is a read-only system; I'm not sure it will help me with writes in the restore process. If it has to read from the source file, I'd be back to square one.

I'm going to test these out (aka do my homework), but I'm reaching out to see if anyone here has other ideas, has already done this successfully (and wants to share their fix), or knows this is not doable and I should file a UserVoice item. Thanks a bunch.

### Update 2020-07-07 ###

I have been testing CDNs - it does not work. When trying to restore an existing backup to URL (it does not matter whether FILE_SNAPSHOT is used or not), I'm getting an "incorrectly formatted" error. I know that the .bak is not corrupted, since I can restore directly from the original blob storage. My current guess is that some header has a mismatch between the CDN URL and the actual blob storage. I also tried backing up directly to the CDN URL, but that does not seem to be supported, which makes sense given my current understanding of CDNs as read-only systems.

### Update 2020-07-08 ###

Also tested backing up to a different storage location. Backups are fast, but the snapshot is still taken local to the .mdf location - this means restores are painfully slow, and reducing that lag (and the data transfer) is the whole point of this exercise. I cannot really modify the .bak pointers after the backup has been generated, so even copying that data manually across the ocean won't work. This means I'm officially out of ideas.

# Update #

I missed out on doing this for a while, but I'd better keep this thread up to date. @DavidSpillet has a few points in his reply. Here is a bit more information that might help people helping me out - you're all awesome.

1. We have about 13 (SQL natively compressed) backup files, totaling around 60 GB. The approximate size after restoring them is around 270 GB.
2. Our databases are separated into two systems. System A has 2 databases, System B has 11. Restores are done concurrently per system, so we will have at most 2 restores per developer running in parallel.
3. Developers in Spain restore in the office using a wired LAN network, sitting on the same VNET (and building, and floor) the server is in.
4. The server is a 16-core, 32 logical CPUs, 96 GB RAM monster that was once used to host a shared SQL instance for our developers. Restores there consistently take between 15-20 minutes.
5. The server's high-capacity storage (where the backups are held) is based on spinning, locally attached disks - but then again, restoring locally is not a bottleneck.
6. Developers' laptops use high-capacity NVMe SSD drives (at least the new ones; the older model used spinning drives. I'm not worrying about those, as the new ones are being rolled out).

I also have a few more concerns due to the current world situation, namely:

* Devs are currently working anywhere, sometimes at the office, most of the time at home. Unfortunately, our client VPN endpoint is in another country, so restoring from the office is a major pain, and I'm trying to account for that, especially considering how the country is looking (2020-07-29).
* Some of the new laptop models arrived with lower-capacity storage (256 GB) and there's nothing I can do to fix that.
* We cannot go on purchasing commercially available software that would do the trick (for example [this one](https://www.red-gate.com/products/dba/sql-provision/)) because of budget constraints.
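For anyone less familiar with the feature under discussion, a minimal sketch of the FILE_SNAPSHOT pattern follows; all names, URLs, and logical file names are placeholders. It only applies when the database's data and log files themselves live in Azure blob storage, which is also why the snapshots stay co-located with the .mdf.

```sql
-- Minimal sketch of FILE_SNAPSHOT backup/restore; every name and URL is a placeholder.
CREATE CREDENTIAL [https://<storageaccount>.blob.core.windows.net/<container>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<sas token without the leading ?>';

-- Records snapshot pointers to the database files instead of copying the data.
BACKUP DATABASE [DevDb]
TO URL = 'https://<storageaccount>.blob.core.windows.net/<container>/DevDb.bak'
WITH FILE_SNAPSHOT;

-- Restores are fast for the same reason, but the restored files land in the same storage account.
RESTORE DATABASE [DevDb_Copy]
FROM URL = 'https://<storageaccount>.blob.core.windows.net/<container>/DevDb.bak'
WITH MOVE 'DevDb_Data' TO 'https://<storageaccount>.blob.core.windows.net/<container>/DevDb_Copy_Data.mdf',
     MOVE 'DevDb_Log'  TO 'https://<storageaccount>.blob.core.windows.net/<container>/DevDb_Copy_Log.ldf',
     RECOVERY;
```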
GMassDBA (345 rep)
Jun 26, 2020, 05:15 PM • Last activity: Jun 17, 2025, 04:03 PM
1 vote
1 answer
1892 views
SQL Server - Access denied when reading a file from Azure Blob Storage container using SAS key
I have a requirement to read CSV files from Azure blob storage. So far, this throws access denied errors every time I run my query:

    CREATE DATABASE SCOPED CREDENTIAL <credential_name>
    WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
         SECRET = 'sv=2021-06-08&ss=b&srt=sco&sp=rl&se=2023-03-31T09:38:05Z&st=2022-09-01T02...';

    CREATE EXTERNAL DATA SOURCE <data_source_name>
    WITH ( TYPE = BLOB_STORAGE
         , LOCATION = 'https://<storage_account>.blob.core.windows.net/<container>'
         , CREDENTIAL = <credential_name> );

    SELECT *
    FROM OPENROWSET
    ( BULK '<folder>/<file>.csv'
    , DATA_SOURCE = '<data_source_name>'
    , FORMAT = 'CSV'
    , FORMATFILE = '<folder>/<file>.fmt'
    , FORMATFILE_DATA_SOURCE = '<data_source_name>'
    , FIRSTROW = 2
    ) AS test

Below are some more details on how everything was set up:

- The storage account kind is BlockBlobStorage.
- In the **Firewalls and virtual networks** setting, it is only **Enabled from selected virtual networks and IP addresses**. I already added my public IP address, as well as the IP address of the Azure SQL Server, which I got from here: https://learn.microsoft.com/en-us/azure/azure-sql/database/connectivity-architecture?view=azuresql#gateway-ip-addresses
- The whole process works if I set it to **Enabled from all networks**. The SQL server and the storage account live within the same resource group.
- I also configured a VNet that is added for both of the resources.
- I saw this thread, which is exactly my issue, however the accepted answer is not working on my end: https://stackoverflow.com/questions/58340185/cannot-bulk-load-because-the-file-file-csv-could-not-be-opened-operating-syst

I checked all the documentation regarding SAS access keys, database scoped credentials, external data sources and VNet networking, and I don't see any limitation that would cause SAS key access to be denied. Did I miss a configuration step? I find it a little weird that in most cases the recommendation is to set the storage account to **Enabled from all networks**, which might be a security issue.
Dustine Tolete (119 rep)
Oct 3, 2022, 03:38 AM • Last activity: May 5, 2025, 03:06 PM
0 votes
0 answers
27 views
Running an Agent Job using Azure Shared Access Key Credential
I'd like to have a SQL Agent job that copies a file up to Azure Storage using a Shared Access Key credential. The credential is set up and works for the backup and restore process, but I'd like to use it for copying files up to blob storage. I don't mind using PowerShell, CMD or SSIS; I just want to be able to leverage the credential. Any ideas? Cheers, Alex
AlexP012 (53 rep)
Sep 3, 2024, 10:30 AM
0 votes
1 answer
880 views
What are recommended values for block size, buffercount and maxtransfer size for Ola backups to Azure?
I have SQL Servers running on on-prem and Azure VM hardware. The backups write to a file share located on a network VM. I want to change the backup destination to Azure blob storage. For this I have created an Azure storage account and tested the Ola backup script with credentials (account key/identity and also with a shared access signature). I want some guidance on the recommended values to use for the following:

1. Block size
2. Buffer count
3. Max transfer size

I understand that the block blob has a 200 GB file limit, and this is OK for me as my compressed backups come to under 110 GB. I understand that Azure blob has a limitation of 50,000 blocks * 4 MB max transfer size. What are the ideal values to use so that I don't hit the above limit?
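For reference, a sketch of how these three knobs are passed to Ola Hallengren's DatabaseBackup procedure when backing up to URL; the URL, credential name, and numeric values below are placeholders rather than recommendations.

```sql
-- Illustrative invocation only; tune BlockSize/BufferCount/MaxTransferSize against your own throughput tests.
EXECUTE dbo.DatabaseBackup
    @Databases       = 'USER_DATABASES',
    @URL             = 'https://<storageaccount>.blob.core.windows.net/<container>',
    @Credential      = 'MyBackupCredential',  -- omit when using the SAS-style credential named after the container URL
    @BackupType      = 'FULL',
    @Compress        = 'Y',
    @BlockSize       = 65536,                 -- bytes
    @BufferCount     = 50,
    @MaxTransferSize = 4194304;               -- 4 MB, the per-block ceiling for block blobs
```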
variable (3590 rep)
Mar 8, 2024, 09:24 AM • Last activity: Mar 8, 2024, 10:01 AM
0 votes
1 answer
501 views
How is a transaction matched with read and write operations in BLOB storage (Azure)?
In Azure Monitor, for a particular blob in a storage account, we have the following metrics:

- ingress (GiB)
- egress (GiB)
- transactions (count)

It looks like this: [screenshot of the Azure Monitor metrics chart]

I have a database whose data and log files use Azure Blob as storage, and I want to calculate the monthly price of such a scenario. Using the pricing calculator we have a cost per storage (let's say 100 GB) and a dynamic cost for the following operations: [screenshot of the per-operation pricing table]

My question is: is one transaction in the Azure Monitor chart the same as one read/write operation?
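As a back-of-the-envelope illustration of how the transaction count would feed into a monthly estimate, assuming made-up unit prices (not actual Azure rates), the storage and per-10,000-operation charges simply add up:

```sql
-- Hypothetical monthly cost estimate; replace the counts and unit prices with your own region's rates.
DECLARE @storage_gb        decimal(18,2) = 100;
DECLARE @price_per_gb      decimal(18,6) = 0.02;      -- illustrative $/GB-month
DECLARE @write_ops         bigint        = 2000000;   -- counts observed via the Transactions metric
DECLARE @read_ops          bigint        = 8000000;
DECLARE @price_per_10k_wr  decimal(18,6) = 0.05;      -- illustrative $ per 10,000 write operations
DECLARE @price_per_10k_rd  decimal(18,6) = 0.004;     -- illustrative $ per 10,000 read operations

SELECT @storage_gb * @price_per_gb
     + (@write_ops / 10000.0) * @price_per_10k_wr
     + (@read_ops  / 10000.0) * @price_per_10k_rd AS estimated_monthly_cost;
```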
gotqn (4348 rep)
Mar 2, 2023, 01:45 PM • Last activity: Dec 7, 2023, 02:01 AM
0 votes
0 answers
133 views
Backup SQL transaction log to Azure "append blob"?
We have long saved our hourly transaction log backups throughout the day by starting a new t-log file nightly, after the full backup is complete, and then appending to it hourly for the rest of the day. Recently I moved the nightly full backups to Azure blob storage, which is working great, but the t-logs write the first file and after that are not able to append. A minute of googling confirmed that append is *not* supported on either block blobs or page blobs (I had already tried it both ways to no avail), but it also showed that there is a third type of blob, the Append Blob. That sure sounds like the ticket! But I can't figure out how to specify which type of blob to use. To switch between a block blob and a page blob I change which credential is being used (SAS token or shared key), but I'm not sure how to use an append blob, or whether that's just plain not supported for SQL backups. I'm guessing it comes down to how the initial blob/file is created, but I have yet to find any info online about whether what I'm trying to do is possible or not. This is MS SQL Server 2022 and the backup is currently performed by an agent job, not a maintenance plan.
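Since backup to URL only writes page blobs (account-key credential) or block blobs (SAS credential), appending to an existing blob isn't an option; a common workaround, sketched here with placeholder names, is to give each hourly log backup its own timestamped blob instead of one growing file.

```sql
-- Sketch: one new, uniquely named blob per log backup (all names and URLs are placeholders).
DECLARE @url nvarchar(400) =
    N'https://<storageaccount>.blob.core.windows.net/<container>/MyDb_LOG_'
    + FORMAT(SYSDATETIME(), 'yyyyMMdd_HHmmss') + N'.trn';

BACKUP LOG [MyDb]
TO URL = @url                    -- TO URL accepts a variable, so the name can carry the timestamp
WITH COMPRESSION, CHECKSUM, STATS = 10;
```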
OKreboots (1 rep)
Sep 17, 2023, 06:32 PM • Last activity: Sep 19, 2023, 02:09 AM
0 votes
1 answer
136 views
How to archive an on-premises SQL Server database to Azure
We have an on-premises database of about eight terabytes that we are trying to archive to Azure. How can we go about it? This archive needs to remain accessible, because we might need to pull data from it in the future. I have read about Azure Data Lake, blob storage and Data Factory, but I still don't know how to go about it.
David ONAJOBI
May 16, 2023, 04:45 PM • Last activity: May 17, 2023, 03:08 AM
1 vote
2 answers
114 views
Backup with date to azure blob storage url
I'm trying to take backups to an Azure blob storage container with the date the backup was made in the file name, so that the job does not fail when a backup is generated with the same name. I am concatenating *convert(varchar(500),GetDate(),112) + '.bak'*, but I'm getting the following error when the job executes: *Incorrect syntax near '+'. [SQLSTATE 42000] (Error 102). The step failed.* The script:

    Backup database model to url = 'https:containerurl/azurecontainer/model' + convert(varchar(500),GetDate(),112) + '.bak')
    go
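BACKUP ... TO URL accepts a string literal or a variable, but not an inline expression, which is what the syntax error is complaining about. A sketch of the usual workaround, assuming a placeholder container URL, builds the dated name into a variable first:

```sql
-- Concatenate the dated file name into a variable, then pass the variable to TO URL.
DECLARE @url nvarchar(500) =
    N'https://<storageaccount>.blob.core.windows.net/<container>/model_'
    + CONVERT(varchar(8), GETDATE(), 112) + N'.bak';

BACKUP DATABASE model TO URL = @url;
```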
Andres Arango (195 rep)
Apr 16, 2020, 08:19 PM • Last activity: Apr 6, 2023, 05:23 AM
0 votes
1 answer
174 views
Restore from Azure Storage DB to sqlserver 2014
I am trying to restore a database from Azure Storage to SQL Server 2014, but I am getting this error:

    TITLE: Microsoft SQL Server Management Studio
    ------------------------------
    An exception occurred while executing a Transact-SQL statement or batch. (Microsoft.SqlServer.ConnectionInfo)
    ------------------------------
    ADDITIONAL INFORMATION:
    A nonrecoverable I/O error occurred on file "https://*****.blob.core.windows.net/sql_backup/Test_DB.bak:"
    The specified URL points to a Block Blob. Backup and Restore operations on Block Blobs are not permitted.
    RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 3271).

I am restoring the DB using the SSMS "restore from URL" option. [screenshot: SSMS restore-from-URL dialog] Any suggestions, please?
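Worth noting for context: SQL Server 2014 can only back up to and restore from page blobs addressed with an account-name/access-key credential; block blobs via SAS only became usable in SQL Server 2016. So a .bak written as a block blob either has to be copied down and restored from local disk, or the backup has to be re-created as a page blob using the 2014-style pattern sketched below with placeholder values.

```sql
-- SQL Server 2014-compatible restore from URL: account-key credential + WITH CREDENTIAL (page blob only).
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = '<storage account name>',
     SECRET   = '<storage account access key>';

RESTORE DATABASE [Test_DB]
FROM URL = 'https://<storageaccount>.blob.core.windows.net/<container>/Test_DB.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     STATS = 10;
```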
Saira (101 rep)
Aug 11, 2022, 03:29 AM • Last activity: Aug 11, 2022, 03:57 AM
2 votes
0 answers
284 views
SQL Server 2014 SP3 Backup to URL error
We have a database on a SQL Server 2014 (SP3 12.0.6433.1) and I'm trying to take a backup to Azure Blob Storage. I have followed this document (https://learn.microsoft.com/en-us/previous-versions/sql/2014/relational-databases/backup-restore/sql-server-backup-to-url?view=sql-server-2014) and created a credential using identity and access key using the following code:
CREATE CREDENTIAL mycredential WITH IDENTITY = 'mystorageaccount'
,SECRET = '' ;
and tried to do a backup using the following code:
BACKUP DATABASE mydb
TO URL = 'https://mystorageaccount.blob.core.windows.net/mycontainer/mydb.bak'
WITH CREDENTIAL = 'mycredential'
,COMPRESSION
,STATS = 5;
GO
However, I am getting the following error:
Msg 3271, Level 16, State 1, Line 1
A nonrecoverable I/O error occurred on file "https://mystorageaccount.blob.core.windows.net/mycontainer/mydb.bak:" Backup to URL received an exception from the remote endpoint. Exception Message: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel..
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.
The database is ~3GB. The error is thrown after the query executes for about 1.5 mins. The Azure storage account I am using is a General Purpose V1 Storage account. However, I have also tried this with a V2 account. Has anyone gone through this?
AzNerd (21 rep)
Jun 28, 2022, 04:12 PM • Last activity: Jun 28, 2022, 08:03 PM
0 votes
0 answers
137 views
BACKUP LOG failed to write to Azure Blob
We have a SQL VM running in Azure with SQL Agent jobs running BACKUP LOG every x hours to Azure Blob. These jobs had been running fine for a long time until recently. We keep getting the error below, and the jobs fail intermittently for various databases. They don't always fail at their scheduled run; if one fails, I can re-run the job manually two minutes later without any issue. I have looked online and haven't found a solution to my problem. Does anyone know why this happens and how I can fix it? Any help is appreciated.

    Message
    Executed as user: DSN\xyz. Write on "https://xyz.blob.core.windows.net/CW_Data_LOG_06102022_140000.TRN" failed: 13(The data is invalid.) [SQLSTATE 42000] (Error 3202)
    BACKUP LOG is terminating abnormally. [SQLSTATE 42000] (Error 3013). The step failed.
sydney (167 rep)
Jun 13, 2022, 03:12 PM • Last activity: Jun 13, 2022, 03:34 PM
2 votes
1 answer
2000 views
Sql server 2016 split url backup into multiple files
I have a database backing up into an Azure page blob via URL backups set up from SSMS. I am trying to find out how to split the backup into smaller chunks. I have a 2 TB database that is now backing up into one large file; I want to separate it into 4 smaller files. Any hints would be helpful.

Environment summary:

- Azure VM with SQL 2016 Enterprise
- Backup method: URL backup to an Azure page blob
- Security involved: Key Vault and credentials
- Current status: backups working
- Desired status: large backups split into multiple files

I am OK with a PowerShell example. Thank you.
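A detail that matters here: a BACKUP ... TO URL statement can only stripe across multiple URLs when it writes block blobs through a SAS-based credential; the page-blob path (WITH CREDENTIAL and an account key) takes a single file only. A sketch of the striped form, with placeholder names:

```sql
-- Sketch only: SAS credential named after the container, then stripe the backup across 4 block blobs.
CREATE CREDENTIAL [https://<storageaccount>.blob.core.windows.net/<container>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET   = '<sas token without the leading ?>';

BACKUP DATABASE [BigDb]
TO URL = 'https://<storageaccount>.blob.core.windows.net/<container>/BigDb_1.bak',
   URL = 'https://<storageaccount>.blob.core.windows.net/<container>/BigDb_2.bak',
   URL = 'https://<storageaccount>.blob.core.windows.net/<container>/BigDb_3.bak',
   URL = 'https://<storageaccount>.blob.core.windows.net/<container>/BigDb_4.bak'
WITH COMPRESSION, CHECKSUM, MAXTRANSFERSIZE = 4194304, STATS = 5;
```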
BriK (23 rep)
Mar 22, 2022, 06:29 PM • Last activity: Mar 23, 2022, 07:09 AM
2 votes
2 answers
1368 views
Error when backing up from SQL Server 2014 to Azure blob storage
I'm trying to back up a SQL Server 2014 database to Azure blob storage. However, I'm getting the error below when trying to do so.

> A nonrecoverable I/O error occurred on file "https://mystorageaccount.blob.core.windows.net/mycontainer/MyDB.bak:" Backup to URL received an exception from the remote endpoint. Exception Message: The remote server returned an error: (403) Forbidden.

Initially, I assumed I'd made a mistake with either the credential or the name of the Storage Account or the Container. However, when I tried backing up a database from a SQL Server 2019 instance, it worked fine. I've spent several hours trying to find a solution and made no further progress, so I'm looking for a bit of help.
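One difference between the two versions that commonly causes this: SQL Server 2014 only supports the account-name/access-key style of credential (writing page blobs), while SQL Server 2019 also supports SAS-based credentials and block blobs. If the credential being reused is the SAS style, 2014 cannot use it, and an authentication failure of that kind typically surfaces as a 403. A sketch of the 2014-compatible shape, with a placeholder secret:

```sql
-- SQL Server 2014 pattern: identity = storage account name, secret = access key, page blob target.
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET   = '<storage account access key>';

BACKUP DATABASE [MyDB]
TO URL = 'https://mystorageaccount.blob.core.windows.net/mycontainer/MyDB.bak'
WITH CREDENTIAL = 'AzureBackupCredential', COMPRESSION, STATS = 10;
```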
YaHozna (357 rep)
Jun 25, 2021, 05:24 PM • Last activity: Mar 13, 2022, 07:36 PM
3 votes
2 answers
296 views
Backup files are lost while backing up to Azure URL from SQL Server 2016
I have a job script like this, which should back up SQL Server 2016 DBs to Azure blob storage using a storage key credential. Usually it works fine, but sometimes a few backups are lost (I mean there is no backup file on the storage account) and I do not get any kind of job failure or errors in the job log file; the DBs are just skipped. I have noticed that this happens quite often when the backup job overlaps the maintenance procedures; after I changed the schedule it happened once a month or less. But I am not sure that maintenance (index maintenance, DBCC, stats updates) is the reason for this anomaly. I would like to know whether you have had any experience like this and might know the root cause. The interesting part is that the RESTORE VERIFYONLY step also just skips those databases: it does not try to verify them, which is why I do not get failure errors; they are simply skipped.

    DECLARE @dbname sysname
    DECLARE @path nvarchar(120)
    DECLARE @credential sysname = 'BackupStorageCredential'
    DECLARE @date nvarchar(250) = CAST( GETDATE() AS Date )
    SET @path = N'[my_storage_url]'

    DECLARE db_cursor CURSOR FOR
        SELECT name FROM sys.databases WHERE name IN ('db1','db2','db3')

    OPEN db_cursor
    FETCH NEXT FROM db_cursor INTO @dbname

    WHILE @@FETCH_STATUS = 0
    BEGIN
        DECLARE @query_backupToAzBLOB NVARCHAR(max)
        DECLARE @query_verify NVARCHAR(max)

        SET @query_backupTOAzBLOB = 'BACKUP DATABASE [' + @dbname + '] TO URL =''' + @path + @dbname + '/' + @dbname + '_' + @date + '.bak'' WITH CREDENTIAL = ''' + @credential + ''',NOFORMAT, NOINIT, NAME =''' + @dbname + ''', NOSKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10, CHECKSUM'
        EXEC (@query_backupTOAzBLOB)

        SET @query_verify = 'RESTORE VERIFYONLY FROM URL =''' + @path + @dbname + '/' + @dbname + '_' + @date + '.bak'' WITH CREDENTIAL = ''' + @credential + ''', FILE = 1, NOUNLOAD, STATS = 5'
        EXEC(@query_verify)

        FETCH NEXT FROM db_cursor INTO @dbname
    END

    CLOSE db_cursor
    DEALLOCATE db_cursor
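A sketch of one way to make the skipped databases visible: wrap the two EXEC calls inside the loop in TRY...CATCH so any error is printed and re-raised, making the Agent step fail instead of moving on silently. The snippet below declares stand-in variables so it runs on its own; in the real script the TRY...CATCH would replace the loop body.

```sql
-- Stand-alone illustration of the TRY...CATCH shape; the dynamic SQL here is a harmless stand-in.
DECLARE @dbname sysname = N'db1';
DECLARE @query_backupTOAzBLOB nvarchar(max) = N'BACKUP DATABASE [' + @dbname + N'] TO DISK = N''NUL''';
DECLARE @query_verify nvarchar(max) = N'SELECT 1';

BEGIN TRY
    EXEC (@query_backupTOAzBLOB);
    EXEC (@query_verify);
END TRY
BEGIN CATCH
    PRINT N'Backup or verify failed for ' + @dbname + N': ' + ERROR_MESSAGE();
    THROW;  -- re-raise so the SQL Agent job step reports the failure
END CATCH;
```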
igelr (2162 rep)
Dec 29, 2021, 08:45 AM • Last activity: Dec 29, 2021, 08:42 PM
1 vote
1 answer
2207 views
SQL server restore from backup files on azure storage blob container (using SAS token)
I have set up a SAS token on an Azure blob storage container. It is currently valid, as demonstrated by the fact that this command in Windows cmd.exe works:

    azcopy list "https://<storage account>.blob.core.windows.net/<container>/path/to/file.BAK?<sas token>"

and returns as output:

    INFO: <file>;  Content Length: 7.58 GiB

Now, if I issue the following command in SQL Server via SSMS (the server runs locally on the same machine that issues the azcopy command, and the values in angle brackets are identical to the above; note that I have no "/" at the end of the container name and no "?" in the token, following the Microsoft documentation's recommendations):

    CREATE CREDENTIAL [https://<storage account>.blob.core.windows.net/<container>]
      -- this name must match the container path, start with https and must not contain a forward slash at the end
    WITH IDENTITY='SHARED ACCESS SIGNATURE'
      -- this is a mandatory string and should not be changed
     , SECRET = '<sas token>';
    GO

    RESTORE HEADERONLY FROM
    URL = N'https://<storage account>.blob.core.windows.net/<container>/path/to/file.BAK';

I get an error after the RESTORE command:

    Msg 3201, Level 16, State 2, Line 28
    Cannot open backup device 'https://<storage account>.blob.core.windows.net/<container>/path/to/file.BAK'. Operating system error 3(The system cannot find the path specified.).
    Msg 3013, Level 16, State 1, Line 28
    RESTORE HEADERONLY is terminating abnormally.

I checked and double-checked that there is no typo in the paths and that the tokens are identical. `<sas token>` is in the format

    sv=<...>&st=<...>&se=<...>&sr=c&sp=rl&sig=<...>

As a final point, if I DROP the credential before the RESTORE, I receive a different error message:

    Operating system error 86(The specified network password is not correct.).

so it looks like the token works... I would appreciate some suggestions on what could be going wrong here, or how I could troubleshoot why it behaves differently than the azcopy utility.

**EDIT 1**: A comment has been made about the case sensitivity of the storage file path. `<storage account>` and `<container>` are lower case both in Microsoft Azure Storage Explorer and in the azcopy and SQL commands. path/to/file.BAK is in mixed case, and also identical in Microsoft Azure Storage Explorer and in the azcopy and SQL commands.
pygri (63 rep)
Nov 26, 2021, 10:52 AM • Last activity: Nov 30, 2021, 10:51 AM
1 vote
0 answers
40 views
Azure VM SQL Server Automated Backup doing log backups only
We have an Azure VM running our own copy of SQL Server 2019, and we have just configured the Automated Backup functionality. I have it configured like this: [screenshot: Automated Backup configuration] Everything seems to be fine, except for the fact that we do not seem to be getting any full backups. I have checked our storage account in the backup container, and we have all the expected logs for each user database, but no trace of any full backups. Configuration seems pretty straightforward, so I can't see anything wrong here. I have also checked various logs on the VM for both Windows and SQL Server itself, and I can see the log backups from the Automated Backup happening, but no reference to full backups or any errors or warnings that might indicate something is wrong. Any ideas?
Crogacht (11 rep)
Sep 9, 2021, 09:25 PM
2 votes
1 answer
5805 views
Azure Block Blob Limitation
Recently I ran into an error when I tried to back up a database to Azure Blob storage. The error was this:

> 1117(The request could not be performed because of an I/O device error.)

The database .mdf file was about 80 GB. From reading a few posts, it turns out Azure Blob storage does have some type of limitation. If I understand correctly, there are only 50,000 blocks available and each block can hold roughly 4 MB. I played around with BACKUP DATABASE ... TO URL using varying parameters:

1. BACKUP DATABASE [mydb] TO URL WITH INIT (failed)
2. BACKUP DATABASE [mydb] TO URL WITH INIT, COMPRESSION (succeeded)
3. BACKUP DATABASE [mydb] TO URL WITH INIT, COMPRESSION, MAXTRANSFERSIZE = 4194304 (failed)
4. BACKUP DATABASE [mydb] TO URL WITH INIT, COMPRESSION, MAXTRANSFERSIZE = 4194304, BLOCKSIZE = 65536 (succeeded)

When the operation was successful, the backup file produced was about 7.8 GB. Again, the .mdf file was about 80 GB. I'm a bit confused as to how the BLOCKSIZE parameter is used as part of the BACKUP T-SQL command. Can someone explain why the operation would fail when this parameter is omitted?
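The arithmetic behind the limit: the largest backup a single block blob can hold is roughly 50,000 times the bytes written per block, so about 48 GB at 1 MB per block and about 195 GB at the 4 MB maximum; a compressed ~7.8 GB backup fits comfortably, while an uncompressed ~80 GB one can blow past the ceiling. The statement below is a sketch with a placeholder URL, not a tuned recommendation.

```sql
-- Arithmetic only: max backup size on one block blob = 50,000 blocks * bytes per block.
--   50,000 *  1 MB  =  50,000 MB  ~=  48.8 GB
--   50,000 *  4 MB  = 200,000 MB  ~= 195.3 GB   (4 MB is the largest MAXTRANSFERSIZE allowed)
-- Illustrative backup with the parameters spelled out; COMPRESSION shrinks what must fit under the ceiling.
BACKUP DATABASE [mydb]
TO URL = 'https://<storageaccount>.blob.core.windows.net/<container>/mydb.bak'
WITH INIT, COMPRESSION,
     MAXTRANSFERSIZE = 4194304,   -- 4 MB written per block
     BLOCKSIZE = 65536,           -- 64 KB device block size
     STATS = 10;
```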
sydney (167 rep)
Aug 23, 2021, 01:54 PM • Last activity: Aug 25, 2021, 05:28 PM
0 votes
0 answers
45 views
PostgreSQL for user management and another database for file management (Blob storage)
I am working on a project, a video hosting and online editing platform, with a microservice architecture in mind. One of my teammates uses PostgreSQL for user management, but I need to handle large files, and we are going to use Azure blob storage. I would like to know: should I create another database for file management, and if yes, what should it be? (I have MongoDB in mind.)
Hosten (1 rep)
Apr 29, 2021, 07:56 AM • Last activity: Apr 29, 2021, 12:10 PM