Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
3
votes
1
answers
461
views
How to deal with seldom used big database and postgresql?
I loaded OpenStreetMap data for the whole of Europe (400 GB) into PostgreSQL (9.3). Then I installed a geocoding API (Nominatim) that queries the database with GIS queries.
My problem is:
> This database is queried a few times a day, but because Postgres loads
> the data into its buffers on demand, my first query on a particular GPS
> point is always slow, and I do only one query per GPS point, so it's
> always slow (around 60 s versus 100 ms when the data is buffered).
What could I do here?
- I'm using an A6 Azure instance (4 cores, 28 GB memory, HDD).
- The server is Ubuntu 14.04.4 LTS.
- Swapping is enabled.
- There is an index on the geometry columns queried.
- `shared_buffers` is 10GB.
- `work_mem` is 256MB.
- `maintenance_work_mem` is 4GB.
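One mitigation worth noting (not from the original post) is warming the cache explicitly after a restart. A minimal sketch, assuming PostgreSQL 9.4+ for the bundled `pg_prewarm` extension (on 9.3 you would instead run a few representative geocoding queries from cron); the relation names below are illustrative Nominatim-style placeholders, not taken from the post:

```sql
-- One-time setup per database.
CREATE EXTENSION IF NOT EXISTS pg_prewarm;

-- Pull hot relations into shared_buffers ahead of the first query.
-- 'placex' / 'idx_placex_geometry' are hypothetical names.
SELECT pg_prewarm('placex');
SELECT pg_prewarm('idx_placex_geometry');
```

With 28 GB of RAM against a 400 GB dataset, prewarming only the indexes the geocoding queries actually touch is the realistic target, not the whole dataset.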
remi bourgarel
(259 rep)
Oct 4, 2016, 12:31 PM
• Last activity: Aug 2, 2025, 12:08 PM
0
votes
1
answers
141
views
Can I downscale/upscale my MySQL server automatically?
Is this doable? Since we know our platform's peak times, we are planning to downscale our DB after peak hours and then upgrade the specs before peak hours start. How can we do this? How does RDS do it, for example?
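If the MySQL server runs on an Azure VM (rather than a managed offering), the resize can be scripted and scheduled; a sketch with the Azure CLI, where the resource names are placeholders, runnable from cron or an Azure Automation runbook:

```bash
# Scale down after peak hours; note that a resize deallocates and
# reboots the VM, so this implies a short outage.
az vm resize \
  --resource-group my-rg \
  --name my-mysql-vm \
  --size Standard_D4s_v3
```

RDS does roughly the same under the hood: modifying the instance class restarts the instance onto different compute, with a brief interruption unless Multi-AZ masks it.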
JRA
(137 rep)
Feb 26, 2021, 09:34 AM
• Last activity: Jul 23, 2025, 08:03 AM
0
votes
1
answers
190
views
PostgreSQL: using stale statistics instead of current ones because stats collector is not responding
We are running `PostgreSQL 13` on an `Azure CentOS VM` and found this problem in the logs, followed by some slow `COMMIT` and `SET` statements. The slow-statement logs lasted for a period of less than 20 seconds. System metrics were normal before and during the problem, with a small spike in I/O wait of up to 4%.
> 2024-08-15 16:21:59.048 CST,,,33328,,62b10ea0.8230,14,,2022-06-21 08:19:44 CST,1/0,0,LOG,00000,"using stale statistics instead of current ones because stats collector is not responding",,,,,,,,,"","autovacuum launcher"
2024-08-15 16:22:09.203 CST,,,58821,,66bdbaa7.e5c5,1,,2024-08-15 16:21:59 CST,148/0,0,LOG,00000,"using stale statistics instead of current ones because stats collector is not responding",,,,,,,,,"","autovacuum worker"
2024-08-15 16:22:09.253 CST,"user_w","user_db",53133,"10.0.0.85:58698",66bdb747.cf8d,1,"COMMIT",2024-08-15 16:07:35 CST,46/0,0,LOG,00000,"duration: 21525.916 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.14:34356","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",48595,"10.0.0.68:33334",66bdb4d3.bdd3,1,"COMMIT",2024-08-15 15:57:07 CST,15/0,0,LOG,00000,"duration: 21383.608 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.18:36088","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",50680,"10.0.0.68:33714",66bdb5a9.c5f8,1,"COMMIT",2024-08-15 16:00:41 CST,25/0,0,LOG,00000,"duration: 20137.894 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.18:36400","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",42490,"10.0.0.68:60644",66bdb2d6.a5fa,1,"COMMIT",2024-08-15 15:48:38 CST,63/0,0,LOG,00000,"duration: 18201.579 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.18:36274","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",52468,"10.0.0.68:34266",66bdb6e0.ccf4,1,"COMMIT",2024-08-15 16:05:52 CST,30/0,0,LOG,00000,"duration: 20438.055 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.16:52796","client backend"
2024-08-15 16:22:09.269 CST,"user_w","user_db",55877,"10.0.0.52:47198",66bdb8e6.da45,2,"SET",2024-08-15 16:14:30 CST,57/0,0,LOG,00000,"duration: 3843.296 ms statement: SET application_name='app - 10.0.0.4:38932';",,,,,,,,,"app - 10.0.0.4:38932","client backend"
2024-08-15 16:22:09.269 CST,"user_w","user_db",55278,"10.0.0.70:59560",66bdb890.d7ee,1,"SET",2024-08-15 16:13:04 CST,43/0,0,LOG,00000,"duration: 20042.606 ms statement: SET application_name='app - 10.0.0.16:52848';",,,,,,,,,"app -10.0.0.16:52848","client backend"
From what I can check, the collector is using IPv6; IPv6 is enabled as of now and stats are getting updated. We only log slow statements, and the first slow `COMMIT` entry took 20 seconds and completed at **2024-08-15 16:22:09.253 CST**, which by calculation means it might have started before the first stats-collector log entry at **2024-08-15 16:21:59.048 CST**. We are unable to conclude whether the problem actually started with the stats collector or with the transactions, and what the cause of the issue is. The issue resolves itself in 10-20 seconds.
**UPDATE**
I have noticed that when the system is working normally there is no UDP socket for the postmaster if I run `netstat -n -u -p`. However, files under the `pg_stat_tmp` directory are getting updated and I can see the stats collector process in the process list. Why is there no visible UDP socket under the postmaster?
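One way to check from SQL whether the collector is actually writing fresh statistics during an incident (a diagnostic sketch, not from the original post):

```sql
-- These views are fed by the stats collector's temp files; if the
-- collector is healthy, the activity counters move between runs.
SELECT datname, stats_reset, xact_commit, blks_read
FROM pg_stat_database
WHERE datname = current_database();

-- Recent autovacuum timestamps also imply the collector is reachable.
SELECT relname, last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY last_autovacuum DESC NULLS LAST
LIMIT 5;
```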
goodfella
(595 rep)
Aug 19, 2024, 04:05 AM
• Last activity: Jun 27, 2025, 12:04 PM
0
votes
1
answers
45
views
Azure SQL on VM Availability group Async third node in DR region config
Our Availability Groups in Azure currently have two nodes, both on the same subnet, with a load balancer in front of them.
I'd like to add an async node in another region for DR purposes. This will be on a different subnet.
In an ideal world I'd split the current AG nodes onto different subnets, get rid of the LB, and update all the connection strings with `MultiSubnetFailover=True`. However, we have a very old code base and I've been told there isn't budget/time to locate, update and test all the connection strings.
If, instead, I updated the cluster with `RegisterAllProvidersIP=0`, will it work with the LBs and the new DR node? Will failovers between the two nodes on the same subnet still have to wait for the DNS TTL?
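For reference, the cluster-side change being considered is two parameters on the listener's network name resource; a sketch with the FailoverClusters PowerShell module, where the resource name is a placeholder:

```powershell
# Register only the active IP in DNS rather than all listener IPs.
Get-ClusterResource -Name 'AGName_ListenerName' |
    Set-ClusterParameter -Name RegisterAllProvidersIP -Value 0

# Shorten client-side DNS caching of the listener record (seconds).
Get-ClusterResource -Name 'AGName_ListenerName' |
    Set-ClusterParameter -Name HostRecordTTL -Value 300

# The network name resource must be cycled for the parameters to apply.
Stop-ClusterResource -Name 'AGName_ListenerName'
Start-ClusterResource -Name 'AGName_ListenerName'
```

With `RegisterAllProvidersIP=0`, clients resolve a single A record, so failovers do wait on DNS update and TTL expiry; that is exactly the delay `HostRecordTTL` is meant to soften.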
AlexP012
(53 rep)
Jan 21, 2025, 09:08 AM
• Last activity: Jun 4, 2025, 08:27 PM
2
votes
0
answers
37
views
PostgreSQL: How much memory should I allocate as ramdisk for "pg_stat_tmp"?
We have been facing a problem in an `Azure VM` running `PostgreSQL 13` where a recurring `disk availability` issue causes the `statistics collector process` to stall for a few seconds, followed by slow queries. After searching a few threads I found that moving `pg_stat_tmp` to a `ramdisk` could resolve this and maybe improve performance as well.
What would be the best size to allocate for the ramdisk? Is there any way to calculate it? What could happen if the collector runs out of this allocated memory?
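For context, the usual recipe is a small tmpfs plus `stats_temp_directory` (a setting that exists through PostgreSQL 14); a sketch where the 256 MB size and the path are assumptions, not recommendations from the post:

```
# /etc/fstab -- the pgstat temp files are typically only a few MB,
# so even a small tmpfs leaves generous headroom
tmpfs  /var/run/postgresql/pg_stat_tmp  tmpfs  size=256M,uid=postgres,gid=postgres,mode=0700  0  0
```

```
# postgresql.conf
stats_temp_directory = '/var/run/postgresql/pg_stat_tmp'
```

A rough sizing check is the current on-disk footprint (`du -sh` of the existing `pg_stat_tmp`), which scales with the number of databases and tables. If the tmpfs fills, writes of the stats files start failing and backends fall back to stale statistics, so oversizing by a few multiples is cheap insurance.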
goodfella
(595 rep)
Apr 28, 2025, 04:45 AM
0
votes
1
answers
29
views
Underpowered DR Async AG node
We are building async AG nodes on SQL Server in Azure VMs for DR purposes. To save money, the compute resource is a quarter of what the sync nodes have, with the idea that if we ever invoke DR the compute will be increased.
Could the underpowered third node impact the performance of the two sync nodes?
AlexP012
(53 rep)
Mar 4, 2025, 03:39 PM
• Last activity: Mar 10, 2025, 02:33 PM
1
votes
1
answers
2875
views
Azure Virtual Machine - local temp storage (D: drive) - how much IOPS it can handle?
Virtual machine size: `DS3` (under older generation sizes).
The Azure Portal (when deploying a VM) shows that `DS3` supports up to 16 data disks / up to `12800 IOPS` - that's fine.
But what I am interested in is its local/temp storage, the 28 GB D: drive.
Documentation shows that this local/temp drive is an SSD, but what I can't seem to find is information on how many IOPS this drive can handle.
More specifically, if my TempDB requires up to `1900` IOPS, can the D: drive on a `DS3` VM handle that requirement?
From this source https://learn.microsoft.com/en-us/azure/virtual-machines/sizes-previous-gen I can guess that D: is `3200` IOPS, but I'm not sure I understand it correctly...
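Rather than inferring from the docs, the local SSD can be measured directly on a test VM; a sketch using Microsoft's DiskSpd tool with a roughly TempDB-like mix (all values are illustrative):

```
:: 8 KB blocks, 60 s run, 8 outstanding I/Os, 4 threads,
:: random access, 25% writes, 1 GB test file on the temp drive
diskspd.exe -b8K -d60 -o8 -t4 -r -w25 -c1G D:\tempdb_test.dat
```

The total-IOPS line in the output can then be compared directly against the 1900 IOPS requirement.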

Aleksey Vitsko
(6195 rep)
Nov 13, 2020, 01:36 PM
• Last activity: Jan 21, 2025, 09:00 AM
0
votes
1
answers
861
views
Invoke-AzVMRunCommand on Azure VM using domain user
I am working on an ARM template to set up a SQL Server VM on Azure with an Always On high availability setup, using the Windows Server 2019 and SQL Server 2019 Enterprise image. I am trying to automate everything using the ARM template.
Now, I need to execute the SETSPN command on one of the Azure SQL Server VMs using the ARM template, for which I used a script deployment template, and there I use the **Invoke-AzVMRunCommand** command to run a PowerShell script on the Azure VM.
As Invoke-AzVMRunCommand runs the PowerShell script in the System account context, I get an access-denied error when the SETSPN command tries to execute on the remote VM.
So I am looking for any other option to run a PowerShell command on an Azure VM in an automated way, in the context of a domain user that has permission to execute the SETSPN command.
Here is the command I am trying to execute on the VM using the ARM deployment script and Invoke-AzVMRunCommand:
**setspn -S MSSQLSvc/..com:1433 domain\sqlserviceaccount**
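One workaround pattern (a sketch only; names are placeholders and the credential handling is deliberately oversimplified) is to have the Run Command script hop into a domain-user context locally via PowerShell remoting:

```powershell
# Runs inside the VM as SYSTEM via Invoke-AzVMRunCommand.
# In practice the password should come from Azure Key Vault or a
# secured template parameter, never hard-coded as shown here.
$pass = ConvertTo-SecureString 'PLACEHOLDER' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('DOMAIN\setupadmin', $pass)

# Loopback remoting session so setspn runs as the domain user.
Invoke-Command -ComputerName localhost -Credential $cred -ScriptBlock {
    setspn -S MSSQLSvc/sqlvm01.domain.com:1433 domain\sqlserviceaccount  # hypothetical SPN
}
```

Caveats: PSRemoting must be enabled on the VM, and because `setspn` talks to a domain controller this can still hit the second-hop problem; `-Authentication Credssp` or a one-shot scheduled task running as the domain account are the usual escape hatches.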
Nirav B
Sep 13, 2022, 04:57 PM
• Last activity: Dec 29, 2024, 11:07 AM
1
votes
1
answers
153
views
Running into MSSQLSERVER_701 "insufficient memory in the resource pool to run the SQL queries" when Azure SQL backup runs
We have a VM that was on-prem, but we recently moved it into Azure. The VM hosts a SQL Server instance with a few databases on it.
After setting up a schedule of SQL database backups through Azure, we've consistently encountered an error:
SQL ErrorNumber: 701
There is insufficient system memory in resource pool 'internal' to run this query.
Error: 3271, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
BACKUP DATABASE is terminating abnormally.
The alert points to this Microsoft link: https://go.microsoft.com/fwlink/?linkid=2087433
Prior to moving our VM into Azure, I don't believe we ever encountered this error during SQL backups, nor do I recall seeing it for other transactions (part of why I don't have much experience troubleshooting memory errors). I'm able to back up these databases in SQL Server Management Studio without issue, and our previous product for taking backups didn't throw this error either.
Has anyone else encountered issues like this after moving a SQL Server instance and its VM into Azure? Given these details, I would greatly appreciate any recommendations as to where I should begin troubleshooting this issue.
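A reasonable first step when error 701 blames the 'internal' resource pool is to see which memory clerks are consuming memory while the Azure backup runs, and to confirm `max server memory` leaves headroom for the backup extension; a diagnostic sketch:

```sql
-- Top memory consumers by clerk (capture while a backup is active).
SELECT TOP (10) [type], name, pages_kb / 1024 AS size_mb
FROM sys.dm_os_memory_clerks
ORDER BY pages_kb DESC;

-- Current cap; 2147483647 means effectively uncapped.
SELECT value_in_use AS max_server_memory_mb
FROM sys.configurations
WHERE name = 'max server memory (MB)';
```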
j. kiel
(31 rep)
Jul 2, 2024, 03:52 PM
• Last activity: Jul 12, 2024, 06:28 PM
0
votes
1
answers
66
views
Does Azure automated backup for SQL database on Azure VM have way to detect log chain?
I have SQL Server on an Azure VM.
If I enable Azure automated backup for the SQL Server on the Azure VM to back up the DBs (full/diff/log), and suppose an admin performs a manual log backup (which will effectively break the log chain of the Azure backups), is there any alerting feature provided by Azure?
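As far as I know there is no built-in broken-log-chain alert, but out-of-band log backups are visible in `msdb`, so a scheduled job could raise its own alert; a sketch (the `NOT LIKE 'https://%'` filter assumes the Azure automated backup writes via backup-to-URL):

```sql
-- Log backups taken outside the Azure extension break its chain.
SELECT bs.database_name,
       bs.backup_start_date,
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bmf.media_set_id = bs.media_set_id
WHERE bs.type = 'L'                                  -- log backups only
  AND bmf.physical_device_name NOT LIKE 'https://%'  -- not backup-to-URL
ORDER BY bs.backup_start_date DESC;
```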
variable
(3590 rep)
Mar 31, 2024, 06:26 AM
• Last activity: Apr 1, 2024, 09:34 PM
0
votes
1
answers
48
views
For Azure VM SQL log shipping, What is the reason for requiring storage account to be in the same resource group as SQL VM? Any workaround?
https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/log-shipping-configure?view=azuresql
The prerequisite for log shipping in Azure VM based SQL servers is:
> At least two domain-joined Azure virtual machines with SQL Server in
> the same resource group as an Azure storage account for transaction
> log backups.
What is the reason for requiring the storage account to be in the same resource group as the SQL VMs?
Is there any alternative if one or both VMs are in a different resource group from the storage account?
variable
(3590 rep)
Mar 29, 2024, 12:43 PM
• Last activity: Mar 29, 2024, 05:54 PM
1
votes
1
answers
530
views
SQL Server - Log Shipping to Azure VM
I'm looking for some advice regarding Log Shipping from SQL Server 2016 On-Premises to SQL Server 2016 on an Azure VM.
From my existing knowledge and research, there are a few ways of going about this.
**Option 1 - Hand-crafted Log Shipping**
Instead of using SQL Server's native log shipping, we do it manually, i.e.:
1. Write a script to back up the log to a local file share
2. Write a script that copies the files up to the Azure storage account
3. Write a script to restore the files to the secondary database.
The main complication I can see with this is implementing a way for the restore script to know which files it needs to restore. We could track it in some way e.g. based on the last restored date.
**Option 2 - Use the native Log Shipping with a file share**
1. Create a file share on our network which is accessible from the secondary SQL Server in Azure
2. Configure Log Shipping to backup the T-Log to that folder
3. Configure Log Shipping to restore from that folder
This approach seems somewhat simpler, although step 1 above may be more complex than I realise at this stage.
I appreciate that these options are very high-level, but has anyone done this using the above approaches, or a different approach altogether?
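On the tracking complication in Option 1: `msdb` on the secondary already records what has been restored, so the script can diff the share against restore history instead of keeping its own state; a sketch of the idea (the database name is a placeholder):

```sql
-- Most recent log backup file restored on the secondary. Anything in
-- the share newer than this still needs restoring, in LSN order,
-- WITH NORECOVERY (or STANDBY).
SELECT TOP (1) rh.restore_date, bmf.physical_device_name
FROM msdb.dbo.restorehistory AS rh
JOIN msdb.dbo.backupset AS bs
    ON bs.backup_set_id = rh.backup_set_id
JOIN msdb.dbo.backupmediafamily AS bmf
    ON bmf.media_set_id = bs.media_set_id
WHERE rh.destination_database_name = 'MyDatabase'
  AND rh.restore_type = 'L'
ORDER BY rh.restore_date DESC;
```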
FrugalShaun
(441 rep)
Mar 11, 2024, 03:37 PM
• Last activity: Mar 18, 2024, 01:02 PM
0
votes
1
answers
880
views
What are recommended values for block size, buffercount and maxtransfer size for Ola backups to Azure?
I have SQL Servers running on on-prem and Azure VM hardware.
The backups write to a file share located on a network VM.
I want to change the backup destination to Azure blob storage.
For this I have created an Azure storage account and tested the Ola backup script with credentials (account key/identity and also with a shared access signature).
I want some guidance on the recommended values to use for the following:
1. Block size
2. Buffer count
3. Max transfer size
I understand that a block blob has a 200 GB file limit, and this is OK for me as my compressed backups come to under 110 GB.
I understand that Azure blob storage has a limitation of 50,000 blocks * 4 MB maxtransfersize.
What are the ideal values to use so that I don't hit the above limit?
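Common starting values that stay inside those block-blob limits (a sketch; the URL is a placeholder and the numbers should be validated with test backups):

```sql
-- 50,000 blocks x 4 MB MaxTransferSize caps one blob at ~195 GB,
-- comfortably above a ~110 GB compressed backup.
EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @URL = 'https://myaccount.blob.core.windows.net/backups',
    @BackupType = 'FULL',
    @Compress = 'Y',
    @MaxTransferSize = 4194304,  -- 4 MB, the maximum for backup to URL
    @BlockSize = 65536,          -- 64 KB, the maximum BLOCKSIZE
    @BufferCount = 50;           -- memory used ~ BufferCount * MaxTransferSize (~200 MB)
```

If a database ever outgrows the single-blob cap, striping the backup across multiple URLs raises the ceiling.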
variable
(3590 rep)
Mar 8, 2024, 09:24 AM
• Last activity: Mar 8, 2024, 10:01 AM
1
votes
2
answers
552
views
Database corruption on VM shutdown
We moved a SQL Server database to a virtual machine in Azure.
Since the database only contains historical data and does not have transactions anymore we want to turn off and turn on the VM on a schedule to reduce costs.
As far as I know, database corruption can happen if the machine restarts in the middle of a transaction, so I have three questions about this:
1. Can database corruption happen in a database that no longer has transactions when the machine is shut down?
2. Is there a correct way to shut down the machine and avoid a corruption scenario?
3. Should we leave this machine running 24x7?
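On question 2, the conservative pattern is to stop the SQL Server service before the VM is deallocated, so the instance checkpoints and closes the databases cleanly; a sketch, assuming a default instance and placeholder Azure names:

```powershell
# Run as part of the scheduled shutdown.
Stop-Service -Name 'MSSQLSERVER' -Force   # clean SQL shutdown + checkpoint
Stop-AzVM -ResourceGroupName 'my-rg' -Name 'my-sql-vm' -Force
```

A scheduled Azure stop already sends a guest OS shutdown that SQL Server normally survives; the explicit service stop just removes any race with in-flight I/O.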
Andres Arango
(195 rep)
Jul 7, 2021, 05:47 PM
• Last activity: Feb 4, 2024, 04:50 PM
1
votes
1
answers
314
views
How to start Neo4j on Azure
I'm running Neo4j v4.x on an Azure VM (Linux, Ubuntu 16.04). Everything was running fine until my project required a resize for more storage. This seemingly went well as I proceeded to load data, but then a glitch of unknown type caused the Neo4j server to stop. I could no longer access it from the Neo4j browser or from Python code that queries it. I've restarted the VM. No effect. I even redeployed the VM, without effect. From PuTTY:
neo4j status
Neo4j is not running
I've tried numerous commands, but this one comes closest to trying to do something:
sudo ssh {myusername}@{vm ip} "systemctl restart neo4j V4_0.service"
It asked for my password and accepted it (no rejection errors), and then gave me
> Failed to restart neo4j.service: Interactive authentication required.
See system logs and 'systemctl status neo4j.service' for details.
Failed to restart V4_0.service: Interactive authentication required.
See system logs and 'systemctl status V4_0.service' for details.
I thought I'd authenticated. In debugging, I tried entering a bad username or password and was immediately rejected, as expected.
Another set of info was generated by the command:
root@neo4jVM:/etc/init.d# neo4j start
Result:
> Directories in use:
> home: /var/lib/neo4j
> config: /etc/neo4j
> logs: /var/log/neo4j
> plugins: /var/lib/neo4j/plugins
> import: /var/lib/neo4j/import
> data: /var/lib/neo4j/data
> certificates: /var/lib/neo4j/certificates
> run: /var/run/neo4j
> Starting Neo4j.
> WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
> Started neo4j (pid 71995). It is available at http://0.0.0.0:7474/
> There may be a short delay until the server is ready.
> See /var/log/neo4j/neo4j.log for current status.
Despite the promising message, it never started, and http://0.0.0.0:7474 did not connect.
For clarification, I tried variations on the one command in PuTTY: `sudo ssh dastumpf@104.43.228.191 "systemctl restart neo4j"`, `sudo ssh dastumpf@104.43.228.191 "systemctl restart neo4j.service"`, etc.
Exploring the Neo4j log, I found this: `...LifecycleManagingDatabaseService@78116659' was successfully initialized, but failed to start.`
What am I missing?
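Separate from the start failure, the open-files warning in that output is worth fixing; when Neo4j is managed by systemd the limit comes from the unit file, so a drop-in override is the usual fix (a sketch, not taken from this VM):

```
# sudo systemctl edit neo4j   -> creates an override drop-in
[Service]
LimitNOFILE=60000
```

followed by `sudo systemctl daemon-reload` and `sudo systemctl restart neo4j`. Also note that the "Interactive authentication required" error typically comes from invoking `systemctl` through a nested, non-interactive `ssh`; running `sudo systemctl restart neo4j` directly in the PuTTY session avoids it.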
David A Stumpf
(121 rep)
Jul 17, 2020, 03:55 AM
• Last activity: Jan 11, 2024, 12:03 PM
0
votes
1
answers
300
views
What is the benefit of multiple subnet (FCI/AG) in a virtualized setup?
In a virtualized setup, for example an on-prem VMware-based setup or Azure-based VMs, what exactly is the benefit of placing the SQL FCI or AG nodes in separate subnets?
variable
(3590 rep)
Oct 21, 2023, 01:41 PM
• Last activity: Oct 21, 2023, 05:41 PM
1
votes
2
answers
101
views
When can we downsize Azure Virtual machine for SQL Server?
For now, our production SQL Server is a `Standard_D96ds_v5` Azure VM.
We got a request to check whether we can downsize the server to `Standard_D48ds_v5`.
What measures should we take into consideration to confirm that we can downsize the server?
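A concrete starting point for that assessment is the CPU history SQL Server keeps internally (one sample per minute, roughly the last few hours); a commonly used sketch against the scheduler-monitor ring buffer:

```sql
-- SQL process CPU vs. idle, newest samples first.
SELECT TOP (60)
       [timestamp] AS ms_ticks,  -- convert via sys.dm_os_sys_info.ms_ticks for wall-clock time
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS idle_pct
FROM (
    SELECT [timestamp], CONVERT(xml, record) AS record
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE N'%<SystemHealth>%'
) AS rb
ORDER BY [timestamp] DESC;
```

Sustained peaks comfortably under ~50%, together with matching memory and IOPS checks, are the usual evidence for halving the size; remember that the VM-level disk throughput caps also shrink with the smaller SKU.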
adam.g
(465 rep)
Sep 22, 2023, 09:51 AM
• Last activity: Sep 28, 2023, 03:29 PM
0
votes
2
answers
183
views
Azure Disk performance optimizaton in SQL VM
We are using an Azure SQL VM and we have multiple data files of multiple databases stored on the same drive (let's say drive E: of size 3 TB), which is created from 4 * 512 GB disks + 4 * 256 GB disks.
I was thinking of removing those smaller disks and adding a bigger disk like 1 TB or 2 TB.
What would you recommend from the following for getting the best performance from the disks:
1) A single disk of 3 TB.
2) 3 disks of 1 TB each.
3) 1 disk of 1 TB and 1 disk of 2 TB.
It's Azure premium SSD.
---
I checked the documentation, and to my understanding, using P30 and P40 disks (i.e. 1 TB and 2 TB disks respectively) seems to be better. However, if a single database is around 1.7 TB in size, I'm not sure whether it would be better to use a single 2 TB disk for it or to combine two 1 TB disks.
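Whichever sizes are chosen, the throughput of multiple disks only adds up when they are striped into one volume; on an Azure Windows VM that means Storage Spaces. A sketch with placeholder names (64 KB interleave/allocation unit per common SQL guidance):

```powershell
# Pool all poolable data disks and stripe them into one simple
# (RAID-0-style) virtual disk.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'SqlDataPool' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName 'SqlDataPool' -FriendlyName 'SqlData' `
    -ResiliencySettingName Simple -UseMaximumSize `
    -NumberOfColumns $disks.Count -Interleave 65536
Get-VirtualDisk -FriendlyName 'SqlData' | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 65536
```

The trade-off then reduces to: two striped P30s give the sum of two disks' IOPS, while one P40 gives a single disk's (higher) IOPS with less management overhead.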
Bishal Basyal
(7 rep)
Sep 25, 2023, 11:06 AM
• Last activity: Sep 28, 2023, 09:40 AM
1
votes
1
answers
782
views
Combination of multiple instances and AG listeners - connecting to AG listener sends me to different instance
The environment in question consists of an AD domain and 2 Azure SQL VMs. The SQL Server version is SQL 2022 Standard, CU2. There are currently three DB instances on each Azure SQL VM, and databases in those instances run in (basic) Always On AGs for the non-system databases.
Recently, I cooperated with Microsoft on hosting all the SQL Servers in Azure, and their advice, to fully leverage the capabilities of the Azure IaaS SQL agent, was to install a default instance in addition to the 3 named instances. Which I did.
Now for those 3 instances, the situation is as follows:
- The SQL Server VMs have, let's say, addresses A and B in their subnet
- The listeners for each instance, and for each DB in that instance, have addresses C, D, E, F, ...
- The listeners in 2 instances run on TCP ports > 49152
- The listeners for the last instance run on port 1433, but on addresses different from A and B
**My problem**
After installing the default SQL instance as per the advice from Microsoft, when connecting to a listener on port 1433, I somehow ended up in the default instance instead of the instance to which the listener running on TCP 1433 belongs.
/* EDIT 01 */
To answer Sean's valuable questions:
To reach the AG listener listening on 1433 (but on a different IP from the server's address), connecting to it landed me in the default instance - both running on 1433 but with different IPs. Once the default instance was uninstalled, everything started to work again.
All the named instances listen on dynamic ports; the default instance listened on 1433. Listeners of one named instance also listen on 1433, but each of those listeners has an individual IP address. The network setup is according to the MSFT guideline - an Azure load balancer in front of the Azure SQL VMs for the AGs to work properly. I did not touch the load balancer to reflect the default instance listening on 1433, because that default instance didn't have any AG.
/* End of EDIT 01 */
Does anyone have an idea why?
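When debugging which endpoint a connection really landed on, this pair, run from the suspect connection, makes any mismatch visible (a diagnostic sketch):

```sql
-- Which instance answered, and on which local IP/port?
SELECT @@SERVERNAME AS instance_name,
       c.local_net_address,
       c.local_tcp_port
FROM sys.dm_exec_connections AS c
WHERE c.session_id = @@SPID;
```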
Sranda
(360 rep)
Apr 20, 2023, 01:23 PM
• Last activity: Apr 21, 2023, 01:23 PM
1
votes
1
answers
306
views
Azure VM Drive size configuration for MS-SQL
I have run into a scenario for a production server where **I was recommended to create a separate drive for each one of the bigger clients** (based on database sizes) whereas all other smaller clients can be placed on the same drive in Azure VM for better performance.
Eg:
- Client 1 has 800GB size (data file)
- Client 2 has 900GB size (data file)
- Client 3 has 50GB size (data file)
- Client 4 has 40GB size (data file)
We already have:
- 1 separate **drive D: for tempdb** (both .mdf and .ldf are stored on drive D)
- and 1 separate **drive E: for log files** (the .ldf files for all clients are stored here on E:)
- and I was recommended to create a drive F: for Client 3 and Client 4 (to store data files, whereas their log files are already stored on E:),
- a drive G: for Client 1 (to store data files, whereas their log files are already stored on E:)
- and a drive H: for Client 2 (to store data files, whereas their log files are already stored on E:).
Can this increase performance? Also, if I need to add a disk (Premium SSD) for 1 TB, would it be better to use 4 x 256 GB, 2 x 512 GB, or a single 1 TB disk?
Please let me know if anyone has any experience with this.
Bishal Basyal
(7 rep)
Apr 13, 2023, 10:50 AM
• Last activity: Apr 13, 2023, 12:43 PM