Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
142
views
How can I manually backup a large Azure Flexible Postgres database?
I have a large (3TB+) single database on an Azure Postgres Flexible Server. This is a managed service, so it takes backups automatically and the frequency can be adjusted. As a disaster recovery/emergency option, I'd like to periodically take a snapshot of the database separately from these incremental backups and store it somewhere else.
I don't need ongoing connectivity for incremental backups (the current managed system handles that); I just want a periodic backup that would be restorable to a new server. I would simply use Azure's own Vaulted Backup option, which provides enough separation for this purpose. But according to the docs:
>Vaulted backups are supported for server size <= 1 TB. If backup is configured on server size larger than 1 TB, the backup operation fails.
So I'm looking for other options. I checked out Barman, which looks like it should be perfect, but the managed Postgres Flexible Server doesn't support ssh/rsync access, and I'm getting conflicting info on whether pg_basebackup is supported; it seems unlikely, along with other physical replication options.
I can't use Azure's own tools to back up my server because it's too big, and I can't use external tools because they don't get shell access. The last option is pg_dump, which in my experience will take days to finish and may need to be run against a point-in-time restore so that performance isn't affected. I'd prefer to create and restore from a physical backup rather than a logical one, so this is a last resort.
Is there a better option for a managed server this size than pg_dump?
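If pg_dump does end up being the fallback, a directory-format dump with parallel jobs is usually far faster than a single-threaded plain-text dump, and it can be restored in parallel too. A minimal sketch, assuming a point-in-time-restored copy of the server and hypothetical host/database names:
```
# Parallel, compressed, directory-format dump taken against the PITR copy (hypothetical names).
pg_dump \
  --host=pitr-copy.postgres.database.azure.com \
  --username=adminuser \
  --dbname=mydb \
  --format=directory \
  --jobs=8 \
  --compress=6 \
  --file=/backup/mydb_dump

# Restore into a fresh server, also in parallel (hypothetical target).
# The target database must already exist, or add --create and connect to the postgres database.
pg_restore \
  --host=new-server.postgres.database.azure.com \
  --username=adminuser \
  --dbname=mydb \
  --jobs=8 \
  /backup/mydb_dump
```
The dump directory could then be copied to a separate storage account (for example with azcopy) to get the off-service copy.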
zach
(1 rep)
May 28, 2025, 11:05 PM
• Last activity: Aug 5, 2025, 12:06 AM
1
votes
1
answers
462
views
How can I improve low IOPS performance for an Azure SQL Database Hyperscale elastic pool?
I'm performance testing a series of data migration scripts that mainly consist of large INSERTs (sometimes 100M-200M records) driven by often complex SELECTs with a lot of hash joins.
Configuration:
- **Hyperscale: Premium-series, 16 vCores**
- Max **4 TB** of locally redundant backup storage; for most of the tests only ~1 TB was allocated
Problem: during one of the critical queries, which takes circa 2 hours and performs a 100M+ row insert on a heavily indexed table (7 indexes), there are extremely wild swings in write IOPS:
I set **MAXDOP=16** manually for the query (1 DOP per vCPU)
Questions:
1. My best guess is that IOPS gets severely degraded when the local SSD (presumably with a maximum of 2560 IOPS per vCore) fills up, and the shared storage instance becomes the severe bottleneck. Is this accurate? Note that in this case this happens roughly 50% of the time, and virtually everything grinds to a halt.
2. If this is the case, is there a way to calculate the effective minimum IOPS of the shared storage instance?
3. Is there a way to improve the minimum IOPS of the shared storage instance, e.g. by manually over-allocating the storage space? (this would be the standard approach for a lot of other configurations where e.g. you get 3 IOPS per GB)
4. How can I get the actual IOPS figures rather than percentages? (A DMV sampling sketch follows this list.)
5. Any other suggestions/tips?
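For question 4, a minimal sketch of one way to get absolute numbers (assuming the standard DMVs, which Azure SQL Database exposes): sample sys.dm_io_virtual_file_stats twice and divide the deltas by the elapsed seconds, rather than reading the percentage columns of sys.dm_db_resource_stats.
```
-- Sample the cumulative file-I/O counters twice and derive IOPS from the deltas.
SELECT file_id, num_of_reads, num_of_writes, num_of_bytes_written
INTO   #io_t0
FROM   sys.dm_io_virtual_file_stats(DB_ID(), NULL);

WAITFOR DELAY '00:01:00';   -- 60-second sampling interval

SELECT t1.file_id,
       (t1.num_of_reads  - t0.num_of_reads)  / 60.0 AS read_iops,
       (t1.num_of_writes - t0.num_of_writes) / 60.0 AS write_iops,
       (t1.num_of_bytes_written - t0.num_of_bytes_written) / 60.0 / 1048576.0 AS write_mb_per_sec
FROM   sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS t1
JOIN   #io_t0 AS t0 ON t0.file_id = t1.file_id;

DROP TABLE #io_t0;
```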
Thanks in advance

Andrew G
(71 rep)
Mar 5, 2024, 03:39 AM
• Last activity: Aug 4, 2025, 02:46 AM
0
votes
1
answers
137
views
SQL Server 2017 in Azure Cloud config DB Mail
I have set up SQL Server 2017 in the Azure cloud. I have created a new GMX email address and enabled it for POP3/external programs.
I have set up Database Mail on the SQL Server, but I am getting the following authentication error:
> The mail could not be sent to the recipients because of the mail server failure. (Sending Mail using Account 1 (2020-01-25T12:14:37).
>
> Exception Message: Cannot send mails to mail server. (The SMTP server requires a secure connection or the client was not authenticated. The server response was: Authentication required). )
I have tried ports 25 and 587 with/without SSL enabled on server mail.gmx.com.
I have checked the network for Azure ports, and AllowInternetOutBound is enabled and set to Any.
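For reference, a minimal sketch of the Database Mail account setup being described, with hypothetical names and credentials. The error text ("requires a secure connection or the client was not authenticated") suggests double-checking that the account has basic authentication configured and SSL enabled; with many providers the username must be the full mailbox address.
```
-- Recreate the Database Mail account with explicit SMTP auth and TLS (hypothetical names/credentials).
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name    = N'GMX Account',
    @email_address   = N'someone@gmx.com',
    @mailserver_name = N'mail.gmx.com',
    @port            = 587,
    @enable_ssl      = 1,                   -- the server demands a secure connection
    @username        = N'someone@gmx.com',  -- full mailbox address, not just the local part
    @password        = N'account-or-app-password';

EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = N'GMX Profile';

EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name    = N'GMX Profile',
    @account_name    = N'GMX Account',
    @sequence_number = 1;
```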
Any assistance would be great, thanks in advance
Jamie Templeman
Jan 25, 2020, 12:39 PM
• Last activity: Aug 2, 2025, 09:09 PM
2
votes
1
answers
1355
views
Backup from dacpac file extracted using SSDT
I tried the method given in this answer by Ramankant Dadhichi, but deploying failed:
https://dba.stackexchange.com/questions/244167/backup-a-database-from-azure-sql-managed-instance-and-restore-to-on-premise-sql?newreg=ec931412355c4730acfff21d2c7c78cd
I have my database in an Azure SQL Managed Instance. I extracted a dacpac using SSDT, but now when I try to deploy the extracted file using SSMS, I get the following error:
> Could not deploy package.
> Error SQL0: The element [releaseengineer] cannot be deployed. This element contains state that cannot be recreated in the target database.
> Error SQL0: The element [Reporter] cannot be deployed. This element contains state that cannot be recreated in the target database.
(Microsoft.SqlServer.Dac)
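The elements named in the error appear to be database users whose state (for example passwords or SIDs) cannot be captured in a dacpac. One commonly used workaround, sketched here on the assumption that the users and their permissions can be recreated separately on the target, is to publish with those object types excluded:
```
REM Hypothetical file/server names; excluded users/logins must be recreated on the target afterwards.
SqlPackage /Action:Publish ^
  /SourceFile:"MyDatabase.dacpac" ^
  /TargetServerName:"localhost" ^
  /TargetDatabaseName:"MyDatabase" ^
  /p:ExcludeObjectTypes="Users;Logins;RoleMembership"
```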
Priya Sharma
(21 rep)
Feb 10, 2020, 01:52 AM
• Last activity: Aug 2, 2025, 01:04 PM
0
votes
1
answers
144
views
Save changes of records of products in Azure DB in way that we can see historical data
I am curious about the "best" model for storing data (including historical changes) in an Azure DB so that I will be able to see a timeline of those changes.
Example:
We have a company which makes products. These products change prices or descriptions very often; sometimes they get deleted and the next day they are published again (with some changes).
My app checks that company every hour and checks all the products they have.
I save them to the Azure DB (they don't need to be saved in the same shape the company has them, just a very short version):
- ID,
- date,
- price,
- description,
- a few links to pics,
- location.
Sometimes they update the price, so I check again and see that the record in my DB has a different price than the company shop. Or they change the description, etc.
- I would like to store that change (including the previous value and the new value).
- Include when it happened (date and hour, nothing too specific).
But I can say that:
- I will have around 10 fields.
- I am checking thousands of products every hour.
So I feel bad that on every small change I store the whole Product record again.
It can also happen that a product is removed and added again under a different IdOfProduct (this is generated by a sequence on the company side), but with the same location and description and, for example, a different price.
So I am also looking for a way to store the products (even historical ones, which were deleted) so that when I get a "new product" I can quickly compare it with the database for something similar (check location, then description, maybe even price), and if it is similar, make a record in another table that there is a possible connection to the old product which was deleted.
Can anyone give me advice on how to store this information without wasting resources (DB size, or CPU spent checking linked products because of a messy DB design)?
Thank you. Every hint will help me :)
I was thinking of a DB schema like this:


user3132499
Sep 3, 2021, 09:03 AM
• Last activity: Jul 31, 2025, 08:03 PM
1
votes
1
answers
389
views
Sharded Mongodb stalls randomly
I have set up a sharded MongoDB cluster using hashed sharding in Kubernetes. I first created the config server replica set and then created 2 shard replica sets. Finally, I created mongos to connect to the sharded cluster.
I followed the link below to set up sharded MongoDB: https://docs.mongodb.com/manual/tutorial/deploy-sharded-cluster-hashed-sharding/
After creating mongos, I enabled sharding for the database and sharded the collection using the hashed sharding strategy.
After all this setup, I am able to connect to mongos, I have added some data to some of the collections in the database, and I can check the distribution of data across the different shards.
The issue I'm facing is that when accessing MongoDB from my Java Spring Boot project, the connection stalls randomly. But once the connection is established for a particular query, that query won't stall for the next few tries. After some idle time, if I make a request to MongoDB again, it will start to stall again.
Note: MongoDB is hosted on "DS2 v2" VMs and this cluster has 4 nodes: 1 for the config server, 2 for the shards and 1 for mongos.
One link suggested setting a proper shard key for all the collections, since this has an impact on MongoDB's performance. There were a couple of things to consider before selecting the right shard key, and I considered all of those factors. I read this link to select the shard key: https://www.mongodb.com/blog/post/on-selecting-a-shard-key-for-mongodb
Another solution I came across was to set ShardingTaskExecutorPoolMaxConnecting to limit the rate at which mongos nodes add connections to connection pools. I tried setting it to 20, 5, 100 and 150, and none of these resolved the stalling issue. This is the link: https://jira.mongodb.org/browse/SERVER-29237
I tried tweaking other parameters such as ShardingTaskExecutorPoolMinSize and taskExecutorPoolSize. Even this did not resolve the stalling issue.
I also set --serviceExecutor to adaptive.
I increased wiredTigerCacheSizeGB from 0.25 to 2. This also didn't make any difference to the stalling issue.
1) YAML file of service and Deployment for config server of mongodb is -
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
kompose.version: 1.12.0 (0ab07be)
creationTimestamp: null
labels:
io.kompose.service: mongo-conf-service
name: mongo-conf-service
spec:
type: LoadBalancer
ports:
- name: "27017"
port: 27017
targetPort: 27017
selector:
io.kompose.service: mongo-conf-service
status:
loadBalancer: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
kompose.version: 1.12.0 (0ab07be)
creationTimestamp: null
labels:
io.kompose.service: mongo-conf-service
name: mongo-conf-service
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: mongo-conf-service
spec:
containers:
- env:
- name: MONGO_INITDB_ROOT_USERNAME
value: #Username
- name: MONGO_INITDB_ROOT_PASSWORD
value: #Password
command:
- "mongod"
- "--storageEngine"
- "wiredTiger"
- "--port"
- "27017"
- "--bind_ip"
- "0.0.0.0"
- "--wiredTigerCacheSizeGB"
- "2"
- "--configsvr"
- "--replSet"
- "ConfigDBRepSet"
image: #MongoImageName
name: mongo-conf-service
ports:
- containerPort: 27017
resources: {}
volumeMounts:
- name: mongo-conf
mountPath: /data/db
restartPolicy: Always
volumes:
- name: mongo-conf
persistentVolumeClaim:
claimName: mongo-conf
2) YAML file of service and Deployment for Shard mongodb is -
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
kompose.version: 1.12.0 (0ab07be)
creationTimestamp: null
labels:
io.kompose.service: mongo-shard
name: mongo-shard
spec:
type: LoadBalancer
ports:
- name: "27017"
port: 27017
targetPort: 27017
selector:
io.kompose.service: mongo-shard
status:
loadBalancer: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
kompose.version: 1.12.0 (0ab07be)
creationTimestamp: null
labels:
io.kompose.service: mongo-shard
name: mongo-shard
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: mongo-shard
spec:
containers:
- env:
- name: MONGO_INITDB_ROOT_USERNAME
value: #Username
- name: MONGO_INITDB_ROOT_PASSWORD
value: #Password
command:
- "mongod"
- "--storageEngine"
- "wiredTiger"
- "--port"
- "27017"
- "--bind_ip"
- "0.0.0.0"
- "--wiredTigerCacheSizeGB"
- "2"
- "--shardsvr"
- "--replSet"
- "Shard1RepSet"
image: #MongoImage
name: mongo-shard
ports:
- containerPort: 27017
resources: {}
3) YAML File of mongos server:
apiVersion: v1
items:
- apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
kompose.version: 1.12.0 (0ab07be)
creationTimestamp: null
labels:
io.kompose.service: mongos-service
name: mongos-service
spec:
type: LoadBalancer
ports:
- name: "27017"
port: 27017
targetPort: 27017
selector:
io.kompose.service: mongos-service
status:
loadBalancer: {}
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -d -f docker-compose.yml -o azure-deployment.yaml
kompose.version: 1.12.0 (0ab07be)
creationTimestamp: null
labels:
io.kompose.service: mongos-service
name: mongos-service
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: mongos-service
spec:
containers:
- env:
- name: MONGO_INITDB_ROOT_USERNAME
value: #USername
- name: MONGO_INITDB_ROOT_PASSWORD
value: #Password
command:
- "numactl"
- "--interleave=all"
- "mongos"
- "--port"
- "27017"
- "--bind_ip"
- "0.0.0.0"
- "--configdb"
- "ConfigDBRepSet/mongo-conf-service:27017"
image: #MongoImageName
name: mongos-service
ports:
- containerPort: 27017
resources: {}
The mongos server logs are:
2019-08-05T05:27:52.942+0000 I NETWORK [listener] connection accepted from 10.0.0.0:5058 #308807 (79 connections now open)
2019-08-05T05:27:52.964+0000 I ACCESS [conn308807] Successfully authenticated as principal Assist_Random_Workspace on Random_Workspace from client 10.0.0.0:5058
2019-08-05T05:27:54.267+0000 I NETWORK [worker-3] end connection 10.0.0.0:52954 (78 connections now open)
2019-08-05T05:27:54.269+0000 I NETWORK [listener] connection accepted from 10.0.0.0:52988 #308808 (79 connections now open)
2019-08-05T05:27:54.275+0000 I NETWORK [listener] connection accepted from 10.0.0.0:7174 #308809 (80 connections now open)
2019-08-05T05:27:54.279+0000 I ACCESS [conn308809] SASL SCRAM-SHA-1 authentication failed for Assist_Refactored_Code_DB on Refactored_Code_DB from client 10.0.0.:7174 ; UserNotFound: User "Assist_Refactored_Code_DB@Refactored_Code_DB" not found
2019-08-05T05:27:54.281+0000 I NETWORK [worker-1] end connection 10.0.0.5:7174 (79 connections now open)
2019-08-05T05:27:54.342+0000 I NETWORK [worker-1] end connection 10.0.0.6:57391 (78 connections now open)
2019-08-05T05:27:54.343+0000 I NETWORK [listener] connection accepted from 10.0.0.0:57527 #308810 (79 connections now open)
2019-08-05T05:27:55.080+0000 I NETWORK [worker-3] end connection 10.0.0.0:56021 (78 connections now open)
2019-08-05T05:27:55.081+0000 I NETWORK [listener] connection accepted from 10.0.0.0:56057 #308811 (79 connections now open)
2019-08-05T05:27:56.054+0000 I NETWORK [worker-1] end connection 10.0.0.0:59137 (78 connections now open)
2019-08-05T05:27:56.055+0000 I NETWORK [listener] connection accepted from 10.0.0.0:59184 #308812 (79 connections now open)
2019-08-05T05:27:59.268+0000 I NETWORK [worker-1] end connection 10.0.0.5:52988 (78 connections now open)
2019-08-05T05:27:59.270+0000 I NETWORK [listener] connection accepted from 10.0.0.0:53047 #308813 (79 connections now open)
2019-08-05T05:27:59.343+0000 I NETWORK [worker-3] end connection 10.0.0.6:57527 (78 connections now open)
2019-08-05T05:27:59.344+0000 I NETWORK [listener] connection accepted from 10.0.0.0:57672 #308814 (79 connections now open)
2019-08-05T05:28:00.080+0000 I NETWORK [worker-3] end connection 10.0.1.1:56057 (78 connections now open)
2019-08-05T05:28:00.081+0000 I NETWORK [listener] connection accepted from 10.0.0.0:56116 #308815 (79 connections now open)
2019-08-05T05:28:01.054+0000 I NETWORK [worker-3] end connection 10.0.0.0:59184 (78 connections now open)
2019-08-05T05:28:01.058+0000 I NETWORK [listener] connection accepted from 10.0.0.0:59225 #308816 (79 connections now open)
2019-08-05T05:28:01.763+0000 I NETWORK [listener] connection accepted from 10.0.0.0:7173 #308817 (80 connections now open)
2019-08-05T05:28:01.768+0000 I ACCESS [conn308817] SASL SCRAM-SHA-1 authentication failed for Assist_Sharded_Database on Sharded_Database from client 10.0.0.0:7173 ; UserNotFound: User "Assist_Sharded_Database@Sharded_Database" not found
2019-08-05T05:28:01.770+0000 I NETWORK [worker-3] end connection 10.0.0.0:7173 (79 connections now open)
2019-08-05T05:28:04.271+0000 I NETWORK [worker-3] end connection 10.0.0.0:53047 (78 connections now open)
2019-08-05T05:28:04.272+0000 I NETWORK [listener] connection accepted from 10.0.0.0:53083 #308818 (79 connections now open)
2019-08-05T05:28:04.283+0000 I NETWORK [listener] connection accepted from 10.0.0.0:7105 #308819 (80 connections now open)
2019-08-05T05:28:04.287+0000 I ACCESS [conn308819] SASL SCRAM-SHA-1 authentication failed for Assist_Refactored_Code_DB on Refactored_Code_DB from client 10.0.0.0:7105 ; UserNotFound: User "Assist_Refactored_Code_DB@Refactored_Code_DB" not found
The Java code used to connect to MongoDB is below.
Note: the code supports multitenancy at the database level; based on one of the parameters in every request, we determine which database to query.
The code below works fine against a standalone MongoDB instance.
1) Application property
mongodb.uri=${mongoURI:mongodb://username:password@IPaddress:portNumber}
mongodb.defaultDatabaseName=assist
assist-server-address1 = IpAddress
2) Spring Boot Application
@SpringBootApplication
@ServletComponentScan
public class ServiceApplication extends RepositoryRestConfigurerAdapter {
public static void main(String[] args) {
SpringApplication.run(ServiceApplication.class, args);
}
@Autowired
public MongoDBCredentials mongoDBCredentials;
@Bean
public MongoTemplate mongoTemplate() {
return new MongoTenantTemplate(
new SimpleMongoDbFactory(new MongoClient(new MongoClientURI(mongoDBCredentials.getUri())),
mongoDBCredentials.getDefaultDatabaseName()));
}
}
3) MongoTemplate -> This is used to establish the Mongo connection. We have implemented a multitenant MongoTemplate (to connect to multiple databases based on one of the request parameters).
public class MongoTenantTemplate extends MongoTemplate {
private static Map<String, MongoTemplate> tenantTemplates = new HashMap<>();
@Value("${assist-server-address1}")
public String ServerAddress1;
@Value("${spring.data.mongodb.username}")
public String ServerUsername;
@Value("${spring.data.mongodb.password}")
public String ServerPassword;
@Value("${spring.data.mongodb.database}")
public String ServerDbName;
@Value("${assist-current-environment}")
public String currentEnv;
@Autowired
public MongoDBCredentials mongoDBCredentials;
@Autowired
WorkspacesRepository workspaceRepository;
private static final Logger LOG = LoggerFactory.getLogger(MongoTenantTemplate.class);
Marker marker;
public MongoTenantTemplate(MongoDbFactory mongoDbFactory) {
super(mongoDbFactory);
tenantTemplates.put(mongoDbFactory.getDb().getName(), new MongoTemplate(mongoDbFactory));
}
protected MongoTemplate getTenantMongoTemplate(String tenant) {
MongoTemplate mongoTemplate = tenantTemplates.computeIfAbsent(tenant, k -> null);
LOG.info(marker, "Tenant is (MongoDBCredentials) : {}",tenant);
try {
if (mongoTemplate == null) {
MongoCredential mongoCredential;
// Username,databaseName,password
if (tenant == ServerDbName) {
mongoCredential = MongoCredential.createCredential(ServerUsername, ServerDbName,
ServerPassword.toCharArray());
} else {
Workspaces workspace = workspaceRepository.findByDbName(tenant);
mongoCredential = MongoCredential.createCredential(workspace.getDbUserName(), workspace.getDbName(),
workspace.getDbPassword().toCharArray());
}
ServerAddress address1 = new ServerAddress(ServerAddress1, port);
List<ServerAddress> serverAddressList = new ArrayList<>();
serverAddressList.add(address1);
SimpleMongoDbFactory mongoDbFactory = new SimpleMongoDbFactory(
new MongoClient(serverAddressList, Arrays.asList(mongoCredential)), tenant);
mongoTemplate = new MongoTemplate(mongoDbFactory);
}
else {
}
} catch (Exception e) {
tenantTemplates.remove(tenant);
}
return mongoTemplate;
}
...
In the above logs there is an authentication error for Assist_Refactored_Code_DB (a database I did not create). I'm not sure why this authentication is failing, or in which Mongo URI the username and password should be specified. I'm also not sure whether this is one of the reasons for the stalling. These are the only error logs I could find in mongos; the logs of the config server and the shards don't have any errors.
I expect the sharded MongoDB to not stall at any point and to behave like the standalone MongoDB.
Can anyone guide me on resolving this sharded MongoDB stalling issue?
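On the UserNotFound errors specifically: MongoDB authenticates against the database named in the connection string (or in the authSource option), so if the tenant users were actually created in a different database (for example admin), the credential must name that database as the authentication source. A hedged sketch of the property from section 1, with hypothetical credentials:
```
# Hypothetical example: name the database the user was created in via authSource.
# The same value is what MongoCredential.createCredential takes as its second argument.
mongodb.uri=${mongoURI:mongodb://appUser:appPassword@mongos-service:27017/assist?authSource=admin}
```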
Prajwal M
(11 rep)
Aug 6, 2019, 11:14 AM
• Last activity: Jul 31, 2025, 08:04 AM
1
votes
1
answers
146
views
Move SQL Server database files to Azure storage account container
I'm trying to move SQL Server database files into an Azure storage account container. Creating or restoring databases in container storage works flawlessly, but moving fails.
First, I create a credential with the SAS token (permissions: rwdl):
CREATE CREDENTIAL [https://foobar.blob.core.windows.net/databases]
WITH IDENTITY='SHARED ACCESS SIGNATURE', SECRET = 'secretsecretsecret'
After that I alter the existing database and point both the data and log files to the storage account:
ALTER DATABASE [AdventureWorks] MODIFY FILE
( NAME = AdventureWorks, FILENAME =
'https://foobar.blob.core.windows.net/databases/AdventureWorks.mdf');
ALTER DATABASE [AdventureWorks]
MODIFY FILE ( NAME = AdventureWorks_log, FILENAME =
'https://foobar.blob.core.windows.net/databases/AdventureWorks.ldf');
After I copied the MDF and log files to the container, I restart the database:
ALTER DATABASE [AdventureWorks] SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [AdventureWorks] SET ONLINE;
But I'm getting this error:
Msg 5181, Level 16, State 5, Line 8
Could not restart database "AdventureWorks". Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 8
ALTER DATABASE statement failed.
However, if I create a new database and point the files to the storage account, it works:
CREATE DATABASE [WorkingAdventure] ON PRIMARY
( NAME = N'WorkingAdventure', FILENAME =
N'https://foobar.blob.core.windows.net/databases/WorkingAdventure.mdf',
SIZE = 1GB, MAXSIZE = UNLIMITED, FILEGROWTH = 50MB)
LOG ON
( NAME = N'WorkingAdventure_log', FILENAME =
N'https://foobar.blob.core.windows.net/databases/WorkingAdventure.ldf',
SIZE = 2MB, MAXSIZE = 250MB, FILEGROWTH = 1MB)
Restoring a backup works, too.
I'm trying to avoid creating backups of the databases and restoring them into a storage account, because of the database sizes (about 20 TB in total).
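One alternative worth testing, sketched here with the caveat that I have not confirmed attach-from-URL works on every build that supports data files in Azure storage: detach the database, copy the files into the container out of band, then attach from the URLs instead of using ALTER DATABASE ... MODIFY FILE.
```
USE master;
EXEC sp_detach_db @dbname = N'AdventureWorks';

-- Copy AdventureWorks.mdf / .ldf into the container here (outside SQL Server).

CREATE DATABASE [AdventureWorks] ON
    (FILENAME = N'https://foobar.blob.core.windows.net/databases/AdventureWorks.mdf'),
    (FILENAME = N'https://foobar.blob.core.windows.net/databases/AdventureWorks.ldf')
FOR ATTACH;
```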
s3b
(11 rep)
Aug 23, 2024, 02:11 PM
• Last activity: Jul 22, 2025, 07:02 AM
0
votes
1
answers
153
views
How do you make structural changes to Azure SQL Server DBs when you don't have access via SSMS?
We're no longer allowed to have access to UAT/PROD environments via SSMS at my organization. The way our deployment process works is tied to git pushes. So when you push to the DEV branch, the DEV web code and DB is updated, likewise when pushing to the QA branch and the UAT branch.
The problem is that when there is a structural change to the DB, the deployment very often fails with the error "data loss may occur". In the past, with on-prem solutions, when we published the DB and encountered the "data loss may occur" error, we could uncheck "Block incremental deployment if data loss might occur" in Visual Studio and the deployment would work. We never incurred any data loss either. Since this option is no longer available, it was suggested we use pre and post deployment scripts.
What do I need to put in the pre and post deployment scripts to prevent the data loss error? Our Visual Studio DB Project already contains all the table/view/SP/function definitions.
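For context, the Visual Studio checkbox corresponds to the BlockOnPossibleDataLoss publish property, so rather than (or in addition to) pre/post scripts, the usual lever in an automated pipeline is a publish profile or a SqlPackage argument. A sketch of the publish-profile form, assuming the pipeline consumes a .publish.xml:
```
<!-- Hypothetical MyDb.publish.xml fragment: this property is what the old checkbox toggled. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <BlockOnPossibleDataLoss>False</BlockOnPossibleDataLoss>
  </PropertyGroup>
</Project>
```
The same property can be passed directly as /p:BlockOnPossibleDataLoss=False if the pipeline calls SqlPackage.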
Legion
(113 rep)
Jul 26, 2021, 11:50 PM
• Last activity: Jul 15, 2025, 03:02 PM
-2
votes
1
answers
51
views
Azure Synapse dedicated pool high compute node memory
Queries are running slowly on an Azure Synapse dedicated pool, and memory usage on all the compute and control nodes is touching 100%. Some days back we restarted the pool (pause and resume) and noticed the control node memory dropped to 3% (the compute nodes remained at 100%); queries appeared to work fine for a week, and now the memory is back at 100%.
Now there are a couple of questions I have:
1. Is it normal that memory usage for the control and compute nodes is always at 100%?
2. My understanding is that the compute node's job is to distribute queries to the distribution node, so why is its memory always at 100%?
I am using the following query to get the usage:
SELECT
(pc1.cntr_value/1048576.0) as Curr_Mem_GB,
(pc2.cntr_value/1048576.0) as Max_Mem_GB,
pc1.cntr_value * 100.0/pc2.cntr_value AS Memory_Utilization_Percentage,
pc1.pdw_node_id
FROM
-- pc1: current memory
sys.dm_pdw_nodes_os_performance_counters AS pc1
-- pc2: total memory allowed for this SQL instance
JOIN sys.dm_pdw_nodes_os_performance_counters AS pc2
ON pc1.object_name = pc2.object_name
AND pc1.pdw_node_id = pc2.pdw_node_id
WHERE pc1.counter_name = 'Total Server Memory (KB)'
AND pc2.counter_name = 'Target Server Memory (KB)'
hassan ammar
Jul 4, 2025, 10:56 AM
• Last activity: Jul 5, 2025, 06:00 PM
0
votes
1
answers
192
views
On-premises SQL to Azure SQL MI Migration
We have a requirement to migrate around 800 GB of data from on-premises SQL Server (2012 and 2017), spread across multiple databases, to Azure SQL MI. We have run the DMA on the source data and found some issues (data types that are no longer supported, coding style changes, etc.). We are planning to use DMS to perform the data migration.
My questions are:
1. At which stage should I perform the remediation: in the source itself before the migration, or in the target Azure SQL MI?
2. Which option is preferred for the data migration?
Hillol Saha
(1 rep)
Oct 26, 2021, 06:11 AM
• Last activity: Jun 29, 2025, 01:02 PM
0
votes
1
answers
190
views
Azure blob storage - Access tier change
We currently have a legacy blob storage account of 600 TB, configured as Standard/Hot tier, which holds archived backups. As part of cost optimization we want to change the access tier on the blob storage account to Cool. Does this incur data movement? The documentation says the change is instantaneous. Does that mean hot to cool will not involve any data transfer and will incur no additional charges?
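For reference, a minimal sketch of the account-level change being described, with hypothetical resource names and assuming the --access-tier parameter available in current Azure CLI versions; this sets the default tier used by blobs that do not have an explicitly set tier:
```
az storage account update \
  --name legacybackupsacct \
  --resource-group backups-rg \
  --access-tier Cool
```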
yvr238
(129 rep)
Nov 2, 2021, 05:13 PM
• Last activity: Jun 27, 2025, 03:04 PM
0
votes
1
answers
190
views
PostgreSQL: using stale statistics instead of current ones because stats collector is not responding
We are running PostgreSQL 13 on an Azure CentOS VM and found this problem in the logs, followed by some slow COMMIT and SET statements. The slow-statement log entries only lasted for a period of less than 20 seconds. System metrics are normal before and during the problem, with a small spike in I/O wait of up to 4%.
> 2024-08-15 16:21:59.048 CST,,,33328,,62b10ea0.8230,14,,2022-06-21 08:19:44 CST,1/0,0,LOG,00000,"using stale statistics instead of current ones because stats collector is not responding",,,,,,,,,"","autovacuum launcher"
2024-08-15 16:22:09.203 CST,,,58821,,66bdbaa7.e5c5,1,,2024-08-15 16:21:59 CST,148/0,0,LOG,00000,"using stale statistics instead of current ones because stats collector is not responding",,,,,,,,,"","autovacuum worker"
2024-08-15 16:22:09.253 CST,"user_w","user_db",53133,"10.0.0.85:58698",66bdb747.cf8d,1,"COMMIT",2024-08-15 16:07:35 CST,46/0,0,LOG,00000,"duration: 21525.916 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.14:34356","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",48595,"10.0.0.68:33334",66bdb4d3.bdd3,1,"COMMIT",2024-08-15 15:57:07 CST,15/0,0,LOG,00000,"duration: 21383.608 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.18:36088","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",50680,"10.0.0.68:33714",66bdb5a9.c5f8,1,"COMMIT",2024-08-15 16:00:41 CST,25/0,0,LOG,00000,"duration: 20137.894 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.18:36400","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",42490,"10.0.0.68:60644",66bdb2d6.a5fa,1,"COMMIT",2024-08-15 15:48:38 CST,63/0,0,LOG,00000,"duration: 18201.579 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.18:36274","client backend"
2024-08-15 16:22:09.253 CST,"user_w","user_db",52468,"10.0.0.68:34266",66bdb6e0.ccf4,1,"COMMIT",2024-08-15 16:05:52 CST,30/0,0,LOG,00000,"duration: 20438.055 ms statement: COMMIT",,,,,,,,,"app - 10.0.0.16:52796","client backend"
2024-08-15 16:22:09.269 CST,"user_w","user_db",55877,"10.0.0.52:47198",66bdb8e6.da45,2,"SET",2024-08-15 16:14:30 CST,57/0,0,LOG,00000,"duration: 3843.296 ms statement: SET application_name='app - 10.0.0.4:38932';",,,,,,,,,"app - 10.0.0.4:38932","client backend"
2024-08-15 16:22:09.269 CST,"user_w","user_db",55278,"10.0.0.70:59560",66bdb890.d7ee,1,"SET",2024-08-15 16:13:04 CST,43/0,0,LOG,00000,"duration: 20042.606 ms statement: SET application_name='app - 10.0.0.16:52848';",,,,,,,,,"app -10.0.0.16:52848","client backend"
From what I can check, the collector uses IPv6; IPv6 is enabled as of now and stats are getting updated. We only log slow statements, and the first slow COMMIT entry took 20 seconds and completed at **2024-08-15 16:22:09.253 CST**, which by calculation means it might have started before the first stats collector log entry at **2024-08-15 16:21:59.048 CST**. We are unable to conclude whether the problem actually started with the stats collector or with the transactions, or what the cause of the issue is. The issue auto-resolves in 10-20 seconds.
**UPDATE**
I have noticed that when the system is working normally there is no UDP socket for the postmaster if I run netstat -n -u -p. However, the files under the pg_stat_temp directory are getting updated and I can see the stats collector process in the process list. Why is there no visible UDP socket under the postmaster?
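One detail that can explain the "missing" socket rather than an actual problem (a hedged observation, not a diagnosis): without root, netstat -p cannot resolve the owning process for sockets that belong to another user, so filtering on the postmaster shows nothing even when the UDP socket exists; the stats collector socket is normally bound to the loopback interface. Checking as root:
```
# Run as root so the owning process can be resolved for every socket.
sudo ss -u -a -n -p | grep postgres

# netstat equivalent; without sudo the PID/Program column shows "-" for other users' sockets.
sudo netstat -n -u -p | grep postgres
```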
goodfella
(595 rep)
Aug 19, 2024, 04:05 AM
• Last activity: Jun 27, 2025, 12:04 PM
0
votes
1
answers
186
views
MariaDB / MySQL performance optimization Azure disk access
We have a lot of MariaDB databases that we have recently moved to "Azure Database for MariaDB server". Most of the time it works OK, but we sometimes experience snags with slow queries compared to running on our old servers. Let's just call our database "customerdb".
For example, we have two big tables, data_orders and data_orders_cardpayment, both quite big (data_orders also contains an XML representation of the order).
For a customer that has about 900K rows in data_orders, the following query plan looks like this:
explain SELECT SQL_NO_CACHE
SUM(T0.GiftCard+T0.CreditNote+T0.Nettbank+T0.Depositum) AS Sum
FROM DATA_ORDERS_CARDPAYMENT T0
INNER JOIN DATA_ORDERS T1 ON T0.OrderId = T1.Id
WHERE T1.IsTransferred = 0
AND T1.CountedId = -1
AND T1.InstanceCode = 'B';
+------+-------------+-------+-------------+----------------------------------------------+-------------------------+---------+--------------------+------+-------------------------------------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+-------------+----------------------------------------------+-------------------------+---------+--------------------+------+-------------------------------------------------------+
| 1 | SIMPLE | T1 | index_merge | PRIMARY,CountedId,InstanceCode,IsTransferred | IsTransferred,CountedId | 1,4 | NULL | 256 | Using intersect(IsTransferred,CountedId); Using where |
| 1 | SIMPLE | T0 | ref | OrderId | OrderId | 4 | customerdb.T1.Id | 1 | |
+------+-------------+-------+-------------+----------------------------------------------+-------------------------+---------+--------------------+------+-------------------------------------------------------+
So this only needs to check 256 rows? Good! Then I run this query at the start of the day, and this is how long it takes:
SELECT SQL_NO_CACHE
SUM(T0.GiftCard+T0.CreditNote+T0.Nettbank+T0.Depositum) AS Sum
FROM DATA_ORDERS_CARDPAYMENT T0
INNER JOIN DATA_ORDERS T1 ON T0.OrderId = T1.Id
WHERE T1.IsTransferred = 0
AND T1.CountedId = -1
AND T1.InstanceCode = 'B';
+------+
| Sum |
+------+
| 0 |
+------+
1 row in set (1 min 38.283 sec)
So this took 1 minute and 38 seconds. Running it again, it takes only 0.119 seconds, 0.128 seconds, 0.106 seconds, etc. I ran the same query on our old database server against the old database "customerdb_moved"; there it took 2.68 seconds at the start of the day, and then 0.07 seconds afterwards (but that was a quite beefed-up server).
So, why does it take forever the first time, and only the first time? I assume that for some unknown reason it must load most of the table data from disk into memory, even though the indexes in theory should avoid this. Successive runs are always fast.
Can I avoid the initial loading from disk? If not, how do I speed up the initial disk access time in my Azure database? (BTW, at the moment the pricing tier for the database server in Azure is 8 vCores, 80 GB memory, 292 GB storage, 876 available IOPS, memory optimized.)
[Update]
I found the main culprit at least.
The explain statement somehow got things wrong. When running the following:
SELECT COUNT(*)
FROM data_orders T1
where T1.IsTransferred = 0
AND T1.CountedId = -1
AND T1.InstanceCode = 'B';
It actually returned about 5.5k rows, as there were some old rows that didn't have the T1.IsTransferred flag set. I don't know why my EXPLAIN showed 256 rows, and I still think the original query should take far less than 1 minute and 38 seconds on the first run of the day when summing over these 5.5k rows. After setting the old rows to IsTransferred, the EXPLAIN today shows 5 rows for T1, while COUNT(*) FROM T1 shows 164 actual rows; the query took about 3 seconds the first time (still quite a lot for only 164 rows) and then about 0.02 seconds for successive runs.
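One follow-up worth testing, based on the plan shown above (a hedged suggestion, not a guaranteed fix): the index_merge intersect of the single-column IsTransferred and CountedId indexes can be replaced by one composite index that matches the whole WHERE clause, which tends to give a more stable plan and touches far fewer pages when the buffer pool is cold. Hypothetical index name:
```
ALTER TABLE DATA_ORDERS
  ADD INDEX ix_orders_pending (InstanceCode, IsTransferred, CountedId);
```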
runholen
(101 rep)
Nov 21, 2022, 07:31 AM
• Last activity: Jun 25, 2025, 04:04 AM
5
votes
1
answers
185
views
Can Azure Backup Server backup databases participating in Log Shipping?
We have a number of log-shipped SQL 2017 databases which we wish to start backing up using Microsoft Azure Backup Server (locally and to the cloud) on a 15 minute schedule.
Does anyone know if it is possible for Azure Backup Server to hook into the log backups made by SQL Server log shipping?
Mark
(941 rep)
Oct 30, 2019, 10:43 PM
• Last activity: Jun 22, 2025, 04:02 AM
0
votes
1
answers
387
views
Configuring a DNN listener for an availability group fails
I am running into a problem when creating a DNN listener on an Azure VM.
I am running the script provided in this Microsoft article:
https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/availability-group-distributed-network-name-dnn-listener-configure?view=azuresql
and getting the following error:
*Set-ClusterResourceDependency: An error occurred while setting the dependency relationship for the resource 'myag'. The operation could not be completed because the cluster resource is online.*
Any help or suggestions will be greatly appreciated.
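A hedged sketch of one way past the "cluster resource is online" error, assuming a short availability-group outage is acceptable and that myag_dnn is the name given to the DNN resource in the earlier steps of the script:
```
# Stopping the AG cluster resource briefly takes the availability group offline.
$ag  = 'myag'
$dnn = 'myag_dnn'   # hypothetical DNN resource name

Stop-ClusterResource -Name $ag
Set-ClusterResourceDependency -Resource $ag -Dependency "[$dnn]"
Start-ClusterResource -Name $ag
```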

VitalicJJ
(55 rep)
May 25, 2023, 03:55 PM
• Last activity: Jun 21, 2025, 09:01 PM
3
votes
1
answers
210
views
how to mirror an azure storage account to a non-paired zone to enable fast file_snapshot backup/restore and database file access?
Wow, that title was a mouthful.
# Scenario #
My dev server is restoring a copy of production data daily, anonymizing it, and creating a dev-grade backup for my developers to restore locally. We're running SQL 2019.
Unfortunately, my servers are all in Azure East US, but almost all of my company is based in Spain.
As of today, we had the server produce a standard backup that would be downloaded by a central server here in Spain, so devs can restore without going to the cloud. Restores on developers' laptops are taking up to 180 minutes to complete.
This is where BACKUP ... WITH FILE_SNAPSHOT comes in. In a few words, it allows backups and restores to go super fast by storing data directly on Azure storage accounts. There's more to it, but it's not the point here - [read more here](https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/file-snapshot-backups-for-database-files-in-azure?view=sql-server-ver15)
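For readers who have not used it, the basic shape of a file-snapshot backup and restore looks like this (hypothetical URLs and logical file names; the feature requires the database's data and log files to already live in Azure blob storage):
```
-- The .bak is mostly metadata; the data stays as blob snapshots next to the .mdf/.ldf.
BACKUP DATABASE [DevDb]
TO URL = 'https://eastusacct.blob.core.windows.net/backups/DevDb.bak'
WITH FILE_SNAPSHOT;

RESTORE DATABASE [DevDbCopy]
FROM URL = 'https://eastusacct.blob.core.windows.net/backups/DevDb.bak'
WITH MOVE 'DevDb'     TO 'https://eastusacct.blob.core.windows.net/data/DevDbCopy.mdf',
     MOVE 'DevDb_log' TO 'https://eastusacct.blob.core.windows.net/data/DevDbCopy.ldf',
     RECOVERY;
```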
So far so good, but...
# Issues #
- Restoring near the server will be fast, but will force developers to traverse the Atlantic when reading from what should be the local database.
- Restoring the snapshot from EastUs to a storage account in West Europe seems to be copying the snapshot over (which is logical), but this means there's no restore performance benefit.
- My devs are really keen on restoring daily, weekly at most to ensure they're on par with production database releases and can troubleshoot data issues that get escalated to the engineering team.
- Manually copying files from East Us to West Europe is not a good option, since it requires daily attention from someone.
# What I tried #
- Restoring a file_snapshot backup from our East US storage account to a temporary West Europe storage account
- Restoring a file_snapshot backup within our East Us storage account (performances were awful for large queries due to, at least, network latencies)
- Setting the storage to RA_GRS; when I tested it, it would only pair with West Us - this was to speed up normal restores, but it's still meaningful.
# What I want to accomplish #
I want my devs to be able to restore quickly, using the FILE_SNAPSHOT technology, to the nearest storage location. I'll need them to be able to write there using SAS keys. To ensure they can restore quickly, I'm assuming the storage account must be mirrored with a copy that sits here in West Europe, but I don't know how to accomplish this.
## My plan ##
Currently, I'm planning to either try and store the backup in a West Europe bound storage container, or try restoring to it after enabling Azure CDN.
I'm doubting both of these options because
1. FILE_SNAPSHOT backup takes a snapshot of the files in Azure storage, so they will have to be copied across the wire through the Atlantic
2. Azure CDN is a read-only system, I'm not sure it will help me with writes in the restore process. If it has to read from the source file, I'd be back to square one.
I'm going to test these out (aka do my homework), but I'm reaching out to see if anyone here has other ideas, has already done this successfully (and wants to share their fix), or know this is not doable and I should file a UserVoice item.
Thanks a bunch
### Update 2020-07-07 ###
I have been testing CDNs - it does not work. When trying to restore an existing backup to URL (it does not matter if using FILE_SNAPSHOT or not) I'm getting an incorrectly formatted error. I know that the .bak is not corrupted since I can restore directly from the original blob storage. My current guess is that some header has a mismatch between the CDN URL and the actual blob storage.
I also tried backing up directly to the CDN URL but that seems to not be supported, which makes sense due to my current understanding of CDNs being read-only systems.
### Update 2020-07-08 ###
Also tested out backing up to a different storage location. Backups are fast, but the snapshot is still taken locally to the .mdf location - this means restores are painfully slow, and reducing that lag (and the data transfer) is the whole point of this exercise. I cannot really modify the .bak pointers after the backup has been generated, so even copying that data manually across the ocean won't work.
This means I'm officially out of ideas.
# Update #
I missed out on doing this for a while, but I'd better keep this thread up to date.
@DavidSpillet has a few points in his reply. Here is a bit more information that might help people helping me out - you're all awesome.
1. We have about 13 (SQL natively compressed) backup files, totaling around 60GB. The approximate size after restoring them is around 270GB.
2. Our databases are separated into two systems. System A has 2 databases, System B has 11. Restores are done concurrently per system, so we will have at most 2 restores per developer running in parallel.
3. Developers in Spain restores in the office using a wired LAN network, sitting on the same VNET (and building, and floor) the server is in.
4. The server is a 16-core, 32 logical CPUs, 96GB RAM monster that was once used to host a shared SQL instance for our developers. Restores in there take consistently between 15-20 minutes.
5. The server high-capacity storage (where the backups are held) is based on spinning locally attached disks - but then again, restoring locally is not a bottleneck
6. Developers laptops use high-capacity NVMe SSD drives (at least, the new ones. The older model used spinning drives. I'm not minding those as they're being rolled out)
I also have a few more concerns due to the current world situation, namely
* Devs are currently working anywhere, sometimes at the office, most of the time at home. Unfortunately, our client VPN endpoint is in another country, so restoring from the office is a major pain, and I'm trying to account for that, especially considering how the country is looking like (2020-07-29).
* Some of the new laptop models arrived with lower capacity storage (256 GB) and there's nothing I can do to fix that.
* Cannot go on purchasing commercially available software that would do the trick (for example [this one](https://www.red-gate.com/products/dba/sql-provision/)) because of budget constraints.
GMassDBA
(345 rep)
Jun 26, 2020, 05:15 PM
• Last activity: Jun 17, 2025, 04:03 PM
0
votes
1
answers
211
views
How to connect in case Azure Cosmos DB default force use TLS/SSL?
My case:
Client: Windows 10 x64, pgAdmin 4 version 8.4 (latest).
Server: Azure Cosmos DB for PostgreSQL, burstable, version 16.
My connection string will be like this:

jdbc:postgresql://c-donhuvycluster.qs4vulsae3t7uz.postgres.cosmos.azure.com:5432/vydb?user=citus&password={your_password}&sslmode=require
Encrypted connections
This cluster enforces encrypted connections using Transport Layer Security (TLS). For information on TLS version and certificates, refer to connecting with TLS/SSL.
How do I connect when Azure Cosmos DB enforces TLS/SSL by default? Please guide me. I searched for a video tutorial on YouTube but did not find one.
I do not know how to deal with sslmode=require.
My workaround: https://learn.microsoft.com/en-us/azure/cosmos-db/postgresql/howto-ssl-connection-security#applications-that-require-certificate-verification-for-tls-connectivity
But does this command support Windows (I can run OpenSSL with Git Bash for Windows)? And how do I declare the certificate in pgAdmin 4 v8.4?
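A minimal sketch of the equivalent libpq-style connection using the values from the question (the password is a placeholder); in pgAdmin the same host, port, database and user go into the connection dialog, with the SSL mode connection parameter set to require (or verify-full together with a root certificate):
```
psql "host=c-donhuvycluster.qs4vulsae3t7uz.postgres.cosmos.azure.com port=5432 dbname=vydb user=citus sslmode=require"

# For certificate verification, download the root CA referenced in the linked Microsoft doc and point libpq at it:
# psql "host=... port=5432 dbname=vydb user=citus sslmode=verify-full sslrootcert=C:/certs/<root-ca-from-doc>.pem"
```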
David Lapetina
(219 rep)
Mar 11, 2024, 02:02 AM
• Last activity: Jun 16, 2025, 03:09 AM
0
votes
1
answers
72
views
Azure SQL Database Backups PITR & LTR
I am quite confused about how Azure SQL Database backups (PITR and LTR) work. I have observed a few things about the backups, as follows:
1. PITR - the earliest point-in-time restore point moves forward every day. For example:
2. PITR configured for 7 Days
example 1:
2025-4-28 - current Day,
2025-4-27,
2025-4-26,
2025-4-25,
2025-4-24,
2025-4-23,
2025-4-22,
2025-4-21 - Earliest PITR restore point
example 2:
2025-4-29 - current Day,
2025-4-28,
2025-4-27,
2025-4-26,
2025-4-25,
2025-4-24,
2025-4-23,
2025-4-22 - Earliest PITR restore point
So, are we able to restore to 2025-4-20 as well? Or can we only restore from the earliest PITR restore point (2025-4-21) up to the current day (2025-4-28)?
Could someone please explain how this actually works?
And how does LTR fit in?
Abdul Khadar
Apr 28, 2025, 01:49 AM
• Last activity: May 14, 2025, 09:47 PM
1
votes
1
answers
752
views
CREATE LOGIN for AD user in master not allowing connection
I am struggling to figure out the best method to set up security on an Azure SQL Managed Instance. First, I tried to set up SQL logins, but since MUST_CHANGE isn't supported, I would then have access to everyone's password when I created the login, clearly an unacceptable solution. I then made myself the AD admin and set up AD accounts. If I CREATE USER FOR EXTERNAL USER in the master database, people can connect, but if I CREATE LOGIN FOR EXTERNAL USER and then perform the CREATE USER commands in the other DBs, linking to the login, people still get a login error.
Since I cannot define a DEFAULT_DATABASE on the CREATE LOGIN command, I am really confused about the security context. Am I really supposed to create users for everyone in the master database? That seems like a security hole. Am I missing something here?
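For reference, a sketch of the login-plus-user pattern being attempted, with a hypothetical account name. The login lives only in master, and each user database needs its own user mapped to that login; since no default database can be set on the login, connections may also need to specify an initial catalog/database that contains a mapped user:
```
-- In master (hypothetical Azure AD account):
CREATE LOGIN [jane.doe@contoso.com] FROM EXTERNAL PROVIDER;

-- In each user database:
CREATE USER [jane.doe@contoso.com] FROM LOGIN [jane.doe@contoso.com];
ALTER ROLE db_datareader ADD MEMBER [jane.doe@contoso.com];
```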
Robert Sievers
(150 rep)
May 20, 2022, 07:57 PM
• Last activity: May 13, 2025, 11:08 PM
0
votes
0
answers
27
views
Azure flexible database for mysql won't login, admin user access denied, and Entra fails
I have fully provisioned Microsoft Entra authentication on my Flexible Server but the AzureActiveDirectory plugin never loads. Specifically:
Server version: 8.0.21
Region: East US 2
Actions taken (all of which reproduce the issue):
1. Enabled “MySQL and Microsoft Entra authentication” in the portal.
2. Created and assigned a user-assigned managed identity.
3. Added an Azure AD administrator (both as a guest user and as a
short-alias security group).
4. Restarted the server multiple times (via CLI and portal).
5. Attempted native‐SQL logins (resetting admin password, changing
users/passwords, opening all client IPs) and always received “Access
denied for user …” despite correct credentials and firewall rules.
6. I can't fully verify this because I can't log in, but `SHOW PLUGINS LIKE 'AzureActiveDirectory'` shows no ACTIVE plugin, and any CREATE USER … IDENTIFIED WITH AzureActiveDirectory returns ERROR 1524 (HY000): Plugin 'AzureActiveDirectory' is not loaded.
Repeated the above on two separate servers and with multiple admin users—issue persists.
Because the plugin never loads into the server process, all Azure AD–based logins are impossible.
This is the actual error, and what I'm trying to run:
ERROR 1524 (HY000) at line 1: Plugin 'AzureActiveDirectory' is not loaded
export TOKEN=$(az account get-access-token \
--resource https://ossrdbms-aad.database.windows.net \
--query accessToken -o tsv)
mysql \
--host=bigbigheart-mysql.mysql.database.azure.com \
--ssl-mode=REQUIRED \
--enable-cleartext-plugin \
--user=aaddbadmins \
--password="$TOKEN" \
--database=stickerchartdb \
--execute="\
CREATE USER 'aaddbadmins' IDENTIFIED WITH AzureActiveDirectory; \
GRANT ALL PRIVILEGES ON stickerchartdb.* TO 'aaddbadmins'; \
FLUSH PRIVILEGES;"
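Two checks that might be worth running before retrying the CREATE USER, sketched on the assumption that the ad-admin and identity subcommands exist in the installed Azure CLI version (resource group name is hypothetical); since Entra authentication on Flexible Server depends on both an Entra admin and a user-assigned identity being registered, verifying that both actually took effect may help:
```
# Hedged sketch: verify the Entra admin and the managed identity are registered on the server.
az mysql flexible-server ad-admin list \
  --resource-group bigbigheart-rg \
  --server-name bigbigheart-mysql

az mysql flexible-server identity list \
  --resource-group bigbigheart-rg \
  --server-name bigbigheart-mysql
```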
Michael Wegter
(1 rep)
May 9, 2025, 08:32 PM
Showing page 1 of 20 total questions