Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
2 votes, 1 answer, 29 views
SQL Server estimates don't use AVG_RANGE_ROWS for Uniqueidentifier parameter
I'm trying to debug a very weird query row estimation.
The query is very simple. I have a table OrderItems that contains, for each order (column OrderId), the items of that order.
SELECT count(*)
FROM orders.OrderItem
WHERE OrderId = '5a7e53c4-fc70-f011-8dca-000d3a3aa5e1'
According to the statistics from IX_OrderItem_FK_OrderId (that's just a normal unfiltered foreign-key index: CREATE INDEX IX_OrderItem_FK_OrderId ON orders.OrderItem(OrderId)), the density is 1.2620972E-06 with 7423048 rows, so about ~9.3 items per order (if we ignore the items with OrderId = NULL; if we include them there are even fewer).
The statistics are created with FULLSCAN, and are only slightly out of date (around ~0.2% new rows since the last recompute).
| Name | Updated | Rows | Rows Sampled | Steps | Density | Average key length | String Index | Filter Expression | Unfiltered Rows | Persisted Sample Percent |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IX_OrderItem_FK_OrderId | Aug 3 2025 4:36PM | 7423048 | 7423048 | 198 | 0.1649756 |26.443027 | "NO " | NULL | 7423048 | 100 |
| All density | Average Length | Columns |
| --- | --- | --- |
| 1.2620972E-06 | 10.443027 | OrderId |
| 1.3471555E-07 | 26.443027 | OrderId, Id |
The query plan, however, expects the query to return 205.496 rows. And in reality there are actually 0 results, because that OrderId doesn't exist.
Detailed Query Plan:
https://www.brentozar.com/pastetheplan/?id=hVKYNLmXSU
It probably uses the histogram to come up with the estimate. The value should fall into the following bucket, with RANGE_HI_KEY = 'a39932d8-aa2c-f011-8b3d-000d3a440098'. But that estimate should then be 6.87 according to the AVG_RANGE_ROWS.
It somehow looks like it uses the EQ_ROWS from the previous bucket (but 205 might also just be a coincidence).
| RANGE_HI_KEY | RANGE_ROWS | EQ_ROWS | DISTINCT_RANGE_ROWS | AVG_RANGE_ROWS |
| --- | --- | --- | --- | --- |
| 9d2e2bea-aa6e-f011-8dca-000d3a3aa5e1 | 12889 | 205 | 2412 | 5.343698 |
| a39932d8-aa2c-f011-8b3d-000d3a440098 | 21923 | 107 | 3191 | 6.8702602 |
OPTION (RECOMPILE) does not help.
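For anyone following along, the density vector and histogram quoted above can be pulled straight from the statistics object — a minimal sketch, assuming the table and index names used in this question:

```sql
-- Histogram (the RANGE_HI_KEY / EQ_ROWS / AVG_RANGE_ROWS buckets shown above)
DBCC SHOW_STATISTICS ('orders.OrderItem', 'IX_OrderItem_FK_OrderId') WITH HISTOGRAM;

-- Density vector (the 1.2620972E-06 "All density" figure)
DBCC SHOW_STATISTICS ('orders.OrderItem', 'IX_OrderItem_FK_OrderId') WITH DENSITY_VECTOR;
```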
Can somebody explain how SQL Server (in particular Azure SQL) comes up with that number?
- Does it really think that the parameter is close enough to the bucket start, and just takes the EQ_ROWS value even though the AVG_RANGE_ROWS is a lot smaller?
- Does it not understand the parameter because it's passed as a VARCHAR literal? If I replace it with DECLARE @OrderId UNIQUEIDENTIFIER = '5a7e...'; WHERE OrderId = @OrderId, the estimate drops to 6. But if that's the reason, where does the estimate of 205 come from?
Jakube
(121 rep)
Aug 5, 2025, 04:53 PM
• Last activity: Aug 6, 2025, 04:39 PM
0 votes, 2 answers, 146 views
Moving tables to different database within same sql server
There is a SQL Server instance with around 100 databases on it. I have to query a few tables from one of those databases. When I query them it's very slow, and I believe CPU utilization is very high at that time. I have also noticed queries against other tables, coming from other services, which affect the overall performance of querying this database.
I am thinking of moving my tables to a different database within the same SQL Server instance. Do you think this will solve the issue, or will it not improve query performance for my tables? (I only care about my tables.) Will it have no impact because the new database is still on the same SQL Server instance? Please provide detailed answers to these questions.
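Not part of the original question, but since the claim is "CPU utilization is very high", a quick way to check which statements are actually burning CPU on the instance is a sketch like the following (standard DMVs only, no assumptions about the schema):

```sql
-- Top CPU consumers, aggregated per cached plan since the plan entered the cache.
SELECT TOP (20)
       DB_NAME(st.dbid)            AS database_name,
       qs.execution_count,
       qs.total_worker_time / 1000 AS total_cpu_ms,
       LEFT(st.text, 200)          AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

If the heavy hitters turn out to come from other databases, moving your tables to another database on the same instance would not, by itself, reduce that CPU pressure.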
Vivek Nuna
(101 rep)
Jun 21, 2023, 07:00 AM
• Last activity: Aug 3, 2025, 07:06 AM
0 votes, 1 answer, 144 views
Save changes to product records in Azure DB in a way that lets us see historical data
I am curious about the "best" model for storing data (including historical changes) in an Azure DB so that I can see a timeline of those changes.
Example:
We have a company that makes products. These products change prices or descriptions very often; sometimes they get deleted and the next day they are published again (with some changes).
My app checks that company every hour and looks at all the products they have.
It saves them to the Azure DB (they don't have to be stored in the same shape the company uses; just a very short version):
- ID,
- date,
- price,
- description,
- a few links to pics,
- location.
Sometimes they update the price, so I check it again and see that the record in my DB has a different price than in the company shop. Or they change the description, etc.
- I would like to store that change (including the previous value and the new value).
- Including when it happened (date and hour, nothing too specific).
But I can say that:
- I will have around 10 fields.
- I am checking thousands of products every hour.
So I feel bad that on every small change I store the whole Product record again.
It can also happen that one product is removed and added again under a different IdOfProduct (this is generated just by a sequence on the company side), but with the same location and description and, for example, a different price.
So I am also looking for a way to store the products (even historical ones which were deleted) so that, when I get a "new product", I can quickly compare it against the database for something similar (check location, then description, maybe even price) and, if it's similar, record in another table that it is possibly connected to the old, deleted product.
Can anyone give me advice on how to store this information without wasting resources (DB size, or CPU spent checking linked products because of a messy DB design)?
Thank you.
Every hint will help me :)
I was thinking of a DB schema like: (the original diagram images are not reproduced here)
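One possible shape for this — a sketch only, with illustrative names, not taken from the question's diagrams: let a system-versioned temporal table keep the change history automatically, so the hourly job only issues an UPDATE when a value actually differs and SQL Server records the previous version with its timestamp.

```sql
-- Illustrative sketch: current products plus an automatic history of every change.
CREATE TABLE dbo.Product
(
    ProductId       BIGINT IDENTITY PRIMARY KEY,
    SourceProductId NVARCHAR(64)   NOT NULL,  -- the company's IdOfProduct
    Price           DECIMAL(18, 2) NULL,
    Description     NVARCHAR(MAX)  NULL,
    PictureLinks    NVARCHAR(MAX)  NULL,
    Location        NVARCHAR(256)  NULL,
    ValidFrom       DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo         DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));
```

The history table still stores a full row per change, so if that worries you, a hand-rolled change-log table holding only (ProductId, FieldName, OldValue, NewValue, ChangedAt) is the narrower alternative.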
user3132499
Sep 3, 2021, 09:03 AM
• Last activity: Jul 31, 2025, 08:03 PM
2 votes, 1 answer, 149 views
How to implement updates from mySQL operational DB to Azure SQL DB for reporting
We have an **operational MySQL DB running on AWS** for a transactional system and an **Azure SQL DB for reporting** with Power BI. Now I'd like to regularly (e.g. every night) update certain tables in the Azure SQL DB from the MySQL DB.
I found this description of how to do incremental copies using Azure Data Factory; however, the alternatives it lists don't seem feasible to me:
1. Delta data loading from database by using a watermark requires adding watermark columns to the source DB, but I don't want to make changes to the operational DB because it is managed and regularly updated by the transactional system.
2. Delta data loading from SQL DB by using the Change Tracking technology seems to require an SQL Server DB as the source if I understand this correctly.
The remaining two alternatives apply only to updates from files, not DBs, to my understanding.
Are there other feasible alternatives based on the described conditions? They don't necessarily need to involve Azure Data Factory, however the updates should run completely automated in the cloud.
Maybe a non-incremental update (i.e. full replacement of the target DB tables every time) would be an option too, but I'm afraid that this would lead to high costs on the Azure SQL Server side - please share any experience on that as well, if available.
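For the non-incremental option mentioned above, a common pattern is to land the nightly extract in a staging table (via a Data Factory copy activity or any other mover) and then replace the reporting table in one transaction. A sketch, with illustrative table and column names:

```sql
-- Nightly full refresh of one reporting table.
-- Assumes the copy job has already loaded the latest MySQL extract into dbo.Orders_Staging.
BEGIN TRANSACTION;

    TRUNCATE TABLE dbo.Orders;

    INSERT INTO dbo.Orders (OrderId, CustomerId, OrderDate, Amount)
    SELECT OrderId, CustomerId, OrderDate, Amount
    FROM dbo.Orders_Staging;

COMMIT TRANSACTION;
```

Cost-wise the main concern would be the compute spent rewriting unchanged rows every night rather than storage, but how expensive that gets depends on table sizes and the service tier.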
Jörg Brenninkmeyer
(121 rep)
May 2, 2019, 07:26 AM
• Last activity: Jul 21, 2025, 06:05 AM
2 votes, 2 answers, 937 views
Azure SQL - is it possible to assign CREATE PROCEDURE permission for the master database?
I am trying to add a stored procedure to the master database (I deliberately am trying this for the master db) on Azure SQL. Is this possible? I suspect the answer is "no", but I'd like to see some confirming documentation on it if that is the case.
What I've tried so far:
Here's the procedure (just as a test):
CREATE PROCEDURE dbo.blah AS RETURN 0;
I've tried the above using the admin login, and it failed:
Msg 262, Level 14, State 18, Procedure blah, Line 2 [Batch Start Line 21]
CREATE PROCEDURE permission denied in database 'master'.
I then tried to create another login and user and then assigned them to the "dbmanager" role and tried the create procedure again - same failure.
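Not something tried in the question, but a quick way to confirm exactly which database-level permissions the connected principal actually has in master is the built-in fn_my_permissions function; a minimal sketch:

```sql
-- Run while connected to the master database of the Azure SQL logical server.
SELECT permission_name
FROM fn_my_permissions(NULL, 'DATABASE')
ORDER BY permission_name;
```

If CREATE PROCEDURE is absent from that list for every principal you can create, that is consistent with the operation simply not being allowed in master.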
Anssssss
(248 rep)
Apr 20, 2021, 07:18 PM
• Last activity: Jul 17, 2025, 12:09 AM
0 votes, 1 answer, 154 views
Does Data Migration Assistant migrate system databases?
I would like to know if Data Migration Assistant copies system databases (master, model, msdb) into the new server.
Rauf Asadov
(1313 rep)
Aug 21, 2019, 11:48 AM
• Last activity: Jul 15, 2025, 06:06 PM
1 vote, 1 answer, 157 views
Why is the used space of an Azure SQL Server table so high?
I have two tables in my SQL Server database (Azure managed); the structure of the two tables is below (renamed for easy reference).
**dbo.TABLE1**
CREATE TABLE [dbo].[TABLE1] (
[ID] VARCHAR (50) NOT NULL, -- PK index
[S_NUMBER] VARCHAR (50) NOT NULL,
[MAKE] VARCHAR (12) NULL,
[MODEL] VARCHAR (40) NULL,
[C_ID] VARCHAR (50) NOT NULL,
[C_CODE] REAL NULL,
[C_DESCRIPTION] VARCHAR (50) NULL,
[S_CODE] REAL NULL,
[S_DESCRIPTION] VARCHAR (50) NULL,
[LS_TIME] DATETIME2 (7) NOT NULL,
[LE_TIME] DATETIME2 (7) NOT NULL,
[YEAR] DATE NULL,
[MONTH] DATE NULL,
[WEEK] DATE NULL,
[DAY] DATE NULL,
[HOUR] DATETIME2 (7) NULL,
[WEEK_DAY] VARCHAR (12) NULL,
[F_USED] REAL NULL,
[F_USED_UOM] VARCHAR (12) NULL,
[DIST_TRAVE] REAL NULL,
[DIST_TRAV_UOM] VARCHAR (12) NULL,
[DURATION] REAL NULL,
[DURATION_UOM] VARCHAR (12) NULL,
[WEIGHT_MOVED] REAL NULL,
[WEIGHT_STORED] REAL NULL,
[WEIGHT_UOM] VARCHAR (12) NULL,
[CREATED_TIME] DATETIME NULL,
[MODIFIED_TIME] DATETIME NULL,
[OPER_ID] VARCHAR (12) NULL,
[OPER_TYPE] VARCHAR (12) NULL,
[START_TIME] DATETIME2 (7) NULL,
[END_TIME] DATETIME2 (7) NULL,
[UTC_DAY] DATE NULL,
[UTC_WEEK] DATE NULL,
[UTC_MONTH] DATE NULL,
[UTC_YEAR] SMALLINT NULL,
[UTC_HOUR] TINYINT NULL,
[LOCAL_HOUR] TINYINT NULL,
[EFFICIENCY] DECIMAL (5) NULL,
[JS_ID] BIGINT NOT NULL,
[L_ZONE] NVARCHAR (50) NULL,
[D_ZONE] NVARCHAR (50) NULL,
[SHIFT_TYPE] NVARCHAR (12) NULL,
[SHIFT_DATE] DATE NULL,
[D_CATEGORY] NVARCHAR (50) NULL,
[REC_CATEGORY] NVARCHAR (12) NULL,
[LSERIAL_NUMBER] VARCHAR (50) NULL,
[GPS_TIME] DATETIME NULL,
[GPS_STATUS] VARCHAR (12) NULL,
[A_ID] VARCHAR (50) NULL,
[E_ID] VARCHAR (50) NULL,
[TAG_GUID] VARCHAR (64) NULL,
[LZ_ID] VARCHAR (64) NULL,
[DZ_ID] VARCHAR (64) NULL,
[LZ_VERSION] INT NULL,
[DZ_VERSION] INT NULL,
[Z_ID] BIGINT NULL,
[SHIFT_ID] NVARCHAR (64) NULL,
[LI_TIME] DATETIME2 (3) CONSTRAINT [S_CREATED_TIMESTAMP] DEFAULT (sysdatetime()) NULL,
[PC_ID] INT NOT NULL DEFAULT 0,
[L_MAKE] VARCHAR (32) NULL,
[TV_MOVED] DECIMAL (18, 2) NULL
);
**dbo.TABLE2**
CREATE TABLE [dbo].[TABLE2](
[ID] NVARCHAR(32) NOT NULL, -- PK index
[C_ID] NVARCHAR(64),
[DURATION] DECIMAL(18,2),
[P_COUNT] BIGINT,
[LS_TIME] DATETIME,
[LE_TIME] DATETIME,
[S_AGG_ID] NVARCHAR(32) -- FK index
)
**dbo.TABLE2** is a subset of **dbo.TABLE1** with only 5 columns, and the row count is the same. We capture only 5 columns from **dbo.TABLE1** and insert them into **dbo.TABLE2**. But the size (space used) of **dbo.TABLE2** is almost **3 times** that of **dbo.TABLE1**.
**Indexes:** Apart from the **PK** and **FK** indexes, we have one additional index on **dbo.TABLE1** with include columns [JS_ID, MAKE, S_NUMBER, TAG_GUID, LE_TIME].
Why is the space used so high for the smaller table even though its row count is the same as its source table?
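Not part of the original post, but the usual first step when a "smaller" table reports more space is to break the numbers down per table and per index; a sketch using the table names from the question:

```sql
-- Reserved / data / index / unused breakdown per table.
EXEC sp_spaceused N'dbo.TABLE1';
EXEC sp_spaceused N'dbo.TABLE2';

-- Page counts, page fullness and average row size per index; low page density or many
-- forwarded records on a heap often explains unexpectedly large space usage.
SELECT OBJECT_NAME(ips.object_id)          AS table_name,
       i.name                              AS index_name,
       ips.index_type_desc,
       ips.page_count,
       ips.avg_page_space_used_in_percent,
       ips.avg_record_size_in_bytes,
       ips.forwarded_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.TABLE2'), NULL, NULL, 'DETAILED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;
```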

Lovababu
Jul 18, 2020, 05:00 AM
• Last activity: Jul 13, 2025, 03:02 AM
0 votes, 1 answer, 171 views
How to route traffic to an Azure SQL server via a jump server to avoid whitelisting the dev machine?
I have to connect to an Azure SQL server (e.g. mydatabase.azuredb.windows.net:1433). In order to do so, I have to ask my admin to whitelist my machine's IP; it works fine that day. The next day my machine restarts and gets another IP address, so I have to ask again and wait for the whitelisting, and the cycle continues. We work over a VPN and have a server (Windows, on-premises) behind the VPN that has a static IP.
Is it possible to route Azure SQL server traffic from my machine via this server?
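One way to use that static-IP Windows server as a jump box is a simple port forward, so only the jump server's IP ever needs whitelisting. This is a sketch, not from the question: it reuses the example hostname above and assumes the Azure SQL server's connection policy keeps traffic on port 1433 (the Proxy policy, the default for connections from outside Azure).

```
:: Run in an elevated prompt on the on-premises Windows server with the static IP.
:: Dev machines then point their connection strings at <jump-server-ip>,1433.
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=1433 connectaddress=mydatabase.azuredb.windows.net connectport=1433

:: Allow the listening port through Windows Firewall.
netsh advfirewall firewall add rule name="Azure SQL portproxy" dir=in action=allow protocol=TCP localport=1433
```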
AsitK
(101 rep)
Sep 11, 2023, 03:48 PM
• Last activity: Jul 10, 2025, 12:04 AM
2 votes, 1 answer, 158 views
Reduce Parallelism Without Setting MAXDOP
I am trying to reduce parallelism in my query without explicitly setting MAXDOP. I have been informed that MAXDOP can hinder throughput as the platform grows and query executions increase (happy to be corrected on this).
The following query has been observed hitting MAXDOP 8, causing spikes of workers in our Azure SQL database. The result is the error "the request limit for the database is 800 and has been reached".
The following has been applied to try and make this query less parallel:
1. Split the queries into #ErIds and #BaseTable
2. Change a big CASE statement into individual IFs for the sort
If anyone has any ideas on how to reduce parallelism that would be really appreciated.
I can provide more info if needed.
Thank you.
```
declare @uniqueIds dbo.GuidList
, @ids dbo.IntegerList
, @searchTerm nvarchar(max) = null
, @pageNum int
, @pageSize int
, @sortBy nvarchar(50) = 'Name'
, @sortDescending bit = 0
set nocount on;
-- creating these local variables to avoid parameter sniffing
declare @pageNumLocal int = case
when @pageNum is null
or @pageNum <= 0
then 0
else @pageNum
end
, @pageSizeLocal int = isnull(@pageSize, 999999) -- If pageSize is null, return all records
, @sortByLocal nvarchar(50) = isnull(@sortBy, 'Name')
, @sortOrderLocal nvarchar(20) = 'asc'
, @searchTermLocal nvarchar(50) = '%' + lower(trim(isnull(@searchTerm, ''))) + '%';
if @sortDescending = 1
set @sortOrderLocal = 'desc'
else
set @sortOrderLocal = 'asc';
create table #ErIds(id integer primary key);
insert into #ErIds
select distinct e.Id
from [TableA] e
where e.IsSoftDeleted = 0
and (
e.Id in (
select intVal
from @ids
)
or e.UniqueId in (
select guidVal
from @uniqueIds
)
)
create table #BaseTable (
Id int,
UniqueId uniqueidentifier,
[Name] nvarchar(200),
EmployeeCount int,
ProcessorName nvarchar(255)
);
insert into #BaseTable
select e.Id,
e.UniqueId,
e.[Name],
e.EmployeeCount,
ProcessorName = isnull(u.FirstName + ' ' + u.LastName, u.EmailAddress)
from #ErIds erids
inner join [TableA] e
on erids.id = e.id
left join [TableB] bs
on bs.EmployerId = e.Id
left join [TableC] u
on u.Id = bs.ProcessorId
where
(
isnull(@searchTermLocal, '') = ''
or lower(trim(e.[Name])) like @searchTermLocal
or lower(trim(isnull(u.FirstName + ' ' + u.LastName, u.EmailAddress))) like @searchTermLocal
or lower(trim(convert(nvarchar(36), e.UniqueId))) like @searchTermLocal
)
if @sortByLocal = 'Name'
begin
if @sortOrderLocal = 'asc'
select * from #BaseTable order by [Name] asc offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only;
else
select * from #BaseTable order by [Name] desc offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only;
end
else if @sortByLocal = 'ProcessorName'
begin
if @sortOrderLocal = 'asc'
select * from #BaseTable order by ProcessorName asc offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only;
else
select * from #BaseTable order by ProcessorName desc offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only;
end
else if @sortByLocal = 'EmployeeCount'
begin
if @sortOrderLocal = 'asc'
select * from #BaseTable order by EmployeeCount asc offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only;
else
select * from #BaseTable order by EmployeeCount desc offset @pageNumLocal * @pageSizeLocal rows fetch next @pageSizeLocal rows only;
end
drop table #ErIds;
drop table #BaseTable;
```
SkelDave
(163 rep)
Jul 7, 2025, 09:25 AM
• Last activity: Jul 8, 2025, 07:45 PM
2 votes, 1 answer, 52 views
Temporal table return UPDATED.ValidFrom duplicates
I'm trying to understand temporal tables in SQL Server better, and how I can recreate an issue we have.
Sometimes the time returned from an UPDATE is not the same as what is really stored in the table, and the value returned is a duplicate of the one from another thread. I want to at least be able to recreate and understand these issues better.
I have an Azure SQL DB with a temporal table:
CREATE TABLE TT (
Id UNIQUEIDENTIFIER PRIMARY KEY CLUSTERED
, Amount MONEY NOT NULL
, [ValidFrom] datetime2 GENERATED ALWAYS AS ROW START
, [ValidTo] datetime2 GENERATED ALWAYS AS ROW END
, PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
) WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.TTH));
GO
CREATE NONCLUSTERED INDEX IX_TTH_Id ON TT([Id]) WITH(ONLINE = ON)
GO
When my application is called, it runs the following queries:
1. begin tran
2. select * from TT with (updlock, serializable) where Id = '...'
3. some selects, inserts and updates in other tables...
4. update TT set Amount = 123 output INSERTED.ValidFrom where id = '...'
5. commit tran
Often there will be two instances running that both want to update the same row in TT (same Id). What is confusing is that sometimes the ValidFrom returned is not the same as what is actually written to the table:
1. Thread A: begin tran
2. Thread A: select * from TT with (updlock, serializable) where Id = @x
3. Thread B: begin tran
4. Thread B: select * from TT with (updlock, serializable) where Id = @x
(this will wait)
5. Thread A: some selects, inserts and updates in other tables...
6. Thread A: update TT set Amount = 123 output INSERTED.ValidFrom where id = @x
7. Thread A. commit tran
8. Thread B: some selects, inserts and updates in other tables...
9. Thread B: update TT set Amount = 123 output INSERTED.ValidFrom where id = @x
10. Thread B. commit tran
What happens now is that the times returned in step 6 and step 9 are the same. But when I later inspect the history table they are very different (for example a difference of 2 seconds). How does this happen? The total execution time is less than 100 ms.
(I have tried to recreate it locally and against a testing DB we have, but with no luck. It's easy to construct cases where the ValidFrom values returned are the same, but then they are also the same in the history table.)
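Not part of the original repro, but when comparing what the OUTPUT clause handed back with what actually got persisted, the full version chain for a single row can be read in one query (standard temporal syntax, table name from the question):

```sql
-- Every version of one row, current and historical, ordered by period start.
DECLARE @x UNIQUEIDENTIFIER = '00000000-0000-0000-0000-000000000000';  -- replace with the contested Id

SELECT Id, Amount, ValidFrom, ValidTo
FROM TT FOR SYSTEM_TIME ALL
WHERE Id = @x
ORDER BY ValidFrom;
```

Since the period columns are based on the transaction start time rather than the statement time, it may also be worth comparing the returned values against each transaction's BEGIN TRAN time.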
viblo
(511 rep)
Jul 4, 2025, 11:48 AM
• Last activity: Jul 5, 2025, 12:30 PM
0 votes, 1 answer, 166 views
Copy large database from one server to another
I want to copy a database from one server to another. The source database is a geo-replicated read-only secondary. In case it matters, I want to use this as the source to avoid loading up the production (primary) database.
The source and target servers are in different resource groups, but they are located in the same area and in the same subscription.
This might seem like pretty standard stuff, but the problem I am having is that neither the export functionality nor the copy functionality within the Azure portal will work. The export fails with
>Error code 400: the size of the database to be exported is bigger than what is supported for this operation.
The copy fails in a slightly different way; when I try to configure the target database I cannot set the data storage size, it defaults to 32GB and the slider to change it is missing:
The source database is a little over 1TB in size.
What other options do I have for creating a copy of that database? Or is there a way to work around the error displayed above?
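One option outside the portal is the T-SQL database-copy command, run while connected to the master database of the target logical server — a sketch with illustrative names (whether a geo-replicated readable secondary is accepted as the copy source is worth verifying for your setup):

```sql
-- Run in master on the *target* server; the source is addressed as server.database.
CREATE DATABASE [MyDatabase_Copy]
    AS COPY OF [source-server-name].[MyDatabase]
    ( SERVICE_OBJECTIVE = 'GP_Gen5_8' );   -- pick a tier large enough for the ~1 TB of data
```

The copying login needs sufficient rights on both servers, and progress can be followed in sys.dm_database_copies on the target server's master database.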

slugster
(145 rep)
Apr 15, 2021, 04:04 AM
• Last activity: Jul 4, 2025, 06:06 PM
1 vote, 1 answer, 199 views
FILEPROPERTY not working over linked server to Azure SQL Database
FILEPROPERTY (Transact-SQL) is not working over a linked server pointing from an on-premises SQL Server 2017 instance to an Azure SQL Database. I see the following comment in Books Online:
> Returns NULL for files that are not in the current database.
But my linked server has @catalog pointing to a specific database.
Here is my code; I get NULL for the spaceUsedMB, freeSpaceMB, and percentFree columns.
SELECT
collectedAt = GetDate(),
serverName =(SELECT * FROM OPENQUERY (AzureDB_adventureworks, 'SELECT @@SERVERNAME')),
databaseName=(SELECT * FROM OPENQUERY (AzureDB_adventureworks, 'SELECT DB_NAME()')),
fileName = LEFT(a.NAME, 64) ,
a.FILE_ID AS fileId,
fileSizeMB = CONVERT(DECIMAL(12, 2), ROUND(a.size / 128.000, 2)),
spaceUsedMB = CONVERT(DECIMAL(12, 2), ROUND(FILEPROPERTY(a.name,'SpaceUsed')/ 128.000, 2)),
freeSpaceMB = CONVERT(DECIMAL(12, 2), ROUND(( a.size - FILEPROPERTY(a.name,'SpaceUsed'))/ 128.000, 2)),
percentFree = CONVERT(DECIMAL(12, 2), (CONVERT(DECIMAL(12, 2), ROUND((a.size - FILEPROPERTY(a.name,'SpaceUsed'))/128.000, 2))*100)/ CONVERT(DECIMAL(12, 2), ROUND(a.size / 128.000, 2))),
a.physical_name
FROM [AzureDB_adventureworks].[adventureworks].sys.database_files a
I was able to get by with the following code, but I am still curious why the code above did not work.
SELECT * FROM OPENQUERY (AzureDB_adventureworks,
'SELECT
GETDATE() AS collectedAT,
@@SERVERNAME AS serverName,
DB_NAME() AS databaseName,
LEFT(a.name, 64) AS fileName,
a.file_id AS fileId,
a.size AS fileSizeMB,
CONVERT(DECIMAL(12, 2), ROUND(FILEPROPERTY(a.name,''SpaceUsed'')/ 128.000, 2)) AS spaceUsedMB,
CONVERT(DECIMAL(12, 2), ROUND(( a.size - FILEPROPERTY(a.name,''SpaceUsed''))/ 128.000, 2)) AS freeSpaceMB,
CONVERT(DECIMAL(12, 2), (CONVERT(DECIMAL(12, 2), ROUND((a.size - FILEPROPERTY(a.name,''SpaceUsed''))/128.000, 2))*100)/ CONVERT(DECIMAL(12, 2), ROUND(a.size / 128.000, 2))) as percentFree,
a.physical_name AS physicalName
from adventureworks.sys.database_files a'
)
If I run the following code directly against the Azure SQL Database, I get correct results for all columns.
SELECT
collectedAt = GetDate(),
serverName = @@SERVERNAME,
databaseName= DB_NAME(),
fileName = LEFT(a.NAME, 64) ,
a.FILE_ID AS fileId,
fileSizeMB = CONVERT(DECIMAL(12, 2), ROUND(a.size / 128.000, 2)),
spaceUsedMB = CONVERT(DECIMAL(12, 2), ROUND(FILEPROPERTY(a.name,'SpaceUsed')/ 128.000, 2)),
freeSpaceMB = CONVERT(DECIMAL(12, 2), ROUND(( a.size - FILEPROPERTY(a.name,'SpaceUsed'))/ 128.000, 2)),
percentFree = CONVERT(DECIMAL(12, 2), (CONVERT(DECIMAL(12, 2), ROUND((a.size - FILEPROPERTY(a.name,'SpaceUsed'))/128.000, 2))*100)/ CONVERT(DECIMAL(12, 2), ROUND(a.size / 128.000, 2))),
a.physical_name
FROM sys.database_files a
SqlWorldWide
(13707 rep)
Mar 7, 2019, 02:45 PM
• Last activity: Jul 2, 2025, 10:01 PM
0 votes, 1 answer, 487 views
Importing a bacpac file using SqlPackage
How can I specify the database maximum size when importing a bacpac file using SqlPackage? The maximum size defaults to 32 GB, but my actual DB size is over 35 GB. Can I use DatabaseMaximumSize=(INT60) instead of DatabaseMaximumSize=(INT32)?
e.g. import parameters: /p:DatabaseEdition=GeneralPurpose /p:DatabaseServiceObjective=GP_Gen5_2 /p:DatabaseMaximumSize=(INT60)
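For what it's worth, the (INT32) in the documentation only denotes the parameter's data type; the value passed is a plain number of gigabytes. A hedged sketch of a full import invocation, with illustrative paths and names:

```
SqlPackage /Action:Import ^
    /SourceFile:"C:\backups\mydb.bacpac" ^
    /TargetServerName:"myserver.database.windows.net" ^
    /TargetDatabaseName:"mydb" ^
    /TargetUser:"sqladmin" /TargetPassword:"<password>" ^
    /p:DatabaseEdition=GeneralPurpose ^
    /p:DatabaseServiceObjective=GP_Gen5_2 ^
    /p:DatabaseMaximumSize=60
```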
Cho Wai Tun
(1 rep)
Jun 12, 2021, 07:12 AM
• Last activity: Jun 30, 2025, 07:08 AM
0 votes, 1 answer, 192 views
Azure SQL Backup Restore
How long will it take to restore a 40 GB full backup in an Azure SQL Database or Azure SQL Managed Instance?
SOUser
(31 rep)
Jun 8, 2023, 12:34 PM
• Last activity: Jun 29, 2025, 03:03 PM
0 votes, 1 answer, 192 views
On-premises SQL to Azure SQL MI Migration
We have a requirement to migrate around 800 GB of data from on-premises SQL Server (2012 and 2017), spread across multiple databases, to Azure SQL MI. We have run the DMA against the source and found some issues (data types no longer supported, coding style changes, etc.). We are planning to use DMS to perform the data migration.
My questions are:
1. At which stage should I perform the remediation: in the source itself before the migration, or in the target Azure SQL MI?
2. Which option is preferred for the data migration?
Hillol Saha
(1 rep)
Oct 26, 2021, 06:11 AM
• Last activity: Jun 29, 2025, 01:02 PM
0 votes, 1 answer, 192 views
Can you mix on-premises SQL Servers and Azure SQL Servers in a trust group?
I need to be able to create cross-server views which include tables from databases on SQL Servers in a local corporate domain, joined to tables in a database on a hosted Azure SQL Server instance, and I need to be able to access those views from stored procedures in the local databases using the "EXECUTE AS" operator.
We were able to set up a linked server on the local SQL Server to the Azure database, we were able to write and execute the views that join across the two systems, and the stored procedure that utilized those views executed successfully when run within Management Studio. However, when the stored procedure is executed from within the context of our ERP system (which happens to be Microsoft Dynamics SL), the stored procedures fail.
I believe one of the reasons they fail is that the ERP system executes all SQL calls within a predefined security context of a special login (I may be mangling the details... I'm at the edge of my expertise with respect to security in SQL here), functionally equivalent to things being called with an "EXECUTE AS" clause added.
So, for testing, I tried explicitly adding the EXECUTE AS to the stored proc, and after doing so, attempts to execute the procedure in Management Studio return an error that the linked server is not "Trustworthy"....
So, this finally brings me to the actual question I've posted....
How can you establish a trust relationship between a linked Azure server/database and a database on a local domain SQL server instance?
Tom Malia
Mar 15, 2022, 02:23 PM
• Last activity: Jun 26, 2025, 08:06 AM
6 votes, 4 answers, 1095 views
Does the new locking optimization make batching unnecessary?
Optimized Locking (2025-05-19) makes it sound like batching would be unnecessary if I were to, for example, try to delete 20 thousand rows from a million row table in one statement (DELETE FROM foo WHERE Status = 1). Is that true?
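For context, this is the batching pattern in question — the loop that optimized locking might make unnecessary (a sketch only, using the table and predicate from the example above):

```sql
-- Classic batched delete: each iteration is its own small transaction,
-- keeping the number of row locks held per transaction bounded.
DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    DELETE TOP (5000) FROM foo WHERE Status = 1;
    SET @rows = @@ROWCOUNT;
END;
```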
Mark Freeman
(2293 rep)
Jun 23, 2025, 03:31 PM
• Last activity: Jun 25, 2025, 12:18 PM
1 vote, 1 answer, 221 views
Calendar Event table - best practice setup for range queries and individual retrieval
This seems like a generic problem that should have been solved already, but I can't find anything about this. In general this question is - given a table where data is read by a date range, what is the best, most efficient setup?
We have a calendar event table that will quickly grow to millions of records.
The schema is something like:
CREATE TABLE [dbo].[CalendarEvent](
[Id] [uniqueidentifier] NOT NULL,
[DtStart] [datetime] NULL,
[DtEnd] [datetime] NULL,
[Created] [datetime] NULL,
[LastModified] [datetime] NULL,
[CalendarEventType] [nvarchar](255) NULL,
[CalendarId] [uniqueidentifier] NULL
PRIMARY KEY CLUSTERED
(
[Id] ASC
)WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
Forget about recurring events, etc. as that doesn't bear on our problem.
Most queries will be of the type:
select * from CalendarEvent where CalendarId = 'b5d6338f-805f-4717-9c0a-4600f95ac515' AND dtStart > '01/01/2020' AND dtStart < '10/22/2020'
Notice no joins, etc.
But we will also have some that select for individual events, and include joins:
select * from CalendarEvent ce join tags t on ce.Id = t.CalendarEventId where Id = '17606330-5486-496a-a91c-f5d0e123bfff'
Questions and ideas:
1. Should we keep the Id as the PK, but make the start date the clustered index?
2. Should we just make an index on dtStart?
3. Should we partition by month?
4. Should we denormalize a little and duplicate the dtStart data by including year and month columns that we can index and use in our range queries?
In general, when you do your querying on a table by date range, what is the best setup for this type of table?
Note: If you think this question could be improved to help more people, make it more generic and widely applicable, such as removing references to a Calendar Event table specifically, and making this just about date range querying in any type of table, please help me do that.
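Not a definitive recommendation, but a sketch of the kind of index ideas 1 and 2 are circling around — keyed so the typical range query becomes one seek plus a range scan (column names from the schema above):

```sql
-- Matches: WHERE CalendarId = @cal AND DtStart > @from AND DtStart < @to
CREATE NONCLUSTERED INDEX IX_CalendarEvent_CalendarId_DtStart
    ON dbo.CalendarEvent (CalendarId, DtStart)
    INCLUDE (DtEnd, CalendarEventType);
```

The uniqueidentifier Id can remain the primary key without also being the clustered index, which is the trade-off idea 1 raises.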
richard
(121 rep)
Oct 22, 2020, 05:53 PM
• Last activity: Jun 15, 2025, 07:03 PM
1 vote, 1 answer, 226 views
XE_FILE_TARGET_TVF WaitType in Azure SQL
Recently I started exploring Azure SQL and came across the wait type XE_FILE_TARGET_TVF. I tried searching online for more info but didn't find much information.
I am just curious to know:
- What does this wait type mean?
- Is it specific to Azure SQL?
- How does it affect SQL query performance?
Ketan
(507 rep)
Jul 19, 2016, 08:13 AM
• Last activity: Jun 14, 2025, 04:07 AM
0 votes, 1 answer, 228 views
SQL stored procedure to produce custom XML text
I have around 30 tables from which I need to generate a specific XML format, which is consistent across the tables.
With my current solution I have a stored procedure for each of the 30 tables where I do:
select ''
+ ''
+ ''
+ ''
+ ''
+ ''
+ ''
+
from dbo.table
where the values in parentheses are hardcoded for each table. So, for example, in one table the (id column) is CUSTOMER_ID, while in a different table the (id column) is STORE_ID, and (fieldname) is, for example, CUSTOMER_NAME. They are all hardcoded column names in the brackets, plus the primary key.
So my stored procedure covering all 30 tables is around 1000 lines long, as every column and primary key is hardcoded for each table.
I'm certain there is a dynamic way to achieve this functionality where the columns/rows are somehow iterated over, but I can't figure it out.
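A hedged sketch of the "dynamic way" hinted at above: read the column list from sys.columns and build each table's SELECT as dynamic SQL instead of hardcoding it. It assumes SQL Server 2017+/Azure SQL for STRING_AGG, that all columns convert to nvarchar, and the table name and element-per-column convention are illustrative:

```sql
-- Builds and runs: SELECT '<Col1>' + value + '</Col1>' + '<Col2>' + ... FROM <table>
DECLARE @table sysname = N'dbo.CUSTOMER',
        @sql   nvarchar(max);

SELECT @sql =
       N'SELECT ' +
       STRING_AGG(
           CONVERT(nvarchar(max),
               N'''<' + c.name + N'>'' + ISNULL(CONVERT(nvarchar(max), '
               + QUOTENAME(c.name) + N'), '''') + ''</' + c.name + N'>'''),
           N' + ') WITHIN GROUP (ORDER BY c.column_id) +
       N' FROM ' + @table
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID(@table);

EXEC sys.sp_executesql @sql;
```

One such procedure (or an outer cursor over the 30 table names) would then replace the ~1000 hardcoded lines.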
Anton
(101 rep)
Jan 21, 2020, 01:15 PM
• Last activity: Jun 11, 2025, 06:04 PM