Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
2
votes
1
answers
1678
views
Add shared folder to AlwaysOn AG Failover
We have set up a 2-node synchronous AG with automatic failover to provide HA for our ERP application (Dynamics GP).
The GP desktop client loads a number of shared reports and dictionaries from a file share currently located on one of the database servers. In order to provide complete high availability, this share needs to be available in the event of a failure of the primary replica.
At first I thought I would just copy the folder and set up shares on the secondary replica, but realized SMB is not accessible through the AG listener.
My second thought was to move the share to another location; however, this still introduces a single point of failure and defeats the purpose of HA.
My only two real choices are to somehow add the folder as a resource to the cluster (however, the two nodes are virtual machines and creating a shared SAN disk would be complex), or, as a messier option, to create a task that checks the server name and, in the event of a failover, updates DNS to direct clients via UNC to the correct server share.
What would be the recommended approach for this?
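One sketch of the server-name check behind that last option (not from the original question): each node can ask the HADR DMVs for its current role, and a scheduled task could repoint a DNS record for the share whenever the local role flips to primary.
-- Returns PRIMARY or SECONDARY for the replica local to this node.
SELECT ar.replica_server_name,
       ars.role_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar
    ON ars.replica_id = ar.replica_id
WHERE ars.is_local = 1;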
Jeremie
(21 rep)
Jun 7, 2017, 06:12 PM
• Last activity: Jun 26, 2025, 05:02 AM
0
votes
1
answers
294
views
How to connect to Microsoft Dynamics RMS 2.0 running SQL Server 2008 R2?
Would anyone know anything about Microsoft Dynamics RMS 2.0 (2012)? We are having trouble connecting our SQL Server to the client-side cashier (Windows 10 Pro). This software sunset in 2016, I believe, and Microsoft won't help (we used an emergency ticket). So the "cashier" doesn't see the database; however, it will see the server itself. Whenever using an OLE DB connection, the server is seen but cannot be connected to. We have opened every port on both computers to test whether we could establish a connection, but to no avail. I have spent 16 hours on this with several people and can't figure it out.
Here is the original question, asked here: https://www.facebook.com/groups/thisisanitsupportgroup/permalink/2698630683716128/
It covers a lot of what I have already done.
Chris
(1 rep)
Jul 25, 2020, 03:57 AM
• Last activity: May 11, 2025, 09:04 PM
4
votes
2
answers
751
views
Delete unused indexes from Microsoft Dynamics CRM Application
So I found tons of unused indexes in our CRM application with this query from Pinal Dave:
SELECT
o.name AS ObjectName
, i.name AS IndexName
, i.index_id AS IndexID
, dm_ius.user_seeks AS UserSeek
, dm_ius.user_scans AS UserScans
, dm_ius.user_lookups AS UserLookups
, dm_ius.user_updates AS UserUpdates
, p.TableRows
, 'DROP INDEX ' + QUOTENAME(i.name)
+ ' ON ' + QUOTENAME(s.name) + '.' + QUOTENAME(OBJECT_NAME(dm_ius.OBJECT_ID)) AS 'drop statement'
FROM sys.dm_db_index_usage_stats dm_ius
INNER JOIN sys.indexes i ON i.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = i.OBJECT_ID
INNER JOIN sys.objects o ON dm_ius.OBJECT_ID = o.OBJECT_ID
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
INNER JOIN (SELECT SUM(p.rows) TableRows, p.index_id, p.OBJECT_ID
FROM sys.partitions p GROUP BY p.index_id, p.OBJECT_ID) p
ON p.index_id = dm_ius.index_id AND dm_ius.OBJECT_ID = p.OBJECT_ID
WHERE OBJECTPROPERTY(dm_ius.OBJECT_ID,'IsUserTable') = 1
AND dm_ius.database_id = DB_ID()
AND i.type_desc = 'nonclustered'
AND i.is_primary_key = 0
AND i.is_unique_constraint = 0
ORDER BY (dm_ius.user_seeks + dm_ius.user_scans + dm_ius.user_lookups) ASC
GO
I was reading about the CRM application and how it creates its own indexes, etc.
I don't know if this is the right place to ask a question about Microsoft CRM, but is it safe to drop those indexes with 0 seeks, lookups and scans but tons of updates and rows?
I can foresee problems if they hard-coded those indexes inside the source code, but is this a good practice that Microsoft uses?
*Note*: I can close this post if this is not the place to ask this question.
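A safer intermediate step than DROP INDEX, not mentioned in the question: disable the index first, so its definition stays in metadata and it can be rebuilt if the CRM application turns out to rely on it. Index and table names below are hypothetical.
-- Disabling keeps the definition but stops all maintenance overhead.
ALTER INDEX [ndx_Example] ON [dbo].[AccountBase] DISABLE;
-- If something in CRM breaks during the review period, bring it straight back:
ALTER INDEX [ndx_Example] ON [dbo].[AccountBase] REBUILD;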
Racer SQL
(7546 rep)
Jul 24, 2019, 08:07 PM
• Last activity: Oct 16, 2023, 09:46 PM
15
votes
1
answers
3457
views
sp_cursoropen and parallelism
I'm running into a performance problem with a query that I can't seem to get my head around.
I pulled the query out of a cursor definition.
This query takes seconds to execute
SELECT A.JOBTYPE
FROM PRODROUTEJOB A
WHERE ((A.DATAAREAID=N'IW')
AND ((A.CALCTIMEHOURS<>0)
AND (A.JOBTYPE<>3)))
AND EXISTS (SELECT 'X'
FROM PRODROUTE B
WHERE ((B.DATAAREAID=N'IW')
AND (((((B.PRODID=A.PRODID)
AND ((B.PROPERTYID=N'PR1526157') OR (B.PRODID=N'PR1526157')))
AND (B.OPRNUM=A.OPRNUM))
AND (B.OPRPRIORITY=A.OPRPRIORITY))
AND (B.OPRID=N'GRIJZEN')))
AND NOT EXISTS (SELECT 'X'
FROM ADUSHOPFLOORROUTE C
WHERE ((C.DATAAREAID=N'IW')
AND ((((((C.WRKCTRID=A.WRKCTRID)
AND (C.PRODID=B.PRODID))
AND (C.OPRID=B.OPRID))
AND (C.JOBTYPE=A.JOBTYPE))
AND (C.FROMDATE>{TS '1900-01-01 00:00:00.000'}))
AND ((C.TODATE={TS '1900-01-01 00:00:00.000'}))))))
GROUP BY A.JOBTYPE
ORDER BY A.JOBTYPE
The actual execution plan looks like this.
Noticing the server-wide setting was set to MAXDOP 1, I tried playing around with MAXDOP settings.
Adding OPTION (MAXDOP 0) to the query, or changing the server setting, results in much better performance and this query plan.
However, the application in question (Dynamics AX) doesn't execute queries like this, it uses cursors.
The actual code captured is this.
declare @p1 int
set @p1=189527589
declare @p3 int
set @p3=16
declare @p4 int
set @p4=1
declare @p5 int
set @p5=2
exec sp_cursoropen @p1 output,N'SELECT A.JOBTYPE FROM PRODROUTEJOB A WHERE ((A.DATAAREAID=N''IW'') AND ((A.CALCTIMEHOURS<>0) AND (A.JOBTYPE<>3))) AND EXISTS (SELECT ''X'' FROM PRODROUTE B WHERE ((B.DATAAREAID=N''IW'') AND (((((B.PRODID=A.PRODID) AND ((B.PROPERTYID=N''PR1526157'') OR (B.PRODID=N''PR1526157''))) AND (B.OPRNUM=A.OPRNUM)) AND (B.OPRPRIORITY=A.OPRPRIORITY)) AND (B.OPRID=N''GRIJZEN''))) AND NOT EXISTS (SELECT ''X'' FROM ADUSHOPFLOORROUTE C WHERE ((C.DATAAREAID=N''IW'') AND ((((((C.WRKCTRID=A.WRKCTRID) AND (C.PRODID=B.PRODID)) AND (C.OPRID=B.OPRID)) AND (C.JOBTYPE=A.JOBTYPE)) AND (C.FROMDATE>{TS ''1900-01-01 00:00:00.000''})) AND ((C.TODATE={TS ''1900-01-01 00:00:00.000''})))))) GROUP BY A.JOBTYPE ORDER BY A.JOBTYPE ',@p3 output,@p4 output,@p5 output
select @p1, @p3, @p4, @p5
resulting in this execution plan (and unfortunately the same multiple-second execution times).
I've tried several things such as dropping cached plans and adding options in the query inside the cursor definition, but none of them seem to get me a parallel plan.
I've also searched Google quite a bit looking for parallelism limitations of cursors but can't seem to find any.
Am I missing something obvious here?
The actual SQL build is SQL Server 2008 (SP1) - 10.0.2573.0 (X64), which I realise is unsupported, but I cannot upgrade this instance as I see fit. I would need to transfer the database to another server, and that would mean pulling a fairly large uncompressed backup over a slow WAN.
Trace flag 4199 doesn't make a difference, and neither does OPTION (RECOMPILE).
The cursor properties are:
API | Fast_Forward | Read Only | Global (0)
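As a side note (not in the original post), the cursor properties quoted above can be read server-side while the cursor is open, via sys.dm_exec_cursors:
-- Argument 0 = all sessions; 'properties' holds strings such as
-- "API | Fast_Forward | Read Only | Global (0)".
SELECT c.session_id,
       c.properties,
       c.creation_time,
       t.text AS cursor_statement
FROM sys.dm_exec_cursors(0) AS c
CROSS APPLY sys.dm_exec_sql_text(c.sql_handle) AS t;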
Tom V
(15752 rep)
Jul 9, 2015, 01:23 PM
• Last activity: Aug 27, 2023, 11:05 AM
3
votes
2
answers
726
views
Merge join asks for seemingly unnecessary sort before joining
I use one table to update another table. Both tables have a clustered index where the first 4 columns are the same type of data.
CREATE UNIQUE CLUSTERED INDEX [i109139_I_109139CLI_44C9176D64C84ECB959C0EAA6AE7A4F4] ON [dbo].[t109139_44C9176D64C84ECB959C0EAA6AE7A4F4]
(
[PARTITION] ASC,
[DATAAREAID] ASC,
[ITEMID] ASC,
[INVENTSERIALID] ASC,
[RECID] ASC
)
GO
ALTER TABLE [dbo].[INVENTSERIAL] ADD CONSTRAINT [I_1204ITEMSERIALIDX] PRIMARY KEY CLUSTERED
(
[PARTITION] ASC,
[DATAAREAID] ASC,
[ITEMID] ASC,
[INVENTSERIALID] ASC
)
The following is the query I use to update the tables
UPDATE T1
SET NOTFORPROJECTS = T2.LANNOTFORPROJECTS
FROM tempdb."DBO".t109139_44C9176D64C84ECB959C0EAA6AE7A4F4 T1
CROSS JOIN LANINVENTSERIALVIEW T2
WHERE T1.PARTITION = 5637144576
AND T1.DATAAREAID = N'lan'
AND T2.PARTITION = 5637144576
AND T2.DATAAREAID = N'lan'
AND T2.INVENTSERIALID = T1.INVENTSERIALID
AND T2.ITEMID = T1.ITEMID
AND T2.DATAAREAID = T1.DATAAREAID
AND T2.PARTITION = T1.PARTITION
When I look at the query plan, I notice that it does use the clustered indexes of both tables, but it performs a hash match instead of a merge join. I find this unexpected, as the data should already be sorted on the equality columns.
So then I tried to force a merge join, and it seems that SQL Server still uses both clustered indexes, but first sorts the data from the two tables from (ItemId, InventSerialId) to (InventSerialId, ItemId) and then uses that to perform a merge join. So it basically swaps the two columns.
Do note that in the table InventSerial only one InventSerialId is an empty string, while in the other table that I update there will be many empty-string (not NULL) InventSerialIds. I have no idea if this is relevant.
Query plan:
https://www.brentozar.com/pastetheplan/?id=H1L4j7EF3
KHP
(65 rep)
Jul 6, 2023, 12:03 PM
• Last activity: Jul 6, 2023, 02:04 PM
3
votes
1
answers
315
views
SQL server chooses hash match over merge join but the input fields to join should be sorted
I have a simple query that joins two tables on the field PRODID. For some reason, SQL Server opts to use a hash match for this join, but I believe it should choose a merge join, as that field is part of the index and the preceding fields of both indexes are already used to filter out most of the data. So the next field in both indexes is PRODID, which should be sorted.
The query:
select JOURNAL.PRODID, JOURNAL.JOURNALID
from PRODJOURNALTABLE JOURNAL
inner join PRODROUTEJOB JOB on JOB.PRODID = JOURNAL.PRODID and JOB.DATAAREAID = JOURNAL.DATAAREAID and JOB.PARTITION = JOURNAL.PARTITION
where JOURNAL.POSTEDDATETIME between '2021/05/01' and '2021/05/10'
and JOURNAL.POSTED = 1
and JOURNAL.JOURNALTYPE = 1
and JOURNAL.DATAAREAID = N'LAN'
and JOURNAL.PARTITION = 5637144576
and JOB.WRKCTRID = N'TF1'
The query plan
https://www.brentozar.com/pastetheplan/?id=B1oyc8qBh
The used indexes
CREATE NONCLUSTERED INDEX [I_243ROLLERRORVIEWIDX] ON [dbo].[PRODJOURNALTABLE]
(
[PARTITION] ASC,
[DATAAREAID] ASC,
[JOURNALTYPE] ASC,
[POSTED] ASC,
[POSTEDDATETIME] ASC,
[PRODID] ASC
)
INCLUDE([JOURNALID])
CREATE NONCLUSTERED INDEX [I_258ROLLERRORVIEWIDX] ON [dbo].[PRODROUTEJOB]
(
[PARTITION] ASC,
[DATAAREAID] ASC,
[WRKCTRID] ASC,
[PRODID] ASC,
[OPRNUM] ASC
)
INCLUDE([OPRPRIORITY])
EDIT:
Adding a merge join hint to the query reveals that it tries to sort the data from PRODJOURNALTABLE on the field PRODID:
select JOURNAL.PRODID, JOURNAL.JOURNALID
from PRODJOURNALTABLE JOURNAL
inner merge join PRODROUTEJOB JOB on JOB.PRODID = JOURNAL.PRODID and JOB.DATAAREAID = JOURNAL.DATAAREAID and JOB.PARTITION = JOURNAL.PARTITION
where JOURNAL.POSTEDDATETIME between '2021/05/01' and '2021/05/10'
and JOURNAL.POSTED = 1
and JOURNAL.JOURNALTYPE = 1
and JOURNAL.DATAAREAID = N'LAN'
and JOURNAL.PARTITION = 5637144576
and JOB.WRKCTRID = N'TF1'
Query plan: https://www.brentozar.com/pastetheplan/?id=rJh1GPqBn
But it should be sorted according to the index, no?
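The underlying principle can be reproduced in isolation (a toy repro, not the poster's schema): a merge join needs both inputs sorted on one and the same global key order, so whenever the optimizer's chosen order disagrees with what an input's index delivers, a Sort appears even though each index is "sorted" on the join columns.
-- Two tables ordered (a, b) and (b, a) respectively.
CREATE TABLE #t1 (a int NOT NULL, b int NOT NULL, PRIMARY KEY CLUSTERED (a, b));
CREATE TABLE #t2 (a int NOT NULL, b int NOT NULL, PRIMARY KEY CLUSTERED (b, a));
INSERT INTO #t1 VALUES (1, 1), (1, 2), (2, 1);
INSERT INTO #t2 VALUES (1, 1), (1, 2), (2, 1);
-- Force a merge join: one input must be re-sorted to match whichever
-- key order the optimizer picks for the merge.
SELECT t1.a, t1.b
FROM #t1 AS t1
INNER MERGE JOIN #t2 AS t2
    ON t1.a = t2.a AND t1.b = t2.b;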
KHP
(65 rep)
May 23, 2023, 03:37 PM
• Last activity: May 23, 2023, 04:49 PM
1
votes
2
answers
1232
views
Viewing execution plans for queries with parameters
I have a sample query generated by an application (Microsoft Dynamics AX 2012 in this case) which is ineffective performance-wise (cross joins, ORDER BY, etc.).
I would like to display its execution plan and attempt to tune it by indexing or rewriting some parts of it.
I cannot just copy/paste it into SSMS because there are numerous parameters of many data types. I don't even know what the values for these parameters are.
Is there a way to quickly identify the execution plan of this query? Maybe by querying some DMVs? I got the query text from monitoring software, which must have done exactly that.
SELECT 2 AS f1,
T3.RECID AS f2,
T4.RECID AS f3,
T4.GENERALJOURNALACCOUNTENTRY AS f4,
T4.LEDGERDIMENSION AS f5,
Sum(T6.TRANSACTIONCURRENCYAMOUNT) AS f6,
T6.TRANSACTIONCURRENCY AS f7,
T6.MONETARYAMOUNT AS f8,
Sum(T7.ACCOUNTINGCURRENCYAMOUNT) AS f9,
N'aaa' AS DATAAREAID,
1 AS RECVERSION,
5637144576 AS PARTITION,
IDENTITY(bigint, 1, 1) AS RECID
INTO [##ax_tmp_tim99_151_7623]
FROM SUBLEDGERJOURNALENTRY T1
CROSS JOIN ACCOUNTINGDISTRIBUTION T2
CROSS JOIN TAXTRANS T3
CROSS JOIN SUBLEDGERJOURNALACCOUNTENTRY T4
CROSS JOIN ACCOUNTINGDISTRIBUTION T5
CROSS JOIN ACCOUNTINGDISTRIBUTION T6
CROSS JOIN SUBLEDGERJOURNALACCOUNTENTRYDISTRIBUTION T7
WHERE ( ( T1.PARTITION = @P1 )
AND ( ( ( ( T1.TRANSFERID = @P2 )
AND ( T1.LEDGER = @P3 ) )
AND ( T1.TYPE <> @P4 ) )
AND ( T1.TYPE <> @P5 ) ) )
AND ( ( T2.PARTITION = @P6 )
AND ( ( T2.ACCOUNTINGEVENT = T1.ACCOUNTINGEVENT )
AND ( ( ( T2.MONETARYAMOUNT <> @P7 )
AND ( T2.MONETARYAMOUNT <> @P8 ) )
AND ( T2.MONETARYAMOUNT <> @P9 ) ) ) )
AND ( ( ( T3.PARTITION = @P10 )
AND ( T3.DATAAREAID = @P11 ) )
AND ( T3.SOURCEDOCUMENTLINE = T2.SOURCEDOCUMENTLINE ) )
AND ( ( T4.PARTITION = @P12 )
AND ( ( ( ( T4.SUBLEDGERJOURNALENTRY = T1.RECID )
AND ( T4.POSTINGTYPE <> @P13 ) )
AND ( T4.POSTINGTYPE <> @P14 ) )
AND ( T4.POSTINGTYPE <> @P15 ) ) )
AND ( ( T5.PARTITION = @P16 )
AND ( T5.RECID = T2.PARENTDISTRIBUTION ) )
AND ( ( T6.PARTITION = @P17 )
AND ( ( ( T6.SOURCEDOCUMENTLINE = T5.SOURCEDOCUMENTLINE )
AND ( T6.PARENTDISTRIBUTION = T5.RECID ) )
AND ( ( ( T6.MONETARYAMOUNT = @P18 )
OR ( T6.MONETARYAMOUNT = @P19 ) )
OR ( T6.MONETARYAMOUNT = @P20 ) ) ) )
AND ( ( T7.PARTITION = @P21 )
AND ( ( ( T7.SUBLEDGERJOURNALACCOUNTENTRY = T4.RECID )
AND ( T7.ACCOUNTINGDISTRIBUTION = T6.RECID ) )
AND ( ( ( T7.ACCOUNTINGCURRENCYAMOUNT >= @P22 )
AND ( T6.TRANSACTIONCURRENCYAMOUNT >= @P23 ) )
OR ( ( T7.ACCOUNTINGCURRENCYAMOUNT < @P24 )
AND ( T6.TRANSACTIONCURRENCYAMOUNT < @P25 ) ) ) ) )
GROUP BY T3.RECID,
T4.RECID,
T4.GENERALJOURNALACCOUNTENTRY,
T4.LEDGERDIMENSION,
T6.TRANSACTIONCURRENCY,
T6.MONETARYAMOUNT
ORDER BY T3.RECID,
T4.RECID,
T4.GENERALJOURNALACCOUNTENTRY,
T4.LEDGERDIMENSION,
T6.TRANSACTIONCURRENCY,
T6.MONETARYAMOUNT
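One DMV route (a sketch; the LIKE filter is an arbitrary fragment of the query text): if the statement is still in the plan cache, its plan can be pulled without knowing the parameter values up front, and the sniffed compile-time values appear in the plan XML's ParameterList.
-- query_plan opens as a graphical plan when clicked in SSMS.
SELECT st.text,
       qs.execution_count,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE N'%SUBLEDGERJOURNALENTRY%';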
Łukasz Kastelik
(1405 rep)
Mar 27, 2015, 10:13 AM
• Last activity: Apr 24, 2023, 11:03 AM
7
votes
2
answers
2284
views
Verifying connection pooling server side with CONTEXT_INFO on
I'm working with a 3 tiered application, Microsoft Dynamics AX, where the middle tier maintains connections to a SQL Server. Several clients connect to this middle tier server.
The middle tier server typically has several connections open to the SQL Server, so I'm pretty sure they are being pooled; however, there is no documentation available as to how this is implemented.
Normally we cannot relate SPIDs to users or client applications, but there is an option where we can set a registry key (specific to Microsoft Dynamics AX) which makes this information available in the context_info field of sys.dm_exec_sessions.
Again, there is no documentation about how this is implemented. The only information we have on this is a vague blog entry on MSDN.
The post mentions
> Adding this information has a small performance overhead.
So, as we don't know any of the implementation details, such as:
1. Is the information somehow included in the connection string, or is this done by SET CONTEXT_INFO?
2. When are connections reused?
3. What exact impact can be expected?
Is there any way to determine server side how the connection pooling is working and what the impact of the context_info is?
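For the first point at least, a partial server-side check is possible (a sketch; it assumes the AX payload is readable as text, which the blog post implies but doesn't document):
-- Sessions carrying a non-empty context_info payload, decoded as text.
-- If AX sets it per statement via SET CONTEXT_INFO, the value will vary
-- over time within one pooled connection.
SELECT s.session_id,
       s.program_name,
       CAST(s.context_info AS varchar(128)) AS context_info_text
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
  AND s.context_info <> 0x;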
**update:**
Using this query from here
SELECT des.program_name,
des.login_name,
des.host_name,
-- der.database_id,
COUNT(des.session_id) AS [Connections]
FROM sys.dm_exec_sessions des
INNER JOIN sys.dm_exec_connections DEC
ON des.session_id = DEC.session_id
WHERE des.is_user_process = 1
--AND des.status <> 'running'
GROUP BY des.program_name,
des.login_name,
des.host_name
-- ,der.database_id
HAVING COUNT(des.session_id) > 2
ORDER BY COUNT(des.session_id) DESC
I can see connection pooling is used.
Tom V
(15752 rep)
May 12, 2015, 01:28 PM
• Last activity: Mar 18, 2023, 12:00 PM
0
votes
0
answers
42
views
Monitoring MS Dynamics 365 with Extended Events
I know that SQL Server and Azure SQL support "Extended Events" as a means of profiling execution. I also know that MS Dynamics 365 is built with Azure SQL. What I'd like to accomplish is creating and managing XEvents for MS Dynamics 365 (i.e on its underlying Azure SQL DB). Presumably this means I'd be storing the telemetry data into Azure BLOB storage.
Is it possible to profile MS Dynamics 365 CRM, or is it so deeply "managed" that such instrumentation isn't possible? And if it isn't possible, are there any particular alternatives? My intent is to capture a cross-section of what kinds of queries/updates are being performed against the DB, and in what quantities, for a typical daily client workload.
A key motivation for this is to serve as an input for capacity planning for the creation of a custom "operational data store", into which a breadth of applications will subsequently depend on for similar data.
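For reference, this is what a database-scoped session looks like on Azure SQL (a sketch with an arbitrary session name; whether the Dynamics 365 SaaS tier exposes a connectable endpoint to run it against is exactly the open question here):
-- Azure SQL Database only allows ON DATABASE sessions. An Azure BLOB
-- event_file target would replace ring_buffer, plus a database scoped
-- credential for the storage container.
CREATE EVENT SESSION [d365_workload_sample] ON DATABASE
ADD EVENT sqlserver.rpc_completed,
ADD EVENT sqlserver.sql_batch_completed
ADD TARGET package0.ring_buffer;
ALTER EVENT SESSION [d365_workload_sample] ON DATABASE STATE = START;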
Brent Arias
(191 rep)
Oct 28, 2022, 04:47 PM
• Last activity: Oct 28, 2022, 05:03 PM
0
votes
2
answers
416
views
SQL - Transactional Replication between existing databases
I want to reconfigure a Dynamics AX 2012 SQL Server to improve performance of an existing architecture where one SQL Server is used for both transactions and reports. This reduces SQL performance during peak loads (large MIS report generation).
In the proposed architecture, the transaction SQL Server replicates to a report SQL Server to share the burden and improve performance. I chose transactional replication to implement this. But here is the problem:
1. Because I'm using SQL Server Standard, I can choose all the tables but not all views and SPs for replication, because some are indexed views, and for those SQL Server Enterprise is required.
2. If I don't replicate all the views and SPs, then AX fails to recognise the report SQL Server.
Is it possible to restore a backup from the transaction SQL Server to the report SQL Server (which contains all tables, views and SPs) and then enable transactional replication to replicate just the tables?
Or should I just use the basic Always On availability group in SQL Server Standard and set up a read replica?
Or is there a smarter :) way around this?
Thank you in advance for the help.
Adding a screenshot of the Indexed views issue with STD SQL.
Mandeep Singh
(3 rep)
Aug 29, 2022, 03:43 AM
• Last activity: Aug 29, 2022, 02:54 PM
0
votes
0
answers
53
views
Indexing operations on a 8TB AX dynamics database
We have an 8TB AX Dynamics database on SQL Server 2016 Enterprise Edition.
Our team would like to perform indexing operations, but for a maximum of 8 hours and with minimal impact during the week. My recommendation was to run online operations using Ola's script during the week and allow offline operations during the weekend only.
The feedback I got is that Ola's script creates a worktable (a to-do list of operations) at every invocation of the script, which is correct.
They would rather have a script that creates a worktable initially and only creates a new worktable when the previous one is completely handled, so that no indexes are left untouched.
Is somebody aware of such a script?
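Worth noting, though not a full answer to the worktable question: Ola's IndexOptimize already takes a @TimeLimit parameter, which stops starting new operations once the budget in seconds is spent. A sketch with a hypothetical database name; it still rebuilds its to-do list per run rather than persisting one:
EXECUTE dbo.IndexOptimize
    @Databases = 'AXDB',                  -- hypothetical name
    @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE',
    @TimeLimit = 28800,                   -- 8 hours, in seconds
    @LogToTable = 'Y';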
TimBer
(11 rep)
Jun 24, 2022, 02:40 PM
• Last activity: Jun 25, 2022, 01:57 PM
8
votes
2
answers
1675
views
What is the persistent_version_store table?
What is the purpose of the MS-shipped table sys.persistent_version_store?
**Why do I want to know**
Primarily curiosity, partly because it may relate to an issue waiting to bite us...
We're running a SaaS instance of MS Dynamics 365 for Finance and Operations. This has very limited tools for monitoring performance. Whilst trying to find something of meaning I discovered that this table had an insanely high row count (~75,000,000). That seems worrying. It's possible that this is OK, but MS have included it in their analysis tools, meaning that it eclipses all other information on the graph (since it has a linear scale), making the graph pointless. I want to understand what this table is so I can determine whether it's something to be investigated, or whether it was just bad design to include it on the monitoring tools.
**Why not read the documentation**
Since it's an MS table it should be documented somewhere. However, searching for persistent_version_store gives one result, and it's in Chinese. Searching for Persistent Version Store does better, but it's unclear whether this is the same thing, and there's not much documentation on what it is; only on how to fix issues where it has reached its maximum size.
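For what it's worth, the name matches the persistent version store used by Accelerated Database Recovery, and on engines that support ADR there is a documented DMV for its size (a sketch; whether the D365 F&O monitoring graph is surfacing the same structure is an assumption):
SELECT DB_NAME(database_id) AS database_name,
       persistent_version_store_size_kb,
       current_aborted_transaction_count
FROM sys.dm_tran_persistent_version_store_stats;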
JohnLBevan
(459 rep)
Aug 25, 2018, 09:39 AM
• Last activity: Sep 15, 2021, 12:24 PM
6
votes
3
answers
1096
views
Restore database of 90GB to computer of only 100GB
How do I restore a huge Microsoft SQL Server database, which is 6GB as a zip file and 90GB as a .bak file, to a computer that only has 100GB of free space?
When I uncompress the zip file I get a .bak file that is 90GB, leaving only 10GB on the hard drive, hence not enough space to restore using SQL Server Management Studio.
The advantage is that I have two computers; the other had 200GB free, so I was able to fully restore there, but it's not the intended computer. I've been able to access the restored database over the network using Ethernet, but my other software of interest requires that the SQL Server running the database is on the same computer as that software, so I have to move/restore the database to the computer with less space somehow.
I'm kindly requesting guidance. I've been exploring the option of manually copying the .mdf and log files from the computer where I fully restored over to the new computer and then wiring it up somehow, but I don't know if that will work.
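One route that avoids staging the 90GB .bak locally (a sketch; paths, database and logical file names are hypothetical): restore straight from a share on the larger machine, so only the restored data and log files need local space. The SQL Server service account needs read access to the share, and the restored files must still fit within the ~100GB free.
-- Find the logical file names inside the backup first.
RESTORE FILELISTONLY FROM DISK = N'\\BIGPC\Backups\erp.bak';
RESTORE DATABASE [Erp]
FROM DISK = N'\\BIGPC\Backups\erp.bak'
WITH MOVE N'Erp_Data' TO N'C:\SQLData\Erp.mdf',
     MOVE N'Erp_Log'  TO N'C:\SQLData\Erp_log.ldf',
     STATS = 5;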
Gilboot
(173 rep)
Jul 3, 2021, 06:15 PM
• Last activity: Jul 4, 2021, 03:23 PM
35
votes
3
answers
54365
views
What does OPTION FAST in SELECT statement do?
I have done some digging on what the OPTION (FAST XXX) query hint does inside a SELECT statement and I'm still confused about it. According to MSDN:
> Specifies that the query is optimized for fast retrieval of the first number_rows. This is a nonnegative integer. After the first number_rows are returned, the query continues execution and produces its full result set.
To me that does not make much sense; basically, the query can get the first XXX rows really fast and then the rest at normal speed?
The Microsoft Dynamics query that got me thinking on this is:
select pjproj.project,pjproj.project_desc,pjproj.customer,pjproj.cpnyid
from pjproj WITH (NOLOCK)
where project like '%'
order by project OPTION(FAST 500)
Can anyone explain exactly what this query hint is doing and its advantage over not using it?
Matthew Verstraete
(925 rep)
Apr 15, 2016, 06:38 PM
• Last activity: Apr 18, 2021, 12:54 PM
0
votes
1
answers
79
views
Can the Data Export Service in Microsoft Dynamics export data to an Azure Synapse Data Warehouse?
I am able to use the Data Export Service (DES) to export data to an Azure SQL Database. When I try to create a new profile in DES that connects to a Data Warehouse (aka Azure Synapse Analytics) on the same Azure SQL Server using the same user (db_owner member), I keep getting the message 'Error validating profile registration'. Note that the Key Vault Secrets used for each connection are virtually the same except the 'Initial Catalog' and the DES wizard is able to validate the connection to the Key Vault.
sartoris
(103 rep)
Jan 28, 2021, 03:49 PM
• Last activity: Feb 12, 2021, 11:10 PM
0
votes
2
answers
314
views
Difference in duration of SP:Completed and sum(SP:StmtCompleted)
I have a stored procedure which sometimes shows a big difference between its overall execution time and the execution time of its individual statements. The procedure consists of 4 statements, each with a maximum run-time of 4 ms, but some executions have an SP:Completed duration of around 1000 ms or more.
The procedure is called very often (at least once every second, or more).
My first idea was recompile time, but this procedure is rather simple and shouldn't need to be recompiled. I ran a trace to find recompile events relating to this procedure and couldn't catch any record.
I'm using MS SQL Server 2017, if that helps. Every duration mentioned here was obtained from SQL Server Profiler.
What is the reason for the long procedure execution time?
Michael Gontar
(1 rep)
Jan 21, 2019, 01:05 PM
• Last activity: Dec 25, 2020, 12:35 PM
3
votes
0
answers
47
views
Source Control Dynamics
What ways are there to do version control of an MS Dynamics database? I've found mentions of DACPAC, and obviously SSIS packages could just be dumped in a git repo.
I'm particularly keen on learning how to keep connections between different environments (no one wants to copy an SSIS package over and have to go through each connection updating it).
Additionally, and this may be a "novice" question, but my understanding is that a DACPAC doesn't contain SSIS. Is this correct? Would both need to be in source control?
What is the best way!?
JohnFF
(31 rep)
Mar 1, 2020, 12:08 PM
• Last activity: Jul 14, 2020, 12:33 PM
0
votes
1
answers
648
views
Full-Text Search against serial numbers?
Using Dynamics 365 with MSSQL. There is an option to enable full-text search; however, once it is enabled we cannot search by partial serial numbers.
For instance, KN567228 shows in the full-text catalog, but searching for 7228 returns no results. The stoplist is empty, so that isn't the cause. This is a unique situation for full-text search because there really isn't a thesaurus or word breaker that could break down serial numbers in a meaningful way. But I thought the search would work for partial text (like searching for 7228 in KN567228).
**Should I just not be using Full-Text Search in this scenario?**
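The observed behaviour matches how full-text wildcards work: CONTAINS supports prefix terms only, so there is no suffix or infix match to find 7228 inside KN567228. A sketch with hypothetical table/column names:
-- Prefix search works against the full-text index:
SELECT SerialNo
FROM dbo.SerialNumbers
WHERE CONTAINS(SerialNo, N'"KN56*"');
-- Trailing fragments fall back to LIKE, which cannot use the
-- full-text index and scans instead:
SELECT SerialNo
FROM dbo.SerialNumbers
WHERE SerialNo LIKE N'%7228%';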
JustinCredible
(3 rep)
Jun 1, 2020, 04:58 PM
• Last activity: Jun 1, 2020, 08:55 PM
0
votes
2
answers
2587
views
Add missing column to SSIS, data not extracted to staging table
I've inherited an environment that extracts data from a Dynamics AX 2012 ERP to a data warehouse for reporting. The original extraction set up by consultants did not include the proper date column needed by accounting. I have performed the following:
- Created the column in the staging tables and destination tables
- Opened the packages in Visual Studio and found that each package file uses the same variable User::vSQLBase, which contains the query that pulls all columns from the table needed in Dynamics AX. I altered that variable to include the date column.
- I saved my changes and deployed the entire solution back to the data warehouse.
- Ran the job in SQL Server, but none of the columns populated.
I did have an error appear in the debug of Visual Studio when I test executed the package.
> Warning: 0x800470C8 at Incremental Load, Read Delta : The external
> columns for Read Delta are out of synchronization with the data source
> columns. The column "Document Date" needs to be added to the external
> columns.
I did contact the consultant, who replied: "the new column is not mapped to the staging table in the package, be aware that both the full load and incremental load need the same mapping. and a full load is mandatory since this is a new column". I am unable to contact the consultant further due to budget limitations. What am I missing to get the column to the staging table?
SQLHound
(271 rep)
Mar 23, 2019, 07:12 PM
• Last activity: Mar 25, 2019, 01:41 PM
0
votes
1
answers
2854
views
SSRS Rectangle visibility based on Parameter Value
I am creating a report in SSRS. It's a Dynamics CRM 2011 report.
I have created some tables and some rectangles. Each rectangle contains some tables.
There is also a parameter that contains values from 1 to 4. It is a multiple-values parameter.
The goal is that the user selects parameters (one or more) and then the tables appear based on the rectangle visibility condition (maybe two rectangles appear, or maybe all of them).
We all know that there is an IN operator in SQL syntax, but I do not know how to do the same here. I googled a lot but with no success.
Here is what I tried in the rectangle visibility:
=Parameters!type.Value.Equals("1")
And in the second one:
=Parameters!type.Value.Equals("2")
and so on until the last rectangle. I even changed it to:
=Parameters!type.IsMultiValue.Equals("1")
I also put both of them in =iif, but with no success.
How can I perform this? Visibility of a rectangle based on the multiple-values parameter.
Any help will be appreciated.
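A common pattern for this (a sketch, assuming the parameter values are the strings "1" through "4"): a multi-value parameter's Value is an array, so membership can be tested with Array.IndexOf in each rectangle's Hidden property.
=IIF(Array.IndexOf(Parameters!type.Value, "1") >= 0, False, True)
Repeat with "2", "3", "4" on the other rectangles; if the parameter is typed as Integer, compare against 1 without quotes.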
alireza azadi
(25 rep)
Feb 19, 2019, 05:47 AM
• Last activity: Feb 19, 2019, 06:03 AM
Showing page 1 of 20 total questions