Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
25
views
Create a script for changes from one database to another on same server
I had been using the **Tracking feature** in MariaDB to collect the changes I have made to the database on my PC and then applying the script to a database on the production server.
Now I have MySQL on my machine and the Workbench does not have a tracking feature.
I have made a copy of the database on the same server.
If I make changes to one of them, how can I get MySQL Workbench (or MySQL) to compare the two databases and produce a script to turn one into the other?
I can then run this script on the production server.
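Short of a dedicated diff tool, a rough structural comparison can be scripted in plain SQL against information_schema. A hedged sketch (the schema names db1 and db2 are placeholders; this only finds columns missing on one side, it does not generate the ALTER script):

SELECT c1.TABLE_NAME, c1.COLUMN_NAME
FROM information_schema.COLUMNS AS c1
LEFT JOIN information_schema.COLUMNS AS c2
       ON c2.TABLE_SCHEMA = 'db2'
      AND c2.TABLE_NAME   = c1.TABLE_NAME
      AND c2.COLUMN_NAME  = c1.COLUMN_NAME
WHERE c1.TABLE_SCHEMA = 'db1'
  AND c2.COLUMN_NAME IS NULL;  -- columns present in db1 but absent from db2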
Rohit Gupta
(2126 rep)
Jul 10, 2025, 07:51 AM
• Last activity: Jul 10, 2025, 05:22 PM
9
votes
2
answers
2236
views
Change Tracking across several Tables
I'm looking into providing DB level change tracking within my system. I need to be able to track changes at an entity level, rather than just individual tables.
Right now, we have triggers on each of the tables of interest that write the table's PK into a `Change_Event` table. We can then query that table, and boil it down to the PK that represents the entity as a whole. We can then retrieve the data for that entity and perform the relevant actions upon it.
As an example, consider the following (simplified) schema:
CREATE TABLE Employee
(
    Id INT IDENTITY PRIMARY KEY,
    Name VARCHAR(250) NOT NULL,
    Telephone VARCHAR(15) NOT NULL
);
CREATE TABLE Visit
(
    Id INT IDENTITY PRIMARY KEY,
    VisitDate DATETIME2 NOT NULL,
    [Address] VARCHAR(500) NOT NULL,
    VisitorId INT NOT NULL,
    CONSTRAINT FK_Visit_VisitorId FOREIGN KEY (VisitorId) REFERENCES Employee (Id)
);
INSERT INTO Employee (Name, Telephone)
VALUES ('John Doe', '123456789'),
       ('Jane Smith', '999555333');
INSERT INTO Visit (VisitDate, Address, VisitorId)
VALUES (SYSDATETIME(), '123 Fake St', 1),
       (GETDATE() + 5, '99 Test Av', 2);
In this example, the Visit entity is considered to be the following XML:
SELECT Id,
       CAST((
           SELECT Id,
                  VisitDate,
                  Address,
                  (
                      SELECT E.Id,
                             E.Name,
                             E.Telephone
                      FROM Employee E
                      WHERE E.Id = V.VisitorId
                      FOR XML RAW ('Visitor'), TYPE, ELEMENTS
                  ) Visitor
           FROM Visit V
           WHERE V.Id = Visit.Id
           FOR XML RAW ('Visit'), ELEMENTS
       ) AS XML) AS Data
FROM Visit
I need to be able to know when one of those entities has changed. If I were to change the telephone number of the Employee, I need to be able to see that this has changed the Visit entity so that I can reprocess it.
----
Right now, my change table records the Employee Id. I then run a query that gives me all the visits with that employee as the VisitorId. That sounds fine when you are only looking at two tables, but when you factor in several tables (and potentially several levels of abstraction between them) the performance can get pretty slow.
I've looked into Change Data Capture, but that still seems to capture at a table level. Equally, I could add Modified date columns to all tables and combine all the modified dates from the view to produce a single max-modified value - but considering I need to filter on that field, I can't imagine performance would be that great either.
Is there a recommended method of dealing with this within SQL Server? As an added consideration - whilst the Instance is SQL 2008, the DB is still in 2000 compatibility mode.
**UPDATE**
I've been looking into [Change Tracking](https://msdn.microsoft.com/en-us/library/bb964713.aspx), which works pretty well at a table level. Unfortunately it doesn't work too well with the select above.
I even tried creating a schema-bound view based on that select and enabling change tracking on the view, but change tracking apparently cannot be enabled on views.
I can make it work by using something like the below:
WITH ChangedVisits AS
(
    SELECT DISTINCT OV.Id
    FROM dbo.Visit OV
    INNER JOIN Employee E ON OV.VisitorId = E.Id
    LEFT JOIN CHANGETABLE(CHANGES Visit, @LastSyncVersion) AS VCT ON OV.Id = VCT.Id
    LEFT JOIN CHANGETABLE(CHANGES Employee, @LastSyncVersion) AS ECT ON OV.VisitorId = ECT.Id
    WHERE VCT.Id IS NOT NULL
       OR ECT.Id IS NOT NULL
)
SELECT Id,
       CAST((
           SELECT V.Id AS [@Id],
                  V.VisitDate,
                  Address,
                  (
                      SELECT E.Id,
                             E.Name,
                             E.Telephone
                      FROM dbo.Employee E
                      WHERE E.Id = V.VisitorId
                      FOR XML PATH(''), TYPE, ELEMENTS
                  ) Visitor
           FROM dbo.Visit V
           WHERE V.Id = CV.Id
           FOR XML PATH ('Visit'), ELEMENTS
       ) AS XML) AS Data
FROM ChangedVisits CV
But the drawback of that is the need to join on each of the tables and an associated CHANGETABLE object in order to work out if something has changed.
Obsidian Phoenix
(443 rep)
Feb 9, 2015, 09:52 AM
• Last activity: Jun 29, 2025, 10:05 AM
2
votes
1
answers
278
views
Change Tracking and Primary keys
I have a SQL Server database with Change Tracking enabled on a handful of tables, and it's used by another application in the cloud to sync data between both databases.
When an existing customer record in SQL Server needs to be synced to the cloud for the first time, the order in which data is synced matters: it would not make sense to sync the address data before its pre-existing customer record exists in the cloud. So I've written code to sync data for existing customers in a specific order so as to not cause sync errors at the destination cloud data app.
I don't know which columns within the eight tables need to be synced, so I've just been updating one column, the primary key, and upon checking the system data captured by Change Tracking, I've noticed that ALL columns within a table are showing as "updated".
This is good because it has saved me a lot of scripting; I mean, some of the tables are wide.
I assumed that this is the case because a clustered-index PK *is* the data, so updating the PK to the same value would cause all columns to update too.
Am I right to assume this?
Interestingly, if I update the value of a PK that is NOT clustered, it has the same effect, even though the table does have a clustered index that is not the PK.
I was expecting the PK to be the only column marked as updated, but I am not complaining; I just need to verify that updating the PK is a shortcut to scripting updates for all columns.
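One way to verify this, sketched below with hypothetical names (dbo.Customer with an Email column) and assuming the table was enabled with TRACK_COLUMNS_UPDATED = ON, is to test the change mask directly:

-- Returns 1 if the Email column is flagged as changed for each tracked update
SELECT CT.Id,
       CHANGE_TRACKING_IS_COLUMN_IN_MASK(
           COLUMNPROPERTY(OBJECT_ID('dbo.Customer'), 'Email', 'ColumnId'),
           CT.SYS_CHANGE_COLUMNS) AS EmailFlagged
FROM CHANGETABLE(CHANGES dbo.Customer, 0) AS CT
WHERE CT.SYS_CHANGE_OPERATION = 'U';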
MindDData
(123 rep)
Jun 18, 2025, 06:43 AM
• Last activity: Jun 18, 2025, 08:34 AM
0
votes
1
answers
306
views
Track MySQL Procedure Changes in MySQL
We want to keep track of changes made to MySQL procedures only. Recently, some of our procedures seem to have been overwritten with previous versions, and we were unable to track who did this. We were able to get the thread-id from mysqlbinlog but were not able to determine which user-id was using that thread-id. Is there any way we can do the following:
- **Track the user-id behind a mysqlbinlog thread-id**
- **Keep track of every CREATE PROCEDURE / DROP PROCEDURE**
We are using MySQL 5.7.20 Community Edition on Windows 10 & CentOS 7, and we are using row-level binary logging.
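As a hedged stopgap for the second point (assuming the overhead is acceptable), the general query log records every statement together with its connection, so CREATE/DROP PROCEDURE statements can be traced to a user_host:

SET GLOBAL log_output  = 'TABLE';  -- write the general log to mysql.general_log
SET GLOBAL general_log = 'ON';

SELECT event_time, user_host, thread_id, argument
FROM mysql.general_log
WHERE command_type = 'Query'
  AND argument LIKE '%PROCEDURE%';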
Any help is greatly appreciated.
Nawaz Sohail
(1480 rep)
Mar 6, 2018, 05:32 PM
• Last activity: May 5, 2025, 03:04 AM
1
votes
1
answers
434
views
Validating SQL Server Change Tracking Synchronization
I have some tables that I need to synchronize from one SQL Server database to another. I'm using SQL Server's "Change Tracking" feature. My program uses Change Tracking to synchronize the latest changes and then checks to verify that the changes were synchronized correctly.
The way that it goes about this is:
1. It fetches the current change tracking version at the source together with some statistics about the source table that will later be used to verify that things synched correctly. The query looks like:
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
WITH T AS (
    SELECT [NumericValue] AS [NumericValue]
    FROM [SourceTable]
)
SELECT
    AVG(CONVERT(FLOAT, [NumericValue])) AS Avg,  -- FLOAT chosen for illustration
    SUM(CONVERT(FLOAT, [NumericValue])) AS Sum,
    COUNT(*) AS Count,
    CHANGE_TRACKING_CURRENT_VERSION() AS CTVersion
FROM T;
COMMIT TRANSACTION;
2. It fetches all changes between the last version it synchronized to and the source version fetched in step 1. The query to fetch the changes in the desired version range looks like:
SELECT SYS_CHANGE_OPERATION, SYS_CHANGE_VERSION, [SYSKEY_PrimaryKey], [PrimaryKey], [NumericValue]
FROM (
SELECT
T.*,
CT.SYS_CHANGE_OPERATION,
CASE WHEN CT.SYS_CHANGE_OPERATION = 'I' THEN CT.SYS_CHANGE_CREATION_VERSION ELSE CT.SYS_CHANGE_VERSION END AS SYS_CHANGE_VERSION,
[CT].[PrimaryKey] AS [SYSKEY_PrimaryKey]
FROM CHANGETABLE(CHANGES [SourceTable], @startVersion) AS CT
LEFT JOIN [SourceTable] AS T ON T.[PrimaryKey] = CT.[PrimaryKey]
WHERE CT.SYS_CHANGE_OPERATION = 'D'
OR (T.[PrimaryKey] = CT.[PrimaryKey])
AND
CASE
WHEN CT.SYS_CHANGE_OPERATION = 'I' THEN
CT.SYS_CHANGE_CREATION_VERSION
ELSE CT.SYS_CHANGE_VERSION
END <= @endVersion
) AS [SourceTable]
3. It gets the same statistics from the destination table that it did in step 1. The query looks like:
SELECT
    AVG(CONVERT(FLOAT, [NumericValue])) AS Avg,  -- FLOAT chosen for illustration
    SUM(CONVERT(FLOAT, [NumericValue])) AS Sum,
    COUNT(*) AS Count
FROM [DestinationTable]
4. It compares the statistics from steps 1 and 3. If they match, all is good; if not, it knows there was a problem with the synchronization.
The problem I'm having is that it is intermittently detecting that the values don't match on tables that experience very rapid updates. Investigating, I have found no evidence of anything actually going wrong in the synchronization process, but the destination seems to lag slightly behind where it needs to be to match the source. When the source table updates slow down, the destination always catches up and matches the source.

It seems at this point that the problem stems from the version fetched in step 1 not actually matching the statistics that are fetched at the same time. This seems to indicate that the increment of the change tracking version is not performed atomically together with the actual changes to the table.

Unfortunately I haven't found anything in the documentation that explicitly confirms or denies this theory, which makes it difficult to know whether there's something else I need to be looking for. Additionally, if it turns out to be true that the change tracking version and the table state aren't guaranteed to be consistent, then I don't know how it would be possible for me to accurately perform validation after synchronization.
Any help clarifying whether it's expected for the version number to lag behind the actual state of the table, or suggestions of techniques I could use to get around this problem would be much appreciated.
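For what it's worth, a mitigation sketch (not a verified fix for this case): read the version and the statistics inside one SNAPSHOT transaction, so both observe the same point in time.

ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;  -- one-time setup

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT CHANGE_TRACKING_CURRENT_VERSION() AS CTVersion;
    SELECT COUNT(*) AS Cnt,
           SUM(CONVERT(FLOAT, [NumericValue])) AS Sum  -- FLOAT chosen for illustration
    FROM [SourceTable];  -- sees the same point in time as the version above
COMMIT TRANSACTION;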
Davide De Simone
(11 rep)
Aug 7, 2020, 05:26 AM
• Last activity: Mar 20, 2025, 12:07 AM
2
votes
2
answers
599
views
Change Tracking SYS_CHANGE_COLUMNS on Update operation
I would like to get an understanding of why on an update operation in the change tracking table, the SYS_CHANGE_COLUMNS can be NULL. I would expect the SYS_CHANGE_COLUMNS will be populated.
In the example below, the first and second records' updates have a NULL value in SYS_CHANGE_COLUMNS, and the first record's SYS_CHANGE_VERSION matches its SYS_CHANGE_CREATION_VERSION.
The third record contains a value in SYS_CHANGE_COLUMNS, but its SYS_CHANGE_CREATION_VERSION is NULL.
We use SYS_CHANGE_COLUMNS as a filter for updates to check if it is populated and it looks like we may be missing updates in this case. Any feedback on this is appreciated.
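Given that behaviour, a defensive filter along these lines may help (a sketch with placeholder names; a NULL mask is treated as "any column may have changed"):

SELECT CT.Id
FROM CHANGETABLE(CHANGES dbo.MyTable, @lastSyncVersion) AS CT  -- @lastSyncVersion: placeholder
WHERE CT.SYS_CHANGE_OPERATION = 'U'
  AND (CT.SYS_CHANGE_COLUMNS IS NULL  -- NULL mask: assume the column changed
       OR CHANGE_TRACKING_IS_COLUMN_IN_MASK(
              COLUMNPROPERTY(OBJECT_ID('dbo.MyTable'), 'MyColumn', 'ColumnId'),
              CT.SYS_CHANGE_COLUMNS) = 1);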
cenko
(31 rep)
Nov 20, 2022, 10:37 PM
• Last activity: Dec 12, 2024, 04:03 PM
0
votes
3
answers
191
views
A good way to get "transaction time" or similar on SQL Server for change tracking
I would like to add version information to our DB - all it will be used for is to check whether the user's version of a row has changed since they read it. I am thinking along the lines of adding a column called `_VERSION` to the two dozen tables in question. As part of a batch `UPDATE` statement, I'd have something to the effect of `SET ..._VERSION = something_or_other`. I am agnostic on what data this column holds.

One solution I've seen is to use a datetime column and then get the transaction start time from `sys.dm_tran_active_transactions`, selecting the row for `CURRENT_TRANSACTION_ID()`. For our needs, the 1/300th-of-a-second accuracy will be more than enough. I understand that using `CURRENT_TIMESTAMP` is not as useful in this situation because that will change as the batch proceeds.

If this is canonical I'll do that. It's easy to implement. But is this the *best* way for really simple uses? Is there some other value in the `sys.dm_tran...` views that provides the same outcome but may be easier to store and `WHERE` than a datetime?
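For illustration, a minimal sketch of that approach (table and column names are hypothetical; CURRENT_TRANSACTION_ID() requires SQL Server 2016 or later):

BEGIN TRANSACTION;
    UPDATE dbo.SomeTable
    SET SomeColumn = 'new value',
        _VERSION = (SELECT transaction_begin_time
                    FROM sys.dm_tran_active_transactions
                    WHERE transaction_id = CURRENT_TRANSACTION_ID())
    WHERE Id = 42;  -- every row touched in this transaction gets the same timestamp
COMMIT TRANSACTION;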
**NOTE:** yes, I am aware of temporal tables and MS's change tracking and have been told not to use them.
Maury Markowitz
(230 rep)
Nov 29, 2024, 05:57 PM
• Last activity: Nov 30, 2024, 10:20 PM
4
votes
2
answers
575
views
COMMIT_TABLE Waits Crashing System
We run a lot of change tracking in production. Nearly all tables are change tracked. This is for purposes of extracting SQL data to Databricks. Other than the CPU overhead, change tracking rarely causes headaches. But when it does, it’s bad.
The problem manifests with spids waiting on COMMIT_TABLE indefinitely. They just hang and hang and cannot be killed.
Full backups of a database “locked up” on these waits cannot occur while the server is in this state. The only remedy we’ve been able to come up with is to fail the instance over to a secondary replica in the Availability Group and then reset the replica that had the COMMIT_TABLE storm.
The only answer I’ve found online for alleviating this wait type is to attempt to clean up the commit table manually with the “sys.sp_flush_commit_table_on_demand” proc. Guide here - Change Tracking Cleanup – Going Beyond Automatic – SirSQL.net . This has proven useless for the kinds of backups we’re experiencing. I’ve even tried running the proc in a job overnight every minute to constantly flush out the commit table and we’re still experiencing this problem.
Has anyone else run into this? Is there a trick to using the flush commit table on demand proc? Is it possible we’re just running way too much change tracking and there’s no work around here? Thanks!
BBaggins
(81 rep)
Jun 5, 2024, 05:18 PM
• Last activity: Jun 6, 2024, 09:05 AM
4
votes
1
answers
2190
views
Automatically detect table name in MSSQL Server database using stored function
I'm working on implementing Change Data Capture (CDC) in a SQL Server database, and I set up a single table to test changes. In the past, changes have been made manually, but I'd like to create a way to track updates and deletions automatically through the built-in CDC process. In many cases, these changes are spread across dozens of different tables, so the ideal situation would be to create a view that collates all of the changed data.
CDC works at the table level and doesn't track the user who makes changes. I found a way to add user data to the CDC tracking tables by creating a new user_name column and adding SUSER_SNAME() as the "Default Value or Binding" column property, using the SQL command `ALTER TABLE cdc.dbo_MyTable_CT ADD UserName nvarchar(50) NULL DEFAULT(SUSER_SNAME())` from a separate Stack Overflow question (https://stackoverflow.com/questions/869300/sql-server-2008-change-data-capture-who-made-the-change).
I'm trying to find a similar stored function to track the table name from this list - https://msdn.microsoft.com/en-us/library/ms187812.aspx .
So far the only stored function I've successfully implemented from the list is `db_name()`, which automatically returns the database name. I've tried `schema_name` and `object_name`, but those automatically revert to `N'schema_name()'` or `N'object_name()'` in the design view. In the table view, the changed rows return `(schema_name())` or `(object_name())`.
Does anyone know how to automatically populate a column with the table name of the table from which changes were made, using some sort of stored function?
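One hedged idea, since each CDC capture table maps to exactly one source table anyway: skip the function entirely and add a plain literal default per capture table (names below are illustrative):

ALTER TABLE cdc.dbo_MyTable_CT
    ADD TableName sysname NOT NULL
        CONSTRAINT DF_dbo_MyTable_CT_TableName DEFAULT ('dbo.MyTable');  -- literal per capture table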
Beau Smith
(51 rep)
Feb 17, 2016, 05:28 PM
• Last activity: Mar 19, 2024, 06:05 AM
32
votes
4
answers
58610
views
Track all modifications to a PostgreSQL table
We have a table which has only roughly 500 rows, but it is very important for us.
I want to see all changes which happen to this table. The changes should get tracked with a timestamp.
I don't want the tracking to happen in the application code, since I want to track changes which happen via the `psql` shell, too.
I am happy with a PostgreSQL specific solution, since I don't use a different DB in this context.
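For reference, a minimal sketch of the usual trigger-based approach, assuming a table named important_table and a generic jsonb audit log (all names illustrative; requires PostgreSQL 9.5+ for to_jsonb):

CREATE TABLE important_table_audit (
    audit_id   bigserial PRIMARY KEY,
    changed_at timestamptz NOT NULL DEFAULT now(),
    changed_by text        NOT NULL DEFAULT current_user,
    operation  text        NOT NULL,
    old_row    jsonb,
    new_row    jsonb
);

CREATE OR REPLACE FUNCTION important_table_audit_fn() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO important_table_audit (operation, new_row)
        VALUES (TG_OP, to_jsonb(NEW));
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO important_table_audit (operation, old_row, new_row)
        VALUES (TG_OP, to_jsonb(OLD), to_jsonb(NEW));
    ELSE  -- DELETE
        INSERT INTO important_table_audit (operation, old_row)
        VALUES (TG_OP, to_jsonb(OLD));
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER important_table_audit_trg
AFTER INSERT OR UPDATE OR DELETE ON important_table
FOR EACH ROW EXECUTE PROCEDURE important_table_audit_fn();

This catches changes from any client, including a psql shell, since it runs inside the server.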
guettli
(1591 rep)
Apr 2, 2019, 09:36 AM
• Last activity: Feb 12, 2024, 03:49 PM
0
votes
2
answers
689
views
Tool for monitoring changes to data in a database
I regularly back up my MariaDB databases using mariadb-dump (mariadb's mysqldump).
However, I want to ensure data integrity in either of these cases:
- a vulnerability in an application using a database has been exploited and unauthorized changes to the data have been made (e.g. an attacker forging or deleting data like changing the password of a random user account or deleting a user account)
- an application error or bug has affected the data (e.g. leading to data corruption or data loss, like a query that affects rows that shouldn't have been affected)
What I mean by that is that I want to make sure that if either of the aforementioned cases happens, I can see the history of changes to the data in an easy-to-follow way (e.g. using a diff tool). So, I probably need something like event logging for DELETE, INSERT, and UPDATE actions that would list each such data manipulation query in a chronologically sorted list (with event timestamps) and I can compare it to the last known revision (to see only changes that I haven't reviewed before).
I am searching for a database data-change tracking tool that would let me see if an unauthorized change to the data has occurred (automatically enforcing any security rules is not needed, only the ability to inspect manually, in an easy way, after an event has happened).
I know the best thing to do is to take preventive measures (proactively) but I also need an additional measure like the one I described to minimize the consequences.
So far, the best I could do was to compare two sequential `.sql` mysqldumps using a diff tool. However, this approach is not easy to follow as it requires moving forward and backward inside files, and I also can't see when a change happened because there are no columns in the databases storing the time of row updates.
kataba
(33 rep)
Feb 2, 2024, 05:34 PM
• Last activity: Feb 3, 2024, 12:01 AM
1
votes
0
answers
75
views
Change Tracking sometimes returns new inserts without primary key values
We utilize the Change Tracking feature to catch inserts and updates for a couple of tables. Queries are like this:

SELECT P.*
     , CT.SYS_CHANGE_OPERATION
     , TC.commit_time
FROM schema.tablename AS P
RIGHT OUTER JOIN CHANGETABLE(CHANGES schema.tablename, 438750) AS CT
    ON CT.ID = P.ID
LEFT OUTER JOIN sys.dm_tran_commit_table TC
    ON CT.sys_change_version = TC.commit_ts
WHERE SYS_CHANGE_OPERATION IN ('I','U') /* INSERTs and UPDATEs */
  AND SYS_CHANGE_VERSION <= 439766 /* = CHANGE_TRACKING_CURRENT_VERSION */
From time to time, the results may contain new inserts which don't have primary keys. Below is an example of one case. The first row is a header, and here the first column (ID) is the primary key. The first four rows are OK, but after that we have these odd cases which are listed as type SYS_CHANGE_OPERATION = "I" (insert) yet all values are missing, including the primary key:
Has anyone run into similar issues using Change Tracking? What could be the reason for this kind of behaviour?
Markku
(11 rep)
Jan 31, 2024, 01:27 PM
1
votes
2
answers
6461
views
PostgreSQL Trigger to keep track of table changes
I am trying to create a trigger (Postgres 9.6) to track changes made to a table. This is my approach:
CREATE OR REPLACE FUNCTION taxon_history() RETURNS trigger AS
$BODY$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO history.taxon(operacao, "data", tecnico, original_oid, taxon)
        VALUES ('DELETE', current_timestamp, current_user, old.oid, old.taxon);
        RETURN old;
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO history.taxon(operacao, "data", tecnico, original_oid, taxon)
        VALUES ('UPDATE', current_timestamp, current_user, old.oid, old.taxon);
        RETURN old;
    ELSIF TG_OP = 'INSERT' THEN
        INSERT INTO history.taxon(operacao, "data", tecnico, original_oid, taxon)
        VALUES ('INSERT', current_timestamp, current_user, new.oid, new.taxon);
        RETURN new;  -- OLD is not assigned for INSERT triggers
    END IF;
    RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER history_taxon
AFTER INSERT OR UPDATE OR DELETE ON taxon
FOR EACH ROW EXECUTE PROCEDURE taxon_history();
However, when something changes in the `(public).taxon` table, no record is added to the `history.taxon` table. I also don't get any error message, so I am in the dark on why nothing is happening. What am I doing wrong?
Lisdengard
(113 rep)
Jun 22, 2019, 05:09 AM
• Last activity: Nov 23, 2023, 08:36 AM
7
votes
2
answers
9763
views
Persistent Severity 016 alerts: "Change Tracking autocleanup is blocked on side table"
We had a large database running on a SQL Server 2014 Enterprise instance (evaluation license). Due to cost constraints, we had to move it to a new SQL Server 2017 Standard instance (per-core license; just updated to CU6). **EDIT: The database is in SQL Server 2014 (120) compatibility, and was transferred by restoring full and log backups.**
Everything is working fine except for change tracking. We use change tracking to locate recent changes and keep a denormalized table up to date; the denorm table is used for rapid filling of a grid for the web application.
Change tracking is working, but apparently autocleanup is not. We are getting Severity 016 alerts:
> Change Tracking autocleanup is blocked on side table of *tablename*. If the failure persists, check if the table *tablename* is blocked by any process.
These come about every half hour for each of the tables that is complaining (about 4 different tables).
We've tried the manual cleanup described [here](https://github.com/Microsoft/tigertoolbox/blob/master/change-tracking/ChangeTrackingCleanup.sql) ... it too presents "side table is blocked" errors.
So here are my choices:
1. Turn off this specific alert and hope nothing blows up.
2. Turn off autocleanup; schedule a manual cleanup during a daily maintenance window.
3. Not sure what else I can do.
Besides "what should I do to stop this?", my side question is "why is this happening with 2017 Std when it didn't happen with 2014 Enterprise?"
Ross Presser
(282 rep)
Apr 23, 2018, 06:49 PM
• Last activity: Oct 5, 2023, 04:02 PM
0
votes
1
answers
91
views
Should I worry about a serial update execution plan?
For a mature .Net project I support we're currently exploring options to remove Entity Framework Core from the project while still maintaining some of the EF functionality we rely on.
At the top of this list is our ability to keep track of exactly which values have changed as a result of an update request that an outside system sends to us - we have no control over that system, and it's not unusual for us to receive update requests that contain identical data several times.
We currently use EF's change tracker to keep a view on whether or not the update we're processing really makes any changes to a set of specific columns so that we know whether or not to inform our users of these changes.
Looking into how we might achieve this without EF and its overhead led me to SQL Server's [OUTPUT Clause](https://learn.microsoft.com/en-us/sql/t-sql/queries/output-clause-transact-sql?view=sql-server-ver16), which would theoretically allow us to perform an update to a table and return a view of the prior and updated state of the key columns we use for notification triggers.
So far so good. However, there is a warning under the [Parallelism](https://learn.microsoft.com/en-us/sql/t-sql/queries/output-clause-transact-sql?view=sql-server-ver16) section of that document that says that "[an] `OUTPUT` clause that returns results to the client, or table variable, will always use a serial plan." I'm afraid that my knowledge of SQL is not strong enough at this time to be able to tell if this is likely to be a problem or not.
Should this concern me from a performance or reliability standpoint? Every update that we perform is keyed on a set of columns that form (in almost every case) a compound unique key, so even if the query portion of the update is run serially would it be a noticeable impact?
Take as an example the following schema:
CREATE TABLE user_profile
(
    [id] INT IDENTITY (1,1) NOT NULL PRIMARY KEY,
    [username] NVARCHAR(100) NOT NULL,
    [tenancy] NVARCHAR(20) NOT NULL,
    [city] NVARCHAR(50) NULL,
    [state] NVARCHAR(50) NULL,
    [first_name] NVARCHAR(50) NULL,
    [last_name] NVARCHAR(50) NULL,
    [external_system_access_key] NVARCHAR(200) NULL,
    CONSTRAINT [UX_user] UNIQUE ([username], [tenancy])
)
In this example, a user manages their own `city`, `state`, `first_name` and `last_name` values, but an external system manages `external_system_access_key` through a request to our service like:

POST /{tenancyId}/user/{username}/profile/external
{
    "accessKey": "1224567890"
}
If we receive that same update several times without the value of `accessKey` changing, we want to know if the value differs pre- and post-update execution, so we know whether or not to inform the user that the key has changed. Each request would result in an update like this:
DECLARE @accessKey NVARCHAR(200) = '1234567890';
DECLARE @username NVARCHAR(100) = 'username';
DECLARE @tenancy NVARCHAR(20) = 'tenancy';

UPDATE [user_profile]
SET [external_system_access_key] = @accessKey
OUTPUT INSERTED.id,
       DELETED.[external_system_access_key] AS OLD_external_system_access_key,
       INSERTED.[external_system_access_key] AS NEW_external_system_access_key
WHERE [username] = @username AND [tenancy] = @tenancy;
In the case where the request gave us a new value for that column, there would be different values in each of the `OLD_` and `NEW_` output columns, and if it's a request we've had previously then they will match, allowing us to evaluate any changes after the update is done.
But SQL Server's documentation says that this will always result in a serial execution plan. What I need to know is: is this a problem? Any assistance I could get understanding this and its potential impacts would be greatly appreciated.
Generally, each update will only hit one row at a time, as its WHERE clause uses a compound unique key. We're just trying to avoid the thing that EF does and requires: querying the data out first. Instead, since SQL Server already provides a mechanism to know what the state before and after the update was, we would like to get that data back afterwards if it's not going to cause performance issues.
The example I used above is quite simplistic compared to our actual cases, where we'll be updating large numbers of columns in each statement - all in the one table, but multiple columns in each. It would be prohibitively complex to try and cover every possible permutation of what might be updated in each request.
Adrian
(103 rep)
Sep 4, 2023, 07:24 AM
• Last activity: Sep 8, 2023, 12:19 AM
0
votes
1
answers
201
views
Can I do transaction for dynamic sql which contains DISABLE change tracking / TRUNCATE / PARTITION SWITCH
I would like to create a transaction which will:
1. DISABLE change tracking IF exists.
2. TRUNCATE partition for table.
3. SWITCH PARTITION from different table to main table.
4. ENABLE change tracking IF was disabled before.
I would like to create a script with parameters, so I will be using dynamic SQL. The idea:
/* OPS parameters */
DECLARE @schemaName sysname = 'dbo';
DECLARE @tableName sysname = 'TABLE';
DECLARE @partition INT = 90;

/* DEV parameters */
DECLARE @tableNameSRP sysname = CONCAT(@tableName, '_SRP');
DECLARE @tableNameWithSchema sysname = CONCAT(QUOTENAME(@schemaName), '.', QUOTENAME(@tableName));
DECLARE @tableNameWithSchemaSRP sysname = CONCAT(QUOTENAME(@schemaName), '.', QUOTENAME(@tableNameSRP));
DECLARE @isCtReEnabled BIT = 0;
DECLARE @isDebug BIT = 1;
DECLARE @statement NVARCHAR(MAX);      -- holds the dynamic SQL built below
DECLARE @errorMessage NVARCHAR(4000);  -- read in the CATCH block

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SET XACT_ABORT ON;

BEGIN TRAN;
BEGIN TRY
    IF EXISTS (
        SELECT 1
        FROM sys.change_tracking_tables
        WHERE object_id = OBJECT_ID(@tableNameWithSchema)
    )
    BEGIN
        SET @statement = N'ALTER TABLE ' + @tableNameWithSchema + N' DISABLE CHANGE_TRACKING;';
        IF (@isDebug = 0)
        BEGIN
            EXEC sp_executesql @stmt = @statement;
        END;
        IF (@isDebug = 1)
        BEGIN
            RAISERROR('[INFO] SQL: %s', 0, 1, @statement) WITH NOWAIT;
        END;
        SET @isCtReEnabled = 1;
    END;

    SET @statement
        = N'TRUNCATE TABLE ' + @tableNameWithSchema + N' WITH (PARTITIONS (' + CAST(@partition AS NVARCHAR(5)) + N'))
    ALTER TABLE ' + @tableNameWithSchemaSRP + N' SWITCH PARTITION ' + CAST(@partition AS NVARCHAR(5)) + N' TO ' + @tableNameWithSchema + N' PARTITION '
        + CAST(@partition AS NVARCHAR(5));
    IF (@isDebug = 0)
    BEGIN
        EXEC sp_executesql @stmt = @statement;
    END;
    IF (@isDebug = 1)
    BEGIN
        RAISERROR('[INFO] SQL: %s', 0, 1, @statement) WITH NOWAIT;
    END;

    IF (@isCtReEnabled = 1)
    BEGIN
        SET @statement = N'ALTER TABLE ' + @tableNameWithSchema + N' ENABLE CHANGE_TRACKING;';
        IF (@isDebug = 0)
        BEGIN
            EXEC sp_executesql @stmt = @statement;
        END;
        IF (@isDebug = 1)
        BEGIN
            RAISERROR('[INFO] SQL: %s', 0, 1, @statement) WITH NOWAIT;
        END;
    END;

    COMMIT;
END TRY
BEGIN CATCH
    SET @errorMessage = ERROR_MESSAGE();
    RAISERROR('ERROR MESSAGE: %s', 0, 1, @errorMessage) WITH NOWAIT;
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH;
My questions:
1. Will operations like ENABLE/DISABLE CHANGE_TRACKING, TRUNCATE, and PARTITION SWITCH work in one transaction? (I want to execute all or nothing.)
2. Is dynamic SQL a problem in this case?
adam.g
(465 rep)
Aug 28, 2023, 03:28 PM
• Last activity: Aug 28, 2023, 08:35 PM
0
votes
1
answers
739
views
What permission have to set for SQL user for configuration of change tracking in Database?
I am building an application which has to have an option to configure (disable/enable) change tracking on tables in a specific database. I am connecting to the database via a SQL user. Which database permissions does the SQL user need in order to disable/enable change tracking on tables, and to run the query below?
SELECT DISTINCT
sct1.name AS CT_schema
, sot1.name AS CT_table
, ps1.row_count AS CT_rows
, sct2.name AS tracked_schema
, sot2.name AS tracked_name
, CHANGE_TRACKING_MIN_VALID_VERSION(sot2.object_id) AS min_valid_version
, itt.create_date AS change_tracking_table_creation_date
, CAST(ps1.reserved_page_count * 8. / 1024 AS BIGINT) AS CT_reserved_MB
, CAST(ps2.reserved_page_count * 8. / 1024 AS BIGINT) AS tracked_base_table_MB
, ps2.row_count AS tracked_rows
FROM sys.internal_tables it
JOIN sys.objects sot1
ON it.object_id = sot1.object_id
JOIN sys.schemas AS sct1
ON sot1.schema_id = sct1.schema_id
JOIN sys.dm_db_partition_stats ps1
ON it.object_id = ps1.object_id
AND ps1.index_id IN (0, 1)
LEFT JOIN sys.objects sot2
ON it.parent_object_id = sot2.object_id
LEFT JOIN sys.schemas AS sct2
ON sot2.schema_id = sct2.schema_id
LEFT JOIN sys.dm_db_partition_stats ps2
ON sot2.object_id = ps2.object_id
AND ps2.index_id IN (0, 1)
INNER JOIN sys.internal_tables itt
ON itt.name = sot1.name
WHERE it.internal_type IN (209, 210);
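A hedged starting point, based on the documented requirements (ALTER on each table for ENABLE/DISABLE CHANGE_TRACKING, and VIEW DATABASE STATE for sys.dm_db_partition_stats; app_user and dbo.MyTable are placeholders):

GRANT VIEW DATABASE STATE TO app_user;           -- needed for the DMVs in the query above
GRANT ALTER ON OBJECT::dbo.MyTable TO app_user;  -- repeat per table to be (un)tracked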
adam.g
(465 rep)
Jan 25, 2023, 02:11 PM
• Last activity: Jan 26, 2023, 08:40 PM
1
votes
2
answers
2974
views
Is there a way to get an event when there's a change in a table?
Using Azure Sql Server, and in C# if that's relevant, is there a way to get an event when data in specific table changes?
In one table we need an event if any column changes in a row.
In a 2nd table we need an event only when either of 2 columns change (although any change would be ok).
And we need to know which row.
Is it possible to do this? If so, how?
**Update:** We need a call into our C# code for this event. A database trigger where the database can do something doesn't help; our program needs to take action on a change.
thanks - dave
David Thielen
(189 rep)
Sep 21, 2014, 03:33 PM
• Last activity: Jan 12, 2023, 11:15 AM
2
votes
1
answers
634
views
How to "update" change_tracking setting to enable TRACK_COLUMNS_UPDATED
I have a table where I have enabled change tracking without TRACK_COLUMNS_UPDATED. I know this because if I go through SSMS's Generate scripts, I get the line:
ALTER TABLE [dbo].[dt] ENABLE CHANGE_TRACKING WITH(TRACK_COLUMNS_UPDATED = OFF)
I want to enable TRACK_COLUMNS_UPDATED. If I run:
ALTER TABLE dt
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON)
I get the error:

> Msg 4996, Level 16, State 1, Line 19
> Change tracking is already enabled for table 'dt'.

Do I have to disable change_tracking first? If so, is this explicitly documented somewhere?
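As far as I can tell there is no in-place switch, so the usual route is the sketch below; note (and verify first) that disabling discards the table's accumulated tracking history, so consumers must re-initialize:

ALTER TABLE dbo.dt DISABLE CHANGE_TRACKING;
ALTER TABLE dbo.dt ENABLE CHANGE_TRACKING
    WITH (TRACK_COLUMNS_UPDATED = ON);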
George Menoutis
(447 rep)
Oct 17, 2019, 07:56 AM
• Last activity: Jan 2, 2023, 02:48 PM
3
votes
1
answers
719
views
SYS_CHANGE_OPERATION not showing correct value when row is Updated in CHANGE TRACKING
When a row is updated, the SYS_CHANGE_OPERATION column is still showing I (Insert) and not U (Update) when querying `CHANGETABLE`.
Here's what I'm doing.
USE master
GO
CREATE DATABASE TestCT ;
GO
SELECT compatibility_level , @@VERSION
FROM sys.databases WHERE name = 'TestCT';
> compatibility_level = 160
> @@VERSION = Microsoft SQL Server 2022 (RTM) - 16.0.1000.6 (X64) Oct 8 2022 05:58:25 Copyright (C) 2022 Microsoft Corporation Developer Edition (64-bit) on Windows 10 Pro 10.0 (Build 19045: ) (Hypervisor)
ALTER DATABASE TestCT
SET CHANGE_TRACKING = ON
(CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON) ;
USE TestCT;
GO
DROP TABLE IF EXISTS dbo.TestCT
CREATE TABLE dbo.TestCT(
Id INT IDENTITY (1,1) CONSTRAINT PK__TestCT PRIMARY KEY ,
Col1 VARCHAR(100)
) ;
GO
ALTER TABLE dbo.TestCT
ENABLE CHANGE_TRACKING
WITH (TRACK_COLUMNS_UPDATED = ON) ;
GO
INSERT INTO dbo.TestCT (Col1)
VALUES ('INSERT1') ;
GO
SELECT *
FROM CHANGETABLE (CHANGES dbo.TestCT,0) as CT
LEFT JOIN dbo.TestCT EM ON CT.Id = EM.Id
ORDER BY SYS_CHANGE_VERSION ;
GO
`SYS_CHANGE_OPERATION` here is I (Insert), as expected.
Now if I update the row
UPDATE T
SET Col1 = 'UPDATE'
FROM dbo.TestCT T
WHERE Col1 = 'INSERT1';
GO
`SYS_CHANGE_OPERATION` here is still I (Insert) and not U (Update). As you can see, `SYS_CHANGE_VERSION` has incremented by 1 as expected.
What am I doing wrong here?
Geezer
(513 rep)
Dec 4, 2022, 12:28 PM
• Last activity: Dec 4, 2022, 01:18 PM
Showing page 1 of 20 total questions