
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

3 votes
1 answer
136 views
Is there any reasonable way to Purge Expired Data from a SQL Server Ledger Database
The SQL Server Ledger Database functionality looks very interesting, and I would like to recommend it for several use cases, but I am concerned that there is no option for purging historical data. I understand the logic, because "Purging History" could easily be abused to cover your tracks. That said, purging historical data is a legitimate business requirement, even if, and sometimes especially when, dealing with specific PII data.

Currently, the only option I see is to create a whole new Ledger Database and copy forward all data and structures that need to be retained. The big issue with that solution is that it results in the loss of ALL historical audit/security information, even for data that is not going to be purged. As a result, if a business wanted to use the Ledger tables but purge expired data after 7 years, purging yearly, then it would need to retain 6 copies of previous ledger database backups to access the old audit/security information. Currently, the only GOOD solution I can see is if Microsoft creates a new piece of functionality that allows for the rebuilding of an existing Ledger Database while dropping all history and historical data beyond a specific point (e.g. 5 years). Am I missing something?

---

In short, Temporal Tables are cool. I love the ability to set a HISTORY_RETENTION_PERIOD directly, but they very much lack the security features of LEDGER tables. You can mess with the history, and dropping a column also drops all of its history. Also missing is an Append-Only table option. Basically, Temporal tables can't satisfy the Audit requirements that Ledger Database/Tables can. Mutable and purgeable are two very different things. To summarize, I don't want a mutable history. On updating temporal table history see: https://stackoverflow.com/q/46737825
AnthonyVO (135 rep)
Jul 23, 2025, 08:30 PM • Last activity: Jul 28, 2025, 12:41 AM
2 votes
1 answer
141 views
Does the auto_stats Extended Event misreport the sample percentage from temporal tables or columnstore?
Start up the auto_stats Extended Event, filter out some noise, and prepare yourself to monitor it.
CREATE EVENT SESSION [AutoStatsTest] ON SERVER 
ADD EVENT sqlserver.auto_stats(
    WHERE ([duration]>1 AND [object_id] > 0) ) 
ADD TARGET package0.ring_buffer

ALTER EVENT SESSION [AutoStatsTest] ON SERVER
STATE = START
GO
Find a temporal table. I had a copy of the StackOverflow2010 database to hand, so I just made Votes system versioned.
ALTER TABLE Votes ADD
ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN
    CONSTRAINT DF_InsurancePolicy_ValidFrom DEFAULT SYSUTCDATETIME(),
ValidTo DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN
    CONSTRAINT DF_InsurancePolicy_ValidTo DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999'),
PERIOD FOR SYSTEM_TIME(ValidFrom, ValidTo);
GO

ALTER TABLE Votes
    SET (SYSTEM_VERSIONING = ON);
GO
With your temporal table, do something that will trigger an automatic update of stats. This worked for me:
SELECT TOP (1000) [BountyAmount]
FROM [StackOverflow2010].[dbo].[Votes]
WHERE [PostId] < 100
Given the duration (800 microseconds on my pathetic machine), I knew that this was nonsense. Upon consulting sys.dm_db_stats_properties, I found that the actual percentage was much less than 100.
/*
Save yourself the pain of writing
the stats query
and just use sp_BlitzIndex from GitHub
*/
EXEC sp_BlitzIndex @DatabaseName = 'StackOverflow2010', @TableName = 'Votes'
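For reference, the same check can be made without sp_BlitzIndex by querying sys.dm_db_stats_properties directly (a minimal sketch; the percentage arithmetic is mine, not part of the DMF):

```sql
-- Actual sampled percentage per statistic on dbo.Votes
SELECT  s.[name] AS stats_name,
        sp.last_updated,
        sp.[rows],
        sp.rows_sampled,
        CAST(100.0 * sp.rows_sampled / NULLIF(sp.[rows], 0) AS decimal(5, 2)) AS sample_pct
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.[object_id], s.stats_id) AS sp
WHERE   s.[object_id] = OBJECT_ID(N'dbo.Votes');
```

Comparing sample_pct here against the sample percentage reported by the Extended Event is what exposes the discrepancy.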
In summary, **it appears that the auto_stats Extended Event reports an incorrect sample_percentage for temporal tables**. Is this a SQL Server bug or is it my misunderstanding? If it is my misunderstanding, then where can I read further? Individual Extended Events, as far as I know, are very lacking in documentation. I have only seen this happen with temporal tables. I have found it on both SQL Server 2022 on a real box (the table was multi-terabyte, so I screamed when I saw a 100% sample rate for the primary key's statistic) and on my local 2019 test box. Adding a columnstore index seems to help reproduce this, but I am not 100% sure.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_ColStore
on dbo.Votes
(
    Id,
    PostId
)
I am absolutely sure that there is a real bug here. However, my attempts to reproduce it have given inconsistent results. I suspect that you need a TB-scale table. **Final update:** I give up on explaining this one. The table's statistics say that 90 times more rows than appeared in the actual execution plan (caught live as it was running in production, scanning the nonclustered columnstore index) were used to update the statistics. The number of rows in the actual execution plan is 10,000 times less than the table's row count, and the auto_stats Extended Event says that 100% of the rows in the table were read. It is impossible for all of these to be true at the same time. I suspect that it might actually be the deleted bitmap that causes all of this.
J. Mini (1225 rep)
Mar 7, 2025, 10:24 PM • Last activity: Jul 17, 2025, 05:21 PM
2 votes
1 answer
52 views
Temporal table return UPDATED.ValidFrom duplicates
I'm trying to understand temporal tables in SQL Server better, and how I can recreate an issue we have. Sometimes the time returned from UPDATE is not the same as what is really stored in the table, and the value returned is a duplicate of another thread's. I want to at least be able to recreate and understand these issues better. I have an Azure SQL DB, with a temporal table:
CREATE TABLE TT (
    Id UNIQUEIDENTIFIER PRIMARY KEY CLUSTERED
  , Amount MONEY NOT NULL
  , [ValidFrom] datetime2 GENERATED ALWAYS AS ROW START
  , [ValidTo] datetime2 GENERATED ALWAYS AS ROW END
  , PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
) WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.TTH));
GO

CREATE NONCLUSTERED INDEX IX_TTH_Id ON TT([Id]) WITH(ONLINE = ON)
GO
When my application is called, it runs the following queries:

1. begin tran
2. select * from TT with (updlock, serializable) where Id = '...'
3. some selects, inserts and updates in other tables...
4. update TT set Amount = 123 output INSERTED.ValidFrom where id = '...'
5. commit tran

Often there will be two instances running that both want to update the same row in TT (same Id). What is confusing is that sometimes the ValidFrom returned will not be the same as what is actually written to the table:

1. Thread A: begin tran
2. Thread A: select * from TT with (updlock, serializable) where Id = @x
3. Thread B: begin tran
4. Thread B: select * from TT with (updlock, serializable) where Id = @x (this will wait)
5. Thread A: some selects, inserts and updates in other tables...
6. Thread A: update TT set Amount = 123 output INSERTED.ValidFrom where id = @x
7. Thread A: commit tran
8. Thread B: some selects, inserts and updates in other tables...
9. Thread B: update TT set Amount = 123 output INSERTED.ValidFrom where id = @x
10. Thread B: commit tran

What happens now is that the times returned in step 6 and step 9 are the same. But when I later inspect the history table they are very different (for example, a diff of 2 seconds). How does this happen? The total execution time is less than 100 ms. (I have tried to recreate it locally and against a testing DB we have, but with no luck. It's easy to make cases where the ValidFrom values returned are the same, but then they will also be the same in the history table.)
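One detail that may matter when trying to reproduce this: the period columns record the transaction start time, not the time the UPDATE statement ran, so any delay between BEGIN TRAN and the UPDATE is invisible in the OUTPUT value. A minimal single-session sketch against the TT table above (the WAITFOR is my stand-in for the "other tables" work):

```sql
-- Sketch: ValidFrom reflects the transaction start, not the UPDATE time.
DECLARE @id uniqueidentifier = NEWID();
INSERT INTO TT (Id, Amount) VALUES (@id, 100);

BEGIN TRAN;
WAITFOR DELAY '00:00:02';     -- stands in for "selects/inserts in other tables"
UPDATE TT
SET    Amount = 123
OUTPUT INSERTED.ValidFrom     -- reports a time ~2 s before the UPDATE executed
WHERE  Id = @id;
COMMIT TRAN;
```

With two sessions, each transaction gets its own start time, which is one way the OUTPUT value and the stored value can drift apart.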
viblo (511 rep)
Jul 4, 2025, 11:48 AM • Last activity: Jul 5, 2025, 12:30 PM
2 votes
1 answer
692 views
Aggregate values over time in temporal table
Recently we started using the temporal tables feature in MSSQL. Now our analytics department wants to write a query aggregating data on a per-month basis. Let's say we have this table:
    +----+-------+------------+------------+
    | Id | Value | BeginDate  | EndDate    |
    +----+-------+------------+------------+
    | 1  | 10    | 2019-12-24 | 9999-12-31 |
    | 1  | 20    | 2019-12-05 | 2019-12-24 |
    | 1  | 40    | 2019-11-05 | 2019-12-05 |
    +----+-------+------------+------------+
And now I would like to know the average of Value on a per-month basis. Is that even possible with MSSQL temporal tables? Is it possible to somehow calculate those values automatically (for example avg(5/31 * 40 + 19/31 * 20 + 7/31 * 10) for December) and then return them in the following format:
    +----+-------+-------+------+
    | Id | Avg   | Month | Year |
    +----+-------+-------+------+
    | 1  | 15,16 | 12    | 2019 |
    +----+-------+-------+------+
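For what it's worth, a day-weighted monthly average can be computed by joining a month calendar against the full row history. This is only a sketch, not tested against the real schema: MyTable stands in for the temporal table, and BeginDate/EndDate are assumed to be its period columns.

```sql
-- Day-weighted average of Value per month, clipping each version's
-- validity interval to the month boundaries.
WITH Months AS (
    SELECT CAST('2019-11-01' AS date) AS MonthStart,
           CAST('2019-12-01' AS date) AS MonthEnd
    UNION ALL
    SELECT '2019-12-01', '2020-01-01'
)
SELECT  h.Id,
        SUM(h.[Value] * DATEDIFF(DAY,
                IIF(h.BeginDate > m.MonthStart, h.BeginDate, m.MonthStart),
                IIF(h.EndDate   < m.MonthEnd,   h.EndDate,   m.MonthEnd)))
            * 1.0 / DATEDIFF(DAY, m.MonthStart, m.MonthEnd) AS AvgValue,
        MONTH(m.MonthStart) AS [Month],
        YEAR(m.MonthStart)  AS [Year]
FROM    Months AS m
JOIN    MyTable FOR SYSTEM_TIME ALL AS h
        ON h.BeginDate < m.MonthEnd
       AND h.EndDate   > m.MonthStart
GROUP BY h.Id, m.MonthStart, m.MonthEnd;
```

The Months CTE would normally come from a calendar table rather than being hard-coded.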
Paweł Hemperek (121 rep)
Jan 21, 2020, 12:54 PM • Last activity: Jul 2, 2025, 06:04 AM
2 votes
2 answers
1336 views
What is the correct method to pass range of dates to a SQL Server 2016 temporal table query where you need a set returned for each day?
I have a SQL Server 2016 temporal table query that I'm having a hard time figuring out an efficient way to write. Let's say I have a temporal table called [Users] that each day has the total body of users in a group. Each day users can have a status of ('A', 'B', 'C', 'Y', 'Z'). If I wanted to select the current version of the table I would simply write

SELECT [UserID]
     , [Status] --where the user status would be listed as 'A', 'B', 'C', 'Y', or 'Z'
     , [InsertDate] --record creation date which isn't related to the system-versioning fields
     , [SysStartTime] -- which could be omitted from the simple query
     , [SysEndTime] -- which could be omitted from the simple query
FROM [Users]

If I wanted to look up how the table looked at 6am on October 2nd, 2017, I know I could write:

SELECT [UserID]
     , [Status]
     , [InsertDate]
     , [SysStartTime]
     , [SysEndTime]
FROM [Users]
FOR SYSTEM_TIME AS OF '2017-10-02 06:00:00'

Easy enough, and I could count the number of people with various statuses using a simple SUM function. Where my problem begins is when I want to analyse trends over a longer period. In theory I want a picture of how the table looked on each day of that extended timespan. For example, if I was looking at 20 contiguous days of data with the [Users] table having on average 5000 rows per day, I would hope that my query would return about 100,000 records. My initial thought was to pass a subquery to the FOR ... AS OF clause, but SQL Server does not seem to like that. I have tried a bunch of things, many of which look something like this:

SELECT [UserID]
     , [Status]
     , [InsertDate]
     , [SysStartTime]
     , [SysEndTime]
FROM [Users]
FOR SYSTEM_TIME AS OF (SELECT Fulldate FROM SetOfDates_Table)

Am I missing something obvious about how to pass a set of date values to AS OF? Should I instead be using the SELECT ... FROM ... FOR ... AS OF query as a function, pass a parameter to it, and then UNION the resulting data sets together?
Just for reference, I'm in a data warehousing environment where I'd want to schedule the query to happen in advance rather than needing it to run on demand.
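FOR SYSTEM_TIME AS OF accepts only a scalar (a literal or a variable), which is why the subquery form fails. One workaround that I believe should behave as described is to read the full history once and join it to the dates table on the period columns (a sketch, assuming SetOfDates_Table has a Fulldate column of a date/datetime2 type):

```sql
-- For each date, pick the row versions that were current at that moment.
SELECT  d.Fulldate,
        u.[UserID],
        u.[Status],
        u.[InsertDate]
FROM    SetOfDates_Table AS d
JOIN    [Users] FOR SYSTEM_TIME ALL AS u
        ON u.SysStartTime <= d.Fulldate
       AND u.SysEndTime   >  d.Fulldate;
```

Each date then matches exactly the versions valid at that instant, so 20 days times ~5000 rows should give roughly the 100,000 rows expected.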
MMartinez (23 rep)
Oct 23, 2017, 09:06 PM • Last activity: Dec 27, 2024, 10:57 AM
6 votes
2 answers
1176 views
In SQL Server Temporal Tables start/end date timestamps are identical for rows that are created inside a EF Core transaction
I'm using an **EF Core DbContext transaction** to create/update entities on a **SQL Server temporal table**. The data I'm processing (an array of items) has a key (Year-Sequential, like: 2023-001, 2023-002, etc.) and then a set of attributes that correspond to other columns on the table below. One of these attributes identifies the kind of operation: Create, Update, Cancel. It may happen that I receive a request with many different operations related to a single Year-Sequential key, such as:

2023-001 - Create ()
2023-001 - Update ()
2023-001 - Cancel ()
2023-002 - Create ()
2023-002 - Update ()
2023-002 - Cancel ()
etc.

Since I'm using a temporal table to store this data to be able to track changes to the rows over time, I've coded the processing method as a single DB transaction containing a recursive method that processes subsets of the items. The recursive method works this way:

1. It takes the first occurrence of each Year-Sequential key and processes it.
2. If it finds the same Year-Sequential key another time for another operation, it puts it aside for the next round of processing.
3. When all the "not duplicated key" items have been processed, there's a call to dbContext.SaveChangesAsync().
4. The processed entities are created and have the EntityId value assigned by the database (which I need for the next steps of processing).
5. If the list of "duplicated key" items is not empty, the method recursively calls itself on the duplicates list and starts again from step 1. If instead the duplicated-key items list is empty, the method ends and then there's a call to transaction.CommitAsync() to commit the transaction and save everything to the database.

Everything works, but there's a **problem with the Start/End dates** used for the temporal table period. For each round of the recursive method I correctly get a row created for each single Year-Sequential key.
So, taking the previous data example, on the History table I have 2 old rows for 2023-001 (Create, 1st Update) with the same EntityId and, on the Current table, I have the third (2nd Update) row, i.e. the one related to the last 2023-001 Cancel operation.

#### See the queries screenshot below.

The problem is that if **I make a temporal query on that data** ([MovimentiDati] table), using the FOR SYSTEM_TIME ALL clause, **I get back only the current row! There is no way I can get the old rows too, but they exist!** So, the history is all there but I can't get to it! Instead, if I query the History table ([MovimentiDatiHistory]) I correctly see the old rows with the same EntityId. On the screenshot, you see 5 items with the same issue. For example, take the row with MovimentoId = 399, where you can see:

- on the MovimentiDatiHistory table: creation of the row (1st run of the recursive method and 1st call to dbContext.SaveChangesAsync())
- on the MovimentiDatiHistory table: first update of the row (2nd run of the recursive method and 2nd call to dbContext.SaveChangesAsync())
- on the MovimentiDati (current) table: second (and last) update of the row (3rd run of the recursive method and 3rd call to dbContext.SaveChangesAsync())

But, as highlighted in red, you can also see that **the Start/End date timestamps are identical!** And I think this is the reason why I cannot get the correct result when I query the data from the [MovimentiDati] (current) table with the temporal clause. I'm not sure where the problem is, whether in the EF Core library or in SQL Server, but I would like to get to the bottom of the matter...
Cheshire Cat (193 rep)
Mar 6, 2023, 04:08 PM • Last activity: Sep 25, 2024, 09:36 AM
2 votes
2 answers
318 views
If a trigger runs an update will it ALWAYS have the same timestamp for a temporal table?
Background - Here is an example near to what I am working with:

CREATE TABLE sandboxTesting.TemporalTest
(
    GroupNumber VARCHAR(25) NOT NULL,
    StartEffectiveWhen DATE NOT NULL,
    EndEffectiveWhen DATE NULL,
    ModifiedWhen DATETIME NULL,
    IsReady BIT NOT NULL DEFAULT 0,
    RowValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    RowValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (RowValidFrom, RowValidTo),
    CONSTRAINT PK_TemporalTest PRIMARY KEY CLUSTERED (GroupNumber, StartEffectiveWhen)
) WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = sandboxTesting.TemporalTestHistory))
GO

CREATE TRIGGER sandboxTesting.OnModify ON sandboxTesting.TemporalTest
AFTER UPDATE
AS
BEGIN
    UPDATE temporalTst
    SET temporalTst.IsReady = 0,
        temporalTst.ModifiedWhen = GETDATE()
    FROM sandboxTesting.TemporalTest temporalTst
    JOIN deleted del
        ON del.GroupNumber = temporalTst.GroupNumber
        AND del.StartEffectiveWhen = temporalTst.StartEffectiveWhen
    WHERE
        -- All business columns go here with OR statements in between them.
        -- The idea is that if anything changes except the IsReady flag, then we
        -- set the IsReady back to false. (IsReady has to be set by itself)
        del.EndEffectiveWhen <> temporalTst.EndEffectiveWhen
        OR (del.EndEffectiveWhen IS NULL AND temporalTst.EndEffectiveWhen IS NOT NULL)
        OR (del.EndEffectiveWhen IS NOT NULL AND temporalTst.EndEffectiveWhen IS NULL)
END
GO

-- Insert new test
INSERT INTO [sandboxTesting].[TemporalTest] ([GroupNumber], [StartEffectiveWhen], [EndEffectiveWhen], [ModifiedWhen])
VALUES ('12345', '2024-01-1', NULL, NULL)
GO

-- Set it as ready
UPDATE sandboxTesting.TemporalTest SET IsReady = 1 WHERE GroupNumber = '12345' AND StartEffectiveWhen = '2024-01-1'
GO

-- Change the End date
UPDATE sandboxTesting.TemporalTest SET EndEffectiveWhen = '2024-09-02' WHERE GroupNumber = '12345' AND StartEffectiveWhen = '2024-01-1'
-- Set the new end date as ready for billing.
UPDATE sandboxTesting.TemporalTest SET IsReady = 1 WHERE GroupNumber = '12345' AND StartEffectiveWhen = '2024-01-1'
GO

-- Select the Data
SELECT *
FROM sandboxTesting.TemporalTest FOR SYSTEM_TIME ALL
ORDER BY GroupNumber, StartEffectiveWhen DESC, RowValidFrom DESC, RowValidTo DESC, ModifiedWhen DESC

-- Select the Raw Data (for comparison)
SELECT * FROM sandboxTesting.TemporalTest
UNION ALL
SELECT * FROM sandboxTesting.TemporalTestHistory
ORDER BY GroupNumber, StartEffectiveWhen DESC, RowValidFrom DESC, RowValidTo DESC, ModifiedWhen DESC

When I run this, this is the first result:

|GroupNumber|StartEffectiveWhen|EndEffectiveWhen|ModifiedWhen|IsReady|RowValidFrom|RowValidTo|
|:----|:----|:----|:----|:----|:----|:----|
|12345|2024-01-01|2024-09-02|2024-08-29 17:15:28.587|1|2024-08-29 23:15:28.5764223|9999-12-31 23:59:59.9999999|
|12345|2024-01-01|NULL|NULL|1|2024-08-29 23:15:28.5295658|2024-08-29 23:15:28.5764223|
|12345|2024-01-01|NULL|NULL|0|2024-08-29 23:15:28.4826980|2024-08-29 23:15:28.5295658|

And the second set of output looks like this:

|GroupNumber|StartEffectiveWhen|EndEffectiveWhen|ModifiedWhen|IsReady|RowValidFrom|RowValidTo|
|:----|:----|:----|:----|:----|:----|:----|
|12345|2024-01-01|2024-09-02|2024-08-29 17:15:28.587|1|2024-08-29 23:15:28.5764223|9999-12-31 23:59:59.9999999|
|12345|2024-01-01|2024-09-02|2024-08-29 17:15:28.587|0|2024-08-29 23:15:28.5764223|2024-08-29 23:15:28.5764223|
|12345|2024-01-01|2024-09-02|NULL|1|2024-08-29 23:15:28.5764223|2024-08-29 23:15:28.5764223|
|12345|2024-01-01|NULL|NULL|1|2024-08-29 23:15:28.5295658|2024-08-29 23:15:28.5764223|
|12345|2024-01-01|NULL|NULL|0|2024-08-29 23:15:28.4826980|2024-08-29 23:15:28.5295658|
They have been removed because the second and third rows have a start date and end date that are the same. (Basically saying those rows were never really in effect.) Question - What I need to know is, can I depend on this "zero time difference" always being there for data that was updated via an AFTER trigger? (I need to write some queries that will fail if this is not the case.) What I mean by that is this: If my server was being hammered by thousands of queries all doing crazy amounts of IO and calculations, would the RowValidFrom and RowValidTo values for the second and third rows of the second dataset still have a difference of 0? Put another way, are these values the same because of transaction logic? Or are they the same because my server is fast and not really under any pressure right now?
Vaccano (2550 rep)
Aug 29, 2024, 11:25 PM • Last activity: Aug 30, 2024, 12:26 PM
16 votes
4 answers
3840 views
Why do temporal tables log the begin time of the transaction?
When updating a row in a temporal table, the old values for the row are stored in the history table with the transaction begin time as the SysEndTime. The new values in the current table will have the transaction begin time as the SysStartTime. SysStartTime and SysEndTime are datetime2 columns used by temporal tables to record when a row was the current version. Transaction begin time is the time the transaction containing the updates started. BOL says:

> The times recorded in the system datetime2 columns are based on the begin time of the transaction itself. For example, all rows inserted within a single transaction will have the same UTC time recorded in the column corresponding to the start of the SYSTEM_TIME period.

**Example:** I start updating all the rows in my Orders table at 20160707 11:00:00 and the transaction takes 5 minutes to run. This creates a row in the history table for each row with SysEndTime as 20160707 11:00:00. All the rows in the current table will have a SysStartTime of 20160707 11:00:00. If someone were to execute a query at 20160707 11:01:00 (while the update is running) they would see the old values (assuming the default read committed isolation level). But if someone were then to use the AS OF syntax to query the temporal table as it was at 20160707 11:01:00, they would see the new values, because their SysStartTime would be 20160707 11:00:00. To me this means it doesn't show those rows as they were at that time. If it used the transaction end time the problem wouldn't exist.

**Questions:** Is this by design? Am I missing something? The only reason I can think it's using the transaction begin time is that it is the only 'known' when the transaction starts. It doesn't know when the transaction will end when it starts, and it would take time to apply the end time at the end, which would invalidate the end time it was applying. Does this make sense? This should allow you to recreate the issue.
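The example above can be sketched as a two-session script (hedged: dbo.Orders and OrderValue are hypothetical names, and the delay stands in for the 5-minute transaction):

```sql
-- Session 1, started at 11:00:00; runs ~5 minutes before committing.
BEGIN TRAN;
UPDATE dbo.Orders SET OrderValue = OrderValue + 1;
WAITFOR DELAY '00:05:00';   -- stands in for the rest of the long transaction
COMMIT TRAN;

-- Session 2, at 11:01:00, while session 1 is still open:
SELECT * FROM dbo.Orders;   -- blocks or shows the OLD values (read committed)

-- Session 2, after session 1 commits:
SELECT * FROM dbo.Orders
FOR SYSTEM_TIME AS OF '2016-07-07 11:01:00';  -- shows the NEW values,
                                              -- because SysStartTime = 11:00:00
```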
James Anderson (5794 rep)
Jul 7, 2016, 02:04 PM • Last activity: Aug 30, 2024, 10:07 AM
1 votes
1 answer
94 views
Error when calling sp_refreshview on view based on two temporal tables
I have two temporal tables joined by a view, and when I alter the second table and call sp_refreshview, it generates the error: >View or function 'ViewTest' has more column names specified than columns defined. I noticed that after calling sp_refreshview, the select * on the second table is now incorporating the hidden datetime columns meant for system versioning. Below is the test code that I have written to demonstrate the issue.
USE Test;

ALTER TABLE Department SET ( SYSTEM_VERSIONING = OFF)
GO
ALTER TABLE DepartmentTwo SET ( SYSTEM_VERSIONING = OFF)
GO

DROP TABLE IF EXISTS DepartmentTwo; 
DROP TABLE IF EXISTS Department; 

CREATE TABLE Department
(
    DeptID INT NOT NULL PRIMARY KEY CLUSTERED,
    DeptName VARCHAR(50) NOT NULL,
    ManagerID INT NULL,
    ParentDeptID INT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON);

CREATE TABLE DepartmentTwo
(
	DeptID INT NOT NULL FOREIGN KEY REFERENCES Department(DeptId),
    DeptTwoID INT NOT NULL PRIMARY KEY CLUSTERED,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON);


DROP VIEW IF EXISTS ViewTest; 
GO

CREATE VIEW ViewTest
	AS 
SELECT 
    d.DeptName,
    d.ManagerID,
    d.ParentDeptID,
	dt.*
FROM Department d
LEFT OUTER JOIN DepartmentTwo dt on d.DeptID = dt.DeptID
GO

ALTER TABLE DepartmentTwo ADD NewCol int;
GO

execute sp_refreshview 'ViewTest';
GO

select * from ViewTest;
Is there some specific behaviour of temporal tables that is causing the datetime columns to be shown in the view, even though they should be hidden? Below is an example of how it starts incorporating the system-versioning columns (ValidFrom, ValidTo), with the select statement generated via SSMS:
/****** Script for SelectTopNRows command from SSMS  ******/
SELECT TOP (1000) [DeptName]
      ,[ManagerID]
      ,[ParentDeptID]
      ,[DeptID]
      ,[DeptTwoID]
      ,[ValidFrom]
      ,[ValidTo]
      ,[NewCol]
  FROM [Test].[dbo].[ViewTest]
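One workaround, if the hidden-column expansion of dt.* is the culprit, is to give the view an explicit column list so that refreshing can never change its shape. A sketch using the tables above (CREATE OR ALTER requires SQL Server 2016 SP1 or later; on older builds, DROP and CREATE instead):

```sql
CREATE OR ALTER VIEW ViewTest
AS
SELECT  d.DeptName,
        d.ManagerID,
        d.ParentDeptID,
        dt.DeptID,
        dt.DeptTwoID,
        dt.NewCol          -- new columns must be added here deliberately
FROM    Department AS d
LEFT OUTER JOIN DepartmentTwo AS dt ON d.DeptID = dt.DeptID;
GO
```

With no SELECT * in the definition, sp_refreshview has nothing to re-expand.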
jevans (13 rep)
Jul 25, 2024, 04:09 PM • Last activity: Jul 26, 2024, 08:48 AM
-2 votes
2 answers
718 views
How to exclude some column from history table in temporal table
I have the same question on [Stack Overflow](https://stackoverflow.com/q/76943485/1539100), but I hope to find an answer here. I have a temporal table like the one below:
-- Create a Temporal Table
CREATE TABLE dbo.Sample
(
  SampleId int identity(1,1) PRIMARY KEY CLUSTERED
  , SampleDate date NOT NULL
  , Data1 varchar(50)
  , Data2 varchar(50)
  -- Other fields...
  , SysStartTime datetime2 GENERATED ALWAYS AS ROW START
  , SysEndTime datetime2 GENERATED ALWAYS AS ROW END
  , PERIOD FOR SYSTEM_TIME (SysStartTime, SysEndTime)
 )
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.SampleHistory));
I want changes to Data2 not to be stored in the HISTORY_TABLE. In other words, in very large tables I need only a few columns for auditing, so I want to exclude the other columns to reduce DB size and improve performance (in some cases the most frequent changes in a table occur on non-auditable columns). **How can I do it? (In other words, how can I exclude Data2 from change tracking?)**
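As far as I know there is no per-column exclusion switch for system versioning; the common workaround is to split the table so that only the auditable columns live in the temporal table. A sketch under that assumption (dbo.SampleVolatile is a made-up name):

```sql
-- Keep dbo.Sample (as above, but without Data2) system-versioned, and move
-- the non-audited, frequently changing columns to a plain 1:1 side table.
CREATE TABLE dbo.SampleVolatile
(
    SampleId int NOT NULL PRIMARY KEY
        FOREIGN KEY REFERENCES dbo.Sample (SampleId),
    Data2 varchar(50)
);
```

A view joining the two tables can preserve the original shape for readers, and updates to Data2 then never touch the history table.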
sorosh_sabz (105 rep)
Nov 2, 2023, 10:07 AM • Last activity: Apr 25, 2024, 09:56 PM
5 votes
2 answers
1226 views
Temporal Table: Multiple inserts on each update?
**Actual Question:** Is there anything in SQL Server 2019 (besides a trigger) that can cause multiple rows to be inserted into the history table when a single row is updated in the temporal table?

**Background:** I have an existing table called Candidate which has no computed columns and no triggers. I implemented versioning as follows:

ALTER TABLE dbo.Candidate ADD
    BeginDate datetime2 GENERATED ALWAYS AS ROW START NOT NULL DEFAULT SYSUTCDATETIME(),
    EndDate datetime2 GENERATED ALWAYS AS ROW END NOT NULL DEFAULT CAST('9999-12-31 23:59:59.9999999' AS datetime2),
    PERIOD FOR SYSTEM_TIME (BeginDate, EndDate)
GO

ALTER TABLE dbo.Candidate
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Candidate_History, DATA_CONSISTENCY_CHECK = ON))

When I do an update on the table, even of a single column (update candidate set lastContacted = '2021-10-16 23:27:46.927' where id = 44999), there are multiple rows added to the history table:

| BeginDate | EndDate |
| --------- | ------- |
| 2022-01-24 08:25:55.3718538 | 2022-01-24 08:30:37.9523200 |
| 2022-01-24 08:30:37.9523200 | 2022-01-24 08:30:37.9523200 |
| 2022-01-24 08:30:37.9523200 | 2022-01-24 08:30:38.1853066 |
| 2022-01-24 08:30:38.1853066 | 2022-01-24 08:30:38.1853066 |
| 2022-01-24 08:30:38.1853066 | 2022-01-24 08:30:38.3093140 |
| 2022-01-24 08:30:38.3093140 | 2022-01-24 08:30:38.3093140 |
| 2022-01-24 08:30:38.3093140 | 2022-01-24 08:30:38.4352868 |
| 2022-01-24 08:30:38.4352868 | 2022-01-24 08:30:38.4352868 |

(The strangest thing is when the BeginDate and EndDate are the same.) The issue is probably in code (.NET or some library) somewhere and not in SQL. Before I go down the rabbit hole, I just wanted to know: **Is there anything in SQL Server (some setting I have missed) that can cause a single update on the table to insert multiple rows into the history, besides a trigger?**
Paul Perrick (153 rep)
Jan 24, 2022, 10:04 AM • Last activity: Jan 28, 2024, 11:01 PM
14 votes
2 answers
15440 views
Error updating temporal tables
I have implemented system-versioned temporal tables on the table "Orders". The application uses different access patterns to modify the data in this table. There are direct statements coming from the application, or the application runs long batches in explicit transactions where it makes several changes to multiple tables. The update on the "Orders" table is not the first statement in those long batches! So, sometimes we are facing the following error.

> Data modification failed on system-versioned table "Orders" because transaction time was earlier than period start time for affected records.

Apparently this is standard behavior for system-versioned temporal tables. https://learn.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables?view=sql-server-2017#how-does-temporal-work Is this something that will always need to be handled in an exception routine? Or is Microsoft considering changing that behavior?

--Script to simulate the error message:
CREATE TABLE dbo.Orders
(
    [OrderId] INT NOT NULL PRIMARY KEY CLUSTERED
  , [OrderValue] DECIMAL(19,4)
  , [ValidFrom] DATETIME2 (2) GENERATED ALWAYS AS ROW START
  , [ValidTo] DATETIME2 (2) GENERATED ALWAYS AS ROW END
  , PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.OrdersHistory));
GO

INSERT dbo.Orders ([OrderId], [OrderValue])
VALUES (1, 9.99), (2, 9.99);
GO

SELECT * FROM dbo.Orders;
GO

--Run first query
BEGIN TRAN
    WAITFOR DELAY '00:00:15';
    UPDATE dbo.Orders SET [OrderValue] = [OrderValue] + 1;
COMMIT TRAN

--Run Query 2 in another session while the first is waiting
BEGIN TRAN
    UPDATE dbo.Orders SET [OrderValue] = [OrderValue] + 1;
COMMIT TRAN
Danilo Braga (1428 rep)
Jul 5, 2018, 06:46 PM • Last activity: Dec 7, 2023, 07:55 PM
1 votes
0 answers
58 views
How to Preserve Past Order Details Amidst Customer Information Changes?
I'm facing a scenario in my e-commerce setup where after an order is marked as `Done`, any subsequent changes made by the customer to their information (like shipping address) are reflected incorrectly in past orders. I stumbled upon a blog post that discusses three solutions in PostgreSQL: 1. denor...
I'm facing a scenario in my e-commerce setup where after an order is marked as Done, any subsequent changes made by the customer to their information (like shipping address) are reflected incorrectly in past orders. I stumbled upon a blog post that discusses three solutions in PostgreSQL: 1. denormalizing the Order table 2. using Temporal Tables 3. storing snapshots as JSON blobs. I am keen to understand which of these methods, or any other recommendations you have, effectively tackle this issue while being scalable for a small business setup. I value insights from individuals who have witnessed or implemented these solutions in various company sizes and am interested in learning about what worked or didn't work in your experience. Source [Link](https://dev.to/egorpavlikhin/temporal-tables-or-json-storage-3gbg)
Ashwin Jacob (111 rep)
Oct 26, 2023, 02:38 PM
0 votes
1 answers
4670 views
Add a column at a specific position in system versioned temporal table
I need to add a column to a temporal table. The [Microsoft][1] documentation only suggests appending the new column using the following code: ALTER TABLE dbo.Department ADD TempColumn INT; However, this would cause the column to appear below the audit columns, including the mandatory system versioni...
I need to add a column to a temporal table. The Microsoft documentation only suggests appending the new column using the following code:

```sql
ALTER TABLE dbo.Department ADD TempColumn INT;
```

However, this causes the column to appear after the audit columns, including the mandatory system-versioning columns 'ValidFrom' and 'ValidTo'. Whilst this is not an issue in terms of performance, all of our non-system-versioned tables keep the audit columns as the last columns. I have tried setting system versioning off, but can't work out how to make the changes as I would with a normal user table whilst preserving the data in the history table. SSMS lets you generate the code for modifying tables using the 'Design' option, but this is also not available for temporal tables. For adding a column I would usually use code similar to the 'Design' option output, where a new table is created with the new column and a suffix in the table name, the old table is dropped, and the new table is renamed. How can I do this for temporal tables, please?
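One workaround follows the same rebuild pattern described above, bracketed by turning system versioning off and back on. This is only a sketch, not a verified recipe ('dbo.DepartmentHistory' is an assumed history-table name, and the history table's schema, including column order, must end up matching the base table's):

```sql
-- 1. Detach the history table so both tables become normal tables
ALTER TABLE dbo.Department SET (SYSTEM_VERSIONING = OFF);

-- 2. Rebuild dbo.Department with the desired column order
--    (create dbo.Department_New, INSERT...SELECT the rows, drop the
--    old table, rename the new one), then repeat the same rebuild for
--    dbo.DepartmentHistory so the two schemas match again.

-- 3. Re-attach the existing history table, preserving its data
ALTER TABLE dbo.Department SET
    (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.DepartmentHistory,
                             DATA_CONSISTENCY_CHECK = ON));
```

Note that while versioning is off, changes to the base table are not captured, so this needs a maintenance window.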
Fletch (325 rep)
Mar 12, 2021, 11:37 AM • Last activity: Oct 10, 2023, 07:50 PM
5 votes
1 answers
1276 views
Can I enable system versioning (temporal tables) on a large table without scanning the table?
Microsoft docs currently give an example of how to enable temporal tables on existing tables at ALTER TABLE, System Versioning examples: [A. Add system versioning to existing tables][1] Using the syntax there but specifying a constant default, I have: ``` ALTER TABLE InsurancePolicy ADD PERIOD FOR S...
Microsoft docs currently give an example of how to enable temporal tables on existing tables at ALTER TABLE, System Versioning examples: A. Add system versioning to existing tables. Using the syntax there but specifying a constant default, I have:

```sql
ALTER TABLE InsurancePolicy
ADD PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo),
ValidFrom datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL
--  DEFAULT SYSUTCDATETIME(), /* default specified in the docs */
    DEFAULT CONVERT(DATETIME2, '2023-08-14'), /* use a constant default */
ValidTo datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL
    DEFAULT CONVERT(DATETIME2, '9999-12-31 23:59:59.9999999');
```

When I run this statement, I can observe an SP:StatementStarting event with TextData:

```sql
SELECT [ValidFrom],[ValidTo] FROM [dbo].[InsurancePolicy]
```

This tells me that SQL Server is looking at that data (probably to determine that ValidTo and ValidFrom conform to some constraints). The schema modification lock plus the scan of the table is giving me grief. In theory, the scan is unnecessary because the values are constant. In Microsoft's docs, example B, they mention "(a certain set of data checks happen in the background)", but perhaps those checks are unnecessary when the columns are brand new. So: is there any way to enable temporal tables online, without placing a Sch-M lock on the table while the table is scanned?
Michael J Swart (2235 rep)
Aug 14, 2023, 05:32 PM • Last activity: Aug 15, 2023, 02:37 PM
0 votes
1 answers
146 views
Maintain changes in database via generic approach triggers into history table
Our main system database has temporal tables These have helped us hugely in working out issues However. A drawback of this is that an issue with doing a lot of updates at once can lead to a an error about not being able to update the temporal table due to the time being earlier than the period start...
Our main system database has temporal tables, and these have helped us hugely in working out issues. However, a drawback is that doing a lot of updates at once can lead to an error about not being able to update the temporal table because the transaction time is earlier than the period start time for the affected records. I get this error when updating the database through Entity Framework Core. I need a solution that gives me a temporal-table-like audit, but where any issues never get back to the system and stop a transaction from happening. Can anyone suggest a solution? Has anyone tried a trigger-based fix? This is using Azure SQL. In a perfect world our replica database would get the changes and then write to a temporal table, but I'm assuming that's not an option, as I would assume that the whole structure of the source and replicated databases has to be identical. Paul
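For the trigger-based route, here is a minimal sketch of the idea (all table and column names are hypothetical): an AFTER trigger appends the pre-change rows to a plain append-only audit table, which has no period columns and therefore cannot raise the temporal-time error:

```sql
-- Plain append-only audit table: no SYSTEM_TIME period, so no time-order checks
CREATE TABLE dbo.OrdersAudit
(
    AuditId    BIGINT IDENTITY PRIMARY KEY,
    OrderId    INT,
    OrderValue DECIMAL(19,4),
    ChangedAt  DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Operation  CHAR(1) NOT NULL   -- 'U' = update, 'D' = delete
);
GO

CREATE TRIGGER dbo.trg_Orders_Audit
ON dbo.Orders
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the pre-change rows, analogous to a temporal history row
    INSERT dbo.OrdersAudit (OrderId, OrderValue, Operation)
    SELECT d.OrderId, d.OrderValue,
           CASE WHEN EXISTS (SELECT 1 FROM inserted) THEN 'U' ELSE 'D' END
    FROM deleted AS d;
END;
```

The trade-off is that a trigger runs inside the same transaction, so a failure in the trigger can still roll the transaction back; it just removes the specific period-start-time failure mode.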
PaulInUk2017 (1 rep)
Jul 12, 2023, 11:36 PM • Last activity: Jul 13, 2023, 01:48 AM
5 votes
3 answers
4520 views
Where can I get SQL Server Transaction Begin Time from?
I am working with temporal tables in SQL Server and the generated time is the transaction begin time in UTC `datetime2`. Is there a function or table in SQL Server to get that same transaction begin time to use elsewhere that does not require me to write to and select from a system versioned table f...
I am working with temporal tables in SQL Server, and the generated time is the transaction begin time in UTC as datetime2. Is there a function or table in SQL Server that returns that same transaction begin time, so I can use it elsewhere without first writing to and selecting from a system-versioned table? I have some non-system-versioned tables that are part of the same transaction, and I would prefer the datetime2 recorded against them to match, instead of using SysUTCDateTime and having it vary. I tried to pull it from sys.dm_tran_active_transactions, but it is datetime in local server time, and the values appear to be different: * system versioned: 2017-04-11 14:00:59.4690673 * active transactions: 2017-04-11 15:00:59.467
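A common workaround, sketched below, is to capture one timestamp as the very first action in the transaction and reuse that variable for every non-versioned write. It will be close to, but is not guaranteed identical to, the temporal transaction begin time ('dbo.NonVersionedLog' and its columns are hypothetical):

```sql
BEGIN TRAN;
    -- Capture a single UTC timestamp at the start of the transaction
    DECLARE @TxTime DATETIME2(7) = SYSUTCDATETIME();

    -- Reuse @TxTime for every non-versioned table in this transaction
    INSERT dbo.NonVersionedLog (Note, ModifiedUtc)
    VALUES ('example', @TxTime);

    UPDATE dbo.Orders
    SET OrderValue = OrderValue + 1
    WHERE OrderId = 1;
COMMIT TRAN;
```

All non-versioned rows then carry exactly the same datetime2 value as each other, even if they can still drift by a few microseconds from the value the temporal engine records.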
Trotts (151 rep)
Apr 11, 2017, 02:09 PM • Last activity: May 28, 2023, 08:19 AM
1 votes
1 answers
583 views
How to check if a table is system versioned for MariaDB?
How to check if a table is system versioned for MariaDB ? This post refers to same question on "SQL Server 2016" https://dba.stackexchange.com/questions/218517/how-to-check-if-the-system-versioning-for-a-table-is-on I've tried this syntax on MariaDB, but I got error. I guess "TableTemporalType", is...
How can I check whether a table is system versioned in MariaDB? This post refers to the same question for SQL Server 2016: https://dba.stackexchange.com/questions/218517/how-to-check-if-the-system-versioning-for-a-table-is-on I've tried that syntax on MariaDB, but I got an error. I guess "TableTemporalType" is MS-specific (see https://learn.microsoft.com/en-us/sql/t-sql/functions/objectproperty-transact-sql?view=sql-server-ver15 ). Does anyone know how this can be done in MariaDB? Thanks for your help.
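In MariaDB, as I understand it, the flag is exposed through information_schema rather than an OBJECTPROPERTY-style function: system-versioned tables report a distinct TABLE_TYPE. A sketch (the schema and table names are placeholders; verify against your MariaDB version's documentation):

```sql
-- TABLE_TYPE is 'SYSTEM VERSIONED' for versioned tables, 'BASE TABLE' otherwise
SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_TYPE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb'
  AND TABLE_NAME   = 'mytable';
```

`SHOW CREATE TABLE mytable;` is another quick check: a versioned table's DDL ends with a `WITH SYSTEM VERSIONING` clause.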
John D. (13 rep)
Jan 16, 2023, 11:24 AM • Last activity: Jan 16, 2023, 11:43 AM
0 votes
1 answers
34 views
How to model/query this data problem? Is this a timeseries issue?
We have a large amount of data in a MS-SQL2014 (v12) server. We need to determine if a product was for sale in some time period. For example, was this inventory for sale in the month of July 2022? (1st-Jul-2022 -> 31st-Jul-2022). I've made a [db fiddle here][1] which can be used to refine the answer...
We have a large amount of data in a MS-SQL2014 (v12) server. We need to determine if a product was for sale in some time period. For example, was this inventory for sale in the month of July 2022? (1st-Jul-2022 -> 31st-Jul-2022). I've made a db fiddle here which can be used to refine the answer(s). The data we have is some inventory/stock list. It's a list of events that occur to single inventory items. So here's a sample of some 'events' that happen to this one random widget, in the inventory:
Sample Data - Events that occur with some inventory
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

| Id | Name      | Date       | Price | Status    | Note                    |
| -- | --------- | ---------- | ----- | --------- | ----------------------- |
| 1  | INV-1     | 2022-05-01 | 100   | Available | New                     |
| 1  | INV-1 aaa | 2022-09-05 | 100   | Available | Name change             |
| 1  | INV-1 bbb | 2022-09-06 | 100   | Available | Name change             |
| 1  | INV-1     | 2022-09-07 | 120   | Available | Name and price change   |
| 1  | INV-1     | 2022-09-08 | 120   | Sold      | Sold                    |
| 1  | INV-1     | 2022-10-02 | 120   | Available | Returned. Back for sale |
| 1  | INV-1     | 2022-10-01 | 115   | Available | Price change            |
| 1  | INV-1     | 2022-10-01 | 115   | Sold      | New                     |
| 2  | INV-2     | 2022-05-01 | 200   | Available | New (other product)     |
| 3  | INV-3     | 2022-05-01 | 300   | Available | New (other product)     |
| 4  | INV-4     | 2022-10-01 | 400   | Available | New (other product)     |
| 5  | INV-5     | 2022-10-01 | 500   | Available | New (other product)     |
Expected Results:
^^^^^^^^^^^^^^^^^

-- Which products were available in JULY?
| Id |
| -- |
| 1  |
| 2  |
| 3  |

-- Which products available in OCT?
| Id |
| -- |
| 1  | -- Was returned and put back to available again.
| 2  |
| 3  |
| 4  |
| 5  |
Is this possible? At first, I was told to investigate temporal tables. That sounds like it might seriously help, except that it requires MS-SQL 2016 (v13+). I'm on v12 and won't be able to upgrade or migrate the data to a v13+ version. Also, I'm not on Enterprise Edition and can't move to it. Can anyone please help?
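Even without temporal tables, this kind of question is answerable on SQL Server 2014 with window functions over the event log. A sketch, assuming the events live in a table called dbo.Events with the columns shown above: a product counts as available during a period if it has an Available event inside the period, or if its last event strictly before the period was Available:

```sql
DECLARE @From DATE = '2022-07-01', @To DATE = '2022-07-31';

-- Products with an Available event inside the period
SELECT Id
FROM dbo.Events
WHERE Status = 'Available'
  AND [Date] >= @From AND [Date] <= @To

UNION

-- Products whose last event before the period left them Available
SELECT Id
FROM (
    SELECT Id, Status,
           ROW_NUMBER() OVER (PARTITION BY Id ORDER BY [Date] DESC) AS rn
    FROM dbo.Events
    WHERE [Date] < @From
) AS last_before
WHERE rn = 1 AND Status = 'Available';
```

Against the sample data this yields 1, 2, 3 for July and all five products for October, matching the expected results; ties on the same Date (as with INV-1 on 2022-10-01) would need a secondary sort column, such as an event sequence number, to be fully deterministic.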
Pure.Krome (283 rep)
Sep 16, 2022, 04:13 AM • Last activity: Sep 19, 2022, 07:02 AM
0 votes
1 answers
41 views
how to add a query to postgres and have it show up as a table
I'm not sure what technology I need because I'm not a DBA, but I'm sure there's a good way to do this. I have a postgres database with this schema (top two tables exist): [![observations][1]][1] **So I'm trying to add the 3rd table to the database. It's a subset of `observation`: all latest observat...
I'm not sure what technology I need because I'm not a DBA, but I'm sure there's a good way to do this. I have a postgres database with this schema (top two tables exist; schema diagram: observations). **So I'm trying to add the 3rd table to the database. It's a subset of observation: all latest observations by stream_id and target_id, with the target_key value already joined in.** I know up front that 99% of the queries against the target table ask for the latest row for a given stream_id and target_id. So I thought that instead of manually building another table with duplicate data, I'd try to leverage the power of the database to make a temporal table, or a foreign data wrapper, or something that is essentially a hard-coded query on the database which looks like a table we can query (and, since it queries a subset of the data, is much faster). OK, so that's what I'm looking for, but I don't know which technology to use. As I mentioned, I've been searching and found temporal tables, virtual tables, and FDWs, but I'm a programmer, not a DBA, so I'm having a hard time telling the difference or understanding which one matches my need. What technology can I use for this?
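What's described sounds like a plain view (or a materialized view, if the result must be precomputed) rather than a temporal table or FDW. A sketch using PostgreSQL's DISTINCT ON; the payload and timestamp column names here are guesses based on the description:

```sql
CREATE VIEW latest_observation AS
SELECT DISTINCT ON (o.stream_id, o.target_id)
       o.stream_id,
       o.target_id,
       o.value,            -- hypothetical payload column
       o.observed_at,      -- hypothetical timestamp column
       t.target_key        -- joined in from the target table
FROM observation AS o
JOIN target AS t ON t.id = o.target_id
-- DISTINCT ON keeps only the first row per (stream_id, target_id),
-- and the DESC sort makes that first row the newest one
ORDER BY o.stream_id, o.target_id, o.observed_at DESC;
```

If querying the view is still too slow, `CREATE MATERIALIZED VIEW` with the same SELECT plus a periodic `REFRESH MATERIALIZED VIEW` gives the precomputed-table behavior without hand-maintaining duplicate data; an index on `observation (stream_id, target_id, observed_at DESC)` also helps the plain-view case.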
MetaStack (103 rep)
Jun 12, 2022, 03:27 PM • Last activity: Jun 13, 2022, 03:53 AM
Showing page 1 of 20 total questions