
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

2 votes
1 answer
352 views
Is there any benefit for natively compiled TVFs when used in non native modules?
I have some schema-bound **inline** table-valued functions that I could natively compile. However, my workflow does not use stored procedures; the functions would only be invoked from Entity Framework or ADO.NET directly. As the functions must be inlined into our queries, I fail to see the point in marking my functions as natively compiled. However, a colleague of mine wants to use it because it's shiny and new. From some brief testing there seemed to be no actual benefit. Am I mistaken, or could there be beneficial scenarios?
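One quick sanity check before benchmarking is to confirm whether the option even took effect on the module. A sketch, assuming `dbo.MyInlineTvf` is a placeholder for one of the functions in question:

```sql
-- Shows whether a given module was actually created with NATIVE_COMPILATION.
-- dbo.MyInlineTvf is a hypothetical function name.
SELECT OBJECT_SCHEMA_NAME(object_id) + '.' + OBJECT_NAME(object_id) AS module_name,
       uses_native_compilation
FROM sys.sql_modules
WHERE object_id = OBJECT_ID(N'dbo.MyInlineTvf');
```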
Michael B (519 rep)
May 11, 2019, 03:33 PM • Last activity: Jul 13, 2025, 07:03 AM
5 votes
1 answer
2634 views
Replication stops with GTID_NEXT error after creation/drop of memory table in mysql5.6
We have recently upgraded to MySQL 5.6.25 from MySQL 5.5.x/5.1.x on our MySQL cluster. Below is a brief snapshot of our architecture.

![architecture diagram](https://i.sstatic.net/QCakk.jpg)

Since we upgraded and enabled gtid-mode, we have been intermittently getting slave errors similar to:

> ***Last_SQL_Error: Error 'When @@SESSION.GTID_NEXT is set to a GTID, you must explicitly set it to a different value after a COMMIT or ROLLBACK. Please check GTID_NEXT variable manual page for detailed explanation. Current @@SESSION.GTID_NEXT is 'd7e8990d-3a9e-11e5-8bc7-22000aa63d47:1466'.' on query. Default database: 'adplatform'. Query: 'create table X_new like X'***

Our observations are as below:

- These slave errors are resolved simply by restarting the slave.
- Such errors always occur with CREATE/DROP of tables that use the MEMORY storage engine.
- Errors on the Complete-Slave (B) show up continuously at a fixed minute (the 39th) of the hour and have been repeating since we upgraded, almost a week ago.
- Errors on the Complete-Slave as well as the Partial-Slave are observed whenever its master is restarted.
- Cluster-1 and Cluster-2 have CentOS machines and Cluster-3 has Ubuntu machines. Slaves on CentOS machines also fail with the same error whenever their master (C/D) is restarted, but slaves on Ubuntu machines do not fail!

We have temporarily been able to live with this issue by setting up an action script on our monitoring system which fires on a slave-error alert on any machine.

A look into the gtid_next section of the replication-options documentation for MySQL says:

> Prior to MySQL 5.6.20, when GTIDs were enabled but gtid_next was not AUTOMATIC, DROP TABLE did not work correctly when used on a combination of nontemporary tables with temporary tables, or of temporary tables using transactional storage engines with temporary tables using nontransactional storage engines. In MySQL 5.6.20 and later, DROP TABLE or DROP TEMPORARY TABLE fails with an explicit error when used with either of these combinations of tables. (Bug #17620053)

This seems related to my issue but still doesn't explain my scenario. Any hints/direction to solve the issue would be greatly appreciated.

**EDIT:** I managed to find a similar recently reported bug in MySQL (#77729): https://bugs.mysql.com/bug.php?id=77729

> When you have table with Engine MEMORY working on replication master, mysqld injects "DELETE" statement in binary logs on first access query to this table. This insures consistency of data on replicating slaves.
>
> If replication is GTID ROW based, this inserted "DELETE" breaks replication. Logged event is in STATEMENT format and do not generate correct SET GTID_NEXT statements in binary log.

Unfortunately, the status of this bug is marked as "Can't Repeat"...
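Until the underlying bug is addressed, the usual way to get a stopped slave past a single bad GTID event is to commit an empty transaction under that GTID. A sketch; substitute the server UUID and sequence number reported in your own Last_SQL_Error:

```sql
-- On the failed slave: "consume" the offending GTID with an empty
-- transaction, then resume automatic GTID assignment.
STOP SLAVE;
SET GTID_NEXT = 'd7e8990d-3a9e-11e5-8bc7-22000aa63d47:1466';
BEGIN; COMMIT;               -- empty transaction occupies the GTID slot
SET GTID_NEXT = 'AUTOMATIC';
START SLAVE;
```

This only skips the one broken event, which is roughly what the monitoring action script is doing by restarting the slave; it does not fix the root cause.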
raman2887 (51 rep)
Aug 5, 2015, 08:05 AM • Last activity: Jul 1, 2025, 11:02 PM
1 vote
1 answer
58 views
Transaction log file for in-memory database doesn't shrink
I am using SQL Server 2019 Enterprise Edition. We have an in-memory database used for session state data, with one table stored in memory, schema-only durability, simple recovery, differential backups run daily, and a full backup weekly. I noticed some errors in our logs recently.

> Disallowing page allocations for database 'MyDatabase' due to insufficient memory in the resource pool 'default'
>
> XTP failed page allocation due to memory pressure: FAIL_PAGE_ALLOCATION 8
>
> There is insufficient system memory in resource pool 'default' to run this query.

In researching these, I also discovered that the log files for this database were several GB. It seemed odd to me that the log wasn't truncating. With experimentation, I found that if I run CHECKPOINT **twice**, I could then shrink the log file. It was my understanding that an automatic checkpoint should happen when the transaction log grows beyond 1.5 GB. My question is: is it normal for in-memory databases to have transaction log files that are several GB and keep growing?
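As a first diagnostic step, the engine records why a given log cannot be truncated, and that reason can be read directly. A sketch; `MyDatabase` is the placeholder name from the errors above:

```sql
-- Shows what, if anything, is currently preventing log truncation.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyDatabase';
```

A value such as XTP_CHECKPOINT in `log_reuse_wait_desc` would explain why CHECKPOINT had to run before the shrink succeeded.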
Kathy S (11 rep)
Jun 11, 2025, 06:26 PM • Last activity: Jun 13, 2025, 10:42 AM
5 votes
2 answers
5596 views
What are all the XTP_THREAD_POOL processes doing?
I just ran `sp_who` on my SQL Server 2016 instance that uses `MEMORY_OPTIMIZED` tables, and I see several `XTP_THREAD_POOL` processes running.

Some other details about the output:

- There are exactly 6 `XTP_THREAD_POOL` rows in the resultset
- My system has two logical cores
- Specifically, it's a VM with 2 sockets, each socket housing a 1-core processor
- All of these tasks are running as background (sa)

What are these processes doing? I'm not finding much on Google about this task name. TIA
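The same background tasks can be listed with less noise than `sp_who` produces by querying the request DMV directly (a sketch):

```sql
-- List XTP background tasks and what they are currently waiting on.
SELECT r.session_id, r.command, r.status, r.wait_type
FROM sys.dm_exec_requests AS r
WHERE r.command LIKE N'XTP%';
```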
Josh Darnell (30173 rep)
Jul 16, 2018, 08:17 PM • Last activity: Apr 25, 2025, 09:00 AM
0 votes
1 answer
77 views
Is there any disadvantage to moving a small and frequently read read-only table to in-memory OLTP?
Suppose that you have a table which is:

* Read from so frequently that it is almost always in RAM.
* Very small.
* Almost never changed.

Is there any disadvantage to moving such a table to in-memory OLTP? My thoughts are that if it's always in memory anyway, then you should consider using the architecture designed for living in memory.
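For reference, converting such a table is mostly a matter of recreating it with a memory-optimized definition and reloading the rows. A sketch under that assumption; the table and column names are hypothetical:

```sql
-- Hypothetical small lookup table recreated as a durable
-- memory-optimized table (data survives restarts).
CREATE TABLE dbo.RefLookup
(
    LookupId   int           NOT NULL PRIMARY KEY NONCLUSTERED,
    LookupName nvarchar(100) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```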
J. Mini (1237 rep)
Nov 19, 2024, 07:31 PM • Last activity: Nov 20, 2024, 04:12 PM
6 votes
3 answers
664 views
Cannot reclaim Index Unused Memory in In-Memory OLTP
Steps to reproduce the problem:

1. Create a database with a memory-optimized filegroup and container.
2. Create a schema-only in-memory table with a nonclustered PK.
3. Simulate insert and delete activity.

My result is that I have high index unused memory that won't go down.
```
USE master
GO
DROP DATABASE IF EXISTS MemoryOptimizedTest
CREATE DATABASE MemoryOptimizedTest
GO
USE MemoryOptimizedTest
GO
ALTER DATABASE MemoryOptimizedTest
ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA
GO

ALTER DATABASE MemoryOptimizedTest ADD FILE (name='imoltp_mod1', filename='c:\imoltp_mod1') TO FILEGROUP imoltp_mod
GO

DROP TABLE IF EXISTS dbo.MyCache
CREATE TABLE dbo.MyCache
(
    PK int NOT NULL,
    SecondInt int NOT NULL,
    ThirdInt int NOT NULL,
    CONSTRAINT PK_MyCache PRIMARY KEY NONCLUSTERED (PK)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)
GO

/* Generate activity and monitor table size */
USE MemoryOptimizedTest
GO

SELECT
    object_id,
    OBJECT_SCHEMA_NAME(object_id) + '.' + OBJECT_NAME(object_id) AS Table_Name,
    memory_allocated_for_table_kb,
    memory_used_by_table_kb,
    memory_allocated_for_indexes_kb,
    memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE OBJECT_ID = OBJECT_ID('dbo.MyCache')

;WITH
  L0   AS(SELECT 1 AS c UNION ALL SELECT 1),
  L1   AS(SELECT 1 AS c FROM L0 CROSS JOIN L0 AS B),
  L2   AS(SELECT 1 AS c FROM L1 CROSS JOIN L1 AS B),
  L3   AS(SELECT 1 AS c FROM L2 CROSS JOIN L2 AS B),
  L4   AS(SELECT 1 AS c FROM L3 CROSS JOIN L3 AS B),
  L5   AS(SELECT 1 AS c FROM L4 CROSS JOIN L4 AS B),
  Nums AS(SELECT ROW_NUMBER() OVER(ORDER BY (SELECT NULL)) AS n FROM L5)
, tally AS (SELECT TOP (10000) n FROM Nums ORDER BY n)
INSERT INTO dbo.MyCache (PK, SecondInt, ThirdInt)
SELECT
    n
    , n+1
    , n+2
FROM tally

WAITFOR DELAY '00:00:02'
DELETE FROM dbo.MyCache

GO 50
```
When I run it on my local machine (Microsoft SQL Server 2017 (RTM-GDR) (KB4505224) - 14.0.2027.2 (X64) Developer Edition, 16 GB max memory, 1.5 GB available memory), memory_allocated_for_indexes_kb fluctuates normally.

When I run it on our DEV environment (Microsoft SQL Server 2019 (RTM-CU7) (KB4570012) - 15.0.4063.15 (X64) Enterprise Edition, 2 TB max memory, 220 GB available memory), memory_allocated_for_indexes_kb only grows. I've simulated activity on the table for a few hours and have index used memory = 0.24 MB and index unused memory = 385 MB, and it won't go down. The garbage collector ran, according to the PerfMon counter "Sweep expired rows removed/sec" in XTP Garbage Collection. I read somewhere that the garbage collector doesn't free up space until it faces memory pressure, but it seems weird that it would hold so much unused memory.

EDIT: I used a resource pool for the database holding the in-memory tables. One percent is the lowest I could go. I filled the memory with another table taking 99% of the resource pool and memory_allocated_for_indexes still won't go down, whether there is transactional activity or not. No active transactions are blocking GC and I've waited for more than 30 minutes. I can't accept any of the current answers, as my problem is still not answered. Note that the table rows are cleaned up, but not the index.
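One more data point worth collecting is whether garbage collection is keeping up with the delete churn, which the GC work-queue DMV exposes (a sketch):

```sql
-- Per-scheduler XTP garbage-collection work queues; a persistently
-- large current_queue_depth suggests GC is falling behind.
SELECT *
FROM sys.dm_xtp_gc_queue_stats;
```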
Zikato (5724 rep)
Sep 14, 2020, 01:05 PM • Last activity: Sep 28, 2024, 04:01 PM
0 votes
1 answer
73 views
In-Memory Table Type with Primary Key causing Eager Index Spool
I have defined an In-Memory Table Type to hold a sequence of unique integers for the purpose of passing them between stored procedures. The type has the following definition:

```
CREATE TYPE [dbo].[IntegerUniqueList] AS TABLE(
    [IntegerValue] [int] NOT NULL,
    PRIMARY KEY NONCLUSTERED
    (
        [IntegerValue] ASC
    )
) WITH ( MEMORY_OPTIMIZED = ON )
```

In complex queries that use it, even though an index seek seems the obvious choice, the optimizer is choosing to include an Index Spool. This is defeating the point of having the data in memory. This can be replicated in AdventureWorks2019 with the following:

```
DECLARE @IDs dbo.IntegerUniqueList

INSERT @IDs (IntegerValue)
SELECT p.BusinessEntityID
FROM Person.BusinessEntity p

SELECT p.BusinessEntityID
FROM Person.Person p
WHERE (p.BusinessEntityID = 1)
OR
(
    (
        (0 = 1)
        OR EXISTS
        (
            SELECT 1
            FROM Person.BusinessEntity be
            JOIN Person.BusinessEntityAddress bea ON (bea.BusinessEntityID = be.BusinessEntityID)
            JOIN Person.Address a ON (a.AddressID = bea.AddressID)
            WHERE (bea.AddressTypeID = 2)
            AND (be.BusinessEntityID = p.BusinessEntityID)
            AND (a.AddressLine1 LIKE N'%1%')
        )
    )
    AND EXISTS
    (
        SELECT 1
        FROM @IDs id
        WHERE (id.IntegerValue = p.BusinessEntityID)
    )
)
```

Here is the plan produced: https://www.brentozar.com/pastetheplan/?id=SkFipTYTR

If I remove the in-memory aspect, or change the NONCLUSTERED primary key to use a HASH index, the spool goes away. Given that in-memory *should* give better performance, and I can't predict how many rows will be put in the variables to pick a reasonable BUCKET_COUNT, neither of those solutions is ideal. What's going on here, and are there any other options to prevent the spool?
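One option worth testing (not a guaranteed fix) is the documented `NO_PERFORMANCE_SPOOL` query hint, which forbids the optimizer from adding performance spools to the plan. A sketch, reusing the `@IDs` variable from the repro above:

```sql
-- Same probe against the table variable, with spools disallowed
-- at the statement level.
SELECT p.BusinessEntityID
FROM Person.Person p
WHERE EXISTS
(
    SELECT 1
    FROM @IDs id
    WHERE id.IntegerValue = p.BusinessEntityID
)
OPTION (NO_PERFORMANCE_SPOOL);
```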
Dan Def (165 rep)
Sep 19, 2024, 04:26 PM • Last activity: Sep 19, 2024, 04:35 PM
6 votes
1 answer
369 views
Microsoft SQL In-Memory OLTP in SQL Express 2019/2022
I am trying to use SQL In-Memory OLTP. The limitation is that the solution needs to be compatible with SQL Express 2019 and 2022. This link claims that In-Memory OLTP is compatible with **all** versions of SQL, including Express. With version 2022 I am not managing to get started:

```
EXECUTE master.dbo.xp_create_subdir 'C:\data\'
GO
ALTER DATABASE S2X ADD FILEGROUP S2x_mod CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE S2x ADD FILE ( name='S2x_mod1', filename='c:\data\S2x_mod1') TO FILEGROUP S2x_mod;
```

I get as far as the last part of the above query and get the following error:

> Could not process the operation. Always On Availability Groups replica manager is disabled on this instance of SQL Server. Enable Always On Availability Groups, by using the SQL Server Configuration Manager. Then, restart the SQL Server service, and retry the current operation. For information about how to enable and disable Always On Availability Groups, see SQL Server Books Online.

When trying to enable the option via the SQL Server Configuration Manager, it says that turning on "Always On Availability Groups" is not possible with **this** version of SQL Express.

With the 2019 version I get past the above query and manage to create a memory-optimized table and native stored procedure:

```
-- Creating a memory-optimized table
CREATE TABLE dbo.State
(
    CID NVARCHAR(50) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000),
    PNO INT NOT NULL,
    IsStart BIT NOT NULL,
    IsEnd BIT NOT NULL,
    CMPID NVARCHAR(50) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

CREATE PROCEDURE dbo.GetStateByCID
    @CID NVARCHAR(50)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
(
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'English'
)
    SELECT CID, PNO, IsStart, IsEnd, CMPID
    FROM dbo.State
    WHERE CID = @CID;
END;
```

but when executing it:

```
EXEC @return_value = [dbo].[GetStateByCID] @CID = N'1234'
```

I get the following error:

> An error occurred while executing batch. Error message is: Internal connection fatal error. Error state: 15, Token : 0

I did not find a lot of help online on this, but it seemed to be an issue which was fixed in the 2017 version (see here). This does not make much sense to me, as I would have expected the fix to be included in the 2019 version by default. In any case, using In-Memory OLTP would only be possible for me if it is compatible with both Express 2019 **and** 2022. What am I missing here? Is In-Memory OLTP compatible with SQL Express or not? Am I doing something wrong?
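Before anything else, it may be worth confirming that each instance itself reports In-Memory OLTP support (a sketch):

```sql
-- 1 = this instance/edition supports In-Memory OLTP, 0 = it does not.
SELECT SERVERPROPERTY('IsXTPSupported') AS IsXTPSupported;
```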
Jurgen Cuschieri (187 rep)
Sep 4, 2024, 03:25 PM • Last activity: Sep 5, 2024, 01:45 PM
2 votes
2 answers
1256 views
Error creating memory optimized filegroup in SSDT
I am attempting to add a memory-optimized filegroup to a SQL Server database project in SSDT. This is for SQL Server 2017, using Visual Studio 2017. However, compiling the project (pressing F5 to build) results in an error. This error does not occur when deploying (via Deploy). The filegroup is created as normal (with the SQLCMD variable as scripted by SSDT):

```
ALTER DATABASE [$(DatabaseName)]
    ADD FILEGROUP [MemoryOptimizedFilegroup] CONTAINS MEMORY_OPTIMIZED_DATA
```

However this results in an error:

> The operation 'AUTO_CLOSE' is not supported with databases that have a MEMORY_OPTIMIZED_DATA filegroup.

However, the database settings show that auto close is indeed disabled.

The deployment script generated by SSDT for some reason is creating the database, creating the filegroup, and then only setting AUTO_CLOSE off afterward. It is possible this is resulting in the error:

```
CREATE DATABASE [$(DatabaseName)]
    ON
    PRIMARY(NAME = [$(DatabaseName)], FILENAME = N'$(DefaultDataPath)$(DefaultFilePrefix)_Primary.mdf')
    LOG ON (NAME = [$(DatabaseName)_log], FILENAME = N'$(DefaultLogPath)$(DefaultFilePrefix)_Primary.ldf')
    COLLATE SQL_Latin1_General_CP1_CI_AS
GO
PRINT N'Creating [MemoryOptimizedFilegroup]...';
GO
ALTER DATABASE [$(DatabaseName)]
    ADD FILEGROUP [MemoryOptimizedFilegroup] CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE [$(DatabaseName)]
    ADD FILE (NAME = [MemoryOptimizedFilegroup_69323650],
              FILENAME = N'$(DefaultDataPath)$(DefaultFilePrefix)_MemoryOptimizedFilegroup_69323650.mdf')
    TO FILEGROUP [MemoryOptimizedFilegroup];
GO
USE [$(DatabaseName)];
GO
IF EXISTS (SELECT 1
           FROM [master].[dbo].[sysdatabases]
           WHERE [name] = N'$(DatabaseName)')
BEGIN
    ALTER DATABASE [$(DatabaseName)]
        SET ANSI_NULLS ON,
            ANSI_PADDING ON,
            ANSI_WARNINGS ON,
            ARITHABORT ON,
            CONCAT_NULL_YIELDS_NULL ON,
            NUMERIC_ROUNDABORT OFF,
            QUOTED_IDENTIFIER ON,
            ANSI_NULL_DEFAULT ON,
            CURSOR_DEFAULT LOCAL,
            CURSOR_CLOSE_ON_COMMIT OFF,
            AUTO_CREATE_STATISTICS ON,
            AUTO_SHRINK OFF,
            AUTO_UPDATE_STATISTICS ON,
            RECURSIVE_TRIGGERS OFF
        WITH ROLLBACK IMMEDIATE;
    ALTER DATABASE [$(DatabaseName)]
        SET AUTO_CLOSE OFF
        WITH ROLLBACK IMMEDIATE;
END
```

I am not certain whether AUTO_CLOSE is enabled or disabled by default after CREATE DATABASE. If ON by default, it seems that SSDT is generating the deployment script in the wrong order. If OFF by default, then I don't understand why the error exists. Has anyone had success creating a memory-optimized filegroup in an SSDT project?
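The effective AUTO_CLOSE setting can be confirmed from the catalog rather than the SSDT UI (a sketch; `$(DatabaseName)` is the same SQLCMD variable used above):

```sql
-- 0 in is_auto_close_on means AUTO_CLOSE is disabled.
SELECT name, is_auto_close_on
FROM sys.databases
WHERE name = N'$(DatabaseName)';
```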
Definite (145 rep)
Jan 23, 2020, 09:18 AM • Last activity: Jul 29, 2024, 10:38 AM
1 vote
0 answers
39 views
Restoring of OLTP In-Memory DBs in SQL 2022 on Ubuntu really slow
We've started using OLTP In-Memory tables (schema-only durability) in a small DB (DB properties show 137 MB of row data, 1 GB of in-memory data, BAK size of 152 MB), and the recovery times (either restarting the SQL service or restoring the DB from a BAK file) are extremely slow in SQL Standard 2022 on Ubuntu Linux. On a Windows SQL Server VM, it restores and recovers in 20 seconds, and the observed waits for each table create were "PREEMPTIVE_OS_CREATEDIRECTORY".

Research on this led me to Microsoft pages that talk about applying KB4090789 and KB4052338 for SQL 2017 and enabling trace flag 9944. I assume those KBs for SQL 2017 would already be included in SQL 2022. I enabled trace flag 9944 and restarted the SQL server. The normal (non-OLTP) DBs started up just fine. The OLTP DB failed to start and was marked as SUSPECT. I could not get it to come back online/recover as long as the 9944 trace flag was enabled. I've also tried using 9944 on several containers running SQL Express 2022; they all take >10 minutes to recover and always end up with the DB marked as SUSPECT.

I've gone through the SQL for Linux tuning pages (filesystem settings, kernel/CPU tuning, updates/patches, SQL Server config, etc.) and haven't found anything different than I'm doing. Any ideas/suggestions would be greatly appreciated. Thanks!
Hugh Clark (11 rep)
Jul 15, 2024, 09:59 PM
0 votes
1 answer
126 views
What happens when you try to use more than 32 GB of memory-optimized data in a database on Standard Edition?
In SQL Server 2019, you cannot put more than 32 GB of [memory-optimized data in a database on Standard Edition](https://learn.microsoft.com/en-us/sql/sql-server/editions-and-components-of-sql-server-2019?view=sql-server-ver15&preserve-view=true#Cross-BoxScaleLimits). What happens if you try anyway? I'd try it myself, but I don't want to risk discovering that the answer is a knock on the door from the Licencing Police™.
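For anyone who wants to know how close a database is to the cap before experimenting, the per-table usage can be totalled from the XTP DMV (a sketch, run in the database of interest):

```sql
-- Total memory used by memory-optimized tables in the current
-- database, expressed in GB.
SELECT SUM(memory_used_by_table_kb) / 1024.0 / 1024.0 AS total_gb
FROM sys.dm_db_xtp_table_memory_stats;
```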
J. Mini (1237 rep)
Apr 4, 2024, 10:35 PM • Last activity: Apr 6, 2024, 04:18 PM
4 votes
2 answers
905 views
In-memory OLTP databases take very long to recover during startup
We use SQL Server 2019 on Windows with In-Memory OLTP activated on some databases. After a server reboot/service restart, the in-memory databases take very long to become available (more than an hour), even though most tables are not durable. The size of memory-optimized objects is very small: 10 MB.

- We see a background session on the master DB with wait type (110514580ms) WAIT_XTP_RECOVERY and almost no reads.
- CPU cores are at 100%. Disks are idling.
- We use transparent data encryption (TDE) for this database.
- This database uses synonyms to access another DB on the same instance.
- It uses Service Broker.
- The instance has transactional replication set up on databases without in-memory activated.
- Adding CPU makes it faster. This is a low-end machine, but not crap.
- XTP engine 2.11.

Any idea what's going on?
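While the databases are coming up, the recovery session can be watched directly to see how long it has been stuck (a sketch):

```sql
-- Sessions currently waiting on In-Memory OLTP recovery;
-- wait_time is in milliseconds.
SELECT session_id, wait_type, wait_time, status
FROM sys.dm_exec_requests
WHERE wait_type = N'WAIT_XTP_RECOVERY';
```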
maxschaf (43 rep)
Jan 30, 2024, 09:54 PM • Last activity: Jan 31, 2024, 06:43 PM
14 votes
1 answer
1863 views
Why does a simple natively compiled stored procedure run out of memory when table variables are used?
My version of SQL Server is SQL Server 2019 (RTM-CU18). The following repro code requires that an in-memory filegroup is created. For anyone following along, please remember that an in-memory filegroup cannot be dropped from a database once it is created.

I have a simple in-memory table in which I insert integers from 1 - 1200:

```
DROP TABLE IF EXISTS [dbo].[InMem];

CREATE TABLE [dbo].[InMem] (
    i [int] NOT NULL,
    CONSTRAINT [PK_InMem] PRIMARY KEY NONCLUSTERED (i ASC)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

INSERT INTO [dbo].[InMem]
SELECT TOP (1200) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM master..spt_values t1
CROSS JOIN master..spt_values t2;
```

I also have the following natively compiled stored procedure:

```
GO

CREATE OR ALTER PROCEDURE p1
WITH NATIVE_COMPILATION, SCHEMABINDING
AS BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT c1.i, c2.i, c3.i
    FROM dbo.[InMem] c1
    CROSS JOIN dbo.[InMem] c2
    CROSS JOIN dbo.[InMem] c3
    WHERE c1.i + c2.i + c3.i = 3600;
END;

GO
```

The procedure returns one row when executed. On my machine it takes around 32 seconds to complete. I cannot observe any unusual behavior in terms of memory usage while it executes.

I can create a similar table type:

```
CREATE TYPE [dbo].[InMemType] AS TABLE(
    i [int] NOT NULL,
    INDEX [ix_WordBitMap] NONCLUSTERED (i ASC)
) WITH (MEMORY_OPTIMIZED = ON);
```

as well as the same stored procedure but using the table type instead:

```
GO

CREATE OR ALTER PROCEDURE p2 (@t dbo.[InMemType] READONLY)
WITH NATIVE_COMPILATION, SCHEMABINDING
AS BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT c1.i, c2.i, c3.i
    FROM @t c1
    CROSS JOIN @t c2
    CROSS JOIN @t c3
    WHERE c1.i + c2.i + c3.i = 3600;
END;

GO
```

The new stored procedure throws an error after about one minute:

> Msg 701, Level 17, State 154, Procedure p2, Line 6 [Batch Start Line 57]
> There is insufficient system memory in resource pool 'default' to run this query.

While the procedure executes, I can see the amount of memory used by the MEMORYCLERK_XTP memory clerk increase to around 2800 MB for the database by querying the sys.dm_os_memory_clerks DMV. According to the sys.dm_db_xtp_memory_consumers DMV, nearly all of the memory usage seems to be from the "64K page pool" consumer.

For reference, here is how I executed the new stored procedure. It uses the same 1200 rows as the table:

```
DECLARE @t dbo.[InMemType];

INSERT INTO @t (i)
SELECT i FROM [dbo].[InMem];

EXEC p2 @t;
```

The resulting query plan is a simple nested loop plan with no blocking operators. Per request, here is an estimated query plan for the second stored procedure.

I do not understand why memory usage grows to over 2 GB for such a query when I use a table-valued parameter. I have read various bits of documentation and in-memory OLTP white papers and can't find any reference to this behavior. Using ETW tracing, I can see that the first procedure spends most of its CPU time calling hkengine!HkCursorHeapGetNext and the second procedure spends most of its CPU time calling hkengine!HkCursorRangeGetNext. I can also get the C source code for both procedures. The first procedure is here and the second procedure, with the memory problem, is here. However, I don't know how to read C code, so I don't know how to investigate further.

Why does a simple natively compiled stored procedure use over 2 GB of memory when performing nested loops against a table-valued parameter? The problem also occurs when I run the query outside of a stored procedure.
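The per-consumer breakdown referred to above can be reproduced with a query along these lines (a sketch):

```sql
-- Memory consumers for the XTP engine in the current database;
-- look for the "64K page pool" rows.
SELECT memory_consumer_type_desc,
       memory_consumer_desc,
       allocated_bytes / 1024 / 1024 AS allocated_mb,
       used_bytes / 1024 / 1024 AS used_mb
FROM sys.dm_db_xtp_memory_consumers;
```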
Joe Obbish (32976 rep)
Nov 18, 2022, 09:13 PM • Last activity: Jan 16, 2024, 09:07 PM
0 votes
1 answer
639 views
Why use a memory-optimized table variable instead of a temp table or normal table variable?
Table variables have a bad reputation, mostly due to cardinality estimates. However, this reputation was earned before the introduction of memory-optimized table variables. [Microsoft covers some use cases here](https://learn.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/faster-temp-table-and-table-variable-by-using-memory-optimization?view=sql-server-ver16), but I'm not sold on the benefits. A normal table variable will go into RAM very quickly, so I don't see how the general benefits of in-memory OLTP apply to memory-optimized table variables. [This article](https://web.archive.org/web/20140111080518/http://blogs.technet.com/b/dataplatforminsider/archive/2014/01/07/sql-server-2014-in-memory-oltp-memory-optimized-table-types-and-table-variables.aspx) lists some benefits that apply, but its focus is clearly on listing differences rather than benefits.

In what circumstances would a memory-optimized table variable be preferred over a normal table variable or a temp table? To keep the playing ground even, **assume that we are working in a normal stored procedure that has no reference to any memory-optimized tables.** In other words, assume that my only usage of in-memory OLTP is for table variables. Furthermore, let's assume that we are on SQL Server 2022. Assuming a version like 2014 would make answers needlessly complicated.
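For concreteness, a memory-optimized table variable requires a pre-created memory-optimized table type, unlike a normal table variable. A sketch; the type name is hypothetical:

```sql
-- One-time setup: the table type itself must be memory-optimized.
CREATE TYPE dbo.IdList AS TABLE
(
    Id int NOT NULL PRIMARY KEY NONCLUSTERED
) WITH (MEMORY_OPTIMIZED = ON);
GO

-- Then usable from any ordinary stored procedure or batch.
DECLARE @ids dbo.IdList;
INSERT INTO @ids (Id) VALUES (1), (2), (3);
SELECT COUNT(*) FROM @ids;
```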
J. Mini (1237 rep)
Jan 13, 2024, 05:09 PM • Last activity: Jan 16, 2024, 07:34 PM
0 votes
1 answer
115 views
If memory-optimised tables always live in RAM, how do they survive a server reboot?
I clearly have a missing piece in my understanding of in-memory OLTP. Suppose that I have a memory-optimised table that is set up to persist both schema and data. Given that these tables always live in RAM, how does their data survive a server reboot? I've looked over the documentation, but I cannot recall seeing any mention of how or when their data is moved to disk. Indeed, as far as I know, the whole point of Hekaton is that the tables don't live on disk. Have I simply missed a piece of documentation?
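One way to explore this empirically: databases with durable memory-optimised tables keep on-disk checkpoint files in the MEMORY_OPTIMIZED_DATA filegroup, and those files can be listed (a sketch, run in the database in question):

```sql
-- On-disk checkpoint files backing durable memory-optimized
-- tables in the current database.
SELECT file_type_desc, state_desc, file_size_in_bytes
FROM sys.dm_db_xtp_checkpoint_files;
```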
J. Mini (1237 rep)
Jan 13, 2024, 03:20 PM • Last activity: Jan 13, 2024, 04:56 PM
29 votes
4 answers
30476 views
The transaction log for database 'database_name' is full due to 'XTP_CHECKPOINT'
I have a question about XTP_CHECKPOINT. I'm using SQL Server 2014. I have a database that is in the SIMPLE recovery model. It is also being replicated. There are no open transactions. I've run DBCC OPENTRAN and it returns:

> No active open transactions.

But I keep getting this message whenever I try to create or drop a table or delete data (I've replaced my actual database name with the word database_name):

> The transaction log for database 'database_name' is full due to 'XTP_CHECKPOINT'

Does anyone know why this might be happening, and, more importantly, how can I make it stop? And yes, the database really is in the SIMPLE recovery model, i.e. the transaction log should truncate automatically.

Incidentally, another database that I have in full recovery mode did the same thing and started returning the same error:

> The transaction log for database 'database_name' is full due to 'XTP_CHECKPOINT'

I tried to change the log growth settings to unlimited growth, but it wouldn't let me, returning the same error. I can reproduce the problem without any XTP stuff at all, except for just the filegroup. Here's how: http://pastebin.com/jWSiEU9U
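To quantify how full the log actually is while the error is occurring, the log-space DMV can be read per database (a sketch; run it inside the affected database):

```sql
-- Current log size and percentage used for the database you run this in.
SELECT total_log_size_in_bytes / 1048576 AS log_size_mb,
       used_log_space_in_bytes / 1048576 AS used_mb,
       used_log_space_in_percent
FROM sys.dm_db_log_space_usage;
```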
user3469285 (671 rep)
Feb 23, 2015, 05:25 PM • Last activity: Sep 6, 2023, 07:39 PM
0 votes
2 answers
735 views
What is the quickest and most efficient way to remove an in-memory filegroup from a large SQL Server 2019 database?
I have a database with an in-memory filegroup that I want to remove (I won't go into many details why, but I want to move the database to another place without this filegroup). I'm trying to move the database to Azure SQL Managed Instance (General Purpose), and this PaaS solution doesn't support in-...
I have a database with an in-memory filegroup that I want to remove (I won't go into many details why, but I want to move the database to another place without this filegroup). I'm trying to move the database to Azure SQL Managed Instance (General Purpose), and this PaaS solution doesn't support in-memory filegroups. Notes:
- This is running SQL Server 2019
- This database is quite big: more than 500GB
- There is no data in that filegroup at this time
- It has tables (of course), views, stored procedures, synonyms...

From the documentation: "Once you use a memory-optimized filegroup, you can only remove it by dropping the database.". So I need to drop the database and create it again without the filegroup, but the documentation doesn't detail the best way to do this. What is the best option?

I've tried the Copy Database Wizard, but it copies the filegroup that I want to delete. I've tried the SQL Server Import and Export Wizard, but it takes too long. I've tried to export to another instance, but due to the size of the database it fails after several hours (network error or session lost). I've tried the "Generate Scripts" option and done this:
1. Export schema only
2. Export data only
3. Edit the file to remove the filegroup
4. Drop the database
5. Recreate the database with the edited file
6. Import the data

But this is taking too long. Is there any other (better and quicker) way to do this than the following?
1. Extract the schema from DatabaseA (Generate Scripts: Types of data to script: Schema only)
2. Put DatabaseA in single-user mode
3. Create DatabaseB in the same instance
4. Create all the objects in DatabaseB from the extract, without that filegroup
5. Use the SQL Server Import and Export Wizard to copy all the data from DatabaseA to DatabaseB
6. Rename DatabaseA to DatabaseA_old
7. Rename DatabaseB to DatabaseA
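As a sanity check before any of the migration attempts above, something like this lists everything still tied to the memory-optimized filegroup (a sketch; adjust for your schema):

```sql
-- Memory-optimized tables still present in the database:
SELECT name, durability_desc
FROM sys.tables
WHERE is_memory_optimized = 1;

-- Memory-optimized table types (these also pin the filegroup):
SELECT name
FROM sys.table_types
WHERE is_memory_optimized = 1;
```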
dcop7 (29 rep)
Jun 4, 2023, 08:08 PM • Last activity: Jun 5, 2023, 02:46 PM
2 votes
1 answers
965 views
How to get the space used by an In-Memory table in SQL Server
I'm trying to understand how much storage an In-Memory table consumes in SQL Server: SELECT is_memory_optimized,durability_desc FROM sys.tables WHERE name='TBL_HSD_AD_OR' select count(*) [#Rows] from TBL_HSD_AD_OR with(nolock) [![enter image description here][1]][1] Now when I'm using `EXEC sp_spaceused...
I'm trying to understand how much storage an In-Memory table consumes in SQL Server:

```sql
SELECT is_memory_optimized, durability_desc FROM sys.tables WHERE name = 'TBL_HSD_AD_OR';

SELECT COUNT(*) [#Rows] FROM TBL_HSD_AD_OR WITH (NOLOCK);
```

(screenshot of the query results)

Now when I run `EXEC sp_spaceused` I get:

```sql
EXEC sp_spaceused 'TBL_HSD_AD_OR';
```

(screenshot of the sp_spaceused output)

Is there a different way to get the space used for Memory-Optimized tables?
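Since `sp_spaceused` does not cover memory-optimized storage, the XTP memory DMV seems aimed at exactly this (column names per the documentation for SQL Server 2014+; verify on your build):

```sql
-- Per-table memory consumption for memory-optimized tables,
-- split between table data and indexes.
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_allocated_for_table_kb,
       memory_used_by_table_kb,
       memory_allocated_for_indexes_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = OBJECT_ID('TBL_HSD_AD_OR');
```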
Yosi Dahari (517 rep)
Sep 13, 2016, 06:18 AM • Last activity: Jan 29, 2023, 03:07 PM
0 votes
0 answers
201 views
LocalDB build and Memory Optimized table types
We have a DevOps pipeline for our DB Build/Release process and we're using VS SQL Database projects. We have one DB which makes extensive use of memory optimized table variables, used for transferring data to/from a Calc Engine & DB. I know that LocalDB doesn't support FILESTREAM (needed for MO table...
We have a DevOps pipeline for our DB build/release process and we're using VS SQL Database projects. We have one DB which makes extensive use of memory-optimized table variables, used for transferring data to/from a Calc Engine & the DB. I know that LocalDB doesn't support FILESTREAM (needed for MO tables), but I'm not looking to nail up a LocalDB. QUESTION: can LocalDB handle the DB build/release process for validation purposes? If not, will I need a permanent Azure SQL DB for this purpose?
SqlNightOwl (45 rep)
Nov 22, 2022, 05:16 PM
0 votes
0 answers
1717 views
"There is insufficient system memory in resource pool 'default' to run this query" error with MEMORY_OPTIMIZED tables
I use Memory-Optimized Tables in my SQL Server. However, the error "There is insufficient system memory in resource pool 'default' to run this query" occurs after several drop/create/insert operations with MEMORY_OPTIMIZED tables. For example I have created the memory_optimized table and I have loade...
I use Memory-Optimized Tables in my SQL Server. However, the error "There is insufficient system memory in resource pool 'default' to run this query" occurs after several drop/create/insert operations with MEMORY_OPTIMIZED tables. For example, I created a memory-optimized table and loaded some data into it. After dropping and recreating the table, inserting the same data may fail with the "insufficient system memory" error. Why might the error occur? Is there any way to avoid it?

The steps for reproducing the problem:

Run SQL Server in Docker via:

```
sudo docker run --memory=3g -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=Ab123456" -p 1434:1434 --name sql1 --hostname sql1 -d mcr.microsoft.com/mssql/server:2022-latest
```

Create a test database:

```sql
CREATE DATABASE my_db;
ALTER DATABASE my_db ADD FILEGROUP my_db CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE my_db ADD FILE ( NAME=my_db_my_db, FILENAME='/tmp/my_db_my_db' ) TO FILEGROUP my_db;
ALTER DATABASE my_db SET RECOVERY FULL;
BACKUP DATABASE my_db TO DISK = 'NUL';
```

Generate some data for testing:

```sql
CREATE TABLE test_data (
    id int NOT NULL identity(1,1),
    field varchar(max) NOT NULL
) WITH (DATA_COMPRESSION = PAGE);

declare @i int;
set @i = 1;
while @i <= 1000000
begin
    set @i = @i + 1;
    INSERT INTO test_data (field) VALUES ('asfd');
end;
```

Create the memory-optimized table and load it:

```sql
CREATE TABLE my_memory_table (
    id int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1500000),
    field varchar(max) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

INSERT INTO my_memory_table (id, field)
SELECT TOP 1000000 id, field FROM test_data;
```

Repeat the following queries several times until the error occurs (in our case after 2 or 3 repeats):

```sql
DROP TABLE my_memory_table;

CREATE TABLE my_memory_table (
    id int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1500000),
    field varchar(max) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

INSERT INTO my_memory_table (id, field)
SELECT TOP 1000000 id, field FROM test_data;
```
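If it helps diagnose, this is a sketch for watching how much memory the in-memory OLTP engine is holding between the drop/create/insert cycles (the clerk-name filter is an assumption; verify on your build):

```sql
-- Memory held by the XTP (in-memory OLTP) memory clerks;
-- run before and after each DROP/CREATE/INSERT cycle to see
-- whether memory is actually being released.
SELECT type, name, pages_kb
FROM sys.dm_os_memory_clerks
WHERE type LIKE '%XTP%';
```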
Iga21207 (1 rep)
Oct 3, 2022, 01:39 PM • Last activity: Oct 3, 2022, 01:49 PM
Showing page 1 of 20 total questions