Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
1094
views
Is MySQL Cluster just for in-memory databases?
On the MySQL website I see the following definition of MySQL Cluster.
> MySQL Cluster is a technology that enables clustering of in-memory
> databases in a shared-nothing system.
I want to use clustering for a chat application like Viber, where several Ejabberd servers use MySQL as their database. Is it necessary that the databases use in-memory tables? According to this definition, should all databases and tables be in-memory or not?
This is the link to the MySQL Cluster definition.
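For reference, NDB Cluster also supports disk-based storage for non-indexed columns, so not everything has to live in memory. A minimal sketch, assuming NDB Cluster with Disk Data configured (all object names here are made up for illustration):
-- undo log and tablespace for disk-based NDB columns
CREATE LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_1.log'
    INITIAL_SIZE 128M
    ENGINE NDBCLUSTER;
CREATE TABLESPACE ts_1
    ADD DATAFILE 'data_1.dat'
    USE LOGFILE GROUP lg_1
    INITIAL_SIZE 256M
    ENGINE NDBCLUSTER;
-- indexed columns (here: id) stay in memory; the remaining columns go to disk in ts_1
CREATE TABLE messages (
    id   BIGINT NOT NULL PRIMARY KEY,
    body VARCHAR(1000)
) TABLESPACE ts_1 STORAGE DISK ENGINE NDBCLUSTER;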
amir jj
(289 rep)
Mar 27, 2016, 05:06 PM
• Last activity: Jul 2, 2025, 07:05 PM
0
votes
1
answers
213
views
Oracle XE 18c inmemory settings
I am trying to do this in Oracle XE 18c.
I can't alter the inmemory_size parameter, but I can alter sga_target.
Can anyone help me?
SQL> alter system set inmemory_size=200M;
alter system set inmemory_size=200M
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-02095: specified initialization parameter cannot be modified
SQL> alter system set sga_target=1024M;
System altered.
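For comparison, a hedged sketch of how this parameter is typically changed: while inmemory_size is 0 it usually cannot be altered in the running instance, so the new value is written to the spfile and picked up after a restart (this assumes the instance uses an spfile, and the value still has to fit inside XE's SGA limit):
ALTER SYSTEM SET inmemory_size = 200M SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP
-- verify after the restart
SHOW PARAMETER inmemory_size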
Myeongsik Joo
(101 rep)
Aug 3, 2020, 06:08 AM
• Last activity: Jun 19, 2025, 07:04 PM
1
votes
0
answers
39
views
Is it even possible to create a scalable rhyming dictionary for 10 million words in a single language like English?
I'm going in circles brainstorming ideas and TypeScript or SQL code to implement basically a "rhyming database". The goal of the rhyming database is to find rhymes for all words, not just exact rhymes but "nearby or close rhymes" too (like Rap music, etc.). Here are some of the facts and features:
1. Estimate 10 million English words for now (but realistically I'm thinking about doing this for ~30 languages).
2. I think rhymes would follow something like a reverse exponential curve (let's just imagine): many short words rhyme with long words, but it tapers off as words get longer.
3. We only will support up to 3 syllables of word-end rhyming.
4. Don't worry about the system for capturing the phonetic information of words, we can use something like the [CMU pronunciation format/dictionary](https://en.wikipedia.org/wiki/CMU_Pronouncing_Dictionary#Database_format) . I have a system for computing phonetic information (too involved to describe for this post).
5. In a not-worse-but-bad-case, let's say there are 1000 rhymes for every word, that is 10m x 1k = 10 billion relationships between words. At 10,000 rhymes, that is 100 billion, so the database might start running into scaling problems.
6. Ideally, we compute a "similarity score", _comparing each word to every other word_ (cross-product), and have a threshold things must score higher than to count as a rhyme.
7. We then sort by the similarity score.
8. We allow _pagination_ based on an input pronunciation text, and you can jump to specific pages in the rhyme query results.
Well, all these features together seem like an impossible ask so far: **pagination**, **complex/robust similarity scoring** (not just hacky extremely simplified SQL functions for calculating basic scores, but advanced cosineSimilarity scoring, or even more custom stuff taking into account sound-sequences in each word), **10 million words**, **up to 3 syllables of rhyming**, **fast query time**, and ideally not requiring a huge memory-intensive server.
I have been essentially cycling through 5 or 10 solutions to this problem with ClaudeAI (either backed by SQL, or just in-memory), but it can't seem to solve all those problems at once; it always leaves one key problem unsolved, so the whole thing won't work.
- First solution was in-memory, for every word, compute a robust vector similarity score based on the pronunciation/phonemes/sounds of each word, cross-product style. This seems like the ideal solution (which would give you 100% accurate results), but it won't scale, because 10m x 10m is trillions and beyond that. Especially not possible in the DB. By precomputing all similarities between every pair of words, search is easy, as there is a map from input to array of rhymes, already sorted by score. Pagination is easy too. But it won't scale.
- Next "solution" was a SQL version, with an **extremely primitive**
phoneme_similarity
SQL function. Then a query for all rhymes would be something like:
const query = `
WITH scored_rhymes AS (
SELECT
w.word,
(
phoneme_similarity(w.last_vowel, ?) * 3 +
phoneme_similarity(w.penultimate_vowel, ?) * 1.5 +
CASE WHEN w.final_consonant_cluster = ? THEN 2 ELSE 0 END +
CASE WHEN substr(w.stress_pattern, -1) = substr(?, -1) THEN 1 ELSE 0 END +
CASE WHEN substr(w.stress_pattern, -2) = substr(?, -2) THEN 0.5 ELSE 0 END
) AS score
FROM words w
WHERE w.word != ?
AND w.last_vowel = ?
)
SELECT word, score
FROM scored_rhymes
WHERE score > 0
ORDER BY score DESC, word
LIMIT ? OFFSET ?
`;
While it seems to handle pagination, the scoring logic is severely lacking. This won't give quality rhyme results; we need much more advanced phonetic sequence clustering and scoring logic. But it would scale, as there is just a single `words` table with some phonetic columns. It's just not going to be accurate/robust enough scoring- and rhyming-wise.
- A third solution it came up with, did the advanced scoring, but _after_ it made a DB query (DB-level pagination). This will not result in quality pagination, because a page worth of words are fetched based on non-scored data, then scores are computed on that subset in-memory, and then they are sorted. This is completely inaccurate.
- Then the fourth solution, after saying how it didn't meet all the constraints/criteria, it did a SQL version, with storing the cross product of every word pair, precomputing the score! Again, we did that already in memory, and it definitely won't scale storing 10m x 10m links in the DB.
So then it is basically cycling through these answers with small variations that don't have a large effect or improvement on the solution.
_BTW using AI to help think through this has gotten me way deeper into the weeds of solving this problem and making it a reality. I can think for days and weeks about a problem like this on my own, reading a couple papers, browsing a few GitHub repos, ... but then I think in my head "oh yeah I got something that is fast, scalable, and quality". Yeah right haha. Learning through AI back and forth helps getting working data structures and algorithms, and brings new insights and pros/cons lists to my attention which I would otherwise not have figured out in a timely manner._
So my question for you now, after a few days of working on this rhyming dictionary idea, is: is there a way to solve this so that all the constraints of the system are satisfied (pagination/scoring/10m-words/3-syllables/fast-query/scalable)?
An [answer to my StackOverflow question](https://stackoverflow.com/questions/79101873/how-to-build-a-trie-for-finding-exact-phonetic-matches-sorted-globally-by-weigh/79102113?noredirect=1#comment139481345_79102113) about finding phonetic matches in detail suggested I use a [succinct indexable dictionary](https://en.wikipedia.org/wiki/Succinct_data_structure#Succinct_indexable_dictionaries), or even the [Roaring Compressed Bitmap](https://roaringbitmap.org/) data structure. But from my understanding so far, this still requires computing the cross product and scoring; it just might save some memory. I don't know whether it would efficiently store trillions or quadrillions of associations (even in-memory, on a large machine).
So I'm at a loss. Is it impossible to solve my problem as described? If so, what should I cut out to make this solvable? Either what constraints/desires should I cut out, or what other things can I cut corners on?
_I tagged this as PostgreSQL because that's what I'm using for the app in general, if that helps._
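One direction not covered by the attempts above, sketched under assumptions: since the app is on PostgreSQL, an approximate-nearest-neighbour index such as the pgvector extension (0.5+ for HNSW) can stand in for the precomputed cross product, provided each word's rhymable tail can be encoded as a fixed-length vector; the table, the `tail_embedding` column, its dimension, and the embedding scheme are all hypothetical here:
-- hypothetical phonetic embedding of each word's last (up to) 3 syllables
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE words_phonetic (
    id             bigserial PRIMARY KEY,
    word           text NOT NULL,
    tail_embedding vector(64) NOT NULL
);
CREATE INDEX ON words_phonetic USING hnsw (tail_embedding vector_cosine_ops);
-- "rhymes" become nearest neighbours of the query word's tail embedding;
-- scoring, ordering and pagination all come from the index ($1 is the query vector)
SELECT word, 1 - (tail_embedding <=> $1) AS score
FROM words_phonetic
ORDER BY tail_embedding <=> $1
LIMIT 20 OFFSET 40;
This stores no word-pair rows at all; the trade-offs are that the ranking is approximate and that deep OFFSET pagination still degrades, so keyset pagination on (score, word) may eventually be needed.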
Lance Pollard
(221 rep)
Oct 19, 2024, 06:08 AM
• Last activity: Oct 19, 2024, 06:14 AM
1
votes
0
answers
80
views
Capturing in-memory OLTP transaction aborted errors in SQL Server
In one of my SQL Server 2016 Enterprise instances, `sp_BlitzFirst` is reporting hundreds of aborted transactions per second in one of my in-memory OLTP databases.
I've set up error logging with try/catch for all procedures in the DB but they've caught nothing yet. Similarly, an extended event set up to capture errors isn't showing anything related to aborted transactions.
The `sys.dm_xtp_transaction_stats` and `sys.dm_os_performance_counters` DMVs only seem to provide a total number of transactions aborted but don't provide details regarding the procedure(s) causing the errors.
It's probably an error related to the ones listed in this MS doc, but I have no way of knowing which one at the moment:
https://learn.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/transactions-with-memory-optimized-tables?view=sql-server-ver15#conflict-detection-and-retry-logic
Is there any way to capture this type of error?
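One variant of the extended-event approach, in case it differs from what's already in place (a sketch; the session name is arbitrary and the error numbers are the documented conflict errors: 41301 dependency failure, 41302 write conflict, 41305/41325 validation failures, 41839 too many commit dependencies):
CREATE EVENT SESSION xtp_abort_errors ON SERVER
ADD EVENT sqlserver.error_reported (
    ACTION (sqlserver.session_id, sqlserver.client_app_name, sqlserver.sql_text)
    WHERE error_number = 41301 OR error_number = 41302
       OR error_number = 41305 OR error_number = 41325
       OR error_number = 41839
)
ADD TARGET package0.event_file (SET filename = N'xtp_abort_errors');
ALTER EVENT SESSION xtp_abort_errors ON SERVER STATE = START;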
Marcus
(89 rep)
Jul 29, 2024, 01:18 PM
• Last activity: Oct 15, 2024, 08:08 PM
6
votes
3
answers
12469
views
Is it possible to have a temporary, on-the-fly PostgreSQL table which only ever lives in RAM?
I know that PostgreSQL automatically uses RAM only for tables that are small enough. But I'm not just talking about the data inside the table, but about the entire table itself.
Basically, I've (poorly) re-implemented various database functionality in PHP arrays, for example when I fetch data from an API and want to sort it or "massage" it before displaying it in my control panel. In such a situation, it makes no sense (at least to me) to have an actual database table around which is only ever used for this, with temporary data. It would be much better if I could on-the-fly create a table which I fill up in RAM, sort and then fetch records from with normal PG SQL queries.
Is this a thing? It feels stupid that I have array structures and various functions that try to mimic SQL's "SORT BY".
Naturally, I'm not talking about executing `CREATE TABLE`, adding the data to it, returning it to PHP, and then running `DROP TABLE`. That would be ridiculously bad for performance.
If there is no way to do this, I'll just accept it, but it's something which I've often thought would make perfect sense.
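For comparison, PostgreSQL's session-local temporary tables are fairly close to this: they exist only for the connection that created them, are never WAL-logged, and stay in per-session temp_buffers while they fit. A small sketch (table and column names invented):
-- give this session more room for temp tables (must be set before first use)
SET temp_buffers = '256MB';
-- visible only to this connection; dropped automatically when it closes
CREATE TEMPORARY TABLE api_scratch (
    id      integer,
    payload jsonb
);
INSERT INTO api_scratch (id, payload)
VALUES (1, '{"source": "api"}'), (2, '{"source": "api"}');
SELECT * FROM api_scratch ORDER BY id DESC;
The caveat is that the table only lives as long as the connection that created it, so the pattern fits persistent connections or a single request/response cycle.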
user15080516
(745 rep)
Feb 10, 2021, 03:10 AM
• Last activity: May 6, 2024, 10:45 AM
4
votes
2
answers
905
views
In-memory OLTP databases take very long to recover during startup
We use SQL Server 2019 on Windows with In-Memory OLTP activated on some databases. After a server reboot/service restart, the in-memory databases take very long to become available (more than an hour), even though most tables are not durable. The size of the memory-optimized objects is very small: 10 MB.
We see a background session on the master DB with wait type WAIT_XTP_RECOVERY (110,514,580 ms) and almost no reads. CPU cores are at 100%. Disks are idling.
We use transparent data encryption (TDE) for this database. This database uses synonyms to access another DB on the same instance. It uses Service Broker. The instance has transactional replication set up on databases without In-Memory OLTP activated.
Adding CPUs makes it faster. This is a low-end machine, but not terrible. XTP engine version 2.11.
Any idea what's going on?
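A small sketch for watching the recovery from another session while the databases come online:
-- sessions currently waiting on In-Memory OLTP recovery
SELECT session_id, database_id, command, wait_type, wait_time
FROM sys.dm_exec_requests
WHERE wait_type = N'WAIT_XTP_RECOVERY';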
maxschaf
(43 rep)
Jan 30, 2024, 09:54 PM
• Last activity: Jan 31, 2024, 06:43 PM
0
votes
1
answers
449
views
SQL Server 2019 SE: considerations for adopting in-memory OLTP tables
I have an SQL Server database that is almost a read-only DB (the write procedure happens at scheduled times, typically twice a day).
I have recently discovered in-memory tables (I'm a developer, not a DBA expert) and am wondering whether it could be a good idea to move the DB to an "in-memory" version, and what the general issues of this configuration are.
In-memory tables are a new topic to me, and I am quite confused about whether those tables only speed up IO operations, or also plain queries (in particular queries that are already decent).
I am also curious to understand whether this technology is a good choice even when SQL Server can use reasonably fast SSDs.
I don't want to be too broad in my question, but I am also curious about possible known issues that may happen during migration to this technology.
PS:
To give you a bit of background, the idea came to my mind because I am facing a small performance issue. Queries are already quite performant, but I have a few of them (already optimized as much as I can) that take roughly 0.2-0.3 s, which is too much for my needs.
Just out of curiosity, I will add that this timing requirement is due to the fact that those "slow" queries are bound to a web request that should complete in less than a second.
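To make the discussion concrete, this is roughly what declaring a memory-optimized table looks like (a sketch; the database name, path and bucket count are placeholders), including the MEMORY_OPTIMIZED_DATA filegroup the database needs first:
-- one-time database prerequisite
ALTER DATABASE MyReadMostlyDb
    ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE MyReadMostlyDb
    ADD FILE (NAME = 'imoltp_data', FILENAME = 'D:\Data\imoltp_data')
    TO FILEGROUP imoltp_fg;
-- a lookup-style table kept entirely in memory, with durable data
CREATE TABLE dbo.ProductLookup
(
    ProductId   int           NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    ProductName nvarchar(200) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
Memory-optimized tables mainly remove latch and lock contention under heavy concurrent DML; whether they shave much off an already-optimized 0.2-0.3 s read depends on where that time is actually being spent.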
Skary
(368 rep)
Jul 7, 2023, 10:55 AM
• Last activity: Jul 7, 2023, 12:15 PM
2
votes
0
answers
168
views
MySQL release memory allocated for a temp table
I'm using a MySQL 5.6 database to insert tens of millions of records across multiple tables.
Before the insertion I send all my data into a temporary MEMORY table; for that reason I have increased both `tmp_table_size` and `max_heap_table_size` to 15 GB.
It works well so far, but after I explicitly drop the temp table and after my connection is closed, MySQL keeps using the memory allocated for the temp table, and the only way I'm able to free that memory is to restart the service.
My application runs every few hours and I eventually run out of memory after 2-3 runs. Is there another way to fix this without restarting the service?
UPD: here's the structure of the temp table:
**Temp table DDL**
CREATE TEMPORARY TABLE ids_temp (
AAID binary(16) NOT NULL,
GroupID int(10) NOT NULL,
INDEX temp_GroupID_idx (GroupID)
) ENGINE = MEMORY
**Inserts into the temp table** (using MySQLCursor.executemany() )
INSERT INTO ids_temp (ID, GroupID) VALUES (%s, %s)
**Drop statement**
DROP TEMPORARY TABLE IF EXISTS ids_temp
**Inserts to other tables**
INSERT IGNORE INTO data.{GroupID} (ID)
SELECT T.ID
FROM ids_temp T
WHERE T.GroupID = {GroupId}
Maksim Vi.
(121 rep)
Feb 9, 2023, 06:30 PM
• Last activity: Feb 9, 2023, 08:43 PM
5
votes
2
answers
7123
views
In Memory Database Remove File and FileGroup
I have a SQL Server 2016 database in which I configured an in-memory table.
I want to drop this configuration, so I dropped that table.
When I try to delete the file and filegroup, I get the following error.
USE [InMem_Test]
GO
ALTER DATABASE [InMem_Test] REMOVE FILE [InMemFile]
GO
ALTER DATABASE [InMem_Test] REMOVE FILEGROUP [InMemFileGroup]
GO
> Msg 41802, Level 16, State 1, Line 3 Cannot drop the last
> memory-optimized container 'InMemFile'. Msg 5042, Level 16, State 11,
> Line 5 The filegroup 'InMemFileGroup' cannot be removed because it is
> not empty.
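A couple of checks worth running first (a sketch): the container stays in use while anything memory-optimized still exists in the database, and once the MEMORY_OPTIMIZED_DATA filegroup has been used it generally cannot be removed without dropping and recreating the database.
-- memory-optimized tables still present
SELECT name, durability_desc
FROM sys.tables
WHERE is_memory_optimized = 1;
-- memory-optimized table types count as well
SELECT name
FROM sys.table_types
WHERE is_memory_optimized = 1;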
Banu Akkus
(389 rep)
Jul 14, 2017, 12:30 PM
• Last activity: Jul 1, 2022, 02:17 PM
10
votes
1
answers
1375
views
Efficiently store large list structure in RocksDB so that the data can be retrieved in pages
**Description:**
RocksDB is a key-value storage so we can simply serialise the list of objects & store the value corresponding to a key. This would be ok if the data in the list is small enough.
But if the list is large and ever increasing in size, then we would need the data paginated. In this case, storing the entire serialised list under a single key would not be a good idea: there would be a performance issue, since every time new data is inserted into the list this very large value would need to be read and updated, and at read time, when showing the list to the user, the entire value would be retrieved even though only a part of it was needed.
Example: let's say we want to store the orders placed by a user in RocksDB. We could store this order data under a single key, e.g. “u:1:li:o” : Serialised([O1{}, O2{}, …, On{}]). But if there are thousands of orders placed by the user, we would like to retrieve the orders in pages (10 or 20 records at a time). Storing thousands of orders under the same key, retrieving the entire value, and then returning the required 10-20 records won't be a good idea. Also, adding a new order for the user to the same key will affect performance as described above.
So I am working to design schema for efficiently storing and retrieving such large lists in RocksDB.
If you can give your suggestions on schema design that would be great & very much helpful.
Pinank Lakhani
(103 rep)
Dec 24, 2019, 05:51 AM
• Last activity: Feb 26, 2022, 09:07 AM
1
votes
0
answers
173
views
Bogus SQL Server (XTP) in-memory error 41842 claiming 4B rows updated
We run a large, very active set of in-memory (schema-only) tables - the main table has been split (sharded) into 5 separate databases because the table has a total size of > 50 GB (Standard Edition limits the total size of in-memory tables to 32 GB per database). The largest of the databases contains a 10 GB table with roughly 1.5M rows.
The tables take 10K or so updates per minute, each of which updates between 500-10K rows. Tables are read from ~1K times per minute, returning between 50 and 1K rows.
Anonymized schema and indexes: (screenshots omitted)
There are 4 of these servers. About once a week, one of the servers stops accepting updates and instead returns this message (error Msg 41842): "Too many rows inserted or updated in this transaction. You can insert or update at most 4,294,967,294 rows in memory-optimized tables in a single transaction." That's just not a possible number, so we appear to be hitting some other condition that is triggering this message. When it happens, the server rejects all data changes and we have to reboot and refill the schema-only tables.
Servers are running with 12 CPUs and 128 GB of memory (125 GB for SQL Server).
Microsoft SQL Server 2019 (RTM-CU11) (KB5003249) - 15.0.4138.2 (X64) May 27 2021 17:34:14 Copyright (C) 2019 Microsoft Corporation Standard Edition (64-bit) on Windows Server 2016 Standard 10.0 (Build 14393: ) (Hypervisor)
Has anyone else seen this error?


DavidS
(29 rep)
Aug 16, 2021, 08:32 PM
• Last activity: Aug 16, 2021, 09:19 PM
0
votes
1
answers
498
views
How to run Postgres in-memory on Windows and sync to the filesystem when a timer runs out?
So Postgres is slow for me (in terms of query and update times) on SSD, which is understandable in my case: I use it as a mix of graph database + table processor + NoSQL JSON trees, so lots of loops, graphs, trees, ... Yet my data is not that large; it could all fit into RAM three times over. So I want to speed Postgres up: have it all run in RAM and sync to SSD/HDD once every N minutes. In my dream world, it would save/update only the diff between the current snapshot and the previous one. Is it possible to configure Postgres to do such a thing?
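A rough sketch of the closest built-in approximation (settings go in postgresql.conf, values are illustrative, and an UNLOGGED table is truncated after a crash rather than restored to a snapshot):
# postgresql.conf sketch
shared_buffers     = 1GB        # keep the working set cached in RAM
synchronous_commit = off        # commits do not wait for the WAL flush
checkpoint_timeout = 15min      # dirty pages are flushed roughly every N minutes
max_wal_size       = 4GB        # avoid extra checkpoints between timeouts
-- per-table alternative (SQL): skip WAL entirely for scratch data
CREATE UNLOGGED TABLE graph_edges (
    src bigint NOT NULL,
    dst bigint NOT NULL
);
Checkpoints already write only the pages that changed since the last one, which is roughly the "diff" behaviour described.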
Blender
(75 rep)
Jun 15, 2021, 01:17 PM
• Last activity: Jun 15, 2021, 02:42 PM
0
votes
1
answers
564
views
How to configure InnoDB/MariaDB to not fill ibdata1 with (rollback/undo?) data that is useless in an in-memory database?
I'm trying to run a tiny (~100-200 MB) database in-memory, but ibdata1 grows out of control.
Situation: MariaDB is running on resource-constrained hardware (Raspberry Pi, 1 GB RAM), storing non-critical sensor data (like room temperature). The real file system is an SD card, so frequent writes destroy it. Storing data in-memory would be a good option.
What is not a solution:
- Memory storage engine can't handle TEXT columns
- Aria storage engine can't handle foreign keys
- MyRocks storage engine is not available for 32-bit platforms
- SQLite `:memory:` can't handle the "load": it throws countless 'cannot commit - no transaction is active' errors
What I've tried:
- using tmpfs for datadir and tmpdir
- tweaking InnoDB to minimal resource usage
- disabling doublewrite buffering, disabling change buffering, using READ-UNCOMMITTED isolation
See [configuration](https://github.com/lmagyar/homeassistant-addon-mariadb-inmemory/blob/master/mariadb/rootfs/etc/my.cnf.d/mariadb-server.cnf)
But when I purge old data, the DELETE causes ibdata1 to grow to nearly 100% of the size of the table I delete 10-15% of the rows from. My **guess** is that rollback segments and the undo tablespace are still very active, but I can't "disable" MVCC altogether.
**How to configure InnoDB/MariaDB to not fill ibdata1 with (rollback/undo?) data that is useless in an in-memory database?**
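Two things that may be worth trying, sketched with illustrative values: moving undo logs out of ibdata1 into their own truncatable tablespaces (these are startup options, and on most versions innodb_undo_tablespaces only takes effect when the data directory is initialised), and purging in small batches so the active undo history stays short (the table name below is made up):
# my.cnf sketch
innodb_undo_tablespaces  = 2      # keep undo records out of ibdata1
innodb_undo_log_truncate = ON     # allow the undo tablespaces to shrink again
innodb_max_undo_log_size = 64M
-- purge old rows in small chunks instead of one huge transaction
DELETE FROM sensor_readings
WHERE recorded_at < NOW() - INTERVAL 30 DAY
LIMIT 10000;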
lmagyar
(111 rep)
Feb 8, 2021, 12:30 PM
• Last activity: Feb 9, 2021, 11:16 AM
1
votes
1
answers
236
views
How to enable INMEMORY JOIN GROUP in Oracle 12.2
I have read the official Oracle documentation on INMEMORY JOIN GROUP and run code from its tutorials. However, I am unable to successfully create an INMEMORY JOIN GROUP.
- The installed database is Oracle 12.2
- The INMEMORY TABLE and INMEMORY COLUMN were created successfully.
The error message from the INMEMORY JOIN GROUP creation is below:
Error starting at line : 1 in command -
CREATE INMEMORY JOIN GROUP employees_departments
(employees(department_id), departments())
Error report -
ORA-00900: invalid SQL statement
00900. 00000 - "invalid SQL statement"
*Cause:
*Action:
My code is
CREATE INMEMORY JOIN GROUP employees_departments
(employees(department_id), departments());
Are there any Oracle initialization parameters that should be specified in order to turn on support for INMEMORY JOIN GROUP?
Oracle SQL Developer highlights INMEMORY as incorrect syntax (screenshot omitted).
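Since join groups are server-side 12.2 syntax (SQL Developer's highlighting is only the client parser), a few server-side sanity checks may help; a sketch:
-- confirm the actual server version, compatibility setting and a non-zero column store
SELECT banner FROM v$version;
SHOW PARAMETER compatible
SHOW PARAMETER inmemory_size
-- join groups that already exist, if any
SELECT joingroup_name, table_name, column_name
FROM dba_joingroups;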

BrilliantContract
(113 rep)
Jan 10, 2021, 05:37 PM
• Last activity: Jan 10, 2021, 07:18 PM
1
votes
1
answers
85
views
Where is memory going in In-Memory OLTP?
I have only one memory-optimized table in my database. When I run the query below, I get pages_MB for that DB as 710 MB.
SELECT type
, name
, memory_node_id
, pages_kb/1024 AS pages_MB
FROM sys.dm_os_memory_clerks WHERE type LIKE '%xtp%'
But when I run the query below, I get mem_alloc_total_mb for that table as 2 and the others as 0.
SELECT
object_name(object_id) AS table_name,
memory_allocated_for_indexes_kb / 1024 as mem_alloc_index_mb,
memory_allocated_for_table_kb / 1024 as mem_alloc_table_mb,
memory_used_by_indexes_kb / 1024 as mem_used_index_mb,
memory_used_by_table_kb / 1024 as mem_used_table_mb,
(memory_allocated_for_table_kb + memory_allocated_for_indexes_kb) / 1024 as mem_alloc_total_mb,
(memory_used_by_table_kb + memory_used_by_indexes_kb) /1024 as mem_used_total_mb
FROM sys.dm_db_xtp_table_memory_stats
where object_id = object_id('');
go
The data from the two queries above contradict each other. Am I missing anything?
*NOTE: The durability setting for the memory-optimized table is "SCHEMA_AND_DATA".*
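One more view that may reconcile the two numbers (a sketch, run in the memory-optimized database): the XTP memory clerk counts everything the In-Memory OLTP engine has allocated, including system and per-database structures, while sys.dm_db_xtp_table_memory_stats only counts rows and indexes, so the per-consumer breakdown usually shows where the rest went.
-- per-consumer XTP memory in the current database
SELECT object_name(object_id)        AS table_name,
       memory_consumer_type_desc,
       allocated_bytes / 1024 / 1024 AS allocated_mb,
       used_bytes / 1024 / 1024      AS used_mb
FROM sys.dm_db_xtp_memory_consumers
ORDER BY allocated_bytes DESC;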
sachin-SQLServernewbiee
(473 rep)
Oct 21, 2020, 04:00 PM
• Last activity: Oct 21, 2020, 07:37 PM
0
votes
1
answers
221
views
In Memory OLTP causing OOM Issue in SQL Server even though no data in In Memory OLTP Table
We have only one memory-optimized table in each database, and there are 10 such databases in total. This table is completely empty. But we can still see in the DBCC MEMORYSTATUS output that In-Memory OLTP is consuming 7.68 GB of the 13 GB assigned to SQL Server.
MEMORYCLERK_XTP (node 0)                 KB
---------------------------------------- ----------
VM Reserved                              0
VM Committed                             0
Locked Pages Allocated                   0
SM Reserved                              0
SM Committed                             0
Pages Allocated                          8055696    -- 7.68 GB
Sometimes CHECKPOINT also fails to run because of the OOM situation.
Other than restarting the SQL Server service, is there any other way to resolve this issue?
SQL Server Version: Microsoft SQL Server 2016 (SP2-CU12) (KB4536648) - 13.0.5698.0 (X64) Feb 15 2020 01:47:30 Copyright (c) Microsoft Corporation Enterprise Edition: Core-based Licensing (64-bit) on Windows Server 2012 R2 Standard 6.3 (Build 9600: ) (Hypervisor)

sachin-SQLServernewbiee
(473 rep)
Oct 20, 2020, 03:53 PM
• Last activity: Oct 20, 2020, 04:17 PM
3
votes
1
answers
2232
views
Slow inserts in Oracle 12c inmemory table
We are running some tests to check the in-memory performance on a **12c (12.1.0.2.4 EE) RAC**. Servers have 56GB memory and 20core CPUs.
Our plan is to have a few read-performance critical tables in memory and the rest in disk. The test was to first populate the tables using our insert tool and then run queries on it using JMeter (web application benchmark tool).
The insert tool basically reads records from a file and then inserts records to the DB in blocks and commits.
We started testing with one table and observed slow insert rates straight away. **But when the table is made a NO INMEMORY table, the insert rates were fine.**
**The table has 90 columns, 1 trigger, 15 indexes.**
The test preparation and results are given below.
**Preparation**
_______________
1) Create the table, trigger, indexes.
2) Make table in-memory using "alter table test_table inmemory priority critical"
**Results**
______________
Without Inmemory option (~7000 recs/sec)
Avg time to read 1 record = [0.0241493] ms
Avg time to insert 1 record = [0.141788] ms
Avg time to insert 1 block of 500 number of rows = [70.894] ms
Avg time to commit 2 blocks(500 rows per block) = [3.888] ms
Total time for 2000 blocks of inserts = [141.788] s, at [7052.78] recs/s
Total time for 1000 number of commits = [3.888] s
Total time for 2000 blocks of inserts + 1000 number of commits = [145.676] s
Total time to read 1000000 number of records from file = [24.1493] s
Total time to read 1000000 number of records + 2000 blocks of inserts + 1000 number of commits = [169.825] s
With Inmemory option (~200 recs/sec)
Avg time to read 1 record = [0.0251651] ms
Avg time to insert 1 record = [4.62541] ms
Avg time to insert 1 block of 500 number of rows = [2312.7] ms
Avg time to commit 2 blocks(500 rows per block) = [3.32] ms
Total time for 200 blocks of inserts = [462.541] s, at [216.197] recs/s
Total time for 100 number of commits = [0.332] s
Total time for 200 blocks of inserts + 100 number of commits = [462.873] s
Total time to read 100000 number of records from file = [2.51651] s
Total time to read 100000 number of records + 200 blocks of inserts + 100 number of commits = [465.39] s
The memory parameters of the DB are given below.
NAME TYPE VALUE
lock_sga boolean FALSE
pre_page_sga boolean TRUE
sga_max_size big integer 30G
sga_target big integer 30G
unified_audit_sga_queue_size integer 1048576
inmemory_clause_default string
inmemory_force string DEFAULT
inmemory_max_populate_servers integer 8
inmemory_query string ENABLE
inmemory_size big integer 10G
inmemory_trickle_repopulate_servers_percent integer 1
optimizer_inmemory_aware boolean TRUE
buffer_pool_keep string
buffer_pool_recycle string
db_block_buffers integer 0
log_buffer big integer 1048552K
use_indirect_data_buffers boolean FALSE
memory_max_target big integer 0
memory_target big integer 0
optimizer_inmemory_aware boolean TRUE
pga_aggregate_limit big integer 8G
pga_aggregate_target big integer 4G
We also tried the following, but the results were the same.
1) Stop one instance on the RAC (2 node RAC)
2) Change the inmemory priority to "high" then "low".
Hope someone can point me in the right direction.
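One thing that can be checked while the insert test runs, as a sketch (the segment name is the test table from the question, upper-cased): how much of the segment is populated and whether repopulation activity coincides with the load, since committed inserts are tracked in the in-memory transaction journal and later repopulated in the background.
-- population state of the in-memory segment during the load
SELECT owner, segment_name, populate_status,
       inmemory_size, bytes, bytes_not_populated
FROM v$im_segments
WHERE segment_name = 'TEST_TABLE';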
Ahamed Fazlul Wahab
(31 rep)
Aug 27, 2015, 08:49 AM
• Last activity: May 7, 2020, 08:52 AM
2
votes
0
answers
295
views
Do concurrent in-memory databases exist?
I spent days searching the internet but all the databases I found are at most in-memory but not concurrent.
By concurrency I mean that they run on multiple threads and can do multiple reads concurrently without read locks. This way a system which uses a lot of complex SQL queries could scale much better with increased number of requests per time interval.
The only "solution" to this kind of scaling I found was just running multiple servers with duplicate databases.
If they exist, can you give me an example of some in-memory concurrent (relational) database?
If not, why do they not exist?
Gillian
(121 rep)
Jan 21, 2020, 08:27 PM
2
votes
1
answers
56
views
Literature about In-Memory Database Systems
I am currently writing my bachelor's thesis on in-memory database systems (IMDBS) and would like to ask whether any of you have recommendations for literature that summarizes the technical conception/architecture of IMDBS. Apparently I am not able to find articles, papers, books, etc. that cover the physical architecture of in-memory database systems, only material on logical data storage, indexing, etc.
Any recommendations would be highly appreciated.
Thanks in advance,
toni
Toni Lippmann
(21 rep)
Dec 21, 2019, 10:42 AM
• Last activity: Dec 21, 2019, 11:52 AM
2
votes
2
answers
1207
views
Is an in-memory table faster to read from than a table cached in memory?
sql-server
sql-server-2016
performance
memory-optimized-tables
in-memory-database
performance-tuning
Is an in-memory (aka Hekaton) table faster to read from than a non-in-memory table that's already cached in the memory of a SQL server?
I ask this because I have a case where I'm considering converting one of my tables to an in-memory table. Right now it's pretty fast to load into the memory cache, but the performance of reading / operating against it after it's loaded into the cache is where it's slow.
Can I still see potential performance improvements by converting this table to an in-memory table?
J.D.
(40893 rep)
Nov 27, 2019, 09:19 PM
• Last activity: Nov 28, 2019, 12:38 AM
Showing page 1 of 20 total questions