
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

-1 votes
1 answer
75 views
If Read Committed Snapshot Isolation is already enabled, what is the cost of enabling Snapshot Isolation?
Suppose that I have a database with Read Committed Snapshot Isolation already enabled. Is there any reason at all to not also enable Snapshot Isolation? Intuitively, you would think that the row versions would be kept around for longer. [The documentation dismisses this](https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide?view=sql-server-ver17#behavior-when-reading-data):

> Even though READ COMMITTED transactions using row versioning provides a transactionally consistent view of the data at a statement level, row versions generated or accessed by this type of transaction are maintained until the transaction completes.

So I am left without any ideas. Assume SQL Server 2022. SQL Server 2025 brought with it Optimized Locking, which creates just enough uncertainty in my mind that I don't want to ask about it here.
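For reference, the two settings are independent database-level options, each switched on separately; a minimal sketch, assuming a hypothetical database named MyDb:

```sql
-- Both are database-scoped options; switching RCSI on needs
-- (near-)exclusive access to the database to take effect.
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Once ALLOW_SNAPSHOT_ISOLATION is on, sessions opt in per transaction:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```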
J. Mini (1225 rep)
Aug 1, 2025, 08:05 PM • Last activity: Aug 4, 2025, 01:07 PM
2 votes
0 answers
63 views
How can I design an experiment to show the benefits of writing while under Snapshot isolation?
We've all read [the documentation for Snapshot isolation](https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide?view=sql-server-ver16#modify-data-without-optimized-locking) and know about Update Conflict Detection and when you should _theoretically_ use Snapshot isolation for writes. However, I have never found anyone who does their writes under Snapshot isolation. Paul White has [a post on how it can go wrong](https://www.sql.kiwi/2014/06/the-snapshot-isolation-level/), but I have never seen anyone discuss what it looks like when it goes _right_. I haven't found it in any textbooks, blogs, or production servers.

**How can I design an experiment to show the benefits of writing while under Snapshot isolation?**

I am particularly interested in comparing write performance. However, what should I vary to test when writing under Snapshot is a good idea? Furthermore, what is a fair comparison to it? Read Committed, RCSI, or something more extreme like Serializable?

I am **not** asking about Read Committed Snapshot or [using Snapshot for reads](https://dba.stackexchange.com/questions/346376/why-not-use-snapshot-isolation-for-everything-read-only); they're both awesome.
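One possible starting point for such an experiment is the standard update-conflict scenario, since any write benchmark under SNAPSHOT must account for it. A sketch with a hypothetical table dbo.t; a benchmark would then vary how often sessions collide on the same rows:

```sql
-- Session 1 (dbo.t is a hypothetical table with int columns id, val)
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
SELECT val FROM dbo.t WHERE id = 1;

-- Session 2, meanwhile: UPDATE dbo.t SET val = val + 1 WHERE id = 1; COMMIT;

-- Back in session 1: touching the row now modified after the snapshot began
-- aborts the transaction with error 3960 (update conflict).
UPDATE dbo.t SET val = val + 1 WHERE id = 1;
COMMIT;
```

The rate of 3960 retries versus time spent blocked is one axis a fair comparison against Read Committed could measure.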
J. Mini (1225 rep)
Jul 23, 2025, 06:48 PM • Last activity: Jul 26, 2025, 12:28 PM
0 votes
1 answer
164 views
Transaction isolation level/design for write-only ledger
This is a simplified example of a problem I've been working on. Say I have the following database schema:
```lang-none
Table: Deposits
+--------+------------+--------+---------+
|   ID   |    Date    | Amount | User ID |
+--------+------------+--------+---------+
| c1f... | 1589993488 |   40.0 | 6c7...  |
| bfe... | 1589994420 |   30.0 | 744...  |
+--------+------------+--------+---------+
```

```lang-none
Table: Withdrawals
+--------+------------+--------+---------+
|   ID   |    Date    | Amount | User ID |
+--------+------------+--------+---------+
| ad4... | 1589995414 |   20.0 | 6c7...  |
| e9b... | 1589996417 |   20.0 | 6c7...  |
+--------+------------+--------+---------+
```
And I'm writing a function that **performs a withdrawal** for a User. It:

1. Sums the deposits (SELECT amount FROM deposits WHERE user_id = ...)
2. Sums the withdrawals (SELECT amount FROM withdrawals WHERE user_id = ...)
3. If the difference is greater than the requested amount, INSERTs a new Withdrawal

We're using MySQL 8 and the default isolation level of REPEATABLE READ. This function may be running as a lambda, so memory locks are not an option, and we don't have distributed locking (i.e. a Redis-based lock) available.

**A Caveat**

There are existing admin operations run at the REPEATABLE READ level to create/delete these entities on-demand, by ID.

**My Questions:**

1. Am I correct in understanding that I need to use SERIALIZABLE as the isolation level for this transaction?
2. Will the SERIALIZABLE range lock prevent the REPEATABLE READ transaction from inserting new rows into the Withdrawals, or removing rows from the Deposits?
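For illustration, one possible shape of the withdrawal transaction under SERIALIZABLE, assuming the schema above; the @uid and @requested variables are hypothetical placeholders the application would set:

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
-- Under SERIALIZABLE, plain SELECTs take shared next-key locks, so
-- concurrent inserts into the scanned ranges block until COMMIT.
SELECT COALESCE(SUM(amount), 0) INTO @deposited  FROM deposits    WHERE user_id = @uid;
SELECT COALESCE(SUM(amount), 0) INTO @withdrawn  FROM withdrawals WHERE user_id = @uid;
-- Application-side check: proceed only if @deposited - @withdrawn >= @requested
INSERT INTO withdrawals (id, date, amount, user_id)
VALUES (UUID(), UNIX_TIMESTAMP(), @requested, @uid);
COMMIT;
```

This is a sketch, not a definitive answer to the question; the locking behavior of the two SELECTs is exactly what the asker's question 2 is probing.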
Craig Otis (101 rep)
May 20, 2020, 05:37 PM • Last activity: Jul 10, 2025, 03:07 PM
1 vote
1 answer
374 views
Serializable isolation fails even for unrelated rows
I have this table

```
create table "tasks" (id SERIAL PRIMARY KEY, user_id int REFERENCES "user"(id), title TEXT);
```

I also created an index on `"tasks"(user_id)`.
Then I open two transactions simultaneously. t0, t1, ... denote a series of time snapshots in increasing order.

Transaction 1

```
begin; --T0
set transaction isolation level serializable; --T2
select * from "tasks" where user_id=1; --T4
insert into "tasks" (user_id, title) VALUES (1, 'abc'); --T6
end; --T8
```

Transaction 2

```
begin; --T1
set transaction isolation level serializable; --T3
select * from "tasks" where user_id=2; --T5
insert into "tasks" (user_id, title) VALUES (2, 'abc'); --T7
end; --T9
```
Transaction 1 succeeds but transaction 2 fails. But if I change the `user_id=?` of the select statements to `id=?`, it works. So does SSI allow only unrelated primary-key changes and still fail with unrelated indexed columns? The situation is similar to this question: https://dba.stackexchange.com/questions/242035/isolation-level-serializable-not-working-as-expected. But I have created an index on my column.
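For what it's worth, the usual pattern with SSI is to treat such failures as transient: PostgreSQL reports them as SQLSTATE 40001 (serialization_failure), and the application retries the whole transaction from the top. A sketch against the schema above:

```sql
-- Retry loop lives in application code; on SQLSTATE 40001, re-run from BEGIN.
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT * FROM "tasks" WHERE user_id = 2;
INSERT INTO "tasks" (user_id, title) VALUES (2, 'abc');
COMMIT; -- this is where a serialization failure may surface
```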
Le Hoang Long (111 rep)
Apr 30, 2022, 02:24 PM • Last activity: Jul 1, 2025, 05:04 PM
0 votes
1 answer
131 views
The operation could not be performed because OLE DB provider "MSOLEDBSQL" for linked server "ServerB" was unable to begin a distributed transaction
A Tale As Old As Time... I have a Linked Server setup to a 3rd party database which I have no ownership or access to otherwise. Let's call it `ServerB`. I have a stored procedure (`SomeStoredProcedure`) that selects from Linked Server `ServerB`. If I explicitly set the isolation level to `SERIALIZABLE` and then try to insert the results of `SomeStoredProcedure` into a local temp table, I get the following error:

> OLE DB provider "MSOLEDBSQL" for linked server "ServerB" returned message "The parameter is incorrect.".
>
> Msg 7399, Level 16, State 1, Line 1
>
> The OLE DB provider "MSOLEDBSQL" for linked server "ServerB" reported an error. One or more arguments were reported invalid by the provider.
>
> Msg 7391, Level 16, State 2, Line 1
>
> The operation could not be performed because OLE DB provider "MSOLEDBSQL" for linked server "ServerB" was unable to begin a distributed transaction.

If I just execute the procedure directly (without inserting the results into a local temp table) it works. If I don't use the `SERIALIZABLE` isolation level, it also works. (Other explicit isolation levels work as well.) I have tried disabling Enable Promotion of Distributed Transactions for RPC in the Linked Server options, as mentioned in other answers, but no dice, same error.

I understand that the query wants to promote to a distributed transaction for the above scenario since a Linked Server is involved (I assume enforcing `SERIALIZABLE` isolation is more involved across a remote server). But is it possible to prevent it from promoting to a distributed transaction under these circumstances? The same issue is reproducible using `sp_executesql` to select from the Linked Server as well. Repro code for example:
```
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

DROP TABLE IF EXISTS #LocalTempTable;
CREATE TABLE #LocalTempTable (ID INT);

INSERT INTO #LocalTempTable (ID)
EXEC sp_executesql N'SELECT ID FROM ServerB.DatabaseName.SchemaName.SomeTable;';
```
*Reminder: I don't own this 3rd party server, and can't change any settings on it, such as actually enabling the MSDTC.*
J.D. (40893 rep)
Jun 27, 2025, 06:06 PM • Last activity: Jun 28, 2025, 01:11 PM
1 vote
2 answers
156 views
Why would you ever mix the SNAPSHOT isolation level with an UPDLOCK?
[The documentation](https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide?view=sql-server-ver16#update) points out that you can use `UPDLOCK` hints while running a transaction under the `SNAPSHOT` isolation level. [This more obscure documentation](https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/snapshot-isolation-in-sql-server?redirectedfrom=MSDN#using-lock-hints-with-snapshot-isolation) shows an example. However, I cannot think of any use case! This is my question: Why would you ever mix the SNAPSHOT isolation level with an UPDLOCK? The two big alternatives that come to my mind are simply using a pessimistic isolation level or using the `READCOMMITTEDLOCK` hint.
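One commonly cited pattern is read-then-write within the same snapshot transaction; a sketch with a hypothetical dbo.Accounts table. The `UPDLOCK` makes the read block concurrent writers, so the later UPDATE cannot fail with an update-conflict error (3960):

```sql
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
-- UPDLOCK takes a U lock and reads the current committed row version,
-- serializing this transaction against other would-be writers of the row.
SELECT Balance FROM dbo.Accounts WITH (UPDLOCK) WHERE AccountId = 1;
UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE AccountId = 1;
COMMIT;
```

Whether this beats the alternatives the question names is exactly what the answers would need to address.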
J. Mini (1225 rep)
May 10, 2025, 04:40 PM • Last activity: May 13, 2025, 09:02 AM
2 votes
1 answer
290 views
Isolation Level Conflict
Hi, I was asked to look at a third-party app recently. There were a few queries doing giant reads, and that was easy enough to fix. However, while I was working on the queries I kept seeing different transaction isolation levels being set. I've not really seen that before. I was also looking at the wait stats, and LCK_M_IX is at the top. Did some reading, and that can be caused by an incompatible mode on another thread.

My question is: is this happening because of the queries setting different ISOLATION LEVELS? I'm seeing repeatable read quite a lot. And Serialisable. And read uncommitted. And the database isolation level is read committed snapshot. And the developer has proposed adding more specific isolation levels in their C# code to fix it. I thought that might well make it worse! Which is why I'm asking the question.
CaptainBalrog (33 rep)
Mar 2, 2022, 11:58 AM • Last activity: May 7, 2025, 06:06 AM
3 votes
2 answers
500 views
Can a concurrent transaction see only part of committed changes?
This is a general question about database transaction isolation. Let's say we have a transaction reading some table, and there is a writing transaction committing multiple updated rows in the same table. Is it possible that the reading transaction sees only some of the rows updated and the rest of the rows not updated? (I assume that all rows satisfy the reading statement's search criteria, and there are no intervening writing transactions.)

Does the answer depend on the concurrency control mechanism used by the database? For example, is it possible in PostgreSQL using snapshots, or in SQL Server using locks? Does it depend on the isolation level used, and if yes then how?

This question is not about uncommitted changes. I am interested in whether changes committed by a concurrent transaction have a chance of being only partially seen by the reading transaction. Reformulating the question for PostgreSQL: can a snapshot taken at the beginning of a statement or transaction contain only part of the changes made by a committing transaction? And a similar question for lock-based concurrency control.
yurish (205 rep)
May 1, 2025, 08:50 PM • Last activity: May 3, 2025, 05:27 PM
5 votes
1 answer
424 views
Why not use Snapshot isolation for everything read-only?
Suppose that:

- I already have Snapshot isolation enabled
- I do not have Read Committed Snapshot Isolation enabled
- I have been strict with my `READCOMMITTEDLOCK` hints wherever I truly need them

then is there any reason at all to not use the Snapshot isolation level **for all read-only queries?** It seems that all of the costs from Snapshot isolation are paid at the moment that you enable the database-level setting rather than when you run queries under it.
J. Mini (1225 rep)
May 2, 2025, 10:54 PM • Last activity: May 3, 2025, 02:08 PM
2 votes
1 answer
184 views
How SNAPSHOT ISOLATION works
On SQL Server 2019 with ADR disabled:
```
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRAN;
WAITFOR DELAY '00:00:20'; -- insert of one record to dbo.MARA_CT in different session
SELECT *
FROM dbo.MARA_CT AS mc;
WAITFOR DELAY '00:00:20'; -- insert of one record to dbo.Album in different session
SELECT *
FROM dbo.Album AS a;
WAITFOR DELAY '00:00:20';
SELECT *
FROM dbo.MARA_CT AS mc;
SELECT *
FROM dbo.Album AS a;
COMMIT;
```
I inserted one row into dbo.MARA_CT during the first WAITFOR DELAY, and during the second WAITFOR DELAY I inserted a record into dbo.Album. According to Microsoft documentation:

> The term "snapshot" reflects the fact that all queries in the transaction see the same version, or snapshot, of the database, based on the state of the database at the moment in time when the transaction begins.

The results from my query:

1. dbo.MARA_CT **contains** the record inserted during the first WAITFOR DELAY '00:00:20'.
2. dbo.Album **does not contain** the record inserted during the second WAITFOR DELAY '00:00:20'.

Does the first touch of any table in the query create a "snapshot" for the entire transaction, rather than capturing the state of the database at the moment the transaction begins? Does SNAPSHOT ISOLATION store all versions from the beginning of the transaction for all rows in the database, or only for the tables used in the query? How does this work? If RCSI is enabled and we subsequently enable SNAPSHOT ISOLATION, does SQL Server store any new values/versions in tempdb?

My use of SNAPSHOT ISOLATION is to test new versions of procedures and views (new schemas). Both the new and old procedures need to return the same number of records and utilize the same data, even as the tables involved experience constant inserts, updates, and deletes. Would SNAPSHOT ISOLATION be more advantageous than a database snapshot, since RCSI is already enabled on the databases?
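As a side note, the version store the question asks about can be observed directly; a monitoring sketch using standard DMVs:

```sql
-- Version store footprint in tempdb (pages are 8 KB)
SELECT SUM(version_store_reserved_page_count) * 8 / 1024.0 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;

-- Snapshot transactions currently holding row versions alive
SELECT session_id, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions;
```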
adam.g (465 rep)
Apr 18, 2025, 01:28 PM • Last activity: Apr 24, 2025, 05:03 PM
0 votes
1 answer
388 views
Do I need to set "ConnectionReset" parameter in the connection string when using isolation levels occasionally?
I have MySQL 5.6.44, and a .NET application using Mysql.Data 8.0.12. The application code has not specified any isolation level until now because the default one was enough. Suddenly we need to do a **SERIALIZABLE** transaction in a new feature, and only the transactions in that part of the code specify such a level.

In .NET, connections to the DB are pooled, so after the connection usage is over, the connection goes back to the pool. My concern is that the isolation level set for those operations remains on the connection, affecting other transactions in the code which do not need such a high level. According to the component documentation, there is an optional connection string parameter that indicates if the connection state needs to be reset:

> ConnectionReset, Connection Reset Default: false
>
> If true, the connection state is reset when it is retrieved from the pool. The default value of false avoids making an additional server round trip when obtaining a connection, but the connection state is not reset.

Checking the decompiled component code, it seems that transaction levels are set with **SESSION** scope (MySqlConnection.cs:1238):

```
MySqlTransaction mySqlTransaction = new MySqlTransaction(this, iso);
MySqlCommand mySqlCommand = new MySqlCommand("", this);
mySqlCommand.CommandText = "SET SESSION TRANSACTION ISOLATION LEVEL ";
switch (iso)
{
    case System.Data.IsolationLevel.Chaos:
        this.Throw((Exception) new NotSupportedException(Resources.ChaosNotSu
        break;
    case System.Data.IsolationLevel.ReadUncommitted:
        mySqlCommand.CommandText += "READ UNCOMMITTED";
        break;
    ...
```

According to MySQL documentation about transactions:

> With the SESSION keyword:
>
> The statement applies to all subsequent transactions performed within the current session.
So if I understand correctly, the isolation level set for one operation will affect other operations in our code that are not specifying an isolation level themselves, since the connections are being returned to the pool without resetting. This being the case, I should use the "ConnectionReset" parameter in our connection strings, am I right?

PS: The open source MySqlConnector (which we do not use) seems to set **ConnectionReset** to true by default, which makes total sense to me.
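For illustration, a hypothetical connection string with the parameter set (server, database, and credentials are placeholders, not values from the question):

```
Server=db.example.internal;Database=appdb;Uid=appuser;Pwd=secret;ConnectionReset=true;
```

The trade-off the documentation describes still applies: with reset enabled, every checkout from the pool costs an extra server round trip.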
vtortola (101 rep)
Jan 27, 2020, 11:24 AM • Last activity: Apr 23, 2025, 08:09 PM
0 votes
1 answer
2771 views
PostgreSQL Foreign Data Wrappers - Simultaneous queries won't finish
We're using foreign data wrappers in a database which points to another server (a read-only replica). We run scheduled jobs using Python (more on this here: https://github.com/sqlalchemy/sqlalchemy/discussions/8348), and lately we're facing an issue with a specific query (a select statement with a CTE). This query runs every hour on 10+ workers (Python processes), each with their own conditions.

When I run this same query on the original server it takes ~6s; using fdw it's around 2-3 minutes. Since we reached 10+ workers, these queries are stuck in an "active" state, I can see them in the session manager, and after 20 minutes or so I get the following error: SSL SYSCALL error: EOF detected. (The max connections option is set to 200.) After a few of the workers fail with this error, the last ones fail with the following:
```
ERROR:app.services.cikk.main:(psycopg2.errors.ConnectionFailure) SSL SYSCALL error: EOF detected
server closed the connection unexpectedly
	This probably means the server terminated abnormally
	before or while processing the request.
CONTEXT:  remote SQL command: START TRANSACTION ISOLATION LEVEL REPEATABLE READ
```
The postgres_fdw doc says:

> The remote transaction uses SERIALIZABLE isolation level when the local transaction has SERIALIZABLE isolation level; otherwise it uses REPEATABLE READ isolation level.
> [...] That behavior would be expected anyway if the local transaction uses SERIALIZABLE or REPEATABLE READ isolation level, but it might be surprising for a READ COMMITTED local transaction.

This means that the server keeps read and write locks until the end of the transaction, and the read locks are released as soon as the select operation is performed - but it never finishes. Maybe there's a deadlock (since 10+ queries try to use the same tables on the remote server)? If so, how can I overcome this issue? Does this mean I can only make queries "synchronously" using fdw to make this work?

postgres version:

- PostgreSQL 12.10 (Debian 12.10-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit

server keepalive settings:

- tcp_keepalives_idle: 7200
- tcp_keepalives_interval: 75
- tcp_keepalives_count: 9

Thanks for the help in advance!

### UPDATE:

I think I figured it out.

- I had multiple ~3min queries running simultaneously (these queries used the same tables from a foreign server) and they wouldn't finish.
- I started these manually to monitor what's going on using pg_stat_activity, as @jjanes suggested.
- What I saw is that all of the queries were in an active state, the wait_event_type was IO and the wait_event was BufFileWrite.
- Read into those a little bit to find out what's going on:
  - wait_event_type - IO: the type of event for which the backend is waiting. Pretty self-explanatory - if the value is IO it means that some IO operation is in progress.
  - Since the wait_event was BufFileWrite, I looked into what it means exactly: buffered files in PostgreSQL are primarily temporary files. Temporary files are used for sorts and hashes. BufFileWrite is seen when these in-memory buffers are filled and written to disk.
- So what could cause this? One site (link down below) says: "Large waits on BufFileWrite can indicate that your work_mem setting is too small to capture most of your sorts and hashes in memory and are being dumped to disk" and "Ensure your sorts largely occur in memory rather than writing temp files by using explain analyze on your query".
- I checked our work_mem value with show work_mem;, which was 20971kB. I thought it should be enough, so I looked further.
- The clue here for me was the explain analyze part. I created the foreign server with use_remote_estimate: true, which means that postgres_fdw obtains row count and cost estimates from the remote server.
- The solution was to set this property (use_remote_estimate) to false, and now it seems to be working the way it should.

Useful links:

- https://www.postgresql.org/docs/current/monitoring-stats.html
- https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/apg-waits.iobuffile.html
- https://docs.dbmarlin.com/docs/kb/wait-events/postgresql/buffilewrite/
- https://www.postgresql.org/docs/current/runtime-config-resource.html
- https://www.postgresql.org/docs/current/postgres-fdw.html
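For reference, the option can be flipped on an existing foreign server without recreating it; a sketch with a hypothetical server name remote_replica:

```sql
-- SET changes an option that was specified when the server was created
ALTER SERVER remote_replica OPTIONS (SET use_remote_estimate 'false');
```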
K. Anye (13 rep)
Aug 4, 2022, 02:11 PM • Last activity: Apr 19, 2025, 09:05 PM
1 vote
1 answer
1190 views
Postgres Repeatable Read vs Selecting rows that are changed by another ongoing transaction
Let's say I have a set of select statements that query important field values in a for-loop. The goal is to make sure that the rows are not updated by any other transaction, so that this set of selects doesn't result in data that is out of date.

In theory, it seems that setting the transaction level to repeatable read should solve the problem. In this case, we can begin the transaction in the first select statement and then reuse the same transaction in this loop to make sure that updates are blocked until this transaction is committed. Is there anything I am missing? Probably there are some other ways to be sure that stale rows are not selected.

UPDATE: a bit more detail. I have a series of queries like select name from some_table where id = $id_param, and this $id_param is set in a for-loop. I am worried, however, that this name field might be changed by another concurrent operation for some row, or even get deleted. This would result in corrupted states for the final object. It seems that, based on the comment below, pessimistic locking could be the way to go, i.e. using ...FOR UPDATE, but I am not sure.
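A sketch of the pessimistic-locking variant mentioned above, using the query shape from the question:

```sql
BEGIN;
-- FOR UPDATE locks each selected row; a concurrent UPDATE or DELETE
-- of that row blocks until this transaction commits or rolls back.
SELECT name FROM some_table WHERE id = $id_param FOR UPDATE;
-- ... remaining selects of the loop run inside the same transaction ...
COMMIT;
```

Note the difference from REPEATABLE READ: the snapshot approach detects nothing by itself for plain reads, while FOR UPDATE actively blocks writers.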
Don Draper (209 rep)
Sep 15, 2022, 04:35 PM • Last activity: Apr 18, 2025, 04:05 PM
0 votes
1 answer
611 views
Is this atomic set and get in MariaDB / MySQL: `UPDATE t SET col = @my_var := col + 1 WHERE id = 123`?
I want to atomically update a single row and get its value. I discovered today that MariaDB / MySQL has this additional way of setting user variables using the assignment operator `:=`. I wonder therefore if the following statement, followed by checking the value of `@my_var`, accomplishes what I want:

```
UPDATE t SET col = @my_var := col + 1 WHERE id = 123;
```

It seems it should, but I'd like to confirm with the pundits.

(I know there's also SELECT .. FOR UPDATE.)
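A sketch of the full round-trip, using the statement and variable from the question; both statements must run on the same connection, since user variables are per-session:

```sql
UPDATE t SET col = @my_var := col + 1 WHERE id = 123;
SELECT @my_var;  -- holds the post-increment value of col for row 123
```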
Jan Żankowski (203 rep)
May 27, 2021, 10:31 AM • Last activity: Mar 4, 2025, 01:04 AM
2 votes
1 answer
108 views
Why do InnoDB next-key locks lock more range than the condition?
Assume the following table:
```sql
DROP TABLE IF EXISTS person;
CREATE TABLE person (
    id   int unsigned auto_increment primary key,
    name varchar(255) not null,
    age  tinyint unsigned,
    key (age)
);

INSERT INTO person (name, age)
VALUES ('bob', 10), ('alice', 16), ('jack', 19), ('william', 20);
```
And the following query:
```sql
begin;
select * from person where age < 12 for update;
rollback;
```
Why does InnoDB lock the range [12, 16], as reported by:
```sql
select LOCK_TYPE, LOCK_MODE, LOCK_STATUS, LOCK_DATA
from performance_schema.data_locks;
```

```
TABLE,IX,GRANTED,
RECORD,X,GRANTED,"10, 1"
RECORD,X,GRANTED,"16, 2"
RECORD,"X,REC_NOT_GAP",GRANTED,1
```
Even if there are new records inserted or modified between [12, 16], it will not cause phantom reads, right?
William (155 rep)
Feb 15, 2025, 04:39 PM • Last activity: Feb 17, 2025, 02:41 PM
1 vote
1 answer
201 views
What are the downsides of Accelerated Database Recovery, assuming that Read Committed Snapshot Isolation is enabled?
When I look over the documentation for Accelerated Database Recovery and Read Committed Snapshot Isolation, it appears that every downside of Accelerated Database Recovery is shared by Read Committed Snapshot Isolation. So, assuming that I already have Read Committed Snapshot Isolation enabled, what are the downsides of Accelerated Database Recovery?

I suspect it is no coincidence that both Accelerated Database Recovery and Read Committed Snapshot Isolation are enabled by default in Azure. Assume either SQL Server 2019 or SQL Server 2022.
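For reference, ADR is a database-scoped option; a sketch of enabling and verifying it, assuming a hypothetical database named MyDb:

```sql
ALTER DATABASE MyDb SET ACCELERATED_DATABASE_RECOVERY = ON;

-- Verify which databases have it on
SELECT name, is_accelerated_database_recovery_on
FROM sys.databases;
```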
J. Mini (1225 rep)
Jan 15, 2025, 07:53 PM • Last activity: Jan 16, 2025, 11:38 AM
2 votes
2 answers
314 views
Does a serializable transaction around a single statement have any effect compared to an implicit transaction?
I'm doing some performance auditing of our codebase and have noticed that we always run SQL statements within a serializable transaction. I've also noticed that many of these transactions only perform a single statement. Are there any circumstances where the effect of a statement will change based on whether or not it's running in a serializable transaction?

The official PostgreSQL documentation (https://www.postgresql.org/docs/current/tutorial-transactions.html) states:

> PostgreSQL actually treats every SQL statement as being executed within a transaction. If you do not issue a BEGIN command, then each individual statement has an implicit BEGIN and (if successful) COMMIT wrapped around it.

But it doesn't say anything about the isolation level used. My hope is I can drop these transactions (avoiding at least two roundtrips for BEGIN and COMMIT, along with some other accidental complexity), but I want to avoid subtly changing the semantics of this code.
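If the goal is to keep serializable semantics while dropping the explicit BEGIN/COMMIT round trips, one option (a sketch, not a recommendation) is to change the default isolation level, so implicit single-statement transactions run serializable too:

```sql
-- Per session:
SET default_transaction_isolation = 'serializable';

-- Or per database/role, so application code needs no change:
-- ALTER DATABASE mydb SET default_transaction_isolation = 'serializable';
```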
ocharles (208 rep)
Apr 11, 2024, 03:33 PM • Last activity: Jan 11, 2025, 01:58 PM
3 votes
2 answers
1978 views
Blocking on readable secondary replica
We recently migrated from a LogShipping standby/read-only setup to a Multi Subnet AG setup with readable secondaries. On the old setup we generally had select queries running for a long duration, as the database in question is over 20 TB and has a mix of read/write workload on the primary. After moving to the new AG setup we have started seeing blocking which I am not able to understand. Why are select queries on the secondary blocking other select queries in my readable secondary replica instance, even when the database being queried has RCSI enabled?

Below is what I have captured:

- The lead blocker is some long-running SELECT query that does not show any specific wait type in particular; let's say SPID 129.
- SPID 129 blocks a session ID 45 (I am sure this is not a user session) for almost 6 hours, which is dependent on SPID 129, and the wait type is LCK_M_SCH_M.
- Here comes the problem: this SPID 45 then blocks all other select queries in that 6-hour duration.

I am not able to understand what is happening. Can someone help me troubleshoot or look in the correct direction?
Newbie-DBA (804 rep)
Apr 6, 2021, 02:13 PM • Last activity: Dec 10, 2024, 10:16 AM
0 votes
2 answers
163 views
What wait types would terrible RCSI performance cause?
Read Committed Snapshot Isolation (RCSI) is well understood. The key way that it can cause performance bottlenecks is if the version chains get too long. Are there any wait types that indicate this specific issue? Or is it only shown by other wait types (and if so, which?)
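As a monitoring starting point alongside wait stats, the version store itself can be sized per database; a sketch using a standard DMV (available in recent SQL Server versions):

```sql
-- Version store space consumed per database in tempdb
SELECT database_id, reserved_page_count * 8 / 1024.0 AS version_store_mb
FROM sys.dm_tran_version_store_space_usage;
```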
J. Mini (1225 rep)
Nov 15, 2024, 02:44 PM • Last activity: Nov 30, 2024, 12:42 PM
2 votes
2 answers
1685 views
Snapshot Isolation vs Read Committed - OLTP and Reporting Databases
I just finished reading an excellent article on isolation levels. Our company will soon start development on a rewrite and expansion of our current product. My desire is to have an OLTP database and a separate, more denormalized, reporting database.

Assuming we're somewhat disciplined and most of our ad-hoc and reporting type queries actually go to the reporting database, does it sound appropriate that our OLTP database have a default isolation level of Read Committed (we won't need a more stringent isolation level for OLTP) and our reporting database be Snapshot Isolation (probably RCSI)?

My thinking is that if our OLTP database is actually a true OLTP database, and not serving double duty as a reporting DB, we won't need snapshot isolation and the associated overhead it entails. But snapshot isolation would be desirable on the reporting database so that readers are not blocked by the constant flow of data coming in, and reading the last saved version of a row would be acceptable.
Randy Minder (2032 rep)
Oct 9, 2017, 02:18 PM • Last activity: Nov 29, 2024, 10:47 AM
Showing page 1 of 20 total questions