Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
408
views
Why does MySQL need double-write buffer if there is redo log and undo log?
## Why does MySQL need double-write buffer if there is redo log and undo log?
I have read other posts that answer why MySQL needs the double-write buffer. They point to this article: http://dimitrik.free.fr/blog/archives/2011/01/mysql-performance-innodb-double-write-buffer-redo-log-size-impacts-mysql-55.html
> and it will be impossible to repair it from the redo log as there are only changes saved within redo and not a whole page
That article states the above. But I still don't understand why.
Each page's block contains the LSN that was applied to it. Doesn't it? So, even if only some of the blocks of the page were written to disk due to a partial page write, I can look at the LSN associated with each block and detect if it was a partial write. If it was a partial write, I will replay the redo record for that LSN. The replay will give me the updated set of bytes to patch into the given offset range and I'll apply it to the blocks where the LSN wasn't applied previously.
Won't this work? Why do I need the double-write buffer?
Let's say the above does not work. Couldn't I still use the undo log? I could see that the page was partially updated, get the old value from the undo log, and apply it to essentially roll back the transaction.
## How does the redo log guarantee that a single log record is not partially written to disk if it crosses a block boundary?
What I actually cannot figure out is this: since a single redo log record can cross a block boundary, what mechanism does the redo log use to detect that a record was not fully written, and to truncate the log to the last known complete record instead of applying a partial one?
user855
(101 rep)
Nov 16, 2023, 11:55 PM
• Last activity: Jan 1, 2025, 02:01 PM
1
votes
0
answers
33
views
unqlite not writing to disk after calling store value function
Using unqlite I call unqlite_kv_store to write my key and value, and then the program terminates. I didn't call unqlite_close() on the open database. Now, in SQLite with WAL and synchronous=NORMAL, if that happens, the data that was written in the write call has been written to the WAL file; however, since a checkpoint isn't done on every step, the info in the WAL will be written to the database file the next time it's opened.
However, in unqlite, when the program terminates without unqlite_close() there is just a 512-byte journal file, and it's mostly filled with zeros. In other words, the data wasn't written there.
I thought it was supposed to be ACID compliant. And I haven't set UNQLITE_OPEN_OMIT_JOURNALING or any other non-standard setting.
Zebrafish
(111 rep)
Oct 17, 2024, 02:40 PM
• Last activity: Oct 17, 2024, 03:38 PM
1
votes
1
answers
27
views
MySQL "ACI" mode per transaction?
I am aware of the global configuration variable "innodb_flush_log_at_trx_commit" in MySQL, but that is server-wide.
I am looking for some way to specify that certain transactions can be ACI instead of ACID, in order to reduce commit latency in exchange for the risk of losing the transaction if the database dies unexpectedly.
I would like to be able to specify durability guarantees per transaction, or per session/connection.
Is that possible with MySQL 8.x?
UPDATE: PostgreSQL has this functionality via "Asynchronous Commit". For instance: https://www.postgresql.org/docs/current/wal-async-commit.html . I need something like this for MySQL.
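For reference, the PostgreSQL feature linked above can be scoped to a single transaction; a minimal sketch (the audit_log table is purely hypothetical):

    BEGIN;
    SET LOCAL synchronous_commit = off;  -- applies only until this transaction ends
    INSERT INTO audit_log (msg) VALUES ('low-value event');
    COMMIT;  -- returns without waiting for the WAL flush to reach disk

The same setting can also be changed with a plain SET for the rest of the session, which is the per-connection granularity the question asks about, on the PostgreSQL side.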
jcea
(113 rep)
Apr 10, 2024, 02:39 PM
• Last activity: Apr 10, 2024, 06:19 PM
1
votes
1
answers
64
views
Is READ UNCOMMITTED equivalent to non-ACID (BASE) transaction management?
Relational databases are often associated with the ACID properties, which "protect" data but may hinder performance, even on a single node.
Isolation levels (the I in ACID) shield transactions from various types of interference from other concurrent transactions. The higher the isolation level (think SERIALIZABLE), the greater the performance penalty, because you are intentionally limiting the number of concurrent transactions on the same data set.
BASE databases essentially follow an optimistic approach to conflicts. Conflicts can appear, but they will eventually be solved and all replicas will agree on the same value (eventual consistency).
Is the READ UNCOMMITTED isolation level equivalent, in terms of performance, to using a NoSQL/BASE database? It seems that, in both cases, you avoid locks and gain in terms of performance while losing some consistency.
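As a concrete point of reference, this is how the weakest level is selected per transaction in MySQL; a minimal sketch (the accounts table is hypothetical):

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;  -- applies to the next transaction only
    START TRANSACTION;
    SELECT balance FROM accounts WHERE id = 1;  -- may observe uncommitted ("dirty") rows
    COMMIT;

Even at this level the transaction still keeps atomicity and durability, which is worth keeping in mind when comparing it with BASE systems.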
A. Darwin
(111 rep)
Feb 19, 2024, 08:24 AM
• Last activity: Feb 19, 2024, 11:04 AM
0
votes
1
answers
174
views
When can we say a transaction is committed in mysql?
I know a transaction ensures that all of its queries are committed or rolled back together, but at what point can we say, OK, the transaction has been committed and all changes are durable?
I think it is **when the commit flag is written into the redo log** (taking MySQL as an example). Is that right?
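For context, whether InnoDB actually flushes the redo log to disk at each commit is controlled by the setting the durability guarantee hinges on; a quick way to check it (the default value 1 means the log is written and synced on every commit):

    SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';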
Kaiyu Yang
(13 rep)
Dec 27, 2023, 02:23 PM
• Last activity: Dec 27, 2023, 06:23 PM
1
votes
3
answers
14200
views
How to handle ~1k inserts per second
Assuming one has about 1k requests per second that require an insert.
Now, there are a lot of answers to this on the internet... but they're technically wrong in this specific context. Yes, pretty much any RDBMS can handle 1k inserts per second on standard hardware, but IF AND ONLY IF you drop ACID guarantees. It's surprising how many terrible answers there are on the internet, such as "you can always scale up CPU and RAM", which is supposed to give you more inserts per second, but that's not how it works. The limiting factor is disk speed or, more precisely, how many transactions you can actually flush/sync to disk. And this is the tricky bit.
On decent "commodity hardware" (unless you invest into high performance SSDs) this is about what you can expect:
* SQLite: 30 inserts/s
* MySQL: 80 inserts/s
This is the rate you can insert while maintaining ACID guarantees. This essentially means that if you have a forum with 100 posts per second... you can't handle that with such a setup.
Read requests aren't the problem. You can have many thousands of read requests per second no problem but write requests are usually <100 per second.
Thus, this question is specifically aimed at how one can handle 1k inserts per second while still maintaining ACID guarantees - assuming that a single node can handle about 80 transactions per second.
One way I could see this working is to buffer inserts somewhere in the application logic and submit them as larger transactions to the database (while keeping clients waiting until the transaction is over). That should work fine if you only need single inserts, although it complicates the application logic quite a bit.
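As a rough sketch of that buffering idea (the posts table and rows are made up), many buffered requests share one transaction and therefore one durable log flush:

    START TRANSACTION;
    INSERT INTO posts (user_id, body) VALUES
        (1, 'first buffered request'),
        (2, 'second buffered request'),
        (3, 'third buffered request');
    COMMIT;  -- one fsync for the whole batch instead of one per insert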
mroman
(137 rep)
Jul 17, 2018, 03:15 PM
• Last activity: Sep 10, 2023, 05:22 PM
7
votes
1
answers
4057
views
Transaction and data consistency during a failure
When a database begins a transaction, all statements executed in that transaction are isolated and atomic (and consistent and durable). These are pretty much the definition of a transaction.
[Wikipedia](http://en.wikipedia.org/wiki/ACID#Locking_vs_multiversioning) states that there are some databases that ensure a transaction remains isolated by locking the rows and not unlocking them until the transaction has committed.
My question is: How can a database that solely relies on locking guarantee consistency? If a power outage occurs mid-transaction, there may be data partially written to the row.
Even for databases like SQL Server that use a Temporary DB to perform all the transactions, what happens if a power outage occurs as the database is committing the transactions to disk?
Does the data become corrupted or does the database handle that correctly?
Richard
(1 rep)
Aug 11, 2011, 10:32 PM
• Last activity: Jul 6, 2023, 06:21 AM
1
votes
2
answers
301
views
After server is restarted, the database and the backups are destroyed by the system crash. What ACID properties are broken in this situation?
Let's assume that two transactions execute on a database, both reading and writing. At some point, the system crashes. After the server is restarted, the database and the backups have been destroyed by the crash and the database no longer exists. Which ACID properties would this break?
My first guess is that it breaks only Durability, since we can assume the transactions did nothing to the database (ensuring atomicity), the database is definitely consistent, and I am not sure about isolation, but I don't see how this would break isolation.
ujoeja
(11 rep)
Nov 24, 2021, 07:55 PM
• Last activity: May 9, 2023, 12:33 PM
0
votes
2
answers
139
views
Does ACID guarantee locking (semaphore) until the transaction is committed?
I have the following interaction (pseudocode):
    start transaction
    select delivery for update                      # semaphore
    select delivery_items where processed = false   # get unprocessed data and process it
    update delivery_items SET processed = true
    commit                                          # release semaphore
I was expecting this to be enough to lock the delivery so that no two people would be able to perform actions on its items at the same time, but for some reason I am getting race conditions where two people do manage to modify delivery_items (I am getting the same items processed twice).
Am I not understanding ACID correctly?
Shouldn't the first select block until the other user finishes the transaction with a commit, preventing a race condition between the second select and the update?
All MySQL settings are set to default (except some memory pool sizes).
I am using Doctrine if that makes any difference.
I am also using nested transactions (Doctrine implements them using SAVEPOINTs).
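For reference, a minimal two-session sketch of the locking behaviour the pseudocode above relies on (the table name and id are placeholders); the second session only waits if it issues the same locking read:

    -- Session A
    START TRANSACTION;
    SELECT * FROM delivery WHERE id = 42 FOR UPDATE;  -- acquires the row lock
    -- ... read and process delivery_items ...

    -- Session B, started while A is still open
    START TRANSACTION;
    SELECT * FROM delivery WHERE id = 42 FOR UPDATE;  -- blocks here until A commits or rolls back

A plain SELECT without FOR UPDATE would not wait, since InnoDB's MVCC serves it a snapshot instead of taking the lock.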
HubertNNN
(103 rep)
Apr 11, 2023, 01:42 PM
• Last activity: Apr 12, 2023, 04:26 AM
4
votes
0
answers
909
views
Does Serializable isolation eliminate the need for explicit locks?
Postgres [documentation](https://www.postgresql.org/docs/current/transaction-iso.html#XACT-SERIALIZABLE) on the Serializable isolation level counterposes Serializable with explicit locking, and makes it sound as though explicit locking should not be needed when using Serializable. I'm looking to understand:
1. Am I misunderstanding the guarantees of Serializable;
2. Do I need to use explicit locks in combination with the Serializable level, or should I just pick one;
3. Why does Serializable behave the way it does in my case.
### Relevant excerpts from documentation that confuse me:
> This level emulates serial transaction execution for all committed transactions; as if transactions had been executed one after another, serially, rather than concurrently.
> To guarantee true serializability PostgreSQL uses predicate locking, which means that it keeps locks which allow it to determine when a write would have had an impact on the result of a previous read from a concurrent transaction, had it run first. In PostgreSQL these locks do not cause any blocking and therefore can not play any part in causing a deadlock. They are used to identify and flag dependencies among concurrent Serializable transactions which in certain combinations can lead to serialization anomalies. In contrast, a Read Committed or Repeatable Read transaction which wants to ensure data consistency may need to take out a lock on an entire table, which could block other users attempting to use that table, or it may use SELECT FOR UPDATE or SELECT FOR SHARE which not only can block other transactions but cause disk access
Sounds like an implication that FOR SHARE / FOR UPDATE should only be needed in Read Committed / Repeatable Read.
> The monitoring of read/write dependencies has a cost, as does the restart of transactions which are terminated with a serialization failure, but balanced against the cost and blocking involved in use of explicit locks and SELECT FOR UPDATE or SELECT FOR SHARE, Serializable transactions are the best performance choice for some environments.
Compares Serializable and explicit locks as though they are interchangeable alternatives with their own pros and cons.
> Eliminate explicit locks, SELECT FOR UPDATE, and SELECT FOR SHARE where no longer needed due to the protections automatically provided by Serializable transactions
Recommends removing explicit locks from Serializable transactions.
### Scenario that I encountered
The data model is as follows.
**books**

| Column name | Data type |
|------------------|----------|
| id | UUID PRIMARY KEY |
| text | TEXT |
| is_published | BOOL |

**favorites**

| Column name | Data type |
|------------------|----------|
| id | INT PRIMARY KEY |
| user_id | UUID FOREIGN KEY |
| book_id | UUID FOREIGN KEY |
I need to ensure that a book can only be added to a user's **favorites** if it is_published. If a book is unpublished while it is being added to favourites, I want the transaction to fail. So I run the following:
    -- session 1
    START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT is_published FROM book WHERE id = $1;

    -- session 2
    START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    UPDATE book SET is_published = false WHERE id = $1;
    COMMIT;

    -- session 1
    -- application checks the value of is_published.
    -- if it is false, the transaction rolls back.
    INSERT INTO favorites (user_id, book_id) VALUES ($2, $1);
    COMMIT;
For some reason, session 1 is still able to commit successfully, which in my mind conflicts with the definition of Serializable: if the transactions were to execute in a different order, it would have produced a different result.
However, if I make data consistency checks "bidirectional", it for some reason works as needed:
    -- session 1
    START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT is_published FROM book WHERE id = $1;

    -- session 2
    START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM favorites WHERE book_id = $1;
    -- application checks if the book is added to favorites by someone
    -- if it is, the transaction rolls back
    UPDATE book SET is_published = false WHERE id = $1;
    COMMIT;

    -- session 1
    INSERT INTO favorites (user_id, book_id) VALUES ($2, $1);
    -- transaction fails with error: "Canceled on identification as a pivot, during write"
So my questions are:
1. When the documentation says "as if transactions had been executed one after another, serially", does this only apply to transaction bodies **verbatim**, i.e. does any conditional application logic that happens outside of SQL break this guarantee?
2. Why on Earth does it work correctly in the second case?
3. Do I ever need to use explicit locks if I correctly use transaction isolation levels throughout the app?
Kit Isaev
(141 rep)
Mar 28, 2023, 10:06 PM
• Last activity: Mar 28, 2023, 10:44 PM
0
votes
1
answers
85
views
How/Why are in-flight transactions lost in synchronous replication in contrast to semi-synchronous
I'm reading through an article on why SQL databases are hard to scale and problems with ACID.
Here's the link to the short article: [Link](http://dbmsmusings.blogspot.com/2010/08/problems-with-acid-and-how-to-fix-them.html)
> Today's solution is usually post-write replication, where each transaction is executed first at some primary replica, and updates are propagated to other replicas after the fact. Basic master-slave/log-shipping replication is the simplest example of post-write replication, although other schemes which first execute each transaction at one of multiple possible masters fall under this category. In addition to the possibility of stale reads at slave replicas, these systems suffer a fundamental latency-durability-consistency tradeoff: either a primary replica waits to commit each transaction until receiving acknowledgement of sufficient replication, or it commits upon completing the transaction. In the latter case, either in-flight transactions are lost upon failure of the primary replica, threatening durability, or they are retrieved only after the failed node has recovered, while transactions executed on other replicas in the meantime threaten consistency in the event of a failure.
I'm assuming that the former statement refers to semi-synchronous replication, and the latter to asynchronous replication, in which a commit is reported as successful as soon as the transaction is committed on the primary replica.
However, I didn't quite get what it means to lose a transaction upon failure, and how in-flight transactions are lost in synchronous replication but not in the semi-synchronous case. Can anyone help explain?
Isn't a transaction saved to disk before being executed?
asn
(127 rep)
Mar 25, 2023, 01:11 PM
• Last activity: Mar 27, 2023, 02:18 PM
1
votes
1
answers
1877
views
How to have uniqueness constraints for structures across multiple tables?
Say I have a schema system something like this:
    create table objects {
      uuid id;
      string type;
    }

    create table object_properties {
      uuid id;
      uuid source_id; // the object which has this property
      string name;    // the property name
      uuid value_id;  // the property value object
    }

    // ...and tables for each primitive data type

    create table string_properties {
      uuid id;
      uuid source_id; // the object which has this property
      string name;    // the property name
      string value;   // the property value string
    }
I then want to create this object:
    {
      type: 'foo',
      bar: {
        type: 'bar',
        baz: {
          type: 'baz',
          slug: 'hello-world'
        }
      }
    }
That is:
    // objects
    id  | type
    123 | foo
    234 | bar
    345 | baz

    // object_properties
    source_id | name | value_id
    123       | bar  | 234
    234       | baz  | 345

    // string_properties
    source_id | name | value
    345       | slug | hello-world
I want to only create this "object tree" if a tree ending in slug: hello-world doesn't already exist. How best can I do that? I can do it easily by first making a query, checking whether the object exists, and then creating it if not. But that is one query followed by one insert. There is a chance that two processes come in at the same time, both make the query, both succeed, and then both make the insert. How can I prevent that? Note, I am currently wrapping each independent query+insert in a transaction, so each transaction has the query followed by the insert.
Or will the update inside the first transaction be readable "outside" by the second transaction? I am using PostgreSQL / CockroachDB; is this a "read uncommitted" sort of setting?
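As an illustration of the race described above, a sketch of the check-then-insert in one transaction (PostgreSQL syntax; table and column names follow the pseudocode schema, and gen_random_uuid() stands in for whatever id generation is actually used):

    BEGIN;
    SELECT source_id
    FROM   string_properties
    WHERE  name = 'slug' AND value = 'hello-world';  -- does the tree already exist?

    -- application: only if the SELECT returned no row, create the tree
    WITH new_obj AS (
        INSERT INTO objects (id, type)
        VALUES (gen_random_uuid(), 'baz')
        RETURNING id
    )
    INSERT INTO string_properties (id, source_id, name, value)
    SELECT gen_random_uuid(), id, 'slug', 'hello-world' FROM new_obj;
    COMMIT;

Two concurrent runs can both see "no row" and both insert; nothing in this block alone closes that window, which is exactly the situation the question describes.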
Lance Pollard
(221 rep)
Aug 2, 2022, 05:56 AM
• Last activity: Aug 9, 2022, 12:32 PM
9
votes
3
answers
10614
views
Is Mysql ACID Compliant?
Is MySQL not ACID compliant, according to PostgreSQL? In some blogs I see that MySQL is not ACID compliant. How true is that?
Let's not consider replication here; let's consider a standalone server and how well MySQL implements ACID.
My understanding of MySQL's ACID is as follows.
> A - Atomicity (a set of operations should all be committed; if one fails, it
> has to roll back. Yes means all are committed; no means even if one failed,
> it has to roll back).
>
> i.e. the features that support this in MySQL are:
>
> - START TRANSACTION; ... COMMIT;
> - autocommit=1;
>
> C - Consistency (PK, FK, UK, NOT NULL). It adheres to the relations and
> constraints of the database; for instance, a parent key can be deleted only
> when its child keys are removed.
>
> I - Isolation. Isolation between users and their state of commit:
> Repeatable Read, Read Uncommitted, Read Committed, Serializable.
>
> D - Durability. In the event of a DB crash, InnoDB recovers the DB by
> applying committed transactions from the redo log (ib_logfile) and
> discarding uncommitted transactions.
Click here for the source of this question. Is it because the blog was created in 2001?
**UPDATE Jun-30-2017:** As per Evan Carroll's response, I have personally tested the blog's experiment on 5.7.18-enterprise. The results obtained from the experiment seem to show that **MySQL is not ACID compliant.**
Mannoj
(1581 rep)
Jun 29, 2017, 07:03 AM
• Last activity: Nov 7, 2021, 02:17 PM
1
votes
1
answers
238
views
What's the correct database behavior when two transactions change the same row/value?
I'm writing a database LRU cache in C++ for an embedded NoSQL database to solve a performance problem, and I'm trying to understand the correct assumed behavior and philosophy behind it.
Say there's a NoSQL database at a certain state X. We start transaction 1 (tx1) and start transaction 2 (tx2) from the same isolated state X. Both transactions attempt to change the same key/value pair. Each transaction changes the value to some value, and the two values are not equal. Tx1 commits, then tx2 commits. What's the correct behavior of the database?
1. The new value is the one committed from tx2, because it overwrites tx1
2. The new value is the one committed from tx1, because committing tx2 should fail
Or is the answer something else?
Can someone please elaborate on how such a system should be programmed if it's ACID compliant?
The DB I'm using and want to cache over is LMDB, which claims to be ACID-compliant.
The Quantum Physicist
(125 rep)
Aug 26, 2021, 05:43 AM
• Last activity: Aug 26, 2021, 04:04 PM
0
votes
1
answers
74
views
Does it make sense to define a new "Data type"? Is there a way to do so?
I'm basically new to SQL, and I was making my own project when this question came to my mind.
I'm making a database that has an entity where an attribute would be a "number" and a "year".
In many programming languages one would create a new object or data type that holds both the number and the year (say, a tuple of INT and YEAR).
I wanted to know if this makes sense in SQL, or whether I should just go with separate INT and YEAR attributes.
To give a bit more detail on the project: I'm creating a DB for my mom, who runs a business organized around "campaigns" (the INT value) directly associated with a "Year", so one would put campaign 09 for year 2020, and so on.
I wanted to create a new data type for the attribute so that I don't have multiple "campaign 09" entries with different years.
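For illustration, two common ways this is usually modelled; the table, type, and column names below are invented for the sketch:

    -- Option A (works in most SQL databases): two plain columns plus a composite
    -- uniqueness rule, so a given (campaign, year) pair can only exist once.
    CREATE TABLE campaign (
        campaign_no   INT NOT NULL,
        campaign_year INT NOT NULL,   -- MySQL also offers a dedicated YEAR type
        CONSTRAINT uq_campaign UNIQUE (campaign_no, campaign_year)
    );

    -- Option B (PostgreSQL): a named composite type, the closest thing to a "tuple".
    CREATE TYPE campaign_id AS (campaign_no INT, campaign_year INT);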
I hope my question is clear, thanks!
EDIT: Read the comments for the answer.
Francisco Laferrière
(103 rep)
Apr 25, 2021, 04:30 PM
• Last activity: Apr 26, 2021, 04:10 PM
1
votes
1
answers
171
views
Can Postgres timestamp be used to implement client side caching?
If I have a pool of connections in a multi-threaded client sending out requests to the database server, can I rely on transaction or statement level timestamps to determine which data is most current?
For example, let's say I have a pair of transactions T1 and T2 that both return the same row, which is updated in some way. Suppose T1 starts first. Is it possible for T2 to return stale data or to complete earlier despite T1's timestamp being earlier? For instance, if T1 is slow and expensive, can it finish later and/or return data that is somehow more current than the data returned by T2?
Specific use case: Let's say we have a web server, a database server, and two different user agents connected to the web server. Let's say each user agent submits a mutation "simultaneously", and the web server wants to update the cache accordingly. How can we cache the correct result on the web server? By "correct," I mean the final result of these two mutations instead of the intermediate result.
user2959071
(63 rep)
Feb 11, 2021, 05:54 AM
• Last activity: Feb 12, 2021, 10:18 AM
11
votes
1
answers
6199
views
Synchronization using triggers
I have a requirement similar to previous discussions at:
- https://dba.stackexchange.com/questions/5608/writing-a-simple-bank-schema-how-should-i-keep-my-balances-in-sync-with-their-t
- https://dba.stackexchange.com/questions/35025/trigger-in-combination-with-transaction
I have two tables, [Account].[Balance] and [Transaction].[Amount]:
    CREATE TABLE Account (
        AccountID INT
      , Balance MONEY
    );

    CREATE TABLE Transaction (
        TransactionID INT
      , AccountID INT
      , Amount MONEY
    );
When there is an insert, update or delete against the [Transaction] table, the [Account].[Balance] should be updated based on the [Amount].
Currently I have a trigger to do this job:
    ALTER TRIGGER [dbo].[TransactionChanged]
    ON [dbo].[Transaction]
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        IF EXISTS (SELECT 1 FROM [Deleted]) OR EXISTS (SELECT 1 FROM [Inserted])
            UPDATE [dbo].[Account]
            SET [Account].[Balance] = [Account].[Balance] +
                (
                    SELECT ISNULL(SUM([Inserted].[Amount]), 0)
                    FROM [Inserted]
                    WHERE [Account].[AccountID] = [Inserted].[AccountID]
                )
                -
                (
                    SELECT ISNULL(SUM([Deleted].[Amount]), 0)
                    FROM [Deleted]
                    WHERE [Account].[AccountID] = [Deleted].[AccountID]
                )
    END
Although this seems to be working, I have questions:
1. Does the trigger follow the relational database's ACID principle? Is there any chance an insert might be committed but the trigger fail?
2. My IF and UPDATE statements look strange. Is there any better way to update the correct [Account] row? (A set-based sketch follows below.)
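Purely as an illustration of what a set-based rewrite of the trigger body above could look like (an untested sketch; it assumes the same Account table and the standard Inserted/Deleted pseudo-tables):

    -- Net change per account from this statement, applied in one UPDATE.
    WITH delta AS (
        SELECT AccountID, SUM(Amount) AS Change
        FROM (
            SELECT AccountID,  Amount FROM Inserted
            UNION ALL
            SELECT AccountID, -Amount FROM Deleted
        ) AS d
        GROUP BY AccountID
    )
    UPDATE a
    SET    a.Balance = a.Balance + delta.Change
    FROM   dbo.Account AS a
    JOIN   delta ON delta.AccountID = a.AccountID;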
Yiping
(227 rep)
Apr 22, 2014, 02:08 AM
• Last activity: May 8, 2020, 06:29 AM
4
votes
2
answers
1450
views
Transaction atomicity implies consistency
Transactions are said to need both atomicity and consistency. But, what is the point of consistency?
Once you ensure that you either update both accounts in a money transfer or neither, you are consistent. The Wikipedia article on consistency says that consistency is a set of programmatic constraints, invariants. But you are free to code your transaction operations any way you like. So I feel that a programmer only needs Atomicity, Isolation and Durability. Consistency is an additional convenience DB tool, like a trigger or a stored procedure, and is not an absolute low-level requirement. Given AID, the programmer can always ensure consistency: it is as simple as checking user data and rejecting the request if some rule is broken. Right?
It looks like Wikipedia identifies Atomicity with Consistency in the Data Consistency article, but not in the article on ACID transactions.
Listening to talk about eventual consistency, I see that developers mean the DB is always in some correct snapshot state, i.e. the data are stabilized and you can query or issue your transaction against the DB, which you could not do if the DB were visible mid-transition. That is, a consistent DB is one which satisfies the constraints at any moment in time. But it satisfies the constraints if operations are atomic. Atomicity implies that we see only transaction results, which implies that we see only snapshots, which means consistency. Why do you need to single consistency out as if it were an independent property?
Valentin Tihomirov
(1 rep)
Mar 11, 2016, 02:07 PM
• Last activity: Jan 22, 2020, 11:19 PM
-1
votes
1
answers
1957
views
Serializable Transactions vs SELECT FOR UPDATE in MySQL
I have a stock table with the columns id, available, sold, status. Before increasing the sold field I have to check that sold + x is less than or equal to available. To make sure every transaction uses the freshest data, and to not allow other transactions to modify it, I used SELECT ... FOR UPDATE.
    START TRANSACTION;
    SELECT * FROM stock WHERE id = 1 FOR UPDATE;
    -- some backend logic here: check if sold + x <= available; if true, update, else unlock
    UPDATE stock SET sold = sold + x WHERE id = 1;
    COMMIT;
But I am not really sure if I am doing it right. I looked into the MySQL documentation and old questions, and read about isolation levels. Should I set the transaction isolation level to SERIALIZABLE, or is that totally redundant?
I am using MySQL 8.0.17.
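For comparison with the locking flow above, a minimal sketch of doing the guard and the increment in one atomic statement on the same (assumed) schema, with x hard-coded to 5 purely for illustration:

    UPDATE stock
       SET sold = sold + 5
     WHERE id = 1
       AND sold + 5 <= available;  -- affects zero rows when there is not enough stock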
Shahin
(65 rep)
Nov 19, 2019, 06:44 AM
• Last activity: Nov 19, 2019, 02:22 PM
0
votes
2
answers
3250
views
How do writers not block reads in PostgreSql
Since "readers do not block writers and writers do not block readers" is what MVCC is all about, how does PostgreSQL prevent a reader from reading inconsistent data from a writer that is in the process of committing its changes to blocks/pages?
So from what I understand about PostgreSQL's MVCC, each table is represented as a heap file and each heap file contains a collection of pages / blocks which are 8 kb segments of information.
So say two transactions T1 and T2 are running concurrently and T1 decides to make some modification to block1. At first, T1 reads block1 and makes modifications to it in memory, and when it's finally done, it decides to commit, i.e. actually write back these in-memory changes to block1. I understand that the xmax of the previous entries it modified is set to id(T1) and a new entry is created with xmin set to id(T1). Now, what I find difficult is: say that while the actual commit process is occurring for T1, T2 reads (while the write by T1 has started but not ended). How is this case handled?
Any answer would be appreciated
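Incidentally, the per-row version bookkeeping described above is visible in PostgreSQL through hidden system columns, which can help when reasoning about this; the table name here is just a placeholder:

    SELECT xmin, xmax, ctid, * FROM some_table LIMIT 5;  -- xmin/xmax are the inserting/deleting transaction ids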
paiakshay998
(31 rep)
Jul 24, 2019, 02:08 AM
• Last activity: Jul 24, 2019, 07:37 PM
Showing page 1 of 20 total questions