Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0 votes · 1 answer · 466 views
Try Confirm Cancel (TCC) Protocol
## My Confusion
I have been reading up on the Try Confirm Cancel (TCC) protocol, and I understand the main idea behind it (for the happy flow). My confusion is about how we actually implement or manage it in the edge cases (when a service unexpectedly crashes).
Suppose we have 3 microservices: S1, S2, S3. Suppose S1 and S2 provide the TCC interface as APIs, while S3 is the service which actually calls S1 and S2 and manages the transaction. Assuming `T` represents time, I have listed some scenarios below which seem to be problematic:
Scenario 1
T0: Client calls S3 to trigger a flow which uses TCC.
T1: S3 calls TRY for S1. This succeeds.
T2: S3 crashes
In Scenario 1 above, when S3 recovers from the crash, how will it know to call CANCEL for S1?
Another scenario that faces the same issue:
Scenario 2
T0: Client calls S3 to trigger a flow which uses TCC.
T1: S3 calls TRY for S1 and S2. This succeeds.
T2: S3 calls CONFIRM for S1
T2: S3 crashes
In Scenario 2 above, when S3 recovers from the crash, how will it know to call CONFIRM for S2?
The articles I've read on TCC never seem to address the issues highlighted above.
## My Proposed Solution
I was thinking the only way to solve this is to somehow store a representation of the transaction that is supposed to take place. Taking the same example above, suppose S3 is a microservice trying to coordinate the transfer of money between 2 different banks S1 and S2. The flow would be something like this:
- T0: Client calls S3 to trigger transfer of money between 2 bank accounts.
- T1: S3 stores a record in the DB first before making any calls to S1 and S2 (the `status` field will change according to which stage the TCC is in):
| from_user_id | to_user_id | amount | status |
| -------------- | ------------ | -------- | -------- |
| 100 | 101 | 56 | TO_TRY |
- T2: S3 calls TRY for S1 and S2.
- T3: If both TRYs succeed, S3 should update the `status` field of the DB record to `TO_CONFIRM`. If either TRY fails, S3 should update the `status` field of the DB record to `TO_CANCEL`. Let's assume the cancellation scenario:
| from_user_id | to_user_id | amount | status |
| -------------- | ------------ | -------- | -------- |
| 100 | 101 | 56 | TO_CANCEL |
- T4: S3 calls CANCEL for S1 and S2.
- T5: If both CANCELs succeed, S3 should update the `status` field of the DB record to `COMPLETED`. Otherwise, just leave it in `TO_CANCEL`.
The idea behind this solution is that if S3 crashes at some point, we can have a cronjob to pick up the records in the DB that are stuck in non-COMPLETED statuses, and try to push them to the COMPLETED status (making sure that either the TRY, CANCEL or CONFIRM phase eventually follows through).
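If it helps make the idea concrete, here is a minimal sketch of the sweep such a cronjob might run (the `transfer_transactions` table name, the `updated_at` column, and the `FOR UPDATE SKIP LOCKED` clause are assumptions on top of the example above; the latter needs MySQL 8.0+ or PostgreSQL 9.5+):
```sql
-- Hypothetical recovery sweep for the coordinator S3: pick up transfers
-- stuck in a non-terminal status for longer than some timeout.
SELECT id, from_user_id, to_user_id, amount, status
FROM   transfer_transactions
WHERE  status IN ('TO_TRY', 'TO_CONFIRM', 'TO_CANCEL')
  AND  updated_at < NOW() - INTERVAL 5 MINUTE
FOR UPDATE SKIP LOCKED;  -- lets several recovery workers run concurrently

-- For each row, re-drive the matching phase and then mark it COMPLETED:
--   TO_TRY     -> outcome of TRY is unknown, so call CANCEL on S1 and S2
--   TO_CONFIRM -> call CONFIRM on S1 and S2
--   TO_CANCEL  -> call CANCEL on S1 and S2
```
Note that this only works if the CONFIRM and CANCEL endpoints of S1 and S2 are idempotent, since the sweep may repeat a call that had already succeeded just before the crash.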
Is my proposed solution a legitimate solution? I'm assuming there are already existing solutions to this, but I can't seem to find anything on this issue. Any useful articles or resources would be greatly appreciated!
Ryn
(101 rep)
Jun 25, 2022, 06:36 AM
• Last activity: Aug 4, 2025, 04:12 AM
1 vote · 1 answer · 833 views
MySQL Partial Transactions
I am experiencing some strange behavior in MySQL using transactions. We have a PHP site that runs ~100 workstations inserting thousands of items (table 1) daily, with 4 to 7 logged events (table 2) at creation. Roughly 3 times a day we are having partial success of our inserts. There are up to 3 MySQL users that *could* be called from PHP, and an ODBC source involved, so hopefully I explain this well enough. Unfortunately the PHP is spaghetti code from a previous dev, so I'm having trouble tracing it all.

Previously I had replicated this problem on every creation by inadvertently reconnecting to the DB under a different user, and making that conditional fixed it, but it's back and about as intermittent as can be. I caught this the first time by logging queries in the test environment and finding a different connection ID in the middle of the script, but this is unfeasible given how many statements run and how sparsely the problem occurs.
MySQL users:
- `hostingconnection` - read-only; gets `basicdata`'s username/password from a different data source on the same MySQL instance
- `basicdata` - read/write for the project's standard tables
- `admindata` - read/write for the project's elevated-privilege tables

All users can be used to get the ODBC connection string from a table in the main data source.
Say we're doing an insert into Table1, and 4 inserts into Table2. During the PHP script that creates the records I can access all data that was inserted (Table1's values being used in Table2, as an example). After the PHP script finishes, Table1's record is missing and so are Table2's first two inserts, but the 3rd and 4th inserts to Table2 still exist. No errors are logged, no rollback is triggered, and the commit is "successful" (based on the return value of PHP's `mysqli_commit()`).
This time around I've programmed the PHP in all the connection scripts to send an email if they're hit when a transaction is already active, and none are being hit. I can find no other "way in", so does anyone know of other ways a transaction can be "successful" but missing the first half or so of a transaction?
EDIT: Posting the relevant SQL taken from the test environment. Some things to know:
- The transaction starts at `INSERT INTO objects`.
- The first `INSERT INTO object_events` that remains is the one with the value "level partno".
- Ignore the "label print failed" message - that means server-side cups printing failed, which wasn't configured on the test environment. A success message would have been there instead.
- The `SELECT VALUE FROM company` statements had been the culprit in the previous issue - cannot confirm or deny, but my check for an existing transaction in that connect script does not trigger an email. They had unnecessarily triggered `mysqli_connect()`, which "reset" the transaction without an error.
- `UPDATE workcenters` may or may not be successful - it is overwritten shortly after and I have no way to know what it was previously.
- I am not explicitly setting autocommit, but I would think that `mysqli_begin_transaction()` and `mysqli_commit()` imply that autocommit is false.
- The log would need to run more than a little while - failures happen at random times, around 3 times a day over a 10 hour period. Even 1/2 hour of logs for the MySQL general log is too large of a file to manage. Unfortunately this leaves me only the provided log from the test environment to go on.
- The fact that the object ID exists in subsequent writes to object_events tells me that the insert was successful, but somehow not committed. If a rollback happened the script would have ended and the user would have been notified of an error, which is not the case.
- Not using `mysqli_begin_transaction()` and `mysqli_commit()` "fixes" the problem, but having good reason to use transactions we would rather fix the underlying issue instead of having a workaround.
SQL:
```sql
SELECT EMPLOYEE_ID,FIRSTNAME,LASTNAME,STATUS FROM employees WHERE EMPLOYEE_ID=706279;
SELECT * FROM workcenters where ID=148;
SELECT ee.EMPLOYEE_ID,ee.FIRSTNAME,ee.LASTNAME,ee.STATUS,IF(wa.EMPLOYEE_ID=ep.EMPLOYEE_ID,1,0) AS AUTHORIZED FROM employee_presence ep JOIN workcenters wc ON wc.ID=ep.WORKCENTER_ID JOIN employees ee ON (ee.EMPLOYEE_ID=ep.EMPLOYEE_ID) LEFT JOIN workcenter_authorization wa ON (wa.EMPLOYEE_ID=ee.EMPLOYEE_ID and wa.WORKCENTER_ID=wc.ID) WHERE ep.WORKCENTER_ID=148 GROUP BY ee.EMPLOYEE_ID;
UPDATE workcenters SET SHOP_ORDER='', WIP=0 WHERE ID=148;
UPDATE workcenters SET LABEL_QTY=1 WHERE ID=148;
SELECT DESCRIPTION from label_types WHERE ID=2;
SELECT *,GetPartName('714883') AS PARTNAME FROM part_names WHERE PART#='714883' LIMIT 1;
SELECT *,GetPartName('714883') AS PARTNAME FROM part_names WHERE PART#='714883' LIMIT 1;
SELECT * FROM label_templates WHERE LABEL_ID=(SELECT PACK_LABEL_ID FROM part_names WHERE PART#='714883') LIMIT 1;
SELECT NAME from printers;
UPDATE workcenters SET LAST_UPDATE=now(), PRINTER_SELECTED=1 WHERE ID=148;

-- BEGIN TRANSACTION
INSERT INTO objects (TYPE,PART_NUMBER,LEVEL_PART_NUMBER,STATUS,QUANTITY,LOCATION) VALUES ('COMPONENT','714883','714883','',375,'R1');
SELECT * FROM objects WHERE ID=LAST_INSERT_ID();
INSERT INTO object_events SET OBJECT_ID=2637120,WORKCENTER_ID=148, EVENT_TYPE='created', PROPERTY='location',EVENT_TEXT='R1';
SELECT NewLabel(2637120,'ATS') AS BARCODE;
SELECT VALUE FROM company WHERE KEYNAME='XPPS_CONN' LIMIT 1;
SELECT VALUE FROM company WHERE KEYNAME='XPPS_USER' LIMIT 1;
SELECT VALUE FROM company WHERE KEYNAME='XPPS_PASS' LIMIT 1;
SELECT EMPLOYEE_ID FROM object_events_employees WHERE EVENT_ID=LAST_INSERT_ID();
UPDATE workcenters SET PART_NUMBER='714883',LEVEL_PART_NUMBER='714883',STD_TIME=0 WHERE ID=148;

-- FIRST REMAINING RECORD HERE
INSERT INTO object_events SET OBJECT_ID=2637120,LAST_UPDATE=now(),WORKCENTER_ID=148,EVENT_TYPE='set property', PROPERTY='level partno', EVENT_TEXT='714883';
SELECT EMPLOYEE_ID FROM object_events_employees WHERE EVENT_ID=LAST_INSERT_ID();
INSERT INTO object_events SET OBJECT_ID=2637120,LAST_UPDATE=now(),WORKCENTER_ID=148,EVENT_TYPE='set property', PROPERTY='label template', EVENT_TEXT='5';
SELECT EMPLOYEE_ID FROM object_events_employees WHERE EVENT_ID=LAST_INSERT_ID();
SELECT * from printers WHERE NAME='Put-Away';
UPDATE workcenters SET LAST_UPDATE=now(), STATUS_MESSAGE='Printer does not respond, is it turned off?', MESSAGE_STATUS=2 WHERE ID=148;
INSERT INTO object_events SET OBJECT_ID=2637120,LAST_UPDATE=now(),WORKCENTER_ID=148,EVENT_TYPE='label print failed',EVENT_TEXT='ATS3282328*';

-- COMMIT TRANSACTION
SELECT EMPLOYEE_ID FROM object_events_employees WHERE EVENT_ID=LAST_INSERT_ID();
```
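One low-volume check that targets the reconnection failure mode described above (a diagnostic sketch; where to log the result is up to the PHP code):
```sql
-- Sample once right after mysqli_begin_transaction() and once just before
-- mysqli_commit(). A different conn_id in the second sample means the script
-- reconnected mid-transaction, and the original transaction was silently
-- discarded along with everything written before the reconnect.
SELECT CONNECTION_ID() AS conn_id,
       @@autocommit    AS autocommit_on;
```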
MaKR
(111 rep)
Nov 8, 2019, 10:04 PM
• Last activity: Jul 29, 2025, 04:10 PM
0 votes · 1 answer · 33 views
Is there a way to get transaction time statistics on specific tables
I saw some log entries that indicated transaction time outliers of up to 10s at times, where transaction times are typically below 1s. To get a view of how often this happens, is there a way to get transaction time statistics in Postgres for transactions involving particular tables?
Even just a count of the number of transactions over some threshold, which I could then compare to the total number, would help.
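Not from the original question, but two stock options come close (a sketch; neither gives exact per-table *transaction* accounting, and the `pg_stat_statements` column names shown are the PostgreSQL 13+ ones):
```sql
-- Option 1: log every statement slower than a threshold (postgresql.conf
-- or ALTER SYSTEM), then count matching log lines:
--   log_min_duration_statement = '1s'

-- Option 2: aggregate timings per normalized statement and filter by the
-- table name appearing in the query text (a heuristic match):
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT calls, mean_exec_time, max_exec_time, query
FROM   pg_stat_statements
WHERE  query ILIKE '%my_table%'
ORDER  BY max_exec_time DESC
LIMIT  10;
```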
Thank you.
nsandersen
(241 rep)
Jul 25, 2025, 12:44 PM
• Last activity: Jul 26, 2025, 03:43 PM
0 votes · 1 answer · 146 views
How would you implement Rollback when you need to make sure that 2 updates and 2 inserts either all happen or are all rolled back?
I have a MySQL stored procedure. Cut down, it looks like this.
```sql
START TRANSACTION
SELECT some_columns
SET some_variables
UPDATE row_in_balance_table
UPDATE row_in_entry_table
INSERT row_in_ledger_table
INSERT row_in_ledger_table
COMMIT;
```
I need all 4 rows to be updated/inserted, or none of them to be.
What is the standard, normal way to make this happen? I had considered something like this after each query:
```sql
IF (SELECT ROW_COUNT() = 1 ) THEN
    SET row_affected_counter = row_affected_counter + 1;
END IF;
```
And then, because I need to affect 4 total rows, just before the COMMIT I could use..
```sql
IF (row_affected_counter != 4 ) THEN
    ROLLBACK;
END IF;
COMMIT;
```
So I think my procedure should rollback on an error, because it's in a transaction, and rollback if any of the updates/inserts don't happen, because the counter won't reach the expected total of rows affected.
This didn't work, though, because it seems like ROW_COUNT doesn't reset to 0 when a following insert/update is called.
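For what it's worth, `ROW_COUNT()` reports only the statement that ran immediately before it, and the `SELECT` wrapper above overwrites it; a sketch of capturing it directly after each DML statement (the ellipses stand for the statements in the question):
```sql
UPDATE row_in_balance_table ...;
SET row_affected_counter = row_affected_counter + ROW_COUNT();

UPDATE row_in_entry_table ...;
SET row_affected_counter = row_affected_counter + ROW_COUNT();

-- repeat after both INSERTs, then compare the counter to 4 before COMMIT
```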
Is there a better way to do this?
T M
(11 rep)
Nov 6, 2023, 08:33 PM
• Last activity: Jul 24, 2025, 08:03 PM
1 vote · 1 answer · 148 views
Can I rely on the MSDTC to guarantee a distributed transaction was completed?
I've come across a bug using the TransactionScope from the .NET library, where we're getting a TransactionInDoubt exception. After doing a lot of research into something I don't know much about, I've figured out this happens when there is a communication error with the database, leaving us unsure whether the transaction completed.
Trying to handle this error quickly became a pain, however; there seem to be a lot of possible problems that arise.
Basically, I want to know if there is a guarantee that a distributed transaction across multiple databases will be committed or rolled back on ALL databases, not committed on one and not the other.
From my knowledge, the MSDTC controls a distributed transaction and handles this all for me. However, reading up on it a little, it seems that if there is a communication problem while trying to confirm the commit to one of the databases, then the commit won't be performed for that database, but the others will be committed.
There are plenty of resources online that discuss this, and it's often a lot of conceptual talk. My simple question is: can I rely on the MSDTC, or do we have to come up with a separate solution to the problem?
DubDub
(111 rep)
Dec 4, 2019, 08:54 AM
• Last activity: Jul 19, 2025, 06:09 PM
0 votes · 1 answer · 161 views
Running 2 inserts in one transaction takes an unacceptably long time; running the same DML separately works fine
Decided to see what DBAs have to say about this because developers have no idea. I need to execute 2 DMLs in one transaction. First DML takes records from `Table1`, changes some values and inserts back into the same table a subset of the originals. Second DML selects the newly inserted records, joins them with the original records that were used to generate the subset, and inserts some values into `table2`.
When I ran this from C# code under one transaction, the first statement executed fine and then the second kept going forever. So I decided to check directly in the DB - same thing.
Using PostgreSQL, if I run the block below, it just keeps running but not completing, for hours, yet the execution shows as active.
If I run these 2 statements separately, in separate transactions, it takes a few seconds.
What am I missing here? I am not very experienced with Postgres.
BTW, if I add a `SELECT` after the first statement and just run the `INSERT` and `SELECT` under a transaction, the `select` runs fine, and then if there is no `commit`, the records are gone, as expected. It is running that second `INSERT` in the same transaction that is troublesome.
Thanks
```sql
begin;

INSERT INTO table1 (fields...)
SELECT ...
FROM table1
WHERE ...
GROUP BY ...;

INSERT INTO table2 (f1, f2) -- INSERT SUBSET based on uncommitted data from ^^^
select c.id, a.id
FROM table1 t1
INNER JOIN -- SELF JOIN
table1 t2 a ON ... -- (uuid, int, int)
where ...;

COMMIT;
```
**Explain plan:**
```text
Insert on table2 (cost=220982.08..222767.83 rows=0 width=0) (actual time=1497.900..1497.902 rows=0 loops=1)
-> Subquery Scan on "*SELECT*" (cost=220982.08..222767.83 rows=28181 width=40) (actual time=770.711..986.875 rows=94731 loops=1)
-> Merge Join (cost=220982.08..222415.56 rows=28181 width=70) (actual time=770.635..934.304 rows=94731 loops=1)
Merge Cond: (((a.someid)::text = (c.someid)::text) AND ((a.type)::text = (c.type)::text))
-> Sort (cost=127241.69..127555.13 rows=125375 width=46) (actual time=673.203..737.628 rows=94731 loops=1)
Sort Key: a.someid, a.type
Sort Method: external merge Disk: 5208kB
-> Seq Scan on table1 a (cost=0.00..112768.01 rows=125375 width=46) (actual time=0.006..258.759 rows=94731 loops=1)
Filter: ((MustDoSome IS TRUE) AND (batchnumber = '2b86d81c-d76a-4184-b04f-6bdbeb78b51b'::uuid) AND (keytype = 1) AND (datarecordtype = 2))
Rows Removed by Filter: 588905
-> Sort (cost=93740.39..93787.02 rows=18650 width=46) (actual time=97.419..112.265 rows=94731 loops=1)
Sort Key: c.someid, c.type
Sort Method: external sort Disk: 3448kB
-> Bitmap Heap Scan on statementdata c (cost=727.30..92417.46 rows=18650 width=46) (actual time=8.072..44.686 rows=58677 loops=1)
Recheck Cond: (stmtkeytype = 2)
Filter: ((batchnumber = '2b86d81c-d76a-4184-b04f-6bdbeb78b51b'::uuid) AND (datarecordtype = 2))
Rows Removed by Filter: 58677
Heap Blocks: exact=16204
-> Bitmap Index Scan on fki_fk_table1_stmt_key_type (cost=0.00..722.63 rows=58428 width=0) (actual time=5.263..5.263 rows=117354 loops=1)
Index Cond: (stmtkeytype = 2)
Planning Time: 0.557 ms
Trigger for constraint fk_xxx_zz_yyy_id: time=899.588 calls=94731
Trigger for constraint fk_xxx_yyy_id: time=837.080 calls=94731
Trigger for constraint fk_RRRR_yyyt_id: time=1001.268 calls=94731
```
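Not from the original post, but the hanging statement can be profiled and then discarded in one transaction, which keeps the table untouched while showing where the time goes (a sketch; the statement bodies stand in for the real ones):
```sql
BEGIN;

INSERT INTO table1 (...) SELECT ...;       -- first DML, as above

EXPLAIN (ANALYZE, BUFFERS)                 -- actually executes the INSERT,
INSERT INTO table2 (f1, f2) SELECT ...;    -- including any FK trigger work

ROLLBACK;                                  -- throw all of it away afterwards
```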
T.S.
(216 rep)
Jun 15, 2023, 04:03 PM
• Last activity: Jul 15, 2025, 04:07 PM
0 votes · 0 answers · 21 views
Understanding commit_time in dm_tran_commit_table: Insert Time, Timezone, and Latency Concerns
I am currently working with SQL Server and exploring transaction management, specifically the `dm_tran_commit_table` view. I have a couple of questions regarding `commit_time` and change tracking. The database has `Delayed Durability = Forced` and `Is Read Committed Snapshot = true`.
1. Does the `commit_time` represent the exact moment a record is inserted into a table (or into the change tracking table), or does it indicate the time the transaction was committed, which may be different?
2. If the transaction is committed, what timezone does the `commit_time` reference? Is it based on the server's local time, UTC, or another timezone?
The reason I am asking is that I have a creation date in my system from which we are replicating data, and I noticed that the date from the system is about 25 minutes earlier than the date taken from `commit_time`. According to the statistics I checked, there should be a maximum latency of about 5 minutes. This discrepancy is concerning, and I'd like to understand if `commit_time` could be contributing to this issue.
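One quick experiment, not from the original post: compare recent `commit_time` values against both the local and UTC server clocks (a sketch; it assumes change tracking is enabled, which it must be for `sys.dm_tran_commit_table` to be populated):
```sql
-- Run in the tracked database shortly after some commits:
SELECT TOP (10)
       commit_ts,
       commit_time,
       SYSDATETIME()    AS server_local_now,
       SYSUTCDATETIME() AS server_utc_now
FROM   sys.dm_tran_commit_table
ORDER  BY commit_ts DESC;
```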
I would greatly appreciate any insights or references to documentation that can clarify these points.
Thank you for your assistance!
adam.g
(465 rep)
Jul 15, 2025, 10:43 AM
3 votes · 1 answer · 606 views
Write-lock a whole table during transaction
I need to perform a delicate operation on my table in which I will solely `insert`, `delete` and `select` upon all of my rows, and no God may interfere with the table during this operation: the table will be in an inconsistent state and any concurrent MySQL session shall not be allowed to modify it until `commit`-ed.
The use of `SELECT ... FOR UPDATE | LOCK IN SHARE MODE` is not suitable because, while it may potentially be used to lock all the rows in the table, it won't prevent the insertion of further rows by a concurrent session. Basically, I need to `LOCK TABLES my_table WRITE` within the body of a transaction.
The table contains about 20,000 rows, and master-slave, mixed-format replication is in place over a slow connection, so for any workaround I'd prefer to avoid using temporary tables, which may trip up the slave, and the amount of data dumped into the binlog should ideally be minimized.
The engine is InnoDB on MySQL 5.6, for both master and slave.
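For reference, the MySQL manual ("Interaction of Table Locking and Transactions") documents a pattern for exactly this; a condensed sketch:
```sql
SET autocommit = 0;          -- not START TRANSACTION: LOCK TABLES would
                             -- implicitly commit a transaction begun that way
LOCK TABLES my_table WRITE;  -- no other session can read or write the table

-- ... the delicate INSERT / DELETE / SELECT work on my_table ...

COMMIT;                      -- make the changes durable first,
UNLOCK TABLES;               -- then release the table lock
```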
davide
(131 rep)
Apr 27, 2015, 04:19 AM
• Last activity: Jul 13, 2025, 11:09 PM
0 votes · 1 answer · 164 views
Transaction isolation level/design for write-only ledger
This is a simplified example of a problem I've been working on. Say I have the following database schema:
```
Table: Deposits
+--------+------------+--------+---------+
| ID     | Date       | Amount | User ID |
+--------+------------+--------+---------+
| c1f... | 1589993488 | 40.0   | 6c7...  |
| bfe... | 1589994420 | 30.0   | 744...  |
+--------+------------+--------+---------+
```
```
Table: Withdrawals
+--------+------------+--------+---------+
| ID     | Date       | Amount | User ID |
+--------+------------+--------+---------+
| ad4... | 1589995414 | 20.0   | 6c7...  |
| e9b... | 1589996417 | 20.0   | 6c7...  |
+--------+------------+--------+---------+
```
And I'm writing a function that **performs a withdrawal** for a User. It:
1. Sums the deposits (`SELECT amount FROM deposits WHERE user_id = ...`)
2. Sums the withdrawals (`SELECT amount FROM withdrawals WHERE user_id = ...`)
3. If the difference is greater than the requested amount, `INSERT`s a new Withdrawal
We're using MySQL 8 and the default isolation level of `REPEATABLE READ`.
This function may be running as a lambda, so memory locks are not an option, and we don't have distributed locking (i.e. a Redis-based lock) available.
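A minimal sketch of the withdrawal as a single MySQL transaction (column names are adapted from the example tables; whether `SERIALIZABLE` is actually required is what the questions below ask):
```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- applies to the next transaction
START TRANSACTION;

SELECT COALESCE(SUM(amount), 0) INTO @deposited
FROM deposits WHERE user_id = @uid;

SELECT COALESCE(SUM(amount), 0) INTO @withdrawn
FROM withdrawals WHERE user_id = @uid;

-- Under SERIALIZABLE, InnoDB runs the plain SELECTs as locking reads, so a
-- concurrent writer touching the same user's rows blocks instead of slipping
-- a row in between the balance check and the INSERT.
INSERT INTO withdrawals (id, date, amount, user_id)
SELECT UUID(), UNIX_TIMESTAMP(), @amount, @uid
FROM DUAL
WHERE @deposited - @withdrawn >= @amount;

COMMIT;
```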
**A Caveat**
There are existing admin operations run at the `REPEATABLE READ` level to create/delete these entities on-demand, by ID.
**My Questions:**
1. Am I correct in understanding that I need to use `SERIALIZABLE` as the isolation level for this transaction?
2. Will the `SERIALIZABLE` range lock prevent the `REPEATABLE READ` transaction from inserting new rows into the Withdrawals, or removing rows from the Deposits?
Craig Otis
(101 rep)
May 20, 2020, 05:37 PM
• Last activity: Jul 10, 2025, 03:07 PM
0 votes · 2 answers · 217 views
perform atomic operations on elements inside a container
In my DB Tshirts, Jeans and Shoes can be placed inside a Box and they can have a color that may be null.
Elements of the box must have the same color. To ensure this I use a trigger:
```sql
/*
If I try to INSERT (OR UPDATE) an item in a box
containing elements of different color, it raises an exception.
*/
BEGIN
    IF (
        SELECT color FROM tshirt WHERE tshirt.box_id = NEW.box_id
        UNION SELECT color FROM jeans WHERE jeans.box_id = NEW.box_id
        UNION SELECT color FROM shoes WHERE shoes.box_id = NEW.box_id
    ) <> NEW.color THEN
        RAISE EXCEPTION 'Error..';
        RETURN NULL;
    END IF;
END;
```
Suppose I want to change the color from 'Blue' to 'Red' for items inside Box 1. This code will fail because of the trigger:
```sql
UPDATE Tshirts SET color = 'Red' WHERE box_id = 1;
UPDATE Jeans SET color = 'Red' WHERE box_id = 1;
UPDATE Shoes SET color = 'Red' WHERE box_id = 1;
```
The only way I can change the color of items in the box is:
1. Remove all items from the box, setting their box_id to null.
2. Change the color of all the elements.
3. Put all the elements inside the box, setting back their box_id.
How can I change the color of items in a box more simply and atomically through modeling or triggers?
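One direction that is not in the original post: if this is PostgreSQL (the `RAISE EXCEPTION` suggests PL/pgSQL), the check can be postponed to commit time with a deferrable constraint trigger, so all three UPDATEs run first and the box is only validated once it is consistent again. A sketch, where `check_box_color()` stands for a trigger function like the one above:
```sql
CREATE CONSTRAINT TRIGGER tshirt_box_color
    AFTER INSERT OR UPDATE ON tshirt
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE PROCEDURE check_box_color();
-- (repeat for jeans and shoes)

BEGIN;
UPDATE tshirt SET color = 'Red' WHERE box_id = 1;
UPDATE jeans  SET color = 'Red' WHERE box_id = 1;
UPDATE shoes  SET color = 'Red' WHERE box_id = 1;
COMMIT;  -- the deferred triggers fire here, after all three updates
```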

Yata
(33 rep)
Oct 16, 2015, 12:56 PM
• Last activity: Jul 10, 2025, 01:28 PM
7 votes · 3 answers · 1033 views
What events still occur within a transaction that gets rolled back?
Is it true that all data modifications are undone when the transaction they are a part of gets rolled back?
For example, if a cursor just executed a stored procedure 100 times, updating a table on each iteration, would all those updates get rolled back?
Would a DDL statement get rolled back, such as DROP TABLE or CREATE VIEW?
What about DROP DATABASE?
I know certain statements still take effect, though, like PRINT "MESSAGE".
I'm just trying to get an idea of what kinds of events still occur.
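A small T-SQL sketch of the distinction, safe to run in a scratch database: the DML and the DDL are undone, while the message still appears.
```sql
BEGIN TRANSACTION;

CREATE TABLE dbo.demo (id int);       -- DDL inside the transaction
INSERT INTO dbo.demo VALUES (1);      -- DML inside the transaction
PRINT 'This message still prints.';   -- messages are not transactional

ROLLBACK TRANSACTION;

SELECT OBJECT_ID('dbo.demo') AS should_be_null;  -- the CREATE TABLE was undone
```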
J.D.
(40893 rep)
Nov 15, 2019, 07:21 PM
• Last activity: Jul 9, 2025, 02:26 PM
0 votes · 1 answer · 88 views
Conditional update
I would like to create a conditional update on an audit table, such that if the update on the main table succeeds, then the update on the audit table is performed.
Right now I have this
```sql
CREATE DEFINER=`root`@`localhost` PROCEDURE `updateParent`(
    IN inId INT(10),
    IN inName varchar(75),
    IN inEmail varchar(50),
    IN inPhone varchar(8),
    IN inUsername varchar(50)
)
BEGIN
    SET AUTOCOMMIT = 0;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    START TRANSACTION;

    update parent
    set nameOfPerson = inName
        ,email = inEmail
        ,phonenr = inPhone
    where id = inId;
    COMMIT;

    SET @rowcount = ROW_COUNT();
    IF @rowcount > 0 THEN
        INSERT INTO parent_h
            (parent_id, nameOfPerson, email, phonenr, opts, event, username)
        SELECT id, nameOfPerson, email, phonenr, ts, "U", inUsername
        from parent
        where id = inId;
    END IF;
    COMMIT;
END
```
Occasionally, if the update is performed and zero rows are updated (no data was changed), then I do not want the audit table update to be performed.
Are there any disadvantages of doing it this way? and are there any better ways? This is a web-application.
UPDATE:
I tried with an AFTER_UPDATE trigger. It seems like MySQL runs the trigger even if zero rows are updated on the table. That means I need to test each field to determine if any actual changes were made.
```sql
CREATE DEFINER=`root`@`localhost` TRIGGER `parent_AFTER_UPDATE`
AFTER UPDATE ON `parent`
FOR EACH ROW BEGIN
    IF (
        OLD.id <> NEW.id or
        OLD.nameOfPerson <> NEW.nameOfPerson or
        OLD.email <> NEW.email or
        OLD.phonenr <> NEW.phonenr
    )
    THEN
        INSERT INTO parent_h
            (parent_id, nameOfPerson, email, phonenr, opts, event, username)
        VALUES
            (NEW.id, NEW.nameOfPerson, NEW.email, NEW.phonenr, NEW.ts, 'U', inUsername);
    END IF;
END
```
An additional issue is that I need to add the userid of the principal performing the update, which is not available in the table update.
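A common workaround for that last point, sketched under the assumption that every update flows through code you control: put the principal into a session variable before the UPDATE and read it inside the trigger (`@app_username` is a made-up name):
```sql
-- In the application or procedure, before the UPDATE:
SET @app_username = 'some.user';
UPDATE parent SET nameOfPerson = 'New Name' WHERE id = 42;

-- In the trigger body, in place of the unavailable inUsername:
--   INSERT INTO parent_h (..., username)
--   VALUES (..., COALESCE(@app_username, CURRENT_USER()));
```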
tcelvis
(1 rep)
Jun 16, 2024, 05:28 PM
• Last activity: Jul 7, 2025, 07:12 AM
1 vote · 1 answer · 374 views
Serializable isolation fails even for unrelated rows
I have this table:
```sql
create table "tasks" (id SERIAL PRIMARY KEY, user_id int REFERENCES "user"(id), title TEXT);
```
I also created an index on `"tasks"(user_id)`.
Then I open two transactions simultaneously. T0, T1, ... denote a series of time snapshots in increasing order.
Transaction 1
```sql
begin; --T0
set transaction isolation level serializable; --T2
select * from "tasks" where user_id=1; --T4
insert into "tasks" (user_id, title, content, created_time) VALUES (1, 'abc'); --T6
end; --T8
```
Transaction 2
```sql
begin; --T1
set transaction isolation level serializable; --T3
select * from "tasks" where user_id=2; --T5
insert into "tasks" (user_id, title, content, created_time) VALUES (2, 'abc'); --T7
end; --T9
```
Transaction 1 succeeds but transaction 2 fails. But if I change the `user_id=?` in the select statements to `id=?`, it works.
So does SSI allow only unrelated primary-key changes, and still fail with unrelated indexed columns?
The situation is similar to this question https://dba.stackexchange.com/questions/242035/isolation-level-serializable-not-working-as-expected . But I have created an index on my column.
Le Hoang Long
(111 rep)
Apr 30, 2022, 02:24 PM
• Last activity: Jul 1, 2025, 05:04 PM
3 votes · 1 answer · 1327 views
MySQL: Skip INSERT / UPDATE statement that triggers error but show them in output
I have a million rows of customers to be imported via SQL dump. Customer data will be inserted into the main table `customer` and child table `customer_address`.
The dump contains the following SQL statements:
**File**: customer-dump.sql
```sql
BEGIN TRANSACTION;
INSERT INTO customer (firstname, lastname) VALUES ('John', 'Doe');
INSERT INTO customer_address (customer_id, city, country) VALUES (LAST_INSERT_ID(), 'New York', 'USA');
COMMIT;

BEGIN TRANSACTION;
INSERT INTO customer (firstname, lastname) VALUES ('Lorem', 'Ipsum');
INSERT INTO customer_address (customer_id, city, country) VALUES (LAST_INSERT_ID(), 'Chicago', 'USA');
COMMIT;

# So on for N no of customers
```
I am importing via CLI as:
```sh
mysql -u [user] -p [database] < customer-dump.sql 2> /tmp/customer-import-error.log
```
**My Queries:**
- How do I continue the script execution even if there is a failure in some rows?
- How do I show/log errors for failed rows?
- Any advice on such a massive import?
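Addressing the first two bullets, the stock `mysql` client has documented flags for this; a sketch reusing the command above:
```sh
# --force          continue executing after an SQL error
# --show-warnings  print warnings after each statement
# Errors for failed statements go to stderr, captured here in a log file.
mysql --force --show-warnings -u [user] -p [database] \
    < customer-dump.sql 2> /tmp/customer-import-error.log
```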
MagePsycho
(201 rep)
Apr 18, 2017, 08:13 PM
• Last activity: Jun 27, 2025, 11:08 PM
1 vote · 1 answer · 204 views
Do I always need boilerplate code to prevent nested transactions?
Because I never know if my procedure might be called from other procedures that have an open transaction, should I always check for existing transactions before opening a new one?
It seems like a bit of boilerplate code that I need to repeat everywhere and it will make for worse readability.
I'm trying to future proof against this error:
> Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 1, current count = 0.
Example of boilerplate code: Exception handling and nested transactions
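For reference, a condensed sketch of the pattern that link describes: open a real transaction only at the outermost level, use a savepoint otherwise, and roll back to the right one in the CATCH block (a fuller version also handles `XACT_STATE() = -1`, where only a full rollback is possible):
```sql
CREATE PROCEDURE dbo.MyProc
AS
BEGIN
    DECLARE @trancount int = @@TRANCOUNT;
    BEGIN TRY
        IF @trancount = 0
            BEGIN TRANSACTION;            -- outermost caller: real transaction
        ELSE
            SAVE TRANSACTION MyProc;      -- nested call: savepoint only

        -- ... actual work here ...

        IF @trancount = 0
            COMMIT;                       -- only the outermost level commits
    END TRY
    BEGIN CATCH
        IF XACT_STATE() = 1               -- transaction is still committable
        BEGIN
            IF @trancount = 0
                ROLLBACK;                 -- undo everything
            ELSE
                ROLLBACK TRANSACTION MyProc;  -- undo only this procedure's work
        END;
        THROW;
    END CATCH;
END;
```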
Zikato
(5724 rep)
Mar 6, 2020, 09:48 AM
• Last activity: Jun 20, 2025, 02:05 AM
1 vote · 2 answers · 262 views
Postgres Foreign Data Wrappers for Data Migration. Do transactions hold across databases?
I want to migrate some data from Postgres database A to database B. I found Foreign Data Wrappers that allow you to connect to a table in B while in database A. I have a table in A that I want to move to B and clear out the records in A.
```sql
insert into b_table select * from a_table;
delete from a_table;
```
Will a postgres transaction provide me with all expected error recovery in the event of a crash or disconnect between the two SQL lines even though I'm working with 2 different databases?
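For concreteness, the two statements wrapped the way the question implies (a sketch; whether the remote side of `postgres_fdw` commits atomically with the local side is exactly what is being asked):
```sql
BEGIN;
insert into b_table select * from a_table;  -- b_table: foreign table in B
delete from a_table;                        -- a_table: local table in A
COMMIT;  -- the local COMMIT drives the commit on the remote server
```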
mj_
(347 rep)
Jun 26, 2020, 05:09 PM
• Last activity: Jun 6, 2025, 01:02 AM
5 votes · 3 answers · 1273 views
advantages of single-leader (transactions) over multi-leader replication
I am reading the excellent book *"Designing Data-Intensive Applications"* which I wholeheartedly recommend, but I'm confused by a section comparing multi-leader (i.e. multi-writer) replication to single-leader replication. I understand the basic difference: In multi-leader, multiple leader nodes can accept writes, each leader sends its writes to the other leaders, and you have conflict-resolution rules to decide how to merge them. Single leader solves concurrency using transactions.
The following two paragraphs describe how multi-writer can be more challenging because conflicts aren't resolved right away. My question is afterward.
> [This paragraph and the diagram are describing multi-leader.] For example, consider a wiki page that is simultaneously being edited
> by two users as shown in Figure 5-7 [copied below]. User 1 changes the title of the
> page from A to B, and user 2 changes the title from A to C at the same
> time. Each user’s change is successfully applied to their local
> leader. However, when the changes are asynchronously replicated, a
> conflict is detected. This problem does not occur in a
> single-leader database.
>
> In a single-leader database, the second writer will either block and
> wait for the first write to complete, or abort the second write
> transaction, forcing the user to retry the write. On the other hand,
> in a multi-leader setup, both writes are successful, and the conflict
> is only detected asynchronously at some later point in time. At that
> time, it may be too late to ask the user to resolve the conflict.
I see the difficulty with multi-writer here, but I'm skeptical that single-writer would be much better.
Consider the most likely chain of events when two people edit a Wikipedia page at roughly the same time: 1) Person 1 loads the edit page, takes 3-5 seconds to edit the title and submits. 2) Person 2 loads the edit page, takes 3-5 seconds to edit the title and submits. Each database transaction to apply the edit is only a few milliseconds, so it is far more likely that that the updates will happen one after the other, than that they will happen at the same time. Therefore if your concern is that one of these 2 people's updates will be lost, you need to address the potential for conflicts at the application level somehow; transactions won't really help you.
Furthermore in the case where the two transactions do overlap, it doesn't help the users to simply delay one of the transactions until the other is done. Once it resumes it will still overwrite the first user's data.
So my question is, is there some helpful transaction technique I'm missing that would actually be useful here? It's been a while since I tried to use transactions so my technique is rusty.
The best improvement I can think of: add `AND title='A'` to the end of both `UPDATE` statements, and add a second statement to the transaction that checks the number of affected rows, and rollback if that is equal to 0. The rollback would have no effect but it would indicate a failure to the client. But this is a bit hackish.
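A sketch of that compare-and-set idea (optimistic concurrency; the table and column names are made up from the wiki example):
```sql
-- Succeeds only if the title is still the value this user started from.
UPDATE pages
SET    title = 'B'
WHERE  page_id = 42
  AND  title = 'A';   -- affects 0 rows if someone already changed it

-- The application checks the affected-row count: 1 = win, 0 = conflict,
-- and on 0 it re-reads the page and asks the user to retry.
```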
I don't think it would help to begin the transaction with a check (i.e. `SELECT * FROM pages WHERE title='A'` and make sure you get something back). Both transactions could possibly see 'A' at the beginning even though only one transaction would win out.
Stephen
(213 rep)
Sep 24, 2018, 02:39 AM
• Last activity: May 31, 2025, 05:02 AM
3 votes · 1 answer · 6366 views
Many transactions are getting stuck in the "cleaning up" state - what could cause that?
I'm trying to track down a locking issue, and found via `SHOW PROCESSLIST` that when the queries in question are waiting for locks and timing out, there are no other queries active in the process list. So I looked at `SHOW ENGINE INNODB STATUS`, which showed that when the transactions in question are waiting on locks, the only other transactions present that have started (there are many in the `not started` state) are ones that have been running for 30 seconds or more, and are in the `cleaning up` state.
Why would a transaction get stuck in the `cleaning up` state? Is there any way to tell, at that point, what any of the original queries were for that transaction? (The transaction does not appear to be present in the `information_schema.innodb_trx` table.)
Example output:
```
---TRANSACTION 2670738352, ACTIVE 25 sec
2 lock struct(s), heap size 360, 1 row lock(s), undo log entries 1
MySQL thread id 158609, OS thread handle 0x2ae9a6bcf700, query id 487681086 10.0.1.246 ownlocal cleaning up
Trx read view will not see trx with id >= 2670738353, sees = 2670737972, sees = 2670736876, sees = 2670735437, sees < 2670724054
```
### Additional Info: ###
- **MySQL Version:** 5.6
- **Storage Engine:** InnoDB
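For completeness, a sketch of the lookup mentioned above, joining the InnoDB transaction list to the processlist (standard `information_schema` tables in MySQL 5.6; a transaction idle between statements typically shows a NULL `trx_query` here, and as observed it may not appear at all):
```sql
SELECT t.trx_id,
       t.trx_state,      -- RUNNING, LOCK WAIT, ROLLING BACK, COMMITTING
       t.trx_started,
       t.trx_query,      -- NULL when the connection is idle in the transaction
       p.user, p.host, p.time, p.state
FROM information_schema.innodb_trx   t
JOIN information_schema.processlist p ON p.id = t.trx_mysql_thread_id;
```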
mltsy
(159 rep)
Dec 16, 2014, 12:12 AM
• Last activity: May 29, 2025, 08:01 AM
0 votes · 1 answer · 262 views
Insufficient privilege for read only transaction
I am using Oracle and SQL Developer, and here is a problem I ran into. I have created a table called T_TEST, and I am trying to create a read-only transaction on it from another user. This is my procedure:
```sql
CREATE PROCEDURE reader
AS
BEGIN
    SET TRANSACTION READ ONLY;
    SELECT SUM(value) FROM SYSTEM.T_TEST;
    COMMIT;
END;
```
And this is the result that I get:
```
ORA-01031: insufficient privileges
```
The user itself has these privileges:
```sql
GRANT CONNECT TO admin1;
GRANT CREATE SESSION TO admin1;
GRANT SELECT ON T_TEST TO admin1;
```
Which is enough for this query to work:
```sql
SELECT SUM(value) FROM SYSTEM.T_TEST;
```
However, when I wrap it up in a transaction, as I showed, it fails. So what privileges am I lacking here?
Владислав Крутенко
(3 rep)
Oct 24, 2021, 04:24 PM
• Last activity: May 27, 2025, 05:04 AM
1 vote · 1 answer · 251 views
Why do some applications use implicit transactions in SQL Server?
I have run into several applications that were written for one DBMS and then ported to SQL Server, and many use implicit transactions, which oftentimes makes things really difficult to manage on the SQL side.
I came across Kendra Little's website and her post here, and she seems to see the same thing:
> 2. Implicit transactions
>
> Implicit transactions are a bit weird, and I typically only run into them when applications have been written for a different relational database and then ported to SQL Server.
My question is why? What benefit do implicit transactions provide to the application? It seems like extra work and makes things more difficult all around.
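For anyone who hasn't run into the mode, a small sketch of what it changes (assuming a scratch table `dbo.t` exists):
```sql
SET IMPLICIT_TRANSACTIONS ON;

INSERT INTO dbo.t VALUES (1);             -- silently opens a transaction
SELECT @@TRANCOUNT AS open_transactions;  -- returns 1, with no BEGIN issued

COMMIT;  -- without this, locks are held until the session ends
SET IMPLICIT_TRANSACTIONS OFF;
```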
Mike S
(177 rep)
Dec 2, 2019, 01:07 AM
• Last activity: May 24, 2025, 07:03 PM
Showing page 1 of 20 total questions