Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
1578
views
Importing huge table exhausts UNDO extents in Oracle RDS (ORA-01628)
I'm attempting to do an impdp on RDS, Oracle 12c. I'm importing only one table for this particular impdp job, but every time I try to import it, UNDO usage gets to about 50% and then the logs just say:

Resumable error: ORA-01628: max # extents (32765) reached for rollback segment
Since this is RDS I cannot manually manage undo. I created a fresh RDS instance with a new 4TB UNDO tablespace to perform the import of just this table. I've read about creating one giant rollback segment and also about creating lots of small rollback segments to solve this problem. I've also read I can split the import into multiple parts, but I'd rather not do that if possible. Is there anything more I can do here to maybe stop the UNDO tablespace from running out of extents?
user3150146
(1 rep)
Nov 12, 2020, 01:35 PM
• Last activity: Aug 1, 2025, 12:07 PM
0
votes
1
answers
1645
views
MySQL: InnoDB inserts won't roll back transaction inside a stored procedure
I have this stored procedure:
START TRANSACTION;
code_gen_loop: WHILE @Codes > 0 DO
    IF MOD( @Codes, 10000 ) = 0 THEN
        EXECUTE qry_status_stmt2;
        EXECUTE qry_status_stmt3;
        EXECUTE qry_canceled_stmt;
        IF @Canceled = 1 THEN
            ROLLBACK;
            LEAVE code_gen_loop;
        END IF;
    END IF;
    EXECUTE qry_rand_stmt;
    EXECUTE qry_insert_stmt;
    SET @Codes = @Codes - ROW_COUNT();
END WHILE;
IF @Canceled = 0 THEN
    COMMIT;
    EXECUTE qry_complete_stmt;
END IF;
The main loop is made to insert millions of rows into a table. Every 10,000 inserts it updates a memory table with its progress and checks if the job has been canceled. If the job is canceled then ROLLBACK the inserts and break out of the loop. If the job hasn't been canceled then continue the loop until finish and then commit the inserts.
The problem is that if the job is canceled, it breaks out of the loop correctly but does not roll back the inserts. If I look in the table I can see it has committed the inserts up to where the job was canceled.
Why is my ROLLBACK not working here?
The full procedure can be found at http://pastebin.com/2YghzJrE
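The all-or-nothing behavior the poster expects can be sketched engine-agnostically. This is a minimal sketch in Python + sqlite3; the `codes` table and the `job_canceled()` check are hypothetical stand-ins for the prepared statements in the question:

```python
import sqlite3

# Sketch of "insert in a loop, roll back EVERYTHING on cancel".
def job_canceled(i):
    return True  # pretend the job was canceled by the time of the first check

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit; we manage txns
conn.execute("CREATE TABLE codes (code TEXT)")

conn.execute("BEGIN")              # ONE transaction covering the whole batch
canceled = False
for i in range(25000):
    if i and i % 10000 == 0 and job_canceled(i):
        canceled = True
        break
    conn.execute("INSERT INTO codes VALUES (?)", (f"code-{i}",))

conn.execute("ROLLBACK" if canceled else "COMMIT")
print(conn.execute("SELECT COUNT(*) FROM codes").fetchone()[0])  # 0: nothing survived
```

If the inserts show up as committed despite the ROLLBACK, the usual suspects in MySQL are autocommit applying per statement, or a statement inside the loop that triggers an implicit commit (DDL and several other statement types do), so the single covering transaction never really exists.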
Scott
(169 rep)
Feb 10, 2015, 12:31 PM
• Last activity: May 17, 2025, 07:04 PM
0
votes
1
answers
594
views
Com_commit and Com_rollback counts are almost the same. What does it mean?
SHOW GLOBAL STATUS has Com_* variables: "The Com_xxx statement counter variables indicate the number of times each xxx statement has been executed."

When I compare Com_commit to Com_rollback, they are almost equal. I do not understand how that is possible. My understanding is that you commit transactions and roll back only when required. Why, then, are these values so close?

We have only one database, used by an application (Jira) with the READ-COMMITTED transaction isolation level. There are no other processes in the entire MySQL instance doing any operations.

GP92
(155 rep)
Aug 21, 2020, 02:49 AM
• Last activity: Apr 15, 2025, 10:03 AM
0
votes
1
answers
100
views
Stop rolling back transactions by committing them
I have a SQL Server process that runs annually, creating around 400 million records in several tables.
The process operates in a loop, committing every 20,000 records for each of the tables.
I recently discovered that someone changed the script and put a BEGIN TRAN in the initial lines, which was only committed in the last lines. And there was an error in the process.

Now the database is rolling back each of the 20,000-row batches that were committed inside those nested transactions.
Is there any way to prevent this rollback?
How can I identify the SPID of the transaction and commit it before the rollback reaches it?
There were only about 50 million lines left when the error occurred due to lack of space in *tempdb*. We estimate that it will take around 20 hours to roll back everything in four different tables.
Jair Pedro Assis
(1 rep)
Jan 29, 2025, 11:55 PM
• Last activity: Jan 30, 2025, 10:54 PM
1
votes
3
answers
1906
views
Transaction Error in TRY CATCH block
The following code returns this message in case something goes wrong:
> Transaction count after EXECUTE indicates a mismatching number of
> BEGIN and COMMIT statements. Previous count = 0, current count = 1.
OPEN extensionViews
FETCH NEXT FROM extensionViews INTO @Viewname
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC sp_refreshview @Viewname
    END TRY
    BEGIN CATCH
        PRINT 'ERROR sp_refreshview'
    END CATCH
    FETCH NEXT FROM extensionViews INTO @Viewname
END
CLOSE extensionViews
DEALLOCATE extensionViews
I have already found out that something is going wrong with the implicit transactions, but I do not entirely understand it.

In case of a CATCH, do I have to explicitly ROLLBACK the transaction? Does this mean I also have to start the transaction explicitly inside the cursor, so that I can roll back this one command instead of all of them?
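In T-SQL the CATCH block typically checks XACT_STATE() or @@TRANCOUNT before deciding to roll back, but the general shape of "roll back THIS one command instead of all of them" is a per-iteration savepoint. A minimal engine-agnostic sketch (Python + sqlite3; the names and the deliberately failing item are made up, and an INSERT stands in for sp_refreshview):

```python
import sqlite3

# Sketch of "undo only the failing step and keep going", using savepoints.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE refreshed (name TEXT NOT NULL)")

conn.execute("BEGIN")
for name in ["view_a", None, "view_b"]:       # None triggers a NOT NULL error
    conn.execute("SAVEPOINT one_step")        # per-iteration savepoint
    try:
        conn.execute("INSERT INTO refreshed VALUES (?)", (name,))
        conn.execute("RELEASE one_step")
    except sqlite3.IntegrityError:
        conn.execute("ROLLBACK TO one_step")  # undo just this iteration
        conn.execute("RELEASE one_step")
        print("ERROR refreshing", name)
conn.execute("COMMIT")

print([r[0] for r in conn.execute("SELECT name FROM refreshed")])  # ['view_a', 'view_b']
```

The surviving work from the good iterations is committed; only the failing step is undone.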
Magier
(4827 rep)
Sep 22, 2016, 08:10 AM
• Last activity: Jan 11, 2025, 01:34 PM
-1
votes
1
answers
76
views
Issue with Incomplete Rollback in PolarDB During Batch Write Operations
I'm encountering an unusual scenario while working with PolarDB where a batch of write SQL operations was executed. One of the SQL statements within this batch encountered an error, necessitating a transaction rollback. However, it appears that the rollback process did not complete successfully, even though no error messages were propagated back to the business logic layer, which typically monitors for such exceptions.
Given that resource availability is not a concern — with ample CPU, memory, and sufficient free disk space — and there were no evident lock conflicts during this operation, I'm seeking guidance on how best to diagnose and understand why the rollback might have failed or appeared incomplete.
Specifically, I'm interested in:
1. Diagnostic Steps: What are the recommended steps or tools to trace and verify the status of a rollback operation in PolarDB?
2. Error Logging: How can I ensure or check if any internal error logs within PolarDB might have captured details about the unsuccessful rollback attempt, given the absence of error reports in the application layer?
3. Transaction Isolation Levels: Could the transaction isolation level in use have any implications on the observed behavior, and if so, what adjustments might be considered?
4. Best Practices: Are there any known issues or best practices when handling transaction rollbacks in PolarDB, especially in the context of batched writes, that could help prevent or troubleshoot similar incidents?
Any insights, past experiences, or pointers to relevant documentation would be greatly appreciated.
user24266786
(1 rep)
Apr 17, 2024, 05:39 AM
• Last activity: Aug 4, 2024, 09:22 AM
-2
votes
1
answers
644
views
Killed SPID on xp_LogInfo not rolling back and growing tempdb
I received a notification that the space on the drive my tempdb sits on was running low; when I checked the running processes, they pointed to a SQL Server Agent task.
I checked the scheduled tasks and we had one that had been running for over a day, usually it runs for less than a minute, and it was showing a status of "Executing: 0(Unknown)" in the job activity monitor.
When I tried to stop the Job in Activity Monitor nothing happened, it just kept on running and filling up tempdb to almost the full size of the disc. Knowing that the job itself could just be stopped and re-run without affecting the business, I, perhaps too hastily, killed the SPID, thinking this would "release" space back to tempdb and I would then restart the services out of hours to shrink the files.
Unfortunately this didn't work and the SPID has been in a Killed/rollback state for nearly a day now whilst the tempdb files grow (I arranged for the disk they sit on to be increased whilst I looked into it).
When I check the process for the SPID, it appears to be running xp_LogInfo, with the last batch relating to:

insert #nt select distinct domain+N'\'+name, sid, sidtype from OpenRowset(TABLE NETUSERGETGROUPS, @acctnam
I believe the owner of the SQL job may have reset their password when the job kicked in (this is just a guess), which caused it to get stuck in a loop.
The CPU and MemUsage in sysprocesses haven't changed for over a day now, and the LastWaitType is showing as PREEMPTIVE_OS_AUTHORIZATIONOPS.
When running DBCC INPUTBUFFER for the SPID I previously killed, I receive this:
EventType - Language Event
Parameters - 0
EventInfo - (@P1 nvarchar(128),@P2 uniqueidentifier)EXECUTE msdb.dbo.sp_sqlagent_has_server_access @login_name = @P1, @job_id = @P2
I have now restarted the SQL Server Agent Service which stopped the job running but the tempdb log continues to grow.
What can I do to resolve this?
When running DBCC OPENTRAN(tempdb) I get the output below; the SPID it relates to is the one I killed:
Transaction information for database 'tempdb'.
Oldest active transaction:
SPID (server process ID): 205
UID (user ID) : -1
Name : INSERT EXEC
LSN : (5293:26478:20)
Start time : Oct 19 2020 3:30:02:193PM
SID : 0x0105000000000005150000005f9b10c13b99fcfc4ad890fd8e220200
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
Running kill {spid} with statusonly returned: "SPID 205: transaction rollback in progress. Estimated rollback completion: 0%. Estimated time remaining: 0 seconds."
Shayn Thomas
(15 rep)
Oct 22, 2020, 06:46 PM
• Last activity: Jul 3, 2024, 11:00 PM
0
votes
1
answers
751
views
Copy a Large INNODB table in MySQL
I have an INNODB table that's > 93 million rows. A lot of the data is considered "temp" data and is governed by an "is_active" flag of 1/0. When a user updates a form, new data is written with an "is_active=1" and the previous active records are updated to "is_active=0".
We wanted to move the data to a new table to clean up, and ran a statement like...
INSERT INTO tblNew (a, b, c)
SELECT a, b, c FROM tblOld WHERE is_active=1
This ran overnight, and when I looked in the morning I noticed a bunch of processes backed up in SHOW PROCESSLIST, so I did a KILL on the process ID, which started the ROLLBACK and brought the server down for another 10 hours... on a production box, of course.
I've been reading a lot about how you can try to repair, etc., and have been doing that all day, but I'm wondering: is there any option I could have added to avoid the need for a rollback on failure? Or is there a strategy to commit or flush every X number of rows?
I was trying this...
INSERT INTO tblNew (a, b, c)
SELECT a, b, c FROM tblOld WHERE is_active=1 AND pkID > 0 AND pkID < 1000000
Here pkID is the primary key. I would run it in groups of 550k to 1M rows and raise the PK range each run. There is an index on the PK and on is_active, yet I noticed the time per run increased from 30 seconds to over 5 minutes by the time it was in the 20M range. Any idea why this would take longer each run when it's the same number of rows of work?
So, in summary, two questions:
1) Can I do something to keep a huge rollback from happening if I stop the process?
2) Why did inserting the same number of items based on PK and indexed column take progressively longer per run?
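On question 1, a common way to keep any single failure (or KILL) cheap is exactly the chunked approach in the question, with a commit per chunk so the open transaction never covers more than one batch. A runnable sketch of the idea (Python + sqlite3; the table and column names are made up):

```python
import sqlite3

# Copy active rows in PK ranges, committing per batch, so an interrupt
# rolls back at most one small batch instead of the whole copy.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE tbl_old (pk INTEGER PRIMARY KEY, a TEXT, is_active INTEGER)")
conn.execute("CREATE TABLE tbl_new (pk INTEGER PRIMARY KEY, a TEXT)")
conn.executemany("INSERT INTO tbl_old VALUES (?, ?, ?)",
                 [(i, f"row{i}", i % 2) for i in range(1, 10001)])  # 5000 active rows

BATCH = 1000
for lo in range(0, 10001, BATCH):
    conn.execute("BEGIN")
    conn.execute(
        "INSERT INTO tbl_new SELECT pk, a FROM tbl_old "
        "WHERE is_active = 1 AND pk > ? AND pk <= ?", (lo, lo + BATCH))
    conn.execute("COMMIT")   # small transaction: cheap to roll back if killed

print(conn.execute("SELECT COUNT(*) FROM tbl_new").fetchone()[0])  # 5000
```

On question 2, one frequent cause of the slowdown is the optimizer driving the query from the is_active index rather than the PK range, so later runs re-scan entries already copied; checking the plan per range, or forcing the PK to drive, is worth testing. This is a hypothesis, not a diagnosis of the poster's server.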
Don
(103 rep)
May 15, 2015, 10:18 PM
• Last activity: Apr 27, 2024, 06:14 PM
1
votes
1
answers
132
views
How can I see how many records have been affected by a transaction before it is committed, and how costly would the rollback be if I have to kill it?
I am using sqlwatch to monitor some of my servers, and it has a job that deletes in batches from a table called dbo.sqlwatch_logger_snapshot_header.

I had lots of problems with locking and blocking, as this table is used by different processes, so I disabled all other jobs except the one that deals with this table.

Progress has been slow but steady: there were 37.2 million rows, and now there are 36.9 million, so basically it is working; so far, so good.

Now my question: suppose I had to stop this job for some reason. It is a batch delete; if I stopped it now, how many rows would be affected, and how costly would it be to roll back what has not been committed yet?
I used the following script to disable all other sqlwatch jobs:
use distribution
set nocount on
set transaction isolation level read uncommitted

declare @Job_id uniqueidentifier
declare @job_name varchar(300)
declare @category_name varchar(300)
declare @body varchar(max)
declare @flag tinyint
declare @enabled bit = 0; -- 0 = disable, 1 = enable
declare @subject varchar(300) = case when @enabled = 0 then 'Disabling sqlwatch jobs on ' + @@servername else 'Enabling sqlwatch jobs on ' + @@servername end;
declare @job_enabled bit;

set @flag = 0
set @body = 'The following jobs are going to be ' +
            case when @enabled = 0 then 'Disabled' else 'Enabled' end
            + ' : ' + char(10) + char(13)

IF OBJECT_ID('tempdb.[dbo].[#LogReaderAgents]') IS NOT NULL
    DROP TABLE [dbo].[#LogReaderAgents]

CREATE TABLE [dbo].[#LogReaderAgents] (
    [job_id] UNIQUEIDENTIFIER NOT NULL,
    [job_name] SYSNAME NOT NULL,
    [category_name] SYSNAME NOT NULL,
    [enabled] TINYINT NOT NULL)

INSERT INTO [dbo].[#LogReaderAgents] ([job_id],[job_name],[category_name],[enabled])
select job_id, job_name = sj.name, category_name = sc.name, sj.enabled
from msdb.dbo.sysjobs sj
inner join msdb.dbo.syscategories sc
    on sj.category_id = sc.category_id
where 1=1
  and sj.name like 'SQLWATCH%'
  -- and sc.category_id in (10,13)
  and sj.name not in ('SQLWATCH-INTERNAL-RETENTION')

-- exec sp_gettabledef 'dbo.#LogReaderAgents'
-- exec sp_GetInsertList 'TEMPDB','DBO.#LogReaderAgents'

DECLARE c1 CURSOR FOR
    SELECT Job_id, job_name, category_name, [enabled]
    FROM #LogReaderAgents

OPEN c1
FETCH NEXT FROM c1 INTO @Job_id, @job_name, @category_name, @job_enabled
WHILE @@FETCH_STATUS = 0
begin
    if (select top (1) stop_execution_date
        from msdb.dbo.sysjobactivity ja
        where Job_ID = @Job_id
          and ja.start_execution_date IS NOT NULL
        order by Start_execution_date desc) is not NULL
    begin
        set @flag = 1
        Print @job_name + ' is ' + case when @job_enabled = 1 then 'Enabled' else 'disabled' end
        exec msdb..sp_update_job @job_id = @job_id, @enabled = @enabled
        set @Body = @Body + char(10) + char(13) + @job_name + ' -- ' + @category_name + ' is ' + case when @job_enabled = 1 then 'Enabled' else 'disabled' end
    end
    FETCH NEXT FROM c1 INTO @Job_id, @job_name, @category_name, @job_enabled
end
CLOSE c1
DEALLOCATE c1
However, after that the job is still running, but I don't see any change in the number of records in this table.
The job executes a procedure that does the delete in batches (screenshots omitted).
Marcello Miorelli
(17274 rep)
Feb 23, 2024, 04:56 PM
• Last activity: Feb 25, 2024, 01:35 AM
1
votes
1
answers
81
views
Why does creating an index on MySQL(InnoDB) and then crashing deliberately in between not result in a rollback?
I created this USERS table with 10 million records:
mysql> desc users;
+--------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------+--------------+------+-----+---------+-------+
| id | int | NO | | NULL | |
| name | varchar(255) | NO | | NULL | |
| email | varchar(255) | NO | | NULL | |
| gender | varchar(10) | NO | | NULL | |
+--------+--------------+------+-----+---------+-------+
Now I have two terminal sessions, session_1 and session_2.
In session_1, I ran this command
mysql> alter table users add primary key(id);
While this alter command in session_1 was still in progress,
In session_2, I kill the mysql client using kill -9.

When I restart the mysql client and issue DESC users once more, I don't see any primary key on the id column, but after a minute or so the primary key appears on the id column.
mysql> desc users;
+--------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+--------+--------------+------+-----+---------+-------+
| id | int | NO | PRI | NULL | |
| name | varchar(255) | NO | | NULL | |
| email | varchar(255) | NO | | NULL | |
| gender | varchar(10) | NO | | NULL | |
+--------+--------------+------+-----+---------+-------+
4 rows in set (0.01 sec)
I have also tried with SET autocommit = 0.
catsarethebest
(13 rep)
Dec 31, 2023, 08:49 AM
• Last activity: Dec 31, 2023, 06:35 PM
0
votes
1
answers
49
views
Rollback/forward DiskIO ratio
I tried an `alter table X add column C`, but it was taking too long and rolled back(actually killed it). When I did so, sp_who2 showed DiskIO = 5 mil. Is it meaningful to assume that it will take another 5 mil to complete the rollback, regardless of time? If not, is there some kind of multiplier tha...
I tried an ALTER TABLE X ADD COLUMN C, but it was taking too long, so I rolled it back (actually, I killed it). When I did so, sp_who2 showed DiskIO = 5 million.

Is it reasonable to assume that it will take another 5 million to complete the rollback, regardless of time? If not, is there some kind of multiplier that has to do with the transaction log?
George Menoutis
(447 rep)
Sep 26, 2023, 06:23 AM
• Last activity: Sep 26, 2023, 01:14 PM
21
votes
5
answers
8458
views
Is ROLLBACK a fast operation?
Is it true that RDBMS systems are optimized for `COMMIT` operations? How much slower/faster are `ROLLBACK` operations and why?
garik
(6782 rep)
Aug 31, 2011, 08:25 AM
• Last activity: Aug 6, 2023, 11:28 AM
4
votes
0
answers
1726
views
InnoDB crashes on trx rollback
I have an InnoDB assertion failure during a trx rollback:
160316 10:47:31 InnoDB: Rolling back trx with id 5ADBE4173, 1 rows to undo
160316 10:47:31 InnoDB: Assertion failure in thread 140611160659712 in file fut0lst.ic line 83
InnoDB: Failing assertion: addr.page == FIL_NULL || addr.bof
I've managed to start MySQL with innodb_force_recovery = 3 (SRV_FORCE_NO_TRX_UNDO). Now I can see this transaction in INFORMATION_SCHEMA:
mysql> select * from information_schema.innodb_trx\G
*************************** 1. row ***************************
trx_id: 5ADBE4173
trx_state: RUNNING
trx_started: 2016-03-16 11:16:08
trx_requested_lock_id: NULL
trx_wait_started: NULL
trx_weight: 1
trx_mysql_thread_id: 0
trx_query: NULL
trx_operation_state: NULL
trx_tables_in_use: 0
trx_tables_locked: 0
trx_lock_structs: 0
trx_lock_memory_bytes: 376
trx_rows_locked: 0
trx_rows_modified: 1
trx_concurrency_tickets: 0
trx_isolation_level: REPEATABLE READ
trx_unique_checks: 1
trx_foreign_key_checks: 1
trx_last_foreign_key_error: NULL
trx_adaptive_hash_latched: 0
trx_adaptive_hash_timeout: 10000
1 row in set (0.00 sec)
mysql>
It also appears in SHOW ENGINE INNODB STATUS:
TRANSACTIONS
------------
Trx id counter 5ADBE4303
Purge done for trx's n:o < 0 undo n:o < 0
History list length 1897
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 0, not started
MySQL thread id 56285, OS thread handle 0x7f31cdca4700, query id 174313 localhost root
show engine innodb status
---TRANSACTION 5ADBE4173, ACTIVE 590 sec recovered trx, undo log entries 1
----------------------------
END OF INNODB MONITOR OUTPUT
============================
The MySQL documentation states that "You cannot cancel transactions that are in the process of being rolled back." So is there any way to know which tables it uses? And if there is, can I just drop the offending tables and restart the server with innodb_force_recovery = 0?
gevial
(141 rep)
Mar 16, 2016, 08:43 AM
• Last activity: Jul 20, 2023, 07:43 PM
-1
votes
1
answers
93
views
How to roll back a data mart update if data quality checks are not passed?
Good afternoon!
I am working with an Oracle database. I have a PL/SQL procedure, scheduled with a DBMS job, that updates a table SOME_TABLE every night:
CREATE OR REPLACE PROCEDURE MY_PROC
IS
    START_DATE DATE;
BEGIN
    START_DATE := trunc(add_months(trunc(sysdate), -2), 'MM');

    -- delete data for the last 3 months and insert updated data for the last 3 months
    DELETE FROM SOME_TABLE
    WHERE 1=1
      AND VALUE_DAY >= START_DATE;

    INSERT INTO SOME_TABLE
    SELECT ...
    FROM OTHER_TABLE
    ...
    ;

    COMMIT;
END;
I want to add some basic data quality checks for this table (number of null values in columns, etc.). So my question is: what is the best way to design the data quality checks, and the procedure MY_PROC itself, so that the quality of SOME_TABLE is tested after the update and the update is rolled back if the checks are not passed?
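One way to structure this is to run the checks inside the same transaction, after the refresh but before the COMMIT, and roll back if they fail, so the old data stays in place. The control flow, sketched in Python + sqlite3 (the table, columns, and the null-count rule are hypothetical, and in the real procedure this would live in PL/SQL before the COMMIT):

```python
import sqlite3

# Sketch of "refresh, validate, then commit or roll back".
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE some_table (value_day TEXT, metric REAL)")
conn.execute("INSERT INTO some_table VALUES ('2023-01-01', 1.0)")   # existing good data

conn.execute("BEGIN")
conn.execute("DELETE FROM some_table")
conn.execute("INSERT INTO some_table VALUES ('2023-02-01', NULL)")  # bad refresh data

# quality check inside the same transaction, BEFORE the commit
nulls = conn.execute("SELECT COUNT(*) FROM some_table WHERE metric IS NULL").fetchone()[0]
if nulls > 0:
    conn.execute("ROLLBACK")   # checks failed: the old data survives
else:
    conn.execute("COMMIT")

print(conn.execute("SELECT value_day FROM some_table").fetchone()[0])  # 2023-01-01
```

In the PL/SQL version the check could RAISE_APPLICATION_ERROR, with the ROLLBACK in the exception handler, to the same effect.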
rsx
(1 rep)
Jun 22, 2023, 08:49 AM
• Last activity: Jun 22, 2023, 06:13 PM
1
votes
4
answers
13270
views
How to test a query
I would like to run a query just to test it, with the possibility of going back to the previous state. I saw ROLLBACK TRANSACTION.

Do I just have to run that command beforehand?

DELETE FROM myTables
WHERE colName = '***'
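The usual pattern is the other way round: start a transaction first, run the statement, inspect the result, and then ROLLBACK. A minimal sketch with made-up data (Python + sqlite3; the table mirrors the names in the question):

```python
import sqlite3

# "Try the DELETE, look at the result, then roll back."
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE myTables (colName TEXT)")
conn.executemany("INSERT INTO myTables VALUES (?)", [("***",), ("keep",)])

conn.execute("BEGIN")                          # start the transaction FIRST
conn.execute("DELETE FROM myTables WHERE colName = '***'")
remaining = conn.execute("SELECT COUNT(*) FROM myTables").fetchone()[0]
print("rows after delete:", remaining)         # inspect the effect: 1
conn.execute("ROLLBACK")                       # undo the test

print(conn.execute("SELECT COUNT(*) FROM myTables").fetchone()[0])  # back to 2
```

The same BEGIN / statement / ROLLBACK shape works in most SQL clients, as long as autocommit is not committing each statement for you.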
Lika
(21 rep)
Nov 26, 2018, 10:18 AM
• Last activity: Jun 5, 2023, 08:04 AM
1
votes
2
answers
1466
views
Trap exception in PostgreSQL script
I want to run a script inside a transaction in PostgreSQL, so I surround the SQL code with BEGIN and COMMIT statements. But I want to ROLLBACK on error, and I don't see how to do that.
BEGIN;
UPDATE public.tablename
SET blah = 'xxx'
WHERE thing= '123';
COMMIT;
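Worth noting: in PostgreSQL, any error inside a BEGIN ... COMMIT block aborts the transaction, subsequent statements fail, and the final COMMIT then ends the transaction as a rollback, so a plain script already cannot half-apply. When driving the script from application code, the same idea is an explicit try/commit/except-rollback. A sketch using sqlite3 as a stand-in driver (table and deliberately failing statement are made up):

```python
import sqlite3

# "Run the script in one transaction, roll back on any error."
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE tablename (thing TEXT UNIQUE, blah TEXT)")
conn.execute("INSERT INTO tablename VALUES ('123', 'old')")

try:
    conn.execute("BEGIN")
    conn.execute("UPDATE tablename SET blah = 'xxx' WHERE thing = '123'")
    conn.execute("INSERT INTO tablename VALUES ('123', 'dup')")  # fails: UNIQUE
    conn.execute("COMMIT")
except sqlite3.Error:
    conn.execute("ROLLBACK")   # the earlier UPDATE is undone too

print(conn.execute("SELECT blah FROM tablename").fetchone()[0])  # old
```

With psql specifically, running the script with ON_ERROR_STOP gives the same stop-on-first-error behavior.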
Peter Marshall
(121 rep)
Dec 1, 2020, 03:49 PM
• Last activity: Apr 23, 2023, 03:30 AM
1
votes
1
answers
2808
views
MariaDB writing transaction and rollback - novice
Currently I am learning how to write transactions for MariaDB (I have version 10.6 running). I attempt to execute several commands and, based on their success, to commit or roll back. So far I have searched Google for a few days and got really stuck, not knowing how to go about it. This is how far I have got, and I cannot figure out what is wrong. I am very grateful for any pointers! Cheers
JR
DROP TABLE IF EXISTS FWtest.testtrans;
CREATE TABLE FWtest.testtrans (
    ID INT NOT NULL AUTO_INCREMENT,
    ver INT NULL DEFAULT NULL,
    name VARCHAR(10) NULL DEFAULT NULL,
    savedatetime DATETIME NULL DEFAULT NULL,
    PRIMARY KEY (ID)
) ENGINE = InnoDB;

DROP TABLE IF EXISTS FWtest.testtrans2;
CREATE TABLE FWtest.testtrans2 (
    ID INT NOT NULL,
    param1 INT NULL DEFAULT NULL,
    param2 INT NULL DEFAULT NULL
) ENGINE = InnoDB;
DELIMITER |
START TRANSACTION
BEGIN NOT ATOMIC TRY;
    SET @config_name = 'test';
    -- get the next version number, set 0 if none
    SELECT @config_version := IFNULL(max(ver)+1, 0) FROM FWtest.testtrans WHERE name = @config_name;
    INSERT INTO FWtest.testtrans (name, ver, savedatetime)
    VALUES ( @config_name, @config_version, NOW() );
    SELECT @config_id := LAST_INSERT_ID();
    INSERT INTO FWtest.testtrans2 (ID, param1, param2)
    VALUES
        (@config_id, 1, null),
        (@config_id, 2, 'aa'); -- this line will make the insert fail
END TRY
BEGIN CATCH
    ROLLBACK;
END CATCH
COMMIT|
DELIMITER ;
**SOLUTION:**
Thanks to Akina's pointers, I worked out a solution using an exception handler directly in a transaction:
DELIMITER |
BEGIN NOT ATOMIC
    DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK;
    START TRANSACTION;
    SET @config_name = 'test';
    SELECT @config_version := IFNULL(max(ver)+1, 0) FROM FWtest.testtrans WHERE name = @config_name;
    INSERT INTO FWtest.testtrans (name, ver, savedatetime) VALUES ( @config_name, @config_version, NOW() );
    SELECT @config_id := LAST_INSERT_ID();
    INSERT INTO FWtest.testtrans2 (ID, param1, param2)
    VALUES
        (@config_id, 1, null),
        (@config_id, 2, 'nn'); -- this line will make the insert fail
    COMMIT;
END|
DELIMITER ;
Note: When using this sample code in a SQL tool (e.g. DBeaver), the tool's own error checking and auto-rollback need to be turned off for the transaction to handle the error itself.
JRB
(13 rep)
Jan 13, 2023, 12:02 PM
• Last activity: Jan 13, 2023, 03:23 PM
0
votes
2
answers
482
views
Does any database track all inserts, updates, deletes so that you can simply undo any changes until you get back to a good state?
Sometimes people make inserts, updates, or deletes they did not intend to and want to revert back their changes (or undo them). Are there databases out there that make this easy to do (i.e. are there database that track every single insert, update, delete ever made to it so that you can recreate the entire database up until a certain time)?
user1068636
(453 rep)
Dec 16, 2022, 03:30 AM
• Last activity: Dec 16, 2022, 12:58 PM
0
votes
2
answers
2098
views
Guaranteed rollback in SSIS
I'm trying to create a job in SSIS to truncate a table, read from a data source, and rewrite the table, so only two steps (1: truncate; 2: select + insert). I need to guarantee a rollback of the truncate in the event that the insert has errors.

I tried with a sequence container and the TransactionOption property, but SSIS doesn't want to cooperate.

PS: I can't do too many extra operations, because I have millions of records and I need fast performance.
Giorgia Grossi
(1 rep)
Oct 26, 2022, 02:06 PM
• Last activity: Oct 26, 2022, 06:56 PM
1
votes
1
answers
2595
views
What happens if transaction rollback fails?
So a failure while committing leads to a rollback. But what if the rollback also fails for some reason, and the database has been changed by a partial commit? How is the database restored to its original state?
The database in question is postgresql.
Kishan Kumar
(123 rep)
May 24, 2020, 07:01 AM
• Last activity: Oct 11, 2022, 12:04 PM
Showing page 1 of 20 total questions