Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
686
views
Restoring secondary backup files on the primary server in SQL Server
I want to restore a log backup file on top of a copy-only full backup, with the database left in restoring mode, on the primary replica server. These backups were taken from the secondary replica.
I took a copy-only full backup on the secondary replica, then failed over, and after that took a .trn log backup on the primary replica (the same server that took the full backup).
So both the full and .trn backups came from one server, but the full backup was taken while the server was in the secondary role and the log backup while it was in the primary role, and I want to restore both of them in restoring mode on the secondary replica after failover.
I wanted to add the database to the cluster by hand because our databases are big.
There was an error when they were restored on the primary server.
The error is:
> The log in this backup set begins at LSN 38202000004450500001, which is too recent to apply to the database.
An earlier log backup that includes LSN 38167000000015500001 can be restored.
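For reference, the intended restore sequence looks roughly like this (a sketch with hypothetical file names, not the poster's actual commands); the LSN error above typically means a log backup taken between the two files is missing from the chain and must be restored first:
RESTORE DATABASE [MyDb]
    FROM DISK = N'\\backups\MyDb_copyonly.bak'  -- copy-only full taken on the secondary
    WITH NORECOVERY;                            -- leave the database in RESTORING mode
-- Log backups must then be restored in unbroken LSN order:
RESTORE LOG [MyDb]
    FROM DISK = N'\\backups\MyDb_log.trn'
    WITH NORECOVERY;
Running RESTORE HEADERONLY against each backup file shows its FirstLSN/LastLSN values, which helps locate the gap.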
NShyn
(1 rep)
Jan 25, 2022, 09:57 AM
• Last activity: Jul 25, 2025, 10:05 PM
1
votes
1
answers
153
views
Why did full backup file size grow after data movement for Availability Group replica was suspended?
I found an error log message informing that data movement to a secondary availability replica was suspended. I'm not sure whether the cause of the suspension is relevant to answering my question, but the cause was "Error 3456 ... could not redo log record" etc. This is a known issue with a resolution involving patching SQL Server.
However, this condition wasn't addressed for a few days. The nightly full backup file sizes (taken against the primary replica) began increasing by about 10% per night. The database also has log backups taken every 15 minutes for most hours of the day, but log backups are disabled during a window at night while indexes are rebuilt and full backups are made.
To get the secondary back online, restoring from backup was required. In this instance some issues were encountered: first an attempt was made to rejoin the AG, but that hung for ages. Then a command was given to remove that replica database from the AG, but of course it hung too. I killed both those processes but could not kill the session that had faulted while redoing the log records (it is not a user session, although its ID was above 50, by the way). I therefore restarted SQL Server on that replica before eventually restoring from backup and rejoining the AG.
Having resolved the issue, the next nightly full backup file size (on the primary, again) had reduced back to normal size.
Why would the *full* backup file increase in size until this issue was resolved? Is it because the log could not be cleared and this was being included in the full backup? If so, then what was getting backed up in the 15-minute log backups? Imagine for a moment that the primary had suffered a failure during this period: could I not have restored from the available full and log backup files I had (i.e. because obviously something remained in the log such that it wasn't clearing)? And if it was a case of the log not clearing, then why (given that the secondary had been suspended, surely the primary no longer waits for the secondary before committing its own log)?
Afterthought: the system process that I could not kill without a server restart was the one that had faulted while attempting to apply the log to the secondary, leading the secondary to be suspended. I suppose this counts as a "long running transaction"? If that is the reason, can someone still please clarify the earlier questions for me, particularly how "at risk" the primary was until this was resolved?
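As a side note for anyone in a similar spot: the backlog building up for a suspended secondary can be watched with something along these lines (my sketch, not from the original post):
SELECT DB_NAME(database_id) AS database_name,
       synchronization_state_desc,
       log_send_queue_size,  -- KB of log not yet sent to the secondary
       redo_queue_size       -- KB of log received but not yet redone
FROM sys.dm_hadr_database_replica_states;
A growing log_send_queue_size while a replica is suspended is exactly the log the primary must retain, and that retained log travels along inside the full backups.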
youcantryreachingme
(1655 rep)
Jul 20, 2020, 06:48 AM
• Last activity: Jul 18, 2025, 12:02 AM
0
votes
1
answers
639
views
How to fix checkpoint request problems?
On PostgreSQL v15, heavy inserts into our database start every morning at 5:15 AM and finish around 7:00 AM. I am trying to see whether some parameters, such as the checkpoint settings, can be tuned.
I have written a script that executes the following SQL:
SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter
to get "checkpoints timed" vs. "checkpoints requested". Or in other words do "checkpoints requested" appears at morning heavy inserts.
I have set log_checkpoints=on
postgresql.conf parameter checkpoints info to be written in database server log file. I see the following related to checkpoints:
2024-09-09 05:22:34 LOG: checkpoint starting: time
2024-09-09 05:33:37 LOG: checkpoint complete: wrote 779881 buffers (8.9%); 0 WAL file(s) added, 483 removed, 664 recycled; write=652.802 s, sync=0.165 s, total=662.212 s; sync files=170, longest=0.059 s, average=0.001 s; distance=12326973 kB, estimate=12326973 kB
2024-09-09 05:34:27 LOG: checkpoint starting: wal
2024-09-09 05:48:27 LOG: checkpoint complete: wrote 1818364 buffers (20.7%); 0 WAL file(s) added, 0 removed, 1009 recycled; write=837.323 s, sync=0.097 s, total=839.269 s; sync files=175, longest=0.024 s, average=0.001 s; distance=16533226 kB, estimate=16533226 kB
2024-09-09 05:50:24 LOG: checkpoint starting: wal
2024-09-09 06:06:02 LOG: checkpoint complete: wrote 1302347 buffers (14.8%); 0 WAL file(s) added, 0 removed, 1010 recycled; write=938.102 s, sync=0.068 s, total=938.629 s; sync files=301, longest=0.011 s, average=0.001 s; distance=16547328 kB, estimate=16547328 kB
2024-09-09 06:08:20 LOG: checkpoint starting: wal
2024-09-09 06:35:20 LOG: checkpoint complete: wrote 1015915 buffers (11.6%); 0 WAL file(s) added, 0 removed, 880 recycled; write=1619.413 s, sync=0.038 s, total=1619.779 s; sync files=488, longest=0.020 s, average=0.001 s; distance=16548651 kB, estimate=16548651 kB
2024-09-09 06:38:20 LOG: checkpoint starting: time
2024-09-09 07:05:20 LOG: checkpoint complete: wrote 320834 buffers (3.7%); 0 WAL file(s) added, 0 removed, 376 recycled; write=1619.469 s, sync=0.004 s, total=1619.906 s; sync files=755, longest=0.001 s, average=0.001 s; distance=13792920 kB, estimate=16273078 kB
Throughout the whole day and night I see _checkpoint starting: time_ events (just like the first line); I assume those are _checkpoints timed_ events. The exceptions are at 5:34, 5:50 and 6:08 (see the log above), where _checkpoint starting: wal_ appears three times in sequence; I assume those are _checkpoints requested_ events. In theory those events slow down database performance and should be avoided.
If we look at the fourth log line (the 05:48:27 "checkpoint complete"), there is:
- 1818364 buffers: multiplied by 8192 (block_size) and divided by 67 * 1024 * 1024 * 1024 (shared_buffers), this gives 20.7%.
- 20.7% is the percentage of memory buffers changed (dirty blocks between the "checkpoint starting" and "checkpoint complete" events) that the checkpoint had to save to the data files under $PGDATA/base/"database"/.
- 0 WAL file(s) added: there was no need to add additional WAL files, because all are already present (according to the max_wal_size parameter).
- 1009 recycled: the number of old WAL files renamed and reused.
- write=837.323 s: that is 13 min 57 s, the time the checkpoint needed to complete its writes (also the time from "checkpoint starting: wal" to "checkpoint complete").
- sync=0.097 s: probably some disk fsync or similar (negligible in our case, given the very small number).
- total=839.269 s: probably write + sync + "something".
- sync files=175: the number of data files written by the checkpoint in the $PGDATA/base/"database"/ directory from checkpoint "start" to "complete".
- longest=0.024 s: the maximal time for an individual data file to be written.
- average=0.001 s: the average time for a data file to be written.
Now there are two values whose meaning I don't understand:
- distance=16533226 kB - 15.7 GB
- estimate=16533226 kB - 15.7 GB
QUESTIONS:
1. What do distance/estimate mean?
2. Why are requested checkpoints fired in my case?
3. Is increasing max_wal_size recommended? If yes, how do I calculate the new parameter size?
4. Is some other parameter change recommended, like decreasing checkpoint_completion_target to e.g. 0.8?
PostgreSQL instance settings:
shared_buffers: 67GB
wal_buffers: 32MB
full_page_writes: on
max_wal_size: 30GB
checkpoint_timeout: 30min
checkpoint_completion_target: 0.9
min_wal_size: 512MB
block_size: 8192
wal_segment_size: 16MB
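As a back-of-the-envelope check (my own sketch, using the commonly cited formula for PostgreSQL 11+, where a WAL-triggered checkpoint fires at roughly max_wal_size / (1 + checkpoint_completion_target)):
-- 30 GB / (1 + 0.9) ≈ 15.8 GB, which matches the observed distance of ~16.5 GB,
-- so "checkpoint starting: wal" is expected during the insert window.
-- Keeping all checkpoints timed would need max_wal_size comfortably above
-- 1.9 * 16.5 GB ≈ 31 GB, disk space permitting.
SELECT checkpoints_timed, checkpoints_req
FROM pg_stat_bgwriter;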
Regards
folow
(523 rep)
Sep 9, 2024, 12:29 PM
• Last activity: Jul 13, 2025, 10:03 AM
0
votes
2
answers
176
views
The actual cause of a full transaction log under the below scenario
In one of the instances, it shows
>The transaction log for database 'DB' is full. To find out why space in the log cannot be reused, see the **log_reuse_wait_desc** column in **sys.databases**
The instance has at least 100 GB of free hard disk storage, and the transaction log is allowed to grow to 2,097,152 MB (2 TB) with a growth rate of 10%.
The database recovery model is FULL (at least it displays FULL in SSMS).
In this case, what can cause the full-transaction-log exception,
when the transaction log has enough physical storage to grow and the log size is not bigger than the upper bound?
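For diagnosing such a case, the error message itself points at the query to run; a minimal sketch:
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'DB';       -- shows why log space cannot be reused

DBCC SQLPERF (LOGSPACE); -- current log size and percent used per database
Common log_reuse_wait_desc values such as LOG_BACKUP (no log backup taken in FULL recovery), ACTIVE_TRANSACTION or REPLICATION each explain a full log even when the file could still physically grow.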
SKLTFZ
(141 rep)
Mar 28, 2019, 04:30 AM
• Last activity: Jul 6, 2025, 02:51 AM
0
votes
1
answers
37
views
Oracle Redo Log shipping from PRIMARY to PRIMARY
I am setting up online and archived redo shipping from a source database (PRIMARY) to destination database (PRIMARY). I cannot use a STANDBY database as destination as I have to setup real time downstream capture using Oracle XStream and standby databases are READ only.
The issue I am facing is that once I set up both DBs following instructions from here and here, I am able to receive the archived redo log files (I see the files showing up after a log switch). But I am not receiving online redo log files, which are required for real-time capture.
Has anyone set this up who could help with what might be missing? Please note that if I switch the destination DB from PRIMARY to STANDBY, I receive both online and archived redo log files, but that doesn't serve my purpose.
I am using the simple remote login password mode to communicate between the two DBs.
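For context, a minimal sketch of the pieces usually involved in real-time redo shipping (service names and paths are hypothetical, not from the post): online redo is received into standby redo logs on the destination, so those must exist there even though the database is open as a primary.
-- On the destination (downstream) database:
ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/oradata/DEST/srl04.log') SIZE 512M;
-- On the source database:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=dest_tns ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)'
  SCOPE=BOTH;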
Ashok
(1 rep)
May 8, 2025, 04:33 AM
• Last activity: Jul 4, 2025, 08:38 AM
3
votes
1
answers
231
views
How does MSSQL handle cross-database transactions in TRN files?
I have a distributed transaction between two databases (let's call them *data* and *history*) on the same server, so no MSDTC. I can't find any documentation about how the transaction will be saved in both .trn files and how it will link them. Can I restore the *data* DB without *history*? Will it fail? What are the impacts?
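For concreteness, the kind of transaction in question looks like this (table names hypothetical); each database writes its own log records for its half of the work, so each .trn file only contains that database's part:
BEGIN TRANSACTION;
    INSERT INTO data.dbo.Orders (Id) VALUES (42);
    INSERT INTO history.dbo.OrdersAudit (OrderId) VALUES (42);
COMMIT TRANSACTION;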
Thomas Ayoub
(91 rep)
Jun 24, 2025, 01:30 PM
• Last activity: Jun 24, 2025, 02:35 PM
1
votes
2
answers
369
views
Track SELECT statements on specific tables
I'm searching for a way to keep track of any SELECT statements on a specific table in a database where logs are not enabled. I can't find any solutions to this except to force people to use a stored procedure to select data from the table and then log entries within that SP.
Is there a better solution to this problem? I do have full rights over the processlist, if that helps, but given that the table I want to keep track of is pretty small, there's a good chance I would miss queries between polling calls.
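If this is MySQL/MariaDB (the mention of the processlist suggests so), one heavyweight option is the general query log, sketched below with a hypothetical table name; it records every statement server-wide, so it is usually only viable temporarily:
SET GLOBAL log_output = 'TABLE';  -- send the general log to the mysql.general_log table
SET GLOBAL general_log = 'ON';

SELECT event_time, user_host, argument
FROM mysql.general_log
WHERE argument LIKE '%my_table%';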
sumo_saka
(11 rep)
Jul 15, 2021, 06:16 PM
• Last activity: Jun 21, 2025, 10:03 AM
0
votes
1
answers
58
views
Simple package fails with "The transaction log for database 'X' is full due to 'ACTIVE_TRANSACTION'"
I am not a DBA, and our DBA has quit, so now I have to try to figure out DBA issues.
I am trying to create a job that deletes certain rows from a table and then loads a set of new rows into it from ODBC "Snowflake" for Athena. No truncation is involved, because I do not want to truncate the whole table.
The package fails in both VS and SSMS and it gives the following error:
[Execute SQL Task] Error: Executing the query "QUERY" failed with the following error: "The transaction log for database 'X' is full due to 'ACTIVE_TRANSACTION'.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.
Since the package has 2 tasks and it kept failing on the first one (the delete query), I thought maybe I needed to change the first task from running a query to executing a stored procedure; that did not work. I read up on the error, and some people said I might have to reboot the VM; that did not work either. I saw other solutions like shrinking the log files, but I also read that this could cause issues with the data, which I would like to avoid. I saw I could increase the size of the log file, but I don't know how to do that, nor whether it will mess up the data the way shrinking can. I also read that best practice is to find what is causing the log to fill up and fix the problem by optimizing it, but I have no remote idea how to even start doing that.
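One common way to keep a big delete from pinning the log as a single ACTIVE_TRANSACTION is to delete in small committed batches; a minimal sketch, with a hypothetical table and filter:
WHILE 1 = 1
BEGIN
    DELETE TOP (10000)
    FROM dbo.TargetTable
    WHERE LoadDate < '2025-01-01';

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to delete
END;
Each batch commits on its own, so log space can be reused between batches (after log backups, if the database is in FULL recovery).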
Any advice y'all can give?
Gianpiero Loli
(1 rep)
Jun 4, 2025, 06:30 PM
• Last activity: Jun 4, 2025, 10:04 PM
0
votes
1
answers
237
views
Actual used and free space in WAL segments
I am new to the PostgreSQL world. I have experience administering Microsoft SQL Server and Oracle databases.
In SQL Server, we are able to find how much space is actually used in the pre-allocated log file. I wonder if we can do the same thing in PostgreSQL?
How can we identify which WAL segments are actually full and which WAL segments are empty?
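A minimal sketch of the closest PostgreSQL equivalents (assuming version 10 or later, where the functions carry these names): WAL segments are fixed-size 16 MB files, and only the segment containing the current insert position is partially filled; everything behind it is fully written, and the pre-allocated files ahead of it are recycled, empty ones.
-- Current write position in the WAL, and the segment file it falls in:
SELECT pg_current_wal_lsn(), pg_walfile_name(pg_current_wal_lsn());

-- Bytes of WAL written between two LSNs (here: since an arbitrary reference):
SELECT pg_wal_lsn_diff(pg_current_wal_lsn(), '0/0');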
I am currently using PostgreSQL version 10.
Rauf Asadov
(1313 rep)
Jun 11, 2021, 08:44 AM
• Last activity: May 31, 2025, 10:03 PM
0
votes
2
answers
405
views
How to shrink log file for Azure SQL Database?
My Azure SQL Database transaction log file is about 1 TB in size, but only 1 GB is in use:
SELECT file_id, name, type_desc,
CAST(FILEPROPERTY(name, 'SpaceUsed') AS decimal(19,4)) * 8 / 1024. AS space_used_mb,
CAST(size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS decimal(19,4)) AS space_unused_mb,
CAST(size AS decimal(19,4)) * 8 / 1024. AS space_allocated_mb,
CAST(max_size AS decimal(19,4)) * 8 / 1024. AS max_size_mb
FROM sys.database_files;
Results:
I'm trying to cut it down to 50 GB in size, but neither of these seems to have any effect; the space_unused_mb and space_allocated_mb values remain the same:
DBCC SHRINKFILE (2, truncateonly);
DBCC SHRINKFILE (2, 50000);
Or am I barking up the wrong tree here? Does log file size count against storage cost?
Greg
(582 rep)
Oct 16, 2024, 09:53 PM
• Last activity: May 17, 2025, 04:06 AM
0
votes
1
answers
279
views
Cannot shrink the database because "secondaries have not moved past the point where the log was added"
I am a newbie to SQL Server. I got an error when trying to truncate the database:
>Database can not shrink until all secondaries have moved past the point where the log was added.
When I check the status of the secondary on the primary machine (Always On High Availability Dashboard), it has
>health: good and
>Synchronization state: Synchronizing.
But the status of the database on the secondary is NULL, not ONLINE. When I check the Always On High Availability dashboard on the secondary machine, its state is "No data available on secondary replicas".
**Question:** What should I do, and what is going on with the database?
(I want to shrink because there is no longer enough free space. I use E:\ for all database files; it has 421.5 GB used and 78.5 GB free.)
Here is how I recently checked the status with the query below, with the two results from the primary machine (pic 1) and the secondary machine (pic 2).
I have tried suspending, taking offline, and re-joining the secondary replica databases on the secondary machine (APEX-SQL2), but nothing has changed...
>Additionally, I ran the 3 queries below:
*Thank you for reading. Please help.*
SELECT d.name, r.replica_server_name, ars.role_desc, ars.operational_state_desc,
drs.synchronization_state_desc, ars.connected_state_desc,
drs.last_hardened_lsn, d.log_reuse_wait_desc
FROM sys.availability_databases_cluster AS adc
JOIN sys.databases AS d ON adc.group_database_id = d.group_database_id
JOIN sys.dm_hadr_database_replica_states AS drs ON d.database_id = drs.database_id
JOIN sys.availability_replicas AS r ON drs.replica_id = r.replica_id
JOIN sys.dm_hadr_availability_replica_states ars ON r.replica_id = ars.replica_id
Anh Nguyen
(1 rep)
Apr 10, 2023, 04:58 PM
• Last activity: May 17, 2025, 12:10 AM
2
votes
1
answers
4167
views
Innodb: When bulk inserting or updating - how to disable transactions?
When doing a mass insert like
INSERT INTO SELECT FROM
the speed benefit of doing that in chunks of 10k records can be enormous, in my tests a query that was running for 5 days took only 2 hours using the chunk method.
To make things worse: if you let such an operation run for a few hours and KILL the job (or crash) then it can take days or weeks until ROLLBACK is finished.
Rollback is significantly slower than insert, no optimizations here.
The same counts for doing mass UPDATEs.
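For readers unfamiliar with the chunk method described above, a minimal sketch (hypothetical src/dst tables keyed by an integer id):
DELIMITER //
CREATE PROCEDURE copy_in_chunks(IN max_id BIGINT)
BEGIN
    DECLARE last_id BIGINT DEFAULT 0;
    WHILE last_id < max_id DO
        -- Each chunk is its own short transaction, so killing
        -- the job rolls back at most one chunk.
        INSERT INTO dst SELECT * FROM src
        WHERE id > last_id AND id <= last_id + 10000;
        COMMIT;
        SET last_id = last_id + 10000;
    END WHILE;
END//
DELIMITER ;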
**My question:**
How can I completely disable the transaction log for such transactions?
I tested this: SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
But it showed no difference; when killing the job, transactions still start rolling back.
I really did my homework and tried to find anything about it, no luck.
P.S.
Of course I am aware that importing without transactions does not give a unique snapshot to rely on if anything is changed in the meantime.
John
(402 rep)
Jul 24, 2018, 09:31 PM
• Last activity: May 16, 2025, 11:01 AM
0
votes
1
answers
291
views
Hsqldb transaction log rollback cause
I'm fairly new to the database world, but I had to use an hsqldb-2.0.0 database for netflow collector software. As the software shows no output even though the client is collecting the flows, I investigated the database, and it seems that after committing some changes it rolls back to the previous state (the empty one). This is the SQL log:
2016-03-07 09:30:51.217 0 SET DATABASE TRANSACTION CONTROL LOCKS
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE DEFAULT ISOLATION LEVEL READ COMMITTED
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE TRANSACTION ROLLBACK ON CONFLICT TRUE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE TEXT TABLE DEFAULTS ''
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE SQL NAMES FALSE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE SQL REFERENCES FALSE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE SQL SIZE TRUE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE SQL TYPES FALSE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE SQL TDC DELETE TRUE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE SQL TDC UPDATE TRUE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE SQL TRANSLATE TTI TYPES TRUE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE SQL CONCAT NULLS TRUE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE SQL UNIQUE NULLS TRUE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE SQL CONVERT TRUNCATE TRUE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE SQL AVG SCALE 0
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE SQL DOUBLE NAN TRUE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET FILES WRITE DELAY 500 MILLIS
2016-03-07 09:30:51.217 0 SET FILES BACKUP INCREMENT TRUE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET FILES CACHE SIZE 10000
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET FILES CACHE ROWS 50000
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET FILES SCALE 32
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET FILES LOB SCALE 32
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET FILES DEFRAG 0
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET FILES NIO TRUE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET FILES NIO SIZE 256
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET FILES LOG TRUE
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET FILES LOG SIZE 50
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 CREATE USER SA PASSWORD DIGEST 'd41d8cd98f00b204e9800998ecf8427e'
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 ALTER USER SA SET LOCAL TRUE
2016-03-07 09:30:51.217 0 CREATE SCHEMA PUBLIC AUTHORIZATION DBA
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 ALTER SEQUENCE SYSTEM_LOBS.LOB_ID RESTART WITH 1
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 SET DATABASE DEFAULT INITIAL SCHEMA PUBLIC
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 GRANT USAGE ON DOMAIN INFORMATION_SCHEMA.SQL_IDENTIFIER TO PUBLIC
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 GRANT USAGE ON DOMAIN INFORMATION_SCHEMA.YES_OR_NO TO PUBLIC
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 GRANT USAGE ON DOMAIN INFORMATION_SCHEMA.TIME_STAMP TO PUBLIC
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 GRANT USAGE ON DOMAIN INFORMATION_SCHEMA.CARDINAL_NUMBER TO PUBLIC
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 GRANT USAGE ON DOMAIN INFORMATION_SCHEMA.CHARACTER_DATA TO PUBLIC
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.217 0 GRANT DBA TO SA
2016-03-07 09:30:51.217 0 COMMIT
2016-03-07 09:30:51.232 2 COMMIT
2016-03-07 09:30:51.232 2 ROLLBACK
2016-03-07 09:30:57.207 3 CREATE TABLE IpSegments ( IpSeg VARCHAR(15) NOT NULL,SegName VARCHAR(15) NOT NULL)
2016-03-07 09:30:57.207 3 COMMIT
2016-03-07 09:30:57.223 3 CREATE TABLE Raw_V1 ( RouterIP VARCHAR(15) NOT NULL, SysUptime BIGINT, Secs BIGINT, Nsecs BIGINT, SrcAddr VARCHAR(15) NOT NULL, DstAddr VARCHAR(15) NOT NULL, NextHop VARCHAR(15) NOT NULL, Input INT NOT NULL, Output INT NOT NULL, dPkts BIGINT, dOctets BIGINT, aFirst BIGINT, aLast BIGINT, SrcPort INT NOT NULL, DstPort INT NOT NULL, Prot INT NOT NULL, TOS INT NOT NULL, Flags INT, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.223 3 COMMIT
2016-03-07 09:30:57.238 3 CREATE TABLE Raw_V5 ( RouterIP VARCHAR(15) NOT NULL, SysUptime BIGINT, Secs BIGINT, Nsecs BIGINT, Flow_Sequence BIGINT NOT NULL, Engine_Type INT NOT NULL, Engine_ID INT NOT NULL, SrcAddr VARCHAR(15) NOT NULL, DstAddr VARCHAR(15) NOT NULL, NextHop VARCHAR(15) NOT NULL, Input INT NOT NULL, Output INT NOT NULL, dPkts BIGINT, dOctets BIGINT, aFirst BIGINT, aLast BIGINT, SrcPort INT NOT NULL, DstPort INT NOT NULL, Tcp_Flags INT NOT NULL, Prot INT NOT NULL, TOS INT NOT NULL, Src_As INT NOT NULL, Dst_As INT NOT NULL, Src_Mask INT NOT NULL, Dst_Mask INT NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.238 3 COMMIT
2016-03-07 09:30:57.238 3 CREATE TABLE Raw_V7 ( RouterIP VARCHAR(15) NOT NULL, SysUptime BIGINT, Secs BIGINT, Nsecs BIGINT, Flow_Sequence BIGINT NOT NULL, SrcAddr VARCHAR(15) NOT NULL, DstAddr VARCHAR(15) NOT NULL, NextHop VARCHAR(15) NOT NULL, Input INT NOT NULL, Output INT NOT NULL, dPkts BIGINT, dOctets BIGINT, aFirst BIGINT, aLast BIGINT, SrcPort INT NOT NULL, DstPort INT NOT NULL, Flags INT NOT NULL, Tcp_Flags INT NOT NULL, Prot INT NOT NULL, TOS INT NOT NULL, Src_As INT NOT NULL, Dst_As INT NOT NULL, Src_Mask INT NOT NULL, Dst_Mask INT NOT NULL, Flags1 INT NOT NULL, Router_Sc VARCHAR(15) NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.238 3 COMMIT
2016-03-07 09:30:57.238 3 CREATE TABLE Raw_V8_AS ( RouterIP VARCHAR(15) NOT NULL, SysUptime BIGINT, Secs BIGINT, Nsecs BIGINT, Flow_Sequence BIGINT NOT NULL, Engine_Type CHAR(1) NOT NULL, Engine_ID CHAR(1) NOT NULL, Flows BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, aFirst BIGINT NOT NULL, aLast BIGINT NOT NULL, Src_As INT NOT NULL, Dst_As INT NOT NULL, Input INT NOT NULL, Output INT NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.238 3 COMMIT
2016-03-07 09:30:57.254 3 CREATE TABLE Raw_V8_ProtoPort ( RouterIP VARCHAR(15) NOT NULL, SysUptime BIGINT, Secs BIGINT, Nsecs BIGINT, Flow_Sequence BIGINT NOT NULL, Engine_Type CHAR(1) NOT NULL, Engine_ID CHAR(1) NOT NULL, Flows BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, aFirst BIGINT NOT NULL, aLast BIGINT NOT NULL, Prot INT NOT NULL, SrcPort INT NOT NULL, DstPort INT NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.254 3 COMMIT
2016-03-07 09:30:57.254 3 CREATE TABLE Raw_V8_DstPrefix ( RouterIP VARCHAR(15) NOT NULL, SysUptime BIGINT, Secs BIGINT, Nsecs BIGINT, Flow_Sequence BIGINT NOT NULL, Engine_Type CHAR(1) NOT NULL, Engine_ID CHAR(1) NOT NULL, Flows BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, aFirst BIGINT NOT NULL, aLast BIGINT NOT NULL, Dst_Prefix VARCHAR(15) NOT NULL, Dst_Mask INT NOT NULL, Dst_As INT NOT NULL, Output INT NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.254 3 COMMIT
2016-03-07 09:30:57.254 3 CREATE TABLE Raw_V8_SrcPrefix ( RouterIP VARCHAR(15) NOT NULL, SysUptime BIGINT, Secs BIGINT, Nsecs BIGINT, Flow_Sequence BIGINT NOT NULL, Engine_Type CHAR(1) NOT NULL, Engine_ID CHAR(1) NOT NULL, Flows BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, aFirst BIGINT NOT NULL, aLast BIGINT NOT NULL, Src_Prefix VARCHAR(15) NOT NULL, Src_Mask INT NOT NULL, Src_As INT NOT NULL, Input INT NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.254 3 COMMIT
2016-03-07 09:30:57.254 3 CREATE TABLE Raw_V8_Prefix ( RouterIP VARCHAR(15) NOT NULL, SysUptime BIGINT, Secs BIGINT, Nsecs BIGINT, Flow_Sequence BIGINT NOT NULL, Engine_Type CHAR(1) NOT NULL, Engine_ID CHAR(1) NOT NULL, Flows BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, aFirst BIGINT NOT NULL, aLast BIGINT NOT NULL, Src_Prefix VARCHAR(15) NOT NULL, Dst_Prefix VARCHAR(15) NOT NULL, Src_Mask INT NOT NULL, Dst_Mask INT NOT NULL, Src_As INT NOT NULL, Dst_As INT NOT NULL, Input INT NOT NULL, Output INT NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.254 3 COMMIT
2016-03-07 09:30:57.254 3 CREATE TABLE Raw_V9 ( RouterIP VARCHAR(15) NOT NULL, SysUptime BIGINT, Secs BIGINT, PackageSequence BIGINT NOT NULL, SourceID BIGINT NOT NULL, SrcAddr VARCHAR(15) NULL, DstAddr VARCHAR(15) NULL, NextHop VARCHAR(15) NULL, Input INT NULL, Output INT NULL, dPkts BIGINT, dOctets BIGINT, aFirst BIGINT, aLast BIGINT, SrcPort INT NULL, DstPort INT NULL, Tcp_Flags INT NULL, Prot INT NULL, TOS INT NULL, Src_As INT NULL, Dst_As INT NULL, Src_Mask INT NULL, Dst_Mask INT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.254 3 COMMIT
2016-03-07 09:30:57.254 3 CREATE TABLE OptionsTable ( RouterIP VARCHAR(15) NOT NULL, SysUptime BIGINT, Secs BIGINT, PackageSequence BIGINT NOT NULL, SourceID BIGINT NOT NULL, IsScope VARCHAR(1) NOT NULL, Stamp VARCHAR(15) NOT NULL, TemplateID INT NOT NULL, OptionType INT NOT NULL, OptionValue BIGINT NOT NULL )
2016-03-07 09:30:57.254 3 COMMIT
2016-03-07 09:30:57.254 3 CREATE TABLE SrcAS ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, Src_As VARCHAR(64) NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.254 3 COMMIT
2016-03-07 09:30:57.254 3 CREATE TABLE DstAS ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, Dst_As VARCHAR(64) NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.254 3 COMMIT
2016-03-07 09:30:57.269 3 CREATE TABLE ASMatrix ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, Src_As VARCHAR(64) NOT NULL, Dst_As VARCHAR(64) NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.269 3 COMMIT
2016-03-07 09:30:57.269 3 CREATE TABLE SrcNode ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, SrcNode VARCHAR(15) NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.269 3 COMMIT
2016-03-07 09:30:57.269 3 CREATE TABLE DstNode ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, DstNode VARCHAR(15) NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.269 3 COMMIT
2016-03-07 09:30:57.269 3 CREATE TABLE HostMatrix ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, SrcNode VARCHAR(15) NOT NULL, DstNode VARCHAR(15) NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.269 3 COMMIT
2016-03-07 09:30:57.269 3 CREATE TABLE SrcInt ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, SrcInt INT NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.269 3 COMMIT
2016-03-07 09:30:57.269 3 CREATE TABLE DstInt ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, DstInt INT NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.269 3 COMMIT
2016-03-07 09:30:57.269 3 CREATE TABLE IntMatrix ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, SrcInt INT NOT NULL, DstInt INT NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.269 3 COMMIT
2016-03-07 09:30:57.269 3 CREATE TABLE SrcPrefix ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, SrcPrefix VARCHAR(15) NOT NULL, SrcMask INT NOT NULL, SrcAS INT NOT NULL, Input INT NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.269 3 COMMIT
2016-03-07 09:30:57.285 3 CREATE TABLE DstPrefix ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, DstPrefix VARCHAR(15) NOT NULL, DstMask INT NOT NULL, DstAS INT NOT NULL, Output INT NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.285 3 COMMIT
2016-03-07 09:30:57.285 3 CREATE TABLE PrefixMatrix ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, SrcPrefix VARCHAR(15) NOT NULL, SrcMask INT NOT NULL, SrcAS INT NOT NULL, Input INT NOT NULL, DstPrefix VARCHAR(15) NOT NULL, DstMask INT NOT NULL, DstAS INT NOT NULL, Output INT NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.285 3 COMMIT
2016-03-07 09:30:57.285 3 CREATE TABLE Protocol ( StartDate DATE NOT NULL, StartTime TIME NOT NULL, EndDate DATE NOT NULL, EndTime TIME NOT NULL, RouterIP VARCHAR(15) NOT NULL, Flows BIGINT NOT NULL, Missed BIGINT NOT NULL, dPkts BIGINT NOT NULL, dOctets BIGINT NOT NULL, Protocol VARCHAR(32) NOT NULL, Stamp VARCHAR(15) NULL )
2016-03-07 09:30:57.285 3 COMMIT
2016-03-07 09:30:57.285 3 ROLLBACK
2016-03-07 09:30:57.285 5 ROLLBACK
2016-03-07 09:30:57.285 8 ROLLBACK
2016-03-07 09:30:57.285 7 ROLLBACK
2016-03-07 09:30:57.285 6 ROLLBACK
2016-03-07 09:30:57.285 4 ROLLBACK
2016-03-07 09:30:57.285 9 ROLLBACK
2016-03-07 09:30:57.285 10 ROLLBACK
2016-03-07 09:30:57.285 11 ROLLBACK
2016-03-07 09:30:57.285 12 ROLLBACK
2016-03-07 09:31:02.371 13 select * from IpSegments
2016-03-07 09:31:02.371 13 COMMIT
2016-03-07 09:32:02.026 14 INSERT INTO Raw_V9 ( RouterIP, SysUptime, Secs, PackageSequence, SourceID, SrcAddr, DstAddr, NextHop, Input, Output, dPkts, dOctets, aFirst, aLast, SrcPort, DstPort, Tcp_Flags, Prot, TOS, Src_As, Dst_As, Src_Mask, Dst_Mask, Stamp ) VALUES( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) ('192.168.10.23',888693100,1457339958,30207,0,'178.182.11.16','158.47.120.3','192.168.20.12',1,13,7,1981,888621984,888681988,1660,2405,16,17,0,0,0,24,24,'20160307093202')
2016-03-07 09:32:02.026 14 COMMIT
2016-03-07 09:32:02.026 14 INSERT INTO Raw_V9 ( RouterIP, SysUptime, Secs, PackageSequence, SourceID, SrcAddr, DstAddr, NextHop, Input, Output, dPkts, dOctets, aFirst, aLast, SrcPort, DstPort, Tcp_Flags, Prot, TOS, Src_As, Dst_As, Src_Mask, Dst_Mask, Stamp ) VALUES( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) ('192.168.10.23',888693100,1457339958,30207,0,'178.182.11.4','192.168.11.4','192.168.11.4',1,13,2,110,888669256,888669260,49642,2404,24,6,0,0,0,24,24,'20160307093202')
2016-03-07 09:32:02.026 14 COMMIT
2016-03-07 09:32:02.026 14 INSERT INTO Raw_V9 ( RouterIP, SysUptime, Secs, PackageSequence, SourceID, SrcAddr, DstAddr, NextHop, Input, Output, dPkts, dOctets, aFirst, aLast, SrcPort, DstPort, Tcp_Flags, Prot, TOS, Src_As, Dst_As, Src_Mask, Dst_Mask, Stamp ) VALUES( ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?) ('192.168.10.23',888693100,1457339958,30207,0,'192.168.11.4','178.182.11.4','178.182.11.4',13,1,1,58,888669260,888669260,2404,49642,24,6,0,0,0,24,24,'20160307093202')
2016-03-07 09:32:02.026 14 COMMIT
2016-03-07 09:32:32.165 13 ROLLBACK
2016-03-07 09:32:32.181 16 ROLLBACK
2016-03-07 09:32:32.197 14 ROLLBACK
2016-03-07 09:32:32.197 22 ROLLBACK
2016-03-07 09:32:32.197 20 ROLLBACK
2016-03-07 09:32:32.197 15 ROLLBACK
2016-03-07 09:32:32.197 17 ROLLBACK
2016-03-07 09:32:32.197 19 ROLLBACK
2016-03-07 09:32:32.197 21 ROLLBACK
2016-03-07 09:32:32.197 18 ROLLBACK
What could be the cause of this behaviour?
Feel free to ask for more info, as I'm new and have surely omitted some important details.
huginnmuninn
(9 rep)
Mar 7, 2016, 08:59 AM
• Last activity: May 10, 2025, 08:07 AM
0
votes
3
answers
1201
views
Is there an InnoDB max transaction size and what happens if it is reached?
I would like to know if there is a max transaction size for InnoDB. Is it perhaps innodb_log_file_size * innodb_log_files_in_group + innodb_log_buffer_size? And would this max size be shared between all ongoing transactions?
What happens when this max size is reached? The transaction is aborted?
I ask this question because we are having out-of-memory issues with our MySQL server. We're running out of ideas to understand the problem, and we wonder if it could be caused by transactions being too large.
FBB
(113 rep)
Sep 28, 2021, 12:45 PM
• Last activity: May 7, 2025, 07:02 AM
2
votes
2
answers
467
views
Corruption in mariadb binlog
I have an installation with MariaDB version 10.0.21.
I need to back up the binlog file every 5 minutes to TSM, but as the binlog is being written every second, I am afraid of the file getting corrupted when copied.
Is it possible that this happens?
If the binlog gets corrupted (the end of it), will it corrupt the entire file, or can I discard the last lines?
Any other solution proposed? We already have replication with master/slave, but for security and audit reasons we need to keep a copy of the binlog.
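One approach that sidesteps copying a file mid-write (my sketch, not something from the post): rotate first, then copy only closed files.
FLUSH BINARY LOGS;  -- closes the current binlog and starts a new one
A closed binlog is never written again, so everything except the last file listed by SHOW BINARY LOGS can be copied safely; a copied file can also be sanity-checked by piping it through mysqlbinlog, which fails with an error on a truncated file.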
Regards
ctutte
(69 rep)
Nov 13, 2015, 08:09 PM
• Last activity: Apr 26, 2025, 04:04 AM
2
votes
1
answers
2325
views
How to make a Db2 database fully robust against transaction log full?
Db2 v11.5.7.0 on Linux, and for some reason the database hits the transaction log full error a few times per year.
My solution so far was to increase logsecond, logprimary or logfilsiz, and I have now reached the foolish level where day-to-day log file consumption is just 0.1%. But a few times per year the database still hits the transaction log full error.
I investigated this problem in detail, and below are step-by-step details of how to reproduce it. I created three cases. The first and second cases work as expected, but case 3 still runs into the transaction log full problem.
db2 "create database test1"
mkdir /home/db2inst1/db2archlog
db2 "update db cfg for test1 using logarchmeth1 DISK:/home/db2inst1/db2archlog"
db2 "backup database test1 to /dev/null"
db2 "update db cfg for test1 using logfilsiz 100"
db2 "update db cfg for test1 using logprimary 3"
db2 "update db cfg for test1 using logsecond 2"
db2 "update db cfg for test1 using num_log_span 4"
db2 "update db cfg for test1 using max_log 95"
Above logfilsiz, logprimary and logsecond are intentionally set to very small values to quickly demonstrate transaction log full problems.
**CASE 1**
db2 connect to test1
db2 "create table admin.tab1 (col1 int)"
db2 +c "INSERT INTO admin.tab1
WITH temp1 (s1) AS
(
VALUES (0)
UNION ALL
SELECT s1 + 1
FROM temp1
WHERE s1 < 1000000
)
SELECT s1 FROM temp1"
After a few seconds an error is returned:
SQL1224N The database manager is not able to accept new requests, has
terminated all requests in progress, or has terminated the specified request
because of an error or a forced interrupt. SQLSTATE=55032
Analyzing db2diag log:
ADM1542W Application "db2bp" with application id
"*LOCAL.db2inst1.220512112327" executing under authentication id
"DB2INST1" will be forced off of the
database for violating database configuration parameter MAX_LOG
(current value "95"). The unit of work will be rolled back.
The transaction was trying to consume the whole transaction log, so the MAX_LOG limit of 95% was hit and the application was forced off the database. Works as expected. Transaction log full prevented successfully.
**CASE 2**
In first Db2 session:
db2 connect to test1
db2 +c "insert into admin.tab1 values (1)"
In second Db2 session:
db2 connect to test1
db2 "BEGIN
DECLARE MESSAGE VARCHAR(100);
DECLARE STATUS INTEGER;
DECLARE I INTEGER;
SET i=1;
WHILE i < 50000 DO
INSERT INTO ADMIN.TAB1 VALUES (i);
COMMIT;
SET i = i + 1;
END WHILE;
END"
The above SQL inserts one record at a time in a loop and commits each record. This can take a minute or two. The inserts complete successfully.
Now back to first Db2 session and execute:
db2 commit
and error is displayed:
SQL1224N The database manager is not able to accept new requests, has
terminated all requests in progress, or has terminated the specified request
because of an error or a forced interrupt. SQLSTATE=55032
Analyzing db2diag log:
ADM1541W Application "db2bp" with application handle "0-216" and
application id "*LOCAL.db2inst1.220512112650" executing under
authentication id "DB2INST1" has been forced off of the database for
violating database configuration parameter NUM_LOG_SPAN (current
value "4"). The unit of work will be rolled back.
Because the application inserting rows one by one and committing each row kept advancing the active log, the oldest open UOW came to span more than the 4 log files allowed by NUM_LOG_SPAN; the database action was triggered and forced off the oldest uncommitted transaction, which is the one in the first Db2 session. Transaction log full prevented successfully.
**CASE 3**
In Db2 first session:
db2 connect to test1
db2 +c "INSERT INTO admin.tab1
WITH temp1 (s1) AS
(
VALUES (0)
UNION ALL
SELECT s1 + 1
FROM temp1
WHERE s1 < 5000
)
SELECT s1 FROM temp1"
Insert executes successfully.
Open second Db2 session and execute the same insert. Insert executes successfully.
Open third Db2 session and execute the same insert and it returns:
SQL0964C The transaction log for the database is full. SQLSTATE=57011
Now, because of many small uncommitted transactions, none of which hits the MAX_LOG or NUM_LOG_SPAN limit, the transaction log can still get saturated in a very short period of time, like under one minute in the sample above.
Analyzing db2diag log:
MESSAGE : ZRC=0x85100009=-2062548983=SQLP_NOSPACE "Log File has reached its
saturation point" DIA8309C Log file was full.
I know I can increase LOGFILSIZ, LOGPRIMARY and LOGSECOND; for one of the databases I already did this pretty aggressively, to the point where day-to-day applications consume at most 0.1% of the transaction log, and a few times a year some combination of application executions still gets into the transaction log full situation. I suspect there is some runaway application opening several hundred connections and not committing them. This event is so rare that it is very difficult to capture. We tried tons of application tests in a test environment and the problem never appeared.
I know the only final solution will be to track down the runaway application and fix the application bug. But some new application with the same problem can always appear. From the database perspective I would like to implement some mechanism to prevent the transaction log full situation independently of the application layer.
I know I can write a script that runs in a loop, monitors transaction log used space (e.g. using the MON_GET_TRANSACTION_LOG table function), and forces applications off the database when the transaction log is full or nearly full. But is there any simpler way, without scripts? Like some Db2 parameter or combination of parameters to prevent the "case 3" transaction log saturation?
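For completeness, the monitoring the question mentions would revolve around something like this (a sketch; the forcing logic itself would still have to be scripted):
SELECT MEMBER,
       TOTAL_LOG_USED,             -- bytes of log space currently in use
       TOTAL_LOG_AVAILABLE,        -- bytes of log space still free
       APPLID_HOLDING_OLDEST_XACT  -- application handle pinning the log
FROM TABLE(MON_GET_TRANSACTION_LOG(-2)) AS T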
Regards
folow
(523 rep)
May 12, 2022, 12:25 PM
• Last activity: Apr 7, 2025, 04:03 PM
0
votes
1
answers
55
views
Query to find the latest transaction log backup for a server involved in an availability group
I got a query to find the latest transaction log backup.
On this particular server it is not working for some reason.
Here is the query:
SELECT
@@Servername AS [Server_Name],
B.name AS Database_Name,
ISNULL(STR(ABS(DATEDIFF(day, SYSDATETIME(), MAX(A.backup_finish_date)))), 'NEVER') AS DaysSinceLastBackup,
ISNULL(CONVERT(CHAR(11), MAX(A.backup_finish_date), 113) + ' ' + CONVERT(VARCHAR(8), MAX(A.backup_finish_date), 108), 'NEVER') AS LastBackupDate,
BackupSize_GB = REPLACE(CONVERT(VARCHAR(50), CAST(CAST(COALESCE(MAX(A.compressed_backup_size), MAX(A.backup_size), 0)/1024.0/1024.0/1024.0 AS NUMERIC(18,2)) AS MONEY),1), '.00',''),
BackupSize_MB = REPLACE(CONVERT(VARCHAR(50), CAST(CAST(COALESCE(MAX(A.compressed_backup_size), MAX(A.backup_size), 0)/1024.0/1024.0 AS NUMERIC(18,2)) AS MONEY),1), '.00',''),
[Last Backup Duration (sec)] = DATEDIFF(s, MAX(A.backup_start_date), MAX(A.backup_finish_date)),
[AVG Backup Duration (sec)] = AVG(CAST(DATEDIFF(s, A.backup_start_date, A.backup_finish_date) AS INT)),
[Longest Backup Duration (sec)] = MAX(CAST(DATEDIFF(s, A.backup_start_date, A.backup_finish_date) AS INT)),
[Shortest Backup Duration (sec)] = MIN(CAST(DATEDIFF(s, A.backup_start_date, A.backup_finish_date) AS INT)),
A.type,
media_set_id = MAX(A.media_set_id),
B.create_date,
B.recovery_model_desc,
B.state_desc,
B.is_read_only,
B.database_id,
Backup_Started = MAX(A.backup_start_date),
Backup_Finished = MAX(A.backup_finish_date)
FROM sys.databases B WITH(NOLOCK)
LEFT OUTER JOIN msdb.dbo.backupset A WITH(NOLOCK)
ON A.database_name = B.name
AND A.type = 'L' -- transaction log backups only
WHERE 1=1
--B.Name = 'My_database'
GROUP BY
B.Name,
B.database_id,
B.create_date,
B.recovery_model_desc,
B.state_desc,
B.is_read_only,
A.type
Marcello Miorelli
(17274 rep)
Mar 26, 2025, 11:48 AM
• Last activity: Mar 27, 2025, 01:01 PM
2
votes
2
answers
491
views
Trying to read a log file encrypted by TDE
I have a full chain of full backups and log backups. I am looking to investigate exactly what transaction was done by a certain user in a given period of time.
The database (and thus all backups) is encrypted by TDE. Tools such as ApexSQL Log do not work on a TDE-encrypted data source. Is there any way I can analyze the log file to find out the information I want?
I can run fn_dump_dblog, which gives me some information; however, I cannot retrieve the exact information on what was changed, such as which column was changed to what value.
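For the online portion of the log, the undocumented fn_dblog function exposes the same kind of rows, and TDE should be no obstacle there because the engine decrypts pages as it reads them; a minimal sketch:
SELECT [Current LSN], Operation, Context, [Transaction ID], [Transaction Name], AllocUnitName
FROM fn_dblog(NULL, NULL);  -- NULL, NULL = the whole active log
The raw before/after row images live in the [RowLog Contents n] columns, but decoding those by hand is laborious, which is exactly the gap the third-party tools fill.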
elty123
(431 rep)
Jul 6, 2018, 06:07 PM
• Last activity: Mar 17, 2025, 02:04 PM
1
votes
2
answers
387
views
Log Sql Server Transactions (Queries + username)
I need to see which user has made what query with a lookback of ~90(?) days.
I have activated change tracking on the database with 90 days.
I have found several solutions:
SELECT
deqs.*,
dest.*,
deqs.last_execution_time AS [Time],
dest.TEXT AS [Query]
FROM
sys.dm_exec_query_stats AS deqs
CROSS APPLY
sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY
deqs.last_execution_time DESC
But this query doesn't give me any information about the user who made the query.
Then I got this query
SELECT
*
FROM
FN_TRACE_GETTABLE('E:\MSSQLServer\MSSQL12.DBNAME\MSSQL\Log\log_29.trc', default)
WHERE
TextData IS NOT NULL
ORDER BY
TransactionID DESC
This gives me the username but no statements like DELETE/UPDATE etc.
I read about SQL Profiler, but I can't copy/read the _log.mdf files because they are always in use.
I need a tip on the right way to do this.. :/
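Going forward (nothing can recover the past 90 days retroactively), SQL Server Audit captures statement text together with the login; a minimal sketch with hypothetical names and paths:
USE master;
CREATE SERVER AUDIT [QueryAudit] TO FILE (FILEPATH = N'E:\Audit\');
ALTER SERVER AUDIT [QueryAudit] WITH (STATE = ON);

USE MyDb;
CREATE DATABASE AUDIT SPECIFICATION [TrackStatements]
    FOR SERVER AUDIT [QueryAudit]
    ADD (SELECT, INSERT, UPDATE, DELETE ON DATABASE::[MyDb] BY [public])
    WITH (STATE = ON);

-- Read the captured events (statement text plus login) back:
SELECT event_time, server_principal_name, statement
FROM sys.fn_get_audit_file(N'E:\Audit\*.sqlaudit', DEFAULT, DEFAULT);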
user170893
(11 rep)
Jan 24, 2019, 04:48 PM
• Last activity: Mar 15, 2025, 04:09 AM
1
votes
0
answers
47
views
SQL Server Transaction Log to find SQL query written in SSMS query editor
In one database table, one column's value was overwritten with the same value for the entire table, as if someone had written an update statement without a where clause. We are trying to find out whether this happened because of a bug in the application or because someone wrote an update statement in the SSMS query editor without a where clause.
Is it possible to find out this information from the .ldf file? Does this file capture every SQL query that is written in the SSMS query editor or from any other application?
Tipu
(19 rep)
Mar 10, 2025, 03:59 PM
• Last activity: Mar 10, 2025, 06:54 PM