Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
0
answers
3
views
Slow DELETE due to FK trigger on partitioned child table
I'm working with PostgreSQL 15 and experimenting with table partitioning to improve performance. **Original setup: ** I have two tables: `tasks` (parent) with ~65M rows and `records` (child) with ~200M rows There is a foreign key from records.task_id → tasks.id When executing a `DELETE` statement on...
I'm working with PostgreSQL 15 and experimenting with table partitioning to improve performance.
**Original setup:**
I have two tables: `tasks` (parent) with ~65M rows and `records` (child) with ~200M rows. There is a foreign key from records.task_id → tasks.id.
When executing a `DELETE` statement on `tasks`, it takes approximately 12 seconds to complete.
explain analyze:
my_db=*> EXPLAIN (ANALYZE, BUFFERS, TIMING, VERBOSE) DELETE FROM tasks WHERE project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e';
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------------
Delete on public.tasks (cost=5628.30..1402047.95 rows=0 width=0) (actual time=1323.519..1323.520 rows=0 loops=1)
Buffers: shared hit=1160307
-> Bitmap Heap Scan on public.tasks (cost=5628.30..1402047.95 rows=502675 width=6) (actual time=45.328..373.039 rows=513093 loops=1)
Output: ctid
Recheck Cond: (tasks.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
Heap Blocks: exact=133678
Buffers: shared hit=134121
-> Bitmap Index Scan on tasks__project_id (cost=0.00..5502.63 rows=502675 width=0) (actual time=24.057..24.058 rows=513093 loops=1)
Index Cond: (tasks.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
Buffers: shared hit=443
Query Identifier: -8361055271394889220
Planning Time: 0.069 ms
Trigger RI_ConstraintTrigger_a_991596545 for constraint records_task_id_dfa43a78_fk_tasks: time=11377.093 calls=513093
Execution Time: 12742.066 ms
(14 rows)
I decided to partition the `records` table based on project_id (list) by doing the following:
1. Create new table `tasks_new` and copy the content of `tasks`
2. Create a new partitioned `records` table PARTITION BY LIST (project_id)
3. Change the PRIMARY KEY to be based on (project_id, id)
4. Add foreign key constraint to `tasks_new` based on (project_id, task_id)
5. Generate 800 tables and populate the data from `records`
This works as expected; however, when running and comparing the `DELETE` statement on `tasks_new`, I see it is slower than the original (non-partitioned) setup.
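To make steps 1-5 concrete, this is roughly the shape of the new layout (a minimal sketch: only the key columns are shown, the partitioned table name is guessed from the trigger name in the plan below, and everything else is illustrative rather than the actual DDL):
-- Parent copy with the composite primary key from step 3
CREATE TABLE tasks_new (
    id         bigint NOT NULL,
    project_id uuid   NOT NULL,
    PRIMARY KEY (project_id, id)
);

-- Partitioned child table (steps 2-4), list-partitioned by project_id,
-- with the composite foreign key back to tasks_new
CREATE TABLE records_parti (
    id         bigint NOT NULL,
    project_id uuid   NOT NULL,
    task_id    bigint NOT NULL,
    PRIMARY KEY (project_id, id),
    FOREIGN KEY (project_id, task_id) REFERENCES tasks_new (project_id, id)
) PARTITION BY LIST (project_id);

-- One of the ~800 partitions from step 5
CREATE TABLE records_parti_p1 PARTITION OF records_parti
    FOR VALUES IN ('3ae0f5c0-7600-4d4e-9f07-f8c800f2223e');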
explain analyze:
my_db=*> EXPLAIN (ANALYZE, BUFFERS, TIMING, VERBOSE) DELETE FROM tasks_new WHERE project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e';
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------
Delete on public.tasks_new (cost=15925.46..1400465.79 rows=0 width=0) (actual time=1352.341..1352.342 rows=0 loops=1)
Buffers: shared hit=1162960
-> Bitmap Heap Scan on public.tasks_new (cost=15925.46..1400465.79 rows=497147 width=6) (actual time=62.350..390.696 rows=513093 loops=1)
Output: ctid
Recheck Cond: (tasks_new.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
Heap Blocks: exact=133678
Buffers: shared hit=136774
-> Bitmap Index Scan on tasks_new_pkey (cost=0.00..15801.17 rows=497147 width=0) (actual time=41.590..41.590 rows=513093 loops=1)
Index Cond: (tasks_new.project_id = '3ae0f5c0-7600-4d4e-9f07-f8c800f2223e'::uuid)
Buffers: shared hit=3096
Query Identifier: -3093644494149541820
Planning Time: 0.066 ms
Trigger RI_ConstraintTrigger_a_991528806 for constraint records_parti_task_id_dfa43a78_fk_tasks: time=28558.369 calls=513093
Execution Time: 29965.025 ms
(14 rows)
I saw that most of the time is spent on the foreign key trigger (to see it, I intentionally ran `set constraints all immediate;`), and it is SLOWER than the test without the partitions. `RI_ConstraintTrigger_a_991596545` fires `RI_FKey_noaction_del` in AFTER DELETE on table `tasks_new`. However, I don't know how `RI_FKey_noaction_del` is implemented, and I suspect it doesn't work well with partitioned tables, or perhaps runs its scan across all partitions together (which would be inefficient).
Please help me understand better how it works and how I can improve it.
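For context on where the time goes: the NO ACTION trigger on the referenced table has to probe the referencing table once per deleted row, and with the composite key that probe is roughly of the following shape (a hedged sketch of the internally generated check, not the exact statement; $1/$2 stand for the deleted row's key values). The probe can only be index-driven if the referencing columns are indexed, and PostgreSQL does not create such an index automatically; declared on the partitioned table, it cascades to every partition. Names follow the sketch above.
-- Roughly what RI_FKey_noaction_del has to answer for each deleted tasks_new row
SELECT 1
FROM records_parti x
WHERE x.project_id = $1 AND x.task_id = $2
FOR KEY SHARE OF x;

-- A supporting index on the referencing side (one definition, propagated to all partitions)
CREATE INDEX records_parti_project_task_idx ON records_parti (project_id, task_id);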
Cowabunga
(145 rep)
Aug 7, 2025, 10:36 AM
0
votes
3
answers
3616
views
Failure to initialise MySQL Server 5.7 due to mysqlx_port=0.0
I'm migrating a legacy system to a more recent server and I am having issues installing MySQL Server 5.7. When I try I get the error `unknown variable mysqlx_port=0.0`. I've tried updating the `my.ini` file to comment out this variable, or to set it to another value (3306) but whenever the configura...
I'm migrating a legacy system to a more recent server and I am having issues installing MySQL Server 5.7. When I try, I get the error `unknown variable mysqlx_port=0.0`. I've tried updating the `my.ini` file to comment out this variable, or to set it to another value (3306), but whenever the configuration is run the file is regenerated with `mysqlx_port=0.0`.
Has anyone run into this issue, or does anyone have any ideas to try? My googling isn't giving me much joy.
Thanks!
Full init logs:
configuration step: Writing configuration file
Saving my.ini configuration file...
Saved my.ini configuration file.
Ended configuration step: Writing configuration file
Beginning configuration step: Updating Windows Firewall rules
Adding a Windows Firewall rule for MySQL57 on port 3306.
Attempting to add a Windows Firewall rule with command: netsh.exe advfirewall firewall add rule name="Port 3306" protocol=TCP localport=3306 dir=in action=allow
Ok.
Successfully added the Windows Firewall rule.
Ended configuration step: Updating Windows Firewall rules
Beginning configuration step: Adjusting Windows service
Attempting to grant the required filesystem permissions to the 'NT AUTHORITY\NetworkService' account.
Granted permissions to the data directory.
Adding new service
New service added
Ended configuration step: Adjusting Windows service
Beginning configuration step: Initializing database (may take a long time)
Deleting the data directory from a previous (failed) configuration...
Attempting to run MySQL Server with --initialize-insecure option...
Starting process for MySQL Server 5.7.44...
Starting process with command: C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqld.exe --defaults-file="C:\ProgramData\MySQL\MySQL Server 5.7\my.ini" --console --initialize-insecure=on --lower-case-table-names=1...
TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
InnoDB: New log files created, LSN=45790
InnoDB: Creating foreign key constraint system tables.
unknown variable 'mysqlx_port=0.0'
Aborting
Process for mysqld, with ID 3212, was run successfully and exited with code 1.
Failed to start process for MySQL Server 5.7.44.
Database initialization failed.
Ended configuration step: Initializing database (may take a long time)
Harold
(101 rep)
Jun 11, 2024, 12:40 PM
• Last activity: Aug 7, 2025, 10:07 AM
0
votes
1
answers
140
views
Huge Transaction Log File - Full Database backup, but no Transaction log backups
I hope someone can help. Please be gentle, this type of admin is not what I usually do. I have inherited a database which is set up to do a FULL database backup at 02:00 every day. The transaction log is huge (almost 1TB) and has never had any backups (hence it is growing). The database is 172GB. Th...
I hope someone can help. Please be gentle, this type of admin is not what I usually do.
I have inherited a database which is set up to do a FULL database backup at 02:00 every day.
The transaction log is huge (almost 1TB) and has never had any backups (hence it is growing). The database is 172GB. The Logs and database are on different drives, as are the backups.
We would like to put a new maintenance plan in place with 1 FULL backup every week, 4 differential backups each day, and transaction log backups every 15 minutes.
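For concreteness, that schedule corresponds roughly to statements like these (a sketch only; the database name, paths and WITH options are illustrative):
BACKUP DATABASE MyDb TO DISK = N'R:\Backups\MyDb_full.bak' WITH INIT;           -- weekly FULL
BACKUP DATABASE MyDb TO DISK = N'R:\Backups\MyDb_diff.bak' WITH DIFFERENTIAL;   -- 4 times per day
BACKUP LOG MyDb TO DISK = N'R:\Backups\MyDb_log.trn';                           -- every 15 minutes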
The size of the transaction log is worrying me. We don't have a drive with that space in order to back it up.
Can I set the backup type to Simple, delete the transaction log, change back to Full, then set up the new jobs to do the differential and log file backups? Or will that break something? Is there a proper / better way to deal with this transaction log - I'm worried we will run out of space soon on the drive.
Very happy to provide more info if needed. I wasn't sure what would be pertinent at this point.
Thanks in advance.
Jubs666
(1 rep)
Jul 24, 2023, 02:10 PM
• Last activity: Aug 7, 2025, 09:01 AM
0
votes
2
answers
2508
views
Cloning the database to another server with raw data files
I'm trying to clone the database of **server 1** to **server 2** with raw data files. I'm using Linux 7 and Oracle 11g R2. I have the instance running with same SID in **Server 2** but file structure is different. The database is about 85 GB. I have got the data files, control file, redolog files, s...
I'm trying to clone the database of **server 1** to **server 2** with raw data files. I'm using Linux 7 and Oracle 11g R2. I have the instance running with same SID in **Server 2** but file structure is different. The database is about 85 GB.
I have got the data files, control file, redo log files, spfile and init.ora. Can someone explain to me how to clone the database this way?
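In case it helps frame the question: my understanding of a raw-file (cold) clone is that, after copying the closed files and editing the copied init.ora for the new paths, the copied control file still points at the old locations, so the files are renamed inside the mounted instance. A hedged sketch of that step (paths are illustrative, and this assumes a consistent copy taken after a clean shutdown):
STARTUP MOUNT;
ALTER DATABASE RENAME FILE '/u01/oradata/ORCL/system01.dbf' TO '/u02/oradata/ORCL/system01.dbf';
ALTER DATABASE RENAME FILE '/u01/oradata/ORCL/redo01.log'   TO '/u02/oradata/ORCL/redo01.log';
-- ... one RENAME FILE per datafile and redo log member ...
ALTER DATABASE OPEN;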
MM_Julia
(1 rep)
Feb 5, 2018, 04:42 AM
• Last activity: Aug 7, 2025, 08:02 AM
0
votes
1
answers
588
views
Why doesn't the binlog log the DML statement?
two method compared 1. use test; create table t (a int); 2. create table test.t1(a int); my result is the first one can log the binlog but the second one can't not use test;create table t(a int); mysqlbinlog -v mysql-bin.000005|grep create use `test`/*!*/; SET TIMESTAMP=1490090893/*!*/; create table...
Two methods compared:
1. use test; create table t (a int);
2. create table test.t1(a int);
My result is that the first one shows up in the binlog but the second one does not.
use test;create table t(a int);
mysqlbinlog -v mysql-bin.000005|grep create
use `test`/*!*/;
SET TIMESTAMP=1490090893/*!*/;
create table t(a int)
/*!*/;
DELIMITER ;
But when I do not use "use", the DML cannot be found in the binlog. Can you tell me why?
create table test.t(a int);
mysqlbinlog -v mysql-bin.000005|grep create test.t
LawrenceLi
(321 rep)
Mar 21, 2017, 10:33 AM
• Last activity: Aug 7, 2025, 07:08 AM
2
votes
1
answers
2325
views
How to make a Db2 database fully robust against transaction log full?
Db2 v11.5.7.0 on Linux and for some reason few times per year database hits the transaction log full error. My current solution was to increase logsecond, logprimary or logfilsiz and now I hit to foolish level when log file consumed is just 0.1% for whole day to day. But for some reason database few...
Db2 v11.5.7.0 on Linux, and for some reason the database hits the transaction log full error a few times per year.
My solution so far has been to increase logsecond, logprimary or logfilsiz, and by now I have pushed this to a foolish level where day-to-day log consumption is just 0.1%. But for some reason the database still hits the transaction log full error a few times per year.
I investigated this problem in detail and below are step-by-step details of how to reproduce it. I created three cases. The first and second cases work as expected, but case 3 still runs into the transaction log full problem.
db2 "create database test1"
mkdir /home/db2inst1/db2archlog
db2 "update db cfg for test1 using logarchmeth1 DISK:/home/db2inst1/db2archlog"
db2 "backup database test1 to /dev/null"
db2 "update db cfg for test1 using logfilsiz 100"
db2 "update db cfg for test1 using logprimary 3"
db2 "update db cfg for test1 using logsecond 2"
db2 "update db cfg for test1 using num_log_span 4"
db2 "update db cfg for test1 using max_log 95"
Above logfilsiz, logprimary and logsecond are intentionally set to very small values to quickly demonstrate transaction log full problems.
**CASE 1**
db2 connect to test1
db2 "create table admin.tab1 (col1 int)"
db2 +c "INSERT INTO admin.tab1
WITH temp1 (s1) AS
(
VALUES (0)
UNION ALL
SELECT s1 + 1
FROM temp1
WHERE s1 < 1000000
)
SELECT s1 FROM temp1"
After a few seconds an error is returned:
SQL1224N The database manager is not able to accept new requests, has
terminated all requests in progress, or has terminated the specified request
because of an error or a forced interrupt. SQLSTATE=55032
Analyzing db2diag log:
ADM1542W Application "db2bp" with application handle
"*LOCAL.db2inst1.220512112327" and application id "DB2INST1"
executing under authentication id "95" will be forced off of the
database for violating database configuration parameter MAX_LOG
(current value ""). The unit of work will be rolled back.
The transaction was trying to consume the whole transaction log, so it hit the MAX_LOG limit of 95% and the application was forced off the database. Works as expected: transaction log full prevented successfully.
**CASE 2**
In first Db2 session:
db2 connect to test1
db2 +c "insert into admin.tab1 values (1)"
In second Db2 session:
db2 connect to test1
db2 "BEGIN
DECLARE MESSAGE VARCHAR(100);
DECLARE STATUS INTEGER;
DECLARE I INTEGER;
SET i=1;
WHILE i < 50000 DO
INSERT INTO ADMIN.TAB1 VALUES (i);
COMMIT;
SET i = i + 1;
END WHILE;
END"
The above SQL inserts records one by one in a loop and commits each record. This can take a minute or two. The inserts complete successfully.
Now back to first Db2 session and execute:
db2 commit
and error is displayed:
SQL1224N The database manager is not able to accept new requests, has
terminated all requests in progress, or has terminated the specified request
because of an error or a forced interrupt. SQLSTATE=55032
Analyzing db2diag log:
ADM1541W Application "db2bp" with application handle "0-216" and
application id "*LOCAL.db2inst1.220512112650" executing under
authentication id "DB2INST1" has been forced off of the database for
violating database configuration parameter NUM_LOG_SPAN (current
value "4"). The unit of work will be rolled back.
Because the row-by-row inserts kept committing and advancing the log, the old uncommitted unit of work in the first session ended up spanning more than the 4 log files allowed by NUM_LOG_SPAN, so the database forced off the oldest uncommitted transaction, which is the one in the first Db2 session. Transaction log full prevented successfully.
**CASE 3**
In Db2 first session:
db2 connect to test1
db2 +c "INSERT INTO admin.tab1
WITH temp1 (s1) AS
(
VALUES (0)
UNION ALL
SELECT s1 + 1
FROM temp1
WHERE s1 < 5000
)
SELECT s1 FROM temp1"
Insert executes successfully.
Open second Db2 session and execute the same insert. Insert executes successfully.
Open third Db2 session and execute the same insert and it returns:
SQL0964C The transaction log for the database is full. SQLSTATE=57011
Now, because of many small uncommitted transactions, none of which hits the MAX_LOG or NUM_LOG_SPAN limit, the transaction log can still get saturated in a very short period of time; in the above sample, under one minute.
Analyzing db2diag log:
MESSAGE : ZRC=0x85100009=-2062548983=SQLP_NOSPACE "Log File has reached its
saturation point" DIA8309C Log file was full.
I know I can increase LOGFILSIZ, LOGPRIMARY and LOGSECOND; for one of the databases I have already done this quite aggressively, to the point where day-to-day applications consume at most 0.1% of the transaction log, yet a few times a year some combination of application activity still gets into a transaction log full situation. I suspect there is some runaway application opening several hundred connections and not committing them. This event is so rare that it is very difficult to capture. We ran tons of application tests in a test environment and the problem never appeared.
I know the only final solution will be to track down the runaway application and fix the application bug. But some new application with the same problem can always appear. From the database perspective, I would like to implement some mechanism on the database to prevent the transaction log full situation independently of the application layer.
I know I can write a script that runs in a loop, monitors transaction log usage (e.g. using the MON_GET_TRANSACTION_LOG table function), and forces applications off the database when the transaction log is full or nearly full. But is there any simpler way, without scripts? Like some Db2 parameter or combination of parameters to prevent the "case 3" transaction log saturation?
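For reference, the monitoring approach mentioned above would be built around something like this (a sketch; the percentage arithmetic is illustrative, and the member argument -2 is intended to mean all members):
SELECT total_log_used,
       total_log_available,
       DEC(100.0 * total_log_used / (total_log_used + total_log_available), 5, 2) AS pct_log_used
FROM TABLE(MON_GET_TRANSACTION_LOG(-2));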
Regards
folow
(523 rep)
May 12, 2022, 12:25 PM
• Last activity: Aug 7, 2025, 06:04 AM
1
votes
1
answers
1302
views
permission denied for schema ctgov; SQL state: 42501
I am new to postgresql and I would grateful if you could please advise on how-to resolve the following error.. I have issued the followed commands: ```CREATE VIEW cancer_view AS SELECT sub1.nct_id, sub1.condition, sub1.condition_name, sub2.overall_status, sub2.participants, e.criteria, f.city, f.sta...
I am new to PostgreSQL and I would be grateful if you could please advise on how to resolve the following error.
I have issued the following commands:
CREATE VIEW cancer_view AS
SELECT sub1.nct_id, sub1.condition, sub1.condition_name,
sub2.overall_status, sub2.participants,
e.criteria,
f.city, f.state, f.country,
i.intervention_type, i.name AS intervention
FROM (SELECT c.nct_id, c.name AS condition, c.downcase_name AS condition_name
FROM ctgov.conditions AS c
-- selecting only cancer trials before joining
WHERE c.name ILIKE '%cancer%') sub1
JOIN (SELECT s.nct_id, s.overall_status, s.enrollment AS participants
FROM ctgov.studies AS s
-- selecting only completed cancer trials before joining
WHERE s.overall_status = 'Completed') sub2
ON sub1.nct_id = sub2.nct_id
JOIN ctgov.eligibilities AS e
ON sub1.nct_id = e.nct_id
JOIN ctgov.facilities AS f
ON sub1.nct_id = f.nct_id
JOIN ctgov.interventions AS i
ON sub1.nct_id = i.nct_id;
And the error is:
ERROR: permission denied for schema ctgov
SQL state: 42501
Could you please advise on how to set up the privileges to have access to the tables defined in the schema?
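For what it's worth, the kind of privileges involved look like this (a sketch; my_role is an illustrative role name, and the statements must be run by the schema/table owner or a superuser):
GRANT USAGE ON SCHEMA ctgov TO my_role;
GRANT SELECT ON ALL TABLES IN SCHEMA ctgov TO my_role;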
Thank you in advance!!!
Lan
(11 rep)
Feb 27, 2023, 05:04 AM
• Last activity: Aug 7, 2025, 05:08 AM
2
votes
1
answers
573
views
Spatial Query using MySQL 8.0
I have a database in MySQL 5.7 where I create a Lat/Long column using "POINT", then I create a Index on this column with main key as second column. So my database is like `PID, SurveyID, GeoPoint` Basically this is a summarize table where PID and SurveyID makes primary key for table. We do a survey...
**Edit: 12th July 2019:** Today I tried to rerun the queries and they are now executing in 2 seconds; still not good, but somehow better than the last run.
I have a database in MySQL 5.7 where I create a Lat/Long column using "POINT", then I create an index on this column with the main key as the second column. So my table is like `PID, SurveyID, GeoPoint`. Basically this is a summary table where PID and SurveyID make up the primary key. We do a survey on Animals, and this summary table records which Animal was surveyed at a given geo location on the map.
My index was on the columns `GeoPoint` and `PID`. This table stores around 400K records, summarised from a main table that has 2M records, for faster execution.
Now, we are planning to upgrade to MySQL 8.0 and we found that it doesn't support a spatial column being indexed together with a non-spatial column, so our index was not getting created. This results in our query now taking 8 seconds instead of 0.6 seconds on the older version of MySQL.
Moreover, when I create an index only on the spatial column `geopoint`, `ST_Contains` and `MBRContains` do not use that index. Basically our query is as simple as: we allow the user to draw a square on Google Maps and then use those coordinates to find all Animals in that region.
I am not sure how to solve this issue, as I have not found any documentation that helps with tweaking the query.
Query:
SELECT PID, count(distinct SurveyID) as totalsurvey
FROM locationsummary
where st_contains(ST_Envelope(ST_GeomFromText(
'LineString(137.109375 21.47351753335, 87.890625 -22.411028521559)')),
geopoint )
group by PID
Table:
CREATE TABLE `locationsummary` (
  `PID` bigint(20) NOT NULL,
  `SurveyID` bigint(20) NOT NULL,
  `Sitelat` float NOT NULL,
  `sitelong` float NOT NULL,
  `geopoint` point NOT NULL,
  PRIMARY KEY (`PID`, `SurveyID`),
  SPATIAL KEY `idx_geopoint` (`geopoint`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COLLATE=utf8_bin ROW_FORMAT=COMPACT
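One thing worth checking (an assumption on my part, not a confirmed diagnosis): in MySQL 8.0 the optimizer only uses a SPATIAL index when the indexed column carries an SRID attribute. A hedged sketch of what that would look like here, keeping SRID 0 so it matches the existing Cartesian points:
ALTER TABLE locationsummary DROP INDEX idx_geopoint;
ALTER TABLE locationsummary MODIFY geopoint POINT NOT NULL SRID 0;   -- requires all stored values to already have SRID 0
ALTER TABLE locationsummary ADD SPATIAL INDEX idx_geopoint (geopoint);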

Sumit Gupta
(175 rep)
May 30, 2019, 10:57 AM
• Last activity: Aug 7, 2025, 04:03 AM
0
votes
1
answers
84
views
If Read Committed Snapshot Isolation is already enabled, what is the cost of enabling Snapshot Isolation?
Suppose that I have a database with Read Committed Snapshot Isolation already enabled. Is there any reason at all to not also enable Snapshot Isolation? Intuitively, you would think that the row versions would be kept around from longer. [The documentation dismisses this](https://learn.microsoft.com...
Suppose that I have a database with Read Committed Snapshot Isolation already enabled. Is there any reason at all to not also enable Snapshot Isolation?
Intuitively, you would think that the row versions would be kept around for longer. [The documentation dismisses this](https://learn.microsoft.com/en-us/sql/relational-databases/sql-server-transaction-locking-and-row-versioning-guide?view=sql-server-ver17#behavior-when-reading-data).
> Even though `READ COMMITTED` transactions using row versioning provides a transactionally consistent view of the data at a statement level, row versions generated or accessed by this type of transaction are maintained until the transaction completes.
So I am left without any ideas.
Assume SQL Server 2022. SQL Server 2025 brought with it Optimized Locking, which creates just enough uncertainty in my mind that I don't want to ask about it here.
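For reference, the combination being asked about is just the two database options together (a sketch; the database name is illustrative, and ALLOW_SNAPSHOT_ISOLATION on its own does not change the behaviour of existing READ COMMITTED sessions):
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;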
J. Mini
(1237 rep)
Aug 1, 2025, 08:05 PM
• Last activity: Aug 7, 2025, 03:15 AM
1
votes
1
answers
3726
views
Oracle standby databases monitoring
I have many databases for which I have configured data guard setup. When it comes to monitoring the standby's, I use OEM to check the lag and I have enabled mail alerts as well as and when gap is more than 50 archives. But the thing what I recently observed is that in one of my standby databases my...
I have many databases for which I have configured a Data Guard setup. When it comes to monitoring the standbys, I use OEM to check the lag, and I have enabled mail alerts for when the gap is more than 50 archives.
But what I recently observed is that in one of my standby databases the recovery stopped for one reason or another, and unfortunately the lag did not go above 50 archives, so there were no alerts. In such a situation, how can I ensure that I am alerted when the standby's recovery process (MRP) itself is not active?
How is this possible? Any solutions?
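For context, a common way to see whether managed recovery is running on the standby is a query along these lines (a sketch; newer releases expose similar information in V$DATAGUARD_PROCESS):
SELECT process, status, thread#, sequence#
FROM   v$managed_standby
WHERE  process LIKE 'MRP%';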
tesla747
(1910 rep)
Jan 4, 2016, 01:16 PM
• Last activity: Aug 7, 2025, 03:03 AM
1
votes
1
answers
9035
views
Linked server error- The table either does not exist or the current user does not have permissions on that table
There is a view that is being called from Server A (SQL 2016) and this view needs to read data from Server B (SQL 2014). There is a linked server set up between these two servers and the connection works fine. The login that is running the view on the solution/application side is experiencing this e...
There is a view that is being called from Server A (SQL 2016) and this view needs to read data from Server B (SQL 2014). There is a linked server set up between these two servers and the connection works fine.
The login that is running the view on the solution/application side is experiencing this error:
> The OLE DB provider "SQLNCLI11" for linked
> server "Server B" does not contain the table ""DB1"."dbo"."TBL1"". The
> table either does not exist or the current user does not have
> permissions on that table.'.
- The login has DBO permissions on both DBs on Server A + B.
- The login is listed as a login in the linked server Security page.
- I have explicitly granted SELECT on all tables being accessed, and it did not help.
- I am able to select and read from all tables under Server Objects > Linked Server > Server B > Table > Select To.
- When the login runs the view from SSMS, it works fine without any errors.
Can someone please point me in the right direction on why this error might be occurring and how I can resolve it?
sqllover2020
(73 rep)
Jan 20, 2022, 04:03 PM
• Last activity: Aug 7, 2025, 02:02 AM
0
votes
1
answers
137
views
How can I share values in maintenance plan tasks?
A simple example would be a maintenance plan that clears a directory of old files and create some copy backups of all databases on that instance to that same directory. This is run daily The first cleanup step contains the path where the files are deleted. The second step actually creates the copy b...
A simple example would be a maintenance plan that clears a directory of old files and creates some copy backups of all databases on that instance in that same directory. This is run daily.
The first cleanup step contains the path where the files are deleted. The second step actually creates the copy backups in the same directory as the first step. So any changes made to the path have to be done in multiple places. For a simple 2 step maintenance plan this is not really a big issue but could be for more complicated plans.
So is there a way that plan steps can share information? I don't necessarily mean passing information between steps but perhaps a global space of the steps?
I could not see anything in the toolbox that would work for this. Best I got is that I could add two new steps.
1. Create a temp table with the path (and any other constants.)
2. Using that path delete directory contents
3. Run copy-backups
4. Delete temp table
I had thought this question: https://dba.stackexchange.com/questions/120498/can-i-pass-data-between-stages-in-a-maintenance-task would have helped but it was focused on sending completion information in email.
Matt
(365 rep)
Mar 17, 2017, 01:03 PM
• Last activity: Aug 7, 2025, 01:01 AM
0
votes
1
answers
483
views
LSN numbers query regarding Differential backups
We have the following backup regime in place which I am not 100% sure about. - Full backup each night at 18:00 taking approx. 6 hours - Transaction log backup upon completion of full generally about 10 mins. - Differential backups each hour taking approx. 20 mins. When querying the msdb.backupset ta...
We have the following backup regime in place which I am not 100% sure about.
- Full backup each night at 18:00 taking approx. 6 hours
- Transaction log backup upon completion of full generally about 10 mins.
- Differential backups each hour taking approx. 20 mins.
When querying the msdb.backupset table, I understand that the first_lsn will only be reset by a transaction log backup, but am slightly confused that the last_lsn entry is incremented each time the differential backup runs.
This seems to imply that the differential backup also contains transaction log entries (which I don't believe it does).
Is this entry in msdb.backupset just there to tell a restore which transaction log backup to use in order to preserve the chain of transactions?
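For reference, the columns being discussed can be pulled with a query like this (the database name is illustrative; type is D for full, I for differential, L for log):
SELECT database_name, type, backup_start_date, first_lsn, last_lsn, database_backup_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'MyDatabase'
ORDER BY backup_start_date DESC;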
Finally, if we only have a transaction log backup once a night and differentials throughout the day, we are still looking at potentially a maximum of 24 hours data loss despite hourly differentials.
Apologies if this seems like a lot of questions but trying to get this clear in my head.
Thanks for listening.
zapcon
(73 rep)
Jun 1, 2017, 11:16 AM
• Last activity: Aug 7, 2025, 12:04 AM
-1
votes
1
answers
145
views
MySQL trigger to insert a row into another table by barcode
I Have 3 Tables: 1) CREATE TABLE `UrunPaketDetay`( `UrunPaketDetay` int NOT NULL AUTO_INCREMENT, `UrunPaketNo` int NULL, `Tarih` date NULL, `Barkod` varchar(14) NOT NULL, `Urun` varchar(50) NULL, `SunumSekli` varchar(50) NULL, `IslemeSekli` varchar(50) NULL, `PaketlemeSekli` varchar(50) NULL, `Kalib...
I have 3 tables:
1)
CREATE TABLE `UrunPaketDetay` (
  `UrunPaketDetay` int NOT NULL AUTO_INCREMENT,
  `UrunPaketNo` int NULL,
  `Tarih` date NULL,
  `Barkod` varchar(14) NOT NULL,
  `Urun` varchar(50) NULL,
  `SunumSekli` varchar(50) NULL,
  `IslemeSekli` varchar(50) NULL,
  `PaketlemeSekli` varchar(50) NULL,
  `Kalibre` varchar(50) NULL,
  `Kilo` double NULL,
  `GlazeOran` varchar(50) NULL,
  `Uretici` varchar(190) NULL,
  `PaletKod` varchar(50) NULL,
  PRIMARY KEY (`UrunPaketDetay`)
) CHARACTER SET utf8;
2)
CREATE TABLE `CkisEks` (
  `CikId` int NOT NULL AUTO_INCREMENT,
  `Tarih` date NULL,
  `Musteri` varchar(190) NULL,
  `TeslimatYer` varchar(50) NULL,
  `CikisSaati` time NULL,
  `AracPlakasi` varchar(18) NULL,
  `AracTel` varchar(16) NULL,
  `KonteynirNo` varchar(50) NULL,
  `PaletKod` varchar(12) NULL,
  `Kilo` double NULL,
  PRIMARY KEY (`CikId`)
) CHARACTER SET utf8;
3)
CREATE TABLE `Ckis_Detay` (
  `CD_Id` int NOT NULL AUTO_INCREMENT,
  `CikId` int NULL,
  `Barkod` varchar(50) NULL,
  `Urun` varchar(50) NULL,
  `Kalibre` varchar(50) NULL,
  `Kilo` double NULL,
  `Uretici` varchar(50) NULL,
  `Musteri` varchar(190) NULL,
  `PaletKod` varchar(50) NULL,
  `Tarih` date NULL,
  PRIMARY KEY (`CD_Id`)
) CHARACTER SET utf8;
**My Question:**
I fill up my first table. Then, on the 2nd table, I enter a PaletKod, and that PaletKod auto-fills the Kilo column on the 2nd table from the 1st table.
But what I need is: when I fill in the PaletKod on my 2nd table, how can I select that PaletKod from the first table and fill my 3rd table with the values (Uretici, Urun, Kilo, Kalibre)? With a trigger?
---
BEGIN
INSERT INTO Ckis_Detay (PaletKod, Barkod, Urun, Kalibre, Kilo, Uretici)
SELECT PaletKod, Barkod, Urun, Kalibre, Kilo, Uretici
FROM UrunPaketDetay
WHERE PaletKod = PaletKod;
END
I wrote this trigger, but when I save to table 2 this trigger copies all the data from table 1 to table 3. I only need the rows with the same PaletKod to be copied.
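For comparison, a hedged sketch of restricting the copy to just the matching rows, assuming this runs as an AFTER INSERT trigger on CkisEks; the only change is qualifying the right-hand side of the comparison with NEW:
BEGIN
  INSERT INTO Ckis_Detay (PaletKod, Barkod, Urun, Kalibre, Kilo, Uretici)
  SELECT PaletKod, Barkod, Urun, Kalibre, Kilo, Uretici
  FROM UrunPaketDetay
  WHERE PaletKod = NEW.PaletKod;   -- only rows whose PaletKod matches the row just inserted into CkisEks
END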
Taz Plus
(1 rep)
Aug 28, 2018, 01:08 PM
• Last activity: Aug 6, 2025, 11:12 PM
0
votes
2
answers
177
views
The actual cause of transaction log full under the scenario below
In one of the instance, it shows >The transaction log for database 'DB' is full. To find out why space in the log cannot be reused, see the **log_reuse_wait_desc** column in **sys.databases** The instance has at least 100g hard disk empty storage and the transaction log allowed 2,097,152MB (2T) to g...
In one of the instances, it shows
>The transaction log for database 'DB' is full. To find out why space in the log cannot be reused, see the **log_reuse_wait_desc** column in **sys.databases**
The instance has at least 100 GB of empty hard disk storage, and the transaction log is allowed to grow to 2,097,152 MB (2 TB) with a growth rate of 10%.
The database recovery mode is FULL (at least it displays FULL in the SSMS)
In this case, what can cause the transaction log full exception, when the transaction log has enough physical storage to grow into and the log size has not reached its upper bound?
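For reference, the check that the error message points to looks like this (the database name is illustrative):
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'DB';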
SKLTFZ
(141 rep)
Mar 28, 2019, 04:30 AM
• Last activity: Aug 6, 2025, 11:04 PM
3
votes
1
answers
2624
views
Why is permission denied for the MongoDB keyfile stored in the root user home directory?
I configured in `/etc/mongod.conf` to enforce keyfile access control, in `security` option enabled, `keyFile` is `/root/dbtest.key` (the absolute path of keyfile). I already gave the ownership to `mongodb` user by `chown`, and granted `400` permission on that `dbtest.key` file. But `mongod` keeps fa...
I configured /etc/mongod.conf to enforce keyfile access control: in the `security` option, `keyFile` is set to `/root/dbtest.key` (the absolute path of the keyfile). I already gave the ownership to the `mongodb` user with `chown`, and granted `400` permissions on that `dbtest.key` file.
But `mongod` keeps failing to start. After checking the log, the error is `Error reading file /root/dbtest.key: Permission denied`. Checking the ownership and permissions on `dbtest.key` again shows that I granted them correctly, so I don't know at which step I went wrong.
necroface
(139 rep)
Aug 2, 2019, 03:12 AM
• Last activity: Aug 6, 2025, 10:13 PM
0
votes
2
answers
481
views
server trigger for Always On failover
I am working on a solution for a quite specific problem with environments (SQL Server using AO, with two replicas) having SSRS in scale-out, each node has MSSQL and SSRS installed, and when there is a Failover for MSSQL, one account is removed from the SSRS SCALE-out, for that reason I need to apply...
I am working on a solution for a quite specific problem with environments (SQL Server using AO, with two replicas) that have SSRS in scale-out; each node has MSSQL and SSRS installed. When there is a failover for MSSQL, one account is removed from the SSRS scale-out. For that reason I need to apply a workaround that I already have available; however, I now have to find the exact moment to apply it. I have two options:
1. create a SQL Agent job that runs every x minutes and applies the fix
2. find a way to capture the failover event and apply the fix then. Has anyone created a SQL Server trigger to capture Always On failovers? (One possible alternative mechanism is sketched below.)
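For what it's worth, one commonly described alternative to a server trigger is a SQL Agent alert on the availability group role-change error, with the fix attached as the alert's response job. A hedged sketch (the job name is illustrative and hypothetical, and the message id 1480 is an assumption worth verifying in the target environment):
-- Fire an Agent alert when a replica changes role, and run the repair job in response
EXEC msdb.dbo.sp_add_alert
     @name       = N'AG role change',
     @message_id = 1480,                    -- assumed role-change error number; verify before relying on it
     @severity   = 0,
     @enabled    = 1,
     @job_name   = N'Fix SSRS scale-out';   -- hypothetical job that applies the existing workaround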
Carlos DBA
(1 rep)
Sep 15, 2022, 09:35 PM
• Last activity: Aug 6, 2025, 08:05 PM
1
votes
1
answers
2361
views
Oracle - Delete dmp files from ASM diskgroup
I want to delete my export files daily from ASM diskgroup with a crontab job. I want to prepare a script for it. ASMCMD> ls exp1.dmp exp2.dmp exp3.dmp exp4.dmp ASMCMD> pwd +DATA/EXP How can I prepare this script? I prepared a template for it, but I couldn't script it. ``` set_gridenv asmcmd cd +DATA...
I want to delete my export files daily from an ASM diskgroup with a crontab job, and I want to prepare a script for it.
ASMCMD> ls
exp1.dmp
exp2.dmp
exp3.dmp
exp4.dmp
ASMCMD> pwd
+DATA/EXP
How can I prepare this script? I prepared a template for it, but I couldn't script it.
set_gridenv
asmcmd
cd +DATA/EXP
rm -rf exp*.dmp
exit
jrdba
(55 rep)
Sep 21, 2020, 10:45 AM
• Last activity: Aug 6, 2025, 07:08 PM
2
votes
1
answers
44
views
SQL Server Estimates don't use AVG_RANGE_ROWS for Uniqueidentifer Parameter
I'm trying to debug a very weird query row estimation. The query is very simple. I have a table `OrderItems` that contains for each Order (column `OrderId`) the items of the order. ```sql SELECT count(*) FROM orders.OrderItem WHERE OrderId = '5a7e53c4-fc70-f011-8dca-000d3a3aa5e1' ``` According to th...
I'm trying to debug a very weird query row estimation.
The query is very simple. I have a table `OrderItems` that contains for each Order (column `OrderId`) the items of the order.
SELECT count(*)
FROM orders.OrderItem
WHERE OrderId = '5a7e53c4-fc70-f011-8dca-000d3a3aa5e1'
According to the statistics from `IX_OrderItem_FK_OrderId` (that's just a normal unfiltered foreign key index, `CREATE INDEX IX_OrderItem_FK_OrderId ON orders.OrderItem (OrderId)`), the density is 1.2620972E-06 with 7423048 rows, so about ~9.3 items per order (if we ignore the items with OrderId = NULL; if we include them there are even fewer).
The statistics are created with FULLSCAN, and are only slightly out of date (around ~0.2% new rows since the last recompute).
| Name | Updated | Rows | Rows Sampled | Steps | Density | Average key length | String Index | Filter Expression | Unfiltered Rows | Persisted Sample Percent |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| IX_OrderItem_FK_OrderId | Aug 3 2025 4:36PM | 7423048 | 7423048 | 198 | 0.1649756 |26.443027 | "NO " | NULL | 7423048 | 100 |
| All density | Average Length | Columns |
| --- | --- | --- |
| 1.2620972E-06 | 10.443027 | OrderId |
| 1.3471555E-07 | 26.443027 | OrderId, Id |
The query plan, however, expects the query to return 205.496 rows. And in reality there are actually 0 results, because the `orderId` doesn't exist.
Detailed Query Plan:
https://www.brentozar.com/pastetheplan/?id=hVKYNLmXSU
It probably uses the histogram to come up with the estimate. The value should fall into the following bucket with `RANGE_HI_KEY = 'a39932d8-aa2c-f011-8b3d-000d3a440098'`, but then the estimate should be 6.87 according to AVG_RANGE_ROWS. It somehow looks like it uses the EQ_ROWS from the previous bucket (but 205 might also just be a coincidence).
| RANGE_HI_KEY | RANGE_ROWS | EQ_ROWS | DISTINCT_RANGE_ROWS | AVG_RANGE_ROWS |
| --- | --- | --- | --- | --- |
| 9d2e2bea-aa6e-f011-8dca-000d3a3aa5e1 | 12889 | 205 | 2412 | 5.343698 |
| a39932d8-aa2c-f011-8b3d-000d3a440098 | 21923 | 107 | 3191 | 6.8702602 |
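For reference, the histogram rows above come from the statistics object named earlier and can be reproduced with:
DBCC SHOW_STATISTICS ('orders.OrderItem', 'IX_OrderItem_FK_OrderId') WITH HISTOGRAM;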
`OPTION (RECOMPILE)` does not help.
Can somebody explain how SQL Server (in particular Azure SQL) is coming up with that number?
- Does it really think that the parameter is close enough to the bucket start, and just takes the EQ_ROWS value even though the AVG_RANGE_ROWS is a lot smaller?
- Does it not understand the parameter because it's defined as VARCHAR? If I replace it with `DECLARE @OrderId UNIQUEIDENTIFIER = '5a7e...'; WHERE OrderId = @OrderId` the estimate is down to 6. But if that's the reason, where does the estimate of 205 come from?
Jakube
(121 rep)
Aug 5, 2025, 04:53 PM
• Last activity: Aug 6, 2025, 06:33 PM
3
votes
1
answers
851
views
MySQL 5.7 enforce SSL
I'm using MySQL 5.7 and I want to set up SSL to force the client to use a cert. This cert is a self-signed cert that I generate. I have the cert for both server and client being: - CA - cert - key Where the CA has the cert for server and client. I then configured the config file (mysqld.conf) to use...
I'm using MySQL 5.7 and I want to set up SSL to force the client to use a cert.
This cert is a self-signed cert that I generate.
I have the cert for both server and client being:
- CA
- cert
- key
Where the CA has the cert for server and client.
I then configured the config file (mysqld.conf) to use the cert:
[mysqld]
ssl
ssl-ca=/database/mysql/ssl/ca-cert.pem
ssl-cert=/database/mysql/ssl/master-public.pem
ssl-key=/database/mysql/ssl/master-private.pem
And ran the command to enforce it:
GRANT ALL PRIVILEGES ON `database` TO 'user'@'192.168.10.10' IDENTIFIED BY 'password' REQUIRE SSL;
I can see SSL is enabled and it's picking up the right files running the command:
SHOW VARIABLES LIKE '%ssl%';
However, from the client machine (192.168.10.10 in this example) I can connect via the command line without specifying any cert and it just connects.
I can see the connection is using SSL with:
mysql> \s
SSL: Cipher in use is DHE-RSA-AES256-SHA
But the idea is to allow the connection only if I use the certs.
Is there a way to enforce it, to make sure it won't connect without specifying the cert?
That way I can configure the application to use it and make sure other applications wouldn't access it.
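For completeness, my understanding (an assumption worth verifying, not a confirmed fix) is that REQUIRE SSL only forces the connection to be encrypted, while requiring the client to present a valid certificate is expressed separately, for example:
ALTER USER 'user'@'192.168.10.10' REQUIRE X509;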
Adonist
(131 rep)
Jul 10, 2017, 04:34 PM
• Last activity: Aug 6, 2025, 06:05 PM
Showing page 1 of 20 total questions