Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
148
views
How to setup Mysql master-slave replication with the slave meant for testing where divergence is OK
Problem:
I have a staging DB server on which functionality is tested before pushing it to the production environment. Now, we want to start doing VA/PT (Vulnerability Analysis and Penetration Testing) on our application, but since that can be destructive on the staging DB server, we want to make a separate Testing environment with a VAPT web+DB server.
Requirements:
1. The data from the staging DB server must be replicated onto the VAPT DB server automatically so that specific new use cases, features, etc can be tested for vulnerabilities.
2. Due to VAPT activities (testing data, SQL Injection, possibly DROP TABLE exploits, etc) the VAPT DB server itself will also have its own data changes, i.e. divergence from Staging DB (Master)
So, if I use simple Master-Slave replication as below I am assured of #1:
Staging DB (Master) -> VAPT DB (Slave)
But if I do #2, the slave will eventually diverge, which is fine for the testing environment, but, will it interrupt or mess with the Master-Slave replication as per #1?
An obvious example where divergence will cause errors is a VA/PT activity that runs `DROP TABLE users`, after which INSERT/UPDATE statements against the `users` table replicated from the Staging DB (Master) will cause replication errors. Some UPDATEs/DELETEs might cause errors too.
In particular,
If I use ROW-based replication, divergence will happen quickly, causing frequent errors.
If I use STATEMENT-based replication, since ids will not match, it is possible that some data will break, because ids are essential to link data in related tables even though we do not use foreign keys.
Alternatively, instead of replication, I could **manually dump the Staging DB into the VAPT DB daily**, which would be cumbersome to automate.
OR,
I could make copy DBs and setup various partial copy operations, but that would complicate matters too much, given that I am not a developer and that my developers often make and revert changes of various sizes randomly.
*EDIT: The data directory varies between 20-25 GB on Staging*
Surely someone has come across this problem before, and there may be a set of best practices for this situation, i.e. keeping staging and testing environments in sync in (near) real-time while allowing the testing side the freedom to play with the data.
I tried googling for a while, but the right phrasing for Google escapes me. All I get is howtos for master-slave replication, handling *unwanted* drift/divergence, and so on. Nothing much about desired/accepted drift and divergence, or partial replication.
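For requirement #2, one possible mitigation (my suggestion, not something from the question) is to tell the replica to skip the error classes that divergence produces. MySQL's `slave_skip_errors` startup option does this, at the cost of also hiding genuine replication problems. A sketch of the replica's config, assuming classic async replication:

```ini
# my.cnf on the VAPT replica -- a sketch, not a production recommendation.
# 1032 = row not found, 1062 = duplicate entry, 1146 = table doesn't exist
[mysqld]
slave_skip_errors = 1032,1062,1146
read_only = OFF            # VAPT tests are allowed to write locally
```

With this, a dropped `users` table on the replica turns subsequent replicated statements against it into skipped errors rather than a stopped SQL thread, which matches the "divergence is OK" requirement.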
Thanks in advance.
site80443
(119 rep)
Apr 30, 2021, 03:13 PM
• Last activity: Jul 14, 2025, 12:04 PM
3
votes
1
answers
167
views
Randomly sort all non-ordered rows
Is there a way to get PostgreSQL to randomly order query results that haven't been ordered by an `ORDER BY`? I think this would be a useful way to seek out bugs caused by an implicit reliance on the order of DB results.
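As far as I know PostgreSQL has no built-in setting for this, so one workaround is to do the shuffling in the test harness: rewrite any query that lacks a top-level `ORDER BY` to append `ORDER BY random()`. A minimal sketch, using SQLite as a stand-in for a real PostgreSQL connection (`shuffle_unordered` is a made-up helper name):

```python
import re
import sqlite3

def shuffle_unordered(sql):
    """Append ORDER BY random() to a query that has no ORDER BY.

    Naive check: it also skips queries whose only ORDER BY sits in a
    subquery, so treat this as a sketch, not a SQL parser.
    """
    if re.search(r"\border\s+by\b", sql, re.IGNORECASE):
        return sql
    return sql.rstrip().rstrip(";") + " ORDER BY random()"

# SQLite stands in for PostgreSQL here; random() exists in both.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])
rows = [r[0] for r in conn.execute(shuffle_unordered("SELECT x FROM t"))]
assert sorted(rows) == list(range(100))  # same rows, arbitrary order
```

Routing all test queries through such a wrapper makes order-dependence bugs surface as flaky assertions, which is exactly what the question is after.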
ldrg
(709 rep)
Apr 4, 2019, 07:37 PM
• Last activity: Jul 8, 2025, 01:08 PM
1
votes
0
answers
53
views
How would I skip test compilation in a MySQL build?
As the title suggests, I am building MySQL version 8.0.37 from source on a Linux machine and would like to skip test compilation.
I have attempted adding the flag `-DINSTALL_MYSQLTESTDIR=`, which, according to https://dev.mysql.com/doc/refman/8.0/en/source-configuration-options.html#option_cmake_install_mysqltestdir , should '... suppress installation of this directory.' The directory is still being installed.
Could it be a position-dependent flag? The only flags prior to it in my call to `cmake` are `-DDOWNLOAD_BOOST=1` and `-DWITH_BOOST=/path/to/boost/`.
How does one avoid building the test files? I have not been able to find explanations online.
I am building on the Raspbian OS, which is a fork of Debian 'Bookworm'.
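One point worth separating out: `INSTALL_MYSQLTESTDIR` only controls what gets *installed*, not what gets *compiled*, so it would not stop the tests from building either way. The 8.0 source-configuration options also list `WITH_UNIT_TESTS`, which controls unit-test compilation. A sketch of a configure line combining both (verify the flags against your exact version; flag order should not matter to CMake):

```shell
cmake . \
  -DDOWNLOAD_BOOST=1 \
  -DWITH_BOOST=/path/to/boost/ \
  -DWITH_UNIT_TESTS=OFF \
  -DINSTALL_MYSQLTESTDIR=
```

This is an invocation sketch against a MySQL 8.0 source tree, not something runnable on its own.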
user10709800
(21 rep)
Jun 15, 2024, 03:59 PM
37
votes
5
answers
9534
views
How do you test for race conditions in a database?
I try to write database code to make sure that it's not subject to race conditions, to make sure that I've locked the correct rows or tables. But I often wonder: Is my code correct? Is it possible to force any existing race conditions to manifest? I want to be sure that if they do happen in a production environment my application will do the right thing.
I generally know exactly which concurrent queries are likely to cause a problem, but I've no idea how to force them to run concurrently so I can check that the correct behavior happens (e.g. that I used the correct type of lock), that the right errors are thrown, and so on.
*Note: I use PostgreSQL and Perl, so if this can't be answered generically it should probably get retagged as such.*
***Update:** I'd prefer it if the solution was programmatic. That way I can write automated tests to make sure there aren't regressions.*
xenoterracide
(2921 rep)
Jan 4, 2011, 11:30 AM
• Last activity: Jan 7, 2024, 01:54 PM
1
votes
0
answers
69
views
Writing .spec files for PostgreSQL isolation tests
I am developing a PostgreSQL extension for which I would like to run some tests with concurrent connections. If I understand [the documentation](https://www.postgresql.org/docs/current/extend-pgxs.html) correctly, I should be able to do so using the PGXS framework that comes with Postgres. However, it is mentioned that such isolation tests (listed in the `ISOLATION` variable in my `Makefile`) have to be specified in a `.spec` file, and nowhere can I find a mention of how to write such a `.spec` file. I've failed to find an example in the documentation and I've similarly failed to find example code online.
I know how to write the `.sql` tests that one can list in the `REGRESS` variable. Thus, my question is not about how to write tests in general. I specifically would like to know how to write a `.spec` file.
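For reference, the grammar is described in `src/test/isolation/README` in the Postgres source tree, and the suites under `src/test/isolation/specs/` are the best worked examples. A minimal sketch of the shape of a `.spec` file (table, session and step names here are made up; older releases require session/step names in double quotes):

```
setup
{
    CREATE TABLE t (n int);
    INSERT INTO t VALUES (0);
}

teardown
{
    DROP TABLE t;
}

session s1
step s1_begin  { BEGIN; }
step s1_update { UPDATE t SET n = n + 1; }
step s1_commit { COMMIT; }

session s2
step s2_begin  { BEGIN; }
step s2_update { UPDATE t SET n = n + 10; }
step s2_commit { COMMIT; }

# Each permutation is one interleaving the isolation runner executes
# and diffs against the expected output in expected/<testname>.out.
permutation s1_begin s2_begin s1_update s2_update s1_commit s2_commit
```

The runner (`pg_isolation_regress`) opens one connection per `session`, executes the steps in each listed `permutation` order, and compares the captured output against the expected file.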
BigSmoke
(814 rep)
Sep 1, 2023, 11:47 AM
3
votes
1
answers
451
views
How to cause a SQL Server database integrity error
I have created a stored procedure that runs DBCC CHECKDB on all user databases. If any errors are detected, it collates them into an HTML table and sends an HTML email to me; if there are no errors, it sends an email with text along the lines of 'no integrity errors found'.
There are no integrity errors regarding the database and so it does indeed successfully send me an email saying 'no integrity errors found'.
Now I am trying to test the error detection and collation. To do so I would ideally need to create multiple integrity errors, but I am at a loss as to how to do this on purpose. Can this be done?
P.S. I'm using SQL Server 2019
BLoB
(163 rep)
Jun 30, 2023, 12:54 PM
• Last activity: Jun 30, 2023, 01:20 PM
3
votes
2
answers
1097
views
Speed up restore database from snapshot
To revert a test database to an initial state (after running a test), I would like to restore the database from snapshot. I'm using the following script to achieve that.
However, the script execution now takes around 7-8 seconds, since it first disconnects all users from the database (by setting the DB to `SINGLE_USER` mode).
Is there any way the restoration process could be made faster so that the script could be called ideally before each automated (E2E) test?
```sql
ALTER DATABASE [MyDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE [MyDb] FROM DATABASE_SNAPSHOT = @snapshotName;
ALTER DATABASE [MyDb] SET MULTI_USER;
```
Elapsed time for each command:
1) 3065 ms
2) 2766 ms
3) 2 ms
This technique is meant to be used for end-to-end UI tests, so there is no direct control over transactions during a test.
I'm running the SQL Server in Docker. Could that play a significant role in how long it takes?
Jakub Janiš
(39 rep)
Oct 15, 2021, 04:04 PM
• Last activity: Apr 11, 2023, 07:19 AM
3
votes
1
answers
1449
views
Create database from template database, concurrently, in Postgres
In order to run integration tests concurrently, I wish to create a sample database at the beginning of every test. These test databases are "clones" of an immutable reference database (the template in `CREATE DATABASE test_db TEMPLATE test_reference`).
The cloning (and dropping at the end) operation remains fast enough.
The problem is, I can't clone that reference database concurrently, or I get the same kind of error as in the [following question](https://stackoverflow.com/questions/14374726/postgresql-cant-create-database-operationalerror-source-database-template) : this is a bottleneck. Is there a way to tell Postgres that I know those concurrent accesses are safe?
PS: I am not interested for now in changing my test setup to the well-known strategy of running all my tests in non-committing transactions.
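As far as I know there is no setting for this: `CREATE DATABASE ... TEMPLATE` requires that no other session is connected to the template while it is being copied, so concurrent clones are serialized by design. One hedged workaround (my suggestion, not from the question) is to pre-build a pool of clones serially before the test run and hand each test its own database:

```sql
-- Built serially before the tests start (names are made up):
CREATE DATABASE test_db_1 TEMPLATE test_reference;
CREATE DATABASE test_db_2 TEMPLATE test_reference;
CREATE DATABASE test_db_3 TEMPLATE test_reference;
-- Each test checks out one clone from the pool; after the test,
-- drop it and recreate it in the background for the next run.
```

This moves the serialization off the critical path: tests start instantly, and the pool refills while they run.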
laurent
(155 rep)
Feb 14, 2023, 11:33 AM
• Last activity: Feb 17, 2023, 12:55 PM
0
votes
1
answers
112
views
How to handle an expected exception in base postgres
I want to be able to write a `.sql` script that will carry out an action which fails, and the script should only report failure if the action _doesn't_ fail.
For example, given initial table:

```sql
create table tbl(x integer, y integer);
```

I might update this table in a migration with the following:

```sql
alter table tbl add constraint unique_tst unique (x, y);
```

And want to write a test script for that migration, which will be similar to the following:

```sql
insert into tbl(x, y) values
    (1, 1),
    (1, 1)
;
```
This will fail - which is expected given the constraint - but I'm not sure how to handle that failure in postgres.
Something such as:

```
if does not fail:
    insert into tbl(x, y) values (1, 1), (1, 1);
return:
    failure
```
But I have no idea if this exists.
Note - I cannot install any extensions for this.
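In plain Postgres (no extensions) this pattern can be expressed with a `DO` block: let the exception handler swallow the failure that is expected, and raise if the statement unexpectedly succeeds. A sketch against the `tbl` above:

```sql
DO $$
BEGIN
    INSERT INTO tbl(x, y) VALUES (1, 1), (1, 1);
    -- Reaching this line means the constraint did NOT fire.
    RAISE EXCEPTION 'test failed: duplicate insert was allowed';
EXCEPTION
    WHEN unique_violation THEN
        RAISE NOTICE 'ok: unique_violation raised as expected';
END
$$;
```

The `DO` body is PL/pgSQL, which is installed by default, so no extension is needed; the outer script fails (via the `RAISE EXCEPTION`) only when the action doesn't fail.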
baxx
(326 rep)
Feb 9, 2023, 11:09 AM
• Last activity: Feb 9, 2023, 06:04 PM
4
votes
1
answers
5804
views
dbeaver check sql for syntax without running
Using the dbeaver tool, I want to check a script for syntax errors without actually running it.
Purpose:
For a long insert query (PostgreSQL syntax, PostgreSQL 10.15) in the form of:

```sql
INSERT INTO schema1.table1
    (t1_id, fk1_id, fk_2_id, col1, col2)
VALUES
    (nextval('schema1.sq_table1'), fk1_id_1, fk1_id_2, col1_val_1, col1_val_2),
    -- etc, literally thousands of lines
```
I cannot run the prod-version of the query in a not-prod region, because the foreign keys are different.
So is there a way to just check the SQL's syntax without running it? It throws foreign key constraint errors in the not-prod region, so I think that verifies the syntax, but I'd like to be doubly sure.
As per the comment below, I think for the case here (insert with foreign key), a foreign-key-violation error would verify the syntax, but there is still the general case: is it possible to **verify the syntax of the SQL query without actually running it**? I would think that there are several cases where this would be useful.
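One server-side approach (my suggestion, not from the question): prefix the statement with `EXPLAIN`, which makes PostgreSQL parse, analyze, and plan it without executing it, so syntax errors and bad table/column references surface while no rows are touched:

```sql
-- Plain EXPLAIN (without ANALYZE) never runs the INSERT, so nothing
-- is written and foreign keys are never checked; values here are made up.
EXPLAIN
INSERT INTO schema1.table1 (t1_id, fk1_id, fk_2_id, col1, col2)
VALUES (nextval('schema1.sq_table1'), 1, 2, 'val1', 'val2');
```

The caveat is the flip side of the question: because nothing executes, runtime errors such as FK violations stay invisible; only parse-time and planning-time problems are reported.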
JosephDoggie
(203 rep)
Aug 5, 2021, 03:53 PM
• Last activity: Jan 31, 2023, 05:26 AM
5
votes
2
answers
6314
views
DROP DATABASE statement cannot be used inside a user transaction
Not really sure if this question belongs here, but I hope someone could help me out.
I've made integration tests going all the way down to the database (using mssql localDB). I want each test to run independently with its own data - I want to reseed the database with my fake data before each test is run. I tried to implement it with transactions, without success. Here is how I tried to pull it off:
```csharp
public class TestDbInitializer : DropCreateAlways<MyContext>
{
    public static List<Item> Items;

    public override void Seed(MyContext context)
    {
        Items = new List<Item>();
        // Adding items
        // ..
        Items.ForEach(x => context.Items.Add(x));
        context.SaveChanges();
    }
}

public class BaseTransactionsTests
{
    private TransactionScope _scope;

    [TestInitialize]
    public void Initialize()
    {
        _scope = new TransactionScope();
    }

    [TestCleanup]
    public void Cleanup()
    {
        _scope.Dispose();
    }
}

[TestClass]
public class IntegrationTests : BaseTransactionsTests
{
    private static IDependenciesContainer _container;

    [AssemblyInitialize]
    public static void AssemblyInit(TestContext context)
    {
        Database.SetInitializer(new TestDbInitializer());
        _container = new DependenciesContainer();
        // Registers all my application's dependencies
        _container.RegisterAll();
    }

    [TestInitialize]
    public void Initialize()
    {
        using (var context = new MyContext("TestsDatabase"))
        {
            context.Database.Initialize(true);
        }
    }

    [TestMethod]
    public void TestAddItem()
    {
        var controller = _container.Resolve<ItemsController>();
        var result = controller.AddItem(new Item { Name = "Test" });
        var goodResult = result as OkNegotiatedContentResult<Item>;
        if (goodResult == null)
            Assert.Fail("Bad result");

        using (var context = new MyContext("TestsDatabase"))
        {
            Assert.AreEqual(context.Items.Count(), TestDbInitializer.Items.Count + 1);
        }
    }
}
```
I use my dependency injector in my tests, registering all dependencies once (AssemblyInitialize).
I created a DB instance for testings, and a specific DropCreateAlways initializer with a fake data Seed method, which I set as the initializer in the AssemblyInitialize as well.
I want to reseed the database with my fake data before each test run. For that case I implemented the base class which holds a transaction scope.
When I run my tests, the following exception is thrown when Seeding the database in the TestInitialize:
DROP DATABASE statement cannot be used inside a user transaction
How should I deal with it? Moreover, what do you think of my implementation of those integration tests? What could be improved?
S. Peter
(185 rep)
Apr 5, 2016, 05:22 PM
• Last activity: Jan 1, 2023, 09:45 AM
0
votes
0
answers
252
views
How to compare data between MYSQL and PostgreSQL table?
A MySQL database has been migrated to Postgres. I need to check whether all the data records were transferred correctly. Is there a way I can compare the contents of the two tables residing in two different RDBMSs?
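One portable pattern (a sketch of the general idea, not a specific tool): read each table from both servers through their own drivers, canonicalize every row to text, and compare an order-insensitive hash per table. Below, two SQLite in-memory databases stand in for the MySQL source and the Postgres target; with real servers you would pass, e.g., a `mysql.connector` and a `psycopg2` connection instead:

```python
import hashlib
import sqlite3

def table_fingerprint(conn, query):
    """Order-insensitive fingerprint of a result set.

    Rows are stringified column-by-column so that driver-level type
    differences (int vs Decimal, etc.) collapse to comparable text.
    """
    rows = conn.execute(query).fetchall()
    canon = sorted(repr(tuple(str(c) for c in r)) for r in rows)
    return hashlib.sha256("\n".join(canon).encode()).hexdigest()

# Two in-memory DBs stand in for the MySQL source and PostgreSQL target.
a = sqlite3.connect(":memory:")
b = sqlite3.connect(":memory:")
for db in (a, b):
    db.execute("CREATE TABLE t (id INTEGER, name TEXT)")
    db.executemany("INSERT INTO t VALUES (?, ?)", [(1, "x"), (2, "y")])

assert table_fingerprint(a, "SELECT * FROM t") == table_fingerprint(b, "SELECT * FROM t")
```

Matching fingerprints mean the row sets agree; on a mismatch, a second pass that diffs the sorted canonical rows pinpoints the offending records. Type normalization (dates, NULLs, trailing spaces) is where the real work lies between two different RDBMSs.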
user18326231
Dec 15, 2022, 11:14 PM
• Last activity: Dec 16, 2022, 03:25 AM
2
votes
2
answers
660
views
Is database development stages the same as software development?
In software development, there is a series of stages that a product goes through - Dev, Test, UAT, Staging, Demo and Production.
This is what I feel is correct, after reading/researching through the Internet.
Database development
1. Dev the database in a separate DB (with the same version as prod and using only test data)
2. Test the dev database (with the same version as prod and using only test data)
3. Roll out to the prod database
My questions are:
1. Am I correct on the above database development stages?
2. Is there an equivalent of UAT/Staging/Demo in database development?
3. If point 1 is correct, how do people create/work on the dev/test database and ultimately pushing it to prod database?
P.S: I am new to database so please go easy on me! Thank you!
SunnyBoiz
(153 rep)
Aug 9, 2022, 02:12 AM
• Last activity: Aug 23, 2022, 05:13 AM
1
votes
0
answers
146
views
SQL Server - cloning a subset of a database for a test environment
I'm trying to set up an automated script which I can run on a schedule in SQL Server 2016 (probably nightly) to copy the structure of our production database and then transfer a subset of data into each table in the new database from the old one so that we can use it for a test environment. Copying the data is fairly trivial. I can use DBCC CLONEDATABASE for copying the structure, but that requires sysadmin rights, which I don't want to have to use because I don't want a sysadmin user's authentication details stored in a connection string. I'm open to the idea of approaching the whole problem differently. How do most people solve this?
Just to clarify, the production database is 10s of GBs and ideally I'd like it to be practical to regenerate the test one in a matter of a few minutes if we need to do it manually, so I'd rather avoid a full backup and restore. My script that uses CloneDB currently runs in about 2-3min and produces a test environment with plenty of data for us to use and is considerably faster than a backup and restore, plus it doesn't use up resources as much.
wizzardmr42
(460 rep)
May 12, 2022, 12:39 PM
0
votes
1
answers
317
views
Stop Dataguard configuration to test standby database
I would like to know the correct way to stop the Data Guard configuration so that I can perform a series of tests on the standby database.
I have an environment with two Oracle 11gR2 databases, one primary and the other a physical standby in a remote location (DR). I have to perform some stress tests, modifications to some logical structures and, above all, apply TDE at column level and measure performance afterwards. I would like to do all of this on the physical standby, since it is impossible to stop the production database, which is very critical. So the idea is to momentarily stop Data Guard, perform all operations on the standby, and then return everything to the way it was before. What steps should I take to carry out this task?
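One documented 11gR2 feature that fits this scenario (my suggestion, since the question doesn't name it) is converting the physical standby to a *snapshot standby*: it opens read-write for testing, keeps receiving redo without applying it, and flashback discards the test changes afterwards. A sketch in SQL*Plus, assuming no Data Guard broker and that flashback prerequisites (a flash recovery area) are in place:

```sql
-- On the standby:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;

-- ... run the stress tests, structural changes, TDE experiments ...

-- Discard all test changes and resume redo apply:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
```

Because redo keeps arriving while the snapshot standby is open, DR protection is reduced but not lost; the apply simply catches up after conversion back. Verify the exact sequence against the 11.2 Data Guard documentation for your patch level.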
miguel ramires
(169 rep)
Nov 8, 2021, 02:15 AM
• Last activity: Nov 17, 2021, 06:54 PM
0
votes
0
answers
66
views
How to speed up restore database from snapshot in SQL Server?
To revert a test database to an initial state (after running a test), I would like to restore the database from a snapshot. I'm using the following script to achieve that. However, the script execution now takes around 7-8 seconds, since it first disconnects all users from the database (by setting the DB to SINGLE_USER mode). Is there any way the restoration process could be made faster, so that the script could be called ideally before each automated test? Thank you for any opinions.
Note: I'm running the SQL Server in a Docker container. Not sure if this has a significant impact on the rollback and restore performance.
```sql
ALTER DATABASE [MyDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE [MyDb] FROM DATABASE_SNAPSHOT = @snapshotName;
ALTER DATABASE [MyDb] SET MULTI_USER;
```
Jakub Janiš
(39 rep)
Oct 15, 2021, 05:10 PM
• Last activity: Oct 15, 2021, 05:48 PM
15
votes
3
answers
5277
views
Testing stored procedure scalability
I have an email application that will be called upon to deliver to the UI the number of new messages for a given user on each page load. I have a few variations of things I am testing on the DB level but all are abstracted by the stored proc call.
I'm trying to slam the DB to see what the breaking point (# of requests per second) would be.
In a nutshell, I have a table such as `userId, newMsgCount` with a clustered index on `userId`. SQL should be able to serve hundreds or thousands of these responses per second. I think the laggard is my .NET app.
How can I make this a good test to get results based purely on SQL performance?
Is there a tool for this where I can give it a stored proc name and params for it to pound my DB?
I want to see if the DB can return a minimum of 250 responses per second.
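One tool that fits this (my suggestion, not from the question) is `ostress` from Microsoft's RML Utilities, which replays a query or proc across many concurrent connections and reports throughput. A sketch of an invocation, with the server, database, proc and parameter names made up:

```
ostress -S myserver -d MsgDb -E ^
  -Q "EXEC dbo.GetNewMsgCount @userId = 42" ^
  -n 50 -r 100 -q
```

Here `-n 50` opens 50 concurrent connections and `-r 100` runs the batch 100 times per connection, i.e. 5,000 executions total; dividing by the elapsed time gives requests per second straight from the DB, bypassing the .NET tier entirely.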
kacalapy
(2062 rep)
Sep 15, 2011, 06:04 PM
• Last activity: Sep 8, 2021, 10:23 AM
1
votes
0
answers
138
views
Testing SQL connections with UDL file
I am testing SQL connectivity between a SQL server (Windows Server 2000) and a client PC that has no SQL tools installed (Windows 2000 SP4). The client PC runs a Java application that needs to connect to the DB. From what I can tell, port 1433 comes up ESTABLISHED and then enters a TIME_WAIT state. Working with the networking team, we are trying to determine connectivity issues outside of the Java application.
I discovered the UDL test file process (never heard of this before, and I'm not a SQL guy) and am wondering a few things. According to the material I have googled, I should run the test using "SQL Server Native Client", but that is not among my provider options. Will the test be inaccurate if I use the OLE DB Provider for SQL Server instead?

mcv110
(75 rep)
May 11, 2021, 04:36 PM
2
votes
1
answers
302
views
How to create development/test environment of TB reporting database in SQL Server
I am working on a couple of large reporting databases with lots of reporting and analytical queries and many ETL jobs. When I make changes, I usually do it in production, be that changes in indexing or in the code.
Just had a minor accident after changing some code so I am thinking to start creating a development/test environment.
But the databases are huge with some huge tables. The databases are located on a huge server with lots of CPU cores and memory. How can I make a light test environment on my own computer in a developer edition?
Another option would be to create a test environment on the production server which is a data warehouse server (i.e. there are lots of ad hoc queries anyway). Is this a better approach? I would just have to restore a backup and name the database something else...
There are many dependencies in the databases, so scripting out all the objects manually is too much work. E.g. some stored procedures use around 100 tables...
So my question is: how can I stop making all my changes directly in production?
Thanks
xhr489
(827 rep)
Mar 20, 2021, 07:50 PM
• Last activity: Mar 23, 2021, 05:12 PM
0
votes
1
answers
33
views
How to test a function that selects from real tables?
I've got a mega function that everybody is afraid to modify, so each fix adds a dozen new `IF` lines. To me it feels that this function screams for proper testing.
Instead of the real function I'm including a really simplified example, because I believe that the crux of the challenge is that it's not only a function of its explicit parameters, but also of the state of some of the actual tables in the database.
```sql
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER FUNCTION [statistic].[daysByProject](
    @DateFrom DATE,
    @DateTo DATE
)
RETURNS @ret TABLE (ProjectID INT, IsFine TINYINT, Days SMALLINT)
AS
BEGIN
    INSERT INTO @ret (ProjectID, IsFine, Days)
    SELECT ProjectID, IsFine, Days
    FROM dbo.SomeRealTable

    RETURN
END
```
Is it possible to add tests to such a function? I've even heard of including tests in a transaction, apparently something akin to this:

```sql
/* alter function */
/**
 * Set up test case
 * Run
 * Raise error if test fails
 * Repeat with all test cases
 */
/* Roll back on error */
```

Could something like that be possible, so that (unless explicitly working around this) developers could only modify the function in a way that satisfies the requirements of the test cases?
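One established way to test functions that read real tables (my suggestion; the question doesn't mention it) is the tSQLt framework: its `FakeTable` swaps the real table for an empty, constraint-free copy inside a transaction that is rolled back after each test, so the test controls exactly the table state the function sees. A sketch, with the test-class and test names made up:

```sql
EXEC tSQLt.NewTestClass 'daysByProjectTests';
GO
CREATE PROCEDURE daysByProjectTests.[test returns seeded row]
AS
BEGIN
    -- Replace the real table with an empty stand-in for this test only.
    EXEC tSQLt.FakeTable 'dbo.SomeRealTable';
    INSERT INTO dbo.SomeRealTable (ProjectID, IsFine, Days) VALUES (1, 1, 10);

    SELECT * INTO #actual
    FROM statistic.daysByProject('2020-01-01', '2020-12-31');

    EXEC tSQLt.AssertEquals 1, (SELECT COUNT(*) FROM #actual);
END;
GO
EXEC tSQLt.Run 'daysByProjectTests';
```

Because every test runs in its own rolled-back transaction, the real `SomeRealTable` data is never touched, which addresses the "function of the state of actual tables" problem directly.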
Džuris
(121 rep)
Dec 1, 2020, 07:39 AM
• Last activity: Dec 9, 2020, 08:52 AM
Showing page 1 of 20 total questions