Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
1 vote · 1 answer · 1420 views
MySQLdump Exclude master-data
Is it possible to exclude the master data from a mysqldump? From the manual it seems like the only options are commented out or present. I don't need the data at all because it is going to a static system. Commented out will work, but I'm just wondering whether 0 or some other value would make it not present?
>Use this option to dump a master replication server to produce a dump file that can be used to set up another server as a slave of the master. It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server. These are the master server coordinates from which the slave should start replicating after you load the dump file into the slave.
>If the option value is 2, the CHANGE MASTER TO statement is written as an SQL comment, and thus is informative only; it has no effect when the dump file is reloaded. If the option value is 1, the statement is not written as a comment and takes effect when the dump file is reloaded. If no option value is specified, the default value is 1.
-https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html#option_mysqldump_master-data
My plan was to use:
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=0 ...
but from the entry above, that would put the CHANGE MASTER TO in as an uncommented command.
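For comparison, a sketch of the two invocations (mydb is a placeholder database name); as far as I know, mysqldump only writes CHANGE MASTER TO at all when --master-data or --dump-slave is given, so omitting the option should leave the statement out entirely:
# writes CHANGE MASTER TO as an SQL comment (informative only)
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 mydb > dump.sql
# omits the statement entirely, since it is only emitted when the option is present
mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob mydb > dump.sql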
user3783243
(157 rep)
Feb 28, 2020, 04:59 PM
• Last activity: Aug 6, 2025, 08:08 AM
0 votes · 1 answer · 528 views
Insert into table select - Replication lag - Percona Server 5.6
I have two MySQL instances (Percona server 5.6.31) in Master-Slave replication setup.
I have set the below configuration:
1. ROW based replication is set.
2. Transaction Isolation is set to read-committed.
Today, there was an insert running on my master. It was of the form
INSERT INTO table1 SELECT * FROM table2
table2 has 200 million rows.
Though only 5000 rows were inserted, the operation lasted 30 minutes, and I observed replication lag during the insert.
I have LOAD DATA INFILE disabled due to security concerns, so I can't insert using that either.
I came across an article from Percona which says this is resolved on versions above 5.1 when ROW-based replication is used.
1. How can I keep my slave in sync with the master under such conditions?
2. Why does the slave lag here?
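For reference, a common mitigation is to chunk the statement so the slave's single-threaded SQL thread applies many short transactions instead of one 30-minute one; a sketch, assuming table2 has an indexed integer column id:
-- each statement commits (and replicates) as its own small transaction
INSERT INTO table1 SELECT * FROM table2 WHERE id >= 1 AND id < 5001;
INSERT INTO table1 SELECT * FROM table2 WHERE id >= 5001 AND id < 10001;
-- ...continue until the id range of table2 is exhausted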
tesla747
(1910 rep)
Dec 28, 2016, 04:08 PM
• Last activity: Aug 6, 2025, 12:02 AM
1 vote · 1 answer · 396 views
master-master replication or master slave replication?
We currently have one server with a database.
A device and a website both access that database, causing load.
What I want is to create two servers in a master-slave setup, where the master's databases are replicated to the slave, but the slave also has some databases that must not be present on the master.
Is this possible using master-slave replication, or does it require master-master replication?
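For illustration, plain master-slave replication should suffice as long as the master never writes to the slave-only schemas, since the master then never generates events for them; a sketch of the slave's my.cnf, with local_reports as a hypothetical slave-only database:
# optional belt-and-braces filter on the slave; events for this schema
# are ignored even if the master ever creates a database of the same name
replicate-ignore-db = local_reports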
MIC
(71 rep)
Dec 17, 2015, 08:25 AM
• Last activity: Aug 4, 2025, 01:09 AM
0 votes · 1 answer · 2658 views
Why pg_wal dir does not clean up while logical replication
My main question: why is the pg_wal directory growing so fast?
I have two PostgreSQL instances.
The first was set up as the master:
wal_level: "logical"
max_wal_senders: 5
max_replication_slots: 10
Also, I enabled archiving for pg_wal:
archive_mode: on
archive_command: 'gzip /.../data/pg_wal_archive/%f'
After that, a PUBLICATION was created for some of my tables (not ALL TABLES).
For the replica instance I use the default config and only created a SUBSCRIPTION to the master.
And I have two questions:
1. I ran a script on the master that pulls ~10 GB of data into one table, which was **not included** in the PUBLICATION. And I see that my pg_wal directory size increases anyway. Why?
2. Why doesn't the pg_wal directory get cleaned up? Is that normal? It looks like I'm keeping restore data for a table that isn't used in the replication; how do I delete this data?
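A diagnostic sketch, assuming PostgreSQL 10 or later: WAL records every change in the cluster regardless of what the PUBLICATION covers, and WAL is retained as long as any replication slot still needs it, so inspecting the slots usually explains both symptoms:
-- how much WAL is each slot pinning, and is anything consuming it?
SELECT slot_name, active, restart_lsn,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;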
vintik
(1 rep)
Aug 12, 2019, 11:46 AM
• Last activity: Aug 2, 2025, 11:04 AM
0 votes · 1 answer · 143 views
is Master-Master replication ok for me?
I have two separate locations connected by a not-too-reliable VPN, and a common system that depends on MySQL, reading and writing tables at both sites.
Will master-master replication keep both locations in sync?
I don't care if the tables don't look exactly the same on both servers at any given moment (for example, rows written on both masters while the VPN is down might end up in a different order once both sides' transactions are applied).
Note: I am not using the auto-increment increment/offset settings.
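For context, these are the settings being skipped; without them, both masters can hand out the same auto-increment id while the VPN is down, which breaks replication outright rather than merely reordering rows. A sketch:
# my.cnf on master A
auto_increment_increment = 2   # every master counts in steps of 2
auto_increment_offset    = 1   # master B would use offset 2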
Otrovagomas
(1 rep)
Jun 10, 2015, 08:16 PM
• Last activity: Aug 2, 2025, 08:06 AM
0 votes · 2 answers · 70 views
Why use Multi-Master Replication?
I'm trying to understand what's the point of ***multi-master replication***:
- scaling database writes? ***sharding*** scales database writes too
- hot standby? ***single-master replication*** can be used for hot standby too
Is *multi-master replication* necessary?
What's the use case, why use it in addition to or in place of *sharding & single-master replication*?
csirmazbendeguz
(101 rep)
Mar 3, 2025, 03:04 PM
• Last activity: Jul 31, 2025, 07:37 PM
5 votes · 3 answers · 2916 views
How can streaming replication be kept lagged by a certain 'delay' time?
Postgres has had [streaming replication](http://wiki.postgresql.org/wiki/Streaming_Replication) since 9.0, and since 9.1 it has been possible to pause and resume the application of the streamed WAL records using [pg_xlog_replay_pause() and pg_xlog_replay_resume()](http://www.postgresql.org/docs/9.3/static/functions-admin.html#FUNCTIONS-RECOVERY-CONTROL). As far as I can tell, there is no built-in way of keeping the application of WAL deliberately lagged by a specific time period (eg 24h).
Have I missed a built-in way of achieving this, and if not, how should I approach doing it some other way?
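For illustration, one way to approximate a fixed lag with the functions cited above is a periodic (say, cron-driven) check on the standby; a sketch, assuming the primary and standby clocks are in sync:
-- pause replay while the newest replayed record is younger than 24 hours
SELECT pg_xlog_replay_pause()
 WHERE now() - pg_last_xact_replay_timestamp() < interval '24 hours';
-- resume once the replayed stream is old enough again
SELECT pg_xlog_replay_resume()
 WHERE now() - pg_last_xact_replay_timestamp() >= interval '24 hours';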
Jack Douglas
(40537 rep)
Mar 10, 2014, 01:55 PM
• Last activity: Jul 31, 2025, 06:11 AM
1 vote · 1 answer · 910 views
Queries return ReadFailure exception, "Replica(s) failed to execute read"
I have a cluster consisting of 5 Cassandra nodes; the Cassandra version used is cassandra-3.11.3-1.noarch.
The keyspace replication strategy is defined as follows:
CREATE KEYSPACE my_keyspace
WITH replication = {'class': 'NetworkTopologyStrategy', 'datacenter1': '2'}
AND durable_writes = true;
Now running queries on the cluster, the following issue occurs:
> ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read]
> message="Operation failed - received 0 responses and 1 failures"
> info={'consistency': 'LOCAL_ONE', 'received_responses': 0, 'required_responses': 1, 'failures': 1}
> info={'failures': None, 'consistency': 'Not Set', 'required_responses': None, 'received_responses': None}
Does anyone know what is causing this? If more information is needed to better debug this problem, please let me know!
**UPDATE 1**
root# awk '!/#/' /etc/cassandra/conf/cassandra-rackdc.properties
dc=datacenter1
rack=rack1
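A diagnostic sketch (my_table is a placeholder): with a replication factor of 2 and LOCAL_ONE, a single flaky or overloaded replica can fail the read, so checking node state and per-table health is a reasonable first step:
# up/down state and ownership for the nodes holding my_keyspace's replicas
nodetool status my_keyspace
# read latencies and error counts for a suspect table
nodetool tablestats my_keyspace.my_table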
Valentin Bajrami
(111 rep)
Aug 22, 2018, 08:29 AM
• Last activity: Jul 30, 2025, 05:07 AM
0 votes · 1 answer · 581 views
MySQL replication still failing after new import
I had an issue with my MySQL replication from master to slave. An app of ours mistakenly wrote to the slave and deleted some data as well, causing errors during replication (delete errors, primary-key-not-found errors, etc.; error codes 1032, 1062 and the like).
Skipping all the errors didn't work, as the data seemed beyond repair: once all the errors from multiple tables were skipped, replication kept failing as above. So we exported the data from the master using MySQL Workbench (after following all the steps of stopping, granting permissions, flushing privileges, locking the master, exporting the data through MySQL Workbench, and unlocking the databases).
https://www.lexiconn.com/blog/2014/04/how-to-set-up-selective-master-slave-replication-in-mysql/ (example guide)
The .sql files were copied to the slave, the slave was stopped, CHANGE MASTER TO was run with the correct MASTER_HOST, and all the data was imported through MySQL Workbench on the slave. The database is rather large (30-odd GB), so this process takes over 24 hours. Once the slave is then started, the errors appear again (duplication errors, cannot-delete errors, primary key errors, etc.).
Initially we used mysqldump to extract the data and the mysql client to re-import it, but the import failed, complaining of errors in the SQL files (which are too large to open in any editor and investigate). So we went with Workbench for both export and import (both of which seem to run without errors).
I cannot find anything online that deals with a case like this, replication failing after a clean export and import, or where to go from here. The .sql files created by Workbench all contain DROP TABLE statements before the CREATE TABLE statements, so the tables are surely deleted and created fresh and should not have any primary key concerns.
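For reference, the usual re-seed sequence with classic binlog coordinates looks roughly like this (a sketch; db1 is a placeholder, and making the slave read-only guards against the stray application writes that caused the original drift):
# on the master: consistent dump with binlog coordinates recorded as a comment
mysqldump --single-transaction --master-data=2 --databases db1 > seed.sql
# on the slave:
mysql -e "STOP SLAVE;"
mysql < seed.sql
# run CHANGE MASTER TO with the MASTER_LOG_FILE/MASTER_LOG_POS from seed.sql, then:
mysql -e "START SLAVE; SET GLOBAL read_only = ON;"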
Does anyone have any guidance on this? Any test we do has to run for 24 hours before we see the failure.
Much appreciated.
Dug
(19 rep)
Oct 17, 2019, 10:40 PM
• Last activity: Jul 29, 2025, 08:08 PM
0 votes · 0 answers · 32 views
SQL Server replication error - The Distributor has not been installed correctly. Could not enable database for publishing
I have 4 databases on the server that need to be replicated using transactional replication.
Each database gets 2 publications of its own, so the log reader is shared, but there are 2 publications per database, differing in the articles that go into each.
Tables whose names do not start with XX_ go to PublicationName, and those whose names do start with XX_ go to PublicationName_XX. Each publication then gets its own subscriber.
I am able to set up 3 of those replications with no problems, but when I get to the 4th, I get this error:
The Distributor has not been installed correctly. Could not enable database for publishing.
The confusing part is that it does not happen on one particular database among those 4; it seems to go by count: I can set up the first 3 in any order, and then the 4th one fails.
There is more: sometimes I am able to set up PublicationName4, and the failure occurs on PublicationName4_XX.
My initial thought was space limitations, but I am not even close to half of the disk capacity.
When I check the distributor
use master
EXEC sp_get_distributor
it shows as properly configured. So, I am at a loss ...

Dmitriy Ryabin
(111 rep)
Jul 28, 2025, 09:24 PM
• Last activity: Jul 28, 2025, 10:16 PM
0 votes · 1 answer · 14 views
Is it possible to run Django migrations on a Cloud SQL replica without being the owner of the table?
I'm using **Google Cloud SQL for PostgreSQL** as an **external primary replica**, with data being replicated continuously from a self-managed PostgreSQL source using **Database Migration Service (DMS)** in CDC mode.
I connected a Django project to this replica and tried to run a migration that renames a column and adds a new one:
uv run python manage.py migrate
However, I get the following error:
django.db.utils.ProgrammingError: must be owner of table camera_manager_invoice
This makes sense, since in PostgreSQL, ALTER TABLE requires table ownership. But in this case, the replica was created by DMS, so the actual table owner is the replication source — and not the current user.
---
## 🔍 The Problem:
I'm trying to apply schema changes via Django migrations on a Cloud SQL replica that I do **not own**. The replication is working fine for data (CDC), but I need to apply structural changes on the replica independently.
---
## ✅ What I Tried:
* Changing the connected user: still not the owner, so same error.
* Running sqlmigrate to get the SQL and applying it manually: same result — permission denied.
* Attempted to change ownership of the table via ALTER TABLE ... OWNER TO ...: failed due to not being superuser.
* Tried running the migration with --fake, but this skips execution and doesn't change the schema.
---
## ❓ My Question:
> **Is there any way to apply schema changes via Django migrations (or manually) on a Cloud SQL replica, without being the table owner?**
I'm open to alternatives, best practices, or official GCP recommendations for this situation.
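One possible workaround, sketched under the assumption that the Cloud SQL default user (a member of cloudsqlsuperuser) is allowed to grant you the owning role; the role names here are hypothetical, not an official GCP recommendation:
-- as the Cloud SQL default user: make the Django user a member of the owner role
GRANT dms_owner_role TO django_user;
-- django_user can now run the migration DDL, or take ownership outright:
ALTER TABLE camera_manager_invoice OWNER TO django_user;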
---
Raul Chiarella
(117 rep)
Jul 22, 2025, 06:40 PM
• Last activity: Jul 25, 2025, 02:53 PM
1 vote · 1 answer · 24 views
Why is my Cloud SQL external replica not reflecting schema changes (like new columns) after Django migrations?
I'm using **Google Cloud Database Migration Service (DMS)** to replicate data from a self-managed PostgreSQL database into a **Cloud SQL for PostgreSQL instance**, configured as an *external primary replica*.
The migration job is running in **CDC mode** (Change Data Capture), using **continuous replication**. Everything seems fine for data: new rows and updates are being replicated successfully.
However, after running Django’s makemigrations and migrate on the source database — which added new columns and renamed others — **the schema changes are not reflected in the Cloud SQL replica**. The new columns simply don’t exist in the destination.
### 🔍 What I’ve done:
- Source: self-managed PostgreSQL instance.
- Target: Cloud SQL for PostgreSQL set as an external replica.
- Replication user has proper privileges and is connected via mTLS.
- The job is active, with "Optimal" parallelism and healthy status.
- Data replication (INSERT/UPDATE/DELETE) works great.
- Schema changes like ALTER TABLE, ADD COLUMN, RENAME COLUMN are **not reflected** in the replica.
---
### ❓ Question:
**How can I configure DMS or Cloud SQL to also replicate schema changes (like ALTER TABLE or CREATE COLUMN) from the source to the replica? Or is it necessary to manually apply schema changes on the target?**
> I'm fine with workarounds or official recommendations — just need clarity on the correct approach for schema evolution in this setup.
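As far as I know, logical CDC pipelines carry row changes but not DDL, so the usual pattern is to apply the same schema change on the target manually, before the source starts emitting rows that use it; a sketch with hypothetical table and column names:
-- run the equivalent DDL by hand on the Cloud SQL replica
ALTER TABLE my_app_table RENAME COLUMN old_name TO new_name;
ALTER TABLE my_app_table ADD COLUMN new_field text;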
---
Raul Chiarella
(117 rep)
Jul 22, 2025, 06:05 PM
• Last activity: Jul 25, 2025, 02:48 PM
0 votes · 1 answer · 25 views
cluster vs replica in mariadb or mysql
I'm confused between the two terms cluster and replica in MariaDB.
I saw some tutorials explaining a cluster or a replica.
Is a replica the same as a cluster or not?
If not, please explain their differences.
PersianGulf
(115 rep)
Jul 24, 2025, 05:58 PM
• Last activity: Jul 24, 2025, 08:28 PM
1 vote · 1 answer · 1015 views
Get position of mysqlbinlog error. I have the error's line number
If I use mysqlbinlog to load a binlog into my database like this:
mysqlbinlog --defaults-file=/path/mysqld.cnf bin.000011| mysql -u root
and get an error like this:
ERROR 2006 at line 290: MySQL server has gone away
How do I convert line 290 into a binlog position that I can use to rerun the log, starting at the failed line?
mysqlbinlog --defaults-file=/path/mysqld.cnf --start-position= bin.000011| mysql -u root
I'm using binlog-format=ROW
Or, how can I log the binlog position of the error, and not just the line number?
It seems that there must be a way to recover from an error while updating a database with mysqlbinlog, otherwise the whole database is corrupted.
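For illustration, the text mysqlbinlog emits contains a "# at <position>" marker before each event, so the last marker at or above the failing line gives a usable restart position; a sketch:
# print the last "# at <position>" marker within the first 290 lines
mysqlbinlog --defaults-file=/path/mysqld.cnf bin.000011 | head -n 290 | grep '^# at ' | tail -n 1
# then rerun from that position (substitute the number printed above)
mysqlbinlog --defaults-file=/path/mysqld.cnf --start-position=<position> bin.000011 | mysql -u root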
user984003
(161 rep)
May 31, 2019, 09:26 PM
• Last activity: Jul 24, 2025, 12:02 PM
0 votes · 0 answers · 50 views
Postgres pg_wal size increasing possibly from patroni outage
I've recently hit an issue of pg_wal files not being cleaned up on a replica postgres server despite these being archived on the master instance. Patroni was down at the time due to an etcd outage; after correcting this and Patroni reestablishing connections, the WAL files began to be cleaned up on the read-only instance. I saw no replication lag during this time. The second replica was also not affected in the same way, even though all 3 servers had etcd down.
System
* Postgres 16
* Patroni 4.0.4
* 3 nodes in the cluster: 1 leader, 2 read-only replicas
* The cluster is managed by Patroni
* Replication is physical (streaming) replication
Question
From my understanding of Patroni, a Patroni outage shouldn't prevent the WAL files on the replica instance from being removed. Am I missing something here in terms of Patroni's active responsibilities?
I ensured that the read-only replica was still in standby mode, and it was operating as such during that time, so it isn't that it was promoted during the outage.
From that time, Patroni was repeatedly failing with:
Jul 23 05:02:24 patroni: 2025-07-23 05:02:24,141 ERROR: watchprefix failed: ProtocolError("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes rea
Jul 23 05:02:25 patroni: Traceback (most recent call last):
...
Jul 23 05:02:25 patroni: File "/usr/lib/python3/dist-packages/patroni/dcs/etcd.py", line 262, in _do_http_request Jul 23 05:02:25 patroni: raise etcd.EtcdConnectionFailed('No more machines in the cluster')
Jul 23 05:02:25 patroni: etcd.EtcdConnectionFailed: No more machines in the cluster
Jul 23 05:02:25 systemd: patroni.service: Main process exited, code=exited, status=1/FAILURE
Jul 23 05:02:25 systemd: patroni.service: Failed with result 'exit-code'.
Jul 23 05:02:25 systemd: patroni.service: Unit process 407680 (postgres) remains running after unit stopped.
Jul 23 05:02:25 systemd: patroni.service: Unit process 407683 (postgres) remains running after unit stopped.
...
From this, Patroni was crashing while the postgres process remained up (the postgres server process is 407680).
During this time I had also looked at pg_stat_activity and hadn't seen any long-running queries that I'd expect to hold the WAL back.
In terms of postgres logs, the only thing of note is the point where pg_wal started to increase, which is somewhat confirmed by the restartpoint entries:
2025-07-22 06:46:02.235 UTC LOG: restartpoint complete: wrote 517224 buffers (6.2%); 0 WAL file(s) added, 246 removed, 23 recycled; write=66.028 s, sync=0.499 s, total=66.721 s; sync files=686, longest=0.109 s, average=0.001 s; distance=4407526 kB, estimate=4407526 kB; lsn=21204/6CAA5F0, redo lsn=21203/13048F80
2025-07-22 06:47:27.708 UTC LOG: restartpoint complete: wrote 523662 buffers (6.2%); 0 WAL file(s) added, 1 removed, 22 recycled; write=77.386 s, sync=0.263 s, total=77.676 s; sync files=676, longest=0.012 s, average=0.001 s; distance=4407428 kB, estimate=4407516 kB; lsn=21205/159E6E88, redo lsn=21204/2006A040
2025-07-22 06:49:06.802 UTC LOG: restartpoint complete: wrote 562491 buffers (6.7%); 1 WAL file(s) added, 0 removed, 0 recycled; write=89.970 s, sync=0.256 s, total=90.321 s; sync files=598, longest=0.019 s, average=0.001 s; distance=4407504 kB, estimate=4407515 kB; lsn=21206/2207D1D0, redo lsn=21205/2D09E288
Note how the number of WAL file(s) removed drops to 0. Then, on recovery, postgres receives a SIGHUP, which causes it to reload its configuration:
2025-07-23 08:16:53.955 UTC LOG: received SIGHUP, reloading configuration files
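Two quick checks that may help the next time this happens (a sketch, for PostgreSQL 16): the on-disk WAL size, and whether the replica itself holds a slot whose consumer has stopped, which would pin WAL locally:
-- size of this instance's pg_wal directory
SELECT pg_size_pretty(sum(size)) AS wal_dir_size FROM pg_ls_waldir();
-- slots held on this replica (e.g. for a cascading standby or pg_receivewal)
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;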
Iamterribleatcoding
(1 rep)
Jul 23, 2025, 11:37 AM
• Last activity: Jul 23, 2025, 09:49 PM
0 votes · 1 answer · 145 views
SQL Server Replication - only ran once. How to configure scheduler from beginning
I configured transactional replication from server A to server B some 2 months ago.
The source database is a new production server, and the data there is updated constantly.
I have some issues I am not sure how to resolve:
1. I see that the "last sync" occurred 2 months ago, when I initially set it up.
- I was expecting the snapshot job to be periodic in nature, but it is not.
- It is actually set to "One time" and is not enabled. I know I can enable it and set a schedule for the future; however, I am looking for how to do this at the time I configure the replication. I may be configuring it with a script and expecting the configuration to go into effect from the PowerShell script right away, and I want to make sure I know how to set the schedule from the beginning.
This is what I see in the schedule of the Snapshot job in SQL Server Agent.

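If the publication is scripted in T-SQL, the Snapshot Agent's schedule can be set at creation time via sp_addpublication_snapshot's schedule parameters; a sketch, with a placeholder publication name (frequency_type 4 means daily):
-- create the snapshot agent with a daily 01:00 schedule from the start
EXEC sp_addpublication_snapshot
    @publication = N'MyPublication',
    @frequency_type = 4,                -- daily
    @frequency_interval = 1,            -- every 1 day
    @active_start_time_of_day = 10000;  -- 01:00:00 (HHMMSS)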
Dmitriy Ryabin
(111 rep)
May 21, 2024, 01:55 PM
• Last activity: Jul 20, 2025, 06:03 AM
0 votes · 1 answer · 253 views
Transaction Replication error subscription marked inactive and DDL changes one new column is added into Publisher
I am getting a replication error: the subscription is marked inactive and must be reinitialized.
I tried to reinitialize, which did not fix the issue, and I found the following as well.
I believe I need to fix the DDL issue first and then reinitialize.
The transactional replication error is due to a DDL change: one new column was added on the publisher and is not available on the subscriber.
Please assist me in fixing this error, and let me know if you need any further input.
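One setting worth checking first (a sketch; whether it applies depends on how the publication was created): a transactional publication only propagates DDL such as newly added columns when its replicate_ddl property is enabled:
-- enable DDL replication on the publication (name is a placeholder)
EXEC sp_changepublication
    @publication = N'MyPublication',
    @property = N'replicate_ddl',
    @value = 1;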
Saran
(107 rep)
Nov 11, 2023, 05:49 PM
• Last activity: Jul 20, 2025, 05:01 AM
0 votes · 1 answer · 147 views
Is it possible to READ LOCK all databases and then UNLOCK one by one?
I need to take a backup of databases from a live server for replication. There could be 2-5 databases, each 1-5 GB in size. The master is a live server, so I cannot stop the mysql service. Since the data size is big, backing up in read-only mode could be dangerous too. I'm wondering, is it possible to:
**1. READ LOCK all databases and stop bin-log writing**
**2. Take backup of a database and UNLOCK only that one.**
MySQL Version : 5.1.61
Storage Engine : MyISAM
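For reference, FLUSH TABLES WITH READ LOCK is global and cannot be released per database, but dumping one database at a time comes close to the desired effect; a sketch, with placeholder database names (note that each dump is consistent within itself, not across databases, which may matter when seeding replication):
# --lock-tables locks only the tables of the database being dumped,
# and releases them as soon as that dump finishes
mysqldump --lock-tables db1 > db1.sql
mysqldump --lock-tables db2 > db2.sql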
Thanks in advance.
Hasan Rumman
(101 rep)
May 14, 2018, 02:33 PM
• Last activity: Jul 19, 2025, 05:03 PM
0 votes · 1 answer · 165 views
Looking for best option(s) to have near real-time data synced from SQL Managed Instance to another SQL Managed Instance
I have two SQL Managed Instances
1 - Primary Production
2 - Reporting / Analytic workloads
Besides traditional (transactional) SQL replication, what is the best way to get data in near real-time from the primary to the secondary server? The secondary is not used for HA or DR at all, only for reporting workloads that we don't want to run on the primary managed instance.
My problem with transactional replication is that certain tables in the primary instance are truncated, which complicates replication. I also don't want to have to drop publications and generate snapshots, impacting the production applications.
Log shipping would be an option, but it makes the database on the secondary unavailable while restoring and isn't nearly real-time.
Really, what I'm looking for is something like an availability group (AG), which doesn't appear to be available between servers in the same region, unless I'm wrong.
Researching Managed Instance Link seems perfect, except I'm not seeing that the source can be a Managed Instance, unless I'm wrong.
The target database on the secondary can be read-only if needed.
Cody
(11 rep)
Nov 9, 2023, 09:04 PM
• Last activity: Jul 19, 2025, 04:05 PM
0 votes · 1 answer · 894 views
How to re-sync the Mysql replication if Master down
I have a very simple webapp infra: FE => BE => MySQL. I have set up MySQL master-slave replication successfully. If my MySQL master node goes down, I can manually point the MySQL IP in my BE to the slave node, and users can insert data into the tables with no problem. But I cannot re-sync that data back to the master node after I bring the master node up. So far I have tried 2 methods, and neither works:
1. Restart the master node and reset the slave node.
2. Switch their roles: change the original slave to master and the original master to slave.
If the master is down, how do I re-sync the data back to the master from the slave after the master is back to normal? In my case, do I have to copy the data from the slave to the master with mysqldump before re-syncing?
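For reference, the commonly described recovery path is to re-seed the recovered master from the node that kept taking writes (e.g. a mysqldump made with --master-data=2 on the current writer), then attach it as a slave; a sketch, with placeholder host and coordinates taken from the dump's comment:
-- on the recovered old master, after importing the dump:
CHANGE MASTER TO
    MASTER_HOST = '10.0.0.2',
    MASTER_LOG_FILE = 'bin.000042',
    MASTER_LOG_POS = 154;
START SLAVE;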
I am very new to MySQL, any help is appreciated!
ITnewbie
(109 rep)
Apr 12, 2022, 06:39 AM
• Last activity: Jul 19, 2025, 03:03 PM
Showing page 1 of 20