
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
144 views
Is Master-Master replication OK for me?
I have two separate locations connected by a not-too-reliable VPN. I have a common system that depends on MySQL and reads/writes tables. Will master-master replication keep both locations in sync? I don't care that the tables might not look exactly the same on both servers at any given moment (for example, rows written on both masters while the VPN is down might end up in a different order after both locations commit their transactions). Note: not using the auto-increment increment/offset settings.
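For reference, the auto-increment settings the question opts out of are the usual guard against key collisions when both masters accept writes. A minimal illustrative my.cnf sketch (values are examples, not from the question):

```ini
# Master A (illustrative):
[mysqld]
server-id = 1
log_bin   = mysql-bin
auto_increment_increment = 2   # both masters step by 2
auto_increment_offset    = 1   # A generates 1, 3, 5, ...

# Master B (illustrative):
[mysqld]
server-id = 2
log_bin   = mysql-bin
auto_increment_increment = 2
auto_increment_offset    = 2   # B generates 2, 4, 6, ...
```

Without these, two inserts made on the two sides during a VPN outage can claim the same auto-increment id, and replication stops with a duplicate-key error when the link returns.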
Otrovagomas (1 rep)
Jun 10, 2015, 08:16 PM • Last activity: Aug 2, 2025, 08:06 AM
0 votes
1 answer
913 views
MariaDB Galera cluster empty set
I am playing around with a MariaDB Galera Master-Master cluster and I have some issues which I do not understand. On the first node:

[mysqld]
bind-address=0.0.0.0

[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
binlog_format=ROW
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
default_storage_engine=InnoDB
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://"
wsrep_cluster_name='galera_cluster'
wsrep_node_address='192.168.2.67'
wsrep_node_name='KVM-1'
wsrep_sst_method=rsync
wsrep_sst_auth=repl_user:PASS

On the other nodes:

[mysqld]
bind-address=0.0.0.0

[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
binlog_format=ROW
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
default_storage_engine=InnoDB
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://IP_ADDR_MAIN_NODE,OTHER_NODES_ADDRS"
wsrep_cluster_name='galera_cluster'
wsrep_node_address='192.168.2.68'
wsrep_node_name='KVM-2'
wsrep_sst_method=rsync
wsrep_sst_auth=repl_user:PASS

When I log in to MySQL and run SHOW STATUS LIKE 'wsrep_cluster_size'; I get Empty set (0.00 sec). Am I missing anything?
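A common first check when every wsrep_% query returns an empty set is whether the server actually loaded the wsrep provider at all (for instance, the running binary is plain MySQL rather than a Galera-enabled MariaDB build, or the [galera] option group was never read). A hedged sketch of the sanity checks:

```sql
-- Empty set on wsrep_% queries usually means no wsrep provider is loaded.
SHOW VARIABLES LIKE 'wsrep_on';         -- expect ON on a Galera build
SHOW STATUS LIKE 'wsrep_ready';         -- ON once the node has joined
SHOW STATUS LIKE 'wsrep_cluster_size';  -- node count in the current component
```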
mohan reddy (11 rep)
May 18, 2018, 05:10 PM • Last activity: Jul 31, 2025, 12:09 AM
0 votes
1 answer
1103 views
Galera Cluster Setup - Primary and Secondary Site Scenario
I'm very new to Galera Cluster and am exploring a potential setup with reasonable resiliency to node failure and network failure. Looking at the very bottom part of this documentation, the **Weighted Quorum for a Primary and Secondary Site Scenario** is quite promising. For ease of reading, I've extracted the setup from the document as follows:

> When configuring quorum weights for primary and secondary sites, use the following pattern:
>
> Primary Site:
> node1: pc.weight = 2
> node2: pc.weight = 2
>
> Secondary Site:
> node3: pc.weight = 1
> node4: pc.weight = 1
>
> Under this pattern, some nodes are located at the primary site while others are at the secondary site. In the event that the secondary site goes down or if network connectivity is lost between the sites, the nodes at the primary site remain the Primary Component. Additionally, either node1 or node2 can crash without the rest of the nodes becoming non-primary components.

But there seem to be two drawbacks:

1. If there are two failed nodes and one of them happens to be on the primary site, the quorum will be <= 50% and the remaining two nodes will become non-primary components.
2. Although pc.weight is a dynamic option that can be changed while the server is running, flipping between the primary site and the secondary site requires modification on all nodes, which is a bit troublesome.

So I've come up with another idea: leave the weight as 1 for all nodes, and in the primary site add a Galera Arbitrator. In this case:

- The primary site will remain the Primary Component on network issues, just like in the original setup.
- The cluster still functions even if two nodes fail.
- Flipping between primary and secondary site just requires moving the Galera Arbitrator.

Is there anything wrong with my idea, or will there be any practical difficulties? I'd appreciate it if you can share your thoughts with me.
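For completeness, pc.weight can be read and changed at runtime through wsrep_provider_options, which is how the "modify all nodes" step in drawback 2 would actually be carried out. A sketch (the weight value is illustrative):

```sql
-- pc.weight appears inside the provider option string:
SHOW VARIABLES LIKE 'wsrep_provider_options';

-- Change this node's quorum weight at runtime:
SET GLOBAL wsrep_provider_options = 'pc.weight=2';
```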
CLDev (141 rep)
Oct 6, 2017, 03:51 PM • Last activity: Jul 28, 2025, 02:02 AM
0 votes
1 answer
161 views
Alternative to multi-master replication between local and cloud db when writes are infrequent and only 1 db at a time
**Background:** I have a closed-source app on my laptop that saves its data in a local SQLite database. I'm working on creating a mobile app for myself that would replicate the functionality of the desktop app but run on mobile, consequently necessitating a cloud database, for example any of the GCP SQL offerings.

**Constraints:** I will be the sole user of the application, so the DB writes will be very infrequent. Moreover, it can be guaranteed that no writes will happen simultaneously to the local and cloud DBs. The access pattern is more like:

1. Do some work on the laptop, work saved to local db
2. Sync data to cloud after app closes
3. Open app on phone sometime later, read and write to cloud db
4. Open laptop sometime later, get updates from cloud into local
5. Repeat

**Issue:** Data needs to be eventually consistent between the mobile and the desktop app, i.e. the local SQLite and the cloud DB. I've read about multi-master replication, but this seems like overkill since only one database is accessed at a time, and I feel that some simpler solution might fit the bill here. How could I solve this challenge?

**Unsubstantiated idea?** Perhaps it would be possible to emit a Pub/Sub event on writes to either db and have a serverless function listen on local write events, replicating them to the cloud db, and a daemon on the laptop, replicating the cloud writes to local. Would something like this work?
mikemykhaylov (1 rep)
Mar 13, 2024, 11:20 PM • Last activity: Jul 13, 2025, 03:03 PM
0 votes
1 answer
191 views
How to use a MySQL database both remotely and locally
I cannot find the best solution for my problem. I made a C++ application which runs on a kiosk under Ubuntu; it needs to store and retrieve data from a MySQL database, and the same data needs to be accessed from a remote web application. These are the requirements:

- The kiosk is connected to the Internet, but we cannot assume that the connection is always available
- The kiosk needs to be able to access the database at all times, because users can access the kiosk services only after logging in (user data are stored in the database)
- The remote web application needs to insert or modify data stored in the database

At the moment, I'm using a local MySQL database installed on the kiosk with phpMyAdmin, and the application directly accesses the local data. I then use cron to upload the database once a day and import it on my server so it can be accessed by the remote web application. This is really a bad solution, so I would like to find another one. What do you suggest? I would like to have a database on my server, let the remote web application use it directly, and have it receive updates from the kiosk.
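One hedged sketch of a direction that fits these requirements: keep the authoritative database on the server and make the kiosk a read replica, so logins keep working while the Internet is down; kiosk-originated writes then need a separate path (e.g. queued locally and replayed when the link is up). Setting up the kiosk as a replica would look roughly like this (host, user, password, and coordinates are illustrative placeholders):

```sql
-- On the kiosk, pointing at the central server (illustrative values):
CHANGE MASTER TO
  MASTER_HOST     = 'db.example.com',
  MASTER_USER     = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS  = 4;
START SLAVE;
```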
Marcus Barnet (113 rep)
Nov 24, 2019, 06:04 PM • Last activity: Jun 26, 2025, 01:00 PM
0 votes
1 answer
515 views
Restoring a MySQL database to a failed master
I have a master-master configuration in MySQL with two servers. One server should stay live on the network to serve requests (call it server A) and the other should be taken offline to push new code changes (server B). My original idea was that after running STOP SLAVE on both servers, server B could be shut down, updated, and even have a new database schema put in. After this, I thought that I could simply START SLAVE on server B and have the entire database from server A replicated/mirrored back over to server B. However, this is not the case. Restarting the slave, doing a CHANGE MASTER TO (...), and syncing up the log files does not replicate old changes like I want it to: it only replicates new writes from that point on. I am looking for a way to bring server B up to speed with the latest database from server A, and then have server B continue to replicate changes in a master-master setup. Then I can continue the sequence of server upgrades by doing the same process but keeping only server B online. Any solution which requires locking the tables will not work, since I need to make this change without any downtime. Is this possible?
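The usual answer here is a consistent dump from A with recorded binlog coordinates, loaded on B, after which B resumes replication from those coordinates; with InnoDB tables, --single-transaction avoids table locks. A hedged sketch (file names and positions are illustrative):

```sql
-- 1) On server A, from a shell (no table locks for InnoDB):
--      mysqldump --single-transaction --master-data=2 --all-databases > dump.sql
--    The dump header records the binlog file/position of the snapshot.
-- 2) Load dump.sql on server B, then resume from the recorded coordinates:
CHANGE MASTER TO
  MASTER_LOG_FILE = 'mysql-bin.000123',
  MASTER_LOG_POS  = 4567;
START SLAVE;
```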
nwrobel (11 rep)
Jan 18, 2016, 07:57 PM • Last activity: May 28, 2025, 04:04 PM
2 votes
2 answers
1019 views
MySQL Multi-Master Active-Active Multi-Write
I am establishing Multi-Master Replication [all masters can write to their own database]. Initially I started with two servers. After establishing the connection and syncing data on both servers, there are some questions bothering me:

1. If the connection between the two servers is lost, and during that time both update the same row or create a row with the same primary key, there will be a collision between them and sync will break.
2. Can we implement the same thing for a configuration with more than two masters?
3. Are there any monitoring and conflict-resolution tools which I can use?
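On point 1, the standard mitigation for colliding auto-increment keys generalizes to N masters: give every node the same step N and a distinct offset. An illustrative my.cnf fragment:

```ini
# With 3 writable masters (illustrative), each node k sets:
[mysqld]
auto_increment_increment = 3   # the number of masters
auto_increment_offset    = 1   # 1 on node 1, 2 on node 2, 3 on node 3
```

This only prevents auto-increment collisions; conflicting updates to the same existing row still need application-level or tooling-based conflict resolution.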
Arjun Kaushal (21 rep)
Aug 7, 2016, 05:04 PM • Last activity: Apr 1, 2025, 12:03 AM
-1 votes
1 answer
679 views
MySQL Replication: ignore a few columns
I have set up MySQL replication; is there a way to skip replicating one or more columns from a few tables?
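There is no per-column replication filter in MySQL. The closest documented behavior (see "Replication with Differing Table Column Counts" in the manual) is that, with row-based replication, the replica's copy of a table may omit trailing columns. A hedged sketch with illustrative table and column names:

```sql
-- Requires binlog_format=ROW, and the omitted columns must be the last
-- columns of the table on the source. On the replica only:
ALTER TABLE mydb.mytable DROP COLUMN big_blob_col;
```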
Andy natty (1 rep)
Jan 29, 2019, 12:45 AM • Last activity: Mar 6, 2025, 01:02 AM
2 votes
1 answer
987 views
MySQL Semi-synchronous replication with Multi-Master
Is it possible to use semi-synchronous replication with a Multi-Master setup? I've tried to follow this guide to set up semi-synchronous replication for a master-slave setup: https://avdeo.com/2015/02/02/semi-synchronous-replication-in-mysql/ But I'm not sure how to implement this in a Multi-Master setup. There are two plugins: one for the master and one for the slave. Since in a Multi-Master setup each server acts as both a master and a slave, does that mean I have to install both plugins on all servers? I'm using MySQL 5.7.
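In MySQL 5.7 the two sides are indeed separate plugins, and on a server that plays both roles both can be loaded and enabled; a sketch (plugin and variable names per the 5.7 manual):

```sql
-- On each server in the multi-master pair:
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
INSTALL PLUGIN rpl_semi_sync_slave  SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_slave_enabled  = 1;
```

Whether semi-synchronous acknowledgements behave sensibly in a circular topology is a separate question; the plugins only guarantee that at least one slave acknowledged each transaction.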
TheOddGuy (183 rep)
Jun 17, 2016, 08:42 AM • Last activity: Feb 20, 2025, 06:02 AM
0 votes
0 answers
5933 views
Internal MariaDB error code: 1927 ('Connection was killed')
Edit: filed a bug report at https://jira.mariadb.org/browse/MDEV-14493

MariaDB nodes freeze occasionally with the following error in the logs, any clues? (Drupal app; the db freezes on clearing the application cache)

[Warning] WSREP: Failed to apply app buffer: seqno: 903792, status: 1
Nov 23 21:42:55 websrv4 mysqld: #011 at galera/src/trx_handle.cpp:apply():351
Nov 23 21:42:55 websrv4 mysqld: Retrying 2th time
Nov 23 21:42:55 websrv4 mysqld: 2017-11-23 21:42:55 140081879742208 [Warning] WSREP: BF applier failed to open_and_lock_tables: 1927, fatal: 0 wsrep = (exec_mode: 1 conflict_state: 5 seqno: 903792)
Nov 23 21:42:55 websrv4 mysqld: 2017-11-23 21:42:55 140081879742208 [ERROR] Slave SQL: Error executing row event: 'Connection was killed', Internal MariaDB error code: 1927
a_at_cyber_dot_training (99 rep)
Nov 24, 2017, 05:20 AM • Last activity: Jul 29, 2024, 02:05 AM
0 votes
1 answer
21 views
MySQL replication status not reporting mysqld.cnf parameters
I am running multi-master replication from 3 master servers to 1 slave. Each of the masters has 2 DBs but I am replicating just 1 of the DBs. Server version: 8.0.35-0ubuntu0.20.04.1 (Ubuntu). Replication is working fine, but my question is: why, when I run SHOW REPLICA STATUS\G, do I not see the parameters I have set for *Replicate_Do_DB* or *replicate-wild-do-table* per channel in the status report? If I set them as globals (not specifying a channel) they appear in the report, but not when set per channel; any idea why that is? This is an extract from the MySQL config file (*/etc/mysql/mysql.conf.d/mysqld.cnf*):

master-info-repository=table
relay-log-info-repository=table
replicate-wild-do-table='src3_s#####':s#####v.%
replicate-wild-do-table='src2_b#####':b#####v.%
replicate-wild-do-table='src1_d#####':d#####v.%

and this is the result of SHOW REPLICA STATUS\G:
mysql> show replica status \G
*************************** 1. row ***************************
             Replica_IO_State: Waiting for source to send event
                  Source_Host: 101.114.110.113
                  Source_User: slave_user
                  Source_Port: 3306
                Connect_Retry: 60
              Source_Log_File: mysql###-bin.001307
          Read_Source_Log_Pos: 85496
               Relay_Log_File: plfamily-replication-slave-relay-bin-src1_#####.000086
                Relay_Log_Pos: 323
        Relay_Source_Log_File: mysql###-bin.001307
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB: 
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Source_Log_Pos: 85496
              Relay_Log_Space: 86304
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Source_SSL_Allowed: No
           Source_SSL_CA_File:
           Source_SSL_CA_Path:
              Source_SSL_Cert:
            Source_SSL_Cipher:
               Source_SSL_Key:
        Seconds_Behind_Source: 0
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Source_Server_Id: 1
                  Source_UUID: a604d2af-ed30-11ea-a99d-8657a156e939
             Source_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
    Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Source_Retry_Count: 86400
                  Source_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Source_SSL_Crl:
           Source_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name: src1_d######
           Source_TLS_Version:
       Source_public_key_path:
        Get_Source_public_key: 0
            Network_Namespace:
*************************** 2. row ***************************
             Replica_IO_State: Waiting for source to send event
                  Source_Host: 237.239.251.262
                  Source_User: slave_user
                  Source_Port: 3306
                Connect_Retry: 60
              Source_Log_File: mysql###-bin.000002
          Read_Source_Log_Pos: 45872
               Relay_Log_File: plfamily-replication-slave-relay-bin-src2_#####.000029
                Relay_Log_Pos: 326
        Relay_Source_Log_File: mysql###-bin.000002
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB: 
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Source_Log_Pos: 45872
              Relay_Log_Space: 921
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Source_SSL_Allowed: No
           Source_SSL_CA_File:
           Source_SSL_CA_Path:
              Source_SSL_Cert:
            Source_SSL_Cipher:
               Source_SSL_Key:
        Seconds_Behind_Source: 0
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Source_Server_Id: 3
                  Source_UUID: ab801adf-ddb3-11eb-96b2-a27c967973a1
             Source_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
    Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Source_Retry_Count: 86400
                  Source_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Source_SSL_Crl:
           Source_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name: src2_b#####
           Source_TLS_Version:
       Source_public_key_path:
        Get_Source_public_key: 0
            Network_Namespace:
*************************** 3. row ***************************
             Replica_IO_State: Waiting for source to send event
                  Source_Host: 134.148.228.213
                  Source_User: slave_user
                  Source_Port: 3306
                Connect_Retry: 60
              Source_Log_File: mysql####-bin.000002
          Read_Source_Log_Pos: 47931
               Relay_Log_File: plfamily-replication-slave-relay-bin-src3_s####.000025
                Relay_Log_Pos: 328
        Relay_Source_Log_File: mysql####-bin.000002
           Replica_IO_Running: Yes
          Replica_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB: 
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Source_Log_Pos: 47931
              Relay_Log_Space: 925
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Source_SSL_Allowed: No
           Source_SSL_CA_File:
           Source_SSL_CA_Path:
              Source_SSL_Cert:
            Source_SSL_Cipher:
               Source_SSL_Key:
        Seconds_Behind_Source: 0
Source_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Source_Server_Id: 4
                  Source_UUID: ab801adf-ddb3-11eb-96b2-a27c967973a1
             Source_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
    Replica_SQL_Running_State: Replica has read all relay log; waiting for more updates
           Source_Retry_Count: 86400
                  Source_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Source_SSL_Crl:
           Source_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name: src3_s######
           Source_TLS_Version:
       Source_public_key_path:
        Get_Source_public_key: 0
            Network_Namespace:
3 rows in set (0.00 sec)
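Not a definitive diagnosis, but per-channel filters can also be applied at runtime with CHANGE REPLICATION FILTER ... FOR CHANNEL, which makes them visible in the corresponding row of SHOW REPLICA STATUS; a sketch with illustrative database and channel names:

```sql
-- Channel name and table pattern are illustrative:
STOP REPLICA SQL_THREAD FOR CHANNEL 'src1';
CHANGE REPLICATION FILTER
  REPLICATE_WILD_DO_TABLE = ('mydb.%')
  FOR CHANNEL 'src1';
START REPLICA SQL_THREAD FOR CHANNEL 'src1';
```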
Andrei Hristov (1 rep)
Jan 24, 2024, 08:38 AM • Last activity: Jan 29, 2024, 08:41 PM
7 votes
4 answers
18703 views
Master-master Replication with 3 Nodes
I need to configure MySQL master/master replication with 3 nodes. Currently I have a master/slave setup and I need to move it to master/master with 3 nodes. I have gone through some posts about configuring it in a ring structure, and I'm a little nervous about it. If anyone has a similar setup in production, it would be great if you could suggest how to achieve this.
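In a ring topology each node replicates from its neighbor, so each node must re-log the events it applies as a slave; a hedged my.cnf sketch for one node of a 3-node ring (A → B → C → A, values illustrative):

```ini
[mysqld]
server-id         = 1          # unique on every node
log_bin           = mysql-bin
log_slave_updates = 1          # re-log applied events so they travel the ring
auto_increment_increment = 3   # 3 writers: avoid auto-increment collisions
auto_increment_offset    = 1   # 1, 2, 3 on nodes A, B, C respectively
```

The main fragility of a ring is that one dead node breaks propagation for everything downstream of it, which is part of why Galera or Group Replication is often preferred for three writable nodes.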
Swaroop Kundeti (247 rep)
Jul 22, 2014, 12:35 PM • Last activity: Feb 27, 2023, 05:00 PM
2 votes
1 answers
1002 views
galera wsrep_cluster_address value
I am trying to deploy Galera multi-master replication but have faced some problems; Google does not seem to solve them, but I see some people had the same problems in the past. Say I have 3 servers:

- 192.168.1.1
- 192.168.1.2
- 192.168.1.3

First I figured out that the primary node has to be initialized like this, only once (init-system independent below):

/etc/init.d/mysql start --wsrep-new-cluster

How should wsrep_cluster_address="gcomm://" look on each node? Somewhere I read it has to list **each** address on **each** node, like:

wsrep_cluster_address="gcomm://192.168.1.1,192.168.1.2,192.168.1.3"

Further googling showed that the primary node should only have:

wsrep_cluster_address="gcomm://"

then the second node:

wsrep_cluster_address="gcomm://192.168.1.1"

and the third:

wsrep_cluster_address="gcomm://192.168.1.1,192.168.1.2"

Sorry if it looks too long; I have been digging for 4 days already, every day :/
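The convention most Galera documentation settles on is: list all nodes in wsrep_cluster_address on every node, and use the empty gcomm:// (or a bootstrap command such as galera_new_cluster) only once, to start the very first node of a brand-new cluster. A sketch using the addresses from the question:

```ini
# Same line on all three nodes:
wsrep_cluster_address = "gcomm://192.168.1.1,192.168.1.2,192.168.1.3"
```

A node only uses this list to find a running member to join, so including its own address is harmless.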
M.Mass (121 rep)
Jul 17, 2018, 09:32 PM • Last activity: Dec 9, 2022, 10:06 PM
0 votes
1 answer
704 views
do_table or ignore_table alternatives on MariaDB Galera Multi-Master
As far as I can see, there are no do_table or ignore_table configs in MariaDB Galera multi-master (master-master) to skip or ignore specific tables. And there are only binlog_do_db and binlog_ignore_db options, which only work for databases. I have multiple servers in my formation, and I only want to sync two tables in each server database. Other tables' data should be intact. What do you suggest?
Roham (3 rep)
Dec 5, 2021, 07:17 PM • Last activity: Dec 7, 2021, 03:27 PM
1 vote
1 answer
211 views
Geo-distributed DBaaS with MySQL compatible DB with multi-master?
There are a few NewSQL engines; most of them are Postgres-compatible, but there are MariaDB Xpand and others that are MySQL-based, plus Galera-based clusters. My question: is there a production-ready DBaaS service which provides a managed DB on AWS?
Vitaly Karasik DevOps (623 rep)
Oct 4, 2021, 08:18 AM • Last activity: Oct 5, 2021, 02:01 PM
2 votes
0 answers
1124 views
MySQL multi-master group replication on kubernetes
We are trying to set up MySQL multi-master group replication (GR) on Kubernetes (Group Replication: configuring instances). GR starts on one pod after all the configuration. However, the second node goes to the RECOVERING state when GR is started, followed by the ERROR state. There is no error in the GCS_DEBUG_TRACE logs either. Let me know if there is anything missing and if more info is required to analyze. Thanks in advance.

Workarounds tried:

1. https://dba.stackexchange.com/questions/268801/mysql-group-replication-multi-primary-setup
2. https://stackoverflow.com/questions/50794695/mysql-group-replication-stuck-on-recovering-forever

Cluster Setup:

1. Created 3 PVCs, one for each pod, in a namespace
2. Launched pods using the mysql:8.0.23 docker image (https://hub.docker.com/_/mysql)
3. Ran the queries below to configure the pods
$ kubectl get all -n myb5
NAME         READY   STATUS    RESTARTS   AGE
pod/mysql1   1/1     Running   0          15h
pod/mysql2   1/1     Running   0          15h
pod/mysql3   1/1     Running   0          15h

NAME                TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/gr-domain   ClusterIP   None                     15h

$ kubectl get pvc -n myb5
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mysql-pv-claim-1   Bound    pvc-f7957eff-b75e-4dbc-990a-8d79e54b6f06   250Gi      RWO            robin          15h
mysql-pv-claim-2   Bound    pvc-1c5d4dfd-8495-4266-af0d-882ce8e8ccec   250Gi      RWO            robin          15h
mysql-pv-claim-3   Bound    pvc-49c0979b-49cb-413b-b695-479b32124343   250Gi      RWO            robin          15h
Configuration on all pods:

SET PERSIST general_log = ON;
SET PERSIST general_log_file= '/var/lib/mysql/mysql1.log';
SET PERSIST group_replication_communication_debug_options='GCS_DEBUG_ALL';
SET PERSIST enforce_gtid_consistency=ON;
SET PERSIST gtid_mode = OFF_PERMISSIVE;
SET PERSIST gtid_mode = ON_PERMISSIVE;
SET PERSIST gtid_mode = ON;
SET PERSIST binlog_format = ROW;
SET PERSIST master_info_repository='TABLE';
SET PERSIST relay_log_info_repository='TABLE';
SET PERSIST transaction_write_set_extraction=XXHASH64;
SET SQL_LOG_BIN = 0;
CREATE USER rpl_user@'%' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
GRANT BACKUP_ADMIN ON *.* TO rpl_user@'%';
FLUSH PRIVILEGES;
CHANGE REPLICATION SOURCE TO SOURCE_USER='rpl_user', SOURCE_PASSWORD='password' FOR CHANNEL 'group_replication_recovery';
SET SQL_LOG_BIN = 1;
INSTALL PLUGIN group_replication SONAME 'group_replication.so';
SET PERSIST group_replication_group_name='85cbd4a0-7338-46f1-b15e-28c1a26f465e';
SET PERSIST group_replication_start_on_boot=OFF;
SET PERSIST group_replication_bootstrap_group=OFF;
SET PERSIST group_replication_single_primary_mode=OFF;
SET PERSIST group_replication_enforce_update_everywhere_checks=ON;
SET PERSIST group_replication_member_expel_timeout=3600;
SET PERSIST group_replication_group_seeds='mysql1.gr-domain.myb5.svc.cluster.local:33061,mysql2.gr-domain.myb5.svc.cluster.local:33061,mysql3.gr-domain.myb5.svc.cluster.local:33061';
SET PERSIST group_replication_ip_allowlist='mysql1.gr-domain.myb5.svc.cluster.local,mysql2.gr-domain.myb5.svc.cluster.local,mysql3.gr-domain.myb5.svc.cluster.local';

Conf on pod1:

SET PERSIST server_id=1;
SET PERSIST group_replication_local_address= 'mysql1.gr-domain.myb5.svc.cluster.local:33061';
SET PERSIST group_replication_bootstrap_group=ON;
START GROUP_REPLICATION USER='rpl_user', PASSWORD='password';
SET PERSIST group_replication_bootstrap_group=OFF;
SET PERSIST group_replication_recovery_get_public_key=ON;

Conf on pod2:

SET PERSIST server_id=2;
SET PERSIST group_replication_local_address= 'mysql2.gr-domain.myb5.svc.cluster.local:33061';
START GROUP_REPLICATION USER='rpl_user', PASSWORD='password';

Conf on pod3:

SET PERSIST server_id=3;
SET PERSIST group_replication_local_address= 'mysql3.gr-domain.myb5.svc.cluster.local:33061';
START GROUP_REPLICATION USER='rpl_user', PASSWORD='password';

Group Replication status when started:

mysql> SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: 57b9f42a-8b4d-11eb-bd3e-0242ac110003
   MEMBER_HOST: mysql1
   MEMBER_PORT: 3306
  MEMBER_STATE: ONLINE
   MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.23
*************************** 2. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: 57ec4b94-8b4d-11eb-8fdc-0242ac110004
   MEMBER_HOST: mysql2
   MEMBER_PORT: 3306
  MEMBER_STATE: RECOVERING
   MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.23
*************************** 3. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: 57ec4b94-8b4d-11eb-8fdc-0242ac110005
   MEMBER_HOST: mysql3
   MEMBER_PORT: 3306
  MEMBER_STATE: RECOVERING
   MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.23

Group Replication error after a few minutes:

mysql> SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
  CHANNEL_NAME: group_replication_applier
     MEMBER_ID: 57ec4b94-8b4d-11eb-8fdc-0242ac110004
   MEMBER_HOST: mysql2
   MEMBER_PORT: 3306
  MEMBER_STATE: ERROR
   MEMBER_ROLE: 
MEMBER_VERSION: 8.0.23
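Not a definitive diagnosis, but a frequent cause of joiners cycling RECOVERING → ERROR on Kubernetes is that members advertise a hostname the other pods cannot resolve; one hedged thing to try is pinning report_host on each pod to its service DNS name (names taken from the question):

```sql
-- report_host is not dynamic, so persist it for the next startup and
-- restart the pod afterwards. On pod2, for example:
SET PERSIST_ONLY report_host = 'mysql2.gr-domain.myb5.svc.cluster.local';
```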
Raghavendra V (21 rep)
Mar 24, 2021, 11:54 PM • Last activity: Mar 25, 2021, 05:41 AM
2 votes
2 answers
855 views
How do you update one side of a MySQL master-master replication setup?
I have a master-master (circular) replication configured between two MySQL servers, call them West and East. There is also a web app that interfaces with each DB, also on its own West and East server, such that the West web app talks to the West database and the East web app talks to the East database. This all works fine during normal operations, but the problem arises when it comes time to do quarterly deployments. In these deployments, almost always, there are database content updates, and frequently there are database structure changes as well (e.g., adding a table, adding columns to a table, etc). The web app is also updated at the same time so that it stays in sync with the database structure. Thus far, we have not been able to perform a successful deployment without breaking replication, leading to a very messy situation and potential loss of data. The way we planned to do deployments is as follows:

1. Shut down the West side web app and direct all traffic to the East side
2. Stop the East side slave
3. Deploy the West side web app + DB updates
4. Turn back on the West side web app, shut down the East web app, and direct all traffic to the updated West side
5. Start the East side slave
6. Deploy to East side web app + DB updates
7. Return to nominal

The issue seems to be around step 3. When users are using the web app on the East side (thus modifying the DB) and the deployment is occurring on the West side (modifying that DB), this often stops replication from working, necessitating manual intervention, data copying, altering the pointers to the master logs, etc. Is there a better way to do this to allow each master to be updated in isolation without risking replication getting out of sync? I fear our setup is not correct and/or sustainable.
I have a master-master (circular) replication configured between two MySQL servers, call them West and East. There is also a web app that interfaces with each DB, also on its own West and East server, such that the West web app talks to the West database and the East web app talks to the East database. This all works fine during normal operations, but the problem arises when it comes time to do quarterly deployments. In these deployments there are almost always database content updates, and frequently database structure changes as well (e.g., adding a table, adding columns to a table, etc.). The web app is also updated at the same time so that it stays in sync with the database structure. Thus far, we have not been able to perform a successful deployment without breaking replication, leading to a very messy situation and potential loss of data. The way we planned to do deployments is as follows:

1. Shut down the West side web app and direct all traffic to the East side
2. Stop the East side slave
3. Deploy the West side web app + DB updates
4. Turn back on the West side web app, shut down the East web app, and direct all traffic to the updated West side
5. Start the East side slave
6. Deploy the East side web app + DB updates
7. Return to nominal

The issue seems to be around step 3. When users are using the web app on the East side (thus modifying the DB) and the deployment is occurring on the West side (modifying that DB), this often stops replication from working, necessitating manual intervention, data copying, altering the pointers to the master logs, etc. Is there a better way to do this that allows each master to be updated in isolation without risking replication getting out of sync? I fear our setup is not correct and/or sustainable.
Important considerations:

- There can be no downtime for the web app
- The updates the users may be making while directed to one side must be preserved when both sides go back online

Also, please note that I am not a DBA, so I apologize if I failed to include any important details about the configuration settings. I will try to update the question ASAP if needed.
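One pattern that avoids applying the live side's writes mid-deployment is to pause only the slave SQL thread on the side being changed, so incoming events queue in the relay log instead of racing the schema change. A rough sketch, assuming classic binlog-position replication as described above and DDL that events generated under the old schema can still apply against (e.g., new columns with defaults):

```sql
-- On West, after step 1 (all traffic now on East):
STOP SLAVE SQL_THREAD;   -- East's events still stream into the relay log, but are not applied

-- ... perform West's schema and content updates here ...

START SLAVE SQL_THREAD;  -- replay the queued East-side user writes against the new schema
SHOW SLAVE STATUS\G      -- wait for Seconds_Behind_Master to reach 0 before the cutover in step 4
```

The same dance then repeats on East in steps 5 and 6. DDL that is not backward-compatible with in-flight events (dropped or renamed columns) still needs to be split across two deployments.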
pineconesundae (123 rep)
Feb 23, 2021, 09:59 PM • Last activity: Feb 27, 2021, 12:59 AM
0 votes
2 answers
1690 views
MySQL Error 1062 after reboot node in multi master replication
I setup 2 MySQL Community Server (8.0.20) with a replication master to master. Today one of the nodes went down, and when it went up, the replication failed. I tried to just ignore a registry with ```SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;```, but the errors keep coming in. This is the status log of th...
I set up two MySQL Community Server (8.0.20) instances with master-to-master replication. Today one of the nodes went down, and when it came back up, replication failed. I tried to just skip the offending entry with `SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;`, but the errors keep coming. This is the status log of the particular node that failed:
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: database_1
                  Master_User: replicador
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000122
          Read_Master_Log_Pos: 217195469
               Relay_Log_File: 33afd376b907-relay-bin.000105
                Relay_Log_Pos: 2829
        Relay_Master_Log_File: mysql-bin.000122
             Slave_IO_Running: Yes
            Slave_SQL_Running: No
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 
       Replicate_Ignore_Table: 
      Replicate_Wild_Do_Table: 
  Replicate_Wild_Ignore_Table: 
                   Last_Errno: 1062
                   Last_Error: Could not execute Write_rows event on table gc57125800.ate_logs; Duplicate entry '49328847' for key 'ate_logs.PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000122, end_log_pos 188792316
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 188791943
              Relay_Log_Space: 28407134
              Until_Condition: None
               Until_Log_File: 
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File: 
           Master_SSL_CA_Path: 
              Master_SSL_Cert: 
            Master_SSL_Cipher: 
               Master_SSL_Key: 
        Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error: 
               Last_SQL_Errno: 1062
               Last_SQL_Error: Could not execute Write_rows event on table gc57125800.ate_logs; Duplicate entry '49328847' for key 'ate_logs.PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000122, end_log_pos 188792316
  Replicate_Ignore_Server_Ids: 
             Master_Server_Id: 1
                  Master_UUID: bef45e1b-99d6-11ea-a355-3e2547e4f083
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: 
           Master_Retry_Count: 86400
                  Master_Bind: 
      Last_IO_Error_Timestamp: 
     Last_SQL_Error_Timestamp: 201029 17:59:51
               Master_SSL_Crl: 
           Master_SSL_Crlpath: 
           Retrieved_Gtid_Set: 
            Executed_Gtid_Set: 
                Auto_Position: 0
         Replicate_Rewrite_DB: 
                 Channel_Name: 
           Master_TLS_Version: 
       Master_public_key_path: 
        Get_master_public_key: 0
            Network_Namespace: 
1 row in set (0.00 sec)
Not sure what I should do. Should I dump the other master and rebuild replication from scratch? Do I need to do this every time a server fails?
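When a master has diverged like this, skipping events only papers over missing or duplicated rows; the usual recovery is to re-seed the failed node from the healthy one, e.g. with a dump taken via `mysqldump --single-transaction --master-data=2`, and then re-point replication at the coordinates recorded in that dump. A rough outline; the host and user below come from the status output above, while the log file and position are placeholders to be replaced with the values from `--master-data`:

```sql
-- On the failed node, after restoring the dump from the healthy master:
STOP SLAVE;
RESET SLAVE ALL;
CHANGE MASTER TO
  MASTER_HOST = 'database_1',
  MASTER_USER = 'replicador',
  MASTER_PASSWORD = '...',               -- real password elided
  MASTER_LOG_FILE = 'mysql-bin.000122',  -- placeholder: use the file from the dump header
  MASTER_LOG_POS  = 4;                   -- placeholder: use the position from the dump header
START SLAVE;
SHOW SLAVE STATUS\G
```

This does not have to happen on every failure: if writes only ever reach one side at a time, a crashed node can usually rejoin cleanly without a re-seed.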
IamRichter (45 rep)
Oct 29, 2020, 06:51 PM • Last activity: Oct 30, 2020, 06:17 PM
2 votes
1 answers
6971 views
MySQL master master replication, safe to delete rows on both masters?
We have a MySQL (v5.1.61) cluster with two masters (and two slaves). We have a table `logs` with a primary key column `logID`. This log table grows rather huge, and we'd like to delete data from it. So I've written a script that does that. — Is it safe to run this script on both masters? It deletes...
We have a MySQL (v5.1.61) cluster with two masters (and two slaves). We have a table logs with a primary key column logID. This log table grows rather huge, and we'd like to delete data from it. So I've written a script that does that. — Is it safe to run this script on both masters? It deletes data only. What I'm wondering is what will the MySQL replication module do, if the row it's about to delete has already been deleted? Will replication break, or is it safe to delete rows on both masters? Does it matter if we use row based or statement based replication? Example: The log data deletion script deletes row 123 on Master A. At the same time, the script deletes that same row on Master B. Then master B reads its relay log, and sees: *"I should delete row 123, because it was deleted on Master A and I'm its slave. Let's delete it... ... ...but it's already gone!"* — now what will Master B do? Will it halt replication, as if there was a duplicate key error? Or will it think *"Fine, I need do nothing"*, and everything is okay.
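The outcome depends on the binlog format. With statement-based replication, the replicated `DELETE ... WHERE logID = 123` simply matches zero rows on the other master and replication continues. With row-based replication, the `Delete_rows` event for the already-missing row raises error 1032 (HA_ERR_KEY_NOT_FOUND) and stops the SQL thread. A hedged workaround for the row-based case, sketched below with a global side effect worth weighing:

```sql
-- Tell the slave SQL thread to treat already-deleted rows as a no-op.
-- Caution: IDEMPOTENT mode also suppresses other row conflicts
-- (missing rows on updates, duplicates on inserts), not just double deletes.
SET GLOBAL slave_exec_mode = 'IDEMPOTENT';
```

Running the cleanup script on only one master and letting the delete replicate to the other avoids the question entirely.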
KajMagnus (1249 rep)
Apr 10, 2014, 03:33 PM • Last activity: Sep 21, 2020, 08:31 PM
0 votes
1 answers
646 views
galera cluster how to run an ALTER without using RSU/TOI
I need to run an ALTER statement but would like to avoid locking/stalling the cluster. The [Galera docs][1] advise using: SET wsrep_OSU_method='RSU'; then run the ALTER statement, and once finished: SET wsrep_OSU_method='TOI'; The problem is that this needs to be run on every node, and I have only access to o...
I need to run an ALTER statement but would like to avoid locking/stalling the cluster. The Galera docs advise using: SET wsrep_OSU_method='RSU'; then run the ALTER statement, and once finished: SET wsrep_OSU_method='TOI'; The problem is that this needs to be run on every node, and I only have access to one node behind a load balancer. Are there any alternatives or workarounds?
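`wsrep_OSU_method` can be set at session scope, so the RSU switch only needs to happen in the session that executes the DDL; the catch is that RSU deliberately does not replicate the ALTER, so each node still needs the statement run locally. A sketch of the per-session form, with a hypothetical table and column name; if you genuinely can reach only one node, a tool that runs under the default TOI method, such as pt-online-schema-change, may be the more practical route:

```sql
-- All three statements must run on the same connection (session scope),
-- so the load balancer must keep the session pinned to one node.
SET SESSION wsrep_OSU_method = 'RSU';
ALTER TABLE example_table ADD COLUMN example_col INT NULL;  -- hypothetical names
SET SESSION wsrep_OSU_method = 'TOI';
```

Session scope also avoids the risk of leaving a node stuck in RSU if the connection drops between the two SET statements, since the setting dies with the session.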
nbari (149 rep)
May 13, 2019, 01:18 PM • Last activity: Jul 21, 2020, 11:01 PM
Showing page 1 of 20 total questions