Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
2
answers
70
views
Why use Multi-Master Replication?
I'm trying to understand what's the point of ***multi-master replication***:
- scaling database writes? ***sharding*** scales database writes too
- hot standby? ***single-master replication*** can be used for hot standby too
Is *multi-master replication* necessary?
What's the use case, why use it in addition to or in place of *sharding & single-master replication*?
csirmazbendeguz
(101 rep)
Mar 3, 2025, 03:04 PM
• Last activity: Jul 31, 2025, 07:37 PM
2
votes
1
answers
630
views
How to setup two mysql servers with group replication to work individually, when no network
I have two MySQL servers in two different places. Both servers must be able to read and write. All tables are designed around a unique ID (UUID) primary key. I want to sync these two databases live using a replication method when the network is available, but both servers must be able to work individually when the network is not available.
Which replication method is suitable for this situation? If someone has already tried this, please help me resolve this problem.
I have tried MySQL Group Replication, but when the network is disconnected I can only write to the one MySQL server that was bootstrapped; the other one doesn't allow writes.
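For what it's worth, the setup I keep reading about for exactly this requirement is classic asynchronous master-master (circular) replication rather than Group Replication, since asynchronous replicas simply queue up and catch up when the link returns. A minimal sketch, with placeholder hostnames/credentials and assuming GTID-based positioning:
# my.cnf on server 1 (server 2 mirrors this with server-id = 2 and offset = 2)
server-id                = 1
log_bin                  = mysql-bin
log_slave_updates        = ON
gtid_mode                = ON
enforce-gtid-consistency = ON
auto_increment_increment = 2   # avoids collisions on any AUTO_INCREMENT columns
auto_increment_offset    = 1
-- on each server, pointing at the other one:
CHANGE MASTER TO
  MASTER_HOST = 'other-server.example',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_AUTO_POSITION = 1;
START SLAVE;
Both sides keep accepting writes while the network is down and replay the backlog once it is back; the trade-off versus Group Replication is that conflicting writes to the same row are not detected or resolved automatically.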
LahiruTM
(146 rep)
Feb 13, 2019, 03:29 PM
• Last activity: May 11, 2025, 03:02 AM
-1
votes
1
answers
327
views
MySQL replication fails with permission error for unknown reason
For 36 hours I've been struggling to set up a simple master-master replication with MySQL 8. None of the solutions I found on the Internet work here, and meanwhile I'm a bit desperate. Even the first master -> slave replication won't work. So I hope for your help.
**What the environment looks like:**
- Master and slave are running on separate machines
- MySQL runs in Docker (the latest official MySQL image) on port 33060
- hosts are managed by/with Plesk Obsidian
**MySQL configuration on both machines**
[mysqld]
skip-host-cache
skip-name-resolve
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
secure-file-priv=/var/lib/mysql-files
user=mysql
pid-file=/var/run/mysqld/mysqld.pid
sql-mode=""
server-id = 1 # 1 = master host, 2 = slave host
binlog_do_db = dbsscm
gtid_mode = ON
enforce-gtid-consistency = ON
[client]
socket=/var/run/mysqld/mysqld.sock
!includedir /etc/mysql/conf.d/
**Master Setup**
CREATE USER 'replicator'@'%' IDENTIFIED BY 'REPLIC-PW';
GRANT ALL PRIVILEGES ON *.* TO replicator@'%';
FLUSH PRIVILEGES;
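(As far as I understand it, GRANT ALL is more than replication actually needs; the minimal grant would be just:)
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';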
**Test master access via:**
mysql --host=MASTER-IP --port=33060 -u replicator --password='REPLIC-PW'
**Slave Setup**
CREATE USER 'replicator'@'%' IDENTIFIED BY 'REPLIC-PW';
GRANT ALL PRIVILEGES ON *.* TO replicator@'%';
FLUSH PRIVILEGES;
**Test slave access via:**
mysql --host=SLAVE-IP --port=33060 -u replicator --password='REPLIC-PW'
The master SQL DB is accessible via mysql --host ... or telnet from the slave host.
The slave SQL DB is accessible via mysql --host ... or telnet from the master host.
**Setup Master -> Slave**
On the slave host:
STOP SLAVE;
CHANGE MASTER TO
MASTER_HOST = 'MASTER-IP',
MASTER_PORT = 33060,
MASTER_USER = 'replicator',
MASTER_PASSWORD = 'REPLIC-PW',
MASTER_AUTO_POSITION = 1;
START SLAVE;
SHOW SLAVE STATUS\G;
**Problem - Last_IO_Errno: 2003**
MySQL [(none)]> SHOW SLAVE STATUS\G;
*************************** 1. row ***************************
Slave_IO_State: Connecting to source
Master_Host: MASTER-IP
Master_User: replicator
Master_Port: 33060
Connect_Retry: 60
Master_Log_File:
Read_Master_Log_Pos: 4
Relay_Log_File: 8f18893b5155-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File:
Slave_IO_Running: Connecting
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 0
Relay_Log_Space: 157
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 2003
Last_IO_Error: error connecting to master 'replicator@MASTER-IP:33060' - retry-time: 60 retries: 10 message: Can't connect to MySQL server on 'MASTER-IP:33060' (110)
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
Master_UUID:
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Replica has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp: 221129 07:22:21
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set: 50e65cd5-6ccf-11ed-9fbf-0242ac110003:1-12
Auto_Position: 1
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
Master_public_key_path:
Get_master_public_key: 0
Network_Namespace:
1 row in set, 1 warning (0.000 sec)
**What have I tried to solve it?**
- searched Google and forums for more than 24 hours now
- removed/reset everything several times
- tried different user names and passwords for the "replicator"
- switched off the firewalls on the master & slave hosts
Whatever I do, I always get the same Last_IO_Errno: 2003 error, and I have no idea what else I can do.
BTW: The databases on the master are running and are accessible to client applications via the MASTER-IP.
P.S.: I'm aware that I posted a similar question yesterday, but this post contains far more details.
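One additional check that might be relevant here (the container name is a placeholder): since the replica's IO thread runs inside the Docker container, the connection test has to succeed from inside that container, not only from the host shell:
docker exec -it mysql-replica mysql --host=MASTER-IP --port=33060 -u replicator -p
If this also times out while the same command works on the host, the problem would be container-level networking or firewalling rather than MySQL privileges.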
Lars
(109 rep)
Nov 29, 2022, 07:44 AM
• Last activity: Apr 27, 2025, 07:01 PM
1
votes
1
answers
359
views
Setup replication between two Galera nodes
I've been using MariaDB 10.2 Galera with 3 nodes successfully on local servers. Now I have a second 3-node Galera cluster in another location, joined by a site-to-site IPsec tunnel.
I can't really merge them into one big cluster, as the performance drawback would be high (due to the physical locations of those two datacenters).
I'd like to set up asynchronous replication between those two clusters, but I can't make it work. Using "old-school" master-master replication (which is master/slave in both directions, as I understand it) between two nodes doesn't work, as conflicts start to arise.
Is there a way to set up asynchronous replication between those two Galera clusters?
Thanks for your help.
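To be explicit about what I mean by asynchronous replication between the clusters: picking one node in each cluster and wiring them up as an ordinary MariaDB master/slave pair (one direction per channel), roughly like this, with placeholder hostnames and assuming log_bin and log_slave_updates are enabled on the nodes involved:
-- on the chosen node of the second cluster:
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST = 'node1.cluster-a.example',
  MASTER_PORT = 3306,
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_USE_GTID = slave_pos;
START SLAVE;
SHOW SLAVE STATUS\G
The part I'm unsure about is what else is required so that transactions applied by this slave thread get propagated to the other Galera nodes of its cluster, and how to run this in both directions without the conflicts I saw with plain master-master.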
Hakujou
(11 rep)
Mar 16, 2018, 11:33 AM
• Last activity: Apr 20, 2025, 07:01 PM
1
votes
1
answers
743
views
Replicate data between MySQL servers with a different structure
As part of a system upgrade I need to sync a database between 2 different servers (master-master). The catch is that the structure is slightly different between the 2.
Most of the differences are:
- Columns that exist in 1 but not the other and need populating based on other fields/tables
- Type changes where some integer fields are now enums with text values that will need mapping from a static list
- Multiple tables have been merged into 1
So far I've been thinking about setting up triggers which would catch all INSERT/UPDATE/DELETE statements and insert them into a "queue" table. A PHP script would then convert the data and write the changes to the other server.
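To make that concrete, roughly what I have in mind (table and column names are made up for illustration):
CREATE TABLE sync_queue (
  id        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  src_table VARCHAR(64) NOT NULL,
  action    ENUM('INSERT','UPDATE','DELETE') NOT NULL,
  row_id    INT UNSIGNED NOT NULL,
  queued_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  processed TINYINT(1) NOT NULL DEFAULT 0
);
DELIMITER //
CREATE TRIGGER orders_after_insert AFTER INSERT ON orders
FOR EACH ROW
BEGIN
  -- record only the primary key; the PHP worker re-reads the row,
  -- maps renamed/merged columns and enum values, then writes it to the other server
  INSERT INTO sync_queue (src_table, action, row_id)
  VALUES ('orders', 'INSERT', NEW.id);
END//
DELIMITER ;
An UPDATE and a DELETE trigger per table would follow the same pattern, and the worker would mark rows processed = 1 once they have been applied on the other side.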
All queries in the system are executed through a central database object so I could potentially add some code to parse each query, create a new query that works with the other server and execute both queries together.
There's also the possibility to only use the new database and use query parsing code to re-format the data on each read/write to make it compatible with the old system.
Data has to be copied both ways (master-master) and it has to be as close to realtime as possible. Has anyone got any better suggestions or advice on how to do this?
SPKSmithy
(11 rep)
Sep 7, 2017, 11:42 AM
• Last activity: Apr 17, 2025, 10:04 AM
-1
votes
2
answers
832
views
Add database to existing MySQL master-master replication which uses binlog_do_db?
We have 2 MySQL 5.7 servers running in master-master replication mode, and there is a database replicating between these 2 servers:
binlog_do_db = database1
Now we need to add a new database into replication. I have read https://dba.stackexchange.com/questions/76194/add-database-to-existing-replication-which-uses-binlog-do-db , but it's for master-slave. Are there any other things that should be considered for the master-master situation?
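For clarity, my assumption is that binlog_do_db can simply be listed once per database on both masters (followed by a restart), after the new database has been copied to the second master so both sides start from the same data:
binlog_do_db = database1
binlog_do_db = database2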
Daniel Dai
(99 rep)
Nov 8, 2017, 02:24 AM
• Last activity: Mar 22, 2025, 09:00 PM
-1
votes
1
answers
679
views
MySQL Replication ignore few columns
I have set up MySQL replication; is there a way to skip replicating one column or multiple columns from a few tables?
Andy natty
(1 rep)
Jan 29, 2019, 12:45 AM
• Last activity: Mar 6, 2025, 01:02 AM
0
votes
0
answers
32
views
Using pt-osc(online schema change) in Multi-Master(Active/Passive) environment
MySQL version : Percona Server for MySQL 5.7.40-43-log
We have a Master-Master (Active-Passive) replication setup in our cluster. Only one master (the active one) receives write requests, while the other is passive. Each master then has 4 replicas. There is bi-directional replication going on between the active and passive masters.
Now, I need to perform an ALTER on a huge table that has ~600M rows and is around 250G in size. All the replicas, including the masters (since they are replicas of each other anyway), have the following replication flag: **Replicate_Wild_Do_Table as data%.%**, so I am not concerned about the _new/_old table creation during the actual execution of pt-osc, because **all the tables (%)** are included in replication.
Before running pt-osc in production, I would like to test it on a staging cluster, so I have created a cluster like A --> B --> C, where A is like my active master, B is my passive master, and C is my replica pointing to B. Please note that I have not set up master-master replication between A and B. My main concern is with the replicas that are present in the production cluster: while running it on the master, there shouldn't be any issues with replication, and the replicas should also end up with the updated table.
I have used the following flags in the pt-osc statement, but I am getting an error with recurse & recursion-method:
pt-online-schema-change --alter "ADD COLUMN Cost decimal(18,9) NOT NULL DEFAULT '0.000000000'" \
--alter-foreign-keys-method=auto \
--check-slave-lag=50 \
--max-lag=50 \
--recurse \
--recursion-method=processlist \
--host=replica-015 \
--max-load=Threads_running=30 \
--critical-load=Threads_running=50 \
--chunk-time=0.5 \
--slave-user=repl \
--slave-password='xxxxxxx' \
--chunk-size=10000 \
--progress percentage,1 \
--dry-run \
--user=test \
--password='xxxxxxx' \
D=avon,t=excel
I tried processlist and hosts, but that didn't work, and I got the following error:
I updated the pt-osc command to use --recurse=1 and --recursion-method=processlist and the message is gone, but I don't completely get what recurse or recursion-method did here, since in the output I don't see any slaves detected, whereas some other output I saw online explicitly says no slaves were detected.
I'm not completely sure about the DSN table which is mentioned in the Percona documentation: https://docs.percona.com/percona-toolkit/pt-online-schema-change.html
Can someone help me understand the replication-related flags in the pt-osc command, and how I should proceed with --dry-run or --execute in the staging cluster environment that I have created?
Thank you in advance.
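Regarding the DSN table: my reading of the linked documentation is that it is just a small table listing the replica DSNs that the tool should watch for lag, roughly like this (database and host names are placeholders):
CREATE DATABASE IF NOT EXISTS percona;
CREATE TABLE percona.dsns (
  id        INT NOT NULL AUTO_INCREMENT,
  parent_id INT DEFAULT NULL,
  dsn       VARCHAR(255) NOT NULL,
  PRIMARY KEY (id)
);
-- one row per replica that pt-osc should check
INSERT INTO percona.dsns (dsn) VALUES
  ('h=replica-011,P=3306'),
  ('h=replica-012,P=3306');
and then pass --recursion-method=dsn=D=percona,t=dsns instead of processlist, so the exact set of replicas is under my control. Is that correct?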



msbeast
(21 rep)
Nov 15, 2024, 09:36 PM
0
votes
0
answers
60
views
Reset replica of multi-master
We have master-master binlog replication using MySQL 8.0 where only master1 is written to by the application and master2 is pretty much a hot standby. I had a replica of master2 (that's where I do backups), but that server failed and I need to recreate the replica. Only one database was being replicated; it includes InnoDB tables and a fairly large 40+ million record MyISAM table.
Must I do a RESET MASTER on the master2 server to recreate the replica? Or just the commands below before I dump the database to be replicated, and then UNLOCK TABLES afterwards?
root@localhost [(none)]> FLUSH TABLES WITH READ LOCK;
root@localhost [(none)]> SHOW MASTER STATUS;
If RESET MASTER is required first, would I also need to STOP REPLICA on master1 first? Or should I use the --master-data option with mysqldump instead?
Also, master1 and master2 replicate all databases, whereas the replica only replicates one database; is that OK? It seemed to work when I set it up initially. But is it safer to replicate all databases on the replica as well? I want to use the replica server for other databases.
rwfitzy
(101 rep)
Aug 17, 2024, 08:35 PM
0
votes
1
answers
2501
views
MASTER-MASTER replication failed on Production: Coordinator stopped because there were error(s) in the worker(s)
This error occurred while I was importing a database dump into master1.
In production, our MASTER-MASTER replication failed.
UPDATE
------
`SELECT CHANNEL_NAME, WORKER_ID, THREAD_ID, SERVICE_STATE, LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE
FROM performance_schema.replication_applier_status_by_worker;`
giving output on **Master1**:
Worker 1 failed executing transaction 'ANONYMOUS' at master log node1-bin.000018, end_log_pos 594107878; Error 'Duplicate foreign key constraint name 'client_potential_meganet_agreements_client_id_foreign'' on query. Default database: 'netcore_support'. Query: 'ALTER TABLE `client_potential_meganet_agreements` ADD CONSTRAINT `client_potential_meganet_agreements_client_id_foreign` FOREIGN KEY (`client_id`) REFERENCES `client_potential_meganet` (`id`)'
giving output on Master2
Worker 1 failed executing transaction 'ANONYMOUS' at master log node2-bin.000040, end_log_pos 168239245; Could not execute Write_rows event on table netcore_feedback.sessions; Duplicate entry 'ctthXsIZ32QP8lJsYVInWGZW1GwNNgNN1t1wLGbn' for key 'sessions.PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log FIRST, end_log_pos 168239245
**MASTER1 error:**
> Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction 'ANONYMOUS' at master log node1-bin.000018, end_log_pos 594107483. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
**MASTER2 error:**
> Last_SQL_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction 'ANONYMOUS' at master log node2-bin.000040, end_log_pos 168239245. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.
**Master1:**
SHOW MASTER STATUS;
+------------------+-----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+-----------+--------------+------------------+-------------------+
| node2-bin.000051 | 434103214 | | | |
+------------------+-----------+--------------+------------------+-------------------+
**Master2:**
show master status;
+------------------+----------+--------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| node1-bin.000042 | 1451131 | | | |
+------------------+----------+--------------+------------------+-------------------+
My config:
[mysqld]
server-id=2
replicate-same-server-id=0
auto-increment-increment=2
auto-increment-offset=2
relay-log=/var/lib/mysql/node2-relay-bin
relay-log-index=/var/lib/mysql/node2-relay-bin.index
log-error=/var/log/mysql/mysql.error
master-info-file=/var/lib/mysql/mysql-master.info
relay-log-info-file=/var/lib/mysql/node1-relay-log.info
log-bin=/var/lib/mysql/node2-bin
max_connections = 1000
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysql/mysqld.log
pid-file=/run/mysqld/mysqld.pid
How can I fix this error?
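For reference, the only generic way I know of to step past a single failing event on a non-GTID replica is the skip counter; it silently drops that one event, so it only makes sense after the underlying duplicate row/constraint has been reconciled by hand:
STOP SLAVE;
SET GLOBAL sql_slave_skip_counter = 1;
START SLAVE;
SHOW SLAVE STATUS\G
Is that acceptable here, or does the duplicate data mean the second master has to be rebuilt from a fresh dump?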
Breaking News
(101 rep)
Jan 21, 2024, 06:56 PM
• Last activity: Jul 3, 2024, 04:24 PM
0
votes
1
answers
68
views
Master master relationship getting "too many connections" error after extended outage
I have a master-master database setup with an on-prem and a cloud-hosted MariaDB server. I can stop one database or the other for testing purposes and start it back up, and everything seems fine. The server that remains up will continue to serve connections while the other is down. However, we recently had an extended power outage, causing the connection between the two to break for around two hours. After around an hour, our cloud-hosted server started throwing "too many connections" errors. Is it possible that the cloud server kept trying to connect to the on-prem master/slave and that this resulted in hitting a limit on the number of connections?
All the retry and timeout values are at their defaults. When the connection was back up, restarting the service in the cloud resulted in the sync catching up and everything working. However, I'm looking for a way to keep my cloud-hosted server serving connections even if the on-premise server is down for an extended period.
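For the next occurrence, the diagnostics I plan to capture before the limit is hit (to see what is actually holding the connections) are simply:
SHOW GLOBAL VARIABLES LIKE 'max_connections';
SHOW GLOBAL STATUS LIKE 'Threads_connected';
SHOW FULL PROCESSLIST;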
user3080539
(1 rep)
Aug 10, 2023, 09:15 PM
• Last activity: Aug 11, 2023, 12:13 AM
1
votes
2
answers
631
views
Tuning MySQL on GROUP REPLICATION
I'm close to switching from an old version of MySQL, 5.6 (master/slave configuration), to MySQL 8 (3 nodes in a multi-master configuration) with GROUP REPLICATION.
The tables are 99% InnoDB.
The webfarm just copied the previous InnoDB configuration variables to these new machines.
I was wondering how to improve performance; each machine has the configuration below:
1) Cpu:
12 core (Intel Xeon Processor (Skylake, IBRS))
2) RAM:
total used free shared buff/cache available
Mem: 49456252 5164100 11487392 18972 32804760 43676276
Swap: 1998844 6924 1991920
3) Disk:
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 54G 1.2G 50G 3% /
/dev/mapper/vg0-mysql 1004G 414G 590G 42% /var/lib/mysql
All my databases add up to 250GB of data.
InnoDB variables:
*************************** 1. row ***************************
Variable_name: innodb_adaptive_flushing
Value: ON
*************************** 2. row ***************************
Variable_name: innodb_adaptive_flushing_lwm
Value: 10
*************************** 3. row ***************************
Variable_name: innodb_adaptive_hash_index
Value: ON
*************************** 4. row ***************************
Variable_name: innodb_adaptive_hash_index_parts
Value: 8
*************************** 5. row ***************************
Variable_name: innodb_adaptive_max_sleep_delay
Value: 150000
*************************** 6. row ***************************
Variable_name: innodb_api_bk_commit_interval
Value: 5
*************************** 7. row ***************************
Variable_name: innodb_api_disable_rowlock
Value: OFF
*************************** 8. row ***************************
Variable_name: innodb_api_enable_binlog
Value: OFF
*************************** 9. row ***************************
Variable_name: innodb_api_enable_mdl
Value: OFF
*************************** 10. row ***************************
Variable_name: innodb_api_trx_level
Value: 0
*************************** 11. row ***************************
Variable_name: innodb_autoextend_increment
Value: 64
*************************** 12. row ***************************
Variable_name: innodb_autoinc_lock_mode
Value: 2
*************************** 13. row ***************************
Variable_name: innodb_buffer_pool_chunk_size
Value: 134217728
*************************** 14. row ***************************
Variable_name: innodb_buffer_pool_dump_at_shutdown
Value: ON
*************************** 15. row ***************************
Variable_name: innodb_buffer_pool_dump_now
Value: OFF
*************************** 16. row ***************************
Variable_name: innodb_buffer_pool_dump_pct
Value: 25
*************************** 17. row ***************************
Variable_name: innodb_buffer_pool_filename
Value: ib_buffer_pool
*************************** 18. row ***************************
Variable_name: innodb_buffer_pool_in_core_file
Value: ON
*************************** 19. row ***************************
Variable_name: innodb_buffer_pool_instances
Value: 8
*************************** 20. row ***************************
Variable_name: innodb_buffer_pool_load_abort
Value: OFF
*************************** 21. row ***************************
Variable_name: innodb_buffer_pool_load_at_startup
Value: ON
*************************** 22. row ***************************
Variable_name: innodb_buffer_pool_load_now
Value: OFF
*************************** 23. row ***************************
Variable_name: innodb_buffer_pool_size
Value: 34359738368
*************************** 24. row ***************************
Variable_name: innodb_change_buffer_max_size
Value: 25
*************************** 25. row ***************************
Variable_name: innodb_change_buffering
Value: all
*************************** 26. row ***************************
Variable_name: innodb_checksum_algorithm
Value: crc32
*************************** 27. row ***************************
Variable_name: innodb_cmp_per_index_enabled
Value: OFF
*************************** 28. row ***************************
Variable_name: innodb_commit_concurrency
Value: 0
*************************** 29. row ***************************
Variable_name: innodb_compression_failure_threshold_pct
Value: 5
*************************** 30. row ***************************
Variable_name: innodb_compression_level
Value: 6
*************************** 31. row ***************************
Variable_name: innodb_compression_pad_pct_max
Value: 50
*************************** 32. row ***************************
Variable_name: innodb_concurrency_tickets
Value: 5000
*************************** 33. row ***************************
Variable_name: innodb_data_file_path
Value: ibdata1:12M:autoextend
*************************** 34. row ***************************
Variable_name: innodb_data_home_dir
Value:
*************************** 35. row ***************************
Variable_name: innodb_deadlock_detect
Value: ON
*************************** 36. row ***************************
Variable_name: innodb_dedicated_server
Value: OFF
*************************** 37. row ***************************
Variable_name: innodb_default_row_format
Value: dynamic
*************************** 38. row ***************************
Variable_name: innodb_directories
Value:
*************************** 39. row ***************************
Variable_name: innodb_disable_sort_file_cache
Value: OFF
*************************** 40. row ***************************
Variable_name: innodb_doublewrite
Value: ON
*************************** 41. row ***************************
Variable_name: innodb_fast_shutdown
Value: 1
*************************** 42. row ***************************
Variable_name: innodb_file_per_table
Value: ON
*************************** 43. row ***************************
Variable_name: innodb_fill_factor
Value: 100
*************************** 44. row ***************************
Variable_name: innodb_flush_log_at_timeout
Value: 1
*************************** 45. row ***************************
Variable_name: innodb_flush_log_at_trx_commit
Value: 1
*************************** 46. row ***************************
Variable_name: innodb_flush_method
Value: O_DIRECT
*************************** 47. row ***************************
Variable_name: innodb_flush_neighbors
Value: 0
*************************** 48. row ***************************
Variable_name: innodb_flush_sync
Value: ON
*************************** 49. row ***************************
Variable_name: innodb_flushing_avg_loops
Value: 30
*************************** 50. row ***************************
Variable_name: innodb_force_load_corrupted
Value: OFF
*************************** 51. row ***************************
Variable_name: innodb_force_recovery
Value: 0
*************************** 52. row ***************************
Variable_name: innodb_fsync_threshold
Value: 0
*************************** 53. row ***************************
Variable_name: innodb_ft_aux_table
Value:
*************************** 54. row ***************************
Variable_name: innodb_ft_cache_size
Value: 8000000
*************************** 55. row ***************************
Variable_name: innodb_ft_enable_diag_print
Value: OFF
*************************** 56. row ***************************
Variable_name: innodb_ft_enable_stopword
Value: ON
*************************** 57. row ***************************
Variable_name: innodb_ft_max_token_size
Value: 84
*************************** 58. row ***************************
Variable_name: innodb_ft_min_token_size
Value: 3
*************************** 59. row ***************************
Variable_name: innodb_ft_num_word_optimize
Value: 2000
*************************** 60. row ***************************
Variable_name: innodb_ft_result_cache_limit
Value: 2000000000
*************************** 61. row ***************************
Variable_name: innodb_ft_server_stopword_table
Value:
*************************** 62. row ***************************
Variable_name: innodb_ft_sort_pll_degree
Value: 2
*************************** 63. row ***************************
Variable_name: innodb_ft_total_cache_size
Value: 640000000
*************************** 64. row ***************************
Variable_name: innodb_ft_user_stopword_table
Value:
*************************** 65. row ***************************
Variable_name: innodb_io_capacity
Value: 200
*************************** 66. row ***************************
Variable_name: innodb_io_capacity_max
Value: 2000
*************************** 67. row ***************************
Variable_name: innodb_lock_wait_timeout
Value: 120
*************************** 68. row ***************************
Variable_name: innodb_log_buffer_size
Value: 134217728
*************************** 69. row ***************************
Variable_name: innodb_log_checksums
Value: ON
*************************** 70. row ***************************
Variable_name: innodb_log_compressed_pages
Value: ON
*************************** 71. row ***************************
Variable_name: innodb_log_file_size
Value: 268435456
*************************** 72. row ***************************
Variable_name: innodb_log_files_in_group
Value: 2
*************************** 73. row ***************************
Variable_name: innodb_log_group_home_dir
Value: ./
*************************** 74. row ***************************
Variable_name: innodb_log_spin_cpu_abs_lwm
Value: 80
*************************** 75. row ***************************
Variable_name: innodb_log_spin_cpu_pct_hwm
Value: 50
*************************** 76. row ***************************
Variable_name: innodb_log_wait_for_flush_spin_hwm
Value: 400
*************************** 77. row ***************************
Variable_name: innodb_log_write_ahead_size
Value: 8192
*************************** 78. row ***************************
Variable_name: innodb_lru_scan_depth
Value: 1024
*************************** 79. row ***************************
Variable_name: innodb_max_dirty_pages_pct
Value: 90.000000
*************************** 80. row ***************************
Variable_name: innodb_max_dirty_pages_pct_lwm
Value: 10.000000
*************************** 81. row ***************************
Variable_name: innodb_max_purge_lag
Value: 0
*************************** 82. row ***************************
Variable_name: innodb_max_purge_lag_delay
Value: 0
*************************** 83. row ***************************
Variable_name: innodb_max_undo_log_size
Value: 1073741824
*************************** 84. row ***************************
Variable_name: innodb_monitor_disable
Value:
*************************** 85. row ***************************
Variable_name: innodb_monitor_enable
Value:
*************************** 86. row ***************************
Variable_name: innodb_monitor_reset
Value:
*************************** 87. row ***************************
Variable_name: innodb_monitor_reset_all
Value:
*************************** 88. row ***************************
Variable_name: innodb_numa_interleave
Value: OFF
*************************** 89. row ***************************
Variable_name: innodb_old_blocks_pct
Value: 37
*************************** 90. row ***************************
Variable_name: innodb_old_blocks_time
Value: 1000
*************************** 91. row ***************************
Variable_name: innodb_online_alter_log_max_size
Value: 134217728
*************************** 92. row ***************************
Variable_name: innodb_open_files
Value: 3459
*************************** 93. row ***************************
Variable_name: innodb_optimize_fulltext_only
Value: OFF
*************************** 94. row ***************************
Variable_name: innodb_page_cleaners
Value: 4
*************************** 95. row ***************************
Variable_name: innodb_page_size
Value: 16384
*************************** 96. row ***************************
Variable_name: innodb_parallel_read_threads
Value: 4
*************************** 97. row ***************************
Variable_name: innodb_print_all_deadlocks
Value: OFF
*************************** 98. row ***************************
Variable_name: innodb_print_ddl_logs
Value: OFF
*************************** 99. row ***************************
Variable_name: innodb_purge_batch_size
Value: 300
*************************** 100. row ***************************
Variable_name: innodb_purge_rseg_truncate_frequency
Value: 128
*************************** 101. row ***************************
Variable_name: innodb_purge_threads
Value: 4
*************************** 102. row ***************************
Variable_name: innodb_random_read_ahead
Value: OFF
*************************** 103. row ***************************
Variable_name: innodb_read_ahead_threshold
Value: 56
*************************** 104. row ***************************
Variable_name: innodb_read_io_threads
Value: 4
*************************** 105. row ***************************
Variable_name: innodb_read_only
Value: OFF
*************************** 106. row ***************************
Variable_name: innodb_redo_log_archive_dirs
Value:
*************************** 107. row ***************************
Variable_name: innodb_redo_log_encrypt
Value: OFF
*************************** 108. row ***************************
Variable_name: innodb_replication_delay
Value: 0
*************************** 109. row ***************************
Variable_name: innodb_rollback_on_timeout
Value: OFF
*************************** 110. row ***************************
Variable_name: innodb_rollback_segments
Value: 128
*************************** 111. row ***************************
Variable_name: innodb_sort_buffer_size
Value: 1048576
*************************** 112. row ***************************
Variable_name: innodb_spin_wait_delay
Value: 6
*************************** 113. row ***************************
Variable_name: innodb_spin_wait_pause_multiplier
Value: 50
*************************** 114. row ***************************
Variable_name: innodb_stats_auto_recalc
Value: ON
*************************** 115. row ***************************
Variable_name: innodb_stats_include_delete_marked
Value: OFF
*************************** 116. row ***************************
Variable_name: innodb_stats_method
Value: nulls_equal
*************************** 117. row ***************************
Variable_name: innodb_stats_on_metadata
Value: OFF
*************************** 118. row ***************************
Variable_name: innodb_stats_persistent
Value: ON
*************************** 119. row ***************************
Variable_name: innodb_stats_persistent_sample_pages
Value: 20
*************************** 120. row ***************************
Variable_name: innodb_stats_transient_sample_pages
Value: 8
*************************** 121. row ***************************
Variable_name: innodb_status_output
Value: OFF
*************************** 122. row ***************************
Variable_name: innodb_status_output_locks
Value: OFF
*************************** 123. row ***************************
Variable_name: innodb_strict_mode
Value: ON
*************************** 124. row ***************************
Variable_name: innodb_sync_array_size
Value: 1
*************************** 125. row ***************************
Variable_name: innodb_sync_spin_loops
Value: 30
*************************** 126. row ***************************
Variable_name: innodb_table_locks
Value: ON
*************************** 127. row ***************************
Variable_name: innodb_temp_data_file_path
Value: ibtmp1:12M:autoextend
*************************** 128. row ***************************
Variable_name: innodb_temp_tablespaces_dir
Value: ./#innodb_temp/
*************************** 129. row ***************************
Variable_name: innodb_thread_concurrency
Value: 12
*************************** 130. row ***************************
Variable_name: innodb_thread_sleep_delay
Value: 0
*************************** 131. row ***************************
Variable_name: innodb_tmpdir
Value:
*************************** 132. row ***************************
Variable_name: innodb_undo_directory
Value: ./
*************************** 133. row ***************************
Variable_name: innodb_undo_log_encrypt
Value: OFF
*************************** 134. row ***************************
Variable_name: innodb_undo_log_truncate
Value: ON
*************************** 135. row ***************************
Variable_name: innodb_undo_tablespaces
Value: 2
*************************** 136. row ***************************
Variable_name: innodb_use_native_aio
Value: ON
*************************** 137. row ***************************
Variable_name: innodb_version
Value: 8.0.17
*************************** 138. row ***************************
Variable_name: innodb_write_io_threads
Value: 4
*************************** 139. row **************************
Variable_name: max_connections
Value: 3072
*************************** 140. row ***************************
Variable_name: max_user_connections
Value: 3072
I read a lot of articles about the size of the buffer pool (the 80% rule), but I'm still wondering whether there are other variables to increase or not.
On the actual master server I have the following statistics data:
In addition, I tried to run the mysqltuner Perl script; these are the suggestions:
-------- Recommendations ---------------------------------------------------------------------------
General recommendations:
Control warning line(s) into /var/log/mysql/error.log file
Control error line(s) into /var/log/mysql/error.log file
MySQL was started within the last 24 hours - recommendations may be inaccurate
Reduce your overall MySQL memory footprint for system stability
Dedicate this server to your database for highest performance.
Configure your accounts with ip or subnets only, then update your configuration with skip-name-resolve=1
Before changing innodb_log_file_size and/or innodb_log_files_in_group read this: https://bit.ly/2TcGgtU
Variables to adjust:
*** MySQL's maximum memory usage is dangerously high ***
*** Add RAM before increasing MySQL buffer variables ***
table_definition_cache(2000) > 464470 or -1 (autosizing if supported)
innodb_buffer_pool_size (>= 241.7G) if possible.
innodb_log_file_size should be (=3G) if possible, so InnoDB total log files size equals to 25% of buffer pool size.
innodb_buffer_pool_instances(=24)
The line "innodb_buffer_pool_size (>= 241.7G) if possible." shocked me a bit, how is possibile to have an amount of RAM AND buffer pool so huge?
I understood it's related to my datas size ( 250GB ) i mention before but is not suggestable such a big amount of RAM?
Feel free to ask other details and thanks for your help!
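For context, the tuner's 241.7G figure presumably just mirrors the total data + index size, which can be checked with a standard information_schema query:
SELECT ENGINE,
       ROUND(SUM(DATA_LENGTH + INDEX_LENGTH) / 1024 / 1024 / 1024, 1) AS size_gb
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
GROUP BY ENGINE;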
@danblack
Below is the MySQL config file of one of the GROUP REPLICATION machines; these machines are not used by my customers right now, I'll switch to them when everything is ready and properly configured.
https://pastebin.com/dS1AeM2R
This is the full mysqltuner report on GR machine:
https://pastebin.com/XEFbpGUV
@Wilson Hauck
I don't see an SSD or NVMe device:
pvdisplay
--- Physical volume ---
PV Name /dev/sdc
VG Name vg0
PV Size <651.93 GiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 166893
Free PE 493
Allocated PE 166400
PV UUID 4Xco4e-Es5d-LcZ2-FHxt-8PYw-B1E6-h9nwP8
cat /sys/block/sdc/queue/rotational
1
This is the output from one of the machines I'm going to switch to; I repeat, these machines are still not used by the application:
UPTIME 25 hours:
B)
https://pastebin.com/Uqtre6BS
C)
https://pastebin.com/3HfWdCTR
D)
https://pastebin.com/EazB2XWZ
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 193064
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 193064
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
**In addition to this information, here is the output from the slave machine currently used by my customers:**
UPTIME: 26 days
MySQL config file:
https://pastebin.com/EztvxgYX
mysqltuner script + SHOW GLOBAL STATUS + SHOW GLOBAL VARIABLES + SHOW FULL PROCESSLIST (I put everything here, as I've used up the links allowed by my reputation)
https://pastebin.com/5rq8sLF1
Disk type:
pvdisplay
--- Physical volume ---
PV Name /dev/vdb1
VG Name vg0
PV Size 912.54 GiB / not usable 2.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 233611
Free PE 5259
Allocated PE 228352
PV UUID G8aSfO-Ktbg-UcfP-4yoL-jjH9-qjxe-OUN9ni
cat /sys/block/vdb/queue/rotational
1
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 459880
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 459880
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
iostat -xm 5 3
Linux 3.2.0-4-amd64 02/24/20 _x86_64_ (12 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
6.95 0.00 0.65 1.20 0.01 91.19
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.16 0.09 0.15 0.00 0.00 22.30 0.00 1.02 1.94 0.46 0.46 0.01
vdb 5.84 89.65 156.08 208.73 7.59 3.05 59.71 0.92 2.53 1.93 2.97 0.68 24.89
dm-0 0.00 0.00 161.92 269.31 7.59 3.05 50.51 1.21 2.81 2.52 2.98 0.58 24.96
avg-cpu: %user %nice %system %iowait %steal %idle
3.35 0.00 0.54 4.43 0.00 91.69
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.60 4.40 1.40 0.02 0.01 9.10 0.01 1.93 2.55 0.00 0.14 0.08
vdb 3.40 35.60 830.40 51.20 17.90 0.30 42.26 1.16 1.31 1.18 3.53 0.85 75.04
dm-0 0.00 0.00 833.80 74.60 17.90 0.30 41.01 1.54 1.69 1.53 3.47 0.83 75.20
avg-cpu: %user %nice %system %iowait %steal %idle
3.40 0.00 0.43 4.64 0.00 91.52
Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
vda 0.00 0.40 0.00 0.40 0.00 0.00 16.00 0.00 0.00 0.00 0.00 0.00 0.00
vdb 5.80 30.80 810.60 46.80 14.65 0.27 35.63 1.39 1.59 1.61 1.32 1.04 89.36
dm-0 0.00 0.00 816.20 67.20 14.67 0.27 34.63 2.02 2.24 2.34 1.04 1.01 89.36
**UPDATE 2ND EDIT**
- SLAVE SERVER
( SHOW GLOBAL STATUS; + SHOW GLOBAL VARIABLES; + SHOW FULL PROCESSLIST; )
https://pastebin.com/nTeSuCjZ
- STAGE SERVER
SHOW GLOBAL VARIABLES;
https://pastebin.com/n369zpdB
( SHOW GLOBAL STATUS; + SHOW FULL PROCESSLIST; )
https://pastebin.com/8rDtkubv

Jung
(11 rep)
Feb 23, 2020, 07:27 PM
• Last activity: Dec 4, 2022, 11:05 AM
0
votes
2
answers
1819
views
How do we switchover a slave to another master instead of promoting the slave to be the master?
We have a replication configuration wherein there are 2 masters and 20 replicas (each master has 10 replicas). Now, we need to perform a MySQL version upgrade on these instances. The approach for this would be to upgrade the replicas first, followed by the masters.
The concern is: while upgrading one of the master servers (let's say Master1), how do we switch the replicas pointing to Master1 over to Master2? Promoting a slave to master wouldn't be a good approach, as we have multiple replicas.
I read an article about Switching Sources During Failover wherein they suggested:
1. Stop Slave, Reset Master
2. Stop Slave, Change Master to ..
However, I am unable to find details on how we should proceed with upgrades in our replication topology and what steps we should follow to do a seamless switchover of the replicas.
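For the sake of discussion, the per-replica switchover I picture, assuming GTID-based replication is enabled so the replica can auto-position against either master (hostnames and credentials are placeholders), is just:
STOP SLAVE;
CHANGE MASTER TO
  MASTER_HOST = 'master2.example.com',
  MASTER_PORT = 3306,
  MASTER_USER = 'repl',
  MASTER_PASSWORD = 'secret',
  MASTER_AUTO_POSITION = 1;
START SLAVE;
SHOW SLAVE STATUS\G
What I am really unsure about is the ordering of these steps across the 10 replicas and the two masters so that nothing is lost during the upgrade.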
msbeast
(21 rep)
Aug 30, 2022, 10:54 PM
• Last activity: Sep 12, 2022, 04:28 PM
2
votes
0
answers
55
views
Why would my MySQL replica server experience an sql integrity constraint error after losing connection and attempting to start again?
I have a master-master replication setup between two different servers using MariaDB. One is the primary server (server A) that is connected to my web applications, and the other is a replica that would be used for disaster recovery if the primary one went down (server B). A while back I shut down server B rather abruptly for about 5 minutes so that I could resize it, but when I brought it back up, I received an SQL integrity constraint violation error that prevented replication from starting back up.
I ended up rebuilding server B from a backup of server A later down the road and started replication back up successfully.
My question is: why would this connection issue have caused an integrity constraint error? I thought it would be able to simply pick up from the log file position that it stopped reading from. I'm trying to figure this out so that, in the future, if server B loses its connection for whatever reason, I'm able to recover from it instead of having to rebuild the database.
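For reference, these are the crash-safety settings I am now double-checking on server B; whether their previous values actually explain this incident is only an assumption on my part:
relay_log_recovery             = ON   # rebuild relay logs from the last applied position after a crash
sync_binlog                    = 1    # fsync the binlog on every commit
innodb_flush_log_at_trx_commit = 1    # fsync the redo log on every commit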
Timothy Fisher
(177 rep)
Feb 27, 2022, 06:07 AM
• Last activity: Mar 3, 2022, 04:18 PM
0
votes
1
answers
704
views
do_table or ignore_table alternatives on MariaDB Galera Multi-Master
As far as I can see, there are no do_table or ignore_table configs in MariaDB Galera multi-master (master-master) to skip or ignore specific tables. There are only the binlog_do_db and binlog_ignore_db options, which only work for databases.
I have multiple servers in my formation, and I only want to sync two tables in each server database. Other tables' data should be intact.
What do you suggest?
Roham
(3 rep)
Dec 5, 2021, 07:17 PM
• Last activity: Dec 7, 2021, 03:27 PM
1
votes
3
answers
4395
views
How to Sync MySQL Databases when offline?
My application that is running on a client uses a MySQL database running on a server. So multiple clients are connected to the same server. That works well when the server is online. But now I would like to enhance my application to be able to run in an offline mode.
+--------------+
| |
+-----------+ SERVER +----------+
| | | |
| +-------+------+ |
| | |
+------+-------+ +-------+------+ +-------+------+
| | | | | |
| Client 1 | | Client 2 | | Client X |
| | | | | |
+--------------+ +--------------+ +--------------+
Now comes the problem: what happens when the client is offline? I need a copy of my MySQL database on each client too. By default the application interacts with the MySQL instance on the server. If this server is not accessible (for whatever reason: the server is offline or the client has no internet connection) it should use the MySQL instance running on the client. When the client/server connection is available again, the databases need to be synced automatically.
My question is now: how do I achieve this? First of all I checked MySQL replication, but in my scenario I have multiple "masters" and an unknown number of clients, so I'm afraid that replication is not my solution. Is it possible to solve my problem with MaxScale? I've never worked with that, so I'd really appreciate any help.
Lars
(109 rep)
Sep 19, 2021, 09:13 AM
• Last activity: Oct 4, 2021, 07:54 AM
1
votes
3
answers
1862
views
Getting fatal error 1236 from master when reading data from binary log in mariadb master master replication
I am facing a situation similar to this one: https://dba.stackexchange.com/questions/139131/error-1236-could-not-find-first-log-file-name-in-binary-log-index-file?newreg=4677761266604ec98e94e511bac681c0
I have tried all the answers, but I am still not able to get rid of this error on server id 1:
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
I have two nodes running MariaDB 10: the master node with server id 1 and a second master node with server id 2.
Server id 1:
MariaDB [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 192.168.1.10
Master_User: replicator
Master_Port: 6306
Connect_Retry: 60
Master_Log_File: mariadb-bin.000001
Read_Master_Log_Pos: 313
Relay_Log_File: mysqld-relay-bin.000001
Relay_Log_Pos: 4
Relay_Master_Log_File: mariadb-bin.000001
Slave_IO_Running: No
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 313
Relay_Log_Space: 256
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 2
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: conservative
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Slave_DDL_Groups: 0
Slave_Non_Transactional_Groups: 0
Slave_Transactional_Groups: 0
1 row in set (0.001 sec)
checked binary logs:
MariaDB [(none)]> SHOW BINARY LOGS;
+------------------+-----------+
| Log_name | File_size |
+------------------+-----------+
| mysql-bin.000001 | 11337613 |
| mysql-bin.000002 | 138563 |
| mysql-bin.000003 | 1100347 |
| mysql-bin.000004 | 7406418 |
| mysql-bin.000005 | 110683302 |
| mysql-bin.000006 | 144929 |
+------------------+-----------+
MariaDB [(none)]> show master status;
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000006 | 173699 | | |
+------------------+----------+--------------+------------------+
then Server id 2:
MariaDB [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.1.11
Master_User: replicator
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000006
Read_Master_Log_Pos: 125740
Relay_Log_File: mysqld-relay-bin.000002
Relay_Log_Pos: 10034
Relay_Master_Log_File: mysql-bin.000006
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 125740
Relay_Log_Space: 10337
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: conservative
1 row in set (0.00 sec)
and binary logs:
MariaDB [(none)]> SHOW BINARY LOGS;
+------------------+-----------+
| Log_name | File_size |
+------------------+-----------+
| mysql-bin.000001 | 313 |
+------------------+-----------+
MariaDB [(none)]> show master status;
+------------------+----------+--------------+------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000001 | 313 | | |
+------------------+----------+--------------+------------------+
I do have a backup of the database, but I still want to understand and fix the issue rather than restarting from the beginning.
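Based on the output above, server 1 is asking server 2 for mariadb-bin.000001, while server 2's binary logs are actually named mysql-bin.*. If I read the error correctly, the brute-force fix would be to re-point server 1 at the coordinates server 2 currently reports (at the risk of skipping anything not yet applied):
STOP SLAVE;
CHANGE MASTER TO
  MASTER_LOG_FILE = 'mysql-bin.000001',
  MASTER_LOG_POS = 313;
START SLAVE;
SHOW SLAVE STATUS\G
But I would still like to understand how the slave ended up pointing at a binlog basename that doesn't exist on the master.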
Chang Zhao
(111 rep)
Sep 12, 2021, 09:23 PM
• Last activity: Sep 13, 2021, 12:55 AM
-1
votes
1
answers
483
views
Error in Master/Master replication between Mysql 8 and Mysql 5.6
I've set up **master-to-master replication between a MySQL 5.6 and a MySQL 8.0 server**. I can get both slaves to run on both servers, except that when there is a change made on the MySQL 8.0 server (e.g. a database/table created, or a row inserted), the **MySQL 5.6 slave encounters an error**:
##### Mysql 5.6 > 'SHOW SLAVE STATUS\G;'
> Last_Errno: 1594
> Last_Error: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave.
I wish to get replication working both ways whilst keeping the versions as they are. Is there any way this can be achieved?
#### Mysql 8 config
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
datadir = /var/lib/mysql
bind-address = 0.0.0.0
key_buffer_size = 16M
max_allowed_packet = 16M
myisam-recover-options = BACKUP
max_connections = 1000
log_error = /var/log/mysql/error.log
server-id = 2
log_bin = /var/log/mysql/mysql-bin.log
binlog-format = row
binlog_expire_logs_seconds = 432000
relay-log = /var/log/mysql/mysql-relay-bin.log
relay-log-index = /var/log/mysql/mysql-relay-bin.index
replicate-ignore-db = mysql,information_schema,performance_schema
max_binlog_size = 100M
binlog_ignore_db = mysql,information_schema,performance_schema
binlog_checksum = NONE
lower_case_table_names = 1
#### Mysql 5.6 config
[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0
[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
log_error=/var/log/mysql/mysql_error.log
wait_timeout = 900
key_buffer = 16M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
max_connections = 1000
query_cache_limit = 1M
query_cache_size = 16M
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
expire_logs_days = 5
relay-log = /var/log/mysql/mysql-relay-bin.log
relay-log-index = /var/log/mysql/mysql-relay-bin.index
replicate-ignore-db = mysql,information_schema,performance_schema
max_binlog_size = 100M
binlog-format = row
binlog_ignore_db = mysql,information_schema,performance_schema
lower_case_table_names = 1
[mysqldump]
quick
quote-names
max_allowed_packet = 16M
[mysql]
[isamchk]
key_buffer = 16M
!includedir /etc/mysql/conf.d/
ishvn
(1 rep)
Aug 4, 2021, 06:42 AM
• Last activity: Aug 17, 2021, 06:26 AM
2
votes
0
answers
122
views
Issues with failback for logical replication using postgres
I am using PostgreSQL 13 to set up logical replication. When the original active node (A) becomes the secondary, the prior secondary (B), which is now the active node, needs to sync back to node A. To summarize the issue: Node A is active and fails at some point. Node B takes over and is now...
I am using PostgreSQL 13 to set up logical replication. When the original active node (A) becomes the secondary, the prior secondary (B), which is now the active node, needs to sync its changes back to node A.
To summarize the issue:
Node A is active and fails at some point. Node B takes over, becomes the active node, and accepts I/O from the application. Later, node A recovers from the failure and is ready to become active again. To do that, node A needs to catch up on the data that was added while it was down, so it creates a subscription to node B, which now acts as the publisher. The issue is that this subscription on node A fails, because node A already holds data from before it went down, and that existing data results in conflicts.
So what are my options here?
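Two patterns that tend to come up for this (sketched below with hypothetical table, subscription, and connection names, and assuming node B's copy is the one to keep) are: truncate node A's tables and let the subscription's initial copy repopulate them, or keep node A's rows and skip the initial copy with copy_data = false:

    -- On node A after it recovers; all names and the connection string are illustrative.

    -- Option 1: drop node A's stale rows and let the initial table sync re-copy
    -- everything from node B.
    TRUNCATE TABLE my_table;              -- repeat for every replicated table
    CREATE SUBSCRIPTION sub_a_from_b
        CONNECTION 'host=node_b dbname=app user=repl'
        PUBLICATION pub_b;

    -- Option 2: keep node A's rows and only stream changes made after the
    -- subscription starts. Safe only if node A's data already matches node B's.
    CREATE SUBSCRIPTION sub_a_from_b
        CONNECTION 'host=node_b dbname=app user=repl'
        PUBLICATION pub_b
        WITH (copy_data = false);

Neither option helps if node A accepted writes of its own that never reached node B; those rows have to be reconciled or removed first, otherwise the apply worker will keep stopping on the same conflict.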
nishi
(121 rep)
Aug 4, 2021, 12:25 PM
• Last activity: Aug 4, 2021, 02:00 PM
1
votes
1
answers
491
views
Why is disk IO higher on Debian 10 (MariaDB 10.3) with MySQL replication?
I have a MySQL/MariaDB master-master replication setup that has been working well for several years, the db and tables are not very large (under 200MB for 18 tables). These were on 2 servers running Debian 9 and MariaDB 10.1.44. Now I've spun up 2 new servers running Debian 10 and I'm in the process...
I have a MySQL/MariaDB master-master replication setup that has been working well for several years; the db and tables are not very large (under 200 MB across 18 tables). These were on 2 servers running Debian 9 and MariaDB 10.1.44. Now I've spun up 2 new servers running Debian 10 and I'm in the process of moving things over to them, but I stopped half-way because I'm seeing much higher disk IO on the new servers (about 6x more).
So currently, one of the Debian 9 servers and one of the Debian 10 servers are in a master-master relationship, with the other Debian 9 server still acting as a slave of the Debian 9 master, and the same on the Debian 10 side of things.
I didn't notice the increased disk IO until after all read/write operations were moved to the Debian 10 master. I was trying to browse tables and saw how slow it was outputting the query results, and it felt like I was on a dial-up connection watching the rows scroll across. It turned out there was some disk contention with the virtual host that was partly responsible, and that problem is now mostly gone.
Now, as you can imagine, none of this is crashing the server with such a "small" set of tables, but as things continue to grow, I'm concerned that some underlying misconfiguration will rear its ugly head at an inopportune time. On the Debian 9 servers, iotop shows steady write IO at around 300-600 KB/s, but on Debian 10 it spikes as high as 6 MB/s and averages around 3 MB/s.
Here is the standard config on all 4 servers (the snippet itself appears a bit further down); everything else is default Debian settings (or MariaDB, as the case may be), and the full config for Debian 10 is at https://pastebin.com/Lk2FR4e3 .
The Debian 10 master's graph (the graphs referred to in the next few paragraphs are the screenshots at the end of the post) shows where I moved operations back to the Debian 9 server (more on that in a second). Notice that the disk IO does go down slightly at that point, but not to the levels we'll see on the Debian 9 master. Also note that the public bandwidth chart is pretty much only replication traffic, and that the disk IO far outstrips it; the private traffic is all the reads/writes from our application servers.
The Debian 9 master's graph shows where I moved all operations back to that server: the private traffic shoots up, but the write IO hovers around 500 KB/s. I didn't have resource graphs being recorded on the old servers, hence the missing bits on the left.
And lastly, for reference, the Debian 10 slave's graph (that server will eventually be half of the master-master replication): there are no direct reads/writes on this server, so all of its disk IO is from replication.
Just to see what would happen (as I alluded to above), I reverted all direct read/write operations to the Debian 9 master server. While disk IO did fall somewhat on the Debian 10 server, it did not grow on the Debian 9 server to any noticeable extent.
Also, on the Debian 10 slave server, I did STOP SLAVE once to see what happened, and the disk IO went to almost nothing. Doing the same on the Debian 10 master server did not have the same drastic effect, though it's possible there WAS some change that wasn't obvious; the disk IO numbers on iostat fluctuate much more wildly on the Debian 10 servers than they do on the Debian 9 servers.
UPDATE: after moving all read/write/update operations OFF the Debian 10 master, a STOP SLAVE command has the same exact effect as it did on the Debian 10 slave.
UPDATE 2: the more I look at this, the more I think it has nothing to do with replication. It seems that replication simply magnifies the effects of this problem.
So, what is going on here? How can I figure out why MariaDB is apparently writing so much data to disk, and/or how can I stop it?
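One way to narrow down which subsystem is producing the writes (a minimal sketch using the standard InnoDB and binlog write counters that MariaDB exposes as status variables) is to sample them twice over a fixed interval and compare the deltas with what iotop attributes to mysqld:

    -- Run on the Debian 10 master: take the deltas between the two samples.
    -- Innodb_data_written / Innodb_os_log_written are bytes written to the
    -- data files and redo log; Binlog_bytes_written covers the binary log.
    SHOW GLOBAL STATUS WHERE Variable_name IN
        ('Innodb_data_written',
         'Innodb_os_log_written',
         'Innodb_buffer_pool_pages_flushed',
         'Binlog_bytes_written');
    SELECT SLEEP(60);
    SHOW GLOBAL STATUS WHERE Variable_name IN
        ('Innodb_data_written',
         'Innodb_os_log_written',
         'Innodb_buffer_pool_pages_flushed',
         'Binlog_bytes_written');

If the deltas account for far less than iotop shows, the writes are probably coming from somewhere other than InnoDB or the binlog (on-disk temporary tables, logs, or the filesystem layer), which would point away from the replication settings.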
Thanks in advance!
max_connections = 1000
query_cache_limit = 4M
query_cache_size = 0
query_cache_type = 0
server-id = 1 # different for each server
log_bin = /var/log/mysql/mysql-bin.log
binlog_do_db = optimizer
replicate-do-db = optimizer
report-host = xyz.example.com #changed obviously
log-slave-updates = true
innodb_log_file_size = 32M
innodb_buffer_pool_size = 256M
Here are some other settings I've tried that don't seem to make any difference (checked each one by one):
binlog_annotate_row_events = OFF
binlog_checksum = NONE
binlog_format = STATEMENT
innodb_flush_method = O_DIRECT_NO_FSYNC
innodb_log_checksums = OFF
log_slow_slave_statements = OFF
replicate_annotate_row_events = OFF
I've gone through all the settings here that have changed from MariaDB 10.1 to 10.3, and can't seem to find any that make a difference: https://mariadb.com/kb/en/replication-and-binary-log-system-variables/
I also did a full listing of the server variables and compared the 10.1 configuration to the 10.3 configuration, but didn't find anything obvious. So either I'm missing something, or the problem lies with Debian 10 itself.
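For anyone who wants to repeat that comparison, the variable listing can be pulled from information_schema on both versions and diffed offline; a small sketch, assuming nothing beyond the built-in view:

    -- Run on both the 10.1 and 10.3 servers, save each result set, and diff them.
    SELECT variable_name, variable_value
    FROM information_schema.global_variables
    ORDER BY variable_name;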
Results of SHOW ENGINE INNODB STATUS are here: https://pastebin.com/mJdLQv8k
Now, how about that disk IO, what is it actually doing? I include 3 screenshots here to show what I mean by increased disk IO:



nosilver4u
(61 rep)
Jun 19, 2020, 11:36 PM
• Last activity: Mar 19, 2021, 06:13 PM