Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0 votes • 1 answer • 139 views
How to configure mysql group replication with different ports?
I have a k8s cluster and each `mysql` instance is well connected together. But I have another `mysql` server outside the cluster. Each `mysql` instance inside the cluster has the default `mysql` port, `3306`, and an external port, which is randomly assigned. When I start `mysql` group replication with only the instances that are inside the cluster, everything works fine.
The problem is that the `mysql` instance outside the cluster tries to connect to the default port `3306` with the `repl` user, when it should be connecting to the randomly generated external port, and I don't know how to tell it which port to use...
**How can I make the outside instance use that randomly generated port to connect to the other instances inside the cluster for mysql group replication?**
Here is my error log:
error connecting to master 'repl@db1-headless:3306'
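A hedged sketch of one possible direction, not a confirmed fix: group replication's recovery channel connects to the donor's advertised endpoint, which comes from `report_host`/`report_port`. Having each in-cluster member advertise its externally reachable endpoint might point the outside joiner at the right port (the host name and port below are assumptions, not values from the question):
[mysqld]
# advertise the endpoint reachable from outside the k8s cluster, so a joiner
# outside the cluster dials this instead of the in-cluster default 3306
report_host = db1.example.com   # hypothetical externally resolvable name
report_port = 30306             # hypothetical randomly assigned external port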
Hasan Parasteh
(103 rep)
Aug 13, 2022, 02:31 PM
• Last activity: Aug 6, 2025, 01:03 AM
2 votes • 2 answers • 841 views
Unexplained MySQL memory usage
I am experiencing an issue with MySQL memory usage increasing over time to a point where OOM Killer kills the MySQL process.
I have 3 servers configured with Group Replication (1 Primary and 2 secondary).
The servers are running on AWS EC2:
m5.2xlarge (32 GB RAM, 8 vCPU, 500 GB Amazon EBS storage)
Debian 11
MySQL 8.0.36
Link to additional requested information: https://justpaste.it/f0f4n
Updated additional information after 44 hours of server running: https://justpaste.it/6v4pl
The memory usage issue seems to occur faster on a server that is the PRIMARY server in the cluster but does affect all servers.
I have been trying, without success, to figure out what the cause of this could be.
The system memory usage for the MySQL process and the listed memory allocation in MySQL do not correspond.
*The additional memory usage does seem to be related to HIGH_NUMBER_OF_BYTES_USED of "memory/innodb/ut0rbt". It seems that this memory accounts for the reported memory discrepancy. In all instances where the memory is growing, this reported information is growing as well.*
Below is the output of `top`. As you can see, the physical RAM currently in use by MySQL is 27.0G:
Tasks: 118 total, 1 running, 117 sleeping, 0 stopped, 0 zombie
%Cpu(s): 13.4 us, 0.5 sy, 0.0 ni, 85.3 id, 0.5 wa, 0.0 hi, 0.2 si, 0.0 st
MiB Mem : 31650.4 total, 263.6 free, 27895.9 used, 3490.9 buff/cache
MiB Swap: 4096.0 total, 0.1 free, 4095.9 used. 3302.6 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
757962 mysql 20 0 33.3g 27.0g 0 S 55.4 87.3 1104:36 mysqld
Below is the memory usage reported by MySQL:
mysql> select event_name,current_alloc from sys.memory_global_by_current_bytes limit 10;
+---------------------------------------------------------------+---------------+
| event_name | current_alloc |
+---------------------------------------------------------------+---------------+
| memory/innodb/buf_buf_pool | 8.18 GiB |
| memory/group_rpl/GCS_XCom::xcom_cache | 1023.71 MiB |
| memory/innodb/log_buffer_memory | 256.00 MiB |
| memory/temptable/physical_ram | 94.00 MiB |
| memory/performance_schema/events_statements_summary_by_digest | 40.28 MiB |
| memory/mysys/KEY_CACHE | 32.00 MiB |
| memory/innodb/ut0link_buf | 24.00 MiB |
| memory/sql/TABLE | 23.30 MiB |
| memory/innodb/lock0lock | 22.22 MiB |
| memory/innodb/memory | 21.93 MiB |
+---------------------------------------------------------------+---------------+
10 rows in set (0.01 sec)
mysql> select format_bytes(sum(current_alloc)) from sys.x$memory_global_by_current_bytes;
+----------------------------------+
| format_bytes(sum(current_alloc)) |
+----------------------------------+
| 9.95 GiB |
+----------------------------------+
1 row in set (0.00 sec)
Below is a report on what my expected memory usage would be based on the configuration of the server.
+------------------------------------------+--------------------+
| key_buffer_size | 32.000 MB |
| query_cache_size | 0.000 MB |
| innodb_buffer_pool_size | 8192.000 MB |
| innodb_additional_mem_pool_size | 0.000 MB |
| innodb_log_buffer_size | 256.000 MB |
+------------------------------------------+--------------------+
| BASE MEMORY | 8480.000 MB |
+------------------------------------------+--------------------+
| sort_buffer_size | 0.250 MB |
| read_buffer_size | 0.125 MB |
| read_rnd_buffer_size | 0.250 MB |
| join_buffer_size | 0.250 MB |
| thread_stack | 1.000 MB |
| binlog_cache_size | 0.031 MB |
| tmp_table_size | 16.000 MB |
+------------------------------------------+--------------------+
| MEMORY PER CONNECTION | 17.906 MB |
+------------------------------------------+--------------------+
| Max_used_connections | 129 |
| max_connections | 300 |
+------------------------------------------+--------------------+
| TOTAL (MIN) | 10789.906 MB |
| TOTAL (MAX) | 13851.875 MB |
+------------------------------------------+--------------------+
I suspect that the memory usage is somehow related to `memory/innodb/ut0rbt`, based on its `HIGH_NUMBER_OF_BYTES_USED` being 16GB in the report below. I cannot find much information on `memory/innodb/ut0rbt` when searching, and am not sure where to go from here. Any guidance would be appreciated.
SELECT * FROM performance_schema.memory_summary_global_by_event_name where SUM_NUMBER_OF_BYTES_ALLOC !=0 ORDER BY HIGH_NUMBER_OF_BYTES_USED DESC;
+--------------------------------------------------------------------------------+-------------+-------------+---------------------------+--------------------------+----------------+--------------------+-----------------+--------------------------+------------------------------+---------------------------+
| EVENT_NAME | COUNT_ALLOC | COUNT_FREE | SUM_NUMBER_OF_BYTES_ALLOC | SUM_NUMBER_OF_BYTES_FREE | LOW_COUNT_USED | CURRENT_COUNT_USED | HIGH_COUNT_USED | LOW_NUMBER_OF_BYTES_USED | CURRENT_NUMBER_OF_BYTES_USED | HIGH_NUMBER_OF_BYTES_USED |
+--------------------------------------------------------------------------------+-------------+-------------+---------------------------+--------------------------+----------------+--------------------+-----------------+--------------------------+------------------------------+---------------------------+
| memory/innodb/ut0rbt | 12367972599 | 12367971756 | 1175789585128 | 1175789498761 | 0 | 843 | 168276872 | 0 | 86367 | 16869347952 |
| memory/innodb/buf_buf_pool | 64 | 0 | 8783134720 | 0 | 0 | 64 | 64 | 0 | 8783134720 | 8783134720 |
| memory/temptable/physical_ram | 18599 | 18501 | 26578851040 | 26476087456 | 0 | 98 | 506 | 0 | 102763584 | 1634745920 |
| memory/mysys/TREE | 13933 | 13930 | 4655917920 | 4655878864 | 0 | 3 | 1568 | 0 | 39056 | 1119816488 |
| memory/group_rpl/GCS_XCom::xcom_cache | 767889 | 300297 | 1872249111 | 798535661 | 0 | 467592 | 495763 | 0 | 1073713450 | 1075974241 |
| memory/innodb/memory | 532219874 | 532207809 | 13596646725344 | 13596623477632 | 0 | 12065 | 87170 | 0 | 23247712 | 889900184 |
| memory/sql/String::value | 179746709 | 179745073 | 186307408232 | 186301810472 | 0 | 1636 | 8326 | 0 | 5597760 | 444524496 |
| memory/innodb/log_buffer_memory | 1 | 0 | 268436464 | 0 | 0 | 1 | 1 | 0 | 268436464 | 268436464 |
| memory/sql/Unique::merge_buffer | 36 | 36 | 301991328 | 301991328 | 0 | 0 | 26 | 0 | 0 | 218104848 |
| memory/sql/THD::main_mem_root | 200742977 | 200742846 | 3390843376248 | 3390840251216 | 0 | 131 | 3117 | 0 | 3125032 | 88720256 |
| memory/sql/Log_event | 43535365 | 43535358 | 20931402783 | 20931400073 | 0 | 7 | 269641 | 0 | 2710 | 49982659 |
| memory/sql/JSON | 9507512 | 9507456 | 859590368 | 859587008 | 0 | 56 | 549870 | 0 | 3360 | 47863656 |
| memory/group_rpl/certification_info | 1743130 | 1742155 | 164631088 | 158843768 | 0 | 975 | 426208 | 0 | 5787320 | 43207824 |
| memory/group_rpl/transaction_data | 2931445 | 2931445 | 3611045127 | 3611045127 | 0 | 0 | 15 | 0 | 0 | 42283650 |
| memory/performance_schema/events_statements_summary_by_digest | 1 | 0 | 42240000 | 0 | 0 | 1 | 1 | 0 | 42240000 | 42240000 |
| memory/group_rpl/Gcs_message_data::m_buffer | 711966 | 711966 | 3274265418 | 3274265418 | 0 | 0 | 4 | 0 | 0 | 38611767 |
| memory/mysys/KEY_CACHE | 3 | 0 | 33556016 | 0 | 0 | 3 | 3 | 0 | 33556016 | 33556016 |
| memory/sql/TABLE | 2498162 | 2490152 | 2662845439 | 2638261263 | 0 | 8010 | 9131 | 0 | 24584176 | 28309707 |
| memory/sql/Filesort_buffer::sort_keys | 3493367 | 3493366 | 115200850575 | 115200817775 | 0 | 1 | 636 | 0 | 32800 | 25885528 |
+--------------------------------------------------------------------------------+-------------+-------------+---------------------------+--------------------------+----------------+--------------------+-----------------+--------------------------+------------------------------+---------------------------+
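A hedged diagnostic, not from the original post: performance_schema can also report the grand total it is able to account for, which can be compared against the RSS shown by top; a large gap points at allocations outside the instrumented events.
-- total memory performance_schema can account for, to compare with the 27.0g
-- resident size reported by top
SELECT FORMAT_BYTES(SUM(CURRENT_NUMBER_OF_BYTES_USED)) AS accounted_memory
FROM performance_schema.memory_summary_global_by_event_name;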
O Genthe
(21 rep)
Feb 13, 2024, 02:11 PM
• Last activity: Jul 11, 2025, 03:04 PM
1 vote • 1 answer • 150 views
InnoDB cluster group replication Error_code: 1594
So we have an InnoDB cluster consisting of 4 nodes. One of the nodes crashed (OS crash); while starting, the node shows the following error:
2022-06-13T06:46:02.529058Z 491 [ERROR] Slave SQL for channel
'group_replication_applier': Relay log read failure: Could not parse relay log
event entry.
The possible reasons are: the master's binary log is corrupted (you can check
this by running 'mysqlbinlog' on the binary log), the slave's relay log is
corrupted
(you can check this by running 'mysqlbinlog' on the relay log), a network
problem, or a bug in the master's or slave's MySQL code. If you want to check
the master's binary log or slave's relay log,
you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this
slave. Error_code: 1594
Up until now, each time it happened we just restored the node from a backup and rejoined it to the cluster.
Is there a way to refetch the logs and reset the position as on a regular master/slave replication?
Running MySQL 5.7.36.
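A hedged sketch of the usual direction for a corrupted relay log on the applier channel (an assumption on my part, not a verified fix for this cluster): since GTIDs record what was already applied, the relay log can be discarded and the missing transactions refetched when the member rejoins.
-- run on the broken member while group replication is stopped
STOP GROUP_REPLICATION;
RESET SLAVE FOR CHANNEL 'group_replication_applier';  -- discards relay logs; GTID state is kept
START GROUP_REPLICATION;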
Pavel
(11 rep)
Jun 13, 2022, 08:16 AM
• Last activity: Jul 11, 2025, 07:07 AM
2 votes • 1 answer • 1459 views
MySQL group replication plugin won't recognize host's IP address?
I have pulled the current mysql-server image (mysql-server) on 2 Ubuntu 18.04 machines and I want to set up group replication.
This is how I start the mysql container on each machine (`--server-id` is 2 for the second machine):
docker run -d --name=gr-mysql \
-v /var/lib/mysql:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=secret -e MYSQL_ROOT_HOST=% \
-p 3306:3306 -p 33061:33061 \
mysql/mysql-server:8.0 \
--port=3306 \
--log-error-verbosity=3 \
--server-id=1 \
--log-bin='mysql-bin-1.log' \
--enforce-gtid-consistency=ON \
--log-slave-updates=ON \
--gtid-mode=ON \
--transaction-write-set-extraction=XXHASH64 \
--binlog-checksum=NONE \
--master-info-repository=TABLE \
--relay-log-info-repository=TABLE \
--plugin-load=group_replication.so \
--relay-log-recovery=ON \
--loose-group_replication_start_on_boot=OFF \
--loose-group_replication_group_name=a_valid_uuid \
--loose-group_replication_local_address=1.2.3.1:33061 \
--loose-group_replication_group_seeds=1.2.3.1:33061,1.2.3.2:33061 \
--loose-group_replication_single_primary_mode=OFF \
--loose-group_replication_enforce_update_everywhere_checks=ON
Now I try to configure group replication on the first machine:
docker exec -t aerobase-mysql mysql -uroot -psecret \
-e "SET @@GLOBAL.group_replication_bootstrap_group=1;" \
-e "create user 'repl'@'%';" \
-e "GRANT REPLICATION SLAVE ON *.* TO repl@'%';" \
-e "flush privileges;" \
-e "change master to master_user='repl', master_password='secret' for channel 'group_replication_recovery';" \
-e "START GROUP_REPLICATION;" \
-e "SET @@GLOBAL.group_replication_bootstrap_group=0;"
And I'm prompted with this error:
ERROR 3096 (HY000) at line 1: The START GROUP_REPLICATION command failed as there was an error when initializing the group communication layer.
Checking the container log it seems like MySQL can't get the host's IP address:
2020-09-20T22:17:03.060972Z 220 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2020-09-20T22:17:03.061056Z 220 [Note] [MY-011716] [Repl] Plugin group_replication reported: 'Current debug options are: 'GCS_DEBUG_NONE'.'
2020-09-20T22:17:03.062114Z 221 [System] [MY-011565] [Repl] Plugin group_replication reported: 'Setting super_read_only=ON.'
2020-09-20T22:17:03.062187Z 220 [Note] [MY-011673] [Repl] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2020-09-20T22:17:03.063052Z 220 [Note] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Debug messages will be sent to: asynchronous::/var/lib/mysql/GCS_DEBUG_TRACE'
2020-09-20T22:17:03.063205Z 220 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] There is no local IP address matching the one configured for the local node (1.2.3.1:33061).'
2020-09-20T22:17:03.063415Z 220 [ERROR] [MY-011674] [Repl] Plugin group_replication reported: 'Unable to initialize the group communication engine'
2020-09-20T22:17:03.063434Z 220 [ERROR] [MY-011637] [Repl] Plugin group_replication reported: 'Error on group communication engine initialization'
2020-09-20T22:17:03.063444Z 220 [Note] [MY-011649] [Repl] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
2020-09-20T22:17:03.063451Z 220 [ERROR] [MY-011718] [Repl] Plugin group_replication reported: 'Error calling group communication interfaces while trying to leave the group'
2020-09-20T22:17:03.063586Z 221 [System] [MY-011566] [Repl] Plugin group_replication reported: 'Setting super_read_only=OFF.'
In the `ifconfig` output I can see the host's IP address, and it matches what I entered for MySQL:
docker0: flags=4163 mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 01:23:45:67:89:00 txqueuelen 0 (Ethernet)
RX packets 146483 bytes 312211022 (312.2 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 163698 bytes 12222756 (12.2 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens192: flags=4163 mtu 9000
inet 1.2.3.1 netmask 255.255.255.0 broadcast 1.2.3.255
ether 00:11:22:33:44:55 txqueuelen 13888 (Ethernet)
RX packets 23255307 bytes 10382649061 (10.3 GB)
RX errors 0 dropped 68 overruns 0 frame 0
TX packets 393777 bytes 34493066 (34.4 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73 mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 144 bytes 7948 (7.9 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 144 bytes 7948 (7.9 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth0b41841: flags=4163 mtu 1500
ether 66:55:44:33:22:11 txqueuelen 0 (Ethernet)
RX packets 944 bytes 2086507 (2.0 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1058 bytes 81312 (81.3 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Before I can set up group replication I need to make MySQL accept the host's IP.
I tried adding an IP whitelist and even using host names instead of IP addresses, to no avail. I don't have a firewall or SELinux enabled.
Thanks!
(Question has been moved from SO)
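A hedged note on the error above: inside the container, only the Docker bridge addresses exist, so the plugin cannot find an interface carrying the host's 1.2.3.1. One common workaround (an assumption, not something tried in the question) is host networking, so mysqld binds the host's interfaces directly:
# hedged sketch: with host networking the -p port mappings are unnecessary;
# keep the remaining replication options exactly as in the command above
docker run -d --name=gr-mysql --network host \
  -v /var/lib/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_ROOT_HOST=% \
  mysql/mysql-server:8.0 \
  --port=3306 --server-id=1 \
  --loose-group_replication_local_address=1.2.3.1:33061 \
  --loose-group_replication_group_seeds=1.2.3.1:33061,1.2.3.2:33061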
towel
(121 rep)
Sep 21, 2020, 06:27 AM
• Last activity: Jul 2, 2025, 09:03 PM
2 votes • 1 answer • 630 views
How to set up two MySQL servers with group replication to work individually when there is no network
I have two MySQL servers in two different places. Both servers must be able to read and write. All tables are designed around a unique ID (UUID) primary key. I want to sync these two databases live using a replication method when the network is available, but both servers must be able to work individually when the network is not available.
Which replication method is suitable for this situation? If someone has already tried this, please help me resolve this problem.
I have tried MySQL group replication, but when the network disconnected I could only work on the one MySQL server that was bootstrapped. The other one doesn't allow writes.
LahiruTM
(146 rep)
Feb 13, 2019, 03:29 PM
• Last activity: May 11, 2025, 03:02 AM
0 votes • 1 answer • 2086 views
MySQL Group Replication - Offline Node with cluster Metadata 1.0.1
I have a production MySQL Group Replication setup of three nodes. The cluster has Metadata Version 1.0.1. Today, one node went down, and during the rejoin phase it said it cannot connect to the cluster.
When trying `dba.rejoinInstance('');`, MySQL Shell says it won't do it, because it will only make changes to a cluster with Metadata 2.0.0.
`ERROR: Unable to start Group Replication for instance 'O18-SQL-NBG:3306'. Please check the MySQL server error log for more information.
Cluster.rejoinInstance: Group Replication failed to start: MySQL Error 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log. (RuntimeError)`
The logfile says:
[MY-011521] [Repl] Plugin group_replication reported: 'Member version is incompatible with the group.'
When trying to upgrade the cluster's metadata to 2.0.0, it says it won't do that either, because one member of the cluster is not available.
Dba.upgradeMetadata: This operation requires all the cluster members to be ONLINE (RuntimeError)
When trying to `rescan()` the cluster, it says it won't, because my metadata is out of date and it has decided to only do read-only operations on this cluster:
Cluster.rescan: Operation not allowed. No cluster change operations can be executed because the installed metadata version 1.0.1 is lower than the version required by Shell which is version 2.0.0. Upgrade the metadata to remove this restriction. See \? dba.upgradeMetadata for additional details. (RuntimeError)
I have several versions of MySQL Shell available (8.0.17, 8.0.19 and 8.0.20). My primary objective is - guess what - to get the faulty node back up into the cluster. What are my options?
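A hedged sketch of one way out, given the Shell versions listed above (an assumption, not advice from the post): an older Shell that still speaks metadata 1.0.1 might be able to perform the rejoin, after which the newest Shell can upgrade the metadata once every member is ONLINE.
// MySQL Shell JS mode; the admin account name is hypothetical
// 1) with Shell 8.0.17 or 8.0.19 (they predate the 2.0.0 metadata requirement):
//      mysqlsh clusteradmin@O18-SQL-NBG:3306
var cluster = dba.getCluster();
cluster.rejoinInstance('O18-SQL-NBG:3306');
// 2) once all members are ONLINE again, from Shell 8.0.20:
dba.upgradeMetadata();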
t2m
(11 rep)
Jul 8, 2020, 09:21 PM
• Last activity: Apr 25, 2025, 12:04 AM
2 votes • 1 answer • 1147 views
How to configure MySQL Router on two application servers?
I have three MySQL servers running with group replication inside an InnoDB Cluster.
I have two app servers that will be connecting to the cluster, and I need to configure mysql router on both app servers. When I run router on server1 I use the `--bootstrap` option. When I try to do it on server2 with `--bootstrap` I get an error:
>a router instance named has been previously configured on this host.
My question is: do I need to start mysql router with the `--bootstrap` option? Or, since it will be configured on two servers, is the `--bootstrap` option not required? Do I need to start mysql router with a personalized conf file?
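A hedged sketch (account, host, and paths are assumptions, not from the question): each app server can bootstrap its own Router instance, and giving an instance its own name and directory avoids clashing with a previously registered one.
# run on the second app server
mysqlrouter --bootstrap clusteradmin@cluster-primary:3306 \
    --name app2-router \
    --directory /opt/mysqlrouter-app2 \
    --user mysqlrouter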
Manuel
(91 rep)
Oct 9, 2019, 10:31 PM
• Last activity: Jan 16, 2025, 12:09 PM
0 votes • 0 answers • 19 views
Does MySQL InnoDB cluster allow data loss by default?
Group Replication uses asynchronous replication by default (https://dev.mysql.com/doc/refman/8.4/en/group-replication-system-variables.html#sysvar_group_replication_consistency). Does this mean that, by default, when a transaction on the master is committed and the master crashes for some reason, the newly elected master is not guaranteed to have the committed transaction, but will continue to accept new data because of the automatic failover? If so, will the missing transaction be sent to the other nodes if the old master rejoins the group? For apps like banking, is it true that I should be using AFTER on the master node for synchronous replication, to avoid data loss in case of master failover?
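For reference, a hedged illustration of the AFTER setting the question refers to; the variable and value are real, but whether the latency trade-off is right for a given app is exactly what is being asked:
-- with AFTER, a RW transaction waits until it has been applied on all
-- members before returning to the client
SET GLOBAL group_replication_consistency = 'AFTER';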
William
(155 rep)
Jan 2, 2025, 03:20 AM
• Last activity: Jan 5, 2025, 12:17 AM
1 vote • 1 answer • 1661 views
MySQL Group Replication Multi-Primary Setup
We are running three MySQL Servers in a Group Replication Multi-Primary setup.
Here is the configuration of one member of that GR group:
# General replication settings
gtid_mode = ON
enforce_gtid_consistency = ON
master_info_repository = TABLE
relay_log_info_repository = TABLE
binlog_checksum = NONE
log_slave_updates = ON
log_bin = binlog
binlog_format = ROW
transaction_write_set_extraction = XXHASH64
loose-group_replication_bootstrap_group = OFF
loose-group_replication_start_on_boot = OFF
loose-group_replication_ssl_mode = REQUIRED
loose-group_replication_recovery_use_ssl = 1
# Shared replication group configuration
loose-group_replication_group_name = "3a3e0101-83a2-11ea-922f-000c29390e7a"
loose-group_replication_ip_whitelist = "10.0.1.XX,10.0.1.XX,10.0.1.XX"
loose-group_replication_group_seeds = "10.0.1.XX:33061,10.0.1.XX:33061,10.0.1.XX:33061"
# Single or Multi-primary mode? Uncomment these two lines
# for multi-primary mode, where any host can accept writes
loose-group_replication_single_primary_mode = OFF
loose-group_replication_enforce_update_everywhere_checks = ON
(IPs obfuscated)
Everything was running fine, until...
I loaded a simple SQL file from another MySQL server (not a GR member) onto MySQL_1 (one of the three inside the GR group). The file was created with 'mysqlpump --set-gtid-purged=Off'.
Here is the content of that SQL file:
-- Dump created by MySQL pump utility, version: 8.0.11, FreeBSD11.1 (amd64)
-- Dump start time: Tue Jun 9 12:18:03 2020
-- Server version: 8.0.11
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_SQL_MODE=@@SQL_MODE;
SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
SET @@SESSION.SQL_LOG_BIN= 0;
SET @OLD_TIME_ZONE=@@TIME_ZONE;
SET TIME_ZONE='+00:00';
SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT;
SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS;
SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION;
SET NAMES utf8mb4;
CREATE DATABASE /*!32312 IF NOT EXISTS*/ `massmail`
/*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_bin */;
CREATE TABLE `massmail`.`emails` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`ext_id` bigint(20) DEFAULT NULL,
`email` varchar(256) CHARACTER SET utf8 COLLATE utf8_unicode_ci NOT NULL,
`sent` datetime DEFAULT NULL,
`bounced` datetime DEFAULT NULL,
`opened` datetime DEFAULT NULL,
`unsubscribed` datetime DEFAULT NULL,
`CTA` datetime DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=11 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
INSERT INTO `massmail`.`emails` VALUES (1,NULL,"user1@gmail.com","2020-06-08 16:54:56",NULL,"2020-06-08 17:55:23","2020-06-08 17:55:34","2020-06-08 17:55:47"),(2,NULL,"user2@gmail.com","2020-06-07 16:40:13",NULL,NULL,NULL,NULL),(3,NULL,"user3@gmail.com","2020-06-08 16:57:56",NULL,"2020-06-08 23:54:21","2020-06-08 18:01:01","2020-06-08 18:01:51"),(4,NULL,"user4@gmail.com","2020-06-07 19:04:13",NULL,"2020-06-08 12:05:25","2020-06-08 01:42:53",NULL),(5,NULL,"user5@gmail.com","2020-06-07 16:58:12",NULL,"2020-06-08 11:12:17","2020-06-08 11:03:38",NULL),(6,NULL,"user6@gmail.com","2020-06-07 17:04:13",NULL,"2020-06-08 11:01:03",NULL,NULL),(7,NULL,"notexistingmustbounce@nohererererer.com","2020-06-08 16:59:56","2020-06-08 17:59:59",NULL,NULL,NULL),(8,NULL,"user7@gmail.com","2020-06-07 17:16:12",NULL,"2020-06-08 00:56:23",NULL,NULL),(9,NULL,"user8@gmail.com","2020-06-08 18:03:56",NULL,NULL,NULL,NULL),(10,NULL,"user9@gmail.com","2020-06-07 17:28:12",NULL,NULL,NULL,NULL);
SET TIME_ZONE=@OLD_TIME_ZONE;
SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT;
SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS;
SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
SET SQL_MODE=@OLD_SQL_MODE;
(real emails obfuscated)
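An editorial note, hedged: the dump above contains `SET @@SESSION.SQL_LOG_BIN= 0;`, which disables binary logging for the loading session, so the loaded schema and rows never reach the binary log and therefore never replicate to MySQL_2/MySQL_3. That alone could explain rows that exist only on MySQL_1. A sketch of a load that keeps the statements replicating (the file name is hypothetical):
# strip the SQL_LOG_BIN=0 line so the CREATE/INSERT statements are binlogged
# and therefore certified and applied by the other group members
sed -i '/SQL_LOG_BIN= 0/d' massmail_dump.sql
mysql -uroot -p < massmail_dump.sql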
I also modified the mysql.db table and inserted a row so that an existing DB user has access to the new table, and I flushed privileges (all this was done on MySQL_1), without using GRANT.
After these actions, MySQL_2 and MySQL_3 went into error mode.
In the logs I found the following:
2020-06-09T10:44:03.769050Z 996 [ERROR] [MY-010584] [Repl] Slave SQL for channel 'group_replication_recovery': Could not execute Update_rows event on table massmail.emails; Can't find record in 'emails', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log binlog.000004, end_log_pos 162167168, Error_code: MY-001032
2020-06-09T10:44:03.769087Z 996 [Warning] [MY-010584] [Repl] Slave: Can't find record in 'emails' Error_code: MY-001032
2020-06-09T10:44:03.769140Z 996 [ERROR] [MY-010586] [Repl] Error running query, slave SQL thread aborted. Fix the problem, and restart the slave SQL thread with "SLAVE START". We stopped at log 'binlog.000004' position 162166781
I researched the above error and found the following thread: https://dba.stackexchange.com/questions/23921/some-mysql-slave-errors
But it's for Master-Slave replication; I could not find anything similar for GR Multi-Primary replication...
There, Max Vernon writes that you can stop the slave and skip counters...
So I checked the latest binlog on both MySQL_1 and MySQL_2, and I probably found the place where it all breaks...
Now I have several questions:
1. Why did the import of that SQL file break the GR sync? Or was it my manual manipulation of the mysql.db table (without using the GRANT syntax) that broke the GR sync?
1.1 What effect did the parameter '--set-gtid-purged=Off' have in this disaster? I guess none, am I correct?
2. Will the statements (from the above-mentioned thread) that skip the binlog entries breaking the replication work? That thread is about Master-Slave replication, not GR Multi-Primary (like we have). The following commands are mentioned:
mysql> stop slave;
mysql> set global sql_slave_skip_counter=1;
mysql> show slave status \G;
For example, 'show slave status' gives me an empty result set... (because we use GR, not Master-Slave replication)...
3. How can I make MySQL_2 (the second primary) skip the "poisoned" events from the binlog? (if 2. is not working)
4. I would like to give a brief summary of how I think this could also be solved (by resetting MySQL_2/3 and resyncing everything from MySQL_1):
- make a backup of the my.cnf
- uninstall mysql-server
- reinstall mysql-server (should give a clean new mysql environment)
- restore the backed-up my.cnf
- start group replication
Should work, right? (the data to be synced from MySQL_1 is not that big yet, ~1-200mb)
This solution (4.) is not my preferred one, as I would like to learn more about GR and how to resolve problems in a Multi-Primary GR setup. As a last resort, I have daily backups (of the VMs) that I could roll back to, so this question is more of a philosophical nature...
Thank you for your support, and forgive my stupidity; GR is quite new to me (though MySQL has been part of my life for a very long time)...
Stoney
stoney
(21 rep)
Jun 9, 2020, 01:19 PM
• Last activity: May 13, 2024, 09:06 PM
0 votes • 0 answers • 63 views
Using Clone for addInstance on MySQL InnoDB Cluster uses 100% memory
When trying to add a new node to a MySQL InnoDB cluster using Cluster.addInstance() and selecting the Clone option, within 2 or 3 minutes the donor node's memory usage hits 100% on a server with 36 GB of memory. At 100% memory usage the server becomes unresponsive, the recipient moves on to another server in the cluster and tries to clone, and the memory usage of the second donor hits 100%. The entire cluster is degraded in under 10 minutes and unavailable.
This is on a Production server so the consequences here are pretty devastating.
I’m running MySQL 8.0.36 on Debian 12 Bookworm.
I have not encountered this problem setting up a test cluster with a small amount of data, which I have done several times to ensure I'm doing everything correctly. However, on my production server with about 125 GB of data, this problem occurs consistently.
The process chews through 10 GB+ of memory in just a few minutes, which doesn't seem normal. Killing the Clone query doesn't appear to stop the memory consumption.
I cannot find any information regarding the memory requirements for Clone.
Has anyone else experienced this problem or know what the cause might be?
Thank you.
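A hedged mitigation sketch, not a confirmed fix: the clone plugin exposes throttling knobs, and capping donor-side concurrency and bandwidth may at least slow the resource consumption enough to diagnose it (the values are illustrative):
-- on the donor, before running addInstance() with the clone method
SET GLOBAL clone_max_concurrency = 2;        -- default 16; fewer clone worker threads
SET GLOBAL clone_max_data_bandwidth = 100;   -- cap transfer at ~100 MiB/s (0 = unlimited)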
mwarble
(1 rep)
Mar 6, 2024, 03:13 PM
2 votes • 0 answers • 25 views
Cloning failing in MySQL Group Replication
We have a 3-node MySQL group replication set up in K8s. After all the respective setups, during the cloning process from the primary node to rebuild or add a secondary node to the cluster, the process is automatically closing with the following error:
mysql> CLONE INSTANCE FROM ''@'':3306 IDENTIFIED BY '';
ERROR 1026 (HY000): Error writing file './#innodb_redo.#clone/#ib_redo0' (errno: 22 - Invalid argument)
This occurs randomly, for example, while the primary is accepting writes and when it’s idle. Sometimes, the cloning operation succeeds without any error. Are there any specific factors causing this error and leading to the cloning failure?
Mysql version: Percona 8.0.34-26
I've already checked disk space and read-only status; there was enough disk space available and the disk was writable.
I’ve noticed that it randomly fails with the above error, but after recreating the cluster, it works fine. However, the same issue reoccurs sometimes.
While researching online, I found some issues related to this with respect to O_DIRECT. In our case the issue is only happening with `innodb_flush_method = O_DIRECT` (which we are using).
Link: https://bugs.mysql.com/bug.php?id=97755
Could you please let us know if there is any potential issue associated with innodb_flush_method O_DIRECT for this error, or if it might be happening due to something else?
As you can see below, the same command failed with the error, and retrying after some time succeeded.
Failure:
mysql> CLONE INSTANCE FROM 'donor_clone_user'@'':3306 IDENTIFIED BY 'xxxxxxx';
ERROR 1026 (HY000): Error writing file './#innodb_redo.#clone/#ib_redo0' (errno: 22 - Invalid argument)
Retrying after a few seconds without any changes:
Succeed:
mysql> CLONE INSTANCE FROM 'donor_clone_user'@'':3306 IDENTIFIED BY 'xxxxxxx';
ERROR 3707 (HY000): Restart server failed (mysqld is not managed by supervisor process).
mysql> command terminated with exit code 137
POD logs during failure:
2023-12-19T07:54:31.000678Z 11 [Warning] [MY-013460] [InnoDB] Clone removing all user data for provisioning: Started
2023-12-19T07:54:31.072144Z 11 [Warning] [MY-013460] [InnoDB] Clone removing all user data for provisioning: Finished
2023-12-19T07:54:32.567528Z 11 [Warning] [MY-012638] [InnoDB] Retry attempts for writing partial data failed.
2023-12-19T07:54:32.567545Z 11 [ERROR] [MY-012639] [InnoDB] Write to file ./#innodb_redo.#clone/#ib_redo0 failed at offset 3584, 1048576 bytes should have been written, only 0 were written. Operating system error number 22. Check that your OS and file system support files of this size. Check also that the disk is not full or a disk quota exceeded.
2023-12-19T07:54:32.567558Z 11 [ERROR] [MY-012640] [InnoDB] Error number 22 means ‘Invalid argument’
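A hedged observation on the logs above: errno 22 (EINVAL) on a write usually means the storage rejected a misaligned direct I/O request, which fits the O_DIRECT suspicion. One workaround sketch (an assumption, not a verified fix for this storage) is to fall back to buffered I/O:
[mysqld]
# fsync avoids O_DIRECT entirely, so misaligned direct writes to the clone
# redo file can no longer fail with EINVAL; weigh the double-buffering cost
innodb_flush_method = fsync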
Pravat Dash
(21 rep)
Dec 20, 2023, 07:43 AM
2 votes • 1 answer • 2155 views
Relay log read fail, relay or master log corrupt. How to repair?
I am running a MySQL InnoDB cluster 8.0.17 (group replication).
During replication the slave's disk got full, with no space left at all; the thread waited for disk space to be freed. While this was happening, the server restarted. Once the server came back, the MySQL error log showed that the relay log is corrupt and the slave cannot rejoin.
My question is: how can I repair the slave?
Error log:
Disk is full writing ‘./ic-1-relay-bin-group_replication_applier.000039’ (OS errno 28 – No space left on device). Waiting for someone to free space… Retry in 60 secs. Message reprinted in 600 secs.
2019-08-21T19:02:37.095609Z 33 [ERROR] [MY-010584] [Repl] Slave SQL for channel ‘group_replication_applier’: Could not execute Write_rows event on table sbtest.sbtest3; Error writing file ‘/tmp/MLfd=28’ (OS errno 22019-08-22T06:55:36.766622Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.17) starting as process 910
2019-08-22T06:55:43.510632Z 1 [Warning] [MY-012637] [InnoDB] 81920 bytes should have been written. Only 77824 bytes written. Retrying for the remaining bytes.
2019-08-22T06:55:43.510669Z 1 [Warning] [MY-012638] [InnoDB] Retry attempts for writing partial data failed.
2019-08-22T06:55:43.510683Z 1 [ERROR] [MY-012639] [InnoDB] Write to file ./#innodb_temp/temp_6.ibt failed at offset 0, 81920 bytes should have been written, only 77824 were written. Operating system error number 28. Check that your OS and file system support files of this size. Check also that the disk is not full or a disk quota exceeded.
….
Error number 28 means ‘No space left on device’
2019-08-22T07:02:37.278732Z 1 [ERROR] [MY-012267] [InnoDB] Could not set the file size of ‘./ibtmp1’. Probably out of disk space
2019-08-22T07:02:37.278746Z 1 [ERROR] [MY-012926] [InnoDB] Unable to create the shared innodb_temporary.
2019-08-22T07:02:37.278764Z 1 [ERROR] [MY-012930] [InnoDB] Plugin initialization aborted with error Generic error.
2019-08-22T07:02:37.776328Z 1 [ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine
2019-08-22T07:02:37.776504Z 0 [ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
2019-08-22T07:02:37.779913Z 0 [ERROR] [MY-010119] [Server] Aborting
…
[MY-010818] [Server] Error reading GTIDs from relaylog: -1
Slave SQL for channel ‘group_replication_applier’: Relay log read failure: Could not parse relay log event entry. The possible reasons are: the master’s binary log is corrupted (you can check this by running ‘mysqlbinlog’ on the binary log), the slave’s relay log is corrupted (you can check this by running ‘mysqlbinlog’ on the relay log), a network problem, the server was unable to fetch a keyring key required to open an encrypted relay log file, or a bug in the master’s or slave’s MySQL code. If you want to check the master’s binary log or slave’s relay log, you will be able to know their names by issuing ‘SHOW SLAVE STATUS’ on this slave. Error_code: MY-013121
dan
(53 rep)
Aug 23, 2019, 01:11 PM
• Last activity: Oct 14, 2023, 04:09 AM
0 votes • 1 answer • 979 views
Why slave_parallel_workers affects TPM
I ran a few HammerDB benchmarks and messed with the configuration, and I noticed that different values of `slave_parallel_workers` have a considerable impact on TPM.
As far as I understand, `slave_parallel_workers` only affects how quickly transactions are applied from the relay log. Yet none of the possible HA/DR setups, including async replication, semi-sync replication, and group replication, wait for transactions to be applied on the secondary node(s) before committing. To my understanding, `slave_parallel_workers` should only affect replication lag and have zero impact on TPM, while this is clearly not the case.
Am I missing something?
**Edit:** To clear any confusion: I expect the value of `slave_parallel_workers` to have no impact on TPM, while in reality, the higher the value of `slave_parallel_workers`, the higher the TPM. So I'm hoping someone can explain the reason for that.
**Edit 2:** All transactions are done on the primary node.
Matthew Kwong
(113 rep)
Jun 28, 2021, 03:39 AM
• Last activity: Dec 8, 2022, 09:03 AM
0 votes • 0 answers • 192 views
ER_TRANSACTION_ROLLBACK_DURING_COMMIT MySQL error when querying
I've recently seen a high number of errors in the Bugsnag integration of my web app, which occur both when reading from and writing to my MySQL server:
ER_TRANSACTION_ROLLBACK_DURING_COMMIT: Plugin instructed the server to rollback the current transaction
For a bit of background on how my MySQL servers are set up: I currently have a "cluster" of MySQL servers running on 5 VMs, running the Group Replication plugin in multi-master mode. (The Group Replication config was attached as an image in the original post.)
I've searched online, as well as on Stack Overflow, for the MySQL error code ER_TRANSACTION_ROLLBACK_DURING_COMMIT, and as far as I can see, I cannot find anything that helps me find the cause of this issue or its resolution.
Unfortunately I don't have any more information other than this, but if you require anything specific to help me, please do ask.
Thanks, Dan.
Edit: Added Group Replication config
helloitsdann
(1 rep)
Jan 19, 2021, 12:24 AM
• Last activity: Jul 25, 2022, 02:44 PM
0 votes • 2 answers • 826 views
What is the best solution for synchronous sync and load balancing an InnoDB database
I'm trying to find the best setup to horizontally scale an ecommerce site which is currently on a single dedicated server, but even after reading some of the replies here on DBA I'm still confused about the best solution for my use case.
The sync should be synchronous (I think, but I may be wrong here) and it will be integrated with HAProxy for load balancing (HAProxy will also manage WWW load balancing, which is why I mention it, but if I have to use MySQL Router or SQLProxy that's OK too).
I also need the same auto-increment progression between all the MySQL instances (more below, in the M/M part).
Lastly, it should be easy to recover if any node goes down.
These are the options I found by googling:
1) MySQL Replication M/M
2) MySQL Replication/Group Replication
3) Percona XtraDB Cluster / Galera
4) MySQL NDB
**MySQL Replication M/M**
I used this in the past, but I had the problem of using the auto_increment_increment/offset settings, resulting in non-continuous order/cart/etc. IDs. AFAIK it cannot be configured for synchronous replication. It is also not the most professional approach to the problem, with many cons.
**MySQL Replication/Group Replication**
I don't exactly understand the difference between the two, except that the latter seems to have a nicer way to manage/control the cluster. But this seems to do the job I need. If I understood correctly, using it in semisynchronous mode with rpl_semi_sync_source_wait_for_replica_count equal to the number of MySQL instances is like running in synchronous mode. Is that correct?
Also, I'm a bit confused about Single vs Multi-Primary mode. I think every instance must be RW, so Multi-Primary, if I want to balance the write queries too. Or is it better to have a single RW Primary node and redirect only the read queries to the secondary nodes (if that's possible with HA, I don't know)?
**Percona XtraDB Cluster / Galera**
This looks like Group Replication with additional features and management tools to make it easier, possibly performing even better. If I understood correctly, synchronous mode is enforced here, for more consistency. I'm not really sure what else differs from the MySQL one; that's part of my confusion.
**MySQL NDB**
I read this is the best solution for synchronous replication. However, I don't get whether it can be safely used with a PHP application written with InnoDB in mind. I know I can just re-import the whole InnoDB database with the NDB engine, but is it compatible with every MySQL query I can do on InnoDB? This is my biggest doubt. I don't want to move everything to NDB only to find out later that I can't do something, or worse, get errors in my app's queries.
So here I am, full of doubts about choosing the best option for my use case, since this is a choice I cannot undo. I hope I didn't forget anything useful to know.
Can any DBA expert help me make the right choice?
Thanks.
matrix
(1 rep)
May 28, 2022, 01:34 PM
• Last activity: May 28, 2022, 10:10 PM
0 votes • 2 answers • 1577 views
MySQL Group Replication fails to understand "::ffff:" IP Address
I'm trying to set up group replication using 3 x MySQL 8.0.16 databases, but when I start up the second Node it fails. The error log reports:
[GCS] Connection attempt from IP address ::ffff:10.12.1.2 refused. Address is not in the IP whitelist.
But if I add a whitelist:
group_replication_ip_whitelist="::ffff:10.12.1.2";
I get:
[GCS] Invalid IP or subnet mask in the whitelist: "::ffff:10.12.1.2";
Is this an issue with the server at my end, or an issue with Group Replication not understanding the IP Address?
My configuration is:
server_id=
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
transaction_write_set_extraction=XXHASH64
group_replication_group_name="8112d18e-9e6e-11e9-a526-0e51be33883f"
group_replication_start_on_boot=off
group_replication_local_address= "10.12.1.1:33061" /*This is different for each one*/
group_replication_group_seeds= "10.12.1.1:33061,10.12.1.2:33061,10.12.1.3:33061"
group_replication_bootstrap_group=off
If I look on the first nodes error log I can see:
Connection attempt from IP address ::ffff:10.12.1.2 refused. Address is not in the IP whitelist.'
But again, adding it to a whitelist throws the above error about there being an invalid IP address.
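A hedged suggestion, not something tried in the question: `group_replication_ip_whitelist` accepts plain IPv4 addresses and CIDR subnets, so listing the unmapped form of the addresses, or the whole subnet, may satisfy the check that currently sees only the IPv4-mapped `::ffff:` form:
group_replication_ip_whitelist = "10.12.1.0/24,127.0.0.1/8"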
IGGt
(2276 rep)
Jul 5, 2019, 01:45 PM
• Last activity: May 2, 2022, 09:53 AM
0 votes • 1 answer • 176 views
MySQL Group replication failover detection while node is recovering
I'm trying to get a solid MySQL group replication load balancing / failover setup. Currently I use `keepalived` to share one private IP for connecting to the load-balanced MySQL Group Replication cluster, with balancing/failover arranged through `haproxy` (TCP checks on port 33061), which works great.
However, once a node gets into an unreachable state (because of network issues) and eventually goes offline for the cluster, we have to join the node back to the cluster, which is all fine and works. However, during the recovery phase (state `RECOVERING`), the node is already listening on port `33061`, which enables the node for load balancing and failover even though it is not operational yet.
Is there any check I can add to prevent the node from being considered online while it is still in the `RECOVERING` state, joining the cluster? Usually this process is rather quick, but it has also happened a few times that it took up to 15 minutes, causing database errors during this phase. Many thanks!
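A hedged sketch of a stricter health check (my assumption, not part of the original setup): instead of a bare TCP probe, the check can ask the member itself whether it reports ONLINE, and only then signal health to haproxy:
# exits 0 only when this member's state is ONLINE; wrap it in whatever
# haproxy can probe (e.g. an xinetd service or a tiny HTTP endpoint)
mysql -N -e "SELECT MEMBER_STATE FROM performance_schema.replication_group_members
             WHERE MEMBER_ID = @@server_uuid" | grep -qx ONLINE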
Thomas van Hesteren
(1 rep)
Apr 16, 2022, 08:02 AM
• Last activity: Apr 19, 2022, 10:35 AM
0 votes • 2 answers • 3180 views
What contributes to the transaction size on a MySQL UPDATE?
**Background**
We're running a MySQL 8 INNODB cluster behind some Java services. We received an error from the application because we'd exceeded the group replication transaction size limit (“Error on observer while running replication hook 'before_commit'”). Our limit was set to 150Mb at the time.
**Problem**
Looking at the transaction involved I don't understand how it might have involved anything like 150Mb.
It involved an update to two tables:
update my_table mt
inner join my_table_aud mta on mt.id = mta.id
set mt.boolean_column_1 = TRUE,
mt.boolean_column_2 = TRUE,
mt.varchar_column = coalesce(mt.varchar_column, ?2),
mta.varchar_column = coalesce(mta.varchar_column, ?2)
where mt.boolean_column_1 = FALSE
AND mta.rev <= ?1
which involved approximately 100 rows in my_table and maybe 200 rows in my_table_aud, plus one other simple insert into a different table. The varchar columns were updated with around 10 bytes of data.
However the two tables involved in the UPDATE do both have a different longtext column, which wasn't updated. There would have been on average maybe 1MB in text per row updated in those columns.
The only explanation I can think of for us exceeding the transaction limit would be that the text in longtext columns contributed to the transaction size, even though they were not referenced in the update.
I searched for documentation on what contributes to the transaction size of a transaction in MySQL and haven't been able to find anything useful.
Please can someone help my understanding of how the transaction size limit might have been exceeded in this scenario?
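A hedged pointer at a likely mechanism, my reading rather than anything confirmed in the post: with the default `binlog_row_image=FULL`, every updated row is written to the binary log with a complete before image and a complete after image, untouched longtext columns included. At roughly 1 MB of text per row across ~300 rows, the two images together could plausibly approach the 150 MB limit. A quick check:
-- FULL (the default) logs all columns of each updated row twice, in the
-- before and after images; MINIMAL logs only the PK plus changed columns,
-- but verify compatibility with your replication setup before changing it
SHOW VARIABLES LIKE 'binlog_row_image';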
scarba05
(103 rep)
Jan 15, 2021, 09:29 AM
• Last activity: Nov 15, 2021, 12:58 AM
3 votes • 1 answer • 68 views
Clarification: Replication Filters for MySQL Group Replication
As I understand it, replication filters in MySQL are not allowed for MySQL Group Replication. Is this correct?
Does this imply that all databases among the group replication instances must have the same state initially?
Albertus
(33 rep)
Aug 22, 2021, 09:13 PM
• Last activity: Aug 22, 2021, 09:45 PM
1 vote • 1 answer • 427 views
MySQL Group Replication consistency EVENTUAL vs BEFORE_ON_PRIMARY_FAILOVER
As far as I understand, `BEFORE_ON_PRIMARY_FAILOVER` behaves exactly like `EVENTUAL` in normal scenarios until a failover happens, at which point the former blocks all new transactions until the new primary node has applied its backlog, while the latter allows transactions to run immediately.
I have tried both options in a lab environment. When the replication lag is relatively high, it takes a very long time for the new primary to apply the backlog and open up to new transactions. This significantly affects HA and renders the auto-failover feature of Group Replication almost meaningless.
So my question is: is it really beneficial to use `BEFORE_ON_PRIMARY_FAILOVER` instead of `EVENTUAL`? Is it really a "safer" option?
Some info about my environment in case it matters:
MySQL 8.0.25, installed from the tarball distribution.
RHEL 8.0
Matthew Kwong
(113 rep)
Jun 28, 2021, 02:55 AM
• Last activity: Jun 28, 2021, 03:34 AM