Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
5
votes
1
answers
2634
views
Replication stops with GTID_NEXT error after creation/drop of memory table in mysql5.6
We have recently upgraded to mysql5.6.25 from mysql5.5.x/mysql5.1.x on our mysql-cluster.
Below is a brief snapshot of our architecture: ![architecture diagram](https://i.sstatic.net/QCakk.jpg)
Since we have upgraded and enabled gtid-mode, we have been intermittently getting slave errors similar to:
***Last_SQL_Error: Error 'When @@SESSION.GTID_NEXT is set to a GTID, you must explicitly set it to a different value after a COMMIT or ROLLBACK. Please check GTID_NEXT variable manual page for detailed explanation. Current @@SESSION.GTID_NEXT is 'd7e8990d-3a9e-11e5-8bc7-22000aa63d47:1466'.' on query. Default database: 'adplatform'. Query: 'create table X_new like X'***
Our observations are as follows:
- These slave errors are resolved simply by restarting the slave.
- Such errors are always with Create/Drop of tables which have Memory Storage Engine.
- Errors on Complete-Slave(B) show up continuously at a fixed minute (39th) of the hour and have been repeating since we have upgraded, almost a week.
- Errors on the Complete-Slave as well as the Partial-Slave are observed whenever their master is restarted.
- Cluster-1 and Cluster-2 have CentOS machines and Cluster-3 has Ubuntu machines. Slaves on the CentOS machines also fail with the same error whenever their master (C/D) is restarted, but the slaves on the Ubuntu machines do not fail!
We have temporarily been able to live with this issue by setting up an action-script on our monitoring system which fires on slave error alert on any machine.
A look into the gtid_next section of the MySQL replication-options documentation says the following:
> Prior to MySQL 5.6.20, when GTIDs were enabled but gtid_next was not
> AUTOMATIC, DROP TABLE did not work correctly when used on a
> combination of nontemporary tables with temporary tables, or of
> temporary tables using transactional storage engines with temporary
> tables using nontransactional storage engines. In MySQL 5.6.20 and
> later, DROP TABLE or DROP TEMPORARY TABLE fails with an explicit error
> when used with either of these combinations of tables. (Bug #17620053)
This seems related to my issue but still does not explain my scenario.
Any hints/directions to solve the issue would be greatly appreciated.
**EDIT :**
I managed to find a similar, recently reported MySQL bug (#77729), whose description is as follows:
https://bugs.mysql.com/bug.php?id=77729
> When you have table with Engine MEMORY working on replication master,
> mysqld injects "DELETE" statement in binary logs on first access query
> to this table. This insures consistency of data on replicating slaves.
>
> If replication is GTID ROW based, this inserted "DELETE" breaks
> replication. Logged event is in STATEMENT format and do not generate
> correct SET GTID_NEXT statements in binary log.
Unfortunately, the status of this bug is marked as **Can't Repeat**.
...
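A common workaround for a slave stuck on this error, beyond restarting replication, is to commit an empty transaction for the stuck GTID and return GTID_NEXT to automatic. A sketch, assuming the GTID value is the one reported in Last_SQL_Error:

```sql
-- On the failing slave; use the GTID from the Last_SQL_Error message
STOP SLAVE;
SET GTID_NEXT = 'd7e8990d-3a9e-11e5-8bc7-22000aa63d47:1466';
BEGIN;
COMMIT;                      -- records an empty transaction under that GTID
SET GTID_NEXT = 'AUTOMATIC';
START SLAVE;
```

Note this skips the offending event entirely, so the MEMORY table should be checked (or re-created) on the slave afterwards.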
raman2887
(51 rep)
Aug 5, 2015, 08:05 AM
• Last activity: Jul 1, 2025, 11:02 PM
0
votes
1
answers
663
views
Error 1236 in MySQL GTID Slave due to mistaken 'reset master' on MySQL 5.7 GTID master
Sequence of events:
1. Disabled binary logs in the MySQL 8 slave with the following relevant parameters in `mysqld.cnf`:

   ```
   disable-log-bin
   log-replica-updates = OFF
   gtid-mode = ON
   enforce-gtid-consistency = ON
   master-info-repository = TABLE
   relay-log-info-repository = TABLE
   relay_log_recovery = ON
   sync-master-info = 1
   innodb_flush_log_at_trx_commit = 2
   sync_binlog = 0
   slave-preserve-commit-order = ON
   # binlog_expire_logs_seconds = 2592000
   # max_binlog_size = 100M
   ```
2. Did not purge binary logs before disabling binary logging in the MySQL 8 slave.
3. Re-enabled binary logging by uncommenting:

   ```
   # binlog_expire_logs_seconds = 2592000
   # max_binlog_size = 100M
   ```

   and commenting out:

   ```
   disable-log-bin
   ```

   and, instead of executing `reset master` on the MySQL 8 slave, mistakenly executed `reset master` on the MySQL 5.7 master while replication was running.
4. There was no backup of binary logs on the master.
5. Purged and re-installed the MySQL 8 slave, then restored the backup (taken on the master with):

   ```shell
   mysqldump -u root -p --all-databases --flush-logs \
       --single-transaction --routines --triggers --events \
       --master-data=2 > alldbs.sql
   ```
6. Reconfigured GTID replication using the MySQL 5.7 master and the MySQL 8.0 slave. Relevant parameters from the MySQL 5.7 master's `mysqld.cnf`:

   ```
   binlog-format = ROW
   log-slave-updates = ON
   gtid-mode = ON
   enforce-gtid-consistency = ON
   master-info-repository = TABLE
   relay-log-info-repository = TABLE
   sync-master-info = 1
   innodb_flush_log_at_trx_commit = 1
   sync_binlog = 1
   binlog-transaction-dependency-tracking = COMMIT_ORDER
   ```
7. Observed an error:

> 1236: Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Slave has more GTIDs than the master has, using the master's SERVER_UUID. This may indicate that the end of the binary log was truncated or that the last binary log file was lost, e.g., after a power or disk failure when sync_binlog != 1.
>
> The master may or may not have rolled back transactions that were already replicated to the slave.
>
> Suggest to replicate any transactions that master has rolled back from slave to master, and/or commit empty transactions on master to account for transactions that have been
How to reconfigure GTID replication?
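A minimal re-provisioning sketch for the last step, assuming the dump above has been restored on the MySQL 8 slave and that the host/user names below are placeholders:

```sql
-- On the MySQL 8 slave, after restoring alldbs.sql
RESET MASTER;   -- discard the slave's own GTID history
-- If the dump was taken with --set-gtid-purged=ON it sets GTID_PURGED itself;
-- otherwise copy the master's Executed_Gtid_Set (SHOW MASTER STATUS) here:
-- SET GLOBAL GTID_PURGED = '<master_uuid>:1-N';
CHANGE MASTER TO
  MASTER_HOST = 'master.example.com',   -- placeholder
  MASTER_USER = 'repl',                 -- placeholder
  MASTER_PASSWORD = '***',
  MASTER_AUTO_POSITION = 1;
START SLAVE;
```

Since `reset master` was mistakenly run on the 5.7 master, its GTID history restarts from 1; rebuilding the slave from a fresh dump as described is the safe path.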
Sarma MV
(1 rep)
Oct 21, 2022, 07:50 AM
• Last activity: Jun 30, 2025, 11:06 AM
1
votes
1
answers
91
views
Restoring a primary mysqldump on a R/O replica
EDITED:
I've encountered the following error message on my R/O replica which supports GTIDs and had not been running for a few weeks:
> Last_IO_Error: Got fatal error 1236 from source when reading data from binary log: 'Cannot replicate because the source purged required binary logs. Replicate the missing transactions from elsewhere, or provision a new replica from backup. Consider increasing the source's binary log expiration period. The GTID set sent by the replica is '61ab9e32-e5b9-11ed-b4aa-525400003002:1-12069,a4228b68-e5b8-11ed-ac40-525400003001:1-72270', and the missing transactions are 'a4228b68-e5b8-11ed-ac40-525400003001:72271-74629'
As a result, in order to "recreate" the replica, I tried to restore a backup made on a primary (which also supports GTIDs) onto the replica.
The backup made on the primary:
```shell
mysqldump --ssl-ca=letsencrypt.pem --ssl-mode=VERIFY_IDENTITY --user=root \
    --host=primary.example.com --port=3306 '--password=mypassword' --all-databases \
    --events --flush-logs --flush-privileges --opt --routines --single-transaction \
    --set-gtid-purged=OFF --source-data=1 --triggers > db_filename
```
The backup restored on the replica:
```shell
mysql --ssl-ca=letsencrypt.pem --ssl-mode=VERIFY_IDENTITY --user=root \
    --host=replica.example.com --port=3306 '--password=mypassword'
```

which failed with:

> ERROR 1776 (HY000) at line 24: Parameters SOURCE_LOG_FILE, SOURCE_LOG_POS, RELAY_LOG_FILE and RELAY_LOG_POS cannot be set when SOURCE_AUTO_POSITION is active.

EDITED:
I have access to the backup of all master binlogs. Is there a proper way to reinject them into the primary in order to be able to recreate the replica?
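Replaying archived binlogs is normally done by piping them through mysqlbinlog into the server. A sketch, assuming the backup files live in `./binlog-backup` and the host/credentials are placeholders:

```shell
# Replay the archived binlogs in order against the primary.
# With GTIDs enabled, transactions already present in gtid_executed are skipped.
mysqlbinlog ./binlog-backup/mysql-bin.0* \
  | mysql --host=primary.example.com --user=root -p
```

Once the primary again holds the missing GTID range, the replica can resume with auto-positioning; otherwise it has to be re-provisioned from a fresh backup.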
jean-christophe manciot
(121 rep)
May 7, 2025, 03:35 PM
• Last activity: May 12, 2025, 03:36 PM
0
votes
1
answers
1344
views
How to convert standard replication to GTID replication in MySQL
Currently, I have a master-slave structure using binary-log (file/position) replication. Since GTID replication is better and easier to manage, I want to convert my replication to GTID with minimal downtime and data loss.
The MySQL version is 8 and the database size is above 1 GB. How can I do this operation safely?
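MySQL 5.7+ (including 8.0) supports switching to GTIDs online. A sketch of the documented step-up sequence, to be run on every server in the topology, waiting for each step to take effect:

```sql
SET GLOBAL ENFORCE_GTID_CONSISTENCY = WARN;  -- watch the error log for violations first
SET GLOBAL ENFORCE_GTID_CONSISTENCY = ON;
SET GLOBAL GTID_MODE = OFF_PERMISSIVE;
SET GLOBAL GTID_MODE = ON_PERMISSIVE;
-- wait until SHOW STATUS LIKE 'Ongoing_anonymous_transaction_count' reports 0 everywhere
SET GLOBAL GTID_MODE = ON;
-- finally switch the replica to auto-positioning:
STOP REPLICA;
CHANGE REPLICATION SOURCE TO SOURCE_AUTO_POSITION = 1;
START REPLICA;
```

The same settings should also be persisted in my.cnf (or via SET PERSIST) so they survive a restart. Done this way, replication keeps running throughout and no data is lost.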
N_Z
(248 rep)
Jan 31, 2023, 08:48 AM
• Last activity: Apr 24, 2025, 04:04 PM
1
votes
1
answers
2252
views
MySQL GTID error 1236
I am trying to reduce the time it takes for a machine to come live in production; MySQL is the bottleneck. What I tried: I updated the code on one machine and restored MySQL, and everything worked fine. Then I created an EC2 AMI and launched a machine from it via the autoscaling group. On this new instance I do not take any live dump; after starting MySQL I configure replication, but it shows a GTID error. The whole process takes 30 to 40 minutes.
> Last_IO_Error: Got fatal error 1236 from master when
> reading data from binary log: 'The slave is connecting using CHANGE
> MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary
> logs containing GTIDs that the slave requires.
Can anybody tell me what I am doing wrong? If I take a fresh dump and restore it on the live machine, it works. MySQL version is 5.6.17.
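The usual fix when provisioning from an image is to tell the new slave which transactions the image already contains, instead of letting it request everything from the beginning. A sketch, assuming the value of gtid_executed was captured on the master when the AMI was created:

```sql
-- On the newly launched instance, before starting replication
RESET MASTER;
SET GLOBAL GTID_PURGED = '<master_uuid>:1-N';   -- gtid_executed at snapshot time
CHANGE MASTER TO MASTER_AUTO_POSITION = 1;
START SLAVE;
```

If the master purges binlogs faster than the 30-40 minutes the process takes, binlog retention (expire_logs_days on 5.6) also needs to be long enough that the missing window is still available.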
Obivan
(119 rep)
Sep 22, 2016, 05:20 PM
• Last activity: Apr 24, 2025, 10:06 AM
0
votes
1
answers
359
views
fixing GTID based multi source replication error on selected source
I can't find or think of a solution whenever I encounter replication errors on an affected source and need to reposition the GTID on the replica.
What I do is retrieve the GTID position from the (four) masters (`show master status`) and copy it to the replica server.
On the replica server:

```
set global gtid_slave_pos="1-10-1111111,2-20-2222222,3-30-3333333,4-40-444444"
```

Is there a way to reposition `gtid_slave_pos` for only the affected source, without causing any sync issues on the other sources that have no errors?
I'm using MariaDB 10.5 for master and replica.
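In MariaDB, gtid_slave_pos is one global value with a comma-separated component per replication domain, so repositioning a single source means re-setting the whole value while leaving the other domains' components unchanged. A sketch, with placeholder domain/server IDs and only domain 3 modified:

```sql
-- gtid_slave_pos can only be changed while no slave connections are running
STOP ALL SLAVES;
-- domains 1, 2 and 4 keep their current values; only domain 3 is repositioned
SET GLOBAL gtid_slave_pos = '1-10-1111111,2-20-2222222,3-30-3334000,4-40-444444';
START ALL SLAVES;
```

As long as the untouched domains' components are copied over exactly, the other sources resume where they left off.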
wagop
(1 rep)
Sep 29, 2022, 06:43 AM
• Last activity: Mar 2, 2025, 07:03 AM
0
votes
1
answers
41
views
How can I replicate transactions one by one in mysql?
I have one source server and one replica server, with source_delay set to 30 minutes. Replication is GTID-based and the MySQL version is 8.0.24.
1. If I accidentally drop table A, can I apply transactions up to just before the drop transaction?
2. What is the best solution when I accidentally drop a table and want to restore it with minimum data loss?
I tried restoring the table using the replica's copy, which holds the data as of 30 minutes earlier, but then the transactions from the delay window are missing.
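For question 1: yes, if you can find the GTID of the accidental DROP in the source's binlog (via mysqlbinlog), the delayed replica can apply everything up to but not including it. A sketch, with a placeholder GTID:

```sql
-- On the delayed replica
STOP REPLICA;
-- apply transactions up to, but not including, the DROP's GTID
START REPLICA UNTIL SQL_BEFORE_GTIDS = '3e11fa47-71ca-11e1-9e33-c80aa9429562:777';
-- wait for the until-condition to be reached, then recover table A from the replica
```

This is generally the minimum-loss recovery: the replica ends up containing every committed transaction except the drop itself.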
제임스으
(1 rep)
Dec 4, 2024, 12:55 AM
• Last activity: Dec 4, 2024, 10:16 PM
0
votes
0
answers
20
views
is it a good idea to use GTID for tenant meta data?
I want to build an audit log for tracking user activity using Kafka Connect CDC (Change Data Capture) and MySQL, and set a custom GTID (Global Transaction Identifier) before each transaction. Here's a high-level design pattern and implementation strategy.
**Overview of the Solution:**
**CDC with Kafka Connect:** I will use Debezium (a Kafka Connect connector) to capture changes in my MySQL database, which will be streamed to Kafka topics. This will act as the foundation for my audit log system.
**Custom GTID (Global Transaction ID):** Before executing each transaction in my application, I will set the GTID with custom information (e.g., tenant ID, email, UUID). This GTID will help me uniquely identify the transaction and associate it with a specific tenant and action: `SET gtid_next = 'tenantId:tenantEmail;randomUUID'`
**Audit Log Consumer:** Kafka consumers will read changes from the Kafka topics, process the audit log information (including the GTID), and store the logs in a database or external system.
What are the downsides of this approach? Could it cause issues with replication?
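One immediate downside: GTID_NEXT only accepts AUTOMATIC, ANONYMOUS, or a UUID:sequence-number pair, so free-form tenant metadata is rejected outright. A sketch of what the server does and does not allow:

```sql
-- Valid: a UUID plus a transaction sequence number
SET GTID_NEXT = '3e11fa47-71ca-11e1-9e33-c80aa9429562:42';

-- Invalid: rejected with ER_MALFORMED_GTID_SPECIFICATION
-- SET GTID_NEXT = 'tenantId:tenantEmail;randomUUID';
```

Even with a synthetic but well-formed UUID per tenant, manually assigned GTIDs collide with server-generated ones and break replication; carrying tenant metadata in a table column (which Debezium then captures) is the safer pattern.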
Naor Tedgi
(101 rep)
Sep 30, 2024, 03:37 PM
0
votes
0
answers
44
views
Resync mysql replica in a GTID setup gives error HA_ERR_FOUND_DUPP_KEY with AUTOPOSITION
I have used a procedure based on this page for years to sync a new replica or resync a broken one, without issue.
It uses `xtrabackup` to save and send the MySQL data to the slave, apply the redo log, and restart the slave with `MASTER_AUTO_POSITION`. Simple and effective. `xtrabackup` is convenient for us due to the size of the databases we have to transfer. We use it this way:

```
xtrabackup --backup --stream=xbstream --parallel=$((NB_PROC/2))
```

mysql: 5.7.42
xtrabackup: 2.4.29
Lately, when I start the slave, I get an error like this:

```
Last_Errno: 1062
Last_Error: Could not execute Write_rows event on table db_2.webhook_message_status; Duplicate entry '3304591' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.001615, end_log_pos 5021
Skip_Counter: 0
Exec_Master_Log_Pos: 4692
Relay_Log_Space: 143514642
...
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 1062
Last_SQL_Error: Could not execute Write_rows event on table db_2.webhook_message_status; Duplicate entry '3304591' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.001615, end_log_pos 5021
Replicate_Ignore_Server_Ids:
Master_Server_Id: 189087
Master_UUID: 144c0164-3223-11ef-8319-74563c5c838d
Master_Info_File: mysql.slave_master_info
```
Looking on the slave, I confirm the entry with id `3304591` is already there:
```
+---------+---------------------+---------+--------+-----------------------------+---------------------+
| id      | message_external_id | site_id | status | context                     | created_at          |
+---------+---------------------+---------+--------+-----------------------------+---------------------+
| 3304591 | xxxxxxxxxxxxxxxxxxx |     483 | read   | {"code":200,"title":"read"} | 2024-09-19 07:52:00 |
+---------+---------------------+---------+--------+-----------------------------+---------------------+
```
So it seems the slave no longer knows how to position itself properly.
As a workaround, I took the content of `xtrabackup_binlog_info`:

```
mysql-bin.001615 73610932 144c0164-3223-11ef-8319-74563c5c838d:1-14071690
```

and did this:

```
mysql> reset master;
mysql> set global GTID_PURGED="144c0164-3223-11ef-8319-74563c5c838d:1-14071690";
mysql> start slave;
```
and now it works:

```
Master_Server_Id: 189087
Master_UUID: 144c0164-3223-11ef-8319-74563c5c838d
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
...
Retrieved_Gtid_Set: 144c0164-3223-11ef-8319-74563c5c838d:14047201-14248907
Executed_Gtid_Set: 144c0164-3223-11ef-8319-74563c5c838d:1-14248907
Auto_Position: 1
```
As an alternative solution I use `pt-slave-restart` to bypass all 1062 errors, so after a while the slave is eventually back in sync.
Do you have any idea what could be the cause of this problem? It worked fine for the last 3 years without an issue, and we have not changed the major version of MySQL or any tool involved.
Is the workaround I'm doing fine? Do I have all the master's data on the slave?
best
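To answer the last question directly: GTID arithmetic can verify whether the slave holds everything the master has executed. A sketch, with the master's Executed_Gtid_Set (from `SHOW MASTER STATUS` on the master) pasted in:

```sql
-- Run on the slave: an empty result string means nothing is missing
SELECT GTID_SUBTRACT(
  '144c0164-3223-11ef-8319-74563c5c838d:1-14248907',  -- master's Executed_Gtid_Set
  @@GLOBAL.gtid_executed
) AS missing_on_slave;
```

For row-level verification, pt-table-checksum is the usual follow-up, since the GTID comparison only proves transactions were applied, not that skipped ones left identical rows.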
Baptiste Mille-Mathias
(130 rep)
Sep 19, 2024, 09:55 AM
• Last activity: Sep 25, 2024, 09:37 AM
0
votes
1
answers
231
views
What could cause replication to break on MySQL when replicating from 5.7.40 to 8.0 replica
We have a MySQL primary running on 5.7.40. We are in process of testing the upgrade to 8.0.36 (this is on AWS RDS). We had 2 MySQL-8 replicas and 2 MySQL-5.7 replicas replicating from the primary.
At almost the same time, both MySQL 8.0 replicas stopped replicating and complained with `HA_ERR_FOUND_DUPP_KEY`:

```
Replica SQL for channel '': Worker 2 failed executing transaction '87953f5d-7595-11ed-830d-02f4790d85ab:57805008598' at source log mysql-bin-changelog.676514, end_log_pos 52858646; Could not execute Write_rows event on table ebdb.bike_issues; Duplicate entry '177235118' for key 'bike_issues.PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's source log mysql-bin-changelog.676514, end_log_pos 52858646, Error_code: MY-001062
```
Weirdly, the replicas complained about different key values, but the timestamps were nearly the same (a few ms apart). The MySQL 5.7 replicas were fine, so clearly there were no hiccups on the primary side. Nothing shows up in the primary's logs around this time either.
The table it complained about is a very commonly written table, and we had these MySQL 8 replicas running for over a week without any replication issues.
We do Row-based, GTID based replication (gtid_mode ON, enforce_gtid_consistency ON)
I was able to resume replication by temporarily setting `slave_exec_mode` to `IDEMPOTENT`. When I checked the error logs after replication was back in sync, I didn't see errors for the first replica's key in the second replica's error logs and vice versa, i.e. they both failed on different keys. Could this be some issue on the replication receiver part? Or possibly some bug due to the version mismatch? Or some MySQL variable mismatch?
How can I debug this further? What could have possibly caused this blip?
**EDIT #1**
Output of `SHOW CREATE TABLE`:
On the primary:
```sql
Create Table: CREATE TABLE `bike_issues` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `issue_type` varchar(191) COLLATE utf8mb4_bin NOT NULL,
  `bike_id` int(11) NOT NULL,
  `reported_at` datetime NOT NULL,
  `resolved_at` datetime DEFAULT NULL,
  `resolution_type` varchar(191) COLLATE utf8mb4_bin DEFAULT NULL,
  `created_at` datetime NOT NULL,
  `updated_at` datetime NOT NULL,
  `token` varchar(191) COLLATE utf8mb4_bin DEFAULT NULL,
  `idempotency_key` varchar(191) COLLATE utf8mb4_bin DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `index_bike_issues_on_token` (`token`),
  UNIQUE KEY `index_bike_issues_on_idempotency_key` (`idempotency_key`),
  KEY `by_bike_id_issue_type_resolved_at` (`bike_id`,`issue_type`,`resolved_at`) USING BTREE,
  KEY `index_bike_issues_on_resolved_at` (`resolved_at`)
) ENGINE=InnoDB AUTO_INCREMENT=177614144 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
```
On the MySQL 8 replica:

```sql
Create Table: CREATE TABLE `bike_issues` (
  `id` int NOT NULL AUTO_INCREMENT,
  `issue_type` varchar(191) COLLATE utf8mb4_bin NOT NULL,
  `bike_id` int NOT NULL,
  `reported_at` datetime NOT NULL,
  `resolved_at` datetime DEFAULT NULL,
  `resolution_type` varchar(191) COLLATE utf8mb4_bin DEFAULT NULL,
  `created_at` datetime NOT NULL,
  `updated_at` datetime NOT NULL,
  `token` varchar(191) COLLATE utf8mb4_bin DEFAULT NULL,
  `idempotency_key` varchar(191) COLLATE utf8mb4_bin DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `index_bike_issues_on_token` (`token`),
  UNIQUE KEY `index_bike_issues_on_idempotency_key` (`idempotency_key`),
  KEY `by_bike_id_issue_type_resolved_at` (`bike_id`,`issue_type`,`resolved_at`) USING BTREE,
  KEY `index_bike_issues_on_resolved_at` (`resolved_at`)
) ENGINE=InnoDB AUTO_INCREMENT=177614497 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin
```
On Primary
show variables like '%binlog%';
+--------------------------------------------+----------------------+
| Variable_name | Value |
+--------------------------------------------+----------------------+
| binlog_cache_size | 32768 |
| binlog_checksum | NONE |
| binlog_direct_non_transactional_updates | OFF |
| binlog_error_action | IGNORE_ERROR |
| binlog_format | ROW |
| binlog_group_commit_sync_delay | 0 |
| binlog_group_commit_sync_no_delay_count | 0 |
| binlog_gtid_simple_recovery | ON |
| binlog_max_flush_queue_time | 0 |
| binlog_order_commits | ON |
| binlog_row_image | FULL |
| binlog_rows_query_log_events | OFF |
| binlog_stmt_cache_size | 32768 |
| binlog_transaction_dependency_history_size | 25000 |
| binlog_transaction_dependency_tracking | COMMIT_ORDER |
| innodb_api_enable_binlog | OFF |
| innodb_locks_unsafe_for_binlog | OFF |
| log_statements_unsafe_for_binlog | OFF |
| max_binlog_cache_size | 18446744073709547520 |
| max_binlog_size | 134217728 |
| max_binlog_stmt_cache_size | 18446744073709547520 |
| sync_binlog | 2000 |
+--------------------------------------------+----------------------+
mysql> show variables like '%slave%';
+------------------------------+-----------------------+
| Variable_name | Value |
+------------------------------+-----------------------+
| init_slave | |
| log_slave_updates | ON |
| log_slow_slave_statements | ON |
| pseudo_slave_mode | OFF |
| rpl_stop_slave_timeout | 31536000 |
| slave_allow_batching | OFF |
| slave_checkpoint_group | 512 |
| slave_checkpoint_period | 300 |
| slave_compressed_protocol | OFF |
| slave_exec_mode | STRICT |
| slave_load_tmpdir | /rdsdbdata/tmp |
| slave_max_allowed_packet | 1073741824 |
| slave_net_timeout | 60 |
| slave_parallel_type | DATABASE |
| slave_parallel_workers | 0 |
| slave_pending_jobs_size_max | 16777216 |
| slave_preserve_commit_order | OFF |
| slave_rows_search_algorithms | TABLE_SCAN,INDEX_SCAN |
| slave_skip_errors | OFF |
| slave_sql_verify_checksum | ON |
| slave_transaction_retries | 10 |
| slave_type_conversions | |
| sql_slave_skip_counter | 0 |
+------------------------------+-----------------------+
mysql> show variables where variable_name in ('gtid_mode', 'enforce_gtid_consistency', 'innodb_flush_log_at_trx_commit');
+--------------------------------+-------+
| Variable_name | Value |
+--------------------------------+-------+
| enforce_gtid_consistency | ON |
| gtid_mode | ON |
| innodb_flush_log_at_trx_commit | 2 |
+--------------------------------+-------+
On MySQL 8.0 replica
mysql> show variables like '%binlog%';
+------------------------------------------------+----------------------+
| Variable_name | Value |
+------------------------------------------------+----------------------+
| binlog_cache_size | 32768 |
| binlog_checksum | NONE |
| binlog_direct_non_transactional_updates | OFF |
| binlog_encryption | OFF |
| binlog_error_action | ABORT_SERVER |
| binlog_expire_logs_auto_purge | ON |
| binlog_expire_logs_seconds | 2592000 |
| binlog_format | ROW |
| binlog_group_commit_sync_delay | 0 |
| binlog_group_commit_sync_no_delay_count | 0 |
| binlog_gtid_simple_recovery | ON |
| binlog_max_flush_queue_time | 0 |
| binlog_order_commits | ON |
| binlog_rotate_encryption_master_key_at_startup | OFF |
| binlog_row_event_max_size | 8192 |
| binlog_row_image | FULL |
| binlog_row_metadata | MINIMAL |
| binlog_row_value_options | |
| binlog_rows_query_log_events | OFF |
| binlog_stmt_cache_size | 32768 |
| binlog_transaction_compression | OFF |
| binlog_transaction_compression_level_zstd | 3 |
| binlog_transaction_dependency_history_size | 25000 |
| binlog_transaction_dependency_tracking | COMMIT_ORDER |
| innodb_api_enable_binlog | OFF |
| log_statements_unsafe_for_binlog | OFF |
| max_binlog_cache_size | 18446744073709547520 |
| max_binlog_size | 134217728 |
| max_binlog_stmt_cache_size | 18446744073709547520 |
| sync_binlog | 1000 |
+------------------------------------------------+----------------------+
mysql> show variables where variable_name in ('gtid_mode', 'enforce_gtid_consistency', 'innodb_flush_log_at_trx_commit');
+--------------------------------+-------+
| Variable_name | Value |
+--------------------------------+-------+
| enforce_gtid_consistency | ON |
| gtid_mode | ON |
| innodb_flush_log_at_trx_commit | 2 |
+--------------------------------+-------+
mysql> show variables like '%replica%';
+-----------------------------------------------+----------------+
| Variable_name | Value |
+-----------------------------------------------+----------------+
| group_replication_consistency | EVENTUAL |
| init_replica | |
| innodb_replication_delay | 0 |
| log_replica_updates | ON |
| log_slow_replica_statements | OFF |
| pseudo_replica_mode | OFF |
| replica_allow_batching | ON |
| replica_checkpoint_group | 512 |
| replica_checkpoint_period | 300 |
| replica_compressed_protocol | OFF |
| replica_exec_mode | STRICT |
| replica_load_tmpdir | /rdsdbdata/tmp |
| replica_max_allowed_packet | 1073741824 |
| replica_net_timeout | 60 |
| replica_parallel_type | LOGICAL_CLOCK |
| replica_parallel_workers | 32 |
| replica_pending_jobs_size_max | 134217728 |
| replica_preserve_commit_order | ON |
| replica_skip_errors | OFF |
| replica_sql_verify_checksum | ON |
| replica_transaction_retries | 10 |
| replica_type_conversions | |
| replication_optimize_for_static_plugin_config | OFF |
| replication_sender_observe_commit_only | OFF |
| rpl_stop_replica_timeout | 31536000 |
| skip_replica_start | ON |
| sql_replica_skip_counter | 0 |
+-----------------------------------------------+----------------+
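One configuration difference worth ruling out: the 8.0 replicas apply transactions in parallel (replica_parallel_workers = 32 with LOGICAL_CLOCK) while the 5.7 replicas are single-threaded (slave_parallel_workers = 0). As a diagnostic, not a fix, one 8.0 replica can be temporarily forced to serialize and observed for recurrence:

```sql
-- On one MySQL 8.0 replica; reduces apply throughput, diagnostic only
STOP REPLICA;
SET GLOBAL replica_parallel_workers = 1;   -- a single applier worker serializes events
START REPLICA;
```

If the duplicate-key errors stop, the suspect area narrows to parallel apply on the receiver side rather than anything on the 5.7 primary.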
Rahul
(119 rep)
Mar 16, 2024, 01:49 AM
• Last activity: Mar 22, 2024, 06:00 AM
0
votes
3
answers
2930
views
Broken Slave MySQL : How To Fix MySQL Replication (BINLOG) error in the Slave?
Before we start I can give some info about the versions,
Ubuntu - 14.04, MySQL - 5.5
I have successfully configured the Master-Slave replication with two EC2 Ubuntu instances. My master is in Ubuntu 14 and slave is in Ubuntu 18. The replication was successful and the slave was replicating data for the last 3-4 days. But, suddenly my slave(Ubuntu 18) stopped replication with a query error.
**Error Message :** Last_SQL_Error: Error 'Duplicate entry '11379183' for key 'PRIMARY'' on query. Default database: 'test'. Query: 'INSERT INTO testTable (id,value1,value2,value3,value4,value5,value6) VALUES(22,"5","429438","1592499300","1","8","1")'
> Slave_IO_Running: Yes
>
> Slave_SQL_Running: No
I have tried,
> STOP SLAVE;
>
> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
>
> START SLAVE;
It helped for some time, but the replication broke again.
How can we resolve this kind of replication error? Is there any automated method for fixing a broken slave?
Is GTID-based replication possible with MySQL 5.5? I think GTID is only available from MySQL version 5.6.
Dipin V A
(109 rep)
Jun 25, 2020, 02:55 PM
• Last activity: Nov 7, 2023, 06:22 AM
5
votes
1
answers
4303
views
mysqlpump --set-gtid-purged=OFF still adds @SESSION.SQL_LOG_BIN=0;
Whenever I add --set-gtid-purged=OFF, **it still adds @@SESSION.SQL_LOG_BIN=0; to the output.**
According to the documentation, shouldn't it get removed?
*--set-gtid-purged=OFF: SET @@SESSION.SQL_LOG_BIN=0; is not added to the output.*
*Dump created by MySQL pump utility, version: 8.0.16, Win64 (x86_64)*
*SET @@SESSION.SQL_LOG_BIN= 0;*
Is this correct behavior for this tool?
Watson
(155 rep)
Jul 2, 2019, 05:50 PM
• Last activity: Sep 23, 2023, 02:07 PM
1
votes
1
answers
1486
views
How to resolve GTID difference between Master and Slave servers in MariaDB replication using MaxScale?
I accidentally wrote directly to the Slave DB, and therefore the data is no longer in sync.
When I check MaxScale status, all servers are running, but the Slaves are not detected as "Slave" and there is a difference in GTID.
Does anyone know how to put them back in sync?
I've already searched for information into the documentation of MaxScale, but I could not find this specific usecase there: https://mariadb.com/kb/en/mariadb-maxscale-2302-automatic-failover-with-mariadb-monitor/

Lucian Tarbă
(21 rep)
Jun 2, 2023, 08:04 AM
• Last activity: Jun 2, 2023, 11:56 AM
1
votes
2
answers
7137
views
MariaDB GTID current_pos vs slave_pos
In a standard master-slave setup using MariaDB (10.3 at this time) and GTID (Global Transaction ID), it is unclear whether to use **current_pos** or **slave_pos**.
Reminder from the documentation:
> - Using the value **current_pos** causes the slave to set its position based on the gtid_current_pos system variable. The slave takes the position given to it by the master server.
> - Using the value **slave_pos** causes the slave to instead use the gtid_slave_pos system variable. With this method, the slave takes into account transactions present in its own Binary Log.
I think I understand the difference but what is the recommended option?
To the extent that both exist, one might be better for one use case and the other for another use case but the documentation does not clearly state which one should be used in which case...
Which of these options would you use for a very common case of master-slave replication?
## EDIT 2019-01-18: Context
The context that led me to ask myself whether I should use current_pos or slave_pos:
I recently upgraded some MariaDB clusters (master-slave setups) from MariaDB 10.1 or 10.2 to MariaDB 10.3 on Debian Stretch, using APT.
In practice, I always proceed like this:
1. Upgrade slave(s) to 10.3 (using an apt dist-upgrade)
2. Run mysql_upgrade
3. Switchover master (using Signal18 Replication-manager)
4. Upgrade the old master (that became a slave at the previous step)
5. Switchover master again to return to the original configuration
However, during recent updates, I encountered some failures at slave start after running the *mysql_upgrade* command. The slave could not find binlogs using the current GTID position.
This happens, I believe, when there have been no transactions on the master since the last master switchover. In that case, switching the GTID mode from *current_pos* to *slave_pos* fixes the problem...
So should I always use ***slave_pos***? However, even in that case, Replication-Manager forces the GTID mode back to *current_pos* after a switchover...
Nicolas Payart
(2508 rep)
Jan 14, 2019, 10:52 AM
• Last activity: May 6, 2023, 05:00 PM
1
votes
0
answers
1061
views
How to see content of specific GTID transaction in MySQL
To maintain my `GTID-Replication`, I want to see the content of the GTID transaction. I use this command to do that: ``` #I want to see the content of binlog when GTID=1 mysqlbinlog -u USER -p'PASS' mysql-bin.000001 --database=DB --include-gtids='SERVER_ID:1' --verbose ``` Besides the content, the o...
To maintain my `GTID-Replication`, I want to see the content of the GTID transaction. I use this command to do that:
#I want to see the content of binlog when GTID=1
mysqlbinlog -u USER -p'PASS' mysql-bin.000001 --database=DB --include-gtids='SERVER_ID:1' --verbose
Besides the content, the output contains many strange lines starting with `# at <number>`. When I add `| wc -l`, those lines run into the thousands.
What I want is:
1. What are those lines? Is there a way to remove them?
2. The output contains many lines starting with #, /* (comments). Is there a way to remove them?
I know it is possible to redirect the output and then use commands like `grep` to remove them; I want to know if there is a utility within `mysqlbinlog` to do that for me.
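As far as I know, `mysqlbinlog` has no built-in flag to suppress those comment lines (though `--base64-output=DECODE-ROWS` together with `--verbose` does remove the base64 noise); the usual workaround is a `grep` filter. A minimal sketch, run here on sample text shaped like binlog output rather than a real binlog:

```shell
# Sample lines shaped like mysqlbinlog output (assumed, not from a real binlog)
sample='# at 194
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
INSERT INTO t VALUES (1);
COMMIT;'
# Drop lines starting with '#' or '/*'
printf '%s\n' "$sample" | grep -v -e '^#' -e '^/\*'
```

In real use, replace the `printf` with the `mysqlbinlog ... --verbose |` pipeline from the question.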
N_Z
(248 rep)
Feb 5, 2023, 11:46 AM
• Last activity: Feb 5, 2023, 02:26 PM
2
votes
1
answers
296
views
GTID replication from a MySQL 5.7.38 master to MariaDB 10.6.10?
Is it possible to use GTID in a replication scenario where MySQL 5.7.38 is the master and MariaDB 10.6.10 is the slave?
Is it possible to use GTID in a replication scenario where MySQL 5.7.38 is the master and MariaDB 10.6.10 is the slave?
aeaeae
(23 rep)
Nov 4, 2022, 06:24 AM
• Last activity: Nov 4, 2022, 04:07 PM
0
votes
0
answers
1936
views
ERROR 1682 (HY000): Native table 'performance_schema'.'global_variables' has the wrong structure
I’m getting stuck into a strange problem. When I check for global_variable -it says my performance schema has wrong structure: mysql> SHOW GLOBAL VARIABLES WHERE Variable_name = 'gtid_mode'; ERROR 1682 (HY000): Native table 'performance_schema'.'global_variables' has the wrong structure However, I h...
I’ve run into a strange problem. When I check a global variable, it says my performance schema has the wrong structure:
mysql> SHOW GLOBAL VARIABLES WHERE Variable_name = 'gtid_mode';
ERROR 1682 (HY000): Native table 'performance_schema'.'global_variables' has the
wrong structure
However, I have already tried upgrading the performance schema, and it says it’s already up to date. Not sure why it's throwing this error, then:
mysql_upgrade
Checking if update is needed.
This installation of MySQL is already upgraded to 5.7.35-38, use --force if you
still need to run mysql_upgrade
Below is the structure of my global_variables table:
mysql> describe performance_schema.global_variables ;
+----------------+---------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+----------------+---------------+------+-----+---------+-------+
| VARIABLE_NAME | varchar(64) | NO | | NULL | |
| VARIABLE_VALUE | varchar(2048) | YES | | NULL | |
+----------------+---------------+------+-----+---------+-------+
2 rows in set (0.00 sec)
Just in case it helps: I have another host where the performance_schema.global_variables table has the same structure. However, there I don’t see this kind of error/message, and gtid_mode is shown properly using the same command. How can I fix it? Some of my other work is being impacted because of this.
Any help is really appreciated. Also, if you can provide the cause of this, that would be really helpful, so that I can take care of it next time.
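For reference, the fix I've seen suggested elsewhere (unverified here; back up first) is to force a rebuild of the system tables and then restart the server so the new definitions are loaded:

```shell
# Rebuild system/performance_schema tables even though the version matches
mysql_upgrade --force -u root -p
# ...then restart mysqld so the rebuilt definitions take effect
```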
Shiwangini
(380 rep)
Jan 15, 2022, 08:17 AM
1
votes
2
answers
2844
views
Does GTID have only benefits in MySQL over "standard" Replication?
I'm used to setup MySQL replication in the old-fashioned way and notice now there is a method using GTID. From what I read, GTID is presented has an improvement over "standard" replication. But I'm sceptical because if GTID is so good I'm surprised this is not the default behaviour, so I'm curious i...
I'm used to setting up MySQL replication in the old-fashioned way and notice there is now a method using GTID. From what I read, GTID is presented as an improvement over "standard" replication.
But I'm sceptical: if GTID is so good, I'm surprised it is not the default behaviour, so I'm curious whether there is any downside to using GTID.
I'm using MySQL 5.7 as of now.
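One concrete restriction I'm aware of (worth verifying in the docs for your exact version): enabling GTID requires `enforce_gtid_consistency=ON`, which rejects some otherwise-legal statements, for example:

```sql
-- With gtid_mode=ON and enforce_gtid_consistency=ON in MySQL 5.7:
CREATE TABLE t2 SELECT * FROM t1;
-- fails with ERROR 1786 (HY000):
-- Statement violates GTID consistency: CREATE TABLE ... SELECT.
```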
Best.
Baptiste Mille-Mathias
(130 rep)
Jun 30, 2021, 03:03 PM
• Last activity: Sep 2, 2021, 02:19 PM
0
votes
1
answers
768
views
Enabling mySQL replication using GTID after purging binlog
I am currently working to get GTID based replication established between mySQL 5.7.33 on 2 servers. Amidst this work, I ran into a disk space issue and purged the binary logs on the master. Now I have run into the problem where I cannot get the replication started due to the missing binary logs. Wha...
I am currently working to get GTID-based replication established between two servers running MySQL 5.7.33. Amidst this work, I ran into a disk space issue and purged the binary logs on the master.
Now I have run into the problem where I cannot get the replication started due to the missing binary logs.
What can I do, without the bin logs, to get replication going?
Thanks for any advice!
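For reference, the approach I've seen suggested (a sketch, not verified here) is to re-seed the slave from a fresh dump that records the master's current GTID set:

```sql
-- On the master (shell):
--   mysqldump --all-databases --single-transaction --set-gtid-purged=ON > dump.sql
-- On the slave: clear its GTID state, load the dump, then re-point it
RESET MASTER;
SOURCE dump.sql;
CHANGE MASTER TO MASTER_HOST='...', MASTER_AUTO_POSITION=1;
START SLAVE;
```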
user3704686
(1 rep)
Apr 7, 2021, 12:25 AM
• Last activity: Apr 7, 2021, 12:50 AM
3
votes
0
answers
2158
views
Got fatal error 1236 from master when reading data from binary log
I am trying to do a MySQL failover, in some situations I endup with broken replication. Any idea what could be the cause of it ? ### Background Needed to perform a host updates on master host. Hence promoting one of the slave as new master and restore replication across all slaves without missing an...
I am trying to do a MySQL failover, and in some situations I end up with broken replication. Any idea what could be the cause of it?
### Background
Needed to perform host updates on the master host, hence promoting one of the slaves as the new master and restoring replication across all slaves without missing any transactions.
Expected: since replication is current, failover should be quick and have minimal read/write downtime.
* Lock current master to RO, flush everything in the buffer or pending (which appears to be imperfect)
* Make sure all slaves replicated everything
* Flip master
* Point all slaves to new master and restore replication
### Technical details
MySQL 5.6, one master (RW) and 25 slaves (RO). All slaves current. No network issues; all of them are in the same region, connected over LAN.
my.cnf
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 384M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 100M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50
Connection pool: C3P0. If MySQL is in RO mode, C3P0 will not set up any new connections; it churns the pool until the DB cname resolves to a RW-enabled MySQL host. However, there could be existing connections to the old master.
### Operational details
To flip the database we perform the below operations sequentially
1) Execute a SQL statement through Python: `FLUSH TABLES WITH READ LOCK; SET @@global.read_only = ON`
2) Wait till the DB is in RO mode by querying `SELECT @@global.read_only`
3) Sleep 5 sec to make sure we give replication time to catch up
4) Pick one slave to promote; make sure the GTID set of the slave (`show slave status`) is a subset of the master's executed GTID set (`show master status`)
5) On the new master execute `stop slave`, `reset slave all`, `set @@global.read_only = OFF`, `UNLOCK TABLES`
6) Update the DNS of the master DB cname; sleep 90 sec for DNS caches on the hosts to update
7) On each slave run the below commands:
stop slave
CHANGE MASTER TO master_host= master_port= master_sll_path= MASTER_AUTO_POSITION=1, ..
start slave
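Step 4's subset check can be expressed with MySQL's built-in GTID functions (a sketch; `<slave_set>` stands for the slave's `Executed_Gtid_Set` from `show slave status`):

```sql
-- Run on the old master; returns 1 when the slave has everything
SELECT GTID_SUBSET('<slave_set>', @@global.gtid_executed);
-- Shows exactly which GTIDs the slave is missing, if any
SELECT GTID_SUBTRACT(@@global.gtid_executed, '<slave_set>');
```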
### Problem
At times replication is current and undergoes a smooth failover, apart from the old master (which we restore through a MySQL dump). However, sometimes slaves run into broken replication reporting missing transactions, and the only remedies are to restore through a MySQL dump or to set the purged GTID position to the latest one from the new master. Both point to missing transactions.
error 1236, Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting
using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs
that the slave requires. Replicate the missing transactions from elsewhere, or provision a new slave from
backup. Consider increasing the master's binary log expiration period. The GTID sets and the missing
purged transactions are too long to print in this message. For more information, please see the master's
error log or the manual for GTID_SUBTRACT.'
### Theories
The only scenario I can think of is that the old master had some transactions which were not sent to the slave that was promoted to master. These transactions arrived after the new master was promoted and were replicated to all the slaves still connected to the old master (while the script slept for 90 sec), and the newest GTIDs are not available on the new master, leading to the conclusion that the GTIDs were purged.
Statements contradicting the above theory:
* The database is in RO mode, hence C3P0 can't grab new connections and write to the DB
* Even the existing connections can't write to the DB, since the DB is in RO mode and we issued `FLUSH TABLES WITH READ LOCK`. However, did we give enough time for existing transactions to complete? i.e., did we wait for the flush tables command to complete? (suspicion)
* `innodb_flush_log_at_trx_commit = 1` ensures ACID. Hence there is no chance of new transactions pending in the buffer which are about to be written to the DB and logs.
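As an aside, instead of the fixed 5-second sleep in step 3, MySQL 5.6 has a GTID wait function that makes the catch-up deterministic (a sketch; `<master_set>` is the executed GTID set from `show master status` on the locked master):

```sql
-- On each slave: blocks until the SQL thread has applied <master_set>,
-- or returns after the 60-second timeout
SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS('<master_set>', 60);
```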
I'm not sure what else could cause broken replication. However, I have seen a post saying `FLUSH TABLES WITH READ LOCK` will not halt writes to InnoDB, and I'm unsure why the DB could accept more transactions while it is in RO mode. Should I run `FLUSH TABLES` and `FLUSH LOGS` as well? That would be flying blind: unsure if it would fix it, and even if it does, it leaves me with uncertainty. Any thoughts from MySQL experts are appreciated.
The fallback approach is to revert replication for all slaves back to the old master, but even that would leave the missing transactions unreplicated, which is the worry here.
RobinHood
(49 rep)
Mar 28, 2021, 03:33 PM
Showing page 1 of 20 total questions