Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0
votes
1
answers
145
views
Scaling MySQL Cluster
I have an NDB cluster which consists of 4 data nodes. Some of the configuration parameters and system resources are as follows:
CPU: 32
RAM: 128GB
Replication factor: 2
Data memory: 70GB
Index memory: 10GB
I am planning to extend the cluster's capacity. I have two options to achieve this:
1. I can add more RAM to the data nodes, so DataMemory and IndexMemory can be doubled
2. I can add 4 more data nodes to the cluster, so total data memory and index memory increase as well
I am not sure which option is better for extending cluster capacity. If I choose the first option, the data size stored per node will be bigger. On the other hand, what about 140GB of DataMemory per data node? Is there a recommended maximum value for the DataMemory and IndexMemory parameters?
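For reference, a minimal sketch of where these knobs live and how current usage can be checked before deciding; the values shown are illustrative assumptions, not recommendations:
# from the management client: reports current DataMemory/IndexMemory usage per data node
ndb_mgm -e "ALL REPORT MEMORYUSAGE"

# config.ini on the management node, [ndbd default] section (illustrative values only)
[ndbd default]
NoOfReplicas=2
DataMemory=140G     # per-node data memory after a RAM upgrade (example)
IndexMemory=20G     # must still leave room for OS, buffers and send/receive memory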
Gpost
(73 rep)
Nov 8, 2017, 08:42 PM
• Last activity: Jul 20, 2025, 10:01 PM
0
votes
1
answers
158
views
What is MySQL Cluster's Practical Memory Limit?
We are designing a cloud SaaS and are actively considering **MySQL Cluster** for our DB. We will be writing more than 100M rows/day to the DB. We need to understand whether there is any practical memory limit.
I understand that MySQL Cluster supports at most 48 data nodes, but is there practically any limit on the RAM each data node can have? RAM will be expensive anyway, but we want to rule this out if there is any hard limit.
The question stems from this link: http://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-ndb-innodb-engines.html , where the Storage Limit is given as **3TB**. The way it is written is slightly ambiguous.
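As a back-of-envelope check (the average in-memory row size of about 250 bytes is purely an assumption):
100,000,000 rows/day × 250 bytes × 2 replicas ≈ 50 GB of DataMemory per day,
or roughly 1.5 TB per month before indexes and overhead, which is why that 3TB figure matters for capacity planning.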
Thanks in advance.
romanch
(1 rep)
Sep 6, 2016, 12:07 AM
• Last activity: Jul 7, 2025, 01:06 AM
1
votes
1
answers
224
views
mysql data node not connecting to node manager on AWS EC2
I am trying to set up a MySQL management node + data node + SQL node on Server A, and another data node on Server B. Both are running Ubuntu 16.04 amd64.
**Server A**
*/etc/hosts* (also on Server B), both with private IPs:
172.xx.xx.xx ip-172-xx-xx-xx-eu-west-1.compute.internal
172.xx.xx.xx ip-172-xx-xx-xx.eu-east-1.compute.internal
*/config.ini*
[ndbd default]
NoOfReplicas=2 # Number of replicas
[ndb_mgmd]
hostname=ip-172-xx-xx-xx.eu-west-1.compute.internal

ndb_mgm> show
Connected to Management Server at: 172.xx.xx.xx:1186 <- A
Cluster Configuration
---------------------
[ndbd(NDB)] 1 node(s)
id=2 (not connected, accepting connect from 172.xx.xx.xx) <- B
[ndb_mgmd(MGM)] 1 node(s)
id=1 @172.x.x.x (mysql-5.7.22 ndb-7.6.6)
[mysqld(API)] 1 node(s)
id=3 (not connected, accepting connect from 172.xx.xx.xx) <- A
I'm not sure how I would connect the mysqld (API) node; that should work fine once I've installed the server/client on the cluster.
Also, I've set inbound rules on both instances A & B to accept all traffic on ports 1000-60000, just in case I was missing something, and outbound is open to everything.
I wonder what it is I am missing; I got so tired today trying to find out what is going on with this setup.
Any help will be appreciated!
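For comparison, a minimal config.ini sketch for the described topology (one data node on each server); the node IDs and the sections not shown in the question are assumptions, not the poster's actual file:
[ndbd default]
NoOfReplicas=2
[ndb_mgmd]
NodeId=1
hostname=ip-172-xx-xx-xx.eu-west-1.compute.internal   # Server A
[ndbd]
NodeId=2
hostname=ip-172-xx-xx-xx.eu-west-1.compute.internal   # data node on Server A
[ndbd]
NodeId=3
hostname=ip-172-xx-xx-xx.eu-east-1.compute.internal   # data node on Server B
[mysqld]
NodeId=4
hostname=ip-172-xx-xx-xx.eu-west-1.compute.internal   # SQL/API node on Server A
Each ndbd and mysqld then needs an ndb-connectstring (command line or [mysql_cluster] section) pointing at the management host, and TCP 1186 plus the data-node ports must be reachable between the instances; the 1000-60000 rule above should already cover that.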
mohamed tebry
(11 rep)
Oct 14, 2019, 06:03 PM
• Last activity: Jun 6, 2025, 12:03 AM
0
votes
1
answers
230
views
MySQL database cluster vs. replication setup for large crawler indexes
What is the most efficient way to set up a MySQL cluster or replication for an application that has a crawler/indexing engine doing thousands of reads/writes, plus a separate front-end for public access?
It is kind of like creating a mini Google search engine. The database that keeps the latest crawl data has to be kept separate so that it does not impact the performance of the publicly-accessed database. However, eventually the two databases need to be reconciled so that the publicly-accessed database has all the latest data.
So my idea would be that the crawler DB is the master, while the publicly-accessed database is the slave. Edit/insert actions on the publicly-accessed database would then really need to be done on the master DB (same as the crawler).
Is this the best way to do such a thing? Is there a better setup?
More information:
Currently there is a lot of reading/updating/inserting at a very fast pace while the crawler is running.
For example, the crawler could send 10 businesses, each with a location and items, every 2-3 seconds, at which point the receiving server needs to do the following: search the DB to see whether the business already exists (out of a few hundred thousand records) and, if not, create it. Then check whether that location (out of half a million records) exists for that business and, if not, create it. Then loop through each item of the business and check whether it exists (out of 40+ million records); if it does, update it, if not, create it. Old ones are backed up and then deleted.
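As an aside, that check-then-insert/update pattern can often be collapsed into a single statement per row, which cuts round trips regardless of the replication topology; a sketch with hypothetical table and column names:
-- table/column names are made up for illustration; requires a UNIQUE key on the
-- natural identifier (e.g. business_key) so duplicates resolve to an update
INSERT INTO business (business_key, name, last_seen)
VALUES (?, ?, NOW())
ON DUPLICATE KEY UPDATE name = VALUES(name), last_seen = VALUES(last_seen);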
zen
(121 rep)
Jan 26, 2020, 07:21 PM
• Last activity: Jun 5, 2025, 09:07 PM
1
votes
1
answers
242
views
Is there a way to hide mysql innodb cluster queries from general_log?
When I start my InnoDB cluster I get many lines of cluster-related logs in the MySQL general_log file, like those below. How can I hide these from the log?
2020-06-26T14:40:43.719029Z 31774 Connect mysql_router1_6dv772yz7qu1@dbsg1 on using SSL/TLS
2020-06-26T14:40:43.719494Z 31774 Query SET @@SESSION.autocommit=1, @@SESSION.character_set_client=utf8, @@SESSION.character_set_results=utf8, @@SESSION.character_set_connection=utf8, @@SESSION.sql_mode='ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION'
2020-06-26T14:40:43.719659Z 31773 Quit
2020-06-26T14:40:43.720231Z 31774 Query SET @@SESSION.group_replication_consistency='EVENTUAL'
2020-06-26T14:40:43.720509Z 31774 Query START TRANSACTION
2020-06-26T14:40:43.720740Z 31774 Query SELECT * FROM mysql_innodb_cluster_metadata.schema_version
2020-06-26T14:40:43.721185Z 31774 Query select cluster_type from mysql_innodb_cluster_metadata.v2_this_instance
2020-06-26T14:40:43.723301Z 31774 Query select I.mysql_server_uuid, I.endpoint, I.xendpoint from mysql_innodb_cluster_metadata.v2_instances I join mysql_innodb_cluster_metadata.v2_gr_clusters C on I.cluster_id = C.cluster_id where C.cluster_name = 'prodcluster' AND C.group_name = '8a70e310-a997-4d59-8e81-92dab9daa5be'
2020-06-26T14:40:43.723985Z 31774 Query COMMIT
2020-06-26T14:40:43.724317Z 31774 Query show status like 'group_replication_primary_member'
2020-06-26T14:40:43.725621Z 31774 Query SELECT member_id, member_host, member_port, member_state, @@group_replication_single_primary_mode FROM performance_schema.replication_group_members WHERE channel_name = 'group_replication_applier'
2020-06-26T14:40:43.793743Z 31771 Query START TRANSACTION
2020-06-26T14:40:43.933131Z 31770 Query SELECT * FROM mysql_innodb_cluster_metadata.schema_version
2020-06-26T14:40:44.032471Z 31771 Query SELECT * FROM mysql_innodb_cluster_metadata.schema_version
2020-06-26T14:40:44.171571Z 31770 Query select cluster_type from mysql_innodb_cluster_metadata.v2_this_instance
2020-06-26T14:40:44.226921Z 31774 Quit
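As far as I know there is no built-in per-account filter for the general log, so a common workaround (a sketch, and only an assumption that it fits this setup) is to send the general log to a table and filter the router/cluster sessions out when reading it, or simply keep the general log off in normal operation:
-- route the general log to a table instead of a file
SET GLOBAL log_output = 'TABLE';
SET GLOBAL general_log = 'ON';

-- then exclude the router/cluster-metadata sessions when reading it
SELECT event_time, user_host, argument
FROM mysql.general_log
WHERE user_host NOT LIKE 'mysql_router%';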
Ken
(21 rep)
Jun 26, 2020, 02:45 PM
• Last activity: Jun 5, 2025, 12:04 AM
2
votes
1
answers
2039
views
mysql cluster table full error
I installed a new MySQL Cluster on Windows (for evaluation). I created a table using the command below. After I insert just over 1 million records into the table, it gives me a "table datadump is full" exception. I have plenty of hard disk space and RAM free, and 1 million records is well below the maximum number of rows for the table. Also, the MySQL RAM usage has not increased during the inserts. I am assuming that MySQL has an internal cache limit for the table that was hit. Is there a config.ini change I can make to disable caching of rows in a cluster table, or a modification to the create table script to do the same?
create table datadump (name varchar(30) not null) engine=ndb;
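For what it's worth, with ENGINE=ndb the rows live in the data nodes' DataMemory rather than in a cache, and the defaults are small, so "table is full" usually means DataMemory is exhausted; a config.ini sketch with illustrative values (an assumption, not a tuned recommendation):
[ndbd default]
DataMemory=1G       # the default is only around 80M in many releases
IndexMemory=256M    # hash indexes for primary/unique keys live here
Changing these requires restarting the data nodes; ALL REPORT MEMORYUSAGE in ndb_mgm shows how full they currently are.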
user18896654353
(121 rep)
Jan 24, 2014, 03:40 PM
• Last activity: May 30, 2025, 05:01 AM
0
votes
2
answers
280
views
MySQL NDB Cluster settings to prefer data node on localhost than remote
My current MySQL NDB Cluster configuration:
- Server 1 - management node
- Server 2 - huge machine running both ndbd and mysqld
- Server 3 - huge machine running both ndbd and mysqld
- NoOfReplicas = 1
So server 2 has an exact copy of server 3's data.
During evaluation of this setup I discovered that mysqld on server 2 is sending about 50% of its queries to the ndbd running on server 3.
With a greater number of nodes this would be fine for load balancing, but with my specific configuration (both nodes have the same dataset) I'd like to get maximum performance by telling mysqld on each server to PREFER connecting to the ndbd process running on localhost, and only connect to the ndbd on the remote host if the local one is not available (i.e. during a restart).
I've read something about a server access cost table that should give the query optimizer information about latency between nodes, but I can't find that blog post right now.
Any help?
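One knob worth checking (assuming NDB 7.5 or later; the variable name should be verified against the docs for your version) is the mysqld option that nominates a "nearby" data node so the transaction coordinator is chosen on the local machine; a sketch:
# my.cnf on server 2, [mysqld] section -- node id 2 is assumed to be the local ndbd
ndb_data_node_neighbour=2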
Matthias
(149 rep)
Nov 9, 2016, 03:00 PM
• Last activity: May 13, 2025, 12:00 PM
2
votes
1
answers
990
views
What happens when a MySQL Cluster node is disconnected
I am trying to design a multi-master clustered MySQL database which will be spread throughout the country.
The main reason for having a cluster is that when the internet connection at one location (or node) is lost, that location should still be able to read and modify the contents of the database locally, so that when the connection is restored, the rest of the cluster gets updated.
I've gone through the MySQL Cluster documentation, but I couldn't find this specific question answered. Is it possible to achieve this with MySQL Cluster?
Saif Ur Rehman
(121 rep)
Dec 23, 2015, 04:16 PM
• Last activity: Apr 13, 2025, 05:02 PM
1
votes
1
answers
749
views
Excessive RAM consumption in MySQL Cluster
At the moment I have configured a cluster with 5 nodes:
2 data nodes
2 SQL nodes
1 management node
My config.ini file is the following:
[ndbd default]
NoOfReplicas = 2                        # Number of replicas
DataMemory = 256M
IndexMemory = 100M
ServerPort = 2202
MaxNoOfConcurrentTransactions = 429496
MaxNoOfConcurrentOperations = 472496
MaxNoOfLocalOperations = 519745
[ndb_mgmd]
NodeId = 1
HostName = 192.168.10.145               # Hostname or IP address of MGM node
DataDir = /var/lib/mysql-cluster        # Directory for MGM node log files
[ndbd]
HostName = 192.168.10.181               # Hostname or IP address
NodeId = 2                              # Node ID for this data node
DataDir = /mnt/dataPartition/mysql/data # Directory for this data node's data files
[ndbd]
HostName = 192.168.10.183               # Hostname or IP address
NodeId = 3                              # Node ID for this data node
DataDir = /mnt/dataPartition/mysql/data # Directory for this data node's data files
[mysqld]
HostName = 192.168.10.140               # Hostname or IP address
                                        # (additional mysqld connections can be
                                        # specified for this node for various
                                        # purposes such as running ndb_restore)
[mysqld]
HostName = 192.168.10.184               # Hostname or IP address
                                        # (additional mysqld connections can be
                                        # specified for this node for various
                                        # purposes such as running ndb_restore)
[mysqld]
In the ndb_mgm report, it says the following:
ndb_mgm> 2 REPORT MEMORYUSAGE
Node 2: Data usage is 18% (1476 32K pages of total 8192)
Node 2: Index usage is 11% (1461 8K pages of total 12832)
ndb_mgm> 3 REPORT MEMORYUSAGE
Node 3: Data usage is 18% (1480 32K pages of total 8192)
Node 3: Index usage is 11% (1462 8K pages of total 12832)
Each data node has 2 GB of RAM. With this configuration the ndbd process should not consume more than about 500 MB of RAM, yet it is consuming 1.89 GB of RAM per node. I am using disk storage for the fields that are not indexed.
What is wrong, or what am I doing wrong?
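Reading the memory report back against the config (a quick arithmetic check, plus one assumption about where the rest of the RAM goes):
Data:  8192 pages × 32 KB ≈ 256 MB configured, of which 1476 × 32 KB ≈ 46 MB is used (18%)
Index: 12832 pages × 8 KB ≈ 100 MB configured, of which 1461 × 8 KB ≈ 11 MB is used (11%)
So DataMemory and IndexMemory themselves only account for about 356 MB; the rest of the resident size most likely comes from the very large MaxNoOfConcurrentTransactions/Operations/LocalOperations settings plus redo, send/receive and disk page buffers, all of which ndbd allocates up front at startup. That attribution is an assumption worth verifying against the allocation sizes printed in the data node logs.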
Jonathan Diaz
(11 rep)
Nov 17, 2017, 12:50 AM
• Last activity: Mar 22, 2025, 03:05 PM
0
votes
2
answers
2618
views
MySQL Cluster setup, receiving error "Specify node id"
I am attempting to set up a MySQL Cluster.
I have the following setup:
- One management node
- Two data nodes
- One SQL node
Below is the config.ini file:
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=1 # Number of replicas
DataMemory=256M # How much memory to allocate for data storage
IndexMemory=128M # How much memory to allocate for index storage
#Directory for Data Node
DataDir=/var/lib/mysql-cluster
[ndb_mgmd]
# Management process options:
HostName=x.x.x.21
datadir=/var/lib/mysql-cluster
# Storage Nodes
[NDBD]
HostName=x.x.x.188
DataDir= /var/lib/mysql-cluster
[NDBD]
HostName=x.x.x.189
DataDir= /var/lib/mysql-cluster
# Setup node IDs for MySQL API-servers (clients of the cluster)
[mysqld]
# SQL node options:
hostname=x.x.x.22
Below is the my.cnf file:
[mysqld]
ndbcluster # run NDB storage engine
ndb-connectstring=X.X.X.21 # location of management server
[mysql_cluster]
ndb-connectstring=X.X.X.21 # location of management server
But when I run the command ndb_mgmd -f /etc/config.ini --initial
I get the following error:
> MySQL Cluster Management Server mysql-5.6.28 ndb-7.4.10
>
> 2018-05-09 14:29:08 [MgmtSrvr] ERROR -- Could not determine which nodeid to use for this node. Specify it with --ndb-nodeid= on command line
Can anyone please help?
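The error itself points at the fix: the management node has no node ID it can claim. A sketch of the two usual ways to give it one (node ID 1 here is an assumption):
# Option 1: declare the ID in config.ini
[ndb_mgmd]
NodeId=1
HostName=x.x.x.21
DataDir=/var/lib/mysql-cluster

# Option 2: pass it on the command line
ndb_mgmd -f /etc/config.ini --initial --ndb-nodeid=1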
subbhashree
(1 rep)
May 9, 2018, 02:47 PM
• Last activity: Mar 11, 2025, 03:06 AM
0
votes
2
answers
810
views
MySQL Cluster work well but ndb_desc doesn't work
I have deployed and configured a MySQL Cluster. It works fine.
--------------
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
[ndbd(NDB)] 2 node(s)
id=2@192.168.56.2 (mysql-5.6.19 ndb-7.3.6, Nodegroup: 0, *)
id=3@192.168.56.3 (mysql-5.6.19 ndb-7.3.6, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1@192.168.56.1 (mysql-5.6.19 ndb-7.3.6)
[mysqld(API)] 1 node(s)
id=4@192.168.56.4 (mysql-5.6.19 ndb-7.3.6)
---------
I am reading the manual section about how to use the MySQL Cluster programs: http://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-programs.html
On SQL Node - 192.168.56.4, I am running:
shell> ndb_desc -c 192.168.56.1
I receive the message:
Unable to connect to management server.
NDBT_ProgramExit: 1 - Failed
I can ping and telnet successfully from the SQL node (192.168.56.4) to the management node (192.168.56.1:1186). There is no firewall between them.
What is the cause of the problem?
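One detail worth noting (an assumption based on the show output above): ndb_desc connects as an API node, and the only [mysqld(API)] slot in this cluster is already taken by the running SQL node, so the tool has no free slot to claim. A config.ini sketch adding a spare slot:
[mysqld]
HostName=192.168.56.4     # the SQL node

[api]                     # extra unbound slot for utilities such as ndb_desc
After updating config.ini, the management node needs to be restarted (e.g. with --reload) for the new slot to appear.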
Thank you in advance.
Luke Nguyen
(749 rep)
Aug 2, 2014, 04:03 AM
• Last activity: Feb 21, 2025, 10:04 AM
0
votes
1
answers
406
views
mysql 8 ndbcluster distributed privileges
I've set up a MySQL 8 NDB Cluster but cannot get distributed privileges working.
Following the instructions for setting up distributed privileges using NDB_STORED_USER (https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster-privilege-distribution.html) , I've verified the following:
- SHOW ENGINES lists the value of "DEFAULT" for the Support column on the ndbcluster (Engine) row.
- Both SQL nodes have "default-storage-engine=ndbcluster" in their my.cnf
- I created a user on SQL node 1 and then granted it NDB_STORED_USER.
mysql> GRANT NDB_STORED_USER ON *.* TO 'admin'@'%';
Query OK, 0 rows affected (0.26 sec)
- I verified I could see the respective GRANTS in ndb_sql_metadata from BOTH SQL Nodes
~$ ndb_select_all -d mysql ndb_sql_metadata
type name seq note sql_ddl_text
11 "'admin'@'%'" 0 2 "CREATE USER 'admin'@'%' IDENTIFIED WITH 'caching_sha2_password' AS '$A$005$MGBmokOj
mysql> select host, user from mysql.user;
+-----------+------------------+
| host | user |
+-----------+------------------+
| localhost | mysql.infoschema |
| localhost | mysql.session |
| localhost | mysql.sys |
| localhost | root |
+-----------+------------------+
4 rows in set (0.00 sec)
Any other suggestions? This is my first time setting up a MySQL cluster.
carefreepineapple
(1 rep)
Mar 10, 2020, 04:40 PM
• Last activity: Feb 15, 2025, 09:07 AM
1
votes
1
answers
76
views
MySQL table with partitions on multiple remote servers (only partitions)
In PostgreSQL you can create a table with, let's say, 4 partitions (hash) and locate each partition on a different server (remote PostgreSQL servers); this is done with a Foreign Data Wrapper. Now, in MySQL I've heard there is a FEDERATED engine which lets you connect to a remote server, but it only works for a whole table, not its partitions. I know I can use NDB, but I want to have multiple MySQL servers and manage how the data is spread myself (based on the table partitions). Is there any solution in MySQL to create an InnoDB table and spread its partitions across multiple remote MySQL servers?
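For context, this is what the per-table (not per-partition) FEDERATED mapping mentioned above looks like; all names and the connection string are hypothetical:
-- one local FEDERATED table maps to exactly one remote table
CREATE TABLE orders_shard1 (
  id INT NOT NULL,
  PRIMARY KEY (id)
) ENGINE=FEDERATED
  CONNECTION='mysql://app_user:app_pass@shard1.example.com:3306/mydb/orders';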
user216085
(13 rep)
Nov 12, 2024, 07:56 AM
• Last activity: Nov 12, 2024, 06:56 PM
2
votes
2
answers
22358
views
Dba.getCluster: This function is not available through a session to a standalone instance (RuntimeError)
I am using MySQL InnoDB Cluster. I connect to my instance using \c root@localhost:3306.
The error message pops up when I issue var cluster = dba.getCluster('testCluster').
I already created the cluster in the last session, but when I reconnect it does not work anymore.
The only thing I changed is the hostname, from localhost to db005, and I restarted the MySQL server. I don't even know if that has something to do with my problem. Does anybody have an idea what happened?
I did some searches on Google. This seems to be the only related resource, from the MySQL forum, but it still didn't solve my problem:
https://forums.mysql.com/read.php?177,653826,657465#msg-657465
Thx!
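For reference, when the cluster metadata exists but Group Replication is no longer running (for example after a restart and a hostname change), the AdminAPI call usually needed is rebootClusterFromCompleteOutage rather than getCluster; a MySQL Shell sketch, assuming the new hostname and the cluster name from the question:
\c root@db005:3306
var cluster = dba.rebootClusterFromCompleteOutage('testCluster')
cluster.status()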
Yen
(193 rep)
Jan 20, 2018, 07:02 PM
• Last activity: Aug 12, 2024, 03:26 AM
0
votes
0
answers
56
views
New MySQL 8.0 slave added to a 5.7 Percona Mysql cluster is stuck Waiting for dependent transaction to commit
I've tried following several guides to add a new Percona cluster as a slave of another cluster. The new cluster will run MySQL 8.0 while the old cluster needs to continue running 5.7 for a few more weeks at least.
I have added a MySQL 8.0 server as a replica of a 5.7 Percona/Galera MySQL cluster. (The new server is the first node of a new 8.0 cluster, started in bootstrap mode. The other nodes in the PXC cluster have been shut down and the /var/lib/mysql data directory wiped on all of them.)
The data was seeded with a mysqldump, using the command:
mysqldump -h 172.16.75.3 -u repl --all-databases --master-data --ssl-mode=disabled | mysql
It took three days to import the DB. (Previous attempts to seed the slave using xtrabackup initially looked promising, but the DB refused to start.)
On the selected master node I can see the slave connected using SHOW SLAVE HOSTS.
mysql> show slave hosts;
+-----------+------+------+-----------+--------------------------------------+
| Server_id | Host | Port | Master_id | Slave_UUID |
+-----------+------+------+-----------+--------------------------------------+
| 11 | | 3306 | 1 | ede7d279-0867-11ef-a45d-566fd2de0048 |
+-----------+------+------+-----------+--------------------------------------+
1 row in set (0.00 sec)
On the new replica the slave status looks like this:
[root@galera-dbnode-03 etc]# mysql -e 'show slave status \G'
*************************** 1. row ***************************
Slave_IO_State: Waiting for source to send event
Master_Host: 172.16.75.3
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: core-cloud-mysql-server-02-bin.000341
Read_Master_Log_Pos: 453928562
Relay_Log_File: galera-dbnode-03-relay-bin.000002
Relay_Log_Pos: 162610326
Relay_Master_Log_File: core-cloud-mysql-server-02-bin.000325
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 1047889015
Relay_Log_Space: 16772376805
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 62789
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_UUID: 6c7d7361-e68b-11ee-bf51-566fd2de005e
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Waiting for dependent transaction to commit
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
Master_public_key_path:
Get_master_public_key: 0
Network_Namespace:
I've tried stopping and starting the slave. Could this be a TLS-related issue? What else did I miss?
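A diagnostic sketch (not a definitive fix) for the "Waiting for dependent transaction to commit" state, which comes from the multi-threaded applier; variable and statement names differ between 8.0 minor versions, so treat these as assumptions to check against your release:
-- see what each applier worker is doing and whether one is stalled or erroring
SELECT * FROM performance_schema.replication_applier_status_by_worker\G

-- current parallel-applier settings (slave_parallel_* on older releases)
SHOW VARIABLES LIKE '%parallel%';

-- a common isolation test: fall back to a single applier thread
STOP REPLICA SQL_THREAD;                   -- STOP SLAVE SQL_THREAD before 8.0.22
SET GLOBAL replica_parallel_workers = 1;   -- slave_parallel_workers before 8.0.26
START REPLICA SQL_THREAD;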
Edit:
After restarting the MySQL server on the replica (again in bootstrap mode, since it is the only valid node right now), it showed an error. I was able to skip the transaction, and then it got back to the same error as shown above. Restarting the service again does not help.
[root@galera-dbnode-03 ~]# mysql -e "show slave status \G"
*************************** 1. row ***************************
Slave_IO_State: Waiting for source to send event
Master_Host: 172.16.75.3
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: core-cloud-mysql-server-02-bin.000341
Read_Master_Log_Pos: 722663436
Relay_Log_File: galera-dbnode-03-relay-bin.000004
Relay_Log_Pos: 712735716
Relay_Master_Log_File: core-cloud-mysql-server-02-bin.000326
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 712735458
Relay_Log_Space: 16852646253
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 65252
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_UUID: 6c7d7361-e68b-11ee-bf51-566fd2de005e
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Waiting for dependent transaction to commit
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
Master_public_key_path:
Get_master_public_key: 0
Network_Namespace:
Edit:
The /etc/my.cnf file and the output of pt-mysql-summary are here: https://pastebin.com/kB597zEW
VagrantPaladin
(1 rep)
May 5, 2024, 08:50 AM
• Last activity: May 6, 2024, 08:25 AM
3
votes
1
answers
248
views
What happens to mysql synchronous replication when read replica can't respond?
In the official doc
https://dev.mysql.com/doc/mysql-replication-excerpt/8.0/en/replication.html
it is mentioned that if synchronous replication is required, you should use NDB instead:
>For scenarios where synchronous replication is required, use NDB Cluster (see MySQL NDB Cluster 7.5 and NDB Cluster 7.6).
**Question 1.** Does MySQL itself not support synchronous replication? Does only MySQL Cluster support it?
**Question 2.** Where is the documentation for synchronous replication in NDB?
I see multiple posts mentioning that NDB replication is synchronous by default:
https://stackoverflow.com/questions/53149674/can-i-implement-synchronous-and-asynchronous-replication-with-the-mysql-cluster
But the official doc only mentions asynchronous and semisynchronous replication:
https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster-replication.html
> NDB Cluster supports asynchronous replication, more usually referred to simply as “replication”. This section explains how to set up and manage a configuration in which one group of computers operating as an NDB Cluster replicates to a second computer or group of computers. We assume some familiarity on the part of the reader with standard MySQL replication as discussed elsewhere in this Manual. (See Chapter 19, Replication).
**Question 3:** If MySQL or NDB supports synchronous replication, do they use 2PC? What happens during a network partition, or when replica nodes are not available? Does NDB sacrifice availability for consistency? Does NDB do leader election?
This post says it does:
https://dev.mysql.com/blog-archive/2-phase-commit-in-ndbcluster/
But I can't find documentation on how the trade-off between availability and consistency behaves during a network partition or replica failure.
olaf
(143 rep)
Feb 17, 2024, 07:05 PM
• Last activity: Feb 17, 2024, 10:55 PM
1
votes
0
answers
228
views
Percona XtraDB Cluster CONFLICT DETECTED
I am working on a high-traffic platform with MySQL clustering using Percona XtraDB Cluster. I have a cluster consisting of 4 nodes. The database is receiving more than 500 requests per second for inserting, selecting, and updating data on the same tables. Below is my MySQL configuration:
[mysqld]
######## START General settings ###############
# General settings
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
mysqlx_socket=/tmp/xplugin.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
bind-address = 192.168.100.20
log-error-verbosity=3
innodb_redo_log_capacity=2147483648
character-set-server = utf8mb4
collation-server = utf8mb4_general_ci
auto_increment_increment = 5
auto_increment_offset = 3
############################## END General settings ##############################
############################## START wsrep ##############################
#-
server-id=2
# Path to Galera library
#-
wsrep_provider=/usr/lib/galera4/libgalera_smm.so
# Cluster connection URL contains IPs of nodes
#If no IP is found, this implies that a new cluster needs to be created,
#in order to do that you need to bootstrap this node
#-
wsrep_cluster_address=gcomm://192.168.100.2,192.168.100.1,192.168.100.20,192.168.100.21
# In order for Galera to work correctly binlog format should be ROW
#-
binlog_format=ROW
# Slave thread to use
#-
wsrep_slave_threads=16
#-
wsrep_log_conflicts
# This changes how InnoDB autoincrement locks are managed and is a requirement for Galera
#-
innodb_autoinc_lock_mode=2
# Node IP address
#-
wsrep_node_address=192.168.100.20
# Cluster name
#-
wsrep_cluster_name=pxc-cluster
#If wsrep_node_name is not specified, then system hostname will be used
#-
wsrep_node_name=pxc-cluster-node-3
#pxc_strict_mode allowed values: DISABLED,PERMISSIVE,ENFORCING,MASTER
#-
pxc_strict_mode=ENFORCING
# SST method
#-
wsrep_sst_method=xtrabackup-v2
#-
pxc_encrypt_cluster_traffic = OFF
############################## END wsrep ##############################
# Number of CPU cores
innodb_read_io_threads = 16
innodb_write_io_threads = 16
innodb_thread_concurrency = 16
innodb_purge_threads = 4
innodb_page_cleaners = 4
innodb_log_write_ahead_size=4
thread_cache_size = 64
#thread_handling = pool-of-threads
thread_handling = one-thread-per-connection
# Buffer pool size (assuming 128GB RAM)
innodb_buffer_pool_size = 100G
# Storage device type
innodb_io_capacity = 1500 # Adjust for NVMe SSD
innodb_io_capacity_max = 3000 # Adjust for NVMe SSD
############################## START Performance settings ##############################
default_storage_engine = InnoDB
key_buffer_size = 256M
innodb_buffer_pool_instances = 16
innodb_flush_log_at_trx_commit = 2
innodb_log_buffer_size = 16M
innodb_flush_method = O_DIRECT
innodb_max_dirty_pages_pct = 90
innodb_flush_neighbors = 0
innodb_lru_scan_depth = 4096
innodb_change_buffer_max_size = 50
innodb_adaptive_hash_index = 0
innodb_doublewrite = 0
innodb_flush_sync = 0
innodb_max_undo_log_size = 2G
innodb_use_native_aio = 0
innodb_undo_log_truncate = ON
############################## END Performance settings ##############################
############################## Start Connection setting ##############################
# Connection settings
max_connections = 3000
max_allowed_packet = 256M
wait_timeout = 600
interactive_timeout = 600
############################## END Connection setting ##############################
############################## START Logging settings ##############################
log_error = /var/log/mysql/error.log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 5
log_queries_not_using_indexes = 1
log_slow_admin_statements = 1
log_slow_replica_statements = 1
############################## END Logging settings ##############################
However, I am facing "CONFLICT DETECTED" and lock issues on my nodes, and the following errors are being reported:
2024-01-12T22:52:49.382682Z 16 [Note] [MY-011825] [InnoDB] *** WAITING FOR THIS LOCK TO BE GRANTED:
2024-01-12T22:52:49.382697Z 16 [Note] [MY-011825] [InnoDB] SQL1:
2024-01-12T22:52:49.382711Z 16 [Note] [MY-011825] [InnoDB] SQL2: INSERT INTO requests_log .....
2024-01-12T22:52:49.382739Z 16 [Note] [MY-000000] [WSREP] --------- CONFLICT DETECTED --------
2024-01-12T22:52:49.382756Z 16 [Note] [MY-000000] [WSREP] cluster conflict due to high priority abort for threads:
2024-01-12T22:52:49.382768Z 16 [Note] [MY-000000] [WSREP] Winning thread:
THD: 16, mode: high priority, state: exec, conflict: executing, seqno: 177984003
SQL: (null)
2024-01-12T22:52:49.382781Z 16 [Note] [MY-000000] [WSREP] Victim thread:
THD: 2338759, mode: local, state: exec, conflict: certifying, seqno: -1
SQL: INSERT INTO requests_log ......
2024-01-12T22:52:49.407148Z 2338784 [Note] [MY-011825] [InnoDB] *** Lock requesting TRANSACTION:
TRANSACTION 255572069, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
, undo log entries 1
MySQL thread id 2338784, OS thread handle 139937898251840, query id 37507384 wsrep: writing row for write-set (177984004)
2024-01-12T22:52:49.407423Z 2338784 [Note] [MY-011825] [InnoDB] *** Lock holder TRANSACTION:
TRANSACTION 255572067, ACTIVE (PREPARED) 0 sec
, undo log entries 1
MySQL thread id 16, OS thread handle 140003591190080, query id 37507363 wsrep: preparing to commit write set(177984003)
2024-01-12T22:52:50.780305Z 21 [Note] [MY-011825] [InnoDB] *** Priority TRANSACTION:
TRANSACTION 255572175, ACTIVE 0 sec inserting
mysql tables in use 1, locked 1
LOCK WAIT , undo log entries 1
MySQL thread id 21, OS thread handle 139994867009088, query id 37507865 wsrep: writing row for write-set (177984088)
2024-01-12T22:52:50.780456Z 21 [Note] [MY-011825] [InnoDB] *** Victim TRANSACTION:
TRANSACTION 255572174, ACTIVE 0 sec, thread declared inside InnoDB 4999
mysql tables in use 1, locked 1
, undo log entries 1
MySQL thread id 2338797, OS thread handle 139937889797696, query id 37507862 192.168.100.20 mixsimverify wsrep: replicating and certifying write set(-1)
update users set balance = ....
2024-01-12T22:52:50.780515Z 21 [Note] [MY-011825] [InnoDB] *** WAITING FOR THIS LOCK TO BE GRANTED:
2024-01-12T22:52:50.780530Z 21 [Note] [MY-011825] [InnoDB] SQL1:
2024-01-12T22:52:50.780545Z 21 [Note] [MY-011825] [InnoDB] SQL2: update users set balance = .....
2024-01-12T22:52:50.780564Z 21 [Note] [MY-000000] [WSREP] --------- CONFLICT DETECTED --------
2024-01-12T22:52:50.780588Z 21 [Note] [MY-000000] [WSREP] cluster conflict due to high priority abort for threads:
2024-01-12T22:52:50.780606Z 21 [Note] [MY-000000] [WSREP] Winning thread:
THD: 21, mode: high priority, state: exec, conflict: executing, seqno: 177984088
SQL: (null)
2024-01-12T22:52:50.780624Z 21 [Note] [MY-000000] [WSREP] Victim thread:
THD: 2338797, mode: local, state: exec, conflict: certifying, seqno: -1
SQL: update users set balance = ......
2024-01-12T22:52:50.802889Z 2338810 [Note] [MY-011825] [InnoDB] *** Lock requesting TRANSACTION:
TRANSACTION 255572178, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
MySQL thread id 2338810, OS thread handle 139937889797696, query id 37507868 Applying batch of row changes (update)
2024-01-12T22:52:50.803030Z 2338810 [Note] [MY-011825] [InnoDB] *** Lock holder TRANSACTION:
TRANSACTION 255572175, ACTIVE (PREPARED) 0 sec committing
, undo log entries 1
MySQL thread id 21, OS thread handle 139994867009088, query id 37507865 innobase_commit_low (177984088)
2024-01-12T22:52:51.291489Z 12 [Note] [MY-011825] [InnoDB] *** Priority TRANSACTION:
TRANSACTION 255572217, ACTIVE 0 sec starting index read
mysql tables in use 1, locked 1
LOCK WAIT
MySQL thread id 12, OS thread handle 140007278396992, query id 37508017 Applying batch of row changes (update)
2024-01-12T22:52:51.291632Z 12 [Note] [MY-011825] [InnoDB] *** Victim TRANSACTION:
TRANSACTION 255572216, ACTIVE 0 sec, thread declared inside InnoDB 4999
mysql tables in use 1, locked 1
, undo log entries 1
MySQL thread id 2338798, OS thread handle 139954061203008, query id 37508015 192.168.100.20 mixsimverify wsrep: replicating and certifying write set(-1)
update users set balance = ....
2024-01-12T22:52:51.291691Z 12 [Note] [MY-011825] [InnoDB] *** WAITING FOR THIS LOCK TO BE GRANTED:
2024-01-12T22:52:51.291707Z 12 [Note] [MY-011825] [InnoDB] SQL1:
2024-01-12T22:52:51.291751Z 12 [Note] [MY-011825] [InnoDB] SQL2: update users set balance = ....
2024-01-12T22:52:51.291779Z 12 [Note] [MY-000000] [WSREP] --------- CONFLICT DETECTED --------
2024-01-12T22:52:51.291798Z 12 [Note] [MY-000000] [WSREP] cluster conflict due to high priority abort for threads:
How can I address and resolve these conflicts and the lock issue?
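A sketch of the kind of mitigation usually tried for certification conflicts like these; the value is illustrative, and whether it suits a balance-update workload is an assumption worth testing:
# my.cnf, [mysqld] section -- retry autocommit statements that lose certification
# instead of surfacing a deadlock error to the application (illustrative value)
wsrep_retry_autocommit=4
Multi-statement transactions that conflict still return a deadlock error and have to be retried by the application; routing all writes that touch the same hot rows (users.balance, requests_log) to a single node is the other common way to avoid cross-node certification aborts.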
Sam Wiki
(21 rep)
Jan 12, 2024, 11:04 PM
3
votes
3
answers
2293
views
MySQL cluster fast inserts but slow selects
I have a MySQL Cluster with 3 data nodes and one SQL node. Currently I have 1 million 1 KB records. The inserts are very fast, taking only a few ms, but the selects are very slow (>20 s).
All machines are m1.large EC2 instances (8GB RAM).
Here is the config.ini:
[NDB_MGMD]
NodeId=49
HostName=host1
DataDir=/var/lib/mysql-mgmd-data
Portnumber=1186
[NDBD DEFAULT]
NoOfReplicas=1
DataMemory=1536M
IndexMemory=2560M
StringMemory=25
MaxNoOfTables=4096
MaxNoOfOrderedIndexes=2048
MaxNoOfUniqueHashIndexes=512
MaxNoOfAttributes=24576
MaxNoOfTriggers=14336
FragmentLogFileSize=256M
InitFragmentLogFiles=SPARSE
NoOfFragmentLogFiles=16
RedoBuffer=48M
MaxNoOfConcurrentOperations=100000
MaxNoOfConcurrentTransactions=16384
TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=100
TimeBetweenEpochsTimeout=32000
DiskCheckpointSpeedInRestart=100M
DiskCheckpointSpeed=10M
TimeBetweenLocalCheckpoints=20
HeartbeatIntervalDbDb=15000
HeartbeatIntervalDbApi=15000
MemReportFrequency=30
BackupReportFrequency=10
LogLevelStartup=15
LogLevelShutdown=15
LogLevelCheckpoint=8
LogLevelNodeRestart=15
SharedGlobalMemory=384M
DiskPageBufferMemory=1G
BatchSizePerLocalScan=512
[NDBD]
NodeId=1
HostName=host2
DataDir=/mnt/mysql-cluster/1/
[NDBD]
NodeId=2
HostName=host3
DataDir=/mnt/mysql-cluster/2/
[NDBD]
NodeId=3
HostName=host4
DataDir=/mnt/mysql-cluster/3/
[MYSQLD DEFAULT]
[MYSQLD]
NodeId=53
HostName=host1
my.cnf
[mysqld]
ndbcluster
ndb-nodeid=53
ndb-connectstring=host1,
Insert:
INSERT INTO mytab(mykey, a, b, c, d, e, f, g, h, i,j)VALUES (1,1,2,3,4,5,6,7,8,9,0);
Select:
SELECT * FROM mytab WHERE mykey = 12345;
Create table statement:
CREATE TABLE `mytab` (
  `mykey` varchar(32) NOT NULL,
  `a` varchar(100) DEFAULT NULL,
  `b` varchar(100) DEFAULT NULL,
  `c` varchar(100) DEFAULT NULL,
  `d` varchar(100) DEFAULT NULL,
  `e` varchar(100) DEFAULT NULL,
  `f` varchar(100) DEFAULT NULL,
  `g` varchar(100) DEFAULT NULL,
  `h` varchar(100) DEFAULT NULL,
  `i` varchar(100) DEFAULT NULL,
  `j` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`mykey`)
) /*!50100 TABLESPACE mytab_space STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1 MAX_ROWS=1000000000
/*!50100 PARTITION BY KEY (mykey) */
Does anyone have an idea why it is so slow?
Does anyone have experience with MySQL Cluster on EC2 m1.large instances?
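One thing worth checking first (an observation, not a guaranteed cause): mykey is varchar(32), and comparing it to an unquoted numeric literal forces MySQL to cast the column, which can turn a primary-key lookup into a full scan across the data nodes. Comparing the two plans makes this visible:
-- numeric literal against a varchar key: the cast can defeat the PK lookup
EXPLAIN SELECT * FROM mytab WHERE mykey = 12345;

-- string literal: should show a primary-key (const) lookup
EXPLAIN SELECT * FROM mytab WHERE mykey = '12345';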
user2601966
(41 rep)
Dec 13, 2013, 07:14 AM
• Last activity: Oct 25, 2023, 12:04 PM
0
votes
1
answers
526
views
config file and parameters for ndb_desc
I'm trying to execute ndb_desc from an API client. The config and parameters are as below:
my.cnf on the mysqld host:
[mysql_cluster]
ndb-connectstring=192.168.1.1,192.168.1.2
server-id=5
log-bin=mysql-bin
config.ini on the mgmd host (part of it):
-- ...
[mysqld]
hostname=192.168.1.5
--- ...
Situation: there are 2x mgmd, 2x data nodes, and 2x mysqld.
Parameters used when calling ndb_desc: # ndb_desc -c 192.168.1.1 --ndb-nodeid=7 -d db_name -t tb_name
The above responds with the error: ndb_desc: [ERROR] unknown variable 'server-id=5
After changing my.cnf and removing the two lines server-id and log-bin, the error response from ndb_desc is as follows:
Unable to connect to management server.
NDBT_ProgramExit: 1 - Failed
There is no firewall, and ping to the management node works fine.
So the question: how do I run ndb_desc (with the correct parameters)? What is the correct my.cnf/config.ini for setting up the node ID?
Some links I checked before posting this:
mysql-cluster-work-well-but-ndb-desc-doesnt-work , mysql:ndb_desc , mysql:connection-string , grokbase , and several hours of googling around.
Update 1: changed the config.ini on mgmd to an empty [mysqld] section; ndb_mgm -e show output:
[mysqld(API)] 3 node(s)
id=5 @192.168.1.5 (mysql-5.7.23 ndb-7.6.7)
id=6 @192.168.1.6 (mysql-5.7.23 ndb-7.6.7)
id=7 (not connected, accepting connect from any host)
ndb_desc still has the same response: Unable to connect to management server.
Update 2: on the mgmd node, I saw this log: WARNING -- Failed to allocate nodeid for API at 192.168.1.5. Returned error: 'Id 7 already allocated by another node.
Update 3: changed config.ini, added [api] nodeid=7 host=192.168.1.5. The log is still the same: WARNING -- Failed to allocate nodeid for API at 192.168.11.51. Returned error: 'No free node id found for mysqld(API).
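A sketch of the my.cnf layout that avoids the first error; the section placement is the key point, and the rest is as in the question:
# my.cnf -- server-id and log-bin belong to [mysqld]; NDB utilities such as
# ndb_desc also read [mysql_cluster] and reject variables they don't know there
[mysqld]
server-id=5
log-bin=mysql-bin

[mysql_cluster]
ndb-connectstring=192.168.1.1,192.168.1.2

# then, with a free [api]/[mysqld] slot available on the management side:
# ndb_desc -c 192.168.1.1 -d db_name -t tb_name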
nyoto arif
(13 rep)
Dec 11, 2018, 09:29 AM
• Last activity: Oct 25, 2023, 08:21 AM
0
votes
0
answers
96
views
What is the best way to simulate split-brain?
I have a Windows failover cluster and would like to test whether it works in a split-brain scenario. The cluster consists of virtual machines in Hyper-V. How would one go about doing this?
Nothing Else
(1 rep)
Sep 5, 2023, 10:15 AM
Showing page 1 of 20 total questions