Database Administrators
Q&A for database professionals who wish to improve their database skills
Latest Questions
0 votes, 1 answer, 149 views
Ndb cluster change variable for heartbeat and disconnect
I am searching for a server variable I can increase so that I no longer get these errors:
Node 2 missed heartbeat 2
Node 3 Disconnected
Node 2 Disconnected
I haven't found anything.
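For reference, data node heartbeats in NDB Cluster are controlled by config.ini parameters rather than mysqld variables; a sketch of what raising them might look like (the values here are illustrative, in milliseconds):

```ini
[ndbd default]
# Heartbeat between data nodes (default 5000 ms); a node is declared
# dead after missing roughly four heartbeat intervals in a row.
HeartbeatIntervalDbDb=15000
# Heartbeat between data nodes and API/mysqld nodes (default 1500 ms).
HeartbeatIntervalDbApi=6000
```

Changing these requires a rolling restart of the data nodes.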
Regards,
pioupiou
(137 rep)
Oct 15, 2018, 01:02 PM
• Last activity: Jul 21, 2025, 12:01 AM
0 votes, 1 answer, 145 views
Scaling MySQL Cluster
I have an NDB cluster which consists of 4 data nodes. Some of the configuration parameters and system resources are as follows:
CPU: 32
RAM: 128GB
Replication factor: 2
Data memory: 70GB
Index memory: 10GB
I am planning to extend cluster capacity. I have two options to achieve this:
1. I can add more RAM to the data nodes, so data memory and index memory can be doubled
2. I can add 4 more data nodes to the cluster, so total data memory and index memory increase too
I am not sure which one is the better solution for extending cluster capacity. If I choose the first option, the data size stored per node will be bigger. On the other hand, what about 140GB of DataMemory per data node? Is there a recommended maximum value for the DataMemory and IndexMemory parameters?
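Both options roughly double capacity; a back-of-the-envelope sketch of the arithmetic (my assumption: usable capacity equals node groups times per-node DataMemory, ignoring per-node overhead):

```python
def usable_data_memory(nodes, data_memory_per_node_gb, no_of_replicas=2):
    """Unique data capacity in GB: with NoOfReplicas copies of every row,
    the cluster forms nodes/NoOfReplicas node groups, and each node group
    holds one full copy of its share of the data."""
    node_groups = nodes // no_of_replicas
    return node_groups * data_memory_per_node_gb

# Current setup: 4 nodes x 70 GB, 2 replicas -> 140 GB of unique data
print(usable_data_memory(4, 70))    # 140
# Option 1: keep 4 nodes, double DataMemory to 140 GB -> 280 GB
print(usable_data_memory(4, 140))   # 280
# Option 2: grow to 8 nodes at 70 GB each -> 280 GB
print(usable_data_memory(8, 70))    # 280
```

So the raw capacity gain is the same either way; the options differ in data size per node, restart times, and CPU headroom.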
Gpost
(73 rep)
Nov 8, 2017, 08:42 PM
• Last activity: Jul 20, 2025, 10:01 PM
0 votes, 1 answer, 160 views
Adding Memcached plugin to Mysql Cluster Fails to start
I was able to set up a new MySQL cluster with 6 VMs in total. A new requirement is to add the memcache plugin to the cluster, but memcache fails to start with this error message:
WARNING -- Failed to allocate nodeid for API at 192.168.1.100. Returned error: 'No free node id found for mysqld(API).'
Here is my existing setup.
192.168.1.100 Management1
192.168.1.101 Management2
192.168.1.102 datanode1
192.168.1.103 datanode2
192.168.1.104 sqlnode1
192.168.1.105 sqlnode2
I am trying to run memcached on the existing SQL nodes sqlnode1 & sqlnode2.
Here is the configuration on the management nodes Management1 and Management2:
[ndb_mgmd default]
DataDir=/mysql/data
[ndbd default]
NoOfReplicas=2
LockPagesInMainMemory=1
DataMemory=25G
IndexMemory=5G
NoOfFragmentLogFiles=300
DataDir=/mysql/data
MaxNoOfConcurrentOperations=1000000
SchedulerSpinTimer=400
SchedulerExecutionTimer=100
RealTimeScheduler=1
MaxNoOfTables=1024
MaxNoOfOrderedIndexes=256
[ndb_mgmd]
HostName=192.168.1.100
NodeId=51
[ndb_mgmd]
HostName=192.168.1.101
NodeId=52
[ndbd]
HostName=192.168.1.102
NodeId=53
[ndbd]
HostName=192.168.1.103
NodeId=54
[mysqld]
HostName=192.168.1.104
NodeId=55
[mysqld]
HostName=192.168.1.105
NodeId=56
[mysqld]
HostName=192.168.1.104
NodeId=57
[mysqld]
HostName=192.168.1.105
NodeId=58
Here is the configuration file /etc/my.cnf on the data & SQL nodes datanode1, datanode2, sqlnode1, sqlnode2:
[mysqld]
ndbcluster
ndb-connectstring=192.168.1.100,192.168.1.101
datadir=/mysql/data
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[mysql_cluster]
ndb-connectstring=192.168.1.100,192.168.1.101
Current status of the cluster:
#ndb_mgm -e show
Connected to Management Server at: 192.168.1.100:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=53 @192.168.1.102 (mysql-5.7.19 ndb-7.5.7, Nodegroup: 0)
id=54 @192.168.1.103 (mysql-5.7.19 ndb-7.5.7, Nodegroup: 0, *)
[ndb_mgmd(MGM)] 2 node(s)
id=51 @192.168.1.100 (mysql-5.7.19 ndb-7.5.7)
id=52 @192.168.1.101 (mysql-5.7.19 ndb-7.5.7)
[mysqld(API)] 4 node(s)
id=55 @192.168.1.104 (mysql-5.7.19 ndb-7.5.7)
id=56 @192.168.1.105 (mysql-5.7.19 ndb-7.5.7)
id=57 (not connected, accepting connect from 192.168.1.104)
id=58 (not connected, accepting connect from 192.168.1.105)
#
Here is the error when I try to add the memcache plugin to the cluster. I created the ndbmemcache configuration in the database and didn't notice any errors.
# mysql -u root -p
ndb_initialize()
main --> connect_to_primary_cluster()
27-Oct-2017 15:35:04 UTC NDB Memcache 5.7.19-ndb-7.5.7 started [NDB 7.5.7; MySQL 5.7.19]
Contacting primary management server (192.168.1.100:1186,192.168.1.101:1186) ...
main --> ClusterConnectionPool::connect()
FAILED.
Could not connect to NDB. Shutting down.
main --> ndb_destroy()
Failed to initialize instance. Error code: 255
#
The cluster management log ndb_51_cluster.log shows this warning message repeatedly:
2017-10-27 15:33:44 [MgmtSrvr] WARNING -- Failed to allocate nodeid for API at 192.168.1.104. Returned error: 'No free node id found for mysqld(API).'
2017-10-27 15:35:06 [MgmtSrvr] WARNING -- Failed to allocate nodeid for API at 192.168.1.104. Returned error: 'No free node id found for mysqld(API).'
2017-10-27 15:35:09 [MgmtSrvr] WARNING -- Failed to allocate nodeid for API at 192.168.1.104. Returned error: 'No free node id found for mysqld(API).'
2017-10-27 15:35:12 [MgmtSrvr] WARNING -- Failed to allocate nodeid for API at 192.168.1.104. Returned error: 'No free node id found for mysqld(API).'
2017-10-27 15:35:15 [MgmtSrvr] WARNING -- Failed to allocate nodeid for API at 192.168.1.104. Returned error: 'No free node id found for mysqld(API).'
2017-10-27 15:35:18 [MgmtSrvr] WARNING -- Failed to allocate nodeid for API at 192.168.1.104. Returned error: 'No free node id found for mysqld(API).'
Can someone please suggest how I can fix this issue? I already have additional SQL client nodes, but I am not able to start the memcache process.
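One common cause (an assumption here, not a confirmed diagnosis): memcached needs API node ids of its own, and every [mysqld] slot in this config.ini is bound to a specific host, so a memcached process may find nothing it can claim. Adding a few unbound slots and reloading the management server might look like:

```ini
# Extra API slots with no HostName: any host, including memcached
# running on the SQL nodes, may claim one of these node ids.
[mysqld]
[mysqld]
[mysqld]
```

After editing config.ini, restart the management nodes with ndb_mgmd --reload so the new slots take effect.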
Thanks,
bobby
(1 rep)
Oct 27, 2017, 10:35 PM
• Last activity: Jul 12, 2025, 09:07 AM
0 votes, 1 answer, 162 views
MySQL NDB Cluster Community - High memory utilization by mysqld only on source/replica nodes
I have two MySQL NDB community clusters (8.0.30), each with 3 VMs:
Cluster#1_VM1: mysqld + ndb_mgmd
Cluster#1_VM2: mysqld + ndb
Cluster#1_VM3: mysqld + ndb
Cluster#2_VM1: mysqld + ndb_mgmd
Cluster#2_VM2: mysqld + ndb
Cluster#2_VM3: mysqld + ndb
I have bidirectional replication between Cluster#1_VM2 and Cluster#2_VM2.
VM2/3 have identical parameters.
The choice of architecture might not be perfect, but I don't think it is relevant here.
The mysqld process consumes a lot of memory on the VMs with replication, and replication seems to be the only difference between those VMs.
(Though I'm not quite sure how the load is distributed among the mysqld processes.)
I would assume that ndbd would be the one to use most of the RAM since it has 8GB of DataMemory. But somehow mysqld is also utilizing a lot of RAM too.
I've checked several other questions here (e.g. https://dba.stackexchange.com/questions/286946/high-memory-usage-on-replica-mysql-8 )
Most of them are related to InnoDB, and the queries for e.g. buffer utilization show nothing that could tell me where that high memory utilization comes from.
So far I've checked:
- buffer utilization for InnoDB (I have NDB, I know)
- engine status for InnoDB and NDB - no hint of high memory usage, or at least not clearly visible
All I know for now is that it grows over time after a restart of the process, but I don't yet know how to limit/control the amount of memory consumed by mysqld in this case.
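Since these are MySQL 8.0 servers, the performance_schema memory instrumentation (exposed through the sys schema) may at least show which component inside mysqld is growing; a starting point, assuming the sys schema is installed and memory instruments are enabled:

```sql
-- Top current memory allocators inside mysqld, largest first
SELECT event_name, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;
```

On a replication source/replica, entries related to the binary log or replication buffers showing up near the top would point at the replication path specifically.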
Marcin Stolzmann
(1 rep)
Sep 14, 2023, 11:34 AM
• Last activity: Jul 12, 2025, 08:02 AM
1 vote, 1 answer, 224 views
mysql data node not connecting to node manager on AWS EC2
I am trying to set up a MySQL manager + data node + SQL server (Server A) and another data node (Server B). Both are running Ubuntu 16.04 amd64.
**Server A**
*/etc/hosts* (also on Server B), both private IPs:
172.xx.xx.xx ip-172-xx-xx-xx-eu-west-1.compute.internal
172.xx.xx.xx ip-172-xx-xx-xx.eu-east-1.compute.internal
*/config.ini*
[ndbd default]
NoOfReplicas=2 # Number of replicas
[ndb_mgmd]
hostname=ip-172-xx-xx-xx.eu-west-1.compute.internal
Output of ndb_mgm> show:
Connected to Management Server at: 172.xx.xx.xx:1186 <- A
Cluster Configuration
---------------------
[ndbd(NDB)] 1 node(s)
id=2 (not connected, accepting connect from 172.xx.xx.xx) <- B
[ndb_mgmd(MGM)] 1 node(s)
id=1 @172.x.x.x (mysql-5.7.22 ndb-7.6.6)
[mysqld(API)] 1 node(s)
id=3 (not connected, accepting connect from 172.xx.xx.xx) <- A
I'm not sure how I would connect to mysqld(API); that should work fine after I installed the server/client cluster packages.
Also, I set inbound rules on both instances A & B to accept all traffic on port range 1000-60000, just in case I am missing something, and outbound is open to everything.
I wonder what it is I am missing; I got so tired today trying to find out what is going on with the setup.
Any help will be appreciated!
mohamed tebry
(11 rep)
Oct 14, 2019, 06:03 PM
• Last activity: Jun 6, 2025, 12:03 AM
2 votes, 1 answer, 2039 views
mysql cluster table full error
I installed a new MySQL cluster on Windows (for evaluation). I created a table using the command below. After I insert just over 1 million records into the table, it gives me a "table datadump is full" exception. I have plenty of hard disk space and RAM free, and 1 million records is well below the maximum number of rows for the table. Also, MySQL RAM usage has not increased during the inserts. I am assuming that MySQL has an internal cache limit for the table that was hit. Is there a config.ini change I can make to disable caching of rows in a cluster table, or a modification to the create table script to do the same?
create table datadump (name varchar(30) not null) engine=ndb;
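For context: with ENGINE=ndb, rows live in the data nodes' RAM (unless columns are explicitly created as disk-based), so "table is full" typically means the data nodes' DataMemory or IndexMemory is exhausted - free disk and free RAM on the SQL node are irrelevant. A sketch of the config.ini change (values illustrative; a rolling restart of the data nodes is needed afterwards):

```ini
[ndbd default]
DataMemory=2G     # the default is very small (80M in older releases)
IndexMemory=256M
```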
user18896654353
(121 rep)
Jan 24, 2014, 03:40 PM
• Last activity: May 30, 2025, 05:01 AM
0 votes, 2 answers, 280 views
MySQL NDB Cluster settings to prefer data node on localhost than remote
My current MySQL NDB Cluster configuration:
- Server 1 - management node
- Server 2 - huge machine running both ndbd and mysqld
- Server 3 - huge machine running both ndbd and mysqld
- NoOfReplicas = 1
So server 2 has an exact copy of server 3's data.
During setup evaluation I discovered that mysqld on server 2 is sending about 50% of queries to the ndbd running on server 3.
With a greater number of nodes this would be OK for load balancing, but with my specific configuration (both nodes have the same dataset) I'd like to get maximum performance by telling mysqld on each server to PREFER connecting to the ndbd process running on localhost, and only connect to ndbd on the remote host if the local one is not available (i.e. during a restart).
I've read something about server access cost table that should contain information for query optimizer about latency between nodes, but I can't find this blog post right now.
Any help?
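For what it's worth, later NDB releases (7.5.2 and up, if I recall correctly) added a mysqld system variable for exactly this preference; a sketch for each SQL node's my.cnf, where the node id of the local data node is an assumption you would substitute:

```ini
[mysqld]
# Prefer the data node with this node id (the one on localhost)
# as the closest node / transaction coordinator.
ndb_data_node_neighbour=2
```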
Matthias
(149 rep)
Nov 9, 2016, 03:00 PM
• Last activity: May 13, 2025, 12:00 PM
2 votes, 1 answer, 990 views
What happens when a MySQL Cluster node is disconnected
I am trying to design a multi-master clustered MySQL database which will be spread through out the country.
The main reason for having a cluster is that when the internet at one location (or node) is disconnected, that location should still be able to read and modify the contents of the database locally, so that when the internet connection is restored, the rest of the cluster gets updated.
I've gone through the MySQL Cluster documentation; however, I couldn't find this specific question. Is it possible to achieve this using MySQL Cluster?
Saif Ur Rehman
(121 rep)
Dec 23, 2015, 04:16 PM
• Last activity: Apr 13, 2025, 05:02 PM
1 vote, 1 answer, 749 views
Excessive RAM consumption in MySQL Cluster
At this moment I have configured a cluster that has 5 nodes:
2 data nodes
2 SQL nodes
1 node manager
My config.ini file is the following:
[ndbd default]
NoOfReplicas = 2 # Number of replicas
DataMemory = 256M
IndexMemory = 100M
ServerPort = 2202
MaxNoOfConcurrentTransactions = 429496
MaxNoOfConcurrentOperations = 472496
MaxNoOfLocalOperations = 519745
[ndb_mgmd]
NodeId = 1
HostName = 192.168.10.145 # Hostname or IP address of MGM node
DataDir = /var/lib/mysql-cluster # Directory for MGM node log files
[ndbd]
HostName = 192.168.10.181 # Hostname or IP address
NodeId = 2 # Node ID for this data node
DataDir = /mnt/dataPartition/mysql/data # Directory for this data node's data files
[ndbd]
HostName = 192.168.10.183 # Hostname or IP address
NodeId = 3 # Node ID for this data node
DataDir = /mnt/dataPartition/mysql/data # Directory for this data node's data files
[mysqld]
HostName = 192.168.10.140 # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
[mysqld]
HostName = 192.168.10.184 # Hostname or IP address
# (additional mysqld connections can be
# specified for this node for various
# purposes such as running ndb_restore)
[mysqld]
In the ndb_mgm report, it says the following:
ndb_mgm> 2 REPORT MEMORYUSAGE
Node 2: Data usage is 18% (1476 32K pages of total 8192)
Node 2: Index usage is 11% (1461 8K pages of total 12832)
ndb_mgm> 3 REPORT MEMORYUSAGE
Node 3: Data usage is 18% (1480 32K pages of total 8192)
Node 3: Index usage is 11% (1462 8K pages of total 12832)
Each data node has 2 GB of RAM. With this configuration, the ndbd process supposedly should not consume more than 500 MB of RAM, but it is consuming 1.89 GB of RAM per node. I am using disk storage for the fields that are not indexed.
What is wrong?
Jonathan Diaz
(11 rep)
Nov 17, 2017, 12:50 AM
• Last activity: Mar 22, 2025, 03:05 PM
0 votes, 1 answer, 406 views
mysql 8 ndbcluster distributed privileges
I've setup a mysql 8 ndbcluster but cannot get the distributed privileges working.
Following the guide on how to set up distributed privileges using NDB_STORED_USER (https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster-privilege-distribution.html), I've verified the following:
- SHOW ENGINES lists the value of "DEFAULT" for the Support column on the ndbcluster (Engine) row.
- Both SQL nodes have "default-storage-engine=ndbcluster" in their my.cnf
- I created a user on SQL node 1 and then added them to NDB_STORED_USER.
mysql> GRANT NDB_STORED_USER ON *.* TO 'admin'@'%';
Query OK, 0 rows affected (0.26 sec)
- I verified I could see the respective GRANTS in ndb_sql_metadata from BOTH SQL Nodes
~$ ndb_select_all -d mysql ndb_sql_metadata
type name seq note sql_ddl_text
11 "'admin'@'%'" 0 2 "CREATE USER 'admin'@'%' IDENTIFIED WITH 'caching_sha2_password' AS '$A$005$MGBmokOj
mysql> select host, user from mysql.user;
+-----------+------------------+
| host | user |
+-----------+------------------+
| localhost | mysql.infoschema |
| localhost | mysql.session |
| localhost | mysql.sys |
| localhost | root |
+-----------+------------------+
4 rows in set (0.00 sec)
Any other suggestions? This is my first time setting up a MySQL cluster.
carefreepineapple
(1 rep)
Mar 10, 2020, 04:40 PM
• Last activity: Feb 15, 2025, 09:07 AM
0 votes, 1 answer, 193 views
SQL and Management Nodes on same server in NDB cluster. SQL Nodes not connecting
I've set up an NDB cluster with 7 nodes: 3 data nodes on their own servers, and 2 management nodes that each have an SQL node on them, on 2 servers.
I've narrowed down what I think my issue is: my cluster seems to be unable to elect a president data node, but I have no idea how to set the election rules. Can anyone help me with my setup? Is my config wrong?
Thanks.
Below is all the details.
Here is my management node config:
[ndbd default]
NoOfReplicas=3
#DataMemory=2048M
#IndexMemory=512M
#Management Nodes
[ndb_mgmd]
hostname=192.168.0.1
NodeId=1
datadir=/var/lib/mysql-cluster
[ndb_mgmd]
hostname=192.168.0.2
NodeId=2
datadir=/var/lib/mysql-cluster
#Data Nodes
[ndbd]
hostname=192.168.0.3
NodeId=3
datadir=/usr/local/mysql/data
[ndbd]
hostname=192.168.0.4
NodeId=4
datadir=/usr/local/mysql/data
[ndbd]
hostname=192.168.0.5
NodeId=5
datadir=/usr/local/mysql/data
#front end servers
[mysqld]
hostname=192.168.0.1
#NodeId=11
[mysqld]
hostname=192.168.0.2
#NodeId=22
Here is my sql node config - my.cnf
!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/
[mysqld]
ndbcluster
[mysql_cluster]
ndb-connectstring=192.168.0.1,192.168.0.2
Here is the data node connection string
[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=192.168.0.1,192.168.0.2
This is what ndb_mgm> show is saying:
[ndbd(NDB)] 3 node(s)
id=3 @192.168.0.3 (mysql-8.3.0 ndb-8.3.0, starting, Nodegroup: 0)
id=4 @192.168.0.4 (mysql-8.3.0 ndb-8.3.0, starting, Nodegroup: 0)
id=5 @192.168.0.5 (mysql-8.3.0 ndb-8.3.0, starting, Nodegroup: 0)
[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.0.1 (mysql-8.3.0 ndb-8.3.0)
id=2 @192.168.0.2 (mysql-8.3.0 ndb-8.3.0)
[mysqld(API)] 2 node(s)
id=6 (not connected, accepting connect from 192.168.0.1)
id=7 (not connected, accepting connect from 192.168.0.2)
Nodes 3,4, & 5 are stuck in status
ndb_mgm> 3 status
Node 3: starting (Last completed phase 0) (mysql-8.3.0 ndb-8.3.0)
ndb_mgm> 4 status
Node 4: starting (Last completed phase 0) (mysql-8.3.0 ndb-8.3.0)
ndb_mgm> 5 status
Node 5: starting (Last completed phase 0) (mysql-8.3.0 ndb-8.3.0)
SQL Nodes 6 (11) & 7 (22) will not connect.
SQL & Data Nodes have nothing in their error logs.
The cluster log for the management nodes has this on repeat every minute:
TIME [MgmtSrvr] INFO -- Alloc node id 6 rejected, no new president yet
TIME [MgmtSrvr] WARNING -- Unable to allocate nodeid for API at 192.168.0.1. Returned error: 'Cluster not ready for nodeid allocation.'
TIME [MgmtSrvr] INFO -- Node 3: Initial start, waiting for 4 and 5 to connect, nodes [ all: 3, 4 and 5 connected: 3 no-wait: ]
TIME [MgmtSrvr] INFO -- Node 5: Initial start, waiting for 3 and 4 to connect, nodes [ all: 3, 4 and 5 connected: 5 no-wait: ]
TIME [MgmtSrvr] INFO -- Node 4: Initial start, waiting for 3 and 5 to connect, nodes [ all: 3, 4 and 5 connected: 4 no-wait: ]
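The log lines above are the clue (an observation, not a confirmed fix): on an initial start, each data node waits for all other nodes of the cluster to connect before the start can proceed, and no president is elected until the data nodes finish starting - which is also why the API slots are refused node ids. Making sure all three ndbd processes perform their initial start together, or explicitly telling the cluster not to wait for a missing node, might look like:

```shell
# On each data node, for a first-ever start:
ndbd --initial

# Or, to start without waiting for one absent node (here node id 5):
ndbd --initial --nowait-nodes=5
```

If all three were started and still sit in phase 0, firewalls blocking the data nodes' inter-node ports are a common culprit worth ruling out.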
Mr. Berzerk
(3 rep)
May 2, 2024, 09:59 PM
• Last activity: May 3, 2024, 08:41 AM
3 votes, 1 answer, 248 views
What happens to mysql synchronous replication when read replica can't respond?
In the official doc
https://dev.mysql.com/doc/mysql-replication-excerpt/8.0/en/replication.html
It mentioned that if synchronous replication is required, use NDB instead
>For scenarios where synchronous replication is required, use NDB Cluster (see MySQL NDB Cluster 7.5 and NDB Cluster 7.6).
**Question 1.** Does MySQL itself not support synchronous replication - is it only MySQL Cluster that supports it?
**Question 2.** Where is the documentation for synchronous replication for NDB?
I see multiple posts mentioning that NDB replication is synchronous by default:
https://stackoverflow.com/questions/53149674/can-i-implement-synchronous-and-asynchronous-replication-with-the-mysql-cluster
But the official doc only mentions asynchronous and semisynchronous replication:
https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster-replication.html
> NDB Cluster supports asynchronous replication, more usually referred to simply as “replication”. This section explains how to set up and manage a configuration in which one group of computers operating as an NDB Cluster replicates to a second computer or group of computers. We assume some familiarity on the part of the reader with standard MySQL replication as discussed elsewhere in this Manual. (See Chapter 19, Replication).
**Question 3:** If MySQL or NDB supports synchronous replication, do they use 2PC? What happens during a network partition, or when replica nodes are not available? Does NDB sacrifice availability over consistency? Does NDB do leader election?
This post says it does:
https://dev.mysql.com/blog-archive/2-phase-commit-in-ndbcluster/
But I can't find documentation on the behavior of the trade-off between availability and consistency during a network partition or replica failure.
olaf
(143 rep)
Feb 17, 2024, 07:05 PM
• Last activity: Feb 17, 2024, 10:55 PM
0 votes, 1 answer, 32 views
Unable to import more than 110 Columns in single table in NDBC
I have configured NDBCLUSTER with 4 data nodes, 1 SQL node, and 1 management node. I have increased DataMemory to 4GB and IndexMemory to 500MB for every node. I have set MaxNoOfAttributes=2000 on every node, and I have increased the page size for every node.
But the problem is that I am unable to create more than 110 columns in a table in NDBCLUSTER.
Debabrata
(1 rep)
Dec 2, 2023, 02:52 PM
• Last activity: Dec 5, 2023, 11:49 AM
3 votes, 3 answers, 2293 views
MySQL cluster fast inserts but slow selects
I have a MySQL cluster with 3 Data Nodes and one SQL node. Currently I have 1 million 1kb records. The inserts are very fast, takes only a few ms. But the selects are very slow >20s.
All machines are m1.large EC2 instances (8GB RAM).
Here is the config.ini:
[NDB_MGMD]
NodeId=49
HostName=host1
DataDir=/var/lib/mysql-mgmd-data
Portnumber=1186
[NDBD DEFAULT]
NoOfReplicas=1
DataMemory=1536M
IndexMemory=2560M
StringMemory=25
MaxNoOfTables=4096
MaxNoOfOrderedIndexes=2048
MaxNoOfUniqueHashIndexes=512
MaxNoOfAttributes=24576
MaxNoOfTriggers=14336
FragmentLogFileSize=256M
InitFragmentLogFiles=SPARSE
NoOfFragmentLogFiles=16
RedoBuffer=48M
MaxNoOfConcurrentOperations=100000
MaxNoOfConcurrentTransactions=16384
TimeBetweenGlobalCheckpoints=1000
TimeBetweenEpochs=100
TimeBetweenEpochsTimeout=32000
DiskCheckpointSpeedInRestart=100M
DiskCheckpointSpeed=10M
TimeBetweenLocalCheckpoints=20
HeartbeatIntervalDbDb=15000
HeartbeatIntervalDbApi=15000
MemReportFrequency=30
BackupReportFrequency=10
LogLevelStartup=15
LogLevelShutdown=15
LogLevelCheckpoint=8
LogLevelNodeRestart=15
SharedGlobalMemory=384M
DiskPageBufferMemory=1G
BatchSizePerLocalScan=512
[NDBD]
NodeId=1
HostName=host2
DataDir=/mnt/mysql-cluster/1/
[NDBD]
NodeId=2
HostName=host3
DataDir=/mnt/mysql-cluster/2/
[NDBD]
NodeId=3
HostName=host4
DataDir=/mnt/mysql-cluster/3/
[MYSQLD DEFAULT]
[MYSQLD]
NodeId=53
HostName=host1
my.cnf
[mysqld]
ndbcluster
ndb-nodeid=53
ndb-connectstring=host1,
Insert:
INSERT INTO mytab(mykey, a, b, c, d, e, f, g, h, i,j)VALUES (1,1,2,3,4,5,6,7,8,9,0);
Select:
SELECT * FROM mytab WHERE mykey = 12345;
Create table statement:
mytab | CREATE TABLE `mytab` (
  `mykey` varchar(32) NOT NULL,
  `a` varchar(100) DEFAULT NULL,
  `b` varchar(100) DEFAULT NULL,
  `c` varchar(100) DEFAULT NULL,
  `d` varchar(100) DEFAULT NULL,
  `e` varchar(100) DEFAULT NULL,
  `f` varchar(100) DEFAULT NULL,
  `g` varchar(100) DEFAULT NULL,
  `h` varchar(100) DEFAULT NULL,
  `i` varchar(100) DEFAULT NULL,
  `j` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`mykey`)
) /*!50100 TABLESPACE `mytab_space` STORAGE DISK */ ENGINE=ndbcluster DEFAULT CHARSET=latin1 MAX_ROWS=1000000000
/*!50100 PARTITION BY KEY (`mykey`) */ |
Does anyone have an idea why it is so slow?
Does anyone have experience with MySQL Cluster on EC2 m1.large instances?
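One thing worth checking (an assumption based on the statements shown, not a confirmed diagnosis): mykey is varchar(32), but the literal in the WHERE clause is numeric. Comparing a string column against a number prevents MySQL from using the index on that column, which on NDB means scanning the whole table across the network instead of a direct primary-key lookup. Quoting the value should make the lookup keyed:

```sql
-- Likely a full table scan: numeric literal vs. varchar column
EXPLAIN SELECT * FROM mytab WHERE mykey = 12345;

-- Primary-key lookup: string literal matches the column type
EXPLAIN SELECT * FROM mytab WHERE mykey = '12345';
```

Comparing the two EXPLAIN outputs (eq_ref/const vs. ALL) would confirm or rule this out.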
user2601966
(41 rep)
Dec 13, 2013, 07:14 AM
• Last activity: Oct 25, 2023, 12:04 PM
0 votes, 1 answer, 526 views
config file and parameters for ndb_desc
I'm trying to execute ndb_desc from an API client. The config and parameters are as below:
my.cnf in mysqld:
[mysql_cluster]
ndb-connectstring=192.168.1.1,192.168.1.2
server-id=5
log-bin=mysql-bin
config.ini in mgmd: (part of it)
-- ...
[mysqld]
hostname=192.168.1.5
--- ...
Situation: there are 2x mgmd, 2x data-node, and 2x mysqld.
Parameters tried when calling ndb_desc:
# ndb_desc -c 192.168.1.1 --ndb-nodeid=7 -d db_name -t tb_name
The above responds with the error: ndb_desc: [ERROR] unknown variable 'server-id=5'
On changing my.cnf, removing the two lines server-id and log-bin, the error response after calling ndb_desc becomes the following:
Unable to connect to management server.
NDBT_ProgramExit: 1 - Failed
There is no firewall, ping to the management is running fine.
So the question: how to exec ndb_desc (with correct parameters)? what is the correct my.cnf/config.ini for setting up the node-id?
Some links i have tried to check before posting this:
mysql-cluster-work-well-but-ndb-desc-doesnt-work , mysql:ndb_desc , mysql:connection-string , grokbase , and several hours of googling around.
update1: changed the config.ini on mgmd to have an empty mysqld section; ndb_mgm -e show output:
[mysqld(API)] 3 node(s)
id=5 @192.168.1.5 (mysql-5.7.23 ndb-7.6.7)
id=6 @192.168.1.6 (mysql-5.7.23 ndb-7.6.7)
id=7 (not connected, accepting connect from any host)
ndb_desc still has the same response: Unable to connect to management server.
update2: from mgmd node, saw this log: WARNING -- Failed to allocate nodeid for API at 192.168.1.5. Returned error: 'Id 7 already allocated by another node.
update3: changed config.ini, added [api] nodeid=7 host=192.168.1.5
. log still the same: WARNING -- Failed to allocate nodeid for API at 192.168.11.51. Returned error: 'No free node id found for mysqld(API).
nyoto arif
(13 rep)
Dec 11, 2018, 09:29 AM
• Last activity: Oct 25, 2023, 08:21 AM
0 votes, 2 answers, 425 views
Can't drop an NDB database because the storage engine is unavailable
I tried to set up a MySQL cluster and failed. I finally got it switched back to standard MySQL, but now I have an ndbinfo database that I cannot drop. It causes my automysqlbackup cron job to report errors.
mysql> drop database ndbinfo;
ERROR 1286 (42000): Unknown storage engine 'ndbinfo'
Is there any way to manually remove the database? Or maybe a procedure that I do not know about? I did not see 'ndbinfo' referenced anywhere in the MySQL database.
It's on Ubuntu Server 22.04 from the MySQL APT repository. Output from mysql --version:
mysql Ver 8.0.33 for Linux on x86_64 (MySQL Community Server - GPL)
elyograg
(103 rep)
May 10, 2023, 10:41 PM
• Last activity: May 22, 2023, 12:19 PM
5 votes, 2 answers, 12367 views
ERROR WSREP`: gcs/src/gcs.cpp:gcs_open():1379:` Failed to open channel 'test_cluster'
I have three servers on which I want to install Galera clustering. I followed this [link](https://www.digitalocean.com/community/tutorials/how-to-configure-a-galera-cluster-with-mariadb-on-ubuntu-12-04-servers) to install and configure it, but when I start the first server I get this error:
root@node107:/home/debian# service mysql start --wsrep-new-clusterping
Job for mysql.service failed. See 'systemctl status mysql.service' and 'journalctl -xn' for details.
and :
root@node107:/home/debian# journalctl -xn
-- Logs begin at Tue 2016-03-01 19:02:04 IRST, end at Tue 2016-03-01 19:39:32 IRST. --
Mar 01 19:39:31 node107 mysqld: 160301 19:39:31 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1379: Failed to open channel 'test_cluster' at 'gcomm://192.168.120.107,192.168.120.111,192.168.120.11
Mar 01 19:39:31 node107 mysqld: 160301 19:39:31 [ERROR] WSREP: gcs connect failed: Connection timed out
Mar 01 19:39:31 node107 mysqld: 160301 19:39:31 [ERROR] WSREP: wsrep::connect(gcomm://192.168.120.107,192.168.120.111,192.168.120.117) failed: 7
Mar 01 19:39:31 node107 mysqld: 160301 19:39:31 [ERROR] Aborting
Mar 01 19:39:31 node107 mysqld:
Mar 01 19:39:31 node107 mysqld: 160301 19:39:31 [Note] WSREP: Service disconnected.
Mar 01 19:39:32 node107 mysqld: 160301 19:39:32 [Note] WSREP: Some threads may fail to exit.
Mar 01 19:39:32 node107 mysqld: 160301 19:39:32 [Note] /usr/sbin/mysqld: Shutdown complete
Mar 01 19:39:32 node107 mysqld:
Mar 01 19:39:32 node107 mysqld_safe: mysqld from pid file /var/run/mysqld/mysqld.pid ended
These are the configuration files for the three servers (/etc/mysql/conf.d/cluster.cnf):
**Server1 :**
[mysqld]
query_cache_size=0
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_type=0
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
#wsrep_provider_options="gcache.size=32G"
# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://192.168.120.107,192.168.120.111,192.168.120.117"
#wsrep_cluster_address="gcomm://192.168.120.107"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
#wsrep_sst_auth=user:pass
# Galera Node Configuration
wsrep_node_address="192.168.120.107"
wsrep_node_name="node107"
**Server 2:**
[mysqld]
query_cache_size=0
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_type=0
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
#wsrep_provider_options="gcache.size=32G"
# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://192.168.120.107"
#wsrep_cluster_address="gcomm://192.168.120.107,192.168.120.111,192.168.120.117"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
#wsrep_sst_auth=user:pass
# Galera Node Configuration
wsrep_node_address="192.168.120.111"
wsrep_node_name="node111"
**server 3 :**
[mysqld]
query_cache_size=0
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_type=0
bind-address=0.0.0.0
# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
#wsrep_provider_options="gcache.size=32G"
# Galera Cluster Configuration
wsrep_cluster_name="test_cluster"
wsrep_cluster_address="gcomm://192.168.120.107,192.168.120.111,192.168.120.117"
# Galera Synchronization Configuration
wsrep_sst_method=rsync
#wsrep_sst_auth=user:pass
# Galera Node Configuration
wsrep_node_address="192.168.120.117"
wsrep_node_name="node117"
I don't understand what I have missed. What should I do?
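For reference, the usual bootstrap sequence with this kind of init script is (a sketch only; the flag is `--wsrep-new-cluster`, and TCP ports 4567, 4568, and 4444 must be reachable between the nodes for group communication, IST, and rsync SST respectively):

```
# On the first node only (bootstraps a brand-new cluster):
service mysql start --wsrep-new-cluster
# On the remaining nodes, a normal start joins via gcomm://:
service mysql start
```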
amir jj
(289 rep)
Mar 1, 2016, 04:15 PM
• Last activity: Nov 29, 2022, 12:23 PM
2
votes
1
answers
603
views
MySQL NDB cluster maximum data size
According to the limitations outlined at http://dev.mysql.com/doc/refman/5.7/en/mysql-cluster-limitations.html, there can only be 48 data nodes. Does that mean, with number of replicas 2, I can actually have only 24 nodes to store data? If the data memory is, say 4GB, then, does it mean I can only s...
According to the limitations outlined at http://dev.mysql.com/doc/refman/5.7/en/mysql-cluster-limitations.html , there can only be 48 data nodes.
Does that mean that, with NoOfReplicas set to 2, I effectively have only 24 nodes' worth of storage for unique data?
If the data memory is, say, 4 GB per node, does that mean I can only store 24 × 4 ≈ 96 GB?
If so, how can I set up a cluster to store TBs of data?
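The arithmetic in the question can be sketched as a quick back-of-the-envelope calculation (the function name and figures are illustrative; real capacity also depends on IndexMemory, per-row overhead, and whether disk-based tables are used):

```python
# Rough in-memory capacity of an NDB cluster: replicas hold full copies,
# so usable data is (data_nodes / replicas) * DataMemory per node.
def ndb_capacity_gb(data_nodes: int, replicas: int, data_memory_gb: float) -> float:
    return (data_nodes / replicas) * data_memory_gb

print(ndb_capacity_gb(48, 2, 4))  # 48 nodes, 2 replicas, 4 GB each -> 96.0
```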
Babu James
(121 rep)
Jun 23, 2016, 08:09 AM
• Last activity: Jun 25, 2022, 08:00 PM
0
votes
1
answers
72
views
NDB Cluster Read and Write Process multiple Access
When using the MySQL NDB Cluster distributed database, the following general problem has arisen: To insert a new Record / Row, the last row in the database should be read, see if it is compatible with the row to be inserted and then insert the new row. However, the problem arises that in the meantim...
When using the MySQL NDB Cluster distributed database, the following general problem has arisen: to insert a new row, the last row in the table must be read, checked for compatibility with the row to be inserted, and only then inserted. The problem is that in the meantime another instance can run through the same process, so both new rows end up validated against the same last record and both get inserted. Is there a way to prevent this with locking?
That is, can one instance fetch the last row via JDBC in Java and lock it so that it cannot be read, then check the object and insert the new record, before a second instance runs through the same process?
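One common pattern for this is a locking read inside a transaction (a sketch only; the table and column names are hypothetical — `SELECT ... FOR UPDATE` takes an exclusive row lock, so a second transaction running the same statement blocks until the first commits):

```sql
-- Run by each instance; the checks are serialized by the row lock:
START TRANSACTION;
SELECT * FROM records ORDER BY id DESC LIMIT 1 FOR UPDATE;
-- application code checks the fetched row for compatibility here
INSERT INTO records (id, payload) VALUES (42, 'new row');
COMMIT;
```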
Thanks for your answers.
programmer_13
(1 rep)
Jun 22, 2022, 11:27 PM
• Last activity: Jun 23, 2022, 02:08 PM
0
votes
2
answers
252
views
MySql NDB cluster large transaction
I know that with NDB you can not do large transactions, if you do data nodes can restart into read-only mode and corrupt table. I need to copy records into a new table but the thing is this table contains millions of rows in staging, the live server has even 10 times that amount. how can I divide my...
I know that with NDB you cannot do large transactions; if you do, data nodes can restart into read-only mode and corrupt the table.
I need to copy records into a new table, but this table contains millions of rows in staging, and the live server has ten times that amount.
How can I divide my copy into smaller transactions so that I do not kill my server?
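One way to express the chunking is to precompute primary-key ranges and copy each range in its own short transaction (a sketch under assumptions: the helper name, the batch size, and an auto-increment `id` column are illustrative, not anything NDB-specific):

```python
def batch_ranges(max_id: int, batch_size: int):
    """Yield half-open (start, end] id ranges; copy each range in its own
    transaction, e.g. INSERT INTO new_t SELECT * FROM old_t
    WHERE id > start AND id <= end, so no single transaction gets large."""
    start = 0
    while start < max_id:
        end = min(start + batch_size, max_id)
        yield (start, end)
        start = end

print(list(batch_ranges(25, 10)))  # [(0, 10), (10, 20), (20, 25)]
```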
Rait
(47 rep)
Jan 14, 2022, 02:22 PM
• Last activity: Jan 15, 2022, 07:25 PM
Showing page 1 of 20 total questions