
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answer
543 views
Show a detailed processlist on MaxScale, like on MariaDB
On MariaDB, when I need to check which queries are running or sleeping on an instance, I can use `SHOW PROCESSLIST` or `SELECT * FROM information_schema.processlist`. But since the clients started connecting through MaxScale, I can no longer see the processlist with `SHOW PROCESSLIST` or `SELECT * FROM information_schema.processlist`. Any advice? My MaxScale configuration:

[maxscale]
threads=auto
max_auth_errors_until_block=0
admin_host=192.168.101.107
admin_port=8989
admin_enabled=1

Edit: I have already updated the parameters in the MaxScale configuration:

[maxscale]
threads=auto
max_auth_errors_until_block=0
admin_host=192.168.101.107
admin_port=8989
admin_enabled=1
retain_last_statements=20
dump_last_statements=on_error

Queries now show up in the output of `maxctrl show sessions`, but only the last statements executed on each session, and I cannot tell whether a session is sleeping or still running the way I can with `SHOW PROCESSLIST` on MariaDB.
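As a workaround while MaxScale sits in the middle, the full processlist is still visible on the backends themselves; a hedged sketch (the host pattern is an assumption for the MaxScale machine's address):

```sql
-- Run directly on each MariaDB node, not through MaxScale:
SELECT id, user, host, db, command, time, state, info
FROM information_schema.PROCESSLIST
WHERE host LIKE '192.168.101.107%';  -- connections coming from the MaxScale host
```

Note that every client appears under the MaxScale user/host here, which is why per-client attribution still needs `maxctrl show sessions` on the MaxScale side.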
febry (57 rep)
Jun 17, 2020, 03:57 AM • Last activity: Jun 23, 2025, 04:09 AM
1 vote
2 answers
55 views
Why doesn't MaxScale switch the master?
I have a Galera cluster with MariaDB 10.6.22 and MaxScale 24.02.2 on Ubuntu 22.04 Server. This is my configuration:

[maxscale]
threads=auto

[srv1]
type=server
address=127.0.0.1
port=3306

[srv2]
type=server
address=10.0.0.2
port=3306

[Galera-Cluster]
type=monitor
module=galeramon
servers=srv1,srv2
user=maxscale
password=xxxxxxxxxx
monitor_interval=2s
root_node_as_master=true

[RW-Router]
type=service
router=readwritesplit
cluster=Galera-Cluster
user=maxscale
password=xxxxxxxxxxx

[Read-Write-Listener]
type=listener
service=RW-Router
protocol=mariadbprotocol
address=0.0.0.0
port=4008

I'm experiencing this issue: when I stop the master node, MaxScale doesn't promote the slave to master.

┌────────┬────────────┬──────┬─────────────┬────────────────────────┬──────┬────────────────┐
│ Server │ Address    │ Port │ Connections │ State                  │ GTID │ Monitor        │
├────────┼────────────┼──────┼─────────────┼────────────────────────┼──────┼────────────────┤
│ srv1   │ 127.0.0.1  │ 3306 │ 0           │ Slave, Synced, Running │      │ Galera-Cluster │
├────────┼────────────┼──────┼─────────────┼────────────────────────┼──────┼────────────────┤
│ srv2   │ 10.0.0.11  │ 3306 │ 0           │ Down                   │      │ Galera-Cluster │
└────────┴────────────┴──────┴─────────────┴────────────────────────┴──────┴────────────────┘

I cannot understand the reason. Looking into the MaxScale log I don't find anything useful:

MariaDB MaxScale /var/log/maxscale/maxscale.log Thu Jun 5 15:04:18 2025
----------------------------------------------------------------------------
notice : Module 'galeramon' loaded from '/usr/lib/x86_64-linux-gnu/maxscale/libgaleramon.so'.
notice : Module 'readwritesplit' loaded from '/usr/lib/x86_64-linux-gnu/maxscale/libreadwritesplit.so'.
notice : The logging of info messages has been enabled.
notice : Using up to 1.16GiB of memory for query classifier cache
notice : syslog logging is disabled.
notice : maxlog logging is enabled.
notice : Host: 'srv1.cinebot.it' OS: Linux@5.15.0-141-generic, #151-Ubuntu SMP Sun May 18 21:35:19 UTC 2025, x86_64 with 2 processor cores (2.00 available).
notice : Total main memory: 7.75GiB (7.75GiB usable).
notice : MaxScale is running in process 93218
notice : MariaDB MaxScale 24.02.2 started (Commit: b362d654969c495ec50fdf028f419514a854dd0a)
notice : Configuration file: /etc/maxscale.cnf
notice : Log directory: /var/log/maxscale
notice : Data directory: /var/lib/maxscale
notice : Module directory: /usr/lib/x86_64-linux-gnu/maxscale
notice : Service cache: /var/cache/maxscale
notice : Working directory: /var/log/maxscale
notice : Query classification results are cached and reused. Memory used per thread: 595.34MiB
notice : Password encryption key file '/var/lib/maxscale/.secrets' not found, using configured passwords as plaintext.
notice : The systemd watchdog is Enabled. Internal timeout = 30s
notice : Module 'pp_sqlite' loaded from '/usr/lib/x86_64-linux-gnu/maxscale/libpp_sqlite.so'.
info   : pp_sqlite loaded.
notice : [MariaDBProtocol] Parser plugin loaded.
info   : [pp_sqlite] In-memory sqlite database successfully opened for thread 140044797709888.
info   : No 'auto_tune' parameters specified, no auto tuning will be performed.
notice : Using HS256 for JWT signatures
warning: The MaxScale GUI is enabled but encryption for the REST API is not enabled, the GUI will not be enabled. Configure admin_ssl_key and admin_ssl_cert to enable HTTPS or add admin_secure_gui=false to allow use of the GUI without encryption.
notice : Started REST API on [127.0.0.1]:8989
notice : srv1 sent version string '10.6.22-MariaDB-0ubuntu0.22.04.1'. Detected type: MariaDB, version: 10.6.22.
notice : Server 'srv1' charset: utf8mb4_general_ci
info   : Variables have changed on 'srv1': 'character_set_client = utf8mb4', 'character_set_connection = utf8mb4', 'character_set_results = utf8mb4', 'max_allowed_packet = 16777216', 'session_track_system_variables = autocommit,character_set_client,character_set_connection,character_set_results,time_zone', 'system_time_zone = CEST', 'time_zone = SYSTEM', 'tx_isolation = REPEATABLE-READ', 'wait_timeout = 28800'
error  : Monitor was unable to connect to server srv2[10.0.0.11:3306] : 'Can't connect to server on '10.0.0.11' (115)'
notice : [galeramon] Found cluster members
notice : Starting a total of 1 services...
notice : (Read-Write-Listener); Listening for connections at [0.0.0.0]:4008
notice : Service 'RW-Router' started (1/1)
info   : [pp_sqlite] In-memory sqlite database successfully opened for thread 140044754216512.
info   : Epoll instance for listening sockets added to worker epoll instance.
info   : [pp_sqlite] In-memory sqlite database successfully opened for thread 140044745823808.
info   : Epoll instance for listening sockets added to worker epoll instance.
notice : MaxScale started with 2 worker threads.
notice : Read 19 user@host entries from 'srv1' for service 'RW-Router'.
info   : Accept authentication from 'admin', using password. Request: /v1/servers
info   : Accept authentication from 'admin', using password. Request: /v1/servers

The Galera status seems to be OK for the slave server:

wsrep_local_state          4
wsrep_local_state_comment  Synced
wsrep_cluster_status       Primary
wsrep_local_index          1

Do you have any idea why MaxScale doesn't promote the slave to master when the master is down?
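One plausible explanation worth checking (a hedged observation, not a verified diagnosis): with `root_node_as_master=true`, galeramon assigns the Master status only to the node whose `wsrep_local_index` is 0, and the surviving node above reports `wsrep_local_index 1`, so it keeps the Slave label. A minimal monitor section without that restriction might look like:

```ini
; Hypothetical galeramon section; values copied from the question.
[Galera-Cluster]
type=monitor
module=galeramon
servers=srv1,srv2
user=maxscale
password=xxxxxxxxxx
monitor_interval=2s
; root_node_as_master=true pins Master to the node with wsrep_local_index 0;
; leaving it at its default lets the monitor promote any synced node.
```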
Tobia (211 rep)
Jun 5, 2025, 01:30 PM • Last activity: Jun 10, 2025, 04:52 AM
0 votes
1 answer
282 views
Galera Cluster in read-write split and query cache
I currently have three MariaDB 10.4 servers configured as a Galera cluster. In front of it, a MaxScale router splits writes (1 master) and reads (2 slaves). The query cache (QC) is disabled on all three servers. My question is: since servers 2 and 3 receive READs only, is there any real benefit to enabling the QC on those two servers, such as a decrease in server load? I've already tried the MaxScale cache filter out of the box, but some queries have problems with it.
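For reference, a hedged sketch of enabling the MariaDB query cache on a read-only replica (sizes are illustrative assumptions, not recommendations):

```sql
-- Depending on version/startup settings, enabling the cache at runtime may
-- not work if the server was started with query_cache_type=0; in that case
-- set query_cache_type=1 in my.cnf and restart.
SET GLOBAL query_cache_size = 64 * 1024 * 1024;  -- 64 MiB, illustrative
SET GLOBAL query_cache_type = ON;
SHOW GLOBAL STATUS LIKE 'Qcache%';               -- watch hit/insert counters
```

Whether it helps depends on the workload: the `Qcache_hits` vs `Qcache_inserts` ratio after a representative period is the usual way to judge.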
CrazyRabbit (111 rep)
Jan 13, 2022, 03:12 PM • Last activity: May 22, 2025, 04:10 PM
0 votes
1 answer
334 views
Maxscale R/W split - SELECTS hitting master?
Using MaxScale 6.3.0 in read-write split mode, with 1 master and 2 slaves (all MariaDB 10.4), and the master not configured to accept reads, is it normal that MaxScale redirects these SELECTs to the master? If yes, why?

1721780 Prepare SELECT * FROM jfs_edge INNER JOIN jfs_node ON jfs_edge.inode=jfs_node.inode WHERE jfs_edge.parent=? AND jfs_edge.name=? LIMIT 1
1721780 Close stmt
1721780 Prepare SELECT * FROM jfs_edge INNER JOIN jfs_node ON jfs_edge.inode=jfs_node.inode WHERE jfs_edge.parent=? AND jfs_edge.name=? LIMIT 1
1721780 Close stmt
1721780 Prepare SELECT * FROM jfs_edge INNER JOIN jfs_node ON jfs_edge.inode=jfs_node.inode WHERE jfs_edge.parent=? AND jfs_edge.name=? LIMIT 1
1721780 Close stmt
1721780 Prepare SELECT * FROM jfs_edge INNER JOIN jfs_node ON jfs_edge.inode=jfs_node.inode WHERE jfs_edge.parent=? AND jfs_edge.name=? LIMIT 1
1721780 Close stmt
1721611 Prepare SELECT * FROM jfs_edge INNER JOIN jfs_node ON jfs_edge.inode=jfs_node.inode WHERE jfs_edge.parent=? AND jfs_edge.name=? LIMIT 1
1721780 Prepare SELECT * FROM jfs_edge INNER JOIN jfs_node ON jfs_edge.inode=jfs_node.inode WHERE jfs_edge.parent=? AND jfs_edge.name=? LIMIT 1
1721611 Close stmt
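One plausible cause (an assumption based on readwritesplit's documented behavior, not something the excerpt proves): statements are routed to the master whenever they run inside an open transaction or in a session with autocommit disabled, which some connectors do around prepared statements. The session state can be inspected from the client side:

```sql
-- If autocommit is 0, every statement in the session is part of a
-- transaction, and readwritesplit sends transactional reads to the master.
SELECT @@autocommit, @@session.tx_isolation;
```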
CrazyRabbit (111 rep)
May 20, 2022, 01:33 PM • Last activity: Apr 28, 2025, 11:03 AM
1 vote
3 answers
1337 views
Manually specifying candidate masters in Maxscale
Looking over the documentation, I haven't quite been able to find whether it's possible to manually set the candidate-master status of a server in MaxScale. The reason I am looking for this is that some of the slaves we have hooked up to the master/slave architecture have inferior hardware to the others, and we don't want them to become the master. According to the documentation, it is possible to set the status of a server as master or slave, but not to specify that we don't want it to ever become a master. Is there a setting to do this in MaxScale? Possibly in maxadmin or a .cnf file or something? Thanks.
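For readers on later MaxScale releases: the MariaDB Monitor (mariadbmon) has a `servers_no_promotion` parameter for exactly this. A hedged sketch (server names are invented for illustration; this parameter did not exist in the 2016-era maxadmin tooling):

```ini
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=big1,big2,small1
user=maxscale
password=maxscale_pw
auto_failover=true
; Servers listed here are never chosen as the promotion target
; during failover or switchover.
servers_no_promotion=small1
```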
Bif Stone (11 rep)
Mar 1, 2016, 02:58 PM • Last activity: Apr 13, 2025, 07:03 AM
0 votes
1 answer
55 views
maxscale - getting access denied when specifying DB name, but successful when I omit it
We are implementing MaxScale as a DB proxy between our app and the DB hosted in AWS Aurora MySQL. I've configured MaxScale and verified the servers can all connect, and when I connect from the app server to the proxy endpoint via cli, everything works:
mysql -h proxy.end.point -u admin -p
This works just fine as expected, and opens up a connection. I can run `use db_name;` and change databases with no problem, e.g.:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 76
Server version: 8.0.32 Source distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> use db_name;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MySQL [db_name]>
However, what's super bizarre is when I specify the db_name as a cli argument, I get a permission error:
mysql -h proxy.end.point -u admin -p db_name

ERROR 1044 (42000): Access denied for user 'admin'@'ip.of.data.base' to database 'db_name'
I'm not sure if this is an issue with the mysql admin user, or if it's some nuance with MaxScale, but was curious if anyone has any insights. For reference, I'm using a maxscale user as the proxy user and it has the following permissions:
GRANT SHOW DATABASES ON *.* TO 'maxscale'@'%'
GRANT SELECT ON mysql.* TO 'maxscale'@'%'
GRANT SELECT ON mysql.columns_priv TO 'maxscale'@'%'
GRANT SELECT ON mysql.db TO 'maxscale'@'%'
GRANT SELECT ON mysql.procs_priv TO 'maxscale'@'%'
GRANT SELECT ON mysql.proxies_priv TO 'maxscale'@'%'
GRANT SELECT ON mysql.tables_priv TO 'maxscale'@'%'
GRANT SELECT ON mysql.user TO 'maxscale'@'%'
Here is the contents of my /etc/maxscale.cnf file:
[maxscale]
threads=auto
debug=enable-statement-logging

[db_writer]
type=server
address=rds-writer-endpoint.rds.amazonaws.com
port=3306

[db_reader]
type=server
address=rds-reader-endpoint.rds.amazonaws.com
port=3306

[Read-Write-Service]
type=service
router=readwritesplit
servers=db_writer,db_reader
user=maxscale
password=maxscale_password

[Read-Write-Listener]
type=listener
service=Read-Write-Service
protocol=MariaDBClient
port=3306
I have installed MaxScale version 24.02.4. Any help is appreciated. Thanks!
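One thing worth verifying (a hedged guess, not a confirmed diagnosis): when a default database is given at connect time, MaxScale authorizes the connection against the account data it fetched from the backends, so the `admin` user needs a grant covering `db_name` that is visible in the `mysql.db`/`mysql.user` tables the `maxscale` user reads. A sketch of the check and a possible fix, with `db_name` kept as the placeholder from the question:

```sql
-- On the backend, confirm what MaxScale sees for this account:
SHOW GRANTS FOR 'admin'@'%';

-- If db_name is not covered by any grant, add one explicitly:
GRANT ALL PRIVILEGES ON db_name.* TO 'admin'@'%';
FLUSH PRIVILEGES;
```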
Brian Phelan (11 rep)
Jan 16, 2025, 03:16 PM • Last activity: Jan 17, 2025, 05:31 PM
0 votes
0 answers
76 views
Cluster MariaDB Primary + Two Replicas orchestrated via MaxScale: how to recover from a major disruption?
I am working on a MariaDB primary/replica cluster handled via MaxScale, to which I successfully added a third node. When I left things a few days ago everything was fine; today I found that all nodes were down, supposedly because of a power outage. This is a lab environment, so nobody really cared to check before rebooting the hardware. Because it is a lab environment, I could easily restart from the last point, but since I'm here, I'd like to take the chance and learn something. I managed to reboot all nodes, except one which is not getting back into the cluster. These are my variables right now:
mariadb221 [(none)]> show global variables like '%gtid%';

Variable_name            Value
gtid_binlog_pos          1-3000-1359255
gtid_binlog_state        1-2000-358368,1-1000-1359225,1-3000-1359255
gtid_cleanup_batch_size  64
gtid_current_pos         1-3000-1359255
gtid_domain_id           1
gtid_ignore_duplicates   OFF
gtid_pos_auto_engines
gtid_slave_pos           1-3000-1359255
gtid_strict_mode         ON
wsrep_gtid_domain_id     0
wsrep_gtid_mode          OFF
mariadb222 [(none)]> show global variables like '%gtid%';

Variable_name            Value
gtid_binlog_pos          1-1000-1359231
gtid_binlog_state        1-2000-358368,1-3000-359229,1-1000-1359231
gtid_cleanup_batch_size  64
gtid_current_pos         1-1000-1359254
gtid_domain_id           1
gtid_ignore_duplicates   OFF
gtid_pos_auto_engines
gtid_slave_pos           1-1000-1359254
gtid_strict_mode         ON
wsrep_gtid_domain_id     0
wsrep_gtid_mode          OFF
mariadb223 [(none)]> show global variables like '%gtid%';
Variable_name            Value
gtid_binlog_pos          1-3000-1359255
gtid_binlog_state        1-1000-1359230,1-2000-358368,1-3000-1359255
gtid_cleanup_batch_size  64
gtid_current_pos         1-3000-1359255
gtid_domain_id           1
gtid_ignore_duplicates   OFF
gtid_pos_auto_engines                            
gtid_slave_pos           1-3000-1359255
gtid_strict_mode         ON
wsrep_gtid_domain_id     0
wsrep_gtid_mode          OFF
Aside from deleting the bad node and recreating it as if it were a new one (backup of the remaining slave + restore of that backup onto it + adding it to the cluster again), is there anything I can do to recover from this position? Any suggestion of anything else I should check will also be appreciated. P.S.: the bad node is the second one.
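A hedged sketch of re-pointing the bad node without a full rebuild; this is only safe if the node's data is actually consistent with the rest of the cluster at the chosen position, which the differing `gtid_binlog_state` values above do not guarantee, so the backup-and-restore route remains the safe default. Host names below are taken from the output above; the GTID must be verified against the node's own state before use:

```sql
-- On the bad node (mariadb222), illustration only:
STOP SLAVE;
-- Tell replication where this node actually is; choosing the wrong value
-- silently skips or re-applies transactions.
SET GLOBAL gtid_slave_pos = '1-1000-1359231';  -- its own gtid_binlog_pos above
CHANGE MASTER TO
  MASTER_HOST = 'mariadb221',   -- assumed current primary
  MASTER_USE_GTID = slave_pos;
START SLAVE;
SHOW SLAVE STATUS\G             -- check Slave_IO_Running / Slave_SQL_Running
```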
albea798 (1 rep)
Oct 20, 2024, 09:57 AM • Last activity: Oct 23, 2024, 11:06 AM
0 votes
1 answer
86 views
How can I disable the MaxScale readwriterouter for specific queries?
I'm using Galera replication and the MaxScale readwritesplit router. I'm facing an issue because the application has been developed with this flow:

1. start transaction
2. update a record
3. commit
4. read that record

The result is that the record is updated through the write-server while the subsequent read is done on a read-server, so it doesn't see the data that was just updated, due to replication delay. Unfortunately it is a bit difficult to refactor the whole application, and I'm looking for a way to force a read onto the write-server so I can be sure to get the data that was just updated.
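One mechanism readwritesplit offers that may fit here without application refactoring (hedged; it requires a hintfilter in the service's filter chain, and some client drivers strip SQL comments): per-query routing hints. Table and column names below are invented for illustration:

```sql
-- Forces this one query to the master regardless of the router's choice:
SELECT * FROM orders WHERE id = 42; -- maxscale route to master
```

Depending on the MaxScale version, the `causal_reads` service parameter is another option to look at: it delays reads on replicas until they have caught up with the session's last write, though its applicability to a Galera backend should be checked against the documentation.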
Tobia (211 rep)
Oct 22, 2024, 02:26 PM • Last activity: Oct 23, 2024, 03:14 AM
0 votes
0 answers
61 views
Issue with MariaDB Master-Slave Cluster Failover in MaxScale When Multiple Pods Are Deleted
**Issue with MariaDB Master-Slave Cluster Failover in MaxScale When Multiple Pods Are Deleted**

Hello everyone, I'm running a MariaDB master-slave-slave setup with MaxScale for automatic failover. The cluster consists of one master and two slaves. (Screenshot: server states before deleting 2 pods.) Everything works as expected under normal conditions, but I encountered a problem when I deleted two of the MariaDB pods simultaneously (e.g., mariadb-sts-0 and mariadb-sts-1). MaxScale initially promotes the remaining slave (server3) to the primary role, but once the other two pods come back online, the cluster ends up without a primary. It seems MaxScale is trying to re-promote one of the original servers as the primary, but the process doesn't complete properly. (Screenshot: server states after deleting 2 pods.) My questions:

- How can I configure MaxScale to handle this scenario better?
- Specifically, I want the third server to reliably take over as the primary when the first two are deleted, and to remain the primary when they come back online, until manual intervention or an automatic rebalancing.

Here are the relevant logs from MaxScale and my current configuration details:
2024-10-11 01:48:22.378   error  : (log_connect_error): Monitor was unable to connect to server server1[mariadb-sts-0.mariadb-service.kaizen.svc.cluster.local:3306] : 'Unknown server host 'mariadb-sts-0.mariadb-service.kaizen.svc.cluster.local' (-2)'
2024-10-11 01:48:22.382   notice : (log_state_change): Server changed state: server1[mariadb-sts-0.mariadb-service.kaizen.svc.cluster.local:3306]: master_down. [Master, Running] -> [Down]
2024-10-11 01:48:22.382   warning: [mariadbmon] (handle_auto_failover): Primary has failed. If primary does not return in 1 monitor tick(s), failover begins.
2024-10-11 01:48:23.385   error  : (log_connect_error): Monitor was unable to connect to server server2[mariadb-sts-1.mariadb-service.kaizen.svc.cluster.local:3306] : 'Unknown server host 'mariadb-sts-1.mariadb-service.kaizen.svc.cluster.local' (-2)'
2024-10-11 01:48:23.385   notice : (log_state_change): Server changed state: server2[mariadb-sts-1.mariadb-service.kaizen.svc.cluster.local:3306]: slave_down. [Slave, Running] -> [Down]
2024-10-11 01:48:23.385   notice : [mariadbmon] (select_promotion_target): Selecting a server to promote and replace 'server1'. Candidates are: 'server2', 'server3'.
2024-10-11 01:48:23.386   warning: [mariadbmon] (select_promotion_target): Some servers were disqualified for promotion:\n'server2' cannot be selected because it is down or in maintenance.
2024-10-11 01:48:23.386   notice : [mariadbmon] (select_promotion_target): Selected 'server3'.
2024-10-11 01:48:23.386   notice : [mariadbmon] (handle_auto_failover): Performing automatic failover to replace failed primary 'server1'.
2024-10-11 01:48:23.567   notice : [mariadbmon] (handle_auto_failover): Failover 'server1' -> 'server3' performed.
2024-10-11 01:48:39.352   notice : (log_state_change): Server changed state: server1[mariadb-sts-0.mariadb-service.kaizen.svc.cluster.local:3306]: server_up. [Down] -> [Running]
2024-10-11 01:48:40.537   warning: [mariadbmon] (update_master): 'server1' is a better primary candidate than the current primary 'server3'. Primary will change when 'server3' is no longer a valid primary.
2024-10-11 01:48:40.537   notice : (log_state_change): Server changed state: server2[mariadb-sts-1.mariadb-service.kaizen.svc.cluster.local:3306]: server_up. [Down] -> [Running]
2024-10-11 01:48:40.932   warning: [mariadbmon] (update_master): 'server1' is a better primary candidate than the current primary 'server3'. Primary will change when 'server3' is no longer a valid primary.
2024-10-11 01:49:22.135   notice : (update_addr_info): Server server1 hostname 'mariadb-sts-0.mariadb-service.kaizen.svc.cluster.local' resolved to 10.42.203.39.
2024-10-11 01:49:22.136   notice : (update_addr_info): Server server2 hostname 'mariadb-sts-1.mariadb-service.kaizen.svc.cluster.local' resolved to 10.42.213.42.
- **MaxScale Version**: 24.02.3
- **MariaDB Version**: 11.5.2-MariaDB-ubu2404-log

Thanks in advance for any suggestions or guidance!
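A hedged sketch of mariadbmon settings that influence this behavior (the parameter names are real monitor options; the values and server names are assumptions to adapt): `auto_rejoin` makes returning servers rejoin as replicas of the current primary instead of competing for the primary role, and `enforce_read_only_slaves` guards against a demoted ex-primary accepting writes:

```ini
[MariaDB-Monitor]
type=monitor
module=mariadbmon
servers=server1,server2,server3
user=maxscale
password=maxscale_pw
auto_failover=true
; Returning servers are redirected to replicate from the current primary
auto_rejoin=true
; Replicas are kept read_only so a rejoined ex-primary cannot take writes
enforce_read_only_slaves=true
```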
unknown (104 rep)
Oct 11, 2024, 02:03 AM
2 votes
1 answer
1227 views
Expanding MariaDB cluster (sharding)
I have an expanding MariaDB 3-node cluster and need to expand the available storage. Each node has (to make the maths slightly easier) 120Gb of database storage. As this is replicated, the total DB storage is therefore also 120Gb. If I add a 4th node, I will gain extra resilience and maybe better read speed, but still only have 120Gb of storage.

             Server A    Server B    Server C
             +-------+   +-------+   +-------+
             | 120Gb |   | 120Gb |   | 120Gb |
             +-------+   +-------+   +-------+
Cluster1 --> +-------+   +-------+   +-------+
 (120Gb)     | 120Gb |   | 120Gb |   | 120Gb |
             +-------+   +-------+   +-------+

Total storage 120Gb

I am looking at sharding, using SPIDER tables within MariaDB, to increase the available space without losing my current x3 replication. With SPIDER you can store a table across multiple databases, so I am thinking I need to move to a model like this, adding a 4th server D also with 120Gb, but effectively running 4 database clusters across these boxes...

             Server A    Server B    Server C    Server D
             +-------+   +-------+   +-------+   +-------+
             | 120Gb |   | 120Gb |   | 120Gb |   | 120Gb |
             +-------+   +-------+   +-------+   +-------+
Spider   --> +-------+   +-------+   +-------+   +-------+
 ( ~0Gb)     +-------+   +-------+   +-------+   +-------+
Cluster1 --> +-------+   +-------+   +-------+
 ( 40Gb)     | 40Gb  |   | 40Gb  |   | 40Gb  |
             +-------+   +-------+   +-------+
Cluster2 --> +-------+   +-------+   +-------+
 ( 40Gb)     | 40Gb  |   | 40Gb  |   | 40Gb  |
             +-------+   +-------+   +-------+
Cluster3 --> +-------+   +-------+   +-------+
 ( 40Gb)     | 40Gb  |   | 40Gb  |   | 40Gb  |
             +-------+   +-------+   +-------+
Cluster4 --> +-------+   +-------+   +-------+
 ( 40Gb)     | 40Gb  |   | 40Gb  |   | 40Gb  |
             +-------+   +-------+   +-------+

(total 160Gb)

Each cluster (horizontally) would have 3-box replication and store 40Gb, while each server (vertically) would store up to 120Gb. Each additional box would therefore add one third of 120Gb... This makes sense to me; I have been testing and I can get multiple MariaDB instances running per box. It is then just a matter of keeping all the plans in order, organising which slices will be on which boxes and keeping track of port numbers (each MariaDB instance has to run on a different port). I can even run a thin cluster slice with just the logical Spider tables on the same boxes too. Is there an easier way? I can find lots of guides for using Spider, but very few for using MaxScale. Can anyone point me in the right direction, please? (I feel like I am having to re-invent too much myself, when I'm sure this is not an uncommon situation.)
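For reference, a hedged sketch of how a Spider head node spreads one table across backend data nodes (server names, hosts, and the table are invented for illustration; check the Spider documentation for the exact options on your MariaDB version):

```sql
-- Register the backend data nodes (one per shard/cluster slice)
CREATE SERVER shard1 FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST '10.0.0.1', PORT 3307, DATABASE 'app', USER 'spider', PASSWORD 'pw');
CREATE SERVER shard2 FOREIGN DATA WRAPPER mysql
  OPTIONS (HOST '10.0.0.2', PORT 3307, DATABASE 'app', USER 'spider', PASSWORD 'pw');

-- Spider table on the head node: rows are hash-partitioned across the shards
CREATE TABLE app.events (
  id BIGINT NOT NULL,
  payload TEXT,
  PRIMARY KEY (id)
) ENGINE=SPIDER
  PARTITION BY HASH (id) (
    PARTITION p1 COMMENT = 'wrapper "mysql", srv "shard1", table "events"',
    PARTITION p2 COMMENT = 'wrapper "mysql", srv "shard2", table "events"'
  );
```

In the layout described above, each "shard" would itself be one replicated cluster slice, so the Spider layer adds capacity while each slice keeps its x3 replication.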
user3566845 (113 rep)
Feb 7, 2016, 11:25 AM • Last activity: Sep 7, 2024, 11:02 PM
2 votes
1 answer
1406 views
Best Practice of High Availability For MariaDB
I'm preparing to run a MariaDB cluster with high availability and I'm looking for best practices. Years back, I used to run MariaDB in master-master mode, and there was a big issue: whenever either of the two nodes was disconnected and reconnected, even for a second, the whole database was inaccessible until the sync process completed. I searched a lot and there are multiple suggestions, and I'm not sure which is the best and simplest to go for:

1. using MaxScale as a proxy
2. using master-slave mode
3. using master-master mode

I haven't worked with MaxScale, and I would be thankful if someone could guide me on whether it is the best solution for a database with high I/O; I also know that I would need another MaxScale + HAProxy setup to cover MaxScale failures. The second option is not a proper solution, since if the master DB fails I need to go through a manual process to promote a new master. The third option could be used with Galera, but I have seen many people complaining about the same sync issues I had before. I would be thankful for guidance from someone with at least 2-3 years of experience running such a cluster.
Zareh Kasparian (121 rep)
Feb 17, 2022, 09:06 PM • Last activity: Sep 7, 2024, 04:02 PM
1 vote
1 answer
1602 views
MaxScale login as root or other users
I installed MaxScale 2.4.4 with a Galera Cluster. I can access Galera with the maxscale user, but I can't log in through MaxScale with other users. I created a user in Galera and another user with the same username via maxctrl, but it returns an Access denied error. How can I create users for applications in MaxScale 2.4.4? maxscale.cnf:
[maxscale]
threads=1

[Galera-Monitor]
type=monitor
module=galeramon
servers=server1,server2,server3
user=maxscale
password=qwe123
disable_master_failback=1

[Galera-Service]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxscale
password=qwe123

[Galera-Listener]
type=listener
#router=readwritesplit
service=Galera-Service
protocol=MySQLClient
port=3306

[Write-Service]
type=service
router=readconnroute
router_options=master
servers=server1, server2, server3
user=maxscale
password=qwe123

[Read-Service]
type=service
router=readconnroute
router_options=slave
servers=server1, server2, server3
user=maxscale
password=qwe123

#[Splitter-Service]
#type=service
#router=readwritesplit
#servers=server1,server2,server3
#user=maxscale
#password=qwe123

#[Splitter-Listener]
#type=listener
#service=Splitter-Service
#protocol=MariaDBClient
#port=3306

[CLI]
type=service
router=cli

[CLI-Listener]
type=listener
service=CLI
protocol=maxscaled
address=0.0.0.0
port=6603

[server1]
type=server
address=192.168.122.93
port=3306
protocol=MySQLBackend

[server2]
type=server
address=192.168.122.17
port=3306
protocol=MySQLBackend

[server3]
type=server
address=192.168.122.13
port=3306
protocol=MySQLBackend
I found
=1
for logging in to MaxScale with the root user, but I don't know where I should add it;
maxctrl list servers
┌─────────┬────────────────┬──────┬─────────────┬─────────────────────────┬──────┐
│ Server  │ Address        │ Port │ Connections │ State                   │ GTID │
├─────────┼────────────────┼──────┼─────────────┼─────────────────────────┼──────┤
│ server1 │ 192.168.122.93 │ 3306 │ 0           │ Master, Synced, Running │      │
├─────────┼────────────────┼──────┼─────────────┼─────────────────────────┼──────┤
│ server2 │ 192.168.122.17 │ 3306 │ 0           │ Slave, Synced, Running  │      │
├─────────┼────────────────┼──────┼─────────────┼─────────────────────────┼──────┤
│ server3 │ 192.168.122.13 │ 3306 │ 0           │ Slave, Synced, Running  │      │
└─────────┴────────────────┴──────┴─────────────┴─────────────────────────┴──────┘
maxscale runs at
.168.122.222
and I want all applications to connect to this address with their own user;
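For reference, a hedged sketch: MaxScale's `enable_root_user` is a service-level parameter (off by default) that permits connections as `root` through a service, so it belongs in the service section rather than in `[maxscale]`; values below are copied from the configuration above:

```ini
[Galera-Service]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxscale
password=qwe123
; Allow clients to connect through this service as the MariaDB root user
enable_root_user=1
```

For other application users, no MaxScale-side user creation should be needed: accounts are created in the Galera cluster itself, MaxScale fetches them from the backends, and each account must be allowed to connect from the MaxScale host (e.g. 'appuser'@'192.168.122.%', a hypothetical example).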
haj_baba (111 rep)
Dec 28, 2019, 11:38 AM • Last activity: May 14, 2024, 08:12 PM
1 vote
1 answer
1486 views
How to resolve GTID difference between Master and Slave servers in MariaDB replication using MaxScale?
I accidentally wrote directly to the slave DB, and therefore the data is no longer in sync. When I check the MaxScale status, all servers are running, but the slaves are not detected as "Slave" and there is a difference in GTID. (Screenshot: maxctrl server states.) Does anyone know how to put them back in sync? I've already searched the MaxScale documentation, but I could not find this specific use case there: https://mariadb.com/kb/en/mariadb-maxscale-2302-automatic-failover-with-mariadb-monitor/
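A hedged outline of the usual repair, assuming the accidental writes on the slave can be discarded: restore the slave from a fresh backup of the master (so its data and GTID state match the master again), then re-point replication. Host name and GTID below are placeholders, not values from this setup:

```sql
-- On the slave, after restoring data consistent with the master:
STOP SLAVE;
SET GLOBAL gtid_slave_pos = '0-1-12345';  -- GTID recorded by the backup (placeholder)
CHANGE MASTER TO
  MASTER_HOST = 'master-host',            -- placeholder
  MASTER_USE_GTID = slave_pos;
START SLAVE;
SHOW SLAVE STATUS\G                        -- Gtid_IO_Pos should start advancing
```

Once the slave replicates cleanly, the monitor should detect it and restore the "Slave" label on its own.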
Lucian Tarbă (21 rep)
Jun 2, 2023, 08:04 AM • Last activity: Jun 2, 2023, 11:56 AM
2 votes
1 answer
1477 views
Why should I use MaxScale readwritesplit router over a three node Galera cluster?
In the following official MariaDB blog, Getting Started with MariaDB Galera and MariaDB MaxScale , the author configures a MaxScale host over a three-node Galera cluster and uses the Galera monitor (OK) and the readwritesplit router. It is not clear to me: if I can spread all transactions equally (or optionally weighted) across all nodes, including writes, why would I bother to split reads and writes? What would be the benefit of this added complexity? Regarding performance, what would be the preferred MaxScale router?
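For comparison, a hedged sketch of the simpler alternative the question alludes to: a readconnroute service that load-balances whole connections across all synced Galera nodes rather than splitting individual statements (server names are assumptions):

```ini
[Round-Robin-Service]
type=service
router=readconnroute
; route connections to any synced Galera node, writes included
router_options=synced
servers=node1,node2,node3
user=maxscale
password=maxscale_pw
```

The usual argument for readwritesplit on Galera, even though every node can accept writes, is that funneling writes to a single node avoids certification conflicts between transactions updating the same rows on different nodes; whether that trade-off pays off depends on the write pattern.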
g.pickardou (187 rep)
Jan 22, 2023, 04:53 AM • Last activity: Jan 23, 2023, 06:57 AM
0 votes
1 answer
711 views
MariaDB Maxscale caching not working
I have set up MaxScale (v6.2) and connected it to the Galera Cluster (3 nodes - MariaDB 10.5). I am trying to use the cache filter but it does not seem to work. I have enabled the general log on all the nodes, and whenever I run the query I can see that the queries are served by the nodes instead of from the MaxScale cache. I also noticed that when I use mysqlslap with concurrency 10, the general log file on each node shows 10 connections and the actual query hit is 3. When I do a similar operation using HAProxy, the general log shows 3 connections and 3 hits. I am not sure if there is anything that needs to be set up properly for MaxScale. Here is my maxscale.cnf
# MaxScale documentation:
# https://mariadb.com/kb/en/mariadb-maxscale-25/ 

# Global parameters
#
# Complete list of configuration options:
# https://mariadb.com/kb/en/mariadb-maxscale-25-mariadb-maxscale-configuration-guide/ 

[maxscale]
threads=auto
log_info=true

# Server definitions
#
# Set the address of the server to the network
# address of a MariaDB server.
#

[server1]
type=server
address=1.1.1.1
port=3306
protocol=MariaDBBackend

[server2]
type=server
address=1.1.1.2
port=3306
protocol=MariaDBBackend

[server3]
type=server
address=1.1.1.3
port=3306
protocol=MariaDBBackend

# Monitor for the servers
#
# This will keep MaxScale aware of the state of the servers.
# MariaDB Monitor documentation:
# https://mariadb.com/kb/en/maxscale-25-monitors/ 

[Galera-Monitor]
type=monitor
module=galeramon
servers=server1,server2,server3
user=maxscale
password=XXXXXXXX
monitor_interval=2000

#Galera router service
[Galera-Service]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxscale
password=XXXXXXXX
lazy_connect=true

#Galera cluster listener
[Galera-Listener]
type=listener
service=Galera-Service
protocol=MariaDBClient
address=0.0.0.0
port=3306

#cache
[Cache]
type=filter
module=cache
storage=storage_inmemory
soft_ttl=300s
hard_ttl=600s
cached_data=shared
The image below shows the MaxScale log where the query is routed to the server instead of being served from the cache.
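One thing worth checking in a configuration like the one above: in MaxScale a filter only takes effect when a service references it through the filters= parameter, and the [Galera-Service] section here does not. A minimal sketch of the service with the cache attached (server names and password as in the configuration above):

```ini
# Hedged sketch: attaching the [Cache] filter to the service with filters=.
[Galera-Service]
type=service
router=readwritesplit
servers=server1,server2,server3
user=maxscale
password=XXXXXXXX
lazy_connect=true
filters=Cache
```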
Suraj (103 rep)
Jan 17, 2022, 03:32 PM • Last activity: Jan 31, 2022, 05:19 PM
1 votes
2 answers
1351 views
How to limit impact of a bad application on MariaDB Galera Cluster with MaxScale?
I have a Galera Replication cluster with three MariaDB nodes where a MaxScale Active-Passive cluster in front provides a single node image to its clients. I have a badly behaving client, which opens connections and doesn't close them. The number of connections keeps increasing until database limits are hit. To limi...
I have a Galera Replication cluster with three MariaDB nodes where a MaxScale Active-Passive cluster in front provides a single node image to its clients. I have a badly behaving client, which opens connections and doesn't close them. The number of connections keeps increasing until the database limits are hit. To limit the number of connections I have configured the two parameters below
max_connections=
max_user_connections=
My situation is this: when I have only max_connections configured, whenever the limit is reached the Galera node stops accepting more connections with a "Too many connections" error. When MaxScale sees these connection rejections n times, it puts the server into *Maintenance* mode. I can understand this behaviour; it's expected. When I configure max_user_connections, because the application is behaving badly and trying to make new connections continuously, once the user-specific limit is reached further connection attempts to the MariaDB nodes in the backend fail. MaxScale observes these failures and again puts the server into *Maintenance* mode. I believe during this time it only sees connection attempts from the bad client; no other application tried to connect. In this way, MaxScale puts all three nodes into *Maintenance* mode over time, which makes the complete DB service unavailable. For me as the administrator the situation stays the same: putting a user-specific limit doesn't achieve anything. I would like to ask two points here **Q1. How can I prevent connection failures from just one user from putting the backend MariaDB node into maintenance?** **Q2. Any documentation, tutorials, or article references on how and when MaxScale decides to put a server in Maintenance mode?** Below are the details about the environment >Galera - 25.3.23, >MariaDB - 10.3.12, >MaxScale - 2.4.11, >OS - RHEL 7.4 (Maipo) Here is my configuration **MariaDB Galera Configuration**
[server]

# this is only for the mysqld standalone daemon
[mysqld]
#user statistics
userstat=1
performance_schema
#wait_timeout=600
max_allowed_packet=1024M
#
lower_case_table_names=1
#
max_connections=1500
max_user_connections=200
#
# * Galera-related settings
#
[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_provider_options="gcache.size=300M; gcache.page_size=300M; pc.ignore_sb=false; pc.ignore_quorum=false"
#wsrep_cluster_address defines members of the cluster
wsrep_cluster_address=gcomm://x.x.x.1,x.x.x.2,x.x.x.3
wsrep_cluster_name="mariadb-cluster"
wsrep_node_address=x.x.x.1
wsrep_node_incoming_address=x.x.x.1
wsrep_debug=OFF
#
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
query_cache_size=0
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=5G
#
bind-address=x.x.x.1
#
[mariadb]
#performance
wait_timeout=31536000
#
#query logging
log_output=FILE
#slow queries
slow_query_log
slow_query_log_file=/var/log/mariadb/mariadb-slow.log
long_query_time=10.0
log_queries_not_using_indexes=ON
min_examined_row_limit=1000
log_slow_rate_limit=1
log_slow_verbosity=query_plan,explain
#
#error logs
log_error=/var/log/mariadb/mariadb-error.log
log_warnings=2
All three Galera nodes are configured similarly. **MaxScale configuration**
[maxscale]
threads=auto

# Server definitions
[mariadb1]
type=server
address=x.x.x.1
port=3306
protocol=MariaDBBackend
#priority=0

[mariadb2]
type=server
address=x.x.x.2
port=3306
protocol=MariaDBBackend
#priority=1

[mariadb3]
type=server
address=x.x.x.3
port=3306
protocol=MariaDBBackend
#priority=1

# Monitor for the servers
#

[Galera-Monitor]
type=monitor
module=galeramon
servers=mariadb1, mariadb2, mariadb3
user=xxx
password=xxx
#disable_master_role_setting=true
monitor_interval=1000
#use_priority=true
#
disable_master_failback=true
available_when_donor=true

# Service definitions

[Galera-Service]
type=service
router=readwritesplit
master_accept_reads=true
connection_keepalive=300s
master_reconnection=true
master_failure_mode=error_on_write
connection_timeout=3600s
servers=mariadb1, mariadb2, mariadb3
user=xxx
password=xxx
#filters=Query-Log-Filter

#Listener

[Galera-Listener]
type=listener
service=Galera-Service
protocol=MariaDBClient
port=4306
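MaxScale can also cap connections on its own side, before the backend limits are ever reached. A hedged sketch using the service-level max_connections parameter; the limit value is illustrative and the rest mirrors the configuration above:

```ini
# Hedged sketch: capping client connections at the MaxScale service, so a
# misbehaving client is refused by the proxy instead of exhausting the backends.
[Galera-Service]
type=service
router=readwritesplit
servers=mariadb1, mariadb2, mariadb3
user=xxx
password=xxx
max_connections=500
```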
Amit P (111 rep)
Mar 1, 2021, 07:22 AM • Last activity: Oct 28, 2021, 08:26 AM
1 votes
3 answers
4395 views
How to Sync MySQL Databases when offline?
My application that is running on a client uses a MySQL database running on a server. So multiple clients are connected to the same server. That works well when the server is online. But now I would like to enhance my application to be able to run in an offline mode. +--------------+ | | +----------...
My application that is running on a client uses a MySQL database running on a server, so multiple clients are connected to the same server. That works well when the server is online. But now I would like to enhance my application to be able to run in an offline mode.

               +--------------+
               |              |
   +-----------+    SERVER    +----------+
   |           |              |          |
   |           +-------+------+          |
   |                   |                 |
+------+-------+  +-------+------+  +-------+------+
|              |  |              |  |              |
|   Client 1   |  |   Client 2   |  |   Client X   |
|              |  |              |  |              |
+--------------+  +--------------+  +--------------+

Now comes the problem: what happens when the client is offline? I need a copy of my MySQL database on each client too. By default the application interacts with the MySQL on the server. If this server is not accessible (for whatever reason: the server is offline or the client has no internet connection) it should use the MySQL running on the client. If the client/server connection becomes available again, the databases need to be synced automatically. My question now is: how can I achieve this? First of all I checked MySQL replication, but in my scenario I have multiple "masters" and an unknown number of clients, so I'm afraid that replication is not my solution. Is it possible to solve my problem with MaxScale? I never worked with that, so I really appreciate any help.
Lars (109 rep)
Sep 19, 2021, 09:13 AM • Last activity: Oct 4, 2021, 07:54 AM
0 votes
1 answers
524 views
MariaDB Maxscale REST API can't show result
Hi, I have already set up MaxAdmin on MaxScale so that it can be accessed from the REST API. This is my configuration ..... [CLI_Service] type=service router=cli [CLI_Listener] type=listener service=CLI_Service protocol=maxscaled address=192.168.101.107 socket=default [MaxAdmin_Inet] type=listener service=CLI_Service prot...
Hi, I have already set up MaxAdmin on MaxScale so that it can be accessed from the REST API. This is my configuration:

.....
[CLI_Service]
type=service
router=cli

[CLI_Listener]
type=listener
service=CLI_Service
protocol=maxscaled
address=192.168.101.107
socket=default

[MaxAdmin_Inet]
type=listener
service=CLI_Service
protocol=HTTPD
address=192.168.101.107
port=8010

But when I test the URL like this

curl --include --basic --user "admin:mariadb" http://192.168.101.107:8010/v1/servers

the result is

HTTP/1.1 200 OK
Date: Wed, 13 May 2020 11:33:15 GMT
Server: MaxScale(c) v.1.0.0
Connection: close
WWW-Authenticate: Basic realm="MaxInfo"
Content-Type: application/json

Commands must consist of at least two words. Type help for a list of commands

Did I miss something in the configuration?
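In MaxScale 2.1 and later, the /v1 endpoints are served by the built-in REST admin interface (the admin_* parameters in the [maxscale] section, default port 8989), not by an HTTPD listener on the CLI service, which exposes the older maxinfo-style interface instead. A hedged sketch of the REST-side configuration, reusing the host address from the question:

```ini
# Hedged sketch: the REST API is configured in the [maxscale] section,
# not as a listener.
[maxscale]
threads=auto
admin_host=192.168.101.107
admin_port=8989
admin_enabled=1
# The /v1 resources would then be reachable at
# http://192.168.101.107:8989/v1/servers (e.g. via curl or maxctrl).
```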
febry (57 rep)
May 13, 2020, 04:35 AM • Last activity: May 15, 2021, 07:03 AM
3 votes
2 answers
2952 views
maxscale memory requirements
What are the expected production requirements for memory/CPU for MaxScale? We have a server with 4 GB of memory configured to run MaxScale as a query r/w router, with a replication manager running on the same server. I have run into problems when doing a large number of inserts in a single transaction, in the...
What are the expected production requirements for memory/CPU for MaxScale? We have a server with 4 GB of memory configured to run MaxScale as a query r/w router, with a replication manager running on the same server. I have run into problems when doing a large number of inserts in a single transaction, in the millions of rows: a 5G file with 10 million rows, loaded with LOAD DATA INFILE. Running this same LOAD DATA INFILE against the backend server directly works without any issues. This is the max_allowed_packet on the backend server:

MariaDB [(none)]> SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet'\G
*************************** 1. row ***************************
Variable_name: max_allowed_packet
        Value: 16777216

The backend server has 32 GB of RAM, with no services currently using it, as we are still testing it out and tuning the configuration. The MaxScale server runs out of memory; this large set of inserts causes MaxScale to crash. I see the following errors on the client side:

ERROR 2013 (HY000) at line 1: Lost connection to MySQL server during query
ERROR 2013 (HY000) at line 1: Lost connection to MySQL server at 'reading initial communication packet', system error: 111
ERROR 2013 (HY000) at line 1: Lost connection to MySQL server at 'reading initial communication packet', system error: 111

And on the server side I see the following in the MaxScale logs:

2017-10-16 19:06:32 notice : Started MaxScale log flusher.
2017-10-16 19:06:32 notice : MaxScale started with 7 server threads.
2017-10-16 19:15:14 notice : Waiting for housekeeper to shut down.
2017-10-16 19:15:15 notice : Finished MaxScale log flusher.
2017-10-16 19:15:15 notice : Housekeeper shutting down.
2017-10-16 19:15:15 notice : Housekeeper has shut down.
2017-10-16 19:15:15 notice : MaxScale received signal SIGTERM. Exiting.
2017-10-16 19:15:15 notice : MaxScale is shutting down.
2017-10-16 19:15:15 notice : MaxScale shutdown completed.
2017-10-16 19:15:15 MariaDB MaxScale is shut down.
----------------------------------------------------
MariaDB MaxScale /var/log/maxscale/maxscale.log Mon Oct 16 19:15:17 2017
----------------------------------------------------------------------------
2017-10-16 19:15:17 notice : Working directory: /var/log/maxscale
2017-10-16 19:15:17 notice : MariaDB MaxScale 2.1.5 started
2017-10-16 19:15:17 notice : MaxScale is running in process 21067
2017-10-16 19:15:17 notice : Configuration file: /etc/maxscale.cnf
2017-10-16 19:15:17 notice : Log directory: /var/log/maxscale
2017-10-16 19:15:17 notice : Data directory: /var/cache/maxscale
2017-10-16 19:15:17 notice : Module directory: /usr/lib64/maxscale
2017-10-16 19:15:17 notice : Service cache: /var/cache/maxscale
2017-10-16 19:15:17 notice : Loading /etc/maxscale.cnf.
2017-10-16 19:15:17 notice : /etc/maxscale.cnf.d does not exist, not reading.
2017-10-16 19:15:17 notice : Loaded module ccrfilter: V1.1.0 from /usr/lib64/maxscale/libccrfilter.so
2017-10-16 19:15:17 notice : [cli] Initialise CLI router module
2017-10-16 19:15:17 notice : Loaded module cli: V1.0.0 from /usr/lib64/maxscale/libcli.so
2017-10-16 19:15:17 notice : [readwritesplit] Initializing statement-based read/write split router module.
2017-10-16 19:15:17 notice : Loaded module readwritesplit: V1.1.0 from /usr/lib64/maxscale/libreadwritesplit.so
2017-10-16 19:15:17 notice : [mysqlmon] Initialise the MySQL Monitor module.
2017-10-16 19:15:17 notice : Loaded module mysqlmon: V1.5.0 from /usr/lib64/maxscale/libmysqlmon.so
2017-10-16 19:15:17 notice : Loaded module MySQLBackend: V2.0.0 from /usr/lib64/maxscale/libMySQLBackend.so
2017-10-16 19:15:17 notice : Loaded module MySQLBackendAuth: V1.0.0 from /usr/lib64/maxscale/libMySQLBackendAuth.so
2017-10-16 19:15:17 notice : Loaded module maxscaled: V2.0.0 from /usr/lib64/maxscale/libmaxscaled.so
2017-10-16 19:15:17 notice : Loaded module MaxAdminAuth: V2.1.0 from /usr/lib64/maxscale/libMaxAdminAuth.so
2017-10-16 19:15:17 notice : Loaded module MySQLClient: V1.1.0 from /usr/lib64/maxscale/libMySQLClient.so
2017-10-16 19:15:17 notice : Loaded module MySQLAuth: V1.1.0 from /usr/lib64/maxscale/libMySQLAuth.so
2017-10-16 19:15:17 notice : No query classifier specified, using default 'qc_sqlite'.
2017-10-16 19:15:17 notice : Loaded module qc_sqlite: V1.0.0 from /usr/lib64/maxscale/libqc_sqlite.so
2017-10-16 19:15:17 notice : Encrypted password file /var/cache/maxscale/.secrets can't be accessed (No such file or directory). Password encryption is not used.
2017-10-16 19:15:17 notice : [MySQLAuth] [Read-Write_Service] Loaded 227 MySQL users for listener Read-Write_Listener.
2017-10-16 19:15:17 notice : Listening for connections at [10.56.229.60]:3306 with protocol MySQL
2017-10-16 19:15:17 notice : Listening for connections at [::]:6603 with protocol MaxScale Admin
2017-10-16 19:15:17 notice : Started MaxScale log flusher.
2017-10-16 19:15:17 notice : MaxScale started with 7 server threads.
2017-10-16 19:15:17 notice : Server changed state: tmsdb-isa-01[10.56.228.64:3306]: new_master. [Running] -> [Master, Running]
2017-10-16 19:15:17 notice : Server changed state: tmsdb-isa-02[10.56.228.65:3306]: new_slave. [Running] -> [Slave, Running]
2017-10-16 19:15:17 notice : Server changed state: tmsdb-rp-01[10.21.228.65:3306]: new_slave. [Running] -> [Slave, Running]
2017-10-16 19:15:17 notice : [mysqlmon] A Master Server is now available: 10.56.228.64:3306

I ran vmstat and noticed that the server runs out of memory:

[root@maxscale-isa-02 ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 1485828      0 356652    0    0     1     0    0    0  0  0 100  0  0
[root@maxscale-isa-02 ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0  368000      0 356496    0    0     1     0    0    0  0  0 100  0  0
[root@maxscale-isa-02 ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0      0  173388      0 356512    0    0     1     0    0    0  0  0 100  0  0
[root@maxscale-isa-02 ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  1      0  103660      0 308088    0    0     1     0    0    0  0  0 100  0  0
[root@maxscale-isa-02 ~]# vmstat
-bash: fork: Cannot allocate memory
[root@maxscale-isa-02 ~]# vmstat
-bash: fork: Cannot allocate memory
[root@maxscale-isa-02 ~]# vmstat
-bash: fork: Cannot allocate memory
[root@maxscale-isa-02 ~]# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 2478676      0 323508    0    0     1     0    0    0  0  0 100  0  0

**Edit** We added an additional 4 GB of RAM to the MaxScale server, going from 4 GB to 8 GB total. We can now see that it will use up to 5 GB of RAM for the import process without crashing. This seems to indicate that, when using the ReadWriteSplit router, the server running MaxScale will need as much memory as all of the services running queries through it require at peak memory usage. We are going to test the ReadConnRoute router to see if we can lower the memory usage requirements.
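One way to keep readwritesplit's per-statement buffering bounded while still using LOAD DATA is to split the input file into smaller chunks and load them in separate statements. A hedged sketch; the row counts and file names are illustrative, and the commented-out load line assumes a reachable MaxScale host:

```shell
# Hedged sketch: split a huge input file so no single LOAD DATA statement
# forces the proxy to buffer gigabytes at once.
seq 1 1000000 > rows.csv          # stand-in for the real 10M-row file
split -l 100000 rows.csv chunk_   # 100k rows per chunk -> chunk_aa .. chunk_aj
ls chunk_* | wc -l                # prints 10
# Each chunk is then loaded in its own statement, e.g.:
# mysql -h maxscale-host -e "LOAD DATA LOCAL INFILE 'chunk_aa' INTO TABLE t"
```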
nelaaro (767 rep)
Oct 16, 2017, 05:37 PM • Last activity: Dec 13, 2019, 12:40 PM
0 votes
1 answers
579 views
How Maxscale deals with "Select for update transactions"
We are facing intermittent deadlock issues that are not reflected in the MaxScale log nor the MariaDB log, only in the applications trying to do SQL transactions in the database. We mostly use readwritesplit with 2 nodes (yes, I know, bad design) and we don't know exactly how MaxScale deals with t...
We are facing intermittent deadlock issues that are not reflected in the MaxScale log nor the MariaDB log, only in the applications trying to do SQL transactions in the database. We mostly use readwritesplit with 2 nodes (yes, I know, bad design) and we don't know exactly how MaxScale deals with SELECT ... FOR UPDATE queries:

1. to the slave, because it is a read query (SELECT)
2. to the master, because it is a write query (UPDATE)

MaxScale version 2.1.16, MariaDB 2.1.29 Thanks!
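For reference, the statement pattern in question is a locking read; with readwritesplit's default behaviour, statements inside an explicit transaction are routed to the master, so a sequence like the following would normally land entirely on the master node (table and column names are placeholders):

```sql
-- Hedged sketch: a locking read inside an explicit transaction.
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;   -- locks the row
UPDATE accounts SET balance = balance - 10 WHERE id = 42;
COMMIT;
```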
PsySkeletor (11 rep)
Feb 6, 2019, 12:15 PM • Last activity: Feb 6, 2019, 01:53 PM
Showing page 1 of 20 total questions