
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
1 answers
482 views
Postgres-XL adding GTM Proxy seems to do nothing
I've set up a Postgres-XL cluster using this recipe: GTM: hostname=host1 nodename=gtm; Coordinator: hostname=host2 nodename=coord1; Datanode1: hostname=host3 nodename=datanode1; Datanode2: hostname=host4 nodename=datanode2. When I ran a load test against it, the GTM would fall over. I tweaked settings until the GTM didn't fall over but only reported errors, and thus kept working after the load test. I then added a GTM Proxy. I did not run init all but rather only initialized the proxy. When I restarted the cluster, the GTM reported that the GTM Proxy was up and running. When I looked at the GTM Proxy's log, it looked like it had started up and was connected. But when I ran the load test again, I got the same result, with no log entries for the GTM Proxy. So it seems the GTM Proxy didn't pick up the load processing as I expected it to. I don't know how to troubleshoot this. Any pointers on where to look next? (I don't know what extra info to post here.)
TungstenX (61 rep)
Jun 30, 2017, 11:15 AM • Last activity: Aug 4, 2025, 05:07 PM
1 votes
1 answers
375 views
Cassandra pool warning displaying continuously
I am using the Cassandra driver for Python in Spyder. I am trying to fetch some data from a Cassandra table. Here is my code:

from cassandra.cluster import Cluster
cluster = Cluster(['some_ip'])
session = cluster.connect('some_key_space')
df_filtered_10m = session.execute("some query")

This is all working fine and I am getting the desired results. The problem is that this message keeps popping up in the console:

> WARNING:cassandra.pool:Error attempting to reconnect to 10.0.10.91, scheduling retry in 256.0 seconds: [Errno None] Tried connecting to [('10.0.10.91', 9042)]. Last error: timed out

I have tried cluster.shutdown, but that is not working either. How do I get rid of it?
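The warning means the driver has discovered a peer (10.0.10.91) in the cluster's metadata that it cannot reach from the client machine, and it keeps scheduling reconnect attempts. The clean fix is to make that node reachable or remove it from the cluster; if the topology is expected, one workaround is to quiet the driver's pool logger. A minimal sketch (it silences the symptom only, the driver still retries in the background):

```python
import logging

# The noisy reconnect messages come from the "cassandra.pool" logger.
# Raising its level to ERROR hides the WARNING-level retry messages.
# Run this before creating the Cluster.
logging.getLogger("cassandra.pool").setLevel(logging.ERROR)

# from cassandra.cluster import Cluster   # then connect as in the question
# cluster = Cluster(['some_ip'])
```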
Osama Dar (111 rep)
Sep 12, 2018, 05:19 AM • Last activity: Aug 2, 2025, 01:04 AM
0 votes
1 answers
32 views
If an Availability Group contains two FCIs, how many quorums do I have?
To have an Availability Group, I must have a Windows Server Failover Cluster. All Windows Server Failover Clusters have the ability to form exactly one quorum. This clearly means that if I have two Failover Cluster Instances, I must have two quorums. But what happens if I make an Availability Group out of two Failover Cluster Instances? **Do I need to manage three quorums?** [The documentation](https://learn.microsoft.com/en-us/sql/database-engine/availability-groups/windows/failover-clustering-and-always-on-availability-groups-sql-server?view=sql-server-ver17#WSFC) certainly gives the impression that the Availability Group is subject to the vote of a quorum, but I don't know which quorum.
J. Mini (1225 rep)
Jul 31, 2025, 09:05 PM • Last activity: Jul 31, 2025, 09:18 PM
0 votes
2 answers
686 views
Restoring secondary backup files on primary server in sql server
I want to restore a log backup file on top of a copy-only full backup that is in RESTORING mode; the backups were taken from a secondary replica. I took the copy-only full backup on the secondary replica, then failed over, and after that took a .trn log backup on the (now) primary replica. Both the full and .trn backups come from the same server, but the full backup was taken while the server was in the secondary role and the log backup while it was in the primary role. I want to restore both of them in RESTORING mode on the secondary replica after failover. I wanted to add the database to the cluster by hand because our databases are big. When they are restored on the primary server, I get this error:

> The log in this backup set begins at LSN 38202000004450500001, which is too recent to apply to the database. An earlier log backup that includes LSN 38167000000015500001 can be restored.
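The error describes a broken log chain: every log backup records the LSN it begins at, and it can only be applied when that LSN is not later than the point the restoring database has already reached. A simplified sketch of the check behind the message (not SQL Server's exact logic, using the LSN values from the error text):

```python
# Simplified illustration of the log-chain check behind the error message.
def can_apply_log_backup(log_first_lsn: int, restored_to_lsn: int) -> bool:
    """A log backup applies only if it begins at or before the LSN the
    database has already been restored to; otherwise there is a gap and
    an earlier log backup must be restored first."""
    return log_first_lsn <= restored_to_lsn

log_backup_begins_at = 38202000004450500001   # "begins at LSN ..."
database_restored_to = 38167000000015500001   # "... includes LSN ... can be restored"

print(can_apply_log_backup(log_backup_begins_at, database_restored_to))  # False: gap
```

In practice this means an earlier log backup (one taken between the full backup and this .trn file) has to be restored first to bridge the gap.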
NShyn (1 rep)
Jan 25, 2022, 09:57 AM • Last activity: Jul 25, 2025, 10:05 PM
0 votes
1 answers
25 views
cluster vs replica in mariadb or mysql
I'm confused between the two terms `cluster` and `replica` in `mariadb`.
I've seen tutorials that explain a cluster or a replica.
Is a replica the same as a cluster or not?
If not, please explain their differences.
PersianGulf (115 rep)
Jul 24, 2025, 05:58 PM • Last activity: Jul 24, 2025, 08:28 PM
0 votes
1 answers
1037 views
The Windows Server Failover Clustering (WSFC) resource control API returned error code 19
What is the root cause of this error? I couldn't find any related information. There are two nodes in the Availability Group, and if you reboot the second node, the database won't come up.
Davoud Teimouri (1 rep)
Apr 8, 2021, 12:29 AM • Last activity: Jul 24, 2025, 07:04 PM
0 votes
3 answers
161 views
Need to configure Availability Groups between three different SQL failover instance
We have a total of 6 nodes and 3 failover clusters (each with two nodes).
- Cluster1.TestAD.com (Node1, Node2) - SQL Failover Instance (SQL1) - 1 TB shared storage
- Cluster2.TestAD.com (Node3, Node4) - SQL Failover Instance (SQL2) - 1 TB shared storage
- Cluster3.TestAD.com (Node5, Node6) - SQL Failover Instance (SQL3) - 1 TB shared storage

Now we want to create an availability group setup between the above three SQL instances. We plan to create a new cluster (cluster4.TestAD.com) with all six nodes and create Availability Groups with all three instances. SQL Edition: SQL Server 2012 R2 Enterprise. Is it possible? Any other recommendations?
adkalavadia (26 rep)
Aug 24, 2016, 09:45 AM • Last activity: Jul 16, 2025, 09:04 PM
0 votes
1 answers
407 views
Cannot start MySQL Cluster Management Service
> Could not determine which nodeid to use for this node. Specify it with --ndb-nodeid=<nodeid> on command line.

This is what Event Viewer is saying.
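ndb_mgmd typically raises this when it cannot match the host it is running on to a node entry in the cluster configuration. One option, sketched here with hypothetical hostnames, is to give every process an explicit NodeId in config.ini:

```ini
# config.ini for the management server -- hostnames are placeholders.
[ndb_mgmd]
NodeId=1
HostName=mgm-host

[ndbd]
NodeId=2
HostName=data-host-1

[ndbd]
NodeId=3
HostName=data-host-2

[mysqld]
NodeId=4
```

Alternatively, start the service with an explicit id, e.g. `ndb_mgmd --ndb-nodeid=1`, as the event log message itself suggests.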
Ninja (3 rep)
Jul 22, 2013, 04:01 AM • Last activity: Jul 11, 2025, 09:05 PM
0 votes
1 answers
193 views
Cluster Error 1069 and 1205
I have just built a SQL Server 2022 Always on Availability group with 3 Nodes at HQ and 1 Node at DR. During our testing phase I have noticed every time I rebooted the primary without failing over or moving the cluster resources from the primary I get error `1069` and `1205` in the cluster. My clust...
I have just built a SQL Server 2022 Always On Availability Group with 3 nodes at HQ and 1 node at DR. During our testing phase I noticed that every time I rebooted the primary without failing over or moving the cluster resources off it, I got errors 1069 and 1205 in the cluster. My cluster validation came back without any errors, but with some warnings, as it is a multi-subnet cluster, and there was nothing in the cluster logs or SQL error logs. Before I contact Microsoft support I wanted to check whether these errors are intended behavior of the cluster. The availability group always failed over automatically every time I rebooted the primary to simulate a failure. Error 1069: > Cluster resource 'ABC_LIVEDB' of type 'SQL Server Availability Group' > in clustered role 'ABC_LIVEDB' failed. Error 1205: > The Cluster service failed to bring clustered role 'ABC_LIVEDB' > completely online or offline. One or more resources may be in a failed > state. This may impact the availability of the clustered role. I greatly appreciate the guidance.
SQL_NoExpert (1117 rep)
Mar 7, 2025, 01:40 AM • Last activity: Jul 9, 2025, 02:08 AM
2 votes
1 answers
182 views
Snapshot Folder Location for Replication with Clustered Instance
I am assigned to create a transactional replication with a SQL Server clustered instance for reporting purposes. I'm planning to use the Distributor and the Subscriber on the same box. Regarding the snapshot folder, is it ok to place it on the same box where the Distributor\Subscriber resides? Will...
I am assigned to create a transactional replication with a SQL Server clustered instance for reporting purposes. I'm planning to use the Distributor and the Subscriber on the same box. Regarding the snapshot folder, is it OK to place it on the same box where the Distributor\Subscriber resides? Will there be any problems?
SQL_NoExpert (1117 rep)
Sep 11, 2018, 02:24 PM • Last activity: Jul 6, 2025, 01:06 AM
0 votes
1 answers
179 views
MariaDB Galera and Geo redundancy
I am trying to simulate 2 clusters in different datacenters/different locations, using VMWare VSXi and 6 VMs: 3 are located on DC1 and other 3 on DC2. All nodes are RHEL 9. Both DCs communicate through the same VLAN and this works fine. I have installed MariaDB 10.5.22 + Galera. On DC1 , the 3 nodes...
I am trying to simulate 2 clusters in different datacenters/different locations, using VMware ESXi and 6 VMs: 3 are located in DC1 and the other 3 in DC2. All nodes are RHEL 9. Both DCs communicate through the same VLAN and this works fine. I have installed MariaDB 10.5.22 + Galera. In DC1, the 3 nodes are all data nodes, where node #1 was bootstrapped using the galera_new_cluster script. In DC2, 2 nodes are data nodes and I added 1 arbitrator. When starting the nodes, all 6 are connected to the same cluster, sharing data, and this seems to be working fine. The problem I find is when I simulate a network outage: I modify the VLAN ID on DC2 and both DCs lose connectivity. Looking at each node's logs, they effectively lose connectivity, but:
- all nodes (3 from DC1 and 2 from DC2) are unavailable for queries
- the arbitrator keeps retrying to connect to the nodes, e.g.: Oct 7 20:08:16 ire-lab-se3 garb-systemd: 2024-10-07 20:08:16.562 INFO: (dd79df53-972e, 'tcp://0.0.0.0:4567') connection to peer 00000000-0000 with addr tcp://10.20.0.1:4567 timed out, no messages seen in PT3S, socket stats: rtt: 0 rttvar: 250000 rto: 4000000 lost: 1 last_data_recv: 358971429 cwnd: 1 last_queued_since: 359271583299206 last_delivered_since: 359271583299206 send_queue_length: 0 send_queue_bytes: 0

I was expecting that:
- DC1 would remain available and synced; after all, the bootstrap was on node #1 and there are 3 nodes available on this network
- the garbd arbitrator would handle the remaining 2 nodes in DC2, allowing applications in DC2 to use the current data

I have tried multiple test scenarios, and this is the only one that is failing (network outage), and I can't seem to be able to solve the issue. The configuration files on each Galera data node are similar to (only node name and ID differ):
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#

# this is read by the standalone daemon and embedded servers
[server]
log_error=/var/log/mariadb/mariadb.err
log_warnings=9
#default_time_zone='UTC'
table_definition_cache=4096
table_open_cache=4096
#innodb_read_only_compressed=OFF # only for 10.6 and later

# this is only for the mysqld standalone daemon
# Settings user and group are ignored when systemd is used.
# If you need to run mysqld under a different user or group,
# customize your systemd unit file for mysqld/mariadb according to the
# instructions in http://fedoraproject.org/wiki/Systemd 
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mariadb/mariadb.log
pid-file=/run/mariadb/mariadb.pid

# disabling symlinks is recommend to prevent assorted security risks
symbolic_links=0

#enable binary logging
log_bin=/var/log/mariadb/mariadb-bin
log_bin_index=/var/log/mariadb/mariadb-bin.index

#enable relay log files
relay_log=/var/log/mariadb/relay-bin
relay_log_index=/var/log/mariadb/relay-bin.index

log_slave_updates=1
performance_schema=ON
interactive_timeout=180
wait_timeout=180

max_connections=500

#
# * Galera-related settings
#
[galera]
# Mandatory settings
#wsrep_on=ON
#wsrep_provider=
#wsrep_cluster_address=
#binlog_format=row
#default_storage_engine=InnoDB
#innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
#bind-address=0.0.0.0
#
# Optional setting
#wsrep_slave_threads=1
#innodb_flush_log_at_trx_commit=0

# this is only for embedded server
[embedded]

# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]

# This group is only read by MariaDB-10.5 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mariadb-10.5]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
wsrep_on=ON
query_cache_size=0
query_cache_type=0
innodb_log_file_size=100M
innodb_file_per_table
innodb_flush_log_at_trx_commit=2
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
# below parameter should not include Arbitrators, only MariaDB nodes
wsrep_cluster_address="gcomm://10.20.0.1,10.20.0.2,10.20.0.3,10.20.0.4,10.20.0.5"
wsrep_cluster_name='galera_cluster_LAB'
wsrep_node_address='10.20.0.1'
wsrep_node_name='galera_LAB_1'
wsrep_sst_method='rsync'
#'mariabackup'
#wsrep_sst_auth=backupuser;backupuser
server_id=2
wsrep_provider_options='gmcast.segment=1;gcache.size=2G'
#wsrep_sst_donor="galera_1"
and the garbd arbitrator is set up as:
# Copyright (C) 2012 Codership Oy
# This config file is to be sourced by garb service script.

# A comma-separated list of node addresses (address[:port]) in the cluster
 GALERA_NODES="10.20.0.4:4567 10.20.0.5:4567"

# Galera cluster name, should be the same as on the rest of the nodes.
 GALERA_GROUP="galera_cluster_LAB"

# Optional Galera internal options string (e.g. SSL settings)
# see https://galeracluster.com/library/documentation/galera-parameters.html 
 GALERA_OPTIONS=""

# Log file for garbd. Optional, by default logs to syslog
# FOR SOME REASON IT DOESN'T WORK; RETURNS AN ERROR PROBABLY RELATED TO PERMISSIONS
# LOG_FILE="/var/log/mariadb/garbd.log"

# Where to persist necessary data
# WORK_DIR=""
So I think this is pretty much standard. I googled a lot but could not find anything that makes this work as expected. The requirements are that, in production, both DCs should communicate freely; if a network outage happens, application servers in both DCs should continue working with the local MariaDB Galera nodes (querying and inserting new rows, if required); and once the network is up again, all nodes should sync data between them (new data from DC1 exported to DC2, new data from DC2 exported to DC1, and so on). Am I missing something here? Could anyone help? Thanks, Sergio
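One likely explanation, assuming all vote weights are at their default of 1: the partition splits the cluster 3 votes (DC1) against 3 votes (2 data nodes + garbd in DC2), and since neither side holds strictly more than half of the 6 votes, both partitions drop out of the primary component and refuse queries. Galera cannot let both sides keep accepting writes during a partition (that would be split-brain), but quorum can be biased so one DC survives, for example with the pc.weight provider option. A sketch for the DC1 nodes, keeping the other provider options from the question's config:

```ini
# On each of the three DC1 data nodes only (hypothetical sketch):
# DC1 then carries weight 2+2+2 = 6 out of a cluster total of 9, so it
# keeps quorum (> 9/2) when DC2 (1+1+garbd = 3) is cut off. DC2 still
# goes non-primary; a partition can never leave both sides writable.
wsrep_provider_options='gmcast.segment=1;gcache.size=2G;pc.weight=2'
```

The more common production layout is to run garbd in a third site, so that whichever DC can still reach the arbitrator retains the majority.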
scharrua (11 rep)
Oct 7, 2024, 09:54 PM • Last activity: Jul 4, 2025, 08:08 AM
0 votes
1 answers
179 views
SQL Server 2 node availability group - deleting secondary server, to be replaced later
I found https://dba.stackexchange.com/questions/266597/questions-regarding-removing-a-secondary-node-from-an-availability-group with Tony Hinkle's answer, but I'd like to get some clarification on 1) removing the databases from the AG, 2) removing the secondary node from the cluster, and 3) removing...
I found https://dba.stackexchange.com/questions/266597/questions-regarding-removing-a-secondary-node-from-an-availability-group with Tony Hinkle's answer, but I'd like to get some clarification on 1) removing the databases from the AG, 2) removing the secondary node from the cluster, and 3) removing all quorum votes except for the one node in the cluster. Does step 2 mean removing the node from the AG first, then from WSFC? How does the file share witness factor into removing all the other quorum votes? Should the one remaining cluster node and the file share witness keep their votes or just the node? Thanks!
Bobogator (95 rep)
Mar 31, 2022, 03:53 AM • Last activity: Jul 4, 2025, 07:02 AM
0 votes
2 answers
1094 views
Is MySQL Cluster just for in-memory databases?
In MySQL website I see below definition for MySQL Cluster. > MySQL Cluster is a technology that enables clustering of in-memory > databases in a shared-nothing system. I want to use Clustering for a chat application like `viber` that there are some `Ejabberd` servers, which use MySQL as database.Is...
On the MySQL website I see the definition below for MySQL Cluster. > MySQL Cluster is a technology that enables clustering of in-memory > databases in a shared-nothing system. I want to use clustering for a chat application like Viber, where there are some Ejabberd servers that use MySQL as the database. Is it necessary that the databases use in-memory tables? According to this definition, must all databases and tables be in-memory, or not? This is the link of the MySQL Cluster definition.
amir jj (289 rep)
Mar 27, 2016, 05:06 PM • Last activity: Jul 2, 2025, 07:05 PM
0 votes
1 answers
194 views
How to do failover between AG setup for FCI-FCI servers
Hope I can portrait my question well because i really get confused with AG's :) We have the setup like below- Windows cluster WKCLU01 with 4 nodes (Node1,2,3,4) and a file share witness in DC3, total 5 votes so ODD# fits in Out of those 4 node In DC1 --> SQL FCI Shared storage between Node1 and Node...
Hope I can portray my question well, because I really get confused with AGs. :) We have the setup below: Windows cluster WKCLU01 with 4 nodes (Node1, 2, 3, 4) and a file share witness in DC3, for a total of 5 votes, so an odd number fits. Out of those 4 nodes: In DC1 --> SQL FCI with shared storage between Node1 and Node2 as SQL1\inst1. In DC2 --> SQL FCI with shared storage between Node3 and Node4 as SQL2\inst2. Now we have to set up an AG between DC1 and DC2. Below is my understanding: the AG can be set up between the two replicas, here SQL1\inst1 and SQL2\inst2, in ASYNC mode as per the limitation, and SYNC mode can't be used. Assuming this is correct: 1. Is it true that automatic failover will happen only within each of the FCIs, just like plain old Always On FCIs? 2. How can we do a planned failover between DC1 and DC2 for our bi-monthly activity? Do we have T-SQL or PowerShell to help us automate this? From some MSDN links I am confused whether doing so will cause data loss, or whether it won't if we change the sync mode. Please suggest.
Newbie-DBA (804 rep)
Jul 16, 2020, 07:42 PM • Last activity: Jul 1, 2025, 08:05 AM
0 votes
1 answers
185 views
SQL Server Failover Cluster Scalable by adding nodes?
In the book *Pro SQL Server Always On Availability Groups* They write about scalability. Here is what they say > Windows Server 2012 (and above), failover clustering supports up to 64 > cluster nodes. This helps to easily scale out the solution by adding > nodes to it. For example, if you have a two...
In the book *Pro SQL Server Always On Availability Groups* They write about scalability. Here is what they say > Windows Server 2012 (and above), failover clustering supports up to 64 > cluster nodes. This helps to easily scale out the solution by adding > nodes to it. For example, if you have a two-node failover cluster > hosting applications that is getting close to hitting a node-specific > capacity limit such as a CPU or memory, it is very easy to add a third > node and redistribute the load. How is that possible? If you have one instance on two nodes and you need to add memory. How will another node help? You can't load balance to 3 nodes. I completely understand running two instances ( one on each node) but then adding a 3rd will provide what?
Shellz (452 rep)
Feb 11, 2018, 03:46 AM • Last activity: Jun 30, 2025, 04:03 PM
1 votes
1 answers
216 views
Is it possible to join MySQL 8.0 to a cluster of MySQL 5.7
I have a MySQL 5.7 (5.7.19-17-57-log Percona XtraDB Cluster) cluster using Galera, but I am in the middle of upgrading to 8.0 -- 8.0.23-14 Percona Server (GPL), Release '14'. I wasn't sure if it was possible to join the 8.0 box to the clustered 5.7.
I have a MySQL 5.7 (5.7.19-17-57-log Percona XtraDB Cluster) cluster using Galera, but I am in the middle of upgrading to 8.0 -- 8.0.23-14 Percona Server (GPL), Release '14'. I wasn't sure if it was possible to join the 8.0 box to the clustered 5.7.
user3525290 (113 rep)
May 25, 2021, 10:18 AM • Last activity: Jun 22, 2025, 12:02 PM
0 votes
2 answers
251 views
Why choose SQL FCI over Database Mirroring
I have SQL 2016 **Standard** Edition to install and configure a HA/DR solution. I need automatic failover in a cluster of two nodes. I am proposing SQL FCI over database Mirroring for the following reasons: - FCI will need only two nodes for the configuration and will failover automatically - Databa...
I have SQL 2016 **Standard** Edition to install, and I need to configure an HA/DR solution. I need automatic failover in a cluster of two nodes. I am proposing SQL FCI over database mirroring for the following reasons: - FCI will need only two nodes for the configuration and will fail over automatically - Database mirroring is being phased out by Microsoft, and moreover, in order to achieve automatic failover, we would need an extra server (for the witness) in addition to the two nodes of the cluster, hence requiring 3 different SQL instances in 3 VMs. Would you agree with my reasoning, and are there any other things I need to consider? Automatic failover is essential in the requirements. Thank you.
PTL_SQL (427 rep)
Jun 8, 2021, 12:42 AM • Last activity: May 28, 2025, 06:08 AM
0 votes
1 answers
3571 views
EventID 1196 | SQL Cluster & FailoverClustering
I have Pre-Staged the SQLCluster objects, since login had no Domain admin rights. Later SQL Cluster installation completed successfully.(Also tested with failover and failback).It is working fine till date. But I am receiving error in event log of failover cluster, that Cluster resources failed regi...
I pre-staged the SQL Cluster objects, since the login had no domain admin rights. Later the SQL cluster installation completed successfully (also tested with failover and failback). It has been working fine to date. But I am receiving an error in the failover cluster's event log saying that cluster resources failed registration of one or more associated DNS name(s). Can anybody explain its impact? I need help understanding the error below. > Cluster network name resource 'SQL Network Name (servername)' failed registration of one or more associated DNS name(s) for the following reason: DNS operation refused.
Rahrya (11 rep)
Dec 26, 2018, 12:25 PM • Last activity: May 24, 2025, 12:07 AM
0 votes
1 answers
1618 views
Why can't I see witness in the failover cluster manager snap-in?
SELECT * FROM sys.dm_hadr_cluster_members This query shows me 2 nodes and 1 file share witness. However, when I open the wsfc cluster manager snap-in, then click on Role, select the role, I cannot see the witness. Am I looking in wrong place for witness?
SELECT * FROM sys.dm_hadr_cluster_members This query shows me 2 nodes and 1 file share witness. However, when I open the WSFC cluster manager snap-in, click on Roles, and select the role, I cannot see the witness. Am I looking in the wrong place for the witness?
variable (3590 rep)
May 6, 2022, 02:15 PM • Last activity: May 18, 2025, 05:05 PM
1 votes
2 answers
277 views
Automatic MYSQL failover
We have a wordpress environment however this is not a wordpress question. Our scenario is that front Ends connect to a MYSQL backend (MYSQL VM1) which is replicated to a backup vm (MYSQL VM2). The replication is Master - Master. What we're looking for is a MYSQL HA cluster with automatic failover wh...
We have a WordPress environment; however, this is not a WordPress question. Our scenario is that front ends connect to a MySQL backend (MySQL VM1), which is replicated to a backup VM (MySQL VM2). The replication is master-master. What we're looking for is a MySQL HA cluster with automatic failover, where if VM1 dies or is unresponsive, connections fail over to VM2. Anything you'd recommend?
joebegborg07 (179 rep)
Mar 8, 2017, 10:11 AM • Last activity: May 15, 2025, 05:26 AM