
Database Administrators

Q&A for database professionals who wish to improve their database skills

Latest Questions

0 votes
2 answers
727 views
Grid 19c: Purpose of QUORUM FAILGROUP on shared NFS mountpoint when you already have an +OCRDG diskgroup
I'm working somewhere now where every ASM diskgroup has been made with an extra disk under the QUORUM FAILGROUP option, and is built with the following pattern:
CREATE DISKGROUP stuffdg NORMAL REDUNDANCY
  FAILGROUP x DISK 'AFD:X' SIZE 30G
  FAILGROUP y DISK 'AFD:Y' SIZE 30G
  FAILGROUP z ...
  QUORUM FAILGROUP qrm DISK '/nfsmp/stuffdgdisk' SIZE 1G
  ATTRIBUTE ...;
But on all clusters they have over here, there's also a diskgroup whose sole purpose is to host OCR, voting files, ASM instance spfile etc. This diskgroup (let's call it +OCRDG...) is *also* built upon a similar shared drive visible at OS level (along with two regular ASM disks configured with AFD). Verifying this with CRSCTL QUERY CSS VOTEDISK you're shown the following:
[oracle@~]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   2b07d0a959d14fcbbf891f99e3bade87 (AFD:OCRDGD1) [VOTE]
 2. ONLINE   8943a8f2a8c64fc4bfaae430f45bc58d (AFD:OCRDGD2) [VOTE]
 3. ONLINE   1f166e30237a4f54bfabeb8d0ed63786 (/nfsmp/ocrdgd3) [VOTE]
Located 3 voting disk(s).
Listing all files under /nfsmp shows that, since their creation, *only* /nfsmp/ocrdgd3 has been used; all the other files (like stuffdgdisk above) still carry the timestamp of their creation, months ago.

My question: I've never used this configuration before (explicitly adding a quorum failgroup to regular diskgroups, i.e. the ones used for data, archived log destinations or the FRA), and the files under /nfsmp (except the one used for OCR) are obviously not being used. So what is the point of this QUORUM FAILGROUP clause? I've found the documentation of little help in understanding this setup. Can someone enlighten me, please?

Thanks a lot. Regards, Seb
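For what it's worth, one way to see what each of those disks is actually doing is to ask ASM itself. A minimal sketch, assuming you query the ASM instance on 19c (V$ASM_DISK exposes FAILGROUP_TYPE and VOTING_FILE there); nothing below is specific to this environment:

-- Sketch: list every disk with its failgroup type and whether CSS placed a voting file on it.
-- FAILGROUP_TYPE is REGULAR or QUORUM; VOTING_FILE is Y only for copies CSS actually uses.
SELECT dg.name          AS diskgroup,
       d.failgroup,
       d.failgroup_type,
       d.voting_file,
       d.path
FROM   v$asm_disk d
JOIN   v$asm_diskgroup dg
       ON dg.group_number = d.group_number
ORDER  BY dg.name, d.failgroup;

On a layout like the one described, you would expect only the quorum disk in the diskgroup CSS chose for the voting files to show VOTING_FILE = Y, which would line up with the untouched timestamps under /nfsmp.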
Seb (31 rep)
Nov 2, 2023, 09:06 PM • Last activity: May 7, 2025, 09:07 PM
0 votes
1 answer
78 views
Unsafe_aggressive_sstable_expiration - Procedure to enable unsafe_aggressive_sstable_expiration
I am planning to enable this option to solve an issue where some fully expired SSTables are blocking other expired SSTables from being dropped. I have looked for information on which procedure to follow, but cannot seem to find any. For instance: should I apply the change on all nodes in the cluster and then restart the whole cluster, or can this be done one node at a time? I am wondering whether there could be consistency issues if nodes are changed and restarted one at a time, since the configuration would temporarily differ between nodes in the cluster. For the record, we are working with a 4-node cluster using the Time Window Compaction Strategy, with a replication factor of 3 and consistency level LOCAL_QUORUM.
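For reference, this is roughly what enabling the option looks like at the table level. A sketch only: ks.tbl and the window settings are placeholders for your actual table, and the option may also need to be allowed via a node-level JVM system property at startup (check your version's documentation for the exact property name before relying on it):

-- Sketch: enable the TWCS subproperty on a placeholder table; keep your existing window settings.
ALTER TABLE ks.tbl
WITH compaction = {
  'class': 'TimeWindowCompactionStrategy',
  'compaction_window_unit': 'DAYS',
  'compaction_window_size': '1',
  'unsafe_aggressive_sstable_expiration': 'true'
};

Note that the ALTER TABLE is a cluster-wide schema change, while any JVM-level allow flag is set per node, so the per-node part is what a rolling restart would be toggling.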
Jon Corral (1 rep)
Jul 1, 2024, 01:27 PM • Last activity: Sep 13, 2024, 08:08 PM
1 vote
0 answers
115 views
Configuring 3:1 Failover Cluster for SQL Developer Edition on Windows Server with AWS AMI
We are in the process of configuring a 3:1 failover cluster mechanism for SQL Server Developer Edition hosted on Windows Server, deployed using an AWS AMI. Our objective is to ensure that if any of the three database instances (DB01, DB02, DB03) fails, requests are seamlessly routed to the fourth instance (PDB). The fourth database (PDB) contains all the data from DB01, DB02, and DB03.

Currently, we have set up three database instances named DB01, DB02, and DB03 with IP addresses 172.31.22.161, 172.31.22.162 and 172.31.22.163 respectively, in one cluster network with CIDR 172.31.16.0/20. The other cluster network includes the fourth DB instance, named 'PDB', with IP address 172.31.39.150 and CIDR 172.31.32.0/20.

At the moment, in the event of an instance failure, requests continue to be routed via PDB. We are uncertain whether this approach is correct and are not achieving the desired outcome, so we are looking for guidance on the right way to set this up. We also have a question about how the cluster decides which database instance to route requests to when three DB instances are hosted in one cluster network; clarification on this aspect would be appreciated as well.

We have configured roles in the failover cluster and designated the fourth DB (PDB) as 'preferred owner', so that if any of the three DB instances fails, requests are directed to PDB. However, when a DB instance returns to a healthy state, requests should be served by that specific DB instance again and not by PDB.
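If the four instances are tied together with an Always On availability group (rather than only cluster roles), one way to see where requests should be landing at any moment is to ask SQL Server which replica currently holds the primary role. A hedged sketch using the standard HADR DMVs; nothing here is specific to this environment:

-- Sketch: show, per availability group, which replica is currently PRIMARY and its health.
SELECT ag.name                          AS availability_group,
       ar.replica_server_name,
       rs.role_desc,
       rs.operational_state_desc,
       rs.synchronization_health_desc
FROM   sys.availability_groups ag
JOIN   sys.availability_replicas ar
       ON ar.group_id = ag.group_id
JOIN   sys.dm_hadr_availability_replica_states rs
       ON rs.replica_id = ar.replica_id
      AND rs.group_id   = ag.group_id
ORDER  BY ag.name, rs.role_desc;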
Rahul (11 rep)
Apr 19, 2024, 01:27 PM • Last activity: Apr 23, 2024, 01:52 PM
0 votes
1 answer
240 views
LOCAL_ONE and QUORUM produce different results
I'm connecting to a Cassandra DB via an IntelliJ Data Source to run an ad-hoc query. We have this DB in multiple data centers, and my connection string includes just a single node IP. With the same query and the same URL (connection string) I got different results (more returned rows) when I switched from LOCAL_ONE (apparently the default) to QUORUM. Apparently we have a replication problem. Based on the information I've presented so far, is it possible to determine whether the discrepancy is between nodes in the same data center or between data centers?
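One way to narrow it down is to re-run the identical query at a few consistency levels and compare the row counts. A sketch for a cqlsh session, where ks.tbl and the predicate are placeholders for the actual ad-hoc query: if LOCAL_QUORUM already returns the extra rows, replicas inside the local data center disagree with each other; if only QUORUM does, the local data center as a whole is likely missing data that another data center has.

-- cqlsh sketch: same query at three consistency levels, compare the row counts.
-- ks.tbl and pk = 42 are placeholders for the actual query.
CONSISTENCY LOCAL_ONE;
SELECT count(*) FROM ks.tbl WHERE pk = 42;

CONSISTENCY LOCAL_QUORUM;
SELECT count(*) FROM ks.tbl WHERE pk = 42;

CONSISTENCY QUORUM;
SELECT count(*) FROM ks.tbl WHERE pk = 42;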
PM 77-1 (123 rep)
Nov 2, 2023, 05:11 PM • Last activity: Nov 6, 2023, 02:27 PM
0 votes
0 answers
512 views
WSFC - Unable to set node weight in Failover cluster
I have a two-node Windows failover cluster configured with Always On availability groups, which are configured for disaster recovery rather than high availability. In this Windows failover cluster we are not using a quorum witness; in simple terms, the cluster is set to use no witness. We don't want to use a witness because we have a multi-subnet cluster configured with an AG, and we configured quorum for manual failover following the MSDN guidance at the URL below.

https://learn.microsoft.com/en-us/windows-server/failover-clustering/manage-cluster-quorum#quorum-considerations-for-disaster-recovery-configurations

Current settings:
No preferred site. Dynamic Quorum = 1
Node1 is primary, Node Weight (Assigned Vote) = 1 and Dynamic Weight (Current Vote) = 0
Node2 is secondary, Node Weight = 1 and Current Vote = 1

As Node1 is primary, I moved my cluster group resources to Node1 and was able to set its node weight property to 1. But when trying to set the Node2 node weight property to 0, it throws the following error, which I have not been able to resolve.

Error message:
Exception setting "NodeWeight": "Unable to save property changes for 'Node2'. The request is invalid either because node weight cannot be changed while the cluster is in disk-only quorum mode, or because changing the node weight would violate the minimum cluster quorum requirements"

I tried restarting the cluster service, but that didn't fix the error. I also tried adding a file share quorum witness and then removing it, and am still unable to fix the error.

Expected result:

Node     Assigned Vote    Current Vote
Node1    1                1
Node2    0                0

Quorum type should be: Node Majority

Q1. How can I avoid this disk-only quorum situation and fix the error? Any leads on this error are appreciated.
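As a cross-check alongside the cluster tools, SQL Server exposes the cluster's view of quorum through a couple of DMVs, which makes it easy to confirm whether the cluster really reports a disk-only quorum type and how the votes are currently distributed. A sketch using the standard HADR cluster DMVs; no local names are assumed:

-- Sketch: cluster quorum type/state as SQL Server sees it, plus the vote per cluster member.
SELECT cluster_name, quorum_type_desc, quorum_state_desc
FROM   sys.dm_hadr_cluster;

SELECT member_name, member_type_desc, member_state_desc, number_of_quorum_votes
FROM   sys.dm_hadr_cluster_members;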
info.sqldba (327 rep)
Oct 11, 2023, 06:47 AM • Last activity: Oct 11, 2023, 09:35 AM