Debian 10 Pacemaker-Cluster: GFS2 Mount fails because of "Global lock failed: check that global lockspace is started."
1 vote, 1 answer, 1468 views
I'm trying to set up a new Debian 10 cluster with three instances. My stack is based on Pacemaker, Corosync, DLM, and lvmlockd with a GFS2 volume. All servers have access to the GFS2 volume, but I can't mount it, either through Pacemaker or manually. I configured Corosync and all three instances are online, then continued with the DLM and LVM configuration. Here are my configuration steps for LVM and Pacemaker:
LVM:
sudo nano /etc/lvm/lvm.conf --> Set locking_type = 1 and use_lvmlockd = 1
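As a sanity check that the edit actually took effect, I believe the effective value can be queried with lvm2's `lvmconfig` tool (sketch based on my reading of the man page, not verified on every lvm2 version):

```shell
# Query the effective lvm.conf setting; should report use_lvmlockd=1 after the edit
sudo lvmconfig global/use_lvmlockd
```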
Pacemaker Resources:
sudo pcs -f stonith_cfg stonith create meatware meatware hostlist="firmwaredroid-swarm-1 firmwaredroid-swarm-2 firmwaredroid-swarm-3" op monitor interval=60s
sudo pcs resource create dlm ocf:pacemaker:controld \
op start timeout=90s interval=0 \
op stop timeout=100s interval=0
sudo pcs resource create lvmlockd ocf:heartbeat:lvmlockd \
op start timeout=90s interval=0 \
op stop timeout=100s interval=0
sudo pcs resource group add base-group dlm lvmlockd
sudo pcs resource clone base-group \
meta interleave=true ordered=true target-role=Started
pcs status
shows that all resources are up and online. After the Pacemaker configuration I tried to set up a shared volume group so I could add the filesystem resource to Pacemaker, but all the commands fail with "Global lock failed: check that global lockspace is started":
sudo pvcreate /dev/vdb
--> Global lock failed: check that global lockspace is started
sudo vgcreate vgGFS2 /dev/vdb --shared
--> Global lock failed: check that global lockspace is started
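For reference, this is the diagnostic sequence I've been running to check whether the global lockspace actually exists (commands from the dlm and lvm2 packages; treat the expected lockspace names as my assumption from the documentation, not verified output):

```shell
# List active DLM lockspaces; lvmlockd's global lockspace should appear as "lvm_global"
sudo dlm_tool ls

# Dump lvmlockd's view of its lockspaces
sudo lvmlockctl --info

# Try to start lockspaces for all shared VGs (a no-op if none exist yet)
sudo vgchange --lock-start
```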
I then tried to format /dev/vdb directly with mkfs.gfs2, which works, but that seems like a step in the wrong direction, because mounting the volume then always fails:
sudo mkfs.gfs2 -p lock_dlm -t firmwaredroidcluster:gfsvolfs -j 3 /dev/gfs2share/lvGfs2Share
sudo mount -v -t "gfs2" /dev/vdb ./swarm_file_mount/
mount: /home/debian/swarm_file_mount: mount(2) system call failed: Transport endpoint is not connected.
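In case it helps with diagnosis: as far as I understand, "Transport endpoint is not connected" comes from the kernel's gfs2/dlm layer, so I've been checking these logs (standard kernel/systemd tooling; I'm assuming the dlm_controld unit is named `dlm` as on Debian's dlm-controld package):

```shell
# Kernel messages from the failed mount attempt (gfs2/dlm errors land here)
sudo dmesg | grep -iE 'gfs2|dlm' | tail -n 20

# State of the dlm_controld daemon that the mount depends on
sudo journalctl -u dlm -b --no-pager | tail -n 20
```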
I tried several other approaches, such as starting lvmlockd with
lvmlockd -g dlm
or debugging DLM with
dlm_controld -d
but I can't find any information on how to do this properly. On the web I found some Red Hat forum threads that discuss similar errors, but the solutions are behind a paywall.
How can I start or initialise the global lockspace with DLM so that I can mount the GFS2 volume correctly on the Debian Pacemaker cluster? Or, in other words, what's wrong with my DLM configuration?
Thanks for any help!
Asked by Me7e0r
(11 rep)
Jun 23, 2021, 10:53 AM
Last activity: Jul 12, 2021, 03:37 PM