
Xen: some logical volumes are missing after dist-upgrade from Debian 10 to 12 and after reboot

0 votes
1 answer
121 views
I upgraded my Debian Xen server from 10 to 11 and then, without rebooting, to Debian 12. After that I rebooted the server, and now only some VMs start: vm06 is running fine while vm04 is not (there are more affected VMs, but I redacted them from this question for readability). In the folder /dev/vg0 only some of the volumes are linked now:
# ll /dev/vg0/
total 0
lrwxrwxrwx 1 root root 7 Aug 20 17:04 backup -> ../dm-2
lrwxrwxrwx 1 root root 7 Aug 20 17:04 root -> ../dm-0
lrwxrwxrwx 1 root root 8 Aug 20 17:04 vm06.docker-disk -> ../dm-10
# lvs
  LV                              VG  Attr       LSize    Pool Origin                 Data%  Meta%  Move Log Cpy%Sync Convert
  backup                          vg0 -wi-ao----    2,01t                                                                    
  root                            vg0 -wi-ao----   10,00g                                                                    
  vm04.matrix-disk                vg0 owi-i-s---  130,00g                         
  vm06.docker-disk                vg0 owi-a-s---  610,00g
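A minimal sketch based on the attr column above, assuming the fifth attr character is the LV state: the `i` in `owi-i-s---` suggests a device-mapper device is present but its table is inactive, while vm06's `a` means active. Manually activating the LV might bring the /dev/vg0 symlink back (commands are echoed here as a sketch; drop the `echo` on the real host):

```shell
# Try to activate the inactive LV by hand (echoed, not executed).
LV="vg0/vm04.matrix-disk"
echo lvchange -ay "$LV"
# If activation is blocked by an activation-skip flag (common on
# snapshots), -K / --ignoreactivationskip overrides it:
echo lvchange -ay -K "$LV"
```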
If I use lvdisplay, the missing volume vm04.matrix-disk is still listed:
# lvdisplay|grep Path|grep -v swap
  LV Path                /dev/vg0/root
  LV Path                /dev/vg0/backup
  LV Path                /dev/vg0/vm06.docker-disk
  LV Path                /dev/vg0/vm04.matrix-disk
lvdisplay|awk '/LV Name/{n=$3} /Block device/{d=$3; sub(".*:","dm-",d); print d,n;}' shows that the matrix disk should be present at /dev/dm-25.
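For reference, the awk one-liner can be exercised on captured output; the here-doc-style sample below is a hypothetical two-line stand-in for real `lvdisplay` output:

```shell
# Map "LV Name" to its kernel dm node, as in the one-liner above.
# The sample text stands in for real `lvdisplay` output.
sample='  LV Name                vm04.matrix-disk
  Block device           253:25'
printf '%s\n' "$sample" | awk '
  /LV Name/      { n = $3 }
  /Block device/ { d = $3; sub(".*:", "dm-", d); print d, n }'
# -> dm-25 vm04.matrix-disk
```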
# cat /etc/lvm/backup/vg0

...

vg0 {
	id = "Cfe7Ii-rZBl-mEnH-tk5Z-q9WW-UyTk-3WstVn"
	seqno = 29269
	format = "lvm2"			# informational
	status = ["RESIZEABLE", "READ", "WRITE"]
	flags = []
	extent_size = 8192		# 4 Megabytes
	max_lv = 0
	max_pv = 0
	metadata_copies = 0

	physical_volumes {

		pv0 {
			id = "KlmZUe-3FiK-VbBZ-R962-219A-GGAU-I3a5Nl"
			device = "/dev/md1"	# Hint only

			status = ["ALLOCATABLE"]
			flags = []
			dev_size = 15626736256	# 7,27677 Terabytes
			pe_start = 2048
			pe_count = 1907560	# 7,27676 Terabytes
		}
	}

	logical_volumes {

		...

		vm06.docker-disk {
			id = "y3CSuy-z4gU-72Bd-678E-sYTi-Lkmi-dwAkgT"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_time = 1584654364	# 2020-03-19 22:46:04 +0100
			creation_host = "dom0-eclabs"
			segment_count = 3

			segment1 {
				start_extent = 0
				extent_count = 97280	# 380 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 263936
				]
			}
			segment2 {
				start_extent = 97280
				extent_count = 7680	# 30 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 422912
				]
			}
			segment3 {
				start_extent = 104960
				extent_count = 51200	# 200 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 1050889
				]
			}
		}

		...
		
		vm04.matrix-disk {
			id = "Tak3Zq-3dUU-SAJl-Hd5h-weTM-cHMR-qgWXeI"
			status = ["READ", "WRITE", "VISIBLE"]
			flags = []
			creation_time = 1584774051	# 2020-03-21 08:00:51 +0100
			creation_host = "dom0-eclabs"
			segment_count = 2

			segment1 {
				start_extent = 0
				extent_count = 25600	# 100 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 531968
				]
			}
			segment2 {
				start_extent = 25600
				extent_count = 7680	# 30 Gigabytes

				type = "striped"
				stripe_count = 1	# linear

				stripes = [
					"pv0", 1002249
				]
			}
		}

	}
}
dmsetup still lists the disk:
# dmsetup info|grep matrix 
Name:              vg0-vm04.matrix--disk
But it doesn't seem to be linked as a /dev/dm-X node:
# for i in /dev/dm-*; do echo $i; dmsetup info $i; done|grep matrix
This shows nothing. I already ran pvcreate --restorefile <lvm backup file> --uuid in recovery mode as explained [here](https://docs.hetzner.com/de/robot/dedicated-server/troubleshooting/recovery-of-lvm-volumes/) and rebooted, but the problem persists. I noticed that **in the recovery system all LVM partitions were there**, both in lsblk and under /dev/mapper/... (the rescue system uses kernel 6.4.7), but after a reboot into the installed system, again only those few are visible:
├─sdb2                                          8:18   0   7,3T  0 part  
│ └─md1                                         9:1    0   7,3T  0 raid1 
│   ├─vg0-root                                253:0    0    10G  0 lvm   /
│   ├─vg0-swap                                253:1    0     4G  0 lvm   [SWAP]
│   ├─vg0-backup                              253:2    0     2T  0 lvm   /backup   
│   ├─vg0-vm06.docker--swap                   253:8    0     8G  0 lvm   
│   ├─vg0-vm06.docker--disk-real              253:9    0   610G  0 lvm   
│   │ ├─vg0-vm06.docker--disk                 253:10   0   610G  0 lvm   
│   │ └─vg0-snap--tmp--vm06.docker--disk      253:12   0   610G  0 lvm   
│   ├─vg0-snap--tmp--vm06.docker--disk-cow    253:11   0    16G  0 lvm   
│   │ └─vg0-snap--tmp--vm06.docker--disk      253:12   0   610G  0 lvm
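For completeness, the restore attempt mentioned above roughly corresponds to this sequence. The PV UUID and device name are taken from the metadata backup shown earlier; the commands are echoed as a sketch and should only be run for real from the rescue system:

```shell
# Re-create the PV header with its old UUID, restore the VG metadata,
# then activate. UUID and /dev/md1 come from /etc/lvm/backup/vg0.
PV_UUID="KlmZUe-3FiK-VbBZ-R962-219A-GGAU-I3a5Nl"
echo pvcreate --restorefile /etc/lvm/backup/vg0 --uuid "$PV_UUID" /dev/md1
echo vgcfgrestore -f /etc/lvm/backup/vg0 vg0
echo vgchange -ay vg0
```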
## Update: I managed to mount the missing volumes in rescue mode and back up all needed data to the backup partition, so I can create a new volume, reinstall, and restore the backed-up database.
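The rescue-mode backup from the update can be sketched like this; the mount point and rsync target are hypothetical, and the commands are echoed rather than executed:

```shell
# Activate the VG, mount the affected LV read-only, copy the data off.
LV_DEV="/dev/vg0/vm04.matrix-disk"
echo vgchange -ay vg0
echo mkdir -p /mnt/matrix
echo mount -o ro "$LV_DEV" /mnt/matrix
echo rsync -aHAX /mnt/matrix/ /backup/vm04-matrix/
```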
Asked by rubo77 (30435 rep)
Aug 20, 2023, 03:51 PM
Last activity: Aug 21, 2023, 12:24 PM