Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1
votes
1
answers
6417
views
Dealing with Device-Mapper (Multipath) Failing paths
When a disk starts to die slowly, multipath keeps failing and reinstating its paths, and this goes on forever. (I'm using an LSI-3008 HBA card with a SAS JBOD, not an FC network.)
dmesg:
Sep 13 11:20:17 DEV2 kernel: sd 0:0:190:0: attempting task abort! scmd(ffff88110e632948)
Sep 13 11:20:17 DEV2 kernel: sd 0:0:190:0: [sdft] tag#3 CDB: opcode=0x0 00 00 00 00 00 00
Sep 13 11:20:17 DEV2 kernel: scsi target0:0:190: handle(0x0037), sas_address(0x5000c50093d4e7c6), phy(38)
Sep 13 11:20:17 DEV2 kernel: scsi target0:0:190: enclosure_logical_id(0x500304800929ec7f), slot(37)
Sep 13 11:20:17 DEV2 kernel: scsi target0:0:190: enclosure level(0x0001),connector name(1 )
Sep 13 11:20:17 DEV2 kernel: sd 0:0:190:0: task abort: SUCCESS scmd(ffff88110e632948)
Sep 13 11:20:18 DEV2 kernel: device-mapper: multipath: Failing path 130:240.
Sep 13 11:25:34 DEV2 kernel: device-mapper: multipath: Reinstating path 130:240.
As you can see, the kernel aborted the task and multipath then failed the path.
So I want to get rid of this problem by telling multipath "do not reinstate the path".
That way the zombie disk would stay dead.
How can I do that?
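For context, a minimal sketch of one way this is sometimes handled, assuming the dying path is sdft as in the dmesg output above: fail the path and then delete the underlying SCSI device, so the path checker has nothing left to reinstate.
```
# Sketch: keep a dying path down for good (sdft taken from the dmesg output above)
multipathd -k'fail path sdft'                 # mark the path failed in its multipath map
echo offline > /sys/block/sdft/device/state   # take the SCSI device offline
echo 1 > /sys/block/sdft/device/delete        # remove it from the SCSI layer entirely
```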
Ozbit
(439 rep)
Sep 13, 2018, 09:14 AM
• Last activity: May 30, 2025, 10:05 PM
1
votes
0
answers
68
views
IO wait/failure timeout on iscsi device with multipath enablement
- I'm accessing a remote iSCSI-based SAN using multipath.
- The network on the server side has known intermittent issues, so there are session failures and path/IO failures. I'm not trying to solve that problem here, as it's already a work in progress.
- Now, the issue I have: say I'm trying to format or partition the device via a process/service; the parted/mkfs command hangs and eventually causes a kernel panic (the hung-task timeout is set to 240 seconds).
- What I want to avoid is the kernel panic: I want the parted/mkfs command to fail and return rather than panic the kernel.
- I have searched and tried changing various parameters (iscsid, sysfs, multipath) to no avail.
This is my iscsid config
iscsid.startup = /bin/systemctl start iscsid.socket iscsiuio.socket
node.startup = automatic
node.leading_login = No
node.session.timeo.replacement_timeout = 30
node.conn.timeo.login_timeout = 30
node.conn.timeo.logout_timeout = 15
node.conn.timeo.noop_out_interval = 5
node.conn.timeo.noop_out_timeout = 5
node.session.err_timeo.abort_timeout = 15
node.session.err_timeo.lu_reset_timeout = 30
node.session.err_timeo.tgt_reset_timeout = 30
node.session.initial_login_retry_max = 8
node.session.cmds_max = 128
node.session.queue_depth = 2
node.session.xmit_thread_priority = -20
node.session.iscsi.InitialR2T = No
node.session.iscsi.ImmediateData = Yes
node.session.iscsi.FirstBurstLength = 262144
node.session.iscsi.MaxBurstLength = 262144
node.conn.iscsi.MaxRecvDataSegmentLength = 262144
node.conn.iscsi.MaxXmitDataSegmentLength = 262144
discovery.sendtargets.iscsi.MaxRecvDataSegmentLength = 32768
node.conn.iscsi.HeaderDigest = CRC32C
node.conn.iscsi.DataDigest = CRC32C
node.session.nr_sessions = 1
node.session.reopen_max = 0
node.session.iscsi.FastAbort = Yes
node.session.scan = auto
multipath.conf:
defaults {
path_checker none
user_friendly_names yes # To create ‘mpathn’ names for multipath devices
path_grouping_policy multibus # To place all the paths in one priority group
path_selector "round-robin 0" # To use round robin algorithm to determine path for next I/O operation
failback immediate # For immediate failback to highest priority path group with active paths
no_path_retry 1 # To disable I/O queueing after retrying once when all paths are down
}
And I've set the sysfs timeout values of all slave paths to 30 seconds.
But parted/mkfs still never fails and returns when there's a (simulated) network issue. What am I missing?
My multipath version is a tad old, but I can't upgrade as this is the supported version on Rocky 8.
multipath-tools v0.8.4 (05/04, 2020)
iscsid version 6.2.1.4-1
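As a point of reference, a small sketch of how one might check whether the live maps are still queueing despite `no_path_retry` (a `queue_if_no_path` feature set on the map itself overrides it), and how to switch a map to failing I/O instead; the map name mpatha is only an assumption here.
```
# Does any map still carry the queue_if_no_path feature?
multipath -ll | grep features

# Sketch: make an existing map (mpatha assumed) fail I/O when no path is usable
dmsetup message mpatha 0 "fail_if_no_path"

# Or ask multipathd to stop queueing on all maps at runtime
multipathd -k'disablequeueing maps'
```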
Neetz
(111 rep)
Jan 21, 2025, 09:38 PM
0
votes
2
answers
84
views
Delete failed entries in multipath
Due to some changes, I have to force a rescan of Fibre Channel devices on a CentOS 6 server.
This is the output of `multipath -l`:
(...)
36000144000000010f01c857894aede59 dm-50 EMC,Invista
size=5.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 7:0:3:107 sdca 68:224 active undef unknown
|- 4:0:3:107 sdbd 67:112 active undef unknown
|- 7:0:4:107 sdeb 128:48 active undef unknown
`- 4:0:7:107 sddg 70:224 active undef unknown
3600601602bd14600351eb55f237aa77d dm-5 DGC,VRAID
size=3.0T features='0' hwhandler='1 emc' wp=rw
|-+- policy='round-robin 0' prio=0 status=active
| |- 7:0:7:39 sdq 65:0 failed undef unknown
| `- 4:0:6:39 sdei 128:160 failed undef unknown
`-+- policy='round-robin 0' prio=0 status=enabled
|- 7:0:0:39 sdx 65:112 active undef unknown
`- 4:0:0:39 sdk 8:160 active undef unknown
36000144000000010f01c857894aedd26 dm-14 EMC,Invista
size=5.0T features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 7:0:3:94 sdba 67:64 failed undef unknown
|- 4:0:3:94 sdad 65:208 failed undef unknown
|- 7:0:4:94 sddb 70:144 failed undef unknown
`- 4:0:7:94 sdcg 69:64 failed undef unknown
(...)
Running `multipath -f dm-5` or `multipath -w 3600601602bd14600351eb55f237aa77d` does not delete the entry, not even after running `multipath` or restarting the multipathd service.
These entries are not present in `/etc/multipath.conf`.
How to delete the failed entries?
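One sequence that is sometimes used here, as a sketch only (device names taken from the dm-14 entry above): delete the failed SCSI paths first, then flush the now path-less map by its WWID.
```
# Remove the failed paths of dm-14 from the SCSI layer (names from the output above)
for dev in sdba sdad sddb sdcg; do
    echo 1 > /sys/block/$dev/device/delete
done

# Then flush the multipath map by WWID (or by its alias)
multipath -f 36000144000000010f01c857894aedd26
```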
dr_
(32068 rep)
Nov 1, 2024, 11:18 AM
• Last activity: Nov 4, 2024, 02:25 PM
1
votes
3
answers
451
views
lvextend not using free space?
I'm trying to increase the size of my PV/VG/LV after adding a new storage enclosure; I created 2 new volumes and attached them to the host.
While extending the PV and VG worked very well, it seems that the LV does not see any free space.
Maybe someone can point me in the right direction as to why this is not working?
What was done so far:
rescan-scsi-bus.sh -r
pvcreate /dev/mapper/mpathe
Physical volume "/dev/mapper/mpathe" successfully created.
pvcreate /dev/mapper/mpathf
Physical volume "/dev/mapper/mpathf" successfully created.
vgextend vgall /dev/mapper/mpathe
Volume group "vgall" successfully extended
vgextend vgall /dev/mapper/mpathf
Volume group "vgall" successfully extended
but now lvextend shows this:
lvextend -l +100%FREE /dev/vgall/lvol0
Using stripesize of last segment 4.00 KiB
Rounding size (152285551 extents) down to stripe boundary size for segment (152285548 extents)
Size of logical volume vgall/lvol0 changed from 478.39 TiB (125408168 extents) to 478.51 TiB (125439052 extents).
Logical volume vgall/lvol0 successfully resized.
which is basically nothing, and subsequent runs do show:
lvextend -l +100%FREE /dev/vgall/lvol0
Using stripesize of last segment 4.00 KiB
Rounding size (152285551 extents) down to stripe boundary size for segment (152285548 extents)
Size of logical volume vgall/lvol0 unchanged from 478.51 TiB (125439052 extents).
Command failed with status code 5.
Output of PV / LV / VG display below:
--- Physical volume ---
PV Name /dev/mapper/mpatha
VG Name vgall
PV Size <119.60 TiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 31352042
Free PE 0
Allocated PE 31352042
--- Physical volume ---
PV Name /dev/mapper/mpathb
VG Name vgall
PV Size <119.63 TiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 31359763
Free PE 0
Allocated PE 31359763
--- Physical volume ---
PV Name /dev/mapper/mpathc
VG Name vgall
PV Size <119.60 TiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 31352042
Free PE 0
Allocated PE 31352042
--- Physical volume ---
PV Name /dev/mapper/mpathd
VG Name vgall
PV Size <119.63 TiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 31359763
Free PE 0
Allocated PE 31359763
--- Physical volume ---
PV Name /dev/mapper/mpathe
VG Name vgall
PV Size <51.27 TiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 13438976
Free PE 13431255
Allocated PE 7721
--- Physical volume ---
PV Name /dev/mapper/mpathf
VG Name vgall
PV Size 51.20 TiB / not usable 4.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 13422965
Free PE 13415244
Allocated PE 7721
--- Volume group ---
VG Name vgall
System ID
Format lvm2
Metadata Areas 6
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 6
Act PV 6
VG Size 580.92 TiB
PE Size 4.00 MiB
Total PE 152285551
Alloc PE / Size 125439052 / 478.51 TiB
Free PE / Size 26846499 / 102.41 TiB
--- Logical volume ---
LV Path /dev/vgall/lvol0
LV Name lvol0
VG Name vgall
LV Write Access read/write
LV Status available
# open 1
LV Size 478.51 TiB
Current LE 125439052
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:6
lvdisplay -m /dev/vgall/lvol0
--- Logical volume ---
LV Path /dev/vgall/lvol0
LV Name lvol0
VG Name vgall
LV UUID MKcUYB-dAoJ-qJq1-hQEy-ArGl-c0bi-WKY0bU
LV Write Access read/write
LV Creation host, time localhost.localdomain, 2023-10-04 20:00:04 +0200
LV Status available
# open 1
LV Size 478.51 TiB
Current LE 125439052
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:6
--- Segments ---
Logical extents 0 to 125408167:
Type striped
Stripes 4
Stripe size 4.00 KiB
Stripe 0:
Physical volume /dev/mapper/mpathb
Physical extents 0 to 31352041
Stripe 1:
Physical volume /dev/mapper/mpathd
Physical extents 0 to 31352041
Stripe 2:
Physical volume /dev/mapper/mpatha
Physical extents 0 to 31352041
Stripe 3:
Physical volume /dev/mapper/mpathc
Physical extents 0 to 31352041
Logical extents 125408168 to 125439051:
Type striped
Stripes 4
Stripe size 4.00 KiB
Stripe 0:
Physical volume /dev/mapper/mpathe
Physical extents 0 to 7720
Stripe 1:
Physical volume /dev/mapper/mpathf
Physical extents 0 to 7720
Stripe 2:
Physical volume /dev/mapper/mpathb
Physical extents 31352042 to 31359762
Stripe 3:
Physical volume /dev/mapper/mpathd
Physical extents 31352042 to 31359762
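For what it's worth, the second segment above is striped over mpathe, mpathf, mpathb and mpathd, so `+100%FREE` could only grow it while all four still had free extents, and mpathb/mpathd had just 7721 each. A hedged sketch of one way to use the remaining space is to add a new segment striped over only the two new PVs (the stripe count here is an assumption and changes the striping of the LV's tail):
```
# Sketch: add a 2-way striped segment that lives only on the two new PVs
lvextend -i 2 -I 4k -l +100%FREE /dev/vgall/lvol0 /dev/mapper/mpathe /dev/mapper/mpathf
# then grow the filesystem on top, e.g. xfs_growfs <mountpoint> or resize2fs /dev/vgall/lvol0
```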
masteer
(11 rep)
Aug 23, 2024, 10:30 AM
• Last activity: Aug 26, 2024, 06:04 AM
1
votes
1
answers
717
views
multipathd del map fails — how to remove the map?
I have a Dell server connected to a Dell Unisphere storage, which provides a number of multipath devices.
It runs Oracle Linux 7, with UEK kernel `5.4.17-2136.304.4.1.el7uek.x86_64`.
This map isn't needed anymore, so I want to delete it:
mpathr (360060160982053009641546615b605b4) dm-17 DGC ,VRAID
size=6.0T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 15:0:0:11 sdfv 131:16 active ready running
| |- 15:0:3:11 sdfy 131:64 active ready running
| |- 16:0:0:11 sdfz 131:80 active ready running
| |- 16:0:2:11 sdgb 131:112 active ready running
| |- 17:0:0:11 sdgd 131:144 active ready running
| |- 17:0:1:11 sdge 131:160 active ready running
| |- 18:0:2:11 sdgj 131:240 active ready running
| `- 18:0:3:11 sdgk 132:0 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 15:0:1:11 sdfw 131:32 active ready running
|- 15:0:2:11 sdfx 131:48 active ready running
|- 16:0:1:11 sdga 131:96 active ready running
|- 16:0:3:11 sdgc 131:128 active ready running
|- 17:0:2:11 sdgf 131:176 active ready running
|- 17:0:3:11 sdgg 131:192 active ready running
|- 18:0:0:11 sdgh 131:208 active ready running
`- 18:0:1:11 sdgi 131:224 active ready running
The hard requirement is that it should be done **online** (no reboot), it should have **no impact** on the services running on the server and there should be **no dangling remains** in the system. It should be made clean as if this device never existed.
The device was used with a file system which was served over NFS. I removed the `/etc/exports` entry, reloaded NFS and unmounted the file system. `lsblk` doesn't show it mounted anymore (mountpoint is empty):
root@bccdb:dev# lsblk /dev/mapper/mpathr
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
mpathr 252:17 0 6T 0 mpath
The next steps I was going to take were: delete the multipath map, flush buffers on the component devices, remove (unregister) the SCSI devices and remove the presentation on the storage. However, deleting the map fails:
root@bccdb:dev# multipathd del map 360060160982053009641546615b605b4
fail
I was even able to unload the filesystem driver (`rmmod ext4` and `rmmod jbd2`; the other filesystems on this server are XFS), so I am sure it is not the filesystem which is still holding it.
There are no holders:
root@bccdb:dev# ls -la /sys/block/dm-17/holders/
total 0
drwxr-xr-x  2 root root 0 May 27 12:20 .
drwxr-xr-x 10 root root 0 May 27 12:19 ..
dmsetup info shows zero open:
root@bccdb:dev# dmsetup info /dev/mapper/mpathr
Name: mpathr
State: ACTIVE
Read Ahead: 256
Tables present: LIVE
Open count: 0
Event number: 0
Major, minor: 252, 17
Number of targets: 1
UUID: mpath-360060160982053009641546615b605b4
I have already done this previously on this very same server and on two of its siblings, with the same filesystem on similar multipath disks from the same storage, and it went through without such a problem.
Why is it failing? How do I properly release it this time?
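A few things one might try when `multipathd del map` refuses even with an open count of zero, sketched here without any claim that they are the accepted fix; note the map still carries the `queue_if_no_path` feature, which can keep it busy if any I/O is queued.
```
# Flush through the multipath tool instead (accepts the alias or the WWID)
multipath -f mpathr

# If queued I/O is holding the map, disable queueing for it first
multipathd -k'disablequeueing map mpathr'

# Last resort: remove the dm device directly, then let multipathd re-read its state
dmsetup remove mpathr
multipathd -k'reconfigure'
```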
Nikita Kipriyanov
(1779 rep)
Jul 2, 2024, 08:32 AM
• Last activity: Jul 2, 2024, 11:53 AM
0
votes
0
answers
742
views
Ubuntu 22 disable multipath during installation
I have a server platform with a set of NVMe drives. I am trying to install Ubuntu 22 on one of the drives and mount the other as a data drive later. But on the Ubuntu installation GUI, it shows up as a single multipath device. I went to the shell and tried `multipath -F`, but then the device went away altogether. I also tried blacklisting the device and then `systemctl reload multipathd.service`, but no luck. How do I go about it? Although I have other storage drives on the machine, I want to avoid using them. Thanks.
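For reference, a rough sketch of the usual workaround in the installer shell (the blacklist pattern is an assumption; match it to your actual devnodes or WWIDs): blacklist the NVMe devices in multipath.conf, restart multipathd, and flush the existing maps before going back to the installer.
```
# Sketch, run from the installer shell
cat >> /etc/multipath.conf <<'EOF'
blacklist {
    devnode "^nvme.*"
}
EOF
systemctl restart multipathd.service   # pick up the new blacklist
multipath -F                           # flush the existing multipath maps
```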
tallharish
(101 rep)
Dec 15, 2023, 06:03 PM
• Last activity: Dec 15, 2023, 06:29 PM
1
votes
1
answers
128
views
Unable to partition and install filesystem on SAN-connected multipath volume
Title pretty much says it all. Using Ubuntu 22.04. I have a simple install going -> physical HPE DL360 G9 with two 2-port HBAs connected to an iSCSI switch, which in turn connects to our Nimble storage array. Below is what I've done:
- Installed Ubuntu 22.04 and updated it
- Changed the iSCSI initiator name in the initiator config file
- Changed my NIC names to be understandable (en01 = mgmt0; ens1fs0 = iscsi-1; ens2fs1 = iscsi-2) and applied the changes
- I installed a Nimble Linux toolkit which modifies iscsid.conf and adds multipath.conf "devices" section for Nimble devices
- I then created two iface's with my 2 iscsi NICs and restarted both iscsid and multipathd services
- I configured a Volume on my array and assigned access to it via the iqn of my server
- I then performed an iSCSI discovery via iscsiadm, then did a login. At this point... all is looking good
- I retrieved the wwid of the Volume connected so I can add a friendly name for it in the multipath.conf file. Still..all good here
- At this point, I reboot the server to refresh everything and make sure my iSCSI target connection automatically reconnects, and it does. Still good
- Here now is my issue. I run fdisk /dev/nimblestorage/ (this is, I believe, a symbolic link to the connected volume) and go through the process to partition the disk. Simple process. But at the end of it, after I "write" the changes to disk, I receive the following error or warning:
Calling ioctl() to re-read partition table. Re-reading the partition table failed: Invalid Argument.
The kernel still uses the old table. The new table will be used at the next reboot or after you run partprobe(8) or partx(8)
So, I do what the message says to do and reboot. I then attempt to put a filesystem on my partition/volume using the below command:
sudo mkfs.xfs -b siz=4k -m reflink=1,crc=1 -L test01 /dev/nimblestorage/
and receive the following error:
mkfs.xfs: cannot open /dev/mapper/: Device or resource busy
If I try this via /dev/mapper/, I think it works, but the storage doesn't show when running df -hT
I can't do anything. I've been Internet searching all over the place and nothing I've attempted has worked. What am I missing? Is the issue multipathing? Did I not configure something? Really appreciate any help you all can provide. Even if I don't install the Nimble toolkit and just go by /dev/mapper/... , I get the same issues.
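For anyone landing here, a hedged sketch of the sequence that generally works for a multipath LUN (the alias mpatha stands in for whatever friendly name was configured in multipath.conf): partition the mapper device, refresh the partition mappings, and then run mkfs against the partition mapping rather than the whole device.
```
parted --script /dev/mapper/mpatha mklabel gpt mkpart primary 0% 100%
partprobe /dev/mapper/mpatha      # or: kpartx -a /dev/mapper/mpatha
ls /dev/mapper/                   # expect something like mpatha-part1 (or mpatha1)
mkfs.xfs -b size=4k -m reflink=1,crc=1 -L test01 /dev/mapper/mpatha-part1
mount /dev/mapper/mpatha-part1 /mnt
```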
coolsport00
(11 rep)
Dec 5, 2023, 05:08 PM
• Last activity: Dec 6, 2023, 07:49 PM
0
votes
2
answers
2325
views
Can't mount a multipath LUN
I'm trying to mount a LUN; it's visible in `multipath -ll` and I can see the multiple paths to it with `lsblk`. I know I screwed up somewhere along the line, because sda isn't visibly mounted in lsblk anymore and I can't find where.
Thank you in advance.
root@debian:~# multipath -ll
mpathb (3600508b1001037383941424344450500) dm-0 HP,LOGICAL VOLUME
size=68G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
`- 3:0:0:0 sda 8:0 active ready running
mpatha (3600601601ad126004652c478fd40e511) dm-1 DGC,VRAID
size=500G features='1 queue_if_no_path' hwhandler='1 emc' wp=rw
|-+- policy='service-time 0' prio=4 status=active
| |- 2:0:0:0 sdb 8:16 active ready running
| `- 4:0:1:0 sde 8:64 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
|- 2:0:1:0 sdc 8:32 active ready running
`- 4:0:0:0 sdd 8:48 active ready running
root@debian:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 68.3G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 127M 0 part
└─sda3 8:3 0 68.2G 0 part
sdb 8:16 0 500G 0 disk
├─sdb1 8:17 0 244G 0 part
└─sdb2 8:18 0 256G 0 part
sdc 8:32 0 500G 0 disk
├─sdc1 8:33 0 244G 0 part
└─sdc2 8:34 0 256G 0 part
sdd 8:48 0 500G 0 disk
├─sdd1 8:49 0 244G 0 part
└─sdd2 8:50 0 256G 0 part
sde 8:64 0 500G 0 disk
├─sde1 8:65 0 244G 0 part
└─sde2 8:66 0 256G 0 part
sr0 11:0 1 1024M 0 rom
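A short sketch of the usual way to mount such a LUN (assuming it's the 500 G mpatha device that should be mounted, and that kpartx names the mappings mpatha-part1 or mpatha1 depending on the version): go through the partition mappings of the multipath device rather than the individual sd* paths.
```
kpartx -a /dev/mapper/mpatha         # create partition mappings for the multipath device
ls /dev/mapper/                      # look for mpatha-part1 / mpatha1, mpatha-part2 / mpatha2
mount /dev/mapper/mpatha-part1 /mnt  # mount the first partition of the LUN
```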
Struggler
(1 rep)
May 17, 2016, 01:07 PM
• Last activity: Dec 6, 2023, 04:02 PM
1
votes
1
answers
253
views
create physical volume from multipath raid system
I currently want to mount an existing RAID system. The rack is connected via multipath. With the command `multipath -ll` all racks are correctly shown, and the mapper devices are present under `/dev`.
My first attempt was to create a physical volume from the mapper
pvcreate /dev/mapper/raid
However I got an unexpected return:
Can't initialize physical volume "/dev/mapper/raid" of volume group "vgRAID" without -ff
/dev/mapper/raid: physical volume not initialized.
Am I missing something? It seems that the volume group already exists. But as far as I remember, I had to first create physical volumes, then the volume group and finally the logical volumes. I don't want to use `-ff`, as I suspect it would overwrite my filesystem.
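Since pvcreate itself reports an existing volume group (vgRAID), a gentler sketch than reaching for `-ff` is to simply scan for and activate what is already there; none of these commands rewrite metadata.
```
pvscan                 # rediscover physical volumes, including the multipath devices
vgscan                 # rediscover volume groups (should list vgRAID)
vgchange -ay vgRAID    # activate all logical volumes in vgRAID
lvs vgRAID             # list them, then mount the one you need
```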
Yann
(101 rep)
Sep 22, 2023, 06:05 AM
• Last activity: Sep 25, 2023, 11:12 AM
3
votes
1
answers
1023
views
Multipath Duplicate Drives
I am setting up multipaths for Veritas backup on SAN storage. I noticed that `lsblk` shows duplicate disks, which is quite confusing.
For example, both `sdc` and `sdd` represent the same disk, and similarly, `sde` and `sdf` represent the same device.
sdc 8:32 0 50G 0 disk
├─sdc1 8:33 0 50G 0 part
└─san69 253:10 0 50G 0 mpath
└─san69p1 253:11 0 50G 0 part
sdd 8:48 0 50G 0 disk
├─sdd1 8:49 0 50G 0 part
└─san69 253:10 0 50G 0 mpath
└─san69p1 253:11 0 50G 0 part
sde 8:64 0 69G 0 disk
├─sde1 8:65 0 69G 0 part
└─mpathb 253:12 0 69G 0 mpath
└─mpathb1 253:13 0 69G 0 part /mnt
sdf 8:80 0 69G 0 disk
├─sdf1 8:81 0 69G 0 part
└─mpathb 253:12 0 69G 0 mpath
└─mpathb1 253:13 0 69G 0 part /mnt
The `multipath -ll` output is as follows:
mpathb (360050763808106804800000000000001) dm-12 IBM,2145
size=69G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 11:0:1:1 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
`- 11:0:3:1 sdf 8:80 active ready running
san69 (360050763808106804800000000000000) dm-10 IBM,2145
size=50G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=enabled
| `- 11:0:3:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
`- 11:0:1:0 sdd 8:48 active ready running
Zaheer Ahmad Khan
(35 rep)
May 30, 2023, 06:04 AM
• Last activity: May 30, 2023, 08:24 AM
2
votes
1
answers
203
views
With Unified Kernel Images, how are custom initrd scenarios (such as multipath boot) addressed?
I was looking at the Fedora change set for 38 and saw this, which seems like a neat idea, but I was wondering how this affects systems that need custom files to be present in the initrd. One example is booting from a multipath device, where the multipath configuration needs to already be present when it goes to set up the root filesystem. I'm sure there are others though.
Bratchley
(17244 rep)
Mar 10, 2023, 06:30 PM
• Last activity: Mar 10, 2023, 08:17 PM
0
votes
2
answers
4832
views
LVM VG/LV is not activated at system startup
I have two multipath devices configured
mpathb (36005076300808b3e9000000000000007) dm-1 IBM,2145
size=16T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 1:0:1:1 sde 8:64 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 1:0:0:1 sdc 8:32 active ready running
mpatha (36005076300808b3e9000000000000006) dm-0 IBM,2145
size=16T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 1:0:0:0 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 1:0:1:0 sdd 8:48 active ready running
For each of them I created a PV/VG/LV
$ sudo pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/mpatha vg0 lvm2 a-- <16.00t 0
/dev/mapper/mpathb vg1 lvm2 a-- <16.00t 0
After rebooting, my VG/LV is not activated.
$ sudo systemctl status lvm2-pvscan@254:0.service
● lvm2-pvscan@254:0.service - LVM event activation on device 254:0
Loaded: loaded (/lib/systemd/system/lvm2-pvscan@.service; static)
Active: failed (Result: exit-code) since Mon 2022-04-11 21:58:53 MSK; 14min ago
Docs: man:pvscan(8)
Process: 803 ExecStart=/sbin/lvm pvscan --cache --activate ay 254:0 (code=exited, status=5)
Main PID: 803 (code=exited, status=5)
CPU: 10ms
Apr 11 21:58:53 cephnode-1 systemd: Starting LVM event activation on device 254:0...
Apr 11 21:58:53 cephnode-1 lvm: pvscan PV /dev/mapper/mpatha is duplicate for PVID un8VgmPbM5dheccMCCmmMzr4UGcO3Gau on 254:0 and 8:16.
Apr 11 21:58:53 cephnode-1 lvm: pvscan PV /dev/mapper/mpatha failed to create online file.
Apr 11 21:58:53 cephnode-1 systemd: lvm2-pvscan@254:0.service: Main process exited, code=exited, status=5/NOTINSTALLED
Apr 11 21:58:53 cephnode-1 systemd: lvm2-pvscan@254:0.service: Failed with result 'exit-code'.
Apr 11 21:58:53 cephnode-1 systemd: Failed to start LVM event activation on device 254:0.
/etc/lvm/lvm.conf:
filter = [ "a|/dev/mapper/mpath.*|", "r|.*|" ]
What do I have to do to make the VG/LV activation work when I boot up the system?
Thanks in advance.
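The duplicate-PV message suggests the event-based pvscan is still seeing the underlying sd* devices. A common sketch (hedged, not verified on this exact setup) is to mirror the filter into `global_filter`, since some scanning paths honour only the latter, and then rebuild the initramfs so early boot applies the same rules (e.g. `update-initramfs -u` on Debian-like systems or `dracut -f` on RHEL-like ones).
```
# /etc/lvm/lvm.conf, devices section (sketch)
devices {
    filter        = [ "a|/dev/mapper/mpath.*|", "r|.*|" ]
    global_filter = [ "a|/dev/mapper/mpath.*|", "r|.*|" ]
}
```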
async await
(3 rep)
Apr 11, 2022, 07:20 PM
• Last activity: Feb 8, 2023, 12:40 AM
0
votes
1
answers
78
views
Saving Files In Kdenlive to removable media
I’m having trouble with my files in Kdenlive.
I can’t find my files through Kdenlive. They just don’t show up.
I have been able to load them into Kdenlive by dragging them into Kdenlive using the files function in Ubuntu.
But, I can’t save the rendered file to anywhere.
First problem I have is finding the file path, so that I can load it into the “Save to” option in the drop down menu from the “Render Button” in Kdenlive.
The second problem is that Kdenlive can’t deal with removable media.
So in looking for a solution, I found a similar problem being discussed at:
**https://unix.stackexchange.com/questions/591962/kdenlive-not-able-to-open-save-due-to-apparmor**
In that link it was suggested that I download the "portable AppImage" from the Kdenlive website, which I did.
The file downloaded and it’s sitting in my Downloads folder.
But the file will not open, run or anything.
The “Kdenlive Website” has this note:
*The Appimage will work on most GNU/Linux distributions. After having downloaded the file, you have to make it executable (right click and in the permissions set “Allow executing file as program” or similar): you can then launch it by double-clicking on it.
If you’re using Ubuntu or a derivative and prefer to use native packages, you can add our PPA to your repositories.*
When I “Right Click” on the download file, I get the following options in the drop down Menu:
Open Containing folder ( I get a message that contents can’t be displayed)
Go to Download Page
Copy Download Link
Remove from History
Clear Preview Panel
I see no option to execute as a program.
The “Ask Ubuntu” site suggests making the file executable with:
**chmod u+x kdenlive-20.12.0-x86_64.appimage**
But the terminal doesn't know how to find the AppImage file where it’s been downloaded, in my Downloads folder.
So how do I get the terminal to find where the AppImage file is and have it executed?
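A short sketch, assuming the browser saved the file into the usual ~/Downloads directory (the filename is the one quoted above):
```
cd ~/Downloads                               # where browsers normally save downloads
ls kdenlive*.appimage                        # confirm the exact filename
chmod u+x kdenlive-20.12.0-x86_64.appimage   # make it executable
./kdenlive-20.12.0-x86_64.appimage           # launch it from the terminal
```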
homestead
(1 rep)
Nov 15, 2021, 12:15 AM
• Last activity: Aug 4, 2022, 07:34 AM
1
votes
0
answers
568
views
file system choice pitfalls on 100TB volume
I have an 84-disk *PowerVault* storage array, which I have configured as a virtual (as opposed to linear) array, with the RAID method as *ADAPT* (versus a linear array of RAID-5 or RAID-10). One volume spans the entire array, and I have it showing up as `/dev/mpatha`. The storage array is connected via SCSI cable to a host server running RHEL 7.9.
Using the GNOME Disks GUI I format that device with the **XFS** file system and end up with 115 TB of usable space, which I mount as `/data` on the PowerVault host server and then export over NFS and share out via Samba, over 1 Gbps copper and 100 Gbps InfiniBand.
Realizing this sounds similar to a *what file system should I choose* question: given the above details, is there a better file system choice than XFS out of what's available in RHEL 7.9? More importantly, in the future as this thing gets 50% full (50 TB!), are there any foreseeable problems... what should I watch out for or specifically not do? I do have a second identical PowerVault configured that will contain a backup copy of the data on the first PowerVault.
ron
(8647 rep)
Aug 3, 2022, 04:16 PM
0
votes
1
answers
1664
views
mounting block device and multipath
I have a storage unit, connected to the host server via scsi cables. There are two controllers on the storage unit, controller-A and controller-B, each with 1 scsi cable connecting it to the host.
After creating one disk group, under controller-A, making a volume...
my choices for mapping the one volume are
- 'All Initiators'
- *
- s001-0
- s001-1
If I only map to `s001-0`, then my storage unit shows up via `lsblk` as `/dev/sdb`.
If I map to either `*` or `All Initiators`, then `lsblk` reports
/dev/sdb
|_ mpatha
/dev/sdc
|_ mpatha
It is here that I don't understand *multipathing* and how it's supposed to work in RHEL 7.9: how do I properly mount my storage array?
I've already been successful in having just the one mapping...
- to s000-0
- then just one block device /dev/sdb shows up,
- I can parted mklabel gpt
- parted mkpart -a optimal primary 0% 100% to get just /dev/sdb1
- Then do mkfs.xfs /dev/sdb1 followed by mount /dev/sdb1 /data.
*How do I properly make use of multipathing, under RHEL 7.9, when I map my one storage volume to all initiators* to correctly mount my storage unit?
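A hedged sketch of the multipath flow on RHEL 7.9 (the mpatha name is the one already shown by lsblk above): do the partitioning, mkfs and mount against the /dev/mapper device instead of the individual /dev/sdb and /dev/sdc paths.
```
mpathconf --enable --with_multipathd y   # only if multipathd isn't already configured
multipath -ll                            # confirm sdb and sdc are grouped under mpatha
parted --script /dev/mapper/mpatha mklabel gpt mkpart primary 0% 100%
kpartx -a /dev/mapper/mpatha             # expose the partition, typically /dev/mapper/mpatha1
mkfs.xfs /dev/mapper/mpatha1
mount /dev/mapper/mpatha1 /data
```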
ron
(8647 rep)
Jun 9, 2022, 05:14 PM
• Last activity: Jun 9, 2022, 11:08 PM
0
votes
0
answers
249
views
What is the impact of multipath changes?
I have work to do installing new paths for multipath, and I was asked if there could be an outage as a result of my work. I struggled to answer the question. Simply adding new aliases to multipath.conf and restarting the service should cause no impact or outage.
However, if I were to remove existing aliases from the multipath.conf file and restart the service, I am unsure how the application would react to this change.
If the application is a database and is presented with the alias of the disk from multipath, does it use the alias all the time, or will it translate that into the UUID? How would a database like Oracle react to removing aliases or resetting the multipath.conf file to defaults?
jacksonecac
(337 rep)
Jan 26, 2021, 03:56 PM
1
votes
1
answers
989
views
Multipath resize without LVM
I have a server with a multipath device (example below) and without LVM (mpathb -> 5 TB).
mpathb (360002ac00000000000003af40000af6b) dm-3 3PARdata,VV
size=5.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 1:0:1:0 sdc 8:32 active undef running
|- 1:0:0:0 sdb 8:16 active undef running
|- 2:0:0:0 sde 8:64 active undef running
`- 2:0:1:0 sdd 8:48 active undef running
And the disk is mounted:
/dev/mapper/mpathbp1 on /data (5.0 TB).
I will need to increase this disk. How do I do that?
Increase the LUN on the storage (from 5 TB to 7 TB, for example).
Execute echo 1 > /sys/block/<path_device>/device/rescan (for all paths).
Execute multipathd -k'resize map mpathb'
Execute resize2fs /dev/mpathbp1
Is this procedure correct?
---
I have just one partition.
Where would partprobe come in? partprobe /dev/mapper/mpathb?
Could you explain in detail?
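Since there is a partition (mpathbp1) on top of the map, a hedged sketch of the whole sequence would look like the following; the partition-resize step in the middle is the part the procedure above is missing, and ext4 is assumed only because resize2fs was mentioned.
```
# 1. grow the LUN on the storage array (5 TB -> 7 TB)
# 2. rescan every path of the map
for dev in sdb sdc sdd sde; do echo 1 > /sys/block/$dev/device/rescan; done
# 3. resize the multipath map
multipathd -k'resize map mpathb'
# 4. grow partition 1 to the new end of the device and refresh the partition mapping
parted --script /dev/mapper/mpathb resizepart 1 100%
kpartx -u /dev/mapper/mpathb
# 5. finally grow the filesystem
resize2fs /dev/mapper/mpathbp1
```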
Rodrigo Oliveira
(11 rep)
May 11, 2020, 02:27 PM
• Last activity: May 12, 2020, 07:59 AM
0
votes
1
answers
118
views
Can I configure Multipathing on FC Ports?
I have backup software installed on an RHEL 7 server. The connectivity between the Data Domain and the backup server is shown in the architecture diagram below:
[Architecture diagram]
Zoning is configured on switch 1 and switch 2 as below:
Alias For Data Domain HBA card 1:
DD_HBA1_P1
DD_HBA1_P2
Alias For Data Domain HBA card 2:
DD_HBA2_P1
DD_HBA2_P2
Alias for Server 001 HBA Card 1:
Server001_HBA1_P1
Server001_HBA1_P2
Alias for Server 001 HBA Card 2:
Server001_HBA2_P1
Server001_HBA2_P2
As Server001 is the initiator and it has 4 ports, I created 4 zones, one per initiator port.
The Data Domain storage is configured as open storage in the backup software. We can't see the disk mounted on the OS; the Data Domain is visible to the OS as a character device.
I have a few queries:
1. Is multipathing required on Server001? If not, why not?
2. Can I configure multipathing on FC ports? If yes, please guide me by sharing the steps or a KB article.
Puneet Dixit
(169 rep)
Feb 13, 2020, 06:59 PM
• Last activity: Apr 27, 2020, 04:21 PM
1
votes
1
answers
1181
views
CentOS 8 - Clustered File System
In my environment, I have a need for a shared disk between two application servers such that changes on Server A are immediately available on Server B. Historically, I have solved this issue by sharing a GFS2 volume by using multipathed disks stored on our enterprise filer and attached using our virtualization solution.
This configuration requires fencing of the GFS2 nodes in the cluster and so I have used pacemaker to handle the fencing for GFS2 in the event that a node dies or becomes unhealthy to prevent full corruption of the file system by configuring [stonith](https://en.wikipedia.org/wiki/STONITH) to use [SBD fencing](http://linux-ha.org/wiki/SBD_Fencing) previously.
While gfs2-utils and fence-agents-sbd are available for CentOS, the pcs command is not available as of CentOS 8.0, and it appears that it may [never be available](https://bugs.centos.org/view.php?id=16469) in the main repos. This is problematic, as pcs was integral to configuring this in CentOS 7.
This leaves me wondering:
- What can I use as an alternative for fencing the volume without having to compile the application from source (ongoing security updates & bug fixes are a requirement)?
- If nothing, what can I use to provide a distributed & redundant storage solution in CentOS 8? An NFS server would be out of the question, as a failure of the file server would take both application servers offline.
James Shewey
(1186 rep)
Nov 18, 2019, 10:18 PM
• Last activity: Mar 6, 2020, 08:06 PM
0
votes
0
answers
208
views
How to get size of the volume in one line?
I have created a Kubernetes PVC which creates a volume and mounts it into a Linux host. I need to fetch the size of the volume.
Currently, below is the way I am getting the volume name and size.
To get the volume name:
mount | grep pvc-a2811efd-3ea5-4f8d-8e0c-f4b390a3a0c2
It displays the mount point as below.
> /dev/mapper/mpathed on
> /var/lib/kubelet/pods/037656ef-03c1-4b09-be6d-12dd57032218/volumes/kubernetes.io~csi/pvc-a2811efd-3ea5-4f8d-8e0c-f4b390a3a0c2/mount
> type xfs (rw,relatime,attr2,inode64,noquota)
Then I am getting the disk size as below.
multipath -ll /dev/mapper/mpathee | grep size
It displays the size as below
> size=15G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
It seems I need to depend on grep. I have to automate this task. Is there any other way I can get the volume size easily in one line? I need to store the volume size (15) in a variable and compare it with the size I specified, so if I can get it in a one-liner that would be great.
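A couple of one-liner sketches that avoid grepping `multipath -ll` (the device path is the one from the mount output above):
```
lsblk -ndo SIZE /dev/mapper/mpathed               # human-readable size, e.g. "15G"
size=$(blockdev --getsize64 /dev/mapper/mpathed)  # exact size in bytes, easy to compare
echo "$size"
```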
Samselvaprabu
(147 rep)
Feb 24, 2020, 09:59 AM
• Last activity: Feb 24, 2020, 10:44 AM
Showing page 1 of 20 total questions