Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
2 votes, 0 answers, 59 views
Can't mount extended logical partitions in RAID 0 Fake Array created by dmraid and device mapper
First of all, I would like to let you know that I'm here because I'm looking for a way to mount the HOME partition and read the data inside it.
I've been running Funtoo GNU/Linux on a **RAID 0 Fake Array** since I bought this computer, in approximately 2010.
Yesterday I booted into **SystemRescue** and tried to format every partition **except the HOME** partition, and I think something went wrong while formatting, because suddenly I started running into the following issue.
dmraid -ay
RAID set "isw_bggjiidefd_240GB_BX100v2_5" was activated
device "isw_bggjiidefd_240GB_BX100v2_5" is now registered with dmeventd for monitoring
RAID set "isw_cfccfdiidi_640GB_RAID0" was activated
device "isw_cfccfdiidi_640GB_RAID0" is now registered with dmeventd for monitoring
**ERROR: dos: partition address past end of RAID device**
RAID set "isw_bggjiidefd_240GB_BX100v2_5p1" was activated
RAID set "isw_bggjiidefd_240GB_BX100v2_5p2" was activated
RAID set "isw_bggjiidefd_240GB_BX100v2_5p3" was activated
ls /dev/mapper/
control isw_bggjiidefd_240GB_BX100v2_5p1 isw_bggjiidefd_240GB_BX100v2_5p3
isw_bggjiidefd_240GB_BX100v2_5 isw_bggjiidefd_240GB_BX100v2_5p2 isw_cfccfdiidi_640GB_RAID0
In the output and directory listing above, the **logical partitions inside the extended partition are missing**.
Disk /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p1: 300 MiB, 314572800 bytes, 614400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 32768 bytes
Disk /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p2: 99.56 GiB, 106902323200 bytes, 208793600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 32768 bytes
Disklabel type: dos
Disk identifier: 0x73736572
Device Boot Start End Sectors Size Id Type
/dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p2-part1 1920221984 3736432267 1816210284 866G 72 unknown
/dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p2-part2 1936028192 3889681299 1953653108 931.6G 6c unknown
/dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p2-part3 0 0 0 0B 0 Empty
/dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p2-part4 27722122 27722568 447 223.5K 0 Empty
Partition 4 does not start on physical sector boundary.
Partition table entries are not in disk order.
Disk /dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p3: 450 MiB, 471859200 bytes, 921600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 16384 bytes / 32768 bytes
Disklabel type: dos
Disk identifier: 0x6c727443
Device Boot Start End Sectors Size Id Type
/dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p3-part1 1634886000 3403142031 1768256032 843.2G 75 PC/IX
/dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p3-part2 1936028160 3889681267 1953653108 931.6G 61 SpeedStor
/dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p3-part3 0 0 0 0B 0 Empty
/dev/mapper/isw_bggjiidefd_240GB_BX100v2_5p3-part4 26935690 26936121 432 216K 0 Empty
Partition 1 does not start on physical sector boundary.
Partition 4 does not start on physical sector boundary.
Partition table entries are not in disk order.
Now, **using mdadm** instead of the dmraid fake RAID tool:
Assembling the array with *mdadm* I can see every block device, but I can't mount the HOME partition, */dev/md/240GB_BX100v2.5_0p9*.
I can mount the other partitions inside the extended partition, because I formatted them while the array was assembled by mdadm.
mdadm --examine --scan
ARRAY metadata=imsm UUID=4f6eb512:955e67f6:5a22279e:f181f40d
ARRAY /dev/md/640GB_RAID0 container=4f6eb512:955e67f6:5a22279e:f181f40d member=0 UUID=1f9b13e6:b6dc2975:9c367bbb:88fa3d2b
ARRAY metadata=imsm UUID=c842ced3:6e254355:fed743f8:a4e8b8b8
ARRAY /dev/md/240GB_BX100v2.5 container=c842ced3:6e254355:fed743f8:a4e8b8b8 member=0 UUID=a2e2268c:17e0d658:17b6f16d:b090f250
ls /dev/md/
240GB_BX100v2.5_0 240GB_BX100v2.5_0p3 240GB_BX100v2.5_0p6 240GB_BX100v2.5_0p9 640GB_RAID0_0p2
240GB_BX100v2.5_0p1 240GB_BX100v2.5_0p4 240GB_BX100v2.5_0p7 640GB_RAID0_0 imsm0
240GB_BX100v2.5_0p2 240GB_BX100v2.5_0p5 240GB_BX100v2.5_0p8 640GB_RAID0_0p1 imsm1
mount /dev/md/240GB_BX100v2.5_0p9 /mnt/
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/md126p9, missing codepage or helper program, or other error.
dmesg(1) may have more information after failed mount system call.
dmesg
[ 179.400010] EXT4-fs (md126p9): bad geometry: block count 92565760 exceeds size of device (92565504 blocks)
But I can list every file using debugfs:
debugfs -c /dev/md126p9
debugfs 1.47.0 (5-Feb-2023)
debugfs: ls
2 (12) . 2 (12) .. 11 (56) lost+found 1712129 (16) joan
3670017 (12) tmp 4653057 (916) sys
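Not part of the original post, but a hedged diagnostic sketch for this kind of "bad geometry" error: compare the block count recorded in the ext4 superblock with the actual size of the device mdadm exposes, using the device name from the output above.
# filesystem geometry recorded in the superblock
dumpe2fs -h /dev/md126p9 | grep -iE 'block (count|size)'
# actual size of the assembled partition, in 512-byte sectors and in bytes
blockdev --getsz /dev/md126p9
blockdev --getsize64 /dev/md126p9
If the device really is a few hundred blocks short of the filesystem, as the dmesg line suggests, the safer direction is usually to restore the original partition/array geometry rather than to shrink a filesystem on a device that only debugfs can currently read.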
peris
(121 rep)
Mar 23, 2024, 06:45 PM
• Last activity: Mar 23, 2024, 08:08 PM
0 votes, 2 answers, 776 views
How to check FakeRAID is doing fine?
I have my very first Xeon server computer.
It has two disks in RAID1 (FakeRAID).
The BIOS of the RAID controller says the status is Normal.
But I would like to know whether there is a way to check, from the system side, that everything is doing fine. Can I do that? I am on Linux Mint.
# blkid
/dev/sda1: UUID="6042-870C" TYPE="vfat"
/dev/sda2: UUID="2ef42e6f-4987-46e5-aca9-872fd70a9f9e" TYPE="ext2"
/dev/sda3: UUID="Oz0elc-zUuh-BAK1-i19b-RZZU-YREm-DxVaNi" TYPE="LVM2_member"
/dev/sdb1: UUID="6042-870C" TYPE="vfat"
/dev/sdb2: UUID="2ef42e6f-4987-46e5-aca9-872fd70a9f9e" TYPE="ext2"
/dev/sdb3: UUID="Oz0elc-zUuh-BAK1-i19b-RZZU-YREm-DxVaNi" TYPE="LVM2_member"
/dev/mapper/mint--vg-root: UUID="98a7a4a2-6e71-4aa9-ab48-5c4fc619c321" TYPE="ext4"
/dev/mapper/mint--vg-swap_1: UUID="b62721cf-7b54-4400-92f0-f8f776566c99" TYPE="swap"
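Not part of the original question, but a hedged sketch of what an OS-side health check can look like; whether dmraid or mdadm manages the set isn't visible in the blkid output above, so both are shown, and the md device name is an assumption.
# if the md driver assembled the fake RAID (mdadm/IMSM):
cat /proc/mdstat
sudo mdadm --detail /dev/md126      # device name is an assumption
# if device-mapper/dmraid handles the set instead:
sudo dmraid -s                      # reports the RAID set status (ok, broken, ...)
# SMART health of the member disks is worth checking either way:
sudo smartctl -H /dev/sda
sudo smartctl -H /dev/sdb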
Vlastimil Burián
(30505 rep)
Sep 13, 2015, 07:42 PM
• Last activity: Mar 8, 2023, 03:32 AM
2 votes, 2 answers, 2929 views
Installing to a PCIe SATA card
I have an ASUS P5Q Deluxe from an old gaming computer that I'm converting to a server. Unfortunately, while their silly onboard fake RAID thing (Drive Xpert) worked fine in Windows, the drives are not being detected at all when I attempt to install openSUSE to them. I've tried disabling it and setting it to "normal", but still no luck. The other SATA ports are detected without issue, but they're for my storage drives. Eventually I decided the better option might be a PCIe SATA card, but I'm not positive it will solve my problem:
Will I be able to install to drives attached through a PCIe card? If so, is there a specific card anybody could recommend?
Daniel B.
(125 rep)
Jan 8, 2012, 11:16 PM
• Last activity: Apr 6, 2019, 01:19 AM
7 votes, 1 answer, 13407 views
How do I (re)build/create/assemble an IMSM RAID-0 array from disk images instead of disk drives using mdadm?
The question: Using Linux and mdadm, how can I read/copy data *as files* from disk images made from hard disks used in an Intel Rapid Storage Technology RAID-0 array (formatted as NTFS, Windows 7 installed)?
The problem: One of the drives in the array is going bad, so I'd like to copy as much data as possible before replacing the drive (and thus destroying the array).
I am open to alternative solutions to this question if they solve my problem.
Background
==========
I have a laptop with an Intel Rapid Storage Technology controller (referred to in various contexts as RST, RSTe, or IMSM) that has two (2) hard disks configured in RAID-0 (FakeRAID-0). RAID-0 was not my choice as the laptop was delivered to me in this configuration. One of the disks seems to have accumulated a lot of bad sectors, while the other disk is perfectly healthy. Together, the disks are still healthy enough to boot into the OS (Windows 7 64-bit), but the OS will sometimes hang when accessing damaged disk areas, and it seems like a bad idea to continue trying to use damaged disks. I'd like to copy as much data as possible off of the disks and then replace the damaged drive. Since operating live on the damaged disk is considered bad, I decided to image both disks so I could later mount the images using mdadm or something equivalent. I've spent a lot of time and done a lot of reading, but I still haven't successfully managed to mount the disk images as a (Fake)RAID-0 array. I'll try to recall the steps I performed here. Grab some snacks and a beverage, because this is lengthy.
First, I got a USB external drive to run Ubuntu 15.10 64-bit off of a partition. Using a LiveCD or small USB thumb drive was easier to boot, but slower than an external (and a LiveCD isn't a persistent install). I installed ddrescue and used it to produce an image of each hard disk. There were no notable issues with creating the images.
Once I got the images, I installed mdadm using apt. However, this installed an older version of mdadm from 2013. The changelogs for more recent versions indicated better support for IMSM, so I compiled and installed mdadm 3.4 using this guide, including upgrading to a kernel at or above 4.4.2. The only notable issue here was that some tests did not succeed, but the guide seemed to indicate that that was acceptable.
After that, I read in a few places that I would need to use loopback devices to be able to use the images. I mounted the disk images as /dev/loop0 and /dev/loop1 with no issue.
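For reference, a hedged sketch of how such loop devices can be attached; the image file names are placeholders, not from the original post, and the -r flag keeps the loops read-only so the images cannot be modified by later experiments.
$ sudo losetup -r -f --show /path/to/sda.img    # prints e.g. /dev/loop0
$ sudo losetup -r -f --show /path/to/sdb.img    # prints e.g. /dev/loop1
$ losetup -a                                    # list the attached loop devices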
Here is some relevant info at this point of the process...
mdadm --detail-platform:
$ sudo mdadm --detail-platform
Platform : Intel(R) Rapid Storage Technology
Version : 10.1.0.1008
RAID Levels : raid0 raid1 raid5
Chunk Sizes : 4k 8k 16k 32k 64k 128k
2TB volumes : supported
2TB disks : not supported
Max Disks : 7
Max Volumes : 2 per array, 4 per controller
I/O Controller : /sys/devices/pci0000:00/0000:00:1f.2 (SATA)
Port0 : /dev/sda (W0Q6DV7Z)
Port3 : - non-disk device (HL-DT-ST DVD+-RW GS30N) -
Port1 : /dev/sdb (W0Q6CJM1)
Port2 : - no device attached -
Port4 : - no device attached -
Port5 : - no device attached -
fdisk -l:
$ sudo fdisk -l
Disk /dev/loop0: 298.1 GiB, 320072933376 bytes, 625142448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2bd2c32a
Device Boot Start End Sectors Size Id Type
/dev/loop0p1 * 2048 4196351 4194304 2G 7 HPFS/NTFS/exFAT
/dev/loop0p2 4196352 1250273279 1246076928 594.2G 7 HPFS/NTFS/exFAT
Disk /dev/loop1: 298.1 GiB, 320072933376 bytes, 625142448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sda: 298.1 GiB, 320072933376 bytes, 625142448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x2bd2c32a
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 4196351 4194304 2G 7 HPFS/NTFS/exFAT
/dev/sda2 4196352 1250273279 1246076928 594.2G 7 HPFS/NTFS/exFAT
Disk /dev/sdb: 298.1 GiB, 320072933376 bytes, 625142448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
mdadm --examine --verbose /dev/sda:
$ sudo mdadm --examine --verbose /dev/sda
/dev/sda:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.0.00
Orig Family : 81bdf089
Family : 81bdf089
Generation : 00001796
Attributes : All supported
UUID : acf55f6b:49f936c5:787fa66e:620d7df0
Checksum : 6cf37d06 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1
RAID Level : 0
Members : 2
Slots : [_U]
Failed disk : 0
This Slot : ?
Array Size : 1250275328 (596.18 GiB 640.14 GB)
Per Dev Size : 625137928 (298.09 GiB 320.07 GB)
Sector Offset : 0
Num Stripes : 2441944
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
Disk00 Serial : W0Q6DV7Z
State : active failed
Id : 00000000
Usable Size : 625136142 (298.09 GiB 320.07 GB)
Disk01 Serial : W0Q6CJM1
State : active
Id : 00010000
Usable Size : 625136142 (298.09 GiB 320.07 GB)
mdadm --examine --verbose /dev/sdb:
$ sudo mdadm --examine --verbose /dev/sdb
/dev/sdb:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.0.00
Orig Family : 81bdf089
Family : 81bdf089
Generation : 00001796
Attributes : All supported
UUID : acf55f6b:49f936c5:787fa66e:620d7df0
Checksum : 6cf37d06 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1
Disk01 Serial : W0Q6CJM1
State : active
Id : 00010000
Usable Size : 625137928 (298.09 GiB 320.07 GB)
RAID Level : 0
Members : 2
Slots : [_U]
Failed disk : 0
This Slot : 1
Array Size : 1250275328 (596.18 GiB 640.14 GB)
Per Dev Size : 625137928 (298.09 GiB 320.07 GB)
Sector Offset : 0
Num Stripes : 2441944
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
Disk00 Serial : W0Q6DV7Z
State : active failed
Id : 00000000
Usable Size : 625137928 (298.09 GiB 320.07 GB)
Here is where I ran into difficulty. I tried to assemble the array.
$ sudo mdadm --assemble --verbose /dev/md0 /dev/loop0 /dev/loop1
mdadm: looking for devices for /dev/md0
mdadm: Cannot assemble mbr metadata on /dev/loop0
mdadm: /dev/loop0 has no superblock - assembly aborted
I get the same result by using --force or by swapping /dev/loop0 and /dev/loop1.
Since IMSM is a CONTAINER type FakeRAID, I'd seen some indications that you have to create the container instead of assembling it. I tried...
$ sudo mdadm -CR /dev/md/imsm -e imsm -n 2 /dev/loop[01]
mdadm: /dev/loop0 is not attached to Intel(R) RAID controller.
mdadm: /dev/loop0 is not suitable for this array.
mdadm: /dev/loop1 is not attached to Intel(R) RAID controller.
mdadm: /dev/loop1 is not suitable for this array.
mdadm: create aborted
After reading a few more things, it seemed that the culprits here were IMSM_NO_PLATFORM and IMSM_DEVNAME_AS_SERIAL. After futzing around with trying to get environment variables to persist with sudo, I tried...
$ sudo IMSM_NO_PLATFORM=1 IMSM_DEVNAME_AS_SERIAL=1 mdadm -CR /dev/md/imsm -e imsm -n 2 /dev/loop[01]
mdadm: /dev/loop0 appears to be part of a raid array:
level=container devices=0 ctime=Wed Dec 31 19:00:00 1969
mdadm: metadata will over-write last partition on /dev/loop0.
mdadm: /dev/loop1 appears to be part of a raid array:
level=container devices=0 ctime=Wed Dec 31 19:00:00 1969
mdadm: container /dev/md/imsm prepared.
That's something. Taking a closer look...
$ ls -l /dev/md
total 0
lrwxrwxrwx 1 root root 8 Apr 2 05:32 imsm -> ../md126
lrwxrwxrwx 1 root root 8 Apr 2 05:20 imsm0 -> ../md127
/dev/md/imsm0 and /dev/md127 are associated with the physical disk drives (/dev/sda and /dev/sdb). /dev/md/imsm (pointing to /dev/md126) is the newly created container based on the loopback devices. Taking a closer look at that...
$ sudo IMSM_NO_PLATFORM=1 IMSM_DEVNAME_AS_SERIAL=1 mdadm -Ev /dev/md/imsm
/dev/md/imsm:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.0.00
Orig Family : 00000000
Family : ff3cb556
Generation : 00000001
Attributes : All supported
UUID : 00000000:00000000:00000000:00000000
Checksum : 7edb0f81 correct
MPB Sectors : 1
Disks : 1
RAID Devices : 0
Disk00 Serial : /dev/loop0
State : spare
Id : 00000000
Usable Size : 625140238 (298.09 GiB 320.07 GB)
Disk Serial : /dev/loop1
State : spare
Id : 00000000
Usable Size : 625140238 (298.09 GiB 320.07 GB)
Disk Serial : /dev/loop0
State : spare
Id : 00000000
Usable Size : 625140238 (298.09 GiB 320.07 GB)
That looks okay. Let's try to start the array. I found information (here and here) that said to use Incremental Assembly mode to start a container.
$ sudo IMSM_NO_PLATFORM=1 IMSM_DEVNAME_AS_SERIAL=1 mdadm -I /dev/md/imsm
That gave me nothing. Let's use the verbose flag.
$ sudo IMSM_NO_PLATFORM=1 IMSM_DEVNAME_AS_SERIAL=1 mdadm -Iv /dev/md/imsm
mdadm: not enough devices to start the container
Oh, bother. Let's check /proc/mdstat.
$ sudo cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md126 : inactive loop1[1] (S) loop0(S)
2210 blocks super external:imsm
md127 : inactive sdb[1] (S) sda(S)
5413 blocks super external:imsm
unused devices:
Well, that doesn't look right - the block counts don't match. Looking closely at the messages from when I tried to assemble, it seems mdadm said "metadata will over-write last partition on /dev/loop0", so I'm guessing that the image file associated with /dev/loop0 is hosed. Thankfully, I have backup copies of these images, so I can grab those and start over, but it takes a while to re-copy 300-600GB even over USB3.
Anyway, at this point, I'm stumped. I hope someone out there has an idea, because at this point I've got no clue what to try next.
Is this the right path for addressing this problem, and I just need to get some settings right? Or is the above approach completely wrong for mounting IMSM RAID-0 disk images?
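Not an answer from the original post, but one hedged alternative worth noting: the IMSM metadata shown above gives everything a plain device-mapper striped target needs (RAID-0, 2 members, 128 KiB chunk = 256 sectors, sector offset 0, array size 1250275328 sectors), so the stripe can be recreated read-only without mdadm at all. The member order may need to be swapped if the partition table does not show up.
$ sudo dmsetup create imsm_r0 --readonly \
      --table '0 1250275328 striped 2 256 /dev/loop0 0 /dev/loop1 0'
$ sudo kpartx -a -r /dev/mapper/imsm_r0     # expose the NTFS partitions read-only
$ sudo mount -o ro /dev/mapper/imsm_r0p2 /mnt
Because everything stays read-only, this can be tried and torn down (umount, kpartx -d, dmsetup remove) without touching the images.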
Ryan
(1821 rep)
Apr 2, 2016, 11:23 AM
• Last activity: Jul 18, 2018, 01:22 AM
1 vote, 1 answer, 301 views
How to determine if a controller is true hardware RAID or fakeraid?
I've just bought a new SAS/SATA controller (Highpoint RocketRAID RR2720SGL) and it looks like it is hardware raid, but I've no idea how to tell if it is true hardware raid or yet another fakeraid controller.
(I don't care as I'm using softraid at the moment, but when I rebuild it would be good to use true hardware raid if it is available.)
What I can't tell is what I need to look for in the specification to know which it is, since I've never seen a controller admit to being fakeraid.
Is there a key word I should be looking for, or do you just have to suck it and see?
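Not from the original post, but a hedged sanity check from a running system: a genuine hardware RAID controller normally hides the member disks and presents only logical volumes, while a fakeraid controller leaves the raw disks visible and relies on the OS driver.
# how does the kernel classify the controller, and which driver claimed it?
lspci -nn | grep -iE 'raid|sas|sata'
lspci -k -s <pci-address-from-above>     # placeholder address
# if the individual member disks are still listed here, it is almost
# certainly fakeraid:
lsblk -o NAME,SIZE,MODEL
In spec sheets, a dedicated RAID-on-chip processor and on-card cache (optionally battery or flash backed) are the usual markers of real hardware RAID; wording like "host RAID" or "driver-based RAID" usually means fakeraid.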
Screwtape
(113 rep)
Jun 8, 2018, 11:18 AM
• Last activity: Jun 8, 2018, 12:00 PM
0 votes, 1 answer, 1206 views
AMD Fakeraid and Debian?
I can't get my (AMD-based) BIOS RAID to work with either Debian or Ubuntu. With my BIOS in AHCI mode, I successfully detect all drives; however, in RAID mode, all SATA drives disappear, and I can only see my NVMe card. I've set `dmraid=true`, but I still have no luck.
Has anyone else had any success with AMD BIOS RAID on Debian or Debian-based distros? Is this a problem with just my motherboard?
Cowbell
(1 rep)
Jul 24, 2017, 08:58 PM
• Last activity: Oct 23, 2017, 02:34 AM
1 vote, 1 answer, 544 views
HP B120i Raid showing two disk in CentOS7
We have an HP DL360e G8 with a `B120i` RAID controller (which is FakeRAID), and I have configured `RAID 1` in the RAID configuration utility. After the CentOS 7 installation, when I run `fdisk -l` I see two individual disks with the same partition layout.
I should be seeing a single disk (/dev/sda), right?
Disk /dev/sda: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00020b93
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 526335 262144 83 Linux
/dev/sda2 526336 8914943 4194304 82 Linux swap / Solaris
/dev/sda3 8914944 976707583 483896320 8e Linux LVM
Disk /dev/sdb: 500.1 GB, 500107862016 bytes, 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00020b93
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 526335 262144 83 Linux
/dev/sdb2 526336 8914943 4194304 82 Linux swap / Solaris
/dev/sdb3 8914944 976707583 483896320 8e Linux LVM
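Not part of the original post, but a hedged way to check from CentOS itself whether any RAID layer picked up the B120i metadata; if neither md nor device-mapper shows an assembled set, the installer most likely wrote straight onto the two raw disks.
cat /proc/mdstat                      # md-assembled fake RAID would show up here
ls /dev/mapper/                       # dmraid/device-mapper sets would show up here
mdadm --examine /dev/sda /dev/sdb     # is there controller metadata on the disks?
lsblk                                 # two identical, independent disks = no RAID in use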
Satish
(1672 rep)
Jun 28, 2017, 04:41 AM
• Last activity: Jun 28, 2017, 06:10 AM
1 vote, 0 answers, 1438 views
How do I mount an existing intel rst raid-5?
I recently installed Linux Mint 18.1 and am dual booting alongside Windows 10. I have two RAID arrays that I want to access, but only one seems to work out of the box. One is a RAID-0 with two disks (my Windows system drives) and another is a RAID-5 with 4 disks. The RAID-0 volume is automatically detected, and I can mount it via Nemo, but I don't see the RAID-5 anywhere.
I've done a bit of research to try and figure this out, but I can't find anything that explains in a straightforward manner how to do this. I'm scared of destroying my array and losing data.
All I've found were instructions to run `dmraid -s` and `dmraid -r`. These list my volume and disks respectively, so they seem to be detected, but how do I mount the volume?
Here are some command outputs:
dmraid -s:
*** Group superset isw_cbfchdcibb
--> Active Subset
name : isw_cbfchdcibb_Volume1
size : 10557196032
stride : 256
type : raid5_la
status : ok
subsets: 0
devs : 4
spares : 0
dmraid -r:
/dev/sde: isw, "isw_cbfchdcibb", GROUP, ok, 7814037166 sectors, data@ 0
/dev/sdc: isw, "isw_cbfchdcibb", GROUP, ok, 7814037166 sectors, data@ 0
/dev/sdb: isw, "isw_cbfchdcibb", GROUP, ok, 7814037166 sectors, data@ 0
/dev/sda: isw, "isw_cbfchdcibb", GROUP, ok, 7814037166 sectors, data@ 0
ls -al /dev/mapper/*
crw------- 1 root root 10, 236 Mar 4 22:28 /dev/mapper/control
brw-rw---- 1 root disk 252, 0 Mar 4 22:28 /dev/mapper/isw_cbfchdcibb_Volume1
lrwxrwxrwx 1 root root 7 Mar 4 22:28 /dev/mapper/isw_cbfchdcibb_Volume1p1 -> ../dm-1
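Not from the original post, but a hedged sketch of the usual dmraid route, reusing the names from the ls output above; mounting read-only first avoids writing to an array you're still unsure about.
sudo dmraid -ay                                        # activate all detected sets
ls /dev/mapper/                                        # expect isw_cbfchdcibb_Volume1p1 etc.
sudo kpartx -a /dev/mapper/isw_cbfchdcibb_Volume1      # only needed if the pN mappings are missing
sudo mkdir -p /mnt/raid5
sudo mount -o ro /dev/mapper/isw_cbfchdcibb_Volume1p1 /mnt/raid5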
Mirrana
(111 rep)
Mar 5, 2017, 02:16 AM
0 votes, 1 answer, 1845 views
fakeraid + UEFI + GPT - grub doesn't detect raid volume after debian install using dmraid
I have a post on the Debian forums as well, but it seems to get less traffic than here, so I thought I'd try my luck here too.
I'm trying to install Windows 10, Debian, and possibly more distros on a fakeraid using UEFI and GPT. So I followed this guide, and using dmraid I can successfully partition and install. The partitioning looks like this:
/dev/mapper/isw_dagfijbabd_RAID0SYS
|- Microsoft Recovery
|- EFI / boot
|- Microsoft MRS
|- Windows
|- swap
|- LVM PV
\
|-- VG0
\
|--- LV OS_2
|--- LV debian
|--- LV home
The problem is that GRUB doesn't seem to see the RAID when setting the root for the kernel, and I get this error:
modprobe: module dm-raid45 not found in module.dep
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay= (did the system wait long enough?)
- Check root= (did the system wait for the right device?)
- Missing modules (cat /proc/modules; ls /dev)
ALERT! /dev/mapper/VG0-debian does not exist.
modprobe: module ehci-orion not found in modules.dep
I could use an Ubuntu live system to chroot into the installation (instead of the Debian rescue mode) and finish the installation steps, apart from actually setting the root for GRUB.
As far as I can tell, it seems to be an issue with GRUB not using mdadm correctly, or at all. So I need to edit the initramfs to include mdadm somehow, right? But how does that work? I have successfully unpacked the initramfs following this guide from ducea.com, like so - but how would I continue?
# All work is done in a temporary directory
mkdir /tmp/initrdmount
# Copy the image, uncompress it
cp /boot/initrd.img-2.6.15-1-686-smp /tmp/initrd.img.gz
gunzip -v /tmp/initrd.img.gz
# Extract the content of the cpio archive
cd /tmp/initrdmount
cpio -i < /tmp/initrd.img
EDIT:
I'll add some info gathered from the initramfs shell as well:
# this depends ofc on whether I use dmraid or mdadm for kernel boot
(initramfs) cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.16.0.4-amd64 root=/dev/mapper/VG0-debian ro {dmraid/mdadm}=true
(initramfs) cat /proc/mdstat # returns nothing
(initramfs) cat /etc/mdadm/mdadm.conf
ARRAY metadata=imsm UUID=xxxx:xxxx:xxxx:xxxx
ARRAY /dev/md/isw_dagfijbabd_RAID0SYS container=xxxxxxxxxxxxxxxx member=0 UUID=xxxxxx:xxxxxx:xxxxxx:xxxxxx
ARRAY /dev/md/isw_dagfijbabd_RAID0RST container=xxxxxxxxxxxxxxxx member=1 UUID=xxxxxx:xxxxxx:xxxxxx:xxxxxx
(initramfs) ls /dev/mapper/
control isw_dagfijbabd_RAID0RST isw_dagfijbabd_RAID0SYS
(initramfs) lvm pvs # returns nothing
This output was practically the same whether I used dmraid or mdadm on the kernel boot line. I realized that I could find mdadm in /sbin either way, and that the RAID0 disk isw_dagfijbabd_RAID0SYS / dm-0 is detected, but not its content.
I'm wondering if there is some interference between dmraid and mdadm. Should I remove dmraid from the initramfs?
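Not part of the original post, but a hedged sketch of the usual Debian way to get mdadm into the initramfs, run from inside the chroot, instead of editing the unpacked cpio archive by hand:
# inside the chroot of the installed Debian system
apt-get install mdadm                              # ships an initramfs hook
mdadm --examine --scan >> /etc/mdadm/mdadm.conf    # record the IMSM container and volumes
update-initramfs -u -k all                         # rebuild the initrd(s) with mdadm included
update-grub                                        # regenerate grub.cfg against the md devices
If mdadm ends up assembling the set, removing dmraid from the picture (e.g. apt-get purge dmraid) is a commonly suggested companion step, since having both claim the same IMSM array is a frequent source of exactly this kind of boot failure; verify from a rescue system before rebooting.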
mmFooD
(11 rep)
Jan 28, 2017, 02:12 PM
• Last activity: Jan 29, 2017, 01:11 PM
0 votes, 1 answer, 236 views
No Raid feature in Redhat Hypervisor
We purchased a Supermicro server ("SUPERMICRO CSE-827H-R1400B"), and each of its nodes has 9 TB of storage. We planned to use this server as a host for virtual machines, so I started installing the hypervisor on each node and created a RAID 5; the hypervisor now presents 5.26 TB.
After installation, when I check the Intel RAID manager it shows the RAID is in INITIALIZE mode, and the hypervisor gives a kernel panic during boot and hangs.
Now, we need RAID in case of HDD failure. I know it's the hypervisor that is breaking the RAID each time.
Redhat Hypervisor 6.6
OmiPenguin
(4398 rep)
May 13, 2015, 11:50 AM
• Last activity: May 26, 2015, 12:40 AM
2 votes, 0 answers, 1248 views
Dual Boot Debian on Raid 0 in UEFI mode
My new notebook came from the factory with Windows 8 on two 128GB SSDs in RAID 0 mode. I wanted to dual boot it with a Linux distro and settled on Debian, so I resized my `C:` drive as one normally would to free up some space for the Linux install. I installed from `debian-live-7.8.0-amd64-kde-desktop`, which runs on Linux kernel `3.2.0-4`. I initially followed these [instructions on Debian wiki](https://wiki.debian.org/DebianInstaller/SataRaid) for installing on SataRaid aka BIOS RAID.
I deviated from the instructions on the wiki in step 8, as the Debian rescue install console did not give me the option to mount and use `/dev/dm-?` as the root fs, and I needed to set up Debian to boot in UEFI mode via Ubuntu Live anyway, primarily using instructions in [this video](https://www.youtube.com/watch?v=DLlOd-a2wG0). These basically involve using Ubuntu to mount the Debian filesystem, mounting the EFI partition in the Debian file system, then bind mounting `/dev/`, `/dev/pts`, `/proc/`, and `/sys`, and finally chrooting into the Debian directory to update `sources.list` and update all of Debian's packages. Lastly, `apt-get install --reinstall grub-efi` followed by `grub-install /dev/mapper/isw__Volume1`, to make sure the RAID volume is used and not `/dev/sdx`.
So now I am able to boot into GRUB, but when I select Debian it fails to load, giving me a 'Gave up waiting for root device' error and claiming the 'isw_...' disk does not exist. Can anyone offer some guidance on how to troubleshoot further? I also was not clear about a line on the original page I linked that says, "install Debian as usual, until you get to the disk partitioner. You will see your fake RAID as one disk with a confusing long name. Use it as if it were a single disk and configure your partitions any way you want, including LVM and friends." What does it mean by "including LVM and friends"? Did I need to explicitly make the drive an LVM drive when I performed the partitioning during the original install? Any help would be appreciated. Thanks!
JtheDude
(21 rep)
Jan 18, 2015, 03:56 AM
• Last activity: Jan 18, 2015, 04:10 AM
0 votes, 0 answers, 1695 views
How to make Intel fake RAID 0 volume appear on boot with Ubuntu 14.04 in a multi-boot environment?
I have a computer with a Gigabyte GA-Z97N-Gaming 5 mainboard, a 256 GB drive for booting, and two 3 TB drives meant to be used in a RAID 0 setup with both ext4 and NTFS partitions. We are planning to run Xubuntu 14.04 and Windows 8.1 on it.
Using dmraid seemed to limit the RAID volume size to 1.5 TB, so it was not feasible with 6 TB of disk surface. Referring to an Intel whitepaper, I could configure a 6 TB volume and create a 4.5 TB ext4 partition using gdisk and mkfs.ext4, and an NTFS partition using Windows for the rest of the capacity.
So, the setup seems to work on both Linux and Windows as expected. However, after rebooting Xubuntu, the /dev/md126[|p1|p2] devices no longer appear.
Here are the exact commands which were run in the successful setup:
sudo mdadm -C /dev/md/imsm /dev/sd[b-c] -n 2 -e imsm
sudo mdadm -C /dev/md/vol0 /dev/md/imsm -n 2 -l 0
sudo mdadm -E -s --config=mdadm.conf > /etc/mdadm.conf
sudo gdisk /dev/md126 # created partitions
sudo mkfs.ext4 /dev/md126p1
sudo mkdir /home/levo/megaosio
sudo nano /etc/fstab # added UUID->/home/levo/megaosio entry with defaults
sudo chown levo:levo /home/levo/megaosio/
sudo mount -a
The first time the volume disappeared, I tried rebuilding the volume from scratch using the following commands:
sudo mdadm -C /dev/md/imsm /dev/sd[b-c] -n 2 -e imsm
sudo mdadm -C /dev/md/vol0 /dev/md/imsm -n 2 -l 0
To my surprise, that made the ext4 and NTFS volumes appear again and I could access them.
I really do not believe that this is the correct way to make a volume appear. Have I missed a step when configuring the volume, or does booting to Windows or rebooting per se just ruin the metadata? What would be the correct way to discover the volumes on startup?
/etc/mdadm.conf contains
ARRAY metadata=imsm UUID=ff5cb77f:cf2f773b:3dc18705:11398139
ARRAY /dev/md/vol0 container=ff5cb77f:cf2f773b:3dc18705:11398139 member=0 UUID=cb1e53b2:e182f7c2:5f3d8a99:6663ffd6
`sudo mdadm --detail --scan` only returns a newline.
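Not from the original post, but a hedged note: on Ubuntu, mdadm's initramfs hook usually reads /etc/mdadm/mdadm.conf (not /etc/mdadm.conf), and the initramfs has to be rebuilt after the ARRAY lines change, otherwise nothing reassembles the container at boot. A sketch of that, assuming the array is currently assembled and working:
sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
# after the next reboot:
cat /proc/mdstat
ls -l /dev/md*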
borellini
(131 rep)
Sep 19, 2014, 05:39 AM
• Last activity: Sep 19, 2014, 11:45 AM
3 votes, 1 answer, 768 views
RAID-1 mirror has become a single hard disk
I have an HP N40L MicroServer with 2 identical drives; I used the system's controller to hardware-RAID them as a mirror. I then installed Mint on the system about a year ago.
This has been running perfectly, updating, etc. until I upgraded to Mint 17.
I thought everything was fine, but I've noticed that Mint is only using one of the drives to boot, and then for some reason showing the contents of the other drive.
i.e. it boots `sdb1`, but `df` shows `sda1`. I'm *sure* `df` used to show a `/dev/mapper/pdc_bejigbccdb1` drive, which was the RAID array. Thus any updates to GRUB go to `sda1`, but it boots `sdb1` and then loads the filesystem from `sda1`.
N40L marty # df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 233159608 113675036 107617644 52% /
none 4 0 4 0% /sys/fs/cgroup
/dev 2943932 12 2943920 1% /media/sda1/dev
tmpfs 597588 1232 596356 1% /run
none 5120 0 5120 0% /run/lock
none 2987920 0 2987920 0% /run/shm
none 102400 4 102396 1% /run/user
From cat /etc/fstab
N40L marty # cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
proc /proc proc nodev,noexec,nosuid 0 0
/dev/mapper/pdc_bejigbccdb1 / ext4 errors=remount-ro 0 1
/dev/mapper/pdc_bejigbccdb5 none swap sw 0 0
If I do `ls /dev/mapper/` I get
N40L marty # ls /dev/mapper
total 0
crw------- 1 root root 10, 236 Jul 24 17:03 control
How do I get my RAID back, and how do I get GRUB to boot from it?
----
Further update:
N40L grub # dmraid -r
/dev/sdb: pdc, "pdc_bejigbccdb", mirror, ok, 486328064 sectors, data@ 0
/dev/sda: pdc, "pdc_bejigbccdb", mirror, ok, 486328064 sectors, data@ 0
N40L grub # dmraid -s
*** Set
name : pdc_bejigbccdb
size : 486328064
stride : 128
type : mirror
status : ok
subsets: 0
devs : 2
spares : 0
N40L grub # dmraid -ay -vvv -d
WARN: locking /var/lock/dmraid/.lock
NOTICE: /dev/sdb: asr discovering
NOTICE: /dev/sdb: ddf1 discovering
NOTICE: /dev/sdb: hpt37x discovering
NOTICE: /dev/sdb: hpt45x discovering
NOTICE: /dev/sdb: isw discovering
DEBUG: not isw at 250059348992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 250058267136
NOTICE: /dev/sdb: jmicron discovering
NOTICE: /dev/sdb: lsi discovering
NOTICE: /dev/sdb: nvidia discovering
NOTICE: /dev/sdb: pdc discovering
NOTICE: /dev/sdb: pdc metadata discovered
NOTICE: /dev/sdb: sil discovering
NOTICE: /dev/sdb: via discovering
NOTICE: /dev/sda: asr discovering
NOTICE: /dev/sda: ddf1 discovering
NOTICE: /dev/sda: hpt37x discovering
NOTICE: /dev/sda: hpt45x discovering
NOTICE: /dev/sda: isw discovering
DEBUG: not isw at 250059348992
DEBUG: isw trying hard coded -2115 offset.
DEBUG: not isw at 250058267136
NOTICE: /dev/sda: jmicron discovering
NOTICE: /dev/sda: lsi discovering
NOTICE: /dev/sda: nvidia discovering
NOTICE: /dev/sda: pdc discovering
NOTICE: /dev/sda: pdc metadata discovered
NOTICE: /dev/sda: sil discovering
NOTICE: /dev/sda: via discovering
DEBUG: _find_set: searching pdc_bejigbccdb
DEBUG: _find_set: not found pdc_bejigbccdb
DEBUG: _find_set: searching pdc_bejigbccdb
DEBUG: _find_set: not found pdc_bejigbccdb
NOTICE: added /dev/sdb to RAID set "pdc_bejigbccdb"
DEBUG: _find_set: searching pdc_bejigbccdb
DEBUG: _find_set: found pdc_bejigbccdb
DEBUG: _find_set: searching pdc_bejigbccdb
DEBUG: _find_set: found pdc_bejigbccdb
NOTICE: added /dev/sda to RAID set "pdc_bejigbccdb"
DEBUG: checking pdc device "/dev/sda"
DEBUG: checking pdc device "/dev/sdb"
DEBUG: set status of set "pdc_bejigbccdb" to 16
DEBUG: checking pdc device "/dev/sda"
DEBUG: checking pdc device "/dev/sdb"
DEBUG: set status of set "pdc_bejigbccdb" to 16
RAID set "pdc_bejigbccdb" was not activated
WARN: unlocking /var/lock/dmraid/.lock
DEBUG: freeing devices of RAID set "pdc_bejigbccdb"
DEBUG: freeing device "pdc_bejigbccdb", path "/dev/sda"
DEBUG: freeing device "pdc_bejigbccdb", path "/dev/sdb"
So my system sees the two drives and sees that they should be part of an array, but it will not activate the array and thus does not create `/dev/mapper/pdc_bejigbccdb`, so I cannot point GRUB at it and boot from it.
How do I get dmraid to activate and create the mapper entry?
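Not part of the original post, but a hedged next step: dmraid needs the dm_mirror kernel module for mirror sets, and activating the single named set with full verbosity (plus a look at dmsetup) sometimes shows why activation is refused.
modprobe dm_mirror
dmraid -ay -vvv -d pdc_bejigbccdb     # activate only this set
dmsetup ls                            # was any mapping created?
dmsetup table                         # and with what table?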
wkdmarty
(251 rep)
Jul 25, 2014, 10:00 AM
• Last activity: Jul 31, 2014, 04:02 PM
2 votes, 0 answers, 191 views
MD soft-raid to replace fake raid: How to get rid of old metadata?
I had a system running with fake RAID (Intel controller, imsm), which had some problems.
Now I'm switching to the regular MD soft-RAID: I created a RAID 1 on `/dev/sda1` and `/dev/sdb1`. It seems to work fine.
Unfortunately, a run of `mdadm --examine /dev/sd[ab]` shows that I still have the Intel metadata sitting on top, at `/dev/sda` and `/dev/sdb`. The metadata of the soft-RAID is only stored within sda1 / sdb1.
How can I get rid of the Intel data, without breaking anything?
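Not from the original question, but the commonly suggested approach, hedged accordingly: IMSM container metadata sits near the end of the whole disk, so it can be removed from /dev/sda and /dev/sdb without touching the new array's metadata inside sda1/sdb1. This does write to the disks, so check what is found first and have backups.
# confirm what metadata lives where
mdadm --examine /dev/sda /dev/sdb      # should show "Intel Raid ISM Cfg Sig."
mdadm --examine /dev/sda1 /dev/sdb1    # the new RAID-1 members, leave these alone
wipefs /dev/sda /dev/sdb               # without -a this only lists signatures
# remove the container metadata from the whole disks only
mdadm --zero-superblock /dev/sda /dev/sdb
# re-check
mdadm --examine /dev/sda /dev/sdb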
trapperjohn
(31 rep)
Jul 10, 2014, 11:34 AM
• Last activity: Jul 12, 2014, 02:08 AM
1 vote, 1 answer, 460 views
How to rebuild a two drive raid 1 array on Linux?
I need to rebuild a two-drive (160x2) RAID 1 (mirror) array. It was an Intel RAID from a 7-year-old Dell computer (Windows). Both drives are assumed to be functioning. The RAID was created with an on-board motherboard RAID program. It's been quite a while since I had to rebuild a RAID, and the last time it was a RAID 5 that had been created on Slackware. Where do I get started? I'm putting the drives in my personal machine, and I'm guessing that Linux is going to be the tool to get this done.
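Not part of the original question, but a hedged starting point: with an Intel onboard (IMSM) mirror, each half of a RAID-1 carries a complete copy of the data, and a reasonably recent mdadm can usually assemble the pair directly; device names below are placeholders.
# check whether the disks still carry Intel (imsm) metadata
sudo mdadm --examine /dev/sdb /dev/sdc     # placeholder device names
# let mdadm assemble whatever it recognises
sudo mdadm --assemble --scan
cat /proc/mdstat
# mount read-only to copy the data off
sudo mount -o ro /dev/md126p1 /mnt         # md/partition name is a guess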
Jeff
(111 rep)
Jan 2, 2013, 09:23 PM
• Last activity: May 12, 2014, 08:33 AM
3 votes, 5 answers, 1208 views
Linking Intel RAID 5 partitions to boot disk
I have a 5-disk Intel RAID 5 along with a 6th boot disk with /, /boot, and swap.
What I was planning to do was mount the Intel RAID partitions (which I've added with fdisk) so that the 6th disk's /home, /var, /srv, etc. point to the RAID on the other 5 disks. So far, my attempts at doing this (editing fstab, trying to mount the /dev/dm-* partitions manually, etc.) have failed.
Does anyone have experience in this and can point me in the right direction?
Edit: I have the RAID array partitioned so that I can mount each partition as a folder on the boot disk, i.e. RAID /dev/dm-0 -> bootdisk /home.
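Not from the original post, but a hedged sketch of the manual test that usually precedes fstab changes: activate the set, make sure its partition mappings exist, mount one by hand, and only then add fstab entries that use the stable /dev/mapper names or UUIDs rather than /dev/dm-*. The volume name below is a placeholder for this system's actual isw_* set.
dmraid -ay                               # activate the Intel set (or let mdadm handle it)
kpartx -a /dev/mapper/isw_XXXX_Volume0   # placeholder set name; creates ..._Volume0pN
blkid /dev/mapper/*                      # note the UUIDs of the partitions
mount /dev/mapper/isw_XXXX_Volume0p1 /home
# example fstab entry, using the UUID reported by blkid:
# UUID=<uuid-from-blkid>  /home  ext4  defaults  0  2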
BLaZuRE
(181 rep)
Jul 19, 2012, 12:03 AM
• Last activity: Nov 28, 2012, 02:17 AM
2 votes, 0 answers, 1051 views
Linux Mint installer does not see fakeraid drive
I have a computer with a fakeraid 0 array (the computer came with it, and I don't want to remove my other OS to get rid of it), and I am trying to install Linux Mint. When I first boot up from the live USB, the partitions on the RAID array do not show up in `/dev/mapper/` and GParted does not see them. After following various instructions on the internet (involving dmraid and kpartx), I got the drive and all of the partitions to show up in `/dev/mapper/`, and GParted sees them, but when I run the installer it does not see any of the partitions. How can I make the installer see the partitions in the fakeraid array once they are in `/dev/mapper/`?
murgatroid99
(173 rep)
Aug 3, 2011, 11:25 PM
• Last activity: Nov 28, 2012, 12:56 AM