Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
4
votes
4
answers
2740
views
How to provision a multi-tier file system across fast and slow storage while combining capacity?
I'm hunting for a way to utilise a slow 500GB magnetic HDD alongside a fast 500GB SSD.
I'd like to end up with a reasonable fraction of the two combined [hopefully > 800GB] in terms of capacity, but **deciding which files should live on which disk is going to be really tricky and near impossible to cut across directory boundaries**.
It occurs to me that something like this has already been thought of in the form of multi-tiered storage. But so far the only mechanisms I've found use the fast volume as a cache over the slow volume. That's NOT useful to me since I'd just end up with 500GB of SSD cache over 500GB of slow HDD... I'd be no better off than simply binning the slow HDD and using only the SSD.
Hypothetically a system could exist that spanned across two volumes and dynamically copied from one to the other. This would be similar to a cache but without any requirement to keep one volume wholly consistent, and with the ability to provision more capacity than either single volume.
So far all my searches have drawn blanks. I've found multiple references to LVM and XFS supporting caching. But nothing obviously has a path to achieving higher capacity than either single device.
So while it's hypothetically possible, I don't see a way to achieve it.
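For reference, the caching setups I found all look roughly like this lvmcache sketch (device names are hypothetical), and the result illustrates the problem: the cache capacity is never added to the volume.
pvcreate /dev/sdb1 /dev/sdc1                   # sdb1 = HDD, sdc1 = SSD
vgcreate vg_data /dev/sdb1 /dev/sdc1
lvcreate -n data -L 450G vg_data /dev/sdb1     # origin LV on the HDD
lvcreate -n fast -L 450G vg_data /dev/sdc1     # cache LV on the SSD
lvconvert --type cache --cachevol vg_data/fast vg_data/data
# the resulting LV is still only 450G; the SSD is used purely as a cache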
---------
*Note that the tricky feature of this question is achieving a solution with only one end file system. Manually choosing which files go where by directory will not be possible for me.*
Philip Couling
(20391 rep)
Apr 17, 2023, 12:13 PM
• Last activity: Apr 12, 2024, 08:46 AM
8
votes
1
answers
1321
views
Using LVMCache or BCache for Hot-Warm-Cold Caching
I've recently reconfigured my current Debian machine with three different drives: an SLC SSD, a QLC SSD, and a 4TB HDD. I've been toying around with lvmcache and other utilities, and I wanted to know if it was possible to create a multi-tier caching solution that leverages both of the SSDs for caching at different levels.
My utopian structure is this:
* SLC SSD (fastest, good-reliability): Hot Cache for files that are written to and read often
* QLC SSD (fast, OK-reliability): Warm Cache for (potentially larger) files that are written to and read from less often
* HDD (slow, high-reliability): Cold Storage for files that aren't written to or read often
Unfortunately, I haven't found much in the way of multi-tier caching capability that allows for this type of configuration in either lvmcache or bcache (or, really, anywhere else).
Is it possible to configure lvmcache or bcache in such a way? And, if not, are there any other solutions out there that may enable such a configuration?
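Neither tool seems to offer more than one cache tier natively; the closest workaround I can think of is stacking them, e.g. bcache for the QLC tier underneath lvmcache for the SLC tier (a sketch with hypothetical device names: sda = SLC, sdb = QLC, sdc = HDD):
# tier 2: the QLC SSD caches the HDD via bcache
make-bcache -B /dev/sdc                          # HDD becomes the backing device (/dev/bcache0)
make-bcache -C /dev/sdb                          # QLC SSD becomes a cache set
bcache-super-show /dev/sdb | grep cset.uuid
echo <cset.uuid> > /sys/block/bcache0/bcache/attach   # <cset.uuid> is a placeholder for the UUID printed above
# tier 1: the SLC SSD caches the resulting bcache device via lvmcache
pvcreate /dev/bcache0 /dev/sda
vgcreate vg /dev/bcache0 /dev/sda
lvcreate -n data -l 100%PVS vg /dev/bcache0
lvcreate -n hot -L 100G vg /dev/sda
lvconvert --type cache --cachevol vg/hot vg/data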
Swivel
(236 rep)
Mar 21, 2021, 06:31 AM
• Last activity: Jul 6, 2023, 03:42 AM
2
votes
0
answers
1055
views
Use SSDs as persistent buffer for HDDs to decrease power usage
similar questions have been asked multiple times ...
- https://unix.stackexchange.com/questions/126380/how-to-use-an-ssd-for-caching-so-my-hard-disks-can-spin-down
- https://unix.stackexchange.com/questions/122386/caching-write-changes-on-ssd-to-avoid-hdd-spin-up-zfs-but-probably-not-l2arc
- https://superuser.com/questions/664400/ssd-cache-to-minimize-hdd-spin-up-time
- https://unix.stackexchange.com/questions/382326/unionfs-vs-aufs-vs-overlayfs-vs-mhddfs-which-one-do-i-use?rq=1
- https://unix.stackexchange.com/questions/393930/merge-changes-to-upper-filesystem-to-lower-filesystem-in-linux-overlay-overlayf
... but none of them has an answer. As most of those questions are old (2012-2015), I'm trying again.
I have a 6 GB ZFS pool that is mostly idle. Still, it's too busy for the disks to spin down, as there are always some small writes on the server. By powering the disks down for long periods, I want to keep wasted energy to a minimum. I want to emphasize that performance is *not* my concern (the pool is fast enough), but rather energy consumption.
My idea is to add a 1 TB SSD to the system as "persistent cache" (not sure if that's the right word). More specifically, the system should do the following:
- Reads are initially answered from the pool, but the file (or blocks) in question should be stored on the SSD. Subsequent reads should be answered from the copy on the SSD.
- Writes are stored on the SSD. Rarely (like every three days), new files should be written to the pool.
Are there solutions to this problem? From my research, I found that
- AUFS might be able to solve my problem, but it's not included in the kernel. I don't know if there is a userspace driver or how good it is.
- Instead, OverlayFS was pushed by the Linux community. To me it seems that my use case is explicitly unsupported (upper folders cannot be merged into lower folders, for whatever reason). If I understand correctly, OverlayFS actually sees that as a feature rather than a shortcoming, which would make it useless for my use case.
- bcache: bcache in writeback mode with a long writeback delay does appear promising. However, the documentation states that sequential reads/writes go straight to the HDD/RAID devices by default. Is there a way to turn this behavior off? Also, I don't want writeback to happen too early, and that appears to be problematic.
- MergerFS: I don't know if it can do what I need.
Are there any tools/solutions/suggestions that I didn't see yet?
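Regarding the bcache point: both behaviours I'm worried about look tunable through sysfs (a sketch, assuming the bcache device is bcache0):
echo writeback > /sys/block/bcache0/bcache/cache_mode
echo 0 > /sys/block/bcache0/bcache/sequential_cutoff     # don't bypass the cache for sequential I/O
echo 3600 > /sys/block/bcache0/bcache/writeback_delay    # wait an hour before background writeback starts
echo 0 > /sys/block/bcache0/bcache/writeback_running     # or stop background writeback entirely...
echo 1 > /sys/block/bcache0/bcache/writeback_running     # ...and resume it when spinning up the HDDs is acceptable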
Max Beikirch
(123 rep)
Jul 3, 2021, 02:15 PM
• Last activity: May 5, 2022, 08:17 AM
0
votes
1
answers
277
views
Installing Linux Mint on a PC with a hybrid drive, SSD is read-only (no dual boot)
I'm attempting to install Linux Mint on my PC with a hybrid drive system (SSD and hard disk); not dual boot.
I'm booting Linux Mint from a USB, and the install errors on the GRUB step but then says the installation was successful. Additionally, the PC won't boot into Linux. I believe this is partially due to the SSD (used for startup).
I've formatted the hard disk, but the SSD is read-only. I've attempted sudo chown on the mounted SSD partition, but am getting:
chown: changing ownership of '/media/mint/Windows': Read-only file system
I've also attempted various things in gparted and gnome disk management.
I have no access to windows, as the boot manager was corrupt and I've deleted the files on the primary disk.
How can I reformat/remove the partitions from the SSD and change it to read/write?
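I'm guessing something along these lines is what's needed from the live session (a sketch; /dev/sdb is a placeholder for the SSD, and the last two commands wipe it):
lsblk -o NAME,SIZE,RO,MOUNTPOINT       # identify the SSD and check the kernel read-only flag
sudo hdparm -r /dev/sdb                # show the device-level read-only flag
sudo hdparm -r0 /dev/sdb               # clear it if it is set
sudo umount /dev/sdb1                  # unmount any mounted partitions first
sudo wipefs -a /dev/sdb                # remove old filesystem/partition signatures
sudo sgdisk --zap-all /dev/sdb         # wipe the GPT/MBR structures entirely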
Jason
(101 rep)
Feb 2, 2022, 12:05 AM
• Last activity: Feb 2, 2022, 02:18 AM
5
votes
0
answers
274
views
Is it possible to assign/limit a BTRFS subvolume to a particular device, and if so how?
My root (BTRFS) filesystem is on a small SSD, and /var is on a larger, slower, separately formatted (also with BTRFS) HDD. This works well, but I can't use CoW when copying files between the two devices and snapshotting my system requires creating two snapshots every time.
I know that BTRFS can use multiple devices, but I wonder whether I can limit the root subvolume to only use the SSD and the /var subvolume to only use the HDD (like LVM). I know that existing blocks won't be moved between devices without running btrfs balance, but new blocks could be placed on either device by default.
I know that block-level caches can combine the benefits of SSDs and HDDs within the same filesystem, but I'd like seldom-used system files to remain on the SSD. I'm also hoping that this can be done online and without copying my data elsewhere first, but that is a secondary concern.
ATLief
(328 rep)
Feb 11, 2021, 05:28 PM
• Last activity: Feb 11, 2021, 06:55 PM
3
votes
2
answers
12900
views
Did this drive die?
I am having issues with a Seagate Laptop SSHD 1TB, PN: ST1000LM014-1EJ164-SSHD-8GB.
dmesg | grep ata1: says this:
[ 1.197516] ata1: SATA max UDMA/133 abar m2048@0xf7d36000 port 0xf7d36100 irq 31
[ 6.548436] ata1: link is slow to respond, please be patient (ready=0)
[ 11.232622] ata1: COMRESET failed (errno=-16)
[ 16.588832] ata1: link is slow to respond, please be patient (ready=0)
[ 21.269019] ata1: COMRESET failed (errno=-16)
[ 26.621223] ata1: link is slow to respond, please be patient (ready=0)
[ 56.322386] ata1: COMRESET failed (errno=-16)
[ 56.322449] ata1: limiting SATA link speed to 3.0 Gbps
[ 61.374591] ata1: COMRESET failed (errno=-16)
[ 61.374651] ata1: reset failed, giving up
Further, I don't see the drive in GParted.
Does this mean this drive is dead or semi-dead?
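For reference, the check I would run next if the drive showed up at all is a SMART query (device name hypothetical; requires smartmontools):
sudo smartctl -a /dev/sda          # overall health, reallocated/pending sector counts, error log
sudo smartctl -t short /dev/sda    # start a short self-test, then re-check with -a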
Vlastimil Burián
(30515 rep)
Dec 27, 2016, 04:57 PM
• Last activity: Oct 28, 2020, 07:50 PM
3
votes
0
answers
327
views
Hybrid RAID (SSD+HDD) gives unexpected results
I am doing some experiments with hybrid RAID in Linux.
My test consists of the following:
2x256GB SSD in RAID 0 (/dev/md1)
2x256GB HDD in RAID 0 (/dev/md2)
Then I made md1 and md2 into a RAID 1 (/dev/md127), marking the slow HDD array (md2) as --write-mostly.
Essentially, my goal is to get maximum performance AND disk space out of my SSDs, but at the same time be "safe" from drive failures. I understand that losing one of the SSDs would mean falling back on the slow HDDs, but that's a price I am willing to pay compared to losing all data. Besides, it would only be for a few hours until the broken SSD gets replaced and the RAID repaired.
root@s1 / # cat /proc/mdstat
Personalities : [raid0] [raid1] [linear] [multipath] [raid6] [raid5] [raid4] [raid10]
md2 : active raid0 sdd1 sdc1
498802688 blocks super 1.2 512k chunks
md127 : active raid1 md1 md2(W)
498671616 blocks super 1.2 [2/2] [UU]
bitmap: 1/4 pages [4KB], 65536KB chunk
md1 : active raid0 sdb2 sda2
498802688 blocks super 1.2 512k chunks
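For reference, the layout above can be reproduced roughly like this (a sketch using the device names from the mdstat output, not my exact original commands):
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2               # SSD stripe
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1               # HDD stripe
mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/md1 --write-mostly /dev/md2
# the flag can also be toggled later through sysfs:
echo writemostly > /sys/block/md127/md/dev-md2/state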
Now, running a simple throughput benchmark on the three RAID devices gives (to me) surprising results:
root@s1 / # hdparm -t /dev/md1
/dev/md1:
Timing buffered disk reads: 2612 MB in 3.00 seconds = 870.36 MB/sec
root@s1 / # hdparm -t /dev/md2
/dev/md2:
Timing buffered disk reads: 812 MB in 3.01 seconds = 270.14 MB/sec
root@s1 / # hdparm -t /dev/md127
/dev/md127:
Timing buffered disk reads: 1312 MB in 3.00 seconds = 437.33 MB/sec
RAID 0 SSD gives 870 MB/sec
RAID 0 HDD gives 270 MB/sec
RAID 1 HYBRID gives 437 MB/sec.
As the HDD array has been marked --write-mostly, I would assume that a pure read test would not touch the HDDs at all, so what is going on here? I would expect the hybrid benchmark to give results similar to the pure RAID 0 SSD.
At first glance it looks like the HDD array is somehow slowing down the RAID by being partly used for reads (even though I told md not to read from it). However, if I have a file copy running on the HDDs while running the hdparm benchmark, I get the same result! If the HDDs really were being used for reads, I would expect the benchmark to be even slower while they are busy with other tasks.
Daniele Testa
(213 rep)
May 21, 2019, 05:02 PM
• Last activity: May 21, 2019, 07:12 PM
2
votes
0
answers
1519
views
Dual Boot Dell Inspiron 7577 Windows SSD and Linux HDD
I just finally bought a laptop and I am motivated to learn to work on Linux.
The laptop is a Dell Inspiron 7577 that came with two hard disks:
- C: PCIe NVMe M.2 SSD 256GB on which is installed windows 10
- D: HDD 1000GB "Healthy (Primary Partition)"
My idea is to install Ubuntu on a partition of disk D: (the 1000 GB HDD) and use it in dual boot with Windows 10.
1) I shrunk volume D: into two equal-sized parts because I want to install Ubuntu 16.04 on the unallocated part of HDD D:
2) Then I downloaded the Ubuntu .iso and wrote it to my USB pen drive using Rufus.
3) Restarted the computer and ran the Ubuntu installer.
4) On the page where the installer is supposed to ask - according to the tutorial I followed - whether to erase the disk, install alongside Windows, or choose other options, the page looks different and does not allow me to select anything.
Any suggestion?
Seymour
(121 rep)
Feb 28, 2018, 09:18 AM
• Last activity: Apr 3, 2019, 06:50 PM
0
votes
1
answers
3603
views
Dual boot windows 10 and Ubuntu 18.04.1 on separate hard drives
I have an Asus N552VW with the firmware mode set to **Legacy** and the following disks:
* 128GB SSD: my windows 10 is installed on it
* 2 TB HDD partitioned exactly into two partitions
Knowing that, in the firmware I have set the boot priority accordingly (firmware settings screenshot not reproduced here).
My question becomes: *Which device should I use for the **boot loader**?*


Kasra
(77 rep)
Feb 3, 2019, 02:20 PM
• Last activity: Feb 3, 2019, 10:06 PM
1
votes
3
answers
59
views
Disk setup on laptop with regular HDD plus SSD
This is going to be a question about your opinions on setting up a new laptop with both a regular 1TB HDD and a 250G SSD. Let's suppose that...
/dev/sda = 1,000 GB HDD (931 GiB)
/dev/sdb = 250 GB SSD (232 GiB)
It is my understanding that many people use this method:
/dev/sdb1 --> /boot
/dev/sdb2 --> / [root]
/dev/sda1 --> /home
/dev/sda2 --> /swap
Or use it without a separate /boot partition
/dev/sdb1 --> / (root)
/dev/sda1 --> /home
/dev/sda2 --> /swap
I know that the speed of SSDs is the main reason why most would want their root to be on that device, but isn't 250 GB for /boot and / complete overkill? I could never fill up 250GB worth of system files and folders.
Would a possible configuration be the following:
/dev/sda1 --> /boot
/dev/sda2 --> / [root]
/dev/sda3 --> /home
/dev/sda4 --> /swap
/dev/sdb1 --> [something else like frequently used media files available to all users]
Or the same without a dedicated boot partition, where /dev/sda1 has the boot flag:
/dev/sda1 --> / (root)
/dev/sda2 --> /home
/dev/sda3 --> /swap
/dev/sdb1 --> [something else like frequently used media files available to all users]
Or putting swap on the SSD, where /dev/sda1 has the boot flag:
/dev/sda1 --> / [root]
/dev/sda2 --> /home
/dev/sdb1 --> /swap
/dev/sdb2 --> [something else like frequently used media files available to all users]
Something else entirely
/dev/sda1 --> ?
/dev/sda2 --> ?
/dev/sdb1 --> ?
/dev/sdb2 --> ?
I can keep listing the possibilities, but I'm positive you should get the gist by now.
Are there other partitioning schemes when using a laptop (or desktop) with both an internal HDD and an internal SSD? I suppose that when / (root) is installed on the SSD, I could install several different distros which use the HDD as a common /home, with swap on either the HDD or SSD.
I'd really appreciate it if your suggestions included not just the partitioning scheme but also an explanation of why you are suggesting it. Which scheme do you use? Why did you pick it? If you could go back, would you do it differently next time? I have never used a computer with two internal storage disks before and am very curious about using both of them as effectively as possible.
Ev-
(649 rep)
Dec 16, 2018, 07:43 AM
• Last activity: Dec 16, 2018, 02:57 PM
2
votes
2
answers
3640
views
How to install Linux Mint 17 on SSD and have Home on HDD
I have a Sony Vaio VGN-NW23NE laptop and I've been using Linux Mint as my operating system for about two years; I currently have *Qiana*.
I bought a new 120 GB SSD to speed up my laptop, but I'm confused about how to use this SSD because there is so much information on the internet.
I want a fresh install of Linux Mint on the SSD, where I only want to install software, with my home directory on the HDD, where I can keep data like documents, movies, etc.
The HDD currently has no partitions, and I don't want any extra partitions created by the new install.
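From what I've read so far, the end result should boil down to /etc/fstab entries along these lines (UUIDs are placeholders; the real ones come from blkid):
UUID=<ssd-root-uuid>  /      ext4  defaults,noatime  0 1
UUID=<hdd-home-uuid>  /home  ext4  defaults          0 2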
Raj
(31 rep)
Sep 13, 2014, 05:33 PM
• Last activity: Jun 29, 2018, 01:16 PM
0
votes
1
answers
1366
views
How to concatenate 2 or 3 physical disks to create one larger usable volume in CentOS 7
I would like to apply the concept of concatenating multiple physical disks to create one larger disk for my storage; I am using CentOS 7 on my server. I have two types of disks: one is an SSD and the other is a normal Seagate hard drive. I would like to know if this is possible.
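To make the intent concrete, this is the kind of LVM layout I have in mind (a sketch; device names are hypothetical):
pvcreate /dev/sdb /dev/sdc                       # SSD and HDD as physical volumes
vgcreate storage_vg /dev/sdb /dev/sdc
lvcreate -n storage_lv -l 100%FREE storage_vg    # one linear volume spanning both disks
mkfs.xfs /dev/storage_vg/storage_lv
mount /dev/storage_vg/storage_lv /mnt/storage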
christian Martin
(47 rep)
Apr 29, 2018, 06:18 PM
• Last activity: Apr 29, 2018, 06:42 PM
1
votes
1
answers
6417
views
What is causing Mint 18.1 "grub-efi-amd64-signed failed to install into /target/" and following installation failure?
The SSD is set to AHCI (whatever that is, it was automatically set).
Secure boot is set to Other OS (only other choice).
***Linux only. No Windows. Hate Windows. No dual boot. Microsoft hasn't touched my computer in years. Windows can suck it.***
I formatted both SSD and HDD drives to GPT.
- Corsair 30GB SSD (putting
/
and swap
on here)
- Old, ~300GB HDD. No problems with the drive have been found. (putting /home
here)
The live Mint USB isn't corrupt. I did the checksum before installing, and have done multiple checks on the USB before booting.
Installing Mint in UEFI mode results in "**grub-efi-amd64-signed failed to install into /target/**". None of the forum threads I found on this error had a solution that applied to my problem. That package doesn't appear to exist. The installer ***will fail*** once this message appears. I have an internet connection. It's hardwired... (insert Metallica reference here). I can cruise the interwebs during the live boot.
Installing Mint in legacy mode, *not* UEFI, results in no OS being found.
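For what it's worth, these are checks I can run from the live session (standard commands, nothing specific to my installer logs):
[ -d /sys/firmware/efi ] && echo "live session booted in UEFI mode" || echo "live session booted in legacy mode"
lsblk -o NAME,SIZE,FSTYPE,PARTTYPE,MOUNTPOINT   # a UEFI install needs a FAT32 EFI System Partition on the target disk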
**Specs:**
- 18.1 Cinnamon (3.2.6) 64-bit
- AMD FX-8350 8 core x4
- 8GB RAM
- AMD/ATI Turks XT, Radeon HD 6670/7670
I am not sure what the *hell* is going on anymore.
Lorax
(151 rep)
Aug 13, 2017, 02:22 AM
• Last activity: Aug 15, 2017, 03:05 AM
4
votes
1
answers
1439
views
Use SSD for caching files from HDD
I don't have a name for this, so I'll present my particular **use case:**
I have a hybrid laptop made of a tablet slate with only a 32GB SSD and a 500GB HDD built into the USB keyboard dock, and I want to make better use of that HDD without having it constantly spinning (I take a big hit on battery life otherwise).
Let's say I create a mount point /mnt/Data that resides on my external HDD.
What I want to achieve is to have part (not all) of the files in that directory (maybe the last n used/opened ones) cached on my SSD somewhere, without having to copy them manually, and then have the old copies on the HDD updated once I modify them.
**What I am looking for:**
Having the 10-20 files I often/recently use cached on my SSD, and modifying them there rather than accessing them directly from the HDD, lets me work without the system spinning up the HDD (saving me a lot of power).
Even when I save my work, the HDD does not absolutely have to wake up to update the files immediately; as far as I'm concerned this can happen at intervals.
The HDD is for tablet use only, so I'm not worried about the files being modified externally; they just have to be updated every x minutes to mirror the SSD "cached" file modifications back to the HDD copy of the file.
**Questions**
1. Is there any newer Linux set of tools that could do that kind of caching/partial syncing of files automatically, other than bcache?
2. If bcache is the answer, should I even consider bcachefs? (including building the kernel under Arch, noob alert!)
3. Do you think I should instead use a backup solution to keep versions of my SSD data files on my HDD? (but this way I think I would just end up with a bunch of versions of the same files)
I hope my description was explanatory enough of what I'm trying to achieve!
**ADD:** I've found this question: SSD as a read cache for FREQUENTLY read data, which describes mostly what I'm looking for, but it is 4 years old (not that I wouldn't investigate such solutions just because of their age); I was wondering if there were any newer approaches, probably a bit more "contained" than bcache.
**EDIT**: A small detail that wasn't clear in the first part of the OP is that the HDD needs to allow for unmounting and remounting. The removal of the tablet from its dock is done by a mechanical button, so there is no automatic system notification to trigger unmounting.
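The closest manual equivalent I can picture is a periodic mirror of the SSD working copy back to the HDD with rsync (paths are hypothetical placeholders, not my actual layout):
rsync -a --delete /ssd/Data/ /mnt/Data/     # mirror the SSD working set back to the HDD copy
# scheduled, e.g., every 30 minutes via a crontab entry:
# */30 * * * *  /usr/bin/rsync -a --delete /ssd/Data/ /mnt/Data/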
Cody
(41 rep)
May 17, 2017, 06:08 PM
• Last activity: May 20, 2017, 10:12 AM
4
votes
1
answers
4844
views
How to check if a disk is SSHD (hybrid drive) using ATA commands
Having looked at this question: (https://unix.stackexchange.com/questions/65595/how-to-know-if-a-disk-is-an-ssd-or-an-hdd) I am able to detect HDDs and SSDs.
The problem now is that I don't know how to detect an SSHD, since the hdparm tool shows that my disk (ST1000DX001) is a classic HDD.
Am I able to detect it based on any of the information gathered by issuing the ATA IDENTIFY command?
Peter Cerba
(141 rep)
Aug 4, 2016, 11:59 AM
• Last activity: Aug 4, 2016, 01:12 PM
1
votes
1
answers
726
views
SSHD cloning - something special to keep in mind, compared to HDD?
**The question** is whether the fact the device is a hybrid disk has any (and what) importance when using dd or other such tools, or if all operations run the same as for a HDD.
**The context**
The device in question is a Seagate ST1000LM014.
I used this command to make a backup before sending the laptop in for repairs (soundcard):
dd if=/dev/sdb conv=sync,noerror bs=64K | gzip -c | split -b 2000m - ./sdb_backup.gz.
Predictably the HP service guys formatted the drive, just because. I have no reason (yet) to suspect they swapped it.
I restored the data: cat sdb_backup.gz.* | gunzip -c | dd of=/dev/sdb conv=sync,noerror bs=64K
Now all that is visible from another Windows is the recovery partition, and gdisk -l /dev/sdb gives me:
Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 1953525164 sectors, 931.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): {{I removed it}}
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1953525130
Partitions will be aligned on 2048-sector boundaries
Total free space is 14851 sectors (7.3 MiB)
Number  Start (sector)    End (sector)   Size       Code  Name
   1            2048         1333247    650.0 MiB  2700  Basic data partition
   2         1333248         1865727    260.0 MiB  EF00  EFI system partition
   3         1865728         2127871    128.0 MiB  0C01  Microsoft reserved ...
   4         2127872      1907614565    908.6 GiB  0700  Basic data partition
   5      1907615744      1909415935    879.0 MiB  2700
   6      1909415936      1953513471     21.0 GiB  0700  Basic data partition
gparted shows "unknown" for the type of the first 4 partitions. sdb4, at least, was supposed to be ntfs, but won't mount as such, or ntfs-3g -
mount -r -t ntfs-3g /dev/sdb4 /media/myusername/sdb4
gives:
NTFS signature is missing.
Failed to mount '/dev/sdb4': Invalid argument
The device '/dev/sdb4' doesn't seem to have a valid NTFS.
But that's enough background, I think. I've tried many things for this error and failed to fix it. I am not asking for a solution here.
kaay
(125 rep)
Apr 8, 2016, 11:48 AM
• Last activity: Apr 11, 2016, 06:04 AM
7
votes
1
answers
1321
views
Can I get SSD rw performance while keeping data security if I combine SSD & HDD in btrfs RAID1
I have two storage devices: a classical slow HDD (750GB, /dev/sda) and a faster SSD (128GB, /dev/sdb). Currently I have Ubuntu & Mint installed on the same btrfs partition on the SSD (/dev/sdb5). My btrfs pool consists of /dev/sdb5.
What I want to achieve is data replication on the HDD while keeping SSD performance, something like this:
For the read case it would be ideal for the procedure to go like this:
* read from SSD
* check checksum
* if it verifies.. awesome, give me the data
* if not, fetch data from HDD and perform error correction through duplicate data
I'd manually run some tool (I'm not sure which one... btrfs check --repair perhaps?) every month or so to check the HDD checksums and correct them against bitrot using the SSD data.
For the write case:
* write to SSD
* sync to HDD when write requirements are low, thus not slowing down the system.
Is this possible and how would I do it?
Bottom line: I like btrfs and would like it to duplicate data across the HDD & SSD automatically while keeping the SSD's excellent performance.
Using the SSD as a write cache is not an option since it's missing data duplication and security.
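For context, if plain btrfs RAID1 (without the read-preference behaviour described above) were acceptable, my understanding is the conversion itself would look like this (the HDD partition name is a guess):
btrfs device add /dev/sda1 /                            # add the HDD partition to the existing pool
btrfs balance start -dconvert=raid1 -mconvert=raid1 /   # mirror data and metadata across both devices
btrfs scrub start /                                     # periodic checksum verification against bitrot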
nmiculinic
(375 rep)
Dec 21, 2015, 09:07 PM
• Last activity: Dec 23, 2015, 04:36 PM
1
votes
1
answers
1237
views
Change Linux Bcache device type
I have accidentally configured my SSD as a backing device instead of a caching device. Simply trying either one of the following:
sudo make-bcache -C /dev/sdb1
sudo make-bcache -C /dev/sdb
Gives the errors:
Can't open dev /dev/sdb1: Device or resource busy
Can't open dev /dev/sdb: Device or resource busy
How does one rectify a situation like this?
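Assuming the mistakenly created backing device holds no data that needs to be kept, the usual way out appears to be stopping it and wiping its superblock (a sketch):
echo 1 | sudo tee /sys/block/bcache0/bcache/stop    # stop the running bcache device backed by sdb1
sudo wipefs -a /dev/sdb1                            # remove the old bcache (backing) superblock
sudo make-bcache -C /dev/sdb1                       # recreate it as a caching device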
Gerharddc
(325 rep)
Apr 20, 2015, 02:20 PM
• Last activity: Jun 26, 2015, 02:39 AM
3
votes
2
answers
4517
views
Is it possible to manually manage a Solid State Hybrid Drive?
I'm about to buy a new laptop which has an SSHD. That stands for Solid State Hybrid Drive, which means it has a big, regular HDD with a small (usually 8GB) SSD to cache frequently accessed files.
Now, since I'm a Linux user and I want pure SSD speed, I thought it would be awesome if I could detach the SSD from the HDD and use it completely separately.
I've googled this topic but could not find anything useful. According to Wikipedia there is a standard to communicate with the drives separately, but I may have misunderstood.
Does anyone know anything about this topic?
Semmu
(43 rep)
Feb 1, 2014, 11:49 PM
• Last activity: Jan 23, 2015, 07:08 PM
1
votes
1
answers
673
views
Failover boot setup
I am considering a failover boot setup which can tolerate any single drive loss.
The primary system partitions (/, /home, /usr etc.) shall reside on a flash drive.
/var and /tmp partitions shall be mounted from a hard drive.
Additionally, the hard drive contains shadow copies of all partitions from the flash drive; these should be rather static and can be kept up to date with rsync (?). The hard drive also has its boot configuration preset to use these partitions.
In case the flash drive fails, the system simply boots off the hard drive (to be set up in the BIOS).
In case of hard drive failure, the system should still be operable to allow the hard drive to be replaced.
My questions are:
1. Am I missing something essential in such a configuration (e.g. security)?
2. If the hard drive is lost, the system will have to boot with no /var and /tmp mountable; will it start up at all? Of course this is a recovery mode of operation, which should minimally allow another drive to be set up and mounted properly, but it should at least boot and start all services, including ssh.
The distribution I am looking at is Debian, but this is not a final decision.
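For question 2, my current assumption is that the /var and /tmp entries would have to be marked so a missing hard drive does not block booting (a sketch assuming a systemd-based Debian; UUIDs are placeholders):
UUID=<hdd-var-uuid>  /var  ext4  defaults,nofail,x-systemd.device-timeout=10s  0 2
UUID=<hdd-tmp-uuid>  /tmp  ext4  defaults,nofail,x-systemd.device-timeout=10s  0 2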
port443
(111 rep)
Dec 7, 2014, 06:42 AM
• Last activity: Dec 7, 2014, 06:44 PM