
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

1 vote
1 answer
1020 views
How can one use Nvidia RTX IO on Linux to allow the GPU to directly access the storage (SSD) with only a slight CPU overhead?
I saw in the [2020-09-01 Official Launch Event for the NVIDIA GeForce RTX 30 Series](https://youtu.be/E98hC9e__Xs?t=1436) that RTX IO in the [Nvidia GeForce RTX 30 Series](https://en.wikipedia.org/wiki/GeForce_30_series) allows the GPU to directly access the storage (SSD) with only a slight CPU overhead when using Microsoft DirectStorage (see the screenshot below). How can one use RTX IO on Linux?

(Screenshot from the launch presentation.)
Franck Dernoncourt (5533 rep)
Sep 8, 2020, 01:36 AM • Last activity: Aug 4, 2025, 05:21 PM
3 votes
1 answer
60 views
Smartctl triggers: please convert it to SG_IO
When I run smartctl -i /dev/sda there is no output (apart from the smartctl welcome banner), and the kernel logs:

    smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO

Searching the internet shows that smartctl should have been using the new interface for over a decade. I might be running older versions of Linux, but not quite that old! What's going on?
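Not part of the original question, but a minimal thing to try, assuming smartctl's auto-detection is falling back to the legacy SCSI ioctl path for this controller; -d sat and --scan are standard smartmontools options, whether they help in this particular case is an assumption:

```
# Force the SAT (ATA pass-through) device type instead of letting smartctl
# fall back to the legacy SCSI ioctl:
sudo smartctl -d sat -i /dev/sda

# Show which device type smartctl auto-detects for each disk:
sudo smartctl --scan
```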
user242579 (161 rep)
Jul 30, 2025, 08:58 AM
2 votes
1 answer
2554 views
"Read Capacity(10) failed" and "Sense Key : Illegal Request" with a SATA-to-USB adapter
What is the meaning of these error messages in the system log, when I plug in [a 2.5" spinning-disk SATA drive](https://www.seagate.com/files/staticfiles/support/docs/samsung-ds/100698122c.pdf) that I _know_ works, using a [USB-to-SATA adapter](https://web.archive.org/web/20230401071626/https://sabrent.com/products/ec-ss31)?

    Jun 25 16:08:07 hostname kernel: [181603.928983] scsi 6:0:0:0: Direct-Access SABRENT 2210 PQ: 0 ANSI: 6
    Jun 25 16:08:07 hostname kernel: [181603.931640] sd 6:0:0:0: Attached scsi generic sg1 type 0
    Jun 25 16:08:07 hostname kernel: [181603.938380] sd 6:0:0:0: [sdb] Read Capacity(10) failed: Result: hostbyte=DID_OK driverbyte=DRIVER_OK
    Jun 25 16:08:07 hostname kernel: [181603.938391] sd 6:0:0:0: [sdb] Sense Key : Illegal Request [current]
    Jun 25 16:08:07 hostname kernel: [181603.938398] sd 6:0:0:0: [sdb] Add. Sense: Invalid command operation code
    Jun 25 16:08:07 hostname kernel: [181603.939443] sd 6:0:0:0: [sdb] 0 512-byte logical blocks: (0 B/0 B)
    Jun 25 16:08:07 hostname kernel: [181603.939449] sd 6:0:0:0: [sdb] 0-byte physical blocks
    Jun 25 16:08:07 hostname kernel: [181603.942357] sd 6:0:0:0: [sdb] Test WP failed, assume Write Enabled
    Jun 25 16:08:07 hostname kernel: [181603.943386] sd 6:0:0:0: [sdb] Asking for cache data failed
    Jun 25 16:08:07 hostname kernel: [181603.943393] sd 6:0:0:0: [sdb] Assuming drive cache: write through
    Jun 25 16:08:07 hostname kernel: [181603.944506] sd 6:0:0:0: [sdb] Optimal transfer size 33553920 bytes not a multiple of physical block size (0 bytes)
    Jun 25 16:08:07 hostname kernel: [181603.948248] sd 6:0:0:0: [sdb] Read Capacity(10) failed: Result: hostbyte=DID_OK driverbyte=DRIVER_OK
    Jun 25 16:08:07 hostname kernel: [181603.948255] sd 6:0:0:0: [sdb] Sense Key : Illegal Request [current]
    Jun 25 16:08:07 hostname kernel: [181603.948257] sd 6:0:0:0: [sdb] Add. Sense: Invalid command operation code
    Jun 25 16:08:07 hostname kernel: [181603.960998] sd 6:0:0:0: [sdb] Attached SCSI disk

Specifically:

    […] Read Capacity(10) failed: Result: hostbyte=DID_OK driverbyte=DRIVER_OK
    […] Sense Key : Illegal Request [current]
    […] Add. Sense: Invalid command operation code

The device spins up fine mechanically with no unusual noises, but none of the partitions are being detected, and as a result they don't show up in the file manager. I don't think this matters much, but just to provide some context: it's an Ubuntu 20.04-based distribution (elementary OS 6.1 Jólnir) running on a Samsung Series 9.
Kevin E (540 rep)
Jun 25, 2023, 09:06 PM • Last activity: Jul 29, 2025, 10:45 AM
5 votes
1 answer
5464 views
using overlay2 on CentOS 7.4
How do I install and enable the overlay2 storage driver on CentOS 7? I have done many Google searches on this, and I see that version 7.4 is required. So I typed the following commands to confirm that the intended server is running CentOS 7.4:

    [sudoUser@localhost ~]$ cat /etc/centos-release
    CentOS Linux release 7.4.1708 (Core)
    [sudoUser@localhost ~]$ rpm --query centos-release
    centos-release-7-4.1708.el7.centos.x86_64
    [sudoUser@localhost ~]$

But there does not seem to be any yum install overlay2 or yum install overlayfs.

>**So what specific steps are required in order to install and enable overlay2 on CentOS 7.4?**
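For illustration, a sketch under the assumption that "overlay2 storage driver" refers to Docker's overlay2 driver, which is built on the kernel's overlay module rather than shipped as a yum package; the daemon.json path assumes a standard Docker installation:

```
# Confirm the overlay kernel module is available (shipped with RHEL/CentOS 7.4 kernels):
sudo modprobe overlay
lsmod | grep overlay

# overlay2 on XFS additionally needs the filesystem to have ftype=1:
xfs_info / | grep ftype

# Tell Docker to use overlay2, then restart the daemon:
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}
EOF
sudo systemctl restart docker
docker info | grep -i "storage driver"
```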
CodeMed (5357 rep)
Apr 9, 2018, 10:47 PM • Last activity: Jul 23, 2025, 11:00 AM
2 votes
3 answers
629 views
How to securely export a device (HDD)?
So I have a Scientific Linux 6.3 machine called "B" (SL is a RHEL clone, so the question is basically Red Hat-related) with an extra "A" HDD besides the system HDD, and a notebook also running SL 6.3. They are in a /24 IPv4 subnet and can fully reach each other.

**Q**: How can I export the "A" HDD device to the notebook, so that on the notebook I can see the "A" HDD as a block device, and locally encrypt it using LUKS? (I know how to do this last, encrypting part.) The important thing is that the connection must be secured (SSL?) so that no one can intercept the data that I encrypt on the notebook.

**OR**: is it already protected by LUKS, so that an SSL connection between the notebook and machine "B" would just be overhead?

Extra requirement: the export of the device must be routable over the network.

**P.S.: so the main question is: is encrypted communication needed between the notebook and machine "B", or is all the data on the HDD already encrypted when it leaves the notebook (even the LUKS passphrase too??)**
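Not from the question itself, but a sketch of one common approach, assuming the nbd-server/nbd-client tools are available on both machines and that /dev/sdb, the export name hddA, and the user/host names are placeholders. Exporting the raw device and unlocking LUKS only on the notebook means the passphrase never leaves the notebook and only ciphertext crosses the wire; the SSH tunnel below still protects the NBD protocol traffic and metadata:

```
# On machine "B" (/etc/nbd-server/config): export the extra disk and listen
# only on loopback, so all access has to come through the SSH tunnel.
#   [generic]
#       listenaddr = 127.0.0.1
#   [hddA]
#       exportname = /dev/sdb

# On the notebook: forward the NBD port over SSH, attach the export as a
# local block device, then put LUKS on it.
ssh -N -L 10809:127.0.0.1:10809 user@B &
sudo modprobe nbd
sudo nbd-client localhost -N hddA /dev/nbd0
sudo cryptsetup luksFormat /dev/nbd0
```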
gasko peter (5634 rep)
Sep 17, 2012, 09:00 AM • Last activity: Jul 13, 2025, 04:56 PM
0 votes
4 answers
2711 views
Find out which users are hogging the most disk space on our data server
We are supposed to store our ongoing projects on a rather small (~4 TB) data server. Not surprisingly, it is constantly overflowing, and people have to manually move older files off it. Is there an easy (i.e. standard command-line) way to find out which users take up the most space in a directory? That is, summing up the size of all files in a directory and all sub-directories, per owning user?

Edit: ideally not following symlinks.
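A minimal sketch using GNU find and awk, assuming the project data lives under /data (a placeholder path); find does not follow symlinks unless told to, which matches the edit:

```
# Sum file sizes per owner, staying on one filesystem (-xdev) and not
# following symlinks (find's default behaviour):
sudo find /data -xdev -type f -printf '%u %s\n' \
  | awk '{ bytes[$1] += $2 }
         END { for (u in bytes) printf "%12.1f GiB\t%s\n", bytes[u]/2^30, u }' \
  | sort -rn
```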
MechEng (233 rep)
Mar 26, 2019, 07:39 AM • Last activity: Jul 11, 2025, 05:04 PM
0 votes
1 answer
5320 views
virsh pool storage basics
How or where were these pools created? Where are the configuration files?

    $ virsh pool-list --all
     Name                 State      Autostart
    -------------------------------------------
     default              active     yes
     Downloads            active     yes

    $ virsh pool-info Downloads
    Name:           Downloads
    UUID:           fdbe7407-67c4-405d-8e46-9c2695a8b353
    State:          running
    Persistent:     yes
    Autostart:      yes
    Capacity:       219.88 GiB
    Allocation:     34.87 GiB
    Available:      185.01 GiB

    $ virsh pool-info default
    Name:           default
    UUID:           cb72b02e-b436-4ec9-9460-d297744c4c69
    State:          running
    Persistent:     yes
    Autostart:      yes
    Capacity:       219.88 GiB
    Allocation:     34.95 GiB
    Available:      184.93 GiB

I believe that the pools were created by the virt-manager GUI. Is there free space on default? I think that the Downloads pool is probably superfluous.
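For reference, the pool definitions can be inspected directly; virsh pool-dumpxml is a standard libvirt command, and /etc/libvirt/storage is where libvirt normally persists pool definitions (the exact layout can vary by distribution):

```
# Show the XML definition of each pool, including the target path it points
# at; two pools pointing at the same filesystem would explain the nearly
# identical capacity figures.
virsh pool-dumpxml default
virsh pool-dumpxml Downloads

# Persistent pool definitions as stored by libvirt:
ls -l /etc/libvirt/storage/ /etc/libvirt/storage/autostart/
```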
Thufir (1970 rep)
Nov 21, 2017, 10:14 AM • Last activity: Jun 26, 2025, 11:10 AM
0 votes
1 answer
2340 views
Udev rule refuses to trigger when hard drive is added
I'm on Ubuntu 20.04. I'm trying to execute an action when a specific hard drive is connected, using a udev rule that identifies the drive by UUID. The script will eventually mount the drive and run rsync; to rule out any errors in that process, I'm currently just trying a test command. The hard drive is connected via SATA hot-swap and has a UUID which is confirmed to be correct. I've followed numerous guides that seem to use this exact syntax, and still absolutely nothing happens, however I try. Here are the steps I've taken:

- Created a file called 90-backup.rules in /etc/udev/rules.d with the content:

      ACTION=="add", ENV{ID_FS_UUID}=="b527aadc-9dce-4ead-8937-e53ca2cfac84", RUN+="/bin/echo 1 >> /rule.test"

- Tried udevadm control --reload-rules && udevadm trigger
- Tried systemctl reload udev
- Running udevadm test /dev/sdX I can see that it lists the rules file: Reading rules file: /etc/udev/rules.d/90-backup.rules
- Using udevadm info /dev/sdX, confirmed that the ID_FS_UUID environment variable is correct and can be read.
- Tried adding KERNEL=='sd?' before the ACTION argument.

Since the server is currently live, I haven't tried rebooting it yet. It would also be good to establish once and for all what is necessary to have udev reload the rules properly without a reboot, for proper debugging. Any help is appreciated. All the best, Andreas
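One detail worth spelling out with a sketch: RUN+= executes the program directly, without a shell, so the >> redirection in the rule above is handed to /bin/echo as literal arguments instead of appending to /rule.test. A variant that wraps the command in a shell (UUID and test path taken from the question; that this is the only problem is an assumption):

```
# /etc/udev/rules.d/90-backup.rules
# udev does not run RUN+= programs through a shell, so do the redirection
# inside an explicit sh -c:
ACTION=="add", ENV{ID_FS_UUID}=="b527aadc-9dce-4ead-8937-e53ca2cfac84", RUN+="/bin/sh -c 'echo 1 >> /rule.test'"
```

After editing, udevadm control --reload followed by re-plugging the disk (or udevadm trigger --action=add on the specific device) should re-evaluate the rule without a reboot.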
A.H (1 rep)
Jan 31, 2022, 01:23 AM • Last activity: Jun 19, 2025, 05:00 AM
4 votes
3 answers
2272 views
Merging partitions in Debian
I have recently inherited someone else's problem with a linux server. This is one of those all in one debian based lamp setups. It recently ran out of storage and no one seems to know anything about linux in general here. I have managed to expand the drive in VMWare and created the partition as seen...
I have recently inherited someone else's problem with a Linux server. It is one of those all-in-one Debian-based LAMP setups. It recently ran out of storage, and no one here seems to know anything about Linux in general. I have managed to expand the drive in VMware and created the partition as seen in the screenshot below. The challenge is to merge the root partition /dev/sda1 with /dev/sda4. Note: the start and end blocks are not back to back, and I can't afford much downtime on this server. /dev/sda3 could probably be merged too, but that is not important.

(Screenshot of the partition layout.)

UPDATED: df -h output (screenshot).

UPDATED 2: fdisk -l output (screenshot).
Chappy (51 rep)
Jan 5, 2018, 03:22 PM • Last activity: Jun 7, 2025, 04:02 PM
1 vote
1 answer
2956 views
how to run fio verify reliably in linux
I am using fio on disks exposed through iSCSI, with the following parameters:
fio --name=randwrite --ioengine=libaio --iodepth=64 --rw=randrw --rwmixread=50  --bs=4k-2M --direct=1 -filename=data --numjobs=1 --runtime 36000 --verify=md5 --verify_async=4 --verify_backlog=100000 --verify_dump=1 --verify_fatal=1 --time_based --group_reporting
With the above parameters, can fio issue overlapping concurrent writes larger than the page size? If so, how does fio verify the checksum, given that I/O atomicity is not guaranteed beyond the page size?
rishabh mittal (31 rep)
Jul 27, 2019, 04:52 AM • Last activity: Jun 5, 2025, 02:06 AM
1 vote
1 answer
130 views
Wear level and total bytes written in SATA SSD
On a Samsung SATA SSD (i.e., not an NVMe disk), these are the SMART values obtained by running sudo smartctl -a /dev/sda:

    SMART Attributes Data Structure revision number: 1
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
      5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
    177 Wear_Leveling_Count     0x0013   099   099   000    Pre-fail  Always       -       18
    241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       10452411061

On the same disk, running sudo skdump /dev/sda gives the following output:

    Overall Status: GOOD
    ID# Name                     Value Worst Thres Pretty         Raw            Type    Updates Good Good/Past
      5 reallocated-sector-count 100   100   10    0 sectors      0x000000000000 prefail online  yes  yes
    177 wear-leveling-count      99    99    0     18             0x120000000000 prefail online  n/a  n/a
    241 total-lbas-written       99    99    0     350725.752 TB  0x3d9b036f0200 old-age online  n/a  n/a

I have the following queries:

1) skdump reports a total-lbas-written value of 350725.752 TB written. Is this correct?

2) Based on the answer provided in another post, the smartctl raw value for Total_LBAs_Written is 10452411061, which equates to roughly 4.87 TiB written (i.e. 10452411061/2/1024/1024/1024). This differs significantly from the value reported by skdump. Is this value accurate? The sector size is 512 bytes.

3) After looking at various posts on Super User and Stack Exchange: for Samsung SSD drives the Wear_Leveling_Count attribute indicates how much wear levelling has occurred, but it is not clear which figure should be considered: the **RAW_VALUE** column or the **VALUE** column. And does a RAW_VALUE of 18 imply that only 18% of the SSD's life is left?
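A quick sanity check on query 2, assuming 512-byte LBAs as the drive reports:

```
# Total_LBAs_Written (raw) multiplied by the 512-byte sector size:
echo $(( 10452411061 * 512 ))                      # 5351634463232 bytes
echo "scale=2; 10452411061 * 512 / 1024^4" | bc    # ~4.87 (TiB)
```

That supports the smartctl-derived figure of roughly 5 TB written; the 350725.752 TB shown by skdump is many orders of magnitude larger, which suggests it is decoding the same raw value differently.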
KDM (116 rep)
May 27, 2025, 08:12 AM • Last activity: May 27, 2025, 10:17 AM
0 votes
2 answers
93 views
Why did my backup folder with large amounts of repeated data compress so poorly?
I have a folder with around seventy subfolders, each containing a few tarballs which are nightly backups of a few directories (the largest being /home) from an old Raspberry Pi. Each is a full backup; they are not incremental. These tarballs are not compressed; they are just regular .tar archives. (They were originally compressed with bzip2, but I have decompressed all of them.) This folder totals 49 GiB according to du -h. I compressed this entire folder into a tar archive compressed with zstd. However, the final archive is 32 GiB, not much smaller than the original. Why is this the case, considering that the vast majority of the data should be common among several files, since I obviously was not replacing every file every day?
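Not part of the question, but for illustration: at typical compression levels zstd only matches data within a window of a few megabytes, far smaller than the tens of gigabytes separating two copies of the same file in consecutive nightly tarballs, so cross-backup redundancy is invisible to it. A sketch of recompressing with long-range matching enabled (whether a 2 GiB window is large enough to span the duplicates in this particular layout is an assumption):

```
# Compress with a 2^31-byte (2 GiB) long-range matching window, using all cores:
tar -cf - backups/ | zstd -T0 --long=31 -o backups.tar.zst

# Decompression must be allowed the same window size:
zstd -d --long=31 backups.tar.zst
```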
kj7rrv (217 rep)
May 24, 2025, 04:33 AM • Last activity: May 24, 2025, 08:08 AM
0 votes
1 answer
2186 views
How to get a PCIe Mini-SAS card working in Linux?
I can't seem to get this PCIe card working, and I can't find any info/drivers about it on the internet. All the SATA HDDs in my server are connected using this card, which came with the system. It works instantly, plug-and-play, in Windows. But on Linux, nothing. lspci shows the card, but no drives or /dev/sdX devices show up, and I don't see any messages/errors regarding it in dmesg (not sure what I should be looking for, though). I'm using Ubuntu Desktop 20.04, by the way. (And if you're curious why a desktop on a server: it's a headless box, but I installed the desktop so I can VNC in as well as SSH in.)

Here is the card. It says "Newer MAXPower RAID mini-SAS 6G PCIe 2.0".

EDIT: Here's what lspci -v shows:

    06:00.0 RAID bus controller: HighPoint Technologies, Inc. Device 1e10 (rev 03)
            Subsystem: HighPoint Technologies, Inc. Device 0000
            Physical Slot: 3
            Flags: fast devsel
            Memory at 90940000 (64-bit, non-prefetchable) [disabled] [size=128K]
            Memory at 90900000 (64-bit, non-prefetchable) [disabled] [size=256K]
            Expansion ROM at 90960000 [disabled] [size=64K]
            Capabilities: Power Management version 3
            Capabilities: MSI: Enable- Count=1/1 Maskable- 64bit+
            Capabilities: Express Endpoint, MSI 00
            Capabilities: Advanced Error Reporting
            Capabilities: Virtual Channel
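One diagnostic not shown in the post: lspci -k reports which kernel driver, if any, has bound to the device. The absence of a "Kernel driver in use" line for the HighPoint controller would be consistent with no /dev/sdX devices ever appearing:

```
# Show the driver bound to the controller (if any) and the candidate modules:
lspci -k -s 06:00.0
```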
Pecacheu (101 rep)
Jan 4, 2022, 08:06 PM • Last activity: May 9, 2025, 02:04 PM
2 votes
1 answer
281 views
Can Qemu / OVMF boot from virtual drives with logical sector size 4096?
I am running Debian bookworm on a physical host and use Qemu / KVM to run a bunch of virtual machines there. Most of the VMs are UEFI-based, and most of the virtual drives have 512 bytes logical sector size for historical reasons: The respective VMs actually are backed by raw images of the 512n physical disks they were running on in former times. That is, I typically have sections like the following in the Qemu command lines:
-drive if=pflash,format=raw,readonly=on,file=/usr/share/OVMF/OVMF_CODE_4M.ms.fd \
-drive if=pflash,format=raw,file=/root/qemu-kvm/OVMF_VARS_4M.ms.p8v.fd \
\
-object iothread,id=thread1 \
-blockdev driver=file,node-name=file03,aio=threads,cache.direct=on,cache.no-flush=off,discard=unmap,filename=/vm/vm-p8v.img \
-device virtio-blk-pci,iothread=thread1,write-cache=on,drive=file03,logical_block_size=512,physical_block_size=4096 \
All of these machines start up and run fine. However, yesterday I decided to waste 12 hours of my remaining lifetime by trying to run one of those VMs with a logical block size of 4096 bytes. That is, I'd like to have logical_block_size=4096 instead of logical_block_size=512 in the code shown above. The reason why I'd like to test this is complex and distracting, so let's put that aside for the moment.

The sector size conversion for the virtual VM disk was quite tricky, but I knew that before. Among other things, I had to move the GPT within the virtual HDD and had to manipulate it, as well as the protective MBR. I also had to "convert" the OS itself and the contents of the EFI system partition. Now I am nearly there, but for the life of me, I can't make OVMF recognize the EFI partition or boot from it.

I have tried a bunch of things to find out what's going on:

- Attach the guest OS installation ISO to the VM, fire up the VM, boot from the ISO, and from within the ISO, repair the OS bootloader. This works in that it creates a new UEFI boot menu entry. When entering the UEFI setup in the VM afterwards, that new entry shows up and can be selected for boot. However, booting fails with an obscure error message from the guest OS.

- Fire up the VM, go into the OVMF UEFI setup before the guest OS starts, and try to create an appropriate new boot entry from there. The problem here is that OVMF does not find the EFI partition, and therefore I can't create that boot entry. That's weird, because I have verified a dozen times that the EFI partition exists, is formatted as FAT32, and actually contains the bootloader. There is *definitely nothing wrong with the EFI partition*, except for the fact that the virtual disk it resides on now reports a logical sector size of 4096 bytes instead of 512 bytes.

- Fire up the VM, go into the UEFI shell and examine the mapping. Interestingly, I see only BLKx: devices there (if that's the right term), but none of the usual fsx: devices. Again, this means that the OVMF UEFI BIOS does not recognize the EFI system partition.

Putting everything together, my current theory is that the OVMF version Debian bookworm ships is not able to deal with disks that report a logical sector size of 4096 bytes. Of course, I have spent quite some time trying to find out whether that's the case, but I failed in doing so. I wasn't able to spot an answer, not even vague hints. Therefore, I am interested in answers to the following questions:

- Is OVMF 2022.11-6+deb12u1 (the version that Debian bookworm ships) able to deal with logical 4096-byte sectors on the underlying virtual disk / EFI partition if used with Qemu 7.2+dfsg-7+deb12u7 (again, the version that Debian bookworm ships)?

- If the answer to the above question is no, is there a newer version of the OVMF UEFI BIOS that is capable of doing this, and where can I get that version?

As a final note, I have not mentioned the guest OS in the respective VM by intention. Maybe the guest OS causes further problems at a later point, but as a matter of fact, the immediate problem to solve next is that OVMF does not recognize the EFI partition or its contents, which doesn't have anything to do with the guest OS. I'd like to emphasize again that the EFI partition definitely exists and contains all the files needed. I can mount the EFI partition without any issues and can verify the existence of those files, and have done so a dozen times.
**Update #1**

In the meantime, I have found the following document: https://www.linux-kvm.org/downloads/lersek/ovmf-whitepaper-c770f8c.txt

In the section "Motivation", it states:

> Support for booting off disks (eg. pass-through physical SCSI devices) with a 4kB physical and logical sector size, i.e. which don't have 512-byte block emulation.

Well, this kind of answers the question, but on the other hand, I don't know which part of the goals is actually implemented, and I still have to state that this doesn't work for me. I am not yet at the point where I give up, though.
Binarus (3871 rep)
Sep 9, 2024, 08:02 AM • Last activity: May 1, 2025, 01:22 AM
2 votes
0 answers
81 views
How to handle Duplicity not being able to do backups to Google Cloud Storage bucket because bucket contains aborted backup
I have set up a Google Cloud Storage bucket for my Duplicity backups. The bucket has a retention policy of 1 year. Today Duplicity got interrupted while doing the backups, and now, every time I want to run a backup, it tries to delete the aborted backup:

    Attempt of _do_delete Nr. 2 failed. ClientError: An error occurred (AccessDenied) when calling the DeleteObject operation: Access denied.

How can I just leave the aborted file stub (it can't be deleted due to retention) and let Duplicity start a new backup anyway?

----

### Workaround if the bucket retention is not locked

* Remove the bucket retention and give Duplicity "Storage Object Admin" access to the bucket.
* Rerun Duplicity.

But I'd prefer a solution that works even if the destination is read-only / WORM.
PetaspeedBeaver (1398 rep)
Apr 20, 2025, 01:38 PM • Last activity: Apr 28, 2025, 09:34 PM
295 votes
13 answers
932740 views
linux: How can I view all UUIDs for all available disks on my system?
My /etc/fstab contains this:

    # / was on /dev/sda1 during installation
    UUID=77d8da74-a690-481a-86d5-9beab5a8e842 /    ext4    errors=remount-ro 0    1

There are several other disks on this system, and not all disks are being mounted to the correct location (for example, /dev/sda1 and /dev/sdb1 are sometimes reversed). How can I see the UUIDs for all disks on my system? Can I see the UUID for the third disk on this system?
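For reference, the usual ways to list UUIDs for every block device on a current system (all of these ship with util-linux/udev; blkid may need root to report every device):

```
# Tabular view of every disk and partition with its filesystem UUID:
lsblk -o NAME,SIZE,FSTYPE,UUID,MOUNTPOINT

# Classic per-device listing:
sudo blkid

# The by-uuid symlinks maintained by udev tell the same story:
ls -l /dev/disk/by-uuid/
```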
Stefan Lasiewski (20733 rep)
Aug 18, 2010, 04:14 AM • Last activity: Apr 4, 2025, 02:12 PM
0 votes
0 answers
62 views
Where did my disk space go and why can't I analyze it?
I am using Zorin OS Lite mainly for rclone. The PC's main job is to copy/sync files from one cloud to another. A few very important business applications, such as Tabletop Simulator, are also installed. Now I see that my 240 GB SSD is already almost full. Using sudo baobab I can see that most space is taken up by the /home/pc directory, but I cannot analyze much further. I could imagine that rclone has something to do with my problem, but I can't seem to get any more information. Here is the result of the sudo baobab command (screenshot).

Also, using sudo du -h /home/pc | sort -hr | head did not give me any useful information about where the majority of my disk space has gone (screenshot).

Now I would like to know what the exact problem is and how to solve it (e.g. delete temporary files, maybe created by rclone?). Thanks in advance for any help. I am not very experienced with Linux systems, so easily understandable answers are appreciated.

Edit: Using the command ls -la /home/pc I am getting the following output (screenshot).
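A sketch of a few follow-up checks that often narrow this kind of problem down (the paths come from the question; whether rclone's cache is actually the culprit is an assumption):

```
# Per-directory totals one level below /home/pc, hidden directories included,
# without crossing into other filesystems:
sudo du -xh --max-depth=1 /home/pc | sort -hr | head -n 20

# rclone's default VFS cache location (assuming the VFS cache is in use):
sudo du -sh /home/pc/.cache/rclone 2>/dev/null

# Disk space can also be pinned by deleted files still held open by a process:
sudo lsof -nP +L1
```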
owmal (101 rep)
Dec 5, 2023, 04:12 PM • Last activity: Mar 18, 2025, 12:17 PM
0 votes
0 answers
12 views
Will Duplicity download anything from the destination when doing a `remove-all-but-n-full`?
I'm using Google Cloud Storage to store backups taken by Duplicity. Since I just want to keep 4 full backups at the destination, I run:

    duplicity remove-all-but-n-full 4 --force \
        --s3-endpoint-url=https://storage.googleapis.com boto3+s3://example

Since I want to avoid read charges, I wonder whether the above command will read anything from the server, or whether it judges which files to delete purely from the files in the local archive dir.
PetaspeedBeaver (1398 rep)
Mar 12, 2025, 05:00 PM • Last activity: Mar 12, 2025, 05:14 PM
1 vote
0 answers
21 views
How to collect statistics of data transmission in Linux USB gadget device?
I am working on an embedded Linux system (kernel 5.10.188), where the USB mass_storage gadget is enabled and configured. After the mass_storage gadget is enabled and connected to a USB host, is there a way to gather statistics on the amount of data transferred between the gadget and the host?
wangt13 (631 rep)
Mar 10, 2025, 05:56 AM • Last activity: Mar 10, 2025, 06:02 AM
-1 votes
1 answer
263 views
Can systemctl daemon-reload be executed from rescue mode on Debian?
Is there a way to run systemctl daemon-reload in rescue mode (Debian)? If so, how?

I was trying to change the mount point of one of my drives, so I edited my fstab file accordingly. Then I tried to run systemctl daemon-reload, but it wouldn't. I can't remember the error it gave me, but I figured that if I restarted my computer it would fix itself... BIG PROBLEM. After I restarted, I wasn't able to get past the boot screen. The filesystem check would fail on that drive, so I was stuck on a repeating screen.

I went into rescue mode to figure out what the problem is, and from what I understand, I simply need to run daemon-reload, but since the only way I can use my system is through rescue mode, I can't run the command. From my understanding it won't run in rescue mode because either (1) I am asking it to restart itself and it can't do that, or (2) systemd is not running in rescue mode. (2) seems improbable to me, because why wouldn't it be running? But I get the following error that makes me think it isn't:

> Running in chroot, ignoring command 'daemon-reload'

Then with sudo I get:

> unable to allocate pty: no such device

I'm assuming the pty does not exist in rescue mode, hence (2). Is my understanding of the problem accurate? Does anyone have any suggestions on what I can try to make daemon-reload run?
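For illustration, a sketch of the usual way out, assuming the rescue environment really is a chroot (as the "Running in chroot" message suggests). There is no running systemd inside a chroot to reload, and none is needed: systemd regenerates its mount units from /etc/fstab at the next boot. The device name below is a placeholder for the drive whose filesystem check failed:

```
# Repair the filesystem that the boot-time fsck complained about:
fsck -y /dev/sdXn        # placeholder device name

# Fix the edited fstab entry; adding 'nofail' keeps a failing mount from
# blocking the boot, e.g.:
#   UUID=...  /mnt/newpoint  ext4  defaults,nofail  0  2
nano /etc/fstab

# Dry-run the fstab by mounting everything it lists:
mount -a

# Leave the chroot and reboot normally; no daemon-reload is required.
exit
```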
Alemayhu Semenh (3 rep)
Mar 9, 2025, 02:59 PM • Last activity: Mar 9, 2025, 10:46 PM