
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
0 answers
37 views
RAID6 Array Inactive, Multiple Missing Drives
Alright, hopefully someone is able to help me out here: my RAID6 array in mdadm just failed today. Sequence of events as follows:

1) The Plex server was not functioning, though everything else appeared to be working fine... it seemed to be a network issue, so I restarted the server/computer.
2) While restarting, the computer seemed to hang during boot... I let it run, went out of the room, and when I came back my kid had turned off the power, saying it looked "frozen"... 6 year olds...
3) Restarted again, it booted up fine, the Plex server connected, and everything seemed fine. Fast forward several hours, in use, no issues.
4) Problem with Plex again, the server not finding files. I looked, and >80% of the files are missing now (Plex itself can still connect to devices, so the original issue may be unrelated to the RAID problem).
5) Stupidly shut down and attempted a reboot; during shutdown a bunch of error messages popped up, but before I could take a picture or read them clearly the computer completed the shutdown and the screen went black.
6) Restarted the computer and the RAID6 array is gone.

My guess is this is not directly related to the earlier issues, other than that the "freeze" and hard shutdown might have pushed a drive that was already on the edge.

What I have been able to ascertain at this point:

1) All 7 drives in the array show up under lsblk; I ran smartctl and they all seem okay (though definitely old).
2) Running cat /proc/mdstat I find two arrays: one is my old array, which is functioning fine, and the other is called md127, which is an incorrect number. The correct one should be md111 (I believe).
3) Under md127 I can see that it is a 7-drive array but only 4 devices are connected, which are 4 of the drives from the lost array. I did check cable connections (I do not have an extra known-good set, unfortunately), but on rebooting, the 4 drives listed under md127 changed to other drives in the array (E C B G instead of C A D B).

Lastly, I can see that something happened this evening around 17:10. Using mdadm --examine, the Update Time for one drive (sdc) is back in February, for two other drives (sde, sdg) it is 17:10:41, and for the last 4 drives (sdb, sdd, sdf, sdh) it is 17:10:56. sdc has 9184 events, sde and sdg have 64641, and the other 4 all have 64644 events.

Sorry for the wall of text, but I will freely admit that I am utterly lost at this point. Any help or direction would be greatly appreciated. The only lead I have found is to attempt to run/create the array again, but I am not sure whether that would work and I am concerned about data loss (which I realize may already be a foregone conclusion, but I am grasping at straws). I suspect that I need to add the missing drives back to the array, but again I am not sure how to do so (especially since I am not clear on what exact order they should be added in). Thank you all again for any help.

Update: On another reboot while trying to troubleshoot, the md127 array is now showing all 7 disks as part of it:
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10] 
md0 : active raid6 sdi sdo sdl sdm sdj sdk sdn
      19534430720 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 0/30 pages [0KB], 65536KB chunk

md127 : inactive sdc(S) sdh(S) sdg(S) sdd(S) sdf(S) sde(S) sdb(S)
      54697261416 blocks super 1.2
       
unused devices: <none>
The other one, md0, is an unrelated array and is working fine. Not sure where to go from here. I believe the (S) after each drive means it is being treated as a spare? I also tried the following:
sudo mdadm --run /dev/md127
mdadm: failed to start array /dev/md/snothplex:111: Input/output error
Edit #2... Fixed-ish? Here is the output of --detail and --examine:
sudo mdadm --detail --scan --verbose
INACTIVE-ARRAY /dev/md127 level=raid6 num-devices=7 metadata=1.2 name=snothplex:111 UUID=58f4414e:13ba3568:b83070b2:b3f561d2
   devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
ARRAY /dev/md/snothplex:0 level=raid6 num-devices=7 metadata=1.2 name=snothplex:0 UUID=1b0a0747:27bc69f4:f298f7ae:591d022e
   devices=/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdm,/dev/sdn,/dev/sdo
sudo mdadm --examine --scan --verbose
ARRAY /dev/md/111  level=raid6 metadata=1.2 num-devices=7 UUID=58f4414e:13ba3568:b83070b2:b3f561d2 name=snothplex:111
   devices=/dev/sdh,/dev/sdf,/dev/sdb,/dev/sdd,/dev/sdc,/dev/sde,/dev/sdg
ARRAY /dev/md/0  level=raid6 metadata=1.2 num-devices=7 UUID=1b0a0747:27bc69f4:f298f7ae:591d022e name=snothplex:0
   devices=/dev/sdo,/dev/sdn,/dev/sdk,/dev/sdj,/dev/sdi,/dev/sdl,/dev/sdm
I attempted to do --assemble --force:
sudo mdadm --assemble --force /dev/md111
mdadm: /dev/md111 not identified in config file.
sudo mdadm --assemble --force /dev/md127
mdadm: Found some drive for an array that is already active: /dev/md/snothplex:111
mdadm: giving up.
I then stopped the array (again referencing the incorrect md127):
samuel3940@snothplex:~$ sudo mdadm --stop /dev/md127
mdadm: stopped /dev/md127
And then tried assemble again:
samuel3940@snothplex:~$ sudo mdadm --assemble --force /dev/md127
mdadm: Fail create md127 when using /sys/module/md_mod/parameters/new_array
mdadm: forcing event count in /dev/sdf(1) from 64641 upto 64644
mdadm: forcing event count in /dev/sdg(3) from 64641 upto 64644
mdadm: clearing FAULTY flag for device 1 in /dev/md127 for /dev/sdf
mdadm: clearing FAULTY flag for device 6 in /dev/md127 for /dev/sdg
mdadm: Marking array /dev/md127 as 'clean'
mdadm: /dev/md127 has been started with 6 drives (out of 7).
And it works. Sorta. Obviously the oldest failed drive is not included, but the files are all back, so currently I am pulling off any crucial data. It has also gone into a resync, unsurprisingly, but I figure reads are fine (just no writes). Otherwise, I suppose it is time to get a new drive or two, wait for the resync to finish, and cross my fingers it doesn't fail again before I can get an alternative setup. Thank you again; I will update if anything changes and with how the resync goes.
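As a rough sketch of the usual next steps (my own notes, not from the original post; array and device names are assumed from the output above), the resync can be watched and the long-stale drive re-added only after the backup finishes:

# Watch the resync progress
watch -n 10 cat /proc/mdstat
sudo mdadm --detail /dev/md127

# Once the backup and the resync are done, re-add the long-stale drive so it
# is rebuilt from parity (its months-old contents are discarded)
sudo mdadm --add /dev/md127 /dev/sdc

# Record the working layout so the array assembles under its proper name at boot
# (config path and initramfs command vary by distribution)
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u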
Samuel Nothnagel (1 rep)
Jun 2, 2025, 03:17 AM • Last activity: Jun 2, 2025, 03:25 PM
0 votes
1 answer
1978 views
how to troubleshoot slow usb drive?
TL;DR: How do I troubleshoot a slow USB 3.1 device plugged into a laptop?

ISSUE: When I copy (tried GUI and terminal), the first few .iso files copy almost instantly at 300 MB/s+, but then the 3rd/4th start to slow to below 12 MB/s (even if copying one at a time).

HARDWARE:
- Dell XPS 15 9520 (Fedora Linux 37 - Workstation)
- SanDisk Extreme GO USB 3.1 64GB (using Ventoy)
- Dell USB-C to USB-A/HDMI Adapter
- Anker PowerExpand+ 7-in-1 USB-C PD Hub

TRIED:
- Reformatting the USB drive (gparted - exfat).
- Using the USB drive with and without Ventoy installed.
- Using different ports, different adapters/hubs.
- Copying lots of .iso files in one go vs. copying one file at a time, waiting until each file fully copied.

Either way, after a few files (around 4GB) the USB drive becomes very slow. Ejecting the USB drive (via GUI or terminal) can take a long time, but after remounting, fast speeds return. When ejecting via the GUI I get the message "device busy", so I now use the terminal and wait until the command has completed.

DRIVER | PORT DETAILS:
$ udevadm info -q path -n /dev/sdc

/devices/pci0000:00/0000:00:14.0/usb4/4-1/4-1.1/4-1.1:1.0/host1/target1:0:0/1:0:0:0/block/sdc


$ lsusb -t

/:  Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/4p, 10000M
    |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/1p, 5000M
        |__ Port 1: Dev 3, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
/:  Bus 03.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/12p, 480M
    |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/5p, 480M
        |__ Port 5: Dev 6, If 0, Class=, Driver=, 480M
    |__ Port 2: Dev 3, If 0, Class=Hub, Driver=hub/4p, 480M
        |__ Port 1: Dev 5, If 0, Class=Hub, Driver=hub/4p, 480M
            |__ Port 1: Dev 9, If 0, Class=Human Interface Device, Driver=usbhid, 12M
        |__ Port 4: Dev 8, If 0, Class=Hub, Driver=hub/4p, 480M
            |__ Port 3: Dev 11, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
            |__ Port 3: Dev 11, If 1, Class=Human Interface Device, Driver=usbhid, 1.5M
            |__ Port 4: Dev 12, If 0, Class=Hub, Driver=hub/4p, 480M
                |__ Port 4: Dev 13, If 0, Class=Hub, Driver=hub/4p, 480M
    |__ Port 3: Dev 15, If 0, Class=Hub, Driver=hub/4p, 480M
        |__ Port 2: Dev 16, If 0, Class=, Driver=, 12M
    |__ Port 6: Dev 4, If 0, Class=Video, Driver=uvcvideo, 480M
    |__ Port 6: Dev 4, If 1, Class=Video, Driver=uvcvideo, 480M
    |__ Port 6: Dev 4, If 2, Class=Video, Driver=uvcvideo, 480M
    |__ Port 6: Dev 4, If 3, Class=Video, Driver=uvcvideo, 480M
    |__ Port 9: Dev 7, If 0, Class=Vendor Specific Class, Driver=, 12M
    |__ Port 10: Dev 10, If 0, Class=Wireless, Driver=btusb, 12M
    |__ Port 10: Dev 10, If 1, Class=Wireless, Driver=btusb, 12M
/:  Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/2p, 20000M/x2
    |__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/4p, 5000M
        |__ Port 2: Dev 3, If 0, Class=Vendor Specific Class, Driver=r8152, 5000M
        |__ Port 3: Dev 4, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
/:  Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/1p, 480M
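One diagnostic sketch worth noting here (an assumption on my part, not something from the question): the fast initial copies are often just the kernel page cache absorbing the writes, and the real stick speed only appears once dirty pages are flushed. The mount point and file names below are placeholders:

# Watch how much dirty data is queued for writeback while a copy runs
watch -n 1 'grep -E "Dirty|Writeback" /proc/meminfo'

# Time a copy including the final flush, so the figure reflects the device speed
cp some.iso /run/media/$USER/VENTOY/ && time sync

# Measure raw sequential write speed while bypassing the page cache
dd if=/dev/zero of=/run/media/$USER/VENTOY/testfile bs=1M count=1024 oflag=direct status=progress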
Xueshe (21 rep)
Apr 10, 2023, 12:16 PM • Last activity: Jun 2, 2025, 01:03 PM
2 votes
1 answer
39 views
subset my .fam file using a .txt file of names
I have a .fam file in plink format that looks like this:

1 I001.HO 0 0 1 1
2 I002.HO 0 0 1 1
3 IREJ-T006.HO 0 0 1 1
4 IREJ-T009.HO 0 0 1 1
5 IREJ-T022.HO 0 0 1 1
6 IREJ-T023.HO 0 0 1 1
7 IREJ-T026.HO 0 0 1 1
8 IREJ-T027.HO 0 0 1 1
9 IREJ-T037.HO 0 0 1 1
10 IREJ-T040.HO 0 0 1 1
11 IREJ-T053.HO 0 0 1 1
12 IREJ-T064.HO 0 0 1 1
13 IREJ-T078.HO 0 0 1 1
14 IREJ-T090.HO 0 0 1 1
15 IREJ-T101.HO 0 0 1 1
16 IREJ-T103.HO 0 0 1 1
17 IREJ-T111.HO 0 0 1 1
18 IREJ-T184.HO 0 0 1 1
19 IREJ-T204.HO 0 0 1 1
20 MAL-005.HO 0 0 1 1
21 MAL-009.HO 0 0 1 1

but with thousands of lines. I only want a subset of these rows in my final data file. I have a .txt file listing each individual I want to keep, which looks like:

IREJ-T184.HO
IREJ-T204.HO
MAL-005.HO
MAL-009.HO

How can I use this .txt file to make a new file with only the rows that include the individuals listed in the .txt file? I want to keep all data in each row, not just the ID name. Thanks!
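A minimal sketch of one way to do this (file names are placeholders; this assumes the IDs in the .txt file match the second column of the .fam file):

# Keep only the .fam rows whose second field (the individual ID) appears in keep.txt
awk 'NR==FNR { keep[$1]; next } $2 in keep' keep.txt data.fam > subset.fam

# Rough equivalent with grep (matches the ID anywhere on the line, as a whole word)
grep -Fwf keep.txt data.fam > subset.fam

If plink itself is being used downstream, its --keep option (which expects family and individual IDs per line) may be a cleaner route.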
Savannah Hay (23 rep)
Oct 29, 2024, 10:17 PM • Last activity: Oct 29, 2024, 11:48 PM
2 votes
3 answers
1070 views
How to know if files inside an encrypted ZFS dataset are actually encrypted or not?
When you make a change to a ZFS dataset, the change is not applied to already existing data. So, if you find a ZFS dataset that says it has encryption ON, is there a way to check whether an individual file's data is really encrypted or not?
Héctor (348 rep)
Sep 19, 2023, 10:48 AM • Last activity: Oct 10, 2024, 10:37 AM
2 votes
1 answer
258 views
Recover data after hard drive failure
A couple of days ago, I tried getting an old server of mine up and running. It was working initially, but after updating and rebooting the system a few times it refused to boot, dying at GRUB Rescue.

I put the hard drive in another Linux machine to see what was going on, and the system partition was displaying as unknown. I ran "fsck" on the partition, which got it working again; however, now the "var" and "usr" folders are missing!

I've tried various things to recover the data, including running "Check" and "Attempt Data Rescue" (which crashes) in GParted, fsck, testdisk, changing the superblock, and creating and mounting an image using dd, and nothing seems to work. Even worse, the files I want to recover are not in "lost+found".

The partition type is ext3. The partition reports 5 GB of data in use; however, only around 2.8 GB can be accessed. I've tried running PhotoRec on the hard drive, but it seems completely pointless, as any recovered files are unnamed, so it'd be impossible for me to recover anything in a sensible manner.
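For reference, a sketch of the usual first step before further recovery attempts (an assumption, not something tried in the question): image the disk once with ddrescue and run all recovery tools against the image rather than the disk. Device and file names are placeholders:

# First pass: copy everything readable, skipping the slow scraping phase
sudo ddrescue -n /dev/sdX failing-disk.img rescue.map

# Second pass: retry the bad areas a few times
sudo ddrescue -d -r3 /dev/sdX failing-disk.img rescue.map

# Then point testdisk/fsck/debugfs at the image (or a copy of it), not the disk
sudo losetup --find --show --partscan failing-disk.img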
Aiden Foxx (21 rep)
Jun 4, 2016, 03:14 PM • Last activity: Oct 3, 2024, 08:00 PM
0 votes
0 answers
74 views
e2fsck prompts for inode optimization: safe to proceed?
I am trying to utilize e2fsck but it produces the following:
sudo e2fsck -f /dev/vgtrisquel/home
e2fsck 1.46.5 (30-Dec-2021)
Pass 1: Checking inodes, blocks, and sizes
Inode  extent tree (at level 1) could be shorter.  Optimize?
It continues to list many other inodes whose extent trees could be shorter. Should I say yes to optimizing all of them? Does this have any potential issues? The data on this drive has been backed up. Thank you so much for any support you can provide.
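As a sketch of how this is commonly handled (an assumption on my part, not advice taken from the question): the optimization prompts can be answered in bulk instead of one by one, with the filesystem unmounted as usual for e2fsck:

# Answer "yes" to every fix/optimization prompt
sudo e2fsck -f -y /dev/vgtrisquel/home

# Or preen mode, which applies only the repairs considered safe automatically
sudo e2fsck -f -p /dev/vgtrisquel/home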
Kitty Cat (157 rep)
Sep 21, 2024, 06:11 AM • Last activity: Sep 21, 2024, 08:21 AM
0 votes
2 answers
90 views
Many of my text files are suddenly missing
I am using Manjaro Linux with a Samsung SSD 840 Pro. I've noticed my Documents folder is suddenly empty. After rebooting, most of the text files I had saved are no longer there. The problem is that some of these contain passwords, such as my BTC wallet, which I only created a few weeks back and do not have a backup of; $15,000 is in the wallet. I definitely would not have deleted these files. Are they lost forever, and why has this happened?
Rachel1983 (23 rep)
Jul 8, 2024, 05:15 PM • Last activity: Jul 9, 2024, 10:26 PM
0 votes
1 answer
292 views
How can I (safely) refresh data on an HDD?
I have an HDD (for simplicity, let's assume it has a single partition). I would like to refresh the data on my HDD: read all data, and write the same data, possibly but not necessarily at the exact same physical location, so that the logical contents of the partition have not changed, but the data has been read and written. This is, of course, a disk maintenance operation for an archival HDD which typically sits unused. There are a few options for doing this on Windows – how would I go about doing this on a (modern) GNU/Linux machine?

Note:
* Command-line is great, GUI is fine.
* I don't mind if this requires root privileges.
* I use Devuan GNU/Linux, but would rather the answer be distribution-neutral.
* x86_64 machine, in case it matters.
* The HDD is connected either via SATA, USB or PCIe. If your answer only covers one of these buses, that's OK.
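A sketch of one common approach (my own suggestion, not from the question): badblocks in non-destructive read-write mode reads each block, writes test patterns, verifies them, and restores the original data, which forces every sector to be rewritten:

# Non-destructive read-write pass over the whole device (unmount it first)
sudo badblocks -nsv /dev/sdX

# Riskier alternative: rewrite every sector in place with dd; an interruption
# or power loss during this run can corrupt data
sudo dd if=/dev/sdX of=/dev/sdX bs=1M conv=notrunc,noerror status=progress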
einpoklum (10753 rep)
Jun 30, 2024, 08:47 PM • Last activity: Jul 1, 2024, 11:28 AM
3 votes
1 answer
233 views
Ways to store data for command line API
I am developing an API in a Unix environment for virtual machines. I have to store some information about the virtual machines in a table. Currently I am using a Python dictionary of virtual machine objects and storing it with pickle. I would like to know about other good ways (if any) to store data for command-line APIs. Any suggestion would be helpful.
Dany (189 rep)
Nov 9, 2014, 10:08 AM • Last activity: Feb 17, 2024, 09:07 PM
6 votes
2 answers
987 views
What is the best place for storing of semi-short-lived data produced by a bash script?
**UPDATE** I came up with an even better solution. Base directory:
/var/tmp
Because think about it: https://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
| /var/tmp | Temporary files to be preserved between reboots. |
That place is basically like /tmp, except that it satisfies the one condition that makes the data set unsuitable for /tmp: it has to survive reboots. Viewed from all possible angles, there are only advantages to placing said data set into /var/tmp:

• It fits the place description well.
• Bash script X can check this place and create a sub-directory in it without sudo, and without deviating from the conventions that recommend not creating sub-directories for your own scripts (ones not shipped with the distribution) elsewhere.
/var/tmp/bash_script_x_logs
• Bash script X can save logs there, for each file processed the first time that day, without needing elevated permissions, sudo, or root:
ls /var/tmp/bash_script_x_logs

bash_script_A
bash_script_B
bash_script_Y
bash_script_Z
• Everyone can access this place, it's shareable, enabling collaborative work on the analyzed bash script Y. For example, imagine there are users A, B and C, either on one host or spread across several. It's 8AM, and user C is the first one working on bash script Y that day, analyzing it with bash script X. Bash script X saves the data set of that first run against bash script Y into:
/var/tmp/bash_script_x_logs
User C works on the file until 11AM, making it more efficient, so that at 11AM it gets the job done with 15% fewer lines than at the start at 8AM. At 10PM user A continues work on bash script Y and in parallel analyzes progress with bash script X, checking lines, characters, file size, etc. As its reference, bash script X then uses the data set saved in the morning at 8AM by user C, instead of re-inventing the wheel by generating its own start data set for comparison with later versions of bash script Y produced that day. So the comparison displayed to anyone running bash script X against bash script Y at 11PM shows the progress made from the first run of the day at 8AM up to 11PM, not just from 10PM to 11PM, and not merely a freshly set start data set. The next day, user B starts to work on bash script Y and analyzes progress with bash script X, first run of the day, 1PM. This value will overwrite the value of the day before. So, to be clear, the data set does not have a shelf life of exactly 24h, but only until midnight. After that, the data set may still be there, so it may exist even the next day, but on that next day, with the next run of bash script X against bash script Y, it will no longer be used as a reference value. Instead bash script X acts as if that old data set doesn't exist: it concludes "Oh, there is no data set for bash script Y saved for today yet! So I'll change that, and if there is anything left from yesterday, or even further back, like last week or 3 days ago, I'll overwrite it!" Optionally, an additional cron job could wipe out everything within
/var/tmp/bash_script_x_logs/
at 11:59 PM every day of the month, every month, every week day. But if you ask me, that's unnecessary overhead. That's nice... I like this.

----

I am developing a small bash script Y, which counts lines, characters and file size of a given bash script X, and stores the results of the first run of the day somewhere. When run additional times for bash script X, it's supposed to compare the stored values with the newly retrieved values and output what changed and by what percentage. For example, "File size: -12%" for bash script X being 12% smaller in file size since the first run of the day.

So, the stored data of the first run is rather short-lived, but is supposed to survive a reboot. It is supposed to have a shelf life of 1 day. So reboots or shutdowns are **NOT** supposed to end the life of the stored data, but as soon as 24h have passed, the next run of bash script Y is supposed to overwrite the old data for bash script X with the new data for bash script X.

**What is the best place to put this specific kind of data in accordance with the Linux File Hierarchy Structure or the Filesystem Hierarchy Standard (FHS)?**

You see: said data is kind of semi-short-lived; if it were fully short-lived it could be undone, overwritten with every reboot or shutdown. In that case, there's no question that /tmp/ would be a decent place for this data produced by the script. But what about this somewhat longer-lasting case, which is supposed to survive reboots and shutdowns, but not a time window beyond 24h? Where to put that kind of data for a working bash script? I assume /var or /usr/local/… maybe as the base directory, and then placing an own sub-directory called 'bash script Y' in it…
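A minimal sketch of the idea (my own illustration with made-up names and paths, not the actual script): store the first data set of the calendar day under /var/tmp and compare later runs of the same day against it.

#!/usr/bin/env bash
# Usage: ./analyzer.sh <target-script>
target=$1
store=/var/tmp/bash_script_x_logs
mkdir -p "$store"

ref="$store/$(basename "$target").$(date +%F)"   # one reference file per calendar day

lines=$(wc -l < "$target")
chars=$(wc -m < "$target")
size=$(stat -c %s "$target")

if [ ! -e "$ref" ]; then
    # First run of the day: record the baseline; files from earlier days are simply ignored
    printf '%s %s %s\n' "$lines" "$chars" "$size" > "$ref"
    echo "Baseline for $(date +%F) stored in $ref"
else
    read -r ref_lines ref_chars ref_size < "$ref"
    printf 'Lines: %+d%%\n'      $(( (lines - ref_lines) * 100 / ref_lines ))
    printf 'Characters: %+d%%\n' $(( (chars - ref_chars) * 100 / ref_chars ))
    printf 'File size: %+d%%\n'  $(( (size  - ref_size)  * 100 / ref_size  ))
fi

Everyone can write to /var/tmp, so no sudo is needed, and the date-stamped reference file survives reboots but stops being used as a reference as soon as the day changes.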
futurewave (213 rep)
Jan 22, 2024, 01:32 PM • Last activity: Jan 23, 2024, 05:58 PM
0 votes
0 answers
342 views
Mounting an old JBOD array with two disks DOS formatted
I have a couple of 500GB disks with data inside that I would like to get back. The two disks were mounted inside a LaCie enclosure, part of a JBOD array, and formatted FAT32. I don't have the external disk electronics anymore, but I have an HP MicroServer, so I'm trying to temporarily recreate the JBOD array just to get the files, but it seems to fail to recognize one of the disks (sdc). The two disks are mounted inside the MicroServer.

First of all, when I launch lsblk I can't see any partition on it, so I can't mount it:

nas:~ # lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 465,8G  0 disk
├─sda1   8:1    0     2M  0 part
├─sda2   8:2    0    64M  0 part /boot/efi
├─sda3   8:3    0     2G  0 part [SWAP]
└─sda4   8:4    0 463,7G  0 part /mnt3/joe/backup_OMV
                                 /mnt2/home
                                 /mnt2/ROOT
                                 /opt
                                 /boot/grub2/i386-pc
                                 /home
                                 /boot/grub2/x86_64-efi
                                 /var
                                 /root
                                 /usr/local
                                 /tmp
                                 /srv
                                 /.snapshots
                                 /
sdb      8:16   0 465,8G  0 disk
└─sdb1   8:17   0 465,8G  0 part
sdc      8:32   0 465,8G  0 disk

Then I tried fdisk to see which filesystem was on the disk. Nothing was shown:

nas:~ # fdisk /dev/sdc -l
Disk /dev/sdc: 465,76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: SAMSUNG HD501LJ
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

As you can see, the sdb disk was shown correctly (and if I mount it, it gets mounted):

nas:~ # fdisk /dev/sdb -l
Disk /dev/sdb: 465,76 GiB, 500107862016 bytes, 976773168 sectors
Disk model: SAMSUNG HD501LJ
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0001f12c

Dispositivo Avvio Start       Fine    Settori   Size Id Tipo
/dev/sdb1   *        63 1953536129 1953536067 931,5G  b W95 FAT32

I've also tried to follow this guide: https://kb.synology.com/en-us/DSM/tutorial/How_can_I_recover_data_from_my_DiskStation_using_a_PC , but I'm stuck at this command:

nas:~ # mdadm -AsfR && vgchange -ay
mdadm: No arrays found in config file or automatically

Maybe it's a dumb question, but am I missing something? Is there a way to mount the JBOD array without specifying the disk (sdc) partition?
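A sketch of one thing sometimes tried in this situation (an assumption on my part; a hardware-spanned LaCie volume may or may not line up with what md produces): build a superblock-less linear array from the two disks and see whether the FAT32 filesystem appears. Device order matters, so both orders may need to be tried, and everything should be mounted read-only:

# Build a linear (concatenated) array without writing any metadata to the disks
sudo mdadm --build /dev/md50 --level=linear --raid-devices=2 /dev/sdb /dev/sdc

# Look for a partition table / filesystem on the combined device
sudo fdisk -l /dev/md50
sudo mount -o ro /dev/md50p1 /mnt    # or /dev/md50 directly if unpartitioned

# If nothing sensible shows up, stop it and retry with the disks swapped
sudo mdadm --stop /dev/md50
sudo mdadm --build /dev/md50 --level=linear --raid-devices=2 /dev/sdc /dev/sdb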
Joe Mauri (1 rep)
Nov 10, 2023, 09:40 AM
0 votes
0 answers
65 views
Rsync transactions?
I use rsync to move changes in my data between two computers. I back up the data dir from PC2 to a USB stick, then I insert the USB stick into PC1 and run rsync to update the data dir on PC1 according to the data dir stored on the USB stick. I use an extra cute tiny 16GB flash drive that I like very much. The data has now grown larger than 16GB.

I have always used this rsync command:

from=/home/y/data; to=/media/y/16g_flash/_backup_data/; rsync -av --delete $from $to

It deleted files from my USB flash drive that I had deleted on my PC, and it added new files or updated files according to last modification time.

Now the data is larger than the space on my USB stick. **I like my 16GB USB stick and wish to solve my problem with my hands and your advice**, without using another stick (256GB, for example). The solution I can see is to copy only updated files and use a text file with a list of files (permissions and last modification time), and use it to compare which files were deleted, updated, or added. The steps are like these:

0. I copy my data dir /home/y/data to both my PCs: PC1 and PC2. So I start with the same file list on the two machines.
1. I store the list of files on my USB stick as /media/y/16g_flash/pc1_data.txt and /media/y/16g_flash/pc2_data.txt. These files are the same at the start.
2. I work on my PC2: delete some files, update some, create new files. So the data dir on my PC2 becomes the truth, while the data dir on my PC1 becomes out of date and needs updating.
3. I run rsync on PC2 to copy just the new and updated files to the USB stick. I also save the new file list of my PC2 to /media/y/16g_flash/pc2_data.txt. This list is the current one, because it was modified later than /media/y/16g_flash/pc1_data.txt. So at this point my USB stick contains only the files I have changed, plus a "snapshot" of the file list of the data dir on my PC2.
4. I go to PC1 and run rsync to copy the new files and update the changed files from the USB stick to PC1. I also delete files that do not exist in /media/y/16g_flash/pc2_data.txt, because that means I deleted them on PC2. Finally I have a synchronized data dir on PC1.

This saves me from needing to store all my files on the USB stick. I just store "one last transaction": the new and updated files, plus the current list of files. This may use just 100MB of space on my USB stick instead of the 20GB that my data dir is using.

Is this possible with rsync? I am not sure how to make rsync compare my data dir with just the list of files in /media/y/16g_flash/pc1_data.txt. Also, I am not sure how to remove the old files on my PC1 when those files do not exist in the list /media/y/16g_flash/pc2_data.txt: does rsync do that, or do I need to write a bash script?
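A sketch of one way the steps above could look (my own illustration; the paths follow the question, the helper file names are made up, and last_sync.stamp has to be created by hand once before the first run):

# On PC2: record the current file list and copy only files changed since the last sync
cd /home/y
find data -type f | sort > /media/y/16g_flash/pc2_data.txt
find data -type f -newer /media/y/16g_flash/last_sync.stamp \
    | rsync -av --files-from=- . /media/y/16g_flash/changed/
touch /media/y/16g_flash/last_sync.stamp

# On PC1: apply the new/updated files, then delete whatever PC2 no longer has
cd /home/y
rsync -av /media/y/16g_flash/changed/ .
find data -type f | sort > /tmp/pc1_data.txt
comm -23 /tmp/pc1_data.txt /media/y/16g_flash/pc2_data.txt | xargs -r -d '\n' rm -f

# Empty changed/ on the stick once PC1 has applied it, so only one "transaction" is stored

rsync also has a batch mode (--only-write-batch / --read-batch) that records a transfer as a single file, but it needs access to a copy of the destination tree to diff against, which is exactly what does not fit on the stick here.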
youni (100 rep)
Sep 26, 2023, 05:28 PM • Last activity: Sep 27, 2023, 11:22 AM
0 votes
1 answer
164 views
Synchronize Data between two locations
I have two locations with a lot of data. Both locations have a slow Internet connection, so it's impossible to sync the data between the locations via the Internet. The idea is therefore to use a sneakernet to sync the data. When I'm at location A, I will plug an NVMe drive into a server. When I'm leaving, I will take the drive with me, and when I'm at location B, I will plug the drive into the other server. So what is the best way to sync the data?

I found the project Sneakersync: https://github.com/lamyj/sneakersync

But this project does not work for me because of some limitations:

1) In the beginning I have about 45TB of data to sync, but a limited amount of time and space on the NVMe.
2) The sync has to work in both directions: from server A => B and from server B => A.
user39063 (201 rep)
Sep 21, 2023, 07:49 AM • Last activity: Sep 21, 2023, 09:28 AM
0 votes
2 answers
2939 views
XFS version 4 vs 5 (RHEL 7 to 8)
In RHEL 7.9 I formatted my large /data volume with XFS 4.5. In RHEL 8.8, XFS is version 5.0. XFS v5 in RHEL 8 can mount an XFS v4 file system created by RHEL 7.9; however, RHEL 7.9 cannot mount an XFS v5 file system created by RHEL 8.8.

- I have to migrate from RHEL 7.9 to RHEL 8.8; in doing so, do I leave my /data as created by RHEL 7.9 with XFS 4.5, or is it worthwhile to move terabytes of data and reformat my data storage under RHEL 8.8 with XFS v5?
- If I leave my data formatted as XFS 4.5, going forward with RHEL 8.8 (no likelihood of using RHEL 9 anytime within the next 4 years), what could be some potential problems? *I've had zero problems using XFS thus far.*
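A small check that may help frame the decision (an assumption about the relevant field, based on standard xfs_info output): the crc flag in the superblock distinguishes the two on-disk formats.

# Inspect the existing filesystem from either release
xfs_info /data | grep -o 'crc=[01]'
# crc=0 means the v4 on-disk format, crc=1 means v5 (metadata checksums)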
ron (8647 rep)
Jul 31, 2023, 03:58 PM • Last activity: Jul 31, 2023, 10:33 PM
0 votes
2 answers
88 views
Making use of second (ext3) hard drive
I have two hard drives. Both are FireCudas, "half ssd, half hdd". One has 1TB of space and the other, with my Manjaro on it, has 480GB. Up until now (for 2 years) i have not used the second drive, however since my install is *pretty* fresh, i thought i'd get around to it. Currently the 1TB only has...
I have two hard drives. Both are FireCudas, "half SSD, half HDD". One has 1TB of space and the other, with my Manjaro install on it, has 480GB. Up until now (for 2 years) I have not used the second drive, but since my install is *pretty* fresh, I thought I'd get around to it. Currently the 1TB drive only has a lost+found dir and is otherwise empty. I want to somehow have my /home folder (I assume that's where all the personal junk is) on the second drive, along with documents, games on Steam, pictures and all the junk. Is that even possible? Is reinstalling every program necessary? How would I go about this? I assume a lot of programs just use /home/me, so would they all have to change to /db/home/me (/db is where my 1TB drive is mounted)?
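As a rough sketch of the usual approach (my assumption, not a confirmed procedure for this setup): copy the home directory to the big drive once, then mount that drive at /home, so paths like /home/me keep working and nothing needs reinstalling. The UUID is a placeholder and the filesystem type should match the actual one (ext3, per the title):

# From a TTY with your user logged out, copy everything including permissions and xattrs
sudo rsync -aAXHv /home/ /db/

# Find the big drive's UUID, then mount it at /home from now on
lsblk -f
echo 'UUID=xxxx-xxxx /home ext3 defaults,noatime 0 2' | sudo tee -a /etc/fstab
# (remove or adjust any old fstab entry that mounted the drive at /db)
sudo reboot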
Kamil Suhak (3 rep)
Jan 15, 2020, 10:24 PM • Last activity: Jul 27, 2023, 07:30 AM
1 vote
2 answers
3299 views
Bash manage space separated list
I have a rather complex shell script which processes several list of values which are read from a file as space separated values e.g. ``` SET1="value1 value2 value3" for i in ${SET1}; do ... ``` I now want to create a similarly formatted list in the script to write out. However if I do (for example)...
I have a rather complex shell script which processes several lists of values which are read from a file as space-separated values, e.g.
SET1="value1 value2 value3"

for i in ${SET1}; do
   ...
I now want to create a similarly formatted list in the script to write out. However if I do (for example) this:
DISCOVERED=''
DISCOVERED+=( us-east-1 )
DISCOVERED+=( us-east-2 )
DISCOVERED+=( us-west-1 )
for REGION in ${DISCOVERED} ; do
    echo $REGION
done
I get no output. I do get output if I specify in ${DISCOVERED[@]}. It appears I am working with different data types in SET1 and DISCOVERED. I can easily append to a string with space-separated values, but I end up with either a leading or trailing space which needs to be cleaned up:
function append_discovered {
   local VALUE="$1"
   if [ -z "${DISCOVERED}" ] ; then
      DISCOVERED="$VALUE"
   else
      DISCOVERED="${DISCOVERED} ${VALUE}"
   fi
}
....but this seems rather cumbersome. I could treat my output variable as an array, but then I either need to convert it back (DISCOVERED="${DISCOVERED[@]}") at appropriate places, or use a different construct for iterating through this list than I have for the other lists. What is the data type of my input data (e.g. $SET1 above) if not an array? Is there a neater way to append to this list and keep the same data type?
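A sketch of the array-based route (my own illustration): keep DISCOVERED as a real array, iterate it directly, and join it with single spaces only at the point where the space-separated string is written out.

DISCOVERED=()                      # start with an empty array, not an empty string
DISCOVERED+=( us-east-1 )
DISCOVERED+=( us-east-2 )
DISCOVERED+=( us-west-1 )

# Iterate the elements (quoted, so values containing spaces would also survive)
for REGION in "${DISCOVERED[@]}"; do
    echo "$REGION"
done

# Join into one space-separated string only when writing it out;
# "${DISCOVERED[*]}" uses the first character of IFS (a space) as the separator,
# so there is no leading or trailing space to clean up
printf 'SET1="%s"\n' "${DISCOVERED[*]}" > regions.conf

As for the input side: $SET1 is just a plain string that happens to contain spaces, and the unquoted expansion in for i in ${SET1} relies on word splitting, which is why the same loop shape appears to work on it.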
symcbean (6301 rep)
Jul 10, 2023, 04:44 PM • Last activity: Jul 11, 2023, 02:09 PM
1 vote
0 answers
156 views
How important is LUKS reserved block count for external drive?
I have LUKS FDE on my external data HDD but recently almost filled it up. When cleaning it up I noticed that the available and free counters don't match. Read that LUKS reserves 5% of total capacity for "root user and system processes" and that reducing it could lead to loss of data. In my case that...
I have LUKS FDE on my external data HDD but recently almost filled it up. When cleaning it up I noticed that the available and free counters don't match. I read that LUKS reserves 5% of total capacity for the "root user and system processes" and that reducing it could lead to loss of data. In my case that takes up 250GB of space on a 5TB HDD.

What I want to know is what role this plays, how important it is, and what the minimum reserved space needed is in the case of external HDDs (which don't have bootable partitions). Does reducing the reserved blocks significantly increase the risk of data loss or make the drive non-mountable? I want to find a good balance between data safety and disk usage.

Edit: Found this answer: https://unix.stackexchange.com/questions/334526/clearing-reserved-space-in-a-luks-encrypted-volume which speaks of reducing to 1GB (0.3% of capacity) for a backup drive. Is that safe, or does it pose any significant risk of data loss? I was planning to leave 0.5-1% if that is acceptable.
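For reference, a sketch of how the reserved-block setting is usually inspected and changed (this assumes the filesystem inside the LUKS container is ext4 and that the reservation in question is the ext filesystem one; the mapper name is a placeholder):

# Show the current reserved block count of the filesystem inside the LUKS mapping
sudo tune2fs -l /dev/mapper/external_data | grep -i 'reserved block'

# Reduce the root-reserved space to 1% of the filesystem (ext2/3/4 only)
sudo tune2fs -m 1 /dev/mapper/external_data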
BharathYes (133 rep)
Mar 26, 2023, 01:16 PM • Last activity: Mar 26, 2023, 02:58 PM
0 votes
2 answers
771 views
read raw data from disk and convert
Please help me find a way to convert the data as follows. I read from disk using the dd utility:

dd if=/dev/sdb skip=8388608 count=560 iflag=skip_bytes,count_bytes | hexdump -C

and I am getting:

000001a0  00 00 00 00 cf 4c 79 ce  00 00 00 00 00 00 00 00  |.....Ly.........|

but I would like to get:

000001a0 ce794ccf00000000 0000000000000000 |.....Ly.........|

I do not have to use hexdump; any other tool can also do the trick. Thank you.
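A sketch of one option (an assumption on my part): od can group the input as 64-bit words, and on a little-endian machine each 8-byte group is printed byte-swapped, which matches the desired output; the z suffix keeps the ASCII column.

dd if=/dev/sdb skip=8388608 count=560 iflag=skip_bytes,count_bytes \
  | od -A x -t x8z
# Example line on a little-endian host:
# 0001a0 ce794ccf00000000 0000000000000000  >.....Ly.........<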
markmi (1 rep)
Feb 10, 2022, 07:51 AM • Last activity: Dec 21, 2022, 12:42 PM
0 votes
1 answer
297 views
Linux completely frozen
My Linux system just froze and I can't even switch to a TTY. I can't force reboot it because it is moving a partition (an HDD with no system files) with really important data which I can't afford to lose. I even tried blindly switching to a TTY and typing commands, but that does not work. SSH is not configured, but CUPS is; I tried to use CUPS to see whether the system can still process something, but no luck. What can I do? I use Nobara 36 with KDE Plasma, no separate boot device. No backups either; I guess I'm going to set them up after I fix this.
Leaves (1 rep)
Oct 18, 2022, 06:47 PM • Last activity: Oct 18, 2022, 07:41 PM
1 vote
2 answers
983 views
Two consecutive OPs after pipe or two jq OPs in one run?
I have to extract data from a slightly mis-formatted JSON string, hence I first pass it through `sed` & `awk`. What I have is a command like: `sed 's/},/},\n/g' test.json |awk '/"characater"/ { gsub("\"characater\"", "\"char" ++n "\"", $0) } 1'| jq -r '.frames.frame.lps.lp|.characters[]|[.code_ascii...
I have to extract data from a slightly mis-formatted JSON string, hence I first pass it through sed & awk. What I have is a command like: sed 's/},/},\n/g' test.json |awk '/"characater"/ { gsub("\"characater\"", "\"char" ++n "\"", $0) } 1'| jq -r '.frames.frame.lps.lp|.characters[]|[.code_ascii,.confidence]|@tsv' to extract data from a JSON string that can be seen here: {"response":{"container":{"id":"41d6efcb-24d6-490d-8880-762255519b5f","timestamp":"2018-Jul-11 19:51:06.461665"},"id":"00000002-0000-0000-0000-000000000015"},"frames":{"frame":{"id":"5583","timestamp":"2016-Nov-30 13:05:27","lps":{"lp":{"licenseplate":"15451BBL","text":"15451BBL","wtext":"15451BBL","confidence":"20","bkcolor":"16777215","color":"16777215","type":"0","ntip":"11","cct_country_short":"","cct_state_short":"","tips":{"tip":{"poly":{"p":{"x":"1094","y":"643"},"p":{"x":"1099","y":"643"},"p":{"x":"1099","y":"667"},"p":{"x":"1094","y":"667"}},"bkcolor":"16777215","color":"0","code":"49","code_ascii":"1","confidence":"97"},"tip":{"poly":{"p":{"x":"1103","y":"642"},"p":{"x":"1113","y":"642"},"p":{"x":"1112","y":"667"},"p":{"x":"1102","y":"667"}},"bkcolor":"16777215","color":"0","code":"53","code_ascii":"5","confidence":"89"},"tip":{"poly":{"p":{"x":"1112","y":"640"},"p":{"x":"1122","y":"640"},"p":{"x":"1122","y":"666"},"p":{"x":"1112","y":"666"}},"bkcolor":"16777215","color":"0","code":"52","code_ascii":"4","confidence":"97"},"tip":{"poly":{"p":{"x":"1123","y":"640"},"p":{"x":"1132","y":"640"},"p":{"x":"1131","y":"665"},"p":{"x":"1123","y":"665"}},"bkcolor":"16777215","color":"0","code":"53","code_ascii":"5","confidence":"97"},"tip":{"poly":{"p":{"x":"1134","y":"640"},"p":{"x":"1139","y":"640"},"p":{"x":"1139","y":"664"},"p":{"x":"1133","y":"664"}},"bkcolor":"16777215","color":"0","code":"49","code_ascii":"1","confidence":"77"},"tip":{"poly":{"p":{"x":"1154","y":"639"},"p":{"x":"1163","y":"639"},"p":{"x":"1163","y":"663"},"p":{"x":"1153","y":"663"}},"bkcolor":"16777215","color":"0","code":"66","code_ascii":"B","confidence":"97"},"tip":{"poly":{"p":{"x":"1164","y":"638"},"p":{"x":"1173","y":"638"},"p":{"x":"1173","y":"663"},"p":{"x":"1163","y":"663"}},"bkcolor":"16777215","color":"0","code":"66","code_ascii":"B","confidence":"94"},"tip":{"poly":{"p":{"x":"1191","y":"637"},"p":{"x":"1206","y":"636"},"p":{"x":"1205","y":"660"},"p":{"x":"1190","y":"661"}},"bkcolor":"16777215","color":"0","code":"76","code_ascii":"L","confidence":"34"},"tip":{"poly":{"p":{"x":"1103","y":"655"},"p":{"x":"1111","y":"655"},"p":{"x":"1111","y":"667"},"p":{"x":"1103","y":"667"}},"bkcolor":"16777215","color":"0","code":"74","code_ascii":"J","confidence":"57"},"tip":{"poly":{"p":{"x":"1103","y":"655"},"p":{"x":"1111","y":"655"},"p":{"x":"1111","y":"667"},"p":{"x":"1103","y":"667"}},"bkcolor":"16777215","color":"0","code":"74","code_ascii":"J","confidence":"57"},"tip":{"poly":{"p":{"x":"1176","y":"638"},"p":{"x":"1185","y":"637"},"p":{"x":"1184","y":"661"},"p":{"x":"1175","y":"662"}},"bkcolor":"16777215","color":"0","code":"52","code_ascii":"4","confidence":"7"}},"ncharacter":"8","characters":{"characater":{"poly":{"p":{"x":"1094","y":"643"},"p":{"x":"1099","y":"643"},"p":{"x":"1099","y":"667"},"p":{"x":"1094","y":"667"}},"bkcolor":"16777215","color":"0","code":"49","code_ascii":"1","confidence":"97"},"characater":{"poly":{"p":{"x":"1103","y":"642"},"p":{"x":"1113","y":"642"},"p":{"x":"1112","y":"667"},"p":{"x":"1102","y":"667"}},"bkcolor":"16777215","color":"0","code":"53","code_ascii":"5","confidence":"89"},"characater":{"poly":{"p":{"x":"1112","
y":"640"},"p":{"x":"1122","y":"640"},"p":{"x":"1122","y":"666"},"p":{"x":"1112","y":"666"}},"bkcolor":"16777215","color":"0","code":"52","code_ascii":"4","confidence":"97"},"characater":{"poly":{"p":{"x":"1123","y":"640"},"p":{"x":"1132","y":"640"},"p":{"x":"1131","y":"665"},"p":{"x":"1123","y":"665"}},"bkcolor":"16777215","color":"0","code":"53","code_ascii":"5","confidence":"97"},"characater":{"poly":{"p":{"x":"1134","y":"640"},"p":{"x":"1139","y":"640"},"p":{"x":"1139","y":"664"},"p":{"x":"1133","y":"664"}},"bkcolor":"16777215","color":"0","code":"49","code_ascii":"1","confidence":"77"},"characater":{"poly":{"p":{"x":"1154","y":"639"},"p":{"x":"1163","y":"639"},"p":{"x":"1163","y":"663"},"p":{"x":"1153","y":"663"}},"bkcolor":"16777215","color":"0","code":"66","code_ascii":"B","confidence":"97"},"characater":{"poly":{"p":{"x":"1164","y":"638"},"p":{"x":"1173","y":"638"},"p":{"x":"1173","y":"663"},"p":{"x":"1163","y":"663"}},"bkcolor":"16777215","color":"0","code":"66","code_ascii":"B","confidence":"94"},"characater":{"poly":{"p":{"x":"1191","y":"637"},"p":{"x":"1206","y":"636"},"p":{"x":"1205","y":"660"},"p":{"x":"1190","y":"661"}},"bkcolor":"16777215","color":"0","code":"76","code_ascii":"L","confidence":"34"}},"det_time_us":"1072592","poly":{"p":{"x":"1088","y":"642"},"p":{"x":"1210","y":"634"},"p":{"x":"1210","y":"661"},"p":{"x":"1087","y":"669"}}}},"det_time_us":"1720812"}}} or on this link: https://drive.google.com/file/d/18wCzjMBpw7SIeVFByAGPQiqCBjg_0te3/view?usp=sharing
Now, that works fine, but I also need to extract .frames.frame.lps.lp.ncharacter from the JSON. I know I could simply put something like cat test.json | jq -r '.frames.frame.lps.lp.ncharacter'; in front of the above, but that won't work, as I need these commands to parse a huge file of JSON strings formatted as seen on the link, and I need the .ncharacter parameter to show up in line with the extracted characters. That means I would like to have output like:

...
X	99
Y	99
previous data formatted in the same way
8
1	97
5	89
4	97
5	97
1	77
B	97
B	94
L	34
6
following data formatted in the same way
Z	99
...

where the 8 on top is the .ncharacter parameter. I have tried:

sed 's/},/},\n/g' test.json |awk '/"characater"/ { gsub("\"characater\"", "\"char" ++n "\"", $0) } 1'| jq -r '[.frames.frame.lps.lp.ncharacter],.frames.frame.lps.lp|.characters[]|[.code_ascii,.confidence]|@tsv'

but that gives me

jq: error (at <stdin>:102): Cannot index array with string "characters"

and I'm not sure why that is...
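For what it's worth, a sketch of a likely cause and workaround (my reading of jq's grammar, not a confirmed fix from the thread): the pipe binds more loosely than the comma, so .characters[] also gets applied to the array built from .ncharacter, hence the error; grouping with parentheses keeps the two outputs separate while still sending both through @tsv.

sed 's/},/},\n/g' test.json \
  | awk '/"characater"/ { gsub("\"characater\"", "\"char" ++n "\"", $0) } 1' \
  | jq -r '.frames.frame.lps.lp
           | ([.ncharacter], (.characters[] | [.code_ascii, .confidence]))
           | @tsv'

This prints the ncharacter value on its own line, followed by one character/confidence pair per line, for each record.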
stdcerr (2099 rep)
Jul 13, 2018, 05:17 PM • Last activity: Oct 15, 2022, 05:38 PM
Showing page 1 of 20 total questions