
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
1 answer
2068 views
PV command to show progress in Dialog with transfer rate in Mbits
I am using `pv -n` to read partitions, piping through gzip to compress the data and store it in a file. While the data is read and written, a while loop shows progress using the Linux dialog utility. This works great; the progress bar is updated. I also want to display the transfer/read speed in Mbit/s, and to update a DB table with both the progress and the speed. Since the while loop reads one line at a time, each update must be printed on a new line. See my code below:

```bash
(pv -n /dev/$partitions | gzip -c >$path/${filename// /_}-${partitions}.img.gz) 2>&1 |
while IFS= read -r progress; do
    echo "processing /dev/$partitions currently completed $progress" >/run/log.log
    echo $progress | dialog --title "Capturing OS image of $hdd" --gauge " now creating image of HDD $hdd writing $filename Image, please wait...\n\n\n Processing Partition $i of $totalparts\n\n This process may take some time to complete\n\n" 13 90 0
    mysql -u root -pxxxxxx dbi -h localhost | insert into speed(progress, speed) Values ("$line", "mbits")
done
```

With `pv -n`, only the numeric progress value is printed, each on its own line:

```
( /data/pv -n /dev/nvme0n1p1 | gzip -c >/run/test.img )
5
9
29
67
100
```

That works great for the progress bar, but I also want to update my DB with the average speed in Mbit/s. When I run pv with the rate/ETA arguments, the progress is updated on the same line instead of printing new lines, and that breaks my script:

```
(pv -rep /dev/nvme0n1p1 | gzip -c >/run/test.img )
[4.9MiB/s] [====>      ]  4% ETA 0:00:19
```

The ideal output would look like this:

```
(pv -rep /dev/nvme0n1p1 | gzip -c >/run/test.img )
[ 4.18MiB/s] [====>                          ] 14% ETA 0:00:19
[14.49MiB/s] [===========>                   ] 54% ETA 0:00:19
[24.39MiB/s] [========================>      ] 74% ETA 0:00:19
[44.29MiB/s] [===========================>   ] 78% ETA 0:00:19
[46.19MiB/s] [=============================> ] 98% ETA 0:00:19
[57.99MiB/s] [==============================>] 100% ETA 0:00:19
```

I can use awk, sed and grep to extract the required data and use it in my while loop, but I cannot get this to work. If I use `pv -F $'%t %r %e\n'` I get the desired per-line format, but then I cannot pipe it through awk, grep or tr. See the example below; it returns nothing:

```
(pv -F $'%t %r %e\n' /dev/nvme0n1p1 | gzip -c >/run/test.img ) 2>&1 | tr -d ':[]'
```

And if I don't redirect stderr to stdout, the same command shows output, but the `tr -d ':[]'` (deleting the characters `:[]`) has no effect:

```
(pv -F $'%t %r %e\n' /dev/nvme0n1p1 | gzip -c >/run/test.img ) | tr -d ':[]'
0:00:01 [25.2MiB/s] ETA 0:00:18
0:00:02 [23.7MiB/s] ETA 0:00:18
0:00:03 [ 100MiB/s] ETA 0:00:07
0:00:04 [ 199MiB/s] ETA 0:00:01
```

If I use other argument combinations such as `pv -n -r -e`, pv ignores the other parameters and just returns the numeric progress value on a new line. Maybe there is an alternative to pv that achieves exactly what is described above, or maybe someone can help with the pv command.
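A possible direction (a sketch, not from the question): when pv writes to a terminal it separates status updates with carriage returns (`\r`) rather than newlines, which is why the rate/ETA output stays on one line. Forcing pv's display with `-f` and translating `\r` to `\n`, for example `(pv -f -rep /dev/$partitions | gzip -c >out.img.gz) 2>&1 | tr '\r' '\n'`, yields one line per update that a `while read` loop can consume. The block below simulates pv's stream with printf (the rates are made up) so the mechanics are visible:

```shell
# Simulated pv status stream: three in-place updates separated by \r.
# tr turns the carriage returns into newlines, so each update becomes
# a separate line for the read loop.
printf '[ 4.1MiB/s] 14%%\r[14.4MiB/s] 54%%\r[57.9MiB/s] 100%%\n' |
tr '\r' '\n' |
while IFS= read -r update; do
    echo "db update: $update"
done
```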
user2107349 (147 rep)
Sep 22, 2021, 10:24 AM • Last activity: Jun 3, 2025, 08:02 PM
0 votes
2 answers
135 views
how do i use pv in a script that has a seq for loop
I have a script that has a for loop in it that runs a specific amount of times, and I was wondering if it is possible to implement the pv command in my script to add a progress bar. Here is my code:
```bash
for i in $(seq 1 10000000); do
    echo "iteration ${i}"
    #rest of for loop
done
```
log:

```log
iteration 1
iteration 2
iteration 3
iteration 4
iteration 5
iteration 6
iteration 7
iteration 8
iteration 9
iteration 10
```

and so on.
If anyone can help me with this, any help would be greatly appreciated.

Edit: I changed the code a bit:

```bash
for i in $(seq 1 10000000); do
    echo "iteration ${i}" | pv -l
    #rest of for loop
done
```
and it gave me this output:

```log
1
1.00  0:00:00 [15.2k/s] [                    ]
2
1.00  0:00:00 [14.7k/s] [                    ]
3
1.00  0:00:00 [15.6k/s] [                    ]
4
1.00  0:00:00 [15.0k/s] [                    ]
5
1.00  0:00:00 [15.0k/s] [                    ]
6
1.00  0:00:00 [11.5k/s] [                    ]
7
1.00  0:00:00 [5.54k/s] [                    ]
8
1.00  0:00:00 [14.7k/s] [                    ]
9
1.00  0:00:00 [13.2k/s] [                    ]
10
1.00  0:00:00 [13.4k/s] [                    ]
```
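A restructuring that is often suggested (a sketch, not an answer from this page): the pipe inside the loop starts a fresh pv for every `echo`, so each bar only ever sees one line. Piping the whole loop's output through a single `pv -l` (count lines rather than bytes), with `-s` set to the expected iteration count, gives one bar for the entire run:

```shell
command -v pv >/dev/null || { echo "pv not installed; skipping"; exit 0; }
total=100                       # demo size; use 10000000 for the real run
for i in $(seq 1 "$total"); do
    echo "iteration ${i}"
    # rest of for loop
done | pv -l -s "$total" > loop.log   # one pv meters every line the loop emits
```

The progress bar goes to stderr; the loop's actual output lands in `loop.log` (a stand-in name) or wherever it is redirected.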
harry mckay (9 rep)
Apr 10, 2025, 01:14 PM • Last activity: Apr 10, 2025, 05:28 PM
1 vote
2 answers
4362 views
How to redirect dd to pv?
This is my `dd` command which I need to modify:

```sh
dd if=/tmp/nfs/image.dd of=/dev/sda bs=16k
```

Now I would like to use `pv` to limit the speed of copying from the NFS server. How can I achieve that? I know that `--rate-limit` does the job, but I am not sure how to construct the pipes.
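One common construction (a sketch, not an answer from this page): let pv read the image and throttle the stream, and let dd do the writing. `-L`/`--rate-limit` takes bytes per second with the usual suffixes, so for the question's paths it would look like `pv --rate-limit 10m /tmp/nfs/image.dd | dd of=/dev/sda bs=16k` (the 10m figure is an arbitrary example). Below, the same shape exercised harmlessly with /dev/zero and a byte counter:

```shell
command -v pv >/dev/null || { echo "pv not installed; skipping"; exit 0; }
# Push 64 KiB through a 1 MiB/s limit; pv passes the bytes through
# unchanged, so the count at the end equals the count going in.
dd if=/dev/zero bs=1k count=64 2>/dev/null |
  pv -q --rate-limit 1m |
  wc -c    # prints 65536
```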
Pablo (245 rep)
Sep 20, 2015, 08:05 PM • Last activity: Dec 5, 2024, 04:28 PM
0 votes
1 answer
61 views
Pipe viewer not working with mysql
I have a new web hosting server (Ubuntu 20) that I need to manage over ssh (shell is bash). It has this strange issue that pv does not print anything when piping to mysql. I.e. gunzip /dev/null This prints progress as expected. I would appreciate any insight on this matter.
Anton Duzenko (149 rep)
Jun 10, 2023, 09:54 AM • Last activity: Jun 21, 2023, 11:00 AM
0 votes
0 answers
24 views
Replicating SD card: hangs at 99%
The goal is to replicate an SD card. Despite successfully replicating the SD card a few years ago, a recent attempt hangs at 99%, despite attempts from 2 different PCs.

```shellsession
user@JUPITER Desktop$ date; sudo sh -c 'pv sdcard.image >/dev/disk5'; date
Fri Mar 17 00:42:51 EDT 2023
 119GiB 6:10:50 [0.00 B/s] [==========================================> ] 99% ETA 0:00:18
```

Is there any way to measure the SD card to determine whether it is somehow too small to accommodate the image? `diskutil list` returned:

```
/dev/disk5 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *127.9 GB   disk5
   1:             Windows_FAT_32 NO NAME                 254.8 MB   disk5s1
   2:                      Linux                         127.7 GB   disk5s2
```

The command:

```bash
ls -l sdcard.image
```

returns:

```
-rw-r--r--  1 user  staff  128043712512 Oct 29  2021 sdcard.image
```
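The two sizes in the question can be compared directly: `ls` reports the image as 128,043,712,512 bytes, while `diskutil` shows the card as 127.9 GB (decimal). A sanity check along these lines (the device size here is the approximate decimal figure from diskutil; the exact value would come from `diskutil info` on macOS or `blockdev --getsize64` on Linux) suggests the image is slightly larger than the card, which would explain a hang just short of 100%:

```shell
image_bytes=128043712512     # from: ls -l sdcard.image
disk_bytes=127900000000      # approximate: "*127.9 GB" from diskutil list
if [ "$image_bytes" -gt "$disk_bytes" ]; then
    echo "image exceeds card by $((image_bytes - disk_bytes)) bytes"
fi
```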
gatorback (1522 rep)
Mar 17, 2023, 10:55 AM • Last activity: Mar 17, 2023, 11:01 AM
14 votes
5 answers
69251 views
pvcreate: Can't use /dev/sda: device is partitioned
I'm currently installing Arch Linux and when I try to create a physical volume it gives me this error:

```
Can not use /dev/sda: device is partitioned
```

What is this error and how can I get rid of it? PS: I have formatted the disk with `mkfs.ext4 /dev/sda`
dead101 (141 rep)
Dec 9, 2021, 01:18 PM • Last activity: Dec 12, 2022, 11:41 PM
3 votes
1 answer
18229 views
Multithreaded xz, with gzip, pv, and pipes - is this the most efficient I can get?
I'm excited to learn that xz now supports multithreading:

```
xz --threads=0
```

But now I want to utilise this as much as possible. For example, to recompress gzips as xz:

```
gzip -d -k -c myfile.gz | pv | xz -z --threads=0 - > myfile.xz
```

This results in my processor being more highly used (~260% CPU to xz, yay!). However:

- I realise that gzip is not (yet) multithreaded,
- I think that either pv or the pipes may be restricting the number of (IO?) threads.

Is this true and, if so, is there a way to make this more efficient (other than removing pv)?
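One way to check whether the pipe or pv is the limit (a sketch; `myfile.gz` follows the question, the rest is stand-in): run a named pv on each side of xz using `-c` (cursor positioning) and `-N` (name), then compare the two rates on a terminal while it runs. If the `raw` meter far outruns the `xz` meter, xz is the bottleneck rather than the plumbing:

```shell
command -v pv >/dev/null && command -v xz >/dev/null || { echo "pv/xz missing; skipping"; exit 0; }
seq 1 200000 | gzip -c > myfile.gz   # stand-in input for the demo
gzip -d -c myfile.gz | pv -cN raw | xz -z --threads=0 - | pv -cN xz > myfile.xz
xz -t myfile.xz && echo "archive OK" # integrity check of the result
```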
tudor -Reinstate Monica- (545 rep)
Oct 23, 2019, 11:25 PM • Last activity: Sep 19, 2022, 02:17 PM
2 votes
1 answers
442 views
pv gives 100% progress with small buffer
I have to do a long task (convert image format) for every file in a folder. I managed to use pv to show an estimate of the duration (sleep simulates the processing time here):

```
pv -B 1 =(find . -iwholename '*.png') | xargs -l sh -c 'sleep 0.1'
```

I suspected there might be an internal buffer somewhere (such as in pv) that could mess up the estimation, because the list of files itself is not very large (tens of KB), so I limited the buffer to 1 byte with `-B 1`. I wondered whether that could cause performance issues (a disk can only read a minimum of 512 bytes per read, discarding the other 511), but the data may reside in memory because `=()` is used. I feel like `-B 1` is a bad solution. More importantly, there is strange behaviour with other values of `-B`. For example:

```
pv =(yes | head -n 100) -B 1 | xargs -l sh -c 'sleep 1'
```

works as expected (although it starts strangely around 17% and only starts rendering after a few seconds), but

```
pv =(yes | head -n 100) -B 10 | xargs -l sh -c 'sleep 1'
```

immediately prints 100%, even though I thought it should take 100 seconds to process, updating at intervals of 5%, because 200 bytes are printed (100 × `y` plus `\n`). Why?

**Note**: `=()` requires zsh. I tried with `<()` and the progress bar doesn't work. Maybe pv can't estimate the size of a fifo, but it can with a file?
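An alternative shape (a sketch, not from the question): since the real unit of work is one file per line, `pv -l` can meter lines instead of bytes, with `-s` set to the expected file count, which sidesteps byte-buffer effects entirely. The directory, file names and the conversion step here are stand-ins:

```shell
command -v pv >/dev/null || { echo "pv not installed; skipping"; exit 0; }
mkdir -p demo && touch demo/a.png demo/b.png demo/c.png    # stand-in files
find demo -name '*.png' > filelist
pv -l -s "$(wc -l < filelist)" filelist |
  xargs -I{} sh -c 'echo "converting {}"'   # stand-in for the real converter
```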
rafoo (121 rep)
Aug 2, 2022, 10:07 AM • Last activity: Aug 2, 2022, 05:55 PM
0 votes
1 answer
671 views
How can I estimate the whole time of a write process including sync?
Progress and estimated time to write, without and with sync
---

I have found no tool (or straightforward method) that will include flushing the buffers when showing the progress and estimating the time for the whole write process, ETA (Estimated Time of Arrival).

- pv can show the time for the progress as seen by the operating system, but if the target drive is slow and there is a lot of RAM, it shows only the time until the data are written to a buffer. This time can be a small fraction of the real time until the buffers are flushed.
- dd writes a final report about the amount of data, the time used and the transfer rate. It can also be made to write progress reports. It used to give a much better estimate than pv, but nowadays USB drives and memory cards are still very slow, while the other components are fast and the memory available for buffers is big. So dd will also finish long before the buffers are flushed.
- I can time the write process including sync with the time command, `time ( write command; sync )`, and it will give me the real time used, which is useful, but only after it has finished. It does not show the progress and does not estimate the total remaining time.
- I can run iotop to show read and write processes and how fast things are read and written, but it does not estimate the remaining time.

How to show progress and estimated time for the *whole* write process?
---

How can I show progress and estimated time for the whole write process, ETA (Estimated Time of Arrival), including flushing the buffers with sync?

Link to related question
---

- https://unix.stackexchange.com/questions/554237/understanding-sync-command-operations-in-linux#554237
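A partial remedy often used with dd (a sketch; the device path is hypothetical): `conv=fsync` makes dd fsync the output before it reports completion, so its final report, and `status=progress` near the end, covers the flush as well: `dd if=image.img of=/dev/sdX bs=4M conv=fsync status=progress`. The same flags exercised harmlessly against a file:

```shell
# Requires GNU dd. Write 4 MiB and include the fsync in dd's accounting.
dd --version >/dev/null 2>&1 || { echo "GNU dd required; skipping"; exit 0; }
dd if=/dev/zero of=out.bin bs=1M count=4 conv=fsync status=progress 2>dd.log
wc -c < out.bin    # prints 4194304
```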
sudodus (6666 rep)
Nov 26, 2019, 05:52 PM • Last activity: Aug 2, 2022, 11:17 AM
4 votes
2 answers
2970 views
How to format hexdump as xxd, possible for xxd -revert?
I wish to dump the raw content of an SD card into a file for inspection. Most of it is zeroes. Learnt from this SuperUser answer, I could use pv to show the progress of od and hexdump. Both ETAs are 1.5 hours:

```
# pv /dev/sdd | od -x --endian=big > sdd_file
... ... ... [> ] ... ETA 1:34:42
```

and

```
# pv /dev/sdd | hexdump -C > sdd_file
... ... ... [> ] ... ETA 1:35:01
```

However, xxd would need 11 hours:

```
# pv /dev/sdd | xxd -a -u > sdd_file
... ... ... [> ] ... ETA 10:48:53
```

I prefer xxd mostly because of the -revert possibility, but xxd would take way too long to process a disk. How can I format hexdump (or od) output to produce the same file format as xxd, so that the file can be -reverted by xxd?
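Whatever replacement formatter is chosen, its output has to match xxd's own layout closely enough for `xxd -r` to rebuild the bytes, so it is worth validating the round trip on a small sample before spending hours on a full device (a generic sanity check, not a solution from this page):

```shell
command -v xxd >/dev/null || { echo "xxd not installed; skipping"; exit 0; }
printf 'hello, sd card' > sample.bin
xxd sample.bin > sample.hex        # dump in xxd's revertible format
xxd -r sample.hex > restored.bin   # revert it
cmp sample.bin restored.bin && echo "round trip OK"
```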
midnite (613 rep)
Dec 18, 2021, 06:59 PM • Last activity: Jun 29, 2022, 12:33 PM
1 vote
1 answer
584 views
Cut off pipe after N bytes
I am piping information to a file using `myTool > file.txt 2>&1`, but the tool might generate gigabytes worth of data, and I need to cut off after the first N bytes, let's say 2 MB. It seems `pv` cannot do that, and sadly it is not an option to go by lines (`head`). Is there no basic tool to do this? Ideally, it would work like this: `myTool | limiter --amount 2M > file.txt 2>&1`.
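For what it's worth, GNU `head` itself can cut by bytes rather than lines: `-c` takes a byte count with the usual multiplier suffixes, so a `limiter` in the shape of the question could simply be `myTool 2>&1 | head -c 2M > file.txt` (stderr redirected before the pipe so both streams are counted). A self-contained demonstration with `yes` standing in for `myTool`:

```shell
# Cut an endless stream down to its first 2 MiB (GNU head, suffix 2M).
yes | head -c 2M > file.txt
wc -c < file.txt    # prints 2097152
```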
ThE_-_BliZZarD (111 rep)
May 9, 2022, 09:08 PM • Last activity: Jun 9, 2022, 04:28 PM
4 votes
1 answer
6897 views
Syntax When Combining dd and pv
In:

```
sudo dd if=/dev/sda bs=64k | pv --size 1.5t | dd of=/dev/sdb
```

Does the block size for dd go on the left side of the pipeline after the input, as shown, or on the right after the output? With the pipe viewer's size option, is it correct that there is no equals sign before the value? Is it okay to use a decimal value as shown above?
Less Static (61 rep)
Apr 10, 2015, 09:07 PM • Last activity: Jun 4, 2022, 07:14 PM
1 vote
1 answer
192 views
How to avoid hardware damage with dd
I wanted to create an encrypted USB stick. The tutorial I used said something like 'to avoid pattern-based attacks, dump random input to the drive'. So I did, by dd'ing /dev/urandom to the drive. It took some time and the stick got really hot, but I thought, whatever. I had to do that a bunch of times, because things did not work, and now the stick does not answer any more: it does not show up in lsblk or lsusb, and does not light up when plugged in. Of course this can be a coincidence, but I would like to avoid it happening again. So how can I? I have seen people pipe the dd output into pv, something like this:
```
dd if=./sth.iso | pv | dd of=/dev/sdb
```
This is neat to see a progress bar. But I know you can use pv to slow down throughput through a pipe:

```
echo unix stackexchange | pv -qL 10
```

Now, what is a safe amount of throughput? Are there other ways to avoid damage? Am I mistaken in thinking that I broke the drive with dd?
bananabook (92 rep)
May 21, 2022, 09:41 PM • Last activity: May 22, 2022, 02:41 PM
0 votes
0 answers
128 views
did `dd` buffer entire ~700MB iso image?
I read that to transfer an ISO image to a pendrive and print progress I should execute the pipeline below:

```
$ dd if=$IMG bs=4M | pv -s 668M | sudo dd of=/dev/sdc bs=4M
```

pv should print and update a progress bar in the terminal. However, my progress bar jumped to 100% and the command hung. I could not kill the dd writing to the pendrive, so I looked under the hood.

journal:

```
Jan 13 01:57:32 nixos systemd: systemd-udevd.service: Watchdog timeout (limit 3min)!
Jan 13 01:57:32 nixos systemd: systemd-udevd.service: Killing process 1830 (systemd-udevd) with signal SIGABRT.
Jan 13 01:57:54 nixos sudo: pam_unix(sudo:session): session closed for user root
Jan 13 01:57:54 nixos kernel:  sdc: sdc1 sdc2
```

iotop:

```
TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
   1830 be/4 root        0.00 B/s    0.00 B/s  0.00 % 99.99 % systemd-udevd
   3788 be/4 root        0.00 B/s    0.00 B/s  0.00 % 99.99 % dd of=/dev/sdc bs=4M
      8 be/4 root        0.00 B/s    0.00 B/s  0.00 % 99.99 % [kworker/u64:0+flush-8:32]
```

I tried with sudo and as root; same effect. pv shows 100% immediately, but `dd of=...` blocks for about 3 minutes. That is not consistent with what I read on the internet and saw myself on other machines. What happened?

PS:

```
$ sudo hdparm -W /dev/sdc

/dev/sdc:
 write-caching = not supported
```

PPS: I tried `status=progress`; same effect (100% immediately, then it hangs until the write is complete).
lord.didger (164 rep)
Jan 17, 2022, 11:39 PM
0 votes
1 answer
1063 views
Get hash from file while you copy or move it
Say I copy a file with pv: is there any way to also get a hash (md5, sha1, etc.) without having to read the origin twice? It has to work with big files and block devices. Example command which does not work as expected:

```
pv /dev/sda1 | tee md5sum > /mnt/backups/sda.backup
```
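A shape often used for this (a sketch, requires bash for process substitution; the paths are stand-ins): `tee` duplicates the stream, so one copy goes to the backup file while the other is hashed on the fly. Note that in the question's command, `tee md5sum` writes a second copy into a file literally named `md5sum` rather than running the hasher. For the real devices this would be `pv /dev/sda1 | tee >(md5sum > sda.md5) > /mnt/backups/sda.backup`. Demonstrated on a small file:

```shell
printf 'backup me' > src.bin
tee >(md5sum > src.md5) < src.bin > copy.bin   # one read, two consumers
sleep 1                        # give the md5sum branch a moment to finish
cmp -s src.bin copy.bin && echo "copy OK, md5: $(cut -d' ' -f1 src.md5)"
```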
Smeterlink (295 rep)
Dec 14, 2021, 12:09 PM • Last activity: Dec 14, 2021, 04:44 PM
1 vote
2 answers
447 views
Execute single command with write caching disabled
Whenever I write a large file to a USB drive, write caching makes it very difficult to track progress. I can disable it in various ways, like `echo 1000000 > /proc/sys/vm/dirty_bytes`, which solves the problem. However, then I also have to re-enable the old setting (or reboot). How can I tell my system to bypass the dirty-bytes cache entirely for a single write command? For example, something like:
DIRTYBYTES=nope pv file.bin > /dev/sdb1
Or
pv file.bin | cache_buster > /dev/sdb1
To clarify: the current behavior of `pv file.bin > /dev/sdb1` is that the meter shoots to 100% immediately, and then the command hangs waiting for the USB drive to actually finish writing. Instead, I want to modify the command so that the meter increases gradually at about the real write rate of the USB drive, **without altering** the dirty_bytes etc. settings in such a way that the next command will also bypass the cache.
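One per-command approach (a sketch; the device path follows the question): route the write through dd with O_DIRECT, which bypasses the page cache for that one transfer, so pv's meter tracks the device's real rate: `pv file.bin | dd of=/dev/sdb1 bs=4M oflag=direct iflag=fullblock` (`iflag=fullblock` stops dd from doing short reads from the pipe). The flags exercised harmlessly against a temp file, falling back if the filesystem lacks O_DIRECT support:

```shell
dd if=/dev/zero of=direct.bin bs=4k count=8 oflag=direct 2>/dev/null ||
  dd if=/dev/zero of=direct.bin bs=4k count=8 2>/dev/null   # fallback: no O_DIRECT here
wc -c < direct.bin    # prints 32768 either way
```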
Bagalaw (1085 rep)
Nov 22, 2021, 03:28 AM • Last activity: Nov 26, 2021, 09:00 PM
1 vote
1 answer
773 views
1TB drive compressed shows only 3.8GB, what did I do wrong?
On Linux Mint 20.2 Cinnamon I would like to create a disk image of my secondary (SATA) disk drive containing Windows 10 (not that it matters now), directly gzip'ed onto an NTFS-formatted external HDD using Parallel gzip = pigz, i.e. compressed on the fly. My problem is that inside the resulting compressed file there is a somehow _twisted_ (wrong) size of the contents, which I would like you to have a look at: the 1 TB drive uncompressed shows only 3.8 GB, whereas its compressed size is 193 GB.

```
$ gzip --list sata-disk--windows10--2021-Sep-24.img.gz 
         compressed        uncompressed  ratio uncompressed_name
       206222131640          3772473344 -5366.5% sata-disk--windows10--2021-Sep-24.img
```

```
-rwxrwxrwx 1 vlastimil vlastimil 193G 2021-Sep-24 sata-disk--windows10--2021-Sep-24.img.gz
```

## Notes to the below shell snippet

- Serial number censored, of course (ABCDEFGHIJKLMNO)
- I tried to force the size with the --size option of the pv command
- The exact byte size of the whole disk comes from `smartctl -i /dev/sdX`

## The shell snippet I just ran

```bash
dev=/dev/disk/by-id/ata-Samsung_SSD_870_QVO_1TB_ABCDEFGHIJKLMNO; \
file=/media/vlastimil/4TB_Seagate_NTFS/Backups/sata-disk--windows10--"$(date +%Y-%b-%d)".img.gz; \
pv --size 1000204886016  "$file"
```

I am quite sure the problem is in how I used the pipe, or pv for that matter, but I fail to prove it. A test scenario with a regular file (~2 GB) works just fine and as expected. Can this be an error in gzip, maybe? What am I doing wrong here, please? Thank you in advance.

Perhaps the last thing to cover is the versions of pv and pigz:

- I am using a packaged version of pv: 1.6.6-1
- I am using a compiled version of pigz: 2.6
Vlastimil Burián (30515 rep)
Sep 24, 2021, 07:21 PM • Last activity: Sep 26, 2021, 06:26 PM
4 votes
1 answer
1040 views
pv not printing to a pipe
Executing this command displays the output on the console:

```
(pv -F $'%t %r %e\n' /dev/nvme0n1p1 | gzip -c >/run/test.img )
0:00:01 [25.2MiB/s] ETA 0:00:18
0:00:02 [23.7MiB/s] ETA 0:00:18
0:00:03 [ 100MiB/s] ETA 0:00:07
0:00:04 [ 199MiB/s] ETA 0:00:01
```

But when the output is piped to another command, it does not work. Below, the same command's output is piped to another command and it does not display anything at all. I have redirected stderr to stdout and passed it to `tr -d` so it can remove the characters `:[]`:

```
(pv -F $'%t %r %e\n' /dev/nvme0n1p1 | gzip -c >/run/test.img ) 2>&1 | tr -d ':[]'
```

And below, the same command without redirecting stderr to stdout. I still don't get the desired result: you can see the `tr -d ':[]'` is completely ignored:

```
(pv -F $'%t %r %e\n' /dev/nvme0n1p1 | gzip -c >/run/test.img ) | tr -d ':[]'
0:00:01 [25.2MiB/s] ETA 0:00:18
0:00:02 [23.7MiB/s] ETA 0:00:18
0:00:03 [ 100MiB/s] ETA 0:00:07
0:00:04 [ 199MiB/s] ETA 0:00:01
```

I have spent countless hours trying to figure this out, searched Stack Exchange and all the forums, but I cannot get my head around how to fix this. I have also tried using a file descriptor (2>&3), but still no luck.
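One detail that may explain this (an observation about pv's behaviour, not an answer from this page): pv turns its display off when its stderr is not a terminal, and `2>&1 |` makes stderr a pipe, which is exactly the case where nothing appears. The `-f`/`--force` flag tells pv to keep writing the display regardless. A small stand-in demo (a scratch file instead of /dev/nvme0n1p1):

```shell
command -v pv >/dev/null || { echo "pv not installed; skipping"; exit 0; }
dd if=/dev/zero of=src.bin bs=1k count=512 2>/dev/null
# Without -f this pipeline stays silent; with it, pv's display
# survives the 2>&1 redirection into tr.
(pv -f src.bin | gzip -c > src.gz) 2>&1 | tr -d ':[]'
```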
user2107349 (147 rep)
Sep 23, 2021, 08:28 AM • Last activity: Sep 23, 2021, 08:59 AM
0 votes
1 answer
363 views
How does redirection to pv actually work?
I am trying to understand how the redirection works exactly in this command:

```
# tar -czf - ./Downloads/ | (pv -p --timer --rate --bytes > backup.tgz)
```

What is the English translation? All the data from tar is redirected as input to pv, and then pv redirects it to backup.tgz? **Then why are the brackets around pv required?**

And how does this redirection make sense:

```
tar -czf - ./Documents/ | (pv -n > backup.tgz) 2>&1 | dialog --gauge "Progress" 10 70
```

What is the meaning of the `2>&1` after pv?
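The moving parts can be seen with small stand-in data (a sketch; `out.bin`, `progress.txt` and the sizes are made up): pv copies stdin to stdout, so the archive bytes flow through it into the file, while the progress display goes to stderr. The parentheses run pv in a subshell, and the `2>&1` placed after that subshell rewires the subshell's stderr (i.e. pv's `-n` progress numbers) into the next pipe stage, while the data itself still lands in the file:

```shell
command -v pv >/dev/null || { echo "pv not installed; skipping"; exit 0; }
# 7 bytes in, size declared with -s so -n can compute a percentage.
printf 'payload' | (pv -n -s 7 > out.bin) 2>&1 | tail -n 1 > progress.txt
cat out.bin         # the data went to the file: payload
cat progress.txt    # the last progress number pv emitted (typically 100)
```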
ng.newbie (1295 rep)
Jun 14, 2021, 08:30 PM • Last activity: Jun 15, 2021, 04:07 PM
65 votes
3 answers
69309 views
Is it better to use cat, dd, pv or another procedure to copy a CD/DVD?
## Background ##

I'm copying some data CDs/DVDs to ISO files to use them later without the need of them in the drive. I'm looking on the Net for procedures and I found a lot:

- Use of cat to copy a medium: http://www.yolinux.com/TUTORIALS/LinuxTutorialCDBurn.html

  ```
  cat /dev/sr0 > image.iso
  ```

- Use of dd to do so (apparently the most widely used): http://www.linuxjournal.com/content/archiving-cds-iso-commandline

  ```
  dd if=/dev/cdrom bs=blocksize count=count of=/path/to/isoimage.iso
  ```

- Use of just pv to accomplish this: see `man pv` for more information, although here's an excerpt of it:

  ```
  Taking an image of a disk, skipping errors:
        pv -EE /dev/sda > disk-image.img
  Writing an image back to a disk:
        pv disk-image.img > /dev/sda
  Zeroing a disk:
        pv /dev/sda
  ```

I don't know if all of them should be equivalent, although I tested some of them (using the md5sum tool) and at least dd and pv are *not* equivalent. Here's the md5sum of the files generated by each procedure:

```
md5 of dd procedure: 71b676875b0194495060b38f35237c3c
md5 of pv procedure: f3524d81fdeeef962b01e1d86e6acc04
```

**EDIT:** That output was from another CD than the output given. In fact, I realized there are some interesting facts I provide as an answer. In fact, the size of each file *is different* comparing one to another. So, is there a best procedure to copy a CD/DVD, or am I just using the commands incorrectly?
## More information about the situation ##

Here is more information about the test case I'm using to check the procedures I've found so far:

- `isoinfo -d i /dev/sr0` output: https://gist.github.com/JBFWP286/7f50f069dc5d1593ba62#file-isoinfo-output-19-aug-2015
- dd to copy the media, with output checksums and file information: https://gist.github.com/JBFWP286/75decda0a67605590d32#file-dd-output-with-md5-and-sha256-19-aug-2015
- pv to copy the media, with output checksums and file information: https://gist.github.com/JBFWP286/700a13fe0a2f06ce5e7a#file-pv-output-with-md5-and-sha256-19-aug-2015

Any help will be appreciated!
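As a baseline worth knowing (a general check, not a verdict on the question's CDs): against an ordinary file, `cat`, `dd` and `pv` all produce byte-identical copies; size differences on optical media typically come from read errors or from a `bs`/`count` combination that cuts the copy short. A quick demonstration of the equivalence:

```shell
# Make a 64 KiB file of random data and copy it two ways.
dd if=/dev/urandom of=medium.img bs=1k count=64 2>/dev/null
cat medium.img > copy-cat.iso
dd if=medium.img of=copy-dd.iso bs=2k 2>/dev/null
cmp copy-cat.iso copy-dd.iso && echo "cat and dd copies are identical"
```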
user129371
Aug 19, 2015, 07:20 PM • Last activity: Jan 24, 2021, 10:12 AM