
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

1 vote
1 answer
432 views
`pzstd` = parallelized Zstandard, how to watch progress on a 4 TB file/disk?
I am brand new to zstd/pzstd, trying out its features, compression, benchmarking, and so on. (I run Linux Mint 22 Cinnamon.) This computer has 32 GB RAM. The basic command appears to be working, but I found out it's not fully multi-threaded / parallelized:
```
# zstd --ultra --adapt=min=5,max=22 --long --auto-threads=logical --progress --keep --force --verbose /dev/nvme0n1 -o /path/to/disk/image/file.zst
```
As you can see, I am trying to compress my **NVMe 4 TB drive**, which holds only Timeshift snapshots on its ext4 fs. If you can recommend any tweaks to my zstd command, I would welcome them. But the real question here is how to make it multi-threaded / parallelized. *** I am trying this:
```
# pzstd --ultra -22 --processes 8 --keep --force --verbose /dev/nvme0n1 -o /path/to/disk/image/file.zst
```
Because this parallel version of zstd apparently does not have the --progress option, I need to find another way to watch it. 4 TB will take some time and I don't intend to be totally blind. My attempts with pv ended up not working properly. Please help, I'll appreciate it. Thanks.
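One arrangement that usually gives a usable progress bar is to put pv in front of the device and let pzstd read from the pipe; a hedged sketch (an editor's addition — the level and thread count are illustrative, not a recommendation):
```sh
SIZE=$(sudo blockdev --getsize64 /dev/nvme0n1)   # exact device size in bytes, so pv can show an ETA
sudo pv -s "$SIZE" /dev/nvme0n1 \
  | pzstd -19 -p 8 \
  > /path/to/disk/image/file.zst
```
Since pzstd compresses stdin to stdout when given no file operand, pv retains full visibility of how much of the device has been read.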
Vlastimil Burián (30505 rep)
Oct 10, 2024, 09:22 AM • Last activity: Jun 18, 2025, 06:54 AM
0 votes
2 answers
93 views
Why did my backup folder with large amounts of repeated data compress so poorly?
I have a folder with around seventy subfolders, each containing a few tarballs which are nightly backups of a few directories (the largest being /home) from an old Raspberry Pi. Each is a full backup; they are not incremental. These tarballs are not compressed; they are just regular .tar archives. (They were originally compressed with bzip2, but I have decompressed all of them.) This folder totals 49 GiB according to du -h. I compressed this entire folder into a tar archive compressed with zstd. However, the final archive is 32 GiB, not much smaller than the original. Why is this the case, considering that the vast majority of the data should be common among several files, since I obviously was not replacing every file every day?
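A likely part of the answer, for context: zstd only finds matches within its window (a few MiB at ordinary levels), which is far smaller than the spacing between one night's tarball and the next inside the combined tar stream, so the repeats are invisible to it. A hedged sketch of a recompression with long-range matching enabled (assuming zstd ≥ 1.3.2 for --long; the folder name is illustrative):
```sh
# compress with a 2 GiB match window (windowLog 31); decompression
# needs the same --long=31 flag, or zstd will refuse the large window
tar -cf - backups/ | zstd -T0 --long=31 -19 -o backups.tar.zst
```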
kj7rrv (217 rep)
May 24, 2025, 04:33 AM • Last activity: May 24, 2025, 08:08 AM
4 votes
1 answer
1594 views
How do you ssh using zstd dynamic compression?
I want to set up my ssh tunnel to do dynamic compression so that I get maximum throughput for the bandwidth and CPU resources I have available on each end. Since gzip compression is built in and not pluggable, I thought I could use -o ProxyCommand to set up a double ssh tunnel where the outer tunnel sends compressed data as the content and the inner tunnel connects to the ssh daemon on the remote host. This *almost* works. Here is my command:
```
# dynamic compression parameters to zstd omitted for brevity
ssh -o ProxyCommand='zstd | ssh -p %p %h "unzstd | nc localhost %p | zstd" | unzstd'
```
Here are some things I know about this command:
- When I run this command, the terminal hangs without any output.
- It works if I remove [un]zstd, which is not surprising, since using netcat as the proxy command is a standard method to connect through jumphosts.
- cat | zstd | cat at a command prompt will not return data immediately, because compression programs process data in chunks. You have to send EOF with ctrl+d a couple of times before it releases the compressed data. You can decompress the data within the pipeline as well and it works the same.
- If I hit ctrl+d when running the full command, nothing happens.

What am I missing here? Is there a way to make this work, or another way I am overlooking?
ChinnoDog (41 rep)
Dec 13, 2022, 02:58 PM • Last activity: Oct 5, 2024, 08:48 AM
25 votes
3 answers
57232 views
zst compression not supported by apt/dpkg
I'm running Debian, namely:
```
# uname -a
Linux martlins2 5.10.0-8-amd64 #1 SMP Debian 5.10.46-4 (2021-08-03) x86_64 GNU/Linux
```
and for some time I have been seeing errors telling me that _some_ parts of _some_ packages use an _unknown_ compression while doing apt update. In particular, the cause of the issue lies in the middle of dpkg:
```
# apt update
(...)
# apt upgrade
(...)
dpkg-deb: error: archive '/var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb' uses unknown compression for member 'control.tar.zst', giving up
Traceback (most recent call last):
  File "/usr/share/apt-listchanges/DebianFiles.py", line 124, in readdeb
    output = subprocess.check_output(command)
  File "/usr/lib/python3.9/subprocess.py", line 424, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "/usr/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['dpkg-deb', '-f', '/var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb', 'Package', 'Source', 'Version', 'Architecture', 'Status']' returned non-zero exit status 2.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/bin/apt-listchanges", line 323, in 
    main(config)
  File "/usr/bin/apt-listchanges", line 104, in main
    pkg = DebianFiles.Package(deb)
  File "/usr/share/apt-listchanges/DebianFiles.py", line 358, in __init__
    parser.readdeb(self.path)
  File "/usr/share/apt-listchanges/DebianFiles.py", line 127, in readdeb
    raise RuntimeError(_("Error processing '%(what)s': %(errmsg)s") %
RuntimeError: Error processing '/var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb': Command '['dpkg-deb', '-f', '/var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb', 'Package', 'Source', 'Version', 'Architecture', 'Status']' returned non-zero exit status 2.

dpkg-deb: error: archive '/tmp/apt-dpkg-install-XiLPN8/01-libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb' uses unknown compression for member 'control.tar.zst', giving up
dpkg: error processing archive /tmp/apt-dpkg-install-XiLPN8/01-libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb (--unpack):
 dpkg-deb --control subprocess returned error exit status 2
(...)
Errors were encountered while processing:
 /tmp/apt-dpkg-install-XiLPN8/01-libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb
(...)
E: Sub-process /usr/bin/dpkg returned an error code (1)
```
To prove it, I've run the dpkg command (simplified) directly:
```
# dpkg -f /var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb 'Package'
dpkg-deb: error: archive '/var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb' uses unknown compression for member 'control.tar.zst', giving up
```
The file really does use such compression:
```
# file /var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb
/var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb: Debian binary package (format 2.0), with control.tar.zs, data compression zst
```
I do have the zstd package installed:
```
# apt search zstd
(...)
libzstd1/stable,stable,now 1.4.8+dfsg-2.1 amd64 [installed,automatic]
  fast lossless compression algorithm
(...)
zstd/stable,stable,now 1.4.8+dfsg-2.1 amd64 [installed]
  fast lossless compression algorithm -- CLI tool
```
Furthermore, I found the following dpkg bug report: https://bugs.launchpad.net/ubuntu/+source/dpkg/+bug/1764220 saying that zstd support was added in version 1.18.4ubuntu1.7. My version of dpkg is 1.20.9:
```
# dpkg --version
Debian 'dpkg' package management program version 1.20.9 (amd64).
(...)
```
so that _may_ not be the issue. I've also removed the whole contents of /var/cache/apt/archives/ and re-ran update && upgrade. It didn't help. Do you have any tips on what to do about this? Are there further packages missing? Does this Debian version not have the feature? Is it a configuration issue? Is there any workaround?
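A hedged workaround sketch (an editor's addition, not part of the original question): a .deb is an ar archive, so as long as zstd and xz are installed, its zstd members can be repacked by hand into xz ones that an older dpkg understands. This assumes the usual three members; the directory and output names are illustrative:
```sh
mkdir repack && cd repack
ar x /var/cache/apt/archives/libdrm-amdgpu1_2.4.107+git2109030500.d201a4~oibaf~i_amd64.deb
zstd -d < control.tar.zst | xz > control.tar.xz
zstd -d < data.tar.zst | xz > data.tar.xz
ar qc repacked.deb debian-binary control.tar.xz data.tar.xz   # member order matters: debian-binary first
sudo dpkg -i repacked.deb
```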
martlin (373 rep)
Sep 14, 2021, 08:13 PM • Last activity: Aug 27, 2024, 12:27 PM
1 vote
0 answers
172 views
Recoverably recompress gzip files into zstd, preserving original checksums?
I need to archive a lot of gzip-compressed data. The problem is that, compared to zstd, gzip is wasteful, both in terms of ratio and the CPU time required to decompress the data. Because of that, I want to recompress the data into zstd. Unfortunately, I need to be able to reconstruct the SHA/MD5 checksums of the original compressed gzip files, in order to prove their origin. Is it possible? If the gzip algorithm were deterministic, that would be trivial, but I don't have access to information about which version of gzip was used, what the compression level was, etc.
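A hedged sanity check, not a full solution (the file name and the use of -n are illustrative): for any given file you can test whether some standard gzip level happens to round-trip byte-for-byte, which would make that file's recompression trivially reversible:
```sh
# try each standard gzip level; the header fields (name, mtime) must match too,
# so -n (which zeroes them) may or may not agree with the original's headers
for level in 1 2 3 4 5 6 7 8 9; do
    if zcat original.gz | gzip -n -"$level" | cmp -s - original.gz; then
        echo "level $level reproduces original.gz byte-for-byte"
    fi
done
```
If no level matches, it may be worth looking at pristine-gz (from the pristine-tar project), which attacks exactly this problem by storing a small delta alongside the decompressed data so that the original gzip stream, and hence its checksum, can be regenerated.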
d33tah (1381 rep)
Aug 12, 2024, 02:35 PM
0 votes
1 answer
157 views
btrfs mount with compress fails with udisksctl but succeeds with mount?
```
$ sudo mkfs.btrfs -fL borgbackups /dev/vgxubuntu/borgbackups
$ udisksctl mount -o compress=ztsd:15 -b /dev/mapper/vgxubuntu-borgbackups
Error mounting /dev/dm-3: GDBus.Error:org.freedesktop.UDisks2.Error.OptionNotPermitted: Mount option `compress=ztsd:15' is not allowed
```
But then:
```
$ sudo mount -o compress=zstd:15 /dev/mapper/vgxubuntu-borgbackups /mnt/sd
```
succeeds:
```
$ mount | grep borgback
/dev/mapper/vgxubuntu-borgbackups on /mnt/sd type btrfs (rw,relatime,compress=zstd:15,ssd,space_cache,subvolid=5,subvol=/)
```
What am I missing here?
eugenevd (156 rep)
Feb 10, 2023, 05:08 PM • Last activity: Feb 10, 2023, 05:28 PM
2 votes
2 answers
3485 views
How can I separate multiple files from the single file produced by zstd -r folder -o output.zst?
I didn't read the manual carefully enough and ran the following command:
```
$ zstd -r folder -o output.zst
```
The following command gave me back a single file called output:
```
$ unzstd output.zst
```
The output file has the contents of all the files under the folder concatenated. Are there any tools or programs to un-concatenate the single file back into the multiple original files? This is the only backup file I have and I need the backup. EDIT: what I really should have run (according to [this thread](https://github.com/facebook/zstd/issues/1526)) is
```
# for tar version 1.31 and above
$ tar --zstd -cf output.tar.zst folder

# for tar version < 1.31
$ tar --use-compress-program zstd -cf output.tar.zst folder
```
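A hedged first diagnostic (an editor's addition): if zstd wrote each input as its own frame into the concatenated output, the frame boundaries at least survive and can be listed:
```sh
zstd -lv output.zst   # verbose listing: number of frames, sizes, checksums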
Tun (171 rep)
Oct 31, 2019, 04:56 PM • Last activity: Jan 14, 2023, 01:46 PM
2 votes
2 answers
5494 views
Decompress folder.tar.gz with files inside
I'm trying to decompress folder.tar.gz, which was previously compressed with zstd, so I did:
```
unzstd folder.tar.gz.zst
```
Now if I try to decompress it by using:
```
tar -xvf folder.tar.gz
```
it extracts, inside some folder, two big files joined together. Apparently the path that was compressed was /opt/local/data/ with file1.log and file2.log. After decompressing, I get the data folder containing data the size of file1.log and file2.log joined.
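Two hedged diagnostics that may localize the problem (an editor's addition): check what the intermediate file actually is, and list the tar members before extracting anything:
```sh
file folder.tar.gz               # is it really gzip-compressed tar data?
tar -tzvf folder.tar.gz | head   # list members without extracting
```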
user134969 (273 rep)
Oct 24, 2022, 10:53 PM • Last activity: Nov 21, 2022, 02:35 PM
1 vote
1 answer
1152 views
create initrd image compressed with zstd
I have an initrd image compressed with xz. This is how I created it from the image file initrd:
```
e2image -ar initrd - | xz -9 --check=crc32 > initrd.xz
```
Now I need the same image compressed using the zstd algorithm. What command/parameters do I have to use for the kernel to be able to boot from this initrd image? I have CONFIG_RD_ZSTD=y enabled in my kernel.
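A minimal sketch of the analogous pipeline, under the assumption that the kernel's CONFIG_RD_ZSTD decoder just needs a standard zstd frame and no particular checksum flavour (unlike the xz case, no --check flag should be required):
```sh
e2image -ar initrd - | zstd -19 > initrd.zst
```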
Martin Vegter (586 rep)
Oct 2, 2022, 05:56 AM • Last activity: Oct 3, 2022, 09:12 PM
18 votes
2 answers
38943 views
compressing and decompressing dd image - zstd instead of gzip
Earlier I was using fsarchiver to create a compressed partition image. Due to some weird behavior, I am choosing to replace it with dd. However, I like how fsarchiver compressed with zstd. So, I studied:
- How to make a disk image and restore from it later?
- Using DD for disk cloning
- Making full disk image with DD
- compressing dd backup on the fly
- How do you monitor the progress of dd?

What these essentially say is that I have to use the following command to back up:
```
dd if=/dev/sda2 status=progress | gzip -c > /media/mint/Data/_Fsarchiver/MintV1.img.gz
```
And the following command to restore:
```
gunzip -c /media/mint/Data/_Fsarchiver/MintV1.img.gz | dd of=/dev/sda2 status=progress
```
Now I want to replace gzip -c and gunzip -c with zstd and zstd -d. The commands I came up with are, to compress:
```
sudo dd if=/dev/sda2 status=progress | zstd -16vT6 > /media/mint/Data/_Fsarchiver/MintV1.zst
```
To decompress:
```
zstd -vdcfT6 /media/mint/Data/_Fsarchiver/MintV1.zst | dd of=/dev/sda2 status=progress
```
Is it safe to try these commands, or am I doing something wrong?
Ahmad Ismail (2998 rep)
Jan 6, 2019, 08:55 AM • Last activity: Sep 10, 2022, 09:28 PM
2 votes
1 answer
3139 views
How do I uncompress or unarchive the contents of initramfs img file in Arch Linux?
I have been using **Arch Linux** for a while and have been studying the **initramfs**. I thought of looking into the contents of the file to get a clearer idea of it. I googled various ways to peek into the file but was not able to. Initially, I checked the file type of the initramfs using the command below and got the following output:
```
file /boot/initramfs-linux.img
/boot/initramfs-linux.img: Zstandard compressed data (v0.8+), Dictionary ID: None
```
I found that the file was **Zstandard compressed** and used the **zstd** tool to fetch the contents of the file as follows:
```
zstd -d /boot/initramfs-linux.img -o SOME_FILE_NAME
```
That yielded a gibberish result. I gave a file name as its argument since it complained when I gave a directory. I thought that the initramfs file contains the initial root file system (a set of files and directories). I am **naive** about Arch Linux and its internals. Kindly help me through this. Thanks.
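For context, a hedged sketch (an editor's addition): what zstd -d produces here is a cpio archive rather than a mountable filesystem image, which is why it looks like gibberish when dumped to a single file. It can be unpacked with cpio, or with mkinitcpio's lsinitcpio on Arch:
```sh
mkdir /tmp/initramfs && cd /tmp/initramfs
zstd -dc /boot/initramfs-linux.img | cpio -idmv   # unpack the embedded cpio archive
# or, using Arch's mkinitcpio tooling:
lsinitcpio -x /boot/initramfs-linux.img           # extract into the current directory
```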
Panther Coder (408 rep)
Jul 31, 2022, 08:09 PM • Last activity: Aug 1, 2022, 07:44 AM
0 votes
1 answer
1333 views
Recompress 7z archives to tar.zst on the fly
I have a bunch of [7z](https://en.wikipedia.org/wiki/7z) archives (containing directories and files) that I would like to recompress as [tar.zst](https://facebook.github.io/zstd/) (which offers much better decompression speeds if/when I need to unarchive them). I could manually decompress them, then recompress with tar --zstd -cvf foo.tar.zst foo/, but that means having the fully decompressed files on disk, which isn't great from a disk-utilization PoV. Is it possible to "stream" the files (using | somehow) from 7z to tar, to recompress those files without having the decompressed files on disk? If so, would a similar solution apply to rar / zip / other archive types?
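One hedged possibility (an editor's sketch, assuming libarchive's bsdtar was built with 7z read support): bsdtar can read entries straight out of one archive format and write them into another, so the intermediate files never touch the disk:
```sh
# re-pack foo.7z as foo.tar.zst without extracting to disk
bsdtar -cf - @foo.7z | zstd -19 -T0 > foo.tar.zst
```
The same @archive trick applies to zip and anything else libarchive can read; rar coverage depends on how libarchive was built.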
Christophe L (101 rep)
Jul 10, 2022, 11:28 PM • Last activity: Jul 11, 2022, 09:56 AM
0 votes
1 answer
527 views
Using Debian Stretch to build 5.15.x kernel with zstd compression fails with incorrect parameters
I had to move from Jessie to Stretch, as kernel 5.15.49 requires a gcc 5.x version (Jessie had 4.9; Stretch has 6.x). I decided to try the ZSTD module compression option in 5.15.x. I made sure to apt-get install zstd beforehand. Using make bindeb-pkg, it gets all the way past compiling and signing the modules, but then it errors out, and you can see on the screen the zstd output saying "incorrect parameters" and then giving a sample of what the parameters should be. So clearly it's executing the compressor, but it doesn't like whatever parameters kbuild is sending it? Is this a known issue? Is there an easy fix for this? TIA!!
user3161924 (283 rep)
Jun 23, 2022, 03:58 PM • Last activity: Jun 23, 2022, 04:28 PM
3 votes
1 answer
5735 views
How to tell if btrfs subvolumes are actually compressed or not
I have mounted Btrfs subvolumes (including /home) with the compress=no option in /etc/fstab. However, when I run btrfs inspect-internal dump-super -a (both on the running system and on a live boot, mounting with compress=no), it shows COMPRESS_ZSTD in incompat_flags. So, are the subvolumes being used without compression, or with compression? Fedora 34 Workstation (GNOME), fresh install. It seems to default to zstd for at least the /home subvolume, which was not the case earlier; but is compression actually enabled despite being mounted with compress=no, as inspect-internal suggests? The partition containing the subvolumes is LUKS2-encrypted.
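A hedged way to see what is actually on disk (an editor's addition): as far as I understand, COMPRESS_ZSTD is a sticky flag recording that zstd compression has been used at some point in the filesystem's history, not that it is in effect now. The per-extent reality can be checked with the compsize tool (packaged as btrfs-compsize on some distros):
```sh
sudo compsize /home   # disk usage vs. uncompressed size, broken down by compression type
```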
Saurav Sengupta (321 rep)
Aug 18, 2021, 07:25 PM • Last activity: Sep 5, 2021, 12:48 AM
0 votes
1 answer
607 views
tar creates an absolute sub directory within the compressed file
I would like to tar a folder using zstd. My folder contains subfolders a and b. I am tar-ing using this:
```
tar --zstd -cvf /dest/path/to/output/archive.tar.zstd /full/path/to/src/folder
```
However, this causes archive.tar.zstd to contain /full/path/to/src/folder/ instead of just folder. How do I fix this?
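A hedged sketch of the usual fix: tar's -C option changes into the given directory before archiving, so only the final path component is recorded in the archive:
```sh
tar --zstd -cvf /dest/path/to/output/archive.tar.zstd -C /full/path/to/src folder
```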
user997112 (1065 rep)
Jun 7, 2021, 05:03 PM • Last activity: Jun 7, 2021, 05:55 PM
2 votes
0 answers
235 views
Is there an equivalent of zdiff for zstandard
Is there an equivalent of the gzip utility [zdiff(1)](https://linux.die.net/man/1/zdiff), but for [zstandard](https://github.com/facebook/zstd/)? I have not been able to find anything in the zstandard repository other than zstdgrep and zstdless.
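Absent a dedicated tool, a hedged workaround in any shell with process substitution (file names illustrative); this is essentially what zdiff does for gzip, namely diffing the decompressed streams:
```sh
diff <(zstdcat a.zst) <(zstdcat b.zst)
```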
razeh (659 rep)
Feb 13, 2020, 04:00 PM • Last activity: Feb 13, 2020, 04:20 PM
8 votes
1 answer
2167 views
Is there a compression tool with an arbitrarily large dictionary?
I am looking for a compression tool with an arbitrarily large dictionary (and "block size"). Let me explain by way of examples. First let us create 32 MB of random data and then concatenate it to itself to make a file of twice the length, 64 MB.
```
head -c32M /dev/urandom > test32.bin
cat test32.bin test32.bin > test64.bin
```
Of course test32.bin is not compressible because it is random, but the first half of test64.bin is the same as the second half, so it should be compressible by roughly 50%. First let's try some standard tools. test64.bin is of size exactly 67108864.

- gzip -9. Compressed size 67119133.
- bzip2 -9. Compressed size 67409123. (A really big overhead!)
- xz -7. Compressed size 67112252.
- xz -8. Compressed size 33561724.
- zstd --ultra -22. Compressed size 33558039.

We learn from this that gzip and bzip2 can never compress this file. However, with a big enough dictionary, xz and zstd can compress the file, and in that case zstd does the best job. However, now try:
```
head -c150M /dev/urandom > test150.bin
cat test150.bin test150.bin > test300.bin
```
test300.bin is of size exactly 314572800. Let's try the best compression algorithms again at their highest settings.

- xz -9. Compressed size 314588440.
- zstd --ultra -22. Compressed size 314580017.

In this case neither tool can compress the file.

> Is there a tool that has an arbitrarily large dictionary size so it can compress a file such as test300.bin?

---

Thanks to the comment and answer, it turns out both zstd and xz can do it. You need zstd version 1.4.x, however.

- zstd --long=28. Compressed size 157306814.
- xz -9 --lzma2=dict=150MiB. Compressed size 157317764.
Simd (371 rep)
Jan 18, 2020, 09:50 PM • Last activity: Jan 19, 2020, 06:11 PM
2 votes
0 answers
71 views
Avoid pipeline stalls from write caching to usb drive
I have the following process I use for backups:
```sh
btrfs send snapshots/home-2016-06-04 |
zstd --verbose -T4 - |
gpg --batch --passphrase-file /tmp/secret --compress-algo none --symmetric - >
/mnt/usbdrive/packup.zst.gpg
```
But I find that while it runs very quickly at 100% CPU for a while, writes to the USB drive get buffered, and then the whole pipeline suddenly stalls while it syncs to the drive. It then alternates between these two states. What I would like to achieve is for zstd --adapt -T1 to work properly. According to the man page:

> zstd will dynamically adapt compression level to perceived I/O conditions.

This should give the optimal throughput, so that the write to the thumb drive runs at its maximum (40 MB/s, tested with dd) and zstd does not fill up its buffer. I suspect I can achieve this by adding dd as the last stage of the pipeline; I've tried this with the direct, fsyncdata and sync options to oflag, to no avail. Is there some way to achieve this? Am I right in assuming the problem is with the write caching?
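One hedged thing to try (an editor's addition; the thresholds are illustrative): the alternation described is consistent with the kernel's dirty-page cache absorbing writes until it hits its limit and then throttling the whole pipeline, so lowering the writeback thresholds can turn the burst-stall cycle into a steadier stream:
```sh
# start background writeback after ~16 MiB of dirty pages, throttle writers at ~48 MiB
sudo sysctl vm.dirty_background_bytes=$((16 * 1024 * 1024))
sudo sysctl vm.dirty_bytes=$((48 * 1024 * 1024))
```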
Luciano (141 rep)
May 20, 2019, 02:42 PM
17 votes
3 answers
15525 views
How to set a non default zstd compression level at btrfs filesystem defragment?
```
# btrfs filesystem defragment -r -v -czstd:15 /
ERROR: unknown compression type zstd:15
# btrfs filesystem defragment -r -v -czstd_15 /
ERROR: unknown compression type zstd_15
# btrfs filesystem defragment -r -v -czstd15 /
ERROR: unknown compression type zstd15
```
The btrfs manual page doesn't give a clue on how to select a compression level:

> **-c[algo]**
>
> compress file contents while defragmenting. Optional argument selects the compression algorithm, zlib (default), lzo or zstd. Currently it's not possible to select no compression. See also section EXAMPLES.

How do I select a non-default zstd compression level to re-compress existing btrfs filesystems?

---

Note: btrfs filesystem defragment on **snapshots** might result in much larger disk space consumption:

> Warning: Defragmenting with Linux kernel versions < 3.9 or ≥ 3.14-rc2 as well as with Linux stable kernel versions ≥ 3.10.31, ≥ 3.12.12 or ≥ 3.13.4 **will break up the ref-links of COW data** (for example files copied with cp --reflink, snapshots or de-duplicated data). This may cause **considerable increase of space usage** depending on the broken up ref-links.
Pro Backup (5114 rep)
Dec 22, 2017, 11:07 AM • Last activity: Mar 18, 2019, 09:28 AM
3 votes
1 answer
1588 views
Is zstd for zram actually available in Linux 4.15?
I found zstd in [/drivers/block/zram/zcomp.c](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/block/zram/zcomp.c?h=v4.15#n32), but I can't find anything zstd-related in [/crypto](https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/crypto?h=v4.15). So is zstd for zram actually available in Linux 4.15 or not?
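A hedged runtime check (an editor's addition; whether zstd shows up depends on the kernel configuration): zram advertises the compressors it can actually use through sysfs, so on a booted 4.15 kernel one can simply ask:
```sh
sudo modprobe zram
cat /sys/block/zram0/comp_algorithm   # e.g. "lzo [lz4] deflate"; brackets mark the active one
```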
illiterate (1023 rep)
Feb 2, 2018, 04:35 AM • Last activity: Feb 2, 2018, 03:15 PM