Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
974 votes · 17 answers · 1192392 views
How can I reduce a video's size with ffmpeg?
How can I use [`ffmpeg`](https://ffmpeg.org/) to reduce the size of a video by lowering the quality (as minimally as possible, naturally, because I need it to run on a mobile device that doesn't have much available space)?
I forgot to mention that when the video can use subtitles (*.srt or *.sub), I'd like to convert them too to fit the parameters of the converted video file.
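For illustration, a hedged sketch of the usual approach (file names and values are placeholders, not from the post): re-encode the video with a higher x264 CRF, which trades a little quality for a noticeably smaller file, and re-encode the audio at a modest bitrate.

```bash
# CRF 23 is the x264 default; larger values (e.g. 26-30) shrink the file further at some quality cost.
ffmpeg -i input.mp4 -c:v libx264 -crf 28 -preset slow -c:a aac -b:a 128k output.mp4
```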
xralf
(15189 rep)
Jan 10, 2012, 09:45 PM
• Last activity: Jul 11, 2025, 02:00 AM
1 vote · 1 answer · 432 views
`pzstd` = parallelized Zstandard, how to watch progress in 4TB large file/disk?
I am brand new to `zstd`/`pzstd`, trying out its features, compression, benchmarking it, and so on. (I run Linux Mint 22 Cinnamon.) This computer has 32 GB RAM.
The basic command appears to be working, but I found out it's not fully multi-threaded / parallelized:
```none
# zstd --ultra --adapt=min=5,max=22 --long --auto-threads=logical --progress --keep --force --verbose /dev/nvme0n1 -o /path/to/disk/image/file.zst
```
As you can see, I am trying to compress my **NVMe 4TB drive**, which holds only Timeshift snapshots on an ext4 filesystem. If you could recommend some tweaks to my `zstd` command, I would welcome them.
But the real question here is: how do I make it multi-threaded / parallelized?
***
I am trying this:
```none
# pzstd --ultra -22 --processes 8 --keep --force --verbose /dev/nvme0n1 -o /path/to/disk/image/file.zst
```
because this parallel version of zstd apparently does not have the `--progress` option, so I need to find another way to watch it. 4 TB will take some time and I don't intend to be totally blind.
My attempts with `pv` did not work properly. Please help, I'd appreciate it. Thanks.
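A hedged sketch of one way to watch progress (assuming GNU `pv` is installed; the device path is the one from the post): let `pv` read the raw device and report throughput and ETA, and let `pzstd` compress its stdin.

```bash
# -s gives pv the total size so it can estimate an ETA; pzstd uses 8 worker processes.
pv -s "$(blockdev --getsize64 /dev/nvme0n1)" /dev/nvme0n1 | pzstd -19 -p 8 > /path/to/disk/image/file.zst
```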
Vlastimil Burián
(30505 rep)
Oct 10, 2024, 09:22 AM
• Last activity: Jun 18, 2025, 06:54 AM
0 votes · 2 answers · 93 views
Why did my backup folder with large amounts of repeated data compress so poorly?
I have a folder with around seventy subfolders, each containing a few tarballs which are nightly backups of a few directories (the largest being `/home`) from an old Raspberry Pi. Each is a full backup; they are not incremental. These tarballs are not compressed; they are just regular `.tar` archives. (They were originally compressed with `bzip2`, but I have decompressed all of them.) This folder totals 49 GiB according to `du -h`.
I compressed this entire folder into a `tar` archive compressed with `zstd`. However, the final archive is 32 GiB, not much smaller than the original. Why is this the case, considering that the vast majority of the data should be common among several files, since I obviously was not replacing every file every day?
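For context, a hedged sketch (path and level are placeholders): zstd's default match window is far smaller than the distance between repeated nightly backups in the tar stream, so cross-tarball redundancy is mostly invisible to it; long-distance matching widens that window.

```bash
# --long=31 allows matches up to 2 GiB apart (64-bit builds; pass --long=31 to zstd when decompressing too).
tar -cf - backups/ | zstd -19 --long=31 -T0 -o backups.tar.zst
```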
kj7rrv
(217 rep)
May 24, 2025, 04:33 AM
• Last activity: May 24, 2025, 08:08 AM
55 votes · 4 answers · 53679 views
Does tar actually compress files, or just group them together?
I usually assumed that `tar` was a compression utility, but I am unsure: does it actually compress files, or is it just like an ISO file, a file to hold files?
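A quick hedged check (directory name is a placeholder): a plain tar archive is roughly the sum of its members plus headers, while the `z` flag additionally pipes it through gzip.

```bash
tar -cf  files.tar    somedir/   # archive only, no compression
tar -czf files.tar.gz somedir/   # archive + gzip compression
du -h files.tar files.tar.gz     # compare the two sizes
```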
TheDoctor
(995 rep)
Apr 29, 2014, 09:29 PM
• Last activity: May 23, 2025, 12:05 AM
0 votes · 1 answer · 210 views
Is TLS-level compression with Apache possible?
Apache2 can transfer compressed data by using the deflate filter. However, that is HTTP-level compression: it sends back a compressed response and indicates this in the response headers so that clients can deal with it accordingly.
However, this is not what I want.
Besides the HTTP-level compression, TLS also has a compression facility (it is visible, for example, in the mbedtls API).
Can I somehow set up Apache to compress the SSL transfers with that, rather than at the HTTP level?
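For reference, a hedged sketch (the vhost shown is hypothetical): mod_ssl exposes an `SSLCompression` directive, but it is off by default in Apache 2.4 because of the CRIME attack, and it only has an effect if the underlying OpenSSL build still supports zlib compression, so on most modern systems enabling it does nothing.

```none
<VirtualHost *:443>
    SSLEngine on
    SSLCompression on   # TLS-level compression; widely disabled/removed for security reasons
</VirtualHost>
```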
peterh
(10448 rep)
Nov 20, 2019, 10:03 AM
• Last activity: May 22, 2025, 03:30 AM
0 votes · 2 answers · 91 views
How to defragment compressed btrfs files?
If I defragment files on btrfs with the command
```none
btrfs filesystem defrag --step 1G file
```
everything is fine. A `filefrag -v file` clearly shows that the extent count significantly decreased.
Things are very different if I deal with compressed files. First, filefrag reports a huge number of extents:
```none
Filesystem type is: 9123683e
File size of file is 85942272 (20982 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 31: 607198.. 607229: 32: encoded
1: 32.. 63: 609302.. 609333: 32: 607230: encoded
2: 64.. 95: 609314.. 609345: 32: 609334: encoded
3: 96.. 127: 609326.. 609357: 32: 609346: encoded
...
648: 20928.. 20959: 704298.. 704329: 32: 704299: encoded
649: 20960.. 20981: 691987.. 692008: 22: 704330: last,encoded,eof
file: 650 extents found
```
Second, the `btrfs filesystem defragment` command returns on the spot, without any error report - and with an unchanged filefrag output.
My impression is that fragmentation of compressed files is not considered an issue on btrfs filesystems at all. However, my ears clearly tell me otherwise: yes, it is an issue for me.
So, how can I defragment a compressed file on btrfs? And how could I even check whether such files are contiguous - not in terms of their encoded (i.e. compressed) extents, but in terms of their compressed blocks on the HDD?
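A hedged sketch of the usual workaround (file name is a placeholder): asking defragment to rewrite the data with `-c` forces the extents to be recompressed and re-laid-out, which tends to have a visible effect even where a plain defragment pass returns immediately.

```bash
# Rewrite and recompress the file while defragmenting; -t raises the target extent size.
btrfs filesystem defragment -czstd -t 32M file
filefrag -v file | tail -n 1   # compare the extent count before and after
```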
peterh
(10448 rep)
May 12, 2025, 02:22 PM
• Last activity: May 12, 2025, 06:29 PM
4 votes · 2 answers · 2305 views
How can you compress images in a PDF?
How can you increase the JPEG compression level on a PDF using batch tools under Linux?
Obviously you can use `gs -dPDFSETTINGS=/screen` or `/ebook`, but that downsamples the PDF - it reduces the DPI. It's more efficient (in terms of how nice the PDF looks per KB) to use JPEG compression while retaining the same pixel count.
E.g.: https://docupub.com/pdfcompress/ allows you to halve the size of a PDF, yet when you zoom in it still has good quality, albeit with some artifacts. When you zoom in using `gs`'s `/ebook` mode, it inevitably looks more pixelated.
What Linux tool allows us to apply JPEG compression to each image in a PDF?
Is there any way to use ImageMagick's `convert -quality` on a PDF of multiple images?
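A heavily hedged Ghostscript sketch (file names and the quality factor are placeholders; the parameter names are Adobe distiller parameters that pdfwrite accepts): disable downsampling and automatic filter selection, force `/DCTEncode`, and set the JPEG quality via `QFactor` (larger means stronger compression).

```bash
gs -sDEVICE=pdfwrite -o out.pdf \
   -dDownsampleColorImages=false -dDownsampleGrayImages=false \
   -dAutoFilterColorImages=false -dAutoFilterGrayImages=false \
   -dColorImageFilter=/DCTEncode -dGrayImageFilter=/DCTEncode \
   -c '<< /ColorImageDict << /QFactor 1.5 /Blend 1 /HSamples [2 1 1 2] /VSamples [2 1 1 2] >> >> setdistillerparams' \
   -f in.pdf
```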
Zaz
(2654 rep)
Jan 8, 2021, 06:35 PM
• Last activity: May 11, 2025, 09:07 AM
4 votes · 1 answer · 2189 views
Best compression for operating system image
I have an operating system image of size 2.5G.
I have a device with limited space, so I was looking for the best possible compression. Below are the commands I tried and the results:
1. tar with gzip:
```none
tar c Os.img | gzip --best > Os.tar.gz
```
This command returned an image of 1.3G.
2. xz only:
```none
xz -z -v -k Os.img
```
This command returned an image of 1021M.
3. xz with -9:
```none
xz -z -v -9 -k Os.img
```
This command returned an image of 950M.
4. tar with xz and -9:
```none
tar cv Os.img | xz -9 -k > Os.tar.xz
```
This command returned an image of 950M.
5. xz with -9 and -e:
```none
xz -z -v -9 -k -e Os.img
```
This command returned an image of 949M.
6. lrzip:
```none
lrzip -z -v Os.img
```
This command returned an image of 729M.
Is there any other, better solution or (preferably) command-line tool for the compression?
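One more hedged option to benchmark (flags are illustrative, not from the post): zstd at its maximum level with long-distance matching, which is usually far faster than xz or lrzip for a ratio in the same neighbourhood.

```bash
zstd --ultra -22 --long=31 -T0 -k Os.img -o Os.img.zst
```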
Sharvin26
(307 rep)
Mar 12, 2019, 02:44 PM
• Last activity: Apr 26, 2025, 09:47 AM
7 votes · 1 answer · 11758 views
Why does zram occupy much more memory compared to its "compressed" value?
I set up zram and made extensive tests on my Linux machines to verify that it really helps in my scenario. However, I'm very confused that zram *seems* to use up memory equal to the whole uncompressed data size. When I type in `zramctl` I see this:
```none
NAME ALGORITHM DISKSIZE DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 2G 853,6M 355,1M 367,1M 4 [SWAP]
```
According to the help output of `zramctl`, `DATA` is the uncompressed size and `TOTAL` the compressed memory including metadata. Yet, when I type in `swapon -s`, I see this output:
```none
Filename     Type       size     used    Priority
/dev/sda2    partition  1463292  0       4
/dev/zram0   partition  2024224  906240  5
```
`906240` is the used memory in kilobytes, which corresponds to the 853,6M `DATA` value of zramctl. This leaves the impression that the compressed zram device needs more memory than it saves. Once `DATA` is full, it actually starts swapping to the disk drive, so it must indeed be full.
Why does zram seemingly occupy memory equal to the original data size? Why is it not the size of `COMPR` or `TOTAL`? There seems to be no source about this on the Internet yet, because I haven't found any information about it. Thank you!
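For checking what the kernel itself reports, a hedged sketch: the `mm_stat` sysfs file contains the raw counters (in bytes) that `zramctl` formats, which makes it easier to see which number is actually resident in RAM.

```bash
# Fields: orig_data_size compr_data_size mem_used_total mem_limit mem_used_max same_pages pages_compacted huge_pages
cat /sys/block/zram0/mm_stat
```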
Testerhood
(419 rep)
Jun 24, 2020, 02:36 PM
• Last activity: Apr 16, 2025, 12:17 AM
9 votes · 4 answers · 2048 views
How to compress disk swap
I am looking for a way to compress swap on disk. *I am not looking for wider discussion of alternative solutions. See discussion at the end.*
I have tried...
Using a compressed zfs zvol for swap is NOT WORKING. The setup of it works, swapon works, swapping does happen somewhat, so I guess one could argue it's *technically* a working solution, but exactly how "working" is your working solution if it's slower than floppy disk access and causes your system to completely freeze forever 10 times out of 10? Tried it several times -- as soon as the system enters memory pressure conditions, everything just freezes. I tried to use it indirectly as well, using losetup, and even tried using a zfs zvol as a backing device for zram. No difference, always the same results -- incredibly slow write/read rates, and the system inevitably dies under pressure.
BTRFS. Only supports *uncompressed* swapfiles. Apparently, it only supports uncompressed loop images as well, because I tried dd-ing an empty file, formatting it with regular ext2, compressing it, mounting it as a loop device, and creating a swapfile inside of it. It didn't work, even when I mounted btrfs with forced compression enabled -- compsize showed an ext2 image compression ratio of exactly 1.00.
Zswap -- it's just a buffer between RAM and regular disk swap. The regular disk swap keeps on being the regular disk swap; zswap uncompresses pages before writing them there.
Zram -- has had a backing device option since its inception as compcache, and one would think it is a perfect candidate to have had compressed disk swap for years. No such luck. While you can write back compressed in-RAM pages to disk at will, the pages get decompressed before they're written. Unlike zswap, it doesn't write same- and zero-filled pages though, which both saves I/O, slightly improves throughput, and warrants the use of loop-mounted sparse files as backing_dev. So far, this is the best option I found for swap optimization on low-end devices, despite it still lacking disk compression (see the sketch below).
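For completeness, a hedged sketch of the zram-with-writeback setup described above (device names and sizes are placeholders; requires a kernel built with CONFIG_ZRAM_WRITEBACK) -- and, as noted, the written-back pages land on the backing device uncompressed:

```bash
modprobe zram
echo zstd > /sys/block/zram0/comp_algorithm
echo /dev/sdb2 > /sys/block/zram0/backing_dev   # must be set before disksize
echo 4G > /sys/block/zram0/disksize
mkswap /dev/zram0 && swapon -p 100 /dev/zram0
echo all  > /sys/block/zram0/idle        # mark currently stored pages as idle
echo idle > /sys/block/zram0/writeback   # push idle pages out to the backing device
```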
----
Any ideas what else I can try? Maybe there's some compressed block device layer that I don't know of, that can compress anything written to it, no filesystem required? Maybe there's some compressed overlay I could make use of? Not done in FUSE though, as FUSE is itself subject to swapping, unless you know a way to prevent it from being swapped out.
Since I don't see this being explored much, you're welcome to suggest any madness you like. Please, let's throw stuff at the wall and see what sticks.
For experts -- if any of you have read, or even written, any part of the Linux source code that relates to this problem, please describe in as much detail as possible why you think this hasn't been implemented yet, and how you think it *could* be implemented, if you have any idea. And obviously, please do implement it if you can; that would be awesome.
----------
*Discussion*
Before you mark this as a duplicate -- I'm aware there have been a few questions like this around Stack Exchange, but none I saw had a working answer, and few had any further feedback. So I'll attempt to describe the details and sort of aggregate everything here, in the hope that someone smarter than me can figure this out. I'm not a programmer, just a user and a script kiddie, so that should be a pretty low bar to jump over.
>just buy more ram, it's cheap
>get an ssd
>swap is bad
>compression is slow anyway, why bother
If all you have to say is any of the above quotes -- go away. Because the argument is **optimization**. However cheap RAM is these days, it's not free. Swap is always needed; the fact that it's good for the system to have it has been established for years now. And compression is nothing -- even "heavy" algorithms perform stupidly fast on any processor made in the last decade. And lastly, sure, compression might actually become a bottleneck if you're using an SSD, but not everyone prioritizes speed over disk space usage, and HDDs, which DO benefit grossly from disk compression, are still too popular and plentiful to dismiss.
linuxlover69
(119 rep)
Mar 20, 2023, 08:34 AM
• Last activity: Apr 11, 2025, 03:31 PM
2 votes · 0 answers · 87 views
Add error correction to SquashFS images as part of a backup strategy
My backup strategy currently primarily consists of daily backups of all of my machines with Borg Backup, stored on different storage devices in different locations, following the 3-2-1 strategy. These file-level backups are the important ones that matter most to me. My question is *not* about these backups; I mention them for context.
Next to Borg Backup I also sporadically create full disk image backups with `dd`. After zeroing the disk's free space (using `zerofree` on ext4, `dd if=/dev/zero` otherwise) I usually create a SquashFS image of the raw disk image (e.g. `sda.img` becomes the only file in `disks.sqfs`). This allows me to store the raw disk image in compressed form, while still allowing me to access the data without the need to decompress everything first.
These full disk image backups are stored on a single storage device (a NAS, to be more precise), i.e. they don't follow the 3-2-1 strategy. Creating a second copy of the data is out of scope, simply because they take too much space and because I consider investing in more storage a waste of money given my Borg Backup backups. So, I'm fine with losing these backups per se, but I want to protect them a little better. Thus I'm thinking about adding some sort of error correction mechanism.
I read through a lot of resources and found that Reed–Solomon error correction seems to be the way to go. It adds some overhead to the data stored and provides safety in most, even though not all, cases.
**My question is the following:** How do I do that in practice? What tools are available and how would I use them in my case? I found [this 10-year-old Stack Exchange question](https://unix.stackexchange.com/questions/170652/is-it-possible-to-add-error-correction-codes-bch-rs-or-etc-to-a-single-file) listing a whole bunch of tools, but many of the projects are apparently dead. Plus, they don't seem to fit my needs:
Storing the data in compressed form and yet being able to access the data without the need to decompress it first is a must-have for me. So, unless there's another solution, I'm stuck with SquashFS. However, according to the resources I read, combining ECC with compression is hard: one apparently shouldn't calculate ECCs from compressed data, but from the original data, because ECC doesn't guarantee a 100% correction and even a single remaining corruption could render all compressed data useless. However, calculating ECCs from the original data and then compressing it wouldn't help either, because I might not be able to decompress the data due to the corruptions. So, apparently one needs software that does both at the same time: compression and ECC. Per `ddrescue` I found that `lzip` can actually do that by creating forward error correction (fec) files alongside compressing the data, but AFAIK I can't tell SquashFS to create these files.
So, I'm kinda stuck with this chicken-and-egg problem... How can I combine SquashFS with ECC, or is there an alternative to SquashFS that allows this?
Any suggestions?
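As a hedged illustration of one commonly used tool (the redundancy level is a placeholder; note this computes parity over the compressed image, with the caveat discussed above): `par2` creates standalone Reed-Solomon recovery files next to any file, so it can be layered on top of an existing SquashFS image without modifying the image itself.

```bash
par2 create -r10 disks.sqfs        # create .par2 recovery files with ~10% redundancy
par2 verify disks.sqfs.par2        # later: check the image against the recovery files
par2 repair disks.sqfs.par2        # attempt a repair if verification fails
```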
PhrozenByte
(21 rep)
Apr 1, 2025, 07:26 PM
• Last activity: Apr 1, 2025, 07:30 PM
0 votes · 1 answer · 4977 views
How do I extract a .tar.xz file?
Some software seems to be distributed in a new archive format:
```none
$ file node-v18.7.0-linux-x64.tar.xz
node-v18.7.0-linux-x64.tar.xz: XZ compressed data
```
How do I extract this, preferably using tar (like I can with gzip or bzip2)?
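A hedged sketch: GNU tar can hand the archive to `xz` transparently when it detects the format, or you can request it explicitly with `-J`; either way the `xz` utility must be installed.

```bash
tar -xf  node-v18.7.0-linux-x64.tar.xz   # modern GNU tar auto-detects the xz compression
tar -xJf node-v18.7.0-linux-x64.tar.xz   # or select xz explicitly
```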
mikemaccana
(1863 rep)
Aug 10, 2022, 02:32 PM
• Last activity: Mar 11, 2025, 09:10 AM
1 vote · 1 answer · 336 views
Zlib dictionary training
I have what is possibly a unique problem to solve.
I need to create a compressor/decompressor which works on short strings (sentences), let's say 100 bytes.
The strings contain a limited number of unique ASCII characters, in fact a total of 41 possible ones.
The strings also contain a relatively small set of possible substrings.
I want to train zlib to create a dictionary, based upon the legal set of characters and the frequently occurring substrings.
Ideally I would want to create a dictionary by providing a huge dataset of possible sentences, but excluding the illegal characters.
Any suggestions?
Thx
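As a hedged aside (a different library than zlib, named plainly, with placeholder file names): the zstd command-line tool has built-in dictionary training from a corpus of small samples, which can serve as a quick baseline before hand-crafting a zlib preset dictionary from the frequent substrings.

```bash
# Train a small dictionary from one sample sentence per file, then use it to compress a short string.
zstd --train samples/*.txt --maxdict=4096 -o sentences.dict
zstd -D sentences.dict short_string.txt -o short_string.txt.zst
```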
leemoore1966
(11 rep)
Oct 19, 2019, 10:09 AM
• Last activity: Mar 4, 2025, 01:39 PM
0 votes · 0 answers · 58 views
tar extracted file with bad output
I tarred one file with:
```bash
tar cf My-tarball.tar path/to/file.txt
```
Then compressed it:
```bash
gzip My-tarball.tar
```
But when I decompress and extract it:
```bash
gunzip My-tarball.tar.gz
tar -xf My-tarball.tar
```
the file is in a bad format; in Vim it shows a lot of:
```txt
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^
```
and so on...
I'm really worried that I lost my file T-T; I have no backup, stupid on my part.
What am I missing? What can I do?
**P.S.:** the whole process was run on an NTFS disk mounted with `lowntfs-3g`, if it helps.
Thanks in advance
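A hedged diagnostic sketch (paths as in the post): inspect the archive member directly to see whether the bytes inside the tarball are damaged, and compare it against the original file, which `tar cf` should have left untouched at its original path.

```bash
tar -tvf My-tarball.tar                             # list the archive contents and sizes
tar -xOf My-tarball.tar path/to/file.txt | file -   # look at the member without writing it out
cmp path/to/file.txt <(tar -xOf My-tarball.tar path/to/file.txt)   # compare with the original, if still present
```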
BenjamimCS
(11 rep)
Feb 17, 2025, 10:24 PM
• Last activity: Feb 18, 2025, 01:57 AM
0 votes · 1 answer · 54 views
A document format for grey scanned pages with better lossless compression (i.e., a smaller file) than PDF+Zip?
I have `page_1.pnm`, …, `page_6.pnm`, which represent 6 pages of a scanned document, all in gray PNM produced by scanimage and manually postprocessed with GIMP. The command
```sh
convert $(for i in 1 2 3 4 5 6; do echo page_$i.pnm; done | xargs echo) -compress Zip -quality 100 document.hi-res.pdf
```
produced a PDF file of size 15620554 bytes, whereas
```sh
tar cvf document.hi-res.tar $(for i in $(seq 1 6); do echo page_$i.pnm; done | xargs echo)
xz -9 -e -vv document.hi-res.tar
```
produced a `.tar.xz` file of size 12385312 bytes, which is about 72 % of the PDF size. This means that there is enough superfluous information in the document that the PDF+Zip combination doesn't or can't remove.
This raises the question: Is there a document format (for scanned material) for which Windows has a built-in viewer and Debian Linux has at least a standard, freely available viewer, such that documents in this format are generally smaller than PDFs without losing information? (Yes, I tried TIFF, and it was larger than the PDF. I also produced a PostScript document with convert and then squeezed it via `gzip --best`, but the resulting `.ps.gz` file was even larger. I don't know how to produce usable DjVu documents from gray images in a lossless way. I don't know how to produce XPS files on Debian - GhostXPS/GhostPDL seems to have no package.)
By the way, is there any shorter and more elegant way to produce
```sh
page_1.pnm page_2.pnm page_3.pnm page_4.pnm page_5.pnm file_6.pnm
```
than
```sh
$(for i in $(seq 1 6); do echo page_$i.pnm; done | xargs echo)
```
PS. I don't need lossy compression; if I allow myself to lose information, I'm pretty happy with
```sh
convert $(for i in $(seq 1 6); do echo page_$i.pnm; done | xargs echo) -compress JPEG2000 -quality 40 document.JPEG2000.40.pdf
```
(replace 40 with your choice until the file is small enough for your application).
PPS. As opposed to https://unix.stackexchange.com/questions/788404/in-imagemagick-how-to-create-a-pdf-file-from-an-image-with-the-best-flate-compr , in this question we are NOT (or at least not necessarily) interested in the best compression ratio for a single-page PDF+Zip but allow for many pages and a wider variety of compressors and formats.
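On the side question, a hedged sketch: bash brace expansion produces the same argument list without the command substitution and `xargs`.

```sh
echo page_{1..6}.pnm
# expands to: page_1.pnm page_2.pnm page_3.pnm page_4.pnm page_5.pnm page_6.pnm
convert page_{1..6}.pnm -compress Zip -quality 100 document.hi-res.pdf
```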
AlMa1r
(1 rep)
Jan 30, 2025, 08:59 PM
• Last activity: Feb 7, 2025, 10:06 AM
169 votes · 13 answers · 177555 views
How to XZ a directory with TAR using maximum compression?
So I need to compress a directory with max compression.
How can I do it with `xz`? I mean I will need `tar` too because I can't compress a directory with only `xz`. Is there a oneliner to produce e.g. `foo.tar.xz`?
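A hedged one-liner sketch (directory name is a placeholder): GNU tar's `-J` option pipes the archive through xz, and the `XZ_OPT` environment variable passes the maximum-compression flags to it.

```bash
XZ_OPT='-9e' tar -cJf foo.tar.xz directory/
```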
LanceBaynes
(41465 rep)
Jan 12, 2012, 09:23 PM
• Last activity: Jan 7, 2025, 01:27 PM
0 votes · 1 answer · 177 views
mod_deflate Logging on Apache/2.4.62 (Debian) does not work
Basics
I am running Apache/2.4.62 on my Raspberry Pi 4B (ARM64/AArch64) with Debian GNU/Linux 12 (bookworm).
***
Some HW/OS info:
# neofetch
_,met$$$$$gg. root@rpi4
,g$$$$$$$$$$$$$$$P. ---------
,g$$P" """Y$$.". OS: Debian GNU/Linux 12 (bookworm) aarch64
,$$P' `$$$. Host: Raspberry Pi 4 Model B Rev 1.5
',$$P ,ggs. `$$b: Kernel: 6.1.0-28-arm64
`d$$' ,$P"' . $$$ Uptime: 1 day, 7 hours, 13 mins
$$P d$' , $$P Packages: 476 (dpkg), 7 (snap)
$$: $$. - ,d$$' Shell: bash 5.2.15
$$; Y$b._ _,d$P' Terminal: /dev/pts/1
Y$$. .
"Y$$$$P"' CPU: (4) @ 1.500GHz
`$$b "-.__ Memory: 199MiB / 7805MiB
`Y$$
`Y$$.
`$$b.
`Y$$b.
`"Y$b._
`"""
***
Preface
I recently enabled `mod_deflate` (Apache2 docs):
# a2enmod deflate
Considering dependency filter for deflate:
Module filter already enabled
Module deflate already enabled
After enabling it, I was curious about logging how much data is actually saved.
So, I went to edit the compression config file:
/etc/apache2/mods-enabled/deflate.conf
With current contents:
# Deflate compression level (maximum)
DeflateCompressionLevel 9
# Deflate memory usage level (maximum)
DeflateMemLevel 9
# Logging does not work, investigate
DeflateFilterNote Input instream
DeflateFilterNote Output outstream
DeflateFilterNote Ratio ratio
LogFormat '"%r" %{outstream}n/%{instream}n (%{ratio}n%%)' deflate
CustomLog /var/www/logs/deflate.log deflate
AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css text/javascript
AddOutputFilterByType DEFLATE application/x-javascript application/javascript application/ecmascript
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/wasm
AddOutputFilterByType DEFLATE application/xml
The compression itself should be working as expected as per online test:
https://www.whatsmyip.org/http-compression-test/
which told me:
https://programyburian.cz is Compressed
Uncompressed Page Size: 10.4 KB
Compressed Page Size: 3.1 KB
Savings: 70.7%
***
Actual Problem
Now, my problem is the deflate logging. I tried many different configurations, ending up with the one above. Also, as suggested in a comment, I tried placing its _blocks_ in a different order. No change.
# apache2ctl configtest
Syntax OK
I even rebooted this machine, but still no luck: `/var/www/logs/deflate.log` remains empty.
What can I do?
***
In this section of the Apache2 docs, it is stated:
DeflateFilterNote Input instream
DeflateFilterNote Output outstream
DeflateFilterNote Ratio ratio
LogFormat '"%r" %{outstream}n/%{instream}n (%{ratio}n%%)' deflate
CustomLog "logs/deflate_log" deflate
which I implemented with one change - the `CustomLog` path; see the last line.
Maybe that is the culprit, I don't really know.
But I changed it because the documented value did not work:
Job for apache2.service failed because the control process exited with error code.
See "systemctl status apache2.service" and "journalctl -xeu apache2.service" for details.
Snippet from the log when it failed to start:
# journalctl -xeu apache2.service
Dec 18 01:34:19 rpi4 systemd[1] : Starting apache2.service - The Apache HTTP Server...
░░ Subject: A start job for unit apache2.service has begun execution
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit apache2.service has begun execution.
░░
░░ The job identifier is 11533.
Dec 18 01:34:19 rpi4 apachectl: (2)No such file or directory: AH02297: Cannot access directory '/etc/apache2/logs/' for log file 'logs/deflate_log' defined at /etc/apache2/mods-enabled/deflate.conf:24
Dec 18 01:34:19 rpi4 apachectl: AH00014: Configuration check failed
Dec 18 01:34:19 rpi4 apachectl: Action 'start' failed.
Dec 18 01:34:19 rpi4 apachectl: The Apache error log may have more information.
Dec 18 01:34:19 rpi4 systemd[1] : apache2.service: Control process exited, code=exited, status=1/FAILURE
░░ Subject: Unit process exited
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ An ExecStart= process belonging to unit apache2.service has exited.
░░
░░ The process' exit code is 'exited' and its exit status is 1.
Dec 18 01:34:19 rpi4 systemd[1] : apache2.service: Failed with result 'exit-code'.
░░ Subject: Unit failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ The unit apache2.service has entered the 'failed' state with result 'exit-code'.
Dec 18 01:34:19 rpi4 systemd[1] : Failed to start apache2.service - The Apache HTTP Server.
░░ Subject: A start job for unit apache2.service has failed
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░
░░ A start job for unit apache2.service has finished with a failure.
░░
░░ The job identifier is 11533 and the job result is failed.
So, I created the `/etc/apache2/logs/` directory and restarted Apache, loaded my page with Ctrl+F5 and Shift+F5, and even tried a different browser - the log remains empty...
*Once I wake up in the morning I will try to add more information.*
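A hedged debugging sketch (paths as in the config above; the ownership check and the test request are assumptions to verify, not a confirmed fix): make sure the configured log directory exists and is writable by the Apache user, then watch the log while requesting a compressible page.

```bash
ls -ld /var/www/logs                               # does the directory exist, and who owns it?
sudo -u www-data touch /var/www/logs/deflate.log   # can the Apache user create/write the file?
curl -sH 'Accept-Encoding: gzip' -o /dev/null https://programyburian.cz/
tail -f /var/www/logs/deflate.log
```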
Vlastimil Burián
(30505 rep)
May 4, 2024, 07:18 AM
• Last activity: Dec 20, 2024, 04:23 PM
1 vote · 1 answer · 116 views
In ImageMagick, how to create a PDF file from an image with the best Flate compression ratio?
Assume you have a PNM or PNG image file, gray or color. With ImageMagick, you wish to generate a possibly small PDF file from it without losing information. So far I thought it is simply
```none
convert infile.pnm -quality 100 -compress Zip outfile.pdf
```
or
```none
convert infile.png -quality 100 -compress Zip outfile.pdf
```
Is that it? After testing, I'm no longer sure that the compression is lossless.
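A hedged way to check losslessness (file names are placeholders; assumes poppler-utils and ImageMagick are installed): pull the embedded image back out of the PDF without re-rendering it and compare it pixel-by-pixel with the original.

```bash
pdfimages -png outfile.pdf extracted                     # writes extracted-000.png
compare -metric AE infile.png extracted-000.png null:    # prints 0 if the images are pixel-identical
```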
AlMa1r
(1 rep)
Dec 19, 2024, 06:50 PM
• Last activity: Dec 20, 2024, 03:53 PM
0 votes · 1 answer · 78 views
From GIMP, how to store images losslessly at maximum compression when creating a PDF?
Assume an image opened in GIMP in Debian 12. From this image, you would like to create a single-page PDF file with maximum lossless compression. How? As of 2024-12-19, https://docs.gimp.org/en/gimp-images-out.html doesn't mention PDF at all. Besides GIMP, you have standard Linux tools at your disposal, including ImageMagick.
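A hedged sketch of one route (assuming the `img2pdf` package is available; file names are placeholders): export the image from GIMP as PNG, then wrap it with img2pdf, which embeds the already-compressed PNG data into the PDF without re-encoding it, so the result stays lossless.

```bash
img2pdf exported.png -o page.pdf
```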
AlMa1r
(1 rep)
Dec 19, 2024, 03:12 AM
• Last activity: Dec 20, 2024, 01:27 AM
1 vote · 0 answers · 31 views
Extract and merge multiple tar.gz
I have multiple tar.gz archives, the size of each archive being approximately 40 GB:
```none
v1.0-trainval01_blobs.tgz
v1.0-trainval01_blobs.tgz
...
v1.0-trainval10_blobs.tgz
```
I can unpack each archive and get the following directory structure:
```none
v1.0-trainval01_blobs
    samples
    sweeps
v1.0-trainval02_blobs
    samples
    sweeps
...
v1.0-trainval10_blobs
    samples
    sweeps
```
This unpacking will consume many hours, even days. But this is not enough! Next I have to merge the samples and sweeps folders:
```none
v1.0-trainval_all_blobs
    samples
    sweeps
```
And this is a time-consuming operation again... Can I unpack the contents of all the tar.gz archives into v1.0-trainval_all_blobs with a single command?
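A hedged sketch (archive and directory names as in the post; `--strip-components=1` assumes each archive contains a single `v1.0-trainvalXX_blobs/` top-level directory, as the listing above suggests): extract every archive straight into the merged target, dropping that top-level directory so all the `samples` and `sweeps` trees land in one place.

```bash
mkdir -p v1.0-trainval_all_blobs
for f in v1.0-trainval*_blobs.tgz; do
    tar -xzf "$f" -C v1.0-trainval_all_blobs --strip-components=1
done
```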
Ars ML
(11 rep)
Nov 27, 2024, 08:46 AM