
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

28 votes
1 answer
23518 views
tar --list — only the top-level files and folders
I understand that `tar --list --file=x` will list all files and folders. I am looking to just list the top-level data. Does anyone know how to do that? Alternatively, does anyone know how to list only the top-level files, but all folders including subfolders? Maybe with grep somehow? I'm after something that works on most *nix flavors, including macOS.
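Not a tar flag, but a sketch of a portable post-filter over the listing (assuming the archive uses relative member paths; works with GNU and BSD tar alike):

tar -tf x | cut -d/ -f1 | sort -u         # top-level files and folders only
tar -tf x | grep -e '/$' -e '^[^/]*$'     # top-level files, plus all folders (entries ending in / are folders)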
Alexander Mills (10734 rep)
Dec 22, 2018, 03:16 AM • Last activity: Aug 2, 2025, 06:02 PM
13 votes
3 answers
4774 views
Tar produces different files each time
I often have large directories that I want to transfer to a local computer from a server. Instead of using recursive scp or rsync on the directory itself, I'll often tar and gzip it first and then transfer it. Recently, I wanted to check that this is actually working, so I ran md5sum on two independently generated tar-and-gzip archives of the same source directory. To my surprise, the MD5 hash was different. I did this two more times and it was always a new value. Why am I seeing this result? Aren't two tar-and-gzipped directories, both generated with the same version of GNU tar in the exact same way, supposed to be exactly the same? For clarity, I have a source directory and a destination directory. In the destination directory I have dir1 and dir2. I'm running:

tar -zcvf /destination/dir1/source.tar.gz source && md5sum /destination/dir1/source.tar.gz >> md5.txt
tar -zcvf /destination/dir2/source.tar.gz source && md5sum /destination/dir2/source.tar.gz >> md5.txt

Each time I do this, I get a different result from md5sum. Tar produces no errors or warnings.
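The usual explanation: gzip embeds the current timestamp in its header, and tar can record metadata (member ordering, mtimes, owners) that differs between runs. A minimal sketch of a reproducible variant, assuming GNU tar 1.28+ for --sort (the --mtime date is an arbitrary example):

tar --sort=name --mtime='2018-01-01 00:00:00' --owner=0 --group=0 -cf - source \
    | gzip -n > source.tar.gz    # gzip -n omits the timestamp and name from its header

With those knobs pinned, two runs over unchanged input should hash identically.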
Alon Gelber (133 rep)
Apr 17, 2018, 02:55 PM • Last activity: Aug 1, 2025, 01:16 PM
-1 votes
1 answer
81 views
Serving a file (e.g. in Apache) from a named pipe (made with mkfifo)
Let's say I use Apache, and that I am in /var/www/html/. I do:

mkfifo test.tar
tar cvf - ~/test/* > test.tar &

In a browser, when trying to download http://localhost/test.tar I get:

ERR_EMPTY_RESPONSE (didn't send any data)

Is there a specific parameter of mkfifo that would make the pipe really look like a regular file? The problem here seems to come from the fact that the file is a named pipe. In my real example, I might use different webservers (not Apache), like [CivetWeb](https://github.com/civetweb/civetweb), but first I want to analyze whether it works with Apache (which I use the most).
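One alternative sketch, since static file handlers generally expect regular files: generate the stream from a CGI script instead of a FIFO, and let the server deal with the unknown length itself (assumes CGI is enabled; the script path and /home/user are placeholders):

#!/bin/sh
# /usr/lib/cgi-bin/test-tar.sh: stream a tarball on demand
echo "Content-Type: application/x-tar"
echo
exec tar cf - -C /home/user test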
Basj (2579 rep)
Jul 13, 2025, 03:26 PM • Last activity: Jul 21, 2025, 07:11 AM
1 vote
2 answers
2026 views
Can I extract an overwriting tar archive, while retaining the ownership of the original destination file(s)?
I have a particular use case, where I want to extract a tar archive (as root) and intentionally overwrite some destination file(s) with the contents of the archive. This is all fine and easily achievable, but I also want to retain the original ownership and permissions of the original destination file(s). As an example:
$ touch file && tar cf test.tar.gz file &&
  sudo chown www-data:www-data file &&
  sudo tar xf test.tar.gz && ls -l file
-rw-r--r-- 1 tim tim 0 May  1 11:26 file
Here I create a file as my user (tim:tim), archive it, change its ownership to www-data:www-data, then (as root) extract the archive, overwriting the original file. As you can see, its ownership has been modified to that of the file in its pre-archived state, whereas post-extraction, I want it to be owned by www-data:www-data. I've had a fairly close look at the tar man page, but can't see an immediately obvious way to do what I want. Am I missing anything?
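Not a tar option as far as the standard flags go, but a workaround sketch that records the ownership first and puts it back afterwards (stat -c is GNU stat syntax):

owner=$(stat -c '%U:%G' file)    # remember the current owner:group
sudo tar xf test.tar.gz          # overwrite from the archive
sudo chown "$owner" file         # restore the pre-extraction ownership

For many files, the same idea can be looped over the archive's member list before extracting.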
Tim Angus (113 rep)
May 1, 2019, 10:38 AM • Last activity: Jul 6, 2025, 05:56 PM
0 votes
1 answer
2611 views
gpgtar: encrypted packet with unknown version
I'm getting the error in the title (aead encrypted packet with unknown version 29), when trying to decrypt an encrypted file created in the same environment (Termux on Android, if it matters):
$ gpgtar --encrypt --output e -r attilio test
$ ls
e test
$ gpgtar -d e
gpgtar: gpg: encrypted with cv25519 key, ID 74341D598FFF0056, created 2021-08-13
gpgtar: gpg:       "attilio"
gpgtar: gpg: public key decryption failed: Not a typewriter
gpgtar: gpg: decryption failed: Not a typewriter
gpgtar: gpg: aead encrypted packet with unknown version 29
gpgtar: error running '/data/data/com.termux/files/usr/bin/gpg': exit status 2
I got the usage from here. **Question:** What does this error even mean, and how can I fix it? (Google results only show the source code, so I guess it does not happen all that often.)
Attilio (385 rep)
Aug 13, 2021, 08:31 PM • Last activity: Jul 5, 2025, 12:17 AM
6 votes
4 answers
23640 views
Podman errors on tar with potentially insufficient UIDs or GIDs available in user namespace
When I run podman run I'm getting a particularly weird error,
❯ podman run -ti --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest
✔ docker.io/rancher/rancher:latest
Trying to pull docker.io/rancher/rancher:latest...
Getting image source signatures
[... blob copying...]
Writing manifest to image destination
Storing signatures
  Error processing tar file(exit status 1): potentially insufficient UIDs or GIDs available in user namespace (requested 630384594:600260513 for /usr/bin/etcdctl): Check /etc/subuid and /etc/subgid: lchown /usr/bin/etcdctl: invalid argument
Error: Error committing the finished image: error adding layer with blob "sha256:b4b03dbaa949daab471f94bcfd68cbe21c1147e8ec2acfe3f46f1520db48baeb": Error processing tar file(exit status 1): potentially insufficient UIDs or GIDs available in user namespace (requested 630384594:600260513 for /usr/bin/etcdctl): Check /etc/subuid and /etc/subgid: lchown /usr/bin/etcdctl: invalid argument
What does _"potentially insufficient UIDs or GIDs available in user namespace"_ mean and how can I remedy this problem?
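Rootless podman maps container-side UIDs/GIDs through the ranges listed in /etc/subuid and /etc/subgid, and this image wants to lchown a file to UID 630384594, far beyond a typical 65536-entry range. A hedged sketch of the usual remedy (the username and range are placeholders; both files need the entry):

# /etc/subuid and /etc/subgid: grant a range large enough to cover the requested IDs
evan:100000:2000000000

# then reset podman's storage so the new mapping takes effect
podman system migrate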
Evan Carroll (34663 rep)
Feb 3, 2022, 07:43 PM • Last activity: Jul 3, 2025, 05:48 PM
22 votes
5 answers
44095 views
Compress a large number of large files fast
I have about 200 GB of log data generated daily, distributed among about 150 different log files. I have a script that moves the files to a temporary location and does a tar-bz2 on the temporary directory. I get good results, as 200 GB of logs are compressed to about 12-15 GB. The problem is that it takes forever to compress the files. The cron job runs at 2:30 AM daily and continues to run till 5:00-6:00 PM. Is there a way to improve the speed of the compression and complete the job faster? Any ideas? Don't worry about other processes and all; the location where the compression happens is on a NAS, and I can mount the NAS on a dedicated VM and run the compression script from there. Here is the output of top for reference:

top - 15:53:50 up 1093 days, 6:36, 1 user, load average: 1.00, 1.05, 1.07
Tasks: 101 total, 3 running, 98 sleeping, 0 stopped, 0 zombie
Cpu(s): 25.1%us, 0.7%sy, 0.0%ni, 74.1%id, 0.0%wa, 0.0%hi, 0.1%si, 0.1%st
Mem: 8388608k total, 8334844k used, 53764k free, 9800k buffers
Swap: 12550136k total, 488k used, 12549648k free, 4936168k cached

  PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
 7086 appmon 18  0 13256 7880  440 R 96.7  0.1 791:16.83 bzip2
 7085 appmon 18  0 19452 1148  856 S  0.0  0.0   1:45.41 tar cjvf /nwk_storelogs/compressed_logs/compressed_logs_2016_30_04.tar.bz2 /nwk_storelogs/temp/ASPEN-GC-32459:nkp-aspn-1014.log /nwk_stor
30756 appmon 15  0 85952 1944 1000 S  0.0  0.0   0:00.00 sshd: appmon@pts/0
30757 appmon 15  0 64884 1816 1032 S  0.0  0.0   0:00.01 -tcsh
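The top output shows a single bzip2 process pinned near 100% of one core while the machine is 74% idle, so the job is bottlenecked on single-threaded compression. A sketch using a parallel compressor on the same pipeline (assumes pigz or pbzip2 is installed; paths follow the question's own layout):

tar -cf - -C /nwk_storelogs/temp . | pigz > compressed_logs.tar.gz      # gzip-compatible, uses all cores
tar -cf - -C /nwk_storelogs/temp . | pbzip2 > compressed_logs.tar.bz2   # bzip2-compatible, uses all cores

The trade-off is gzip's weaker ratio versus bzip2's slower speed; either parallel variant should cut hours off a multi-core run.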
anu (362 rep)
May 4, 2016, 11:00 PM • Last activity: Jun 17, 2025, 09:12 AM
3 votes
3 answers
2132 views
curl progress in dialog
How can I properly display the curl progress in the dialog window?

curl http://mysite.corp/image/root_21.tar.bz2 | tar -C /mnt/dest/ -jxf -

I tried this command, but as you can see it does not display the progress correctly:

curl -f -x '' -L http://mysite.corp/image/root_21.tar.bz2 | tar -C /mnt/dest -xjpf - --exclude='dev/*' | dialog --backtitle "dialog" --stderr --title 'Linux Image' --textbox /tmp/log 30 80

This command almost helps me, but I want it to overwrite itself and not print the progress on a new line each time. Basically I want it to look the same as the original command shows it, but inside the dialog:

(curl -f -x '' -L http://mysite.corp/image/root_21.tar.bz2 | tar -C /mnt/dest -xjpf - --exclude='dev/*' ) 2>&1 | dialog --progressbox 20 120
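A sketch of one way to get a single self-updating bar: dialog --gauge reads integer percentages from stdin, and pv -n emits exactly that on stderr (its man page suggests dialog as the consumer). This assumes pv is installed and the server sends a Content-Length:

url=http://mysite.corp/image/root_21.tar.bz2
size=$(curl -sI "$url" | awk 'tolower($1)=="content-length:" {print $2+0}')
(curl -sfL "$url" | pv -n -s "$size" | tar -C /mnt/dest -xjpf - --exclude='dev/*') 2>&1 \
    | dialog --title 'Linux Image' --gauge 'Downloading and extracting...' 10 70 0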
Asaf Magen (547 rep)
Sep 7, 2015, 09:12 AM • Last activity: Jun 8, 2025, 02:01 PM
6 votes
2 answers
18980 views
How to compare two tar archives (including file content, new/removed files, symlinks)?
I have two tar archives (compressed or not compressed), and I want to find all differences in the two archives. Both archives contain a complete file system (i.e. when unpacked, would generate directories like /bin, /home, /root, /usr, /var, /etc, ... I hope you get the point). I want to have a list of the following:

- New files
- Removed files
- Changed files (content of file, not just size)
- Changed symlinks (both relative and absolute)
- New/removed symlinks

I cannot just unpack those archives and use diff, as diff will not correctly recognize absolute symlinks (as they would point out of the file system structure of the archive). Is there another way to compare the content of two tar archives?
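A sketch of one approach, assuming GNU diffutils: unpack into two scratch directories and compare with --no-dereference, which diffs the symlinks themselves (their target strings) rather than following them out of the tree (first.tar and second.tar are placeholders):

mkdir a b
tar -xf first.tar  -C a
tar -xf second.tar -C b
diff -r --no-dereference a b    # reports new, removed, and changed files and links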
Alex (5848 rep)
Nov 12, 2013, 01:19 PM • Last activity: Jun 5, 2025, 08:57 AM
38 votes
6 answers
4820 views
How to untar safely, without polluting the current directory in case of a tarbomb?
Respectable projects release tar archives that contain a single directory, for instance zyrgus-3.18.tar.gz contains a zyrgus-3.18 folder which in turn contains src, build, dist, etc. But some punk projects put everything at the root :'-( This results in a total mess when unarchiving. Creating a folder manually every time is a pain, and unnecessary most of the time.

- Is there a super-fast way to tell whether a .tar or .tar.gz file contains more than a single directory at its root? Even for a big archive.
- Or even better, is there a tool that in such cases would create a directory (name of the archive without the extension) and put everything inside?
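A sketch of both halves (the listing still has to be decompressed, so "super-fast" is relative for a big .tar.gz; $f is the archive):

f=zyrgus-3.18.tar.gz
tar -tzf "$f" | cut -d/ -f1 | sort -u | wc -l    # more than 1 means a tarbomb

d=${f%.tar.gz}                                   # 'zyrgus-3.18'
mkdir -p "$d" && tar -xzf "$f" -C "$d"           # crude: always extract into a fresh folder

Tools like dtrx or atool's aunpack automate the conditional version, only adding the wrapper directory when the archive needs one.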
Nicolas Raoul (8465 rep)
Nov 11, 2015, 08:24 AM • Last activity: May 29, 2025, 02:32 PM
0 votes
2 answers
93 views
Why did my backup folder with large amounts of repeated data compress so poorly?
I have a folder with around seventy subfolders, each containing a few tarballs which are nightly backups of a few directories (the largest being /home) from an old Raspberry Pi. Each is a full backup; they are not incremental. These tarballs are not compressed; they are just regular .tar archives. (They were originally compressed with bzip2, but I have decompressed all of them.) This folder totals 49 GiB according to du -h. I compressed this entire folder into a tar archive compressed with zstd. However, the final archive is 32 GiB, not much smaller than the original. Why is this the case, considering that the vast majority of the data should be common among several files, since I obviously was not replacing every file every day?
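A likely factor, hedged: at default levels zstd's match window is only a few MiB, which is tiny compared with the distance between one night's /home tarball and the next copy of the same data, so cross-backup redundancy is never matched. A sketch with long-distance matching enabled (the same --long flag is needed on decompression):

tar -cf - backups/ | zstd -T0 --long=31 -19 > backups.tar.zst
zstd -d --long=31 backups.tar.zst    # later, to decompress

Even then the window is 2 GiB, so duplicate data spaced farther apart than that still won't compress away.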
kj7rrv (217 rep)
May 24, 2025, 04:33 AM • Last activity: May 24, 2025, 08:08 AM
55 votes
4 answers
53679 views
Does tar actually compress files, or just group them together?
I usually assumed that tar was a compression utility, but I am unsure: does it actually compress files, or is it, just like an ISO file, a file to hold files?
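A two-line illustration of the split (mydir is a placeholder): tar itself only concatenates files and their metadata, and any compression is a separate filter layered on top.

tar -cf  files.tar    mydir/    # plain archive: contents + metadata, no compression
tar -czf files.tar.gz mydir/    # the same archive passed through gzip via -z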
TheDoctor (995 rep)
Apr 29, 2014, 09:29 PM • Last activity: May 23, 2025, 12:05 AM
7 votes
3 answers
3214 views
Tar piped to split piped to scp
So I'm trying to transfer a bunch of files via SCP. Some of these are too large to be stored on the recipient (Android phone, 4GB file size limit). The sender is almost out of space, so I can't create intermediate files locally. I'd like to tar up the bunch and stream it through split so that I can get smaller segments that'll be accepted by the phone, i.e. local command:

tar -cvf - ~/batch/ | split --bytes=1024m - batch.tar.seg

But I'm not sure how I'd pipe that into scp to get it to the phone. According to the comment on [this post](http://gnuru.org/article/1522/copying-with-scp-stdin), it's possible, but first of all I don't quite get what he's saying, and second of all I'm not sure how to accomplish this, as there'll be multiple files output from split. Any ideas?
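One sketch uses GNU split's --filter option, which runs a command once per segment with the segment name exported as $FILE, so each piece streams over ssh without ever existing locally ("phone" and /sdcard are placeholders, and the phone needs an ssh server rather than plain scp):

tar -cvf - ~/batch/ \
    | split --bytes=1024m --filter='ssh phone "cat > /sdcard/$FILE"' - batch.tar.seg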
FlamingKitties (5029 rep)
Feb 8, 2013, 07:06 AM • Last activity: May 9, 2025, 10:53 AM
1 vote
1 answer
2746 views
Can tar verify integrity of untarred files on destination disk?
Example: I create a.tar.gz from the file "a.txt" (so I used the -z option). Let's say the checksum of the file a.txt before it's added to the archive is "abc123". When I untar and "a.txt" is written to disk, can I make it so tar checks that the checksum of a.txt on the destination disk is "abc123" and fail if it isn't the same?
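tar can't verify while it extracts, but GNU tar's --compare (-d) mode rereads the archive afterwards and checks it against the files on disk, which amounts to the same guarantee; a sketch:

tar -xzf a.tar.gz    # extract as usual
tar -dzf a.tar.gz    # compare archive contents against what was just written
echo $?              # non-zero if any file differs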
Kias (173 rep)
Nov 21, 2016, 08:11 PM • Last activity: May 6, 2025, 09:05 AM
4 votes
1 answer
2189 views
Best compression for operating system image
I have an operating system image of size 2.5G. I have a device with a limited size, thus I was looking for the best possible solution for providing the compression. Below are the commands and the results of their compression:

1. tar with gzip:
   tar c Os.img | gzip --best > Os.tar.gz
   This command returned an image of 1.3G.
2. xz only:
   xz -z -v -k Os.img
   This command returned an image of 1021M.
3. xz with -9:
   xz -z -v -9 -k Os.img
   This command returned an image of 950M.
4. tar with xz and -9:
   tar cv Os.img | xz -9 -k > Os.tar.xz
   This command returned an image of 950M.
5. xz with -9 and -e:
   xz -z -v -9 -k -e Os.img
   This command returned an image of 949M.
6. lrzip:
   lrzip -z -v Os.img
   This command returned an image of 729M.

Is there any other possible best solution or command-line tool (preferably) for the compression?
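One more candidate in the same spirit, sketched with zstd at its maximum setting (ratios are very workload-dependent, so treat the flags as a starting point rather than a promise; --long=31 must be repeated when decompressing):

zstd --ultra -22 --long=31 -T0 Os.img -o Os.img.zst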
Sharvin26 (307 rep)
Mar 12, 2019, 02:44 PM • Last activity: Apr 26, 2025, 09:47 AM
11 votes
2 answers
1595 views
What is the easiest way to list all the user:group found in a tarball?
I'm installing some of my data from my old server to my new server. Since I had my old server for ages, I have a huge amount of legacy data with, most certainly, legacy user and group names. When extracting, tar does its best to match the user and group info by name and uses the identifiers as a fallback, or the current user as a last resort. What I'd like to do is make sure that all the users and groups exist before I do the extraction. That way all the files get the correct ids. To do that, the best way I can think of is to list all the user and group names found in the tar file. I know I can use the tar tvf backup.tar command to list all the files, but then I'd have to come up with a way to extract the right two names. I'm wondering whether there is a simpler way than using the tv option: some tool or command-line option that only extracts the user name and group name, so I can then use sort -u to reduce the list to unique entries. Does anyone know of such a feature?

---

**Update:** For those interested in fixing the ownership after extraction (or in my latest case, after re-installing the OS, so no tar involved), I created a tool to do that safely on an entire tree: https://github.com/AlexisWilke/alex-tools/blob/main/tools/fix-ownership.cpp
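A sketch built on the verbose listing anyway, since GNU tar's -tv output carries owner/group as its second column in user/group form (BSD tar formats the columns differently, so verify on macOS):

tar -tvf backup.tar | awk '{print $2}' | sort -u    # e.g. "alexis/staff" per line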
Alexis Wilke (3095 rep)
Oct 14, 2019, 07:52 PM • Last activity: Apr 12, 2025, 01:41 PM
0 votes
3 answers
90 views
What are the options to transfer a package from source machine to target machine over the network with only sudo user login via ssh?
**adm package** folder structure:
/opt/adm
├── bin
├── cli
├── dev
├── mod
├── pkg
└── sys
root@deb4 /opt/adm
# rwx

 To change permissions use this command syntax:
 $ sudo chmod  

 USER = root  GROUPS = root

 PERMISSION |  OCTAL | OWNER          GROUP          NAME
 drwxr-xr-x |   755  | root           root           .
 drwxr-xr-x |   755  | root           root           ..
 drwxr-xr-x |   755  | root           root           bin
 drwxr-xr-x |   755  | root           root           cli
 drwxr-xr-x |   755  | root           root           dev
 drwxr-xr-x |   755  | root           root           mod
 drwxr-xr-x |   755  | root           root           pkg
 drwxr-xr-x |   755  | root           root           sys
In this context **adm package** refers to all the files stored in the folder structure under /opt/adm/. On my development machine the **adm package** contains custom bash scripts in bin; it contains text files in cli and mod; and package setup files in pkg. The script rwx exists in the bin folder. Any user on the host machine can execute the rwx script to inspect octal permissions in the present working directory, as shown above. In general the **adm package** enables users on the host machine to run the bash scripts stored in bin from the command line and to read the text files in cli and mod. The dev folder is for developing and testing new bash scripts prior to transfer to bin. The idea behind this package is to teach principles of bash setup, bash scripts, and Linux administration, using bin to provide script resources and examples, cli and mod to store notes, and sys to store system configuration notes. I am not a power user or bash script expert, so the package and scripts might need improvement and feedback from the bash community to develop and evaluate it for teaching purposes. For now the package is private; I may publish it on GitHub someday.

I have two bash scripts in bin to transfer the whole **adm package** to another Debian-based target machine. These scripts use rsync and tar respectively to show a dry run and then execute the transfer. The tar-based script overwrites all existing files with the same names on the source and target. The rsync-based script only updates files that have changed or that don't already exist. I prefer the rsync approach. Both scripts work fine provided I have root password login enabled via ssh on the target machine and I install rsync on the target if it does not already exist.

I have a machine that blocks root password login over ssh; I can only log in as a sudo user. What are my options to copy the **adm package** to /opt/adm on the target machine with only sudo user login via ssh? Must I use keys to log in as root? Or is there a way to transfer a tar file, say adm.tar, then untar the file using sudo on the target machine?
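A sketch of the tar-over-ssh route with a sudo-only login (user@target and the tarball name are placeholders; ssh -t allocates a terminal so sudo can prompt for a password):

# two steps: copy, then extract as root
scp adm.tar.gz user@target:/tmp/
ssh -t user@target 'sudo tar -xzf /tmp/adm.tar.gz -C /opt && rm /tmp/adm.tar.gz'

# one step, but only if sudo is passwordless on the target
tar -czf - -C /opt adm | ssh user@target 'sudo tar -xzf - -C /opt'

The same idea carries over to the rsync script via rsync --rsync-path='sudo rsync', again assuming passwordless sudo for that command.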
SystemTheory (121 rep)
Oct 4, 2024, 02:13 AM • Last activity: Apr 6, 2025, 03:40 PM
0 votes
1 answers
26 views
is it possible to invoke mc with a tar.gz path as a parameter to open mc directly inside the tar.gz?
I am trying the obvious:
mc file.tar.gz
but the tar.gz is not opened. Any ideas?
Persimmonium (103 rep)
Apr 4, 2025, 11:10 AM • Last activity: Apr 4, 2025, 03:17 PM
5 votes
2 answers
638 views
The shell not redirecting output of tar to file
I am assuming this is a simple issue, but I don't have any one to double check my work. Here's my bash script
#!/bin/bash

# establish date format and dump name
DATE=$(date +"%Y%m%d-%H%M")
DUMPFILE=$DATE.dump

# path to log dir and name of output log file 
LOGDIR=/opt/mongodb/backups/logs
DBLOG=$DATE-dump.log

# backup the database and output to a dump file, redirect output to a log
docker exec -i mongodb sh -c "mongodump --archive" > \
    $DUMPFILE 2> \ 
    $LOGDIR/$DBLOG

# archive the dump file and the file uploads
ARCHIVENAME=$DATE.tgz
ARCHIVELOG=$DATE-archive.log

tar -czvf $ARCHIVENAME /opt/mongodb/$DUMPFILE /opt/mongodb/files &> \
    $LOGDIR/$ARCHIVELOG
Where I'm stuck: the script correctly outputs the dump log. However, when it gets to the end, I receive
./backup.sh: line 20: /opt/mongodb/backups/logs/20250328-0942-archive.log: No such file or directory
I've attempted using the absolute path instead of the var:
tar -czvf $ARCHIVENAME /opt/mongodb/$DUMPFILE /opt/mongodb/files &> \
    /opt/mongodb/backups/logs/$ARCHIVENAME
But I get the same error. I'm assuming this is either my bad understanding of redirects, tar, or a syntax error. Any help would be appreciated. EDIT: A file called "' '" gets created each time I run the script. If I run
cat ' '
I see that the file is the log, so it does redirect the output to a file, just not in the desired manner.
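A plausible culprit, hedged since trailing whitespace is invisible in the listing: if a line continuation is written as "\ " (backslash, then a space), the backslash escapes the space rather than the newline. The redirection then targets a file literally named " " (matching the stray ' ' file), the command ends at the newline, and the next line runs as its own command, which is what produces the "No such file or directory" message pointing at line 20. A sketch with the fragile continuations removed:

docker exec -i mongodb sh -c "mongodump --archive" > "$DUMPFILE" 2> "$LOGDIR/$DBLOG"

tar -czvf "$ARCHIVENAME" /opt/mongodb/"$DUMPFILE" /opt/mongodb/files > "$LOGDIR/$ARCHIVELOG" 2>&1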
Ambre (111 rep)
Mar 28, 2025, 01:47 PM • Last activity: Apr 3, 2025, 03:11 PM
240 votes
9 answers
497759 views
tar: Removing leading `/' from member names
root@server # tar fcz bkup.tar.gz /home/foo/
tar: Removing leading `/' from member names

How can I solve this problem and keep the / in the file names?
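With GNU tar the relevant switch is --absolute-names (-P), sketched below; note that extracting such an archive writes to absolute paths, which is exactly the risk the default stripping protects against:

tar --absolute-names -czf bkup.tar.gz /home/foo/    # members keep the leading /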
superuser (2501 rep)
Dec 23, 2012, 12:47 PM • Last activity: Apr 1, 2025, 07:53 PM