
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
2 answers
1999 views
Problem Unzipping huge zip file
I have issues unzipping a huge zip file containing around 1M files. The zip file is 15GB and uncompresses to ~60GB. When I run `unzip file.zip -d /directory/to/unzip/at` it uncompresses halfway and gives up at around 700K files. No error messages. Any tips?
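One first diagnostic worth noting here (a sketch, not a confirmed answer - the inode guess is only an assumption): a silent stop partway through a million small files often means the target filesystem ran out of inodes rather than space, and if inodes look fine, an extractor with solid zip64 support may fare better.
# free space can look fine while the inode table is full
df -i /directory/to/unzip/at
# alternative extractors with good zip64 handling
7z x file.zip -o/directory/to/unzip/at
jar xf file.zip    # from a JDK; extracts into the current directory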
Anon (9 rep)
Jan 24, 2018, 07:22 PM • Last activity: Jul 27, 2025, 04:05 PM
1 votes
2 answers
2026 views
Can I extract an overwriting tar archive, while retaining the ownership of the original destination file(s)?
I have a particular use case, where I want to extract a tar archive (as root) and intentionally overwrite some destination file(s) with the contents of the archive. This is all fine and easily achievable, but I also want to retain the original ownership and permissions of the original destination file(s). As an example:
$ touch file && tar cf test.tar.gz file &&
  sudo chown www-data:www-data file &&
  sudo tar xf test.tar.gz && ls -l file
-rw-r--r-- 1 tim tim 0 May  1 11:26 file
Here I create a file as my user (tim:tim), archive it, change its ownership to www-data:www-data, then (as root) extract the archive, overwriting the original file. As you can see, its ownership has been modified to that of the file in its pre-archived state, whereas post-extraction, I want it to be owned by www-data:www-data. I've had a fairly close look at the tar man page, but can't see an immediately obvious way to do what I want. Am I missing anything?
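One approach, sketched under the assumption that the affected destination files are known in advance: record each file's ownership before extracting, then put it back (GNU stat syntax).
owner=$(stat -c '%U:%G' file)   # capture the current owner:group
sudo tar xf test.tar.gz         # overwrite from the archive
sudo chown "$owner" file        # restore the pre-extraction ownership
For a whole tree this generalizes to capturing stat output for every path in the archive first and replaying it afterwards.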
Tim Angus (113 rep)
May 1, 2019, 10:38 AM • Last activity: Jul 6, 2025, 05:56 PM
1 votes
1 answers
42 views
Maximize file system usable space for a mostly read-only data partition
I'm trying to wrap my head around all the different (common) file systems available, but I can't decide which one best applies to my scenario. My use case is:
1) The partition is for data only. System files are on a dedicated drive.
2) Each directory contains a big ISO file (movie DVD or BD - anywhere from 5 to 100GB) and 4 very small files (e.g., a text nfo file and three jpg images). Files will likely never be deleted, with a few rare exceptions.
3) I'd like to maximize the usable disk space. I tried formatting a 3.6TiB partition with ext3 and removed all the space reserved for root, but still got a significant (for me) space loss relative to a partition of the same size formatted as NTFS, approximately 57GiB. If I understand it correctly, this is due to the pre-allocated inodes (is that an accurate term?). I don't like the idea of tens of gigabytes sitting there unused, waiting for files that will never come.
4) I'd also like to avoid NTFS partitions. I don't like their performance under Linux (writing big files to ext3 was 6-10 times faster than NTFS in my test).
Things I don't care much about: journaling, COW, snapshots. This partition will be pretty much like ROM, or an archive, if you will. If there's a failure when I'm writing the files, I can start over. Please let me know if I'm missing something here. Now, between ext4/3/2, btrfs, xfs, zfs, which one would be most appropriate and why? Should I also consider extFS or F2FS? I haven't read much about these two. Note: I've found this similar question, but it's from 2016; I suppose answers would be different now. Thank you, VMat
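On the inode point specifically, a hedged sketch: mke2fs usage types allocate far fewer inodes than the default, which suits a partition of huge ISOs (largefile4 means roughly one inode per 4MiB; the exact ratios live in /etc/mke2fs.conf, and /dev/sdX1 below is a placeholder).
# no root reserve, one inode per 4MiB of space
mkfs.ext4 -m 0 -T largefile4 /dev/sdX1
# or set the bytes-per-inode ratio directly
mkfs.ext4 -m 0 -i 4194304 /dev/sdX1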
VMat (11 rep)
May 24, 2025, 11:01 PM • Last activity: May 25, 2025, 12:25 AM
1 votes
2 answers
414 views
Uncompress .lzo files in parallel in both folders simultaneously and then delete the original .lzo files
So I have .lzo files in the /test01/primary folder which I need to uncompress and then delete all the .lzo files. I need to do the same thing in the /test02/secondary folder as well. I will have around 150 .lzo files in each folder, so around 300 .lzo files in total. From the command line I was uncompressing one file at a time like this: lzop -d file_name.lzo. What is the fastest way to uncompress all .lzo files and then delete all .lzo files from both folders simultaneously? Below is the code I have:
#!/bin/bash
set -e
export PRIMARY=/test01/primary
export SECONDARY=/test02/secondary
parallel lzop -dU -- ::: {"$PRIMARY","$SECONDARY"}/*.lzo
I want to uncompress and delete .lzo files in parallel in both the PRIMARY and SECONDARY folders simultaneously. With my above code, it processes PRIMARY first and then SECONDARY. How can I achieve parallelism in both PRIMARY and SECONDARY simultaneously? Also, does it uncompress all the files and then delete them later, or uncompress one file, delete that file, and then move to the next one? I tried this, but it doesn't work. It just works on the first 40 files and after that it doesn't work at all:
#!/bin/bash
set -e
export PRIMARY=/test01/primary
export SECONDARY=/test02/secondary
parallel -j 40 lzop -dU -- ::: "$PRIMARY"/*.lzo &
parallel -j 40 lzop -dU -- ::: "$SECONDARY"/*.lzo &
wait
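A sketch of one way to keep both folders busy at once, assuming GNU parallel (-j 50% caps each queue at half the CPU cores instead of 40 oversubscribed jobs each); as far as I understand lzop, -U deletes each input file right after that file decompresses successfully, so deletion happens per file, not deferred to the end.
#!/bin/bash
set -e
# two queues, one per folder, each capped at half the CPU cores
parallel -j 50% lzop -dU -- ::: /test01/primary/*.lzo &
parallel -j 50% lzop -dU -- ::: /test02/secondary/*.lzo &
wait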
user1950349 (841 rep)
Oct 9, 2015, 11:34 PM • Last activity: May 12, 2025, 08:12 AM
19 votes
9 answers
25038 views
How do I recursively grep through compressed archives?
I'm trying to find out what modules [use Test::Version](http://search.cpan.org/dist/Test-Version/lib/Version.pm) in cpan. So I've used [minicpan](http://search.cpan.org/dist/CPAN-Mini/bin/minicpan) to mirror it. My problem is that I need to iterate through the archives that are downloaded, and grep the files that are in the archives. Can anyone tell me how I might do this? Preferably in a way that tells me which file in the archive and what line it's on. *(Note: they aren't all tarballs; some are zip files.)*
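A brute-force sketch, assuming GNU tar and grep plus Info-ZIP's zipgrep: walk the members of each tarball and grep the extracted stream, labelling matches with archive and member name. It re-reads a tarball once per member, so it is slow but simple.
find . -name '*.tar.gz' | while IFS= read -r a; do
  tar tzf "$a" | while IFS= read -r m; do
    tar xzf "$a" -O -- "$m" 2>/dev/null |
      grep -Hn --label="$a:$m" 'Test::Version' -
  done
done
# zip files: zipgrep prints matching member names and lines itself
find . -name '*.zip' | while IFS= read -r a; do
  echo "== $a"; zipgrep 'Test::Version' "$a"
done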
xenoterracide (61203 rep)
May 25, 2011, 12:29 PM • Last activity: Mar 20, 2025, 11:10 AM
0 votes
1 answers
4977 views
How do I extract a .tar.xz file?
Some software seems to be distributed in a new archive format:
$ file node-v18.7.0-linux-x64.tar.xz
node-v18.7.0-linux-x64.tar.xz: XZ compressed data
How do I extract this, preferably using tar (like I can with gzip or bzip2)?
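For the record, a sketch that should cover current systems: modern GNU tar detects xz compression automatically on extraction, and -J is the explicit flag, mirroring -z for gzip and -j for bzip2 (the xz utility must be installed either way).
tar xf node-v18.7.0-linux-x64.tar.xz    # compression auto-detected
tar xJf node-v18.7.0-linux-x64.tar.xz   # explicit xz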
mikemaccana (1863 rep)
Aug 10, 2022, 02:32 PM • Last activity: Mar 11, 2025, 09:10 AM
0 votes
1 answers
2331 views
Difference between an AppImage and an archive file
Many applications (IntelliJ IDEA, PyCharm, Android Studio, etc.) are available as tar.gz or tar.xz files. They do not need to be installed. You just need to extract the archive file and run the application. On the other hand there are AppImages. By running an AppImage, the AppImage is temporarily mounted under the /tmp directory and then executed. You can also extract the AppImage like any archive file and run the application. So my question is: what is the difference between an AppImage and an archive file?
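The practical distinction is easy to poke at - a sketch assuming a type-2 AppImage (the file names are placeholders): an AppImage is an ELF runtime stub with a filesystem image (typically squashfs) appended, so the single file is directly executable and can unpack itself, whereas a tarball is inert bytes that something else must extract.
tar xf pycharm.tar.gz                  # plain archive: extract, then run
./Some.AppImage --appimage-extract     # runtime stub unpacks itself to ./squashfs-root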
Dante (83 rep)
May 23, 2021, 06:11 PM • Last activity: Mar 7, 2025, 10:07 AM
2 votes
1 answers
3574 views
How to uncompress multiple Zlib archives with one command
I have >200 .zlib archives and I want to uncompress them using one command in the Linux console. I just can't get the command right. Maybe someone can help me:
for z in *.zlib; do; zlib-flate -uncompress $z ; done
When I run this command every file is empty. I don't really care about the output filename, so this could just be a counter or an appended string, for example. Many thanks!
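Two problems stand out in that loop: do; is a bash syntax error, and zlib-flate (from qpdf) reads stdin and writes stdout, so without redirections nothing lands in a file. A corrected sketch, naming each output after its input minus the extension:
for z in *.zlib; do
  zlib-flate -uncompress < "$z" > "${z%.zlib}"
done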
jonasbsv (21 rep)
Apr 21, 2021, 08:01 PM • Last activity: Jan 31, 2025, 12:00 PM
1 votes
2 answers
1855 views
EPEL 7 links are dead... Where to find replacement?
I am updating a project that relied on EPEL 7; up until this year, the following repo link worked just fine:
https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
This was the full rpm command I was using to enable it:
rpm -ivh --nosignature https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
However, that link now fails to download. When I try the URL myself, I get a 404 Page Not Found error. This makes sense, as EPEL 7 was deprecated, but in case the project cannot be upgraded to use EPEL 8, I would still like to have an EPEL 7 source available as a backup. If anyone has a link to such an archive of EPEL 7, one that I can install with RPM, that would be much appreciated. I did find this on the official EPEL site:
https://dl.fedoraproject.org/pub/archive/epel/7/
However, I cannot use that link directly, nor am I sure where/if they provide a full RPM package within. The project is running Red Hat 7.9 (yes, I am aware this version is also outdated; the project requires it). If EPEL 8 were backwards compatible with Red Hat 7.9, then maybe I would never have to use EPEL 7 again. EDIT: As far as I can tell, EPEL 8 does require Red Hat 8, so that is not an option for me.
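A sketch of the usual route, with a loudly hypothetical file name: the archive tree keeps installable packages under x86_64/Packages/, but the exact epel-release version suffix below is a guess that must be confirmed against the directory listing first, and the .repo file it installs will still point at the retired mirrorlist, so its baseurl lines need editing to the archive URL afterwards.
# epel-release-7-14 is a guess; browse the listing to confirm
rpm -ivh --nosignature https://dl.fedoraproject.org/pub/archive/epel/7/x86_64/Packages/e/epel-release-7-14.noarch.rpm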
Eliezer Miron (121 rep)
Jan 10, 2025, 09:06 PM • Last activity: Jan 13, 2025, 09:03 PM
1 votes
1 answers
66 views
Recover mis-gzipped folder / directory & files
I needed to compress/archive a folder, so I ran the following command:
gzip -v --rsyncable --fast -r myFolder/ -c > myFolderArchive.gz
...foolishly thinking this was going to do just what I thought it would: an archive of myFolder and its files recursively. It even had a nice output:
./myFolder/file1 ... 80%
./myFolder/file2 ... 20%
...
Opening the archive later, however, I only see a single file in it. A quick search led me to understand my mistake: gzip (or I guess, myself) has taken every file, compressed it, and concatenated them one by one into a single file, losing all file/directory structure. In the meantime, I've rm -r'd the original folder. All I have now is myFolderArchive.gz. Would anyone see a way to take that archive and potentially reconstruct the original set of files from the myFolderArchive.gz file's content, now that it's all mixed into a single gzipped file? I do still have access to the original disk (for a limited time) and could potentially attempt to recover at least the original directory structure (the filesystem is ext4). Technically, the content/data itself is in myFolderArchive.gz, it would "just" need to be sliced right...
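Slicing is plausible in principle: each file became its own gzip member, every member starts with the magic bytes 1f 8b 08, and because gzip read named files (not stdin) each header should carry the original base name in its FNAME field (directories are not stored). A recovery sketch assuming GNU grep, dd, and gzip - the magic-byte scan can false-positive inside compressed data, so a member that fails to gunzip just means a bad slice boundary:
mapfile -t off < <(grep -aboP '\x1f\x8b\x08' myFolderArchive.gz | cut -d: -f1)
off+=( "$(stat -c %s myFolderArchive.gz)" )
for (( i = 0; i < ${#off[@]} - 1; i++ )); do
  dd if=myFolderArchive.gz of="member$i.gz" iflag=skip_bytes,count_bytes \
     skip="${off[i]}" count="$(( off[i+1] - off[i] ))" 2>/dev/null
  gunzip -N "member$i.gz" || echo "member $i: bad slice, skipping"
done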
Undo (113 rep)
Dec 22, 2024, 04:32 AM • Last activity: Dec 23, 2024, 05:59 AM
0 votes
2 answers
2079 views
7zip archive multiple files with their respective names
I have lots of files that I want to archive/compress using 7zip utility. They all reside in the same folder. Each archive must have the same name as the file that is to be archived. For example, if the files are 1.txt, 2.txt, 3.txt then the archives should be 1.7z, 2.7z and so on. I have found some batch scripts, but I need a bash script. I can list all the files using
for i in *.txt; do echo $i; done
but cannot make it work with the 7z command, i.e. 7z a 'archive.7z' 'file.txt'
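A sketch completing that loop - one archive per input file, named after the file with its extension replaced:
for f in *.txt; do
  7z a "${f%.*}.7z" "$f"
done
Dropping the *.txt filter (for f in *; do ...) covers a mixed folder, at the cost of also picking up any .7z files already present.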
heikrana (141 rep)
Aug 17, 2022, 04:27 PM • Last activity: Nov 28, 2024, 06:05 PM
3 votes
1 answers
359 views
Extract a RAR file while automatically truncating long filenames
I have a lot of rar archives, and some of them contain files whose names are too long for the filesystem. When attempting to extract them using unrar x, I will get the error:
Cannot create [extremely long filename].ext
File name too long
Using any archive utility currently available for Linux, is there one which can automatically shorten the extracted filename while preserving the extension? If an archive can be automatically edited pre-extraction to rectify this issue, that will suffice as well.
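No utility known here does it natively, but streaming extraction makes a workaround possible - a sketch assuming unrar, a flat archive, and mostly-ASCII names where every member has an extension (unrar p writes a member to stdout, -inul silences messages, and 200 characters leaves headroom under ext4's 255-byte name limit):
unrar lb archive.rar | while IFS= read -r name; do
  base=${name##*/} ext=${base##*.}
  short=$(printf '%s' "${base%.*}" | cut -c1-200).$ext
  unrar p -inul archive.rar "$name" > "$short"
done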
rer (31 rep)
Nov 23, 2024, 07:28 AM • Last activity: Nov 24, 2024, 05:41 AM
3 votes
1 answers
2716 views
Opening files within archives from vim
I know that vim can open files within zip or tar{,.gz} archives interactively: open the archive first, then navigate to the correct entry and press enter. lesspipe (https://github.com/wofr06/lesspipe) (which is a preprocessor for *less*) provides the additional convenience of allowing one to directly input both the archive name and the name of the file within the archive at the same time (less foo.tgz:file-within-foo) (yes, I know, such a scheme leads, in theory, to issues with files containing colons in their names; in practice this is rare...). I was wondering if a similar ability was available (perhaps as a plugin) for vim. Clarification: Fundamentally, what I'm asking for should be "relatively" simple (and mostly focused on a usability POV), because most of the archive-handling ability is actually already present in vim: I am just looking for a plugin that will transform, say, vim foo.tgz:file-within to vim foo.tgz followed by selecting file-within in the listing offered by vim.
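One stopgap short of a plugin - a hypothetical shell wrapper (the vimarc name and the archive:member convention are invented here, and it handles the tar family only): it splits the argument on the first colon, extracts that member to a temporary directory, and opens it read-only, leaving vim's own archive browsing untouched.
vimarc() {   # usage: vimarc foo.tgz:path/inside/archive
  local archive=${1%%:*} member=${1#*:} tmp
  tmp=$(mktemp -d) || return
  tar -xf "$archive" -C "$tmp" -- "$member" && vim -R "$tmp/$member"
}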
antony (161 rep)
Jun 15, 2016, 04:01 AM • Last activity: Nov 4, 2024, 03:26 PM
3 votes
1 answers
549 views
How to open a .tar.xz indexed archive file with Midnight Commander?
tar -I pixz -cf foo.tar.xz ./foo compresses the stuff. tar -I pixz -xf foo.tar.xz decompresses the stuff. And pixz -l foo.tar.xz lists the contents. How can I do this with (mc) Midnight Commander? If I select foo.tar.xz in mc and press Enter, nothing happens. The CPU usage goes high for a moment, but then nothing comes, and no error. Normally mc can open archive files like this. How can mc open and browse an indexed .tar.xz file?
user447274 (539 rep)
Oct 22, 2024, 04:11 AM • Last activity: Oct 22, 2024, 05:51 AM
0 votes
1 answers
25 views
Archiver to backup a changing directory
So there is a directory that is actively changing. I want a backup snapshot, but both tar and zip crash as a file is deleted or changed while they read it. Is there any archiver in Linux-world that would, in this case, just skip the affected file and continue with the rest?
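GNU tar can be told to treat files that change or vanish mid-read as non-fatal - a sketch, assuming GNU tar; affected files are skipped (or stored truncated) with a warning instead of the run failing outright:
tar --ignore-failed-read --warning=no-file-changed \
    --warning=no-file-removed -cf snapshot.tar /path/to/dir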
Mikhail Ramendik (538 rep)
Oct 15, 2024, 01:41 PM • Last activity: Oct 15, 2024, 07:06 PM
10 votes
2 answers
1881 views
Make tar (or other) archive, with data block-aligned like in original files for better block-level deduplication?
How can one generate a tar file so the contents of tarred files are block-aligned as in the original files, so one could benefit from block-level deduplication (https://unix.stackexchange.com/a/208847/9689)? (Am I correct that there is nothing intrinsic to the tar format that prevents us from getting such a benefit? Otherwise, if not tar, is there maybe another archiver that has such a feature built in?) P.S. I mean "uncompressed tar" - not tar+gz or something - an uncompressed tar, and the question asks for some trick allowing file contents to be aligned at the block level. As far as I recall, tar was designed for use with tape machines, so maybe adding some extra bits for alignment is possible and easy within the file format? I hope there might even be a tool for it ;). As far as I recall, tar files can be concatenated, so maybe there is a trick for filling space for alignment.
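I'm not aware of an alignment flag in GNU tar, but the layout is at least easy to inspect: each member's data begins exactly one 512-byte block after its header, so --block-number (-R) shows where every file's contents start and whether a given deduplication block size could ever line up:
# prints "block N:" per member; that member's data starts at
# byte offset (N+1)*512 in the uncompressed tar stream
tar --block-number -tvf archive.tar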
Grzegorz Wierzowiecki (14740 rep)
Apr 16, 2016, 03:52 PM • Last activity: Oct 12, 2024, 03:14 PM
1 votes
1 answers
97 views
Why does tar command create this directory structure when extracting a tar file?
I have created this simple directory structure:
$ tree testdir/
testdir/
├── subdir1
│   ├── file1.txt
│   └── file2.txt
├── subdir2
│   ├── file3.txt
│   └── file4.txt
and would like to archive the contents of testdir into a tar file. So I run the following command (note the exclude is because of this answer):
tar cvf /home/myuser/testdir/testdir_tarfile.tar --exclude=/home/myuser/testdir/testdir_tarfile.tar /home/myuser/testdir
Now I want to extract the files (along with their directory structure) once again. So first create a target directory:
$ mkdir /home/myuser/testdir/target_dir
and now extract the tar file to the target directory:
tar xf /home/myuser/testdir/testdir_tarfile.tar -C /home/myuser/testdir/target_dir/
The resulting file structure looks like this:
$ tree testdir/
testdir/
├── subdir1
│   ├── file1.txt
│   └── file2.txt
├── subdir2
│   ├── file3.txt
│   └── file4.txt
├── target_dir
│   └── home
│       └── myuser
│           └── testdir
│               ├── subdir1
│               │   ├── file1.txt
│               │   └── file2.txt
│               └── subdir2
│                   ├── file3.txt
│                   └── file4.txt
└── testdir_tarfile.tar
Why does the target directory now contain subfolders "home" and "myuser"? I would like it to just contain "subdir1" and "subdir2", i.e. like the original archived directory.
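For reference, a sketch of the two standard fixes, assuming GNU tar (it stores the path exactly as given, minus the leading /, hence the home/myuser/testdir prefix): strip the stored leading components at extraction time, or create the archive from relative paths with -C.
# extraction side: drop home/myuser/testdir from the stored names
tar xf testdir_tarfile.tar -C target_dir --strip-components=3
# creation side: store subdir1/... and subdir2/... instead
tar cf testdir_tarfile.tar -C /home/myuser/testdir subdir1 subdir2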
teeeeee (305 rep)
Oct 7, 2024, 08:56 PM • Last activity: Oct 7, 2024, 09:59 PM
1 votes
0 answers
192 views
tar using --transform flag is replacing symlinks when extracting archive
When I unpack a tar.xz archive using this command on Ubuntu 22.04.5 LTS:
tar -xvf foo-bar-linux-x64.tar.xz --transform "s:^[^/]*:foo-bar:"
everything is unpacked into a foo-bar folder instead of a foo-bar-linux-x64 folder (which is the default one, as encapsulated in the archive), but I notice that all symbolic links existing in the archive's file structure now point to -> foo-bar instead of their respective libraries, e.g. in ./foo-bar/lib/: libwhatever.so -> foo-bar instead of libwhatever.so -> libwhatever.so.1.2. Why, and how do I fix this? Edit: I also tried:
tar -xvf foo-bar-linux-x64.tar.xz --transform "s:^[^*/]*/:foo-bar/:"
it works, but it curiously creates two folders: foo-bar-linux-x64, which is empty, and foo-bar/, which is apparently OK.
$ tar --version
tar (GNU tar) 1.34
Copyright (C) 2021 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later .
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Written by John Gilmore and Jay Fenlason.
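As far as the GNU tar manual goes, --transform is applied to symlink targets as well as member names by default, which is why a target like libwhatever.so.1.2 matches ^[^/]* and gets rewritten to foo-bar. The expression accepts scope flags, and S exempts symlink targets - a sketch based on the first command above:
tar -xvf foo-bar-linux-x64.tar.xz --transform "s:^[^/]*:foo-bar:S"
That would also explain why the second attempt mostly works: link targets such as libwhatever.so.1.2 contain no slash, so a pattern requiring a trailing / never touches them.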
s.k (511 rep)
Oct 3, 2024, 09:30 AM • Last activity: Oct 3, 2024, 10:02 AM
0 votes
0 answers
58 views
browse, read, write and rename files in archives, not mount - mc & dolphin
The computer runs a Linux kernel and KDE 6. I use the Dolphin file manager. Marking files and adding them to an archive is a very useful option. Dolphin can go into archive files, but it would be very useful for me to also rename files in archives, and add and/or delete files in archives. How can I do that? Do I need a plugin or something else? Is there any other file manager that can do this? Is what I am looking for possible with Midnight Commander? I know that mc can go into archive files and ISO files, but I cannot rename, delete or add in mc. I have found some solutions like avfs, fuse mounts and archivemount, but I have very many archive files, so mounting them all is not a good way for me. I want to browse into the files I need, not mount an archive as a folder. I have many .tar and .zip and some .rar archive files, but I can convert all of them to any other archive format if that is required.
user447274 (539 rep)
Sep 29, 2024, 07:17 PM
147 votes
10 answers
419836 views
How to unzip a multipart (spanned) ZIP on Linux?
I need to upload a 400MB file to my web server, but I'm limited to 200MB uploads. My host suggested I use a spanned archive, which I've never done on Linux. I created a test in its own folder, zipping up a PDF into test.zip.001, .002, and .003. How do I go about unzipping it? Do I need to join them first? Please note that I'd be just as happy using 7z as I am using ZIP formats, if this makes any difference to the outcome.
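A sketch for the common case - names like test.zip.001 usually mean a plain byte-split, so concatenating the parts in order restores an ordinary zip, and 7z can also follow the numbered chain from the first part directly:
cat test.zip.001 test.zip.002 test.zip.003 > test.zip
unzip test.zip
# or, without joining first
7z x test.zip.001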
Tim (1573 rep)
Jun 11, 2012, 02:41 AM • Last activity: Sep 14, 2024, 04:41 PM