Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
1
votes
1
answers
1886
views
Is there some program that can copy sparse file (/var/log/lastlog) over ssh as fast as cp (on local pc)?
I'm backing up my server via `rsync` over `ssh`, but the `/var/log/lastlog` file is 1.2G (it takes only 24K on the hdd). On a local machine `cp` can copy it in no time (a few ms), but `rsync` requires reading the whole file, which takes hours. I also tried to mount the server's `/var/log` with `sshfs` to my local pc, but my local pc detects the file as 1.2T (so `sshfs` doesn't appear to detect sparse files).
Is there some program that detects sparse files over ssh and can copy them the same way `cp` does (without reading the empty blocks from the file)?
EDIT: `rsync`'s `-S`/`--sparse` option still wants to read the whole source file (with all the empty bytes), which takes hours for a 1.2T file. After `rsync` reads the whole file it creates a small destination file (a proper sparse file), but the problem is that it reads the source file with all the empty bytes (without skipping them). `cp` copies the file in a few ms while `rsync` takes hours. You can try it (on Linux) by creating a 20G sparse file with `truncate -s 20G sparse_file1`, copying it with `rsync -S sparse_file1 sparse_file2` (takes a long time), and then copying it with `cp sparse_file1 sparse_file3` (takes a few ms).
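For reference, the reproduction steps from the edit, gathered into a runnable sketch (the filenames are the question's own examples; timings will vary with hardware):
truncate -s 20G sparse_file1              # 20G apparent size, essentially nothing allocated
time rsync -S sparse_file1 sparse_file2   # reads every byte of the source
time cp sparse_file1 sparse_file3         # skips the holes, returns almost immediately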
FieryRider
(135 rep)
Mar 21, 2019, 06:07 AM
• Last activity: Jul 27, 2025, 03:02 PM
3
votes
1
answers
434
views
How to get the actually used size of a sparse file?
I would like to know how much disk space a sparse file is really using. `ls`, `stat` and similar commands all seem to show its virtual size, i.e. with the holes.
I want to know the real disk usage of the file, without the holes.
How can I do that?
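A minimal sketch of the usual comparison, assuming GNU coreutils: `du` reports allocated blocks by default, while `--apparent-size` (like `ls -l`) reports the logical length including holes.
du -h sparse_file                    # real disk usage (allocated blocks)
du -h --apparent-size sparse_file    # logical size, holes included
stat -c '%s bytes apparent, %b blocks of %B bytes allocated' sparse_file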
peterh
(10448 rep)
Apr 11, 2025, 03:40 PM
• Last activity: Apr 13, 2025, 03:42 PM
10
votes
1
answers
528
views
How to display the non-sparse parts of a sparse file?
Imagine a file created with:
truncate -s1T file
echo test >> file
truncate -s2T file
I now have a 2 tebibyte file (that occupies 4KiB on disk), with `"test\n"` written in the middle. How would I recover that `"test"` efficiently, that is, without having to read the whole file?
tr -d '\0' < file
would give me the result, but that would take hours.
What I'd like is something that outputs only the non-sparse parts of the file (so, above, only `"test\n"` or, more likely, the 4KiB block allocated on disk that stores that data).
There are APIs to find out which parts of the file are _allocated_ (FIBMAP, FIEMAP, SEEK_HOLE, SEEK_DATA...), but what tools expose those?
A portable solution (at least to the OSes that support those APIs) would be appreciated.
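One hedged sketch, assuming e2fsprogs is available: `filefrag -v` exposes the FIEMAP ioctl (falling back to FIBMAP) and lists the allocated extents with their logical offsets, which can then drive a `dd` that reads only those ranges. The offset below is illustrative, taken from filefrag's output rather than known in advance:
filefrag -v file                      # list allocated extents (logical offset, length)
# if filefrag reports a single 4 KiB extent at logical block 268435456 (4 KiB blocks):
dd if=file bs=4096 skip=268435456 count=1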
Stéphane Chazelas
(579252 rep)
Mar 26, 2014, 03:05 PM
• Last activity: Apr 12, 2025, 01:57 PM
7
votes
3
answers
2488
views
Can losetup be made efficient with sparse files?
So my setup is like this.
$ truncate -s 1T volume
$ losetup -f --show volume
/dev/loop0
$ mkfs.ext4 /dev/loop0
$ ls -sh volume
1.1G volume
$ mount /dev/loop0 /mnt/loop
Now I have a 1.1TB volume, as expected. The overhead of ext4 expanded the sparse file to 1.1G, but that's fine. Now to add a file.
$ dd if=/dev/urandom of=/mnt/loop/file bs=1M count=10240
$ ls -sh volume
12G volume
Cool, now I don't want the file.
$ rm /mnt/loop/file
$ ls -sh volume
12G volume
The free space is still taking up space, as expected, and
$ fallocate -d volume
frees up 1 GB.
My question is: how can I zero out the free space here without expanding the volume to its full size? `dd if=/dev/zero` will expand it to full size, and with `conv=sparse` it creates a useless sparse file inside the volume.
TL;DR: Is there a way to make `losetup` ignore writes of null blocks to null sectors, while allowing everything else?
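Not from the question itself, but a commonly tried workaround, assuming the loop driver and the backing filesystem support discard (the loop device then turns discards into hole punches in the backing file); a rough sketch:
mount /dev/loop0 /mnt/loop
rm /mnt/loop/file
fstrim -v /mnt/loop        # discard free space; the loop device punches holes in the backing file
umount /mnt/loop
ls -sh volume              # allocation should shrink without zero-filling anything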
Daffy
(465 rep)
Aug 23, 2018, 12:43 AM
• Last activity: Apr 11, 2025, 11:46 PM
16
votes
2
answers
1912
views
What could explain this strange sparse file handling of/in tmpfs?
On my `ext4` filesystem partition I can run the following code:
fs="/mnt/ext4"
#create sparse 100M file on ${fs}
dd if=/dev/zero \
of=${fs}/sparse100M conv=sparse seek=$((100*2*1024-1)) count=1 2> /dev/null
#show its actual used size before
echo "Before:"
ls ${fs}/sparse100M -s
#setting the sparse file up as loopback and run md5sum on loopback
losetup /dev/loop0 ${fs}/sparse100M
md5sum /dev/loop0
#show its actual used size afterwards
echo "After:"
ls ${fs}/sparse100M -s
#release loopback and remove file
losetup -d /dev/loop0
rm ${fs}/sparse100M
which yields
Before:
0 sparse100M
2f282b84e7e608d5852449ed940bfc51 /dev/loop0
After:
0 sparse100M
Doing the very same thing on tmpfs, i.e. with:
fs="/tmp"
yields
Before:
0 /tmp/sparse100M
2f282b84e7e608d5852449ed940bfc51 /dev/loop0
After:
102400 /tmp/sparse100M
which basically means that something I expected to merely read the data caused the sparse file to "blow up like a balloon".
I expect that this is because of less-than-perfect support for sparse files in the `tmpfs` filesystem, and in particular because of the missing FIEMAP ioctl, but I am not sure what causes this behaviour. Can you tell me?
humanityANDpeace
(15072 rep)
Mar 8, 2017, 11:54 PM
• Last activity: Aug 23, 2024, 11:27 PM
0
votes
1
answers
244
views
Encrypted LUKS fs inside a file: sparse or not?
I have a LUKS encrypted file filled with around 160 GB of data that I use a lot. For safety, I created the file with 400 GB.
That is, of course, a lot of wasted space.
So I switched to a sparse file, basically following the advice here, simply by creating the file with the seek option:
dd if=/dev/zero of= bs=1G count=0 seek=400
But then I thought: what happens if the file begins to be fragmented? Usually this is not a problem since I don't have very big files, and when I do, they are media files that usually don't change. But an encrypted thing that I use so frequently will probably get fragmented quite soon... So my question is: is there any true downside to using a sparse file instead of a fixed-size file in my situation? Is fragmenting really an issue? Do you have any further advice?
Luis A. Florit
(509 rep)
Aug 8, 2024, 06:48 PM
• Last activity: Aug 9, 2024, 10:09 AM
9
votes
2
answers
1599
views
What happens to a sparse file's holes when the space is needed?
## The scenario
Let's say you have a 1GB sparse file in a 2 GB filesystem and the sparse file is taking only 0.1 GB of space.
## The question(s)
I guess you would have 1.9 GB to store data on disk, is that right?
If there's actually 1.9 GB of free space in disk, what if I stored 1.5 GB outside the sparse file, leaving only 0.4 GB of free space (1.5 GB outside the sparse file + 0.1 GB of the sparse file)? Would that mean that even if I see the sparse file as 1GB I would only be able to store 0.5 GB in total inside it (the 0.1 GB in use + 0.4 GB available)?
What would the OS do in this case?
## Thoughts
If the above is true, using a sparse file sounds nice because of the saved space. But say I'm using this file as a storage point (loop device) and I want to count on that space; then using a sparse file wouldn't be a good idea, because if I ever needed the whole 1 GB I wouldn't be able to use it. I may be better off using a non-sparse file.
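A small experiment that makes the trade-off concrete, assuming root and a scratch loop-backed filesystem (paths and sizes are illustrative only):
truncate -s 64M scratch.img && mkfs.ext4 -F scratch.img
mkdir -p /mnt/scratch && mount -o loop scratch.img /mnt/scratch
truncate -s 32M /mnt/scratch/sparse                         # 32M apparent, ~0 allocated
dd if=/dev/urandom of=/mnt/scratch/filler bs=1M             # fill the remaining free space
dd if=/dev/zero of=/mnt/scratch/sparse bs=1M conv=notrunc   # fails with ENOSPC:
# the holes can no longer be backed by real blocks once the filesystem is full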
loco.loop
(213 rep)
Jul 22, 2024, 05:24 AM
• Last activity: Aug 3, 2024, 01:22 AM
3
votes
1
answers
89
views
Why does reading a file with hexdump sometimes change find's sparseness value (%S)?
I use a Lustre file system. I've noticed that if I look at find's sparseness value (`%S`) for a file, then print the file with `hexdump`, then look at find's sparseness value again, sometimes find's sparseness value (`%S`) has changed. Why does it change?
---
Command to look at find's sparseness value (`%S`) for the file `myvideo.mp4`:
find myvideo.mp4 -printf "%S"
Command to read the file `myvideo.mp4` with `hexdump`:
hexdump myvideo.mp4
---
I noticed that behavior on several files. Examples of changes in find's sparseness value (`%S`):
- 0.000135559 to 0.631297
- 0.00466808 to 0.228736
Is it because the file is being partly cached locally when reading with `hexdump`? I noticed that this change isn't specific to `hexdump`; e.g., the same happens with `nano` (and likely any other program that reads the file):
dernoncourt@server:/videos$ find myvideo.mp4 -printf "%S"
0.00302331
dernoncourt@server:/videos$ nano myvideo.mp4
dernoncourt@server:/videos$ find myvideo.mp4 -printf "%S"
0.486752
Franck Dernoncourt
(5533 rep)
Mar 10, 2024, 02:37 AM
• Last activity: Mar 10, 2024, 10:33 PM
20
votes
3
answers
3685
views
How do I uncompress a file with lots of zeroes as a sparse file?
I have a compressed raw image of a very large hard drive created using `cat /dev/sdx | xz > image.xz`. However, the free space in the drive was zeroed before this operation, and the image consists mostly of zero bytes. What's the easiest way to extract this image as a sparse file, such that the blocks of zeroes do not take up any space?
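One minimal sketch, assuming xz-utils and GNU cp: decompress to a pipe and let `cp --sparse=always` turn the long zero runs back into holes.
xz -dc image.xz | cp --sparse=always /dev/stdin image.raw
ls -lhs image.raw     # allocated size vs apparent size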
jaymmer - Reinstate Monica
(835 rep)
Jan 3, 2022, 12:22 PM
• Last activity: Feb 20, 2024, 07:27 AM
3
votes
1
answers
730
views
How to change zram sector size?
I have cloned a disk into a sparse file that is about 80G but actually requires only about 12G; even uncompressed it fits in my memory, but in order to save some resources I want to use **zram**:
sudo modprobe zram num_devices=1
echo 79999997952 | sudo tee /sys/block/zram0/disksize
sudo fdisk -c=dos --sector-size 512 /dev/zram0
However, when I create the partition it uses a 4096-byte sector size even though I told **fdisk** to use 512.
It does not let me type the size of the partition based on a 512-byte sector size, and it is not an exact number that I could divide by 8 to get a 4096-based one, so I did it on a sparse **mbr** instead:
truncate -s79999997952 /tmp/block
fdisk -c=dos --sector-size 512 /tmp/block
# o, n, p, 1, 63, 156232124, t, 7, a, w
sudo dd if=/tmp/block of=/dev/zram0 count=1 bs=512
It seems that with regular files **fdisk** sees no problem using a 512-byte sector size! But **zram** is still weird; I do not know if it is going to work, because it shows a different disk size when in 512 mode:
$ sudo fdisk -lu /dev/zram0
Disk /dev/zram0: 74.5 GiB, 80000000000 bytes, 19531250 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x5f8b6465
Device Boot Start End Sectors Size Id Type
/dev/zram0p1 * 63 156232124 156232062 596G 7 HPFS/NTFS/exFAT
$ sudo fdisk -lu --sector-size 512 /dev/zram0
Disk /dev/zram0: 9.3 GiB, 10000000000 bytes, 19531250 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x5f8b6465
Device Boot Start End Sectors Size Id Type
/dev/zram0p1 * 63 156232124 156232062 74.5G 7 HPFS/NTFS/exFAT
Understand, since 156232062 / 8 = 19529007.75, there is no way it can fit in a 4096-byte sector size.
How can I force **fdisk** or **zram** itself to use a 512-byte sector size?
Tiago Pimenta
(646 rep)
Mar 5, 2019, 02:45 PM
• Last activity: Dec 20, 2023, 04:52 PM
40
votes
10
answers
35260
views
Clone whole partition or hard drive to a sparse file
I would like to clone a whole partition or a whole hard drive onto a larger external disk, but I would like to create a sparse file. I often use `dd` for cloning, but it doesn't support sparse files. As a workaround I used something like:
cp --sparse=always <(dd if=/dev/sda1 bs=8M) /mount/external/backup/sda1.raw
However, this is a little too tricky for my taste and doesn't allow me to resume the process if aborted. It is funny that there is an NTFS tool for this (`ntfsclone`) but no such tool exists for the native file systems of Linux (ext2-4).
Is there some better tool for this, e.g. a `dd` variant with sparse support?
I am not looking for proprietary disk backup software; I simply want to make a sparse clone copy which I can mount as a loop device if required.
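For what it's worth, newer GNU coreutils `dd` grew a `conv=sparse` flag that skips writing blocks that read as all zeros, which covers the basic case (though it still offers no resume support); a minimal sketch, reusing the question's paths:
dd if=/dev/sda1 of=/mount/external/backup/sda1.raw bs=8M conv=sparse status=progress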
Martin Scharrer
(920 rep)
Jul 20, 2011, 07:48 PM
• Last activity: Nov 10, 2023, 10:28 AM
24
votes
4
answers
58111
views
Is there a reason why /var/log/lastlog is a huge sparse file (1.1TB)?
I have read some questions asking for advice on how to `rsync` sparse files efficiently, mentioning the files `/var/log/lastlog` and `/var/log/faillog`. Indeed, I myself have stumbled over those files being an "issue", as backing them up via rsync turns them "unsparse".
What I hence wonder is: what is the need/background motivation for having those files as sparse, huge files (in my case it was 1.1TB)?
Also, a related follow-up: since I assumed them to be log files I do not care about excessively, I truncated those files. Did I corrupt anything by truncating those files?
humanityANDpeace
(15072 rep)
Jul 12, 2019, 01:39 PM
• Last activity: Oct 23, 2023, 02:44 PM
-1
votes
1
answers
148
views
Truncate command not creating a hole
I am trying to create a file with a hole using the truncate command. I read up in some posts, and one of the answers in this post says to use the truncate command. The filesystem used is btrfs. This is the command:
$: truncate -s 16K holes
$: du holes
$: 16 holes
$: stat holes
File: holes
Size: 16384 Blocks: 32 IO Block: 4096 regular file
As can be seen, it's allocating 16 blocks... my understanding was that it would allocate 0 blocks, as mentioned in that answer as well. Did I make a mistake in understanding what truncate is doing?
Shivanshu Arora
(11 rep)
Oct 5, 2023, 02:17 AM
• Last activity: Oct 5, 2023, 04:48 PM
8
votes
1
answers
1227
views
How transparent are sparse files for applications?
I am hoping I understand the sparse file concept. I also know the `cp` command's `--sparse=...` option.
However, when googling for practical applications I found ambiguous statements about how transparent it is for an application which reads/writes files using the common operating system file I/O API (I mean not at the extreme low level, just fopen(), fclose() etc.).
It is not clear, when reading blogs and explanations that talk about how an application (for example a text editor) "ruins" a sparse file by explicitly writing zeros to it. I thought this was the point: if there is a sparse file and the application writes zeros, that will not be physically stored. The application does not have to know about this and does not have to deal with gaps and such things; that is the responsibility of the file system.
**Question**
Suppose there is an existing file which is sparse. Will it be completely **transparent** to the application or not? Say there is a 1G sparse file whose very first byte is non-zero and all other bytes are zero. When a "common" application opens that file, I suppose it can open it, will see its length as 1G, and can seek to the middle (0.5G) as if it were not sparse, write a non-zero byte in the middle, then save and close, and the file will remain sparse on the file system, will it not?
Will a file 'automatically' be sparse? I mean, if an application simply creates a file, then writes a bunch of zeros, then writes more, will it be sparse or not? If not, what should an application do to create that file as sparse?
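A quick way to observe the default behaviour, assuming GNU coreutils: explicitly written zero bytes are stored, while length gained by truncating/seeking past the end is not.
dd if=/dev/zero of=written bs=1M count=100   # writes 100 MiB of zero bytes
truncate -s 100M truncated                   # extends the file without writing anything
du -h written truncated                      # ~100M allocated vs 0 allocated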
g.pickardou
(236 rep)
Jan 26, 2023, 05:49 AM
• Last activity: Jan 27, 2023, 04:46 AM
0
votes
1
answers
1051
views
How to mount a sparsed disk image
I have an image of the mSATA SSD disk of a PC. The disk contains the operating system and has a capacity of 512GB.
I do not have that much free storage, so I have cloned the disk into an image with `dd`, compressing it with `gz`, and later, following the answer to this post, I copied the sparse content so that it occupies little space.
This has worked correctly, resulting in a 512GB image occupying less than 5GB on disk.
As a summary of what has been done:
# dd bs=64K if=/dev/sdd conv=sync,noerror status=progress | gzip -c > /image.img.gz
# gunzip -c /image.img.gz | cp --sparse=always /dev/stdin mini.img
# ls -lhs
4,8G -rw------- 1 balon users 477G ene 13 10:54 mini.img
2,3G -rw-r--r-- 1 balon users 2,3G ene 11 08:32 minimal-industrial-pc.img.gz
So far, everything is correct. The problem comes when I try to mount the image (since I want to chroot into it and make some changes to the file system).
I have tried the following:
1. fdisk
# fdisk -l mini.img
The size mismatch of the primary master boot record (GPT PMBR) (1000215215 != 1000215295) will be corrected by writing.
The backup GPT table is not at the end of the device.
Disk mini.img: 476,94 GiB, 512110231552 bytes, 1000215296 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 74BC899D-E8BA-4C70-B82D-6F4E8F6343A3
Device Start End Sectors Size Type
mini.img1 2048 2203647 2201600 1G EFI System
mini.img2 2203648 6397951 4194304 2G Linux file system
mini.img3 6397952 1000212479 993814528 473.9G Linux file system
2. kpartx
# kpartx -a -v mini.img
GPT:Primary header thinks Alt. header is not at the end of the disk.
GPT:Alternate GPT header not at the end of the disk.
GPT: Use GNU Parted to correct GPT errors.
add map loop1p1 (254:4): 0 2201600 linear 7:1 2048
add map loop1p2 (254:5): 0 4194304 linear 7:1 2203648
add map loop1p3 (254:6): 0 993814528 linear 7:1 6397952
In this case there seem to be no problems mounting `loop1p1` and `loop1p2`, but with `loop1p3`, which I understand corresponds to the Ubuntu root system, there is no way.
3. gdisk
# gdisk -l mini.img
GPT fdisk (gdisk) version 1.0.9.1
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk mini.img: 1000215296 sectors, 476.9 GiB
Sector size (logical): 512 bytes
Disk identifier (GUID): 74BC899D-E8BA-4C70-B82D-6F4E8F6343A3
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 1000215182
Partitions will be aligned on 2048-sector boundaries
Total free space is 4717 sectors (2.3 MiB)
Number Start (sector) End (sector) Size Code Name
1 2048 2203647 1.0 GiB EF00
2 2203648 6397951 2.0 GiB 8300
3 6397952 1000212479 473.9 GiB 8300
What am I doing wrong?
Francisco de Javier
(1311 rep)
Jan 13, 2023, 10:11 AM
• Last activity: Jan 13, 2023, 05:59 PM
1
votes
1
answers
207
views
Extracting text with expressions from a text file using grep
I have the following 2 lines in my text file and want to calculate the duration between lines X & Y in minutes.
line X: 18.05.2022 13:54:52 [ INFO]: Starting Component 'OWN_FUNDS_RULES' (5/15)
line Y: 18.05.2022 14:28:22 [ INFO]: Finished Component 'OWN_FUNDS_RULES_CONSOLIDATION' (6/15) with SUCCESS - 00:07:05.119
I have the following code which is returning durations as zero.
cd /logs/
Header="OFRComponentCalculation"
echo $Header >OutputFile.csv
for file in log_Job_*/process.log; do
### OFRComponentCalculation ###
{
OFRS="$(grep 'Starting Component*OWN_FUNDS_RULES*' "$file" | awk '{print $3,$4}' | cut -d: -f2-)"
OFRE="$(grep 'Finished Component*OWN_FUNDS_RULES_CONSOLIDATION*' "$file" | awk '{print $1,$2}' | cut -d: -f1-)"
convert_date() { printf '%s-%s-%s %s' ${1:6:4} ${1:3:2} ${1:0:2} ${1:11:8}; }
# Convert to timestamp
OFRS_TS=$(date -d "$(convert_date "$OFRS")" +%s)
OFRE_TS=$(date -d "$(convert_date "$OFRE")" +%s)
# Subtract
OFRD=$((OFRS_TS - OFRE_TS))
# convert to HH:MM:SS (note, that if it's more than one day, it will be wrong!)
OFRComponentCalculation=$(date -u -d "@$OFRD" +%H:%M:%S)
echo "$OFRComponentCalculation"
}
Var="$OFRComponentCalculation"
echo $Var >>OutputFile.csv
done
I suspect I am messing something up when writing the grep commands for these 2 lines; can anyone help me?
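One thing worth noting, as a hedged guess at the cause: in grep's regular expressions `*` repeats the previous character rather than matching arbitrary text, so patterns like `Starting Component*OWN_FUNDS_RULES*` never match and the variables stay empty; `.*` is probably what was intended, e.g.:
grep 'Starting Component.*OWN_FUNDS_RULES' "$file"
grep 'Finished Component.*OWN_FUNDS_RULES_CONSOLIDATION' "$file"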
Manoj Kumar
(127 rep)
May 19, 2022, 02:13 PM
• Last activity: May 19, 2022, 08:28 PM
0
votes
1
answers
204
views
Shrunk virtual disk file, now reported file size doesn't match
I originally had a single virtual disk file (.qcow2) of roughly 3.5 TiB, and I decided to "group" all similar data on its own virtual disk file, for various reasons. Obviously that meant I could shrink the original disk file (to 900 GiB), but `ls`, `stat` and friends still list the original 3.5 TiB:
$ ls -lhks mydisk.qcow2
722G -rw-r--r-- 1 root root 3.5T Aug 6 18:06 mydisk.qcow2
Of course I shrank the filesystem itself (ext4) and sparsified the file afterwards, which is why only 722 GiB is currently allocated. But my backup tools also still detect 3.5 TiB and so will scan it all, while the disk itself is only capable of holding 900 GiB, meaning it takes almost 4 times longer to finish than it has to.
How can I "refresh" the reported sizes? This is a machine with HDDs so I'm thinking maybe the file is too fragmented and there's some stuff around the 3.5 TiB mark? But wouldn't copying the file fix that automatically (I tried and it didn't, at least)? Also, the disk file itself resides on ZFS, if that matters.
If at all possible any solution would preferably work __in-place__. Shutting down the VM using the disk for a little while is not a problem.
Sahbi
(111 rep)
Aug 6, 2021, 04:45 PM
• Last activity: May 15, 2022, 07:07 PM
9
votes
1
answers
13270
views
qcow2 actual size
I am a little bit confused about the real size of qcow2 files.
ls -alh VMs/ubuntu-mini.qcow2
-rw------- 1 root root 21G mar 31 23:15 VMs/ubuntu-mini.qcow2
du -h VMs/ubuntu-mini.qcow2
2,7G VMs/ubuntu-mini.qcow2
I wanted to copy that file to a different partition (ext4). It looks like this command actually copied **21G**, not only the **2,7G** I expected. And now, in the new location, both commands (du and ls) show me the same size: 21G.
du -h /media/HDD0/VMs/ubuntu-mini.qcow2
21G /media/HDD0/VMs/ubuntu-mini.qcow2
ls -alh /media/HDD0/VMs/ubuntu-mini.qcow2
-rw------- 1 root root 21G mar 31 23:15 /media/HDD0/VMs/ubuntu-mini.qcow2
What's the proper way of copying qcow2 files? Is there a switch in the `cp` command which makes it copy only 2,7G?
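A minimal sketch, assuming GNU cp: `--sparse=always` re-detects runs of zero bytes while copying, so the destination stays thin; `du` and `ls` will then disagree again the way they did at the source.
cp --sparse=always VMs/ubuntu-mini.qcow2 /media/HDD0/VMs/ubuntu-mini.qcow2
du -h  /media/HDD0/VMs/ubuntu-mini.qcow2   # allocated size
ls -lh /media/HDD0/VMs/ubuntu-mini.qcow2   # apparent size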
dePablo
(133 rep)
Apr 4, 2018, 12:12 PM
• Last activity: Apr 14, 2022, 06:52 PM
3
votes
3
answers
1906
views
From a sparse file to a block device
I have a system image sparse file whose actual size is only a few MBs but whose "apparent size" is about 1GB. I'm trying to write it to a block device efficiently (without the holes). Here are some non-working solutions I've tried:
- `dd if=sparse_file of=/dev/some_dev` processes the whole file including the holes, so at the end I'm getting something like `1007869952 bytes (1,0 GB) copied, 22,0301 s, 45,7 MB/s`
- `cp --sparse=always sparse_file /dev/some_dev` also does not seem to work, as it takes a long time for a few MBs (~13s)
- `ddrescue --sparse --force sparse_file /dev/some_dev` fails with the message `ddrescue: Only regular files can be sparse.` (Note: it works in the opposite direction, as covered here.)
There are 2 other ways covered here, but I'd like to use only standard tools that are part of a Linux distribution.
So is there a way to write the sparse file to a block device, skipping the holes?
Artak Begnazaryan
(203 rep)
Nov 26, 2014, 04:27 PM
• Last activity: Mar 29, 2022, 05:00 PM
17
votes
5
answers
11323
views
Creating an arbitrarily large "fake" file
I would like to create a special file similar to `/dev/null` or `/dev/random`, where the file doesn't actually exist but you can read from it all the same, except that I could actually set a cap on the apparent size of the file.
To put it another way, I want to create a special file where (assuming I set the cap at 500GB) when I "cat" the file it will output all 500GB and then stop. It needs to act the same as an actual 500GB file, but without taking up the space. The contents of this file don't matter; it could be all `\0`'s like `/dev/null`, or just a small string sent over and over, or whatever.
Is this something that's doable? The only thing remotely close I've been able to find is man pages talking about `mknod`, but those weren't very helpful.
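One low-tech approximation, assuming the filesystem supports sparse files (it relaxes the "doesn't actually exist" constraint: an inode exists, but no data blocks do): a plain sparse file of the desired apparent size reads back as zeros.
truncate -s 500G fake500g          # 500G apparent size, essentially no allocation
du -h --apparent-size fake500g     # 500G
du -h fake500g                     # ~0
cat fake500g > /dev/null           # streams 500G of \0 without using disk space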
Mediocre Gopher
(272 rep)
May 10, 2012, 02:09 AM
• Last activity: Jan 29, 2022, 12:31 AM
Showing page 1 of 20 total questions