Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
2
votes
0
answers
215
views
Samba does not show all ZFS snapshots
I am setting up a file server that uses Samba together with ZFS.
I have configured Samba such that it shows the ZFS snapshots under "Previous Versions" in Windows; following is an excerpt of my Samba config:
vfs objects = acl_xattr shadow_copy2
shadow:snapdir = .zfs/snapshot
shadow:format = -%Y-%m-%d-%H%M
shadow:snapprefix = ^zfs-auto-snap_\(frequent\)\{0,1\}\(hourly\)\{0,1\}\(daily\)\{0,1\}\(monthly\)\{0,1\}
shadow:delimiter = -20
Further, I have configured zfs-auto-snapshot (https://github.com/zfsonlinux/zfs-auto-snapshot/tree/master), and now I can see the "Previous Versions" of any folder in Windows Explorer.
However, when I want to see the previous versions of an ordinary file, that doesn't work.
I wonder how I should change my Samba configuration so that I can see the previous versions of files as well. I believe this is possible, but I don't know how.
I am using Fedora 38 and Samba version 4.18.6.
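As a quick sanity check (a sketch only; the dataset name tank/share is illustrative), it can help to compare the actual snapshot names against the shadow:snapprefix/shadow:format pattern and to confirm which shares really load the VFS modules, since shadow_copy2 silently hides snapshots whose names it cannot parse:
# list the newest snapshot names so they can be compared against shadow:format
zfs list -H -t snapshot -o name -r tank/share | tail -n 5
# dump the effective Samba configuration and check the vfs objects / shadow settings per share
testparm -s | grep -iA4 'vfs objects'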


T. Pluess
(626 rep)
Sep 6, 2023, 01:54 PM
• Last activity: May 15, 2025, 03:03 PM
4
votes
2
answers
2589
views
How can I mount a VDI with snapshot?
Working on Linux Mint 18.1, VirtualBox 5.0.40_Ubuntu.
I have a VDI file from a VirtualBox VM:
~/VirtualBox\ VMs/Win10x64/Win10x64.vdi
I've taken a Snapshot:
~/VirtualBox\ VMs/Win10x64/Snapshots/{GUID}.vdi
I want to mount the guest's HDD **from the snapshot**.
I can successfully mount the base VDI using qemu-nbd:
qemu-nbd -c /dev/nbd0 ~/VirtualBox\ VMs/Win10x64/Win10x64.vdi
But if I try with the Snapshot file:
qemu-nbd -c /dev/nbd0 ~/VirtualBox\ VMs/Win10x64/Snapshots/{GUID}.vdi
it fails with:
unsupported VDI image (non-NULL link UUID)
I did notice the --snapshot parameter for qemu-nbd, but this doesn't seem to be the right thing.
How can I mount the HDD as it is in the snapshot?
**Edit #1**
I've also tried vdfuse, but again, there doesn't seem to be any way of "applying" the differencing disk.
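One workaround that is sometimes suggested (a sketch, not verified against this exact setup; the target file name /tmp/win10-snap.vdi and the partition number are illustrative) is to let VirtualBox flatten the differencing image into a standalone copy and attach that with qemu-nbd, since cloning a snapshot's diff image with VBoxManage clonemedium produces a full image with the merged state:
# clone the differencing image into a standalone VDI (merges the chain up to that snapshot)
VBoxManage clonemedium disk ~/VirtualBox\ VMs/Win10x64/Snapshots/{GUID}.vdi /tmp/win10-snap.vdi --format VDI
# attach and mount the merged copy as usual
sudo qemu-nbd -c /dev/nbd0 /tmp/win10-snap.vdi
sudo partprobe /dev/nbd0
sudo mount /dev/nbd0p2 /mnt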
Bridgey
(203 rep)
May 12, 2017, 02:24 PM
• Last activity: May 6, 2025, 03:10 AM
0
votes
2
answers
3634
views
Best strategy to backup btrfs root filesystem?
I have a Btrfs root partition with an @ root subvolume and an @home subvolume, and I do auto-snapshots during updates as well as Timeshift scheduled snapshots, both of which are saved on the same drive. This is great, but I want to have extra redundancy in case of a drive failure.
In my last setup on Debian, I used the ext4 file system and put my timeshift rsync backups on an external drive.
How can I do something similar, i.e. backup to an external drive, while still taking snapshots on the root device?
In addition to the system device, which is a 1 TB SSD formatted as Btrfs, I have a 2 TB HDD currently formatted with two NTFS partitions, since I dual boot Windows as well. I would be willing to move completely to a Linux file system on that drive, but I don't know how I would handle backing up the root drive. I thought about writing a disk image onto the HDD with dd, but if I do this, I would (a) lose an extra TB of storage, if I understand correctly how dd works, and (b) not know how to restore from the image. Ideally, I would like to have a Btrfs partition on the second drive for backups of the root device only, and a second (e.g. ext4 or NTFS) partition just for overflow data storage.
Essentially, my question is: How can I facilitate a backup of my already "snapshotting" root partition (and also know how to restore from it)?
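One common pattern for this (a minimal sketch, assuming the top-level Btrfs volume is mounted at /mnt/top and the HDD gets a Btrfs partition mounted at /mnt/backup) is btrfs send/receive of read-only snapshots, which can later be made incremental:
# take a read-only snapshot of the root subvolume
btrfs subvolume snapshot -r /mnt/top/@ /mnt/top/@-backup-2024-01-01
# send the full snapshot to the Btrfs partition on the second drive
btrfs send /mnt/top/@-backup-2024-01-01 | btrfs receive /mnt/backup/
# subsequent runs only send the difference against a snapshot present on both sides
btrfs subvolume snapshot -r /mnt/top/@ /mnt/top/@-backup-2024-02-01
btrfs send -p /mnt/top/@-backup-2024-01-01 /mnt/top/@-backup-2024-02-01 | btrfs receive /mnt/backup/
Restoring is the same operation in reverse: send the backup snapshot from /mnt/backup back into the root pool and adjust the default subvolume or fstab.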
weygoldt
(69 rep)
Mar 23, 2022, 01:45 PM
• Last activity: Apr 23, 2025, 05:01 AM
3
votes
2
answers
3797
views
Recommended scheme for partitioning root file system into subvolumes following the Filesystem Hierarchy Standard
The Filesystem Hierarchy Standard (FHS) is the formal codification of the root file tree on Linux installations, as inherited from earlier iterations of Unix and POSIX, and subsequently adapted. It standardizes the exact uses of the familiar /home, /etc, /usr, /var, and so on, resolving various historic differences of convention, and specifies where application-specific and site-specific file names may be added, or not.
Basic Linux installations historically have placed the entire tree on a single file system, though some variations have utilized a separate partition for /home, presumably to facilitate backup and migration.
More recently, Btrfs has gained increasing adoption, which allows a single partition to host various subvolumes. Subvolumes are appealing because they may be captured in snapshots, and require no pre-allocation of space.
The mapping of subvolumes to nodes in the FHS appears to vary widely.
Sensible standards and policies on such matters are important for supporting optimal management of files on the system with respect to snapshots and related concerns.
Following are some observations:
- Debian appears to place the entire tree on a single subvolume beneath root.
- Ubuntu appears to allocate a subvolume for /home, and another for the remainder of the root tree.
- Arch Linux appears to extend the separation adopted by Debian by placing /var/log and /var/cache each in a separate subvolume.
- openSUSE has a single subvolume for /var, and one each for /home, /root, /usr/local, /opt, and /srv, as well as one for the remainder of the root tree, and a further one for each installed GRUB architecture.
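For concreteness, this is roughly how the Ubuntu-style split above is expressed in /etc/fstab (a sketch only; the UUID is a placeholder and the subvolume names follow the @/@home convention):
# /etc/fstab: one Btrfs partition, two subvolumes mounted at different FHS nodes
UUID=xxxx-xxxx  /      btrfs  defaults,subvol=@      0  0
UUID=xxxx-xxxx  /home  btrfs  defaults,subvol=@home  0  0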
Have any standards emerged that have attempted to resolve the various design considerations, and to unify the approaches adopted by various operating systems? Has any agreement emerged concerning how to reconcile the functions of the various file tree nodes with policies concerning snapshots?
brainchild
(340 rep)
Nov 8, 2022, 02:11 PM
• Last activity: Apr 11, 2025, 05:07 AM
0
votes
0
answers
41
views
Use device mapper snapshots to create multiple versions of the same machine and choose at boot
I already use LVM to create snapshots of volumes, and I recently discovered that device mapper can manage snapshots "manually" without LVM.
It would be useful to be able to create a snapshot of the current system, and then create multiple versions from that base and choose which version to use. With virtual machines this is quite easy, but I have not found how to do it on a bare-metal machine (i.e. a laptop).
Even more useful would be to be able to choose at boot which version to boot, e.g. in the Grub menu. Then one could boot the COW device and make all the changes to the base system (install packages, configure things...), which will be saved on the COW device but not the base snapshot. At the next boot, one could choose another COW device to do other changes.
I know it can be done (not easily) with Btrfs but I'd like it at the device level, so one could even use dmcrypt to create independent versions of the same encrypted device (although probably it would be useless to change the passphrase since the key would remain the same).
Another option would be to use overlayfs, but it is quite difficult to set up correctly (e.g. kernel and initramfs updates turn it into a mess).
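For reference, the device-mapper plumbing for one such "version" looks roughly like this (a sketch, assuming /dev/sda2 is the base root device and /dev/sdb1 is a spare partition used as the COW store; repeating it with /dev/sdb2, /dev/sdb3, ... gives further independent versions):
# writable snapshot of the base device, with all changes going to the COW partition
dmsetup create rootv1 --table "0 $(blockdev --getsz /dev/sda2) snapshot /dev/sda2 /dev/sdb1 P 8"
# optional: an origin mapping so the base can still be written while snapshots track it
dmsetup create rootbase --table "0 $(blockdev --getsz /dev/sda2) snapshot-origin /dev/sda2"
Booting from one of these mappings would still have to be wired up in the initramfs before the root mount, which is the distribution-specific part that GRUB alone cannot do.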
Kzar
(111 rep)
Apr 7, 2025, 05:57 PM
0
votes
2
answers
2681
views
How can Nextcloud data be backed up and restored independently? (Versioning / Snapshots)
Whenever multiple users are interacting with a Nextcloud installation, there is a possibility of error. Potentially, a family member may delete an old picture or a co-worker might accidentally tick off a task or calendar event, resulting in issues for other users.
When full filesystem snapshots or backups of the whole Nextcloud directory are available, they can be used to restore an old state of the entire server. This is fine in case of a complete system crash.
However, it becomes an issue if the problem is only noticed after a while and, in the meantime, users have modified other data. Then a choice must be made between rescuing the old data (and destroying all progress since) and keeping the current state.
The Nextcloud documentation only describes a way to restore the entire installation.
Is there a way to more intelligently back up all Nextcloud data automatically (files, messages, calendars, tasks, etc.) so that it can be restored independently? (Maybe even in an online state?)
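A partial, file-level approach (a sketch, assuming filesystem snapshots are available; the user name alice and the snapshot path are illustrative) is to copy just the affected user's files back out of a snapshot and then let Nextcloud re-index them with occ files:scan. Calendars, tasks and messages live in the database, so they would need a separate dump-based strategy:
# restore one user's files from a read-only snapshot into the live data directory
rsync -a /data/.snapshots/2024-03-01/nextcloud/alice/files/ /var/www/nextcloud/data/alice/files/
# make Nextcloud pick up the restored files
sudo -u www-data php /var/www/nextcloud/occ files:scan --path="alice/files"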
Prototype700
(73 rep)
Nov 2, 2021, 10:14 PM
• Last activity: Apr 6, 2025, 04:01 PM
0
votes
0
answers
14
views
CentOS 7 snapshot and restoring it to a new OS
We want to upgrade an old CentOS 7 installation on a Linux VPS to either *CentOS Stream* 9 or *AlmaLinux 9*.
For that, we are thinking of first making a **snapshot of the whole VPS content**.
Is it possible to make the snapshot and download it, for later restoration on the new OS?
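If the hoster does not offer downloadable snapshots or images, a file-level archive made from inside the VPS is a common fallback (a sketch; paths, the hostname and the exclude list are illustrative and would need tuning):
# archive the root filesystem, staying on one filesystem and keeping ownership/permissions
tar --numeric-owner --one-file-system -czpf /root/vps-backup.tar.gz \
    --exclude=/root/vps-backup.tar.gz --exclude=/proc --exclude=/sys --exclude=/dev /
# download it to a local machine for safe keeping
scp root@vps.example.com:/root/vps-backup.tar.gz .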
Igor Savinkin
(139 rep)
Mar 7, 2025, 09:14 AM
1
votes
0
answers
95
views
Backup of KVM/virsh VM
I want to create backups of my running KVM VMs using virsh (qcow2). The VMs don't have snapshots, but in order to create a clean backup I want to create a snapshot, back up the backing file, and then discard (merge back into the running file) the snapshot.
Most importantly, I need to be able to restore these, too.
What I *don't* want is to suspend or stop the VM at any point during backup except for the instant during the creation and removal of the snapshot.
So far I have the following.
### BACKUP
# virsh snapshot-create-as VM_NAME SNAPSHOT_NAME --disk-only --quiesce
# virsh dumpxml VM_NAME > /VM_NAME.xml
// do backup like rsync, ftp, cp, etc. of the VM_NAME.img and VM_NAME.xml
# virsh snapshot-delete VM_NAME SNAPSHOT_NAME
### RESTORE
// assuming the VM is shut down or deleted, copy VM_NAME.img to the desired folder
// ensure xml file has correct path to VM_NAME.img
# virsh define /VM_NAME.xml
Is this about right?
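For reference, the "discard (merge back to running file)" step described above is, with external disk-only snapshots, commonly done with blockcommit before removing the snapshot metadata (a sketch, assuming the disk target is vda):
# merge the temporary overlay back into the base image while the VM keeps running
virsh blockcommit VM_NAME vda --active --pivot --verbose
# then remove only libvirt's snapshot metadata (the overlay file can be deleted afterwards)
virsh snapshot-delete VM_NAME SNAPSHOT_NAME --metadata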
twkonefal
(111 rep)
Feb 16, 2025, 11:47 PM
0
votes
1
answers
70
views
apt-btrfs-snapshot isn't supported despite using btrfs
> sudo apt-btrfs-snapshot --debug
The system does not support apt-btrfs-snapshot
## Root partition
> blkid | grep -i Tuxedo
/dev/nvme0n1p2: LABEL="Tuxedo" UUID="4777728a-1d39-4af3-aeca-4fdf5a3d6cf0" UUID_SUB="34bee598-a820-4335-b372-f99862461189" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="5403ca92-4062-934c-8e52-5a225b593790"
My root partition is on Btrfs, and the root itself is in the @ subvolume.
Here's the list of subvolumes on my primary SSD:
> ls -l /mnt/tmp/
total 32
drwxr-xr-x 1 root 1001 256 Oct 15 01:37 @
drwxr-xr-x 1 root root 24 Sep 21 11:09 @home
drwxrwxr-x 1 root syslog 916 Nov 4 00:00 @logs
drwxr-xr-x 1 root root 1184 Oct 31 03:15 @snap
drwxr-xr-x 1 root root 0 Oct 7 23:21 @snapshots
drwxr-xr-x 1 root root 16 Oct 7 23:34 @swap
drwxr-xr-x 1 root root 210 Oct 8 00:18 timeshift-btrfs
drwxrwxrwt 1 root root 1398 Nov 4 01:02 @tmp
drwxr-xr-x 1 root root 386 Nov 3 20:14 @var-lib-snapd
drwxr-xr-x 1 root root 1166 Oct 31 03:15 @var-snap
Operating System: TUXEDO OS 4
KDE Plasma Version: 6.2.3
KDE Frameworks Version: 6.8.0
Qt Version: 6.8.0
Kernel Version: 6.11.0-105009-tuxedo (64-bit)
Graphics Platform: Wayland
Processors: 24 × AMD Ryzen 9 5900X 12-Core Processor
Memory: 62.7 GiB of RAM
Graphics Processor: AMD Radeon RX 6900 XT
Manufacturer: Gigabyte Technology Co., Ltd.
Product Name: X570S AORUS MASTER
System Version: -CF
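As far as I know, apt-btrfs-snapshot's support check is based on what /etc/fstab declares for / (it expects a Btrfs root mounted from a subvolume, conventionally subvol=@), so a first step is to compare the running mount with the fstab entry (a sketch):
# how the running root is actually mounted (source, fstype and the subvol= option)
findmnt -no SOURCE,FSTYPE,OPTIONS /
# what /etc/fstab says about the root mount (this is, as far as I know, what the tool inspects)
grep -E '[[:space:]]/[[:space:]]' /etc/fstab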
YamiYukiSenpai
(141 rep)
Nov 25, 2024, 07:10 AM
• Last activity: Dec 4, 2024, 07:21 AM
0
votes
0
answers
67
views
Safe to delete snapshot parent?
Can I safely delete the /root snapshot and replace it with its child? Or does the parent snapshot need to be kept around?
Default structure from Fedora, emphasis mine:
$ sudo btrfs subvolume list /
ID 256 gen 1000272 top level 5 path home
ID **257** gen 1000272 top level 5 path root
ID 258 gen 1000243 top level 257 path var/lib/machines
ID 259 gen 476 top level **257** path snapshots/BACKUP_ROOT
ID 260 gen 477 top level 257 path snapshots/BACKUP_HOME
I know I can mount any other snapshot as root. What worries me is the parent/child structure (dependency?). What happens to a snapshot whose parent got deleted?
Or should I just rsync from the snapshot to /root?
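A way to convince yourself (a sketch, assuming the top-level volume, ID 5, is mounted at /mnt/top) is to look at the snapshot's own metadata: the parent/received UUIDs are informational rather than a data dependency, and a writable copy of the backup can be taken regardless of what happens to the original root subvolume:
# show the snapshot's metadata; "Parent UUID" records ancestry but is not a data dependency
btrfs subvolume show /mnt/top/root/snapshots/BACKUP_ROOT
# a writable copy of the backup, independent of the original root subvolume
btrfs subvolume snapshot /mnt/top/root/snapshots/BACKUP_ROOT /mnt/top/root-restored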
Jakub Fojtik
(130 rep)
Nov 4, 2024, 10:45 AM
6
votes
3
answers
4412
views
Is it possible to create a BTRFS snapshot of a directory?
When I tried to create a snapshot:
[root@localhost ~]# btrfs subvolume snapshot /home/admin2/ /.snapshots/s2
ERROR: Not a Btrfs subvolume: Invalid argument
How can I create a Btrfs snapshot of a directory?
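Snapshots only work on subvolumes, not plain directories, so the usual workarounds (a sketch; paths are taken from the question) are either a reflink copy of the directory, which shares data blocks much like a snapshot would, or converting the directory into a subvolume first so that future snapshots work:
# option 1: a one-off reflink copy, sharing all unmodified data blocks with the original
cp -a --reflink=always /home/admin2 /.snapshots/s2
# option 2: convert the directory into a subvolume once, then real snapshots work afterwards
btrfs subvolume create /home/admin2.subvol
cp -a --reflink=always /home/admin2/. /home/admin2.subvol/
btrfs subvolume snapshot /home/admin2.subvol /.snapshots/s2-from-subvol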
Viktoria D
(63 rep)
Jan 7, 2021, 06:27 PM
• Last activity: Nov 1, 2024, 10:02 AM
3
votes
2
answers
564
views
How to Remove a Large File from ZFS Snapshots and Reclaim Space?
I have a series of daily ZFS 100 MB snapshots of archive_zp/my_zfs, going back 100 days. At day 60 (between @ss59%ss60) I added a 10 GiB file, archive_zp/my_zfs/BIG.iso. That 10 GiB now appears in the 40 snapshots since, and has therefore been retaining 10 GiB of the archive_zp zpool's space ever since.
I don't want to retain the file BIG.iso in any snapshot, and I do want to reclaim the 10 GiB of space that BIG.iso uses. However, I also need to retain the other content (files, daily snapshots, etc.) in all 40 daily snapshots since.
My gut feeling is that I could follow this path:
* zfs clone archive_zp/my_zfs@ss60 my_zfs_clone,
* rm archive_zp/my_zfs_clone60/BIG.iso,
* zfs promote archive_zp/my_zfs_clone - this retains earlier zpool blocks prior to @ss60.
Then I get a little vague... (to avoid snapshot UID issues) Do I use an rsync sequence like:
* rsync -av --progress /archive_zp/my_zfs/.zfs/snapshot/ss61 /archive_zp/my_zfs_clone
* zfs snapshot archive_zp/my_zfs_clone@ss61
* etc. for @ss62 .. @ss100
Or might I take advantage of ZFS replication, especially given BIG.iso was unchanged in ss61%ss100? E.g.:
* zfs send -I archive_zp/my_zfs@ss61 archive_zp/my_zfs@ss100 | zfs receive archive_zp/my_zfs_clone
With a tidy-up at the end:
* review archive_zp/my_zfs_clone - then if all OK...
* zfs destroy archive_zp/my_zfs
* rename archive_zp/my_zfs_clone to archive_zp/my_zfs
Is there a best practice, or a zfs reclaim tool, to achieve this BIG.iso space recovery?
Maybe a zfs destroy archive_zp/my_zfs/BIG.iso@ss60%ss100 could do the trick?
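Before restructuring anything, it may be worth measuring what would actually be reclaimed; both commands below are read-only (the -n flag makes the destroy a dry run):
# dry run: show which snapshots would go and how much space destroying the range would free
zfs destroy -nv archive_zp/my_zfs@ss61%ss100
# per-dataset breakdown of space held by snapshots vs. the live filesystem
zfs list -o space -r archive_zp/my_zfs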
NevilleDNZ
(250 rep)
Jul 14, 2024, 09:37 AM
• Last activity: Oct 14, 2024, 06:05 AM
0
votes
1
answers
36
views
Device Mapper Snapshot Stored In An Expanding File
Is there a way to create a block device that is backed by a file that grows in size as the block device is written to? I'm looking to use device mapper snapshots (ie: `dmsetup create` command that specifies `snapshot` as the type of item to create) and according to the documentation the snapshot nee...
Is there a way to create a block device that is backed by a file that grows in size as the block device is written to?
I'm looking to use device mapper snapshots (i.e. the dmsetup create command with snapshot as the target type), and according to the documentation the snapshot needs to be stored on a block device. One way to create this block device is to use losetup to turn a file into a block device. However, that requires a fixed-size file. I'm trying to figure out whether there is any way to avoid a fixed-size file and instead have the file that backs the block device (which in turn backs the snapshot) grow as the snapshot grows.
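One way to get that behaviour (a sketch; size and path are illustrative) is to back the loop device with a sparse file: the block device presents a fixed apparent size, but disk blocks are only allocated as the snapshot writes to them:
# create a 100 GiB sparse file; it occupies almost no disk space initially
truncate -s 100G /var/lib/cow-store.img
# expose it as a block device usable as the dm-snapshot COW store
losetup --find --show /var/lib/cow-store.img
# apparent size vs. actual allocation (the latter grows as the snapshot fills)
du -h --apparent-size /var/lib/cow-store.img
du -h /var/lib/cow-store.img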
Harry Muscle
(2697 rep)
Sep 21, 2024, 02:17 AM
• Last activity: Sep 21, 2024, 11:56 AM
0
votes
1
answers
34
views
Adding a fresh zfs sub-dataset pri_zp/Z1/Z99-future to pri_zp/Z1, and resuming recursive replication to sec_zp/Z1
I have set up and replicated the OpenZFS dataset pri_zp/Z1 (with pri_zp/Z1/Z00-initial) to sec_zp/Z1 using a zfs send -R.
But then (months later), when I try to create (and replicate) a newer dataset, called pri_zp/Z1/Z99-future, the replication to sec_zp/Z1 fails.
How do I add a fresh sub-dataset pri_zp/Z1/Z99-future, and enable its recursive replication?
Below is an example trace of the zfs commands that demonstrates the hiccup I encountered.
**A. Setup Z1 base with Z00-initial:**
# zfs create pri_zp/Z1
# zfs create pri_zp/Z1/Z00-initial
# zfs snapshot -r pri_zp/Z1@ssA
# zfs send -V -R pri_zp/Z1@ssA | zfs receive -d sec_zp
full send of pri_zp/Z1@ssA estimated size is 79.1K
full send of pri_zp/Z1/Z00-initial@ssA estimated size is 78.6K
total estimated size is 158K
**B. Send the next snapshot, ssB:**
# zfs snapshot -r pri_zp/Z1@ssB
# zfs send -V -R -I pri_zp/Z1@ss{A,B} | zfs receive -d sec_zp
send from @ssA to pri_zp/Z1@ssB estimated size is 624B
send from @ssA to pri_zp/Z1/Z00-initial@ssB estimated size is 624B
total estimated size is 1.22K
**C. Start integration of Z99-future:**
# zfs create pri_zp/Z1/Z99-future
# zfs snapshot -r pri_zp/Z1@ssC
# zfs send -V -I pri_zp/Z1@ss{B,C} | zfs receive -d sec_zp
send from @ssB to pri_zp/Z1@ssC estimated size is 61.1K
total estimated size is 61.1K
Everything works fine until here. But now I need to send the initial ssC snapshot of pri_zp/Z1/Z99-future@ssC. (It is not an incremental.)
**This next zfs command probably triggers the problem, but it cannot be done recursively:**
# zfs send -V pri_zp/Z1/Z99-future@ssC | zfs receive -d sec_zp
**D. Here the problem manifests itself:**
# zfs snapshot -r pri_zp/Z1@ssD
# zfs send -V -R -I pri_zp/Z1@ss{C,D} | zfs receive -d sec_zp
send from @ssC to pri_zp/Z1@ssD estimated size is 624B
send from @ssC to pri_zp/Z1/Z99-future@ssD estimated size is 624B
send from @ssC to pri_zp/Z1/Z00-initial@ssD estimated size is 624B
total estimated size is 1.83K
cannot receive incremental stream: destination sec_zp/Z1 has been modified
since most recent snapshot
**The specific dataset change caused by** zfs send -V pri_zp/Z1/Z99-future@ssC:
# zfs diff sec_zp/Z1@ssC
+ /sec_zp/Z1/Z99-future
M /sec_zp/Z1/
- /sec_zp/Z1/Z99-future
- /sec_zp/Z1/Z99-future/
- /sec_zp/Z1/Z99-future//security.selinux
Also:
# zfs list -r -t all -S creation sec_zp/Z1
NAME USED AVAIL REFER MOUNTPOINT
sec_zp/Z1/Z99-future 96K 1.22T 96K /sec_zp/Z1/Z99-future
sec_zp/Z1@ssC 64K - 104K -
sec_zp/Z1/Z99-future@ssC 0B - 96K -
sec_zp/Z1@ssB 0B - 96K -
sec_zp/Z1/Z00-initial@ssB 0B - 96K -
sec_zp/Z1/Z00-initial 96K 1.22T 96K /sec_zp/Z1/Z00-initial
sec_zp/Z1 416K 1.22T 96K /sec_zp/Z1
sec_zp/Z1@ssA 0B - 96K -
sec_zp/Z1/Z00-initial@ssA 0B - 96K
Any hints are welcome.
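One way out of the "destination has been modified" state (a sketch, assuming nothing on sec_zp needs to be preserved beyond the received snapshots) is to roll the destination back to the last common snapshot, or to let the receive do that with -F, and then resend recursively:
# discard local changes on the destination back to the last received snapshot
zfs rollback -r sec_zp/Z1@ssC
# resend recursively; -F on the receive side forces a rollback of modified child datasets
zfs send -V -R -I pri_zp/Z1@ss{C,D} | zfs receive -d -F sec_zp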
NevilleDNZ
(250 rep)
Jul 17, 2024, 02:54 AM
• Last activity: Jul 17, 2024, 11:07 AM
0
votes
1
answers
120
views
How to reinstate an invalidated DM snapshot?
I'm using device mapper snapshots.
Let's assume that /dev/sda is the read-only origin device, and /dev/sdb is the COW device. I created a persistent snapshot this way:
# cat /dev/zero > /dev/sdb
# dmsetup create mysnap
0 1000000000 snapshot /dev/sda /dev/sdb P 16
^D
# ls /dev/mapper/
control mysnap
#
It worked fine for a while.
After every boot, to re-attach my persistent snapshot, I was running the same command:
dmsetup create mysnap
0 1000000000 snapshot /dev/sda /dev/sdb P 16
But one day I accidentally disconnected the read-only origin device during operation (the COW device was still there). There was a kernel message like this:
device-mapper: snapshots: Invalidating snapshot: error reading/writing
After that happened, any attempt to attach the snapshot (on any machine) results in an error:
device-mapper: snapshots: Snapshot is marked invalid
The mysnap device gets created, but it refuses any reads/writes with "Input/output error".
Is it possible to clear the "invalid" status on the DM snapshot and bring it up, or at least to recover the data? I believe that this "invalid" status is fully artificial because, from my experience, persistent DM snapshots survived total system crashes.
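For diagnosis (a sketch; writing anything back is at your own risk), the status line confirms the state, and the "valid" flag lives in the persistent exception store's on-disk header on the COW device - if memory serves, drivers/md/dm-snap-persistent.c puts a 4-byte valid field right after the 4-byte magic, but verify that layout against your kernel sources before patching anything:
# shows "Invalid" in the status output once the exception store has been invalidated
dmsetup status mysnap
# dump the start of the COW header for inspection (magic, valid flag, version, chunk size)
xxd -l 16 /dev/sdb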
melonfsck - she her
(150 rep)
Jun 30, 2024, 12:16 PM
• Last activity: Jun 30, 2024, 02:27 PM
6
votes
3
answers
20134
views
Linux alternative to file history/shadow copies for internal backup?
I am looking for (good) backup alternatives to the time machine of MacOS/OS X devices or file history on Windows machines. Actually what I am looking for is closer to Windows' solution than to the time machine.
So I know I can [use rsync](https://blog.interlinked.org/tutorials/rsync_time_machine.html) or - with a nice UI - [Back in Time](http://backintime.le-web.org/).
However I am **not looking for an external backup solution!**
This means I would rather have a file history as in Windows Vista (and above, AFAIK). On Windows Vista/7 this worked with [Shadow Copies](https://en.wikipedia.org/wiki/Shadow_Copy), so this is exactly what I'd like to have:

So I want to save **the backup/file history on the same drive** (and probably partition, but that does not matter). I'd also save it on another internal drive, but not on an external one.
Is there such a solution for Linux or how can I best replicate this behaviour?
That's why existing files **should not be duplicated**: a backup (copy of the file) should only be saved when I actually modify or remove it. This saves a lot of space, especially for larger files that you won't edit anyway - as opposed to rsync/Back in Time, where never-modified files are copied even with incremental backups.
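If the data lives on a copy-on-write filesystem, snapshots give exactly this behaviour: unmodified files are never duplicated, and only changed blocks take extra space. A minimal Btrfs sketch (assuming /data is a subvolume on the same drive and /data/.history exists):
# read-only snapshot kept on the same drive; unchanged files share all their blocks
btrfs subvolume snapshot -r /data /data/.history/$(date +%Y-%m-%d_%H%M)
The same idea exists with LVM thin snapshots or ZFS; a cron or systemd timer entry turns it into an automatic file history.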
rugk
(3496 rep)
Oct 12, 2016, 03:09 PM
• Last activity: Apr 29, 2024, 05:26 PM
0
votes
0
answers
184
views
How to turn off Sanoid's snapshotting for a subtree of datasets?
I'd like to set up [Sanoid](https://github.com/jimsalterjrs/sanoid) to create snapshots of my system dataset, except for the Docker images under /var/lib/docker. I have the following two templates:
[template_system]
frequently = 0
hourly = 0
daily = 5
monthly = 3
yearly = 0
autosnap = yes
autoprune = yes
[template_ignore]
autoprune = no
autosnap = no
monitor = no
frequently = 0
hourly = 0
daily = 0
monthly = 0
yearly = 0
AFAIU, ignore shouldn't create any snapshots.
I am then using these two templates in the following way:
[builtin/ROOT]
use_template = system
recursive = yes
[builtin/ROOT/ubuntu/var/lib/docker]
process_children_only = yes
use_template = ignore
recursive = yes
However, with these settings, I am still seeing snapshots getting generated for builtin/ROOT/ubuntu/var/lib/docker, e.g.:
builtin/ROOT/ubuntu/var/lib/docker/8882d709e8d82bb317bb5b33065e0de6f0cdcc9dab4f6513eb0ac811a3893b47@591379486
builtin/ROOT/ubuntu/var/lib/docker/8882d709e8d82bb317bb5b33065e0de6f0cdcc9dab4f6513eb0ac811a3893b47@autosnap_2024-04-23_10:54:36_daily
builtin/ROOT/ubuntu/var/lib/docker/8882d709e8d82bb317bb5b33065e0de6f0cdcc9dab4f6513eb0ac811a3893b47@autosnap_2024-04-24_04:17:46_daily
builtin/ROOT/ubuntu/var/lib/docker/8882d709e8d82bb317bb5b33065e0de6f0cdcc9dab4f6513eb0ac811a3893b47@autosnap_2024-04-25_01:50:31_daily
builtin/ROOT/ubuntu/var/lib/docker/8882d709e8d82bb317bb5b33065e0de6f0cdcc9dab4f6513eb0ac811a3893b47@autosnap_2024-04-26_04:33:42_daily
builtin/ROOT/ubuntu/var/lib/docker/8882d709e8d82bb317bb5b33065e0de6f0cdcc9dab4f6513eb0ac811a3893b47@autosnap_2024-04-28_09:52:35_daily
builtin/ROOT/ubuntu/var/lib/docker/8882d709e8d82bb317bb5b33065e0de6f0cdcc9dab4f6513eb0ac811a3893b47@autosnap_2024-04-28_10:15:00_monthly
For comparison, for system datasets I have the same snapshots created, *and also monthly ones*:
builtin/ROOT/ubuntu/var/lib@autosnap_2024-04-14_14:01:03_monthly
builtin/ROOT/ubuntu/var/lib@autosnap_2024-04-23_10:54:36_daily
builtin/ROOT/ubuntu/var/lib@autosnap_2024-04-24_04:17:46_daily
builtin/ROOT/ubuntu/var/lib@autosnap_2024-04-25_01:50:31_daily
builtin/ROOT/ubuntu/var/lib@autosnap_2024-04-26_04:33:42_daily
builtin/ROOT/ubuntu/var/lib@autosnap_2024-04-28_09:52:35_daily
So it seems ignore's monthly and system's daily settings are applied to datasets under builtin/ROOT/ubuntu/var/lib/docker, which is of course not what I want. **I would like Sanoid to create neither daily nor monthly snapshots for those datasets. How do I achieve that?**
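A diagnostic that may help (a sketch, relying on the --readonly and --debug options present in current Sanoid releases) is a simulated run, which logs which template each dataset ends up matched against without creating or pruning anything:
# simulate a full run; nothing is created or destroyed, but the per-dataset decisions are printed
sanoid --cron --readonly --debug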
Cactus
(855 rep)
Apr 28, 2024, 10:18 AM
0
votes
1
answers
333
views
Incus - Setting migration.stateful for stateful snapshots
I'm trying to get to grips with [Incus](https://linuxcontainers.org/incus/introduction/), because it looks like it is a fork of Canonical's LXD, which I can run fairly easily on Debian 12 with a deb package, rather than using snaps.
I have it all set up in a virtual machine on my KVM host, running with both a basic directory-based storage pool and a ZFS storage pool. I have spun up a test container called test that I want to take a stateful snapshot of, but it tells me that:
> To create a stateful snapshot, the instance needs the migration.stateful config set to true
After reading [the documentation on configuring an instance's options](https://linuxcontainers.org/incus/docs/main/howto/instances_configure/#configure-instance-options), I have tried running these various commands (the first being the one that I think is the most likely to be correct):
incus config set test migration.stateful=true
incus config set test config.migration.stateful=true
incus config set test.migration.stateful=true
incus config set migration.stateful=true
... but I always get an error message similar to the one below about an unknown configuration key:
> Error: Invalid config: Unknown configuration key: migration.stateful
I have also tried setting the option through the YAML configuration, but that just gets stuck on "Processing...".
How is one supposed to enable stateful snapshots of Incus Linux containers? Perhaps this is just not possible because I am running inside a virtual machine rather than on a physical box?
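For what it's worth, in LXD-derived tooling the migration.stateful key is, as far as I can tell, documented for virtual machines rather than containers, which would explain the "unknown configuration key" error on a container. A hedged sketch of the two cases (the VM name testvm is hypothetical, and the container path assumes CRIU is installed for state dumping):
# virtual machines: the key exists and enables stateful stop/snapshots
incus config set testvm migration.stateful=true
# containers: stateful snapshots go through CRIU instead of migration.stateful
incus snapshot create test snap0 --stateful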


Programster
(2289 rep)
Mar 17, 2024, 06:11 PM
• Last activity: Apr 25, 2024, 02:42 PM
0
votes
1
answers
41
views
Check file differences across multiple folders
I would like to check when a file differs inside a backup system using snapshots.
I have several folders with the same architecture inside:
ls -1 .snapshot
4-hourly.2024-04-14_0405
4-hourly.2024-04-14_0805
4-hourly.2024-04-14_1205
4-hourly.2024-04-14_1605
4-hourly.2024-04-14_2005
4-hourly.2024-04-15_0405
4-hourly.2024-04-15_0805
4-hourly.2024-04-15_1205
daily.2024-04-08_0010
daily.2024-04-09_0010
daily.2024-04-10_0010
daily.2024-04-11_0010
daily.2024-04-12_0010
daily.2024-04-13_0010
daily.2024-04-14_0010
daily.2024-04-15_0010
monthly.2024-01-01_0020
monthly.2024-02-01_0020
monthly.2024-03-01_0020
monthly.2024-04-01_0020
weekly.2024-02-25_0015
weekly.2024-03-03_0015
weekly.2024-03-10_0015
weekly.2024-03-17_0015
weekly.2024-03-24_0015
weekly.2024-03-31_0015
weekly.2024-04-07_0015
weekly.2024-04-14_0015
And I have to check a file located inside each of these folders.
For example, the goal is to see whether .snapshot/weekly.2024-04-14_0015/my/path/to/the/file.php differs from .snapshot/weekly.2024-04-07_0015/my/path/to/the/file.php, or from .snapshot/weekly.2024-03-31_0015/my/path/to/the/file.php, etc.
Is there an obvious, simple way to do this?
PS: There are other files/folders that changed inside these folders, so I cannot just compare the whole folder.
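A simple shell sketch (using the path from the example above) is to checksum the file in every snapshot directory; lines with the same hash mean the file did not change between those snapshots:
cd .snapshot
for d in 4-hourly.* daily.* weekly.* monthly.*; do
    # print "<snapshot-dir> <md5>" for each copy of the file, skipping snapshots where it is absent
    f="$d/my/path/to/the/file.php"
    [ -f "$f" ] && printf '%s  %s\n' "$d" "$(md5sum < "$f" | cut -d' ' -f1)"
done | sort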
ppr
(1977 rep)
Apr 15, 2024, 11:20 AM
• Last activity: Apr 15, 2024, 12:00 PM
8
votes
2
answers
13770
views
Understanding how libvirt snapshots are stored
At first I thought it would be stored in /var/lib/libvirt/images/, but as I created snapshots for the centos7 domain, nothing changed in this directory:
drwx--x--x 2 root root 4096 Feb 29 21:28 .
drwxr-xr-x 7 root root 4096 Feb 28 23:47 ..
-rw------- 1 libvirt-qemu kvm 5370216574 Feb 29 22:09 centos7-1.qcow2
-rw------- 2 libvirt-qemu kvm 5931597824 Feb 29 22:12 centos7.qcow2
-rw------- 1 root root 1499267135 Feb 28 21:07 centos7-server.qcow2
Next I checked /var/lib/libvirt/qemu/snapshot/centos7, which showed these XML files:
client2.xml client.xml disks.xml
These are the names I gave to my snapshots.
Can someone please tell me why the snapshots are XML files and not disk images? What are these XML files storing? I'm guessing they need the original qcow2 image, which is in my images directory, to work and won't work with just any image - is that correct?
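If these are internal qcow2 snapshots (the default when all of a domain's disks are qcow2), the disk state is stored inside the existing .qcow2 file and the XML only records metadata about each snapshot; a quick way to check (a sketch using the names from the question):
# internal snapshots recorded inside the disk image itself
qemu-img snapshot -l /var/lib/libvirt/images/centos7.qcow2
# libvirt's view of the same snapshots and the metadata kept in the XML files
virsh snapshot-list centos7
virsh snapshot-dumpxml centos7 client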
Weezy
(679 rep)
Feb 29, 2020, 04:50 PM
• Last activity: Mar 26, 2024, 05:25 PM