
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

1 vote
1 answer
2192 views
How to restore a Windows image created with dd?
I want to restore my Windows image created with `dd`. I used the command

```
dd if=/dev/nvme0n1 of="./$(date).img" status=progress
```

to create the image. There were four partitions on my `nvme0n1`:

* EFI system
* Microsoft reserved
* Microsoft basic data
* Windows recovery environment

My guess is to use

```
dd if=./$(date).img of=/dev/sdaX bs=4M && sync
```

I was wondering about the UUIDs of the partitions. Is there something I need to reconfigure?
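A hedged sketch of the matching restore, assuming the image really covers the whole disk (as `if=/dev/nvme0n1` above implies) and is written back to a whole disk of at least the same size; the image filename and target device here are placeholders:

```
# Writing the full-disk image back also restores the partition table and all
# partition/filesystem UUIDs, so nothing should need to be reconfigured.
dd if=./backup.img of=/dev/nvme0n1 bs=4M status=progress conv=fsync
```

Restoring into a single partition (`of=/dev/sdaX`, as guessed above) would only fit an image that contained just that one partition, not a whole-disk image.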
A.Dumas (485 rep)
May 28, 2021, 05:47 PM • Last activity: Aug 4, 2025, 07:07 PM
0 votes
1 answer
1945 views
Ubuntu folder not showing in external hard drive
I backed up 2 folders (containing .py files) from my Ubuntu 20 OS to the external hard drive. While trying to see the files on the external hard drive from Windows 10, I can't find those. Please advise
Farhan Kabir (1 rep)
Sep 14, 2022, 01:09 AM • Last activity: Jul 29, 2025, 01:02 AM
2 votes
2 answers
2491 views
Shell-/Bash-Script to delete old backup files by name and specific pattern
Every hour, backup files of a database are created. The files are named like this:

```
prod20210528_1200.sql.gz
pattern: prod`date +\%Y%m%d_%H%M`
```

The pattern could be adjusted if needed. I would like to have a script that:

- keeps all backups for the last x (e.g. 3) days
- for backups older than x (e.g. 3) days, only the backup from time 00:00 shall be kept
- for backups older than y (e.g. 14) days, only one file per week (Monday) shall be kept
- for backups older than z (e.g. 90) days, only one file per month (1st of each month) shall be kept
- the script should rather use the filename instead of the date (created) information of the file, if that is possible
- the script should run every day

Unfortunately, I have very little knowledge of shell/bash scripting. I would do something like this:

```
if (file < today - (x + 1)) {
    if (%H_of_file != 00 AND %M_of_file != 00) { delete file }
}
if (file < today - (y + 1)) {
    if (file != Monday) { delete file }
}
if (file < today - (z + 1)) {
    if (%m_of_file != 01) { delete file }
}
```

Does this make any sense to you? Thank you very much! All the best, Phantom
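A hedged sketch of such a script, driven entirely by the filename as requested. The backup directory and the x/y/z values are assumptions, GNU `date` is assumed for parsing the `YYYYMMDD` part, and this is an illustration to adapt rather than a tested tool:

```
#!/usr/bin/env bash
# Prune backups named prodYYYYMMDD_HHMM.sql.gz according to the rules above.
set -euo pipefail

backup_dir=/var/backups/db   # assumption: where the dumps live
x=3    # keep everything newer than x days
y=14   # older than x days: keep only the 00:00 dump
z=90   # older than y days: keep only Monday 00:00; older than z: only the 1st at 00:00

today=$(date +%s)

for f in "$backup_dir"/prod*.sql.gz; do
    [ -e "$f" ] || continue
    name=$(basename "$f")
    stamp=${name#prod}           # 20210528_1200.sql.gz
    stamp=${stamp%.sql.gz}       # 20210528_1200
    day=${stamp%_*}              # 20210528
    hm=${stamp#*_}               # 1200

    # Age in whole days, taken from the filename rather than the mtime.
    age=$(( (today - $(date -d "$day" +%s)) / 86400 ))

    keep=yes
    if   (( age > z )); then
        [ "${day:6:2}" = "01" ] && [ "$hm" = "0000" ] || keep=no
    elif (( age > y )); then
        [ "$(date -d "$day" +%u)" = "1" ] && [ "$hm" = "0000" ] || keep=no   # %u: 1 = Monday
    elif (( age > x )); then
        [ "$hm" = "0000" ] || keep=no
    fi

    if [ "$keep" = "no" ]; then
        rm -v -- "$f"
    fi
done
```

Run daily from cron or a systemd timer; a dry run with `echo rm` instead of `rm` is a sensible first test.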
Phantom (143 rep)
May 28, 2021, 05:25 PM • Last activity: Jul 26, 2025, 04:04 AM
0 votes
0 answers
27 views
Can I complete a full system backup with Deja Dup?
I am attempting to complete a full system backup of my Librem 5. Can I utilize Déjà Dup to back up my entire system? Deja Dup is preinstalled on the system as 'Backups'. I want to include app data—everything if possible.
SpreadingKindness (23 rep)
Jul 2, 2025, 01:52 AM • Last activity: Jul 4, 2025, 11:35 AM
0 votes
1 answer
44 views
How can I create a full system backup of my Librem 5 using jumpdrive?
I would like to backup my Librem 5 using jumpdrive. How can I create a full system backup of my Librem 5 using jumpdrive?
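Not an authoritative procedure, just a hedged sketch of the approach usually described: Jumpdrive boots the phone into a mode that exposes its eMMC to the host as a USB mass-storage device, which can then be imaged from the host. `/dev/sdX` and the image name are placeholders to replace after checking `lsblk`:

```
# On the host, with the Librem 5 booted into Jumpdrive over USB:
lsblk -o NAME,SIZE,MODEL     # identify which /dev/sdX is the phone's eMMC

# Raw full-disk image of the eMMC; restoring is the same command with if= and of= swapped.
sudo dd if=/dev/sdX of=librem5-emmc.img bs=4M status=progress conv=fsync
```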
SpreadingKindness (23 rep)
Jul 2, 2025, 07:10 AM • Last activity: Jul 4, 2025, 11:32 AM
3 votes
1 answer
2294 views
Wildcards in exclude-filelist for duplicity
I am trying to exclude a "bulk" folder in each home directory from the backup. For this purpose, I have a line

```
- /data/home/*/bulk
```

in my exclude-filelist file. However, this doesn't seem to be recognised:

```
Warning: file specification '/data/home/*/bulk' in filelist exclude-list-test.txt doesn't start with correct prefix /data/home/kay/bulk. Ignoring.
```

Is there a way? BTW: is the format in general compatible with rsync's exclude-from? I have a working exclude list for that, where this wildcard expression works.
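A hedged sketch of how this is commonly worked around (option names should be checked against `man duplicity` for the installed version): pass the glob directly with `--exclude`, or keep it in a filelist that is treated as globbing — older releases used a separate `--exclude-globbing-filelist` option for that, newer ones accept globs in `--exclude-filelist`. The backup target URL below is a placeholder:

```
# Exclude every per-user "bulk" directory on the command line
duplicity --exclude '/data/home/*/bulk' /data/home file:///mnt/backup/home

# Or via a filelist containing the line:  - /data/home/*/bulk
duplicity --exclude-filelist exclude-list.txt /data/home file:///mnt/backup/home
```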
mcandril (273 rep)
Oct 6, 2014, 11:30 AM • Last activity: Jun 29, 2025, 09:00 PM
1 vote
0 answers
65 views
How can I find multiple duplicates of media files, sort and back them up, and delete the rest?
I have a 4 TB hard drive containing pictures, sounds, and videos from the last 15 years. These files were copied onto this drive from various sources, including hard drives, cameras, phones, CD-ROMs, DVDs, USB sticks, SD cards, and downloads. The files come in formats such as JPEG, PNG, GIF, SVG, VOB, MP4, MPEG, MOV, AVI, SWF, WMV, FLV, 3GP, WAV, WMA, AAC, and OGG. Over the years, the files have been copied back and forth between different file systems, including FAT, exFAT, NTFS, HFS+/APFS, and ext3/ext4. Currently, the hard drive uses the ext4 file system. There are folders and files that appear multiple times (duplicates, triplicates, or even more). The problem is that the folder and filenames are not always identical. For example:

1. A folder named "bilder_2012" might appear elsewhere as "backup_bilder_2012" or "media_2012_backup_2016".
2. In some cases, newer folders contain additional files that were not present in the older versions.
3. The files themselves may have inconsistent names, such as "bild1", "bild2" in one folder and "bilder2018(1)", "bilder2018(2)" in another.

What I want to achieve:

1. Sort and clean up the files: remove all duplicates and copy the remaining files to a new hard drive.
2. Identify the original copies: is there a way to determine which version of a file is the earliest/original?
3. Preserve the original folder names: for example, I know that "bilder_2012" was the first name given to a folder, and I would like to keep that name if possible.
4. Standardize file naming: after copying, I would like the files to follow a consistent naming scheme, such as folder "bilder2012" with files "bilder2012(1).jpeg", "bilder2012(2).jpeg", etc.

Is there a way to automate this process while ensuring the oldest/original files are preserved and duplicates are safely removed?
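Not a complete solution, but a hedged sketch of the first step — finding byte-identical files regardless of their names — either with `fdupes` or with plain coreutils checksums (the mount point is a placeholder):

```
# Group identical files (requires the fdupes package); nothing is deleted here.
fdupes -r /mnt/olddrive > duplicate-groups.txt

# Coreutils-only alternative: hash every file, then sort so identical content
# (same leading checksum) ends up on adjacent lines for review.
find /mnt/olddrive -type f -print0 | xargs -0 sha256sum | sort > checksums.txt
```

Deciding which copy is the "original" is harder, since timestamps may not have survived the FAT/NTFS/HFS+ round-trips described above; the hash list at least reduces it to a per-group, human decision.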
Bernd Kunze (11 rep)
Jun 21, 2025, 09:49 AM • Last activity: Jun 25, 2025, 07:26 AM
7 votes
2 answers
757 views
What is ~/.local/share used for? And can I ignore it in backups?
I wonder whether `~/.local/share` contains real user data, or only session-related data and generated binary stuff. I also wonder whether I can include it in an `--ignore` statement with [BorgBackup](https://www.borgbackup.org/), because it really seems not to contain data or config files, just some kind of cache. So can I just ignore this directory and expect a pretty good recovery, or could it contain some real user-related data?
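For reference, a hedged sketch of excluding the directory with BorgBackup (repository path and archive name are placeholders; Borg's option for this is `--exclude`):

```
borg create --exclude "$HOME/.local/share" \
    /path/to/repo::'{hostname}-{now}' "$HOME"
```

Worth noting before excluding it wholesale: the XDG Base Directory spec designates `~/.local/share` for user-specific data files (caches belong in `~/.cache`), so some applications do keep real user data there.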
fauve (1529 rep)
Jun 17, 2025, 06:16 PM • Last activity: Jun 19, 2025, 11:34 AM
0 votes
1 answer
2340 views
Udev rule refuses to trigger when harddrive is added
I'm on Ubuntu 20.04. I'm trying to execute an action when a specific hard drive is connected, using a udev rule that identifies the drive by UUID. The script will eventually do a routine where it will mount the drive and run rsync. To rule out any errors in that process I'm now just trying out a test command. The hard drive is connected via SATA hot-swap and has a UUID which is confirmed to be correct. I've followed numerous guides that seem to use this exact syntax, and still absolutely nothing happens, however I try. Here are the steps I've done:

- Created a file called 90-backup.rules in /etc/udev/rules.d. The content is:

  ```
  ACTION=="add", ENV{ID_FS_UUID}=="b527aadc-9dce-4ead-8937-e53ca2cfac84", RUN+="/bin/echo 1 >> /rule.test"
  ```

- Tried `udevadm control --reload-rules && udevadm trigger`
- Tried `systemctl reload udev`
- Running `udevadm test /dev/sdX` I can see that it lists the rules file: `Reading rules file: /etc/udev/rules.d/90-backup.rules`
- Using `udevadm info /dev/sdX` I confirmed that the ID_FS_UUID environment variable is correct and can be read.
- Tried adding `KERNEL=='sd?'` before the ACTION argument.

Since the server is currently live in use, I haven't tried rebooting it yet. And it would be good to once and for all establish what is necessary to have udev reload the rules properly without a reboot, for proper debugging. Any help is appreciated. All the best, Andreas
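One hedged observation that often explains exactly this symptom (not necessarily the only issue here): `RUN+=` executes the program directly, without a shell, so the `>>` redirection is handed to `/bin/echo` as literal arguments and no file is ever written. A test rule that wraps the command in a shell would look like:

```
# /etc/udev/rules.d/90-backup.rules  (UUID copied from the question)
ACTION=="add", ENV{ID_FS_UUID}=="b527aadc-9dce-4ead-8937-e53ca2cfac84", RUN+="/bin/sh -c 'echo 1 >> /rule.test'"
```

Depending on whether the filesystem sits on a partition or the whole disk, ID_FS_UUID may only appear on the partition device (e.g. /dev/sdX1), so that is the node to test against. Also, udev kills long-running RUN programs once event handling finishes, so the eventual mount-and-rsync job is usually better delegated to a systemd service started from the rule.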
A.H (1 rep)
Jan 31, 2022, 01:23 AM • Last activity: Jun 19, 2025, 05:00 AM
0 votes
0 answers
38 views
Automatic backup on a second internal hidden disk when files are moved to a first disk
Summary: I need to automatically back up files to a hidden hard disk whenever the user copies files to a specific hard disk.

Long explanation / context: My father (70 years old) is very bad with computers. He barely knows how to copy and paste with a graphical interface, nothing more. I have installed a Linux Mint distro for him, which I have tailored so he doesn't crash it and understands it easily. He uses his computer mostly to save his photographs: around 2TB now. I plan to install an extra hard disk called "photos" so he can offload his photos onto it. I also plan to add another disk ("photocopie") as a backup, so if the first hard disk crashes he will not lose his photographs. I will make this second hard disk hidden so my dad doesn't fuss with it.

I would like to know how to automatically copy the new photos he adds to the hard disk "photos" to the second one, "photocopie"; he only knows how to do this using Caja (the MATE file manager). I had first read about using RAID 1, but it doesn't seem to be the proper solution. Some suggested using rsync with cron, but cron runs at specific times, and since my dad doesn't use his computer often, it may not work properly. I was trying to find a tool that would start on a specific event, such as "if files are written on hard disk photos, then run rsync incrementally to hard disk photocopie", but I couldn't find an app that detects events. I have read about inotifywait; could that be the solution? How would I implement it?

PS: the photo sources could be anything that can be plugged in over USB: a camera, a smartphone (Apple). My dad usually runs Caja and copy-pastes from the phone to the proper directory/hard disk.
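A minimal sketch of the inotifywait idea raised at the end of the question, assuming the two disks are mounted at /mnt/photos and /mnt/photocopie (placeholder paths) and that the inotify-tools package is installed:

```
#!/usr/bin/env bash
# Mirror new/changed photos to the hidden backup disk whenever the source changes.
set -euo pipefail

SRC=/mnt/photos
DST=/mnt/photocopie

while inotifywait -r -e close_write,create,moved_to "$SRC"; do
    sleep 30                      # let a burst of copies finish before syncing
    rsync -a "$SRC"/ "$DST"/      # no --delete: this is a backup, not a strict mirror
done
```

Run from a systemd service or the desktop's autostart, with a periodic rsync (cron or a systemd timer) as a fallback, this gives the "copy on event" behaviour asked about; note that recursive inotify watches are subject to the fs.inotify.max_user_watches limit, which may need raising for a 2 TB photo tree.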
Some old geek (1 rep)
Jun 18, 2025, 02:59 PM
0 votes
0 answers
34 views
CacheFiles when the cached system is unmounted, or alternatives
In my current setup, I have two machines serverA and serverB in different geographical areas. serverA has a limited amount of persistent memory (~256GB), while serverB can be considered to have enough that I will never use it all up (several TB). serverA has a directory /data which is an NFS share from serverB, and also has CacheFiles enabled. This setup achieves the following:

1. replication: if serverA's disk dies, I can still recover the data from serverB
2. unlimited memory: I am not limited by serverA's small amount of persistent memory
3. fast access to data: the content of /data that is in the cache (basically the most recently accessed 200GB) can be accessed without a round-trip on the network

Note that a simple backing-up setup would not achieve 2. I'd like to achieve 1., 2. and 3., but also the following:

4. robustness: if serverB goes down temporarily, serverA can still work with the data that's been cached, without me having to manually intervene on serverA
5. encryption: /data is encrypted by serverA, so that someone with access to serverB cannot access the data

I'm mostly interested in 4.; 5. would only be a bonus. Here are my questions:

- I suppose CacheFiles does not achieve 4., is this correct?
- What are the simplest setups that would allow me to achieve 1., 2., 3. and 4., and possibly also 5.?
Quentin (25 rep)
Jun 8, 2025, 11:46 AM • Last activity: Jun 8, 2025, 04:35 PM
0 votes
1 answer
3824 views
Got fatal error during xfer (Child exited prematurely)
I have a backup server which takes backup of other servers and in few days back the backup server started experiencing error and the not able to take back. I think the error log bellow could describe the problem better than me: 2012-09-10 20:05:40 Aborting backup up after signal PIPE 2012-09-10 20:05:44 Got fatal error during xfer (aborted by signal=PIPE) 2012-09-10 21:00:15 full backup started for directory rootBackup (baseline backup #277) 2012-09-11 10:21:07 Aborting backup up after signal PIPE 2012-09-11 10:21:11 Got fatal error during xfer (aborted by signal=PIPE) 2012-09-11 11:00:24 full backup started for directory rootBackup (baseline backup #277) 2012-09-11 13:22:41 Aborting backup up after signal PIPE 2012-09-11 13:22:52 Got fatal error during xfer (aborted by signal=PIPE) 2012-09-11 14:00:30 full backup started for directory rootBackup (baseline backup #277)for directory rootBackup 2012-09-14 06:38:02 Got fatal error during xfer (Child exited prematurely) 2012-09-14 06:38:08 Backup aborted (Child exited prematurely) 2012-09-14 07:00:27 incr backup started back to 2012-09-11 14:00:30 (backup #278) for directory rootBackup 2012-09-20 14:22:04 Got fatal error during xfer (Child exited prematurely) 2012-09-20 14:22:10 Backup aborted (Child exited prematurely) 2012-09-20 15:00:45 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-20 15:26:29 Got fatal error during xfer (Child exited prematurely) 2012-09-20 15:26:35 Backup aborted (Child exited prematurely) 2012-09-20 16:00:12 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-20 16:27:37 Got fatal error during xfer (Child exited prematurely) 2012-09-20 16:27:43 Backup aborted (Child exited prematurely) 2012-09-20 17:00:09 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-20 17:27:37 Got fatal error during xfer (Child exited prematurely) 2012-09-20 17:27:43 Backup aborted (Child exited prematurely) 2012-09-20 18:00:20 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-20 18:27:49 Got fatal error during xfer (Child exited prematurely) 2012-09-20 18:27:55 Backup aborted (Child exited prematurely) 2012-09-20 19:00:26 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-20 19:28:29 Got fatal error during xfer (Child exited prematurely) 2012-09-20 19:28:36 Backup aborted (Child exited prematurely) 2012-09-20 20:00:32 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-20 20:23:06 Got fatal error during xfer (Child exited prematurely) 2012-09-20 20:23:11 Backup aborted (Child exited prematurely) 2012-09-20 21:00:16 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-20 21:20:30 Got fatal error during xfer (Child exited prematurely) 2012-09-20 21:20:37 Backup aborted (Child exited prematurely) 2012-09-20 22:00:15 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-20 22:15:21 Got fatal error during xfer (Child exited prematurely) 2012-09-20 22:15:26 Backup aborted (Child exited prematurely) 2012-09-20 23:00:21 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-20 23:15:06 Got fatal error during xfer (Child exited prematurely) 2012-09-20 23:15:11 Backup aborted (Child exited prematurely) 2012-09-21 01:00:09 incr backup started back to 2012-09-19 05:00:02 
(backup #282) for directory rootBackup 2012-09-21 01:27:47 Got fatal error during xfer (Child exited prematurely) 2012-09-21 01:27:53 Backup aborted (Child exited prematurely) 2012-09-21 02:00:20 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-21 02:34:06 Got fatal error during xfer (Child exited prematurely) 2012-09-21 02:34:13 Backup aborted (Child exited prematurely) 2012-09-21 03:00:21 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-21 03:26:13 Got fatal error during xfer (Child exited prematurely) 2012-09-21 03:26:19 Backup aborted (Child exited prematurely) 2012-09-21 04:00:14 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-21 04:17:05 Got fatal error during xfer (Child exited prematurely) 2012-09-21 04:17:11 Backup aborted (Child exited prematurely) 2012-09-21 05:00:09 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-21 05:27:00 Aborting backup up after signal PIPE 2012-09-21 05:27:02 Got fatal error during xfer (aborted by signal=PIPE) 2012-09-21 06:00:04 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-21 06:19:01 Aborting backup up after signal PIPE 2012-09-21 06:19:02 Got fatal error during xfer (aborted by signal=PIPE) 2012-09-21 07:00:01 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-21 07:11:57 incr backup 283 complete, 138 files, 282332868 bytes, 0 xferErrs (0 bad files, 0 bad shares, 0 other) 2012-09-22 07:00:09 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 07:11:31 Got fatal error during xfer (Child exited prematurely) 2012-09-22 07:11:36 Backup aborted (Child exited prematurely) 2012-09-22 08:00:07 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 08:16:04 Got fatal error during xfer (Child exited prematurely) 2012-09-22 08:16:10 Backup aborted (Child exited prematurely) 2012-09-22 09:00:05 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 09:25:18 Got fatal error during xfer (Child exited prematurely) 2012-09-22 09:25:26 Backup aborted (Child exited prematurely) 2012-09-22 10:00:29 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 10:26:23 Got fatal error during xfer (Child exited prematurely) 2012-09-22 10:26:28 Backup aborted (Child exited prematurely) 2012-09-22 11:00:13 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 11:23:26 Got fatal error during xfer (Child exited prematurely) 2012-09-22 11:23:32 Backup aborted (Child exited prematurely) 2012-09-22 12:00:13 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 12:16:27 Got fatal error during xfer (Child exited prematurely) 2012-09-22 12:16:33 Backup aborted (Child exited prematurely) 2012-09-22 13:00:11 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 13:25:22 Got fatal error during xfer (Child exited prematurely) 2012-09-22 13:25:27 Backup aborted (Child exited prematurely) 2012-09-22 14:00:33 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 14:26:53 Got fatal error during xfer (Child exited prematurely) 2012-09-22 14:26:58 Backup aborted (Child exited 
prematurely) 2012-09-22 15:00:10 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 15:22:22 Got fatal error during xfer (Child exited prematurely) 2012-09-22 15:22:30 Backup aborted (Child exited prematurely) 2012-09-22 16:00:38 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 16:25:32 Got fatal error during xfer (Child exited prematurely) 2012-09-22 16:25:38 Backup aborted (Child exited prematurely) 2012-09-22 17:00:11 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 17:24:36 Got fatal error during xfer (Child exited prematurely) 2012-09-22 17:24:41 Backup aborted (Child exited prematurely) 2012-09-22 18:00:21 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 18:24:42 Got fatal error during xfer (Child exited prematurely) 2012-09-22 18:24:50 Backup aborted (Child exited prematurely) 2012-09-22 19:00:20 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 19:24:47 Got fatal error during xfer (Child exited prematurely) 2012-09-22 19:24:53 Backup aborted (Child exited prematurely) 2012-09-22 20:00:20 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 20:23:21 Got fatal error during xfer (Child exited prematurely) 2012-09-22 20:23:26 Backup aborted (Child exited prematurely) 2012-09-22 21:00:14 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 21:43:12 Aborting backup up after signal PIPE 2012-09-22 21:43:13 Got fatal error during xfer (aborted by signal=PIPE) 2012-09-22 22:00:10 incr backup started back to 2012-09-19 05:00:02 (backup #282) for directory rootBackup 2012-09-22 22:14:58 Got fatal error during xfer (Child exited prematurely) 2012-09-22 22:15:04 Backup aborted (Child exited prematurely) 2012-09-22 23:00:15 incr backup started back to 2012-09-19 05:00:02 (backup **Note:** I am using backuppc for backup. Backup server is able to take backup for other servers but for one server its experiencing error. So I believe there must be some problem in that particular *client* server so that its prematurely exiting . **Update:** Network seams to be OK(**netstat -in**) on client side. Kernel Interface table Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg eth0 1500 0 1297053361 0 126811 0 2321209324 0 0 0 BMRU lo 16436 0 35169163 0 0 0 35169163 0 0 0 LRU **The bellow is the contents of:** file /var/lib/backuppc/pc/ns381613.ovh.net-daily/XferLOG.bad.z, modified 2012-11-21 17:25:14 (Extracting only Errors) full backup started for directory www (baseline backup #300) Connected to ns381613.ovh.net:873, remote version 30 Negotiated protocol version 28 Connected to module www Sending args: --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . . 
Xfer PIDs are now 4845 [ skipped 120129 lines ] Remote: file has vanished: "/mywebsite.com/sessions/sess_017p907qdbm3rn5vv8gdin2ab7" (in www) Remote: file has vanished: "/mywebsite.com/sessions Remote: file has vanished: "/mywebsite.com/sessions/sess_vkn369qbj5dhoj4no0bei1sen3" (in www) Remote: file has vanished: "/mywebsite.com/sessions/sess_vkn7demudpv0othe6e98s2v1v4" (in www) Remote: file has vanished: "/mywebsite.com/sessions/sess_voseo6s018c8tmocgthj87irj1" (in www) Remote: file has vanished: "/mywebsite.com/sessions/sess_vqpqhv16urbrh99acecmujj8j1" (in www) Remote: file has vanished: "/mywebsite.com/sessions/sess_vs0j1bsfina3f4913lorb7j681" (in www) Remote: file has vanished: "/mywebsite.com/sessions/sess_vu3jupeug0qpls9f6ikub544t0" (in www) Remote: file has vanished: "/mywebsite.com/sessions/sess_vu3pp00ko7uip62jf7vp8o8la1" (in www) [ skipped 55244 lines ] Read EOF: Connection reset by peer Can't write 32780 bytes to socket Tried again: got 0 bytes finish: removing in-process file mywebsite.com/www-bk-17-oct-2012/lms/archive/LAWYER_1350329888/document/screencasts/TechtutorTv.flv Child is aborting Done: 160949 files, 2244371209 bytes Got fatal error during xfer (aborted by signal=PIPE) Backup aborted by user signal Not saving this as a partial backup since it has fewer files than the prior one (got 160949 and 160949 files versus 5985910) **Then I tried:** > sudo -u backuppc /usr/share/backuppc/bin/BackupPC_dump -v -f > ns381613.ovh.net **Output** Sending args: --server --sender --numeric-ids --perms --owner --group -D --links --hard-links --times --block-size=2048 --recursive --ignore-times . . Xfer PIDs are now 22310 xferPids 22310 Remote: rsync error: timeout in data send/receive (code 30) at io.c(137) [sender=3.0.7] Read EOF: Tried again: got 0 bytes Child is aborting Parent read EOF from child: fatal error! Done: 0 files, 0 bytes Got fatal error during xfer (Child exited prematurely) cmdSystemOrEval: about to system /bin/ping -c 1 ns381613.ovh.net cmdSystemOrEval: finished: got output PING ns381613.ovh.net (188.165.247.43) 56(84) bytes of data. 64 bytes from ns381613.ovh.net (188.165.247.43): icmp_req=1 ttl=60 time=0.591 ms --- ns381613.ovh.net ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.591/0.591/0.591/0.000 ms cmdSystemOrEval: about to system /bin/ping -c 1 ns381613.ovh.net cmdSystemOrEval: finished: got output PING ns381613.ovh.net (188.165.247.43) 56(84) bytes of data. 64 bytes from ns381613.ovh.net (188.165.247.43): icmp_req=1 ttl=60 time=0.366 ms --- ns381613.ovh.net ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms CheckHostAlive: returning 0.366 Backup aborted (Child exited prematurely) Not saving this as a partial backup since it has fewer files than the prior one (got 0 and 0 files versus 1) dump failed: Child exited prematurely In the above output I see it says timeout, But I have increased the timeout variable in configuration.
Subhransu Mishra (749 rep)
Sep 27, 2012, 09:56 AM • Last activity: Jun 6, 2025, 05:06 PM
0 votes
2 answers
93 views
Why did my backup folder with large amounts of repeated data compress so poorly?
I have a folder with around seventy subfolders, each containing a few tarballs which are nightly backups of a few directories (the largest being /home) from an old Raspberry Pi. Each is a full backup; they are not incremental. These tarballs are not compressed; they are just regular .tar archives. (They were originally compressed with bzip2, but I have decompressed all of them.) This folder totals 49 GiB according to du -h. I compressed this entire folder into a tar archive compressed with zstd. However, the final archive is 32 GiB, not much smaller than the original. Why is this the case, considering that the vast majority of the data should be common among several files, since I obviously was not replacing every file every day?
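One hedged thing to try in exactly this situation: the redundancy here is spread across many multi-gigabyte tarballs, farther apart than zstd's default match window reaches, so enabling long-distance matching with a large window can help (paths are placeholders; the same `--long` value must be allowed when decompressing):

```
# 2 GiB match window (--long=31) and all cores; still limited to redundancy
# that falls within any single 2 GiB span of the stream.
tar -cf - backups/ | zstd --long=31 -T0 -o backups.tar.zst

# Decompression needs to be permitted the same window size.
zstd -d --long=31 backups.tar.zst
```

Deduplicating backup tools handle this kind of cross-snapshot redundancy more directly, since they chunk and hash content rather than relying on a compressor window.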
kj7rrv (217 rep)
May 24, 2025, 04:33 AM • Last activity: May 24, 2025, 08:08 AM
5 votes
2 answers
2960 views
Backup and restore of Centos network interfaces
I have a server running CentOS 7 which needs to be rebooted to upgrade some software. Some of the physical NICs have around 5-10 VLAN interfaces each. They're subject to change on a weekly/monthly basis, so storing the details in /etc/sysconfig/network-scripts to persist across reboots isn't practical. Is there a simple way to take a snapshot of the current networking stack and restore it after the reboot? Similar to the way you can save/restore iptables rules? I've found several references to system-config-network-cmd, but I'm wary of using this tool in case it overwrites the static configs for the physical interfaces we do have in /etc/sysconfig/network-scripts. Thanks!
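There is no exact `iptables-save` equivalent for interfaces, but a hedged sketch of a home-grown version looks like this: record the VLAN links and addresses before the reboot, then replay them afterwards. The interface name, VLAN ID and address below are placeholders for illustration:

```
# Before the reboot: capture the runtime state.
ip -d -o link show type vlan > /root/vlan-links.txt
ip -o -4 addr show           > /root/vlan-addrs.txt

# After the reboot: recreate each VLAN from the recorded data, e.g.
ip link add link em1 name em1.42 type vlan id 42
ip addr add 192.0.2.10/24 dev em1.42
ip link set em1.42 up
```

A small script can parse the two dump files and emit exactly these three commands per VLAN, which avoids touching the static configs in /etc/sysconfig/network-scripts at all.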
popcornuk (97 rep)
Mar 28, 2018, 01:44 PM • Last activity: May 18, 2025, 04:06 AM
3 votes
1 answer
2469 views
Backup entire zfs pool to different filesystem
I have a `zpool` with around 6TB of data on it (including snapshots for the child datasets). For the important data in it, I already have backups at filesystem level. As I need to perform some rather "dangerous" operations (i.e. this pool migration), I want to back up the whole pool to a different server.

In an ideal world, I would have used `send` and `recv` (like so). Unfortunately, this server is btrfs-based with no option to install ZFS. When researching, people recommend just plain rsync (e.g. here), but as far as I can see, I would be back to filesystem level and, more importantly, I'm not sure if the already existing dataset snapshots would still be intact.

I basically just want to "freeze" the entire pool with a snapshot, send that off to the remote server and, in case something goes wrong, restore the pool to the previous state (ideally with one command). **Therefore I'm looking for a solution to back up an entire zpool to a different server running a different filesystem, keeping all datasets and snapshots intact.**
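A hedged sketch of the usual workaround when the target box cannot run ZFS: dump a recursive replication stream into an ordinary file on the btrfs server (pool name, host and paths are placeholders). The stream preserves every child dataset and snapshot, though it cannot be browsed until it is received back into a ZFS pool:

```
# Freeze the whole pool.
zfs snapshot -r tank@pre-migration

# Full replication stream, parked as a file on the remote btrfs filesystem.
zfs send -R tank@pre-migration | ssh backuphost 'cat > /srv/backup/tank-pre-migration.zfs'

# Roll back by pulling the stream and receiving it into the (re)created pool.
ssh backuphost 'cat /srv/backup/tank-pre-migration.zfs' | zfs receive -F tank
```

Compressing the stream in transit and keeping a checksum of the file are cheap additions; the main caveat is that a corrupted stream file may be unreceivable, so verify it before trusting it as the only copy.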
Dom42 (31 rep)
Aug 29, 2022, 09:32 AM • Last activity: May 8, 2025, 08:00 PM
3 votes
1 answer
2767 views
How to use Timeshift to backup EXT4 filesystem on a separate BTRFS partition?
I have seen several articles on similar topics, but none directly address my question/problem. First, I want to use Timeshift GUI rather than CLI. I have a single physical drive and want to install Linux Mint on an EXT4 partition and create a 20g BTRFS partition for backup images on the EXT4 system partition. Is it possible? Admittedly, I don't know much about any of this. But I do know I previously installed the OS on EXT4 and formatted a USB stick with BTRFS and there appeared to be no option to backup to that.
Shawn Veillon (31 rep)
Sep 8, 2021, 06:14 AM • Last activity: May 7, 2025, 03:00 PM
26 votes
6 answers
28658 views
Backup and restore IMAP mail account with (open source) Linux tools
**Which Linux tools help to back up and restore an IMAP mail account, including all mail and subfolders?** I expect disconnects for large IMAP accounts because of

1. resource limitations on the server
2. the risk of an interruption increasing with the duration.

The software should be able to reconnect and continue the job after any interruption. For repeated backups it might be very handy to use incremental backups and to run the backup script in a cron job.
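As a hedged illustration of one widely used option, OfflineIMAP pulled into a local Maildir handles reconnects and is naturally incremental; the host, user and paths below are placeholders, and the option names should be checked against the installed version's documentation:

```
# ~/.offlineimaprc (minimal sketch)
[general]
accounts = backup

[Account backup]
localrepository = local
remoterepository = remote

[Repository local]
type = Maildir
localfolders = ~/mail-backup

[Repository remote]
type = IMAP
remotehost = imap.example.org
remoteuser = alice
ssl = yes
```

A crontab entry such as `0 3 * * * offlineimap -u quiet` reruns it nightly, and interrupted runs pick up where they left off; isync/mbsync and imapsync are common alternatives in the same role.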
Jonas Stein (4298 rep)
Dec 14, 2014, 09:00 PM • Last activity: May 6, 2025, 11:42 PM
2 votes
0 answers
81 views
How to handle Duplicity not being able to do backups to Google Cloud Storage bucket because bucket contains aborted backup
I have setup a Google Cloud Storage bucket for my Duplicity backups. The bucket has a retention policy of 1 year. Today Duplicity got interrupted while doing the backups, and now, every time I want to run a backup, it tries to delete the aborted backup: Attempt of _do_delete Nr. 2 failed. ClientError: An error occurred (AccessDenied) when calling the DeleteObject operation: Access denied. How can I just leave the aborted file stub (can't be deleted due to retention) and let Duplicity start a new backup anyways? ---- ### Workaround if the bucket retention is not locked * Remove bucket retention and let Duplicity have "Storage Object Admin" access to the bucket. * Rerun Duplicity. But I'd prefer a solution that works even if the destination is read only / WORM.
PetaspeedBeaver (1398 rep)
Apr 20, 2025, 01:38 PM • Last activity: Apr 28, 2025, 09:34 PM
0 votes
2 answers
3633 views
Best strategy to backup btrfs root filesystem?
I have a Btrfs root partition with an `@` root subvolume and an `@home` subvolume, and I take automatic snapshots during updates plus scheduled Timeshift snapshots, both of which are saved on the same drive. This is great, but I want extra redundancy in case of a drive failure. In my last setup on Debian, I used the ext4 file system and put my Timeshift rsync backups on an external drive. How can I do something similar, i.e. back up to an external drive, while still taking snapshots on the root device?

In addition to the system device, which is a 1 TB SSD formatted as Btrfs, I have a 2 TB HDD currently formatted with two NTFS partitions, since I dual-boot Windows as well. I would be willing to move completely to a Linux file system on that drive, but I don't know how I would handle backing up the root drive. I thought about writing a disk image onto the HDD with dd, but if I do this, I would (a) lose an extra TB of storage, if I understand correctly how dd works, and (b) not know how to restore from the image.

Ideally, I would like to have a Btrfs partition on the second drive for backups of the root device only, and a second (e.g. ext4 or NTFS) partition just for overflow data storage. Essentially, my question is: how can I facilitate a backup of my already "snapshotting" root partition (and also know how to restore from it)?
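A hedged sketch of the Btrfs-native route, assuming the external HDD gets a Btrfs partition mounted at /mnt/external and that the usual `@`/`@home` layout applies (all paths are placeholders):

```
# btrfs send needs read-only snapshots as sources.
btrfs subvolume snapshot -r /     /.snapshots/@-$(date +%F)
btrfs subvolume snapshot -r /home /.snapshots/@home-$(date +%F)

# Ship them to the Btrfs partition on the external drive.
btrfs send /.snapshots/@-$(date +%F)     | btrfs receive /mnt/external/
btrfs send /.snapshots/@home-$(date +%F) | btrfs receive /mnt/external/
```

Restoring goes the other way: send the snapshot back from the external drive, receive it into the root filesystem, and point the `subvol=` mount options (or the default subvolume) at the received copy. Timeshift in rsync mode can also target an external drive, which is closer to the old Debian/ext4 setup.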
weygoldt (69 rep)
Mar 23, 2022, 01:45 PM • Last activity: Apr 23, 2025, 05:01 AM
90 votes
7 answers
77034 views
What directories do I need to back up?
What are the directories one should back up, in order to have a backup of all user-generated files? From a vanilla debian install, I can do enough apt to get the packages that I want. So if I don't want to backup the entire system, where all in the filesystem do user-generated configuration and data files reside?
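As a hedged illustration only (the right set varies per system; these are the locations most often named for locally generated configuration and data), a backup invocation might look like:

```
# /etc        local configuration        /home            user files
# /root       root's own files           /srv, /var/www   locally served content
# /usr/local  hand-installed software    /var/lib         service state (dump databases separately)
rsync -aAXH --relative /etc /home /root /srv /var/www /usr/local /var/lib /mnt/backup/
```

Capturing the package selection alongside it (`dpkg --get-selections > packages.txt`) completes the "reinstall via apt, then restore" workflow described in the question.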
user394 (14722 rep)
Aug 23, 2010, 01:31 PM • Last activity: Apr 9, 2025, 02:38 AM
Showing page 1 of 20 total questions