
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

3 votes
1 answer
2294 views
Wildcards in exclude-filelist for duplicity
I am trying to exclude a "bulk" folder in each home directory from the backup. For this purpose, I have a line - /data/home/*/bulk in my exclude-filelist file. However, this doesn't seem to be recognised: Warning: file specification '/data/home/*/bulk' in filelist exclude-list-test.txt doesn't start with correct prefix /data/home/kay/bulk. Ignoring. Is there a way? BTW: is the format in general compatible with rsync's exclude-from? I have a working exclude list for that, where this wildcard expression works.
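A minimal workaround sketch, assuming an older duplicity where plain `--exclude-filelist` lines are matched literally (hence the "doesn't start with correct prefix" warning) while a separate globbing filelist accepts wildcards; the destination URL here is a placeholder:

```sh
# Sketch: glob patterns go in a globbing filelist on older duplicity;
# newer releases (0.7.x+) accept globs in --exclude-filelist directly.
cat > exclude-globs.txt <<'EOF'
/data/home/*/bulk
EOF
duplicity --exclude-globbing-filelist exclude-globs.txt /data/home file:///mnt/backup
```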
mcandril (273 rep)
Oct 6, 2014, 11:30 AM • Last activity: Jun 29, 2025, 09:00 PM
2 votes
0 answers
81 views
How to handle Duplicity not being able to do backups to Google Cloud Storage bucket because bucket contains aborted backup
I have set up a Google Cloud Storage bucket for my Duplicity backups. The bucket has a retention policy of 1 year. Today Duplicity got interrupted while doing the backups, and now, every time I want to run a backup, it tries to delete the aborted backup:

```
Attempt of _do_delete Nr. 2 failed. ClientError: An error occurred (AccessDenied) when calling the DeleteObject operation: Access denied.
```

How can I just leave the aborted file stub (which can't be deleted due to retention) and let Duplicity start a new backup anyway?

----

### Workaround if the bucket retention is not locked

* Remove bucket retention and let Duplicity have "Storage Object Admin" access to the bucket.
* Rerun Duplicity.

But I'd prefer a solution that works even if the destination is read only / WORM.
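One hedged way out, assuming the retention policy only blocks deletes and not writes: start a fresh chain under an unused prefix, so duplicity never tries to clean up the aborted set. The scheme mirrors the question's GCS-over-S3 setup; the prefix is illustrative:

```sh
# Sketch: back up to a brand-new prefix; the aborted stub stays behind
# under the old prefix until the retention period expires.
duplicity full /data \
  --s3-endpoint-url=https://storage.googleapis.com \
  boto3+s3://example/backups-fresh-prefix/
```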
PetaspeedBeaver (1398 rep)
Apr 20, 2025, 01:38 PM • Last activity: Apr 28, 2025, 09:34 PM
1 vote
1 answer
848 views
What does it mean when Duplicity exits with exit status 23?
I'm running Duplicity for backup of my desktop to a remote storage. Everything has been working fine, but now I see in my log that Duplicity has been exiting with exit status 23 for a couple of runs. From what I can see the backups are running and are uploaded to the remote computer. The exit status 23 seems to only happen when it's run in the background as a cron job. When I run the backup script manually I get a 0 exit status. Duplicity's man page lacks info on the meaning of the program's exit statuses, so I don't really know where to start. `cat syslog.1` outputs:

```
...
Jan 23 12:54:35 xx anacron: Job `cron.daily' started
Jan 23 12:54:35 xx anacron: Updated timestamp for job `cron.daily' to 2019-01-23
Jan 23 12:54:45 xx kernel: [  322.467223] EXT4-fs (sdb1): warning: maximal mount count reached, running e2fsck is recommended
Jan 23 12:54:45 xx kernel: [  322.473997] EXT4-fs (sdb1): recovery complete
Jan 23 12:54:45 xx kernel: [  322.474404] EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: (null)
Jan 23 12:55:03 xx kernel: [  340.754000] EXT4-fs (sdc): recovery complete
Jan 23 12:55:03 xx kernel: [  340.794089] EXT4-fs (sdc): mounted filesystem with ordered data mode. Opts: (null)
...
```
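Since the man page doesn't enumerate exit codes, a first debugging step is a cron wrapper that records the exact status and the full output of the background run, for comparison with the manual run; a minimal sketch with placeholder paths:

```sh
#!/bin/sh
# Sketch: log duplicity's output and exit status when run from cron.
LOG=/var/log/duplicity-cron.log
duplicity -v5 /home/user file:///mnt/backup >>"$LOG" 2>&1
status=$?
echo "$(date): duplicity exited with status $status" >>"$LOG"
exit "$status"
```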
PetaspeedBeaver (1398 rep)
Jan 21, 2019, 07:59 PM • Last activity: Mar 28, 2025, 09:50 AM
0 votes
0 answers
12 views
Will Duplicity download anything from the destination when doing a `remove-all-but-n-full`?
I'm using Google Cloud Storage to store backups taken by Duplicity. Since I just want to keep 4 full backups at the destination, I run:

```
duplicity remove-all-but-n-full 4 --force \
  --s3-endpoint-url=https://storage.googleapis.com boto3+s3://example
```

Since I want to avoid read charges, I wonder whether the above command will read anything from the server, or does it just decide which files to delete from the files in the archive dir?
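One cheap experiment, based on duplicity's documented behaviour that delete actions only report what they would remove unless `--force` is given: run the same command without `--force` and watch the bucket's request log to see what is fetched beyond object listing:

```sh
# Sketch: with --force omitted, this only lists the backup sets that
# would be deleted; same endpoint and target as above.
duplicity remove-all-but-n-full 4 \
  --s3-endpoint-url=https://storage.googleapis.com boto3+s3://example
```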
PetaspeedBeaver (1398 rep)
Mar 12, 2025, 05:00 PM • Last activity: Mar 12, 2025, 05:14 PM
17 votes
1 answer
22313 views
How to include / exclude directories in duplicity
I'm using duplicity to back up some of my files. The man page is kind of confusing regarding the include/exclude patterns. I'd like to back up the following things:

```
/storage/include
/otherthings
```

but NOT /storage/include/exclude. The include file currently looks like:

```
+ /storage/include
- /storage/include/exclude
+ /otherthings
- **
```

Duplicity is called as follows:

```
/usr/bin/duplicity --include-globbing-filelist /Path/to/file/above / target
```

It simply doesn't work. Every time it backs up, it also includes the files in /storage/include/exclude.
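For reference, duplicity's filelists are evaluated top-down and the first matching line wins, so a more specific exclude has to appear before the broader include that would otherwise swallow it. A sketch of the reordered filelist, based on that documented first-match rule:

```
- /storage/include/exclude
+ /storage/include
+ /otherthings
- **
```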
int2000 (273 rep)
Mar 2, 2014, 11:08 AM • Last activity: Jan 12, 2024, 10:01 AM
0 votes
1 answer
338 views
Duplicity does not start new chains with full-if-older-than
I want to use duplicity to backup most of my home folder to Backblaze. I'm trying to set it up so that it does a full backup every month. For this I'm using the following commands which are run daily:
```
LOCAL_DIR="/home/username"
EXCLUDE="\
 --exclude /home/username/temp \
 --exclude /home/username/.cache \
"
duplicity \
 backup --full-if-older-than 1M ${EXCLUDE} --progress \
 ${LOCAL_DIR} b2://${B2_ID}:${B2_KEY}@${B2_BUCKET}
```
This successfully creates a backup every day; however, duplicity never creates a new chain. There was one full backup in September and since then only incrementals. I want to start a new chain with a full backup each month. Some things I've tried with no luck:

- change `1M` to `30D`
- change the order of the options (e.g. excludes first)
- change `backup` to `incremental`

Any ideas? Thanks! Here's an example output of the command:
```
+ duplicity backup --full-if-older-than 1M --exclude /home/username/temp --exclude /home/username/.cache --progress /home/username b2://B2_ID:B2_KEY@B2_BUCKET
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Sep 21 16:19:13 2023
3.9MB 00:01:12 [291.5KB/s] [========================================>] 100% ETA 0sec
--------------[ Backup Statistics ]--------------
StartTime 1702050982.47 (Fri Dec  8 16:56:22 2023)
EndTime 1702051043.61 (Fri Dec  8 16:57:23 2023)
ElapsedTime 61.15 (1 minute 1.15 seconds)
SourceFiles 540202
SourceFileSize 77232368533 (71.9 GB)
NewFiles 39
NewFileSize 15434498 (14.7 MB)
DeletedFiles 11
ChangedFiles 78
ChangedFileSize 70757910 (67.5 MB)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 128
RawDeltaSize 19617995 (18.7 MB)
TotalDestinationSizeChange 3410741 (3.25 MB)
Errors 0
-------------------------------------------------
```
And here's the collection status:
```
+ duplicity collection-status --progress b2://B2_ID:B2_KEY@B2_BUCKET
Last full backup date: Thu Sep 21 16:19:13 2023
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /home/username/.cache/duplicity/a52568672c187fdf7e6d79e12a4df37f

Found 0 secondary backup chain(s).

Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Thu Sep 21 16:19:13 2023
Chain end time: Fri Dec  8 16:55:20 2023
Number of contained backup sets: 72
Total number of contained volumes: 331
 Type of backup set:                            Time:      Num volumes:
                Full         Thu Sep 21 16:19:13 2023               222
         Incremental         Fri Sep 22 13:41:57 2023                 1
         Incremental         Fri Sep 22 14:04:53 2023                 1
         Incremental         Fri Sep 22 14:07:21 2023                 1
         Incremental         Sat Sep 23 23:07:51 2023                 1
         Incremental         Sun Sep 24 14:38:12 2023                 1
         Incremental         Sun Sep 24 15:44:06 2023                 1
         Incremental         Mon Sep 25 14:49:06 2023                 1
         Incremental         Tue Sep 26 10:31:05 2023                 1
         Incremental         Wed Sep 27 14:24:04 2023                 1
         Incremental         Thu Sep 28 11:10:04 2023                 1
         Incremental         Fri Sep 29 14:29:04 2023                 1
         Incremental         Sat Sep 30 12:08:39 2023                 2
         Incremental         Sun Oct  1 11:33:05 2023                 1
         Incremental         Mon Oct  2 10:22:05 2023                 2
         Incremental         Tue Oct  3 15:37:05 2023                 6
         Incremental         Wed Oct  4 13:50:36 2023                 1
         Incremental         Thu Oct  5 14:28:05 2023                 1
         Incremental         Fri Oct  6 13:39:05 2023                 1
         Incremental         Sat Oct  7 14:42:07 2023                 2
         Incremental         Sun Oct  8 08:21:05 2023                 1
         Incremental         Mon Oct  9 19:11:05 2023                 1
         Incremental         Tue Oct 10 10:06:05 2023                 2
         Incremental         Wed Oct 11 11:07:05 2023                 2
         Incremental         Thu Oct 12 11:21:05 2023                 2
         Incremental         Fri Oct 13 21:04:58 2023                 1
         Incremental         Sat Oct 14 13:27:15 2023                 1
         Incremental         Sun Oct 15 12:13:05 2023                 1
         Incremental         Mon Oct 16 11:27:06 2023                 1
         Incremental         Tue Oct 17 12:10:05 2023                 1
         Incremental         Thu Oct 19 14:37:53 2023                 2
         Incremental         Fri Oct 20 19:13:03 2023                 1
         Incremental         Sat Oct 21 14:26:06 2023                 1
         Incremental         Sun Oct 22 11:49:55 2023                 3
         Incremental         Mon Oct 23 12:20:05 2023                 8
         Incremental         Wed Oct 25 17:05:08 2023                 1
         Incremental         Wed Oct 25 18:28:05 2023                 1
         Incremental         Thu Oct 26 21:35:52 2023                 1
         Incremental         Fri Oct 27 14:36:05 2023                 1
         Incremental         Sat Oct 28 09:06:04 2023                 1
         Incremental         Sun Oct 29 20:04:52 2023                 1
         Incremental         Mon Oct 30 13:30:05 2023                 1
         Incremental         Tue Oct 31 11:24:05 2023                 1
         Incremental         Wed Nov  1 18:36:05 2023                 2
         Incremental         Fri Nov  3 09:49:05 2023                 1
         Incremental         Sat Nov  4 03:27:05 2023                 2
         Incremental         Sun Nov  5 19:50:06 2023                 2
         Incremental         Mon Nov  6 17:49:17 2023                 1
         Incremental         Tue Nov  7 12:38:34 2023                 1
         Incremental         Tue Nov  7 19:10:04 2023                 1
         Incremental         Wed Nov  8 18:12:09 2023                 1
         Incremental         Thu Nov  9 13:43:17 2023                 1
         Incremental         Fri Nov 10 20:29:17 2023                 2
         Incremental         Sat Nov 11 14:19:05 2023                 1
         Incremental         Sun Nov 12 21:27:09 2023                 3
         Incremental         Mon Nov 13 14:29:05 2023                 1
         Incremental         Tue Nov 14 18:05:55 2023                 3
         Incremental         Wed Nov 15 10:06:05 2023                 3
         Incremental         Thu Nov 16 14:45:06 2023                 5
         Incremental         Fri Nov 17 10:40:05 2023                 4
         Incremental         Sun Nov 19 18:14:08 2023                 1
         Incremental         Mon Nov 20 16:51:05 2023                 1
         Incremental         Wed Nov 22 14:23:07 2023                 1
         Incremental         Thu Nov 23 16:27:06 2023                 1
         Incremental         Sun Dec  3 17:10:06 2023                 1
         Incremental         Mon Dec  4 21:17:05 2023                 1
         Incremental         Tue Dec  5 13:51:06 2023                 1
         Incremental         Thu Dec  7 11:14:04 2023                 1
         Incremental         Fri Dec  8 15:38:05 2023                 1
         Incremental         Fri Dec  8 15:48:07 2023                 1
         Incremental         Fri Dec  8 16:26:20 2023                 1
         Incremental         Fri Dec  8 16:55:20 2023                 1
-------------------------
No orphaned or incomplete backup sets found.
```
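Not from the thread, just a hedged workaround sketch if `--full-if-older-than` keeps being ignored: schedule an explicit `duplicity full` once a month (e.g. from `/etc/cron.monthly/`) and keep the daily job incremental; the variables are the ones defined in the question's script:

```sh
# Sketch for a monthly cron job: force a new chain unconditionally.
duplicity full ${EXCLUDE} ${LOCAL_DIR} b2://${B2_ID}:${B2_KEY}@${B2_BUCKET}
```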
werenike (3 rep)
Dec 8, 2023, 04:20 PM • Last activity: Dec 9, 2023, 08:08 AM
0 votes
0 answers
170 views
Duplicity encrypting to a strange key?
I'm trying to get duplicity working with gpg keys, but it's behaving a bit strangely (on the actual machine that I want backed up; it seems better in virtual machines). Here's a complete set of commands (executed within a minute):
```sh
grove@stacey> rm -fr /tmp/backup
grove@stacey> duplicity full --encrypt-key 00FDE9885BB452EC317D6FF924A2044BE1CCBEE1 --sign-key 0FA385BE82DE75CD94338E65EA7482DAB844D7E7 /home/grove/tmp/backuptest file:///tmp/backup
Warning, found signatures but no corresponding backup files
Synchronizing remote metadata to local cache...
Deleting local /home/grove/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75/duplicity-full-signatures.20230721T124839Z.sigtar.gz (not authoritative at backend).
Deleting local /home/grove/.cache/duplicity/ba8d32ccb88d13597b4784252744fc75/duplicity-full.20230721T124839Z.manifest (not authoritative at backend).
Last full backup date: none
GnuPG passphrase for decryption: 
GnuPG passphrase for signing key: 
--------------[ Backup Statistics ]--------------
StartTime 1689944019.33 (Fri Jul 21 14:53:39 2023)
EndTime 1689944019.42 (Fri Jul 21 14:53:39 2023)
ElapsedTime 0.09 (0.09 seconds)
SourceFiles 61
SourceFileSize 45056 (44.0 KB)
NewFiles 61
NewFileSize 45056 (44.0 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 61
RawDeltaSize 0 (0 bytes)
TotalDestinationSizeChange 1983 (1.94 KB)
Errors 0
-------------------------------------------------

grove@stacey> duplicity full --encrypt-key 00FDE9885BB452EC317D6FF924A2044BE1CCBEE1 --sign-key 0FA385BE82DE75CD94338E65EA7482DAB844D7E7 /home/grove/tmp/backuptest file:///tmp/backup
 grove@stacey> duplicity verify --compare-data --encrypt-key 00FDE9885BB452EC317D6FF924A2044BE1CCBEE1 --sign-key 0FA385BE82DE75CD94338E65EA7482DAB844D7E7 file:///tmp/backup /home/grove/tmp/backuptest 
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Fri Jul 21 14:53:21 2023
GnuPG passphrase for decryption: 
GPGError: GPG Failed, see log below:
===== Begin GnuPG log =====
gpg: encrypted with 4096-bit RSA key, ID 7554FBF3A16C9773, created 2023-07-06
"Duplicity_encryption (Encryption key for duplicity)"
gpg: public key decryption failed: No passphrase given
gpg: decryption failed: No secret key
===== End GnuPG log =====

 grove@stacey> gpg --list-keys | grep -i 7554FBF3A16C9773
 grove@stacey> gpg --list-secret-keys | grep -i 7554FBF3A16C9773
 grove@stacey> gpg --list-keys | grep -i -C2 duplicity
pub   rsa4096 2023-07-06 [SC]
      00FDE9885BB452EC317D6FF924A2044BE1CCBEE1
uid           [ ultim. ] Duplicity_encryption (Encryption key for duplicity)
sub   rsa4096 2023-07-06 [E]
```
For some reason duplicity is asking for the passphrase for decryption when making a backup (and that passphrase is not needed), but that's a minor issue (and might be fixed in newer versions). The big problem is that it seems the backup is encrypted for a GPG key that is not known? (Neither the public part, that I would think is needed to make the backup - nor the private part that is needed to read it). I even specified which key to encrypt for, but that was ignored? (this is using duplicity 0.8.17-1+b1 from Debian Bullseye - I know that is old, stable has a version that is a little newer, but still a bit old, but backups are a thing I'd like to have in place before upgrading) So where is the key it has used for encrypting? Alternatively: How do I make it use the key I actually specified?
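A hedged way to see which key a volume was really encrypted to, using only standard gpg; the filename pattern below is duplicity's usual volume naming:

```sh
# Sketch: gpg prints the recipient keyid of each "pubkey enc packet"
# without needing the private key, so this shows what 7554FBF3A16C9773
# refers to on disk.
gpg --batch --pinentry-mode cancel --list-packets \
  /tmp/backup/duplicity-full.*.vol1.difftar.gpg | grep -i keyid
```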
Henrik supports the community (5878 rep)
Jul 21, 2023, 01:30 PM
0 votes
1 answer
199 views
Does Duplicity incremental backup overwrite amended files?
What I never understood in incremental backup, via duplicity, is whether amended local files are overwritten in the backup storage, i.e., is it possible with incremental backup to restore an old version of a file that has been amended?
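For intuition: incrementals store deltas rather than overwriting, so earlier versions remain restorable by passing a time to the restore. A minimal sketch with placeholder paths, using duplicity's documented `-t`/`--file-to-restore` options:

```sh
# Sketch: restore a single file as it was 3 days ago.
duplicity -t 3D --file-to-restore home/user/report.txt \
  file:///mnt/backup /tmp/report-3-days-ago.txt
```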
João Pimentel Ferreira (870 rep)
Apr 18, 2023, 09:10 AM • Last activity: May 31, 2023, 02:33 PM
1 vote
1 answer
734 views
How and when to use duplicity verify?
I'm making daily incremental backups and monthly full backups, both with duplicity. Daily backup script (in `/etc/cron.daily/`):
```bash
#!/bin/sh

adddate() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date):" "$line";
    done
}

# be sure external drives are mounted
mount -a

# backup to HDD backup B, using duplicity 
echo "\n\nBacking up /home and /etc into /mnt/backupB with duplicity (incremental backup)" | adddate >> /var/log/daily-backup.log 2>&1
export PASSPHRASE=****
duplicity --exclude='**/.cache/' --include /home --include /etc --exclude '**' / file:///mnt/backupB | adddate >> /var/log/daily-backup.log 2>&1
unset PASSPHRASE
```
Monthly backup script (in `/etc/cron.monthly/`):
```bash
#!/bin/sh

adddate() {
    while IFS= read -r line; do
        printf '%s %s\n' "$(date):" "$line";
    done
}

# be sure external drives are mounted
mount -a

# backup to HDD backup B, using duplicity
echo "\n\nBacking up /home and /etc into /mnt/backupB with duplicity (full backup)" | adddate >> /var/log/monthly-backup.log 2>&1
export PASSPHRASE=*****
duplicity full --exclude='**/.cache/' --include /home --include /etc --exclude '**' / file:///mnt/backupB | adddate >> /var/log/monthly-backup.log 2>&1
unset PASSPHRASE
```
My question is: when and where shall I use `duplicity verify`? After incremental backups, full backups, or both?
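For concreteness, a verify invocation matching the scripts above, as a sketch rather than a prescription: `verify` takes the backup URL first and the local path second, and passing the same include/exclude selection keeps never-backed-up files out of the report. Running it after each backup, daily or monthly, is mostly a cost trade-off, since it re-reads the archive either way:

```bash
# Sketch: verification against the same file selection as the backups;
# adddate is the logging helper defined in the scripts above.
export PASSPHRASE=****
duplicity verify --compare-data \
  --exclude='**/.cache/' --include /home --include /etc --exclude '**' \
  file:///mnt/backupB / | adddate >> /var/log/verify-backup.log 2>&1
unset PASSPHRASE
```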
João Pimentel Ferreira (870 rep)
Jan 22, 2023, 05:28 PM • Last activity: Jan 23, 2023, 02:21 PM
2 votes
2 answers
509 views
Duplicity gives InvalidBackendURL error when setting up Backblaze B2
When running Duplicity with Backblaze's B2 as outlined in some articles:

```
duplicity ~ b2://[keyID]:[application key]@[B2 bucket name]
```

Real values hidden, but provided through the Backblaze B2 UI. I encounter the following error:

```
InvalidBackendURL: Syntax error (port) in: b2://[keyID]:[application key]@[B2 bucket name] AFalse BNone [keyID]:[application key partial]\
```

where the application key is partially chopped off at the slash. I have attempted many alternatives to escape the slash, such as double quotes, single quotes, and backslash escaping, but nothing improves the situation.
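A hedged guess at a workaround, assuming the parser is choking on a `/` inside the application key: percent-encode the key so the URL stays unambiguous (whether a given duplicity version decodes it again is worth testing):

```sh
# Sketch: URL-encode the application key before building the target URL;
# KEY_ID, APP_KEY and BUCKET stand in for the real B2 values.
APP_KEY_ENC=$(python3 -c 'import sys,urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$APP_KEY")
duplicity ~ "b2://${KEY_ID}:${APP_KEY_ENC}@${BUCKET}"
```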
Matt Copperwaite (633 rep)
Jan 14, 2021, 02:21 PM • Last activity: Sep 28, 2022, 01:30 AM
4 votes
3 answers
2037 views
How to keep a history of backups?
My intention is to create a "snapshot" of a certain hard drive on a server every day. How can I automatically make sure that the snapshot of today, three days ago, five days ago, 10 days ago (and so on) are being kept and cyclically replaced? I found a tool: duplicity, but I didn't find any way to achieve this using it. Should I go for a bash shell script? Is there any example I can take inspiration from? The system is running Debian.
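A minimal duplicity-based sketch of such a rotation, assuming one daily cron run; the paths and retention counts are illustrative, not prescriptive:

```sh
#!/bin/sh
# Sketch: start a new full chain every 10 days and keep the 3 newest
# chains, so roughly a month of daily restore points survives.
duplicity --full-if-older-than 10D /data file:///mnt/snapshots
duplicity remove-all-but-n-full 3 --force file:///mnt/snapshots
```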
giovi321 (919 rep)
Mar 27, 2015, 09:31 PM • Last activity: May 18, 2022, 08:30 PM
0 votes
1 answer
260 views
Duplicity: Exclude files from verification
I use duplicity to backup a set of directories, using a daily cron job. I have another daily cron job to duplicity verify --compare-data the backup, as a sanity check, which runs shortly after the backup. I'd like to exclude some frequently changing files from the verification, so that I don't always get false positives in the resulting cron email. That is, I don't want to have those files reported in the number of differences found. I still want to backup those files (otherwise I'd simply exclude them from the backup in the first place). Unfortunately, when verifying, duplicity applies the --exclude/--include options only to the file system side, not to the backup side. That is, when I exclude more files in the verification than were excluded in the backup, duplicity reports those files as missing (present in the backup but missing in the file system). Hence the --exclude option can't be used to exclude files from being verified altogether. There's the --file-to-restore option, which also applies to verification (allows to verify a specific path), but it only accepts a single path, not a set of patterns and no exclusions like with --include/--exclude. Is there some other way to achieve what I want, that is, verify an existing backup against the file system, excluding certain files (preferably file patterns) from both sides of the verification?
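One workaround sketch that filters the report instead of the verification: keep verifying everything, but strip the known-noisy paths from the output before cron mails it. The grep patterns are hypothetical examples, not duplicity syntax:

```sh
# Sketch: drop "Difference found" lines for the frequently changing files.
duplicity verify --compare-data file:///mnt/backup /data 2>&1 \
  | grep -v -e 'Difference found: .*\.log' -e 'Difference found: var/cache/'
```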
nmatt (101 rep)
Apr 23, 2022, 05:25 PM • Last activity: Apr 24, 2022, 10:52 AM
0 votes
1 answer
523 views
using deja dup with rsync server
I have used Deja Dup on Ubuntu for years with a remote NFS share. Now I have a new home server, which also runs an rsync daemon. How can I connect Deja Dup to this daemon instead of NFS? I found out how to connect duplicity from the command line of the client to the remote rsync daemon:

```
duplicity --no-encryption testfile.txt rsync://server::module/backupfolder/
```

This works fine: the client prints plausible backup statistics, there are no errors, and the backup arrives on the server, so I consider my rsync server correctly configured, running, and reachable by the client. But when I put `rsync://server::module/backupfolder/` into the client's Deja Dup settings as the backup location, it says after starting a backup "location cannot be mounted" or similar. Is the URL syntax wrong? What else am I doing wrong? Isn't Deja Dup supposed to work the same way as duplicity, since it uses duplicity?

*edit: I corrected the paths, but still no luck connecting with Deja Dup*
PeterK (1 rep)
Mar 24, 2022, 07:39 PM • Last activity: Apr 14, 2022, 07:15 PM
0 votes
0 answers
129 views
duplicity: how to prevent malicious user from messing with backups
I'm using duplicity on almost every host I maintain to create backups to a remote location (call it the "backup host"). How often and when to do full or incremental backups depends on the host and its use case. I'm not just trying to protect myself from typical failures (human error, hardware/software, etc.), but also to recover from a potential attack. In my case I'm using the SSH/SFTP backend for running such duplicity backups.

Since duplicity needs read/write access to the backup host, somebody gaining control of the to-be-backed-up hosts can also connect to the backup host and delete or mess with the respective backups. Moreover, as I understand it, the to-be-backed-up hosts also *need* to delete files on the backup host: in order not to run out of space, my to-be-backed-up hosts frequently call duplicity with `remove-all-but-n-full` and `remove-all-inc-of-but-n-full` actions to delete old backups. Such cleanups, as far as I was told, can't happen on the backup host itself, because the backups, including duplicity's metadata about which files belong to which set, are encrypted, and the corresponding private key (GPG) is not present on the backup host. So, to do such cleanups on the backup host, I'd have to go by file timestamps, which might result in corrupted/partial backup sets.

Currently, to "protect" myself from such an attack, I have a second backup host regularly connecting to the primary backup host and rsync'ing all backup data from the primary to the secondary. But here again I'm facing the issues already mentioned:

- I'm just copying files from the primary to the secondary backup host without any understanding of their structure (as in: I'll copy not-yet-completed files, partial sets, etc.).
- In order not to run out of space, I'll have to delete stale backups solely based on their file timestamps yet again, which might render backup sets unusable.

What is the most elegant and cleanest way to back up hosts via duplicity automatically, while ensuring an attacker gaining full access to the to-be-backed-up hosts can't mess with already created backups? Thanks a lot!
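Not a full answer, but a common partial mitigation on the backup host's side, sketched below: give every client its own locked-down SFTP-only account, so a compromised client can at most damage its own backup area. This limits the blast radius but by itself does not make the store append-only; account and path names are illustrative:

```
# Sketch for the backup host's /etc/ssh/sshd_config:
Match User backup-client1
    ChrootDirectory /srv/backups/client1
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```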
daten (23 rep)
Oct 28, 2021, 08:32 PM
0 votes
1 answer
64 views
Conversion of files into directories, and the possibility of changing them back
Basically what happened is that, due to a known bug with duplicity, I had to manually extract the difftar archives (difftarchives?). Imagine my shock and *slight* horror when I saw that many files that ought to be regular files, not directories, had been converted into directories. For example, files that should be plain text are directories. [Screenshot: the messed-up files]
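If the unexpected directories follow duplicity's multivolume layout, this is the on-disk format rather than corruption: a file too large for one volume is stored as numbered parts under `multivol_snapshot/`. A hedged reassembly sketch, with example paths:

```sh
# Sketch: rebuild one large file by concatenating its numbered parts
# in numeric order.
cd extracted/multivol_snapshot/home/user/bigfile.txt
cat $(ls | sort -n) > /tmp/bigfile.txt
```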
NTibbitts (1 rep)
Oct 18, 2021, 08:00 PM • Last activity: Oct 19, 2021, 06:04 PM
3 votes
1 answer
1208 views
Duplicity: switch to "secondary backup chain"
I ran duplicity when my backup drive wasn't connected, and it deleted lots of cached sigtar and manifest files:

```
Synchronizing remote metadata to local cache...
Deleting local /Users/justin/.cache/duplicity/a7190bc7f0d9f083cbc7e03931a8c95f/duplicity-full-signatures.20160608T054248Z.sigtar.gz (not authoritative at backend).
Deleting local /Users/justin/.cache/duplicity/a7190bc7f0d9f083cbc7e03931a8c95f/duplicity-full.20160608T054248Z.manifest (not authoritative at backend).
Deleting local /Users/justin/.cache/duplicity/a7190bc7f0d9f083cbc7e03931a8c95f/duplicity-inc.20160608T054248Z.to.20160610T041839Z.manifest (not authoritative at backend).
Deleting local /Users/justin/.cache/duplicity/a7190bc7f0d9f083cbc7e03931a8c95f/duplicity-inc.20160610T041839Z.to.20160616T043456Z.manifest (not authoritative at backend).
```

Now, when I run duplicity, it tries to do a full backup instead of finding my existing incremental backups. Running `duplicity collection-status file:///Volumes/DuplicityBackup/` shows that my backup chain has become a "secondary backup chain", and there is an empty "primary backup chain":

```
$ duplicity collection-status file:///Volumes/DuplicityBackup/
Synchronizing remote metadata to local cache...
Last full backup date: Thu Jul 28 22:17:39 2016
Collection Status
-----------------
Connecting with backend: LocalBackend
Archive dir: /Users/justin/.cache/duplicity/a7190bc7f0d9f083cbc7e03931a8c95f

Found 1 secondary backup chain.

Secondary chain 1 of 1:
-------------------------
Chain start time: Tue Jun  7 22:42:48 2016
Chain end time: Wed Jul 27 07:19:47 2016
Number of contained backup sets: 23
Total number of contained volumes: 2329
 Type of backup set:                            Time:      Num volumes:
                Full         Tue Jun  7 22:42:48 2016              2219
         Incremental         Thu Jun  9 21:18:39 2016                20
         Incremental         Wed Jun 15 21:34:56 2016                 6
         Incremental         Fri Jun 17 07:29:49 2016                 1
...
-------------------------

Found primary backup chain with matching signature chain:
-------------------------
Chain start time: Thu Jul 28 22:17:39 2016
Chain end time: Thu Jul 28 22:17:39 2016
Number of contained backup sets: 1
Total number of contained volumes: 0
 Type of backup set:                            Time:      Num volumes:
-------------------------
No orphaned or incomplete backup sets found.
```

**How can I fix this?** (i.e., delete the empty "primary backup chain" and use the "secondary backup chain")
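Not a confirmed fix, just a hedged sketch of the usual cleanup: the empty primary chain is presumably the stray set written while the drive was missing, so removing those files from the backend (after inspecting them first) and rebuilding the cache should promote the old chain again. The timestamp is derived from the "Last full backup date" above:

```sh
# Sketch: inspect, delete only the stub set from the failed run,
# then let duplicity resynchronize its cache from the backend.
ls /Volumes/DuplicityBackup/ | grep 20160728T221739Z
rm /Volumes/DuplicityBackup/*20160728T221739Z*
rm -r ~/.cache/duplicity/a7190bc7f0d9f083cbc7e03931a8c95f
duplicity collection-status file:///Volumes/DuplicityBackup/
```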
ConvexMartian (163 rep)
Aug 7, 2016, 12:43 PM • Last activity: Oct 18, 2021, 06:31 AM
11 votes
1 answer
14934 views
How to restore folders to their original destination using duplicity?
After performing a backup of a couple of directories like so:

```
# duplicity \
    --exclude /home/user/Documents/test1/file \
    --include /home/user/Documents/test1 \
    --include /tmp/test2 \
    --exclude '**' \
    / file:///home/user/Backup
```

I wanted to test how the restoration works by deleting the backed-up directories:

```
# rm -rf /home/user/Documents/test1 /tmp/test2
```

And then, restoring the backup:

```
# duplicity file:///home/user/Backup /
```

But I got the error:

```
Restore destination directory / already exists. Will not overwrite.
```

So it appears that I can't restore to the original destination without emptying the root folder, even though the destinations of the included folders have already been cleared. Is there a better way than to restore to another location and then move each folder one by one?

```
# duplicity --file-to-restore home/user/Documents/test1 file:///home/user/Backup /home/user/Restore1
# mv /home/user/Restore1/home/user/Documents/test1 /home/user/Documents/test1
# duplicity --file-to-restore tmp/test2 file:///home/user/Backup /home/user/Restore2
# mv /home/user/Restore2/tmp/test2 /tmp/test2
```
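For what it's worth, the refusal is only a safety check: per the man page, duplicity will not restore into an existing destination unless `--force` is given. A minimal sketch restoring in place:

```sh
# Sketch: override the overwrite check and restore straight to /.
duplicity --force file:///home/user/Backup /
```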
Question Overflow (4818 rep)
Feb 8, 2014, 11:16 AM • Last activity: Sep 20, 2021, 09:22 PM
0 votes
1 answer
677 views
excluding relative paths in duplicity
The [duplicity documentation](http://duplicity.nongnu.org/vers8/duplicity.1.html) doesn't appear to fully document its behaviour when relative paths (or bare filenames) are passed to the `--exclude` option. If I pass the option `--exclude foo`, for example, will this cause each file or directory named `foo` in the entire hierarchy under `source_directory` to be excluded, or will it only exclude a file or directory with that name in `source_directory` itself? If the latter is the case, is there a way to exclude `source_directory/foo` without having to type the full path to `source_directory` for each such option (other than by using a shell variable)?
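For comparison, a sketch relying on duplicity's documented `**` glob, which matches across directory separators, so a name can be excluded everywhere without spelling out the source path:

```sh
# Sketch: excludes every file or directory named foo anywhere under
# the source, wherever the source is rooted.
duplicity --exclude '**/foo' /path/to/source file:///mnt/backup
```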
intuited (3578 rep)
Sep 2, 2021, 11:07 PM • Last activity: Sep 3, 2021, 02:23 PM
0 votes
1 answer
275 views
SSH connection to known host still needs authorization
I am trying to set up a backup server using duply and a secure connection. I have created the ~/.ssh/config file with the following content:

```
Host backup
    IdentityFile ~/.ssh/id_ed25519_backup
    Hostname
    Port 22
    User
```

Furthermore, I have also populated the known_hosts file by copy-pasting into it the server's public key (found in /etc/ssh/ssh_host_ed25519_key.pub). All seems to work properly when using the `ssh -v backup` command:

```
Authenticated to ([]:22)
```

However, when launching the duply backup routine, I see that the server is not recognized:

```
The authenticity of host '[]:22' can't be established.
SSH-ED25519 key fingerprint is c3:06:95:f8:5f:d3:76:7f:c6:9d:19:ef:e5:23:9a:14.
Are you sure you want to continue connecting (yes/no)?
```

Why is this happening?

----------

**Update:** It seems that duply is computing the MD5 hash of the public key, while ssh uses the SHA256 one: in fact, as mentioned here, `ssh-keygen -l -E md5 -f /etc/ssh/ssh_host_ed25519_key.pub` returns the same hexadecimal fingerprint stated above. Since they are two different hashes of the same key, why is a connection confirmation still asked for? Is it possible to force SSH to use only a single hash algorithm?

**Further update:** `ssh -o FingerprintHash=md5 -v backup` does not require confirmation, so I suppose that the issue is limited to duply. Maybe it does not refer to the user's known_hosts file?
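One hedged thing to try: let `ssh-keyscan` write the entry instead of hand-pasting the key file, so the `known_hosts` line carries the exact host (and port) string the library looks up; the hostname below is a placeholder. Paramiko-based SFTP backends, which duply/duplicity commonly use, tend to match on that literal host string rather than on the `Host` alias from `~/.ssh/config`:

```sh
# Sketch: record the server's ed25519 key under its real name and port.
ssh-keyscan -p 22 -t ed25519 backup-server.example >> ~/.ssh/known_hosts
```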
rudicangiotti (123 rep)
Feb 21, 2021, 12:14 AM • Last activity: Feb 27, 2021, 12:30 PM
2 votes
1 answer
328 views
duplicity resends all the data if filename is renamed
Let's create a `test/` directory containing a random 1 GB file (`head -c 1G test/1GBfile`), and let's do a backup with [duplicity](http://duplicity.nongnu.org/):

```
duplicity test/ file:///home/www/backup/
```

Then `/home/www/backup/` contains an encrypted archive, taking around ~1 GB. Then let's add a new file of a few bytes (`echo "hello" >test/hello.txt`) and redo the backup:

```
duplicity test/ file:///home/www/backup/
```

`backup/` is still ~1 GB. Only a few files of < 1 KB were created, as usual with incremental backup.

**Now let's rename the 1 GB file (`mv test/1GBfile test/1GBfile_newname`) and redo the incremental backup:**

```
duplicity test/ file:///home/www/backup/
```

**Then `backup/` is now ~2 GB!** Why does duplicity not take into account the fact that it's the same file content with a new name? If we had used networking here, we would have wasted 1 GB of transfer even though the file content is exactly the same. duplicity uses rsync, which usually takes care of this problem; is there an option to avoid it?

____

Log after the addition of the .txt file:
```
--------------[ Backup Statistics ]--------------
StartTime 1605543432.43 (Mon Nov 16 17:17:12 2020)
EndTime 1605543432.72 (Mon Nov 16 17:17:12 2020)
ElapsedTime 0.29 (0.29 seconds)
SourceFiles 3
SourceFileSize 1073745926 (1.00 GB)
NewFiles 2
NewFileSize 4102 (4.01 KB)
DeletedFiles 0
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 2
RawDeltaSize 6 (6 bytes)
TotalDestinationSizeChange 230 (230 bytes)
Errors 0
-------------------------------------------------
```
Log after the renaming of the file:
```
--------------[ Backup Statistics ]--------------
StartTime 1605543625.97 (Mon Nov 16 17:20:25 2020)
EndTime 1605543840.72 (Mon Nov 16 17:24:00 2020)
ElapsedTime 214.76 (3 minutes 34.76 seconds)
SourceFiles 3
SourceFileSize 1073745926 (1.00 GB)
NewFiles 2
NewFileSize 1073745920 (1.00 GB)
DeletedFiles 1
ChangedFiles 0
ChangedFileSize 0 (0 bytes)
ChangedDeltaSize 0 (0 bytes)
DeltaEntries 3
RawDeltaSize 1073741824 (1.00 GB)
TotalDestinationSizeChange 1080871987 (1.01 GB)
Errors 0
-------------------------------------------------
```
`TotalDestinationSizeChange 1080871987 (1.01 GB)`, arghh! The file has just been *renamed*!
Basj (2579 rep)
Nov 16, 2020, 04:11 PM • Last activity: Nov 18, 2020, 11:05 AM