Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
2
votes
1
answers
3549
views
Restoring a backup with rsync
I'm getting back into Linux and am running Kubuntu 21.10, and I want to make a proper full backup of the system now that I've configured it how I like. This is the command I used to create a backup on a separate drive:
sudo rsync -aAXv / --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} "/media/backuppath/"
This is what my partitions look like for the drive I want to back up:
/dev/sda1 fat32 /boot/efi (primary)
/dev/sda2 extended
/dev/sda5 ext4 /
I am new to rsync, so it would be helpful to know if this backup command will take care of everything to restore an exact system backup: system settings, installed programs, etc. More importantly, what do I need to run to restore this if the time ever comes?
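For illustration, a minimal restore sketch, assuming you boot from a live USB, mount the backup drive at /media/backuppath and the target root partition (/dev/sda5 here) at /mnt:
# copy everything back, preserving ACLs and xattrs just as the backup did
sudo rsync -aAXv /media/backuppath/ /mnt/
# then recreate the excluded mount points (dev, proc, sys, tmp, run, mnt, media),
# chroot into /mnt and reinstall/update GRUB before rebooting
These paths are assumptions; the exact mount points depend on how the live system names your drives.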
tommythecat42
(21 rep)
Mar 3, 2022, 11:38 PM
• Last activity: Aug 6, 2025, 11:01 AM
2
votes
1
answers
55
views
How to allow rsync via ssh to a specific directory only
I want to allow moving files to a specific directory on my server using rsync + ssh.
However, I don't want to fully trust the users using that SSH user.
One solution I found is to set the user's shell to rssh, which can be configured to only allow sftp, rsync, etc. However, in this case the user would still be able to pull any readable files from the server, such as configurations in /etc, which I don't want.
I'm hesitant to go through my full directory structure and revoke access for "others".
Is there a way to allow a user to use rsync via ssh but only from / to a specific directory? I've seen that it seems possible to jail the SFTP access of openssh:
Match Group sftponly
ChrootDirectory %h
ForceCommand internal-sftp
AllowTcpForwarding no
X11Forwarding no
PasswordAuthentication no
However, I would prefer rsync, as this account is used to upload larger amounts of data and the internet connection is somewhat unstable (rural area with bad internet). rsync has proven very effective with its ability to resume interrupted uploads.
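A possible approach, sketched here with an assumed upload directory /srv/upload and assuming your rsync package ships the rrsync helper (its path varies by distro): pin the key in the server-side ~/.ssh/authorized_keys to rrsync, which confines rsync to one directory tree.
# server: ~/.ssh/authorized_keys for the upload account
command="/usr/bin/rrsync -wo /srv/upload",restrict ssh-ed25519 AAAA... uploader@client
# client: paths are then interpreted relative to /srv/upload only
rsync -av --partial bigfile.tar uploader@server:subdir/
The -wo (write-only) flag exists in newer rrsync versions; older ones only offer -ro. The restrict keyword needs OpenSSH 7.2 or later.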
GNA
(131 rep)
Aug 4, 2025, 02:23 PM
• Last activity: Aug 5, 2025, 05:20 AM
0
votes
2
answers
1875
views
AWS EC2 - How to rsync files between two remotes?
I'm setting up a crontab server to run several jobs to copy files from prod servers to lower environment servers.
I need the cron server job to copy files from one server to another. Here is what I have.
The IPs have been modified:
ssh -v -R localhost:50000:1.0.0.2:22 -i host1key.pem ec2-user@1.0.0.1 'rsync -e "ssh -i /home/ec2-user/host2key.pem -p 50000" -vuar /home/ec2-user/test.txt ec2-user@localhost:/home/ec2-user/test.txt'
I'm using two different pem keys and users. I would think this command would work, but I get this error in the debug log. Below is only the portion that is erroring: it connects to ec2-user@1.0.0.1 successfully, but errors on 1.0.0.2:
debug1: connect_next: host 1.0.0.2 ([1.0.0.2]:22) in progress, fd=7
debug1: channel 1: new [127.0.0.1]
debug1: confirm forwarded-tcpip
debug1: channel 1: connected to 1.0.0.2 port 22
Host key verification failed.
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: client_input_channel_req: channel 0 rtype eow@openssh.com reply 0
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]
debug1: channel 0: free: client-session, nchannels 2
debug1: channel 1: free: 127.0.0.1, nchannels 1
Transferred: sent 5296, received 4736 bytes, in 0.9 seconds
Bytes per second: sent 5901.2, received 5277.2
debug1: Exit status 12
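The failing step is the inner ssh hop: it connects to "localhost:50000", receives host 2's key, and finds nothing matching in host 1's known_hosts, hence "Host key verification failed". A hedged sketch of one common workaround (accept-new needs OpenSSH 7.6 or newer; IPs and paths as in the question):
ssh -v -R localhost:50000:1.0.0.2:22 -i host1key.pem ec2-user@1.0.0.1 \
  'rsync -e "ssh -i /home/ec2-user/host2key.pem -p 50000 -o StrictHostKeyChecking=accept-new" \
   -vuar /home/ec2-user/test.txt ec2-user@localhost:/home/ec2-user/test.txt'
Alternatively, log in to host 1 once and ssh to localhost port 50000 manually so the key lands in known_hosts.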
chdev77
(101 rep)
Jul 5, 2017, 10:55 PM
• Last activity: Aug 3, 2025, 06:10 AM
0
votes
2
answers
1932
views
SSH No route to host in local network after using rsync
So, this is my setup: I have computer A and a computer B, both with Ubuntu 20.01. Each computer has openssh-server working just fine. Yesterday, I used rsync to copy a large file from A to B, and it didn't seem to have any issue (it was the first time rsync was used). Today, I tried to connect via SSH from B to A and I had a "No route to host" error. Then I tried to connect via SSH from A to B and "No route to host" happened again. Then, on each computer I did a:
ssh user@127.0.0.1
and neither gave me any issue. Then, I did an ssh -T git@github.com on both computers and both were successful. Then, I did an nmap -Pn -p22 192.168.xx.yy on each computer, trying to connect to the other; the results are:
PORT STATE SERVICE
nmap tested in A with IP of A: 22/tcp filtered ssh
nmap tested in A with IP of B: 22/tcp open ssh
nmap tested in B with IP of A: 22/tcp filtered ssh
nmap tested in B with IP of B: 22/tcp open ssh
What really bugs me is that yesterday, before using rsync, the SSH connection was working just fine. The file was copied successfully, and both computers have been restarted since, so I don't know if some file got corrupted or something like that. I'm not even sure that rsync is what caused the issue. Just to be sure, on computer A I did a:
sudo lsof -i -P -n | grep 192.168
And the only IP address that I see is the one from A. Not sure if this might help, but I only used one rsync command, and it is the one that follows:
rsync -rvz -e 'ssh -p XXXX' --progress /PATH/TO/SOURCE/FILE user@192.168.xx.yy:/PATH/TO/DESTINATION/FILE
EDIT: I don't think the path is the issue, since I ran rsync from a directory in /home/user, but for full disclosure, the actual rsync command was:
rsync -rvz -e 'ssh -p 2222' --progress ./someDB.sql user@192.168.0.70:/home/user/DBs
And as for the absolute path of where I ran the command, it was:
/home/user/DB/
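Since nmap reports port 22 as "filtered" on both machines when probed from the other host, a host firewall is the usual suspect rather than sshd or rsync. A hedged checklist, assuming Ubuntu's default ufw is in use:
sudo ufw status verbose            # is 22/tcp (or the custom port 2222) allowed?
sudo iptables -L -n | grep -w 22   # any REJECT/DROP rules on the ssh port?
sudo ufw allow 2222/tcp            # re-allow the ssh port if the rule is missing
The custom port 2222 is taken from the rsync command above; adjust it if sshd listens elsewhere.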
DanielUPPA
(1 rep)
Jan 12, 2021, 09:25 PM
• Last activity: Jul 29, 2025, 05:00 PM
1
votes
1
answers
1886
views
Is there some program that can copy a sparse file (/var/log/lastlog) over ssh as fast as cp does locally?
I'm backing up my server via rsync over ssh, but the /var/log/lastlog file is 1.2G (it takes up only 24K on the hdd). On the local machine cp can copy it in no time (a few ms), but rsync requires reading the whole file, which takes hours. I also tried to mount the server's /var/log with sshfs on my local pc, but my local pc sees the file as 1.2T (so sshfs doesn't appear to detect sparse files).
Is there some program that detects sparse files over ssh and can copy them the same way cp does (without reading the empty blocks of the file)?
EDIT: rsync's -S/--sparse option still wants to read the whole source file (with all the empty bytes), which takes hours for a 1.2T file. After rsync reads the whole file it creates a small destination file (a proper sparse file), but the problem is that it reads the source file with all its empty bytes (without skipping them). cp copies the file in a few ms, while rsync takes hours. You can try it (on Linux) by creating a 20G sparse file with truncate -s 20G sparse_file1, copying it with rsync -S sparse_file1 sparse_file2 (takes a long time), and then copying it with cp sparse_file1 sparse_file3 (takes a few ms).
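One workaround that is often suggested (a sketch, with an assumed destination path): recent GNU tar detects holes via SEEK_HOLE/SEEK_DATA, so with --sparse it only reads the allocated blocks and recreates the holes on the receiving side.
ssh server 'tar -C /var/log -cSf - lastlog' | tar -C /backup/var/log -xf -
Older tar versions fall back to scanning the file for zero blocks, which would reintroduce the slow full read.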
FieryRider
(135 rep)
Mar 21, 2019, 06:07 AM
• Last activity: Jul 27, 2025, 03:02 PM
40
votes
8
answers
132431
views
Parallelise rsync using GNU Parallel
I have been using an rsync script to synchronize data on one host with the data on another host. The data consists of numerous small files that add up to almost 1.2TB.
In order to sync those files, I have been using the rsync command as follows:
rsync -avzm --stats --human-readable --include-from proj.lst /data/projects REMOTEHOST:/data/
The contents of proj.lst are as follows:
+ proj1
+ proj1/*
+ proj1/*/*
+ proj1/*/*/*.tar
+ proj1/*/*/*.pdf
+ proj2
+ proj2/*
+ proj2/*/*
+ proj2/*/*/*.tar
+ proj2/*/*/*.pdf
...
...
...
- *
As a test, I picked up two of those projects (8.5GB of data) and executed the command above. Being a sequential process, it took 14 minutes and 58 seconds to complete. So, for 1.2TB of data, it would take several hours.
If I could run multiple rsync processes in parallel (using &, xargs or parallel), it would save me time.
I tried the command below with parallel (after cd'ing into the source directory) and it took 12 minutes 37 seconds to execute:
parallel --will-cite -j 5 rsync -avzm --stats --human-readable {} REMOTEHOST:/data/ ::: .
This should have taken roughly a fifth of the time, but it didn't, so I must be going wrong somewhere.
How can I run multiple rsync processes in order to reduce the execution time?
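One likely reason the parallel attempt changed little: ::: . supplies a single argument, so only one rsync job ever runs. A minimal sketch of splitting the work per project directory instead (it ignores the proj.lst filters for brevity, so treat it as an illustration rather than a drop-in replacement):
cd /data/projects
ls -d proj* | parallel --will-cite -j 5 \
  rsync -az --stats {}/ REMOTEHOST:/data/projects/{}/
Each job then syncs one project subtree, so up to five transfers run concurrently.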
Mandar Shinde
(3374 rep)
Mar 13, 2015, 06:51 AM
• Last activity: Jul 26, 2025, 08:00 PM
17
votes
3
answers
26067
views
rsync using part of a relative path
Suppose I have a directory on a local machine, behind a firewall:
local:/home/meee/workdir/
And a directory on a remote machine, on the other side of the firewall:
remote:/a1/a2/.../aN/one/two/
remote:/a1/a2/.../aN/one/dont-copy-me{1,2,3,...}/
...such that N >= 0.
My local machine has a script that uses rsync. I want this script to copy only one/two/ from the remote machine, for a variable-but-known N, such that I end up with:
local:/home/meee/workdir/one/two/
If I use rsync remote:/a1/a2/.../aN/one/two/ ~/workdir/, I end up with:
local:/home/meee/workdir/two/
If I use rsync --relative remote:/a1/a2/.../aN/one/two/ ~/workdir/, I end up with:
local:/home/meee/workdir/a1/a2/.../aN/one/two/
Neither one of these is what I want.
1. Are there rsync flags which can achieve the desired result?
2. If not, can anyone think of a straightforward solution?
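A hedged sketch of the usual trick: with --relative, a "/./" marker in the source path tells rsync where the reproduced path should start, so everything to its left is dropped (the "..." stands for the same elided directories as above):
rsync -a --relative remote:/a1/a2/.../aN/./one/two/ ~/workdir/
If that behaves as documented, the result is local:/home/meee/workdir/one/two/ regardless of N.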
Lagrangian
(437 rep)
Nov 5, 2016, 06:23 AM
• Last activity: Jul 23, 2025, 04:36 AM
13
votes
3
answers
35910
views
How to see the total progress while copying the files
We know that if we give the --progress parameter to rsync, it will show the progress of the files being copied. But the issue is that it shows the progress for each individual file, not the total or overall progress.
So how do we see the total progress of the files being copied?
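A minimal sketch, assuming rsync 3.1 or newer and hypothetical src/ and dest/ paths: --info=progress2 replaces the per-file output with one overall figure, and --no-inc-recursive builds the complete file list up front so the percentage is meaningful from the start.
rsync -a --info=progress2 --no-inc-recursive src/ dest/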
OmiPenguin
(4398 rep)
Mar 20, 2013, 05:21 PM
• Last activity: Jul 21, 2025, 03:34 AM
129
votes
7
answers
200091
views
rsync compare directories?
Is it possible to compare two directories with rsync and only print the differences? There's a dry-run option, but when I increase verbosity to a certain level, every file compared is shown.
ls -alR and diff are not an option here, since there are hardlinks in the source making every line differ. (Of course, I could delete this column with perl.)
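A minimal sketch, with dirA/ and dirB/ as placeholder paths: combining a dry run with itemized output prints one line per difference and stays silent about identical files.
rsync -rin --delete dirA/ dirB/
# -n = dry run, -i = itemize changes, -r = recurse; --delete also lists files
# that exist only in dirB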
chris
(1785 rep)
Dec 1, 2012, 11:18 AM
• Last activity: Jul 20, 2025, 07:51 AM
7
votes
1
answers
5653
views
rsync_xal_set: lremovexattr("/my/path/file.zPXUj1","security.selinux") failed: Permission denied (13)
I am currently migrating from Ubuntu 20.04 to Fedora 34. The following backup script has worked fine so far:
# source is the remote NAS (via ssh); "$TARGET" is an ext. USB disk on the Fedora desktop
rsync \
-avixXEH \
--stats \
--delete \
--numeric-ids \
--log-file="$LOG_FILE" \
--link-dest "$LATEST" \
--exclude '/some/exclude' \
admin@nas:/{a,b,c} \
"$TARGET"
Unfortunately on Fedora, every copied path now results in a warning, polluting the log:
> rsync_xal_set: lremovexattr("/my/path/file.zPXUj1","security.selinux") failed: Permission denied (13)
## Research
This seems to be an issue with rsync wanting to preserve/erase extended attributes (-X) and SELinux.
Recent quote from Michal Ruprich, Red Hat:
> This was 'fixed' in RHEL5 by suppressing the error message so that it does not disrupt running systems. [...]
>
> "rsync-2.6 does not remove extended attribute of target file in the case that this attribute has been erased in the source file. Lets call it bug.
>
> rsync-3.0 correctly tries to remove erased extended attributes.
>
> If the selinux is present on the target system, rsync can't erase security context of file and it outputs mentioned error. The behaviour of 2.6 and 3.0 is therefore identical except the informational error message."
Using rsync 3.2.3 with a non-SELinux source, my interpretation is as follows - please correct me otherwise:
Copying files from a source without SELinux to a target using this security feature is interpreted as deleting the extended "security.selinux" file attribute, and rsync cannot remove it due to SELinux security restrictions on the target.
Which raises the question:
## How to suppress these warnings?
I still would like to copy extended attributes with -X and *not* temporarily disable SELinux entirely, as suggested here. I also stumbled over an alternative that suggests setsebool -P rsync_full_access 1 - I'm not sure what that does exactly.
It really would be nice to solve the problem at its root, only for this particular case: given the USB disk mount point /run/media/user/, is there some way to grant the necessary permissions in SELinux just for this path, or similar?
Thanks in advance
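One possibility worth testing (a hedged sketch only - the x filter modifier is documented for xattr names, but I have not verified that it silences this exact warning): keep -X but exclude the SELinux attribute from xattr handling, so rsync never tries to remove it on the target.
rsync -avixXEH --filter='-x security.selinux' --stats --delete --numeric-ids \
  admin@nas:/{a,b,c} "$TARGET"
The remaining options from the script above are omitted here for brevity.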
grisha
(71 rep)
May 4, 2021, 07:09 PM
• Last activity: Jul 16, 2025, 11:06 PM
3
votes
1
answers
2333
views
"ndd" equivalent of "ethtool" on Solaris
I have to restore a large file from a NAS backup on Solaris 10 ZFS. I'm using the following command:
rsync -av user@xxx.xxx.xxx.xxx:from/NAS/files/system to/solaris/files/system
And I've got this error:
Disconnecting: Corrupted MAC on input.
rsync: connection unexpectedly closed (3778664937 bytes received so far) [receiver]
rsync: [generator] write error: Broken pipe (32)
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [receiver=3.1.0]
rsync error: error in socket IO (code 10) at io.c(837) [generator=3.1.0
rsync Disconnecting: Corrupted MAC on input.
After a little research the solution should be:
ethtool -K eth0 tx off rx off
As the ethtool command doesn't exist on Solaris, I should use the ndd utility instead. I didn't find any good explanation, and the man page is sparse, on how to get the equivalent of the command line above. Maybe I'm missing something.
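A hedged sketch only, since the right knob depends on the NIC driver (e1000g0 below is an assumed interface name): ndd can at least list what the driver exposes, and offload settings, where they exist, are often driver .conf entries rather than ndd parameters.
ndd /dev/e1000g0 \?              # list the parameters this driver supports
ndd -get /dev/e1000g0 link_status
Note that "Corrupted MAC on input" is reported by ssh (MAC = message authentication code), so packet corruption in transit - which checksum offload bugs can cause - is one plausible culprit, not the only one.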
dubis
(1480 rep)
Jun 28, 2016, 07:28 AM
• Last activity: Jul 14, 2025, 12:03 AM
1
votes
2
answers
2091
views
How to sync two disks - continually?
I have two HDD drives on two different servers that I need to sync regularly. Until now, I've been using rsync over sshfs, first from one to the other, then from the other to the first, but this method is proving unsatisfactory. When a file is removed from drive A, the removal is not propagated to drive B; instead, the file is copied back from drive B to drive A. So I can never remove anything unless I do it on both drives. The same goes for renaming: renaming a directory on one drive soon results in two directories with different names but the same contents.
Ideally I need something that keeps track of changes on one drive and then duplicates these changes on the other. Any suggestions?
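A commonly suggested tool for this (a sketch with made-up paths, assuming unison is installed in compatible versions on both servers): unison does true two-way synchronisation and keeps state between runs, so deletions and renames propagate instead of being undone.
unison /mnt/driveA ssh://serverB//mnt/driveB -batch -auto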
OZ1SEJ
(239 rep)
Jul 1, 2021, 12:09 PM
• Last activity: Jul 13, 2025, 04:03 PM
0
votes
2
answers
1953
views
zip then transfer files from one server to another using scp or rsync
I have an estimated 44 GB of data on my web server. I want to transfer it to another server in as little time as possible. I am using PuTTY to transfer files. Is there any way to achieve this? I don't know which commands to use, but some blogs said to use rsync or scp to transfer these files. Your help is greatly appreciated. I've tried scp from local to server, but what I need is server to server.
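A minimal sketch, with /var/www/ and the hostname as placeholders: run rsync on the source server and push directly to the destination over ssh. The -z flag compresses in transit, so zipping first is usually unnecessary, and --partial lets an interrupted transfer resume.
rsync -az --partial --info=progress2 /var/www/ user@dest-server:/var/www/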
Peter Eris
(1 rep)
Aug 10, 2021, 07:17 AM
• Last activity: Jul 12, 2025, 06:05 AM
1
votes
0
answers
41
views
Rsync and scp do not work when system is the remote connection
This is a pretty weird problem.
I have a desktop running openSuse Tumbleweed (192.168.0.YY), and on that machine, I can send and pull files from other computers on my network without any problems.
If I try to use rsync (or scp for that matter) from any other machine to push or pull files to or from this particular computer, rsync hangs after the login prompt.
This is the command I'm using from the other computer(s):
> rsync -avvv 192.168.0.YY:/home/USER/test ./
opening connection using: ssh 192.168.0.YY rsync --server --sender -vvvlogDtpre.iLsfxCIvu . /home/USER/test (8 args)
USER@192.168.0.YY's password:
It will just sit there indefinitely.
If however, I'm logged into the problem computer and try to rsync with any other computer on the network as the remote, then it works without issue.
I can ssh into this machine without any problems from anywhere else on the network. And journalctl indicates that the ssh logins from the other computers when I run rsync are successful, because when I interrupt the command it shows a disconnect entry for sshd:
Received disconnect from 192.168.0.XX port 57852:11: disconnected by user
I've tried scp, which has the same behavior.
Has anyone else ever seen something like this? Which other logs should I check to see if I can find any error messages?
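A hedged debugging sketch, run from one of the other machines (the assumption being that the hang is on the openSUSE box's side): check that its non-interactive shell finds rsync and produces no stray output, since stray output or an interactive prompt in the remote shell's startup files can break or stall the rsync protocol.
ssh 192.168.0.YY 'command -v rsync && rsync --version | head -1'
ssh 192.168.0.YY /bin/true   # should return immediately and print nothing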
Michael Williamson
(11 rep)
Jul 10, 2025, 08:43 AM
0
votes
0
answers
88
views
How to speed up rsync to USB drive
Both systems are Arch Linux with the latest rsync, connected via 1Gbit/s ethernet. Still, rsyncing to a USB drive is slow: just 1-2MB/s.
rsync -Pavh --stats --rsh="ssh -T -c aes128-gcm@openssh.com -o Compression=no" ~/Downloads/googleTakeout/takeout-202* plex:/media/usb-4tb/backup/google
sending incremental file list
takeout-20250428T071344Z-040.zip
2.14G 100% 2.82MB/s 0:12:04 (xfr#1, to-chk=21/61)
takeout-20250428T071344Z-041.zip
2.15G 100% 1.24MB/s 0:27:30 (xfr#2, to-chk=20/61)
takeout-20250428T071344Z-042.zip
2.15G 100% 1.31MB/s 0:26:04 (xfr#3, to-chk=19/61)
takeout-20250428T071344Z-043.zip
2.15G 100% 1.02MB/s 0:33:28 (xfr#4, to-chk=18/61)
I tested the USB speed directly and it's much higher:
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 14.0448 s, 76.5 MB/s
sudo hdparm -Tt /dev/sdc
/dev/sdc:
Timing cached reads: 6186 MB in 2.00 seconds = 3097.42 MB/sec
Timing buffered disk reads: 28 MB in 3.03 seconds = 9.24 MB/sec
I tried the speed between the endpoints:
yes | pv | ssh plex "cat > /dev/null"
944MiB 0:00:10 26.8MiB/s
yes | pv | ssh plex "cat > /media/usb-4tb"
1.06MiB 0:00:00 6.99MiB/s
scp is much faster, but it can't do partial copies the way rsync can:
scp Downloads/googleTakeout/takeout-20250428T071344Z-001.zip plex:/media/usb-4tb/backup/google/takeout-20250428T071344Z-001.zip
takeout-20250428T071344Z-001.zip 13% 275MB 28.7MB/s 01:01 ETA
Any ideas on how to fix/improve the rsync speed?
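A hedged sketch of two things commonly tried in this situation, reusing the paths from the question: -W skips the delta algorithm (pointless for freshly created .zip archives) and --inplace writes straight to the final file instead of a temporary copy that is renamed afterwards.
rsync -Pah -W --inplace --stats \
  ~/Downloads/googleTakeout/takeout-202* plex:/media/usb-4tb/backup/google
Given that hdparm shows only ~9 MB/s buffered reads on the USB disk itself, the drive or its connection may be the real ceiling.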
michalzuber
(211 rep)
Apr 30, 2025, 04:40 AM
• Last activity: Jul 9, 2025, 03:14 PM
3
votes
2
answers
2248
views
Using rsync to preserve permissions only
I'm doing a NAS data migration from a Celerra NS960 to a Unity 500. I have an SMB/CIFS file system that I synced using EMCOpy in a Windows environment. It's also an NFS (multiprotocol) file system. I have both file systems mounted on a Solaris 10 UNIX server. Can I just rsync the permissions only from the NS960 to the Unity, and not have all the data copied again?
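A hedged sketch with made-up mount points: with --size-only, rsync treats same-size files as already transferred, so no data is copied, but it still adjusts permissions, owner and group on files where they differ. A dry run with -in first shows exactly which attributes would change.
rsync -r -pgo --size-only -in /mnt/ns960/fs/ /mnt/unity/fs/   # preview (dry run, itemized)
rsync -r -pgo --size-only     /mnt/ns960/fs/ /mnt/unity/fs/
Whether ACLs (-A) can be carried over as well depends on how rsync was built on the Solaris host.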
steven qualls
(31 rep)
Oct 11, 2018, 08:39 PM
• Last activity: Jul 9, 2025, 10:01 AM
6
votes
1
answers
450
views
`rsync`: trailing slash vs wildcard
Suppose I want to sync the contents of a directory (source_dir) using rsync without creating a directory named source_dir in the target directory.
I can do this using rsync source_dir/ target_dir or rsync source_dir/* target_dir.
Is the following correct?
The former will sync everything, including hidden files and source_dir/. itself.
With the latter, the shell will expand the * to every non-hidden file in source_dir, and thus every non-hidden file in source_dir will be synced.
(I'm asking mainly to check my understanding, but the XY problem behind this is that rsync -t source_dir/ target_dir tries to set the time and I want to avoid that without using -O.)
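A quick self-contained way to see the difference (a sketch using throwaway directories):
mkdir -p src dst1 dst2
touch src/visible src/.hidden
rsync -a src/ dst1/    # copies .hidden too, and applies src's own attributes to dst1
rsync -a src/* dst2/   # the shell expands *, so .hidden never reaches rsync
ls -la dst1 dst2
With default shell settings, * does not match dotfiles, which is exactly the behaviour described above.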
jdoe
(63 rep)
Jul 7, 2025, 06:12 AM
• Last activity: Jul 7, 2025, 09:17 PM
2
votes
2
answers
1920
views
Run rsync only if target directory exists?
I have a secondary backup drive that is usually stored offsite, but sometimes mounted. I'd like to put something in a crontab that automatically clones my backup drive to the secondary backup if it's mounted. I know that I could do something like:
if [ -d "$target_dir" ]; then rsync -a --delete "$src_dir" "$target_dir"; fi
but I'm wondering if there's a way to ask rsync the same thing, without resorting to a shell script? Given that it has 6.02*10^23 command line options, you'd think so...
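As far as I know rsync has no option that refuses to run when a local destination is missing, but since the secondary backup is a mount point, a one-liner that checks the mount rather than the directory avoids filling an empty mount point when the drive is absent (the path is a made-up example; mountpoint comes from util-linux):
mountpoint -q /mnt/offsite && rsync -a --delete "$src_dir" /mnt/offsite/backup/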
Scott Deerwester
(411 rep)
Nov 2, 2018, 04:47 PM
• Last activity: Jul 6, 2025, 07:34 AM
3
votes
1
answers
2294
views
Wildcards in exclude-filelist for duplicity
I am trying to exclude a "bulk" folder in each home directory from the backup. For this purpose, I have a line
- /data/home/*/bulk
in my exclude-filelist file.
However, this doesn't seem to be recognised:
Warning: file specification '/data/home/*/bulk' in filelist exclude-list-test.txt
doesn't start with correct prefix /data/home/kay/bulk. Ignoring.
Is there a way?
BTW: is the format in general compatible with rsync's exclude-from? I have a working exclude list for that, where this wildcard expression works.
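One hedged alternative, in case the filelist prefix check is the obstacle: pass the pattern directly on the command line, since duplicity's --exclude accepts shell-style globs (the target URL below is made up).
duplicity --exclude '/data/home/*/bulk' /data/home sftp://backup@host//backups/home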
mcandril
(273 rep)
Oct 6, 2014, 11:30 AM
• Last activity: Jun 29, 2025, 09:00 PM
0
votes
1
answers
58
views
Some kind of rsync --metadatasnapshot?
Is there **an option in rsync to fetch all its metadata** about the destination tree, or a --link-dest tree, **from a cache file** instead of a full traversal of the filesystem?
##### Use case
I currently do daily backups of either 2 TB from a dedicated server, or from home PCs or smartphones, to a (slow) home disk plugged into a Raspberry Pi 3; or the other way round (home PCs → dedicated server).
I use a rotating destination, so in essence my rsync command is:
rsync -az --delete --link-dest=../. :/ /backup/. \
&& mv /backup/. /backup/.
In between, my backup directories are read-only, so when I reuse /backup/. or /backup/. I know **their contents have not changed since they were created by rsync**.
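For illustration, a sketch of this kind of rotating --link-dest setup, with made-up names (previous snapshot 2024-06-01, new snapshot 2024-06-02, staging directory incoming):
rsync -az --delete --link-dest=/backup/2024-06-01/. server:/ /backup/incoming/. \
  && mv /backup/incoming /backup/2024-06-02
Unchanged files become hard links into the previous snapshot, and the rename only happens if the rsync run succeeds.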
##### Level 1: Read & dump
So I would like to **speed up the transfer by skipping the "building file list" phase** of rsync on the destination side (I'm not interested in the source, as 1. those are moving targets (else I wouldn't back them up!) and 2. they mostly live on SSDs).
Of course I take responsibility for not modifying the backup, so that the rsync-managed metadata dump stays in sync; otherwise I'm OK with rsync giving up.
I imagine such an option (or a manual wrapper around rsync?) would:
* if it detects the dump (and the dump has all the info needed by this run of rsync: checksums for -c, and so on): read it and send it instead of doing a full traversal
* just before the first write to the dest (so _not_ in -n mode, nor on --link-dest): truncate or unlink that dump, because it is no longer guaranteed to be in sync with the tree
* at the end, dump the metadata corresponding to the data synced into the dest (probably written incrementally to a temp file during the transfer, so it's just a rename then)
##### Level 2: no, really, do trust me
OK, I said I never modify the destinations.
… Well, now let's amend that: I never modify the dest, _except that I sometimes replace a file with a hardlink to a static file from another backup_, of course after ensuring they have the same checksum (and I don't mind the other attributes changing: r-- vs rw-, and so on); that happens when I detect that the same 300 MB video has been backed up twice, once from the smartphone that captured it, and once from the family PC it was copied to.
In this case I would be glad for the dump to _trust me that the file in the tree is as good as what the metadata cache says_.
Of course I'd like it not to stumble over this (it should not have stored inodes, or store them but fall back to a conventional path traversal in case the original file doesn't exist anymore), but it shouldn't even try to access the filesystem to "validate" the dump.
Guillaume Outters
(109 rep)
May 12, 2025, 03:19 AM
• Last activity: Jun 26, 2025, 12:23 PM
Showing page 1 of 20 total questions