
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

1 vote
0 answers
58 views
How to get `rsync --link-dest=` hard-link moved/renamed files
I am trying to set up rsnapshot for backing up a remote server. However, I realize that my issue is with rsync (rsnapshot's back-end), not with rsnapshot itself. Thus I am focusing the question on rsync. My goal is to make periodic snapshots of a remote server, while using hard links to avoid duplicating unchanged files on disk. The process is very nicely explained in the blog post Easy Automated Snapshot-Style Backups with Linux and Rsync. As long as the original files are not renamed or moved around, everything works as expected:
# Build the original hierarchy we want to track.
mkdir original
echo "Hello, World!" > original/file1

# Make two snapshots.
rsync -a original/ snapshot1/
rsync -a --link-dest=../snapshot1 original/ snapshot2/

# Check the inode numbers.
ls -i1 snapshot{1,2}/*
The last command above shows the two snapshot copies of file1 share the same inode number. So far so good. The problem is that at some point I may need to rename/reorganize the files within the original hierarchy. Then, the hard-linking fails:
# Rename a file.
mv original/file1 original/renamed1

# Make a third snapshot.
rsync -a --link-dest=../snapshot2 --fuzzy --fuzzy original/ snapshot3/

# Check the inode numbers.
ls -i1 snapshot{1,2,3}/*
The test above shows snapshot3/renamed1 has a different inode number: it is a fresh copy. I expected the repeated --fuzzy option to hard-link the file despite its changed name, as per the manual:
--fuzzy, -y
        This option tells rsync that it should look for a basis file for
        any  destination  file  that  is missing.  The current algorithm
        looks in the same directory as the destination file for either a
        file  that  has  an  identical  size  and  modified-time,  or  a
        similarly-named file.  If found, rsync uses the fuzzy basis file
        to try to speed up the transfer.

        If  the  option is repeated, the fuzzy scan will also be done in
        any  matching  alternate  destination   directories   that   are
        specified via --compare-dest, --copy-dest, or --link-dest.
Note: as per this answer, I tried replacing original/ with localhost:$PWD/original/ in the rsync command line. It made no difference. Why does rsync fail to hard-link here? Is there a way to convince it to do it? If not, any suggested workaround?

**Edit**: As suggested by @meuh, I tried adding the option --debug=FUZZY2. It printed the messages:
fuzzy size/modtime match for ../snapshot2/file1
fuzzy basis selected for renamed1: ../snapshot2/file1
I then tried syncing a larger file (a ~15 MB copy of the Linux kernel) through ssh (rsync source = localhost:$PWD/original/) with the options -vvv --debug=FUZZY2. This gave the same messages as above, with many more messages asserting hash matches and, at the end:
total: matches=3868  hash_hits=3868  false_alarms=0 data=0
Here is the (almost) complete rsync debug output:
opening connection using: ssh localhost rsync --server --sender -vvvlogDtpre.iLsfxCIvu . /home/edgar/tmp/rsnapshot/original/  (8 args)
receiving incremental file list
server_sender starting pid=42683
[sender] make_file(.,*,0)
[sender] pushing local filters for /home/edgar/tmp/rsnapshot/original/
[sender] make_file(renamed1,*,2)
send_file_list done
send_files starting
recv_file_name(.)
recv_file_name(renamed1)
received 2 names
recv_file_list done
get_local_name count=2 snapshot3/
created directory snapshot3
generator starting pid=42643
delta-transmission enabled
recv_generator(.,0)
./ is uptodate
recv_files(2) starting
set modtime, atime of . to (1743753557) 2025/04/04 09:59:17, (1743753866) 2025/04/04 10:04:26
recv_generator(.,1)
recv_generator(renamed1,2)
[generator] make_file(../snapshot2/file1,*,1)
fuzzy size/modtime match for ../snapshot2/file1
fuzzy basis selected for renamed1: ../snapshot2/file1
generating and sending sums for 2
count=3868 rem=2560 blength=3864 s2length=2 flength=14944648
generate_files phase=1
send_files(2, /home/edgar/tmp/rsnapshot/original/renamed1)
send_files mapped /home/edgar/tmp/rsnapshot/original/renamed1 of size 14944648
calling match_sums /home/edgar/tmp/rsnapshot/original/renamed1
built hash table
hash search b=3864 len=14944648
match at 0 last_match=0 j=0 len=3864 n=0
match at 3864 last_match=3864 j=1 len=3864 n=0
match at 7728 last_match=7728 j=2 len=3864 n=0
[...snip many lines, identical but for the numbers...]
match at 14934360 last_match=14934360 j=3865 len=3864 n=0
match at 14938224 last_match=14938224 j=3866 len=3864 n=0
match at 14942088 last_match=14942088 j=3867 len=2560 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=3868 matches=3868
sender finished /home/edgar/tmp/rsnapshot/original/renamed1
recv_files(renamed1)
renamed1
recv mapped ../snapshot2/file1 of size 14944648
got file_sum
set modtime, atime of .renamed1.l4pR7E to (1743753557) 2025/04/04 09:59:17, (1743753866) 2025/04/04 10:04:26
renaming .renamed1.l4pR7E to renamed1
set modtime, atime of . to (1743753557) 2025/04/04 09:59:17, (1743753866) 2025/04/04 10:04:26
send_files phase=1
recv_files phase=1
generate_files phase=2
send_files phase=2
send files finished
total: matches=3868  hash_hits=3868  false_alarms=0 data=0
recv_files phase=2
recv_files finished
generate_files phase=3
generate_files finished

sent 23,258 bytes  received 249,330 bytes  545,176.00 bytes/sec
total size is 14,944,648  speedup is 54.83
client_run2 waiting on 42644
[generator] _exit_cleanup(code=0, file=main.c, line=1865): about to call exit(0)
It looks to me like rsync did notice that original/renamed1 was identical to snapshot2/file1, and used this fact to speed up the transfer. But it is still unclear to me why it chose to copy snapshot2/file1 (maybe chunk by chunk) instead of hard-linking it.
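For illustration only (not part of the original question), a minimal post-processing sketch of the kind of workaround sometimes used when --link-dest cannot hard-link renamed files: deduplicate the finished snapshots afterwards with a content-based hard-linking tool such as rdfind (assuming rdfind is installed; the paths are the ones from the example above).
# Hypothetical workaround sketch: after the rsync run, hard-link identical
# files across the two snapshots. rdfind matches on content, so the rename
# does not matter.
rdfind -makehardlinks true snapshot2/ snapshot3/

# Verify: both copies should now share one inode.
ls -i1 snapshot2/file1 snapshot3/renamed1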
Edgar Bonet (221 rep)
Apr 3, 2025, 02:20 PM • Last activity: Apr 4, 2025, 08:25 AM
0 votes
1 answer
38 views
Rsnapshot backup - how does it handle data?
I've set up my backup using rsnapshot. After 3 days I have this:
196 GB  /backup

  3 GB  /backup/daily.0
191 GB  /backup/daily.1
  2 GB  /backup/daily.2
Why is the 2nd backup the one with the most data (191 GB)? My expectation was that the first day's backup would be huge, but then every subsequent backup would be small, as only diffs are saved. How does rsnapshot handle incremental backup data?
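For illustration (not part of the original question): GNU du counts each hard-linked file only once per invocation, charging it to the first directory in which it is encountered, so the per-directory numbers above depend on the argument order rather than on what each snapshot "contains".
# Each unchanged file is one inode shared by all snapshots; du attributes
# it to whichever directory it scans first.
du -sh /backup/daily.0 /backup/daily.1 /backup/daily.2
# Reversing the order moves the large total to daily.2 instead.
du -sh /backup/daily.2 /backup/daily.1 /backup/daily.0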
Danijel (186 rep)
Feb 24, 2025, 08:43 AM • Last activity: Feb 24, 2025, 02:14 PM
1 vote
1 answer
129 views
When i try to backup remote server with rsnapshot it errors out with 255 code
Every time I try running sudo rsnapshot -v alpha I get this type of error (it errors for every backup entry I have):
ERROR: /usr/bin/rsync returned 255 while processing root@151.131.222.222:/etc/
    /usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
        --rsh=/usr/bin/ssh -i /home/user/ssh/id_ed25519 \
        root@151.131.222.222:/usr/share/ \
        /var/cache/rsnapshot/alpha.0/server_backup/
1. Yes, rsync is installed on the server and on this machine.
2. Yes, rsync works if I try to manually copy some files with these root credentials from the remote.
3. There is one thing that could potentially be it. When I tried to run the failed command manually, it demanded quotes around the arguments for the rsh option; otherwise it would throw a syntax error. But I am not sure how to force rsnapshot to do that. And if I run the failed command with quotes around the rsh key, it errors with code 255 as well (see the test sketch at the end of this question).
4. The firewall does not block ssh.
5. The server allows only public key authentication.
6. I host my Ubuntu server on Vultr.

Here is my rsnapshot.conf file:
#################################################
    # rsnapshot.conf - rsnapshot configuration file #
    #################################################
    #                                               #
    # PLEASE BE AWARE OF THE FOLLOWING RULE:        #
    #                                               #
    # This file requires tabs between elements      #
    #                                               #
    #################################################

    #######################
    # CONFIG FILE VERSION #
    #######################

    config_version	1.2

    ###########################
    # SNAPSHOT ROOT DIRECTORY #
    ###########################

    # All snapshots will be stored under this root directory.
    #
    snapshot_root	/var/cache/rsnapshot/

    # If no_create_root is enabled, rsnapshot will not automatically create the
    # snapshot_root directory. This is particularly useful if you are backing
    # up to removable media, such as a FireWire or USB drive.
    #
    #no_create_root	1

    #################################
    # EXTERNAL PROGRAM DEPENDENCIES #
    #################################

    # LINUX USERS:   Be sure to uncomment "cmd_cp". This gives you extra features.
    # EVERYONE ELSE: Leave "cmd_cp" commented out for compatibility.
    #
    # See the README file or the man page for more details.
    #
    cmd_cp		/bin/cp

    # uncomment this to use the rm program instead of the built-in perl routine.
    #
    cmd_rm		/bin/rm

    # rsync must be enabled for anything to work. This is the only command that
    # must be enabled.
    #
    cmd_rsync	/usr/bin/rsync

    # Uncomment this to enable remote ssh backups over rsync.
    #
    cmd_ssh	/usr/bin/ssh

    # Comment this out to disable syslog support.
    #
    cmd_logger	/usr/bin/logger

    # Uncomment this to specify the path to "du" for disk usage checks.
    # If you have an older version of "du", you may also want to check the
    # "du_args" parameter below.
    #
    #cmd_du		/usr/bin/du

    # Uncomment this to specify the path to rsnapshot-diff.
    #
    #cmd_rsnapshot_diff	/usr/bin/rsnapshot-diff

    # Specify the path to a script (and any optional arguments) to run right
    # before rsnapshot syncs files
    #
    #cmd_preexec	/path/to/preexec/script

    # Specify the path to a script (and any optional arguments) to run right
    # after rsnapshot syncs files
    #
    #cmd_postexec	/path/to/postexec/script

    # Paths to lvcreate, lvremove, mount and umount commands, for use with
    # Linux LVMs.
    #
    #linux_lvm_cmd_lvcreate	/sbin/lvcreate
    #linux_lvm_cmd_lvremove	/sbin/lvremove
    #linux_lvm_cmd_mount	/bin/mount
    #linux_lvm_cmd_umount	/bin/umount

    #########################################
    #     BACKUP LEVELS / INTERVALS         #
    # Must be unique and in ascending order #
    # e.g. alpha, beta, gamma, etc.         #
    #########################################

    retain	alpha	6
    retain	beta	7
    retain	gamma	4
    #retain	delta	3

    ############################################
    #              GLOBAL OPTIONS              #
    # All are optional, with sensible defaults #
    ############################################

    # Verbose level, 1 through 5.
    # 1     Quiet           Print fatal errors only
    # 2     Default         Print errors and warnings only
    # 3     Verbose         Show equivalent shell commands being executed
    # 4     Extra Verbose   Show extra verbose information
    # 5     Debug mode      Everything
    #
    verbose		2

    # Same as "verbose" above, but controls the amount of data sent to the
    # logfile, if one is being used. The default is 3.
    # If you want the rsync output, you have to set it to 4
    #
    loglevel	3

    # If you enable this, data will be written to the file you specify. The
    # amount of data written is controlled by the "loglevel" parameter.
    #
    logfile	/var/log/rsnapshot.log

    # If enabled, rsnapshot will write a lockfile to prevent two instances
    # from running simultaneously (and messing up the snapshot_root).
    # If you enable this, make sure the lockfile directory is not world
    # writable. Otherwise anyone can prevent the program from running.
    #
    lockfile	/var/run/rsnapshot.pid

    # By default, rsnapshot check lockfile, check if PID is running
    # and if not, consider lockfile as stale, then start
    # Enabling this stop rsnapshot if PID in lockfile is not running
    #
    #stop_on_stale_lockfile		0

    # Default rsync args. All rsync commands have at least these options set.
    #
    #rsync_short_args	-a
    #rsync_long_args	--delete --numeric-ids --relative --delete-excluded

    # ssh has no args passed by default, but you can specify some here.
    #
    ssh_args	-i /home/user/ssh/id_ed25519

    # Default arguments for the "du" program (for disk space reporting).
    # The GNU version of "du" is preferred. See the man page for more details.
    # If your version of "du" doesn't support the -h flag, try -k flag instead.
    #
    #du_args	-csh

    # If this is enabled, rsync won't span filesystem partitions within a
    # backup point. This essentially passes the -x option to rsync.
    # The default is 0 (off).
    #
    #one_fs		0

    # The include and exclude parameters, if enabled, simply get passed directly
    # to rsync. If you have multiple include/exclude patterns, put each one on a
    # separate line. Please look up the --include and --exclude options in the
    # rsync man page for more details on how to specify file name patterns.
    #
    #include	???
    #include	???
    #exclude	???
    #exclude	???

    # The include_file and exclude_file parameters, if enabled, simply get
    # passed directly to rsync. Please look up the --include-from and
    # --exclude-from options in the rsync man page for more details.
    #
    #include_file	/path/to/include/file
    #exclude_file	/path/to/exclude/file

    # If your version of rsync supports --link-dest, consider enabling this.
    # This is the best way to support special files (FIFOs, etc) cross-platform.
    # The default is 0 (off).
    #
    #link_dest	0

    # When sync_first is enabled, it changes the default behaviour of rsnapshot.
    # Normally, when rsnapshot is called with its lowest interval
    # (i.e.: "rsnapshot alpha"), it will sync files AND rotate the lowest
    # intervals. With sync_first enabled, "rsnapshot sync" handles the file sync,
    # and all interval calls simply rotate files. See the man page for more
    # details. The default is 0 (off).
    #
    #sync_first	0

    # If enabled, rsnapshot will move the oldest directory for each interval
    # to [interval_name].delete, then it will remove the lockfile and delete
    # that directory just before it exits. The default is 0 (off).
    #
    #use_lazy_deletes	0

    # Number of rsync re-tries. If you experience any network problems or
    # network card issues that tend to cause ssh to fail with errors like
    # "Corrupted MAC on input", for example, set this to a non-zero value
    # to have the rsync operation re-tried.
    #
    #rsync_numtries 0

    # LVM parameters. Used to backup with creating lvm snapshot before backup
    # and removing it after. This should ensure consistency of data in some special
    # cases
    #
    # LVM snapshot(s) size (lvcreate --size option).
    #
    #linux_lvm_snapshotsize	100M

    # Name to be used when creating the LVM logical volume snapshot(s).
    #
    #linux_lvm_snapshotname	rsnapshot

    # Path to the LVM Volume Groups.
    #
    #linux_lvm_vgpath	/dev

    # Mount point to use to temporarily mount the snapshot(s).
    #
    #linux_lvm_mountpath	/path/to/mount/lvm/snapshot/during/backup

    ###############################
    ### BACKUP POINTS / SCRIPTS ###
    ###############################

    # REMOTE SERVER
    backup	root@151.131.222.222:/home/	server_backup/
    backup	root@151.131.222.222:/etc/	server_backup/

    #backup_script	/usr/local/bin/backup_pgsql.sh	localhost/postgres/
    # You must set linux_lvm_* parameters below before using lvm snapshots
    #backup	lvm://vg0/xen-home/	lvm-vg0/xen-home/

    # EXAMPLE.COM
    #backup_exec	/bin/date "+ backup of example.com started at %c"
    #backup	root@example.com:/home/	example.com/	+rsync_long_args=--bwlimit=16,exclude=core
    #backup	root@example.com:/etc/	example.com/	exclude=mtab,exclude=core
    #backup_exec	ssh root@example.com "mysqldump -A > /var/db/dump/mysql.sql"
    #backup	root@example.com:/var/db/dump/	example.com/
    #backup_exec	/bin/date "+ backup of example.com ended at %c"

    # CVS.SOURCEFORGE.NET
    #backup_script	/usr/local/bin/backup_rsnapshot_cvsroot.sh	rsnapshot.cvs.sourceforge.net/

    # RSYNC.SAMBA.ORG
    #backup	rsync://rsync.samba.org/r	syncftp/	rsync.samba.org/rsyncftp/
My sshd logs look this way:
2025-01-21T16:47:06.445342+00:00 server sshd: Connection from 99.11.11.11 port 57908 on 151.131.222.222 port 22 rdomain ""
2025-01-21T16:47:06.445890+00:00 server sshd: debug1: Local version string SSH-2.0-OpenSSH_9.7p1 Ubuntu-7ubuntu4
2025-01-21T16:47:06.446150+00:00 server sshd: debug1: Remote protocol version 2.0, remote software version OpenSSH_8.9p1 Ubuntu-3ubuntu0.10
2025-01-21T16:47:06.446387+00:00 server sshd: debug1: compat_banner: match: OpenSSH_8.9p1 Ubuntu-3ubuntu0.10 pat OpenSSH* compat 0x04000000
2025-01-21T16:47:06.448025+00:00 server sshd: debug1: permanently_set_uid: 109/65534 [preauth]
2025-01-21T16:47:06.448401+00:00 server sshd: debug1: list_hostkey_types: rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]
2025-01-21T16:47:06.448865+00:00 server sshd: debug1: SSH2_MSG_KEXINIT sent [preauth]
2025-01-21T16:47:06.473088+00:00 server sshd: debug1: SSH2_MSG_KEXINIT received [preauth]
2025-01-21T16:47:06.473305+00:00 server sshd: debug1: kex: algorithm: curve25519-sha256 [preauth]
2025-01-21T16:47:06.473602+00:00 server sshd: debug1: kex: host key algorithm: ssh-ed25519 [preauth]
2025-01-21T16:47:06.473829+00:00 server sshd: debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC:  compression: none [preauth]
2025-01-21T16:47:06.474193+00:00 server sshd: debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC:  compression: none [preauth]
2025-01-21T16:47:06.474496+00:00 server sshd: debug1: expecting SSH2_MSG_KEX_ECDH_INIT [preauth]
2025-01-21T16:47:06.502026+00:00 server sshd: debug1: SSH2_MSG_KEX_ECDH_INIT received [preauth]
2025-01-21T16:47:06.509345+00:00 server sshd: debug1: ssh_packet_send2_wrapped: resetting send seqnr 3 [preauth]
2025-01-21T16:47:06.509768+00:00 server sshd: debug1: rekey out after 134217728 blocks [preauth]
2025-01-21T16:47:06.510085+00:00 server sshd: debug1: SSH2_MSG_NEWKEYS sent [preauth]
2025-01-21T16:47:06.510210+00:00 server sshd: debug1: Sending SSH2_MSG_EXT_INFO [preauth]
2025-01-21T16:47:06.510573+00:00 server sshd: debug1: expecting SSH2_MSG_NEWKEYS [preauth]
2025-01-21T16:47:06.543286+00:00 server sshd: debug1: ssh_packet_read_poll2: resetting read seqnr 3 [preauth]
2025-01-21T16:47:06.543606+00:00 server sshd: debug1: SSH2_MSG_NEWKEYS received [preauth]
2025-01-21T16:47:06.543946+00:00 server sshd: debug1: rekey in after 134217728 blocks [preauth]
2025-01-21T16:47:06.544260+00:00 server sshd: debug1: KEX done [preauth]
2025-01-21T16:47:06.636933+00:00 server sshd: debug1: userauth-request for user root service ssh-connection method none [preauth]
2025-01-21T16:47:06.637064+00:00 server sshd: debug1: attempt 0 failures 0 [preauth]
2025-01-21T16:47:06.638069+00:00 server sshd: debug1: PAM: initializing for "root"
2025-01-21T16:47:06.641531+00:00 server sshd: debug1: PAM: setting PAM_RHOST to "99.11.11.11"
2025-01-21T16:47:06.642045+00:00 server sshd: debug1: PAM: setting PAM_TTY to "ssh"
2025-01-21T16:47:06.664190+00:00 server sshd: Connection closed by authenticating user root 99.11.11.11 port 57908 [preauth]
2025-01-21T16:47:06.665162+00:00 server sshd: debug1: do_cleanup [preauth]
2025-01-21T16:47:06.666011+00:00 server sshd: debug1: monitor_read_log: child log fd closed
2025-01-21T16:47:06.666354+00:00 server sshd: debug1: do_cleanup
2025-01-21T16:47:06.666609+00:00 server sshd: debug1: PAM: cleanup
2025-01-21T16:47:06.667644+00:00 server sshd: debug1: Killing privsep child 2070
2025-01-21T16:47:06.668031+00:00 server sshd: debug1: audit_event: unhandled event 12
My iptables rules look like this:
Chain INPUT (policy DROP)
target     prot opt source               destination         
ufw-before-logging-input  all  --  anywhere             anywhere            
ufw-before-input  all  --  anywhere             anywhere            
ufw-after-input  all  --  anywhere             anywhere            
ufw-after-logging-input  all  --  anywhere             anywhere            
ufw-reject-input  all  --  anywhere             anywhere            
ufw-track-input  all  --  anywhere             anywhere            

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
ufw-before-logging-forward  all  --  anywhere             anywhere            
ufw-before-forward  all  --  anywhere             anywhere            
ufw-after-forward  all  --  anywhere             anywhere            
ufw-after-logging-forward  all  --  anywhere             anywhere            
ufw-reject-forward  all  --  anywhere             anywhere            
ufw-track-forward  all  --  anywhere             anywhere            

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         
ufw-before-logging-output  all  --  anywhere             anywhere            
ufw-before-output  all  --  anywhere             anywhere            
ufw-after-output  all  --  anywhere             anywhere            
ufw-after-logging-output  all  --  anywhere             anywhere            
ufw-reject-output  all  --  anywhere             anywhere            
ufw-track-output  all  --  anywhere             anywhere            

Chain ufw-after-forward (1 references)
target     prot opt source               destination         

Chain ufw-after-input (1 references)
target     prot opt source               destination         
ufw-skip-to-policy-input  udp  --  anywhere             anywhere             udp dpt:netbios-ns
ufw-skip-to-policy-input  udp  --  anywhere             anywhere             udp dpt:netbios-dgm
ufw-skip-to-policy-input  tcp  --  anywhere             anywhere             tcp dpt:netbios-ssn
ufw-skip-to-policy-input  tcp  --  anywhere             anywhere             tcp dpt:microsoft-ds
ufw-skip-to-policy-input  udp  --  anywhere             anywhere             udp dpt:bootps
ufw-skip-to-policy-input  udp  --  anywhere             anywhere             udp dpt:bootpc
ufw-skip-to-policy-input  all  --  anywhere             anywhere             ADDRTYPE match dst-type BROADCAST

Chain ufw-after-logging-forward (1 references)
target     prot opt source               destination         

Chain ufw-after-logging-input (1 references)
target     prot opt source               destination         
LOG        all  --  anywhere             anywhere             limit: avg 3/min burst 10 LOG level warn prefix "[UFW BLOCK] "

Chain ufw-after-logging-output (1 references)
target     prot opt source               destination         

Chain ufw-after-output (1 references)
target     prot opt source               destination         

Chain ufw-before-forward (1 references)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     icmp --  anywhere             anywhere             icmp destination-unreachable
ACCEPT     icmp --  anywhere             anywhere             icmp time-exceeded
ACCEPT     icmp --  anywhere             anywhere             icmp parameter-problem
ACCEPT     icmp --  anywhere             anywhere             icmp echo-request
ufw-user-forward  all  --  anywhere             anywhere            

Chain ufw-before-input (1 references)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ufw-logging-deny  all  --  anywhere             anywhere             ctstate INVALID
DROP       all  --  anywhere             anywhere             ctstate INVALID
ACCEPT     icmp --  anywhere             anywhere             icmp destination-unreachable
ACCEPT     icmp --  anywhere             anywhere             icmp time-exceeded
ACCEPT     icmp --  anywhere             anywhere             icmp parameter-problem
ACCEPT     icmp --  anywhere             anywhere             icmp echo-request
ACCEPT     udp  --  anywhere             anywhere             udp spt:bootps dpt:bootpc
ufw-not-local  all  --  anywhere             anywhere            
ACCEPT     udp  --  anywhere             mdns.mcast.net       udp dpt:mdns
ACCEPT     udp  --  anywhere             239.200.200.200      udp dpt:1900
ufw-user-input  all  --  anywhere             anywhere            

Chain ufw-before-logging-forward (1 references)
target     prot opt source               destination         

Chain ufw-before-logging-input (1 references)
target     prot opt source               destination         

Chain ufw-before-logging-output (1 references)
target     prot opt source               destination         

Chain ufw-before-output (1 references)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere            
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ufw-user-output  all  --  anywhere             anywhere            

Chain ufw-logging-allow (0 references)
target     prot opt source               destination         
LOG        all  --  anywhere             anywhere             limit: avg 3/min burst 10 LOG level warn prefix "[UFW ALLOW] "

Chain ufw-logging-deny (2 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere             ctstate INVALID limit: avg 3/min burst 10
LOG        all  --  anywhere             anywhere             limit: avg 3/min burst 10 LOG level warn prefix "[UFW BLOCK] "

Chain ufw-not-local (1 references)
target     prot opt source               destination         
RETURN     all  --  anywhere             anywhere             ADDRTYPE match dst-type LOCAL
RETURN     all  --  anywhere             anywhere             ADDRTYPE match dst-type MULTICAST
RETURN     all  --  anywhere             anywhere             ADDRTYPE match dst-type BROADCAST
ufw-logging-deny  all  --  anywhere             anywhere             limit: avg 3/min burst 10
DROP       all  --  anywhere             anywhere            

Chain ufw-reject-forward (1 references)
target     prot opt source               destination         

Chain ufw-reject-input (1 references)
target     prot opt source               destination         

Chain ufw-reject-output (1 references)
target     prot opt source               destination         

Chain ufw-skip-to-policy-forward (0 references)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere            

Chain ufw-skip-to-policy-input (7 references)
target     prot opt source               destination         
DROP       all  --  anywhere             anywhere            

Chain ufw-skip-to-policy-output (0 references)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere            

Chain ufw-track-forward (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             anywhere             ctstate NEW
ACCEPT     udp  --  anywhere             anywhere             ctstate NEW

Chain ufw-track-input (1 references)
target     prot opt source               destination         

Chain ufw-track-output (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             anywhere             ctstate NEW
ACCEPT     udp  --  anywhere             anywhere             ctstate NEW

Chain ufw-user-forward (1 references)
target     prot opt source               destination         

Chain ufw-user-input (1 references)
target     prot opt source               destination         
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh
ACCEPT     udp  --  anywhere             anywhere             udp dpt:openvpn
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:ssh /* 'dapp_OpenSSH' */
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:https
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:http
ACCEPT     udp  --  anywhere             dns.google           udp dpt:domain
ACCEPT     tcp  --  anywhere             dns.google           tcp dpt:domain

Chain ufw-user-limit (0 references)
target     prot opt source               destination         
LOG        all  --  anywhere             anywhere             limit: avg 3/min burst 5 LOG level warn prefix "[UFW LIMIT BLOCK] "
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable

Chain ufw-user-limit-accept (0 references)
target     prot opt source               destination         
ACCEPT     all  --  anywhere             anywhere            

Chain ufw-user-logging-forward (0 references)
target     prot opt source               destination         

Chain ufw-user-logging-input (0 references)
target     prot opt source               destination         

Chain ufw-user-logging-output (0 references)
target     prot opt source               destination         

Chain ufw-user-output (1 references)
target     prot opt source               destination
My sshd_config file:
PermitRootLogin yes


# This is the sshd server system-wide configuration file.  See
# sshd_config(5) for more information.

# This sshd was compiled with PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games

# The strategy used for options in the default sshd_config shipped with
# OpenSSH is to specify options with their default value where
# possible, but leave them commented.  Uncommented options override the
# default value.

# Include /etc/ssh/sshd_config.d/*.conf

# When systemd socket activation is used (the default), the socket
# configuration must be re-generated after changing Port, AddressFamily, or
# ListenAddress.
#
# For changes to take effect, run:
#
#   systemctl daemon-reload
#   systemctl restart ssh.socket
#
#Port 22
#AddressFamily any
#ListenAddress 0.0.0.0
#ListenAddress ::

#HostKey /etc/ssh/ssh_host_rsa_key
#HostKey /etc/ssh/ssh_host_ecdsa_key
#HostKey /etc/ssh/ssh_host_ed25519_key

# Ciphers and keying
#RekeyLimit default none

# Logging
#SyslogFacility AUTH
LogLevel DEBUG

# Authentication:

#LoginGraceTime 2m
#PermitRootLogin prohibit-password
#StrictModes yes
#MaxAuthTries 6
#MaxSessions 10

PubkeyAuthentication yes

# Expect .ssh/authorized_keys2 to be disregarded by default in future.
#AuthorizedKeysFile	.ssh/authorized_keys .ssh/authorized_keys2

#AuthorizedPrincipalsFile none

#AuthorizedKeysCommand none
#AuthorizedKeysCommandUser nobody

# For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
#HostbasedAuthentication no
# Change to yes if you don't trust ~/.ssh/known_hosts for
# HostbasedAuthentication
#IgnoreUserKnownHosts no
# Don't read the user's ~/.rhosts and ~/.shosts files
#IgnoreRhosts yes

# To disable tunneled clear text passwords, change to no here!
PasswordAuthentication no
#PermitEmptyPasswords no

# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
KbdInteractiveAuthentication no

# Kerberos options
#KerberosAuthentication no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
#KerberosGetAFSToken no

# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
#GSSAPIStrictAcceptorCheck yes
#GSSAPIKeyExchange no

# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the KbdInteractiveAuthentication and
# PasswordAuthentication.  Depending on your PAM configuration,
# PAM authentication via KbdInteractiveAuthentication may bypass
# the setting of "PermitRootLogin prohibit-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and KbdInteractiveAuthentication to 'no'.
UsePAM yes

#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes
#X11DisplayOffset 10
#X11UseLocalhost yes
#PermitTTY yes
PrintMotd no
#PrintLastLog yes
#TCPKeepAlive yes
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS no
#PidFile /run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none

# no default banner path
#Banner none

# Allow client to pass locale environment variables
AcceptEnv LANG LC_*

# override default of no subsystems
Subsystem sftp	/usr/lib/openssh/sftp-server

# Example of overriding settings on a per-user basis
#Match User anoncvs
#	X11Forwarding no
#	AllowTcpForwarding no
#	PermitTTY no
#	ForceCommand cvs server
Thank you for your attention; hopefully someone can help me solve this. I spent the whole day on this issue. If you think I can use another tool for the backups, please let me know. I am new to system administration, so I will gladly appreciate any help!
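For reference, a hedged sketch (not from the original post) of how the failing transfer can be reproduced by hand with the remote-shell arguments kept together, which is effectively what rsnapshot passes internally when ssh_args is set; the key path is the one from the configuration above.
# Re-run the failed transfer manually, quoting the whole remote shell so
# that -i and the key path stay attached to ssh rather than rsync.
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    --rsh="/usr/bin/ssh -i /home/user/ssh/id_ed25519" \
    root@151.131.222.222:/etc/ /tmp/rsnapshot-test/

# Exit code 255 from rsync usually means ssh itself failed, so a bare ssh
# with the same key helps narrow down authentication problems.
/usr/bin/ssh -i /home/user/ssh/id_ed25519 root@151.131.222.222 true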
nikita_trifan (13 rep)
Jan 21, 2025, 04:58 PM • Last activity: Jan 21, 2025, 05:48 PM
0 votes
2 answers
926 views
Is it possible to create multiple users on Linux with the same UID and GID, especially UID and GID 0?
The [ArchWiki - rsnapshot page](https://wiki.archlinux.org/title/rsnapshot) mentions creating multiple users with uid and gid set to 0 as a means of creating users that log in remotely to perform backups.

> One thing you can do to mitigate the potential damage from a backup server breach is to create alternate users on the client machines with **uid** and **gid** set to 0, but with a more restrictive shell such as scponly.

I assume that the purpose is to give those accounts the read-write-execute permissions of the root user, with the proviso that their login shell gives them reduced rights. Does that mean that even if accounts have the same gid and uid, they are still distinguished by account name, and that having the same gid and uid gives them the same access rights?
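For illustration, a minimal sketch (an assumption, not taken from the ArchWiki text quoted above) of how such a duplicate-UID account is typically created on Linux; the account name and the scponly path are hypothetical.
# Create a second account that shares UID/GID 0 with root but gets a more
# restrictive shell. -o/--non-unique permits the duplicate UID.
useradd -o -u 0 -g 0 -M -s /usr/bin/scponly backupuser   # scponly path is an assumption

# Both names map to the same numeric identity:
id root
id backupuser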
vfclists (7909 rep)
Apr 5, 2024, 09:21 PM • Last activity: Apr 5, 2024, 10:05 PM
0 votes
1 answer
129 views
Proper way to set up rsnapshot over ssh on multiple machines
I am using the answer in this post for part of my question: Proper way to set up rsnapshot over ssh. However, that post is seven years old, and I also have an additional question. I have the same scenario, except that I have a server B and a server C that need to use rsnapshot over ssh to back up to server A. Question: how can I set up the rsnapshot configuration file so that it does not confuse the two (server B and server C) in terms of storage, schedule (via cron), and so on?
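As a purely illustrative sketch (host names, paths and retention values are assumptions, not from the post), one common pattern is a single rsnapshot.conf on server A that pulls from both machines into separate destination subdirectories, so the two hosts never overwrite each other and a single cron entry on server A covers both.
# Fragment of rsnapshot.conf on server A (fields must be tab-separated).
snapshot_root	/backup/snapshots/
retain	daily	7

# One backup point per source host, each with its own destination subdirectory.
backup	backupuser@serverB:/etc/	serverB/
backup	backupuser@serverB:/home/	serverB/
backup	backupuser@serverC:/etc/	serverC/
backup	backupuser@serverC:/home/	serverC/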
bpb (1 rep)
Mar 15, 2024, 02:49 PM • Last activity: Mar 21, 2024, 10:28 PM
2 votes
1 answer
233 views
rsync running under rsnapshot is not creating hard links, why?
I have rsnapshot taking backups for me. However, each backup is the full size of the source data. It is not making hard links and instead is making new copies of every single (unchanged) file on every run. I have verified that new files are being created in each backup by checking the inode numbers against each other and also by counting hard links (there are none). The problem seems to be occurring at the rsync stage: rsync looks to be transferring every file on each run. The reason is ">f..T......", so to my understanding it is transferring and creating a whole new file because its timestamp differs from the one in the backup. In fact, the timestamps are significantly different, e.g. on the local system:

Access: 2024-03-18 10:14:28.285098766 +0000
Modify: 2023-11-23 21:04:36.000000000 +0000
Change: 2024-03-10 21:11:26.107904822 +0000
Birth:  2023-12-02 19:22:02.022412357 +0000

and the same file on the backup:

Access: 2024-03-18 10:14:29.369122130 +0000
Modify: 2024-03-18 10:14:30.609148859 +0000
Change: 2024-03-18 10:14:31.817174900 +0000
Birth:  2024-03-18 10:14:29.369122130 +0000

The upshot of this is that each backup takes forever and uses a huge amount of disk space, precisely the opposite of what I expect. I believe I can fix this by adding --times to the rsnapshot/rsync configuration to "correct" the timestamps on the backup, but that differs from how it should work "out of the box", so I'm reluctant to do it. My specific questions are:

1) Are the timestamps in the backup wrong in this context, and what should they actually be?
2) What could have gone wrong here? I feel I may have missed a step when creating the first ever backup, e.g. the first backup should have had --times.
3) Is adding --times to the config a bad idea, given it's not already there by default?
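A minimal diagnostic sketch (an assumption, not from the post): since the >f..T...... itemization says only the modification time differs, one quick check is whether the rsync invocation that rsnapshot builds still preserves mtimes (the -t part of the default -a); without it, rsync stamps destination files with the transfer time, so every later run sees a mismatch and re-copies the file. The paths below are placeholders.
# Dry run with itemized changes: the flag column shows why each file
# would be transferred.
rsync -a --itemize-changes --dry-run /path/to/source/ /path/to/snapshot_root/daily.0/localhost/source/

# Check whether rsync_short_args was overridden and lost -t / -a.
grep '^rsync_short_args' /etc/rsnapshot.conf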
Dansk (31 rep)
Mar 18, 2024, 11:15 PM • Last activity: Mar 20, 2024, 12:39 PM
0 votes
0 answers
46 views
rdfind-like tool to sparsify rsync operations
I use rsnapshot to take regular, automated snapshots of the home directory on my desktop (which runs Ubuntu on sda) and save them to a spare internal hard drive (sdb). Occasionally, I manually copy (via rsync) the contents of sdb to an external USB SSD (call it sdc). sdc also contains older manual backups of my files which predate my adoption of rsnapshot, and thus there are a lot of files on sdc that are duplicates of the incoming rsnapshot files. I recently discovered the rdfind tool (with the -makehardlinks option), which lets me greatly reduce the disk usage on sdc by running rdfind after I manually rsync snapshots from sdb to sdc. However, this entails redundant I/O operations, because I first rsync the files over from sdb (writing about 250 GB), *then* run rdfind (freeing nearly the same 250 GB). In principle, it should be possible to run something like rdfind *before* rsync to check hashes and determine which files from sdb need to be written out and which can be hard links, but how?

- I am seeking generic solutions for the Linux ecosystem, but distro- or filesystem-specific answers are also welcome.
- My desktop runs Ubuntu 22.04 and both sdb and sdc use BTRFS.
- My question is distinct from this one because it concerns deduplication *between* the origin and destination, not just at the origin: https://unix.stackexchange.com/questions/186004/deduplication-tool-for-rsync
- I am aware of the implications of hardlinking files.
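For illustration, a hedged sketch of one way to avoid most of the duplicate writes at copy time rather than cleaning up afterwards: point rsync's --link-dest at data already present on sdc, so unchanged files at the same relative path are created as hard links instead of being rewritten (mount points are assumptions; renamed files and the older pre-rsnapshot backups still need a content-based pass).
# Copy the newest snapshot from sdb to sdc, hard-linking against the
# previous copy already on sdc instead of writing the data again.
rsync -aH --link-dest=/mnt/sdc/backups/previous/ \
    /mnt/sdb/rsnapshot/daily.0/ /mnt/sdc/backups/current/

# Content-based clean-up for anything --link-dest could not match by path.
rdfind -makehardlinks true /mnt/sdc/backups/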
Max (203 rep)
Feb 25, 2024, 09:43 PM • Last activity: Mar 1, 2024, 09:12 PM
2 votes
2 answers
3945 views
What are the advantages of using btrfs snapshots over rsnapshot in a backup scenario
Currently I use rsnapshot to back up data from one encrypted ext4 drive to another. My system opens a LUKS container on each drive and runs rsnapshot according to an hourly schedule. I'm intrigued by btrfs's built-in snapshot feature, and I'm curious whether it can be used in place of my current setup (assuming, of course, that I reformat the drives). Are there any obvious issues I'm failing to realize? Can my current setup be improved by using btrfs? Is it faster, for example?
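For context, a minimal sketch (an assumed layout, not taken from the question) of the btrfs-native equivalent of this workflow: read-only snapshots on the source plus incremental send/receive to the second drive.
# Take a read-only snapshot of the data subvolume on the source drive.
btrfs subvolume snapshot -r /data /data/.snapshots/2021-05-19

# Ship it to the backup drive incrementally, using the previous snapshot
# as the delta parent (both filesystems must be btrfs).
btrfs send -p /data/.snapshots/2021-05-18 /data/.snapshots/2021-05-19 \
    | btrfs receive /mnt/backup/.snapshots/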
qq4 (559 rep)
May 19, 2021, 06:38 PM • Last activity: Jul 31, 2023, 12:54 PM
2 votes
0 answers
98 views
Speeding up rsnapshot backups to btrfs?
I'm using rsnapshot to back up the computers on my home network. For years, I've been using Ext4-on-LUKS on USB hard disks and getting good results. Recently I added a Btrfs-on-LUKS disk to get the benefit of data checksumming, but performance is abysmal: where I can back up to Ext4-on-LUKS in a consistent 45 minutes, the time needed to back up to Btrfs-on-LUKS is seven hours and increasing as the disk fills up. The limiting factor appears to be disk I/O: my monitoring software shows a load average between 3 and 5, but top shows the CPU is 95% either "idle" or "waiting". Is there anything I can tweak to speed things up?
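Purely as an illustration of the kind of tuning that is sometimes tried for metadata-heavy rsnapshot workloads on btrfs (these mount options are assumptions to experiment with, not a known fix for this case):
# Example /etc/fstab line for the backup filesystem: skip atime updates
# (rsnapshot's hard-link farm touches a huge number of inodes) and enable
# transparent compression.
/dev/mapper/backup-luks  /mnt/backup  btrfs  noatime,compress=zstd  0  0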
Mark (4665 rep)
Jun 14, 2023, 05:56 PM
0 votes
2 answers
276 views
Are Rsnapshot backups self contained - how to save into zip?
I have some old backups done by rsnapshot. The directories are:
/backups/week.0
/backups/week.1
/backups/week.2
/backups/week.3
/backups/week.4
I'd like to keep only one copy and delete the others. Since the backups are incremental, will it work if I zipped week.0 into week.0.zip and deleted everything else?
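For illustration, a hedged sketch of how a single snapshot directory is usually archived: each week.N directory presents a complete file tree (unchanged files are hard links shared with the other snapshots, not deltas), so any one of them can be packed on its own; tar tends to preserve permissions and symlinks more faithfully than zip.
# Archive the newest snapshot; -C keeps the stored paths relative to /backups.
tar -czf /backups/week.0.tar.gz -C /backups week.0

# Sanity-check the archive contents before deleting the other week.* directories.
tar -tzf /backups/week.0.tar.gz | head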
Danijel (186 rep)
May 23, 2023, 09:37 AM • Last activity: May 24, 2023, 07:02 AM
2 votes
1 answer
259 views
Full paths with rsnapshot and rsync server at source side with --relative option are truncated - how to preserve full source paths?
When using rsnapshot with the following configuration:

#/etc/rsnapshot.conf
snapshot_root	/backup.rsnapshot/
rsync_long_args	--relative
backup	user@laptop:/home/user/test/	./

a directory /backup.rsnapshot/weekly.0/home/user/test/ is made at the destination machine. However, if the rsync server on the laptop (i.e. the source machine) is used:

#/etc/rsnapshot.conf
snapshot_root	/backup.rsnapshot/
rsync_long_args	--relative
backup	rsync://IP_of_laptop/user/test/	./

no full path is preserved; instead, the folder /backup.rsnapshot/weekly.0/test is made. /etc/rsyncd.conf on the laptop:

uid = 1000
gid = 1001
use chroot = no
max connections = 4
syslog facility = local5
pid file = /run/rsyncd.pid

[user]
    path = /home/user
    comment = user home folder

Hence my question: how can I preserve the full paths when using the rsync server, like they are preserved when the rsync server is not involved?
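For illustration, a hedged note: with a daemon-mode source, the path rsync sees starts at the module root ([user] = /home/user here), so the /home/user prefix is gone before --relative is applied. In the ssh case, the preserved portion can be chosen explicitly with rsync's /./ anchor (with a modern rsync on the sending side), for example:
# Only the part after the /./ marker is recreated on the destination,
# so this produces weekly.0/user/test/ rather than weekly.0/home/user/test/.
rsync -a --relative user@laptop:/home/./user/test/ /backup.rsnapshot/weekly.0/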
Igor Popov (121 rep)
May 24, 2019, 09:38 PM • Last activity: Oct 4, 2022, 11:52 PM
2 votes
0 answers
121 views
Is it possible to rsnapshot localsource to remotetarget?
Since rsnapshot is based on rsync, I wanted to try using rsnapshot on my client server to back up to a remote server. The problem is that I am unable to find proper information on how to set this up. I know how to use rsync to save files/dirs from my client server to my remote server, but I struggle with the rsnapshot conf. My initial thought was to just switch source and target inside the rsnapshot.conf file. LocalSource -> LocalDestination:
backup    /home/    localhost
to LocalSource -> RemoteDestination:
backup    /home/    user@remote.com:/volume1/Backups
It seems this is not working since rsnapshot -t daily results in:
echo 4069 > /var/run/rsnapshot.pid
mkdir -m 0755 -p \
    /var/cache/rsnapshot/daily.0/user@remote.com:/volume1/Backups/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /home/ \
    /var/cache/rsnapshot/daily.0/user@remote.com:/volume1/Backups/remote.com/
touch /var/cache/rsnapshot/daily.0/
and running rsnapshot outside of test mode also just saves the files to rsnapshot's root directory. **Is it possible to achieve this?** I am aware that the reversed situation should work: RemoteSource -> LocalDestination:
backup    user@client.com:/home/    localhost
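A hedged note for context: rsnapshot expects snapshot_root to be a local path, because the rotation and the cp -al hard-linking happen on the machine running rsnapshot, which is why a remote destination in a backup line is treated as a literal directory name. One workaround sometimes used (an assumption, not from the post) is to make the remote target look local by mounting it and pointing snapshot_root at the mount; note that hard linking across sshfs depends on the SFTP server's support, so an NFS mount may be the safer choice.
# Mount the remote backup location locally (assumes sshfs is installed).
mkdir -p /mnt/remote-backups
sshfs user@remote.com:/volume1/Backups /mnt/remote-backups

# Then, in rsnapshot.conf (tab-separated fields):
# snapshot_root	/mnt/remote-backups/
# backup	/home/	localhost/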
MaKaNu (131 rep)
Apr 7, 2022, 11:51 AM • Last activity: Jul 15, 2022, 02:06 AM
1 vote
1 answer
540 views
Permission errors backing up entire system using rsnapshot over local server
EDIT: the solution to this problem is the marked solution underneath, plus enabling PermitRootLogin without-password in /etc/ssh/sshd_config.

I'm trying to back up my entire system to my local server, but even though I'm running rsnapshot as sudo, I get permission errors in /var/, /etc/ and /usr/. Is there a way to fix this? If there isn't, what's my best option for backing up my system to my local server? This is my rsnapshot.conf:
config_version	1.2

###########################
# SNAPSHOT ROOT DIRECTORY #
###########################

snapshot_root	/home/gisbi/backup/

cmd_cp		/bin/cp

cmd_rm		/bin/rm

cmd_rsync	/usr/bin/rsync

cmd_ssh	/usr/bin/ssh

cmd_logger	/usr/bin/logger

cmd_du		/usr/bin/du

#########################################
#     BACKUP LEVELS / INTERVALS         #
# Must be unique and in ascending order #
# e.g. alpha, beta, gamma, etc.         #
#########################################

#retain	hourly	24
retain	daily	7
retain	weekly	4
retain	monthly	12

#logs

verbose		5

loglevel	4

logfile	/var/log/rsnapshot.log

lockfile	/var/run/rsnapshot.pid

ssh_args	-p 22

#exclusions

exclude		/dev/*
exclude		/proc/*
exclude		/sys/*
exclude		/run/*
exclude		/var/tmp/*
exclude		/var/run/*
exclude		/tmp/*
exclude		/run/*
exclude		/mnt/*
exclude		/usr/portage/distfiles/*
exclude		/lost+found
exclude		/home/gisbi/Storage
exclude		/home/gisbi/.local/share/Trash/*

#location

backup	gisbi@192.168.1.15:/		popbackup/
EDIT: the errors look like this:
rsync: [sender] send_files failed to open "/usr/lib/cups/backend/cups-brf": Permission denied (13)
rsync: [sender] send_files failed to open "/usr/lib/cups/backend/implicitclass": Permission denied (13)
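For illustration, a hedged sketch of the direction described in the EDIT above (key-based root login on the machine being backed up), so that rsync can read files the unprivileged remote user cannot; the exact backup line is an assumption.
# In rsnapshot.conf (tab-separated), pull as root instead of gisbi:
# backup	root@192.168.1.15:/	popbackup/

# Quick manual check that key-based root login and read access work
# (the dry run writes nothing locally).
sudo rsync -a --dry-run root@192.168.1.15:/etc/ /tmp/permtest/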
Gisbi (33 rep)
Mar 25, 2022, 06:48 PM • Last activity: Apr 20, 2022, 02:27 PM
0 votes
1 answer
812 views
rsnapshot.conf - reducing verbose parameter not having the desired effect?
TL;DR: Is there some configuration for rsnapshot (see versions below) that would allow me to restrict output to the shell commands, errors and start/finish notifications generated by rsnapshot itself, while causing rsync to generate and record the desired level of detail *only* in the rsync log file? Or, put more succinctly, can I make the results of rsnapshot output match the descriptions of verbosity in the rsnapshot config file? If not, is there an rsnapshot community that takes feature requests?

Just the TL part... It appears, after some troubleshooting (see below), that my particular combination of rsnapshot and rsync no longer works as it did for the previous several years. Specifically, the output from rsync now shows up in the console output of rsnapshot, regardless of the verbose settings in rsnapshot.conf. I have a fresh install of FreeBSD 12.2:
freebsd-version   
12.2-RELEASE-p10
rsync was installed as part of pkg install rsnapshot, and rsync -V shows
rsync  version 3.2.3  protocol version 31
Copyright (C) 1996-2020 by Andrew Tridgell, Wayne Davison, and others.
Web site: https://rsync.samba.org/ 
Capabilities:
    64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,
    socketpairs, hardlinks, hardlink-specials, symlinks, IPv6, atimes,
    batchfiles, inplace, append, ACLs, xattrs, optional protect-args, iconv,
    symtimes, no prealloc, stop-at, no crtimes, file-flags
Optimizations:
    no SIMD, no asm, openssl-crypto
Checksum list:
    xxh128 xxh3 xxh64 (xxhash) md5 md4 none
Compress list:
    zstd lz4 zlibx zlib none
The current version of rsnapshot is:
# version of rsnapshot
my $VERSION = '1.4.4';
The issue, after posting the original form of this question, and sleeping on it, plus the aforementioned troubleshooting, boils down to this: rsnapshot runs from crontab, and the unnecessary information from rsync appearing in the console output (despite the verbose 1 setting) seems to require a "wrapper" script to suppress the noise and then assemble the correct information from the rsnapshot log. This seems like a very error-prone process, and one that is also likely to be very high-maintenance, as upgrades break the duct-tape nature of the workaround. The least-cost path seems to be to give up on getting any stats information on the effectiveness of the rsync transfer. This allows me to maintain readability for monitoring the success/failure of the rsync, and to easily re-run it when it does fail.

Clearly something has changed, I suspect with rsync (an implicit -v setting?), and if there's some configuration that would allow me to get what I want in the rsnapshot output (only the shell commands, errors and start/finish notifications) while recording the rsync output in the rsync log file, I'd sure love to know about it. Please keep in mind the rsnapshot regime has been running perfectly since 2018, and only doing a fresh install of FreeBSD 12.2 broke it. The previous rsnapshot regime was running (on the same hardware) on an upgraded FreeBSD 12.1 (started as 10.x). rsync and rsnapshot were originally built separately (in that order) from the FreeBSD 10.x ports, and upgraded regularly since then with portmaster. This time (as mentioned) I installed rsnapshot with pkg, and let it install rsync (and everything else it needed). rsnapshot.conf changed only in the value of snapshot_root and a shortened list of backup points.

Here's the output I want (easy-to-process) for the monitoring emails (that is, without all the rsync noise):
Wed Oct 20 21:40:00 PDT 2021
=================================================================

echo 98875 > /var/run/rsnapshot.pid 
mv /obo-offsitepool/archives/daily.5/ /obo-offsitepool/archives/daily.6/ 
mv /obo-offsitepool/archives/daily.4/ /obo-offsitepool/archives/daily.5/ 
mv /obo-offsitepool/archives/daily.3/ /obo-offsitepool/archives/daily.4/ 
mv /obo-offsitepool/archives/daily.2/ /obo-offsitepool/archives/daily.3/ 
mv /obo-offsitepool/archives/daily.1/ /obo-offsitepool/archives/daily.2/ 
native_cp_al("/obo-offsitepool/archives/daily.0", \
    "/obo-offsitepool/archives/daily.1") 
/usr/local/bin/rsync -a --delete --numeric-ids \
    /obo-offsitepool/archives/daily.0/ /obo-offsitepool/archives/daily.1/ 
/usr/local/bin/rsync -rltv --chmod D0770,F0660 --delete --relative \
    --delete-excluded --partial --stats --log-file=/var/log/rsync \
    --human-readable \
    --exclude-from=/obo-offsitepool/archives/.rsnapshot_excludes \
    /usr/local/etc/ \
    /obo-offsitepool/archives/daily.0/obo-offsite1/local_etc/ 
/usr/local/bin/rsync -rltv --chmod D0770,F0660 --delete --relative \
    --delete-excluded --partial --stats --log-file=/var/log/rsync \
    --human-readable \
    --exclude-from=/obo-offsitepool/archives/.rsnapshot_excludes \
    192.168.18.3::srv330-group/ \
    /obo-offsitepool/archives/daily.0/CSO/srv330-group 
... etc
Rather than this (keeping in mind there are >20 backup points, many with hundreds of lines of unwanted rsync output):
Tue Oct 26 18:55:00 PDT 2021
=================================================================

echo 97810 > /var/run/rsnapshot.pid 
/bin/rm -rf /obopool/archives/daily.6/ 
mv /obopool/archives/daily.5/ /obopool/archives/daily.6/ 
mv /obopool/archives/daily.4/ /obopool/archives/daily.5/ 
mv /obopool/archives/daily.3/ /obopool/archives/daily.4/ 
mv /obopool/archives/daily.2/ /obopool/archives/daily.3/ 
mv /obopool/archives/daily.1/ /obopool/archives/daily.2/ 
native_cp_al("/obopool/archives/daily.0", \
    "/obopool/archives/daily.1") 
/usr/local/bin/rsync -a --delete --numeric-ids \
    /obopool/archives/daily.0/ /obopool/archives/daily.1/ 
/usr/local/bin/rsync -rltv --chmod D0770,F0660 --delete --relative \
    --delete-excluded --partial --stats --log-file=/var/log/rsync \
    --human-readable \
    --exclude-from=/obopool/archives/.rsnapshot_excludes /usr/local/etc/ \
    /obopool/archives/daily.0/offsite1/local_etc/ 
sending incremental file list
/usr/
deleting usr/local/etc/ssmtp/ssmtp.conf.sample
deleting usr/local/etc/ssmtp/ssmtp.conf.2021-01-20
deleting usr/local/etc/ssmtp/ssmtp.conf
deleting usr/local/etc/ssmtp/revaliases.sample
deleting usr/local/etc/ssmtp/revaliases
deleting usr/local/etc/ssmtp/
deleting usr/local/etc/dma/dma.conf.sample
deleting usr/local/etc/dma/dma.conf
deleting usr/local/etc/dma/auth.conf.sample
deleting usr/local/etc/dma/auth.conf
deleting usr/local/etc/dma/
deleting usr/local/etc/portmaster.rc.sample
deleting usr/local/etc/papersize.letter
deleting usr/local/etc/papersize.a4
deleting usr/local/etc/bash_completion.d/portmaster.sh
deleting usr/local/etc/rc.d/dma_flushq
/usr/local/
/usr/local/etc/
/usr/local/etc/rsnapshot.conf
/usr/local/etc/rsnapshot.conf.default
/usr/local/etc/rsnapshot.conf_2018-09-08
/usr/local/etc/rsnapshot.conf_2019-06-14
/usr/local/etc/rsnapshot.conf_2019-08-23
/usr/local/etc/rsnapshot.conf_2021-02-01
/usr/local/etc/rsnapshot.conf_2021-06-21
/usr/local/etc/screenrc
/usr/local/etc/screenrc.sample
/usr/local/etc/smartd.conf
/usr/local/etc/smartd.conf.sample
/usr/local/etc/smartd_warning.sh
/usr/local/etc/bash_completion.d/
/usr/local/etc/man.d/
/usr/local/etc/man.d/perl5.conf
/usr/local/etc/newsyslog.conf.d/
/usr/local/etc/newsyslog.conf.d/rsnapshot
/usr/local/etc/newsyslog.conf.d/rsync
/usr/local/etc/newsyslog.conf.d/rsyncd
/usr/local/etc/periodic/
/usr/local/etc/periodic/daily/
/usr/local/etc/periodic/daily/smart
/usr/local/etc/periodic/security/
/usr/local/etc/periodic/weekly/
/usr/local/etc/rc.d/
/usr/local/etc/rc.d/rsyncd
/usr/local/etc/rc.d/smartd
/usr/local/etc/rc.d/uuidd
/usr/local/etc/rsync/
/usr/local/etc/rsync/rsyncd.conf
/usr/local/etc/rsync/rsyncd.conf.sample
/usr/local/etc/smartd_warning.d/
/usr/local/etc/ssl/
/usr/local/etc/ssl/cert.pem

Number of files: 46 (reg: 31, dir: 14, link: 1)
Number of created files: 4 (reg: 3, dir: 1)
Number of deleted files: 16 (reg: 14, dir: 2)
Number of regular files transferred: 23
Total file size: 774.49K bytes
Total transferred file size: 735.92K bytes
Literal data: 735.92K bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 738.45K
Total bytes received: 1.10K

sent 738.45K bytes  received 1.10K bytes  1.48M bytes/sec
total size is 774.49K  speedup is 1.05
/usr/local/bin/rsync -rltv --chmod D0770,F0660 --delete --relative \
    --delete-excluded --partial --stats --log-file=/var/log/rsync \
    --human-readable \
    --exclude-from=/obopool/archives/.rsnapshot_excludes \
    192.168.18.3::srv330-group/ \
    /obopool/archives/daily.0/CSO/srv330-group 
receiving incremental file list
CSO2/Lori/
CSO2/Lori/retail.prices2.docx
Retail/
...etc
**Both of the above email reports were produced from the same rsnapshot.conf file.** Here's the troubleshooting I've been doing. Test 4 is the best approximation to what I had previously.
1 Default configuration
rsnapshot -v alpha && /root/scripts/marklogs.sh --> using the default rsnapshot.conf; the rsync_short_args AND rsync_long_args entries were removed from the file entirely (not even left in as commented-out # lines)
 - normal console output (start notification, shell commands, success notification)
 - rsnapshot log shows same as console output, no rsync log output

2 Explicitly add the default settings to the rsnapshot.conf file
rm -rf  /obopool/tester/alpha* && rsnapshot -v alpha  && /root/scripts/marklogs.sh #--> rsnapshot.conf modified as shown:
 rsync_short_args        -a
 rsync_long_args         --delete --numeric-ids --relative --delete-excluded
  - same as previous 

3 change rsync_long_args
rm -rf  /obopool/tester/alpha* && rsnapshot -v alpha  && /root/scripts/marklogs.sh #--> rsnapshot.conf modified from default as shown:
 rsync_short_args        -a
 rsync_long_args         --chmod D0770,F0660 --delete --relative --delete-excluded --partial --log-file=/var/log/rsync-test
 - console shows expected output (start notification, shell commands, success confirmations)
 - rsnapshot log shows same as monitor output
 - rsync log shows building file list, files xferred, sent/received/total summary line for each backup point
 
4 change rsync_short_args
rm -rf  /obopool/tester/alpha* && rsnapshot -v alpha && /root/scripts/marklogs.sh  #--> rsnapshot.conf modified from default as shown:
 rsync_short_args        -rlt
 rsync_long_args         --chmod D0770,F0660 --delete --relative --delete-excluded --partial --log-file=/var/log/rsync-test
 - normal console output (start notification, shell commands, success notification)
 - rsnapshot log shows same as console
 - rsync log shows building file list, files xferred, sent/received/total summary line for each backup point

5 change rsync_short_args to add -v
rm -rf  /obopool/tester/alpha* && rsnapshot -v alpha && /root/scripts/marklogs.sh  #--> rsnapshot.conf modified from default as shown:
 rsync_short_args        -rltv
 rsync_long_args         --chmod D0770,F0660 --delete --relative --delete-excluded --partial --log-file=/var/log/rsync-test
 - console shows the unexpected (and undesired) rsync output, even though verbose is set at 2
 - rsnapshot log is polluted with rsync output: it shows the start notification & shell commands before the noise, then some closing shell commands (touch /obopool/tester/alpha.0/; rm -f /var/run/rsnapshot.pid), then the completion notification
 - rsync log adds one line, "total size ..... speedup is .....", to logged info.
  
6 change loglevel to 1 to see effect on rsnapshot log
rm -rf  /obopool/tester/alpha* && rsnapshot -v alpha && /root/scripts/marklogs.sh  #--> rsnapshot.conf modified from default as shown:
 loglevel	               1
 rsync_short_args        -rltv
 rsync_long_args         --chmod D0770,F0660 --delete --relative --delete-excluded --partial --log-file=/var/log/rsync-test
 - console shows same polluted output (rsnapshot shell commands with rsync data)
 - rsnapshot log received no output
 - rsync log shows same (correct) output as previous execution

7 change verbose to 1 (quiet) in rsnapshot.conf
rm -rf  /obopool/tester/alpha* && rsnapshot -v alpha && /root/scripts/marklogs.sh  #--> rsnapshot.conf modified as shown:
 verbose                 1
 loglevel	               1
 rsync_short_args        -rltv
 rsync_long_args         --chmod D0770,F0660 --delete --relative --delete-excluded --partial --log-file=/var/log/rsync-test
 - console - no change to output from test 6 
 - rsnapshot log received no output (expected with loglevel 1)
 - rsync log shows expected output (same as previous)

8 set loglevel to 2 to verify effect on rsnapshot log
rm -rf  /obopool/tester/alpha* && rsnapshot -v alpha && /root/scripts/marklogs.sh  #--> rsnapshot.conf modified as shown:
 verbose                 1
 loglevel	               2
 rsync_short_args        -rltv
 rsync_long_args         --chmod D0770,F0660 --delete --relative --delete-excluded --partial --log-file=/var/log/rsync-test
 - console - no change
 - rsnapshot log shows only start and completion notices (no shell commands). One assumes errors would show up.
 - rsync log shows expected output

9 variation on 4: default verbose/loglevel, no -v in rsync_short_args, and add --stats to rsync_long_args
rm -rf  /obopool/tester/alpha* && rsnapshot -v alpha && /root/scripts/marklogs.sh  #--> rsnapshot.conf modified from default as shown:
 rsync_short_args        -rlt
 rsync_long_args         --chmod D0770,F0660 --delete --relative --delete-excluded --partial --log-file=/var/log/rsync-test --stats
 - console has stats (rsync) information mixed in with the correct output (start notification, shell commands, success confirmations)
 - rsnapshot log has the same issue as the console: stats output mixed in with the expected output (start notification, shell commands, success confirmations)
 - rsync log shows expected output -- files plus stats.
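For reference, the combination that kept both the console and the rsnapshot log quiet in these trials (tests 3 and 4: no -v in rsync_short_args and no --stats in rsync_long_args) was:
 rsync_short_args        -rlt
 rsync_long_args         --chmod D0770,F0660 --delete --relative --delete-excluded --partial --log-file=/var/log/rsync-test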
This is the default (installed) /usr/local/etc/rsnapshot.conf file, against which all the logged changes were made.
#/usr/local/etc/rsnapshot.conf
#################################################
# rsnapshot.conf - rsnapshot configuration file #
#################################################
#                                               #
# PLEASE BE AWARE OF THE FOLLOWING RULE:        #
#                                               #
# This file requires tabs between elements      #
#                                               #
#################################################

#######################
# CONFIG FILE VERSION #
#######################

config_version  1.2
snapshot_root   /obopool/tester
no_create_root  1
cmd_rm          /bin/rm
cmd_rsync       /usr/local/bin/rsync
cmd_logger      /usr/bin/logger
#cmd_rsnapshot_diff     /usr/local/bin/rsnapshot-diff

retain  alpha   6
retain  beta    7
retain  gamma   4
#retain delta   3

# Verbose level, 1 through 5.
# 1     Quiet           Print fatal errors only
# 2     Default         Print errors and warnings only
# 3     Verbose         Show equivalent shell commands being executed
# 4     Extra Verbose   Show extra verbose information
# 5     Debug mode      Everything
#
verbose         2

# Same as "verbose" above, but controls the amount of data sent to the
# logfile, if one is being used. The default is 3.
#
loglevel        3

logfile /var/log/rsnapshot-test
lockfile        /var/run/rsnapshot.pid


###############################
### BACKUP POINTS / SCRIPTS ###
###############################

# LOCALHOST
backup  /root/          localhost/
backup  /etc/           localhost/
backup  /usr/local/     localhost/
backup  /var/log/       localhost/
And finally, here's a bit of the perl -V output:
perl -V
Summary of my perl5 (revision 5 version 32 subversion 1) configuration:
   
  Platform:
    osname=freebsd
    osvers=12.2-release-p10
    archname=amd64-freebsd-thread-multi
    uname='freebsd 122amd64-quarterly-job-03 12.2-release-p10 freebsd 12.2-release-p10 amd64 '
    config_args='-Darchlib=/usr/local/lib/perl5/5.32/mach -Dcc=cc -Dcf_by=mat -Dcf_email=mat@FreeBSD.org -Dcf_time=Sat Jan 23 14:56:40 UTC 2021 -Dinc_version_list=none -Dlibperl=libperl.so.5.32.1 -Dman1dir=/usr/local/lib/perl5/5.32/perl/man/man1 -Dman3dir=/usr/local/lib/perl5/5.32/perl/man/man3 -Dprefix=/usr/local -Dprivlib=/usr/local/lib/perl5/5.32 -Dscriptdir=/usr/local/bin -Dsitearch=/usr/local/lib/perl5/site_perl/mach/5.32 -Dsitelib=/usr/local/lib/perl5/site_perl -Dsiteman1dir=/usr/local/lib/perl5/site_perl/man/man1 -Dsiteman3dir=/usr/local/lib/perl5/site_perl/man/man3 -Dusenm=n -Duseshrplib -sde -Ui_iconv -Ui_malloc -Uinstallusrbinperl -Accflags=-DUSE_THREAD_SAFE_LOCALE -Alddlflags=-L/wrkdirs/usr/ports/lang/perl5.32/work/perl-5.32.1 -L/usr/local/lib/perl5/5.32/mach/CORE -lperl -Dshrpldflags=$(LDDLFLAGS:N-L/wrkdirs/usr/ports/lang/perl5.32/work/perl-5.32.1:N-L/usr/local/lib/perl5/5.32/mach/CORE:N-lperl) -Wl,-soname,$(LIBPERL:R) -Doptimize=-O2 -pipe  -fstack-protector-strong -fno-strict-aliasing  -Dusedtrace -Ui_gdbm -Dusemultiplicity=y -Duse64bitint -Dusemymalloc=n -Dusethreads=y'
BISI (136 rep)
Oct 28, 2021, 04:55 AM • Last activity: Nov 3, 2021, 06:17 PM
0 votes
1 answers
1769 views
bash script help to check nfs mount exists [rsnapshot]
I have two linux servers; server two is a backup to server one where server two is NFS mounted to server one. I use `rsnapshot` on server one to copy from `/data/` to the **nfs mounted** folder `/bkup` from server two. Problem is if the nfs `/bkup` mount isn't there, rsnapshot will copy /data {20tb)...
I have two Linux servers; server two is a backup to server one, and a filesystem from server two is NFS-mounted on server one. I use rsnapshot on server one to copy from /data/ to the **nfs mounted** folder /bkup from server two. The problem is that if the NFS /bkup mount isn't there, rsnapshot will copy /data (20 TB) onto the root partition (1 TB). Instead of cron'ing my one call to launch rsnapshot, I would like to call a backup script that first checks everything before calling rsnapshot, to prevent that scenario. I do not think rsnapshot's no_create_root is relevant because the /bkup folder will always exist. Can the following happen in a bash script? *I'm hoping someone fluent in bash can type it up in 2 minutes; my bash writing is horrible.*
if ( showmount -e server_two responds with "/bkup server_two" ) {
    if ( check if /bkup is nfs mounted == true ) {
        /usr/bin/rsnapshot daily
    } else {
        mount /bkup
        if ( check if /bkup is nfs mounted == true ) {
            /usr/bin/rsnapshot daily
        }
    }
}
Right now I have this working where/when my NFS bkup mount is good on server_one:
mount | grep bkup
server_two:/bkup on /bkup type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.1.2,local_lock=none,addr=192.168.1.1)

df -h | grep bkup
server_two:/bkup  15T  3.0T  12T  21% /bkup

showmount -e server_two
Export list for server_two:
/bkup server_one
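A minimal sketch of that logic in bash, assuming /bkup has an /etc/fstab entry (so a bare "mount /bkup" works) and that util-linux's mountpoint(1) is available; server_two and /bkup are the names from the question:
#!/bin/bash
set -u

# abort unless server_two is still exporting /bkup
if ! showmount -e server_two | grep -q '^/bkup[[:space:]]'; then
    echo "server_two is not exporting /bkup, skipping backup" >&2
    exit 1
fi

# mount it if it is not already mounted
if ! mountpoint -q /bkup; then
    mount /bkup
fi

# only run rsnapshot once /bkup really is a mount point
if mountpoint -q /bkup; then
    /usr/bin/rsnapshot daily
else
    echo "/bkup is not mounted, skipping backup" >&2
    exit 1
fi
Run from cron in place of the bare rsnapshot call: if the mount is missing, the script exits non-zero instead of letting rsnapshot fill the root partition.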
ron (8647 rep)
Sep 29, 2021, 01:57 PM • Last activity: Sep 29, 2021, 03:26 PM
1 votes
1 answers
433 views
Problems getting Rsnapshot to work, even just for a local backup
my goal is to backup a remote server. However, I first want to get just a local backup working, running on Ubuntu 20. For this, my /etc/rsnapshot.conf file is the following: config_version 1.2 snapshot_root /var/backupsFromRsnapshot/ cmd_rsync /usr/bin/rsync # The retain arguments define the number...
My goal is to back up a remote server. However, I first want to get just a local backup working, running on Ubuntu 20. For this, my /etc/rsnapshot.conf file is the following:
config_version  1.2
snapshot_root   /var/backupsFromRsnapshot/
cmd_rsync       /usr/bin/rsync
# The retain arguments define the number of snapshots to retain at different le>
# I'm going to run cron job beta daily (so below will keep 7 daily snapshots), >
retain  alpha   6
retain  beta    7
retain  gamma   4
# Below defines what folders I want included in the snapshots.
backup  /home/          localhost/
backup  /etc/           localhost/
backup  /var/           localhost/
backup  /usr/local/     localhost/
interval        hourly  6
If I run "rsnapshot configtest", I get the following result:
SYNTAX OK
Then I test the backup with the following command:
rsnapshot -t alpha
The result is as follows:
mkdir -m 0700 -p /var/backupsFromRsnapshot/
mkdir -m 0755 -p /var/backupsFromRsnapshot/alpha.0/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /home/ /var/backupsFromRsnapshot/alpha.0/localhost/
mkdir -m 0755 -p /var/backupsFromRsnapshot/alpha.0/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded /etc/ \
    /var/backupsFromRsnapshot/alpha.0/localhost/
mkdir -m 0755 -p /var/backupsFromRsnapshot/alpha.0/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    --filter=-/_/var/backupsFromRsnapshot /var/ \
    /var/backupsFromRsnapshot/alpha.0/localhost/
mkdir -m 0755 -p /var/backupsFromRsnapshot/alpha.0/
/usr/bin/rsync -a --delete --numeric-ids --relative --delete-excluded \
    /usr/local/ /var/backupsFromRsnapshot/alpha.0/localhost/
touch /var/backupsFromRsnapshot/alpha.0/
However, if I check my /var/ directory, there is no backupsFromRsnapshot folder, nor any backup files. Is my config correct? Is my test expression correct? Where is the fault? Thanks!
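Note that rsnapshot -t is test (dry-run) mode: it only prints the equivalent shell commands without executing them, so it never creates the snapshot_root. A quick check with a real run, using nothing beyond the commands already shown above, would be:
rsnapshot configtest             # syntax check only
rsnapshot -v alpha               # a real run; -v echoes the commands as they execute
ls /var/backupsFromRsnapshot/    # alpha.0/ should now exist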
RsnapshotBeginner1 (13 rep)
Oct 18, 2020, 02:47 PM • Last activity: Oct 18, 2020, 05:13 PM
3 votes
1 answers
1137 views
Rsnapshot: folder ownership permissions to 'backups' group instead of root
I am using rsnapshot to make daily backups of a MYSQL database on a server. Everything works perfectly except the ownership of the directory is `root:root`. I would like it to be `root:backups` to enable me to easily download these backups to a local computer over an ssh connection. (My ssh user has...
I am using rsnapshot to make daily backups of a MYSQL database on a server. Everything works perfectly except the ownership of the directory is root:root. I would like it to be root:backups to enable me to easily download these backups to a local computer over an ssh connection. (My ssh user has sudo permissions but I don't want to have to type in the password every time I make a local copy of the backups. This user is part of the *backups* group.) In /etc/rsnapshot.conf I have this line:
backup_script   /usr/local/bin/backup_mysql.sh  mysql/
And in the file /usr/local/bin/backup_mysql.sh I have:
umask 0077
# back up the database
# timestamp for the dump filename; %Y%m%d-%H%M%S is assumed to be the intended format
date=$(date +"%Y%m%d-%H%M%S")
destination="$date-data.sql.gz"
/usr/bin/mysqldump --defaults-extra-file=/root/.my.cnf --single-transaction --quick --lock-tables=false --routines data | gzip -c > "$destination"
/bin/chmod 660 "$destination"
/bin/chown root:backups "$destination"
The file structure that results is:
/backups/
├── [drwxrwx---]  daily.0
│   └── [drwxrwx---]  mysql [error opening dir]
├── [drwxrwx---]  daily.1
│   └── [drwxrwx---]  mysql [error opening dir]
The ownership of the backup data file itself is correct, as root:backups, but I cannot access that file because the folder it is in, mysql, belongs to root:root.
Kit Johnson (161 rep)
May 8, 2020, 07:17 AM • Last activity: May 22, 2020, 08:11 AM
0 votes
1 answers
543 views
Exclude folders in rsnapshot containing docker except docker/volumes
I configured rsnapshot, to exclude all foders containing the string `docker/` in its path in the file `rsnapshot_exclude` docker/* but now I want to **include** one particular folder: /var/lib/docker/volumes/ How do I define this in `rsnapshot.conf`? maybe define like this somehow? docker/(?!volumes...
I configured rsnapshot to exclude all folders containing the string docker/ in their path, via this line in the file rsnapshot_exclude:
docker/*
but now I want to **include** one particular folder:
/var/lib/docker/volumes/
How do I define this in rsnapshot.conf? Maybe something like this?
docker/(?!volumes))
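rsync reads rsnapshot's exclude file via --exclude-from, and such a file may also contain "+ "-prefixed include rules. Rules are applied in order and rsync never descends into an excluded directory, so the include has to appear before the exclude that would otherwise hide it. An untested sketch of one possible rsnapshot_exclude (whether the patterns need to be anchored like this depends on the backup points and on --relative being in rsync_long_args):
+ /var/lib/docker/volumes/
- /var/lib/docker/*
The trailing * on the exclude only matches direct children of docker/, so once volumes/ itself is let through, everything underneath it is transferred as usual.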
rubo77 (30435 rep)
Aug 1, 2019, 12:34 PM • Last activity: Feb 3, 2020, 02:51 PM
1 votes
1 answers
3286 views
How to restore file from rsnapshot backup with correct permissions
I have installed a fresh version of Debian. I would like to restore some files in `home` with my backuped version as well as some under `etc`. The backup was created with rsnapshot. The point which is not that clear to me is as which user should I run the restore command. Let me make an example. Ass...
I have installed a fresh version of Debian. I would like to restore some files in home from my backed-up version, as well as some under etc. The backup was created with rsnapshot. The point which is not clear to me is which user I should run the restore command as. Let me give an example. Assume I would like to restore my rsnapshot file within /etc/cron.d/. By default I see the following permissions in the freshly installed version:
station:~$ ls -l /etc/cron.d/rsnapshot
-rw-r--r-- 1 root root 472 Mar 26 2016 /etc/cron.d/rsnapshot
In this case I need to run the rsync command as root or with sudo, i.e. I was doing this for testing:
@thinkstation:~$ rsync /media/3985DAA24356D774/rsnapshot/station/daily.0/etc/etc/cron.d/rsnapshot /etc/cron.d/rTest
But this leads to the following permissions:
station:~$ ls -l /etc/cron.d/rTest
-rwxr-xr-x 1 root root 513 Sep 24 19:01 /etc/cron.d/rTest
and this doesn't match the system default ones.
Another example, which is more frustrating: I was running the rsync command as a normal user (non-root, no sudo). The files I restored were a project which is version-controlled via git. I did the restore because not all files were on GitHub. After the restore I saw a lot of differences because of the changed permissions (which I don't understand, as the files didn't involve any sudo rights and were transferred to my home directory). So I'm not sure if I made a mistake in the past while doing the backup, or if I'm doing something wrong in restoring the files. In any case I would like to know how I can resolve this issue.
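For reference, rsync only preserves mode, owner, group and mtime when asked to: -a implies -p, -o, -g and -t (among others), and the owner/group part additionally requires running the receiving side as root. So a restore that should come back with the original -rw-r--r-- root root would look something like this sketch, reusing the paths from the question:
sudo rsync -a /media/3985DAA24356D774/rsnapshot/station/daily.0/etc/etc/cron.d/rsnapshot /etc/cron.d/rsnapshot
ls -l /etc/cron.d/rsnapshot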
math (31 rep)
Sep 23, 2017, 01:57 PM • Last activity: Nov 30, 2019, 12:03 AM
1 votes
0 answers
191 views
rsnapshot running for long time after deleting a large number of file
We have been running a nightly backup on our Linux server for many years without issue. However, since deleting a large number of files (8gb worth) from one of the folders being backed up the backup process has still been running well into the next day and has had to be killed off. I initially thoug...
We have been running a nightly backup on our Linux server for many years without issue. However, since deleting a large number of files (8 GB worth) from one of the folders being backed up, the backup process has been running well into the next day and has had to be killed off. I initially thought it might be because it was trying to delete the 8 GB off the rd1000, but after wiping one of the rd1000 discs and having the same issue I'm not so sure now. The backup process is 'rsnapshot -c $CONFIGFILE daily'. Here is $CONFIGFILE:
#Rsnapshot backup parameters - Fri 29/11/19 02:00 am
#---------------------------------------------------

config_version          1.2
verbose                 5
loglevel                5
logfile                 /var/log/rsnapshot
lockfile                /var/run/rsnapshot.pid
cmd_cp                  /bin/cp
cmd_rm                  /bin/rm
cmd_rsync               /usr/bin/rsync
cmd_ssh         /usr/bin/ssh
cmd_logger              /usr/bin/logger
cmd_du                  /usr/bin/du
rsync_long_args         --delete --numeric-ids --relative --delete-excluded --stats
link_dest               1
use_lazy_deletes        0
snapshot_root           /rd1000/backups/
interval                daily   1
backup                  /usr/sc4/pm/    localhost/
backup                  /usr/sc4/seachange/     localhost/
backup                  /usr/seawriter/ localhost/
backup                  /etc/   localhost/
backup                  /usr/bin/       localhost/
backup                  /usr/leanlifts/ localhost/
backup                  /usr/xml/       localhost/
backup                  /usr/tsh/       localhost/
backup                  /home/  localhost/
backup                  /usr/local/     localhost/
backup                  /var/spool/cron/        localhost/
backup                  /usr/spool/tsh/ localhost/
backup                  root@Brundleweb:/tmp/web/       Brundleweb/
backup                  root@Brundleweb:/tmp/mysql.bkup/                Brundleweb/dbase        rsync_long_args=--delete --numeric-ids --delete-excluded --stats
This is what is in /var/log/rsnapshot before I kill the process
[29/Nov/2019:03:00:59] require Lchown
[29/Nov/2019:03:00:59] Lchown module not found
[29/Nov/2019:03:00:59] /usr/bin/rsnapshot -c /tmp/rsnapshot.conf.24861 daily: started
[29/Nov/2019:03:00:59] Setting locale to POSIX "C"
[29/Nov/2019:03:00:59] echo 30828 > /var/run/rsnapshot.pid
[29/Nov/2019:03:00:59] /usr/bin/rsync -av --delete --numeric-ids --relative --delete-excluded --stats /usr/sc4/pm /rd1000/backups/daily.0/localhost/
Any help anybody could give me would be greatly appreciated.
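Before killing a run like this, it can help to see which step is actually still going (the rotation's rm -rf, or one of the rsync backup points); two generic checks, nothing rsnapshot-specific, as a sketch:
# elapsed time of whatever the rsnapshot run has spawned
ps -eo etime,args | grep -E 'rsnapshot|rsync|rm -rf' | grep -v grep
# loglevel 5 logs every command, so the last lines show the current step
tail -f /var/log/rsnapshot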
Cookie (11 rep)
Nov 29, 2019, 02:12 PM