Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
23
votes
5
answers
141946
views
Invalid cross-device link while Hardlinking in the same file system
I have a **/home/myuser/Desktop/rc/.netrc** file that I want to hard-link into **/root**, i.e. the home directory of the **root** user.
When I do:
ln /home/myuser/Desktop/rc/.netrc /root
it gives the following error:
> ln: creating hard link `/root/.netrc' => `.netrc': Invalid cross-device link
but it works when I hard-link the file into **myuser**'s home, i.e. to **/home/myuser**.
So, what's the problem? Why does it say "invalid cross-device link" when there is only one file system here?
**P.S.** I am using **RHEL 6**.
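A quick way to check whether two paths really live on the same filesystem is to compare their device IDs with `stat`; `link(2)` fails with EXDEV whenever they differ, as happens with separate partitions or bind mounts. A minimal sketch using stand-in paths under `/tmp` (substitute the real `/home/myuser/Desktop/rc` and `/root`):

```shell
# Stand-in directories; both land on the same filesystem here.
rm -rf /tmp/xdev-demo
src=/tmp/xdev-demo/home; dst=/tmp/xdev-demo/root
mkdir -p "$src" "$dst"
touch "$src/.netrc"

# st_dev (%d) must match for ln to succeed.
if [ "$(stat -c %d "$src")" = "$(stat -c %d "$dst")" ]; then
    ln "$src/.netrc" "$dst/.netrc"
    echo "hard link created"
else
    echo "different devices: ln would fail with 'Invalid cross-device link'"
fi
```

If the two device numbers differ on the real system, `/root` and `/home` are not on one filesystem after all (or one of them is reached through a bind mount), which is exactly what EXDEV reports.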
Elvin Aslanov
(387 rep)
Jun 12, 2013, 09:48 AM
• Last activity: Jul 19, 2025, 11:33 AM
1
votes
1
answers
2219
views
Git - how to add/link subfolders into one git-repository directory
Assuming I have a file structure like this:
├── Project-1/
│   ├── files/
│   └── special-files/
├── Project-2/
│   ├── files/
│   └── special-files/
└── Project-3/
    ├── files/
    └── special-files/
Now I want to create a Git repository that includes all the `special-files` folders. If they were files, I could create hard links, `ln ./Project-1/special-files ./Git-Project/special-files-1` and so on, so I would get:
Git-Project/
├── .git
├── .gitignore
├── special-files-1/
├── special-files-2/
└── special-files-3/
However, hard links do not work with directories, and Git stores symlinks as links rather than following them. **Is there a way to collect/link these folders into a Git repository folder?**
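One workaround people sometimes use (a sketch, assuming the goal is tracking the contents rather than relocating them) is to initialize the repository at the common parent and whitelist only the `special-files` subtrees via `.gitignore`, so no linking is needed at all:

```shell
set -e
rm -rf /tmp/gitlink-demo
mkdir -p /tmp/gitlink-demo && cd /tmp/gitlink-demo
mkdir -p Project-1/special-files Project-2/special-files
touch Project-1/special-files/a Project-2/special-files/b
git init -q .

# Ignore everything, then re-include only the special-files subtrees.
cat > .gitignore <<'EOF'
/*
!/.gitignore
!/Project-*/
/Project-*/*
!/Project-*/special-files/
EOF

git add -A
git ls-files   # only .gitignore and the special-files contents are tracked
```

The trade-off is that the repository root is the parent of all projects, not a separate `Git-Project/` folder; but the `files/` directories never enter version control.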
nath
(6094 rep)
Aug 5, 2021, 04:48 PM
• Last activity: Jul 7, 2025, 01:01 PM
0
votes
1
answers
5173
views
getting "Invalid cross-device link" doing a cp -l - same volume
This command, run as Admin (not root) in the directory `/volume2/Media/AllOurMedia/1 Our Pictures/Cataloged ( 3-5)`:
cp -l HL_Test.JPG /volume2/phototest/
got this response:
cp: cannot create hard link '/volume2/phototest/HL_Test.JPG' to 'HL_Test.JPG': Invalid cross-device link
I believe the file system is the same, but I can't interpret the output of `/proc/self/mountinfo`, the `blkid` command, or the `mount` command.
Symbolic links will not work for my purposes.
This is on the Synology NAS version of Linux.
$ cat /etc/fstab
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/mapper/cachedev_1 /volume3 btrfs auto_reclaim_space,ssd,synoacl,relatime,nodev 0 0
/dev/mapper/cachedev_0 /volume4 btrfs auto_reclaim_space,ssd,synoacl,relatime,nodev 0 0
/dev/mapper/cachedev_2 /volume2 btrfs auto_reclaim_space,ssd,synoacl,relatime,nodev 0 0
/dev/mapper/cachedev_3 /volume1 btrfs auto_reclaim_space,ssd,synoacl,relatime,nodev 0 0
$ blkid
/dev/md0: LABEL="1.42.6-15090" UUID="87bac7d0-4cbf-4f26-8001-39a4f925cd09" TYPE="ext4"
/dev/mapper/cachedev_0: LABEL="2022.01.11-00:53:52 v42218" UUID="28db81a6-3f7f-42e8-9be7-dda711abf2d6" UUID_SUB="72557eeb-3c15-48c6-8b31-6c17f5c52304" TYPE="btrfs"
/dev/mapper/cachedev_1: LABEL="2022.01.11-00:26:16 v42218" UUID="7ed3fcf4-aeef-4a6a-ab91-968582b18bb7" UUID_SUB="b7219b75-ee21-480b-9bea-1104fdae19b8" TYPE="btrfs"
/dev/mapper/cachedev_2: LABEL="2017.05.30-11:24:09 v15101" UUID="df70890c-7fb8-4afa-a71e-f8805385a1aa" UUID_SUB="f1036abc-e9a3-49de-a972-c1590f3c83df" TYPE="btrfs"
/dev/mapper/cachedev_3: LABEL="2017.05.30-11:09:16 v15101" UUID="08a5afe9-d7f4-46f7-a342-2daea0babd51" UUID_SUB="85b94c31-6975-4db9-b5cb-931abe1a1152" TYPE="btrfs"
/dev/nvme0n1p1: UUID="b0885b18-6ade-6877-7086-234291332fff" UUID_SUB="e3f7fd80-152e-3cec-db39-c70ed0056da9" LABEL="NAS49:6" TYPE="linux_raid_member" PARTUUID="bff908a7-01"
/dev/nvme1n1p1: UUID="b0885b18-6ade-6877-7086-234291332fff" UUID_SUB="17cf5beb-e235-e4c0-62f2-932d993b01ab" LABEL="NAS49:6" TYPE="linux_raid_member" PARTUUID="982d332d-01"
/dev/sata1p1: UUID="df1a96d5-97de-e55a-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="94c2424b-8771-4263-8c76-44458f3ea4ad"
/dev/sata1p2: UUID="024f9ba3-13f9-b821-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="36f3de4f-418e-4f2e-85c2-998ae89d453a"
/dev/sata1p5: UUID="ddaba4d4-f65c-7322-2d60-826a27eaf650" UUID_SUB="6270eb2b-e08e-21c9-1212-b80ae2f40225" LABEL="NAS49:4" TYPE="linux_raid_member" PARTUUID="e36b8065-f5b6-43de-9735-c0f1d9ec264b"
/dev/sata2p1: UUID="df1a96d5-97de-e55a-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="d07fd25b-ea22-4214-99ad-26b14e4733ce"
/dev/sata2p2: UUID="024f9ba3-13f9-b821-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="139e801c-f998-4b71-9c41-0afbb330229f"
/dev/sata2p5: UUID="2300b63b-06a6-2fa5-4984-dc6908a14a29" UUID_SUB="41598730-05e5-5614-fb82-30d82443e1ae" LABEL="NAS49:5" TYPE="linux_raid_member" PARTUUID="8cde27db-286d-4325-8223-9d9ce21367b1"
/dev/sata3p1: UUID="df1a96d5-97de-e55a-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="6b5a60b3-cd53-470f-8567-312ce73e673f"
/dev/sata3p2: UUID="024f9ba3-13f9-b821-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="37750ba6-8645-4cda-acca-0c9ec717cfee"
/dev/sata3p5: UUID="4ca9422b-7156-5a45-a116-0114b098d99f" UUID_SUB="f42d1a26-ab90-4f26-2b1d-015167c29397" LABEL="NAS49:3" TYPE="linux_raid_member" PARTUUID="5a6ac30e-560c-498d-9e0d-c835a95df4a5"
/dev/sata4p1: UUID="df1a96d5-97de-e55a-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="929a3976-46f1-4d92-acd3-1e2831503d19"
/dev/sata4p2: UUID="024f9ba3-13f9-b821-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="147f96b0-ecbd-4050-b86b-3da2ab7cb687"
/dev/sata4p5: UUID="9f7e7e0e-a469-9617-f0db-e01aaf1e7025" UUID_SUB="9ed4fd01-ddbb-614a-e3e4-63bf59f5a872" LABEL="NAS49:2" TYPE="linux_raid_member" PARTUUID="e9025d5e-ae3d-4611-a8c4-6d5f46a39cdd"
/dev/zram0: UUID="c1cee02d-5500-49b8-a9b8-8fcabab4095f" TYPE="swap"
/dev/zram1: UUID="0976baee-86d6-4c08-a7c0-10d2d69a2224" TYPE="swap"
/dev/zram2: UUID="ee1c382c-3517-4648-a9ee-ba531625f8fc" TYPE="swap"
/dev/zram3: UUID="a4ba2853-91b4-4dd7-ad5d-f42e02ff52f6" TYPE="swap"
/dev/md1: UUID="9d0bcb90-72ba-4ebe-aaeb-50dfd83ba20e" TYPE="swap"
/dev/synoboot1: SEC_TYPE="msdos" UUID="10EE-589C" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="90b655a2-5390-4d54-bb66-491e328704de"
/dev/synoboot2: UUID="45e5b07d-4783-4867-a369-f99c0cd1e610" TYPE="ext2" PARTLABEL="Linux filesystem" PARTUUID="93da2cbf-de5c-4a54-b9e2-90e634104aab"
/dev/md6: UUID="Jx6nfA-Oo12-jwv5-KkUp-4R7H-Hlsf-sXDuSf" TYPE="LVM2_member"
/dev/md4: UUID="mGLGr0-l6HF-C96v-wGhD-gXrE-2ibn-C3JyqZ" TYPE="LVM2_member"
/dev/md3: UUID="hu4FBb-d2yT-Whq1-NpRa-dIAz-tXz6-YIjRec" TYPE="LVM2_member"
/dev/md5: UUID="EqPEZc-22l2-CJp4-N8VH-y7c6-pLrH-TJ6ypJ" TYPE="LVM2_member"
/dev/md2: UUID="RvA1WH-mr2k-X2gG-ZnWD-uhY5-ZodO-1BnKkH" TYPE="LVM2_member"
/dev/loop0: LABEL="Windows" UUID="D4D0FCD1D0FCBAB6" TYPE="ntfs"
/dev/loop1: LABEL="LENOVO" UUID="22486CF1486CC4DF" TYPE="ntfs"
/dev/usb1p1: LABEL="New Volume" UUID="F61C55311C54EDDD" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="2c88cf31-2973-4e45-b729-d835b9d4ff75"
$ cat /proc/self/mountinfo
18 0 9:0 / / rw,noatime shared:1 - ext4 /dev/md0 rw,data=ordered
16 18 0:16 / /sys rw,nosuid,nodev,noexec,relatime shared:2 - sysfs sysfs rw
17 18 0:4 / /proc rw,nosuid,nodev,noexec,relatime shared:8 - proc proc rw
19 18 0:6 / /dev rw,nosuid shared:9 - devtmpfs devtmpfs rw,size=1891376k,nr_inodes=472844,mode=755
20 16 0:12 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:3 - securityfs securityfs rw
21 19 0:17 / /dev/shm rw,nosuid,nodev shared:10 - tmpfs tmpfs rw
22 19 0:14 / /dev/pts rw,nosuid,noexec,relatime shared:11 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
23 18 0:18 / /run rw,nodev shared:12 - tmpfs tmpfs rw,mode=755
24 16 0:19 / /sys/fs/cgroup ro,nosuid,nodev,noexec shared:4 - tmpfs tmpfs ro,mode=755
25 24 0:20 / /sys/fs/cgroup/systemd rw,nosuid,nodev,noexec,relatime shared:5 - cgroup cgroup rw,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd
26 24 0:21 / /sys/fs/cgroup/synomonitor rw,nosuid,nodev,noexec,relatime shared:6 - cgroup cgroup rw,name=synomonitor
28 24 0:23 / /sys/fs/cgroup/devices rw,nosuid,nodev,noexec,relatime shared:13 - cgroup cgroup rw,devices
29 24 0:24 / /sys/fs/cgroup/cpuacct rw,nosuid,nodev,noexec,relatime shared:14 - cgroup cgroup rw,cpuacct
30 24 0:25 / /sys/fs/cgroup/blkio rw,nosuid,nodev,noexec,relatime shared:15 - cgroup cgroup rw,blkio
31 24 0:26 / /sys/fs/cgroup/memory rw,nosuid,nodev,noexec,relatime shared:16 - cgroup cgroup rw,memory
32 24 0:27 / /sys/fs/cgroup/cpuset rw,nosuid,nodev,noexec,relatime shared:17 - cgroup cgroup rw,cpuset
33 24 0:28 / /sys/fs/cgroup/cpu rw,nosuid,nodev,noexec,relatime shared:18 - cgroup cgroup rw,cpu
34 24 0:29 / /sys/fs/cgroup/freezer rw,nosuid,nodev,noexec,relatime shared:19 - cgroup cgroup rw,freezer
35 18 0:30 / /tmp rw,nosuid,nodev,noexec shared:20 - tmpfs tmpfs rw
36 16 0:7 / /sys/kernel/debug rw,nosuid,nodev,noexec,relatime shared:21 - debugfs debugfs rw
39 17 0:6 /bus/usb /proc/bus/usb rw,nosuid shared:9 - devtmpfs devtmpfs rw,size=1891376k,nr_inodes=472844,mode=755
63 16 0:33 / /sys/kernel/config rw,nosuid,nodev,noexec,relatime shared:22 - configfs configfs rw
71 18 0:35 /@syno /volume3 rw,nodev,relatime shared:23 - btrfs /dev/mapper/cachedev_1 rw,ssd,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50,block_group_cache_tree,syno_allocator,subvolid=256,subvol=/@syno
69 18 0:37 /@syno /volume4 rw,nodev,relatime shared:24 - btrfs /dev/mapper/cachedev_0 rw,ssd,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50,block_group_cache_tree,syno_allocator,subvolid=256,subvol=/@syno
65 18 0:34 /@syno /volume2 rw,nodev,relatime shared:25 - btrfs /dev/mapper/cachedev_2 rw,ssd,synoacl,nospace_cache,auto_reclaim_space,metadata_ratio=50,syno_allocator,subvolid=257,subvol=/@syno
66 18 0:36 /@syno /volume1 rw,nodev,relatime shared:26 - btrfs /dev/mapper/cachedev_3 rw,ssd,synoacl,nospace_cache,auto_reclaim_space,metadata_ratio=50,syno_allocator,subvolid=257,subvol=/@syno
76 18 0:33 / /config rw,nosuid,nodev,noexec,relatime shared:28 - configfs none rw
80 65 0:34 /@syno/@docker/btrfs /volume2/@docker/btrfs rw,nodev,relatime shared:25 - btrfs /dev/mapper/cachedev_2 rw,ssd,synoacl,nospace_cache,auto_reclaim_space,metadata_ratio=50,syno_allocator,subvolid=257,subvol=/@syno/@docker/btrfs
131 23 0:3 net: /run/docker/netns/c681364feff0 rw shared:27 - nsfs nsfs rw
27 66 0:22 / /volume1/BU/ActiveBackupData rw,nosuid,nodev,relatime shared:7 - fuse.synodedup-fused synodedup-fused rw,user_id=0,group_id=0,default_permissions,allow_other
$ mount
/dev/md0 on / type ext4 (rw,noatime,data=ordered)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=1891376k,nr_inodes=472844,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
cgroup on /sys/fs/cgroup/synomonitor type cgroup (rw,nosuid,nodev,noexec,relatime,name=synomonitor)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu type cgroup (rw,nosuid,nodev,noexec,relatime,cpu)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /proc/bus/usb type devtmpfs (rw,nosuid,size=1891376k,nr_inodes=472844,mode=755)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
/dev/mapper/cachedev_1 on /volume3 type btrfs (rw,nodev,relatime,ssd,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50,block_group_cache_tree,syno_allocator,subvolid=256,subvol=/@syno)
/dev/mapper/cachedev_0 on /volume4 type btrfs (rw,nodev,relatime,ssd,synoacl,space_cache=v2,auto_reclaim_space,metadata_ratio=50,block_group_cache_tree,syno_allocator,subvolid=256,subvol=/@syno)
/dev/mapper/cachedev_2 on /volume2 type btrfs (rw,nodev,relatime,ssd,synoacl,nospace_cache,auto_reclaim_space,metadata_ratio=50,syno_allocator,subvolid=257,subvol=/@syno)
/dev/mapper/cachedev_3 on /volume1 type btrfs (rw,nodev,relatime,ssd,synoacl,nospace_cache,auto_reclaim_space,metadata_ratio=50,syno_allocator,subvolid=257,subvol=/@syno)
none on /config type configfs (rw,nosuid,nodev,noexec,relatime)
/dev/mapper/cachedev_2 on /volume2/@docker/btrfs type btrfs (rw,nodev,relatime,ssd,synoacl,nospace_cache,auto_reclaim_space,metadata_ratio=50,syno_allocator,subvolid=257,subvol=/@syno/@docker/btrfs)
nsfs on /run/docker/netns/c681364feff0 type nsfs (rw)
synodedup-fused on /volume1/BU/ActiveBackupData type fuse.synodedup-fused (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
$ df -h .
Filesystem Size Used Avail Use% Mounted on
- 5.3T 3.0T 2.3T 57% /volume2/Media
$ cd /volume2/phototest
$ df -h .
Filesystem Size Used Avail Use% Mounted on
- 5.3T 3.0T 2.3T 57% /volume2/phototest
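One plausible explanation not visible in the mount output: on btrfs, every subvolume reports its own `st_dev`, so two directories under the same `/volume2` mount can still count as different "devices" for `link(2)` if the Synology shares are separate subvolumes. Comparing the device IDs of the two directories is the quickest check (stand-in paths under `/tmp` below; on the NAS, use the real `/volume2/Media/...` and `/volume2/phototest`):

```shell
rm -rf /tmp/stdev-demo
mkdir -p /tmp/stdev-demo/Media /tmp/stdev-demo/phototest
a=/tmp/stdev-demo/Media; b=/tmp/stdev-demo/phototest

# link(2) requires both paths to report the same st_dev (%d).
# On btrfs, each subvolume gets a distinct st_dev even under one mount.
da=$(stat -c %d "$a"); db=$(stat -c %d "$b")
if [ "$da" = "$db" ]; then
    echo "same st_dev: hard links should work"
else
    echo "different st_dev ($da vs $db): cp -l will fail with EXDEV"
fi
```

If the real directories report different numbers, they are distinct subvolumes, and hard-linking between them is impossible regardless of the shared mount point.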
Bones
(1 rep)
Nov 20, 2022, 11:21 AM
• Last activity: Jun 4, 2025, 12:04 AM
0
votes
2
answers
81
views
How to programmatically deduplicate files into hard links while maintaining the time stamps of the containing directories?
Continuing https://unix.stackexchange.com/a/22822 , how can one deduplicate files, given as a list, into hard links while maintaining the timestamps of their directories? Unfortunately, `hardlink` changes the time stamps:
$ mkdir d1
$ mkdir d2
$ mkdir d3
$ echo "content" > d1/f1
$ echo "content" > d2/f2
$ echo "content" > d3/f3
$ ls -la --full-time d1 d2 d3
d1:
total 4
drwxr-xr-x 2 username username 60 2025-04-23 17:26:18.624828807 +0200 .
drwxrwxrwt 29 root root 820 2025-04-23 17:26:07.397001442 +0200 ..
-rw-r--r-- 1 username username 8 2025-04-23 17:26:18.624828807 +0200 f1
d2:
total 4
drwxr-xr-x 2 username username 60 2025-04-23 17:26:26.016715230 +0200 .
drwxrwxrwt 29 root root 820 2025-04-23 17:26:07.397001442 +0200 ..
-rw-r--r-- 1 username username 8 2025-04-23 17:26:26.016715230 +0200 f2
d3:
total 4
drwxr-xr-x 2 username username 60 2025-04-23 17:26:29.296664852 +0200 .
drwxrwxrwt 29 root root 820 2025-04-23 17:26:07.397001442 +0200 ..
-rw-r--r-- 1 username username 8 2025-04-23 17:26:29.296664852 +0200 f3
$ hardlink -v -c -M -O -y memcmp d1/f1 d2/f2 d3/f3
Linking /tmp/d1/f1 to /tmp/d2/f2 (-8 B)
Linking /tmp/d1/f1 to /tmp/d3/f3 (-8 B)
Mode: real
Method: memcmp
Files: 3
Linked: 2 files
Compared: 0 xattrs
Compared: 2 files
Saved: 16 B
Duration: 0.000165 seconds
$ ls -la --full-time d1 d2 d3
d1:
total 4
drwxr-xr-x 2 username username 60 2025-04-23 17:26:18.624828807 +0200 .
drwxrwxrwt 29 root root 820 2025-04-23 17:27:19.631893228 +0200 ..
-rw-r--r-- 3 username username 8 2025-04-23 17:26:18.624828807 +0200 f1
d2:
total 4
drwxr-xr-x 2 username username 60 2025-04-23 17:28:45.922576280 +0200 .
drwxrwxrwt 29 root root 820 2025-04-23 17:27:19.631893228 +0200 ..
-rw-r--r-- 3 username username 8 2025-04-23 17:26:18.624828807 +0200 f2
d3:
total 4
drwxr-xr-x 2 username username 60 2025-04-23 17:28:45.922576280 +0200 .
drwxrwxrwt 29 root root 820 2025-04-23 17:27:19.631893228 +0200 ..
-rw-r--r-- 3 username username 8 2025-04-23 17:26:18.624828807 +0200 f3
As we see, two files have been replaced with hard links, which is good. However, the time stamps of `d2` and `d3` have been updated. That's NOT what we want.
Ideally, we'd like to have a command that gets a list of files from
find /media/my_NTFS_drive -type f -size $(ls -la -- original_file| cut -d' ' -f5)c -exec cmp -s original_file {} \; -exec ls -t {} + 2>/dev/null
and converts them into hard links to `original_file`. If the time stamps of the hard-linked files are to be the same, change them to the oldest among the time stamps of `original_file` and its copies. The time stamps of the directories containing `original_file` and its copies have to be retained. Clearly, all this has to be automated. (No question we can do it with manual inspection and `touch`. From a user's viewpoint, it could be done with just another switch to `hardlink`. As the task seems rather standard, my hope is that in the past decades someone has already written a standalone program, perhaps even a shell script.)
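Absent such a switch, one automatable workaround is to record each parent directory's timestamps with `touch -r` before deduplicating and restore them afterwards. A sketch of the idea, using plain `ln -f` to stand in for the `hardlink` run:

```shell
set -e
base=/tmp/dedup-demo; rm -rf "$base"
mkdir -p "$base/d1" "$base/d2"
echo "content" > "$base/d1/f1"
echo "content" > "$base/d2/f2"

# 1. Capture each directory's timestamps in a stamp file.
for d in "$base"/d1 "$base"/d2; do touch -r "$d" "$d.stamp"; done

# 2. Deduplicate (plain ln -f stands in for the hardlink tool here).
ln -f "$base/d1/f1" "$base/d2/f2"

# 3. Restore the saved timestamps and remove the stamp files.
for d in "$base"/d1 "$base"/d2; do touch -r "$d.stamp" "$d" && rm "$d.stamp"; done
```

After step 3, the directories carry their pre-deduplication mtimes even though their entries were rewritten in step 2.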
AlMa1r
(1 rep)
Apr 23, 2025, 03:38 PM
• Last activity: Apr 23, 2025, 06:49 PM
1
votes
0
answers
58
views
How to get `rsync --link-dest=` hard-link moved/renamed files
I am trying to set up rsnapshot for backing up a remote server.
However, I realize that my issue is with rsync (rsnapshot's
back-end), not with rsnapshot itself. Thus I am focusing the question on
rsync.
My goal is to make periodic snapshots of a remote server, while using
hard links to avoid duplicating unchanged files on disk. The process is
very nicely explained in the blog post Easy Automated Snapshot-Style
Backups with Linux and Rsync. As long as the original files are
not renamed or moved around, everything works as expected:
# Build the original hierarchy we want to track.
mkdir original
echo "Hello, World!" > original/file1
# Make two snapshots.
rsync -a original/ snapshot1/
rsync -a --link-dest=../snapshot1 original/ snapshot2/
# Check the inode numbers.
ls -i1 snapshot{1,2}/*
The last command above shows that the two snapshot copies of `file1` share the same inode number. So far so good. The problem is that at some point I may need to rename/reorganize the files within the original hierarchy. Then, the hard-linking fails:
# Rename a file.
mv original/file1 original/renamed1
# Make a third snapshot.
rsync -a --link-dest=../snapshot2 --fuzzy --fuzzy original/ snapshot3/
# Check the inode numbers.
ls -i1 snapshot{1,2,3}/*
The test above shows that `snapshot3/renamed1` has a different inode number: it is a fresh copy. I expected the repeated `--fuzzy` option to hard-link the file despite its changed name, as per the manual:
--fuzzy, -y
This option tells rsync that it should look for a basis file for
any destination file that is missing. The current algorithm
looks in the same directory as the destination file for either a
file that has an identical size and modified-time, or a
similarly-named file. If found, rsync uses the fuzzy basis file
to try to speed up the transfer.
If the option is repeated, the fuzzy scan will also be done in
any matching alternate destination directories that are
specified via --compare-dest, --copy-dest, or --link-dest.
Note: as per this answer , I tried replacing `original/` with `localhost:$PWD/original/` in the rsync command line. It made no difference.
Why does rsync fail to hard-link here? Is there a way to convince it to
do it? If not, any suggested workaround?
------------------------------------------------------------------
**Edit**: As suggested by @meuh, I tried adding the option `--debug=FUZZY2`. It printed the messages:
fuzzy size/modtime match for ../snapshot2/file1
fuzzy basis selected for renamed1: ../snapshot2/file1
I then tried syncing a larger file (a ∼15 MB copy of the Linux kernel) through ssh (rsync source = `localhost:$PWD/original/`) with the options `-vvv --debug=FUZZY2`. This gave the same messages as above, plus many more messages asserting hash matches and, at the end:
total: matches=3868 hash_hits=3868 false_alarms=0 data=0
Here is the (almost) complete rsync debug output:
opening connection using: ssh localhost rsync --server --sender -vvvlogDtpre.iLsfxCIvu . /home/edgar/tmp/rsnapshot/original/ (8 args)
receiving incremental file list
server_sender starting pid=42683
[sender] make_file(.,*,0)
[sender] pushing local filters for /home/edgar/tmp/rsnapshot/original/
[sender] make_file(renamed1,*,2)
send_file_list done
send_files starting
recv_file_name(.)
recv_file_name(renamed1)
received 2 names
recv_file_list done
get_local_name count=2 snapshot3/
created directory snapshot3
generator starting pid=42643
delta-transmission enabled
recv_generator(.,0)
./ is uptodate
recv_files(2) starting
set modtime, atime of . to (1743753557) 2025/04/04 09:59:17, (1743753866) 2025/04/04 10:04:26
recv_generator(.,1)
recv_generator(renamed1,2)
[generator] make_file(../snapshot2/file1,*,1)
fuzzy size/modtime match for ../snapshot2/file1
fuzzy basis selected for renamed1: ../snapshot2/file1
generating and sending sums for 2
count=3868 rem=2560 blength=3864 s2length=2 flength=14944648
generate_files phase=1
send_files(2, /home/edgar/tmp/rsnapshot/original/renamed1)
send_files mapped /home/edgar/tmp/rsnapshot/original/renamed1 of size 14944648
calling match_sums /home/edgar/tmp/rsnapshot/original/renamed1
built hash table
hash search b=3864 len=14944648
match at 0 last_match=0 j=0 len=3864 n=0
match at 3864 last_match=3864 j=1 len=3864 n=0
match at 7728 last_match=7728 j=2 len=3864 n=0
[...snip many lines, identical but for the numbers...]
match at 14934360 last_match=14934360 j=3865 len=3864 n=0
match at 14938224 last_match=14938224 j=3866 len=3864 n=0
match at 14942088 last_match=14942088 j=3867 len=2560 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=3868 matches=3868
sender finished /home/edgar/tmp/rsnapshot/original/renamed1
recv_files(renamed1)
renamed1
recv mapped ../snapshot2/file1 of size 14944648
got file_sum
set modtime, atime of .renamed1.l4pR7E to (1743753557) 2025/04/04 09:59:17, (1743753866) 2025/04/04 10:04:26
renaming .renamed1.l4pR7E to renamed1
set modtime, atime of . to (1743753557) 2025/04/04 09:59:17, (1743753866) 2025/04/04 10:04:26
send_files phase=1
recv_files phase=1
generate_files phase=2
send_files phase=2
send files finished
total: matches=3868 hash_hits=3868 false_alarms=0 data=0
recv_files phase=2
recv_files finished
generate_files phase=3
generate_files finished
sent 23,258 bytes received 249,330 bytes 545,176.00 bytes/sec
total size is 14,944,648 speedup is 54.83
client_run2 waiting on 42644
[generator] _exit_cleanup(code=0, file=main.c, line=1865): about to call exit(0)
It looks to me like rsync did notice that `original/renamed1` was identical to `snapshot2/file1`, and used this fact to speed up the transfer. But it is still unclear to me why it chose to copy `snapshot2/file1` (maybe chunk by chunk) instead of hard-linking it.
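One workaround while this remains unanswered: let rsync finish (its fuzzy matching at least keeps the transfer cheap), then run a post-pass that replaces identical files in the new snapshot with hard links into the previous one. Tools such as util-linux `hardlink` or `jdupes -L` do this efficiently; the naive sketch below shows the idea with plain `cmp` and `ln` on stand-in snapshot directories:

```shell
set -e
base=/tmp/relink-demo; rm -rf "$base"
mkdir -p "$base/snapshot2" "$base/snapshot3"
echo "data" > "$base/snapshot2/file1"
cp -p "$base/snapshot2/file1" "$base/snapshot3/renamed1"  # what rsync left us

# Replace each file in snapshot3 with a hard link to any identical
# file in snapshot2 (O(n^2) comparison; real tools hash sizes first).
for new in "$base"/snapshot3/*; do
    for old in "$base"/snapshot2/*; do
        if cmp -s "$new" "$old"; then
            ln -f "$old" "$new"
            break
        fi
    done
done
```

After the pass, the renamed copy shares its inode with the previous snapshot's file, recovering the disk space the rename cost.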
Edgar Bonet
(221 rep)
Apr 3, 2025, 02:20 PM
• Last activity: Apr 4, 2025, 08:25 AM
11
votes
6
answers
2589
views
Why are symbolic links more common than hard links in Unix/Linux?
I frequently find myself googling the difference between symbolic links and hard links. Each time, I conclude that hard links seem superior in most use cases, especially when Linux is my daily driver. However, I find this conclusion unsatisfactory because almost every tutorial, blog post, or Stack Overflow answer discussing links overwhelmingly recommends symbolic links instead.
My understanding so far:
- On a Unix system, a "**file**" is essentially an address pointing to data on disk.
- A hard link allows multiple addresses to reference the same data, making it a powerful tool.
For example, if I create a hard link to a file in `~/Documents`, I can access the same data from `~/Desktop`, and if I move the file from `~/Documents` to `~/Images`, the hard link still works seamlessly. This behavior reminds me of Windows shortcuts but without the fragility—hard links remain valid even after moving the original file. On the other hand, symbolic links break if the target file is moved, which seems like a significant drawback.
The only major advantage of symlinks I’ve found is that they can span different filesystems, whereas hard links are restricted to the same filesystem.
Given this, why are symbolic links the standard in most cases? What are the practical scenarios where symlinks are preferable over hard links?
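The move behavior described above is easy to verify: after renaming the target, the hard link still resolves (it is just a second name for the same inode), while the symlink dangles. A minimal sketch:

```shell
set -e
d=/tmp/linkcmp; rm -rf "$d"; mkdir -p "$d"; cd "$d"
echo "hello" > target
ln    target hard    # second directory entry for the same inode
ln -s target soft    # stores the path "target" as its contents

mv target moved      # rename the original

cat hard                              # still prints "hello"
cat soft 2>/dev/null || echo "soft link is dangling"
```

The flip side, relevant to the question, is that `hard` keeps the inode alive even if `moved` is deleted, which is sometimes exactly what an administrator does *not* want.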
CouscousPie
(127 rep)
Mar 8, 2025, 11:23 AM
• Last activity: Mar 11, 2025, 01:00 PM
6
votes
1
answers
1103
views
Is a filesystem without hard links practical as /home on Linux?
I have a novel filesystem in mind, but the structure makes it impossible to implement more than one hard link to each inode. ("." and ".." are handled differently.) It is not intended to be a root filesystem, but it could be used as a general purpose filesystem on /home, etc. - Are hard links common...
I have a novel filesystem in mind, but the structure makes it impossible to implement more than one hard link to each inode. ("." and ".." are handled differently.)
It is not intended to be a root filesystem, but it could be used as a general purpose filesystem on /home, etc.
- Are hard links commonly used outside of (Linux) system directories?
- What problems will not supporting hard links cause users to suffer?
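For the first question, `find -links +1` gives an empirical answer on any existing tree: it lists regular files whose link count exceeds one, so running it over `/home` or `/usr` shows how common multi-link files actually are. A self-contained sketch of the check:

```shell
set -e
d=/tmp/linkcensus; rm -rf "$d"; mkdir -p "$d"; cd "$d"
echo a > shared; ln shared alias   # one multiply-linked inode, two names
echo b > single                    # an ordinary file

# Regular files with st_nlink > 1:
find . -type f -links +1
```

Both names of the shared inode are reported; `single` is not. On a real `/home`, a near-empty result would suggest the proposed filesystem loses little in practice.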
fadedbee
(1113 rep)
Feb 13, 2025, 12:36 PM
• Last activity: Feb 13, 2025, 01:11 PM
29
votes
2
answers
15229
views
Dereferencing hard links
In the manual page of the `tar` command, an option for following hard links is listed:
-h, --dereference
follow symlinks; archive and dump the files they point to
--hard-dereference
follow hard links; archive and dump the files they refer to
How does `tar` know that a file is a hard link? How does it *follow* it? What if I don't choose this option? How does it **not** hard-dereference?
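Background for the question: a hard link is not a distinct file type, so all any program can observe is that a regular file's `st_nlink` is greater than 1 and that its `(st_dev, st_ino)` pair matches a file seen earlier in the walk. A sketch of that same check with `stat`:

```shell
set -e
d=/tmp/tarlink; rm -rf "$d"; mkdir -p "$d"; cd "$d"
echo "x" > a
ln a b   # a and b are now two names for one inode

# The three fields a hard-link detector keys on:
stat -c '%n dev=%d ino=%i nlink=%h' a b
```

When the device and inode numbers repeat, an archiver knows the second name is "the same file" and can record it as a link entry instead of storing the data twice.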
musa
(1028 rep)
Jul 14, 2012, 03:55 AM
• Last activity: Feb 1, 2025, 01:23 AM
33
votes
3
answers
29607
views
Why do hard links seem to take the same space as the originals?
Thanks to some good Q&A around here and [this page](http://linuxgazette.net/105/pitcher.html) , I now understand links. I see hard links refer to the same inode by a different name, and copies are different inodes with different names. Plus, soft links store the original file's name and path as their data, so if the file is moved, the link breaks.
So, I tested what I've learnt with some file ("saluton_mondo.cpp" below), made a hard and a soft link and a copy.
jmcf125@VMUbuntu:~$ ls -lh soft hard copy s*.cpp
-rw-rw-r-- 1 jmcf125 jmcf125 205 Aŭg 27 16:10 copy
-rw-rw-r-- 2 jmcf125 jmcf125 205 Aŭg 25 13:34 hard
-rw-rw-r-- 2 jmcf125 jmcf125 205 Aŭg 25 13:34 saluton_mondo.cpp
lrwxrwxrwx 1 jmcf125 jmcf125 17 Aŭg 27 16:09 soft -> saluton_mondo.cpp
I found it awkward, however, that the hard link has the same size as the original and, logically, as the copy. If the hard link and the original share the same inode, which holds the data, and differ only by file name, shouldn't the hard link take up only the space of its name, instead of 205 bytes? Or is that the size of the original file that `ls -lh` returns? But then how can I know what space the file name takes? [Here](https://unix.stackexchange.com/questions/20670/why-do-hard-links-exist#answer-20716) it says hard links have no size. Is their file name kept alongside the original file name? Where is the file name of hard links stored?
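The sizes above can be checked directly: `ls -l` reports the inode's size for every name pointing at it, while `du` counts each inode only once; the name itself costs nothing more than a directory entry. A sketch reproducing the 205-byte example:

```shell
set -e
d=/tmp/hlsize; rm -rf "$d"; mkdir -p "$d"; cd "$d"
head -c 205 /dev/zero > original   # a 205-byte file, as in the question
ln original hardlink

ls -li original hardlink   # same inode number, same 205-byte size for both
du -sk .                   # the 205 bytes of data are stored only once
```

So the "size" shown for the hard link is the shared inode's size; the extra name lives inside the containing directory's own data blocks.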
JMCF125
(1132 rep)
Aug 27, 2013, 03:48 PM
• Last activity: Dec 28, 2024, 03:48 PM
0
votes
2
answers
19176
views
How to create a hardlink of a file in a different directory in Linux?
Suppose the file name is `file1` in the home directory. How can I create a hard link in a different directory?
I tried:
ln -t file1 filehardlink > / home/dir2
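For reference, the attempted command puts `-t`'s arguments in the wrong order and uses a shell redirection that `ln` never sees. The two working forms look like this (demo paths under `/tmp` stand in for the home directory and `/home/dir2`):

```shell
set -e
rm -rf /tmp/lndemo
mkdir -p /tmp/lndemo/home /tmp/lndemo/dir2
cd /tmp/lndemo/home
echo "data" > file1

ln file1 /tmp/lndemo/dir2/filehardlink   # TARGET first, then LINK_NAME
ln -t /tmp/lndemo/dir2 file1             # -t takes the directory; link keeps the name file1
```

Both forms leave `file1` with a link count of 3: the original name plus the two new directory entries.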
Avinash Utekar
(181 rep)
Apr 24, 2017, 09:44 AM
• Last activity: Nov 20, 2024, 03:56 PM
5
votes
3
answers
2276
views
Hardlink that "split" when a file changes
Is it possible (in classical ext4, and/or in any other filesystem) to create two files that point to the same content, such that if one file is modified, the content is duplicated and the two files become different? It would be very practical for saving space on my hard drive.
**Context:** I have some heavy videos that I share on an ownCloud server that can be modified by lots of people, and therefore it may happen that some people modify/remove these files... I really would like to make sure I have a backup of these files, so for now I need to maintain two directories: the normal Nextcloud one, and one "backup" directory, which (at least) doubles the size required to store it.
I was thinking of creating a git repo on top of the Nextcloud directory, which would make the backup process much easier when new videos are added (just `git add .`), but `git` still doubles the space, between the blob and the working directory.
Ideally, a solution that can be combined with `git` would be awesome (i.e. one that allows me to create a history of the video changes, with commits, checkouts... without doubling the disk space).
Moreover, I'm curious to have a **solution for various file systems** (especially if you have tricks for filesystems that do not implement snapshots). Note that an LVM snapshot is not really a solution, as I don't want to back up my full volume, only some specific files/folders.
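What's described is exactly a copy-on-write clone (a "reflink"). On filesystems that support it (btrfs and XFS, notably not plain ext4), `cp --reflink` creates one; with `--reflink=auto` the command falls back to an ordinary copy elsewhere, so the sketch below runs on any filesystem, sharing extents only where possible:

```shell
set -e
d=/tmp/cowdemo; rm -rf "$d"; mkdir -p "$d/backup"; cd "$d"
echo "original video bytes" > video.mp4

# Shares data extents on btrfs/XFS; silently degrades to a plain copy elsewhere.
cp --reflink=auto video.mp4 backup/video.mp4

# Unlike a hard link, modifying one copy leaves the other untouched.
echo "vandalized" > video.mp4
cat backup/video.mp4
```

On a reflink-capable filesystem, the backup costs no extra space until someone actually rewrites the shared blocks, which is the "split on change" behavior asked about.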
tobiasBora
(4621 rep)
Sep 17, 2019, 05:27 PM
• Last activity: Oct 22, 2024, 06:47 PM
5
votes
1
answers
694
views
Hard links seem to take several hundred bytes just for the link itself (not file data)
## Setup I was thinking of implementing the rsync / hardlink / snapshot backup strategy and was wondering how much space a hard link takes up.  Like, it has to put an entry for the extra link as a directory entry, etc.  I couldn't find any information on this, and I guess it is f...
## Setup
I was thinking of implementing the rsync / hard link / snapshot backup strategy and was wondering how much space a hard link takes up. It has to add a directory entry for the extra link, and so on. I couldn't find any information on this, and I guess it is filesystem dependent. The only info I could find was anecdotal: either hard links take no space (probably meaning no space for file contents), or the space they take is negligible, just a few bytes per link.
I took a couple of systems (one a VM and one on real hardware) and did the following in the root directory, as root:
mkdir link
cp -al usr link
The usr directory had about 54,000 files, and the space used on the disk increased by about 34 MB. That works out to around 600 bytes per hard link. Or am I doing something wrong?
## Additional Info
I'm using LVM on both systems, formatted as ext4. The file names add up to about 1.5 MB altogether (I got that by running ls -R and redirecting it to a file).
The rsync with hard links works so well that I was planning on using it for daily backups on a couple of the work servers. I also thought it would be easy to make incremental backups / snapshots like this for a considerable period of time. However, after ten days 30 MB is 300 MB, and so on. In addition, if there have been only a few changes to the actual file data/contents - say, a few hundred KB - then storing 30+ MB of hard links per day seemed excessive, but I take your point about the size of modern disks.
It was simply that I had not seen this hard link size mentioned anywhere, so I thought I might be doing something wrong. Is 600 bytes normal for a hard link on a Linux OS?
To calculate the space used, I did a df before and after the cp -al.
I re-ran the tests to check Gilles's answer (given below) that a directory entry takes up 4 KB and that this was skewing my numbers, this time by placing 20,000 files in a single directory and doing the cp -al to another directory. The results were very different: after subtracting the length of the filenames, the hard links worked out to about 13 bytes each, much better than 600.
For completeness, I did the test again while working on the answer saying that this is due to each directory taking up 4 KB. This time I created thousands of directories and placed one file in each. The result after the maths (increase in space used divided by number of files, ignoring directories) was almost exactly 4 KB per file, showing that a hard link really does take only a few bytes, but a directory takes 4 KB.
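A repeatable version of the measurement looks roughly like this (a sketch; df deltas can be noisy on a live system, so the link counts are the reliable part):

```shell
# Hard-link a directory full of files and compare disk usage.
mkdir -p src dst
for i in $(seq 1 1000); do : > "src/f$i"; done

before=$(df --output=used . | tail -n 1)
cp -al src/. dst/                 # creates hard links, copies no data
after=$(df --output=used . | tail -n 1)

echo "1K-blocks consumed by 1000 extra links: $((after - before))"
stat -c '%h' src/f1               # 2: each file now has two names
```

Dividing the block delta by the file count gives the per-link overhead, which is dominated by the directory entries (filename plus a fixed-size header in ext4), not by anything stored per link in the inode.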
jamie
(53 rep)
Jul 12, 2015, 08:36 PM
• Last activity: Jun 26, 2024, 05:04 PM
0
votes
1
answers
81
views
How to Avoid Copying Large Folders When Creating a chroot Sandbox Using Symlinks, Hard Links, or Bind Mount?
I am working on creating a chroot sandbox and want to avoid the time-consuming and storage-intensive process of copying large directories such as `bin`, `lib`, and others. Is it possible to use symbolic links, hard links, or bind mount to reference these directories from the host system within the c...
I am working on creating a chroot sandbox and want to avoid the time-consuming and storage-intensive process of copying large directories such as bin, lib, and others.
Is it possible to use symbolic links, hard links, or bind mounts to reference these directories from the host system within the chroot environment?
What are the implications or potential issues with each method in terms of:
1. Performance
2. Security
3. Compatibility
4. Ease of setup
Any insights or best practices on this would be greatly appreciated.
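As a sketch of the bind-mount route (the $CHROOT location and the mounted set of directories are hypothetical; the mounts themselves require root):

```shell
# Prepare mount points inside the sandbox, then bind-mount the host
# directories read-only so the chroot cannot modify host binaries.
CHROOT=${CHROOT:-/tmp/sandbox-demo}
mkdir -p "$CHROOT/bin" "$CHROOT/lib" "$CHROOT/usr"

if [ "$(id -u)" -eq 0 ]; then
    # util-linux 2.27+ applies ro directly; on older systems a
    # separate "mount -o remount,ro,bind" step is needed.
    mount --bind -o ro /bin "$CHROOT/bin"
    mount --bind -o ro /lib "$CHROOT/lib"
    mount --bind -o ro /usr "$CHROOT/usr"
else
    echo "re-run as root to perform the bind mounts"
fi
```

Bind mounts are generally preferable here: symlinks cannot point outside the chroot (path resolution happens inside it), and hard links cannot cross filesystems or link directories at all.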
Foad
(379 rep)
May 22, 2024, 09:10 PM
• Last activity: May 24, 2024, 11:36 AM
20
votes
2
answers
8714
views
What is the difference between the link and ln commands?
From the man pages: ln - make links between files and link - call the link function to create a link to a file These seem to do the same thing however `ln` takes a lot of options as well. Is `link` just a very basic `ln`? Is there any reason to use link over ln?
From the man pages:
ln - make links between files
and
link - call the link function to create a link to a file
These seem to do the same thing, but ln takes a lot of options as well.
Is link just a very basic ln? Is there any reason to use link over ln?
Qwertie
(1354 rep)
Aug 13, 2016, 05:00 AM
• Last activity: May 22, 2024, 12:38 PM
1
votes
3
answers
1078
views
Are hardlinks with rsync a bad solution?
I've read a lot recently about how making backups with rsync using hardlinks (--link-dest) is a good solution as it's so fast and the backup is complete. But surely it's really a very bad solution because if the data on the disk is corrupted (disk failure, bad blocks,...) the backup is completely us...
I've read a lot recently about how making backups with rsync using hard links (--link-dest) is a good solution, as it's so fast and the backup is complete. But surely it's really a very bad solution, because if the data on the disk is corrupted (disk failure, bad blocks, ...) the backup is completely useless.
Or am I missing something?
johnmuir
(51 rep)
Dec 6, 2022, 07:36 PM
• Last activity: May 11, 2024, 07:05 PM
89
votes
7
answers
166147
views
Determining if a file is a hard link or symbolic link?
I'm creating a shell script that would take a filename/path to a file and determine if the file is a symbolic link or a hard link. The only thing is, I don't know how to see if they are a hard link. I created 2 files, one a hard link and one a symbolic link, to use as a test file. But how would I de...
I'm creating a shell script that would take a filename/path to a file and determine if the file is a symbolic link or a hard link.
The only thing is, I don't know how to check whether a file is a hard link. I created 2 files, one a hard link and one a symbolic link, to use as test files. But how would I determine whether a file is a hard link or a symbolic link within a shell script?
Also, how would I find the destination partition of a symbolic link? Say I have a file that links to a different partition; how would I find the path to the original file?
k-Rocker
(1465 rep)
Nov 12, 2014, 05:28 PM
• Last activity: May 10, 2024, 01:44 PM
0
votes
0
answers
25
views
ls long listing column "number of hard links": What does that mean?
Please shed some light on that erratic, second column of ```ls -l```, that numeric column between the 1st one - the permission string - and the third one - the user ownership. I just can't pin it down, what it means and predict or explain it for any sub-directory with known content. I know what a ha...
Please shed some light on that erratic second column of
ls -l
, the numeric column between the 1st one - the permission string - and the third one - the owning user.
I just can't pin down what it means, or predict or explain it for a sub-directory with known content.
I know what a hard link is, and what it is not.
I know that when you create a new file, it has 1 hard link. When you run ln anewfile samestuffdifferentname
, the directory containing the aforementioned file now contains both names and both show a hard link count of 2, meaning that there is 1 specific content on the storage - with 2 names, not just 1.
So, things in ls -l
are clear to me for all listings of regular files, but as soon as a sub-directory is listed, things become erratic, unpredictable, unexpected for me.
I've looked for the specific meaning in ls --help, man ls and info ls, in vain; most video tutorials cover that column only superficially, if at all; same with web search results.
I've hacked around, filtering files by permission string, but I just can't pin it down.
For example:
~/Pictures/
According to ls -l
, this sub-directory of my home directory has a hard link count of 4 - but I've certainly never created any hard link to that directory.
So, maybe the 4 is constituted by the sub-directory's content: maybe 4 regular files, each with a hard link count of 1? Or 3 regular files like that, with the sub-directory itself also counting as 1, so that the directory plus its content make the 4? Wrong! It contains 22 files: 2 sub-directories and 20 regular files - 18 of the 20 have an image file extension or .xcf.
Please, let's get to the bottom of that column - once and for all.
futurewave
(213 rep)
Feb 11, 2024, 10:14 AM
3
votes
3
answers
327
views
Why doesn't a hard link break if we remove the original file?
Why doesn't a hard link break if we remove the original file? If I remove the original file the symlink breaks but the hard link doesn't, so why doesn't it break?
Why doesn't a hard link break if we remove the original file?
If I remove the original file, the symlink breaks, but the hard link doesn't. Why doesn't it break?
Rahul Sharma
Feb 3, 2024, 07:00 AM
• Last activity: Feb 4, 2024, 06:55 PM
1
votes
1
answers
160
views
mirror a directory tree by hard links for file contents and symlinks for directory structure
what is the best way to mirror an entire directory, say `original/`, to a new directory, say `mirror/`, which has the structure `mirror/data/` and `mirror/tree/`, such that - every file in the directory `original/` or in any of its subdirectories is hardlinked to a *file* in `mirror/data` - whose fi...
what is the best way to mirror an entire directory, say
original/
, to a new directory, say mirror/
, which has the structure mirror/data/
and mirror/tree/
, such that
- every file in the directory original/
or in any of its subdirectories is hardlinked to a *file* in mirror/data
- whose filename is a unique identifier of its content, say a hash of its content, and
- which is symlinked to from a point in mirror/tree
whose relative path corresponds to the relative path of the original file in original
,
such that it can be easily restored?
is this feature perhaps implemented by some tool in existence? – one that lets you flexibly choose the command used to derive a unique identifier from a file's content.
---
for instance, say there is only one file original/something
, which is a textfile containing the word “data”. then i want to run a script or command on original
, such that the result is:
$ tree original mirror
original
└── something
mirror
├── data
│ └── 6667b2d1aab6a00caa5aee5af8…
└── tree
└── original
└── something -> ../../data/6667b2d1aab6a00caa5aee5af8…
5 directories, 3 files
here, the file 6667b…
is a hard link to original/something
and its filename is the sha256 hash of that file's content. note that i have abbreviated the filename for legibility.
i want to be able to perfectly restore the original by its mirror.
i know i can write a script to do that, but before i do that and maybe make a mistake and lose some data, i want to know if there is any tool out there that already implements this safely (i didn’t find any so far) or if there are any pitfalls.
*background*: i want to keep an archive of a directory that tracks renames, but i don't need versioning. i know that git-annex
can do that with a lot of overhead using git repositories, but i only need its way to mirror the contents of a directory using symlinks for the directory structure to files whose file names are hashes of their content. then i could use git-diff to track renames. i don't fully understand what git-annex is doing so i don't want to trust it with archiving my data. so i'm looking for a lighter alternative that is less intrusive.
windfish
(113 rep)
Feb 2, 2024, 12:39 PM
• Last activity: Feb 2, 2024, 09:51 PM
0
votes
2
answers
99
views
Creating hard links
$ ln fun fun-hard $ ln fun dir1/fun-hard $ ln fun dir2/fun-hard $ ls -1 total 16 drwxrwxr-x 2 me 4096 2018-01-14 16:17 dir1 drwxrwxr-x 2 me 4096 2018-01-14 16:17 dir2 -rw-r—-r—- 4 me 1650 2018-01-10 16:33 fun -rw-r—-r—- 4 me 1650 2018-01-10 16:33 fun-hard So there are 4 instances of the file fun Bot...
$ ln fun fun-hard
$ ln fun dir1/fun-hard
$ ln fun dir2/fun-hard
$ ls -l
total 16
drwxrwxr-x 2 me 4096 2018-01-14 16:17 dir1
drwxrwxr-x 2 me 4096 2018-01-14 16:17 dir2
-rw-r--r-- 4 me 1650 2018-01-10 16:33 fun
-rw-r--r-- 4 me 1650 2018-01-10 16:33 fun-hard
So there are 4 instances of the file fun.
The second field in the listings for fun and fun-hard contains a 4, which is the number of hard links that now exist for the file.
drwxrwxr-x 2 me 4096 2018-01-14 16:17 dir1
drwxrwxr-x 2 me 4096 2018-01-14 16:17 dir2
Why are there 2 instances of the file fun-hard, in dir1 and dir2? Isn't there only one hard link, fun-hard?
-rw-r--r-- 4 me 1650 2018-01-10 16:33 fun
-rw-r--r-- 4 me 1650 2018-01-10 16:33 fun-hard
Can you explain more about these 4 instances of fun and fun-hard? Why are they repeated?
There are 2 hard links, in dir1 and dir2, yet:
-rw-r--r-- 4 me 1650 2018-01-10 16:33 fun
-rw-r--r-- 4 me 1650 2018-01-10 16:33 fun-hard
If there are 4 instances of hard links, why don't dir1 and dir2 show 4 as well?
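The counts in question can be reproduced step by step (a sketch):

```shell
mkdir -p dir1 dir2
printf 'x' > fun
stat -c '%h' fun     # 1: one directory entry points at the inode

ln fun fun-hard
ln fun dir1/fun-hard
ln fun dir2/fun-hard
stat -c '%h' fun     # 4: four names, still one inode (one copy of data)

# All four names share one inode number; dir1 and dir2 are just
# directories that each hold one of those names:
stat -c '%i %n' fun fun-hard dir1/fun-hard dir2/fun-hard
```

dir1 and dir2 show a 2 in their own listings because that is a directory's link count (its name plus its `.` entry), unrelated to the 4 links on fun; `ls -l .` only lists the current directory's entries, so the copies inside dir1 and dir2 don't appear there.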
user599592
(27 rep)
Jan 23, 2024, 10:24 PM
• Last activity: Jan 24, 2024, 11:53 AM