Unix & Linux Stack Exchange
Q&A for users of Linux, FreeBSD and other Unix-like operating systems
Latest Questions
14 votes · 1 answer · 5016 views
How can I make a device available inside a systemd-nspawn container with user namespacing?
I would like to mount an encrypted image file using `cryptsetup` inside a `systemd-nspawn` container. However, I get this error message:

```
[root@container ~]# echo $key | cryptsetup -d - open luks.img luks
Cannot initialize device-mapper. Is dm_mod kernel module loaded?
Cannot use device luks, name is invalid or still in use.
```
The `dm_mod` kernel module is loaded on the host system, although things look a bit weird inside the container:

```
[root@host ~]# grep dm_mod /proc/modules
dm_mod 159744 2 dm_crypt, Live 0xffffffffc12c6000
[root@container ~]# grep dm_mod /proc/modules
dm_mod 159744 2 dm_crypt, Live 0x0000000000000000
```
`strace` indicates that `cryptsetup` is unable to create `/dev/mapper/control`:

```
[root@etrial ~]# echo $key | strace cryptsetup -d - open luks.img luks 2>&1 | grep mknod
mknod("/dev/mapper/control", S_IFCHR|0600, makedev(0xa, 0xec)) = -1 EPERM (Operation not permitted)
```
I am not too sure why this is happening. I am starting the container with the `systemd-nspawn@.service` template unit, which seems like it should allow access to the device mapper:

```
# nspawn can set up LUKS encrypted loopback files, in which case it needs
# access to /dev/mapper/control and the block devices /dev/mapper/*.
DeviceAllow=/dev/mapper/control rw
DeviceAllow=block-device-mapper rw
```
Reading this comment on a related question about USB devices, I wondered whether the solution was to add a bind mount for `/dev/mapper`. However, `cryptsetup` gives me the same error message inside the container. When I `strace` it, it looks like there's still a permissions issue:

```
# echo $key | strace cryptsetup open luks.img luks --key-file - 2>&1 | grep "/dev/mapper"
stat("/dev/mapper/control", {st_mode=S_IFCHR|0600, st_rdev=makedev(0xa, 0xec), ...}) = 0
openat(AT_FDCWD, "/dev/mapper/control", O_RDWR) = -1 EACCES (Permission denied)
# ls -la /dev/mapper
total 0
drwxr-xr-x 2 nobody nobody  60 Dec 13 14:33 .
drwxr-xr-x 8 root   root   460 Dec 15 14:54 ..
crw------- 1 nobody nobody 10, 236 Dec 13 14:33 control
```
Apparently, this is happening because the template unit enables user namespacing, which I want anyway for security reasons. As explained in the documentation:

> In most cases, using `--private-users=pick` is the recommended option as it enhances container security massively and operates fully automatically in most cases ... [this] is the default if the `systemd-nspawn@.service` template unit file is used ...
>
> Note that when [the `--bind` option] is used in combination with `--private-users`, the resulting mount points will be owned by the `nobody` user. That's because the mount and its files and directories continue to be owned by the relevant host users and groups, which do not exist in the container, and thus show up under the wildcard UID 65534 (`nobody`). If such bind mounts are created, it is recommended to make them read-only, using `--bind-ro=`.
Presumably I won't be able to do anything with read-only permissions to `/dev/mapper`. So, is there any way I can get `cryptsetup` to work inside the container, so that my application can create and mount arbitrary encrypted volumes at runtime, without disabling user namespacing?
## Related questions
* systemd-nspawn: file-system permissions for a bound folder relates to files rather than devices, and the only answer just says that "`-U` is mostly incompatible with `rw --bind`."
* systemd-nspawn: how to allow access to all devices doesn't deal with user namespacing and there are no answers.
sjy
(956 rep)
Dec 15, 2019, 02:53 AM
• Last activity: Jul 31, 2025, 03:10 AM
4 votes · 1 answer · 2151 views
How to bind an Arduino to a fixed device node, /dev/ttyACM0?
I want to bind my Arduino Mega as `/dev/ttyACM0`. Sometimes it comes up as `/dev/ttyACM0` and sometimes as `/dev/ttyACM1`. I have taken help from this question and this tutorial.

Please help me achieve this; there are only two entries in `/etc/udev/rules.d`:

1. `20-crystalhd.rules`
2. `98-kexec.rules`
The output of `udevadm info -a -p $(udevadm info -q path -n /dev/ttyACM0)`:

> Udevadm info starts with the device specified by the devpath and then
> walks up the chain of parent devices. It prints for every device
> found, all possible attributes in the udev rules key format. A rule to
> match, can be composed by the attributes of the device and the
> attributes from one single parent device.
```
looking at device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.5/2-1.5:1.0/tty/ttyACM0':
    KERNEL=="ttyACM0"
    SUBSYSTEM=="tty"
    DRIVER==""

looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.5/2-1.5:1.0':
    KERNELS=="2-1.5:1.0"
    SUBSYSTEMS=="usb"
    DRIVERS=="cdc_acm"
    ATTRS{bInterfaceClass}=="02"
    ATTRS{bmCapabilities}=="6"
    ATTRS{bInterfaceSubClass}=="02"
    ATTRS{bInterfaceProtocol}=="01"
    ATTRS{bNumEndpoints}=="01"
    ATTRS{supports_autosuspend}=="1"
    ATTRS{bAlternateSetting}==" 0"
    ATTRS{bInterfaceNumber}=="00"

looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.5':
    KERNELS=="2-1.5"
    SUBSYSTEMS=="usb"
    DRIVERS=="usb"
    ATTRS{bDeviceSubClass}=="00"
    ATTRS{bDeviceProtocol}=="00"
    ATTRS{devpath}=="1.5"
    ATTRS{idVendor}=="2341"
    ATTRS{speed}=="12"
    ATTRS{bNumInterfaces}==" 2"
    ATTRS{bConfigurationValue}=="1"
    ATTRS{bMaxPacketSize0}=="8"
    ATTRS{busnum}=="2"
    ATTRS{devnum}=="4"
    ATTRS{configuration}==""
    ATTRS{bMaxPower}=="100mA"
    ATTRS{authorized}=="1"
    ATTRS{bmAttributes}=="c0"
    ATTRS{bNumConfigurations}=="1"
    ATTRS{maxchild}=="0"
    ATTRS{bcdDevice}=="0001"
    ATTRS{avoid_reset_quirk}=="0"
    ATTRS{quirks}=="0x0"
    ATTRS{serial}=="55431313937351C05151"
    ATTRS{version}==" 1.10"
    ATTRS{urbnum}=="17"
    ATTRS{ltm_capable}=="no"
    ATTRS{manufacturer}=="Arduino (www.arduino.cc)"
    ATTRS{removable}=="removable"
    ATTRS{idProduct}=="0042"
    ATTRS{bDeviceClass}=="02"

looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2/2-1':
    KERNELS=="2-1"
    SUBSYSTEMS=="usb"
    DRIVERS=="usb"
    ATTRS{bDeviceSubClass}=="00"
    ATTRS{bDeviceProtocol}=="01"
    ATTRS{devpath}=="1"
    ATTRS{idVendor}=="8087"
    ATTRS{speed}=="480"
    ATTRS{bNumInterfaces}==" 1"
    ATTRS{bConfigurationValue}=="1"
    ATTRS{bMaxPacketSize0}=="64"
    ATTRS{busnum}=="2"
    ATTRS{devnum}=="2"
    ATTRS{configuration}==""
    ATTRS{bMaxPower}=="0mA"
    ATTRS{authorized}=="1"
    ATTRS{bmAttributes}=="e0"
    ATTRS{bNumConfigurations}=="1"
    ATTRS{maxchild}=="6"
    ATTRS{bcdDevice}=="0000"
    ATTRS{avoid_reset_quirk}=="0"
    ATTRS{quirks}=="0x0"
    ATTRS{version}==" 2.00"
    ATTRS{urbnum}=="70"
    ATTRS{ltm_capable}=="no"
    ATTRS{removable}=="unknown"
    ATTRS{idProduct}=="0024"
    ATTRS{bDeviceClass}=="09"

looking at parent device '/devices/pci0000:00/0000:00:1d.0/usb2':
    KERNELS=="usb2"
    SUBSYSTEMS=="usb"
    DRIVERS=="usb"
    ATTRS{bDeviceSubClass}=="00"
    ATTRS{bDeviceProtocol}=="00"
    ATTRS{devpath}=="0"
    ATTRS{idVendor}=="1d6b"
    ATTRS{speed}=="480"
    ATTRS{bNumInterfaces}==" 1"
    ATTRS{bConfigurationValue}=="1"
    ATTRS{bMaxPacketSize0}=="64"
    ATTRS{authorized_default}=="1"
    ATTRS{busnum}=="2"
    ATTRS{devnum}=="1"
    ATTRS{configuration}==""
    ATTRS{bMaxPower}=="0mA"
    ATTRS{authorized}=="1"
    ATTRS{bmAttributes}=="e0"
    ATTRS{bNumConfigurations}=="1"
    ATTRS{maxchild}=="2"
    ATTRS{bcdDevice}=="0310"
    ATTRS{avoid_reset_quirk}=="0"
    ATTRS{quirks}=="0x0"
    ATTRS{serial}=="0000:00:1d.0"
    ATTRS{version}==" 2.00"
    ATTRS{urbnum}=="42"
    ATTRS{ltm_capable}=="no"
    ATTRS{manufacturer}=="Linux 3.10.0-123.13.2.el7.x86_64 ehci_hcd"
    ATTRS{removable}=="unknown"
    ATTRS{idProduct}=="0002"
    ATTRS{bDeviceClass}=="09"
    ATTRS{product}=="EHCI Host Controller"

looking at parent device '/devices/pci0000:00/0000:00:1d.0':
    KERNELS=="0000:00:1d.0"
    SUBSYSTEMS=="pci"
    DRIVERS=="ehci-pci"
    ATTRS{irq}=="23"
    ATTRS{subsystem_vendor}=="0x104d"
    ATTRS{broken_parity_status}=="0"
    ATTRS{class}=="0x0c0320"
    ATTRS{companion}==""
    ATTRS{enabled}=="1"
    ATTRS{consistent_dma_mask_bits}=="32"
    ATTRS{dma_mask_bits}=="32"
    ATTRS{local_cpus}=="000f"
    ATTRS{device}=="0x1c26"
    ATTRS{uframe_periodic_max}=="100"
    ATTRS{msi_bus}==""
    ATTRS{local_cpulist}=="0-3"
    ATTRS{vendor}=="0x8086"
    ATTRS{subsystem_device}=="0x9081"
    ATTRS{numa_node}=="-1"
    ATTRS{d3cold_allowed}=="1"

looking at parent device '/devices/pci0000:00':
    KERNELS=="pci0000:00"
    SUBSYSTEMS==""
    DRIVERS==""
```
***Which files should I edit, what entries should I add, and how? Please explain in some depth, as I am quite new to this!***
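For context, a rule for this case typically matches the USB IDs visible in the output above (idVendor 2341, idProduct 0042 for the Mega) and adds a stable symlink. The filename `99-arduino.rules` and the symlink name `ttyArduinoMega` below are illustrative choices, not requirements:

```
# /etc/udev/rules.d/99-arduino.rules
# Create a stable alias whenever this Arduino Mega enumerates as a ttyACM* device.
SUBSYSTEM=="tty", ATTRS{idVendor}=="2341", ATTRS{idProduct}=="0042", SYMLINK+="ttyArduinoMega"
```

After saving the file, running `udevadm control --reload-rules` and replugging the board (or running `udevadm trigger`) should make `/dev/ttyArduinoMega` appear regardless of whether the kernel assigned ttyACM0 or ttyACM1.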
Am_I_Helpful
(721 rep)
Jan 17, 2015, 10:10 AM
• Last activity: Jul 18, 2025, 11:04 AM
0 votes · 1 answer · 88 views
Why does OverlayFS allow unmounting the device that contains upperdir and workdir?
I have two hard drives, each with a single partition (`/dev/sda1` and `/dev/sdb1`). The Linux root is on `/dev/sda1`. I run the following script.

```bash
mount /dev/sdb1 /mnt
mkdir /data /mnt/upper /mnt/work
mount -t overlay overlay -o lowerdir=/data,upperdir=/mnt/upper,workdir=/mnt/work /data
umount /mnt
```
I noticed two interesting behaviors here.

1. The overlay **overwrites its own** `lowerdir` (since `/data` is both the `lowerdir` and the mount target).
2. The overlay **continues working correctly** even after unmounting `/dev/sdb1` (which holds `upperdir` and `workdir`).
**Is this behavior reliable?**
I couldn't find any documentation about this behavior in either the [mount(8)](https://man.archlinux.org/man/mount.8.en) man page or the official [OverlayFS](https://docs.kernel.org/filesystems/overlayfs.html) documentation. While eliminating extra mount points would be convenient, can this approach be considered truly reliable?
**Here’s another example that also raises doubts.**

```bash
mount /dev/sdb1 /mnt
mkdir /mnt/dir
mount --bind /mnt/dir /dir
umount /mnt
```

Is the same mechanism at work here? Is this just as (un)reliable as the OverlayFS example?
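Both examples hinge on the same reference counting: each mount holds its own reference to the filesystem's superblock, so detaching one attachment does not tear down the filesystem while another attachment remains. A minimal sketch of that mechanism (not of overlayfs specifically), using throwaway paths under /tmp and assuming unprivileged user namespaces are available:

```shell
mkdir -p /tmp/m /tmp/o
unshare -rm sh -c '
  mount -t tmpfs tmp /tmp/m        # a small filesystem to play with
  mkdir /tmp/m/dir
  echo hi > /tmp/m/dir/f
  mount --bind /tmp/m/dir /tmp/o   # second attachment to the same superblock
  umount /tmp/m                    # removes one attachment only
  cat /tmp/o/f                     # the data is still reachable via the other
'
```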
user741127
(1 rep)
May 12, 2025, 04:10 PM
• Last activity: May 12, 2025, 04:51 PM
2 votes · 1 answer · 3253 views
Systemd BindPaths= not working
I am trying to test how the `BindPaths=` directive works on a Debian 8 system with systemd.

Currently I have a basic unit file for a service:

```
[Unit]
Description="Simple Test Service"
BindPaths=/path:/bindmount/path:norbind

[Service]
ExecStart=/usr/bin/long_running_program --flags
Restart=always

[Install]
WantedBy=multi-user.target
```
When I run `findmnt` before and after starting the service, I do not see the bind mount at `/bindmount/path` listed at all. When I `ls` or `ls -a` the bind mount location `/bindmount/path`, I do not see any files that are in `/path`.
**Why is this not working as expected?**

I see the systemd `BindPaths=` man page says:

> This option is only available for system services and is not supported for services running in per-user instances of the service manager.

How do I know if I am running in a per-user instance of the service manager vs running a system service? Is it based on whether my service is located in `/etc/systemd/system` vs `/lib/systemd/system`?
Wimateeka
(1085 rep)
Nov 6, 2019, 07:23 PM
• Last activity: May 11, 2025, 11:03 AM
0 votes · 1 answer · 5579 views
Proxmox LXC storage share, permission problems
I'm totally new to these environments, but I'm trying to learn.

I installed Proxmox on a single SSD, then attached one HDD (`/dev/sdb`) to the system for media storage. The basic idea was to create one container for the Plex app and one for the rtorrent app, and share the same space (disk) between these containers.

On the host I mounted `/dev/sdb1` to `/mnt/mediastorage`, created a user called "mediastorage" (110:117), and gave it access to this space. In both containers I added this (`/mnt/mediastorage`) as the `/mediastorage` mount point, like this:

```
mp0: /mnt/mediastorage/,mp=/mediastorage
```
After that, I tried to grant access to these files for the plex (107:115) user in the "plex" container:

```
lxc.idmap: u 0 100000 107
lxc.idmap: u 107 110 1
lxc.idmap: u 108 100125 64410
lxc.idmap: g 0 100000 115
lxc.idmap: g 115 117 1
lxc.idmap: g 116 100136 64399
```
On the host I did this:

```
root@proxmox:~# cat /etc/subuid
root:100000:65536
root:110:1
root@proxmox:~# cat /etc/subgid
root:100000:65536
root:117:1
```
Later, I created the other container, where I created a user called rtorrent (107:115) and applied the same config as in the "plex" container.

For a while everything seemed fine, but after a host reboot, incomprehensible things happened, like this: Previous thread where it started

I noticed that in the "plex" container a new entry appeared in `/etc/passwd`:

```
mediastorage:x:108:116:...etc
```

and in `/etc/group`:

```
mediastorage:x:116:
```

These were not there earlier, and the container was in a shutdown state.

```
root@plex:/# ls -al /home
total 12
drwxr-xr-x  3 root   root    4096 Jan 23 20:57 .
drwxr-xr-x 23 root   root    4096 Jan 24 22:42 ..
drwxr-xr-x  2 nobody nogroup 4096 Jan 23 20:57 mediastorage
```

Can somebody explain what happened here, please? How can I achieve my main idea (sharing storage between the containers)? Is it possible this way?
**EDIT1:**

I reinstalled the container, first mounted `/mediastorage`, then installed Plex, then added the uid mapping to the container's config (somewhere I read that maybe it would work). Now the storage works, but the Plex service can't start because of permission issues.

From the host, with the LXC container's disk mounted as `/mnt/lxc102`:

```
/mnt/lxc102/etc/passwd:
plex:x:107:115::/var/lib/plexmediaserver:/bin/bash
/mnt/lxc102/etc/group:
plex:x:115:
ls -al /mnt/lxc102:
drwxr-xr-x 2 100000 100000 4096 Jan 25 23:22 mediastorage
ls -al /mnt/lxc102/var/lib:
drwxr-xr-x 3 100107 100115 4096 Jan 25 23:25 plexmediaserver
```

In the container, the plexmediaserver directory is listed as nobody:nogroup again.
toma3757
(43 rep)
Jan 24, 2020, 11:22 PM
• Last activity: May 9, 2025, 02:08 AM
0 votes · 1 answer · 42 views
bindfs: Can remounting with new mapping cause side-effects?
I want to create an Ansible role that creates some bindfs mounts and maps them to a bunch of users. These users can change, which is why I cannot simply check whether the mount already exists and skip bindfs: the mount might exist, but with the wrong user mapping.

So basically, every time the playbook runs, the bindfs command is fired. Additionally, I cannot extract the current mapping from the `mount` overview.

So my question is: given that userA is uploading a very large file to the mounted directory, and at that moment the bindfs command is fired again, can this cause data corruption? The command is nothing fancy and looks like this:

```
bindfs -o nonempty --map=userA/userB /var/foo/bar /mnt/foo/bar
```

One option I thought of was to create a passwd file, as bindfs offers `--map-passwd`. If the file changes, I can register a variable and only in that case remount. But still: in this event, would I risk corrupt data?

Thanks for your help.
SPQRInc
(105 rep)
May 4, 2025, 03:54 PM
• Last activity: May 6, 2025, 06:46 PM
0 votes · 3 answers · 226 views
nfs v4 export is adding additional options not specified in /etc/exports
This seems trivial, but I've lost too much time searching and reading manuals.

RHEL 7.9 server. I have a simple directory being exported over NFSv4, using `/etc/exports`, with specific options.

```
[ ~]# cat /etc/exports
/path/to/my/share/ remoteserver.host.com(rw,no_root_squash,sync,insecure)
[ ~]#
```

This is exported using `exportfs -ra`.
However, if I view the verbose export information, I see many more options, which are breaking the intended share operations. Yes, I know I can be more explicit in `/etc/exports`, but I'm interested in understanding where these come from, because it's a new issue that has crept up.

```
~]# exportfs -v
/path/to/my/share/
    remoteserver.host.com(sync,wdelay,hide,no_subtree_check,sec=sys,rw,insecure,no_root_squash,no_all_squash)
[ ~]#
```

You can see the additional options; in my case, `hide` specifically is creating trouble. I've checked `/etc/nfsmount.conf`, but it's fully commented out.
hsx
(1 rep)
Jan 15, 2025, 09:17 PM
• Last activity: May 2, 2025, 02:29 PM
5 votes · 1 answer · 6263 views
Two *different* mount points having the *same* absolute path (bind-mount problem)
Scenario
--------

- An NFS share is mounted on `/mnt/temp/dir` (and other shares are mounted in subdirectories),
- I `umount` everything there, but supposedly it doesn't work well (maybe I start with `umount /mnt/temp/dir` instead of umounting "nested" shares like `/mnt/temp/dir/subdir*` first),
- I do `mount -o bind /data/temp /mnt/temp`,
- I do `mount /mnt/temp/dir`,
- I do `mount /mnt/temp/dir/subdir1` ... and it works well.

Note: `/mnt/temp` is initially hosted on the root (`/`) filesystem `/dev/sda6`, and `/data` is another filesystem from `/dev/sda8`.
Problem
-------

I cannot delete the `/mnt/temp/dir` directory on the root filesystem:

```
# mount -o bind / /test/root
# rmdir /test/root/mnt/temp/dir
rmdir: failed to remove `dir': Device or resource busy
```
Some explanation
----------------

`/mnt/temp/dir` is mounted ***twice***, probably once on the root fs, and once on the `/data` fs. Here is `cat /proc/mounts`:

```
nfsserver:/some/share/ /mnt/temp/dir nfs rw,relatime(...) 0 0
nfsserver:/some/share/ /mnt/temp/dir nfs rw,relatime,(...) 0 0
```

More interesting, here is `cat /proc/1/mountinfo`:

```
29 20 0:18 / /mnt/temp/dir rw,relatime - nfs nfsserver:/some/share/ rw,(...)
33 31 0:18 / /mnt/temp/dir rw,relatime - nfs nfsserver:/some/share/ rw,(...)
```

See, the two numbers at the beginning are *different*. The kernel doc says for these two fields:

```
(1) mount ID: unique identifier of the mount (may be reused after umount)
(2) parent ID: ID of parent (or of self for the top of the mount tree)
```

They also have different parents, 20 and 31 (root fs and `/data` fs), see:

```
20 1 8:6 / / rw,relatime - ext4 /dev/sda6 rw,(...)
31 20 8:8 /temp /mnt/temp rw,relatime - ext4 /dev/sda8 rw,(...)
```
If I try to `umount /mnt/temp/dir`, I get two error messages:

```
umount.nfs: /mnt/temp/dir: device is busy
umount.nfs: /mnt/temp/dir: device is busy
```

Question
========

**How can I `umount` the "bad" one (mount ID 29)?**

Even the `umount(2)` system call takes a path as argument, not a "mount ID".
Totor
(21020 rep)
Feb 27, 2015, 06:24 PM
• Last activity: Apr 13, 2025, 10:04 PM
0 votes · 0 answers · 37 views
How do I bind multiple read-only directories to a mounted directory to make them writable?
I have a root filesystem (`/`) that is read-only on a partition of an eMMC (`/dev/mmcblk0p3`). There are a variety of directories that I want to be writable and persist. I have a partition available on the eMMC, `/dev/mmcblk0p4`, that is writable and will persist. My fstab file is shown below.

```
/dev/root       /              auto    defaults                            1  1
proc            /proc          proc    defaults                            0  0
devpts          /dev/pts       devpts  mode=0620,ptmxmode=0666,gid=5       0  0
tmpfs           /run           tmpfs   mode=0755,nodev,nosuid,strictatime  0  0
tmpfs           /var/volatile  tmpfs   defaults                            0  0
/dev/mmcblk0p4  /persist       auto    defaults                            0  2
```

You can see I am mounting `/dev/mmcblk0p4` at the `/persist` directory, and this works. I want to bind multiple directories to this `/persist` directory. For example, I want to bind the following:

- `/lib/example` to `/persist/lib`
- `/etc/example` to `/persist/etc`

The problem is that when I mount `/persist`, any subdirectories created at the mountpoint disappear under the mount, so there are no entries I can put into fstab. I have seen suggestions for setting up OverlayFS, but this relies on upper and work subdirectories, and I don't know how to get those subdirectories created beforehand to make that work.
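For reference, plain bind mounts can be declared directly in fstab. The sketch below is an illustration under stated assumptions: the directories `/persist/lib` and `/persist/etc` must already exist on the persistent partition, and the mountpoints `/lib/example` and `/etc/example` must exist in the read-only image. The `x-systemd.requires-mounts-for=` option orders each bind after `/persist` is mounted:

```
/persist/lib  /lib/example  none  bind,x-systemd.requires-mounts-for=/persist  0  0
/persist/etc  /etc/example  none  bind,x-systemd.requires-mounts-for=/persist  0  0
```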
user2899525
(1 rep)
Apr 4, 2025, 12:17 PM
5 votes · 2 answers · 109 views
mv affected by bind mounts (feels like a bug)
Originally, `mv(1)` was a rename operation; it updated names in filesystems and did not copy files. More recently, a convenience feature was added, whereby if the source and target were on different filesystems, it would copy and delete the files, AKA an "inter-device move".

Now I was trying to tidy up my backups. I wanted to move `.../rest2/Public/Backups` to `.../rest2/Backup/(Backups)`, so:

```
root@ts412:/QNAP/mounts/rest2# mv Public/Backups Backup/
```
Where:

```
root@ts412:/QNAP/mounts/rest2# df -h /QNAP/mounts/rest2/Public/
Filesystem Size Used Avail Use% Mounted on
/dev/sdb10 831G 715G 75G 91% /QNAP/mounts/rest2
root@ts412:/QNAP/mounts/rest2# df -h /QNAP/mounts/rest2/Backup/
Filesystem Size Used Avail Use% Mounted on
/dev/sdb10 831G 715G 75G 91% /QNAP/mounts/rest2
```

So, same filesystem. (FYI, `rest2` is "the rest of the space on `disk2`".)

But the move started to behave like an "inter-device move" (high CPU, disks busy, various errors about non-empty directories, etc.), so I killed it.
Checking in a slightly different fashion (note the `.`):

```
root@ts412:/QNAP/mounts/rest2# df -h Backup/.
Filesystem Size Used Avail Use% Mounted on
/dev/sdb10 831G 715G 75G 91% /QNAP/mounts/rest2
root@ts412:/QNAP/mounts/rest2# df -h Public/Backups/.
Filesystem Size Used Avail Use% Mounted on
/dev/sdb10 831G 715G 75G 91% /QNAP/mounts/rest2/Public
```
Then I recalled I ALSO had a **bind mount** (it makes the shared names via NFS more friendly). So I unmounted the extra bind mount:

```
root@ts412:/QNAP/mounts/rest2# umount /QNAP/mounts/rest2/Public
root@ts412:/QNAP/mounts/rest2# df -h Public/Backups/.
Filesystem Size Used Avail Use% Mounted on
/dev/sdb10 831G 715G 75G 91% /QNAP/mounts/rest2
root@ts412:/QNAP/mounts/rest2# mv Public/Backups Backup/
```

And the `mv(1)` was instant, as I'd expected.
So, notwithstanding the extra `mount(8)`s, **the source and target were always on the same filesystem**; the `mount -o bind /QNAP/mounts/rest2/Backups /Backups` does not change this. So I'm wondering: if `mv(1)` gets back mount points of `/QNAP/mounts/rest2` for one path and `/QNAP/mounts/rest2/Public` for the other, does it incorrectly decide the two files are on different filesystems?
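One relevant detail: rename(2) documents that it fails with EXDEV when the two paths are on different mount points, even if the same filesystem is mounted at both, and mv falls back to copy+delete on EXDEV. So the observed behavior is kernel semantics rather than an mv heuristic. GNU stat can show the situation (`%d` prints the device number, `%m` the mount point); a self-contained sketch with throwaway paths, assuming unprivileged user namespaces are available:

```shell
mkdir -p /tmp/fs /tmp/alias
unshare -rm sh -c '
  mount -t tmpfs tmp /tmp/fs
  mkdir /tmp/fs/dir
  mount --bind /tmp/fs/dir /tmp/alias   # same filesystem, second mount
  # Same device number, different mount points -- exactly the mv situation:
  stat -c "dev=%d mnt=%m path=%n" /tmp/fs/dir /tmp/alias
'
```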
GraemeV
(348 rep)
Mar 11, 2025, 01:40 PM
• Last activity: Mar 18, 2025, 08:34 PM
612 votes · 3 answers · 547056 views
What is a bind mount?
What is a “bind mount”? How do I make one? What is it good for?
I've been told to use a bind mount for something, but I don't understand what it is or how to use it.
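In brief, a bind mount attaches an existing directory tree at a second location; both paths then refer to the same files. A minimal demonstration, run in an unprivileged user+mount namespace so that no root access is needed (the paths under /tmp are throwaway):

```shell
mkdir -p /tmp/orig /tmp/mirror
echo hello > /tmp/orig/file
unshare -rm sh -c '
  mount --bind /tmp/orig /tmp/mirror   # /tmp/mirror now shows /tmp/orig
  cat /tmp/mirror/file                 # same file, reached via the second path
'
```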
Gilles 'SO- stop being evil'
(862317 rep)
Apr 25, 2015, 05:28 PM
• Last activity: Mar 7, 2025, 03:03 PM
0 votes · 0 answers · 403 views
Accessing Docker Unix Socket from a Podman Container on a Remote Server (SSH)
I'm trying to access a **Docker** Unix socket on a remote server from within a **Podman** container (`offen/docker-volume-backup`). I've (root-)mounted the entire root filesystem of the remote server using `sshfs` and can access it as root. However, I can't connect to the Unix socket. (All servers are on AlmaLinux with SELinux, and Podman is used with sudo.)
I've tried the following settings in my Podman container:

```
volumes:
  - /mnt/fuse_to_somewhere/var/run/docker.sock:/var/run/docker.sock:ro,z
security_opt:
  - label=disable
privileged:
  - true
```

But I still get this error:

```
Commands: error querying for containers: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
```

Questions:

1. What steps are needed to connect to the Docker Unix socket from a Podman container?
2. Are there specific permissions required for accessing the Docker socket over sshfs?
Jack
(1 rep)
Feb 21, 2025, 05:05 PM
• Last activity: Feb 21, 2025, 05:22 PM
1 vote · 0 answers · 28 views
Selective rw access on read-only mounted partition
I have a read-only root filesystem, protected with dm-verity, and a clean read-write user data storage. Nevertheless, I need to make a tiny set of files on the rootfs that require persistent storage **modifiable**.

As far as I know, the common approach for this is to use **unionfs**-like filesystems, for example overlayfs. The problem with overlayfs is that it doesn't seem to provide file-level granularity. What I mean: for example, if I want to make **/etc/resolv.conf** modifiable, I need to mount the entire **/etc/** folder accordingly.

```
mount -t overlay overlay -o lowerdir=/etc,upperdir=/opt/storage/etc-up,workdir=/opt/storage/etc-wd,noexec /etc
```

I then tried to use file bind mounts instead of overlayfs to overcome this; the idea was to copy the target file to the read-write storage at boot time, and then bind-mount it over the original place. However, in some cases, for example user add, software also tries to **create** temporary files in the /etc folder (e.g. lock files), so that didn't work for me (file creation of course failed because the original rootfs is mounted ro).

I'm wondering if there is a solution that will help me do what I want.
The requirements could be summarized as:

- Most of the rootfs is left forever read-only (implemented already; the rootfs shall be mounted ro).
- I can statically define at image build time that file1, file2 ... file_n are excluded from this "forever-readonly" data list.
- I can define that new files can be created in folder1, folder2 ... folder_n.
Alex Hoppus
(257 rep)
Dec 18, 2024, 10:58 AM
1 vote · 0 answers · 132 views
Right way to recursively share a path (like a symlink) and proper way to unmount/remount without messing with other mount points
Bind mounts seem to be hard. I am looking for the right way to use a bind mount to mount a given directory onto another one, pretty much like a symlink (but I can't use a symlink because my application is Linux containers). I want to be able to unmount that directory without disrupting other possible mounts.

**Background**: I want to share a ZFS pool from a host system with a Linux container (Proxmox). As a ZFS pool, there are many nested datasets, hence I would like to do the mount recursively. Also, some of the datasets are encrypted and should be transparently available if the encryption keys are loaded, and not if unmounted.
**What I have tried**

1. The starting point is the host system with the `mountpoint` property set so that all datasets are mounted to `/zpseagate8tb` on the host system. I can freely mount/umount datasets and load/unload encryption keys. I would like to clone this tree exactly into a container.
2. I created another directory, `/zpseagate8tb_bind`, to which I bind mount the original pool. The intention is to mark it as slave to facilitate unmounting. I have the following line in my `/etc/fstab`:

   ```
   /zpseagate8tb /zpseagate8tb_bind none rbind,make-rslave,nofail
   ```

3. Then I use LXC's built-in capabilities to bind mount that directory into the container. The config file includes:

   ```
   lxc.mount.entry: /zpseagate8tb_bind zpseagate8tb none rbind,create=dir 0 0
   ```

This works flawlessly until I want to unmount and/or the pool disappears (due to mistakenly unplugging it), in which case there is always something unexpected happening. For example, `/zpseagate8tb_bind` is empty while the data is still accessible/mounted inside the container. In nearly all cases I have to reboot everything to get a consistent state again.

What is the right approach to create this bind mount, and which commands are needed to remove the mount from the container while not disturbing anything else?
divB
(218 rep)
Nov 5, 2024, 05:42 AM
• Last activity: Nov 5, 2024, 06:11 AM
0 votes · 0 answers · 46 views
shared vs private mountpoints in parent/child mount namespaces
As per explicit request, I opened this question to ask the following: on Ubuntu Linux systems, the initial (aka root or default) mount namespace has options for mounted filesystems that differ from those of the same mounted filesystem within a child mount namespace, which initially gets a *copy* of it.
```
root@ubuntu:~# cat /proc/self/mountinfo
25 31 0:23 / /sys rw,nosuid,nodev,noexec,relatime shared:7 - sysfs sysfs rw
26 31 0:24 / /proc rw,nosuid,nodev,noexec,relatime shared:12 - proc proc rw
27 31 0:5 / /dev rw,nosuid,relatime shared:2 - devtmpfs udev rw,size=4008708k,nr_inodes=1002177,mode=755,inode64
28 27 0:25 / /dev/pts rw,nosuid,noexec,relatime shared:3 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
29 31 0:26 / /run rw,nosuid,nodev,noexec,relatime shared:5 - tmpfs tmpfs rw,size=812844k,mode=755,inode64
31 1 8:2 / / rw,relatime shared:1 - ext4 /dev/sda2 rw
32 25 0:6 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:8 - securityfs securityfs rw
33 27 0:28 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
34 29 0:29 / /run/lock rw,nosuid,nodev,noexec,relatime shared:6 - tmpfs tmpfs rw,size=5120k,inode64
35 25 0:30 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime shared:9 - cgroup2 cgroup2 rw,nsdelegate,memory_recursiveprot
36 25 0:31 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:10 - pstore pstore rw
37 25 0:32 / /sys/fs/bpf rw,nosuid,nodev,noexec,relatime shared:11 - bpf bpf rw,mode=700
38 26 0:33 / /proc/sys/fs/binfmt_misc rw,relatime shared:13 - autofs systemd-1 rw,fd=29,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=17383
39 27 0:20 / /dev/mqueue rw,nosuid,nodev,noexec,relatime shared:14 - mqueue mqueue rw
40 27 0:34 / /dev/hugepages rw,relatime shared:15 - hugetlbfs hugetlbfs rw,pagesize=2M
41 25 0:7 / /sys/kernel/debug rw,nosuid,nodev,noexec,relatime shared:16 - debugfs debugfs rw
42 25 0:12 / /sys/kernel/tracing rw,nosuid,nodev,noexec,relatime shared:17 - tracefs tracefs rw
---------------------------- output omitted ------------------------------------------
```

```
root@ubuntu:~# unshare -m /bin/bash
root@ubuntu:~# cat /proc/self/mountinfo
714 713 8:2 / / rw,relatime - ext4 /dev/sda2 rw
715 714 0:5 / /dev rw,nosuid,relatime - devtmpfs udev rw,size=4008708k,nr_inodes=1002177,mode=755,inode64
716 715 0:25 / /dev/pts rw,nosuid,noexec,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
719 715 0:28 / /dev/shm rw,nosuid,nodev - tmpfs tmpfs rw,inode64
720 715 0:20 / /dev/mqueue rw,nosuid,nodev,noexec,relatime - mqueue mqueue rw
725 715 0:34 / /dev/hugepages rw,relatime - hugetlbfs hugetlbfs rw,pagesize=2M
726 714 0:26 / /run rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,size=812844k,mode=755,inode64
739 726 0:29 / /run/lock rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,size=5120k,inode64
740 726 0:36 / /run/credentials/systemd-sysusers.service ro,nosuid,nodev,noexec,relatime - ramfs none rw,mode=700
```
741 726 0:26 /snapd/ns /run/snapd/ns rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,size=812844k,mode=755,inode64
742 726 0:26 /netns /run/netns rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,size=812844k,mode=755,inode64
743 726 0:46 / /run/user/1000 rw,nosuid,nodev,relatime - tmpfs tmpfs rw,size=812840k,nr_inodes=203210,mode=700,uid=1000,gid=1000,inode64
744 714 0:23 / /sys rw,nosuid,nodev,noexec,relatime - sysfs sysfs rw
745 744 0:6 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime - securityfs securityfs rw
746 744 0:30 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime - cgroup2 cgroup2 rw,nsdelegate,memory_recursiveprot
747 744 0:31 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime - pstore pstore rw
748 744 0:32 / /sys/fs/bpf rw,nosuid,nodev,noexec,relatime - bpf bpf rw,mode=700
814 744 0:7 / /sys/kernel/debug rw,nosuid,nodev,noexec,relatime - debugfs debugfs rw
815 744 0:12 / /sys/kernel/tracing rw,nosuid,nodev,noexec,relatime - tracefs tracefs rw
---------------------------- output omitted ------------------------------------------
root@ubuntu:~#
For instance, the root filesystem
/dev/sda2
is tagged as shared:1
in the root/initial mount namespace, while it is tagged as private (no explicit tag) in the child mount namespace. What is the reason behind this? Thanks.
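Worth noting when reproducing this: unshare(1) from util-linux does more than a bare clone(CLONE_NEWNS); by default it remounts everything private in the new namespace (its --propagation option defaults to private). A quick check, assuming a root shell:

```shell
# Default behaviour: util-linux marks every mount private in the new
# namespace, so the shared:N tags disappear from mountinfo.
unshare -m sh -c 'grep " / / " /proc/self/mountinfo'

# Keep the kernel's plain copy semantics instead; the shared:N tags survive:
unshare -m --propagation unchanged sh -c 'grep " / / " /proc/self/mountinfo'
```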
CarloC
(385 rep)
Oct 29, 2024, 01:28 PM
1
votes
1
answers
108
views
Unreliable bind-mounts
On a RHEL8-based virtual machine running `systemd` 239, I have the following bind-mount setup: * a filesystem (identified via UUID) mounted to a "source" mount-point (`/path/to/sourcemnt`) * multiple sub-directories on that "source" mount-point: ```none /path/to/sourcemnt/dirA /path/to/sourcemnt/dir...
On a RHEL8-based virtual machine running
systemd
239, I have the following bind-mount setup:
* a filesystem (identified via UUID) mounted to a "source" mount-point (/path/to/sourcemnt
)
* multiple sub-directories on that "source" mount-point:
/path/to/sourcemnt/dirA
/path/to/sourcemnt/dirB
/path/to/sourcemnt/dirC
* bind-mounts for each sub-directory to "target" mount-points
The fstab
looks like this:
# auto-generated entries for 'sourcemnt'
UUID= /path/to/sourcemnt ext4 defaults 1 2
# manually added entries for bind mounts
/path/to/sourcemnt/dirA /path/to/targetmnt/dirA none bind,x-systemd.requires=/path/to/sourcemnt 0 0
/path/to/sourcemnt/dirB /path/to/targetmnt/dirB none bind,x-systemd.requires=/path/to/sourcemnt 0 0
/path/to/sourcemnt/dirC /path/to/targetmnt/dirC none bind,x-systemd.requires=/path/to/sourcemnt 0 0
**The Problem**
Not all of the bind-mounts are automatically mounted during boot. This can be seen from the ownership/permissions of the mount-points as well as output of findmnt
and mountpoint
commands.
* The journalctl
output doesn't contain any error messages about mounting the "failed" bind-mounts; they are simply never mentioned. Only the successfully mounted bind-mounts appear in the usual way:
systemd: Mounting /path/to/sourcemnt...
...
systemd: Mounted /path/to/sourcemnt.
systemd: Mounting /path/to/targetmnt/dirB...
...
systemd: Mounted /path/to/targetmnt/dirB.
* Manually mounting them (as root
) works.
* The number of "failing" bind-mounts is not consistent across reboots. Sometimes only one is missing, sometimes two, but at least one is always successfully mounted.
* Changing the order in which the bind-mounts are specified in the fstab
doesn't change which of them are mounted and which are not.
* It seems that the one lexicographically in the middle of the sort order (dirB) is always there, dirA sometimes, and dirC only after a hard restart (if at all) - but I didn't try often enough to say for sure.
* The same setup still worked as of RHEL7
This looks like a timing problem/race condition, but how can I ensure that all bind-mounts are present? I even tried to "cascade" the mount-requirements as in
/path/to/sourcemnt/dirA /path/to/targetmnt/dirA none bind,x-systemd.requires=/path/to/sourcemnt 0 0
/path/to/sourcemnt/dirB /path/to/targetmnt/dirB none bind,x-systemd.requires=/path/to/targetmnt/dirA 0 0
/path/to/sourcemnt/dirC /path/to/targetmnt/dirC none bind,x-systemd.requires=/path/to/targetmnt/dirB 0 0
but that also didn't help (that said, why should it in the first place?).
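One hedged thing to try: systemd.mount(5) also documents x-systemd.requires-mounts-for=, which adds both a requirement and an ordering dependency on the given path's mount unit, and may serialize the binds more reliably than x-systemd.requires=:

```none
# manually added entries for bind mounts
/path/to/sourcemnt/dirA /path/to/targetmnt/dirA none bind,x-systemd.requires-mounts-for=/path/to/sourcemnt 0 0
/path/to/sourcemnt/dirB /path/to/targetmnt/dirB none bind,x-systemd.requires-mounts-for=/path/to/sourcemnt 0 0
/path/to/sourcemnt/dirC /path/to/targetmnt/dirC none bind,x-systemd.requires-mounts-for=/path/to/sourcemnt 0 0
```

Whether this avoids the race on systemd 239 is an assumption to verify, not a confirmed fix.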
AdminBee
(23588 rep)
Jun 24, 2024, 03:20 PM
• Last activity: Oct 18, 2024, 08:32 PM
2
votes
0
answers
270
views
Ubuntu NFS mount bind failing
This should be a rather simple question, but all my googlefu has failed me so far. I have an nfs server with a share mounted to my local machine in dir `/share/` `(rw,sync,no_subtree_check)` from there I want to mount bind a folder within `/share/userdata` to my user user directory `/home/user/nfs`...
This should be a rather simple question, but all my google-fu has failed me so far. I have an NFS server with a share mounted on my local machine in the directory
/share/
(rw,sync,no_subtree_check)
From there I want to bind-mount the folder /share/userdata
to my user directory /home/user/nfs
I am able to mount /share/
just fine, however when running this command
sudo mount --bind /share/userdata /home/user/nfs
I am returned the error:
mount: /home/user/nfs: bind /share/userdata failed.
No further details are given and I can't make heads or tails of this error. Does anyone have any clue what is going on or if this is even possible?
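Two hedged checks that usually narrow this terse failure down (paths from the question): the bind fails like this when either endpoint is missing, and the kernel log often records the real reason:

```shell
# Both the source directory and the target mount point must already exist:
test -d /share/userdata || echo "source /share/userdata missing"
test -d /home/user/nfs  || echo "target /home/user/nfs missing"

# Retry verbosely and ask the kernel what actually went wrong:
sudo mount -v --bind /share/userdata /home/user/nfs || sudo dmesg | tail -n 5
```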
Graydon Neill
(21 rep)
Jan 30, 2023, 10:06 PM
• Last activity: Sep 26, 2024, 06:17 PM
0
votes
1
answers
244
views
Are there any implications of using a symbolic link or bind mount vs having the real files there?
I have an indirect wildcard autofs mount on home but would like a few local folders to remain in there. Thus I moved those local folders elsewhere and created a bind mount. As a backup, incase autofs dies, I created a symbolic link in /home to the folder as well. This seems to work fine, however, be...
I have an indirect wildcard autofs mount on /home but would like a few local folders to remain in there. Thus I moved those local folders elsewhere and created a bind mount. As a backup, in case autofs dies, I created a symbolic link in /home to the folder as well.
This seems to work fine, however, before I push this configuration to the rest of the systems on my network, I'd like to know if there are any potential issues/drawbacks to doing this. Is this opaque to software accessing the /home paths? Or can this cause problems?
A workaround would be to leave the local folders there and manually have autofs do direct mounts for all the other home folders but it creates more work when adding/deleting accounts.
example, I want /home/local to remain so:
`
# mkdir /export
# mv /home/local /export/.
# ln -s /export/local /home/local
`
auto.master
`
...
/home auto.home
`
auto.home
`
local -bind :/export/local
* -fstype=nfs,rw,sync nfs-server:/home/&
`
when autofs is running, /home/local is available via the bind mount. If autofs is down, it is available via the symbolic link. I believe any software using anything in /home/local should not notice the difference or be affected, but I'm not 100% sure. So my question: are there any disadvantages or implications? **This is really about symbolic links or bind mounts vs. having the real files there. The autofs reference is just for context.**
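One observable difference that needs no autofs or root to demonstrate (hypothetical paths under mktemp): software that canonicalizes paths sees through a symlink but not through a bind mount, so the path such software reports changes:

```shell
# Symlink case: canonicalization reveals the real location.
d=$(mktemp -d)
mkdir -p "$d/export/local"
ln -sfn "$d/export/local" "$d/home_local"
readlink -f "$d/home_local"   # prints .../export/local, not .../home_local

# Bind-mount case (root needed, shown for contrast): the /home path is the
# canonical one, so tools keep reporting it.
#   mount --bind /export/local /home/local
#   readlink -f /home/local    # prints /home/local
```

Build systems, daemons that log their working directory, and anything calling realpath() can therefore behave differently under the symlink fallback than under the bind mount.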
eng3
(330 rep)
Sep 15, 2024, 02:05 PM
• Last activity: Sep 15, 2024, 06:56 PM
1
votes
1
answers
714
views
How to mount a local folder with autofs? bind doesnt seem to work
I want all the users to automount from a NFS server except for a few accounts that should use the local home folder. Thus each client computer (and the nfs server) has a few folders in /home, the rest mount from the nfs server into /home as well. Is there a way to accomplish this? As I understand it...
I want all the users to automount from a NFS server except for a few accounts that should use the local home folder. Thus each client computer (and the nfs server) has a few folders in /home, the rest mount from the nfs server into /home as well. Is there a way to accomplish this?
As I understand it, autofs will basically put a mount over /home, thus obscuring anything in the "real" /home. So I would need to move the local folders out, put them elsewhere (e.g. /export), and then do a bind mount. I could also create a symbolic link in the real /home so I'm covered if autofs breaks down. (I know another workaround is to mount elsewhere instead of /home, but it is important that all the home folders are in /home due to some legacy software issues.)
auto.master
`
...
/- auto.local
/home auto.home
`
auto.home
`
* -fstype=nfs,rw,sync nfs-server:/home/&
`
auto.local
`
/home/local -fstype=bind :/export/local
`
Unfortunately this doesn't work. I get the error:
`
do_mount_autofs_direct: failed to create ioctl fd for /home/local
`
I found out this is because I have a /home/local symbolic link to /export/local. I'm not sure why this would cause a problem. Upon deleting it, it still gives me an error:
`
handle_packet_missing_direct: failed to create ioctl fd for /home/local
`
As a sanity check, I did confirm that manually doing the bind mount does succeed.
I'd appreciate any recommendations. I suppose another workaround would be to somehow arrange for a symbolic link to be created whenever autofs starts up, but I'm not sure how to do that.
I am on an old system, Ubuntu 18; the autofs version is 5.1.2.
The only way I found to do this is to create direct mounts for all my users. However, that has to be updated every time a user is added or deleted, which is not ideal.
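For reference, the indirect-map form used in the related question above sidesteps the direct-mount machinery entirely: keys in an indirect map are relative to the map's mount point, so the bind entry can sit next to the wildcard in auto.home (a sketch; whether autofs 5.1.2 accepts it alongside the wildcard is an assumption to verify):

```none
# auto.home -- keys are relative to /home
local -fstype=bind :/export/local
*     -fstype=nfs,rw,sync nfs-server:/home/&
```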
eng3
(330 rep)
Sep 14, 2024, 02:12 PM
• Last activity: Sep 14, 2024, 09:12 PM
0
votes
0
answers
545
views
Docker Compose not synchronising file changes in volume
Reposting from [here](https://forums.docker.com/t/docker-compose-not-synchronising-file-changes-in-volume/79177) as I don't quite understand how the "solution" works. **Symptom:** As reported [here](https://forums.docker.com/t/docker-compose-not-synchronising-file-changes-in-volume/79177): I mount m...
Reposting from [here](https://forums.docker.com/t/docker-compose-not-synchronising-file-changes-in-volume/79177) as I don't quite understand how the "solution" works.
**Symptom:**
As reported [here](https://forums.docker.com/t/docker-compose-not-synchronising-file-changes-in-volume/79177) :
I mount my local files into the container for development. My
docker-compose.yml
file is as follows:
version: '3.7'
services:
node:
build: .
command: npm run dev
image: node
ports:
- "4000:3000"
volumes:
- .:/code
working_dir: /code
When I run the Next.js server in a container, it will initially load fine, but any changes made afterwards will not be shown.
This matches my observation as well.
**Answers:**
There has been discussion of issues with running Docker on Windows 10 Pro, but I'm hosting on Linux for Linux, and the "solution" mentions nothing about Windows either.
**Solution:**
The reported working "solution" is:
version: '3.7'
services:
node:
build: .
command: npm run dev
image: node
ports:
- "4000:3000"
volumes:
- type: bind
source: .
target: /code
working_dir: /code
This, to my knowledge, is the same as what the OP used, i.e.:
volumes:
- .:/code
So I think the question of why _"docker Compose not synchronising file changes in volume"_ was not clearly answered, and neither do any of the following pages answer it:
- https://stackoverflow.com/questions/44678042/named-docker-volume-not-updating-using-docker-compose
- https://stackoverflow.com/questions/44251094/i-want-to-share-code-content-across-several-containers-using-docker-compose-volu/44265470
- https://stackoverflow.com/questions/42958573/docker-compose-recommended-way-to-use-data-containers
My case:
I have a php site hosted on nginx:
version: '3.7'
services:
web:
image: nginx:latest
ports:
- "80:8081"
volumes:
- ./nginx.conf:/etc/nginx/conf.d/default.conf:delegated
- ./app:/app:delegated
php:
build:
context: .
dockerfile: PHP.Dockerfile
volumes:
- ./app:/app:delegated
restart: always
This is my observation:
- I run docker compose up
- Then press Ctrl-C to stop it
- I update the code on the host
- I run docker compose up again
- I check the modified volume-mounted file within the container
- The file stays the same as before the change
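If you exec into the container and the file on disk really is unchanged, the mount itself is suspect (check that the compose project directory is the one you are editing). If the file *has* changed but the dev server does not reload, the watcher is the suspect: inotify events sometimes fail to cross bind mounts, and a commonly suggested workaround is to force polling. The variable names below are understood by chokidar- and webpack-based dev servers such as Next.js; treat them as assumptions to verify for your stack:

```yaml
services:
  node:
    environment:
      - CHOKIDAR_USEPOLLING=true   # chokidar-based watchers poll instead of using inotify
      - WATCHPACK_POLLING=true     # webpack 5 / newer Next.js watcher
```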
xpt
(1858 rep)
Aug 15, 2024, 02:01 PM
• Last activity: Aug 15, 2024, 02:06 PM
Showing page 1 of 20 total questions