
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

14 votes
1 answer
5016 views
How can I make a device available inside a systemd-nspawn container with user namespacing?
I would like to mount an encrypted image file using cryptsetup inside a systemd-nspawn container. However, I get this error message:

[root@container ~]# echo $key | cryptsetup -d - open luks.img luks
Cannot initialize device-mapper. Is dm_mod kernel module loaded?
Cannot use device luks, name is invalid or still in use.

The dm_mod kernel module is loaded on the host system, although things look a bit weird inside the container:

[root@host ~]# grep dm_mod /proc/modules
dm_mod 159744 2 dm_crypt, Live 0xffffffffc12c6000
[root@container ~]# grep dm_mod /proc/modules
dm_mod 159744 2 dm_crypt, Live 0x0000000000000000

strace indicates that cryptsetup is unable to create /dev/mapper/control:

[root@etrial ~]# echo $key | strace cryptsetup -d - open luks.img luks 2>&1 | grep mknod
mknod("/dev/mapper/control", S_IFCHR|0600, makedev(0xa, 0xec)) = -1 EPERM (Operation not permitted)

I am not too sure why this is happening. I am starting the container with the systemd-nspawn@.service template unit, which seems like it should allow access to the device mapper:

# nspawn can set up LUKS encrypted loopback files, in which case it needs
# access to /dev/mapper/control and the block devices /dev/mapper/*.
DeviceAllow=/dev/mapper/control rw
DeviceAllow=block-device-mapper rw

Reading this comment on a related question about USB devices, I wondered whether the solution was to add a bind mount for /dev/mapper. However, cryptsetup gives me the same error message inside the container. When I strace it, it looks like there's still a permissions issue:

# echo $key | strace cryptsetup open luks.img luks --key-file - 2>&1 | grep "/dev/mapper"
stat("/dev/mapper/control", {st_mode=S_IFCHR|0600, st_rdev=makedev(0xa, 0xec), ...}) = 0
openat(AT_FDCWD, "/dev/mapper/control", O_RDWR) = -1 EACCES (Permission denied)
# ls -la /dev/mapper
total 0
drwxr-xr-x 2 nobody nobody  60 Dec 13 14:33 .
drwxr-xr-x 8 root   root   460 Dec 15 14:54 ..
crw------- 1 nobody nobody 10, 236 Dec 13 14:33 control

Apparently, this is happening because the template unit enables user namespacing, which I want anyway for security reasons. As explained in the documentation:

>In most cases, using --private-users=pick is the recommended option as it enhances container security massively and operates fully automatically in most cases ... [this] is the default if the systemd-nspawn@.service template unit file is used ...
>
>Note that when [the --bind option] is used in combination with --private-users, the resulting mount points will be owned by the nobody user. That's because the mount and its files and directories continue to be owned by the relevant host users and groups, which do not exist in the container, and thus show up under the wildcard UID 65534 (nobody). If such bind mounts are created, it is recommended to make them read-only, using --bind-ro=.

Presumably I won't be able to do anything with read-only permissions to /dev/mapper. So, is there any way I can get cryptsetup to work inside the container, so that my application can create and mount arbitrary encrypted volumes at runtime, without disabling user namespacing?

## Related questions

* systemd-nspawn: file-system permissions for a bound folder relates to files rather than devices, and the only answer just says that "-U is mostly incompatible with rw --bind."
* systemd-nspawn: how to allow access to all devices doesn't deal with user namespacing and there are no answers.
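For anyone who wants to experiment with the device-access side of this, the template unit's settings can be extended with a drop-in rather than edited in place. The sketch below only illustrates that mechanism; the container name mycontainer, the extra Bind= line and the repeated DeviceAllow= lines are assumptions, not a confirmed fix for the user-namespacing ownership problem described above:

```bash
# Hypothetical drop-in for the container's template unit instance.
sudo mkdir -p /etc/systemd/system/systemd-nspawn@mycontainer.service.d
sudo tee /etc/systemd/system/systemd-nspawn@mycontainer.service.d/device-mapper.conf <<'EOF'
[Service]
DeviceAllow=/dev/mapper/control rw
DeviceAllow=block-device-mapper rw
EOF

# Bind the host's /dev/mapper into the container via its .nspawn file.
sudo tee -a /etc/systemd/nspawn/mycontainer.nspawn <<'EOF'
[Files]
Bind=/dev/mapper
EOF

sudo systemctl daemon-reload
sudo machinectl reboot mycontainer   # restart the container to pick up both changes
```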
sjy (956 rep)
Dec 15, 2019, 02:53 AM • Last activity: Jul 31, 2025, 03:10 AM
1 vote
2 answers
2643 views
Using iptables to redirect all docker outbound traffic back into container
I've been stuck on this problem all day and am keeping my fingers crossed that some iptables expert reads this and can help me. I would like to force all my docker containers' outbound traffic to go through a SOCKS5 proxy. This is the closest I've come:
iptables -t nat -N REDSOCKS
iptables -t nat -A REDSOCKS -s 172.20.0.0/16 -d 0.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -s 172.20.0.0/16 -d 10.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -s 172.20.0.0/16 -d 127.0.0.0/8 -j RETURN
iptables -t nat -A REDSOCKS -s 172.20.0.0/16 -d 169.254.0.0/16 -j RETURN
iptables -t nat -A REDSOCKS -s 172.20.0.0/16 -d 172.16.0.0/12 -j RETURN
iptables -t nat -A REDSOCKS -s 172.20.0.0/16 -d 192.168.0.0/16 -j RETURN
iptables -t nat -A REDSOCKS -s 172.20.0.0/16 -d 224.0.0.0/4 -j RETURN
iptables -t nat -A REDSOCKS -s 172.20.0.0/16 -d 240.0.0.0/4 -j RETURN

iptables -t nat -A REDSOCKS -s 172.20.0.0/16 -p tcp -j DNAT --to-destination 172.17.0.1:12345
iptables -t nat -A OUTPUT -s 172.20.0.0/16 -j REDSOCKS
iptables -t nat -A PREROUTING -s 172.20.0.0/16 -j REDSOCKS
It works almost perfectly, but the socks5 proxy is unable to tell the originating IP address. The remote address is always '127.0.0.1'. Is there any way I can keep the originating IP address?

# Example Scenario

1) I have applied the iptables rules above to my docker host
2) I have a docker container with the address 172.20.0.2
3) Inside that container, I do a curl to example.com
4) The traffic is forwarded to 172.17.0.1:12345 (the docker host machine)
5) The server running on 12345 shows the remote IP address as being '127.0.0.1'
6) I would like the remote IP address to show as 172.20.0.2

Thanks to anyone who can try and help me with this.
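Not a fix, but when debugging a setup like this it helps to confirm which source address the NAT rules actually see before the DNAT rewrites anything. A minimal sketch (chain name and subnet taken from the rules above; the LOG prefix is arbitrary):

```bash
# Log packets entering the REDSOCKS chain ahead of the DNAT rule, so the
# kernel log shows the original SRC= address for each proxied connection.
iptables -t nat -I REDSOCKS 1 -s 172.20.0.0/16 -p tcp -j LOG --log-prefix "redsocks-pre-dnat: "

# Watch the kernel log while repeating the curl from the container.
journalctl -kf | grep redsocks-pre-dnat
```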
Mark (231 rep)
Oct 5, 2020, 10:16 AM • Last activity: Jul 26, 2025, 08:08 PM
33 votes
12 answers
29787 views
Process descendants
I'm trying to build a process container. The container will trigger other programs, for example a bash script that launches background tasks with '&'. The important feature I'm after is this: when I kill the container, everything that has been spawned under it should be killed. Not just direct children, but their descendants too.

When I started this project, I mistakenly believed that when you killed a process its children were automatically killed too. I've sought advice from people who had the same incorrect idea. While it's possible to catch a signal and pass the kill on to children, that's not what I'm looking for here.

I believe what I want to be achievable, because when you close an xterm, anything that was running within it is killed unless it was nohup'd. This includes orphaned processes. That's what I'm looking to recreate. I have an idea that what I'm looking for involves unix sessions.

If there were a reliable way to identify all the descendants of a process, it would be useful to be able to send them arbitrary signals, too, e.g. SIGUSR1.
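A common building block for this kind of "kill the whole tree" behaviour is to run the workload in its own session, so that everything it spawns shares a process group that can be signalled as a whole. A rough sketch (./workload.sh is a placeholder; this does not catch descendants that call setsid themselves, which is where cgroups or a session-based approach come in):

```bash
#!/bin/sh
# Start the workload as a new session leader; its PID is also its process group ID.
setsid ./workload.sh &
pgid=$!

# ... later: signal the entire group, first politely, then forcefully.
kill -TERM -- "-$pgid" 2>/dev/null
sleep 5
kill -KILL -- "-$pgid" 2>/dev/null
```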
Craig Turner (430 rep)
Jun 11, 2011, 10:59 AM • Last activity: Jul 24, 2025, 05:07 PM
0 votes
1 answer
2309 views
How can I run the sudo command in Python code under CentOS in Docker
I am trying to access the docker image labels from Python as follows:

hostname = socket.gethostname()
cmd = "sudo curl --unix-socket /var/run/docker.sock http:/containers/" + hostname + "/json"
output = os.popen(cmd).read()

But the thing is, I am getting the following error:

We trust you have received the usual lecture from the local System Administrator. It usually boils down to these three things:
#1) Respect the privacy of others.
#2) Think before you type.
#3) With great power comes great responsibility.
sudo: no tty present and no askpass program specified

It's one of those fancy messages from Unix that I read about in some other posts on Stack Overflow. I am following the link below:
https://stackoverflow.com/questions/37439887/how-to-access-the-metadata-of-a-docker-container-from-a-script-running-inside-th
The only thing is, I want to run these things from Python, not from the terminal. Also, FYI, I do get the response when I run the same command from the terminal. I tried appending the following piece to the Dockerfile:

RUN echo "root ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

Thanks
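One thing worth checking first is whether sudo is needed at all: if the socket bind-mounted into the container is readable and writable by the user the Python process runs as, the same request works without sudo and the askpass error disappears. A small shell sketch to verify this from inside the container (paths as in the question):

```bash
# Who owns the socket, and who am I?
ls -l /var/run/docker.sock
id

# If permissions allow it, the query works without sudo (no tty or askpass needed).
curl --unix-socket /var/run/docker.sock "http://localhost/containers/$(hostname)/json"
```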
jaruto (1 rep)
Jan 29, 2019, 07:29 PM • Last activity: Jul 20, 2025, 11:07 PM
0 votes
0 answers
41 views
Can't use distrobox due to permission error. Podman behaves weirdly
# Prerequisites

Alpine Linux Edge
~ $ podman --version
podman version 5.5.2
~ $ distrobox --version
distrobox: 1.8.1.2
~ $ mount|grep ^cgroup|awk '{print $1}'|uniq
cgroup2
I followed the steps in the Alpine Wiki for setting up distrobox and podman for rootless usage.

# What is happening

The block below is the primary issue I'm running into.
~ $ distrobox create --name debox --image debian:latest
Creating 'debox' using image debian:latest	[ OK ]
Distrobox 'debox' successfully created.
To enter, run:

distrobox enter debox

~ $ distrobox enter debox
Error: unable to start container "409500222cb9ecfb488522e1d0a13046e68408fcb62a9dcfb52ae88bda0816c0": runc: runc create failed: unable to start container process: unable to apply cgroup configuration: rootless needs no limits + no cgrouppath when no permission is granted for cgroups: mkdir /sys/fs/cgroup/409500222cb9ecfb488522e1d0a13046e68408fcb62a9dcfb52ae88bda0816c0: permission denied: OCI permission denied
I've attempted to create the folder distrobox tries to create, and to give my user full permissions on it, to no avail. The same error occurs. Launching this container with just podman outputs the same error. Meanwhile, starting a similar container with podman seemingly works.
~ $ distrobox rm debox
# output omitted
~ $ podman create --name debox -i debian:latest
62f2044c8bb7e86b4a78bd48e7f0c66c1071924a3bc65c0d49519ca399753d9c
~ $ podman start debox
debox
As indicated by podman stats the container is up and running:
ID            NAME        CPU %       MEM USAGE / LIMIT  MEM %       NET IO      BLOCK IO           PIDS        CPU TIME         AVG CPU %
62f2044c8bb7  debox       23.49%      0B / 7.182GB       0.00%       0B / 796B   2.876GB / 1.516GB  0           1h22m26.154492s  6227.30%
It starts out showing an impossibly high CPU percentage, hence the high average CPU use. Probably irrelevant to the issue. After attaching to the container there is no prompt. Detaching to exit via ctrl+p, ctrl+q is impossible. Attempting to stop the container forces podman to resort to SIGKILL. The container will not appear in podman ps afterwards (it did before), but it can still be launched, and the same behaviour as above repeats:
~ $ podman stop debox
WARN StopSignal SIGTERM failed to stop container debox in 10 seconds, resorting to SIGKILL 
debox
~ $ podman ps
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
~ $ podman start debox
debox
# What I want

Just distrobox enter debox and use the container as intended.
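Not an answer, but when chasing rootless cgroup errors like this it helps to see what podman itself has detected about the cgroup setup. A short diagnostic sketch (plain podman subcommands; nothing Alpine-specific is assumed):

```bash
# Show the cgroup manager, cgroup version and OCI runtime podman detected.
podman info --debug | grep -iE 'cgroup|runtime'

# Check that cgroup v2 controllers are visible and whether the unprivileged
# user can write anywhere under /sys/fs/cgroup.
cat /sys/fs/cgroup/cgroup.controllers
ls -ld /sys/fs/cgroup
```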
mcv_dev (101 rep)
Jul 18, 2025, 06:05 PM
0 votes
1 answer
52 views
create container with a tcp server socket inside from an outside app (as non root user)
I have an application and want to start Firefox such that all network traffic from Firefox goes through the application, which does *magic* (it doesn't really matter what it does). The idea I have is to open a TCP socket and set an HTTP proxy localhost:port for Firefox. That way all (relevant) network traffic from Firefox is redirected to the application. So far that works just fine. My problem now is that this opens a port in the system usable by any user to access the *magic* as well. And that gets me to my problem. To protect the TCP socket I want to start Firefox in a container with, for example, bubblewrap. Again this works fine, but now Firefox can't communicate with the application. How do I open a TCP socket in the application (which is outside the container) that is then reachable only inside the container? Or is there some other way to create a socket only Firefox can access?
Goswin von Brederlow (150 rep)
Jul 18, 2025, 12:18 AM • Last activity: Jul 18, 2025, 09:45 AM
0 votes
1 answer
92 views
Upgraded k8 worker node from ubuntu 20.04 to 22.04. DNS resolution/networking inside pods doesn’t work & pods keep crashing/restarting
I have a k8s cluster based on Ubuntu 20.04, with 1 master and 3 worker nodes. I drained one of the worker nodes and put the kubectl, iptables, kubeadm, kubelet and containerd packages on hold. The OS upgrade to 22.04 went smoothly, but after the upgrade the pods (kube-system daemon-sets) kept crashing. One of the issues I found is that DNS resolution is not working inside pods residing on the upgraded node. When I revert back to Ubuntu 20.04 everything works fine. Any help/suggestions please?
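Not an answer, but one frequent culprit when a node moves from Ubuntu 20.04 to 22.04 is the switch to cgroup v2, which requires containerd and kubelet to agree on the systemd cgroup driver. A hedged diagnostic sketch for the upgraded worker (paths are the stock ones and <node-name> is a placeholder):

```bash
# Which cgroup hierarchy is the node on after the upgrade? "cgroup2fs" means v2.
stat -fc %T /sys/fs/cgroup/

# Is containerd configured with the systemd cgroup driver?
grep -n 'SystemdCgroup' /etc/containerd/config.toml

# Are the CNI, kube-proxy and CoreDNS pods healthy on this node?
kubectl get pods -n kube-system -o wide | grep <node-name>
```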
Muhammad Saeed (31 rep)
Mar 2, 2025, 02:50 PM • Last activity: Jul 8, 2025, 08:13 PM
0 votes
1 answer
96 views
Is there a specialized OS for container orchestration?
Containers are intended to solve the "it worked on my machine" problem. Thus, the blueprint of containers has two compatibility requirements: the OS and the architecture. We often see a container image like linux/amd64, windows/amd64, linux/aarch64, etc. But, as container orchestration joins the pic...
Containers are intended to solve the "it worked on my machine" problem. Thus, the blueprint of containers has two compatibility requirements: the OS and the architecture. We often see a container image like linux/amd64, windows/amd64, linux/aarch64, etc. But, as container orchestration joins the picture, we all agree that the worker node or master node shouldn't use an OS other than Linux—like Windows—due to the overhead they introduce. Moreover, why did we introduce Virtual Machine (VM) technology? Can't we just directly use a single physical machine as a node? I mean, isn't it redundant and needless overhead if VMs run inside a single large physical machine where the hypervisor is installed? Specifically, a bare-metal hypervisor where the hypervisor itself is treated as a specialized OS. I mean, when that physical machine goes down, all corresponding VMs would go down too, right? So, instead of a hypervisor of the bare-metal type, can we just replace it directly with a container orchestrator? Thus, the container orchestrator itself can be viewed as a master node or worker node (it's switchable). I mean, that physical machine can switch roles as a master or a worker.
Muhammad Ikhwan Perwira (319 rep)
Jul 8, 2025, 11:53 AM • Last activity: Jul 8, 2025, 01:51 PM
0 votes
0 answers
49 views
COPY/ADD from host absolute path in podman/dockerfile
The Dockerfile documentation states that the `<src>` argument of COPY and ADD is relative to the context (the location of the Dockerfile). If I have third-party dependencies located in /usr/local or /opt/, it seems that I am forced to use relative paths in my Dockerfile unless I want to copy and paste those third-party libraries multiple times, once for each container project that uses them. For this reason, I have started to install or unpack all third-party software one level above where I store all my various projects' application code (i.e. instead of /usr/local). I'm wondering if anyone has a different solution for building container images that doesn't involve copying and pasting, hard-coding relative paths, or installing libraries in non-standard locations.

**Edit**

Using relative paths is entirely disallowed, it seems. If your third-party dependencies live above your Dockerfile, i.e.

COPY ../third-party/some-lib/ /opt/

you get the error:

possible escaping context directory error... ... no such file or directory

So, I really have to copy my third-party libraries however many times I want to use them in separate projects.
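One direction that avoids both copying and relative paths, at least with Docker's BuildKit builder, is a named additional build context; whether a given podman/buildah version supports the same flag is worth verifying, so treat this as a hedged sketch rather than a guaranteed recipe (the context name thirdparty and the paths are examples):

```bash
# Build with an extra named context pointing at the shared third-party tree.
docker buildx build --build-context thirdparty=/opt/third-party -t myimage .

# Inside the Dockerfile the named context is then addressable via --from:
#   COPY --from=thirdparty some-lib/ /opt/some-lib/
```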
rocksNwaves (121 rep)
Jun 26, 2025, 04:15 PM • Last activity: Jun 26, 2025, 04:29 PM
1 vote
2 answers
4196 views
How to run docker inside an lxc container?
I have an unprivileged lxc container on an Arch host, created like this:

lxc-create -n test_arch11 -t download -- --dist archlinux --release current --arch amd64

And it doesn't run docker. What I did inside the container:

1. Installed docker from the Arch repos: pacman -S docker
2. Tried to run a hello-world container: docker run hello-world
3. Got this error:

> docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mkdir /sys/fs/cgroup/cpuset/docker: permission denied\"": unknown.
> ERRO error waiting for container: context canceled

What is wrong and how do I make docker work inside the container?
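For plain LXC (as opposed to LXD), nesting usually has to be switched on explicitly in the container's config. The lines below are a hedged sketch of the usual knobs (key names are for LXC 3.x and later, the config path assumes the default storage location, and setting the AppArmor profile to unconfined weakens isolation):

```bash
# On the host: append nesting-related settings, then restart the container.
cat >> /var/lib/lxc/test_arch11/config <<'EOF'
# Allow nested container runtimes (cgroup/proc/sys mounts inside the container)
lxc.include = /usr/share/lxc/config/nesting.conf
lxc.apparmor.profile = unconfined
EOF

lxc-stop -n test_arch11
lxc-start -n test_arch11
```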
devalone (111 rep)
Oct 27, 2019, 04:49 PM • Last activity: Jun 9, 2025, 04:09 AM
1 vote
1 answer
117 views
Why is this docker container process not triggering a mount for my systemd auto-mounted drive?
I've been struggling to make sense of something, so would appreciate some help. I am mounting a remote NFS drive onto my Debian system with the following fstab entry which uses the systemd automounter, and is set to auto-unmount after 120 seconds of inactivity:
192.168.0.67:/mnt/SSD_240GB/backups/TIG_backups  /mnt/nfs/SSD_240GB/backups/TIG_backups   nfs auto,_netdev,bg,soft,x-systemd.automount,x-systemd.idle-timeout=120,timeo=14,nofail,noatime,nolock,tcp,actimeo=1800 0 0
Now on this Debian host system I am running a docker container (Telegraf) to monitor some metrics of the Debian host. To facilitate this, I am bind-mounting the host filesystem and proc directory (as recommended here in the docs), as well as bind-mounting the NFS drive. The docker run command looks like this:
docker run -d \
--name telegraf_container \
--user 1001:1001 \
--network docker_monitoring_network \
--mount type=bind,source=/,destination=/hostfs \
--mount type=bind,source=/mnt/nfs/SSD_240GB/backups/TIG_backups/telegraf_backups,destination=/mnt/nfs/SSD_240GB/backups/TIG_backups/telegraf_backups \
-e HOST_MOUNT_PREFIX=/hostfs \
-e HOST_PROC=/hostfs/proc \
telegraf:latest
I am using the Telegraf Disk Input plugin because I want to gather disk usage metrics once every hour for the NFS drive (used, free, total). The problem is that the disk is unmounted automatically 120s after system boot as expected, *but it is never remounted*. I would expect the telegraf container to trigger an automount every hour. The reason I expect this is because the container is essentially running a .go program (as seen here in the source code) which is querying the filesystem. I believe under the hood it is calling some .go libraries (here and here ), which are essentially calling statfs(). I was under the impression that statfs() should trigger a systemd automount. Here in the Debian host's logs, I can see the NFS drive mounting correctly at boot up, and then unmounting after a couple of minutes automatically (but then it never remounts):
root@docker-debian:/home/monitoring/docker_files/scripts# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.automount -b
Jun 05 13:54:12 docker-debian systemd[1] : Set up automount mnt-nfs-SSD_240GB-backups-TIG_backups.automount.
Jun 05 13:54:18 docker-debian systemd[1] : mnt-nfs-SSD_240GB-backups-TIG_backups.automount: Got automount request for /mnt/nfs/SSD_240GB/backups/TIG_backups, triggered by 893 (runc:[2:INIT])

root@docker-debian:/home/monitoring/docker_files/scripts# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.mount -b
Jun 05 13:54:18 docker-debian systemd[1] : Mounting mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups...
Jun 05 13:54:18 docker-debian systemd[1] : Mounted mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups.
Jun 05 13:57:39 docker-debian systemd[1] : Unmounting mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups...
Jun 05 13:57:39 docker-debian systemd[1] : mnt-nfs-SSD_240GB-backups-TIG_backups.mount: Deactivated successfully.
Jun 05 13:57:39 docker-debian systemd[1] : Unmounted mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups.
After the drive has auto-unmounted, it is missing from the host as expected:
monitoring@docker-debian:/$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
udev              983908       0    983908   0% /dev
tmpfs             201420     816    200604   1% /run
/dev/sda1       15421320 4779404   9836748  33% /
tmpfs            1007084       0   1007084   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs             201416       0    201416   0% /run/user/1001
But it is present in the container:
monitoring@docker-debian:/$ docker exec -it telegraf_container df
Filesystem                                                       1K-blocks     Used Available Use% Mounted on
overlay                                                           15421320  4779404   9836748  33% /
tmpfs                                                                65536        0     65536   0% /dev
shm                                                                  65536        0     65536   0% /dev/shm
/dev/sda1                                                         15421320  4779404   9836748  33% /hostfs
udev                                                                983908        0    983908   0% /hostfs/dev
tmpfs                                                              1007084        0   1007084   0% /hostfs/dev/shm
tmpfs                                                               201420      820    200600   1% /hostfs/run
tmpfs                                                                 5120        0      5120   0% /hostfs/run/lock
192.168.0.67:/mnt/SSD_240GB/backups/TIG_backups/telegraf_backups 229608448 42336256 175535104  20% /mnt/nfs/SSD_240GB/backups/TIG_backups/telegraf_backups
tmpfs                                                              1007084        0   1007084   0% /proc/acpi
tmpfs                                                              1007084        0   1007084   0% /sys/firmware
tmpfs                                                               201416        0    201416   0% /hostfs/run/user/1001
In case it's relevant, the Telegraf config is here:
# GLOBAL SETTINGS
[agent]
  hostname = "docker-debian"
  flush_interval = "60m"
  interval = "60m"

# COLLECT DISK USAGE OF THIS VM
[[inputs.disk]]
  mount_points = ["/", "/mnt/nfs/SSD_240GB/backups/TIG_backups"]  # Only these will be checked
  fieldpass = [ "free", "total", "used", "used_percent" ]
  ignore_fs = ["tmpfs", "devtmpfs", "devfs", "iso9660", "overlay", "aufs", "squashfs"]

# VIEW COLLECTED METRICS
[[outputs.file]]
  files = ["stdout"]
Why is the container not triggering an automount, which leads to it not being able to collect the metrics on the drive?

---

**EDIT**

After the answer from @grawity, I did a simpler check:

- I removed the idle timeout (by setting x-systemd.idle-timeout=0)
- I removed explicit bind-mounts for these drives from the docker run command

In this situation, I found the following:

1) Immediately after boot, an automount is set up, but nothing triggered it yet, as expected:
root@docker-debian:# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.automount -b
Jun 06 12:22:20 docker-debian systemd[1] : Set up automount mnt-nfs-SSD_240GB-backups-TIG_backups.automount.

root@docker-debian:# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.mount -b
-- No entries --
2) I start a simple container up, with no explicit bind mounts for those drives (only the hostfs structure) :
docker run -d \
--name telegraf_container \
--mount type=bind,source=/,destination=/hostfs \
-e HOST_MOUNT_PREFIX=/hostfs \
-e HOST_PROC=/hostfs/proc \
telegraf:latest
This still does not trigger any automounts on the host. 3) Now I manually trigger an automount on the host by accessing the drive:
ls /mnt/nfs/SSD_240GB/backups/TIG_backups/
The automount is triggered and mounts the drive successfully:
root@docker-debian:# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.automount -b
Jun 06 12:22:20 docker-debian systemd[1] : Set up automount mnt-nfs-SSD_240GB-backups-TIG_backups.automount.
Jun 06 12:35:20 docker-debian systemd[1] : mnt-nfs-SSD_240GB-backups-TIG_backups.automount: Got automount request for /mnt/nfs/SSD_240GB/backups/TIG_backups, triggered by 936 (ls)

root@docker-debian:# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.mount -b
Jun 06 12:35:21 docker-debian systemd[1] : Mounting mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups...
Jun 06 12:35:21 docker-debian systemd[1] : Mounted mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups.
Interestingly, the mounted drive now *automatically* appears inside the container (even though no bind-mounts have been used), but it appears under /hostfs instead:
monitoring@docker-debian:~$ docker exec -it telegraf_container df
Filesystem                                     1K-blocks    Used Available Use% Mounted on
overlay                                         15421320 4686888   9929264  33% /
tmpfs                                              65536       0     65536   0% /dev
shm                                                65536       0     65536   0% /dev/shm
/dev/sda1                                       15421320 4686888   9929264  33% /hostfs
udev                                              983908       0    983908   0% /hostfs/dev
tmpfs                                            1007084       0   1007084   0% /hostfs/dev/shm
tmpfs                                             201420     656    200764   1% /hostfs/run
tmpfs                                               5120       0      5120   0% /hostfs/run/lock
tmpfs                                             201416       0    201416   0% /hostfs/run/user/1001
tmpfs                                            1007084       0   1007084   0% /proc/acpi
tmpfs                                            1007084       0   1007084   0% /sys/firmware
192.168.0.67:/mnt/SSD_240GB/backups/TIG_backups  16337920 5799936   9682944  38% /hostfs/mnt/nfs/SSD_240GB/backups/TIG_backups
If I unmount the drive directly on the host (using umount), then it disappears from the container again. 4) I repeated this but instead using an idle timeout of 2mins now. What I found was that having the docker container running *prevents* the autounmount after 2 mins from happening (even though the container does not explicitly bind-mount in the drive, but instead appears automatically in the container under /hostfs). If I stop and remove the container, then the idle timeout unmounts the drive after the 2mins:
root@docker-debian:# journalctl -u mnt-nfs-SSD_240GB-backups-TIG_backups.mount -b
Jun 06 12:49:40 docker-debian systemd[1] : Mounting mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups...
Jun 06 12:49:41 docker-debian systemd[1] : Mounted mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups.
Jun 06 13:10:28 docker-debian systemd[1] : Unmounting mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups...
Jun 06 13:10:28 docker-debian systemd[1] : mnt-nfs-SSD_240GB-backups-TIG_backups.mount: Deactivated successfully.
Jun 06 13:10:28 docker-debian systemd[1] : Unmounted mnt-nfs-SSD_240GB-backups-TIG_backups.mount - /mnt/nfs/SSD_240GB/backups/TIG_backups.
This makes me think a couple of things:

- If I want to use telegraf to monitor drives that are mounted on the host, I don't need to bind mount them in explicitly, because they are present already due to the /hostfs bind-mount.
- I should never see what I was originally expecting - namely, a drive automatically unmounting due to the idle timeout, and then the container triggering a remount. Because I observed above that once a drive has been mounted in (in my case at /hostfs), the container actually prevents it from ever being auto-unmounted.
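As a side note for anyone reproducing the statfs-vs-automount behaviour above from a shell: stat -f issues a statfs() call, while ls reads the directory, so comparing the two against the same path (with the share unmounted) shows which kind of access actually wakes the automount unit. A purely diagnostic sketch, paths and unit names as in the question:

```bash
# statfs() against the (currently unmounted) automount point:
stat -f /mnt/nfs/SSD_240GB/backups/TIG_backups
systemctl is-active mnt-nfs-SSD_240GB-backups-TIG_backups.mount

# directory access against the same path:
ls /mnt/nfs/SSD_240GB/backups/TIG_backups
systemctl is-active mnt-nfs-SSD_240GB-backups-TIG_backups.mount
```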
teeeeee (305 rep)
Jun 5, 2025, 03:04 PM • Last activity: Jun 6, 2025, 01:08 PM
0 votes
1 answer
67 views
How to connect to primary process of a running docker container
I've a docker container running on my Linux host. root@eve-ng-6:~# docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ea25d3afa65d ios-xr/xrd-control-plane:latest "/usr/sbin/init" 2 hours ago Up 2 hours 3e2241f3-f914-456a-891f-671c42fafef3-0-5 root@eve-ng-6:~# root@eve-ng-6:~# systemctl...
I've a docker container running on my Linux host.

root@eve-ng-6:~# docker ps
CONTAINER ID   IMAGE                             COMMAND            CREATED       STATUS       PORTS   NAMES
ea25d3afa65d   ios-xr/xrd-control-plane:latest   "/usr/sbin/init"   2 hours ago   Up 2 hours           3e2241f3-f914-456a-891f-671c42fafef3-0-5
root@eve-ng-6:~#
root@eve-ng-6:~# systemctl status docker.service
● docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/docker.service.d
             └─http-proxy.conf, tcp-sock.conf
     Active: active (running) since Thu 2025-05-29 13:12:13 UTC; 3min 54s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 1265607 (dockerd)
      Tasks: 19
     Memory: 38.5M
        CPU: 1.258s
     CGroup: /system.slice/docker.service
             └─1265607 /usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:4243
root@eve-ng-6:~# ps -ef | grep 1265607
root     1265607       1  0 13:12 ?        00:00:00 /usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:4243
root     1271097  674199  0 13:16 pts/3    00:00:00 grep --color=auto 1265607
root@eve-ng-6:~#

Is there a way to connect to its primary process using a URL like docker://127.0.0.1:4243/ ? Thanks.

Edit: based on a comment received, I'd ask whether it is possible to redirect stdin/stdout of the docker attach command to a TCP socket. Searching for it, I found in the docker attach documentation:

> You can't redirect the standard input of a docker attach command while attaching to a TTY-enabled container (using the -i and -t options).

Does this mean that it is actually not feasible to do that?
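Since the daemon in this setup already listens on tcp://127.0.0.1:4243, the Docker Engine API's attach endpoint is one way to reach the primary process over TCP without the docker CLI. A hedged sketch (container ID taken from the docker ps output above; the caller still has to deal with the hijacked, framed stream the daemon returns):

```bash
# Attach to the container's primary process over the daemon's TCP socket.
curl --no-buffer -X POST \
  "http://127.0.0.1:4243/containers/ea25d3afa65d/attach?stream=1&stdin=1&stdout=1&stderr=1"
```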
CarloC (385 rep)
May 29, 2025, 12:45 PM • Last activity: Jun 4, 2025, 08:26 PM
0 votes
1 answer
1927 views
How can I access the web console of CodeReady Containers, installed on CentOS Docker container, from Host machine?
I have this scenario:

- a HOST machine running Debian that runs docker containers.
- a CentOS docker container that has **CodeReady Containers (CRC)** installed on it. CRC works in the container, via the command line, without problems.

I want to access, from the Host machine, the CRC web console that runs on https://console-openshift-console.apps-crc.testing (on a specific IP in the hosts file). I found this [RedHat guide for accessing CRC remotely](https://www.openshift.com/blog/accessing-codeready-containers-on-a-remote-server/). But how can I apply it to docker container logic? And above all, do I really need it?

----------

I had to make the following **changes to haproxy.conf**:
global
log 127.0.0.1 local0
debug

defaults
log global
mode http
timeout connect 5000
timeout check 5000
timeout client 30000
timeout server 30000

frontend apps
bind CONTAINER_IP:80
bind CONTAINER_IP:443
option tcplog
mode tcp
default_backend apps

backend apps
mode tcp
balance roundrobin
option ssl-hello-chk
server webserver1 CRC_IP:6443 check

frontend api
bind CONTAINER_IP:6443
option tcplog
mode tcp
default_backend api

backend api
mode tcp
balance roundrobin
option ssl-hello-chk
server webserver1 CRC_IP:6443 check
and **enabling forwarding** for the container:
$ sysctl net.ipv4.conf.all.forwarding=1
$ sudo iptables -P FORWARD ACCEPT
I can successfully call the url https://console-openshift-console.apps-crc.testing from the Host machine!!! but I get this error:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
    
  },
  "code": 403
}
Anyway the Network part is solved. Now I don't know why I get this error!
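The 403 for system:anonymous only means the request reached the cluster without credentials; logging in is a separate step from the network plumbing. A hedged sketch using the standard crc and oc clients inside the CentOS container (hostnames are CRC's defaults):

```bash
# Print the kubeadmin password and login hints that crc generated.
crc console --credentials

# Log in against the API endpoint; the web console uses the same credentials.
oc login -u kubeadmin https://api.crc.testing:6443
```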
Kambei (171 rep)
Jul 18, 2020, 11:22 AM • Last activity: Jun 4, 2025, 09:04 AM
0 votes
0 answers
34 views
Bridging containers to external VLAN
I have a physical network with several VLANs. One of my computers (my main workstation) is connected to two different VLANs on this network, one tagged, the other not. I have successfully set this computer up on both VLANs by making a VLAN clone interface, but I discovered that in order to actually receive packets on that interface I had to change the MAC. It seems that the Linux network stack (or maybe the acceleration on the card) looks at the MAC and if it matches, ignores the VLAN tag. I now want to attach this interface to a bridge somehow and then also have containers attach to this same bridge. I know enough about how containers are constructed that I can do this by hand after whatever container system I'm using (podman in this case) sets the container up. The reason I want this is that I'm working on an IPv6 broadcast/multicast protocol that will only work for a local LAN, and in order to test it, I want to have several copies of the servent running in different containers so they can communicate with each other. I've tried this in the obvious way, but none of the packets that are explicitly destined for one of the containers ever makes it to them. I suspect this is because the card or the Linux network stack is just dropping them at the physical interface when their destination MAC doesn't match any of the MACs assigned to the interface. What would be a good way to accomplish this? Should I ask this on Server Fault or Stack Overflow instead?
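A sketch of the usual plumbing for this: enslave the VLAN interface to a bridge (bridge ports run in promiscuous mode, so frames addressed to the containers' MACs are no longer dropped at the physical interface), then give each container a veth leg on that bridge. Interface names and the VLAN ID are assumptions, and the last step is shown manually even though podman would normally manage the container's network namespace itself:

```bash
# Host side: bridge the tagged VLAN interface.
ip link add br-vlan10 type bridge
ip link set eth0.10 master br-vlan10        # eth0.10 = the VLAN clone interface
ip link set br-vlan10 up

# One veth pair per container; the peer end is moved into the container's netns.
ip link add veth-c1 type veth peer name veth-c1-in
ip link set veth-c1 master br-vlan10
ip link set veth-c1 up
ip link set veth-c1-in netns "$(podman inspect -f '{{.State.Pid}}' mycontainer)"
```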
Omnifarious (1412 rep)
Jun 1, 2025, 03:51 AM
0 votes
0 answers
39 views
Docker attach on a container running /sbin/init
I run a Cisco XRd docker container on my Linux Ubuntu host. I run the image using the docker run -it -d command. The image's entrypoint is /usr/sbin/init, indeed:

root@eve-ng-6:/opt/unetlab/html# docker ps
CONTAINER ID   IMAGE                             COMMAND            CREATED        STATUS       PORTS   NAMES
2ed1b2e66197   ios-xr/xrd-control-plane:latest   "/usr/sbin/init"   20 hours ago   Up 3 hours           75bc86e8-108a-4300-863a-2141b5718b55-0-2
root@eve-ng-6:/opt/unetlab/html#

Then by using docker attach, I can attach to the running docker container. As far as I understand, see Comparing Docker Exec and Docker Attach, docker attach does not start any new process within the container but simply attaches the current terminal to the stdin/stdout/stderr of the container's primary process (i.e. to the process running the image's entrypoint). In my case it is /usr/sbin/init from Cisco XRd.

My question: is the /usr/sbin/init process the one actually "talking" with the user (me) by reading and writing to the attached pseudo-terminal?

P.s. note that a pseudo-terminal is allocated by virtue of the -t option in the docker run command.
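A quick way to see which process actually holds the attached pseudo-terminal is to look at the file descriptors of PID 1 inside the container. A small diagnostic sketch (container ID from the docker ps output above; assumes ls is available in the image):

```bash
# If fds 0/1/2 of the container's PID 1 point at a /dev/pts/* or /dev/console
# device, the entrypoint process is indeed reading and writing the terminal.
docker exec 2ed1b2e66197 ls -l /proc/1/fd
```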
CarloC (385 rep)
May 30, 2025, 04:31 PM • Last activity: May 30, 2025, 05:07 PM
6 votes
3 answers
67758 views
Mount NFS - "operation not permitted" in Proxmox container
I'm trying to mount a simple NFS share, but it keeps saying "operation not permitted". The NFS server has the following share. /mnt/share_dir 192.168.7.101(ro,fsid=0,all_squash,async,no_subtree_check) 192.168.7.11(ro,fsid=0,all_squash,async,no_subtree_check) The share seems to be active for both cli...
I'm trying to mount a simple NFS share, but it keeps saying "operation not permitted". The NFS server has the following share.

/mnt/share_dir 192.168.7.101(ro,fsid=0,all_squash,async,no_subtree_check) 192.168.7.11(ro,fsid=0,all_squash,async,no_subtree_check)

The share seems to be active for both clients.

# exportfs -s
/mnt/share_dir  192.168.7.101(ro,async,wdelay,root_squash,all_squash,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,all_squash)
/mnt/share_dir  192.168.7.11(ro,async,wdelay,root_squash,all_squash,no_subtree_check,fsid=0,sec=sys,ro,secure,root_squash,all_squash)

The client 192.168.7.101 can see the share.

$ sudo showmount -e 192.168.7.10
Export list for 192.168.7.10:
/mnt/share_dir 192.168.7.101

192.168.7.101's mount destination:

# ls -lah /mnt/share_dir/
total 8.0K
drwxr-xr-x 2 me   me   4.0K Aug 28 19:21 .
drwxr-xr-x 3 root root 4.0K Aug 28 19:21 ..

When I try to mount the share, the client says "operation not permitted" with either nfs or nfs4 type.

$ sudo mount -vvv -t nfs 192.168.7.10:/mnt/share_dir /mnt/share_dir
mount.nfs: timeout set for Sun Aug 28 21:56:03 2022
mount.nfs: trying text-based options 'vers=4.2,addr=192.168.7.10,clientaddr=192.168.7.101'
mount.nfs: mount(2): Operation not permitted
mount.nfs: trying text-based options 'addr=192.168.7.10'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.7.10 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.7.10 prog 100005 vers 3 prot UDP port 46169
mount.nfs: mount(2): Operation not permitted
mount.nfs: Operation not permitted

I've set fsid=0 and insecure in the export options, but it didn't work. RPCInfo from the client's side:

# rpcinfo -p 192.168.7.10
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  59675  mountd
    100005    1   tcp  37269  mountd
    100005    2   udp  41354  mountd
    100005    2   tcp  38377  mountd
    100005    3   udp  46169  mountd
    100005    3   tcp  39211  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049
    100003    3   udp   2049  nfs
    100227    3   udp   2049
    100021    1   udp  46745  nlockmgr
    100021    3   udp  46745  nlockmgr
    100021    4   udp  46745  nlockmgr
    100021    1   tcp  42571  nlockmgr
    100021    3   tcp  42571  nlockmgr
    100021    4   tcp  42571  nlockmgr

Using another client, *192.168.7.11*, I was able to mount that share with no issues. I can not see any issue or misconfiguration, and could not find a fix anywhere. There's no firewall in the way and both server and client are using Debian 11. Any idea of what's going on?
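Since the title mentions a Proxmox container: unprivileged LXC containers on Proxmox are not allowed to perform NFS mounts unless the corresponding feature is enabled for the container, and the symptom is exactly this kind of EPERM from mount(2). A hedged sketch of the check/enable commands on the Proxmox host (the VMID 101 is a placeholder):

```bash
# See whether NFS mounting is already enabled for the container.
pct config 101 | grep -i features

# Enable it, then restart the container.
pct set 101 --features mount=nfs
pct reboot 101
```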
markfree (425 rep)
Aug 29, 2022, 01:32 AM • Last activity: May 29, 2025, 04:46 PM
0 votes
0 answers
26 views
Incomplete strace output for child processes
So I am writing a program that automatically determines the dependencies of an application and writes a *FROM scratch* dockerfile based on them using *strace*. I was testing it on MariaDB, but it failed because *chmod* was not found.

On the MariaDB GitHub page I can see that there is a [docker-entrypoint.sh](https://github.com/MariaDB/mariadb-docker/blob/master/main/docker-entrypoint.sh) which does a *find .. -exec chown mysql {} \;*, but in the strace output I don't see an *execve("/bin/chown",...)*.

To trace the apps, I am using a *statically-linked strace* binary which I mount into the Docker container running the app, alongside an out.log file which captures the output.

The full command is the following:
run --rm --entrypoint "" -v /usr/local/bin/strace:/usr/bin/strace -v ./out.log:/out.log /usr/bin/strace -s 9999 -fe execve,execveat,open,openat docker-entrypoint.sh mariadbd
However, when I run a test version of the find locally (not in a docker container), I clearly see the execve call to chown.
$ cat test.sh
#!/bin/sh

for file in /.; do
        find $(dirname file) -type f -exec chown $USER:$USER {} \;
done
$ cat out.log
5147  execve("./test.sh", ["./test.sh"], 0x7fff932870b0 /* 18 vars */) = 0
5147  open("./test.sh", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
5148  execve("/usr/bin/dirname", ["dirname", "file"], 0x7f643fa58490 /* 18 vars */) = 0
5148  +++ exited with 0 +++
5147  --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5148, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
5149  execve("/usr/bin/find", ["find", ".", "-type", "f", "-exec", "chown", "root:root", "{}", ";"], 0x7f643fa58618 /* 18 vars */) = 0
5149  open(".", O_RDONLY|O_LARGEFILE|O_CLOEXEC|O_DIRECTORY) = 3
5149  open("./.config", O_RDONLY|O_LARGEFILE|O_CLOEXEC|O_DIRECTORY) = 4
5149  open("./.config/micro", O_RDONLY|O_LARGEFILE|O_CLOEXEC|O_DIRECTORY) = 5
5150  execve("/usr/local/sbin/chown", ["chown", "root:root", "./.config/micro/settings.json"], 0x7ffe6cb2bbf8 /* 18 vars */) = -1 ENOENT (No such file or directory)
5150  execve("/usr/local/bin/chown", ["chown", "root:root", "./.config/micro/settings.json"], 0x7ffe6cb2bbf8 /* 18 vars */) = -1 ENOENT (No such file or directory)
5150  execve("/usr/sbin/chown", ["chown", "root:root", "./.config/micro/settings.json"], 0x7ffe6cb2bbf8 /* 18 vars */) = -1 ENOENT (No such file or directory)
5150  execve("/usr/bin/chown", ["chown", "root:root", "./.config/micro/settings.json"], 0x7ffe6cb2bbf8 /* 18 vars */) = -1 ENOENT (No such file or directory)
5150  execve("/sbin/chown", ["chown", "root:root", "./.config/micro/settings.json"], 0x7ffe6cb2bbf8 /* 18 vars */) = -1 ENOENT (No such file or directory)
5150  execve("/bin/chown", ["chown", "root:root", "./.config/micro/settings.json"], 0x7ffe6cb2bbf8 /* 18 vars */) = 0
5150  open("/etc/passwd", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
5150  open("/etc/group", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = 3
5150  +++ exited with 0 +++
...
Do I need to add *CAP_SYS_PTRACE* to the container when running the strace probe or anything else?
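One thing that might narrow this down: widen the filter so every process-management syscall is traced as well, which makes it obvious whether the children spawned by find are being followed at all inside the container. A hedged sketch of the strace options (same mounted static strace as in the question):

```bash
# %process expands to fork/vfork/clone/execve/exit and friends, so a missing
# chown shows up as a missing clone/execve pair rather than disappearing silently.
/usr/bin/strace -s 9999 -f -o /out.log -e trace=%process,open,openat docker-entrypoint.sh mariadbd
```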
ReGeLePuMa (1 rep)
May 11, 2025, 09:14 PM
2 votes
2 answers
2797 views
LXC ip allocation using DHCP
I'm trying to set up DHCP for my lxcontainers without using lxc-net. The reason for this decision is that I'd like to place my containers in different networks, such that they are unable to talk to each other by default. I have successfully created and run containers using static IPs assigned within the containers' config file before, but I'd like to use a DHCP server on the host this time.

I've installed dnsmasq on my host and configured it like this:

# /etc/dnsmasq.d/dnsmasq.lxcbr.conf
domain=local.lxc,10.10.10.0/24
interface=lxcbr
dhcp-range=lxcbr,10.10.10.1,10.10.10.200,24h
dhcp-option=option:router,10.10.10.254

According to this the file is being loaded correctly:

root@host:~# service dnsmasq status
● dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server
   Loaded: loaded (/lib/systemd/system/dnsmasq.service; enabled)
   [...]
Feb 03 19:06:39 host dnsmasq: dnsmasq: syntax check OK.
Feb 03 19:06:39 host dnsmasq: started, version 2.72 cachesize 150
Feb 03 19:06:39 host dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect
Feb 03 19:06:39 host dnsmasq-dhcp: DHCP, IP range 10.10.10.1 -- 10.10.10.200, lease time 1d
Feb 03 19:06:39 host dnsmasq: reading /etc/resolv.conf
Feb 03 19:06:39 host dnsmasq: using nameserver upstream.nameserver.ip.here#53
Feb 03 19:06:39 host dnsmasq: using nameserver upstream.nameserver.ip.here#53
Feb 03 19:06:39 host dnsmasq: read /etc/hosts - 5 addresses

lxcbr is the host's interface in the container's network:

root@host:~# ifconfig
[...]
lxcbrBind Link encap:Ethernet  HWaddr fe:60:7a:cc:56:64
          inet addr:10.10.10.254  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::7a:56ff:fe82:921f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:92 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5688 (5.5 KiB)  TX bytes:928 (928.0 B)

veth0     Link encap:Ethernet  HWaddr fe:60:7a:cc:56:64
          inet6 addr: fe80::fc60:7aff:fecc:5664/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

veth0 is the container's veth interface:

# /var/lib/lxc/container
lxc.network.type = veth
lxc.network.name = veth0
lxc.network.flags = up
lxc.network.link = lxcbr
lxc.network.veth.pair = veth0

I assume I'm doing something very stupid but I've run out of ideas at this point.

I appreciate your help,
Christopher
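Not an answer, but when DHCP inside a container fails silently, watching the bridge for DHCP traffic quickly shows whether requests ever leave the container and whether dnsmasq replies. A small sketch to run on the host (interface name as configured for dnsmasq above):

```bash
# DHCP uses UDP ports 67 (server) and 68 (client).
tcpdump -ni lxcbr port 67 or port 68

# In parallel, dnsmasq logs DHCPDISCOVER/DHCPOFFER lines when it sees requests.
journalctl -fu dnsmasq
```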
Cyclonit (161 rep)
Feb 3, 2016, 06:20 PM • Last activity: May 3, 2025, 04:02 PM
2 votes
1 answer
3158 views
No loop device in container even with loop module loaded
I am trying to make it possible to create and mount loop devices from within a container. This happens to work on my own development system, but fails on our build server, where it must be done for an automated build. I am ensuring that I'm starting the container as `privileged`. My container start line: docker run --privileged -it --rm :latest /bin/bash. From within the container I try the following steps [from the losetup man page](https://man7.org/linux/man-pages/man8/losetup.8.html):
# dd if=/dev/zero of=/var/tmp/file.img bs=1024k count=4
...
# losetup --show --find /var/tmp/file.img
...
This should provide me with the next unused loop device and associate it with /dev/loop*n*. However, I'm instead presented with the following (which also shows that the loop module is loaded and /dev/loop-control is present):
[root@64a3a6900e0d /]# losetup --show --find /var/tmp/file.img
losetup: Could not find any loop device. Maybe this kernel does not know
       about the loop device? (If so, recompile or modprobe loop.)
[root@64a3a6900e0d /]# ls /dev/loop*
/dev/loop-control
[root@64a3a6900e0d /]# lsmod | grep loop
loop                   28072  0
On my own dev box, this works. I loaded loop and started the container as privileged and was able to make loop devices. What should I check for now?
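Not a full answer, but one detail worth ruling out on the build server: /dev inside the container is populated when the container starts, so if no /dev/loopN nodes exist there, losetup has nothing to hand out even though /dev/loop-control is present. Creating a node by hand is a quick test (loop devices use major number 7, and the minor selects the device index):

```bash
# Inside the privileged container: create loop0 manually and retry.
mknod /dev/loop0 b 7 0
losetup --show --find /var/tmp/file.img
```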
Andrew Falanga (531 rep)
May 23, 2022, 08:13 PM • Last activity: May 3, 2025, 03:04 PM
1 vote
1 answer
452 views
Use adb (android debug bridge) in systemd-nspawn container
I would like to use adb inside a systemd-nspawn container. Unfortunately I cannot access the phone inside the container (connected via USB).

pi@debian-buster-64:~ $ export ADB_TRACE=usb
pi@debian-buster-64:~ $ adb devices
List of devices attached
* daemon not running; starting now at tcp:5037
* daemon started successfully
pi@debian-buster-64:~ $

Here is the container setup /etc/systemd/nspawn/debian-buster-64.nspawn:

[Exec]
PrivateUsers=no
Capability=CAP_NET_ADMIN

[Files]
Bind=/home
Bind=/run/user:/run/host-user/
BindReadOnly=/etc/resolv.conf

[Network]
Private=no
VirtualEthernet=no

Here is the output from lsusb from inside the container:

pi@debian-buster-64:~ $ lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 009: ID 045e:07b2 Microsoft Corp. 2.4GHz Transceiver v8.0 used by mouse Wireless Desktop 900
Bus 001 Device 010: ID 18d1:4ee7 Google Inc.
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

The output of lsusb is identical to the output on the host, and the phone (Google Inc.) is visible.

I want to use adb inside the container because the container is 64-bit (the host is only 32-bit). Unfortunately, adb on 32-bit has limitations. Access with adb works on the host (with said 32-bit limitations).

Any ideas how to get this working inside the container?
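A hedged sketch of the direction usually needed for USB devices with nspawn: bind the USB device nodes into the container and allow the corresponding character devices on the container's unit. Both snippets are untested assumptions (the usb_device device group name comes from /proc/devices on a typical kernel), not a confirmed fix:

```bash
# 1) Bind the USB device tree into the container via its .nspawn file.
sudo tee -a /etc/systemd/nspawn/debian-buster-64.nspawn <<'EOF'
[Files]
Bind=/dev/bus/usb
EOF

# 2) Allow USB character devices on the container's unit via a drop-in.
sudo mkdir -p /etc/systemd/system/systemd-nspawn@debian-buster-64.service.d
sudo tee /etc/systemd/system/systemd-nspawn@debian-buster-64.service.d/usb.conf <<'EOF'
[Service]
DeviceAllow=char-usb_device rwm
EOF

sudo systemctl daemon-reload
sudo machinectl reboot debian-buster-64
```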
alex1452 (11 rep)
Dec 11, 2020, 04:49 AM • Last activity: May 2, 2025, 12:03 PM
Showing page 1 of 20 total questions