
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
1 answer
16 views
Location of LXC user-owned containers
I am setting up some LXC containers as a normal user. I have followed all the steps in the manual for user-owned unprivileged containers, and the instances I created are running fine. I used a template config to create the containers:
### template.conf

lxc.include = default.conf
# Increment by 10000 for every new container created, to generate unique UIDs and GIDs.
lxc.idmap = u 0 240000 8192
lxc.idmap = g 0 240000 8192
lxc-create -n mycontainer -f ~/.config/lxc/template.conf -t download -- -d archlinux -r current -a amd64
Now, I'd like to customize each container, e.g. mounting volumes. But I can't see any configuration files created when the containers were created. For privileged containers that I created earlier while experimenting, I can see config files in /var/lib/lxc/config. From reading the LXC manual, I got the impression that the config used for creating a container is not also used at runtime. Where does LXC store config files for live containers? Or if it doesn't, can I create one that is automatically identified and used on start?
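For illustration, a sketch of what such a per-container config edit might look like, assuming the default per-user container path of ~/.local/share/lxc (the source path and mount target below are placeholders, not taken from the question):

```
# ~/.local/share/lxc/mycontainer/config  (assumed default lxc.lxcpath
# for user-owned unprivileged containers; privileged ones live under
# /var/lib/lxc/<name>/config)
# Example: bind-mount a host directory into the container at start
lxc.mount.entry = /home/user/shared home/user/shared none bind,create=dir 0 0
```

lxc-start re-reads this file each time the container starts, so an entry added here should take effect on the next restart.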
user3758232 (101 rep)
Aug 5, 2025, 08:07 PM • Last activity: Aug 5, 2025, 08:29 PM
7 votes
1 answer
7283 views
Using a bridge, an LXC container can't ping router, but the host OS can
I've got a virtual machine running under virtualbox, and in that virtual machine I've got an LXC container I'm trying to bridge to virtualbox's NAT interface:

    -------------      -----------      -----------      ----------      ----------
    |    LXC    | ---> | Host OS | ---> | Virtual | ---> | Laptop | ---> | Router |
    | Container |      |  Linux  |      |   Box   |      |        |      |        |
    -------------      -----------      -----------      ----------      ----------
    eth0 10.1.0.35     br0 10.1.0.5     NAT              192.168.1.33    192.168.1.1
    gw 10.1.0.2        gw 10.1.0.2      10.1.0.2/16      GW: 192.168.1.1
    Ping 10.1.0.2      ping 10.1.0.2
    FAIL               OK

I cannot ping from the LXC container to the virtualbox gateway, but I can from the Host OS. Note: running tcpdump on the host OS, I can see pings being sent from the container to the router, and the reply from the router to the container, but tcpdump on the container shows no traffic.

**LXC eth0**

    eth0      Link encap:Ethernet  HWaddr 00:16:3e:ed:82:b8
              inet addr:10.1.0.35  Bcast:10.1.255.255  Mask:255.255.0.0
              inet6 addr: fe80::216:3eff:feed:82b8/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:585 errors:0 dropped:0 overruns:0 frame:0
              TX packets:588 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:58003 (58.0 KB)  TX bytes:56447 (56.4 KB)

**Host OS:**

    root@ubuntuserver:/# ifconfig
    br0       Link encap:Ethernet  HWaddr 08:00:27:ca:5f:7a
              inet addr:10.1.0.5  Bcast:10.1.255.255  Mask:255.255.0.0
              inet6 addr: fe80::a00:27ff:feca:5f7a/64 Scope:Link
              UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
              RX packets:2012 errors:0 dropped:0 overruns:0 frame:0
              TX packets:882 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:158794 (158.7 KB)  TX bytes:139083 (139.0 KB)

    eth0      Link encap:Ethernet  HWaddr 08:00:27:ca:5f:7a
              UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
              RX packets:2968 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2404 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:281188 (281.1 KB)  TX bytes:312109 (312.1 KB)

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:180 errors:0 dropped:0 overruns:0 frame:0
              TX packets:180 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:14376 (14.3 KB)  TX bytes:14376 (14.3 KB)

    vethStvXMU Link encap:Ethernet  HWaddr fe:9a:36:3a:84:1c
              inet6 addr: fe80::fc9a:36ff:fe3a:841c/64 Scope:Link
              UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
              RX packets:557 errors:0 dropped:0 overruns:0 frame:0
              TX packets:554 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:53465 (53.4 KB)  TX bytes:55003 (55.0 KB)

    root@ubuntuserver:/# brctl show
    bridge name     bridge id               STP enabled     interfaces
    br0             8000.080027ca5f7a       no              eth0
                                                            vethStvXMU
David Parks (1190 rep)
Mar 29, 2013, 04:28 AM • Last activity: Aug 2, 2025, 02:03 AM
0 votes
0 answers
67 views
LXC Container on Proxmox Can’t Resolve DNS — Outbound UDP Works, But No Replies
I'm trying to configure a reverse proxy on an LXC container in Proxmox; however, the container is not able to resolve DNS. The Proxmox node has no issue with DNS, and both the node and the container are able to ping outbound. The container specifically is able to make outbound DNS requests but just receives no response.

As a note, I have some restrictions being in an apartment on my apartment's internet. Unfortunately I do not have access to my primary router configuration, and my homelab is behind a secondary bridged router, so I've had to make some workarounds regarding this. Since I don't have access to the main apartment router and can't forward ports or run custom DNS there, I needed a local solution to resolve DNS inside my container.

Initially, I tried just pointing the container to public nameservers (like 1.1.1.1 and 8.8.8.8), but DNS responses never made it back — likely because of how my network handles outbound NAT from bridged containers. To work around this, I enabled SNAT on the Proxmox node to ensure that all outgoing traffic from the container gets rewritten with the node's IP. This should've made return traffic more reliable.

I also set up dnsmasq on the Proxmox node as a local DNS forwarder. The idea was that the container would send DNS requests to the node (10.124.16.3), which would forward them to public resolvers and relay the responses back. This avoids having to deal with external DNS servers rejecting packets from unexpected source IPs. I've made sure dnsmasq is working by running ss -lunp | grep 53 and got the following:

    udp UNCONN 0 0 10.124.16.3:53 0.0.0.0:* users:(("dnsmasq",pid=xxx,fd=x))

Despite this, the container still fails to resolve DNS — even when dnsmasq is working correctly and requests are visible in tcpdump. 10.124.16.3 is the Proxmox node and 10.124.16.4 is the container.
Here's the node network configuration page (/etc/network/interfaces)

auto lo
iface lo inet loopback

iface enp5s0 inet manual

auto vmbr0
iface vmbr0 inet static
    address 10.124.16.3/22
    gateway 10.124.16.1
    bridge-ports enp5s0
    bridge-stp off
    bridge-fd 0

    post-up iptables -t nat -A POSTROUTING -s 10.124.16.0/22 -o vmbr0 -j SNAT --to-source 10.124.16.3
    post-down iptables -t nat -D POSTROUTING -s 10.124.16.0/22 -o vmbr0 -j SNAT --to-source 10.124.16.3


Here's the container config (/etc/pve/lxc/.conf):

arch: amd64
cores: 1
memory: 256
swap: 256
hostname: cf-tunnel
net0: name=eth0,bridge=vmbr0,firewall=1,gw=10.124.16.1,ip=10.124.16.4/22,type=veth
unprivileged: 1
features: nesting=1

and in the container (/etc/resolv.conf) it contains

    nameserver 1.1.1.1
    nameserver 8.8.8.8
When I run tcpdump -ni vmbr0 port 53 on the node and dig on the container with dig google.com (I've also tried digging specific DNS servers with @1.1.1.1), here's the output I get from tcpdump:
root@geeksquad:~# tcpdump -ni vmbr0 host 10.124.16.4 and port 53
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on vmbr0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
23:13:56.887877 IP 10.124.16.4.38419 > 1.1.1.1.53: 19663+ [1au] A? google.com. (51)
23:14:00.280550 IP 10.124.16.4.52162 > 10.124.16.3.53: 19721+ [1au] TXT? protocol-v2.argotunnel.com. (55)
23:14:01.892819 IP 10.124.16.4.39216 > 1.1.1.1.53: 19663+ [1au] A? google.com. (51)
23:14:05.307826 IP 10.124.16.4.44721 > 10.124.16.3.53: 13780+ [1au] SRV? _v2-origintunneld._tcp.argotunnel.com. (66)
23:14:06.898125 IP 10.124.16.4.59178 > 1.1.1.1.53: 19663+ [1au] A? google.com. (51)
23:14:10.308108 IP 10.124.16.4.48477 > 10.124.16.3.53: 45090+ [1au] SRV? _v2-origintunneld._tcp.argotunnel.com. (66)
23:14:25.321538 IP 10.124.16.4.56031 > 10.124.16.3.53: 17689+ [1au] SRV? _v2-origintunneld._tcp.argotunnel.com. (66)
also checking journalctl -u dnsmasq I get this
Jul 07 23:56:31 geeksquad systemd: Started dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server.
Jul 07 23:57:22 geeksquad systemd: Stopping dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server...
Jul 07 23:57:22 geeksquad dnsmasq: exiting on receipt of SIGTERM
Jul 07 23:57:22 geeksquad systemd: dnsmasq.service: Deactivated successfully.
Jul 07 23:57:22 geeksquad systemd: Stopped dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server.
Jul 07 23:57:22 geeksquad systemd: Starting dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server...
Jul 07 23:57:22 geeksquad dnsmasq: started, version 2.90 cachesize 150
Jul 07 23:57:22 geeksquad dnsmasq: DNS service limited to local subnets
Jul 07 23:57:22 geeksquad dnsmasq: compile time options: IPv6 GNU-getopt DBus no-UBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP conntrack ipset nftset auth cr>
Jul 07 23:57:22 geeksquad dnsmasq: reading /etc/resolv.conf
Jul 07 23:57:22 geeksquad dnsmasq: using nameserver 1.1.1.1#53
Jul 07 23:57:22 geeksquad dnsmasq: using nameserver 8.8.8.8#53
Jul 07 23:57:22 geeksquad dnsmasq: read /etc/hosts - 11 names
Jul 07 23:57:22 geeksquad systemd: Started dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server.
Jul 07 23:57:33 geeksquad dnsmasq: reading /etc/resolv.conf
Jul 07 23:57:33 geeksquad dnsmasq: ignoring nameserver 10.124.16.3 - local interface
Jul 07 23:59:23 geeksquad dnsmasq: reading /etc/resolv.conf
Jul 07 23:59:23 geeksquad dnsmasq: using nameserver 1.1.1.1#53
Jul 07 23:59:23 geeksquad dnsmasq: using nameserver 8.8.8.8#53
Any help at all would be appreciated. As far as firewall rules go, I do not believe that's the issue. I set my firewall rules within the Proxmox GUI, but I have tried all variations of allowing all traffic in and out temporarily and have also disabled the firewalls entirely as a test, with neither changing the outcome.
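For reference, dnsmasq's own log above hints at two relevant behaviours: "DNS service limited to local subnets" (the default local-service mode) and "ignoring nameserver 10.124.16.3 - local interface" (a loop guard triggered when /etc/resolv.conf points dnsmasq at itself). A hedged sketch of a dnsmasq drop-in that makes the forwarder setup explicit — the filename is hypothetical, the options are standard dnsmasq ones:

```
# /etc/dnsmasq.d/container-dns.conf  (hypothetical filename)
listen-address=10.124.16.3   # answer queries arriving on the bridge IP
bind-interfaces              # bind only the listed address
no-resolv                    # ignore /etc/resolv.conf (avoids the loop warning)
server=1.1.1.1               # forward to public resolvers explicitly
server=8.8.8.8
```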
tkennedy741 (1 rep)
Jul 8, 2025, 06:04 PM • Last activity: Jul 9, 2025, 05:43 AM
0 votes
3 answers
206 views
Alpine linux (LXC) not running cron jobs
I have been trying for a long time to make my crontab entries run, but no matter what time / schedule I enter, they don't seem to work. I have confirmed my current time zone with the date command and the /etc/localtime symbolic link.

How do I know that cron is not working? My script calls $(date) to print the date and time of the run in its output log, but when I modify the crontab entry to a specific time, the log doesn't reflect that time, only the time of the last manual execution. Furthermore, I know that the script itself runs, as I have run it manually and verified the output log, so it's not a script error. Yes, rc-service crond restart has also been applied. Please suggest any avenue to investigate or troubleshoot further.

Here is my crontab entry; please ignore the 'weekly' naming. Redacted timezone entries are Continent/Timezone.

crontab entry

Here is the script, as there is some suspicion regarding it. This is under **root** cron, so privileges are not an issue.

    #!/bin/sh
    # Log file for updates
    LOG_FILE="/var/log/alpine_weekly_update.log"

    echo "--- $(date) ---" >> "$LOG_FILE"
    echo "Starting Alpine Linux weekly update..." >> "$LOG_FILE"

    # 2>&1 treats "1" as a file descriptor rather than a file name, so it
    # redirects both stderr and stdout into the log file
    apk update >> "$LOG_FILE" 2>&1
    apk upgrade >> "$LOG_FILE" 2>&1

    # Check for kernel updates:
    if [ -f /var/run/reboot-required ]; then
        echo "Reboot required after update. Rebooting Now" >> "$LOG_FILE"
        reboot
    fi

    echo "Alpine Linux weekly update finished." >> "$LOG_FILE"
    echo "-----------------" >> "$LOG_FILE"

----------------------------------------------------

Update: After a whole day of troubleshooting this deployment, I did not manage to figure out the fault, except that I had installed **cronie**, which is causing a conflict. Synopsis:

- syslog-ng was running and logging (installed, not a default).
- crond was confirmed running (by rc-service crond status).
- The crontab entry was valid and present for root.
- Ownership (chown), mode (chmod), UID and GID all favour root for the script, the log and cron (the cronie service / crond).
- crond was NOT sending any CRON messages to syslog-ng, even after explicit restarts.
- Installed and reinstalled cronie; deleted the .pid files (/run/cronie.pid ; /var/run/cronie.pid).

----------------------------------------------------------

I spun up another Alpine container (3.21, same as the previous one) and was very surprised by this: here I didn't install cronie, but **no entry** in the crontab with a **specific time** worked. For example, running each day at 12:30 (30 12 * * * /path/to/script) did not work. However, when I set the schedule to run every minute (* * * * * /path/to/script), it ran properly. I want to mention that no PATH= declaration or time zone / TZ (which was my primary suspicion) was provided, yet the every-minute schedule was working; furthermore, I didn't even reload the rc services. At this point I don't think there is much point going further, as most probably busybox itself has some flaws or the cron has some other deep issue.
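As an aside, the 2>&1 redirection that the script's comment describes can be checked in isolation; a minimal sketch (writing to a throwaway temp file rather than the script's log):

```shell
# Show that '>> file 2>&1' sends both stdout and stderr to the same log.
log=$(mktemp)
{ echo "stdout line"; echo "stderr line" >&2; } >> "$log" 2>&1
cat "$log"   # both lines appear in the file
```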
Neail (79 rep)
Jun 21, 2025, 03:25 PM • Last activity: Jun 28, 2025, 07:17 AM
1 vote
2 answers
4196 views
How to run docker inside an lxc container?
I have an unprivileged LXC container on an Arch host created like this:

    lxc-create -n test_arch11 -t download -- --dist archlinux --release current --arch amd64

And it doesn't run Docker. What I did inside the container:

1. Installed docker from the Arch repos: pacman -S docker
2. Tried to run a hello-world container: docker run hello-world
3. Got the following error:

> docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"mkdir /sys/fs/cgroup/cpuset/docker: permission denied\"": unknown.
> ERRO error waiting for container: context canceled

What is wrong, and how can I make Docker work inside the container?
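For context, guides for running Docker inside LXC commonly suggest loosening the container's confinement on the host side; a hedged sketch using LXC 3.x key names (whether these are sufficient for an unprivileged container depends on the host's cgroup setup, and relaxing them has security implications):

```
# Host-side container config (LXC 3.x key names; assumption, not the
# questioner's config):
lxc.apparmor.profile = unconfined
lxc.cgroup.devices.allow = a
lxc.cap.drop =
```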
devalone (111 rep)
Oct 27, 2019, 04:49 PM • Last activity: Jun 9, 2025, 04:09 AM
1 vote
1 answer
45 views
disconnect packages managed by alpine apk but keep them on the system
I've installed Nextcloud on Proxmox in an Alpine container using tteck's (RIP!) helper script. After some tweaks, I am now able to update Nextcloud from the application itself. Now I have the following issue: Nextcloud is still managed by Alpine's apk, so when I run apk upgrade, the nextcloud package will be updated too, which will break my Nextcloud instance. Is there a way to exclude (the nextcloud) packages from management by apk, like removing them from the apk database? I tried to remove it from the /etc/apk/world file; the result is Nextcloud being removed by apk :/
lutz108 (11 rep)
Nov 26, 2024, 12:31 AM • Last activity: May 7, 2025, 03:53 PM
0 votes
2 answers
2232 views
How to trust self-signed LXD daemon TLS certificate in Vagrant?
Following up from another question, I've got the LXD daemon running and working:

    $ curl --insecure https://127.0.0.1:8443
    {"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":["/1.0"]}

However, when trying to start a Vagrant container with the LXD provider it doesn't like the certificate:

    $ vagrant up
    The provider could not authenticate to the LXD daemon at https://127.0.0.1:8443.

    You may need configure LXD to allow requests from this machine. The
    easiest way to do this is to add your LXC client certificate to LXD's
    list of trusted certificates. This can typically be done with the
    following command:

    $ lxc config trust add /home/username/.config/lxc/client.crt

    You can find more information about configuring LXD at:
    https://linuxcontainers.org/lxd/getting-started-cli/#initial-configuration

There is no client.crt anywhere on my system. lsof -p [PID of the program serving at port 8443] doesn't list any certificates. sudo locate .crt | grep lxd found only /var/lib/lxd/server.crt, but lxc config trust add /var/lib/lxd/server.crt didn't help. The configuration documentation doesn't mention having to trust a certificate. I suspect I'm supposed to communicate with the daemon using a Unix socket rather than HTTPS. How do I move forward?

For the record, I'm able to launch containers with, for example, lxc launch ubuntu:18.10 test and get a shell with lxc exec test -- /bin/bash, so LXC is working fine.
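For what it's worth, the client.crt the provider's message mentions is a TLS client certificate; one hedged way to create such a keypair by hand is with openssl (the paths and CN below are placeholders, and the follow-up lxc command is shown only as a comment):

```shell
# Generate a self-signed client certificate of the kind the Vagrant LXD
# provider looks for at ~/.config/lxc/client.crt (written to a temp dir
# here so nothing on the real system is touched).
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout "$dir/client.key" -out "$dir/client.crt" \
    -subj "/CN=example-client"
# Then, hypothetically:
#   cp "$dir"/client.* ~/.config/lxc/
#   lxc config trust add ~/.config/lxc/client.crt
```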
l0b0 (53368 rep)
Mar 8, 2019, 10:14 AM • Last activity: May 7, 2025, 08:05 AM
2 votes
2 answers
2797 views
LXC ip allocation using DHCP
I'm trying to set up DHCP for my LXC containers without using lxc-net. The reason for this decision is that I'd like to place my containers in different networks, such that they are unable to talk to each other by default. I have successfully created and run containers using static IPs assigned within the containers' config file before, but I'd like to use a DHCP server on the host this time.

I've installed dnsmasq on my host and configured it like this:

    # /etc/dnsmasq.d/dnsmasq.lxcbr.conf
    domain=local.lxc,10.10.10.0/24
    interface=lxcbr
    dhcp-range=lxcbr,10.10.10.1,10.10.10.200,24h
    dhcp-option=option:router,10.10.10.254

According to this, the file is being loaded correctly:

    root@host:~# service dnsmasq status
    ● dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server
       Loaded: loaded (/lib/systemd/system/dnsmasq.service; enabled)
    [...]
    Feb 03 19:06:39 host dnsmasq: dnsmasq: syntax check OK.
    Feb 03 19:06:39 host dnsmasq: started, version 2.72 cachesize 150
    Feb 03 19:06:39 host dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect
    Feb 03 19:06:39 host dnsmasq-dhcp: DHCP, IP range 10.10.10.1 -- 10.10.10.200, lease time 1d
    Feb 03 19:06:39 host dnsmasq: reading /etc/resolv.conf
    Feb 03 19:06:39 host dnsmasq: using nameserver upstream.nameserver.ip.here#53
    Feb 03 19:06:39 host dnsmasq: using nameserver upstream.nameserver.ip.here#53
    Feb 03 19:06:39 host dnsmasq: read /etc/hosts - 5 addresses

lxcbr is the host's interface in the container's network:

    root@host:~# ifconfig
    [...]
    lxcbrBind Link encap:Ethernet  HWaddr fe:60:7a:cc:56:64
              inet addr:10.10.10.254  Bcast:10.10.10.255  Mask:255.255.255.0
              inet6 addr: fe80::7a:56ff:fe82:921f/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:92 errors:0 dropped:0 overruns:0 frame:0
              TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:5688 (5.5 KiB)  TX bytes:928 (928.0 B)

    veth0     Link encap:Ethernet  HWaddr fe:60:7a:cc:56:64
              inet6 addr: fe80::fc60:7aff:fecc:5664/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:8 errors:0 dropped:0 overruns:0 frame:0
              TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:648 (648.0 B)  TX bytes:648 (648.0 B)

veth0 is the container's veth interface:

    # /var/lib/lxc/container
    lxc.network.type = veth
    lxc.network.name = veth0
    lxc.network.flags = up
    lxc.network.link = lxcbr
    lxc.network.veth.pair = veth0

I assume I'm doing something very stupid, but I've run out of ideas at this point. I appreciate your help, Christopher
Cyclonit (161 rep)
Feb 3, 2016, 06:20 PM • Last activity: May 3, 2025, 04:02 PM
1 vote
0 answers
356 views
How can I import LXC containers back from the snapshots LXD created just before the lxd snap was removed?
I have an Ubuntu 22.04 node, which used to have LXD v5 installed through snap. It used to have 3 containers; at the moment of removing LXD with snap remove lxd, it took a snapshot of all the containers, but when I try to import those snapshotted containers I get some errors.

First, the snapshots were delivered to me in .zip format. When I try to import one using lxc import file_name.zip I get the error *Importing instance: 100% (108.71MB/s)Error: Unsupported compression*, which is weird because, as I pointed out, I got the snapshots in .zip format. So I needed to extract it and then compress it again using tar (tar -czvf containers.tar.gz file_contains_extracteds/).

Once I had the correct format, I tried to import it, but I get the following error:

    lxc import containers.tar.gz
    Importing instance: 100% (701.66MB/s)Error: Backup is missing at "backup/index.yaml"

When I look into the extracted .zip file I get:

    ls
    25112  archive.tgz  common  meta.json  meta.sha3_384  user

And inside common there are all the containers' folders:

    ls common/lxd/storage-pools/default/containers/
    monitor  orch1  vrouter

How can I import the LXC containers back from the snapshots LXD created just before the lxd snap was removed?
k.Cyborg (527 rep)
Aug 25, 2023, 08:56 PM • Last activity: Apr 14, 2025, 07:33 AM
21 votes
3 answers
32548 views
Executing a command inside a running LXC
I want to execute a command inside an existing LXC container without going through the regular Linux init. The lxc-execute command is for that, I guess, but I get the following error when I run it against my existing test container:

    sudo lxc-execute -n test -- service apache2 start

I get the following error:

    lxc-execute: Failed to find an lxc-init
    lxc-execute: invalid sequence number 1. expected 4
    lxc-execute: failed to spawn 'test'
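As a point of comparison (an editorial note, not part of the original question): lxc-execute spawns a fresh application container around lxc-init, which is why it fails when lxc-init is not available to the container, whereas lxc-attach enters the namespaces of a container that is already running:

```
# Run a one-off command inside the *running* container instead:
sudo lxc-attach -n test -- service apache2 start
```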
user52881 (211 rep)
Nov 22, 2013, 12:35 PM • Last activity: Apr 10, 2025, 05:57 PM
0 votes
0 answers
274 views
Configure IPv6 DNS server via DHCP in LXC container
I have an LXC container created and started as follows:
sudo lxc-create -t debian -n mylxc -- --release bullseye
sudo lxc-start -n mylxc
sudo lxc-attach -n mylxc
On the LXC host I have a DHCP server running which provides IPv4 and IPv6 IP addresses and announces IPv4 and IPv6 DNS servers. The LXC correctly gets both an IPv4 and an IPv6 address. It adds only the IPv4 DNS server to /etc/resolv.conf, though. How can I get my LXC container to properly receive the IPv6 DNS server via DHCP?
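A hedged sketch of one direction to try, assuming the container uses ISC dhclient (common for Debian LXC templates; if the container uses a different DHCP client this does not apply): the DHCPv6 name-server option has to be requested explicitly. The option names below are standard dhclient ones:

```
# /etc/dhcp/dhclient.conf inside the container (assumed client: ISC dhclient)
request domain-name-servers, domain-name,
        dhcp6.name-servers, dhcp6.domain-search;
```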
carlfriedrich (153 rep)
May 9, 2023, 07:04 AM • Last activity: Mar 20, 2025, 10:05 AM
0 votes
1 answer
106 views
Firewall in Bridged LXC Containers
I am new to networking, and I am trying to implement a firewall inside an LXC container (Alpine Linux) that is bridged with another LXC container (Alpine Linux) through a br0 interface. Right now, my only goal is to block all traffic that comes from the client device through the container. So far I have had lots of trouble getting any of the firewall rules to apply or work properly. What happens is I can set a rule/policy (e.g. drop the forward chain) and verify that it is in the ruleset, but when I connect a client device to the network, it does not seem to apply (I can still access the network). I am using nftables to configure the firewall settings. My basic process is:

1. Install nftables.
2. Add a policy to drop packets in the forward chain.

I have tried every possible configuration I can think of for these rules. I was reading that because the container is bridged, the data packets only travel on layer 2, so the layer 3 firewall rules would never apply to the packets; is this true? I have been able to use layer 2 rules to block traffic (e.g. bridge rules in nftables and ebtables rules), but nothing on layer 3 yet. For more background, here is the container interface setup:

    WLAN0/WLAN1 -> br0 (Container A) -> br0 (Container B) -> eth0 -> internet

I am trying to apply firewall rules inside of container A right now. If any more information is needed, let me know :)
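The layer-2 observation above matches how nftables separates address families: bridged frames traverse the bridge-family hooks, not the ip-family ones (unless the br_netfilter module is loaded to push bridged traffic through them). A minimal bridge-family ruleset sketch; whether it fits Container A's setup is an assumption:

```
# nftables bridge-family table: drops all frames forwarded across the bridge
table bridge filter {
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}
```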
RGB Engineer (101 rep)
Jan 22, 2025, 06:11 PM • Last activity: Mar 6, 2025, 01:54 AM
1 vote
2 answers
1521 views
bridge-routing and martian packets
I am experimenting with routing a bit, and now I have a problem I cannot solve by myself:

                        eth0
                         |
    ------------------- br0 -------------------
    |             (192.168.100.1)             |
    |                                         |
    lxc_vpn_eth0                    lxc_test_eth0
    (192.168.100.120)            (192.168.100.130)
    |
    tun0

I want to send some packets (UDP) out of the LXC container (test) to the other LXC container (vpn), and from there through OpenVPN running inside this container. This works so far, but somehow the response is marked by the kernel as martian and dropped from the bridge br0. I tested with tcpdump on all 3 "places" the packets pass by, and these are the results (in the vpn container):

    # tcpdump -i eth0
    21:25:12.043321 IP 192.168.100.1.55081 > XX.YY.UU.VV.6969: UDP, length 16
    21:25:12.097040 IP XX.YY.UU.VV.6969 > 192.168.100.1.55081: UDP, length 16

As you see, I am masquerading the packets on tun0, so the packets from the test container arrive at the vpn container, go out through tun0, and I get an answer. But as soon as this response packet is placed on the bridge, I get this in the kernel logs:

    kernel: [c0] IPv4: martian source 192.168.100.120 from XX.YY.UU.VV, on dev br0

So how do I have to configure the routing so that the response packet doesn't get dropped? It should already be on the bridge, where the container with IP 192.168.100.120 sits and waits for it...

Thanks in advance for helping me out; I'll happily provide further information. (I didn't want to post all the routing tables, because I didn't want to fill the post with possibly useless information.)
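For reference, "martian source" logging and the accompanying drop are tied to the kernel's reverse-path filter; a hedged sketch of the standard sysctls involved (rp_filter=2 is "loose" mode; whether relaxing the check is appropriate for this setup is left open, and the filename is hypothetical):

```
# /etc/sysctl.d/90-bridge-rpfilter.conf  (hypothetical filename)
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.br0.rp_filter = 2
# The logging itself is controlled separately by:
# net.ipv4.conf.all.log_martians = 1
```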
J0hnD0e (11 rep)
May 4, 2016, 08:33 PM • Last activity: Feb 9, 2025, 06:44 PM
0 votes
0 answers
15 views
Lxc: how to set name of veth?
I have tried this for an LXC container:

    lxc.net.0.veth.pair = vethvlan2

but when the container starts, it takes a random name:

    br1    ************    no    enp7s0.1
                                 veth1001_23rP
                                 veth1001_9ZMK
elbarna (13690 rep)
Feb 2, 2025, 06:35 PM
3 votes
4 answers
1304 views
Persist resolvectl changes across reboots
I'm using LXC containers, and resolving CONTAINERNAME.lxd to the IP of the specified container, using:
sudo resolvectl dns lxdbr0 $bridge_ip
sudo resolvectl domain lxdbr0 '~lxd'
This works great! But the changes don't persist over a host reboot - how can I make them do so? I'm on Pop!_OS 22.04, which is based on Ubuntu 22.04. (I've described 'things I've tried' as answers to this question, which have varying degrees of success.)
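One commonly documented way to persist per-link DNS settings is a systemd-networkd .network file; a hedged sketch (this assumes systemd-networkd manages lxdbr0, which may not hold on a NetworkManager-driven desktop, and the DNS address is a placeholder for $bridge_ip):

```
# /etc/systemd/network/lxdbr0.network  (hypothetical)
[Match]
Name=lxdbr0

[Network]
DNS=10.0.0.1     # placeholder: substitute the bridge IP ($bridge_ip)
Domains=~lxd
```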
Jonathan Hartley (480 rep)
Sep 27, 2022, 06:42 PM • Last activity: Jan 9, 2025, 08:36 PM
0 votes
1 answer
398 views
Attach gdb from a docker container to a process running in a different PID namespace
I built a docker image with gcc, binutils and the gdb debugger installed inside. I would like to attach gdb from that docker container to a process inside an lxc container running on the same Linux host. The lxc container uses its own PID namespace, therefore gdb running in the docker container complains that the target process and the debugger are not in the same PID namespace.
```
[SR-PCE-251:~]$ docker run -it --pid host --rm --cap-add=SYS_PTRACE --security-opt seccomp=unconfined carlo/ubuntu
root@e7b2db23af34:/#
root@e7b2db23af34:/# id
uid=0(root) gid=0(root) groups=0(root)
root@e7b2db23af34:/#
root@e7b2db23af34:/# gdb -q attach 11365
attach: No such file or directory.
Attaching to process 11365
[New LWP 24283]
[New LWP 20025]
[New LWP 20024]
[New LWP 19992]
[New LWP 19991]
[New LWP 13974]
[New LWP 13970]
[New LWP 13969]
[New LWP 13968]
[New LWP 13967]
[New LWP 13962]
[New LWP 13958]
[New LWP 13957]
[New LWP 13954]
[New LWP 13952]
[New LWP 13944]
[New LWP 12078]
[New LWP 11822]
[New LWP 11543]
[New LWP 11515]
[New LWP 11489]
[New LWP 11483]
[New LWP 11482]
[New LWP 11477]
[New LWP 11476]
warning: "target:/proc/11365/exe": could not open as an executable file: Operation not permitted.
warning: `target:/proc/11365/exe': can't open to read symbols: Operation not permitted.
warning: Could not load vsyscall page because no executable was specified
warning: Target and debugger are in different PID namespaces; thread lists and other data are likely unreliable.
Connect to gdbserver inside the container.
0x00007f0bf997ac73 in ?? ()
(gdb)
```
**Edit** (to provide more information based on comments received):
```
[host:/gdb-install]$ id
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[host:/gdb-install]$
[host:/gdb-install]$ ls -la
total 24
drwxrwxr-x.  6 1000 1000 4096 Nov 14 09:32 .
drwxr-xr-x. 30 root root 4096 Nov 14 11:27 ..
drwxrwxr-x.  2 1000 1000 4096 Nov 14 09:32 bin
drwxrwxr-x.  3 1000 1000 4096 Nov 14 09:32 include
drwxrwxr-x.  2 1000 1000 4096 Nov 14 09:32 lib
drwxrwxr-x.  6 1000 1000 4096 Nov 14 09:32 share
[host:/gdb-install]$
[host:/gdb-install]$ export LD_LIBRARY_PATH=/gdb-install/bin:/gdb-install/lib
[host:/gdb-install]$ ./bin/gdb
./bin/gdb: error while loading shared libraries: libmpfr.so.6: cannot open shared object file: No such file or directory
[host:/gdb-install]$
```
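One way around the mismatch (a sketch, not from the question itself): the `NSpid` field of `/proc/<pid>/status` maps a host PID to the PID the same process has inside its own namespace, and `nsenter` can start gdb inside the target's namespaces. The PID `11365` below is the host PID from the session above and is only illustrative.

```shell
#!/bin/sh
# Sketch, assuming util-linux nsenter and root on the host.
# NSpid lists the process's PID in every nested PID namespace,
# host namespace first; inspect our own process as a harmless demo:
grep NSpid /proc/$$/status

# For a real target, read its in-container PID the same way, then
# enter its PID and mount namespaces before attaching (as root):
# grep NSpid /proc/11365/status
# nsenter --target 11365 --pid --mount gdb -q -p <pid-inside-namespace>
```

Running gdbserver inside the lxc container and connecting to it over the network, as gdb's own warning suggests, avoids the namespace issue entirely.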
CarloC (385 rep)
Nov 13, 2024, 08:43 AM • Last activity: Nov 20, 2024, 05:54 PM
1 votes
0 answers
132 views
Right way to recursively share a path (like a symlink) and proper way to unmount/remount without messing with other mount points
Bind mounts seem to be hard. I am looking for the right way to use a bind mount to mount a given directory to another one pretty much like a symlink (but I can't use a symlink because my application is linux containers). I want to be able to unmount that directory without disrupting other possible m...
Bind mounts seem to be hard. I am looking for the right way to use a bind mount to mount a given directory onto another one, much like a symlink (but I can't use a symlink, because my application is Linux containers). I want to be able to unmount that directory without disrupting other possible mounts.

**Background**: I want to share a ZFS pool from a host system to a Linux container (Proxmox). As a ZFS pool, there are many nested datasets, hence I would like to do the mount recursively. Also, some of the datasets are encrypted and should be transparently available if encryption keys are loaded, and not if unmounted.

**What I have tried**

1. Starting point is the host system with the mountpoint property set so that all datasets are mounted to /zpseagate8tb on the host system. I can freely mount/umount datasets and load/unload encryption keys. I would like to clone this tree exactly into a container.
2. I created another directory /zpseagate8tb_bind, to which I bind mount the original pool. The intention is to mark it as a slave to facilitate unmounting. I have the following line in my /etc/fstab:
```
/zpseagate8tb /zpseagate8tb_bind none rbind,make-rslave,nofail
```
3. Then I use LXC's built-in capability to bind mount that directory into the container. The config file includes:
```
lxc.mount.entry: /zpseagate8tb_bind zpseagate8tb none rbind,create=dir 0 0
```

This works flawlessly until I want to unmount and/or the pool disappears (e.g. due to mistakenly unplugging it), in which case something unexpected always happens. For example, /zpseagate8tb_bind is empty while the data is still accessible/mounted inside the container. In nearly all cases I have to reboot everything to get back to a consistent state. What is the right approach to create this bind mount, and which commands are needed to remove the mount from the container without disturbing anything else?
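For reference, a minimal sketch of the inspect/tear-down side of such a setup, using the paths from the question. The mount commands are commented out because they need root and the actual pool; the `findmnt` check is read-only and safe to run anywhere:

```shell
#!/bin/sh
# Read-only: show the propagation mode of an existing mount point.
# A mount marked rslave reports "slave"-style flags in this column.
findmnt -o TARGET,PROPAGATION / | head -n 2

# The question's setup, done by hand as root (commented out):
# mount --rbind       /zpseagate8tb /zpseagate8tb_bind
# mount --make-rslave /zpseagate8tb_bind
# umount -R /zpseagate8tb_bind   # -R detaches the whole rbind tree at once
```

`umount -R` (recursive unmount) on the slave side is the piece that usually removes the entire bind tree in one step without touching the original /zpseagate8tb mounts.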
divB (218 rep)
Nov 5, 2024, 05:42 AM • Last activity: Nov 5, 2024, 06:11 AM
12 votes
6 answers
13591 views
How can I list all connections to my host, including those to LXC guests?
I tried both `netstat` and `lsof`, but it appears it's not possible to see the connections to my LXC guests. Is there a way to achieve this ... for **all** guests at once? ---------- Essentially what throws me off here is the fact that I can see the processes of the guests as long as I run as superu...
I tried both netstat and lsof, but it appears it's not possible to see the connections to my LXC guests. Is there a way to achieve this ... for **all** guests at once? ---------- Essentially what throws me off here is the fact that I can see the processes of the guests as long as I run as superuser. I can also see the veth interfaces that get dynamically created per guest. Why can I not see connections on processes that are otherwise visible?
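netstat and lsof only see the namespace they run in, which is why host tools miss the guests' sockets. One approach (a sketch, assuming the lxc-ls/lxc-info tools, nsenter and root) is to enter each running guest's network namespace and list connections there:

```shell
#!/bin/sh
# Sketch: list connections per LXC guest by entering its net namespace.
# Assumes root plus lxc-ls/lxc-info/nsenter; harmless if no guests run.
for c in $(lxc-ls --running 2>/dev/null); do
    pid=$(lxc-info -n "$c" -pH)        # init PID of the guest, host view
    echo "== $c =="
    nsenter --target "$pid" --net ss -tunap
done

# Each namespace keeps its own /proc/net; the host's own TCP table:
head -n 1 /proc/net/tcp
```

Piping the per-guest output into one stream gives the "all guests at once" view the question asks for.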
0xC0000022L (16938 rep)
May 16, 2015, 12:09 AM • Last activity: Oct 15, 2024, 01:54 PM
0 votes
0 answers
24 views
IPC_LOCK not available at LXC startup during boot
I am running a Linux Container (LXC) with (Hashicorp) vault installed that requires IPC_LOCK. Whenever I reboot or boot, it fails to start up vault with autostart. From what I can see in the logs it complains on a lack of IPC_LOCK. However, this is not an issue when I manually restart it. I have add...
I am running a Linux Container (LXC) with (HashiCorp) Vault installed, which requires IPC_LOCK. Whenever I boot or reboot, Vault fails to start when the container autostarts. From what I can see in the logs, it complains about a lack of IPC_LOCK. However, this is not an issue when I manually restart it. I have added various delays to try to remedy this, but they do not seem to work. Is there a systemd target or service that checks whether such kernel capabilities are available before starting the LXC service, or am I missing something more fundamental here?
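There is no stock systemd unit that gates on capabilities, but a drop-in with an `ExecStartPre=` check is one way to probe for them before the container starts. The sketch below reads the capability bounding set the way such a check could; CAP_IPC_LOCK is capability number 14, so bit 14 of `CapBnd` must be set:

```shell
#!/bin/sh
# Sketch: test whether CAP_IPC_LOCK (bit 14) is in the bounding set
# of the current process, as an ExecStartPre-style check could.
capbnd=$(awk '/^CapBnd/ {print $2}' /proc/self/status)
echo "CapBnd: $capbnd"
if [ $(( (0x$capbnd >> 14) & 1 )) -eq 1 ]; then
    echo "IPC_LOCK present"
else
    echo "IPC_LOCK missing"
fi
# With libcap's capsh installed, the set can be decoded symbolically:
# capsh --decode="$capbnd" | grep -o cap_ipc_lock
```

Wiring this in via `systemctl edit` on the relevant lxc unit (the unit name and ordering target depend on your distribution, so treat them as assumptions) would at least turn the silent boot-time failure into an explicit one.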
Caesar (25 rep)
Aug 16, 2024, 08:06 PM
1 votes
1 answers
1032 views
Mounting mergerfs directory with bind mount under lxc shows nothing unless root or 555 permissions
So I'm using proxmox and have a few hard drives mounted to `/mnt/hdd1`, `/mnt/hdd2` etc I use mergerfs so that they all show up as one drive. I then use a bind mount so that I can access the dir when in various lxc containers. I have the folder permissions set to 550 however, when I try `ls -la` on...
So I'm using Proxmox and have a few hard drives mounted to /mnt/hdd1, /mnt/hdd2, etc. I use mergerfs so that they all show up as one drive. I then use a bind mount so that I can access the dir when in various lxc containers.

I have the folder permissions set to 550; however, when I try ls -la on the directory, all it returns is "total 0". I can view the directory with sudo ls -la though, so it does mount. Changing the permissions to 555 lets me view the directory properly; however, I checked using id username and I am a member of the group the directory is owned by. Also, if I mount /mnt/hdd1, for example, using the same method, I can access the directory with permissions 550. I tried recreating the group. Any ideas what is causing this? I can access this fine from the host as the same user (again, just a member of the group).

My mergerfs settings in /etc/fstab are defaults,allow_other,use_ino,func.getattr=newest. Using default_permissions instead of allow_other results in d?????????, but it still works for root.
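A sketch of the checks worth running both on the host and inside the container (the paths are placeholders, not from the question): group membership is evaluated per process, and FUSE filesystems like mergerfs only enforce plain Unix mode bits when `default_permissions` is set, so comparing the two views side by side narrows down where the 550 case diverges.

```shell
#!/bin/sh
# Sketch: run this on the host AND inside the container, then compare.
id -Gn                          # supplementary groups of this process
stat -c '%U:%G %a %n' /         # replace / with the mergerfs mountpoint
# With allow_other but without default_permissions, access decisions
# come from mergerfs's own getattr results, not kernel mode-bit checks.
```

If the GID differs between host and container (unprivileged LXC remaps IDs), the group-execute bit in 550 never applies inside the guest, which would match the observed behavior.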
Joseph Kavanagh (11 rep)
Apr 21, 2017, 07:11 PM • Last activity: May 26, 2024, 08:05 PM
Showing page 1 of 20 total questions