
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

2 votes
2 answers
2011 views
eth0 has no IP address with ifconfig
I've created a new LXC on Debian jessie, but it doesn't have an IPv4 address. When I connect to my LXC and run ifconfig:

eth0      Link encap:Ethernet  HWaddr blabla
          inet6 addr: blabla/64 Scope:Global
          inet6 addr: blabla/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:81 errors:0 dropped:0 overruns:0 frame:0
          TX packets:48 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10368 (10.1 KiB)  TX bytes:9480 (9.2 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

As you can see, there is no inet addr on eth0. I've tried restarting the networking service, but nothing changed. How can I get an address? I even tried:

lxc test stop
lxc network attach lxdbr0 test eth0 eth0
lxc config device set test eth0 ipv4.address 10.99.10.42
lxc start test

But nothing changed. Thanks.
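Since the commands above are LXD ones, here is a hedged checklist of the LXD-side pieces that have to line up before a static address can work (the instance name test and the bridge lxdbr0 are taken from the question; the 10.99.10.42 address only applies if it falls inside the bridge's subnet):

```
lxc network show lxdbr0             # check ipv4.address and that ipv4.dhcp is not disabled
lxc config device show test         # confirm an eth0 device exists and points at lxdbr0
lxc config device set test eth0 ipv4.address 10.99.10.42
lxc restart test
lxc list test                       # the IPv4 column should now be populated
```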
Akame14 (21 rep)
Aug 21, 2020, 10:37 AM • Last activity: May 24, 2025, 12:13 AM
0 votes
2 answers
2233 views
How to trust self-signed LXD daemon TLS certificate in Vagrant?
Following up from another question I've got the LXD daemon running and working:

$ curl --insecure https://127.0.0.1:8443
{"type":"sync","status":"Success","status_code":200,"operation":"","error_code":0,"error":"","metadata":["/1.0"]}

However, when trying to start a Vagrant container with the LXD provider it doesn't like the certificate:

$ vagrant up
The provider could not authenticate to the LXD daemon at https://127.0.0.1:8443 .
You may need configure LXD to allow requests from this machine.
The easiest way to do this is to add your LXC client certificate to LXD's list of trusted certificates.
This can typically be done with the following command:
$ lxc config trust add /home/username/.config/lxc/client.crt
You can find more information about configuring LXD at:
https://linuxcontainers.org/lxd/getting-started-cli/#initial-configuration

There is no client.crt anywhere on my system. lsof -p [PID of the program serving at port 8443] doesn't list any certificates. sudo locate .crt | grep lxd found only /var/lib/lxd/server.crt, but lxc config trust add /var/lib/lxd/server.crt didn't help. The configuration documentation doesn't mention having to trust a certificate. I suspect I'm supposed to communicate with the daemon using a Unix socket rather than HTTPS. How do I move forward?

For the record I'm able to launch containers with for example lxc launch ubuntu:18.10 test and get a shell with lxc exec test -- /bin/bash, so LXC is working fine.
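What the provider is asking for is a client certificate/key pair of its own, not LXD's server.crt. A hedged sketch of generating one at the path mentioned in the error message and trusting it (the path and the CN are assumptions based on that message, not a documented Vagrant requirement):

```
mkdir -p ~/.config/lxc
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout ~/.config/lxc/client.key -out ~/.config/lxc/client.crt -subj "/CN=vagrant-lxd"
lxc config trust add ~/.config/lxc/client.crt
lxc config trust list                     # the new certificate should now be listed
```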
l0b0 (53368 rep)
Mar 8, 2019, 10:14 AM • Last activity: May 7, 2025, 08:05 AM
3 votes
4 answers
1305 views
Persist resolvectl changes across reboots
I'm using LXC containers, and resolving CONTAINERNAME.lxd to the IP of the specified container, using:
sudo resolvectl dns lxdbr0 $bridge_ip
sudo resolvectl domain lxdbr0 '~lxd'
This works great! But the changes don't persist over a host reboot - how can I make them do so? I'm on Pop!_OS 22.04, which is based on Ubuntu 22.04. (I've described 'things I've tried' as answers to this question, which have varying degrees of success.)
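One way to make this persist is a small oneshot unit that re-applies the two resolvectl calls at boot. A minimal sketch, assuming the snap packaging of LXD (hence snap.lxd.daemon.service) and using 10.x.x.1 as a stand-in for your actual lxdbr0 address:

```
sudo tee /etc/systemd/system/lxdbr0-dns.service >/dev/null <<'EOF'
[Unit]
Description=Send *.lxd DNS queries to the LXD bridge
Wants=network-online.target
After=network-online.target snap.lxd.daemon.service

[Service]
Type=oneshot
# Replace 10.x.x.1 with the bridge address: lxc network get lxdbr0 ipv4.address
ExecStart=/usr/bin/resolvectl dns lxdbr0 10.x.x.1
ExecStart=/usr/bin/resolvectl domain lxdbr0 '~lxd'

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now lxdbr0-dns.service
```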
Jonathan Hartley (480 rep)
Sep 27, 2022, 06:42 PM • Last activity: Jan 9, 2025, 08:36 PM
-1 votes
1 answers
21 views
Ansible/LXD error when installing XRoad: Attribute Error: NoneType has no attribute "items", pointing to lxd_container.py
I was trying to install XRoad using Ansible and LXD with these steps:

sudo snap install lxd
lxd init
$ sudo apt update
$ sudo apt install software-properties-common
$ sudo add-apt-repository --yes --update ppa:ansible/ansible
$ sudo apt install ansible
ansible-playbook hosts -i hosts/lxd_hosts.txt xroad_init.yml

I got this error:

Attribute Error: NoneType has no attribute "items" leading to lxd_container.py
ansible-loop-var item: xroad-lxd-cs
Localhost ok=1 failed=1

The first time, doing nearly exactly the same steps, I didn't get any error. Now that I tried installing XRoad on a different VM (Ubuntu), I got this error. I'm using Ubuntu 22.04. Thanks for any help.

I tried:
- Reinstalling lxd and ansible (to different versions)
- Installing Python 3.12
- apt update + upgrade
- Changing from Bridged to NAT
- Reinstalling Ubuntu

(Screenshots of the error were attached as images; not reproduced here.)
user27623139 (1 rep)
Oct 7, 2024, 03:14 AM • Last activity: Oct 8, 2024, 09:32 AM
0 votes
1 answers
333 views
Incus - Setting migration.stateful for stateful snapshots
I'm trying to get to grips with [Incus](https://linuxcontainers.org/incus/introduction/), because it looks like it is a fork of Canonical's LXD which I can run fairly easily on Debian 12 with a deb package, rather than using snaps. I have it all set up in a virtual machine on my KVM host, running with both a basic directory-based storage pool and a zfs storage pool. I have spun up a test container called test that I want to take a stateful snapshot of, but it tells me:

> To create a stateful snapshot, the instance needs the migration.stateful config set to true

After reading [the documentation on configuring an instance's options](https://linuxcontainers.org/incus/docs/main/howto/instances_configure/#configure-instance-options), I have tried running these various commands (the first being the one that I think is the most likely to be correct):
incus config set test migration.stateful=true
incus config set test config.migration.stateful=true
incus config set test.migration.stateful=true
incus config set migration.stateful=true
... but I always get an error message similar to the one below about an unknown configuration key:

> Error: Invalid config: Unknown configuration key: migration.stateful

I have also tried setting the option through the YAML configuration (screenshot omitted), but it just gets stuck on "Processing..." (screenshot omitted).

How is one supposed to enable stateful snapshots of Incus Linux containers? Perhaps this is just not possible because I am running inside a virtual machine, rather than on a physical box?
Programster (2289 rep)
Mar 17, 2024, 06:11 PM • Last activity: Apr 25, 2024, 02:42 PM
4 votes
2 answers
8429 views
Creating a custom template, based on some existing LXC template after running the instance at least once
(Please note that [this question](https://unix.stackexchange.com/q/258164/5462) is about LXC 1.x, whereas *this one* is about LXC 2.x/LXD.) I scoured the web for an answer to this one, but couldn't come up with any reasonably non-hacky answer. What I am looking for is an approach to fashion an existing template the way I'd like to. In particular, what I'm after is to customize the upstream Ubuntu cloud image by making various changes in its root FS and adding/changing configuration.

So my current approach is to lxc launch ubuntu:lts CONTAINER and then use lxc exec CONTAINER -- ... to run a script I authored (after pushing it into the container) to perform my customizations. What I get using this approach is a reasonably customized container. Alas, there's a catch. The container at this point has been primed by cloud-init, and it's a container instance, not an image/template.

So this is where I'm at a loss now. What I would need is to turn my container back into an image (should be doable by using lxc publish) and either undo the changes done to it by cloud-init, or at least "cock" cloud-init again so it triggers the next time the image is used as the source for lxc init or lxc launch. Alternatively, maybe there's a way to completely disable cloud-init when I lxc launch from the upstream image?

Is there an authoritative way? Even though I looked through all kinds of documentation, including the [Markdown documentation in the LXD repository](https://github.com/lxc/lxd/tree/master/doc) as well as the blog series by Stéphane Graber (LXD project lead), especially part 5 of 12, I was unable to find a suitable approach. Perhaps I just missed it (that's to say, I'll be happy to read through more documentation if you know some that describes what I need). LXC version used is 2.20 (i.e. I'm using the LXD frontend).
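A minimal sketch of the publish route, with cloud-init re-armed before the image is captured (the container and alias names are placeholders, and it assumes a cloud-init new enough to provide the clean subcommand):

```
lxc exec CONTAINER -- cloud-init clean --logs   # drop instance state so cloud-init runs again on next boot
lxc stop CONTAINER
lxc publish CONTAINER --alias my-custom-lts
lxc launch my-custom-lts test1                  # new instances start from the customized image
```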
0xC0000022L (16938 rep)
Nov 27, 2017, 03:59 PM • Last activity: Oct 26, 2023, 04:19 AM
1 votes
0 answers
183 views
Multipass with QEmu driver: how to access usb camera
On my Asus ROG laptop with Ubuntu 23.10 I installed the ROS (Robot OS) image from Multipass, as explained in this official guide: https://ubuntu.com//blog/ros-development-on-linux-windows-and-macos . I need to work on SLAM algorithms and need access to USB cameras, but unfortunately lsusb on the ROS command line shows nothing. On the host, I can see the laptop camera as a USB device in the output of the lsusb command:

bloom@bloom-ROG-Strix-G614JZ-G614JZ:~/sw_develop/Temp$ lsusb
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 322e:2122 Sonix Technology Co., Ltd. USB2.0 HD UVC WebCam
Bus 001 Device 002: ID 0b05:19b6 ASUSTek Computer, Inc. N-KEY Device
Bus 001 Device 004: ID 8087:0033 Intel Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

The local driver for Multipass appears to be QEMU, as per:

bloom@bloom-ROG-Strix-G614JZ-G614JZ:~/sw_develop/Temp$ multipass get local.driver
qemu

- Is it possible to configure the QEMU used by Multipass?
- Is it possible to configure QEMU so that the next time I start the noetic ROS image I will be able to use the laptop camera or any other USB camera?
- If not possible with QEMU, would it be possible with other drivers like lxd, etc.?
AndrewBloom (111 rep)
Oct 18, 2023, 08:42 PM • Last activity: Oct 18, 2023, 09:17 PM
0 votes
1 answers
1028 views
How can I monitor individual containers' resource usage in LXD/C
I would like to be able to view which individual containers are using what percentage of CPU, memory etc. I have installed HTOP but it doesn't tell me which container, and I have 20+ containers running.
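Stock LXD tooling can show this per container without extra packages; a small sketch (mycontainer is a placeholder, and the exact field names in lxc info output vary slightly between LXD versions):

```
lxc info mycontainer | grep -A6 'Resources:'        # CPU seconds and memory for one container
# Loop over every container by name:
for c in $(lxc list --format csv -c n); do
  echo "== $c"
  lxc info "$c" | grep -E 'Memory \(current\)|CPU usage'
done
```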
unixlearner (1 rep)
Sep 23, 2022, 12:37 PM • Last activity: Apr 20, 2023, 10:28 AM
1 votes
2 answers
1935 views
LXD ZFS storage pool on lvm filesystem?
The default filesystem of my machine is LVM-backed (Ubuntu 22.04). I would like to spin up LXD/LXC virtual machines to run some Apache projects like Hadoop and Spark. When setting up Hadoop there is a step where I need to format the filesystem within the VM to HDFS, and I don't know how that would affect the system if my storage is DIR and not something like ZFS. Furthermore, I cannot manually assign a custom size to a DIR storage pool, but I can on ZFS. ZFS seems the superior option, but the problem is my native filesystem is LVM, not ZFS. Is it possible to have ZFS storage despite the LVM filesystem underneath, or do I need to change my entire filesystem to ZFS? When I run lxd init, the setup does offer to make my backend storage zfs [default=zfs]. Any insight on ZFS/LVM coexistence is greatly appreciated. I am not very fluent in this topic and I don't want to break my machine :)
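LXD can layer a ZFS pool on top of the existing LVM-backed filesystem by using a loop file, so reformatting the host is not required. A minimal sketch (pool name and size are arbitrary; with size= and no source, LXD creates and manages a sparse image file itself):

```
lxc storage create zfspool zfs size=50GiB
lxc storage list                                     # the new pool should appear with driver "zfs"
lxc launch ubuntu:22.04 hadoop1 --vm --storage zfspool   # put a new VM on that pool
```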
Mnemosyne (161 rep)
Mar 15, 2023, 06:31 PM • Last activity: Mar 16, 2023, 11:54 AM
3 votes
1 answers
2919 views
Project Quota on a live root EXT4 filesystem without live-cd
How do I accomplish setting up project quota for my live root folder, which is ext4, on Ubuntu 18.04? Documentation specific to project quota on the ext4 filesystem is basically non-existent, and I tried this:

1. Installed quota with apt install quota -y
2. Put prjquota into /etc/fstab for the root / and rebooted; the filesystem came up read-only, no project quota (from here, only with prjquota instead of the user and group quotas)
3. Also ran find /lib/modules/$(uname -r) -type f -name '*quota_v*.ko*' and both kernel modules /lib/modules/4.15.0-96-generic/kernel/fs/quota/quota_v2.ko and /lib/modules/4.15.0-96-generic/kernel/fs/quota/quota_v1.ko were found (from this tutorial)
4. Put GRUB_CMDLINE_LINUX_DEFAULT="rootflags=prjquota" into /etc/default/grub, ran update-grub and rebooted; the machine does not come up anymore.
5. Putting rootflags=quota into GRUB_CMDLINE_LINUX="... rootflags=quota", running update-grub and restarting did show quota and usrquota being enabled on root, but it does not work with prjquota or pquota or project being set as a rootflag.

I need this for the DIR storage backend of LXD to be able to limit container storage size. What else can I try?
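For reference, the usual sequence for ext4 project quota is sketched below; the catch, and probably the core of this question, is that tune2fs will only set these features on a filesystem that is not mounted read-write, which is exactly why a live CD is normally involved. The device name is a placeholder, and the repquota -P flag assumes a reasonably recent quota-tools:

```
tune2fs -O project,quota -Q prjquota /dev/sdXN   # enable the project + quota features (fs not mounted rw)
# /etc/fstab entry for the root filesystem:
#   UUID=...  /  ext4  defaults,prjquota  0  1
reboot
repquota -P /                                    # report project quotas after the reboot
```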
michacassola (51 rep)
Apr 6, 2020, 08:48 PM • Last activity: Jan 30, 2023, 08:02 PM
1 votes
1 answers
4976 views
How can I display a GUI LXC container on a physically connected display without a window manager?
I want to have a setup where I have LXC OS containers that start full-screen on particular displays. As an intermediate step, I am trying to get X apps from the container to display via host X. I have been using this as a guide: https://blog.simos.info/running-x11-software-in-lxd-containers/ My procedure is like this: ### Create a profile for running X11 containers.
cat > x11.profile << EOF
config:
  environment.DISPLAY: :0
  environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
  nvidia.driver.capabilities: all
  nvidia.runtime: "true"
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
description: GUI LXD profile
devices:
  PASocket1:
    bind: container
    connect: unix:/run/user/1000/pulse/native
    listen: unix:/home/ubuntu/pulse-native
    security.gid: "1000"
    security.uid: "1000"
    uid: "1000"
    gid: "1000"
    mode: "0777"
    type: proxy
  X0:
    bind: container
    connect: unix:@/tmp/.X11-unix/X0
    listen: unix:@/tmp/.X11-unix/X0
    security.gid: "1000"
    security.uid: "1000"
    type: proxy
  mygpu:
    type: gpu
name: x11
used_by: []
EOF


lxc profile create x11
cat x11.profile | lxc profile edit x11
### I can then create the container (although X needs to be started, which I did not at first realize was necessary)
$ lxc launch ubuntu:18.04 --profile default --profile x11 mycontainer
Creating mycontainer
Starting mycontainer
Error: Failed to run: /snap/lxd/current/bin/lxd forkstart mycontainer /var/snap/lxd/common/lxd/containers /var/snap/lxd/common/lxd/logs/mycontainer/lxc.conf: 
Try lxc info --show-log local:mycontainer for more info


$ sudo startx &
 12745
$ 
X.Org X Server 1.20.11
X Protocol Version 11, Revision 0
Build Operating System: linux Ubuntu
Current Operating System: Linux virtland 5.11.0-40-generic #44~20.04.2-Ubuntu SMP Tue Oct 26 18:07:44 UTC 2021 x86_64
Kernel command line: BOOT_IMAGE=/BOOT/ubuntu_2aec3h@/vmlinuz-5.11.0-40-generic root=ZFS=rpool/ROOT/ubuntu_2aec3h ro init_on_alloc=0 amd_iommu=on vfio_pci.ids=10de:1b81,10de:10f0,1002:67df,1002:aaf0,1b21:1242 crashkernel=512M-:192M
Build Date: 06 July 2021  10:17:51AM
xorg-server 2:1.20.11-1ubuntu1~20.04.2 (For technical support please see http://www.ubuntu.com/support)  
Current version of pixman: 0.38.4
        Before reporting problems, check http://wiki.x.org 
        to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting,
        (++) from command line, (!!) notice, (II) informational,
        (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Wed Nov 17 18:29:01 2021
(==) Using config file: "/etc/X11/xorg.conf"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
$ lxc launch ubuntu:18.04 --profile default --profile x11 mycontainer
Creating mycontainer
Starting mycontainer
### When I run xeyes from within the container, I cannot connect to the display. I had this working with a previous install (it would make xeyes appear on the host desktop, but without the desktop it would bring up a bare xterm window instead of xeyes), but for some reason now it is not working at all and I cannot seem to figure out what I have done differently.
~$ lxc exec mycontainer -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo ".
See "man sudo_root" for details.

ubuntu@mycontainer:~$ xeyes
No protocol specified
Error: Can't open display: :0
### So, for starters, I'd like to be able to make xeyes appear on the host desktop again.

### Eventually I would like to be able to run it with bare X and no window manager, and eventually run a full DE using that procedure.

System info:
OS: Ubuntu Server 20.04
LXD version: 4.20
LXC version: 4.20
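For the "No protocol specified" part specifically: that message is X authorization (Xauthority) rejecting the client, not a missing socket. A quick, hedged check is to relax access for local clients on the host and retry; this is a diagnostic step rather than a hardened setup:

```
# On the host, with X running: allow local (Unix-socket) clients without an Xauthority cookie
xhost +local:
# Then retry from the container shell, as in the question, and run xeyes again
lxc exec mycontainer -- sudo --user ubuntu --login
```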
Stonecraft (869 rep)
Nov 17, 2021, 11:47 PM • Last activity: Nov 12, 2022, 09:02 AM
0 votes
3 answers
7514 views
What is the ID of nobody user and nogroup group?
When trying LXD, I tried to share a folder from my computer with the LXC Container, but I could not write in the folder in the container because ls -l shows that it belongs to user nobody and group nobody. How to know the ID of this user and group?
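A quick way to check, with the caveat that the exact IDs are distribution defaults rather than guaranteed values (on Debian/Ubuntu they are usually 65534):

```
getent passwd nobody     # e.g. nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
getent group nogroup     # e.g. nogroup:x:65534:
id nobody
```

In an unprivileged container, host files whose owner is not mapped into the container's ID range are presented as this overflow uid/gid, which is why the shared folder shows up as nobody:nogroup.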
Ruben Alves (151 rep)
Oct 11, 2022, 01:46 PM • Last activity: Oct 11, 2022, 10:06 PM
1 votes
1 answers
129 views
Route traffic to bridged network
I have a server running multiple services, each inside its own LXD container. All containers use a bridged network. For now I am routing everything through the server's main IP address and forwarding the required ports to each container, but this is annoying. For example, I have multiple web GUIs running, and with this setup they all need a different port like 8080, 8081 and so on, instead of just using the standard port 80 or 443.

What I would like is to set up Pi-hole with internal URLs pointing at each container's IP, but I need to somehow have traffic automatically routed through the server. I am able to manually add a route for the LXD "10.x.x.x" network via the server and have a device connect directly to a container, but I would have to do this on every single device that is connected to the network.

Is there a way to have the server tell the router to route traffic for this network through the server? I know that I can set up LXD containers to use macvlan, but this introduces other issues and is not a real solution either, more of a work-around.
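A common way to avoid per-device configuration is a single static route on the LAN router (or, failing that, on each client) pointing the LXD subnet at the server. A sketch with placeholder addresses; the actual lxdbr0 subnet and the server's LAN address have to be substituted:

```
# Find the bridge subnet on the LXD host:
lxc network get lxdbr0 ipv4.address
# On the router (or another LAN host), send that subnet via the LXD host's LAN address:
ip route add 10.0.3.0/24 via 192.168.1.10      # both addresses are placeholders
# On the LXD host, forwarding must be enabled (LXD normally handles this already):
sysctl -w net.ipv4.ip_forward=1
```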
dbergloev (13 rep)
Sep 5, 2022, 09:46 AM • Last activity: Sep 8, 2022, 07:47 AM
0 votes
1 answers
2138 views
Docker on Debian Failing to Bind to Port 80
I am running Docker on Debian 11. I deploy an Nginx container and it fails to bind to port 80 even though port 80 is not in use by any other process. I even tried running Docker as root.

Here's the command:

docker run -d -p 80:80 nginx:alpine

Here's the container logs:
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2022/08/03 11:06:15 [emerg] 1#1: socket() 0.0.0.0:80 failed (13: Permission denied)
nginx: [emerg] socket() 0.0.0.0:80 failed (13: Permission denied)
I suspect that **Apparmor is blocking the network access**. When I uninstall Apparmor everything works well. However, with Apparmor installed, only **privileged** Docker containers are able to connect to the internet. Please let me know if you need any other information to help me debug this problem.
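If AppArmor is indeed interfering, the denial should show up in the kernel log with apparmor="DENIED" and the profile name (typically docker-default for containers). A short diagnostic sketch using standard AppArmor tooling:

```
sudo aa-status                                    # which profiles are loaded and in enforce mode
sudo dmesg | grep -i 'apparmor="DENIED"'          # recent denials, including the operation (e.g. create/bind)
sudo journalctl -k | grep -i apparmor | tail -n 20
```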
Krishna Chaitanya (1 rep)
Aug 4, 2022, 04:55 AM • Last activity: Aug 4, 2022, 07:01 AM
1 votes
0 answers
192 views
Prevent network namespaces / lxd-bridges from talking to each other, but keep internet access?
Following goal:

- I have lxd containers
- each set of containers should have their dedicated isolated network
- each should still be able to connect to the internet (e.g. apt update or curl), but not leak ports (e.g. a webserver) to the internet, unless I lxd proxy them

I have tried to deny same-subnet communication with 10.0.0.0/8 and the like, but that would prevent the containers from getting an IP from LXD-dhcp or reaching the internet. To allow internet access to LXD I currently do:

sudo ufw allow in on
sudo ufw route allow in on

I have now attempted to create network namespaces to control the flow manually, but as soon as the namespaces are connected to their respective bridge, they can cross-talk, and I'm back at my starting point of trying to keep them separate.

I have come across many solutions that just suggest to e.g. deny bridge1 to bridge2, but that scales horribly: the more networks I have, the more rules going both ways I have to set up, and even scripting that would be cumbersome and spam iptables with hundreds of rules.

Is there a way to achieve the above goal without having to add hundreds of rules to keep cross-talk away, some kind of default-off, opt-in communication? Maybe I would want some networks to communicate with each other, but by default I would want them all to keep to their own namespace/bridge.

The following diagram shows it for my current attempt of using network namespaces (orange lines mark optional communication; I could live without it [and can currently toggle it via lxd port security] if that meant there's an easy solution for the rest, but I would prefer to keep the option to allow those communications if the need arises):

(network diagram image not reproduced)

It has been bootstrapped by the following code:

#!/usr/bin/env bash

NS1="ns1"
VETH1="veth1"
VPEER1="vpeer1"
NS2="ns2"
VETH2="veth2"
VPEER2="vpeer2"

# clean up previous
ip netns del ${NS1} >/dev/null 2>&1
ip netns del ${NS2} >/dev/null 2>&1
ip link delete ${VETH1} >/dev/null 2>&1
ip link delete ${VETH2} >/dev/null 2>&1
ip link delete ${VETH1} >/dev/null 2>&1
ip link delete ${VETH2} >/dev/null 2>&1

# create namespace
ip netns add $NS1
ip netns add $NS2

# create veth link
ip link add ${VETH1} type veth peer name ${VPEER1}
ip link add ${VETH2} type veth peer name ${VPEER2}

# setup veth link
ip link set ${VETH1} up
ip link set ${VETH2} up

# add peers to ns
ip link set ${VPEER1} netns ${NS1}
ip link set ${VPEER2} netns ${NS2}

# setup loopback interface
ip netns exec ${NS1} ip link set lo up
ip netns exec ${NS2} ip link set lo up

# setup peer ns interface
ip netns exec ${NS1} ip link set ${VPEER1} up
ip netns exec ${NS2} ip link set ${VPEER2} up

# assign ip address to ns interfaces
VPEER_ADDR1="10.10.0.10"
VPEER_ADDR2="10.20.0.10"
ip netns exec ${NS1} ip addr add ${VPEER_ADDR1}/16 dev ${VPEER1}
ip netns exec ${NS2} ip addr add ${VPEER_ADDR2}/16 dev ${VPEER2}

setup_bridge() {
    BR_ADDR="$1"
    BR_DEV="$2"
    NAMESPACE="$3"
    VETH="$4"

    # delete old bridge
    ip link delete ${BR_DEV} type bridge >/dev/null 2>&1

    # setup bridge
    ip link add ${BR_DEV} type bridge
    ip link set ${BR_DEV} up

    # assign veth pairs to bridge
    ip link set ${VETH} master ${BR_DEV}

    # setup bridge ip
    ip addr add ${BR_ADDR}/16 dev ${BR_DEV}

    # add default routes for ns
    ip netns exec ${NAMESPACE} ip route add default via ${BR_ADDR}

    # enable ip forwarding
    bash -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

    # masquerade (internet => bridge)
    iptables -t nat -A POSTROUTING -s ${BR_ADDR}/16 ! -o ${BR_DEV} -j MASQUERADE
}

# clear out postrouting
iptables -t nat -F

BR_IP="10.10.0.1"
BR_DEV="br0"
setup_bridge $BR_IP $BR_DEV $NS1 $VETH1

BR_IP="10.20.0.1"
BR_DEV="br1"
setup_bridge $BR_IP $BR_DEV $NS2 $VETH2

Thanks!
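One pattern that avoids per-pair rules is an interface-name wildcard: if every container bridge shares a common name prefix, a single FORWARD rule gives default-deny between bridges while leaving bridge-to-WAN (masqueraded) traffic alone, and specific pairs can be opted in above it. A sketch assuming the bridges are named with an "lxdbr" prefix (iptables' trailing + is the wildcard); the actual prefix is an assumption to adapt:

```
# Default: drop forwarding from any container bridge to any other container bridge
iptables -A FORWARD -i lxdbr+ -o lxdbr+ -j DROP
# Opt-in example: allow lxdbr0 <-> lxdbr1, inserted ahead of the DROP rule
iptables -I FORWARD -i lxdbr0 -o lxdbr1 -j ACCEPT
iptables -I FORWARD -i lxdbr1 -o lxdbr0 -j ACCEPT
```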
stackbox (11 rep)
Aug 4, 2022, 03:09 AM
1 votes
1 answers
195 views
Is it possible to use a file as Filesystem?
Here is the origin of my question:

* I'm running Linux containers with the LXD snap version on Ubuntu 22.04 on a VPS. The root file system of the VPS is ext4 and there is no additional storage attached, so the default LXD storage pool is created with the *dir* option.
* When I'm taking snapshots of these LXCs, the whole data is duplicated - i.e. if the container is 6G, the snapshot becomes another 6G.
* I think if it was an LVM-backed filesystem the snapshots would be created in a different way.

So my question is:

* Is it possible to do something like fallocate -l 16G /lvm.fs, then format it as LVM, mount it and use it as a storage pool for LXD? And of course, how can I do that if it is possible?

---

Some notes: The solution provided by @larsks works as expected! Later I found that when we use lxc storage create pool-name lvm without additional options and parameters, it does almost the same thing. I didn't test it before I published the question because I was thinking the lvm driver would mandatorily require a separate partition. However, in both cases this approach, in my opinion, has many more cons than pros, for example:

* The write speed is decreased by about 10% in comparison to the cases when the dir driver is used.
* It is hard to recover from situations when no space is left on the *disk*, even when the overload data is located in /tmp... In contrast, when the dir driver is used, LXD prevents the consumption of the entire host's file system space, so your system and containers are still operational. This is much more convenient in my VPS case.
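For the original question (a file-backed LVM pool), the manual route looks roughly like the sketch below; the file path, loop device and VG name are placeholders, and the loop device has to be re-attached (e.g. via a small service) after a reboot:

```
fallocate -l 16G /lvm.img
losetup -f --show /lvm.img          # prints the allocated device, e.g. /dev/loop8
pvcreate /dev/loop8
vgcreate lxdvg /dev/loop8
lxc storage create lvmpool lvm source=lxdvg
```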
pa4080 (121 rep)
May 6, 2022, 09:10 AM • Last activity: May 9, 2022, 06:19 AM
0 votes
1 answers
1162 views
Connection refused between 2 linux containers
On my host Ubuntu 18.04 I am running two LXC containers using default setups. The containers use Ubuntu 18.04 as well. I have an app running on container1 that offers an HTTPS-based service on https://localhost:3000/. Container2 is not able to even establish a connection with container1. Container2 can ping container1 and read the HTML of the default Apache2 server running on localhost (for container1). Testing with netcat, I can establish a connection with a few main ports, however I get connection refused for port 3000:

root@c2:~# nc -zv c1 22
Connection to c1 22 port [tcp/ssh] succeeded!
root@c2:~# nc -zv c1 80
Connection to c1 80 port [tcp/http] succeeded!
root@c2:~# nc -zv c1 443
nc: connect to c1 port 443 (tcp) failed: Connection refused
nc: connect to c1 port 443 (tcp) failed: Connection refused
root@c2:~# nc -zv c1 3000
nc: connect to c1 port 3000 (tcp) failed: Connection refused
nc: connect to c1 port 3000 (tcp) failed: Connection refused

The same situation applies between my host and any of my containers. Only ports 22 and 80 seem to be reachable by default. I tried enabling ufw on all containers, but it still doesn't work out:

root@c1:~# ufw status
Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
22/tcp                     ALLOW       Anywhere
22                         ALLOW       Anywhere
443                        ALLOW       Anywhere
873                        ALLOW       Anywhere
3000                       ALLOW       Anywhere
Anywhere on eth0@if16      ALLOW       Anywhere
Apache                     ALLOW       Anywhere
80                         ALLOW       Anywhere
20                         ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
22/tcp (v6)                ALLOW       Anywhere (v6)
22 (v6)                    ALLOW       Anywhere (v6)
443 (v6)                   ALLOW       Anywhere (v6)
873 (v6)                   ALLOW       Anywhere (v6)
3000 (v6)                  ALLOW       Anywhere (v6)
Anywhere (v6) on eth0@if16 ALLOW       Anywhere (v6)
Apache (v6)                ALLOW       Anywhere (v6)
80 (v6)                    ALLOW       Anywhere (v6)
20 (v6)                    ALLOW       Anywhere (v6)
Anywhere                   ALLOW OUT   Anywhere on eth0@if16
Anywhere (v6)              ALLOW OUT   Anywhere (v6) on eth0@if16

Even testing via curl clearly shows me that the port connection is refused and that's the issue:

root@c2:~# curl https://10.155.120.175:3000/
curl: (7) Failed to connect to 10.155.120.175 port 3000: Connection refused

I have been stuck on this issue for a week, can anyone help me troubleshoot it?

Edit (additional data): results for netstat on container1:

root@c1:~# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      289/systemd-resolve
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1385/sshd
tcp        0      0 127.0.0.1:3000          0.0.0.0:*               LISTEN      293/MyApp
tcp6       0      0 :::80                   :::*                    LISTEN      310/apache2
tcp6       0      0 :::22                   :::*                    LISTEN      1385/sshd
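Worth noting from the netstat output above: MyApp is bound to 127.0.0.1:3000, i.e. the loopback interface only, so connections arriving on the container's eth0 address are refused regardless of firewall rules. A small hedged check (the setting that changes the bind address depends on the application itself):

```
# On container1: a loopback-only listener is unreachable from other hosts
ss -lntp | grep ':3000'         # 127.0.0.1:3000 -> loopback only; 0.0.0.0:3000 -> all interfaces
# After reconfiguring the app to listen on 0.0.0.0 (or the container address), retest from container2:
nc -zv c1 3000
curl -k https://10.155.120.175:3000/
```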
Mnemosyne (161 rep)
Nov 19, 2021, 12:38 PM • Last activity: Nov 19, 2021, 01:38 PM
0 votes
1 answers
732 views
Install browser inside LXD container and run it on host OS
I am trying to install the Brave browser inside an LXD container (Void Linux preferably, or Linux Mint), create a shortcut for that app inside my host OS and launch it like any other Linux app, with the exception that it will run inside a container. I am not sure how to configure the display part or the LXC profile on my non-Ubuntu host OS. I tried these tutorials with no success: https://blog.simos.info/running-x11-software-in-lxd-containers/

For example:

environment.PULSE_SERVER: unix:/home/ubuntu/pulse-native
connect: unix:/run/user/1000/pulse/native
listen: unix:/home/ubuntu/pulse-native

What can I replace ubuntu and user with in a Gentoo distro?

lxc exec mycontainer -- sudo --user ubuntu --login

That isn't working in a Void Linux container.
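The Ubuntu-specific bits in that guide are mostly paths containing the uid 1000 and the user name ubuntu. A hedged sketch of how one might find the equivalents on a Gentoo (or any non-Ubuntu) host; mycontainer and someuser below are placeholders, since non-Ubuntu images generally do not ship an "ubuntu" user:

```
# On the host: the uid and runtime dir the proxy devices should point at
id -u
echo "$XDG_RUNTIME_DIR"                  # typically /run/user/<uid>
ls -l /run/user/"$(id -u)"/pulse/native  # the PulseAudio socket the profile connects to
# In the container, log in as whatever unprivileged user the image provides:
lxc exec mycontainer -- su - someuser
```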
CodeGust (141 rep)
Oct 25, 2021, 09:08 PM • Last activity: Nov 3, 2021, 12:51 PM
2 votes
0 answers
747 views
Start armv7 container on arm64 with lxc / lxd
I installed Ubuntu 20.04 on my Raspberry PI, which has an arm64v8 architecture (but could be on any other debian arm distribution / hardware). Currently I compile a program for several arm architectures / distros. So I use lxc container for that purpose. This worked well for all debian and ubuntu versions for the arm64v8 architecture. Then I downloaded an armhf container for debian buster, which technically should be architecture arm32v7 alias armv7:
lxc launch images:debian/10/armhf armhf-buster
Then I logged into the container and uname -a says: armv8l. I even tried to compile, but pip wheel refuses to take the arm32v7 packages, and therefore I have to compile all dependencies for arm32v8 myself, which takes ages (waited 4 hours for one package and then aborted) due to limited memory and CPU capacities. **Anyways: is there a way to launch a container as armv7 on an arm64v8 distro?** _P.s.: obviously I can also install the official Raspberry PI OS, which is armv7l, on that armv8 processor, so technically it shouldn't be a problem to run that as lxc virtualization._
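Two things can be checked here, and they explain the armv8l output: the container shares the host kernel, so uname reports the kernel's machine string, while the userspace really is 32-bit armhf. A small verification sketch (container name taken from the question):

```
lxc exec armhf-buster -- dpkg --print-architecture   # should print "armhf" (32-bit userspace)
lxc exec armhf-buster -- uname -m                     # reports the host kernel's machine, e.g. armv8l
lxc image list images:debian/10/armhf                 # confirms the image architecture
```

Whether pip then accepts armv7l wheels is a separate problem, since it keys off the reported platform rather than the packaging architecture.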
User Rebo (121 rep)
Dec 5, 2020, 09:34 PM • Last activity: Apr 19, 2021, 08:15 PM
1 votes
0 answers
497 views
Linux on chromeos encountering many errors:58:51
OK, this is a complex maze of error messages, so bear with me. When I try to open or do anything with Linux I get:
[=======/  ] Starting the Linux container Error starting penguin container: 58
Launching VM shell failed: Error starting crostini for terminal: 58
When I close and reopen the terminal I get a ready message, then an error message that I can't read because it closes directly after it displays. This is what I get when I run lxc list:
(termina) chronos@localhost ~ $ lxc list
+---------+---------+-----------------------+------+------------+-----------+
|  NAME   |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+---------+---------+-----------------------+------+------------+-----------+
| penguin | RUNNING | 100.115.92.197 (eth0) |      | PERSISTENT | 0         |
+---------+---------+-----------------------+------+------------+-----------+
(termina) chronos@localhost ~ $
I get this when I run vmc list:
crosh> vmc list
penquin (6238208 bytes, raw, sparse)
termina (5690859520 bytes, min shrinkable size 5021630464 bytes, raw)
Total Size (bytes): 5697097728
This happened like one week ago, and I ended up powerwashing my Chromebook; then it worked until now. I really don't want to powerwash it again. Any help would be greatly appreciated. EDIT: OK, now I am getting
[====-     ] Starting the container manager Error starting penguin container: 51
Launching vmshell failed: Error starting crostini for terminal: 51
I don't know what to do!!!
AidenSnyder (11 rep)
Apr 12, 2021, 06:46 PM • Last activity: Apr 13, 2021, 01:58 PM
Showing page 1 of 20 total questions