
Unix & Linux Stack Exchange

Q&A for users of Linux, FreeBSD and other Unix-like operating systems

Latest Questions

0 votes
3 answers
1964 views
sed? - Insert line after a string with special characters to Neutron service
I am attempting to write a bash script that inserts a string after matching a string in /usr/lib/systemd/system/neutron-server.service. I have been able to do this easily on other files, as I was just inserting variables into the necessary config files, but this one seems to be giving me trouble. I believe the problem is that sed is not ignoring the special characters. I have tried sed with single quotes and with double quotes (which I understand are for variables, but I thought it might change something). Is there a better way of going about this, or some special sed flag or syntax I am missing?

sed '/--config-file /etc/neutron/plugin.ini/a\--config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini' /usr/lib/systemd/system/neutron-server

TL;DR - Insert

--config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini

after

--config-file /etc/neutron/plugin.ini

Original file:

[Unit]
Description=OpenStack Neutron Server
After=syslog.target network.target

[Service]
Type=notify
User=neutron
ExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file /var/log/neutron/server.log
PrivateTmp=true
NotifyAccess=all
KillMode=process
TimeoutStartSec="infinity"

[Install]
WantedBy=multi-user.target

File after desired change command:
[Unit]
Description=OpenStack Neutron Server
After=syslog.target network.target

[Service]
Type=notify
User=neutron
ExecStart=/usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --log-file /var/log/neutron/server.log
PrivateTmp=true
NotifyAccess=all
KillMode=process
TimeoutStartSec="infinity"

[Install]
WantedBy=multi-user.target
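The slash-delimited address in the attempt above ends at the first `/` inside the path, which is what breaks the command. One way around it, sketched here against a scratch copy of the one-line ExecStart= entry (point `f` at the real unit file when applying): use `s###` with `#` as the delimiter so the slashes in the paths need no escaping, and `&` to re-insert the matched text before the new option.

```shell
# Demonstrate on a scratch file; substitute the real unit file path when applying.
f=$(mktemp)
printf '%s\n' 'ExecStart=/usr/bin/neutron-server --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common' > "$f"

# '#' as the s### delimiter avoids escaping the slashes in the paths;
# '&' in the replacement stands for the matched text, so the new option
# lands immediately after it on the same line.
sed -i 's#--config-file /etc/neutron/plugin.ini#& --config-file /etc/neutron/plugins/ml2/ml2_conf_cisco_apic.ini#' "$f"
cat "$f"
```

Note that sed's `a` (append) command adds a whole new line after the matching line; since the target option sits in the middle of the single ExecStart= line, an in-line `s` substitution matches the desired output better.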
fly2809 (1 rep)
Jun 13, 2017, 08:48 PM • Last activity: Jul 22, 2025, 09:04 AM
1 votes
2 answers
2093 views
How to solve the error in Cinder on OpenStack Havana?
I installed the OpenStack controller node on one machine, and another machine runs nova-compute only. When I run the controller node, Cinder reports errors. I have listed below exactly which services report errors; please help me.

cat /var/log/cinder/cinder-backup.log

1) ERROR cinder.service [-] Recovered model server connection!
2) 2014-11-28 12:43:35.415 4628 ERROR cinder.openstack.common.rpc.common AMQP server on 10.192.1.126:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 1 seconds.
3) ERROR cinder.brick.local_dev.lvm Unable to locate Volume Group cinder-volumes
4) ERROR cinder.backup.manager Error encountered during initialization of driver: LVMISCSIDriver
5) ERROR cinder.backup.manager Bad or unexpected response from the storage volume backend API: Volume Group cinder-volumes does not exist

scheduler:

1) ERROR cinder.service [-] Recovered model server connection!
2) ERROR cinder.volume.flows.create_volume Failed to schedule_create_volume: No valid host was found.
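Errors 3-5 above say the LVM backend cannot find a volume group named cinder-volumes, and the scheduler's "No valid host" follows from that. A minimal sketch of creating that volume group, backed by a loopback file for testing (paths and size are placeholders; a real deployment would use a dedicated disk; the refused AMQP connection on port 5672 is a separate RabbitMQ connectivity issue):

```shell
# Skip quietly where we cannot create block devices (needs root + LVM tools).
[ "$(id -u)" -eq 0 ] || exit 0
command -v vgcreate >/dev/null || exit 0

# Back the volume group with a sparse file (placeholder path and size).
truncate -s 1G /var/lib/cinder-volumes.img
dev=$(losetup --show -f /var/lib/cinder-volumes.img)

pvcreate "$dev"                 # mark the device as an LVM physical volume
vgcreate cinder-volumes "$dev"  # the name the LVM driver looks for
vgs cinder-volumes
```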
dhamu (49 rep)
Nov 28, 2014, 09:01 AM • Last activity: Jun 8, 2025, 03:06 PM
0 votes
1 answer
2468 views
OpenStack Cloud Instance can't get metadata
I'm currently setting up a private cloud for my company with OpenStack (Stein). I followed the tutorial on the official website and everything seems to work well... except that cloud instances cannot get metadata. Let me explain how my infrastructure is set up. All OpenStack nodes are installed on a KVM host (2x Xeon 32-core, 320 GB RAM, 2 TB HDD, ...). I set up the VMs as follows:

- openstack-controller001 192.168.50.11
- openstack-compute001 192.168.50.41
- openstack-storage001 192.168.50.61 (for Cinder)
- db001 192.168.50.81 (the DB is not hosted on the same server as the controller)
- ldap001 192.168.50.251 (not using LDAP yet, only a DNS and NTP server)

When I launch a new instance of Ubuntu or Debian created from cloud images, I am not able to connect to those VMs via SSH; my keypair is always refused (error: permission denied). After some investigation, I realised that the VM is not fetching the SSH public key from the metadata service. It seems the VM contacts the metadata server using the DHCP server IP address of my virtual network instead of the metadata proxy server, which is the controller if I'm not mistaken?

[ 15.840973] cloud-init: 2019-05-20 05:53:58,124 - url_helper.py[WARNING]: Calling 'http://172.16.10.10/latest/meta-data/instance-id ' failed [0/120s]: request error [HTTPConnectionPool(host='172.16.10.10', port=80): Max retries exceeded with url: /latest/meta-data/instance-id (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))]

172.16.10.10 is the DHCP server of my virtual network (172.16.0.0/16, DHCP range 172.16.10.10~172.16.20.254). I think there is something wrong with that, although the config seems correct.

**/etc/neutron/neutron.conf (openstack-controller001)**

[DEFAULT]
# ...
nova_metadata_host = openstack-controller001
metadata_proxy_shared_secret = XXXXXXXXXXXXXXXXXX

**/etc/nova/nova.conf (openstack-compute001)**

[neutron]
# ...
service_metadata_proxy = true
metadata_proxy_shared_secret = XXXXXXXXXXXXXXXXXX
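When cloud-init asks the DHCP port's address for metadata, as the log above shows, the Neutron DHCP agent must be proxying metadata itself. A hedged sketch of the options usually involved, in /etc/neutron/dhcp_agent.ini on the node running the DHCP agent (verify the option names against the Stein configuration reference for your deployment):

```ini
[DEFAULT]
# Serve metadata from the DHCP namespace even on networks without a router
enable_isolated_metadata = true
# Always push a host route for 169.254.169.254 pointing at the DHCP port
force_metadata = true
```

followed by a restart of neutron-dhcp-agent. Alternatively, when the network is attached to a Neutron router, the L3 agent's metadata proxy normally handles these requests instead.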
Julien Guillot (101 rep)
May 20, 2019, 07:03 AM • Last activity: Apr 16, 2025, 12:07 PM
6 votes
2 answers
12799 views
Script in cron cannot find command
I have a script which dumps a database and uploads the SQL file to Swift. I've run into an issue where the script runs fine in a terminal but fails under cron. A bit of debugging showed that the /usr/local/bin/swift command is not found by the script. Here's my crontab entry:

*/2 * * * * . /etc/profile; bash /var/lib/postgresql/scripts/backup

Here's what I've tried:

1. Using the full path to swift, /usr/local/bin/swift.
2. Sourcing /etc/profile before executing the bash script.

How do I solve this?
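Cron starts jobs with a minimal environment (PATH is typically just /usr/bin:/bin, and /etc/profile may return early for non-interactive shells). One common fix, sketched here, is to set PATH inside the crontab itself:

```
# crontab: cron's default PATH omits /usr/local/bin
PATH=/usr/local/bin:/usr/bin:/bin

*/2 * * * * bash /var/lib/postgresql/scripts/backup
```

If the job still fails, appending something like `>> /tmp/backup.log 2>&1` to the cron line captures the script's output and shows exactly which command cron cannot find.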
An SO User (263 rep)
Aug 8, 2017, 11:50 AM • Last activity: Aug 11, 2024, 09:52 AM
0 votes
0 answers
39 views
How to boot Partitionless on Openstack?
I just migrated a virtual machine from Linode Cloud to OpenStack via Veeam Agent. But when I boot it up I see the problem: the screen shows "*Boot failed: not a bootable disk...No bootable device...*". As far as I know about Linode Cloud, they deploy VMs with partitionless disks.

1. https://www.linode.com/docs/guides/install-nixos-on-linode/
2. https://gist.github.com/ZeroDeth/b583dffa50768bce595e10dc13a64b96

I have tried everything I know, such as shrinking the filesystem to make room for a boot partition, but all attempts have been unsuccessful.
Huy võ lê (1 rep)
Mar 19, 2024, 02:11 AM
0 votes
1 answer
266 views
Most stable/efficient method for OpenStack deployment using Debian
I was wondering which method to use to deploy OpenStack. I read the OpenStack deployment guides (https://docs.openstack.org/wallaby/deploy/index.html), but none of them focuses on Debian. What is the most stable method to deploy OpenStack on Debian?
Zaman Oof (123 rep)
Sep 30, 2021, 11:39 AM • Last activity: Jan 29, 2024, 01:22 PM
1 votes
0 answers
69 views
SH script fails when something already exists
I have a problem with my shell script. I'm writing a script that automates the installation of OpenStack Keystone and also uses a small configuration file. Everything works fine, but due to a syntax error on my part in an OpenStack command, the script fails when something has already been created, for example a RabbitMQ user, a project, or an OpenStack user. The contents of my script are:
#!/usr/bin/env bash

set -e
set -o xtrace

source "$PWD/install-keystone.conf"

conf=/etc/keystone/keystone.conf

if [ "$EUID" -ne 0 ]; then 
  echo "To proceed with the installation of OpenStack Keystone you must be root user"
  echo "Please run as root"
  exit 1
fi


config_mysql()
{
apt install mariadb-server python3-pymysql -y


if ! test -f /etc/mysql/mariadb.conf.d/99-openstack.cnf; then
cat >> /etc/mysql/mariadb.conf.d/99-openstack.cnf > /root/admin-openrc.sh > /root/demo-openrc.sh << EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=$DEMO_PASSWORD
export OS_AUTH_URL=http://$HOST_IP:5000/v3 
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
fi

}

config_mysql
config_rabbitmq
install_pkgs
create_keystone_database
conf_keystone
create_projects_users
request_auth_token
create_cli_environment_scripts

echo ""
echo ""
echo ""
echo "This is your host IP address: $HOST_IP"
echo "Keystone URL is available at http://$HOST_IP:5000/ "
echo "The default users are: admin and demo"
echo "The password of administrator is '$ADMIN_PASSWORD'"
echo ""
echo "OpenStack Keystone has been successfully installed!"
echo "Learn more about Keystone in guides and more from : https://docs.openstack.org/keystone/latest/ "
echo ""
echo "Now connect this Keystone installation to other OpenStack services"
echo "It is recommended to subsequently install Glance (Image Service) on your controller node"
exit
My custom config file:
HOST_IP=
ADMIN_PASSWORD=
DEMO_PASSWORD=
DATABASE_PASSWORD=
RABBITMQ_PASSWORD=
I realized that it initially failed because I made a mistake in the --os-username section: I didn't enter the username, so the script failed through my own fault. Now, as soon as it comes to creating the OpenStack user in RabbitMQ, it says the user already exists, but the script fails instead of ignoring that message. Is there a way to ignore the message? I use -e so that the script does not continue after an error, but even when something merely already exists it is treated as an error and the script doesn't proceed.
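With `set -e`, any command that returns non-zero aborts the script, including a "create" that fails only because the object already exists. A small sketch of two common ways to keep such a script idempotent (the `create_user` function is a stand-in for commands like `rabbitmqctl add_user` or `openstack user create`):

```shell
#!/usr/bin/env bash
set -e

# Stand-in for a create command that fails when the object already exists.
create_user() { echo "user already exists" >&2; return 1; }

# Option 1: tolerate the failure explicitly; `|| true` keeps `set -e`
# from aborting the script on this one command.
create_user || true

# Option 2: check for the object first and create it only when missing.
user_exists() { true; }   # stand-in for e.g. a "show"/"list" lookup
if ! user_exists; then
  create_user
fi

echo "script continues"
```

Applied to the script above, that would look like `openstack user create --domain default demo || true`, or a guard such as `if ! openstack user show demo >/dev/null 2>&1; then ... fi`, so that `set -e` still catches every other failure.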
Steforgame 910 (11 rep)
Oct 31, 2023, 08:52 AM • Last activity: Oct 31, 2023, 09:03 AM
3 votes
1 answer
1733 views
libvirt-bin error on a VM when I try to list VM
I'm using VirtualBox 4.3.18 on my Arch Linux host machine and libvirt-bin 1.2.9 on my Ubuntu Server Cloud guest machine. Every time I try to follow [this tutorial](http://maas.ubuntu.com/docs/nodes.html#virtual-machine-nodes) I receive the following error when I run virsh:

Command: virsh -c vbox+ssh://leandro@10.0.3.15/system list --all
Error: error: failed to connect to the hypervisor
error: internal error: unable to initialize VirtualBox driver API

Does anyone know how to fix this?
Leandro (131 rep)
Oct 17, 2014, 07:52 PM • Last activity: Oct 23, 2023, 02:01 AM
0 votes
0 answers
799 views
error : virNetSocketReadWire:1792 : End of file while reading data: Input/output error
libvirt version - 8.0.0
OS - CentOS 8 Stream
Kernel - Linux 4.18.0-408.el8.x86_64 #3 SMP Mon Aug 21 12:19:16 EDT 2023 x86_64 x86_64 x86_64 GNU/Linux
OpenStack version - Yoga 24.1.1

I am running an OpenStack setup. After running I/O for a couple of days, I see input/output errors in the libvirtd logs. I am not sure how to find the root cause and solve it.
varun vyas (1 rep)
Aug 24, 2023, 10:45 AM
0 votes
0 answers
266 views
vlan with linux bridge ping not working
I want to configure the bridge br-mgmt with a VLAN. I created the VLAN and the bridge, then assigned IP addresses to the bridge, but ping is not working. My netplan configuration looks like this:

server 1
root@server1:~# cat /etc/netplan/02-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp5s0:
      dhcp4: no
  vlans:
    vlan.50:
      id: 50
      link: enp5s0
  bridges:
    br-mgmt:
      interfaces: [vlan.50]
      addresses: [ 192.168.50.194/24 ]
server 2
root@server2:~# cat /etc/netplan/02-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    enp5s0:
      dhcp4: no
  vlans:
    vlan.50:
      id: 50
      link: enp5s0
  bridges:
    br-mgmt:
      interfaces: [vlan.50]
      addresses: [ 192.168.50.195/24 ]
ping test
root@server1:~# ping 192.168.50.195
PING 192.168.50.195 (192.168.50.195) 56(84) bytes of data.
From 192.168.50.194 icmp_seq=1 Destination Host Unreachable
From 192.168.50.194 icmp_seq=2 Destination Host Unreachable
Am I missing something?

---

- Ubuntu 22.04 server
- kernel version: 5.15.0-75-generic
- systemd-networkd daemon for netplan
daniel (1 rep)
Jun 19, 2023, 11:25 PM • Last activity: Jun 20, 2023, 08:28 AM
4 votes
0 answers
589 views
How can I improve OpenStack Swift replication speed?
I am using OpenStack Object Storage in production and am facing a big problem: low replication speed. I have a cluster with replication factor 3, with 3 zones and 25 HDDs per zone. My container has ~100 million small objects. I added new HDDs (10 per zone) and started rebalancing my cluster, and Swift reports that the rebalance will finish after ~1 year. Swift uses rsync, and I suspect that if I copied these objects by hand with rsync it would be faster than Swift does it. Is there any way to increase replication speed in OpenStack Swift? I have the feeling that Swift "takes pity" on my HDDs and is not using 100% of their capacity to make the replication process faster. I asked this in the OpenStack forum but there were no answers.

**This is my /etc/swift/object-server.conf**

[DEFAULT]
bind_port = 6000
user = swift
swift_dir = /etc/swift
devices = /mnt/swift
mount_check = True
log_level = ERROR
conn_timeout = 5
container_update_timeout = 5
node_timeout = 5
max_clients = 4096

[pipeline:main]
pipeline = healthcheck recon object-server

[app:object-server]
use = egg:swift#object
replication_concurrency = 1500
replication_one_per_device = False
replication_lock_timeout = 30

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock

[object-replicator]
concurrency = 1500
run_pause = 5
interval = 5
log_level = DEBUG
stats_interval = 10
rsync_io_timeout = 60

[object-reconstructor]

[object-updater]
concurrency = 200
interval = 20
slowdown = 0.008
log_level = DEBUG

[object-auditor]
interval = 300

[filter:xprofile]
use = egg:swift#xprofile

**This is /etc/rsyncd.conf**

[object]
path = /mnt/swift
read only = false
write only = no
list = yes
incoming chmod = 0644
outgoing chmod = 0644
max connections = 1500
lock file = /var/lock/object.lock
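A hedged observation on the config above: a replicator concurrency of 1500 against a few dozen disks per node tends to produce seek thrash rather than throughput. Settings commonly tuned after a ring change look more like the sketch below (values illustrative, not prescriptive; check the Swift deployment guide for your release before applying):

```ini
[object-replicator]
# A few workers per device usually beats hundreds fighting over seeks
concurrency = 8
# Move misplaced (handoff) partitions first after a rebalance
handoffs_first = True
```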
Kostyantyn Velychkovsky (41 rep)
Jan 19, 2018, 12:12 PM • Last activity: May 16, 2023, 06:56 AM
1 votes
0 answers
85 views
How to deploy a backup Keystone service across multiple regions
In my situation, there are three OpenStack clusters deployed in three different cities, called regionA, regionB and regionC.

- We use one Keystone service, which is deployed only in regionA.
- Now we should deploy a backup Keystone service in regionC; this regionC Keystone is only for backup, to be used while the regionA Keystone is down.

How can I achieve that? In my opinion, we should deploy the databases (MariaDB) and the Keystone service in regionC, but I can't figure out how to do it, and the reference "Multiple Regions Deployment with Kolla" doesn't solve my problem.
VictorLee (37 rep)
Apr 19, 2023, 03:30 AM • Last activity: Apr 19, 2023, 05:33 AM
-1 votes
2 answers
2612 views
Permission denied (publickey) on ubuntu 16.04
$ ssh mykey.pem ubuntu@10.128.2.7 -v
OpenSSH_7.3p1, OpenSSL 1.0.2j 26 Sep 2016
debug1: Reading configuration data /c/Users/works/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Connecting to 10.128.2.7 [10.128.2.7] port 22.
debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file /c/Users/works/Documents/interface setup/ifx_key.pem type -1
debug1: key_load_public: No such file or directory
debug1: identity file /c/Users/works/Documents/interface setup/ifx_key.pem-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.3
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8
debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.8 pat OpenSSH_6.6.1* compat 0x04000000
debug1: Authenticating to 10.128.2.7:22 as 'ubuntu'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256@libssh.org
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:R+d2ELtCJyoeyHMfivCsGKk98GOIfxxsTEPAFmKkSOI
debug1: Host '10.128.2.7' is known and matches the ECDSA host key.
debug1: Found key in /c/Users/works/.ssh/known_hosts:1
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /c/Users/works/Documents/interface setup/ifx_key.pem
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
Permission denied (publickey).

I used to be able to ssh into this machine until yesterday. Is there a way to log in to it?
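One thing visible in the command itself: mykey.pem is not passed with -i, so ssh only tries the identity configured in the config file (ifx_key.pem), which fails to load. A sketch of pointing ssh at the intended key, plus the permission bits ssh requires on it (the `touch` merely creates a stand-in file for the demonstration; use your real key):

```shell
key=mykey.pem        # path from the question; stand-in file for the demo
touch "$key"
chmod 600 "$key"     # ssh refuses private keys readable by group/others
stat -c '%a' "$key"

# then connect with the key given explicitly:
#   ssh -i "$key" ubuntu@10.128.2.7 -v
```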
kosta (199 rep)
Feb 8, 2018, 07:41 AM • Last activity: Mar 21, 2023, 03:03 AM
2 votes
1 answer
140 views
docker logs err:"+ sudo -E kolla_set_configs sudo: unknown uid 42401: who are you?" in openstack container
A multinode (3-node) OpenStack cluster was deployed with kolla-ansible. Two nodes (the 2nd and 3rd) are working well; one node (the 1st) has some containers that are always **Restarting** with error logs, e.g. the kolla_toolbox container:
+ sudo -E kolla_set_configs
sudo: unknown uid 42401: who are you?
I checked the kolla_toolbox container's /etc/passwd file; it has the same md5sum as on the other two, normal nodes, and it contains the line ansible:x:42401:42401::/var/lib/ansible:/usr/sbin/nologin. The result of id 42401 and id ansible in all containers on the three nodes is:
uid=42401(ansible) gid=42401(ansible) groups=42401(ansible),42400(kolla)
and on the three hypervisor nodes it is:
:no such user
I ran docker image rm kolla_toolbox, pulled the image again and redeployed on the 1st node, but the issue still exists, while it works on the other two nodes. What is wrong with Docker or the containers on the 1st node? How can I fix it? *kolla_set_configs* is a Python file at /usr/local/bin/kolla_set_configs, found only inside the container, and I can't figure out which line of it produces the error log.
VictorLee (37 rep)
Feb 14, 2023, 01:42 AM • Last activity: Feb 17, 2023, 07:27 AM
-4 votes
2 answers
155 views
Linux as a router to forward
How do I make an OpenStack instance route traffic through another Linux machine that is not an OpenStack instance but is on the same LAN?

PC_A has 192.168.1.133/27. PC_B has 192.168.1.140/27. PC_A has a route to 10.26.14.16/25; PC_B has none. I want PC_B to reach 10.26.14.16/25 via PC_A.

Note: PC_B is an OpenStack instance. Its private IP is 192.168.118.10/27 and its public IP is 192.168.1.140/27.

FW ---------- PC_A -------------- PC_B
              192.168.1.133/27    192.168.1.140/27 (public IP of B)
                                  192.168.118.10/27 (private IP of B)
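A sketch of the plain-Linux part, under the assumption that PC_A acts as the router: PC_B needs a static route via PC_A, and PC_A must forward IPv4. Because PC_B sits behind a Neutron port and a floating IP, anti-spoofing rules (security groups, port security) must also permit the flow; that part depends on the deployment.

```
# On PC_B: send traffic for the remote subnet via PC_A
#   ip route add 10.26.14.16/25 via 192.168.1.133

# On PC_A: enable IPv4 forwarding (/etc/sysctl.d/99-forward.conf)
net.ipv4.ip_forward = 1
#   apply with: sysctl --system
```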
rinto (13 rep)
Jun 5, 2021, 03:38 AM • Last activity: Jan 21, 2023, 07:09 PM
0 votes
1 answer
300 views
How to resolve "Unable to resolve host" in OpenStack Yoga?
I'm trying to install OpenStack on CentOS Stream 9 by following the official OpenStack installation guide for Yoga, available at: https://docs.openstack.org/install-guide/

1. When I try to bootstrap Keystone I get the following error:

/etc/keystone/fernet-keys/ does not exist.

(screenshot: keystone error)

2. When I tried to create a domain using

domain create --description "An Example Domain" example

it failed. Upon pinging the controller I found that the machine could not resolve the name controller. Next, I added an entry to /etc/hosts that explicitly resolves controller to my machine's IP. (screenshot: controller error)

3. Pinging the controller then succeeded, but I was still not able to create a domain.

4. I tried creating a project using

project create --domain default --description "Service Project" service

This command failed with an internal server error. (screenshot: internal server error)
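Both the fernet error and the internal server error commonly trace back to skipped bootstrap steps rather than name resolution alone. Two hedged sketches following the install guide's conventions: the fernet key repositories must be initialised before keystone-manage bootstrap, and the controller name must resolve on every node (the IP is a placeholder for your controller's address):

```
# Initialise the fernet key repositories (from the Keystone install guide):
#   keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
#   keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

# /etc/hosts on every node:
10.0.0.11   controller
```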
Hinddeep S. Purohit (1 rep)
Jul 18, 2022, 12:41 AM • Last activity: Jan 21, 2023, 01:27 PM
0 votes
1 answer
257 views
cephadm set cluster_network
I have set up a new cluster on a single NIC using cephadm, but I have added an extra NIC for the replication cluster_network. This is what I did to configure it, but it didn't work.

$ ceph config set global cluster_network 192.168.1.0/24

View the config:

$ ceph config get mon cluster_network
192.168.1.0/24
$ ceph config get mon public_network
10.73.3.0/24

Validate:

$ ceph osd metadata 1 | grep addr
"back_addr": "[v2:10.73.3.191:6812/1317996473,v1:10.73.3.191:6813/1317996473]",
"front_addr": "[v2:10.73.3.191:6810/1317996473,v1:10.73.3.191:6811/1317996473]",
"hb_back_addr": "[v2:10.73.3.191:6816/1317996473,v1:10.73.3.191:6817/1317996473]",
"hb_front_addr": "[v2:10.73.3.191:6814/1317996473,v1:10.73.3.191:6815/1317996473]",

Restarted using the daemon:

$ ceph orch restart osd.1

Still no impact:

$ ceph osd metadata 1 | grep back_addr
"back_addr": "[v2:10.73.3.191:6812/1317996473,v1:10.73.3.191:6813/1317996473]",
"hb_back_addr": "[v2:10.73.3.191:6816/1317996473,v1:10.73.3.191:6817/1317996473]",
Satish (1672 rep)
Aug 31, 2022, 03:17 AM • Last activity: Jan 21, 2023, 01:18 PM
4 votes
2 answers
4116 views
qemu-img compress image from stdin / gunzip
Because of low disk space I would like to gunzip a zipped "glance image-download" and compress it with qemu-img into a qcow2-formatted file. I tried this:

gunzip -c file.gz | qemu-img convert -f raw /dev/stdin -O qcow2 file.qcow2

but it fails with:

qemu-img: Could not open '/dev/stdin': Could not refresh total sector count: Illegal seek

Any idea if this is at all possible?
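qemu-img needs a seekable input to determine a raw image's size, which a pipe cannot provide; hence the "Illegal seek". A sketch that decompresses to an intermediate file first (sparse, so it costs little real space when the image is mostly zeros); the dd/gzip lines only fabricate a small stand-in for file.gz so the example is self-contained:

```shell
# Skip quietly where qemu-img is not installed.
command -v qemu-img >/dev/null || exit 0

# Fabricate a small stand-in for the downloaded image (demo only).
dd if=/dev/zero of=file.raw bs=1M count=1 status=none
gzip -f file.raw                       # produces file.gz

# Decompress to a seekable intermediate file, then convert.
gunzip -c file.gz > file.raw
qemu-img convert -f raw -O qcow2 file.raw file.qcow2
rm file.raw
```

Adding -c to the convert command additionally compresses the qcow2 data clusters, which helps when disk space is the constraint.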
James Baker (161 rep)
Aug 20, 2019, 08:18 AM • Last activity: Sep 3, 2022, 11:03 PM
0 votes
1 answer
547 views
Installation stuck at installing packages for openstack.kolla.docker
I am a newbie trying to install all-in-one OpenStack using Kolla Ansible in a virtual environment, following the docs at https://docs.openstack.org/kolla-ansible/latest/user/quickstart.html . My base machine is CentOS 8 Stream, and I am facing an issue while running the command below. The detailed error message follows. I have added the docker-ce repo, but no luck. Any assistance is highly appreciated.

#kolla-ansible -i ./multinode bootstrap-servers

TASK [openstack.kolla.docker : Install packages] ************************************************************************************************
task path: /root/.ansible/collections/ansible_collections/openstack/kolla/roles/docker/tasks/install.yml:32
Using module file /root/venv/lib64/python3.6/site-packages/ansible/modules/dnf.py
Pipelining is enabled.
ESTABLISH LOCAL CONNECTION FOR USER: root
EXEC /bin/sh -c 'python && sleep 0'
The full traceback is:
File "/tmp/ansible_ansible.legacy.dnf_payload_0imz5i2d/ansible_ansible.legacy.dnf_payload.zip/ansible/modules/dnf.py", line 1180, in ensure
File "/usr/lib/python3.6/site-packages/dnf/base.py", line 901, in resolve
raise exc
fatal: [localhost]: FAILED!
=> { "changed": false, "failures": [], "invocation": { "module_args": { "allow_downgrade": false, "allowerasing": false, "autoremove": false, "bugfix": false, "conf_file": null, "disable_excludes": null, "disable_gpg_check": false, "disable_plugin": [], "disablerepo": [], "download_dir": null, "download_only": false, "enable_plugin": [], "enablerepo": [], "exclude": [], "install_repoquery": true, "install_weak_deps": true, "installroot": "/", "list": null, "lock_timeout": 30, "name": [ "docker-ce" ], "nobest": false, "releasever": null, "security": false, "skip_broken": false, "state": "present", "update_cache": true, "update_only": false, "validate_certs": true } }, "msg": "Depsolve Error occured: \n Problem: problem with installed package buildah-1:1.24.2-2.module_el8.7.0+1106+45480ee0.x86_64\n - package buildah-1:1.24.2-2.module_el8.7.0+1106+45480ee0.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed\n - package buildah-1.24.0-0.7.module_el8.6.0+944+d413f95e.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed\n - package buildah-1:1.23.1-2.module_el8.6.0+954+963caf36.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed\n - package buildah-1.22.3-2.module_el8.6.0+926+8bef8ae7.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed\n - package buildah-1.22.3-2.module_el8.5.0+911+f19012f9.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed\n - package buildah-1.22.3-1.module_el8.5.0+901+79ce9cba.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed\n - package buildah-1.22.0-2.module_el8.5.0+890+6b136101.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed\n - package buildah-1.22.0-2.module_el8.5.0+877+1c30e0c9.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed\n - package buildah-1.22.0-0.2.module_el8.5.0+874+6db8bee3.x86_64 requires runc >= 1.0.0-26, but none of the 
providers can be installed\n - package buildah-1.21.4-2.module_el8.5.0+870+f792de72.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed\n - package buildah-1.19.8-1.module_el8.5.0+733+9bb5dffa.x86_64 requires runc >= 1.0.0-26, but none of the providers can be installed\n - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.3-3.1.el8.x86_64 obsoletes runc 
provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package docker-ce-3:20.10.17-3.el8.x86_64 requires containerd.io >= 1.4.1, but none of the providers can be installed\n - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.3-3.2.el8.x86_64 obsoletes runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 
conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by 
runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 conflicts with runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 obsoletes runc provided by runc-1:1.0.3-3.module_el8.7.0+1106+45480ee0.x86_64\n - cannot install the best candidate for the job\n - package runc-1.0.0-56.rc5.dev.git2abd837.module_el8.4.0+521+9df8e6d3.x86_64 is filtered out by modular filtering\n - package runc-1.0.0-64.rc10.module_el8.4.0+522+66908d0c.x86_64 is filtered out by modular filtering\n - package runc-1.0.0-70.rc92.module_el8.5.0+736+58cc1a5a.x86_64 is filtered out by modular filtering\n - package runc-1.0.0-73.rc95.module_el8.6.0+1107+d59a301b.x86_64 is filtered out by modular filtering\n - package runc-1:1.0.3-1.module_el8.6.0+1108+b13568aa.x86_64 is filtered out by modular filtering\n - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by 
runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc 
provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 conflicts with 
runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.8-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.9-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package 
containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by 
runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 
obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.12-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.4.13-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package 
containerd.io-1.5.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.5.10-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 conflicts with runc provided by 
runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.5.11-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 conflicts with runc 
provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.6.4-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.0-70.rc92.module_el8.5.0+733+9bb5dffa.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-3.module_el8.5.0+870+f792de72.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+878+851f435b.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.1-5.module_el8.5.0+890+6b136101.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 conflicts with runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64\n - package containerd.io-1.6.6-3.1.el8.x86_64 obsoletes runc provided by runc-1.0.2-1.module_el8.6.0+926+8bef8ae7.x86_64", "rc": 1, "results": [] } localhost : ok=16 changed=0 
unreachable=0 failed=1 skipped=5 rescued=0 ignored=0

Command failed: ansible-playbook -e @/etc/kolla/globals.yml -e @/etc/kolla/passwords.yml -e CONFIG_DIR=/etc/kolla -e kolla_action=bootstrap-servers /root/venv/share/kolla-ansible/ansible/kolla-host.yml --verbose --verbose --verbose --inventory ./all-in-one
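The depsolve errors above all reduce to a single conflict: containerd.io (required by docker-ce) bundles and obsoletes runc, while the enabled container-tools module stream (pulled in by the installed buildah) insists on its own modular runc. On EL8-family hosts, a common way out is to remove the modular container tools and disable the module stream before retrying the bootstrap step. This is a sketch of that generic remediation, not a verified fix for this exact host, and it is destructive (it removes buildah/podman):

```shell
# Assumption: a dnf-based EL8 host where docker-ce from the Docker CE repo
# conflicts with the runc shipped by the container-tools module.

# 1. Inspect which stream is enabled and what depends on the modular runc.
dnf module list container-tools
dnf repoquery --installed --whatrequires runc

# 2. Remove the packages pinning the modular runc, then disable the module
#    so containerd.io's bundled runc can satisfy the dependency instead.
sudo dnf remove -y buildah podman
sudo dnf module disable -y container-tools

# 3. Retry the install that kolla-ansible's bootstrap-servers step attempted.
sudo dnf install -y docker-ce containerd.io
```

After the packages resolve cleanly, rerunning the failed `kolla-ansible bootstrap-servers` command should get past this task.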
xrkr (279 rep)
Jul 26, 2022, 06:27 AM • Last activity: Jul 26, 2022, 07:45 AM
1 vote
0 answers
128 views
OpenStack services status on Ubuntu
I installed OpenStack on Ubuntu Server 20.04 LTS. On CentOS 7.9 I usually install the `openstack-utils` package, which provides the `openstack-status` command so I can see the status of all services at once. How can I achieve the same thing on Ubuntu Server? Which package should I use?
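There is no `openstack-status` equivalent packaged in the Ubuntu archives; the usual substitute on Ubuntu/Debian is to query systemd directly for the OpenStack service units. A rough sketch follows; the service-name globs are assumptions and depend on which projects are actually deployed:

```shell
# Assumption: services installed from Ubuntu Cloud Archive packages, whose
# systemd units follow the <project>-<component> naming convention.
systemctl list-units --type=service --all --no-pager \
  'nova-*' 'neutron-*' 'glance-*' 'keystone*' 'cinder-*' 'heat-*'

# Or print a compact one-line status per matching unit:
systemctl list-unit-files 'nova-*' 'neutron-*' --no-legend \
  | awk '{print $1}' \
  | while read -r u; do
      printf '%-40s %s\n' "$u" "$(systemctl is-active "$u")"
    done
```

This only reports the local host's units; unlike `openstack-status` on CentOS, it does not summarize database or message-queue backends.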
Hoodad Tabibi (353 rep)
Jun 19, 2022, 08:26 AM • Last activity: Jun 21, 2022, 05:45 AM
Showing page 1 of 20 total questions