Corosync error "No interfaces defined" in a cluster member
1 vote, 2 answers, 9754 views
I am getting an error when starting corosync on a cluster member:
May 16 00:53:32 neftis corosync: [MAIN ] Corosync Cluster Engine ('2.3.4'): started and ready to provide service.
May 16 00:53:32 neftis corosync: [MAIN ] Corosync built-in features: dbus systemd xmlconf snmp pie relro bindnow
May 16 00:53:32 neftis corosync: [MAIN ] parse error in config: No interfaces defined
May 16 00:53:32 neftis corosync: [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1278.
May 16 00:53:32 neftis corosync: Starting Corosync Cluster Engine (corosync): [FAILED]
May 16 00:53:32 neftis systemd: corosync.service: control process exited, code=exited status=1
May 16 00:53:32 neftis systemd: Failed to start Corosync Cluster Engine.
May 16 00:53:32 neftis systemd: Unit corosync.service entered failed state.
May 16 00:53:32 neftis systemd: corosync.service failed.
May 16 00:54:06 neftis systemd: Cannot add dependency job for unit firewalld.service, ignoring: Unit firewalld.service is masked.
May 16 00:54:06 neftis systemd: Starting Corosync Cluster Engine...
May 16 00:54:06 neftis corosync: [MAIN ] Corosync Cluster Engine ('2.3.4'): started and ready to provide service.
May 16 00:54:06 neftis corosync: [MAIN ] Corosync built-in features: dbus systemd xmlconf snmp pie relro bindnow
May 16 00:54:06 neftis corosync: [MAIN ] parse error in config: No interfaces defined
May 16 00:54:06 neftis corosync: [MAIN ] Corosync Cluster Engine exiting with status 8 at main.c:1278.
May 16 00:54:06 neftis corosync: Starting Corosync Cluster Engine (corosync): [FAILED]
May 16 00:54:06 neftis systemd: corosync.service: control process exited, code=exited status=1
May 16 00:54:06 neftis systemd: Failed to start Corosync Cluster Engine.
May 16 00:54:06 neftis systemd: Unit corosync.service entered failed state.
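(For completeness, the failing unit can also be inspected directly with the standard systemd tools; the commands below are generic and only use the unit name shown in the log:)

# Show the unit state and the most recent start attempt
systemctl status corosync.service

# Full journal for the corosync unit
journalctl -u corosync.service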
Here is my config from the three nodes, but it is failing only on netfis, which I added recently.
totem {
    version: 2
    secauth: off
    cluster_name: cluster-osiris
    transport: udpu
}

nodelist {
    node {
        ring0_addr: isis.localdoamin
        nodeid: 1
    }
    node {
        ring0_addr: horus.localdoamin
        nodeid: 2
    }
    node {
        ring0_addr: netfis.localdoamin
        nodeid: 3
    }
}

quorum {
    provider: corosync_votequorum
}

logging {
    to_syslog: yes
}
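For reference, this is the kind of check I can run on the failing node to confirm that every ring0_addr resolves to an address the node actually has (the hostnames are the ones from the config above; this is just a sketch, I have not pasted its output):

# Does each nodelist hostname resolve on this node?
for h in isis.localdoamin horus.localdoamin netfis.localdoamin; do
    getent hosts "$h"
done

# Which IPv4 addresses are actually configured here?
ip -o -4 addr show

My understanding is that with transport: udpu, corosync has to match the local node's ring0_addr against one of its own interfaces, so I want to rule out a name-resolution problem.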
I am running a Pacemaker/Corosync/pcs cluster on CentOS 7.1, 64-bit.
I searched on the internet, but it is not clear what is going on.
Could you help me?
Asked by mijhael3000 (85 rep)
May 16, 2016, 04:28 AM
Last activity: Jun 18, 2019, 08:42 AM