Error building Consul cluster
gbujerin
4 Posts
September 12, 2019, 11:41 am
Error building cluster, please check detail below.
Error List
Cluster Node peta1 failed to join the cluster or is not alive.
Cluster Node peta2 failed to join the cluster or is not alive.
Still the same error.
admin
2,918 Posts
September 12, 2019, 10:04 pm
Can you email the /opt/petasan/log/PetaSAN.log file on all 3 nodes + /opt/petasan/config/cluster_info.json to contact-us @ petasan.org?
Thanks!
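If it helps, here is a quick sketch for bundling the two files on a node (an assumption on my part: Python 3 is available on each node, and the paths are the ones above; the resulting tarball still has to be emailed manually):

import tarfile

# Bundle the log and the cluster config (paths from the request above)
# into a single compressed archive for emailing.
with tarfile.open("/tmp/petasan-debug.tar.gz", "w:gz") as tar:
    tar.add("/opt/petasan/log/PetaSAN.log", arcname="PetaSAN.log")
    tar.add("/opt/petasan/config/cluster_info.json", arcname="cluster_info.json")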
gbujerin
4 Posts
September 14, 2019, 6:00 am
Sent you the logs by mail.
admin
2,918 Posts
September 14, 2019, 12:04 pm
The logs show the failure occurred during the cluster build step, which happens during node 3 deployment. It failed to build the Consul server, which is the first service we use that relies on the backend network. This is most probably due to incorrect backend network settings:
"backend_1_base_ip": "134.137.0.0",
"backend_1_eth_name": "eth3",
"backend_1_mask": "255.255.0.0",
"backend_1_vlan_id": "",
"backend_2_base_ip": "134.137.0.0",
"backend_2_eth_name": "eth4",
"backend_2_mask": "255.255.0.0",
Here backend 1 and backend 2 both use the overlapping address range 134.137.0.0. The TCP/network layer can get confused: it may send a backend 1 packet sometimes on eth3 and other times on eth4, which causes packets not to be received. To fix:
"backend_1_base_ip": "134.137.0.0",
"backend_1_mask": "255.255.0.0",
"backend_2_base_ip": "134.138.0.0",
"backend_2_mask": "255.255.0.0",
and modify the backend 2 IP assigned to each node.
It is probably possible for you to change the config files and reboot, but it is probably easier and quicker to install from scratch.
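If you want to double-check the settings before rebuilding, here is a minimal sketch (assuming Python 3 on the node and the cluster_info.json path mentioned earlier in the thread) that uses the standard ipaddress module to flag overlapping backend subnets:

import ipaddress
import json

# Load the cluster configuration (path from earlier in this thread).
with open("/opt/petasan/config/cluster_info.json") as f:
    cfg = json.load(f)

# Build a network object for each backend from its base IP and mask.
net1 = ipaddress.ip_network((cfg["backend_1_base_ip"], cfg["backend_1_mask"]), strict=False)
net2 = ipaddress.ip_network((cfg["backend_2_base_ip"], cfg["backend_2_mask"]), strict=False)

# Overlapping subnets confuse routing: a backend 1 packet may leave on
# either interface, so Consul peers never hear from each other reliably.
if net1.overlaps(net2):
    print("ERROR: backend subnets overlap:", net1, "vs", net2)
else:
    print("OK: backend subnets are distinct:", net1, "and", net2)

With the original values, both backends resolve to 134.137.0.0/16 and the check reports an overlap; with backend 2 moved to 134.138.0.0, the two /16 networks are distinct.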
Last edited on September 14, 2019, 12:04 pm by admin · #24
gbujerin
4 Posts
September 14, 2019, 2:53 pm
Thank you. It's working.