Join 3rd Node to PetaSAN Cluster failed
mathiasfo
1 Post
February 27, 2023, 8:26 am
Hello everyone,
I have two Dell PE R630 servers and one Dell PE R620 server, all with their controllers in HBA or IT mode. I've already joined one R620 and one R630 to the new cluster, but the last R630 could not be joined. It stays in the "processing ..." status for about 24 hours, and after that time the following appears in "/opt/petasan/log/PetaSAN.log":
root@xx-xxxx-xx:/opt/petasan/log# tail PetaSAN.log
25/02/2023 11:21:14 ERROR Cannot create monitor on remote node xx.xx.xx.101
25/02/2023 11:21:14 ERROR Could not build ceph monitors.
25/02/2023 11:21:14 ERROR ['core_cluster_deploy_monitor_create_err%xx.xx.xx.101']
26/02/2023 10:12:51 ERROR Failed to add osd_scrub_auto_repair = False
NoneType: None
27/02/2023 05:14:17 ERROR Cluster is not completed, PetaSAN will check node join status.
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/PetaSAN/backend/cluster/deploy.py", line 84, in get_node_status
raise Exception("Cluster is not completed, PetaSAN will check node join status.")
Exception: Cluster is not completed, PetaSAN will check node join status.
I've already reinstalled the node a few times, but I get the same error every time.
Any idea what the problem is here? Do you need any further information from me?
Many thanks in advance!
Regards
Mathias
Last edited on February 27, 2023, 8:28 am by mathiasfo · #1
admin
2,918 Posts
February 27, 2023, 11:01 am
Failing to build the monitor on the remote node is most likely caused by network issues. Check your cabling, switch ports, and configuration, and make sure no IP subnets overlap. As a first attempt, try to make things as simple as possible: no bonds, no jumbo MTU frames.
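For example, a few manual checks along those lines could be run from a root shell on the deploying node. This is only a rough sketch, not an official PetaSAN procedure: xx.xx.xx.101 is the placeholder address from the log above, the payload sizes assume MTUs of 1500 and 9000, ports 3300/6789 are the default Ceph monitor ports, and <existing-mon-ip> stands for the backend IP of a node that already joined. Substitute your actual addresses and settings.

# Show each interface's IP and prefix length to spot overlapping subnets
ip -br addr

# Basic reachability to the node that fails to join
ping -c 4 xx.xx.xx.101

# Path check with "don't fragment" set: 1472-byte payload + 28 bytes of headers = 1500-byte MTU
ping -c 4 -M do -s 1472 xx.xx.xx.101

# If jumbo frames are configured (MTU 9000), this larger probe should also succeed end to end
ping -c 4 -M do -s 8972 xx.xx.xx.101

# From the failing node, confirm the monitor ports on an existing monitor node are reachable
nc -zv <existing-mon-ip> 3300
nc -zv <existing-mon-ip> 6789

If the plain ping works but the 8972-byte probe fails, the MTU is most likely mismatched somewhere along the path, which matches the advice above to drop jumbo frames while first getting the cluster built.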