Replace management node when 2 of 3 nodes are down
admin
2,930 Posts
February 20, 2023, 9:46 pm
You can check which nodes are up in the consul cluster with:
consul members
You can see the fencing config value with:
consul kv get --recurse PetaSAN/Config/Maintenance
Maybe easier than switching fencing off is to delete any iSCSI disk entries left stored in consul when the node crashed:
consul kv delete --recurse PetaSAN/Disks
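A minimal sketch of how that cleanup might look in practice; the PetaSAN key prefixes come from the commands above, and inspecting the keys before deleting them is just a suggested precaution:
# from a working management node, check that the consul cluster has quorum
consul members
# list whatever iSCSI disk entries are still stored under the PetaSAN/Disks prefix
consul kv get --recurse PetaSAN/Disks
# once you have confirmed the entries are stale, remove them
consul kv delete --recurse PetaSAN/Disks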
atselitan
21 Posts
February 21, 2023, 1:15 pm
Thanks!
The problem was that consul contained information about disks that did not actually exist. I removed the non-existent disks from consul, and the nodes stopped shutting down.
atselitan
21 Posts
February 22, 2023, 7:00 am
One more question:
During the node replacement procedure, the master indicated that the disks on the replaced node would return to the cluster and function, but that did not happen. The OSDs are in the down state and the directory /var/lib/ceph/osd is empty.
Is there a way to return the OSDs to the cluster without redeploying them?
Update:
The OSDs started working after a while! Everything is fine!
Last edited on February 22, 2023, 1:35 pm by atselitan · #13
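For reference, if the OSDs had not come back on their own, a rough sketch of how one might check and reactivate them, assuming the OSDs were deployed with ceph-volume on LVM (the exact layout on a PetaSAN node may differ):
# check which OSDs the cluster currently reports as down
ceph osd tree
# list the OSD volumes ceph-volume knows about on this node
ceph-volume lvm list
# recreate the mounts under /var/lib/ceph/osd and start the OSD services
ceph-volume lvm activate --all
# confirm an individual OSD service is running (replace <id> with the OSD number)
systemctl status ceph-osd@<id>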