Reinstall PetaSAN and use OSDs from a previous install?
Soderhall
7 Posts
November 28, 2017, 10:26 am
Hi.
I had a working virtual PetaSAN cluster, ran into some problems, and tried to reinstall the management nodes and rejoin them to the cluster. Now I only have one of the management nodes left (and it is not fully working), but all three OSDs from the previous install are still there. Is there some way I can reinstall PetaSAN and reuse the OSDs and/or the config files from the third management node?
The management node that is still alive does not start Ceph, and I can't join the other two to it. I get errors like this:
Failed to start Ceph disk activation: /dev/sdb2.
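To dig into that error, a minimal diagnostic sketch (assuming the stock ceph-disk systemd units shipped with PetaSAN's Ubuntu base at the time; the device name is taken from the error above):

# Ask systemd why the activation unit failed
systemctl status 'ceph-disk@dev-sdb2.service'
journalctl -u 'ceph-disk@dev-sdb2.service' --no-pager

# See how ceph-disk classifies the disks, then retry activation verbosely;
# the verbose output usually names the real cause (e.g. a keyring or fsid mismatch)
ceph-disk list
ceph-disk -v activate /dev/sdb2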
admin
2,930 Posts
November 28, 2017, 12:42 pm
The management nodes are special: they contain the brains of the cluster. When one fails, it should be replaced as soon as possible with the "Replace Management Node" option rather than "Join". I believe re-joining two of the nodes made the cluster go down. Rebuilding the cluster will create new cluster IDs/security keys that will not match the ones on the OSDs. If this was a test cluster, I suggest you rebuild and start over; if you care about the existing data, it will involve some manual effort.
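To see that mismatch concretely: each OSD records the fsid of the cluster it was created under, and the monitors of a rebuilt cluster will carry a different one. A quick check (a sketch; /dev/sdb1 as the OSD data partition and the manual mount point are assumptions, since ceph-disk normally auto-mounts it under /var/lib/ceph/osd/):

# fsid of the rebuilt cluster, as the monitors know it
ceph fsid

# fsid stamped on the OSD data partition at creation time
mount /dev/sdb1 /mnt
cat /mnt/ceph_fsid

# If the two values differ, the monitors reject the OSD; the OSD's keyring
# (/mnt/keyring) will likewise be absent from the new cluster's auth database.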
Soderhall
7 Posts
November 29, 2017, 10:45 am
Sorry!
A bit of a typo on my part. Of course I chose "Replace Management Node", but since I reinstalled the wrong node first, I only get "Cannot perform replace, ceph monitor status is not healthy.".
The data is not so important, but it would be good if I could get it back.
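The "monitor status is not healthy" condition can be checked from the shell on the surviving node, which shows whether any monitor quorum is left to replace against (a sketch; the timeout just keeps the commands from hanging if no monitor answers):

# Overall cluster status and the current monitor quorum, if any
ceph --connect-timeout=5 -s
ceph --connect-timeout=5 quorum_status --format json-pretty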
Soderhall
7 Posts
December 20, 2017, 4:07 pm
It has been a while, but the status is still the same, and I wonder if it is possible to get the cluster working again with only one management node and three OSDs?
admin
2,930 Posts
December 20, 2017, 6:21 pm
If you just need to re-install PetaSAN and discard the old data, you will need at least 3 nodes. If you want to preserve the data and your old cluster is gone, it is much more manual work.
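For the record, the manual route for this situation (monitors lost, OSDs intact) is the upstream Ceph procedure for rebuilding the monitor store from the OSDs, described in the Ceph troubleshooting documentation. A rough sketch of its core steps, assuming cephx is enabled, all OSDs live on one host, and NODENAME is a placeholder for the monitor's name:

# 1. Stop the OSDs, then scrape the cluster map history out of each one
#    into a fresh monitor store
systemctl stop ceph-osd.target
for osd in /var/lib/ceph/osd/ceph-*; do
    ceph-objectstore-tool --data-path $osd \
        --op update-mon-db --mon-store-path /root/mon-store
done

# 2. Give the mon. and admin keys full caps, then rebuild the store's auth database
ceph-authtool /etc/ceph/ceph.client.admin.keyring -n mon. --cap mon 'allow *'
ceph-authtool /etc/ceph/ceph.client.admin.keyring -n client.admin \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *'
ceph-monstore-tool /root/mon-store rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring

# 3. Back up the old store and install the rebuilt one on the monitor node
mv /var/lib/ceph/mon/ceph-NODENAME/store.db /var/lib/ceph/mon/ceph-NODENAME/store.db.bak
mv /root/mon-store/store.db /var/lib/ceph/mon/ceph-NODENAME/store.db
chown -R ceph:ceph /var/lib/ceph/mon/ceph-NODENAME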