Difficulty recovering one of three servers after power outage
admin
2,930 Posts
October 28, 2018, 7:02 pm
Did you add them via the UI as described in my previous replies?
southcoast
50 Posts
October 28, 2018, 7:22 pm
No UI actions resulted in an OSD being created, so I instead used this CLI command:
ceph osd create --cluster=Cameron-SAN-01
admin
2,930 Posts
October 28, 2018, 7:40 pm
No UI actions resulted in an OSD being created
Can you give more detail on what actions you did and what the result was, so I can help better.
ceph osd create --cluster=Cameron-SAN-01
This command is just one step in OSD creation; other commands are required before you end up with a functioning OSD.
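For reference, the remaining manual steps would look roughly like the sketch below (assuming Luminous-era filestore defaults, a spare data partition shown here as the placeholder /dev/sdX1, and the custom cluster name in the data directory path); this is exactly what the UI automates for you:
ID=0                                                       # the id returned by 'ceph osd create'
mkdir /var/lib/ceph/osd/Cameron-SAN-01-$ID
mkfs.xfs /dev/sdX1                                         # format the data device
mount /dev/sdX1 /var/lib/ceph/osd/Cameron-SAN-01-$ID       # mount it as the OSD data dir
ceph-osd -i $ID --mkfs --mkkey --cluster Cameron-SAN-01    # initialize the OSD data store and its key
ceph auth add osd.$ID osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/Cameron-SAN-01-$ID/keyring --cluster Cameron-SAN-01
ceph osd crush add osd.$ID 1.0 host=Peta-SAN-01 --cluster Cameron-SAN-01   # place it under its host in the CRUSH map
systemctl start ceph-osd@$ID
Again, this is only a rough outline to show why the single create command is not enough; I do not recommend doing it by hand.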
Last edited on October 28, 2018, 7:41 pm by admin · #33
southcoast
50 Posts
October 28, 2018, 8:11 pm
In fact, I entered ceph osd create --cluster=Cameron-SAN-01 three times, resulting in 3 OSDs:
root@Peta-SAN-01:~# ceph osd tree --cluster=Cameron-SAN-01
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-2 0 datacenter VMware-class-01
-1 0 root default
0 0 osd.0 down 0 1.00000
1 0 osd.1 down 0 1.00000
2 0 osd.2 down 0 1.00000
root@Peta-SAN-01:~#
I found this create command while reviewing the Ceph site and its details on OSD operations, although I cannot identify the syntax of the start command.
southcoast
50 Posts
October 28, 2018, 8:16 pm
I was looking in this location for the relevant OSD creation commands:
http://docs.ceph.com/docs/master/rados/configuration/common/#osds
admin
2,930 Posts
October 28, 2018, 8:23 pm
Can you answer my question, please?
southcoast
50 Posts
October 28, 2018, 8:59 pm
I did not add the OSDs per your previous replies, as they did not specify how to add them. Although your questions were helpful regarding the status of the OSDs, there were no instructions on creating the needed OSDs, so it was necessary for me to look up the Ceph manual entries. I am sure that when the PetaSAN manual set is published, my situation will be much easier. The question remains: if the OSDs are created but down, how do I start them? The PetaSAN 2.1 application has not created them through two installations (the initial 3-node install and the repeated install on node 2).
Please advise.
Thank you.
admin
2,930 Posts
October 28, 2018, 9:41 pm
Thanks for the clarifications. 🙂
The current Admin Guide does cover adding OSDs/journals.
This is what I wrote in a previous reply:
From the Node List -> Physical Disk List page you can add disks as OSDs and journals. If all disks are the same, add them all as OSDs; if you have a few faster devices, make the faster devices journals and add the slower ones as OSDs. This is explained in the admin guide.
It is actually very easy to do. To describe it another way: go to the Node List, and for the target node click Physical Disk List. You will see all physical disks of that node; select a non-system disk and press the '+' "Add OSD" action button.
The CLI command you used is one of several steps needed to create a working OSD. You can continue the CLI steps if that is really what you want, but I would recommend you try the UI first. Depending on the state of the half-created manual OSDs, you may be able to delete them from the same UI page; if it shows an X action button, delete them there.
You can also delete them via the CLI:
ceph osd rm osd.0 --cluster Cameron-SAN-01
ceph osd rm osd.1 --cluster Cameron-SAN-01
ceph osd rm osd.2 --cluster Cameron-SAN-01
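If anything else got registered for those ids (a CRUSH entry or an auth key), something along these lines should clean it up too; on Luminous and later a single purge per id covers all of it:
ceph osd crush remove osd.0 --cluster Cameron-SAN-01   # drop any CRUSH entry, if present (repeat for osd.1 and osd.2)
ceph auth del osd.0 --cluster Cameron-SAN-01           # drop the auth key, if one was created
ceph osd purge 0 --yes-i-really-mean-it --cluster Cameron-SAN-01   # Luminous+: crush remove, auth del and osd rm in one step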
Since this cluster has no storage yet, if I were you I would consider starting clean: re-install all 3 nodes and create at least 1 OSD per node via the UI, either during node deployment or later via the Physical Disk List page described above. Do not use CLI commands, do not create custom pools or custom rules, and do not alter the CRUSH map. Everything will work, and it will take you about 10 minutes to have a clean system running. Once you are more familiar with the application you can try customizing these if you wish.
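As a rough check once the nodes are re-deployed, the following from any node's shell should show all OSDs up and in:
ceph status --cluster Cameron-SAN-01     # overall health plus the osd up/in counts
ceph osd tree --cluster Cameron-SAN-01   # each osd should appear under its host with status 'up'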
hope this helps
Last edited on October 28, 2018, 10:03 pm by admin · #38
southcoast
50 Posts
October 28, 2018, 10:29 pm
Hello,
Thanks for the detailed and prompt response. I would rather use the UI as recommended than hack a configuration together from the CLI. I think I see the issue: when I access the physical disk list (Manage Node > Nodes List > Physical Disk List), nothing is offered under the Action cell in the display. There are columns labeled Name, Size, Type, SSD, Vendor, Model, Serial, Used, Status, Journal, and Action.
The cells up to and including "Used" are populated; the Status, Journal, and Action cells are empty.
Moreover, on the same Physical Disk List panel, there is no add button like the one I see on the Pools or iSCSI Disk List pages. Tomorrow, when I am onsite, I will try a clean v2.1 install on the other 2 nodes, each preceded by an initialization of its disk pair to ensure the RAID disk pair is cleared of any legacy software.
Thank you.
admin
2,930 Posts
October 28, 2018, 10:46 pm
Can you share a screenshot?
The add button is not at the top of the page but under the Actions column on the right; there should be a button for each free disk, and you should have more than 1 disk per node.
prior to each an initialization of the disk pair in each server to insure the RAID disk pair is
Just to check: you need several disks per node, one as a system disk and at least one more to be added as storage for OSDs. Some users create a RAID 1 for the system disk, but it is not a must since the system can handle node failures. The most important thing is to have disks other than your system disk for use as OSDs, and they should not be grouped together in a RAID volume.
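If in doubt about what each node actually exposes to PetaSAN, a quick look from the node's shell helps, roughly:
lsblk -d -o NAME,SIZE,ROTA,TYPE,MODEL   # one line per disk the OS sees
# if the controller presents only a single RAID volume, there is no free disk left for the UI to offer as an OSD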
Lastly, if you go with the re-install approach, re-install all 3 nodes to create a new cluster from scratch rather than doing a replace-node on 2 nodes, since the latter keeps the existing cluster as is.
Last edited on October 28, 2018, 10:57 pm by admin · #40