Clean 1.5 install success, but 0 OSDs and CEPH health error post install
Kadrel
5 Posts
February 2, 2018, 7:22 am
Hello,
I demo'd PetaSAN 1.4 about 4 months ago, liked what I saw, but wasn't ready to do much with it.
Now I'm back and would like to play with 1.5, but I'm unable to get my deployment operational.
This is a demo environment, details are:
3x Dell PowerEdge nodes running Hyper-V 2016. 3x PetaSAN nodes running in VMs (one per host). Each PetaSAN node has 8x SAS HDDs in pass-through, 2x virtual disks on SSD storage for journaling, and a 40GB SSD for the OS. 2x 10Gb and 1x 1Gb NICs per node.
BEHAVIOR:
Installation goes clean and normal, as I remember in 1.4.
Cluster setup also goes clean and normal, all 3 nodes appear to join and give me "Green" success messages.
BUT, when I log into the Admin console, it shows 0 OSDs, and I see a warning "CEPH Health: Error", which (if clicked) also reports "No OSDs".
Probably just a newbie error.
Logs from the first node are:
Thank you!
admin
2,930 Posts
February 2, 2018, 9:56 am
Hi,
It is in error because there is no storage available. In v1.5, with the added support for journal disks, you need to add disks yourself as either OSDs or journals; we no longer grab all the disks automatically as in previous versions. You can assign disk usage during deployment or later while the cluster is up. So go to your Node List -> Physical Disk List, add your journal disks, then add your OSDs.
Once you have at least one OSD on each of your 3 nodes, the cluster will be healthy and able to serve client I/O.
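For reference, you can also verify the result from a node shell; a quick check with the standard Ceph CLI (nothing PetaSAN-specific) would be:
ceph status     # overall health, including the osdmap line showing how many OSDs are up/in
ceph osd tree   # OSDs grouped by host, to confirm each of the 3 nodes contributes at least one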
Last edited on February 2, 2018, 11:21 am by admin · #2
Kadrel
5 Posts
February 2, 2018, 1:33 pm
Thanks for the response.
I think I'm running up against a poor understanding of journal disks.
My SSD-based journal disks are NOT pass-through drives; they are thick-provisioned VHDs on SSD storage.
When I just now went back to re-configure drives (which I DID do during cluster creation, but which apparently failed silently), I can add OSDs without issue. However, adding journal disks (on the SSD-backed VHDs) fails.
Is this expected/as designed? I see that PetaSAN does not identify the journal disks as SSD (I'm sure because they are VHDs, and it can't detect the underlying hardware).
Any way around this? Do I have to use dedicated drives for journaling?
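My guess (purely an assumption on my part) is that the SSD/HDD classification comes from the kernel's rotational flag, which a Hyper-V virtual disk typically reports as rotational regardless of the backing storage:
cat /sys/block/sdb/queue/rotational   # 1 = rotational/HDD, 0 = SSD; sdb is a placeholder for the VHD device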
Thanks!
admin
2,930 Posts
February 2, 2018, 7:31 pm
Hi,
We do not do anything special such as constraining journals to be SSDs; we just show the info we detect and let the user decide if the journal is suitable. I do not know why the VHD disks do not work. Some things to try:
- Since this is a test environment, test if you can create a pure OSD on the VHD disks; if that also fails, it probably indicates an issue with how the VHD disks were set up.
- Make sure you have enough space on the journal disk; each OSD connected to the journal needs a 20G partition on the journal.
- Try wiping out all partitions on the VHD disk using the command:
ceph-disk zap /dev/sdX
where X is the device letter of the journal disk as seen by PetaSAN
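As a quick sanity check on capacity (sdX is the same placeholder as above), you can compare what the kernel sees against what the journal needs:
lsblk -b /dev/sdX   # size in bytes; must fit one 20G partition per attached OSD, plus partition-table overhead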
Last edited on February 2, 2018, 7:36 pm by admin · #4
Kadrel
5 Posts
February 3, 2018, 8:39 pmQuote from Kadrel on February 3, 2018, 8:39 pmThanks for the help - the VHD's were 20GB RAW, but apparently after formatting were too small. I increased them to 25GB and then they setup properly.
Thanks!
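For anyone hitting the same wall, the arithmetic (assuming the 20G journal partition is in binary units, i.e. 20 GiB, which is an assumption) works out as:
20 GB VHD = 20 x 10^9 bytes ≈ 18.6 GiB  ->  too small for a single 20 GiB partition
25 GB VHD = 25 x 10^9 bytes ≈ 23.3 GiB  ->  fits one 20 GiB partition plus GPT overhead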