Disk prepare failure on node
trexman
60 Posts
November 20, 2018, 12:49 pm
Hello,
we did a new installation of PetaSAN 2.2 (3 nodes, 10 SATA OSDs per node, 1 NVMe per node as journal).
After the cluster was built I got the following error:
Node deployment completed successfully. Cluster has been built and is now ready for use.
Error List
Disk sdk prepare failure on node HBPS03.
Disk sdh prepare failure on node HBPS03.
Disk sdi prepare failure on node HBPS03.
Disk sde prepare failure on node HBPS01.
Disk sdb prepare failure on node HBPS01.
Disk sdc prepare failure on node HBPS01.
Disk sdj prepare failure on node HBPS01.
Disk sdk prepare failure on node HBPS01.
Disk sdh prepare failure on node HBPS01.
Disk sdi prepare failure on node HBPS01.
Disk sde prepare failure on node HBPS02.
Disk sdb prepare failure on node HBPS02.
Disk sdc prepare failure on node HBPS02.
Disk sdj prepare failure on node HBPS02.
Disk sdk prepare failure on node HBPS02.
Disk sdh prepare failure on node HBPS02.
Disk sdi prepare failure on node HBPS02.
If I try to add the "failed" OSDs manually I get the following error in my browser:
Method Not Allowed
The method is not allowed for the requested URL.
fdisk tells me that there is one partition on all of the non-working OSDs - maybe left over from the installation:
Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 5D229396-FAC8-44A4-9DCD-43F6079CFA6A
Device Start End Sectors Size Type
/dev/sdb1 2048 206847 204800 100M Ceph disk in creation
But even after deleting the partition and the MBR I still get this error, and rebooting the node after a "disk wipe" doesn't help either.
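For reference, the wipe I attempted was roughly the following (/dev/sdb is an example device; this is my own sequence, not an official PetaSAN procedure):
# Destroy the GPT and the protective MBR on the disk (example device /dev/sdb).
sgdisk --zap-all /dev/sdb
# Clear any remaining filesystem or RAID signatures.
wipefs --all /dev/sdb
# Ask the kernel to re-read the now-empty partition table.
partprobe /dev/sdb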
How can I solve this problem?
Thanks for your help.
trexman
60 Posts
November 20, 2018, 12:59 pm
I just found out that the NVMe is full. Apart from that, I can still add an OSD without a journal.
Why is every journal partition now 60 GB? It was 20 GB in the past, if I remember right.
Is there a way to change the size of the journal partition?
admin
2,930 Posts
November 20, 2018, 1:15 pm
This is based on recent discussions on the Ceph mailing list. Unfortunately there was no detailed analysis of what the optimum size should be. For RBD workloads it is now recommended to be at least 30 GB, hence the change. If you really cannot use 60 GB, then use 40 GB or at least 30 GB.
For an existing cluster, this parameter can be changed on all nodes in the /etc/ceph/xxx.conf file:
bluestore_block_db_size = 64424509440
If you are building a new cluster, you can change it in the Cluster Tuning page and it will apply to all nodes.
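As a worked example of the arithmetic behind that value (the [global] section placement is an assumption; adjust to wherever your conf keeps OSD settings):
# 60 GiB in bytes: 60 x 1024^3 = 64424509440
# 40 GiB would be 42949672960, 30 GiB would be 32212254720
[global]
bluestore_block_db_size = 64424509440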
Last edited on November 20, 2018, 1:16 pm by admin · #3
trexman
60 Posts
November 20, 2018, 2:25 pm
Thank you for the information.
But my primary problem still exists.
- I changed the parameter bluestore_block_db_size in all ceph confs.
- took all OSDs on one node offline (systemctl stop ceph-osd...)
- deleted all OSDs of this node
- deleted all partitions of the NVMe
- tried to add the OSDs (roughly the commands sketched below)
This still results in the error:
Method Not Allowed
The method is not allowed for the requested URL.
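For anyone following along, the steps above translate to roughly these commands (the OSD IDs and the NVMe device name are examples from my setup):
# Stop the OSD daemons on this node (IDs are examples).
systemctl stop ceph-osd@0 ceph-osd@1 ceph-osd@2
# Wipe the partition table of the NVMe journal device (example device name).
sgdisk --zap-all /dev/nvme0n1
partprobe /dev/nvme0n1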
trexman
60 Posts
November 20, 2018, 3:08 pm
OK, I found what was causing the "Method Not Allowed" error:
it was an old Firefox browser.
Sorry for that.
trexman
60 Posts
November 20, 2018, 3:33 pm
Quote from admin on November 20, 2018, 1:15 pm
This is based on recent discussions on the Ceph mailing list. Unfortunately there was no detailed analysis of what the optimum size should be. For RBD workloads it is now recommended to be at least 30 GB, hence the change. If you really cannot use 60 GB, then use 40 GB or at least 30 GB.
I also did a bit of research on this and could not find a satisfactory answer.
If I change bluestore_block_db_size to 20 GB, for example, what are the disadvantages of this change?
Is it only a loss of performance, or could it result in some kind of deadlock?
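For what it's worth, I assume spillover onto the slow device could be checked roughly like this (osd.0 is an example ID; the counter names are from the BlueFS perf counters as I understand them):
# Run on the node hosting the OSD; osd.0 is an example ID.
ceph daemon osd.0 perf dump | grep -E '"db_used_bytes"|"slow_used_bytes"'
# A nonzero slow_used_bytes would mean the RocksDB metadata has
# spilled over from the NVMe partition onto the slow data disk.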
admin
2,930 Posts