Too many PGs per OSD
stevea
2 Posts
January 23, 2018, 1:24 pm
Hello: Newb here. I set up a test v1.5 cluster in the lab to get my toes wet with PetaSAN, and after the initial setup, using the Quick Start as a guide, I was greeted with this Ceph health error:
too many PGs per OSD (512 > max 500)
I think I noticed this after creating a test iSCSI Target disk of 150GB. It is the only iSCSI disk.
There are 3 nodes, VMs on Hyper-V, all with SSDs for the OS:
PS1 - Monitor Only - vhdx sits on dedicated SSD, no other disks attached
PS2 - Monitor, Local Storage, & iSCSI - System SSD + 3 new 4TB Seagate Enterprise HDDs
PS3 - Monitor, Local Storage, & iSCSI - System SSD + 3 new 4TB Seagate Enterprise HDDs
PS1 runs on a .vhdx disk
PS2 & PS3 both have all 4 Physical Disks passed through directly to them.
On the setup screens, I was fairly unsure what to pick as far as size of cluster (it's open-ended at this point), and the grade of the hardware. I think I picked small (up to 15 disks?), and Medium Grade hardware.
Not sure what to do next. Started doing some research on PGs on Ceph, and it looks complicated. Any direction would be appreciated.
Thanks.
admin
2,930 Posts
January 23, 2018, 2:00 pm
Probably you picked a higher disk range than you actually have.
How many OSDs are up? Any down? You have 3 x 4TB in each of your 2 storage nodes: do you have 6 OSDs up?
What is the output of:
ceph osd pool get rbd pg_num --cluster CLUSTER_NAME
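For reference, a few standard Ceph commands can help confirm the OSD count and the per-OSD PG load. This is a general Ceph sketch, not anything PetaSAN-specific; CLUSTER_NAME is the same placeholder as in the command above:
# how many OSDs exist and how many are up/in
ceph osd stat --cluster CLUSTER_NAME
# overall cluster health, including any PG warnings
ceph status --cluster CLUSTER_NAME
# per-OSD utilization and PG counts
ceph osd df --cluster CLUSTER_NAME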
stevea
2 Posts
January 23, 2018, 5:43 pm
Thank you for the quick reply!
OK, my error (as I figured it would be). I had only added the first two 4TB drives on each of nodes PS2 & PS3 to get started, and had 4 OSDs up corresponding to them. I just now went ahead and added the final two HDDs, and after a short time there were 6 OSDs up; all the errors cleared a few minutes later. There is joy!
BTW: output of ceph osd pool get rbd pg_num --cluster CLUSTER_NAME is:
pg_num: 1024
Thank you!
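For anyone who finds this thread later, the numbers seem to line up as follows, assuming the rbd pool keeps 2 replicas (an assumption, but it matches the warning exactly): each OSD holds roughly pg_num x replicas / number of OSDs placement groups. With 4 OSDs that is 1024 x 2 / 4 = 512, just over the 500 limit in the warning; with 6 OSDs it drops to 1024 x 2 / 6 ≈ 341, which is why the warning cleared.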
admin
2,930 Posts
January 24, 2018, 7:11 am
Glad you fixed it. Note that it does appear you chose a cluster size of 15-50 disks on the tuning page; it would have been better to select 3-15. It will work OK, but it may put extra load on your system, especially during disk failures/recovery. Try to add a couple more disks if you can, so you have at least 9 in total.
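To put that suggestion in numbers: with 9 OSDs and the same pg_num of 1024 (still assuming 2 replicas), each OSD would carry roughly 1024 x 2 / 9 ≈ 228 PGs, which leaves more headroom under the warning threshold and spreads recovery work across more disks if one fails.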