Adding multiple OSDs
jnickel
12 Posts
January 3, 2020, 5:49 pm
Hi Guys,
I have purchased 10 Gb gear and am moving into the production phase of my PetaSAN project.
It seems that when I am adding OSDs to PetaSAN through the WebUI, I can only do one at a time. I thought I would be able to add an OSD on one node, then go to another node and add one there too, but that doesn't work. What is the reason that we can't add multiple OSDs at a time? Could you change it to allow the selection of multiple disks and then have the system perform the actions one after the other if they need to happen sequentially?
Jim
admin
2,930 Posts
January 3, 2020, 6:59 pm
When you deploy a new node, you can select multiple disks.
For an existing running node, it is one OSD at a time; you can specify options like an external journal and cache, and it also takes less than a minute. If you have existing nodes with a large number of disks to add, you can always write a script to add the OSDs using ceph commands.
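A minimal sketch of such a script, assuming BlueStore OSDs on raw devices and that ceph-volume is available on the node; the device names are placeholders, and the external DB/WAL device is optional:

#!/bin/bash
# Create one OSD per listed device, one after the other (device names are hypothetical).
for dev in /dev/sdd /dev/sde /dev/sdf; do
    ceph-volume lvm create --bluestore --data "$dev"
    # For an external DB/WAL device, append e.g.: --block.db /dev/nvme0n1p1
done

Note that OSDs created directly with ceph-volume bypass the PetaSAN UI workflow, so check your version's documentation before running something like this on a production node.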
jnickel
12 Posts
January 3, 2020, 7:45 pm
I like the script idea, but I have no idea of the CLI commands. I guess I need to start reading the Ceph documentation, but then that kind of defeats the purpose of using PetaSAN, which is so I don't have to know all the complexities of Ceph.
I guess my hardware is older too, because adding a disk takes 2 min 30 sec from the time I press the button on the WebUI until it says UP.
I am also adding the drives slowly, as I had lots of problems when I added them all at once at the beginning. I recognize that many of my problems are my own doing: not enough RAM, and using used gear (it is enterprise class, just not new).
The cool thing is that even with my gear being older and imperfect, PetaSAN/Ceph still works and lets me run without problems.
Jim
Shiori
86 Posts
January 28, 2020, 5:31 pm
New or not, latest gen or three gens back, we have the same issue. It is RAM, OSD size, and SAS controller dependent. The process is similar to formatting a new HDD: it takes time to set up the partition and write the block layout, and you cannot get around that. But you can help it: run your SAS cards in HBA mode (IT mode), use SAS controllers that are at least an LSI SAS2008-B2 or newer, and give the OS enough RAM to do the job plus a bit extra. A 1 TB OSD will use 2 GB of RAM during setup, and the OS needs 4 GB for itself plus a bit for the services you are running on that node. I usually give 4 GB to the OS, 2 GB per service, and 1 GB per OSD installed. That leaves enough RAM to bring up about two-thirds of the OSDs at once; for speed and function, only do half at a time.
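To make that budget concrete with a quick worked example (the node layout here is hypothetical): a node running three services with twelve 1 TB OSDs would want roughly 4 GB + 3 × 2 GB + 12 × 1 GB = 22 GB of RAM, and under the two-thirds guideline you would bring up no more than about eight of those OSDs at once.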
Since we like to know which drive is which OSD and label them with stickers, we have chosen to bring them online one at a time regardless. It takes a bit more time and a little human involvement, but knowing which OSD is in which slot is worth the cost, since it speeds up repair times by an order of magnitude. Get a 100-drive cluster going and OSD 61 fails: you can get to the node, but which drive is it? Bringing them online one at a time and then labeling them (plus recording serial number, node, slot, and OSD number in our inventory system) saves a lot of trouble finding which drive it actually is.
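If the stickers are ever missing, Ceph itself can usually map an OSD back to its physical drive. A minimal sketch, using the OSD 61 example above (ceph osd metadata works from any admin node; ceph-volume lvm list and smartctl must run on the node hosting the OSD, and /dev/sdX is a placeholder):

ceph osd metadata 61 | grep -E '"hostname"|"devices"'   # which host and block device back OSD 61
ceph-volume lvm list                                    # on that host: maps each local OSD id to its LV and device
smartctl -i /dev/sdX                                    # read the drive's serial number to match it to a slot/sticker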