Actual vs Committed space
erazmus
40 Posts
August 30, 2017, 3:02 pm
Hello,
Thanks to admin's help, I now have a PetaSAN cluster up and running on some repurposed HP DL360 G5 servers. I'm liking the ease of installation - so much better than hand-rolling Ceph via the command line.
I have a few questions. I can't seem to find a way to show how much space I've committed - for example, if I allocate a 100 GB iSCSI volume, the used space doesn't go up until I actually store data on the volume. Is there a way to see how much committed space I've allocated, so that I know I haven't over-committed storage?
My second question is regarding open source/public participation. Does this project have a public repository so that we can get early access to features/bug fixes before the next release?
Thanks for a promising product!
admin
2,930 Posts
August 31, 2017, 9:57 am
Hi there,
Ceph is a cloud-based technology, so thin provisioning is advantageous by design. Currently, if you would like thick provisioning, it is the client's responsibility to do this - for example, in VMware you can choose to create thick eager-zeroed disks, so VMware will write zeros across all the space before using the disk. But maybe we can support an option in PetaSAN to write zeros to the entire space when creating a disk.
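Conceptually, such an option would just stream zeros over the whole block device until it is full, so the thin image becomes fully allocated on the backend. A minimal sketch of the idea (the device path, chunk size, and helper name are illustrative only, not PetaSAN code):

import os

def zero_fill(device_path, chunk_mib=4):
    # Hypothetical helper: write zeros over an entire block device so a
    # thin-provisioned image becomes fully allocated on the backend.
    chunk = bytes(chunk_mib * 1024 * 1024)  # one reusable buffer of zeros
    with open(device_path, "wb", buffering=0) as dev:
        try:
            while True:
                dev.write(chunk)  # raw writes, loops until the device is full
        except OSError:
            pass  # ENOSPC: reached the end of the device
        os.fsync(dev.fileno())  # make sure everything hits the backend

# zero_fill("/dev/mapper/example-lun")  # illustrative device path only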
We do plan to move to GitHub, hopefully soon; we are currently using SVN (Subversion) and have been too busy to switch, but again, it should be soon. Although this will be better in many other ways, I do not believe it will do much to get incremental features out sooner - our release cycle is already quite short.
erazmus
40 Posts
August 31, 2017, 5:38 pm
Quote from admin on August 31, 2017, 9:57 am
Ceph is a cloud-based technology, so thin provisioning is advantageous by design. Currently, if you would like thick provisioning, it is the client's responsibility to do this - for example, in VMware you can choose to create thick eager-zeroed disks, so VMware will write zeros across all the space before using the disk. But maybe we can support an option in PetaSAN to write zeros to the entire space when creating a disk.
I'm not looking for thick provisioning, but I would like a graphical representation of the total size of the disks I've allocated (and not just the actual used space, as it shows now) - just a suggestion. Also looking forward to GitHub once you get it going.
therm
121 Posts
September 1, 2017, 5:47 am
Hi erazmus,
do you really mean allocated (on PetaSAN's disks), or provisioned by PetaSAN toward the hypervisor?
For example, if 5 x 10 TB iSCSI LUNs are mapped to ESX, this means that (with a replication factor of 3) you have to reserve 150 TB in PetaSAN for the day every block has been written by ESX. It would indeed be helpful to have an overview of this.
For the allocation on the disks: this is shown in the Dashboard, but as a gross value (in the example above it would be the 150 TB, including replication).
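In numbers, the kind of overview I mean - a minimal sketch, where the LUN sizes, replication factor, and raw capacity are just example values:

lun_sizes_tb = [10, 10, 10, 10, 10]   # 5 x 10 TB iSCSI LUNs mapped to ESX
replication_factor = 3                # Ceph pool size (number of replicas)
raw_capacity_tb = 120                 # example total raw OSD capacity

committed_tb = sum(lun_sizes_tb)                      # 50 TB promised to ESX
committed_raw_tb = committed_tb * replication_factor  # 150 TB raw if fully written

print("Committed to clients:        %d TB" % committed_tb)
print("Raw needed if fully written: %d TB" % committed_raw_tb)
print("Oversubscribed:              %s" % (committed_raw_tb > raw_capacity_tb))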
Regards,
Dennis
erazmus
40 Posts
September 1, 2017, 5:37 pm
Quote from therm on September 1, 2017, 5:47 am
For example, if 5 x 10 TB iSCSI LUNs are mapped to ESX, this means that (with a replication factor of 3) you have to reserve 150 TB in PetaSAN for the day every block has been written by ESX. It would indeed be helpful to have an overview of this.
Yes - this is what I mean. I would like to know whether I am in jeopardy of being oversubscribed if all the drives fill up.
admin
2,930 Posts
September 2, 2017, 8:48 pm
I will add this to our to-do list.
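In the meantime, something along these lines against the Ceph Python bindings should give a rough picture from the command line of a management node. This is only a sketch: the pool name and the replica count are assumptions here, not values read from the cluster.

import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    stats = cluster.get_cluster_stats()  # raw cluster totals, in KB
    ioctx = cluster.open_ioctx("rbd")    # assumed pool name
    try:
        committed = 0
        for name in rbd.RBD().list(ioctx):
            image = rbd.Image(ioctx, name)
            try:
                committed += image.size()  # provisioned bytes, not used bytes
            finally:
                image.close()
    finally:
        ioctx.close()
finally:
    cluster.shutdown()

replicas = 3  # assumed replication factor of the pool
print("Committed (client view): %.1f TB" % (committed / 1e12))
print("Committed (raw, x%d):     %.1f TB" % (replicas, committed * replicas / 1e12))
print("Raw cluster capacity:    %.1f TB" % (stats["kb"] * 1024 / 1e12))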