Different Weighting of Nodes/OSDs?
fx882
17 Posts
September 18, 2017, 7:39 am
Hi,
is there an automatic or manual weighting of OSDs or Nodes according to the different Speeds and available Capacities of different Nodes/OSDs?
Regards,
Tobias
admin
2,930 Posts
September 18, 2017, 10:22 am
Weighting nodes by capacity is built into Ceph: the Ceph placement algorithm, CRUSH, assigns a weight to each disk based on its capacity and a weight to each node based on its total disk capacity. It does this to achieve uniform utilization ratios across nodes and disks. Having said this, we recommend that you avoid unbalanced weights: keep all your disks the same capacity and type, and give every node the same total storage capacity, so the performance load is balanced evenly.
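For illustration, here is how those weights can be inspected and, if needed, overridden with the standard Ceph CLI (the OSD id and weight value below are just placeholders):

# show the CRUSH hierarchy with the weight assigned to each node and OSD
ceph osd tree
# manually override the CRUSH weight of a single OSD
# (by convention the weight roughly equals the disk size in TiB)
ceph osd crush reweight osd.3 1.8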
In versions 1.5 and 2.0 we will be supporting SSD write-ahead journaling (plus storing RocksDB on SSD), where one SSD is used to speed up as many as 4 spinning disks. In the future we will also support SSD caching of entire disks using the bcache read/write cache; this setup still requires more testing/qualification by the Ceph community.
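For reference, on a BlueStore setup of that era this placement is chosen when the OSD is prepared; a minimal sketch with hypothetical device names (/dev/sdb as the spinning data disk, /dev/nvme0n1 as the shared SSD):

# prepare a BlueStore OSD with its data on a spinner and its
# RocksDB metadata + write-ahead log on a faster SSD
ceph-disk prepare --bluestore --block.db /dev/nvme0n1 --block.wal /dev/nvme0n1 /dev/sdb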
There are no weights by speed in Ceph. However, Ceph does allow complete customization of data placement (referred to as CRUSH map editing). It is possible to define that certain data gets stored on specific disks, such as keeping hot data on fast SSDs only and backup data on slower spinning disks. For large installations it is possible to define different datacenters with rooms/racks and configure exactly how your replicas get placed. We plan to allow editing the CRUSH configuration in a graphical way, but that is further down the road. Note that all of this can already be done with existing PetaSAN by using CLI commands.
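As a rough sketch of that CLI workflow (the bucket, rule, and pool names here are invented for illustration), the CRUSH map is exported, edited as text, and re-injected:

# export and decompile the current CRUSH map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# inside crushmap.txt, a rule like the following would pin data to a
# subtree containing only SSD-backed OSDs:
#   rule ssd_only {
#       ruleset 1
#       type replicated
#       min_size 1
#       max_size 10
#       step take ssd-root
#       step chooseleaf firstn 0 type host
#       step emit
#   }
# recompile, inject, and point a pool at the new rule
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
ceph osd pool set mypool crush_ruleset 1   # "crush_rule" on Luminous and later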
Last edited on September 18, 2017, 10:25 am by admin · #2
fx882
17 Posts
September 18, 2017, 10:38 am
Thanks again for this extensive introduction for a newbie like me. I'll have a deeper look into the Ceph docs then.