Organizing disks
Tarmabal
4 Posts
June 29, 2017, 3:29 pm

I would like to know if you can set different pools for different OSDs and different targets. For example, could you have a slow SATA pool with large but slow storage devices, and an SSD pool with less storage but faster devices? Is this possible?

In Ceph this is done through CRUSH maps, where you can create buckets and rulesets for the OSDs and the pools, so you can place a pool in a bucket containing specific OSDs (such as all-SSD OSDs).
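For reference, in stock Ceph (Luminous and later) this is usually done with device classes rather than a hand-edited CRUSH map. A minimal sketch, assuming the OSDs already report hdd/ssd device classes, and using hypothetical rule and pool names:

# Create one CRUSH rule per device class, with host as the failure domain
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd crush rule create-replicated hdd-rule default host hdd

# Create a replicated pool on each rule
ceph osd pool create ssd-pool 128 128 replicated ssd-rule
ceph osd pool create hdd-pool 128 128 replicated hdd-rule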
admin
2,921 Posts
June 29, 2017, 6:41 pm

At this time it is not on our list; if many people ask for it, we will do it.

Initially we were going to support two pools for cache tiering, but we will instead be implementing caching at the device level, as this seems to be the trend going forward.

We will be offering CRUSH map editing in a graphical way to place disks within buckets such as hosts/racks/rooms, with the purpose of giving Ceph more intelligence in how to place the replicas, for example not placing all replicas within the same rack. But this is still in the context of one pool.
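For readers who want to do this by hand in the meantime, the equivalent stock Ceph CLI steps look roughly like this (the rack and host names are hypothetical):

# Define a rack bucket and move a host into it
ceph osd crush add-bucket rack1 rack
ceph osd crush move rack1 root=default
ceph osd crush move node1 rack=rack1

# Rule that spreads replicas across racks instead of hosts
ceph osd crush rule create-replicated rack-rule default rack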
SK
4 Posts
May 4, 2018, 3:50 pm

Dear all,

What is the PetaSAN default setting for replication? Are the replicas placed by disk or by PetaSAN node?

BR SK
admin
2,921 Posts
May 4, 2018, 6:54 pm

Currently you specify the replication count (2 or 3) during cluster creation; you can also change it in a running cluster from the cluster settings page. Ceph places each 4 MB sector of your iSCSI disks on 3 (for 3 replicas) different OSDs located on different nodes; it will not place 2 replicas of the same sector on 1 node. In case of a node/OSD failure, Ceph takes care of re-replicating so that all sectors have the correct number of replicas.

In version 2.1, we will support graphical definition of data centers/rooms/racks etc. and the ability to define yourself how the replicas are placed; you could also define more than 3 replicas.
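For reference, the same settings can also be inspected and changed with the standard Ceph pool commands; a sketch, with MyPool standing in for the actual pool name:

# Read the current replica counts
ceph osd pool get MyPool size
ceph osd pool get MyPool min_size

# Change them, e.g. 3 replicas with writes still allowed at 2
ceph osd pool set MyPool size 3
ceph osd pool set MyPool min_size 2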
moh
16 Posts
September 8, 2020, 6:14 am

Hello Admin,

I would like to create my disks without data replication. Is that possible? If yes, how can I do it?
admin
2,921 Posts
September 8, 2020, 4:58 pm

It is possible, but mind you it means there is no redundancy: if a disk fails, your data is lost, so do you really mean this? It is done by setting the pool size to 1. I am not sure the PetaSAN UI will allow this, but if you want it, create the pool and then use a CLI command to set the pool size to 1.
moh
16 Posts
September 8, 2020, 9:41 pm

Yes, if a disk fails the data is lost. I mean this.

Can I have the CLI commands to set it?
admin
2,921 Posts
September 8, 2020, 10:58 pm

ceph osd pool set MyPool min_size 1
ceph osd pool set MyPool size 1
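To confirm the change took effect, the settings can be read back with the standard get commands (MyPool again being a placeholder for your pool's name):

ceph osd pool get MyPool size
ceph osd pool ls detail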
moh
16 Posts
September 9, 2020, 10:06 am

It's done!

After that, do I just create a disk, with no more configuration to do?
admin
2,921 Posts
September 9, 2020, 10:50 am

The two commands changed the pool replication to 1; you do not need to do any more configuration.