
Organizing disk

I would like to know whether you can set up different pools for different OSDs and different targets.

For example, having a slow SATA pool with large but slow storage devices, and an SSD pool with smaller but faster devices?

Is it possible?

In Ceph this is done through CRUSH maps, where you can create buckets and rulesets for the OSDs and the pools, so you can assign a pool to a bucket containing specific OSDs (such as all SSD OSDs).
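For readers who want to try this in plain Ceph, a minimal sketch of the bucket/ruleset approach using device classes. The pool, rule, and OSD names below are made-up examples, and the commands assume a Ceph release that supports device classes (Luminous or later); adapt them to your own cluster layout:

```shell
# Tag OSDs with a device class (recent Ceph releases usually auto-detect hdd/ssd)
ceph osd crush set-device-class ssd osd.0 osd.1
ceph osd crush set-device-class hdd osd.2 osd.3

# Create one replicated CRUSH rule per device class,
# with "host" as the failure domain
ceph osd crush rule create-replicated ssd-rule default host ssd
ceph osd crush rule create-replicated hdd-rule default host hdd

# Create pools bound to those rules: a fast SSD pool and a slow SATA pool
ceph osd pool create fast-pool 128 128 replicated ssd-rule
ceph osd pool create slow-pool 128 128 replicated hdd-rule
```

With this in place, data written to `fast-pool` only lands on SSD OSDs and data in `slow-pool` only on HDD OSDs.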

At this time it is not on our list; if many people ask for it, we will do it.

Initially we were going to support 2 pools for cache tiering, but we will instead be implementing caching at the device level, as this seems to be the trend going forward.

We will be offering CRUSH map editing in a graphical way to identify disks within buckets such as hosts/racks/rooms, with the purpose of giving Ceph more intelligence in how to place the replicas, for example not placing all replicas within the same rack. But this is still in the context of one pool.

Dear all

What is the PetaSAN default setting for replication? Are replicas placed per disk or per PetaSAN node?

BR SK

Currently you specify the replication count (2 or 3) during cluster creation; you can also change it in a running cluster from the cluster settings page. For each 4M sector of your iSCSI disks, Ceph will place it on 3 (for 3 replicas) different OSDs located on different nodes; it will not place 2 replicas of the same sector on one node. In case of a node/OSD failure, Ceph will take care of making sure all sectors have the correct number of replicas.
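The behavior described above (never two replicas of a sector on the same node) comes from the CRUSH rule's failure domain, and the settings can be inspected from the CLI. A sketch, where `rbd` stands in for whatever pool name your cluster actually uses:

```shell
# The default replicated rule uses "host" as the failure domain, which is
# why two replicas of the same object never land on the same node;
# look for "type": "host" in the chooseleaf step of the output
ceph osd crush rule dump replicated_rule

# Read the current replica count and the minimum replicas needed to serve I/O
ceph osd pool get rbd size
ceph osd pool get rbd min_size
```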

In version 2.1, we will support graphical definition of data centers/rooms/racks etc. and the ability to define yourself how the replicas are placed; you could also define more than 3 replicas.

Hello Admin,

I would like to create my disks but without replication of the data, is that possible? If yes, how can I do it?

It is possible. Mind you, it means there is no redundancy: if a disk fails, your data is lost, so do you really mean this? It is done by setting the pool size to 1. I am not sure the PetaSAN UI will allow this, but if you want, after creating the pool use the CLI to set the pool size to 1.

Yes, if a disk fails the data is lost. I mean this.

Can I have the CLI command to set it?

ceph osd pool set MyPool min_size 1
ceph osd pool set MyPool size 1
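If you want to confirm the change took effect, the same settings can be read back (a sketch using the same example pool name as above):

```shell
# Both should now report a value of 1
ceph osd pool get MyPool size
ceph osd pool get MyPool min_size
```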

It's done!

After that, I just have to create a disk and there is no more configuration to do?

The 2 commands change the pool replication to 1; you do not need to do any more configuration.