PetaSAN + vSphere 6.5
KennyMacCormik
2 Posts
March 21, 2018, 12:03 pm
Greetings,
We are currently looking for storage options for our private cloud deployment. I'm interested in whether PetaSAN offers any of the following:
- VAAI support
- Storage QoS
admin
2,930 Posts
March 21, 2018, 1:45 pm
Hi,
We do support the VAAI operations Atomic Test & Set (ATS), Write Same/Zero, XCOPY, and UNMAP. ATS is quite important in VMware.
We do not support QoS yet.
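If you want to verify that these primitives are actually in effect on a datastore, the per-device VAAI status can be queried on the ESXi host. Below is a minimal sketch (Python as shipped with ESXi, wrapping esxcli; the device identifier is a placeholder to replace with your PetaSAN iSCSI device):

# Query per-device VAAI primitive status on an ESXi host.
# Run in the ESXi shell; the device ID below is a placeholder.
import subprocess

DEVICE = "naa.xxxxxxxxxxxxxxxx"  # placeholder: your PetaSAN iSCSI device ID

out = subprocess.check_output(
    ["esxcli", "storage", "core", "device", "vaai", "status", "get", "-d", DEVICE],
    universal_newlines=True,
)
# Expected fields: ATS Status, Clone Status (XCOPY), Zero Status (Write Same),
# Delete Status (UNMAP).
for line in out.splitlines():
    if "Status" in line:
        print(line.strip())

The same information is visible in the vSphere client as the datastore's "Hardware Acceleration" status.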
KennyMacCormik
2 Posts
March 21, 2018, 2:14 pm
Thank you for your reply.
Do you support creating LUNs that are backed differently (RAID 10 for lun1, RAID 5 for lun2, different numbers of data copies per LUN, or anything similar)? Is there any way we can implement tiered storage, even in the simplest form (e.g. lun1 backed by SSD and lun2 backed by HDD)? And can we reserve a fixed amount of cache per LUN?
admin
2,930 Posts
March 21, 2018, 2:38 pm
RAID is not recommended with Ceph. Ceph is faster when it has control over all the disks itself, and it handles redundancy on its own.
Version 2.1 will let you create different pools: pools can be backed by different disk types (HDD/SSD/NVMe), and you can also specify how many data replicas each pool keeps.
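Under the hood this maps to standard Ceph (Luminous and later) device classes and CRUSH rules; PetaSAN 2.1 is expected to expose it through its own UI, but a rough sketch of the mechanism, with illustrative pool/rule names and PG counts only, would look like this:

# Sketch of per-device-class pools with different replica counts.
# Names and PG numbers are illustrative, not PetaSAN defaults.
import subprocess

def ceph(*args):
    subprocess.check_call(["ceph"] + list(args))

# One CRUSH rule per device class (hdd / ssd / nvme)
ceph("osd", "crush", "rule", "create-replicated", "ssd_rule", "default", "host", "ssd")
ceph("osd", "crush", "rule", "create-replicated", "hdd_rule", "default", "host", "hdd")

# A pool per rule, each with its own number of data replicas
ceph("osd", "pool", "create", "fast_pool", "128", "128", "replicated", "ssd_rule")
ceph("osd", "pool", "set", "fast_pool", "size", "3")
ceph("osd", "pool", "create", "capacity_pool", "256", "256", "replicated", "hdd_rule")
ceph("osd", "pool", "set", "capacity_pool", "size", "2")

This also addresses the per-LUN question above: each iSCSI disk would be placed on whichever pool matches its tier.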
jeenode
27 Posts
March 23, 2018, 5:54 pm
Any ETA on version 2.1?