
Hyper-V Questions

Hello, and congratulations on your work.

 

I'm looking to create highly available storage for my Hyper-V virtual machines. I'm not very good with Linux, which is why I have been avoiding Ceph storage. It seems you have created a very nice UI, and I'm keen to test your solution. I also have some questions below, if you can answer them:

1. Let's say I have 3 nodes and I create a LUN and a volume. Will this LUN be spread across all 3 nodes, so that I get the performance of all disks from all 3 nodes?

2. If the answer to question 1 is yes, and I then add another node to the cluster, will the volume data be spread to the new node to gain more IOPS and performance?

3. Can I grow a volume online?

4. Caching: is any caching used in the simplest scenario? Can I use an SSD disk for caching?

 

Thanks a lot

Thanks for your nice comments!

The solution works well with Hyper-V. The PetaSAN disks you create can be used by Hyper-V directly as Cluster Shared Volumes (CSV)/Storage Spaces, or you can use them to build a Scale-Out File Server (SOFS) that serves shares to Hyper-V. We cover these use cases in our documentation, so be sure to check them out. We support both Windows Server 2012 R2 and Windows Server 2016.

1. Let's say I have 3 nodes and I create a LUN and a volume. Will this LUN be spread across all 3 nodes, so that I get the performance of all disks from all 3 nodes?

Yes. Your single LUN will use all available cluster resources (disks/nodes). To be economical, you add as many disks per node as you can until you start hitting a CPU bottleneck; at that point you need to add more nodes.

The more disks you add, the faster the LUN; it is like a giant networked RAID. To be more exact, you will see this per-LUN scaling when you have concurrent I/O on the LUN (such as Hyper-V serving VMs, or SQL Server with concurrent transactions), because each concurrent I/O accesses a different sector of the LUN, which maps to a different physical disk/node. You will also see this scaling with a single-threaded application that reads/writes large blocks (e.g. a multimedia application); in that case the large blocks are striped concurrently across different disks/nodes. The exception is a single thread issuing small-block I/O: there you will not see parallel performance, but you would not build a Ceph cluster for a single-threaded app that writes small blocks anyway 🙂
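
To make the striping concrete, here is a minimal sketch of how RBD-style striping maps a byte offset in a LUN to an object number. The 4 MiB object size is the Ceph RBD default; the hash-based placement and the 9-OSD cluster are just stand-ins for CRUSH, which does the real object-to-disk mapping.

```python
# Sketch: why concurrent I/O at different offsets lands on different disks.
# Assumes the Ceph RBD default object size of 4 MiB; the modulo placement
# below is a toy stand-in for CRUSH, which really maps objects to OSDs.

OBJECT_SIZE = 4 * 1024 * 1024   # RBD default: images are striped into 4 MiB objects
NUM_OSDS = 9                    # e.g. 3 nodes x 3 disks (hypothetical cluster)

def object_for_offset(offset: int) -> int:
    """Return the object number holding this byte offset of the LUN."""
    return offset // OBJECT_SIZE

def osd_for_object(obj: int) -> int:
    """Toy placement: CRUSH pseudo-randomly spreads objects over all OSDs."""
    return hash(("rbd_data.lun1", obj)) % NUM_OSDS

# Four concurrent I/Os at different offsets hit different objects,
# and therefore (very likely) different physical disks/nodes:
for offset in (0, 256 * 1024 * 1024, 512 * 1024 * 1024, 768 * 1024 * 1024):
    obj = object_for_offset(offset)
    print(f"offset {offset >> 20:>4} MiB -> object {obj:>3} -> OSD {osd_for_object(obj)}")
```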

2. If the answer to question 1 is yes, and I then add another node to the cluster, will the volume data be spread to the new node to gain more IOPS and performance?

Yes! Also note that Ceph treats a dynamic environment (disks being added, removed, or failing) as the normal case rather than an exception. If you have 20 disks and add 1, it will move a small chunk (roughly 5%) from each existing disk to redistribute the load across the now 21-disk cluster. It does this in the background while minimizing the effect on existing client I/O, and it does it very well.
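
The ~5% figure falls straight out of the rebalance arithmetic; a quick sketch using the numbers from the example above:

```python
# Rebalance arithmetic for the 20 -> 21 disk example above.
# After adding one disk, each disk should hold 1/21 of the data,
# so each of the 20 existing disks sheds the difference to the newcomer.

disks_before = 20
disks_after = disks_before + 1

share_before = 1 / disks_before   # 5.00% of the data per disk
share_after = 1 / disks_after     # 4.76% of the data per disk

moved_per_disk = share_before - share_after
print(f"each existing disk sheds {moved_per_disk / share_before:.1%} of its data")
# -> each existing disk sheds 4.8% of its data (the "roughly 5%" chunk)
```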

3. Can I grow a volume online?

Yes, you can edit the disk/LUN through the UI. It needs to be stopped first; most client OSes need to stop a LUN to re-read its new size. Note also that you can create LUNs/disks larger than the physical storage actually available; you only need to add physical disks as your LUNs fill up.
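
Since a PetaSAN disk is backed by a Ceph RBD image, the resize the UI performs boils down to something like the sketch below, written against the official `rados`/`rbd` Python bindings that ship with Ceph. The pool name and image name are hypothetical.

```python
# Minimal sketch of resizing a Ceph RBD image (presumably what a LUN resize
# does under the hood), using Ceph's official Python bindings.
# The pool name and image name below are made up for illustration.

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')                 # hypothetical pool name
    try:
        with rbd.Image(ioctx, 'image-00001') as img:  # hypothetical image name
            old_size = img.size()
            img.resize(2 * 1024 ** 4)                 # grow the image to 2 TiB
            print(f"resized from {old_size} to {img.size()} bytes")
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```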

4. Caching: is any caching used in the simplest scenario? Can I use an SSD disk for caching?

The current version of PetaSAN does not support caching. You should use either all SSDs or all spinning disks. We had planned to support cache tiering in the upcoming version 1.3, but there are indications in the Ceph community that cache tiering will be replaced with something else.

Hello again, and thanks for all your answers. I have some more questions:

1. Can I change settings (fine-tuning) on the Ceph that runs under your implementation, or is it closed?

2. About online resizing, you said I have to stop the disk. But if I stop the disk, this isn't an online procedure. So if I have a 1 TB Hyper-V CSV and I want to make it 2 TB, I must take the whole CSV offline to resize it? On FreeNAS, HP LeftHand, and HP 3PAR, which I have used, this whole procedure is online.

3. Do I have the choice to create thick- or thin-provisioned volumes?

4. Is any professional support planned?

 

Thanks a lot, and again, congratulations on your efforts.

Hi

1) You can use SSH/WinSCP to log into any node and change files or run commands. You will probably be interested in /etc/ceph/ceph.conf and/or in using the admin sockets to send commands to the Ceph services. Of course, if you do tune things better, we would hope to hear about it 🙂. We actually have future plans to support different configuration templates, depending on hardware and use cases, that the admin can choose from or define himself.
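
As a sketch of the admin-socket route, here is how you might query and change a live OSD setting with the standard `ceph daemon` CLI wrapped in Python. The daemon id (osd.0) and the tunable shown (osd_max_backfills) are common examples, not PetaSAN-specific values.

```python
# Sketch: talking to a running Ceph OSD via its admin socket using the
# standard `ceph daemon` CLI. The daemon id and the tunable used here
# (osd_max_backfills) are illustrative examples, not PetaSAN defaults.

import subprocess

def ceph_daemon(daemon: str, *args: str) -> str:
    """Run `ceph daemon <name> <cmd...>` on the node hosting that daemon."""
    out = subprocess.run(
        ["ceph", "daemon", daemon, *args],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

# Read a live setting from osd.0, then lower it to throttle recovery traffic.
print(ceph_daemon("osd.0", "config", "get", "osd_max_backfills"))
print(ceph_daemon("osd.0", "config", "set", "osd_max_backfills", "1"))
```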

2) Got you. Currently you need to stop the LUN, but we could support this in a future release; technically it is doable. The current stop procedure was put in place to ensure data integrity on the client OS side.

3) It is thin provisioned. You can also overcommit storage: create LUNs whose size exceeds the physical disk storage, and add physical disks as the LUN(s) fill up. So you can comfortably create large LUN sizes from the beginning. I am not sure from your question whether you actually need thick provisioning, but it is not supported.
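
The thin provisioning falls out of how RBD works: creating an image only records metadata, and data objects are allocated as clients write. A minimal sketch with the official `rbd` bindings (pool and image names are hypothetical):

```python
# Sketch: RBD images are thin by nature. Creating one allocates no data
# objects, so its logical size can exceed the cluster's physical capacity.
# Pool and image names below are hypothetical.

import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')
    try:
        # A 100 TiB image on a much smaller cluster is fine, since only
        # the blocks that clients actually write consume physical space.
        rbd.RBD().create(ioctx, 'big-lun', 100 * 1024 ** 4)
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```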

4) Yes, we do have plans to offer support options, but PetaSAN will always be available for free under the AGPLv3.

Thanks a lot for your answers. I have already started creating a demo environment in VMware. Is this a supported scenario? I have created 3 virtual machines for the PoC.

 

Thanks

For a demo environment, running under VMware is fine.