
Number of networks


Hello guys,

I have a question regarding the subnets. I was watching the video and other documentation, and it recommends 4 subnets and a minimum of 2 NICs.

1 - Why do you need 4 subnets? Can I get away with 2 NICs and three subnets: a. Management, b. iSCSI, c. Backend?

2 - I have only two 10Gb NICs, and I would like to bond them together and run all three subnets over the bond, e.g.

bond0 - bond.

bond0:0 - mgmt

bond0:1 - iscsi

bond0:2 - backend.

 

Thanks


1 - Why do you need 4 subnets? Can I get away with 2 NICs and three subnets: a. Management, b. iSCSI, c. Backend?

You can have multiple subnets per NIC. Whether you map 3 or 5 subnets to a NIC (or NIC bond), the concept is the same, and you end up with the same traffic load in both cases. Having more subnets makes sense if you have more NICs, so you can separate the traffic load. If you only have 1 or 2 NICs, there is no advantage, since all subnets will be mapped to them.

iSCSI deployments favor having 2 iSCSI subnets for use of MPIO (multipath I/O) rather than a single bonded link; Windows and VMware initiators typically work this way.

For Ceph, the backend 2 network (in Ceph terminology, the private network) is used for data replication, which can be highly loaded during recovery. Backend 1 carries client I/O (the Ceph public network). So if you can separate them to spread the traffic load, it is better to do so; if your NIC is capable of handling all the traffic at once, then just map all subnets to it.
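As a concrete sketch of the recommended 2-NIC layout (interface names and addresses below are illustrative assumptions, not PetaSAN defaults):

```shell
# NIC 1 carries management plus the first iSCSI and backend subnets;
# NIC 2 carries the second iSCSI and backend subnets, so MPIO and
# Ceph replication traffic each get a path on both links.
ip addr add 192.168.10.11/24 dev eth0   # management
ip addr add 192.168.20.11/24 dev eth0   # iscsi-1 (MPIO path A)
ip addr add 192.168.40.11/24 dev eth0   # backend-1 (Ceph public)
ip addr add 192.168.30.11/24 dev eth1   # iscsi-2 (MPIO path B)
ip addr add 192.168.50.11/24 dev eth1   # backend-2 (Ceph private)
```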

2 - I have only two 10Gb NICs, and I would like to bond them together and run all three subnets over the bond, e.g.

You can if you want. I would recommend at least separating the management network (even on a 1G link) so you can route it and have external access if needed.
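If you do go the bonded route, the bond0:0-style layout from the question could be sketched with iproute2 as follows (interface names and addresses are assumptions; PetaSAN normally configures networking through its own deployment wizard):

```shell
# Create an LACP bond from the two 10G NICs, then stack the three
# subnets on the bond as labeled addresses (the bond0:0-style aliases).
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.168.10.11/24 dev bond0 label bond0:0   # mgmt
ip addr add 192.168.20.11/24 dev bond0 label bond0:1   # iscsi
ip addr add 192.168.30.11/24 dev bond0 label bond0:2   # backend
```

802.3ad (LACP) requires matching configuration on the switch; other bond modes such as active-backup need none.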

Hello Admin,

Thank you for the reply.

So I would do the following.

NIC1 = Mgmt, iSCSI 1, Backend 1 and NIC2 = iSCSI 2, Backend 2, as you recommend in the documentation. However, with that said, I will have only 3 VMs for management and 6 to 8 storage nodes.

With the management nodes (VMs), can I have them run only the management service, just to access and control the cluster, and run iSCSI and backend on the storage nodes?

 

Thanks again

I cannot at the moment recommend running PetaSAN as VMs; it is not something we currently test. There may be configuration and settings needed on the hypervisor side to get decent performance.

Generally you can restrict the first 3 nodes to management services only without storage or iSCSI if you want.

Thank you, noted.

 

I will just create the first 3 nodes as bundled services. Each server I have has SSDs on it; which configuration should I go with? Nothing is mentioned in the docs about creating a caching layer, only that mixing disks is not recommended, which makes sense.

Thanks

If you have all SSDs (best passed via PCI passthrough to the VM), then you just create OSDs without using an external wal/db journal disk.

Not really.

I have 2 SSDs and 8 HDDs on each server. I wanted to use the SSDs for caching and the HDDs for storage.

 

In this case you should use your SSDs as journal (wal/db) devices. We will be supporting caching in the future, done via bcache or dm-cache in Ceph, but it is not ready yet.
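Outside the PetaSAN UI, this wal/db layout corresponds to a plain Ceph command along these lines (device paths are assumptions for illustration; in PetaSAN itself you would do this from the disk management page instead):

```shell
# Create a BlueStore OSD on an HDD, placing its RocksDB metadata on a
# partition of the SSD. When only --block.db is given, ceph-volume
# keeps the WAL together with the DB on that device.
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/sdb1
```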

Is there an option in PetaSAN to do this?

If not, could you please explain or point me in the right direction?

Thanks

Yes, from the UI you add an SSD as a journal; then, when you add an HDD as an OSD, you have the option to use the journal. You should use approximately 1 SSD per 4 HDDs, and you need 20 GB on the SSD per HDD.
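As a quick check of that rule of thumb against the 8-HDD, 2-SSD servers mentioned earlier in the thread:

```shell
# Sizing sketch: ~1 SSD per 4 HDDs, 20 GB of SSD space per HDD.
hdd_count=8
gb_per_hdd=20
ssds_needed=$(( (hdd_count + 3) / 4 ))     # round up: ceil(8 / 4) = 2
db_space_gb=$(( hdd_count * gb_per_hdd ))  # 8 * 20 = 160 GB total
echo "SSDs needed: $ssds_needed"
echo "SSD journal space: ${db_space_gb} GB"
```

So 2 SSDs per 8-HDD server fits the guideline exactly, with 80 GB of journal space carved out of each SSD.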
