Number of networks
msalem
87 Posts
April 15, 2018, 11:58 am
Hello guys,
I have a question regarding the subnets. I was watching the video and other documentation, and they recommend 4 subnets and a minimum of 2 NICs.
1 - Why do you need 4 subnets? Can I get away with 2 NICs and three subnets: a. Management, b. iSCSI, c. Backend?
2 - I have only two 10Gb NICs, and I would like to bond them together and run all three subnets on the bond, e.g.
bond0 - bond.
bond0:0 - mgmt
bond0:1 - iscsi
bond0:2 - backend.
Thanks
admin
2,930 Posts
April 15, 2018, 1:15 pm
1 - Why do you need 4 subnets? Can I get away with 2 NICs and three subnets: a. Management, b. iSCSI, c. Backend?
You can have multiple subnets per NIC. Whether you map 3 or 5 subnets onto a NIC (or a NIC bond), the concept is the same, and you end up with the same traffic load either way. Having more subnets makes sense if you also have more NICs, so you can separate the traffic load. If you only have 1 or 2 NICs, there is no advantage, since all subnets will be mapped to them anyway. iSCSI deployments favor having 2 iSCSI subnets for use with MPIO (multipath I/O) rather than a single bonded link; Windows and VMware initiators typically work this way. For Ceph, the backend 2 network (in Ceph terminology the private network) is used for data replication, which can be heavily loaded during recovery, while backend 1 carries client I/O (the Ceph public network). So if you can separate them to spread the traffic load, it is better to do so; if your NIC can handle all the traffic at once, just map all the subnets to it.
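As a rough illustration of how backend 1 and backend 2 map to Ceph (the subnets below are made up, and PetaSAN generates this configuration for you from the networks you choose at deployment), the relevant ceph.conf settings look something like:

[global]
public_network = 10.0.3.0/24     # backend 1 - client io
cluster_network = 10.0.4.0/24    # backend 2 - replication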
2 - I have only two 10Gb NICs, and I would like to bond them together and run all three subnets on the bond.
You can if you want. I would recommend at least separating the management network (even on a 1G link) so you can route it and have external access if needed.
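To make the idea concrete, mapping several subnets onto one bond at the OS level could look roughly like the following (interface names and addresses are hypothetical, and on a real PetaSAN node the networks are set up by the deployment wizard rather than typed by hand):

ip link add bond0 type bond mode 802.3ad   # or active-backup, depending on the switch
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 10.0.1.10/24 dev bond0   # management
ip addr add 10.0.2.10/24 dev bond0   # iscsi
ip addr add 10.0.3.10/24 dev bond0   # backend

If you keep the management network on a separate 1G NIC as suggested, its address would go on that NIC instead of the bond.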
Last edited on April 15, 2018, 1:16 pm by admin · #2
msalem
87 Posts
April 15, 2018, 2:25 pm
Hello Admin,
Thank you for the reply.
So I would do the following.
NIC1 = Mgmt, iSCSI 1, Backend 1 and NIC2 = iSCSI 2, Backend 2, as you recommend in the documentation. That said, I will have 3 VMs only for management and 6 to 8 storage nodes.
With the management nodes (VMs), can I have them run the management service only, just to access and control the cluster, and run iSCSI and backend on the storage nodes?
Thanks again
admin
2,930 Posts
April 15, 2018, 2:43 pm
I cannot at the moment recommend running PetaSAN as VMs; it is not something we currently test. There could be configuration and settings needed on the hypervisor side to get decent performance.
Generally you can restrict the first 3 nodes to management services only without storage or iSCSI if you want.
msalem
87 Posts
April 15, 2018, 2:51 pm
Thank you, noted.
I will just create the first 3 nodes with bundled services. Each server I have has SSDs in it; which configuration should I use? Nothing is mentioned in the docs about creating something like a caching layer, only a recommendation not to mix disks, which makes sense.
Thanks
admin
2,930 Posts
April 15, 2018, 3:34 pm
If you have all SSDs (best via PCI passthrough to the VM), then you just create OSDs without using an external WAL/DB journal disk.
msalem
87 Posts
April 15, 2018, 3:45 pm
Not really,
I have 2 SSDs and 8 HDDs in each server; I wanted to use the SSDs for caching and the HDDs for storage.
admin
2,930 Posts
April 15, 2018, 6:49 pm
In this case you should use your SSDs as journal (WAL/DB) devices. We will be supporting caching in the future; it will be done via bcache or dm-cache in Ceph, but it is not ready yet.
msalem
87 Posts
April 15, 2018, 8:16 pm
Is there an option in PetaSAN to do this?
If not, could you please explain or point me in the right direction?
Thanks
admin
2,930 Posts
April 16, 2018, 8:04 am
Yes, from the UI you add an SSD as a journal, then when you add an HDD as an OSD you have the option to use that journal. You should use approximately 1 SSD per 4 HDDs, and you need 20G on the SSD per HDD.
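For example, with the 2 SSDs and 8 HDDs per server mentioned above: 8 HDDs / 2 SSDs = 4 HDDs per SSD, which matches the 1:4 ratio, and each SSD would need about 4 x 20G = 80G of journal space.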
Last edited on April 16, 2018, 8:05 am by admin · #10