Multiple IP for PetaSAN Services
Pages: 1 2
pedro6161
36 Posts
December 30, 2020, 1:59 pm
Hi,
I have 3 servers for PetaSAN, all running OSDs from local disks, and only 2 of the servers provide services such as iSCSI, NFS and CIFS. My question is: why should we assign a range of IP addresses to a single service, e.g. iSCSI? If I only have 2 servers, I only need 2 subnets and 2 IP addresses from each subnet for iSCSI multipath (iSCSI 1 and iSCSI 2), i.e. 1 server only has 2 IP addresses. I don't understand why the quick start document gives 1 server 10 IP addresses.
I also don't understand how the IP addresses are distributed here, since if 1 server goes down, all 10 IP addresses hosted on that server go down with it.
Last edited on December 30, 2020, 1:59 pm by pedro6161 · #1
admin
2,930 Posts
December 30, 2020, 2:35 pm
The service IPs are highly available: if a node goes down, its IPs are re-assigned to the running nodes. In a 2-node active/passive setup, 1 IP is enough. With many active/active nodes, many IPs are better:
For CIFS/SMB/NFS: assume you have 5 service nodes. If each node has 1 service IP and 1 node fails, its IP is moved to 1 other node, which then takes all of the failed node's load, making it 2x as loaded as the other nodes. If instead each node had 4 IPs and a node fails, its load is distributed evenly over the running nodes.
For iSCSI it is possible to group all LUNs/disks under 1 IP in a 2-node active/passive setup, but having 1 IP per path helps with load distribution and failover in active/active setups, as above, and also lets a single disk be served actively by many nodes, so a single disk is scaled out. We also allow path assignment, to move a path from 1 node to another, either manually or in auto mode, for better load distribution.
We also try to help with IP distribution by letting users define a to/from auto IP range, and we take care of the distribution automatically.
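The CIFS/SMB/NFS arithmetic above can be sketched quickly (a toy illustration of the failover math, not PetaSAN code):

```shell
#!/bin/sh
# Toy model: 5 service nodes, 4 service IPs each.
nodes=5
ips_per_node=4
total=$((nodes * ips_per_node))        # 20 service IPs in the cluster
survivors=$((nodes - 1))               # one node fails
per_survivor=$((total / survivors))    # its 4 IPs spread over 4 survivors
echo "before failure: $ips_per_node IPs per node"
echo "after failure:  $per_survivor IPs per surviving node"
```

With 1 IP per node, the failed node's single IP lands on one survivor, doubling its share; with 4 IPs per node, each survivor picks up only one extra IP.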
Last edited on December 30, 2020, 2:42 pm by admin · #2
pedro6161
36 Posts
January 2, 2021, 9:39 pm
Hi,
At the moment I have given a range of 100 IPs for iSCSI, with VLAN tags 3094 and 3095 (for iSCSI 1 and iSCSI 2). When I create an iSCSI disk, IPs are allocated to it, but the problem is I cannot find those IP addresses at the OS level; all I can see is the service listening on port 3260, with no VLAN interfaces on the bond:
root@ceph-02:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1-backend state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1-backend state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond2-services state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond2-services state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0-mgmt state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0-mgmt state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
8: bond0-mgmt: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
inet 192.168.74.22/24 scope global bond0-mgmt
valid_lft forever preferred_lft forever
inet6 fe80::23a:7dff:fe71:cf5c/64 scope link
valid_lft forever preferred_lft forever
9: bond1-backend: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
inet 172.16.91.22/24 scope global bond1-backend
valid_lft forever preferred_lft forever
inet6 fe80::23a:7dff:fe69:e929/64 scope link
valid_lft forever preferred_lft forever
10: bond2-services: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
inet6 fe80::242:68ff:fe81:832b/64 scope link
valid_lft forever preferred_lft forever
bond2-services should have multiple VLANs and IP addresses, since the menu Manage iSCSI Disks > iSCSI Disk > Active Paths shows the following:
Active Paths — Disk 00005
IP              Interface             Assigned Node
172.16.94.130   bond2-services.3094   ceph-02
172.16.95.130   bond2-services.3095   ceph-03
root@ceph-02:~# netstat -ano|grep 3260
tcp 0 0 172.16.95.130:3260 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 172.16.94.130:3260 0.0.0.0:* LISTEN off (0.00/0/0)
Last edited on January 2, 2021, 9:41 pm by pedro6161 · #3
pedro6161
36 Posts
January 3, 2021, 7:37 am
I found the issue: the 8021q module was not loaded. I enabled it manually with modprobe 8021q, created the VLAN manually with vconfig add bond2-services 3094, added the IP address manually, and now ESXi can discover the iSCSI target.
Is there a way to load the 8021q module automatically, or do I need to add it again in a pre-script?
The problem is that when I apply the IP address and VLAN from the iSCSI settings, the web app does not load the 8021q module, and after creating the iSCSI disk it does not apply the VLAN interface and IP address at the OS level.
Please advise.
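For reference, the manual workaround described above could be made persistent, with the deprecated vconfig replaced by the modern ip tool. This is a sketch, not PetaSAN's own configuration path: the bond and address names come from this thread, and the short bond name bond2 is used because a tagged name derived from bond2-services would exceed the kernel's interface-name limit.

```shell
# Load the 802.1Q VLAN module now, and on every boot via modules-load.d:
modprobe 8021q
echo 8021q > /etc/modules-load.d/8021q.conf

# Create the tagged interface on the bond and assign the service IP
# (modern replacement for "vconfig add ..."; "bond2" is the shortened
# bond name, since "bond2-services.3094" is too long a name):
ip link add link bond2 name bond2.3094 type vlan id 3094
ip addr add 172.16.94.130/24 dev bond2.3094
ip link set bond2.3094 up
```

These commands need root and a live bond interface, so they are shown for reference rather than as something to paste blindly.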
admin
2,930 Posts
January 3, 2021, 8:26 am
We use VLANs all the time; you should not have to load the module explicitly. Check syslog and dmesg for errors that might explain it. If you do need to load it manually, you can add it to
/etc/modules-load.d/*.conf
One thing to try is renaming the bond to just bond2; I believe there is a kernel limit of 16 characters on interface names:
bond2-services.3094 -> bond2.3094
You would need to change it manually in /opt/petasan/config/cluster_info.json on all nodes, then select it in the iSCSI settings.
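The length limit is easy to check from a shell. In the Linux kernel, IFNAMSIZ is 16 bytes including the trailing NUL, so an interface name can have at most 15 visible characters, and a VLAN interface named bondname.vlan_id spends part of that budget on the dot and the VLAN ID:

```shell
#!/bin/sh
# Count the characters in the tagged interface names from this thread.
name="bond2-services.3094"
short="bond2.3094"
echo "$name  -> ${#name} chars"    # 19: over the limit, rejected by the kernel
echo "$short -> ${#short} chars"   # 10: fits comfortably
```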
pedro6161
36 Posts
January 3, 2021, 11:56 am
I already changed cluster_info.json to shorten the interface names, but after a reboot my gateway is gone. Do I need to add the gateway manually? Can I have multiple gateways? Or do I have to install the iproute2 package?
root@ceph-02:~# route -ne
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
172.16.91.0 0.0.0.0 255.255.255.0 U 0 0 0 b1-backend
192.168.74.0 0.0.0.0 255.255.255.0 U 0 0 0 b0-mgmt
admin
2,930 Posts
January 3, 2021, 1:01 pm
Did the short bond name work, apart from the external gateway?
Last edited on January 3, 2021, 1:02 pm by admin · #7
pedro6161
36 Posts
January 3, 2021, 1:43 pm
Quote from admin on January 3, 2021, 1:01 pm
did short bond name work apart from external gateway ?
At the moment the short interface (bond) name works for VLANs: the VLAN interfaces are now created automatically from the bond. But there is still no gateway, so I added one manually for the management network; for the services networks it is impossible to add a gateway, since one is already defined manually for the management interface.
root@ceph-02:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master b1-backend state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master b1-backend state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master b2-svc state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master b2-svc state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master b0-mgmt state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master b0-mgmt state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
8: b0-mgmt: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
inet 192.168.74.22/24 scope global b0-mgmt
valid_lft forever preferred_lft forever
inet6 fe80::23a:7dff:fe71:cf5c/64 scope link
valid_lft forever preferred_lft forever
9: b1-backend: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
inet 172.16.91.22/24 scope global b1-backend
valid_lft forever preferred_lft forever
inet6 fe80::23a:7dff:fe69:e929/64 scope link
valid_lft forever preferred_lft forever
10: b2-svc: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
inet6 fe80::242:68ff:fe81:832b/64 scope link
valid_lft forever preferred_lft forever
11: b2-svc.3095@b2-svc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
inet 172.16.95.130/24 scope global b2-svc.3095
valid_lft forever preferred_lft forever
inet6 fe80::242:68ff:fe81:832b/64 scope link
valid_lft forever preferred_lft forever
12: b2-svc.3094@b2-svc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
inet 172.16.94.129/24 scope global b2-svc.3094
valid_lft forever preferred_lft forever
inet6 fe80::242:68ff:fe81:832b/64 scope link
valid_lft forever preferred_lft forever
root@ceph-02:~#
root@ceph-02:~#
root@ceph-02:~#
root@ceph-02:~# route -ne
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
172.16.91.0 0.0.0.0 255.255.255.0 U 0 0 0 b1-backend
172.16.94.0 0.0.0.0 255.255.255.0 U 0 0 0 b2-svc.3094
172.16.95.0 0.0.0.0 255.255.255.0 U 0 0 0 b2-svc.3095
192.168.74.0 0.0.0.0 255.255.255.0 U 0 0 0 b0-mgmt
root@ceph-02:~#
admin
2,930 Posts
January 3, 2021, 1:55 pm
"at the moment short name of interface (bond) work for vlan"
Excellent 🙂 The interface name is bondname.vlan_id and must be at most 16 characters in Linux. We should probably warn about this in the UI.
You should not have to configure the default gateway; if it is in /etc/network/interfaces it should be configured automatically. If you ever need to add any custom configuration, do so in
/opt/petasan/scripts/custom/post_start_network.sh
for example
route add default gw X.X.X.X b0-mgmt
but again, it should happen automatically if the gateway is in /etc/network/interfaces. You also do not need to install iproute/iproute2; they are already installed.
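For reference, a static management interface with a default gateway in /etc/network/interfaces typically looks like the stanza below. The interface name and address are the ones from this thread; the gateway 192.168.74.1 is an assumed example, so substitute your router's address:

```
auto b0-mgmt
iface b0-mgmt inet static
    address 192.168.74.22
    netmask 255.255.255.0
    gateway 192.168.74.1
```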
Last edited on January 3, 2021, 2:00 pm by admin · #9
pedro6161
36 Posts
January 3, 2021, 7:31 pm
I tried changing /etc/network/interfaces from eth4 to b0-mgmt, but without success; I think I need to add the gateway manually in /opt/petasan/scripts/custom/post_start_network.sh.
In this situation, is there a way to get to a shell other than over SSH? I mean from the blue console: after the cluster is deployed there is no option in the menu to open a shell.
Or maybe the blue console screen could be customized, e.g. to require a login before the menu can be used, the way VMware requires a login even to reboot the server.
Last edited on January 3, 2021, 7:34 pm by pedro6161 · #10
Pages: 1 2
Multiple IP for PetaSAN Services
pedro6161
36 Posts
Quote from pedro6161 on December 30, 2020, 1:59 pmHi,
i have 3 server for PetaSAN, all running OSD from local disk and only 2 server serve services like ISCSI, NFS and CIFS. my questions is why we should put range of ip address for single single services ex: iscsi, if i only have 2 server it mean i only need 2 subnet and 2 ip address from each subnet for multipath iscsi 1 and iscsi 2 (1 server only have 2 ip address). i just not understand why in quick start document they put 1 server has 10 ip address.
i'm still not understand regarding distribute of ip address in here, since if 1 server down, all 10 ip address is hosted on that server it will down
Hi,
i have 3 server for PetaSAN, all running OSD from local disk and only 2 server serve services like ISCSI, NFS and CIFS. my questions is why we should put range of ip address for single single services ex: iscsi, if i only have 2 server it mean i only need 2 subnet and 2 ip address from each subnet for multipath iscsi 1 and iscsi 2 (1 server only have 2 ip address). i just not understand why in quick start document they put 1 server has 10 ip address.
i'm still not understand regarding distribute of ip address in here, since if 1 server down, all 10 ip address is hosted on that server it will down
admin
2,930 Posts
Quote from admin on December 30, 2020, 2:35 pmThe service ips are highly available, if a node goes down the ips will be re-assigned to running nodes. In a 2 node active/passive setup, 1 ip will be enough. For many active/active nodes, many ips are better:
For CIFS/SMB/NFS : Assume you have 5 service nodes, if each node has 1 service ip and 1 node fails its ip will be distributed to 1 other node, it will take all the load, making it 2x as loaded as the other nodes. If however each node had 4 ips and a node fails, its load will be distributed evenly on the running nodes.
in iSCSI it is possible to group all luns/disks under 1 ip in a 2 node active/passive setup, having 1 ip per path helps in load distribution in active/active load distribution and failover as above as well as allows a single disk to be served by many nodes actively so a single disk is scaled out. We also allow path assignment to move a path from 1 node to another either manually or in auto mode for better load distribution.
We also try to help ip distribution by allowing users define a to/from auto ip range and we take care of ip distribution automatically.
The service ips are highly available, if a node goes down the ips will be re-assigned to running nodes. In a 2 node active/passive setup, 1 ip will be enough. For many active/active nodes, many ips are better:
For CIFS/SMB/NFS : Assume you have 5 service nodes, if each node has 1 service ip and 1 node fails its ip will be distributed to 1 other node, it will take all the load, making it 2x as loaded as the other nodes. If however each node had 4 ips and a node fails, its load will be distributed evenly on the running nodes.
in iSCSI it is possible to group all luns/disks under 1 ip in a 2 node active/passive setup, having 1 ip per path helps in load distribution in active/active load distribution and failover as above as well as allows a single disk to be served by many nodes actively so a single disk is scaled out. We also allow path assignment to move a path from 1 node to another either manually or in auto mode for better load distribution.
We also try to help ip distribution by allowing users define a to/from auto ip range and we take care of ip distribution automatically.
pedro6161
36 Posts
Quote from pedro6161 on January 2, 2021, 9:39 pmHi,
at the moment i give 100 ip for iSCSI range with vlan tagging 3094 and 3095 for iSCSI 2, and when i create the iscsi disk the ip will allocate to iscsi disk, but the problem is i cannot find the ip address on OS level, only i saw is services running on port 3260 but no interface vlan on bonding interface
root@ceph-02:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1-backend state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1-backend state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond2-services state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond2-services state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0-mgmt state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0-mgmt state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
8: bond0-mgmt: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
inet 192.168.74.22/24 scope global bond0-mgmt
valid_lft forever preferred_lft forever
inet6 fe80::23a:7dff:fe71:cf5c/64 scope link
valid_lft forever preferred_lft forever
9: bond1-backend: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
inet 172.16.91.22/24 scope global bond1-backend
valid_lft forever preferred_lft forever
inet6 fe80::23a:7dff:fe69:e929/64 scope link
valid_lft forever preferred_lft forever
10: bond2-services: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
inet6 fe80::242:68ff:fe81:832b/64 scope link
valid_lft forever preferred_lft forevershould be on bond2-services has multiple vlan and ip address since on menu managed iscsi disk > iscsi disk > active path show on below :
Active Paths
Disk 00005
IP Interface Assigned Node 172.16.94.130 bond2-services.3094 ceph-02 172.16.95.130 bond2-services.3095 ceph-03
root@ceph-02:~# netstat -ano|grep 3260
tcp 0 0 172.16.95.130:3260 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 172.16.94.130:3260 0.0.0.0:* LISTEN off (0.00/0/0)
Hi,
at the moment i give 100 ip for iSCSI range with vlan tagging 3094 and 3095 for iSCSI 2, and when i create the iscsi disk the ip will allocate to iscsi disk, but the problem is i cannot find the ip address on OS level, only i saw is services running on port 3260 but no interface vlan on bonding interface
root@ceph-02:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1-backend state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond1-backend state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond2-services state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond2-services state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0-mgmt state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0-mgmt state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
8: bond0-mgmt: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
inet 192.168.74.22/24 scope global bond0-mgmt
valid_lft forever preferred_lft forever
inet6 fe80::23a:7dff:fe71:cf5c/64 scope link
valid_lft forever preferred_lft forever
9: bond1-backend: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
inet 172.16.91.22/24 scope global bond1-backend
valid_lft forever preferred_lft forever
inet6 fe80::23a:7dff:fe69:e929/64 scope link
valid_lft forever preferred_lft forever
10: bond2-services: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
inet6 fe80::242:68ff:fe81:832b/64 scope link
valid_lft forever preferred_lft forever
bond2-services should have multiple VLANs and IP addresses, since under Manage iSCSI Disks > iSCSI Disk > Active Paths the UI shows:
[Active Paths view for Disk 00005 — table from the UI, not captured here]
root@ceph-02:~# netstat -ano|grep 3260
tcp 0 0 172.16.95.130:3260 0.0.0.0:* LISTEN off (0.00/0/0)
tcp 0 0 172.16.94.130:3260 0.0.0.0:* LISTEN off (0.00/0/0)
pedro6161
36 Posts
Quote from pedro6161 on January 3, 2021, 7:37 am
I found the issue: the 8021q module was not loaded. I loaded it manually with modprobe 8021q, created the VLAN manually with vconfig add bond2-services 3094, added the IP address manually, and then ESXi could discover the iSCSI target.
Is there a way to load the 8021q module automatically, or do I need to add it again in a pre-script?
The problem is that when I apply the IP address and VLAN from the iSCSI settings, the web app does not load the 8021q module, so after creating the iSCSI disk the VLAN interface and IP address are not applied at the OS level.
Please advise.
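For reference, the manual modprobe/vconfig steps described above can be sketched with modern iproute2 (vconfig is deprecated). The bond name and addresses below are taken from this thread, but the shortened bond name is an assumption — a VLAN device name like bond2-services.3094 exceeds the kernel's 16-character interface-name limit, so a shorter base name is used here:

```shell
modprobe 8021q                                    # 802.1Q VLAN support
# iproute2 equivalent of "vconfig add <bond> 3094"; "b2-svc" is a
# shortened bond name assumed here to stay within the name-length limit.
ip link add link b2-svc name b2-svc.3094 type vlan id 3094
ip addr add 172.16.94.130/24 dev b2-svc.3094
ip link set b2-svc.3094 up
```

These commands require root and an existing bond interface, so treat them as a sketch of the manual workaround rather than a drop-in script.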
admin
2,930 Posts
Quote from admin on January 3, 2021, 8:26 am
We use VLANs all the time; you should not have to load the module explicitly. Check syslog and dmesg for errors that could help. If you do need to load it manually, you can add it to
/etc/modules-load.d/*.conf
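As a concrete sketch of that suggestion (the file name is an assumption; any *.conf name under that directory works), the file would contain just the module name:

```
# /etc/modules-load.d/8021q.conf  (assumed file name)
8021q
```

systemd-modules-load reads this at boot; for the current session you would still run modprobe 8021q once by hand.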
One thing to try is renaming the bond to just bond2; I believe there is a kernel limit of 16 characters on interface names:
bond2-services.3094 -> bond2.3094
You would need to change it manually in /opt/petasan/config/cluster_info.json on all nodes, then select it in the iSCSI settings.
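The length problem can be checked quickly: Linux interface names must fit in IFNAMSIZ (16 bytes including the terminating NUL, i.e. 15 usable characters), and the VLAN device name is bondname.vlan_id. A small sketch:

```shell
# Interface names must fit in IFNAMSIZ (16 bytes including the NUL),
# so anything longer than 15 characters is rejected by the kernel.
long_name="bond2-services.3094"
short_name="bond2.3094"
echo "${#long_name}"    # 19 characters: too long
echo "${#short_name}"   # 10 characters: fits
```

This is why renaming the bond to bond2 makes the derived VLAN interface name valid.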
pedro6161
36 Posts
Quote from pedro6161 on January 3, 2021, 11:56 am
I already changed cluster_info.json to shorten the interface names, but after a reboot my default gateway is gone. Do I need to add the gateway manually? Can I have multiple gateways? Or do I have to install the iproute2 package?
root@ceph-02:~# route -ne
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
172.16.91.0 0.0.0.0 255.255.255.0 U 0 0 0 b1-backend
192.168.74.0 0.0.0.0 255.255.255.0 U 0 0 0 b0-mgmt
admin
2,930 Posts
Quote from admin on January 3, 2021, 1:01 pm
Did the short bond name work, apart from the external gateway?
pedro6161
36 Posts
Quote from pedro6161 on January 3, 2021, 1:43 pm
Quote from admin on January 3, 2021, 1:01 pm
did short bond name work apart from external gateway ?
At the moment the short interface (bond) names do work for VLANs: the VLAN interfaces are now created automatically from the bond. But there is still no gateway, so I added one manually for management; for the services interfaces it is impossible to add a gateway, since one is already defined manually on the management interface.
root@ceph-02:~# ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master b1-backend state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master b1-backend state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master b2-svc state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master b2-svc state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
6: eth4: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master b0-mgmt state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
7: eth5: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master b0-mgmt state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
8: b0-mgmt: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:3a:7d:71:cf:5c brd ff:ff:ff:ff:ff:ff
inet 192.168.74.22/24 scope global b0-mgmt
valid_lft forever preferred_lft forever
inet6 fe80::23a:7dff:fe71:cf5c/64 scope link
valid_lft forever preferred_lft forever
9: b1-backend: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:3a:7d:69:e9:29 brd ff:ff:ff:ff:ff:ff
inet 172.16.91.22/24 scope global b1-backend
valid_lft forever preferred_lft forever
inet6 fe80::23a:7dff:fe69:e929/64 scope link
valid_lft forever preferred_lft forever
10: b2-svc: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
inet6 fe80::242:68ff:fe81:832b/64 scope link
valid_lft forever preferred_lft forever
11: b2-svc.3095@b2-svc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
inet 172.16.95.130/24 scope global b2-svc.3095
valid_lft forever preferred_lft forever
inet6 fe80::242:68ff:fe81:832b/64 scope link
valid_lft forever preferred_lft forever
12: b2-svc.3094@b2-svc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:42:68:81:83:2b brd ff:ff:ff:ff:ff:ff
inet 172.16.94.129/24 scope global b2-svc.3094
valid_lft forever preferred_lft forever
inet6 fe80::242:68ff:fe81:832b/64 scope link
valid_lft forever preferred_lft forever
root@ceph-02:~#
root@ceph-02:~#
root@ceph-02:~#
root@ceph-02:~# route -ne
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
172.16.91.0 0.0.0.0 255.255.255.0 U 0 0 0 b1-backend
172.16.94.0 0.0.0.0 255.255.255.0 U 0 0 0 b2-svc.3094
172.16.95.0 0.0.0.0 255.255.255.0 U 0 0 0 b2-svc.3095
192.168.74.0 0.0.0.0 255.255.255.0 U 0 0 0 b0-mgmt
root@ceph-02:~#
admin
2,930 Posts
Quote from admin on January 3, 2021, 1:55 pm
at the moment short name of interface(bond) work for vlan
Excellent 🙂 The interface name is bondname.vlan_id and must be at most 16 characters in Linux. We should probably warn about this in the UI.
You should not have to configure the default gateway. Make sure it is in /etc/network/interfaces and it should be configured automatically. If you ever need to add any custom configuration, do so in
/opt/petasan/scripts/custom/post_start_network.sh
for example:
route add default gw X.X.X.X b0-mgmt
But again, it should be done automatically if it is in /etc/network/interfaces. You also do not have to install iproute/iproute2; they are already installed.
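A minimal sketch of that custom script, assuming the gateway sits on the management subnet seen in this thread (the address is a placeholder you must replace):

```shell
#!/bin/sh
# Sketch of /opt/petasan/scripts/custom/post_start_network.sh
GW="192.168.74.1"    # ASSUMPTION: replace with your real management gateway
# Re-add the default route only if it is missing after the network comes up.
ip route show default | grep -q '^default' || \
    ip route add default via "$GW" dev b0-mgmt
```

The guard keeps the script idempotent if /etc/network/interfaces already installed the route.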
pedro6161
36 Posts
Quote from pedro6161 on January 3, 2021, 7:31 pm
I tried changing /etc/network/interfaces from eth4 to b0-mgmt, but it did not work; I think I need to add it manually in /opt/petasan/scripts/custom/post_start_network.sh.
In this situation, is there a way to get to a shell other than via SSH, i.e. from the blue console? After the cluster is deployed, there is no shell option in the menu.
Or maybe the blue console screen could be customized, e.g. to require a login before accessing the menu — like VMware, where even rebooting the server requires a login.