Using Jumbo Frames - Peering problem.
sniffus
20 Posts
February 2, 2018, 3:31 pm
Good <time_o'_day>,
We've upgraded from 1.4 to 1.5 and added jumbo frames to the mix. Now we end up with PGs peering all the time.
I was wondering where jumbo frames are set in your scripts, or whether you support them at all?
Thanks guys, it's a beautiful adventure, this PetaSAN project; we love it.
M.
sniffus
20 Posts
February 2, 2018, 3:44 pm
If I manually set the MTU to 8220, it works.
How do I change the scripts to add ifconfig ethx mtu 8220?
Thanks,
M.
admin
2,930 Posts
February 2, 2018, 8:09 pm
In PetaSAN the network definition should happen during cluster creation; this is where you would define jumbo frames and assign them to interfaces via the deployment wizard.
If you do it yourself, it will be messier:
You need to define the interfaces/bonds and the jumbo frame assignment in
/opt/petasan/config/cluster_info.json
The code that sets jumbo frames (9000 MTU) is in
/opt/petasan/scripts/node_start_ips.py, function set_jumbo_frames()
Last edited on February 2, 2018, 8:46 pm by admin · #3
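For reference, the manual, non-persistent equivalent of the 8220 change would look something like this minimal sketch; the interface name is a placeholder, and the startup script above would presumably reapply its own configured value on the next boot:
ip link set dev eth1 mtu 8220    # eth1 is a placeholder; pick your backend/iSCSI interface
ip link show dev eth1            # confirm the mtu value shown in the output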
sniffus
20 Posts
February 5, 2018, 2:57 pm
The issue I encounter is the following:
Main network is 1 Gb and runs MTU 1500
Backend is 10 Gb, jumbo frames, MTU 9014
iSCSI is 10 Gb, jumbo frames, MTU 4088 (a glitch that existed in a Windows driver; it has been fixed, but the case is possible)
Seems like you just set jumbo frames everywhere the same, which would be the logical thing, however the world is a crooked place and things don't always work out 😉
M.
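Manually, a layout like the above could be expressed along these lines (interface names are placeholders; note that an MTU of 9014 as reported on the Windows side typically includes the 14-byte Ethernet header, so the Linux-side value would be 9000):
ip link set dev eth0 mtu 1500    # management, 1 Gb
ip link set dev eth1 mtu 9000    # backend, 10 Gb (9014 frame size minus 14-byte Ethernet header)
ip link set dev eth2 mtu 4088    # iSCSI, 10 Gb, matching the Windows-side limit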
Shiori
86 Posts
February 5, 2018, 3:46 pm
With the current scripts for network control, you cannot easily set differing jumbo frame sizes. The only hard-set interface is your management interface, and it is set in /etc/network/interfaces.
Not being a Java guy, I haven't addressed this issue on my own yet. For now I would say that jumbo frames work but are not fully compatible with the existing networks out there. I tried to use jumbo frames on my system as well and found that even though everything could use an MTU of 9000, it still did not work, as the systems are supposed to negotiate up to this maximum. This negotiation is not working correctly and is handled at layer-2 protocol levels, so this may be more than simply an issue in the JSON configs. One thing I do know is that jumbo frames were working over my SAN hardware without issues, so it is the PetaSAN implementation that has an issue.
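For reference, an MTU pinned for the management interface in /etc/network/interfaces would look roughly like this minimal sketch (interface name and addresses are placeholders; whether PetaSAN preserves manual edits to that file is not confirmed here):
auto eth0
iface eth0 inet static
    address 10.0.1.10
    netmask 255.255.255.0
    mtu 9000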
admin
2,930 Posts
February 5, 2018, 4:56 pm
If you change the code in
/opt/petasan/scripts/node_start_ips.py, function set_jumbo_frames(),
and change the 9000 value, does it not work?
Shiori
86 Posts
February 5, 2018, 8:39 pm
It does not, at least not as expected. This seems to set the MTU to only 9000 (or any other value set), but you lose the auto-negotiation that is supposed to happen. The MTU is supposed to be flexible, and an allowance of 9000 should mean that the MTU is allowed to be as big as 9000; instead it is being treated as if all packets must be 9000 bytes long. It did not matter which network adapter or switch was involved; I even tried a crossover cable between two NICs and still could not make it work correctly. Any packet forced to 9000 bytes worked, but nothing smaller.
I am wondering if a flag was set to hard-set the MTU? I haven't taken the time to really dig into this yet; the above is all I have done. I used Intel, Nvidia and Realtek network adapters. All adapters are 1 Gbps Ethernet and connect to a Cisco 2950G switch. All Ethernet cables are known good, but were double-checked regardless on a cable analyzer. The switch is capable of a jumbo MTU of 9000 (and more) and is set to allow jumbo frames (the previous SAN used this switch and had no issues with jumbo frames, though that was a home-spun system based on Debian).
As I said before, I have not taken the time to dissect this issue, as I use InfiniBand for my backend/iSCSI networks and only the management is on the Gbps NIC. Further, I am not a Java guy, so it will take me much longer to go through the JSON file and see what is actually going on.
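One way to check which frame sizes actually pass end to end, independent of any negotiation, is a ping with the don't-fragment flag set (the target address is a placeholder):
# 8972 = 9000-byte MTU minus 20-byte IP header and 8-byte ICMP header
ping -M do -s 8972 -c 3 10.0.2.20
# a standard-size probe should also succeed; if it fails while 8972 works,
# something is forcing oversized frames rather than merely allowing them
ping -M do -s 1472 -c 3 10.0.2.20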
admin
2,930 Posts
February 6, 2018, 1:31 pm
Hi,
We have it working in our test environment. Looking at this further, there are 2 methods for MTU discovery:
RFC 1191 Traditional Path MTU Discovery (PMTUD) uses ICMP (older)
RFC 4821 Packetization Layer Path MTU Discovery (PLPMTUD) works at the packetization layer above IP (newer)
In PetaSAN, PMTUD is enabled, PLPMTUD is not.
To enable PLPMTUD, add the following to /etc/sysctl.conf:
net.ipv4.tcp_mtu_probing=2
To enable PLPMTUD and disable PMTUD:
net.ipv4.tcp_mtu_probing=2
net.ipv4.ip_no_pmtu_disc=1
Then:
sysctl --system
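Collected as a shell sequence to run on each node (include the ip_no_pmtu_disc line only if you also want to disable ICMP-based PMTUD):
echo "net.ipv4.tcp_mtu_probing=2" >> /etc/sysctl.conf
echo "net.ipv4.ip_no_pmtu_disc=1" >> /etc/sysctl.conf
sysctl --system                      # reload sysctl settings from all config files
sysctl net.ipv4.tcp_mtu_probing      # verify the running value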