One IP address per iSCSI disk
R3LZX
50 Posts
January 3, 2021, 12:39 am

So I have a PetaSAN cluster that is used for backups only and some ISO storage. It is on the latest version, and what I have done to conserve IP addresses is use the same IP address on all eight nodes for the single iSCSI disk it has.

Example:
10.10.219.199
iqn.2016-05.com.tdrfyguhi.petasan:00006

The same disk is running on eight different VMware hosts, some on version 7 and others on 6.7.

I am not seeing any issues; is this something you recommend I avoid? In the EqualLogic world it is also commonly done.

By the way, the latest version (at least for me) is showing significant performance improvements when browsing iSCSI drives, time to mount, etc. Thank you for all the hard work.
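For context, a quick way to confirm how many paths each ESXi host actually sees with this single-portal setup might look like the following (the adapter name `vmhba64` is a hypothetical example; it varies per host):

```shell
# List the portals the software iSCSI adapter knows about
# (with one shared IP, only one portal should appear per target)
esxcli iscsi adapter target portal list --adapter=vmhba64

# Show the storage paths each device has; with a single portal IP,
# expect a single path per iSCSI device
esxcli storage core path list
```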
admin
2,930 Posts
January 3, 2021, 10:12 am

I think there are two different issues here: 1) how a single client can access many iSCSI gateways in a scale-out active/active way, and 2) whether many clients can access the same disk all actively, or whether they need to be active/passive among themselves (a client-layer question), and whether they can share the same IPs.

- Having more than one IP per disk allows a single client to scale out in an active/active way, getting combined performance from all gateways as well as better load balancing. The downside is more internal IPs to generate and configure, but we recommend at least two in order to make correct use of MPIO. In some cases, such as VMware, if you have only one path and it times out, the entire datastore can be marked inactive for a longer time before re-activation is attempted (the APD error: all paths down). With another MPIO path, the datastore keeps functioning and the failed path is retried regularly, so the storage is more available.

- If more than one client needs to connect to the disk, they can use the same path IP(s); there is no need to generate unique paths per client. Whether the clients can coordinate active access to the disk or must be set up active/passive among themselves is a client clustering issue: you need a clustered file system (VMware VMFS, Windows/Hyper-V CSV, OCFS2) to support active/active clients. Windows NTFS clients cannot, and need to be configured active/passive among themselves, i.e. one client at a time accessing the disk, while that one client still uses many active/active paths as in 1).
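For a Linux initiator, the two-path MPIO setup recommended above can be sketched roughly as follows. The target IQN is the one from the post; the second portal IP (10.10.219.200) is a hypothetical example of an additional path IP assigned to the same disk:

```shell
# Discover the target through each gateway portal IP
iscsiadm -m discovery -t sendtargets -p 10.10.219.199
iscsiadm -m discovery -t sendtargets -p 10.10.219.200   # second, hypothetical path IP

# Log in through both portals; each login creates one SCSI path to the same disk
iscsiadm -m node -T iqn.2016-05.com.tdrfyguhi.petasan:00006 -p 10.10.219.199 --login
iscsiadm -m node -T iqn.2016-05.com.tdrfyguhi.petasan:00006 -p 10.10.219.200 --login

# With device-mapper multipath running, the two paths are combined into
# one multipath device; verify with:
multipath -ll
```

With this layout, losing one gateway leaves the other path active, so the client avoids the all-paths-down situation described above.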
Last edited on January 3, 2021, 10:16 am by admin · #2