Dependency on /etc/network/interfaces and hard-coded interface names
NTmatter
8 Posts
July 17, 2020, 4:51 am
I know this is well outside PetaSAN's intended mode of operation, so I'd say it's more of an observation than a bug report.
I'm expecting to have some mismatched hardware over the lifetime of my cluster, starting with Virtual and moving to Hybrid Physical. This will create some issues with interface counts, types, and names when new hardware comes in. As a workaround, I've been trying to normalize interface names with systemd-networkd, allowing for swapping the underlying network interfaces in the future (eg, to Bonds, Teams, Dummies, Multinets, or mismatched VLANs).
As part of the process, I commented out the contents of /etc/network/interfaces:
# auto eth0
# iface eth0 inet static
# address 10.0.32.25
# netmask 255.255.255.224
# gateway 10.0.32.30
# dns-nameservers 8.8.8.8
After rebooting the host, the Console UI was fairly broken. Normally it displays node information like this:
Host Name: test1
Management Interface: eth0
IP Address: 10.0.123.45
Subnet Mask: 255.255.255.0
Gateway: 10.0.123.254
DNS: 8.8.8.8
Instead, it displays the token preceding each value in the file:
Host Name: test1
Management Interface: eth0
IP Address: address
Subnet Mask: netmask
Gateway: gateway
DNS: 8.8.8.8
I'm guessing that DNS is pulled from resolv.conf, and is thereby unaffected?
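For illustration, a comment-aware parse along these lines would avoid picking up stray tokens like "address" or "netmask" (a minimal Python sketch of the idea, not PetaSAN's actual parser):

```python
# Minimal sketch: parse a few fields out of /etc/network/interfaces while
# skipping commented lines entirely, so an all-commented file yields nothing
# rather than stray keyword tokens.
def parse_interfaces(text):
    info = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # ignore blanks and comments
        key, _, value = line.partition(" ")
        if key in ("address", "netmask", "gateway"):
            info[key] = value.strip()
    return info

commented = "# auto eth0\n# iface eth0 inet static\n# address 10.0.32.25\n"
print(parse_interfaces(commented))  # an all-commented file yields no values
```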
Additionally, attempting to use Options > Interface Naming causes the text UI to crash and eventually restart.
In cases where I delete /etc/network/interfaces, the Console UI did not start at all. When this is done prior to joining the cluster, the root password is not set, and networking may be broken, meaning that it is impossible to repair the situation after a reboot. Rolling back to a snapshot did the trick, but it could mean rebuilding the machine if things go particularly wrong.
As another part of the experiment, I've been renaming interfaces by changing the interface names in /etc/udev/rules.d/70-persistent-net.rules to mgmt0, ceph0, cifs0, iscsi1, and iscsi2 prior to running the installer. This generally seems to work; however, there are a few challenges in the Web Installer Wizard on the Cluster Network Settings page.
I am able to correctly select the Management and Backend interfaces using their modified names (mgmt0, ceph0). The Services dropdowns, however, only offer a static list of eth0-eth9. If I use the Dev Tools to edit one of the dropdown entries from <option value="eth9">eth9</option> to <option value="iscsi1">iscsi1</option> and select the new option, the supplied value is accepted and persisted all the way down to cluster_info.json, even on newly-added nodes that don't have the relevant interface.
I haven't done much testing in this configuration, but it generally seems to work.
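Since the dropdown hack shows the submitted value isn't checked, a server-side validation step could reject names that don't exist on the node. A hypothetical sketch (the function name and rules are mine, not PetaSAN's; the name list would come from e.g. /sys/class/net):

```python
import re

# Hypothetical check: accept a service interface name only if it looks like a
# sane Linux interface name and actually exists on the node.
def valid_service_interface(name, node_interfaces):
    # rough IFNAMSIZ-style sanity check: lowercase start, <= 15 chars
    if not re.fullmatch(r"[a-z][a-z0-9.]{1,14}", name):
        return False
    return name in node_interfaces

print(valid_service_interface("iscsi1", ["mgmt0", "ceph0", "iscsi1"]))  # True
print(valid_service_interface("eth9", ["mgmt0", "ceph0", "iscsi1"]))    # False
```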
Last edited on July 17, 2020, 5:19 am by NTmatter · #1
admin
2,930 Posts
July 17, 2020, 11:16 am
As of the latest 2.5 release, we do not force all nodes to have the same interface count and configuration. Each node's requirements are determined by the roles/services enabled on it.
In 2.6 we are making it easier to change interface requirements for these services/roles in an already deployed cluster; previously this was done at cluster creation in the UI wizard, and any later changes required manual edits to cluster_info.json.
We need network naming to be consistent, like eth0..ethX: we cannot allow dynamically generated kernel names because we have two types of settings: those that apply to a node (like IP addresses) and those that apply to a cluster service/role (service interface/subnet/VLAN/MTU/bonds). The latter makes the cluster easier both to set up and later to operate and troubleshoot; we want to avoid having interface X on node A talk to interface Y on node B.
To achieve naming consistency across different hardware, we rely on the interface naming function in the console menu / blue node screen. If it is not working for you, let us know as a bug and we will fix it; it is definitely something we test here.
NTmatter
8 Posts
July 17, 2020, 12:40 pm
When I say "it's not working", it is in the context of having manually messed with the configuration files after first boot 🙂 I think the Console and Installer UI could be made more robust, possibly by looking at the actual network interfaces or by accounting for comments in the configs.
Regarding the settings for "cluster service/role (service interface/subnet/vlan/mtu/bonds)": the name of the interface will change when switching from a VMware-provided untagged interface (ethX) to a physical machine with a tagged VLAN interface (ethX.0001). Similarly, a hardware refresh may change the topology if we move from 4x10GbE to 2x100GbE in the coming years.
As a preventive workaround, I'm trying to rename all of my interfaces to match their roles. Using systemd-networkd provides enough flexibility to give a VLAN, IPVLAN, or Dummy interface an actual name. The console UI only allows setting interface names to eth# instead of a functional name such as iscsi1. To be clear, the interface names are manually defined and static.
My initial virtualized nodes have the following change (edited for brevity) in /etc/udev/rules.d/70-persistent-net.rules before running the cluster setup wizard:
SUBSYSTEM=="net", ATTR{address}=="...", ATTR{type}=="1", KERNEL=="eth*", NAME="iscsi2"
SUBSYSTEM=="net", ATTR{address}=="...", ATTR{type}=="1", KERNEL=="eth*", NAME="iscsi1"
SUBSYSTEM=="net", ATTR{address}=="...", ATTR{type}=="1", KERNEL=="eth*", NAME="smb0"
SUBSYSTEM=="net", ATTR{address}=="...", ATTR{type}=="1", KERNEL=="eth*", NAME="ceph0"
SUBSYSTEM=="net", ATTR{address}=="...", ATTR{type}=="1", KERNEL=="eth*", NAME="mgmt0"
This causes problems for the console menu, as it requires interfaces named "eth#".
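For anyone repeating this, the rule lines are mechanical enough to generate from a role-to-MAC mapping (the MACs below are placeholders; substitute each node's real ones):

```python
# Sketch: build 70-persistent-net.rules lines from a role -> MAC mapping.
# The MAC addresses are placeholders, not real hardware.
RULE = ('SUBSYSTEM=="net", ATTR{{address}}=="{mac}", ATTR{{type}}=="1", '
        'KERNEL=="eth*", NAME="{name}"')

def make_rules(role_macs):
    return "\n".join(RULE.format(mac=mac, name=name)
                     for name, mac in role_macs.items())

print(make_rules({"mgmt0": "00:0c:29:00:00:01",
                  "ceph0": "00:0c:29:00:00:02"}))
```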
A future physical host would delete some or all of 70-persistent-net.rules, leaving configuration to systemd-networkd:
# /etc/systemd/network/10-iscsi1.netdev
# Define VLAN interface named "iscsi1"
# NOTE: Addressing is managed by PetaSAN
[NetDev]
Name=iscsi1
Kind=vlan
[VLAN]
Id=123
# /etc/systemd/network/10-storage.network
# Assign all of the service VLANs to the storage interface
[Match]
Name=storage
[Network]
VLAN=iscsi1
VLAN=iscsi2
VLAN=smb0
# /etc/systemd/network/10-storage.link
# Rename the underlying interface to "storage" for clarity
[Match]
MACAddress=00:0c:29:bd:3b:4d
Property=!DEVTYPE=vlan
[Link]
Name=storage
# /etc/systemd/system/systemd-udevd.service.d/override.conf
# Address the udev race condition described in systemd #9682, fixed in systemd 239
# Taken from https://github.com/systemd/systemd/issues/9682#issuecomment-407085955
[Service]
ExecStartPost=/bin/sleep 2
Enable systemd-networkd, and reboot:
systemctl enable --now systemd-networkd
In this configuration, I can specify the "iscsi1" interface without worrying about the underlying network configuration. This makes initial setup a bit painful, but I consider it to be an acceptable escape hatch for cases that the UI can't handle.