Fresh install error
kpiti
23 Posts
January 30, 2023, 6:40 pm
Hi, I'm trying to do a fresh PetaSAN install and at step 6 I get: Alert! Error saving node network settings.
My setup is a 2x 100G bonded interface with 2 VLANs for data & sync (backend); in the setup I just configured Mgmt (on eth2) and Backend (on bond0 - eth0,eth1) with a VLAN tag.
I didn't set up any of the access services in the initial setup. I've checked /opt/petasan/log/PetaSAN.log and it seems ok:
30/01/2023 18:47:24 ERROR Config file error. The PetaSAN os maybe just installed.
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/PetaSAN/backend/cluster/deploy.py", line 75, in get_node_status
node_name = config.get_node_info().name
File "/usr/lib/python3/dist-packages/PetaSAN/core/cluster/configuration.py", line 99, in get_node_info
with open(config.get_node_info_file_path(), 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/opt/petasan/config/node_info.json'
30/01/2023 18:47:31 INFO Created keys for cluster CEPH-FMFOF
30/01/2023 18:47:31 INFO Created cluster file and set cluster name to CEPH-FMFOF
30/01/2023 18:47:31 INFO password set successfully.
30/01/2023 18:47:46 INFO Updated cluster interface successfully.
30/01/2023 18:48:18 INFO 1353
30/01/2023 18:48:18 INFO Updated cluster network successfully.
30/01/2023 18:48:25 INFO Updated cluster tuning successfully.
30/01/2023 18:48:25 INFO Current tuning configurations saved.
(The 1353 entry is the VLAN ID)
The interfaces on the console are still eth0-3 and the bond0 interface hasn't been configured. I tried rebooting and it's still the same. There is no /opt/petasan/config/node_info.json, as stated in the error message.
I would skip the bond part, but the guide states it's rather difficult to change the backend afterwards. Any ideas how to go from here?
Thanks, Jure
(release 3.1.0)
admin
2,930 Posts
January 31, 2023, 9:37 am
I understand this happens on the very first node?
Can you double-check whether you gave the node being deployed a service Role, and whether the interface you defined for that Role exists on the current node? I would suggest you re-install the node and start fresh. You could also omit defining service interfaces at deploy time; you can add or change them later, after the cluster is built.
kpiti
23 Posts
February 1, 2023, 7:53 pm
Hi,
yes, this was/is the first node.
I've just reinstalled the node and got stuck on the same spot.
I don't think I get to define a role by this point; the steps are:
- s1: Create a new cluster
- s2: Cluster name+pwd
- s3: Jumbo frames (yes) & bonding (yes)
- s4: Core net (mgmt+backend [VLAN]) ; Services (skip)
- s5: Default pools (cephfs,15-50 hdd, rep3)
- s6: Node BE IP -> Crash....
I've skipped the service and just defined the management & backend interfaces.
I did actually create the bond interface and the VLAN in the console by hand, just to see if there is some HW issue with that, but it works fine.
I also tried without jumbo frames and without the VLAN tag, all the same. But these are blind permutations and don't go in the right direction.
Do you have any idea how to debug this issue?
Thanks, Jure
admin
2,930 Posts
February 1, 2023, 11:33 pm
Do you see errors in syslog and/or dmesg?
Can you try without bonding/VLAN/jumbo frames? Do not create the bond by hand. Can you try the same configuration on different hardware?
Can you check obvious things like IP conflicts and overlapping subnets?
If nothing works, we can send you modified sources with more logging, but since the failure is at such an early stage, it is easier to test different combinations to narrow things down.
Is this a virtual setup? Any special hardware?
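A minimal sketch of the subnet-overlap check suggested above, using Python's standard ipaddress module (the two /24 values are only illustrative, taken from later in the thread):

import ipaddress

mgmt = ipaddress.ip_network("10.12.13.0/24")
backend = ipaddress.ip_network("10.12.201.0/24")

# overlaps() returns True if the two networks share any addresses
print(mgmt.overlaps(backend))   # False -> the subnets do not overlap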
Last edited on February 1, 2023, 11:35 pm by admin · #4
kpiti
23 Posts
February 7, 2023, 12:15 am
Hi,
I tried with basically just next-next-next, entering only the bare minimum like networks/IPs and number of disks, and got stuck at the same spot. No bond, no jumbo, no VLANs. The networks are not overlapping, all /24 (10.12.13/24 mgmt, 10.12.201/24 backend). Unfortunately all the boxes are the same HW; I did try on a different node (of the same ones) and it was the same. The HW is rather standard - Supermicro, AMD Epyc 16c, 4x16G RAM, 6x 8TB HDD, 2x 1TB NVMe, 2x 500G SSD (system), 2x 100G Mellanox NIC (1x dual-port), 1x 10G NIC mgmt.
The logs don't show anything special:
/var/log/syslog:
Feb 2 11:36:13 CEPH1 systemd[1]: Starting PetaSAN Node Console...
Feb 2 11:36:13 CEPH1 deploy.py[1394]: * Serving Flask app "deploy" (lazy loading)
Feb 2 11:36:13 CEPH1 deploy.py[1394]: * Environment: production
Feb 2 11:36:13 CEPH1 deploy.py[1394]: WARNING: This is a development server. Do not use it in a production deployment.
Feb 2 11:36:13 CEPH1 deploy.py[1394]: Use a production WSGI server instead.
Feb 2 11:36:13 CEPH1 deploy.py[1394]: * Debug mode: off
Feb 2 11:36:13 CEPH1 start_petasan_services.py[1516]: /bin/sh: 1: /opt/petasan/scripts/custom/pre_start_network.sh: not found
Feb 2 11:36:13 CEPH1 start_petasan_services.py[1513]: Traceback (most recent call last):
Feb 2 11:36:13 CEPH1 start_petasan_services.py[1513]: File "/opt/petasan/scripts/node_start_ips.py", line 32, in <module>
Feb 2 11:36:13 CEPH1 start_petasan_services.py[1513]: node = config.get_node_info()
Feb 2 11:36:13 CEPH1 start_petasan_services.py[1513]: File "/usr/lib/python3/dist-packages/PetaSAN/core/cluster/configuration.py", line 99, in get_node_info
Feb 2 11:36:13 CEPH1 start_petasan_services.py[1513]: with open(config.get_node_info_file_path(), 'r') as f:
Feb 2 11:36:13 CEPH1 start_petasan_services.py[1513]: FileNotFoundError: [Errno 2] No such file or directory: '/opt/petasan/config/node_info.json'
Feb 2 11:36:14 CEPH1 kernel: [ 8.028637] scsi 17:0:0:0: Direct-Access General UDisk 5.00 PQ: 0 ANSI: 2
Feb 2 11:36:14 CEPH1 kernel: [ 8.028860] sd 17:0:0:0: Attached scsi generic sg8 type 0
Feb 2 11:36:14 CEPH1 kernel: [ 8.029083] sd 17:0:0:0: [sdi] 30720000 512-byte logical blocks: (15.7 GB/14.6 GiB)
Feb 2 11:36:14 CEPH1 kernel: [ 8.029238] sd 17:0:0:0: [sdi] Write Protect is off
Feb 2 11:36:14 CEPH1 kernel: [ 8.029242] sd 17:0:0:0: [sdi] Mode Sense: 0b 00 00 08
Feb 2 11:36:14 CEPH1 kernel: [ 8.029391] sd 17:0:0:0: [sdi] No Caching mode page found
Feb 2 11:36:14 CEPH1 kernel: [ 8.029423] sd 17:0:0:0: [sdi] Assuming drive cache: write through
Feb 2 11:36:14 CEPH1 systemd[1]: Finished PetaSAN Start Services.
Feb 2 11:36:14 CEPH1 kernel: [ 8.080803] sdi: sdi1 sdi2
Feb 2 11:36:14 CEPH1 kernel: [ 8.082070] sd 17:0:0:0: [sdi] Attached SCSI removable disk
Feb 2 11:36:14 CEPH1 multipath: sdi: can't store path info
Feb 2 11:36:14 CEPH1 multipathd[1093]: uevent trigger error
Feb 2 11:36:17 CEPH1 kernel: [ 11.330283] ixgbe 0000:c1:00.0 eth2: NIC Link is Up 1 Gbps, Flow Control: None
Feb 2 11:36:17 CEPH1 kernel: [ 11.330577] IPv6: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
Feb 2 11:36:18 CEPH1 set-cpufreq[1178]: Setting ondemand scheduler for all CPUs
Feb 2 11:36:18 CEPH1 systemd[1]: ondemand.service: Succeeded.
Feb 2 11:36:18 CEPH1 systemd[1]: dmesg.service: Succeeded.
Feb 2 11:36:23 CEPH1 systemd[1]: Started PetaSAN Node Console.
Feb 2 11:36:23 CEPH1 systemd[1]: Reached target Graphical Interface.
Feb 2 11:36:23 CEPH1 systemd[1]: Starting Update UTMP about System Runlevel Changes...
Feb 2 11:36:23 CEPH1 systemd[1]: systemd-update-utmp-runlevel.service: Succeeded.
Feb 2 11:36:23 CEPH1 systemd[1]: Finished Update UTMP about System Runlevel Changes.
Feb 2 11:36:23 CEPH1 systemd[1]: Startup finished in 4.153s (kernel) + 13.611s (userspace) = 17.765s.
Feb 2 11:37:48 CEPH1 deploy.py[1591]: Generating public/private rsa key pair.
Feb 2 11:37:48 CEPH1 deploy.py[1591]: Your identification has been saved in /root/.ssh/id_rsa
Feb 2 11:37:48 CEPH1 deploy.py[1591]: Your public key has been saved in /root/.ssh/id_rsa.pub ..[ssh Art]..
Feb 2 11:37:48 CEPH1 kernel: [ 102.289537] Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
and in /opt/petasan/log/PetaSAN.log it's just:
03/02/2023 13:37:47 ERROR Config file error. The PetaSAN os maybe just installed.
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/PetaSAN/backend/cluster/deploy.py", line 75, in get_node_status
node_name = config.get_node_info().name
File "/usr/lib/python3/dist-packages/PetaSAN/core/cluster/configuration.py", line 99, in get_node_info
with open(config.get_node_info_file_path(), 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: '/opt/petasan/config/node_info.json'
03/02/2023 13:37:54 INFO Created keys for cluster CEPH-FMFOF
03/02/2023 13:37:54 INFO Created cluster file and set cluster name to CEPH-FMFOF
03/02/2023 13:37:54 INFO password set successfully.
03/02/2023 13:37:56 INFO Updated cluster interface successfully.
03/02/2023 13:38:15 INFO Updated cluster network successfully.
03/02/2023 13:38:29 INFO Updated cluster tuning successfully.
03/02/2023 13:38:29 INFO Current tuning configurations saved.
I had only created the bond by hand to verify that everything works as it should with the MLAGs and cabling; I removed it afterwards.
I couldn't find any serious errors in dmesg either; I can post it if needed.
admin
2,930 Posts
February 8, 2023, 7:13 pm
It is strange, I cannot find any smoking gun in the logs.
Can you please try the following:
Do a fresh install from the ISO. After boot, from the blue screen menu go to Options -> Bash shell, then set a root password with
passwd
Access the node (for example via WinSCP).
Edit the file /usr/lib/python3/dist-packages/PetaSAN/web/deploy_controller/wizard.py
Starting at line 645, add the 2 logger lines so the except block reads:
except Exception as e:
    logger.error(e)
    logger.exception(e)
    session['err'] = "ui_deploy_save_node_network_setting_error"
(make sure you use the correct indentation, using spaces not tabs)
Save, then restart the deploy service:
systemctl restart petasan-deploy
Deploy the node; do not use bonds, VLANs, or jumbo frames.
If you get the failure at step 6, we should get more info in the PetaSAN.log file.
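For context on why the extra line matters, a minimal sketch with plain Python logging (not PetaSAN code): logger.error(e) records only the exception message, while logger.exception(e), called inside an except block, also records the full traceback, which is what ultimately points at the failing line.

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("demo")

try:
    raise ValueError("something went wrong")
except Exception as e:
    logger.error(e)        # logs only: something went wrong
    logger.exception(e)    # logs the message plus the full stack trace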
Last edited on February 8, 2023, 7:19 pm by admin · #6
kpiti
23 Posts
February 9, 2023, 7:15 pm
OK, mystery solved:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/PetaSAN/web/deploy_controller/wizard.py", line 621, in save_node_network_setting
if is_ip_in_subnet(new_node_info.backend_1_ip, cluster_info.backend_1_base_ip, cluster_info.backend_1_mask):
File "/usr/lib/python3/dist-packages/PetaSAN/core/network/ip_utils.py", line 50, in is_ip_in_subnet
net = IPv4Network(subnet_base_ip + '/' + mask, False)
File "/usr/lib/python3.8/ipaddress.py", line 1454, in __init__
self.network_address = IPv4Address(addr)
File "/usr/lib/python3.8/ipaddress.py", line 1257, in __init__
self._ip = self._ip_int_from_string(addr_str)
File "/usr/lib/python3.8/ipaddress.py", line 1149, in _ip_int_from_string
raise AddressValueError("%s in %r" % (exc, ip_str)) from None
ipaddress.AddressValueError: Leading zeros are not permitted in '010' in '010.012.201.0'
The reason I was adding the 0 padding is that the GUI wouldn't let me proceed if I just typed 10 - it does if you type . (dot), but not Tab or clicking to the next field in the backend interface section. It's probably something Windows users are used to, since you type . as the delimiter/next-field key in network setup there, but it's not so common in the Linux world. If it were a plain textbox it wouldn't have happened. And to be honest, 010.012.201.0 is not really an invalid IPv4 address.
Be that as it may, mea culpa; perhaps you could add some s/^0+// normalization so nobody else gets stuck here (or at least surface the error message somewhere).
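A rough sketch of that normalization idea (normalize_octets is a hypothetical helper, not PetaSAN code; the Python 3.8 build used here rejects zero-padded octets, older interpreters may accept them):

import ipaddress

def normalize_octets(addr: str) -> str:
    # "010.012.201.0" -> "10.12.201.0": int() drops the leading zeros
    return ".".join(str(int(octet)) for octet in addr.split("."))

try:
    ipaddress.IPv4Network("010.012.201.0/24", strict=False)
except ValueError as err:
    print(err)  # Leading zeros are not permitted in '010' ...

print(ipaddress.IPv4Network(normalize_octets("010.012.201.0") + "/24", strict=False))
# -> 10.12.201.0/24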
Thanks a lot for your help! Got to the end now with VLANs, jumbo, bond and all!
Cheers, Jure
admin
2,930 Posts
February 9, 2023, 8:58 pm
Excellent, it worked. I am happy I was able to help 🙂
In the IP input control you can use the space bar or, as you mentioned (I did not know that), the dot, but Tab throws you to the next control.