GUI cannot display disks - 2.3.1
bill gottlieb
26 Posts
August 30, 2019, 4:01 pm
In the GUI, when I select Node -> Physical Disk List, I get the "wait" circle, but nothing after that.
Log shows:
30/08/2019 11:51:54 ERROR 'in <string>' requires string as left operand, not NoneType
Traceback (most recent call last):
File "/opt/petasan/scripts/admin/node_manage_disks.py", line 121, in main_catch
func(args)
File "/opt/petasan/scripts/admin/node_manage_disks.py", line 134, in node_disk_list_json
print (json.dumps([o.get_dict() for o in ceph_disk_lib.get_full_disk_list(args.pid)]))
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk_lib.py", line 288, in get_full_disk_list
ceph_disk_list = get_disk_list()
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk_lib.py", line 196, in get_disk_list
ceph_disk_list = get_ceph_disk_list()
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk_lib.py", line 104, in get_ceph_disk_list
for device in ceph_disk.list_devices():
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk.py", line 720, in list_devices
space_map))
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk.py", line 516, in list_dev
if ptype in (PTYPE['regular']['osd']['ready']):
TypeError: 'in <string>' requires string as left operand, not NoneType
30/08/2019 11:51:54 ERROR Error while run command.
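The failing check reduces to testing a possibly missing (None) partition type against a GUID string, and that is exactly what Python rejects. A minimal sketch of the failure mode (illustrative only, not PetaSAN code; the GUID shown is Ceph's well-known "OSD ready" partition type):

PTYPE_OSD_READY = "4fbd7e29-9d25-41b8-afd0-062c0ceff05d"  # Ceph OSD partition GUID

ptype = None  # a partition created outside ceph-disk may carry no type GUID
if ptype in PTYPE_OSD_READY:
    # never reached -- Python raises:
    # TypeError: 'in <string>' requires string as left operand, not NoneType
    print("ready OSD partition")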
Please advise.
At the same time, none of my graphs are displaying on the Dashboard Page and I get a "Request Error" tag on the display. I don't see anything in the PetaSAN.log about that though.
- Bill
admin
2,929 Posts
August 30, 2019, 6:19 pm
At first sight, it could be related to the /etc/hosts issue you had. From the host you are connecting to via the UI, make sure you are able to ping all other hosts by hostname.
The other thing to check is to run
ceph osd tree
and make sure all hostnames are listed correctly.
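If it helps, here is a quick convenience sketch (not part of PetaSAN; the host list below is just this cluster's) that checks name resolution and a single ping for every node in one go:

#!/usr/bin/env python
# Convenience sketch (not shipped with PetaSAN): confirm every cluster
# hostname resolves and answers one ping from this node.
import socket
import subprocess

HOSTS = ["petasan-0", "petasan-1", "petasan-2", "petasan-3", "petasan-6"]

for host in HOSTS:
    try:
        ip = socket.gethostbyname(host)  # fails if /etc/hosts lacks the entry
    except socket.gaierror:
        print("%s: does not resolve" % host)
        continue
    with open("/dev/null", "w") as devnull:
        ok = subprocess.call(["ping", "-c", "1", "-W", "1", host],
                             stdout=devnull) == 0
    print("%s -> %s: %s" % (host, ip, "reachable" if ok else "no reply"))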
bill gottlieb
26 Posts
September 3, 2019, 10:57 am
root@petasan-0:~# ping petasan-0
PING petasan-0 (10.55.55.50) 56(84) bytes of data.
64 bytes from petasan-0 (10.55.55.50): icmp_seq=1 ttl=64 time=0.012 ms
64 bytes from petasan-0 (10.55.55.50): icmp_seq=2 ttl=64 time=0.014 ms
^C
--- petasan-0 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1011ms
rtt min/avg/max/mdev = 0.012/0.013/0.014/0.001 ms
root@petasan-0:~# ping petasan-1
PING petasan-1 (10.55.55.51) 56(84) bytes of data.
64 bytes from petasan-1 (10.55.55.51): icmp_seq=1 ttl=64 time=0.066 ms
64 bytes from petasan-1 (10.55.55.51): icmp_seq=2 ttl=64 time=0.064 ms
^C
--- petasan-1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.064/0.065/0.066/0.001 ms
root@petasan-0:~# ping petasan-2
PING petasan-2 (10.55.55.52) 56(84) bytes of data.
64 bytes from petasan-2 (10.55.55.52): icmp_seq=1 ttl=64 time=0.122 ms
64 bytes from petasan-2 (10.55.55.52): icmp_seq=2 ttl=64 time=0.118 ms
^C
--- petasan-2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.118/0.120/0.122/0.002 ms
root@petasan-0:~# ping petasan-3
PING petasan-3 (10.55.55.53) 56(84) bytes of data.
64 bytes from petasan-3 (10.55.55.53): icmp_seq=1 ttl=64 time=0.084 ms
64 bytes from petasan-3 (10.55.55.53): icmp_seq=2 ttl=64 time=0.084 ms
^C
--- petasan-3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1010ms
rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms
root@petasan-0:~# ping petasan-6
PING petasan-6 (10.55.55.56) 56(84) bytes of data.
64 bytes from petasan-6 (10.55.55.56): icmp_seq=1 ttl=64 time=0.078 ms
64 bytes from petasan-6 (10.55.55.56): icmp_seq=2 ttl=64 time=0.100 ms
^C
--- petasan-6 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.078/0.089/0.100/0.011 ms
root@petasan-0:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 14.97543 root default
-5 2.99509 host petasan-0
2 hdd 1.63379 osd.2 up 1.00000 1.00000
3 hdd 1.36130 osd.3 up 1.00000 1.00000
-7 2.99509 host petasan-1
4 hdd 1.63379 osd.4 up 1.00000 1.00000
5 hdd 1.36130 osd.5 up 1.00000 1.00000
-3 2.99509 host petasan-2
0 hdd 1.63379 osd.0 up 1.00000 1.00000
1 hdd 1.36130 osd.1 up 1.00000 1.00000
-11 2.99509 host petasan-3
8 hdd 1.63379 osd.8 up 1.00000 1.00000
9 hdd 1.36130 osd.9 up 1.00000 1.00000
-9 2.99509 host petasan-6
6 hdd 1.63379 osd.6 up 1.00000 1.00000
7 hdd 1.36130 osd.7 up 1.00000 1.00000
root@petasan-0:~#
GUI only displays disks from petasan-0 and petasan-3 successfully.
- Bill
admin
2,929 Posts
September 3, 2019, 2:04 pm
If I understand correctly: when you click the Physical Disk List, some nodes work and on others you get a wait circle, yet the node you connect to via the browser can ping all nodes by hostname.
Can you send the /opt/petasan/log/PetaSAN.log files from both the node you connect to via the browser and the node you select that gives the wait circle, capturing the logs immediately after hitting the problem? Email both files to contact-us at petasan.org.
bill gottlieb
26 Posts
September 3, 2019, 3:16 pm
Email sent.
Cheers,
- Bill
admin
2,929 Posts
September 4, 2019, 3:56 pm
Can you try the following patch on the nodes with the issue:
https://drive.google.com/file/d/1xMmkrdYCwM9wdcNGnbUq4hZCXpH_JZuC/view?usp=sharing
Apply it via
patch -d / -p1 < handle_ptype_none.patch
We think some partitions may have a missing partition type, possibly created from another OS.
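Judging by the patch name, the fix amounts to tolerating a partition whose type GUID is missing instead of crashing on it; a rough sketch of that kind of guard (stand-in names, the actual diff may differ):

# Rough sketch of the guard, with stand-in names -- the real patch may differ.
PTYPE_OSD_READY = "4fbd7e29-9d25-41b8-afd0-062c0ceff05d"  # Ceph OSD partition GUID

def get_partition_type(dev):
    """Stand-in for the real lookup; returns None when no type GUID is set."""
    return None

dev = "rbd0p1"
ptype = get_partition_type(dev)
if ptype is None:
    # log and skip classification instead of raising TypeError
    print("INFO partition %s has no ptype." % dev)
elif ptype in PTYPE_OSD_READY:
    print("%s is a ready OSD partition" % dev)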
bill gottlieb
26 Posts
September 4, 2019, 4:28 pm
Installed the patch on petasan-6.
It now displays the disks as expected. I am also getting the following INFO alert:
04/09/2019 12:26:40 INFO partision rbd0p1 has no ptype.
admin
2,929 Posts
September 4, 2019, 5:26 pm
Excellent, this seems to fix it 🙂
It seems /dev/rbd0p1 was the issue. Is this a partition you mounted yourself? It would help to know.
bill gottlieb
26 Posts
September 4, 2019, 7:23 pm
Wasn't me... no idea what /dev/rbd0p1 would even be...
WNDJ24
1 Post
October 29, 2024, 8:31 pm
I am seeing very similar behavior. Coming from 2.3.1, I took the upgrade path to 3.3 and it is still broken when trying to view the physical disks of the nodes.
29/10/2024 16:28:50 ERROR Exception on /node/list/get_all_disk/nodename/0/off [GET]
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/flask/app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "/usr/lib/python3/dist-packages/flask/app.py", line 1951, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/lib/python3/dist-packages/flask/app.py", line 1820, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/lib/python3/dist-packages/flask/_compat.py", line 39, in reraise
raise value
File "/usr/lib/python3/dist-packages/flask/app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/lib/python3/dist-packages/flask/app.py", line 1935, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/lib/python3/dist-packages/PetaSAN/core/security/basic_auth.py", line 71, in decorated
return f(*args, **kwargs)
File "/usr/lib/python3/dist-packages/PetaSAN/core/security/basic_auth.py", line 91, in wrapped_function
return fn(*args, **kwargs)
File "/usr/lib/python3/dist-packages/PetaSAN/web/admin_controller/node.py", line 84, in get_all_disk
json_data = json.dumps([obj.__dict__ for obj in disk_ls])
TypeError: 'NoneType' object is not iterable
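From the last two frames, the endpoint assumes the disk list is never None. Whatever returns None upstream still needs fixing, but a defensive sketch of that line (names taken from the traceback; the helper is a stand-in) would at least return an empty list instead of a 500:

import json

def get_node_disk_list(node_name):
    """Stand-in for the PetaSAN call whose failure yields None here."""
    return None  # the observed bug: None instead of a list of disk objects

disk_ls = get_node_disk_list("nodename")
# Guard the comprehension so a None list degrades to "[]" rather than a TypeError:
json_data = json.dumps([obj.__dict__ for obj in (disk_ls or [])])
print(json_data)  # -> []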
Last edited on October 29, 2024, 8:32 pm by WNDJ24 · #10