Viewing Physical disk in UI not working
ghbiz
76 Posts
November 22, 2019, 12:41 pm
Hello,
Building a new cluster and having trouble viewing disks in the UI. After installing the PetaSAN OS, I installed StorCLI and created one RAID 0 VD for each of the drives. The drives show up in the OS when running lsblk; however, when trying to view the drives in the UI, the page just keeps spinning. There is a Python error in the PetaSAN.log file, as follows:
22/11/2019 07:39:15 ERROR 'in <string>' requires string as left operand, not NoneType
Traceback (most recent call last):
File "/opt/petasan/scripts/admin/node_manage_disks.py", line 121, in main_catch
func(args)
File "/opt/petasan/scripts/admin/node_manage_disks.py", line 134, in node_disk_list_json
print (json.dumps([o.get_dict() for o in ceph_disk_lib.get_full_disk_list(args.pid)]))
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk_lib.py", line 288, in get_full_disk_list
ceph_disk_list = get_disk_list()
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk_lib.py", line 196, in get_disk_list
ceph_disk_list = get_ceph_disk_list()
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk_lib.py", line 104, in get_ceph_disk_list
for device in ceph_disk.list_devices():
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk.py", line 720, in list_devices
space_map))
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk.py", line 516, in list_dev
if ptype in (PTYPE['regular']['osd']['ready']):
TypeError: 'in <string>' requires string as left operand, not NoneType
22/11/2019 07:39:15 ERROR Error while run command.
root@ceph-public3:/opt/petasan/log# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 446.6G 0 disk
└─sda1 8:1 0 446.6G 0 part
sdb 8:16 0 446.6G 0 disk
└─sdb1 8:17 0 446.6G 0 part
sdc 8:32 0 2.7T 0 disk
sdd 8:48 0 3.7T 0 disk
sde 8:64 0 3.7T 0 disk
sdf 8:80 0 3.7T 0 disk
sdg 8:96 0 317.9G 0 disk
├─sdg1 8:97 0 1M 0 part
├─sdg2 8:98 0 128M 0 part /boot/efi
├─sdg3 8:99 0 15G 0 part /
├─sdg4 8:100 0 30G 0 part /var/lib/ceph
└─sdg5 8:101 0 272.8G 0 part /opt/petasan/config
nvme0n1 259:0 0 745.2G 0 disk
└─nvme0n1p1 259:1 0 20G 0 part
VD LIST :
=======
---------------------------------------------------------------
DG/VD TYPE State Access Consist Cache Cac sCC Size Name
---------------------------------------------------------------
0/0 RAID0 Optl RW Yes RWBD - ON 446.625 GB
1/1 RAID0 Optl RW Yes RWBD - ON 446.625 GB
2/2 RAID0 Optl RW Yes RWBD - ON 2.728 TB
3/3 RAID0 Optl RW Yes RWBD - ON 3.637 TB
4/4 RAID0 Optl RW Yes RWBD - ON 3.637 TB
5/5 RAID0 Optl RW Yes RWBD - ON 3.637 TB
---------------------------------------------------------------
Physical Drives = 6
PD LIST :
=======
----------------------------------------------------------------------------------
EID:Slt DID State DG Size Intf Med SED PI SeSz Model Sp Type
----------------------------------------------------------------------------------
8:0 17 Onln 0 446.625 GB SATA SSD N N 512B SanDisk SDSSDXPS480G U -
8:1 19 Onln 1 446.625 GB SATA SSD N N 512B SanDisk SDSSDXPS480G U -
8:3 23 Onln 2 2.728 TB SATA HDD N N 512B WDC WD30EFRX-68EUZN0 U -
8:7 20 Onln 3 3.637 TB SATA HDD N N 512B HGST HUS726T4TALA6L4 U -
8:10 21 Onln 4 3.637 TB SATA HDD N N 512B HGST HUS726T4TALA6L4 U -
8:11 22 Onln 5 3.637 TB SATA HDD N N 512B HGST HUS726T4TALA6L4 U -
----------------------------------------------------------------------------------
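For reference, the single-drive RAID 0 VDs listed above are typically created with StorCLI commands along these lines (a sketch only; controller /c0 and enclosure 8 are assumed from the output above, so adjust the EID:Slt values to your own PD list and check the syntax of your StorCLI build):
storcli64 /c0 /eall /sall show                 # list physical drives (EID:Slt column)
storcli64 /c0 add vd type=raid0 drives=8:0     # one single-drive RAID 0 VD per disk
storcli64 /c0 add vd type=raid0 drives=8:1
storcli64 /c0 /vall show                       # verify the virtual drives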
Last edited on November 22, 2019, 12:43 pm by ghbiz · #1
admin
2,918 Posts
November 22, 2019, 1:40 pm
Can you try the patch at the end of
http://www.petasan.org/forums/?view=thread&id=515
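For readers hitting the same TypeError: the failing check compares a GPT partition-type GUID (ptype) against the known Ceph OSD type strings, and ptype comes back as None for a partition whose type GUID is not recognized, for example leftover foreign metadata. A minimal guard, sketched here with assumed names and an illustrative GUID, and not necessarily identical to the linked patch, would be:
# Illustrative table; the real PTYPE mapping lives in ceph_disk.py.
PTYPE = {'regular': {'osd': {'ready': '4fbd7e29-9d25-41b8-afd0-062c0ceff05d'}}}

def is_osd_ready(ptype):
    # Check for None before the 'in' membership test that raised the TypeError.
    return ptype is not None and ptype in PTYPE['regular']['osd']['ready']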
Last edited on November 22, 2019, 10:24 pm by admin · #2
ghbiz
76 Posts
November 24, 2019, 3:29 pm
We ran dd /dev/zero.... on the disks we suspected of being the issue, as they had a foreign partition on them. Once we wrote some zeros to the partition table, things worked like a charm.
Based on the other thread mentioned above, the patch should have worked here as well, and I am sure it will be a good addition to PetaSAN in a future release.
Thank you for your help in the matter.
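For anyone following along, a typical way to clear a stale or foreign partition table before handing a disk to PetaSAN looks something like the following (a sketch; /dev/sdX is a placeholder, and these commands are destructive, so triple-check the device name first):
wipefs -a /dev/sdX                            # remove known filesystem/RAID signatures
dd if=/dev/zero of=/dev/sdX bs=1M count=100   # or zero the start of the disk, partition table included
sgdisk --zap-all /dev/sdX                     # alternatively, wipe both the primary and backup GPT
partprobe /dev/sdX                            # ask the kernel to re-read the partition table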
alienn
37 Posts
November 27, 2019, 6:05 pm
Hi,
Today I encountered the same problem, though I think this case is a little different.
What did I do: I added a Samsung PCIe NVMe card to my nodes. Before adding the cards, all disks showed up as usual. As soon as the cards were inserted, I got the spinning icon and was not able to view any disks on any node with a PCIe NVMe card.
The log files show this:
27/11/2019 18:48:19 ERROR Error while run command.
27/11/2019 18:48:42 ERROR
Traceback (most recent call last):
File "/opt/petasan/scripts/admin/node_manage_disks.py", line 121, in main_catch
func(args)
File "/opt/petasan/scripts/admin/node_manage_disks.py", line 134, in node_disk_list_json
print (json.dumps([o.get_dict() for o in ceph_disk_lib.get_full_disk_list(args.pid)]))
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk_lib.py", line 288, in get_full_disk_list
ceph_disk_list = get_disk_list()
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk_lib.py", line 196, in get_disk_list
ceph_disk_list = get_ceph_disk_list()
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk_lib.py", line 104, in get_ceph_disk_list
for device in ceph_disk.list_devices():
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk.py", line 675, in list_devices
partmap = list_all_partitions()
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk.py", line 241, in list_all_partitions
dev_part_list[name] = list_partitions(get_dev_path(name))
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk.py", line 247, in list_partitions
return list_partitions_device(dev)
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk.py", line 263, in list_partitions_device
for name in os.listdir(block_path(dev)):
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_disk.py", line 252, in block_path
rdev = os.stat(path).st_rdev
OSError: [Errno 2] No such file or directory: '/dev/nvme1c1n1'
27/11/2019 18:48:42 ERROR Error while run command.
I already applied the patch from the other thread, but this did not change the behavior. I think it is worth noting that in lsblk I have a device /dev/nvme1n1, while the error message tries to get info about /dev/nvme1c1n1.
It seems that some NVMe SSDs create a separate namespace for multipathing, and this could be the source of the problem.
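One way to confirm this is to compare what the kernel exposes in sysfs versus /dev: with native NVMe multipath enabled, the per-controller path (nvme1c1n1) shows up under /sys/block but gets no device node, while I/O goes through the head device (nvme1n1). Assuming your kernel exposes the parameter, checks along these lines should show it:
cat /sys/module/nvme_core/parameters/multipath   # Y means native NVMe multipath is enabled
ls /sys/block | grep nvme                        # lists nvme1n1 and the hidden nvme1c1n1
ls -l /dev/nvme1*                                # only the head device (and its partitions) have /dev nodes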
alienn
37 Posts
November 27, 2019, 6:23 pm
Some screenshots:
alienn
37 Posts
November 29, 2019, 8:36 am
Hi, did you have time to look into this? I'd really like to add the new SSDs to PetaSAN. I could do it via the CLI, but then I could not provide feedback on a patch, and I would really like to help make PetaSAN better...
Cheers
admin
2,918 Posts
November 29, 2019, 9:15 am
Can you try the following patch on one node and see if it fixes it:
https://drive.google.com/open?id=18PEaa1UdAUpa6xJ50zdDCT6TY5w3CLhr
patch -p1 -d / < skip_unmapped_devices.patch
It is not something we can reproduce ourselves, so this is how we can test whether it fixes it.
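Going by the patch file name, the fix presumably skips block entries in sysfs that have no matching device node; a rough standalone sketch of that idea (assumed names, not the actual patch) would be:
import os
import stat

def list_block_devices():
    # Skip entries such as nvme1c1n1 that appear in /sys/block
    # but have no /dev node (e.g. hidden NVMe multipath paths).
    devices = []
    for name in os.listdir('/sys/block'):
        path = os.path.join('/dev', name)
        if not os.path.exists(path):
            continue
        if stat.S_ISBLK(os.stat(path).st_mode):
            devices.append(path)
    return devices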
alienn
37 Posts
November 29, 2019, 10:49 am
Thanks for the fast reply.
After patching node ceph01, I can now view the disks again and the SSD shows up.
Do you know why there would be a device /sys/block/nvme1c1n1? In this document they talk about multipathing, but I think my SSD does not support this, as there is only one sysfs entry (and not two). It should be safe to just use the head namespace (/dev/nvme1n1).
Can I provide any further information before going into production with these SSDs?
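If the hidden per-controller entry keeps getting in the way, one possible workaround (my assumption, not an official PetaSAN recommendation) is to disable native NVMe multipath on the kernel command line, which removes the nvmeXcYnZ entries after a reboot:
# e.g. append nvme_core.multipath=N to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
update-grub
reboot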