
iSCSI multi-client access to disk


I found that the disk was not mounted. I mounted the disk to the OSD ID's mount point, then ran systemctl start ceph-osd@37, which brought the OSD up.
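
For anyone hitting the same thing, the manual fix looks roughly like this (assuming the OSD's data partition shows up as /dev/sdn1 and the cluster name is BD-Ceph-Cluster1; substitute your own device, cluster name, and OSD id):

mount /dev/sdn1 /var/lib/ceph/osd/BD-Ceph-Cluster1-37
systemctl start ceph-osd@37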

Any plans to build into the WebUI an easier way to go about this?

It should be mounted automatically. When an OSD is added, udev fires events for the newly detected partitions; these are handled by /lib/udev/rules.d/95-ceph-osd.rules, which mounts and starts the OSD.
Obviously that did not work in your case; probably no udev events were triggered. It may be related to the manual disk-pull test. I recall you had earlier issues with scsi_remove_device errors:
http://www.petasan.org/forums/?view=thread&id=273
which were present in kernels prior to 4.4.120. Can you confirm there are no similar dmesg logs?
If you add a new OSD without the manual removal step, does it work normally?
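
If you want to test whether the rule fires at all, you can replay the add event by hand; as a sketch (assuming the new data partition shows up as sdn1):

udevadm trigger --action=add --sysname-match=sdn1

and udevadm test /sys/class/block/sdn1 will show which rules (including 95-ceph-osd.rules) would run for that device.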

Can you please email me (contact-us @ petasan.org) the following 2 logs:

/opt/petasan/log/PetaSAN.log
/opt/petasan/log/ceph-disk.log

Thanks for your help, much appreciated.

I just sent you those logs. I don't see any of the same SCSI errors anymore, but there are some panic tracebacks from the OSD I removed and tried to remount. Not sure if that would be related; I'm sure you'll be able to tell more than I can.

The only suspicious entry I see in dmesg while trying to add the new disk (before mounting it manually) is a lot of this, repeated:

[181015.842211] XFS (sdn1): Mounting V5 Filesystem
[181015.881107] XFS (sdn1): Ending clean mount
[181015.900014] XFS (sdn1): Unmounting Filesystem
[181035.248091] XFS (sdn1): Mounting V5 Filesystem
[181035.340651] XFS (sdn1): Ending clean mount
[181035.343135] XFS (sdn1): Unmounting Filesystem
[181035.883511] XFS (sdn1): Mounting V5 Filesystem
[181035.990743] XFS (sdn1): Ending clean mount
[181036.009896] XFS (sdn1): Unmounting Filesystem
[181045.943034] XFS (sdn1): Mounting V5 Filesystem
[181046.041456] XFS (sdn1): Ending clean mount
[181046.044636] XFS (sdn1): Unmounting Filesystem
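
For the next test I can watch these live; something like the following should work, assuming a util-linux dmesg with follow mode:

dmesg -w | grep -i xfs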

Unfortunately, I don't have any extra matching disks at the moment to try adding without pulling others. I hope to have a few extra SAS drives in the next week or so to test that. I can mix drive capacities with PetaSAN, right? What about mixing SAS and SATA? I am trying to keep this cluster all SAS, but I do have a few smaller spare SATA disks I could test with.

Thanks!

Yes, you can mix SAS/SATA and disks of various sizes. The system will correctly assign more weight to larger disks and correspondingly store more data on them. However, for best performance it is better to use the same hardware as much as possible: a high-capacity disk will receive more traffic and hence will slow down a system composed of smaller disks.
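
As a rough illustration with plain Ceph commands (nothing PetaSAN-specific, and the exact values depend on your disks): by default the CRUSH weight tracks capacity in TiB, so a 4 TB OSD gets about twice the weight, and therefore about twice the data, of a 2 TB one. You can see the assigned weights with:

ceph osd tree --cluster BD-Ceph-Cluster1

and, if ever needed, override one:

ceph osd crush reweight osd.37 1.0 --cluster BD-Ceph-Cluster1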

The ceph-disk logs contain errors like:
ceph_disk.main.Error: Error: another BD-Ceph-Cluster1 osd.37 already mounted in position (old/different cluster instance?); unmounting ours.
It seems it aborts mounting the disk because it believes the mount point for the OSD is already mounted from a different device. It checks this using the stat function:
stat /var/lib/ceph/osd/BD-Ceph-Cluster1-37
which reports it mounted from another device.
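
You can run the same kind of check by hand; as a sketch (findmnt is standard util-linux, and a mount point reports a different device number than its parent directory, which is essentially what the stat-based check keys on):

findmnt /var/lib/ceph/osd/BD-Ceph-Cluster1-37
stat -c '%D' /var/lib/ceph/osd/BD-Ceph-Cluster1-37 /var/lib/ceph/osd

If the two device numbers differ, something is mounted there.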

I am not sure if this is related to the manual pulling of the drives, but I believe this is the case. If you try to add new replacement drives, things should work; also, if the pulled drive gets cleaned/reformatted before being re-inserted, I expect it will work too.
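
One way to clean a pulled drive before re-inserting it, assuming it shows up as /dev/sdX (destructive, so double-check the device name first):

ceph-disk zap /dev/sdX

or, more generally, wipefs -a /dev/sdX to clear the filesystem and partition signatures.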

I understand that after you manually remove the drive, you re-insert it, delete it from the UI, and then try re-adding it. Can you please verify that after deletion and before addition (a cleanup sketch follows this list in case any check fails):

there is no mount for that OSD:
mount | grep /var/lib/ceph/osd
there is no crush entry in Ceph:
ceph osd tree --cluster XX
the OSD status is down:
systemctl status ceph-osd@XX
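
If any of those checks fail, the usual manual cleanup is roughly the following (plain Ceph commands, with XX as the osd id; adjust the cluster name as above):

umount /var/lib/ceph/osd/BD-Ceph-Cluster1-XX
ceph osd crush remove osd.XX --cluster BD-Ceph-Cluster1
ceph auth del osd.XX --cluster BD-Ceph-Cluster1
ceph osd rm XX --cluster BD-Ceph-Cluster1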
