Missing Mountpoint after OSD exchange

Hello,

While checking one node a few days after the replacement of a failed drive, I noticed that the new drive has no mount point.

RAID controller information:

Array G

Logical Drive: 7
Size: 1.82 TB
Fault Tolerance: 0
Heads: 255
Sectors Per Track: 32
Cylinders: 65535
Strip Size: 256 KB
Full Stripe Size: 256 KB
Status: OK
Caching: Enabled
Unique Identifier: 600508B1001CFE96ABA5D03B6308015B
Disk Name: /dev/sdg
Mount Points: None
Logical Drive Label: 1AFCF89A50014380210E83E0 1FD4
Drive Type: Data
LD Acceleration Method: Controller Cache

For all other drives, the RAID controller shows something like this under Mount Points:
/var/lib/ceph/osd/ceph-0 100 MB Partition Number 1

If I view the mount points in /proc/mounts, you can see that the new drive is mounted as tmpfs:

tmpfs /var/lib/ceph/osd/ceph-21 tmpfs rw,relatime 0 0
/dev/sdi1 /var/lib/ceph/osd/ceph-23 xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
/dev/sdk1 /var/lib/ceph/osd/ceph-31 xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
/dev/sdb1 /var/lib/ceph/osd/ceph-1 xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
/dev/sde1 /var/lib/ceph/osd/ceph-3 xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
/dev/sdd1 /var/lib/ceph/osd/ceph-0 xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
/dev/sdj1 /var/lib/ceph/osd/ceph-30 xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
/dev/sdc1 /var/lib/ceph/osd/ceph-2 xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
/dev/sdl1 /var/lib/ceph/osd/ceph-32 xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
/dev/sdf1 /var/lib/ceph/osd/ceph-14 xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
/dev/sdh1 /var/lib/ceph/osd/ceph-22 xfs rw,relatime,attr2,inode64,sunit=512,swidth=512,noquota 0 0
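
For anyone wanting to reproduce this check, something like the following should work; findmnt is standard util-linux, and OSD 21 is the id from the listing above:

grep /var/lib/ceph/osd /proc/mounts   # list all OSD mounts with their filesystem type
findmnt /var/lib/ceph/osd/ceph-21     # show the source device and fstype for the new OSD directory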

The service for the new OSD 21 is running.
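
For completeness, the running state can be confirmed with something like this (assuming systemd-managed OSDs, which is the usual setup):

systemctl status ceph-osd@21      # systemd unit for OSD 21 on this node
ceph osd tree | grep osd.21       # the OSD should show as "up" in the CRUSH tree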

What is the problem here? (A server reboot did not solve it.)
None of our other nodes show this behavior, but this was also our first OSD drive failure.

Thanks for your help

This is because, since 2.3.1, we use the ceph-volume tool rather than ceph-disk to create OSDs. ceph-volume mounts the OSD directory as a tmpfs that is populated when the OSD is activated, instead of the small XFS data partition that ceph-disk created, which is why the RAID controller no longer reports a mount point for that drive. This is OK.
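
To double-check that the new OSD was indeed created by ceph-volume (and that the tmpfs mount is therefore expected), you can run something like this on the node; OSD 21 is the id from your listing:

ceph-volume lvm list                      # lists OSDs created by ceph-volume and their backing LVM devices
ceph osd metadata 21 | grep objectstore   # shows which object store backend the new OSD uses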