two disks mapped as the same osd
alexurpi
8 Posts
February 18, 2021, 3:56 pm
Hi there, I'm noticing that on one storage node I have two different disks mapped to the same OSD (OSD 11), and they even use the same cache disk partition. Is this normal?
Cheers!
admin
2,930 Posts
February 18, 2021, 10:06 pm
What version of PetaSAN are you using?
Can you list the output of
ceph-volume lvm list
on the node with the issue?
alexurpi
8 Posts
February 18, 2021, 10:15 pm
Hi! I'm using 2.7.2; this node was a fresh install.
Here is the output:
====== osd.10 ======
[block] /dev/ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.10/main
block device /dev/ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.10/main
block uuid r9cNWz-g7rr-5ec5-96Z2-CjOP-xKt4-VdRNWN
cephx lockbox secret
cluster fsid 2e1fa5e6-1e20-4a48-8390-493b59378c72
cluster name ceph
crush device class None
encrypted 0
osd fsid 834077c7-5653-4232-8c1c-8c4f5fc51bbb
osd id 10
osdspec affinity
type block
vdo 0
====== osd.11 ======
[block] /dev/ceph-ccc02316-4684-4a5a-ac05-56250f33469f/osd-block-86f297d7-c9f6-4e78-b385-fa2beaebfefa
block device /dev/ceph-ccc02316-4684-4a5a-ac05-56250f33469f/osd-block-86f297d7-c9f6-4e78-b385-fa2beaebfefa
block uuid tM84WZ-GTOh-lQ3e-U4Kw-AKtI-EnCh-az1TZL
cephx lockbox secret
cluster fsid 2e1fa5e6-1e20-4a48-8390-493b59378c72
cluster name ceph
crush device class None
encrypted 0
osd fsid 86f297d7-c9f6-4e78-b385-fa2beaebfefa
osd id 11
type block
vdo 0
devices /dev/sdf1
[block] /dev/ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.11/main
block device /dev/ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.11/main
block uuid v96LuA-X2sH-o1Kb-9OSb-2ELi-IDmq-wFmeBm
cephx lockbox secret
cluster fsid 2e1fa5e6-1e20-4a48-8390-493b59378c72
cluster name ceph
crush device class None
encrypted 0
osd fsid 32db3102-eb60-4b9e-82cf-5a8ea0160c4f
osd id 11
osdspec affinity
type block
vdo 0
====== osd.9 =======
[block] /dev/ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.9/main
block device /dev/ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.9/main
block uuid 0GMiMN-nTCp-ZTQK-lSt2-zwtc-t25e-QepDyF
cephx lockbox secret
cluster fsid 2e1fa5e6-1e20-4a48-8390-493b59378c72
cluster name ceph
crush device class None
encrypted 0
osd fsid fe7603bf-1807-4444-9134-59295575e7a8
osd id 9
osdspec affinity
type block
vdo 0
admin
2,930 Posts
February 18, 2021, 10:47 pm
Can you show the output of
ls -l /var/lib/ceph/osd/ceph-11/block
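The block symlink under /var/lib/ceph/osd/ceph-<id>/ points at the device the running OSD actually writes to, so it will show which of the two osd.11 entries above is live. A couple of equivalent checks, sketched here on the assumption that OSD 11 is the one in question:
readlink -f /var/lib/ceph/osd/ceph-11/block      # resolves the symlink to the underlying block device node
ceph osd metadata 11 | grep -E '"devices"|"bluestore_bdev_dev_node"'      # what the OSD daemon itself reports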
alexurpi
8 Posts
February 18, 2021, 10:48 pm
Sure!
lrwxrwxrwx 1 ceph ceph 59 Feb 17 11:08 /var/lib/ceph/osd/ceph-11/block -> /dev/ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.11/main
Thanks for your answers!
admin
2,930 Posts
February 18, 2021, 11:06 pm
One more command:
pvs -o pv_name,vg_name,lv_name
alexurpi
8 Posts
February 18, 2021, 11:07 pm
PV VG LV
/dev/sda1 ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.9 [cache]
/dev/sda2 ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.10 [cache]
/dev/sda3 ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.11 [cache]
/dev/sdb1 ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.9 [main_wcorig]
/dev/sdd1 ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.10 [main_wcorig]
/dev/sde1 ps-2e1fa5e6-1e20-4a48-8390-493b59378c72-wc-osd.11 [main_wcorig]
/dev/sdf1 ceph-ccc02316-4684-4a5a-ac05-56250f33469f osd-block-86f297d7-c9f6-4e78-b385-fa2beaebfefa
admin
2,930 Posts
February 18, 2021, 11:19 pm
It looks like there was an earlier attempt to add the /dev/sdf drive as OSD 11 without a cache, which did not succeed for some reason. On such failed adds I do believe we wipe the disk, but that apparently did not happen here, so the disk still carries tags identifying it as OSD 11. /dev/sde is the disk actually serving OSD 11 and is configured to use a cache like the other OSDs.
If you verify that /dev/sdf is not in use, you can remove it or clean it:
wipefs -a /dev/sdf1
dd if=/dev/zero of=/dev/sdf bs=100M count=1 oflag=direct,dsync
This will remove any tags for OSD 11.
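Before wiping, a quick sanity check that nothing is using /dev/sdf (a sketch, using the device names from this thread):
lsblk /dev/sdf      # should show only the stale LVM layout, with nothing mounted
ls -l /var/lib/ceph/osd/ceph-11/block      # must still point at the ps-...-wc-osd.11/main LV, not at anything on /dev/sdf1
lvs ceph-ccc02316-4684-4a5a-ac05-56250f33469f -o lv_path,lv_active      # the stale osd-block LV and whether it is active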
alexurpi
8 Posts
February 18, 2021, 11:23 pm
I'm getting an error:
wipefs: error: /dev/sdf1: probing initialization failed: Device or resource busy
Do you think that marking OSD 11 as "out", waiting for re-balancing, and then removing OSD 11 would solve the problem?
admin
2,930 Posts
February 18, 2021, 11:33 pm
Can you first try
vgchange -a n ceph-ccc02316-4684-4a5a-ac05-56250f33469f
then
wipefs -a /dev/sdf1
dd if=/dev/zero of=/dev/sdf bs=100M count=1 oflag=direct,dsync
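For reference, here is the same sequence with comments on what each step does (a sketch; the VG name comes from the pvs output above):
vgchange -a n ceph-ccc02316-4684-4a5a-ac05-56250f33469f      # deactivate the stale VG so its LV stops holding /dev/sdf1 open
wipefs -a /dev/sdf1      # with the VG inactive the partition is no longer busy, and its LVM signatures can be cleared
dd if=/dev/zero of=/dev/sdf bs=100M count=1 oflag=direct,dsync      # zero the start of the disk, removing the partition table and labels
pvs      # /dev/sdf1 should no longer be listed afterwards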