error list: "Disk nvme1n1 prepare failure on node"

hak
23 Posts
July 23, 2023, 12:50 pm
Hi,
Rebuilding my pre-production cluster with the final node count. When trying to add my nodes as a new cluster, I'm getting:
Error List
Disk nvme0n1 prepare failure on node xxx-PSAN-01.
Disk nvme1n1 prepare failure on node xxx-PSAN-01.
Disk nvme2n1 prepare failure on node xxx-PSAN-01.
Disk nvme0n1 prepare failure on node xxx-PSAN-02.
Disk nvme1n1 prepare failure on node xxx-PSAN-02.
Disk nvme2n1 prepare failure on node xxx-PSAN-02.
Disk nvme0n1 prepare failure on node xxx-PSAN-03.
Disk nvme1n1 prepare failure on node xxx-PSAN-03.
Disk nvme2n1 prepare failure on node xxx-PSAN-03.
Is this a legacy partition table or something I need to manually clear out before proceeding? I can log into the GUI, and I have 0/0/0 OSDs...
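For reference, a quick way to check whether leftover partition tables or filesystem signatures are still present on a disk (a rough sketch; the device name below is just an example):
lsblk -f /dev/nvme0n1    # shows any filesystems or LVM members still on the disk
sgdisk -p /dev/nvme0n1   # prints the existing partition table, if any
wipefs -n /dev/nvme0n1   # dry run: lists the signatures wipefs would erase, without changing anything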

admin
2,967 Posts
July 23, 2023, 8:02 pm
It could be the drives, as you had drive issues earlier. Try to manually wipe the drives with a tool like wipefs; we do that ourselves, but maybe you would see an error. If that is OK, I would build/deploy the cluster without OSDs, then add the OSDs manually after the cluster is built. If it fails, look into
/opt/petasan/log/ceph-volume.log
If it fails to prepare, it is generally a hardware issue.
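As a rough sketch of the suggestion above (the device name is just an example, and wipefs -a is destructive, so only run it on disks you intend to reuse as OSDs):
wipefs -a /dev/nvme0n1                        # erase all filesystem and partition-table signatures
tail -n 50 /opt/petasan/log/ceph-volume.log   # check the most recent prepare errors if it still fails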

hak
23 Posts
August 1, 2023, 2:34 pm
Nah, these are the new, non-issue drives. I'll be wiping the drives and trying again, as these were fully functional in my test cluster. The cluster is built without OSDs now, so I will wipe, try to add them manually, and report back.

hak
23 Posts
August 4, 2023, 3:24 am
Closing the loop here:
sgdisk -Z /dev/nvme[xyz]
and a reboot now shows the OSDs as available (there was nothing I wanted on these drives/partitions).
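For anyone landing here later, a rough sketch of that fix expanded for the three disks in this thread (device names assumed from the error list above; sgdisk -Z wipes the partition table, so only run it on disks you want blank):
for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1; do
    sgdisk -Z "$dev"    # zap the GPT and MBR data structures on the disk
done
reboot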