Can't add disk as cache on new node - old raid disks - best method to fix
neiltorda
98 Posts
January 5, 2021, 5:24 pm
I just added a 4th node to our PetaSAN cluster.
I am now trying to add cache/journal/OSD disks. The first disk I chose to add as a cache disk fails, and I get the following in the log:
05/01/2021 11:18:16 ERROR Error executing : wipefs --all /dev/sdaa
05/01/2021 11:18:16 INFO Executing : wipefs --all /dev/sdaa
05/01/2021 11:18:15 INFO Start cleaning disk : sdaa
05/01/2021 11:18:10 INFO Start add cache job for disk sdaa.
05/01/2021 11:18:10 INFO -disk_name sdaa -partitions 4
05/01/2021 11:18:10 INFO params
When I log in at the console on the system and run blkid, I see that the disks used to be part of some RAID sets:
/dev/sdab: UUID="e44dfd08-xxxxxxc8ca6adfc" UUID_SUB="b481dac4xxxxxx9c4aac7a3ac6" LABEL="ceph-osdxxxxxxu.edu:0" TYPE="linux_raid_member"
/dev/sdac: UUID="e44dfd08-xxxxxxc8ca6adfc" UUID_SUB="3e865b1xxxxxx6ad8842a4" LABEL="ceph-osdxxxxxxu.edu:0" TYPE="linux_raid_member"
/dev/sdad: UUID="e44dfd08-xxxxxx-284c8ca6adfc" UUID_SUB="fba56exxxxxx9e8bd2" LABEL="ceph-osdxxxxxxu.edu:0" TYPE="linux_raid_member"
lsblk reports this:
sdab 65:176 0 894.3G 0 disk
└─md0 9:0 0 15.7T 0 raid6
sdac 65:192 0 894.3G 0 disk
└─md0 9:0 0 15.7T 0 raid6
sdad 65:208 0 894.3G 0 disk
└─md0 9:0 0 15.7T 0 raid6
What is the best way to clear the old RAID info from these disks so I can add them into the cluster as Journal/Cache/OSD?
Thanks so much!
Neil
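A quick way to confirm whether the kernel has actually assembled those leftover members into an active md array is to check /proc/mdstat and the mdadm metadata. This is only a minimal sketch, assuming standard mdadm tooling and the device names shown above (adjust them to your own system):
# show any currently assembled md arrays and their member disks
cat /proc/mdstat
# detailed view of the assembled array, if it exists
mdadm --detail /dev/md0
# inspect the RAID superblock left on an individual former member disk
mdadm --examine /dev/sdab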
admin
2,930 Posts
January 5, 2021, 7:04 pm
wipefs is failing. If you try the command manually, does it give more info on the error?
wipefs --all /dev/sdaa
Maybe the old RAID was activated; try to stop/remove it:
mdadm --stop /dev/md0
mdadm --remove /dev/md0
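Putting the steps together, here is a minimal sketch of the full cleanup, assuming the leftover array is /dev/md0 and its members are sdab, sdac and sdad as shown by lsblk (repeat for sdaa or any other affected disk, and double-check the device names first, since these commands are destructive):
# stop the assembled array so the member disks are released
mdadm --stop /dev/md0
mdadm --remove /dev/md0
# clear the md superblock from each former member disk
mdadm --zero-superblock /dev/sdab /dev/sdac /dev/sdad
# wipe any remaining RAID/filesystem signatures so the disks can be reused
wipefs --all /dev/sdab /dev/sdac /dev/sdad
After that, adding the disks from the PetaSAN UI should get past the wipefs step, which matches the outcome reported below.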
neiltorda
98 Posts
January 6, 2021, 2:22 pm
That was it. I stopped the old RAIDs and then manually wiped the disks. I was then able to add them as new members of the PetaSAN volume.