Cluster stopped, 1 osd is full
wid
47 Posts
March 18, 2021, 7:36 pm
Quote from wid on March 18, 2021, 7:36 pm
Ok, the cluster is HEALTH_OK now, it only took 10 days 😉
Thank you for the help.
One more question:
What is the correct way to put the journal on SSD disks?
3 nodes, 25 OSDs total.
Add 1 SSD disk to each node as a journal, then mark each OSD out, wait, and recreate the OSD (rolling, of course)?
Or is there maybe a better way to do it? 🙂
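For reference, the rolling approach described above would look roughly like the following on the command line. This is only a sketch, assuming OSD id 12, data disk /dev/sdb and SSD journal partition /dev/sde1 as example names; PetaSAN normally drives OSD creation from its UI, so adjust accordingly:

ceph osd out 12                                    # mark the OSD out and let data drain off it
ceph osd safe-to-destroy 12                        # repeat until it reports the OSD is safe to destroy
systemctl stop ceph-osd@12                         # stop the OSD service on its node
ceph osd purge 12 --yes-i-really-mean-it           # remove the OSD from the cluster
ceph-volume lvm create --data /dev/sdb --block.db /dev/sde1   # recreate it with its DB/journal on the SSD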
admin
2,930 Posts
March 19, 2021, 8:50 am
Quote from admin on March 19, 2021, 8:50 am
You could add a new journal SSD to an existing HDD OSD using the bluefs-bdev-new-db option in ceph-bluestore-tool.
In PetaSAN we tag the journal partition used with a partition type of 30cd0809-c2b2-499c-8879-2d6b78529876.
You could use sgdisk, for example, to tag /dev/sde1 as a used journal:
sgdisk -t 1:30cd0809-c2b2-499c-8879-2d6b78529876 /dev/sde
Run this after creating the external journal with ceph-bluestore-tool, or it will not show in the UI.
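For reference, a minimal sketch of that procedure, assuming OSD id 12 and SSD journal partition /dev/sde1 (adjust the id, device names and DB sizing to your cluster):

systemctl stop ceph-osd@12                         # stop the OSD before touching its devices

ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-12 \
    --dev-target /dev/sde1                         # attach the SSD partition as the new DB/journal device

ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-12 \
    --devs-source /var/lib/ceph/osd/ceph-12/block \
    --dev-target /var/lib/ceph/osd/ceph-12/block.db   # optional: move existing RocksDB data off the HDD

# depending on the setup you may also need to chown the new block.db link to ceph:ceph

sgdisk -t 1:30cd0809-c2b2-499c-8879-2d6b78529876 /dev/sde   # tag partition 1 so the PetaSAN UI sees it as a journal

systemctl start ceph-osd@12

Without the optional bluefs-bdev-migrate step, only newly written RocksDB data lands on the SSD; migrating moves the existing metadata as well.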
Last edited on March 19, 2021, 8:51 am by admin · #12