Configure existing OSDs to use a Journal
moose999
9 Posts
January 7, 2023, 4:00 pm
Hi,
First off, I'm very new to PetaSAN, and I only have a little experience with Ceph, but I'm keen to learn!
I have 3 nodes, each with 2 HDDs and 1 SSD. The SSDs were set as journal drives, and the HDDs as OSDs, during installation.
My HDDs did not successfully add themselves as OSDs during installation, presumably due to some existing RAID info. wipefs took care of that, and I was then able to add the HDDs as OSDs in the WebUI.
HOWEVER, when I added the HDDs as OSDs, I did not choose for the journal to be 'enabled'. I now believe this was a mistake, as when I benchmark my cluster, I get '0' for every metric on the journal drives!
So, as per the title, can I enable the journal on existing OSDs? If not, and I must recreate my OSDs, what's the best way to go about that?
Many thanks in advance!
Best Regards,
Justin
admin
2,930 Posts
January 7, 2023, 6:50 pm
To check if you are using journals or not:
From the UI: Node List -> Physical Disk List. If the OSD is using a journal device, you will see the journal partition listed on the same row as the OSD, under the Linked Devices column.
Another way is using the CLI command:
ceph-volume lvm list
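As a rough sketch of what to look for: an OSD backed by a separate journal/DB device shows a [db] section in its ceph-volume listing, alongside the usual [block] section. The sample output below is hypothetical (the volume names are made up); on a real node you would pipe the actual command instead.

```shell
# Hypothetical snippet of `ceph-volume lvm list` output for one OSD.
# On a real node, replace the sample with the command's actual output.
sample_output='====== osd.0 =======
  [block]  /dev/ceph-aaaa/osd-block-aaaa
  [db]     /dev/ceph-bbbb/osd-db-bbbb'

# An OSD uses an external journal/DB if its listing contains a [db] section.
if printf '%s\n' "$sample_output" | grep -q '\[db\]'; then
  echo "osd.0 has a journal/DB device"
else
  echo "osd.0 has no journal/DB device"
fi
```

On a live node you would run `ceph-volume lvm list | grep -B3 '\[db\]'` to see which OSDs have one.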
If you confirm you are not using a journal, you can add one to an existing OSD using the script:
/opt/petasan/scripts/util/migrate_db.sh
I recommend you read the top of the script; it is use-at-your-own-risk. Try it first on a test/VM cluster to make sure it does what you need. In a production cluster, also use it on one OSD at a time, making sure each OSD comes back online correctly before moving on to the next. Make sure the cluster health is OK before using it.
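The one-OSD-at-a-time procedure above could be sketched roughly as below. The migrate_db.sh path is from this thread; the health-gating helper and the loop are our own illustration (check the script's header for the exact arguments it expects), and the run is disabled by default so nothing executes by accident.

```shell
# Sketch of a cautious, health-gated rollout. Illustration only:
# flip RUN_FOR_REAL to yes on a real cluster, after testing on a VM.
RUN_FOR_REAL=no

wait_for_health_ok() {
  # Poll until `ceph health` reports HEALTH_OK (cluster assumed reachable).
  while [ "$(ceph health)" != "HEALTH_OK" ]; do
    sleep 10
  done
}

if [ "$RUN_FOR_REAL" = yes ]; then
  for osd_id in 0 1 2; do        # hypothetical OSD ids; use your own
    wait_for_health_ok           # never start while the cluster is degraded
    # Run the migration; read the script header first for its arguments
    # (use at your own risk, as the post says).
    /opt/petasan/scripts/util/migrate_db.sh
    wait_for_health_ok           # confirm the OSD came back before the next
  done
fi
```

The point of the two health checks is that a migration should neither start on, nor leave behind, a degraded cluster.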
Last edited on January 7, 2023, 6:51 pm by admin · #2