ceph journal partition
hoanglongvina
12 Posts
August 31, 2017, 6:41 am
Hi everybody,
The default Ceph journal size when installing PetaSAN is 5G.
How can I increase the size of the Ceph journal partition? Please help me. Thanks.
admin
2,930 Posts
August 31, 2017, 9:30 am
You can change this value in the config file:
/etc/ceph/CLUSTER_NAME.conf
Add this at the end:
osd_journal_size = 10240
The value is in MB, so this will make it 10G.
You need to do this on all your OSD nodes, then restart the machines or restart the OSD services.
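On each OSD node this could look roughly as follows (a sketch; CLUSTER_NAME is a placeholder for your actual cluster name, and the systemd target name assumes a systemd-based Ceph install):
# append the journal size (in MB) to the cluster config
echo "osd_journal_size = 10240" >> /etc/ceph/CLUSTER_NAME.conf
# restart the OSD services (or reboot the node)
systemctl restart ceph-osd.target
Note that osd_journal_size only affects journals created after the change; existing journal partitions keep their size.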
hoanglongvina
12 Posts
August 31, 2017, 10:21 am
Hi admin,
Currently each OSD disk has its own journal partition. Can I configure a dedicated SSD drive to hold the journals instead?
For example, today:
1. /dev/sdb : 20G
/dev/sdb1: 15G --> DATA
/dev/sdb2: 5G --> Journal
2. /dev/sdc : 20G
/dev/sdc1: 15G --> DATA
/dev/sdc2: 5G --> Journal
I want to configure one SSD drive instead:
---> 1 SSD, 20G --> Journal
Thanks, everybody.
admin
2,930 Posts
August 31, 2017, 10:30 am
Hi there,
Currently we do not support mixing SSDs and spinning disks together, as mentioned in our release notes; it will be supported in the future when we add support for BlueStore. However, you can do this yourself manually now using the ceph-disk command.
hoanglongvina
12 Posts
August 31, 2017, 10:38 am
Hi admin,
For example, I created these partitions on sdb:
/dev/sda : system
/dev/sdb1 : 15G Ceph OSD
/dev/sdb2 : 5G Ceph journal
If I increase the journal size setting by 5G to 10G, will the journal partition on sdb still be created at 5G?
Thanks.
admin
2,930 Posts
August 31, 2017, 11:03 am
Assume you have two spinning disks, /dev/sdd and /dev/sde, for OSDs, and an SSD for the journal on /dev/sdf.
You can manually create the two OSDs with journals as follows:
ceph-disk prepare --cluster CLUSTER_NAME /dev/sdd /dev/sdf
ceph-disk prepare --cluster CLUSTER_NAME /dev/sde /dev/sdf
This will create the following partitions:
/dev/sdd1: OSD data, spanning the entire disk
/dev/sde1: OSD data, spanning the entire disk
/dev/sdf1: SSD journal for the first disk; size determined by osd_journal_size in the conf file (default 5G), as per the earlier post
/dev/sdf2: SSD journal for the second disk; size determined by osd_journal_size in the conf file (default 5G), as per the earlier post
You do not need to create the partitions yourself; just specifying the disks is fine.
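Afterwards you could double-check the result by listing the prepared disks; a sketch of what this might look like (the OSD ids and exact output format are assumptions):
ceph-disk list
# expected output (abridged):
# /dev/sdd1 ceph data, active, osd.0, journal /dev/sdf1
# /dev/sde1 ceph data, active, osd.1, journal /dev/sdf2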
hoanglongvina
12 Posts
September 1, 2017, 3:01 am
Hi admin,
I will try it on my systems. Thank you very much.
hoanglongvina
12 Posts
September 1, 2017, 8:02 am
Hi admin,
I set up PetaSAN with OSD partitions on /dev/sdc (a SAS drive):
- /dev/sdc1 --> 10G, type Ceph OSD
- /dev/sdc2 --> 5G, type Ceph journal
I want to delete or shrink the Ceph journal partition sdc2 (5G --> 1M) and create the Ceph journal partition on an SSD drive (/dev/sdd) instead. I tried with the command:
ceph-disk prepare --cluster CLUSTER_NAME /dev/sdc2 /dev/sdd
--> it cannot create the Ceph journal partition.
Please help me.
Thanks.
admin
2,930 Posts
September 1, 2017, 9:48 am
The steps I sent earlier are for creating new OSDs with SSD journals.
If you have existing OSDs and need to move/replace their journals to SSDs, you can follow:
https://docs.hpcloud.com/hos-3.x/helion/operations/maintenance/ceph/replace_osd_journaldisk.html
https://www.sebastien-han.fr/blog/2014/11/27/ceph-recover-osds-after-ssd-journal-failure/
You may find it easier to use the steps I sent earlier for creating new OSDs. To stop PetaSAN from automatically creating OSDs during cluster creation or when a new node joins: during node deployment in the web deployment UI, uncheck "Local Storage Service". This will create the cluster (or join the new node) without creating OSDs. After the cluster is built (ignore the warning that it has no storage), go to the Node List and re-check the "Local Storage Service" role, then manually add your OSDs as per my previous steps.
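Following those guides, moving an existing filestore journal to an SSD partition looks roughly like this (a hedged sketch; osd.0, the partition UUID placeholder, and the systemd unit names are assumptions, and CLUSTER_NAME must match your cluster):
systemctl stop ceph-osd@0
ceph-osd --cluster CLUSTER_NAME -i 0 --flush-journal
# point the OSD's journal symlink at the new SSD partition
ln -sf /dev/disk/by-partuuid/<ssd-partition-uuid> /var/lib/ceph/osd/CLUSTER_NAME-0/journal
ceph-osd --cluster CLUSTER_NAME -i 0 --mkjournal
systemctl start ceph-osd@0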
hoanglongvina
12 Posts
September 4, 2017, 1:55 am
Hi admin,
I will try it on my systems. Thank you very much.