
[Guide] How to add cephfs support

Hi Guys,

Just thought I would share how you can easily add 'cephfs' support to PetaSAN.

This is all based on calling the cluster 'ceph', but it should work no matter what you call it.

First, during the installer step where you select the profile etc., you can adjust the ceph.conf file.

Add the following:

[mds]
mds data = /var/lib/ceph/mds/$cluster-$id
keyring = /var/lib/ceph/mds/$cluster-$id/keyring

You can also edit /etc/ceph/ceph.conf (or whatever your cluster name is) after the install.  This just tells the MDS where to look for its keys.

Next on the first node, run the following:

ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_metadata 32
ceph fs new cephfs cephfs_metadata cephfs_data

This creates the cephfs pool and filesystem.
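The PG counts (128 and 32) aren't magic numbers. A common rule of thumb (worth double-checking against the Ceph pgcalc guidance for your own cluster) is to target roughly 100 PGs per OSD, divide by the replica count, and round down to a power of two. A quick sketch of that arithmetic for the six-OSD cluster shown later:

```shell
# Rule-of-thumb PG sizing sketch (assumptions: ~100 PGs per OSD,
# 3x replication; adjust for your own cluster).
osds=6
replicas=3
target=$(( osds * 100 / replicas ))   # 200 for this cluster

# Round down to the largest power of two not exceeding the target.
pg=1
while [ $(( pg * 2 )) -le "$target" ]; do
  pg=$(( pg * 2 ))
done
echo "$pg"   # prints 128
```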

On each of your three (master) nodes run the following:

id=$(hostname -s)
mkdir -p /var/lib/ceph/mds/ceph-$id
ceph auth get-or-create mds.${id} mon 'profile mds' mgr 'profile mds' mds 'allow *' osd 'allow *' > /var/lib/ceph/mds/ceph-${id}/keyring
for i in enable start status; do systemctl $i ceph-mds@$id; done

This creates an MDS on each of your nodes. The status command is just to make sure it started successfully.

Confirm that the MDS daemons are online:

ceph -s

You should see something like:

root@psan01:~# ceph -s
  cluster:
    id:     8a0b84a9-a956-4fd9-bd55-244c0119998e
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum psan01,psan02,psan03
    mgr: psan03(active), standbys: psan01, psan02
    mds: cephfs-1/1/1 up {0=psan01=up:active}, 1 up:standby
    osd: 6 osds: 6 up, 6 in

  data:
    pools:   3 pools, 416 pgs
    objects: 1.05k objects, 4.00GiB
    usage:   14.0GiB used, 5.44TiB / 5.46TiB avail
    pgs:     416 active+clean

After this is done, export the admin key, which you will need in order to mount the cephfs filesystem on another node.

ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > ceph.key

This creates a file called ceph.key, which you can then scp to whatever node you wish to mount cephfs onto.

For mounting, you can use the following on the nodes you wish to mount on:

sudo mkdir -p /mnt/cephfs
sudo mount -t ceph 10.227.3.1:6789,10.227.3.2:6789,10.227.3.3:6789:/ /mnt/cephfs -o name=admin,secretfile=ceph.key

Replace the IPs with the 'backend 1' IPs of your servers.  This is what Ceph calls the public network.  Listing all three monitors makes sure that if one of the nodes fails, you stay mounted and working.
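If you want the mount to survive a reboot, a rough sketch of an /etc/fstab entry would look something like the following (the secretfile path /etc/ceph/ceph.key is an assumption, use wherever you copied the key to; `_netdev` makes the mount wait for the network):

```
10.227.3.1:6789,10.227.3.2:6789,10.227.3.3:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/ceph.key,_netdev  0  0
```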

For an alternative method without using a key file, you can use:

mount -t ceph 10.227.3.1:6789,10.227.3.2:6789,10.227.3.3:6789:/ /mnt/cephfs/ -o name=admin,secret="AQDZsz5dSBj8NRAAsYESiK3uJjblBAIjw0qFNw=="

You can find the key by catting the file you exported earlier.
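If you just need the bare secret for the secret= form above, you can also pull it straight out of a keyring file without ceph-authtool. A small sketch, using a throwaway sample keyring written to /tmp (the file name is made up for illustration; the awk line prints the same base64 string that ceph-authtool -p emits):

```shell
# Hypothetical sample keyring for illustration only.
cat > /tmp/sample.keyring <<'EOF'
[client.admin]
	key = AQDZsz5dSBj8NRAAsYESiK3uJjblBAIjw0qFNw==
EOF

# Print just the base64 secret (third field of the "key = ..." line).
awk '/key = /{print $3}' /tmp/sample.keyring > /tmp/ceph.key
cat /tmp/ceph.key   # prints AQDZsz5dSBj8NRAAsYESiK3uJjblBAIjw0qFNw==
```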

To confirm it's working as expected, just use a normal 'df -h':

df -h
Filesystem                                         Size  Used Avail Use% Mounted on
udev                                               983M     0  983M   0% /dev
tmpfs                                              200M  3.1M  197M   2% /run
/dev/sda1                                           10G  1.1G  8.9G  11% /
tmpfs                                              997M     0  997M   0% /dev/shm
tmpfs                                              5.0M     0  5.0M   0% /run/lock
tmpfs                                              997M     0  997M   0% /sys/fs/cgroup
tmpfs                                              200M     0  200M   0% /run/user/0
tmpfs                                              200M     0  200M   0% /run/user/1000
10.227.3.1:6789,10.227.3.2:6789,10.227.3.3:6789:/  2.6T  4.0G  2.6T   1% /mnt/cephfs

I hope some of you find this helpful.

Gotchas

If you're using a 'cluster name' other than ceph, you will need to point the ceph commands at your cluster config file.  For example, if you call your cluster 'bob' you would need to use:

ceph -s -c /etc/ceph/bob.conf

Instead of

ceph -s
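To avoid typing -c on every single command, the ceph CLI also honours the CEPH_ARGS environment variable (worth verifying on your Ceph version), so you can export it once per shell session:

```shell
# Export once; subsequent ceph commands in this shell pick up the
# extra arguments automatically.
export CEPH_ARGS="-c /etc/ceph/bob.conf"
echo "$CEPH_ARGS"
# ceph -s    # now equivalent to: ceph -s -c /etc/ceph/bob.conf
```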