
Split out Ceph public

I've seen forum posts saying that the Ceph public network can be used natively instead of NFS or iSCSI. We'd like to use this for one pool and NFS for another. However, we'd want to access the Ceph public network on its own VLAN/subnet, not on the cluster/OSD subnet. We basically have two bonds: bond0 for client access, and bond1 for cluster OSD replication.

What would you change/edit to ensure the Ceph public network can be accessed on its own VLAN on bond0 without affecting future upgrades?

(See cluster_info.json and ceph.conf below)

Many thanks

it3

root@pet201:~# cat /etc/ceph/ceph.conf
# minimal ceph.conf for d98de______________6a5a74
[global]
fsid = d98de30_______________f6a5a74
mon_host = [v2:x.x.21.81:3300/0,v1:x.x.21.81:6789/0] [v2:x.x.21.82:3300/0,v1:x.x.21.82:6789/0] [v2:x.x.21.83:3300/0,v1:x.x.21.83:6789/0]
auth_client_required = cephx
root@pet201:~#

 
root@pet201:~# cat /opt/petasan/config/cluster_info.json
{
"backend_1_base_ip": "x.x.21.0",
"backend_1_eth_name": "bond1",
"backend_1_mask": "255.255.255.0",
"backend_1_vlan_id": "21",
"backend_2_base_ip": "",
"backend_2_eth_name": "",
"backend_2_mask": "",
"backend_2_vlan_id": "",
"bonds": [
{
"interfaces": "eth6,eth7",
"is_jumbo_frames": false,
"mode": "802.3ad",
"name": "bond0",
"primary_interface": ""
},
{
"interfaces": "eth8,eth9",
"is_jumbo_frames": true,
"mode": "802.3ad",
"name": "bond1",
"primary_interface": ""
}
],
"default_pool": "both",
"default_pool_pgs": "4096",
"default_pool_replicas": "3",
"eth_count": 10,
"jf_mtu_size": "9000",
"jumbo_frames": [
"eth4",
"eth5",
"eth8",
"eth9"
],
"management_eth_name": "bond0",
"management_nodes": [
{
"backend_1_ip": "x.x.21.81",
"backend_2_ip": "",
"is_backup": false,
"is_cifs": true,
"is_iscsi": true,
"is_management": true,
"is_nfs": true,
"is_storage": true,
"management_ip": "x.x.8.81",
"name": "pet201"
},
{
"backend_1_ip": "x.x.21.82",
"backend_2_ip": "",
"is_backup": false,
"is_cifs": true,
"is_iscsi": true,
"is_management": true,
"is_nfs": true,
"is_storage": true,
"management_ip": "x.x.8.82",
"name": "pet202"
},
{
"backend_1_ip": "x.x.21.83",
"backend_2_ip": "",
"is_backup": false,
"is_cifs": true,
"is_iscsi": true,
"is_management": true,
"is_nfs": true,
"is_storage": true,
"management_ip": "x.x.8.83",
"name": "pet203"
}
],
"name": "peta001",
"storage_engine": "bluestore"

I understand you want to access Ceph directly/natively from Linux clients on the public network. In that case you only need to configure the clients; nothing is needed on the server/PetaSAN side. The Ceph public network is the Backend network in PetaSAN.

I am not sure why you would need your clients to be on a different VLAN than the Ceph public network and still want direct access. It is like having an iSCSI server listening on one VLAN while your client is on a different VLAN: you may be able to route between the VLANs by configuring your switch, but it is easier to be on the same VLAN.
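For reference, client-side configuration along these lines might look like the following sketch. The host, pool/image names, keyring, and mount point are placeholders (not from this cluster), and the exact steps depend on your distribution and which access method (RBD or CephFS) you use:

```shell
# Install the Ceph client tools on the Linux client (Debian/Ubuntu shown):
apt install ceph-common

# Copy ceph.conf and a client keyring from a cluster node
# (node name and keyring are placeholders):
scp root@pet201:/etc/ceph/ceph.conf /etc/ceph/
scp root@pet201:/etc/ceph/ceph.client.admin.keyring /etc/ceph/

# Then access Ceph natively, e.g. map an RBD image
# ("mypool/myimage" is a placeholder):
rbd map mypool/myimage --name client.admin

# or mount CephFS directly (monitor address taken from ceph.conf):
mount -t ceph x.x.21.81:6789:/ /mnt/cephfs -o name=admin
```

Note the client only needs to reach the monitor and OSD addresses listed in ceph.conf, which is why it must have a route to (or sit on) the Ceph public network.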

Hi there

Thanks for coming back to me. I don't think I've explained very well, so apologies for that.

I would prefer not to have Linux clients accessing Ceph on the cluster/backend network.

We use Ceph natively at the moment, and this is what our ceph.conf file shows:

# Your network address
public network = x.x.3.0/24
cluster network = x.x.4.0/24

So our Linux clients are on x.x.3.0/24.

Is it possible to do this with PetaSAN? (As I understand it, both the public and cluster networks are on the same interface in PetaSAN.)
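In other words, the kind of split we're after would look something like the following ceph.conf fragment. The public subnet shown is purely illustrative (a hypothetical client VLAN carried on bond0); only x.x.21.0/24 is our actual current backend:

```ini
[global]
# client-facing VLAN on bond0 (hypothetical subnet, for illustration)
public network = x.x.30.0/24
# OSD replication stays on bond1 (our current backend subnet)
cluster network = x.x.21.0/24
```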