
4+ Management nodes

After adding the 4th mon node below, I noticed an issue with the GUI/dashboard. I'm not sure what is causing it, but many of the dashboard settings are no longer updating after adding the 4th node. Things are still broken after I ran the following scripts, so I am wondering whether there are other scripts I am forgetting that are needed to sync the data with the GUI/dashboard:

create_mon.py
create_mgr.py
create_mds.py

create_consul_conf.py        # Seems to help sync the different nodes
consul_client_start_up.py    # Ran and an error occurred; I think that's normal since this is a MON node.
consul_start_up.py           # Ran and seems to work...
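For reference, this is the sequence I ran on the new node. The `/opt/petasan/scripts` path is an assumption based on a default PetaSAN install; adjust if your layout differs:

```shell
# Run on the new 4th node; path is an assumption (default PetaSAN layout).
cd /opt/petasan/scripts
python create_mon.py           # register the node as a Ceph monitor
python create_mgr.py           # add a standby mgr daemon
python create_mds.py           # add a standby mds daemon
python create_consul_conf.py   # regenerate the Consul config for this node
python consul_start_up.py      # start the Consul server agent
```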

The health warning comes from a 5-node requirement I set on the cluster, but it is down to 4 nodes, so I think that can be ignored. The issue is that the GUI won't let me change node roles (CIFS, NFS, S3, replication, etc.), and previously it said the interfaces were not assigned even though they are. It is as if the GUI cannot talk to the node to update settings, yet it can still read the status. Any suggestions would help.

ceph -s seems to say things are working fine:

cluster:
  id:     XXXXX
  health: HEALTH_WARN
          clock skew detected on mon.pool1-node1
          Degraded data redundancy: 24/120 objects degraded (20.000%), 17 pgs degraded, 129 pgs undersized

services:
  mon: 4 daemons, quorum pool0-node3,pool0-node1,pool0-node2,pool1-node1 (age 5m)
  mgr: pool0-node(active, since 11h), standbys: pool0-node1,pool0-node2,pool1-node1
  mds: 1/1 daemons up, 3 standby
  osd: 4 osds: 4 up (since 11h), 4 in (since 28h)

data:
  volumes: 1/1 healthy
  pools:   3 pools, 129 pgs
  objects: 24 objects, 457 KiB
  usage:   47 MiB used, 1000 GiB / 1000 GiB avail
  pgs:     24/120 objects degraded (20.000%)
           112 active+undersized
           17 active+undersized+degraded
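Independent of the dashboard, the state of the 4th monitor can be confirmed from the command line. These are standard Ceph commands (exact output shape varies by release):

```shell
ceph mon stat                  # one-line summary: epoch, mon names, who is in quorum
ceph quorum_status --format json-pretty   # full quorum detail, incl. quorum_names
ceph time-sync-status          # per-mon clock offsets, useful for the clock-skew warning
```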

Is it possible to run a 4th dashboard from PetaSAN? I can't seem to get it up and running by adding a 4th dashboard, and then a 5th...


I noticed the issue when I started editing cluster_info.json to add a 4th management node. Otherwise the dashboard seems to work fine...

Why do you need a 4th management node?

If you need to add a Ceph monitor service, you can do so using the Ceph command line.
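For what it's worth, the manual procedure from the Ceph documentation looks roughly like this. `<id>` and the systemd unit name are placeholders, and PetaSAN may wrap these steps in its own scripts, so treat this as a sketch rather than a PetaSAN-specific recipe:

```shell
# On the new host; <id> is a placeholder for the new monitor's name.
mkdir -p /var/lib/ceph/mon/ceph-<id>
ceph auth get mon. -o /tmp/mon.keyring    # fetch the mon. keyring
ceph mon getmap -o /tmp/monmap            # fetch the current monitor map
ceph-mon -i <id> --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
systemctl start ceph-mon@<id>             # unit name varies by distro/release
```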

What is the typical process to add a 4th or 5th node? The only reason to add more management nodes is that if one goes down you are left with only 2, and if any more go down after that, no more writes can occur. Adding the monitor using the command line doesn't give us the PetaSAN GUI interface, I believe, right? Is there a way to add everything so the 4th and 5th nodes get the GUI too?

If not, I guess having a 4th and 5th mon, mgr, and mds should let the cluster continue to run even with only one PetaSAN management GUI interface running, as long as enough other mons are running.
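One caveat on "as long as enough other mons are running": Ceph monitors need a strict majority for quorum, so with n mons you can lose floor((n-1)/2) of them. A quick sketch of the arithmetic (plain shell, just to illustrate):

```shell
# Quorum needs a strict majority: majority(n) = n/2 + 1 (integer division),
# so the number of mon failures tolerated is (n - 1) / 2.
for n in 3 4 5; do
  echo "$n mons: tolerate $(( (n - 1) / 2 )) failure(s)"
done
```

Note that 4 mons tolerate no more failures than 3 (a majority of 4 is 3), which is why odd monitor counts are usually recommended.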

When you deploy a node, you can Create New Cluster, Join Existing Cluster, or Replace Node.

To add a node, use the second option; to replace a failed host, use the third.