
From the admin web application, add & remove OSDs?


Trying to figure out #4 & #5 from page 6 of the 2.0 upgrade guide; can you give some more info? I can't find these options in the web app running at port 5000. From the node list I can list the OSDs, but I don't see how to remove and/or add one. Thank you.

Just found this discussion, which has the answer: http://www.petasan.org/forums/?view=thread&id=313

Is it safe to delete the OSDs one by one and re-add them, to convert to bluestore? I do have some VMware datastores running in this cluster.

Adding and deleting OSDs is covered in the Admin guide on page 17; maybe it should be highlighted better.

Yes, it is safe to do it one at a time; make sure the cluster is active and clean before doing the next one, as the upgrade guide shows.
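For reference, a minimal sketch of that check from the command line, assuming your cluster is named petasan as in the example output later in this thread (substitute your own cluster name):

# wait for the cluster to settle before removing/re-adding the next OSD
ceph health --cluster=petasan     # should report HEALTH_OK
ceph status --cluster=petasan     # all PGs should show active+clean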

Hi admin,

I also want to delete all three OSDs from a node, but I cannot get the "physical disk list" web page to load: it stays on the spinning wheel for several minutes and then goes back to the login screen. Is there a command-line procedure to safely remove an OSD from a node?

Thanks and bye, S

PS: the 3 OSDs are already down and the cluster status is OK.
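For reference, a rough sketch of the generic upstream Ceph command sequence for removing an OSD that is already down, assuming a cluster named petasan and an OSD id N as placeholders. PetaSAN normally manages OSDs from its web application, so treat this as an illustration of the underlying Ceph steps rather than the officially supported path:

# generic Ceph steps to remove an already-down OSD (replace N with the OSD id)
ceph --cluster=petasan osd out osd.N            # mark it out (no-op if it is already out)
ceph --cluster=petasan osd crush remove osd.N   # remove it from the CRUSH map
ceph --cluster=petasan auth del osd.N           # delete its authentication key
ceph --cluster=petasan osd rm osd.N             # remove the OSD entry from the cluster
ceph --cluster=petasan status                   # wait for active+clean before the next OSD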

Hi,

What caused the 3 OSDs to go down?

Can you open the "physical disk list" page fine on the other nodes, with the exception of this problem node?

On this problem node, what is the output of:

ceph status --cluster xx   (replace xx with your cluster name)

ceph-disk list

The node went down due to a hardware issue, and as it was a monitoring node I replaced it with a VM with the same name and IP but without the 3 OSDs. Now the disks are not present anymore and I would like to remove the entries from the cluster.

root@petatest03:~# ceph status --cluster=petasan
  cluster:
    id:     8934312d-82e6-4cd1-8b71-01251c7c15f8
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum petatest01,petatest02,petatest03
    mgr: petatest01(active), standbys: petatest02, petatest03
    osd: 13 osds: 10 up, 10 in

  data:
    pools:   1 pools, 256 pgs
    objects: 7143 objects, 26978 MB
    usage:   65630 MB used, 2728 GB / 2792 GB avail
    pgs:     256 active+clean

root@petatest03:~# ceph-disk list
/dev/sda :
 /dev/sda1 other, ext4, mounted on /boot
 /dev/sda2 other, ext4, mounted on /
 /dev/sda3 other, ext4, mounted on /var/lib/ceph
 /dev/sda4 other, ext4, mounted on /opt/petasan/config
/dev/sr0 other, unknown
root@petatest03:~#

I cannot reach the disk list web page from any node.

Bye, S.

Can you please send the log file

/opt/petasan/log/PetaSAN.log

of the problem node and email it to contact-us @ petasan.org

Cheers.

File sent.

My feeling is that /etc/hosts got corrupted while joining the node. There is a lot of "Error on consul connection" in the logs, and consul is responsible, among other things, for updating /etc/hosts, so if this is the case we need to get consul running correctly.

First, check whether /etc/hosts looks ok on all 3 nodes. Also, can all nodes ping each other by hostname?

Can you run the following command on all 3 nodes:

consul members

Can all nodes ping each other on the IP addresses of management and backend-1, especially to/from the problem node? If you have problems, check your network setup, switches, etc. To know things are running fine, we should not see any "Error on consul connection" in the log. We need to fix consul first before modifying /etc/hosts, else it will be overridden and our cluster will not function correctly. Once you no longer have these errors, I will show you how to modify /etc/hosts and sync it via consul.
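A quick sketch of how to verify that the errors have stopped, using the log path and error string mentioned above:

# count the consul connection errors in the PetaSAN log (the count should stop growing once consul is healthy)
grep -c "Error on consul connection" /opt/petasan/log/PetaSAN.log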

Well, the /etc/hosts files are empty (length 0) on all 3 nodes, so ping by hostname doesn't work, while pings on all the numeric IP addresses (management and backend) work fine in all possible directions, so the network is OK.

Here's the command output:

root@petatest01:~# consul members
Node        Address             Status  Type    Build  Protocol  DC
petatest01  192.168.111.1:8301  alive   server  0.7.3  2         petasan
petatest02  192.168.111.2:8301  alive   server  0.7.3  2         petasan
petatest03  192.168.111.3:8301  alive   server  0.7.3  2         petasan


root@petatest02:~# consul members
Node        Address             Status  Type    Build  Protocol  DC
petatest01  192.168.111.1:8301  alive   server  0.7.3  2         petasan
petatest02  192.168.111.2:8301  alive   server  0.7.3  2         petasan
petatest03  192.168.111.3:8301  alive   server  0.7.3  2         petasan


root@petatest03:~# consul members
Node        Address             Status  Type    Build  Protocol  DC
petatest01  192.168.111.1:8301  alive   server  0.7.3  2         petasan
petatest02  192.168.111.2:8301  alive   server  0.7.3  2         petasan
petatest03  192.168.111.3:8301  alive   server  0.7.3  2         petasan
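For illustration only: a healthy /etc/hosts would be expected to map each hostname to an IP, roughly like the sketch below. The addresses shown are simply the ones from the consul output above, and whether PetaSAN writes the management or the backend addresses here is an assumption; per the earlier reply, the file is synced by consul, so it should not be hand-edited until the consul errors are gone.

# illustrative /etc/hosts layout only -- which interface's addresses belong here is an assumption
127.0.0.1       localhost
192.168.111.1   petatest01
192.168.111.2   petatest02
192.168.111.3   petatest03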
