
ADD NODE OSD and MON on PETASAN

Hello,

I want to know how to add an OSD node, a MON node, and a disk, and how to delete them. Can you guide me?

This is in the admin guide:

The mon role is assigned to the first 3 nodes. These nodes can also serve other roles such as OSD storage and iSCSI.

You add new OSD nodes by joining new nodes (beyond the first 3).

You add OSD disks by physically adding the disks, then from the UI adding each disk as an OSD or journal.

For deletion:

A failed OSD disk can be deleted from the UI.

A failed OSD node can be deleted from the UI.

A failed mon node can be "replaced" by a new node from the UI.

You can also use the CLI command line to do anything you want.
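If you prefer the command line, a quick way to see the current layout from any node is with the standard Ceph commands below (shown only as a reference, the UI reflects the same information):

ceph osd tree    # lists the OSD nodes and their OSD disks with up/down status
ceph mon stat    # shows which nodes hold the mon role and the current quorum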

1. A failed OSD node can be deleted from the UI.

I don't see a button that can delete the OSD node. Is there a command line?

2. A failed mon node can be "replaced" by a new node from the UI.

I have four nodes; the first three are management nodes. After I shut down one of the original management nodes, the management checkbox in front of my fourth (OSD) node cannot be selected, so the fourth node cannot become a management node.

3. You can also use the CLI command line to do anything you want.

In addition, there is no documentation for the command line / CLI. If you can, please send it to my mailbox: 171293193@qq.com
Thank you

 

A down node will show up in red as "down" in the Node List. If it is a non-management node, there will be a delete button. If it is a management node, there will be no delete button; you have to replace it with a new node, choosing "Replace Management Node" when deploying the new node.

For the CLI: PetaSAN nodes run Ubuntu 16.04 (we will be moving to 18.04 soon), so standard Linux commands are available. You can also run Ceph and iSCSI/LIO commands (such as targetcli) if you know what you are doing.
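For example, removing a dead OSD by hand can be done with the standard Ceph commands, roughly as follows (just a sketch; the OSD id 7 is only an example):

ceph osd out osd.7              # stop Ceph from placing data on this OSD
systemctl stop ceph-osd@7       # on the OSD's node, if it is still reachable (standard systemd unit name)
ceph osd crush remove osd.7     # remove it from the CRUSH map
ceph auth del osd.7             # delete its authentication key
ceph osd rm osd.7               # remove it from the cluster

You can run ceph osd tree afterwards to confirm it is gone.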

I have three management nodes, each with two OSDs, and the status is normal. I reinstalled one of the management nodes; the OSDs on this reinstalled node had not been removed from the cluster. I deployed it using the "Replace Management Node" method and this succeeded. But when I go to Nodes -> Node List -> petasan01 -> Physical Disk List on a management node to display its OSD disks, the page just keeps loading and the OSD disks are never displayed. The physical disk list of the fourth (storage/OSD) node can be opened, but the disk lists of the three management nodes cannot.

 

So you can only see the disk list of node 4, but not of the first 3?

If you log in on node 2 or 3, do you also have the same problem?

On node 1, do you see all nodes defined in /etc/hosts?

Thank you very much. In /etc/hosts, only the IP of the petasan04 node is defined (172.16.110.173 petasan04). How do I modify it? I added the other nodes manually on node 1, but after some time the file reverted to just the node 4 entry, 172.16.110.173 petasan04.

This is because it is being re-synced from the Consul configuration system. To fix it:

systemctl stop petasan-file-sync
# edit /etc/hosts, then
/opt/petasan/scripts/util/sync_file.py /etc/hosts
systemctl start petasan-file-sync
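For reference, after the fix /etc/hosts should list all the cluster nodes, something like the following (the first three IPs are placeholders; only the petasan04 entry is taken from your output):

172.16.110.170 petasan01    # placeholder IP
172.16.110.171 petasan02    # placeholder IP
172.16.110.172 petasan03    # placeholder IP
172.16.110.173 petasan04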

We have seen this issue before; it could be due to a problem during the deployment of node 4, or maybe the deployment tried to modify the hosts file at the moment it was being synced from Consul. I have logged it as a bug to see if we can catch these cases.

I have 4 nodes, 3 of them management nodes. Now one of my management nodes is dead. How can I make the 4th node become a management node (without reinstallation)? Many thanks!

To replace a management node, you need to install and deploy a node with "Replace Management Node"; you should give it the same hostname and management IP as the dead node. If the old node had OSD disks, you can put them in the new node, since only the OS disk gets installed.

Technically it should be possible to re-assign another node to take over the management services, but there are several components (mon, mgr, mds, consul, gluster, stats, etc.) that would need to be reconfigured to use the new hostname/IP together with the rest of the management nodes, and this is not straightforward. It is safer to re-deploy a new node with the same values already configured.
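Once the replacement node has joined, a quick sanity check from the CLI of any surviving management node could look like this (standard Ceph and Consul commands, assuming the local Consul agent is running as it normally does on PetaSAN management nodes):

ceph -s          # cluster health; should report 3 mons in quorum again
consul members   # the replacement node should be listed as a member

The UI Node List should also show the node as up once these services rejoin.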