Add OSD and MON nodes on PetaSAN
tuocmd
54 Posts
May 15, 2018, 1:15 am
Hello,
I want to know how to add an OSD node, a MON node, and a disk, and how to delete them. Can you guide me?
admin
2,921 Posts
May 15, 2018, 2:59 pm
This is in the admin guide:
The MON role is assigned to the first 3 nodes. These nodes can also serve other roles, such as OSD storage and iSCSI.
You add new OSD nodes by joining new nodes (beyond the first 3).
You add OSD disks by physically adding disks, then adding each disk as an OSD or journal from the UI.
For deletion:
A failed OSD disk can be deleted from the UI.
A failed OSD node can be deleted from the UI.
A failed MON node can be "replaced" by a new node from the UI.
You can also use the command line (CLI) to do anything you want; see the example below.
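For illustration only: the UI is the supported way to delete a failed OSD, but if you prefer the CLI, one common Ceph sequence for removing a dead OSD looks like the following (osd.5 is just an example ID; adjust it to your cluster and make sure the remaining OSDs can absorb the rebalanced data):
ceph osd out osd.5            # mark the OSD out so its data is rebalanced away
systemctl stop ceph-osd@5     # stop the OSD daemon on its node, if it is still running
ceph osd crush remove osd.5   # remove it from the CRUSH map
ceph auth del osd.5           # delete its authentication key
ceph osd rm osd.5             # finally remove the OSD from the cluster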
yangsm
31 Posts
January 6, 2019, 4:55 am
1. "A failed OSD node can be deleted from the UI"
I don't see a button that can delete the OSD node. Is there a command line for this?
2. "A failed MON node can be 'replaced' by a new node from the UI"
I have four nodes; the first three are management nodes. After I shut down one of the first three management nodes, the management-node checkbox for my fourth OSD node cannot be selected, so the fourth node cannot become a management node.
3. "You can also use the CLI to do anything you want."
Also, there is no documentation for the CLI. If you can, please send it to my mailbox: 171293193@qq.com
Thank you
admin
2,921 Posts
January 6, 2019, 12:11 pm
A down node will show up in red as down in the Node List. If it is a non-management node, there will be a delete button. If it is a management node, there will be no delete button; you have to replace it with a new node, choosing "Replace Management Node" when deploying the new node.
For the CLI: PetaSAN nodes run Ubuntu 16.04 (we will be moving to 18.04 soon), so standard Linux commands work, and you can also run Ceph and iSCSI/LIO commands (such as targetcli) if you know what you are doing. A few examples are shown below.
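As a rough illustration (assuming a standard Ceph and LIO install, which PetaSAN uses), some typical read-only commands you can run from a node shell are:
ceph status      # overall cluster health and status summary
ceph osd tree    # OSD layout and up/down state per node
ceph df          # raw and per-pool capacity usage
targetcli ls     # inspect the iSCSI/LIO target configuration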
Last edited on January 6, 2019, 12:11 pm by admin · #4
yangsm
31 Posts
January 15, 2019, 7:09 am
I have three management nodes, each with two OSDs, and the status is normal. I reinstalled one of the management nodes; the OSDs on this reinstalled node were not removed from the cluster. I used the "Replace Management Node" method on this node and it succeeded. But when I go to Nodes > Node List > petasan01 > Physical Disk List and try to view the OSD disks of a management node, the page keeps loading forever and the OSD disks are never displayed. The page for the fourth node, which is a storage/OSD node, opens fine; only the disk pages of the three management nodes cannot be opened.
admin
2,921 Posts
January 15, 2019, 3:09 pm
So you can only see the disk list of node 4, but not of the first 3?
If you log in on node 2 or 3, do you have the same problem?
On node 1, do you see all nodes defined in /etc/hosts?
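For reference, a healthy /etc/hosts should contain an entry for every cluster node. The first three IPs below are made-up examples; only the petasan04 entry comes from this thread:
cat /etc/hosts
# 172.16.110.170 petasan01
# 172.16.110.171 petasan02
# 172.16.110.172 petasan03
# 172.16.110.173 petasan04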
yangsm
31 Posts
January 16, 2019, 2:08 am
Thank you very much. On all the nodes, /etc/hosts only defines the IP of the petasan04 node: 172.16.110.173 petasan04. How can I fix it? I added the missing entries manually on node 1, but after some time the file reverted to containing only the node 4 entry, 172.16.110.173 petasan04.
admin
2,921 Posts
January 16, 2019, 7:59 am
This is because the file is being re-synced from the Consul configuration system. To fix it:
systemctl stop petasan-file-sync                   # stop the PetaSAN file sync service
# edit /etc/hosts, then sync the corrected file so it is not overwritten again:
/opt/petasan/scripts/util/sync_file.py /etc/hosts
systemctl start petasan-file-sync                  # restart the sync service
We have seen this issue before; it could be due to a problem during deployment of node 4, or maybe the hosts file was being modified at the moment it was syncing from Consul. I have logged it as a bug to see if we can catch these cases.
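To double-check the fix afterwards (ordinary commands, nothing assumed beyond the service name above):
systemctl status petasan-file-sync   # confirm the sync service is running again
cat /etc/hosts                       # re-check after a few minutes that your entries were not overwritten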
lacdan92
2 Posts
December 14, 2020, 2:12 am
I have 4 nodes, 3 of them management nodes. Now one of my management nodes is dead. How can I make the 4th node become a management node (without reinstallation)? Many thanks!
admin
2,921 Posts
December 14, 2020, 1:15 pm
To replace a management node, you need to install a new node and deploy it with "Replace Management Node", giving it the same hostname and management IP as the old node. If the old node had OSD disks, you can put them in the new node; only the OS disk gets reinstalled.
Technically it should be possible to re-assign another node to take over the management services, but several components (mon, mgr, mds, consul, gluster, stats, etc.) would need to be reconfigured to use the new hostname/IP together with the remaining management nodes, so it is not straightforward. It is safer to re-deploy a new node with the same values already configured.
Last edited on December 14, 2020, 1:17 pm by admin · #10