PetaSAN test cluster: modifying IPs
yangsm
31 Posts
March 4, 2019, 3:12 am
When this PetaSAN test cluster was planned, the IP addressing was not well thought out. I now need to change the Management Subnet, Backend 1 Subnet, and Backend 2 Subnet in the cluster to new ones. Can you help me? Thank you.
Management Subnet: 172.16.110.170, 172.16.110.171, 172.16.110.172 to be changed to 192.168.110.170, 192.168.110.171, 192.168.110.172
Backend 1 Subnet: 172.16.120.170, 172.16.120.171, 172.16.120.172 to be changed to 192.168.120.170, 192.168.120.171, 192.168.120.172
Backend 2 Subnet: 172.16.140.170, 172.16.140.171, 172.16.140.172 to be changed to 192.168.140.170, 192.168.140.171, 192.168.140.172
admin
2,921 Posts
March 4, 2019, 7:16 am
With the exception of Backend 1, the other config (IPs/NICs/bonds) can be changed from the config file.
Backend 1 is used by the Ceph monitors and Consul, and these cannot be easily changed once set.
yangsm
31 Posts
March 4, 2019, 8:53 am
Which files mainly need to change for the Management Subnet and Backend 2 Subnet? Can you elaborate? In addition, Backend 1 Subnet is used mainly by the Ceph monitors and Consul. The Ceph monitor IPs should be changeable with Ceph's own commands, but I don't understand Consul and don't know where to modify it. If you have time, sorry to trouble you.
admin
2,921 Posts
March 4, 2019, 12:45 pm
There are 2 files per node:
/opt/petasan/config/cluster_info.json
/opt/petasan/config/node_info.json
The first is for cluster-wide settings (NICs/bonds/subnets), the second for the existing node's IPs.
Again, all of it can be changed with the exception of the Backend 1 IP, since it is used by the Ceph monitors and the Consul server, which are not straightforward to change; search online for how/if you can change them. It is also used by the Gluster server (for node stats data), but that should be easier to change.
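A quick way to sanity-check these files before and after hand-editing is to pretty-print them (a minimal sketch; Python is already present on PetaSAN nodes, but the exact field names inside the files vary by PetaSAN version):
# pretty-print the cluster-wide settings
python -m json.tool /opt/petasan/config/cluster_info.json
# pretty-print this node's IP assignments
python -m json.tool /opt/petasan/config/node_info.json
# after editing, confirm the file still parses as valid JSON
python -m json.tool /opt/petasan/config/node_info.json > /dev/null && echo "valid JSON"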
yangsm
31 Posts
March 6, 2019, 8:46 am
I just added a new node today, for storage only. I manually changed the node's Management Subnet IP, Backend 1 Subnet IP, and Backend 2 Subnet IP. Currently the cluster is normal, but the interface at Manage Nodes -> Node List -> Management IP still shows the old management IP 172.16.110.170, and I don't know where to change it so that it displays my modified management IP 172.16.110.167. Also, regarding the Gluster server you mentioned, can we use it to make copies of iSCSI volumes?
admin
2,921 Posts
March 6, 2019, 3:07 pm
The list of cluster nodes is stored in Consul; you can view the list with:
consul kv get -recurse PetaSAN/Nodes
To save the info for one host:
consul kv get PetaSAN/Nodes/host7 > host7.json
cat host7.json
{
"backend_1_ip": "10.0.4.17",
"backend_2_ip": "10.0.5.17",
"is_iscsi": true,
"is_management": true,
"is_storage": true,
"management_ip": "10.0.1.17",
"name": "host7"
}
You can modify it and put it back via:
consul kv put PetaSAN/Nodes/host7 @host7.json
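For example, to change only the management IP in the saved JSON before putting it back, a minimal sketch (this assumes the jq tool is installed; the management_ip field name matches the output shown above):
# rewrite the management_ip field and store the result
jq '.management_ip = "172.16.110.167"' host7.json > host7_new.json
consul kv put PetaSAN/Nodes/host7 @host7_new.json
# verify the stored value
consul kv get PetaSAN/Nodes/host7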
The GlusterFS is used as a small shared file system, mainly to store the stats data for the charts; it is needed to provide HA for the stats service. It is meant to be a small 15 GB partition, so you should not use it for anything large.
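If you want to see this shared file system on a node (the mount point name may differ by version):
mount | grep -i gluster
df -h | grep -i gluster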
In PetaSAN 2.4 we will be adding backup/restore of images.
yangsm
31 Posts
March 7, 2019, 6:00 am
Today the cluster's Backend 1 Subnet 172.16.120.170, 172.16.120.171, 172.16.120.172 was changed to 172.16.90.170, 172.16.90.171, 172.16.90.172.
Backend 2 Subnet 172.16.140.170, 172.16.140.171, 172.16.140.172 was changed to 172.16.150.170, 172.168.150.171, 172.168.150.172.
I also modified Ceph's mon IPs:
root@petasan01:/opt# monmaptool --print ./monmap.map
monmaptool: monmap file ./monmap.map
epoch 0
fsid d79fafc9-6473-48f0-a18c-3bd66fb81abc
last_changed 2019-03-07 10:12:24.643304
created 2019-03-07 10:12:24.643304
0: 172.16.90.170:6789/0 mon.noname-b
1: 172.16.90.171:6789/0 mon.noname-c
2: 172.16.90.172:6789/0 mon.noname-a
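For reference, the usual Ceph procedure for re-addressing monitors is roughly the following sketch; the monitor name and file paths here are illustrative, and the monitors must be stopped while the edited map is injected:
# extract the current monmap from one (stopped) monitor
ceph-mon -i petasan01 --extract-monmap ./monmap.map
# remove a monitor entry and re-add it with its new Backend 1 IP
monmaptool --rm noname-a ./monmap.map
monmaptool --add noname-a 172.16.90.172:6789 ./monmap.map
# inject the edited map into each stopped monitor
ceph-mon -i petasan01 --inject-monmap ./monmap.map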
When I changed the mons, I also changed the host information values with consul kv put. But now reading the host information from Consul no longer works: consul kv get PetaSAN/Nodes/petasan01 > petasan01.json shows "Error querying Consul agent: Unexpected response code: 500". The Consul cluster appears to be corrupted; the web page can no longer be accessed and shows the following error: PetaSAN.core.consul.ps_consul.RetryConsulException
RetryConsulException
Traceback (most recent call last)
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 1813, in wsgi_app
    ctx.push()
  File "/usr/lib/python2.7/dist-packages/flask/ctx.py", line 321, in push
    self.session = self.app.open_session(self.request)
  File "/usr/lib/python2.7/dist-packages/flask/app.py", line 825, in open_session
    return self.session_interface.open_session(self, request)
  File "/usr/lib/python2.7/dist-packages/PetaSAN/core/security/session.py", line 127, in open_session
    _exp = consul_base_api.read_value("".join( [consul_session_key,sid,"/_exp"]))
  File "/usr/lib/python2.7/dist-packages/PetaSAN/core/consul/base.py", line 29, in read_value
    index, data = cons.kv.get(key)
  File "/usr/local/lib/python2.7/dist-packages/consul/base.py", line 391, in get
    callback, '/v1/kv/%s' % key, params=params)
  File "/usr/local/lib/python2.7/dist-packages/retry/compat.py", line 16, in wrapper
    return caller(f, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/retry/api.py", line 74, in retry_decorator
    logger)
  File "/usr/local/lib/python2.7/dist-packages/retry/api.py", line 33, in __
From petasan.log:
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/PetaSAN/core/consul/api.py", line 448, in get_key
    index, data = cons.kv.get(key)
  File "/usr/local/lib/python2.7/dist-packages/consul/base.py", line 391, in get
    callback, '/v1/kv/%s' % key, params=params)
  File "/usr/local/lib/python2.7/dist-packages/retry/compat.py", line 16, in wrapper
    return caller(f, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/retry/api.py", line 74, in retry_decorator
    logger)
  File "/usr/local/lib/python2.7/dist-packages/retry/api.py", line 33, in __retry_internal
    return f()
  File "/usr/lib/python2.7/dist-packages/PetaSAN/core/consul/ps_consul.py", line 71, in get
    raise RetryConsulException()
RetryConsulException
07/03/2019 13:21:38 WARNING , retrying in 16 seconds...
07/03/2019 13:21:42 ERROR
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/PetaSAN/backend/file_sync_manager.py", line 75, in sync
    index, data = base.watch(self.root_path, current_index)
  File "/usr/lib/python2.7/dist-packages/PetaSAN/core/consul/base.py", line 49, in watch
    return cons.kv.get(key, index=current_index, recurse=True)
  File "/usr/local/lib/python2.7/dist-packages/consul/base.py", line 391, in get
    callback, '/v1/kv/%s' % key, params=params)
  File "/usr/local/lib/python2.7/dist-packages/retry/compat.py", line 16, in wrapper
    return caller(f, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/retry/api.py", line 74, in retry_decorator
    logger)
  File "/usr/local/lib/python2.7/dist-packages/retry/api.py", line 33, in __retry_internal
    return f()
  File "/usr/lib/python2.7/dist-packages/PetaSAN/core/consul/ps_consul.py", line 71, in get
    raise RetryConsulException()
RetryConsulException
07/03/2019 13:21:44 ERROR
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/PetaSAN/backend/iscsi_service.py", line 642, in handle_cluster_startup
    result = consul_api.set_leader_startup_time(current_node_name, str(i))
  File "/usr/lib/python2.7/dist-packages/PetaSAN/core/consul/api.py", line 372, in set_leader_startup_time
    return consul_obj.kv.put(ConfigAPI().get_consul_leaders_path()+node_name,minutes)
  File "/usr/local/lib/python2.7/dist-packages/consul/base.py", line 459, in put
    '/v1/kv/%s' % key, params=params, data=value)
  File "/usr/local/lib/python2.7/dist-packages/retry/compat.py", line 16, in wrapper
    return caller(f, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/retry/api.py", line 74, in retry_decorator
    logger)
  File "/usr/local/lib/python2.7/dist-packages/retry/api.py", line 33, in __retry_internal
    return f()
  File "/usr/lib/python2.7/dist-packages/PetaSAN/core/consul/ps_consul.py", line 82, in put
    raise RetryConsulException()
RetryConsulException
07/03/2019 13:21:45 WARNING , retrying in 8 seconds...
07/03/2019 13:21:51 WARNING , retrying in 1 seconds...
07/03/2019 13:21:53 WARNING , retrying in 1 seconds...
07/03/2019 13:21:59 WARNING , retrying in 2 seconds...
07/03/2019 13:22:01 WARNING , retrying in 16 seconds...
07/03/2019 13:22:01 ERROR
07/03/2019 13:22:01 WARNING , retrying in 2 seconds...
07/03/2019 13:22:01 INFO GlusterFS mount attempt
07/03/2019 13:22:09 WARNING , retrying in 4 seconds...
07/03/2019 13:22:10 WARNING , retrying in 4 seconds...
admin
2,921 Posts
March 7, 2019, 6:46 am
Yes, this is why in my earlier replies I said you can change all the config with the exception of the Backend 1 IP address. The Backend 1 IP is used by Ceph, Consul, and Gluster, and changing it is non-trivial at best. You can try to search online for how to do it for these systems, but it is not something we support or recommend; rather, we recommend against it.
My previous post on changing the node data stored in Consul was for the management IP, as per your inquiry.
My recommendation is to revert to the original values.
yangsm
31 Posts
March 7, 2019, 9:57 am
Thank you, and sorry for all the trouble; I am now ready to restore the original values.
In addition, I found that the modified management address shows correctly on the petasan01 host's console screen:
Node is part of PetaSAN cluster ceph
Cluster management URLs:
http://172.16.110.167:5000/
http://172.16.110.171:5000/
http://172.16.110.172:5000/
On the other two hosts, petasan02 and petasan03, the console screen still displays the old management IP address:
Node is part of PetaSAN cluster ceph
Cluster management URLs:
http://172.16.110.170:5000/
http://172.16.110.171:5000/
http://172.16.110.172:5000/
The management IP of petasan01, 172.16.110.170, was changed to 172.16.110.167.
admin
2,921 Posts
March 7, 2019, 12:09 pm
Make sure the config files are updated on all the nodes:
/opt/petasan/config/cluster_info.json
/opt/petasan/config/node_info.json
Also change the management IP in /etc/network/interfaces.
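As an illustration only, the management stanza in /etc/network/interfaces would look something like this; the interface name, netmask, and gateway below are assumptions, not values taken from this cluster:
auto eth0
iface eth0 inet static
    address 172.16.110.167
    netmask 255.255.255.0
    gateway 172.16.110.1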
Lastly, update /etc/hosts on one node and broadcast/sync it via Consul to the other nodes:
# stop auto sync service
systemctl stop petasan-file-sync
# manual fix hosts file
nano /etc/hosts
# sync the hosts file to all nodes:
/opt/petasan/scripts/util/sync_file.py /etc/hosts
# restart the sync service on current node
systemctl start petasan-file-sync
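For reference, the /etc/hosts entries being fixed would look roughly like this, using the hostnames and management IPs from this thread:
172.16.110.167    petasan01
172.16.110.171    petasan02
172.16.110.172    petasan03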