Difficulty recovering one of three servers after power outage
southcoast
50 Posts
October 30, 2018, 5:26 pm
OK. I have management set on my 192.168.0.0 LAN along with the clients. So, for my purposes anyway, should I set my iSCSI addresses on the same subnet as the management LAN?
admin
2,930 Posts
October 30, 2018, 5:38 pm
No, the iSCSI clients will connect on a different network than the management network.
For high availability, your clients will need 2 interfaces to set up MPIO; each interface will be on a different network with different IPs. We call these iSCSI subnet 1 and iSCSI subnet 2.
For non-HA, your clients will use 1 interface and connect to iSCSI subnet 1.
In either case, your iSCSI disks need to be on the same IP subnets as your clients.
Last edited on October 30, 2018, 5:39 pm by admin · #52
southcoast
50 Posts
October 30, 2018, 5:49 pm
Alright. In my circumstance, with many clients and few management hosts, would it therefore make sense from PetaSAN's perspective to switch my addressing so that the interfaces supporting iSCSI access are all in the 192.168.0.0/24 (eth0) IP subnet and assign management to a new IP subnet, or just swap the assignments I have?
Thank-you
admin
2,930 Posts
October 30, 2018, 9:08 pm
If it makes sense to you, yes. This is really up to you.
southcoast
50 Posts
October 30, 2018, 10:22 pm
More to the point, if I want to reassign the IP addressing of my management functions, do I need to execute a re-install of the PetaSAN application?
admin
2,930 Posts
October 30, 2018, 10:46 pm
You can change them in:
/opt/petasan/config/cluster_info # subnets, nics, nic mapping
/opt/petasan/config/node_info # node ip information
It could be quicker to re-install, though.
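If you do go the in-place route, the edit can be scripted. Below is a minimal sketch that assumes the files are flat JSON and backs them up first, as suggested later in this thread. The key names inside cluster_info and node_info are not shown here, so the function and the example key are placeholders; inspect the real files before editing:

```python
import json
import shutil

def update_config_key(path, key, new_value):
    """Back up a JSON config file, then rewrite one top-level field.

    PetaSAN keeps cluster/node settings in JSON files such as
    /opt/petasan/config/cluster_info; the key names passed in are
    hypothetical -- check the actual file contents before editing.
    """
    shutil.copy(path, path + ".bak")      # keep a backup in case of a mistake
    with open(path) as f:
        cfg = json.load(f)
    cfg[key] = new_value                  # e.g. a subnet or node IP field
    with open(path, "w") as f:
        json.dump(cfg, f, indent=2)
    return cfg
```

The same edit would need to be applied consistently on every node, followed by a reboot, which is why a re-install may still be the quicker option.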
southcoast
50 Posts
October 30, 2018, 11:18 pm
OK. No reinstall is definitely better, thanks. So, update the indicated JSON files, then reboot each server, correct?
Thank-you
admin
2,930 Posts
October 30, 2018, 11:32 pm
Yes. You may want to back up your JSON config files in case you make a mistake. I still think a re-install will be quicker in your case.
southcoast
50 Posts
November 1, 2018, 1:23 am
I re-installed PetaSAN on the 3 servers, designated my management port eth0 in network 10.0.20.0/24, and established the needed VLANs and routing to access the servers on port 5001 for setup and port 5000 for operation. I had specified the iSCSI 1 port as eth1 in the 192.168.0.0/24 IP scheme the client devices are in now. When the servers came up, the eth1 port had been set with the backend scheme. When I tried to set the iSCSI 1 disks on node 1 in the 192.168.0.0/24 scheme, the server went offline and was no longer accessible, either via UI access on port 5000 or ssh access on port 22. Node 3 did the same, only for port 22, but did recover after some minutes.
From the console I can ping the next hop and each of the other two servers in the cluster. Node 2 and 3's iSCSI disk list shows:
Active Paths
Disk 00001

| IP | Assigned Node |
| --- | --- |
| 10.0.3.100 | peta-san-03 |
| 192.168.0.160 | peta-san-01 |
In this configuration, how do I set eth1 in the 192.168.0.0/24 scheme and not get a repeat of this behavior?
admin
2,930 Posts
November 1, 2018, 5:08 am
As I understand it, from where you are you can access node 2 without issues, but cannot ping node 1, ssh to it, or access its UI on port 5000.
Can you try the following: ssh to node 2, then while in node 2 try to:
ping node_1_management_ip
ping node_1_backend_1_ip
ping node_1_hostname
Access its UI on port 5000 using:
wget node_1_management_ip:5000
Open ssh to node 1 (while still in node 2):
ssh node_1_management_ip
Do the above work/fail?
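The port checks above can also be scripted from node 2. Here is a minimal standard-library sketch that substitutes a raw TCP connect for wget; it only tells you whether the UI (port 5000) or sshd (port 22) is accepting connections, not that the service behind it is healthy, and the IP in the usage comment is a hypothetical example:

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds.

    A quick stand-in for 'wget node_1_management_ip:5000': success
    proves only that something is listening, not that the UI works.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical management IP of node 1):
# tcp_reachable("10.0.20.11", 5000)   # PetaSAN UI
# tcp_reachable("10.0.20.11", 22)     # sshd
```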