Node interfaces do not match cluster interface settings.
admin
2,930 Posts
September 13, 2017, 4:22 pm
The config file path is:
/opt/petasan/config/cluster_info.json
In your case you will need to change the value of "iscsi_2_eth_name". The file is self-explanatory; here is an example demo file:
{
    "backend_1_base_ip": "10.0.4.0",
    "backend_1_eth_name": "eth3",
    "backend_1_mask": "255.255.255.0",
    "backend_2_base_ip": "10.0.5.0",
    "backend_2_eth_name": "eth4",
    "backend_2_mask": "255.255.255.0",
    "bonds": [],
    "eth_count": 5,
    "iscsi_1_eth_name": "eth1",
    "iscsi_2_eth_name": "eth2",
    "jumbo_frames": [],
    "management_eth_name": "eth0",
    "management_nodes": [
        {
            "backend_1_ip": "10.0.4.11",
            "backend_2_ip": "10.0.5.11",
            "is_iscsi": true,
            "is_management": true,
            "is_storage": true,
            "management_ip": "10.0.1.11",
            "name": "ps-node-01"
        },
        {
            "backend_1_ip": "10.0.4.12",
            "backend_2_ip": "10.0.5.12",
            "is_iscsi": true,
            "is_management": true,
            "is_storage": true,
            "management_ip": "10.0.1.12",
            "name": "ps-node-02"
        },
        {
            "backend_1_ip": "10.0.4.13",
            "backend_2_ip": "10.0.5.13",
            "is_iscsi": true,
            "is_management": true,
            "is_storage": true,
            "management_ip": "10.0.1.13",
            "name": "ps-node-03"
        }
    ],
    "name": "demo"
}
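As a minimal sketch of that edit (not from the thread): the change can be scripted instead of hand-editing the JSON. The new NIC name "eth5" below is only a placeholder; use whatever interface you actually want iSCSI 2 bound to.

# Minimal sketch: update the iSCSI 2 interface name in the PetaSAN
# cluster config. "eth5" is a placeholder, not a value from the thread.
import json

CONF = "/opt/petasan/config/cluster_info.json"

with open(CONF) as f:
    info = json.load(f)

info["iscsi_2_eth_name"] = "eth5"  # replace with the NIC you want

with open(CONF, "w") as f:
    json.dump(info, f, indent=2, sort_keys=True)

Repeat the edit on each node and reboot so the new mapping takes effect.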
erazmus
40 Posts
October 7, 2017, 8:01 pm
Quote from admin: "The config file path /opt/petasan/config/cluster_info.json ... in your case you will need to change the value for "iscsi_2_eth_name""
I edited this file on one node, then rebooted it. I then edited the file on a node that was hosting an iSCSI target and rebooted it. The iSCSI target moved to the first node, but iscsi_2 is still bound to the first card, not the second. Is there a second file that I need to edit?
admin
2,930 Posts
October 8, 2017, 12:02 pm
The changes will work on newly created iSCSI disks. For existing disks it is a little messier: you will need to stop/detach/attach/start each disk; sorry, I missed this. When re-attaching the disks, you need to specify their IP addresses manually, or, if you had used auto IP assignment, re-add your disks in sequence so they get their original IPs.
So it is a little messier than I thought, but in all cases some cluster downtime is expected. The logic behind attach/detach is that all the iSCSI information (IQN, IPs, NICs) is stored in Ceph as metadata associated with the data image, so the old NIC name is still stored within existing Ceph images. We do this so that if, heaven forbid 🙂, the PetaSAN/iSCSI/Consul layer above Ceph is removed or destroyed, all data remains internal to the Ceph image, and in a disaster situation you can still quickly restart your disks with simple scripts. Detaching clears the iSCSI metadata from the Ceph image; attaching populates the iSCSI metadata onto an image, whether it was created in PetaSAN or is an external Ceph image being imported into PetaSAN.
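Since the thread says the iSCSI details live as metadata on the Ceph image, one way to check what an existing disk still carries is to list the rbd image metadata. This is only a hedged sketch: the pool and image names below are placeholders, and whether PetaSAN stores these settings under keys visible to rbd image-meta is an assumption, not something confirmed in the thread.

# Hedged sketch: list whatever metadata is attached to an rbd image.
# "rbd" (pool) and "image-00001" (image) are placeholder names.
import subprocess

pool, image = "rbd", "image-00001"
out = subprocess.check_output(
    ["rbd", "image-meta", "list", pool + "/" + image])
print(out.decode())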