same naa on second cluster
BonsaiJoe
53 Posts
July 15, 2018, 7:26 pm
Hi,
after setting up our second PetaSAN cluster we have seen that the naa of the second cluster matches the naa of the first cluster (naa.60014050000100000000000000000000). It looks like this naa is always the same on every installation (first volume).
But do we run into trouble if we use the same naa within the same VMware cluster?
Or do we better change the naa on one system? And if so, how can we do this?
admin
2,930 Posts
July 16, 2018, 10:30 am
The naa will be the same; we generate them based on the disk ids:
disk 00001  naa.60014050000100000000000000000000
disk 00023  naa.60014050002300000000000000000000
The 001405 prefix is generated by the Linux kernel.
This could lead to issues if the same client initiator is accessing 2 different PetaSAN clusters with similar disk ids. We did not anticipate this. There is no problem if, within your VMware cluster, some ESXis use datastores from one cluster and other ESXis use datastores from the other. If you absolutely need a single ESXi to see both, you can make sure you use disks that do not have the same disk ids, but you need to be careful; you should also change the iqn base prefix so it is different between the two clusters.
It should be possible for us to include a cluster identifier within the naa, but this will require stopping the disks during such an upgrade and re-connecting the LUNs from the ESXis. We can send you an update that does this within a few days if this is better for you.
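For illustration only, not PetaSAN's actual code: a minimal shell sketch of how an naa like the ones above can be reconstructed from a 5-digit disk id, assuming the 6001405 prefix and zero padding shown in the examples (variable names are made up):
disk_id=00023                    # PetaSAN disk id, zero-padded to 5 digits
serial="6001405${disk_id}"       # 6 = NAA format digit, 001405 = the kernel-generated prefix mentioned above
while [ ${#serial} -lt 32 ]; do serial="${serial}0"; done   # pad to 32 hex digits
echo "naa.${serial}"             # prints naa.60014050002300000000000000000000
Two disks with the same id on two different clusters therefore produce identical naa values, which is exactly the collision discussed above.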
BonsaiJoe
53 Posts
July 16, 2018, 11:21 am
Thanks for the fast update. Yes, we need the naa to be unique because we would like to have all clusters reachable within the same ESX cluster.
Yes, the iqn was the first thing we changed, but we would still run into trouble because of the duplicated naa.
A cluster identifier would be great.
Stopping and starting the disks is not a problem because we are waiting to go to production with the new cluster until we have changed the naa.
BonsaiJoe
53 Posts
July 19, 2018, 1:48 pm
Do you have an update here?
Thanks for your help.
admin
2,930 Posts
July 19, 2018, 2:02 pm
Should be by Monday.
admin
2,930 Posts
July 23, 2018, 11:52 am
Download the patch tool and the wwn patch from:
https://drive.google.com/drive/folders/1ZycfOSo0DAyPFsZ-o4Ic1kYH98-a9ze6?usp=sharing
Stop all disks and disconnect clients.
On all nodes, install the patch tool and apply the wwn patch:
dpkg -i patch_2.7.5-1ubuntu0.16.04.1_amd64.deb
patch -d /usr/lib/python2.7/dist-packages -p1 < wwn.patch
Restart the following services on all nodes:
systemctl restart petasan-iscsi
systemctl restart petasan-admin
From the admin UI, go to Configuration -> iSCSI Settings and press the Submit button.
Check that the flag is saved correctly in consul:
consul kv get PetaSAN/Config
It should print "wwn_fsid_tag": true at the end.
From now on, all newly created disks will have the cluster tag in their serial numbers; however, any existing disks will still use their old serial numbers when started. This is because we use consul during new disk creation and save all iSCSI data within the Ceph image metadata, so that in case of disaster it is possible to start existing disks even if the consul system is down.
To modify existing disks to use new serial numbers, detach the disks (this will clear the Ceph metadata) and then re-attach them, but note this can also change the ip information unless you specify it manually or auto-add the disks in the same order.
If you add new nodes to the cluster, you need to apply the patch and restart the services on them before assigning new disks. You can also add them without the iSCSI role and then add the role once you have applied the patch.
We will include this patch in v2.1.
Last edited on July 23, 2018, 11:55 am by admin
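A hedged sketch of scripting the per-node steps above, assuming hypothetical hostnames node1..node3, root ssh access, and that both downloaded files are in the current directory; adapt to your environment:
for node in node1 node2 node3; do          # hypothetical node hostnames
    scp patch_2.7.5-1ubuntu0.16.04.1_amd64.deb wwn.patch root@${node}:/root/
    ssh root@${node} 'cd /root && \
        dpkg -i patch_2.7.5-1ubuntu0.16.04.1_amd64.deb && \
        patch -d /usr/lib/python2.7/dist-packages -p1 < wwn.patch && \
        systemctl restart petasan-iscsi petasan-admin'
done
# after pressing Submit on Configuration -> iSCSI Settings, confirm on any node:
consul kv get PetaSAN/Config | grep wwn_fsid_tag    # expect "wwn_fsid_tag": true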
BonsaiJoe
53 Posts
July 24, 2018, 11:33 am
Thanks so much, works perfectly.
BonsaiJoe
53 Posts
December 6, 2018, 11:00 pm
I have to bring this topic back up because we have installed a new cluster with 2.2 and it looks like the wwn patch is not included: the naa is still the same as on our first cluster with 2.0, and VMware did not find the new volumes because of the duplicated naa.
Can you confirm this?
admin
2,930 Posts
December 7, 2018, 9:27 am
It is included but is not enabled by default for a new cluster; you will need to enable it the same way. Your existing clusters which have this enabled will be upgraded with the setting on. We probably should have this setting in the UI, but for now you need to do it manually once.
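Assuming "the same way" refers to the procedure in the patch post above, enabling it on a new 2.2 cluster would amount to pressing Submit on Configuration -> iSCSI Settings in the admin UI and then re-checking the flag:
consul kv get PetaSAN/Config | grep wwn_fsid_tag    # should change from false to true after Submit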
BonsaiJoe
53 Posts
December 7, 2018, 3:19 pm
How can we enable it?
consul kv get PetaSAN/Config shows "wwn_fsid_tag": false