
same naa on second cluster


Hi,

After setting up our second PetaSAN cluster, we have seen that the naa of the second cluster matches the naa of the first cluster (naa.60014050000100000000000000000000). It looks like this naa is always the same on every installation (for the first volume).

But do we run into trouble if we use the same naa within the same VMware cluster?

Or do we need to change the naa on one system? And if so, how can we do this?

The naa will be the same; we generate them based on the disk ids:

disk 00001 naa.60014050000100000000000000000000
disk 00023 naa.60014050002300000000000000000000
The 001405 prefix is generated by the Linux kernel.
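
To make the mapping concrete, here is a minimal Python sketch inferred from the two examples above; it illustrates the pattern only and is not the actual PetaSAN source:

# Inferred from the examples above, not PetaSAN source code:
# 32 hex digits = '6' (NAA type) + '001405' (Linux OUI) + the 5-digit
# disk id, right-padded with zeros to 25 digits.
def naa_for_disk(disk_id):
    return "naa.6001405" + disk_id.ljust(25, "0")

print(naa_for_disk("00001"))  # naa.60014050000100000000000000000000
print(naa_for_disk("00023"))  # naa.60014050002300000000000000000000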

This could lead to issues if the same client initiator is accessing 2 different PetaSAN clusters with identical disk ids. We did not anticipate this. There is no problem if, within your VMware cluster, some ESXi hosts use datastores from one cluster and other ESXi hosts use datastores from the other. If you absolutely need a single ESXi host to see both, you can make sure you use disks that do not have the same disk ids, but you need to be careful; you should also change the iqn base prefix so it is different between the two clusters.
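
If you do need a single ESXi host to see both clusters, a quick check like the following can confirm that no disk ids, and therefore no naa serials, collide; the disk-id sets are placeholders you would fill in from each cluster's disk list:

# Hypothetical sanity check: overlapping disk ids mean duplicate naa serials.
cluster_a_ids = {"00001", "00002", "00023"}  # placeholder: ids from cluster A
cluster_b_ids = {"00001", "00040"}           # placeholder: ids from cluster B

collisions = cluster_a_ids & cluster_b_ids
if collisions:
    print("duplicate naa for disk ids:", sorted(collisions))
else:
    print("no naa collisions")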

It should be possible for us to include a cluster identifier within the naa, but this would require stopping the disks during the upgrade and re-connecting the luns from the ESXi hosts. We can send you an update that does this within a few days if that works better for you.
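
For illustration only, such an identifier could be folded into the zero padding of the serial; the wwn_fsid_tag setting mentioned later in this thread suggests the tag is derived from the cluster's Ceph fsid, but the exact layout below is an assumption, not the real format:

# Assumed layout for illustration; the real tagged format is not shown in this thread.
import uuid

def naa_with_cluster_tag(disk_id, fsid):
    tag = uuid.UUID(fsid).hex[:8]  # e.g. first 8 hex digits of the Ceph fsid
    pad = 25 - len(disk_id) - len(tag)
    return "naa.6001405" + disk_id + tag + "0" * pad

# hypothetical fsid, for demonstration only
print(naa_with_cluster_tag("00001", "d5a9bc6e-1f2b-4e0a-9c3d-7a8b9c0d1e2f"))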

Thanks for the fast update. Yes, we need the naa to be unique, because we would like to have all clusters reachable within the same ESXi cluster.

Yes, the iqn was the first thing we changed, but we would still run into trouble because of the duplicated naa.

A cluster identifier would be great.

Stopping and starting the disks is not a problem, because we are waiting to go to production with the new cluster until we have changed the naa.

Do you have an update here?

Thanks for your help.

Should be by Monday.

Download the patch tool and the wwn patch from:

https://drive.google.com/drive/folders/1ZycfOSo0DAyPFsZ-o4Ic1kYH98-a9ze6?usp=sharing

Stop all disks and disconnect clients.

On all nodes, install the patch tool and apply the wwn patch:

dpkg -i patch_2.7.5-1ubuntu0.16.04.1_amd64.deb
patch -d /usr/lib/python2.7/dist-packages -p1 < wwn.patch

Restart the following services on all nodes:

systemctl restart petasan-iscsi
systemctl restart petasan-admin

From the admin UI, go to Configuration -> iSCSI Settings, then press the Submit button.
Check that the flag is saved correctly in Consul:

consul kv get PetaSAN/Config

This should print "wwn_fsid_tag": true at the end.
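
If you prefer to verify this programmatically, the same key can be read over Consul's standard HTTP KV API; this sketch assumes the default agent address 127.0.0.1:8500 and that the stored value is the JSON config shown above:

# Illustration: read PetaSAN/Config via Consul's HTTP KV API (?raw returns the value).
# Assumes a Consul agent on the default 127.0.0.1:8500; run with Python 3.
import json, urllib.request

raw = urllib.request.urlopen("http://127.0.0.1:8500/v1/kv/PetaSAN/Config?raw").read()
config = json.loads(raw)
print("wwn_fsid_tag enabled:", config.get("wwn_fsid_tag") is True)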

From now on, all newly created disks will have the cluster tag in their serial numbers. However, any existing disks will still use their old serial numbers when started. This is because we use Consul only during new disk creation and save all iSCSI data within the Ceph image metadata; this is done so that, in case of disaster, it is possible to start existing disks even if the Consul system is down.
To modify existing disks to use the new serial numbers, detach the disks (this will clear the Ceph metadata) and then re-attach them. Note that this can also change the ip information unless you specify it manually or auto-add in the same order.

If you add new nodes to the cluster, you need to apply the patch and restart the services on them before assigning new disks. You can also add them without the iSCSI role, then add the role to them once you have applied the patch.

We will include this patch in v2.1.

Thanks so much, works perfectly.

I have to bring back this topic because we have installed a new cluster with 2.2, and it looks like the wwn patch is not included: the naa is still the same as on our first cluster with 2.0, and VMware did not find the new volumes because of the duplicated naa.

Can you confirm this?

It is included but is not enabled by default for new clusters; you will need to enable it the same way. Your existing clusters which have this enabled will be upgraded with the setting on. We should probably have this setting in the UI, but for now you need to do it manually once.

How can we enable it?

consul kv get PetaSAN/Config shows "wwn_fsid_tag": false
