Upgrading NICs
protocol6v
85 Posts
March 12, 2019, 5:10 pm
Hello,
We have a four-node cluster, with each node using a 4-port 10GbE card for networking. We've upgraded our core network and want to replace the 4x 10GbE card with a 2x 40GbE card. Is this something that can be done on an existing cluster?
Thanks!
admin
2,930 Posts
March 13, 2019, 5:06 pm
You can change the NIC settings in
/opt/petasan/config/cluster_info.json
The cluster needs to be halted and restarted for the change to take effect.
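For illustration only, a minimal Python sketch of that edit (run on each node while the cluster is down; the "bond1"/"eth4" names are placeholders for your old bond and new 40GbE interface, so inspect your actual cluster_info.json before writing anything):

import json

PATH = "/opt/petasan/config/cluster_info.json"
OLD_IF, NEW_IF = "bond1", "eth4"  # placeholders: old bond name, new 40GbE interface

with open(PATH) as f:
    info = json.load(f)

# walk the config and rename the interface wherever it appears as a value
def rename(obj):
    if isinstance(obj, dict):
        return {k: rename(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [rename(v) for v in obj]
    return NEW_IF if obj == OLD_IF else obj

with open(PATH, "w") as f:
    json.dump(rename(info), f, indent=2)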
Another thing: we save all iSCSI info, including paths and NICs, in the Ceph image metadata, so that has to be modified for all disks. This can be done via detach/attach on each disk (but if a disk uses auto IPs, the IPs may change). Another option is to use the disk_meta.py tool below to alter the metadata yourself (the tool will ship in v2.3):
Download it from
https://drive.google.com/file/d/1O82Nw4-1grE6QIA460DQdor8NNWjdRvh/view?usp=sharing
and place it in /opt/petasan/scripts/util
Get the disk's iSCSI metadata as a JSON file:
/opt/petasan/scripts/util/disk_meta.py rd -image image-00001 -pool rbd > meta.json
Modify the NICs, then save:
/opt/petasan/scripts/util/disk_meta.py wr -image image-00001 -pool rbd -file meta.json
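As a sketch, the "modify the NICs" step could look like this in Python. The "paths" and "eth" key names and the interface names are assumptions here, so check them against your own meta.json output first:

import json

OLD_IF, NEW_IF = "bond1", "eth4"  # placeholders for the old and new interface names

with open("meta.json") as f:
    meta = json.load(f)

# assumption: each iSCSI path entry stores the interface it binds to under "eth"
for path in meta.get("paths", []):
    if path.get("eth") == OLD_IF:
        path["eth"] = NEW_IF

with open("meta.json", "w") as f:
    json.dump(meta, f, indent=2)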
protocol6v
85 Posts
March 18, 2019, 6:28 pm
Thanks,
I haven't looked into those files yet, but would we need to change the Ceph image metadata if the IPs are staying the same?
Currently, the 4x 10GbE NICs are configured with LACP, so there are actually only two logical connections. I would be phasing out each of those LACP groups in favor of one 40GbE NIC.
Would you support this kind of change in production? If not, I will find another way to drain the cluster first.
Thanks!
admin
2,930 Posts
March 18, 2019, 6:47 pm
"Would we need to change the Ceph image metadata if the IPs are staying the same?"
Yes (change the image metadata). Remember the iSCSI IPs are floating; they are not assigned to a node. The act of moving a path to a node also requires setting up/adding the IP on that node, so you need to define which interface the IP will be added to. With the metadata script this is very easy to do: you only need to change the interface name. Detaching/attaching the disk via the UI will also work, but you need to enter the IPs manually, as choosing auto will assign the next available IP.
The reason we store all iSCSI info as Ceph metadata is disaster recovery: if, heaven forbid 🙂 , the layer above Ceph is wiped out, you could still start and run your iSCSI disks.
Yes, you should be able to change from the LACP bonds to the 40GbE NICs.
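If you have many disks, something along these lines could batch the interface rename across a pool. This is an untested sketch that shells out to rbd and the disk_meta.py tool above; the interface names are placeholders, and it assumes every image in the pool is a PetaSAN iSCSI disk:

import subprocess

POOL = "rbd"
OLD_IF, NEW_IF = "bond1", "eth4"  # placeholders
TOOL = "/opt/petasan/scripts/util/disk_meta.py"

for image in subprocess.check_output(["rbd", "ls", "-p", POOL], text=True).split():
    raw = subprocess.check_output([TOOL, "rd", "-image", image, "-pool", POOL], text=True)
    fixed = raw.replace(OLD_IF, NEW_IF)  # blunt text swap; fine for a simple rename
    with open("/tmp/meta.json", "w") as f:
        f.write(fixed)
    subprocess.check_call([TOOL, "wr", "-image", image, "-pool", POOL, "-file", "/tmp/meta.json"])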
Last edited on March 18, 2019, 6:54 pm by admin · #4