
Online disk resize


We continue to test on a small cluster. The question arose whether an online resize of a disk (LUN) is possible when expanding the cluster further. Currently, the disk needs to be stopped to change its size.

Is an implementation possible?

I changed the size of the LUN using "rbd resize image-00001 -s..."

It works!
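For reference, the full command looks something like this (pool name and size are just examples; rbd takes the size in MB by default):

# rbd resize -p rbd --size 204800 image-00001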

You can also do it through the web UI: from the Actions button, stop the disk, then edit and choose the size from the slider (it must be a greater size; it will not allow a smaller one, for data protection), then restart the disk. Of course, you will need to resize/expand the file system from the client.
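For example, on a Linux client that last step would look roughly like this, assuming open-iscsi, cloud-utils and an ext4 filesystem (device names are examples):

# iscsiadm -m session --rescan
# growpart /dev/sdb 1
# resize2fs /dev/sdb1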

 

Sorry, I missed the online part 🙂

In the UI you have to stop the disk first.

I think it is asking for trouble to resize a disk while it is used by a client.

Is this still not possible in the GUI? Could it be added in the next version?
In my opinion, online resize of the LUNs is necessary, especially with Ceph and in a growing IT infrastructure.

And it is possible with Ceph and the Ceph iSCSI gateway, as x129 wrote above.
If we change the LUN size in the CLI, would this "destroy" the view or config in the GUI?

Quote from admin on November 23, 2016, 3:33 pm

 

I think it is asking for trouble to resize a disk while it is used by a client.

Not for VMware 🙂

 

The image size can be changed on the fly in Ceph; the issue is updating a running iSCSI server with the new sector count. Also, I am not sure how a Windows client OS will handle a dynamic disk size change. I will look it up.

From my experience with tools like Open-E, there is no real problem with an online resize. After the resize and after reloading the iSCSI portal service (with Open-E or ietd), you have to perform a rescan on the "client" to see the new disk size. Then you can resize the partition.

With Windows and VMware this is quite easy. With Linux, using LVM is essential. 🙂
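For example, a rough sketch of the LVM path after the client rescan (device, VG and LV names are examples; ext4 assumed):

# pvresize /dev/sdb
# lvextend -l +100%FREE /dev/vg_data/lv_data
# resize2fs /dev/vg_data/lv_data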

after reloading the iSCSI portal service (with Open-E or ietd), you have to perform a rescan on the "client" to see the new disk size. Then you can resize the partition.

Starting and stopping a disk in PetaSAN is equivalent to a reload of a specific target/LUN; it does not actually restart the iSCSI service. We can probably stop/start the disk automatically if this helps, but the user will still need to manually perform a rescan and resize the filesystem on the client. We will look more into this.
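On a Windows client, for example, that would be something along these lines in diskpart (the volume number is an example):

DISKPART> rescan
DISKPART> list volume
DISKPART> select volume 3
DISKPART> extend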

 

I tested the online resize via the command line.
It works, meaning I did the resize of the RBD image and the resize of the VMFS volume under VMware without any interruption.
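For reference, a rescan of all adapters on the ESXi side can be done like this:

# esxcli storage core adapter rescan --all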

But PetaSAN does not recognize or show the new size under the iSCSI disks; it still shows the old value.
Even if I stop and start the iSCSI disk.

The problem here is that I can enlarge the 100 GB disk online via the command line to e.g. 200 GB and resize the VMFS to 200 GB. Later I can do an offline resize via PetaSAN of only 50 GB and end up with a 150 GB image instead of 200 GB. I guess this means data loss at some point...?

Is there a way to refresh the size shown on the PetaSAN iSCSI disk screen after an online resize via the command line?

Here are my commands:

# rbd -c /etc/ceph/SAN01.conf info -p HDD image-00003
rbd image 'image-00003':
    size 102400 MB in 25600 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.39b3c6b8b4567
    format: 2
    features: layering
    flags:
    create_timestamp: Wed Feb 6 14:48:41 2019

# rbd -c /etc/ceph/SAN01.conf resize -p HDD --size 204800 image-00003
Resizing image: 100% complete...done.

# rbd -c /etc/ceph/SAN01.conf info -p HDD image-00003
rbd image 'image-00003':
    size 200 GB in 51200 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.39b3c6b8b4567
    format: 2
    features: layering
    flags:
    create_timestamp: Wed Feb 6 14:48:41 2019

And here is the rbd info after the additional resize via the PetaSAN GUI, which added 50 GB on top of the stale 100 GB value and thus shrank the real 200 GB image to 150 GB:

# rbd -c /etc/ceph/SAN01.conf info -p HDD image-00003
rbd image 'image-00003':
    size 150 GB in 38400 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.39b3c6b8b4567
    format: 2
    features: layering
    flags:
    create_timestamp: Wed Feb 6 14:48:41 2019

 

Typically all changes you make via the CLI are, or should be, reflected in the UI; this has always been a design principle. The image size displayed in the UI is an exception, as we store it in the Ceph RBD metadata together with all the other iSCSI info. The idea is to read all the metadata once per disk to display its information, without needing a further Ceph API call just to get the size. This gives better response times when you have a large number of iSCSI disks to display (above 1k); we need to do a lot of optimization, such as multi-threading, to display this data in a few seconds.
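For reference, if it is the standard RBD key/value metadata on the image that holds it, you can list what is stored with the generic rbd tooling (the exact keys are internal and may change between versions):

# rbd -c /etc/ceph/SAN01.conf image-meta list -p HDD image-00003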

If you did change the size dynamically via the CLI as you stated, and everything worked fine aside from the UI displaying the old size, you can just ignore the size in the UI for now; it has no functional effect. In version 2.3 we are adding some additional admin CLI tools, including commands to read and write the image metadata (we use them internally to support async replication); you could use these to correct the displayed size.

As for downsizing an existing image, Ceph does not mind this. We try to disable it in our UI so you can only increase the size, but of course if you changed the size earlier via the CLI, this extra protection from the UI will not work, and Ceph will allow it. If you do use the CLI, you should know what you are doing.
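Note also that, depending on your Ceph release, rbd itself will refuse to shrink an image unless you pass an explicit flag, so an accidental downsize from the CLI at least requires a deliberate step:

# rbd -c /etc/ceph/SAN01.conf resize -p HDD --size 102400 image-00003 --allow-shrink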
