
Proper Shutdown of Cluster

Hello - really loving PetaSAN!  Is there a proper way to shut down the cluster?  Also, are there any plans to add a shutdown process to the GUI?  Thanks! - Gordon

Generally, Ceph is designed to be always on, not regularly switched on and off.

If you do need to shut down, I would recommend making sure all client I/O has stopped, then shutting down all nodes. I would recommend the nodes be shut down together within a relatively short period of time (within 5 minutes or sooner) to avoid kicking in any state changes and recovery. Similarly, boot all nodes within the same time span.
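As a side note: a common Ceph practice (not something PetaSAN automates for you, so treat this as a sketch) is to set the noout and norebalance flags before shutting down, so OSDs are not marked out and no recovery kicks in while the nodes are offline. Assuming your cluster is named xxx:

ceph osd set noout --cluster xxx        # do not mark down OSDs as out
ceph osd set norebalance --cluster xxx  # do not rebalance data while nodes are down

(shut down all nodes, do your maintenance, boot them all back up)

ceph osd unset noout --cluster xxx
ceph osd unset norebalance --cluster xxx

Once the cluster reports healthy again, the unset commands return it to its normal self-healing behaviour.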

Hi - I performed a full shutdown of my three physical servers (all 3 are monitors, iSCSI, and storage) and added a 10GbE adapter to each one.  I booted them up simultaneously, and after coming online, everything looked good.  However, the 100TB demo iSCSI disk I created is now missing.  Is there a way to see what happened?  Can I use SSH to check if it is still there (the data I put on it still seems to be there, because it shows the usage in the graph)?  This scares me, because I worry about something like this happening in production.  Thanks for your help.  - Gordon

Hi - I tried to reproduce this by creating another 100TB disk and performing the same steps as before, and it is still in the list.  I did this a bunch of times, including shutting down ungracefully and even leaving the iSCSI connection open with a locked file.  I also brought the hosts up with a lot of lag time, and every time the cluster returned to healthy and the iSCSI disk was still there.  So my hunch is that it had something to do with adding the 10GbE adapters after that first iSCSI disk was created, as mentioned above.  What do you think?  Thanks - Gordon

Have you added or deleted pools at all? Deleting a pool will delete the disk. Apart from that, and from directly deleting the disk itself, I cannot think of any reason. Ceph will not delete a disk behind your back.
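If you want to double-check from the shell, you can list the pools and their usage; a quick sketch, assuming your cluster is named xxx:

ceph osd pool ls --cluster xxx   # the pool that held the image should still be listed
ceph df --cluster xxx            # per-pool usage; data still counted here matches what your graph shows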

Hi - no, I never deleted anything.  After your response to my first question, I went to the iSCSI list and detached the disk, shut down the nodes within about 5 seconds of each other, added the 10GbE adapters to each node, and powered them all on at once.  Health was green within a minute, but when I tried to connect to the disk from my test server, it hung.  I opened the disk list and noticed it was gone.  I think I am going to do a full reinstall without the 10GbE adapters, and then repeat this whole process again from start to finish.  I will let you know if I reproduce it.  Thanks - Gordon

The command to show the available images in Ceph is:

rbd ls --cluster xxx

where xxx is the name you gave your cluster.

Something that may be related: if the cluster is not responding, which can happen temporarily when you reboot the entire cluster, the above command may hang until everything is responsive. The UI iSCSI list will also be empty during this time (it uses the same command), but once Ceph is responsive the image should be displayed both by the command and within the UI.
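If you want to avoid a shell that hangs while the cluster is settling, one option is to wrap the commands with the coreutils timeout utility; a small sketch, again assuming the cluster name xxx:

timeout 30 ceph -s --cluster xxx   # give up after 30 seconds if the monitors are not reachable yet
timeout 30 rbd ls --cluster xxx    # list the images once the cluster is answering

If ceph -s responds (ideally with HEALTH_OK), the rbd ls output can be trusted.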