
PetaSAN v2.0 released!


I did.

I will reboot each node in sequence and wait for it to stabilize before moving to the next.

M.

It should work without a reboot. Can you first check that all monitors are up:

systemctl status ceph-mon@HOST_NAME

also check the cluster status:

ceph status --cluster CLUSTER_NAME
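As an illustrative sketch (not part of the PetaSAN docs), the quorum check can also be automated by parsing the JSON output of `ceph quorum_status --format json`. The helper below is a hypothetical example; the monitor names are placeholders:

```python
import json
import subprocess


def mons_in_quorum(quorum_json, expected_mons):
    """Return True if every expected monitor name appears in quorum.

    quorum_json is the JSON text printed by
    `ceph quorum_status --format json`.
    """
    status = json.loads(quorum_json)
    return set(expected_mons) <= set(status.get("quorum_names", []))


# On a live cluster you would fetch the JSON like this (requires a
# running Ceph cluster, so it is left commented out here):
# out = subprocess.check_output(
#     ["ceph", "quorum_status", "--format", "json"])
# print(mons_in_quorum(out, ["node1", "node2", "node3"]))
```

Running this against each node before proceeding gives a clear yes/no answer instead of eyeballing `ceph status` output.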

systemctl restart is a dark evil beast in our lab... I should have used stop/start instead.

Reboot did the trick.

By the way, there are two dashes in --cluster in the Post-Upgrade step 😉

Would you like our comments on the upgrade document? A few things could stand a little rewrite for clarity.

But now my VMs are running PetaSAN 2.0 with BlueStore. The bare-metal upgrade will be completed tomorrow. Quite cool: zero downtime with the scripts. Our boss was amazed; we had to do a show and tell on Peta today, and it's looking good. We will definitely have to make multiple clusters work on the same host, since there will be a mix of performance levels between different clusters.


Will the script you gave us earlier work on 2.0? (The one to change the serial number info on iSCSI disks.)


Thank you again, great stuff. Can't wait to see numbers!


M.

Yes, please, by all means post any changes or wording you would like to see in the upgrade guide and I will update it.

Happy to hear you had no downtime and your boss liked it 🙂 I still think a reboot is not needed, since we tested it many times, and the ceph mgr creation should be straightforward once the new monitors are up. However, one thing we were doing is making sure the new monitors were indeed started and all in quorum; this is the check I posted in my previous post, so maybe that should be explicitly stated before step 9. If you do have a chance to retry this, please let me know.

The script I sent you is not in the release. I understood it did not work, and more importantly it would break any live upgrade, since having a mix of serial numbers for the same disk will surely drive the client MPIO layer nuts. Note that currently you can still have a client talking to two different PetaSAN clusters if it is talking to disks with different IDs; it will only break if the IDs overlap at the client, which is something you can probably set up around. If you still need this, you can apply the same patch on 2.0 and it will work the same.

I was running 1.4 and used the express installer to upgrade to 2.0. This was a seamless and problem-free process...

With version 1.4 our software installation took 1 hour and 18 minutes; it's a heavy, IO-intensive process. After upgrading to 2.0 I ran the same installation, which finished in less than 16 minutes. All-SSD ESXi 6 environment. This is what we were hoping for! No more SAN-type speeds!

Thank you and keep up the great work!


Hi,

Please let me know whether PetaSAN supports:

  • Block storage (for hosting VMs on VMware, KVM, QEMU, MS Hyper-V…)
    • I know this functionality is already available through the iSCSI target and initiator path.
    • Can I use a VMware ESXi 5.5 server to connect to PetaSAN to create datastores and store live VMs on it? Currently I am using VMware ESXi 5.5.
  • Object storage (compatible with S3, Swift…)
  • File storage (NFS…) for storing and sharing files

My future project is to set up a private cloud in our data center, similar to AWS. I was considering an OpenStack and Ceph combination to support all three major storage types. Does PetaSAN support OpenStack?

Please provide this information and advice.

Thanks,

Vijaya Bhasker G

Our main focus is supporting fault-tolerant iSCSI over Ceph, so we concentrate on block storage with support for Hyper-V, VMware, and Xen. ESXi can connect to a datastore you create via the PetaSAN UI, as described in our VMware docs.

We do not focus on S3 and file storage (CephFS/NFS) in our UI, and we do not include their packages in the install ISO. We can make such packages available for download, but you will not find specific support for them in the PetaSAN UI and you will need manual configuration. Some users find PetaSAN makes it easy to create the Ceph cluster and manage adding/removing nodes and disks, so it will help you in that area. OpenStack will be similar: it will talk to PetaSAN directly using the rbd pool and will be able to create and delete images itself.
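For reference, OpenStack's Cinder can reach a Ceph-backed pool like PetaSAN's through its standard RBD volume driver. The following is a minimal, hypothetical `cinder.conf` fragment; the backend name, pool, user, and secret UUID are placeholders you would adapt to your own cluster:

```ini
[DEFAULT]
enabled_backends = ceph-rbd

[ceph-rbd]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = rbd
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
# UUID of the libvirt secret holding the cinder keyring (placeholder)
rbd_secret_uuid = 00000000-0000-0000-0000-000000000000
```

The Cinder node would also need the client keyring and `ceph.conf` copied from the PetaSAN cluster so the driver can authenticate.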


Hi there!

Congratulations for such outstanding software.

Is there any plan to add Fibre Channel support to it?


Thanks

Thanks very much.

Unfortunately no short term plans for FC as we focus now on iSCSI.
