
Upcoming releases 1.5.0 and 2.0.0


Hello all,

We have finalized the features for the upcoming two releases:

Release 1.5.0

  • SSD Journal support for enhanced write performance.
  • Active Path Re-assignment for better load distribution.
  • Cluster Maintenance mode.
  • Improvements to the email notification system.

 

Release 2.0.0

  • Support for the new Ceph Luminous 12.2.0 with the Bluestore engine.

 

As always, the installer will support automatic upgrades from previous versions.

For clarification, the Bluestore engine won't be available in v1.5, correct? We've tested up to v1.4 on VMware ESXi 6.5 in an all-SSD environment, and the writes are too slow to use in production (SAN-like). Based on our research into Ceph and SSDs, Bluestore seems to be the best option to speed things up.

Bluestore was released a couple of days ago and offers approximately 2x write performance. We will be offering 2.0.0 at, or shortly after, the 1.5.0 release, depending on the testing load on our side.

However, you should be getting decent performance from the existing Ceph Jewel. Are you using bare metal or running PetaSAN virtualized? How many OSDs per host do you have? Are you RAIDing several disks together? Can you run the cluster benchmark available in v1.4 for 4M throughput and 4K IOPS at 1/16/64 threads, and what do you get? What are the disk and CPU utilization percentages?
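If it helps, here is a minimal sketch (not part of PetaSAN) of how you could sample disk and CPU utilization on a node while the cluster is under load; it assumes the psutil Python package is installed and that the per-disk busy_time field is available (Linux):

```python
# Minimal utilization sampler; assumes `psutil` is installed (pip install psutil).
# On Linux, psutil reports per-disk busy_time in milliseconds, which lets us
# approximate the same %util figure that iostat shows.
import psutil

INTERVAL = 5  # seconds per sample

def sample(interval=INTERVAL):
    before = psutil.disk_io_counters(perdisk=True)
    cpu = psutil.cpu_percent(interval=interval)   # blocks for `interval` seconds
    after = psutil.disk_io_counters(perdisk=True)
    print(f"CPU: {cpu:.1f}%")
    for disk, stats in after.items():
        if disk in before and hasattr(stats, "busy_time"):
            busy_ms = stats.busy_time - before[disk].busy_time
            print(f"{disk}: {100.0 * busy_ms / (interval * 1000):.1f}% util")

if __name__ == "__main__":
    while True:
        sample()
```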

 

Are you using bare metal or running PetaSAN virtualized? Each ESXi 6.5 host is running one PetaSAN VM.

How many OSDs per host do you have? Each ESXi 6.5 host has 10 SSD drives, so each PetaSAN VM is directly connected to 10 OSDs, and each PetaSAN VM is configured with 32 GB of memory.

Are you RAIDing several disks together? No RAID; the SSDs are configured as non-RAID, and each PetaSAN 1.4 VM connects to each of the 10 SSDs via RDM (raw device mapping).

Can you run the cluster benchmark available in v1.4? Each of the 3 PetaSAN VMs started as v1.3 and was upgraded to 1.3.1 and then to v1.4. I can click on the benchmark option, but nothing happens except a spinning wheel and "processing"; I never see any results. I noticed the info message "Perform benchmark while there is no other load on the cluster. Client nodes simulating client i/o are best chosen from nodes not running the Local Storage Service and will be excluded from resource load results" — all 3 of the PetaSAN nodes are running the Local Storage Service, so maybe that is why this doesn't even work?

thank you

 

 

The cluster benchmark should work even if all your machines are running the Local Storage Service. It could be a bug on our side; we test the upgrades quite a lot, but I will try to reproduce the same upgrade sequence as yours in case we have an issue. I will let you know in a couple of days whether we hit the same issue, and about any fix if one is needed.

Also, if you have the option to do a fresh virtualized re-install and run the benchmark, it will help a lot. The benchmark will show you the maximum throughput and IOPS and what bottlenecks exist. The way it should work is that you select one machine, such as node 3, to act as a client simulating client I/O. The info message you saw is saying you should try to use a client not running OSDs, so it is not acting as a client and a server at the same time and you get more accurate results, but even if you choose a node with OSDs, the benchmark result will be good enough.

How did you measure the performance yourself? Did you use a tool that simulates many client workers/threads, or did you just do a single task, like a file copy, as a test? Ceph performs well when you have many concurrent I/Os, not a single I/O stream.
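For reference, here is a minimal sketch of the kind of multi-threaded test meant here, using Ceph's own `rados bench` at 1/16/64 concurrent operations. It measures the Ceph layer directly, not the iSCSI path; the pool name and runtime are assumptions you would adjust for your cluster:

```python
# Sketch: run `rados bench` write tests at increasing concurrency to see how
# throughput/IOPS scale with parallel I/O. Assumes Ceph admin credentials on
# the node and a test pool named "rbd" (adjust POOL for your cluster).
import subprocess

POOL = "rbd"          # assumption: use a test/benchmark pool, not production data
RUNTIME = 30          # seconds per run
CASES = [("4M throughput", 4 * 1024 * 1024), ("4K IOPS", 4 * 1024)]

for threads in (1, 16, 64):
    for label, block_size in CASES:
        print(f"--- write, {label}, {threads} concurrent ops ---")
        # rados bench removes its benchmark objects after a write run by default
        subprocess.run(
            ["rados", "-p", POOL, "bench", str(RUNTIME), "write",
             "-b", str(block_size), "-t", str(threads)],
            check=True,
        )
```

A single file copy behaves roughly like the 1-thread case; the 16/64-thread runs are closer to what many VMs hitting the SAN at once look like.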

Your 10 SSDs should give very good performance. Of course, if you have the option to install on bare metal, that would be great; it would also help in comparing the impact of running virtualized.

I should be able to do a clean 1.4 install within the next week and try again. I will report back. Thanks for the quick responses. Oh, and mostly I've been using CrystalDiskMark, so yeah, probably not the best way to test Ceph.

Any idea on the planned release dates?

We cannot commit to dates yet, but typically we try to release new features every 3 months.

Are you more interested in Bluestore or in re-assignment of active paths?

I'm currently looking to replace our Eternus DX90S2 storage. The storage is used as a backend for our VMware View and servers. I would like to start directly with the journal on SSD, and if possible also with Bluestore. This is what I have in mind:

6 OSD nodes

  • 4 x SATA 1 TB enterprise disk
  • 1 x SATADOM 128 GB for OS
  • 1 x M.2 SSD 128 GB for journal
  • 8 GB RAM
  • 2 x 10 Gb network
  • 6 x 1 Gb network (bonding)
  • 1 quad-core CPU

2 iSCSI gateway nodes

  • 16 GB RAM
  • 1 x SATADOM 128 GB
  • 2 x 10 Gb network
  • 6 x 1 Gb network
  • 1 quad-core CPU
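As a rough sanity check on the plan above, here is a back-of-the-envelope capacity sketch; the 3x replication factor and ~70% fill target are assumptions (Ceph's default replicated pool size and common practice), not part of the spec:

```python
# Back-of-the-envelope usable capacity for the 6-node plan above.
# Assumptions: replicated pools with size 3 (Ceph default), keeping the
# cluster below ~70% full; only the 4 x 1 TB SATA disks per node hold data.
osd_nodes = 6
disks_per_node = 4
disk_tb = 1.0
replicas = 3          # assumed replication factor
fill_target = 0.70    # assumed comfortable fill level

raw_tb = osd_nodes * disks_per_node * disk_tb   # 24 TB raw
usable_tb = raw_tb / replicas                   # 8 TB after replication
practical_tb = usable_tb * fill_target          # ~5.6 TB comfortable working set

print(f"raw: {raw_tb:.0f} TB  usable: {usable_tb:.0f} TB  practical: {practical_tb:.1f} TB")
```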

Regards

RAf

 

For now we use version 1.4.0, and we want to upgrade to v2.0 with the Bluestore engine.

Where can I read about the update procedure? Do I need to reboot my management nodes and boot from flash to upgrade? Or can it be done from the CLI without rebooting?

And what about the availability of the cluster while it is updating?

 

Thanks.
