Upcoming releases 1.5.0 and 2.0.0
admin
2,930 Posts
September 6, 2017, 3:44 pm
Hello all,
We have finalized the feature set for the two upcoming releases:
Release 1.5.0
- SSD Journal support for enhanced write performance.
- Active Path Re-assignment for better load distribution.
- Cluster Maintenance mode.
- Improvements to the email notification system.
Release 2.0.0
- Support for the new Ceph Luminous 12.2.0 with the Bluestore engine.
As always, the installer will support automatic upgrades from previous versions.
Last edited on September 6, 2017, 3:56 pm by admin · #1
philip.shannon
37 Posts
September 8, 2017, 12:05 pm
For clarification, the Bluestore engine won't be available in v1.5, correct? We've tested up to v1.4 in a VMware ESXi 6.5 all-SSD environment, and the writes are too slow to use in production (SAN-like). Based on our research into Ceph and SSDs, Bluestore seems to be the best option to speed things up.
admin
2,930 Posts
September 8, 2017, 12:49 pm
Bluestore was released a couple of days ago and offers roughly 2x write performance. We will be offering 2.0.0 at, or shortly after, the 1.5.0 release, depending on our testing load.
However, you should be getting decent performance from the existing Ceph Jewel. Are you using bare metal or running PetaSAN virtualized? How many OSDs per host do you have? Are you RAIDing several disks together? Can you run the cluster benchmark available in v1.4 for 4M throughput and 4K IOPS at 1/16/64 threads, and what do you get? What are the values for % utilization for disk/CPU?
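If the built-in benchmark is not an option, a rough approximation can be run by hand from a client machine with fio against an iSCSI disk mapped from the cluster. This is only a sketch, not the PetaSAN benchmark itself: the device path is a placeholder, the benchmark's "threads" are approximated here by fio's queue depth, and the raw-device write tests are destructive to whatever is on that disk.

```python
#!/usr/bin/env python3
"""Rough, hand-run approximation of a 4K IOPS / 4M throughput test with fio.

Assumptions (not taken from PetaSAN): fio is installed on the client,
/dev/sdx is an iSCSI disk mapped from the cluster, and it is safe to
overwrite it -- these write tests are destructive.
"""
import json
import subprocess

DEVICE = "/dev/sdx"   # hypothetical iSCSI disk mapped from the cluster
RUNTIME = 60          # seconds per test

def run_fio(name, rw, bs, iodepth):
    """Run one fio job and return (IOPS, bandwidth in MiB/s)."""
    cmd = [
        "fio", "--name=" + name, "--filename=" + DEVICE,
        "--rw=" + rw, "--bs=" + bs, "--iodepth=" + str(iodepth),
        "--ioengine=libaio", "--direct=1", "--numjobs=1",
        "--time_based", "--runtime=" + str(RUNTIME),
        "--group_reporting", "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True)
    job = json.loads(out.stdout)["jobs"][0]["write"]
    return job["iops"], job["bw"] / 1024.0   # fio reports bw in KiB/s

# 4K random-write IOPS at the queue depths mentioned above
for depth in (1, 16, 64):
    iops, _ = run_fio(f"iops-{depth}", "randwrite", "4k", depth)
    print(f"4K randwrite, iodepth={depth:<2} : {iops:8.0f} IOPS")

# 4M sequential-write throughput
_, bw = run_fio("throughput", "write", "4M", 16)
print(f"4M write, iodepth=16     : {bw:8.1f} MiB/s")
```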
Last edited on September 8, 2017, 1:00 pm by admin · #3
philip.shannon
37 Posts
September 8, 2017, 1:10 pm
Are you using bare metal or running PetaSAN virtualized? Each ESXi 6.5 host is running 1 PetaSAN VM.
How many OSDs per host do you have? Each ESXi 6.5 host has 10 x SSD drives, so each PetaSAN VM is directly connected to 10 OSDs, and each PetaSAN VM is configured with 32 GB of memory.
Are you RAIDing several disks together? No RAID; the SSDs are configured as Non-RAID, and each PetaSAN 1.4 VM connects to each of the 10 SSDs via RDM (raw device mapping).
Can you run the cluster benchmark available in v1.4? Each of the 3 PetaSAN VMs started as v1.3 and was upgraded to 1.3.1 and then to v1.4. I can click on the benchmark option, but nothing happens except a spinning wheel and "processing"; I never see any results. I noticed the info message "Perform benchmark while there is no other load on the cluster. Client nodes simulating client i/o are best chosen from nodes not running the Local Storage Service and will be excluded from resource load results." All 3 of the PetaSAN nodes are running the Local Storage Service, so maybe that is why this doesn't work?
Thank you
Last edited on September 8, 2017, 1:14 pm by philip.shannon · #4
admin
2,930 Posts
September 8, 2017, 2:15 pm
The cluster benchmark should work even if all your machines are running the Local Storage Service. It could be a bug on our side; we test upgrades quite a lot, but I will try to reproduce the same upgrade sequence as yours in case there is an issue. I will let you know in a couple of days whether we hit the same problem, and what the fix is if one is needed.
Also, if you have the option to re-install virtualized from a fresh install and run the benchmark, it will help a lot. The benchmark will show you the maximum throughput and IOPS and what bottlenecks exist. The way it should work is that you select one machine, such as node 3, to act as a client simulating client I/O. The info message you saw means you should try to use a client not running OSDs, so it is not acting as a client and a server at the same time and you get more accurate results; but even if you choose a node with OSDs, the benchmark result will be good enough.
How did you measure the performance yourself? Did you use a tool that simulates many client workers/threads, or did you just do a single task like a file copy as a test? Ceph performs well when you have many concurrent I/Os rather than a single I/O stream.
Your 10 SSDs should give very good performance. Of course, if you have the option to install bare metal, that would be great; it can also help when comparing the impact of running virtualized.
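To make the concurrency point concrete, here is a minimal sketch comparing a single write stream (roughly what a plain file copy does) with many concurrent writers against the same iSCSI disk. It again assumes fio is available and /dev/sdx is a cluster-mapped disk that may safely be overwritten; the worker counts are arbitrary examples.

```python
#!/usr/bin/env python3
"""Single-stream vs concurrent writes: why a plain file copy under-reports Ceph.

A sketch only; assumes fio is installed and /dev/sdx is an iSCSI disk from the
cluster whose contents may be destroyed by the test.
"""
import json
import subprocess

DEVICE = "/dev/sdx"  # hypothetical iSCSI disk -- destructive test

def seq_write_mibps(numjobs, iodepth):
    """Aggregate sequential-write bandwidth in MiB/s for a given concurrency."""
    cmd = [
        "fio", "--name=concurrency-test", "--filename=" + DEVICE,
        "--rw=write", "--bs=1M", "--ioengine=libaio", "--direct=1",
        "--numjobs=" + str(numjobs), "--iodepth=" + str(iodepth),
        "--time_based", "--runtime=30", "--group_reporting",
        "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True)
    return json.loads(out.stdout)["jobs"][0]["write"]["bw"] / 1024.0

single = seq_write_mibps(numjobs=1, iodepth=1)      # behaves like a file copy
parallel = seq_write_mibps(numjobs=16, iodepth=16)  # many concurrent writers
print(f"single stream : {single:.1f} MiB/s")
print(f"16 workers    : {parallel:.1f} MiB/s")
```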
Last edited on September 8, 2017, 2:17 pm by admin · #5
philip.shannon
37 Posts
September 8, 2017, 2:48 pm
I should be able to do a clean 1.4 install within the next week and try again. I will report back. Thanks for the quick responses. Oh, and mostly I've been using CrystalDiskMark, so yeah, probably not the best way to test Ceph.
RafS
32 Posts
September 12, 2017, 8:18 pm
Any idea on the planned release dates?
admin
2,930 Posts
September 13, 2017, 12:18 pm
We cannot commit to dates yet, but typically we aim for a feature release every 3 months.
Are you more interested in Bluestore or in the re-assignment of active paths?
RafS
32 Posts
September 13, 2017, 2:13 pm
I'm currently looking to replace our Eternus DX90S2 storage. The storage is used as the backend for our VMware View environment and servers. I would like to start directly with the journal on SSD, and if possible also with Bluestore. This is what I have in mind:
6 OSD nodes
- 4 x 1 TB SATA enterprise disks
- 1 x 128 GB SATADOM for OS
- 1 x 128 GB M.2 SSD for journal
- 8 GB RAM
- 2 x 10 Gb network
- 6 x 1 Gb network (bonding)
- 1 quad-core CPU
2 iSCSI gateway nodes
- 16 GB RAM
- 1 x 128 GB SATADOM
- 2 x 10 Gb network
- 6 x 1 Gb network
- 1 quad-core CPU
Regards
RAf
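As a quick back-of-the-envelope check on the proposed layout, the sketch below verifies that the single 128 GB M.2 device can hold journals for the 4 filestore OSDs per node. The per-OSD journal size is an operator choice used here as an assumption (Ceph's filestore default is 5 GB; larger values are common on SSD), not a PetaSAN default.

```python
# Back-of-the-envelope journal sizing for the proposed OSD nodes.
# Assumptions (not PetaSAN defaults): 4 filestore OSDs per node share one
# M.2 SSD, and the per-OSD journal size is chosen by the operator.

M2_CAPACITY_GB = 128   # 128 GB M.2 SSD per node
OSDS_PER_NODE = 4      # 4 x 1 TB SATA data disks
JOURNAL_SIZE_GB = 20   # hypothetical per-OSD journal; Ceph's filestore
                       # default is 5 GB

used = OSDS_PER_NODE * JOURNAL_SIZE_GB
print(f"journal space used : {used} GB of {M2_CAPACITY_GB} GB")
print(f"headroom           : {M2_CAPACITY_GB - used} GB")
assert used <= M2_CAPACITY_GB, "journals would not fit on the M.2 device"
```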
Krelian
1 Post
September 14, 2017, 7:32 pm
For now we use version 1.4.0, and we want to upgrade to v2.0 with the Bluestore engine.
Where can I read about the update procedure? Do I need to reboot my management nodes and boot from flash to upgrade, or can it be done using the CLI without rebooting?
And what about the availability of the cluster while it is updating?
Thanks.