Upcoming releases 1.5.0 and 2.0.0
admin
2,930 Posts
September 15, 2017, 8:29 am
In general it is quite easy: you boot from the flash installer, which autodetects your existing installation and its version and asks for confirmation to upgrade. It upgrades the entire system disk (OS/kernel/Ceph) but is quick, taking about 5 minutes. The node being upgraded will be offline, but the rest of the cluster keeps running, so there is no downtime for client i/o. In the future we will look at adding other update methods that do not shut down services for smaller patches/fixes to non-core components.
The bluestore update will be more involved. After upgrading all your hosts to version 2, any new OSDs you add will be formatted with the bluestore engine, while your existing OSDs still use the older filestore engine. The cluster will function with a mix of both formats, but the slower filestore disks will dictate overall cluster performance. Currently Ceph users perform a manual replacement of one OSD at a time: remove a filestore OSD, create/add a new bluestore OSD, wait for Ceph to backfill the new OSD, then repeat. We will publish a write-up on how to do this; a rough sketch of the Ceph-level steps is below.
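For reference, here is a rough sketch of replacing a single filestore OSD, assuming a Luminous-era Ceph CLI. The OSD id (3) and device name (/dev/sdd) are purely illustrative, and PetaSAN normally handles OSD creation through its own UI, so treat this only as an outline of the underlying Ceph steps, not the final procedure from our write-up:

# take the filestore OSD out so Ceph rebalances its data onto other OSDs
ceph osd out 3
# wait until the cluster reports HEALTH_OK before touching the disk
ceph health
# stop the OSD daemon and remove the OSD from the cluster
systemctl stop ceph-osd@3
ceph osd purge 3 --yes-i-really-mean-it
# recreate the OSD on the same disk with the bluestore engine
ceph-volume lvm create --bluestore --data /dev/sdd
# watch backfill complete, then repeat for the next filestore OSD
ceph -s

Doing this one OSD at a time keeps the amount of data being moved small and leaves the cluster with enough replicas to serve client i/o throughout.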
Last edited on September 15, 2017, 8:29 am by admin · #11