replacement of small ssds with larger ones
exitsys
43 Posts
February 16, 2021, 2:20 pm
Hello all, I have a question regarding the replacement of SSDs with larger models.
CURRENT STATUS:
3 nodes, each with 4 x 2 TB SSDs and 4 x 1 TB SSDs.
I would now like to swap all the 1 TB drives for 2 TB drives in as short a time as possible.
How many of the small SSDs can I remove from which node at the same time without data loss?
thx
admin
2,930 Posts
February 16, 2021, 10:23 pm
First thing: never remove disks from different nodes at the same time.
If you are using 2x replication (not recommended), be extra careful: set the OSD's CRUSH weight to 0 first to drain its data to the other OSDs before removal.
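For reference, draining an OSD this way from the command line looks roughly like the following (a minimal sketch; osd.7 stands in for whichever OSD you are about to remove):

    # Set the CRUSH weight to 0 so Ceph migrates all PGs off this OSD
    ceph osd crush reweight osd.7 0
    # Watch the drain; proceed with removal only once all PGs are active+clean
    ceph osd df tree
    ceph -s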
To replace an OSD: physically remove the current disk, delete the OSD from the user interface, then add the new disk as an OSD. It is safest to do one OSD at a time, starting when all PGs are active/clean. Once you have replaced an OSD, wait for the cluster to return to active/clean before repeating on the next one. You can replace more than one OSD at a time as long as they are on the same host, but this puts a lot of recovery load on that host, and depending on your hardware and client load it could affect client performance; in that case you can slow the recovery/backfill speed, as sketched below.
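If recovery load does become a problem, the per-OSD backfill/recovery throttles can be lowered while the rebuild runs (a sketch; option names and sensible values vary across Ceph releases, so check yours):

    # Limit concurrent backfills and recovery ops per OSD
    ceph config set osd osd_max_backfills 1
    ceph config set osd osd_recovery_max_active 1
    # Remove the overrides once the cluster is back to active/clean
    ceph config rm osd osd_max_backfills
    ceph config rm osd osd_recovery_max_active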
Watching the PG Status chart helps you track the clean PG count and estimate the time to completion.
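The same numbers are available from the CLI if you prefer to watch progress there (again a sketch):

    # One-line summary of PG states, e.g. "256 pgs: 240 active+clean, 16 active+remapped+backfilling"
    ceph pg stat
    # Overall health plus recovery/backfill rates
    ceph -s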
Last edited on February 16, 2021, 10:25 pm by admin