mix different sizes of SSD
exitsys
43 Posts
October 6, 2020, 2:19 pm
Hello everyone, I have one more question.
Is it possible to mix different sizes of SSD? I have, for example, 12 pcs of 960GB SAS 12G Samsung PM1643a and another 12 pcs of 1.9TB Samsung PM883 SSDs.
Many thanks for your excellent support.
admin
2,930 Posts
October 6, 2020, 3:10 pm
Mixing drives is definitely allowed. In many cases you need to consider its impact on overall performance, as larger drives will end up serving more IOs than the average and can thus become a bottleneck. This matters mainly with slower HDDs when the size discrepancy is high; in such a case you could create separate storage pools for them. In your case both drive types are very good, so it is not an issue.
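To put the point about larger drives serving proportionally more IO in concrete terms, here is a minimal sketch (plain Python, not PetaSAN or Ceph code) using the drive mix from the question; it assumes the default capacity-based CRUSH weighting:

```python
# Minimal sketch (plain Python, not PetaSAN code): with default capacity-based
# CRUSH weighting, each OSD's weight is proportional to its size, so its share
# of data -- and hence of client IO -- scales with capacity.

def io_share(capacities_gb):
    """Expected fraction of data/IO per OSD under capacity weighting."""
    total = sum(capacities_gb)
    return [c / total for c in capacities_gb]

# Drive mix from the question: 12 x 960GB PM1643a and 12 x 1.9TB PM883.
osds = [960] * 12 + [1920] * 12
shares = io_share(osds)

print(f"960GB OSD share:  {shares[0]:.2%}")   # ~2.78% of the data/IO each
print(f"1920GB OSD share: {shares[-1]:.2%}")  # ~5.56%, i.e. roughly double
```

With fast SSDs this 2:1 skew is usually harmless; with slow HDDs the larger drives can become the bottleneck, which is why separate pools are suggested above for that case.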
exitsys
43 Posts
October 9, 2020, 3:25 pm
Thank you very much. One more question about the replacement of OSDs. If I have, say, 3 nodes with 6 x 960GB SSDs each and I want to exchange them for 1.9TB drives, how many can be exchanged on one node at the same time? Theoretically all of them, because a complete node could fail. Is that right?
exitsys
43 Posts
October 9, 2020, 6:02 pm
I have now tested the following.
On the third node I took out a 960GB SSD and removed it from the system, then added a 1920GB SSD. I waited a while and it took a lot of time. I did not wait until the warning was gone but shut down the third node and removed the remaining 960GB drives, also removing them from the system. Everything kept running. Then I added the 1920GB drives and set them up as OSDs. There was a BSOD on the Hyper-V server, so everything went off. The 3 PetaSAN nodes continued to run and seemingly set up the new SSDs, but the iSCSI disk disappeared from the system. Was this normal behavior?
admin
2,930 Posts
October 9, 2020, 7:11 pm
It depends on how much data you have stored and the current node count. If you replace all disks in a node in a 3-node cluster, you are rebalancing 1/3 of your data. In such a case you could lower the backfill speed from the maintenance tab. It depends on the load, but if your resources saturate, your iSCSI connections can time out.
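A hedged sketch of what "lower the backfill speed" broadly corresponds to at the Ceph level. PetaSAN's maintenance tab is the supported way to change this; the option names below are standard Ceph settings, the values are illustrative "slow" choices, and the sketch assumes a Ceph release with the `ceph config` command plus an admin keyring on the node it runs from:

```python
import subprocess

# Hedged sketch: throttle rebalance/backfill with generic Ceph options so
# client iSCSI traffic keeps priority. Not PetaSAN code; use the maintenance
# tab in practice. Assumes `ceph config` (Nautilus or later) and admin access.
def set_backfill_speed(max_backfills=1, recovery_active=1):
    for opt, val in (("osd_max_backfills", max_backfills),
                     ("osd_recovery_max_active", recovery_active)):
        subprocess.run(["ceph", "config", "set", "osd", opt, str(val)],
                       check=True)

set_backfill_speed()  # values of 1 are the slowest/gentlest illustrative setting
```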
exitsys
43 Posts
October 9, 2020, 7:32 pm
I.e. when the rebuild is finished, will the iSCSI disk be visible again? Because it has completely disappeared.
exitsys
43 Posts
October 9, 2020, 8:10 pm
For the sake of completeness: I had an iSCSI disk that was smaller than the 33.3% threshold. It had 5TB, of which only 1.02TB was occupied.
Now the iSCSI disk is back.
I configured 3 replicas.
http://florian.ca/ceph-calculator/
Total cluster size: 17280 GB
Worst failure replication size: 5760 GB
Risky cluster size: 5760 GB
Safe cluster size: 3840 GB
Last edited on October 9, 2020, 8:28 pm by exitsys · #7
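The calculator's figures can be reproduced by hand; a minimal sketch of the arithmetic, assuming 3 nodes with 6 x 960GB OSDs each, 3 replicas, and the host as failure domain (assumptions taken from this thread):

```python
# Reproduce the ceph-calculator numbers above: 3 nodes x 6 x 960GB OSDs,
# 3 replicas, failure domain = host.
nodes, osds_per_node, osd_gb, replicas = 3, 6, 960, 3

total = nodes * osds_per_node * osd_gb       # 17280 GB raw capacity
worst_failure = osds_per_node * osd_gb       # 5760 GB raw lost if one node dies
risky = total // replicas                    # 5760 GB usable, no headroom to re-heal
safe = (total - worst_failure) // replicas   # 3840 GB usable and still able to recover

print(total, worst_failure, risky, safe)     # 17280 5760 5760 3840
```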
admin
2,930 Posts
October 9, 2020, 8:24 pm
It is not related to the rebuild being finished. My guess is that the load was too high due to too much rebalancing, so the connection could time out; hence it helps to reduce the backfill/recovery speed in these cases.
exitsys
43 Posts
October 9, 2020, 8:28 pm
No, the rebuild is not finished yet. But to finish the question: suppose the third node had simply failed and I added a new one. Would the iSCSI target have been gone too? What made it completely disappear? Because of overload? Then I have to ask myself whether PetaSAN is the right choice for us, because a node failure can happen quite often.
admin
2,930 Posts
October 9, 2020, 9:12 pm
It depends on your hardware, the current client load, how many disks you are adding at once, the percentage of added disks relative to the total cluster disks, as well as the current backfill speed settings. It helps to reduce the backfill speed to "slow" in such a case; I believe we show this as an info message when joining. You can also join a node but add the OSDs in steps rather than all at once.
Last edited on October 9, 2020, 9:13 pm by admin · #10
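A minimal sketch of the "add OSDs in steps" approach, assuming the ceph CLI is available on a management node; the batch names are made up for illustration, and how each batch is actually added (e.g. through the PetaSAN UI) is only a placeholder here:

```python
import subprocess, time

# Hedged sketch: add OSDs in small batches and let the cluster settle between
# batches. `ceph health` is a standard Ceph command; OSD creation itself is
# done elsewhere (PetaSAN UI or CLI) and is represented by a print below.
def wait_until_healthy(poll_seconds=60):
    while True:
        out = subprocess.run(["ceph", "health"], capture_output=True,
                             text=True).stdout
        if out.startswith("HEALTH_OK"):
            return
        time.sleep(poll_seconds)

for batch in (["sdb", "sdc"], ["sdd", "sde"], ["sdf", "sdg"]):  # hypothetical disks
    print("add OSDs for:", batch)   # add this batch via the PetaSAN UI at this point
    wait_until_healthy()            # wait for backfill to finish before the next batch
```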