Multiple Pools (10x SSD only // 10x SAS HDD + 2 SSD journal only) per node
Kurti2k
16 Posts
June 6, 2019, 12:05 pm
I have 3 nodes, each with:
2x E5-2620 v3, 64 GB DDR4
2x dual 40GbE Mellanox 354A cards
10x SAS SSDs & 10x SAS HDDs + 2 SSD journals
How can I configure 2 separate pools, 1x for fast VMs and 1x as a slow data pool?
Best regards
admin
2,930 Posts
June 6, 2019, 12:27 pm
Use the built-in by-host-ssd and by-host-hdd rule templates: create the fast pool using the first and the slow pool using the second. Use the default size of 3 and assign 2048 PGs per pool. Delete the default pool if it is not used.
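For reference, a rough plain-Ceph CLI sketch of those steps (the built-in templates do this from the UI; the pool names fast-vm and slow-data are placeholders, and it assumes the OSDs already report the ssd and hdd device classes):

# replicated CRUSH rules limited to one device class, failure domain = host
ceph osd crush rule create-replicated by-host-ssd default host ssd
ceph osd crush rule create-replicated by-host-hdd default host hdd

# fast pool on the SSD rule, slow pool on the HDD rule, 2048 PGs each, size 3
ceph osd pool create fast-vm 2048 2048 replicated by-host-ssd
ceph osd pool create slow-data 2048 2048 replicated by-host-hdd
ceph osd pool set fast-vm size 3
ceph osd pool set slow-data size 3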
Kurti2k
16 Posts
June 6, 2019, 1:36 pm
THX, it works 🙂
Kurti2k
16 Posts
June 7, 2019, 10:17 am
The SSD pool is 10k IOPS slower on writes than the default pool with 1024 PGs.
Best regards
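For what it's worth, the PG counts of the two pools can be checked and adjusted from the CLI before re-testing (a sketch; the pool names fast-vm and rbd are placeholders for the SSD and default pools):

# check the current PG count of each pool
ceph osd pool get fast-vm pg_num
ceph osd pool get rbd pg_num
# raise pg_num on a pool if needed; pgp_num must be raised to match
ceph osd pool set fast-vm pg_num 2048
ceph osd pool set fast-vm pgp_num 2048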
admin
2,930 Posts
June 8, 2019, 4:03 pm
If you give more detail, I will try to help:
What model of SSD do you use?
What are the read/write IOPS for both the default and SSD pools?
How many PGs for both pools?
Can you test with 2 client nodes, 64 threads each, for 5 min?
What is the resource load? If possible, post screenshots of the benchmark page as well as node stats for CPU% and disk%.
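One possible way to run such a test from two client nodes is rados bench (a sketch only; the built-in benchmark page reports the same numbers together with node stats, and the pool name fast-vm plus the 4K block size are assumptions):

# on each of the 2 client nodes: 4K writes, 64 threads, 5 minutes
rados bench -p fast-vm 300 write -b 4096 -t 64 --no-cleanup
# matching random-read pass, then remove the benchmark objects
rados bench -p fast-vm 300 rand -t 64
rados -p fast-vm cleanup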
Kurti2k
16 Posts
June 19, 2019, 8:02 amQuote from Kurti2k on June 19, 2019, 8:02 amper node:  SSD: 4x PM1633a 960GB
SSD: 2x PM1633a 960GB Journal
SSD: 8x Toshiba THNSN81Q92CSE
HDD: 10x HGST HUC101212CSS600 600GB 10K
Ceph total
Pool 1 -- SSDs PM1633a & THNSN81Q92CSE
Pool 2 --Â SAS HDDs + SSD journal PM1633a
As soon as I have time again I will benchmark with client nodes
Â
best regards
Last edited on June 19, 2019, 8:10 am by Kurti2k
admin
2,930 Posts
June 19, 2019, 9:35 am
Also, if you can, run the raw disk tests from the blue node console. They write data to the raw devices, so they need to run on disks before those are added as OSDs, or you can delete an OSD to run the tests.
For the SSD pool, you would typically not add another SSD as a journal: you would normally either add no journal at all or use a faster NVMe device, unless the SSD used for the OSD is very slow. The raw tests will help determine this.
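Outside the console menu, a raw test along these lines gives comparable numbers (an approximation, not the exact script the console runs; /dev/sdX is a placeholder, and the run destroys any data on that device, so only use a disk that is not an OSD):

# 4K synchronous writes straight to the raw device (journal-style workload)
fio --name=raw-sync-write --filename=/dev/sdX --direct=1 --sync=1 \
    --rw=write --bs=4k --iodepth=1 --numjobs=1 \
    --runtime=60 --time_based --group_reporting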
Kurti2k
16 Posts
June 19, 2019, 11:31 am
Yes, SSD pool = 4x PM1633a + 8x THNSN81Q92CSE, without journal.