Journal for multiple nodes, different-size OSDs, and a separate node for reporting
thodd
1 Post
November 13, 2018, 11:14 am
Hi,
I'm setting up a PetaSAN cluster using 5 old servers (4 cores, 8GB RAM each) with the following setup:
1. Node 01, 02, 03: monitors
2. Node 01, 02, 04, 05: storage
3. All nodes: iSCSI targets
I currently have several 146GB SAS drives and a small number of 7200RPM drives (each much larger than the 146GB SAS drives). So I'm planning to use 2 SAS drives in each node, one for the OS and one for the journal. The slower drives will all be OSDs.
With the setup above, I have 3 questions about the scenario:
1. Is it possible to have different OSD sizes (varying from 0.5TB to 4TB) across all 4 of my storage nodes? (Node 03 is for reporting only.) Are there any limitations in terms of performance and reliability?
2. Is it recommended to have a separate node for reporting?
3. I also have a single 250GB SSD. How could I make use of this fast SSD, maybe as a journal shared across all nodes, or something similar?
Thank you.
Last edited on November 13, 2018, 11:15 am by thodd · #1
admin
2,930 Posts
November 13, 2018, 12:41 pm
There are many possible ways to build your cluster. One thing that will affect performance and reliability is how similar the hardware is, so try to keep it uniform. A 4TB disk will get 8x the I/O load of a 0.5TB disk, so it will be busier, become a performance bottleneck, and probably fail sooner.
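As a rough sketch of where the 8x figure comes from: Ceph places data in proportion to each OSD's CRUSH weight, which by default follows its capacity, so an OSD's share of data (and, on average, of I/O) scales with its size. A minimal Python illustration, using hypothetical disk sizes:

# Ceph distributes data in proportion to CRUSH weight (by default, capacity).
# The OSD sizes below are hypothetical, just to show the ratio.
capacities_tb = [0.5, 0.5, 4.0, 4.0]
total = sum(capacities_tb)
for i, cap in enumerate(capacities_tb):
    share = cap / total  # fraction of data, and roughly of I/O, this OSD receives
    print(f"osd.{i}: {cap} TB -> {share:.0%} of cluster data")
# The 4TB OSDs end up with 8x the share of the 0.5TB OSDs, hence roughly 8x the I/O.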
All management nodes share the same management/monitoring functions, so it is better for them to have similar hardware. Most commonly they are either all dedicated to this task only, or they all share other tasks together, such as iSCSI and storage.
Journals can only serve local disks/OSDs, so a single SSD can only journal the OSDs in its own node.
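For question 3, this means the 250GB SSD can only act as a journal for OSDs in whichever node it is installed in. A hypothetical sizing sketch (the OSD count and headroom are assumptions, and PetaSAN normally handles journal assignment during its own setup, so this only illustrates the local-only constraint):

# Split one local 250GB SSD into journal partitions for the local OSDs only.
ssd_gb = 250
local_osds = 4        # assumed number of spinning OSDs in this node
headroom_gb = 10      # assumed space left unpartitioned
per_journal_gb = (ssd_gb - headroom_gb) // local_osds
print(f"{per_journal_gb} GB journal partition per local OSD")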