PetaSAN query
Srmvel
10 Posts
October 14, 2019, 2:42 am
Hi Team,
I'm new to the open-source PetaSAN.
I would like to test it before taking it to production (VMware 6.7).
I have three Supermicro servers.
Each server has two CPUs.
64 GB memory per server.
2 x 1 GbE and 2 x 10 GbE NICs per server.
5 x 3 TB SAS disks and 2 x 1 TB SSDs per server.
My query:
As per the PetaSAN hardware recommendations, we need to use SSDs. I have 2 SSDs per server and no other SSDs; the rest of the disks are SAS HDDs.
Can I go with this approach, or are SSDs required for all disks? Can I get moderate IOPS with this SAS plus SSD combination while testing?
What about RAID for the data disks? Does Ceph use no RAID?
Also, I guess 4 NICs are enough to set up this infrastructure.
We are going to use the same setup in production after testing.
Can someone please confirm?
Thank you,
Manivel RR
admin
2,930 Posts
October 14, 2019, 6:45 am
You can install the OS on 1 SAS drive and use the other 4 SAS drives as OSDs with 2 SSD journals (1 SSD journal per 2 drives).
Ceph works better without RAID; you can use it, but it is not recommended.
You need at least 2 NICs with PetaSAN.
Also, we recommend that you test your SSDs with the console menu for sync write speed to see if your SSD model is adequate.
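For a rough idea of what the sync-write test measures, a minimal sketch along these lines (assuming a Linux host and a writable path on the SSD under test; this is not the built-in console test) times small O_DSYNC writes:
```python
#!/usr/bin/env python3
"""Rough sync-write latency check for an SSD.

Illustrative sketch only -- the PetaSAN console menu has its own disk
speed test. Assumes a Linux host and a writable path on the SSD under test.
"""
import os
import time

TEST_FILE = "/mnt/ssd/syncwrite.test"   # assumption: a path on the SSD being tested
BLOCK = b"\0" * 4096                    # 4 KiB writes, similar to journal-style I/O
COUNT = 1000

# O_DSYNC makes every write wait until the device reports it durable,
# which is what Ceph journal/WAL writes effectively require.
fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
try:
    start = time.monotonic()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
    elapsed = time.monotonic() - start
finally:
    os.close(fd)
    os.remove(TEST_FILE)

print(f"{COUNT} sync 4 KiB writes in {elapsed:.2f}s "
      f"-> {COUNT / elapsed:.0f} IOPS, {COUNT * 4 / elapsed / 1024:.1f} MiB/s")
```
In general, journal SSDs with power-loss protection sustain far higher sync-write rates than consumer drives without it, which is the difference the console test is meant to reveal.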
Srmvel
10 Posts
October 14, 2019, 7:04 am
Thanks, admin, for the reply.
I don't understand the line below. Could you please give more insight on this?
"Also, we recommend that you test your SSDs with the console menu for sync write speed to see if your SSD model is adequate."
I'm going to use a Broadcom 9207-8i HBA card for the data disks. We are going to use a RAID card only for the OS disks (RAID 1).
Do any settings need to be changed on the HBA card, for example write-back or disk cache (as you would in RAID)?
Also, I'm going to use 3 NICs. I would like to segregate the management traffic on a single dedicated NIC (eth0). Will this setup work?
eth0 --> management
eth1 --> iSCSI 1 and backend 1
eth2 --> iSCSI 2 and backend 2
Also, why 2 iSCSI subnets and 2 backend subnets? Is this mandatory for load-balancing purposes?
Thanks,
Manivel R
Last edited on October 14, 2019, 10:31 am by Srmvel · #3
admin
2,930 Posts
October 14, 2019, 11:01 am
Not all SSDs give good performance with Ceph. You can either get an SSD model that is known to work well, or test it using PetaSAN: there is a blue-screen console menu on each node that has a disk speed test; look at the result for sync writes. Also, some low-end SSDs can lose data on power loss, so look for a model that does not show this.
If your controller supports write-back cache with a BBU, it may help to set the HDDs to use it; you may need to add each drive as a single-volume RAID-0 to enable the cache.
The NIC settings are fine. Another option is to create a single bond across the 2 NICs and place both the iSCSI and backend networks on it; PetaSAN can do this for you in the UI wizard.
Yes, you do need 2 separate subnets to support iSCSI MPIO, and Ceph needs 2 other networks; you can still map them all to just 2 interfaces as above. If you had more cards, it would be better to have each network on its own physical card.
Srmvel
10 Posts
October 14, 2019, 11:27 am
Thank you, admin, for your prompt feedback.
I will deploy it and let you know the results.
Thanks,
Manivel R
Srmvel
10 Posts
October 17, 2019, 4:30 am
Hi Admin,
I need to test this, but I have only one node. It has 5 x 3 TB HDDs and 2 x 1 TB SSDs.
Can I test PetaSAN with only one storage node?
I mean one PetaSAN node connected to a standalone ESXi server, just for testing.
Thanks,
Manivel RR
admin
2,930 Posts
October 17, 2019, 8:28 am
You need at least 3 nodes.
Srmvel
10 Posts
November 5, 2019, 9:47 pm
Hi Admin,
When I copy data from one drive to another (in the guest OS), we are seeing very high latency in esxtop.
Latency goes up to 10,000 ms.
The file copy starts at around 200 MB/s and later drops below 10 MB/s because of the latency.
I don't know how to solve this. Could you provide some more insight?
I used three Supermicro X10DRL-C servers.
I removed the default RAID controller and fitted all three servers with LSI 9207-8i HBA cards. I did not install any specific firmware from the Broadcom site; as per some articles, the v19 firmware is more stable than v20.
I went with a 4:1 ratio (HDD:SSD), but the HDD and SSD sizes were not uniform across the three servers.
Each server had 2 CPUs and 32 GB of physical RAM.
We used 2 x 10 GbE NICs for this testing.
Thank you,
Manivel RR
admin
2,930 Posts
November 5, 2019, 10:41 pm
How many HDD OSDs do you have? Do their sizes vary?
Can you run a 1-minute 4k test from the UI using both 1 thread and 64 threads?
What guest OS do you use? If Windows, can you test using a disk speed tool like CrystalDiskMark?
Can you try a new VMDK using thick provision eager zeroed?
Can you increase the MaxIOSize param as per our VMware guide?
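If you want to see the same 1-thread vs 64-thread effect from inside the guest, a rough sketch like the following (assumed test file path, and buffered writes forced out with fsync; this is not the PetaSAN UI benchmark) compares 4k write IOPS and latency at the two queue depths:
```python
#!/usr/bin/env python3
"""Rough 4k random-write comparison at 1 and 64 threads.

Illustrative sketch only -- not the PetaSAN UI benchmark. Assumes a
writable test file on the datastore-backed disk inside the guest.
"""
import os
import random
import threading
import time

TEST_FILE = "/tmp/4ktest.bin"     # assumption: path on the disk under test
FILE_SIZE = 256 * 1024 * 1024     # 256 MiB test file
BLOCK = 4096
DURATION = 60                     # seconds, matching the 1-minute UI test

# Pre-create the test file once.
with open(TEST_FILE, "wb") as f:
    f.truncate(FILE_SIZE)

def worker(results, idx):
    ops = 0
    buf = os.urandom(BLOCK)
    fd = os.open(TEST_FILE, os.O_WRONLY)
    end = time.monotonic() + DURATION
    while time.monotonic() < end:
        # 4 KiB write at a random aligned offset, forced to the backend.
        os.pwrite(fd, buf, random.randrange(0, FILE_SIZE // BLOCK) * BLOCK)
        os.fsync(fd)
        ops += 1
    os.close(fd)
    results[idx] = ops

for threads in (1, 64):
    results = [0] * threads
    pool = [threading.Thread(target=worker, args=(results, i)) for i in range(threads)]
    start = time.monotonic()
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    elapsed = time.monotonic() - start
    total = sum(results)
    print(f"{threads:>2} threads: {total / elapsed:.0f} IOPS, "
          f"avg latency {1000 * threads * elapsed / total:.1f} ms")
```
With HDD OSDs, the 1-thread result is dominated by per-write latency while the 64-thread result shows the aggregate IOPS the cluster can deliver, so a large gap between the two is expected; that gap is also why a single-threaded file copy inside the guest slows down so sharply.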