PetaSAN Deployment info
nosinformatica
9 Posts
July 19, 2024, 4:07 pm
Hi, we are new to PetaSAN, and I have some questions (sorry if these topics have already been discussed).
- Is PetaSAN production ready?
- Is a virtualized architecture with VMware ESXi suitable? For example, if we have 3 VMware nodes with a hardware RAID controller with cache, does PetaSAN perform well? Or is it advisable to install it on bare metal?
For instance, if I had 3 servers composed as follows: CPU Xeon 2680v4 - 64GB RAM - Controller P440AR + 2GB cache - 8x 4TB 10k SAS HDDs in RAID 5 + hot spare - 2x 10GbE interfaces + 4x 1GbE interfaces, would you configure physical pass-through of the disks, or would you install VMware with RAID management of the disks?
Thank you very much!
admin
2,930 Posts
July 19, 2024, 5:44 pm
Yes, PetaSAN is production grade; it is very stable.
Virtualized deployment is not something we support. You can try it yourself to see the difference from bare metal and decide whether it is acceptable. One thing to keep in mind is that any software-defined storage system requires resources; in a hyper-converged setup, the storage will compete for resources with the other VMs running.
Last edited on July 19, 2024, 5:46 pm by admin · #2
nosinformatica
9 Posts
July 19, 2024, 8:25 pm
Thank you for the reply. For the storage on a physical installation, which is more suitable: HW RAID or HBA mode? Thanks
rgloeschner
1 Post
September 5, 2024, 5:58 am
Something to consider...
ESXi is a top-notch hypervisor and it can do some wonderful work. In a lab, I am working on this virtualized PetaSAN configuration myself right now. However, I would caution you against moving it into a production workload. Here are my thoughts...
- 1. The VMware licenses are relatively expensive and should be used for the VM infrastructure that provides the greatest return on investment. If you need a minimal deployment of NAS, iSCSI, Hadoop (for example), then the hypervisor route is probably the one that throws up the fewest obstacles, and it's a great way to deploy an MVP or a proof of concept quickly with little to no additional capital expenditure.
- 2. At scale, these types of workloads may not be suitable for a hypervisor-based platform. These platforms have advanced disk management, network management, and fault management techniques. If you build these solutions on top of an ESXi platform that is also providing advanced disk, network, and fault management, you may actually undercut your own work and thereby create a more fragile infrastructure. A fragile infrastructure is more prone to break, and when it does break it is harder to diagnose and fix. None of that leads to a happy life.
- There are a few areas within the datacenter where it is advisable to deploy bare-metal platforms. For "at-scale" solutions, simplicity is your friend.
Closing thoughts...
As an infrastructure architect for many years, I have had the opportunity to wrestle with these questions myself. For instance, "Should you virtualize SQL Server?", or "Should you virtualize Oracle?".
The fact of the matter is that you might well be able to successfully virtualize 99% of the SQL Server workloads out there, and perhaps 50% of the Oracle workloads. From a business standpoint, ask yourself "how much does the software license cost on a per-CPU basis?" If you're paying 48K-80K per CPU for your software, you really don't want too many of those CPUs hanging around when you have to true up your licenses every 2-4 years. For these workloads, bare metal with fewer CPUs running at the highest possible clock speeds probably makes more sense.
nosinformatica
9 Posts
October 10, 2024, 2:36 pm
Ok, thanks for your feedback. For what we need to do (exposing S3-compatible storage), we have opted for this configuration: 2 bare-metal nodes, each with 1 CPU E5-2680v4, 10x 6TB SAS HDDs, 2x 128GB SSDs for the OS, 2x 40GbE connections for server interconnection, and 1 virtual machine in an HA cluster for monitoring and management. Should cache disks be included for the storage? Does this configuration seem appropriate? Thanks.
f.cuseo
66 Posts
October 12, 2024, 5:08 pm
Quote from nosinformatica on October 10, 2024, 2:36 pm
Ok, thanks for your feedback. For what we need to do (exposing S3-compatible storage), we have opted for this configuration: 2 bare-metal nodes, each with 1 CPU E5-2680v4, 10x 6TB SAS HDDs, 2x 128GB SSDs for the OS, 2x 40GbE connections for server interconnection, and 1 virtual machine in an HA cluster for monitoring and management. Should cache disks be included for the storage? Does this configuration seem appropriate? Thanks.
I think that is not the correct configuration.
A more appropriate one would be:
- 3 bare-metal nodes (not fewer, because you need a quorum) with 1 or 2 CPUs (depending on your load, but consider that you will carry all the Ceph + radosgw load), 64GB of memory or more (you need memory for each OSD), NO RAID.
- To reach decent performance, you can use 2 standard SSDs in hardware RAID (PetaSAN still doesn't support software RAID) for the OS and the 10x 6TB SAS HDDs, but add 2 enterprise NVMe or SSD drives with power-loss protection (don't use consumer drives); I think you will need 1.9TB drives, I don't remember if 960GB is enough (see the sizing sketch after this post). Use them for journal/cache when you provision the OSD drives.
- Don't upgrade to the latest 3.3.0 release; keep it at 3.2.1, because there is a bug with multipart uploads on the S3 gateway.
I have a 12-node cluster with 12x 8TB drives + cache, 2 CPUs (8 cores each), 64GB of RAM, and 2x SSDs for the OS per node, all for the S3 gateway and NFS. 10 nodes form the cluster (3.3.0 release) and 2 other nodes run only the S3 gateway (3.2.1), waiting for the next release with the problem fixed.
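To put rough numbers on the journal/cache sizing question above, here is a small sketch. The 2% and 4% figures follow a common Ceph BlueStore DB sizing rule of thumb, and the drive counts come from the configuration discussed in this thread; treat them as assumptions, not PetaSAN requirements.

    # Rough sizing sketch for the BlueStore DB/WAL (journal/cache) partitions
    # on the NVMe/SSD devices. The 2% and 4% figures are a common rule of
    # thumb, not a PetaSAN requirement; adjust for your own workload.
    TB = 1000 ** 4  # decimal units, as drive vendors count
    GB = 1000 ** 3

    hdd_size = 6 * TB            # each data OSD
    hdds_per_node = 10
    cache_devices_per_node = 2   # OSDs split across the two NVMe/SSD drives

    osds_per_cache_device = hdds_per_node // cache_devices_per_node  # 5

    for pct in (0.02, 0.04):     # 2% and 4% of the data device
        db_per_osd = hdd_size * pct
        per_device = db_per_osd * osds_per_cache_device
        print(f"{pct:.0%} rule: {db_per_osd / GB:.0f} GB per OSD, "
              f"{per_device / TB:.2f} TB per cache device")

    # 2% -> 120 GB per OSD, 0.60 TB per device (a 960 GB drive would fit)
    # 4% -> 240 GB per OSD, 1.20 TB per device (needs a 1.9 TB class drive)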
nosinformatica
9 Posts
October 17, 2024, 9:31 am
Hi Fabrizio, why is a VM for the third node not a good choice? It would not host OSDs; it would only be there to prevent split-brain.
f.cuseo
66 Posts
October 17, 2024, 9:36 am
Quote from nosinformatica on October 17, 2024, 9:31 am
Hi Fabrizio, why is a VM for the third node not a good choice? It would not host OSDs; it would only be there to prevent split-brain.
Because you need no fewer than 3 replica copies to avoid problems in case of data damage. With 2 copies, you don't know which copy is the good one, and you would have to repair manually without knowing which one to trust; with 3 copies Ceph has a quorum, so repair is easier and automatic.
When you talk about a cluster, 3 is always the minimum number.
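To make the replication trade-off concrete, here is a quick sketch. The node and drive counts are taken from the configuration discussed earlier in the thread, and size=3 / min_size=2 are the usual Ceph replicated-pool defaults, not values PetaSAN forces on you; adjust to your own layout.

    # Usable capacity with 3-way replication on the proposed 3-node layout.
    nodes = 3
    hdds_per_node = 10
    hdd_tb = 6

    size = 3       # copies kept of every object
    min_size = 2   # copies required for client I/O to continue

    raw_tb = nodes * hdds_per_node * hdd_tb   # 180 TB raw
    usable_tb = raw_tb / size                 # 60 TB before Ceph overhead
    print(f"raw: {raw_tb} TB, usable with {size} replicas: {usable_tb:.0f} TB")

    # With size=3 the cluster can lose an entire node and keep serving I/O
    # (at least min_size copies remain), and scrub can pick the good copy by
    # majority instead of requiring a manual repair.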
nosinformatica
9 Posts
October 17, 2024, 9:59 am
Thanks, I'll replace the VM with a new bare-metal node.
Do I have to put the same roles on all nodes? Right now I have these roles.
f.cuseo
66 Posts
October 17, 2024, 10:17 am
Quote from nosinformatica on October 17, 2024, 9:59 am
Thanks, I'll replace the VM with a new bare-metal node.
Do I have to put the same roles on all nodes? Right now I have these roles.
Use what you need; for object storage, only the S3 service is needed. If you need iSCSI too, select it as well.
Your choice...
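Once the S3 role is enabled, a quick way to confirm the gateway works end to end (including multipart uploads, the code path with the 3.3.0 bug mentioned above) is a small boto3 check like the sketch below. The endpoint URL, bucket name, and credentials are placeholders; use the values from your own PetaSAN/RadosGW setup.

    # Minimal end-to-end check of the S3-compatible (RadosGW) endpoint.
    # Endpoint URL, keys and bucket name below are placeholders.
    import io
    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client(
        "s3",
        endpoint_url="http://s3.example.local:8080",   # hypothetical gateway URL
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    s3.create_bucket(Bucket="petasan-test")
    s3.put_object(Bucket="petasan-test", Key="hello.txt", Body=b"hello from petasan")
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])

    # Exercise multipart upload explicitly by forcing the threshold below the
    # object size (5 MiB is the minimum S3 part size).
    cfg = TransferConfig(multipart_threshold=5 * 1024 * 1024,
                         multipart_chunksize=5 * 1024 * 1024)
    s3.upload_fileobj(io.BytesIO(b"x" * (16 * 1024 * 1024)),
                      "petasan-test", "multipart-test.bin", Config=cfg)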