Veeam Backup Target
craig51
10 Posts
July 17, 2022, 5:50 pm
Team,
we are getting ready to do a test deployment of PetaSAN as a backup target for our Veeam infrastructure, where we create longer-retention backups of datacenter VMs holding data and other important info, and as a target for clients with on-site servers that send backups to our cloud. I have seen info here on how to set up S3 storage, and also on 45Drives on how to use RBD.
1) I am wondering if there is a preferred option, and if so, why?
2) If I am not going to use iSCSI, can I ignore the basic networking configuration that references the iSCSI network segments? If so, is there anything else I should add?
3) We can deploy this storage pod in our datacenter using local LAN connectivity, or over a PTP fiber connection back to our office. Pros and cons of each method, from my view:
Datacenter: adds additional power consumption and space (cost); speed between the Veeam repos and the storage can be faster.
Office: minimal cost for power in my office, and space is not an issue; resolves datacenter disaster concerns (though the DC is a commercial Involta center); would require adding PTP fiber from the DC to the office (cost); would prefer a generator to keep the system up during power problems (cost).
If PTP fiber is acquired to run this traffic over, what would be needed? 1 Gbit? 500 Mbit?
Thank you in advance for your help.
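On the fiber-sizing question, the rough arithmetic is just data volume divided by the backup window. A minimal sketch in Python; the 2 TB nightly volume and 8-hour window are assumed example numbers, not figures from this deployment:

```python
def required_mbps(data_tb: float, window_hours: float, overhead: float = 1.0) -> float:
    """Average throughput in Mbit/s needed to move data_tb terabytes
    within window_hours, with an optional multiplier for protocol
    overhead and retry headroom."""
    bits = data_tb * 1e12 * 8            # decimal TB -> bits
    seconds = window_hours * 3600
    return bits / seconds / 1e6 * overhead

# Example: a 2 TB nightly backup copy over an 8-hour window
print(round(required_mbps(2, 8)))        # prints 556 (Mbit/s)
```

By this estimate a 500 Mbit link would be borderline for 2 TB a night, while 1 Gbit leaves headroom; real sizing should also budget for restores and synthetic-full operations running over the same link.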
wid
47 Posts
September 27, 2022, 6:54 pm
Hey,
I have been testing my 'lab storage' as a Veeam backup target for a very long time, and I have the following conclusions:
1) The server specs really matter. At minimum:
at least three nodes - but if you have the budget, start with four
at least a Dell R720xd
at least 64 GB of RAM
at least 2 x 512 GB NVMe with a factory built-in heat sink
at least 1 x dual-port 10 Gbps NIC
2) I used iSCSI first - it was nice because the backup files were visible from the OS, and the PetaSAN setup was simple: an RBD image with iSCSI enabled.
On the other hand, the performance of backups and copies was terrible, at times as low as 30 MB/s (I do not blame PetaSAN; it could have been my skills, the hardware, or the lack of NVMe journals).
There was no problem with the MDS, because iSCSI does not use one.
3) Still without NVMe, I moved everything to NFS. It immediately became about 3x faster. NFS has no authentication, so you have to restrict access by IP. The files are not visible on the client system. Sometimes the MDS has a problem with 'cache pressure'. In the meantime, each drive got its journal on NVMe, and transfers now comfortably reach 200 MB/s.
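On presenting the NFS export: one common pattern is to mount it on a Linux server and register that path as a Veeam repository. The sketch below is an assumption about the layout, not the exact setup described above; the addresses, export name, and mount point are placeholders.

```shell
# Placeholder addresses/paths; adjust to your environment.
# Since the export has no authentication, restrict it to the
# repository server's IP on the PetaSAN/NFS side.

sudo mkdir -p /mnt/veeam-repo
sudo mount -t nfs -o vers=4.1,hard,noatime \
    10.0.0.50:/veeam-backups /mnt/veeam-repo

# Then add /mnt/veeam-repo in the Veeam console as a Linux
# backup repository (or add the share as an NFS file-share repo).
```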
craig51
10 Posts
September 28, 2022, 7:27 pm
Thank you for the feedback.
This is similar to what I experienced: using iSCSI or NFS shares through VMware ESXi was problematic and slow, but when I connected the NFS share directly as a Veeam repository it was very fast, and I could use very large storage volumes compared to making drives on ESXi. Agreed, not having a good view of the backup files is concerning, but I guess one could work around that. Did you just connect your NFS exports to the Veeam backup server, or present them to Veeam another way?