Storage Cluster for VMware
harald
6 Posts
June 16, 2019, 1:56 pm
Hi,
I'm new to PetaSAN and looking for some kind of tutorial or how-to for setting up a storage cluster for VMware.
For my private lab I'm looking for a shared virtual storage solution.
I would like to have two storage nodes that replicate like a RAID 1 over the network: one iSCSI path goes to the first node, the second to the second node.
If one node fails, the other takes over.
Is there any how-to on this?
TIA
harald
admin
2,930 Posts
June 16, 2019, 7:45 pm
Look at the documentation; we have a quick start guide plus a guide for VMware setup. Note that we do not support running PetaSAN as virtualized VMs.
You do not need to worry about or do anything special for high availability, as it is built in.
harald
6 Posts
June 17, 2019, 5:50 am
I already did this.
One thing is not clear to me:
I have 200 GB on each of 2 nodes for testing.
The cluster is reporting 400 GB; shouldn't it be 200 GB if it is a cluster?
admin
2,930 Posts
June 17, 2019, 9:49 am
The dashboard pie chart you see shows the global "raw" available storage. If you write 1 GB from your VM, this value will decrease to 398 GB (with 2x replication, each 1 GB of data consumes 2 GB of raw storage). Look at the Pools page: you can create various pools with different replication factors, such as 3x, 4x, or EC pools, each of which consumes a different overhead from the global value.
As a side note, storage is thin provisioned, so you can create iSCSI disks whose total capacity exceeds the global physical storage capacity, but as you fill your iSCSI disks you will need to add physical disks.
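For readers following the arithmetic, here is a minimal sketch of how raw and usable capacity relate under replication; the 2x factor is an assumption inferred from the 400 GB to 398 GB example above.

```python
# Raw vs. usable capacity under replication: a minimal sketch.
# Assumes 2 nodes with 200 GB each and a 2x replicated pool,
# matching the figures discussed in this thread.

def usable_capacity(raw_gb: float, replication_factor: int) -> float:
    """Usable capacity is raw capacity divided by the replication factor."""
    return raw_gb / replication_factor

def raw_remaining(raw_gb: float, data_gb: float, replication_factor: int) -> float:
    """Each GB of data written consumes replication_factor GB of raw storage."""
    return raw_gb - data_gb * replication_factor

raw = 2 * 200                    # two nodes, 200 GB each -> 400 GB raw
print(usable_capacity(raw, 2))   # 200.0 GB effectively usable
print(raw_remaining(raw, 1, 2))  # 398.0 GB raw left after writing 1 GB
```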
harald
6 Posts
June 17, 2019, 5:05 pm
OK, then PetaSAN is already doing what I wanted.
But now I have the next problem:
the automatic discovery is not finding any target on the nodes.
I have another iSCSI drive on the same subnet, so the network itself is working correctly.
I'm using VMware 6.7.
Any suggestions?
admin
2,930 Posts
June 17, 2019, 5:18 pm
You mean you created 2 iSCSI disks, and ESXi can discover one but not the other?
harald
6 Posts
June 17, 2019, 5:19 pm
It is discovering a QNAP iSCSI target, but not the PetaSAN target.
admin
2,930 Posts
June 17, 2019, 5:30 pm
Did you use a dynamic IP assigned to the disk? If yes, can you ping that IP?
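For anyone reproducing this check, a minimal sketch of a reachability test against the target portal that goes one step beyond ping: 3260 is the standard iSCSI port, and the address below is a placeholder for whatever IP the PetaSAN disk was assigned.

```python
# Quick TCP reachability check for an iSCSI portal: a minimal sketch.
import socket

def portal_reachable(ip: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the portal succeeds within the timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder address: replace with the IP assigned to the iSCSI disk.
print(portal_reachable("192.168.10.100"))
```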
harald
6 Posts
June 17, 2019, 8:09 pm
Just tested; no ping is possible.
Can I access the CLI to see the mapping of the NICs?
Sometimes VMware mixes up the NICs.
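A minimal sketch for checking the NIC-to-MAC mapping from a node's shell, assuming console or SSH access to the PetaSAN node (a Linux system, so the standard /sys/class/net layout applies):

```python
# List network interfaces and their MAC addresses on a Linux node,
# for cross-checking against the MACs VMware shows for each virtual NIC.
import os

SYS_NET = "/sys/class/net"

for iface in sorted(os.listdir(SYS_NET)):
    try:
        with open(os.path.join(SYS_NET, iface, "address")) as f:
            mac = f.read().strip()
    except OSError:
        mac = "unknown"
    print(f"{iface}: {mac}")
```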
harald
6 Posts
June 17, 2019, 8:43 pm
I checked the MAC addresses and the mapping of the interfaces; all are correct.
The only interface that answers ping is management.
None of the other interfaces answers.
I'm pinging from the same subnet, so no routing is involved.
Any suggestions?
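For completeness, a minimal sketch confirming the "same subnet, no routing" assumption using Python's ipaddress module; both addresses and the /24 prefix are placeholders.

```python
# Verify two addresses share a subnet, so ping between them needs no routing.
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefix: int) -> bool:
    """True if both addresses fall within the same network of the given prefix."""
    net = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net

# Placeholder addresses: an ESXi vmkernel IP and a PetaSAN iSCSI interface IP.
print(same_subnet("192.168.10.20", "192.168.10.100", 24))  # True -> no routing
```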