petasan vs ceph
yln209
1 Post
November 9, 2016, 5:11 am
Hi, administrators,
Can you explain the relationship between Ceph and PetaSAN in detail?
I have read the installation documentation, but there is nothing about Ceph in it.
I use Ceph RBD in my project (I create an RBD image, map it on the client, and use it like a local disk). Will PetaSAN replace that workflow cleanly?
When more than one client uses the same iSCSI disk, can the written data be kept synchronized?
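For reference, the RBD workflow described above typically looks like the following sketch (the pool and image names are placeholders, not taken from this thread, and the commands assume a working Ceph cluster):

```shell
# Create a 100 GiB RBD image in a pool (names are example values)
rbd create mypool/myimage --size 102400

# On the client: map the image to a local block device (e.g. /dev/rbd0)
rbd map mypool/myimage

# Format and mount it like a local disk
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt/rbd

# Later: unmount and unmap when finished
umount /mnt/rbd
rbd unmap /dev/rbd0
```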
admin
2,930 Posts
November 9, 2016, 10:04 am
PetaSAN provides iSCSI storage; its aim is to be very easy to use yet very powerful. Internally it uses Ceph to provide a scale-out storage platform, Consul to provide high availability (HA), and a patched kernel to support symmetric Active/Active clustering across different host machines for the same iSCSI disk.
Ceph usage is mentioned in the docs, but PetaSAN's aim is to make all management easy via point-and-click user interfaces. Power users with Ceph knowledge are welcome to log into the nodes and use the Ceph CLI directly if they wish.
Regarding your question on multiple clients accessing the same disk without corruption: this is supported by PetaSAN at the block level, but you also need to ensure support at the filesystem and application levels. For example, Hyper-V/CSV, VMware/VMFS, OCFS2, and GFS2 support this. You can find this setup in our documentation for the Hyper-V, SOFS, and VMware use cases.
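As a sketch of what such manual use might look like, these are common read-only Ceph CLI commands a power user could run after SSHing into a node (assuming standard Ceph tooling is on the path; the pool name is an example):

```shell
# Overall cluster health and capacity
ceph status
ceph df

# Per-OSD layout and placement-group health
ceph osd tree
ceph pg stat

# List the RBD images in a pool (pool name is a placeholder)
rbd ls rbd
```

Because these commands only read cluster state, they are safe to run alongside the PetaSAN management interface.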