Access CEPH cluster directly
smurphy4321
2 Posts
October 26, 2018, 7:00 pm
I like the concept of PetaSAN allowing ESX to use a Ceph backend. Does this solution also allow you to connect directly to the Ceph cluster, so that other things like OpenStack and Kubernetes can make use of the storage without going through iSCSI?
admin
2,930 Posts
October 26, 2018, 8:19 pm
You can create manual rbd images in the PetaSAN cluster and have external clients such as OpenStack access them; they will live happily co-located with the iSCSI images and will be shown in the web UI as "detached" disks. If in the future you decide to use such a manually created disk through the PetaSAN iSCSI layer, you "Attach" the disk and it becomes an iSCSI disk. You can also do the reverse: a PetaSAN iSCSI disk created from the UI can be "Detached" so it is no longer used by the iSCSI layer and becomes a pure rbd image.
PetaSAN iSCSI disks are just rbd images with rbd metadata containing the iSCSI information. Attach/Detach adds/removes this rbd-level metadata on existing images.
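As a rough sketch (the pool and image names here are only examples, not anything PetaSAN creates for you), a standalone rbd image for an external client could be created and inspected from any node with Ceph admin access like this:
rbd create rbd/openstack-vol1 --size 100G
# a plain image like this appears in the PetaSAN web UI as a "detached" disk
rbd info rbd/openstack-vol1
# iSCSI disks managed by PetaSAN carry extra rbd-level metadata, which can be listed with:
rbd image-meta list rbd/openstack-vol1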
smurphy4321
2 Posts
October 26, 2018, 8:22 pm
I presume that the management nodes in the PetaSAN cluster are the Ceph monitor and manager nodes, and those are what you point your other systems to?
admin
2,930 Posts
October 26, 2018, 8:34 pm
Yes.
Technically the management nodes have other roles as well outside of Ceph, but for pure Ceph clients they are indeed the monitors/managers.
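For example, a minimal client-side ceph.conf pointing at the management nodes might look like the following (the fsid and IP addresses are placeholders; copy the real values from your cluster's configuration):
[global]
fsid = <your cluster fsid>
mon_host = 192.168.10.11, 192.168.10.12, 192.168.10.13
The client also needs a keyring (for example /etc/ceph/ceph.client.admin.keyring, or preferably a dedicated client key with limited caps) to authenticate against the cluster.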
Last edited on October 26, 2018, 8:35 pm by admin · #4
Kurti2k
16 Posts
November 8, 2018, 1:08 pm
Hi,
I want to connect two CentOS servers to one rbd image:
rbd create share --size 300G --image-shared --pool rbd
sudo rbd map share
sudo mkfs.ext4 /dev/rbd0
sudo mkdir -p /mnt/cephshare
sudo mount /dev/rbd0 /mnt/cephshare
The data is only synchronized when I reconnect to the rbd image. How can I fix this? I want to build an HA CIFS server.
Best regards
Marcel
Last edited on November 8, 2018, 1:08 pm by Kurti2k · #5
admin
2,930 Posts
November 8, 2018, 4:44 pm
This is due to file system caching; you can sync/flush the file system using the "sync" command.
You can control the cache behaviour via sysctl with:
vm.dirty_background_bytes
vm.dirty_background_ratio
vm.dirty_bytes
vm.dirty_expire_centisecs
vm.dirty_ratio
vm.dirty_writeback_centisecs
Note that with caching, in case of failure, you risk losing the data that was not yet written to disk.
Samba with Ceph is quite new; you may find this useful:
https://www.youtube.com/watch?v=4rBwbW4TTYM&t=1767s
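For example (the values below are illustrative only, not tuning recommendations), you can flush manually and make the kernel write dirty pages out more aggressively:
# flush dirty pages to disk now
sync
# write dirty pages out sooner and in smaller batches
sysctl -w vm.dirty_background_ratio=1
sysctl -w vm.dirty_ratio=5
sysctl -w vm.dirty_expire_centisecs=500
sysctl -w vm.dirty_writeback_centisecs=100
To make such settings persistent across reboots, put them in /etc/sysctl.conf or a file under /etc/sysctl.d/.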
Last edited on November 8, 2018, 4:44 pm by admin · #6