How to connect to the PetaSAN CIFS share folder
skyyxy
6 Posts
May 14, 2022, 9:24 am
Hi everyone, I'm new to PetaSAN, having moved from FreeNAS, and I'm very happy with what PetaSAN can do. Big thanks.
The only thing I want to use PetaSAN for is CIFS/SMB shares. Capacity and performance can scale out by adding expansion boxes one at a time, unlike FreeNAS, which is very exciting.
I watched the tutorial video on the official website, and the last step is accessing the CIFS folder via \\servername.
I created 3 VMs on my test PC in VirtualBox and the Ceph cluster works fine.
My question is: is it possible to access the CIFS share by IP address, like \\<server IP address>?
Thanks in advance.
admin
2,930 Posts
May 14, 2022, 1:55 pm
Yes, you can access the shares via IP address. You can define a range of CIFS IPs for the CIFS service nodes and distribute the IPs among your clients. However, it is best that clients access the shares using a service name like \\petasan-cifs and have this name resolve to the many IP addresses via round-robin DNS (set up externally) for automatic load balancing.
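For illustration, here is a minimal Python sketch (standard library only) of the idea: a client resolves a service name to its round-robin A records and can either connect to any one IP directly or just use the name and let DNS spread clients across the nodes. The petasan-cifs name and the "data" share are assumptions for the example, not defaults that PetaSAN creates for you.

```python
import random
import socket

# Hypothetical names for this example: adjust to your own DNS entry and share.
SERVICE_NAME = "petasan-cifs"   # round-robin DNS name pointing at the CIFS service IPs
SHARE_NAME = "data"             # an example CIFS share name

# Resolve the service name on the SMB port; with round-robin DNS this returns
# one A record per CIFS service IP.
addr_info = socket.getaddrinfo(SERVICE_NAME, 445, socket.AF_INET, socket.SOCK_STREAM)
ips = sorted({info[4][0] for info in addr_info})
print("CIFS service IPs:", ips)

# A client can connect by any single IP directly...
by_ip = rf"\\{ips[0]}\{SHARE_NAME}"
# ...or let DNS spread clients across the nodes by using the name itself.
by_name = rf"\\{SERVICE_NAME}\{SHARE_NAME}"
print("UNC by IP:  ", by_ip)
print("UNC by name:", by_name)

# Picking a random resolved IP mimics the load balancing that round-robin DNS
# gives you automatically when clients simply use the service name.
print("Randomly chosen node:", random.choice(ips))
```

Either UNC path can then be opened from a Windows client (Explorer or net use), and the same targets work from Linux with mount -t cifs.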
skyyxy
6 Posts
May 15, 2022, 3:39 am
Thanks a lot. I'm still researching how to set up the CIFS server IPs correctly; maybe my VMs' network settings don't work. I will run it on 3 real PCs with 3 NICs each in a few days and hope it works.
Another question: in the web UI I saw an option to set a device as a cache, so:
1: Is this cache device a read cache or a write cache (like ZFS's L2ARC or ZIL devices), or a global cache?
2: Is this cache only for this node, or for all Ceph OSDs? I think only for this node.
Big thanks again.