PetaSAN 2.5.0 released.
admin
2,930 Posts
March 3, 2020, 7:53 pm
Hi all,
Happy to announce PetaSAN 2.5.0.
New Features:
- Scalable active/active CIFS/SMB shares with Active Directory authentication.
- Central Ceph configuration using monitor db with predefined categories.
- NUMA pinning.
- Different interface configuration based on node role.
- Single backend network.
- OSD failure report, clickable from the dashboard.
- Ceph 14.2.7, the latest stable release.
Notes:
- For online upgrades, refer to the Online Upgrade Guide.
- For copying large files over CIFS/SMB shares, we recommend using the Microsoft robocopy tool (see the example below).
For screenshots, click here
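As a rough illustration of the robocopy note above (the virtual IP and share name are placeholders, not anything configured by PetaSAN), a large copy from a Windows client could look like:
robocopy C:\data \\<cifs-virtual-ip>\<share-name> /E /Z /MT:16 /R:3 /W:5
/E copies subdirectories (including empty ones), /Z uses restartable mode, /MT:16 runs 16 copy threads, and /R:3 /W:5 limit retries and wait time on transient errors.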
Last edited on March 3, 2020, 7:55 pm by admin · #1
tuocmd
54 Posts
March 4, 2020, 1:05 am
Hi admin,
Does creating a partition for sharing via CIFS/SMB still use RBD or CephFS?
Is the limit for partition creation still 100TB?
Can we extend the partition over 100TB?
admin
2,930 Posts
March 4, 2020, 8:44 am
It is using CephFS. The limit is determined by OSD capacity; it will grow as you add OSDs.
The 100 TB is the current limit per iSCSI disk/LUN; there is disk-size overhead for performing snapshots and deletions. What practical size do you think it should be?
tuocmd
54 Posts
March 4, 2020, 10:47 am
So currently PetaSAN will have both CephFS and Ceph RBD when installed?
The use of SMB and iSCSI is separate.
I want to use SMB to share a 500 TB partition with Windows machines. Can it be done?
admin
2,930 Posts
March 4, 2020, 2:31 pm
It supports creation of both RBD and CephFS pools.
You can also create default pools for both while deploying (the default choice) to get you started, but for production it is better to create your own pools so you can specify placement rules, etc.
You can create 500 TB shares if you add OSDs.
Last edited on March 4, 2020, 2:32 pm by admin · #5
tuocmd
54 Posts
March 5, 2020, 9:25 am
Do you have any instructions on connecting clients to CephFS?
admin
2,930 Posts
March 5, 2020, 1:18 pm
If you mean Windows clients, they connect to it as a regular share.
If you mean Linux clients accessing CephFS directly:
mount -t ceph ip1,ip2,ip3:/ /mnt/cephfs -o name=admin,mds_namespace=cephfs,secretfile=/etc/ceph/cephfs-key
ip1,ip2,ip3: the IPs of the 3 management nodes (the first 3 nodes)
mds_namespace=cephfs: the name of the filesystem, in this case cephfs
/etc/ceph/cephfs-key: needs to be copied from a PetaSAN CIFS server node
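If you want the mount to persist across reboots, a minimal /etc/fstab sketch (assuming the same monitor IPs, filesystem name and key path as above) would be:
ip1,ip2,ip3:/ /mnt/cephfs ceph name=admin,mds_namespace=cephfs,secretfile=/etc/ceph/cephfs-key,_netdev,noatime 0 0
The _netdev option delays the mount until networking is up; adjust the other options to your environment.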
Last edited on March 5, 2020, 9:37 pm by admin · #7
vuquangthang
1 Post
May 21, 2020, 4:29 pm
Hi admin,
Scalable active/active CIFS/SMB shares with Active Directory authentication is a new feature of version 2.5.0.
I now have a PetaSAN cluster and want to share CIFS/SMB to a Windows server with Active Directory, but I can't find any tutorial on this.
Thanks
admin
2,930 Posts
May 21, 2020, 10:40 pm
There is a short help section in the Quick Start Guide; we are also working on updating the Admin Guide.
It should be easy enough to figure out if you spend a few minutes. Basically, you start off by creating a filesystem (or use the default one created when you build the cluster), assign CIFS Server roles to some nodes, enter a virtual IP range to assign to the CIFS servers (plus the AD settings) in the CIFS Settings page, then add shares from the CIFS Shares page.
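Once a share is created, a domain-joined Windows client can map it in the usual way; as a sketch (the virtual IP, share name and domain account are placeholders):
net use Z: \\<cifs-virtual-ip>\<share-name> /user:MYDOMAIN\username
You can also browse to \\<cifs-virtual-ip> directly from Explorer and authenticate with an Active Directory account.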
Last edited on May 21, 2020, 10:42 pm by admin · #9
gxnncg
1 Post
May 26, 2020, 8:34 am
How do I download the ISO?