Feature Suggestions - dashboard improvements, iscsi name, time, Node IP, system disk & snapshots
tarlo
1 Post
April 19, 2017, 6:28 am
Hi
I am new to PetaSAN, but I am currently testing it in a VM environment.
I have industry experience with storage systems, and I love the simple look and feel and how Ceph functions.
I have some suggestions that would improve usability:
- On the Dashboard, is it possible to have a summary of online nodes and NICs? I see there is already a summary of OSD status for the HDDs.
- Under the iSCSI disk list, we have Disk ID "00001" and Disk name "disk", yet the IQN lists the Disk ID, for example iqn.2016-05.com.petasan:00001.
This makes it a hassle to work out what the actual, meaningful disk/volume name is. Can we have the option to use the disk name in the IQN?
- Can we have an option to set the clock and time zone within the GUI?
- Can we have the option to change the nodes' IP addresses in the GUI?
- There is one major issue with system disk redundancy: it is a single point of failure.
I am not sure if it is possible, but could Ceph be used on the system disks to replicate their data to other system disks in the cluster?
If Ceph cannot be used, could we use software RAID 1 (mirror) and have the option to install a second disk for system disk redundancy? This would avoid having to completely reinstall PetaSAN on the node if the system disk fails.
- It has already been mentioned: disk compression / de-duplication.
- Disk snapshots, replication & scheduling.
I am sure there will be more later.
thanks
Tarlo
admin
2,921 Posts
April 19, 2017, 9:55 am
Thank you for your comments; such feedback allows us to continually improve and build a great system. We are planning to add most, if not all, of these enhancements. Just three notes:
- For the disk name: when you create a new disk on the Add Disk page, you specify a descriptive name (it is not fixed to "disk"). You can change the name, IQN name, size, etc. on the Edit page (the disk needs to be in the stopped state to allow editing).
- The node IPs are difficult to change; this is primarily a constraint from Ceph (you cannot change the IPs of the monitors, which are recorded in the cluster's monitor map). However, on the iSCSI side you can change the iSCSI subnets and the disk IPs at any time (disks with old IPs and subnets will continue to work correctly). A small sketch of how the monitor addresses can be inspected follows this list.
- Re system disk redundancy: Software RAID 1 can be an option, but if you need RAID 1 for the system disk, you can always use hardware RAID. In fact this is what other server systems do. If you do not use RAID 1 and the system disk fails, although this will knock out the node, the cluster will continue to function so it is not really catastrophic. Ceph will automatically self adjust and recreate data replicas in the background while continuing to server client io. You can also remove any of the functioning storage disks in the failed node and just hot plug them in other nodes (existing/new), this will make Ceph's recovery job much lighter by just copying updated data replicas instead of creating complete ones.
I understand some of this needs to be documented; we will be working on that too.
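As for the software RAID 1 option above, the following is a rough, unsupported sketch of creating a mirror with mdadm, wrapped in Python purely for illustration. The device names are placeholders, and a real system-disk mirror would also need partitioning and bootloader setup on both disks, which is skipped here:

    # Sketch only: build a two-device RAID 1 (mirror) array with mdadm and check its state.
    # /dev/sdX and /dev/sdY are placeholder device names -- adjust for real hardware.
    import subprocess

    def create_mirror(md_device="/dev/md0", members=("/dev/sdX", "/dev/sdY")):
        cmd = [
            "mdadm", "--create", md_device,
            "--level=1",                               # RAID 1 = mirroring
            "--raid-devices=%d" % len(members),
        ] + list(members)
        subprocess.check_call(cmd)

    def mirror_status():
        # /proc/mdstat summarizes the state of every md array on the node.
        with open("/proc/mdstat") as f:
            return f.read()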
admin
2,921 Posts
April 19, 2017, 11:44 am
I spoke too soon on point 1: the IQN name does need to end with the disk index. Maybe we could append this index to a user-defined IQN name.
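To illustrate the idea of appending the index to a user-defined name, here is a small sketch; this is only the suggested scheme, not how PetaSAN builds IQNs today, and the sanitization rule is an illustrative simplification of the characters allowed in an iqn-format name:

    # Sketch of the proposed scheme: descriptive disk name plus the zero-padded disk index.
    # The prefix is taken from the example earlier in this thread.
    import re

    IQN_PREFIX = "iqn.2016-05.com.petasan"

    def build_iqn(disk_name, disk_index, prefix=IQN_PREFIX):
        # IQN names are lowercase; keep letters, digits, dots and dashes, replace the rest.
        clean = re.sub(r"[^a-z0-9.-]+", "-", disk_name.lower()).strip("-")
        return "%s:%s-%05d" % (prefix, clean, disk_index)

    print(build_iqn("Backup Vol", 1))   # -> iqn.2016-05.com.petasan:backup-vol-00001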
colombabianca86
1 Post
November 20, 2017, 10:04 am
Quote from tarlo on April 19, 2017, 6:28 am
I am new to PetaSAN and I would like to start testing it in a small server environment.
We currently use VMware, but I'm planning to replace the servers with new Xen/KVM hosts and new Ceph storage. The question may sound stupid: can Xen/KVM be installed on the same hosts that have PetaSAN installed?
Basically, is it possible to have something similar to vSphere + vSAN? Thanks for the reply.
admin
2,921 Posts
November 21, 2017, 8:46 am
Hi there,
Currently we do not support this configuration. I would not recommend installing Xen and PetaSAN natively on the same box; instead, you could try running PetaSAN virtualized under Xen in a hyper-converged setup. That will give lower performance, but the native method is likely to be more involved and is likely to break as we release new versions of PetaSAN. Again, neither method is supported or has been tested by us.