PetaSAN v 2.0 released!
sniffus
20 Posts
February 13, 2018, 8:47 pm
I did.
I will reboot each node in sequence and wait for stable before the next.
M.
admin
2,930 Posts
February 13, 2018, 8:53 pm
It should work without a reboot. Can you first check all monitors are up:
systemctl status ceph-mon@HOST_NAME
Also check the cluster status:
ceph status --cluster CLUSTER_NAME
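As a side note for anyone following the same upgrade, a quick way to confirm that the monitors have actually formed quorum, using standard Ceph commands (HOST_NAME and CLUSTER_NAME are placeholders for your values):
ceph quorum_status --cluster CLUSTER_NAME
ceph mon stat --cluster CLUSTER_NAME
quorum_status lists the monitors currently in quorum; mon stat gives a one-line summary including the quorum members.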
sniffus
20 Posts
February 13, 2018, 10:09 pm
systemctl restart is a dark evil beast in our lab... I should have used stop/start instead.
A reboot did the trick.
By the way, there are two dashes in --cluster in the Post-Upgrade step 😉
Would you like our comments on the upgrade document? A few things could stand a little rewrite for clarity.
But now my VMs are on PetaSAN 2.0 running BlueStore. The bare metal will be completed tomorrow. Quite cool: 0 downtime with the scripts. Our boss was amazed; we had to do a show-and-tell on Peta today, and it's looking good. We will definitely have to make multiple clusters work on the same host, since there will be a mix of performance between different clusters.
Will the script you gave us earlier work on 2.0 (to change the serial number info on iSCSI disks)?
Thank you again, great stuff. Can't wait to see numbers!
M.
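For completeness, the stop/start alternative to restart that sniffus alludes to would look roughly like this, using the standard Ceph systemd unit names (HOST_NAME is a placeholder):
systemctl stop ceph-mon@HOST_NAME
systemctl start ceph-mon@HOST_NAME
systemctl status ceph-mon@HOST_NAME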
admin
2,930 Posts
February 14, 2018, 8:54 am
Yes, please by all means post any changes/wording you would like to see in the upgrade guide and I will update it.
Happy to hear you had no downtime and your boss liked it 🙂 I still think a reboot is not needed, since we tested this many times, and the ceph mgr creation should be straightforward once the new monitors are up. One thing we were doing, however, was making sure the new monitors were indeed started and all in quorum; this is the check I posted in my previous post. Maybe that should be explicitly stated before step 9. If you do have a chance to retry this, please let me know.
The script I sent you is not in the release. I understood it did not work, and more importantly it will break any live upgrade, since having a mix of serial numbers for the same disk will surely drive the client MPIO layer nuts. Note that currently you can still have a client talking to two different PetaSAN clusters if you are talking to disks with different IDs; it will only break if the IDs overlap at the client, and it is something you can probably set up. If you still need this, you can apply the same patch on 2.0 and it will work the same.
Last edited on February 14, 2018, 8:55 am by admin · #14
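As an aside, verifying the manager daemon along the lines the admin describes might look like this, assuming the standard Luminous-era ceph-mgr systemd unit (HOST_NAME and CLUSTER_NAME are placeholders):
systemctl status ceph-mgr@HOST_NAME
ceph status --cluster CLUSTER_NAME
In the ceph status output, the services section should list an active mgr daemon once creation has succeeded.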
philip.shannon
37 Posts
February 16, 2018, 4:05 pm
Was running 1.4, and I used the express installer to upgrade to 2.0; this was a seamless and problem-free process...
With version 1.4 our software installation took 1 hour and 18 minutes; it's a heavy, IO-intensive thing. After upgrading to 2.0 I ran the same installation process, which finished in less than 16 minutes, in an all-SSD ESXi 6 environment. This is what we were hoping for! No more SAN-type speeds!
thank you and keep up the great work!
Last edited on February 16, 2018, 4:06 pm by philip.shannon · #15
vijayabhasker.gandla@spectraforce.com
6 Posts
February 28, 2018, 10:48 am
Hi,
Please let me know whether PetaSAN supports:
- Block storage (for hosting VMs on VMware, KVM, QEMU, Microsoft Hyper-V, …)
- I know this functionality is already available through the iSCSI target and initiator path.
- Can I use a VMware ESXi 5.5 server to connect to PetaSAN to create datastores and store live VMs on it? Currently I am using VMware ESXi 5.5.
- Object storage (compatible with S3, Swift, …)
- File storage (NFS, …) for storing and sharing files
My future project is to set up a private cloud in our data center, similar to AWS. I was considering an OpenStack and Ceph combination to support all three major storage types. Does PetaSAN support OpenStack?
Please provide this information and advice.
Thanks,
Vijaya Bhasker G
admin
2,930 Posts
February 28, 2018, 4:09 pm
Our main focus is supporting fault-tolerant iSCSI over Ceph, so we focus on block storage and support for Hyper-V, VMware and Xen. ESXi can connect to a datastore you create via the PetaSAN UI, as per our docs for VMware.
We do not focus on S3 and file (CephFS/NFS) in our UI, and we do not include their packages in the install ISO. We can make such packages available for download, but you will not find specific support for them in the PetaSAN UI and you will need manual configuration. Some users find PetaSAN makes it easy to create the Ceph cluster and manage adding/removing nodes and disks, so it will help you in this area. OpenStack will be similar: it will talk to PetaSAN directly using the rbd pool, and it will be able to create/delete images itself.
Last edited on February 28, 2018, 4:11 pm by admin · #17
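To make the OpenStack point concrete: a client holding the cluster keyring can manage images in the pool directly with the standard rbd tooling. A minimal sketch, assuming the default rbd pool and a placeholder CLUSTER_NAME (the image name is just an example):
rbd create rbd/test-image --size 10240 --cluster CLUSTER_NAME   # create a 10 GB image
rbd ls rbd --cluster CLUSTER_NAME                               # list images in the pool
rbd rm rbd/test-image --cluster CLUSTER_NAME                    # remove it again
This is essentially what OpenStack's RBD drivers do on your behalf.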
Gilberto
5 Posts
March 23, 2018, 9:33 pm
Hi there!
Congratulations on such outstanding software.
Is there any plan to add Fibre Channel support to it?
Thanks
admin
2,930 Posts
March 24, 2018, 12:03 am
Thanks very much.
Unfortunately there are no short-term plans for FC, as we are focused on iSCSI for now.
Last edited on March 24, 2018, 12:08 am by admin · #19