iSCSI disks
valio
17 Posts
March 28, 2018, 10:52 am
iSCSI disks are not starting automatically. Is that behavior by design?
admin
2,930 Posts
March 28, 2018, 11:13 am
If you mean they do not start automatically after you shut down the entire cluster and restart it, then yes.
PetaSAN is not meant to be restarted all the time; a full restart is treated like recovering from a disaster case. For iSCSI disks to function, the Ceph and Consul servers need to be running, and the Ceph servers need to sync to make sure all data is up to date. The admin should start the disks only when this is met. It may help if we add a "restart all" operation.
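The precondition described above can be scripted: poll `ceph health` and start the disks only once the cluster reports healthy. This is only a sketch under assumptions — the check treats any `HEALTH_OK` status as ready, the 10-second polling interval is arbitrary, and the helper path `/opt/petasan/scripts/util/start_all_disks.py` is the script the admin posts later in this thread.

```shell
# Return success (0) when the given "ceph health" output indicates
# the cluster is healthy.
is_healthy() {
    case "$1" in
        HEALTH_OK*) return 0 ;;
        *) return 1 ;;
    esac
}

# Poll "ceph health" until the cluster stabilizes, then start all
# iSCSI disks via the helper script (assumed install path).
wait_and_start() {
    until is_healthy "$(ceph health 2>/dev/null)"; do
        sleep 10
    done
    python /opt/petasan/scripts/util/start_all_disks.py
}
```

Calling `wait_and_start` from a boot-time script on a PetaSAN node would avoid starting disks while Ceph is still syncing.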
valio
17 Posts
March 28, 2018, 11:55 am
Yes, I mean that after restarting the Ceph cluster, the iSCSI disks are not starting. I have the following scenario: four nodes with ESXi 6, each one holding one PetaSAN node. The PetaSAN cluster serves iSCSI disks, and on those disks I have VMs which are supposed to start after the shared volumes are presented to the ESXi hosts. I have modified /etc/rc.local.d/local.sh on the ESXi hosts in the following manner:
#!/bin/sh
# local configuration options
# Note: modify at your own risk! If you do/use anything in this
# script that is not part of a stable API (relying on files to be in
# specific places, specific tools, specific output, etc) there is a
# possibility you will end up with a broken system after patching or
# upgrading. Changes are not supported unless under direction of
# VMware support.
# Start the VM Server that provides the iSCSI LUN
# vim-cmd vmsvc/power.on expects the VM id as an argument rather than
# on stdin, so use command substitution instead of a pipe.
vim-cmd vmsvc/power.on "$(vim-cmd vmsvc/getallvms | grep Petasan0.vmx | cut -d ' ' -f 1)"
sleep 90
vim-cmd vmsvc/power.on "$(vim-cmd vmsvc/getallvms | grep Petasan1.vmx | cut -d ' ' -f 1)"
sleep 90
vim-cmd vmsvc/power.on "$(vim-cmd vmsvc/getallvms | grep Petasan2.vmx | cut -d ' ' -f 1)"
sleep 90
vim-cmd vmsvc/power.on "$(vim-cmd vmsvc/getallvms | grep Petasan3.vmx | cut -d ' ' -f 1)"
sleep 90
vim-cmd vmsvc/power.on "$(vim-cmd vmsvc/getallvms | grep VCSA-1.vmx | cut -d ' ' -f 1)"
sleep 90
vim-cmd vmsvc/power.on "$(vim-cmd vmsvc/getallvms | grep DOM-1.vmx | cut -d ' ' -f 1)"
# Other VMs
esxcfg-swiscsi -e
esxcfg-swiscsi -s
sleep 10
esxcli storage core adapter rescan --all
exit 0
----------------------------
This script works well with an openATTIC cluster, but not with PetaSAN, because the iSCSI disks are not starting automatically. Is it possible to have a script inside PetaSAN which will do all the checks on the Ceph cluster and then start the iSCSI disks?
admin
2,930 Posts
March 28, 2018, 3:08 pm
Yes, it could be done, but I would first consider why you need to manually restart the cluster on a regular basis in the first place, since, as I mentioned, Ceph will take time to stabilize if the entire cluster is restarted. Also, what is the impact of having to manually restart the disks in such cases?
In a traditional SAN it is easy for a node to start an iSCSI service when it boots. A PetaSAN node that is starting cannot assume that all the other cluster nodes are also starting with it at the same time. It is possible to do, however it will be more involved than the traditional case, and I am not sure it will be that beneficial. Maybe a manual "restart all" will be acceptable.
Last edited on March 28, 2018, 3:10 pm by admin · #4
valio
17 Posts
March 28, 2018, 4:31 pm
Never mind. I am using Ceph clusters at DR sites for various clients. Most of these DR sites have no UPS backup power, or some of them have backup for 10-15 minutes. After a power outage, the DR sites will start automatically and continue replication; that is the main reason for starting the iSCSI disks automatically. Presently I am using openATTIC and a DeepSea deployment, and it works well. The only difference is that PetaSAN is much easier to deploy and maintain.
admin
2,930 Posts
March 30, 2018, 11:34 am
Would a script to start an iSCSI disk help you? You can use sleep, as you did in the script you posted, to start after some time.
valio
17 Posts
April 5, 2018, 5:08 pm
Yes, it will definitely help. Unfortunately I am familiar with neither Consul nor the LIO target. In fact, on all my Ceph deployments I am using the SCST project for iSCSI targets, together with rbdmap. If you could give me a clue as to how to start the iSCSI disks from the console, I can then figure out how to write a remote script run from the ESXi hosts. I would really appreciate this, because PetaSAN is a great idea.
admin
2,930 Posts
April 5, 2018, 5:40 pm
I will post the script here by Monday, if not tomorrow.
valio
17 Posts
April 5, 2018, 5:46 pm
Thanks, really appreciated!
admin
2,930 Posts
April 6, 2018, 11:21 am
Here is the script; it will start all non-started disks:
https://drive.google.com/file/d/1IKFZNLDVC88nlnyOg-l6RO8QNQZGUwlr/view?usp=sharing
It is best to place it in
/opt/petasan/scripts/util/start_all_disks.py
since we will add it there in the next versions.
Last edited on April 6, 2018, 11:24 am by admin · #10
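To tie this into the ESXi boot script posted earlier, one option is to invoke this script on a PetaSAN node over SSH after the VMs have been powered on. A sketch, with the assumptions called out: `10.0.0.10` is a placeholder for a PetaSAN node's management address, key-based (password-less) SSH from the ESXi host is assumed to be set up, and the 300-second default wait is arbitrary.

```shell
# Start all iSCSI disks on a PetaSAN node from the ESXi host.
# $1 = node management address (placeholder), $2 = seconds to wait
# for the cluster to come up first (default 300).
remote_start() {
    node="$1"
    sleep "${2:-300}"
    ssh root@"$node" 'python /opt/petasan/scripts/util/start_all_disks.py'
}

# Example, appended to /etc/rc.local.d/local.sh after the power.on lines:
# remote_start 10.0.0.10 300
```

For extra safety this could be combined with a Ceph health check on the node side, so the disks only start once the cluster has stabilized.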