PetaSAN node shuts down automatically
faruk.fa
1 Post
May 12, 2019, 10:49 am
I have made a 3-node PetaSAN cluster. Some of the nodes turn off frequently.
I need to start the iSCSI target manually after a node failure.
Configuration:
Hyper-V host: Dell R730 server (2x Xeon processors, RAID 10 SAS, 16 GB RAM)
Each node: 2 virtual processors
RAM: 3 GB
HDD: virtual disks, 100 GB and 200 GB
Please help...
admin
2,930 Posts
May 12, 2019, 1:27 pm
We do not support running PetaSAN as VMs.
Check your stats charts in the dashboard for CPU %, disk %, and memory %. You probably need more RAM; it may also help to decrease the RAM per OSD in the configuration file /etc/ceph/xx.conf:
If using 2.3:
osd_memory_target = 1073741824
If using an earlier version:
bluestore_cache_size_ssd = 1073741824
bluestore_cache_size_hdd = 1073741824
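For illustration, a minimal sketch of how this could look in the [osd] section of the cluster config on a 2.3 node (the cluster name in xx.conf and the OSD id below are placeholders, adjust to your cluster):

# /etc/ceph/xx.conf  (xx = your cluster name)
[osd]
# cap BlueStore memory at roughly 1 GiB per OSD (the Ceph default is 4 GiB)
osd_memory_target = 1073741824

# then restart each OSD service so the new limit takes effect, for example:
# systemctl restart ceph-osd@0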
Also, if this is a fresh install, it is probably better to use only 1 OSD per node rather than 2.
I take it this happens only sometimes and that you generally do have an "OK" status on your dashboard; if not, I would re-check the network setup, the virtual switches, etc.
Also, if you do have real load on the VMs, you may look into creating your virtual disks as pass-through to real volumes, and check whether any settings limit disk I/O per VM (queue depths or other limits imposed on the VM for disk and network). You may also try the native Hyper-V drivers for network and disk, which may give better performance. But again, this is something we do not support.
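As a rough illustration of the pass-through idea (a sketch only; the VM name and disk number below are placeholders, and the physical disk must first be taken offline on the host):

# On the Hyper-V host: take the physical disk offline (disk number is an example)
Set-Disk -Number 2 -IsOffline $true

# Attach it to the VM as a pass-through disk on the SCSI controller
Add-VMHardDiskDrive -VMName "petasan-node1" -ControllerType SCSI -DiskNumber 2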