
PetaSAN tuning

Hello, I found that the installation optimization script contains a udev rule to turn off the write-back cache on HDD and SSD disks, but it is commented out. Is disabling the write-back cache on HDDs and SSDs not effective?

## set drive write-caching flag (0) for hdd and ssd devices
#ACTION=="add|change", KERNEL=="sd[a-z]",ENV{DEVTYPE}=="disk", RUN+="/sbin/hdparm -W 0 /dev/%k"

I would leave it as is. You can, however, experiment with this setting at any time while in production; in some cases, such as with some Micron SSDs, it is better to enable the rule.
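If you want to experiment, a minimal sketch of toggling the setting by hand on a single drive (assuming /dev/sdb is the drive under test) would be:

# show the current write-caching state
hdparm -W /dev/sdb

# disable write-back caching, same as the commented-out udev rule would do
hdparm -W 0 /dev/sdb

# re-enable it if throughput or latency gets worse for your workload
hdparm -W 1 /dev/sdb

Note that hdparm changes do not persist across reboots unless the udev rule (or a similar mechanism) re-applies them.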

Hello, I installed a three-node PetaSAN cluster today. I tested failing one management node and was able to replace it successfully. However, when the system disks of two of the three management nodes are damaged, a newly installed node cannot be added through the Replace Management Node option. Is there a way to deal with this situation, perhaps manually from the command line? Thank you.

If 2 of 3 management nodes are down, you cannot rely on the UI to fix it; you need to repair things manually, as both the Ceph and Consul clusters will be down without quorum.
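For the Ceph monitor side, a rough sketch of the standard upstream procedure for recovering quorum with only one surviving monitor (host names san01/san02/san03 are placeholders; the Consul cluster and PetaSAN services would still need separate repair) looks like this:

# on the surviving management node, stop its monitor
systemctl stop ceph-mon@san01

# extract the current monitor map
ceph-mon -i san01 --extract-monmap /tmp/monmap

# remove the two dead monitors from the map
monmaptool /tmp/monmap --rm san02 --rm san03

# inject the edited map and restart the surviving monitor
ceph-mon -i san01 --inject-monmap /tmp/monmap
systemctl start ceph-mon@san01

Once quorum is restored with the single monitor, new monitors can be added back as the replacement nodes are rebuilt.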

You are an expert in PetaSAN; when you have time, could you write down the command procedures for manually repairing Consul and Ceph? This would be very useful for the case where two nodes are damaged. I hope to get your help, and sorry for the trouble.
Thank you

I am testing the OSD hot-plug behaviour of the PetaSAN platform. I unplug a hard disk while the host is running; after the OSD reports down and out, I plug the disk back into the drive bay and then try to start osd.2 manually, but it cannot start and reports an error. Is there a way to hot-unplug a hard disk, plug it back in, and then start the OSD again? This OSD is on sdb1 with Journal: sdc1.

OSD 2: Down (Journal: sdc1)

You should activate them with

ceph-volume lvm activate --all

A long time ago, when ceph-disk was used, it relied on udev rules, so hot-plug was automated.
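As a quick sanity check (generic LVM/ceph-volume commands, not a PetaSAN-specific procedure), you can confirm the replugged device and its logical volumes are visible again before activating:

# list OSDs with their data and journal devices as ceph-volume sees them
ceph-volume lvm list

# verify the physical disk and its VG/LV reappeared after replugging
lsblk
pvs; vgs; lvs

# then activate all detected OSDs
ceph-volume lvm activate --all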

This command cannot activate an OSD that has a linked journal device (Journal: sdc1); you can try it yourself if you have time.

root@san06:/etc/systemd/system# ceph-volume lvm activate --all
--> Activating OSD ID 2 FSID fb37ff64-820b-459e-9a94-d01017f83f3b
Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-f3453478-577c-491b-9902-49e54f78779d/osd-block-fb37ff64-820b-459e-9a94-d01017f83f3b --path /var/lib/ceph/osd/ceph-2 --no-mon-config
stderr: failed to read label for /dev/ceph-f3453478-577c-491b-9902-49e54f78779d/osd-block-fb37ff64-820b-459e-9a94-d01017f83f3b: (5) Input/output error
stderr: 2021-08-15T09:40:47.974+0800 7fcea3d720c0 -1 bluestore(/dev/ceph-f3453478-577c-491b-9902-49e54f78779d/osd-block-fb37ff64-820b-459e-9a94-d01017f83f3b) _read_bdev_label failed to read from /dev/ceph-f3453478-577c-491b-9902-49e54f78779d/osd-block-fb37ff64-820b-459e-9a94-d01017f83f3b: (5) Input/output error
--> RuntimeError: command returned non-zero exit status: 1
root@san06:/etc/systemd/system#

 

The command does support OSDs with journals. The I/O error reading the device label suggests the drive itself is not readable; can you double check using a different drive?
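One more thing worth trying, as a hedged sketch: after a hot re-plug, the kernel and LVM can hold stale state for the old device node, which can also show up as I/O errors. Rescanning before activating sometimes clears it:

# rescan all SCSI host adapters so the replugged disk gets a device node again
for h in /sys/class/scsi_host/host*; do echo "- - -" > "$h/scan"; done

# refresh LVM metadata and re-activate the OSD's volume group
pvscan --cache
vgchange -ay

# retry the activation
ceph-volume lvm activate --all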