
Unable to add OSD with external journal

Under the manage nodes section, I added an SSD as a journal device first. When I then added the OSD with the external journal option enabled, I got the following error:

Internal Server Error

The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

I can only add the OSD with the journal disabled. However, the journal doesn't seem to be working at all, because data access is slow. Whether I add the journal device to the node or remove it, IOMeter reports the same speed, which tells me the journal device is not caching. I can confirm this from the Ceph side by checking the OSD: it shows no journal device in the list.
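One way to confirm from Ceph's side whether an OSD has a dedicated DB/WAL device is to inspect the OSD metadata. This is a sketch assuming OSD id 2 as in the listing below; the `bluefs_dedicated_*` field names are as reported by recent Ceph releases and may vary:

```shell
# "bluefs_dedicated_db" / "bluefs_dedicated_wal" are "1" when an
# external device is attached, "0" when the OSD is a single device
ceph osd metadata 2 | grep -E 'bluefs_dedicated_(db|wal)'
```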

root@PetaSan-3:/var/lib/ceph/osd# ls -l /var/lib/ceph/osd/ceph-2/
total 52
-rw-r--r-- 1 ceph ceph 445 Apr 26 18:14 activate.monmap
lrwxrwxrwx 1 ceph ceph 93 Apr 26 18:14 block -> /dev/ceph-b56a3e33-ccf2-4c41-9bee-853d54f0e86f/osd-block-27ae81a9-9ef2-4621-9cb4-eec68a771bbd
-rw------- 1 ceph ceph 2 Apr 26 18:14 bluefs
-rw------- 1 ceph ceph 37 Apr 26 18:14 ceph_fsid
-rw-r--r-- 1 ceph ceph 37 Apr 26 18:14 fsid
-rw------- 1 ceph ceph 55 Apr 26 18:14 keyring
-rw------- 1 ceph ceph 8 Apr 26 18:14 kv_backend
-rw------- 1 ceph ceph 21 Apr 26 18:14 magic
-rw------- 1 ceph ceph 4 Apr 26 18:14 mkfs_done
-rw------- 1 ceph ceph 41 Apr 26 18:14 osd_key
-rw------- 1 ceph ceph 6 Apr 26 18:14 ready
-rw------- 1 ceph ceph 3 Apr 26 18:14 require_osd_release
-rw------- 1 ceph ceph 10 Apr 26 18:14 type
-rw------- 1 ceph ceph 2 Apr 26 18:14 whoami
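With BlueStore, an external "journal" device shows up as extra symlinks (`block.db` and/or `block.wal`) next to the `block` symlink in the OSD directory; their absence in the listing above means no external device is attached. A quick check (assuming OSD id 2 as above):

```shell
# Look for external DB/WAL symlinks on OSD 2
ls -l /var/lib/ceph/osd/ceph-2/ | grep -E 'block\.(db|wal)' \
  || echo "no external db/wal device"

# ceph-volume also reports the devices backing each OSD
ceph-volume lvm list
```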

When you get this error, can you look into PetaSAN.log and ceph-volume.log in /opt/petasan/log for any errors?
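One way to catch the error as it happens is to watch both logs while reproducing it in the web UI (paths as given above; `tail` and `grep` are standard coreutils):

```shell
# Watch both logs live while reproducing the error in the UI
tail -f /opt/petasan/log/PetaSAN.log /opt/petasan/log/ceph-volume.log

# Afterwards, search for recent errors or Python tracebacks
grep -inE 'error|traceback' /opt/petasan/log/PetaSAN.log | tail -n 20
```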

There is no error in ceph-volume.log. As soon as the web page shows the error, the following messages appear in ceph-volume.log:

[2020-04-27 10:46:54,722][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:46:54,723][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-27 10:47:14,720][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:47:14,722][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-27 10:47:34,716][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:47:34,718][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-27 10:47:54,672][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:47:54,674][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-27 10:48:15,055][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:48:15,057][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-27 10:48:34,695][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:48:34,697][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-27 10:48:43,569][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:48:43,571][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-27 10:48:44,451][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:48:44,453][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-27 10:49:04,444][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:49:04,446][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-27 10:49:23,208][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:49:23,210][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-27 10:49:24,625][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:49:24,626][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-27 10:49:50,805][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:49:50,807][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size
[2020-04-27 10:50:05,887][ceph_volume.main][INFO ] Running command: ceph-volume --log-path /opt/petasan/log lvm list --format json
[2020-04-27 10:50:05,889][ceph_volume.process][INFO ] Running command: /sbin/lvs --noheadings --readonly --separator=";" -a -o lv_tags,lv_path,lv_name,vg_name,lv_uuid,lv_size

No message is generated in PetaSAN.log when the GUI gets the error.