
Error parsing config using cli


Hi All,

Absolutely loving PetaSAN so far. Been looking for a solution like this for years. Well done Devs.

I am running into a little issue when trying to use the CLI ceph utilities though, getting this output when trying to launch ceph:

root@bd-ceph-sd1:~# ceph
2018-03-29 08:36:49.146109 7f297fc62700 -1 Errors while parsing config file!
2018-03-29 08:36:49.146115 7f297fc62700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2018-03-29 08:36:49.146117 7f297fc62700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2018-03-29 08:36:49.146119 7f297fc62700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)

Where does PetaSAN 2.0 store the config file by default? Or do I have something else wrong? Are we not allowed to use the CLI util?

The WebUI all seems to be working as expected, with no errors.

Also, are we safe to update the distro to 16.04.4? Or should we wait for an official PetaSAN release?

Thanks!

Thanks 🙂 , you need to add

--cluster CLUSTER_NAME

to your ceph commands. We name the Ceph cluster with the name you specified during cluster creation rather than the default name "ceph"; we are hoping to support multiple clusters in the longer term. If you forget the cluster name, it is in /opt/petasan/config/cluster_info.json under the "name" field.
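For example, assuming the cluster was named "demo" during deployment (just a placeholder, substitute your own name):

# the "name" field in this file holds the cluster name
cat /opt/petasan/config/cluster_info.json

# then pass that name to the standard ceph/rbd tools
ceph --cluster demo status
rbd ls --cluster demo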

I would not recommend doing large updates/upgrades yourself; wait for our updates instead, just in case there are system changes. It is better to restrict package installs to extra tools rather than full upgrades.

I'm in! Thank you!

Makes sense about the updates, just wanted to check.

Looking forward to seeing the growth of this software, keep up the good work!

Ok, new issue. I was messing around with the iSCSI disks, and I created and removed one a few times. It got hung up removing one last night, and I went home. When I came back, the iSCSI disk list was empty, so I tried to create a new one.

It says the disk already exists. Where would I check for this, or how would I go about "importing(?)" the disk into the disk list?

All my OSDs are online, and I didn't mess with those at all, so I don't believe it would have wiped out the data.

Thanks!

Hi,

Hmm, we do not import the list; we get the data live from Ceph and display it in the UI.

Can you do a

rbd ls --cluster XXXX

That produces no output. Seems like the disk doesn't exist, but the WebUI thinks it does for some reason.

I don't seem to be able to add any new iSCSI disks, regardless of the name used. Is there a service I should restart?

Can you post the log file /opt/petasan/log/PetaSAN.log?

Here's the full log (link)

 

 

Thanks!

Well, I rebooted all four nodes and am now able to add disks. That's not going to be an acceptable solution once in production, so I'm going to continue testing to see if I can reproduce the problem.

Seems I was somewhat able to reproduce it. I created another 100TB iSCSI target with a password and 8 paths, then stopped it and tried to delete it. The WebUI just keeps loading and never times out. It ran for ~15 minutes, and here's where PetaSAN.log is at:

 

30/03/2018 08:19:34 INFO Cleaned disk path 00001/2.
30/03/2018 08:19:34 INFO Cleaned disk path 00001/1.
30/03/2018 08:19:34 INFO PetaSAN cleaned local paths not locked by this node in consul.
30/03/2018 08:19:34 INFO Stopping disk 00001
30/03/2018 08:19:34 ERROR Could not find ips for image-00001
30/03/2018 08:19:34 INFO LIO deleted backstore image image-00001
30/03/2018 08:19:34 INFO LIO deleted Target iqn.2018-03.net.resolveit.internal.bd-ceph-cluster1:00001
30/03/2018 08:19:34 INFO Image image-00001 unmapped successfully.
30/03/2018 08:19:37 INFO PetaSAN removed key of stopped disk 00001 from consul.

The WebUI was still spinning/loading at this point, so I tried to reload the page to see if the disk would disappear from the list, but it's still listed as stopped with 8 paths active. If I go to the path re-assignment page, there are no paths listed. Trying to delete the disk again gives a generic "Error deleting disk" and appends this to PetaSAN.log:

30/03/2018 08:37:58 ERROR Delete disk 00001 error
30/03/2018 08:37:58 ERROR error removing image
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/api.py", line 341, in delete_disk
rbd_inst.remove(ioctx, "".join([str(self.conf_api.get_image_name_prefix()), str(disk_id)]))
File "rbd.pyx", line 873, in rbd.RBD.remove (/root/ceph-12.2.2/obj-x86_64-linux-gnu/src/pybind/rbd/pyrex/rbd.c:7109)
ImageBusy: [errno 16] error removing image

Am I just being impatient? This doesn't seem to occur with smaller disks.
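For reference, errno 16 is EBUSY: the RBD image still has a watcher, which usually means it is still mapped or still attached to an iSCSI backstore on one of the nodes. A couple of standard rbd commands that can show this, reusing the XXXX cluster-name placeholder from earlier in the thread:

# list any clients/watchers still holding the image open
rbd status image-00001 --cluster XXXX

# run on each node to see which rbd images are still mapped locally
rbd showmapped --cluster XXXX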

 

Let me know if I can provide anything else to help diagnose. When originally configuring, I chose high-end hardware (do dual E5-2690 v2 CPUs with 128GB of RAM fall under this category, or did I get a little overambitious?)

 

Thanks!
