Error parsing config using cli
protocol6v
85 Posts
March 29, 2018, 12:39 pm
Hi All,
Absolutely loving PetaSAN so far. Been looking for a solution like this for years. Well done Devs.
I am running into a little issue when trying to use the CLI ceph utilities though, getting this output when trying to launch ceph:
root@bd-ceph-sd1:~# ceph
2018-03-29 08:36:49.146109 7f297fc62700 -1 Errors while parsing config file!
2018-03-29 08:36:49.146115 7f297fc62700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2018-03-29 08:36:49.146117 7f297fc62700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2018-03-29 08:36:49.146119 7f297fc62700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)
Where does PetaSAN 2.0 store the config file by default? Or do I have something else wrong? Are we not allowed to use the CLI util?
WebUI all seems to be working as expected, and no errors.
Also, are we safe to update the distro to 16.04.4? Or should we wait for an official PetaSAN release?
Thanks!
admin
2,930 Posts
March 29, 2018, 12:53 pm
Thanks 🙂 , you need to add
--cluster CLUSTER_NAME
to your ceph commands. We name the Ceph cluster with the name you specified during cluster creation rather than the default name "ceph"; we are hoping to support multiple clusters in the longer term. If you forgot the cluster name, it is in /opt/petasan/config/cluster_info.json under the "name" field.
I would not recommend doing large updates/upgrades yourself; wait for our updates, just in case there are system changes. It is better to limit installing software to extra tools, and not to use it for upgrades.
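For example, a quick sketch using a hypothetical cluster name "mycluster" (substitute whatever the "name" field in cluster_info.json actually says):
# show the configured cluster name
grep '"name"' /opt/petasan/config/cluster_info.json
# then pass it to the usual Ceph CLI tools
ceph --cluster mycluster status
rbd ls --cluster mycluster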
Last edited on March 29, 2018, 12:56 pm by admin · #2
protocol6v
85 Posts
March 29, 2018, 1:16 pm
I'm in! Thank you!
Makes sense about the updates, just wanted to check.
Looking forward to seeing the growth of this software, keep up the good work!
protocol6v
85 Posts
March 29, 2018, 1:26 pm
OK, new issue. I was messing around with the iSCSI disks, and I created and removed one a few times. It got hung up removing one last night, and I went home. When I came back, the iSCSI disk list was empty, so I tried to create a new one.
It says the disk already exists. Where would I check for this, or how would I go about "importing(?)" the disk into the disk list?
All my OSDs are online, and I didn't mess with those at all, so I don't believe it would have wiped out the data.
Thanks!
admin
2,930 Posts
March 29, 2018, 3:06 pm
Hi,
Hmm, we do not import the list; we get the data live from Ceph and display it in the UI.
Can you do a
rbd ls --cluster XXXX
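If that returns nothing but the UI still shows a disk, a long listing and per-image details can help confirm what Ceph actually has. A rough sketch, using the hypothetical placeholders "mycluster" and IMAGE_NAME:
# long listing: image names with size and format
rbd ls -l --cluster mycluster
# details for one image, if it exists
rbd info IMAGE_NAME --cluster mycluster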
protocol6v
85 Posts
March 29, 2018, 3:42 pm
That produces no output. Seems like the disk doesn't exist, but the WebUI thinks it does for some reason.
I don't seem to be able to add any new iSCSI disks, regardless of the name used. Is there a service I should restart?
Last edited on March 29, 2018, 3:44 pm by protocol6v · #6
admin
2,930 Posts
March 29, 2018, 5:24 pm
Can you post the log file /opt/petasan/log/PetaSAN.log?
protocol6v
85 Posts
March 29, 2018, 6:01 pm
Here's the full log (link)
Thanks!
Last edited on March 29, 2018, 10:51 pm by protocol6v · #8
protocol6v
85 Posts
March 30, 2018, 12:04 pm
Well, I rebooted all four nodes and am now able to add disks. That's not going to be an acceptable solution once in production, so I'm going to continue testing to see if I can reproduce the problem.
protocol6v
85 Posts
March 30, 2018, 12:42 pm
Seems I was somewhat able to reproduce. I created another 100TB iSCSI target with a password and 8 paths, then stopped it and tried to delete it. The WebUI just keeps loading and never times out. It ran for ~15 min; here's where the PetaSAN.log is at:
30/03/2018 08:19:34 INFO Cleaned disk path 00001/2.
30/03/2018 08:19:34 INFO Cleaned disk path 00001/1.
30/03/2018 08:19:34 INFO PetaSAN cleaned local paths not locked by this node in consul.
30/03/2018 08:19:34 INFO Stopping disk 00001
30/03/2018 08:19:34 ERROR Could not find ips for image-00001
30/03/2018 08:19:34 INFO LIO deleted backstore image image-00001
30/03/2018 08:19:34 INFO LIO deleted Target iqn.2018-03.net.resolveit.internal.bd-ceph-cluster1:00001
30/03/2018 08:19:34 INFO Image image-00001 unmapped successfully.
30/03/2018 08:19:37 INFO PetaSAN removed key of stopped disk 00001 from consul.
The WebUI was still spinning/loading at this point, so I tried to reload the page to see if the disk would disappear from the list, but it's still listed as stopped with 8 paths active. If I go to the path re-assignment page, there are no paths listed. Trying to delete the disk again gives a generic "Error deleting disk", and appends this to the petasan log:
30/03/2018 08:37:58 ERROR Delete disk 00001 error
30/03/2018 08:37:58 ERROR error removing image
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/api.py", line 341, in delete_disk
rbd_inst.remove(ioctx, "".join([str(self.conf_api.get_image_name_prefix()), str(disk_id)]))
File "rbd.pyx", line 873, in rbd.RBD.remove (/root/ceph-12.2.2/obj-x86_64-linux-gnu/src/pybind/rbd/pyrex/rbd.c:7109)
ImageBusy: [errno 16] error removing image
Am I just being impatient? This doesn't seem to occur with smaller disks.
Let me know if I can provide anything else to help diagnose. When originally configuring, I chose high-end hardware (do dual E5-2690 v2 CPUs with 128GB RAM fall under this category, or did I get a little overambitious?)
Thanks!
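For what it's worth, ImageBusy (errno 16) from rbd remove generally means the image still has a watcher or lock attached, e.g. a kernel mapping that has not been fully torn down yet. A rough way to check from any node, sketched with the hypothetical cluster name "mycluster":
# list clients still watching the image
rbd status image-00001 --cluster mycluster
# list any exclusive locks held on it
rbd lock list image-00001 --cluster mycluster
# check whether it is still mapped locally
rbd showmapped --cluster mycluster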
Last edited on March 30, 2018, 12:44 pm by protocol6v · #10