
PetaSAN 2.2 Released!


Run this command:
ceph tell mon.* injectargs '--mon_pg_warn_max_object_skew 0' --cluster CLUSTER_NAME

If it does not remove the warning, place this in the [global] section of the /etc/ceph/CLUSTER_NAME.conf file:
mon_pg_warn_min_objects = 0

then reboot one node at a time.
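
For clarity, the edited file would look roughly like this (a sketch; CLUSTER_NAME is whatever name was chosen when the cluster was deployed, and the rest of the [global] section stays as it is):

# /etc/ceph/CLUSTER_NAME.conf
[global]
# ... existing settings unchanged ...
mon_pg_warn_min_objects = 0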

The warning will most likely happen if you have pools with many PGs that are not being used, yet also have a pool with a lot of data but few PGs, so your PG resources are not optimally distributed.
In most cases you can ignore this warning.
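
If you want to check which pool is skewed, a quick sketch (POOL_NAME is a placeholder for whichever pool you want to inspect):

ceph df --cluster CLUSTER_NAME                              # shows object count per pool
ceph osd pool get POOL_NAME pg_num --cluster CLUSTER_NAME   # shows PG count for one pool

A pool with many objects but a small pg_num relative to the other pools is likely the one triggering the warning.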

Hi, admin

Thanks for the new release.

Is it a trial version? Or do you have a commercial version? What is the difference between them?

This is a full version, not a trial. The difference is whether you want to get commercial support from us.

Hello Admin,

 

We have a test cluster that is only 4 nodes for now. We are thinking of making it EC with K=2 and M=1; however, we might add two new nodes to make it 6. At that point, can we change to K=2, M=2 while keeping the LUNs intact?

 

Please let me know.

 

Thanks

No, it is not possible to change these on an existing EC pool.

You would have to create a new pool with the desired k/m and copy the disks over.

Internally, the k/m determine the size of the data and erasure chunks. There is no quick conversion, since this is at the core of how the data is formatted, so it will have to be copied/reformatted anyway.
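
If you do go that route, a minimal sketch of what creating the replacement pool looks like at the Ceph level (the profile and pool names here are just examples, and in PetaSAN you would normally create the pool from the management UI instead):

ceph osd erasure-code-profile set ec-k2-m2-example k=2 m=2 crush-failure-domain=host --cluster CLUSTER_NAME
ceph osd pool create ec-pool-new-example 128 128 erasure ec-k2-m2-example --cluster CLUSTER_NAME

After that, the disks would be copied from the old pool to the new one.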

Hello,

I have installed PetaSAN with 3 nodes. My concern is that after a restart I have lost my iSCSI disks, and I see that the status of my pool is inactive.

Do you have a solution for me, please?

What version are you using?

How did the issue happen, just by restarting the cluster? Did you restart all nodes? Were all nodes running before you restarted?

What is the output of:

ceph status

ceph osd tree

  • We have version 2.3.0
  • The issue happened after stopping and restarting the cluster (the three nodes), and all nodes were running before we restarted

ceph status

root@NODE-01:~# ceph status
2020-03-29 00:44:21.597510 7f74d2f9f700 -1 Errors while parsing config file!
2020-03-29 00:44:21.597514 7f74d2f9f700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2020-03-29 00:44:21.597515 7f74d2f9f700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2020-03-29 00:44:21.597516 7f74d2f9f700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)

ceph osd tree

root@NODE-01:~# ceph osd tree
2020-03-29 00:44:24.320660 7fbee0fc5700 -1 Errors while parsing config file!
2020-03-29 00:44:24.320665 7fbee0fc5700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2020-03-29 00:44:24.320666 7fbee0fc5700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2020-03-29 00:44:24.320666 7fbee0fc5700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)

I just looked in /etc/ceph/ and the files there have different names than ceph.client.admin.keyring and ceph.conf.

Is there a problem with their names? Should I modify them to make them the same, or should I create new ones?

For versions before 2.3.1, you need to add the following parameter to the commands:

--cluster CLUSTER_NAME
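
For example, the two commands above would then be run as shown below (CLUSTER_NAME being the name chosen at deployment, which is also why the files in /etc/ceph/ are named CLUSTER_NAME.conf and CLUSTER_NAME.client.admin.keyring rather than ceph.conf):

ceph status --cluster CLUSTER_NAME
ceph osd tree --cluster CLUSTER_NAME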

 
