PetaSAN 2.2 Released!
admin
2,930 Posts
November 19, 2018, 4:40 pm
Run this command:
ceph tell mon.* injectargs '--mon_pg_warn_max_object_skew 0' --cluster CLUSTER_NAME
If that does not remove the warning, place this in the [global] section of the /etc/ceph/CLUSTER_NAME.conf file:
mon_pg_warn_min_objects = 0
and reboot one node at a time.
The warning will most likely happen if you have pools with many PGs that are not being used, yet also have a pool with a lot of data but few PGs, so your PG resources are not optimally distributed. For example, a pool holding most of the cluster's objects in only a few PGs will push its objects-per-PG count far above the cluster average and trip the skew check.
In most cases you can ignore this warning.
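For reference, a minimal sketch of what the edited file would look like (assuming the cluster is named CLUSTER_NAME; keep whatever settings are already there and just add the one line):
# /etc/ceph/CLUSTER_NAME.conf
[global]
# ... existing settings ...
mon_pg_warn_min_objects = 0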
laa122@yeah.net
2 Posts
November 20, 2018, 1:34 am
Hi admin,
Thanks for the new release.
Is this a trial version, or do you have a commercial version? What is the difference between them?
admin
2,930 Posts
November 20, 2018, 6:10 am
This is a full version, not a trial. The difference is whether you want to get commercial support from us.
Last edited on November 20, 2018, 6:55 am by admin
msalem
87 Posts
November 28, 2018, 9:04 am
Hello Admin,
We have a test cluster of only 4 nodes for now, and we are thinking of making it EC with K=2 and M=1. However, we might add two new nodes to make it 6; at that point, can we change to K=2, M=2 while keeping the LUNs intact?
Please let me know.
Thanks
admin
2,930 Posts
November 28, 2018, 6:06 pm
No, it is not possible to change these on an existing EC pool.
You would have to create a new pool with the desired k/m and copy the disks over.
Internally, the k/m values determine the size of the data and erasure chunks. There is no quick conversion, since this is at the core of how the data is formatted, so the data would have to be copied / reformatted anyway.
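As a rough sketch of that route (the profile and pool names below are made up for illustration, and the PG counts are only examples; on versions that use a custom cluster name, append --cluster CLUSTER_NAME as noted earlier in the thread):
ceph osd erasure-code-profile set ec-k2-m2 k=2 m=2 crush-failure-domain=host
ceph osd pool create ecpool-k2-m2 128 128 erasure ec-k2-m2
ceph osd pool set ecpool-k2-m2 allow_ec_overwrites true   # so RBD/iSCSI images can write to the EC pool
The disk images can then be copied into the new pool, e.g. with rbd cp, before repointing the iSCSI disks.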
moh
16 Posts
March 27, 2020, 7:40 pm
Hello,
I have installed PetaSAN with 3 nodes. My concern is that after a restart I have lost my iSCSI disks, and I see that the status of my pool is inactive.
Do you have a solution for me, please?
admin
2,930 Posts
March 28, 2020, 10:26 am
What version are you using?
How did the issue happen, just by restarting the cluster? Did you restart all nodes? Were all nodes running before you restarted?
What is the output of:
ceph status
ceph osd tree
moh
16 Posts
March 29, 2020, 12:54 am
- We have version 2.3.0.
- The issue happened after stopping and restarting the cluster (the three nodes), and all nodes were running before we restarted.
ceph status
root@NODE-01:~# ceph status
2020-03-29 00:44:21.597510 7f74d2f9f700 -1 Errors while parsing config file!
2020-03-29 00:44:21.597514 7f74d2f9f700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2020-03-29 00:44:21.597515 7f74d2f9f700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2020-03-29 00:44:21.597516 7f74d2f9f700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)
ceph osd tree
root@NODE-01:~# ceph osd tree
2020-03-29 00:44:24.320660 7fbee0fc5700 -1 Errors while parsing config file!
2020-03-29 00:44:24.320665 7fbee0fc5700 -1 parse_file: cannot open /etc/ceph/ceph.conf: (2) No such file or directory
2020-03-29 00:44:24.320666 7fbee0fc5700 -1 parse_file: cannot open ~/.ceph/ceph.conf: (2) No such file or directory
2020-03-29 00:44:24.320666 7fbee0fc5700 -1 parse_file: cannot open ceph.conf: (2) No such file or directory
Error initializing cluster client: ObjectNotFound('error calling conf_read_file',)
moh
16 Posts
March 29, 2020, 1:45 am
I just went to look in /etc/ceph/, and the files there have different names than ceph.client.admin.keyring and ceph.conf.
Is there a problem with their names? Should I rename them to match, or should I create new ones?
admin
2,930 Posts
March 29, 2020, 10:07 am
For pre-2.3.1, you need to add the following parameter to the commands:
--cluster CLUSTER_NAME
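For example, for the two commands asked for above:
ceph status --cluster CLUSTER_NAME
ceph osd tree --cluster CLUSTER_NAME
where CLUSTER_NAME is the name you gave the cluster at deployment.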