
Pg files not deep-scrubbed


Hello,

The following errors came up recently. How can I correct this? Does this indicate that anything is going to fail?

ceph health detail
HEALTH_WARN 4 pgs not deep-scrubbed in time
PG_NOT_DEEP_SCRUBBED 4 pgs not deep-scrubbed in time
pg 1.2ee not deep-scrubbed since 2020-06-05 01:42:30.292398
pg 1.2c5 not deep-scrubbed since 2020-06-05 02:25:29.814373
pg 1.c8 not deep-scrubbed since 2020-06-05 00:21:50.917998
pg 1.186 not deep-scrubbed since 2020-06-05 00:52:27.085918

 

Thanks,

Hello,

Wanted to follow up so there is a trail.

After running "ceph health detail" and getting the list of PGs, do a manual deep scrub.

Was able to do a manual scrub with the command "ceph pg deep-scrub 1.2ee" (for each PG listed) and it seems to be taking care of it. The question is, what is really causing the delays, as this system has been running for a year without any issues?
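Side note in case it helps anyone searching later: rather than issuing the command PG by PG, a one-liner along these lines appears to queue a deep scrub for every flagged PG (a rough sketch, assuming the warning lines keep the "pg <id> not deep-scrubbed since ..." wording shown above):

ceph health detail | grep 'not deep-scrubbed since' | awk '{print $2}' | while read pg; do ceph pg deep-scrub $pg; done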

It could be load related. Generally you try to lower the scrub load so that it does not affect your i/o, yet not so low that scrubbing does not complete in time.

You can set the begin/end hours:

osd_scrub_begin_hour 0
osd_scrub_end_hour 24
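For example, to confine scrubbing to a quieter window such as 22:00 to 06:00 you could set something like the following (just an illustration, pick hours that suit your own load, and keep in mind a narrower window leaves less time for scrubs to finish):

osd_scrub_begin_hour 22
osd_scrub_end_hour 6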

Try the following values:
osd_scrub_sleep = 0.2
osd_scrub_load_threshold = 0.5

Run with these for 1 week; if you still have issues, increase osd_scrub_load_threshold and decrease osd_scrub_sleep in small steps, observing for a week after each change.
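To check what values the running OSDs actually have, you can query one of them through its admin socket on the node that hosts it (osd.0 below is just an example id):

ceph daemon osd.0 config show | grep osd_scrub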

Thank you,

Where would this be done? And does the device need to be restarted?

Found it in the /etc/ceph directory. So does this need to be done on all nodes and the nodes restarted, or will it be picked up by itself?
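For reference, what I was planning to put in the [osd] section of /etc/ceph/ceph.conf on each node looks like this (assuming the file-based approach is the right one here):

[osd]
osd_scrub_sleep = 0.2
osd_scrub_load_threshold = 0.5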

Since release 2.4, under the Configurations Ceph menu, we have a super cool UI to edit the Ceph configuration centrally.

Thanks. Right now I am running 2.3.1, so do I change each node and restart the nodes?

You can restart all OSDs:

systemctl restart ceph-osd.target
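If you prefer to go node by node, you can also tell Ceph not to start rebalancing while the OSDs are briefly down, then clear the flag once they are back up (a general precaution, not specific to this issue):

ceph osd set noout
systemctl restart ceph-osd.target
ceph osd unset noout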

Quote from admin on June 19, 2020, 2:36 pm

You can restart all OSDs:

systemctl restart ceph-osd.target

Hi, I had a similar stale situation, with many PGs not scrubbed, several inconsistent, undersized, etc.; even the "ceph pg repair" command didn't solve the issue. But after restarting ceph-osd.target everything unlocked and the cluster went back to OK status.
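For anyone landing here with the inconsistent PG part of the problem, the usual repair sequence looks roughly like this (the pool name and PG id are placeholders; use the ones from your own health output, and in my case it still took the restart to clear things):

rados list-inconsistent-pg <pool>
rados list-inconsistent-obj <pg-id> --format=json-pretty
ceph pg repair <pg-id>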

Thanks.

Upgraded the system to 2.6.2 and was able to change the settings; we'll see what happens next.

 

Thanks,

Just updated to 2.6.2 and used the GUI to set the parameters. Question: I noticed today that /etc/ceph/ceph.conf is basically gone and the GUI seems to have taken over. Is this correct?
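One way to double-check, assuming the underlying Ceph release keeps these options in its central configuration database (Mimic or newer), is to dump what it currently holds and see whether the values set through the GUI show up there:

ceph config dump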
