Pg files not deep-scrubbed
khopkins
96 Posts
June 17, 2020, 2:27 pm
Hello,
Have the following errors that came up recently. How can I correct this? Does this indicate that anything is going to fail?
ceph health detail
HEALTH_WARN 4 pgs not deep-scrubbed in time
PG_NOT_DEEP_SCRUBBED 4 pgs not deep-scrubbed in time
pg 1.2ee not deep-scrubbed since 2020-06-05 01:42:30.292398
pg 1.2c5 not deep-scrubbed since 2020-06-05 02:25:29.814373
pg 1.c8 not deep-scrubbed since 2020-06-05 00:21:50.917998
pg 1.186 not deep-scrubbed since 2020-06-05 00:52:27.085918
Thanks,
khopkins
96 Posts
June 17, 2020, 3:10 pm
Hello,
Wanted to follow up so there is a trail.
After running "ceph health detail" and getting the list of PGs, you can scrub them manually.
I was able to run a manual deep scrub with the command "ceph pg deep-scrub 1.2ee" (for each PG listed) and it seems to be taking care of it. The question is: what is really causing the delays, as this system has been running for a year without any issues?
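To save typing when several PGs are behind, the two steps above (list the late PGs, then deep-scrub each) can be combined in a small shell loop. This is only a sketch based on the "ceph health detail" output quoted earlier; it assumes the warning lines keep the form "pg <pgid> not deep-scrubbed since <date>".

```shell
# Extract PG ids from the health warnings and queue a manual
# deep scrub for each one.
ceph health detail \
  | awk '/not deep-scrubbed since/ {print $2}' \
  | while read -r pgid; do
      echo "deep-scrubbing ${pgid}"
      ceph pg deep-scrub "${pgid}"
    done
```

Progress can then be watched with "ceph -s" until the warnings clear.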
admin
2,930 Posts
June 17, 2020, 10:18 pm
It could be load related. Generally you want to lower the scrub load so that it does not affect your I/O, yet not so much that scrubbing does not complete in time.
you can set the begin/end hours
osd_scrub_begin_hour 0
osd_scrub_end_hour 24
try the following values
osd_scrub_sleep = 0.2
osd_scrub_load_threshold = 0.5
Try these for one week. If you still have issues, increase osd_scrub_load_threshold and decrease osd_scrub_sleep in small steps, observing for a week after each change.
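The values above can be applied from a monitor node. This is a hedged sketch: "ceph config set" exists on Mimic and later releases only; on older clusters the "injectargs" form changes the running OSDs but is not persistent across restarts.

```shell
# Apply the suggested scrub tuning cluster-wide (Mimic and later).
ceph config set osd osd_scrub_begin_hour 0
ceph config set osd osd_scrub_end_hour 24
ceph config set osd osd_scrub_sleep 0.2
ceph config set osd osd_scrub_load_threshold 0.5

# Pre-Mimic alternative: push the values into the running OSDs
# (lost on restart, so also persist them in ceph.conf).
# ceph tell osd.* injectargs '--osd_scrub_sleep 0.2 --osd_scrub_load_threshold 0.5'
```

These are cluster configuration commands, so they only make sense against a live cluster.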
khopkins
96 Posts
June 18, 2020, 12:47 pm
Thank you,
Where would this be done, and does the device need to be restarted?
I found it in the /etc/ceph directory. Does this need to be done on all nodes followed by a restart, or will it be picked up automatically?
Last edited on June 18, 2020, 2:12 pm by khopkins · #4
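For reference, if the file is edited directly, the settings suggested above would go under the [osd] section of /etc/ceph/ceph.conf on every node. A hypothetical fragment (note that on PetaSAN the file is managed by the platform, so prefer the central configuration when available):

```ini
# Hypothetical ceph.conf fragment; values from the suggestion above.
[osd]
osd_scrub_begin_hour = 0
osd_scrub_end_hour = 24
osd_scrub_sleep = 0.2
osd_scrub_load_threshold = 0.5
```

OSDs read ceph.conf at startup, so file-only changes take effect after the OSDs are restarted.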
admin
2,930 Posts
June 18, 2020, 8:01 pm
Since release 2.4, under the Ceph Configuration menu, we have a super cool UI to edit the Ceph configuration centrally.
Last edited on June 18, 2020, 8:02 pm by admin · #5
khopkins
96 Posts
June 19, 2020, 2:01 pm
Thanks. We are running 2.3.1 right now, so do I change each node and restart the nodes?
admin
2,930 Posts
June 19, 2020, 2:36 pm
You can restart all OSDs:
systemctl restart ceph-osd.target
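Note that "systemctl restart ceph-osd.target" restarts every OSD on the node where it is run, so on a multi-node cluster it has to be repeated per node. A common precaution, sketched below with hypothetical hostnames, is to set noout first so the cluster does not start rebalancing while OSDs bounce:

```shell
ceph osd set noout                   # don't mark restarting OSDs out
for node in node1 node2 node3; do    # hypothetical node names
    ssh "$node" systemctl restart ceph-osd.target
    sleep 60   # let the OSDs rejoin; check 'ceph -s' before moving on
done
ceph osd unset noout
```

This is an operational sketch against a live cluster, not something to run blindly; verify health between nodes.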
Ste
125 Posts
September 16, 2020, 11:55 am
Quote from admin on June 19, 2020, 2:36 pm
you can restart all osds
systemctl restart ceph-osd.target
Hi, I had a similar stuck situation, with many PGs not scrubbed and several inconsistent, undersized, etc.; even the "ceph pg repair" command didn't solve the issue. But after restarting ceph-osd.target everything unlocked and the cluster went back to OK status.
Thanks.
khopkins
96 Posts
September 30, 2020, 11:17 am
Upgraded the system to 2.6.2 and was able to change the settings; we'll see what happens next.
Thanks,
khopkins
96 Posts
October 5, 2020, 1:22 pm
Just updated to 2.6.2 and used the GUI to set the parameters. One question: I noticed today that /etc/ceph/ceph.conf is basically gone and the GUI seems to have taken over. Is this correct?