217 pgs not deep-scrubbed in time
R3LZX
50 Posts
August 25, 2019, 11:55 am
Getting this error this morning. Is there a way to lessen the interval on this?
One 1TB SSD journal for every three 10TB 7200rpm SAS drives.
After researching, it looks like it's based on scrub intervals and drive speed:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2019-April/034097.html
Do you recommend changing the interval, or just letting it catch up?
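For context, the warning is driven by the deep scrub interval (osd_deep_scrub_interval, which defaults to once a week). A quick way to inspect the current value on an OSD (a sketch; X is the OSD number):
ceph daemon osd.X config show | grep deep_scrub_interval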
Last edited on August 25, 2019, 11:58 am by R3LZX
admin
2,930 Posts
August 25, 2019, 1:01 pm
I'd recommend first increasing the load threshold to allow scrubs to run, from 0.3 to 0.6:
ceph tell osd.* injectargs '--osd_scrub_load_threshold=0.6'
Check the value was set (X is the OSD number):
ceph daemon osd.X config show | grep osd_scrub_load
To make this persistent, change the value in the ceph.conf file from 0.3 to 0.6:
osd_scrub_load_threshold = 0.6
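As a sketch of what the persistent entry might look like (assuming it goes under the [osd] section of /etc/ceph/ceph.conf on each node):
[osd]
osd_scrub_load_threshold = 0.6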
R3LZX
50 Posts
August 25, 2019, 4:15 pm
I've changed the conf file on all three nodes; I am guessing this requires a restart?
admin
2,930 Posts
August 25, 2019, 5:43 pm
The conf file alone does require a restart, but the injectargs command also changes the value dynamically, so it should not need a restart.
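One way to confirm the running value without a restart (a sketch; X is the OSD number):
ceph daemon osd.X config get osd_scrub_load_threshold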
R3LZX
50 Posts
August 31, 2019, 9:04 pm
So far I've done
ceph tell osd.* injectargs '--osd_scrub_load_threshold=0.6'
and changed osd_scrub_load_threshold from 0.3 to 0.6 in ceph.conf,
with a restart of all three nodes, but it still shows 2 PGs not deep-scrubbed:
HEALTH_WARN 2 pgs not deep-scrubbed in time
root@PSAN10453:~#
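To list exactly which PGs are behind, the health detail output should show them:
ceph health detail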
admin
2,930 Posts
August 31, 2019, 10:22 pm
Almost there. Next, try lowering
osd_scrub_sleep from 1 -> 0.5
As before, use injectargs plus an edit to the config file; no need to restart.
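Following the same pattern as the earlier commands, that would be:
ceph tell osd.* injectargs '--osd_scrub_sleep=0.5'
and in ceph.conf:
osd_scrub_sleep = 0.5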
R3LZX
50 Posts
September 2, 2019, 8:35 am
Seems it has reverted again. Not sure why these dates are showing; this is as of a few minutes ago, and the time is accurate across all nodes.
PG_NOT_DEEP_SCRUBBED 21 pgs not deep-scrubbed in time
pg 1.3e0 not deep-scrubbed since 2019-08-20 05:41:16.841975
pg 1.3da not deep-scrubbed since 2019-08-20 06:09:20.983290
pg 1.3cb not deep-scrubbed since 2019-08-20 05:10:28.395554
pg 1.3a8 not deep-scrubbed since 2019-08-20 01:04:09.228066
pg 1.35d not deep-scrubbed since 2019-08-20 21:07:01.999925
pg 1.33c not deep-scrubbed since 2019-08-20 20:53:38.722869
pg 1.2cb not deep-scrubbed since 2019-08-20 02:17:26.866724
pg 1.134 not deep-scrubbed since 2019-08-19 23:49:21.708663
pg 1.132 not deep-scrubbed since 2019-08-20 02:15:28.899080
pg 1.ed not deep-scrubbed since 2019-08-20 02:43:40.700947
pg 1.45 not deep-scrubbed since 2019-08-20 04:03:16.835321
pg 1.29 not deep-scrubbed since 2019-08-20 05:38:19.699399
pg 1.6a not deep-scrubbed since 2019-08-19 21:55:10.968874
pg 1.98 not deep-scrubbed since 2019-08-20 06:06:51.068429
pg 1.184 not deep-scrubbed since 2019-08-20 20:39:50.666097
pg 1.19e not deep-scrubbed since 2019-08-20 05:05:29.554926
pg 1.1b3 not deep-scrubbed since 2019-08-20 01:06:03.365636
pg 1.203 not deep-scrubbed since 2019-08-20 21:06:53.131949
pg 1.234 not deep-scrubbed since 2019-08-20 04:49:26.989743
pg 1.29b not deep-scrubbed since 2019-08-20 20:26:31.442237
pg 1.2bb not deep-scrubbed since 2019-08-20 21:32:29.664467
root@PSAN10453:~#
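As an aside (not suggested in the thread itself), one way to clear a backlog like this sooner is to trigger deep scrubs manually on the listed PGs, e.g.:
ceph pg deep-scrub 1.3e0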
admin
2,930 Posts
September 2, 2019, 4:49 pm
Wait some more days with the new settings. If it still persists, lower osd_scrub_sleep to 0.1.
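To check in the meantime whether deep scrubs are actually running, one option is to watch for PGs in a scrubbing state, for example:
ceph pg dump pgs_brief | grep scrubbing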