Custom deep scrubbing intervals ignored in 2.4.0
wailer
75 Posts
January 5, 2020, 7:00 pm
Hi there,
I am facing a strange issue with custom deep-scrub intervals. Our current config is:
[mon]
setuser_match_path = /var/lib/ceph/$type/$cluster-$id
mon_clock_drift_allowed = .300
mon compact on start = true
osd_scrub_max_interval = 2419200
osd_scrub_min_interval = 604800
osd_deep_scrub_interval = 2419200
[osd]
...
osd_max_scrubs = 2
osd_scrub_during_recovery = false
osd_scrub_priority = 5
osd_scrub_sleep = 1
osd_scrub_chunk_min = 1
osd_scrub_chunk_max = 1
osd_scrub_load_threshold = 0.3
osd_scrub_begin_hour = 14
osd_scrub_end_hour = 22
osd_scrub_min_interval=604800
osd_scrub_max_interval=2419200
osd_deep_scrub_interval=2419200
osd_scrub_interval_randomize_ratio=0
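For reference, the interval values above are plain seconds; a quick sanity check (not from the original post) confirms they map to the intended one-week and four-week windows:

```python
DAY = 24 * 60 * 60  # 86,400 seconds in a day

osd_scrub_min_interval = 604_800     # intended: one week
osd_scrub_max_interval = 2_419_200   # intended: four weeks
osd_deep_scrub_interval = 2_419_200  # intended: four weeks

print(osd_scrub_min_interval / DAY)   # 7.0 days
print(osd_scrub_max_interval / DAY)   # 28.0 days
print(osd_deep_scrub_interval / DAY)  # 28.0 days
```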
We basically want to raise the scrub minimum from daily to weekly and the deep-scrub interval from weekly to monthly. We have tried injectargs on the OSDs, restarting nodes, monitors, etc., but Ceph still warns about PGs not being deep-scrubbed for longer than the default interval.
root@CEPH-12:~# ceph health detail
HEALTH_WARN 16 pgs not deep-scrubbed in time
PG_NOT_DEEP_SCRUBBED 16 pgs not deep-scrubbed in time
pg 1.3fe not deep-scrubbed since 2019-12-24 12:23:53.511747
pg 1.3e4 not deep-scrubbed since 2019-12-24 11:01:57.645717
pg 1.395 not deep-scrubbed since 2019-12-24 08:47:47.601328
pg 1.2df not deep-scrubbed since 2019-12-24 09:40:57.742947
pg 1.e8 not deep-scrubbed since 2019-12-24 10:13:57.540665
pg 1.d4 not deep-scrubbed since 2019-12-24 09:31:03.034303
pg 1.cf not deep-scrubbed since 2019-12-24 13:54:11.067359
pg 1.c0 not deep-scrubbed since 2019-12-24 13:08:56.671768
pg 1.29 not deep-scrubbed since 2019-12-24 13:10:38.458345
pg 1.5b not deep-scrubbed since 2019-12-24 13:07:56.771137
pg 1.6f not deep-scrubbed since 2019-12-24 08:56:37.326816
pg 1.16f not deep-scrubbed since 2019-12-24 13:55:28.972891
pg 1.1a0 not deep-scrubbed since 2019-12-24 08:13:34.690425
pg 1.1df not deep-scrubbed since 2019-12-24 10:24:26.686858
pg 1.20c not deep-scrubbed since 2019-12-24 10:16:24.855584
pg 1.25b not deep-scrubbed since 2019-12-24 13:08:45.394878
Any hint on this? I guess something changed in Nautilus.
Thanks,
wailer
75 Posts
January 12, 2020, 1:32 am
Just in case someone finds the same problem: I solved it by putting the deep-scrub setting under the [global] section in ceph.conf:
osd_deep_scrub_interval=2419200
After that, restart the mgr daemon on each node.
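For anyone applying the same fix, a sketch of the equivalent steps (assuming a Nautilus cluster with systemd-managed daemons; adjust unit names to your deployment):

```shell
# ceph.conf — the interval goes under [global] so the mgr (which raises
# the PG_NOT_DEEP_SCRUBBED warning) picks it up, not just the OSDs:
#   [global]
#   osd_deep_scrub_interval = 2419200

# On Nautilus the same can be done via the centralized config database:
ceph config set global osd_deep_scrub_interval 2419200

# Restart the mgr on each node so the new threshold takes effect:
systemctl restart ceph-mgr.target

# Verify the value the mgr actually sees:
ceph config get mgr osd_deep_scrub_interval
```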
Regards,