SegFault on OSD Ver. 2.1.0
abierwirth
2 Posts
October 18, 2018, 1:42 pm
Hi,
yesterday one of our OSDs crashed and restarted after a segfault.
Any idea why this happened?
Maybe this issue? https://tracker.ceph.com/issues/23431
kern.log
Oct 17 21:08:49 FS-CEPH1 kernel: [1007172.371098] libceph: osd12 192.168.204.74:6806 socket closed (con state OPEN)
Oct 17 21:08:49 FS-CEPH1 kernel: [1007172.371869] libceph: osd12 192.168.204.74:6806 socket closed (con state CONNECTING)
Oct 17 21:08:50 FS-CEPH1 kernel: [1007173.009068] libceph: osd12 down
Oct 17 21:09:14 FS-CEPH1 kernel: [1007197.312142] libceph: osd12 up
osd.12.log
2018-10-17 20:54:01.397306 7f68b0b4e700 0 log_channel(cluster) log [DBG] : 1.1d4 scrub starts
2018-10-17 20:55:10.627110 7f68b0b4e700 0 log_channel(cluster) log [DBG] : 1.1d4 scrub ok
2018-10-17 20:55:50.340216 7f68cf375700 0 -- 192.168.205.74:6806/2597 >> 192.168.205.75:6804/5360 conn(0x56086b6cf000 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_msg: challenging authorizer
2018-10-17 20:55:50.340368 7f68cf375700 0 -- 192.168.205.74:6806/2597 >> 192.168.205.75:6804/5360 conn(0x56086b6cf000 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_msg accept connect_seq 341 vs existin
2018-10-17 20:55:50.340586 7f68cf375700 0 -- 192.168.205.74:6806/2597 >> 192.168.205.75:6804/5360 conn(0x56086b6cf000 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=0 cs=0 l=0).handle_connect_msg accept connect_seq 342 vs existin
2018-10-17 20:55:50.340992 7f68cf375700 0 -- 192.168.205.74:6806/2597 >> 192.168.205.75:6804/5360 conn(0x560867ab1000 :6806 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH pgs=2328 cs=341 l=0).handle_connect_msg: challenging authorizer
2018-10-17 21:08:49.494411 7f68cab82700 -1 *** Caught signal (Segmentation fault) **
in thread 7f68cab82700 thread_name:safe_timer
ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)
1: (()+0xa84e94) [0x56084f8d2e94]
2: (()+0x11390) [0x7f68d24ec390]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--- begin dump of recent events ---
-10000> 2018-10-17 21:07:02.811416 7f68ceb74700 1 -- 192.168.204.74:6806/2597 <== client.14520 192.168.204.75:0/3066932386 2223248 ==== osd_op(client.14520.0:54574378 1.38f 1.96f6a78f (undecoded) ondisk+write+known_if_redirected e97
-9999> 2018-10-17 21:07:02.811572 7f68af34b700 1 -- 192.168.205.74:6806/2597 --> 192.168.205.75:6810/8583 -- osd_repop(client.14520.0:54574378 1.38f e97/95 1:f1e56f69:::rbd_data.114e6b8b4567.000000000004b577:head v 97'89787) v2 --
-9998> 2018-10-17 21:07:02.811622 7f68af34b700 5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 97'89787, trimmed: , trimmed_dups: , clear_divergent_priors: 0
-9997> 2018-10-17 21:07:02.812658 7f68ceb74700 5 -- 192.168.205.74:6806/2597 >> 192.168.205.75:6810/8583 conn(0x560867993800 :6806 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=1118 cs=31 l=0). rx osd.19 seq 451109 0x560938ea94
-9996> 2018-10-17 21:07:02.812673 7f68ceb74700 1 -- 192.168.205.74:6806/2597 <== osd.19 192.168.205.75:6810/8583 451109 ==== osd_repop_reply(client.14520.0:54574378 1.38f e97/95) v2 ==== 111+0+0 (3959920469 0 0) 0x560938ea9480 con
-9995> 2018-10-17 21:07:02.812737 7f68b3353700 1 -- 192.168.204.74:6806/2597 --> 192.168.204.75:0/3066932386 -- osd_op_reply(54574378 rbd_data.114e6b8b4567.000000000004b577 [set-alloc-hint object_size 4194304 write_size 4194304,wri
-9994> 2018-10-17 21:07:02.813334 7f68ce373700 5 -- 192.168.204.74:6806/2597 >> 192.168.204.74:0/221407467 conn(0x56086fbd5000 :6806 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=3 cs=1 l=1). rx client.6839 seq 1510007 0x5608e8
-9993> 2018-10-17 21:07:02.813363 7f68ce373700 1 -- 192.168.204.74:6806/2597 <== client.6839 192.168.204.74:0/221407467 1510007 ==== osd_op(client.6839.0:29263289 1.38f 1.96f6a78f (undecoded) ondisk+write+known_if_redirected e97) v
-9992> 2018-10-17 21:07:02.813534 7f68af34b700 1 -- 192.168.205.74:6806/2597 --> 192.168.205.75:6810/8583 -- osd_repop(client.6839.0:29263289 1.38f e97/95 1:f1e56f69:::rbd_data.114e6b8b4567.000000000004b577:head v 97'89788) v2 -- 0
-9991> 2018-10-17 21:07:02.813582 7f68af34b700 5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 97'89788, trimmed: , trimmed_dups: , clear_divergent_priors: 0
-9990> 2018-10-17 21:07:02.814561 7f68ceb74700 5 -- 192.168.205.74:6806/2597 >> 192.168.205.75:6810/8583 conn(0x560867993800 :6806 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=1118 cs=31 l=0). rx osd.19 seq 451110 0x56089d2d51
[... log truncated; jumping to the end of the dump ...]
-30> 2018-10-17 21:08:49.032018 7f68ce373700 5 -- 192.168.205.74:6806/2597 >> 192.168.205.76:6806/6266 conn(0x5608679c2000 :6806 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=1536 cs=49 l=0). rx osd.3 seq 263335 0x56091174b50
-29> 2018-10-17 21:08:49.032024 7f68b034d700 5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 97'56946, trimmed: , trimmed_dups: , clear_divergent_priors: 0
-28> 2018-10-17 21:08:49.032033 7f68ce373700 1 -- 192.168.205.74:6806/2597 <== osd.3 192.168.205.76:6806/6266 263335 ==== osd_repop(osd.3.0:2611603 1.3bd e97/95) v2 ==== 1038+0+603 (2720381368 0 2999236717) 0x56091174b500 con 0x5
-27> 2018-10-17 21:08:49.032118 7f68ce373700 5 -- 192.168.205.74:6806/2597 >> 192.168.205.76:6806/6266 conn(0x5608679c2000 :6806 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=1536 cs=49 l=0). rx osd.3 seq 263336 0x5608c134780
-26> 2018-10-17 21:08:49.032133 7f68ce373700 1 -- 192.168.205.74:6806/2597 <== osd.3 192.168.205.76:6806/6266 263336 ==== osd_repop(osd.3.0:2611604 1.3bd e97/95) v2 ==== 1038+0+603 (1712617172 0 282554481) 0x5608c1347800 con 0x56
-25> 2018-10-17 21:08:49.032220 7f68b4355700 5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 97'56947, trimmed: , trimmed_dups: , clear_divergent_priors: 0
-24> 2018-10-17 21:08:49.032228 7f68ce373700 5 -- 192.168.205.74:6806/2597 >> 192.168.205.76:6806/6266 conn(0x5608679c2000 :6806 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=1536 cs=49 l=0). rx osd.3 seq 263337 0x56089dcc270
-23> 2018-10-17 21:08:49.032238 7f68ce373700 1 -- 192.168.205.74:6806/2597 <== osd.3 192.168.205.76:6806/6266 263337 ==== osd_repop(osd.3.0:2611605 1.3bd e97/95) v2 ==== 1038+0+603 (798853978 0 3359824368) 0x56089dcc2700 con 0x56
-22> 2018-10-17 21:08:49.032302 7f68b034d700 5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 97'56948, trimmed: , trimmed_dups: , clear_divergent_priors: 0
-21> 2018-10-17 21:08:49.032307 7f68ce373700 5 -- 192.168.205.74:6806/2597 >> 192.168.205.76:6806/6266 conn(0x5608679c2000 :6806 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=1536 cs=49 l=0). rx osd.3 seq 263338 0x5608f481470
-20> 2018-10-17 21:08:49.032316 7f68ce373700 1 -- 192.168.205.74:6806/2597 <== osd.3 192.168.205.76:6806/6266 263338 ==== osd_repop(osd.3.0:2611606 1.3bd e97/95) v2 ==== 1038+0+603 (3082320877 0 1872136456) 0x5608f4814700 con 0x5
-19> 2018-10-17 21:08:49.032381 7f68b034d700 5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 97'56949, trimmed: , trimmed_dups: , clear_divergent_priors: 0
-18> 2018-10-17 21:08:49.032424 7f68c136f700 1 -- 192.168.205.74:6806/2597 --> 192.168.205.76:6806/6266 -- osd_repop_reply(osd.3.0:2611602 1.3bd e97/95 ondisk, result = 0) v2 -- 0x5608992b8080 con 0
-17> 2018-10-17 21:08:49.032514 7f68b4355700 5 write_log_and_missing with: dirty_to: 0'0, dirty_from: 4294967295'18446744073709551615, writeout_from: 97'56950, trimmed: , trimmed_dups: , clear_divergent_priors: 0
-16> 2018-10-17 21:08:49.032589 7f68c136f700 1 -- 192.168.205.74:6806/2597 --> 192.168.205.76:6806/6266 -- osd_repop_reply(osd.3.0:2611603 1.3bd e97/95 ondisk, result = 0) v2 -- 0x56089d3f1180 con 0
-15> 2018-10-17 21:08:49.032629 7f68c136f700 1 -- 192.168.205.74:6806/2597 --> 192.168.205.76:6806/6266 -- osd_repop_reply(osd.3.0:2611604 1.3bd e97/95 ondisk, result = 0) v2 -- 0x56089d3f1680 con 0
-14> 2018-10-17 21:08:49.032801 7f68c136f700 1 -- 192.168.205.74:6806/2597 --> 192.168.205.76:6806/6266 -- osd_repop_reply(osd.3.0:2611605 1.3bd e97/95 ondisk, result = 0) v2 -- 0x560941992280 con 0
-13> 2018-10-17 21:08:49.032836 7f68c136f700 1 -- 192.168.205.74:6806/2597 --> 192.168.205.76:6806/6266 -- osd_repop_reply(osd.3.0:2611606 1.3bd e97/95 ondisk, result = 0) v2 -- 0x560927974f80 con 0
-12> 2018-10-17 21:08:49.248032 7f68cf375700 5 -- 192.168.205.74:6807/2597 >> 192.168.205.76:0/7294 conn(0x56086a0bb000 :6807 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=24 cs=1 l=1). rx osd.4 seq 365720 0x5608d0727c00 osd_
-11> 2018-10-17 21:08:49.248056 7f68cf375700 1 -- 192.168.205.74:6807/2597 <== osd.4 192.168.205.76:0/7294 365720 ==== osd_ping(ping e97 stamp 2018-10-17 21:08:49.247660) v4 ==== 2004+0+0 (4121449234 0 0) 0x5608d0727c00 con 0x560
-10> 2018-10-17 21:08:49.248069 7f68cf375700 1 -- 192.168.205.74:6807/2597 --> 192.168.205.76:0/7294 -- osd_ping(ping_reply e97 stamp 2018-10-17 21:08:49.247660) v4 -- 0x56086890c200 con 0
-9> 2018-10-17 21:08:49.248189 7f68cf375700 5 -- 192.168.204.74:6807/2597 >> 192.168.204.76:0/7294 conn(0x56086a0b9800 :6807 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=24 cs=1 l=1). rx osd.4 seq 365720 0x5608a584ce00 osd_
-8> 2018-10-17 21:08:49.248205 7f68cf375700 1 -- 192.168.204.74:6807/2597 <== osd.4 192.168.204.76:0/7294 365720 ==== osd_ping(ping e97 stamp 2018-10-17 21:08:49.247660) v4 ==== 2004+0+0 (4121449234 0 0) 0x5608a584ce00 con 0x560
-7> 2018-10-17 21:08:49.248215 7f68cf375700 1 -- 192.168.204.74:6807/2597 --> 192.168.204.76:0/7294 -- osd_ping(ping_reply e97 stamp 2018-10-17 21:08:49.247660) v4 -- 0x56086890c200 con 0
-6> 2018-10-17 21:08:49.250515 7f68ce373700 5 -- 192.168.205.74:6807/2597 >> 192.168.205.75:0/3233 conn(0x560867a03800 :6807 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=20 cs=1 l=1). rx osd.14 seq 365960 0x56088c8c9000 osd
-5> 2018-10-17 21:08:49.250524 7f68cf375700 5 -- 192.168.204.74:6807/2597 >> 192.168.204.75:0/3233 conn(0x560867a02000 :6807 s=STATE_OPEN_MESSAGE_READ_FOOTER_AND_DISPATCH pgs=20 cs=1 l=1). rx osd.14 seq 365960 0x5608d0727c00 osd
-4> 2018-10-17 21:08:49.250535 7f68ce373700 1 -- 192.168.205.74:6807/2597 <== osd.14 192.168.205.75:0/3233 365960 ==== osd_ping(ping e97 stamp 2018-10-17 21:08:49.249760) v4 ==== 2004+0+0 (2425140742 0 0) 0x56088c8c9000 con 0x56
-3> 2018-10-17 21:08:49.250542 7f68cf375700 1 -- 192.168.204.74:6807/2597 <== osd.14 192.168.204.75:0/3233 365960 ==== osd_ping(ping e97 stamp 2018-10-17 21:08:49.249760) v4 ==== 2004+0+0 (2425140742 0 0) 0x5608d0727c00 con 0x56
-2> 2018-10-17 21:08:49.250548 7f68ce373700 1 -- 192.168.205.74:6807/2597 --> 192.168.205.75:0/3233 -- osd_ping(ping_reply e97 stamp 2018-10-17 21:08:49.249760) v4 -- 0x5608d8036400 con 0
-1> 2018-10-17 21:08:49.250571 7f68cf375700 1 -- 192.168.204.74:6807/2597 --> 192.168.204.75:0/3233 -- osd_ping(ping_reply e97 stamp 2018-10-17 21:08:49.249760) v4 -- 0x560915b18800 con 0
0> 2018-10-17 21:08:49.494411 7f68cab82700 -1 *** Caught signal (Segmentation fault) **
in thread 7f68cab82700 thread_name:safe_timer
ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable)
1: (()+0xa84e94) [0x56084f8d2e94]
2: (()+0x11390) [0x7f68d24ec390]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
--- end dump of recent events ---
2018-10-17 21:09:09.761050 7f8eb5456e00 0 set uid:gid to 64045:64045 (ceph:ceph)
2018-10-17 21:09:09.761065 7f8eb5456e00 0 ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable), process ceph-osd, pid 54183
2018-10-17 21:09:09.766779 7f8eb5456e00 0 pidfile_write: ignore empty --pid-file
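The backtrace above only contains raw offsets inside the ceph-osd binary, so the crash site cannot be identified without debug symbols, as the NOTE in the log itself says. A minimal sketch of how one might resolve the two frames, assuming a Debian/Ubuntu-style ceph-osd-dbg package matching 12.2.7 is available (package name and binary path are assumptions and may differ per distro):
# install debug symbols for the exact ceph version (12.2.7); package name is an assumption
apt-get install ceph-osd-dbg
# resolve the in-binary offset from frame 1 of the trace, "(()+0xa84e94)"
addr2line -Cfi -e /usr/bin/ceph-osd 0xa84e94
# or disassemble with source interleaved, as the log NOTE suggests
objdump -rdS /usr/bin/ceph-osd | less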
FS-CEPH.log
2018-10-17 16:00:00.000129 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11313 : cluster [INF] overall HEALTH_OK
2018-10-17 17:00:00.000115 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11346 : cluster [INF] overall HEALTH_OK
2018-10-17 18:00:00.000112 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11378 : cluster [INF] overall HEALTH_OK
2018-10-17 19:00:00.000099 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11416 : cluster [INF] overall HEALTH_OK
2018-10-17 20:00:00.000108 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11454 : cluster [INF] overall HEALTH_OK
2018-10-17 21:00:00.000123 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11498 : cluster [INF] overall HEALTH_OK
2018-10-17 21:08:49.640118 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11509 : cluster [INF] osd.12 failed (root=default,host=FS-CEPH1) (connection refused reported by osd.4)
2018-10-17 21:08:50.259665 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11578 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2018-10-17 21:08:51.274782 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11580 : cluster [WRN] Health check failed: Reduced data availability: 7 pgs peering (PG_AVAILABILITY)
2018-10-17 21:08:53.282846 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11582 : cluster [WRN] Health check failed: Degraded data redundancy: 3647/694932 objects degraded (0.525%), 11 pgs degraded (PG_DEGRADED)
2018-10-17 21:08:57.260404 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11583 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 7 pgs peering)
2018-10-17 21:09:02.568365 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11584 : cluster [WRN] Health check update: Degraded data redundancy: 25574/694932 objects degraded (3.680%), 75 pgs degraded (PG_DEGRADED)
2018-10-17 21:09:14.559451 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11587 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2018-10-17 21:09:14.573313 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11588 : cluster [INF] osd.12 192.168.204.74:6806/54183 boot
2018-10-17 21:09:15.576752 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11590 : cluster [WRN] Health check update: Degraded data redundancy: 21479/694932 objects degraded (3.091%), 63 pgs degraded (PG_DEGRADED)
2018-10-17 21:09:21.284065 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11592 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 11860/694932 objects degraded (1.707%), 35 pgs degraded)
2018-10-17 21:09:21.284093 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11593 : cluster [INF] Cluster is now healthy
2018-10-17 22:00:00.000104 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11635 : cluster [INF] overall HEALTH_OK
2018-10-17 23:00:00.000108 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11670 : cluster [INF] overall HEALTH_OK
2018-10-18 00:00:00.000093 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11715 : cluster [INF] overall HEALTH_OK
2018-10-18 01:00:00.000075 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11760 : cluster [INF] overall HEALTH_OK
FS-CEPH-mon.FS-CEPH1.log
2018-10-17 21:08:35.315315 7f7200a25700 4 rocksdb: (Original Log Time 2018/10/17-21:08:35.315273) EVENT_LOG_v1 {"time_micros": 1539803315315265, "job": 9116, "event": "compaction_finished", "compaction_time_micros": 50111, "output_l
2018-10-17 21:08:35.315470 7f7200a25700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1539803315315466, "job": 9116, "event": "table_file_deletion", "file_number": 13841}
2018-10-17 21:08:35.316869 7f7200a25700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1539803315316866, "job": 9116, "event": "table_file_deletion", "file_number": 13839}
2018-10-17 21:08:35.316910 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:839] [default] Manual compaction starting
2018-10-17 21:08:35.316920 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:839] [default] Manual compaction starting
2018-10-17 21:08:35.316922 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:839] [default] Manual compaction starting
2018-10-17 21:08:35.316924 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:839] [default] Manual compaction starting
2018-10-17 21:08:35.316926 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:839] [default] Manual compaction starting
2018-10-17 21:08:49.640057 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:49.640070 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:49.640094 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 we're forcing failure of osd.12
2018-10-17 21:08:49.640115 7f71fd21e700 0 log_channel(cluster) log [INF] : osd.12 failed (root=default,host=FS-CEPH1) (connection refused reported by osd.4)
2018-10-17 21:08:49.640239 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:49.640248 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:49.840602 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:49.840612 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:49.840704 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:49.840713 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:49.840793 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:49.840802 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:49.840889 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:49.840897 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:49.840975 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:49.840983 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:49.841060 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:49.841068 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:49.841147 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:49.841155 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:49.841232 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:49.841241 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:49.841317 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:49.841326 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:49.841401 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:49.841410 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:49.841485 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:49.841493 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:49.841568 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:49.841577 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:49.841651 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:49.841660 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:49.841734 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:49.841742 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:49.843368 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:49.843379 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:49.843443 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:49.843451 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:49.843845 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:49.843855 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:49.843944 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:49.843953 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:49.845392 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:49.845403 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:49.845479 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:49.845488 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:49.845585 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:49.845593 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:49.845667 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:49.845676 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:49.845748 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:49.845756 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:49.845829 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:49.845837 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:49.845905 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:49.845913 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:49.845983 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:49.845991 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:49.846060 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:49.846069 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:49.846139 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:49.846147 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:49.846217 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:49.846225 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:49.846319 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:49.846328 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:49.847327 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:49.847338 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:49.847427 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:49.847437 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:50.035106 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.035117 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.163510 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:50.163522 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:50.236781 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.236791 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.244587 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:50.244597 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:50.244670 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:50.244677 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:50.244778 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:50.244787 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:50.244862 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:50.244870 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:50.244944 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:50.244952 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:50.245019 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:50.245027 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:50.245095 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:50.245103 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:50.245176 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:50.245185 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:50.245255 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:50.245263 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:50.245334 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:50.245343 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:50.245413 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:50.245421 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:50.245521 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:50.245529 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:50.245600 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:50.245608 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:50.245676 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:50.245684 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:50.247378 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:50.247389 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:50.247454 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:50.247462 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:50.247651 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:50.247659 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:50.247739 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:50.247749 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:50.249290 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:50.249298 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:50.249369 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:50.249377 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:50.249489 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:50.249498 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:50.249587 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:50.249595 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:50.249672 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.249680 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.249751 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:50.249759 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:50.249828 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:50.249836 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:50.249906 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:50.249914 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:50.249984 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.249992 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.250063 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:50.250071 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:50.250142 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:50.250150 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:50.250221 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:50.250229 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:50.250299 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:50.250307 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:50.250378 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:50.250386 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:50.259662 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)
2018-10-17 21:08:50.272483 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e98 e98: 21 total, 20 up, 21 in
2018-10-17 21:08:50.278411 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e98: 21 total, 20 up, 21 in
2018-10-17 21:08:51.274779 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 7 pgs peering (PG_AVAILABILITY)
2018-10-17 21:08:51.281467 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e99 e99: 21 total, 20 up, 21 in
2018-10-17 21:08:51.282926 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e99: 21 total, 20 up, 21 in
2018-10-17 21:08:53.282839 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 3647/694932 objects degraded (0.525%), 11 pgs degraded (PG_DEGRADED)
2018-10-17 21:08:57.260399 7f71ffa23700 0 log_channel(cluster) log [INF] : Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 7 pgs peering)
2018-10-17 21:09:02.568360 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 25574/694932 objects degraded (3.680%), 75 pgs degraded (PG_DEGRADED)
2018-10-17 21:09:08.827944 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:09:14.503211 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "ssd", "ids": ["12"]} v 0) v1
2018-10-17 21:09:14.503259 7f71fd21e700 0 log_channel(audit) log [INF] : from='osd.12 192.168.204.74:6806/54183' entity='osd.12' cmd=[{"prefix": "osd crush set-device-class", "class": "ssd", "ids": ["12"]}]: dispatch
2018-10-17 21:09:14.503616 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 12, "weight":0.8732, "args": ["host=FS-CEPH1", "root=default"]} v 0) v1
2018-10-17 21:09:14.503643 7f71fd21e700 0 log_channel(audit) log [INF] : from='osd.12 192.168.204.74:6806/54183' entity='osd.12' cmd=[{"prefix": "osd crush create-or-move", "id": 12, "weight":0.8732, "args": ["host=FS-CEPH1", "root=
2018-10-17 21:09:14.503710 7f71fd21e700 0 mon.FS-CEPH1@0(leader).osd e99 create-or-move crush item name 'osd.12' initial_weight 0.8732 at location {host=FS-CEPH1,root=default}
2018-10-17 21:09:14.559447 7f71ffa23700 0 log_channel(cluster) log [INF] : Health check cleared: OSD_DOWN (was: 1 osds down)
2018-10-17 21:09:14.572429 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e100 e100: 21 total, 21 up, 21 in
2018-10-17 21:09:14.573310 7f71f9216700 0 log_channel(cluster) log [INF] : osd.12 192.168.204.74:6806/54183 boot
2018-10-17 21:09:14.573378 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e100: 21 total, 21 up, 21 in
2018-10-17 21:09:15.576748 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 21479/694932 objects degraded (3.091%), 63 pgs degraded (PG_DEGRADED)
2018-10-17 21:09:15.580616 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e101 e101: 21 total, 21 up, 21 in
2018-10-17 21:09:15.584509 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e101: 21 total, 21 up, 21 in
2018-10-17 21:09:21.284060 7f71ffa23700 0 log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 11860/694932 objects degraded (1.707%), 35 pgs degraded)
2018-10-17 21:09:21.284092 7f71ffa23700 0 log_channel(cluster) log [INF] : Cluster is now healthy
2018-10-17 21:09:57.959730 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " f o r m a t " : " j s o n " , " p r e f i x " : " s t a t u s " } v 0) v1
2018-10-17 21:09:57.959759 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/1787096585' entity='client.admin' cmd=[{,",f,o,r,m,a,t,",:, ,",j,s,o,n,",,, ,",p,r,e,f,i,x,",:, ,",s,t,a,t,u,s,",}]: dispatch
2018-10-17 21:10:08.828125 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:10:58.038667 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " p r e f i x " : " o s d t r e e " , " f o r m a t " : " j s o n " } v 0) v1
2018-10-17 21:10:58.038712 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/3665145900' entity='client.admin' cmd=[{,",p,r,e,f,i,x,",:, ,",o,s,d, ,t,r,e,e,",,, ,",f,o,r,m,a,t,",:, ,",j,s,o,n,",}]: dispat
2018-10-17 21:11:08.828298 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:11:58.122520 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " p r e f i x " : " o s d t r e e " , " f o r m a t " : " j s o n " } v 0) v1
2018-10-17 21:11:58.122563 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/180114923' entity='client.admin' cmd=[{,",p,r,e,f,i,x,",:, ,",o,s,d, ,t,r,e,e,",,, ,",f,o,r,m,a,t,",:, ,",j,s,o,n,",}]: dispatc
2018-10-17 21:11:58.122844 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " f o r m a t " : " j s o n " , " p r e f i x " : " s t a t u s " } v 0) v1
2018-10-17 21:11:58.122861 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/2138078266' entity='client.admin' cmd=[{,",f,o,r,m,a,t,",:, ,",j,s,o,n,",,, ,",p,r,e,f,i,x,",:, ,",s,t,a,t,u,s,",}]: dispatch
2018-10-17 21:12:08.828467 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:13:08.828638 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:13:47.584516 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_write.cc:725] [default] New memtable created with log file: #13843. Immutable memtables: 0.
2018-10-17 21:13:47.584558 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:49] [JOB 9117] Syncing log #13840
2018-10-17 21:13:47.584903 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.584548) [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:1158] Calling FlushMemTableToOutputFile with column family [default], fl
2018-10-17 21:13:47.584910 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/flush_job.cc:264] [default] [JOB 9117] Flushing memtable with next log file: 13843
2018-10-17 21:13:47.584920 7f71f7a13700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1539803627584915, "job": 9117, "event": "flush_started", "num_memtables": 1, "num_entries": 1604, "num_deletes": 250, "memory_usage": 1556216}
2018-10-17 21:13:47.584924 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/flush_job.cc:293] [default] [JOB 9117] Level-0 flush table #13844: started
2018-10-17 21:13:47.593254 7f71f7a13700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1539803627593243, "cf_name": "default", "job": 9117, "event": "table_file_creation", "file_number": 13844, "file_size": 1158932, "table_properties": {"d
2018-10-17 21:13:47.593263 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/flush_job.cc:319] [default] [JOB 9117] Level-0 flush table #13844: 1158932 bytes OK
2018-10-17 21:13:47.595064 7f71f7a13700 3 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/version_set.cc:2087] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
2018-10-17 21:13:47.598359 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.595049) [/root/ceph-12.2.7/src/rocksdb/db/memtable_list.cc:360] [default] Level-0 commit table #13844 started
2018-10-17 21:13:47.598365 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.598329) [/root/ceph-12.2.7/src/rocksdb/db/memtable_list.cc:383] [default] Level-0 commit table #13844: memtable #1 done
2018-10-17 21:13:47.598368 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.598340) EVENT_LOG_v1 {"time_micros": 1539803627598336, "job": 9117, "event": "flush_finished", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_
2018-10-17 21:13:47.598370 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.598350) [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:132] [default] Level summary: base level 6 max bytes base 26843546 files
--- end dump of recent events ---
2018-10-17 21:09:09.761050 7f8eb5456e00 0 set uid:gid to 64045:64045 (ceph:ceph)
2018-10-17 21:09:09.761065 7f8eb5456e00 0 ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous (stable), process ceph-osd, pid 54183
2018-10-17 21:09:09.766779 7f8eb5456e00 0 pidfile_write: ignore empty --pid-file
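For what it's worth, the two frames in the backtrace above can be resolved to function names once the matching 12.2.7 debug symbols are installed. A minimal sketch, assuming a standard /usr/bin/ceph-osd install path and the usual libpthread location on Ubuntu (both paths may differ on your system):
# Resolve the offset inside the ceph-osd binary (frame 1, +0xa84e94):
addr2line -Cfe /usr/bin/ceph-osd 0xa84e94
# Resolve the offset inside libpthread (frame 2, +0x11390, normally the signal trampoline):
addr2line -Cfe /lib/x86_64-linux-gnu/libpthread.so.0 0x11390
# Or disassemble around the offset, as the NOTE in the log itself suggests:
objdump -rdS /usr/bin/ceph-osd | less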
FS-CEPH.log
2018-10-17 16:00:00.000129 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11313 : cluster [INF] overall HEALTH_OK
2018-10-17 17:00:00.000115 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11346 : cluster [INF] overall HEALTH_OK
2018-10-17 18:00:00.000112 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11378 : cluster [INF] overall HEALTH_OK
2018-10-17 19:00:00.000099 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11416 : cluster [INF] overall HEALTH_OK
2018-10-17 20:00:00.000108 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11454 : cluster [INF] overall HEALTH_OK
2018-10-17 21:00:00.000123 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11498 : cluster [INF] overall HEALTH_OK
2018-10-17 21:08:49.640118 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11509 : cluster [INF] osd.12 failed (root=default,host=FS-CEPH1) (connection refused reported by osd.4)
2018-10-17 21:08:50.259665 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11578 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2018-10-17 21:08:51.274782 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11580 : cluster [WRN] Health check failed: Reduced data availability: 7 pgs peering (PG_AVAILABILITY)
2018-10-17 21:08:53.282846 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11582 : cluster [WRN] Health check failed: Degraded data redundancy: 3647/694932 objects degraded (0.525%), 11 pgs degraded (PG_DEGRADED)
2018-10-17 21:08:57.260404 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11583 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 7 pgs peering)
2018-10-17 21:09:02.568365 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11584 : cluster [WRN] Health check update: Degraded data redundancy: 25574/694932 objects degraded (3.680%), 75 pgs degraded (PG_DEGRADED)
2018-10-17 21:09:14.559451 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11587 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2018-10-17 21:09:14.573313 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11588 : cluster [INF] osd.12 192.168.204.74:6806/54183 boot
2018-10-17 21:09:15.576752 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11590 : cluster [WRN] Health check update: Degraded data redundancy: 21479/694932 objects degraded (3.091%), 63 pgs degraded (PG_DEGRADED)
2018-10-17 21:09:21.284065 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11592 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 11860/694932 objects degraded (1.707%), 35 pgs degraded)
2018-10-17 21:09:21.284093 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11593 : cluster [INF] Cluster is now healthy
2018-10-17 22:00:00.000104 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11635 : cluster [INF] overall HEALTH_OK
2018-10-17 23:00:00.000108 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11670 : cluster [INF] overall HEALTH_OK
2018-10-18 00:00:00.000093 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11715 : cluster [INF] overall HEALTH_OK
2018-10-18 01:00:00.000075 mon.FS-CEPH1 mon.0 192.168.204.74:6789/0 11760 : cluster [INF] overall HEALTH_OK
FS-CEPH-mon.FS-CEPH1.log
2018-10-17 21:08:35.315315 7f7200a25700 4 rocksdb: (Original Log Time 2018/10/17-21:08:35.315273) EVENT_LOG_v1 {"time_micros": 1539803315315265, "job": 9116, "event": "compaction_finished", "compaction_time_micros": 50111, "output_l
2018-10-17 21:08:35.315470 7f7200a25700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1539803315315466, "job": 9116, "event": "table_file_deletion", "file_number": 13841}
2018-10-17 21:08:35.316869 7f7200a25700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1539803315316866, "job": 9116, "event": "table_file_deletion", "file_number": 13839}
2018-10-17 21:08:35.316910 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:839] [default] Manual compaction starting
2018-10-17 21:08:35.316920 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:839] [default] Manual compaction starting
2018-10-17 21:08:35.316922 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:839] [default] Manual compaction starting
2018-10-17 21:08:35.316924 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:839] [default] Manual compaction starting
2018-10-17 21:08:35.316926 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:839] [default] Manual compaction starting
2018-10-17 21:08:49.640057 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:49.640070 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:49.640094 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 we're forcing failure of osd.12
2018-10-17 21:08:49.640115 7f71fd21e700 0 log_channel(cluster) log [INF] : osd.12 failed (root=default,host=FS-CEPH1) (connection refused reported by osd.4)
2018-10-17 21:08:49.640239 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:49.640248 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:49.840602 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:49.840612 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:49.840704 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:49.840713 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:49.840793 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:49.840802 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:49.840889 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:49.840897 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:49.840975 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:49.840983 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:49.841060 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:49.841068 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:49.841147 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:49.841155 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:49.841232 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:49.841241 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:49.841317 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:49.841326 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:49.841401 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:49.841410 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:49.841485 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:49.841493 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:49.841568 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:49.841577 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:49.841651 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:49.841660 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:49.841734 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:49.841742 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:49.843368 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:49.843379 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:49.843443 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:49.843451 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:49.843845 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:49.843855 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:49.843944 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:49.843953 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:49.845392 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:49.845403 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:49.845479 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:49.845488 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:49.845585 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:49.845593 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:49.845667 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:49.845676 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:49.845748 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:49.845756 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:49.845829 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:49.845837 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:49.845905 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:49.845913 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:49.845983 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:49.845991 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:49.846060 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:49.846069 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:49.846139 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:49.846147 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:49.846217 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:49.846225 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:49.846319 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:49.846328 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:49.847327 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:49.847338 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:49.847427 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:49.847437 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:50.035106 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.035117 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.163510 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:50.163522 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:50.236781 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.236791 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.244587 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:50.244597 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:50.244670 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:50.244677 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:50.244778 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:50.244787 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:50.244862 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:50.244870 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:50.244944 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:50.244952 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:50.245019 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:50.245027 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:50.245095 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:50.245103 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:50.245176 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:50.245185 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:50.245255 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:50.245263 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:50.245334 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:50.245343 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:50.245413 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:50.245421 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:50.245521 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:50.245529 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:50.245600 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:50.245608 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:50.245676 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:50.245684 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:50.247378 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:50.247389 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:50.247454 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:50.247462 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:50.247651 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:50.247659 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:50.247739 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:50.247749 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:50.249290 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:50.249298 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:50.249369 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:50.249377 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:50.249489 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:50.249498 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:50.249587 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:50.249595 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:50.249672 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.249680 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.249751 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:50.249759 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:50.249828 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:50.249836 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:50.249906 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:50.249914 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:50.249984 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.249992 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.250063 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:50.250071 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:50.250142 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:50.250150 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:50.250221 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:50.250229 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:50.250299 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:50.250307 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:50.250378 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:50.250386 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:50.259662 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)
2018-10-17 21:08:50.272483 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e98 e98: 21 total, 20 up, 21 in
2018-10-17 21:08:50.278411 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e98: 21 total, 20 up, 21 in
2018-10-17 21:08:51.274779 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 7 pgs peering (PG_AVAILABILITY)
2018-10-17 21:08:51.281467 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e99 e99: 21 total, 20 up, 21 in
2018-10-17 21:08:51.282926 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e99: 21 total, 20 up, 21 in
2018-10-17 21:08:53.282839 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 3647/694932 objects degraded (0.525%), 11 pgs degraded (PG_DEGRADED)
2018-10-17 21:08:57.260399 7f71ffa23700 0 log_channel(cluster) log [INF] : Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 7 pgs peering)
2018-10-17 21:09:02.568360 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 25574/694932 objects degraded (3.680%), 75 pgs degraded (PG_DEGRADED)
2018-10-17 21:09:08.827944 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:09:14.503211 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "ssd", "ids": ["12"]} v 0) v1
2018-10-17 21:09:14.503259 7f71fd21e700 0 log_channel(audit) log [INF] : from='osd.12 192.168.204.74:6806/54183' entity='osd.12' cmd=[{"prefix": "osd crush set-device-class", "class": "ssd", "ids": ["12"]}]: dispatch
2018-10-17 21:09:14.503616 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 12, "weight":0.8732, "args": ["host=FS-CEPH1", "root=default"]} v 0) v1
2018-10-17 21:09:14.503643 7f71fd21e700 0 log_channel(audit) log [INF] : from='osd.12 192.168.204.74:6806/54183' entity='osd.12' cmd=[{"prefix": "osd crush create-or-move", "id": 12, "weight":0.8732, "args": ["host=FS-CEPH1", "root=
2018-10-17 21:09:14.503710 7f71fd21e700 0 mon.FS-CEPH1@0(leader).osd e99 create-or-move crush item name 'osd.12' initial_weight 0.8732 at location {host=FS-CEPH1,root=default}
2018-10-17 21:09:14.559447 7f71ffa23700 0 log_channel(cluster) log [INF] : Health check cleared: OSD_DOWN (was: 1 osds down)
2018-10-17 21:09:14.572429 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e100 e100: 21 total, 21 up, 21 in
2018-10-17 21:09:14.573310 7f71f9216700 0 log_channel(cluster) log [INF] : osd.12 192.168.204.74:6806/54183 boot
2018-10-17 21:09:14.573378 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e100: 21 total, 21 up, 21 in
2018-10-17 21:09:15.576748 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 21479/694932 objects degraded (3.091%), 63 pgs degraded (PG_DEGRADED)
2018-10-17 21:09:15.580616 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e101 e101: 21 total, 21 up, 21 in
2018-10-17 21:09:15.584509 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e101: 21 total, 21 up, 21 in
2018-10-17 21:09:21.284060 7f71ffa23700 0 log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 11860/694932 objects degraded (1.707%), 35 pgs degraded)
2018-10-17 21:09:21.284092 7f71ffa23700 0 log_channel(cluster) log [INF] : Cluster is now healthy
2018-10-17 21:09:57.959730 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " f o r m a t " : " j s o n " , " p r e f i x " : " s t a t u s " } v 0) v1
2018-10-17 21:09:57.959759 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/1787096585' entity='client.admin' cmd=[{,",f,o,r,m,a,t,",:, ,",j,s,o,n,",,, ,",p,r,e,f,i,x,",:, ,",s,t,a,t,u,s,",}]: dispatch
2018-10-17 21:10:08.828125 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:10:58.038667 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " p r e f i x " : " o s d t r e e " , " f o r m a t " : " j s o n " } v 0) v1
2018-10-17 21:10:58.038712 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/3665145900' entity='client.admin' cmd=[{,",p,r,e,f,i,x,",:, ,",o,s,d, ,t,r,e,e,",,, ,",f,o,r,m,a,t,",:, ,",j,s,o,n,",}]: dispat
2018-10-17 21:11:08.828298 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:11:58.122520 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " p r e f i x " : " o s d t r e e " , " f o r m a t " : " j s o n " } v 0) v1
2018-10-17 21:11:58.122563 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/180114923' entity='client.admin' cmd=[{,",p,r,e,f,i,x,",:, ,",o,s,d, ,t,r,e,e,",,, ,",f,o,r,m,a,t,",:, ,",j,s,o,n,",}]: dispatc
2018-10-17 21:11:58.122844 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " f o r m a t " : " j s o n " , " p r e f i x " : " s t a t u s " } v 0) v1
2018-10-17 21:11:58.122861 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/2138078266' entity='client.admin' cmd=[{,",f,o,r,m,a,t,",:, ,",j,s,o,n,",,, ,",p,r,e,f,i,x,",:, ,",s,t,a,t,u,s,",}]: dispatch
2018-10-17 21:12:08.828467 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:13:08.828638 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:13:47.584516 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_write.cc:725] [default] New memtable created with log file: #13843. Immutable memtables: 0.
2018-10-17 21:13:47.584558 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:49] [JOB 9117] Syncing log #13840
2018-10-17 21:13:47.584903 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.584548) [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:1158] Calling FlushMemTableToOutputFile with column family [default], fl
2018-10-17 21:13:47.584910 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/flush_job.cc:264] [default] [JOB 9117] Flushing memtable with next log file: 13843
2018-10-17 21:13:47.584920 7f71f7a13700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1539803627584915, "job": 9117, "event": "flush_started", "num_memtables": 1, "num_entries": 1604, "num_deletes": 250, "memory_usage": 1556216}
2018-10-17 21:13:47.584924 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/flush_job.cc:293] [default] [JOB 9117] Level-0 flush table #13844: started
2018-10-17 21:13:47.593254 7f71f7a13700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1539803627593243, "cf_name": "default", "job": 9117, "event": "table_file_creation", "file_number": 13844, "file_size": 1158932, "table_properties": {"d
2018-10-17 21:13:47.593263 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/flush_job.cc:319] [default] [JOB 9117] Level-0 flush table #13844: 1158932 bytes OK
2018-10-17 21:13:47.595064 7f71f7a13700 3 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/version_set.cc:2087] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
2018-10-17 21:13:47.598359 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.595049) [/root/ceph-12.2.7/src/rocksdb/db/memtable_list.cc:360] [default] Level-0 commit table #13844 started
2018-10-17 21:13:47.598365 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.598329) [/root/ceph-12.2.7/src/rocksdb/db/memtable_list.cc:383] [default] Level-0 commit table #13844: memtable #1 done
2018-10-17 21:13:47.598368 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.598340) EVENT_LOG_v1 {"time_micros": 1539803627598336, "job": 9117, "event": "flush_finished", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_
2018-10-17 21:13:47.598370 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.598350) [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:132] [default] Level summary: base level 6 max bytes base 26843546 files
Last edited on October 18, 2018, 2:05 pm by abierwirth · #1
admin
2,930 Posts
October 18, 2018, 3:24 pmQuote from admin on October 18, 2018, 3:24 pmThanks for reporting this and for digging it up. It looks like it was solved in Ceph 12.2.8.
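If it helps to double-check after upgrading, the version each daemon actually runs can be confirmed from the monitor. A minimal sketch, assuming a Luminous cluster with an admin keyring on the node:
# Summarise the Ceph version reported by each daemon type (mon/mgr/osd):
ceph versions
# Ask the restarted OSD directly which binary it is running:
ceph tell osd.12 version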
Last edited on October 18, 2018, 3:24 pm by admin · #2
2018-10-17 21:08:49.841147 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:49.841155 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:49.841232 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:49.841241 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:49.841317 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:49.841326 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:49.841401 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:49.841410 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:49.841485 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:49.841493 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:49.841568 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:49.841577 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:49.841651 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:49.841660 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:49.841734 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:49.841742 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:49.843368 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:49.843379 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:49.843443 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:49.843451 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:49.843845 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:49.843855 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:49.843944 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:49.843953 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:49.845392 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:49.845403 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:49.845479 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:49.845488 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:49.845585 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:49.845593 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:49.845667 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:49.845676 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:49.845748 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:49.845756 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:49.845829 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:49.845837 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:49.845905 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:49.845913 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:49.845983 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:49.845991 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:49.846060 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:49.846069 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:49.846139 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:49.846147 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:49.846217 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:49.846225 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:49.846319 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:49.846328 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:49.847327 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:49.847338 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:49.847427 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:49.847437 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:50.035106 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.035117 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.163510 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:50.163522 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:50.236781 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.236791 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.244587 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:50.244597 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:50.244670 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:50.244677 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:50.244778 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:50.244787 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:50.244862 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:50.244870 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:50.244944 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:50.244952 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:50.245019 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:50.245027 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:50.245095 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:50.245103 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:50.245176 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:50.245185 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:50.245255 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:50.245263 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:50.245334 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:50.245343 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:50.245413 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:50.245421 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:50.245521 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:50.245529 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:50.245600 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:50.245608 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:50.245676 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:50.245684 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:50.247378 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:50.247389 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:50.247454 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:50.247462 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:50.247651 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:50.247659 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:50.247739 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:50.247749 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:50.249290 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:50.249298 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:50.249369 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:50.249377 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:50.249489 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:50.249498 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:50.249587 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:50.249595 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:50.249672 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.249680 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.249751 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:50.249759 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:50.249828 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:50.249836 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:50.249906 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:50.249914 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:50.249984 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.249992 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.250063 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:50.250071 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:50.250142 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:50.250150 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:50.250221 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:50.250229 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:50.250299 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:50.250307 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:50.250378 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:50.250386 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:50.259662 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)
2018-10-17 21:08:50.272483 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e98 e98: 21 total, 20 up, 21 in
2018-10-17 21:08:50.278411 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e98: 21 total, 20 up, 21 in
2018-10-17 21:08:51.274779 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 7 pgs peering (PG_AVAILABILITY)
2018-10-17 21:08:51.281467 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e99 e99: 21 total, 20 up, 21 in
2018-10-17 21:08:51.282926 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e99: 21 total, 20 up, 21 in
2018-10-17 21:08:53.282839 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 3647/694932 objects degraded (0.525%), 11 pgs degraded (PG_DEGRADED)
2018-10-17 21:08:57.260399 7f71ffa23700 0 log_channel(cluster) log [INF] : Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 7 pgs peering)
2018-10-17 21:09:02.568360 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 25574/694932 objects degraded (3.680%), 75 pgs degraded (PG_DEGRADED)
2018-10-17 21:09:08.827944 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:09:14.503211 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "ssd", "ids": ["12"]} v 0) v1
2018-10-17 21:09:14.503259 7f71fd21e700 0 log_channel(audit) log [INF] : from='osd.12 192.168.204.74:6806/54183' entity='osd.12' cmd=[{"prefix": "osd crush set-device-class", "class": "ssd", "ids": ["12"]}]: dispatch
2018-10-17 21:09:14.503616 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 12, "weight":0.8732, "args": ["host=FS-CEPH1", "root=default"]} v 0) v1
2018-10-17 21:09:14.503643 7f71fd21e700 0 log_channel(audit) log [INF] : from='osd.12 192.168.204.74:6806/54183' entity='osd.12' cmd=[{"prefix": "osd crush create-or-move", "id": 12, "weight":0.8732, "args": ["host=FS-CEPH1", "root=
2018-10-17 21:09:14.503710 7f71fd21e700 0 mon.FS-CEPH1@0(leader).osd e99 create-or-move crush item name 'osd.12' initial_weight 0.8732 at location {host=FS-CEPH1,root=default}
2018-10-17 21:09:14.559447 7f71ffa23700 0 log_channel(cluster) log [INF] : Health check cleared: OSD_DOWN (was: 1 osds down)
2018-10-17 21:09:14.572429 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e100 e100: 21 total, 21 up, 21 in
2018-10-17 21:09:14.573310 7f71f9216700 0 log_channel(cluster) log [INF] : osd.12 192.168.204.74:6806/54183 boot
2018-10-17 21:09:14.573378 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e100: 21 total, 21 up, 21 in
2018-10-17 21:09:15.576748 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 21479/694932 objects degraded (3.091%), 63 pgs degraded (PG_DEGRADED)
2018-10-17 21:09:15.580616 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e101 e101: 21 total, 21 up, 21 in
2018-10-17 21:09:15.584509 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e101: 21 total, 21 up, 21 in
2018-10-17 21:09:21.284060 7f71ffa23700 0 log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 11860/694932 objects degraded (1.707%), 35 pgs degraded)
2018-10-17 21:09:21.284092 7f71ffa23700 0 log_channel(cluster) log [INF] : Cluster is now healthy
2018-10-17 21:09:57.959730 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " f o r m a t " : " j s o n " , " p r e f i x " : " s t a t u s " } v 0) v1
2018-10-17 21:09:57.959759 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/1787096585' entity='client.admin' cmd=[{,",f,o,r,m,a,t,",:, ,",j,s,o,n,",,, ,",p,r,e,f,i,x,",:, ,",s,t,a,t,u,s,",}]: dispatch
2018-10-17 21:10:08.828125 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:10:58.038667 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " p r e f i x " : " o s d t r e e " , " f o r m a t " : " j s o n " } v 0) v1
2018-10-17 21:10:58.038712 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/3665145900' entity='client.admin' cmd=[{,",p,r,e,f,i,x,",:, ,",o,s,d, ,t,r,e,e,",,, ,",f,o,r,m,a,t,",:, ,",j,s,o,n,",}]: dispat
2018-10-17 21:11:08.828298 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:11:58.122520 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " p r e f i x " : " o s d t r e e " , " f o r m a t " : " j s o n " } v 0) v1
2018-10-17 21:11:58.122563 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/180114923' entity='client.admin' cmd=[{,",p,r,e,f,i,x,",:, ,",o,s,d, ,t,r,e,e,",,, ,",f,o,r,m,a,t,",:, ,",j,s,o,n,",}]: dispatc
2018-10-17 21:11:58.122844 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " f o r m a t " : " j s o n " , " p r e f i x " : " s t a t u s " } v 0) v1
2018-10-17 21:11:58.122861 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/2138078266' entity='client.admin' cmd=[{,",f,o,r,m,a,t,",:, ,",j,s,o,n,",,, ,",p,r,e,f,i,x,",:, ,",s,t,a,t,u,s,",}]: dispatch
2018-10-17 21:12:08.828467 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:13:08.828638 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:13:47.584516 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_write.cc:725] [default] New memtable created with log file: #13843. Immutable memtables: 0.
2018-10-17 21:13:47.584558 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:49] [JOB 9117] Syncing log #13840
2018-10-17 21:13:47.584903 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.584548) [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:1158] Calling FlushMemTableToOutputFile with column family [default], fl
2018-10-17 21:13:47.584910 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/flush_job.cc:264] [default] [JOB 9117] Flushing memtable with next log file: 13843
2018-10-17 21:13:47.584920 7f71f7a13700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1539803627584915, "job": 9117, "event": "flush_started", "num_memtables": 1, "num_entries": 1604, "num_deletes": 250, "memory_usage": 1556216}
2018-10-17 21:13:47.584924 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/flush_job.cc:293] [default] [JOB 9117] Level-0 flush table #13844: started
2018-10-17 21:13:47.593254 7f71f7a13700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1539803627593243, "cf_name": "default", "job": 9117, "event": "table_file_creation", "file_number": 13844, "file_size": 1158932, "table_properties": {"d
2018-10-17 21:13:47.593263 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/flush_job.cc:319] [default] [JOB 9117] Level-0 flush table #13844: 1158932 bytes OK
2018-10-17 21:13:47.595064 7f71f7a13700 3 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/version_set.cc:2087] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
2018-10-17 21:13:47.598359 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.595049) [/root/ceph-12.2.7/src/rocksdb/db/memtable_list.cc:360] [default] Level-0 commit table #13844 started
2018-10-17 21:13:47.598365 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.598329) [/root/ceph-12.2.7/src/rocksdb/db/memtable_list.cc:383] [default] Level-0 commit table #13844: memtable #1 done
2018-10-17 21:13:47.598368 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.598340) EVENT_LOG_v1 {"time_micros": 1539803627598336, "job": 9117, "event": "flush_finished", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_
2018-10-17 21:13:47.598370 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.598350) [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:132] [default] Level summary: base level 6 max bytes base 26843546 files
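A note on reading the crash dump in osd.12.log: the two frames (e.g. "1: (()+0xa84e94)") are unresolved, so the offset has to be mapped back to a symbol by hand. A rough sketch of what should work, assuming the ceph debug symbols matching 12.2.7 are installed and the OSD binary sits at the usual /usr/bin/ceph-osd (both assumptions; exact details depend on how the binary was built):
addr2line -Cfe /usr/bin/ceph-osd 0xa84e94   # map the in-binary offset from frame 1 to function/file:line
objdump -rdS /usr/bin/ceph-osd > ceph-osd.asm   # full annotated disassembly, as the NOTE in the dump suggests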
2018-10-17 21:08:49.846319 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:49.846328 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:49.847327 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:49.847338 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:49.847427 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:49.847437 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:50.035106 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.035117 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.163510 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:50.163522 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:50.236781 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.236791 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.244587 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:50.244597 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:50.244670 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.16 192.168.204.75:6804/5360 is reporting failure:1
2018-10-17 21:08:50.244677 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.16 192.168.204.75:6804/5360
2018-10-17 21:08:50.244778 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:50.244787 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:50.244862 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:50.244870 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:50.244944 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.18 192.168.204.75:6808/7466 is reporting failure:1
2018-10-17 21:08:50.244952 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.18 192.168.204.75:6808/7466
2018-10-17 21:08:50.245019 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:50.245027 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:50.245095 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:50.245103 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:50.245176 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:50.245185 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:50.245255 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.20 192.168.204.75:6812/9555 is reporting failure:1
2018-10-17 21:08:50.245263 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.20 192.168.204.75:6812/9555
2018-10-17 21:08:50.245334 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:50.245343 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:50.245413 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.19 192.168.204.75:6810/8583 is reporting failure:1
2018-10-17 21:08:50.245421 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.19 192.168.204.75:6810/8583
2018-10-17 21:08:50.245521 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.14 192.168.204.75:6800/3233 is reporting failure:1
2018-10-17 21:08:50.245529 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.14 192.168.204.75:6800/3233
2018-10-17 21:08:50.245600 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.15 192.168.204.75:6802/4284 is reporting failure:1
2018-10-17 21:08:50.245608 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.15 192.168.204.75:6802/4284
2018-10-17 21:08:50.245676 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.17 192.168.204.75:6806/6370 is reporting failure:1
2018-10-17 21:08:50.245684 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.17 192.168.204.75:6806/6370
2018-10-17 21:08:50.247378 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:50.247389 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:50.247454 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.11 192.168.204.74:6804/2486 is reporting failure:1
2018-10-17 21:08:50.247462 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.11 192.168.204.74:6804/2486
2018-10-17 21:08:50.247651 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:50.247659 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:50.247739 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.13 192.168.204.74:6800/2104 is reporting failure:1
2018-10-17 21:08:50.247749 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.13 192.168.204.74:6800/2104
2018-10-17 21:08:50.249290 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:50.249298 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:50.249369 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.2 192.168.204.76:6804/5172 is reporting failure:1
2018-10-17 21:08:50.249377 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.2 192.168.204.76:6804/5172
2018-10-17 21:08:50.249489 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:50.249498 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:50.249587 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:50.249595 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:50.249672 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.249680 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.249751 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:50.249759 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:50.249828 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.6 192.168.204.76:6812/9366 is reporting failure:1
2018-10-17 21:08:50.249836 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.6 192.168.204.76:6812/9366
2018-10-17 21:08:50.249906 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:50.249914 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:50.249984 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.3 192.168.204.76:6806/6266 is reporting failure:1
2018-10-17 21:08:50.249992 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.3 192.168.204.76:6806/6266
2018-10-17 21:08:50.250063 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:50.250071 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:50.250142 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.5 192.168.204.76:6810/8413 is reporting failure:1
2018-10-17 21:08:50.250150 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.5 192.168.204.76:6810/8413
2018-10-17 21:08:50.250221 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.0 192.168.204.76:6800/3056 is reporting failure:1
2018-10-17 21:08:50.250229 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.0 192.168.204.76:6800/3056
2018-10-17 21:08:50.250299 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.1 192.168.204.76:6802/4110 is reporting failure:1
2018-10-17 21:08:50.250307 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.1 192.168.204.76:6802/4110
2018-10-17 21:08:50.250378 7f71fd21e700 1 mon.FS-CEPH1@0(leader).osd e97 prepare_failure osd.12 192.168.204.74:6806/2597 from osd.4 192.168.204.76:6808/7294 is reporting failure:1
2018-10-17 21:08:50.250386 7f71fd21e700 0 log_channel(cluster) log [DBG] : osd.12 192.168.204.74:6806/2597 reported immediately failed by osd.4 192.168.204.76:6808/7294
2018-10-17 21:08:50.259662 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check failed: 1 osds down (OSD_DOWN)
2018-10-17 21:08:50.272483 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e98 e98: 21 total, 20 up, 21 in
2018-10-17 21:08:50.278411 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e98: 21 total, 20 up, 21 in
2018-10-17 21:08:51.274779 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check failed: Reduced data availability: 7 pgs peering (PG_AVAILABILITY)
2018-10-17 21:08:51.281467 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e99 e99: 21 total, 20 up, 21 in
2018-10-17 21:08:51.282926 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e99: 21 total, 20 up, 21 in
2018-10-17 21:08:53.282839 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 3647/694932 objects degraded (0.525%), 11 pgs degraded (PG_DEGRADED)
2018-10-17 21:08:57.260399 7f71ffa23700 0 log_channel(cluster) log [INF] : Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 7 pgs peering)
2018-10-17 21:09:02.568360 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 25574/694932 objects degraded (3.680%), 75 pgs degraded (PG_DEGRADED)
2018-10-17 21:09:08.827944 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:09:14.503211 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "ssd", "ids": ["12"]} v 0) v1
2018-10-17 21:09:14.503259 7f71fd21e700 0 log_channel(audit) log [INF] : from='osd.12 192.168.204.74:6806/54183' entity='osd.12' cmd=[{"prefix": "osd crush set-device-class", "class": "ssd", "ids": ["12"]}]: dispatch
2018-10-17 21:09:14.503616 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 12, "weight":0.8732, "args": ["host=FS-CEPH1", "root=default"]} v 0) v1
2018-10-17 21:09:14.503643 7f71fd21e700 0 log_channel(audit) log [INF] : from='osd.12 192.168.204.74:6806/54183' entity='osd.12' cmd=[{"prefix": "osd crush create-or-move", "id": 12, "weight":0.8732, "args": ["host=FS-CEPH1", "root=
2018-10-17 21:09:14.503710 7f71fd21e700 0 mon.FS-CEPH1@0(leader).osd e99 create-or-move crush item name 'osd.12' initial_weight 0.8732 at location {host=FS-CEPH1,root=default}
2018-10-17 21:09:14.559447 7f71ffa23700 0 log_channel(cluster) log [INF] : Health check cleared: OSD_DOWN (was: 1 osds down)
2018-10-17 21:09:14.572429 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e100 e100: 21 total, 21 up, 21 in
2018-10-17 21:09:14.573310 7f71f9216700 0 log_channel(cluster) log [INF] : osd.12 192.168.204.74:6806/54183 boot
2018-10-17 21:09:14.573378 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e100: 21 total, 21 up, 21 in
2018-10-17 21:09:15.576748 7f71ffa23700 0 log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 21479/694932 objects degraded (3.091%), 63 pgs degraded (PG_DEGRADED)
2018-10-17 21:09:15.580616 7f71f9216700 1 mon.FS-CEPH1@0(leader).osd e101 e101: 21 total, 21 up, 21 in
2018-10-17 21:09:15.584509 7f71f9216700 0 log_channel(cluster) log [DBG] : osdmap e101: 21 total, 21 up, 21 in
2018-10-17 21:09:21.284060 7f71ffa23700 0 log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 11860/694932 objects degraded (1.707%), 35 pgs degraded)
2018-10-17 21:09:21.284092 7f71ffa23700 0 log_channel(cluster) log [INF] : Cluster is now healthy
2018-10-17 21:09:57.959730 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " f o r m a t " : " j s o n " , " p r e f i x " : " s t a t u s " } v 0) v1
2018-10-17 21:09:57.959759 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/1787096585' entity='client.admin' cmd=[{,",f,o,r,m,a,t,",:, ,",j,s,o,n,",,, ,",p,r,e,f,i,x,",:, ,",s,t,a,t,u,s,",}]: dispatch
2018-10-17 21:10:08.828125 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:10:58.038667 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " p r e f i x " : " o s d t r e e " , " f o r m a t " : " j s o n " } v 0) v1
2018-10-17 21:10:58.038712 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/3665145900' entity='client.admin' cmd=[{,",p,r,e,f,i,x,",:, ,",o,s,d, ,t,r,e,e,",,, ,",f,o,r,m,a,t,",:, ,",j,s,o,n,",}]: dispat
2018-10-17 21:11:08.828298 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:11:58.122520 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " p r e f i x " : " o s d t r e e " , " f o r m a t " : " j s o n " } v 0) v1
2018-10-17 21:11:58.122563 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/180114923' entity='client.admin' cmd=[{,",p,r,e,f,i,x,",:, ,",o,s,d, ,t,r,e,e,",,, ,",f,o,r,m,a,t,",:, ,",j,s,o,n,",}]: dispatc
2018-10-17 21:11:58.122844 7f71fd21e700 0 mon.FS-CEPH1@0(leader) e3 handle_command mon_command({ " f o r m a t " : " j s o n " , " p r e f i x " : " s t a t u s " } v 0) v1
2018-10-17 21:11:58.122861 7f71fd21e700 0 log_channel(audit) log [DBG] : from='client.? 192.168.204.76:0/2138078266' entity='client.admin' cmd=[{,",f,o,r,m,a,t,",:, ,",j,s,o,n,",,, ,",p,r,e,f,i,x,",:, ,",s,t,a,t,u,s,",}]: dispatch
2018-10-17 21:12:08.828467 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:13:08.828638 7f71ffa23700 0 mon.FS-CEPH1@0(leader).data_health(20) update_stats avail 99% total 9268 MB, used 70388 kB, avail 9183 MB
2018-10-17 21:13:47.584516 7f71f8214700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_write.cc:725] [default] New memtable created with log file: #13843. Immutable memtables: 0.
2018-10-17 21:13:47.584558 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:49] [JOB 9117] Syncing log #13840
2018-10-17 21:13:47.584903 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.584548) [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:1158] Calling FlushMemTableToOutputFile with column family [default], fl
2018-10-17 21:13:47.584910 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/flush_job.cc:264] [default] [JOB 9117] Flushing memtable with next log file: 13843
2018-10-17 21:13:47.584920 7f71f7a13700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1539803627584915, "job": 9117, "event": "flush_started", "num_memtables": 1, "num_entries": 1604, "num_deletes": 250, "memory_usage": 1556216}
2018-10-17 21:13:47.584924 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/flush_job.cc:293] [default] [JOB 9117] Level-0 flush table #13844: started
2018-10-17 21:13:47.593254 7f71f7a13700 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1539803627593243, "cf_name": "default", "job": 9117, "event": "table_file_creation", "file_number": 13844, "file_size": 1158932, "table_properties": {"d
2018-10-17 21:13:47.593263 7f71f7a13700 4 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/flush_job.cc:319] [default] [JOB 9117] Level-0 flush table #13844: 1158932 bytes OK
2018-10-17 21:13:47.595064 7f71f7a13700 3 rocksdb: [/root/ceph-12.2.7/src/rocksdb/db/version_set.cc:2087] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
2018-10-17 21:13:47.598359 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.595049) [/root/ceph-12.2.7/src/rocksdb/db/memtable_list.cc:360] [default] Level-0 commit table #13844 started
2018-10-17 21:13:47.598365 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.598329) [/root/ceph-12.2.7/src/rocksdb/db/memtable_list.cc:383] [default] Level-0 commit table #13844: memtable #1 done
2018-10-17 21:13:47.598368 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.598340) EVENT_LOG_v1 {"time_micros": 1539803627598336, "job": 9117, "event": "flush_finished", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_
2018-10-17 21:13:47.598370 7f71f7a13700 4 rocksdb: (Original Log Time 2018/10/17-21:13:47.598350) [/root/ceph-12.2.7/src/rocksdb/db/db_impl_compaction_flush.cc:132] [default] Level summary: base level 6 max bytes base 26843546 files
admin
2,930 Posts
Quote from admin on October 18, 2018, 3:24 pm
Thanks for reporting this and for digging it up. It looks like it was solved in Ceph 12.2.8.
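For anyone who runs into the same segfault: before relying on the 12.2.8 fix, it is worth confirming which version each daemon is actually running, since a mixed cluster can still carry the affected OSD binary. A minimal sketch using standard Ceph CLI commands (osd.12 is the crashed OSD from the log above; substitute your own id):

# report the Ceph version of every mon/mgr/osd daemon in the cluster
ceph versions

# ask the previously crashed OSD directly which binary it is running
ceph tell osd.12 version

If the OSDs still report 12.2.7, upgrading the ceph packages on the OSD nodes and restarting the OSD daemons one host at a time keeps the cluster available while the fix is rolled out.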