OSD crashed and restarted
BonsaiJoe
53 Posts
April 25, 2018, 9:09 pm
Hi,
today one OSD crashed and restarted automatically, but I can't find anything about this in the PetaSAN node log.
Any idea why this happened?
kern.log
Apr 25 20:49:10 node04 kernel: [1476787.943772] libceph: osd75 192.168.42.14:6808 socket closed (con state OPEN)
Apr 25 20:49:10 node04 kernel: [1476788.373922] libceph: osd75 down
Apr 25 20:49:39 node04 kernel: [1476817.047380] libceph: osd75 up
syslog:
Apr 25 20:49:10 node04 ceph-osd[3971]: *** Caught signal (Segmentation fault) **
Apr 25 20:49:10 node04 ceph-osd[3971]: in thread 7fbe33048700 thread_name:bstore_mempool
Apr 25 20:49:10 node04 ceph-osd[3971]: ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)
Apr 25 20:49:10 node04 ceph-osd[3971]: 1: (()+0xa65c54) [0x55cf8d431c54]
Apr 25 20:49:10 node04 ceph-osd[3971]: 2: (()+0x11390) [0x7fbe3dbbb390]
Apr 25 20:49:10 node04 ceph-osd[3971]: 3: (BlueStore::TwoQCache::_trim(unsigned long, unsigned long)+0x518) [0x55cf8d2e2da8]
Apr 25 20:49:10 node04 ceph-osd[3971]: 4: (BlueStore::Cache::trim(unsigned long, float, float, float)+0x4e4) [0x55cf8d2b24e4]
Apr 25 20:49:10 node04 ceph-osd[3971]: 5: (BlueStore::MempoolThread::entry()+0x155) [0x55cf8d2b8f85]
Apr 25 20:49:10 node04 ceph-osd[3971]: 6: (()+0x76ba) [0x7fbe3dbb16ba]
Apr 25 20:49:10 node04 ceph-osd[3971]: 7: (clone()+0x6d) [0x7fbe3cc283dd]
Apr 25 20:49:10 node04 ceph-osd[3971]: 2018-04-25 20:49:10.123605 7fbe33048700 -1 *** Caught signal (Segmentation fault) **
Apr 25 20:49:10 node04 ceph-osd[3971]: in thread 7fbe33048700 thread_name:bstore_mempool
Apr 25 20:49:10 node04 ceph-osd[3971]: ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)
Apr 25 20:49:10 node04 ceph-osd[3971]: 1: (()+0xa65c54) [0x55cf8d431c54]
Apr 25 20:49:10 node04 ceph-osd[3971]: 2: (()+0x11390) [0x7fbe3dbbb390]
Apr 25 20:49:10 node04 ceph-osd[3971]: 3: (BlueStore::TwoQCache::_trim(unsigned long, unsigned long)+0x518) [0x55cf8d2e2da8]
Apr 25 20:49:10 node04 ceph-osd[3971]: 4: (BlueStore::Cache::trim(unsigned long, float, float, float)+0x4e4) [0x55cf8d2b24e4]
Apr 25 20:49:10 node04 ceph-osd[3971]: 5: (BlueStore::MempoolThread::entry()+0x155) [0x55cf8d2b8f85]
Apr 25 20:49:10 node04 ceph-osd[3971]: 6: (()+0x76ba) [0x7fbe3dbb16ba]
Apr 25 20:49:10 node04 ceph-osd[3971]: 7: (clone()+0x6d) [0x7fbe3cc283dd]
Apr 25 20:49:10 node04 ceph-osd[3971]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Apr 25 20:49:10 node04 ceph-osd[3971]: 0> 2018-04-25 20:49:10.123605 7fbe33048700 -1 *** Caught signal (Segmentation fault) **
Apr 25 20:49:10 node04 ceph-osd[3971]: in thread 7fbe33048700 thread_name:bstore_mempool
Apr 25 20:49:10 node04 ceph-osd[3971]: ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)
Apr 25 20:49:10 node04 ceph-osd[3971]: 1: (()+0xa65c54) [0x55cf8d431c54]
Apr 25 20:49:10 node04 ceph-osd[3971]: 2: (()+0x11390) [0x7fbe3dbbb390]
Apr 25 20:49:10 node04 ceph-osd[3971]: 3: (BlueStore::TwoQCache::_trim(unsigned long, unsigned long)+0x518) [0x55cf8d2e2da8]
Apr 25 20:49:10 node04 ceph-osd[3971]: 4: (BlueStore::Cache::trim(unsigned long, float, float, float)+0x4e4) [0x55cf8d2b24e4]
Apr 25 20:49:10 node04 ceph-osd[3971]: 5: (BlueStore::MempoolThread::entry()+0x155) [0x55cf8d2b8f85]
Apr 25 20:49:10 node04 ceph-osd[3971]: 6: (()+0x76ba) [0x7fbe3dbb16ba]
Apr 25 20:49:10 node04 ceph-osd[3971]: 7: (clone()+0x6d) [0x7fbe3cc283dd]
Apr 25 20:49:10 node04 ceph-osd[3971]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Apr 25 20:49:10 node04 kernel: [1476787.943772] libceph: osd75 192.168.42.14:6808 socket closed (con state OPEN)
Apr 25 20:49:10 node04 systemd[1]: ceph-osd@75.service: Main process exited, code=killed, status=11/SEGV
Apr 25 20:49:10 node04 systemd[1]: ceph-osd@75.service: Unit entered failed state.
Apr 25 20:49:10 node04 systemd[1]: ceph-osd@75.service: Failed with result 'signal'.
Apr 25 20:49:10 node04 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20block.db.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20block.db.device appeared twice with different sysfs paths /sys/devices/pci0000:b2/0000:b2:00.0/0000:b3:00.0/host0/target0:0:3/0:0:3:7/block/sdx/sdx7 and /sys/devices/pci0000:b2/0000:b2:00.$
Apr 25 20:49:10 node04 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20block.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20block.device appeared twice with different sysfs paths /sys/devices/pci0000:b2/0000:b2:00.0/0000:b3:00.0/host0/target0:0:1/0:0:1:4/block/sde/sde2 and /sys/devices/pci0000:b2/0000:b2:00.0/0000$
Apr 25 20:49:10 node04 kernel: [1476788.373922] libceph: osd75 down
Apr 25 20:49:10 node04 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20block.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20block.device appeared twice with different sysfs paths /sys/devices/pci0000:b2/0000:b2:00.0/0000:b3:00.0/host0/target0:0:1/0:0:1:4/block/sde/sde2 and /sys/devices/pci0000:b2/0000:b2:00.0/0000$
Apr 25 20:49:30 node04 systemd[1]: ceph-osd@75.service: Service hold-off time over, scheduling restart.
Apr 25 20:49:30 node04 systemd[1]: Stopped Ceph object storage daemon osd.75.
Apr 25 20:49:30 node04 systemd[1]: Starting Ceph object storage daemon osd.75...
Apr 25 20:49:30 node04 systemd[1]: Started Ceph object storage daemon osd.75.
Apr 25 20:49:30 node04 ceph-osd[293686]: starting osd.75 at - osd_data /var/lib/ceph/osd/ps-cl01-75 /var/lib/ceph/osd/ps-cl01-75/journal
Apr 25 20:49:30 node04 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20block.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20block.device appeared twice with different sysfs paths /sys/devices/pci0000:b2/0000:b2:00.0/0000:b3:00.0/host0/target0:0:1/0:0:1:4/block/sde/sde2 and /sys/devices/pci0000:b2/0000:b2:00.0/0000$
Apr 25 20:49:39 node04 ceph-osd[293686]: 2018-04-25 20:49:39.183383 7fe156c97e00 -1 osd.75 26127 log_to_monitors {default=true}
Apr 25 20:49:39 node04 kernel: [1476817.047380] libceph: osd75 up
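In case it is useful, the same crash can be pulled straight from the journal on node04; a minimal sketch (adjust the OSD id and time window as needed):

# segfault and backtrace for the affected OSD around the crash time
journalctl -u ceph-osd@75 --since "2018-04-25 20:48" --until "2018-04-25 20:51"

# confirm the OSD rejoined and the cluster is healthy again
ceph osd tree | grep 'osd.75'
ceph -s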
admin
2,930 Posts
April 25, 2018, 9:56 pm
It is a Ceph issue:
https://tracker.ceph.com/issues/21259
The logs match the 12.2.4 errors reported here:
https://tracker.ceph.com/issues/21259#note-24
As stated there, it seems to be fixed in 12.2.5, which was released just 2 days ago, but please do not upgrade yourself since we patch the sources to include VMware-specific code.
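If you want to double check which build your OSDs are actually running, a quick way (available since Luminous; osd.75 is just the OSD from your log):

# Ceph release reported by every daemon in the cluster
ceph versions

# or ask a single OSD directly
ceph tell osd.75 version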
Last edited on April 25, 2018, 9:57 pm by admin · #2
BonsaiJoe
53 Posts
April 26, 2018, 9:40 am
Thanks for the fast reply ... any idea when you can provide a patched version?
admin
2,930 Posts
April 27, 2018, 5:22 pm
It is hard for us to do an update for just this issue. It was labeled as minor severity by Ceph, and it takes us some effort, such as a complete test cycle, to do a release.
If you see this again or think it is high severity, do let us know.
Last edited on April 27, 2018, 5:22 pm by admin · #4
BonsaiJoe
53 Posts
May 2, 2018, 2:11 pm
Today 12 of our 100 OSDs crashed in parallel on 3 of the 5 nodes; recovery took approx. 3 minutes.
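A rough way to see which OSDs were marked failed and when they came back, pulled from the cluster log (assuming the default log location for our cluster name ps-cl01):

# monitor entries for OSDs being failed and booting again during the incident
grep -E 'osd\.[0-9]+ (failed|boot)' /var/log/ceph/ps-cl01.log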
Last edited on May 2, 2018, 2:39 pm by BonsaiJoe · #5
BonsaiJoe
53 Posts
May 2, 2018, 7:34 pm
After some more investigation we found that this looks like another bug, this time in the NIC driver:
kern.log shows:
May 2 15:17:04 node03 kernel: [695525.704075] i40e 0000:17:00.1: TX driver issue detected, PF reset issued
May 2 15:17:04 node03 kernel: [695525.835170] i40e 0000:17:00.1: i40e_ptp_init: PTP not supported on eth3
May 2 15:17:04 node03 kernel: [695525.911684] i40e 0000:17:00.1 eth3: speed changed to 0 for port eth3
May 2 15:17:04 node03 kernel: [695525.913112] bond1: link status up again after 0 ms for interface eth3
Maybe it's this bug:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1700834
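For reference, the driver and firmware in use and the state of the bond can be checked like this (eth3 and bond1 are the interfaces from the log above):

# i40e driver and firmware version on the affected port
ethtool -i eth3

# current state of the bond and its slaves
cat /proc/net/bonding/bond1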
Do I see this right: because one NIC was lost from the bond, Ceph lost the connection to the cluster for some seconds and reported the OSDs as down?
The strange thing is that Ceph only starts to set the OSDs down 26 sec after the bond had recovered ... any idea?
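In case it matters, these are the heartbeat settings involved, read from an OSD admin socket (osd.0 is just an example id; we appear to be on the defaults):

# grace period before peers report an OSD as failed (default 20 s, matches the "grace 20.000000" in the log below)
ceph daemon osd.0 config get osd_heartbeat_grace

# interval between OSD heartbeat pings (default 6 s)
ceph daemon osd.0 config get osd_heartbeat_interval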
From syslog (cleared of hundreds of "heartbeat_check: no reply from xx" messages):
May 2 15:17:01 node03 CRON[4152775]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
May 2 15:17:04 node03 kernel: [695525.704075] i40e 0000:17:00.1: TX driver issue detected, PF reset issued
May 2 15:17:04 node03 kernel: [695525.835170] i40e 0000:17:00.1: i40e_ptp_init: PTP not supported on eth3
May 2 15:17:04 node03 kernel: [695525.911684] i40e 0000:17:00.1 eth3: speed changed to 0 for port eth3
May 2 15:17:04 node03 kernel: [695525.913112] bond1: link status up again after 0 ms for interface eth3
May 2 15:17:16 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 2.500179394s
May 2 15:17:19 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 4.96820263s
May 2 15:17:21 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 7.45790572s
May 2 15:17:24 node03 consul[3360]: raft: aborting pipeline replication to peer {Voter 192.168.42.12:8300 192.168.42.12:8300}
May 2 15:17:24 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.11:8300: read tcp 192.168.42.13:37038->192.168.42.11:8300: i/o timeout
May 2 15:17:24 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: read tcp 192.168.42.13:55244->192.168.42.12:8300: i/o timeout
May 2 15:17:26 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 2.50027097s
May 2 15:17:29 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 4.984897637s
May 2 15:17:31 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 7.420438172s
May 2 15:17:32 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:34 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:35 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.11:8300: dial tcp 192.168.42.11:8300: i/o timeout
May 2 15:17:35 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:38 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:44 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:46 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:46 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:47 node03 consul[3360]: yamux: keepalive failed: i/o deadline reached
May 2 15:17:47 node03 consul[3360]: consul.rpc: multiplex conn accept failed: keepalive timeout from=192.168.42.12:55674
May 2 15:17:51 node03 consul[3360]: memberlist: Push/Pull with node02 failed: dial tcp 192.168.42.12:8301: i/o timeout
May 2 15:17:53 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:54 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:57 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:01 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:18:04 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:04 node03 consul[3360]: raft: peer {Voter 192.168.42.12:8300 192.168.42.12:8300} has newer term, stopping replication
May 2 15:18:04 node03 consul[3360]: raft: Node at 192.168.42.13:8300 [Follower] entering Follower state (Leader: "")
May 2 15:18:04 node03 consul[3360]: consul: cluster leadership lost
May 2 15:18:04 node03 consul[3360]: raft: aborting pipeline replication to peer {Voter 192.168.42.11:8300 192.168.42.11:8300}
May 2 15:18:04 node03 consul[3360]: consul.coordinate: Batch update failed: node is not the leader
May 2 15:18:08 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:09 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since our last index is greater (3213435, 3213428)
May 2 15:18:13 node03 consul[3360]: http: Request GET /v1/kv/PetaSAN/Services/ClusterLeader?index=2620988&wait=20s, error: No cluster leader from=127.0.0.1:53994
May 2 15:18:13 node03 consul[3360]: raft: Heartbeat timeout from "" reached, starting election
May 2 15:18:13 node03 consul[3360]: raft: Node at 192.168.42.13:8300 [Candidate] entering Candidate state in term 184
May 2 15:18:13 node03 consul[3360]: raft: Election won. Tally: 2
May 2 15:18:13 node03 consul[3360]: raft: Node at 192.168.42.13:8300 [Leader] entering Leader state
May 2 15:18:13 node03 consul[3360]: raft: Added peer 192.168.42.11:8300, starting replication
May 2 15:18:13 node03 consul[3360]: raft: Added peer 192.168.42.12:8300, starting replication
May 2 15:18:13 node03 consul[3360]: consul: cluster leadership acquired
May 2 15:18:13 node03 consul[3360]: consul: New leader elected: node03
May 2 15:18:13 node03 consul[3360]: raft: pipelining replication to peer {Voter 192.168.42.11:8300 192.168.42.11:8300}
May 2 15:18:23 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:23 node03 consul[3360]: raft: AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300} rejected, sending older logs (next: 3213429)
May 2 15:18:23 node03 consul[3360]: raft: pipelining replication to peer {Voter 192.168.42.12:8300 192.168.42.12:8300}
from ps-cl01.log
2018-05-02 15:17:30.689945 mon.node01 mon.0 192.168.42.11:6789/0 231117 : cluster [INF] osd.0 failed (root=default,host=node03) (2 reporters from different host after 20.000325 >= grace 20.000000)
2018-05-02 15:17:30.690389 mon.node01 mon.0 192.168.42.11:6789/0 231119 : cluster [INF] osd.4 failed (root=default,host=node03) (2 reporters from different host after 20.000540 >= grace 20.000000)
2018-05-02 15:17:30.690758 mon.node01 mon.0 192.168.42.11:6789/0 231121 : cluster [INF] osd.6 failed (root=default,host=node03) (2 reporters from different host after 20.000898 >= grace 20.000000)
2018-05-02 15:17:30.691040 mon.node01 mon.0 192.168.42.11:6789/0 231123 : cluster [INF] osd.8 failed (root=default,host=node03) (2 reporters from different host after 20.001124 >= grace 20.000000)
2018-05-02 15:17:30.691317 mon.node01 mon.0 192.168.42.11:6789/0 231125 : cluster [INF] osd.10 failed (root=default,host=node03) (2 reporters from different host after 20.001331 >= grace 20.000000)
2018-05-02 15:17:30.691936 mon.node01 mon.0 192.168.42.11:6789/0 231129 : cluster [INF] osd.15 failed (root=default,host=node03) (2 reporters from different host after 20.001689 >= grace 20.000000)
2018-05-02 15:17:30.728612 mon.node01 mon.0 192.168.42.11:6789/0 231134 : cluster [INF] osd.5 failed (root=default,host=node03) (2 reporters from different host after 20.000418 >= grace 20.000000)
2018-05-02 15:17:30.728887 mon.node01 mon.0 192.168.42.11:6789/0 231136 : cluster [INF] osd.7 failed (root=default,host=node03) (2 reporters from different host after 20.000640 >= grace 20.000000)
2018-05-02 15:17:30.729478 mon.node01 mon.0 192.168.42.11:6789/0 231140 : cluster [INF] osd.11 failed (root=default,host=node03) (2 reporters from different host after 20.000955 >= grace 20.000000)
2018-05-02 15:17:31.071749 mon.node01 mon.0 192.168.42.11:6789/0 231188 : cluster [INF] osd.9 failed (root=default,host=node03) (2 reporters from different host after 20.000476 >= grace 20.000000)
2018-05-02 15:17:31.072294 mon.node01 mon.0 192.168.42.11:6789/0 231192 : cluster [INF] osd.14 failed (root=default,host=node03) (2 reporters from different host after 20.000645 >= grace 20.000000)
2018-05-02 15:17:31.072648 mon.node01 mon.0 192.168.42.11:6789/0 231195 : cluster [INF] osd.17 failed (root=default,host=node03) (2 reporters from different host after 20.000775 >= grace 20.000000)
2018-05-02 15:17:31.073013 mon.node01 mon.0 192.168.42.11:6789/0 231198 : cluster [INF] osd.19 failed (root=default,host=node03) (2 reporters from different host after 20.000923 >= grace 20.000000)
2018-05-02 15:17:31.145738 mon.node01 mon.0 192.168.42.11:6789/0 231201 : cluster [INF] osd.2 failed (root=default,host=node03) (2 reporters from different host after 20.000246 >= grace 20.000000)
2018-05-02 15:17:31.276822 mon.node01 mon.0 192.168.42.11:6789/0 231213 : cluster [WRN] Health check failed: 14 osds down (OSD_DOWN)
2018-05-02 15:17:31.297002 mon.node01 mon.0 192.168.42.11:6789/0 231215 : cluster [INF] osd.1 failed (root=default,host=node03) (2 reporters from different host after 20.066715 >= grace 20.000000)
2018-05-02 15:17:31.297100 mon.node01 mon.0 192.168.42.11:6789/0 231217 : cluster [INF] osd.3 failed (root=default,host=node03) (2 reporters from different host after 20.066565 >= grace 20.000000)
2018-05-02 15:17:31.297290 mon.node01 mon.0 192.168.42.11:6789/0 231219 : cluster [INF] osd.12 failed (root=default,host=node03) (2 reporters from different host after 20.066392 >= grace 20.000000)
2018-05-02 15:17:31.297562 mon.node01 mon.0 192.168.42.11:6789/0 231222 : cluster [INF] osd.13 failed (root=default,host=node03) (2 reporters from different host after 20.010716 >= grace 20.000000)
2018-05-02 15:17:31.297644 mon.node01 mon.0 192.168.42.11:6789/0 231224 : cluster [INF] osd.16 failed (root=default,host=node03) (2 reporters from different host after 20.010419 >= grace 20.000000)
2018-05-02 15:17:31.342169 mon.node01 mon.0 192.168.42.11:6789/0 231245 : cluster [INF] osd.18 failed (root=default,host=node03) (2 reporters from different host after 20.003455 >= grace 20.000000)
2018-05-02 15:17:31.900209 osd.17 osd.17 192.168.42.13:6814/4554 347 : cluster [WRN] Monitor daemon marked osd.17 down, but it is still running
2018-05-02 15:17:32.347387 mon.node01 mon.0 192.168.42.11:6789/0 231324 : cluster [INF] osd.5 192.168.42.13:6808/4150 boot
2018-05-02 15:17:32.122997 osd.14 osd.14 192.168.42.13:6812/4444 351 : cluster [WRN] Monitor daemon marked osd.14 down, but it is still running
2018-05-02 15:17:33.417134 mon.node01 mon.0 192.168.42.11:6789/0 231341 : cluster [INF] osd.19 192.168.42.13:6838/7714 boot
2018-05-02 15:17:32.150394 osd.15 osd.15 192.168.42.13:6818/4787 385 : cluster [WRN] Monitor daemon marked osd.15 down, but it is still running
2018-05-02 15:17:34.449728 mon.node01 mon.0 192.168.42.11:6789/0 231360 : cluster [WRN] Health check failed: Reduced data availability: 182 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:34.449810 mon.node01 mon.0 192.168.42.11:6789/0 231361 : cluster [WRN] Health check failed: Degraded data redundancy: 266928/11883735 objects degraded (2.246%), 265 pgs unclean, 276 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:34.449892 mon.node01 mon.0 192.168.42.11:6789/0 231362 : cluster [WRN] Health check failed: 6 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:34.462178 mon.node01 mon.0 192.168.42.11:6789/0 231363 : cluster [INF] osd.14 192.168.42.13:6812/4444 boot
2018-05-02 15:17:34.462279 mon.node01 mon.0 192.168.42.11:6789/0 231364 : cluster [INF] osd.15 192.168.42.13:6818/4787 boot
2018-05-02 15:17:34.462350 mon.node01 mon.0 192.168.42.11:6789/0 231365 : cluster [INF] osd.1 192.168.42.13:6832/7029 boot
2018-05-02 15:17:34.462398 mon.node01 mon.0 192.168.42.11:6789/0 231366 : cluster [INF] osd.11 192.168.42.13:6804/3915 boot
2018-05-02 15:17:34.462440 mon.node01 mon.0 192.168.42.11:6789/0 231367 : cluster [INF] osd.9 192.168.42.13:6830/6607 boot
2018-05-02 15:17:34.462483 mon.node01 mon.0 192.168.42.11:6789/0 231368 : cluster [INF] osd.0 192.168.42.13:6828/5616 boot
2018-05-02 15:17:34.526923 osd.13 osd.13 192.168.42.13:6820/4934 391 : cluster [WRN] Monitor daemon marked osd.13 down, but it is still running
2018-05-02 15:17:31.950776 osd.9 osd.9 192.168.42.13:6830/6607 439 : cluster [WRN] Monitor daemon marked osd.9 down, but it is still running
2018-05-02 15:17:35.528042 mon.node01 mon.0 192.168.42.11:6789/0 231385 : cluster [INF] osd.13 192.168.42.13:6820/4934 boot
2018-05-02 15:17:35.528149 mon.node01 mon.0 192.168.42.11:6789/0 231386 : cluster [INF] osd.4 192.168.42.13:6816/4672 boot
2018-05-02 15:17:36.561053 mon.node01 mon.0 192.168.42.11:6789/0 231403 : cluster [WRN] Health check update: 9 osds down (OSD_DOWN)
2018-05-02 15:17:36.577048 mon.node01 mon.0 192.168.42.11:6789/0 231404 : cluster [INF] osd.12 192.168.42.13:6824/5233 boot
2018-05-02 15:17:31.901987 osd.7 osd.7 192.168.42.13:6834/7201 353 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running
2018-05-02 15:17:33.112422 osd.3 osd.3 192.168.42.13:6802/3776 433 : cluster [WRN] Monitor daemon marked osd.3 down, but it is still running
2018-05-02 15:17:33.454019 osd.1 osd.1 192.168.42.13:6832/7029 453 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running
2018-05-02 15:17:38.145518 mon.node01 mon.0 192.168.42.11:6789/0 231444 : cluster [INF] osd.16 192.168.42.13:6822/5084 boot
2018-05-02 15:17:31.878606 osd.6 osd.6 192.168.42.13:6806/4031 379 : cluster [WRN] Monitor daemon marked osd.6 down, but it is still running
2018-05-02 15:17:37.074577 osd.16 osd.16 192.168.42.13:6822/5084 343 : cluster [WRN] Monitor daemon marked osd.16 down, but it is still running
2018-05-02 15:17:39.450525 mon.node01 mon.0 192.168.42.11:6789/0 231512 : cluster [WRN] Health check update: Reduced data availability: 156 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:39.450603 mon.node01 mon.0 192.168.42.11:6789/0 231513 : cluster [WRN] Health check update: Degraded data redundancy: 780173/11883735 objects degraded (6.565%), 409 pgs unclean, 826 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:39.450648 mon.node01 mon.0 192.168.42.11:6789/0 231514 : cluster [WRN] Health check update: 20 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:33.481474 osd.0 osd.0 192.168.42.13:6828/5616 375 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running
2018-05-02 15:17:34.835065 osd.4 osd.4 192.168.42.13:6816/4672 398 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running
2018-05-02 15:17:31.758601 osd.11 osd.11 192.168.42.13:6804/3915 325 : cluster [WRN] Monitor daemon marked osd.11 down, but it is still running
2018-05-02 15:17:31.854285 osd.5 osd.5 192.168.42.13:6808/4150 339 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running
2018-05-02 15:17:31.927593 osd.8 osd.8 192.168.42.13:6826/5418 327 : cluster [WRN] Monitor daemon marked osd.8 down, but it is still running
2018-05-02 15:17:32.264540 osd.2 osd.2 192.168.42.13:6810/4329 373 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running
2018-05-02 15:17:32.504532 osd.19 osd.19 192.168.42.13:6838/7714 367 : cluster [WRN] Monitor daemon marked osd.19 down, but it is still running
2018-05-02 15:17:35.304926 osd.12 osd.12 192.168.42.13:6824/5233 409 : cluster [WRN] Monitor daemon marked osd.12 down, but it is still running
2018-05-02 15:17:41.201037 mon.node01 mon.0 192.168.42.11:6789/0 231515 : cluster [INF] osd.10 192.168.42.13:6836/7465 boot
2018-05-02 15:17:37.224788 osd.18 osd.18 192.168.42.13:6800/3571 353 : cluster [WRN] Monitor daemon marked osd.18 down, but it is still running
2018-05-02 15:17:44.451567 mon.node01 mon.0 192.168.42.11:6789/0 231566 : cluster [WRN] Health check update: 7 osds down (OSD_DOWN)
2018-05-02 15:17:44.451681 mon.node01 mon.0 192.168.42.11:6789/0 231567 : cluster [WRN] Health check update: Reduced data availability: 119 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:44.451740 mon.node01 mon.0 192.168.42.11:6789/0 231568 : cluster [WRN] Health check update: Degraded data redundancy: 567324/11883735 objects degraded (4.774%), 376 pgs unclean, 598 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:44.451787 mon.node01 mon.0 192.168.42.11:6789/0 231569 : cluster [WRN] Health check update: 8 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:39.947717 osd.10 osd.10 192.168.42.13:6836/7465 363 : cluster [WRN] Monitor daemon marked osd.10 down, but it is still running
2018-05-02 15:17:49.452743 mon.node01 mon.0 192.168.42.11:6789/0 231571 : cluster [WRN] Health check update: Reduced data availability: 124 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:49.452802 mon.node01 mon.0 192.168.42.11:6789/0 231572 : cluster [WRN] Health check update: Degraded data redundancy: 545427/11883735 objects degraded (4.590%), 400 pgs unclean, 569 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:49.452830 mon.node01 mon.0 192.168.42.11:6789/0 231573 : cluster [WRN] Health check update: 5 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:45.648204 osd.22 osd.22 192.168.42.11:6818/5595 347 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.352068 secs
2018-05-02 15:17:45.648216 osd.22 osd.22 192.168.42.11:6818/5595 348 : cluster [WRN] slow request 30.352068 seconds old, received at 2018-05-02 15:17:15.296043: osd_op(client.6526841.0:199030532 1.f58 1:1afd369a:::rbd_data.51ffd188bfb19.0000000000a9a36d:head [set-alloc-hint object_size 4194304 write_size 4194304,write 931840~1536] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:48.674430 osd.57 osd.57 192.168.42.12:6824/5788 369 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.028853 secs
2018-05-02 15:17:48.674449 osd.57 osd.57 192.168.42.12:6824/5788 370 : cluster [WRN] slow request 30.028853 seconds old, received at 2018-05-02 15:17:18.645456: osd_op(client.6526841.0:199030675 1.580 1:01afed70:::rbd_data.51ffd188bfb19.00000000000920ad:head [set-alloc-hint object_size 4194304 write_size 4194304,write 2617344~3584] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:46.337758 osd.37 osd.37 192.168.42.11:6812/4603 313 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.840941 secs
2018-05-02 15:17:46.337771 osd.37 osd.37 192.168.42.11:6812/4603 314 : cluster [WRN] slow request 30.840941 seconds old, received at 2018-05-02 15:17:15.496724: osd_op(client.5168842.0:565598091 1.cd6 1:6b3ac828:::rbd_data.51ffd188bfb19.0000000000a994d6:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1515520~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:45.668260 osd.51 osd.51 192.168.42.12:6810/4338 363 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.950745 secs
2018-05-02 15:17:45.668273 osd.51 osd.51 192.168.42.12:6810/4338 364 : cluster [WRN] slow request 30.950745 seconds old, received at 2018-05-02 15:17:14.717398: osd_op(client.6596773.0:183937335 1.a33 1:cc5b61d1:::rbd_data.1e815e6eb1d34.000000000056c0b3:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1323520~32768] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:52.669812 osd.51 osd.51 192.168.42.12:6810/4338 365 : cluster [WRN] 3 slow requests, 2 included below; oldest blocked for > 37.952326 secs
2018-05-02 15:17:52.669821 osd.51 osd.51 192.168.42.12:6810/4338 366 : cluster [WRN] slow request 30.265711 seconds old, received at 2018-05-02 15:17:22.404014: osd_op(client.6596773.0:183937651 1.aab 1:d557b9bd:::rbd_data.1e815e6eb1d34.00000000005effb8:head [set-alloc-hint object_size 4194304 write_size 4194304,write 3551232~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:52.669853 osd.51 osd.51 192.168.42.12:6810/4338 367 : cluster [WRN] slow request 30.259918 seconds old, received at 2018-05-02 15:17:22.409807: osd_op(client.6526841.0:199030712 1.227 1:e44f9365:::rbd_data.51ffd188bfb19.000000000041d7c0:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1818624~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:54.499661 mon.node01 mon.0 192.168.42.11:6789/0 231630 : cluster [WRN] Health check update: 5 osds down (OSD_DOWN)
2018-05-02 15:17:54.502279 mon.node01 mon.0 192.168.42.11:6789/0 231631 : cluster [WRN] Health check update: Reduced data availability: 152 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:54.502352 mon.node01 mon.0 192.168.42.11:6789/0 231632 : cluster [WRN] Health check update: Degraded data redundancy: 545424/11883735 objects degraded (4.590%), 470 pgs unclean, 564 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:54.502394 mon.node01 mon.0 192.168.42.11:6789/0 231633 : cluster [WRN] Health check update: 9 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:54.511364 mon.node01 mon.0 192.168.42.11:6789/0 231634 : cluster [INF] osd.7 192.168.42.13:6834/7201 boot
2018-05-02 15:17:54.511419 mon.node01 mon.0 192.168.42.11:6789/0 231635 : cluster [INF] osd.2 192.168.42.13:6810/4329 boot
2018-05-02 15:17:55.560707 mon.node01 mon.0 192.168.42.11:6789/0 231772 : cluster [INF] osd.3 192.168.42.13:6802/3776 boot
2018-05-02 15:17:58.644882 mon.node01 mon.0 192.168.42.11:6789/0 232360 : cluster [INF] osd.18 192.168.42.13:6800/3571 boot
2018-05-02 15:17:58.997648 mon.node01 mon.0 192.168.42.11:6789/0 232419 : cluster [WRN] overall HEALTH_WARN 3 osds down; Reduced data availability: 187 pgs peering; Degraded data redundancy: 393287/11883735 objects degraded (3.309%), 464 pgs unclean, 414 pgs degraded; 20 slow requests are blocked > 32 sec
2018-05-02 15:17:59.502999 mon.node01 mon.0 192.168.42.11:6789/0 232447 : cluster [WRN] Health check update: 3 osds down (OSD_DOWN)
2018-05-02 15:17:59.503131 mon.node01 mon.0 192.168.42.11:6789/0 232448 : cluster [WRN] Health check update: Reduced data availability: 187 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:59.503190 mon.node01 mon.0 192.168.42.11:6789/0 232449 : cluster [WRN] Health check update: Degraded data redundancy: 393287/11883735 objects degraded (3.309%), 464 pgs unclean, 414 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:59.503246 mon.node01 mon.0 192.168.42.11:6789/0 232450 : cluster [WRN] Health check update: 20 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:04.507977 mon.node01 mon.0 192.168.42.11:6789/0 232590 : cluster [WRN] Health check update: Reduced data availability: 211 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:04.508058 mon.node01 mon.0 192.168.42.11:6789/0 232591 : cluster [WRN] Health check update: Degraded data redundancy: 242695/11883735 objects degraded (2.042%), 477 pgs unclean, 252 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:04.508106 mon.node01 mon.0 192.168.42.11:6789/0 232592 : cluster [WRN] Health check update: 23 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:05.877862 osd.61 osd.61 192.168.42.14:6814/4904 1237 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.173325 secs
2018-05-02 15:18:05.877874 osd.61 osd.61 192.168.42.14:6814/4904 1238 : cluster [WRN] slow request 30.173325 seconds old, received at 2018-05-02 15:17:35.704479: osd_op(client.6596773.0:183937862 1.c0e 1.d2861c0e (undecoded) ondisk+write+known_if_redirected e26136) currently waiting for peered
2018-05-02 15:18:02.797168 osd.58 osd.58 192.168.42.12:6804/3828 331 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.363481 secs
2018-05-02 15:18:02.797179 osd.58 osd.58 192.168.42.12:6804/3828 332 : cluster [WRN] slow request 30.363481 seconds old, received at 2018-05-02 15:17:32.433619: osd_op(client.6596773.0:183937757 1.a86 1.a57fca86 (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:09.513538 mon.node01 mon.0 192.168.42.11:6789/0 232608 : cluster [WRN] Health check update: Reduced data availability: 252 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:09.513599 mon.node01 mon.0 192.168.42.11:6789/0 232609 : cluster [WRN] Health check update: Degraded data redundancy: 239770/11883735 objects degraded (2.018%), 568 pgs unclean, 248 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:09.513636 mon.node01 mon.0 192.168.42.11:6789/0 232610 : cluster [WRN] Health check update: 25 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:02.817820 osd.77 osd.77 192.168.42.14:6828/6313 1117 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.394863 secs
2018-05-02 15:18:02.817831 osd.77 osd.77 192.168.42.14:6828/6313 1118 : cluster [WRN] slow request 30.394863 seconds old, received at 2018-05-02 15:17:32.422898: osd_op(client.6596773.0:183937750 1.6df 1.cf9bc6df (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:02.476590 osd.5 osd.5 192.168.42.13:6808/4150 341 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.064270 secs
2018-05-02 15:18:02.476601 osd.5 osd.5 192.168.42.13:6808/4150 342 : cluster [WRN] slow request 30.064270 seconds old, received at 2018-05-02 15:17:32.412264: osd_op(client.6596773.0:183937421 1.116 1.1c1d4116 (undecoded) ondisk+retry+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:02.651760 osd.22 osd.22 192.168.42.11:6818/5595 349 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 47.355668 secs
2018-05-02 15:18:02.651779 osd.22 osd.22 192.168.42.11:6818/5595 350 : cluster [WRN] slow request 30.174091 seconds old, received at 2018-05-02 15:17:32.477620: osd_op(client.5168842.0:565598316 1.f58 1.596cbf58 (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:02.797120 osd.97 osd.97 192.168.42.15:6834/31139 554 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.381988 secs
2018-05-02 15:18:02.797141 osd.97 osd.97 192.168.42.15:6834/31139 555 : cluster [WRN] slow request 30.381988 seconds old, received at 2018-05-02 15:17:32.415062: osd_op(client.6526841.0:199030755 1.569 1.b0c48569 (undecoded) ondisk+read+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:04.413088 osd.19 osd.19 192.168.42.13:6838/7714 369 : cluster [WRN] 4 slow requests, 4 included below; oldest blocked for > 30.948256 secs
2018-05-02 15:18:04.413100 osd.19 osd.19 192.168.42.13:6838/7714 370 : cluster [WRN] slow request 30.484589 seconds old, received at 2018-05-02 15:17:33.928388: osd_op(client.6526841.0:199030843 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:04.413107 osd.19 osd.19 192.168.42.13:6838/7714 371 : cluster [WRN] slow request 30.465765 seconds old, received at 2018-05-02 15:17:33.947212: osd_op(client.6596773.0:183937784 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:04.413114 osd.19 osd.19 192.168.42.13:6838/7714 372 : cluster [WRN] slow request 30.425245 seconds old, received at 2018-05-02 15:17:33.987732: osd_op(client.6596773.0:183937785 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:04.413119 osd.19 osd.19 192.168.42.13:6838/7714 373 : cluster [WRN] slow request 30.948256 seconds old, received at 2018-05-02 15:17:33.464721: osd_op(client.6526841.0:199030727 1.71e 1.5c1ee71e (undecoded) ondisk+retry+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:05.413744 osd.19 osd.19 192.168.42.13:6838/7714 374 : cluster [WRN] 7 slow requests, 3 included below; oldest blocked for > 31.948951 secs
2018-05-02 15:18:05.413755 osd.19 osd.19 192.168.42.13:6838/7714 375 : cluster [WRN] slow request 30.999699 seconds old, received at 2018-05-02 15:17:34.413974: osd_op(client.6526841.0:199030846 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:05.413760 osd.19 osd.19 192.168.42.13:6838/7714 376 : cluster [WRN] slow request 30.926653 seconds old, received at 2018-05-02 15:17:34.487019: osd_op(client.6603196.0:182818149 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:05.413769 osd.19 osd.19 192.168.42.13:6838/7714 377 : cluster [WRN] slow request 30.914518 seconds old, received at 2018-05-02 15:17:34.499154: osd_op(client.6526841.0:199030848 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:06.284732 osd.4 osd.4 192.168.42.13:6816/4672 400 : cluster [WRN] 2 slow requests, 2 included below; oldest blocked for > 30.629189 secs
2018-05-02 15:18:06.284743 osd.4 osd.4 192.168.42.13:6816/4672 401 : cluster [WRN] slow request 30.629189 seconds old, received at 2018-05-02 15:17:35.655473: osd_op(client.6596773.0:183937792 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26136) currently waiting for peered
2018-05-02 15:18:06.284756 osd.4 osd.4 192.168.42.13:6816/4672 402 : cluster [WRN] slow request 30.260563 seconds old, received at 2018-05-02 15:17:36.024099: osd_op(client.6596773.0:183937865 1.2dd 1.8dea32dd (undecoded) ondisk+read+known_if_redirected e26136) currently waiting for peered
2018-05-02 15:18:08.285983 osd.4 osd.4 192.168.42.13:6816/4672 403 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 32.630454 secs
2018-05-02 15:18:08.285995 osd.4 osd.4 192.168.42.13:6816/4672 404 : cluster [WRN] slow request 30.278627 seconds old, received at 2018-05-02 15:17:38.007300: osd_op(client.6596773.0:183937868 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26138) currently waiting for peered
2018-05-02 15:18:03.279070 osd.82 osd.82 192.168.42.15:6804/8697 409 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.845140 secs
2018-05-02 15:18:03.279083 osd.82 osd.82 192.168.42.15:6804/8697 410 : cluster [WRN] slow request 30.845140 seconds old, received at 2018-05-02 15:17:32.433862: osd_op(client.6526841.0:199030770 1.1b9 1.2f4701b9 (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:13.808180 mon.node01 mon.0 192.168.42.11:6789/0 232619 : cluster [WRN] Health check update: 2 osds down (OSD_DOWN)
2018-05-02 15:18:13.825306 mon.node01 mon.0 192.168.42.11:6789/0 232620 : cluster [INF] osd.6 192.168.42.13:6806/4031 boot
2018-05-02 15:18:14.518576 mon.node01 mon.0 192.168.42.11:6789/0 232641 : cluster [WRN] Health check update: Reduced data availability: 311 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:14.518655 mon.node01 mon.0 192.168.42.11:6789/0 232642 : cluster [WRN] Health check update: Degraded data redundancy: 239770/11883735 objects degraded (2.018%), 706 pgs unclean, 248 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:14.518708 mon.node01 mon.0 192.168.42.11:6789/0 232643 : cluster [WRN] Health check update: 27 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:17.801511 osd.58 osd.58 192.168.42.12:6804/3828 333 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 45.367834 secs
2018-05-02 15:18:17.801519 osd.58 osd.58 192.168.42.12:6804/3828 334 : cluster [WRN] slow request 30.183339 seconds old, received at 2018-05-02 15:17:47.618114: osd_op(client.6603196.0:182818173 1.a86 1.a57fca86 (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:19.525411 mon.node01 mon.0 192.168.42.11:6789/0 232881 : cluster [WRN] Health check update: Reduced data availability: 358 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:19.525473 mon.node01 mon.0 192.168.42.11:6789/0 232882 : cluster [WRN] Health check update: Degraded data redundancy: 188544/11883735 objects degraded (1.587%), 725 pgs unclean, 199 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:19.525510 mon.node01 mon.0 192.168.42.11:6789/0 232883 : cluster [WRN] Health check update: 34 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:19.525823 mon.node01 mon.0 192.168.42.11:6789/0 232884 : cluster [INF] osd.0 failed (root=default,host=node03) (4 reporters from different host after 22.185499 >= grace 20.000000)
2018-05-02 15:18:19.526157 mon.node01 mon.0 192.168.42.11:6789/0 232885 : cluster [INF] osd.1 failed (root=default,host=node03) (4 reporters from different host after 20.729223 >= grace 20.000000)
2018-05-02 15:18:19.527041 mon.node01 mon.0 192.168.42.11:6789/0 232886 : cluster [INF] osd.5 failed (root=default,host=node03) (4 reporters from different host after 21.722805 >= grace 20.000000)
2018-05-02 15:18:19.527435 mon.node01 mon.0 192.168.42.11:6789/0 232887 : cluster [INF] osd.9 failed (root=default,host=node03) (4 reporters from different host after 20.729151 >= grace 20.000000)
2018-05-02 15:18:19.527946 mon.node01 mon.0 192.168.42.11:6789/0 232888 : cluster [INF] osd.11 failed (root=default,host=node03) (4 reporters from different host after 20.399416 >= grace 20.000000)
2018-05-02 15:18:19.528543 mon.node01 mon.0 192.168.42.11:6789/0 232889 : cluster [INF] osd.14 failed (root=default,host=node03) (4 reporters from different host after 20.729112 >= grace 20.000000)
2018-05-02 15:18:19.528831 mon.node01 mon.0 192.168.42.11:6789/0 232890 : cluster [INF] osd.15 failed (root=default,host=node03) (4 reporters from different host after 20.399139 >= grace 20.000000)
2018-05-02 15:18:19.529181 mon.node01 mon.0 192.168.42.11:6789/0 232891 : cluster [INF] osd.19 failed (root=default,host=node03) (4 reporters from different host after 21.722456 >= grace 20.000000)
2018-05-02 15:18:19.546706 mon.node01 mon.0 192.168.42.11:6789/0 232892 : cluster [WRN] Health check update: 10 osds down (OSD_DOWN)
2018-05-02 15:18:13.289192 osd.4 osd.4 192.168.42.13:6816/4672 405 : cluster [WRN] 4 slow requests, 1 included below; oldest blocked for > 37.633638 secs
2018-05-02 15:18:13.289200 osd.4 osd.4 192.168.42.13:6816/4672 406 : cluster [WRN] slow request 30.273720 seconds old, received at 2018-05-02 15:17:43.015391: osd_op(client.6596773.0:183937876 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26142) currently waiting for peered
2018-05-02 15:18:15.655564 osd.22 osd.22 192.168.42.11:6818/5595 351 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 60.359482 secs
2018-05-02 15:18:15.655573 osd.22 osd.22 192.168.42.11:6818/5595 352 : cluster [WRN] slow request 60.359482 seconds old, received at 2018-05-02 15:17:15.296043: osd_op(client.6526841.0:199030532 1.f58 1:1afd369a:::rbd_data.51ffd188bfb19.0000000000a9a36d:head [set-alloc-hint object_size 4194304 write_size 4194304,write 931840~1536] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:18.292619 osd.4 osd.4 192.168.42.13:6816/4672 407 : cluster [WRN] 5 slow requests, 1 included below; oldest blocked for > 42.637083 secs
2018-05-02 15:18:18.292632 osd.4 osd.4 192.168.42.13:6816/4672 408 : cluster [WRN] slow request 30.269277 seconds old, received at 2018-05-02 15:17:48.023279: osd_op(client.6596773.0:183937882 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:18.656500 osd.22 osd.22 192.168.42.11:6818/5595 353 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 63.360399 secs
2018-05-02 15:18:18.656510 osd.22 osd.22 192.168.42.11:6818/5595 354 : cluster [WRN] slow request 30.693945 seconds old, received at 2018-05-02 15:17:47.962497: osd_op(client.6526841.0:199030863 1.f58 1.596cbf58 (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:18.682212 osd.57 osd.57 192.168.42.12:6824/5788 371 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 60.036720 secs
2018-05-02 15:18:18.682220 osd.57 osd.57 192.168.42.12:6824/5788 372 : cluster [WRN] slow request 60.036720 seconds old, received at 2018-05-02 15:17:18.645456: osd_op(client.6526841.0:199030675 1.580 1:01afed70:::rbd_data.51ffd188bfb19.00000000000920ad:head [set-alloc-hint object_size 4194304 write_size 4194304,write 2617344~3584] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:20.130588 osd.19 osd.19 192.168.42.13:6838/7714 378 : cluster [WRN] Monitor daemon marked osd.19 down, but it is still running
2018-05-02 15:18:20.874752 osd.11 osd.11 192.168.42.13:6804/3915 327 : cluster [WRN] Monitor daemon marked osd.11 down, but it is still running
2018-05-02 15:18:16.345047 osd.37 osd.37 192.168.42.11:6812/4603 315 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 60.848283 secs
2018-05-02 15:18:16.345056 osd.37 osd.37 192.168.42.11:6812/4603 316 : cluster [WRN] slow request 60.848283 seconds old, received at 2018-05-02 15:17:15.496724: osd_op(client.5168842.0:565598091 1.cd6 1:6b3ac828:::rbd_data.51ffd188bfb19.0000000000a994d6:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1515520~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:21.933793 mon.node01 mon.0 192.168.42.11:6789/0 232999 : cluster [INF] osd.14 192.168.42.13:6812/4444 boot
2018-05-02 15:18:21.933874 mon.node01 mon.0 192.168.42.11:6789/0 233000 : cluster [INF] osd.15 192.168.42.13:6818/4787 boot
2018-05-02 15:18:21.933989 mon.node01 mon.0 192.168.42.11:6789/0 233001 : cluster [INF] osd.11 192.168.42.13:6804/3915 boot
2018-05-02 15:18:22.993198 mon.node01 mon.0 192.168.42.11:6789/0 233024 : cluster [INF] osd.9 192.168.42.13:6830/6607 boot
2018-05-02 15:18:22.995855 mon.node01 mon.0 192.168.42.11:6789/0 233025 : cluster [INF] osd.0 192.168.42.13:6828/5616 boot
2018-05-02 15:18:22.995930 mon.node01 mon.0 192.168.42.11:6789/0 233026 : cluster [INF] osd.1 192.168.42.13:6832/7029 boot
2018-05-02 15:18:15.676606 osd.51 osd.51 192.168.42.12:6810/4338 368 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 60.959171 secs
2018-05-02 15:18:15.676613 osd.51 osd.51 192.168.42.12:6810/4338 369 : cluster [WRN] slow request 60.959171 seconds old, received at 2018-05-02 15:17:14.717398: osd_op(client.6596773.0:183937335 1.a33 1:cc5b61d1:::rbd_data.1e815e6eb1d34.000000000056c0b3:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1323520~32768] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
After some more investigation, we found that this looks like another bug with the NIC driver:
kern.log shows:
May 2 15:17:04 node03 kernel: [695525.704075] i40e 0000:17:00.1: TX driver issue detected, PF reset issued
May 2 15:17:04 node03 kernel: [695525.835170] i40e 0000:17:00.1: i40e_ptp_init: PTP not supported on eth3
May 2 15:17:04 node03 kernel: [695525.911684] i40e 0000:17:00.1 eth3: speed changed to 0 for port eth3
May 2 15:17:04 node03 kernel: [695525.913112] bond1: link status up again after 0 ms for interface eth3
Maybe it's this bug:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1700834
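For what it's worth, a quick way to check whether a node is running an i40e driver/firmware combination matching that bug report would be something like the following (eth3 is just the port name from the logs above, adjust to your interfaces):
ethtool -i eth3                  # driver name, driver version and NIC firmware version of the bonded port
modinfo i40e | grep -i version   # version of the i40e module available to the kernel
dmesg | grep -i i40e             # any further i40e resets or errors logged since boot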
Do I see this right: because one NIC dropped out of the bond, Ceph lost the connection to the cluster for a few seconds and reported the OSDs as down?
The strange thing is that Ceph only started marking the OSDs down 26 seconds after the bond had recovered ... any idea?
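If I read the mon entries below correctly (e.g. "2 reporters from different host after 20.000325 >= grace 20.000000"), an OSD is only marked down once at least two peers from another host have reported missed heartbeats for longer than the 20 s heartbeat grace, so the down-markings will always trail the actual network hiccup by the grace period plus the heartbeat/reporting interval, which might explain part of that delay. The relevant settings can be checked on a running daemon roughly like this (run each command on the node hosting that daemon; the daemon names are just taken from the logs):
ceph daemon osd.0 config get osd_heartbeat_grace              # how long peers wait before reporting an OSD as failed (20 s in the log entries above)
ceph daemon osd.0 config get osd_heartbeat_interval           # how often OSDs ping their heartbeat peers
ceph daemon mon.node01 config get mon_osd_min_down_reporters  # how many reporters the mon needs before marking an OSD down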
From the syslog (hundreds of "heartbeat_check: no reply from xx" messages removed):
May 2 15:17:01 node03 CRON[4152775]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
May 2 15:17:04 node03 kernel: [695525.704075] i40e 0000:17:00.1: TX driver issue detected, PF reset issued
May 2 15:17:04 node03 kernel: [695525.835170] i40e 0000:17:00.1: i40e_ptp_init: PTP not supported on eth3
May 2 15:17:04 node03 kernel: [695525.911684] i40e 0000:17:00.1 eth3: speed changed to 0 for port eth3
May 2 15:17:04 node03 kernel: [695525.913112] bond1: link status up again after 0 ms for interface eth3
May 2 15:17:16 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 2.500179394s
May 2 15:17:19 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 4.96820263s
May 2 15:17:21 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 7.45790572s
May 2 15:17:24 node03 consul[3360]: raft: aborting pipeline replication to peer {Voter 192.168.42.12:8300 192.168.42.12:8300}
May 2 15:17:24 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.11:8300: read tcp 192.168.42.13:37038->192.168.42.11:8300: i/o timeout
May 2 15:17:24 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: read tcp 192.168.42.13:55244->192.168.42.12:8300: i/o timeout
May 2 15:17:26 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 2.50027097s
May 2 15:17:29 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 4.984897637s
May 2 15:17:31 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 7.420438172s
May 2 15:17:32 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:34 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:35 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.11:8300: dial tcp 192.168.42.11:8300: i/o timeout
May 2 15:17:35 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:38 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:44 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:46 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:46 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:47 node03 consul[3360]: yamux: keepalive failed: i/o deadline reached
May 2 15:17:47 node03 consul[3360]: consul.rpc: multiplex conn accept failed: keepalive timeout from=192.168.42.12:55674
May 2 15:17:51 node03 consul[3360]: memberlist: Push/Pull with node02 failed: dial tcp 192.168.42.12:8301: i/o timeout
May 2 15:17:53 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:54 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:57 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:01 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:18:04 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:04 node03 consul[3360]: raft: peer {Voter 192.168.42.12:8300 192.168.42.12:8300} has newer term, stopping replication
May 2 15:18:04 node03 consul[3360]: raft: Node at 192.168.42.13:8300 [Follower] entering Follower state (Leader: "")
May 2 15:18:04 node03 consul[3360]: consul: cluster leadership lost
May 2 15:18:04 node03 consul[3360]: raft: aborting pipeline replication to peer {Voter 192.168.42.11:8300 192.168.42.11:8300}
May 2 15:18:04 node03 consul[3360]: consul.coordinate: Batch update failed: node is not the leader
May 2 15:18:08 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:09 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since our last index is greater (3213435, 3213428)
May 2 15:18:13 node03 consul[3360]: http: Request GET /v1/kv/PetaSAN/Services/ClusterLeader?index=2620988&wait=20s, error: No cluster leader from=127.0.0.1:53994
May 2 15:18:13 node03 consul[3360]: raft: Heartbeat timeout from "" reached, starting election
May 2 15:18:13 node03 consul[3360]: raft: Node at 192.168.42.13:8300 [Candidate] entering Candidate state in term 184
May 2 15:18:13 node03 consul[3360]: raft: Election won. Tally: 2
May 2 15:18:13 node03 consul[3360]: raft: Node at 192.168.42.13:8300 [Leader] entering Leader state
May 2 15:18:13 node03 consul[3360]: raft: Added peer 192.168.42.11:8300, starting replication
May 2 15:18:13 node03 consul[3360]: raft: Added peer 192.168.42.12:8300, starting replication
May 2 15:18:13 node03 consul[3360]: consul: cluster leadership acquired
May 2 15:18:13 node03 consul[3360]: consul: New leader elected: node03
May 2 15:18:13 node03 consul[3360]: raft: pipelining replication to peer {Voter 192.168.42.11:8300 192.168.42.11:8300}
May 2 15:18:23 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:23 node03 consul[3360]: raft: AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300} rejected, sending older logs (next: 3213429)
May 2 15:18:23 node03 consul[3360]: raft: pipelining replication to peer {Voter 192.168.42.12:8300 192.168.42.12:8300}
From ps-cl01.log:
2018-05-02 15:17:30.689945 mon.node01 mon.0 192.168.42.11:6789/0 231117 : cluster [INF] osd.0 failed (root=default,host=node03) (2 reporters from different host after 20.000325 >= grace 20.000000)
2018-05-02 15:17:30.690389 mon.node01 mon.0 192.168.42.11:6789/0 231119 : cluster [INF] osd.4 failed (root=default,host=node03) (2 reporters from different host after 20.000540 >= grace 20.000000)
2018-05-02 15:17:30.690758 mon.node01 mon.0 192.168.42.11:6789/0 231121 : cluster [INF] osd.6 failed (root=default,host=node03) (2 reporters from different host after 20.000898 >= grace 20.000000)
2018-05-02 15:17:30.691040 mon.node01 mon.0 192.168.42.11:6789/0 231123 : cluster [INF] osd.8 failed (root=default,host=node03) (2 reporters from different host after 20.001124 >= grace 20.000000)
2018-05-02 15:17:30.691317 mon.node01 mon.0 192.168.42.11:6789/0 231125 : cluster [INF] osd.10 failed (root=default,host=node03) (2 reporters from different host after 20.001331 >= grace 20.000000)
2018-05-02 15:17:30.691936 mon.node01 mon.0 192.168.42.11:6789/0 231129 : cluster [INF] osd.15 failed (root=default,host=node03) (2 reporters from different host after 20.001689 >= grace 20.000000)
2018-05-02 15:17:30.728612 mon.node01 mon.0 192.168.42.11:6789/0 231134 : cluster [INF] osd.5 failed (root=default,host=node03) (2 reporters from different host after 20.000418 >= grace 20.000000)
2018-05-02 15:17:30.728887 mon.node01 mon.0 192.168.42.11:6789/0 231136 : cluster [INF] osd.7 failed (root=default,host=node03) (2 reporters from different host after 20.000640 >= grace 20.000000)
2018-05-02 15:17:30.729478 mon.node01 mon.0 192.168.42.11:6789/0 231140 : cluster [INF] osd.11 failed (root=default,host=node03) (2 reporters from different host after 20.000955 >= grace 20.000000)
2018-05-02 15:17:31.071749 mon.node01 mon.0 192.168.42.11:6789/0 231188 : cluster [INF] osd.9 failed (root=default,host=node03) (2 reporters from different host after 20.000476 >= grace 20.000000)
2018-05-02 15:17:31.072294 mon.node01 mon.0 192.168.42.11:6789/0 231192 : cluster [INF] osd.14 failed (root=default,host=node03) (2 reporters from different host after 20.000645 >= grace 20.000000)
2018-05-02 15:17:31.072648 mon.node01 mon.0 192.168.42.11:6789/0 231195 : cluster [INF] osd.17 failed (root=default,host=node03) (2 reporters from different host after 20.000775 >= grace 20.000000)
2018-05-02 15:17:31.073013 mon.node01 mon.0 192.168.42.11:6789/0 231198 : cluster [INF] osd.19 failed (root=default,host=node03) (2 reporters from different host after 20.000923 >= grace 20.000000)
2018-05-02 15:17:31.145738 mon.node01 mon.0 192.168.42.11:6789/0 231201 : cluster [INF] osd.2 failed (root=default,host=node03) (2 reporters from different host after 20.000246 >= grace 20.000000)
2018-05-02 15:17:31.276822 mon.node01 mon.0 192.168.42.11:6789/0 231213 : cluster [WRN] Health check failed: 14 osds down (OSD_DOWN)
2018-05-02 15:17:31.297002 mon.node01 mon.0 192.168.42.11:6789/0 231215 : cluster [INF] osd.1 failed (root=default,host=node03) (2 reporters from different host after 20.066715 >= grace 20.000000)
2018-05-02 15:17:31.297100 mon.node01 mon.0 192.168.42.11:6789/0 231217 : cluster [INF] osd.3 failed (root=default,host=node03) (2 reporters from different host after 20.066565 >= grace 20.000000)
2018-05-02 15:17:31.297290 mon.node01 mon.0 192.168.42.11:6789/0 231219 : cluster [INF] osd.12 failed (root=default,host=node03) (2 reporters from different host after 20.066392 >= grace 20.000000)
2018-05-02 15:17:31.297562 mon.node01 mon.0 192.168.42.11:6789/0 231222 : cluster [INF] osd.13 failed (root=default,host=node03) (2 reporters from different host after 20.010716 >= grace 20.000000)
2018-05-02 15:17:31.297644 mon.node01 mon.0 192.168.42.11:6789/0 231224 : cluster [INF] osd.16 failed (root=default,host=node03) (2 reporters from different host after 20.010419 >= grace 20.000000)
2018-05-02 15:17:31.342169 mon.node01 mon.0 192.168.42.11:6789/0 231245 : cluster [INF] osd.18 failed (root=default,host=node03) (2 reporters from different host after 20.003455 >= grace 20.000000)
2018-05-02 15:17:31.900209 osd.17 osd.17 192.168.42.13:6814/4554 347 : cluster [WRN] Monitor daemon marked osd.17 down, but it is still running
2018-05-02 15:17:32.347387 mon.node01 mon.0 192.168.42.11:6789/0 231324 : cluster [INF] osd.5 192.168.42.13:6808/4150 boot
2018-05-02 15:17:32.122997 osd.14 osd.14 192.168.42.13:6812/4444 351 : cluster [WRN] Monitor daemon marked osd.14 down, but it is still running
2018-05-02 15:17:33.417134 mon.node01 mon.0 192.168.42.11:6789/0 231341 : cluster [INF] osd.19 192.168.42.13:6838/7714 boot
2018-05-02 15:17:32.150394 osd.15 osd.15 192.168.42.13:6818/4787 385 : cluster [WRN] Monitor daemon marked osd.15 down, but it is still running
2018-05-02 15:17:34.449728 mon.node01 mon.0 192.168.42.11:6789/0 231360 : cluster [WRN] Health check failed: Reduced data availability: 182 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:34.449810 mon.node01 mon.0 192.168.42.11:6789/0 231361 : cluster [WRN] Health check failed: Degraded data redundancy: 266928/11883735 objects degraded (2.246%), 265 pgs unclean, 276 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:34.449892 mon.node01 mon.0 192.168.42.11:6789/0 231362 : cluster [WRN] Health check failed: 6 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:34.462178 mon.node01 mon.0 192.168.42.11:6789/0 231363 : cluster [INF] osd.14 192.168.42.13:6812/4444 boot
2018-05-02 15:17:34.462279 mon.node01 mon.0 192.168.42.11:6789/0 231364 : cluster [INF] osd.15 192.168.42.13:6818/4787 boot
2018-05-02 15:17:34.462350 mon.node01 mon.0 192.168.42.11:6789/0 231365 : cluster [INF] osd.1 192.168.42.13:6832/7029 boot
2018-05-02 15:17:34.462398 mon.node01 mon.0 192.168.42.11:6789/0 231366 : cluster [INF] osd.11 192.168.42.13:6804/3915 boot
2018-05-02 15:17:34.462440 mon.node01 mon.0 192.168.42.11:6789/0 231367 : cluster [INF] osd.9 192.168.42.13:6830/6607 boot
2018-05-02 15:17:34.462483 mon.node01 mon.0 192.168.42.11:6789/0 231368 : cluster [INF] osd.0 192.168.42.13:6828/5616 boot
2018-05-02 15:17:34.526923 osd.13 osd.13 192.168.42.13:6820/4934 391 : cluster [WRN] Monitor daemon marked osd.13 down, but it is still running
2018-05-02 15:17:31.950776 osd.9 osd.9 192.168.42.13:6830/6607 439 : cluster [WRN] Monitor daemon marked osd.9 down, but it is still running
2018-05-02 15:17:35.528042 mon.node01 mon.0 192.168.42.11:6789/0 231385 : cluster [INF] osd.13 192.168.42.13:6820/4934 boot
2018-05-02 15:17:35.528149 mon.node01 mon.0 192.168.42.11:6789/0 231386 : cluster [INF] osd.4 192.168.42.13:6816/4672 boot
2018-05-02 15:17:36.561053 mon.node01 mon.0 192.168.42.11:6789/0 231403 : cluster [WRN] Health check update: 9 osds down (OSD_DOWN)
2018-05-02 15:17:36.577048 mon.node01 mon.0 192.168.42.11:6789/0 231404 : cluster [INF] osd.12 192.168.42.13:6824/5233 boot
2018-05-02 15:17:31.901987 osd.7 osd.7 192.168.42.13:6834/7201 353 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running
2018-05-02 15:17:33.112422 osd.3 osd.3 192.168.42.13:6802/3776 433 : cluster [WRN] Monitor daemon marked osd.3 down, but it is still running
2018-05-02 15:17:33.454019 osd.1 osd.1 192.168.42.13:6832/7029 453 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running
2018-05-02 15:17:38.145518 mon.node01 mon.0 192.168.42.11:6789/0 231444 : cluster [INF] osd.16 192.168.42.13:6822/5084 boot
2018-05-02 15:17:31.878606 osd.6 osd.6 192.168.42.13:6806/4031 379 : cluster [WRN] Monitor daemon marked osd.6 down, but it is still running
2018-05-02 15:17:37.074577 osd.16 osd.16 192.168.42.13:6822/5084 343 : cluster [WRN] Monitor daemon marked osd.16 down, but it is still running
2018-05-02 15:17:39.450525 mon.node01 mon.0 192.168.42.11:6789/0 231512 : cluster [WRN] Health check update: Reduced data availability: 156 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:39.450603 mon.node01 mon.0 192.168.42.11:6789/0 231513 : cluster [WRN] Health check update: Degraded data redundancy: 780173/11883735 objects degraded (6.565%), 409 pgs unclean, 826 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:39.450648 mon.node01 mon.0 192.168.42.11:6789/0 231514 : cluster [WRN] Health check update: 20 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:33.481474 osd.0 osd.0 192.168.42.13:6828/5616 375 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running
2018-05-02 15:17:34.835065 osd.4 osd.4 192.168.42.13:6816/4672 398 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running
2018-05-02 15:17:31.758601 osd.11 osd.11 192.168.42.13:6804/3915 325 : cluster [WRN] Monitor daemon marked osd.11 down, but it is still running
2018-05-02 15:17:31.854285 osd.5 osd.5 192.168.42.13:6808/4150 339 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running
2018-05-02 15:17:31.927593 osd.8 osd.8 192.168.42.13:6826/5418 327 : cluster [WRN] Monitor daemon marked osd.8 down, but it is still running
2018-05-02 15:17:32.264540 osd.2 osd.2 192.168.42.13:6810/4329 373 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running
2018-05-02 15:17:32.504532 osd.19 osd.19 192.168.42.13:6838/7714 367 : cluster [WRN] Monitor daemon marked osd.19 down, but it is still running
2018-05-02 15:17:35.304926 osd.12 osd.12 192.168.42.13:6824/5233 409 : cluster [WRN] Monitor daemon marked osd.12 down, but it is still running
2018-05-02 15:17:41.201037 mon.node01 mon.0 192.168.42.11:6789/0 231515 : cluster [INF] osd.10 192.168.42.13:6836/7465 boot
2018-05-02 15:17:37.224788 osd.18 osd.18 192.168.42.13:6800/3571 353 : cluster [WRN] Monitor daemon marked osd.18 down, but it is still running
2018-05-02 15:17:44.451567 mon.node01 mon.0 192.168.42.11:6789/0 231566 : cluster [WRN] Health check update: 7 osds down (OSD_DOWN)
2018-05-02 15:17:44.451681 mon.node01 mon.0 192.168.42.11:6789/0 231567 : cluster [WRN] Health check update: Reduced data availability: 119 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:44.451740 mon.node01 mon.0 192.168.42.11:6789/0 231568 : cluster [WRN] Health check update: Degraded data redundancy: 567324/11883735 objects degraded (4.774%), 376 pgs unclean, 598 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:44.451787 mon.node01 mon.0 192.168.42.11:6789/0 231569 : cluster [WRN] Health check update: 8 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:39.947717 osd.10 osd.10 192.168.42.13:6836/7465 363 : cluster [WRN] Monitor daemon marked osd.10 down, but it is still running
2018-05-02 15:17:49.452743 mon.node01 mon.0 192.168.42.11:6789/0 231571 : cluster [WRN] Health check update: Reduced data availability: 124 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:49.452802 mon.node01 mon.0 192.168.42.11:6789/0 231572 : cluster [WRN] Health check update: Degraded data redundancy: 545427/11883735 objects degraded (4.590%), 400 pgs unclean, 569 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:49.452830 mon.node01 mon.0 192.168.42.11:6789/0 231573 : cluster [WRN] Health check update: 5 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:45.648204 osd.22 osd.22 192.168.42.11:6818/5595 347 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.352068 secs
2018-05-02 15:17:45.648216 osd.22 osd.22 192.168.42.11:6818/5595 348 : cluster [WRN] slow request 30.352068 seconds old, received at 2018-05-02 15:17:15.296043: osd_op(client.6526841.0:199030532 1.f58 1:1afd369a:::rbd_data.51ffd188bfb19.0000000000a9a36d:head [set-alloc-hint object_size 4194304 write_size 4194304,write 931840~1536] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:48.674430 osd.57 osd.57 192.168.42.12:6824/5788 369 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.028853 secs
2018-05-02 15:17:48.674449 osd.57 osd.57 192.168.42.12:6824/5788 370 : cluster [WRN] slow request 30.028853 seconds old, received at 2018-05-02 15:17:18.645456: osd_op(client.6526841.0:199030675 1.580 1:01afed70:::rbd_data.51ffd188bfb19.00000000000920ad:head [set-alloc-hint object_size 4194304 write_size 4194304,write 2617344~3584] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:46.337758 osd.37 osd.37 192.168.42.11:6812/4603 313 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.840941 secs
2018-05-02 15:17:46.337771 osd.37 osd.37 192.168.42.11:6812/4603 314 : cluster [WRN] slow request 30.840941 seconds old, received at 2018-05-02 15:17:15.496724: osd_op(client.5168842.0:565598091 1.cd6 1:6b3ac828:::rbd_data.51ffd188bfb19.0000000000a994d6:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1515520~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:45.668260 osd.51 osd.51 192.168.42.12:6810/4338 363 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.950745 secs
2018-05-02 15:17:45.668273 osd.51 osd.51 192.168.42.12:6810/4338 364 : cluster [WRN] slow request 30.950745 seconds old, received at 2018-05-02 15:17:14.717398: osd_op(client.6596773.0:183937335 1.a33 1:cc5b61d1:::rbd_data.1e815e6eb1d34.000000000056c0b3:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1323520~32768] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:52.669812 osd.51 osd.51 192.168.42.12:6810/4338 365 : cluster [WRN] 3 slow requests, 2 included below; oldest blocked for > 37.952326 secs
2018-05-02 15:17:52.669821 osd.51 osd.51 192.168.42.12:6810/4338 366 : cluster [WRN] slow request 30.265711 seconds old, received at 2018-05-02 15:17:22.404014: osd_op(client.6596773.0:183937651 1.aab 1:d557b9bd:::rbd_data.1e815e6eb1d34.00000000005effb8:head [set-alloc-hint object_size 4194304 write_size 4194304,write 3551232~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:52.669853 osd.51 osd.51 192.168.42.12:6810/4338 367 : cluster [WRN] slow request 30.259918 seconds old, received at 2018-05-02 15:17:22.409807: osd_op(client.6526841.0:199030712 1.227 1:e44f9365:::rbd_data.51ffd188bfb19.000000000041d7c0:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1818624~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:54.499661 mon.node01 mon.0 192.168.42.11:6789/0 231630 : cluster [WRN] Health check update: 5 osds down (OSD_DOWN)
2018-05-02 15:17:54.502279 mon.node01 mon.0 192.168.42.11:6789/0 231631 : cluster [WRN] Health check update: Reduced data availability: 152 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:54.502352 mon.node01 mon.0 192.168.42.11:6789/0 231632 : cluster [WRN] Health check update: Degraded data redundancy: 545424/11883735 objects degraded (4.590%), 470 pgs unclean, 564 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:54.502394 mon.node01 mon.0 192.168.42.11:6789/0 231633 : cluster [WRN] Health check update: 9 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:54.511364 mon.node01 mon.0 192.168.42.11:6789/0 231634 : cluster [INF] osd.7 192.168.42.13:6834/7201 boot
2018-05-02 15:17:54.511419 mon.node01 mon.0 192.168.42.11:6789/0 231635 : cluster [INF] osd.2 192.168.42.13:6810/4329 boot
2018-05-02 15:17:55.560707 mon.node01 mon.0 192.168.42.11:6789/0 231772 : cluster [INF] osd.3 192.168.42.13:6802/3776 boot
2018-05-02 15:17:58.644882 mon.node01 mon.0 192.168.42.11:6789/0 232360 : cluster [INF] osd.18 192.168.42.13:6800/3571 boot
2018-05-02 15:17:58.997648 mon.node01 mon.0 192.168.42.11:6789/0 232419 : cluster [WRN] overall HEALTH_WARN 3 osds down; Reduced data availability: 187 pgs peering; Degraded data redundancy: 393287/11883735 objects degraded (3.309%), 464 pgs unclean, 414 pgs degraded; 20 slow requests are blocked > 32 sec
2018-05-02 15:17:59.502999 mon.node01 mon.0 192.168.42.11:6789/0 232447 : cluster [WRN] Health check update: 3 osds down (OSD_DOWN)
2018-05-02 15:17:59.503131 mon.node01 mon.0 192.168.42.11:6789/0 232448 : cluster [WRN] Health check update: Reduced data availability: 187 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:59.503190 mon.node01 mon.0 192.168.42.11:6789/0 232449 : cluster [WRN] Health check update: Degraded data redundancy: 393287/11883735 objects degraded (3.309%), 464 pgs unclean, 414 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:59.503246 mon.node01 mon.0 192.168.42.11:6789/0 232450 : cluster [WRN] Health check update: 20 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:04.507977 mon.node01 mon.0 192.168.42.11:6789/0 232590 : cluster [WRN] Health check update: Reduced data availability: 211 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:04.508058 mon.node01 mon.0 192.168.42.11:6789/0 232591 : cluster [WRN] Health check update: Degraded data redundancy: 242695/11883735 objects degraded (2.042%), 477 pgs unclean, 252 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:04.508106 mon.node01 mon.0 192.168.42.11:6789/0 232592 : cluster [WRN] Health check update: 23 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:05.877862 osd.61 osd.61 192.168.42.14:6814/4904 1237 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.173325 secs
2018-05-02 15:18:05.877874 osd.61 osd.61 192.168.42.14:6814/4904 1238 : cluster [WRN] slow request 30.173325 seconds old, received at 2018-05-02 15:17:35.704479: osd_op(client.6596773.0:183937862 1.c0e 1.d2861c0e (undecoded) ondisk+write+known_if_redirected e26136) currently waiting for peered
2018-05-02 15:18:02.797168 osd.58 osd.58 192.168.42.12:6804/3828 331 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.363481 secs
2018-05-02 15:18:02.797179 osd.58 osd.58 192.168.42.12:6804/3828 332 : cluster [WRN] slow request 30.363481 seconds old, received at 2018-05-02 15:17:32.433619: osd_op(client.6596773.0:183937757 1.a86 1.a57fca86 (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:09.513538 mon.node01 mon.0 192.168.42.11:6789/0 232608 : cluster [WRN] Health check update: Reduced data availability: 252 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:09.513599 mon.node01 mon.0 192.168.42.11:6789/0 232609 : cluster [WRN] Health check update: Degraded data redundancy: 239770/11883735 objects degraded (2.018%), 568 pgs unclean, 248 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:09.513636 mon.node01 mon.0 192.168.42.11:6789/0 232610 : cluster [WRN] Health check update: 25 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:02.817820 osd.77 osd.77 192.168.42.14:6828/6313 1117 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.394863 secs
2018-05-02 15:18:02.817831 osd.77 osd.77 192.168.42.14:6828/6313 1118 : cluster [WRN] slow request 30.394863 seconds old, received at 2018-05-02 15:17:32.422898: osd_op(client.6596773.0:183937750 1.6df 1.cf9bc6df (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:02.476590 osd.5 osd.5 192.168.42.13:6808/4150 341 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.064270 secs
2018-05-02 15:18:02.476601 osd.5 osd.5 192.168.42.13:6808/4150 342 : cluster [WRN] slow request 30.064270 seconds old, received at 2018-05-02 15:17:32.412264: osd_op(client.6596773.0:183937421 1.116 1.1c1d4116 (undecoded) ondisk+retry+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:02.651760 osd.22 osd.22 192.168.42.11:6818/5595 349 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 47.355668 secs
2018-05-02 15:18:02.651779 osd.22 osd.22 192.168.42.11:6818/5595 350 : cluster [WRN] slow request 30.174091 seconds old, received at 2018-05-02 15:17:32.477620: osd_op(client.5168842.0:565598316 1.f58 1.596cbf58 (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:02.797120 osd.97 osd.97 192.168.42.15:6834/31139 554 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.381988 secs
2018-05-02 15:18:02.797141 osd.97 osd.97 192.168.42.15:6834/31139 555 : cluster [WRN] slow request 30.381988 seconds old, received at 2018-05-02 15:17:32.415062: osd_op(client.6526841.0:199030755 1.569 1.b0c48569 (undecoded) ondisk+read+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:04.413088 osd.19 osd.19 192.168.42.13:6838/7714 369 : cluster [WRN] 4 slow requests, 4 included below; oldest blocked for > 30.948256 secs
2018-05-02 15:18:04.413100 osd.19 osd.19 192.168.42.13:6838/7714 370 : cluster [WRN] slow request 30.484589 seconds old, received at 2018-05-02 15:17:33.928388: osd_op(client.6526841.0:199030843 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:04.413107 osd.19 osd.19 192.168.42.13:6838/7714 371 : cluster [WRN] slow request 30.465765 seconds old, received at 2018-05-02 15:17:33.947212: osd_op(client.6596773.0:183937784 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:04.413114 osd.19 osd.19 192.168.42.13:6838/7714 372 : cluster [WRN] slow request 30.425245 seconds old, received at 2018-05-02 15:17:33.987732: osd_op(client.6596773.0:183937785 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:04.413119 osd.19 osd.19 192.168.42.13:6838/7714 373 : cluster [WRN] slow request 30.948256 seconds old, received at 2018-05-02 15:17:33.464721: osd_op(client.6526841.0:199030727 1.71e 1.5c1ee71e (undecoded) ondisk+retry+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:05.413744 osd.19 osd.19 192.168.42.13:6838/7714 374 : cluster [WRN] 7 slow requests, 3 included below; oldest blocked for > 31.948951 secs
2018-05-02 15:18:05.413755 osd.19 osd.19 192.168.42.13:6838/7714 375 : cluster [WRN] slow request 30.999699 seconds old, received at 2018-05-02 15:17:34.413974: osd_op(client.6526841.0:199030846 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:05.413760 osd.19 osd.19 192.168.42.13:6838/7714 376 : cluster [WRN] slow request 30.926653 seconds old, received at 2018-05-02 15:17:34.487019: osd_op(client.6603196.0:182818149 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:05.413769 osd.19 osd.19 192.168.42.13:6838/7714 377 : cluster [WRN] slow request 30.914518 seconds old, received at 2018-05-02 15:17:34.499154: osd_op(client.6526841.0:199030848 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:06.284732 osd.4 osd.4 192.168.42.13:6816/4672 400 : cluster [WRN] 2 slow requests, 2 included below; oldest blocked for > 30.629189 secs
2018-05-02 15:18:06.284743 osd.4 osd.4 192.168.42.13:6816/4672 401 : cluster [WRN] slow request 30.629189 seconds old, received at 2018-05-02 15:17:35.655473: osd_op(client.6596773.0:183937792 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26136) currently waiting for peered
2018-05-02 15:18:06.284756 osd.4 osd.4 192.168.42.13:6816/4672 402 : cluster [WRN] slow request 30.260563 seconds old, received at 2018-05-02 15:17:36.024099: osd_op(client.6596773.0:183937865 1.2dd 1.8dea32dd (undecoded) ondisk+read+known_if_redirected e26136) currently waiting for peered
2018-05-02 15:18:08.285983 osd.4 osd.4 192.168.42.13:6816/4672 403 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 32.630454 secs
2018-05-02 15:18:08.285995 osd.4 osd.4 192.168.42.13:6816/4672 404 : cluster [WRN] slow request 30.278627 seconds old, received at 2018-05-02 15:17:38.007300: osd_op(client.6596773.0:183937868 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26138) currently waiting for peered
2018-05-02 15:18:03.279070 osd.82 osd.82 192.168.42.15:6804/8697 409 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.845140 secs
2018-05-02 15:18:03.279083 osd.82 osd.82 192.168.42.15:6804/8697 410 : cluster [WRN] slow request 30.845140 seconds old, received at 2018-05-02 15:17:32.433862: osd_op(client.6526841.0:199030770 1.1b9 1.2f4701b9 (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:13.808180 mon.node01 mon.0 192.168.42.11:6789/0 232619 : cluster [WRN] Health check update: 2 osds down (OSD_DOWN)
2018-05-02 15:18:13.825306 mon.node01 mon.0 192.168.42.11:6789/0 232620 : cluster [INF] osd.6 192.168.42.13:6806/4031 boot
2018-05-02 15:18:14.518576 mon.node01 mon.0 192.168.42.11:6789/0 232641 : cluster [WRN] Health check update: Reduced data availability: 311 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:14.518655 mon.node01 mon.0 192.168.42.11:6789/0 232642 : cluster [WRN] Health check update: Degraded data redundancy: 239770/11883735 objects degraded (2.018%), 706 pgs unclean, 248 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:14.518708 mon.node01 mon.0 192.168.42.11:6789/0 232643 : cluster [WRN] Health check update: 27 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:17.801511 osd.58 osd.58 192.168.42.12:6804/3828 333 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 45.367834 secs
2018-05-02 15:18:17.801519 osd.58 osd.58 192.168.42.12:6804/3828 334 : cluster [WRN] slow request 30.183339 seconds old, received at 2018-05-02 15:17:47.618114: osd_op(client.6603196.0:182818173 1.a86 1.a57fca86 (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:19.525411 mon.node01 mon.0 192.168.42.11:6789/0 232881 : cluster [WRN] Health check update: Reduced data availability: 358 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:19.525473 mon.node01 mon.0 192.168.42.11:6789/0 232882 : cluster [WRN] Health check update: Degraded data redundancy: 188544/11883735 objects degraded (1.587%), 725 pgs unclean, 199 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:19.525510 mon.node01 mon.0 192.168.42.11:6789/0 232883 : cluster [WRN] Health check update: 34 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:19.525823 mon.node01 mon.0 192.168.42.11:6789/0 232884 : cluster [INF] osd.0 failed (root=default,host=node03) (4 reporters from different host after 22.185499 >= grace 20.000000)
2018-05-02 15:18:19.526157 mon.node01 mon.0 192.168.42.11:6789/0 232885 : cluster [INF] osd.1 failed (root=default,host=node03) (4 reporters from different host after 20.729223 >= grace 20.000000)
2018-05-02 15:18:19.527041 mon.node01 mon.0 192.168.42.11:6789/0 232886 : cluster [INF] osd.5 failed (root=default,host=node03) (4 reporters from different host after 21.722805 >= grace 20.000000)
2018-05-02 15:18:19.527435 mon.node01 mon.0 192.168.42.11:6789/0 232887 : cluster [INF] osd.9 failed (root=default,host=node03) (4 reporters from different host after 20.729151 >= grace 20.000000)
2018-05-02 15:18:19.527946 mon.node01 mon.0 192.168.42.11:6789/0 232888 : cluster [INF] osd.11 failed (root=default,host=node03) (4 reporters from different host after 20.399416 >= grace 20.000000)
2018-05-02 15:18:19.528543 mon.node01 mon.0 192.168.42.11:6789/0 232889 : cluster [INF] osd.14 failed (root=default,host=node03) (4 reporters from different host after 20.729112 >= grace 20.000000)
2018-05-02 15:18:19.528831 mon.node01 mon.0 192.168.42.11:6789/0 232890 : cluster [INF] osd.15 failed (root=default,host=node03) (4 reporters from different host after 20.399139 >= grace 20.000000)
2018-05-02 15:18:19.529181 mon.node01 mon.0 192.168.42.11:6789/0 232891 : cluster [INF] osd.19 failed (root=default,host=node03) (4 reporters from different host after 21.722456 >= grace 20.000000)
2018-05-02 15:18:19.546706 mon.node01 mon.0 192.168.42.11:6789/0 232892 : cluster [WRN] Health check update: 10 osds down (OSD_DOWN)
2018-05-02 15:18:13.289192 osd.4 osd.4 192.168.42.13:6816/4672 405 : cluster [WRN] 4 slow requests, 1 included below; oldest blocked for > 37.633638 secs
2018-05-02 15:18:13.289200 osd.4 osd.4 192.168.42.13:6816/4672 406 : cluster [WRN] slow request 30.273720 seconds old, received at 2018-05-02 15:17:43.015391: osd_op(client.6596773.0:183937876 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26142) currently waiting for peered
2018-05-02 15:18:15.655564 osd.22 osd.22 192.168.42.11:6818/5595 351 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 60.359482 secs
2018-05-02 15:18:15.655573 osd.22 osd.22 192.168.42.11:6818/5595 352 : cluster [WRN] slow request 60.359482 seconds old, received at 2018-05-02 15:17:15.296043: osd_op(client.6526841.0:199030532 1.f58 1:1afd369a:::rbd_data.51ffd188bfb19.0000000000a9a36d:head [set-alloc-hint object_size 4194304 write_size 4194304,write 931840~1536] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:18.292619 osd.4 osd.4 192.168.42.13:6816/4672 407 : cluster [WRN] 5 slow requests, 1 included below; oldest blocked for > 42.637083 secs
2018-05-02 15:18:18.292632 osd.4 osd.4 192.168.42.13:6816/4672 408 : cluster [WRN] slow request 30.269277 seconds old, received at 2018-05-02 15:17:48.023279: osd_op(client.6596773.0:183937882 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:18.656500 osd.22 osd.22 192.168.42.11:6818/5595 353 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 63.360399 secs
2018-05-02 15:18:18.656510 osd.22 osd.22 192.168.42.11:6818/5595 354 : cluster [WRN] slow request 30.693945 seconds old, received at 2018-05-02 15:17:47.962497: osd_op(client.6526841.0:199030863 1.f58 1.596cbf58 (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:18.682212 osd.57 osd.57 192.168.42.12:6824/5788 371 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 60.036720 secs
2018-05-02 15:18:18.682220 osd.57 osd.57 192.168.42.12:6824/5788 372 : cluster [WRN] slow request 60.036720 seconds old, received at 2018-05-02 15:17:18.645456: osd_op(client.6526841.0:199030675 1.580 1:01afed70:::rbd_data.51ffd188bfb19.00000000000920ad:head [set-alloc-hint object_size 4194304 write_size 4194304,write 2617344~3584] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:20.130588 osd.19 osd.19 192.168.42.13:6838/7714 378 : cluster [WRN] Monitor daemon marked osd.19 down, but it is still running
2018-05-02 15:18:20.874752 osd.11 osd.11 192.168.42.13:6804/3915 327 : cluster [WRN] Monitor daemon marked osd.11 down, but it is still running
2018-05-02 15:18:16.345047 osd.37 osd.37 192.168.42.11:6812/4603 315 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 60.848283 secs
2018-05-02 15:18:16.345056 osd.37 osd.37 192.168.42.11:6812/4603 316 : cluster [WRN] slow request 60.848283 seconds old, received at 2018-05-02 15:17:15.496724: osd_op(client.5168842.0:565598091 1.cd6 1:6b3ac828:::rbd_data.51ffd188bfb19.0000000000a994d6:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1515520~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:21.933793 mon.node01 mon.0 192.168.42.11:6789/0 232999 : cluster [INF] osd.14 192.168.42.13:6812/4444 boot
2018-05-02 15:18:21.933874 mon.node01 mon.0 192.168.42.11:6789/0 233000 : cluster [INF] osd.15 192.168.42.13:6818/4787 boot
2018-05-02 15:18:21.933989 mon.node01 mon.0 192.168.42.11:6789/0 233001 : cluster [INF] osd.11 192.168.42.13:6804/3915 boot
2018-05-02 15:18:22.993198 mon.node01 mon.0 192.168.42.11:6789/0 233024 : cluster [INF] osd.9 192.168.42.13:6830/6607 boot
2018-05-02 15:18:22.995855 mon.node01 mon.0 192.168.42.11:6789/0 233025 : cluster [INF] osd.0 192.168.42.13:6828/5616 boot
2018-05-02 15:18:22.995930 mon.node01 mon.0 192.168.42.11:6789/0 233026 : cluster [INF] osd.1 192.168.42.13:6832/7029 boot
2018-05-02 15:18:15.676606 osd.51 osd.51 192.168.42.12:6810/4338 368 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 60.959171 secs
2018-05-02 15:18:15.676613 osd.51 osd.51 192.168.42.12:6810/4338 369 : cluster [WRN] slow request 60.959171 seconds old, received at 2018-05-02 15:17:14.717398: osd_op(client.6596773.0:183937335 1.a33 1:cc5b61d1:::rbd_data.1e815e6eb1d34.000000000056c0b3:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1323520~32768] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:21.036818 osd.14 osd.14 192.168.42.13:6812/4444 353 : cluster [WRN] Monitor daemon marked osd.14 down, but it is still running
2018-05-02 15:18:22.678710 osd.51 osd.51 192.168.42.12:6810/4338 370 : cluster [WRN] 2 slow requests, 2 included below; oldest blocked for > 60.274650 secs
2018-05-02 15:18:22.678722 osd.51 osd.51 192.168.42.12:6810/4338 371 : cluster [WRN] slow request 60.274650 seconds old, received at 2018-05-02 15:17:22.404014: osd_op(client.6596773.0:183937651 1.aab 1:d557b9bd:::rbd_data.1e815e6eb1d34.00000000005effb8:head [set-alloc-hint object_size 4194304 write_size 4194304,write 3551232~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:22.678729 osd.51 osd.51 192.168.42.12:6810/4338 372 : cluster [WRN] slow request 60.268857 seconds old, received at 2018-05-02 15:17:22.409807: osd_op(client.6526841.0:199030712 1.227 1:e44f9365:::rbd_data.51ffd188bfb19.000000000041d7c0:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1818624~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:24.042094 mon.node01 mon.0 192.168.42.11:6789/0 233075 : cluster [INF] osd.5 192.168.42.13:6808/4150 boot
2018-05-02 15:18:24.043666 mon.node01 mon.0 192.168.42.11:6789/0 233076 : cluster [INF] osd.17 192.168.42.13:6814/4554 boot
2018-05-02 15:18:24.548038 mon.node01 mon.0 192.168.42.11:6789/0 233191 : cluster [WRN] Health check update: 2 osds down (OSD_DOWN)
2018-05-02 15:18:24.548147 mon.node01 mon.0 192.168.42.11:6789/0 233192 : cluster [WRN] Health check update: Reduced data availability: 245 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:24.548204 mon.node01 mon.0 192.168.42.11:6789/0 233193 : cluster [WRN] Health check update: Degraded data redundancy: 493813/11883735 objects degraded (4.155%), 733 pgs unclean, 514 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:24.548250 mon.node01 mon.0 192.168.42.11:6789/0 233194 : cluster [WRN] Health check update: 20 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:24.549084 mon.node01 mon.0 192.168.42.11:6789/0 233195 : cluster [INF] osd.4 failed (root=default,host=node03) (4 reporters from different host after 24.643274 >= grace 20.298579)
2018-05-02 15:18:24.549862 mon.node01 mon.0 192.168.42.11:6789/0 233196 : cluster [INF] osd.12 failed (root=default,host=node03) (3 reporters from different host after 23.203997 >= grace 20.000000)
2018-05-02 15:18:24.550361 mon.node01 mon.0 192.168.42.11:6789/0 233197 : cluster [INF] osd.13 failed (root=default,host=node03) (4 reporters from different host after 24.454206 >= grace 20.000000)
2018-05-02 15:18:24.550688 mon.node01 mon.0 192.168.42.11:6789/0 233198 : cluster [INF] osd.16 failed (root=default,host=node03) (4 reporters from different host after 21.857742 >= grace 20.298740)
2018-05-02 15:18:21.047160 osd.15 osd.15 192.168.42.13:6818/4787 387 : cluster [WRN] Monitor daemon marked osd.15 down, but it is still running
2018-05-02 15:18:25.156763 osd.13 osd.13 192.168.42.13:6820/4934 393 : cluster [WRN] Monitor daemon marked osd.13 down, but it is still running
2018-05-02 15:18:25.631974 mon.node01 mon.0 192.168.42.11:6789/0 233396 : cluster [INF] osd.4 192.168.42.13:6816/4672 boot
2018-05-02 15:18:21.012257 osd.9 osd.9 192.168.42.13:6830/6607 441 : cluster [WRN] Monitor daemon marked osd.9 down, but it is still running
2018-05-02 15:18:26.679788 mon.node01 mon.0 192.168.42.11:6789/0 233663 : cluster [INF] osd.12 192.168.42.13:6824/5233 boot
2018-05-02 15:18:26.680755 mon.node01 mon.0 192.168.42.11:6789/0 233664 : cluster [INF] osd.19 192.168.42.13:6838/7714 boot
2018-05-02 15:18:26.680955 mon.node01 mon.0 192.168.42.11:6789/0 233665 : cluster [INF] osd.13 192.168.42.13:6820/4934 boot
2018-05-02 15:18:26.680998 mon.node01 mon.0 192.168.42.11:6789/0 233666 : cluster [INF] osd.16 192.168.42.13:6822/5084 boot
2018-05-02 15:18:22.079422 osd.1 osd.1 192.168.42.13:6832/7029 455 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running
2018-05-02 15:18:25.700018 osd.16 osd.16 192.168.42.13:6822/5084 345 : cluster [WRN] Monitor daemon marked osd.16 down, but it is still running
2018-05-02 15:18:29.101502 mon.node01 mon.0 192.168.42.11:6789/0 233914 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2018-05-02 15:18:29.114105 mon.node01 mon.0 192.168.42.11:6789/0 233915 : cluster [INF] osd.8 192.168.42.13:6826/5418 boot
2018-05-02 15:18:29.565219 mon.node01 mon.0 192.168.42.11:6789/0 233932 : cluster [WRN] Health check update: Reduced data availability: 84 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:29.565271 mon.node01 mon.0 192.168.42.11:6789/0 233933 : cluster [WRN] Health check update: Degraded data redundancy: 202763/11883735 objects degraded (1.706%), 404 pgs unclean, 239 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:29.565297 mon.node01 mon.0 192.168.42.11:6789/0 233934 : cluster [WRN] Health check update: 5 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:29.565507 mon.node01 mon.0 192.168.42.11:6789/0 233935 : cluster [INF] osd.10 failed (root=default,host=node03) (4 reporters from different host after 23.766485 >= grace 20.597260)
2018-05-02 15:18:29.567298 mon.node01 mon.0 192.168.42.11:6789/0 233936 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2018-05-02 15:18:22.098559 osd.0 osd.0 192.168.42.13:6828/5616 377 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running
2018-05-02 15:18:23.295719 osd.4 osd.4 192.168.42.13:6816/4672 409 : cluster [WRN] 6 slow requests, 1 included below; oldest blocked for > 47.640189 secs
2018-05-02 15:18:23.295728 osd.4 osd.4 192.168.42.13:6816/4672 410 : cluster [WRN] slow request 30.264308 seconds old, received at 2018-05-02 15:17:53.031354: osd_op(client.6596773.0:183937898 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:24.965849 osd.4 osd.4 192.168.42.13:6816/4672 411 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running
2018-05-02 15:18:21.980375 osd.5 osd.5 192.168.42.13:6808/4150 343 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running
2018-05-02 15:18:25.166233 osd.12 osd.12 192.168.42.13:6824/5233 411 : cluster [WRN] Monitor daemon marked osd.12 down, but it is still running
2018-05-02 15:18:33.187741 mon.node01 mon.0 192.168.42.11:6789/0 234132 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2018-05-02 15:18:33.204727 mon.node01 mon.0 192.168.42.11:6789/0 234133 : cluster [INF] osd.10 192.168.42.13:6836/7465 boot
2018-05-02 15:18:34.203560 mon.node01 mon.0 192.168.42.11:6789/0 234150 : cluster [INF] Health check cleared: REQUEST_SLOW (was: 2 slow requests are blocked > 32 sec)
2018-05-02 15:18:34.568568 mon.node01 mon.0 192.168.42.11:6789/0 234168 : cluster [WRN] Health check update: Reduced data availability: 13 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:34.568641 mon.node01 mon.0 192.168.42.11:6789/0 234169 : cluster [WRN] Health check update: Degraded data redundancy: 47468/11883735 objects degraded (0.399%), 92 pgs unclean, 101 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:32.064002 osd.10 osd.10 192.168.42.13:6836/7465 365 : cluster [WRN] Monitor daemon marked osd.10 down, but it is still running
2018-05-02 15:18:38.215116 mon.node01 mon.0 192.168.42.11:6789/0 234170 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 13 pgs peering)
2018-05-02 15:18:39.569077 mon.node01 mon.0 192.168.42.11:6789/0 234172 : cluster [WRN] Health check update: Degraded data redundancy: 14590/11883735 objects degraded (0.123%), 55 pgs unclean, 73 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:44.569539 mon.node01 mon.0 192.168.42.11:6789/0 234182 : cluster [WRN] Health check update: Degraded data redundancy: 27/11883735 objects degraded (0.000%), 15 pgs unclean, 26 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:49.569956 mon.node01 mon.0 192.168.42.11:6789/0 234184 : cluster [WRN] Health check update: Degraded data redundancy: 12/11883735 objects degraded (0.000%), 6 pgs unclean, 13 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:53.915551 mon.node01 mon.0 192.168.42.11:6789/0 234186 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/11883735 objects degraded (0.000%), 2 pgs unclean, 3 pgs degraded)
2018-05-02 15:18:53.915621 mon.node01 mon.0 192.168.42.11:6789/0 234187 : cluster [INF] Cluster is now healthy
Last edited on May 2, 2018, 8:51 pm by BonsaiJoe · #6
BonsaiJoe
53 Posts
May 6, 2018, 12:34 pmQuote from BonsaiJoe on May 6, 2018, 12:34 pmHi,
today we got the same error as last week, but on a different node:
[2402056.952552] i40e 0000:17:00.1: TX driver issue detected, PF reset issued
[2402057.082512] i40e 0000:17:00.1: i40e_ptp_init: PTP not supported on eth3
[2402057.161123] i40e 0000:17:00.1 eth3: speed changed to 0 for port eth3
[2402057.161586] bond1: link status up again after 0 ms for interface eth3
Then all 20 OSDs went down and recovered within a few minutes.
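In case it helps while waiting for the updated driver, a few things that might be worth capturing right after such an event (interface and bond names taken from the logs above):
cat /proc/net/bonding/bond1             # confirm both slaves are up again and note the link failure counters
ethtool -S eth3 | grep -iE 'err|drop'   # per-port error and drop counters on the affected interface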
admin
2,930 Posts
May 6, 2018, 12:41 pmQuote from admin on May 6, 2018, 12:41 pmWe will build a newer kernel with an updated i40e driver in a couple of days for you to test.
BonsaiJoe
53 Posts
admin
2,930 Posts
May 6, 2018, 7:39 pmQuote from admin on May 6, 2018, 7:39 pmCan you show the output of
ethtool -i eth3
dmesg | grep firmware
lspci -v
Apr 25 20:49:10 node04 kernel: [1476787.943772] libceph: osd75 192.168.42.14:6808 socket closed (con state OPEN)
Apr 25 20:49:10 node04 systemd[1]: ceph-osd@75.service: Main process exited, code=killed, status=11/SEGV
Apr 25 20:49:10 node04 systemd[1]: ceph-osd@75.service: Unit entered failed state.
Apr 25 20:49:10 node04 systemd[1]: ceph-osd@75.service: Failed with result 'signal'.
Apr 25 20:49:10 node04 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20block.db.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20block.db.device appeared twice with different sysfs paths /sys/devices/pci0000:b2/0000:b2:00.0/0000:b3:00.0/host0/target0:0:3/0:0:3:7/block/sdx/sdx7 and /sys/devices/pci0000:b2/0000:b2:00.$
Apr 25 20:49:10 node04 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20block.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20block.device appeared twice with different sysfs paths /sys/devices/pci0000:b2/0000:b2:00.0/0000:b3:00.0/host0/target0:0:1/0:0:1:4/block/sde/sde2 and /sys/devices/pci0000:b2/0000:b2:00.0/0000$
Apr 25 20:49:10 node04 kernel: [1476788.373922] libceph: osd75 down
Apr 25 20:49:10 node04 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20block.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20block.device appeared twice with different sysfs paths /sys/devices/pci0000:b2/0000:b2:00.0/0000:b3:00.0/host0/target0:0:1/0:0:1:4/block/sde/sde2 and /sys/devices/pci0000:b2/0000:b2:00.0/0000$
Apr 25 20:49:30 node04 systemd[1]: ceph-osd@75.service: Service hold-off time over, scheduling restart.
Apr 25 20:49:30 node04 systemd[1]: Stopped Ceph object storage daemon osd.75.
Apr 25 20:49:30 node04 systemd[1]: Starting Ceph object storage daemon osd.75...
Apr 25 20:49:30 node04 systemd[1]: Started Ceph object storage daemon osd.75.
Apr 25 20:49:30 node04 ceph-osd[293686]: starting osd.75 at - osd_data /var/lib/ceph/osd/ps-cl01-75 /var/lib/ceph/osd/ps-cl01-75/journal
Apr 25 20:49:30 node04 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20block.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20block.device appeared twice with different sysfs paths /sys/devices/pci0000:b2/0000:b2:00.0/0000:b3:00.0/host0/target0:0:1/0:0:1:4/block/sde/sde2 and /sys/devices/pci0000:b2/0000:b2:00.0/0000$
Apr 25 20:49:39 node04 ceph-osd[293686]: 2018-04-25 20:49:39.183383 7fe156c97e00 -1 osd.75 26127 log_to_monitors {default=true}
Apr 25 20:49:39 node04 kernel: [1476817.047380] libceph: osd75 up
Hi,
today one OSD crashed and restarted automatically but in the petasan node log cant find anything about this.
any idea why this happend?
kern.log
Apr 25 20:49:10 node04 kernel: [1476787.943772] libceph: osd75 192.168.42.14:6808 socket closed (con state OPEN)
Apr 25 20:49:10 node04 kernel: [1476788.373922] libceph: osd75 down
Apr 25 20:49:39 node04 kernel: [1476817.047380] libceph: osd75 up
syslog:
Apr 25 20:49:10 node04 ceph-osd[3971]: *** Caught signal (Segmentation fault) **
Apr 25 20:49:10 node04 ceph-osd[3971]: in thread 7fbe33048700 thread_name:bstore_mempool
Apr 25 20:49:10 node04 ceph-osd[3971]: ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)
Apr 25 20:49:10 node04 ceph-osd[3971]: 1: (()+0xa65c54) [0x55cf8d431c54]
Apr 25 20:49:10 node04 ceph-osd[3971]: 2: (()+0x11390) [0x7fbe3dbbb390]
Apr 25 20:49:10 node04 ceph-osd[3971]: 3: (BlueStore::TwoQCache::_trim(unsigned long, unsigned long)+0x518) [0x55cf8d2e2da8]
Apr 25 20:49:10 node04 ceph-osd[3971]: 4: (BlueStore::Cache::trim(unsigned long, float, float, float)+0x4e4) [0x55cf8d2b24e4]
Apr 25 20:49:10 node04 ceph-osd[3971]: 5: (BlueStore::MempoolThread::entry()+0x155) [0x55cf8d2b8f85]
Apr 25 20:49:10 node04 ceph-osd[3971]: 6: (()+0x76ba) [0x7fbe3dbb16ba]
Apr 25 20:49:10 node04 ceph-osd[3971]: 7: (clone()+0x6d) [0x7fbe3cc283dd]
Apr 25 20:49:10 node04 ceph-osd[3971]: 2018-04-25 20:49:10.123605 7fbe33048700 -1 *** Caught signal (Segmentation fault) **
Apr 25 20:49:10 node04 ceph-osd[3971]: in thread 7fbe33048700 thread_name:bstore_mempool
Apr 25 20:49:10 node04 ceph-osd[3971]: ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)
Apr 25 20:49:10 node04 ceph-osd[3971]: 1: (()+0xa65c54) [0x55cf8d431c54]
Apr 25 20:49:10 node04 ceph-osd[3971]: 2: (()+0x11390) [0x7fbe3dbbb390]
Apr 25 20:49:10 node04 ceph-osd[3971]: 3: (BlueStore::TwoQCache::_trim(unsigned long, unsigned long)+0x518) [0x55cf8d2e2da8]
Apr 25 20:49:10 node04 ceph-osd[3971]: 4: (BlueStore::Cache::trim(unsigned long, float, float, float)+0x4e4) [0x55cf8d2b24e4]
Apr 25 20:49:10 node04 ceph-osd[3971]: 5: (BlueStore::MempoolThread::entry()+0x155) [0x55cf8d2b8f85]
Apr 25 20:49:10 node04 ceph-osd[3971]: 6: (()+0x76ba) [0x7fbe3dbb16ba]
Apr 25 20:49:10 node04 ceph-osd[3971]: 7: (clone()+0x6d) [0x7fbe3cc283dd]
Apr 25 20:49:10 node04 ceph-osd[3971]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Apr 25 20:49:10 node04 ceph-osd[3971]: 0> 2018-04-25 20:49:10.123605 7fbe33048700 -1 *** Caught signal (Segmentation fault) **
Apr 25 20:49:10 node04 ceph-osd[3971]: in thread 7fbe33048700 thread_name:bstore_mempool
Apr 25 20:49:10 node04 ceph-osd[3971]: ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)
Apr 25 20:49:10 node04 ceph-osd[3971]: 1: (()+0xa65c54) [0x55cf8d431c54]
Apr 25 20:49:10 node04 ceph-osd[3971]: 2: (()+0x11390) [0x7fbe3dbbb390]
Apr 25 20:49:10 node04 ceph-osd[3971]: 3: (BlueStore::TwoQCache::_trim(unsigned long, unsigned long)+0x518) [0x55cf8d2e2da8]
Apr 25 20:49:10 node04 ceph-osd[3971]: 4: (BlueStore::Cache::trim(unsigned long, float, float, float)+0x4e4) [0x55cf8d2b24e4]
Apr 25 20:49:10 node04 ceph-osd[3971]: 5: (BlueStore::MempoolThread::entry()+0x155) [0x55cf8d2b8f85]
Apr 25 20:49:10 node04 ceph-osd[3971]: 6: (()+0x76ba) [0x7fbe3dbb16ba]
Apr 25 20:49:10 node04 ceph-osd[3971]: 7: (clone()+0x6d) [0x7fbe3cc283dd]
Apr 25 20:49:10 node04 ceph-osd[3971]: NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Apr 25 20:49:10 node04 kernel: [1476787.943772] libceph: osd75 192.168.42.14:6808 socket closed (con state OPEN)
Apr 25 20:49:10 node04 systemd[1]: ceph-osd@75.service: Main process exited, code=killed, status=11/SEGV
Apr 25 20:49:10 node04 systemd[1]: ceph-osd@75.service: Unit entered failed state.
Apr 25 20:49:10 node04 systemd[1]: ceph-osd@75.service: Failed with result 'signal'.
Apr 25 20:49:10 node04 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20block.db.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20block.db.device appeared twice with different sysfs paths /sys/devices/pci0000:b2/0000:b2:00.0/0000:b3:00.0/host0/target0:0:3/0:0:3:7/block/sdx/sdx7 and /sys/devices/pci0000:b2/0000:b2:00.$
Apr 25 20:49:10 node04 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20block.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20block.device appeared twice with different sysfs paths /sys/devices/pci0000:b2/0000:b2:00.0/0000:b3:00.0/host0/target0:0:1/0:0:1:4/block/sde/sde2 and /sys/devices/pci0000:b2/0000:b2:00.0/0000$
Apr 25 20:49:10 node04 kernel: [1476788.373922] libceph: osd75 down
Apr 25 20:49:10 node04 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20block.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20block.device appeared twice with different sysfs paths /sys/devices/pci0000:b2/0000:b2:00.0/0000:b3:00.0/host0/target0:0:1/0:0:1:4/block/sde/sde2 and /sys/devices/pci0000:b2/0000:b2:00.0/0000$
Apr 25 20:49:30 node04 systemd[1]: ceph-osd@75.service: Service hold-off time over, scheduling restart.
Apr 25 20:49:30 node04 systemd[1]: Stopped Ceph object storage daemon osd.75.
Apr 25 20:49:30 node04 systemd[1]: Starting Ceph object storage daemon osd.75...
Apr 25 20:49:30 node04 systemd[1]: Started Ceph object storage daemon osd.75.
Apr 25 20:49:30 node04 ceph-osd[293686]: starting osd.75 at - osd_data /var/lib/ceph/osd/ps-cl01-75 /var/lib/ceph/osd/ps-cl01-75/journal
Apr 25 20:49:30 node04 systemd[1]: dev-disk-by\x2dpartlabel-ceph\x5cx20block.device: Dev dev-disk-by\x2dpartlabel-ceph\x5cx20block.device appeared twice with different sysfs paths /sys/devices/pci0000:b2/0000:b2:00.0/0000:b3:00.0/host0/target0:0:1/0:0:1:4/block/sde/sde2 and /sys/devices/pci0000:b2/0000:b2:00.0/0000$
Apr 25 20:49:39 node04 ceph-osd[293686]: 2018-04-25 20:49:39.183383 7fe156c97e00 -1 osd.75 26127 log_to_monitors {default=true}
Apr 25 20:49:39 node04 kernel: [1476817.047380] libceph: osd75 up
admin
2,930 Posts
Quote from admin on April 25, 2018, 9:56 pm
It is a Ceph issue:
https://tracker.ceph.com/issues/21259
The logs match the 12.2.4 errors reported in:
https://tracker.ceph.com/issues/21259#note-24
As stated there, it seems to be fixed in 12.2.5, which was released just 2 days ago, but please do not upgrade yourself, since we patch the sources to include VMware-specific code.
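For reference, once an update is available, the Ceph version each daemon is actually running can be confirmed from a management node with, for example:
ceph versions
ceph tell osd.* version
(ceph versions prints a per-daemon summary and should be available on Luminous; both are standard Ceph CLI commands, not PetaSAN-specific.)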
BonsaiJoe
53 Posts
Quote from BonsaiJoe on April 26, 2018, 9:40 am
Thanks for the fast reply ... any idea when you can provide a patched version?
admin
2,930 Posts
Quote from admin on April 27, 2018, 5:22 pm
It is hard for us to do an update for just this issue: it was labeled as minor severity by Ceph, and a release takes us some effort, such as a complete test cycle.
If you see this again or think it is high severity, do let us know.
BonsaiJoe
53 Posts
Quote from BonsaiJoe on May 2, 2018, 2:11 pm
Today 12 of 100 OSDs crashed in parallel on 3 of 5 nodes; recovery took approx. 3 minutes.
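A quick way to tell whether those OSDs actually crashed (like the osd.75 segfault earlier in this thread) or were merely marked down by the monitors is to grep the logs on each affected node, for example (the OSD id below is just a placeholder):
grep "Caught signal" /var/log/syslog
journalctl -u ceph-osd@0 --since "2018-05-02" | grep -iE 'signal|SEGV'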
BonsaiJoe
53 Posts
Quote from BonsaiJoe on May 2, 2018, 7:34 pm
After some more investigation we found that this looks like another bug, this time in the NIC driver.
kern.log shows:
May 2 15:17:04 node03 kernel: [695525.704075] i40e 0000:17:00.1: TX driver issue detected, PF reset issued
May 2 15:17:04 node03 kernel: [695525.835170] i40e 0000:17:00.1: i40e_ptp_init: PTP not supported on eth3
May 2 15:17:04 node03 kernel: [695525.911684] i40e 0000:17:00.1 eth3: speed changed to 0 for port eth3
May 2 15:17:04 node03 kernel: [695525.913112] bond1: link status up again after 0 ms for interface eth3
Maybe it's this bug:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1700834
Do I see this right: because one NIC dropped out of the bond, Ceph lost its connection to the cluster for some seconds and reported the OSDs as down? The strange thing is that Ceph only started to mark the OSDs down 26 seconds after the bond had recovered ... any idea?
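A plausible explanation for the ~26 second delay, assuming default settings: the monitors only mark an OSD down after peer OSDs have reported missed heartbeats for longer than osd_heartbeat_grace (20 s by default; the ps-cl01.log below indeed logs the failures with ">= grace 20.000000" and at least two reporters from other hosts). So the down markings trail the start of the heartbeat loss by at least the grace period, and although the bond reported the link up again after 0 ms, the consul errors below suggest connectivity was actually disturbed for quite a bit longer. The values in effect can be checked via the admin socket on the node hosting the OSD (osd.0 here is just an example):
ceph daemon osd.0 config get osd_heartbeat_grace
ceph daemon osd.0 config get osd_heartbeat_interval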
from syslog (cleaned of hundreds of "heartbeat_check: no reply from xx" messages)
May 2 15:17:01 node03 CRON[4152775]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
May 2 15:17:04 node03 kernel: [695525.704075] i40e 0000:17:00.1: TX driver issue detected, PF reset issued
May 2 15:17:04 node03 kernel: [695525.835170] i40e 0000:17:00.1: i40e_ptp_init: PTP not supported on eth3
May 2 15:17:04 node03 kernel: [695525.911684] i40e 0000:17:00.1 eth3: speed changed to 0 for port eth3
May 2 15:17:04 node03 kernel: [695525.913112] bond1: link status up again after 0 ms for interface eth3
May 2 15:17:16 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 2.500179394s
May 2 15:17:19 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 4.96820263s
May 2 15:17:21 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 7.45790572s
May 2 15:17:24 node03 consul[3360]: raft: aborting pipeline replication to peer {Voter 192.168.42.12:8300 192.168.42.12:8300}
May 2 15:17:24 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.11:8300: read tcp 192.168.42.13:37038->192.168.42.11:8300: i/o timeout
May 2 15:17:24 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: read tcp 192.168.42.13:55244->192.168.42.12:8300: i/o timeout
May 2 15:17:26 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 2.50027097s
May 2 15:17:29 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 4.984897637s
May 2 15:17:31 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 7.420438172s
May 2 15:17:32 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:34 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:35 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.11:8300: dial tcp 192.168.42.11:8300: i/o timeout
May 2 15:17:35 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:38 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:44 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:46 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:46 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:47 node03 consul[3360]: yamux: keepalive failed: i/o deadline reached
May 2 15:17:47 node03 consul[3360]: consul.rpc: multiplex conn accept failed: keepalive timeout from=192.168.42.12:55674
May 2 15:17:51 node03 consul[3360]: memberlist: Push/Pull with node02 failed: dial tcp 192.168.42.12:8301: i/o timeout
May 2 15:17:53 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:54 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:57 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:01 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:18:04 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:04 node03 consul[3360]: raft: peer {Voter 192.168.42.12:8300 192.168.42.12:8300} has newer term, stopping replication
May 2 15:18:04 node03 consul[3360]: raft: Node at 192.168.42.13:8300 [Follower] entering Follower state (Leader: "")
May 2 15:18:04 node03 consul[3360]: consul: cluster leadership lost
May 2 15:18:04 node03 consul[3360]: raft: aborting pipeline replication to peer {Voter 192.168.42.11:8300 192.168.42.11:8300}
May 2 15:18:04 node03 consul[3360]: consul.coordinate: Batch update failed: node is not the leader
May 2 15:18:08 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:09 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since our last index is greater (3213435, 3213428)
May 2 15:18:13 node03 consul[3360]: http: Request GET /v1/kv/PetaSAN/Services/ClusterLeader?index=2620988&wait=20s, error: No cluster leader from=127.0.0.1:53994
May 2 15:18:13 node03 consul[3360]: raft: Heartbeat timeout from "" reached, starting election
May 2 15:18:13 node03 consul[3360]: raft: Node at 192.168.42.13:8300 [Candidate] entering Candidate state in term 184
May 2 15:18:13 node03 consul[3360]: raft: Election won. Tally: 2
May 2 15:18:13 node03 consul[3360]: raft: Node at 192.168.42.13:8300 [Leader] entering Leader state
May 2 15:18:13 node03 consul[3360]: raft: Added peer 192.168.42.11:8300, starting replication
May 2 15:18:13 node03 consul[3360]: raft: Added peer 192.168.42.12:8300, starting replication
May 2 15:18:13 node03 consul[3360]: consul: cluster leadership acquired
May 2 15:18:13 node03 consul[3360]: consul: New leader elected: node03
May 2 15:18:13 node03 consul[3360]: raft: pipelining replication to peer {Voter 192.168.42.11:8300 192.168.42.11:8300}
May 2 15:18:23 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:23 node03 consul[3360]: raft: AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300} rejected, sending older logs (next: 3213429)
May 2 15:18:23 node03 consul[3360]: raft: pipelining replication to peer {Voter 192.168.42.12:8300 192.168.42.12:8300}
from ps-cl01.log
2018-05-02 15:17:30.689945 mon.node01 mon.0 192.168.42.11:6789/0 231117 : cluster [INF] osd.0 failed (root=default,host=node03) (2 reporters from different host after 20.000325 >= grace 20.000000)
2018-05-02 15:17:30.690389 mon.node01 mon.0 192.168.42.11:6789/0 231119 : cluster [INF] osd.4 failed (root=default,host=node03) (2 reporters from different host after 20.000540 >= grace 20.000000)
2018-05-02 15:17:30.690758 mon.node01 mon.0 192.168.42.11:6789/0 231121 : cluster [INF] osd.6 failed (root=default,host=node03) (2 reporters from different host after 20.000898 >= grace 20.000000)
2018-05-02 15:17:30.691040 mon.node01 mon.0 192.168.42.11:6789/0 231123 : cluster [INF] osd.8 failed (root=default,host=node03) (2 reporters from different host after 20.001124 >= grace 20.000000)
2018-05-02 15:17:30.691317 mon.node01 mon.0 192.168.42.11:6789/0 231125 : cluster [INF] osd.10 failed (root=default,host=node03) (2 reporters from different host after 20.001331 >= grace 20.000000)
2018-05-02 15:17:30.691936 mon.node01 mon.0 192.168.42.11:6789/0 231129 : cluster [INF] osd.15 failed (root=default,host=node03) (2 reporters from different host after 20.001689 >= grace 20.000000)
2018-05-02 15:17:30.728612 mon.node01 mon.0 192.168.42.11:6789/0 231134 : cluster [INF] osd.5 failed (root=default,host=node03) (2 reporters from different host after 20.000418 >= grace 20.000000)
2018-05-02 15:17:30.728887 mon.node01 mon.0 192.168.42.11:6789/0 231136 : cluster [INF] osd.7 failed (root=default,host=node03) (2 reporters from different host after 20.000640 >= grace 20.000000)
2018-05-02 15:17:30.729478 mon.node01 mon.0 192.168.42.11:6789/0 231140 : cluster [INF] osd.11 failed (root=default,host=node03) (2 reporters from different host after 20.000955 >= grace 20.000000)
2018-05-02 15:17:31.071749 mon.node01 mon.0 192.168.42.11:6789/0 231188 : cluster [INF] osd.9 failed (root=default,host=node03) (2 reporters from different host after 20.000476 >= grace 20.000000)
2018-05-02 15:17:31.072294 mon.node01 mon.0 192.168.42.11:6789/0 231192 : cluster [INF] osd.14 failed (root=default,host=node03) (2 reporters from different host after 20.000645 >= grace 20.000000)
2018-05-02 15:17:31.072648 mon.node01 mon.0 192.168.42.11:6789/0 231195 : cluster [INF] osd.17 failed (root=default,host=node03) (2 reporters from different host after 20.000775 >= grace 20.000000)
2018-05-02 15:17:31.073013 mon.node01 mon.0 192.168.42.11:6789/0 231198 : cluster [INF] osd.19 failed (root=default,host=node03) (2 reporters from different host after 20.000923 >= grace 20.000000)
2018-05-02 15:17:31.145738 mon.node01 mon.0 192.168.42.11:6789/0 231201 : cluster [INF] osd.2 failed (root=default,host=node03) (2 reporters from different host after 20.000246 >= grace 20.000000)
2018-05-02 15:17:31.276822 mon.node01 mon.0 192.168.42.11:6789/0 231213 : cluster [WRN] Health check failed: 14 osds down (OSD_DOWN)
2018-05-02 15:17:31.297002 mon.node01 mon.0 192.168.42.11:6789/0 231215 : cluster [INF] osd.1 failed (root=default,host=node03) (2 reporters from different host after 20.066715 >= grace 20.000000)
2018-05-02 15:17:31.297100 mon.node01 mon.0 192.168.42.11:6789/0 231217 : cluster [INF] osd.3 failed (root=default,host=node03) (2 reporters from different host after 20.066565 >= grace 20.000000)
2018-05-02 15:17:31.297290 mon.node01 mon.0 192.168.42.11:6789/0 231219 : cluster [INF] osd.12 failed (root=default,host=node03) (2 reporters from different host after 20.066392 >= grace 20.000000)
2018-05-02 15:17:31.297562 mon.node01 mon.0 192.168.42.11:6789/0 231222 : cluster [INF] osd.13 failed (root=default,host=node03) (2 reporters from different host after 20.010716 >= grace 20.000000)
2018-05-02 15:17:31.297644 mon.node01 mon.0 192.168.42.11:6789/0 231224 : cluster [INF] osd.16 failed (root=default,host=node03) (2 reporters from different host after 20.010419 >= grace 20.000000)
2018-05-02 15:17:31.342169 mon.node01 mon.0 192.168.42.11:6789/0 231245 : cluster [INF] osd.18 failed (root=default,host=node03) (2 reporters from different host after 20.003455 >= grace 20.000000)
2018-05-02 15:17:31.900209 osd.17 osd.17 192.168.42.13:6814/4554 347 : cluster [WRN] Monitor daemon marked osd.17 down, but it is still running
2018-05-02 15:17:32.347387 mon.node01 mon.0 192.168.42.11:6789/0 231324 : cluster [INF] osd.5 192.168.42.13:6808/4150 boot
2018-05-02 15:17:32.122997 osd.14 osd.14 192.168.42.13:6812/4444 351 : cluster [WRN] Monitor daemon marked osd.14 down, but it is still running
2018-05-02 15:17:33.417134 mon.node01 mon.0 192.168.42.11:6789/0 231341 : cluster [INF] osd.19 192.168.42.13:6838/7714 boot
2018-05-02 15:17:32.150394 osd.15 osd.15 192.168.42.13:6818/4787 385 : cluster [WRN] Monitor daemon marked osd.15 down, but it is still running
2018-05-02 15:17:34.449728 mon.node01 mon.0 192.168.42.11:6789/0 231360 : cluster [WRN] Health check failed: Reduced data availability: 182 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:34.449810 mon.node01 mon.0 192.168.42.11:6789/0 231361 : cluster [WRN] Health check failed: Degraded data redundancy: 266928/11883735 objects degraded (2.246%), 265 pgs unclean, 276 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:34.449892 mon.node01 mon.0 192.168.42.11:6789/0 231362 : cluster [WRN] Health check failed: 6 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:34.462178 mon.node01 mon.0 192.168.42.11:6789/0 231363 : cluster [INF] osd.14 192.168.42.13:6812/4444 boot
2018-05-02 15:17:34.462279 mon.node01 mon.0 192.168.42.11:6789/0 231364 : cluster [INF] osd.15 192.168.42.13:6818/4787 boot
2018-05-02 15:17:34.462350 mon.node01 mon.0 192.168.42.11:6789/0 231365 : cluster [INF] osd.1 192.168.42.13:6832/7029 boot
2018-05-02 15:17:34.462398 mon.node01 mon.0 192.168.42.11:6789/0 231366 : cluster [INF] osd.11 192.168.42.13:6804/3915 boot
2018-05-02 15:17:34.462440 mon.node01 mon.0 192.168.42.11:6789/0 231367 : cluster [INF] osd.9 192.168.42.13:6830/6607 boot
2018-05-02 15:17:34.462483 mon.node01 mon.0 192.168.42.11:6789/0 231368 : cluster [INF] osd.0 192.168.42.13:6828/5616 boot
2018-05-02 15:17:34.526923 osd.13 osd.13 192.168.42.13:6820/4934 391 : cluster [WRN] Monitor daemon marked osd.13 down, but it is still running
2018-05-02 15:17:31.950776 osd.9 osd.9 192.168.42.13:6830/6607 439 : cluster [WRN] Monitor daemon marked osd.9 down, but it is still running
2018-05-02 15:17:35.528042 mon.node01 mon.0 192.168.42.11:6789/0 231385 : cluster [INF] osd.13 192.168.42.13:6820/4934 boot
2018-05-02 15:17:35.528149 mon.node01 mon.0 192.168.42.11:6789/0 231386 : cluster [INF] osd.4 192.168.42.13:6816/4672 boot
2018-05-02 15:17:36.561053 mon.node01 mon.0 192.168.42.11:6789/0 231403 : cluster [WRN] Health check update: 9 osds down (OSD_DOWN)
2018-05-02 15:17:36.577048 mon.node01 mon.0 192.168.42.11:6789/0 231404 : cluster [INF] osd.12 192.168.42.13:6824/5233 boot
2018-05-02 15:17:31.901987 osd.7 osd.7 192.168.42.13:6834/7201 353 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running
2018-05-02 15:17:33.112422 osd.3 osd.3 192.168.42.13:6802/3776 433 : cluster [WRN] Monitor daemon marked osd.3 down, but it is still running
2018-05-02 15:17:33.454019 osd.1 osd.1 192.168.42.13:6832/7029 453 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running
2018-05-02 15:17:38.145518 mon.node01 mon.0 192.168.42.11:6789/0 231444 : cluster [INF] osd.16 192.168.42.13:6822/5084 boot
2018-05-02 15:17:31.878606 osd.6 osd.6 192.168.42.13:6806/4031 379 : cluster [WRN] Monitor daemon marked osd.6 down, but it is still running
2018-05-02 15:17:37.074577 osd.16 osd.16 192.168.42.13:6822/5084 343 : cluster [WRN] Monitor daemon marked osd.16 down, but it is still running
2018-05-02 15:17:39.450525 mon.node01 mon.0 192.168.42.11:6789/0 231512 : cluster [WRN] Health check update: Reduced data availability: 156 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:39.450603 mon.node01 mon.0 192.168.42.11:6789/0 231513 : cluster [WRN] Health check update: Degraded data redundancy: 780173/11883735 objects degraded (6.565%), 409 pgs unclean, 826 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:39.450648 mon.node01 mon.0 192.168.42.11:6789/0 231514 : cluster [WRN] Health check update: 20 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:33.481474 osd.0 osd.0 192.168.42.13:6828/5616 375 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running
2018-05-02 15:17:34.835065 osd.4 osd.4 192.168.42.13:6816/4672 398 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running
2018-05-02 15:17:31.758601 osd.11 osd.11 192.168.42.13:6804/3915 325 : cluster [WRN] Monitor daemon marked osd.11 down, but it is still running
2018-05-02 15:17:31.854285 osd.5 osd.5 192.168.42.13:6808/4150 339 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running
2018-05-02 15:17:31.927593 osd.8 osd.8 192.168.42.13:6826/5418 327 : cluster [WRN] Monitor daemon marked osd.8 down, but it is still running
2018-05-02 15:17:32.264540 osd.2 osd.2 192.168.42.13:6810/4329 373 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running
2018-05-02 15:17:32.504532 osd.19 osd.19 192.168.42.13:6838/7714 367 : cluster [WRN] Monitor daemon marked osd.19 down, but it is still running
2018-05-02 15:17:35.304926 osd.12 osd.12 192.168.42.13:6824/5233 409 : cluster [WRN] Monitor daemon marked osd.12 down, but it is still running
2018-05-02 15:17:41.201037 mon.node01 mon.0 192.168.42.11:6789/0 231515 : cluster [INF] osd.10 192.168.42.13:6836/7465 boot
2018-05-02 15:17:37.224788 osd.18 osd.18 192.168.42.13:6800/3571 353 : cluster [WRN] Monitor daemon marked osd.18 down, but it is still running
2018-05-02 15:17:44.451567 mon.node01 mon.0 192.168.42.11:6789/0 231566 : cluster [WRN] Health check update: 7 osds down (OSD_DOWN)
2018-05-02 15:17:44.451681 mon.node01 mon.0 192.168.42.11:6789/0 231567 : cluster [WRN] Health check update: Reduced data availability: 119 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:44.451740 mon.node01 mon.0 192.168.42.11:6789/0 231568 : cluster [WRN] Health check update: Degraded data redundancy: 567324/11883735 objects degraded (4.774%), 376 pgs unclean, 598 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:44.451787 mon.node01 mon.0 192.168.42.11:6789/0 231569 : cluster [WRN] Health check update: 8 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:39.947717 osd.10 osd.10 192.168.42.13:6836/7465 363 : cluster [WRN] Monitor daemon marked osd.10 down, but it is still running
2018-05-02 15:17:49.452743 mon.node01 mon.0 192.168.42.11:6789/0 231571 : cluster [WRN] Health check update: Reduced data availability: 124 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:49.452802 mon.node01 mon.0 192.168.42.11:6789/0 231572 : cluster [WRN] Health check update: Degraded data redundancy: 545427/11883735 objects degraded (4.590%), 400 pgs unclean, 569 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:49.452830 mon.node01 mon.0 192.168.42.11:6789/0 231573 : cluster [WRN] Health check update: 5 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:45.648204 osd.22 osd.22 192.168.42.11:6818/5595 347 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.352068 secs
2018-05-02 15:17:45.648216 osd.22 osd.22 192.168.42.11:6818/5595 348 : cluster [WRN] slow request 30.352068 seconds old, received at 2018-05-02 15:17:15.296043: osd_op(client.6526841.0:199030532 1.f58 1:1afd369a:::rbd_data.51ffd188bfb19.0000000000a9a36d:head [set-alloc-hint object_size 4194304 write_size 4194304,write 931840~1536] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:48.674430 osd.57 osd.57 192.168.42.12:6824/5788 369 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.028853 secs
2018-05-02 15:17:48.674449 osd.57 osd.57 192.168.42.12:6824/5788 370 : cluster [WRN] slow request 30.028853 seconds old, received at 2018-05-02 15:17:18.645456: osd_op(client.6526841.0:199030675 1.580 1:01afed70:::rbd_data.51ffd188bfb19.00000000000920ad:head [set-alloc-hint object_size 4194304 write_size 4194304,write 2617344~3584] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:46.337758 osd.37 osd.37 192.168.42.11:6812/4603 313 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.840941 secs
2018-05-02 15:17:46.337771 osd.37 osd.37 192.168.42.11:6812/4603 314 : cluster [WRN] slow request 30.840941 seconds old, received at 2018-05-02 15:17:15.496724: osd_op(client.5168842.0:565598091 1.cd6 1:6b3ac828:::rbd_data.51ffd188bfb19.0000000000a994d6:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1515520~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:45.668260 osd.51 osd.51 192.168.42.12:6810/4338 363 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.950745 secs
2018-05-02 15:17:45.668273 osd.51 osd.51 192.168.42.12:6810/4338 364 : cluster [WRN] slow request 30.950745 seconds old, received at 2018-05-02 15:17:14.717398: osd_op(client.6596773.0:183937335 1.a33 1:cc5b61d1:::rbd_data.1e815e6eb1d34.000000000056c0b3:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1323520~32768] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:52.669812 osd.51 osd.51 192.168.42.12:6810/4338 365 : cluster [WRN] 3 slow requests, 2 included below; oldest blocked for > 37.952326 secs
2018-05-02 15:17:52.669821 osd.51 osd.51 192.168.42.12:6810/4338 366 : cluster [WRN] slow request 30.265711 seconds old, received at 2018-05-02 15:17:22.404014: osd_op(client.6596773.0:183937651 1.aab 1:d557b9bd:::rbd_data.1e815e6eb1d34.00000000005effb8:head [set-alloc-hint object_size 4194304 write_size 4194304,write 3551232~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:52.669853 osd.51 osd.51 192.168.42.12:6810/4338 367 : cluster [WRN] slow request 30.259918 seconds old, received at 2018-05-02 15:17:22.409807: osd_op(client.6526841.0:199030712 1.227 1:e44f9365:::rbd_data.51ffd188bfb19.000000000041d7c0:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1818624~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:54.499661 mon.node01 mon.0 192.168.42.11:6789/0 231630 : cluster [WRN] Health check update: 5 osds down (OSD_DOWN)
2018-05-02 15:17:54.502279 mon.node01 mon.0 192.168.42.11:6789/0 231631 : cluster [WRN] Health check update: Reduced data availability: 152 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:54.502352 mon.node01 mon.0 192.168.42.11:6789/0 231632 : cluster [WRN] Health check update: Degraded data redundancy: 545424/11883735 objects degraded (4.590%), 470 pgs unclean, 564 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:54.502394 mon.node01 mon.0 192.168.42.11:6789/0 231633 : cluster [WRN] Health check update: 9 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:54.511364 mon.node01 mon.0 192.168.42.11:6789/0 231634 : cluster [INF] osd.7 192.168.42.13:6834/7201 boot
2018-05-02 15:17:54.511419 mon.node01 mon.0 192.168.42.11:6789/0 231635 : cluster [INF] osd.2 192.168.42.13:6810/4329 boot
2018-05-02 15:17:55.560707 mon.node01 mon.0 192.168.42.11:6789/0 231772 : cluster [INF] osd.3 192.168.42.13:6802/3776 boot
2018-05-02 15:17:58.644882 mon.node01 mon.0 192.168.42.11:6789/0 232360 : cluster [INF] osd.18 192.168.42.13:6800/3571 boot
2018-05-02 15:17:58.997648 mon.node01 mon.0 192.168.42.11:6789/0 232419 : cluster [WRN] overall HEALTH_WARN 3 osds down; Reduced data availability: 187 pgs peering; Degraded data redundancy: 393287/11883735 objects degraded (3.309%), 464 pgs unclean, 414 pgs degraded; 20 slow requests are blocked > 32 sec
2018-05-02 15:17:59.502999 mon.node01 mon.0 192.168.42.11:6789/0 232447 : cluster [WRN] Health check update: 3 osds down (OSD_DOWN)
2018-05-02 15:17:59.503131 mon.node01 mon.0 192.168.42.11:6789/0 232448 : cluster [WRN] Health check update: Reduced data availability: 187 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:59.503190 mon.node01 mon.0 192.168.42.11:6789/0 232449 : cluster [WRN] Health check update: Degraded data redundancy: 393287/11883735 objects degraded (3.309%), 464 pgs unclean, 414 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:59.503246 mon.node01 mon.0 192.168.42.11:6789/0 232450 : cluster [WRN] Health check update: 20 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:04.507977 mon.node01 mon.0 192.168.42.11:6789/0 232590 : cluster [WRN] Health check update: Reduced data availability: 211 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:04.508058 mon.node01 mon.0 192.168.42.11:6789/0 232591 : cluster [WRN] Health check update: Degraded data redundancy: 242695/11883735 objects degraded (2.042%), 477 pgs unclean, 252 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:04.508106 mon.node01 mon.0 192.168.42.11:6789/0 232592 : cluster [WRN] Health check update: 23 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:05.877862 osd.61 osd.61 192.168.42.14:6814/4904 1237 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.173325 secs
2018-05-02 15:18:05.877874 osd.61 osd.61 192.168.42.14:6814/4904 1238 : cluster [WRN] slow request 30.173325 seconds old, received at 2018-05-02 15:17:35.704479: osd_op(client.6596773.0:183937862 1.c0e 1.d2861c0e (undecoded) ondisk+write+known_if_redirected e26136) currently waiting for peered
2018-05-02 15:18:02.797168 osd.58 osd.58 192.168.42.12:6804/3828 331 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.363481 secs
2018-05-02 15:18:02.797179 osd.58 osd.58 192.168.42.12:6804/3828 332 : cluster [WRN] slow request 30.363481 seconds old, received at 2018-05-02 15:17:32.433619: osd_op(client.6596773.0:183937757 1.a86 1.a57fca86 (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:09.513538 mon.node01 mon.0 192.168.42.11:6789/0 232608 : cluster [WRN] Health check update: Reduced data availability: 252 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:09.513599 mon.node01 mon.0 192.168.42.11:6789/0 232609 : cluster [WRN] Health check update: Degraded data redundancy: 239770/11883735 objects degraded (2.018%), 568 pgs unclean, 248 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:09.513636 mon.node01 mon.0 192.168.42.11:6789/0 232610 : cluster [WRN] Health check update: 25 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:02.817820 osd.77 osd.77 192.168.42.14:6828/6313 1117 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.394863 secs
2018-05-02 15:18:02.817831 osd.77 osd.77 192.168.42.14:6828/6313 1118 : cluster [WRN] slow request 30.394863 seconds old, received at 2018-05-02 15:17:32.422898: osd_op(client.6596773.0:183937750 1.6df 1.cf9bc6df (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:02.476590 osd.5 osd.5 192.168.42.13:6808/4150 341 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.064270 secs
2018-05-02 15:18:02.476601 osd.5 osd.5 192.168.42.13:6808/4150 342 : cluster [WRN] slow request 30.064270 seconds old, received at 2018-05-02 15:17:32.412264: osd_op(client.6596773.0:183937421 1.116 1.1c1d4116 (undecoded) ondisk+retry+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:02.651760 osd.22 osd.22 192.168.42.11:6818/5595 349 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 47.355668 secs
2018-05-02 15:18:02.651779 osd.22 osd.22 192.168.42.11:6818/5595 350 : cluster [WRN] slow request 30.174091 seconds old, received at 2018-05-02 15:17:32.477620: osd_op(client.5168842.0:565598316 1.f58 1.596cbf58 (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:02.797120 osd.97 osd.97 192.168.42.15:6834/31139 554 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.381988 secs
2018-05-02 15:18:02.797141 osd.97 osd.97 192.168.42.15:6834/31139 555 : cluster [WRN] slow request 30.381988 seconds old, received at 2018-05-02 15:17:32.415062: osd_op(client.6526841.0:199030755 1.569 1.b0c48569 (undecoded) ondisk+read+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:04.413088 osd.19 osd.19 192.168.42.13:6838/7714 369 : cluster [WRN] 4 slow requests, 4 included below; oldest blocked for > 30.948256 secs
2018-05-02 15:18:04.413100 osd.19 osd.19 192.168.42.13:6838/7714 370 : cluster [WRN] slow request 30.484589 seconds old, received at 2018-05-02 15:17:33.928388: osd_op(client.6526841.0:199030843 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:04.413107 osd.19 osd.19 192.168.42.13:6838/7714 371 : cluster [WRN] slow request 30.465765 seconds old, received at 2018-05-02 15:17:33.947212: osd_op(client.6596773.0:183937784 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:04.413114 osd.19 osd.19 192.168.42.13:6838/7714 372 : cluster [WRN] slow request 30.425245 seconds old, received at 2018-05-02 15:17:33.987732: osd_op(client.6596773.0:183937785 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:04.413119 osd.19 osd.19 192.168.42.13:6838/7714 373 : cluster [WRN] slow request 30.948256 seconds old, received at 2018-05-02 15:17:33.464721: osd_op(client.6526841.0:199030727 1.71e 1.5c1ee71e (undecoded) ondisk+retry+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:05.413744 osd.19 osd.19 192.168.42.13:6838/7714 374 : cluster [WRN] 7 slow requests, 3 included below; oldest blocked for > 31.948951 secs
2018-05-02 15:18:05.413755 osd.19 osd.19 192.168.42.13:6838/7714 375 : cluster [WRN] slow request 30.999699 seconds old, received at 2018-05-02 15:17:34.413974: osd_op(client.6526841.0:199030846 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:05.413760 osd.19 osd.19 192.168.42.13:6838/7714 376 : cluster [WRN] slow request 30.926653 seconds old, received at 2018-05-02 15:17:34.487019: osd_op(client.6603196.0:182818149 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:05.413769 osd.19 osd.19 192.168.42.13:6838/7714 377 : cluster [WRN] slow request 30.914518 seconds old, received at 2018-05-02 15:17:34.499154: osd_op(client.6526841.0:199030848 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:06.284732 osd.4 osd.4 192.168.42.13:6816/4672 400 : cluster [WRN] 2 slow requests, 2 included below; oldest blocked for > 30.629189 secs
2018-05-02 15:18:06.284743 osd.4 osd.4 192.168.42.13:6816/4672 401 : cluster [WRN] slow request 30.629189 seconds old, received at 2018-05-02 15:17:35.655473: osd_op(client.6596773.0:183937792 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26136) currently waiting for peered
2018-05-02 15:18:06.284756 osd.4 osd.4 192.168.42.13:6816/4672 402 : cluster [WRN] slow request 30.260563 seconds old, received at 2018-05-02 15:17:36.024099: osd_op(client.6596773.0:183937865 1.2dd 1.8dea32dd (undecoded) ondisk+read+known_if_redirected e26136) currently waiting for peered
2018-05-02 15:18:08.285983 osd.4 osd.4 192.168.42.13:6816/4672 403 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 32.630454 secs
2018-05-02 15:18:08.285995 osd.4 osd.4 192.168.42.13:6816/4672 404 : cluster [WRN] slow request 30.278627 seconds old, received at 2018-05-02 15:17:38.007300: osd_op(client.6596773.0:183937868 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26138) currently waiting for peered
2018-05-02 15:18:03.279070 osd.82 osd.82 192.168.42.15:6804/8697 409 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.845140 secs
2018-05-02 15:18:03.279083 osd.82 osd.82 192.168.42.15:6804/8697 410 : cluster [WRN] slow request 30.845140 seconds old, received at 2018-05-02 15:17:32.433862: osd_op(client.6526841.0:199030770 1.1b9 1.2f4701b9 (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:13.808180 mon.node01 mon.0 192.168.42.11:6789/0 232619 : cluster [WRN] Health check update: 2 osds down (OSD_DOWN)
2018-05-02 15:18:13.825306 mon.node01 mon.0 192.168.42.11:6789/0 232620 : cluster [INF] osd.6 192.168.42.13:6806/4031 boot
2018-05-02 15:18:14.518576 mon.node01 mon.0 192.168.42.11:6789/0 232641 : cluster [WRN] Health check update: Reduced data availability: 311 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:14.518655 mon.node01 mon.0 192.168.42.11:6789/0 232642 : cluster [WRN] Health check update: Degraded data redundancy: 239770/11883735 objects degraded (2.018%), 706 pgs unclean, 248 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:14.518708 mon.node01 mon.0 192.168.42.11:6789/0 232643 : cluster [WRN] Health check update: 27 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:17.801511 osd.58 osd.58 192.168.42.12:6804/3828 333 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 45.367834 secs
2018-05-02 15:18:17.801519 osd.58 osd.58 192.168.42.12:6804/3828 334 : cluster [WRN] slow request 30.183339 seconds old, received at 2018-05-02 15:17:47.618114: osd_op(client.6603196.0:182818173 1.a86 1.a57fca86 (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:19.525411 mon.node01 mon.0 192.168.42.11:6789/0 232881 : cluster [WRN] Health check update: Reduced data availability: 358 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:19.525473 mon.node01 mon.0 192.168.42.11:6789/0 232882 : cluster [WRN] Health check update: Degraded data redundancy: 188544/11883735 objects degraded (1.587%), 725 pgs unclean, 199 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:19.525510 mon.node01 mon.0 192.168.42.11:6789/0 232883 : cluster [WRN] Health check update: 34 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:19.525823 mon.node01 mon.0 192.168.42.11:6789/0 232884 : cluster [INF] osd.0 failed (root=default,host=node03) (4 reporters from different host after 22.185499 >= grace 20.000000)
2018-05-02 15:18:19.526157 mon.node01 mon.0 192.168.42.11:6789/0 232885 : cluster [INF] osd.1 failed (root=default,host=node03) (4 reporters from different host after 20.729223 >= grace 20.000000)
2018-05-02 15:18:19.527041 mon.node01 mon.0 192.168.42.11:6789/0 232886 : cluster [INF] osd.5 failed (root=default,host=node03) (4 reporters from different host after 21.722805 >= grace 20.000000)
2018-05-02 15:18:19.527435 mon.node01 mon.0 192.168.42.11:6789/0 232887 : cluster [INF] osd.9 failed (root=default,host=node03) (4 reporters from different host after 20.729151 >= grace 20.000000)
2018-05-02 15:18:19.527946 mon.node01 mon.0 192.168.42.11:6789/0 232888 : cluster [INF] osd.11 failed (root=default,host=node03) (4 reporters from different host after 20.399416 >= grace 20.000000)
2018-05-02 15:18:19.528543 mon.node01 mon.0 192.168.42.11:6789/0 232889 : cluster [INF] osd.14 failed (root=default,host=node03) (4 reporters from different host after 20.729112 >= grace 20.000000)
2018-05-02 15:18:19.528831 mon.node01 mon.0 192.168.42.11:6789/0 232890 : cluster [INF] osd.15 failed (root=default,host=node03) (4 reporters from different host after 20.399139 >= grace 20.000000)
2018-05-02 15:18:19.529181 mon.node01 mon.0 192.168.42.11:6789/0 232891 : cluster [INF] osd.19 failed (root=default,host=node03) (4 reporters from different host after 21.722456 >= grace 20.000000)
2018-05-02 15:18:19.546706 mon.node01 mon.0 192.168.42.11:6789/0 232892 : cluster [WRN] Health check update: 10 osds down (OSD_DOWN)
2018-05-02 15:18:13.289192 osd.4 osd.4 192.168.42.13:6816/4672 405 : cluster [WRN] 4 slow requests, 1 included below; oldest blocked for > 37.633638 secs
2018-05-02 15:18:13.289200 osd.4 osd.4 192.168.42.13:6816/4672 406 : cluster [WRN] slow request 30.273720 seconds old, received at 2018-05-02 15:17:43.015391: osd_op(client.6596773.0:183937876 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26142) currently waiting for peered
2018-05-02 15:18:15.655564 osd.22 osd.22 192.168.42.11:6818/5595 351 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 60.359482 secs
2018-05-02 15:18:15.655573 osd.22 osd.22 192.168.42.11:6818/5595 352 : cluster [WRN] slow request 60.359482 seconds old, received at 2018-05-02 15:17:15.296043: osd_op(client.6526841.0:199030532 1.f58 1:1afd369a:::rbd_data.51ffd188bfb19.0000000000a9a36d:head [set-alloc-hint object_size 4194304 write_size 4194304,write 931840~1536] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:18.292619 osd.4 osd.4 192.168.42.13:6816/4672 407 : cluster [WRN] 5 slow requests, 1 included below; oldest blocked for > 42.637083 secs
2018-05-02 15:18:18.292632 osd.4 osd.4 192.168.42.13:6816/4672 408 : cluster [WRN] slow request 30.269277 seconds old, received at 2018-05-02 15:17:48.023279: osd_op(client.6596773.0:183937882 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:18.656500 osd.22 osd.22 192.168.42.11:6818/5595 353 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 63.360399 secs
2018-05-02 15:18:18.656510 osd.22 osd.22 192.168.42.11:6818/5595 354 : cluster [WRN] slow request 30.693945 seconds old, received at 2018-05-02 15:17:47.962497: osd_op(client.6526841.0:199030863 1.f58 1.596cbf58 (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:18.682212 osd.57 osd.57 192.168.42.12:6824/5788 371 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 60.036720 secs
2018-05-02 15:18:18.682220 osd.57 osd.57 192.168.42.12:6824/5788 372 : cluster [WRN] slow request 60.036720 seconds old, received at 2018-05-02 15:17:18.645456: osd_op(client.6526841.0:199030675 1.580 1:01afed70:::rbd_data.51ffd188bfb19.00000000000920ad:head [set-alloc-hint object_size 4194304 write_size 4194304,write 2617344~3584] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:20.130588 osd.19 osd.19 192.168.42.13:6838/7714 378 : cluster [WRN] Monitor daemon marked osd.19 down, but it is still running
2018-05-02 15:18:20.874752 osd.11 osd.11 192.168.42.13:6804/3915 327 : cluster [WRN] Monitor daemon marked osd.11 down, but it is still running
2018-05-02 15:18:16.345047 osd.37 osd.37 192.168.42.11:6812/4603 315 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 60.848283 secs
2018-05-02 15:18:16.345056 osd.37 osd.37 192.168.42.11:6812/4603 316 : cluster [WRN] slow request 60.848283 seconds old, received at 2018-05-02 15:17:15.496724: osd_op(client.5168842.0:565598091 1.cd6 1:6b3ac828:::rbd_data.51ffd188bfb19.0000000000a994d6:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1515520~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:21.933793 mon.node01 mon.0 192.168.42.11:6789/0 232999 : cluster [INF] osd.14 192.168.42.13:6812/4444 boot
2018-05-02 15:18:21.933874 mon.node01 mon.0 192.168.42.11:6789/0 233000 : cluster [INF] osd.15 192.168.42.13:6818/4787 boot
2018-05-02 15:18:21.933989 mon.node01 mon.0 192.168.42.11:6789/0 233001 : cluster [INF] osd.11 192.168.42.13:6804/3915 boot
2018-05-02 15:18:22.993198 mon.node01 mon.0 192.168.42.11:6789/0 233024 : cluster [INF] osd.9 192.168.42.13:6830/6607 boot
2018-05-02 15:18:22.995855 mon.node01 mon.0 192.168.42.11:6789/0 233025 : cluster [INF] osd.0 192.168.42.13:6828/5616 boot
2018-05-02 15:18:22.995930 mon.node01 mon.0 192.168.42.11:6789/0 233026 : cluster [INF] osd.1 192.168.42.13:6832/7029 boot
2018-05-02 15:18:15.676606 osd.51 osd.51 192.168.42.12:6810/4338 368 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 60.959171 secs
2018-05-02 15:18:15.676613 osd.51 osd.51 192.168.42.12:6810/4338 369 : cluster [WRN] slow request 60.959171 seconds old, received at 2018-05-02 15:17:14.717398: osd_op(client.6596773.0:183937335 1.a33 1:cc5b61d1:::rbd_data.1e815e6eb1d34.000000000056c0b3:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1323520~32768] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:21.036818 osd.14 osd.14 192.168.42.13:6812/4444 353 : cluster [WRN] Monitor daemon marked osd.14 down, but it is still running
2018-05-02 15:18:22.678710 osd.51 osd.51 192.168.42.12:6810/4338 370 : cluster [WRN] 2 slow requests, 2 included below; oldest blocked for > 60.274650 secs
2018-05-02 15:18:22.678722 osd.51 osd.51 192.168.42.12:6810/4338 371 : cluster [WRN] slow request 60.274650 seconds old, received at 2018-05-02 15:17:22.404014: osd_op(client.6596773.0:183937651 1.aab 1:d557b9bd:::rbd_data.1e815e6eb1d34.00000000005effb8:head [set-alloc-hint object_size 4194304 write_size 4194304,write 3551232~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:22.678729 osd.51 osd.51 192.168.42.12:6810/4338 372 : cluster [WRN] slow request 60.268857 seconds old, received at 2018-05-02 15:17:22.409807: osd_op(client.6526841.0:199030712 1.227 1:e44f9365:::rbd_data.51ffd188bfb19.000000000041d7c0:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1818624~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:24.042094 mon.node01 mon.0 192.168.42.11:6789/0 233075 : cluster [INF] osd.5 192.168.42.13:6808/4150 boot
2018-05-02 15:18:24.043666 mon.node01 mon.0 192.168.42.11:6789/0 233076 : cluster [INF] osd.17 192.168.42.13:6814/4554 boot
2018-05-02 15:18:24.548038 mon.node01 mon.0 192.168.42.11:6789/0 233191 : cluster [WRN] Health check update: 2 osds down (OSD_DOWN)
2018-05-02 15:18:24.548147 mon.node01 mon.0 192.168.42.11:6789/0 233192 : cluster [WRN] Health check update: Reduced data availability: 245 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:24.548204 mon.node01 mon.0 192.168.42.11:6789/0 233193 : cluster [WRN] Health check update: Degraded data redundancy: 493813/11883735 objects degraded (4.155%), 733 pgs unclean, 514 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:24.548250 mon.node01 mon.0 192.168.42.11:6789/0 233194 : cluster [WRN] Health check update: 20 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:24.549084 mon.node01 mon.0 192.168.42.11:6789/0 233195 : cluster [INF] osd.4 failed (root=default,host=node03) (4 reporters from different host after 24.643274 >= grace 20.298579)
2018-05-02 15:18:24.549862 mon.node01 mon.0 192.168.42.11:6789/0 233196 : cluster [INF] osd.12 failed (root=default,host=node03) (3 reporters from different host after 23.203997 >= grace 20.000000)
2018-05-02 15:18:24.550361 mon.node01 mon.0 192.168.42.11:6789/0 233197 : cluster [INF] osd.13 failed (root=default,host=node03) (4 reporters from different host after 24.454206 >= grace 20.000000)
2018-05-02 15:18:24.550688 mon.node01 mon.0 192.168.42.11:6789/0 233198 : cluster [INF] osd.16 failed (root=default,host=node03) (4 reporters from different host after 21.857742 >= grace 20.298740)
2018-05-02 15:18:21.047160 osd.15 osd.15 192.168.42.13:6818/4787 387 : cluster [WRN] Monitor daemon marked osd.15 down, but it is still running
After some more investigation we found that this looks like another bug with the NIC driver:
kern.log shows:
May 2 15:17:04 node03 kernel: [695525.704075] i40e 0000:17:00.1: TX driver issue detected, PF reset issued
May 2 15:17:04 node03 kernel: [695525.835170] i40e 0000:17:00.1: i40e_ptp_init: PTP not supported on eth3
May 2 15:17:04 node03 kernel: [695525.911684] i40e 0000:17:00.1 eth3: speed changed to 0 for port eth3
May 2 15:17:04 node03 kernel: [695525.913112] bond1: link status up again after 0 ms for interface eth3
Maybe it's this bug:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1700834
Do I see this right: because one NIC dropped out of the bond, Ceph lost the connection to the cluster for a few seconds and reported the OSDs as down?
The strange thing is that Ceph only started marking the OSDs down about 26 seconds after the bond had already recovered... any idea?
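For what it's worth, the consul log below shows this node still failing to reach its peers until about 15:18:13, so even though the bond reported "link status up again after 0 ms", traffic on that path appears to have been disrupted for much longer than the kernel message suggests; the monitor then marked the OSDs down once the default failure-detection thresholds were exceeded, which is what the "2 reporters from different host after ... >= grace 20.000000" entries reflect. As a temporary, hedged mitigation while the NIC issue is being chased, the heartbeat grace can be raised at runtime (40 is only an example value, osd.0 is just a locally running OSD, and injectargs changes revert when the daemons restart):
# show the currently effective values on a locally running OSD
ceph daemon osd.0 config show | grep -E 'osd_heartbeat_grace|mon_osd_min_down_reporters'
# raise the heartbeat grace at runtime on all OSDs and monitors
ceph tell osd.* injectargs '--osd_heartbeat_grace 40'
ceph tell mon.* injectargs '--osd_heartbeat_grace 40'
# or, during a planned driver/firmware change only, suppress down-marking entirely
ceph osd set nodown    # remember to run "ceph osd unset nodown" afterwards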
From syslog (trimmed of hundreds of "heartbeat_check: no reply from xx" messages):
May 2 15:17:01 node03 CRON[4152775]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
May 2 15:17:04 node03 kernel: [695525.704075] i40e 0000:17:00.1: TX driver issue detected, PF reset issued
May 2 15:17:04 node03 kernel: [695525.835170] i40e 0000:17:00.1: i40e_ptp_init: PTP not supported on eth3
May 2 15:17:04 node03 kernel: [695525.911684] i40e 0000:17:00.1 eth3: speed changed to 0 for port eth3
May 2 15:17:04 node03 kernel: [695525.913112] bond1: link status up again after 0 ms for interface eth3
May 2 15:17:16 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 2.500179394s
May 2 15:17:19 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 4.96820263s
May 2 15:17:21 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 7.45790572s
May 2 15:17:24 node03 consul[3360]: raft: aborting pipeline replication to peer {Voter 192.168.42.12:8300 192.168.42.12:8300}
May 2 15:17:24 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.11:8300: read tcp 192.168.42.13:37038->192.168.42.11:8300: i/o timeout
May 2 15:17:24 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: read tcp 192.168.42.13:55244->192.168.42.12:8300: i/o timeout
May 2 15:17:26 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 2.50027097s
May 2 15:17:29 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 4.984897637s
May 2 15:17:31 node03 consul[3360]: raft: Failed to contact 192.168.42.12:8300 in 7.420438172s
May 2 15:17:32 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:34 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:35 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.11:8300: dial tcp 192.168.42.11:8300: i/o timeout
May 2 15:17:35 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:38 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:44 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:46 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:46 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:47 node03 consul[3360]: yamux: keepalive failed: i/o deadline reached
May 2 15:17:47 node03 consul[3360]: consul.rpc: multiplex conn accept failed: keepalive timeout from=192.168.42.12:55674
May 2 15:17:51 node03 consul[3360]: memberlist: Push/Pull with node02 failed: dial tcp 192.168.42.12:8301: i/o timeout
May 2 15:17:53 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:17:54 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:17:57 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:01 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since we have a leader: 192.168.42.13:8300
May 2 15:18:04 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:04 node03 consul[3360]: raft: peer {Voter 192.168.42.12:8300 192.168.42.12:8300} has newer term, stopping replication
May 2 15:18:04 node03 consul[3360]: raft: Node at 192.168.42.13:8300 [Follower] entering Follower state (Leader: "")
May 2 15:18:04 node03 consul[3360]: consul: cluster leadership lost
May 2 15:18:04 node03 consul[3360]: raft: aborting pipeline replication to peer {Voter 192.168.42.11:8300 192.168.42.11:8300}
May 2 15:18:04 node03 consul[3360]: consul.coordinate: Batch update failed: node is not the leader
May 2 15:18:08 node03 consul[3360]: raft: Failed to heartbeat to 192.168.42.12:8300: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:09 node03 consul[3360]: raft: Rejecting vote request from 192.168.42.12:8300 since our last index is greater (3213435, 3213428)
May 2 15:18:13 node03 consul[3360]: http: Request GET /v1/kv/PetaSAN/Services/ClusterLeader?index=2620988&wait=20s, error: No cluster leader from=127.0.0.1:53994
May 2 15:18:13 node03 consul[3360]: raft: Heartbeat timeout from "" reached, starting election
May 2 15:18:13 node03 consul[3360]: raft: Node at 192.168.42.13:8300 [Candidate] entering Candidate state in term 184
May 2 15:18:13 node03 consul[3360]: raft: Election won. Tally: 2
May 2 15:18:13 node03 consul[3360]: raft: Node at 192.168.42.13:8300 [Leader] entering Leader state
May 2 15:18:13 node03 consul[3360]: raft: Added peer 192.168.42.11:8300, starting replication
May 2 15:18:13 node03 consul[3360]: raft: Added peer 192.168.42.12:8300, starting replication
May 2 15:18:13 node03 consul[3360]: consul: cluster leadership acquired
May 2 15:18:13 node03 consul[3360]: consul: New leader elected: node03
May 2 15:18:13 node03 consul[3360]: raft: pipelining replication to peer {Voter 192.168.42.11:8300 192.168.42.11:8300}
May 2 15:18:23 node03 consul[3360]: raft: Failed to AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300}: dial tcp 192.168.42.12:8300: i/o timeout
May 2 15:18:23 node03 consul[3360]: raft: AppendEntries to {Voter 192.168.42.12:8300 192.168.42.12:8300} rejected, sending older logs (next: 3213429)
May 2 15:18:23 node03 consul[3360]: raft: pipelining replication to peer {Voter 192.168.42.12:8300 192.168.42.12:8300}
From ps-cl01.log:
2018-05-02 15:17:30.689945 mon.node01 mon.0 192.168.42.11:6789/0 231117 : cluster [INF] osd.0 failed (root=default,host=node03) (2 reporters from different host after 20.000325 >= grace 20.000000)
2018-05-02 15:17:30.690389 mon.node01 mon.0 192.168.42.11:6789/0 231119 : cluster [INF] osd.4 failed (root=default,host=node03) (2 reporters from different host after 20.000540 >= grace 20.000000)
2018-05-02 15:17:30.690758 mon.node01 mon.0 192.168.42.11:6789/0 231121 : cluster [INF] osd.6 failed (root=default,host=node03) (2 reporters from different host after 20.000898 >= grace 20.000000)
2018-05-02 15:17:30.691040 mon.node01 mon.0 192.168.42.11:6789/0 231123 : cluster [INF] osd.8 failed (root=default,host=node03) (2 reporters from different host after 20.001124 >= grace 20.000000)
2018-05-02 15:17:30.691317 mon.node01 mon.0 192.168.42.11:6789/0 231125 : cluster [INF] osd.10 failed (root=default,host=node03) (2 reporters from different host after 20.001331 >= grace 20.000000)
2018-05-02 15:17:30.691936 mon.node01 mon.0 192.168.42.11:6789/0 231129 : cluster [INF] osd.15 failed (root=default,host=node03) (2 reporters from different host after 20.001689 >= grace 20.000000)
2018-05-02 15:17:30.728612 mon.node01 mon.0 192.168.42.11:6789/0 231134 : cluster [INF] osd.5 failed (root=default,host=node03) (2 reporters from different host after 20.000418 >= grace 20.000000)
2018-05-02 15:17:30.728887 mon.node01 mon.0 192.168.42.11:6789/0 231136 : cluster [INF] osd.7 failed (root=default,host=node03) (2 reporters from different host after 20.000640 >= grace 20.000000)
2018-05-02 15:17:30.729478 mon.node01 mon.0 192.168.42.11:6789/0 231140 : cluster [INF] osd.11 failed (root=default,host=node03) (2 reporters from different host after 20.000955 >= grace 20.000000)
2018-05-02 15:17:31.071749 mon.node01 mon.0 192.168.42.11:6789/0 231188 : cluster [INF] osd.9 failed (root=default,host=node03) (2 reporters from different host after 20.000476 >= grace 20.000000)
2018-05-02 15:17:31.072294 mon.node01 mon.0 192.168.42.11:6789/0 231192 : cluster [INF] osd.14 failed (root=default,host=node03) (2 reporters from different host after 20.000645 >= grace 20.000000)
2018-05-02 15:17:31.072648 mon.node01 mon.0 192.168.42.11:6789/0 231195 : cluster [INF] osd.17 failed (root=default,host=node03) (2 reporters from different host after 20.000775 >= grace 20.000000)
2018-05-02 15:17:31.073013 mon.node01 mon.0 192.168.42.11:6789/0 231198 : cluster [INF] osd.19 failed (root=default,host=node03) (2 reporters from different host after 20.000923 >= grace 20.000000)
2018-05-02 15:17:31.145738 mon.node01 mon.0 192.168.42.11:6789/0 231201 : cluster [INF] osd.2 failed (root=default,host=node03) (2 reporters from different host after 20.000246 >= grace 20.000000)
2018-05-02 15:17:31.276822 mon.node01 mon.0 192.168.42.11:6789/0 231213 : cluster [WRN] Health check failed: 14 osds down (OSD_DOWN)
2018-05-02 15:17:31.297002 mon.node01 mon.0 192.168.42.11:6789/0 231215 : cluster [INF] osd.1 failed (root=default,host=node03) (2 reporters from different host after 20.066715 >= grace 20.000000)
2018-05-02 15:17:31.297100 mon.node01 mon.0 192.168.42.11:6789/0 231217 : cluster [INF] osd.3 failed (root=default,host=node03) (2 reporters from different host after 20.066565 >= grace 20.000000)
2018-05-02 15:17:31.297290 mon.node01 mon.0 192.168.42.11:6789/0 231219 : cluster [INF] osd.12 failed (root=default,host=node03) (2 reporters from different host after 20.066392 >= grace 20.000000)
2018-05-02 15:17:31.297562 mon.node01 mon.0 192.168.42.11:6789/0 231222 : cluster [INF] osd.13 failed (root=default,host=node03) (2 reporters from different host after 20.010716 >= grace 20.000000)
2018-05-02 15:17:31.297644 mon.node01 mon.0 192.168.42.11:6789/0 231224 : cluster [INF] osd.16 failed (root=default,host=node03) (2 reporters from different host after 20.010419 >= grace 20.000000)
2018-05-02 15:17:31.342169 mon.node01 mon.0 192.168.42.11:6789/0 231245 : cluster [INF] osd.18 failed (root=default,host=node03) (2 reporters from different host after 20.003455 >= grace 20.000000)
2018-05-02 15:17:31.900209 osd.17 osd.17 192.168.42.13:6814/4554 347 : cluster [WRN] Monitor daemon marked osd.17 down, but it is still running
2018-05-02 15:17:32.347387 mon.node01 mon.0 192.168.42.11:6789/0 231324 : cluster [INF] osd.5 192.168.42.13:6808/4150 boot
2018-05-02 15:17:32.122997 osd.14 osd.14 192.168.42.13:6812/4444 351 : cluster [WRN] Monitor daemon marked osd.14 down, but it is still running
2018-05-02 15:17:33.417134 mon.node01 mon.0 192.168.42.11:6789/0 231341 : cluster [INF] osd.19 192.168.42.13:6838/7714 boot
2018-05-02 15:17:32.150394 osd.15 osd.15 192.168.42.13:6818/4787 385 : cluster [WRN] Monitor daemon marked osd.15 down, but it is still running
2018-05-02 15:17:34.449728 mon.node01 mon.0 192.168.42.11:6789/0 231360 : cluster [WRN] Health check failed: Reduced data availability: 182 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:34.449810 mon.node01 mon.0 192.168.42.11:6789/0 231361 : cluster [WRN] Health check failed: Degraded data redundancy: 266928/11883735 objects degraded (2.246%), 265 pgs unclean, 276 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:34.449892 mon.node01 mon.0 192.168.42.11:6789/0 231362 : cluster [WRN] Health check failed: 6 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:34.462178 mon.node01 mon.0 192.168.42.11:6789/0 231363 : cluster [INF] osd.14 192.168.42.13:6812/4444 boot
2018-05-02 15:17:34.462279 mon.node01 mon.0 192.168.42.11:6789/0 231364 : cluster [INF] osd.15 192.168.42.13:6818/4787 boot
2018-05-02 15:17:34.462350 mon.node01 mon.0 192.168.42.11:6789/0 231365 : cluster [INF] osd.1 192.168.42.13:6832/7029 boot
2018-05-02 15:17:34.462398 mon.node01 mon.0 192.168.42.11:6789/0 231366 : cluster [INF] osd.11 192.168.42.13:6804/3915 boot
2018-05-02 15:17:34.462440 mon.node01 mon.0 192.168.42.11:6789/0 231367 : cluster [INF] osd.9 192.168.42.13:6830/6607 boot
2018-05-02 15:17:34.462483 mon.node01 mon.0 192.168.42.11:6789/0 231368 : cluster [INF] osd.0 192.168.42.13:6828/5616 boot
2018-05-02 15:17:34.526923 osd.13 osd.13 192.168.42.13:6820/4934 391 : cluster [WRN] Monitor daemon marked osd.13 down, but it is still running
2018-05-02 15:17:31.950776 osd.9 osd.9 192.168.42.13:6830/6607 439 : cluster [WRN] Monitor daemon marked osd.9 down, but it is still running
2018-05-02 15:17:35.528042 mon.node01 mon.0 192.168.42.11:6789/0 231385 : cluster [INF] osd.13 192.168.42.13:6820/4934 boot
2018-05-02 15:17:35.528149 mon.node01 mon.0 192.168.42.11:6789/0 231386 : cluster [INF] osd.4 192.168.42.13:6816/4672 boot
2018-05-02 15:17:36.561053 mon.node01 mon.0 192.168.42.11:6789/0 231403 : cluster [WRN] Health check update: 9 osds down (OSD_DOWN)
2018-05-02 15:17:36.577048 mon.node01 mon.0 192.168.42.11:6789/0 231404 : cluster [INF] osd.12 192.168.42.13:6824/5233 boot
2018-05-02 15:17:31.901987 osd.7 osd.7 192.168.42.13:6834/7201 353 : cluster [WRN] Monitor daemon marked osd.7 down, but it is still running
2018-05-02 15:17:33.112422 osd.3 osd.3 192.168.42.13:6802/3776 433 : cluster [WRN] Monitor daemon marked osd.3 down, but it is still running
2018-05-02 15:17:33.454019 osd.1 osd.1 192.168.42.13:6832/7029 453 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running
2018-05-02 15:17:38.145518 mon.node01 mon.0 192.168.42.11:6789/0 231444 : cluster [INF] osd.16 192.168.42.13:6822/5084 boot
2018-05-02 15:17:31.878606 osd.6 osd.6 192.168.42.13:6806/4031 379 : cluster [WRN] Monitor daemon marked osd.6 down, but it is still running
2018-05-02 15:17:37.074577 osd.16 osd.16 192.168.42.13:6822/5084 343 : cluster [WRN] Monitor daemon marked osd.16 down, but it is still running
2018-05-02 15:17:39.450525 mon.node01 mon.0 192.168.42.11:6789/0 231512 : cluster [WRN] Health check update: Reduced data availability: 156 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:39.450603 mon.node01 mon.0 192.168.42.11:6789/0 231513 : cluster [WRN] Health check update: Degraded data redundancy: 780173/11883735 objects degraded (6.565%), 409 pgs unclean, 826 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:39.450648 mon.node01 mon.0 192.168.42.11:6789/0 231514 : cluster [WRN] Health check update: 20 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:33.481474 osd.0 osd.0 192.168.42.13:6828/5616 375 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running
2018-05-02 15:17:34.835065 osd.4 osd.4 192.168.42.13:6816/4672 398 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running
2018-05-02 15:17:31.758601 osd.11 osd.11 192.168.42.13:6804/3915 325 : cluster [WRN] Monitor daemon marked osd.11 down, but it is still running
2018-05-02 15:17:31.854285 osd.5 osd.5 192.168.42.13:6808/4150 339 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running
2018-05-02 15:17:31.927593 osd.8 osd.8 192.168.42.13:6826/5418 327 : cluster [WRN] Monitor daemon marked osd.8 down, but it is still running
2018-05-02 15:17:32.264540 osd.2 osd.2 192.168.42.13:6810/4329 373 : cluster [WRN] Monitor daemon marked osd.2 down, but it is still running
2018-05-02 15:17:32.504532 osd.19 osd.19 192.168.42.13:6838/7714 367 : cluster [WRN] Monitor daemon marked osd.19 down, but it is still running
2018-05-02 15:17:35.304926 osd.12 osd.12 192.168.42.13:6824/5233 409 : cluster [WRN] Monitor daemon marked osd.12 down, but it is still running
2018-05-02 15:17:41.201037 mon.node01 mon.0 192.168.42.11:6789/0 231515 : cluster [INF] osd.10 192.168.42.13:6836/7465 boot
2018-05-02 15:17:37.224788 osd.18 osd.18 192.168.42.13:6800/3571 353 : cluster [WRN] Monitor daemon marked osd.18 down, but it is still running
2018-05-02 15:17:44.451567 mon.node01 mon.0 192.168.42.11:6789/0 231566 : cluster [WRN] Health check update: 7 osds down (OSD_DOWN)
2018-05-02 15:17:44.451681 mon.node01 mon.0 192.168.42.11:6789/0 231567 : cluster [WRN] Health check update: Reduced data availability: 119 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:44.451740 mon.node01 mon.0 192.168.42.11:6789/0 231568 : cluster [WRN] Health check update: Degraded data redundancy: 567324/11883735 objects degraded (4.774%), 376 pgs unclean, 598 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:44.451787 mon.node01 mon.0 192.168.42.11:6789/0 231569 : cluster [WRN] Health check update: 8 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:39.947717 osd.10 osd.10 192.168.42.13:6836/7465 363 : cluster [WRN] Monitor daemon marked osd.10 down, but it is still running
2018-05-02 15:17:49.452743 mon.node01 mon.0 192.168.42.11:6789/0 231571 : cluster [WRN] Health check update: Reduced data availability: 124 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:49.452802 mon.node01 mon.0 192.168.42.11:6789/0 231572 : cluster [WRN] Health check update: Degraded data redundancy: 545427/11883735 objects degraded (4.590%), 400 pgs unclean, 569 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:49.452830 mon.node01 mon.0 192.168.42.11:6789/0 231573 : cluster [WRN] Health check update: 5 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:45.648204 osd.22 osd.22 192.168.42.11:6818/5595 347 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.352068 secs
2018-05-02 15:17:45.648216 osd.22 osd.22 192.168.42.11:6818/5595 348 : cluster [WRN] slow request 30.352068 seconds old, received at 2018-05-02 15:17:15.296043: osd_op(client.6526841.0:199030532 1.f58 1:1afd369a:::rbd_data.51ffd188bfb19.0000000000a9a36d:head [set-alloc-hint object_size 4194304 write_size 4194304,write 931840~1536] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:48.674430 osd.57 osd.57 192.168.42.12:6824/5788 369 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.028853 secs
2018-05-02 15:17:48.674449 osd.57 osd.57 192.168.42.12:6824/5788 370 : cluster [WRN] slow request 30.028853 seconds old, received at 2018-05-02 15:17:18.645456: osd_op(client.6526841.0:199030675 1.580 1:01afed70:::rbd_data.51ffd188bfb19.00000000000920ad:head [set-alloc-hint object_size 4194304 write_size 4194304,write 2617344~3584] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:46.337758 osd.37 osd.37 192.168.42.11:6812/4603 313 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.840941 secs
2018-05-02 15:17:46.337771 osd.37 osd.37 192.168.42.11:6812/4603 314 : cluster [WRN] slow request 30.840941 seconds old, received at 2018-05-02 15:17:15.496724: osd_op(client.5168842.0:565598091 1.cd6 1:6b3ac828:::rbd_data.51ffd188bfb19.0000000000a994d6:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1515520~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:45.668260 osd.51 osd.51 192.168.42.12:6810/4338 363 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.950745 secs
2018-05-02 15:17:45.668273 osd.51 osd.51 192.168.42.12:6810/4338 364 : cluster [WRN] slow request 30.950745 seconds old, received at 2018-05-02 15:17:14.717398: osd_op(client.6596773.0:183937335 1.a33 1:cc5b61d1:::rbd_data.1e815e6eb1d34.000000000056c0b3:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1323520~32768] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:52.669812 osd.51 osd.51 192.168.42.12:6810/4338 365 : cluster [WRN] 3 slow requests, 2 included below; oldest blocked for > 37.952326 secs
2018-05-02 15:17:52.669821 osd.51 osd.51 192.168.42.12:6810/4338 366 : cluster [WRN] slow request 30.265711 seconds old, received at 2018-05-02 15:17:22.404014: osd_op(client.6596773.0:183937651 1.aab 1:d557b9bd:::rbd_data.1e815e6eb1d34.00000000005effb8:head [set-alloc-hint object_size 4194304 write_size 4194304,write 3551232~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:52.669853 osd.51 osd.51 192.168.42.12:6810/4338 367 : cluster [WRN] slow request 30.259918 seconds old, received at 2018-05-02 15:17:22.409807: osd_op(client.6526841.0:199030712 1.227 1:e44f9365:::rbd_data.51ffd188bfb19.000000000041d7c0:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1818624~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:17:54.499661 mon.node01 mon.0 192.168.42.11:6789/0 231630 : cluster [WRN] Health check update: 5 osds down (OSD_DOWN)
2018-05-02 15:17:54.502279 mon.node01 mon.0 192.168.42.11:6789/0 231631 : cluster [WRN] Health check update: Reduced data availability: 152 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:54.502352 mon.node01 mon.0 192.168.42.11:6789/0 231632 : cluster [WRN] Health check update: Degraded data redundancy: 545424/11883735 objects degraded (4.590%), 470 pgs unclean, 564 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:54.502394 mon.node01 mon.0 192.168.42.11:6789/0 231633 : cluster [WRN] Health check update: 9 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:17:54.511364 mon.node01 mon.0 192.168.42.11:6789/0 231634 : cluster [INF] osd.7 192.168.42.13:6834/7201 boot
2018-05-02 15:17:54.511419 mon.node01 mon.0 192.168.42.11:6789/0 231635 : cluster [INF] osd.2 192.168.42.13:6810/4329 boot
2018-05-02 15:17:55.560707 mon.node01 mon.0 192.168.42.11:6789/0 231772 : cluster [INF] osd.3 192.168.42.13:6802/3776 boot
2018-05-02 15:17:58.644882 mon.node01 mon.0 192.168.42.11:6789/0 232360 : cluster [INF] osd.18 192.168.42.13:6800/3571 boot
2018-05-02 15:17:58.997648 mon.node01 mon.0 192.168.42.11:6789/0 232419 : cluster [WRN] overall HEALTH_WARN 3 osds down; Reduced data availability: 187 pgs peering; Degraded data redundancy: 393287/11883735 objects degraded (3.309%), 464 pgs unclean, 414 pgs degraded; 20 slow requests are blocked > 32 sec
2018-05-02 15:17:59.502999 mon.node01 mon.0 192.168.42.11:6789/0 232447 : cluster [WRN] Health check update: 3 osds down (OSD_DOWN)
2018-05-02 15:17:59.503131 mon.node01 mon.0 192.168.42.11:6789/0 232448 : cluster [WRN] Health check update: Reduced data availability: 187 pgs peering (PG_AVAILABILITY)
2018-05-02 15:17:59.503190 mon.node01 mon.0 192.168.42.11:6789/0 232449 : cluster [WRN] Health check update: Degraded data redundancy: 393287/11883735 objects degraded (3.309%), 464 pgs unclean, 414 pgs degraded (PG_DEGRADED)
2018-05-02 15:17:59.503246 mon.node01 mon.0 192.168.42.11:6789/0 232450 : cluster [WRN] Health check update: 20 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:04.507977 mon.node01 mon.0 192.168.42.11:6789/0 232590 : cluster [WRN] Health check update: Reduced data availability: 211 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:04.508058 mon.node01 mon.0 192.168.42.11:6789/0 232591 : cluster [WRN] Health check update: Degraded data redundancy: 242695/11883735 objects degraded (2.042%), 477 pgs unclean, 252 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:04.508106 mon.node01 mon.0 192.168.42.11:6789/0 232592 : cluster [WRN] Health check update: 23 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:05.877862 osd.61 osd.61 192.168.42.14:6814/4904 1237 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.173325 secs
2018-05-02 15:18:05.877874 osd.61 osd.61 192.168.42.14:6814/4904 1238 : cluster [WRN] slow request 30.173325 seconds old, received at 2018-05-02 15:17:35.704479: osd_op(client.6596773.0:183937862 1.c0e 1.d2861c0e (undecoded) ondisk+write+known_if_redirected e26136) currently waiting for peered
2018-05-02 15:18:02.797168 osd.58 osd.58 192.168.42.12:6804/3828 331 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.363481 secs
2018-05-02 15:18:02.797179 osd.58 osd.58 192.168.42.12:6804/3828 332 : cluster [WRN] slow request 30.363481 seconds old, received at 2018-05-02 15:17:32.433619: osd_op(client.6596773.0:183937757 1.a86 1.a57fca86 (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:09.513538 mon.node01 mon.0 192.168.42.11:6789/0 232608 : cluster [WRN] Health check update: Reduced data availability: 252 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:09.513599 mon.node01 mon.0 192.168.42.11:6789/0 232609 : cluster [WRN] Health check update: Degraded data redundancy: 239770/11883735 objects degraded (2.018%), 568 pgs unclean, 248 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:09.513636 mon.node01 mon.0 192.168.42.11:6789/0 232610 : cluster [WRN] Health check update: 25 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:02.817820 osd.77 osd.77 192.168.42.14:6828/6313 1117 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.394863 secs
2018-05-02 15:18:02.817831 osd.77 osd.77 192.168.42.14:6828/6313 1118 : cluster [WRN] slow request 30.394863 seconds old, received at 2018-05-02 15:17:32.422898: osd_op(client.6596773.0:183937750 1.6df 1.cf9bc6df (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:02.476590 osd.5 osd.5 192.168.42.13:6808/4150 341 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.064270 secs
2018-05-02 15:18:02.476601 osd.5 osd.5 192.168.42.13:6808/4150 342 : cluster [WRN] slow request 30.064270 seconds old, received at 2018-05-02 15:17:32.412264: osd_op(client.6596773.0:183937421 1.116 1.1c1d4116 (undecoded) ondisk+retry+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:02.651760 osd.22 osd.22 192.168.42.11:6818/5595 349 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 47.355668 secs
2018-05-02 15:18:02.651779 osd.22 osd.22 192.168.42.11:6818/5595 350 : cluster [WRN] slow request 30.174091 seconds old, received at 2018-05-02 15:17:32.477620: osd_op(client.5168842.0:565598316 1.f58 1.596cbf58 (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:02.797120 osd.97 osd.97 192.168.42.15:6834/31139 554 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.381988 secs
2018-05-02 15:18:02.797141 osd.97 osd.97 192.168.42.15:6834/31139 555 : cluster [WRN] slow request 30.381988 seconds old, received at 2018-05-02 15:17:32.415062: osd_op(client.6526841.0:199030755 1.569 1.b0c48569 (undecoded) ondisk+read+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:04.413088 osd.19 osd.19 192.168.42.13:6838/7714 369 : cluster [WRN] 4 slow requests, 4 included below; oldest blocked for > 30.948256 secs
2018-05-02 15:18:04.413100 osd.19 osd.19 192.168.42.13:6838/7714 370 : cluster [WRN] slow request 30.484589 seconds old, received at 2018-05-02 15:17:33.928388: osd_op(client.6526841.0:199030843 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:04.413107 osd.19 osd.19 192.168.42.13:6838/7714 371 : cluster [WRN] slow request 30.465765 seconds old, received at 2018-05-02 15:17:33.947212: osd_op(client.6596773.0:183937784 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:04.413114 osd.19 osd.19 192.168.42.13:6838/7714 372 : cluster [WRN] slow request 30.425245 seconds old, received at 2018-05-02 15:17:33.987732: osd_op(client.6596773.0:183937785 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:04.413119 osd.19 osd.19 192.168.42.13:6838/7714 373 : cluster [WRN] slow request 30.948256 seconds old, received at 2018-05-02 15:17:33.464721: osd_op(client.6526841.0:199030727 1.71e 1.5c1ee71e (undecoded) ondisk+retry+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:05.413744 osd.19 osd.19 192.168.42.13:6838/7714 374 : cluster [WRN] 7 slow requests, 3 included below; oldest blocked for > 31.948951 secs
2018-05-02 15:18:05.413755 osd.19 osd.19 192.168.42.13:6838/7714 375 : cluster [WRN] slow request 30.999699 seconds old, received at 2018-05-02 15:17:34.413974: osd_op(client.6526841.0:199030846 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:05.413760 osd.19 osd.19 192.168.42.13:6838/7714 376 : cluster [WRN] slow request 30.926653 seconds old, received at 2018-05-02 15:17:34.487019: osd_op(client.6603196.0:182818149 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:05.413769 osd.19 osd.19 192.168.42.13:6838/7714 377 : cluster [WRN] slow request 30.914518 seconds old, received at 2018-05-02 15:17:34.499154: osd_op(client.6526841.0:199030848 1.13e 1.b7b9213e (undecoded) ondisk+write+known_if_redirected e26134) currently waiting for peered
2018-05-02 15:18:06.284732 osd.4 osd.4 192.168.42.13:6816/4672 400 : cluster [WRN] 2 slow requests, 2 included below; oldest blocked for > 30.629189 secs
2018-05-02 15:18:06.284743 osd.4 osd.4 192.168.42.13:6816/4672 401 : cluster [WRN] slow request 30.629189 seconds old, received at 2018-05-02 15:17:35.655473: osd_op(client.6596773.0:183937792 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26136) currently waiting for peered
2018-05-02 15:18:06.284756 osd.4 osd.4 192.168.42.13:6816/4672 402 : cluster [WRN] slow request 30.260563 seconds old, received at 2018-05-02 15:17:36.024099: osd_op(client.6596773.0:183937865 1.2dd 1.8dea32dd (undecoded) ondisk+read+known_if_redirected e26136) currently waiting for peered
2018-05-02 15:18:08.285983 osd.4 osd.4 192.168.42.13:6816/4672 403 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 32.630454 secs
2018-05-02 15:18:08.285995 osd.4 osd.4 192.168.42.13:6816/4672 404 : cluster [WRN] slow request 30.278627 seconds old, received at 2018-05-02 15:17:38.007300: osd_op(client.6596773.0:183937868 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26138) currently waiting for peered
2018-05-02 15:18:03.279070 osd.82 osd.82 192.168.42.15:6804/8697 409 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 30.845140 secs
2018-05-02 15:18:03.279083 osd.82 osd.82 192.168.42.15:6804/8697 410 : cluster [WRN] slow request 30.845140 seconds old, received at 2018-05-02 15:17:32.433862: osd_op(client.6526841.0:199030770 1.1b9 1.2f4701b9 (undecoded) ondisk+write+known_if_redirected e26133) currently waiting for peered
2018-05-02 15:18:13.808180 mon.node01 mon.0 192.168.42.11:6789/0 232619 : cluster [WRN] Health check update: 2 osds down (OSD_DOWN)
2018-05-02 15:18:13.825306 mon.node01 mon.0 192.168.42.11:6789/0 232620 : cluster [INF] osd.6 192.168.42.13:6806/4031 boot
2018-05-02 15:18:14.518576 mon.node01 mon.0 192.168.42.11:6789/0 232641 : cluster [WRN] Health check update: Reduced data availability: 311 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:14.518655 mon.node01 mon.0 192.168.42.11:6789/0 232642 : cluster [WRN] Health check update: Degraded data redundancy: 239770/11883735 objects degraded (2.018%), 706 pgs unclean, 248 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:14.518708 mon.node01 mon.0 192.168.42.11:6789/0 232643 : cluster [WRN] Health check update: 27 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:17.801511 osd.58 osd.58 192.168.42.12:6804/3828 333 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 45.367834 secs
2018-05-02 15:18:17.801519 osd.58 osd.58 192.168.42.12:6804/3828 334 : cluster [WRN] slow request 30.183339 seconds old, received at 2018-05-02 15:17:47.618114: osd_op(client.6603196.0:182818173 1.a86 1.a57fca86 (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:19.525411 mon.node01 mon.0 192.168.42.11:6789/0 232881 : cluster [WRN] Health check update: Reduced data availability: 358 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:19.525473 mon.node01 mon.0 192.168.42.11:6789/0 232882 : cluster [WRN] Health check update: Degraded data redundancy: 188544/11883735 objects degraded (1.587%), 725 pgs unclean, 199 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:19.525510 mon.node01 mon.0 192.168.42.11:6789/0 232883 : cluster [WRN] Health check update: 34 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:19.525823 mon.node01 mon.0 192.168.42.11:6789/0 232884 : cluster [INF] osd.0 failed (root=default,host=node03) (4 reporters from different host after 22.185499 >= grace 20.000000)
2018-05-02 15:18:19.526157 mon.node01 mon.0 192.168.42.11:6789/0 232885 : cluster [INF] osd.1 failed (root=default,host=node03) (4 reporters from different host after 20.729223 >= grace 20.000000)
2018-05-02 15:18:19.527041 mon.node01 mon.0 192.168.42.11:6789/0 232886 : cluster [INF] osd.5 failed (root=default,host=node03) (4 reporters from different host after 21.722805 >= grace 20.000000)
2018-05-02 15:18:19.527435 mon.node01 mon.0 192.168.42.11:6789/0 232887 : cluster [INF] osd.9 failed (root=default,host=node03) (4 reporters from different host after 20.729151 >= grace 20.000000)
2018-05-02 15:18:19.527946 mon.node01 mon.0 192.168.42.11:6789/0 232888 : cluster [INF] osd.11 failed (root=default,host=node03) (4 reporters from different host after 20.399416 >= grace 20.000000)
2018-05-02 15:18:19.528543 mon.node01 mon.0 192.168.42.11:6789/0 232889 : cluster [INF] osd.14 failed (root=default,host=node03) (4 reporters from different host after 20.729112 >= grace 20.000000)
2018-05-02 15:18:19.528831 mon.node01 mon.0 192.168.42.11:6789/0 232890 : cluster [INF] osd.15 failed (root=default,host=node03) (4 reporters from different host after 20.399139 >= grace 20.000000)
2018-05-02 15:18:19.529181 mon.node01 mon.0 192.168.42.11:6789/0 232891 : cluster [INF] osd.19 failed (root=default,host=node03) (4 reporters from different host after 21.722456 >= grace 20.000000)
2018-05-02 15:18:19.546706 mon.node01 mon.0 192.168.42.11:6789/0 232892 : cluster [WRN] Health check update: 10 osds down (OSD_DOWN)
2018-05-02 15:18:13.289192 osd.4 osd.4 192.168.42.13:6816/4672 405 : cluster [WRN] 4 slow requests, 1 included below; oldest blocked for > 37.633638 secs
2018-05-02 15:18:13.289200 osd.4 osd.4 192.168.42.13:6816/4672 406 : cluster [WRN] slow request 30.273720 seconds old, received at 2018-05-02 15:17:43.015391: osd_op(client.6596773.0:183937876 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26142) currently waiting for peered
2018-05-02 15:18:15.655564 osd.22 osd.22 192.168.42.11:6818/5595 351 : cluster [WRN] 2 slow requests, 1 included below; oldest blocked for > 60.359482 secs
2018-05-02 15:18:15.655573 osd.22 osd.22 192.168.42.11:6818/5595 352 : cluster [WRN] slow request 60.359482 seconds old, received at 2018-05-02 15:17:15.296043: osd_op(client.6526841.0:199030532 1.f58 1:1afd369a:::rbd_data.51ffd188bfb19.0000000000a9a36d:head [set-alloc-hint object_size 4194304 write_size 4194304,write 931840~1536] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:18.292619 osd.4 osd.4 192.168.42.13:6816/4672 407 : cluster [WRN] 5 slow requests, 1 included below; oldest blocked for > 42.637083 secs
2018-05-02 15:18:18.292632 osd.4 osd.4 192.168.42.13:6816/4672 408 : cluster [WRN] slow request 30.269277 seconds old, received at 2018-05-02 15:17:48.023279: osd_op(client.6596773.0:183937882 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:18.656500 osd.22 osd.22 192.168.42.11:6818/5595 353 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 63.360399 secs
2018-05-02 15:18:18.656510 osd.22 osd.22 192.168.42.11:6818/5595 354 : cluster [WRN] slow request 30.693945 seconds old, received at 2018-05-02 15:17:47.962497: osd_op(client.6526841.0:199030863 1.f58 1.596cbf58 (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:18.682212 osd.57 osd.57 192.168.42.12:6824/5788 371 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 60.036720 secs
2018-05-02 15:18:18.682220 osd.57 osd.57 192.168.42.12:6824/5788 372 : cluster [WRN] slow request 60.036720 seconds old, received at 2018-05-02 15:17:18.645456: osd_op(client.6526841.0:199030675 1.580 1:01afed70:::rbd_data.51ffd188bfb19.00000000000920ad:head [set-alloc-hint object_size 4194304 write_size 4194304,write 2617344~3584] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:20.130588 osd.19 osd.19 192.168.42.13:6838/7714 378 : cluster [WRN] Monitor daemon marked osd.19 down, but it is still running
2018-05-02 15:18:20.874752 osd.11 osd.11 192.168.42.13:6804/3915 327 : cluster [WRN] Monitor daemon marked osd.11 down, but it is still running
2018-05-02 15:18:16.345047 osd.37 osd.37 192.168.42.11:6812/4603 315 : cluster [WRN] 1 slow requests, 1 included below; oldest blocked for > 60.848283 secs
2018-05-02 15:18:16.345056 osd.37 osd.37 192.168.42.11:6812/4603 316 : cluster [WRN] slow request 60.848283 seconds old, received at 2018-05-02 15:17:15.496724: osd_op(client.5168842.0:565598091 1.cd6 1:6b3ac828:::rbd_data.51ffd188bfb19.0000000000a994d6:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1515520~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:21.933793 mon.node01 mon.0 192.168.42.11:6789/0 232999 : cluster [INF] osd.14 192.168.42.13:6812/4444 boot
2018-05-02 15:18:21.933874 mon.node01 mon.0 192.168.42.11:6789/0 233000 : cluster [INF] osd.15 192.168.42.13:6818/4787 boot
2018-05-02 15:18:21.933989 mon.node01 mon.0 192.168.42.11:6789/0 233001 : cluster [INF] osd.11 192.168.42.13:6804/3915 boot
2018-05-02 15:18:22.993198 mon.node01 mon.0 192.168.42.11:6789/0 233024 : cluster [INF] osd.9 192.168.42.13:6830/6607 boot
2018-05-02 15:18:22.995855 mon.node01 mon.0 192.168.42.11:6789/0 233025 : cluster [INF] osd.0 192.168.42.13:6828/5616 boot
2018-05-02 15:18:22.995930 mon.node01 mon.0 192.168.42.11:6789/0 233026 : cluster [INF] osd.1 192.168.42.13:6832/7029 boot
2018-05-02 15:18:15.676606 osd.51 osd.51 192.168.42.12:6810/4338 368 : cluster [WRN] 3 slow requests, 1 included below; oldest blocked for > 60.959171 secs
2018-05-02 15:18:15.676613 osd.51 osd.51 192.168.42.12:6810/4338 369 : cluster [WRN] slow request 60.959171 seconds old, received at 2018-05-02 15:17:14.717398: osd_op(client.6596773.0:183937335 1.a33 1:cc5b61d1:::rbd_data.1e815e6eb1d34.000000000056c0b3:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1323520~32768] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:21.036818 osd.14 osd.14 192.168.42.13:6812/4444 353 : cluster [WRN] Monitor daemon marked osd.14 down, but it is still running
2018-05-02 15:18:22.678710 osd.51 osd.51 192.168.42.12:6810/4338 370 : cluster [WRN] 2 slow requests, 2 included below; oldest blocked for > 60.274650 secs
2018-05-02 15:18:22.678722 osd.51 osd.51 192.168.42.12:6810/4338 371 : cluster [WRN] slow request 60.274650 seconds old, received at 2018-05-02 15:17:22.404014: osd_op(client.6596773.0:183937651 1.aab 1:d557b9bd:::rbd_data.1e815e6eb1d34.00000000005effb8:head [set-alloc-hint object_size 4194304 write_size 4194304,write 3551232~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:22.678729 osd.51 osd.51 192.168.42.12:6810/4338 372 : cluster [WRN] slow request 60.268857 seconds old, received at 2018-05-02 15:17:22.409807: osd_op(client.6526841.0:199030712 1.227 1:e44f9365:::rbd_data.51ffd188bfb19.000000000041d7c0:head [set-alloc-hint object_size 4194304 write_size 4194304,write 1818624~4096] snapc 0=[] ondisk+write+known_if_redirected e26131) currently waiting for peered
2018-05-02 15:18:24.042094 mon.node01 mon.0 192.168.42.11:6789/0 233075 : cluster [INF] osd.5 192.168.42.13:6808/4150 boot
2018-05-02 15:18:24.043666 mon.node01 mon.0 192.168.42.11:6789/0 233076 : cluster [INF] osd.17 192.168.42.13:6814/4554 boot
2018-05-02 15:18:24.548038 mon.node01 mon.0 192.168.42.11:6789/0 233191 : cluster [WRN] Health check update: 2 osds down (OSD_DOWN)
2018-05-02 15:18:24.548147 mon.node01 mon.0 192.168.42.11:6789/0 233192 : cluster [WRN] Health check update: Reduced data availability: 245 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:24.548204 mon.node01 mon.0 192.168.42.11:6789/0 233193 : cluster [WRN] Health check update: Degraded data redundancy: 493813/11883735 objects degraded (4.155%), 733 pgs unclean, 514 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:24.548250 mon.node01 mon.0 192.168.42.11:6789/0 233194 : cluster [WRN] Health check update: 20 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:24.549084 mon.node01 mon.0 192.168.42.11:6789/0 233195 : cluster [INF] osd.4 failed (root=default,host=node03) (4 reporters from different host after 24.643274 >= grace 20.298579)
2018-05-02 15:18:24.549862 mon.node01 mon.0 192.168.42.11:6789/0 233196 : cluster [INF] osd.12 failed (root=default,host=node03) (3 reporters from different host after 23.203997 >= grace 20.000000)
2018-05-02 15:18:24.550361 mon.node01 mon.0 192.168.42.11:6789/0 233197 : cluster [INF] osd.13 failed (root=default,host=node03) (4 reporters from different host after 24.454206 >= grace 20.000000)
2018-05-02 15:18:24.550688 mon.node01 mon.0 192.168.42.11:6789/0 233198 : cluster [INF] osd.16 failed (root=default,host=node03) (4 reporters from different host after 21.857742 >= grace 20.298740)
2018-05-02 15:18:21.047160 osd.15 osd.15 192.168.42.13:6818/4787 387 : cluster [WRN] Monitor daemon marked osd.15 down, but it is still running
2018-05-02 15:18:25.156763 osd.13 osd.13 192.168.42.13:6820/4934 393 : cluster [WRN] Monitor daemon marked osd.13 down, but it is still running
2018-05-02 15:18:25.631974 mon.node01 mon.0 192.168.42.11:6789/0 233396 : cluster [INF] osd.4 192.168.42.13:6816/4672 boot
2018-05-02 15:18:21.012257 osd.9 osd.9 192.168.42.13:6830/6607 441 : cluster [WRN] Monitor daemon marked osd.9 down, but it is still running
2018-05-02 15:18:26.679788 mon.node01 mon.0 192.168.42.11:6789/0 233663 : cluster [INF] osd.12 192.168.42.13:6824/5233 boot
2018-05-02 15:18:26.680755 mon.node01 mon.0 192.168.42.11:6789/0 233664 : cluster [INF] osd.19 192.168.42.13:6838/7714 boot
2018-05-02 15:18:26.680955 mon.node01 mon.0 192.168.42.11:6789/0 233665 : cluster [INF] osd.13 192.168.42.13:6820/4934 boot
2018-05-02 15:18:26.680998 mon.node01 mon.0 192.168.42.11:6789/0 233666 : cluster [INF] osd.16 192.168.42.13:6822/5084 boot
2018-05-02 15:18:22.079422 osd.1 osd.1 192.168.42.13:6832/7029 455 : cluster [WRN] Monitor daemon marked osd.1 down, but it is still running
2018-05-02 15:18:25.700018 osd.16 osd.16 192.168.42.13:6822/5084 345 : cluster [WRN] Monitor daemon marked osd.16 down, but it is still running
2018-05-02 15:18:29.101502 mon.node01 mon.0 192.168.42.11:6789/0 233914 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2018-05-02 15:18:29.114105 mon.node01 mon.0 192.168.42.11:6789/0 233915 : cluster [INF] osd.8 192.168.42.13:6826/5418 boot
2018-05-02 15:18:29.565219 mon.node01 mon.0 192.168.42.11:6789/0 233932 : cluster [WRN] Health check update: Reduced data availability: 84 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:29.565271 mon.node01 mon.0 192.168.42.11:6789/0 233933 : cluster [WRN] Health check update: Degraded data redundancy: 202763/11883735 objects degraded (1.706%), 404 pgs unclean, 239 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:29.565297 mon.node01 mon.0 192.168.42.11:6789/0 233934 : cluster [WRN] Health check update: 5 slow requests are blocked > 32 sec (REQUEST_SLOW)
2018-05-02 15:18:29.565507 mon.node01 mon.0 192.168.42.11:6789/0 233935 : cluster [INF] osd.10 failed (root=default,host=node03) (4 reporters from different host after 23.766485 >= grace 20.597260)
2018-05-02 15:18:29.567298 mon.node01 mon.0 192.168.42.11:6789/0 233936 : cluster [WRN] Health check failed: 1 osds down (OSD_DOWN)
2018-05-02 15:18:22.098559 osd.0 osd.0 192.168.42.13:6828/5616 377 : cluster [WRN] Monitor daemon marked osd.0 down, but it is still running
2018-05-02 15:18:23.295719 osd.4 osd.4 192.168.42.13:6816/4672 409 : cluster [WRN] 6 slow requests, 1 included below; oldest blocked for > 47.640189 secs
2018-05-02 15:18:23.295728 osd.4 osd.4 192.168.42.13:6816/4672 410 : cluster [WRN] slow request 30.264308 seconds old, received at 2018-05-02 15:17:53.031354: osd_op(client.6596773.0:183937898 1.2dd 1.8dea32dd (undecoded) ondisk+write+known_if_redirected e26143) currently waiting for peered
2018-05-02 15:18:24.965849 osd.4 osd.4 192.168.42.13:6816/4672 411 : cluster [WRN] Monitor daemon marked osd.4 down, but it is still running
2018-05-02 15:18:21.980375 osd.5 osd.5 192.168.42.13:6808/4150 343 : cluster [WRN] Monitor daemon marked osd.5 down, but it is still running
2018-05-02 15:18:25.166233 osd.12 osd.12 192.168.42.13:6824/5233 411 : cluster [WRN] Monitor daemon marked osd.12 down, but it is still running
2018-05-02 15:18:33.187741 mon.node01 mon.0 192.168.42.11:6789/0 234132 : cluster [INF] Health check cleared: OSD_DOWN (was: 1 osds down)
2018-05-02 15:18:33.204727 mon.node01 mon.0 192.168.42.11:6789/0 234133 : cluster [INF] osd.10 192.168.42.13:6836/7465 boot
2018-05-02 15:18:34.203560 mon.node01 mon.0 192.168.42.11:6789/0 234150 : cluster [INF] Health check cleared: REQUEST_SLOW (was: 2 slow requests are blocked > 32 sec)
2018-05-02 15:18:34.568568 mon.node01 mon.0 192.168.42.11:6789/0 234168 : cluster [WRN] Health check update: Reduced data availability: 13 pgs peering (PG_AVAILABILITY)
2018-05-02 15:18:34.568641 mon.node01 mon.0 192.168.42.11:6789/0 234169 : cluster [WRN] Health check update: Degraded data redundancy: 47468/11883735 objects degraded (0.399%), 92 pgs unclean, 101 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:32.064002 osd.10 osd.10 192.168.42.13:6836/7465 365 : cluster [WRN] Monitor daemon marked osd.10 down, but it is still running
2018-05-02 15:18:38.215116 mon.node01 mon.0 192.168.42.11:6789/0 234170 : cluster [INF] Health check cleared: PG_AVAILABILITY (was: Reduced data availability: 13 pgs peering)
2018-05-02 15:18:39.569077 mon.node01 mon.0 192.168.42.11:6789/0 234172 : cluster [WRN] Health check update: Degraded data redundancy: 14590/11883735 objects degraded (0.123%), 55 pgs unclean, 73 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:44.569539 mon.node01 mon.0 192.168.42.11:6789/0 234182 : cluster [WRN] Health check update: Degraded data redundancy: 27/11883735 objects degraded (0.000%), 15 pgs unclean, 26 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:49.569956 mon.node01 mon.0 192.168.42.11:6789/0 234184 : cluster [WRN] Health check update: Degraded data redundancy: 12/11883735 objects degraded (0.000%), 6 pgs unclean, 13 pgs degraded (PG_DEGRADED)
2018-05-02 15:18:53.915551 mon.node01 mon.0 192.168.42.11:6789/0 234186 : cluster [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/11883735 objects degraded (0.000%), 2 pgs unclean, 3 pgs degraded)
2018-05-02 15:18:53.915621 mon.node01 mon.0 192.168.42.11:6789/0 234187 : cluster [INF] Cluster is now healthy
BonsaiJoe
53 Posts
Quote from BonsaiJoe on May 6, 2018, 12:34 pmhi,
today we got the same error as last week, but on a different node:
[2402056.952552] i40e 0000:17:00.1: TX driver issue detected, PF reset issued
[2402057.082512] i40e 0000:17:00.1: i40e_ptp_init: PTP not supported on eth3
[2402057.161123] i40e 0000:17:00.1 eth3: speed changed to 0 for port eth3
[2402057.161586] bond1: link status up again after 0 ms for interface eth3
Then all 20 OSDs went down and recovered within a few minutes.
admin
2,930 Posts
Quote from admin on May 6, 2018, 12:41 pmWe will build a newer kernel with an updated i40e driver in a couple of days for you to test.
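Once the new kernel is available, a quick before/after comparison of what is actually loaded could be made with something like the following (eth3 is assumed to be the affected port, as in the logs above):
uname -r                                              # running kernel
modinfo i40e | grep -i '^version'                     # i40e module version on disk
ethtool -i eth3 | grep -E 'driver|version|firmware'   # driver/firmware actually in use on eth3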
BonsaiJoe
53 Posts
admin
2,930 Posts
Quote from admin on May 6, 2018, 7:39 pmCan you show the output of
ethtool -i eth3
dmesg | grep firmware
lspci -v
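If it helps, the three outputs can be collected into a single file for posting, assuming eth3 is the interface behind bond1 that showed the resets:
( ethtool -i eth3; dmesg | grep firmware; lspci -v ) > nic-diag.txt 2>&1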