
[Solved] Cluster upgrade halted in the middle of operation


Hello,

I started the upgrade of my 12-node PetaSAN cluster from version 3.1.0 to 3.2.1, one node at a time as the guide says. The first 4 nodes went smoothly, but when I was about to start the 5th I realized that the dashboard was no longer working, and the upgrade script would not run anymore: it hung at the very beginning. After some investigation, I'm afraid the issue is with the "ceph" command and its authentication:

root@psan05:/opt/petasan/scripts/online-updates# ceph osd tree
2023-10-12T18:19:26.537+0200 7f01def25700  0 monclient(hunting): authenticate timed out after 300
2023-10-12T18:24:26.542+0200 7f01def25700  0 monclient(hunting): authenticate timed out after 300
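
Every ceph call hangs for the full 300-second default before giving up; a shorter timeout can be passed for faster probing (assuming I remember the global option correctly):

# probe the cluster with a 10 s connect timeout instead of the 300 s default
ceph --connect-timeout 10 -s
ceph --connect-timeout 10 osd tree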

Now I have 8 nodes (with 2 monitors) on 3.1.0 and 4 nodes (with 1 monitor) on 3.2.1, the PetaSAN web GUI is not working, and the ceph command is not working! 🙁 The monitor processes do not start:

root@psan01:~# systemctl status ceph-mon@psan01.service
ceph-mon@psan01.service - Ceph cluster monitor daemon
Loaded: loaded (/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: enabled)
Active: failed (Result: signal) since Thu 2023-10-12 18:31:18 CEST; 28min ago
Process: 6831 ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id psan01 --setuser ceph --setgroup ceph (code=killed, signal=ABRT)
Main PID: 6831 (code=killed, signal=ABRT)

Oct 12 18:31:08 psan01 systemd[1]: ceph-mon@psan01.service: Main process exited, code=killed, status=6/ABRT
Oct 12 18:31:08 psan01 systemd[1]: ceph-mon@psan01.service: Failed with result 'signal'.
Oct 12 18:31:18 psan01 systemd[1]: ceph-mon@psan01.service: Scheduled restart job, restart counter is at 5.
Oct 12 18:31:18 psan01 systemd[1]: Stopped Ceph cluster monitor daemon.
Oct 12 18:31:18 psan01 systemd[1]: ceph-mon@psan01.service: Start request repeated too quickly.
Oct 12 18:31:18 psan01 systemd[1]: ceph-mon@psan01.service: Failed with result 'signal'.
Oct 12 18:31:18 psan01 systemd[1]: Failed to start Ceph cluster monitor daemon.
Oct 12 18:59:05 psan01 systemd[1]: ceph-mon@psan01.service: Start request repeated too quickly.
Oct 12 18:59:05 psan01 systemd[1]: ceph-mon@psan01.service: Failed with result 'signal'.
Oct 12 18:59:05 psan01 systemd[1]: Failed to start Ceph cluster monitor daemon.
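
After the fifth crash systemd hits its rate limit, hence the "Start request repeated too quickly" messages. To retry without a full reboot, I believe the failed state can be cleared and the monitor even run in the foreground to see the crash directly (the last command is the unit's ExecStart with -d added, which should log to stderr):

# let systemd accept a new start request after the rate limit
systemctl reset-failed ceph-mon@psan01.service
systemctl start ceph-mon@psan01.service

# or run the monitor in the foreground, logging to stderr
/usr/bin/ceph-mon -d --cluster ceph --id psan01 --setuser ceph --setgroup ceph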

I rebooted almost all the nodes, but nothing seems to change. What can I do to unlock this situation?

Thanks, Ste

 

PS: here I add the output of the journalctl command:

root@psan05:~# journalctl -xe
-- Logs begin at Thu 2023-10-12 17:08:23 CEST, end at Thu 2023-10-12 19:30:29 CEST. --
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-52-00107bb6-8aec-45f8-a86a-8a8d24442f2c.
-- Subject: A start job for unit ceph-volume@lvm-52-00107bb6-8aec-45f8-a86a-8a8d24442f2c.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-52-00107bb6-8aec-45f8-a86a-8a8d24442f2c.service has finished successfully.
--
-- The job identifier is 98.
Oct 12 19:20:29 psan05 systemd[1]: Started Ceph object storage daemon osd.66.
-- Subject: A start job for unit ceph-osd@66.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-osd@66.service has finished successfully.
--
-- The job identifier is 709.
Oct 12 19:20:29 psan05 sh[2238]: Running command: /usr/sbin/ceph-volume lvm trigger 55-d10c4d8b-e5c4-4f38-91b5-ee8d5ba054b2
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-57-e3d6da65-9a85-4c0b-bef6-7bec0db6861d.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-57-e3d6da65-9a85-4c0b-bef6-7bec0db6861d.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-57-e3d6da65-9a85-4c0b-bef6-7bec0db6861d.
-- Subject: A start job for unit ceph-volume@lvm-57-e3d6da65-9a85-4c0b-bef6-7bec0db6861d.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-57-e3d6da65-9a85-4c0b-bef6-7bec0db6861d.service has finished successfully.
--
-- The job identifier is 164.
Oct 12 19:20:29 psan05 sh[2225]: Running command: /usr/sbin/ceph-volume lvm trigger 59-dbaf9fa7-bbe8-4a4e-90e4-14a1ca0b8b2b
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-54-fef146d2-06d9-4cd9-82f7-fd22d3908926.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-54-fef146d2-06d9-4cd9-82f7-fd22d3908926.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-54-fef146d2-06d9-4cd9-82f7-fd22d3908926.
-- Subject: A start job for unit ceph-volume@lvm-54-fef146d2-06d9-4cd9-82f7-fd22d3908926.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-54-fef146d2-06d9-4cd9-82f7-fd22d3908926.service has finished successfully.
--
-- The job identifier is 160.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-53-a9513956-686f-4a28-9418-d32121a8a340.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-53-a9513956-686f-4a28-9418-d32121a8a340.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-53-a9513956-686f-4a28-9418-d32121a8a340.
-- Subject: A start job for unit ceph-volume@lvm-53-a9513956-686f-4a28-9418-d32121a8a340.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-53-a9513956-686f-4a28-9418-d32121a8a340.service has finished successfully.
--
-- The job identifier is 112.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-56-49c57650-6453-4531-84ee-0a101cc4116f.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-56-49c57650-6453-4531-84ee-0a101cc4116f.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-56-49c57650-6453-4531-84ee-0a101cc4116f.
-- Subject: A start job for unit ceph-volume@lvm-56-49c57650-6453-4531-84ee-0a101cc4116f.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-56-49c57650-6453-4531-84ee-0a101cc4116f.service has finished successfully.
--
-- The job identifier is 133.
Oct 12 19:20:29 psan05 sh[2231]: Running command: /usr/sbin/ceph-volume lvm trigger 60-f151eb17-6a6e-4811-9098-c3c72da6fd08
Oct 12 19:20:29 psan05 systemd[1]: Started Ceph object storage daemon osd.67.
-- Subject: A start job for unit ceph-osd@67.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-osd@67.service has finished successfully.
--
-- The job identifier is 727.
Oct 12 19:20:29 psan05 sh[2232]: Running command: /usr/sbin/ceph-volume lvm trigger 58-6d3b731f-2990-4191-9a75-3b78d6820cd1
Oct 12 19:20:29 psan05 systemd[1]: Started Ceph object storage daemon osd.68.
-- Subject: A start job for unit ceph-osd@68.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-osd@68.service has finished successfully.
--
-- The job identifier is 731.
Oct 12 19:20:29 psan05 sh[2243]: Running command: /usr/sbin/ceph-volume lvm trigger 63-8af79cb4-b14f-4fda-8ab5-cee38061fdc1
Oct 12 19:20:29 psan05 sh[2237]: Running command: /usr/sbin/ceph-volume lvm trigger 62-e366845b-0233-4580-a6a9-93b778b0fa05
Oct 12 19:20:29 psan05 systemd[1]: Started Ceph object storage daemon osd.69.
-- Subject: A start job for unit ceph-osd@69.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-osd@69.service has finished successfully.
--
-- The job identifier is 711.
Oct 12 19:20:29 psan05 sh[2234]: Running command: /usr/sbin/ceph-volume lvm trigger 61-0630f9fa-e499-47a2-b356-10cd44ec3d9e
Oct 12 19:20:29 psan05 systemd[1]: Started Ceph object storage daemon osd.70.
-- Subject: A start job for unit ceph-osd@70.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-osd@70.service has finished successfully.
--
-- The job identifier is 738.
Oct 12 19:20:29 psan05 sh[2246]: Running command: /usr/sbin/ceph-volume lvm trigger 64-49d3e2a0-0dd4-4d4e-ac75-1f950e2dd33b
Oct 12 19:20:29 psan05 systemd[1]: Started Ceph object storage daemon osd.71.
-- Subject: A start job for unit ceph-osd@71.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-osd@71.service has finished successfully.
--
-- The job identifier is 740.
Oct 12 19:20:29 psan05 sh[2248]: Running command: /usr/sbin/ceph-volume lvm trigger 65-6290ffc1-2b13-498b-bbda-0a8b0172f9bc
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-55-d10c4d8b-e5c4-4f38-91b5-ee8d5ba054b2.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-55-d10c4d8b-e5c4-4f38-91b5-ee8d5ba054b2.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-55-d10c4d8b-e5c4-4f38-91b5-ee8d5ba054b2.
-- Subject: A start job for unit ceph-volume@lvm-55-d10c4d8b-e5c4-4f38-91b5-ee8d5ba054b2.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-55-d10c4d8b-e5c4-4f38-91b5-ee8d5ba054b2.service has finished successfully.
--
-- The job identifier is 99.
Oct 12 19:20:29 psan05 systemd[1]: Started Ceph object storage daemon osd.72.
-- Subject: A start job for unit ceph-osd@72.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-osd@72.service has finished successfully.
--
-- The job identifier is 708.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-59-dbaf9fa7-bbe8-4a4e-90e4-14a1ca0b8b2b.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-59-dbaf9fa7-bbe8-4a4e-90e4-14a1ca0b8b2b.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-59-dbaf9fa7-bbe8-4a4e-90e4-14a1ca0b8b2b.
-- Subject: A start job for unit ceph-volume@lvm-59-dbaf9fa7-bbe8-4a4e-90e4-14a1ca0b8b2b.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-59-dbaf9fa7-bbe8-4a4e-90e4-14a1ca0b8b2b.service has finished successfully.
--
-- The job identifier is 111.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-60-f151eb17-6a6e-4811-9098-c3c72da6fd08.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-60-f151eb17-6a6e-4811-9098-c3c72da6fd08.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-60-f151eb17-6a6e-4811-9098-c3c72da6fd08.
-- Subject: A start job for unit ceph-volume@lvm-60-f151eb17-6a6e-4811-9098-c3c72da6fd08.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-60-f151eb17-6a6e-4811-9098-c3c72da6fd08.service has finished successfully.
--
-- The job identifier is 101.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-62-e366845b-0233-4580-a6a9-93b778b0fa05.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-62-e366845b-0233-4580-a6a9-93b778b0fa05.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-62-e366845b-0233-4580-a6a9-93b778b0fa05.
-- Subject: A start job for unit ceph-volume@lvm-62-e366845b-0233-4580-a6a9-93b778b0fa05.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-62-e366845b-0233-4580-a6a9-93b778b0fa05.service has finished successfully.
--
-- The job identifier is 97.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-63-8af79cb4-b14f-4fda-8ab5-cee38061fdc1.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-63-8af79cb4-b14f-4fda-8ab5-cee38061fdc1.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-63-8af79cb4-b14f-4fda-8ab5-cee38061fdc1.
-- Subject: A start job for unit ceph-volume@lvm-63-8af79cb4-b14f-4fda-8ab5-cee38061fdc1.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-63-8af79cb4-b14f-4fda-8ab5-cee38061fdc1.service has finished successfully.
--
-- The job identifier is 134.
Oct 12 19:20:29 psan05 sh[2254]: Running command: /usr/sbin/ceph-volume lvm trigger 66-43d6c21c-4f3d-4b3c-9f05-7b74c169f529
Oct 12 19:20:29 psan05 systemd[1]: Started Ceph object storage daemon osd.73.
-- Subject: A start job for unit ceph-osd@73.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-osd@73.service has finished successfully.
--
-- The job identifier is 741.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-58-6d3b731f-2990-4191-9a75-3b78d6820cd1.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-58-6d3b731f-2990-4191-9a75-3b78d6820cd1.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-58-6d3b731f-2990-4191-9a75-3b78d6820cd1.
-- Subject: A start job for unit ceph-volume@lvm-58-6d3b731f-2990-4191-9a75-3b78d6820cd1.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-58-6d3b731f-2990-4191-9a75-3b78d6820cd1.service has finished successfully.
--
-- The job identifier is 136.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-64-49d3e2a0-0dd4-4d4e-ac75-1f950e2dd33b.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-64-49d3e2a0-0dd4-4d4e-ac75-1f950e2dd33b.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-64-49d3e2a0-0dd4-4d4e-ac75-1f950e2dd33b.
-- Subject: A start job for unit ceph-volume@lvm-64-49d3e2a0-0dd4-4d4e-ac75-1f950e2dd33b.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-64-49d3e2a0-0dd4-4d4e-ac75-1f950e2dd33b.service has finished successfully.
--
-- The job identifier is 152.
Oct 12 19:20:29 psan05 sh[2255]: Running command: /usr/sbin/ceph-volume lvm trigger 67-4053a559-540c-4a81-9255-d8996cc18cf9
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-61-0630f9fa-e499-47a2-b356-10cd44ec3d9e.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-61-0630f9fa-e499-47a2-b356-10cd44ec3d9e.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-61-0630f9fa-e499-47a2-b356-10cd44ec3d9e.
-- Subject: A start job for unit ceph-volume@lvm-61-0630f9fa-e499-47a2-b356-10cd44ec3d9e.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-61-0630f9fa-e499-47a2-b356-10cd44ec3d9e.service has finished successfully.
--
-- The job identifier is 114.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-65-6290ffc1-2b13-498b-bbda-0a8b0172f9bc.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-65-6290ffc1-2b13-498b-bbda-0a8b0172f9bc.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-65-6290ffc1-2b13-498b-bbda-0a8b0172f9bc.
-- Subject: A start job for unit ceph-volume@lvm-65-6290ffc1-2b13-498b-bbda-0a8b0172f9bc.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-65-6290ffc1-2b13-498b-bbda-0a8b0172f9bc.service has finished successfully.
--
-- The job identifier is 86.
Oct 12 19:20:29 psan05 sh[2264]: Running command: /usr/sbin/ceph-volume lvm trigger 69-2a8a7ad9-d1e8-4558-8267-20f9b777a889
Oct 12 19:20:29 psan05 sh[2258]: Running command: /usr/sbin/ceph-volume lvm trigger 68-ace2e8f3-2cf5-46cc-8d96-05c6a1df6b13
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-66-43d6c21c-4f3d-4b3c-9f05-7b74c169f529.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-66-43d6c21c-4f3d-4b3c-9f05-7b74c169f529.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-66-43d6c21c-4f3d-4b3c-9f05-7b74c169f529.
-- Subject: A start job for unit ceph-volume@lvm-66-43d6c21c-4f3d-4b3c-9f05-7b74c169f529.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-66-43d6c21c-4f3d-4b3c-9f05-7b74c169f529.service has finished successfully.
--
-- The job identifier is 163.
Oct 12 19:20:29 psan05 sh[2271]: Running command: /usr/sbin/ceph-volume lvm trigger 72-0128a512-b4a6-4238-85f7-3db574cb4509
Oct 12 19:20:29 psan05 sh[2269]: Running command: /usr/sbin/ceph-volume lvm trigger 71-b5266ae5-eb82-4896-9ce0-a5ca09911f45
Oct 12 19:20:29 psan05 sh[2267]: Running command: /usr/sbin/ceph-volume lvm trigger 70-13032b2c-29d9-4611-942c-eba89cd57cc4
Oct 12 19:20:29 psan05 systemd[1]: Reached target ceph target allowing to start/stop all ceph-osd@.service instances at once.
-- Subject: A start job for unit ceph-osd.target has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-osd.target has finished successfully.
--
-- The job identifier is 127.
Oct 12 19:20:29 psan05 systemd[1]: Reached target ceph target allowing to start/stop all ceph*@.service instances at once.
-- Subject: A start job for unit ceph.target has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph.target has finished successfully.
--
-- The job identifier is 123.
Oct 12 19:20:29 psan05 systemd[1]: Starting Map RBD devices...
-- Subject: A start job for unit rbdmap.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit rbdmap.service has begun execution.
--
-- The job identifier is 153.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-69-2a8a7ad9-d1e8-4558-8267-20f9b777a889.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-69-2a8a7ad9-d1e8-4558-8267-20f9b777a889.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-69-2a8a7ad9-d1e8-4558-8267-20f9b777a889.
-- Subject: A start job for unit ceph-volume@lvm-69-2a8a7ad9-d1e8-4558-8267-20f9b777a889.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-69-2a8a7ad9-d1e8-4558-8267-20f9b777a889.service has finished successfully.
--
-- The job identifier is 102.
Oct 12 19:20:29 psan05 sh[2274]: Running command: /usr/sbin/ceph-volume lvm trigger 73-e6b2093c-a6ba-4bca-810d-ab5c48bed65d
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-67-4053a559-540c-4a81-9255-d8996cc18cf9.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-67-4053a559-540c-4a81-9255-d8996cc18cf9.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-67-4053a559-540c-4a81-9255-d8996cc18cf9.
-- Subject: A start job for unit ceph-volume@lvm-67-4053a559-540c-4a81-9255-d8996cc18cf9.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-67-4053a559-540c-4a81-9255-d8996cc18cf9.service has finished successfully.
--
-- The job identifier is 147.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-72-0128a512-b4a6-4238-85f7-3db574cb4509.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-72-0128a512-b4a6-4238-85f7-3db574cb4509.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-72-0128a512-b4a6-4238-85f7-3db574cb4509.
-- Subject: A start job for unit ceph-volume@lvm-72-0128a512-b4a6-4238-85f7-3db574cb4509.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-72-0128a512-b4a6-4238-85f7-3db574cb4509.service has finished successfully.
--
-- The job identifier is 145.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-71-b5266ae5-eb82-4896-9ce0-a5ca09911f45.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-71-b5266ae5-eb82-4896-9ce0-a5ca09911f45.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-71-b5266ae5-eb82-4896-9ce0-a5ca09911f45.
-- Subject: A start job for unit ceph-volume@lvm-71-b5266ae5-eb82-4896-9ce0-a5ca09911f45.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-71-b5266ae5-eb82-4896-9ce0-a5ca09911f45.service has finished successfully.
--
-- The job identifier is 110.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-68-ace2e8f3-2cf5-46cc-8d96-05c6a1df6b13.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-68-ace2e8f3-2cf5-46cc-8d96-05c6a1df6b13.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-68-ace2e8f3-2cf5-46cc-8d96-05c6a1df6b13.
-- Subject: A start job for unit ceph-volume@lvm-68-ace2e8f3-2cf5-46cc-8d96-05c6a1df6b13.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-68-ace2e8f3-2cf5-46cc-8d96-05c6a1df6b13.service has finished successfully.
--
-- The job identifier is 144.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-70-13032b2c-29d9-4611-942c-eba89cd57cc4.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-70-13032b2c-29d9-4611-942c-eba89cd57cc4.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-70-13032b2c-29d9-4611-942c-eba89cd57cc4.
-- Subject: A start job for unit ceph-volume@lvm-70-13032b2c-29d9-4611-942c-eba89cd57cc4.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-70-13032b2c-29d9-4611-942c-eba89cd57cc4.service has finished successfully.
--
-- The job identifier is 106.
Oct 12 19:20:29 psan05 systemd[1]: ceph-volume@lvm-73-e6b2093c-a6ba-4bca-810d-ab5c48bed65d.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-volume@lvm-73-e6b2093c-a6ba-4bca-810d-ab5c48bed65d.service has successfully entered the 'dead' state.
Oct 12 19:20:29 psan05 systemd[1]: Finished Ceph Volume activation: lvm-73-e6b2093c-a6ba-4bca-810d-ab5c48bed65d.
-- Subject: A start job for unit ceph-volume@lvm-73-e6b2093c-a6ba-4bca-810d-ab5c48bed65d.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-volume@lvm-73-e6b2093c-a6ba-4bca-810d-ab5c48bed65d.service has finished successfully.
--
-- The job identifier is 165.
Oct 12 19:20:29 psan05 rbdmap[5646]: No /etc/ceph/rbdmap found.
Oct 12 19:20:29 psan05 systemd[1]: Finished Map RBD devices.
-- Subject: A start job for unit rbdmap.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit rbdmap.service has finished successfully.
--
-- The job identifier is 153.
Oct 12 19:20:29 psan05 systemd[1]: Reached target Remote File Systems (Pre).
-- Subject: A start job for unit remote-fs-pre.target has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit remote-fs-pre.target has finished successfully.
--
-- The job identifier is 48.
Oct 12 19:20:29 psan05 systemd[1]: Reached target Remote File Systems.
-- Subject: A start job for unit remote-fs.target has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit remote-fs.target has finished successfully.
--
-- The job identifier is 166.
Oct 12 19:20:29 psan05 systemd[1]: Finished Availability of block devices.
-- Subject: A start job for unit blk-availability.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit blk-availability.service has finished successfully.
--
-- The job identifier is 63.
Oct 12 19:20:29 psan05 systemd[1]: Starting LSB: Collectl monitors system performance....
-- Subject: A start job for unit collectl.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit collectl.service has begun execution.
--
-- The job identifier is 143.
Oct 12 19:20:29 psan05 systemd[1]: Started Regular background program processing daemon.
-- Subject: A start job for unit cron.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit cron.service has finished successfully.
--
-- The job identifier is 146.
Oct 12 19:20:29 psan05 systemd[1]: Starting LSB: Start opensm subnet manager....
-- Subject: A start job for unit opensm.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit opensm.service has begun execution.
--
-- The job identifier is 88.
Oct 12 19:20:29 psan05 systemd[1]: Starting LSB: radosgw RESTful rados gateway...
-- Subject: A start job for unit radosgw.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit radosgw.service has begun execution.
--
-- The job identifier is 108.
Oct 12 19:20:29 psan05 systemd[1]: Starting Permit User Sessions...
-- Subject: A start job for unit systemd-user-sessions.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit systemd-user-sessions.service has begun execution.
--
-- The job identifier is 90.
Oct 12 19:20:29 psan05 cron[5649]: (CRON) INFO (pidfile fd = 3)
Oct 12 19:20:29 psan05 cron[5649]: (CRON) INFO (Running @reboot jobs)
Oct 12 19:20:29 psan05 systemd[1]: Finished Permit User Sessions.
-- Subject: A start job for unit systemd-user-sessions.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit systemd-user-sessions.service has finished successfully.
--
-- The job identifier is 90.
Oct 12 19:20:29 psan05 systemd[1]: Starting Set console scheme...
-- Subject: A start job for unit setvtrgb.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit setvtrgb.service has begun execution.
--
-- The job identifier is 64.
Oct 12 19:20:29 psan05 systemd[1]: Started LSB: Start opensm subnet manager..
-- Subject: A start job for unit opensm.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit opensm.service has finished successfully.
--
-- The job identifier is 88.
Oct 12 19:20:29 psan05 opensm[5654]: No infiniband adapters found.
Oct 12 19:20:29 psan05 systemd[1]: Finished Set console scheme.
-- Subject: A start job for unit setvtrgb.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit setvtrgb.service has finished successfully.
--
-- The job identifier is 64.
Oct 12 19:20:29 psan05 systemd[1]: Created slice system-getty.slice.
-- Subject: A start job for unit system-getty.slice has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit system-getty.slice has finished successfully.
--
-- The job identifier is 94.
Oct 12 19:20:29 psan05 systemd[1]: Started Getty on tty1.
-- Subject: A start job for unit getty@tty1.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit getty@tty1.service has finished successfully.
--
-- The job identifier is 93.
Oct 12 19:20:29 psan05 systemd[1]: Reached target Login Prompts.
-- Subject: A start job for unit getty.target has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit getty.target has finished successfully.
--
-- The job identifier is 91.
Oct 12 19:20:29 psan05 ceph-mon[5469]: /mnt/ceph-15.2.14/src/mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7fc0f3453580 time 2023-10-12T19:20:29.648985+0200
Oct 12 19:20:29 psan05 ceph-mon[5469]: /mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)
Oct 12 19:20:29 psan05 ceph-mon[5469]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:29 psan05 ceph-mon[5469]:  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x155) [0x7fc0f4369c01]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  2: (()+0x26ae09) [0x7fc0f4369e09]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  3: (FSMap::sanity() const+0xc7) [0x7fc0f48e2727]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  4: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55757abb2588]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  5: (PaxosService::refresh(bool*)+0x28f) [0x55757aacd41f]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  6: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55757a96b35c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  7: (Monitor::init_paxos()+0x7c) [0x55757a96b66c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  8: (Monitor::preinit()+0xf56) [0x55757a9ae1c6]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  9: (main()+0x2e0c) [0x55757a93f8dc]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  10: (__libc_start_main()+0xf3) [0x7fc0f3916083]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  11: (_start()+0x2e) [0x55757a95190e]
Oct 12 19:20:29 psan05 ceph-mon[5469]: *** Caught signal (Aborted) **
Oct 12 19:20:29 psan05 ceph-mon[5469]:  in thread 7fc0f3453580 thread_name:ceph-mon
Oct 12 19:20:29 psan05 ceph-mon[5469]: 2023-10-12T19:20:29.652+0200 7fc0f3453580 -1 /mnt/ceph-15.2.14/src/mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7fc0f3453580 time 2023-10-12T19:20:29.648985+0200
Oct 12 19:20:29 psan05 ceph-mon[5469]: /mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)
Oct 12 19:20:29 psan05 ceph-mon[5469]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:29 psan05 ceph-mon[5469]:  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x155) [0x7fc0f4369c01]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  2: (()+0x26ae09) [0x7fc0f4369e09]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  3: (FSMap::sanity() const+0xc7) [0x7fc0f48e2727]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  4: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55757abb2588]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  5: (PaxosService::refresh(bool*)+0x28f) [0x55757aacd41f]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  6: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55757a96b35c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  7: (Monitor::init_paxos()+0x7c) [0x55757a96b66c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  8: (Monitor::preinit()+0xf56) [0x55757a9ae1c6]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  9: (main()+0x2e0c) [0x55757a93f8dc]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  10: (__libc_start_main()+0xf3) [0x7fc0f3916083]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  11: (_start()+0x2e) [0x55757a95190e]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:29 psan05 ceph-mon[5469]:  1: (()+0x14420) [0x7fc0f3e4a420]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  2: (gsignal()+0xcb) [0x7fc0f393500b]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  3: (abort()+0x12b) [0x7fc0f3914859]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x7fc0f4369c5c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  5: (()+0x26ae09) [0x7fc0f4369e09]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  6: (FSMap::sanity() const+0xc7) [0x7fc0f48e2727]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  7: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55757abb2588]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  8: (PaxosService::refresh(bool*)+0x28f) [0x55757aacd41f]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  9: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55757a96b35c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  10: (Monitor::init_paxos()+0x7c) [0x55757a96b66c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  11: (Monitor::preinit()+0xf56) [0x55757a9ae1c6]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  12: (main()+0x2e0c) [0x55757a93f8dc]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  13: (__libc_start_main()+0xf3) [0x7fc0f3916083]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  14: (_start()+0x2e) [0x55757a95190e]
Oct 12 19:20:29 psan05 ceph-mon[5469]: 2023-10-12T19:20:29.656+0200 7fc0f3453580 -1 *** Caught signal (Aborted) **
Oct 12 19:20:29 psan05 ceph-mon[5469]:  in thread 7fc0f3453580 thread_name:ceph-mon
Oct 12 19:20:29 psan05 ceph-mon[5469]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:29 psan05 ceph-mon[5469]:  1: (()+0x14420) [0x7fc0f3e4a420]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  2: (gsignal()+0xcb) [0x7fc0f393500b]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  3: (abort()+0x12b) [0x7fc0f3914859]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x7fc0f4369c5c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  5: (()+0x26ae09) [0x7fc0f4369e09]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  6: (FSMap::sanity() const+0xc7) [0x7fc0f48e2727]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  7: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55757abb2588]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  8: (PaxosService::refresh(bool*)+0x28f) [0x55757aacd41f]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  9: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55757a96b35c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  10: (Monitor::init_paxos()+0x7c) [0x55757a96b66c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  11: (Monitor::preinit()+0xf56) [0x55757a9ae1c6]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  12: (main()+0x2e0c) [0x55757a93f8dc]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  13: (__libc_start_main()+0xf3) [0x7fc0f3916083]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  14: (_start()+0x2e) [0x55757a95190e]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Oct 12 19:20:29 psan05 ceph-mon[5469]:   -254> 2023-10-12T19:20:29.652+0200 7fc0f3453580 -1 /mnt/ceph-15.2.14/src/mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7fc0f3453580 time 2023-10-12T19:20:29.648985+0200
Oct 12 19:20:29 psan05 ceph-mon[5469]: /mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)
Oct 12 19:20:29 psan05 ceph-mon[5469]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:29 psan05 ceph-mon[5469]:  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x155) [0x7fc0f4369c01]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  2: (()+0x26ae09) [0x7fc0f4369e09]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  3: (FSMap::sanity() const+0xc7) [0x7fc0f48e2727]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  4: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55757abb2588]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  5: (PaxosService::refresh(bool*)+0x28f) [0x55757aacd41f]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  6: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55757a96b35c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  7: (Monitor::init_paxos()+0x7c) [0x55757a96b66c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  8: (Monitor::preinit()+0xf56) [0x55757a9ae1c6]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  9: (main()+0x2e0c) [0x55757a93f8dc]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  10: (__libc_start_main()+0xf3) [0x7fc0f3916083]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  11: (_start()+0x2e) [0x55757a95190e]
Oct 12 19:20:29 psan05 ceph-mon[5469]:   -253> 2023-10-12T19:20:29.656+0200 7fc0f3453580 -1 *** Caught signal (Aborted) **
Oct 12 19:20:29 psan05 ceph-mon[5469]:  in thread 7fc0f3453580 thread_name:ceph-mon
Oct 12 19:20:29 psan05 ceph-mon[5469]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:29 psan05 ceph-mon[5469]:  1: (()+0x14420) [0x7fc0f3e4a420]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  2: (gsignal()+0xcb) [0x7fc0f393500b]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  3: (abort()+0x12b) [0x7fc0f3914859]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x7fc0f4369c5c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  5: (()+0x26ae09) [0x7fc0f4369e09]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  6: (FSMap::sanity() const+0xc7) [0x7fc0f48e2727]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  7: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55757abb2588]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  8: (PaxosService::refresh(bool*)+0x28f) [0x55757aacd41f]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  9: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55757a96b35c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  10: (Monitor::init_paxos()+0x7c) [0x55757a96b66c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  11: (Monitor::preinit()+0xf56) [0x55757a9ae1c6]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  12: (main()+0x2e0c) [0x55757a93f8dc]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  13: (__libc_start_main()+0xf3) [0x7fc0f3916083]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  14: (_start()+0x2e) [0x55757a95190e]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Oct 12 19:20:29 psan05 ceph-mon[5469]:   -254> 2023-10-12T19:20:29.652+0200 7fc0f3453580 -1 /mnt/ceph-15.2.14/src/mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7fc0f3453580 time 2023-10-12T19:20:29.648985+0200
Oct 12 19:20:29 psan05 ceph-mon[5469]: /mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)
Oct 12 19:20:29 psan05 ceph-mon[5469]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:29 psan05 ceph-mon[5469]:  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x155) [0x7fc0f4369c01]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  2: (()+0x26ae09) [0x7fc0f4369e09]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  3: (FSMap::sanity() const+0xc7) [0x7fc0f48e2727]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  4: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55757abb2588]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  5: (PaxosService::refresh(bool*)+0x28f) [0x55757aacd41f]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  6: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55757a96b35c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  7: (Monitor::init_paxos()+0x7c) [0x55757a96b66c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  8: (Monitor::preinit()+0xf56) [0x55757a9ae1c6]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  9: (main()+0x2e0c) [0x55757a93f8dc]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  10: (__libc_start_main()+0xf3) [0x7fc0f3916083]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  11: (_start()+0x2e) [0x55757a95190e]
Oct 12 19:20:29 psan05 ceph-mon[5469]:   -253> 2023-10-12T19:20:29.656+0200 7fc0f3453580 -1 *** Caught signal (Aborted) **
Oct 12 19:20:29 psan05 ceph-mon[5469]:  in thread 7fc0f3453580 thread_name:ceph-mon
Oct 12 19:20:29 psan05 ceph-mon[5469]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:29 psan05 ceph-mon[5469]:  1: (()+0x14420) [0x7fc0f3e4a420]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  2: (gsignal()+0xcb) [0x7fc0f393500b]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  3: (abort()+0x12b) [0x7fc0f3914859]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x7fc0f4369c5c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  5: (()+0x26ae09) [0x7fc0f4369e09]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  6: (FSMap::sanity() const+0xc7) [0x7fc0f48e2727]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  7: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55757abb2588]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  8: (PaxosService::refresh(bool*)+0x28f) [0x55757aacd41f]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  9: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55757a96b35c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  10: (Monitor::init_paxos()+0x7c) [0x55757a96b66c]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  11: (Monitor::preinit()+0xf56) [0x55757a9ae1c6]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  12: (main()+0x2e0c) [0x55757a93f8dc]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  13: (__libc_start_main()+0xf3) [0x7fc0f3916083]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  14: (_start()+0x2e) [0x55757a95190e]
Oct 12 19:20:29 psan05 ceph-mon[5469]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Oct 12 19:20:29 psan05 systemd[1]: Started LSB: radosgw RESTful rados gateway.
-- Subject: A start job for unit radosgw.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit radosgw.service has finished successfully.
--
-- The job identifier is 108.
Oct 12 19:20:29 psan05 systemd[1]: ceph-mon@psan05.service: Main process exited, code=killed, status=6/ABRT
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- An ExecStart= process belonging to unit ceph-mon@psan05.service has exited.
--
-- The process' exit code is 'killed' and its exit status is 6.
Oct 12 19:20:29 psan05 systemd[1]: ceph-mon@psan05.service: Failed with result 'signal'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-mon@psan05.service has entered the 'failed' state with result 'signal'.
Oct 12 19:20:30 psan05 collectl[5667]: V4.3.1-1 Beginning execution on psan05...
Oct 12 19:20:30 psan05 collectl[5647]: Starting collectl: collectl
Oct 12 19:20:30 psan05 collectl[5667]: Use of uninitialized value $strace in pattern match (m//) at /usr/share/collectl/formatit.ph line 178.
Oct 12 19:20:30 psan05 collectl[5667]: Use of uninitialized value $speed in numeric gt (>) at /usr/share/collectl/formatit.ph line 181.
Oct 12 19:20:30 psan05 collectl[5667]: Exiting subroutine via last at /usr/share/collectl/formatit.ph line 382.
Oct 12 19:20:30 psan05 collectl[5667]: Can't "last" outside a loop block at /usr/share/collectl/formatit.ph line 382.
Oct 12 19:20:30 psan05 collectl[5647]: .
Oct 12 19:20:30 psan05 systemd[1]: Started LSB: Collectl monitors system performance..
-- Subject: A start job for unit collectl.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit collectl.service has finished successfully.
--
-- The job identifier is 143.
Oct 12 19:20:30 psan05 systemd[1]: Reached target Multi-User System.
-- Subject: A start job for unit multi-user.target has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit multi-user.target has finished successfully.
--
-- The job identifier is 2.
Oct 12 19:20:30 psan05 systemd[1]: Starting PetaSAN Node Console...
-- Subject: A start job for unit petasan-console.service has begun execution
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit petasan-console.service has begun execution.
--
-- The job identifier is 168.
Oct 12 19:20:30 psan05 systemd[1]: Started PetaSAN Tuning Service.
-- Subject: A start job for unit petasan-tuning.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit petasan-tuning.service has finished successfully.
--
-- The job identifier is 4740.
Oct 12 19:20:39 psan05 systemd[1]: ceph-mon@psan05.service: Scheduled restart job, restart counter is at 1.
-- Subject: Automatic restarting of a unit has been scheduled
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- Automatic restarting of the unit ceph-mon@psan05.service has been scheduled, as the result for
-- the configured Restart= setting for the unit.
Oct 12 19:20:39 psan05 systemd[1]: Stopped Ceph cluster monitor daemon.
-- Subject: A stop job for unit ceph-mon@psan05.service has finished
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A stop job for unit ceph-mon@psan05.service has finished.
--
-- The job identifier is 5104 and the job result is done.
Oct 12 19:20:39 psan05 systemd[1]: Started Ceph cluster monitor daemon.
-- Subject: A start job for unit ceph-mon@psan05.service has finished successfully
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-mon@psan05.service has finished successfully.
--
-- The job identifier is 5104.
Oct 12 19:20:39 psan05 ceph-mon[5952]: /mnt/ceph-15.2.14/src/mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7f6d57f31580 time 2023-10-12T19:20:39.981757+0200
Oct 12 19:20:39 psan05 ceph-mon[5952]: /mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)
Oct 12 19:20:39 psan05 ceph-mon[5952]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:39 psan05 ceph-mon[5952]:  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x155) [0x7f6d58e47c01]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  2: (()+0x26ae09) [0x7f6d58e47e09]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  3: (FSMap::sanity() const+0xc7) [0x7f6d593c0727]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  4: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55d176111588]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  5: (PaxosService::refresh(bool*)+0x28f) [0x55d17602c41f]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  6: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55d175eca35c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  7: (Monitor::init_paxos()+0x7c) [0x55d175eca66c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  8: (Monitor::preinit()+0xf56) [0x55d175f0d1c6]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  9: (main()+0x2e0c) [0x55d175e9e8dc]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  10: (__libc_start_main()+0xf3) [0x7f6d583f4083]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  11: (_start()+0x2e) [0x55d175eb090e]
Oct 12 19:20:39 psan05 ceph-mon[5952]: *** Caught signal (Aborted) **
Oct 12 19:20:39 psan05 ceph-mon[5952]:  in thread 7f6d57f31580 thread_name:ceph-mon
Oct 12 19:20:39 psan05 ceph-mon[5952]: 2023-10-12T19:20:39.984+0200 7f6d57f31580 -1 /mnt/ceph-15.2.14/src/mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7f6d57f31580 time 2023-10-12T19:20:39.981757+0200
Oct 12 19:20:39 psan05 ceph-mon[5952]: /mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)
Oct 12 19:20:39 psan05 ceph-mon[5952]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:39 psan05 ceph-mon[5952]:  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x155) [0x7f6d58e47c01]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  2: (()+0x26ae09) [0x7f6d58e47e09]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  3: (FSMap::sanity() const+0xc7) [0x7f6d593c0727]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  4: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55d176111588]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  5: (PaxosService::refresh(bool*)+0x28f) [0x55d17602c41f]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  6: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55d175eca35c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  7: (Monitor::init_paxos()+0x7c) [0x55d175eca66c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  8: (Monitor::preinit()+0xf56) [0x55d175f0d1c6]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  9: (main()+0x2e0c) [0x55d175e9e8dc]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  10: (__libc_start_main()+0xf3) [0x7f6d583f4083]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  11: (_start()+0x2e) [0x55d175eb090e]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:39 psan05 ceph-mon[5952]:  1: (()+0x14420) [0x7f6d58928420]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  2: (gsignal()+0xcb) [0x7f6d5841300b]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  3: (abort()+0x12b) [0x7f6d583f2859]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x7f6d58e47c5c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  5: (()+0x26ae09) [0x7f6d58e47e09]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  6: (FSMap::sanity() const+0xc7) [0x7f6d593c0727]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  7: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55d176111588]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  8: (PaxosService::refresh(bool*)+0x28f) [0x55d17602c41f]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  9: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55d175eca35c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  10: (Monitor::init_paxos()+0x7c) [0x55d175eca66c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  11: (Monitor::preinit()+0xf56) [0x55d175f0d1c6]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  12: (main()+0x2e0c) [0x55d175e9e8dc]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  13: (__libc_start_main()+0xf3) [0x7f6d583f4083]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  14: (_start()+0x2e) [0x55d175eb090e]
Oct 12 19:20:39 psan05 ceph-mon[5952]: 2023-10-12T19:20:39.988+0200 7f6d57f31580 -1 *** Caught signal (Aborted) **
Oct 12 19:20:39 psan05 ceph-mon[5952]:  in thread 7f6d57f31580 thread_name:ceph-mon
Oct 12 19:20:39 psan05 ceph-mon[5952]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:39 psan05 ceph-mon[5952]:  1: (()+0x14420) [0x7f6d58928420]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  2: (gsignal()+0xcb) [0x7f6d5841300b]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  3: (abort()+0x12b) [0x7f6d583f2859]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x7f6d58e47c5c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  5: (()+0x26ae09) [0x7f6d58e47e09]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  6: (FSMap::sanity() const+0xc7) [0x7f6d593c0727]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  7: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55d176111588]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  8: (PaxosService::refresh(bool*)+0x28f) [0x55d17602c41f]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  9: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55d175eca35c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  10: (Monitor::init_paxos()+0x7c) [0x55d175eca66c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  11: (Monitor::preinit()+0xf56) [0x55d175f0d1c6]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  12: (main()+0x2e0c) [0x55d175e9e8dc]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  13: (__libc_start_main()+0xf3) [0x7f6d583f4083]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  14: (_start()+0x2e) [0x55d175eb090e]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Oct 12 19:20:39 psan05 ceph-mon[5952]:   -254> 2023-10-12T19:20:39.984+0200 7f6d57f31580 -1 /mnt/ceph-15.2.14/src/mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7f6d57f31580 time 2023-10-12T19:20:39.981757+0200
Oct 12 19:20:39 psan05 ceph-mon[5952]: /mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)
Oct 12 19:20:39 psan05 ceph-mon[5952]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:39 psan05 ceph-mon[5952]:  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x155) [0x7f6d58e47c01]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  2: (()+0x26ae09) [0x7f6d58e47e09]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  3: (FSMap::sanity() const+0xc7) [0x7f6d593c0727]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  4: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55d176111588]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  5: (PaxosService::refresh(bool*)+0x28f) [0x55d17602c41f]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  6: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55d175eca35c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  7: (Monitor::init_paxos()+0x7c) [0x55d175eca66c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  8: (Monitor::preinit()+0xf56) [0x55d175f0d1c6]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  9: (main()+0x2e0c) [0x55d175e9e8dc]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  10: (__libc_start_main()+0xf3) [0x7f6d583f4083]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  11: (_start()+0x2e) [0x55d175eb090e]
Oct 12 19:20:39 psan05 ceph-mon[5952]:   -253> 2023-10-12T19:20:39.988+0200 7f6d57f31580 -1 *** Caught signal (Aborted) **
Oct 12 19:20:39 psan05 ceph-mon[5952]:  in thread 7f6d57f31580 thread_name:ceph-mon
Oct 12 19:20:39 psan05 ceph-mon[5952]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:39 psan05 ceph-mon[5952]:  1: (()+0x14420) [0x7f6d58928420]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  2: (gsignal()+0xcb) [0x7f6d5841300b]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  3: (abort()+0x12b) [0x7f6d583f2859]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x7f6d58e47c5c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  5: (()+0x26ae09) [0x7f6d58e47e09]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  6: (FSMap::sanity() const+0xc7) [0x7f6d593c0727]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  7: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55d176111588]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  8: (PaxosService::refresh(bool*)+0x28f) [0x55d17602c41f]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  9: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55d175eca35c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  10: (Monitor::init_paxos()+0x7c) [0x55d175eca66c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  11: (Monitor::preinit()+0xf56) [0x55d175f0d1c6]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  12: (main()+0x2e0c) [0x55d175e9e8dc]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  13: (__libc_start_main()+0xf3) [0x7f6d583f4083]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  14: (_start()+0x2e) [0x55d175eb090e]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Oct 12 19:20:39 psan05 ceph-mon[5952]:   -254> 2023-10-12T19:20:39.984+0200 7f6d57f31580 -1 /mnt/ceph-15.2.14/src/mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7f6d57f31580 time 2023-10-12T19:20:39.981757+0200
Oct 12 19:20:39 psan05 ceph-mon[5952]: /mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)
Oct 12 19:20:39 psan05 ceph-mon[5952]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:39 psan05 ceph-mon[5952]:  1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x155) [0x7f6d58e47c01]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  2: (()+0x26ae09) [0x7f6d58e47e09]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  3: (FSMap::sanity() const+0xc7) [0x7f6d593c0727]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  4: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55d176111588]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  5: (PaxosService::refresh(bool*)+0x28f) [0x55d17602c41f]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  6: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55d175eca35c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  7: (Monitor::init_paxos()+0x7c) [0x55d175eca66c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  8: (Monitor::preinit()+0xf56) [0x55d175f0d1c6]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  9: (main()+0x2e0c) [0x55d175e9e8dc]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  10: (__libc_start_main()+0xf3) [0x7f6d583f4083]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  11: (_start()+0x2e) [0x55d175eb090e]
Oct 12 19:20:39 psan05 ceph-mon[5952]:   -253> 2023-10-12T19:20:39.988+0200 7f6d57f31580 -1 *** Caught signal (Aborted) **
Oct 12 19:20:39 psan05 ceph-mon[5952]:  in thread 7f6d57f31580 thread_name:ceph-mon
Oct 12 19:20:39 psan05 ceph-mon[5952]:  ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
Oct 12 19:20:39 psan05 ceph-mon[5952]:  1: (()+0x14420) [0x7f6d58928420]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  2: (gsignal()+0xcb) [0x7f6d5841300b]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  3: (abort()+0x12b) [0x7f6d583f2859]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x7f6d58e47c5c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  5: (()+0x26ae09) [0x7f6d58e47e09]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  6: (FSMap::sanity() const+0xc7) [0x7f6d593c0727]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  7: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55d176111588]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  8: (PaxosService::refresh(bool*)+0x28f) [0x55d17602c41f]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  9: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55d175eca35c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  10: (Monitor::init_paxos()+0x7c) [0x55d175eca66c]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  11: (Monitor::preinit()+0xf56) [0x55d175f0d1c6]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  12: (main()+0x2e0c) [0x55d175e9e8dc]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  13: (__libc_start_main()+0xf3) [0x7f6d583f4083]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  14: (_start()+0x2e) [0x55d175eb090e]
Oct 12 19:20:39 psan05 ceph-mon[5952]:  NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
Oct 12 19:20:39 psan05 systemd[1]: ceph-mon@psan05.service: Main process exited, code=killed, status=6/ABRT
-- Subject: Unit process exited
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- An ExecStart= process belonging to unit ceph-mon@psan05.service has exited.
--
-- The process' exit code is 'killed' and its exit status is 6.
Oct 12 19:20:39 psan05 systemd[1]: ceph-mon@psan05.service: Failed with result 'signal'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-mon@psan05.service has entered the 'failed' state with result 'signal'.
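The line that actually matters in all of this output is the compat assert; the rest of each dump is just the backtrace around it. It can be isolated from the journal with something like this (assuming the journal still covers the current boot):

root@psan05:~# journalctl -b -u ceph-mon@psan05 --no-pager | grep -m1 'FAILED ceph_assert'
Oct 12 19:20:39 psan05 ceph-mon[5952]: /mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)

So the abort happens in FSMap::sanity() while MDSMonitor::update_from_paxos() is replaying the on-disk maps during Monitor::preinit(), i.e. before the monitor ever joins; presumably the stored FSMap was written with a compat set that this 15.2.14 (Octopus) binary does not accept.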
Oct 12 19:20:40 psan05 systemd[1]: Started PetaSAN Node Console.
Oct 12 19:20:40 psan05 systemd[1]: Reached target Graphical Interface.
Oct 12 19:20:40 psan05 systemd[1]: Finished Update UTMP about System Runlevel Changes.
Oct 12 19:20:40 psan05 systemd[1]: Startup finished in 6.539s (kernel) + 1min 17.038s (userspace) = 1min 23.578s.
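From here the unit enters a crash loop: systemd restarts the monitor every 10 seconds and each attempt dies on the same assert, until the start rate limit kicks in. The loop can be watched live with:

root@psan05:~# journalctl -fu ceph-mon@psan05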
Oct 12 19:20:50 psan05 systemd[1]: ceph-mon@psan05.service: Scheduled restart job, restart counter is at 2.
Oct 12 19:20:50 psan05 systemd[1]: Stopped Ceph cluster monitor daemon.
Oct 12 19:20:50 psan05 systemd[1]: Started Ceph cluster monitor daemon.
Oct 12 19:20:50 psan05 ceph-mon[6811]: /mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)
[... identical assert and backtrace as above ...]
Oct 12 19:20:50 psan05 systemd[1]: ceph-mon@psan05.service: Main process exited, code=killed, status=6/ABRT
Oct 12 19:20:50 psan05 systemd[1]: ceph-mon@psan05.service: Failed with result 'signal'.
[... restart counter 3: ceph-mon[6843] aborts on the same FSMap.cc:847 assert ...]
[... restart counter 4: ceph-mon[6878] aborts on the same FSMap.cc:847 assert ...]
Oct 12 19:21:20 psan05 tuning.sh[6913]: performance
Oct 12 19:21:20 psan05 systemd[1]: ceph-mon@psan05.service: Scheduled restart job, restart counter is at 5.
Oct 12 19:21:20 psan05 systemd[1]: Stopped Ceph cluster monitor daemon.
Oct 12 19:21:20 psan05 systemd[1]: ceph-mon@psan05.service: Start request repeated too quickly.
Oct 12 19:21:20 psan05 systemd[1]: ceph-mon@psan05.service: Failed with result 'signal'.
Oct 12 19:21:20 psan05 systemd[1]: Failed to start Ceph cluster monitor daemon.
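At this point systemd has given up, so any further manual start first needs the failed state cleared. One way to reproduce the abort outside of systemd and capture a full debug log is to run the monitor in the foreground; with the daemon down, the monitor store itself can also be inspected. A minimal sketch, assuming the default cluster name ceph, mon id psan05 and the stock store path /var/lib/ceph/mon/ceph-psan05:

root@psan05:~# systemctl reset-failed ceph-mon@psan05
root@psan05:~# /usr/bin/ceph-mon -d --cluster ceph --id psan05 --setuser ceph --setgroup ceph --debug_mon 20
root@psan05:~# ceph-monstore-tool /var/lib/ceph/mon/ceph-psan05 get monmap -- --out /tmp/monmap
root@psan05:~# monmaptool --print /tmp/monmap

The foreground run should abort with the same FSMap assert, but the debug output should show which paxos version it is replaying when it hits it; the extracted monmap at least confirms which monitors the store knows about.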
Oct 12 19:23:01 psan05 tuning.sh[10826]: Idlestate 0 disabled on CPU 0
Oct 12 19:23:01 psan05 tuning.sh[10826]: Idlestate 1 disabled on CPU 0
[... the same pair of lines repeats for CPUs 1 through 71 ...]
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 0
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 1
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 2
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 3
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 4
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 5
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 6
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 7
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 8
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 9
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 10
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 11
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 12
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 13
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 14
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 15
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 16
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 17
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 18
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 19
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 20
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 21
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 22
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 23
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 24
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 25
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 26
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 27
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 28
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 29
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 30
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 31
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 32
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 33
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 34
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 35
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 36
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 37
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 38
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 39
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 40
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 41
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 42
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 43
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 44
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 45
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 46
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 47
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 48
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 49
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 50
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 51
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 52
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 53
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 54
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 55
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 56
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 57
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 58
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 59
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 60
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 61
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 62
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 63
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 64
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 65
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 66
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 67
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 68
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 69
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 70
Oct 12 19:23:01 psan05 tuning.sh[10852]: Setting cpu: 71
Oct 12 19:25:01 psan05 CRON[12497]: pam_unix(cron:session): session opened for user root by (uid=0)
Oct 12 19:25:01 psan05 CRON[12499]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Oct 12 19:25:01 psan05 CRON[12497]: pam_unix(cron:session): session closed for user root
Oct 12 19:25:29 psan05 petasan_config_upload.py[5587]: 2023-10-12T19:25:29.705+0200 7fd8b0aaf700  0 monclient(hunting): authenticate timed out after 300
Oct 12 19:26:39 psan05 sshd[13392]: Accepted password for root from 10.212.134.200 port 59005 ssh2
Oct 12 19:26:39 psan05 sshd[13392]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 12 19:26:39 psan05 sshd[13402]: Accepted password for root from 10.212.134.200 port 59006 ssh2
Oct 12 19:26:39 psan05 sshd[13402]: pam_unix(sshd:session): session opened for user root by (uid=0)
Oct 12 19:30:11 psan05 systemd[1]: ceph-mon@psan05.service: Start request repeated too quickly.
Oct 12 19:30:11 psan05 systemd[1]: ceph-mon@psan05.service: Failed with result 'signal'.
-- Subject: Unit failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- The unit ceph-mon@psan05.service has entered the 'failed' state with result 'signal'.
Oct 12 19:30:11 psan05 systemd[1]: Failed to start Ceph cluster monitor daemon.
-- Subject: A start job for unit ceph-mon@psan05.service has failed
-- Defined-By: systemd
-- Support: http://www.ubuntu.com/support
--
-- A start job for unit ceph-mon@psan05.service has finished with a failure.
--
-- The job identifier is 5769 and the job result is failed.
Oct 12 19:30:29 psan05 petasan_config_upload.py[5587]: 2023-10-12T19:30:29.706+0200 7fd8b0aaf700  0 monclient(hunting): authenticate timed out after 300
root@psan05:~#

1) The most important thing is to get the monitors to run.
On the 3 management nodes, if the monitors do not run, you can find the error log in
/var/log/ceph/ceph-mon.HOSTNAME
It could also help to try running the monitor on the command line:
/usr/bin/ceph-mon -d --cluster ceph --id $(hostname) --setuser ceph --setgroup ceph
Do you see any errors or warnings?
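For example, something like this (a rough sketch; on most installs the log file name carries a .log suffix):

tail -n 100 /var/log/ceph/ceph-mon.$(hostname).log            # last messages written by the mon
journalctl -u ceph-mon@$(hostname) --no-pager | tail -n 50    # systemd's view of the failed start attempts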

2) Before running initially, did the update script request you to run
ceph osd require-osd-release octopus
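For reference, once the monitors are back and ceph commands respond again, the current setting can be checked with something like:

ceph osd dump | grep require_osd_release    # shows the release the cluster currently requires from its OSDs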

3) It is not clear why you have 1 mon at version 3.2.1 (Quincy) while the other 2 are still at 3.1.0. Did you not upgrade the first 4 nodes, which should hold the 3 mons?
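When ceph itself is unresponsive, the locally installed mon version can still be checked per node with something like:

/usr/bin/ceph-mon --version    # prints the ceph version of the locally installed mon binary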

Well, the host naming is: psan01 -> psan12, where psan01, psan05 and psan09 are the monitors. Currently 01 and 05 are still at version 3.1.0, while 09 has version 3.2.1.

3) I started migrating from psan12 in reverse order, so: 12, 11, 10 and 09. I was continuously monitoring the GUI (pointing at psan01), waiting for the cluster to be in OK status before upgrading the next host. After psan09 the GUI stopped working, and from that point on I could not go any further.

1) The log files of psan01 and 05 are small and untouched since March 2022, while on psan09 the log is 122 MB in size and still being updated. Here is the output when I try to start the monitor:

root@psan09:~# /usr/bin/ceph-mon -d --cluster ceph --id psan09 --setuser ceph --setgroup ceph
2023-10-13T14:31:14.346+0200 7f720cd1e700  0 set uid:gid to 64045:64045 (ceph:ceph)
2023-10-13T14:31:14.346+0200 7f720cd1e700  0 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable), process ceph-mon, pid 1157507
2023-10-13T14:31:14.346+0200 7f720cd1e700  0 pidfile_write: ignore empty --pid-file
2023-10-13T14:31:14.346+0200 7f720cd1e700 -1 asok(0x55d893fd4000) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/ceph-mon.psan09.asok': (17) File exists
2023-10-13T14:31:14.350+0200 7f720cd1e700  0 load: jerasure load: lrc
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: RocksDB version: 6.15.5

2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: Git sha rocksdb_build_git_sha:@0@
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: Compile date Jan  2 2023
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: DB SUMMARY

2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: DB Session ID:  ZCZ007VEWUVFK8PZ8RDD

2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: CURRENT file:  CURRENT

2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: IDENTITY file:  IDENTITY

2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: MANIFEST file:  MANIFEST-2015232 size: 718 Bytes

2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: SST files in /var/lib/ceph/mon/ceph-psan09/store.db dir, Total Num: 3, files: 2015236.sst 2015237.sst 2015238.sst

2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-psan09/store.db: 2015233.log size: 0 ;

2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                         Options.error_if_exists: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                       Options.create_if_missing: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                         Options.paranoid_checks: 1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                                     Options.env: 0x55d8928d3280
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                                      Options.fs: Posix File System
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                                Options.info_log: 0x55d893d221a0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                Options.max_file_opening_threads: 16
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                              Options.statistics: (nil)
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                               Options.use_fsync: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                       Options.max_log_file_size: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                  Options.max_manifest_file_size: 1073741824
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                   Options.log_file_time_to_roll: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                       Options.keep_log_file_num: 1000
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                    Options.recycle_log_file_num: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                         Options.allow_fallocate: 1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                        Options.allow_mmap_reads: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                       Options.allow_mmap_writes: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                        Options.use_direct_reads: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:          Options.create_missing_column_families: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                              Options.db_log_dir:
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                                 Options.wal_dir: /var/lib/ceph/mon/ceph-psan09/store.db
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                Options.table_cache_numshardbits: 6
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                         Options.WAL_ttl_seconds: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                       Options.WAL_size_limit_MB: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.manifest_preallocation_size: 4194304
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                     Options.is_fd_close_on_exec: 1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                   Options.advise_random_on_open: 1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                    Options.db_write_buffer_size: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                    Options.write_buffer_manager: 0x55d893dec3f0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:         Options.access_hint_on_compaction_start: 1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:  Options.new_table_reader_for_compaction_inputs: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:           Options.random_access_max_buffer_size: 1048576
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                      Options.use_adaptive_mutex: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                            Options.rate_limiter: (nil)
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                       Options.wal_recovery_mode: 2
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                  Options.enable_thread_tracking: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                  Options.enable_pipelined_write: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                  Options.unordered_write: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:         Options.allow_concurrent_memtable_write: 1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:      Options.enable_write_thread_adaptive_yield: 1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.write_thread_max_yield_usec: 100
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:            Options.write_thread_slow_yield_usec: 3
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                               Options.row_cache: None
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                              Options.wal_filter: None
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.avoid_flush_during_recovery: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.allow_ingest_behind: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.preserve_deletes: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.two_write_queues: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.manual_wal_flush: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.atomic_flush: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.avoid_unnecessary_blocking_io: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                 Options.persist_stats_to_disk: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                 Options.write_dbid_to_manifest: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                 Options.log_readahead_size: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                 Options.file_checksum_gen_factory: Unknown
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                 Options.best_efforts_recovery: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                Options.max_bgerror_resume_count: 2147483647
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:            Options.bgerror_resume_retry_interval: 1000000
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.allow_data_in_errors: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.db_host_id: __hostname__
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.max_background_jobs: 2
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.max_background_compactions: -1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.max_subcompactions: 1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.avoid_flush_during_shutdown: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:           Options.writable_file_max_buffer_size: 1048576
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.delayed_write_rate : 16777216
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.max_total_wal_size: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                   Options.stats_dump_period_sec: 600
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                 Options.stats_persist_period_sec: 600
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                 Options.stats_history_buffer_size: 1048576
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                          Options.max_open_files: -1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                          Options.bytes_per_sync: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                      Options.wal_bytes_per_sync: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                   Options.strict_bytes_per_sync: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:       Options.compaction_readahead_size: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:                  Options.max_background_flushes: -1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: Compression algorithms supported:
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:   kZSTDNotFinalCompression supported: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:   kZSTD supported: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:   kXpressCompression supported: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:   kLZ4HCCompression supported: 1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:   kLZ4Compression supported: 1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:   kBZip2Compression supported: 0
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:   kZlibCompression supported: 1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb:   kSnappyCompression supported: 1
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: Fast CRC32 supported: Supported on x86
2023-10-13T14:31:14.354+0200 7f720cd1e700  3 rocksdb: [db/db_impl/db_impl_open.cc:1785] Persisting Option File error: OK
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: [db/db_impl/db_impl.cc:446] Shutdown: canceling all background work
2023-10-13T14:31:14.354+0200 7f720cd1e700  4 rocksdb: [db/db_impl/db_impl.cc:625] Shutdown complete
2023-10-13T14:31:14.354+0200 7f720cd1e700 -1 rocksdb: IO error: While lock file: /var/lib/ceph/mon/ceph-psan09/store.db/LOCK: Resource temporarily unavailable
2023-10-13T14:31:14.354+0200 7f720cd1e700 -1 error opening mon data directory at '/var/lib/ceph/mon/ceph-psan09': (22) Invalid argument

And here is the link to the log file (reduced in size by cutting all repeated lines).

2) I followed the guide, issuing these commands in order:

  • apt update
  • apt install ca-certificates
  • /opt/petasan/scripts/online-updates/update.sh

Thanks and bye,  Ste.

Not sure if reversing the node order had an impact.

Running the monitor on node 9 via console, the error seems to be that another monitor process is already running. Before running the process, can you make sure no other monitor process is running: stop the service with systemctl stop ceph-mon@$(hostname), check with ps aux | grep ceph-mon, and use kill to make sure no mons are left running.
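For example (a sketch; substitute the PID that ps actually reports):

systemctl stop ceph-mon@$(hostname)    # let systemd stop the unit first
ps aux | grep [c]eph-mon               # the [c] keeps grep itself out of the output
kill <PID>                             # kill any leftover ceph-mon by the PID ps reported
ps aux | grep [c]eph-mon               # confirm nothing is left before retrying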

Also, if I understand correctly, the monitors on the other 2 monitor nodes are down; if so, can you start them on the command line and report any errors.

Ok, on node 09 there was indeed another process; I killed it, and this is the output of the start command:

root@psan09:/var/lib/ceph/mon/ceph-psan09# /usr/bin/ceph-mon -d --cluster ceph --id psan09 --setuser ceph --setgroup ceph
2023-10-13T16:38:42.866+0200 7f7ebe351700  0 set uid:gid to 64045:64045 (ceph:ceph)
2023-10-13T16:38:42.866+0200 7f7ebe351700  0 ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable), process ceph-mon, pid 1308988
2023-10-13T16:38:42.866+0200 7f7ebe351700  0 pidfile_write: ignore empty --pid-file
2023-10-13T16:38:42.870+0200 7f7ebe351700  0 load: jerasure load: lrc
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: RocksDB version: 6.15.5

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Git sha rocksdb_build_git_sha:@0@
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Compile date Jan  2 2023
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: DB SUMMARY

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: DB Session ID:  X2F54OARGFAGZL6II9A7

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: CURRENT file:  CURRENT

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: IDENTITY file:  IDENTITY

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: MANIFEST file:  MANIFEST-2015232 size: 718 Bytes

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: SST files in /var/lib/ceph/mon/ceph-psan09/store.db dir, Total Num: 3, files: 2015236.sst 2015237.sst 2015238.sst

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-psan09/store.db: 2015233.log size: 0 ;

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                         Options.error_if_exists: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                       Options.create_if_missing: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                         Options.paranoid_checks: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                                     Options.env: 0x55ad50d98280
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                                      Options.fs: Posix File System
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                                Options.info_log: 0x55ad52310060
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                Options.max_file_opening_threads: 16
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                              Options.statistics: (nil)
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                               Options.use_fsync: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                       Options.max_log_file_size: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                  Options.max_manifest_file_size: 1073741824
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                   Options.log_file_time_to_roll: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                       Options.keep_log_file_num: 1000
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                    Options.recycle_log_file_num: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                         Options.allow_fallocate: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                        Options.allow_mmap_reads: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                       Options.allow_mmap_writes: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                        Options.use_direct_reads: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:          Options.create_missing_column_families: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                              Options.db_log_dir:
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                                 Options.wal_dir: /var/lib/ceph/mon/ceph-psan09/store.db
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                Options.table_cache_numshardbits: 6
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                         Options.WAL_ttl_seconds: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                       Options.WAL_size_limit_MB: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.manifest_preallocation_size: 4194304
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                     Options.is_fd_close_on_exec: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                   Options.advise_random_on_open: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                    Options.db_write_buffer_size: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                    Options.write_buffer_manager: 0x55ad523da450
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:         Options.access_hint_on_compaction_start: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:  Options.new_table_reader_for_compaction_inputs: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:           Options.random_access_max_buffer_size: 1048576
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                      Options.use_adaptive_mutex: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                            Options.rate_limiter: (nil)
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                       Options.wal_recovery_mode: 2
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                  Options.enable_thread_tracking: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                  Options.enable_pipelined_write: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                  Options.unordered_write: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:         Options.allow_concurrent_memtable_write: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:      Options.enable_write_thread_adaptive_yield: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.write_thread_max_yield_usec: 100
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:            Options.write_thread_slow_yield_usec: 3
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                               Options.row_cache: None
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                              Options.wal_filter: None
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.avoid_flush_during_recovery: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.allow_ingest_behind: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.preserve_deletes: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.two_write_queues: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.manual_wal_flush: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.atomic_flush: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.avoid_unnecessary_blocking_io: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                 Options.persist_stats_to_disk: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                 Options.write_dbid_to_manifest: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                 Options.log_readahead_size: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                 Options.file_checksum_gen_factory: Unknown
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                 Options.best_efforts_recovery: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                Options.max_bgerror_resume_count: 2147483647
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:            Options.bgerror_resume_retry_interval: 1000000
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.allow_data_in_errors: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.db_host_id: __hostname__
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.max_background_jobs: 2
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.max_background_compactions: -1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.max_subcompactions: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.avoid_flush_during_shutdown: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:           Options.writable_file_max_buffer_size: 1048576
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.delayed_write_rate : 16777216
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.max_total_wal_size: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                   Options.stats_dump_period_sec: 600
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                 Options.stats_persist_period_sec: 600
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                 Options.stats_history_buffer_size: 1048576
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                          Options.max_open_files: -1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                          Options.bytes_per_sync: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                      Options.wal_bytes_per_sync: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                   Options.strict_bytes_per_sync: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:       Options.compaction_readahead_size: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                  Options.max_background_flushes: -1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Compression algorithms supported:
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   kZSTDNotFinalCompression supported: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   kZSTD supported: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   kXpressCompression supported: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   kLZ4HCCompression supported: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   kLZ4Compression supported: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   kBZip2Compression supported: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   kZlibCompression supported: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   kSnappyCompression supported: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Fast CRC32 supported: Supported on x86
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: [db/version_set.cc:4724] Recovering from manifest file: /var/lib/ceph/mon/ceph-psan09/store.db/MANIFEST-2015232

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: [db/column_family.cc:595] --------------- Options for column family [default]:

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:               Options.comparator: leveldb.BytewiseComparator
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:           Options.merge_operator:
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:        Options.compaction_filter: None
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:        Options.compaction_filter_factory: None
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:  Options.sst_partitioner_factory: None
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:         Options.memtable_factory: SkipListFactory
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:            Options.table_factory: BlockBasedTable
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55ad522d59b8)
cache_index_and_filter_blocks: 1
cache_index_and_filter_blocks_with_high_priority: 0
pin_l0_filter_and_index_blocks_in_cache: 0
pin_top_level_index_and_filter: 1
index_type: 0
data_block_index_type: 0
index_shortening: 1
data_block_hash_table_util_ratio: 0.750000
hash_index_allow_collision: 1
checksum: 1
no_block_cache: 0
block_cache: 0x55ad523429b0
block_cache_name: BinnedLRUCache
block_cache_options:
capacity : 536870912
num_shard_bits : 4
strict_capacity_limit : 0
high_pri_pool_ratio: 0.000
block_cache_compressed: (nil)
persistent_cache: (nil)
block_size: 4096
block_size_deviation: 10
block_restart_interval: 16
index_block_restart_interval: 1
metadata_block_size: 4096
partition_filters: 0
use_delta_encoding: 1
filter_policy: rocksdb.BuiltinBloomFilter
whole_key_filtering: 1
verify_compression: 0
read_amp_bytes_per_bit: 0
format_version: 4
enable_index_compression: 1
block_align: 0

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:        Options.write_buffer_size: 33554432
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:  Options.max_write_buffer_number: 2
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:          Options.compression: NoCompression
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                  Options.bottommost_compression: Disabled
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:       Options.prefix_extractor: nullptr
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.num_levels: 7
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:        Options.min_write_buffer_number_to_merge: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:     Options.max_write_buffer_number_to_maintain: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:     Options.max_write_buffer_size_to_maintain: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:            Options.bottommost_compression_opts.window_bits: -14
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                  Options.bottommost_compression_opts.level: 32767
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:               Options.bottommost_compression_opts.strategy: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                  Options.bottommost_compression_opts.enabled: false
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:            Options.compression_opts.window_bits: -14
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                  Options.compression_opts.level: 32767
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:               Options.compression_opts.strategy: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:         Options.compression_opts.max_dict_bytes: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:         Options.compression_opts.parallel_threads: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                  Options.compression_opts.enabled: false
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:      Options.level0_file_num_compaction_trigger: 4
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:          Options.level0_slowdown_writes_trigger: 20
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:              Options.level0_stop_writes_trigger: 36
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                   Options.target_file_size_base: 67108864
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:             Options.target_file_size_multiplier: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                Options.max_bytes_for_level_base: 268435456
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:       Options.max_sequential_skip_in_iterations: 8
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                    Options.max_compaction_bytes: 1677721600
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                        Options.arena_block_size: 4194304
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:       Options.rate_limit_delay_max_milliseconds: 100
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                Options.disable_auto_compactions: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                        Options.compaction_style: kCompactionStyleLevel
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                   Options.table_properties_collectors:
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                   Options.inplace_update_support: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                 Options.inplace_update_num_locks: 10000
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:               Options.memtable_whole_key_filtering: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   Options.memtable_huge_page_size: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                           Options.bloom_locality: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                    Options.max_successive_merges: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                Options.optimize_filters_for_hits: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                Options.paranoid_file_checks: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                Options.force_consistency_checks: 1
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                Options.report_bg_io_stats: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                               Options.ttl: 2592000
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:          Options.periodic_compaction_seconds: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                    Options.enable_blob_files: false
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                        Options.min_blob_size: 0
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                       Options.blob_file_size: 268435456
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:                Options.blob_compression_type: NoCompression
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:       Options.enable_blob_garbage_collection: false
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb:   Options.blob_garbage_collection_age_cutoff: 0.250000
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: [db/version_set.cc:4764] Recovered from manifest file:/var/lib/ceph/mon/ceph-psan09/store.db/MANIFEST-2015232 succeeded,manifest_file_number is 2015232, next_file_number is 2015240, last_sequence is 628967530, log_number is 2015225,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: [db/version_set.cc:4779] Column family [default] (ID 0), log number is 2015225

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: [db/version_set.cc:4082] Creating manifest 2015240

2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1697207922879040, "job": 1, "event": "recovery_started", "wal_files": [2015233]}
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: [db/db_impl/db_impl_open.cc:845] Recovering log #2015233 mode 2
2023-10-13T16:38:42.874+0200 7f7ebe351700  4 rocksdb: [db/version_set.cc:4082] Creating manifest 2015241

2023-10-13T16:38:42.878+0200 7f7ebe351700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1697207922880357, "job": 1, "event": "recovery_finished"}
2023-10-13T16:38:42.878+0200 7f7ebe351700  4 rocksdb: [file/delete_scheduler.cc:69] Deleted file /var/lib/ceph/mon/ceph-psan09/store.db/2015233.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
2023-10-13T16:38:42.878+0200 7f7ebe351700  4 rocksdb: [db/db_impl/db_impl_open.cc:1700] SstFileManager instance 0x55ad52328a80
2023-10-13T16:38:42.878+0200 7f7ebe351700  4 rocksdb: DB pointer 0x55ad5239c000
2023-10-13T16:38:42.878+0200 7f7eb40c8700  4 rocksdb: [db/db_impl/db_impl.cc:901] ------- DUMPING STATS -------
2023-10-13T16:38:42.878+0200 7f7eb40c8700  4 rocksdb: [db/db_impl/db_impl.cc:903]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L6      3/0   154.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
Sum      3/0   154.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L6      3/0   154.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
Sum      3/0   154.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

2023-10-13T16:38:42.878+0200 7f7ebe351700  0 starting mon.psan09 rank 0 at public addrs [v2:10.221.4.39:3300/0,v1:10.221.4.39:6789/0] at bind addrs [v2:10.221.4.39:3300/0,v1:10.221.4.39:6789/0] mon_data /var/lib/ceph/mon/ceph-psan09 fsid 77a85c30-0112-45c2-8aea-44ff69828c96
2023-10-13T16:38:42.882+0200 7f7ebe351700  1 mon.psan09@-1(???) e4 preinit fsid 77a85c30-0112-45c2-8aea-44ff69828c96
2023-10-13T16:38:42.882+0200 7f7ebe351700  0 mon.psan09@-1(???).mds e4619 new map
2023-10-13T16:38:42.882+0200 7f7ebe351700  0 mon.psan09@-1(???).mds e4619 print_map
e4619
enable_multiple, ever_enabled_multiple: 0,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'nfs_fs' (1)
fs_name nfs_fs
epoch   4617
flags   12 joinable allow_snaps allow_multimds_snaps
created 2022-06-21T15:51:11.373792+0200
modified        2023-10-11T05:48:25.682635+0200
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
required_client_features        {}
last_failure    0
last_failure_osd_epoch  118993
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in      0
up      {0=60602768}
failed
damaged
stopped
data_pools      [22]
metadata_pool   22
inline_data     disabled
balancer
standby_count_wanted    1
[mds.psan01{0:60602768} state up:active seq 5087151 addr [v2:10.221.4.31:6800/3782739203,v1:10.221.4.31:6801/3782739203] compat {c=[1],r=[1],i=[7ff]}]

Standby daemons:

[mds.psan05{-1:132228283} state up:standby seq 1 addr [v2:10.221.4.35:6800/3576554988,v1:10.221.4.35:6801/3576554988] compat {c=[1],r=[1],i=[7ff]}]
[mds.psan09{-1:132454209} state up:standby seq 1 addr [v2:10.221.4.39:6800/3670081898,v1:10.221.4.39:6801/3670081898] compat {c=[1],r=[1],i=[7ff]}]

2023-10-13T16:38:42.882+0200 7f7ebe351700  0 mon.psan09@-1(???).osd e126317 crush map has features 3314933000852226048, adjusting msgr requires
2023-10-13T16:38:42.882+0200 7f7ebe351700  0 mon.psan09@-1(???).osd e126317 crush map has features 288514051259236352, adjusting msgr requires
2023-10-13T16:38:42.882+0200 7f7ebe351700  0 mon.psan09@-1(???).osd e126317 crush map has features 288514051259236352, adjusting msgr requires
2023-10-13T16:38:42.882+0200 7f7ebe351700  0 mon.psan09@-1(???).osd e126317 crush map has features 288514051259236352, adjusting msgr requires
2023-10-13T16:38:42.882+0200 7f7ebe351700  1 mon.psan09@-1(???).paxosservice(auth 26251..26439) refresh upgraded, format 0 -> 3
2023-10-13T16:38:42.890+0200 7f7ebe351700 -1 compacting monitor store ...
2023-10-13T16:38:42.890+0200 7f7ebe351700 -1 done compacting
2023-10-13T16:38:42.898+0200 7f7ebe351700  0 mon.psan09@-1(probing) e4  my rank is now 0 (was -1)
2023-10-13T16:38:43.994+0200 7f7ebb0d6700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:43.994+0200 7f7ebb0d6700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:43.998+0200 7f7ebb0d6700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:47.198+0200 7f7ebb0d6700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:47.198+0200 7f7eb50ca700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:47.198+0200 7f7ebb0d6700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:49.422+0200 7f7ebb0d6700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:49.422+0200 7f7eb58cb700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:49.422+0200 7f7eb50ca700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:50.870+0200 7f7eb58cb700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:52.270+0200 7f7eb58cb700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:53.602+0200 7f7eb58cb700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:53.602+0200 7f7ebb0d6700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:53.606+0200 7f7eb58cb700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:55.294+0200 7f7eb58cb700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:55.302+0200 7f7eb58cb700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:55.306+0200 7f7eb58cb700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:55.306+0200 7f7ebb0d6700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:55.306+0200 7f7eb50ca700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:55.310+0200 7f7eb58cb700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:38:55.310+0200 7f7ebb0d6700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id

2023-10-13T16:43:12.793+0200 7fc1f4196700 -1 mon.psan09@0(probing) e4 get_health_metrics reporting 26879 slow ops, oldest is osd_failure(failed immediate osd.51 [v2:10.221.4.35:6840/6365,v1:10.221.4.35:6859/6365] for 21sec e126317 v126317)
2023-10-13T16:43:17.801+0200 7fc1f4196700 -1 mon.psan09@0(probing) e4 get_health_metrics reporting 35620 slow ops, oldest is osd_failure(failed immediate osd.51 [v2:10.221.4.35:6840/6365,v1:10.221.4.35:6859/6365] for 21sec e126317 v126317)
2023-10-13T16:43:19.397+0200 7fc1f018e700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:43:19.397+0200 7fc1ef98d700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id
2023-10-13T16:43:19.397+0200 7fc1f5999700  1 mon.psan09@0(probing) e4 handle_auth_request failed to assign global_id

... endless list ...
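From what I can tell, the repeated "handle_auth_request failed to assign global_id" means the monitor is stuck probing with no quorum, so it cannot hand out auth global_ids to clients; that would also explain the 300s authenticate timeouts of the ceph command. A mon in this state can still be queried locally through its admin socket, without cluster authentication (the mon IDs here are just my node names, adjust as needed):

# query the mon over its local admin socket (works without quorum)
ceph daemon mon.psan09 mon_status

# equivalent, giving the socket path explicitly
ceph --admin-daemon /var/run/ceph/ceph-mon.psan09.asok mon_status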

 

On nodes 1 and 5 there was no mon process running, and this is the output (it appears to be the same on both):

root@psan01:~#
root@psan01:~# /usr/bin/ceph-mon -d --cluster ceph --id psan01 --setuser ceph --setgroup ceph
2023-10-13T16:44:13.731+0200 7f154d31c580  0 set uid:gid to 64045:64045 (ceph:ceph)
2023-10-13T16:44:13.731+0200 7f154d31c580  0 ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable), process ceph-mon, pid 1131362
2023-10-13T16:44:13.731+0200 7f154d31c580  0 pidfile_write: ignore empty --pid-file
2023-10-13T16:44:13.735+0200 7f154d31c580  0 load: jerasure load: lrc load: isa
2023-10-13T16:44:13.735+0200 7f154d31c580  0  set rocksdb option compression = kNoCompression
2023-10-13T16:44:13.735+0200 7f154d31c580  0  set rocksdb option level_compaction_dynamic_level_bytes = true
2023-10-13T16:44:13.735+0200 7f154d31c580  0  set rocksdb option write_buffer_size = 33554432
2023-10-13T16:44:13.735+0200 7f154d31c580  0  set rocksdb option compression = kNoCompression
2023-10-13T16:44:13.735+0200 7f154d31c580  0  set rocksdb option level_compaction_dynamic_level_bytes = true
2023-10-13T16:44:13.735+0200 7f154d31c580  0  set rocksdb option write_buffer_size = 33554432
2023-10-13T16:44:13.735+0200 7f154d31c580  1 rocksdb: do_open column families: [default]
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: RocksDB version: 6.1.2

2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Git sha rocksdb_build_git_sha:@0@
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Compile date Jan  1 2021
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: DB SUMMARY

2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: CURRENT file:  CURRENT

2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: IDENTITY file:  IDENTITY

2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: MANIFEST file:  MANIFEST-2015667 size: 316 Bytes

2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: SST files in /var/lib/ceph/mon/ceph-psan01/store.db dir, Total Num: 4, files: 2015617.sst 2015618.sst 2015619.sst 2015621.sst

2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-psan01/store.db: 2015668.log size: 0 ;

2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                         Options.error_if_exists: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                       Options.create_if_missing: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                         Options.paranoid_checks: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                                     Options.env: 0x55a324987000
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                                Options.info_log: 0x55a325db2d40
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                Options.max_file_opening_threads: 16
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                              Options.statistics: (nil)
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                               Options.use_fsync: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                       Options.max_log_file_size: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                  Options.max_manifest_file_size: 1073741824
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                   Options.log_file_time_to_roll: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                       Options.keep_log_file_num: 1000
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                    Options.recycle_log_file_num: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                         Options.allow_fallocate: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                        Options.allow_mmap_reads: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                       Options.allow_mmap_writes: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                        Options.use_direct_reads: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:          Options.create_missing_column_families: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                              Options.db_log_dir:
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                                 Options.wal_dir: /var/lib/ceph/mon/ceph-psan01/store.db
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                Options.table_cache_numshardbits: 6
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                      Options.max_subcompactions: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                  Options.max_background_flushes: -1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                         Options.WAL_ttl_seconds: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                       Options.WAL_size_limit_MB: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.manifest_preallocation_size: 4194304
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                     Options.is_fd_close_on_exec: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                   Options.advise_random_on_open: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                    Options.db_write_buffer_size: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                    Options.write_buffer_manager: 0x55a325ecd2c0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:         Options.access_hint_on_compaction_start: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:  Options.new_table_reader_for_compaction_inputs: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:           Options.random_access_max_buffer_size: 1048576
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                      Options.use_adaptive_mutex: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                            Options.rate_limiter: (nil)
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                       Options.wal_recovery_mode: 2
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                  Options.enable_thread_tracking: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                  Options.enable_pipelined_write: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:         Options.allow_concurrent_memtable_write: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:      Options.enable_write_thread_adaptive_yield: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.write_thread_max_yield_usec: 100
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:            Options.write_thread_slow_yield_usec: 3
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                               Options.row_cache: None
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                              Options.wal_filter: None
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.avoid_flush_during_recovery: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.allow_ingest_behind: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.preserve_deletes: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.two_write_queues: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.manual_wal_flush: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.atomic_flush: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.avoid_unnecessary_blocking_io: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.max_background_jobs: 2
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.max_background_compactions: -1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.avoid_flush_during_shutdown: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:           Options.writable_file_max_buffer_size: 1048576
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.delayed_write_rate : 16777216
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.max_total_wal_size: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                   Options.stats_dump_period_sec: 600
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                 Options.stats_persist_period_sec: 600
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                 Options.stats_history_buffer_size: 1048576
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                          Options.max_open_files: -1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                          Options.bytes_per_sync: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                      Options.wal_bytes_per_sync: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:       Options.compaction_readahead_size: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Compression algorithms supported:
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:   kZSTDNotFinalCompression supported: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:   kZSTD supported: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:   kXpressCompression supported: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:   kLZ4HCCompression supported: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:   kLZ4Compression supported: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:   kBZip2Compression supported: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:   kZlibCompression supported: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:   kSnappyCompression supported: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Fast CRC32 supported: Supported on x86
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: [db/version_set.cc:3542] Recovering from manifest file: /var/lib/ceph/mon/ceph-psan01/store.db/MANIFEST-2015667

2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: [db/column_family.cc:475] --------------- Options for column family [default]:

2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:               Options.comparator: leveldb.BytewiseComparator
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:           Options.merge_operator:
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:        Options.compaction_filter: None
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:        Options.compaction_filter_factory: None
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:         Options.memtable_factory: SkipListFactory
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:            Options.table_factory: BlockBasedTable
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a325cc7688)
cache_index_and_filter_blocks: 1
cache_index_and_filter_blocks_with_high_priority: 1
pin_l0_filter_and_index_blocks_in_cache: 0
pin_top_level_index_and_filter: 1
index_type: 0
data_block_index_type: 0
data_block_hash_table_util_ratio: 0.750000
hash_index_allow_collision: 1
checksum: 1
no_block_cache: 0
block_cache: 0x55a325d02610
block_cache_name: BinnedLRUCache
block_cache_options:
capacity : 536870912
num_shard_bits : 4
strict_capacity_limit : 0
high_pri_pool_ratio: 0.000
block_cache_compressed: (nil)
persistent_cache: (nil)
block_size: 4096
block_size_deviation: 10
block_restart_interval: 16
index_block_restart_interval: 1
metadata_block_size: 4096
partition_filters: 0
use_delta_encoding: 1
filter_policy: rocksdb.BuiltinBloomFilter
whole_key_filtering: 1
verify_compression: 0
read_amp_bytes_per_bit: 0
format_version: 2
enable_index_compression: 1
block_align: 0

2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:        Options.write_buffer_size: 33554432
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:  Options.max_write_buffer_number: 2
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:          Options.compression: NoCompression
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                  Options.bottommost_compression: Disabled
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:       Options.prefix_extractor: nullptr
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.num_levels: 7
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:        Options.min_write_buffer_number_to_merge: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:     Options.max_write_buffer_number_to_maintain: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:            Options.bottommost_compression_opts.window_bits: -14
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                  Options.bottommost_compression_opts.level: 32767
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:               Options.bottommost_compression_opts.strategy: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                  Options.bottommost_compression_opts.enabled: false
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:            Options.compression_opts.window_bits: -14
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                  Options.compression_opts.level: 32767
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:               Options.compression_opts.strategy: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:         Options.compression_opts.max_dict_bytes: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                  Options.compression_opts.enabled: false
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:      Options.level0_file_num_compaction_trigger: 4
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:          Options.level0_slowdown_writes_trigger: 20
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:              Options.level0_stop_writes_trigger: 36
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                   Options.target_file_size_base: 67108864
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:             Options.target_file_size_multiplier: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                Options.max_bytes_for_level_base: 268435456
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:       Options.max_sequential_skip_in_iterations: 8
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                    Options.max_compaction_bytes: 1677721600
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                        Options.arena_block_size: 4194304
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:       Options.rate_limit_delay_max_milliseconds: 100
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                Options.disable_auto_compactions: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                        Options.compaction_style: kCompactionStyleLevel
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.compaction_options_universal.size_ratio: 1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                   Options.table_properties_collectors:
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                   Options.inplace_update_support: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                 Options.inplace_update_num_locks: 10000
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:               Options.memtable_whole_key_filtering: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:   Options.memtable_huge_page_size: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                           Options.bloom_locality: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                    Options.max_successive_merges: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                Options.optimize_filters_for_hits: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                Options.paranoid_file_checks: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                Options.force_consistency_checks: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                Options.report_bg_io_stats: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb:                               Options.ttl: 0
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: [db/version_set.cc:3747] Recovered from manifest file:/var/lib/ceph/mon/ceph-psan01/store.db/MANIFEST-2015667 succeeded,manifest_file_number is 2015667, next_file_number is 2015669, last_sequence is 691104782, log_number is 2015666,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0

2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: [db/version_set.cc:3763] Column family [default] (ID 0), log number is 2015666

2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1697208253738965, "job": 1, "event": "recovery_started", "log_files": [2015668]}
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: [db/db_impl_open.cc:581] Recovering log #2015668 mode 2
2023-10-13T16:44:13.735+0200 7f154d31c580  4 rocksdb: [db/version_set.cc:3035] Creating manifest 2015670

2023-10-13T16:44:13.739+0200 7f154d31c580  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1697208253740089, "job": 1, "event": "recovery_finished"}
2023-10-13T16:44:13.739+0200 7f154d31c580  4 rocksdb: DB pointer 0x55a325dd0000
2023-10-13T16:44:13.743+0200 7f1542830700  4 rocksdb: [db/db_impl.cc:776] ------- DUMPING STATS -------
2023-10-13T16:44:13.743+0200 7f1542830700  4 rocksdb: [db/db_impl.cc:778]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 MB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0      1/0   10.56 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
L6      3/0   154.92 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
Sum      4/0   154.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
L0      1/0   10.56 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
L6      3/0   154.92 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
Sum      4/0   154.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

2023-10-13T16:44:13.743+0200 7f154d31c580  0 starting mon.psan01 rank 1 at public addrs [v2:10.221.4.31:3300/0,v1:10.221.4.31:6789/0] at bind addrs [v2:10.221.4.31:3300/0,v1:10.221.4.31:6789/0] mon_data /var/lib/ceph/mon/ceph-psan01 fsid 77a85c30-0112-45c2-8aea-44ff69828c96
2023-10-13T16:44:13.743+0200 7f154d31c580  1 mon.psan01@-1(???) e4 preinit fsid 77a85c30-0112-45c2-8aea-44ff69828c96
2023-10-13T16:44:13.743+0200 7f154d31c580  0 mon.psan01@-1(???).mds e4619 new map
2023-10-13T16:44:13.743+0200 7f154d31c580  0 mon.psan01@-1(???).mds e4619 print_map
e4619
enable_multiple, ever_enabled_multiple: 0,1
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'nfs_fs' (1)
fs_name nfs_fs
epoch   4617
flags   12
created 2022-06-21T15:51:11.373792+0200
modified        2023-10-11T05:48:25.682635+0200
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
min_compat_client       0 (unknown)
last_failure    0
last_failure_osd_epoch  118993
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in      0
up      {0=60602768}
failed
damaged
stopped
data_pools      [22]
metadata_pool   22
inline_data     disabled
balancer
standby_count_wanted    1
[mds.psan01{0:60602768} state up:active seq 5087151 addr [v2:10.221.4.31:6800/3782739203,v1:10.221.4.31:6801/3782739203]]

Standby daemons:

[mds.psan05{-1:132228283} state up:standby seq 1 addr [v2:10.221.4.35:6800/3576554988,v1:10.221.4.35:6801/3576554988]]
[mds.psan09{-1:132454209} state up:standby seq 1 addr [v2:10.221.4.39:6800/3670081898,v1:10.221.4.39:6801/3670081898]]

/mnt/ceph-15.2.14/src/mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7f154d31c580 time 2023-10-13T16:44:13.746649+0200
/mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)
ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x155) [0x7f154e232c01]
2: (()+0x26ae09) [0x7f154e232e09]
3: (FSMap::sanity() const+0xc7) [0x7f154e7ab727]
4: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55a324200588]
5: (PaxosService::refresh(bool*)+0x28f) [0x55a32411b41f]
6: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55a323fb935c]
7: (Monitor::init_paxos()+0x7c) [0x55a323fb966c]
8: (Monitor::preinit()+0xf56) [0x55a323ffc1c6]
9: (main()+0x2e0c) [0x55a323f8d8dc]
10: (__libc_start_main()+0xf3) [0x7f154d7df083]
11: (_start()+0x2e) [0x55a323f9f90e]
2023-10-13T16:44:13.747+0200 7f154d31c580 -1 /mnt/ceph-15.2.14/src/mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7f154d31c580 time 2023-10-13T16:44:13.746649+0200
/mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)

ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x155) [0x7f154e232c01]
2: (()+0x26ae09) [0x7f154e232e09]
3: (FSMap::sanity() const+0xc7) [0x7f154e7ab727]
4: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55a324200588]
5: (PaxosService::refresh(bool*)+0x28f) [0x55a32411b41f]
6: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55a323fb935c]
7: (Monitor::init_paxos()+0x7c) [0x55a323fb966c]
8: (Monitor::preinit()+0xf56) [0x55a323ffc1c6]
9: (main()+0x2e0c) [0x55a323f8d8dc]
10: (__libc_start_main()+0xf3) [0x7f154d7df083]
11: (_start()+0x2e) [0x55a323f9f90e]

*** Caught signal (Aborted) **
in thread 7f154d31c580 thread_name:ceph-mon
ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
1: (()+0x14420) [0x7f154dd13420]
2: (gsignal()+0xcb) [0x7f154d7fe00b]
3: (abort()+0x12b) [0x7f154d7dd859]
4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x7f154e232c5c]
5: (()+0x26ae09) [0x7f154e232e09]
6: (FSMap::sanity() const+0xc7) [0x7f154e7ab727]
7: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55a324200588]
8: (PaxosService::refresh(bool*)+0x28f) [0x55a32411b41f]
9: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55a323fb935c]
10: (Monitor::init_paxos()+0x7c) [0x55a323fb966c]
11: (Monitor::preinit()+0xf56) [0x55a323ffc1c6]
12: (main()+0x2e0c) [0x55a323f8d8dc]
13: (__libc_start_main()+0xf3) [0x7f154d7df083]
14: (_start()+0x2e) [0x55a323f9f90e]
2023-10-13T16:44:13.751+0200 7f154d31c580 -1 *** Caught signal (Aborted) **
in thread 7f154d31c580 thread_name:ceph-mon

ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
1: (()+0x14420) [0x7f154dd13420]
2: (gsignal()+0xcb) [0x7f154d7fe00b]
3: (abort()+0x12b) [0x7f154d7dd859]
4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x7f154e232c5c]
5: (()+0x26ae09) [0x7f154e232e09]
6: (FSMap::sanity() const+0xc7) [0x7f154e7ab727]
7: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55a324200588]
8: (PaxosService::refresh(bool*)+0x28f) [0x55a32411b41f]
9: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55a323fb935c]
10: (Monitor::init_paxos()+0x7c) [0x55a323fb966c]
11: (Monitor::preinit()+0xf56) [0x55a323ffc1c6]
12: (main()+0x2e0c) [0x55a323f8d8dc]
13: (__libc_start_main()+0xf3) [0x7f154d7df083]
14: (_start()+0x2e) [0x55a323f9f90e]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.
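As a first check, it should be possible to extract the monmap from the stopped mon's local store and print it, to confirm which monitors (and which addresses) the store expects in quorum. I believe --extract-monmap is handled before the paxos refresh that hits the assert, but I have not verified that on every version:

# with ceph-mon@psan01 stopped, dump the monmap from the local store
ceph-mon -i psan01 --extract-monmap /tmp/monmap
monmaptool --print /tmp/monmap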

--- begin dump of recent events ---
-254> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command assert hook 0x55a325cc8610
-253> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command abort hook 0x55a325cc8610
-252> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command leak_some_memory hook 0x55a325cc8610
-251> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command perfcounters_dump hook 0x55a325cc8610
-250> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command 1 hook 0x55a325cc8610
-249> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command perf dump hook 0x55a325cc8610
-248> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command perfcounters_schema hook 0x55a325cc8610
-247> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command perf histogram dump hook 0x55a325cc8610
-246> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command 2 hook 0x55a325cc8610
-245> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command perf schema hook 0x55a325cc8610
-244> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command perf histogram schema hook 0x55a325cc8610
-243> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command perf reset hook 0x55a325cc8610
-242> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command config show hook 0x55a325cc8610
-241> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command config help hook 0x55a325cc8610
-240> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command config set hook 0x55a325cc8610
-239> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command config unset hook 0x55a325cc8610
-238> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command config get hook 0x55a325cc8610
-237> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command config diff hook 0x55a325cc8610
-236> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command config diff get hook 0x55a325cc8610
-235> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command injectargs hook 0x55a325cc8610
-234> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command log flush hook 0x55a325cc8610
-233> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command log dump hook 0x55a325cc8610
-232> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command log reopen hook 0x55a325cc8610
-231> 2023-10-13T16:44:13.723+0200 7f154d31c580  5 asok(0x55a325d70000) register_command dump_mempools hook 0x55a326950068
... (the rest of the recent-events dump repeats the startup and RocksDB output already shown above) ...
Sum      4/0   154.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0
Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count

** File Read Latency Histogram By Level [default] **

-41> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding auth protocol: cephx
-40> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding auth protocol: cephx
-39> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding auth protocol: cephx
-38> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding con mode: secure
-37> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding con mode: crc
-36> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding con mode: secure
-35> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding con mode: crc
-34> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding con mode: secure
-33> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding con mode: crc
-32> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding con mode: crc
-31> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding con mode: secure
-30> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding con mode: crc
-29> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding con mode: secure
-28> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding con mode: crc
-27> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5b940) adding con mode: secure
-26> 2023-10-13T16:44:13.743+0200 7f154d31c580  2 auth: KeyRing::load: loaded key file /var/lib/ceph/mon/ceph-psan01/keyring
-25> 2023-10-13T16:44:13.743+0200 7f154d31c580  0 starting mon.psan01 rank 1 at public addrs [v2:10.221.4.31:3300/0,v1:10.221.4.31:6789/0] at bind addrs [v2:10.221.4.31:3300/0,v1:10.221.4.31:6789/0] mon_data /var/lib/ceph/mon/ceph-psan01 fsid 77a85c30-0112-45c2-8aea-44ff69828c96
-24> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding auth protocol: cephx
-23> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding auth protocol: cephx
-22> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding auth protocol: cephx
-21> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding con mode: secure
-20> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding con mode: crc
-19> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding con mode: secure
-18> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding con mode: crc
-17> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding con mode: secure
-16> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding con mode: crc
-15> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding con mode: crc
-14> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding con mode: secure
-13> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding con mode: crc
-12> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding con mode: secure
-11> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding con mode: crc
-10> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 AuthRegistry(0x55a325d5c940) adding con mode: secure
-9> 2023-10-13T16:44:13.743+0200 7f154d31c580  2 auth: KeyRing::load: loaded key file /var/lib/ceph/mon/ceph-psan01/keyring
-8> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 adding auth protocol: cephx
-7> 2023-10-13T16:44:13.743+0200 7f154d31c580  5 adding auth protocol: cephx
-6> 2023-10-13T16:44:13.743+0200 7f154d31c580 10 log_channel(cluster) update_config to_monitors: true to_syslog: false syslog_facility: daemon prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
-5> 2023-10-13T16:44:13.743+0200 7f154d31c580 10 log_channel(audit) update_config to_monitors: true to_syslog: false syslog_facility: local0 prio: info to_graylog: false graylog_host: 127.0.0.1 graylog_port: 12201)
-4> 2023-10-13T16:44:13.743+0200 7f154d31c580  1 mon.psan01@-1(???) e4 preinit fsid 77a85c30-0112-45c2-8aea-44ff69828c96
-3> 2023-10-13T16:44:13.743+0200 7f154d31c580  0 mon.psan01@-1(???).mds e4619 new map
-2> 2023-10-13T16:44:13.743+0200 7f154d31c580  0 mon.psan01@-1(???).mds e4619 print_map
e4619
enable_multiple, ever_enabled_multiple: 0,1
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'nfs_fs' (1)
fs_name nfs_fs
epoch   4617
flags   12
created 2022-06-21T15:51:11.373792+0200
modified        2023-10-11T05:48:25.682635+0200
tableserver     0
root    0
session_timeout 60
session_autoclose       300
max_file_size   1099511627776
min_compat_client       0 (unknown)
last_failure    0
last_failure_osd_epoch  118993
compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds 1
in      0
up      {0=60602768}
failed
damaged
stopped
data_pools      [22]
metadata_pool   22
inline_data     disabled
balancer
standby_count_wanted    1
[mds.psan01{0:60602768} state up:active seq 5087151 addr [v2:10.221.4.31:6800/3782739203,v1:10.221.4.31:6801/3782739203]]

Standby daemons:

[mds.psan05{-1:132228283} state up:standby seq 1 addr [v2:10.221.4.35:6800/3576554988,v1:10.221.4.35:6801/3576554988]]
[mds.psan09{-1:132454209} state up:standby seq 1 addr [v2:10.221.4.39:6800/3670081898,v1:10.221.4.39:6801/3670081898]]

-1> 2023-10-13T16:44:13.747+0200 7f154d31c580 -1 /mnt/ceph-15.2.14/src/mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7f154d31c580 time 2023-10-13T16:44:13.746649+0200
/mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)

ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x155) [0x7f154e232c01]
2: (()+0x26ae09) [0x7f154e232e09]
3: (FSMap::sanity() const+0xc7) [0x7f154e7ab727]
4: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55a324200588]
5: (PaxosService::refresh(bool*)+0x28f) [0x55a32411b41f]
6: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55a323fb935c]
7: (Monitor::init_paxos()+0x7c) [0x55a323fb966c]
8: (Monitor::preinit()+0xf56) [0x55a323ffc1c6]
9: (main()+0x2e0c) [0x55a323f8d8dc]
10: (__libc_start_main()+0xf3) [0x7f154d7df083]
11: (_start()+0x2e) [0x55a323f9f90e]

0> 2023-10-13T16:44:13.751+0200 7f154d31c580 -1 *** Caught signal (Aborted) **
in thread 7f154d31c580 thread_name:ceph-mon

ceph version 15.2.14 (cd3bb7e87a2f62c1b862ff3fd8b1eec13391a5be) octopus (stable)
1: (()+0x14420) [0x7f154dd13420]
2: (gsignal()+0xcb) [0x7f154d7fe00b]
3: (abort()+0x12b) [0x7f154d7dd859]
4: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1b0) [0x7f154e232c5c]
5: (()+0x26ae09) [0x7f154e232e09]
6: (FSMap::sanity() const+0xc7) [0x7f154e7ab727]
7: (MDSMonitor::update_from_paxos(bool*)+0x548) [0x55a324200588]
8: (PaxosService::refresh(bool*)+0x28f) [0x55a32411b41f]
9: (Monitor::refresh_from_paxos(bool*)+0x11c) [0x55a323fb935c]
10: (Monitor::init_paxos()+0x7c) [0x55a323fb966c]
11: (Monitor::preinit()+0xf56) [0x55a323ffc1c6]
12: (main()+0x2e0c) [0x55a323f8d8dc]
13: (__libc_start_main()+0xf3) [0x7f154d7df083]
14: (_start()+0x2e) [0x55a323f9f90e]
NOTE: a copy of the executable, or `objdump -rdS <executable>` is needed to interpret this.

--- logging levels ---
0/ 5 none
0/ 1 lockdep
0/ 1 context
1/ 1 crush
1/ 5 mds
1/ 5 mds_balancer
1/ 5 mds_locker
1/ 5 mds_log
1/ 5 mds_log_expire
1/ 5 mds_migrator
0/ 1 buffer
0/ 1 timer
0/ 1 filer
0/ 1 striper
0/ 1 objecter
0/ 5 rados
0/ 5 rbd
0/ 5 rbd_mirror
0/ 5 rbd_replay
0/ 5 rbd_rwl
0/ 5 journaler
0/ 5 objectcacher
0/ 5 immutable_obj_cache
0/ 5 client
1/ 5 osd
0/ 5 optracker
0/ 5 objclass
1/ 3 filestore
1/ 3 journal
0/ 0 ms
1/ 5 mon
0/10 monc
1/ 5 paxos
0/ 5 tp
1/ 5 auth
1/ 5 crypto
1/ 1 finisher
1/ 1 reserver
1/ 5 heartbeatmap
1/ 5 perfcounter
1/ 5 rgw
1/ 5 rgw_sync
1/10 civetweb
1/ 5 javaclient
1/ 5 asok
1/ 1 throttle
0/ 0 refs
1/ 5 compressor
1/ 5 bluestore
1/ 5 bluefs
1/ 3 bdev
1/ 5 kstore
4/ 5 rocksdb
4/ 5 leveldb
4/ 5 memdb
1/ 5 fuse
1/ 5 mgr
1/ 5 mgrc
1/ 5 dpdk
1/ 5 eventtrace
1/ 5 prioritycache
0/ 5 test
-2/-2 (syslog threshold)
99/99 (stderr threshold)
--- pthread ID / name mapping for recent threads ---
7f1542830700 / rocksdb:dump_st
7f154c2be700 / admin_socket
7f154d31c580 / ceph-mon
max_recent     10000
max_new         1000
log_file
--- end dump of recent events ---
Aborted
root@psan01:~#

Thanks.

The error is:
2023-10-13T16:44:13.747+0200 7f154d31c580 -1 /mnt/ceph-15.2.14/src/mds/FSMap.cc: In function 'void FSMap::sanity() const' thread 7f154d31c580 time 2023-10-13T16:44:13.746649+0200
/mnt/ceph-15.2.14/src/mds/FSMap.cc: 847: FAILED ceph_assert(fs->mds_map.compat.compare(compat) == 0)

It is possible you hit
https://github.com/rook/rook/issues/9373

This can happen if PetaSAN was first installed from 2.3.0 or earlier. In /etc/ceph/ceph.conf on all monitor nodes, add:

[mon]
mon_mds_skip_sanity = true

then restart the monitor daemons from the command line. If that fixes the issue, I recommend you reboot all nodes.
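
For reference, a minimal sketch of how the workaround can be applied on one monitor node. It assumes the standard ceph-mon@<hostname> systemd unit and uses psan01 as an example; if your ceph.conf already has a [mon] section, add the key there instead of appending a new one:

# Append the workaround to ceph.conf (skip if you edited an existing [mon] section).
cat >> /etc/ceph/ceph.conf <<'EOF'

[mon]
mon_mds_skip_sanity = true
EOF

# Clear the systemd start-rate limit ("Start request repeated too quickly"),
# then restart the monitor and verify it stays up.
systemctl reset-failed ceph-mon@psan01
systemctl restart ceph-mon@psan01
systemctl status ceph-mon@psan01 --no-pager

# Once enough monitors are back up to form a quorum, ceph commands respond again.
ceph -s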

Great, thanks, it worked! Now I have restarted all 3 monitors and the cluster seems to work again (ceph commands and dashboard OK).

Before I restart the other nodes: some of them filled the root filesystem with logs in /var/log/ceph/ceph-osd.<nnn>.log. Shall I delete all these logs before rebooting? [EDIT: I cleaned the log files with: $ echo " " > ceph-osd<nnn>.log ]
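
For what it's worth, a slightly safer sketch of that cleanup: truncating the files in place keeps the file handles the running OSD daemons hold open, so the space is freed immediately (deleting an open log file does not free the space until the daemon reopens it).

# Truncate all OSD logs in place; no daemon restart needed.
truncate -s 0 /var/log/ceph/ceph-osd.*.log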

Second question: now that everything is back to normal, can I resume the upgrade? If so, in which order: the remaining 2 monitors first, or does the order not matter?

Thanks and bye, Ste

Very good 🙂 You can delete the log files, no problem. You should upgrade the 2 other monitor nodes first, then the rest of the nodes.
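
Between nodes, a quick sketch of how to check health and track progress with standard ceph CLI commands:

# Confirm the cluster is healthy before taking the next node down.
ceph health detail

# Show how many daemons run each Ceph version, to follow the upgrade progress.
ceph versions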

Cluster successfully upgraded to 3.2.1. 🙂

Quote from admin on October 13, 2023, 3:26 pm

It is possible you hit
https://github.com/rook/rook/issues/9373

This can happen if PetaSAN was first installed from 2.3.0 or earlier. In /etc/ceph/ceph.conf on all monitor nodes, add:

Just for completeness of information: this cluster was built starting from the version 3.0 ISO, then upgraded to 3.1.0 and now to 3.2.1. So maybe that bug can also affect version 3, in addition to version 2.

Thanks and bye, Ste

Thanks for this feedback. We had hit this issue during our recent upgrade tests with version 2.3.0 but not with later versions. From your feedback it did happen with 3.0 as well. We will review our update script to handle this.
