Problem with OSD
o.sapranov
2 Posts
November 29, 2024, 4:03 pm

Hello,
I have a problem with one OSD.
root@petasan4:~# ceph -w
  cluster:
    id:     bec4f252-b4a5-4467-83a6-235cc4631079
    health: HEALTH_WARN
            Degraded data redundancy: 159396/70221213 objects degraded (0.227%), 7 pgs degraded, 7 pgs undersized

  services:
    mon: 3 daemons, quorum petasan1,petasan3,petasan2 (age 4M)
    mgr: petasan2(active, since 4M), standbys: petasan1, petasan3
    mds: 1/1 daemons up, 2 standby
    osd: 56 osds: 55 up (since 40m), 55 in (since 5h); 7 remapped pgs

  data:
    volumes: 1/1 healthy
    pools:   12 pools, 321 pgs
    objects: 17.56M objects, 67 TiB
    usage:   136 TiB used, 649 TiB / 785 TiB avail
    pgs:     159396/70221213 objects degraded (0.227%)
             309 active+clean
             5   active+clean+scrubbing
             5   active+undersized+degraded+remapped+backfill_wait
             2   active+undersized+degraded+remapped+backfilling

  io:
    client:   258 KiB/s rd, 186 KiB/s wr, 1 op/s rd, 1 op/s wr
    recovery: 67 MiB/s, 17 objects/s
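Only osd.54 on petasan4 is down (55 of 56 OSDs up). For reference, the down OSD can be listed straight from the CRUSH tree; the state filter should be available here, assuming the Luminous-or-later Ceph that PetaSAN ships:

root@petasan4:~# ceph osd tree down    # show only OSDs that are down (osd.54 in this case)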
root@petasan4:~# systemctl status ceph-osd@54.service
● ceph-osd@54.service - Ceph object storage daemon osd.54
     Loaded: loaded (/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: enabled)
     Active: failed (Result: exit-code) since Fri 2024-11-29 18:19:24 MSK; 41min ago
    Process: 2194154 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id 54 (code=exited, status=0/SUCCESS)
    Process: 2194159 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id 54 --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
   Main PID: 2194159 (code=exited, status=1/FAILURE)

Nov 29 18:19:24 petasan4 systemd[1]: ceph-osd@54.service: Scheduled restart job, restart counter is at 3.
Nov 29 18:19:24 petasan4 systemd[1]: Stopped Ceph object storage daemon osd.54.
Nov 29 18:19:24 petasan4 systemd[1]: ceph-osd@54.service: Start request repeated too quickly.
Nov 29 18:19:24 petasan4 systemd[1]: ceph-osd@54.service: Failed with result 'exit-code'.
Nov 29 18:19:24 petasan4 systemd[1]: Failed to start Ceph object storage daemon osd.54.
Nov 29 18:35:15 petasan4 systemd[1]: ceph-osd@54.service: Start request repeated too quickly.
Nov 29 18:35:15 petasan4 systemd[1]: ceph-osd@54.service: Failed with result 'exit-code'.
Nov 29 18:35:15 petasan4 systemd[1]: Failed to start Ceph object storage daemon osd.54.
How can I recover this disk?
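So far the only clue is that the unit hit systemd's start rate limit ("Start request repeated too quickly"), so systemd has stopped launching the daemon at all. My plan, assuming the default Ceph log path for this OSD (it may differ on PetaSAN), is to clear the failed state, check the log for the real exit reason, and retry once:

root@petasan4:~# systemctl reset-failed ceph-osd@54.service   # clear the failed/rate-limited state
root@petasan4:~# tail -n 200 /var/log/ceph/ceph-osd.54.log    # default log path; look for why ceph-osd exits with status 1
root@petasan4:~# systemctl start ceph-osd@54.service          # one manual start attempt while watching the log

If the log shows media or I/O errors, I assume the disk itself is dead and the OSD would need to be removed and rebuilt rather than restarted. Is that the right approach?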