pools are fluctuating between active and inactive
hjallisnorra
19 Posts
December 19, 2019, 9:22 am
Hi,
We are testing PetaSAN and looking into using it in production, but we are having some issues.
Pools are fluctuating between active and inactive, with no error or anything visibly wrong.
ceph status:
root@av-petasan-mgm-ash1-001:~# ceph status
  cluster:
    id:     48cc93e7-b2ee-4fe1-8b01-b6aacb9dda66
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum av-petasan-mgm-ash1-003,av-petasan-mgm-ash1-001,av-petasan-mgm-ash1-002 (age 10h)
    mgr: av-petasan-mgm-ash1-001(active, since 10h), standbys: av-petasan-mgm-ash1-003, av-petasan-mgm-ash1-002
    osd: 368 osds: 368 up (since 17h), 368 in (since 17h)

  data:
    pools:   2 pools, 12288 pgs
    objects: 13.37k objects, 52 GiB
    usage:   14 TiB used, 498 TiB / 512 TiB avail
    pgs:     12288 active+clean

  io:
    client: 1.1 MiB/s rd, 21 KiB/s wr, 10 op/s rd, 1 op/s wr
Also, the one disk we have created is only sometimes visible in the iSCSI disk list.
Any insight?
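For reference, one way to confirm whether PGs are really leaving active+clean (rather than the UI mis-reporting) is to poll the cluster directly from a management node. These are standard Ceph commands; the pool name is a placeholder for your own:

# summarise PG states every few seconds and watch for anything not active+clean
watch -n 5 'ceph status'

# list PGs currently stuck inactive or unclean (empty output is good)
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean

# per-pool client I/O and recovery activity
ceph osd pool stats

# confirm the RBD image backing the iSCSI disk still exists in its pool
rbd ls -p <pool-name>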
hjallisnorra
19 Posts
December 19, 2019, 2:40 pm
Well, we found some network config failures; that probably accounts for this.
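For anyone hitting the same symptom, a few generic checks help confirm this kind of network misconfiguration; the interface name and peer address below are placeholders for your backend network:

# check MTU and link state on the backend interface of each node
ip link show dev <backend-interface>

# if jumbo frames are configured, verify large packets pass end-to-end
# (8972 = 9000-byte MTU minus 28 bytes of IP/ICMP headers)
ping -c 3 -M do -s 8972 <backend-ip-of-peer-node>

# look for OSDs with unusually high latency, which often points at a bad link
ceph osd perf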