No free space after deleting disk
elwan
14 Posts
July 10, 2019, 12:45 pm
After deleting a disk with a non-recommended method (related to the following thread), I noticed that the space was not freed in PetaSAN. I now have only one 30TB disk used in production, but the storage view shows 58.5TB used. Can you help, please?
Below are the details of the cluster:
root@NODEBKO-01:~# rbd ls --cluster ClusterCCTVBKO
image-00013
root@NODEBKO-01:~# rbd info image-00013 --cluster ClusterCCTVBKO
rbd image 'image-00013':
size 30TiB in 7864320 objects
order 22 (4MiB objects)
block_name_prefix: rbd_data.1bf3496b8b4567
format: 2
features: layering
flags:
create_timestamp: Wed Apr 17 14:22:15 2019
root@NODEBKO-01:~# ceph --cluster ClusterCCTVBKO -s
cluster:
id: ed96e77e-1ff8-4e6a-aa02-3f5caed963a8
health: HEALTH_OK
services:
mon: 3 daemons, quorum NODEBKO-01,NODEBKO-02,NODEBKO-03
mgr: NODEBKO-02(active), standbys: NODEBKO-01
osd: 3 osds: 3 up, 3 in
data:
pools: 1 pools, 256 pgs
objects: 7.67M objects, 29.2TiB
usage: 58.5TiB used, 23.3TiB / 81.9TiB avail
pgs: 255 active+clean
1 active+clean+scrubbing+deep
io:
client: 1.33KiB/s rd, 10.1MiB/s wr, 0op/s rd, 49op/s wr
root@NODEBKO-01:~#
Thanks.
admin
2,930 Posts
July 10, 2019, 8:15 pm
Storage shows the raw storage used, i.e. after taking replication into account. A 30 TB iSCSI disk with 2x replication will consume 60 TB.
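As a rough check of this (assuming the pool uses the 2x replication mentioned above), the raw usage reported by ceph -s is approximately the stored data multiplied by the replica count: 29.2 TiB stored x 2 replicas ≈ 58.4 TiB raw, which matches the 58.5 TiB used shown above.
The replica count and per-pool usage can be confirmed with the standard commands below (the pool name rbd is assumed here, since the image was listed without specifying a pool):
ceph osd pool get rbd size --cluster ClusterCCTVBKO
ceph df --cluster ClusterCCTVBKO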