Individual Pool utilization charts showing zero
ghbiz
76 Posts
Quote from ghbiz on January 29, 2020, 3:25 pm
Hello,
I've been reviewing the storage utilization of our cluster. On the main page, using the Cluster Storage drop-down with pool = All, it shows 29T used out of 79.7T. But when taking a deeper dive into the individual pools, they all report 0T used. Furthermore, the sum of all pools reported in the graphs is 15.98T + 2.76T = 18.74T, which is only ~65% of what the All graph reports.
I have a bunch of images to attach, but it doesn't look like the forum supports attachments.
Brian
ghbiz
76 Posts
Quote from ghbiz on February 3, 2020, 7:25 pm
Can anyone else confirm these numbers are incorrect?
Brian
admin
2,918 Posts
Quote from admin on February 4, 2020, 12:39 pm
This is similar to the earlier post here.
Yes, there is a recent bug where the PetaSAN charts show 0 bytes at the pool level. It is due to a recent change in the ceph df --format json-pretty output, which now returns bytes_used instead of the earlier raw_bytes_used.
This is fixed in 2.5, scheduled for the later part of February. It could also be released earlier as online update 2.4.2.
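As a rough illustration only (not the actual PetaSAN code), a chart collector could tolerate the key rename by falling back between the two names when reading ceph df --format json-pretty:

#!/usr/bin/env python3
# Sketch: read per-pool usage from "ceph df --format json-pretty",
# accepting either the newer "bytes_used" key or the older "raw_bytes_used".
import json
import subprocess

def pool_used_bytes():
    out = subprocess.check_output(["ceph", "df", "--format", "json-pretty"])
    data = json.loads(out)
    usage = {}
    for pool in data.get("pools", []):
        stats = pool.get("stats", {})
        # Looking up only the old key on a newer Ceph release yields nothing,
        # which is how the pool charts ended up flat at 0 bytes.
        used = stats.get("bytes_used", stats.get("raw_bytes_used", 0))
        usage[pool["name"]] = used
    return usage

if __name__ == "__main__":
    for name, used in sorted(pool_used_bytes().items()):
        print(name, used, "bytes")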
ghbiz
76 Posts
Quote from ghbiz on March 10, 2020, 8:24 pm
Hello,
Following up on this ticket. I see that the graphs are reporting data now; however, they still do not appear to be accurate. For example, in the UI graphs I have two pools: pool1-hdd reports 8.39TB used and 16.45TB free, and pool2-ssd reports 4.69TB used and 2.92TB free. This doesn't line up with what the ceph df command shows below:
root@ceph-node2:~# ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 71 TiB 51 TiB 20 TiB 20 TiB 27.91
ssd 27 TiB 15 TiB 12 TiB 12 TiB 44.28
TOTAL 98 TiB 66 TiB 32 TiB 32 TiB 32.40
POOLS:
POOL ID STORED OBJECTS USED %USED MAX AVAIL
pool1-hdd 4 7.6 TiB 2.00M 7.6 TiB 17.17 16 TiB
pool2-ssd 5 4.3 TiB 1.13M 4.3 TiB 34.87 2.7 TiB
admin
2,918 Posts
Quote from admin on March 11, 2020, 12:13 pm
The numbers are close; it could be a rounding/conversion issue from bytes to TB.
Can you run
ceph df detail --format json-pretty
and compare the results with the graph values?
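If it helps the comparison, here is a hypothetical one-off helper (not part of PetaSAN) that prints each pool's bytes_used from the detail JSON in both decimal TB and binary TiB:

#!/usr/bin/env python3
# Sketch: show each pool's usage from "ceph df detail --format json-pretty"
# as raw bytes, decimal TB (/1000^4) and binary TiB (/1024^4).
import json
import subprocess

TB = 1000 ** 4   # decimal terabyte
TIB = 1024 ** 4  # binary tebibyte

out = subprocess.check_output(["ceph", "df", "detail", "--format", "json-pretty"])
data = json.loads(out)

for pool in data.get("pools", []):
    stats = pool["stats"]
    used = stats.get("bytes_used", 0)
    avail = stats.get("max_avail", 0)
    print("%s: %d bytes = %.2f TB = %.2f TiB (max_avail %.2f TiB)"
          % (pool["name"], used, used / TB, used / TIB, avail / TIB))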
ghbiz
76 Posts
Quote from ghbiz on March 11, 2020, 9:01 pm
root@ceph-node6:~# ceph df detail --format json-pretty
{
"stats": {
"total_bytes": 107332145393664,
"total_avail_bytes": 68591159476224,
"total_used_bytes": 38677635149824,
"total_used_raw_bytes": 38740985917440,
"total_used_raw_ratio": 0.36094486713409424,
"num_osds": 59,
"num_per_pool_osds": 56
},
"stats_by_class": {
"hdd": {
"total_bytes": 77758235074560,
"total_avail_bytes": 52134216138752,
"total_used_bytes": 25579995521024,
"total_used_raw_bytes": 25624018935808,
"total_used_raw_ratio": 0.32953447103500366
},
"ssd": {
"total_bytes": 29573910319104,
"total_avail_bytes": 16456943337472,
"total_used_bytes": 13097639628800,
"total_used_raw_bytes": 13116966981632,
"total_used_raw_ratio": 0.44353175163269043
}
},
"pools": [
{
"name": "pool1-hdd",
"id": 4,
"stats": {
"stored": 8390073045373,
"objects": 2000775,
"kb_used": 8193430709,
"bytes_used": 8390073045373,
"percent_used": 0.1796998530626297,
"max_avail": 13309980966912,
"quota_objects": 0,
"quota_bytes": 0,
"dirty": 2000775,
"rd": 650180833,
"rd_bytes": 38110921756672,
"wr": 2619747939,
"wr_bytes": 38145361373184,
"compress_bytes_used": 395699814400,
"compress_under_bytes": 799056330752,
"stored_raw": 8390073045373
}
},
{
"name": "pool2-ssd",
"id": 5,
"stats": {
"stored": 4694220364533,
"objects": 1135258,
"kb_used": 4584199575,
"bytes_used": 4694220364533,
"percent_used": 0.28477197885513306,
"max_avail": 3929971884032,
"quota_objects": 0,
"quota_bytes": 0,
"dirty": 1135258,
"rd": 10646030784,
"rd_bytes": 1405303184867328,
"wr": 12163626985,
"wr_bytes": 109082802186240,
"compress_bytes_used": 1402994573312,
"compress_under_bytes": 3145503752704,
"stored_raw": 4694220364533
}
}
]
}
root@ceph-node6:~#
admin
2,918 Posts
Quote from admin on March 20, 2020, 9:08 pm
We are feeding the Grafana charts values in bytes (as per the JSON you sent). It seems the bytes -> TB conversion should divide by 1024 x 1024 x ... (binary TiB) rather than 1000 x 1000 x ... (decimal TB). We will look into this more.
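Working the numbers from the JSON above bears this out: pool1-hdd's bytes_used of 8,390,073,045,373 is about 8.39 TB when divided by 1000^4 (the figure the graph shows) but about 7.63 TiB when divided by 1024^4 (matching the 7.6 TiB that ceph df reports); likewise pool2-ssd's 4,694,220,364,533 bytes is 4.69 TB versus roughly 4.27 TiB.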