Weird and confusing disk size.
skyyxy
6 Posts
June 2, 2023, 6:52 am
Hi everyone, I'm new to PetaSAN. My PetaSAN environment (running in VirtualBox for testing) is:
3 cluster management nodes with 70 GB of storage each, running the CIFS share service.
2 storage nodes with 70 GB of storage each, running the CIFS share service.
The web GUI shows 350 GB for the cluster pool (70 GB × 5), but when I connect to the server and map the drive, it only shows 108 GB (I don't know why). Is this normal? Thanks a lot.
admin
2,930 Posts
June 2, 2023, 8:44 am
The 350 GB is the total raw storage available to all pools.
The 108 GB is the size available on the CephFS data pool, taking into account the 3x replication on that pool.
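To make the arithmetic concrete, here is a minimal sketch in plain Python (not PetaSAN or Ceph code) of the capacity math described above. The GUI's 108 GB is somewhat below the theoretical 350 / 3 ≈ 117 GB, presumably because Ceph holds back some raw space for metadata and safety margins.

    # Minimal sketch of raw-vs-usable capacity on a 3x replicated pool.
    # Assumption: pure replication; a real cluster reports slightly less
    # usable space due to metadata and full-ratio safety margins.
    raw_capacity_gb = 5 * 70     # 5 nodes x 70 GB each = 350 GB raw
    replication_size = 3         # every object is stored 3 times

    usable_gb = raw_capacity_gb / replication_size
    print(f"Theoretical usable: {usable_gb:.1f} GB")  # ~116.7 GB; the GUI reports 108 GB after overhead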
skyyxy
6 Posts
June 5, 2023, 4:04 am
Thanks for the quick reply. Do you mean that all storage nodes must have the same capacity as the cluster management nodes? For example, if the 3 management nodes each have 8 TB × 24 in RAID 6, do the storage nodes all need the same 8 TB × 24, or can they be larger, like 16 TB × 24? Thanks.
admin
2,930 Posts
June 5, 2023, 10:38 am
Your question is not clear. You can put as much or as little storage in your nodes, or none at all; it is up to you.
For your previous question: the storage you add (350 GB) is available to the storage pools. Your pool has 3x replication, so if you write 1 GB it will actually use 3 GB for redundancy. Divide 350 by 3 to get what you can write.
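As a rough illustration of the write amplification described above, here is a small hypothetical Python helper (not part of any PetaSAN or Ceph API):

    # Hypothetical helper: raw cluster space consumed by a logical write
    # on a replicated pool (assumption: replication only, no compression).
    def raw_used_gb(logical_gb: float, replicas: int = 3) -> float:
        return logical_gb * replicas

    print(raw_used_gb(1))   # 3.0 GB of raw storage consumed per 1 GB written
    print(350 / 3)          # ~116.7 GB: the most you can logically write to 350 GB raw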