
Weird and confusing disk size

Hi everyone, I'm new to PetaSAN. My PetaSAN test environment (running in VirtualBox) is:

3 cluster management nodes, each with 70 GB of storage and running the CIFS share service.

2 storage nodes, each with 70 GB of storage and running the CIFS share service.

The web GUI shows 350 GB for the cluster pool (70 GB * 5), but when I connect to the server and map the drive it only shows 108 GB, and I don't know why. I'm very confused: why is this, or is it normal? Thanks a lot.


The 350 GB is the total raw storage available to all pools.

The 108 GB is the size available on the CephFS data pool, taking into account the 3x replication on that pool.

Thanks for the quick reply. Do you mean all storage nodes must have the same capacity as the cluster management nodes? For example: if the 3 management nodes each have 8 TB * 24 in RAID 6, do all storage nodes need the same 8 TB * 24, or can they be larger, like 16 TB * 24? Thanks.

Your question is not clear. You can put as much or as little storage in your nodes as you want, or none at all; it is up to you.

For your previous question: the storage you add (350 GB) is available to the storage pools. Your pool has 3x replication, so if you write 1 GB it will actually consume 3 GB for redundancy. Divide 350 by 3 to get roughly what you can write.
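If it helps, here is a minimal sketch of that arithmetic, assuming a plain replicated pool with size = 3 (the node count and 70 GB figure come from this thread). The number Ceph actually reports (108 GB in your case) will be somewhat lower than raw/3, since some raw space is already consumed or reserved (OSD overhead, full ratios, other pools); that part is an assumption about where the gap comes from, not an exact formula.

```python
# Back-of-the-envelope check of writable capacity under replication.
# Node sizes and replication factor are taken from this thread; the
# "overhead" comment below is an assumption, not an exact Ceph formula.

def usable_capacity_gb(raw_gb: float, replication_size: int) -> float:
    """Writable capacity if every byte is stored `replication_size` times."""
    return raw_gb / replication_size

raw = 5 * 70   # 5 nodes x 70 GB each = 350 GB raw
size = 3       # replicated pool with size = 3

print(f"Raw capacity:         {raw} GB")
print(f"Theoretical writable: {usable_capacity_gb(raw, size):.0f} GB")
# Ceph reports a smaller figure (108 GB here) because part of the raw
# space is already used or held back (OSD metadata, full ratios, other pools).
```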