
Multiple IP for PetaSAN Services


As described earlier, if your remaining issue is the default gateway, make sure it is defined in /etc/network/interfaces. It should already be there if you defined it in the installer; otherwise add it and it will be configured on boot. You do not need to change anything else.
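For reference, a minimal /etc/network/interfaces sketch with a default gateway, assuming a single static interface named eth0 and placeholder addresses (adjust the interface name and subnet to your own setup):

# /etc/network/interfaces -- static interface with a default gateway
# (eth0 and the addresses below are placeholders)
auto eth0
iface eth0 inet static
    address 192.168.10.11
    netmask 255.255.255.0
    gateway 192.168.10.1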

The console menu does allow shell access before you deploy. After you deploy and set a root password, the menu is removed. You can either log in via SSH or press ALT-F1 (or F1-F8) to open a virtual console, where you need to log in with your root password.

Something strange: after deploying, I cannot open a virtual console using ALT-F1 (or F1-F8).

Anyway, how do I change the object size for RBD from 4 MB to 1 MB?

I ask because I use a 10 Gb network port for iSCSI, but the throughput I see in Grafana on the PetaSAN dashboard is only around 50 MB/s.


What hardware/disks do you use? How many client operations / how much load do you have?

Quote from admin on January 5, 2021, 9:13 am

What hardware/disks do you use? How many client operations / how much load do you have?

I use 3 nodes as below:

Node 1:

22 x 1 TB SAS 10K RPM disks in JBOD via an LSI MegaRAID 12Gb controller

CPU: 2 x 12-core Intel

RAM: 256 GB

Network: 2 x 1 Gb for management (bond b0-mgmt) and 2 x 10 Gb for the Ceph backend (bond b1-backend)

Nodes 2 and 3:

22 x 1 TB SAS 10K RPM disks in JBOD via an LSI MegaRAID 12Gb controller

CPU: 2 x 12-core Intel

RAM: 256 GB

Network: 2 x 1 Gb for management (bond b0-mgmt), 2 x 10 Gb for the Ceph backend (bond b1-backend) and 2 x 10 Gb for storage services (bond b2-svc)

Client:

4 ESXi nodes, each with 2 x 10 Gb to storage (layer 2, same subnet as bond b1-backend on the Ceph backend), accessing the iSCSI service for shared storage


Do you know how to change the object size from 4 MB to 1 MB and change the max_data_area_mb value on an RBD image?
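For reference, the object size of an RBD image is fixed at creation time and cannot be changed on an existing image, so moving from 4 MB to 1 MB objects means creating a new image with the smaller object size. A minimal sketch with the standard rbd CLI, using placeholder pool and image names (this does not cover max_data_area_mb, which depends on how the iSCSI layer exposes it):

# Create a new RBD image with 1 MB objects; "rbd" and "vmstore01" are placeholder names.
rbd create rbd/vmstore01 --size 1T --object-size 1M

# Confirm the object size of the new image.
rbd info rbd/vmstore01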


Pure HDD is not recommended for virtualization workloads. It can be used for backups and streaming, but for latency-sensitive loads like VMs it is recommended to use pure SSD, or at least SSDs for the journal or write cache, or to use controller cache on some hardware.

What is the load that gives you 50 MB/s? Is it a single file copy from inside a VM in ESXi? Are you doing many file copies, with 50 MB/s being the total from all of them? Or are you getting 50 MB/s from a database workload, which tends to use a small block size?

Can you use the benchmark UI page and run the following tests:

4K IOPS random, 1 client, 1 thread, 1 min
4K IOPS random, 2 clients, 128 threads, 1 min
4M throughput, 1 client, 1 thread, 1 min
4M throughput, 2 clients, 32 threads, 1 min

Better to stop the ESXi load so the test values will be accurate. I understand you are not in production yet.
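For reference, a rough CLI approximation of these tests using rados bench, assuming a disposable test pool named bench-test (a placeholder, not a PetaSAN default). rados bench writes objects sequentially from a single client, so it only approximates the random 4K tests, and the two-client cases would need the command run from two nodes in parallel; the UI benchmark remains the reference for the numbers requested above:

# Placeholder pool "bench-test"; -b is the write size in bytes, -t the concurrent threads.
rados bench -p bench-test 60 write -b 4096 -t 1        # ~4K, 1 thread
rados bench -p bench-test 60 write -b 4096 -t 128      # ~4K, 128 threads
rados bench -p bench-test 60 write -b 4194304 -t 1     # 4M, 1 thread
rados bench -p bench-test 60 write -b 4194304 -t 32    # 4M, 32 threads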
