Multiple IP for PetaSAN Services
admin
2,930 Posts
January 5, 2021, 8:22 pm — wait, January 3, 2021, 8:22 pm
As described earlier, if your remaining issue is the default gateway, make sure it is defined in /etc/network/interfaces. It should be there if you defined it in the installer; otherwise add it and it should get configured on boot. You do not need to change anything else.
The console menu does allow a shell before you deploy. After you deploy and set a root password, the menu is removed. You can either log in via SSH or press ALT-F1 (or F1-F8) to open a virtual console, where you need to log in with your root password.
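For reference, a minimal stanza with the default gateway defined in /etc/network/interfaces might look like the following. The interface name and addresses are placeholders; adjust them to your actual management interface and subnet:

```text
# /etc/network/interfaces -- example management interface with a default gateway
auto eth0
iface eth0 inet static
    address 192.168.10.11
    netmask 255.255.255.0
    gateway 192.168.10.1
```

With a stanza like this in place, the gateway is applied automatically on boot by ifupdown.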
Last edited on January 3, 2021, 8:24 pm by admin · #11
pedro6161
36 Posts
January 4, 2021, 11:31 pm
Something strange: after deploy I cannot open a virtual console using ALT-F1 or F1-F8.
Anyway, how do I change the object size for RBD from 4MB to 1MB?
Because I use a 10Gb network port for iSCSI, but the throughput I see is only around 50 MB/s in Grafana on the PetaSAN dashboard.
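As a side note on the object-size question: in stock Ceph the object size is fixed when an RBD image is created, so a sketch of the usual approach is shown below. Pool and image names are placeholders, and the PetaSAN UI may expose this differently:

```shell
# Create a new image with a 1 MB object size (valid sizes are powers of two).
rbd create rbd/new-image --size 1T --object-size 1M

# Check the object size of an existing image (reported as "order"; order 20 = 1 MB).
rbd info rbd/existing-image

# An existing image cannot be changed in place; copy it into an image
# created with the desired object size instead.
rbd cp rbd/existing-image rbd/new-image-1m --object-size 1M
```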
admin
2,930 Posts
January 5, 2021, 9:13 am
What hardware/disks do you use? How many client operations / how much load do you have?
Last edited on January 5, 2021, 9:15 am by admin · #13
pedro6161
36 Posts
January 5, 2021, 9:26 am
Quote from admin on January 5, 2021, 9:13 am
what hardware / disks do you use ? how many client operations / load do you have ?
I use 3 nodes as below:
node 1:
22 x 1 TB SAS 10K RPM disks in JBOD on an LSI MegaRAID 12Gb/s controller
CPU: 2 x 12-core Intel
RAM: 256 GB
Network: 2 x 1Gb for Management (Bond b0-mgmt) and 2 x 10Gb for Ceph Backend (Bond b1-backend)
nodes 2 and 3:
22 x 1 TB SAS 10K RPM disks in JBOD on an LSI MegaRAID 12Gb/s controller
CPU: 2 x 12-core Intel
RAM: 256 GB
Network: 2 x 1Gb for Management (Bond b0-mgmt), 2 x 10Gb for Ceph Backend (Bond b1-backend) and 2 x 10Gb for Storage services (Bond b2-svc)
client:
4 ESXi nodes with 2 x 10Gb to storage (Layer 2, same subnet as b1-backend on the Ceph backend, to reach the iSCSI services for shared storage)
Do you know how to change the Object Size from 4MB to 1MB, and how to change the value of max_data_area_mb on an RBD image?
Last edited on January 5, 2021, 9:27 am by pedro6161 · #14
admin
2,930 Posts
January 5, 2021, 10:29 am
Pure HDD is not recommended for virtualization workloads. It can be used for backups and streaming, but for latency-sensitive loads like VMs it is recommended to use pure SSD, or at least SSD for journal or write cache, or use controller cache on some hardware.
What is the load that gives you 50 MB/s? Is it a single file copy from inside a VM in ESXi? Or do you run many file copies and 50 MB/s is the total from all of them? Or do you get 50 MB/s from a database workload, which tends to use small block sizes?
Can you use the benchmark UI page and run the following tests:
4k IOPS random, 1 client, 1 thread, 1 min
4k IOPS random, 2 clients, 128 threads, 1 min
4M throughput, 1 client, 1 thread, 1 min
4M throughput, 2 clients, 32 threads, 1 min
Better stop the ESXi load so the test values will be accurate. I understand you are not in production yet.
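The suggested test matrix can also be approximated from the command line with rados bench. This is only a sketch: "test-pool" is a placeholder, and the PetaSAN benchmark page may use different tooling internally, so numbers will not match the UI exactly:

```shell
# Rough CLI equivalents of the four suggested tests (60 seconds each).
rados bench -p test-pool 60 write -b 4096 -t 1       # 4k, 1 thread
rados bench -p test-pool 60 write -b 4096 -t 128     # 4k, 128 threads
rados bench -p test-pool 60 write -b 4194304 -t 1    # 4M, 1 thread
rados bench -p test-pool 60 write -b 4194304 -t 32   # 4M, 32 threads

# Remove the benchmark objects afterwards.
rados -p test-pool cleanup
```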
Last edited on January 5, 2021, 10:34 am by admin · #15