Error loading benchmark test report
therm
121 Posts
Quote from therm on July 11, 2018, 9:21 am
Hi,
I am getting an error with the title "Error loading benchmark test report" when running the 4k benchmark. The log is attached:
11/07/2018 11:17:04 INFO Benchmark Test Started
11/07/2018 11:17:04 INFO Benchmark manager cmd.
11/07/2018 11:17:05 INFO Benchmark start rados write.
11/07/2018 11:17:05 INFO Run rados write cmd on node ceph-node-mro-1 : python /opt/petasan/scripts/jobs/benchmark/client_stress.py -d 30 -t 16 -b 4096 -m w
11/07/2018 11:17:05 INFO Wait time before collect storage state.
11/07/2018 11:17:13 INFO Run sar state cmd on node ceph-node-mro-2 : python /opt/petasan/scripts/jobs/benchmark/storage_load.py -d 15
11/07/2018 11:17:13 INFO Run sar state cmd on node ceph-node-mro-3 : python /opt/petasan/scripts/jobs/benchmark/storage_load.py -d 15
11/07/2018 11:17:14 INFO Start storage load job for 'sar'
11/07/2018 11:17:14 INFO Run sar state cmd on node ceph-node-mru-1 : python /opt/petasan/scripts/jobs/benchmark/storage_load.py -d 15
11/07/2018 11:17:14 INFO Benchmark storage cmd.
11/07/2018 11:17:14 INFO Run sar state cmd on node ceph-node-mru-2 : python /opt/petasan/scripts/jobs/benchmark/storage_load.py -d 15
11/07/2018 11:17:14 INFO Run sar state cmd on node ceph-node-mru-3 : python /opt/petasan/scripts/jobs/benchmark/storage_load.py -d 15
11/07/2018 11:17:38 ERROR Expecting value: line 2 column 1 (char 1)
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/PetaSAN/backend/cluster/benchmark.py", line 96, in manager
self.__write()
File "/usr/lib/python2.7/dist-packages/PetaSAN/backend/cluster/benchmark.py", line 165, in __write
sar_rs.load_json(out)
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/entity/benchmark.py", line 68, in load_json
self.__dict__ = json.loads(j)
File "/usr/lib/python2.7/dist-packages/flask/json.py", line 149, in loads
return _json.loads(s, **kwargs)
File "/usr/lib/python2.7/dist-packages/simplejson/__init__.py", line 533, in loads
return cls(encoding=encoding, **kw).decode(s)
File "/usr/lib/python2.7/dist-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/lib/python2.7/dist-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
JSONDecodeError: Expecting value: line 2 column 1 (char 1)
11/07/2018 11:17:38 ERROR 'NoneType' object has no attribute 'write_json'
Traceback (most recent call last):
File "/opt/petasan/scripts/util/benchmark.py", line 133, in manager
result = result.write_json()
AttributeError: 'NoneType' object has no attribute 'write_json'
11/07/2018 11:17:38 ERROR integer division or modulo by zero
Traceback (most recent call last):
File "/opt/petasan/scripts/util/benchmark.py", line 88, in storage
result = Benchmark().sar_stats(args.d)
File "/usr/lib/python2.7/dist-packages/PetaSAN/backend/cluster/benchmark.py", line 55, in sar_stats
return Sar().run(duration)
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/common/sar.py", line 51, in run
self.__process_sar_output_section()
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/common/sar.py", line 109, in __process_sar_output_section
self.__stat.disk_avg = round(self.__stat.disk_avg / (len(self.__stat.disks)), 2)
ZeroDivisionError: integer division or modulo by zero
11/07/2018 11:17:40 INFO Benchmark Test Completed
Regards,
Dennis
admin
2,930 Posts
Quote from admin on July 11, 2018, 11:02 am
Hi,
Are you using hostnames that contain dot '.' characters, i.e. FQDNs?
What version of PetaSAN are you using?
therm
121 Posts
Quote from therm on July 11, 2018, 11:07 am
Hi,
No, the names are like "ceph-node-mro-1". The version is 1.5.
admin
2,930 Posts
Quote from admin on July 11, 2018, 8:52 pm
- Did this never work or did it stop working after adding new nodes ?
- Do you have any nodes assigned as OSD storage but do not have any storage disks yet ?
- From the UI, if you select some nodes to act as stress clients, they will not be included in the benchmark report; only non-client nodes will. This way you can select all nodes except one to act as clients, which lets you get the benchmark report for one specific node. Does the benchmark report work on some specific nodes, or does it fail on all of them?
- On a node that is known to fail the benchmark report, what is the output of:
sar 3 1 -u -P ALL -r -n DEV -d -p
ceph-disk list --cluster xx
ceph osd tree --cluster xx
- On a node that is known to fail the benchmark report, can you download the following 2 files https://drive.google.com/open?id=1sZic2104yL0QM4zrIPqwOwr_qmM3_m6J
and use them to replace the originals (make backups first) at:
/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_osd.py
/usr/lib/python2.7/dist-packages/PetaSAN/core/common/sar.py
They include more logging info to help us identify where the issue is. Then, after running the benchmark, email us the PetaSAN.log file from that node to contact-us @ petasan.org
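For reference, the three errors in the log above chain together: the sar parser on a storage node divides the disk average by the number of detected OSD disks, an empty disk list raises the ZeroDivisionError, the node then prints a traceback instead of a JSON report, the manager's json.loads fails with "Expecting value: line 2 column 1", and the resulting None object causes the write_json AttributeError. A minimal sketch of that chain (an illustration only, not the PetaSAN source and not the replacement files linked above; the variable names are assumptions):

import json

# 1) On the storage node, the sar parser averages over the detected OSD disks.
#    With no detected disks, the division raises ZeroDivisionError.
detected_disks = []
try:
    disk_avg = round(sum(detected_disks) / len(detected_disks), 2)
except ZeroDivisionError:
    # The storage script dies here, so its stdout is a traceback, not JSON.
    remote_output = ("Traceback (most recent call last):\n"
                     "  ...\n"
                     "ZeroDivisionError: integer division or modulo by zero")

# 2) The benchmark manager parses each node's output as JSON.
try:
    sar_result = json.loads(remote_output)   # fails the same way as sar_rs.load_json(out)
except ValueError as err:                    # JSONDecodeError is a ValueError subclass
    print("Error loading benchmark test report: %s" % err)
    sar_result = None

# 3) With no parsed result, the report object stays None, so the later
#    result.write_json() call raises the AttributeError on 'NoneType'.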
therm
121 Posts
Quote from therm on July 12, 2018, 8:47 am
- Did this never work or did it stop working after adding new nodes ?
- I am unsure about it. I never tried it before, because I was afraid of producing crashes due to our Bcache config. (BTW, we finally got rid of it!)
- Do you have any nodes assigned as OSD storage but do not have any storage disks yet ?
- No.
- From the ui, if you select some nodes to act as stress clients, they will not be included in the benchmark report, only non client nodes will. By using this way it is possible to select all nodes with the exception of one to act as clients, this will allow getting the benchmark report on 1 specific node. Does the benchmark report run on some specific nodes or does it fail on all ?
- I tested selecting single nodes as clients (trying different nodes), and also all nodes but one; the problem is the same.
sar output:
root@ceph-node-mro-1:~# sar 3 1 -u -P ALL -r -n DEV -d -p
Linux 4.4.92-09-petasan (ceph-node-mro-1) 07/12/18 _x86_64_ (32 CPU)
10:41:03 CPU %user %nice %system %iowait %steal %idle
10:41:06 all 0.78 0.00 0.56 2.39 0.00 96.28
10:41:06 0 0.00 0.00 0.67 1.00 0.00 98.33
10:41:06 1 0.00 0.00 0.33 1.33 0.00 98.33
10:41:06 2 0.33 0.00 0.33 0.66 0.00 98.67
10:41:06 3 0.00 0.00 0.00 0.00 0.00 100.00
10:41:06 4 0.00 0.00 0.00 0.00 0.00 100.00
10:41:06 5 0.33 0.00 0.00 0.00 0.00 99.67
10:41:06 6 0.00 0.00 0.00 0.00 0.00 100.00
10:41:06 7 0.00 0.00 0.00 0.00 0.00 100.00
10:41:06 8 2.03 0.00 3.73 1.69 0.00 92.54
10:41:06 9 1.36 0.00 0.68 4.75 0.00 93.22
10:41:06 10 1.68 0.00 0.34 3.03 0.00 94.95
10:41:06 11 1.34 0.00 1.01 2.01 0.00 95.64
10:41:06 12 1.01 0.00 1.34 3.02 0.00 94.63
10:41:06 13 1.68 0.00 0.34 5.05 0.00 92.93
10:41:06 14 2.02 0.00 0.67 5.72 0.00 91.58
10:41:06 15 1.35 0.00 0.68 8.78 0.00 89.19
10:41:06 16 0.33 0.00 0.00 0.00 0.00 99.67
10:41:06 17 0.00 0.00 0.33 1.34 0.00 98.33
10:41:06 18 0.00 0.00 0.00 0.67 0.00 99.33
10:41:06 19 0.00 0.00 0.33 1.34 0.00 98.33
10:41:06 20 0.00 0.00 0.33 1.00 0.00 98.66
10:41:06 21 0.00 0.00 0.00 0.00 0.00 100.00
10:41:06 22 0.00 0.00 0.33 0.67 0.00 99.00
10:41:06 23 0.33 0.00 0.00 0.00 0.00 99.67
10:41:06 24 1.35 0.00 0.67 11.11 0.00 86.87
10:41:06 25 1.35 0.00 0.67 4.04 0.00 93.94
10:41:06 26 1.68 0.00 0.67 2.35 0.00 95.30
10:41:06 27 1.34 0.00 1.01 5.03 0.00 92.62
10:41:06 28 2.00 0.00 1.00 5.67 0.00 91.33
10:41:06 29 0.68 0.00 0.68 2.03 0.00 96.62
10:41:06 30 1.33 0.00 1.00 1.67 0.00 96.00
10:41:06 31 1.99 0.00 1.00 2.66 0.00 94.35
10:41:03 kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
10:41:06 129047456 69007300 34.84 4396 3591764 123268288 62.24 61957096 3173336 27684
10:41:03 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
10:41:06 nvme1n1 78.67 0.00 2085.33 26.51 0.00 0.00 0.00 0.00
10:41:06 nvme0n1 83.33 0.00 2389.33 28.67 0.00 0.02 0.02 0.13
10:41:06 sdc 12.67 186.67 0.00 14.74 0.11 8.95 8.53 10.80
10:41:06 sda 1.00 0.00 82.67 82.67 0.00 2.67 1.33 0.13
10:41:06 sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 sdd 8.33 88.00 0.00 10.56 0.06 7.04 7.04 5.87
10:41:06 sde 35.33 13760.00 0.00 389.43 0.32 8.94 3.02 10.67
10:41:06 sdg 35.00 13901.33 0.00 397.18 0.40 11.35 3.62 12.67
10:41:06 sdf 30.00 13853.33 0.00 461.78 0.21 7.07 1.78 5.33
10:41:06 sdh 43.00 14162.67 0.00 329.36 0.44 10.33 3.84 16.53
10:41:06 sdi 28.33 13674.67 0.00 482.64 0.25 8.85 1.88 5.33
10:41:06 sdj 35.67 13800.00 0.00 386.92 0.37 10.39 3.10 11.07
10:41:06 sdk 2.00 32.00 0.00 16.00 0.01 6.00 6.00 1.20
10:41:06 sdl 66.33 14077.33 1346.67 232.52 0.44 6.59 1.11 7.33
10:41:06 sdm 32.33 13712.00 0.00 424.08 0.42 12.91 3.09 10.00
10:41:06 sdn 31.33 13917.33 0.00 444.17 0.30 9.57 2.43 7.60
10:41:06 sdo 4.67 56.00 0.00 12.00 0.04 8.29 8.29 3.87
10:41:06 sdp 27.33 13680.00 0.00 500.49 0.20 7.46 1.17 3.20
10:41:06 sdq 56.00 194.67 1005.33 21.43 0.58 10.29 1.17 6.53
10:41:06 sdr 30.00 13738.67 0.00 457.96 0.25 8.44 1.87 5.60
10:41:06 sds 31.00 13912.00 0.00 448.77 0.30 9.59 2.24 6.93
10:41:06 sdt 3.33 114.67 0.00 34.40 0.02 6.40 6.40 2.13
10:41:06 sdu 1.00 8.00 0.00 8.00 0.01 5.33 5.33 0.53
10:41:06 sdv 35.67 13698.67 173.33 388.93 0.40 11.10 1.87 6.67
10:41:06 sdw 2.00 21.33 0.00 10.67 0.02 9.33 9.33 1.87
10:41:06 sdx 84.33 9064.00 1824.00 129.11 0.17 1.99 0.47 4.00
10:41:06 sdy 1.00 13.33 0.00 13.33 0.01 12.00 12.00 1.20
10:41:06 sdz 4.67 58.67 0.00 12.57 0.03 7.43 7.43 3.47
10:41:06 rbd3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 rbd4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 rbd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 rbd5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 rbd6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 rbd1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 rbd7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:03 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
10:41:06 eth7 4601.33 4535.33 4585.26 3852.95 0.00 0.00 0.00 0.38
10:41:06 lo 200.67 200.67 143.20 143.20 0.00 0.00 0.00 0.00
10:41:06 eth3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 eth6 3997.00 4029.00 3638.23 3578.81 0.00 0.00 0.00 0.30
10:41:06 eth2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 eth5 219.00 498.00 104.82 647.57 0.00 0.00 0.00 0.05
10:41:06 eth1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 eth4 48.33 42.67 138.32 42.27 0.00 0.00 0.00 0.01
10:41:06 eth0 5.67 1.67 0.39 0.18 0.00 0.00 0.33 0.00
Average: CPU %user %nice %system %iowait %steal %idle
Average: all 0.78 0.00 0.56 2.39 0.00 96.28
Average: 0 0.00 0.00 0.67 1.00 0.00 98.33
Average: 1 0.00 0.00 0.33 1.33 0.00 98.33
Average: 2 0.33 0.00 0.33 0.66 0.00 98.67
Average: 3 0.00 0.00 0.00 0.00 0.00 100.00
Average: 4 0.00 0.00 0.00 0.00 0.00 100.00
Average: 5 0.33 0.00 0.00 0.00 0.00 99.67
Average: 6 0.00 0.00 0.00 0.00 0.00 100.00
Average: 7 0.00 0.00 0.00 0.00 0.00 100.00
Average: 8 2.03 0.00 3.73 1.69 0.00 92.54
Average: 9 1.36 0.00 0.68 4.75 0.00 93.22
Average: 10 1.68 0.00 0.34 3.03 0.00 94.95
Average: 11 1.34 0.00 1.01 2.01 0.00 95.64
Average: 12 1.01 0.00 1.34 3.02 0.00 94.63
Average: 13 1.68 0.00 0.34 5.05 0.00 92.93
Average: 14 2.02 0.00 0.67 5.72 0.00 91.58
Average: 15 1.35 0.00 0.68 8.78 0.00 89.19
Average: 16 0.33 0.00 0.00 0.00 0.00 99.67
Average: 17 0.00 0.00 0.33 1.34 0.00 98.33
Average: 18 0.00 0.00 0.00 0.67 0.00 99.33
Average: 19 0.00 0.00 0.33 1.34 0.00 98.33
Average: 20 0.00 0.00 0.33 1.00 0.00 98.66
Average: 21 0.00 0.00 0.00 0.00 0.00 100.00
Average: 22 0.00 0.00 0.33 0.67 0.00 99.00
Average: 23 0.33 0.00 0.00 0.00 0.00 99.67
Average: 24 1.35 0.00 0.67 11.11 0.00 86.87
Average: 25 1.35 0.00 0.67 4.04 0.00 93.94
Average: 26 1.68 0.00 0.67 2.35 0.00 95.30
Average: 27 1.34 0.00 1.01 5.03 0.00 92.62
Average: 28 2.00 0.00 1.00 5.67 0.00 91.33
Average: 29 0.68 0.00 0.68 2.03 0.00 96.62
Average: 30 1.33 0.00 1.00 1.67 0.00 96.00
Average: 31 1.99 0.00 1.00 2.66 0.00 94.35
Average: kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
Average: 129047456 69007300 34.84 4396 3591764 123268288 62.24 61957096 3173336 27684
Average: DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
Average: nvme1n1 78.67 0.00 2085.33 26.51 0.00 0.00 0.00 0.00
Average: nvme0n1 83.33 0.00 2389.33 28.67 0.00 0.02 0.02 0.13
Average: sdc 12.67 186.67 0.00 14.74 0.11 8.95 8.53 10.80
Average: sda 1.00 0.00 82.67 82.67 0.00 2.67 1.33 0.13
Average: sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: sdd 8.33 88.00 0.00 10.56 0.06 7.04 7.04 5.87
Average: sde 35.33 13760.00 0.00 389.43 0.32 8.94 3.02 10.67
Average: sdg 35.00 13901.33 0.00 397.18 0.40 11.35 3.62 12.67
Average: sdf 30.00 13853.33 0.00 461.78 0.21 7.07 1.78 5.33
Average: sdh 43.00 14162.67 0.00 329.36 0.44 10.33 3.84 16.53
Average: sdi 28.33 13674.67 0.00 482.64 0.25 8.85 1.88 5.33
Average: sdj 35.67 13800.00 0.00 386.92 0.37 10.39 3.10 11.07
Average: sdk 2.00 32.00 0.00 16.00 0.01 6.00 6.00 1.20
Average: sdl 66.33 14077.33 1346.67 232.52 0.44 6.59 1.11 7.33
Average: sdm 32.33 13712.00 0.00 424.08 0.42 12.91 3.09 10.00
Average: sdn 31.33 13917.33 0.00 444.17 0.30 9.57 2.43 7.60
Average: sdo 4.67 56.00 0.00 12.00 0.04 8.29 8.29 3.87
Average: sdp 27.33 13680.00 0.00 500.49 0.20 7.46 1.17 3.20
Average: sdq 56.00 194.67 1005.33 21.43 0.58 10.29 1.17 6.53
Average: sdr 30.00 13738.67 0.00 457.96 0.25 8.44 1.87 5.60
Average: sds 31.00 13912.00 0.00 448.77 0.30 9.59 2.24 6.93
Average: sdt 3.33 114.67 0.00 34.40 0.02 6.40 6.40 2.13
Average: sdu 1.00 8.00 0.00 8.00 0.01 5.33 5.33 0.53
Average: sdv 35.67 13698.67 173.33 388.93 0.40 11.10 1.87 6.67
Average: sdw 2.00 21.33 0.00 10.67 0.02 9.33 9.33 1.87
Average: sdx 84.33 9064.00 1824.00 129.11 0.17 1.99 0.47 4.00
Average: sdy 1.00 13.33 0.00 13.33 0.01 12.00 12.00 1.20
Average: sdz 4.67 58.67 0.00 12.57 0.03 7.43 7.43 3.47
Average: rbd3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: rbd4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: rbd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: rbd5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: rbd6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: rbd1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: rbd7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
Average: eth7 4601.33 4535.33 4585.26 3852.95 0.00 0.00 0.00 0.38
Average: lo 200.67 200.67 143.20 143.20 0.00 0.00 0.00 0.00
Average: eth3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: eth6 3997.00 4029.00 3638.23 3578.81 0.00 0.00 0.00 0.30
Average: eth2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: eth5 219.00 498.00 104.82 647.57 0.00 0.00 0.00 0.05
Average: eth1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: eth4 48.33 42.67 138.32 42.27 0.00 0.00 0.00 0.01
Average: eth0 5.67 1.67 0.39 0.18 0.00 0.00 0.33 0.00
root@ceph-node-mro-1:~# ceph-disk list
/dev/nvme0n1 :
/dev/nvme0n1p1 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p10 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p11 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p12 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p2 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p3 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p4 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p5 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p6 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p7 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p8 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p9 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1 :
/dev/nvme1n1p1 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p10 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p11 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p12 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p2 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p3 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p4 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p5 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p6 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p7 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p8 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p9 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/rbd0 :
/dev/rbd0p1 other, VMFS_volume_member
/dev/rbd1 :
/dev/rbd1p1 other, VMFS_volume_member
/dev/rbd3 :
/dev/rbd3p1 other, VMFS_volume_member
/dev/rbd4 :
/dev/rbd4p1 other, VMFS_volume_member
/dev/rbd5 :
/dev/rbd5p1 other, VMFS_volume_member
/dev/rbd6 :
/dev/rbd6p1 other, VMFS_volume_member
/dev/rbd7 :
/dev/rbd7p1 other, VMFS_volume_member
/dev/sda :
/dev/sda2 other, ext4, mounted on /
/dev/sda1 other, ext4, mounted on /boot
/dev/sda4 other, ext4, mounted on /opt/petasan/config
/dev/sda3 other, ext4, mounted on /var/lib/ceph
/dev/sdb other, unknown
/dev/sdc other, xfs, mounted on /var/lib/ceph/osd/ceph-72
/dev/sdd other, xfs, mounted on /var/lib/ceph/osd/ceph-73
/dev/sde other, xfs, mounted on /var/lib/ceph/osd/ceph-74
/dev/sdf other, xfs, mounted on /var/lib/ceph/osd/ceph-75
/dev/sdg other, xfs, mounted on /var/lib/ceph/osd/ceph-76
/dev/sdh other, xfs, mounted on /var/lib/ceph/osd/ceph-77
/dev/sdi other, xfs, mounted on /var/lib/ceph/osd/ceph-78
/dev/sdj other, xfs, mounted on /var/lib/ceph/osd/ceph-79
/dev/sdk other, xfs, mounted on /var/lib/ceph/osd/ceph-80
/dev/sdl other, xfs, mounted on /var/lib/ceph/osd/ceph-81
/dev/sdm other, xfs, mounted on /var/lib/ceph/osd/ceph-82
/dev/sdn other, xfs, mounted on /var/lib/ceph/osd/ceph-83
/dev/sdo other, xfs, mounted on /var/lib/ceph/osd/ceph-84
/dev/sdp other, xfs, mounted on /var/lib/ceph/osd/ceph-85
/dev/sdq other, xfs, mounted on /var/lib/ceph/osd/ceph-86
/dev/sdr other, xfs, mounted on /var/lib/ceph/osd/ceph-87
/dev/sds other, xfs, mounted on /var/lib/ceph/osd/ceph-88
/dev/sdt other, xfs, mounted on /var/lib/ceph/osd/ceph-89
/dev/sdu other, xfs, mounted on /var/lib/ceph/osd/ceph-90
/dev/sdv other, xfs, mounted on /var/lib/ceph/osd/ceph-91
/dev/sdw other, xfs, mounted on /var/lib/ceph/osd/ceph-92
/dev/sdx other, xfs, mounted on /var/lib/ceph/osd/ceph-93
/dev/sdy other, xfs, mounted on /var/lib/ceph/osd/ceph-94
/dev/sdz other, xfs, mounted on /var/lib/ceph/osd/ceph-95
/dev/sr0 other, unknown
root@ceph-node-mru-1:~# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 523.72485 root default
-9 261.86243 room room-mru
-3 87.28748 host ceph-node-mru-1
0 3.63698 osd.0 up 1.00000 1.00000
1 3.63698 osd.1 up 1.00000 1.00000
2 3.63698 osd.2 up 1.00000 1.00000
3 3.63698 osd.3 up 1.00000 1.00000
4 3.63698 osd.4 up 1.00000 1.00000
5 3.63698 osd.5 up 1.00000 1.00000
6 3.63698 osd.6 up 1.00000 1.00000
7 3.63698 osd.7 up 1.00000 1.00000
8 3.63698 osd.8 up 1.00000 1.00000
9 3.63698 osd.9 up 1.00000 1.00000
10 3.63698 osd.10 up 1.00000 1.00000
11 3.63698 osd.11 up 1.00000 1.00000
12 3.63698 osd.12 up 1.00000 1.00000
13 3.63698 osd.13 up 1.00000 1.00000
14 3.63698 osd.14 up 1.00000 1.00000
15 3.63698 osd.15 up 1.00000 1.00000
16 3.63698 osd.16 up 1.00000 1.00000
17 3.63698 osd.17 up 1.00000 1.00000
18 3.63698 osd.18 up 1.00000 1.00000
19 3.63698 osd.19 up 1.00000 1.00000
20 3.63698 osd.20 up 1.00000 1.00000
21 3.63698 osd.21 up 1.00000 1.00000
22 3.63698 osd.22 up 1.00000 1.00000
23 3.63698 osd.23 up 1.00000 1.00000
-4 87.28748 host ceph-node-mru-2
24 3.63698 osd.24 up 1.00000 1.00000
25 3.63698 osd.25 up 1.00000 1.00000
26 3.63698 osd.26 up 1.00000 1.00000
27 3.63698 osd.27 up 1.00000 1.00000
28 3.63698 osd.28 up 1.00000 1.00000
29 3.63698 osd.29 up 1.00000 1.00000
30 3.63698 osd.30 up 1.00000 1.00000
31 3.63698 osd.31 up 1.00000 1.00000
32 3.63698 osd.32 up 1.00000 1.00000
33 3.63698 osd.33 up 1.00000 1.00000
34 3.63698 osd.34 up 1.00000 1.00000
35 3.63698 osd.35 up 1.00000 1.00000
36 3.63698 osd.36 up 1.00000 1.00000
37 3.63698 osd.37 up 1.00000 1.00000
38 3.63698 osd.38 up 1.00000 1.00000
39 3.63698 osd.39 up 1.00000 1.00000
40 3.63698 osd.40 up 1.00000 1.00000
41 3.63698 osd.41 up 1.00000 1.00000
42 3.63698 osd.42 up 1.00000 1.00000
43 3.63698 osd.43 up 1.00000 1.00000
44 3.63698 osd.44 up 1.00000 1.00000
45 3.63698 osd.45 up 1.00000 1.00000
46 3.63698 osd.46 up 1.00000 1.00000
47 3.63698 osd.47 up 1.00000 1.00000
-2 87.28748 host ceph-node-mru-3
48 3.63698 osd.48 up 1.00000 1.00000
49 3.63698 osd.49 up 1.00000 1.00000
50 3.63698 osd.50 up 1.00000 1.00000
51 3.63698 osd.51 up 1.00000 1.00000
52 3.63698 osd.52 up 1.00000 1.00000
53 3.63698 osd.53 up 1.00000 1.00000
54 3.63698 osd.54 up 1.00000 1.00000
55 3.63698 osd.55 up 1.00000 1.00000
56 3.63698 osd.56 up 1.00000 1.00000
57 3.63698 osd.57 up 1.00000 1.00000
58 3.63698 osd.58 up 1.00000 1.00000
59 3.63698 osd.59 up 1.00000 1.00000
60 3.63698 osd.60 up 1.00000 1.00000
61 3.63698 osd.61 up 1.00000 1.00000
62 3.63698 osd.62 up 1.00000 1.00000
63 3.63698 osd.63 up 1.00000 1.00000
64 3.63698 osd.64 up 1.00000 1.00000
65 3.63698 osd.65 up 1.00000 1.00000
66 3.63698 osd.66 up 1.00000 1.00000
67 3.63698 osd.67 up 1.00000 1.00000
68 3.63698 osd.68 up 1.00000 1.00000
69 3.63698 osd.69 up 1.00000 1.00000
70 3.63698 osd.70 up 1.00000 1.00000
71 3.63698 osd.71 up 1.00000 1.00000
-8 261.86243 room room-mro
-5 87.28748 host ceph-node-mro-1
72 3.63698 osd.72 up 1.00000 1.00000
73 3.63698 osd.73 up 1.00000 1.00000
74 3.63698 osd.74 up 1.00000 1.00000
75 3.63698 osd.75 up 1.00000 1.00000
76 3.63698 osd.76 up 1.00000 1.00000
77 3.63698 osd.77 up 1.00000 1.00000
78 3.63698 osd.78 up 1.00000 1.00000
79 3.63698 osd.79 up 1.00000 1.00000
80 3.63698 osd.80 up 1.00000 1.00000
81 3.63698 osd.81 up 1.00000 1.00000
82 3.63698 osd.82 up 1.00000 1.00000
83 3.63698 osd.83 up 1.00000 1.00000
84 3.63698 osd.84 up 1.00000 1.00000
85 3.63698 osd.85 up 1.00000 1.00000
86 3.63698 osd.86 up 1.00000 1.00000
87 3.63698 osd.87 up 1.00000 1.00000
88 3.63698 osd.88 up 1.00000 1.00000
89 3.63698 osd.89 up 1.00000 1.00000
90 3.63698 osd.90 up 1.00000 1.00000
91 3.63698 osd.91 up 1.00000 1.00000
92 3.63698 osd.92 up 1.00000 1.00000
93 3.63698 osd.93 up 1.00000 1.00000
94 3.63698 osd.94 up 1.00000 1.00000
95 3.63698 osd.95 up 1.00000 1.00000
-6 87.28748 host ceph-node-mro-2
96 3.63698 osd.96 up 1.00000 1.00000
97 3.63698 osd.97 up 1.00000 1.00000
98 3.63698 osd.98 up 1.00000 1.00000
99 3.63698 osd.99 up 1.00000 1.00000
100 3.63698 osd.100 up 1.00000 1.00000
101 3.63698 osd.101 up 1.00000 1.00000
102 3.63698 osd.102 up 1.00000 1.00000
103 3.63698 osd.103 up 1.00000 1.00000
104 3.63698 osd.104 up 1.00000 1.00000
105 3.63698 osd.105 up 1.00000 1.00000
106 3.63698 osd.106 up 1.00000 1.00000
107 3.63698 osd.107 up 1.00000 1.00000
108 3.63698 osd.108 up 1.00000 1.00000
109 3.63698 osd.109 up 1.00000 1.00000
110 3.63698 osd.110 up 1.00000 1.00000
111 3.63698 osd.111 up 1.00000 1.00000
112 3.63698 osd.112 up 1.00000 1.00000
113 3.63698 osd.113 up 1.00000 1.00000
114 3.63698 osd.114 up 1.00000 1.00000
115 3.63698 osd.115 up 1.00000 1.00000
116 3.63698 osd.116 up 1.00000 1.00000
117 3.63698 osd.117 up 1.00000 1.00000
118 3.63698 osd.118 up 1.00000 1.00000
119 3.63698 osd.119 up 1.00000 1.00000
-7 87.28748 host ceph-node-mro-3
120 3.63698 osd.120 up 1.00000 1.00000
121 3.63698 osd.121 up 1.00000 1.00000
122 3.63698 osd.122 up 1.00000 1.00000
123 3.63698 osd.123 up 1.00000 1.00000
124 3.63698 osd.124 up 1.00000 1.00000
125 3.63698 osd.125 up 1.00000 1.00000
126 3.63698 osd.126 up 1.00000 1.00000
127 3.63698 osd.127 up 1.00000 1.00000
128 3.63698 osd.128 up 1.00000 1.00000
129 3.63698 osd.129 up 1.00000 1.00000
130 3.63698 osd.130 up 1.00000 1.00000
131 3.63698 osd.131 up 1.00000 1.00000
132 3.63698 osd.132 up 1.00000 1.00000
133 3.63698 osd.133 up 1.00000 1.00000
134 3.63698 osd.134 up 1.00000 1.00000
135 3.63698 osd.135 up 1.00000 1.00000
136 3.63698 osd.136 up 1.00000 1.00000
137 3.63698 osd.137 up 1.00000 1.00000
138 3.63698 osd.138 up 1.00000 1.00000
139 3.63698 osd.139 up 1.00000 1.00000
140 3.63698 osd.140 up 1.00000 1.00000
141 3.63698 osd.141 up 1.00000 1.00000
142 3.63698 osd.142 up 1.00000 1.00000
143 3.63698 osd.143 up 1.00000 1.00000
Logs will come in half an hour.
Regards,
Dennis
admin
2,930 Posts
Quote from admin on July 12, 2018, 12:25 pm
Hi Dennis,
I looked at the logs; they do not show any of the extra traces we added. Can you please double-check that both files were placed in the correct paths, and send me the logs from that node? This node needs to be un-selected from the client list in the UI so that it is included in the benchmark report.
Hatem
therm
121 Posts
Quote from therm on July 13, 2018, 1:55 pm
Hi Hatem,
I sent you another log. Did you get it?
Dennis
admin
2,930 Posts
Quote from admin on July 13, 2018, 6:31 pm
I only got one, which did not have the extra log traces. If you have another log, please resend it.
admin
2,930 Posts
Quote from admin on July 14, 2018, 2:12 pm
Got the new logs.
admin
2,930 Posts
Quote from admin on July 14, 2018, 3:26 pm
The output of:
ceph-disk list
you posted shows drives as:
/dev/sdc other, xfs, mounted on /var/lib/ceph/osd/ceph-72
should have shown something more like this instead:
/dev/sdc1 ceph data, active, cluster xx, osd.72, journal /dev/nvme0n1p3
It does not appear your OSDs have the correct GPT partition GUID for an OSD. If you created them manually, did you use the ceph-disk tool?
The benchmark is failing because, for the disk data, it only reports OSD disks, which it identifies using this GUID. Do you have other issues in the Node List -> Physical Disk List page, or do you not use this page?
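To illustrate the failure mode (a hedged sketch, not the actual PetaSAN code): ceph-disk marks OSD data partitions with a well-known GPT partition type GUID, and if no partition carries it the benchmark's disk list comes back empty, which is exactly what feeds the division by zero in the original log. The GUID constant below is the standard ceph-disk OSD data type; the partition dictionary layout is an assumed illustration.

# Hedged sketch, not the actual PetaSAN code.
CEPH_OSD_DATA_GUID = "4fbd7e29-9d25-41b8-afd0-062c0ceff05d"   # standard ceph-disk OSD data type GUID

def osd_data_partitions(partitions):
    # Keep only partitions whose GPT type GUID marks them as Ceph OSD data.
    return [p for p in partitions if p.get("type_guid", "").lower() == CEPH_OSD_DATA_GUID]

# The ceph-disk list output posted above shows the OSDs as plain "other, xfs" and
# the NVMe partitions with the generic Linux data GUID (0fc63daf-...), so a
# filter like this returns an empty list:
partitions = [
    {"dev": "/dev/sdc", "type_guid": ""},                                            # "other, xfs"
    {"dev": "/dev/nvme0n1p1", "type_guid": "0fc63daf-8483-4772-8e79-3d69d8477de4"},  # generic Linux data
]
osd_disks = osd_data_partitions(partitions)   # -> []

# With zero detected OSD disks, an unguarded average reproduces the crash:
if osd_disks:
    disk_avg = round(sum(d.get("util", 0.0) for d in osd_disks) / len(osd_disks), 2)
else:
    disk_avg = 0.0   # without this guard, len(osd_disks) == 0 raises ZeroDivisionError
print("detected OSD disks: %s, disk_avg: %s" % (osd_disks, disk_avg))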
Pages: 1 2
Error loading benchmark test report
therm
121 Posts
Quote from therm on July 11, 2018, 9:21 amHi,
I am getting an error with the title "Error loading benchmark test report" when running 4k benchmark. Log is attached:
11/07/2018 11:17:04 INFO Benchmark Test Started
11/07/2018 11:17:04 INFO Benchmark manager cmd.
11/07/2018 11:17:05 INFO Benchmark start rados write.
11/07/2018 11:17:05 INFO Run rados write cmd on node ceph-node-mro-1 : python /opt/petasan/scripts/jobs/benchmark/client_stress.py -d 30 -t 16 -b 4096 -m w
11/07/2018 11:17:05 INFO Wait time before collect storage state.
11/07/2018 11:17:13 INFO Run sar state cmd on node ceph-node-mro-2 : python /opt/petasan/scripts/jobs/benchmark/storage_load.py -d 15
11/07/2018 11:17:13 INFO Run sar state cmd on node ceph-node-mro-3 : python /opt/petasan/scripts/jobs/benchmark/storage_load.py -d 15
11/07/2018 11:17:14 INFO Start storage load job for 'sar'
11/07/2018 11:17:14 INFO Run sar state cmd on node ceph-node-mru-1 : python /opt/petasan/scripts/jobs/benchmark/storage_load.py -d 15
11/07/2018 11:17:14 INFO Benchmark storage cmd.
11/07/2018 11:17:14 INFO Run sar state cmd on node ceph-node-mru-2 : python /opt/petasan/scripts/jobs/benchmark/storage_load.py -d 15
11/07/2018 11:17:14 INFO Run sar state cmd on node ceph-node-mru-3 : python /opt/petasan/scripts/jobs/benchmark/storage_load.py -d 15
11/07/2018 11:17:38 ERROR Expecting value: line 2 column 1 (char 1)
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/PetaSAN/backend/cluster/benchmark.py", line 96, in manager
self.__write()
File "/usr/lib/python2.7/dist-packages/PetaSAN/backend/cluster/benchmark.py", line 165, in __write
sar_rs.load_json(out)
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/entity/benchmark.py", line 68, in load_json
self.__dict__ = json.loads(j)
File "/usr/lib/python2.7/dist-packages/flask/json.py", line 149, in loads
return _json.loads(s, **kwargs)
File "/usr/lib/python2.7/dist-packages/simplejson/__init__.py", line 533, in loads
return cls(encoding=encoding, **kw).decode(s)
File "/usr/lib/python2.7/dist-packages/simplejson/decoder.py", line 370, in decode
obj, end = self.raw_decode(s)
File "/usr/lib/python2.7/dist-packages/simplejson/decoder.py", line 400, in raw_decode
return self.scan_once(s, idx=_w(s, idx).end())
JSONDecodeError: Expecting value: line 2 column 1 (char 1)
11/07/2018 11:17:38 ERROR 'NoneType' object has no attribute 'write_json'
Traceback (most recent call last):
File "/opt/petasan/scripts/util/benchmark.py", line 133, in manager
result = result.write_json()
AttributeError: 'NoneType' object has no attribute 'write_json'
11/07/2018 11:17:38 ERROR integer division or modulo by zero
Traceback (most recent call last):
File "/opt/petasan/scripts/util/benchmark.py", line 88, in storage
result = Benchmark().sar_stats(args.d)
File "/usr/lib/python2.7/dist-packages/PetaSAN/backend/cluster/benchmark.py", line 55, in sar_stats
return Sar().run(duration)
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/common/sar.py", line 51, in run
self.__process_sar_output_section()
File "/usr/lib/python2.7/dist-packages/PetaSAN/core/common/sar.py", line 109, in __process_sar_output_section
self.__stat.disk_avg = round(self.__stat.disk_avg / (len(self.__stat.disks)), 2)
ZeroDivisionError: integer division or modulo by zero
11/07/2018 11:17:40 INFO Benchmark Test CompletedRegards,
Dennis
admin
2,930 Posts
Quote from admin on July 11, 2018, 11:02 am
Hi,
Are you using hostnames that contain dot '.' characters, i.e. FQDNs?
What version of PetaSAN are you using?
therm
121 Posts
Quote from therm on July 11, 2018, 11:07 am
Hi,
no, names are like this "ceph-node-mro-1". Version is 1.5.
admin
2,930 Posts
Quote from admin on July 11, 2018, 8:52 pm
- Did this never work or did it stop working after adding new nodes ?
- Do you have any nodes assigned as OSD storage but do not have any storage disks yet ?
- From the ui, if you select some nodes to act as stress clients, they will not be included in the benchmark report, only non client nodes will. By using this way it is possible to select all nodes with the exception of one to act as clients, this will allow getting the benchmark report on 1 specific node. Does the benchmark report run on some specific nodes or does it fail on all ?
- On a node that is known to fail the benchmark report, what is the output of:
sar 3 1 -u -P ALL -r -n DEV -d -p
ceph-disk list --cluster xx
ceph osd tree --cluster xx
- On a node that is known to fail the benchmark report, can you replace the following 2 files https://drive.google.com/open?id=1sZic2104yL0QM4zrIPqwOwr_qmM3_m6J
and place them in the paths below (make backups of the originals):
/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_osd.py
/usr/lib/python2.7/dist-packages/PetaSAN/core/common/sar.py
They include more logging info to help us identify where the issue is. Then, after running the benchmark, email us the PetaSAN.log file from that node to contact-us @ petasan.org
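A minimal sketch of the backup-and-replace step, assuming the two downloaded files were saved to /root under their original names (hypothetical helper; adjust the source directory to wherever you saved them):

import shutil

targets = [
    "/usr/lib/python2.7/dist-packages/PetaSAN/core/ceph/ceph_osd.py",
    "/usr/lib/python2.7/dist-packages/PetaSAN/core/common/sar.py",
]
for path in targets:
    shutil.copy2(path, path + ".bak")                    # back up the original
    shutil.copy2("/root/" + path.split("/")[-1], path)   # assumes the downloads were saved to /root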
therm
121 Posts
Quote from therm on July 12, 2018, 8:47 am
- Did this never work or did it stop working after adding new nodes ?
- I am unsure about it. I never tried it because I was afraid of producing crashes because of our Bcache config. (BTW, we finally got rid of it!!!!)
- Do you have any nodes assigned as OSD storage but do not have any storage disks yet ?
- No.
- From the ui, if you select some nodes to act as stress clients, they will not be included in the benchmark report, only non client nodes will. By using this way it is possible to select all nodes with the exception of one to act as clients, this will allow getting the benchmark report on 1 specific node. Does the benchmark report run on some specific nodes or does it fail on all ?
- I tested selecting single clients with different nodes and all but one, the problem is the same.
sar output:
root@ceph-node-mro-1:~# sar 3 1 -u -P ALL -r -n DEV -d -p
Linux 4.4.92-09-petasan (ceph-node-mro-1) 07/12/18 _x86_64_ (32 CPU)
10:41:03 CPU %user %nice %system %iowait %steal %idle
10:41:06 all 0.78 0.00 0.56 2.39 0.00 96.28
10:41:06 0 0.00 0.00 0.67 1.00 0.00 98.33
10:41:06 1 0.00 0.00 0.33 1.33 0.00 98.33
10:41:06 2 0.33 0.00 0.33 0.66 0.00 98.67
10:41:06 3 0.00 0.00 0.00 0.00 0.00 100.00
10:41:06 4 0.00 0.00 0.00 0.00 0.00 100.00
10:41:06 5 0.33 0.00 0.00 0.00 0.00 99.67
10:41:06 6 0.00 0.00 0.00 0.00 0.00 100.00
10:41:06 7 0.00 0.00 0.00 0.00 0.00 100.00
10:41:06 8 2.03 0.00 3.73 1.69 0.00 92.54
10:41:06 9 1.36 0.00 0.68 4.75 0.00 93.22
10:41:06 10 1.68 0.00 0.34 3.03 0.00 94.95
10:41:06 11 1.34 0.00 1.01 2.01 0.00 95.64
10:41:06 12 1.01 0.00 1.34 3.02 0.00 94.63
10:41:06 13 1.68 0.00 0.34 5.05 0.00 92.93
10:41:06 14 2.02 0.00 0.67 5.72 0.00 91.58
10:41:06 15 1.35 0.00 0.68 8.78 0.00 89.19
10:41:06 16 0.33 0.00 0.00 0.00 0.00 99.67
10:41:06 17 0.00 0.00 0.33 1.34 0.00 98.33
10:41:06 18 0.00 0.00 0.00 0.67 0.00 99.33
10:41:06 19 0.00 0.00 0.33 1.34 0.00 98.33
10:41:06 20 0.00 0.00 0.33 1.00 0.00 98.66
10:41:06 21 0.00 0.00 0.00 0.00 0.00 100.00
10:41:06 22 0.00 0.00 0.33 0.67 0.00 99.00
10:41:06 23 0.33 0.00 0.00 0.00 0.00 99.67
10:41:06 24 1.35 0.00 0.67 11.11 0.00 86.87
10:41:06 25 1.35 0.00 0.67 4.04 0.00 93.94
10:41:06 26 1.68 0.00 0.67 2.35 0.00 95.30
10:41:06 27 1.34 0.00 1.01 5.03 0.00 92.62
10:41:06 28 2.00 0.00 1.00 5.67 0.00 91.33
10:41:06 29 0.68 0.00 0.68 2.03 0.00 96.62
10:41:06 30 1.33 0.00 1.00 1.67 0.00 96.00
10:41:06 31 1.99 0.00 1.00 2.66 0.00 94.35
10:41:03 kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
10:41:06 129047456 69007300 34.84 4396 3591764 123268288 62.24 61957096 3173336 27684
10:41:03 DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
10:41:06 nvme1n1 78.67 0.00 2085.33 26.51 0.00 0.00 0.00 0.00
10:41:06 nvme0n1 83.33 0.00 2389.33 28.67 0.00 0.02 0.02 0.13
10:41:06 sdc 12.67 186.67 0.00 14.74 0.11 8.95 8.53 10.80
10:41:06 sda 1.00 0.00 82.67 82.67 0.00 2.67 1.33 0.13
10:41:06 sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 sdd 8.33 88.00 0.00 10.56 0.06 7.04 7.04 5.87
10:41:06 sde 35.33 13760.00 0.00 389.43 0.32 8.94 3.02 10.67
10:41:06 sdg 35.00 13901.33 0.00 397.18 0.40 11.35 3.62 12.67
10:41:06 sdf 30.00 13853.33 0.00 461.78 0.21 7.07 1.78 5.33
10:41:06 sdh 43.00 14162.67 0.00 329.36 0.44 10.33 3.84 16.53
10:41:06 sdi 28.33 13674.67 0.00 482.64 0.25 8.85 1.88 5.33
10:41:06 sdj 35.67 13800.00 0.00 386.92 0.37 10.39 3.10 11.07
10:41:06 sdk 2.00 32.00 0.00 16.00 0.01 6.00 6.00 1.20
10:41:06 sdl 66.33 14077.33 1346.67 232.52 0.44 6.59 1.11 7.33
10:41:06 sdm 32.33 13712.00 0.00 424.08 0.42 12.91 3.09 10.00
10:41:06 sdn 31.33 13917.33 0.00 444.17 0.30 9.57 2.43 7.60
10:41:06 sdo 4.67 56.00 0.00 12.00 0.04 8.29 8.29 3.87
10:41:06 sdp 27.33 13680.00 0.00 500.49 0.20 7.46 1.17 3.20
10:41:06 sdq 56.00 194.67 1005.33 21.43 0.58 10.29 1.17 6.53
10:41:06 sdr 30.00 13738.67 0.00 457.96 0.25 8.44 1.87 5.60
10:41:06 sds 31.00 13912.00 0.00 448.77 0.30 9.59 2.24 6.93
10:41:06 sdt 3.33 114.67 0.00 34.40 0.02 6.40 6.40 2.13
10:41:06 sdu 1.00 8.00 0.00 8.00 0.01 5.33 5.33 0.53
10:41:06 sdv 35.67 13698.67 173.33 388.93 0.40 11.10 1.87 6.67
10:41:06 sdw 2.00 21.33 0.00 10.67 0.02 9.33 9.33 1.87
10:41:06 sdx 84.33 9064.00 1824.00 129.11 0.17 1.99 0.47 4.00
10:41:06 sdy 1.00 13.33 0.00 13.33 0.01 12.00 12.00 1.20
10:41:06 sdz 4.67 58.67 0.00 12.57 0.03 7.43 7.43 3.47
10:41:06 rbd3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 rbd4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 rbd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 rbd5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 rbd6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 rbd1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 rbd7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:03 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
10:41:06 eth7 4601.33 4535.33 4585.26 3852.95 0.00 0.00 0.00 0.38
10:41:06 lo 200.67 200.67 143.20 143.20 0.00 0.00 0.00 0.00
10:41:06 eth3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 eth6 3997.00 4029.00 3638.23 3578.81 0.00 0.00 0.00 0.30
10:41:06 eth2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 eth5 219.00 498.00 104.82 647.57 0.00 0.00 0.00 0.05
10:41:06 eth1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:41:06 eth4 48.33 42.67 138.32 42.27 0.00 0.00 0.00 0.01
10:41:06 eth0 5.67 1.67 0.39 0.18 0.00 0.00 0.33 0.00
Average: CPU %user %nice %system %iowait %steal %idle
Average: all 0.78 0.00 0.56 2.39 0.00 96.28
Average: 0 0.00 0.00 0.67 1.00 0.00 98.33
Average: 1 0.00 0.00 0.33 1.33 0.00 98.33
Average: 2 0.33 0.00 0.33 0.66 0.00 98.67
Average: 3 0.00 0.00 0.00 0.00 0.00 100.00
Average: 4 0.00 0.00 0.00 0.00 0.00 100.00
Average: 5 0.33 0.00 0.00 0.00 0.00 99.67
Average: 6 0.00 0.00 0.00 0.00 0.00 100.00
Average: 7 0.00 0.00 0.00 0.00 0.00 100.00
Average: 8 2.03 0.00 3.73 1.69 0.00 92.54
Average: 9 1.36 0.00 0.68 4.75 0.00 93.22
Average: 10 1.68 0.00 0.34 3.03 0.00 94.95
Average: 11 1.34 0.00 1.01 2.01 0.00 95.64
Average: 12 1.01 0.00 1.34 3.02 0.00 94.63
Average: 13 1.68 0.00 0.34 5.05 0.00 92.93
Average: 14 2.02 0.00 0.67 5.72 0.00 91.58
Average: 15 1.35 0.00 0.68 8.78 0.00 89.19
Average: 16 0.33 0.00 0.00 0.00 0.00 99.67
Average: 17 0.00 0.00 0.33 1.34 0.00 98.33
Average: 18 0.00 0.00 0.00 0.67 0.00 99.33
Average: 19 0.00 0.00 0.33 1.34 0.00 98.33
Average: 20 0.00 0.00 0.33 1.00 0.00 98.66
Average: 21 0.00 0.00 0.00 0.00 0.00 100.00
Average: 22 0.00 0.00 0.33 0.67 0.00 99.00
Average: 23 0.33 0.00 0.00 0.00 0.00 99.67
Average: 24 1.35 0.00 0.67 11.11 0.00 86.87
Average: 25 1.35 0.00 0.67 4.04 0.00 93.94
Average: 26 1.68 0.00 0.67 2.35 0.00 95.30
Average: 27 1.34 0.00 1.01 5.03 0.00 92.62
Average: 28 2.00 0.00 1.00 5.67 0.00 91.33
Average: 29 0.68 0.00 0.68 2.03 0.00 96.62
Average: 30 1.33 0.00 1.00 1.67 0.00 96.00
Average: 31 1.99 0.00 1.00 2.66 0.00 94.35
Average: kbmemfree kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
Average: 129047456 69007300 34.84 4396 3591764 123268288 62.24 61957096 3173336 27684
Average: DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util
Average: nvme1n1 78.67 0.00 2085.33 26.51 0.00 0.00 0.00 0.00
Average: nvme0n1 83.33 0.00 2389.33 28.67 0.00 0.02 0.02 0.13
Average: sdc 12.67 186.67 0.00 14.74 0.11 8.95 8.53 10.80
Average: sda 1.00 0.00 82.67 82.67 0.00 2.67 1.33 0.13
Average: sdb 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: sdd 8.33 88.00 0.00 10.56 0.06 7.04 7.04 5.87
Average: sde 35.33 13760.00 0.00 389.43 0.32 8.94 3.02 10.67
Average: sdg 35.00 13901.33 0.00 397.18 0.40 11.35 3.62 12.67
Average: sdf 30.00 13853.33 0.00 461.78 0.21 7.07 1.78 5.33
Average: sdh 43.00 14162.67 0.00 329.36 0.44 10.33 3.84 16.53
Average: sdi 28.33 13674.67 0.00 482.64 0.25 8.85 1.88 5.33
Average: sdj 35.67 13800.00 0.00 386.92 0.37 10.39 3.10 11.07
Average: sdk 2.00 32.00 0.00 16.00 0.01 6.00 6.00 1.20
Average: sdl 66.33 14077.33 1346.67 232.52 0.44 6.59 1.11 7.33
Average: sdm 32.33 13712.00 0.00 424.08 0.42 12.91 3.09 10.00
Average: sdn 31.33 13917.33 0.00 444.17 0.30 9.57 2.43 7.60
Average: sdo 4.67 56.00 0.00 12.00 0.04 8.29 8.29 3.87
Average: sdp 27.33 13680.00 0.00 500.49 0.20 7.46 1.17 3.20
Average: sdq 56.00 194.67 1005.33 21.43 0.58 10.29 1.17 6.53
Average: sdr 30.00 13738.67 0.00 457.96 0.25 8.44 1.87 5.60
Average: sds 31.00 13912.00 0.00 448.77 0.30 9.59 2.24 6.93
Average: sdt 3.33 114.67 0.00 34.40 0.02 6.40 6.40 2.13
Average: sdu 1.00 8.00 0.00 8.00 0.01 5.33 5.33 0.53
Average: sdv 35.67 13698.67 173.33 388.93 0.40 11.10 1.87 6.67
Average: sdw 2.00 21.33 0.00 10.67 0.02 9.33 9.33 1.87
Average: sdx 84.33 9064.00 1824.00 129.11 0.17 1.99 0.47 4.00
Average: sdy 1.00 13.33 0.00 13.33 0.01 12.00 12.00 1.20
Average: sdz 4.67 58.67 0.00 12.57 0.03 7.43 7.43 3.47
Average: rbd3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: rbd4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: rbd0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: rbd5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: rbd6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: rbd1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: rbd7 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
Average: eth7 4601.33 4535.33 4585.26 3852.95 0.00 0.00 0.00 0.38
Average: lo 200.67 200.67 143.20 143.20 0.00 0.00 0.00 0.00
Average: eth3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: eth6 3997.00 4029.00 3638.23 3578.81 0.00 0.00 0.00 0.30
Average: eth2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: eth5 219.00 498.00 104.82 647.57 0.00 0.00 0.00 0.05
Average: eth1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
Average: eth4 48.33 42.67 138.32 42.27 0.00 0.00 0.00 0.01
Average: eth0 5.67 1.67 0.39 0.18 0.00 0.00 0.33 0.00
root@ceph-node-mro-1:~# ceph-disk list
/dev/nvme0n1 :
/dev/nvme0n1p1 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p10 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p11 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p12 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p2 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p3 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p4 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p5 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p6 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p7 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p8 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme0n1p9 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1 :
/dev/nvme1n1p1 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p10 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p11 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p12 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p2 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p3 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p4 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p5 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p6 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p7 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p8 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/nvme1n1p9 other, 0fc63daf-8483-4772-8e79-3d69d8477de4
/dev/rbd0 :
/dev/rbd0p1 other, VMFS_volume_member
/dev/rbd1 :
/dev/rbd1p1 other, VMFS_volume_member
/dev/rbd3 :
/dev/rbd3p1 other, VMFS_volume_member
/dev/rbd4 :
/dev/rbd4p1 other, VMFS_volume_member
/dev/rbd5 :
/dev/rbd5p1 other, VMFS_volume_member
/dev/rbd6 :
/dev/rbd6p1 other, VMFS_volume_member
/dev/rbd7 :
/dev/rbd7p1 other, VMFS_volume_member
/dev/sda :
/dev/sda2 other, ext4, mounted on /
/dev/sda1 other, ext4, mounted on /boot
/dev/sda4 other, ext4, mounted on /opt/petasan/config
/dev/sda3 other, ext4, mounted on /var/lib/ceph
/dev/sdb other, unknown
/dev/sdc other, xfs, mounted on /var/lib/ceph/osd/ceph-72
/dev/sdd other, xfs, mounted on /var/lib/ceph/osd/ceph-73
/dev/sde other, xfs, mounted on /var/lib/ceph/osd/ceph-74
/dev/sdf other, xfs, mounted on /var/lib/ceph/osd/ceph-75
/dev/sdg other, xfs, mounted on /var/lib/ceph/osd/ceph-76
/dev/sdh other, xfs, mounted on /var/lib/ceph/osd/ceph-77
/dev/sdi other, xfs, mounted on /var/lib/ceph/osd/ceph-78
/dev/sdj other, xfs, mounted on /var/lib/ceph/osd/ceph-79
/dev/sdk other, xfs, mounted on /var/lib/ceph/osd/ceph-80
/dev/sdl other, xfs, mounted on /var/lib/ceph/osd/ceph-81
/dev/sdm other, xfs, mounted on /var/lib/ceph/osd/ceph-82
/dev/sdn other, xfs, mounted on /var/lib/ceph/osd/ceph-83
/dev/sdo other, xfs, mounted on /var/lib/ceph/osd/ceph-84
/dev/sdp other, xfs, mounted on /var/lib/ceph/osd/ceph-85
/dev/sdq other, xfs, mounted on /var/lib/ceph/osd/ceph-86
/dev/sdr other, xfs, mounted on /var/lib/ceph/osd/ceph-87
/dev/sds other, xfs, mounted on /var/lib/ceph/osd/ceph-88
/dev/sdt other, xfs, mounted on /var/lib/ceph/osd/ceph-89
/dev/sdu other, xfs, mounted on /var/lib/ceph/osd/ceph-90
/dev/sdv other, xfs, mounted on /var/lib/ceph/osd/ceph-91
/dev/sdw other, xfs, mounted on /var/lib/ceph/osd/ceph-92
/dev/sdx other, xfs, mounted on /var/lib/ceph/osd/ceph-93
/dev/sdy other, xfs, mounted on /var/lib/ceph/osd/ceph-94
/dev/sdz other, xfs, mounted on /var/lib/ceph/osd/ceph-95
/dev/sr0 other, unknown
root@ceph-node-mru-1:~# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 523.72485 root default
-9 261.86243 room room-mru
-3 87.28748 host ceph-node-mru-1
0 3.63698 osd.0 up 1.00000 1.00000
1 3.63698 osd.1 up 1.00000 1.00000
2 3.63698 osd.2 up 1.00000 1.00000
3 3.63698 osd.3 up 1.00000 1.00000
4 3.63698 osd.4 up 1.00000 1.00000
5 3.63698 osd.5 up 1.00000 1.00000
6 3.63698 osd.6 up 1.00000 1.00000
7 3.63698 osd.7 up 1.00000 1.00000
8 3.63698 osd.8 up 1.00000 1.00000
9 3.63698 osd.9 up 1.00000 1.00000
10 3.63698 osd.10 up 1.00000 1.00000
11 3.63698 osd.11 up 1.00000 1.00000
12 3.63698 osd.12 up 1.00000 1.00000
13 3.63698 osd.13 up 1.00000 1.00000
14 3.63698 osd.14 up 1.00000 1.00000
15 3.63698 osd.15 up 1.00000 1.00000
16 3.63698 osd.16 up 1.00000 1.00000
17 3.63698 osd.17 up 1.00000 1.00000
18 3.63698 osd.18 up 1.00000 1.00000
19 3.63698 osd.19 up 1.00000 1.00000
20 3.63698 osd.20 up 1.00000 1.00000
21 3.63698 osd.21 up 1.00000 1.00000
22 3.63698 osd.22 up 1.00000 1.00000
23 3.63698 osd.23 up 1.00000 1.00000
-4 87.28748 host ceph-node-mru-2
24 3.63698 osd.24 up 1.00000 1.00000
25 3.63698 osd.25 up 1.00000 1.00000
26 3.63698 osd.26 up 1.00000 1.00000
27 3.63698 osd.27 up 1.00000 1.00000
28 3.63698 osd.28 up 1.00000 1.00000
29 3.63698 osd.29 up 1.00000 1.00000
30 3.63698 osd.30 up 1.00000 1.00000
31 3.63698 osd.31 up 1.00000 1.00000
32 3.63698 osd.32 up 1.00000 1.00000
33 3.63698 osd.33 up 1.00000 1.00000
34 3.63698 osd.34 up 1.00000 1.00000
35 3.63698 osd.35 up 1.00000 1.00000
36 3.63698 osd.36 up 1.00000 1.00000
37 3.63698 osd.37 up 1.00000 1.00000
38 3.63698 osd.38 up 1.00000 1.00000
39 3.63698 osd.39 up 1.00000 1.00000
40 3.63698 osd.40 up 1.00000 1.00000
41 3.63698 osd.41 up 1.00000 1.00000
42 3.63698 osd.42 up 1.00000 1.00000
43 3.63698 osd.43 up 1.00000 1.00000
44 3.63698 osd.44 up 1.00000 1.00000
45 3.63698 osd.45 up 1.00000 1.00000
46 3.63698 osd.46 up 1.00000 1.00000
47 3.63698 osd.47 up 1.00000 1.00000
-2 87.28748 host ceph-node-mru-3
48 3.63698 osd.48 up 1.00000 1.00000
49 3.63698 osd.49 up 1.00000 1.00000
50 3.63698 osd.50 up 1.00000 1.00000
51 3.63698 osd.51 up 1.00000 1.00000
52 3.63698 osd.52 up 1.00000 1.00000
53 3.63698 osd.53 up 1.00000 1.00000
54 3.63698 osd.54 up 1.00000 1.00000
55 3.63698 osd.55 up 1.00000 1.00000
56 3.63698 osd.56 up 1.00000 1.00000
57 3.63698 osd.57 up 1.00000 1.00000
58 3.63698 osd.58 up 1.00000 1.00000
59 3.63698 osd.59 up 1.00000 1.00000
60 3.63698 osd.60 up 1.00000 1.00000
61 3.63698 osd.61 up 1.00000 1.00000
62 3.63698 osd.62 up 1.00000 1.00000
63 3.63698 osd.63 up 1.00000 1.00000
64 3.63698 osd.64 up 1.00000 1.00000
65 3.63698 osd.65 up 1.00000 1.00000
66 3.63698 osd.66 up 1.00000 1.00000
67 3.63698 osd.67 up 1.00000 1.00000
68 3.63698 osd.68 up 1.00000 1.00000
69 3.63698 osd.69 up 1.00000 1.00000
70 3.63698 osd.70 up 1.00000 1.00000
71 3.63698 osd.71 up 1.00000 1.00000
-8 261.86243 room room-mro
-5 87.28748 host ceph-node-mro-1
72 3.63698 osd.72 up 1.00000 1.00000
73 3.63698 osd.73 up 1.00000 1.00000
74 3.63698 osd.74 up 1.00000 1.00000
75 3.63698 osd.75 up 1.00000 1.00000
76 3.63698 osd.76 up 1.00000 1.00000
77 3.63698 osd.77 up 1.00000 1.00000
78 3.63698 osd.78 up 1.00000 1.00000
79 3.63698 osd.79 up 1.00000 1.00000
80 3.63698 osd.80 up 1.00000 1.00000
81 3.63698 osd.81 up 1.00000 1.00000
82 3.63698 osd.82 up 1.00000 1.00000
83 3.63698 osd.83 up 1.00000 1.00000
84 3.63698 osd.84 up 1.00000 1.00000
85 3.63698 osd.85 up 1.00000 1.00000
86 3.63698 osd.86 up 1.00000 1.00000
87 3.63698 osd.87 up 1.00000 1.00000
88 3.63698 osd.88 up 1.00000 1.00000
89 3.63698 osd.89 up 1.00000 1.00000
90 3.63698 osd.90 up 1.00000 1.00000
91 3.63698 osd.91 up 1.00000 1.00000
92 3.63698 osd.92 up 1.00000 1.00000
93 3.63698 osd.93 up 1.00000 1.00000
94 3.63698 osd.94 up 1.00000 1.00000
95 3.63698 osd.95 up 1.00000 1.00000
-6 87.28748 host ceph-node-mro-2
96 3.63698 osd.96 up 1.00000 1.00000
97 3.63698 osd.97 up 1.00000 1.00000
98 3.63698 osd.98 up 1.00000 1.00000
99 3.63698 osd.99 up 1.00000 1.00000
100 3.63698 osd.100 up 1.00000 1.00000
101 3.63698 osd.101 up 1.00000 1.00000
102 3.63698 osd.102 up 1.00000 1.00000
103 3.63698 osd.103 up 1.00000 1.00000
104 3.63698 osd.104 up 1.00000 1.00000
105 3.63698 osd.105 up 1.00000 1.00000
106 3.63698 osd.106 up 1.00000 1.00000
107 3.63698 osd.107 up 1.00000 1.00000
108 3.63698 osd.108 up 1.00000 1.00000
109 3.63698 osd.109 up 1.00000 1.00000
110 3.63698 osd.110 up 1.00000 1.00000
111 3.63698 osd.111 up 1.00000 1.00000
112 3.63698 osd.112 up 1.00000 1.00000
113 3.63698 osd.113 up 1.00000 1.00000
114 3.63698 osd.114 up 1.00000 1.00000
115 3.63698 osd.115 up 1.00000 1.00000
116 3.63698 osd.116 up 1.00000 1.00000
117 3.63698 osd.117 up 1.00000 1.00000
118 3.63698 osd.118 up 1.00000 1.00000
119 3.63698 osd.119 up 1.00000 1.00000
-7 87.28748 host ceph-node-mro-3
120 3.63698 osd.120 up 1.00000 1.00000
121 3.63698 osd.121 up 1.00000 1.00000
122 3.63698 osd.122 up 1.00000 1.00000
123 3.63698 osd.123 up 1.00000 1.00000
124 3.63698 osd.124 up 1.00000 1.00000
125 3.63698 osd.125 up 1.00000 1.00000
126 3.63698 osd.126 up 1.00000 1.00000
127 3.63698 osd.127 up 1.00000 1.00000
128 3.63698 osd.128 up 1.00000 1.00000
129 3.63698 osd.129 up 1.00000 1.00000
130 3.63698 osd.130 up 1.00000 1.00000
131 3.63698 osd.131 up 1.00000 1.00000
132 3.63698 osd.132 up 1.00000 1.00000
133 3.63698 osd.133 up 1.00000 1.00000
134 3.63698 osd.134 up 1.00000 1.00000
135 3.63698 osd.135 up 1.00000 1.00000
136 3.63698 osd.136 up 1.00000 1.00000
137 3.63698 osd.137 up 1.00000 1.00000
138 3.63698 osd.138 up 1.00000 1.00000
139 3.63698 osd.139 up 1.00000 1.00000
140 3.63698 osd.140 up 1.00000 1.00000
141 3.63698 osd.141 up 1.00000 1.00000
142 3.63698 osd.142 up 1.00000 1.00000
143 3.63698 osd.143 up 1.00000 1.00000
Logs will come in half an hour.
Regards,
Dennis
admin
2,930 Posts
Quote from admin on July 12, 2018, 12:25 pm
Hi Dennis,
I looked at the logs; they do not show any of the extra traces we added. Can you please double-check that both files were placed in the correct paths, and send me the logs from that node? This node needs to be un-selected from the client list in the UI so that it is included in the benchmark report.
Hatem
therm
121 Posts
Quote from therm on July 13, 2018, 1:55 pm
Hi Hatem,
I sent you another log. Did you get it?
Dennis
admin
2,930 Posts
Quote from admin on July 13, 2018, 6:31 pm
I only got one, which did not have the extra log traces. If you have another log, please resend it.
admin
2,930 Posts
Quote from admin on July 14, 2018, 2:12 pm
Got the new logs.
admin
2,930 Posts
Quote from admin on July 14, 2018, 3:26 pm
The output of:
ceph-disk list
you posted shows drives as:
/dev/sdc other, xfs, mounted on /var/lib/ceph/osd/ceph-72
should have shown something more like this instead:
/dev/sdc1 ceph data, active, cluster xx, osd.72, journal /dev/nvme0n1p3
It does not appear your OSDs have the correct GPT partition type GUID for an OSD. If you created them manually, did you use the ceph-disk tool?
The benchmark is failing because, for disk data, it only reports OSD disks, which it identifies using that GUID. Do you have other issues in the Node List -> Physical Disk List page, or do you not use this page?
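For reference, ceph-disk keys off the GPT partition type GUID: OSD data partitions prepared by ceph-disk normally carry the "ceph data" type GUID 4fbd7e29-9d25-41b8-afd0-062c0ceff05d, while 0fc63daf-8483-4772-8e79-3d69d8477de4 (seen on the NVMe partitions above) is the generic "Linux filesystem" type, and a filesystem written to the whole disk has no partition type at all. A quick way to see what each device carries (a hypothetical wrapper around lsblk, nothing PetaSAN-specific):

# List every block device with its GPT partition type GUID (if any),
# filesystem and mountpoint; an empty PARTTYPE means no GPT partition.
import subprocess

out = subprocess.check_output(["lsblk", "-o", "NAME,PARTTYPE,FSTYPE,MOUNTPOINT"])
print(out.decode() if isinstance(out, bytes) else out)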