
No Datapoints!

Hi,

I see no statistics on the dashboard. I only get the message "no datapoints".

Regards

Raf Schandevyl

Do you have your time zones set correctly for the servers as well as the client browser?

date -u returns:

Thu Apr 26 05:20:08 UTC 2018.

During installation I set all nodes to the Europe/Brussels timezone.

The cluster had already been running for more than a month without problems. The problem started when one of the switches that the backend network is connected to powered off.

 

Regards

Raf

Is the browser also set correctly for time / time zone?
If you set the chart time range to last month or last year, do you still get no data?
Do you see no datapoints for both cluster stats (throughput/IOPS) and node stats (CPU/RAM)?
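To rule out a clock or zone mismatch quickly, a check like this on each node may help (just a sketch; the hostnames are placeholders, adjust them to your own management nodes):

# Sketch: compare time zone and UTC clock on each node (hostnames are placeholders).
for host in ps-node-01 ps-node-02 ps-node-03; do
  echo "== $host =="
  ssh root@$host 'timedatectl | grep "Time zone"; date -u'
done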

To identify the current stats server, run the following on all management nodes:

systemctl status carbon-cache

The node showing this service as active (running) is the current stats server. On this server, run:

ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
Do you see the following folders?
ClusterStats NodeStats
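If it is easier, a loop along these lines can check all three management nodes in one pass (only a sketch; it assumes root ssh access between nodes, and the hostnames are placeholders):

# Sketch: show carbon-cache state and the whisper folder on each management node.
# Replace the hostnames with your own management node names or IPs.
for host in ps-node-01 ps-node-02 ps-node-03; do
  echo "== $host =="
  ssh root@$host 'systemctl is-active carbon-cache; ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/'
done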

For cluster stats, running these from the stats server may help:
/opt/petasan/scripts/stats-stop.sh
/opt/petasan/scripts/stats-start.sh
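After that, a quick way to confirm datapoints are being written again is to look for recently modified whisper files (just a sketch, using the default path above):

# Sketch: list cluster-stats whisper files touched in the last 5 minutes.
# No output here means carbon-cache is still not receiving data.
find /opt/petasan/config/shared/graphite/whisper/PetaSAN/ClusterStats -name '*.wsp' -mmin -5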

If you do not see node stats for specific nodes, run these on the nodes themselves:

systemctl stop petasan-node-stats
systemctl start petasan-node-stats
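If several nodes are affected, the same restart can be looped (sketch only, the hostnames are placeholders):

# Sketch: restart the node stats collector on each node and report its state.
for host in ps-node-01 ps-node-02 ps-node-03; do
  ssh root@$host 'systemctl restart petasan-node-stats && systemctl is-active petasan-node-stats'
done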

The browser is set to the same timezone.

I don't have any statistics, neither for the cluster nor for the nodes (CPU/RAM).

I can't change the date range.

Output of:

ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/

root@ps-node-02:~# ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
ls: cannot access '/opt/petasan/config/shared/graphite/whisper/PetaSAN/': Transport endpoint is not connected

Regards

Raf

 

 

On all 3 management nodes, what is the output of:

systemctl status glusterfs-server
gluster peer status
gluster vol status gfs-vol
ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
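To make the output easier to post, the four checks can be captured into one file per management node, something like this (just a sketch):

# Sketch: collect the gluster/sharedfs diagnostics into a single file on this node.
{
  systemctl status glusterfs-server
  gluster peer status
  gluster vol status gfs-vol
  ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
} > /tmp/petasan-gluster-diag.txt 2>&1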

On the stats server, run the following:

systemctl stop petasan-mount-sharedfs
systemctl stop glusterfs-server
killall glusterfsd
killall glusterfs
umount /opt/petasan/config/shared
systemctl start glusterfs-server
systemctl start petasan-mount-sharedfs
/opt/petasan/scripts/stats-stop.sh
/opt/petasan/scripts/stats-start.sh

What is the output of:
ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
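For background: the "Transport endpoint is not connected" message on ps-node-02 usually means the GlusterFS FUSE mount of the shared config directory went stale, which would fit the backend switch powering off. A quick test like this (just a sketch) shows whether a node still needs the remount sequence above:

# Sketch: a stale FUSE mount fails this stat with "Transport endpoint is not connected".
if stat /opt/petasan/config/shared >/dev/null 2>&1; then
  echo "shared mount looks healthy"
else
  echo "shared mount is stale - run the remount steps above"
fi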

Output of the commands:

root@ps-node-01:~# systemctl status glusterfs-server
● glusterfs-server.service - LSB: GlusterFS server
Loaded: loaded (/etc/init.d/glusterfs-server; bad; vendor preset: enabled)
Active: active (running) since Thu 2018-03-08 13:40:10 CET; 1 months 18 days ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/glusterfs-server.service
├─2490 /usr/sbin/glusterd -p /var/run/glusterd.pid
├─2595 /usr/sbin/glusterfsd -s 192.168.110.150 --volfile-id gfs-vol.192.168.110.150.opt-petasan-config-gfs-brick -p /var/lib/glusterd/vols/gfs-vol/run/192.168.110.150-opt-petasan-config-gfs-brick.pid -S /var/run/gluster/6ba0c21ecdcd3685a6c85158c82fa84d.socket --brick-name /opt/petasan/config/gfs-brick -l /var/log/glusterfs/bricks/opt-petasan-config-gfs-brick.log --xlator-option *-posix.glusterd-uuid=2008cfa4-dbd6-4bb9-89f6-2f68e55b5fcd --brick-port 49152 --xlator-option gfs-vol-server.listen-port=49152
└─2621 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/1cbdcf5a16c5947952f4a0cddc676624.socket --xlator-option *replicate*.node-uuid=2008cfa4-dbd6-4bb9-89f6-2f68e55b5fcd

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

root@ps-node-01:~# gluster peer status
Number of Peers: 2

Hostname: 192.168.110.151
Uuid: 839feb11-7b62-44b8-964c-d45b69e992d3
State: Peer in Cluster (Connected)

Hostname: 192.168.110.152
Uuid: 389f8ed8-d679-4f07-aff7-103a5fa5d238
State: Peer in Cluster (Connected)
root@ps-node-01:~# gluster vol status gfs-vol
Status of volume: gfs-vol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.110.150:/opt/petasan/config/gfs-brick    49152    0    Y    2595
Brick 192.168.110.152:/opt/petasan/config/gfs-brick    49153    0    Y    1870
Brick 192.168.110.151:/opt/petasan/config/gfs-brick    49152    0    Y    2447
Self-heal Daemon on localhost N/A N/A Y 2621
Self-heal Daemon on 192.168.110.151 N/A N/A Y 2474
Self-heal Daemon on 192.168.110.152 N/A N/A Y 2445

Task Status of Volume gfs-vol
------------------------------------------------------------------------------
There are no active volume tasks

root@ps-node-01:~# ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
ClusterStats NodeStats
-------------------------------------------------------------------------------------------------------------------------------

root@ps-node-02:/opt/petasan/config# systemctl status glusterfs-server
● glusterfs-server.service - LSB: GlusterFS server
Loaded: loaded (/etc/init.d/glusterfs-server; bad; vendor preset: enabled)
Active: active (running) since Thu 2018-03-08 13:40:08 CET; 1 months 18 days ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/glusterfs-server.service
├─2300 /usr/sbin/glusterd -p /var/run/glusterd.pid
├─2447 /usr/sbin/glusterfsd -s 192.168.110.151 --volfile-id gfs-vol.192.168.110.151.opt-petasan-config-gfs-brick -p /var/lib/glusterd/vols/gfs-vol/run/192.168.110.151-opt-petasan-config-gfs-brick.pid -S /var/run/gluster/800c9334e9c13648d817983821dcad49.socket --brick-name /opt/petasan/config/gfs-brick -l /var/log/glusterfs/bricks/opt-petasan-config-gfs-brick.log --xlator-option *-posix.glusterd-uuid=839feb11-7b62-44b8-964c-d45b69e992d3 --brick-port 49152 --xlator-option gfs-vol-server.listen-port=49152
└─2474 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/aff68228039ab0c8d10feaadbe0e05a4.socket --xlator-option *replicate*.node-uuid=839feb11-7b62-44b8-964c-d45b69e992d3

Mar 08 13:40:06 ps-node-02 systemd[1]: Starting LSB: GlusterFS server...
Mar 08 13:40:06 ps-node-02 glusterfs-server[2293]: * Starting glusterd service glusterd
Mar 08 13:40:08 ps-node-02 glusterfs-server[2293]: ...done.
Mar 08 13:40:08 ps-node-02 systemd[1]: Started LSB: GlusterFS server.

root@ps-node-02:/opt/petasan/config# gluster peer status
Number of Peers: 2

Hostname: 192.168.110.150
Uuid: 2008cfa4-dbd6-4bb9-89f6-2f68e55b5fcd
State: Peer in Cluster (Connected)

Hostname: 192.168.110.152
Uuid: 389f8ed8-d679-4f07-aff7-103a5fa5d238
State: Peer in Cluster (Connected)
root@ps-node-02:/opt/petasan/config#

root@ps-node-02:/opt/petasan/config# gluster vol status gfs-vol
Status of volume: gfs-vol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.110.150:/opt/petasan/config/gfs-brick    49152    0    Y    2595
Brick 192.168.110.152:/opt/petasan/config/gfs-brick    49153    0    Y    1870
Brick 192.168.110.151:/opt/petasan/config/gfs-brick    49152    0    Y    2447
Self-heal Daemon on localhost N/A N/A Y 2474
Self-heal Daemon on 192.168.110.150 N/A N/A Y 2621
Self-heal Daemon on 192.168.110.152 N/A N/A Y 2445

Task Status of Volume gfs-vol
------------------------------------------------------------------------------
There are no active volume tasks

root@ps-node-02:/opt/petasan/config#
root@ps-node-02:/opt/petasan/config# ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
ls: cannot access '/opt/petasan/config/shared/graphite/whisper/PetaSAN/': Transport endpoint is not connected
root@ps-node-02:/opt/petasan/config#
----------------------------------------------------------------
root@ps-node-03:~# systemctl status glusterfs-server
● glusterfs-server.service - LSB: GlusterFS server
Loaded: loaded (/etc/init.d/glusterfs-server; bad; vendor preset: enabled)
Active: active (running) since Mon 2018-04-09 10:34:41 CEST; 2 weeks 3 days ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/glusterfs-server.service
├─1733 /usr/sbin/glusterd -p /var/run/glusterd.pid
├─1870 /usr/sbin/glusterfsd -s 192.168.110.152 --volfile-id gfs-vol.192.168.110.152.opt-petasan-config-gfs-brick -p /var/lib/glusterd/vols/gfs-vol/run/192.168.110.152-opt-petasan-config-gfs-brick.pid -S /var/run/gluster/6e73c97e76c3a37e66d096978bc948be.socket --brick-name /opt/petasan/config/gfs-brick -l /var/log/glusterfs/bricks/opt-petasan-config-gfs-brick.log --xlator-option *-posix.glusterd-uuid=389f8ed8-d679-4f07-aff7-103a5fa5d238 --brick-port 49153 --xlator-option gfs-vol-server.listen-port=49153
└─2445 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/4c92e7b3dcba2ef2388c5c7a05de0a57.socket --xlator-option *replicate*.node-uuid=389f8ed8-d679-4f07-aff7-103a5fa5d238

Apr 09 10:34:39 ps-node-03 systemd[1]: Starting LSB: GlusterFS server...
Apr 09 10:34:39 ps-node-03 glusterfs-server[1719]: * Starting glusterd service glusterd
Apr 09 10:34:41 ps-node-03 glusterfs-server[1719]: ...done.
Apr 09 10:34:41 ps-node-03 systemd[1]: Started LSB: GlusterFS server.

root@ps-node-03:~# gluster peer status
Number of Peers: 2

Hostname: 192.168.110.150
Uuid: 2008cfa4-dbd6-4bb9-89f6-2f68e55b5fcd
State: Peer in Cluster (Connected)

Hostname: 192.168.110.151
Uuid: 839feb11-7b62-44b8-964c-d45b69e992d3
State: Peer in Cluster (Connected)
root@ps-node-03:~#
root@ps-node-03:~# gluster vol status gfs-vol
Status of volume: gfs-vol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.110.150:/opt/petasan/config/gfs-brick    49152    0    Y    2595
Brick 192.168.110.152:/opt/petasan/config/gfs-brick    49153    0    Y    1870
Brick 192.168.110.151:/opt/petasan/config/gfs-brick    49152    0    Y    2447
Self-heal Daemon on localhost N/A N/A Y 2445
Self-heal Daemon on 192.168.110.151 N/A N/A Y 2474
Self-heal Daemon on 192.168.110.150 N/A N/A Y 2621

Task Status of Volume gfs-vol
------------------------------------------------------------------------------
There are no active volume tasks

root@ps-node-03:~#
root@ps-node-03:~# ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
ClusterStats NodeStats

 

After running the other commands, it seems to be running again.

Thanks

Raf

Good it worked 🙂