"No datapoints" on graphs
Ste
125 Posts
June 22, 2018, 8:54 am
Hi,
I've been experiencing a curious issue over the last 3-4 days: all the stat graphs in the web interface (cluster storage, PG, status, etc.) no longer display any information; only a "no datapoints" label appears. This happens when connecting to the web dashboard of any of the three management nodes. What could the issue be, and where should I look to fix it?
Thanks and bye, S.
admin
2,930 Posts
June 22, 2018, 8:07 pm
First, check that your client browser time and time zone are set correctly.
On all 3 management nodes, what is the output of:
systemctl status glusterfs-server
gluster peer status
gluster vol status gfs-vol
ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
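If you want to gather those checks in one pass, a quick loop like this will do (assuming SSH access between the nodes; node1 node2 node3 are placeholders for your actual management node hostnames):
for host in node1 node2 node3; do
    echo "=== $host ==="
    ssh root@"$host" '
        systemctl status glusterfs-server --no-pager
        gluster peer status
        gluster vol status gfs-vol
        ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
    '
done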
Then, to identify the current stats server, run the following on all management nodes:
systemctl status carbon-cache
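A terser check, if you just want a one-word answer per node, is:
systemctl is-active carbon-cache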
The node showing this service as active (running) is the current stats server. On this server, run:
systemctl stop petasan-mount-sharedfs
systemctl stop glusterfs-server
killall glusterfsd
killall glusterfs
umount /opt/petasan/config/shared
systemctl start glusterfs-server
systemctl start petasan-mount-sharedfs
/opt/petasan/scripts/stats-stop.sh
/opt/petasan/scripts/stats-start.sh
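If you prefer, the same sequence can be wrapped in a single script (these are exactly the commands above; the || true just ignores a "no process found" or "not mounted" result so the script keeps going):
#!/bin/bash
# Restart the shared filesystem and the stats services on the current stats server.
set -e
systemctl stop petasan-mount-sharedfs
systemctl stop glusterfs-server
killall glusterfsd || true   # ignore error if no glusterfsd process is running
killall glusterfs || true    # ignore error if no glusterfs process is running
umount /opt/petasan/config/shared || true   # may already be unmounted
systemctl start glusterfs-server
systemctl start petasan-mount-sharedfs
/opt/petasan/scripts/stats-stop.sh
/opt/petasan/scripts/stats-start.sh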
What is the output of:
ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
Last edited on June 22, 2018, 8:18 pm by admin · #2
Ste
125 Posts
June 25, 2018, 9:04 am
Problem fixed! Thank you 😉
The output of the first set of commands all looked good; my only doubt is whether it is normal that the gfs-vol brick on host #3 is offline:
root@petatest03:~# gluster vol status gfs-vol
Status of volume: gfs-vol
Gluster process                                     TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 192.168.111.1:/opt/petasan/config/gfs-brick   49152     0          Y       1504
Brick 192.168.111.2:/opt/petasan/config/gfs-brick   49154     0          Y       1388
Brick 192.168.111.3:/opt/petasan/config/gfs-brick   N/A       N/A        N       N/A
Self-heal Daemon on localhost                       N/A       N/A        Y       3519
Self-heal Daemon on 192.168.111.2                   N/A       N/A        Y       1403
Self-heal Daemon on 192.168.111.1                   N/A       N/A        Y       1511

Task Status of Volume gfs-vol
------------------------------------------------------------------------------
There are no active volume tasks
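If it is not expected, I suppose the brick can be restarted without touching the online ones; as far as I know, the standard GlusterFS way (not something from this thread, so correct me if I'm wrong) is:
gluster volume start gfs-vol force   # restart any brick process that is down
gluster volume heal gfs-vol info     # check for pending self-heal entries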
For the rest: on the current stats server (host #2), the command "ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/" returned nothing (no files found), so after issuing the last group of commands everything is back to normal and the stats are available again:
root@petatest02:~# ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
ClusterStats NodeStats
What could have caused the deletion of those two directories? Bye, S.
Last edited on June 25, 2018, 9:04 am by Ste · #3
"No datapoints" on graphs
Ste
125 Posts
Quote from Ste on June 22, 2018, 8:54 amHi,
I'm experiencing a curious issue in the last 3/4 days: in all the stat graphs in the web interface (cluster storage, pg, status, etc...) no info are displayed anymore, but only a "no datapoints" label appears. This happens connecting to web dashboard of all three management nodes. What could be the issue and where can I have a look to fix it ?
Thanks and bye, S.
Hi,
I'm experiencing a curious issue in the last 3/4 days: in all the stat graphs in the web interface (cluster storage, pg, status, etc...) no info are displayed anymore, but only a "no datapoints" label appears. This happens connecting to web dashboard of all three management nodes. What could be the issue and where can I have a look to fix it ?
Thanks and bye, S.
admin
2,930 Posts
Quote from admin on June 22, 2018, 8:07 pmFirst check your client browser time and time zone are set correctly.
On all 3 management nodes, what is the output of:
systemctl status glusterfs-server
gluster peer status
gluster vol status gfs-vol
ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/identify the current stats server, on all management nodes run:
systemctl status carbon-cache
the node showing this service as active (running) is the current stats server, on this server run:
systemctl stop petasan-mount-sharedfs
systemctl stop glusterfs-server
killall glusterfsd
killall glusterfs
umount /opt/petasan/config/shared
systemctl start glusterfs-server
systemctl start petasan-mount-sharedfs
/opt/petasan/scripts/stats-stop.sh
/opt/petasan/scripts/stats-start.shWhat is the output of:
ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
First check your client browser time and time zone are set correctly.
On all 3 management nodes, what is the output of:
systemctl status glusterfs-server
gluster peer status
gluster vol status gfs-vol
ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
identify the current stats server, on all management nodes run:
systemctl status carbon-cache
the node showing this service as active (running) is the current stats server, on this server run:
systemctl stop petasan-mount-sharedfs
systemctl stop glusterfs-server
killall glusterfsd
killall glusterfs
umount /opt/petasan/config/shared
systemctl start glusterfs-server
systemctl start petasan-mount-sharedfs
/opt/petasan/scripts/stats-stop.sh
/opt/petasan/scripts/stats-start.shWhat is the output of:
ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
Ste
125 Posts
Quote from Ste on June 25, 2018, 9:04 amProblem fixed ! Thank you 😉
The output of all the first set of command was quite good, I only have a doubt on this, if it is normal that gfs-vol on host #3 is offline:
root@petatest03:~# gluster vol status gfs-vol
Status of volume: gfs-vol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.111.1:/opt/petasan/config/gfs
-brick 49152 0 Y 1504
Brick 192.168.111.2:/opt/petasan/config/gfs
-brick 49154 0 Y 1388
Brick 192.168.111.3:/opt/petasan/config/gfs
-brick N/A N/A N N/A
Self-heal Daemon on localhost N/A N/A Y 3519
Self-heal Daemon on 192.168.111.2 N/A N/A Y 1403
Self-heal Daemon on 192.168.111.1 N/A N/A Y 1511Task Status of Volume gfs-vol
------------------------------------------------------------------------------
There are no active volume tasks
For the rest, on the current stats server (host #2), the command "ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/" returned "no file founds", so after issuing the last group of commands all is back to normal and stats are available again.
root@petatest02:~# ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
ClusterStats NodeStatsWhat could have caused the deletion of those 2 directories ? Bye. S.
Problem fixed ! Thank you 😉
The output of all the first set of command was quite good, I only have a doubt on this, if it is normal that gfs-vol on host #3 is offline:
root@petatest03:~# gluster vol status gfs-vol
Status of volume: gfs-vol
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.111.1:/opt/petasan/config/gfs
-brick 49152 0 Y 1504
Brick 192.168.111.2:/opt/petasan/config/gfs
-brick 49154 0 Y 1388
Brick 192.168.111.3:/opt/petasan/config/gfs
-brick N/A N/A N N/A
Self-heal Daemon on localhost N/A N/A Y 3519
Self-heal Daemon on 192.168.111.2 N/A N/A Y 1403
Self-heal Daemon on 192.168.111.1 N/A N/A Y 1511Task Status of Volume gfs-vol
------------------------------------------------------------------------------
There are no active volume tasks
For the rest, on the current stats server (host #2), the command "ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/" returned "no file founds", so after issuing the last group of commands all is back to normal and stats are available again.
root@petatest02:~# ls /opt/petasan/config/shared/graphite/whisper/PetaSAN/
ClusterStats NodeStats
What could have caused the deletion of those 2 directories ? Bye. S.