Issue with graphs
shin234
6 Posts
November 1, 2017, 7:55 pm
I just set up a 3-node deployment of PetaSAN 1.4, but I am not getting the graphs on the home page.
I found the post http://www.petasan.org/forums/?view=thread&id=157, but restarting the services and remounting didn't work.
The services restart, but when I try to unmount the share I get:
root@sbd-cephmon01:~# umount -l /opt/petasan/config/shared/
umount: /opt/petasan/config/shared/: not mounted
Any ideas?
admin
2,930 Posts
November 1, 2017, 8:53 pm
What is the output of the following, run on any of the 3 machines?
mount | grep shared
gluster peer status
gluster vol info gfs-vol
systemctl status glusterfs-server
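A convenience sketch (not from the original reply) that gathers the same four diagnostics from every node in one pass, assuming passwordless root SSH; node1, node2, node3 are placeholder hostnames to replace with your own:
# Hypothetical helper: collect the diagnostics above from each node.
for node in node1 node2 node3; do
    echo "### $node ###"
    ssh root@"$node" 'mount | grep shared;
                      gluster peer status;
                      gluster vol info gfs-vol;
                      systemctl status glusterfs-server --no-pager'
done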
shin234
6 Posts
November 1, 2017, 8:56 pm
root@sbd-cephnode01:~# mount | grep shared
root@sbd-cephnode01:~# gluster peer status
Number of Peers: 2
Hostname: 192.168.15.100
Uuid: 6ae58d2e-d7fe-4233-8788-e0d0264a7c38
State: Peer in Cluster (Connected)
Hostname: sbd-cephnode02
Uuid: cac873ca-d472-40c6-be95-b46b60866f8a
State: Peer in Cluster (Connected)
root@sbd-cephnode01:~# gluster vol info gfs-vol
Volume gfs-vol does not exist
root@sbd-cephnode01:~# systemctl status glusterfs-server
● glusterfs-server.service - LSB: GlusterFS server
Loaded: loaded (/etc/init.d/glusterfs-server; bad; vendor preset: enabled)
Active: active (running) since Wed 2017-11-01 14:31:46 CDT; 1h 23min ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/glusterfs-server.service
└─1796 /usr/sbin/glusterd -p /var/run/glusterd.pid
Nov 01 14:31:43 sbd-cephnode01 systemd[1]: Starting LSB: GlusterFS server...
Nov 01 14:31:43 sbd-cephnode01 glusterfs-server[1789]: * Starting glusterd service glusterd
Nov 01 14:31:46 sbd-cephnode01 glusterfs-server[1789]: ...done.
Nov 01 14:31:46 sbd-cephnode01 systemd[1]: Started LSB: GlusterFS server.
admin
2,930 Posts
November 1, 2017, 9:17 pm
# For some reason, the gluster volume failed to create during setup.
# Create the volume. Note: replace IP1, IP2, IP3 with the backend 1 IP addresses of nodes 1, 2, 3 respectively:
gluster vol create gfs-vol replica 3 IP1:/opt/petasan/config/gfs-brick IP2:/opt/petasan/config/gfs-brick IP3:/opt/petasan/config/gfs-brick
# start volume
gluster vol start gfs-vol
gluster vol set gfs-vol network.ping-timeout 5
gluster vol set gfs-vol nfs.disable true
# check volume is working
gluster vol info gfs-vol
After this I recommend rebooting all 3 nodes; if you cannot, let me know and we can go through more steps.
One more thing: for graph data to show correctly, the browser and the PetaSAN nodes need to have correctly set time and time zone.
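On the time-zone point, a quick sketch using the standard systemd tooling; the zone name below is only an example to replace with your own:
# Show current time, time zone, and NTP sync state on a node.
timedatectl
# Set the time zone if it is wrong (America/Chicago is only an example).
timedatectl set-timezone America/Chicago
# List available zone names if unsure of the exact spelling.
timedatectl list-timezones | grep -i chicago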
shin234
6 Posts
November 1, 2017, 9:27 pm
Got the output below. I really haven't worked with Gluster, but I'm guessing I need to do something to pull the brick out of the existing volume?
root@sbd-cephmon01:~# gluster vol create gfs-vol replica 3 192.168.15.1:/opt/petasan/config/gfs-brick 192.168.15.2:/opt/petasan/config/gfs-brick 192.168.15.3:/opt/petasan/config/gfs-brick
volume create: gfs-vol: failed: /opt/petasan/config/gfs-brick is already part of a volume
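For background, GlusterFS marks a brick directory as in use with extended attributes on the directory itself, which is where this error comes from. One common way to clear those markers by hand (an alternative to deleting and recreating the brick directory, as suggested further down) is sketched below, using the paths from this thread:
# Show the gluster markers currently set on the brick directory.
getfattr -d -m . -e hex /opt/petasan/config/gfs-brick
# Clear the markers and gluster's internal metadata so the directory
# can be reused by a new volume.
setfattr -x trusted.glusterfs.volume-id /opt/petasan/config/gfs-brick
setfattr -x trusted.gfid /opt/petasan/config/gfs-brick
rm -rf /opt/petasan/config/gfs-brick/.glusterfs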
admin
2,930 Posts
November 1, 2017, 9:39 pm
So it was already created. Try the commands starting from "# start volume".
If this does not work, please let me know the output of:
gluster vol status
gluster vol info
shin234
6 Posts
November 1, 2017, 9:40 pm
root@sbd-cephmon01:~# gluster vol start gfs-vol
volume start: gfs-vol: failed: Volume gfs-vol does not exist
shin234
6 Posts
November 1, 2017, 9:41 pm
root@sbd-cephmon01:~# gluster vol status
No volumes present
root@sbd-cephmon01:~# gluster vol info
No volumes present
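One way to see which nodes still hold stale volume metadata or brick markers on disk, which would explain getting "already part of a volume" on create alongside "No volumes present" on status (a sketch with placeholder hostnames):
# Check each node for leftover volume definitions and brick markers.
# node1, node2, node3 are placeholder hostnames; replace with your own.
for node in node1 node2 node3; do
    echo "### $node ###"
    ssh root@"$node" 'ls /var/lib/glusterd/vols/ 2>/dev/null;
                      getfattr -d -m trusted.glusterfs -e hex /opt/petasan/config/gfs-brick 2>/dev/null'
done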
admin
2,930 Posts
November 1, 2017, 9:59 pm
Try to do the following on all 3 nodes:
systemctl stop petasan-mount-sharedfs
systemctl stop glusterfs-server
killall glusterfsd
killall glusterfs
rm -rf /var/lib/glusterd/vols/gfs-vol
rm -rf /opt/petasan/config/gfs-brick
mkdir -p /opt/petasan/config/gfs-brick
systemctl start petasan-mount-sharedfs
systemctl start glusterfs-server
Then do this from any node:
# Create the volume. Note: replace IP1, IP2, IP3 with the backend 1 IP addresses of nodes 1, 2, 3 respectively:
gluster vol create gfs-vol replica 3 IP1:/opt/petasan/config/gfs-brick IP2:/opt/petasan/config/gfs-brick IP3:/opt/petasan/config/gfs-brick
# start volume
gluster vol start gfs-vol
gluster vol set gfs-vol network.ping-timeout 5
gluster vol set gfs-vol nfs.disable true
# check volume is working
gluster vol info gfs-vol
Last edited on November 1, 2017, 10:00 pm by admin
admin
2,930 Posts
November 1, 2017, 10:06 pm
Correction
Try to do the following on all 3 nodes:
systemctl stop petasan-mount-sharedfs
systemctl stop glusterfs-server
killall glusterfsd
killall glusterfs
rm -rf /var/lib/glusterd/vols/gfs-vol
rm -rf /opt/petasan/config/gfs-brick
mkdir -p /opt/petasan/config/gfs-brick
Then after the above is done on all 3 nodes, start the following services on all 3 nodes:
systemctl start petasan-mount-sharedfs
systemctl start glusterfs-server
Then do this from any node:
# Create the volume. Note: replace IP1, IP2, IP3 with the backend 1 IP addresses of nodes 1, 2, 3 respectively:
gluster vol create gfs-vol replica 3 IP1:/opt/petasan/config/gfs-brick IP2:/opt/petasan/config/gfs-brick IP3:/opt/petasan/config/gfs-brick
# start volume
gluster vol start gfs-vol
gluster vol set gfs-vol network.ping-timeout 5
gluster vol set gfs-vol nfs.disable true
# check volume is working
gluster vol info gfs-vol
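For reference, the whole correction can be driven from one node as a consolidated sketch, assuming passwordless root SSH between the nodes; the hostnames and IP1/IP2/IP3 are placeholders to fill in by hand:
# Stage 1: clean up on all 3 nodes.
NODES="node1 node2 node3"        # placeholder hostnames; replace with your own
BRICK=/opt/petasan/config/gfs-brick
for n in $NODES; do
    ssh root@"$n" "systemctl stop petasan-mount-sharedfs glusterfs-server;
                   killall glusterfsd glusterfs;
                   rm -rf /var/lib/glusterd/vols/gfs-vol $BRICK;
                   mkdir -p $BRICK"
done
# Stage 2: restart the services on all 3 nodes.
for n in $NODES; do
    ssh root@"$n" "systemctl start petasan-mount-sharedfs glusterfs-server"
done
# Stage 3: recreate and start the volume from any one node
# (IP1, IP2, IP3 are the backend 1 IPs, exactly as in the post above).
gluster vol create gfs-vol replica 3 IP1:$BRICK IP2:$BRICK IP3:$BRICK
gluster vol start gfs-vol
gluster vol set gfs-vol network.ping-timeout 5
gluster vol set gfs-vol nfs.disable true
gluster vol info gfs-vol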