
Issue with graphs


I just set up a 3-node deployment of PetaSAN 1.4, but I am not getting the graphs on the home page.

I found the post http://www.petasan.org/forums/?view=thread&id=157, but restarting the services and remounting didn't work.

The services restart but I get

root@sbd-cephmon01:~# umount -l /opt/petasan/config/shared/
umount: /opt/petasan/config/shared/: not mounted

when I try to unmount the share.

Any ideas?


What is the output of the following?

mount | grep shared
gluster peer status
gluster vol info gfs-vol
systemctl status glusterfs-server

Run them on any of the 3 machines.
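
If it is easier, you can capture all four outputs into one file and paste its contents back here. This is just a convenience sketch using the same commands; the file path is only an example:

# optional: collect the requested output into a single file
{
mount | grep shared
gluster peer status
gluster vol info gfs-vol
systemctl status glusterfs-server
} > /tmp/gluster-diag.txt 2>&1
cat /tmp/gluster-diag.txt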

root@sbd-cephnode01:~# mount | grep shared
root@sbd-cephnode01:~# gluster peer status
Number of Peers: 2

Hostname: 192.168.15.100
Uuid: 6ae58d2e-d7fe-4233-8788-e0d0264a7c38
State: Peer in Cluster (Connected)

Hostname: sbd-cephnode02
Uuid: cac873ca-d472-40c6-be95-b46b60866f8a
State: Peer in Cluster (Connected)
root@sbd-cephnode01:~# gluster vol info gfs-vol
Volume gfs-vol does not exist
root@sbd-cephnode01:~# systemctl status glusterfs-server
● glusterfs-server.service - LSB: GlusterFS server
Loaded: loaded (/etc/init.d/glusterfs-server; bad; vendor preset: enabled)
Active: active (running) since Wed 2017-11-01 14:31:46 CDT; 1h 23min ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/glusterfs-server.service
└─1796 /usr/sbin/glusterd -p /var/run/glusterd.pid

Nov 01 14:31:43 sbd-cephnode01 systemd[1]: Starting LSB: GlusterFS server...
Nov 01 14:31:43 sbd-cephnode01 glusterfs-server[1789]: * Starting glusterd service glusterd
Nov 01 14:31:46 sbd-cephnode01 glusterfs-server[1789]: ...done.
Nov 01 14:31:46 sbd-cephnode01 systemd[1]: Started LSB: GlusterFS server.

For some reason, the gluster volume failed to create during setup.

# create volume: replace IP1, IP2, IP3 with the backend 1 IP addresses of nodes 1, 2, 3 respectively:
gluster vol create gfs-vol replica 3 IP1:/opt/petasan/config/gfs-brick IP2:/opt/petasan/config/gfs-brick IP3:/opt/petasan/config/gfs-brick

# start volume
gluster vol start gfs-vol
gluster vol set gfs-vol network.ping-timeout 5
gluster vol set gfs-vol nfs.disable true

# check volume is working
gluster vol info gfs-vol

After this I recommend rebooting all 3 nodes; if you cannot, let me know and we can go through more steps.
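
Once the nodes are back up, a quick sanity check that the shared config path is mounted again (this uses the PetaSAN sharedfs mount service and the mount point from your umount attempt; adjust if your version names them differently):

# check the PetaSAN shared filesystem mount on each node
systemctl status petasan-mount-sharedfs
mount | grep shared
ls /opt/petasan/config/shared

You should see gfs-vol mounted on /opt/petasan/config/shared on every node.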

One more thing: for graph data to show correctly, the browser and the PetaSAN nodes need to have correctly set time and time zone.
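
A minimal way to check and fix this from the shell on each node, using standard systemd tools (the zone below is only an example; pick yours from timedatectl list-timezones):

# show current time, time zone and NTP sync state
timedatectl
# set the time zone (example value)
timedatectl set-timezone America/Chicago
# enable NTP time synchronization
timedatectl set-ntp true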

Got the output below. I really haven't worked with Gluster, but I'm guessing I need to do something to pull the brick out of the existing volume?

root@sbd-cephmon01:~# gluster vol create gfs-vol replica 3 192.168.15.1:/opt/petasan/config/gfs-brick 192.168.15.2:/opt/petasan/config/gfs-brick 192.168.15.3:/opt/petasan/config/gfs-brick
volume create: gfs-vol: failed: /opt/petasan/config/gfs-brick is already part of a volume

So it was already created. Try the commands starting from # start volume.

If this does not work, please let me know the output of:

gluster vol status
gluster vol info
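
For background: Gluster marks a brick directory with extended attributes (trusted.glusterfs.volume-id, trusted.gfid) when it joins a volume, and a leftover marker from the failed setup is most likely what produced the "already part of a volume" error. If you want to confirm, this standard check dumps those attributes (assuming the attr package, which provides getfattr, is installed):

# inspect Gluster's extended attributes on the brick directory
getfattr -d -m . -e hex /opt/petasan/config/gfs-brick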

root@sbd-cephmon01:~# gluster vol start gfs-vol
volume start: gfs-vol: failed: Volume gfs-vol does not exist

root@sbd-cephmon01:~# gluster vol status
No volumes present
root@sbd-cephmon01:~# gluster vol info
No volumes present

Try to do the following on all 3 nodes:

systemctl stop petasan-mount-sharedfs
systemctl stop glusterfs-server
killall glusterfsd
killall glusterfs

rm -rf /var/lib/glusterd/vols/gfs-vol
rm -rf /opt/petasan/config/gfs-brick
mkdir -p /opt/petasan/config/gfs-brick

systemctl start petasan-mount-sharedfs
systemctl start glusterfs-server

Then do this from any node:

# create volume: replace IP1, IP2, IP3 with the backend 1 IP addresses of nodes 1, 2, 3 respectively:
gluster vol create gfs-vol replica 3 IP1:/opt/petasan/config/gfs-brick IP2:/opt/petasan/config/gfs-brick IP3:/opt/petasan/config/gfs-brick

# start volume
gluster vol start gfs-vol
gluster vol set gfs-vol network.ping-timeout 5
gluster vol set gfs-vol nfs.disable true

# check volume is working
gluster vol info gfs-vol

Correction

Try to do the following on all 3 nodes:

systemctl stop petasan-mount-sharedfs
systemctl stop glusterfs-server
killall glusterfsd
killall glusterfs

rm -rf /var/lib/glusterd/vols/gfs-vol
rm -rf /opt/petasan/config/gfs-brick
mkdir -p /opt/petasan/config/gfs-brick

Then after the above is done on all 3 nodes, start the following services on all 3 nodes:

systemctl start petasan-mount-sharedfs
systemctl start glusterfs-server

Then do this from any node:

# create volume: replace IP1, IP2, IP3 with the backend 1 IP addresses of nodes 1, 2, 3 respectively:
gluster vol create gfs-vol replica 3 IP1:/opt/petasan/config/gfs-brick IP2:/opt/petasan/config/gfs-brick IP3:/opt/petasan/config/gfs-brick

# start volume
gluster vol start gfs-vol
gluster vol set gfs-vol network.ping-timeout 5
gluster vol set gfs-vol nfs.disable true

# check volume is working
gluster vol info gfs-vol
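
If it helps, the same two-phase ordering can be scripted from one node. This is only a sketch: it assumes root SSH between the nodes works and that IP1, IP2, IP3 are your backend 1 IPs, as above.

# phase 1: stop services and wipe the stale volume/brick state on ALL nodes first
for n in IP1 IP2 IP3; do
  ssh root@$n 'systemctl stop petasan-mount-sharedfs glusterfs-server; killall glusterfsd glusterfs; rm -rf /var/lib/glusterd/vols/gfs-vol /opt/petasan/config/gfs-brick; mkdir -p /opt/petasan/config/gfs-brick'
done

# phase 2: only after phase 1 has finished everywhere, start the services again on ALL nodes
for n in IP1 IP2 IP3; do
  ssh root@$n 'systemctl start petasan-mount-sharedfs glusterfs-server'
done

Then create and start gfs-vol from any one node exactly as in the block above.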
