January 28, 2015
How to do a full volume heal with GlusterFS
How to fully fix a split-brain
If your nodes get out of sync and you know which node is the correct one, you can rebuild the bad node from the good one. So, if you want node 2 to match node 1, follow these steps:
- gluster volume stop $volumename
- /etc/init.d/glusterfsd stop (on node 2, the node being rebuilt)
- rm -rf /mnt/lv_glusterfs/brick/* (on node 2 only; this wipes the bad copy)
- /etc/init.d/glusterfsd start (on node 2)
- gluster volume start $volumename force
- gluster volume heal $volumename full
You should see successful output, and the /mnt/lv_glusterfs/brick/ directory on node 2 will start to match node 1.
Finally, you can run:
- gluster volume heal $volumename info split-brain (this will show if there are any split-brains)
- gluster volume heal $volumename info heal-failed (this will show you any files that failed the heal)
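If you do this reset often, the whole sequence can be scripted. A minimal sketch, assuming it is run on the node being rebuilt (node 2 here) and that the brick lives at /mnt/lv_glusterfs/brick as above; the script name and argument handling are mine, not part of the original steps:

#!/bin/bash
# gluster-reset.sh - rebuild this node's brick from the good replica, then run a full heal
# Usage (run ON THE NODE BEING REBUILT): ./gluster-reset.sh volumename
VOL="$1"

gluster --mode=script volume stop "$VOL"  # --mode=script skips the y/n confirmation prompt
/etc/init.d/glusterfsd stop               # stop the brick daemon on this node
rm -rf /mnt/lv_glusterfs/brick/*          # wipe the stale copy (this node only!)
/etc/init.d/glusterfsd start
gluster volume start "$VOL" force
gluster volume heal "$VOL" full           # repopulate from the good node

# verify nothing was left behind
gluster volume heal "$VOL" info split-brain
gluster volume heal "$VOL" info heal-failed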
Cheers
How to set up GlusterFS Server/Client
Set up the Gluster server/client on both nodes
On both machines:
- wget http://www.nicktailor.com/files/redhat6/glusterfs-3.4.0-8.el6.x86_64.rpm
- wget http://www.nicktailor.com/files/redhat6/glusterfs-fuse-3.4.0-8.el6.x86_64.rpm
- wget http://www.nicktailor.com/files/redhat6/glusterfs-server-3.4.0-8.el6.x86_64.rpm
- wget http://www.nicktailor.com/files/redhat6/glusterfs-libs-3.4.0-8.el6.x86_64.rpm
- wget http://www.nicktailor.com/files/redhat6/glusterfs-cli-3.4.0-8.el6.x86_64.rpm
Install GlusterFS Server and Client
- yum localinstall -y gluster*.rpm
- yum install fuse
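To confirm the packages went in cleanly, a quick sanity check (these two commands are my addition, not part of the original steps):
- rpm -qa 'glusterfs*' (should list the packages downloaded above)
- glusterfs --version (should report glusterfs 3.4.0)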
We want to use LVM for the GlusterFS brick so that, if we need to increase the size of the volume in the future, we can do so relatively easily (a quick resize sketch follows the fstab step below). Repeat these steps on both nodes.
Create your physical volume
- pvcreate /dev/sdb
Create your volume group
- vgcreate vg_gluster /dev/sdb
Create the logical volume
- lvcreate -l100%VG -n lv_gluster vg_gluster
Format your volume
- mkfs.ext3 /dev/vg_gluster/lv_gluster
Make the directory for your glusterfs
- mkdir -p /mnt/lv_gluster
Mount the logical volume to your destination
- mount /dev/vg_gluster/lv_gluster /mnt/lv_gluster
Create your brick
- mkdir -p /mnt/lv_gluster/brick
Add it to your fstab if you wish for it to automount upon reboots
- echo "" >> /etc/fstab
- echo "/dev/vg_gluster/lv_gluster /mnt/lv_gluster ext3 defaults 0 0" >> /etc/fstab
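As mentioned earlier, using LVM makes growing the brick later straightforward. A minimal sketch, assuming a new disk has been added at /dev/sdc (hypothetical device name):
- pvcreate /dev/sdc (initialise the new disk as a physical volume)
- vgextend vg_gluster /dev/sdc (add it to the volume group)
- lvextend -l +100%FREE /dev/vg_gluster/lv_gluster (give the logical volume all the new space)
- resize2fs /dev/vg_gluster/lv_gluster (grow the ext3 filesystem; this works while it is mounted)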
- service glusterd start
- chkconfig glusterd on
Now from server1.nicktailor.com:
Test to ensure you can contact your second node
- gluster peer probe server2.nicktailor.com
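You can confirm the peering took before going further:
- gluster peer status (the other node should show State: Peer in Cluster (Connected))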
Create the GlusterFS volume with replication between both nodes
- gluster volume create $volumename replica 2 transport tcp server1.nicktailor.com:/mnt/lv_gluster/brick server2.nicktailor.com:/mnt/lv_gluster/brick
Start the glusterfs volume
- gluster volume start $volumename
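The volume should now be up; you can confirm with:
- gluster volume info $volumename (Status should read Started, with both bricks listed)
- gluster volume status $volumename (shows whether each brick process and the self-heal daemon are online)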
Now on server1.nicktailor.com:
Now we need to make the mount point through which everything will be written and replicated.
NOTE: You will not be able to mount the storage unless your glusterfs volume is started
- mkdir /storage
- mount -t glusterfs server1.nicktailor.com:/sftp /storage (in this example the volume was created with $volumename set to sftp; substitute your own volume name)
Add these lines for automounting upon reboots
- echo "" >> /etc/fstab
- echo "server1.nicktailor.com:/sftp /storage glusterfs defaults,_netdev 0 0" >> /etc/fstab
- echo "" >> /etc/rc.local
- echo "grep -v '^\s*#' /etc/fstab | awk '{if (\$3 == \"glusterfs\") print \$2}' | xargs mount" >> /etc/rc.local
- echo "mount -t glusterfs server1.nicktailor.com:/sftp /storage" >> /etc/rc.local
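For reference, once the shell strips the escaping, the line that actually lands in /etc/rc.local is:

grep -v '^\s*#' /etc/fstab | awk '{if ($3 == "glusterfs") print $2}' | xargs mount

It finds every non-comment fstab entry whose filesystem type is glusterfs and mounts its mount point, which catches the case where the network mount fails at boot before glusterd is up.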
Now on server2.nicktailor.com, do the following after you have installed GlusterFS, set up the volume group, and started the gluster service:
- mkdir /storage
- mount -t glusterfs server2.nicktailor.com:/sftp /storage
- echo "" >> /etc/fstab
- echo "server2.nicktailor.com:/sftp /storage glusterfs defaults 0 0" >> /etc/fstab (if this doesn't automount, use the mount -t line at the bottom in /etc/rc.local instead)
- echo "" >> /etc/rc.local
- echo "grep -v '^\s*#' /etc/fstab | awk '{if (\$3 == \"glusterfs\") print \$2}' | xargs mount" >> /etc/rc.local
- echo "mount -t glusterfs server2.nicktailor.com:/sftp /storage" >> /etc/rc.local
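To confirm replication works end to end, write a file through one mount and look for it on the other node (the file name here is arbitrary):
On server1:
- touch /storage/replication-test
On server2:
- ls -l /storage/replication-test (the file should appear almost immediately)
Remove the test file from either node when you are done:
- rm /storage/replication-test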
Cheers,
Nick Tailor
If you have questions email nick@nicktailor.com and I will try to answer as soon as I can.