
Saturday, 10 May 2014

Set up GlusterFS with a volume replicated over 2 nodes

Set up the servers:

To install the required packages run on both servers:
sudo apt-get install glusterfs-server
If you want a more up-to-date version of GlusterFS, add the following PPA first and run sudo apt-get update before installing:
sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
Now from one of the servers you must connect to the other:
sudo gluster peer probe <ip_of_the_other_server>
You should see the following output:
peer probe: success
You can check the status from any of the hosts with:
sudo gluster peer status
Now create the volume where the data will reside by running the following command:
sudo gluster volume create datastore1 replica 2 transport tcp <server1_IP>:/mnt/gfs_block <server2_IP>:/mnt/gfs_block
Where datastore1 is the name of the volume you are creating and /mnt/gfs_block is the brick directory that will hold the data on each node.

If this has been successful, you should see:
Creation of volume datastore1 has been successful. Please start the volume to access data.
As the message indicates, we now need to start the volume:
sudo gluster volume start datastore1
As a final test, make sure the volume is available:
sudo gluster volume info
Your GlusterFS volume is ready and will maintain replication across two nodes.
If you want to restrict access to the volume, you can use the following command:
sudo gluster volume set datastore1 auth.allow gluster_client1_ip,gluster_client2_ip
If you need to remove the restriction at any point, you can type:
sudo gluster volume set datastore1 auth.allow '*'
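The server-side steps above can be collected into a single script; a minimal sketch in dry-run form (the IP addresses are placeholders, and DRY_RUN=1 only echoes each command instead of executing it):

```shell
#!/bin/sh
# Sketch of the server-side GlusterFS setup. The IPs below are
# placeholders; DRY_RUN=1 echoes commands instead of running them.
LOCAL_IP="192.168.0.1"
PEER_IP="192.168.0.2"
VOLUME="datastore1"
BRICK="/mnt/gfs_block"
DRY_RUN=1   # set to 0 on a real node

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run gluster peer probe "$PEER_IP"
run gluster volume create "$VOLUME" replica 2 transport tcp \
    "$LOCAL_IP:$BRICK" "$PEER_IP:$BRICK"
run gluster volume start "$VOLUME"
```

Flipping DRY_RUN to 0 runs the real commands, so the sequence can be reviewed before touching the cluster.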

Set up the clients:

Install the needed packages with:
sudo apt-get install glusterfs-client
To mount the volume you must edit the fstab file:
sudo vi /etc/fstab
And append the following to it:
[HOST1]:/[VOLUME]    /[MOUNT] glusterfs defaults,_netdev,backupvolfile-server=[HOST2] 0 0
Where [HOST1] is the IP address of one of the servers and [HOST2] is the IP of the other. [VOLUME] is the volume name, in our case datastore1, and [MOUNT] is the path where you want the files mounted on the client.
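Rather than editing fstab by hand with the placeholders still in, the line can be generated and reviewed first; a small sketch with example values (the IPs and mount point are assumptions, not from this setup):

```shell
#!/bin/sh
# Example values; replace with your own servers, volume and mount point.
HOST1="192.168.0.1"
HOST2="192.168.0.2"
VOLUME="datastore1"
MOUNT="/mnt/datastore"

# Write the candidate fstab line to a scratch file so it can be
# inspected before being appended to /etc/fstab.
printf '%s:/%s %s glusterfs defaults,_netdev,backupvolfile-server=%s 0 0\n' \
    "$HOST1" "$VOLUME" "$MOUNT" "$HOST2" > /tmp/fstab.gluster
cat /tmp/fstab.gluster
```

Once the line looks right, append it to /etc/fstab and mount with `mount [MOUNT]`.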

Alternatively, you can mount the volume using a volume config file:

Create a volume config file for your GlusterFS client:
vi /etc/glusterfs/datastore.vol
In this file, replace [HOST1] with your first GlusterFS server, [HOST2] with your second GlusterFS server, and [VOLNAME] with the GlusterFS volume to mount.
 volume remote1
 type protocol/client
 option transport-type tcp
 option remote-host [HOST1]
 option remote-subvolume [VOLNAME]
 end-volume

 volume remote2
 type protocol/client
 option transport-type tcp
 option remote-host [HOST2]
 option remote-subvolume [VOLNAME]
 end-volume

 volume replicate
 type cluster/replicate
 subvolumes remote1 remote2
 end-volume

 volume writebehind
 type performance/write-behind
 option window-size 1MB
 subvolumes replicate
 end-volume

 volume cache
 type performance/io-cache
 option cache-size 512MB
 subvolumes writebehind
 end-volume
Finally, edit fstab to add this config file and its mount point. Replace [MOUNT] with the location to mount the storage to.
/etc/glusterfs/datastore.vol [MOUNT] glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
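The same volfile can be generated from shell variables instead of being edited by hand; a sketch that writes the client and replicate translators to /tmp for review (the performance translators from above are omitted for brevity, and the IPs are example values):

```shell
#!/bin/sh
# Example values; substitute your own servers and volume name.
HOST1="192.168.0.1"
HOST2="192.168.0.2"
VOLNAME="datastore1"

# Write the volfile to /tmp for review before installing it to
# /etc/glusterfs/. The write-behind and io-cache translators from
# the article can be appended in the same way.
cat > /tmp/datastore.vol <<EOF
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host $HOST1
  option remote-subvolume $VOLNAME
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host $HOST2
  option remote-subvolume $VOLNAME
end-volume

volume replicate
  type cluster/replicate
  subvolumes remote1 remote2
end-volume
EOF
```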


Saturday, 5 January 2013

PostgreSQL cluster using DRBD and hot standby

Cluster Configuration:

First install all the necessary packages:
yum install gfs2-utils cman fence-virtd-checkpoint lvm2-cluster perl-Net-Telnet rgmanager device-mapper-multipath ipvsadm piranha luci modcluster cluster-snmp ricci
yum groupinstall "High Availability"
yum install postgresql-server
chkconfig --level 123456 ricci on
chkconfig --level 123456 luci on
chkconfig --level 123456 cman on
chkconfig --level 123456 iptables off
chkconfig --level 123456 ip6tables off
chkconfig postgresql on
chkconfig cman on
chkconfig rgmanager on
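The repetitive chkconfig calls can be expressed as loops over the service lists; a dry-run sketch (the helper echoes instead of executing, since chkconfig only exists on the cluster nodes):

```shell
#!/bin/sh
# Dry-run: echo the chkconfig commands instead of executing them.
# On a real node, replace the echo with the real chkconfig binary.
chk() { echo "chkconfig $*"; }

for svc in ricci luci cman; do
    chk --level 123456 "$svc" on
done
for svc in iptables ip6tables; do
    chk --level 123456 "$svc" off
done
for svc in postgresql cman rgmanager; do
    chk "$svc" on
done
```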
Now edit the cluster configuration file:
vi /etc/cluster/cluster.conf
Make it look like this:
<?xml version="1.0"?>
<cluster config_version="7" name="pgcluster">
<clusternodes>
<clusternode name="10.39.30.7" votes="1" nodeid="1">
<fence/>
</clusternode>
<clusternode name="10.39.30.8" votes="1" nodeid="2">
<fence/>
</clusternode>
</clusternodes>
<rm>
<failoverdomains>
<failoverdomain name="PGSQL" nofailback="0" ordered="0" restricted="0">
<failoverdomainnode name="10.39.30.7"/>
<failoverdomainnode name="10.39.30.8"/>
</failoverdomain>
</failoverdomains>
<resources>
<ip address="10.39.30.6" monitor_link="on" sleeptime="10"/>
<postgres-8 config_file="/var/lib/pgsql/data/postgresql.conf" name="pgsql" shutdown_wait="5" />
</resources>
<service autostart="1" exclusive="0" domain="PGSQL" name="pgsql" recovery="relocate">
<drbd name="drdb-postgres" resource="r0">
<fs device="/dev/drbd0" fsid="6202" fstype="ext3" mountpoint="/var/lib/pgsql" name="pgsql" options="noatime"/>
</drbd>
<ip ref="10.39.30.6"/>
<postgres-8 ref="pgsql"/>
</service>
</rm>
<cman expected_votes="1" two_node="1"/>
<fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>
</cluster>
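Before cman reads the file it is worth checking that the XML is well-formed; a sketch that validates a copy with xmllint when it is available (ccs_config_validate, part of the RHEL 6 cluster stack, performs the stricter schema check on a real node):

```shell
#!/bin/sh
# Write a trimmed copy of the config to /tmp and check that it is
# well-formed XML. A full schema validation is done on a cluster
# node with ccs_config_validate.
cat > /tmp/cluster.conf <<'EOF'
<?xml version="1.0"?>
<cluster config_version="7" name="pgcluster">
<cman expected_votes="1" two_node="1"/>
</cluster>
EOF

if command -v xmllint >/dev/null 2>&1; then
    xmllint --noout /tmp/cluster.conf && echo "well-formed"
else
    echo "xmllint not installed; run ccs_config_validate on the node"
fi
```

Remember to increment config_version whenever you change the file, or cman will refuse to pick up the new copy.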

DRBD Configuration:

Install the necessary build packages:
yum install gcc flex make libxslt rpm-build redhat-rpm-config kernel-devel
DRBD must be downloaded and built manually:
wget http://oss.linbit.com/drbd/8.4/drbd-8.4.1.tar.gz
The following commands will generate the DRBD RPM packages:
tar -xvf *.tar.gz
mkdir -p /root/rpmbuild/SOURCES/
cp drbd*.tar.gz /root/rpmbuild/SOURCES/
cd drbd-8.4.1
./configure --with-rgmanager --enable-spec --with-km
make tgz
rpmbuild --bb drbd.spec --without xen --without heartbeat --without udev --without pacemaker --with rgmanager
rpmbuild --bb drbd-kernel.spec
rpmbuild --bb drbd-km.spec
Now install the newly created packages:
cd /root/rpmbuild/RPMS/x86_64
rpm -i drbd-utils-8.4.1-1.el6.x86_64.rpm drbd-bash-completion-8.4.1-1.el6.x86_64.rpm drbd-8.4.1-1.el6.x86_64.rpm drbd-rgmanager-8.4.1-1.el6.x86_64.rpm drbd-km-2.6.32_279.14.1.el6.x86_64-8.4.1-1.el6.x86_64.rpm
Add your nodes' IP addresses to the hosts file on both machines:
vi /etc/hosts
10.39.30.7 RHPG1
10.39.30.8 RHPG2
Create a DRBD configuration file:
vi /etc/drbd.d/r0.res
resource r0 {
   device /dev/drbd0;
   meta-disk internal;
   on RHPG1 {
      address 10.39.30.7:7789;
      disk /dev/sdb1;
   }
   on RHPG2 {
      address 10.39.30.8:7789;
      disk /dev/sdb1;
   }
}
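The resource file can also be generated from the node names and IPs; a sketch that writes it to /tmp for review before installing it to /etc/drbd.d (the values are the ones from /etc/hosts above):

```shell
#!/bin/sh
# Generate the DRBD resource file from the node names and IPs
# added to /etc/hosts above; written to /tmp for review first.
NODE1=RHPG1; IP1=10.39.30.7
NODE2=RHPG2; IP2=10.39.30.8
DISK=/dev/sdb1
PORT=7789

cat > /tmp/r0.res <<EOF
resource r0 {
   device /dev/drbd0;
   meta-disk internal;
   on $NODE1 {
      address $IP1:$PORT;
      disk $DISK;
   }
   on $NODE2 {
      address $IP2:$PORT;
      disk $DISK;
   }
}
EOF
```

The node names in the `on` blocks must match each machine's hostname, which is why they were added to /etc/hosts first.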
Create the partition /dev/sdb1, but do not format it:
fdisk /dev/sdb
Run on both machines:
drbdadm create-md r0
modprobe drbd
drbdadm up r0
Run on one of the machines to promote it to primary and start the initial sync:
drbdadm -- --overwrite-data-of-peer primary r0
Check the sync status on any of the hosts with:
service drbd status
Create the file system on /dev/drbd0:
mkfs.ext3 /dev/drbd0
and move the PostgreSQL data over. Copy the contents of /var/lib/pgsql (not the directory itself, since cluster.conf mounts the DRBD device directly at /var/lib/pgsql) and preserve ownership and permissions, as PostgreSQL requires its data directory to be owned by postgres:
mkdir /tmp/pgdata
mount /dev/drbd0 /tmp/pgdata
cp -a /var/lib/pgsql/. /tmp/pgdata/
Wait until the data is synced between the two hosts; check the status with:
service drbd status
Unmount the drbd device:
umount /dev/drbd0
and then, on both hosts do:
rm -rf /var/lib/pgsql/*
Restart the drbd service:
service drbd restart
The status from:
service drbd status
should now show both hosts as Secondary:
0:r0 Connected Secondary/Secondary UpToDate/UpToDate C
The device is now ready to be managed by rgmanager.
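That role pair can also be checked from a script; a sketch that parses a captured status line (hardcoded here as an example; on a live node you would capture the output of `service drbd status` instead):

```shell
#!/bin/sh
# Sample line as printed by 'service drbd status'; on a live node,
# capture it with: status_line=$(service drbd status | grep '0:r0')
status_line="0:r0  Connected Secondary/Secondary UpToDate/UpToDate C"

# Field 3 holds the local/peer role pair, e.g. Secondary/Secondary.
roles=$(echo "$status_line" | awk '{print $3}')
local_role=${roles%%/*}
peer_role=${roles#*/}

echo "local=$local_role peer=$peer_role"
```

A monitoring check can then alert if either side is not in the expected role before rgmanager takes over.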
