Friday, October 4, 2013

Setup HAProxy Load Balancer on Ubuntu

Today I needed to set up an HAProxy load balancer on Ubuntu. I think most of my tasks this month are about clustering, load balancing, and HA. Over the last two weeks I just finished my Hyper-V cluster. Luckily it's working very well.

OK, let's get to the point...

1. Install HAProxy on the Ubuntu server.

# apt-get install haproxy

2. Change the HAProxy configuration. 

# nano /etc/haproxy/haproxy.cfg

3. Add the following text to the file and save it:

global
        log 127.0.0.1   daemon debug
        #log 127.0.0.1  local0
        #log 127.0.0.1  local1 notice
        #log loghost    local0 info
        stats socket /tmp/stats
        maxconn 4096
        pidfile /var/run/haproxy.pid
        #chroot /usr/share/haproxy
        #user haproxy
        #group haproxy
        daemon
        #debug
        #quiet

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        option redispatch
        maxconn 3000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000


listen  webcluster *:80
        mode http
        stats enable
        stats uri /stats                # without this, the default is http://web.domain.com/haproxy?stats
        stats auth admin:p@ssw0rd
        balance roundrobin
        option httpchk HEAD / HTTP/1.0
        option forwardfor
        cookie LSW_WEB insert
        server  192.18.250.22 192.18.250.22:80 check inter 5000 fastinter 1000 fall 1 weight 1
        #server web1 192.18.250.22:80 weight 1 fastinter 5000 rise 2 fall 3
        server  192.18.250.23 192.18.250.23:80 check inter 5000 fastinter 1000 fall 1 weight 1
        #server web2 192.18.250.23:80 weight 1 fastinter 5000 rise 2 fall 3
        option  httpclose               # disable keep-alive
        option  checkcache              # block response if set-cookie & cacheable

        rspidel ^Set-cookie:\ IP=       # do not let this cookie tell our internal IP address

        #errorloc       502     http://192.168.114.58/error502.html
        #errorfile      503     /etc/haproxy/errors/503.http
        errorfile       400     /etc/haproxy/errors/400.http
        errorfile       403     /etc/haproxy/errors/403.http
        errorfile       408     /etc/haproxy/errors/408.http
        errorfile       500     /etc/haproxy/errors/500.http
        errorfile       502     /etc/haproxy/errors/502.http
        errorfile       503     /etc/haproxy/errors/503.http
        errorfile       504     /etc/haproxy/errors/504.http
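The commented-out lines above hint at giving each backend a friendly name (web1, web2). One detail worth noting: because the config uses cookie-based stickiness (cookie LSW_WEB insert), each server line also needs its own cookie value, otherwise clients will not stick to a backend. A hedged sketch of a named variant of the two server lines (web1/web2 are assumed names):

```
server web1 192.18.250.22:80 cookie web1 check inter 5000 fastinter 1000 fall 1 weight 1
server web2 192.18.250.23:80 cookie web2 check inter 5000 fastinter 1000 fall 1 weight 1
```

With this, HAProxy inserts LSW_WEB=web1 or LSW_WEB=web2 into responses and routes returning clients back to the same server.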


4. Set the startup parameter for HAProxy: edit /etc/default/haproxy and set ENABLED to 1 so HAProxy will start.

# nano /etc/default/haproxy
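The init script checks a single ENABLED flag in this file, and the package ships it set to 0. A minimal version of the file after editing:

```
# /etc/default/haproxy
# Set ENABLED to 1 so the init script will start haproxy.
ENABLED=1
```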

5. Start HAProxy from the command line

# /etc/init.d/haproxy start

6. Access the stats web interface from a browser, logging in with the stats auth credentials from the config (admin:p@ssw0rd):

http://web.domain.com/stats




Thursday, October 3, 2013

Set Up an NFS Mount on Ubuntu


NFS (Network File System) Mounts

NFS mounts work to share a directory between several virtual servers. This has the advantage of saving disk space, as the home directory is only kept on one virtual private server, and others can connect to it over the network. When setting up mounts, NFS is most effective for permanent fixtures that should always be accessible.

How to set it up

An NFS mount is set up between at least two virtual servers. The machine hosting the shared directory is called the server, while the machines that connect to it are called clients.

This tutorial requires 2 servers: one acting as the server and one as the client. We will set up the server machine first, followed by the client. The following IP addresses will refer to each one:

Master: 172.19.250.10
Client: 172.19.250.20


The system should be set up as root. You can access the root user by typing

# su -

Setting Up the NFS Server (master)

1. Download the Required Software (nfs-kernel-server)

Start off by using apt-get to install the nfs programs.

# apt-get install nfs-kernel-server portmap

2. Export the Shared Directory

The next step is to decide which directory we want to share with the client server. The chosen directory should then be added to the /etc/exports file, which specifies both the directory to be shared and the details of how it is shared.

Suppose we wanted to share two directories: /home and /var/nfs.

Because /var/nfs does not exist, we need to do two things before we can export it.

First, we need to create the directory itself:

# mkdir /var/nfs/


Second, we should change the ownership of the directory to the user nobody and the group nogroup. These represent the default unprivileged user and group through which clients can access a directory shared through NFS.

Go ahead and chown the directory:

# chown nobody:nogroup /var/nfs

Or, in my case, I set the ownership based on my old setup.


After completing those steps, it’s time to export the directories to the other VPS:

# nano /etc/exports


Add the following lines to the bottom of the file, sharing both directories with the client:

/home 172.19.250.20(rw,sync,no_root_squash,no_subtree_check)

/var/nfs 172.19.250.20(rw,sync,no_subtree_check)



These settings accomplish several tasks:

* rw: This option allows the client server to both read and write within the shared directory.

* sync: Sync confirms requests to the shared directory only once the changes have been committed.

* no_subtree_check: This option disables subtree checking. When a shared directory is a subdirectory of a larger filesystem, NFS scans every directory above it in order to verify its permissions and details. Disabling the subtree check may increase the reliability of NFS, but it reduces security slightly.

* no_root_squash: This option allows root on the client to connect to the designated directory as root.
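To make the exports syntax concrete, here is a small shell sketch (plain POSIX parameter expansion, no external tools) that splits one of the entries above into its directory, client, and option list:

```shell
#!/bin/sh
# Split an /etc/exports entry of the form:
#   <directory> <client>(<options>)
line='/var/nfs 172.19.250.20(rw,sync,no_subtree_check)'

dir=${line%% *}          # everything before the first space
rest=${line#* }          # everything after it: client(options)
client=${rest%%(*}       # strip "(options)" to get the client
opts=${rest#*(}          # strip everything up to "(" ...
opts=${opts%)}           # ... and the trailing ")"

echo "directory: $dir"
echo "client:    $client"
echo "options:   $opts"
```

This prints the directory, the client address, and the comma-separated option list on separate lines.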


Once you have entered the settings for each directory, run the following command to export them:

# exportfs -a


Setting Up the NFS Client


1. Download the Required Software (nfs-common)

Start off by using apt-get to install the nfs programs.

# apt-get install nfs-common portmap

2. Mount the Directories

Once the programs have been downloaded to the client server, create the directories that will contain the NFS shared files:

# mkdir -p /mnt/nfs/home

# mkdir -p /mnt/nfs/var/nfs



Then go ahead and mount them

# mount 172.19.250.10:/home /mnt/nfs/home

# mount 172.19.250.10:/var/nfs /mnt/nfs/var/nfs



You can use the df -h command to check that the directories have been mounted; you will see them at the end of the list.

# df -h

Filesystem              Size  Used Avail Use% Mounted on
/dev/sda                 20G  948M   19G   5% /
udev                    119M  4.0K  119M   1% /dev
tmpfs                    49M  208K   49M   1% /run
none                    5.0M     0  5.0M   0% /run/lock
none                    122M     0  122M   0% /run/shm
172.19.250.10:/home      20G  948M   19G   5% /mnt/nfs/home
172.19.250.10:/var/nfs   20G  948M   19G   5% /mnt/nfs/var/nfs



Additionally, use the mount command to see the entire list of mounted file systems.

# mount


Your list should look something like this:

/dev/sda on / type ext4 (rw,errors=remount-ro,barrier=0) [DOROOT]
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)
172.19.250.10:/home on /mnt/nfs/home type nfs (rw,vers=4,addr=172.19.250.10,clientaddr=172.19.250.20)
172.19.250.10:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,vers=4,addr=172.19.250.10,clientaddr=172.19.250.20)



Testing the NFS Mount


Once you have successfully mounted your NFS directories, you can test that they work by creating files on the Client and checking their availability on the Server.

Create a file in each directory to try it out:

# touch /mnt/nfs/home/example /mnt/nfs/var/nfs/example


You should then be able to find the files on the Server in the /home and /var/nfs directories.

# ls /home

# ls /var/nfs/



You can ensure that the mount is always active by adding the directories to the fstab file on the client. This will ensure that the mounts start up after the server reboots.

# nano /etc/fstab

++++++++++++++++++++++++
172.19.250.10:/home /mnt/nfs/home nfs auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0

172.19.250.10:/var/nfs /mnt/nfs/var/nfs nfs auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0

+++++++++++++++++++++++++++
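Every fstab line has exactly six whitespace-separated fields: device, mount point, filesystem type, options, dump, and pass. A quick way to sanity-check the entries above before rebooting is to count the fields with awk (the entries are repeated here so the snippet is self-contained):

```shell
#!/bin/sh
# Count the fields in each fstab entry; every line should report 6
# (device, mount point, type, options, dump, pass).
printf '%s\n' \
  '172.19.250.10:/home /mnt/nfs/home nfs auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0' \
  '172.19.250.10:/var/nfs /mnt/nfs/var/nfs nfs auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0' \
| awk '{print NF, "fields:", $2}'
```

Each line of output shows the field count followed by the mount point, so a count other than 6 flags a malformed entry.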

You can learn more about the fstab options by typing in:

# man nfs


Any subsequent restarts will include the NFS mount, although the mount may take a minute to appear after the reboot. You can check the mounted directories with the two earlier commands:

# df -h

# mount


Removing the NFS Mount

Should you decide to remove a directory, you can unmount it using the umount command:

# cd

# umount /mnt/nfs/home

# umount /mnt/nfs/var/nfs



You can see that the mounts were removed by then looking at the filesystem again.

# df -h

You should find that the unmounted directories are gone from the list.