Wednesday, January 28, 2015

Very Very Busy

It has been a long time since I last posted. That doesn't mean I am not gaining any new technology or knowledge; rather, too much work and too few resources leave me no time to write up and share the information with all my readers.

Here is the list of things I should write up on this blog:
1. Amazon hard disks, static and magnetic
2. How to install SCCM, which I already did last year but still have not had time to share
3. Attending SCCM training at Infotech
4. DirectAccess, to let end users automatically receive password changes on their domain laptops from outside the office (done, and works like a charm)
5. Backup PC installation
6. VoIP technology: setup for the new building

Currently... MSSQL clustering.. I am still studying it... but I am not sure I will finish, since I also need to migrate a few apps off Windows 2003, which reaches end of life this April 2015.


Huuuh, so many things to write.. and so many tasks to be done as well..
I will update all of you later.

Bye..

Wednesday, May 28, 2014

Vbulletin 4 change URL

Steps for vB4:
1. Edit config.php

                Disable the Memcached datastore:
                ----------------
                //$config['Datastore']['class'] = 'vB_Datastore_Memcached';
                Enable the file cache instead:
                ----------
                $config['Datastore']['class'] = 'vB_Datastore_Filecache';


2. Update the HTTP vhost configuration and the hosts file (/etc/hosts, or the equivalent on your PC) for the application.
Change old.contoso.com to the new name.

To check whether the vhost points at the right folder, drop a small test file there, e.g.
pening.php
<?php
echo "new.contoso.com!";
?>


Browse to vb.mediu.edu.my/pening.php. If it shows the message, log in at vb.mediu.edu.my/admincp and continue with the step below.
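The same sanity check can be done from the command line instead of a browser. A minimal sketch, assuming curl is installed; new.contoso.com and pening.php are the example names from the step above:

```shell
# Fetch the test file from the new vhost; fall back to a message if the
# host is unreachable or curl is missing.
url="http://new.contoso.com/pening.php"
if command -v curl >/dev/null 2>&1; then
  # -s: silent, -f: treat HTTP errors (404 etc.) as failures
  result=$(curl -sf "$url" || echo "could not reach $url")
else
  result="curl not installed; check $url in a browser instead"
fi
echo "$result"
```

If the output is the "new.contoso.com!" string from pening.php, the vhost is pointing at the right folder.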
3. You can change the primary URL of your vBulletin installation via:
AdminCP > vBulletin Options > vBulletin Options > Site Name/URL/Contact Details


4. Afterward you need to reset the cookie path and domain to the default values using
AdminCP -> vBulletin Options -> Cookies and HTTP Header Options


After that, do not log out yet. Open a different browser and log in at new.contoso.com/admincp. If you can get in... congratulations! :) If it does not work.. I don't know.. that is just your luck..

Monday, February 24, 2014

Cannot run basic command in linux/unix

Sometimes you have a problem running simple commands like:
ifconfig, adduser, etc.

If you hit that problem, run them with the full path instead, e.g. /sbin/ifconfig.
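A short sketch of what is going on, assuming a bash-like shell: administrative commands such as ifconfig usually live in /sbin or /usr/sbin, and those directories are often missing from a normal user's PATH, so the shell simply cannot find them.

```shell
# The directories the shell searches for commands:
echo "$PATH"

# One-off fix: call the command with its full path, e.g.
# /sbin/ifconfig

# Session fix: add the sbin directories to PATH (put this in ~/.bashrc,
# or your shell's equivalent, to make it permanent)
PATH="$PATH:/sbin:/usr/sbin"
export PATH

# Confirm /sbin is now searched
case ":$PATH:" in
  *:/sbin:*) echo "sbin is on PATH" ;;
esac
```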

Thursday, January 23, 2014

Mysql Check

1. Check a Specific Table in a Database
If your application gives an error message saying that a specific table is corrupted, execute the mysqlcheck command to check that one table.
The following example checks TableName table in DBNAME database.
# mysqlcheck -c DBNAME TableName -u root -p
Enter password:
DBNAME.TableName    OK
You should pass the username/password to the mysqlcheck command. If not, you’ll get the following error message.
# mysqlcheck -c DBNAME TableName
mysqlcheck: Got error: 1045: Access denied for user 'root'@'localhost' (using password: NO) when trying to connect
Please note that the myisamchk command that we discussed a while back works similarly to the mysqlcheck command. However, the advantage of mysqlcheck is that it can be executed while the mysql daemon is running. So, using mysqlcheck you can check and repair corrupted tables while the database is still running.
2. Check All Tables in a Database
To check all the tables in a particular database, don’t specify the table name. Just specify the database name.
The following example checks all the tables in the DBNAME2 database.
# mysqlcheck -c DBNAME2  -u root -p
Enter password:
DBNAME2.JBPM_ACTION                               OK
DBNAME2.JBPM_BYTEARRAY                            OK
DBNAME2.JBPM_BYTEBLOCK                            OK
DBNAME2.JBPM_COMMENT                              OK
DBNAME2.JBPM_DECISIONCONDITIONS                   OK
DBNAME2.JBPM_DELEGATION                           OK
DBNAME2.JBPM_EVENT                                OK
..
3. Check All Tables and All Databases
To check all the tables and all the databases, use "--all-databases" along with the -c option as shown below.
# mysqlcheck -c  -u root -p --all-databases
Enter password:
DBNAME.TableName                              OK
DBNAME2.JBPM_ACTION                               OK
DBNAME2.JBPM_BYTEARRAY                            OK
DBNAME2.JBPM_BYTEBLOCK                            OK
..
..
mysql.help_category
error    : Table upgrade required. Please do "REPAIR TABLE `help_category`" or dump/reload to fix it!
mysql.help_keyword
error    : Table upgrade required. Please do "REPAIR TABLE `help_keyword`" or dump/reload to fix it!
..
If you want to check all tables of a few databases, specify the database names using "--databases".
The following example checks all the tables in DBNAME and DBNAME2 database.
# mysqlcheck -c  -u root -p --databases DBNAME DBNAME2
Enter password:
DBNAME.TableName                              OK
DBNAME2.JBPM_ACTION                               OK
DBNAME2.JBPM_BYTEARRAY                            OK
DBNAME2.JBPM_BYTEBLOCK                            OK
..
4. Analyze Tables using Mysqlcheck
The following analyzes TableName table that is located in DBNAME database.
# mysqlcheck -a DBNAME TableName -u root -p
Enter password:
DBNAME.TableName   Table is already up to date
Internally the mysqlcheck command uses "ANALYZE TABLE". While mysqlcheck is executing the analyze command, the table is locked and available to other processes only in read mode.
5. Optimize Tables using Mysqlcheck
The following optimizes TableName table that is located in DBNAME database.
# mysqlcheck -o DBNAME TableName -u root -p
Enter password:
DBNAME.TableName         OK
Internally the mysqlcheck command uses "OPTIMIZE TABLE". When you delete a lot of rows from a table, optimizing it helps to reclaim the unused space and defragment the data file. This might improve performance on huge tables that have gone through several updates.
6. Repair Tables using Mysqlcheck
The following repairs TableName table that is located in DBNAME database.
# mysqlcheck -r DBNAME TableName -u root -p
Enter password:
DBNAME.TableName        OK
Internally the mysqlcheck command uses "REPAIR TABLE". This will repair and fix corrupted MyISAM and ARCHIVE tables.
7. Combine Check, Optimize, and Repair Tables
Instead of checking and repairing separately, you can combine the check, optimize and repair functionality using "--auto-repair" as shown below.
The following checks, optimizes and repairs all the corrupted table in DBNAME database.
# mysqlcheck -u root -p --auto-repair -c -o DBNAME
You can also check, optimize and repair all the tables across all your databases using the following command.
# mysqlcheck -u root -p --auto-repair -c -o --all-databases
If you want to know what the command is doing while it is checking, add --debug-info as shown below. This is helpful while you are checking a huge table.
# mysqlcheck --debug-info -u root -p --auto-repair -c -o DBNAME TableName
Enter password:
DBNAME.TableName  Table is already up to date

User time 0.00, System time 0.00
Maximum resident set size 0, Integral resident set size 0
Non-physical pagefaults 344, Physical pagefaults 0, Swaps 0
Blocks in 0 out 0, Messages in 0 out 0, Signals 0
Voluntary context switches 12, Involuntary context switches 9
8. Additional Useful Mysqlcheck Options
The following are some of the key options that you can use along with mysqlcheck.
-A, --all-databases : Check all the databases
-a, --analyze : Analyze tables
-1, --all-in-1 : Use one query per database, with the tables listed comma-separated
--auto-repair : Repair a table automatically if it is corrupted
-c, --check : Check tables for errors
-C, --check-only-changed : Check only tables that have changed since the last check
-g, --check-upgrade : Check for version-dependent changes in the tables
-B, --databases : Check more than one database
-F, --fast : Check only tables that were not closed properly
--fix-db-names : Fix database names
--fix-table-names : Fix table names
-f, --force : Continue even when there is an error
-e, --extended : Perform an extended check on a table; this takes a long time to execute
-m, --medium-check : Faster than --extended, but does most checks
-o, --optimize : Optimize tables
-q, --quick : Faster than --medium-check
-r, --repair : Fix table corruption
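The combined check/optimize/repair command above is a good candidate for a nightly cron job. Here is a minimal sketch of a wrapper; the log path /var/log/mysqlcheck.log and the idea of keeping credentials in /root/.my.cnf (so the password stays off the command line) are my assumptions, so adapt both for your server:

```shell
#!/bin/sh
# Nightly maintenance sketch around mysqlcheck.
# Assumes credentials live in /root/.my.cnf, so no -u/-p is needed here.
LOG=/var/log/mysqlcheck.log
CMD="mysqlcheck --all-databases --auto-repair -c -o"

# Log what will run, so the cron mail / log shows the exact command.
echo "$(date): would run: $CMD"
# Uncomment on the database server to actually run it and keep a log:
# $CMD >>"$LOG" 2>&1
```

Drop a line like `0 3 * * * /root/mysqlcheck-nightly.sh` into root's crontab to run it at 3 a.m.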


Friday, October 4, 2013

Setup HAProxy Load Balancer on Ubuntu

Today I need to set up an HAProxy load balancer on Ubuntu. I think most of my tasks this month are about clustering, load balancing and HA. Two weeks ago I finished my Hyper-V cluster; luckily it is working very well.

OK, let's get to the point...

1. Install HAProxy into Ubuntu Server.

# apt-get install haproxy

2. Change the HAProxy configuration. 

# nano /etc/haproxy/haproxy.cfg

3. Add the following text to the file and save it:

global
        log 127.0.0.1   daemon debug
        #log 127.0.0.1  local0
        #log 127.0.0.1  local1 notice
        #log loghost    local0 info
        stats socket /tmp/stats
        maxconn 4096
        pidfile /var/run/haproxy.pid
        #chroot /usr/share/haproxy
        #user haproxy
        #group haproxy
        daemon
        #debug
        #quiet

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        option redispatch
        maxconn 3000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000


listen  webcluster *:80
        mode http
        stats enable
        stats uri /stats                # without this line the default is http://web.domain.com/haproxy?stats
        stats auth admin:p@ssw0rd
        balance roundrobin
        option httpchk HEAD / HTTP/1.0
        option forwardfor
        cookie LSW_WEB insert
        option  httpclose               # disable keep-alive
        server  192.18.250.22 192.18.250.22:80 check inter 5000 fastinter 1000 fall 1 weight 1
        #server web1 192.18.250.22:80 weight 1 fastinter 5000 rise 2 fall 3
        server  192.18.250.23 192.18.250.23:80 check inter 5000 fastinter 1000 fall 1 weight 1
        #server web2 192.18.250.23:80 weight 1 fastinter 5000 rise 2 fall 3
        option  checkcache              # block response if set-cookie & cacheable

        rspidel ^Set-cookie:\ IP=       # do not let this cookie tell our internal IP address

        #errorloc       502     http://192.168.114.58/error502.html
        #errorfile      503     /etc/haproxy/errors/503.http
        errorfile       400     /etc/haproxy/errors/400.http
        errorfile       403     /etc/haproxy/errors/403.http
        errorfile       408     /etc/haproxy/errors/408.http
        errorfile       500     /etc/haproxy/errors/500.http
        errorfile       502     /etc/haproxy/errors/502.http
        errorfile       503     /etc/haproxy/errors/503.http
        errorfile       504     /etc/haproxy/errors/504.http


4. Set the startup parameter for HAProxy: in the file below, set ENABLED=1 so HAProxy is allowed to start.

# nano /etc/default/haproxy

5. Start HAProxy from the command line:

# /etc/init.d/haproxy start

6. Access the web user interface from a browser:

http://web.domain.com/stats
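One extra safety net worth a sketch here: HAProxy can validate its configuration file without starting the daemon (the -c flag), which is handy before any restart so a typo in haproxy.cfg does not take the load balancer down. The path matches the file edited above.

```shell
# Build the validation command first so it can be logged or inspected.
CHECK="haproxy -c -f /etc/haproxy/haproxy.cfg"

echo "validate with: $CHECK"
# On the load balancer itself, run the check and restart only if it passes:
# $CHECK && /etc/init.d/haproxy restart
```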




Thursday, October 3, 2013

Set Up an NFS Mount on Ubuntu


NFS (Network File System) Mounts

NFS mounts are used to share a directory between several virtual servers. This has the advantage of saving disk space: the home directory is kept on only one virtual private server, and the others can connect to it over the network. When setting up mounts, NFS is most effective for permanent fixtures that should always be accessible.

How to set it up

An NFS mount is set up between at least two virtual servers. The machine hosting the shared directory is called the server, while the machines that connect to it are called clients.

This tutorial requires 2 servers: one acting as the server and one as the client. We will set up the server machine first, followed by the client. The following IP addresses will refer to each one:

Master: 172.19.250.10
Client: 172.19.250.20


The system should be set up as root. You can access the root user by typing

# su -

Setting Up the NFS Server (master)

1. Download the Required Software (nfs-kernel-server)

Start off by using apt-get to install the nfs programs.

# apt-get install nfs-kernel-server portmap

2. Export the Shared Directory

The next step is to decide which directory we want to share with the client server. The chosen directory should then be added to the /etc/exports file, which specifies both the directory to be shared and the details of how it is shared.

Suppose we wanted to share two directories: /home and /var/nfs.

Because /var/nfs does not exist, we need to do two things before we can export it.

First, we need to create the directory itself:

# mkdir /var/nfs/


Second, we should change the ownership of the directory to the user nobody and the group nogroup. These represent the default user and group through which clients access a directory shared through NFS.

Go ahead and chown the directory:

# chown nobody:nogroup /var/nfs

Or, in my case, I set the ownership based on my old setup.


After completing those steps, it’s time to export the directories to the other VPS:

# nano /etc/exports


Add the following lines to the bottom of the file, sharing both directories with the client:

/home 172.19.250.20(rw,sync,no_root_squash,no_subtree_check)

/var/nfs 172.19.250.20(rw,sync,no_subtree_check)



These settings accomplish several tasks:

* rw: This option allows the client server to both read and write within the shared directory

* sync: Sync confirms requests to the shared directory only once the changes have been committed.

* no_subtree_check: This option disables subtree checking. When a shared directory is a subdirectory of a larger filesystem, NFS scans every directory above it to verify its permissions and details. Disabling the subtree check may increase the reliability of NFS, but reduces security.

* no_root_squash: This phrase allows root to connect to the designated directory


Once you have entered in the settings for each directory, run the following command to export them:

# exportfs -a
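After exporting, it is worth confirming what the server is actually sharing. A small informational sketch using the standard NFS server tools (run this on the master, 172.19.250.10 in this post):

```shell
# List what this machine is currently exporting over NFS.
if command -v exportfs >/dev/null 2>&1; then
  result=$(exportfs -v 2>&1)
  result=${result:-"no directories exported yet"}
else
  result="nfs-kernel-server tools are not installed on this machine"
fi
echo "$result"

# On a real NFS server you can also check the view a client will get:
# showmount -e localhost
```

You should see /home and /var/nfs listed with the options from /etc/exports.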


Setting Up the NFS Client


1. Download the Required Software (nfs-common)

Start off by using apt-get to install the nfs programs.

# apt-get install nfs-common portmap

2. Mount the Directories

Once the programs have been downloaded to the client server, create the directories that will contain the NFS shared files:

# mkdir -p /mnt/nfs/home

# mkdir -p /mnt/nfs/var/nfs



Then go ahead and mount them

# mount 172.19.250.10:/home /mnt/nfs/home

# mount 172.19.250.10:/var/nfs /mnt/nfs/var/nfs



You can use the df -h command to check that the directories have been mounted. You will see them last on the list.

# df -h

Filesystem Size Used Avail Use% Mounted on

/dev/sda 20G 948M 19G 5% /

udev 119M 4.0K 119M 1% /dev

tmpfs 49M 208K 49M 1% /run

none 5.0M 0 5.0M 0% /run/lock

none 122M 0 122M 0% /run/shm

172.19.250.10:/home 20G 948M 19G 5% /mnt/nfs/home

172.19.250.10:/var/nfs 20G 948M 19G 5% /mnt/nfs/var/nfs



Additionally, use the mount command to see the entire list of mounted file systems.

# mount


Your list should look something like this:

/dev/sda on / type ext4 (rw,errors=remount-ro,barrier=0) [DOROOT]

proc on /proc type proc (rw,noexec,nosuid,nodev)

sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)

none on /sys/fs/fuse/connections type fusectl (rw)

none on /sys/kernel/debug type debugfs (rw)

none on /sys/kernel/security type securityfs (rw)

udev on /dev type devtmpfs (rw,mode=0755)

devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)

tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)

none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)

none on /run/shm type tmpfs (rw,nosuid,nodev)

rpc_pipefs on /run/rpc_pipefs type rpc_pipefs (rw)

172.19.250.10:/home on /mnt/nfs/home type nfs (rw,vers=4,addr=172.19.250.10,clientaddr=172.19.250.20)

172.19.250.10:/var/nfs on /mnt/nfs/var/nfs type nfs (rw,vers=4,addr=172.19.250.10,clientaddr=172.19.250.20)



Testing the NFS Mount


Once you have successfully mounted your NFS directories, you can test that they work by creating files on the Client and checking their availability on the Server.

Create a file in each directory to try it out:

# touch /mnt/nfs/home/example /mnt/nfs/var/nfs/example


You should then be able to find the files on the Server in the /home and /var/nfs directories.

# ls /home

# ls /var/nfs/



You can ensure that the mount is always active by adding the directories to the fstab file on the client. This will ensure that the mounts start up after the server reboots.

# nano /etc/fstab

++++++++++++++++++++++++
172.19.250.10:/home /mnt/nfs/home nfs auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0

172.19.250.10:/var/nfs /mnt/nfs/var/nfs nfs auto,noatime,nolock,bg,nfsvers=3,intr,tcp,actimeo=1800 0 0

+++++++++++++++++++++++++++

You can learn more about the fstab options by typing in:

# man nfs


Any subsequent restarts will include the NFS mount, although the mount may take a minute to load after the reboot. You can check the mounted directories with the two earlier commands:

# df -h

# mount


Removing the NFS Mount

Should you decide to remove a directory, you can unmount it using the umount command:

# cd

# umount /mnt/nfs/home



You can confirm that the mounts were removed by looking at the filesystem again.

# df -h

You should find your selected mounted directory gone.

Monday, September 16, 2013

Hyper-V Cluster

OK... This is my new task after taking my confinement leave.. I almost forgot my Linux commands and Windows skills. Huhuhu... But I only needed one week to recover my skills... and I think we humans also need a DRS (Disaster Recovery System)... so if anything happens to our brain, we can restore it back.. huhuhu..

Actually, a Hyper-V cluster is not very hard to do. It is simple if you understand how a cluster works.

The steps are the same as in the reference below:

1. The name of the virtual switch SHOULD be the same on both cluster nodes, host1 and host2.
Description: Validate that all specified nodes share the same set of network resource pools and virtual switches.
Gathering information about network resource pools used by servers running Hyper-V.
Node: host1.contoso.com
Virtual Ethernet Switches: 'Storage_net', 'Internal_IW', 'Segment250', 'DMZ-SCVMM', 'Client-Management-Network'
Node: host2.contoso.com
Virtual Ethernet Switches: 'Segment250', 'HP NC373i Multifunction Gigabit Server Adapter #42 - Virtual Switch'

Processing information about network resource pools used by servers running Hyper-V.
Node 'host1.contoso.com' is missing a virtual Ethernet switch 'HP NC373i Multifunction Gigabit Server Adapter #42 - Virtual Switch' that is present on at least one other node. Either remove the virtual Ethernet switch from all nodes or ensure that it is present on all nodes.
Node 'host2.contoso.com' is missing a virtual Ethernet switch 'Client-Management-Network' that is present on at least one other node. Either remove the virtual Ethernet switch from all nodes or ensure that it is present on all nodes.
Node 'host2.contoso.com' is missing a virtual Ethernet switch 'DMZ-SCVMM' that is present on at least one other node. Either remove the virtual Ethernet switch from all nodes or ensure that it is present on all nodes.
Node 'host2.contoso.com' is missing a virtual Ethernet switch 'Internal_IW' that is present on at least one other node. Either remove the virtual Ethernet switch from all nodes or ensure that it is present on all nodes.
Node 'host2.contoso.com' is missing a virtual Ethernet switch 'Storage_net' that is present on at least one other node. Either remove the virtual Ethernet switch from all nodes or ensure that it is present on all nodes.

2. Version error: you have to update both servers so that they run the same version.

Result:
Each host had a virtual machine installed, e.g. vm_host1.contoso.com on host1.contoso.com and vm_host2.contoso.com on host2.contoso.com. Then I shut down host1.contoso.com. You could watch vm_host1.contoso.com moving from host1.contoso.com to host2.contoso.com. A great result... I guess :) .. Done..