Wednesday, 14 December 2011

Block P2P Traffic on a Cisco IOS Router using NBAR

In the following example, we'll use NBAR to block P2P traffic on our router's Gigabit interface.
  • Create a class-map to match the protocols to be blocked.
  • Create a policy-map to specify what should be done with the traffic.
  • Apply the policy to the user-facing (incoming) interface.
conf t
!--- IP CEF must be enabled first: P2P traffic
!--- cannot be blocked when IP CEF is disabled.
ip cef
!--- Configure the class map named p2p to match the P2P protocols
!--- to be blocked with this class map p2p.
class-map match-any p2p
!--- Mention the P2P protocols to be blocked in order to block the
!--- P2P traffic flow between the required networks. edonkey,
!--- fasttrack, gnutella, kazaa2, skype are some of the P2P
!--- protocols used for P2P traffic flow. This example
!--- blocks these protocols.
match protocol edonkey
match protocol fasttrack
match protocol gnutella
match protocol kazaa2
match protocol winmx
match protocol skype
!--- Here the policy map named SDM-QoS-Policy is created, and the
!--- configured class map p2p is attached to this policy map.
!--- Drop is the command to block the P2P traffic.
policy-map SDM-QoS-Policy
class p2p
drop
!--- Use the interface where you wish to block the P2P traffic
interface GigabitEthernet 0/1
!--- The command ip nbar protocol-discovery enables NBAR
!--- protocol discovery on this interface where the QoS
!--- policy configured is being used.
ip nbar protocol-discovery
!--- Use the service-policy command to attach a policy map to
!--- an input interface so that the interface uses this policy map.
service-policy input SDM-QoS-Policy
!--- Save the current configuration
end
copy running-config startup-config
And that's it.
You can ensure the policy is working with the command:
show policy-map
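For per-interface counters, a couple of more specific checks are useful (interface name as configured above):

```
show policy-map interface GigabitEthernet 0/1 input
show ip nbar protocol-discovery interface GigabitEthernet 0/1
```

The first shows per-class match and drop counters for the attached policy; the second shows what NBAR is actually classifying on the interface.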
However, if your version of IOS is older than 12.2(13)T, you will need some extra steps. This method sets the DSCP field on incoming packets and then drops those packets on the outbound interface. In the following example, we'll block P2P using the DSCP field.
  • Create a class-map to match the protocols to be blocked.
  • Create a policy-map to specify what should be done with the traffic.
  • Create an access-list to block packets with the DSCP field set to 1.
  • Apply the policy to the user-facing (incoming) interface.
  • Apply the blocking access-list to the outbound interface.
conf t
!--- IP CEF must be enabled first: P2P traffic
!--- cannot be blocked when IP CEF is disabled.
ip cef
!--- Configure the class map named p2p to match the P2P protocols
!--- to be blocked with this class map p2p.
class-map match-any P2P
match protocol edonkey
match protocol fasttrack
match protocol gnutella
match protocol kazaa2
match protocol winmx
match protocol skype
policy-map P2P
class P2P
set ip dscp 1
!--- Block all traffic with the DSCP field set to 1.
access-list 100 deny ip any any dscp 1
access-list 100 permit ip any any
interface GigabitEthernet0/1
service-policy input P2P
interface POS1/1
ip access-group 100 out
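With the DSCP variant, the ACL counters on the outbound interface show whether marked packets are actually being dropped; a quick check:

```
show access-lists 100
show policy-map interface GigabitEthernet0/1 input
```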


Installing DNS Master and Slave Servers

Install bind:
apt-get install bind9
Configure The Master

First we need to stop bind9:
/etc/init.d/bind9 stop
Edit the /etc/bind/named.conf.options file so it looks something like this (use the forwarders of your liking):
options {
directory "/var/cache/bind";
// If there is a firewall between you and nameservers you want
// to talk to, you may need to fix the firewall to allow multiple
// ports to talk. See
// If your ISP provided one or more IP addresses for stable
// nameservers, you probably want to use them as forwarders.
// Uncomment the following block, and insert the addresses replacing
// the all-0's placeholder.
dnssec-enable yes;
query-source address * port 53;
allow-query { any; };
forwarders {;;;
};
auth-nxdomain no; # conform to RFC1035
//listen-on-v6 { any; };
};
Add the IP of this newly installed DNS server (the localhost) to your /etc/resolv.conf to use it:
echo "search linux.lan" > /etc/resolv.conf
echo "nameserver" >> /etc/resolv.conf
Now restart bind9:
/etc/init.d/bind9 start
And test!
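A typical quick test, run on the server itself (the name queried is just an example):

```
dig @localhost
```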
If you get a reply, then your DNS master server is working and ready to use. We will now fill and use the linux.lan domain with our new master server.

Setting up the linux.lan domain

The master DNS server is currently just forwarding requests to the server(s) you have configured in the options file. So, we will now install and configure our own domain and let our new server handle all requests regarding that domain.
Let's start by creating the directory where we will store the zone file. This file contains all the info about the domain.
mkdir /etc/bind/zones/
Next we will create the zone file, /etc/bind/zones/master_linux.lan, something like this:
@ IN SOA ns1.linux.lan. hostmaster.linux.lan. (
199802151 ; serial, today's date + today's serial #
8H ; refresh, seconds
2H ; retry, seconds
4W ; expire, seconds
1D ) ; minimum, seconds
TXT "Linux.LAN, serving YOUR domain :)"
NS ns1 ; Inet Address of name server
NS ns2
MX 10 mail ; Primary Mail Exchanger
localhost A
ns1 A
ns2 A
www CNAME ns1
Here we have created a simple zone file with both nameservers and a www alias for ns1, just in case we have Apache running on ns1 ;)

Now edit /etc/bind/named.conf.local and add:
zone "linux.lan" {
type master;
file "/etc/bind/zones/master_linux.lan";
};
That's it; we can now restart bind and check if it works:
/etc/init.d/bind9 restart
And test if it's working:
ping ns1.linux.lan
At this stage you should have a working and usable DNS server.
If it says it cannot find the domain, maybe dhclient has changed your nameserver entry... You should check that.

Installing The Slave
Basically, the slave uses the same basic setup we built in the first part (just before we added the zone file). We will make some small changes to both master and slave so they work together. The zone file will be transferred over the net, authenticated with a shared key.
Unless otherwise stated, these commands are for the slave ONLY.

Create the zones dir:
mkdir /etc/bind/zones
For both master AND slave edit /etc/bind/named.conf.options and make sure you have:
dnssec-enable yes;
Now we need a secret key. This will generate a .private and a .key file; the 'Key:' line in the .private file holds the hash key:
dnssec-keygen -a HMAC-MD5 -b 128 -n HOST linux.lan
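The secret can be pulled out of the .private file with a one-liner. Here is a sketch against a fake key file (the filename and key below are made up; use the file dnssec-keygen actually created):

```shell
# A fake .private file just to demonstrate the extraction
cat > Kdemo.private <<'EOF'
Private-key-format: v1.2
Algorithm: 157 (HMAC_MD5)
Key: q1p5P/Zs2Fxwl+nrDjvHIg==
EOF
# The hash key to paste into named.conf:
awk '/^Key:/ {print $2}' Kdemo.private
```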
Add this in your /etc/bind/named.conf on master AND slave:
key "TRANSFER" {
algorithm hmac-md5;
secret "---HASHKEY---";
};
On the master, add the slave IP to /etc/bind/named.conf:
server {
keys { TRANSFER; };
};
And on the slave we add the master IP to /etc/bind/named.conf:
server {
keys { TRANSFER; };
};
Add to /etc/bind/named.conf.local:
zone "linux.lan" {
type slave;
file "/etc/bind/zones/slave_linux.lan";
masters {; };
allow-notify {; };
};
Finally we need to, on BOTH hosts, add this to /etc/bind/named.conf:
include "/etc/bind/rndc.key";
In order to have a successful zone transfer, both systems need a synchronised clock, so:
apt-get -y install ntpdate

Restart bind on both machines and notice the new zone file on the slave.
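You can also test the keyed transfer by hand from the slave with dig's TSIG option (master address and secret are placeholders):

```
dig @<master-ip> linux.lan axfr -y hmac-md5:TRANSFER:---HASHKEY---
```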
If you're wondering why _updates_ to the zonefile on your master seem to fail, check the expire etc. settings inside the zonefile.

NOTE: if you get an error in syslog saying "bind dumping master file (...) permission denied" on Ubuntu, check the /etc/apparmor.d/usr.sbin.named file and change the line:
/etc/bind/** r,
to:
/etc/bind/** rw,


Friday, 18 November 2011

Create symbolic links for multiple files simultaneously

As simple as:
for file in $(ls <path>|grep <something>); do ln -s <path>$file <new_path>$file; done
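Parsing ls breaks on filenames with spaces; a safer variant globs instead. This self-contained sketch uses throwaway example directories:

```shell
# Demo in throwaway directories (paths are examples)
src=$(mktemp -d)
dst=$(mktemp -d)
touch "$src/a.log" "$src/b.log" "$src/notes.txt"
# Glob instead of parsing ls, so odd filenames survive; link only the .log files
for file in "$src"/*.log; do
    ln -s "$file" "$dst/$(basename "$file")"
done
ls "$dst"
```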


Monday, 17 October 2011

Drop all tables in a MySQL database

If you want to drop all tables from one MySQL DB without dropping the DB itself, you can use this command:
mysqldump -u[USERNAME] -p[PASSWORD] --add-drop-table --no-data [DATABASE] | grep ^DROP | mysql -u[USERNAME] -p[PASSWORD] [DATABASE]
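The trick is that mysqldump with --add-drop-table --no-data emits a DROP TABLE statement before each CREATE, and grep ^DROP keeps only those lines, which are then piped back into mysql. A miniature illustration of the filter step (the dump text is fake):

```shell
# What the grep in the pipeline keeps: only the DROP statements
printf 'CREATE TABLE `t1` (...);\nDROP TABLE IF EXISTS `t1`;\nDROP TABLE IF EXISTS `t2`;\n' | grep ^DROP
```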


Saturday, 8 October 2011

VDI is not Available - Xenserver error

In order to recover the VM you have to:

1. Run xe vdi-list and determine the UUID of the VDI attached to the VM giving you the problem, like this:

xe vdi-list | grep -i <VM-NAME> -B2 -A2
2. Once you have the UUID run:

xe vdi-forget uuid=<VDI-UUID>
3. Rescan the SR with:
xe sr-scan uuid=<SR-UUID>
4. Now, in XenCenter, go to the VM and click on the Storage tab. You should see it empty. Then click Attach; the first entry on the list should be NO NAME. Attach it to the VM, wait about 30 seconds, then power it up!

5. In most cases it should be up and running. If you are still getting errors, wait a minute and try again; if it's still not working, repeat the previous steps.


Friday, 7 October 2011

Some VMs are missing after a XenServer failure

Without the HA feature enabled, there is no mechanism that checks whether the host which went down had any VMs running at the time of the failure, and nothing updates the database to mark the VMs which were running on the failed host as halted after the crash.

So, when you do not have the possibility to enable HA, you can do the following to make the VMs available in XenCenter again:

1. Locate the VMs which were running on the failed host with the following command:
xe vm-list resident-on=<UUID of the XenServer host> --multiple
You can determine the UUID of the host which failed by running the `xe host-list` command.

2. Reset the power status of the VMs to halted using the following command:
xe vm-reset-powerstate vm=<Name of VM received from the command in step 1> force=true

(repeat this step for all VMs which were running on the failed host)

Once you reset the powerstate of the VM using the above command, the VM should appear in XenCenter again and can be started on another XenServer host.

NOTE: make sure that the VM you reset to halted using the vm-reset-powerstate command is actually powered off (e.g. because it was running on a XenServer which really failed) and not running on any other XenServer. Do NOT use this command while simulating the failure of a XenServer by stopping only the network of the host.


XenServer Pool, Master host failure

Every member of a resource pool contains all the information necessary to take over the role of master if required. When a master node fails, the following sequence of events occurs:

1. The members realize that communication has been lost and each tries to reconnect for sixty seconds.

2. Each member then puts itself into emergency mode, whereby the member XenServer hosts will only accept the pool-emergency commands:
xe pool-emergency-reset-master
xe pool-emergency-transition-to-master
If the master comes back up at this point, it re-establishes communication with its members, the members leave emergency mode, and operation returns to normal.
However, if the master is really dead, choose one of the remaining members and run the command:
xe pool-emergency-transition-to-master
on it. Once it has become the master, issue the command:
xe pool-recover-slaves
and the members will now point to the new master.
If you repair or replace the server that was the original master, you can simply bring it up, install the XenServer host software, and add it to the pool.


Tuesday, 4 October 2011

How to install Android SDK without internet connection

The magic URL is:
That is the XML file from which the URLs for downloading the SDK packages are obtained.

For example, if you want to download the Mac version of the Android SDK for version 2.0, you can look up that XML file. You will find a block under the SDK 2.0 tag like this:
<sdk:archive arch="any" os="macosx"><sdk:size>74956356</sdk:size>
<sdk:checksum type="sha1">2a866d0870dbba18e0503cd41e5fae988a21b314</sdk:checksum>
So the URL would be:


Friday, 30 September 2011

Add extra SWAP file

First you need to create an empty swap file; the next command will create a 1 GB file:
dd if=/dev/zero of=/mnt/extra.swap count=1024 bs=1048576 #(1048576 bytes = 1 MB, so 1024 blocks = 1 GB)
sudo chmod 600 /mnt/extra.swap
sudo mkswap /mnt/extra.swap
Now edit your /etc/fstab and add the line:
/mnt/extra.swap none swap sw 0 0
Finally, activate the new swap file with:
swapon -a
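To confirm the file ended up the right size and the swap is active (a quick sanity check, run as root):

```
ls -lh /mnt/extra.swap
swapon -s
```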


Wednesday, 28 September 2011

How to Configure SNMP in Xenserver 5.x

Change firewall settings

You must change your firewall settings as follows to allow communication through the port that SNMP uses:

1. Open the file /etc/sysconfig/iptables in your preferred editor.

2. Add the following line to the INPUT section, after the line with -A RH-Firewall-1-INPUT -p udp --dport 5353... :
-A RH-Firewall-1-INPUT -p udp --dport 161 -j ACCEPT
3. Save and close the file.

4. Restart the firewall service:
# service iptables restart

Enable snmpd service
1. To enable snmpd service run the following command:
# chkconfig snmpd on
2. Start the snmpd service:
# service snmpd start

Change SNMP configuration
1. To change snmp configuration edit the /etc/snmp/snmpd.conf file.

2. Restart the snmpd service:
# service snmpd restart

SNMP configuration examples
Default settings
By default you can view only the systemview subtree.

View whole subtree

1. Change the lines as follows:
After the lines starting with:
view systemview included (...)
Add this:
view all included .1
Change the line:
access notConfigGroup "" any noauth exact systemview none none
to:
access notConfigGroup "" any noauth exact all none none
2. Save the file.

3. Restart the service:
# service snmpd restart
Change community string (default is "public")
Change the line:
com2sec notConfigUser default public
to:
com2sec notConfigUser default anything_you_need
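From another machine you can then verify that SNMP answers (the community string and host below are examples):

```
snmpwalk -v 2c -c public <xenserver-ip> system
```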


Friday, 23 September 2011

Packet loss monitoring with zabbix

1. create a file named "packetloss" at this location "/etc/zabbix/externalscripts/"
vi /etc/zabbix/externalscripts/packetloss
note: you may need to create the external scripts directory:
mkdir -p /etc/zabbix/externalscripts
2. Copy and paste this into the "packetloss" file:
if [ -z "$1" ]; then
echo "missing ip / hostname address"
echo " example: ./packetloss 10000"
echo "10000 = 10000 bytes to ping with. The more you use, the harder the network has to work to deliver it, and you start to see packet loss. Pinging with the normal ping size is kinda pointless; on LAN networks I recommend 10000 - 20000 and on the Internet around 1394 (1500 - 48 (pppoe + IP + TCP) - 58 (ipsec))"
echo "Remember some firewalls might block pings over 100"
echo " "
exit 1
fi
if [ -z "$2" ]; then
echo "missing ping size"
echo " example: ./packetloss 10000"
echo "10000 = 10000 bytes to ping with. The more you use, the harder the network has to work to deliver it, and you start to see packet loss. Pinging with the normal ping size is kinda pointless; on LAN networks I recommend 10000 - 20000 and on the Internet around 1394 (1500 - 48 (pppoe + IP + TCP) - 58 (ipsec))"
echo "Remember some firewalls might block pings over 100"
echo " "
exit 1
fi
# PINGCOUNT was never defined in the original post; 10 is a reasonable default
PINGCOUNT=10
tal=`ping -q -i0.30 -n -s $2 -c$PINGCOUNT $1 | grep "packet loss" | cut -d " " -f6 | cut -d "%" -f1`
if [ -z "$tal" ]; then
echo 100
else
echo $tal
fi
3. Make the file executable:
chmod u+x /etc/zabbix/externalscripts/packetloss
4. In Zabbix, verify that the host/template you want to monitor packet loss on has a valid IP or host name and the correct "Connect to" option selected.

Then under Item you create a new Item for that host/template
Type: External Check
Key: packetloss[10000]

5. Now check Monitoring -> Latest data for that host and you should start seeing packet loss values.


The number 10000 is the ping size; it's very hard to spot packet loss when only sending a few bytes as a normal ping does.

Try increasing the size until you see packet loss; then you know you're pushing your equipment to the limit.


Thursday, 22 September 2011

Backuppc and MySQL

The best way to back up a MySQL server using Backuppc is to use a pre-dump script.

You can use $Conf{DumpPreUserCmd} to issue a mysqldump.

Stdout from these commands will be written to the Xfer (or Restore) log file. Note that all Cmds are executed directly without a shell, so the program name needs to be a full path, and you can't include shell syntax like redirection and pipes; put that in a script if you need it.

So in our case we would create a script, on the Backuppc client, to dump all databases into a file:
vi /usr/local/sbin/
and paste the following into it:
#!/bin/sh
# Example values; adjust user, password and destination to your setup
MYSQLUSER="root"
MYSQLPASS="secret"
DEST="/backup/alldatabases.sql"
LOCKFILE="/tmp/mysqldump.lock"
MYSQL="$(which mysql)"
MYSQLDUMP="$(which mysqldump)"
# no need to change anything below...
if [ -f $LOCKFILE ]; then
echo "Lockfile $LOCKFILE exists, exiting!"
exit 1
fi
touch $LOCKFILE
echo "== MySQL Dump Starting $(date) =="
$MYSQLDUMP --single-transaction --user=${MYSQLUSER} --password="${MYSQLPASS}" -A > ${DEST}
echo "== MySQL Dump Ended $(date) =="
rm -f $LOCKFILE
Make the script executable:
chmod +x /usr/local/sbin/ 
and set $Conf{DumpPreUserCmd} with:
$sshPath -q -x -l root $host /usr/local/sbin/
Now you just have to make sure that Backuppc is backing up the /backup folder (or whatever folder you have set in the script), and you can also exclude the /var/lib/mysql folder from Backuppc backups.


Backuppc Got unknown type errors

Type 8 are socket files and type 9 is unknown (Solaris door files).
These are transient files that don't need to be backed up since they can't be restored - they are created by the applications that need them.

The warning messages are benign - they simply mean that those files are being skipped.


watch backuppc progress

From the Backuppc server you can check which files Backuppc is using at the moment with (note the escaped \$9, so it reaches awk instead of being expanded by the outer shell):
watch "lsof -n -u backuppc | egrep ' (REG|DIR) ' | egrep -v '( (mem|txt|cwd|rtd) |/LOG)' | awk '{print \$9}'"
you can also check the running log with:
On the client side you can use:
watch "lsof -n | grep rsync | egrep ' (REG|DIR) ' | egrep -v '( (mem|txt|cwd|rtd) |/LOG)' | awk '{print \$9}'"


SVN Dump Parts Of A Repository

Assuming the following SVN repo structure:
- repository root
| - project1
| - project2
| - project3
To dump the project1 history into a portable, re-creatable format, first use svnadmin dump, like this:
svnadmin dump [path to repo] > repo.dump
This creates a dump of the entire repository in a file called repo.dump. It might take some time and is CPU intensive, so it is best performed outside normal work hours...
Then use svndumpfilter to filter just for the project1 folder (see folder tree above):
svndumpfilter include project1 < repo.dump > project1.dump
If you have nested repositories, svndumpfilter breaks with a syntax error. To get around this you need to run the filter multiple times using the 'exclude' directive until you have what you want, chaining the output through intermediate files:
svndumpfilter exclude project2 < repo.dump > tmp.dump
svndumpfilter exclude project3 < tmp.dump > project1.dump
At the end you get a full svn repository that could be re-created anywhere, like this:
svnadmin create /var/svn/project1
svnadmin load /var/svn/project1 < project1.dump
mkdir -p ~/workingcopies/project1
svn co file:///var/svn/project1 ~/workingcopies/project1/


Thursday, 15 September 2011

Remove a failed Xenserver from Pool

Run the following command to discover the UUID of the broken server:
xe host-list
Use the following command to remove the host:
xe host-forget uuid=<uuid_of_broken_server>
Note that a host should only be forgotten if it is physically unrecoverable; if possible, hosts should be 'ejected' from the pool instead.
Once a host has been forgotten, it will have to be re-installed.

If the forget command fails with:
This host cannot be forgotten because there are some user VMs still running
Use this command to find which VMs are listed as running on that server:
xe vm-list resident-on=<uuid_of_broken_server>
Then, for each VM returned by the previous command use the following command:
xe vm-reset-powerstate uuid=<VM_uuid>
Then try the forget command again.


Sunday, 4 September 2011

Combining multiple SVN repositories into one

Assuming that the existing repositories have a structure like:
- repository root
 | - branches
 | - tags
 | - trunk

and you want a structure something like:
- repository root
 | - projectA
   | - branches
   | - tags
   | - trunk
 | - projectB
   | - branches
   | - tags
   | - trunk

Then for each of your project repositories:
svnadmin dump <filesystem path to repo> > project<n>.dmp
Then for each of the dump files:
svnadmin load --parent-dir "project<n>" <filesystem path to repos>
More complex manipulations are possible, but this is the simplest and most straightforward. Changing the source repository structure during a dump/load is hazardous, but doable through a combination of svnadmin dump, svndumpfilter, hand-editing or additional text filters, and svnadmin load.

What about the revision numbering?
Let's assume that you have two repositories, one with HEAD revision 100 and the other with HEAD revision 150.

You dump the first repository and load it into the new one: you end up with the full history of the first repository, from revision 0 to revision 100.

Then you dump the second repository and load it into the new one: it gets loaded with its full history, and the only things that change are the actual revision numbers. The history of the second repository will be represented in the new repository from revision 101 to revision 250.

The full history of both repositories is preserved; only the revision numbers change for the repository that is imported second.

The same of course applies for more than two repositories.


MySQL - Converting to Per Table Data File for InnoDB

Issue with shared InnoDB /var/lib/mysql/ibdata1 storage
InnoDB tables currently store data and indexes into a shared tablespace (/var/lib/mysql/ibdata1). Due to the shared tablespace, data corruption for one InnoDB table can result in MySQL failing to start up on the entire machine. Repairing InnoDB corruption can be extremely difficult to perform and can result in data loss for tables that were not corrupted originally during that repair process.

Since MySQL 5.5 will be using InnoDB as the default storage engine, it is important to consider the consequences of continuing to utilize the shared tablespace in /var/lib/mysql/ibdata1.

Changing to per-table tablespace with innodb_file_per_table

As an option to resolve the issue, MySQL has a configuration variable called innodb_file_per_table. To use this variable, the following could be placed into the [mysqld] section of /etc/my.cnf to give each new InnoDB table its own file:
innodb_file_per_table
After adding the line, MySQL would need to be restarted on the machine.
Any database created after the line is added will get .ibd files in the /var/lib/mysql/<database>/ location. Please note that the shared tablespace will still hold the internal data dictionary and undo logs.

Converting old InnoDB tables
Any old databases with InnoDB tables set to previously share the tablespace in ibdata1 will still be using that file, so those old databases would need to be switched to the new system. The following command in MySQL CLI would create a list of InnoDB engine tables and a command to run for each to convert them to the new innodb_file_per_table system:
select concat('alter table ',TABLE_SCHEMA ,'.',table_name,' ENGINE=InnoDB;') as command FROM INFORMATION_SCHEMA.tables where table_type='BASE TABLE' and engine = 'InnoDB';
An example for Roundcube on my test machine shows the following return upon running the prior command:
alter table roundcube.cache ENGINE=InnoDB;
alter table roundcube.contacts ENGINE=InnoDB;
alter table roundcube.identities ENGINE=InnoDB;
alter table roundcube.messages ENGINE=InnoDB;
alter table roundcube.session ENGINE=InnoDB;
alter table roundcube.users ENGINE=InnoDB;
You would then simply need to issue the commands returned by the MySQL CLI to convert each table to the new innodb_file_per_table format.

Please note that these commands would only need to be run in MySQL command line for the conversion.

You can use the following script:
#!/bin/sh
# Example credentials; adjust to your setup
MYSQLUSER="root"
MYSQLPASS="secret"
MYSQL="$(which mysql)"
# no need to change anything below...
TBLS=$($MYSQL -u $MYSQLUSER -p$MYSQLPASS -Bse "select concat(TABLE_SCHEMA ,'.',table_name) as tbl FROM INFORMATION_SCHEMA.tables where table_type='BASE TABLE' and engine = 'InnoDB';")
for tbl in $TBLS
do
echo "Converting table $tbl"
$MYSQL -u $MYSQLUSER -p$MYSQLPASS -Bse "alter table $tbl ENGINE=InnoDB;"
done
Possible Issues for Converting Old InnoDB Tables
1. Possible system load might occur during the conversion
2. Possible issues with drive space filling up for the conversion


How to determine type of mysql database

To determine the storage engine being used by a table, you can use show table status. The Engine field in the results will show the database engine for the table. Alternately, you can select the engine field from information_schema.tables.

To get the type per database:
mysql -u root -p'<password>' -Bse 'select distinct table_schema, engine from information_schema.tables'
For a specific table use:
select engine from information_schema.tables where table_schema = 'schema_name' and table_name = 'table_name'
You can change between storage engines using alter table:
alter table the_table engine = InnoDB;
Where, of course, you can specify any available storage engine.


Wednesday, 31 August 2011

Apache reverse proxy of virtual hosts

Here is a simple example of how to create a vhost for a reverse proxied site:

NameVirtualHost *:80
<VirtualHost *:80>
    ProxyRequests off
    ProxyPass /
    ProxyPassReverse /
</VirtualHost>
<VirtualHost *:80>
    ProxyRequests off
    ProxyPass /
    ProxyPassReverse /
</VirtualHost>
This is also useful if you have a tomcat or anything else running on a different port and you want to serve everything on the same port:

<VirtualHost *:80>
    ProxyRequests off
    ProxyPass /myapp http://localhost:8080/myapp
    ProxyPassReverse /myapp http://localhost:8080/myapp
</VirtualHost>


Friday, 26 August 2011

Recursively remove all empty directories

In Linux (any of these variants works; -delete implies depth-first traversal):
find <parent-dir> -type d -empty -delete
find <parent-dir> -empty -type d -exec rmdir {} +
find <parent-dir> -depth -type d -empty -exec rmdir -v {} +
find <parent-dir> -depth -type d -empty -exec rmdir -v {} \;
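The first variant can be sketched end-to-end in a throwaway directory; because the traversal is depth-first, nested empty directories collapse in a single pass:

```shell
# Demo: build a small tree with some empty branches (throwaway dir)
d=$(mktemp -d)
mkdir -p "$d/a/b/c" "$d/keep"
touch "$d/keep/file.txt"
# c is removed first, which empties b, which empties a; keep survives
find "$d" -type d -empty -delete
ls "$d"
```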

In Windows:
for /f "usebackq" %%d in (`"dir /ad/b/s | sort /R"`) do rd "%%d"


Thursday, 25 August 2011

Cisco - Set default route per interface

If you want to set a different default exit route for your clients and servers, you can use route-maps to achieve this, using policy-based routing.
With the following configuration, the servers network will use the router's normal default gateway, while the clients network will be policy-routed to a different next hop:

interface GigabitEthernet0/1.1
description Servers Network
encapsulation dot1Q 1 native
ip address
interface GigabitEthernet0/1.2012
description Clients Network
encapsulation dot1Q 2012
ip address
ip policy route-map lanhop
ip route
! -- This sets the default GW
access-list 100 permit ip any
! -- This matches the entire network
route-map lanhop permit 10
match ip address 100
set ip default next-hop
! -- This sets the default GW for the IPs matched by the previous acl.

This is a sample configuration for policy-based routing using the set ip default next-hop and set ip next-hop commands:
  • The set ip default next-hop command verifies the existence of the destination IP address in the routing table; if the destination IP address exists, the command does not policy-route the packet but forwards it based on the routing table. If the destination IP address does not exist, the command policy-routes the packet by sending it to the specified next hop.
  • The set ip next-hop command verifies the existence of the next hop specified; if the next hop exists in the routing table, the command policy-routes the packet to that next hop. If the next hop does not exist in the routing table, the command uses the normal routing table to forward the packet.
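To see whether the route-map is actually matching client traffic, the match counters help (route-map name as configured above):

```
show route-map lanhop
```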


Monday, 22 August 2011

Performance Tuning MySQL for Zabbix

In my previous post I shared some tips on how to tune the ZABBIX configuration to get better results; however, the most important tuning is the database server. Remember that these values depend on how much memory is available on your server. Here is how I've configured my MySQL server:

1. Use a tmpfs tmpdir: create a folder like /mytmp and in /etc/my.cnf configure:
tmpdir = /mytmp
In /etc/fstab I put:
tmpfs /mytmp tmpfs size=1g,nr_inodes=10k,mode=700,uid=102,gid=105 0 0
You'll have to mkdir /mytmp, and the numeric uid and gid values for your mysql user+group need to go on that line. Then you should be able to mount /mytmp and use tmpfs for MySQL's temp directory. I don't know much about the size and nr_inodes options; I just saw those in the Linux tmpfs docs on the web and they seemed reasonable to me.

2. Buffer cache/pool settings.

In /etc/my.cnf jack up innodb_buffer_pool_size as much as possible. If you run /usr/bin/free, the value in the "-/+ buffers/cache" row under the "free" column shows how much memory is available for it. I've also set up InnoDB to use O_DIRECT so that the data cached in the InnoDB buffer pool is not duplicated in the filesystem buffer cache. So, in /etc/my.cnf:
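A sketch of the two relevant lines; the buffer pool size is only an example value, set it from the free-memory figure above:

```
innodb_buffer_pool_size = 4G
innodb_flush_method = O_DIRECT
```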
3. Size the log files.

The correct way to resize this is documented here:

In /etc/my.cnf the value I'm going to try is:
A too-small value means that MySQL is constantly flushing from the logfiles to the tablespaces. It is better to increase this size on write-mostly databases, to keep zabbix streaming to the logfiles and not flushing into the tablespaces constantly. However, the penalty is slower shutdown and startup times.

4. Other parameters.
Use innodb_file_per_table to keep tablespaces more compact, and run "optimize table" periodically. Note that when you set this value in my.cnf you don't get an actual file per table until you run an optimize on all the tables; this will take a long time on the large zabbix history* and trends* tables.
Turn on slow query logging, e.g.:
log_slow_queries = /var/log/mysql/mysql-slow.log
long_query_time = 2
Increase the thread cache, which affects the hit rate of Threads_created per Connection, e.g.:
thread_cache_size = 16
max_connections = 400
This should help a lot for high-volume writes.


Simple Zabbix tunning tips

If you're getting gaps in ZABBIX's graphs and unknown status on some items a little too often, it might mean that your monitoring server is low on performance. Here are some general rules you can follow to boost ZABBIX performance:

  • If the DB is located on the same host as zabbix, change zabbix_server.conf so it uses a Unix socket to connect to the DB
  • Increase the number of pollers, trappers and pingers in the server config, but don't overdo it.
    • General rule - keep the value of this parameter as low as possible. Every additional instance of zabbix_server adds a known overhead; at the same time, parallelism is increased. The optimal number of instances is reached when the queue, on average, contains the minimum number of parameters (ideally, 0 at any given moment). This value can be monitored by using the internal check zabbix[queue], or you can look at "Administration -> Queue" in the web interface.
  • Increase the number of processes in the agents' configuration; again, don't overdo it.
  • Change some of the items to use active checks (leave a few as regular checks so you can get availability information, leave stuff like host status as a regular check). Remember that the hostname set on the zabbix agent conf file must match the hostname given to the host on the web interface.
    • A regular check is initiated by ZABBIX server, it periodically sends requests to an agent to get latest info. The agent is passive, it just processes requests sent by the server.
    • An active check works the following way. ZABBIX agents connect to the ZABBIX server to get a list of all checks for a host. Then, periodically, they send the required information to the ZABBIX server. Note that the ZABBIX server does not initiate anything; the ZABBIX agent does all the active work. This doesn't require polling on the server side, thus it significantly (1.5x-2x) improves the performance of the ZABBIX server, but if the host goes down the server won't get any information.
  • monitor required parameters only
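The server-side knobs from the list above all live in zabbix_server.conf. A minimal sketch follows; the parameter names are the stock Zabbix ones, but the values (and the socket path) are illustrative only and should be tuned to your load:

```
# /etc/zabbix/zabbix_server.conf -- illustrative values only
DBHost=localhost
DBSocket=/var/run/mysqld/mysqld.sock   # local DB: connect over the Unix socket
StartPollers=10
StartTrappers=10
StartPingers=5
```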

However, the most important tuning you have to make is to the DB server; in my next post I'll give you some advice on how to tune a MySQL server to boost ZABBIX performance.


Friday, 12 August 2011

Script to check if process is running

This is a skeleton of a watchdog script to check if a process is running:

#!/bin/bash
PROCESS=myprocess   # change this to the name of the process to watch

log_found=`ps faux | grep -v grep | grep $PROCESS | grep -v $0 | awk '{print $2}'`
if [ "$log_found" == "" ]; then
    echo "No process found"
    # actions for when the process is NOT running go here
else
    echo "Processes found:"
    for PID in $log_found; do
        echo $PID
    done
    # actions for when the process IS running go here
fi
You must change the PROCESS variable to your process name and add actions for when the process is or isn't found...


Thursday, 4 August 2011

Set nginx maximum upload size

Edit the nginx configuration and look for the http block.
Inside the http block add the client_max_body_size directive:

http {
    include conf/mime.types;
    default_type application/octet-stream;
    client_max_body_size 10m;
    ...
}


Thursday, 28 July 2011

Duplicate a MySQL Database

Here is a simple script to duplicate a MySQL database:

mysqladmin create new_DB_name -u DB_user --password=DB_pass && \
mysqldump -u DB_user --password=DB_pass DB_name | mysql -u DB_user --password=DB_pass -h DB_host new_DB_name


Friday, 22 July 2011

Install oracle on 64b Ubuntu 10.04

These are the steps I took to install Oracle 11gR2 x86_64 on Ubuntu Linux 10.04 x86_64.

Oracle Installation:

Oracle Software Prerequisites

Install the required packages:
sudo su - 
apt-get install build-essential libaio1 libaio-dev unixODBC unixODBC-dev pdksh expat sysstat libelf-dev elfutils lsb-cxx

To avoid error "linking ctx/lib/":
cd /tmp
dpkg-deb -x libstdc++5_3.3.6-17ubuntu1_amd64.deb ia64-libs
cp ia64-libs/usr/lib/ /usr/lib64/
cd /usr/lib64/
ln -s
cd /tmp
dpkg-deb -x ia32-libs_2.7ubuntu6.1_amd64.deb ia32-libs
cp ia32-libs/usr/lib32/ /usr/lib32/
cd /usr/lib32
ln -s
cd /tmp
rm *.deb
rm -r ia64-libs
rm -r ia32-libs

To avoid error invoking target 'idg4odbc' of makefile:
ln -s /usr/bin/basename /bin/basename
To avoid errors when executing the post-install script:
ln -s /usr/bin/awk /bin/awk
Kernel Parameters
sudo su -
Make a backup of the original kernel configuration file:
cp /etc/sysctl.conf /etc/sysctl.original
Modify the kernel parameter file
echo "#">> /etc/sysctl.conf
echo "# Oracle 11gR2 entries">> /etc/sysctl.conf
echo "fs.aio-max-nr=1048576" >> /etc/sysctl.conf
echo "fs.file-max=6815744" >> /etc/sysctl.conf
echo "kernel.shmall=2097152" >> /etc/sysctl.conf
echo "kernel.shmmni=4096" >> /etc/sysctl.conf
echo "kernel.sem=250 32000 100 128" >> /etc/sysctl.conf
echo "net.ipv4.ip_local_port_range=9000 65500" >> /etc/sysctl.conf
echo "net.core.rmem_default=262144" >> /etc/sysctl.conf
echo "net.core.rmem_max=4194304" >> /etc/sysctl.conf
echo "net.core.wmem_default=262144" >> /etc/sysctl.conf
echo "net.core.wmem_max=1048586" >> /etc/sysctl.conf
echo "kernel.shmmax=2147483648" >> /etc/sysctl.conf
Note: kernel.shmmax = max possible value, e.g. size of physical memory in bytes
Load new kernel parameters
sysctl -p
Oracle Groups and Accounts
sudo su -
groupadd oinstall
groupadd dba
useradd -m -g oinstall -G dba oracle
usermod -s /bin/bash oracle
passwd oracle
groupadd nobody
usermod -g nobody nobody
id oracle
uid=1001(oracle) gid=1001(oinstall) groups=1001(oinstall),1002(dba)
Make a backup of the original file:
cp /etc/security/limits.conf /etc/security/limits.conf.original
echo "#Oracle 11gR2 shell limits:">>/etc/security/limits.conf
echo "oracle soft nproc 2048">>/etc/security/limits.conf
echo "oracle hard nproc 16384">>/etc/security/limits.conf
echo "oracle soft nofile 1024">>/etc/security/limits.conf
echo "oracle hard nofile 65536">>/etc/security/limits.conf
Oracle Directories

e.g. /u01/app for the Oracle software and /u02/oradata for the database files
mkdir -p /u01/app/oracle
mkdir -p /u01/app/oraInventory
mkdir -p /u02/oradata
chown oracle:oinstall /u01/app/oracle
chown oracle:oinstall /u01/app/oraInventory
chown oracle:oinstall /u02/oradata
chmod 750 /u01/app/oracle
chmod 750 /u01/app/oraInventory
chmod 750 /u02/oradata
Execute the Oracle Universal Installer:
Login as the Oracle user - do not use 'su' command
ssh -Y oracle@server_address
See Tips below for mounting the Oracle installation source
Note: Select the "Ignore All" button at the Prerequisite Checks dialog.

Check some more tips after the jump.


Setting up a TFTP Server

atftpd is a multi-threaded TFTP server implementing all options (option extension and multicast) as specified in RFC 1350, RFC 2090, RFC 2347, RFC 2348 and RFC 2349. atftpd also supports the multicast protocol known as mtftp, defined in the PXE specification. The server supports being started from inetd as well as in daemon mode using init scripts.

Install atftp Server in Ubuntu
sudo aptitude install atftpd
Using atftpd

By default the atftpd server starts via inetd, so we need to tell atftpd to run as a daemon directly, not through inetd. Edit the /etc/default/atftpd file using the following command:
sudo gedit /etc/default/atftpd

Change USE_INETD=true to USE_INETD=false, then save and exit the file.

Now you need to run the following command
sudo invoke-rc.d atftpd start
Configuring atftpd

First you need to create a directory where you can place the files
sudo mkdir /tftpboot
sudo chmod -R 777 /tftpboot
sudo chown -R nobody /tftpboot
sudo /etc/init.d/atftpd restart
Security configuration for atftp

Some level of security can be gained using atftp libwrap support. Adding proper entry to /etc/hosts.allow and /etc/hosts.deny will restrict access to trusted hosts. Daemon name to use in these files is in.tftpd.

in.tftpd : FQDN or IP
atftp client installation

atftp is the Advanced Trivial File Transfer Protocol client, the user interface to ATFTP, which allows users to transfer files to and from a remote machine. The remote host may be specified on the command line, in which case atftp uses it as the default host for future transfers.
sudo aptitude install atftp
That's it, you are ready to transfer your files using tftp clients.

Testing tftp server

Transferring the file hda.txt from the client (using tftp) to the server. Get an example file to transfer (e.g. hda.txt):
touch /tftpboot/hda.txt  
chmod 777 /tftpboot/hda.txt 
ls -l /tftpboot/
total 0
-rwxrwxrwx 1 ruchi ruchi 223 hda.txt 
atftp <server ip>
atftp> put hda.txt
Sent 722 bytes in 0.0 seconds
atftp> quit
ls -l /tftpboot/
total 4
-rwxrwxrwx 1 ruchi ruchi 707 2008-07-07 23:07 hda.txt


Tuesday, 19 July 2011

How to configure multiple Cisco switch ports at the same time

To configure multiple switchports at the same time we use the interface range configuration command.
Switch(config)#interface range fastethernet0/1 - 20
Switch(config-if-range)#speed 100
Switch(config-if-range)#duplex full
The previous example hardcodes the speed and duplex settings on switchports 1 to 20, but it could just as well have been assigning them all to the same VLAN.
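For example, putting the same range into a VLAN (VLAN 10 here is just an illustration) looks like this:

```
Switch(config)#interface range fastethernet0/1 - 20
Switch(config-if-range)#switchport mode access
Switch(config-if-range)#switchport access vlan 10
```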

We can even define multiple ranges.
Switch(config)#interface range fastethernet0/1 - 4 , fastethernet0/10 - 15
Notice the spaces between the ranges.

The interface range command works with vlan, port-channel, fastethernet and gigabitethernet interfaces.


Friday, 15 July 2011

Creating and editing Cisco Extended access lists

Extended ACLs allow you to permit or deny traffic from specific source IP addresses to a specific destination IP address and port. They also allow you to specify different types of traffic such as ICMP, TCP, UDP, etc. Needless to say, this is very granular and allows you to be very specific. If you intend to create a packet-filtering firewall to protect your network, it is an extended ACL that you will need to create.

Here you have a few examples of how to interact with extended access lists:

To create a new extended acl:
router#conf t
router(config)#ip access-list extended 199
router(config-ext-nacl)#10 permit tcp any any
router(config-ext-nacl)#20 permit udp any any
router(config-ext-nacl)#30 deny ip any any
Display the current rules:
router#show access-list 199
Extended IP access list 199
10 permit tcp any any
20 permit udp any any
30 deny ip any any
Add a new rule:
router#conf t
router(config)#ip access-list extended 199
router(config-ext-nacl)#21 permit gre any any

router#show access-list 199
Extended IP access list 199
10 permit tcp any any
20 permit udp any any
21 permit gre any any
30 deny ip any any
Rearrange the rules numbering:
router(config)#ip access-list resequence 199 10 10
router#show access-list 199
Extended IP access list 199
10 permit tcp any any
20 permit udp any any
30 permit gre any any
40 deny ip any any


Thursday, 14 July 2011

How to disable Mailscanner for outgoing email only

You need to use a rules file. If you haven't already got one, modify MailScanner.conf so that
Spam Checks = %rules-dir%/spam.scanning.rules
Then create a file in the rules subdirectory called spam.scanning.rules and add the domains as follows:
To: *@your-domain.com yes
To: *@your-second-domain.com yes
FromOrTo: default no
The last one is a catchall to not scan domains that are not listed.

The key here is using To: instead of FromOrTo: to prevent outgoing email from being scanned for spam.

Stop and restart MailScanner after making any changes.


Wednesday, 13 July 2011

Shutdown a Windows machine from a Linux box

You can shut down a Windows box if you have Samba installed.

net rpc SHUTDOWN -C "some comment here" -f -I x.x.x.x -U user_name%password
As long as the user you supplied has rights to shutdown the system it will work.

This bash script scans the network and turns off all systems that are left on overnight.
wks=(`nmap -sL --dns-servers 10.x.x.x,10.x.x.x 10.x.x.x/22 10.x.x.x/23 | grep FQDN | cut -d" " -f2 | grep -v -f serverlist`)
for (( i=0; i < "${#wks[@]}"; i++)); do
    net rpc SHUTDOWN -C "This system was left on after hours and is being shutdown" -f -I "${wks[$i]}" -U user_name%password
done
Basically, the script scans the network(s) with nmap and pipes the output through grep and cut to get the FQDNs. The "grep -v -f serverlist" part is an exclude list of servers we don't want to shut down. From there it puts the workstations into an array and turns off each system.


Tuesday, 12 July 2011

How to hide query string in url with a .htaccess file

To masquerade the query string into a pretty SEO url you can use Apache's mod_rewrite.
Rewrite rules don't actually hide the query string; they pretty much convert SEO-friendly urls into the actual query string.

Example .htaccess:
RewriteEngine on
RewriteRule ([a-zA-Z0-9_]+)\.html$ viewPage.php?ID=$1 [L]
The above rewrite rule will allow you to do something like this:

url reads: /test.html
apache interprets it as: /viewPage.php?ID=test
Therefore the following PHP code:
echo $_GET['ID'];
will output test.

What if we want to pass more than one value, like "/categories/coding/test.html"?

We just have to convert /categories/coding/test.html into /viewPage.php?category=coding&ID=test

This will do the trick:
RewriteEngine On
RewriteRule ^categories/(\w+)/(\w+)\.html viewPage.php?category=$1&ID=$2 [L]


Sunday, 10 July 2011

VM Stuck in "Pending" State on XenServer (orange/yellow icon)

This happens when some task has stalled. In this case, from the XenServer console CLI:

1. Get the list of Pending tasks
xe task-list
2. Cancel the pending task
xe task-cancel force=true uuid=<the UUID from the above command>


Saturday, 9 July 2011

Xenserver - Edit grub.conf of halted VM

If a VM doesn’t boot due to an incorrect grub configuration, you can use the xe-edit-bootloader script in the XenServer control domain to edit the grub.conf until the config works, example:
xe-edit-bootloader -n "VM Name" -p 1
This will open the grub.conf file of the specified VM in nano editor.


Resizing LUNs for Xenserver SRs with Script

Here is another solution for resizing a LUN on an iSCSI XenServer SR without rebooting.

First you need to resize the lun on the iscsi server, then use the following script:

#!/bin/bash
SR_NAME=$1   # name-label of the SR to grow (assumed to be passed as the first argument)

SR2GROW=$(xe sr-list params=uuid name-label=$SR_NAME | awk '{ print $NF }')
# find devices to resize
DEV2GROW=$(pvscan | grep $SR2GROW | awk '{ print $2 }')
# scan for resized devices
iscsiadm -m node -R
# do the resize
for dev in $DEV2GROW ; do
    pvresize $dev
done
# tell xenapi to look for the new LVM size
xe sr-scan uuid=${SR2GROW}


Friday, 8 July 2011

Set Xenserver VMs Custom Fields with a script

On a previous post I've shown you the script I use to back up the virtual machines on a XenServer pool, but I have a lot of VMs, so it's not easy to set the custom fields for every VM. So I've made another script that lets you set the custom fields on every VM, or on a group of VMs, using the tags from XenCenter.

You can use the script like this: [-t tag] [<template_frequency> <template_retention> <xva_frequency> <xva_retention>]

If you omit the -t flag, it sets the custom fields for all VMs.

And the script goes like this:

# Variables
LOCKFILE=/tmp/set_custom_fields.lock   # path assumed; the original value was lost
TAG="all"                              # default: apply to all VMs

if [ -f $LOCKFILE ]; then
    echo "Lockfile $LOCKFILE exists, exiting!"
    exit 1
fi
touch $LOCKFILE

# Don't modify below this line

# using getopts to parse arguments
while getopts 't:' OPTION; do
    case $OPTION in
        t) TAG=$OPTARG;;
        ?) printf "Usage: %s: [-t tag] [<template_frequency> <template_retention> <xva_frequency> <xva_retention>]\n" $(basename $0) >&2
           exit 2;;
    esac
done
shift $(($OPTIND - 1))

# Desired custom field values, taken from the positional parameters
TEMPLATE_BACKUP=$1
TEMPLATE_KEEP=$2
XVA_BACKUP=$3
XVA_KEEP=$4

# Quick hack to grab the required parameter from the output of the xe command
function xe_param()
{
    PARAM=$1
    while read DATA; do
        LINE=$(echo $DATA | egrep "$PARAM")
        if [ $? -eq 0 ]; then
            echo "$LINE" | awk 'BEGIN{FS=": "}{print $2}'
        fi
    done
}

# Get all running VMs
RUNNING_VMS=$(xe vm-list power-state=running is-control-domain=false | xe_param uuid)

for VM in $RUNNING_VMS; do
    VM_NAME="$(xe vm-list uuid=$VM | xe_param name-label)"

    echo " "
    echo "= Retrieving backup parameters for $VM_NAME - $(date) ="
    #echo "= $VM_NAME uuid is $VM ="
    #Template backups
    SCHEDULE=$(xe vm-param-get uuid=$VM param-name=other-config param-key=XenCenter.CustomFields.backup)
    RETAIN=$(xe vm-param-get uuid=$VM param-name=other-config param-key=XenCenter.CustomFields.retain)
    #XVA backups
    XVA_SCHEDULE=$(xe vm-param-get uuid=$VM param-name=other-config param-key=XenCenter.CustomFields.xva_backup)
    XVA_RETAIN=$(xe vm-param-get uuid=$VM param-name=other-config param-key=XenCenter.CustomFields.xva_retain)

    VM_TAGS=$(xe vm-param-get uuid=$VM param-name=tags)

    # Update the VM if -t was omitted (TAG="all") or if it carries the requested tag
    if [[ $TAG == "all" ]] || [[ $VM_TAGS == *$TAG* ]]; then

        if [ "$SCHEDULE" != "$TEMPLATE_BACKUP" ]; then
            echo "Updating template backup schedule..."
            xe vm-param-set uuid=$VM other-config:XenCenter.CustomFields.backup="$TEMPLATE_BACKUP"
        fi

        if [ "$RETAIN" != "$TEMPLATE_KEEP" ]; then
            echo "Updating template backup retention..."
            xe vm-param-set uuid=$VM other-config:XenCenter.CustomFields.retain="$TEMPLATE_KEEP"
        fi

        if [ "$XVA_SCHEDULE" != "$XVA_BACKUP" ]; then
            echo "Updating XVA backup schedule..."
            xe vm-param-set uuid=$VM other-config:XenCenter.CustomFields.xva_backup="$XVA_BACKUP"
        fi

        if [ "$XVA_RETAIN" != "$XVA_KEEP" ]; then
            echo "Updating XVA backup retention..."
            xe vm-param-set uuid=$VM other-config:XenCenter.CustomFields.xva_retain="$XVA_KEEP"
        fi
    fi
done

rm -f $LOCKFILE


Resizing LUNs for Xenserver SRs

Perform steps 2-7 on the Pool Master:

1. Extend the volume/LUN from the SAN management console

2. Execute the following command and note the uuid of the SR:
xe sr-list name-label=<the SR name you want to resize>
3. To get the device name (e.g. PV /dev/sdj), use:
pvscan | grep <the uuid you noted in the previous step>
4. Tell the server to refresh the iSCSI connection:
echo 1 > /sys/block/<device>/device/rescan (e.g. echo 1 > /sys/block/sdj/device/rescan)
5. Resize the volume:
pvresize <device name> (e.g. pvresize /dev/sdj)
6. Rescan the SR:
xe sr-scan uuid=<the uuid you noted in step 2>
7. Verify that the XE host sees the larger physical disk:
pvscan | grep <the uuid you noted in step 2>



Wednesday, 6 July 2011

Upgrading Windows Server 2008 R2 Edition without media

You can accomplish this using the DISM command-line tool:

To determine the installed edition, run:
DISM /online /Get-CurrentEdition
To check the possible target editions, run:
DISM /online /Get-TargetEditions
Finally, to initiate an upgrade, run:
DISM /online /Set-Edition:<edition ID> /ProductKey:XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
So, for example, to upgrade to Windows Server 2008 R2 Datacenter from a Standard edition, you would run:
DISM /online /Set-Edition:ServerDatacenter /productkey:ABCDE-ABCDE-ABCDE-ABCDE-ABCDE
After running the /Set-Edition command, DISM will prepare the operating system for the edition servicing operation, then reboot twice while it applies the changes to the operating system. After the final reboot, you’ll be running the new edition!

Note that the server can't be a DC at the time of the upgrade. If you demote a DC using dcpromo, you can upgrade, then re-promote it (you may need to migrate FSMO roles, etc., in order to successfully demote).

Possibly Related Posts

Tuesday, 5 July 2011

Reclaim Disk Space from Deleted XenServer Snapshots and Clones

Running this script will incur some downtime for the VM, due to the suspend/resume operations performed.

These instructions are for XenServer 5.6 and later.

Citrix recommends that you back up the VM on which you will run the space reclamation tool. You can use the XenCenter export option for this purpose.

Run the following command from the XenServer CLI:
xe host-call-plugin host-uuid=<host-UUID> plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=<VM-UUID>
The amount of time required varies based on the amount of data written to the disk since the last snapshot. Smaller VMs (that is, 10 GB or less) take less than a minute.

If the Virtual Disk Images (VDIs) to be coalesced are on shared storage, you must execute the off-line coalesce tool on the pool master.

To get Pool Master UUID you can use this command:
xe pool-list params=master | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"
To get uuids of all running VMs
xe vm-list is-control-domain=false power-state=running params=uuid | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}"
So you can create a script to execute the off-line coalesce tool on every VM, like this:

MASTER=$(xe pool-list params=master | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}")
#All VMs
RUNNING_VMS=$(xe vm-list is-control-domain=false params=uuid | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}")
#All running VMs
#RUNNING_VMS=$(xe vm-list is-control-domain=false power-state=running params=uuid | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}")
#All halted VMs
#RUNNING_VMS=$(xe vm-list is-control-domain=false power-state=halted  params=uuid | egrep -o "[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}")

for VM in $RUNNING_VMS; do
    echo " "
    echo "=== Starting coalesce leaf process for $VM at $(date) ==="
    echo " "

    xe host-call-plugin host-uuid=$MASTER plugin=coalesce-leaf fn=leaf-coalesce args:vm_uuid=$VM

    echo " "
    echo "=== Coalesce leaf process for $VM ended at $(date) ==="
    echo " "


Wednesday, 22 June 2011

Archiving and extracting tar files

Create tar.gz file:
tar -czf /path/to/output/folder/filename.tar.gz /path/to/folder
Extract tar.gz to folder:
tar -xvzf /path/to/output/folder/filename.tar.gz -C /path/to/folder
List the contents of a tar.gz file:
tar -ztvf file.tar.gz

c: create
x: extract
t: List the contents of an archive
v: Verbosely list files processed (display detailed information)
z: Filter the archive through gzip so that we can open compressed (decompress) .gz tar file
j: Filter archive through bzip2, use to decompress .bz2 files.
f filename: Use archive file called filename
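Combining the flags above, here is a quick end-to-end example with a bzip2-compressed archive (the paths under /tmp are illustrative):

```shell
# Create some sample content, archive it with bzip2 (c+j+f),
# list the archive (t+j+f), then extract it (x+j+f) into another directory.
mkdir -p /tmp/tardemo/src /tmp/tardemo/out
echo "hello" > /tmp/tardemo/src/a.txt
tar -cjf /tmp/tardemo/demo.tar.bz2 -C /tmp/tardemo src
tar -tjf /tmp/tardemo/demo.tar.bz2          # lists src/ and src/a.txt
tar -xjf /tmp/tardemo/demo.tar.bz2 -C /tmp/tardemo/out
cat /tmp/tardemo/out/src/a.txt              # prints hello
```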


Connect to a WPA/WPA2 Secured network in Linux

A combination of wpa_supplicant and wpa_passphrase will do the trick.

First, you need to install the relevant software. You need to have a wired connection at this point, otherwise this won't work.
sudo apt-get install wireless-tools wpasupplicant
Run iwlist scanning and check that your card can see the wireless network in question. If it can, run:
wpa_passphrase [your-wireless-network-name] > wpa.conf
The prompt will wait for you to enter a passphrase. Do this and hit enter.
Run wpa_supplicant. Replace wext with the correct wireless driver (which is probably wext, but run wpa_supplicant --help to check) and wlan0 with your wireless interface
wpa_supplicant -Dwext -iwlan0 -c/root/wpa.conf
If that works, you should see text to the effect of “successfully associated”. If not, try again with a different driver, make sure your passphrase is correct, and make sure your wireless interface is working properly.
Hit Ctrl+c, then the up arrow, then add a -B (for background) onto the end of the last command, thus:
wpa_supplicant -Dwext -iwlan0 -c/root/wpa.conf -B
Run dhclient -r to release any DHCP leases you have.
Run dhclient wlan0 to get a new IP address. Substitute wlan0 for your wireless interface, of course.
You should now be connected.


Sunday, 19 June 2011

OpenKM LDAP auth

Hello, after some time kicking the machine and trying several configurations, I got it working.

NOTE: for the config options to be read from the file, you must first delete the equivalent configs in the web user interface (those are stored in the DB and override the config file)

Check out the relevant parts of the final configuration files after the break


Install OpenKM on ubuntu 10.04 LTS

There are several ways to install it; we install on Ubuntu, but any other Linux flavour can be used.

Enable the partner repository:
sudo su
vi /etc/apt/sources.list
Uncomment the following lines:
deb http://archive.canonical.com/ubuntu lucid partner
deb-src http://archive.canonical.com/ubuntu lucid partner
Install needed packages:

Execute on terminal the command
$ sudo aptitude install sun-java6-bin sun-java6-jdk sun-java6-jre imagemagick swftools tesseract-ocr
set the Java Home environment variable
vi /etc/environment
add this line at the end of the file (the path used by the sun-java6 packages):
JAVA_HOME="/usr/lib/jvm/java-6-sun"
Now update the environment variables:
# source /etc/environment
Install OpenKM

Download the OpenKM 5.0.x + JBoss 4.2.3.GA bundle and uncompress it on your file system (a good option is to uncompress it under /opt/).

Execute on terminal the command
$ unzip
For document preview you need to add these two entries in the OpenKM.cfg file:
You can configure OpenKM to use a remote server for OpenOffice document conversion:
Or you can configure the listen port and the maximum number of conversion tasks:
Note that system.openoffice.tasks and system.openoffice.port already have default values and do not need to be set.

Enabling OCR

To enable OCR you must set the file system path of the OCR engine:
Enable PS to SWF conversion

To enable postscript document preview, OpenKM needs to convert PS files to SWF using the ps2pdf utility from Ghostscript:
Enable image preview

To enable image preview, you need to install the ImageMagick convert utility and configure:
Configuring chat service

By default chat and autologin are enabled. To enable or disable them, the values can be "on" or "off".
Check the OpenKM documentation for more information

First login

Execute the file /opt/jboss-4.2.3.GA/bin/run.sh to run OpenKM + the JBoss application server.

If you want your OpenKM installation to be accessed from other computers add the -b command line parameter (see Basic application knowledge)

Open on a client browser the URL http://localhost:8080/OpenKM/.

Authenticate to OpenKM using the user "okmAdmin" with password "admin".

Note: From OpenKM 5.x there's a property definition in OpenKM.cfg to create the database automatically. Once the tables are created, change the hibernate.hbm2ddl property from create to none. Do it after the first run; otherwise the whole repository will be deleted and recreated on the next OpenKM start.

Please take a look at the OpenKM documentation if you have any problem


Gammu SMSD files backend conf


Instead of using a MySQL backend you can use a files backend:
port = /dev/ttyUSB0
connection = at19200
# Debugging
#logformat = textall
synchronizetime = yes
# SMSD configuration, see gammu-smsdrc(5)
service = files
logfile = syslog
# Increase for debugging information
debuglevel = 0
# Paths where messages are stored
inboxpath = /var/spool/gammu/inbox/
outboxpath = /var/spool/gammu/outbox/
sentsmspath = /var/spool/gammu/sent/
errorsmspath = /var/spool/gammu/error/


Setting up SMS gateway using Gammu

Install Gammu and Gammu SMS Daemon by running the command
sudo apt-get install gammu gammu-smsd
Create a gammu configuration file /etc/gammurc, whose contents look like this
Test the SMS functionality by sending a message like this:
$ echo "Test message"|gammu sendsms TEXT <Phone Number>
Setting up the SMS Daemon
A prerequisite is that a MySQL server is installed (either local or remote). Create the necessary tables in the database by running the SQL statements in /usr/share/doc/gammu/examples/sql/mysql.sql.gz. Create a gammu-smsd configuration file /etc/gammu-smsdrc
port = /dev/ttyUSB0
model =
connection = at19200
synchronizetime = yes
logfile =
logformat = nothing
use_locking =
gammuloc =

debuglevel = 255
logfile = smsd.log
Service = mysql
User = root
Password = <Password>
Database = smsd
Now test that the daemon is receiving the messages by sending an SMS to the number associated with the SIM in the modem. Verify that the message is written into the table named "Inbox". Similarly you can send a message out by creating a row in the "Outbox" table.
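As a sketch of the Outbox route from the shell, you can build the INSERT statement and pipe it into MySQL yourself. The column names below follow the stock mysql.sql schema shipped with gammu-smsd, but check your schema version before relying on them; the phone number and database name are illustrative:

```shell
# Compose an INSERT that queues an outgoing SMS in gammu-smsd's outbox table.
NUMBER='+1234567890'                    # destination number (illustrative)
TEXT='Test message from the outbox'
SQL="INSERT INTO outbox (DestinationNumber, TextDecoded, CreatorID) \
VALUES ('$NUMBER', '$TEXT', 'cli');"
printf '%s\n' "$SQL"
# To actually queue it:  printf '%s\n' "$SQL" | mysql -u root -p smsd
```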

  • You can inject sms into the daemon from the command line with gammu-smsd-inject example:
    • echo "All your base are belong to us" | gammu-smsd-inject TEXT 123456
  • Gammu-smsd also supports other backends like txt files in a specified folder
  • To use SMSD as a daemon, you might want to use the init script which is shipped with Gammu in contrib/init directory. It is not installed by default, either install it manually or check INSTALL file for instructions.


Using Gammu

sudo apt-get install gammu
To configure:
edit /etc/gammurc:

port = /dev/ttyUSB0
model =
connection = at19200
synchronizetime = yes
logfile =
logformat = nothing
use_locking =
gammuloc =
Or run gammu-config

Send sms:
echo "boo" | gammu --sendsms TEXT [recipient mobile number]
Read all sms:
gammu --getallsms
Get sms folders:
gammu --getsmsfolders
Delete all sms:
gammu --deleteallsms [folder number]


Zabbix - SMS with Gammu

Create an sms script (and make it executable) on the zabbix server in the AlertScriptsPath (/etc/zabbix/alert.d/ on Ubuntu):
echo "Recipient='$1' Message='$3'" >> ${LOGFILE}
MOBILE_NUMBER=`echo "$1" | sed s#\s##`
# Log it
echo "echo $3 | /usr/bin/sudo /usr/bin/gammu --sendsms TEXT ${MOBILE_NUMBER}" >>${LOGFILE}
# Send it
echo "$3" | /usr/bin/sudo /usr/bin/gammu --sendsms TEXT "${MOBILE_NUMBER}" 1>>${LOGFILE} 2>&1
Add the line
zabbix ALL = NOPASSWD:/usr/bin/gammu
to the sudoers file to make gammu available to the zabbix user.

Configure a media type (menu administration) with the same name as your script (without path, without parameters)

Link this media type to a user (menu administration) and use the phone number as Send to parameter.

Don't forget to give zabbix a shell in /etc/passwd and permission to write to the log file...

Note: if you have gammu configured as a daemon, use gammu-smsd-inject instead of gammu --sendsms, or
echo "$3" > /var/spool/gammu/outbox/OUT"${MOBILE_NUMBER}".txt
if you are using the files backend.

I use a Huawei USB GSM modem and my /etc/gammurc looks like this:
port = /dev/ttyUSB0
model =
connection = at19200
synchronizetime = yes
logfile =
logformat = nothing
use_locking =
gammuloc =


Saturday, 4 June 2011

Configure LDAP authentication for Alfresco

Under the /subsystems/authentication structure, there are folders for ldap, passthru, etc. In the ldap folder there is a .properties file...
This is what you have to edit...
Specify your server, ldap structure, authentication account, whether you sync or not, etc. Go through it; there are pretty good explanations in the comments.

Lastly, edit the file... add ldap1:ldap to the chain (probably only has alfrescoNtlm on it?) to activate your ldap config. You can also set this in the file
file: /opt/alfresco/tomcat/webapps/alfresco/WEB-INF/classes/alfresco/
 - or -
file: /opt/alfresco/tomcat/shared/classes/
# The default authentication chain
authentication.chain=alfrescoNtlm1:alfrescoNtlm,ldap1:ldap
restart, test...

Check out the example after the break.


Install Alfresco on Ubuntu 10.04 LTS

Enable the partner repository:
sudo su
vi /etc/apt/sources.list
Uncomment the following lines:
deb http://archive.canonical.com/ubuntu lucid partner
deb-src http://archive.canonical.com/ubuntu lucid partner
update your system:
# apt-get update
# apt-get upgrade
Install the needed packages:
# apt-get install mysql-server sun-java6-jdk imagemagick swftools
set the Java Home environment variable
vi /etc/environment
add this line at the end of the file (the path used by the sun-java6 packages):
JAVA_HOME="/usr/lib/jvm/java-6-sun"
Now update the environment variables:
# source /etc/environment

Create the alfresco database:
# mysql -uroot -p

CREATE DATABASE alfresco DEFAULT CHARACTER SET utf8;
GRANT ALL PRIVILEGES ON alfresco.* TO alfresco@localhost IDENTIFIED BY 'alfresco';
GRANT SELECT,LOCK TABLES ON alfresco.* TO alfresco@localhost IDENTIFIED BY 'alfresco';
Create a directory for alfresco:
# mkdir -p /opt/alfresco
# cd /opt/
Download the latest alfresco release (replace the url if necessary)
# wget
# sudo chmod +x alfresco-community-3.4.d-installer-linux-x32.bin
# ./alfresco-community-3.4.d-installer-linux-x32.bin
And follow the wizard

Then point your browser to:

http://yourserver:8080/alfresco (Alfresco DMS)
http://yourserver:8080/share (Alfresco Share)

Log in with user admin and password admin.


Friday, 3 June 2011

Xenserver Backup solution

After reviewing some of the available backup solutions (see my previous post) I've opted to use the script provided by Mark Round, but I've modified it a bit to allow backing up the VMs to .xva files on an independent schedule from the template backups; I use this to store xva files of my VMs on tapes monthly.

To use it you just have to place the script in /usr/local/bin/ on your pool master host and add a line to /etc/crontab:
2 1 * * * root /usr/local/bin/ > /var/log/snapback.log 2>&1
Then you can configure the scheduling and retention for each VM, from XenCenter, by adding the following custom fields:
  • backup - to set the schedule for the template backups (daily, monthly or weekly)
  • retain - to set the number of backup templates to keep
  • xva_backup - to set the schedule for the xva backups (daily, monthly or weekly)
  • xva_retain - to set the number of xva files to keep
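If you only need to change a VM or two, the same fields can also be set by hand from the CLI. The parameter keys below are the ones the script writes; the uuid placeholder and the values are illustrative:

```
xe vm-param-set uuid=<VM-UUID> other-config:XenCenter.CustomFields.backup=weekly
xe vm-param-set uuid=<VM-UUID> other-config:XenCenter.CustomFields.retain=3
```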
Click the "read more" link to see the script.


Saturday, 28 May 2011

Xenserver Backup solutions

There are a lot of different backup solutions available to backup your VMs. So I've compiled a comparative list of some of the available solutions, and in my next post I will tell you about the one I chose for my backups.

Generic Backup Solutions:
BackupPC:
  • No agent needed, it uses rsync, rsyncd or smb
  • File level deduplication across all backups
  • Incremental and full backup scheduling
  • Easy to setup
  • Web interface
  • Command Line Interface
  • No support for tape drives
  • disk-based data backup and recovery
  • Free and Open Source
  • No downtime
Bacula:
  • Free and Open Source
  • No downtime
  • Support for tape drives and tape libraries
  • Uses agents
  • Web interface
  • wxWidgets interface
  • Command Line Interface
  • can encrypt data in transit
  • supports snapshots via Windows VSS
  • Data backed up by Bacula must be recovered by Bacula
  • user postings online indicate that it can be quite complex to set up
Amanda:
  • Free and Open Source
  • No downtime
  • can back up Linux, Unix, Mac and Windows clients to tape, disk and storage grids, such as Amazon Simple Storage Service (S3)
  • Write to Multiple Volumes in Parallel
  • Support for tape drives and tape libraries
  • Virtual tapes
  • use of native tools, such as ufsdump, dump and/or GNU tar
  • Ability to read the backup tapes without Amanda
  • Uses agents
  • Commercial version named Zmanda, with added features such as:
    • web-based GUI
    • management console
    • one-click restore and reporting application agents (priced additionally) for Microsoft Exchange, SQL Server and SharePoint, and Oracle
    • 24/7 customer support
    • orderly new feature release schedule
  • Commercial Version Pricing:
    • Basic:
      • Server $400
      • Linux, Solaris and Windows Clients $150
      • Windows Clients for desktops and laptops $200
      • Backup to S3 option $250
    • Standard:
      • Server $500
      • Linux, Solaris and Windows Clients $300
      • Windows Clients for desktops and laptops $300
      • Backup to S3 option $500
      • Oracle agent $300
      • Postgres agent $300
      • VMWare vSphere and ESXi client $300
    • Premium:
      • Server $750
      • Linux, Solaris and Windows Clients $450
      • Windows Clients for desktops and laptops $450
      • Backup to S3 option $750
      • Oracle agent $300
      • Postgres agent $450
      • VMWare vSphere and ESXi client $450
Acronis Backup:
  • Uses agents
  • Server runs on Windows
  • Supports Tape drives and tape autoloaders
  • Compress backups to optimize your storage space.
  • Save storage space and time by excluding non-essential files and folders from backups.
  • Store backups into two different locations — backup to a local disk and a copy to a network share.
  • Automatic or manual splitting of backups
  • Bare-metal restore
  • Perform remote restores of your networked machines
  • Restore to dissimilar hardware-optional
  • Convert backup images to virtual machine formats compatible with VMware, Microsoft Hyper-V, Citrix XenServer and Parallels environments.
  • Install the agent on an unlimited number of virtual machines
  • Automated deletion of outdated backups
  • Backup validation and consolidation by Acronis Storage Node
  • Consolidate incremental and differential backups to save space (Deduplication).
  • Templates for backup rotation schemes
  • Centralized management
  • Reporting and monitoring
  • Command line with scripting support
  • Encrypted network communications
  • Costs 1784€ per license
Xen specific Backup Solutions:
Manual Snapshots:
XenServer supports three types of VM snapshots: regular, quiesced and snapshot with memory. Regular snapshots are crash consistent and can be performed on all VM types. The VM snapshot contains all the storage information and VM configuration, including attached VIFs, allowing them to be exported and restored for backup purposes.

Quiesced snapshots take advantage of the Windows Volume Shadow Copy Service (VSS) to generate application consistent point-in-time snapshots. The VSS framework helps VSS-aware applications like Microsoft Exchange or Microsoft SQL Server to flush data to disk and prepare for the snapshot before it is taken. XenServer supports quiesced snapshots on Windows Server 2003 and Windows Server 2008 for both 32-bit and 64-bit variants. Windows 2000, Windows XP and Windows Vista are not supported.

Snapshots with memory save the VM's state (RAM). This can be useful if you are upgrading or patching software, or want to test a new application, but also want the option to get back to the current, pre-change state of the VM. Reverting to a snapshot with memory does not require a reboot of the VM.
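For reference, all three snapshot types can also be taken from the xe CLI (the VM names below are examples):

```shell
# Regular (crash-consistent) snapshot:
xe vm-snapshot vm=my-vm new-name-label=my-vm-snap
# Quiesced snapshot, using the Windows VSS framework:
xe vm-snapshot-with-quiesce vm=my-win-vm new-name-label=my-win-vm-snap
# Snapshot with memory (a "checkpoint" in xe terms):
xe vm-checkpoint vm=my-vm new-name-label=my-vm-checkpoint
```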

Backup across multiple external disks:
  • Back up to multiple eSATA hard drives, so if you have a 1.5 TB image and only two 1 TB eSATA drives you can span it across both.
  • This works with any number of drives, as long as the total combined disk space is larger than the VMs you are trying to back up.
  • This also works if you only have one drive.
  • The backup will be only a little larger than the used space on the drive, so even if you have a 2 TB virtual drive that is only using 500 GB, it should fit on a drive with 600 GB or more of free space.
  • You need to manually create a list of VMs to back up.
Zero-Downtime Limited-Space Backup Script:
  • Currently, Windows servers are not supported, only Linux VMs and the XenServer instance, itself.
  • based on using the python API and command-line LVM snapshots
  • No downtime
  • Free and Open Source
  • Limited space - "Doing built-in snapshots of VM's was not feasible for us. Currently, there is no way to exclude disks in a snapshot (that we have found). A snapshot will take about double the currently used space for a disk on an SR, and this space cannot be reclaimed until the snapshot is deleted and the machine is shutdown to be coalesced. In one of our VMs we have about 8 TB of user drive space, with no extra space on the SRs where the disks are allocated. We don't have enough room, nor do we care about creating a snapshot with the user data since it is already backed up with netbackup. The script allows us to get no-downtime snapshots of the system disks with only requiring a small and temporary amount of extra space on the SRs".
  • The python API is used to gather metadata about the VM, its disks, and its network interfaces. The metadata is written to plain text files. The data from the disks is imaged by doing a dd on the lvm volumes that correspond to the VDIs for the disks.
  • To restore, a new VM is created and given the memory and CPU settings stored in the metadata. Then the VIFs and disks are restored, with the stored images being written to the new LVM volumes.
  • The script is still a work in progress
  • Support for Windows will be added
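As a rough sketch of the approach described above (this is not the actual script; the volume group and LV names are invented for illustration), the LVM side looks something like this:

```shell
# Take a small copy-on-write snapshot of the LV backing a system-disk
# VDI, image it with dd while the VM keeps running, then drop it.
VG=VG_XenStorage-example          # hypothetical storage-repository VG
LV=VHD-0123abcd                   # hypothetical LV backing the VDI
lvcreate -s -n "${LV}-snap" -L 1G "/dev/$VG/$LV"
dd if="/dev/$VG/${LV}-snap" of=/backup/sysdisk.img bs=1M
lvremove -f "/dev/$VG/${LV}-snap"
```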
TINABS (This Is Not Another Backup Script):
  • It is based on using the python API and tested under XenServer 5.6 FP1
  • This library allows you to create simple scripts to backup the skeleton of one or more virtual machines.
  • Data disks are not included and they are recreated empty
  • The core of the library is the backup() function, which iterates through a list of user-supplied virtual machines and:
    • gets a snapshot of the system disk and attaches it to a brand new virtual machine, created based on the parameters of the current one in the list,
    • recreates any data disks on a shared SR (I prefer to use an NFS SR as the destination, because "For file-based VHDs, all nodes consume only as much data as has been written, and the leaf node files grow to accommodate data as it is actively written. If a 100GB VDI is allocated for a new VM and an OS is installed, the VDI file will physically be only the size of the OS data that has been written to the disk, plus some minor metadata overhead", as stated in the XenServer Administrator's Guide) and attaches them to the backup one,
    • recreates any VIFs of the original virtual machine and attaches them to the backup one,
    • exports the backup virtual machine in .xva format on a local directory,
    • completely deletes the backup virtual machine.
  • The restore process simply consists of importing the previously created .xva and restoring any data from a backup!
  • Live backup and export of virtual machines.
  • I don't care about creating a snapshot of the entire virtual machine including even any data disks since their data are already backed up with a backup tool.
  • Run from a remote host (even a Windows machine)
  • Provide a simple GUI (WxPython + XRC)
  • By default (if running through the GUI) all pool's virtual machines tagged with the current day of the week (in the format: Mon, Tue, Wed, Thu, Fri, Sat, Sun) are selected for backup.
  • A single virtual machine can be selected as well
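The snapshot-to-.xva flow that TINABS and the scripts below rely on can be sketched with plain xe commands (the VM name and paths are placeholders, not taken from the library):

```shell
# Snapshot the VM, turn the snapshot into an exportable VM,
# export it as .xva, then clean up the temporary snapshot.
SNAP=$(xe vm-snapshot vm=my-vm new-name-label=my-vm-backup)
xe template-param-set is-a-template=false uuid="$SNAP"
xe vm-export vm="$SNAP" filename=/backup/my-vm-$(date +%F).xva
xe vm-uninstall uuid="$SNAP" force=true
```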
NOTE: In some cases, when using the following scripts, the disk space won't be freed after deleting the snapshots. If that happens, follow these instructions; reportedly, though, this problem is solved in XenServer 5.6 FP1.

XenServer Live Backup Script:

  • runs on windows
  • written in VBScript
  • Requires that XenCenter be installed on the Windows machine you run the script from.
  • Beta Stage
Filippo Zanardo's Xenbackup Script:
  • written in Perl
  • skip VMs by adding them to a list
  • Optional use of snapshots: if set to true, the script tries to make a snapshot of the VM; otherwise it shuts the machine down, exports it, and powers it back on
  • Mail notification
  • Optionally create a subfolder in the store for each backup, based on VM name
  • Versioning: set to true to let the script delete backups older than the number of days or hours specified in the $delnumber variable
  • Automount: if set to true, the script tries to mount the backup dir specified in mountcommand at start and unmount it at the end; otherwise no action is taken and you have to mount the dir manually
  • Checkspace: if set to true, the script checks the available space on the backup dir and, if it is less than $spacerequired (in MB), quits with a message
  • Free and Open Source
  • The author is also working on a web-based Xen backup solution
Andy Burton's VM export script:
  • Backup of the entire machine
  • Fast recovery in case of disaster
  • Free and Open Source
  • No downtime
  • VDI removal – Run in addition to the standard vm-uninstall command to stop snapshotted VMs allocating all your disk space
  • Backup VM selection – Select between all, running, none and specifically set (by VM uuid) virtual machines.
  • Quiesce snapshots – To take advantage of the quiesce snapshot functionality if the VM supports it.
  • There is an improved version of this script; it adds:
    • some cleanup scripts to handle disk remounts, removal of older backup images, and some logic to not back up if the backup drive is not present and mounted
    • A plaintext dump of all the info needed to figure out what used to be connected to what and where it used to live; all the SR, VM, VIF and UUID details are there in a reasonably readable format if needed.
    • A script that unmounts and remounts the backup disk, and then cleans it up so that only the last two backups are kept. Needs some logic to abort if the drive isn't, or can't be, mounted.
    • A script to back up the metadata of the Xen Pool in a restorable format. Backs up the host machines over to the backup drive as well.
  • Back up all the VM's, Xen hosts, and metadata from a single Xen host, so you only need to set this up on one machine
  • Backup destination can be an NFS share, SMB share, USB disk, flash drive, or anything else you can get mounted up.
Markround Script:
  • Similar to the previous but more complete
  • Backup and retention policy can be configured from XenCenter
  • Ability to use a different SR for the backups
Alike:
  • Agentless backup for XenServer
  • Comes in three versions: Free, Standard ($899 per XenServer host) and DR ($1,189 per XenServer host)
  • Volume licensing
  • Block-level data deduplication across all VMs backed up
  • Friendly UI
  • Versions each snapshot that is backed up
  • Alike is able to backup any or all of the drives in any VM
  • Jobs can be scheduled daily, weekly, or monthly; may be configured for multiple runs per day
  • Alike can run on 64-bit Windows, can back up any guest OS
  • Backup to Any Common Storage Type
  • Alike can fully automate and schedule Citrix's Coalesce tool, dramatically simplifying the reclaim process.
  • Alike can schedule the export, migration or replication of your VMs, providing simple offsite support.
  • Alike installs nothing on the XenServer host operating system (Dom0), and does not require disk from XenServer Storage Repository (SR).
PHD backup:
  • Block level deduplication
  • No downtime
  • Backups saved as VHD
  • File level recovery Any OS, Any File System
  • Removes the need to deploy and manage a separate physical server, additional software, scripts or agents for backup and recovery of the virtual environment
  • Simple to Deploy & Easy to Use
  • Integrate management for backup and recovery into XenCenter
  • Data is checked both during the backup and restore processes ensuring data integrity. Self-healing is provided by automatically detecting and repairing corrupt data blocks.
  • Multiple Data Streams for Fast Backup and Restore
  • Job Scheduling
  • Supports Tape Backup Solutions
  • Application Consistent Backup using VSS
  • E-mail notification
  • Support for Thin Provisioned Disks
  • Backup Storage Warnings
  • Distributed Virtual Switch Support
  • Supports all Operating Systems supported by XenServer
  • Licensed per host
  • You own the licence, with an optional annual subscription
  • $1,395 until the end of the month, regularly $2,000 per server, with one year of support included (email and phone support, updates and patches)
  • $280 per host for the annual subscription (11/5 EST working hours)
  • Resellers in Portugal
  • 15-day trial
