Friday 30 September 2011

Add extra SWAP file

First you need to create an empty swap file; the following command will create a 1 GB file:
sudo dd if=/dev/zero of=/mnt/extra.swap count=1024 bs=1048576 # 1048576 bytes = 1 MiB
sudo chmod 600 /mnt/extra.swap
sudo mkswap /mnt/extra.swap
Now edit your /etc/fstab and add the line:
/mnt/extra.swap none swap sw 0 0
Finally, activate the new swap file with:
sudo swapon -a
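To verify that the new swap space is active, check the swap summary (output formatting varies slightly between distributions):
sudo swapon -s
free -m
/mnt/extra.swap should be listed, and the total swap size should have grown by 1 GB.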


Wednesday 28 September 2011

How to Configure SNMP in XenServer 5.x

Change firewall settings

You must change your firewall settings as follows to allow communication through the port that SNMP uses:

1. Open the file /etc/sysconfig/iptables in your preferred editor.

2. Add the following line to the INPUT section, after the line starting with -A RH-Firewall-1-INPUT -p udp --dport 5353... :
-A RH-Firewall-1-INPUT -p udp --dport 161 -j ACCEPT
3. Save and close the file.

4. Restart the firewall service:
# service iptables restart
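You can confirm the rule is active after the restart (assuming the default RH-Firewall-1-INPUT chain):
# iptables -L RH-Firewall-1-INPUT -n | grep 161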

Enable snmpd service
1. To enable the snmpd service, run the following command:
# chkconfig snmpd on
2. Start the snmpd service:
# service snmpd start
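You can confirm that the service is running and registered to start at boot (standard service commands on the CentOS-based dom0):
# service snmpd status
# chkconfig --list snmpd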

Change SNMP configuration
1. To change the SNMP configuration, edit the /etc/snmp/snmpd.conf file.

2. Restart the snmpd service:
# service snmpd restart


SNMP configuration examples
Default settings
By default you can view only the systemview subtree (.1.3.6.1.2.1.1).

View whole subtree

1. Change the lines as follows:
After the line starting with:
view systemview included (...)
Add this:
view all included .1
Change the line:
access notConfigGroup "" any noauth exact systemview none none
To:
access notConfigGroup "" any noauth exact all none none
2. Save the file.

3. Restart the service:
# service snmpd restart
Change community string (default is "public")
Change line:
com2sec notConfigUser default public
To:
com2sec notConfigUser default anything_you_need
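After restarting snmpd, you can test the setup from any machine with the net-snmp client tools installed, using your community string (anything_you_need in the example above):
snmpwalk -v 2c -c anything_you_need <xenserver_ip> .1.3.6.1.2.1.1
If you exposed the whole tree, walking .1 should return results as well.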


Friday 23 September 2011

Packet loss monitoring with Zabbix

1. Create a file named "packetloss" in /etc/zabbix/externalscripts/:
vi /etc/zabbix/externalscripts/packetloss
Note: you may need to create the externalscripts directory first:
mkdir -p /etc/zabbix/externalscripts
2. Paste the following into the packetloss file:
#!/bin/sh
# Usage: packetloss <ip/hostname> <ping size in bytes>
# Prints the packet loss percentage (100 if the host is unreachable).
if [ -z "$1" ]
then
  echo "missing ip / hostname address"
  echo " example: ./packetloss 192.168.201.1 10000"
  echo "10000 = number of bytes to ping with. The more you use, the harder the network has to work to deliver it, and the sooner you start to see packet loss. Pinging with the normal ping size is fairly pointless; on LAN networks 10000 - 20000 is recommended, and on the Internet around 1394 (1500 - 48 (PPPoE + IP + TCP) - 58 (IPsec))."
  echo "Remember some firewalls might block pings over 100 bytes."
  echo " "
  exit 1
fi
if [ -z "$2" ]
then
  echo "missing ping size"
  echo " example: ./packetloss 192.168.201.1 10000"
  exit 1
fi
PINGCOUNT=10
# Grab the loss percentage from ping's summary line.
tal=`ping -q -i0.30 -n -s $2 -c$PINGCOUNT $1 | grep "packet loss" | cut -d " " -f6 | cut -d "%" -f1`
if [ -z "$tal" ]
then
  echo 100
else
  echo $tal
fi
3. Make the file executable (the zabbix user must be able to run it):
chmod +x /etc/zabbix/externalscripts/packetloss
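Before wiring it into Zabbix, you can test the script by hand as the zabbix user (substitute a host on your own network):
sudo -u zabbix /etc/zabbix/externalscripts/packetloss 192.168.201.1 10000
It should print a single number: the packet loss percentage.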
4. In Zabbix, verify that the host/template you want to monitor packet loss on has a valid IP address or host name and the correct "Connect to" option selected.

Then, under Items, create a new item for that host/template:
Type: External Check
Key: packetloss[10000]
Save the item.

5. Now check Monitoring -> Latest data for that host and you should start seeing packet loss values.

Done.

The number 10000 is the ping payload size in bytes; it is very hard to spot packet loss when sending only a few bytes, as a normal ping does.

Try increasing the size until you see packet loss; then you know you are pushing your equipment to its limit.


Thursday 22 September 2011

BackupPC and MySQL

The best way to back up a MySQL server using BackupPC is to use a pre-dump script.

You can use $Conf{DumpPreUserCmd} to run a mysqldump.

Stdout from these commands is written to the Xfer (or Restore) log file. Note that all commands are executed directly, without a shell, so the program name needs to be a full path and you can't include shell syntax like redirection and pipes; put that in a script if you need it.

So in our case we would create a script on the BackupPC client to dump all databases into a file:
vi /usr/local/sbin/myBkp.sh
and paste the following into it:
#!/bin/bash
MYSQLDUMP="$(which mysqldump)"
DEST="/backup/mysqlDump.sql"   # this directory must exist and be included in the backup
MYSQLUSER="root"
MYSQLPASS="mypassword"
# no need to change anything below...
#####################################################
# simple lock file so two dumps never run at the same time
LOCKFILE=/tmp/myBkup.lock
if [ -f "$LOCKFILE" ]; then
  echo "Lockfile $LOCKFILE exists, exiting!"
  exit 1
fi
touch "$LOCKFILE"
echo "== MySQL Dump Starting $(date) =="
# -A dumps all databases; --single-transaction takes a consistent
# snapshot of InnoDB tables without locking them
$MYSQLDUMP --single-transaction --user=${MYSQLUSER} --password="${MYSQLPASS}" -A > "${DEST}"
echo "== MySQL Dump Ended $(date) =="
rm -f "$LOCKFILE"
Make the script executable:
chmod +x /usr/local/sbin/myBkp.sh 
and set $Conf{DumpPreUserCmd} with:
$sshPath -q -x -l root $host /usr/local/sbin/myBkp.sh
Now you just have to make sure that BackupPC is backing up the /backup folder (or whatever folder you set in the script), and you can also exclude the /var/lib/mysql folder from the BackupPC backups, as in the sketch below.
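For example, a minimal sketch of the relevant per-host settings, assuming an rsync backup of / (share names and paths will depend on your setup):
$Conf{DumpPreUserCmd} = '$sshPath -q -x -l root $host /usr/local/sbin/myBkp.sh';
$Conf{BackupFilesExclude} = { '/' => ['/var/lib/mysql'] };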


BackupPC "Got unknown type" errors

Type 8 files are sockets and type 9 files are Solaris doors.
These are transient files that don't need to be backed up since they can't be restored - they are created by the applications that need them.

The warning messages are benign - they simply mean that those files are being skipped.


Watch BackupPC progress


From the BackupPC server you can check which files BackupPC is using at the moment with:
watch "lsof -n -u backuppc | egrep ' (REG|DIR) ' | egrep -v '( (mem|txt|cwd|rtd) |/LOG)' | awk '{print $9}'"
You can also check the running transfer log with:
/usr/share/backuppc/bin/BackupPC_zcat /var/lib/backuppc/pc/desktop2/XferLOG.z
On the client side you can use:
watch "lsof -n | grep rsync | egrep ' (REG|DIR) ' | egrep -v '( (mem|txt|cwd|rtd) |/LOG)' | awk '{print $9}'"


SVN Dump Parts Of A Repository

Assuming the following SVN repo structure:
- repository root
| - project1
| - project2
| - project3
To dump the project1 history into a portable, re-creatable format, first you use svnadmin dump, like this:
svnadmin dump [path to repo] > repo.dump
This creates a dump of the entire repository in a file called repo.dump. It might take some time and is CPU-intensive, so it is best performed outside normal working hours.
Then use svndumpfilter to filter just for the project1 folder (see folder tree above):
svndumpfilter include project1 < repo.dump > project1.dump
If you have nested repositories, svndumpfilter breaks with a syntax error. To get around this, build project1.dump by running the filter multiple times with the 'exclude' directive instead, until you are left with what you want:
svndumpfilter exclude project2 < repo.dump > tmp.dump
svndumpfilter exclude project3 < tmp.dump > project1.dump
At the end you get a full svn repository that could be re-created anywhere, like this:
svnadmin create /var/svn/project1
svnadmin load /var/svn/project1 < project1.dump
mkdir -p ~/workingcopies/project1
svn co file:///var/svn/project1 ~/workingcopies/project1/
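As a quick sanity check (same hypothetical paths as above), the new repository's history should contain only project1's revisions:
svn log -q file:///var/svn/project1 | head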


Thursday 15 September 2011

Remove a failed Xenserver from Pool

Run the following command to discover the UUID of the broken server:
xe host-list
Use the following command to remove the host:
xe host-forget uuid=<uuid_of_broken_server>
Note that a host should only be forgotten if it is physically unrecoverable; if possible, hosts should be ejected from the pool instead.
Once a host has been forgotten it will have to be reinstalled.

If the forget command fails with:
This host cannot be forgotten because there are some user VMs still running
Use this command to find which VMs are listed as running on that server:
xe vm-list resident-on=<uuid_of_broken_server>
Then, for each VM returned by the previous command use the following command:
xe vm-reset-powerstate uuid=<VM_uuid>
Then try the forget command again.


Sunday 4 September 2011

Combining multiple SVN repositories into one

Assuming that the existing repositories have a structure like:
- repository root
 | - branches
 | - tags
 | - trunk

and you want a structure something like:
- repository root
 | - projectA
   | - branches
   | - tags
   | - trunk
 | - projectB
   | - branches
   | - tags
   | - trunk

Then for each of your project repositories:
svnadmin dump <filesystem path to repository> > project<n>.dmp
Then for each of the dump files (the project<n> directory must already exist in the new repository, so create it first with svn mkdir):
svnadmin load --parent-dir "project<n>" <filesystem path to new repository> < project<n>.dmp
More complex manipulations are possible, but this is the simplest and most straightforward approach. Changing the source repository structure during a dump/load is hazardous, but doable through a combination of svnadmin dump, svndumpfilter, hand-editing or additional text filters, and svnadmin load.
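Putting it together, a concrete sketch with two hypothetical repositories /var/svn/projectA and /var/svn/projectB merged into a new repository at /var/svn/combined:
svnadmin create /var/svn/combined
svn mkdir -m "create project directories" file:///var/svn/combined/projectA file:///var/svn/combined/projectB
svnadmin dump /var/svn/projectA > projectA.dmp
svnadmin dump /var/svn/projectB > projectB.dmp
svnadmin load --parent-dir "projectA" /var/svn/combined < projectA.dmp
svnadmin load --parent-dir "projectB" /var/svn/combined < projectB.dmp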

What about the revision numbering?
Let's assume that you have two repositories, one with HEAD revision 100 and the other with HEAD revision 150.

You dump the first repository and load it into the new one: you end up with the full history of the first repository, from revision 0 to revision 100.

Then you dump the second repository and load it into the new one: it gets loaded with its full history, and the only things that change are the actual revision numbers. The history of the second repository will be represented in the new repository from revision 101 to revision 250.

The full history of both repositories is preserved; only the revision numbers change for the repository that is imported second.

The same of course applies for more than two repositories.


MySQL - Converting to Per Table Data File for InnoDB

Issue with shared InnoDB /var/lib/mysql/ibdata1 storage
InnoDB tables currently store data and indexes in a shared tablespace (/var/lib/mysql/ibdata1). Because of this shared tablespace, data corruption in one InnoDB table can result in MySQL failing to start on the entire machine. Repairing InnoDB corruption can be extremely difficult, and the repair process can result in data loss even for tables that were not originally corrupted.

Since MySQL 5.5 will be using InnoDB as the default storage engine, it is important to consider the consequences of continuing to utilize the shared tablespace in /var/lib/mysql/ibdata1.

Changing to per-table tablespace with innodb_file_per_table

As an option to resolve the issue, MySQL has a configuration variable called innodb_file_per_table. To use it, the following line can be placed in the [mysqld] section of /etc/my.cnf to give each InnoDB table its own tablespace file:
innodb_file_per_table=1
After adding the line, MySQL needs to be restarted on the machine.
As a result, any InnoDB tables created after the line is added will get their own .ibd file under the /var/lib/mysql/<database>/ directory. Please note that the shared tablespace will still hold the internal data dictionary and undo logs.
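After the restart, you can confirm the setting is active:
mysql -u root -p -Bse "show variables like 'innodb_file_per_table';"
It should report ON.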

Converting old InnoDB tables
Any old databases with InnoDB tables that previously shared the tablespace in ibdata1 will still be using that file, so those databases need to be converted to the new system. The following query in the MySQL CLI generates a list of InnoDB tables and, for each, the command to run to convert it to the new innodb_file_per_table layout:
select concat('alter table ',TABLE_SCHEMA ,'.',table_name,' ENGINE=InnoDB;') as command FROM INFORMATION_SCHEMA.tables where table_type='BASE TABLE' and engine = 'InnoDB';
An example for Roundcube on my test machine shows the following output from the prior query:
alter table roundcube.cache ENGINE=InnoDB;
alter table roundcube.contacts ENGINE=InnoDB;
alter table roundcube.identities ENGINE=InnoDB;
alter table roundcube.messages ENGINE=InnoDB;
alter table roundcube.session ENGINE=InnoDB;
alter table roundcube.users ENGINE=InnoDB;
You would then simply need to issue the commands generated above to convert each table to the new innodb_file_per_table format.

Please note that these commands only need to be run in the MySQL command line for the conversion.

You can use the following script:
#!/bin/bash
MYSQL="$(which mysql)"
MYSQLUSER="root"
MYSQLPASS="mypassword"
# no need to change anything below...
#####################################################
# list every InnoDB base table as schema.table
TBLS=$($MYSQL -u $MYSQLUSER -p$MYSQLPASS -Bse "select concat(TABLE_SCHEMA,'.',table_name) from INFORMATION_SCHEMA.tables where table_type='BASE TABLE' and engine='InnoDB';")
for tbl in $TBLS
do
  echo "Converting table $tbl"
  # rebuilding the table with ALTER TABLE moves it into its own .ibd file
  $MYSQL -u $MYSQLUSER -p$MYSQLPASS -Bse "alter table $tbl ENGINE=InnoDB;"
done
Possible issues when converting old InnoDB tables
1. The conversion can put noticeable load on the system.
2. The conversion can fill up drive space (see the size estimate below).
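You can get a rough estimate of the space the per-table files will need by summing the current InnoDB table sizes (a sketch; the figures in information_schema are approximate, and note that the shared ibdata1 file will not shrink after the conversion):
mysql -u root -p -Bse "select table_schema, round(sum(data_length+index_length)/1024/1024) as MB from information_schema.tables where engine='InnoDB' group by table_schema;"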


How to determine the type of a MySQL database

To determine the storage engine being used by a table, you can use show table status. The Engine field in the results shows the storage engine for the table. Alternatively, you can select the engine field from information_schema.tables.

To get the type per database:
mysql -u root -p'<password>' -Bse 'select distinct table_schema, engine from information_schema.tables'
For a specific table use:
select engine from information_schema.tables where table_schema = 'schema_name' and table_name = 'table_name';
You can change between storage engines using alter table:
alter table the_table engine = InnoDB;
Where, of course, you can specify any available storage engine.
