Saturday, 26 January 2013

How to reset the root password of VMware ESXi 4.1 and 5.0

You can't recover your old password, but by following these steps you can set it to blank and then set a new one.

First you'll need a Linux live CD, any one will do.

After booting into a live Linux session you must look for a file named state.tgz on your VMware host's hard drive. To list all available partitions I used:
parted -l
I then mounted every VFAT partition and looked inside; in my case the file was on /dev/sda5, but in yours it might be on a different one. You can mount a partition with:
mount /dev/sda1 /mnt
(replace sda with your device's name and 1 with your partition number)
then check inside if the file state.tgz exists:
ls /mnt/
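If there are several VFAT partitions to check, a small loop can save some typing. This is a minimal sketch that assumes the partitions live on /dev/sda and are not currently mounted:
for part in /dev/sda[0-9]*; do
    mount "$part" /mnt 2>/dev/null || continue  # skip partitions that fail to mount
    [ -f /mnt/state.tgz ] && echo "state.tgz found on $part"
    umount /mnt
done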
After finding the state.tgz file you must uncompress it using:
cd /tmp
tar xzf /mnt/state.tgz
This will give you a local.tgz file, which you have to extract using:
tar xzf local.tgz
Now edit the extracted shadow file at /tmp/etc/shadow:
vi etc/shadow
Inside, locate the root account's line and remove its hash (everything between the first and the second colon); you will then be able to log in to the service console as root with no password at all.
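For illustration, the root entry will look something like this (hash shortened here; the other fields will differ on your system):
root:$1$shJu...mq6:13358:0:99999:7:::
After removing the hash it should read:
root::13358:0:99999:7:::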

Finally, re-pack the files and move the modified state.tgz back to the VFAT partition. It is a good idea to make a backup copy of the original state.tgz first, in case something goes wrong:
mv /mnt/state.tgz /mnt/state.tgz.bak
rm local.tgz
tar czf local.tgz etc
tar czf state.tgz local.tgz
mv state.tgz /mnt/
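Before rebooting, you can optionally check that the new archive is readable and contains what you expect:
tar tzf /mnt/state.tgz
It should list local.tgz.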
Reboot back into ESXi and you're done.

Friday, 11 January 2013

Calculating total disk usage by files with specific extension

For example, if you want to check how much space is being used by log files across your entire system, you can use the following:

find / -type f -name "*.log*" -exec du -b {} \; | awk '{ sum += $1 } END { kb = sum / 1024; mb = kb / 1024; gb = mb / 1024; printf "%.0f MB (%.2fGB) disk space used\n", mb, gb}'
Just replace "*.log*" with the pattern you want to search for and the above will give you the total disk space used by all matching files.
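As a side note, spawning du once per file can be slow on a big file system; with GNU find you can print the sizes directly and get the same result faster:
find / -type f -name "*.log*" -printf '%s\n' | awk '{ sum += $1 } END { mb = sum / 1024 / 1024; printf "%.0f MB (%.2fGB) disk space used\n", mb, mb / 1024 }'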

Saturday, 5 January 2013

PostgreSQL cluster using DRBD and hot standby

Cluster Configuration:

First install all the necessary packages:
yum install gfs2-utils cman fence-virtd-checkpoint lvm2-cluster perl-Net-Telnet rgmanager device-mapper-multipath ipvsadm piranha luci modcluster cluster-snmp ricci
yum groupinstall "High Availability"
yum install postgresql-server
chkconfig --level 123456 ricci on
chkconfig --level 123456 luci on
chkconfig --level 123456 cman on
chkconfig --level 123456 iptables off
chkconfig --level 123456 ip6tables off
chkconfig postgresql on
chkconfig cman on
chkconfig rgmanager on
Now edit the cluster configuration file:
vi /etc/cluster/cluster.conf
Make it look like this:
<?xml version="1.0"?>
<cluster config_version="7" name="pgcluster">
  <clusternodes>
    <clusternode name="10.39.30.7" votes="1" nodeid="1">
      <fence/>
    </clusternode>
    <clusternode name="10.39.30.8" votes="1" nodeid="2">
      <fence/>
    </clusternode>
  </clusternodes>
  <rm>
    <failoverdomains>
      <failoverdomain name="PGSQL" nofailback="0" ordered="0" restricted="0">
        <failoverdomainnode name="10.39.30.7"/>
        <failoverdomainnode name="10.39.30.8"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="10.39.30.6" monitor_link="on" sleeptime="10"/>
      <postgres-8 config_file="/var/lib/pgsql/data/postgresql.conf" name="pgsql" shutdown_wait="5"/>
    </resources>
    <service autostart="1" exclusive="0" domain="PGSQL" name="pgsql" recovery="relocate">
      <drbd name="drdb-postgres" resource="r0">
        <fs device="/dev/drbd0" fsid="6202" fstype="ext3" mountpoint="/var/lib/pgsql" name="pgsql" options="noatime"/>
      </drbd>
      <ip ref="10.39.30.6"/>
      <postgres-8 ref="pgsql"/>
    </service>
  </rm>
  <cman expected_votes="1" two_node="1"/>
  <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>
</cluster>
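After saving the file you can sanity-check it and, assuming ricci is running on both nodes, push it to the other node with the standard RHEL 6 cluster tools:
ccs_config_validate
cman_tool version -r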

DRBD Configuration:

Install the necessary files:
yum install gcc flex make libxslt rpm-build redhat-rpm-config kernel-devel
You need to download and build DRBD manually:
wget http://oss.linbit.com/drbd/8.4/drbd-8.4.1.tar.gz
The following commands will generate the DRBD RPM packages:
tar -xvf *.tar.gz
mkdir -p /root/rpmbuild/SOURCES/
cp drbd*.tar.gz /root/rpmbuild/SOURCES/
cd drbd-8.4.1
./configure --with-rgmanager --enable-spec --with-km
make tgz
rpmbuild --bb drbd.spec --without xen --without heartbeat --without udev --without pacemaker --with rgmanager
rpmbuild --bb drbd-kernel.spec
rpmbuild --bb drbd-km.spec
Now install the newly created packages:
cd /root/rpmbuild/RPMS/x86_64
rpm -i drbd-utils-8.4.1-1.el6.x86_64.rpm drbd-bash-completion-8.4.1-1.el6.x86_64.rpm drbd-8.4.1-1.el6.x86_64.rpm drbd-rgmanager-8.4.1-1.el6.x86_64.rpm drbd-km-2.6.32_279.14.1.el6.x86_64-8.4.1-1.el6.x86_64.rpm
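You can quickly confirm that all the packages made it in with:
rpm -qa | grep drbd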
Add your nodes' IP addresses to the hosts file on both machines:
vi /etc/hosts
10.39.30.7 RHPG1
10.39.30.8 RHPG2
Create a DRBD configuration file:
vi /etc/drbd.d/r0.res
resource r0 {
   device /dev/drbd0;
   meta-disk internal;
   on RHPG1 {
      address 10.39.30.7:7789;
      disk /dev/sdb1;
   }
   on RHPG2 {
      address 10.39.30.8:7789;
      disk /dev/sdb1;
   }
}
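To confirm that the resource file parses cleanly, you can ask drbdadm to dump the configuration it sees:
drbdadm dump r0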
Create the partition /dev/sdb1 but do not format it:
fdisk /dev/sdb
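If you prefer a non-interactive approach, parted can label the disk and create the partition in one line (a sketch that assumes /dev/sdb is empty and the whole disk should be used):
parted -s /dev/sdb mklabel msdos mkpart primary 0% 100%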
Run on both machines:
drbdadm create-md r0
modprobe drbd
drbdadm up r0
Run this on one of the machines only, to promote it to primary and start the initial sync:
drbdadm -- --overwrite-data-of-peer primary r0
Check the sync status on any of the hosts with:
service drbd status
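Alternatively, the kernel's own view of the sync progress is available in /proc/drbd:
cat /proc/drbd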
Create the file system on /dev/drbd0:
mkfs.ext3 /dev/drbd0
and move over the PostgreSQL data (cp -a preserves ownership and permissions, which PostgreSQL requires, and the trailing /. puts the directory's contents at the top of the new file system, matching the /var/lib/pgsql mountpoint in cluster.conf):
mkdir /tmp/pgdata
mount /dev/drbd0 /tmp/pgdata
cp -a /var/lib/pgsql/. /tmp/pgdata/
Wait until the data is synced between the two hosts; check the status with:
service drbd status
Unmount the drbd device:
umount /dev/drbd0
and then, on both hosts, run:
rm -rf /var/lib/pgsql/*
Restart the drbd service:
service drbd restart
The output of:
service drbd status
should now show both hosts as Secondary:
0:r0 Connected Secondary/Secondary UpToDate/UpToDate C
At this point the resource is ready to be managed by rgmanager.
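Finally, start the cluster services on both nodes (they were enabled with chkconfig earlier) and verify that the pgsql service comes up:
service cman start
service rgmanager start
clustat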
