Sunday, 7 September 2014
Purge Removed packages
Packages marked as "rc" by dpkg have been removed, but their configuration files are still present. The following command will purge them:
dpkg --list | grep "^rc" | cut -d " " -f 3 | xargs -r sudo dpkg --purge
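You can preview the non-destructive part of that pipeline on canned dpkg --list output (the package names below are made up):

```shell
# Dry-run sketch: canned dpkg --list output with hypothetical package names.
# Same grep/cut as the purge one-liner, minus the destructive xargs step.
sample='ii  bash  5.0-6  amd64  GNU Bourne Again SHell
rc  oldpkg  1.2-3  amd64  removed, config files remain
rc  otherpkg  4.5-6  amd64  removed, config files remain'
echo "$sample" | grep "^rc" | cut -d " " -f 3
# → oldpkg
# → otherpkg
```

Only when the printed names look right would you pipe them on to xargs -r sudo dpkg --purge.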
Possibly Related Posts
How to permanently delete ALL older kernels
This script will remove ALL versions but two, the currently active and the most recent of the remaining installed versions:
#!/bin/bash
keep=2
ls /boot/ | grep vmlinuz | sed 's@vmlinuz-@linux-image-@g' | grep -v $(uname -r) | sort -Vr | tail -n +$keep | while read I
do
aptitude purge -y $I
done
update-grub
You can specify how many kernels to keep by adjusting the keep variable; if you set it to 1, only the active kernel will be left installed.
Or you can do it in one line that you can use in crontab:
ls /boot/ | grep vmlinuz | sed 's@vmlinuz-@linux-image-@g' | grep -v $(uname -r) | sort -Vr | tail -n +2 | xargs -r sudo aptitude purge -y
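The selection logic is easy to sanity-check on a canned list of kernel package names (hypothetical versions; the newest one stands in for the running kernel):

```shell
# Canned kernel package list with made-up versions. The running kernel
# (here 3.13.0-30) stands in for $(uname -r).
kernels='linux-image-3.13.0-24-generic
linux-image-3.13.0-27-generic
linux-image-3.13.0-30-generic'
# Drop the running kernel, sort newest first, skip the newest remaining one:
echo "$kernels" | grep -v '3.13.0-30' | sort -Vr | tail -n +2
# → linux-image-3.13.0-24-generic
```

So only the oldest kernel would be handed to aptitude purge; the running kernel and the most recent spare survive.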
Possibly Related Posts
Saturday, 30 August 2014
GitLab update script
I've recently installed GitLab and they provide easy to install deb and rpm packages but not a repository to help us keep our installation up to date. So I developed the following script that will check https://about.gitlab.com/downloads/archives/ for newer versions and install them when available:
I haven't tested this script on a CentOS machine so it might need some adjustments to work there.
#!/bin/bash
OS="ubuntu"
OS_VERSION="14.04"
OS_ARCHITECTURE="amd64"
# Ubuntu/Debian:
INSTALLED_VERSION=$(dpkg -s gitlab | grep -i version | cut -d" " -f2)
# CentOS:
#INSTALLED_VERSION=$(rpm -qa | grep omnibus)
# Uses sort -V to compare versions
LATEST=$(wget -q -O- https://about.gitlab.com/downloads/archives/ | grep -i "$OS" | grep -i "$OS_VERSION" | grep -i $OS_ARCHITECTURE | grep -Eo 'href=".*"' | cut -d'"' -f2 | sort -V | tail -n 1)
PACKAGE=${LATEST##*/}
LATEST_VERSION=$(echo $PACKAGE | cut -d_ -f2)
echo ""
echo " Current version: $INSTALLED_VERSION"
echo " Latest version: $LATEST_VERSION"
if [[ "$INSTALLED_VERSION" != "$LATEST_VERSION" && "$LATEST_VERSION" != "" ]]; then
echo " Update to $LATEST_VERSION available!"
echo -n " Do you wish to upgrade? [y/N]? "
read answer
case $answer in
y*)
# Backup branding:
cp /opt/gitlab/embedded/service/gitlab-rails/public/assets/*logo*.png /tmp/
wget $LATEST
# Stop unicorn and sidekiq so we can do database migrations
sudo gitlab-ctl stop unicorn
sudo gitlab-ctl stop sidekiq
# Create a database backup in case the upgrade fails
sudo gitlab-rake gitlab:backup:create
# Install the latest package
# Ubuntu/Debian:
sudo dpkg -i $PACKAGE
# CentOS:
#sudo rpm -Uvh $PACKAGE
# Restore branding:
sudo cp /tmp/*logo*.png /opt/gitlab/embedded/service/gitlab-rails/public/assets/
# Reconfigure GitLab (includes database migrations)
sudo gitlab-ctl reconfigure
# Restart all gitlab services
sudo gitlab-ctl restart
rm $PACKAGE
;;
*)
echo "No change"
;;
esac
else
echo " Nothing to do!"
fi
echo ""
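For reference, this is how the script's version-extraction expressions behave on a hypothetical archive URL (the URL and version numbers below are made up; the real value comes from the wget/grep/sort pipeline):

```shell
# Hypothetical download URL, standing in for the pipeline's result:
LATEST="https://downloads.example.com/ubuntu-14.04/gitlab_7.2.1-omnibus-1_amd64.deb"
PACKAGE=${LATEST##*/}                            # strip everything up to the last '/'
LATEST_VERSION=$(echo "$PACKAGE" | cut -d_ -f2)  # field between the underscores
echo "$PACKAGE"         # → gitlab_7.2.1-omnibus-1_amd64.deb
echo "$LATEST_VERSION"  # → 7.2.1-omnibus-1
```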
Possibly Related Posts
Friday, 25 July 2014
Removing hosts from backuppc
Simply remove the host from the web interface and rm -rf the pc/<host> directory, then wait for the next BackupPC_nightly run - it will remove all superfluous files from the pools.
The path for this directory usually is:
/var/lib/Backuppc/pc/<hostname>
If you want to force the clean-up process, you can remove your host like this:
1. Login to the Backuppc server
2. Remove the host in the Backuppc web-interface (under hosts)
3. Remove its directory /var/lib/Backuppc/pc/<hostname>:
rm -rf /var/lib/Backuppc/pc/<hostname>
4. Shutdown backuppc:
service backuppc stop
5. Change to the backuppc user:
su - backuppc
6. Run the nightly script:
/usr/share/BackupPC/bin/BackupPC_nightly 0 255
7. Go back to root:
exit
8. Start backuppc again:
service backuppc start
Labels:
backuppc,
Backups,
Command Line,
Linux
Possibly Related Posts
Reclaim free space from Time Machine sparsebundle
You must run these commands as root:
sudo su -
Make sure the mount point exists:
mkdir -p /Volumes/TM
Then mount the afp share:
mount_afp 'afp://user:password@afp_server_address/share_name' /Volumes/TM
Now use hdiutil to reclaim the available free space:
hdiutil compact /Volumes/TM/ComputerName.sparsebundle
Unmount the share:
umount /Volumes/TM/
If you get an error message saying:
hdiutil: compact failed - Resource temporarily unavailable
you must make sure you don't have the afp share mounted elsewhere; you can check your mounts with:
df -h
If the output contains a line with your afp server's address or with the string "Time Machine", you have to unmount them.
The following script will do all that for you:
SRV="afp_server_address"
SAVEIFS=$IFS
IFS=$'\n';
for v in $(df -h | grep -E "$SRV|Time\sMachine" | cut -d"%" -f3 | sed -e "s/ *\//\//"); do
umount "$v"
done
IFS=$SAVEIFS
mkdir -p /Volumes/TM
mount_afp "afp://user:password@$SRV/share" /Volumes/TM
hdiutil compact "/Volumes/TM/$(scutil --get ComputerName).sparsebundle"
umount /Volumes/TM/
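The umount loop's field extraction can be previewed on a canned df -h line (server name and sizes are made up). macOS df prints two percent columns (Capacity and %iused), which is why the mount point is the third %-separated field:

```shell
# Canned OS X df -h line; the sed strips the spaces before the mount point.
line='//user@afp_server_address/share  500Gi  300Gi  200Gi  60%  123  456  42%   /Volumes/Time Machine Backups'
echo "$line" | cut -d"%" -f3 | sed -e "s/ *\//\//"
# → /Volumes/Time Machine Backups
```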
Labels:
Backups,
Command Line,
Mac OS X,
Time Machine
Possibly Related Posts
Thursday, 24 July 2014
Changing Time Machine Backup Interval
You can use the following command to change the Time Machine backup interval:
sudo defaults write /System/Library/LaunchDaemons/com.apple.backupd-auto StartInterval -int 14400
The time interval is in seconds, so 43200 will start a backup every 12 hrs.
Check out my previous post to learn how to manually delete Time Machine backups.
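For reference, the arithmetic behind those values:

```shell
# StartInterval is in seconds; 3600 seconds per hour:
echo $((1 * 3600))    # → 3600  (hourly, the default)
echo $((4 * 3600))    # → 14400 (every 4 hours)
echo $((12 * 3600))   # → 43200 (every 12 hours)
```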
Labels:
Backups,
Command Line,
Mac OS X,
Time Machine
Possibly Related Posts
Manage time machine backups
Sometimes you get errors saying that your Time Machine disk is full and Time Machine could not complete the backup, so you need to manually delete old backups.
tmutil provides methods of controlling and interacting with Time Machine, as well as examining and manipulating Time Machine backups. Common abilities include restoring data from backups, editing exclusions, and comparing backups.
tmutil latestbackup
Will output the path to the most recent backup and
tmutil listbackups
will list all existing backups. If you use the same backup disk for multiple machines, you can get just the backups from your machine with:
tmutil listbackups | grep "$(scutil --get ComputerName)"
The following command will delete the backups from a mac named old_mac_name:
sudo tmutil delete /Volumes/drive_name/Backups.backupdb/old_mac_name
If you want to be safe, pick one snapshot and delete it first to be sure the command works as intended. Cleaning up larger backup sets can take hours, so you want to be confident the command is deleting the correct data before leaving the Mac to it.
You can use the tmutil tool to delete backups one by one.
sudo tmutil delete /Volumes/drive_name/Backups.backupdb/mac_name/YYYY-MM-DD-hhmmss
Since tmutil was introduced with Lion, this will not work on earlier OS versions.
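Picking the oldest snapshot to delete is easy to script, since tmutil listbackups prints snapshots in chronological order, oldest first. A minimal sketch on hypothetical snapshot paths:

```shell
# Canned tmutil listbackups output (made-up machine name and dates):
backups='/Volumes/TM/Backups.backupdb/my_mac/2014-07-20-010203
/Volumes/TM/Backups.backupdb/my_mac/2014-07-21-010203
/Volumes/TM/Backups.backupdb/my_mac/2014-07-22-010203'
OLDEST=$(echo "$backups" | head -n1)   # oldest snapshot = first line
echo "${OLDEST##*/}"                   # → 2014-07-20-010203
```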
The tmutil delete command only removes the backup from the sparse bundle. It doesn’t actually free the disk space. To do that, you have to go a little deeper.
On your Mac is a mount point called /Volumes. You can examine the contents of this mount point with ls:
cd /Volumes
ls -1
This should output something like:
Macintosh HD
Recovery HD
Time Machine Backups
TimeMachine
These are the names of all the mounted disks (or things that look like disks) on your Mac. Notice two likely candidates for your actual TimeMachine volume. Yours may be named slightly differently, but the one you want is the one that actually shows files of type .sparsebundle . In my case, it is the volume TimeMachine:
sudo ls -l TimeMachine/
and you should see something similar to:
...
drwxr-x---@ 1 root wheel 264 Jul 25 08:21 sysadmin’s MacbookPro.sparsebundle
...
Notice that you don’t actually own the file. (Had I not used the sudo command with ls I could not have listed the contents of /Volumes/TimeMachine)
That .sparsebundle file for your Mac is where all your backup sets live. TimeMachine manages the contents of this file, but doesn’t do anything automatically to reduce its size. Luckily there is another tool for that, but you’ll have to be root to run it:
sudo su -
hdiutil compact /Volumes/TimeMachine/YourBackup.sparsebundle
Sample output:
Starting to compact…
Reclaiming free space…
...................................................
Finishing compaction…
Reclaimed 3.1 GB out of 304.1 GB possible.
That’s it! In this example I reclaimed 3.1GB of actual disk space on my TimeMachine volume.
The following bash script will remove the oldest backup and reclaim the free space:
COMPUTER_NAME=$(/usr/sbin/scutil --get ComputerName)
NBACKUPS=$(/usr/bin/tmutil listbackups | /usr/bin/grep "$COMPUTER_NAME" | /usr/bin/wc -l)
OLDEST_BACKUP=$(/usr/bin/tmutil listbackups | /usr/bin/grep "$COMPUTER_NAME" | /usr/bin/head -n1)
LATEST_BACKUP=$(/usr/bin/tmutil latestbackup)
echo Latest backup: $LATEST_BACKUP
if [[ -n "$LATEST_BACKUP" && "$LATEST_BACKUP" != "$OLDEST_BACKUP" ]]
then
echo "$NBACKUPS backups. Delete oldest: ${OLDEST_BACKUP##*/} [y/N]? \c"
read answer
case $answer in
y*)
echo Running: /usr/bin/sudo /usr/bin/tmutil delete "$OLDEST_BACKUP"
/usr/bin/sudo time /usr/bin/tmutil delete "$OLDEST_BACKUP"
echo "Do you wish to reclaim the free space now? [y/N]? \c"
read answer
case $answer in
y*)
mkdir -p /Volumes/TM
mount_afp 'afp://user:pass@afp_server_address/share_name' /Volumes/TM
hdiutil compact "/Volumes/TM/$(scutil --get ComputerName).sparsebundle"
umount /Volumes/TM/
;;
*)
echo No change
;;
esac
;;
*)
echo No change
;;
esac
else
echo "No backup available for deletion"
fi
In the script above, don't forget to change the afp URL (afp://user:pass@afp_server_address/share_name) to your own.
Labels:
Backups,
Command Line,
Mac OS X,
Time Machine
Possibly Related Posts
Wednesday, 23 July 2014
Sorry, Command-not-found Has Crashed
When you try to execute a command that is not installed, Ubuntu tries to suggest the package you should install, but sometimes, especially after an upgrade, you get an error message saying:
Sorry, command-not-found has crashed! Please file a bug report at:
(...)
This solves the problem:
export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
sudo dpkg-reconfigure locales
Labels:
Command Line,
Linux,
Ubuntu
Possibly Related Posts
Ubuntu as Time Machine server
This guide will help you to install and configure the netatalk service on an Ubuntu server so it can function as a Time Machine backup server for your Mac OS machines.
First install the necessary packages:
sudo apt-get install netatalk avahi-daemon libnss-mdns
Open the netatalk default configuration file:
sudo vi /etc/default/netatalk
Modify the lines:
ATALKD_RUN=yes
PAPD_RUN=no
CNID_METAD_RUN=yes
AFPD_RUN=yes
TIMELORD_RUN=no
A2BOOT_RUN=no
Edit the atalkd.conf file:
sudo vi /etc/netatalk/atalkd.conf
and add to the bottom:
eth0
Edit the AppleVolumes.default file:
sudo vi /etc/netatalk/AppleVolumes.default
and add to the bottom:
/backups/timemachine "Time Machine" allow:@admin cnidscheme:cdb volsizelimit:200000 options:usedots,upriv,tm
The example above also limits the size shown to OS X to 200 GB (the number is given in MiB, so it's 200,000 times 1024 KiB in the real world).
Edit the afpd configuration file:
sudo vi /etc/netatalk/afpd.conf
and add to the bottom:
- -transall -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword -advertise_ssh -mimicmodel TimeCapsule6,106 -setuplog "default log_warn /var/log/afpd.log"
Create a configuration file for the avahi afpd discovery:
sudo vi /etc/avahi/services/afpd.service
and enter the following into it:
<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
<name replace-wildcards="yes">%h</name>
<service>
<type>_afpovertcp._tcp</type>
<port>548</port>
</service>
<service>
<type>_device-info._tcp</type>
<port>0</port>
<txt-record>model=MacPro</txt-record>
</service>
</service-group>
Restart the services:
sudo service netatalk restart
sudo service avahi-daemon restart
Labels:
Backups,
Command Line,
Linux,
Mac OS X,
Time Machine
Possibly Related Posts
Saturday, 19 July 2014
MySQL and Oracle command equivalents
MySQL has specific commands that provide easy access to the information_schema database for schema-level details, but Oracle does not provide such easy access to some of its schema-level metadata.
Here are some MySQL specific commands/Syntaxes & equivalent Oracle techniques:
To get the list of databases
MySQL :
show databases
Oracle :
SELECT username FROM all_users ORDER BY username;
To get the current schema
MySQL :
select DATABASE();
Oracle :
SELECT sys_context('USERENV', 'CURRENT_SCHEMA') FROM dual;
To get the list of tables within the current database
MySQL :
use database_name;
show tables;
Oracle :
select * from user_tables;
Here the schema is based on the connected username, so it is selected during the creation of the connection.
USER_TABLES will have a row for every table in your schema. If you are looking for the tables in your schema, this would be the correct query. If you are looking for the tables in some other schema, this is not the right table to use.
ALL_TABLES will have a row for every table you have access to regardless of schema. You would, presumably, want to qualify the query by specifying the name of the schema you are interested in, i.e.
SELECT table_name
FROM all_tables
WHERE owner = <<name of schema>>
Of course, that assumes that you have at least SELECT access on every table in that schema. If that is not the case, then you would need to use DBA_TABLES (which would require that the DBA grant you access to that table), i.e.
SELECT table_name
FROM dba_tables
WHERE owner = <<name of schema>>
To get the connected connection info
MySQL :
show processlist
Oracle :
SELECT sess.process, sess.status, sess.username, sess.schemaname, sql.sql_text FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id AND sess.type = 'USER';
To limit the selection
MySQL :
select * from user limit 10;
Oracle :
select * from table_name where ROWNUM <= 10;
To select rows from somewhere in the middle:
MySQL :
select username from user limit 10, 15;
Oracle :
select element_name from (select element_name, ROWNUM as row_number from table_name) t1 where t1.row_number > 10 and t1.row_number <= 15;
Note: here we have to use a subquery rather than write "select element_name from table_name where ROWNUM > 10 and ROWNUM <= 15;" directly. ROWNUM is assigned to a row only once it satisfies the conditions in the WHERE clause, so the condition "ROWNUM > 10 and ROWNUM <= 15" can never be satisfied: the first candidate row would be tested as ROWNUM 1, fail, and ROWNUM would never be incremented. The subquery lets the ROWNUMs be assigned inside it, and the outer query then filters the required rows. (Oracle also does not accept AS in table aliases, so the subquery alias is written without it.)
Describe table has a same syntax in both MySQL & Oracle.
desc table_name;
To view errors/warnings
MySQL :
show warnings / show errors
Oracle :
select * from user_errors; / show errors
MySQL has auto_increment columns
MySQL :
create table table_name (element_id int AUTO_INCREMENT primary key, element_name varchar(20));
Oracle :
i) Create table without the auto_increment keyword (because it does not exist in Oracle)
create table table_name (element_id int primary key, element_name varchar(20));
ii) Create a sequence, which provides the incremented values
create sequence auto_incrementor;
iii) Create a trigger, which gets the next value from the sequence and assigns it to the column to be auto-incremented
CREATE TRIGGER trig_incrementor BEFORE INSERT ON table_name
FOR EACH ROW
BEGIN
SELECT auto_incrementor.NEXTVAL into :new.element_id FROM dual;
END;
To get the table create script back
MySQL :
show create table table_name;
Oracle :
Make sure that the select_catalog_role is already available for the given user if not assign the role, as shown below.
grant select_catalog_role to [username];
Increase the page size and maximum width for displaying the results so that the complete table definition can be displayed in the sqlplus console.
set pagesize 999
set long 9000
select dbms_metadata.get_ddl('TABLE', 'TABLE_NAME', 'database_name') from dual;
To get the session variables
MySQL :
show variables; or show variables like 'inno%';
Oracle :
SELECT name, value FROM gv$parameter; or SELECT sys_context('USERENV', ) FROM dual;
Explain the execution plan of a sql statement
MySQL :
explain select * from table_name;
Oracle :
i) First execute the explain plan so that it fills the plan_table (this table needs to be created according to the standard plan_table format, if it does not exist already)
explain plan for select * from table_name;
ii) Now the results of the explain plan are populated in plan_table, so we use a hierarchical (CONNECT BY) query to get a readable summary of the results.
select substr (lpad(' ', level-1) || operation || ' (' || options || ')',1,30 ) "Operation", object_name "Object" from plan_table start with id = 0 connect by prior id=parent_id;
Possibly Related Posts
Thursday, 17 July 2014
MySQL and Postgres command equivalents
Task: list existing databases, connect to one of the databases, then list existing tables and finally show the structure of one of the tables.
In Mysql:
show databases;
use database;
show tables;
describe table;
exit
In PostgreSQL:
\l
\c database
\dt
\d+ table
\q
Labels:
Databases,
MySQL,
PostgreSQL
Possibly Related Posts
Wednesday, 16 July 2014
Test imap using telnet
For added security, you can encrypt your IMAP connection. This requires that your server supports SSL or TLS and that you have access to an SSL/TLS client program, for example OpenSSL, to use instead of telnet.
As the port number is normally 993, an example OpenSSL command would be openssl s_client -connect imap.example.com:993 -quiet. (If you would like to see the public key of the server, as well as some other encryption-related information, omit -quiet.) The server should then start an IMAP session, displaying a greeting such as the * OK Dovecot ready example below.
telnet imap.example.com 143
#output: Trying 193.136.28.29...
#output: Connected to imap.example.com.
#output: Escape character is '^]'.
#output: * OK Dovecot ready.
a1 LOGIN MyUsername MyPassword
#output: a1 OK Logged in.
a2 LIST "" "*"
#output: * LIST (\HasNoChildren) "." "INBOX"
#output: a2 OK List completed.
a3 EXAMINE INBOX
#output: * FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
#output: * OK [PERMANENTFLAGS ()] Read-only mailbox.
#output: * 1 EXISTS
#output: * 1 RECENT
#output: * OK [UNSEEN 1] First unseen.
#output: * OK [UIDVALIDITY 1257842737] UIDs valid
#output: * OK [UIDNEXT 2] Predicted next UID
#output: a3 OK [READ-ONLY] Select completed.
a4 FETCH 1 BODY[]
#output: * 1 FETCH (BODY[] {405}
#output: Return-Path: sender@example.com
#output: Received: from client.example.com ([192.0.2.1])
#output: by mx1.example.com with ESMTP
#output: id <20040120203404.CCCC18555.mx1.example.com@client.example.com>
#output: for <recipient@example.com>; Tue, 20 Jan 2004 22:34:24 +0200
#output: From: sender@example.com
#output: Subject: Test message
#output: To: recipient@example.com
#output: Message-Id: <20040120203404.CCCC18555.mx1.example.com@client.example.com>
#output:
#output: This is a test message.
#output: )
#output: a4 OK Fetch completed.
a5 LOGOUT
#output: * BYE Logging out
#output: a5 OK Logout completed.
Labels:
Command Line,
dovecot,
IMAP,
telnet
Possibly Related Posts
Source IP based reverse proxy
If you want to proxy some source IP ranges/subnets to one server and other subnets to a different server, you can do this using mod_rewrite for proxying: set up a rewrite condition based on the source IP and a rewrite rule with the [P] flag. Something like this should work:
One possible scenario for this is when you want to migrate your users from server A to server B one IP range at a time.
RewriteCond %{REMOTE_ADDR} ^10\.2\.
RewriteRule ^/(.*) http://serverA/$1 [P]
ProxyPassReverse / http://serverA/
RewriteCond %{REMOTE_ADDR} ^10\.3\.
RewriteRule ^/(.*) http://serverB/$1 [P]
ProxyPassReverse / http://serverB/
If you're using Apache 2.4 or newer you can also achieve this with the following configuration:
<If "-R '10.2.0.0/16'">
ProxyPass / http://serverA/
ProxyPassReverse / http://serverA/
</If>
<ElseIf "-R '10.3.0.0/16'">
ProxyPass / http://serverB/
ProxyPassReverse / http://serverB/
</ElseIf>
<Else>
ProxyPass / http://serverC/
ProxyPassReverse / http://serverC/
</Else>
Possibly Related Posts
Tuesday, 15 July 2014
Preserving client ip with apache reverse proxy
The first thing that I thought of was the "X-Forwarded-For" header, an HTTP header inserted into the original HTTP request whose value is equal to the client's public IP. It turns out the Apache reverse proxy inserts this header by default, so we only need to instruct the backend server itself to provide the application with the correct client IP.
If your backend server is a Tomcat server, the solution can be to use the RemoteIP Tomcat valve.
It’s quite simple to configure in that all that needs to be done is to modify tomcat server.xml to recognise original client IP rather than the proxy IP by adding the following to server.xml:
<Valve className="org.apache.catalina.valves.RemoteIpValve" internalProxies="127\.0\.0\.1" />
make sure to change 127.0.0.1 to the address of the apache reverse proxy.
The application could now recognise the original client IP.
The apache equivalent of the above method is using mod_rpaf for Apache 1.3 & 2.2.x and mod_remoteip for Apache 2.4 and 2.5.
These apache modules can be used to preserve both remote IP/HOST. Internally they use X-Forwarded-For header to detect a proxy in it’s list of known proxies and reset the headers accordingly. This works with any proxy server in the front end provided that the proxy server sets X-Forwarded-For header.
To use mod_rpaf, install and enable it in the backend server and add following directives in the module’s configuration:
RPAFenable On
RPAFsethostname On
RPAFproxy_ips 127.0.0.1
Remote IP is automatically preserved when RPAFenable On directive is used. RPAFsethostname On directive should be used to preserve host and RPAFproxy_ips is the list of known proxy ips.
Restart backend apache server and you are good to go.
For mod_remoteip, it's a bit similar; the configuration should look something like this:
RemoteIPHeader X-Real-IP
RemoteIPInternalProxy 1.2.3.4
RemoteIPInternalProxy 5.6.7.8
mod_remoteip however has a lot more configuration options.
When the proxy server is an Apache, ProxyPreserveHost directive in mod_proxy can be used to preserve the remote host not the remote ip. This is useful for situations where name based virtual hosting is used and the backend server needs to know the virtual name of host.
Open mod_proxy configuration file of your proxy server and enter directive, ProxyPreserveHost On, and restart your apache instance.
Possibly Related Posts
Thursday, 10 July 2014
Finding external IP using the command line
The easiest way is to use an external service via a commandline browser or download tool. Since wget is available by default in Ubuntu, we can use that.
To find your ip, use:
wget -qO- http://ipecho.net/plain ; echo
You can do the same using curl:
curl ipecho.net/plain ; echo
Labels:
Command Line,
Linux,
Networking
Possibly Related Posts
Wednesday, 9 July 2014
How to test a listening TCP/UDP port through nc
Netcat (nc) can be used for many purposes; among other things, it works as a very fast basic port scanner, letting you scan a single port or a range.
To scan a range of UDP ports 1 to 1000:
nc -vzu destination_ip 1-1000
To scan a range of TCP ports 1 to 1000
nc -vz destination_ip 1-1000
Labels:
Command Line,
Linux,
Networking
Possibly Related Posts
Thursday, 26 June 2014
Install Vsftpd on Ubuntu
In my last post I talked about enabling the userdir module on Apache. You can use vsftpd to give your users FTP access to their own pages; this is how you can install it:
aptitude -y install vsftpd
Then edit its configuration file:
vi /etc/vsftpd.conf
And make the following changes:
# line 29: uncomment
write_enable=YES
# line 97,98: uncomment ( allow ascii mode transfer )
ascii_upload_enable=YES
ascii_download_enable=YES
# line 120: uncomment ( enable chroot )
chroot_local_user=YES
# line 121: uncomment ( enable chroot list )
chroot_list_enable=YES
# line 123: uncomment ( enable chroot list )
chroot_list_file=/etc/vsftpd.chroot_list
# line 129: uncomment
ls_recurse_enable=YES
# add at the last line
# specify root directory ( if don't specify, users' home directory equals FTP home directory)
#local_root=public_html
# turn off seccomp filter
seccomp_sandbox=NO
Edit the list of users that can access your server.
vi /etc/vsftpd.chroot_list
Add the users you allow to move over their home directory
Finally restart the FTP service:
service vsftpd restart
Possibly Related Posts
Enable userdir Apache module on Ubuntu
First activate the module:
sudo a2enmod userdir
now edit the module's conf file:
sudo vi /etc/apache2/mods-enabled/userdir.conf
and change the line:
AllowOverride FileInfo AuthConfig Limit Indexes
to
AllowOverride All
By default PHP is explicitly turned off in user directories, to enable it edit the php module conf file:
sudo vi /etc/apache2/mods-enabled/php5.conf
and comment out the following lines:
#<IfModule mod_userdir.c>
# <Directory /home/*/public_html>
# php_admin_flag engine Off
# </Directory>
#</IfModule>
Now just restart your Apache server and that's it:
sudo service apache2 restart
You can now create a public_html folder in every user's home with the following script:
#!/bin/bash
FOLDER=public_html
for I in /home/*; do
if [ ! -d $I/$FOLDER ]; then
mkdir -p $I/$FOLDER
U=$(basename $I)
chown $U $I/$FOLDER
chgrp $U $I/$FOLDER
fi
done # for
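If you want to see what the loop does without touching /home or file ownership, here is a dry-run sketch against a throwaway directory tree (the user names are made up):

```shell
# Dry run of the same loop against a temporary tree; no chown/chgrp here,
# so it is safe to run anywhere.
BASE=$(mktemp -d)
mkdir -p "$BASE/alice" "$BASE/bob"
FOLDER=public_html
for I in "$BASE"/*; do
    if [ ! -d "$I/$FOLDER" ]; then
        mkdir -p "$I/$FOLDER"
    fi
done
ls "$BASE/alice"   # → public_html
```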
Now if you want to go further and create dynamic vhosts for each of your users, you can change your default virtual host with something like this:
<VirtualHost *:80>
RewriteEngine on
RewriteMap lowercase int:tolower
# allow CGIs to work
RewriteCond %{REQUEST_URI} !^/cgi-bin/
# check the hostname is right so that the RewriteRule works
RewriteCond ${lowercase:%{SERVER_NAME}} ^[a-z-]+\.example\.com$
# concatenate the virtual host name onto the start of the URI
# the [C] means do the next rewrite on the result of this one
RewriteRule ^(.+) ${lowercase:%{SERVER_NAME}}$1 [C]
# now create the real file name
RewriteRule ^([a-z-]+)\.example\.com/(.*) /home/$1/public_html/$2
<Location / >
Order allow,deny
allow from all
</Location>
DocumentRoot /var/www/
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /var/www/>
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
</Directory>
# define the global CGI directory
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
This will allow you to use user.example.com to access your user's pages.
Possibly Related Posts
Tuesday, 17 June 2014
IPTables debugging
The following command will only show rules that have the action set to DROP or REJECT and omit the rules that didn't have any matches:
watch -n1 "iptables -nvL | grep -i 'DROP\|REJECT' | egrep -v '^\s*0\s*0'"
This one does the same but with some colour highlighting, it will only show rules with matches, the words DROP and REJECT will appear in red and the word ACCEPT will be in green:
watch --color -n1 "iptables -nvL | egrep -v '^\s*0\s*0' | sed 's/\(DROP\|REJECT\)/\x1b[49;31m\1\x1b[0m/g' | sed 's/\(ACCEPT\)/\x1b[49;32m\1\x1b[0m/g'"
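You can see what the zero-match filter keeps by running it on a couple of canned iptables -nvL lines (the counters are made up):

```shell
# The first two columns of iptables -nvL are the packet and byte counters,
# so '^\s*0\s*0' matches rules that never fired.
sample='    0     0 DROP       all  --  *  *  10.0.0.0/8
   12   720 REJECT     tcp  --  *  *  192.168.1.0/24
    0     0 ACCEPT     udp  --  *  *  0.0.0.0/0'
echo "$sample" | grep -E -v '^\s*0\s*0'
# only the REJECT line, with 12 packet matches, survives
```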
Possibly Related Posts
Monday, 16 June 2014
Using the IP command
The command /bin/ip has been around for some time now. But people continue using the older command /sbin/ifconfig. ifconfig won't go away quickly, but its newer version, ip, is more powerful and will eventually replace it.
So here are the basics of the new ip command.
Assign an IP address to a specific interface:
sudo ip addr add 192.168.50.5 dev eth1
Check an IP address:
sudo ip addr show
Remove an IP address:
sudo ip addr del 192.168.50.5/24 dev eth1
Enable a network interface:
sudo ip link set eth1 up
Disable a network interface:
sudo ip link set eth1 down
Check the route table:
sudo ip route show
Add a static route:
sudo ip route add 10.10.20.0/24 via 192.168.50.100 dev eth0
Remove a static route:
sudo ip route del 10.10.20.0/24
Add a default gateway:
sudo ip route add default via 192.168.50.100
Labels:
Command Line,
Linux,
Networking
Possibly Related Posts
Sunday, 15 June 2014
Change Root DN Password on OpenLDAP
First, we need to locate the credentials information of the administrator account in the correct database within the LDAP tree.
This can be done using the ldapsearch command:
ldapsearch -LLL -Y EXTERNAL -H ldapi:/// -b cn=config olcRootDN=cn=admin,dc=example,dc=com dn olcRootDN olcRootPW
(replace the olcRootDN value with the correct value to match your configuration)
This command will return something like:
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
dn: olcDatabase={1}hdb,cn=config
olcRootDN: cn=admin,dc=example,dc=com
olcRootPW: {SHA}ks1xBVfgRXavGCpkPefc9hRHL4X=
There are two interesting pieces of information here:
we need to modify the entry "dn: olcDatabase={1}hdb,cn=config"
the current password is hashed with the SHA1 algorithm.
To generate our new password with the same algorithm we'll use the command slappasswd with the syntax:
slappasswd -h <the hashing scheme we want to use - for example {SHA}>
The system will then prompt you for the new password to use, twice, and will finally display the hashed value we're interested in:
root@testbox:~# slappasswd -h {SHA}
New password:
Re-enter new password:
{SHA}W6ph5Mm7Ps6GglULbPgzG37mj0g=
Then we'll proceed to modify the entry we've identified above using the command:
root@testbox:~# ldapmodify -Y EXTERNAL -H ldapi:///
The system will start the listening mode for modifying commands:
SASL/EXTERNAL authentication started
SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
SASL SSF: 0
First, we enter the entry we want to modify:
dn: olcDatabase={1}hdb,cn=config
Second, we type in the parameter we want to modify:
replace: olcRootPW
Third, we type in the new password generated above (copy and paste is MUCH less error prone than manual typing at this point ;) )
olcRootPW: {SHA}W6ph5Mm7Ps6GglULbPgzG37mj0g=
Hit Enter another time to commit the modification and the following line will appear:
modifying entry "olcDatabase={1}hdb,cn=config"
After this, you can exit the listening mode with CTRL+C and restart the LDAP database service using:
service slapd stop
service slapd start
and log in with the new password.
Saturday, 10 May 2014
Set up GlusterFS with a volume replicated over 2 nodes
Set up the servers:
To install the required packages run on both servers:
sudo apt-get install glusterfs-server
If you want a more up to date version of GlusterFS you can add the following repo:
sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
Now from one of the servers you must connect to the other:
sudo gluster peer probe <ip_of_the_other_server>
You should see the following output:
peer probe: success
You can check the status from any of the hosts with:
sudo gluster peer status
Now we need to create the volume where the data will reside. For this run the following command:
sudo gluster volume create datastore1 replica 2 transport tcp <server1_IP>:/mnt/gfs_block <server2_IP>:/mnt/gfs_block
Where /mnt/gfs_block is the mount point where the data will be on each node and datastore1 is the name of the volume you are creating.
If this has been successful, you should see:
Creation of volume datastore1 has been successful. Please start the volume to access data.
As the message indicates, we now need to start the volume:
sudo gluster volume start datastore1
As a final test, to make sure the volume is available, run:
sudo gluster volume info
Your GlusterFS volume is ready and will maintain replication across two nodes.
If you want to restrict access to the volume, you can use the following command:
sudo gluster volume set datastore1 auth.allow gluster_client1_ip,gluster_client2_ip
If you need to remove the restriction at any point, you can type:
sudo gluster volume set datastore1 auth.allow *
Set up the clients:
Install the needed packages with:
sudo apt-get install glusterfs-client
To mount the volume you must edit the fstab file:
sudo vi /etc/fstab
And append the following to it:
[HOST1]:/[VOLUME] /[MOUNT] glusterfs defaults,_netdev,backupvolfile-server=[HOST2] 0 0
Where [HOST1] is the IP address of one of the servers and [HOST2] is the IP of the other server. [VOLUME] is the volume name, in our case datastore1, and [MOUNT] is the path where you want the files mounted on the client.
Or, you can also mount the volume using a volume config file.
Create a volume config file for your GlusterFS client:
vi /etc/glusterfs/datastore.vol
Give it the following content, replacing [HOST1] with your GlusterFS server 1, [HOST2] with your GlusterFS server 2 and [VOLNAME] with the GlusterFS volume to mount:
volume remote1
type protocol/client
option transport-type tcp
option remote-host [HOST1]
option remote-subvolume [VOLNAME]
end-volume
volume remote2
type protocol/client
option transport-type tcp
option remote-host [HOST2]
option remote-subvolume [VOLNAME]
end-volume
volume replicate
type cluster/replicate
subvolumes remote1 remote2
end-volume
volume writebehind
type performance/write-behind
option window-size 1MB
subvolumes replicate
end-volume
volume cache
type performance/io-cache
option cache-size 512MB
subvolumes writebehind
end-volume
Finally, edit fstab to add this config file and its mount point. Replace [MOUNT] with the location to mount the storage to:
/etc/glusterfs/datastore.vol [MOUNT] glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
Create LVM volume from multiple disks
Recently I had to create an Amazon EC2 instance with a storage capacity of 5Tb. Unfortunately, Amazon only allows us to create 1Tb volumes, so I had to create 5 volumes, attach them to the instance and build a 5Tb LVM device.
My instance was running Ubuntu and I had to install the lvm2 package:
apt-get install lvm2
The volumes attached to my instance were named from /dev/xvdg to /dev/xvdk.
To find the names you can use the command:
fdisk -l
First we have to prepare our volumes for LVM with:
pvcreate /dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj /dev/xvdk
You can run the following command to check the result:
pvdisplay
The next step is to create a volume group. I used the command:
vgcreate storage /dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj /dev/xvdk
And used the command:
vgdisplay
to check the result. You can also use:
vgscan
Now we need to create the logical volume. In this case I wanted to use the entire available space, so I used the command:
lvcreate -n data -l 100%FREE storage
And:
lvdisplay
to check the new volume. If everything goes well it should be on /dev/storage/data.
You can also use the command:
lvscan
Now you just have to format the new device. You can use:
mkfs -t ext4 /dev/storage/data
When ready you can mount it with:
mount /dev/storage/data /mnt
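To make the mount persistent across reboots you can add it to /etc/fstab — a sketch, assuming the /mnt mount point used above (nofail keeps the instance booting even if a volume is missing):

```
# /etc/fstab (fragment) - assumes the LVM device and mount point above
/dev/storage/data /mnt ext4 defaults,nofail 0 2
```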
Saturday, 8 February 2014
Create keystore from certificates
I had a wildcard certificate that had already been used previously on a few Apache servers, so I had already generated a CSR.
To generate a new keystore from the existing certificates I used the following commands:
Create a pkcs12 keystore from the certificate using openssl:
openssl pkcs12 -export -in star_domain_com.crt -inkey star_domain_com.key -certfile DigiCertCA.crt -out keystore.p12
Convert the pkcs12 keystore into a jks keystore:
keytool -importkeystore -srckeystore keystore.p12 -destkeystore keystore -srcstoretype pkcs12
You can use the following command to check your keystore contents:
keytool -list -keystore keystore
Usually your certificate will be stored under the alias 1. You might want to change that to tomcat; use the command:
keytool -changealias -alias 1 -destalias tomcat -keystore keystore
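If you want to rehearse this conversion without touching a production certificate, you can generate a throwaway self-signed pair first — a sketch with hypothetical file names and no CA chain, so the -certfile option is omitted:

```shell
# Throwaway self-signed key and certificate (hypothetical names;
# a real setup would use your CA-issued files instead)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=*.example.com" \
  -keyout star_example_com.key -out star_example_com.crt

# Build the pkcs12 keystore (no -certfile, as there is no CA chain here)
openssl pkcs12 -export -in star_example_com.crt -inkey star_example_com.key \
  -passout pass:changeit -out keystore.p12

# Inspect the result
openssl pkcs12 -in keystore.p12 -passin pass:changeit -info -noout
```

Once this works, the same keytool -importkeystore step applies unchanged to the practice keystore.p12.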