Showing posts with label Linux. Show all posts
Monday, 9 February 2015
One line web server
The following one-line script will create a web server running on port 8080 using nc (netcat):
while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; cat index.html; } | nc -l 8080; done
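What the inner braces actually emit is a raw HTTP response: a status line, a blank line, then the page body. A minimal sketch of just that part, with the page content inlined so it runs without nc or a browser (the temp file name is a stand-in for index.html):

```shell
# Build the exact byte stream the one-liner hands to nc:
# status line, CRLF blank line, then the file contents.
printf 'hello\n' > /tmp/demo-index.html
{ printf 'HTTP/1.1 200 OK\r\n\r\n'; cat /tmp/demo-index.html; }
rm -f /tmp/demo-index.html
```

Note that some netcat flavours spell the listen option differently (e.g. `nc -l -p 8080` for traditional netcat), so check your nc's man page if the one-liner refuses to listen.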
Labels:
Command Line,
Linux,
netcat,
Scripting,
webserver
Sunday, 7 September 2014
Purge Removed packages
Packages marked as rc by dpkg mean that the configuration files are not yet removed. The following command will purge them:
dpkg --list |grep "^rc" | cut -d " " -f 3 | xargs -r sudo dpkg --purge
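You can see what the pipeline extracts before handing anything to dpkg --purge by running the parsing part alone against sample output (the package names below are made up); note that `awk '{print $2}'` is a more robust alternative to `cut -d " " -f 3`, since it doesn't depend on the exact column spacing:

```shell
# Sample dpkg --list output: one removed-but-not-purged package (rc)
# and one fully installed package (ii). Only the rc name survives.
printf 'rc  oldpkg   1.0  amd64  config files remain\nii  goodpkg  2.0  amd64  installed\n' |
  grep "^rc" | awk '{print $2}'
```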
Friday, 25 July 2014
Removing hosts from backuppc
Simply remove the host from the web interface and rm -rf its pc/<host> directory, then wait for the next BackupPC_nightly run - it will remove all superfluous files from the pools.
The path for this directory usually is:
/var/lib/backuppc/pc/<hostname>
If you want to force the clean-up process, you can remove your host like this:
1. Log in to the BackupPC server
2. Remove the host in the BackupPC web interface (under hosts)
3. Remove its directory:
rm -rf /var/lib/backuppc/pc/<hostname>
4. Shut down BackupPC:
service backuppc stop
5. Change to the backuppc user:
su - backuppc
6. Run the nightly script:
/usr/share/BackupPC/bin/BackupPC_nightly 0 255
7. Go back to root:
exit
8. Start BackupPC again:
service backuppc start
Labels:
backuppc,
Backups,
Command Line,
Linux
Wednesday, 23 July 2014
Sorry, Command-not-found Has Crashed
When you try to execute a command that is not installed, Ubuntu tries to hint you at the package you should install, but sometimes, especially after an upgrade, you get an error message saying:
Sorry, command-not-found has crashed! Please file a bug report at:
(...)
This solves the problem:
export LANGUAGE=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
locale-gen en_US.UTF-8
sudo dpkg-reconfigure locales
Labels:
Command Line,
Linux,
Ubuntu
Ubuntu as Time Machine server
This guide will help you to install and configure the netatalk service on an Ubuntu server so it can function as a Time Machine backup server for your Mac OS X machines.
First install the necessary packages:
sudo apt-get install netatalk avahi-daemon libnss-mdns
Open the netatalk default configuration file:
sudo vi /etc/default/netatalk
Modify the lines:
ATALKD_RUN=yes
PAPD_RUN=no
CNID_METAD_RUN=yes
AFPD_RUN=yes
TIMELORD_RUN=no
A2BOOT_RUN=no
Edit the atalkd.conf file:
sudo vi /etc/netatalk/atalkd.conf
and add to the bottom:
eth0
Edit the AppleVolumes.default file:
sudo vi /etc/netatalk/AppleVolumes.default
and add to the bottom:
/backups/timemachine "Time Machine" allow:@admin cnidscheme:cdb volsizelimit:200000 options:usedots,upriv,tm
The example above also limits the size shown to OS X to 200 GB (the number is given in MiB, so it's 200,000 times 1024 in the real world).
Edit the afpd configuration file:
sudo vi /etc/netatalk/afpd.conf
and add to the bottom:
- -transall -uamlist uams_dhx.so,uams_dhx2.so -nosavepassword -advertise_ssh -mimicmodel TimeCapsule6,106 -setuplog "default log_warn /var/log/afpd.log"
Create a configuration file for the avahi afpd discovery:
sudo vi /etc/avahi/services/afpd.service
and enter the following into it:
<?xml version="1.0" standalone='no'?><!--*-nxml-*-->
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
<name replace-wildcards="yes">%h</name>
<service>
<type>_afpovertcp._tcp</type>
<port>548</port>
</service>
<service>
<type>_device-info._tcp</type>
<port>0</port>
<txt-record>model=MacPro</txt-record>
</service>
</service-group>
Restart the services:
sudo service netatalk restart
sudo service avahi-daemon restart
Labels:
Backups,
Command Line,
Linux,
Mac OS X,
Time Machine
Thursday, 10 July 2014
Finding external IP using the command line
The easiest way is to use an external service via a command-line browser or download tool. Since wget is available by default in Ubuntu, we can use that.
To find your ip, use:
wget -qO- http://ipecho.net/plain ; echo
You can do the same using curl:
curl ipecho.net/plain ; echo
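Whichever service you use, in a script it's worth sanity-checking that what came back actually looks like an address (a service outage can hand you an HTML error page instead). This helper is my own sketch, not part of either tool:

```shell
# Returns success only when the argument is four dot-separated digit groups.
is_ipv4() {
  printf '%s\n' "$1" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}$'
}

is_ipv4 "203.0.113.7" && echo "looks like an IP"
is_ipv4 "service error page" || echo "not an IP"
```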
Labels:
Command Line,
Linux,
Networking
Wednesday, 9 July 2014
How to test a listening TCP/UDP port through nc
Netcat (nc) can be used for a lot of purposes; among them, it makes a very fast basic port scanner, and you can scan a single port or a range.
To scan a range of UDP ports 1 to 1000:
nc -vzu destination_ip 1-1000
To scan a range of TCP ports 1 to 1000
nc -vz destination_ip 1-1000
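If nc isn't installed, bash can do a rough single-port TCP check with its built-in /dev/tcp device. This is a sketch of an equivalent, not an nc feature, and it's bash-specific:

```shell
# Succeeds and prints "open" if a TCP connection to host:port can be
# opened, prints "closed" otherwise. The fd closes with the subshell.
check_port() {
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then echo open; else echo closed; fi
}

check_port 127.0.0.1 1   # port 1 is almost never listening
```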
Labels:
Command Line,
Linux,
Networking
Thursday, 26 June 2014
Install Vsftpd on Ubuntu
In my last post I talked about enabling the userdir module on Apache. You can use vsftpd to give your users FTP access to their own pages; this is how you can install it:
aptitude -y install vsftpd
Then edit its configuration file:
vi /etc/vsftpd.conf
And make the following changes:
# line 29: uncomment
write_enable=YES
# line 97,98: uncomment ( allow ascii mode transfer )
ascii_upload_enable=YES
ascii_download_enable=YES
# line 120: uncomment ( enable chroot )
chroot_local_user=YES
# line 121: uncomment ( enable chroot list )
chroot_list_enable=YES
# line 123: uncomment ( enable chroot list )
chroot_list_file=/etc/vsftpd.chroot_list
# line 129: uncomment
ls_recurse_enable=YES
# add at the last line
# specify root directory ( if don't specify, users' home directory equals FTP home directory)
#local_root=public_html
# turn off seccomp filter
seccomp_sandbox=NO
Edit the list of users that can access your server.
vi /etc/vsftpd.chroot_list
Add the users that should be exempt from the chroot, i.e. allowed to move outside their home directory.
Finally restart the FTP service:
service vsftpd restart
Enable userdir Apache module on Ubuntu
First activate the module:
sudo a2enmod userdir
now edit the module's conf file:
sudo vi /etc/apache2/mods-enabled/userdir.conf
and change the line:
AllowOverride FileInfo AuthConfig Limit Indexes
to
AllowOverride All
By default PHP is explicitly turned off in user directories, to enable it edit the php module conf file:
sudo vi /etc/apache2/mods-enabled/php5.conf
and comment out the following lines:
#<IfModule mod_userdir.c>
# <Directory /home/*/public_html>
#php_admin_flag engine Off
# </Directory>
#</IfModule>
Now just restart your Apache server and that's it:
sudo service apache2 restart
You can now create a public_html folder on every users homes with the following script:
#!/bin/bash
FOLDER="public_html"
for I in /home/*; do
if [ ! -d "$I/$FOLDER" ]; then
mkdir -p "$I/$FOLDER"
U=$(basename "$I")
chown "$U" "$I/$FOLDER"
chgrp "$U" "$I/$FOLDER"
fi
done # for
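You can dry-run the same loop logic against a temporary directory instead of /home to see what it creates (the user names here are invented, and the ownership steps are skipped since they need root):

```shell
# Simulate two home directories and run the creation loop over them.
BASE=$(mktemp -d)
FOLDER="public_html"
mkdir -p "$BASE/alice" "$BASE/bob"
for I in "$BASE"/*; do
  [ -d "$I/$FOLDER" ] || mkdir -p "$I/$FOLDER"
done
ls "$BASE/alice"
rm -rf "$BASE"
```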
Now if you want to go further and create a dynamic vhost for each of your users, you can change your default virtual host to something like this:
<VirtualHost *:80>
RewriteEngine on
RewriteMap lowercase int:tolower
# allow CGIs to work
RewriteCond %{REQUEST_URI} !^/cgi-bin/
# check the hostname is right so that the RewriteRule works
RewriteCond ${lowercase:%{SERVER_NAME}} ^[a-z-]+\.example\.com$
# concatenate the virtual host name onto the start of the URI
# the [C] means do the next rewrite on the result of this one
RewriteRule ^(.+) ${lowercase:%{SERVER_NAME}}$1 [C]
# now create the real file name
RewriteRule ^([a-z-]+)\.example\.com/(.*) /home/$1/public_html/$2
<Location / >
Order allow,deny
allow from all
</Location>
DocumentRoot /var/www/
<Directory />
Options FollowSymLinks
AllowOverride None
</Directory>
<Directory /var/www/>
Options Indexes FollowSymLinks MultiViews
AllowOverride None
Order allow,deny
allow from all
</Directory>
# define the global CGI directory
ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
<Directory "/usr/lib/cgi-bin">
AllowOverride None
Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
Order allow,deny
Allow from all
</Directory>
</VirtualHost>
This will allow you to use user.example.com to access your user's pages.
Monday, 16 June 2014
Using the IP command
The command /bin/ip has been around for some time now. But people continue using the older command /sbin/ifconfig. ifconfig won't go away quickly, but its newer version, ip, is more powerful and will eventually replace it.
So here are the basics of the ip command.
Assign an IP address to a specific interface:
sudo ip addr add 192.168.50.5 dev eth1
Check an IP address:
sudo ip addr show
Remove an IP address:
sudo ip addr del 192.168.50.5/24 dev eth1
Enable a network interface:
sudo ip link set eth1 up
Disable a network interface:
sudo ip link set eth1 down
Check the route table:
sudo ip route show
Add a static route:
sudo ip route add 10.10.20.0/24 via 192.168.50.100 dev eth0
Remove a static route:
sudo ip route del 10.10.20.0/24
Add a default gateway:
sudo ip route add default via 192.168.50.100
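The output of these commands is easy to post-process in scripts. For example, extracting the default gateway from `ip route show` output (the route lines below are sample text piped in, so the sketch runs anywhere):

```shell
# Pick the gateway field out of a routing table dump:
# the "default" line's third word is the gateway address.
printf 'default via 192.168.50.100 dev eth0\n10.10.20.0/24 via 192.168.50.100 dev eth0\n' |
  awk '/^default/ {print $3}'
```

On a real system you would pipe `ip route show` itself into the awk command.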
Labels:
Command Line,
Linux,
Networking
Saturday, 10 May 2014
Set up GlusterFS with a volume replicated over 2 nodes
The servers setup:
To install the required packages run on both servers:
sudo apt-get install glusterfs-server
If you want a more up-to-date version of GlusterFS you can add the following repo:
sudo add-apt-repository ppa:semiosis/ubuntu-glusterfs-3.4
Now from one of the servers you must connect to the other:
sudo gluster peer probe <ip_of_the_other_server>
You should see the following output:
peer probe: success
You can check the status from any of the hosts with:
sudo gluster peer status
Now we need to create the volume where the data will reside. For this run the following command:
sudo gluster volume create datastore1 replica 2 transport tcp <server1_IP>:/mnt/gfs_block <server2_IP>:/mnt/gfs_block
Where /mnt/gfs_block is the mount point where the data will be on each node and datastore1 is the name of the volume you are creating.
If this has been successful, you should see:
Creation of volume datastore1 has been successful. Please start the volume to access data.
As the message indicates, we now need to start the volume:
sudo gluster volume start datastore1
As a final test, to make sure the volume is available, run:
sudo gluster volume info
Your GlusterFS volume is ready and will maintain replication across two nodes.
If you want to restrict access to the volume, you can use the following command:
sudo gluster volume set datastore1 auth.allow gluster_client1_ip,gluster_client2_ip
If you need to remove the restriction at any point, you can type:
sudo gluster volume set datastore1 auth.allow *
Setup the clients:
Install the needed packages with:
sudo apt-get install glusterfs-client
To mount the volume you must edit the fstab file:
sudo vi /etc/fstab
And append the following to it:
[HOST1]:/[VOLUME] /[MOUNT] glusterfs defaults,_netdev,backupvolfile-server=[HOST2] 0 0
Where [HOST1] is the IP address of one of the servers and [HOST2] is the IP of the other server. [VOLUME] is the volume name, in our case datastore1, and [MOUNT] is the path where you want the files on the client.
Or, you can also mount the volume using a volume config file.
Create a volume config file for your GlusterFS client:
vi /etc/glusterfs/datastore.vol
Create the above file and replace [HOST1] with your GlusterFS server 1, [HOST2] with your GlusterFS server 2 and [VOLNAME] with the GlusterFS volume to mount:
volume remote1
type protocol/client
option transport-type tcp
option remote-host [HOST1]
option remote-subvolume [VOLNAME]
end-volume
volume remote2
type protocol/client
option transport-type tcp
option remote-host [HOST2]
option remote-subvolume [VOLNAME]
end-volume
volume replicate
type cluster/replicate
subvolumes remote1 remote2
end-volume
volume writebehind
type performance/write-behind
option window-size 1MB
subvolumes replicate
end-volume
volume cache
type performance/io-cache
option cache-size 512MB
subvolumes writebehind
end-volume
Finally, edit fstab to add this config file and its mount point. Replace [MOUNT] with the location to mount the storage to:
/etc/glusterfs/datastore.vol [MOUNT] glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
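In scripts it helps to confirm the volume is actually started before trying to mount it. A sketch of parsing `gluster volume info` output (the lines below are sample text, so this runs anywhere):

```shell
# Extract the Status field from volume info output; "Started" means
# the volume is ready to be mounted by clients.
printf 'Volume Name: datastore1\nType: Replicate\nStatus: Started\n' |
  awk -F': ' '/^Status/ {print $2}'
```

On a real node you would pipe `sudo gluster volume info datastore1` into the awk command instead.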
Create LVM volume from multiple disks
Recently I had to create an Amazon EC2 instance with a storage capacity of 5 TB. Unfortunately, Amazon only allows us to create 1 TB volumes, so I had to create 5 volumes, attach them to the instance and create a 5 TB LVM device.
My instance was running Ubuntu and I had to install the lvm2 package:
apt-get install lvm2
The volumes attached to my instance were named /dev/xvdg to /dev/xvdk;
to find the names you can use the command:
fdisk -l
First we have to prepare our volumes for LVM with:
pvcreate /dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj /dev/xvdk
You can run the following command to check the result:
pvdisplay
The next step is to create a volume group, I used the command:
vgcreate storage /dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj /dev/xvdk
And used the command:
vgdisplay
to check the result, you can also use:
vgscan
Now we need to create the logical volume. In this case I wanted to use the entire available space, so I used the command:
lvcreate -n data -l 100%FREE storage
And
lvdisplay
to check the new volume; if everything goes well it should be on /dev/storage/data
you can also use the command
lvscan
Now you just have to format the new device, you can use:
mkfs -t ext4 /dev/storage/data
When ready you can mount it with:
mount /dev/storage/data /mnt
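The whole procedure collected into one script sketch, using the same device names as above. `RUN=echo` makes it a harmless dry run that only prints the commands; clear RUN (set it empty) to execute for real, as root, on a system where these devices are actually attached:

```shell
#!/bin/sh
# Dry-run by default: every command is prefixed with "echo".
RUN=echo
DISKS="/dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj /dev/xvdk"
$RUN pvcreate $DISKS                       # prepare the volumes for LVM
$RUN vgcreate storage $DISKS               # create the volume group
$RUN lvcreate -n data -l 100%FREE storage  # one LV over all free space
$RUN mkfs -t ext4 /dev/storage/data        # format the new device
$RUN mount /dev/storage/data /mnt          # and mount it
```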
Thursday, 14 March 2013
Disable IPv6 on Ubuntu
Edit your /etc/sysctl.conf file and add the following to the bottom:
#disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
Or you can use the following script:
echo "#disable ipv6" | sudo tee -a /etc/sysctl.conf
echo "net.ipv6.conf.all.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
For these changes to take effect you must reboot your system.
After rebooting you can check if IPv6 has been disabled with the following command:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
0 means it's enabled and 1 means it's disabled.
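Since 0 means enabled and 1 means disabled, the check reads more naturally through a tiny helper (my own sketch):

```shell
# Translate the sysctl value into words: 1 = disabled, anything else = enabled.
ipv6_status() { [ "$1" = "1" ] && echo "IPv6 disabled" || echo "IPv6 enabled"; }

# Falls back to 0 (enabled) if the proc file doesn't exist.
ipv6_status "$(cat /proc/sys/net/ipv6/conf/all/disable_ipv6 2>/dev/null || echo 0)"
```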
Labels:
Linux,
Networking,
Ubuntu
Linux Stress tests
Consume CPU:
Fork bomb:
:(){ :|:& };:
The next one will load four CPU cores at 100%:
for i in `seq 1 4` ; do while : ; do : ; done & done
Or:
for i in `seq 1 4` ; do cat /dev/zero > /dev/null & done
Or:
#!/bin/bash
duration=120 # seconds
instances=4 # cpus
endtime=$(($(date +%s) + $duration))
for ((i=0; i<instances; i++))
do
while (($(date +%s) < $endtime)); do : ; done &
done
Or using the stress tool:
stress --cpu 3
Consume RAM:
Create a 30 GB ramdisk and fill it with a file full of zeroes:
sudo mount -t tmpfs -o size=30G tmpfs /mnt
dd if=/dev/zero of=/mnt/tmp bs=10240 count=30720MB
Create a giant variable:
x="x" ; while : ; do x=$x$x ; echo -n "." ; done
Consume Disk:
dd if=/dev/zero of=bigfile bs=10240 count=30720MB
Simulate packet loss:
For randomly dropping 10% of incoming packets:
iptables -A INPUT -m statistic --mode random --probability 0.1 -j DROP
and for dropping 10% of outgoing packets:
iptables -A OUTPUT -m statistic --mode random --probability 0.1 -j DROP
Labels:
Command Line,
Linux,
testing
Friday, 11 January 2013
Calculating total disk usage by files with specific extension
For example if you want to check how much space is being used by log files on your entire system, you can use the following:
find / -type f -name "*.log*" -exec du -b {} \; | awk '{ sum += $1 } END { kb = sum / 1024; mb = kb / 1024; gb = mb / 1024; printf "%.0f MB (%.2f GB) disk space used\n", mb, gb }'
Just replace "*.log*" with the file extension you want to search for and the above will give you the disk space used by the sum of all the files with that extension.
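The awk summation can be checked in isolation by feeding it sample `du -b` output instead of running find over the whole filesystem (two files of 1 MiB and 2 MiB here):

```shell
# 1048576 + 2097152 bytes = 3 MiB; the awk body is the same summation
# used above, minus the GB column for brevity.
printf '1048576\t/var/log/a.log\n2097152\t/var/log/b.log\n' |
  awk '{ sum += $1 } END { printf "%.0f MB\n", sum / 1024 / 1024 }'
```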
Labels:
Command Line,
Linux,
Scripting
Thursday, 29 November 2012
File rotator script
This is a script that I use to rotate some logs, the commented lines will tell you what it does exactly:
#!/bin/sh
#-----------------------------------------------------------------------
# FILE ROTATOR SCRIPT
#
# The purpose of this script is to rotate, compress and delete files
# - Files older than ARC_AGE are gzipped and rotated
# - Files bigger than SIZE_LIM are gzipped and rotated
# - Gzipped files older than DEL_AGE are deleted
#
#-----------------------------------------------------------------------
# Vars
DATE=`date +%F"-"%H:%M`
FILEDIR="/storage/logs/"
DEL_AGE="30"
ARC_AGE="1"
SIZE_LIM="20M"
# Diagnostics
echo "-= Rotation starting =-"
echo " Directory to search: $FILEDIR"
echo " File age to check for deletion: $DEL_AGE"
echo " File age to check for archive: $ARC_AGE"
echo " File size to check for archive: $SIZE_LIM"
echo " "
# Compress all uncompressed files whose last modification occurred more than ARC_AGE days ago
echo "-= Looking for old files =-"
FILES=`find $FILEDIR -type f -mtime +$ARC_AGE -not \( -name '*.gz' \) -print`
echo "Files to be archived:"
echo $FILES
echo " "
for FILE in $FILES; do
# Compress but keep the original file
gzip -9 -c "$FILE" > "$FILE".$DATE.gz;
# Check if the file is being used:
lsof $FILE
ACTIVE=$?
# Delete inactive files, truncate if active
if [ $ACTIVE != 0 ]; then
# Delete the file
rm "$FILE";
else
# Truncate file to 0
:>"$FILE";
fi
done
# Compress all uncompressed files that are bigger than SIZE_LIM
echo "-= Looking for big files =-"
FILES=`find $FILEDIR -type f -size +$SIZE_LIM -not \( -name '*.gz' \) -print`
echo "Files to be archived:"
echo $FILES
echo " "
for FILE in $FILES; do
# Compress but keep the original file
gzip -9 -c "$FILE" > "$FILE".$DATE.gz;
# Truncate original file to 0
:>"$FILE";
done
echo "-= Deleting old archived files =-"
FILES_OLD=`find $FILEDIR -type f -mtime +$DEL_AGE -name '*.gz' -print`
echo "Archived files older than $DEL_AGE days to be deleted:"
echo $FILES_OLD
echo " "
# Deletes old archived files.
find $FILEDIR -type f -mtime +$DEL_AGE -name '*.gz' -exec rm -f {} \;
echo "-= Rotation completed =-"
echo " "
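The compress-then-truncate core of the script can be exercised safely on a temp file, without touching real logs:

```shell
# Compress a copy, then truncate the original to zero bytes - exactly
# what the script does for files that are still held open by a process.
F=$(mktemp)
echo "a log line" > "$F"
gzip -9 -c "$F" > "$F.gz"   # compress, keeping the original
: > "$F"                    # truncate the original to 0
wc -c < "$F"                # the original is now empty
rm -f "$F" "$F.gz"
```

Truncating (rather than deleting) matters for open files: a process writing to a deleted file keeps the disk space allocated until it closes the descriptor.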
Monday, 10 September 2012
MySQL Export to CSV
If you need the data from a table or a query in a CSV file so that you can open it in any spreadsheet software, like Excel, you can use something like the following:
SELECT id, name, email INTO OUTFILE '/tmp/result.csv'
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
FROM users WHERE 1
Or you can use sed:
mysql -u username -ppassword database -B -e "SELECT * FROM table;" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > filename.csv
Explanation:
username is your mysql username
password is your mysql password
database is your mysql database
table is the table you want to export
The -B option will delimit the data using tabs and each row will appear on a new line.
The -e option denotes the MySQL command to run, in our case the "SELECT" statement.
The "sed" command used here contains four sed scripts:
s/\t/","/g - this will search for all occurrences of tabs and replace them with ",".
s/^/"/ - this will place a " at the start of the line.
s/$/"/ - this will place a " at the end of the line.
s/\n//g - this will remove any remaining newline characters.
You can find the exported CSV file in the current directory. The name of the file is filename.csv.
However if there are a lot of tables that you need to export, you'll need a script like this:
#!/bin/bash
#### Begin Configuration ####
DB="mydb"
MYSQL_USER="root"
MYSQL_PASSWD='mypass'
MYSQL_HOST="127.0.0.1"
MYSQL_PORT="3306"
MYSQL="/usr/bin/mysql"
#### End Configuration ####
MYSQL_CMD="$MYSQL -u $MYSQL_USER -p$MYSQL_PASSWD -P $MYSQL_PORT -h $MYSQL_HOST"
TABLES=`$MYSQL_CMD --batch -N -D $DB -e "show tables"`
for TABLE in $TABLES
do
SQL="SELECT * FROM $TABLE;"
OUTFILE=$TABLE.csv
$MYSQL_CMD --database=$DB --execute="$SQL" | sed 's/\t/","/g;s/^/"/;s/$/"/;s/\n//g' > $OUTFILE
done
Just be sure to change the configuration section to meet your needs.
Name the file something like export_csv.sh and be sure to make it executable. In Linux, do something like:
chmod +x ./export_csv.sh
If you want to have all of the exported files in a certain directory, you could either modify the script or just make the directory, "cd" into it, and then run the script. It assumes you want to create the files in the current working directory.
To change that behavior, you could easily modify the "OUTFILE" variable to something like:
OUTFILE="/my_path/$TABLE.csv"
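The sed quoting can be verified without a database by piping a sample tab-separated row through it (GNU sed is assumed for the \t escape; BSD sed needs a literal tab instead):

```shell
# One tab-separated row, quoted the same way the export command does it:
# tabs become "," and the whole line is wrapped in double quotes.
printf '1\tJohn\tjohn@example.com\n' | sed 's/\t/","/g;s/^/"/;s/$/"/'
```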
Friday, 7 September 2012
How to set the timezone on Ubuntu Server
You can check your current timezone by just running:
$ date
Mon Sep 3 18:03:04 WEST 2012
Or by checking the timezone file:
$ cat /etc/timezone
Europe/Lisbon
So to change it just run:
$ sudo dpkg-reconfigure tzdata
And follow the on-screen instructions. Easy.
Also be sure to restart cron, as it won't pick up the timezone change and will still be running on UTC:
$ /etc/init.d/cron stop
$ /etc/init.d/cron start
You might also want to install ntp to keep the correct time:
aptitude install ntp
Tuesday, 4 September 2012
Anti-Spam Email server
In this post I'll show you how to install an anti-spam smart host relay server, based on Ubuntu 12.04 LTS, that will include:
Postfix w/Bayesian Filtering and Anti-Backscatter (Relay Recipients via look-ahead), Apache2, Mysql, Dnsmasq, MailScanner (Spamassassin, ClamAV, Pyzor, Razor, DCC-Client), Baruwa, SPF Checks, FuzzyOcr, Sanesecurity Signatures, PostGrey, KAM, Scamnailer, FireHOL (Iptables Firewall) and Relay Recipients Script.
Continue reading for the instructions.
Labels:
baruwa,
email,
firehole,
Linux,
mailscanner,
Postfix,
PostGrey,
spamassassin,
Ubuntu
Thursday, 30 August 2012
Show the 20 most CPU/Memory hungry processes
Display the top 20 running processes sorted by memory usage. ps returns all running processes, which are then sorted by the 4th field in numerical order, and the top 20 are sent to STDOUT:
ps aux | sort -nk +4 | tail -20
This command will show the 20 processes using the most CPU time (hungriest at the bottom):
ps aux | sort -nk +3 | tail -20
Or, run both:
echo "CPU:" && ps aux | sort -nk +3 | tail -20 && echo "Memory:" && ps aux | sort -nk +4 | tail -20
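Note that `-nk +4` relies on sort's old +field key syntax; the modern, unambiguous spelling is `-k4,4n`. The column sort can be checked on sample ps-like rows:

```shell
# Three rows sorted numerically by the 4th (memory) column, highest last,
# mimicking how the ps pipeline above ranks processes.
printf 'a 1 0.5 2.0\nb 2 9.0 1.0\nc 3 0.1 5.0\n' | sort -k4,4n | tail -1
```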
Labels:
Command Line,
Linux