BASH, read first column in file in loop - bash

Here is what I am trying to do. I would like to catch the top 10 CPU-consuming PIDs and find the program name, then display the program name and %CPU in a file.
CPU_per=$(sar 1 1 | tail -1 | awk '{print 100 - $5}')
echo $CPU_per
if [ $CPU_per -gt 80 ]
(prstat -u user -n 900 0 1 | grep Type | head -n 10 | awk '{print $1 " " $9}') >> /tmp/PID
for i in $(cat /tmp/PID)
do
(awk '{print $1 } | ps -p $PID -o args | tail -1 | cut -d \ -f 2)
I would like the output to look like:
Process %CPU
Program1 5%
Program2 9%
Program3 12%

Like this?
echo -e "COMMAND\t\t%CPU"; ps -eo "%c %C%%" --sort pcpu | tail -n10
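If you want to keep the two-step approach from the question (dump PID and CPU pairs to a file, then resolve each PID to a program name), a minimal sketch of that loop could look like the following. The /tmp/PID layout is the one assumed in the question, and ps -p PID -o comm= is used here to fetch the command name; adjust for your platform if needed.
#!/bin/bash
# Sketch only: assumes /tmp/PID holds "PID CPU" pairs, one per line,
# as produced by the prstat pipeline in the question.
printf 'Process %%CPU\n'
while read -r pid cpu; do
    prog=$(ps -p "$pid" -o comm= 2>/dev/null)   # resolve the PID to its command name
    [ -n "$prog" ] && printf '%s %s\n' "$prog" "$cpu"
done < /tmp/PID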

Related

bash send mail when threshold is exceeded in three successive runs

I have a bash script that does a pretty decent job of reporting CPU levels above 95%. The issue I am running into is that it will report even on "spikes". This script runs every 10 minutes and checks all of my servers. Is there a way to only report if a server stays above 95% for 3 iterations, i.e. after the 3rd time it runs (30 min)?
12:00 - 1st report - 98%
12:10 - 2nd report - 99%
12:20 - 3rd report - 98% (now alert the admin)
Here is the relevant section of the script:
for sn in $(cat /tmp/hosts |grep -v "#"); do
cpuuse=$(ssh -qn -o ConnectTimeout=15 -oStrictHostKeyChecking=no -o BatchMode=yes $sn "top -b -n2 -p 1 | fgrep \"Cpu(s)\" | tail -1 | awk -F'id,' -v prefix=\"\$prefix\" '{ split(\$1, vs, \",\"); v=vs[length(vs)]; sub(\"%\", \"\", v); printf \"%s%.1f%%\n\", prefix, 100 - v }' | rev | cut -c 4- | rev")
if [[ "$cpuuse" -ge 95 ]]; then
echo "CPU Alert!! $sn CPU is high - $cpuuse%" | mailx -s "CPU Alert on $sn" admin@sample.com
fi
done
AFAIK, there isn't really a bash trick for this. You just need to store a counter somewhere. Something like this could do the trick:
for sn in $(cat /tmp/hosts | grep -v "#"); do
  cpuuse=$(ssh -qn -o ConnectTimeout=15 -oStrictHostKeyChecking=no -o BatchMode=yes $sn "top -b -n2 -p 1 | fgrep \"Cpu(s)\" | tail -1 | awk -F'id,' -v prefix=\"\$prefix\" '{ split(\$1, vs, \",\"); v=vs[length(vs)]; sub(\"%\", \"\", v); printf \"%s%.1f%%\n\", prefix, 100 - v }' | rev | cut -c 4- | rev")
  counter_file=/tmp/my-counter-file-$sn                 # separate counter file for each server
  if [[ "$cpuuse" -ge 95 ]]; then
    date >> "$counter_file"                             # just add a line to the counter file
    if [[ $(wc -l < "$counter_file") -ge 3 ]]; then     # wc -l < file prints only the count
      echo "CPU Alert!! $sn CPU is high - $cpuuse%" | mailx -s "CPU Alert on $sn" admin@sample.com
      rm -f "$counter_file"                             # message was sent, reset counter
    fi
  else
    rm -f "$counter_file"                               # below limit, reset counter
  fi
done
The trick here is to store a counter in a file. The number of lines in the file is your counter value.
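A variation on the same idea (not from the original answer) is to keep a single integer in the file instead of one line per hit. A minimal sketch, reusing the $counter_file, $sn and $cpuuse variables from the loop above:
# Sketch: store an integer counter instead of counting lines
count=$(cat "$counter_file" 2>/dev/null || echo 0)   # 0 if the counter file doesn't exist yet
count=$((count + 1))
if (( count >= 3 )); then
    echo "CPU Alert!! $sn CPU is high - $cpuuse%" | mailx -s "CPU Alert on $sn" admin@sample.com
    rm -f "$counter_file"                            # alert sent, reset the counter
else
    echo "$count" > "$counter_file"
fi
Either way the state lives in a small file per server, so the 3-in-a-row check survives between cron runs.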

Running Shell Script having multiple programs dynamically in parallel

I have a shell script which captures the process ID, CPU and memory of the JVM every n seconds and writes the output to a file. Below is my code:
JVM="aaa001_bcdefx01"
systime=$(date +"%m-%d-%y-%T")
for i in {1..10}
do
PID=`ps -auxwww | grep "jdk" | grep $JVM | grep -v grep | cut -c -30 | awk '{print $2}'`
MEM=`ps -auxwww | grep "jdk" | grep $JVM | grep -v grep | cut -c -30 | awk '{print $4 }'`
CPU=`ps -auxwww | grep "jdk" | grep $JVM | grep -v grep | cut -c -30 | awk '{print $3 }'`
printf '%-5s %-20s %-20s %-20s %-20s \n' "$systime $JVM $PID $CPU $MEM " >> $LOGFILE
sleep 5
done
This runs perfectly fine when I have only one JVM on that server. How can I execute the same script in parallel and fetch the details if I have multiple JVMs on a server?
I looked for some solutions and found that & can be used in the script, but I couldn't understand how it can be applied to my script above. Let's say I have 5 JVMs. How can I run the script and fetch the stats for all of the JVMs below in parallel? Kindly guide. Any help would be appreciated.
JVM="aaa001_bcdefx01"
JVM="aaa002_bcdefx01"
JVM="aaa003_bcdefx01"
JVM="aaa004_bcdefx01"
JVM="aaa005_bcdefx01"
GNU Parallel is made for this kind of stuff:
doit() {
JVM="$1"
systime=$(date +"%m-%d-%y-%T")
for i in {1..10}
do
PID=`ps -auxwww | grep "jdk" | grep $JVM | grep -v grep | cut -c -30 | awk '{print $2}'`
MEM=`ps -auxwww | grep "jdk" | grep $JVM | grep -v grep | cut -c -30 | awk '{print $4 }'`
CPU=`ps -auxwww | grep "jdk" | grep $JVM | grep -v grep | cut -c -30 | awk '{print $3 }'`
printf '%-5s %-20s %-20s %-20s %-20s \n' "$systime $JVM $PID $CPU $MEM "
sleep 5
done
}
export -f doit
parallel -j0 --linebuffer --tag doit ::: aaa00{1..5}_bcdefx01 >> $LOGFILE
The function is basically your code. The change is that it takes the JVM as an argument and prints to stdout (standard output). GNU Parallel calls the function with the arguments aaa00N_bcdefx01, where N = 1..5, and saves the output to $LOGFILE. It uses --linebuffer to pass output on as soon as there is a full line, which guarantees that you will not get half a line from one process mixed with a line from another process. --tag prepends each output line with the JVM name.
How about using subshells?
Each JVM's commands should go inside '(' and ')'. Put '&' at the end so that each one executes in the background.
An example is given below:
#!/bin/bash
echo > testfile.txt
echo "execute subshell 1"
(
#JVM 1 should go here
sleep 10
echo "subshell 1" >> testfile.txt
)&
echo "execute subshell 2"
(
#JVM 2 should go here
sleep 10
echo "subshell 2" >> testfile.txt
)&
echo "execute subshell 3"
(
#JVM 3 should go here
sleep 10
echo "subshell 3" >> testfile.txt
)&
Here each subshell writes data to testfile.txt after waiting for 10 seconds.
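Applying the same subshell-and-& pattern to the monitoring loop from the question, a rough sketch could look like the following (the JVM names and $LOGFILE come from the question; the three ps calls are collapsed into one here, and wait makes the parent script block until all background subshells have finished):
#!/bin/bash
systime=$(date +"%m-%d-%y-%T")
for JVM in aaa00{1..5}_bcdefx01; do
  (
    for i in {1..10}; do
      # one ps pass per sample: PID, %CPU and %MEM for this JVM
      read -r PID CPU MEM < <(ps auxwww | grep "jdk" | grep "$JVM" | grep -v grep | awk '{print $2, $3, $4}')
      printf '%-5s %-20s %-20s %-20s %-20s\n' "$systime" "$JVM" "$PID" "$CPU" "$MEM" >> "$LOGFILE"
      sleep 5
    done
  ) &
done
wait   # block until all background subshells finish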

BASH: Remove newline for multiple commands

I need some help. I want the result to be
UP:N%:N%
but the current result is
UP:N%
:N%
This is the code:
#!/bin/bash
UP=$(pgrep mysql | wc -l);
if [ "$UP" -ne 1 ];
then
echo -n "DOWN"
else
echo -n "UP:"
fi
df -hl | grep 'sda1' | awk ' {percent+=$5;} END{print percent"%"}'| column -t && echo -n ":"
top -bn2 | grep "Cpu(s)" | \sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | \awk 'END{print 100 - $1"%"}'
You can use command substitution for the first of those two trailing commands (note that you're creating a subshell this way):
echo -n $(df -hl | grep 'sda1' | awk ' {percent+=$5;} END{print percent"%"}'| column -t ):
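Putting that together with the rest of the script, a possible sketch (the original pipelines are kept as-is, only wrapped so that everything before the final command is printed without a trailing newline):
#!/bin/bash
UP=$(pgrep mysql | wc -l)
if [ "$UP" -ne 1 ]; then
    echo -n "DOWN"
else
    echo -n "UP:"
fi
# disk usage of sda1 plus a separating colon, still on the same line
echo -n "$(df -hl | grep 'sda1' | awk '{percent+=$5} END{print percent"%"}' | column -t):"
# CPU usage goes last and supplies the final newline
top -bn2 | grep "Cpu(s)" | sed "s/.*, *\([0-9.]*\)%* id.*/\1/" | awk 'END{print 100 - $1"%"}'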

Bash- Converting a variable to human readable format (KB, MB, GB)

In my bash script, I run through a list of directories and read the size of each directory into a variable using the du command. I also keep a running total of the total size of the directories. The only problem is that after I get the total size, it's in an unreadable format (e.g. 64827120). How can I convert the variable containing this number into GB, MB, etc.?
You want to use du -h, which gives you 'human readable' output, i.e. KB, MB, GB, etc.
You can use numfmt to convert raw (decimal) numbers to human-readable form.
Use --to=iec to output binary-prefix numbers (i.e., K=1024, M=2^20, etc.):
$ printf '%s %s\n' 1000000 foo 1048576 bar | numfmt --to=iec
977K foo
1.0M bar
and use --to=si to output metric-prefix numbers (i.e., K=1000, M=10^6, etc.):
$ printf '%s %s\n' 1000000 foo 1048576 bar | numfmt --to=si
1.0M foo
1.1M bar
If you specifically want to get “MB”, “GB”, etc., use --suffix:
$ printf '%s %s\n' 1000000 foo 1048576 bar | numfmt --to=si --suffix=B
1.0MB foo
1.1MB bar
If your numbers are in a column other than the first
(as in Mik R’s answer), use --field:
$ printf '/home/%s %s\n' foo 1000000 bar 1048576 | numfmt --to=si --field=2
/home/foo 1.0M
/home/bar 1.1M
Or you can convert numbers on the command line (instead of using a pipe):
$ numfmt --to=si 1000000 1048576
1.0M
1.1M
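For the original question, where the total comes from du (which counts 1 KiB blocks by default), a small sketch might look like this; total_kb is a made-up name, and --from-unit=1024 tells numfmt that the input is in KiB units:
total_kb=64827120                                          # example running total, in 1 KiB blocks
numfmt --to=iec --suffix=B --from-unit=1024 "$total_kb"    # prints something like 62GB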
Try using du -sh to get a summarized size in human-readable form; you can also find related help in the manual.
Try the commands below; they will give you the size in human-readable format:
du | tail -1 | awk '{print $1}' | awk '{ total = $1 / 1024 ; print total "MB" }'
du | tail -1 | awk '{print $1}' | awk '{ total = $1 / 1024/1024 ; print total "GB" }'
This is a combination of @Mahattam's response and some others, which tallies the total in the standard units and then formats the output as human-readable.
declare -A myAssociativeArray ; for path in $(awk -F: '{if ($3 >= 1000) print $6}' < /etc/passwd); do disk_usage=0; disk_usage=$(du -s ${path} | grep -oE '[[:digit:]]+'); echo "$path: $(echo $disk_usage | tail -1 | awk {'print $1'} | awk '{ total = $1 / 1024/1024 ; printf("%.2fGB\n", total) }')"; myAssociativeArray[${path}]=${disk_usage}; done ; total=$(IFS=+; echo "$((${myAssociativeArray[*]}))"); echo "Total disk usage: $(echo $total | tail -1 | awk {'print $1'} | awk '{ total = $1 / 1024/1024 ; printf("%.2fGB\n", total) }')"; unset total; unset disk_usage ;
How it works.
The path list could be anything you want to iterate over; in this example it just uses /etc/passwd to loop over users' home paths:
for path in $(awk -F: '{if ($3 >= 1000) print $6}' < /etc/passwd)
It then calculates the usage for each folder inside the loop, extracting only the digits from the du output:
disk_usage=0; disk_usage=$(du -s ${path} | grep -oE '[[:digit:]]+')
It prints the size nicely formatted, rounded to 2 decimal places:
echo "$path: $(echo $disk_usage | tail -1 | awk {'print $1'} | awk '{ total = $1 / 1024/1024 ; printf("%.2fGB\n", total) }')";
It adds this to a bash associative array:
myAssociativeArray[${path}]=${disk_usage}
Then it sums the values stored in the array to get the total in the original units:
total=$(IFS=+; echo "$((${myAssociativeArray[*]}))")
Then the same formatting is used to print the total nicely:
echo "Total disk usage: $(echo $total | tail -1 | awk {'print $1'} | awk '{ total = $1 / 1024/1024 ; printf("%.2fGB\n", total) }')";
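For readability, here is the same approach unrolled into a short script instead of a one-liner; the variable names are mine, the sizes stay in the 1 KiB blocks that du -s reports, and du may need sudo for other users' home directories:
#!/bin/bash
declare -A usage_by_path          # associative array: path -> size in KiB

# loop over home directories of regular users (UID >= 1000)
for path in $(awk -F: '{if ($3 >= 1000) print $6}' < /etc/passwd); do
    kb=$(du -s "$path" | awk '{print $1}')    # size in 1 KiB blocks
    awk -v p="$path" -v k="$kb" 'BEGIN{printf "%s: %.2fGB\n", p, k/1024/1024}'
    usage_by_path[$path]=$kb
done

# sum the stored values and print the grand total
total=0
for kb in "${usage_by_path[@]}"; do
    total=$((total + kb))
done
awk -v k="$total" 'BEGIN{printf "Total disk usage: %.2fGB\n", k/1024/1024}'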
I used a variation of this for calculating cPanel reseller accounts' disk usage in the monster one-liner below.
Reseller="CPUsernameInputField"; declare -A myAssociativeArray ; echo "==========================================" | tee -a ${Reseller}_disk_breakdown.txt ; echo "Reseller ${Reseller}'s Disk usage by account"| tee -a ${Reseller}_disk_breakdown.txt; for acct in $(sudo grep ${Reseller} /etc/trueuserowners | cut -d: -f1); do disk_usage=0; disk_usage=$(du -s /home/${acct} | grep -oE '[[:digit:]]+'); echo "$acct: $(echo $disk_usage | tail -1 | awk {'print $1'} | awk '{ total = $1 / 1024/1024 ; printf("%.2fGB\n", total) }')" | tee -a ${Reseller}_disk_breakdown.txt ; myAssociativeArray[${acct}]=${disk_usage}; done ; total=$(IFS=+; echo "$((${myAssociativeArray[*]}))"); echo "Total disk usage: $(echo $total | tail -1 | awk {'print $1'} | awk '{ total = $1 / 1024/1024 ; printf("%.2fGB\n", total) }')" | tee -a ${Reseller}_disk_breakdown.txt; unset total; unset disk_usage;echo "==========================================" | tee -a ${Reseller}_disk_breakdown.txt ; echo "Sorted by top users" | tee -a ${Reseller}_disk_breakdown.txt; for key in "${!myAssociativeArray[@]}"; do printf '%s:%s\n' "$key" "${myAssociativeArray[$key]}"; done | sort -t : -k 2rn | tee -a ${Reseller}_disk_breakdown.txt;echo "==========================================" | tee -a ${Reseller}_disk_breakdown.txt ;for key in "${!myAssociativeArray[@]}"; do USER_HOME=$(eval echo ~${key}); echo "Disk breakdown for $key" | tee -a ${Reseller}_disk_breakdown.txt ; sudo du -h ${USER_HOME} --exclude=/app --exclude=/home/virtfs| grep ^[0-9.]*[G,M] | sort -rh|head -n20 | tee -a ${Reseller}_disk_breakdown.txt;echo "=======================================" | tee -a ${Reseller}_disk_breakdown.txt; done

Using bash command on a variable that will be used as reference for an array

Short and direct: basically I want to store the command in the variable $command instead of writing it inside the while loop as a command itself. So:
This works, but I think it's ugly:
#!/bin/bash
IFS=$'\n'
lsof=`which lsof`
whoami=`whoami`
while true ; do
execution_array=($(${lsof} -iTCP -P 2> /dev/null | grep ':' | grep ${whoami} | awk '{print $9}' | cut -f2 -d'>' | sort | uniq ))
for i in ${execution_array[*]}; do
echo $i
done
sleep 1
done
unset IFS
This doesn't work (no output happens), but I think it's less ugly:
#!/bin/bash
IFS=$'\n'
lsof=`which lsof`
whoami=`whoami`
command="${lsof} -iTCP -P 2> /dev/null | grep ':' | grep ${whoami} | awk '{print $9}' | cut -f2 -d'>' | sort | uniq"
while true ; do
execution_array=($(command))
for i in ${execution_array[*]}; do
echo $i
done
sleep 1
done
unset IFS
This solved my problem:
#!/bin/bash
IFS=$'\n'
lsof=$(which lsof)
list_connections() {
${lsof} -iTCP -P 2> /dev/null | grep ':' | grep $(whoami) | awk '{print $9}' | cut -f2 -d'>' | sort | uniq
}
while true ; do
execution_array=($(list_connections))
for i in ${execution_array[*]}; do
echo $i
done
sleep 1
done
unset IFS
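A note on why the $command version prints nothing (my explanation, not part of the original answers): $(command) runs the shell builtin command with no arguments, which outputs nothing, and even $($command) would not help because the pipes and redirections inside the string are not re-parsed during variable expansion. Besides the function shown above, another option is to keep the string and hand it to a fresh shell with bash -c (or eval); a sketch:
#!/bin/bash
IFS=$'\n'
lsof=$(which lsof)
# note the escaped \$9 so it reaches awk instead of being expanded at assignment time
command="${lsof} -iTCP -P 2> /dev/null | grep ':' | grep $(whoami) | awk '{print \$9}' | cut -f2 -d'>' | sort | uniq"
while true; do
    execution_array=($(bash -c "$command"))   # a new shell re-parses the pipeline in the string
    for i in "${execution_array[@]}"; do
        echo "$i"
    done
    sleep 1
done
unset IFS
The function approach is still the cleaner one, since nothing in the pipeline needs extra escaping.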
