I was experimenting with a bash script that would recursively fork and call itself. The terminating condition was subtle and I got it wrong a few times, the result being a script that called itself ad infinitum. What's a safe way to sandbox a script like this while debugging, so that every time there's a mistake I don't have to deal with stopping the infinite tower of processes that fills up the process table?
You could use ulimit:
ulimit -u 20
This limits the maximum number of processes your user can run.
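For example, you could apply the limit only to the test run by setting it in a subshell (the script name here is just a placeholder):
(
    ulimit -u 20        # cap the process count for this subshell and its children
    ./myforkscript.sh   # the recursive script under test
)
Keep in mind that -u is a per-user limit, so processes you already have running count against the cap.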
You could simply count the number of processes with your script's name and terminate if the number gets too high.
This blog post introduces a function to achieve this:
count_process(){
    return $(ps -ef | grep -v grep | grep -c "$1")
}
Explanation (taken from blog post):
ps -ef will return a list of all running processes (in detail),
and the process list will then be filtered first to exclude any instances of grep
and second to count the processes specified with $1
For reusability, the author provides the function as a little script:
#!/bin/sh
#
# /usr/bin/processCount
# Source: http://samcaldwell.net/index.php/technical-articles/3-how-to-articles/68-how-do-i-count-the-number-of-linux-instances-of-a-given-process-using-a-bash-script
#
[ -z "$1" ] && {
echo " "
echo "Missing expected input."
echo " "
echo "USAGE:"
echo " "
echo " $0 <executable file>"
echo " "
echo " NOTE: When executing this script use the path and filename of the"
echo " program. Using only the process name can potentially return inaccurate"
echo " results."
echo " "
exit 1
}
echo $(ps -ef | grep -v grep | grep -v "$0" | grep -c "$1")
#script ends here.
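With that in place, the recursive script itself could bail out once too many copies are running. A rough sketch, assuming the script above is installed as /usr/bin/processCount and using a limit of 20 (both illustrative):
#!/bin/bash
# stop recursing once 20 copies of this script exist
if [ "$(/usr/bin/processCount "$0")" -ge 20 ]; then
    echo "Too many instances of $0, aborting." >&2
    exit 1
fi
# ... the recursive work, which eventually calls "$0" again ...
As the usage message above notes, this works best when the script is invoked with its path rather than just its name.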
I have a fairly large list of websites in "file.txt" and want to check whether the words "Hello World!" appear on each site in the list, using a loop and curl.
i.e. in "file.txt":
blabla.com
blabla2.com
blabla3.com
then my code:
#!/bin/bash
put() {
printf "list : "
read list
run=$(cat $list)
}
put
scan_list() {
for run in $(cat $list);do
if [[ $(curl -skL ${run}) =~ "Hello World!" ]];then
printf "${run} Hello World! \n"
else
printf "${run} No Hello:( \n"
fi
done
}
scan_list
This takes a lot of time; is there a way to make the checking process faster?
Use xargs:
% tr '\12' '\0' < file.txt | \
xargs -0 -r -n 1 -t -P 3 sh -c '
if curl -skL "$1" | grep -q "Hello World!"; then
echo "$1 Hello World!"
exit
fi
echo "$1 No Hello:("
' _
Use tr to convert the newlines in file.txt to nulls (\0).
Pass the result through xargs with the -0 option so it splits the input on nulls.
The -r option prevents the command from being run if the input is empty. It is a GNU extension, so on macOS or *BSD you will need to check that file.txt is not empty before running.
The -n 1 option passes only one entry per command invocation.
The -t option is for debugging; it prints each command before it is run.
We allow 3 simultaneous commands in parallel with the -P 3 option.
Using sh -c with a single-quoted multi-line command, each entry from the file is substituted as $1.
The _ fills in the $0 argument, so our entries become $1.
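If you have GNU xargs, a similar sketch can skip the tr step and split on newlines directly (-d and -r are GNU extensions, so this will not work with BSD/macOS xargs):
xargs -d '\n' -r -n 1 -P 3 sh -c '
if curl -skL "$1" | grep -q "Hello World!"; then
    echo "$1 Hello World!"
else
    echo "$1 No Hello:("
fi
' _ < file.txt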
I have a bash script like this:
#!/bin/bash
log_file=/home/michael/bash/test.log
checkalive=checkalive.php
#declare
needRestart=0
#Check checkalive.php
is_checkalive=`ps aux | grep -v grep| grep -v "$0" | grep $checkalive| wc -l | awk '{print $1}'`
if [ $is_checkalive != "0" ] ;
then
checkaliveId=$(ps -ef | grep $checkalive | grep -v 'grep' | awk '{ printf $2 }')
echo "Service $checkalive is running. $checkaliveId"
else
echo "$checkalive OFF"
needRestart=1
fi
#NEED needRestart
if [ $needRestart == "1" ];
then
#START SERVICE
echo "Restarting services..."
/usr/bin/php5.6 /home/michael/bash/$checkalive >/dev/null 2>&1 &
echo "$checkalive..."
echo `date '+%Y-%m-%d %H:%M:%S'` " Start /home/michael/bash/$checkalive" >> $log_file
fi
I can run it manually, but when I try to run it from cron it doesn't work for some reason. Apparently the command:
/usr/bin/php5.6 /home/michael/bash/$checkalive >/dev/null 2>&1 &
does not work.
All of the file permissions are already set to executable. Any advice?
Thank you
You have run into one of cron's most common pitfalls: treating it like an arbitrary shell script. A crontab is not a shell script, and you can't do everything in it that you can do in one, such as dereferencing variables or setting arbitrary new ones.
I suggest you substitute the values directly into the cron line and avoid the use of variables:
/usr/bin/php5.6 /home/michael/bash/checkalive.php >/dev/null 2>&1 &
Also, consider removing the trailing & as it is not necessary.
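For instance, a crontab entry following that suggestion could look something like this (the every-five-minutes schedule is just an example):
*/5 * * * * /usr/bin/php5.6 /home/michael/bash/checkalive.php >/dev/null 2>&1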
I have a source file with the following information in it.
WABEL8499IPM101
WABEL8499IPM102
WABEL8499IPM103
WABEL8499IPM104
WABEL8499IPM105
WABEL8499IPM106
WABEL8499IPM107
WABEL8499IPM108
I need to be able to find the largest name in the sequence and then create a new variable with the next logical name in the sequence. I need to be able to create multiple if necessary. For example:
Use grep to search the file for WABEL8499IPM which shows all of the above results. I need to find WABEL8499IPM108 because it's the largest in the sequence and then create a new variable (how many depends on what the user inputs) with the value WABEL8499IPM109. If user inputs a quantity of 2 then I need both 109 and 110. My goal is to build a bash script to input the base name (without the last 3 digits), find the largest in the sequence and then output to a log file the next names in the sequence however many times the user needs.
I'm not really sure where to start. I can find all of them using grep but am having difficulty finding only the largest value in the sequence. The user will only input the base name because they won't know the last 3 digits. Currently I don't have any code that works.
SRCFILE="~/Desktop/deviceinfo.csv"
LOGDIR="~/Desktop/"
LOGFILE="$LOGDIR/DeviceNames.csv"
echo -e "\n"
echo "What is the base device name?"
read deviceName
echo "How many device names do you need?"
read quantityName
lines=$(grep -c "$deviceName" $SRCFILE)
echo -e "\n"
echo "There are $lines results."
deviceResults=$(grep -F "$deviceName" $SRCFILE)
echo -e "\n"
echo Device Name\'s Currently Enrolled:
echo "$deviceResults"
echo -e "\n"
echo "Your output file has been created."
CODE FOR CREATING OUTPUT FILE HERE
echo "$deviceName1" >> "$LOGFILE"
echo "$deviceName2" >> "$LOGFILE"
echo "$deviceName3" >> "$LOGFILE"
Would there be a way with this method to use a reference file for the input? For example, if I had to research and create multiple names with different quantities, could we use an input reference file so we don't have to type each one individually and run the script multiple times?
SRCFILE="$HOME/Desktop/deviceinfo.csv"
LOGDIR="$HOME/Desktop"
LOGFILE="$LOGDIR/DeviceNames.csv"
# base name, such as "WABEL8499IPM"
device_name=$1
# quantity, such as "2"
quantityNum=$2
# the largest in sequence, such as "WABEL8499IPM108"
max_sequence_name=$(grep -o -e "$device_name[0-9]*" "$SRCFILE" | sort --reverse | head -n 1)
# extract the last 3digit number (such as "108") from max_sequence_name
max_sequence_num=$(echo $max_sequence_name | rev | cut -c 1-3 | rev)
# create a sequence of names starting from "WABEL8499IPM101" if there is not yet any "WABEL8499IPM"
if [ -z "$max_sequence_name" ];
then
max_sequence_name=$device_name
max_sequence_num=100
fi
# create new sequence_name
# such as ["WABEL8499IPM109", "WABEL8499IPM110"]
array_new_sequence_name=()
for i in $(seq 1 $quantityNum);
do
cnum=$((max_sequence_num + i))
array_new_sequence_name+=("$device_name$cnum")
done
#CODE FOR CREATING OUTPUT FILE HERE
#for fn in "${array_new_sequence_name[@]}"; do touch "$fn"; done
# write log
for sqn in "${array_new_sequence_name[@]}";
do
echo "$sqn" >> "$LOGFILE"
done
Usage:
bash test.sh WABEL8499IPM 2
Result in the log file:
WABEL8499IPM109
WABEL8499IPM110
EDITED
The input reference file (input.txt) :
WABEL8499IPM,2
WABEL8555IPM,6
WABEL8444IPM,5
The driver shell script :
INPFIL="./input.txt"
PSRC="./test.sh"
while IFS= read -r line;
do
device_name=$(echo "$line" | cut -d "," -f 1)
quantity_num=$(echo "$line" | cut -d "," -f 2)
bash "$PSRC" "$device_name" "$quantity_num"
done < "$INPFIL"
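Saving the driver as, say, driver.sh next to test.sh and input.txt, a single run processes every base name with its own quantity:
bash driver.sh
With the sample deviceinfo.csv above (which only contains WABEL8499IPM entries), this should append WABEL8499IPM109 and WABEL8499IPM110 to the log, while the two bases that are not found start at 101 (WABEL8555IPM101 through 106, and WABEL8444IPM101 through 105).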
You can try
logdir="$HOME/Desktop"
srcfile="$logdir/deviceinfo.csv"
logfile="$logdir/DeviceNames.csv"
echo
read -p "What is the base device name? " deviceName
echo
read -p "How many device names do you need? " quantityName
echo
awk -v name="$deviceName" \
    -v q="$quantityName" \
    -v lelog="$logfile" '
    # only consider lines that start with the base name
    $0 ~ "^"name {
        sub(name,"")           # strip the base name, leaving the numeric suffix
        a = a > $0 ? a : $0    # keep the largest suffix seen so far
    }
    END {
        if ( a )
            for ( i = 1 ; i <= q ; i++ )
                print name ( a + i ) >> lelog   # append the next q names to the log
    }
' "$srcfile"
Inside a bash script, I need to grep a continuous log stream, and when the proper string is matched, I need to stop the 'tailf' command to move on with other steps.
The common command that works is:
tailf /dir/dir/dir/server.log | grep --line-buffered "Started in"
after the "Started in" line is gathered, I need to break down the "tailf" command.
All this stuff into a bash script.
Use grep -m1; it returns the first match and then stops:
-m num, --max-count=num
Stop reading the file after num matches.
tailf /dir/dir/dir/server.log | grep -m1 "Started in"
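A minimal sketch of using that inside the script (note that tailf itself may keep running until its next write to the closed pipe fails with SIGPIPE, so the script continues only once another line is written to the log):
#!/bin/bash
# block until the server reports it has started
tailf /dir/dir/dir/server.log | grep -m1 "Started in" > /dev/null
echo "server started, moving on..."
# ... other implementations here ...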
Figured out...
tailf /dir/dir/dir/server.log | while read line
do
echo $line | grep "thing_to_grep"
if [ "$?" -eq "0" ]; then
echo "";echo "[ message ]";echo "";
kill -2 -$$
fi
done
$$ is the PID of the current shell; kill -2 -$$ sends SIGINT to the process group with that ID, which also takes down the "tailf" pipeline.
I'm trying to write a script that monitors real-time CPU% load on AIX 6.1 servers by process (PID), and have been searching for this both in IBM's documentation and all over Stack Overflow.
I only find examples of people using, for example
ps aux
Needless to say, that's not what I need, since it only shows how much CPU% the process has used over its whole run time, which in my case is quite long.
The information I need is contained in topas and nmon, but I don't know how to grab a snapshot of this information for each individual moment.
top
Does not exist on AIX systems.
Solved this by making a script that generates 30-second tprof logs and iterates through them, summing the process threads by PID to reach a total that is more or less a real-time per-process CPU load% list.
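A rough sketch of the collection step (the flags below follow IBM's commonly documented tprof example; verify against man tprof on your AIX level before relying on them):
# profile the whole system while "sleep 30" runs, i.e. for 30 seconds
tprof -ske -x sleep 30
The resulting report can then be parsed to sum the per-thread CPU% figures for each PID.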
Here is a function I use to search for and grab CPU load from an nmon data log:
function fetch_UARG_process_by_pid ()
{
#Check_Argument $1 $2
#parameter initialization
filesource=$1
cpuvalue=$2
readarray -t X <<< "$(grep TOP "$filesource")"
length=${#X[@]}
#echo " this is the length of my array : $length"
#you have to start from 2 to avoid the first lines that describe the content of the file
#TOP,%CPU Utilisation
#TOP,+PID,Time,%CPU,%Usr, Sys,Size,ResSet,ResText,ResData,ShdLib,MinorFault,MajorFault,Command
for ((i = 2; i < length; i++)); do
echo ${X[i]} | awk -F "," '{print $2 , $4}' | while read processid n
do
if (( $(echo "$n > $cpuvalue " |bc -l) ));
then
echo "the value of CPU usage is: $n"
echo "the Linux PID is : $processid "
echo "And the short desciption of the process:"
echo ${X[i]} | awk -F "," '{print $14}'
echo -e "And the long desciption of the process:"
grep UARG $1 | grep $processid | awk -F "," '{print $5}'
echo -e "\n"
fi
done
done
}
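An example call (the nmon file name and the 50% threshold are illustrative); this would report every process that exceeded 50% CPU in any TOP sample of the log, assuming the log was collected with nmon's top-process option so the TOP and UARG sections exist:
fetch_UARG_process_by_pid ./myserver_240101_1200.nmon 50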