I'm attempting to sweep an IP block totaling about 65,000 addresses. We've been instructed to use ICMP packets specifically, with bash, and to find a way to parallelize it. Here's what I've come up with:
#!/bin/bash

ping() {
    if ping -c 1 -W 5 131.212.$i.$j >/dev/null
    then
        ((++s))
        echo -n "*"
    else
        ((++f))
        echo -n "."
    fi
    ((++j))
    # if j has reached 255, set it to zero and increment i
    if [ $j -gt 255 ]; then
        j=0
        ((++i))
        echo "Pinging 131.212.$i.xx IP Block...\n"
    fi
}

s=0 # number of responses received
f=0 # number of failures received
i=0 # IP increment 1
j=0 # IP increment 2
curProcs=$(ps | wc -l)
maxProcs=$(getconf OPEN_MAX)

while [ $i -lt 256 ]; do
    curProcs=$(ps | wc -l)
    if [ $curProcs -lt $maxProcs ]; then
        ping &
    else
        sleep 10
    fi
done

echo "Found "$s" responses and "$f" timeouts."
echo /usr/bin/time -l
However, I've been running into the following error (on macOS):
redirection error: cannot duplicate fd: Too many open files
My understanding is that I'm exceeding a resource limit, which I've attempted to rectify by only starting new ping processes when the count of existing processes is below the specified maximum, but this does not solve the issue.
Thank you for your time and suggestions.
EDIT:
There are a lot of good suggestions below for doing this with preexisting tools. Since I was limited by academic requirements, I ended up splitting the ping loops into a separate process for each 131.212.x.x block, which, although ugly, did the trick in under 5 minutes. This code has a lot of problems, but it might be a good starting point for someone in the future:
#!/bin/bash
#############################
# Ping Subfunction #
#############################
# blocks with more responses will complete first, since the worst-case scenario
# is O(n) if no IPs generate a response
pingSubnet() {
    for ((j = 0 ; j <= 255 ; j++)); do
        # send a single ping with a timeout of 1 sec, piping output to the bitbucket
        if ping -c 1 -W 1 131.212."$i"."$j" >/dev/null
        then
            ((++s))
        else
            ((++f))
        fi
    done
    #echo "Received $s responses with $f timeouts in block $i..."
    # output the number of successes to the pipe opened at the start
    echo "$s" >"$pipe"
    exit 0
}
#############################
# Variable Declaration #
#############################
start=$(date +%s) #start of execution time
startMem=$(vm_stat | awk '/Pages free/ {print $3}' | awk 'BEGIN { FS = "\." }; {print ($1*0.004092)}' | sed 's/\..*$//');
startCPU=$(top -l 1 | grep "CPU usage" | awk '{print 100-$7;}' | sed 's/\..*$//')
s=0 # number of responses received
f=0 # number of failures received
i=0 #IP increment 1
j=0 #IP increment 2
#############################
# Pipe Initialization #
#############################
# create a pipe for child procs to write to
# child procs inherit runtime environment of parent proc, but cannot
# write back to it (like passing by value in C, but the whole env)
# hence, they need somewhere else to write back to that the parent
# proc can read back in
pipe=/tmp/pingpipe
trap 'rm -f $pipe' EXIT
if [[ ! -p $pipe ]]; then
    mkfifo $pipe
    exec 3<> $pipe
fi
#############################
# IP Block Iteration #
#############################
# adding an ampersand to the end forks the command to a separate, backgrounded
# child process. this allows for parallel computation but adds logistical
# challenges since children can't write the parent's variables
echo "Initiating scan processes..."
while [ $i -lt 256 ]; do
    #echo "Beginning 131.212.$i.x block scan..."
    # ping subnet asynchronously
    pingSubnet &
    ((++i))
done
echo "Waiting for scans to complete (this may take up to 5 minutes)..."
peakMem=$(vm_stat | awk '/Pages free/ {print $3}' | awk 'BEGIN { FS = "\." }; {print ($1*0.004092)}' | sed 's/\..*$//')
peakCPU=$(top -l 1 | grep "CPU usage" | awk '{print 100-$7;}' | sed 's/\..*$//')
wait
echo -e "done" >$pipe
#############################
# Concat Pipe Outputs #
#############################
# read each line from the pipe we created earlier, adding the number
# of successes up in a variable
success=0
echo "Tallying responses..."
while read -r line <$pipe; do
    if [[ "$line" == 'done' ]]; then
        break
    fi
    success=$((line+success))
done
#############################
# Output Statistics #
#############################
echo "Gathering Statistics..."
fail=$((65536-success)) # 256*256 addresses scanned in total
#output program statistics
averageMem=$((peakMem-startMem))
averageCPU=$((peakCPU-startCPU))
end=$(date +%s) #end of execution time
runtime=$((end-start))
echo "Scan completed in $runtime seconds."
echo "Found $success active servers and $fail nonresponsive addresses with a timeout of 1."
echo "Estimated memory usage was $averageMem MB."
echo "Estimated CPU utilization was $averageCPU %"
This should give you some ideas for doing it with GNU Parallel:
parallel --dry-run -j 64 -k ping 131.212.{1}.{2} ::: $(seq 1 3) ::: $(seq 11 13)
ping 131.212.1.11
ping 131.212.1.12
ping 131.212.1.13
ping 131.212.2.11
ping 131.212.2.12
ping 131.212.2.13
ping 131.212.3.11
ping 131.212.3.12
ping 131.212.3.13
-j 64 runs 64 pings in parallel at a time
--dry-run means do nothing but show what it would do
-k means keep the output in order (just so you can understand it)
The ::: introduces the arguments and I have repeated them with different numbers (1 through 3, and then 11 through 13) so you can distinguish the two counters and see that all permutations and combinations are generated.
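Applied to the actual sweep, a hedged sketch (assuming GNU Parallel is installed and a ping that accepts -c and -W as used in the question) might look like this:
# count responsive hosts across the whole 131.212.0.0/16, 64 probes at a time
parallel -j 64 'ping -c 1 -W 1 131.212.{1}.{2} >/dev/null 2>&1 && echo 131.212.{1}.{2}' \
    ::: {0..255} ::: {0..255} | wc -l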
Don't do that.
Use fping instead. It will probe far more efficiently than your program will.
$ brew install fping
will make it available, thanks to the magic of brew.
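As a sketch of the same sweep with fping (-g generates targets from a CIDR range, -a prints only hosts that answered, -q suppresses per-probe chatter):
# list responsive hosts in 131.212.0.0/16, then count them
fping -a -q -g 131.212.0.0/16 2>/dev/null > alive.txt
wc -l < alive.txt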
Of course it's not as optimal as what you are trying to build above, but you could start the maximum allowed number of processes in the background, wait for them to finish, and then start the next batch, something like this (except I'm using sleep 1 as a stand-in for the real work):
for i in {1..20}            # iterate some
do
    sleep 1 &               # start in the background
    if ! ((i % 5))          # after every 5th (using mod to detect)
    then
        wait %1 %2 %3 %4 %5 # wait for all jobs to finish
    fi
done
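Adapted to the per-subnet scan from the question's EDIT, a hypothetical sketch of the same batching idea (pingSubnet and i as defined there):
for i in {0..255}; do
    pingSubnet &                 # one background scan per third octet
    if ! (( (i + 1) % 16 )); then
        wait                     # drain the current batch of 16 before starting more
    fi
done
wait                             # catch the final partial batch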
Related
I want to use fping to ping multiple IPs contained in a file and output the failed IPs into a file, i.e.:
hosts.txt
8.8.8.8
8.8.4.4
1.1.1.1
ping.sh
#!/bin/bash
HOSTS="/tmp/hosts.txt"
fping -q -c 2 < $HOSTS
if ip down
echo ip > /tmp/down.log
fi
So I would like to end up with 1.1.1.1 in the down.log file
It seems that parsing the data from fping is somewhat difficult: it allows parsing of data for hosts that are alive, but not dead. As a way around the issue, and to allow processing multiple hosts simultaneously with -f, all the hosts that are alive are placed in a variable called alive; then the hosts in /tmp/hosts.txt are looped through and grepped against alive to determine whether each host is alive or dead. A return code of 1 means grep cannot find the host in alive, and hence it is added to down.log.
alive=$(fping -c 1 -f ipsfile | awk -F: '{ print $1 }')

while read line
do
    grep -q -o $line <<<$alive
    if [[ "$?" == "1" ]]
    then
        echo $line >> down.log
    fi
done < /tmp/hosts.txt
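One caveat with the substring match above: 1.1.1.1 would also match inside an address like 11.1.1.10. A hedged sketch of a stricter variant (whole-line, fixed-string matching; the awk field handling is an assumption about typical fping -c output):
alive=$(fping -c 1 -f /tmp/hosts.txt 2>/dev/null | awk '{ print $1 }')
while read -r host; do
    # -F: match a fixed string, -x: match the whole line
    if ! grep -qFx "$host" <<< "$alive"; then
        echo "$host" >> down.log
    fi
done < /tmp/hosts.txt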
Here's one way to get the result you want. Note, however: I didn't use fping anywhere in my script. If the usage of fping is crucial to you, then I might have missed the point entirely.
#!/bin/bash

HOSTS="/tmp/hosts.txt"
declare -i DELAY=$1   # Amount of time in seconds to wait for a packet
declare -i REPEAT=$2  # Amount of times to retry pinging upon failure

# Read HOSTS line by line
while read -r line; do
    c=0
    while [[ $c -lt $REPEAT ]]; do
        # If pinging an address does not return the words "0 received", we assume the ping has succeeded
        if [[ -z $(ping -q -c $REPEAT -W $DELAY $line | grep "0 received") ]]; then
            echo "Attempt[$(( c + 1 ))] $line : Success"
            break
        fi
        echo "Attempt[$(( c + 1 ))] $line : Failed"
        (( c++ ))
    done
    # If pinging an address failed REPEAT times, we assume the address is down
    if [[ $c -eq $REPEAT ]]; then
        echo "$line : Failed" >> /tmp/down.log # Log the failed address
    fi
done < $HOSTS
Usage: ./script [delay] [repeatCount] -- 'delay' is the total number of seconds we wait for a response to a ping; 'repeatCount' is how many times we retry pinging upon failure before deciding the address is down.
Here we read /tmp/hosts.txt line by line and evaluate each address using ping. If pinging an address succeeds, we move on to the next one. If an address fails, we try again as many times as the user has specified. If the address fails all of the pings, we log it in /tmp/down.log.
The conditions for checking whether a ping failed/succeeded may not be accurate for your use cases, so you may have to edit that. Still, I hope this gets the general idea across.
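For instance, a sketch relying on ping's exit status instead of grepping the summary text (treat the flags as assumptions: -W takes seconds on Linux ping but differs on some BSDs):
if ping -q -c 1 -W "$DELAY" "$line" >/dev/null 2>&1; then
    echo "$line : Success"   # ping exits 0 when at least one reply arrived
else
    echo "$line : Failed"
fi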
Is there an easy way to limit the number of concurrent jobs in bash? By that I mean making the & block when there are more than n concurrent jobs running in the background.
I know I can implement this with ps | grep -style tricks, but is there an easier way?
If you have GNU Parallel http://www.gnu.org/software/parallel/ installed you can do this:
parallel gzip ::: *.log
which will run one gzip per CPU core until all logfiles are gzipped.
If it is part of a larger loop you can use sem instead:
for i in *.log ; do
    echo $i   # Do more stuff here
    sem -j+0 gzip $i ";" echo done
done
sem --wait
It will do the same, but give you a chance to do more stuff for each file.
If GNU Parallel is not packaged for your distribution you can install GNU Parallel simply by:
$ (wget -O - pi.dk/3 || lynx -source pi.dk/3 || curl pi.dk/3/ || \
fetch -o - http://pi.dk/3 ) > install.sh
$ sha1sum install.sh | grep 883c667e01eed62f975ad28b6d50e22a
12345678 883c667e 01eed62f 975ad28b 6d50e22a
$ md5sum install.sh | grep cc21b4c943fd03e93ae1ae49e28573c0
cc21b4c9 43fd03e9 3ae1ae49 e28573c0
$ sha512sum install.sh | grep da012ec113b49a54e705f86d51e784ebced224fdf
79945d9d 250b42a4 2067bb00 99da012e c113b49a 54e705f8 6d51e784 ebced224
fdff3f52 ca588d64 e75f6033 61bd543f d631f592 2f87ceb2 ab034149 6df84a35
$ bash install.sh
It will download, check signature, and do a personal installation if it cannot install globally.
Watch the intro videos for GNU Parallel to learn more:
https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
A small bash script could help you:
# content of script exec-async.sh
joblist=($(jobs -p))
while (( ${#joblist[*]} >= 3 ))
do
    sleep 1
    joblist=($(jobs -p))
done
$* &
If you call:
. exec-async.sh sleep 10
...four times, the first three calls will return immediately, and the fourth call will block until there are fewer than three jobs running.
You need to start this script inside the current session by prefixing it with ., because jobs lists only the jobs of the current session.
The sleep inside is ugly, but I didn't find a way to wait for the first job that terminates.
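If your bash is 4.3 or newer, the wait -n builtin blocks until any one background job exits, which removes the need for the polling sleep; a minimal sketch under that assumption:
# hypothetical exec-async-waitn.sh (requires bash 4.3+ for wait -n)
while (( $(jobs -pr | wc -l) >= 3 ))
do
    wait -n    # returns as soon as any background job finishes
done
"$@" &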
The following script shows a way to do this with functions. You can either put the bgxupdate() and bgxlimit() functions in your script, or have them in a separate file which is sourced from your script with:
. /path/to/bgx.sh
It has the advantage that you can maintain multiple groups of processes independently (you can run, for example, one group with a limit of 10 and another totally separate group with a limit of 3).
It uses the Bash built-in jobs to get a list of sub-processes but maintains them in individual variables. In the loop at the bottom, you can see how to call the bgxlimit() function:
Set up an empty group variable.
Transfer that to bgxgrp.
Call bgxlimit() with the limit and command you want to run.
Transfer the new group back to your group variable.
Of course, if you only have one group, just use bgxgrp variable directly rather than transferring in and out.
#!/bin/bash

# bgxupdate - update active processes in a group.
#   Works by transferring each process to a new group
#   if it is still active.
# in:  bgxgrp   - current group of processes.
# out: bgxgrp   - new group of processes.
# out: bgxcount - number of processes in new group.
bgxupdate() {
    bgxoldgrp=${bgxgrp}
    bgxgrp=""
    ((bgxcount = 0))
    bgxjobs=" $(jobs -pr | tr '\n' ' ')"
    for bgxpid in ${bgxoldgrp} ; do
        echo "${bgxjobs}" | grep " ${bgxpid} " >/dev/null 2>&1
        if [[ $? -eq 0 ]]; then
            bgxgrp="${bgxgrp} ${bgxpid}"
            ((bgxcount++))
        fi
    done
}

# bgxlimit - start a sub-process with a limit.
#   Loops, calling bgxupdate until there is a free
#   slot to run another sub-process. Then runs it
#   and updates the process group.
# in:  $1     - the limit on processes.
# in:  $2+    - the command to run for new process.
# in:  bgxgrp - the current group of processes.
# out: bgxgrp - new group of processes.
bgxlimit() {
    bgxmax=$1; shift
    bgxupdate
    while [[ ${bgxcount} -ge ${bgxmax} ]]; do
        sleep 1
        bgxupdate
    done
    if [[ "$1" != "-" ]]; then
        $* &
        bgxgrp="${bgxgrp} $!"
    fi
}
# Test program: create a group and run six sleeps with a limit of 3.
group1=""
echo 0 $(date | awk '{print $4}') '[' ${group1} ']'
echo

for i in 1 2 3 4 5 6; do
    bgxgrp=${group1}; bgxlimit 3 sleep ${i}0; group1=${bgxgrp}
    echo ${i} $(date | awk '{print $4}') '[' ${group1} ']'
done

# Wait until all others are finished.
echo
bgxgrp=${group1}; bgxupdate; group1=${bgxgrp}
while [[ ${bgxcount} -ne 0 ]]; do
    oldcount=${bgxcount}
    while [[ ${oldcount} -eq ${bgxcount} ]]; do
        sleep 1
        bgxgrp=${group1}; bgxupdate; group1=${bgxgrp}
    done
    echo 9 $(date | awk '{print $4}') '[' ${group1} ']'
done
Here’s a sample run, with blank lines inserted to clearly delineate different time points:
0 12:38:00 [ ]
1 12:38:00 [ 3368 ]
2 12:38:00 [ 3368 5880 ]
3 12:38:00 [ 3368 5880 2524 ]

4 12:38:10 [ 5880 2524 1560 ]

5 12:38:20 [ 2524 1560 5032 ]

6 12:38:30 [ 1560 5032 5212 ]

9 12:38:50 [ 5032 5212 ]

9 12:39:10 [ 5212 ]

9 12:39:30 [ ]
The whole thing starts at 12:38:00 (time t = 0) and, as you can see, the first three processes run immediately.
Each process sleeps for 10n seconds and the fourth process doesn’t start until the first exits (at time t = 10). You can see that process 3368 has disappeared from the list before 1560 is added.
Similarly, the fifth process 5032 starts when 5880 (the second) exits at time t = 20.
And finally, the sixth process 5212 starts when 2524 (the third) exits at time t = 30.
Then the rundown begins, the fourth process exits at time t = 50 (started at 10 with 40 duration).
The fifth exits at time t = 70 (started at 20 with 50 duration).
Finally, the sixth exits at time t = 90 (started at 30 with 60 duration).
Or, if you prefer it in a more graphical time-line form:
Process:  1  2  3  4  5  6
--------  -  -  -  -  -  -
12:38:00  ^  ^  ^             1/2/3 start together.
12:38:10  v  |  |  ^          4 starts when 1 done.
12:38:20     v  |  |  ^       5 starts when 2 done.
12:38:30        v  |  |  ^    6 starts when 3 done.
12:38:40           |  |  |
12:38:50           v  |  |    4 ends.
12:39:00              |  |
12:39:10              v  |    5 ends.
12:39:20                 |
12:39:30                 v    6 ends.
Here's the shortest way:
waitforjobs() {
    while test $(jobs -p | wc -w) -ge "$1"; do wait -n; done
}
Call this function before forking off any new job:
waitforjobs 10
run_another_job &
To have as many background jobs as cores on the machine, use $(nproc) instead of a fixed number like 10.
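For example, a sketch wrapping a hypothetical batch of gzip jobs with this function:
for f in *.log; do
    waitforjobs "$(nproc)"   # block while the job count is at the limit
    gzip "$f" &
done
wait    # let the last stragglers finish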
Assuming you'd like to write code like this:
for x in $(seq 1 100); do  # 100 things we want to put into the background.
    max_bg_procs 5         # Define the limit. See below.
    your_intensive_job &
done
Where max_bg_procs should be put in your .bashrc:
function max_bg_procs {
    if [[ $# -eq 0 ]] ; then
        echo "Usage: max_bg_procs NUM_PROCS. Will wait until the number of background (&)"
        echo "       bash processes (as determined by 'jobs -pr') falls below NUM_PROCS"
        return
    fi
    local max_number=$((0 + ${1:-0}))
    while true; do
        local current_number=$(jobs -pr | wc -l)
        if [[ $current_number -lt $max_number ]]; then
            break
        fi
        sleep 1
    done
}
The following function (developed from tangens' answer above; either copy it into your script or source it from a file):
job_limit () {
    # Test for single positive integer input
    if (( $# == 1 )) && [[ $1 =~ ^[1-9][0-9]*$ ]]
    then
        # Check number of running jobs
        joblist=($(jobs -rp))
        while (( ${#joblist[*]} >= $1 ))
        do
            # Wait for any job to finish
            command='wait '${joblist[0]}
            for job in ${joblist[@]:1}
            do
                command+=' || wait '$job
            done
            eval $command
            joblist=($(jobs -rp))
        done
    fi
}
1) Only requires inserting a single line to limit an existing loop
while :
do
    task &
    job_limit `nproc`
done
2) Waits on completion of existing background tasks rather than polling, increasing efficiency for fast tasks
This might be good enough for most purposes, but is not optimal.
#!/bin/bash

n=0
maxjobs=10

for i in *.m4a ; do
    # ( DO SOMETHING ) &

    # limit jobs
    if (( $(($((++n)) % $maxjobs)) == 0 )) ; then
        wait # wait until all have finished (not optimal, but most times good enough)
        echo $n wait
    fi
done
If you're willing to do this outside of pure bash, you should look into a job queuing system.
For instance, there's GNU queue or PBS. And for PBS, you might want to look into Maui for configuration.
Both systems will require some configuration, but it's entirely possible to allow a specific number of jobs to run at once, only starting newly queued jobs when a running job finishes. Typically, these job queuing systems would be used on supercomputing clusters, where you would want to allocate a specific amount of memory or computing time to any given batch job; however, there's no reason you can't use one of these on a single desktop computer without regard for compute time or memory limits.
It is hard to do this without wait -n (for example, the shell in busybox does not support it). So here is a workaround; it is not optimal, because it calls the 'jobs' and 'wc' commands 10x per second. You can reduce the calls to 1x per second, for example, if you don't mind waiting a bit longer for each job to complete.
# $1 = maximum concurrent jobs
limit_jobs()
{
    while true; do
        if [ "$(jobs -p | wc -l)" -lt "$1" ]; then break; fi
        usleep 100000   # 0.1 s
    done
}
# and now start some tasks:
task &
limit_jobs 2
task &
limit_jobs 2
task &
limit_jobs 2
task &
limit_jobs 2
wait
On Linux I use this to limit the number of bash jobs to the number of available CPUs (possibly overridden by setting CPU_NUMBER).
[ "$CPU_NUMBER" ] || CPU_NUMBER="`nproc 2>/dev/null || echo 1`"
while [ "$1" ]; do
{
do something
with $1
in parallel
echo "[$# items left] $1 done"
} &
while true; do
# load the PIDs of all child processes to the array
joblist=(`jobs -p`)
if [ ${#joblist[*]} -ge "$CPU_NUMBER" ]; then
# when the job limit is reached, wait for *single* job to finish
wait -n
else
# stop checking when we're below the limit
break
fi
done
# it's great we executed zero external commands to check!
shift
done
# wait for all currently active child processes
wait
The wait command's -n option waits for the next job to terminate.
maxjobs=10

# wait until the number of background processes drops below $maxjobs
jobIds=($(jobs -p))
len=${#jobIds[@]}
while [ $len -ge $maxjobs ]; do
    # wait until one job is finished
    wait -n
    jobIds=($(jobs -p))
    len=${#jobIds[@]}
done
Have you considered starting ten long-running listener processes and communicating with them via named pipes?
You can use ulimit -u; see http://ss64.com/bash/ulimit.html.
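One caveat worth adding: ulimit -u caps the total number of processes for your user, and once the cap is hit, new forks fail rather than block, so a loop has to tolerate the failures. A hypothetical sketch:
ulimit -u 128   # cap this shell and its children at 128 user processes
# beyond the cap, "task &" fails with a fork error instead of waiting for a free slot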
Bash mostly processes files line by line. So you can split the input file into chunks of N lines, and then this simple pattern applies (note the redirection belongs on the while loop, so each chunk feeds its own batch of background jobs):
mkdir tmp ; pushd tmp ; split -l 50 ../mainfile.txt
for file in * ; do
    while read -r a b c ; do
        curl -s "http://$a/$b/$c" &
    done < "$file"
    wait   # drain this batch of up to 50 requests before the next chunk
done
popd ; rm -rf tmp
I'm new to bash scripting.
I need a script that gets the ping time in ms to an IP and, if the time is over 100, prints an echo message.
For the example, let's do it with the Google IP, 8.8.8.8.
Could you please help me?
Edit:
Okay, how do I make it like this:
#!/bin/sh
echo '>> Start ping test 2.0'
/bin/ping 8.8.8.8 | awk -F' |=' '$10=="time"'
if [$11>100]
then
    echo "Slow response"
else
    echo "Fast response"
fi
Okay... First off, you are not writing a bash script: your script is called using #!/bin/sh, so even if your system uses bash as its system shell, it's being run in sh compatibility mode, and you can't use bashisms. Write your script as I've shown below instead.
So... it seems to me that if you want your ping output to be handled by your script, then ping needs to actually EXIT. Your if will never be processed, because ping never stops running. And besides, $11 within the awk script isn't the same as $11 within the shell script. So something like this might work:
#!/bin/bash

while sleep 5; do
    t="$(ping -c 1 8.8.8.8 | sed -ne '/.*time=/{;s///;s/\..*//;p;}')"
    if [ "$t" -gt 100 ]; then
        : # do something
    else
        : # do something else
    fi
done
This while loop, in shell (or bash) will run ping every five seconds with only one packet sent (the -c 1), and parse its output using sed. The sed script works like this:
/.*time=/{...} - look for a line containing the time and run stuff in the curly braces on that line...
s/// - substitute the previously found expression (the time) with nothing (erasing it from the line)
s/\..*// - replace everything from the first period to the end of the line with nothing (since shell math only handles integers)
p - and print the remaining data from the line.
An alternate way of handling this is to parse ping's output as a stream instead of spawning a new ping process for each test. For example:
#!/bin/bash

ping -i 60 8.8.8.8 | while read -r line; do
    case "$line" in
        *time=*ms)
            t=${line##*=}   # strip off everything up to the last equals
            t=${t% *}       # strip off everything from the last space to the end
            t=${t%.*}       # drop the decimals (shell math only handles integers)
            if (( t > 100 )); then
                : # do something
            else
                : # do something else
            fi
            ;;
    esac
done
These solutions are a bit problematic in that they fail to report when connectivity goes away ENTIRELY. But perhaps you can adapt them to handle that case too.
Note that these may not be your best solution. If you really want a monitoring system, larger scale things like Nagios, Icinga, Munin, etc, are a good way to go.
For small-scale ping monitoring like this, you might also want to look at fping.
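For instance, a rough sketch (fping's -l loops forever and -p sets the probe period in milliseconds; the awk field position is an assumption about typical fping reply lines, so check it against your version's output):
fping -l -p 5000 8.8.8.8 2>/dev/null | awk '$6+0 > 100 { print "Slow response:", $0 }'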
There are a couple of transformations you'll need to apply to the ping output in order to get the actual number of milliseconds.
First, to make this simple, use the -c 1 flag for ping to only send one packet.
The output for ping will look like:
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: icmp_seq=0 ttl=59 time=41.101 ms
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 41.101/41.101/41.101/0.000 ms
Since you want the '41.101' piece, you'll need to parse out the second to last element of the second line.
To extract the second line you can use the FNR variable in awk, and to get the second to last column you can use the NF (number of fields) variable.
ping -c 1 8.8.8.8 | awk 'FNR == 2 { print $(NF-1) }'
This will give you time=41.101. To get just the number, use cut to extract the field after the equals sign:
ping -c 1 8.8.8.8 | awk 'FNR == 2 { print $(NF-1) }' | cut -d'=' -f2
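Equivalently, a small sketch doing both steps in awk alone, splitting the time=41.101 field on the equals sign:
ping -c 1 8.8.8.8 | awk 'FNR == 2 { split($(NF-1), kv, "="); print kv[2] }'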
This is what I did to get a trace of slow ping times and also have a mail sent to me (or to anyone else, if you want that).
#!/bin/bash

if [ "$#" -ne 1 ]; then
    echo "You must enter 1 command line argument - the address you want to ping"
    exit
fi

hostname=$(hostname)

while true; do
    RESULT=$(ping -c 1 $1 | awk -v time="$(date +"%Y-%m-%d %H:%M:%S")" -Ftime= 'NF>1{if ($2+0 > 1) print $1 $2 $4 $3 $5 " "time }')
    if [ "$RESULT" != "" ]; then
        echo $RESULT >> pingscript.log
        echo $RESULT | mail -s "pingAlert between $hostname - $1" foo@bar.com
    fi
    sleep 2
done
Here's a script I pulled together from different examples. It prints the date/time along with "ok", or "FAIL" if the response was slower than 100 ms.
#!/bin/bash

host=$1
if [ -z $host ]; then
    echo "Usage: `basename $0` [HOST]"
    exit 1
fi

function pingTestTime()
{
    while :; do
        test2 $1
        sleep 1
    done
}

function test2()
{
    now=`date "+%a %D %r"`
    timeinms=$(ping -c 1 $1 | grep -oP 'time=\K\S+')
    status=$?
    timeint=${timeinms%.*}
    if [ ! -z "$timeint" ]
    then
        if (( $timeint > 100 )); then
            extraText="FAIL (Slow)"
        else
            extraText="ok"
        fi
    else
        extraText="FAIL (Not Connected)"
    fi
    #echo "Status="$status
    echo $now $timeinms"ms" $extraText
}

pingTestTime $host
pingTestTime $host
I'm trying to limit the number of subshells spawned in a script I'm using to sweep our internal network and audit the Linux servers on it. The script works as intended, but because of the way I'm nesting the for loop, it spawns 255 subshells for each network, which kills the CPU once there are over 1000 processes. I need to limit the number of processes, and since variables lose their value inside a subshell, I can't figure out a way to make this work. Again, the script works; it just spawns a ton of processes. I need to limit it to, say, 10 processes max:
#!/bin/bash

FILE=/root/ats_net_final

for network in `cat $FILE`;do
    for ip in $network.{1..255};do
        (
            SYSNAME=`snmpwalk -v2c -c public -t1 -r1 $ip sysName.0 2>/dev/null | awk '{ print $NF }'`
            SYSTYPE=`snmpwalk -v2c -c public -t1 -r1 $ip sysDescr.0 2>/dev/null | grep -o Linux`
            if [ $? -eq 0 ];then
                echo "$SYSNAME"
                exit 0
            else
                echo "Processed $ip"
                exit 0
            fi
        ) &
    done
done
I found a solution that works, but not in my case: no matter what, it still spawns the processes before the limiting logic. I think maybe I've just been looking at the code too long and it's a simple logic issue; I'm placing things in the wrong area, or in the wrong order.
Answer Accepted:
I've accepted the answer from huitseeker. He showed me how the logic should work, which allowed me to get it running. Final script:
#!/bin/bash

FILE=/root/ats_net_final

for network in `cat $FILE`;do
    #for ip in $network.{1..255};do
    for ip in {1..255};do
        (
            ip=$network.$ip
            SYSNAME=`snmpwalk -v2c -c public -t1 -r1 $ip sysName.0 2>/dev/null | awk '{ print $NF }'`
            SYSTYPE=`snmpwalk -v2c -c public -t1 -r1 $ip sysDescr.0 2>/dev/null | grep -o Linux`
            if [ $? -eq 0 ];then
                echo "$SYSNAME"
                exit 0
            else
                echo "Processed $ip"
                exit 0
            fi
        ) &
        if (( $ip % 10 == 0 )); then wait; fi
    done
    wait
done
An easy way to limit the number of concurrent subshells to 10 is:
for ip in $(seq 1 255);do
    ( <whatever you did with $ip in the subshell here, do with $network.$ip instead> ) &
    if (( $ip % 10 == 0 )); then wait; fi
done
wait
The last wait keeps the subshells from the final round of the inner loop from overlapping with those created in the first round of the next outer run.
I think I have found a better solution. I implemented it using make:
#!/usr/bin/make -f

IPFILE := ipfile
IPS := $(foreach ip,$(shell cat $(IPFILE)),$(foreach sub,$(shell seq 1 1 254),$(ip).$(sub)))

all: $(IPS)

$(IPS):
	SYSNAME=`snmpwalk -v2c -c public -t1 -r1 $@ sysName.0 2>/dev/null | awk '{ print $$NF }'`; \
	SYSTYPE=`snmpwalk -v2c -c public -t1 -r1 $@ sysDescr.0 2>/dev/null | grep -o Linux`; \
	if [ $$? -eq 0 ]; then \
		echo "$$SYSNAME"; \
	else \
		echo "Processed $@"; \
	fi
The first part of each IP (e.g. 192.168.1) should be placed in ipfile. The makefile then generates all the full IP addresses into the variable IPS (like 192.168.1.1 ... 192.168.1.254 ...).
These lines can be copied to e.g. test.mak and the file given execute permission. If you run it as ./test.mak, it will process the IPs one by one. If it is run as ./test.mak -j 10, it will process 10 IPs at once. It can also be run as ./test.mak -j 10 -l 0.5: that runs at most 10 processes at a time and stops launching new ones while the system load is at or above 0.5.