Batch file processing in bash with full processor occupancy - bash

Maybe a really simple question, but I don't know where to dig.
I have a list of files (random names), and I want to process each of them with some command:
processing_command $i ${i%.*}.txt
I want to speed this up by using all the processors. How can I make such a script keep 10 processors busy simultaneously (by processing 10 files at a time)? processing_command is not parallel by default. Thank you!

the trivial approach would be to use:
for i in $items
do
processing_command $i ${i%.*}.txt &
done
which will start a new (parallel) instance of processing_command for each $i (the trick is the trailing & which backgrounds the process).
the drawback is that if you have e.g. 1000 items, this will start 1000 parallel processes, which (while occupying all 10 cores) will spend much of their time context switching rather than doing the actual processing.
if you have as many (or fewer) items as cores, then this is a good and simple solution.
usually you don't want to start more processes than cores.
a simplistic approach (assuming that all items take about the same time to process) is to split the original "items" list into number_of_cores equally long lists. the following is a slightly modified version of an example taken from an article in the German Linux-Magazin:
#!/bin/bash
## number of processors
PMAX=$(ls -1d /sys/devices/system/cpu/cpu[0-9]* | wc -l)

## call processing_command sequentially on each argument:
doSequential() {
  local i
  for i in "$@"; do
    processing_command "$i" "${i%.*}.txt"
  done
}

## run PMAX parallel processes
doParallel() {
  # split the arguments into PMAX equally sized lists
  local items item currentProcess=0
  for item in "$@"; do
    items[$currentProcess]="${items[$currentProcess]} $item"
    currentProcess=$(( (currentProcess + 1) % PMAX ))
  done
  # run PMAX processes, each with the shorter list of items
  currentProcess=0
  while [ $currentProcess -lt $PMAX ]; do
    [ -n "${items[$currentProcess]}" ] &&
      eval doSequential ${items[$currentProcess]} &
    currentProcess=$((currentProcess + 1))
  done
  wait
}
doParallel $ITEMS
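for reference, the script expects ITEMS to already hold a whitespace-separated list of the input files before that final doParallel call; a hypothetical assignment (not part of the original answer, and assuming file names without spaces) would be:
ITEMS=$(ls *.dat)   # hypothetical glob; use whatever produces your list of files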

Related

parallel computing in multiple cores for data which is independently run with the program

I have a simulation program in Fortran which takes its input from a .dat file. This file has 100,000 lines, which takes really long to run. The program takes the first line, runs all the simulations, writes the result to a .out file, and then moves on to the next line. I have a computer with 16 CPUs, so how can I split my data into 16 parts and run each part separately on its own CPU? I am running on a machine with Ubuntu. Each line is totally independent of the others.
For example, my data is HeadData10000.dat, and I have a file simulation.ini with the name of the input data (in this case HeadData10000.dat) and the name of the output data. So the file simulation.ini will look like this:
HeadData10000.dat
outputdata.out
Right now I have two computers, so I split my HeadData10000.dat into two files, create a simulation.ini for each input file, and run it like this on each computer: ./simulation.exe<./simulation.ini
Assuming your list of 100,000 jobs is called "jobs.txt" and looks like this:
JobA
JobB
JobC
JobD
You could run this:
parallel 'printf "{}\n{.}.out" | ./simulation.exe' < jobs.txt
If you want to do a dry run to see what that would do without doing anything:
parallel --dry-run 'printf "{}\n{.}.out" | ./simulation.exe' < jobs.txt
Sample Output
printf "JobA\nJobA.out" | ./simulation.exe
printf "JobB\nJobB.out" | ./simulation.exe
printf "JobC\nJobC.out" | ./simulation.exe
printf "JobD\nJobD.out" | ./simulation.exe
If you have multiple servers available, look at using the -S parameter to GNU Parallel to spread the jobs across the machines. Also, look at the --eta and --bar parameters for getting progress reports.
I used printf "line1 \n line2" to generate the two lines of input in order to avoid having to create, and later delete, 100,000 files.
By default, GNU Parallel will keep 1 job per CPU core running, so there will always be 16 jobs running on your 16-core machine, but you can change that to, say, 8 if you want to with parallel -j 8. You can also specify the number of jobs to run on your second (and subsequent) machines.
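For illustration, those options could be combined like this (a sketch only; server1 and server2 are hypothetical hostnames that would need ./simulation.exe and the data available, ':' stands for the local machine, and -j 8 caps the number of concurrent jobs per machine):
parallel -j 8 --eta -S :,server1,server2 'printf "{}\n{.}.out" | ./simulation.exe' < jobs.txt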

SLURM batch array loop?

I'm somewhat bash-challenged and trying to send a large job array through SLURM on my institution's cluster. I am way over my limit (which appears to be 1000 jobs per job array) and am having to iteratively parse the list out into blocks of 1000, which is tedious:
sbatch --array=17001-18000 -p <server-name> --time=12:00:00 <my-bash-script>
How might I write a loop to do this? Each job takes about 11 minutes, so I would need to build in a pause in the loop. Otherwise, I suspect SLURM will reject the new batch job. Anyone out there know what to do? Thanks in advance!
Something like this should do what you want:
START=1
END=10000
STEP=1000
SLEEP=700 # just over 11 minutes (in seconds)
for i in $(seq $START $STEP $END) ; do
  JSTART=$i
  JEND=$(( JSTART + STEP - 1 ))
  echo "Submitting with ${JSTART} and ${JEND}"
  sbatch --array=${JSTART}-${JEND} -p <server-name> --time=12:00:00 <my-bash-script>
  sleep $SLEEP
done
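If the installed Slurm supports sbatch's --wait flag, an alternative (not part of the original answer) is to let each submission block until its whole array has finished, instead of guessing a sleep interval:
START=1
END=10000
STEP=1000
for i in $(seq $START $STEP $END) ; do
  JSTART=$i
  JEND=$(( JSTART + STEP - 1 ))
  echo "Submitting with ${JSTART} and ${JEND}"
  # --wait makes sbatch block until every task in this array has terminated
  sbatch --wait --array=${JSTART}-${JEND} -p <server-name> --time=12:00:00 <my-bash-script>
done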

bash asynchronous variable setting (dns lookup)

Let's say we have a loop that we want to run as quickly as possible. Let's say something is being done to a list of hosts inside that loop; just for the sake of argument, let's say it's a redis query. Let's say that the list of hosts may change occasionally due to hosts being added to or removed from a pool (not load balanced); however, the list is predictable (e.g., they all start with “foo” and end with two digits). So we want to rebuild it occasionally, say once every 15 minutes, with:
listOfHosts=$(dig +noall +ans foo{00..99}.domain | while read -r n rest; do printf '%s\n' ${n%.}; done)
to get the list of hosts. Let's say our loop looked something like this:
while :; do
  for i in $listOfHosts; do
    redis-cli -h $i llen something
  done
  (( $(date +%s) % (60 * 15) == 0 )) && callFunctionThatSetslistOfHosts
done
(now obviously there are some things missing, like testing whether we've already run callFunctionThatSetslistOfHosts in the current interval and only running it once, and doing something with the redis output, and maybe the list of hosts should be an array, but basically this is it.)
How can we run callFunctionThatSetslistOfHosts asynchronously so that it doesn't slow down the loop? I.e., have it running in the background, setting listOfHosts occasionally (e.g. once every 15 minutes), so that the next time the inner loop runs it gets a potentially different set of hosts to run the redis query on?
My major problem seems to be that in order to set listOfHosts in a loop, that loop has to be a subshell, and listOfHosts is local to that subshell, and setting it doesn't affect the global listOfHosts.
I may resort to pipes, but will have to poll the reader before generating a new list — not that that's terribly bad if I poll slowly, but I thought I'd present this as a problem.
Thanks.
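One common workaround (a sketch, not taken from the thread; it assumes a file-based handoff is acceptable, and the path /tmp/hosts.$$ plus the function name refreshHosts are made up for illustration) is to have a backgrounded refresher rewrite a temp file that the main loop re-reads on every pass:
# one-shot refresh into a temp file, then rename so readers never see a half-written file
refreshHosts() {
  dig +noall +ans foo{00..99}.domain |
    while read -r n rest; do printf '%s\n' "${n%.}"; done > "/tmp/hosts.$$.tmp"
  mv "/tmp/hosts.$$.tmp" "/tmp/hosts.$$"
}
refreshHosts                                      # populate the list once before the main loop starts
( while :; do sleep 900; refreshHosts; done ) &   # then keep refreshing it every 15 minutes
while :; do
  for i in $(cat "/tmp/hosts.$$"); do
    redis-cli -h "$i" llen something
  done
done
Because the refresher runs in its own process, the subshell scoping problem disappears: nothing has to export a variable back to the main loop, which simply re-reads the file.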

Keep looping until RAM is full, then pause and continue the loop once it gets free again

I have to run a command on many files, which I do with a bash loop and an ampersand at the end of each command so the files are processed in parallel. But I don't want to consume all the RAM on the server when I have 100 or so files. Is there a way to keep the loop going until RAM usage exceeds a certain threshold, pause it at that point, and continue looping over the remaining files once RAM is free again? Thanks
Yes, you can. Use the free command to determine the RAM usage. Then compare the current usage to your threshold. Wrap the condition of the loop into a function to make it a bit more readable:
ramAboveThreshold() {
  local threshold=6000000000 # threshold in bytes (6 GB)
  local used="$(free -b | awk 'NR == 2 {print $3}')" # column 3 of the "Mem:" line is the used RAM
  (( used > threshold ))
}
Inside your old loop, place another loop that waits for the RAM usage to drop under your threshold:
for i in myFiles/*; do
  while ramAboveThreshold; do
    sleep 5
  done
  myCommand "$i" &
done
free prints not only the used RAM but also the free and total RAM, so the script could be altered to use a threshold like »at least n bytes free« or even »less than 60% used«.
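For instance, the percentage variant mentioned above might look like this (a sketch; the 60% cut-off is arbitrary and the column positions assume the usual procps free output):
ramAboveThreshold() {
  local max_percent=60
  # exit status 0 (true) when used/total exceeds the cut-off
  free | awk -v max="$max_percent" 'NR == 2 { exit (100 * $3 / $2 > max) ? 0 : 1 }'
}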

fastest hashing in a unix environment?

I need to examine the output of a certain script 1000s of times on a unix platform and check if any of it has changed from before.
I've been doing this:
(script_stuff) | md5sum
and storing this value. I don't actually need "md5", just a simple hash function which I can compare against a stored value to see if it's changed. It's okay if there is an occasional false positive.
Is there anything better than md5sum that works faster and generates a fairly usable hash value? The script itself generates a few lines of text - maybe 10-20 on average, 100 or so at most.
I had a look at fast md5sum on millions of strings in bash/ubuntu - that's wonderful, but I can't compile a new program. Need a system utility... :(
Additional "background" details:
I've been asked to monitor the DNS records of a set of 1000 or so domains and immediately call certain other scripts if there has been any change. I intend to run a dig xyz +short command, hash its output, store that, and then check it against the previously stored value. Any change will trigger the other script, otherwise it just goes on. Right now we're planning on using cron for a set of these 1000, but we can think completely differently for "seriously heavy" usage - ~20,000 or so.
I have no idea what the use of such a system would be, I'm just doing this as a job for someone else...
The cksum utility calculates a non-cryptographic CRC checksum.
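In the question's terms, it is a drop-in replacement for md5sum (cksum reads stdin and prints a CRC plus a byte count that you can store and compare in the same way):
(script_stuff) | cksum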
How big is the output you're checking? A hundred lines max, per the question. I'd just save the entire original output to a file and then use cmp to see if it's changed. Given that a hash calculation has to read every byte anyway, the only way you'll get an advantage from a checksum-type calculation is if the cost of computing it is less than the cost of reading two files of that size.
And cmp won't give you any false positives or negatives :-)
pax> echo hello >qq1.txt
pax> echo goodbye >qq2.txt
pax> cp qq1.txt qq3.txt
pax> cmp qq1.txt qq2.txt >/dev/null
pax> echo $?
1
pax> cmp qq1.txt qq3.txt >/dev/null
pax> echo $?
0
Based on your question update:
I've been asked to monitor the DNS records of a set of 1000 or so domains and immediately call certain other scripts if there has been any change. I intend to run a dig xyz +short command, hash its output, store that, and then check it against the previously stored value. Any change will trigger the other script, otherwise it just goes on. Right now we're planning on using cron for a set of these 1000, but we can think completely differently for "seriously heavy" usage - ~20,000 or so.
I'm not sure you need to worry too much about the file I/O. The following script executes dig microsoft.com +short 5000 times, first with file I/O and then with output to /dev/null (switching between the two by changing which line is commented out).
#!/bin/bash
rm -rf qqtemp
mkdir qqtemp

((i = 0))
while [[ $i -ne 5000 ]] ; do
  #dig microsoft.com +short >qqtemp/microsoft.com.$i
  dig microsoft.com +short >/dev/null
  ((i = i + 1))
done
The elapsed times at 5 runs each are:
 File I/O | /dev/null
----------+-----------
   3:09   |   1:52
   2:54   |   2:33
   2:43   |   3:04
   2:49   |   2:38
   2:33   |   3:08
After removing the outliers and averaging, the results are 2:49 for the file I/O and 2:45 for the /dev/null. The time difference is four seconds for 5000 iterations, only 1/1250th of a second per item.
However, since one pass over the 5000 domains takes up to three minutes, that's the maximum time it will take to detect a problem (a minute and a half on average). If that's not acceptable, you need to move away from bash to another tool.
Given that a single dig only takes about 0.012 seconds, you should theoretically do 5000 in sixty seconds assuming your checking tool takes no time at all. You may be better off doing something like this in Perl and using an associative array to store the output from dig.
Perl's semi-compiled nature means that it will probably run substantially faster than a bash script and Perl's fancy stuff will make the job a lot easier. However, you're unlikely to get that 60-second time much lower just because that's how long it takes to run the dig commands.
