Parallel execution of a command on a few boxes over ssh (using bash) - bash

What would be the cleanest way to execute the same command remotely on several boxes, with the joint output shown in one console?
For instance, I would like to tail logs from several boxes all together in my console as one combined output.

GNU parallel is definitely a nice tool for parallelizing things in the shell, and it also has reasonable remote execution capabilities.
It can be as easy as:
parallel -S $SERVER1 -S $SERVER2 echo ::: running on more hosts
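For the log-tailing use case in the question, a minimal sketch might look like this (assuming passwordless ssh; the log path is a placeholder). --nonall runs the same command once on every -S host, --tag prefixes each output line with the host it came from, and --line-buffer passes output through line by line so the tails interleave live:
# tail the same (hypothetical) log file on both servers, merged into one console
parallel --nonall --tag --line-buffer -S "$SERVER1,$SERVER2" tail -f /var/log/app.log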

Related

Running bash scripts in parallel

I would like to run commands in parallel, so that if one fails, the whole job exits as a failure. More specifically, I have 5 aws sync commands that currently run sequentially; I would like them to run in parallel so that if any one of them fails, the whole job fails. How can I do that?
GNU Parallel is a really handy and powerful tool that works with anything you can run from bash:
http://www.gnu.org/software/parallel/
https://www.youtube.com/watch?v=OpaiGYxkSuQ
# run lines from a file, 8 at a time
cat commands.txt | parallel --eta -j 8 "{}"
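For the "fail the whole job if one sync fails" requirement, a sketch with a recent GNU Parallel might look like this; --halt now,fail=1 kills the remaining jobs and makes parallel exit non-zero as soon as the first job fails (the bucket and directory names below are placeholders):
# run the 5 syncs in parallel; abort everything on the first failure
parallel -j 5 --halt now,fail=1 aws s3 sync {} s3://my-bucket/{} ::: dir1 dir2 dir3 dir4 dir5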

shell script to loop and start processes in parallel?

I need a shell script that will loop over a file and start the tasks it reads as parallel background jobs...
Something along the lines of:
#!/bin/bash
mylist=/home/mylist.txt
for i in $(cat "$mylist")
do
    cp -rp "$i" /destination &   # or something like this
done
wait
So what I am trying to do is send a bunch of tasks into the background with the "&" for each line in $mylist and wait for them to finish before exiting.
However, there may be a lot of lines in there, so I want to control how many parallel background processes get started; I want to be able to cap it at, say, 5 or 10.
Any ideas?
Thank you
Your operating system's scheduler will happily let you start many parallel jobs; how many can actually run efficiently at once depends on your processor. In general you don't have to worry about starting too many processes, because the system will schedule them for you. If you want to limit them anyway, because the number could get absurdly high, you could use something like this (provided you execute a cp command every time):
...
while ...; do
    jobs=$(pgrep -x cp | wc -l)                       # number of running cp processes
    [[ $jobs -gt 50 ]] && { sleep 100; continue; }    # too many already: pause, then re-check
    ...
done
The number of running cp commands is stored in the jobs variable, and before starting a new iteration the loop checks whether there are already too many. Note that we jump to a new iteration, so you'd have to keep track of how many commands you have already executed. Alternatively you could use wait.
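A minimal sketch of that wait-based alternative, assuming bash 4.3 or newer for wait -n (the list file and destination are the ones from the question):
#!/bin/bash
max=5
while read -r file; do
    cp -rp "$file" /destination &
    while (( $(jobs -rp | wc -l) >= max )); do
        wait -n                  # block until any one background copy finishes
    done
done < /home/mylist.txt
wait                             # wait for the copies still running at the end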
Edit:
On a side note, you can pin a process to a specific CPU core using taskset; it may come in handy when you have fewer, more complex commands.
You are probably looking for something like this using GNU Parallel:
parallel -j10 cp -rp {} /destination :::: /home/mylist.txt
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new process when one finishes, keeping the CPUs active and thus saving time.
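As a purely hypothetical illustration, with 32 small job scripts and at most 4 running at any time:
# as soon as one job finishes the next one is started, so no CPU sits idle
# waiting for a whole batch of 8 to complete (the job_*.sh names are made up)
parallel -j 4 bash ::: job_{01..32}.sh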
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel

Run script in multiple machines in parallel

I am interested to know the best way to start a script in the background on multiple machines as fast as possible. Currently, I'm doing this for each IP address:
ssh user@ip -t "perl ~/setup.pl >& ~/log &" &
But this is slow, as it SSHes into the machines one by one to start setup.pl in the background on each of them, and I have a large number of machines to start this script on.
I tried using GNU parallel, but couldn't get it to work properly:
seq COUNT | parallel -j 1 -u -S ip1,ip2,... perl ~/setup.pl >& ~/log
But it doesn't seem to work: I can see the script started by GNU parallel on the target machine, but it just sits there, and I don't see anything in the log.
What am I doing wrong in using the GNU parallel?
GNU Parallel assumes by default that it does not matter which machine a job runs on, which is normally true for computations. In your case it matters greatly: you want one job on each of the machines. Also, GNU Parallel will pass a number as an argument to setup.pl, and you clearly do not want that.
Luckily GNU Parallel does support what you want using --nonall:
http://www.gnu.org/software/parallel/man.html#example__running_the_same_command_on_remote_computers
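For the case in the question, a sketch might look like this (the IPs are placeholders; the script and log path are taken from the question). With --nonall the command is run exactly once on every -S host, and all hosts are contacted in parallel, so the remote backgrounding trick is no longer needed:
# start setup.pl once on each host; each host writes its own ~/log, as before
parallel --nonall -S ip1,ip2,ip3 'perl ~/setup.pl >& ~/log'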
I encourage you to read and understand the rest of the examples, too.
I recommend that you use pdsh
It allows you to run the same command on multiple machines
Usage:
pdsh -w machine1,machine2,...,machineN <command>
It might not be included in your Linux distribution, so install it through yum or apt.
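For example, a hypothetical run against a few machines (pdsh prefixes every output line with the name of the host it came from):
# run the same command on three machines at once; the host names are placeholders
pdsh -w ip1,ip2,ip3 uptime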
Try wrapping ssh user@ip -t "perl ~/setup.pl >& ~/log &" & in a shell script, and run ./myscript.sh & for each IP address.
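A sketch of that idea (the argument handling and the ips.txt host list are assumptions; the ssh line is the one from the question):
#!/bin/bash
# myscript.sh - start setup.pl on the host given as $1 and return immediately
# usage sketch: for ip in $(cat ips.txt); do ./myscript.sh "$ip" & done; wait
ssh "user@$1" -t "perl ~/setup.pl >& ~/log &" &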

How to run a shell script in a few parallel jobs

I have a build script which runs very slowly, especially on Solaris. I want to improve its performance by splitting it into multiple parallel jobs. How can I do that?
Try GNU Parallel, it is quite easy to use:
GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU parallel very easy to use as GNU parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel.
GNU parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU parallel as input for other programs.
For each line of input GNU parallel will execute command with the line as arguments. If no command is given, the line of input is executed. Several lines will be run in parallel. GNU parallel can often be used as a substitute for xargs or cat | bash.
You mentioned that it is a build script. If you are using the command-line utility make, you can parallelize builds using make's -j<N> option:
GNU make knows how to execute several recipes at once. Normally, make will execute only one recipe at a time, waiting for it to finish before executing the next. However, the ‘-j’ or ‘--jobs’ option tells make to execute many recipes simultaneously.
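For example (the job count is a guess; pick one that matches the machine's core count):
# let make run up to 8 recipes at once
make -j8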
Also, there is distcc which can be used with make to distribute compiling to multiple hosts:
export DISTCC_POTENTIAL_HOSTS='localhost red green blue'
cd ~/work/myproject;
make -j8 CC=distcc
GNU parallel is quite good. @Maxim - good suggestion, +1.
For a one-off, if you cannot install new software, try something like this for a slow command that has to run multiple times; the example below runs slow_command 17 times. Change things to fit your needs:
#!/bin/bash
cnt=0
while [ $cnt -lt 17 ]                  # loop 17 times
do
    slow_command &
    cnt=$(( cnt + 1 ))
    [ $(( cnt % 5 )) -eq 0 ] && wait   # run 5 jobs at a time in parallel
done
wait # you will have 2 jobs you did not wait for in the loop: 17 % 5 == 2

How can I implement MapReduce using shell commands?

How do you execute a Unix shell command (e.g. an awk one-liner) on a cluster in parallel (step 1) and collect the results back to a central node (step 2)?
Update: I've just found http://blog.last.fm/2009/04/06/mapreduce-bash-script
It seems to do exactly what I need.
If all you're trying to do is fire off a bunch of remote commands, you could just use perl. You can "open" an ssh command and pipe the results back to perl. (You of course need to set up keys to allow password-less access.)
open (REMOTE, 'ssh user@hostB "myScript" |');   # single quotes so perl does not interpolate @hostB
while (<REMOTE>)
{
    print $_;
}
You'd want to craft a loop with your machine names, and fire off one for each. After that just do non-blocking reads on the filehandles to pull back the data as it becomes available.
parallel can be installed on your central node and can be used to run a command across multiple machines.
In the example below, multiple ssh connections are used to run commands on the remote hosts (-j is the number of jobs to run at the same time on the central node). The result can then be piped to commands that perform the "reduce" stage (sort then uniq in this example).
parallel -j 50 ssh {} "ls" ::: host1 host2 hostn | sort | uniq -c
This example assumes "keyless ssh login" has been set up between the central node and all machines in the cluster.
It can be tricky to escape characters correctly when running commands more complex than "ls" remotely; you sometimes have to escape the escape character. You mention bashreduce; it may simplify this.
