Julia Command Line Running Processes in Parallel - bash

I have a Julia script that converts csvs to a binary format. Trust me, it's great. I also have many (seemingly innumerable) csvs that I want to process. It's a shared network, and so I can only process five files at a clip without savagely burdening the CPU and making my coworkers irate and potentially unstable. Accordingly, I want to run the script in groups of five, wait for them to finish, and then run the next batch as background processes until it's Miller time, all using Julia's wonderful run() function, à la:
julia csvparse3.jl /home/file1.csv > /dev/null 2>&1 &
I'm fairly certain that I could sidestep all of this by using addprocs() and pmap() if I made my parsing script into a Julia module/function. However, the reason I'm asking this is that I don't know what I would do if my original script were written in Fortran or, even worse, Python. Is there a way for me to achieve my aforementioned goals for an arbitrary number of external programs, ascertain when the processes are finished, and start anew, all in the context of a simple loop? Many thanks.
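For reference, the batch-of-five pattern described above takes only a few lines of plain bash; a minimal sketch, reusing csvparse3.jl and the /home paths from the question:

#!/bin/bash
count=0
for f in /home/*.csv; do
julia csvparse3.jl "$f" > /dev/null 2>&1 &
count=$(( count + 1 ))
[ $(( count % 5 )) -eq 0 ] && wait # block until the current batch of five finishes
done
wait # catch the final partial batch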

With GNU Parallel you can run:
parallel -j5 julia csvparse3.jl ::: /home/*.csv > /dev/null 2>&1
GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to.
If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU. GNU Parallel instead spawns a new process whenever one finishes, keeping the CPUs active and thus saving time.
Installation
If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:
(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
For other installation options see http://git.savannah.gnu.org/cgit/parallel.git/tree/README
Learn more
See more examples: http://www.gnu.org/software/parallel/man.html
Watch the intro videos: https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1
Walk through the tutorial: http://www.gnu.org/software/parallel/parallel_tutorial.html
Sign up for the email list to get support: https://lists.gnu.org/mailman/listinfo/parallel

Related

How to utilise GNU parallel efficiently?

I have a script, say parallelise.sh, whose contents are 10 different Python calls, shown below:
python3.8 script1.py
python3.8 script2.py
.
.
.
python3.8 script10.py
Now, I use GNU parallel:
nohup parallel -j 5 < parallelise.sh &
It starts as expected: 5 different processors are being used, and the first 5 scripts, script1.py ... script5.py, are running. Now I notice that some of them (say two of them, script1.py and script2.py) complete very fast, whereas the others need more time.
Now there are unused resources (2 processors) while we wait for the remaining 3 scripts (script3.py, script4.py, and script5.py) to complete so that the next 5 can be loaded. Is there a way to use these resources by loading new scripts as existing ones complete?
For information: My OS is CentOS
As @RenaudPacalet says, there is nothing else to do: GNU Parallel already starts the next job as soon as one finishes.
So there is something in your scripts which causes this not to happen.
To help debug you can use:
parallel --lb --tag < parallelise.sh
and maybe add a "Starting X" line at the beginning of scriptX.py and a "Finishing X" line at the end of scriptX.py so you can see that the scripts are indeed finishing.
Without knowing anything about scriptX.py it is impossible to say what is causing this.
(Instead of nohup consider using tmux or screen, so you can have the jobs run in the background but still check in on them and see their output. nohup is not ideal for debugging.)
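A sketch of that tmux suggestion, with the session name batch as a placeholder:

# run the batch inside a detached tmux session instead of nohup
tmux new-session -d -s batch 'parallel -j 5 --lb --tag < parallelise.sh'
tmux attach -t batch # watch the live output; detach again with Ctrl-b d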

GNU parallel kills command in bash script on HPC

I am running GNU parallel to run a bash script, but it seems GNU parallel automatically kills my program and I am not sure why. It runs normally when I run the inner script individually.
I wonder why this happens and how to solve it?
Your help is really appreciated!
Here is my code:
parallel --progress --joblog ${home}/data/hsc-admmt/Projects/log_a.sh -j 5 :::: a.sh
Here is the message at the end of the output of GNU parallel:
/scratch/eu82/bt0689/data/hsc-admmt/Projects/sim_causal_mantel_generate.sh: line 54: 3050285 Killed $home/data/hsc-admmt/Tools/mtg2 -plink plink_all${nsamp}${nsnp}_1 -simreal snp.lst1
I just had the same problem and found this possible explanation:
“Some jobs need a lot of memory, and should only be started when there is enough memory free. Using --memfree GNU parallel can check if there is enough memory free. Additionally, GNU parallel will kill off the youngest job if the memory free falls below 50% of the size. The killed job will be put back on the queue and retried later.”
(from https://www.gnu.org/software/parallel/parallel_tutorial.html).
In my case the killed job was not resumed. I'm not sure if this is the reason for your problem, but it would explain mine, since the error only occurs when I parallelize my script across more than 3 jobs.
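If memory pressure is indeed the culprit, the tutorial's advice translates to something like the following sketch, where the 2 GB threshold and the log file name are assumptions:

# start a new job only when at least 2 GB of RAM is free
parallel -j 5 --memfree 2G --progress --joblog log_a.txt :::: a.sh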

shell script to loop and start processes in parallel?

I need a shell script that will create a loop to start parallel tasks read in from a file...
Something along the lines of:
#!/bin/bash
mylist=/home/mylist.txt
for i in $(cat "$mylist")
do
cp -rp "$i" /destination & # or some other task, one per line of $mylist
done
wait
So what I am trying to do is send a bunch of tasks to the background with the "&" for each line in $mylist and wait for them to finish before exiting.
However, there may be a lot of lines in there, so I want to control how many parallel background processes get started; I want to be able to cap it at, say, 5? 10?
Any ideas?
Thank you
Your task manager will make it seem like you can run many parallel jobs. How many you can actually run to obtain maximum efficiency depends on your processor. Overall you don't have to worry about starting too many processes, because your system will schedule them for you. If you want to limit them anyway, because the number could get absurdly high, you could use something like this (provided you execute a cp command every time):
...
while ...; do
jobs=$(pgrep -c cp) # count the running cp processes
[[ $jobs -gt 50 ]] && { sleep 100; continue; } # braces, not a subshell: continue must affect this loop
...
done
The number of running cp commands is stored in the jobs variable, and before starting a new iteration the loop checks whether there are too many already. Note that continue jumps to a new iteration, so you'd have to keep track of how many commands you have already executed. Alternatively you could use wait, as sketched below.
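With bash 4.3 or later, wait -n blocks until any single background job exits, which throttles without polling; a minimal sketch, assuming /home/mylist.txt holds one path per line:

while read -r f; do
cp -rp "$f" /destination &
[ "$(jobs -rp | wc -l)" -ge 5 ] && wait -n # 5 copies already running: wait for one to finish
done < /home/mylist.txt
wait # wait for the stragglers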
Edit:
On a side note, you can assign a specific CPU core to a process using taskset; it may come in handy when you have fewer, more complex commands.
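For instance (heavy_command is a placeholder):

taskset -c 0 heavy_command & # pin one instance to core 0
taskset -c 1 heavy_command & # and another to core 1
wait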
You are probably looking for something like this using GNU Parallel:
parallel -j10 cp -rp {} /destination :::: /home/mylist.txt
GNU Parallel is a general parallelizer; see the installation instructions and learning resources in the first answer above.

How to run a shell script in a few jobs

I have a build script, which works very slowly, especially on Solaris. I want to improve its performance by running it in multiple jobs. How can I do that?
Try GNU Parallel, it is quite easy to use:
GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. The typical input is a list of files, a list of hosts, a list of users, a list of URLs, or a list of tables. A job can also be a command that reads from a pipe. GNU parallel can then split the input and pipe it into commands in parallel.
If you use xargs and tee today you will find GNU parallel very easy to use as GNU parallel is written to have the same options as xargs. If you write loops in shell, you will find GNU parallel may be able to replace most of the loops and make them run faster by running several jobs in parallel.
GNU parallel makes sure output from the commands is the same output as you would get had you run the commands sequentially. This makes it possible to use output from GNU parallel as input for other programs.
For each line of input GNU parallel will execute command with the line as arguments. If no command is given, the line of input is executed. Several lines will be run in parallel. GNU parallel can often be used as a substitute for xargs or cat | bash.
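As a minimal illustration of that last point, with commands.txt standing in for any file that holds one command per line:

cat commands.txt | bash # runs the lines one after another
cat commands.txt | parallel # runs the same lines several at a time (output from each job kept separate)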
You mentioned that it is a build script. If you are using the command-line utility make, you can parallelize builds using make's -j<N> option:
GNU make knows how to execute several recipes at once. Normally, make will execute only one recipe at a time, waiting for it to finish before executing the next. However, the ‘-j’ or ‘--jobs’ option tells make to execute many recipes simultaneously.
Also, there is distcc which can be used with make to distribute compiling to multiple hosts:
export DISTCC_POTENTIAL_HOSTS='localhost red green blue'
cd ~/work/myproject;
make -j8 CC=distcc
GNU parallel is quite good. @Maxim - good suggestion, +1.
For a one-off, if you cannot install new software, try this for a slow command that has to run multiple times; it runs slow_command 17 times. Change things to fit your needs:
#!/bin/bash
cnt=0
while [ $cnt -lt 17 ] # loop 17 times
do
slow_command &
cnt=$(( cnt + 1 ))
[ $(( cnt % 5 )) -eq 0 ] && wait # 5 jobs at a time in parallel
done
wait # the loop leaves 2 jobs unwaited for: 17 % 5 == 2

Parallel building/deployment

I'm not sure what tags to put on this.
I have an Xcode project with multiple schemes that outputs multiple apps. I have a script archive_all.sh that sets everything up to build and deploy (to TestFlight) each app (13 at the moment) by calling archive.sh. I tried (stupid me) to do sh archive.sh & in the loop, but my laptop could hardly handle it, and I plan to have many more than 13 apps to deploy in the future.
Is there a way, preferably in shell script, to set up a queue of executables to call? My laptop could probably handle 3-4 calls to archive.sh at a time.
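For what it's worth, plain xargs can already provide such a queue via its -P option; a minimal sketch, assuming archive.sh takes the app name as its argument:

# run at most 3 archive.sh instances at a time
printf '%s\n' app1 app2 app3 app4 app5 | xargs -n 1 -P 3 sh archive.sh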
Try ppss, which supports both Linux and Mac OS X. It will auto-detect the number of cores in your CPU and execute tasks efficiently on those cores.
GNU Parallel is made for exactly this kind of job.
parallel archive.sh {} ::: app1 app2 ... app15
This will run archive.sh for each app, one job per CPU core.
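If one job per core is still too much for the laptop, the number of simultaneous jobs can be capped explicitly:

parallel -j3 archive.sh {} ::: app1 app2 ... app15 # at most 3 archives at a time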
GNU Parallel is a general parallelizer; see the installation instructions and learning resources in the first answer above.
