Controlling output of jobs run with gnu parallel - bash

On my old machine with GNU parallel version 20121122, after I run
parallel -j 15 < jobs.txt
I see the output of the jobs as they run, which is very handy for debugging.
On my new machine with parallel version 20140122, when I execute the above-mentioned command I see nothing in the terminal.
From another SO thread I found out about the --tollef flag, which fixes the problem for me, but it is soon to be retired. How do I keep things working after the --tollef flag is retired?

--ungroup (if half-lines are allowed to mix - as they are with --tollef).
--line-buffer (if you only want complete lines printed - i.e. no half-line mixing).
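For illustration, a minimal sketch of the two modes (guarded so it is a no-op on machines where GNU parallel is not installed):

```shell
# --ungroup prints output as soon as it is produced, so half-lines from
# different jobs can mix; --line-buffer only ever interleaves whole lines.
if parallel --version 2>/dev/null | grep -q 'GNU parallel'; then
  parallel -j 2 --ungroup     'echo start {}; sleep 0.2; echo done {}' ::: 1 2 3
  parallel -j 2 --line-buffer 'echo start {}; sleep 0.2; echo done {}' ::: 1 2 3
fi
```

With --line-buffer every printed line is complete, but lines from different jobs may still alternate; only --keep-order (or the default grouping) keeps each job's output together.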

Related

How to assign several cores/threads/tasks to an application that runs in parallel and is going to be run from the command line on MacOS laptop?

I have an application that can run commands in parallel. On a cluster it uses SLURM to get the resources, and internally I assign each of the tasks I need performed to a different CPU/worker. Now I want to run this application on my laptop (macOS) from the command line. The same code (minus the SLURM part) works fine, with the only difference being that it performs just one task at a time.
I have run code in parallel in MATLAB using the commands parcluster, parfor, etc. In this code, I can get up to 16 workers to work in parallel on my laptop.
I was hoping there is a similar solution for any other application that is not MATLAB to run other code in parallel, especially to assign the resources. Then my application itself is built to manage them.
If it is of any help, I run my application from the command line as follows:
chmod +x ./bin/OpenSees
./bin/OpenSees Run.tcl
I have read about GNU parallel or even using SLURM on my laptop, but I am not sure if these are the best (or feasible) solutions.
I tried using GNU parallel:
chmod +x ./bin/OpenSees
parallel -j 4 ./bin/OpenSees ::: Run.tcl
but it still runs only one task at a time. Do you have any suggestions?
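A side note on why only one job runs: GNU parallel makes one job per argument after :::, so a single Run.tcl yields a single job no matter what -j says. A hedged sketch, using echo as a stand-in for OpenSees and hypothetical run*.tcl names:

```shell
# One job per ':::' argument; '-j 4' is only an upper limit on concurrency.
if parallel --version 2>/dev/null | grep -q 'GNU parallel'; then
  parallel -j 4 echo would run ::: Run.tcl                              # 1 argument -> 1 job
  parallel -j 4 echo would run ::: run1.tcl run2.tcl run3.tcl run4.tcl  # 4 jobs at once
fi
```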

How to run multiple, distinct Fortran scripts in parallel on RHEL 6.9

Let's say I have N Fortran executables and M cores on my machine, where N is greater than M. I want to be able to run these executables in parallel. I am using RHEL 6.9.
I have used both OpenMP and GNU Parallel in the past to run code in parallel. However, for my current purposes, neither of these two options would work: RHEL doesn't have a GNU Parallel distribution, and OpenMP parallelizes blocks within a single executable, not multiple executables.
What is the best way to run these N executables in parallel? Would a simple approach like
executable_1 & executable_2 & ... & executable_N
work?
Just because it is not in the official repository doesn't mean you cannot use GNU parallel on a RHEL system: build GNU parallel yourself or install a third-party rpm.
xargs supports parallel execution as well. Its interface is not ideal for your use case, but this should work:
echo executable_1 executable_2 ... executable_N | xargs -n1 -P8 bash -c
(-P8 means “run eight processes in parallel”.)
For more complex tasks, I sometimes write makefiles and use make -j8 to run targets in parallel.
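To directly answer the question about chaining with &: that launches all N at once, with nothing holding the running count at M. A minimal bash sketch that adds the throttle (assuming bash >= 4.3 for `wait -n`; the executable names are placeholders):

```shell
# Run each command (one per line on stdin) with at most $1 running at a time.
run_throttled() {
  local max="$1" cmd
  while IFS= read -r cmd; do
    while [ "$(jobs -pr | wc -l)" -ge "$max" ]; do
      wait -n                    # bash >= 4.3: block until any one job exits
    done
    $cmd &
  done
  wait                           # let the last jobs finish
}

# usage (names are placeholders):
#   printf '%s\n' ./executable_1 ./executable_2 ./executable_3 | run_throttled 8
```

On bash older than 4.3 you can replace `wait -n` with a short `sleep`, at the cost of some idle time between job completions.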

Run wine in parallel with gnu-parallel - needs {%} slot substitution to work

I have been running wine/DOS commands in parallel on Ubuntu with gnu-parallel. I can and have done this successfully with simple commands without problem.
However, some more complex jobs can result in interference between components within wine.
Thus, to solve this, I'd like to restrict each job to a specific named "wine prefix" instance, using {%} as described in this question. The trouble is: the {%} substitution doesn't seem to work.
I'd eventually like to be able to run something like the following
parallel -j4 'WINEPREFIX=$HOME/slot{%} wine cmd /c #echo {%} 2>/dev/null' ::: A B C D
Unfortunately, a single new wine prefix literally named slot{%} is created and used, rather than the existing slot1, slot2, slot3, and slot4 prefix directories.
Following the manual I tried:
parallel -j 2 echo {%} ::: A B C
but instead of returning something like:
1
2
1
it returns:
{%} A
{%} B
{%} C
So I don't think the problem is wine but something else: does the {%} substitution need to be enabled somehow? Perhaps it is not available in my version? Maybe I copied the example usage incorrectly? I can find no other example of this problem anywhere, but it happens every time for me.
As a weak workaround I've been applying the bash modulo operator to the {#} job-number substitution, but this is not perfect because I still get occasional slot collisions and subsequent crashes.
FYI: lsb_release -a returns
Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty
parallel --version returns
GNU parallel 20130922
Your version of parallel appears to be too old.
Do you see {%} in the documentation that shipped with your version, or just online?
The release notes for GNU Parallel 20140522 indicate:
{%} introduced as job slot replacement string. It has known bugs.
and the release notes for GNU Parallel 20140622 indicate:
{%} works as job slot.
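For comparison, a sketch of what the same test prints on a new-enough version (guarded by a version check so it is a no-op on older installations):

```shell
# On GNU parallel >= 20140622, {%} expands to the job slot number,
# which always stays between 1 and the -j limit (here 2).
v=$(parallel --version 2>/dev/null | awk '/GNU parallel/{print $3; exit}')
if [ -n "$v" ] && [ "$v" -ge 20140622 ]; then
  parallel -j 2 'echo slot {%}: {}' ::: A B C D
fi
```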

unix application benchmarking through time utility by running app in parallel

I am trying to measure the resources used by my application. I am using the time utility as advised at stackoverflow.com/questions/560089/unix-command-for-benchmarking-code-running-k-times.
I used the command time (for i in $(seq 100); do ./mycode; done) as suggested by one of the answers.
Problem: the applications run one by one, not in parallel. I need to run mycode 100/1000 times in parallel. Any suggestion for how to run the application in parallel more than once? In other words, how do I run the above command 100 times so that 100x100 instances are running at the same time?
I am also not able to pass any format switch along with the for loop.
I tried time -v (for i in $(seq 100); do ./mycode; done) for verbose output.
Note: I also tried /usr/bin/time and the complete switch --verbose.
EDIT: I changed my code per the instructions in Cyrus's reply. Simultaneous running of mycode is solved, but I am still looking for an answer to my second question: how to get verbose output from the time utility using -v or --verbose.
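On the second question, one hedged sketch: the shell's built-in time keyword accepts no -v, so invoke the GNU time binary explicitly and hand it the whole batch as a single child process. This assumes GNU time at /usr/bin/time (BSD time, e.g. on macOS, has no --verbose):

```shell
# Launch 100 copies of ./mycode in the background, wait for all of them,
# and have GNU time print verbose statistics for the whole group:
#
#   /usr/bin/time --verbose sh -c 'for i in $(seq 100); do ./mycode & done; wait'
#
# A tiny runnable demo of the same shape, using a no-op command:
if /usr/bin/time --verbose true >/dev/null 2>&1; then   # GNU time present?
  /usr/bin/time --verbose sh -c 'for i in 1 2 3; do true & done; wait'
fi
```

Note that the statistics cover the whole group of processes, not each ./mycode individually; for per-instance numbers you would need one time invocation per run.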

mpirun OpenFOAM parallel app in gdb

I'm trying to step through an OpenFOAM application (in this case, icoFoam, but this question is in general for any OpenFOAM app).
I'd like to use gdb to step through an analysis running in parallel (let's say, 2 procs).
To simply launch the app in parallel, I type:
mpirun -np 2 icoFoam -parallel
Now I want to step through it in gdb. But I'm having trouble launching icoFoam in parallel and debugging it, since I can't figure out how to set a break point before the application begins to execute.
One thing I know I could do is insert a section of code after MPI_Init that waits (an endless loop) until I change some variable in gdb. Then I'd run the app in parallel, attach a gdb session to each of those PIDs, and happily debug. But I'd rather not have to alter the OpenFOAM source and recompile.
So, how can I start the application running in parallel, somehow get it to stop (say, at the beginning of main), and then step through it in gdb? All without changing the original source code?
Kindest regards,
Madeleine.
You could try this resource
It looks like the correct command is:
mpirunDebug -np 2 xxxFoam -parallel
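If mpirunDebug is not available, another commonly used approach (described in the Open MPI debugging FAQ) is to give each rank its own terminal running gdb, which lets you set a breakpoint at main before typing run, with no source changes. A sketch, shown as comments since it needs an X display:

```shell
# Each MPI rank gets its own xterm with gdb attached (requires X):
#
#   mpirun -np 2 xterm -e gdb --args icoFoam -parallel
#
# In every window: set breakpoints (e.g. `break main`), then type `run`.
```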
