How to run parallel Quick sort using MPI?

How do I run this parallel quick sort using MPI?
https://github.com/clasnake/parallel_sort/blob/master/quick_sort.cpp

Assuming you have MPI installed on the system where you want to run the program, you can first compile the C++ code with this command:
mpic++ quick_sort.cpp -o quick_sort
After compiling, an executable named "quick_sort" will be generated. You can then run that executable with the following command:
mpirun -n [number of MPI ranks to use] ./quick_sort [input data file, for example smallset.txt from the repository you linked]
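For example, assuming four ranks and the sample input file from the repository (hypothetical values; adjust to your setup), the command would look like this:
mpirun -n 4 ./quick_sort smallset.txt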

Related

PVS-Studio: No compilation units were found

I'm using PVS-Studio in a Docker image based on ubuntu:18.04 for cross-compiling a couple of files with arm-none-eabi-gcc. After running pvs-studio-analyzer trace -- .test/compile_with_gcc.sh, the strace_out file is created successfully; it is not empty and contains calls to arm-none-eabi-gcc.
However, pvs-studio-analyzer analyze complains that "No compilation units were found". I tried the --compiler arm-none-eabi-gcc switch with no success.
Any ideas?
The problem was in my approach to compilation. Instead of using a proper build system, I used a wacky shell script (surely, I thought, a build system for 3 files is overkill; a shell script won't hurt anybody). In that script I used grep to redefine one constant in the source, something like this: grep -v -i "#define[[:blank:]]\+${define_name}[[:blank:]]" ${project}/src/main/main.c | ~/opt/gcc-arm-none-eabi-8-2018-q4-major/bin/arm-none-eabi-gcc -o main.o -xc -
So the compiler didn't actually compile the proper file; it compiled the output of grep. Naturally, PVS-Studio wasn't able to analyze it.
TL;DR: Don't use shell scripts as a build system.
We have reviewed the strace_out file. It can be handled correctly by the analyzer if the source files and compilers are referenced by absolute paths in the strace_out file. We have a suggestion that might help you: you can "wrap" the build commands in calls to pvs-studio-analyzer trace -- and pvs-studio-analyzer analyze and place them inside your script (compile_with_gcc.sh). Thus, the script should start with the command:
pvs-studio-analyzer trace --
and end with the command:
pvs-studio-analyzer analyze
This way we make sure that the build and the analysis are started within the same container run. If the proposed method does not help, please describe the process of building the project and running the analyzer in more detail, command by command. Also tell us whether the container is restarted between the build (the formation of strace_out) and the analysis itself.
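A minimal sketch of what compile_with_gcc.sh could look like after wrapping (the gcc line is a placeholder; substitute your actual build commands):
#!/bin/sh
pvs-studio-analyzer trace -- arm-none-eabi-gcc -c src/main/main.c -o main.o
pvs-studio-analyzer analyze
This keeps the trace and the analysis within a single container run.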
It would also help us a lot if you ran the pvs-studio-analyzer command with the optional --dump-log flag and provided it to us. An example of a command that can be used to do this:
pvs-studio-analyzer analyze --dump-log ex.log
Also, it seems that the problem cannot be solved quickly, so it is probably more convenient to continue the conversation via the feedback form on the product website.

Calling MPI Fortran code from Python

I am trying to call MPI Fortran code from Python.
In helloworld.f90, I write:
subroutine sayhello(comm)
use mpi
!include 'mpif.h'
implicit none
integer :: comm, rank, size, ierr
call MPI_Comm_size(comm, size, ierr)
call MPI_Comm_rank(comm, rank, ierr)
print *, 'Hello, World! I am process ', rank, ' of ', size, '.'
end subroutine sayhello
And another one called hello.py:
from mpi4py import MPI
import helloworld
fcomm = MPI.COMM_WORLD.py2f()
helloworld.sayhello(fcomm)
However, on Ubuntu 18.04 with Python 3.7.4, I cannot use these commands to create the .so file:
f2py -m helloworld -h helloworld.pyf helloworld.f90
f2py -c helloworld.pyf hello.py
When I run the program with mpirun, I get:
mpirun.openmpi was unable to find the specified executable file, and therefore did not launch the job. This error was first reported for process rank 0; it may have occurred for other processes as well.
Can you help me fix this error or provide another command to run it?
My computer can run OpenMP Fortran code from Python, and I can run MPI in Fortran. However, I cannot call MPI Fortran code from Python.
Any help will be appreciated
I just want to make some comments about this issue. I created the Python module on Ubuntu 18.04; actually, it is WSL (Windows Subsystem for Linux). I ran with OpenMPI 1.10.7, which is quite old, but it is the version that I have. My tests were made using NumPy 1.16.5 and Python 2.7.15.
1) I created a Python module from your source code using the following command from the mpi4py doc page:
f2py --f90exec=mpif90 -c helloworld.f90 -m helloworld
As Gilles said, you need to link MPI, but you can use the --f90exec flag as shown above. This flag tells f2py which Fortran 90 compiler you want to use. The f2py documentation has a complete list of f2py flags that will help you create Python modules from Fortran code. After the Python module is generated, you can run your script.
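For example, a hypothetical two-rank run after the module has been built:
mpirun -np 2 python hello.py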
Another way to do it is like you did. However, I think your second command is wrong: you did not use the --f90exec flag, and you passed the name of your Python script, rather than the Fortran filename, together with the signature file when creating the module.
2) I tested what you have done. When I tried to import the module in the Python interpreter, an exception was raised. To sum up, in order to create a Python module using .pyf files, type
f2py -m helloworld -h helloworld.pyf helloworld.f90
and then
f2py --f90exec=mpif90 -c helloworld.pyf helloworld.f90
3) King said that you do not need mpi4py. However, I tested without mpi4py and it did not work, even when doing all the MPI initialization inside the Fortran subroutine. Below are the codes that I used for this test.
helloworld.f90
subroutine sayhello
use mpi
implicit none
! include 'mpif.h'
integer :: comm, rank, size, ierr, namelength
character(len=15) :: processorname
call MPI_INIT(ierr)
call MPI_Comm_size(MPI_COMM_WORLD, size, ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
call MPI_GET_PROCESSOR_NAME(processorName, namelength, ierr)
print *, 'Hello, World! I am process ',rank,' of ',size,'.'
end subroutine sayhello
hello.py
from mpi4py import MPI
import helloworld
helloworld.sayhello()
4) If you want to create a module for Python 3.x scripts, you can use similar commands; you just have to substitute f2py with f2py3 or python3 -m numpy.f2py in the commands above.
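For example, the build command from item 1 would become (assuming the same mpif90 wrapper is available):
f2py3 --f90exec=mpif90 -c helloworld.f90 -m helloworld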

How to run multiple, distinct Fortran scripts in parallel on RHEL 6.9

Let's say I have N Fortran executables and M cores on my machine, where N is greater than M. I want to be able to run these executables in parallel. I am using RHEL 6.9
I have used both OpenMP and GNU Parallel in the past to run code in parallel. However, for my current purposes, neither of these two options would work: RHEL doesn't ship a GNU Parallel package, and OpenMP parallelizes blocks within a single executable, not multiple executables.
What is the best way to run these N executables in parallel? Would a simple approach like
executable_1 & executable_2 & ... & executable_N
work?
Just because it is not part of the official repository doesn't mean you cannot use GNU Parallel on a RHEL system. Just build GNU Parallel yourself or install a third-party RPM.
xargs supports parallel execution as well. Its interface is not ideal for your use case, but this should work:
echo executable_1 executable_2 ... executable_N | xargs -n1 -P8 bash -c
(-P8 means “run eight processes in parallel”.)
For more complex tasks, I sometimes write makefiles and use make -j8 to run targets in parallel.
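A minimal sketch of such a makefile, using hypothetical executable names (run it with make -j8; make executes the independent targets in parallel):
.PHONY: all run1 run2 run3
all: run1 run2 run3
run1: ; ./executable_1
run2: ; ./executable_2
run3: ; ./executable_3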

Xeon Phi cannot execute binary file

I am trying to execute a binary file on a Xeon Phi coprocessor, and it comes back with "bash: cannot execute binary file". So I am trying to find out how to either view an error log or have it display what is happening when I tell it to execute, to see what is causing it not to work. I have already tried bash --verbose, but it didn't display any additional information. Any ideas?
You don't specify where you compiled your executable or where you tried to execute it from.
To compile a program on the host system to be executed directly on the coprocessor, you must either:
if using one of the Intel compilers, add -mmic to the compiler command line, or
if using gcc, use the cross-compilers provided with the MPSS (/usr/linux-k1om-4.7); note, however, that the gcc compiler does not take advantage of vectorization on the coprocessor.
If you want to compile directly on the coprocessor, you can install the necessary files from the additional rpm files provided for the coprocessor (found in mpss-/k1om) using the directions from the MPSS user's guide for installing additional rpm files.
To run a program on the coprocessor, if you have compiled it on the host, you must either:
copy your executable file and required libraries to the coprocessor using scp before you ssh to the coprocessor yourself to execute the code, or
use the micnativeloadex command on the host; you can find a man page for it on the host. An example of each approach is shown below.
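For example, a hypothetical session using the default hostname of the first coprocessor (mic0) and a program named myprog:
scp ./myprog mic0:/tmp/
ssh mic0 /tmp/myprog
or, staying on the host:
micnativeloadex ./myprog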
If you are writing a program using the offload model (part of the work is done using the host then some of the work is passed off to the coprocessor), you can compile on the host using the Intel compilers with no special options.
Note, however, that regardless of which method you use, any libraries to be used with an executable for the coprocessor must themselves be built for the coprocessor. The default libraries exist, but for any library you add, you need to build a version for the coprocessor in addition to any version you make for the host system.
I highly recommend the articles you will find under https://software.intel.com/en-us/articles/programming-and-compiling-for-intel-many-integrated-core-architecture. These articles are written by people who develop and/or support the various programming tools for the coprocessor and should answer most of your questions.
Update: What's below does NOT answer the OP's question - it is one possible explanation for the cannot execute binary file error, but the fact that the error message is prefixed with bash: indicates that the binary is being invoked correctly (by bash) but is not compatible with the executing platform (compiled for a different architecture) - as @Barmar has already stated in a comment.
Thus, while the following contains some (hopefully still somewhat useful) general information, it does not address the OP's problem.
One possible reason for cannot execute binary file is to mistakenly pass a binary (executable) file -- rather than a shell script (text file containing shell code) -- as an operand (filename argument) to bash.
The following demonstrates the problem:
bash printf # fails with '/usr/bin/printf: /usr/bin/printf: cannot execute binary file'
Note how the mistakenly passed binary's path prefixes the error message twice. If the first prefix says bash: instead, the cause is most likely not incorrect invocation, but an attempt to invoke an incompatible binary (compiled for a different architecture).
If you want bash to invoke a binary, you must use the -c option to pass it, which allows you to specify an entire command line; i.e., the binary plus arguments; e.g.:
bash -c '/usr/bin/printf "%s\n" "hello"' # -> 'hello'
If you pass a mere binary filename instead of a full path - e.g., -c 'program ...' - then a binary by that name must exist in one of the directories listed in the $PATH variable that bash sees, otherwise you'll get a command not found error.
If, by contrast, the binary is located in the current directory, you must prefix the filename with ./ for bash to find it; e.g. -c './program ...'

Output from fortran application not showing up in Matlab

I'm having some issues with output from a fortran application being executed from within Matlab. We use Matlab to call a number of fortran applications and to display output and results.
I'm using gfortran on OSX to build one of these programs, which does a large amount of file output and a little output to stdout to track progress. The stdout output is accomplished mainly through print * statements, but I've tried write( * , * ) as well. The program uses OpenMP, but none of the print * or write( * , * ) statements are performed within OpenMP parallel sections. Everything works fine when the program is executed from a terminal. However, when the program is executed from within Matlab, there is no output from stdout. The file output works fine, though.
Additionally, the same code, when compiled with Intel's ifort, displays its output in Matlab without issue. Unfortunately I don't have regular access to the Intel compiler.
I'm positive that the output is going to stdout (not stderr), and I've tried flushing both from within the code (call flush(6) & call flush(0)), but this doesn't seem to make a difference.
I'm not sure what could be causing this. Any thoughts?
some relevant information:
OS: OSX 10.6.8 (64bit mode)
Matlab: R2012b
gfortran: 4.7.2 (obtained via fink)
compile flags: -cpp -fopenmp -ffree-line-length-0 -fno-range-check -m64 -static-libgfortran -fconvert=little-endian -fstrict-aliasing
EDIT:
I've done some more testing, creating a simple 'hello' program:
program printTest
write (*,*) 'hello'
end program
compiled with...
gfortran test.f90 -o test
which exhibits the same behavior.
I've also tried compiling with an earlier version of gfortran (4.2.1), which produced some interesting results. It executes fine in the terminal, but in Matlab I get the following:
!./test
dyld: lazy symbol binding failed: Symbol not found: __gfortran_set_std
Referenced from: /Users/sah/Desktop/./test
Expected in: /Applications/MATLAB_R2012b.app/sys/os/maci64/libgfortran.2.dylib
dyld: Symbol not found: __gfortran_set_std
Referenced from: /Users/sah/Desktop/./test
Expected in: /Applications/MATLAB_R2012b.app/sys/os/maci64/libgfortran.2.dylib
./test: Trace/breakpoint trap
This leads me to believe it's a library issue. Using -static-libgfortran produces the same result in this case.
I believe Matlab is a single-threaded application. When you invoke a multithreaded executable, I have seen various issues with piping the output back to Matlab. Have you considered recompiling into a Fortran MEX file?
I am not sure a mex file would print to stdout any better than a standalone executable.
There are other options. One is to write (append) all your diagnostics to a file and just look at the file when you want to. Emacs, for example, can automatically revert the contents of a file every second, or whatever you set the interval to. Another option might be to convert the Fortran source into Matlab source (see f2matlab) and keep it all in Matlab.
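A minimal sketch of the append-to-file approach (hypothetical unit number and filename):
program diaglog
  implicit none
  integer :: step
  do step = 1, 3
    ! reopen in append mode so each record is flushed to disk on close
    open(unit=77, file='diag.log', position='append', action='write')
    write(77,*) 'progress: step ', step
    close(77)
  end do
end program diaglog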
According to the system function documentation
[status, result] = system('command') returns completion status to the status variable and returns the result of the command to the result variable.
[status,result] = system('command','-echo') also forces the output to the Command Window.
So you should use the '-echo' parameter in the system call to see the output directly in the Command Window:
system(['cd "', handles.indir, '"; chmod u+x ./qp.exe; ./qp.exe'], '-echo')
or you can assign the stdout to a variable:
[ret, txt] = system(['cd "', handles.indir, '"; chmod u+x ./qp.exe; ./qp.exe'])
