Calling Fortran MPI code from Python - parallel-processing

I am trying to call Fortran MPI code from Python.
In helloworld.f90, I write:
subroutine sayhello(comm)
use mpi
!include 'mpif.h'
implicit none
integer :: comm, rank, size, ierr
call MPI_Comm_size(comm, size, ierr)
call MPI_Comm_rank(comm, rank, ierr)
print *, 'Hello, World! I am process ',rank,' of ',size,'.'
end subroutine sayhello
And another one called hello.py:
from mpi4py import MPI
import helloworld
fcomm = MPI.COMM_WORLD.py2f()
helloworld.sayhello(fcomm)
However, on Ubuntu 18.04 with Python 3.7.4, I cannot use these commands to create the .so file:
f2py -m helloworld -h helloworld.pyf helloworld.f90
f2py -c helloworld.pyf hello.py
When I run the command, I get this error:
mpirun.openmpi was unable to find the specified executable file, and therefore
did not launch the job. This error was first reported for process rank 0;
it may have occurred for other processes as well.
Can you help me fix this error or suggest another command to run it?
I can call Fortran OpenMP code from Python on my computer, and I can run MPI programs written purely in Fortran. However, I cannot call Fortran MPI code from Python.
Any help will be appreciated.

I just want to make some comments about this issue. I created the Python module on Ubuntu 18.04 (actually WSL, the Windows Subsystem for Linux). I ran with OpenMPI 1.10.7, which is quite old, but it is the version I have. My tests were made using numpy 1.16.5 and Python 2.7.15.
1) I created a Python module from your source code with the following command, taken from the mpi4py documentation:
f2py --f90exec=mpif90 -c helloworld.f90 -m helloworld
As Gilles said, you need to link against MPI, but you can use the --f90exec flag as shown above. This flag tells f2py which Fortran 90 compiler you want to use. The f2py documentation has a complete list of f2py flags that will help you create Python modules from Fortran code. After the module is generated you can run your script.
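For example, once the module is built, the hello.py from the question can be launched under mpirun; the -np value below is just an example:
# hello.py -- launch with e.g.:  mpirun -np 2 python hello.py
from mpi4py import MPI
import helloworld                    # the f2py-generated module

# convert the mpi4py communicator to the integer handle the Fortran subroutine expects
fcomm = MPI.COMM_WORLD.py2f()
helloworld.sayhello(fcomm)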
Another way is the two-step approach you tried. However, I think your second command is wrong: it does not use the --f90exec flag, and it passes the signature file together with the name of your Python script when creating the module. You should replace the Python script with the Fortran filename.
2) I tested what you did. When I tried to import the resulting module in the Python interpreter, an exception was raised. To sum up, in order to create a Python module using .pyf files, type
f2py -m helloworld -h helloworld.pyf helloworld.f90
and then
f2py --f90exec=mpif90 -c helloworld.pyf helloworld.f90
3) King said that you do not need mpi4py. However, I tested it without mpi4py and it did not work, even when doing all the MPI initialization inside the Fortran subroutine. Below is the code I used for this test.
helloworld.f90
subroutine sayhello
use mpi
implicit none
! include 'mpif.h'
integer :: comm, rank, size, ierr, namelength
character(len=15) :: processorname
call MPI_INIT(ierr)
call MPI_Comm_size(MPI_COMM_WORLD, size, ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
call MPI_GET_PROCESSOR_NAME(processorName, namelength, ierr)
print *, 'Hello, World! I am process ',rank,' of ',size,'.'
end subroutine sayhello
hello.py
from mpi4py import MPI
import helloworld
helloworld.sayhello()
4) If you want to create a module for Python 3.x scripts, you can use similar commands; just substitute f2py with f2py3 or python3 -m numpy.f2py in the commands above.
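If you prefer to drive the build from inside Python instead of the shell, numpy also exposes f2py programmatically through numpy.f2py.compile. Here is a rough, untested sketch that assumes the same helloworld.f90 file and mpif90 wrapper as above:
# build_helloworld.py -- rough sketch of building the module via numpy.f2py
import numpy.f2py

with open("helloworld.f90") as f:
    source = f.read()

# extra_args is passed straight through to f2py, so --f90exec works here as well
numpy.f2py.compile(source,
                   modulename="helloworld",
                   extra_args="--f90exec=mpif90",
                   extension=".f90")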

Related

How to run parallel Quick sort using MPI?

How to run this parallel quick sort using openMP
https://github.com/clasnake/parallel_sort/blob/master/quick_sort.cpp
https://github.com/clasnake/parallel_sort/blob/master/quick_sort
Assuming you have MPI installed on the system where you are trying to run your program, you can first compile the cpp code using this command:
mpic++ quick_sort.cpp -o quick_sort
After you compile, an executable file named "quick_sort" will be generated. You can then run that executable with the following command:
mpirun -n [number of MPI ranks you want to use] quick_sort [input data, for example smallset.txt in the folder you shared]

Why won't an external executable run without manual input from the terminal command line?

I am currently writing a Python script that will pipe some RNA sequences (strings) into a UNIX executable, which, after processing them, will then send the output back into my Python script for further processing. I am doing this with the subprocess module.
However, in order for the executable to run, it must also be given some additional arguments. Using the subprocess module, I have been trying to run:
import subprocess
seq = "acgtgagtag"
output = subprocess.Popen(["./DNAanalyzer", seq])
Despite my environment variables being set properly, the executables running without problem from the terminal command line, and the subprocess module functioning normally (e.g. subprocess.Popen(["ls"]) works just fine), the Unix executable always prints the same output:
Failed to open input file acgtgagtag.in
Requesting input manually.
There are a few other Unix executables in this package, and all of them behave the same way. I even tried creating a simple text file containing the sequence and specifying it as the input, both in the Python script and on the command line, but the executables only accept manual input.
I have looked through the package's manual, but it does not mention why the executables can ostensibly only be run through the command line. Because I have limited experience with this module (and Python in general), can anybody indicate what the best approach to this problem would be?
Popen() is actually a constructor for an object that represents the child process running the executable. But because I didn't set a standard input or output (stdin and stdout), they default to None, so the child simply inherits the parent's standard streams instead of being connected to my script through pipes.
What I should have done is pass subprocess.PIPE to signify to the Popen object that I want to pipe input and output between my program and the process.
Additionally, the environment variables of the script (in the main shell) were not the same as the environment variables of the subprocess, and these specific executables needed certain environment variables in order to function (in this case, the path to the parameter files in the package). Both of these were done in the following fashion:
import subprocess as sb

seq = "acgtgagtag"
# the dictionary key must be a string; the package uses this path to find its parameter files
my_env = {"BIOPACKAGEPATH": "/Users/Bobmcbobson/Documents/Biopackage/"}
p = sb.Popen(['biopackage/bin/DNAanalyzer'], stdin=sb.PIPE, stdout=sb.PIPE, env=my_env)
strb = (seq + '\n').encode('utf-8')
data = p.communicate(input=strb)
After creating the Popen object, we send it a formatted input string using communicate(). The output can now be read and processed further in whatever way the script needs.
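For completeness, here is a minimal self-contained sketch of reading the output back, using the same hypothetical DNAanalyzer path and BIOPACKAGEPATH variable as above. communicate() returns a (stdout, stderr) tuple of bytes, so the result has to be decoded before further processing:
import subprocess as sb

seq = "acgtgagtag"
my_env = {"BIOPACKAGEPATH": "/Users/Bobmcbobson/Documents/Biopackage/"}

p = sb.Popen(['biopackage/bin/DNAanalyzer'], stdin=sb.PIPE, stdout=sb.PIPE, env=my_env)
# stderr is not piped here, so the second element of the returned tuple is None
out_bytes, _ = p.communicate(input=(seq + '\n').encode('utf-8'))
result = out_bytes.decode('utf-8')   # bytes -> str, ready for further processing
print(result)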

LLDB: How to define a function with arguments in .lldbinit?

I would like to write a helper function that is available in my LLDB session. (I am not talking about Python here.)
This function will invoke methods on the current program's variables and then pass them to a Python script.
I think I understand how to write a Python script, but I am still not sure how to write an lldb script that interacts with my program.
For a general intro on how to use the lldb Python module to interact with your program, see:
https://lldb.llvm.org/use/python-reference.html
That will show you some different ways you can use Python in lldb, and particularly how to make Python based commands and load them into the lldb command interpreter.
There are a variety of example scripts that you can look at here:
https://github.com/llvm/llvm-project/tree/main/lldb/examples/python
There's an on-line version of the Python API help here:
https://lldb.llvm.org/python_api.html
and you can access the same information from within lldb by doing:
(lldb) script
Python Interactive Interpreter. To exit, type 'quit()', 'exit()' or Ctrl-D.
>>> help(lldb)
Help on package lldb:
NAME
lldb
FILE
/Applications/Xcode.app/Contents/SharedFrameworks/LLDB.framework/Resources/Python/lldb/__init__.py
DESCRIPTION
...
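As a concrete illustration of a Python-based command that takes arguments, here is a minimal sketch; the file name hello_cmd.py and the command name sayhello are placeholders I made up:
# hello_cmd.py -- minimal lldb command script; the names are placeholders
import lldb

def sayhello(debugger, command, result, internal_dict):
    """Echo the command's arguments back to the lldb console."""
    result.AppendMessage("hello, arguments were: " + command)

def __lldb_init_module(debugger, internal_dict):
    # register the function above as the lldb command 'sayhello'
    debugger.HandleCommand('command script add -f hello_cmd.sayhello sayhello')
Loading it from your .lldbinit is then a single line, command script import /path/to/hello_cmd.py, after which sayhello some arguments is available in the session and the command string can be parsed however you like.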

How to invoke octave script in unix shell

I have written an Octave script file (.m).
If anyone could point me to how to run Octave scripts from the Unix shell, that would be really helpful. I do not want to execute the script by explicitly invoking the octave program.
I am new to Unix and Octave.
Thanks in advance.
Yes, of course you can write an Octave program. Like so:
$ cat octave_program
#!/usr/bin/env octave
## Never forget your licence at the top of the files.
1;
function [rv] = main (argv)
disp ("hello world");
rv = 0;
return;
endfunction
main (argv);
$ chmod a+x octave_program # add executable permissions
$ ./octave_program
hello world
There are a couple of things that are important for an Octave program:
the first statement cannot be a function definition. In all my programs, the first statements load the necessary packages; if you don't need any packages, it is common to use 1;
a shebang line. That is the first line of your program, and it tells the shell how to run it. If you know where Octave will be installed, you can use #!/usr/bin/octave, but #!/usr/bin/env octave is more portable and flexible.
your program needs executable permissions

Output from fortran application not showing up in Matlab

I'm having some issues with output from a Fortran application being executed from within Matlab. We use Matlab to call a number of Fortran applications and to display output and results.
I'm using gfortran on OSX to build one of these programs, which does a large amount of file output and a little output to stdout to track progress. The stdout output is produced mainly through print * statements, but I've tried write( * , * ) as well. The program uses OpenMP, but none of the print * or write( * , * ) statements are executed within OpenMP parallel sections.
Everything works fine when the program is executed from a terminal. However, when the program is executed from within Matlab, there is no output to stdout. The file output works fine though.
Additionally, the same code, when compiled with Intel's ifort, displays its output in Matlab without issue. Unfortunately I don't have regular access to the Intel compiler.
I'm positive that the output is going to stdout (not stderr), and I've tried flushing both from within the code (call flush(6) & call flush(0)), but this doesn't seem to make a difference.
I'm not sure what could be causing this. Any thoughts?
Some relevant information:
OS: OSX 10.6.8 (64bit mode)
Matlab: R2012b
gfortran: 4.7.2 (obtained via fink)
compile flags: -cpp -fopenmp -ffree-line-length-0 -fno-range-check -m64 -static-libgfortran -fconvert=little-endian -fstrict-aliasing
EDIT:
I've done some more testing, creating a simple 'hello' program:
program printTest
write (*,*) 'hello'
end program
compiled with...
gfortran test.f90 -o test
which exhibits the same behavior.
I've also tried compiling with an earlier version of gfortran (4.2.1), which produced some interesting results. It executes fine in a terminal, but in Matlab I get the following:
!./test
dyld: lazy symbol binding failed: Symbol not found: __gfortran_set_std
Referenced from: /Users/sah/Desktop/./test
Expected in: /Applications/MATLAB_R2012b.app/sys/os/maci64/libgfortran.2.dylib
dyld: Symbol not found: __gfortran_set_std
Referenced from: /Users/sah/Desktop/./test
Expected in: /Applications/MATLAB_R2012b.app/sys/os/maci64/libgfortran.2.dylib
./test: Trace/breakpoint trap
This leads me to believe it's a library issue. Using -static-libgfortran produces the same result in this case.
I believe Matlab is a single-threaded application. When you invoke a multithreaded executable, I have seen various issues with piping the output back to Matlab. Have you considered recompiling it into a Fortran MEX file?
I am not sure a MEX file would print to stdout any better than a standalone executable.
There are other options. One is to write (append) all your diagnostics to a file and just look at the file when you want to. Emacs, for example, can automatically revert the contents of a file every second, or at whatever interval you set. Another option might be to convert the Fortran source into Matlab source (see f2matlab) and keep everything in Matlab.
According to the system function documentation:
[status, result] = system('command') returns completion status to the status variable and returns the result of the command to the result variable.
[status,result] = system('command','-echo') also forces the output to the Command Window.
So you should pass the '-echo' parameter to the system call to see the output directly in the Command Window:
system(['cd "',handles.indir,'";chmod u+x ./qp.exe', ...
    ';./qp.exe'], '-echo')
or you can assign the stdout to a variable:
[ret, txt] = system(['cd "',handles.indir,'";chmod u+x ./qp.exe', ...
    ';./qp.exe'])
