Using Open MPI in part of the code - parallel-processing

I am writing a Fortran 95 code (with gfortran as a compiler). In one of the subroutines, I initialize the Message Passing Interface by calling MPI_Init. I close the interface by calling MPI_Finalize in the same subroutine. Neither in the main program nor in any other subroutines do I use MPI commands.
The code is running well; however, every WRITE(*,*) "text" statement in the main program is executed twice when I run the code (I am testing the code on my laptop with two physical cores). So it seems that both cores are processing all the commands of the main program.
Is this what one should expect? What is the right way to initialize and finalize MPI?
I would like one core to process all the sequential tasks and use multicore processing only in the subroutine.

Well, this is more an extended comment than an answer...
With MPI the program is executed once for each process (in your case twice), and you have to specify which process executes which code or operates on which chunk of data. This is usually done by determining the rank of the process by calling MPI_Comm_rank(). Based on the rank and the number of processes obtained with MPI_Comm_size() you can distribute the workload.
Since you do not do that, each process is doing exactly the same work, and consequently prints the same text onto the terminal.
Here is an example to illustrate this:
program test
  use mpi
  implicit none
  integer :: myrank, size, stat

  ! Init MPI
  call MPI_Init(stat)
  ! Get process ID
  call MPI_Comm_rank(MPI_COMM_WORLD, myrank, stat)
  ! Get number of processes
  call MPI_Comm_size(MPI_COMM_WORLD, size, stat)

  ! This is only written by the first process
  if (myrank == 0) write(*,*) 'Starting output...'

  ! This is written by every process
  write(*,*) "Hello world"
  write(*,*) "I am ", myrank, "of", size

  call MPI_Finalize(stat)
end program
With the output:
$ mpirun -np 2 ./a.out
Starting output...
Hello world
Hello world
I am 1 of 2
I am 0 of 2
You can see that without the check for the rank of the process, "Hello world" is printed twice.
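The same rank/size pair is also what you use to split the sequential loop work among processes. As a language-agnostic sketch of the index arithmetic (plain Python with no MPI dependency; the helper name chunk_bounds is made up for illustration), each rank would compute its own slice of the data and loop only over that:

```python
def chunk_bounds(rank, size, n):
    """Return the half-open range [lo, hi) of items owned by `rank`
    when n items are split as evenly as possible over `size` processes."""
    base, extra = divmod(n, size)
    # The first `extra` ranks each take one item more than the rest.
    lo = rank * base + min(rank, extra)
    hi = lo + base + (1 if rank < extra else 0)
    return lo, hi

# With 10 items and 2 processes: rank 0 -> (0, 5), rank 1 -> (5, 10).
```

In the Fortran subroutine, each rank would then process only its own lo:hi range, while purely sequential parts stay guarded by `if (myrank == 0)`.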

Related

In MATLAB, how to store Unix stdout in cell array for multi-line commands?

I have to run Unix code on a remote system that does not have MATLAB. I call it like this:
% keys are shared between the computers, so no password needs to be typed in
ssh_cmd = 'ssh login@ipaddress ';
for x = 1:nfiles
    cmd = sprintf('calcPosition filename%d', x);
    % as an example: this runs a C++ program on the remote
    % computer, passing a different filename as input each time
    full_cmd = [ssh_cmd, cmd];
    [Status, Message] = system(full_cmd);
    % the stdout is a mix of strings and numbers
    % <code that parses Message and handles errors>
end
For a small test this takes about 30 seconds to run. If I set it so that
full_cmd = [ssh_cmd, cmd1; cmd2; cmdN]; % obviously not valid syntax here
% do the same thing as above, but no loop
it takes about 5 seconds, so most of the time is spent connecting to the other system. But Message is the combined stdout of all n files.
I know I can pipe the outputs to separate files, but I'd rather pass them back directly to MATLAB without additional I/O. So is there a way to get the stdout (Message) as a cell array (or a Unix equivalent) for each separate command? Then I could just loop over the cells of Message, and the remainder of the function would not need to be changed.
I thought about echoing an identifying string before each command call as a way to help parse Message, e.g.
full_cmd = [ssh_cmd; 'echo "begincommand " '; cmd1; 'echo "begincommand " '; cmd2]; % etc
But there must be something more elegant than that. Any tips?
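The sentinel approach is actually quite serviceable: once the combined output is back, splitting on the marker is one operation. A sketch in Python (the marker string and sample output are invented for illustration; the MATLAB equivalent would be strsplit(Message, marker)):

```python
def split_by_marker(combined, marker="begincommand"):
    """Split the combined stdout of several commands into one string
    per command, assuming each command was preceded by `echo "<marker>"`.

    The chunk before the first marker is empty, so it is dropped.
    """
    parts = combined.split(marker)
    return [p.strip("\n") for p in parts[1:]]

sample = "begincommand\npos 1.0 2.0\nbegincommand\npos 3.5 4.2\n"
# split_by_marker(sample) -> ["pos 1.0 2.0", "pos 3.5 4.2"]
```

The only real requirement is choosing a marker string that cannot appear in the commands' own output.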

Set timeout for command execution and output results to file

I have a basic script.sh that runs some commands. The script looks like:
(script.sh)
......
gcc -o program program.c
if [ $? -eq 0 ]; then
    echo "Compiled successfully....\n" >> out.txt
    # set a timeout for ./program execution and append the results to the file
    (gtimeout 10s ./program) 2> out.txt   # <-- NOT WORKING
......
I run this script from the terminal like this:
# go to this directory and pass every folder in it to script.sh,
# which compiles and executes the program.c inside
czar@MBP~$ for D in /Users/czar/Desktop/1/*; do sh script.sh $D; done
EDIT: The output I get in the terminal (not so important, though):
# program.c from 1st folder inside the above path
Cycle l=1: 46 46
Cycle l=1: 48 48
Cycle l=2: 250 274 250
Cycle l=1: 896 896
.........
# program.c from 2nd folder inside the above path
Cycle l=1: 46 46
Cycle l=1: 48 48
Cycle l=2: 250 274 250
Cycle l=1: 896 896
.........
The goal is to get that output into out.txt.
The output I get is almost what I want: the script executes whatever it can in those 10 seconds, but it prints the result to the terminal instead of redirecting it to out.txt.
I have tried every suggestion proposed here but no luck.
Any other ideas appreciated.
EDIT 2: SOLUTION given in the comments.
The basic approach is much simpler than the command you copied from the answer to a completely different question. What you need to do is simply redirect standard output to your file:
# Use gtimeout on systems which rename standard Gnu utilities
timeout 10s ./program >> out.txt
However, that will probably not produce all of the output generated by the program if the program is killed by gtimeout, because the output is still sitting in a buffer inside the standard library. (There is nothing special about this buffer; it's just a block of memory malloc'd by the library functions the first time data is written to the stream.) When the program is terminated, its memory is returned to the operating system; nothing will even try to ensure that standard library buffers are flushed to their respective streams.
There are three buffering modes:
Block buffered: no output is produced until the stream's buffer is full. (Usually, the stream's buffer will be around 8 KiB, but it varies from system to system.)
Line buffered: output is produced when a newline character is sent to the stream. It's also produced if the buffer fills up, but it's rare for a single line to be long enough to fill a buffer.
Unbuffered: No buffering is performed at all. Every character is immediately sent to the output.
Normally, standard output is block buffered unless it is directed to a terminal, in which case it will be line buffered. (That's not guaranteed; the various standards allow quite a lot of latitude.) Line buffering is probably what you want, unless you're in the habit of writing programs which write partial lines. (The oddly-common idiom of putting a newline at the beginning of each output line rather than at the end is a really bad idea, precisely because it defeats line-buffering.) Unbuffered output is another possibility, but it's really slow if the program produces a substantial amount of output.
You can change the buffering mode before you write any data to the stream by calling setvbuf:
/* Line buffer stdout */
setvbuf(stdout, NULL, _IOLBF, 0);
(See man setvbuf for more options.)
You can also tell the library to immediately send any buffered data by calling fflush:
fflush(stdout);
That's an effective technique if you don't want the (slight) overhead of line buffering, but you know when it is important to send data (typically, because the program is about to do some very long computation, or wait for some external event).
If you can't modify the source code, you can use the Gnu utility stdbuf to change the buffering mode before starting the program. stdbuf will not work with all programs -- for example, it won't have any effect if the program does call setvbuf -- but it is usually effective. For example, to line buffer stdout, you could do this:
timeout 10s stdbuf -oL ./program >> out.txt
# Or: gtimeout 10s gstdbuf -oL ./program >> out.txt
See man stdbuf for more information.
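The buffering behavior described above is easy to observe outside C as well, since most runtimes buffer the same way as stdio. A small Python sketch of line buffering (buffering=1 on a text file requests line buffering, analogous to _IOLBF): data written without a newline sits in the buffer and is invisible in the file until a newline triggers the flush.

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)

f = open(path, "w", buffering=1)   # line buffered, like _IOLBF
f.write("partial line")            # no newline yet: stays in the buffer
with open(path) as r:
    assert r.read() == ""          # nothing has reached the file
f.write(" done\n")                 # the newline triggers a flush
with open(path) as r:
    assert r.read() == "partial line done\n"
f.close()
os.remove(path)
```

If the writing process were killed between the two writes, "partial line" would be lost, which is exactly what happens to the gtimeout'ed program above.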

Type of the actual argument differs from the type of the dummy argument

I have some demonstration Fortran code (which is supposed to compile successfully) and I am trying to compile it with Intel Visual Fortran in VS2013, but I get an error during compilation:
error #6633: The type of the actual argument differs from the type of the dummy argument
It seems the code in line 6 has some problem, but I cannot figure it out exactly.
Aside: I can compile this code successfully from the command line, so the problem is most likely due to a Visual Studio setting, but I cannot figure out which option is responsible. Some solutions suggest disabling the Check Routine Interface option, but that seems too dangerous.
SUBROUTINE FUNARR(A, N)
  IMPLICIT NONE
  INTEGER :: N
  REAL, DIMENSION(N) :: A
  WRITE(*,*) "FUNARR VECTOR SUBSCRIPT....IN"
  WRITE(*,'(10F12.6)') A(:N)
  WRITE(*,*) "FUNARR VECTOR SUBSCRIPT....OUT"
  A(:N) = A(:N) - 100
END SUBROUTINE

PROGRAM WWW_FCODE_CN
  IMPLICIT NONE
  INTEGER, PARAMETER :: NMAX = 100
  REAL, PARAMETER :: PI = 3.14159265
  REAL, DIMENSION(NMAX) :: ARR
  INTEGER, DIMENSION(5) :: VECSUBSCP = (/1,4,6,7,5/)
  INTEGER :: NARR, I

  NARR = 10
  ARR = 1.
  ARR(:NARR) = 1.
  WRITE(*,*) "ASSIGN TOTAL"
  WRITE(*,'(10F12.6)') ARR(:NARR)
  WRITE(*,*)

  ARR(1) = -1.
  WRITE(*,*) "ASSIGN ELEMENT"
  WRITE(*,'(10F12.6)') ARR(:NARR)
  WRITE(*,*)

  ARR(2:4) = (/(SIN(PI*(I-1.)/(NARR-1)), I=2,4)/)
  WRITE(*,*) "ASSIGN SLICE"
  WRITE(*,'(10F12.6)') ARR(:NARR)
  WRITE(*,*)

  CALL FUNARR(ARR(2:4), 3)
  WRITE(*,*) "FUNARR SLICE"
  WRITE(*,'(10F12.6)') ARR(:NARR)
  WRITE(*,*)

  ARR(VECSUBSCP(:3)) = 0.5
  WRITE(*,*) "ASSIGN VECTOR SUBSCRIPT"
  WRITE(*,'(10F12.6)') ARR(:NARR)
  WRITE(*,*)

  ! ACTUAL ARGUMENT IS AN ARRAY SECTION WITH A VECTOR SUBSCRIPT:
  ! DISCONTIGUOUS, PASSED BY COPY, UNKNOWN POSITIONS, CANNOT BE INTENT(OUT)
  CALL FUNARR(ARR(VECSUBSCP(3:5)), 3)
  WRITE(*,*) "FUNARR VECTOR SUBSCRIPT"
  WRITE(*,'(10F12.6)') ARR(:NARR)
  WRITE(*,*)

  STOP
END PROGRAM WWW_FCODE_CN

How to assign a value returned from a function to a variable in GDB script?

For example, consider the following debugging session:
(gdb) break foo
Breakpoint 1 at 0x4004f1: file tst.c, line 5.
(gdb) run
Starting program: /tmp/tst
Breakpoint 1, foo () at tst.c:5
5 return ary[i++];
(gdb) finish
Run till exit from #0 foo () at tst.c:5
Value returned is $1 = 1
(gdb) cont
Continuing.
Breakpoint 1, foo () at tst.c:5
5 return ary[i++];
(gdb) finish
Run till exit from #0 foo () at tst.c:5
Value returned is $2 = 3
After executing a finish command, I get the return value assigned to a convenience variable (e.g. $1 or $2). Unfortunately, every time the command is executed, the value is assigned to a different variable. That is the problem: I cannot write a script which examines the returned value, because I don't know which variable the value was assigned to.
Why do I need that? I want to set a breakpoint at a certain function but stop program execution only if the function has returned a specific value, something like this:
break foo
commands
finish
if ($return_value != 42)
continue;
end
end
So the question is: Is there any way to examine in a script the value returned
from a function?
This isn't easy to do from the gdb CLI, and may be impossible using the traditional CLI alone, because you can't reliably use inferior control commands like finish inside a breakpoint's command list. This is a longstanding gdb issue.
However, like most automation problems in gdb, it can be solved using the Python API. Now, unfortunately, this approach requires a bit of work on your part.
Essentially what you want to do is subclass the Python FinishBreakpoint class to have it do what you want. In particular you want to write a new command that will set a regular breakpoint in some function; then when this breakpoint is hit, it will instantiate your new FinishBreakpoint class. Your class will have a stop method that will use the return_value of the finish breakpoint as you like.
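A minimal sketch of that approach follows (using gdb's Python API; it runs only inside gdb, loaded with something like `source stop_if.py` - the file name, class names, and the example function/value are invented for illustration):

```python
import gdb  # only available inside gdb's embedded Python

class StopOnReturnValue(gdb.FinishBreakpoint):
    """Fires when the enclosing function returns; stops only if the
    return value matches `wanted`."""

    def __init__(self, wanted):
        # internal=True keeps it out of `info breakpoints`
        super().__init__(internal=True)
        self.wanted = wanted

    def stop(self):
        # gdb fills in self.return_value when the function returns
        return int(self.return_value) == self.wanted

class EntryBreakpoint(gdb.Breakpoint):
    """Regular breakpoint that plants the finish breakpoint on entry."""

    def __init__(self, location, wanted):
        super().__init__(location)
        self.wanted = wanted

    def stop(self):
        StopOnReturnValue(self.wanted)
        return False  # never stop at the function entry itself

EntryBreakpoint("foo", 42)  # stop only when foo() returns 42
```

The key pieces are FinishBreakpoint's return_value attribute and the stop methods, whose boolean result tells gdb whether to actually halt; everything around them is just plumbing you can adapt.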
The first part of your question is straightforward: just use $ to access the most recent value in gdb's value history.
From GDB: Value History
The values printed are given history numbers by which you can refer to them. These are successive integers starting with one. print shows you the history number assigned to a value by printing ‘$num = ’ before the value; here num is the history number.
To refer to any previous value, use ‘$’ followed by the value's history number. The way print labels its output is designed to remind you of this. Just $ refers to the most recent value in the history, and $$ refers to the value before that. $$n refers to the nth value from the end.
But, executing commands following a finish command in a breakpoint command list may not currently be possible; see Tom Tromey's answer for a workaround.

Using `sleep` makes pipe not work

I have this oprint script:
#!/usr/bin/env ruby
amount = 100
index = 0
loop do
  index += 1
  if index % 5 == 0
    amount += 10
  end
  sleep 0.1
  $stdout.puts amount
end
If I run oprint | echo, then I don't see anything. If I comment out the sleep 0.1 inside oprint, then I see a lot of output. Does sleep break the pipe? Is there a fix?
oprint | echo really shouldn't work, because echo doesn't read from its input stream; it echoes its arguments. If you want to test a simple pipe, oprint | cat would be more appropriate.
Even then, you should add $stdout.flush after the puts when you have an infinite loop like that. Since lots of small IO calls can be a performance bottleneck, Ruby buffers its output by default - meaning it stores up lots of little output in a buffer and then writes the whole buffer all at once. Flushing the buffer manually ensures that it won't end up waiting forever to do the actual write.
Making those two changes gives me the expected output.
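The effect is easy to reproduce in any buffered runtime, not just Ruby. A sketch in Python (POSIX-only because it uses select on a pipe; the child scripts are inlined for illustration): a producer that sleeps inside an infinite loop writes only a few bytes per second, so its block buffer never fills and the reader sees nothing, while an explicitly flushing producer delivers a line immediately.

```python
import select
import subprocess
import sys

def first_line_within(child_code, timeout=2.0):
    """Spawn `child_code` with stdout on a pipe; return the first line
    if one arrives within `timeout` seconds, else None."""
    p = subprocess.Popen([sys.executable, "-c", child_code],
                         stdout=subprocess.PIPE)
    ready, _, _ = select.select([p.stdout], [], [], timeout)
    line = p.stdout.readline() if ready else None
    p.kill()
    p.wait()
    return line

# Same producer, with and without an explicit flush after each line.
loop = "import time\nwhile True:\n    print('tick'{})\n    time.sleep(0.05)"

buffered = first_line_within(loop.format(""))             # stuck in the buffer
flushed = first_line_within(loop.format(", flush=True"))  # arrives immediately
```

Here `buffered` comes back None and `flushed` comes back b"tick\n", which is exactly the difference adding $stdout.flush makes in the Ruby script.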
