MPI-IO MPI_INTEGER and MPI_DOUBLE_PRECISION give different results - multiprocessing

The problem below is to write a 4 * 4 buff array using MPI-IO. I use 4 cores, so each subarray should be 2 * 2. The problem is that when I declare buff as integer(2,2) and change all the MPI_DOUBLE_PRECISION to MPI_INTEGER, the code works well. However, MPI_DOUBLE_PRECISION gives wrong results. It is strange because I don't think there is a mistake in how I set the buff array.
Integer results::
0000000 1 1 1 1
0000016 1 1 1 1
0000032 1 1 1 1
0000048 1 1 1 1
0000064
Double Precision results::
0000000 0 1072693248 0 1072693248
0000016 0 1072693248 0 1072693248
0000032 0 1072693248 0 1072693248
0000048 0 1072693248 0 1072693248
0000064 0 1072693248 0 1072693248
0000080 0 1072693248 0 1072693248
0000096 0 1072693248 0 1072693248
0000112 0 1072693248 0 1072693248
0000128
This is the code:
program test
  use mpi
  implicit none
  integer::rank,nproc,ierr,buffsize,status(MPI_STATUS_SIZE),intsize,i,j,filetype,cart_comm,count
  integer::fh
  integer(kind=mpi_offset_kind):: offset=0
  double precision,dimension(2,2)::buff
  character:: filename*50
  integer::sizes(2)
  integer::gsize(2)
  integer::start(2)
  integer::subsize(2)
  integer::coords(2)
  integer:: nprocs_cart(2)=(/2,2/)
  logical::periods(2)
  character:: name*50,para*100,zone*100

  gsize=(/4,4/)
  subsize=(/2,2/)
  offset=0
  buff=1.d0
  count=1

  call MPI_init(ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
  CALL MPI_Dims_create(nproc, 2, nprocs_cart, ierr)
  CALL MPI_Cart_create(MPI_COMM_WORLD, 2, nprocs_cart, periods, .TRUE., &
       cart_comm, ierr)
  CALL MPI_Comm_rank(cart_comm, rank, ierr)
  CALL MPI_Cart_coords(cart_comm, rank, 2, coords, ierr)
  start=coords*2

  call MPI_TYPE_CREATE_SUBARRAY(2,gsize,subsize,start,MPI_ORDER_FORTRAN,&
       MPI_DOUBLE_PRECISION,filetype,ierr)
  call MPI_TYPE_COMMIT(filetype,ierr)

  If( rank == 0 ) Then
     Call mpi_file_delete( 'out.dat', MPI_INFO_NULL, ierr )
  End If
  Call mpi_barrier( mpi_comm_world, ierr )

  call MPI_File_open(MPI_COMM_WORLD,'out.dat',&
       MPI_MODE_WRONLY + MPI_MODE_CREATE, MPI_INFO_NULL, fh,ierr)
  call MPI_File_set_view(fh,offset,MPI_DOUBLE_PRECISION,filetype,&
       "native",MPI_INFO_NULL,ierr)
  CALL MPI_FILE_WRITE_all(fh, buff,4, MPI_DOUBLE_PRECISION, MPI_STATUS_ignore, ierr)
  call MPI_File_close(fh,ierr)
  call MPI_FINALIZE(ierr)
end program test

Related

Segfault when passing data using MPI Windows (MPI_Put)

I am trying to figure out how to use MPI to work with matrices.
I have a 3x6 matrix filled with zeros and am running the code with 3 processes. Rank 0 is the main one, rank 1 writes ones into columns 1 to 3 of the first row, and rank 2 writes twos into columns 4 to 6 of the second row.
When I pass these filled parts to the main process (rank 0), I get the correct result, but after that a memory error is printed to the console.
I can't figure out what I'm doing wrong. Can you please tell me what is my mistake?
program test
  Use mpi
  Implicit None
  integer :: process_Rank, size_Of_Cluster, ierror = 0, win, size_s, n = 6
  integer:: i , j
  integer:: start, target_count = 9
  integer :: mtx(3,6)
  integer(kind = MPI_ADDRESS_KIND) :: nbytes = 4

  !input matrix
  do i = 1,3
    do j =1,6
      mtx(i,j) = 0
    end do
  end do

  Call mpi_sizeof( mtx, size_s, ierror ) !Get the size of a matrix element
  call MPI_INIT(ierror)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, size_Of_Cluster, ierror)
  call MPI_COMM_RANK(MPI_COMM_WORLD, process_Rank, ierror)

  !create windows
  if(process_Rank == 0) then
    call MPI_WIN_CREATE(mtx, size_s *6 * 3 * nbytes, 1, MPI_INFO_NULL, MPI_COMM_WORLD, win, ierror)
  else
    call MPI_WIN_CREATE(mtx, size_s * 6* 3*nbytes,1, MPI_INFO_NULL, MPI_COMM_WORLD, win, ierror)
  end if
  CALL MPI_Win_fence(0,win,ierror)

  if(process_Rank == 1) then
    !fill 3 columns of the first row with ones
    start = 0
    do i = 0,3
      mtx(process_Rank,i+start) = process_Rank
    end do
    CALL MPI_PUT(mtx, size_s*3*6, MPI_INTEGER, 0, start * nbytes, target_count, MPI_INTEGER, win, ierror)
    !print mtx
    print *, process_Rank, ' put = '
    do i = 1,3
      print *, ''
      do j = 1,3
        write(*,fmt='(g0)', advance = 'no') mtx(i,j)
        write(*,fmt='(g0)', advance = 'no') ' '
      end do
    end do
  end if
  CALL MPI_Win_fence(0, win,ierror)

  if(process_Rank == 2) then
    !fill the last 3 columns of the second row with twos
    start = 3
    do i = 1,3
      mtx(process_Rank,i+start) = process_Rank
    end do
    CALL MPI_PUT(mtx(1:3,4:6), size_s* 3 *6, MPI_INTEGER, 0, 3 * 3 * nbytes, target_count, MPI_INTEGER, win, ierror)
    !print mtx
    print *, process_Rank, ' put = '
    do i = 1,3
      print *, ''
      do j = 4,6
        write(*,fmt='(g0)', advance = 'no') mtx(i,j)
        write(*,fmt='(g0)', advance = 'no') ' '
      end do
    end do
  end if
  CALL MPI_Win_fence(0, win,ierror)

  ! print result
  if(process_Rank == 0) then
    print *, 'result = '
    do i = 1,3
      print *, ''
      do j = 1,6
        write(*,fmt='(g0)', advance = 'no') mtx(i,j)
        write(*,fmt='(g0)', advance = 'no') ' '
      end do
    end do
  end if
  CALL MPI_Win_fence(0, win,ierror)
  CALL MPI_WIN_FREE(win, ierror)
  call MPI_FINALIZE(ierror)
end program test
Console:
1 put =
1 1 1
0 0 0
0 0 0
2 put =
0 0 0
2 2 2
0 0 0
result =
1 1 1 0 0 0
0 0 0 2 2 2
0 0 0 0 0 0
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x7fd4447bcd01 in ???
#1 0x7fd4447bbed5 in ???
#2 0x7fd4445f020f in ???
--------------------------------------------------------------------------
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun noticed that process rank 0 with PID 0 on node alm-VirtualBox exited on signal 11 (Segmentation fault).
--------------------------------------------------------------------------
If you compile with -fcheck=all, as Ian Bush suggested in the first comment under your question, you will get the reason for the error immediately, without having to wait many hours for feedback on the internet. I got:
At line 38 of file mpi_wins.f90 Fortran runtime error:
Index '0' of dimension 2 of array 'mtx' below lower bound of 1
Error termination. Backtrace:
#0 0x7f7ed3e75640 in ???
#1 0x7f7ed3e76185 in ???
#2 0x7f7ed3e7652a in ???
#3 0x4010e4 in test
at /home/lada/f/testy/stackoverflow/mpi_wins.f90:38
#4 0x401e78 in main
at /home/lada/f/testy/stackoverflow/mpi_wins.f90:3
You are indexing your mtx array using the process rank, but the array is defined to start from 1.
integer :: mtx(3,6)
However, MPI ranks start from 0, not from 1.
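For illustration, here is a minimal sketch of the fill loop with valid bounds (my assumption being that rank 1 is meant to write ones into columns 1-3 of row 1, as described in the question):
if(process_Rank == 1) then
  start = 0
  do i = 1, 3                              ! columns 1..3 stay within the declared bounds of mtx(3,6)
    mtx(process_Rank, i+start) = process_Rank
  end do
end if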
Also notice that the backtrace now contains a better code location thanks to the -g compiler option.
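For reference, a compile line with both flags might look like this (assuming an mpif90 wrapper around gfortran, since -g and -fcheck=all are gfortran options):
mpif90 -g -fcheck=all -o mpi_wins mpi_wins.f90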

Eigen Dynamic-sized matrix verticaly/horizontaly Linspaced?

I'm currently working on a project where I need to be able to create matrices such as:
MatrixXi lin_spaced_horizontaly =
0 1 2 3 4 ... ncols
0 1 2 3 4 ... ncols
0 1 2 3 4 ... ncols
MatrixXi lin_spaced_verticaly =
0 0 0
1 1 1
2 2 2
3 3 3
4 4 4
. . .
nrows nrows nrows
Currently I am trying things like this:
Eigen::VectorXi v_lin_vec = Eigen::VectorXi::LinSpaced(nrows_, 0, nrows_).transpose();
Eigen::MatrixXi v_lin_matrix (nrows_, ncols_);
for (auto i = 0; i < ncols_; i++)
    v_lin_matrix << v_lin_vec;

Eigen::VectorXi h_lin_vec = Eigen::VectorXi::LinSpaced(ncols_, 0, ncols_);
Eigen::MatrixXi h_lin_matrix (nrows_, ncols_);
for (auto i = 0; i < ncols_; i++)
    h_lin_matrix << h_lin_vec;
And I am getting results such as:
v_lin_matrix
-------------
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
4 0 0 0 0
h_lin_matrix
-------------
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
3 0 0 0 0
4 0 0 0 0
Thanks in advance !
You can use .rowwise().replicate(ncols_) and .colwise().replicate(nrows_) like so:
Eigen::MatrixXi v_lin_matrix = Eigen::VectorXi::LinSpaced(nrows_, 0, nrows_)
.rowwise().replicate(ncols_);
Eigen::MatrixXi h_lin_matrix = Eigen::RowVectorXi::LinSpaced(ncols_, 0, ncols_)
.colwise().replicate(nrows_);
Or alternatively:
Eigen::MatrixXi v_lin_matrix = Eigen::VectorXi::LinSpaced(nrows_, 0, nrows_)
.replicate(1,ncols_);
Eigen::MatrixXi h_lin_matrix = Eigen::RowVectorXi::LinSpaced(ncols_, 0, ncols_)
.replicate(nrows_,1);
Regarding your use of <<: This is meant to be used in combination with the , operator to initialize a matrix in a single expression, like this:
Eigen::MatrixXi A(4,ncols_);
A << row0, row1, row2, row3;
What you wrote will trigger an assertion at runtime (as long as you compile without -DNDEBUG, which I strongly recommend until your code is sufficiently tested!).

Problem with using MPI_SENDRECV with 4 threads

As a minimal problem, I'm trying to send an integer between 4 processors: 0 -> 3 (rank 0 sends to and receives from rank 3), 2 -> 1, 1 -> 2, 3 -> 0. It never finishes execution and hangs, probably waiting for the response from other threads.
I'm compiling the code with mpif90 ... and running with mpiexec -np 4 .... Below is the minimal snippet:
program sendrecv
  implicit none
  include "mpif.h"
  integer :: foo, bar
  integer :: mpi_rank, mpi_size, ierr
  integer :: mpi_sendto, mpi_recvfrom
  integer :: istat(MPI_STATUS_SIZE), status, i

  call MPI_INIT(ierr)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, mpi_size, ierr)
  call MPI_COMM_RANK(MPI_COMM_WORLD, mpi_rank, ierr)
  print *, "SENDING..."
  if (mpi_rank .eq. 0) then
    mpi_sendto = 3; mpi_recvfrom = 3
  else if (mpi_rank .eq. 1) then
    mpi_sendto = 2; mpi_recvfrom = 2
  else if (mpi_rank .eq. 2) then
    mpi_sendto = 1; mpi_recvfrom = 1
  else
    mpi_sendto = 0; mpi_recvfrom = 0
  end if
  foo = mpi_rank
  do i = 1, 5
    foo = mpi_rank
    call MPI_SENDRECV(foo, 1,&
         & MPI_INTEGER, mpi_sendto, mpi_rank * 10 + i,&
         & bar, 1,&
         & MPI_INTEGER, mpi_recvfrom, mpi_rank * 10 + i,&
         & MPI_COMM_WORLD, istat, ierr)
  end do
  print *, "...DONE"
  call MPI_FINALIZE(ierr)
end
I don't really understand why this program hangs; maybe I'm missing something or doing something really wrong. If I understand correctly, MPI_SENDRECV is just a non-blocking send and recv followed by two waits. In that case, say, if rank=0 sends to rank=3, it shouldn't have any problem receiving from it, right?
I tried sending/receiving from different threads, i.e., doing this:
if (mpi_rank .eq. 0) then
  mpi_sendto = 1; mpi_recvfrom = 3
else if (mpi_rank .eq. 1) then
  mpi_sendto = 2; mpi_recvfrom = 0
else if (mpi_rank .eq. 2) then
  mpi_sendto = 3; mpi_recvfrom = 1
else
  mpi_sendto = 0; mpi_recvfrom = 2
end if
still not working.
UPD: As was pointed out, the tags should match when doing SENDRECV; however, when the call is made inside a loop, similar tags do not help much (see the modified code). Old version:
call MPI_SENDRECV(foo, 1,&
& MPI_INTEGER, mpi_sendto, 200,&
& bar, 1,&
& MPI_INTEGER, mpi_recvfrom, 100,&
& MPI_COMM_WORLD, status, ierr)
UPD#2: Actually, if anyone is interested, I found a discussion about exactly the problem I have, on why SENDRECVs may sometimes deadlock.
The term "thread" is misleading here, you should talk about MPI task or MPI process (both are equivalent).
The root cause is a tag mismatch. You send with tag 200 but receive with tag 100.
Also, you should use istat instead of status as the status argument of MPI_Sendrecv().
Here is how you can fix your program
call MPI_SENDRECV(foo, 1,&
& MPI_INTEGER, mpi_sendto, 200,&
& bar, 1,&
& MPI_INTEGER, mpi_recvfrom, 200,&
& MPI_COMM_WORLD, istat, ierr)

Search first occurrence of "OK" in a text file and extract the first 2 characters in the same line to save it as a variable in UFT/VB scripting

I've been trying to find a proper solution to my problem for several days now, looking everywhere. Hopefully some of you can point me in the right direction.
I need to find the string "OK" in a text file and, if I find it, extract the first 2 characters of that line and save them as a variable.
I give you an example of the lines you can find in this text file:
Debugger
--------------
>h state 2
Health thread state is: POLLING
Health Devices:
Sensor Name State Eval RED Value ( D , M ) Link Active Grp Description
11 ( 2) TEMP ( 1) OK 1 1 21 ( 1, 0) 0xff 0000 0 01-Inlet Ambient (X:1 y:1)
12 ( 2) TEMP ( 1) OK 0 1 40 ( 1, 0) 0xff 0000 0 02-CPU 1 (X:11 y:5)
13 ( 2) TEMP ( 2) MISSING 0 1 0 ( 0, 1) 0xff 0000 0 04-P1 DIMM 1-6 (X:14 y:5)
14 ( 2) TEMP ( 1) OK 0 1 24 ( 1, 0) 0xff r0000 0 05-P1 DIMM 7-12 (X:9 y:5)
15 ( 2) TEMP ( 2) MISSING 0 1 0 ( 0, 1) 0xff 0000 0 06-P2 DIMM 1-6 (X:6 y:5)
16 ( 2) TEMP ( 2) MISSING 0 1 0 ( 0, 0) 0xff 0000 0 07-P2 DIMM 7-12 (X:1 y:5)
17 ( 2) TEMP ( 1) OK 0 1 35 ( 1, 0) 0xff 0000 0 08-HD Max (X:2 y:3)
18 ( 2) TEMP ( 1) OK 0 1 38 ( 1, 0) 0xff 0000 0 10-Chipset (X:13 y:10)
19 ( 2) TEMP ( 1) OK 0 1 24 ( 1, 0) 0xff 0000 0 11-PS 1 Inlet (X:1 y:14)
20 ( 2) TEMP ( 2) MISSING 0 1 0 ( 0, 0) 0xff 0000 0 12-PS 2 Inlet (X:4 y:14)
21 ( 2) TEMP ( 1) OK 0 1 32 ( 1, 0) 0xff 0000 0 13-VR P1 (X:10 y:1)
22 ( 2) TEMP ( 1) OK 0 1 28 ( 1, 0) 0xff 0000 0 15-VR P1 Mem (X:13 y:1)
23 ( 2) TEMP ( 1) OK 0 1 27 ( 1, 0) 0xff 0000 0 16-VR P1 Mem (X:9 y:1)
24 ( 2) TEMP ( 1) OK 0 1 40 ( 1, 0) 0xff 0000 0 19-PS 1 Internal (X:8 y:1)
25 ( 2) TEMP ( 2) MISSING 0 1 0 ( 0, 0) 0xff 0000 0 20-PS 2 Internal (X:1 y:8)
26 ( 2) TEMP ( 2) MISSING 0 1 0 ( 0, 0) 0xff 0000 0 21-PCI 1 (X:5 y:12)
27 ( 2) TEMP ( 2) MISSING 0 1 0 ( 0, 0) 0xff 0000 0 22-PCI 2 (X:11 y:12)
28 ( 2) TEMP ( 2) MISSING 0 1 0 ( 0, 0) 0xff 0000 0
Using the following code I can extract the first 2 characters of a line, but I need to extract them from the line containing the first occurrence of "OK":
strLine = objTextFile.ReadLine
objTextFile.Close
'Gets first 2 chars
SerNum = Left(strLine, 2)
Looking for help in this... Thanks in advance...
My unfinished vbscript:
Const ForReading = 1
Dim strSearchFor
strSearchFor = "OK"
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objTextFile = objFSO.OpenTextFile("C:\myFile.txt", ForReading)
For i = 0 to 20
    strLine = objTextFile.ReadLine()
    If InStr(strLine, strSearchFor) > 0 Then
        SensorNumb = Left(strLine, 2)
        Exit For
    End If
Next
Final Code :
Const ForReading = 1
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objTextFile = objFSO.OpenTextFile("C:\myFile.txt", ForReading)
For i = 0 to 20
    strLine = objTextFile.ReadLine()
    If InStr(strLine, "OK") > 0 Then
        SensorNumb = Left(strLine, 2)
        objTextFile.Close
        Exit For
    End If
Next
Basically what you want to do is:
read the file line by line,
find the substring using InStr,
and print the first two chars with Mid(str, 1, 2).
You should be able to chain these together yourself.
This Python script should work for you:
lines = [line.rstrip("\n") for line in open("MY_FILENAME")]
for line in lines:
    if "OK" in line:        # str has no .contains() method; use the "in" operator
        print(line[:2])
        break               # stop at the first occurrence
Save this as a *.py file and execute it with python path/to/python/file

MPI_BCAST() applied on only a fraction of the base group

I grouped 8 processors into two groups, each containing four processors. I ask the root of each subgroup to do some communication with its subordinates using the subroutine MPI_BCAST.
I came across a question: to indicate the root of a subgroup, should I use the rank that the subgroup root has in the original MPI_COMM_WORLD communicator, or the new rank it has in the new communicator?
Take the code snippet below for example: I want P:0 to send data to its subordinates P:1, P:2, and P:3, and similarly I want P:4 to send its data to P:5, P:6, and P:7. To reach this goal, I am wondering whether I should specify the fourth argument in line 36 as 1, or specify it as 0 and 4 respectively, depending on which subgroup head I am referring to.
Thanks.
Lee
1 program main
2 include 'mpif.h'
3 integer :: ierr, irank, num_procs, base_group
4 integer :: nrow, ncol, irow, icol
5 integer :: dummy_group, dummy_comm, new_comm, new_rank
6 integer :: i, j, roster(4), data(4)
7
8 call MPI_Init ( ierr )
9 call MPI_COMM_RANK( MPI_comm_world, irank, ierr )
10 call MPI_COMM_SIZE( MPI_comm_world, num_procs, ierr)
11 call MPI_COMM_GROUP( MPI_comm_world, base_group, ierr)
12 nrow = 4
13 ncol = 2
14 irow = mod( irank, nrow ) + 1
15 icol = irank/nrow + 1
16
17 roster(1) = 0
18 do i = 2, nrow
19 roster(i) = roster(i-1) + 1
20 enddo
21
22 do i = 1, ncol
23 call MPI_GROUP_INCL( base_group, nrow, roster, dummy_group, ierr )
24 call MPI_COMM_CREATE( MPI_COMM_WORLD, dummy_group, dummy_comm, ierr )
25 if( icol == i ) new_comm = dummy_comm
26 forall( j=1:nrow ) roster(j) = roster(j) + nrow
27 enddo
28
29 ! Here I want to initialize data for processors P:0 and P:4
30 if( irank == 0 ) data = 0
31 if( irank == 4 ) data = 4
32
33 ! In the code below I want to require P:0 to send data to
34 ! its subordinates P:1, P:2, and P:3. Similarly, I ask P:4
35 ! to send out its data to P:5, P:6, P:7.
36 call MPI_BCAST( data, 4, MPI_INTEGER, 0, new_comm, ierr)
37
38 call MPI_Finalize ( ierr )
39 end program
All rank-type arguments (origin, target, etc.) in MPI must be ranks in the same communicator as the one given by the communicator argument. In practice, this means that after creating a new communicator, each process in that communicator must call MPI_Comm_rank and MPI_Comm_size to retrieve its rank and the total size in that communicator (unless you can deduce the new rank and size by other means in your code, of course).
As an aside, since what you're doing is splitting the original communicator into two disjoint communicators, I think an easier way to accomplish that is to use MPI_Comm_split rather than setting up the groups manually as you have done.
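For illustration, a minimal sketch of the MPI_Comm_split approach (my assumptions: 8 ranks, the names irank, new_comm, new_rank, and data mirror the question's code, and a new integer variable color = irank/4 puts ranks 0-3 and 4-7 into two disjoint communicators):
color = irank / 4                                       ! 0 for ranks 0-3, 1 for ranks 4-7
call MPI_COMM_SPLIT( MPI_COMM_WORLD, color, irank, new_comm, ierr )
call MPI_COMM_RANK( new_comm, new_rank, ierr )          ! rank within the subgroup
! the root argument is a rank in new_comm, i.e. 0 in both subgroups
call MPI_BCAST( data, 4, MPI_INTEGER, 0, new_comm, ierr )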
