This question already has an answer here:
MPI_Recv overwrites parts of memory it should not access
(1 answer)
Closed 3 years ago.
I have some Fortran code that I'm parallelizing with MPI which is doing truly bizarre things. First, there's a variable nstartg that I broadcast from the boss process to all the workers:
call mpi_bcast(nstartg,1,mpi_integer,0,mpi_comm_world,ierr)
The variable nstartg is never altered again in the program. Later on, I have the boss process send eproc elements of an array edge to the workers:
if (me==0) then
   do n=1,ntasks-1
      ! (determine the starting point estart and the number eproc
      !  of values to send)
      call mpi_send(edge(estart),eproc,mpi_integer,n,n,mpi_comm_world,ierr)
   enddo
endif
with a matching receive statement if me is non-zero. (I've left out some other code for readability; there's a good reason I'm not using scatterv.)
Here's where things get weird: the variable nstartg gets altered to n instead of keeping its actual value. For example, on process 1, after the mpi_recv, nstartg = 1, and on process 2 it's equal to 2, and so forth. Moreover, if I change the code above to
call mpi_send(edge(estart),eproc,mpi_integer,n,n+1234567,mpi_comm_world,ierr)
and change the tag accordingly in the matching call to mpi_recv, then on process 1, nstartg = 1234568; on process 2, nstartg = 1234569, etc.
What on earth is going on? All I've changed is the tag that mpi_send/recv are using to identify the message; provided the tags are unique so that the messages don't get mixed up, this shouldn't change anything, and yet it's altering a totally unrelated variable.
On the boss process, nstartg is unaltered, so I can fix this by broadcasting it again, but that's hardly a real solution. Finally, I should mention that compiling and running this code using electric fence hasn't picked up any buffer overflows, nor did -fbounds-check throw anything at me.
The most probable cause is that you pass an INTEGER scalar as the actual status argument to MPI_RECV, when it should really be declared as an array with an implementation-specific size, available as the MPI_STATUS_SIZE constant:
INTEGER, DIMENSION(MPI_STATUS_SIZE) :: status
or
INTEGER status(MPI_STATUS_SIZE)
The receive operation writes the message tag into one of the status fields (its implementation-specific index is available as the MPI_TAG constant, and the field value can be accessed as status(MPI_TAG)). If your status is simply a scalar INTEGER, several other local variables get overwritten as well; in your case it just so happens that nstartg sits right above status on the stack.
If you do not care about the receive status, you can pass the special constant MPI_STATUS_IGNORE instead.
On Windows, one process can modify the memory of another process. For example, if process A wants to modify the memory of process B, A can call the system function WriteProcessMemory. The call looks roughly like this:
BOOL flag = WriteProcessMemory(handler, p_B_addr, &p_A_buff, write_size); ...
This function returns a Boolean value, which indicates whether the write operation succeeded. It takes four parameters; let's take a look at them:
handler. A process handle, used to find process B.
p_B_addr. The address in process B's memory to write to.
p_A_buff. A pointer, in process A, to the buffer holding the data to write.
write_size. The number of bytes to write.
I am confused about the first parameter, handler, which is a variable of type HANDLE. For example, suppose that while our program is running, the ID of process B is 2680, and I want to write to process B's memory. First, in process A, I need to use this 2680 to obtain a handle to process B, in the form handler = OpenProcess(PROCESS_ALL_ACCESS, FALSE, 2680); then I can use this handler to trap into the kernel and modify process B's memory.
Since both forms ultimately trap into kernel functions to modify memory across processes, why isn't WriteProcessMemory designed to take the form WriteProcessMemory(B_procID, p_B_addr, &p_A_buff, write_size)?
Here B_procID is the ID of process B, since every process has a unique ID. Can't the kernel use this B_procID to find the physical addresses that process B's virtual addresses map to? Why must a handle to process B, held by process A, be passed in instead?
There are multiple reasons, all touched on in the comments.
Lifetime. The process id is simply a number; knowing the id does not keep the process alive. Having an open handle to a process means the kernel EPROCESS structure and the process address space will stay intact, even if said process finishes by calling ExitProcess. Windows tries not to re-use the id for a new process right away, but given enough time it will eventually happen.
Security/Access control. In Windows NT, access control is performed when you open an object, not each time you interact with the object. In this case, the kernel needs to know that the caller has PROCESS_VM_WRITE and PROCESS_VM_OPERATION access to the process. This is related to point 3, efficiency.
Speed. Windows could of course implement a WriteProcessMemoryById function that calls OpenProcess+WriteProcessMemory+CloseHandle, but this encourages sub-optimal design as well as opening you up to race conditions related to point 1 (a sketch of the handle-based pattern follows below). The same applies to "why is there no WriteFileByFilename function" (and all other Read/Write functions).
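For reference, here is a minimal sketch in C of the handle-based pattern the API encourages: open the target once (which is where access control happens), write, then close. The PID and target address are placeholders, and error handling is kept to a minimum.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD  targetPid  = 2680;               /* placeholder: PID of process B      */
    LPVOID targetAddr = (LPVOID)0x10000000; /* placeholder: address in B's memory */
    int    value      = 42;
    SIZE_T written    = 0;

    /* Access check happens once, at open time (point 2). The open handle also
       keeps B's address space alive while we use it (point 1). */
    HANDLE hProcess = OpenProcess(PROCESS_VM_WRITE | PROCESS_VM_OPERATION,
                                  FALSE, targetPid);
    if (hProcess == NULL) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    if (!WriteProcessMemory(hProcess, targetAddr, &value, sizeof(value), &written))
        fprintf(stderr, "WriteProcessMemory failed: %lu\n", GetLastError());

    CloseHandle(hProcess);
    return 0;
}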
I'm using the libwebsockets v2.4.
The doc seems unclear to me about what I have to do with the returned value of the lws_write() function.
If it returns -1, it's an error and I'm invited to close the connection. That's fine for me.
But when it returns a value that is strictly less than the buffer length I passed, should I assume that I have to write the remaining bytes later (in another WRITABLE callback occurrence)? Is it even possible for this situation to arise?
Also, should I use the lws_send_pipe_choked() before using the lws_write(), considering that I always use lws_write() in the context of a WRITABLE callback?
My understanding is that lws_write always returns the requested buffer length unless an error occurs.
If you look at lws_issue_raw() (whose result is returned by lws_write()) in output.c (https://github.com/warmcat/libwebsockets/blob/v2.4.0/lib/output.c#L157), you can see that if the length written by lws_ssl_capable_write() is less than the provided length, then lws allocates a buffer on wsi->trunc_alloc to hold the remaining bytes so they can be sent in the future.
Concerning your second question, I think it is safe to call lws_write() in the context of a WRITABLE callback without checking whether the pipe is choked. However, if you happen to loop over lws_write() in the callback, lws_send_pipe_choked() must be called to protect the subsequent calls to lws_write(). If you don't, you might trip this assertion https://github.com/warmcat/libwebsockets/blob/v2.4.0/lib/output.c#L83 and the user code will crash.
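To illustrate the second point, here is a minimal sketch (against the v2.4-era API) of a WRITABLE callback that loops over lws_write(); have_pending_message() and next_message() are hypothetical queue helpers standing in for the application's own send queue, not part of the library.

static int callback_minimal(struct lws *wsi, enum lws_callback_reasons reason,
                            void *user, void *in, size_t len)
{
    switch (reason) {
    case LWS_CALLBACK_SERVER_WRITEABLE: {
        unsigned char buf[LWS_PRE + 128];
        unsigned char *p = &buf[LWS_PRE];   /* lws_write() needs LWS_PRE bytes of headroom */

        while (have_pending_message()) {             /* hypothetical helper */
            int n = next_message(p, 128);            /* hypothetical helper */
            if (lws_write(wsi, p, n, LWS_WRITE_TEXT) < 0)
                return -1;                           /* error: close the connection */

            if (lws_send_pipe_choked(wsi)) {
                /* The socket cannot take more right now; ask for another
                   WRITABLE callback and continue there. */
                lws_callback_on_writable(wsi);
                break;
            }
        }
        break;
    }
    default:
        break;
    }
    return 0;
}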
I'm writing a project in which the root process first reads a large data file and does some calculations, and then broadcasts the calculated results to all other processes. Here is my code: (1) it reads random numbers from a txt file with n_sample=30000; (2) it generates the dens_ent matrix by some rule; (3) it broadcasts dens_ent to the other processes. By the way, I'm using OpenMPI with gfortran.
IF (myid==0) THEN
   OPEN(UNIT=8,FILE='rnseed_ent20.txt')
   DO i=1,n_sample
      DO j=1,3
         READ(8,*) rn(i,j)
      END DO
   END DO
   CLOSE(8)
END IF
dens_ent=0.0d0
DO i=1,n_sample
   IF (myid==0) THEN
      !Random draws of productivity and savings
      rn_zb=MC_JOINT_SAMPLE((/-0.1d0,mu_b0/),var,rn(i,1:2))
      iz=minloc(abs(log(zgrid)-rn_zb(1)),dim=1)
      ib=minloc(abs(log(bgrid(1:nb/2))-rn_zb(2)),dim=1) !Find the closest saving grid
      CALL SUB2IND(j,(/nb,nm,nk,nxi,nz/),(/ib,1,1,1,iz/))
      DO iixi=1,nxi
         DO iiz=1,nz
            CALL SUB2IND(jj,(/nb,nm,nk,nxi,nz/),(/policybmk_2_statebmk_index(j,:),iixi,iiz/))
            dens_ent(jj)=dens_ent(jj)+1.0d0/real(nxi)*markovian(iz,iiz)*merge(1.0d0,0.0d0,vent(j) .GE. -bgrid(ib)+ce)
            !Density only recorded if the value of entry is greater than b0+ce
         END DO
      END DO
   END IF
END DO
PRINT *, 'dingdongdingdong',myid
IF (myid==0) dens_ent=dens_ent/real(n_sample)*Mpo
IF (myid==0) PRINT *, 'sum_density by joint normal distribution',sum(dens_ent)
PRINT *, 'BLBLALALALALALA',myid
CALL MPI_BCAST(dens_ent,N,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
Problems arise:
(1) IF (myid==0) PRINT *, 'sum_density by joint normal distribution',sum(dens_ent) seems not to be executed, as there is no printout.
(2) I then verified this by adding PRINT *, 'BLBLALALALALALA',myid and similar messages. Again, no printout from the root process myid=0.
It seems like the root process is not working? How can this be? I'm quite confused. Is it because I'm not calling MPI_BARRIER before PRINT *, 'dingdongdingdong',myid?
Is it possible that you are missing the following statement at the very beginning of your code?
CALL MPI_COMM_RANK (MPI_COMM_WORLD, myid, ierr)
IF (ierr /= MPI_SUCCESS) THEN
   STOP "MPI_COMM_RANK failed!"
END IF
MPI_COMM_RANK returns in myid (if it succeeds) the identifier of the calling process within the MPI_COMM_WORLD communicator (i.e. a value between 0 and NP-1, where NP is the total number of MPI ranks).
Thanks for contributions from @cw21, @Harald and @Hristo Iliev.
The failure lies in unit numbering. One reference says:
unit number: This must be present and takes any integer type. Note this 'number' identifies the file and must be unique, so if you have more than one file open then you must specify a different unit number for each file. Avoid using 0, 5 or 6, as these UNITs are typically picked to be used by Fortran as follows.
– Standard Error = 0 : Used to print error messages to the screen.
– Standard In = 5 : Used to read in data from the keyboard.
– Standard Out = 6 : Used to print general output to the screen.
So I changed all unit numbers from i to 1i, which did not work; then I changed them to 10i, and it started to work. Mysteriously, as correctly pointed out by @Hristo Iliev, as long as the numbers are not 0, 5 or 6 the code should behave properly, and I cannot explain to myself why 1i did not work. But in any case, the root process is now printing out results.
This value appears in poison.h (Linux source: include/linux/poison.h):
/*
* Architectures might want to move the poison pointer offset
* into some well-recognized area such as 0xdead000000000000,
* that is also not mappable by user-space exploits:
*/
I'm just curious: what is special about the value 0xdead000000000000?
Pretty sure this is just a variant of deadbeef; i.e. it's just an easily identified signal value (see http://en.wikipedia.org/wiki/Hexspeak for deadbeef)
The idea of pointer poisoning is to ensure that a poisoned list pointer can't be used without causing a crash. Say you unlink a structure from the list it was in. You then want to invalidate the pointer value to make sure it's not used again for traversing the list. If there's a bug somewhere in the code -- a dangling pointer reference -- you want to make sure that any code trying to follow the list through this now-unlinked node crashes immediately (rather than later in some possibly unrelated area of code).
Of course you can poison the pointer simply by putting a null value in it or any other invalid address. Using 0xdead000000000000 as the base value just makes it easier to distinguish an explicitly poisoned value from one that was initialized with zero or got overwritten with zeroes. And it can be used with an offset (LIST_POISON{1,2}) to create multiple distinct poison values that all point into unusable areas of the virtual address space and are identifiable as invalid at a glance.
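For illustration, here is a paraphrased sketch of how the kernel's list code uses these values when unlinking a node (based on include/linux/list.h and include/linux/poison.h; the exact offsets and the per-architecture delta are illustrative here and vary between kernel versions):

struct list_head {
        struct list_head *next, *prev;
};

/* Illustrative values only; see include/linux/poison.h for the real ones. */
#define POISON_POINTER_DELTA 0xdead000000000000UL
#define LIST_POISON1 ((struct list_head *) (0x100 + POISON_POINTER_DELTA))
#define LIST_POISON2 ((struct list_head *) (0x200 + POISON_POINTER_DELTA))

static inline void __list_del(struct list_head *prev, struct list_head *next)
{
        next->prev = prev;
        prev->next = next;
}

static inline void list_del(struct list_head *entry)
{
        __list_del(entry->prev, entry->next);
        /* Any later traversal through this unlinked node dereferences a
           recognizably invalid, non-mappable address and crashes immediately. */
        entry->next = LIST_POISON1;
        entry->prev = LIST_POISON2;
}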
I am trying to build something with MPI, and since I am not very familiar with it, I started with some arrays and printing. I noticed that a plain C statement (not an MPI one) runs simultaneously on every process, e.g. printing something like this:
printf("Process No.%d",rank);
Then I noticed that the process numbers came out all scrambled, and because I wanted the processes in the right order, I tried using a for-loop like this:
for(rank=0; rank<processes; rank++) printf("Process No.%d",rank);
And that started a third world war in my computer: lots of strange errors in a strange format that I couldn't understand, which made me suspicious. How is it possible that an if-statement testing a rank's value, like the master rank:
if(rank==0) printf("Process No.%d",rank);
works fine, but I can't use a for-loop for the same purpose? Well, that is my first question.
My second question is about another for-loop I used, which got ignored.
printf("PROCESS --------------->**%d**\n",id);
for (i = 0; i < PARTS; ++i){
printf("Array No.%d\n", i+1);
for (j = 0; j < MAXWORDS; ++j)
printf("%d, ",0);
printf("\n\n");
}
I ran that for-loop and every process printed only the first line:
$ mpiexec -n 6 `pwd`/test
PROCESS --------------->**0**
PROCESS --------------->**1**
PROCESS --------------->**3**
PROCESS --------------->**2**
PROCESS --------------->**4**
PROCESS --------------->**5**
But not the zeros that should have followed (there was an array there at first, which I removed because I was trying to figure out why it didn't get printed).
So, what is it about MPI and for-loops that they don't get along?
--edit 1: grammar
--edit 2: Code paste
It is not the same as above, but it has the same problem in the last for-loop with fprintf.
This is a paste zone, sorry for that; I couldn't deal with the code formatting system here.
--edit 3: fixed
Well, I finally figured it out. First, I have to say that fprintf, when used inside MPI, is a mess: apparently the processes' writes to the text file overlap. I tested it with printf instead and it worked. The second thing I was doing wrong was calling MPI_Scatter from inside the root only:
if(rank==root) MPI_Scatter();
...which only scatters the data within that process and not to the others (see the collective-call sketch after this edit).
Now that I have fixed those two issues, the program works as it should, apart from a minor problem when I printf the my_list arrays: it seems like every array has a random number of entries, but when I tested using a counter for every array, it is only the printed output that looks that way. I tried using fflush(stdout); but it returned an error:
usr/lib/gcc/x86_64-pc-linux-gnu/4.2.2/../../../../x86_64-pc-linux-gnu/bin/ld: final link failed: `Input/output error collect2: ld returned 1 exit status`
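For reference, here is a minimal sketch of calling MPI_Scatter as a collective in C: every rank calls it, and the root argument (0 here) selects the sender. CHUNK and the buffer contents are placeholders.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK 4   /* placeholder: number of elements sent to each rank */

int main(int argc, char **argv)
{
    int rank, procs;
    int *sendbuf = NULL;
    int recvbuf[CHUNK];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);

    if (rank == 0) {                      /* only the root needs the full array */
        sendbuf = malloc(procs * CHUNK * sizeof(int));
        for (int i = 0; i < procs * CHUNK; ++i)
            sendbuf[i] = i;
    }

    /* Collective: called by ALL ranks, not just inside if(rank==root). */
    MPI_Scatter(sendbuf, CHUNK, MPI_INT, recvbuf, CHUNK, MPI_INT, 0, MPI_COMM_WORLD);

    printf("Rank %d received first element %d\n", rank, recvbuf[0]);

    if (rank == 0)
        free(sendbuf);
    MPI_Finalize();
    return 0;
}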
MPI in and of itself does not have a problem with for loops. However, just like with any other software, you should always remember that it will work the way you code it, not the way you intend it. It appears that you are having two distinct issues, both of which are only tangentially related to MPI.
The first issue is that the variable PARTS is defined in such a way that it depends on another variable, procs, which is not initialized beforehand. This means that the value of PARTS is undefined, and probably ends up causing a divide-by-zero as often as not. PARTS should actually be set after line 44, when procs is initialized.
The second issue is with the loop for(i = 0; i = LISTS; i++) labeled /*Here is the problem*/. First of all, the test condition of the loop always sets i to the value of LISTS, regardless of the initial value of 0 and the increment at the end of the loop. Perhaps it was intended to be i < LISTS? Secondly, LISTS is initialized in a way that depends on PARTS, which depends on procs, before that variable is initialized. As with PARTS, LISTS must be initialized after the statement MPI_Comm_size(MPI_COMM_WORLD, &procs); on line 44.
Please be more careful when you write your loops. Also, make sure that you initialize variables correctly. I highly recommend using print statements (for small programs) or the debugger to make sure your variables are being set to the expected values.
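As a minimal sketch of the ordering described above (the formulas for PARTS and LISTS are placeholders, since the original paste is not reproduced here): procs must be obtained from MPI_Comm_size before anything derived from it is computed, and the loop condition must be a comparison rather than an assignment.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, procs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);  /* initialize procs first */

    int PARTS = 24 / procs;                 /* placeholder formula, now well defined */
    int LISTS = 2 * PARTS;                  /* placeholder formula                   */

    for (int i = 0; i < LISTS; i++)         /* i < LISTS, not i = LISTS */
        printf("Rank %d: list %d of %d\n", rank, i + 1, LISTS);

    MPI_Finalize();
    return 0;
}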