DDD execution line jumps to subroutine top on continuation line (Fortran 77 debugging)

I work with some very old Fortran code in my research lab and have relied heavily on DDD to test new code and subroutines. Until six months ago I was able to generate core dumps and step through code without any fuss. Now, whenever DDD encounters a continuation line ("&"), the green arrow pointing to the execution line jumps back to the line where the subroutine I'm currently in starts. This is annoying because I have to work back to see where the code falls down, although I use the "undo" function in DDD to see where I was. I cannot tell whether it's my code or the debugger that has changed.
Has anyone else encountered this behaviour, and is there a way to stop the execution line from jumping between the "&" and SUBROUTINE lines? This also disrupts my core files, which just point me to the SUBROUTINE line as well.
I've added a code snippet of the subroutine declaration and the portions of the code where the debugger's green arrow jumps between the continuation lines and the subroutine declaration.
      SUBROUTINE SOFC_A42_coeffgen(IPCOMP,COUT,ISTATS)
      IMPLICIT NONE
~~~~~~~~DECLARATIONS AND A BUNCH OF CODE~~~~~~~~
      IF( P_battery > P_fully_discharge )
     &    P_battery = P_fully_discharge
C---------Determine SOC at end of time-step.
      battery_SOC_f = ( battery_SOC_p*battery_capacity
     &               - P_battery*TIMSEC/battery_discharge_eff
     &               ) / battery_capacity
C---------Determine heat generated due to discharging losses.
      q_battery_loss = P_battery * (1. - battery_discharge_eff)
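One way to narrow down whether the continuation itself is what trips the debugger (a sketch, offered purely as an experiment, not a confirmed fix): the single-statement logical IF above can be rewritten as an equivalent block IF, so the condition and the assignment each sit on their own line with no "&". Whether DDD/gdb then tracks the lines cleanly depends on the line table your compiler emits.

      IF( P_battery > P_fully_discharge ) THEN
          P_battery = P_fully_discharge
      END IF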

Related

How can I prevent interrupts stepping out from my code while debugging?

I'm currently debugging the Linux kernel, and it's properly set up with kgdb.
I set a breakpoint on the function I'm trying to debug, and the break is hit once I run my program that needs this kernel function, which is what I want. But whenever I try to step through the code with "n" or "si", I immediately land in arch/x86/include/asm/apic.h, which then runs some interrupt-handling code and timers. I'm aware that the kernel is heavily parallelized and has to move into other code while executing, but is it possible to step through the function more comfortably?
What I want to achieve:
before:
-> line A
line B
after:
line A
-> line B
What I have right now:
before:
-> line A
line B
after:
line A
... jumps into way different code here
I think this is hard. Consider: if all interrupts were suppressed, how would you even issue commands with the keyboard or mouse?

TI BASIC remainder() domain error

In my TI-BASIC program on the TI-84 Plus C, I have a line that says remainder((A/X),2). This raises a domain error every time the program runs. In my tests, A/X is 4:
remainder(4,2) works.
4->Z remainder(Z,2) works.
(A/X)->Z remainder(Z,2) does not work.
If I pause the program and display (A/X) immediately before the remainder line, the result is definitely 4.
What could possibly be wrong with my code?
link to the code:
https://drive.google.com/open?id=0B07yHA0-EssLUjZOc0dxektwbUU
https://drive.google.com/open?id=0B07yHA0-EssLSFlNcFNWZm0zNDQ

Missing print-out for MPI root process after it handles data reading alone

I'm writing a project that first has the root process read a large data file and do some calculations, and then broadcasts the calculated results to all other processes. Here is my code: (1) it reads random numbers from a txt file with n_sample=30000, (2) generates the dens_ent matrix by some rule, (3) broadcasts it to the other processes. By the way, I'm using Open MPI with gfortran.
IF (myid==0) THEN
   OPEN(UNIT=8,FILE='rnseed_ent20.txt')
   DO i=1,n_sample
      DO j=1,3
         READ(8,*) rn(i,j)
      END DO
   END DO
   CLOSE(8)
END IF
dens_ent=0.0d0
DO i=1,n_sample
   IF (myid==0) THEN
      !Random draws of productivity and savings
      rn_zb=MC_JOINT_SAMPLE((/-0.1d0,mu_b0/),var,rn(i,1:2))
      iz=minloc(abs(log(zgrid)-rn_zb(1)),dim=1)
      ib=minloc(abs(log(bgrid(1:nb/2))-rn_zb(2)),dim=1) !Find the closest saving grid
      CALL SUB2IND(j,(/nb,nm,nk,nxi,nz/),(/ib,1,1,1,iz/))
      DO iixi=1,nxi
         DO iiz=1,nz
            CALL SUB2IND(jj,(/nb,nm,nk,nxi,nz/),(/policybmk_2_statebmk_index(j,:),iixi,iiz/))
            dens_ent(jj)=dens_ent(jj)+1.0d0/real(nxi)*markovian(iz,iiz)*merge(1.0d0,0.0d0,vent(j) .GE. -bgrid(ib)+ce)
            !Density only recorded if the value of entry is greater than b0+ce
         END DO
      END DO
   END IF
END DO
PRINT *, 'dingdongdingdong',myid
IF (myid==0) dens_ent=dens_ent/real(n_sample)*Mpo
IF (myid==0) PRINT *, 'sum_density by joint normal distribution',sum(dens_ent)
PRINT *, 'BLBLALALALALALA',myid
CALL MPI_BCAST(dens_ent,N,MPI_DOUBLE_PRECISION,0,MPI_COMM_WORLD,ierr)
The problem:
(1) IF (myid==0) PRINT *, 'sum_density by joint normal distribution',sum(dens_ent) seems not to be executed, as there is no print-out.
(2) I then verified this by adding the PRINT *, 'BLBLALALALALALA',myid messages. Again, there is no print-out for the root process myid=0.
It seems the root process is not working? How can that be? I'm quite confused. Is it because I'm not using MPI_BARRIER before PRINT *, 'dingdongdingdong',myid?
Is it possible that you are missing the following statement at the very beginning of your code?
CALL MPI_COMM_RANK (MPI_COMM_WORLD, myid, ierr)
IF (ierr /= MPI_SUCCESS) THEN
STOP "MPI_COMM_RANK failed!"
END IF
MPI_COMM_RANK returns in myid (if it succeeds) the identifier of the calling process within the MPI_COMM_WORLD communicator (i.e. a value between 0 and NP-1, where NP is the total number of MPI ranks).
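For reference, here is a minimal sketch of the intended structure, assuming Open MPI with gfortran; the buffer size N = 100 and the name nprocs are placeholders, and the file reading/computation is elided:

PROGRAM bcast_sketch
   USE MPI
   IMPLICIT NONE
   INTEGER, PARAMETER :: N = 100                    ! placeholder buffer size
   DOUBLE PRECISION :: dens_ent(N)
   INTEGER :: myid, nprocs, ierr

   CALL MPI_INIT(ierr)
   CALL MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)   ! without this, myid is undefined on every rank
   CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

   dens_ent = 0.0d0
   IF (myid == 0) THEN
      ! ... root alone reads the file and fills dens_ent ...
   END IF

   ! every rank must reach the broadcast; the root's buffer is copied to all ranks
   CALL MPI_BCAST(dens_ent, N, MPI_DOUBLE_PRECISION, 0, MPI_COMM_WORLD, ierr)

   PRINT *, 'rank', myid, 'sum(dens_ent) =', sum(dens_ent)
   CALL MPI_FINALIZE(ierr)
END PROGRAM bcast_sketch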
Thanks for contributions from @cw21, @Harald, and @Hristo Iliev.
The failure lies in unit numbering. One reference says:
unit number: This must be present and takes any integer type. Note this ‘number’ identifies the file and must be unique, so if you have more than one file open then you must specify a different unit number for each file. Avoid using 0, 5 or 6, as these UNITs are typically picked to be used by Fortran as follows:
– Standard Error = 0: Used to print error messages to the screen.
– Standard In = 5: Used to read in data from the keyboard.
– Standard Out = 6: Used to print general output to the screen.
So I changed all the unit numbers from i to 1i, which did not work; then to 10i, and it started to work. Mysteriously, as correctly pointed out by @Hristo Iliev, as long as the numbering is not 0, 5 or 6, the code should behave properly. I cannot explain to myself why 1i did not work. But in any case, the root process is now printing out results.
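If the literal unit numbers ever feel fragile, one alternative (a sketch only, assuming a Fortran 2008 compiler such as recent gfortran; the declarations are illustrative and the file name is taken from the question) is to let the runtime pick a free unit with NEWUNIT, which cannot collide with the preconnected units 0, 5 and 6:

PROGRAM read_sketch
   IMPLICIT NONE
   INTEGER, PARAMETER :: n_sample = 30000
   DOUBLE PRECISION :: rn(n_sample,3)
   INTEGER :: funit, i, j

   ! NEWUNIT= (Fortran 2008) asks the runtime for an unused, negative unit number
   OPEN(NEWUNIT=funit, FILE='rnseed_ent20.txt', STATUS='OLD', ACTION='READ')
   DO i=1,n_sample
      DO j=1,3
         READ(funit,*) rn(i,j)
      END DO
   END DO
   CLOSE(funit)

   PRINT *, 'first row read:', rn(1,:)
END PROGRAM read_sketch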

Debugger implementation - Step over issue

I am currently writing a debugger for a script virtual machine.
The compiler for the scripts generates debug information, such as function entry points, variable scopes, names, instruction to line mappings, etc.
However, I have run into an issue with step-over.
Right now, I have the following:
1. Look up the current IP
2. Get the source line from that
3. Get the next (valid) source line
4. Get the IP where the next valid source line starts
5. Set a temporary breakpoint at that instruction
Or, if the next source line no longer belongs to the same function, set the temporary breakpoint at the next valid source line after the return address.
So far this works well. However, I seem to be having problems with jumps.
For example, take the following code:
n = 5;         // Line A
if(n == 5)     // Line B
{
    foo();     // Line C
}
else
{
    bar();     // Line D
    --n;
}
Given this code, if I'm on line B and choose to step-over, the IP determined for the breakpoint will be on line C. If, however, the conditional jump evaluates to false, it should be placed on line D. Because of this, the step-over wouldn't halt at the expected location (or rather, it wouldn't halt at all).
There seems to be little information on debugger implementation of this specific issue out there. However, I found this. While this is for a native debugger on Windows, the theory still holds true.
It seems, though, that the author has not considered this issue either, as he says in the section "Implementing Step-Over":
1. The UI-threads calls CDebuggerCore::ResumeDebugging with EResumeFlag set to StepOver.
This tells the debugger thread (having the debugger-loop) to put IBP on next line.
2. The debugger-thread locates next executable line and address (0x41141e), it places an IBP on that location.
3. It calls then ContinueDebugEvent, which tells the OS to continue running debuggee.
4. The BP is now hit, it passes through EXCEPTION_BREAKPOINT and reaches at EXCEPTION_SINGLE_STEP. Both these steps are same, including instruction reversal, EIP reduction etc.
5. It again calls HaltDebugging, which in turn, awaits user input.
Again:
The debugger-thread locates next executable line and address (0x41141e), it places an IBP on that location.
This statement does not seem to hold true in cases where jumps are involved, though.
Has anyone encountered this problem before? If so, do you have any tips on how to tackle this?
Since this thread comes up first on Google when searching for "debugger implement step over", I'll share my experience regarding the x86 architecture.
You start by implementing step-into: this is basically single-stepping through the instructions and checking whether the line corresponding to the current EIP changes. (You use either the DIA SDK or read the DWARF debug data to find out the current line for an EIP.)
For step-over: before single-stepping to the next instruction, check whether the current instruction is a CALL instruction. If it is, put a temporary breakpoint on the instruction following it and continue execution until that breakpoint is hit (then remove it). This way you effectively step over function calls at the assembly level, and therefore at the source level too.
There is no need to manage stack frames (unless you need to deal with single-line recursive functions). The same approach can be applied to other architectures as well.
OK, since this seems to be a bit of black magic: in this particular case the most sensible approach was to count the instructions up to where the next line starts (or to one past the end of the instruction stream), and then run that many instructions before halting again.
The only gotcha was that I had to keep track of the stack frame in case a CALL is executed; for step-over, the instructions inside the call should run without being counted.

Behaviour of debuggers regarding Step Over + Breakpoints

(I'm writing a debugger, but my question is also from the point of view of a debugger user.)
Many debuggers for many languages (GDB, Eclipse) implement a STEP_OVER command that executes one statement at a time; the difference from STEP_INTO is that it does not step down the stack into called functions, which is often a good thing.
10 : y = f1(x);
11 : z = y + 1;
Now, suppose I step over line 10 above, but a breakpoint is hit inside function f1 (perhaps several levels deep in the call stack). It's not clear what should happen when I resume: should the debugger pause at line 11 (effectively completing the step-over command)? Or should it forget about it? I believe most (all?) debuggers do the latter. Is that the standard/expected behaviour? I have found this a little frustrating myself. Is there a way (in some debugger) to resume execution from the inner breakpoint back to the stepped-over statement outside? Or is there some way to do a step-over that ignores breakpoints?
WinDbg does the latter, and I believe this is standard behavior. If you are worried about a different breakpoint occurring during a step-over command, you could always manually set a breakpoint on line 11 and continue running until line 11 is hit. Alternatively, you could temporarily disable the other breakpoints, but note that the debugger still may break for other reasons as well (such as raising an exception), depending on its configuration.
