Find write statement in Fortran - debugging

I'm using Fortran for my research and sometimes, for debugging purposes, someone will insert in the code something like this:
write(*,*) 'Variable x:', varx
The problem is that sometimes we forget to remove that statement from the code and it becomes difficult to find where it is being printed. I can usually get a good idea of where it is from the name 'Variable x', but sometimes that information is not present and I just see random numbers showing up.
One can imagine that doing a grep for write(*,*) is basically useless, so I was wondering if there is an efficient way of finding my culprit, like forcing every call of write(*,*) to print the file and line number, or tracking stdout.
Thank you.

Intel's Fortran preprocessor defines a number of macros, such as __FILE__ and __LINE__, which will be replaced by, respectively, the file name (as a string) and the line number (as an integer) when the preprocessor runs. For more details consult the documentation.
GFortran offers similar facilities; consult the documentation.
Perhaps your compiler offers similar capabilities.
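The same two macros exist in the C-family preprocessor these Fortran preprocessors are modelled on, so the idea can be sketched as follows (a hypothetical DEBUG_WRITE macro, shown in C++ for concreteness; the Fortran preprocessors expand __FILE__ and __LINE__ the same way):
#include <iostream>

// Hypothetical DEBUG_WRITE macro: the preprocessor substitutes the file name
// and line number at each expansion site, so every debug print is tagged.
#define DEBUG_WRITE(expr) \
    std::cerr << __FILE__ << ":" << __LINE__ << ": " << #expr << " = " \
              << (expr) << '\n'

int main() {
    int varx = 42;
    DEBUG_WRITE(varx);   // prints something like "demo.cpp:12: varx = 42"
    return 0;
}
A wrapper like this also gives you a single distinctive name (DEBUG_WRITE) to grep for, which addresses the original complaint directly.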

As has been previously implied, there's no standard Fortran way (although there may be a compiler-specific approach) to change the behaviour of the write statement as you want. However, as your problem is more to do with handling (unintentionally produced) bad code, there are options.
If you can't easily find an unwanted write(*,*) in your code that suggests that you have many legitimate such statements. One solution is to reduce the count:
use an explicit format, rather than list-directed output (* as the format);
instead of * as the output unit, use output_unit from the intrinsic module iso_fortran_env.
[Having an explicit format for "proper" output is a good idea, anyway.]
If that fails, use your version control system to compare an old "good" version against the new "bad" version. Perhaps even have your version control system flag/block commits with new write(*,*)s.
And if all that still doesn't help, then the pre-processor macros previously mentioned could be a final resort.


Need validation on a claim from Go Lang

I have lately been looking into GoLang, coming from a C++ background. I am reading a paper which allegedly explains the reasoning behind making Golang; here is its link: https://talks.golang.org/2012/splash.article
One of the claims is that handling dependencies (packages) in C and C++ is a pain, and it takes an #ifndef guard as an instance to state:
The intent is that the C preprocessor reads in the file but disregards
the contents on the second and subsequent readings of the file...
I referred to a GCC page on the same, https://gcc.gnu.org/onlinedocs/cppinternals/Guard-Macros.html, which says:
so that if the header file appears in a subsequent #include directive
and FOO is defined, then it is ignored and it doesn’t preprocess or
even re-open the file a second time
Go: "Reads in and disregard"
vs
GCC: it doesn’t preprocess or even re-open the file a second time.
Doesn't contradict?
your thoughts are appreciated. Thanks for Reading my question.
The first passage is talking about a generic compiler which, conceptually speaking, should read the contents of the file and disregard them (because they are #ifdef'd out). That is, roughly, what the C standard specifies a compiler should do.
But practically everything in the C standard is under the "as if" rule - a compiler does not actually have to be implemented in the way suggested in the standard, so long as the end result it produces is exactly the same in every case. As such, GCC's particular implementation adds an optimization where, in cases where it can tell with certainty that the contents of the file would be disregarded, it doesn't actually read it. This is perfectly fine because it still behaves as if it has read the file but disregarded it.
Note that other compilers do not necessarily do the same.
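For concreteness, here is the guard pattern both passages describe, as a minimal two-file sketch (foo.h and the FOO guard macro are hypothetical names):
// foo.h
#ifndef FOO
#define FOO
// Without the guard, the second #include below would make this a
// redefinition error within the same translation unit.
inline int foo_function(int x) { return x + 1; }
#endif

// main.cpp
#include "foo.h"
#include "foo.h"   // second read: the guard turns the file into a no-op

int main() { return foo_function(41); }
By the standard's description, the preprocessor reads foo.h a second time and discards its contents because FOO is already defined; under GCC's optimization it recognises that the entire file is wrapped in the guard and does not even re-open it. The observable result is identical either way.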

Intel Fortran to GNU Fortran Conversion [closed]

I am working on a custom CFD Solver written in Fortran 90 and MPI.
The code contains 15+ modules and was initially designed to work with the Intel Fortran compiler. Since I do not have access to the Intel compiler any more, I need to make it work using the GNU Fortran compiler.
I made changes in the Makefile, which initially had flags suitable for ifort.
I am using it on Ubuntu with GNU Fortran and OpenMPI.
I am sorry I am unable to include anything from the code structure or terminal output due to IP restrictions of my university. Nevertheless, I will try my best to describe the issues.
So now when I compile the code I am having some strange issues.
GNU Fortran is not able to read lines that are too long, and I get errors during compilation. As a result I have to break them into multiple lines using the '&' symbol.
A module D.f90 contains all the global variable declarations. However, during compilation I now get an error in module B.F90.
The error I get is 'Unclassified Statement Error'; I was able to fix it in some subroutines and functions by locally declaring the variables again.
I am not the most experienced person in Fortran, but I thought that a change of compiler should not be a reason for newfound syntax errors.
The errors described above could be remedied one by one, but considering the expanse of the code that is impractical.
I was hoping if anyone could share views on this matter and provide guidance on how to tackle it.
You should start reading three pieces of documentation:
The Fortran 90 standard (alternatively, other versions), which tells you what is legal, standard Fortran and what is not. Whenever you find some error, look at your code and check if what you are doing is legal, standard Fortran. Likely, the code in question will either be completely nonstandard (e.g. REAL*8, although that extension is fairly well understood) or rely on unspecified behaviour that Intel Fortran and GFortran are interpreting in different ways.
The GFortran manual for your version, which tells you how GFortran decides such unspecified cases, what intrinsic functions are available, how to change some options/flags, etc. This would tell you that your problem with the line lengths would be solved by adding -ffree-line-length-none.
The Intel Fortran manual for your version, which, in cases of non-standard or unspecified behaviour, will let you know what the code you are reading was written to do, i.e. the behaviour that you would expect. In particular, it will allow you to decipher what the compiler flags that are currently being used mean. They may or may not need translation to GFortran, e.g. /Qsave will need to become -fno-automatic.
A concrete example of interpretative differences within the range allowed by the standard: until Fortran 2003, the units for the "record length" in direct-access record files were left unspecified. Intel Fortran used "one machine word" (4 bytes on x86) while GFortran used 1 byte. Both were compliant with the letter of the standard, but incompatible.
Furthermore, even when coding "to standard", you may hit a wall if the compiler does not implement part of the Fnn standard, or implements it with bugs. Case in point: Intel Fortran 12.0 (old, but it's what I work with) does not implement the ALLOCATE(y, SOURCE=x) construct for polymorphic x (the "clone allocation"). On the other hand, GFortran has not completely implemented FINAL type-bound procedures (destructors).
In both cases, you will need to find workarounds. For example, for the first issue you can use a special form of the INQUIRE statement (kudos to @haraldkl). In other cases, the workaround might even involve using some kind of feature detection (see autoconf, CMake, etc.) and storing the results as PARAMETER variables in a config.f90 file that is included by your code. Your code would then take decisions based on it, as in:
! config.f90.in (things in @x@ would get substituted by automake, for example)
INTEGER, PARAMETER :: RECORD_LEN_BYTES = @RECORD_LEN_BYTES@
! Some other file which opens a file
INCLUDE "config.f90"
!...
OPEN(u, FILE='DE430.BIN', ACCESS='direct', FORM='unformatted', RECL=56 / RECORD_LEN_BYTES)
People have been complaining about following the standard since at least the 60s, but those cDEC$ features were put in for good reasons...
It is valuable to compile with more than one compiler, though, as you usually have problems caught by one compiler or the other.
For your question #1: "GNU Fortran is not able to read lines that are too long, and I get errors during compilation. As a result I have to break them into multiple lines using the '&' symbol."
In the days of old there was:
options/extended_source
SUBROUTINE...
In ifort it is -132, but I have not found an exact gfortran equivalent to -132. It may be -ffixed-line-length-n, -ffixed-line-length-none, -ffree-line-length-n or -ffree-line-length-none, per the link: http://www.math.uni-leipzig.de/~hellmund/Vorlesung/gfortran.html#SEC8
Also, in ifort free form is the default for .f90 and .f95 files (the compiler switches are '-free' and '-fixed'). However, one can use -fixed with .f90 files and then use column 6 for continuation and 'D' in column 1 for debug lines, which is handy with '-D_lines' or '-DD'.
Per the link: https://software.intel.com/sites/default/files/m/f/8/5/8/0/6366-ifort.txt
For your question #2: "A module D.f90 contains all the global variable declarations. However, during compilation I now get an error in module B.F90. The error I get is 'Unclassified Statement Error'; I was able to fix it in some subroutines and functions by locally declaring the variables again."
You probably need to put in the offending line, if you can get an IP waiver.
Making variables local will not work if they are expected to be shared via a /common/ block or a module.
If they were in /common/ or PUBLIC then they are shared.
If they are local then they are PRIVATE.
It would be easy to get that error if a PRIVATE statement was in the wrong place, or a USE statement was omitted.

Why do Julia programmers need to prefix macros with the at-sign?

Whenever I see a Julia macro in use like @assert or @time I'm always wondering about the need to distinguish a macro syntactically with the @ prefix. What should I be thinking of when using @ for a macro? For me it adds noise and distraction to an otherwise very nice language (syntactically speaking).
I mean, for me '@' has a meaning of reference, i.e. a location like a domain or address. In the location sense @ does not have a meaning for macros, other than that it is a different compilation step.
The @ should be seen as a warning sign which indicates that the normal rules of the language might not apply. E.g., a function call
f(x)
will never modify the value of the variable x in the calling context, but a macro invocation
@mymacro x
(or @mymacro f(x) for that matter) very well might.
Another reason is that macros in Julia are not based on textual substitution as in C, but substitution in the abstract syntax tree (which is much more powerful and avoids the unexpected consequences that textual substitution macros are notorious for).
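As an illustration of those notorious consequences, here is the classic textual-substitution pitfall, sketched in C/C++ terms (SQUARE is a hypothetical macro). Julia macros receive the parsed expression as a tree, so this kind of mis-parenthesisation cannot happen:
#include <iostream>

// Classic textual-substitution pitfall: the argument is pasted in verbatim,
// so operator precedence silently changes the meaning of the expansion.
#define SQUARE(x) x * x

int main() {
    int n = 3;
    std::cout << SQUARE(n + 1) << '\n';  // expands to n + 1 * n + 1 and prints 7, not 16
    return 0;
}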
Macros have special syntax in Julia, and since they are expanded after parse time, the parser also needs an unambiguous way to recognise them (without knowing which macros have been defined in the current scope).
ASCII characters are a precious resource in the design of most programming languages, Julia very much included. I would guess that the choice of @ mostly comes down to the fact that it was not needed for something more important, and that it stands out pretty well.
Symbols always need to be interpreted within the context they are used. Having multiple meanings for symbols, across contexts, is not new and will probably never go away. For example, no one should expect #include in a C program to go viral on Twitter.
Julia's Documentation entry Hold up: why macros? explains pretty well some of the things you might keep in mind while writing and/or using macros.
Here are a few snippets:
Macros are necessary because they execute when code is parsed,
therefore, macros allow the programmer to generate and include
fragments of customized code before the full program is run.
...
It is important to emphasize that macros receive their arguments as
expressions, literals, or symbols.
So, if a macro is called with an expression, it gets the whole expression, not just the result.
...
In place of the written syntax, the macro call is expanded at parse
time to its returned result.
It actually fits quite nicely with the semantics of the @ symbol on its own.
If we look up the Wikipedia entry for 'At symbol' we find that it is often used as a replacement for the preposition 'at' (yes it even reads 'at'). And the preposition 'at' is used to express a spatial or temporal relation.
Because of that we can use the @ symbol as an abbreviation for the preposition 'at' to refer to a spatial relation, i.e. a location like @tony's bar, @france, etc., to some memory location @0x50FA2C (e.g. for pointers/addresses), or to the receiver of a message (@user0851, which Twitter and other forums use), but also to a temporal relation, i.e. @05:00 am, @midnight, @compile_time or @parse_time.
And since macros are processed at parse time (there you have it), which is totally distinct from the other code that is evaluated at run time (yes, there are many different phases in between, but that's not the point here), we use @ to explicitly direct the programmer's attention to the fact that the following code fragment is processed at parse time, as opposed to run time.
For me this explanation fits nicely in the language.
thanks@all ;)

When I use Conditional Compilation Arguments to Exclude Code, why doesn't VB6 EXE file size change?

Basically, when declaring Windows API functions in my VB6 code, they come with many constants that need to be declared or used with the function; in fact, usually most of these constants are not used and you only end up using one of them or so when making your API calls. So I am using Conditional Compilation Arguments to exclude these (and other things), using something like this:
IncludeUnused = 0 : Testing = 1
(This is how I set two conditional compilation arguments; they are of Boolean type by default.)
So, many unused things are excluded like this:
#If IncludeUnused Then
' Some constant declarations and API declarations go here, sometimes functions
' and function calls go here as well, so it's not just declarations and constants
#End If
I also use a similar wrapper using the Testing Boolean declared in the Conditional Compilation Arguments input field in the VB6 Properties window's "Make" tab. The Testing Boolean is used to display message boxes and things like that when I am in testing mode, and of course, these message boxes are removed (not displayed) if I have Testing set to 0 (it is obviously 1 when I am testing).
The problem is, I tried setting IncludeUnused and Testing to 0 and 1 and vice versa, a total of four (4) combinations, and no matter what combination I set these values to, the output EXE file size for my VB6 EXE does not change! It is always 49,152 bytes when compiled to Native Code, whether using Fast Code or Small Code.
Additionally, if I compile to p-code under the four (4) combinations of Testing and IncludeUnused, I always end up with a file size of 32,768 bytes no matter what.
This is driving me crazy, since it is leading me to believe that no change is actually occurring, even though it is. Why is it that when segments of code are excluded from compilation, the file size is still the same? What am I missing or doing wrong, or what have I miscalculated?
I have considered the option that perhaps VB6 automatically does not compile code which is not used into the final output EXE, but I have read from a few sources that this is not true, in that, if it's included, it is compiled (correct me if I am wrong), and if this is right, then there is no need to use the IncludeUnused Boolean to remove unused code...?
If anyone can shed some light on these thoughts, I'd greatly appreciate it.
It could well be that the size difference is very small and that the EXE size is padded to the next 512- or 1024-byte alignment. Try compressing the EXEs with zip and see if the zip-file sizes differ.
You misunderstand what a compiler does. The output of the VB6 compiler is code. Constants are merely placeholders for values; they are not code. The compiler adds them to its symbol table, and when it later encounters a statement in your code that uses the constant, it replaces the constant by its value. That statement produces the exact same code whether you use a constant or hard-code the value in the statement.
So this automatically implies that if you never actually use the constant anywhere then there is no difference at all in the generated code. All that you accomplished by using the #If is to keep the compiler's symbol table smaller, which makes very little sense to do; the compilation-speed gain is not measurable. Symbol tables are implemented as hash tables and have O(1) amortized complexity.
You use constants only to make your code more readable. And to make it easy to change a constant value if the need ever arises. By using #If, you actually made your code less readable.
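To make the same point concrete (a minimal sketch in C++ rather than VB6, but the principle is the one described above): a named constant is folded into its use sites, and an unused one leaves no trace in the generated code.
// Both functions compile to identical machine code: the compiler substitutes
// the value of ANSWER at the use site, just like a hard-coded literal.
const int ANSWER = 42;
const int UNUSED = 1234;   // never referenced: nothing is emitted for it

int with_constant() { return ANSWER; }
int hard_coded()    { return 42; }
Comparing the compiler output of the two functions (e.g. with gcc -S) shows no difference, which is exactly why conditionally excluding unused constants does not shrink the executable.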
You can't test runtime data in conditional compilation directives.
These directives use expressions made up of literal values, operators, and CC constants. One way to set constant values is:
#Const IncludeUnused = 0
#Const Testing = 1
You can also define them via Project Properties for IDE testing. Go to the Make tab in that dialog and click the Help button for details.
Perhaps this is where you are setting the values? If so, consider this just additional info for later readers rather than an answer.
See #If...Then...#Else Directive
VB6 executable sizes are padded to 4KB blocks, so if the code difference is small it will make no difference to the executable.

Cross version line matching

I'm considering how to do automatic bug tracking and as part of that I'm wondering what is available to match source code line numbers (or more accurate numbers mapped from instruction pointers via something like addr2line) in one version of a program to the same line in another. (Assume everything is in some kind of source control and is available to my code)
The simplest approach would be to use a diff tool/lib on the files and do some math on the line number spans, however this has some limitations:
It doesn't handle cross file motion.
It might not play well with lines that get changed
It doesn't look at the information available in the intermediate versions.
It provides no way to manually patch up lines when the diff tool gets things wrong.
It's kinda clunky
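For what it's worth, the "math on the line number spans" of the simple approach might look like the following minimal C++ sketch (Hunk and mapLine are hypothetical names; the hunks would come from parsing your diff tool's output). It also shows limitation 2: lines inside a changed span simply have no counterpart.
#include <vector>

// One diff hunk: oldLen lines starting at oldStart (1-based) in the old file
// were replaced by newLen lines in the new file. Hunks must be sorted.
struct Hunk {
    int oldStart;
    int oldLen;
    int newLen;
};

// Map a line number in the old file to the new file; returns -1 if the line
// was changed or deleted (no well-defined counterpart).
int mapLine(const std::vector<Hunk>& hunks, int oldLine) {
    int offset = 0;
    for (const Hunk& h : hunks) {
        if (oldLine < h.oldStart) break;                 // before this hunk
        if (oldLine < h.oldStart + h.oldLen) return -1;  // inside a changed span
        offset += h.newLen - h.oldLen;                   // shift by the hunk's growth
    }
    return oldLine + offset;
}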
Before I start diving into developing something better:
What already exists to do this?
What features do similar system have that I've not thought of?
Why do you need to do this? If you use decent source version control, you should have access to old versions of the code, you can simply provide a link to that so people can see the bug in its original place. In fact the main problem I see with this system is that the bug may have already been fixed, but your automatic line tracking code will point to a line and say there's a bug there. Seems this system would be a pain to build, and not provide a whole lot of help in practice.
My suggestion is: instead of trying to track line numbers, which as you observed can quickly get out of sync as software changes, you should decorate each assertion (or other line of interest) with a unique identifier.
Assuming you're using C, in the case of assertions this could be as simple as changing something like assert(x == 42); to assert(("check_x", x == 42)); -- this is functionally identical, because the comma operator evaluates and discards the string literal and gives the whole expression the value of x == 42, while the identifier still appears in the stringified expression that assert prints on failure.
Of course this means that you need to identify a priori those items that you wish to track. But given that there's no generally reliable way to match up source line numbers across versions (by which I mean that for any mechanism you could propose, I believe I could propose a situation in which that mechanism does the wrong thing) I would argue that this is the best you can do.
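A compilable sketch of that idiom (file name and values hypothetical); on a typical implementation the failing assertion's message includes the stringified expression, identifier and all:
#include <cassert>

int main() {
    int x = 41;
    // The inner parentheses are required so the preprocessor does not treat
    // the comma as a macro-argument separator. The whole expression has the
    // value of x == 42; "check_x" exists only to show up in the message, e.g.
    //   demo.cpp:9: int main(): Assertion `("check_x", x == 42)' failed.
    assert(("check_x", x == 42));
    return 0;
}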
Another idea: If you're using C++, you can make use of RAII to track dynamic scopes very elegantly. Basically, you have a Track class whose constructor takes a string describing the scope and adds this to a global stack of currently active scopes. The Track destructor pops the top element off the stack. The final ingredient is a static function Track::getState(), which simply returns a list of all currently active scopes -- this can be called from an exception handler or other error-handling mechanism.
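A minimal single-threaded sketch of that Track class (names as described above; a real implementation would want thread-local or synchronised storage for the stack):
#include <iostream>
#include <string>
#include <vector>

class Track {
public:
    // Entering a scope pushes its description onto the global stack...
    explicit Track(std::string scope) { stack().push_back(std::move(scope)); }
    // ...and leaving it (normally or via an exception) pops it off again.
    ~Track() { stack().pop_back(); }

    // Snapshot of all currently active scopes, outermost first; callable
    // from an exception handler or other error-reporting code.
    static std::vector<std::string> getState() { return stack(); }

private:
    static std::vector<std::string>& stack() {
        static std::vector<std::string> s;
        return s;
    }
};

void inner() {
    Track t("inner");
    for (const std::string& scope : Track::getState())
        std::cout << scope << " > ";   // prints "outer > inner > "
    std::cout << '\n';
}

int main() {
    Track t("outer");
    inner();
    return 0;
}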
