Do OpenMP (OMP_*) environment variables matter during compilation?

I am compiling a code in Fortran 90 that uses OpenMP, and I need it to run with, say, OMP_NESTED=false and OMP_MAX_ACTIVE_LEVELS=2.
My question is: do I need to export the variables before I compile the code, and then again before each run, or is it enough to export them just before I run the code?

No, they do not affect compilation. And even if they did, the OpenMP specification clearly states that these variables are read at runtime, so any compile-time setting would be overridden.
You just have to export the variables before the run, and you can change them again for any subsequent run.
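For example (the program name below is just a placeholder), exporting right before each run is enough, and the same binary can be re-run with different settings without recompiling:
export OMP_NESTED=false
export OMP_MAX_ACTIVE_LEVELS=2
./my_program    # first run
export OMP_MAX_ACTIVE_LEVELS=4
./my_program    # same executable, new runtime setting, no rebuild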

Fortran error: (.text+0x0): multiple definition of

I tried to include my Fortran modules in an extensive library that is also written in Fortran. To compile and install this library, the autotools suite is used. I made a Makefile to compile my modules separately (in another directory, as explained here) and check that they were running fine. The test was successful. However, when I tried to couple them with this extensive library, I ran into trouble. I think, but am not sure, that the problem comes from the fact that in some of my subroutines, another subroutine is called several times. I get an error as follows:
DirectoryA/.libs/A.o: In function `__A_MOD_sub1':
A.F90:(.text+0x0): multiple definition of `__A_MOD_sub1'
DirectoryA/.libs/A.o:A.F90:(.text+0x0): first defined here
I tried to couple a very simple module with this library to make sure that the problem does not come from how I modified the makefiles. This test was successful. In that simple module, I only made some subroutines that print some parameters.
In the new, slightly more complicated set of modules, I knowingly call a subroutine several times inside another subroutine to perform a desired task. Is that where the problem comes from? Should I add a flag to configure.ac in order to circumvent this issue?
I added LDFLAGS="${LDFLAGS} -Wl,--allow-multiple-definition" to the configure.ac file and the problem was solved.
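For reference, a minimal sketch of the two usual ways to inject that linker flag, assuming a standard autoconf setup (the exact placement in configure.ac depends on the rest of the file):
# inside configure.ac, before the output stage that generates the Makefiles
LDFLAGS="${LDFLAGS} -Wl,--allow-multiple-definition"
# or, without touching configure.ac, pass it at configure time
./configure LDFLAGS="-Wl,--allow-multiple-definition"
Keep in mind that this flag only tells the linker to use the first definition of each duplicated symbol and ignore the rest; it silences the error rather than removing the duplicate object code.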

When does the compiler compile the code into machine code?

As far as I know, the compiler compiles the code by converting it to a language that the computer can understand, namely machine language, and this is done before running the code.
So, does the compiler compile my code each time I type a character in the file?
And if so, does it check the whole code, or just the line that was updated?
An important part of this question is the type of programming language (PL) we are talking about. Generally speaking, I would categorize PLs into three groups:
Traditional PLs. Ex: C, C++, Rust
The compiler compiles the code into machine language when you hit the "build" button or the "run" button.
It doesn't compile every time you change the code, but a code linter does continuously observe your code and check it for errors.
Another note: when you change part of the code and compile it, the compiler doesn't recompile everything. It usually only recompiles the source files (translation units, modules, or whatever you call them) that actually changed; see the sketch after this list.
It is also important to note that a lot of modern IDEs compile when you save the files.
There is also the hot-reload feature: a smart compiler feature that can swap certain parts of the code while it is running.
Interpreted PLs. Ex: Python, JS, and PHP
Those languages never get compiled ahead of time; rather, they get interpreted or translated into native code on the fly, in memory, when you run them.
Those languages usually employ a cache to accelerate subsequent code execution.
Intermediary-code PLs. Ex: Kotlin, Java, C#
These have two stages of compilation:
Build-time compilation.
Just-in-time (run-time) compilation.
Build-time compilation converts the code into intermediate language (IL) code, which is specific to the runtime.
This code is only understood by that runtime, such as the Java runtime or the .NET runtime.
The second compilation happens when the program gets installed or run for the first time. This is called just-in-time (JIT) compilation.
The runtime converts the IL into native code specific to the operating system it is running on.
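As a small illustration of the incremental recompilation mentioned for traditional PLs (the file names are made up), a build driven by make only redoes the steps whose inputs changed:
gcc -c main.c             # main.c  -> main.o
gcc -c utils.c            # utils.c -> utils.o
gcc main.o utils.o -o app
# after editing only utils.c, make re-runs just these two steps:
gcc -c utils.c
gcc main.o utils.o -o app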

Changing the GCC code: how to test newly added features?

I am learning compilers and want to make changes of my own to the GCC parser and lexer. Is there any testing tool or some other way available that lets me change the GCC code and test it accordingly?
I tried changing the lexical analysis file, but now I am stuck because I don't know how to compile these files. I tried compiling them with another GCC compiler, but that produces errors. I even tried configure and make, but doing this with every change does not seem efficient.
The purpose of these changes is just learning, and I have to stick with GCC since it is the only compiler my instructor allows.
I even tried configure and make but doing that with every change is not at all efficient.
That is exactly what you should be doing. (Well, you don't need to re-configure after every change, just run make again.) However, by default GCC configures itself in bootstrap mode, which means not only does your host compiler compile GCC, that compiled GCC then compiles GCC again (and again). That is overkill for your purposes, and you can prevent that from happening by adding --disable-bootstrap to the configuration options.
Another option that can help significantly reduce build times is enabling only the languages you're interested in. Since you're experimenting, you'll likely be very happy if you create something that works for C or for C++, even if for some obscure reason Java happens to break. Testing other languages becomes relevant when you make your changes available for a larger audience, but that isn't the case just yet. The configuration option that covers this is --enable-languages=c,c++.
Most of the configuration options are documented on the Installing GCC: Configuration page. Thoroughly testing your changes is documented on the Contributing to GCC page, but that's likely something for later: you should first know how to make your own simpler tests pass, by simply trying code that makes use of your new feature.
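A minimal sketch of such a build cycle, assuming the GCC sources live in ~/gcc and using a separate object directory (the paths and job count are just placeholders):
mkdir ~/gcc-build && cd ~/gcc-build
~/gcc/configure --disable-bootstrap --enable-languages=c,c++ --prefix=$HOME/gcc-install
make -j4         # repeat only this step after each change to the sources
make install     # optional: install into the chosen prefix
Before installing, the freshly built compiler can also be invoked straight from the build tree as ./gcc/xgcc -B ./gcc/ to try out code that exercises your change.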
You make changes (which are made "permanent" by saving the files you modify), compile the code, and run the test suite.
You typically write additional tests or remove those that are invalidated by your changes and that's it.
If your changes don't contribute anything "positive" to the compiler, upstream will probably never accept them, and the only "permanence" you can get is the modifications in your local copy.

Environment variables that one should update when using a new compiler

Say a system admin provides a new version of the GCC compiler, available in /some/path on a machine where I build software (all types of software, including open source, third-party tools, my own programs, etc.):
I usually update the following three environment variables, $PATH, $LD_LIBRARY_PATH and $MANPATH, according to what I understand is standard practice for interfacing with general build tools (e.g. autoconf, cmake, etc.) or scripts.
setenv MY_GCC /some/path
setenv PATH $MY_GCC/bin:$PATH
setenv LD_LIBRARY_PATH $MY_GCC/lib64:$LD_LIBRARY_PATH
setenv MANPATH $MY_GCC/share/man:$MANPATH
Here I have a quick question: is there really a reason to update LD_LIBRARY_PATH (why would programs link against a compiler?).
But more generally, what environment variables should one update upon the installation of a new compiler to guarantee a proper building environment?
It depends.
Generally, you do not need to set up any environment variables other than PATH to have a proper building environment (and if you use an IDE, not even that may be necessary, though you might have to tell the IDE where to find the compiler if it lives in an unexpected, non-standard location).
If you use something like autoconf (or CMake, or any similar thing), and especially if you have several compiler versions (or cross compilers) on the system, you may want to set variables like CC or CXX to a reasonable default just to be sure (and modify them accordingly if you want something else).
Though if your compiler has its target embedded in its name (as in most cross builds), this is probably not necessary. At least, it works perfectly well for me without doing anything special.
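For instance, reusing the $MY_GCC variable from the question (csh syntax as above; the paths are only an assumption about your layout), those defaults could be set like this:
setenv CC $MY_GCC/bin/gcc
setenv CXX $MY_GCC/bin/g++
# autoconf-style configure scripts also accept these directly on the command line:
# ./configure CC=$MY_GCC/bin/gcc CXX=$MY_GCC/bin/g++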
If English is not your native language and your GCC was built with locale support (most stupid idea ever, if you ask me), you may want to set LC_ALL to "C". Otherwise you'll notice that your localized, hard-to-search error messages won't get you much help when you ask about a compiler problem on a forum.
If you have a ramdisk (or SSD) in addition to a normal hard disk, but your projects are on the normal hard disk, you may want to set TMPDIR (even if you always compile with -pipe, since the compiler sometimes seems to create temp files anyway, for a reason I don't understand).
If you have non-standard locations for libraries that you want to use, you can set LIBRARY_PATH, but I advise against it. It is better to have your build scripts (or project settings in the IDE) pass these locations to the linker on the command line (or through configure with something like --with-foo-path=...). This guarantees that your projects build everywhere and anywhere without requiring someone else to perform a magic dance with some unknown, obscure environment variables. The same goes for C_INCLUDE_PATH.
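As a sketch of that approach (the paths and the library name are made up), the locations go on the command line rather than into the environment:
gcc -I$HOME/opt/foo/include -L$HOME/opt/foo/lib -o prog prog.c -lfoo
# or, for an autoconf package that happens to offer such an option:
./configure --with-foo-path=$HOME/opt/foo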

From configure scripts to Makefiles?

I'd like to build my own GNU/Linux system from scratch using cross-compilation (just like the CLFS project). Most of the packages I would use are distributed with a configure script, and you just have to run it with the right arguments. For various reasons, I'd like to skip this step and run make instead. Of course, I need a custom Makefile for this to work. The question is: is it feasible to create custom Makefiles without having to read and comprehend all the source code? Is it possible to just read the configure.ac files or something like that? Thanks.
Probably not. What happens is that configure tests which of a number of options are best suited to your environment, then substitutes them into Makefile.in to build the Makefile, into config.h.in to build config.h, and so on. You could skip running configure and just determine what these values should be from simple cases of configure.ac (or just keep one huge cache if your environment won't change), but I think packages can define extra inline checks in configure.ac that you'd have to parse and implement correctly. It's going to be a lot easier to just run configure, even if you do have to figure out the correct parameter values for your cross-compiled environment without runtime checks.
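As a tiny illustration of that substitution step (the variable names are just the usual autoconf ones), a shipped template like
# Makefile.in, distributed with the package
CC      = @CC@
CFLAGS  = @CFLAGS@
prog: prog.c
	$(CC) $(CFLAGS) -o prog prog.c
becomes, once configure has made its choices, a concrete Makefile with lines like CC = gcc and CFLAGS = -g -O2. Writing such Makefiles by hand means making all of those choices yourself.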
That said, hopefully you only need to cross-build a small number of packages (kernel, glibc, gcc, make, bash, etc.); then you can switch into your new environment and build the remaining packages there using configure. If you want inspiration as to what switch values you should be using, you can always look at the parameters in Fedora SRPMs or Debian source debs.
