Considerations for compiling Fortran code to 64-bit? - compilation

I'm working on scientific/engineering programs written in Fortran for a 32-bit Unix system. I have to recompile them for a new 64-bit cluster, and I'm having trouble making sense of the errors and of what changes I should make.

I compile Fortran programs for 32-bit or 64-bit OSes and haven't encountered any problems. What errors have you seen? Are you changing it to a parallel program?
A program implemented to the best design philosophy of Fortran >=90 requests numeric types according to the precision that it needs, e.g., using "selected_real_kind" to specify the number of digits needed in a real type. Then the compiler (on the OS and host computer) provides the requested precision if it can, or otherwise the program refuses to run. If the requested precisions are sufficient to compute the answer, this approach should make a portable program. It isn't perfect since the numeric computing model isn't totally specified.
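A minimal sketch of that approach (the program and variable names are just illustrative):

    program kinds_demo
      implicit none
      ! Request a real kind with at least 15 significant decimal digits.
      ! If no such kind exists, selected_real_kind returns a negative value
      ! and the declaration of x below fails to compile -- the "refuses to
      ! run" behavior described above.
      integer, parameter :: dp = selected_real_kind(p=15)
      real(kind=dp) :: x

      x = 1.0_dp / 3.0_dp
      print *, 'kind =', dp, ' 1/3 =', x
    end program kinds_demo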

Related

Why aren't executables written in a way that the OS understands, instead of in machine code?

We know that computer programs are either AOT compiled, JIT compiled, or interpreted.
We also know that AOT-compiled programs usually get compiled from their high-level source code into machine code.
Now the question is: if machine code is so hard to understand and write, why wasn't the idea of compilers to translate programs into a simpler language understood by the operating system, instead of translating directly into machine code?
And if such an operating-system-dependent language existed, the OS would read the executables written in it and translate them into the corresponding machine code understood by the CPU.
In other words, wouldn't the process of compiling into machine code have been easier if OSs had some kind of JIT compiler (VM?) that translated a specific kind of bytecode (which should exist) into machine code?
Are there any disadvantages to all this?

Compiling COBOL as 32-bit Executable For Windows

I am diving into the world of COBOL and have written a simple program that compiles and runs as intended from my KDE Plasma command line using open-cobol (cobc). I have seen a few sites mention that COBOL is quite portable and does not require multiple compilations, but when I try to run the same output program on Windows 10 (i.e., 32-bit), the system states that the program is a 16-bit application and thus cannot run.
Are there parameters that I can use with cobc to compile in such a way that my programs will run on Windows 10, or am I fundamentally misunderstanding the portability of this language?
Compilation command: cobc -x -o program program.cob
Your program is likely already a 64-bit executable (depending on your actual OS; otherwise it's 32-bit), but it is definitely not a Windows binary (and because Windows doesn't recognize it, it just guesses that it is a 16-bit executable).
COBOL itself is portable, even between different compilers (if you restrict yourself to "standard" COBOL or use only the extensions that the compilers in use share), but you need "some" native parts in any case.
As a well-known example, take Java or .NET: the "runtime" is a native binary, which executes the Java (or MSIL) bytecode.
There are some COBOL compilers generating intermediate code which is actually portable and can be used with the "native runtime" you have to install beforehand.
The easiest option for your case: take a compatible compiler and recompile your COBOL source for this platform on this platform.
I'd suggest the successor of OpenCOBOL, GnuCOBOL, using the official Windows binaries.
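For example, with the GnuCOBOL Windows binaries installed (and cobc on the PATH), the same source is simply compiled again on that machine:

    cobc -x -o program.exe program.cob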

Does the compiler actually produce Machine Code?

I've been reading that in most cases (like gcc) the compiler reads the source code in a high-level language and spits out the corresponding machine code. Now, machine code by definition is the code that a processor can understand directly. So machine code should be only machine (processor) dependent and OS independent. But this is not the case: even if two different operating systems are running on the same processor, I cannot run the same compiled file (.exe for Windows or .out for Linux) on both operating systems.
So, what am I missing? Is the output of a gcc compiler (and most compilers) not Machine Code? Or is Machine Code not the lowest level of code and the OS translated it further to a set of instructions that the processor can execute?
You are confusing a few things. A retargetable compiler like gcc, and other generic compilers, compile files to objects; then the linker links the objects with other libraries as needed to make a so-called binary, which the operating system can read, parse, load (the loadable blocks of it), and start executing.
A sane compiler author will use assembly language as the output of the compiler; then the compiler, or the user in their makefile, calls the assembler, which creates the object. This is how gcc works, and roughly how clang works too, although llc can now make objects directly, not just assembly that then gets assembled.
It makes far more sense to generate debuggable assembly language than to produce raw machine code directly. You really need a good reason, like JIT, to skip that step. I would avoid toolchains that go straight to machine code just because they can; they are harder to maintain and more likely to have bugs or take longer to fix them.
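With gfortran (gcc behaves the same way for C), you can make those stages explicit yourself; the file names here are only placeholders:

    gfortran -S prog.f90      # compile only: emits assembly in prog.s
    as -o prog.o prog.s       # assemble: produces the object file prog.o
    gfortran -o prog prog.o   # link: produces the final executable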
If the architecture is the same, there is no reason why you can't have a generic toolchain generate code for incompatible operating systems; the GNU tools, for example, can do this. Operating system differences are not, by definition, at the machine-code level: most are at the high-level-language level. The C libraries you call to create GUI windows, etc., have nothing to do with the machine code or the processor architecture, and for some operating systems the same OS-specific C code can be used on MIPS, ARM, PowerPC, or x86. Where the architecture becomes specific is in the mechanism by which actual system calls are invoked: a specific instruction is often used, and machine code is eventually involved, yes, but there is no reason this can't be coded in real or inline assembly.
This leads to libraries: even fopen and printf, which are generic C calls, eventually have to make a system call. Much of the library support code can be written in a high-level language that is compatible across systems, but there will need to be a system- and architecture-specific bit of code for the last mile. You can see this in the glibc sources, or in the hooks into newlib, to name two examples of library solutions.
The same is true for other languages, C++ as much as C. Interpreted languages have additional layers, but their virtual machines are just programs that sit on similar layers.
Low-level programming doesn't mean machine or assembly language; it just means that whatever programming language you are using accesses things at a lower level, below the application or below the operating system, etc.
Compilers produce assembly code, which is a human-readable version of machine code (e.g., instead of 1s and 0s you have actual commands). However, the assembly/machine code needed to make your program run correctly differs depending on the operating system. So the language the processors use is the same, but your program needs to talk to the operating system, which is different.
For example, say you're writing a Hello World program. You need to print the phrase "Hello, World" onto the screen. Your program will need to go through the OS to actually do that, and different OSes have different interfaces.
I'm deliberately avoiding technical terms here to keep the answer understandable for beginners. To be more precise, your program needs to go through the operating system to interact with the other hardware on your computer (e.g., keyboard, display). This is done through system calls that are different for each family of OS.
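For instance, even a minimal Fortran program like the sketch below has to go through the OS to print its one line; compiled for Linux it ends up calling a different system interface than when compiled for Windows, even on the same CPU:

    program hello
      implicit none
      ! The Fortran runtime turns this print into an OS-specific write call.
      print *, 'Hello, World'
    end program hello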
The machine code that is generated can run on any processor of the type it was generated for. The challenge is that your code will interact with other modules or programs on the system, and to do that you need conventions for calling and returning. The code generated assumes a runtime environment (OS) as well as library support (calling conventions). Those are not consistent across operating systems.
So things break when they need to transition to, and depend on, other modules using the calling conventions defined by the operating system.
Even if the machine code instructions were identical for the compiled program on two different operating systems (not at all likely, since different operating systems provide different services in different ways), the machine code needs to be stored in a format that the host OS can load into a process for execution. And those formats are frequently different between operating systems.

How can a compiler cross-compile to a different OS and architecture?

I'm very intrigued by the fact that Go (since v1.5) has in-built cross compilation options.
But how is it possible to compile for a different OS and architecture?
I mean that would require knowing (and probably behaving like) the target machine language and platform.
Yes, the Go compiler has to know how the target operating system works, but it doesn't need to behave like the target OS, as the Go compiler will not run the compiled executable binary, it just needs to produce it.
All the Go tools need to know are the binary formats of the different operating systems, plus OS and architectural details (such as the instruction set, word size, endianness, alignment, available registers, etc.). And this knowledge is built into the Go tools.
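For example, on a Linux or macOS machine you can build a Windows/amd64 executable just by setting two environment variables for the build (hello.go stands for any ordinary Go source file):

    GOOS=windows GOARCH=amd64 go build -o hello.exe hello.go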

Why is bytecode JIT compiled at execution time and not at installation time?

Compiling a program to bytecode instead of native code enables a certain level of portability, as long as a fitting virtual machine exists.
But I'm kinda wondering, why delay the compilation? Why not simply compile the byte code when installing an application?
And if that is done, why not adopt it for languages that currently compile directly to native code? Compile them to an intermediate format, distribute a "JIT" compiler with the installer, and compile on the target machine.
The only thing I can think of is runtime optimization. That's about the only major thing that can't be done at installation time. Thoughts?
Often it is precompiled. Consider, for example, precompiling .NET code with NGEN.
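As a concrete illustration (the assembly name is hypothetical), ngen.exe ships with the .NET Framework and precompiles an assembly's IL into a native image on the machine where it is installed, typically from an installer or an elevated prompt:

    ngen install MyApp.exe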
One reason for not precompiling everything would be extensibility. Consider those languages which allow use of reflection to load additional code at runtime.
Some JIT Compilers (Java HotSpot, for example) use type feedback based inlining. They track which types are actually used in the program, and inline function calls based on the assumption that what they saw earlier is what they will see later. In order for this to work, they need to run the program through a number of iterations of its "hot loop" in order to know what types are used.
This optimization is totally unavailable at install time.
The bytecode has been compiled just as well as the C++ code has been compiled.
Also, the JIT compilers, i.e. the .NET and Java runtimes, are massive and dynamic; and you can't foresee in a program which parts the app will use, so you need the entire runtime.
Also one has to realize that a language targeted to a virtual machine has very different design goals than a language targeted to bare metal.
Take C++ vs. Java.
C++ wouldn't work on a VM; in particular, a lot of the C++ language design is geared towards RAII.
Java wouldn't work on bare metal for many reasons; primitive types, for one.
EDIT: As delnan correctly points out, JIT and similar technologies, though hugely beneficial to bytecode performance, would likely not be available at install time. Also, compiling for a VM is very different from compiling to native code.

Resources