LLVM translation unit - debugging

I'm trying to understand the high-level structure of an LLVM program.
I read in a book that "programs are composed of modules, each of which corresponds to a translation unit". Can someone explain the above in more detail, and what the difference is between modules and translation units (if any)?
I am also interested to know which part of the code is called when a translation unit starts and completes debugging information encoding.

"Translation unit" is a term from the language standard. For example, this is from C (the C99 ISO draft):
5.1 Conceptual models; 5.1.1 Translation environment; 5.1.1.1 Program structure
A C program need not all be translated at the same time. The text of the program is kept
in units called source files, (or preprocessing files) in this International Standard. A
source file together with all the headers and source files included via the preprocessing
directive #include is known as a preprocessing translation unit. After preprocessing, a
preprocessing translation unit is called a translation unit.
So, a translation unit is a single source file (file.c) after preprocessing (all #included *.h files are instantiated, all macros are expanded, all comments are stripped, and the file is ready for tokenizing).
A translation unit is the unit of compilation, because it does not depend on any external resources until the linking step. All headers are already part of the TU.
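A small sketch of what that means in practice (hypothetical file names; gcc -E shows the resulting translation unit):
/* point.h */
struct point { int x; int y; };

/* main.c -- after preprocessing (e.g. "gcc -E main.c"), the text of point.h
   appears directly in the translation unit, which is then compiled on its own */
#include "point.h"
int origin_x(void) {
    struct point p = { 0, 0 };
    return p.x;
}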
The term "module" is not defined in the language standard, but AFAIK it refers to a translation unit at later translation phases.
LLVM describes it as: http://llvm.org/docs/ProgrammersManual.html
The Module class represents the top level structure present in LLVM programs. An LLVM module is effectively either a translation unit of the original program or a combination of several translation units merged by the linker.
The Module class keeps track of a list of Functions, a list of GlobalVariables, and a SymbolTable. Additionally, it contains a few helpful member functions that try to make common operations easy.
About this part of your question:
I am also interested to know which part of the code is called when a translation unit starts and completes debugging information encoding.
This depends on how LLVM is used. LLVM itself is a library and can be used in various ways.
For clang/LLVM (a C/C++ compiler built on libclang and LLVM), the translation unit is created after the preprocessing stage. It is then parsed into an AST, translated into LLVM assembly, and saved in a Module.
For a tutorial example of creating Modules, see http://llvm.org/releases/2.6/docs/tutorial/JITTutorial1.html
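To get a feel for what a Module is, here is a minimal sketch using the LLVM-C API (it assumes an installed LLVM with the llvm-c headers; the tutorial above uses the C++ API instead):
/* Create an empty Module -- the in-memory equivalent of one translation
   unit -- and print it as textual LLVM assembly. */
#include <llvm-c/Core.h>
#include <stdio.h>

int main(void) {
    LLVMContextRef ctx = LLVMContextCreate();
    /* One Module usually corresponds to one translation unit (e.g. foo.c). */
    LLVMModuleRef mod = LLVMModuleCreateWithNameInContext("my_translation_unit", ctx);

    char *ir = LLVMPrintModuleToString(mod);  /* textual LLVM IR */
    printf("%s", ir);
    LLVMDisposeMessage(ir);

    LLVMDisposeModule(mod);
    LLVMContextDispose(ctx);
    return 0;
}
Building it looks something like: cc module.c $(llvm-config --cflags --ldflags --libs core).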

Related

Where is __builtin_va_start defined?

I'm trying to locate where __builtin_va_start is defined in GCC's source code, and see how it is implemented. (I was looking for where va_start is defined and then found that this macro is defined as __builtin_va_start.) I used cscope -r in GCC 9.1's source code directory to search for the definition but haven't found it. Can anyone point out where this function is defined?
That __builtin_va_start is not defined anywhere. It is a GCC compiler builtin (a bit like sizeof is a compile-time operator). It is an implementation detail related to the <stdarg.h> standard header (provided by the compiler, not the C standard library implementation libc). What really matters are the calling conventions and ABI followed by the generated assembler.
GCC has special code to deal with compiler builtins. That code does not define the builtin; it implements its ad-hoc behavior inside the compiler. __builtin_va_start is expanded into some compiler-specific internal representation of your compiled C/C++ code, specific to GCC (some GIMPLE, perhaps).
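To make that concrete, here is a minimal sketch of a variadic function; with GCC, running it through gcc -E shows va_start being rewritten to the builtin (the exact expansion varies between GCC versions):
/* sum.c -- va_start comes from <stdarg.h>, which GCC's own header defines in
   terms of __builtin_va_start; no library code "defines" the builtin. */
#include <stdarg.h>
#include <stdio.h>

static int sum(int count, ...) {
    va_list ap;
    va_start(ap, count);           /* typically expands to __builtin_va_start(ap, count) */
    int total = 0;
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);  /* va_arg is likewise builtin-backed */
    va_end(ap);
    return total;
}

int main(void) {
    printf("%d\n", sum(3, 1, 2, 3));   /* prints 6 */
    return 0;
}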
From a comment of yours, I would infer that you are interested in implementation details. But that should be in your question.
If you study the GCC 9.1 source code, look inside gcc-9.1.0/gcc/builtins.c (the expand_builtin_va_start function there), and for other builtins inside gcc-9.1.0/gcc/c-family/c-cppbuiltin.c, gcc-9.1.0/gcc/cppbuiltin.c, and gcc-9.1.0/gcc/jit/jit-builtins.c.
You could write your own GCC plugin (in Q2 2019, for GCC 9; the C++ code of your plugin might have to change for the future GCC 10) to add your own GCC builtins. BTW, you might even overload the behavior of the existing __builtin_va_start with your own specific code, and/or you might have - at least for research purposes - your own stdarg.h header with #define va_start(v,l) __my_builtin_va_start(v,l) and have your GCC plugin understand your __my_builtin_va_start plugin-specific builtin. Be aware, however, of the GCC runtime library exception and read its rationale: I am not a lawyer, but I tend to believe that you should (and that legal document requires you to) publish your GCC plugin under some open source license.
You first need to read a textbook on compilers, such as the Dragon book, to understand that an optimizing compiler is mostly transforming internal representations of your compiled code.
You further need to spend months studying the many internal representations of GCC. Remember, GCC is a very complex program (about ten million lines of code). Don't expect to understand it with only a few days of work. Look at the GCC Resource Center website.
My dead GCC MELT project had references and slides explaining more of GCC (the design philosophy and architecture of GCC change slowly, so the concepts are still relevant even if individual details have changed). It took me almost ten years full time to partly understand some of the middle-end layers of GCC. I cannot transmit that knowledge in a Stack Overflow answer.
My draft Bismon report (work in progress, funded by H2020, so a lot of bureaucracy) has a dozen pages (in its sections §1.3 and 1.4) introducing the internal representations of GCC.

C code optimization by the compiler for Atmel Studio

I am using Atmel Studio 7, and in it the optimization level is -O1.
Can I check what portion of code is being optimized by the compiler itself?
If I disable optimization, my binary file size is 12 KB, and with optimization level -O1 the binary file size is 5.5 KB.
Can I check what portion of code is being optimized by the compiler itself?
All of the code is optimized by the compiler, i.e. affected by optimization flags, except:
Code that's dragged in from libraries (libgcc.a, libc.a, libm.a, lib<device>.a).
Startup code (crt<device>.o), which also includes the vector table, or code from other objects that already exist and are not (re-)compiled in the current compilation. The latter can happen with Makefiles when you change flags therein: if the modules do not depend on the Makefile itself, make will not rebuild them.
Code from assembly modules (*.S, *.sx, *.s), provided the preprocessed assembly code does not use conditional assembly by means of #ifdef __OPTIMIZE__ or similar (see the sketch at the end of this answer).
Code in inline assembly, provided the inline asm is not optimized away.
In order to determine whether anything of this is in effect, you can respectively:
Link with -Wl,-Map,file.map and inspect that map file (a text file). It will list which objects have been dragged from where due to which undefined symbol.
Startup code is linked unless you use -nostartfiles. Add -Wl,-v to the link stage and you'll see crt<device>.o being linked.
You know your compilation units, assembly modules, don't you?
Add -save-temps to the compilation. Inline asm will show up in the intermediate *.s file as:
/* #APP */
; <line> "<compilation-unit>"
<inline-asm-code>
/* #NOAPP */
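For the conditional-assembly point above, here is a minimal C sketch of an __OPTIMIZE__ check (GCC defines this macro when compiling with optimization enabled, so the generated code genuinely differs between -O0 and -O1 builds):
/* opt_check.c */
#include <stdio.h>

#ifdef __OPTIMIZE__
#  define BUILD_MODE "optimized"
#else
#  define BUILD_MODE "not optimized"
#endif

int main(void) {
    puts("built " BUILD_MODE);   /* output depends on the -O flag used */
    return 0;
}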

What is a library unit in Chicken Scheme?

The terms "unit" and "library unit" are used in many places on the web site, but I failed to find documentation or even definitions of these terms. The only description that I found is in "User's Manual/Supported language/Declarations/(unit|uses)". Also there is an example in "User's Manual/Using the compiler/An example with multiple files". As you can see, very scarce.
If I ever get a response, the next question is how are "units" related to modules described in "User's Manual/Supported language/Modules"? I suppose that "units" somehow relate to compilation, while modules relate to Scheme value names.
Unit is short for "unit of compilation", which is basically a compiled library. If you look at the source code for CHICKEN, you'll notice that each unit from the manual corresponds (roughly) to a source file. Each source file is compiled separately into a .o file, and these units are all linked together into libchicken.so/libchicken.a.
This terminology is not very relevant anymore, except when you're linking statically. Then you need (declare (uses ...)), which refers to the unit name. This is needed, because the toplevel of the particular unit needs to run before the toplevels that depend upon it, so that any definitions are loaded.
In modern code you'll typically use only modules, but that means your code won't be statically linkable. We know this is confusing, which is why we're attempting to make static linking with modules easier with CHICKEN 5, and reducing the need to know about units.

Java: Do BOTH the compiler AND the JRE require access to all 3rd-party class files?

I have 15 years' C++ experience but am new to Java. I am trying to understand how the absence of header files is handled by Java. I have a few questions related to this issue.
Specifically, suppose that I write source code for a class 'A' that imports a 3rd-party class 'Z' (and uses Z). I understand that at compile-time, the Java compiler must have "access" to the information about Z in order to compile A.java, creating A.class. Therefore, either Z.java or Z.class (or a JAR containing one of these; say Z.jar) must be present on the local filesystem at compile time - correct?
Does the compiler use a class loader to load Z (to reiterate - at compile time)?
If I'm correct that a class loader is used at COMPILE time, what if a user-defined class loader (L) is desired - and is part of the project being compiled? Suppose, for example, that L is responsible for downloading Z.class AT RUNTIME across a network? In this scenario, how will the Java compiler obtain Z.class at compile time? Will it attempt to compile L first, and then use L at compile time to obtain Z?
I understand that when using Maven to build the project, Z.jar can be located in a remote repository over the internet at compile time - either on ibiblio, or in a custom repository defined in the POM file. I hope I'm correct that it is MAVEN that is responsible for downloading the 3rd-party JAR file at compile time, rather than the compiler's JVM?
Note, however, that at RUNTIME, A.class again requires Z.class - how will JRE know where to download Z.class from (without Maven to help)? Or is it the developer's responsibility to ship Z.class along with A.class with the application (say in the JAR file)? (...assuming a user-defined class loader is not used.)
Now a related question, just for confirmation: I assume that once compiled, A.class contains only symbolic links to Z.class - the bytecodes of Z.class are not part of A.class; please correct me if I'm wrong. (In C++, static linking would copy the bytes from Z.class into A.class, whereas dynamic linking would not.)
Another related question regarding the compilation process: once the necessary files describing Z are located on the CLASSPATH at compile time, does the compiler require the bytecodes from Z.class in order to compile A.java (and will build Z.class, if necessary, from Z.java), or does Z.java suffice for the compiler?
My overall confusion can be summarized as follows. It seems that the full [byte]code for Z needs to be present TWICE - once during compilation, and a second time during runtime - and that this must be true for ALL classes referenced by a Java program. In other words, every single class must be downloaded/present TWICE. Not a single class can be represented during compile time as just a header file (as it can be in C++).
Does the compiler use a class loader to load Z (to reiterate - at compile time)?
Almost. It uses a JavaFileManager which acts like a class loader in many ways. It does not actually load classes though since it needs to create class signatures from .java files as well as .class files.
I hope I'm correct that it is MAVEN that is responsible for downloading the 3rd-party JAR file at compile time, rather than the compiler's JVM?
Yes, Maven pulls down jars, although it is possible to implement a JavaFileManager that behaves like a URLClassLoader. Maven manages a local cache of jars, and will fill that cache from the network as needed.
Another related question regarding the compilation process: once the necessary files describing Z are located on the CLASSPATH at compile time, does the compiler require the bytecodes from Z.class in order to compile A.java (and will build Z.class, if necessary, from Z.java), or does Z.java suffice for the compiler?
It does not require all bytecode. Just class, method, and property signatures and metadata.
If A depends on Z, that dependency can be satisfied by a Z.java found on the source path, by a Z.class found on the class path or system class path, or via some custom extension like a Z.jsp.
My overall confusion can be summarized as follows. It seems that the full [byte]code for Z needs to be present TWICE - once during compilation, and a second time during runtime - and that this must be true for ALL classes referenced by a Java program. In other words, every single class must be downloaded/present TWICE. Not a single class can be represented during compile time as just a header file (as it can be in C++).
Maybe an example can help clear this up. The Java language specification requires the compiler to do certain optimizations: inlining of static final primitives and Strings.
If class A depends on B only for a constant:
class B {
    public static final String FOO = "foo";
}

class A {
    A() { System.out.println(B.FOO); }
}
then A can be compiled, loaded, and instantiated without B.class on the classpath.
If you changed and shipped a B.class with a different value of FOO, then A would still have that compile-time dependency (and keep printing the old value).
So it is possible to have a compile-time dependency and not a link-time dependency.
It is, of course, possible to have a runtime dependency without a compile-time dependency via reflection.
To summarize, at compile time, the compiler makes sure that the methods and properties a class accesses are available.
At class load time (runtime) the byte-code verifier checks that the expected methods and properties are really there. So the byte-code verifier double checks the assumptions the compiler makes (except for inlining assumptions such as those above).
It is possible to blur these distinctions. E.g. JSP uses a custom class loader that invokes the Java compiler to compile and load classes from source as needed at runtime.
The best way to understand how Maven fits into the picture is to realize that it (mostly) doesn't.
Maven is NOT INVOLVED in the processes by which the compiler finds definitions, or the runtime system loads classes. The compiler does this by itself ... based on what the build-time classpath says. By the time that you run the application, Maven is no longer in the picture at all.
At build time, Maven's role is to examine the project dependencies declared in the POM files, check versions, download missing projects, put the JARs in a well known place and create a "classpath" for the compiler (and other tools) to use.
The compiler then "loads" the classes that it needs from those JAR files to extract the type signature information from the compiled class files. It doesn't use a regular class loader to do this, but the basic algorithm for locating the classes is the same.
Once the compiler is done, Maven takes care of packaging into JAR, WAR, EAR files and so on, as specified by the POM file(s). In the case of a WAR or EAR file, all of the required dependent JARs are packaged into the file.
No Maven-directed JAR downloading takes place at runtime. However, it is possible that running the application could involve downloading JAR files; e.g. if the application is deployed using Java WebStart. (But the JARs won't be downloaded from a Maven repository in this case ...)
Some more things to note:
Maven does not need to be in the picture at all. You could use an IDE to do the building, the Ant build tool (maybe with Ivy), Make, or even "dumb" shell scripts. Depending on the build mechanism, you may need to handle external dependencies by hand; e.g. figuring out which external JARs to download, where to put them, and so on.
The Java runtime system typically has to load more than the compiler does. The compiler only needs to load those classes that are necessary to type-check the classes that are being compiled.
For example, suppose class A has a method that uses class B as a parameter, and class B has a method that uses class C as a parameter. When compiling A, B needs to be loaded, but not C (unless A directly depends on C in some way). When executing A, both B and C need to be loaded.
A second example, suppose that class A depends on interface I with implementations IC1 and IC2. Unless A explicitly depends on IC1 or IC2, the compiler does not need to load them to compile A.
It is also possible to dynamically load classes at runtime; e.g. by calling Class.forName(className) where className is a string-valued expression.
You wrote:
For the example in your second bullet point - I'd think that the developer could choose to provide, at compile time, a stub file for B that does not include B's method that uses C, and A would compile just fine. This would confirm my assessment that, at compile time, what might be called "header" files with only the necessary functions declared (even as stubs) is perfectly allowed in Java - so it's just for convenience/convention that tools have evolved over time not to use a header/source file distinction. (Correct me if I'm wrong.)
It is not a convenience / evolutionary thing. Java has NEVER supported separate header files. James Gosling et al started from the position that header files and preprocessors were a bad idea.
Your hypothetical stub version of B would have to have all of the visible methods, constructors and fields of the real B, and the methods and constructors would have to have bodies. The stub B wouldn't compile otherwise. (I guess in theory, the bodies could be empty, return a dummy value or throw an unchecked exception.)
The problem with this approach is that it would be horribly fragile. If you made the smallest mistake in keeping the stub and full versions of B in step, the result would be that the class loader (at runtime) would report a fatal error.
By the way, C and C++ are pretty much the exception in having separate header files. In most other languages that support separate compilation (of different files comprising an application), the compiler can extract the interface information (e.g. signatures) from the implementation source code.
One other piece of the puzzle which may help: interfaces and abstract classes are compiled to class files as well. So when compiling A, ideally you would be compiling against the API and not necessarily the concrete class. If A uses interface B (which is implemented by Z), then at compile time you need class files for A and B, but at runtime you need class files for A, B, and Z. You are correct that all classes are dynamically linked (you can crack open the files, look at the bytecode, and see the fully qualified names in there; jclasslib is an excellent utility for inspecting class files and reading bytecode if you're curious). I can replace classes at runtime, but problems at runtime often result in various forms of LinkageError.
Often the decision on whether a class should be shipped with your compiled JAR files depends on your particular scenario. There are classes that are assumed to be available in every JRE implementation. But if I have my own API and implementation, I have to somehow provide both wherever they are run. There are some APIs, though, for example servlets, where I compile against the servlet API but the container (e.g. WebSphere) is responsible for providing the servlet API and its implementation at runtime (therefore I shouldn't ship my own copies of these).
I have 15 years' C++ experience but am new to Java.
The biggest challenge you are likely to face is that many things which are treated as important in C++, such as the sizeof() of an object, unsigned integers, and destructors, are not easy to do in Java; they are not treated with the same importance and have other solutions/workarounds.
I am trying to understand how the absence of header files is handled by Java. I have a few questions related to this issue.
Java has interfaces which are similar in concept to header files in the sense that they contain only declarations (and constants) without definitions. Classes are often paired with an interface for that class, sometimes one to one.
Does the compiler use a class loader to load Z (to reiterate - at compile time)?
When a class loader loads a class, it calls the static initialisation block, which can do just about anything. All the compiler needs to do is extract metadata from the class, not the byte code, and that is what it does.
it is MAVEN that is responsible for downloading the 3rd-party JAR file at compile time, rather than the compiler's JVM?
Maven must download the file to the local filesystem; the default location is ~/.m2/repository
how will JRE know where to download Z.class from (without Maven to help)?
It must either use Maven; some OSGi containers are able to load and unload different versions dynamically, for example you can change the version of a library in a running system, or update a SNAPSHOT from a Maven build.
Or you have a stand-alone application; using a Maven plugin like appassembler you can create a batch/shell script and a directory with a copy of all the libraries you need.
Or a web archive (WAR), which contains meta information and many JARs inside it. (It's just a JAR containing JARs. ;)
Or is it the developer's responsibility to ship Z.class along with A.class with the application
For a standalone application yes.
Now a related question, just for confirmation: I assume that once compiled, A.class contains only symbolic links to Z.class
Technically, it only contains strings naming Z, not the .class itself. You can change a lot of Z without compiling A again and it will still work; e.g. you might compile against one version of Z and replace it with another version later, and the application can still run. You can even replace it while the application is running. ;)
the bytecodes of Z.class are not part of A.class;
The compiler does next to no optimisations. The only significant one, IMHO, is that it inlines compile-time constants. This means that if you change a constant in Z after compiling A, it may not change in A. (If you make the constant not known at compile time, it won't be inlined.)
No byte-code is inlined; native code compiled from the byte code is inlined at runtime based on how the program actually runs. E.g. say you have a virtual method with N implementations. A C++ compiler wouldn't know which ones to inline, especially as they might not be available at compile time. However, the JVM can see which ones are used the most (it collects stats as the program runs) and can inline the two most commonly used implementations. (Food for thought as to what happens when you remove/update one of those classes at runtime ;)
please correct me if I'm wrong. (In C++, static linking would copy the bytes from Z.class into A.class, whereas dynamic linking would not.)
Java has only dynamic linking, but this doesn't prevent inlining of code at runtime, which is as efficient as using a macro.
Another related question regarding the compilation process: once the necessary files describing Z are located on the CLASSPATH at compile time, does the compiler require the bytecodes from Z.class in order to compile A.java (and will build Z.class, if necessary, from Z.java), or does Z.java suffice for the compiler?
The compiler will compile any .java files as required. You need only provide the .java, but it must compile (i.e. its dependencies must be available). However, if you use a .class file, not all of its dependencies need to be available to compile A.
My overall confusion can be summarized as follows. It seems that the full [byte]code for Z needs to be present TWICE - once during compilation, and a second time during runtime -
Technically, a class contains byte-code and metadata such as method signatures, fields, and constants. None of the byte-code is used at compile time, only the meta-information. The byte-code at compile time does not need to match what is used at runtime (the signatures/fields used do). It is just simpler to have one copy of each class, but you could use a stripped-down version at compile time if you needed to for some purpose.
and that this must be true for ALL classes referenced by a Java program. In other words, every single class must be downloaded/present TWICE. Not a single class can be represented during compile time as just a header file (as it can be in C++).
It only needs to be downloaded once, as it sits in a repository or somewhere on your disk. The interfaces, like headers, may be all you need at compile time, and these could be a separate library; but typically they are not, as it is just simpler to have a single archive in most cases (OSGi is the only example I know of where it is worth separating them).
Your summary is correct; however, I would like to add that if you compile to a JAR, then the JAR will contain Z (and if Z is a JAR, only the files inside the Z JAR that are needed).
However, the same Z can be used for both compile time and runtime.
Simply put, no. If you look at, say, JDBC code, it is compiled against an interface, which for this purpose acts like a header file, and uses reflection to pull in the right implementation at runtime. The drivers do not need to be present at all on the build machine, though these days a cleaner way to do this kind of thing is via a dependency injection framework.
In any case, there's nothing stopping you from compiling against one 'header' class file and then running against the actual class file (Java is mostly dynamically linked), but this just seems to be making extra work for yourself.

Difference between API and ABI

I am new to Linux system programming and I came across API and ABI while reading Linux System Programming.
Definition of API:
An API defines the interfaces by which one piece of software communicates with another at the source level.
Definition of ABI:
Whereas an API defines a source interface, an ABI defines the low-level binary interface between two or more pieces of software on a particular architecture. It defines how an application interacts with itself, how an application interacts with the kernel, and how an application interacts with libraries.
How can a program communicate at a source level? What is a source level? Is it related to source code in any way? Or does the source of the library get included in the main program?
The only difference I know of is that an API is mostly used by programmers and an ABI is mostly used by a compiler.
API: Application Program Interface
This is the set of public types/variables/functions that you expose from your application/library.
In C/C++ this is what you expose in the header files that you ship with the application.
ABI: Application Binary Interface
This is how the compiler builds an application.
It defines things such as (but is not limited to):
How parameters are passed to functions (registers/stack).
Who cleans parameters from the stack (caller/callee).
Where the return value is placed for return.
How exceptions propagate.
The API is what humans use. We write source code. When we write a program and want to use some library function we write code like:
long howManyDecibels = 123L;
int ok = livenMyHills(howManyDecibels);
and we needed to know that there is a method livenMyHills(), which takes a long integer parameter. So, as a Programming Interface, it's all expressed in source code. The compiler turns this into executable instructions which conform to the implementation of this language on this particular operating system, and which in this case result in some low-level operations on an audio unit. So particular bits and bytes are squirted at some hardware. At runtime there's lots of binary-level action going on which we don't usually see.
At the binary level there must be a precise definition of what bytes are passed, for example the order of bytes in a 4-byte integer, or the layout of a complex data structure (are there padding bytes to align some values?). This definition is the ABI.
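As a small, hypothetical C sketch of that: the member layout the compiler chooses, including any padding, is an ABI matter, and sizeof/offsetof make it visible:
/* layout.c */
#include <stdio.h>
#include <stddef.h>

struct message {
    char tag;      /* 1 byte; on many ABIs followed by 3 bytes of padding ... */
    int  payload;  /* ... so that this int lands on a 4-byte boundary */
};

int main(void) {
    printf("sizeof(struct message) = %zu\n", sizeof(struct message));
    printf("offsetof(payload)      = %zu\n", offsetof(struct message, payload));
    return 0;
}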
I mostly come across these terms in the sense of an API-incompatible change, or an ABI-incompatible change.
An API change is essentially where code that would have compiled with the previous version won't work anymore. This can happen because you added an argument to a function, or changed the name of something accessible outside of your local code. Any time you change a header, and it forces you to change something in a .c/.cpp file, you've made an API-change.
An ABI change is where code that has already been compiled against version 1 will no longer work with version 2 of a codebase (usually a library). This is generally trickier to keep track of than API-incompatible change since something as simple as adding a virtual method to a class can be ABI incompatible.
I've found two extremely useful resources for figuring out what ABI compatibility is and how to preserve it:
The list of Do's and Don'ts with C++ for the KDE project
Ulrich Drepper's How to Write Shared Libraries.pdf (primary author of glibc)
Linux shared library minimal runnable API vs ABI example
This answer has been extracted from my other answer: What is an application binary interface (ABI)? but I felt that it directly answers this one as well, and that the questions are not duplicates.
In the context of shared libraries, the most important implication of "having a stable ABI" is that you don't need to recompile your programs after the library changes.
As we will see in the example below, it is possible to modify the ABI, breaking programs, even though the API is unchanged.
main.c
#include <assert.h>
#include <stdlib.h>
#include "mylib.h"

int main(void) {
    mylib_mystruct *myobject = mylib_init(1);
    assert(myobject->old_field == 1);
    free(myobject);
    return EXIT_SUCCESS;
}
mylib.c
#include <stdlib.h>
#include "mylib.h"

mylib_mystruct* mylib_init(int old_field) {
    mylib_mystruct *myobject;
    myobject = malloc(sizeof(mylib_mystruct));
    myobject->old_field = old_field;
    return myobject;
}
mylib.h
#ifndef MYLIB_H
#define MYLIB_H

typedef struct {
    int old_field;
} mylib_mystruct;

mylib_mystruct* mylib_init(int old_field);

#endif
Compiles and runs fine with:
cc='gcc -pedantic-errors -std=c89 -Wall -Wextra'
$cc -fPIC -c -o mylib.o mylib.c
$cc -L . -shared -o libmylib.so mylib.o
$cc -L . -o main.out main.c -lmylib
LD_LIBRARY_PATH=. ./main.out
Now, suppose that for v2 of the library, we want to add a new field to mylib_mystruct called new_field.
If we added the field before old_field as in:
typedef struct {
    int new_field;
    int old_field;
} mylib_mystruct;
and rebuilt the library but not main.out, then the assert fails!
This is because the line:
myobject->old_field == 1
had generated assembly that is trying to access the very first int of the struct, which is now new_field instead of the expected old_field.
Therefore this change broke the ABI.
If, however, we add new_field after old_field:
typedef struct {
    int old_field;
    int new_field;
} mylib_mystruct;
then the old generated assembly still accesses the first int of the struct, and the program still works, because we kept the ABI stable.
Here is a fully automated version of this example on GitHub.
Another way to keep this ABI stable would have been to treat mylib_mystruct as an opaque struct, and only access its fields through method helpers. This makes it easier to keep the ABI stable, but would incur a performance overhead as we'd do more function calls.
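A minimal sketch of that opaque-struct approach, reusing the mylib example (the accessor name is hypothetical):
/* mylib.h (opaque version): callers only ever see a forward declaration, so
   fields can be added or reordered without breaking the ABI. */
typedef struct mylib_mystruct mylib_mystruct;
mylib_mystruct* mylib_init(int old_field);
int mylib_get_old_field(const mylib_mystruct *s);

/* mylib.c (opaque version): the struct layout lives only here. */
#include <stdlib.h>
#include "mylib.h"
struct mylib_mystruct { int old_field; };
mylib_mystruct* mylib_init(int old_field) {
    mylib_mystruct *myobject = malloc(sizeof(*myobject));
    myobject->old_field = old_field;
    return myobject;
}
int mylib_get_old_field(const mylib_mystruct *s) { return s->old_field; }
main.c would then call mylib_get_old_field(myobject) instead of reading myobject->old_field directly.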
API vs ABI
In the previous example, it is interesting to note that adding new_field before old_field only broke the ABI, but not the API.
What this means is that if we had recompiled our main.c program against the library, it would have worked regardless.
We would also have broken the API however if we had changed for example the function signature:
mylib_mystruct* mylib_init(int old_field, int new_field);
since in that case, main.c would stop compiling altogether.
Semantic API vs Programming API
We can also classify API changes in a third type: semantic changes.
The semantic API is usually a natural language description of what the API is supposed to do, usually included in the API documentation.
It is therefore possible to break the semantic API without breaking the program build itself.
For example, if we had modified
myobject->old_field = old_field;
to:
myobject->old_field = old_field + 1;
then this would have broken neither the programming API nor the ABI, but the semantic API that main.c relies on would break.
There are two ways to programmatically check the API contract:
test a bunch of corner cases. Easy to do, but you might always miss one.
formal verification. Harder to do, but it produces a mathematical proof of correctness, essentially unifying documentation and tests into a "human" / machine-verifiable form! As long as there isn't a bug in your formal description, of course ;-)
Tested in Ubuntu 18.10, GCC 8.2.0.
These are my layman explanations:
API - think of include files. They provide programming interfaces.
ABI - think of a kernel module. When you run it on some kernel, it has to agree on how to communicate without include files, i.e. as a low-level binary interface.
(Application Binary Interface) A specification for a specific hardware platform combined with the operating system. It is one step beyond the API (Application Program Interface), which defines the calls from the application to the operating system. The ABI defines the API plus the machine language for a particular CPU family. An API does not ensure runtime compatibility, but an ABI does, because it defines the machine language, or runtime, format.
Courtesy
Let me give a specific example how ABI and API differ in Java.
An ABI-incompatible change is if I change a method A#m() from taking a String argument to a String... argument. This is not ABI compatible because you have to recompile the code that calls it, but it is API compatible, as you can resolve it by recompiling without any code changes in the caller.
Here is the example spelled out. I have my Java library with class A:
// Version 1.0.0
public class A {
    public void m(String string) {
        System.out.println(string);
    }
}
And I have a class that uses this library
public class Main {
    public static void main(String[] args) {
        (new A()).m("string");
    }
}
Now, the library author compiled their class A, I compiled my class Main, and it is all working nicely. Imagine a new version of A comes:
// Version 2.0.0
public class A {
    public void m(String... string) {
        System.out.println(string[0]);
    }
}
If I just take the new compiled class A and drop it in together with the previously compiled class Main, I get an exception on attempting to invoke the method:
Exception in thread "main" java.lang.NoSuchMethodError: A.m(Ljava/lang/String;)V
at Main.main(Main.java:5)
If I recompile Main, this is fixed and all is working again.
Your program (source code) can be compiled with modules that provide the proper API.
Your program (binary) can run on platforms that provide the proper ABI.
An API restricts the type definitions, function definitions, macros, and sometimes the global variables a library should expose.
An ABI restricts what a "platform" should provide for your program to run on. I like to consider it at 3 levels:
processor level - the instruction set, the calling convention
kernel level - the system call convention, the special file path convention (e.g. the /proc and /sys files in Linux), etc.
OS level - the object format, the runtime libraries, etc.
Consider a cross-compiler named arm-linux-gnueabi-gcc. "arm" indicates the processor architecture, "linux" indicates the kernel, and "gnu" indicates that its target programs use GNU's libc as the runtime library, unlike arm-linux-androideabi-gcc, whose target programs use Android's libc implementation.
API - Application Programming Interface is a compile-time interface which is used by a developer to use non-project functionality (library, OS, core calls) in source code.
ABI - Application Binary Interface is a runtime interface which is used by a program during execution for communication between components in machine code.
The ABI refers to the layout of an object file / library and final binary from the perspective of successfully linking, loading, and executing certain binaries without link errors or logic errors occurring due to binary incompatibility. It encompasses:
The binary format specification (PE, COFF, ELF, .obj, .o, .a, .lib (import library, static library), .NET assembly, .pyc, COM .dll): the headers, the header format, defining where the sections are and where the import / export / exception tables are and the format of those
The instruction set used to encode the bytes in the code section, as well as the specific machine instructions
The actual signature of the functions and data as defined in the API (as well as how they are represented in the binary (the next 2 points))
The calling convention of the functions in the code section, which may be called by other binaries (particularly relevant to ABI compatibility being the functions that are actually exported)
The way data is represented and aligned in the data section with respect to its type (particularly relevant to ABI compatibility being the data that is actually exported)
The system call numbers or interrupt vectors hooked in the code
The name decoration of exported functions and data
Linker directives in object files
Preprocessor / compiler / assembler / linker flags and directives used by the API programmer and how they are interpreted to omit, optimise, inline or change the linkage of certain symbols or code in the library or final binary (be that binary a .dll or the executable in the event of static linking)
The bytecode format of .NET C# is an ABI (general), which includes the .NET assembly .dll format. The virtual machine that interprets the bytecode has a specific ABI that is C++ based, where types need to be marshalled between native C++ types that the native code's specific ABI uses and the boxed types of the virtual machine's ABI when calling bytecode from native code and native code from bytecode. Here I am calling an ABI of a specific program a specific ABI, whereas an ABI in general, such as 'MS ABI' or 'C ABI' simply refers to the calling convention and the way structures are organised, but not a specific embodiment of the ABI by a specific binary that introduces a new level of ABI compatibility concerns.
An API refers to the set of type definitions exported by a particular library imported and used in a particular translation unit, from the perspective of the compiler of a translation unit, to successfully resolve and check type references to be able to compile a binary, and that binary will adhere to the standard of the target ABI, such that if the library that actually implements the API is also compiled to a compatible ABI, it will link and work as intended. If the API is updated the application may still compile, but there will now be a binary incompatibility and therefore a new binary needs to be used.
An API involves:
Functions, variables, classes, objects, constants, their names, types and definitions presented in the language in which they are coded in a syntactically and semantically correct manner
What those functions actually do and how to use them in the source language
The source code files that need to be included / binaries that need to be linked to in order to make use of them, and the ABI compatibility thereof
I'll begin by answering your specific questions.
1. What is a source level? Is it related to source code in any way?
Yes, the term source level refers to the level of source code. The term level refers to the semantic level of the computation requirements as they get translated from the application domain level to the source code level, and from the source code level to the machine code level (binary code). The application domain level refers to what end-users of the software want and specify as their computation requirements. The source code level refers to what programmers make of the application level requirements and then specify as a program in a certain language.
2. How can a program communicate at a source level? Or does the source of the library get included in the main program?
A language API refers specifically to all that a language requires (specifies), hence "interfaces", to write reusable modules in that language. A reusable program conforms to these interface (API) requirements to be reused in other programs in the same language. Every reuse needs to conform to the same API requirements as well. So, the word "communicate" refers to reuse.
Yes, source code (of a reusable module; in the case of C/C++, .h files) getting included (copied at the preprocessing stage) is the common way of reusing in C/C++ and is thus part of the C/C++ API. Even when you just write a simple function foo() in the global space of a C++ program and then call it as foo(); any number of times, that is reuse as per the C++ language API. Java classes in Java packages are reusable modules in Java. The JavaBeans specification is also a Java API, enabling reusable programs (beans) to be reused by other modules (which could be other beans) with the help of runtimes/containers (conforming to that specification).
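A minimal C sketch of that kind of source-level reuse (hypothetical file and function names):
/* foo.h -- the source-level interface (API) of the reusable module */
void foo(void);

/* foo.c -- the implementation, compiled as its own translation unit */
#include <stdio.h>
#include "foo.h"
void foo(void) { puts("hello from foo"); }

/* main.c -- reuses foo() simply by including its header */
#include "foo.h"
int main(void) {
    foo();   /* reuse, any number of times */
    foo();
    return 0;
}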
Coming to your overall question of the difference between language API and ABI, and how service-oriented APIs compare with language APIs, my answer here on SO should be helpful.
