Difference between API and ABI

I am new to Linux system programming and I came across API and ABI while reading
Linux System Programming.
Definition of API:
An API defines the interfaces by which one piece of software communicates with another at the source level.
Definition of ABI:
Whereas an API defines a source interface, an ABI defines the low-level binary interface between two or more pieces of software on a particular architecture. It defines how an application interacts with itself, how an application interacts with the kernel, and how an application interacts with libraries.
How can a program communicate at a source level? What is a source level? Is it related to source code in any way? Or the source of the library gets included in the main program?
The only difference I know of is that an API is mostly used by programmers and an ABI is mostly used by the compiler.

API: Application Program Interface
This is the set of public types/variables/functions that you expose from your application/library.
In C/C++ this is what you expose in the header files that you ship with the application.
ABI: Application Binary Interface
This is how the compiler builds an application.
It defines things such as (but not limited to):
How parameters are passed to functions (registers/stack).
Who cleans parameters from the stack (caller/callee).
Where the return value is placed for return.
How exceptions propagate.
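As a small illustration of the first two points, here is a minimal sketch (assuming GCC or Clang on x86-64 Linux, i.e. the System V calling convention; other platforms differ):
/* add_two.c - under the x86-64 System V ABI the first two integer
 * arguments arrive in the edi and esi registers and the result is
 * returned in eax. A caller built against a different convention
 * could not call this correctly, even though the C source is identical. */
int add_two(int a, int b) {  /* a in edi, b in esi */
    return a + b;            /* result left in eax */
}
Compiling with gcc -S add_two.c makes the convention visible in the generated assembly.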

The API is what humans use. We write source code. When we write a program and want to use some library function we write code like:
long howManyDecibels = 123L;
int ok = livenMyHills(howManyDecibels);
and we needed to know that there is a method livenMyHills() which takes a long integer parameter. So, as a Programming Interface, it's all expressed in source code. The compiler turns this into executable instructions which conform to the implementation of this language on this particular operating system, and which in this case result in some low-level operations on an audio unit. So particular bits and bytes are squirted at some hardware. At runtime there's a lot of binary-level action going on which we don't usually see.
At the binary level there must be a precise definition of exactly what bytes are passed, for example the order of bytes in a 4-byte integer, or the layout of a complex data structure - are there padding bytes to align some values? This definition is the ABI.
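A minimal sketch of what that means for data layout (assuming a typical ABI where int is 4 bytes and 4-byte aligned; the exact numbers are the ABI's choice, not the C source's):
#include <stdio.h>
#include <stddef.h>

struct mixed {
    char c;  /* 1 byte */
    int  i;  /* most ABIs insert 3 padding bytes before this member */
};

int main(void) {
    /* On a typical ABI this prints "offset of i = 4, sizeof = 8". */
    printf("offset of i = %zu, sizeof = %zu\n",
           offsetof(struct mixed, i), sizeof(struct mixed));
    return 0;
}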

I mostly come across these terms in the sense of an API-incompatible change, or an ABI-incompatible change.
An API change is essentially one where code that would have compiled with the previous version no longer compiles. This can happen because you added an argument to a function, or changed the name of something accessible outside of your local code. Any time you change a header, and it forces you to change something in a .c/.cpp file, you've made an API change.
An ABI change is one where code that has already been compiled against version 1 will no longer work with version 2 of a codebase (usually a library). This is generally trickier to keep track of than an API-incompatible change, since something as simple as adding a virtual method to a class can be ABI incompatible.
I've found two extremely useful resources for figuring out what ABI compatibility is and how to preserve it:
The list of Do's and Don'ts with C++ for the KDE project
How to Write Shared Libraries.pdf by Ulrich Drepper (primary author of glibc)

Linux shared library minimal runnable API vs ABI example
This answer has been extracted from my other answer: What is an application binary interface (ABI)? but I felt that it directly answers this one as well, and that the questions are not duplicates.
In the context of shared libraries, the most important implication of "having a stable ABI" is that you don't need to recompile your programs after the library changes.
As we will see in the example below, it is possible to modify the ABI, breaking programs, even though the API is unchanged.
main.c
#include <assert.h>
#include <stdlib.h>
#include "mylib.h"
int main(void) {
    mylib_mystruct *myobject = mylib_init(1);
    assert(myobject->old_field == 1);
    free(myobject);
    return EXIT_SUCCESS;
}
mylib.c
#include <stdlib.h>
#include "mylib.h"
mylib_mystruct* mylib_init(int old_field) {
    mylib_mystruct *myobject;
    myobject = malloc(sizeof(mylib_mystruct));
    myobject->old_field = old_field;
    return myobject;
}
mylib.h
#ifndef MYLIB_H
#define MYLIB_H
typedef struct {
    int old_field;
} mylib_mystruct;
mylib_mystruct* mylib_init(int old_field);
#endif
Compiles and runs fine with:
cc='gcc -pedantic-errors -std=c89 -Wall -Wextra'
$cc -fPIC -c -o mylib.o mylib.c
$cc -L . -shared -o libmylib.so mylib.o
$cc -L . -o main.out main.c -lmylib
LD_LIBRARY_PATH=. ./main.out
Now, suppose that for v2 of the library, we want to add a new field to mylib_mystruct called new_field.
If we added the field before old_field as in:
typedef struct {
    int new_field;
    int old_field;
} mylib_mystruct;
and rebuilt the library but not main.out, then the assert fails!
This is because the line:
myobject->old_field == 1
had generated assembly that is trying to access the very first int of the struct, which is now new_field instead of the expected old_field.
Therefore this change broke the ABI.
If, however, we add new_field after old_field:
typedef struct {
    int old_field;
    int new_field;
} mylib_mystruct;
then the old generated assembly still accesses the first int of the struct, and the program still works, because we kept the ABI stable.
Here is a fully automated version of this example on GitHub.
Another way to keep this ABI stable would have been to treat mylib_mystruct as an opaque struct, and only access its fields through method helpers. This makes it easier to keep the ABI stable, but would incur a performance overhead as we'd do more function calls.
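A minimal sketch of that opaque-struct approach (the accessor name mylib_get_old_field is hypothetical, not part of the example above):
/* mylib.h - only declares the struct; callers never see its layout */
typedef struct mylib_mystruct mylib_mystruct;
mylib_mystruct* mylib_init(int old_field);
int mylib_get_old_field(const mylib_mystruct *s);

/* mylib.c - the layout lives only here, so fields can be added or
 * reordered in a later version without breaking callers' ABI */
struct mylib_mystruct {
    int old_field;
};

int mylib_get_old_field(const mylib_mystruct *s) {
    return s->old_field;
}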
API vs ABI
In the previous example, it is interesting to note that adding new_field before old_field only broke the ABI, not the API.
What this means is that if we had recompiled our main.c program against the new version of the library, it would have worked regardless.
We would, however, also have broken the API if we had changed, for example, the function signature:
mylib_mystruct* mylib_init(int old_field, int new_field);
since in that case, main.c would stop compiling altogether.
Semantic API vs Programming API
We can also classify API changes in a third type: semantic changes.
The semantic API is usually a natural-language description of what the API is supposed to do, usually included in the API documentation.
It is therefore possible to break the semantic API without breaking the program build itself.
For example, if we had modified
myobject->old_field = old_field;
to:
myobject->old_field = old_field + 1;
then this would have broken neither the programming API nor the ABI, but the semantic API would be broken.
There are two ways to programmatically check the semantic contract:
test a bunch of corner cases. Easy to do, but you might always miss one.
formal verification. Harder to do, but produces mathematical proof of correctness, essentially unifying documentation and tests into a "human" / machine verifiable manner! As long as there isn't a bug in your formal description of course ;-)
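For the first approach, a few extra asserts already catch the hypothetical old_field + 1 change (a minimal sketch reusing the mylib example above; test_mylib.c is a hypothetical extra file):
/* test_mylib.c - corner-case checks of mylib_init's semantic contract */
#include <assert.h>
#include <stdlib.h>
#include "mylib.h"

int main(void) {
    int values[] = {0, 1, -1, 42};
    for (size_t i = 0; i < sizeof(values) / sizeof(values[0]); i++) {
        mylib_mystruct *s = mylib_init(values[i]);
        /* fails against the "old_field + 1" version of the library */
        assert(s->old_field == values[i]);
        free(s);
    }
    return EXIT_SUCCESS;
}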
Tested in Ubuntu 18.10, GCC 8.2.0.

This is my layman explanations:
API - think of include files. They provide programming interfaces.
ABI - think of a kernel module. When you run it on some kernel, it has to agree with that kernel on how to communicate without include files, i.e. via a low-level binary interface.

(Application Binary Interface) A specification for a specific hardware platform combined with the operating system. It is one step beyond the API (Application Program Interface), which defines the calls from the application to the operating system. The ABI defines the API plus the machine language for a particular CPU family. An API does not ensure runtime compatibility, but an ABI does, because it defines the machine language, or runtime, format.

Let me give a specific example how ABI and API differ in Java.
An ABI-incompatible change is if I change a method A#m() from taking a String argument to a String... argument. This is not ABI compatible because you have to recompile code that calls it, but it is API compatible, as you can resolve it by recompiling without any code changes in the caller.
Here is the example spelled out. I have my Java library with class A
// Version 1.0.0
public class A {
    public void m(String string) {
        System.out.println(string);
    }
}
And I have a class that uses this library
public class Main {
    public static void main(String[] args) {
        (new A()).m("string");
    }
}
Now, the library author compiled their class A, I compiled my class Main and it is all working nicely. Imagine a new version of A comes
// Version 2.0.0
public class A {
    public void m(String... string) {
        System.out.println(string[0]);
    }
}
If I just take the new compiled class A and drop it together with the previously compiled class Main, I get an exception on attempt to invoke the method
Exception in thread "main" java.lang.NoSuchMethodError: A.m(Ljava/lang/String;)V
at Main.main(Main.java:5)
If I recompile Main, this is fixed and all is working again.

Your program (source code) can be compiled against modules that provide the proper API.
Your program (binary) can run on platforms that provide the proper ABI.
An API restricts the type definitions, function definitions, macros, and sometimes the global variables a library should expose.
An ABI restricts what a "platform" should provide for your program to run on. I like to consider it at 3 levels:
processor level - the instruction set, the calling convention
kernel level - the system call convention, the special file path convention (e.g. the /proc and /sys files in Linux), etc. (a small sketch follows at the end of this answer)
OS level - the object format, the runtime libraries, etc.
Consider a cross-compiler named arm-linux-gnueabi-gcc. "arm" indicates the processor architecture, "linux" indicates the kernel, and "gnu" indicates that its target programs use GNU's libc as their runtime library, unlike arm-linux-androideabi-gcc, whose target programs use Android's libc implementation.
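As a small illustration of the kernel level: the system call numbers and the registers they use are fixed by the kernel ABI, independently of which libc you link against (a sketch, assuming Linux and glibc's syscall() wrapper):
/* raw_write.c - bypass the libc write() wrapper and talk to the kernel ABI
 * directly; SYS_write is the syscall number this platform's kernel assigns. */
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    const char msg[] = "hello via the kernel ABI\n";
    syscall(SYS_write, 1, msg, sizeof(msg) - 1);
    return 0;
}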

API - Application Programming Interface is a compile-time interface that developers use to access non-project functionality (libraries, the OS, system calls) from source code.
ABI - Application Binary Interface is a runtime interface that a program uses while executing to communicate between components at the machine-code level.

The ABI refers to the layout of an object file / library and the final binary from the perspective of successfully linking, loading and executing certain binaries without link errors or logic errors occurring due to binary incompatibility. It includes:
The binary format specification (PE, COFF, ELF, .obj, .o, .a, .lib (import library, static library), .NET assembly, .pyc, COM .dll): the headers, the header format, defining where the sections are and where the import / export / exception tables are and the format of those
The instruction set used to encode the bytes in the code section, as well as the specific machine instructions
The actual signature of the functions and data as defined in the API (as well as how they are represented in the binary (the next 2 points))
The calling convention of the functions in the code section, which may be called by other binaries (particularly relevant to ABI compatibility being the functions that are actually exported)
The way data is represented and aligned in the data section with respect to its type (particularly relevant to ABI compatibility being the data that is actually exported)
The system call numbers or interrupt vectors hooked in the code
The name decoration of exported functions and data
Linker directives in object files
Preprocessor / compiler / assembler / linker flags and directives used by the API programmer and how they are interpreted to omit, optimise, inline or change the linkage of certain symbols or code in the library or final binary (be that binary a .dll or the executable in the event of static linking)
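As a small illustration of the exported-symbol and name-decoration points above (a sketch, assuming GCC or Clang on an ELF platform): which symbols end up in a shared library's dynamic symbol table, and under which names, is part of its ABI and can be inspected with nm -D.
/* visibility.c - build with: gcc -fPIC -shared -fvisibility=hidden visibility.c -o libvisibility.so */
__attribute__((visibility("default")))
int public_entry_point(int x) {      /* exported: part of the library's ABI */
    return x * 2;
}

static int internal_helper(int x) {  /* internal linkage: never part of the ABI */
    return x + 1;
}

int hidden_by_flag(int x) {          /* hidden by -fvisibility=hidden: not exported */
    return internal_helper(x);
}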
The bytecode format of .NET C# is an ABI (general), which includes the .NET assembly .dll format. The virtual machine that interprets the bytecode has a specific ABI that is C++ based, where types need to be marshalled between native C++ types that the native code's specific ABI uses and the boxed types of the virtual machine's ABI when calling bytecode from native code and native code from bytecode. Here I am calling an ABI of a specific program a specific ABI, whereas an ABI in general, such as 'MS ABI' or 'C ABI' simply refers to the calling convention and the way structures are organised, but not a specific embodiment of the ABI by a specific binary that introduces a new level of ABI compatibility concerns.
An API refers to the set of type definitions exported by a particular library, imported and used in a particular translation unit, from the perspective of the compiler of that translation unit: what it needs in order to successfully resolve and type-check references and compile a binary. That binary will adhere to the standard of the target ABI, such that if the library that actually implements the API is also compiled to a compatible ABI, it will link and work as intended. If the API is updated, the application may still compile, but there will now be a binary incompatibility and therefore a new binary needs to be used.
An API involves:
Functions, variables, classes, objects, constants, their names, types and definitions presented in the language in which they are coded in a syntactically and semantically correct manner
What those functions actually do and how to use them in the source language
The source code files that need to be included / binaries that need to be linked to in order to make use of them, and the ABI compatibility thereof

I'll begin by answering your specific questions.
1. What is a source level? Is it related to source code in any way?
Yes, the term source level refers to the level of source code. The term level refers to the semantic level at which the computation requirements are expressed as they get translated from the application domain level to the source code level, and from the source code level to the machine code level (binary code). The application domain level refers to what end-users of the software want and specify as their computation requirements. The source code level refers to what programmers make of the application-level requirements and then specify as a program in a certain language.
2. How can a program communicate at a source level? Or does the source of the library get included in the main program?
A language API refers specifically to all that a language requires (specifies) - hence, interfaces - for writing reusable modules in that language. A reusable program conforms to these interface (API) requirements in order to be reused in other programs in the same language. Every reuse needs to conform to the same API requirements as well. So the word "communicate" refers to reuse.
Yes, source code (of a reusable module; in the case of C/C++, .h files) getting included (copied at the preprocessing stage) is the common way of reusing in C/C++ and is thus part of the C/C++ API. Even when you just write a simple function foo() in the global space of a C++ program and then call it as foo(); any number of times, that is reuse as per the C++ language API. Java classes in Java packages are reusable modules in Java. The JavaBeans specification is also a Java API, enabling reusable programs (beans) to be reused by other modules (which could themselves be beans) with the help of runtimes/containers conforming to that specification.
Coming to your overall question of the difference between language API and ABI, and how service-oriented APIs compare with language APIs, my answer here on SO should be helpful.

Related

Where is __builtin_va_start defined?

I'm trying to locate where __builtin_va_start is defined in GCC's source code and see how it is implemented. (I was looking for where va_start is defined and then found that this macro is defined as __builtin_va_start.) I used cscope -r in GCC 9.1's source code directory to search for the definition but haven't found it. Can anyone point out where this function is defined?
That __builtin_va_start is not defined anywhere. It is a GCC compiler builtin (a bit like sizeof is a compile-time operator). It is an implementation detail related to the <stdarg.h> standard header (provided by the compiler, not the C standard library implementation libc). What really matters are the calling conventions and ABI followed by the generated assembler.
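A small way to see the relationship in practice (a sketch, assuming GCC; the stdarg.h shipped with the compiler typically defines va_start(v,l) as __builtin_va_start(v,l), which running the preprocessor with gcc -E makes visible):
/* sum.c - va_start is a macro from the compiler-provided <stdarg.h>; after
 * preprocessing it expands to __builtin_va_start, which GCC then lowers
 * internally according to the platform's calling convention. */
#include <stdarg.h>
#include <stdio.h>

int sum(int count, ...) {
    va_list ap;
    int total = 0;
    va_start(ap, count);
    for (int i = 0; i < count; i++)
        total += va_arg(ap, int);
    va_end(ap);
    return total;
}

int main(void) {
    printf("%d\n", sum(3, 1, 2, 3));  /* prints 6 */
    return 0;
}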
GCC has special code to deal with compiler builtins. And that code is not defining the builtin, but implementing its ad-hoc behavior inside the compiler. __builtin_va_start is expanded into some compiler-specific internal representation of your compiled C/C++ code, specific to GCC (some GIMPLE, perhaps).
From a comment of yours, I would infer that you are interested in implementation details. But that should be in your question.
If you study GCC 9.1 source code, look inside some of gcc-9.1.0/gcc/builtins.c (the expand_builtin_va_start function there), and for other builtins inside gcc-9.1.0/gcc/c-family/c-cppbuiltin.c, gcc-9.1.0/gcc/cppbuiltin.c, gcc-9.1.0/gcc/jit/jit-builtins.c
You could write your own GCC plugin (in 2Q2019, for GCC 9, and the C++ code of your plugin might have to change for the future GCC 10) to add your own GCC builtins. BTW, you might even overload the behavior of the existing __builtin_va_start by your own specific code, and/or you might have -at least for research purposes- your own stdarg.h header with #define va_start(v,l) __my_builtin_va_start(v,l) and have your GCC plugin understand your __my_builtin_va_start plugin-specific builtin. Be however aware of the GCC runtime library exception and read its rationale: I am not a lawyer, but I tend to believe that you should (and that legal document requires you to) publish your GCC plugin with some open source license.
You first need to read a textbook on compilers, such as the Dragon book, to understand that an optimizing compiler is mostly transforming internal representations of your compiled code.
You further need to spend months in studying the many internal representations of GCC. Remember, GCC is a very complex program (of about ten millions lines of code). Don't expect to understand it with only a few days of work. Look inside the GCC resource center website.
My dead GCC MELT project had references and slides explaining more of GCC (the design philosophy and architecture of GCC changes slowly; so the concepts are still relevant, even if individual details changed). It took me almost ten years full time to partly understand some of the middle-end layers of GCC. I cannot transmit that knowledge in a StackOverflow answer.
My draft Bismon report (work in progress, funded by H2020, so lots of bureaucracy) has a dozen pages (in its sections §1.3 and §1.4) introducing the internal representations of GCC.

Windows DLL & Dynamic Initialization Ordering

I have some question regarding dynamic initialization (i.e. constructors before main) and DLL link ordering - for both Windows and POSIX.
To make it easier to talk about, I'll define a couple terms:
Load-Time Libraries: libraries which have been "linked" at compile time such that, when the system loads my application, they get loaded in automatically (i.e. the ones put in CMake's target_link_libraries command).
Run-Time Libraries: libraries which I load manually by dlopen or equivalents. For the purposes of this discussion, I'll say that I only ever manually load libraries using dlopen in main, so this should simplify things.
Dynamic Initialization: if you're not familiar with the C++ spec's definition of this, please don't try to answer this question.
Ok, so let's say I have an application (MyAwesomeApp) and it links against a dynamic library (MyLib1), which in turn links against another library (MyLib2). So the dependency tree is:
MyAwesomeApp -> MyLib1 -> MyLib2
For this example, let's say MyLib1 and MyLib2 are both Load-Time Libraries.
What's the initialization order of the above? It is obvious that all static initialization, including linking of exported/imported functions (windows-only) will occur first... But what happens to Dynamic Initialization? I'd expect the overall ordering:
ALL import/export symbol linking
ALL Static Initialization
ALL of MyLib2's Dynamic Initialization
ALL of MyLib1's Dynamic Initialization
ALL of MyAwesomeApp's Dynamic Initialization
MyAwesomeApp's main() function
But I can't find anything in specs that mandate this. I DID see something with elf that hinted at it, but I need to find guarantees in specs for me to do something I'm trying to do.
Just to make sure my thinking is clear, I'd expect that library loading works very similarly to import in Python, in that if a library hasn't been loaded yet, it'll be loaded fully (including any initialization) before I do anything, and if it has been loaded, then I'll just link to it.
To give a more complex example to make sure there isn't another definition of my first example that yields a different response:
MyAwesomeApp depends on MyLib1 & MyLib2
MyLib1 depends on MyLib2
I'd expect the following initialization:
ALL import/export symbol linking
ALL Static Initialization
ALL of MyLib2's Dynamic Initialization
ALL of MyLib1's Dynamic Initialization
ALL of MyAwesomeApp's Dynamic Initialization
MyAwesomeApp's main() function
I'd love any help pointing out specs that say this is how it is. Or, if this is wrong, any spec saying what REALLY happens!
Thanks in advance!
-Christopher
Nothing in the C++ standard mandates how dynamic linking works.
Having said that, Visual Studio ships with the C Runtime (aka CRT) source, and you can see where static initializers get run in dllcrt0.c.
You can also derive the relative ordering of operations if you think about what constraints need to be satisfied to run each stage:
Import/export resolution just needs .dlls.
Static initialization just needs .dlls.
Dynamic initialization requires all imports to be resolved for the .dll.
Step 1 & 2 do not depend on each other, so they can happen independently. Step 3 requires 1 & 2 for each .dll, so it has to happen after both 1 & 2.
So any specific loading order that satisfies the constraints above will be a valid loading order.
In other words, if you need to care about the specific ordering of specific steps, you probably are doing something dangerous that relies on implementation specific details that will not be preserved across major or minor revisions of the OS. For example, the way the loader lock works for .dlls has changed significantly over the various releases of Windows.
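One way to observe the ordering empirically on the POSIX side (a sketch using GCC's constructor attribute rather than C++ dynamic initializers, so it only demonstrates typical loader behaviour, not a standard guarantee; file names and build lines are illustrative):
/* lib2.c  ->  gcc -fPIC -shared lib2.c -o liborder2.so */
#include <stdio.h>
__attribute__((constructor)) static void lib2_init(void) { puts("lib2 init"); }

/* lib1.c  ->  gcc -fPIC -shared lib1.c -L. -lorder2 -o liborder1.so */
#include <stdio.h>
__attribute__((constructor)) static void lib1_init(void) { puts("lib1 init"); }

/* main.c  ->  gcc main.c -L. -lorder1 -o main.out
 *             LD_LIBRARY_PATH=. ./main.out
 * Typical ELF loaders run initializers of dependencies first, so this
 * prints "lib2 init", "lib1 init", then "main". */
#include <stdio.h>
int main(void) { puts("main"); return 0; }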

GCC technical details

I don't know if this is the right place for things like this, but I am curious about a few aspects of the GCC front-end/back-end architecture:
I know I can compile .o files from C code and link them to C++ code, and I think I can do it the other way round, too. Does this work because the two languages are similar, or because the GCC back-end is really language-independent? Would this work with Ada code too? (I don't even know if that makes sense, since I don't know Ada or if it even has "functions", but the question is understood. If it makes no sense, think "Pascal" or even "my own custom language front-end".)
Where would garbage-collection be implemented? For example, a Java front-end. The way I understand, if compiling to a JVM back-end, the "platform" will take care of the GC, and so the front-end needs not do anything about it, but if compiling to native code, would the front-end send garbage-collecting GENERIC code to the back-end, or does it turn on some flag telling the back-end to produce garbage-collecting code? The first makes more sense to me, but that would mean the front-end produces different output based on the target, which seems to miss the point of the GCC's front-end/back-end architecture.
Where would language-specific libraries go? For instance, the standard Java classes or standard C headers. If they are linked in at the end, then could a C program theoretically call functions from the Java library or something like that, since it is just another linked library?
Yes, the backend is at least reasonably language independent. Yes, it works with Ada.
GCJ generates native code which uses a runtime library. The garbage collector is part of the runtime library.
GCJ implements the CNI, which allows you to write code in C++ that can be used as native methods by Java code -- but being able to do this is a consequence of them having designed it in, not just an accidental byproduct of using the same back-end.
It is possible because the calling conventions are compatible, but name mangling is different (there is no mangling in C). To call a C function from C++ you should declare it with extern "C". And to call a C++ function from C you should declare it with its mangled name (and maybe with additional or different type arguments). Calling Fortran code is possible in some cases too, but the argument passing convention is different (pass by reference in Fortran).
There were actually converters from C++ to C (cfront) and from Fortran to C (f2c), and some solutions from them are still used.
Garbage collection is implemented in a runtime library, e.g. Boehm's. The backend should generate objects compatible with the selected GC library.
The compiler driver (g++, gfortran, ...) will add the language-specific libraries at the linking step.

LLVM translation unit

I am trying to understand the high-level structure of an LLVM program.
I read in the book that "programs are composed of modules, each of which corresponds to a translation unit". Can someone explain the above in more detail, and what the difference is between modules and translation units (if any)?
I am also interested to know which part of the code is called when a translation unit starts and completes debugging information encoding?
Translation unit is a term from the language standard. For example, this is from C (the C99 ISO draft):
5.1 Conceptual models; 5.1.1 Translation environment; 5.1.1.1 Program structure
A C program need not all be translated at the same time. The text of the program is kept in units called source files, (or preprocessing files) in this International Standard. A source file together with all the headers and source files included via the preprocessing directive #include is known as a preprocessing translation unit. After preprocessing, a preprocessing translation unit is called a translation unit.
So, a translation unit is a single source file (file.c) after preprocessing (all #included *.h files instantiated, all macros expanded, all comments skipped, and the file ready for tokenizing).
A translation unit is a unit of compilation, because it doesn't depend on any external resources until the linking step. All headers are within the TU.
The term module is not defined in the language standard, but AFAIK it refers to a translation unit at deeper translation phases.
LLVM describes it as: http://llvm.org/docs/ProgrammersManual.html
The Module class represents the top level structure present in LLVM programs. An LLVM module is effectively either a translation unit of the original program or a combination of several translation units merged by the linker.
The Module class keeps track of a list of Functions, a list of GlobalVariables, and a SymbolTable. Additionally, it contains a few helpful member functions that try to make common operations easy.
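A minimal way to see that correspondence (a sketch, assuming the clang and llvm-link tools are installed; file names are illustrative):
/* a.c and b.c are two translation units; each becomes one LLVM Module:
 *   clang -S -emit-llvm a.c -o a.ll
 *   clang -S -emit-llvm b.c -o b.ll
 * llvm-link then merges them into a single Module, mirroring what the
 * system linker does with object files:
 *   llvm-link -S a.ll b.ll -o merged.ll */

/* a.c */
int helper(int x) { return x + 1; }

/* b.c */
int helper(int x);
int main(void) { return helper(41); }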
About this part of your question:
I am also interested to know which part of the code is called when a translation unit starts and completes debugging information encoding?
This depends on how LLVM is used. LLVM itself is a library and can be used in various ways.
For clang/LLVM (a C/C++ compiler built on libclang and LLVM), the translation unit is created after the preprocessing stage. It will be parsed into an AST, then into LLVM assembly, and saved in a Module.
For tutorial example, here is a creation of Modules http://llvm.org/releases/2.6/docs/tutorial/JITTutorial1.html

Size of a library and the executable

I have a static library *.lib created using MSVC on Windows. The size of the library is, say, 70KB. Then I have an application which links this library. But now the size of the final executable (*.exe) is 29KB, less than the library. What I want to know is:
Since the library is statically linked, I was thinking it should add directly to the executable size, so the final exe size should be more than that. Does the Windows exe format also do some compression of the binary data?
How is it for Linux systems, that is, how do the sizes of libraries on Linux (*.a/*.la files) relate to the size of a Linux executable (*.out)?
-AD
A static library on both Windows and Unix is a collection of .obj/.o files. The linker looks at each of these object files and determines if it is needed for the program to link. If it isn't needed, then the object file won't get included in the final executable. This can lead to executables that are smaller than the library.
EDIT: As MSalters points out, on Windows the VC++ compiler now supports generating object files that enable function-level linking, e.g., see here. In fact, edit-and-continue requires this, since the edit-and-continue needs to be able to replace the smallest possible part of the executable.
There is additional bookkeeping information in the .lib file that is not needed for the final executable. This information helps the linker find the code to actually link. Also, debug information may be stored in the .lib file but not in the .exe file (I don't recall where debug info is stored for objs in a lib file, it might be somewhere else).
The static library probably contains several functions which are never used. When the linker links the library with the main executable, it sees that certain functions are never used (and that their addresses are never taken and stored in function pointers), it just throws away the code. It can also do this recursively: if function A() is never called, and A() calls B(), but B() is never otherwise called, it can remove the code for both A() and B(). On Linux, the same thing happens.
A static library has to contain every symbol defined in its source code, because it might get linked into an executable which needs just that specific symbol. But once it is linked into an executable, we know exactly which symbols end up being used, and which ones don't. So the linker can trivially remove unused code, trimming the file size by a lot. Similarly, any duplicate symbols (anything that's defined in both the static library and the executable it's linked into) get merged into a single instance.
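A small sketch of that selective linking (assuming GCC and binutils on Linux; file names and exact sizes are illustrative):
/* used.c    ->  gcc -c used.c
 * unused.c  ->  gcc -c unused.c
 * ar rcs libdemo.a used.o unused.o
 * gcc main.c -L. -ldemo -o main.out
 * `nm main.out | grep unused_function` finds nothing: the linker only pulled
 * the archive member that resolved a symbol main.c actually needed. */

/* used.c */
int used_function(int x) { return x * 2; }

/* unused.c */
int unused_function(int x) { return x * 3; }

/* main.c */
int used_function(int x);
int main(void) { return used_function(21); }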
Disclaimer: It's been a long time since I dealt with static linking, so take my answer with a grain of salt.
You wrote: I was thinking it should add directly to the executable size and final exe size should be more than that?
Naive linkers work exactly this way - back when I was doing hobby development for CP/M systems (a LONG time ago), this was a real problem.
Modern linkers are smarter, however - they only link in the functions referenced by the original code, or as required.
In addition to the current answers, the linker is allowed to remove function definitions if they have identical object code - this is intended to help reduce the bloating effects of templated code.
#All: Thanks for the pointers.
#Greg Hewgill - Your answer was a good pointer. Thanks.
The answer I found out was as follows:
1.) During library building, if the option "Keep Program debug database" in MSVC (or something similar) is ON, then the library will have this debug info bloating its size.
But when I statically include that library and create an executable, the linker strips all that debug info from the library before generating the exe, and hence the exe size is less than that of the library.
2.) When I disabled the option "Keep Program debug database", I got a library whose size was smaller than the final executable, which is what I thought was normal in most situations.
-AD
