Confusion in Bjarne's PPP 2nd edition Pg. 316 - c++11

• The function will be inline; that is, the compiler will try to generate code for the function at each point of call rather than using function-call instructions to use common code. This can be a significant performance advantage for functions, such as month(), that hardly do anything but
are used a lot.
• All uses of the class will have to be recompiled whenever we make a change to the body of an inlined function. If the function body is out of the class declaration, recompilation of users is needed only when the class declaration is itself changed. Not recompiling when the body is
changed can be a huge advantage in large programs.
• The class definition gets larger. Consequently, it can be harder to find the members among the member function definitions.
All uses of the class will have to be recompiled whenever we make a change to the body of an inlined function. If the function body is out of the class declaration, recompilation of users is needed only when the class declaration is itself changed. Not recompiling when the body is
changed can be a huge advantage in large programs.
I don't know what the book is trying to say exactly at this point. What do we mean by "have to be recompiled" and "recompilation is needed only when the class declaration is itself changed"?

I suppose, from the context, that the quoted part discusses the pros & cons of putting member definitions inside the class declaration.
Suppose you have class X. You have to declare it somewhere. In a typical scenario, it will be placed in a header file whose only role will be to hold this declaration. Let's call it x.h.
A class usually has member functions. Now you can choose to either put them inside the header file inside the class declaration or in a separate file (typically: x.cpp).
Solution 1:
// file x.h contains everything
#include <iostream>

class X
{
public:
    X() { std::cout << "X() has been hit\n"; }
};
Solution 2:
// file x.h contains only the declaration(s)
class X
{
public:
    X();
};

// file x.cpp contains the class member definitions
#include "x.h"
#include <iostream>

X::X() { std::cout << "X() has been hit\n"; }
Whichever solution you use, you surely have some code that uses your class, and typically it is located in a different source file(s), e.g.:
// main.cpp
#include "x.h"
int main()
{
X x;
}
The first thing to notice: the user (here: main.cpp) looks the same whether you choose Solution 1 or 2. This is great. Now, here comes the message Bjarne wants to tell you: consider how changes to the class code will impact the users.
In Solution 1 you've packed everything into the header file. Any change to the class, even one as apparently harmless as adding a new member function, changing the formatting (you know, tabs, spaces, etc.) or adding a comment, will force the compiler to recompile main.cpp. Why? Professional C++ programs are composed of many, many source files, and their compilation is controlled and executed by special utility programs, like cmake, make, and many others. They simply look at the timestamps of the files that make up the program: any change is a signal to recompile. Header files are never compiled on their own, but all source files (= *.cpp) that include them (even indirectly, via other header files) have to be recompiled. This explains the sentence:
All uses of the class will have to be recompiled whenever we make a change to the body of an inlined function.
(just to be sure: all member functions defined inside the class definition are implicitly inline). Here, main.cpp is an example of the "uses" mentioned above.
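For illustration only (this variant is not in the book): you get the same effect if the definition stays in the header but is moved outside the class body, except that the inline keyword must then be written explicitly, and every .cpp file that includes x.h still recompiles the body when it changes:
// x.h -- sketch: definition kept in the header, but outside the class body
#include <iostream>

class X
{
public:
    X();
};

inline X::X() { std::cout << "X() has been hit\n"; } // inline must be explicit here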
In Solution 2, file main.cpp will be recompiled only if x.h has been changed (in any way). If a programmer touches only x.cpp, then main.cpp will not be recompiled, because (a) C++ is designed in such a way as to allow it and (b) professional C++ programs use the build tools mentioned above, which facilitate the efficient compilation of even large C++ programs. To be explicit: they are not compiled using commands like g++ *.cpp that can be found in some introductory C++ textbooks.
One final remark. The inline keyword was introduced essentially to allow Solution 1. Solution 2 is the original C language way. Solution 1 is sometimes used in C++ for better performance (though modern compilers can in many situations do the same job without it) and very often for templates (which are absent in C). Solution 1 is the most common way of writing templates; Solution 2 is typical for "ordinary" member functions. What Bjarne writes about is extremely important for library designers; I hope you now understand why.
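To illustrate the point about templates, here is a minimal sketch (the Stack class is made up for this answer, not taken from the book). A class template normally has to follow Solution 1, because the compiler needs the full definition at every point of instantiation:
// stack.h -- the whole template lives in the header
#include <vector>

template <typename T>
class Stack
{
public:
    void push(const T& value) { data_.push_back(value); }
    T pop() { T top = data_.back(); data_.pop_back(); return top; }
    bool empty() const { return data_.empty(); }
private:
    std::vector<T> data_;
};
Any change to these member bodies therefore forces every .cpp file that includes stack.h to be recompiled, exactly as described for Solution 1.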

Related

ESP32 compiler giving "multiple definition of" errors

Got a new issue I've not come across before that's appeared when using the Espressif ESP32 ESP-IDF standard setup under VSCode. It uses the GNU compiler.
I'm getting "multiple definition of" errors on variables that share the same name, but which should be local.
So I use a .c and .h pair of files approach.
In my .c files I do this at the top
#define IO_EXPANDER_C //<<<This is a unique define for this file pair
#include "io-pca9539.h"
In my .h files I do this:
#ifdef IO_EXPANDER_C
//----- INTERNAL ONLY MEMORY DEFINITIONS -----
uint8_t *NextReadDataPointer;
//----- INTERNAL & EXTERNAL MEMORY DEFINITIONS -----
//(Also defined below as extern)
int SomeVariableIWantAvailableGlobally;
#else
//----- EXTERNAL MEMORY DEFINITIONS -----
extern int SomeVariableIWantAvailableGlobally;
#endif
It's a great simple system: any other .c file that includes the .h file (without the #define above its include statement) gets all of its extern variables and none of its local variables.
But, compiling in VSCode with my ESP-IDF based project, I'm getting "multiple definition of" errors relating to "NextReadDataPointer"
I use the same variable name NextReadDataPointer in another file pair in just the same way, but it's never declared anywhere as extern and each file pair uses a separate #define (IO_EXPANDER_C and LED_C). I do this all the time normally and I can't see any obvious mistakes.
I've never seen a C compiler do this before; it's as if it's mixing up the local definitions somehow. A #define should only have scope in the file it is declared in and in any includes within that file.
Even odder, the error is not generated if the project is built but a function is called from just one of the file pairs that share the same local variable name. It's only generated when functions are called from both file pairs from my main application.
Can anyone shed light on whether the GNU C compiler does something funky for a standard ESP-IDF project as it's got me baffled?
uint8_t *NextReadDataPointer; creates a variable which is visible across all translation units, i.e. it's the opposite of "private". If you include this header in multiple .c files and the linker tries to link those together, it'll see a conflict. The keyword you're looking for is static; for example static uint8_t *NextReadDataPointer; creates a variable that is not visible across translation units. The reason you don't see the problem when calling a function from only one of those two files is that in this case the linker doesn't bother looking into the other one.
Personally I'd avoid such clever preprocessor hacks because it's quite difficult to see how files include one another and debug the resulting problems. I'd suggest sticking to the standard way of declaring shared things in header files and keeping the private stuff inside the c file (prepended by static).
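A minimal sketch of that standard layout, reusing the variable names from the question (the file split shown here is only illustrative):
/* io-pca9539.h -- only the shared declarations live here */
#include <stdint.h>

extern int SomeVariableIWantAvailableGlobally;

/* io-pca9539.c -- private state is static, shared state is defined exactly once */
#include "io-pca9539.h"

static uint8_t *NextReadDataPointer;     /* internal linkage: invisible to other .c files */
int SomeVariableIWantAvailableGlobally;  /* the single definition for the whole program */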

Common lisp best practices for splitting code between files

I'm moderately new to common lisp, but have extended experience with other 'separate compilation' languages (think C/C++/FORTRAN and such)
I know how to do an ASDF system definition. I know how to separate stuff in packages. I'm using SBCL, by the way.
The question is this: what's the best practice for splitting code (large packages) between .lisp files? I mean, in C there are include files, while lisp lives with the current image state. So with multiple files I need to handle dependencies or serial order in the system definition. But without something like forward declarations it's painful.
A simple example of what I want to do: I have, say, two defstructs that are part of the same bigger data structure (struct1 is a parent of some set of struct2). Some functions work on one, some work on the other, and some use both.
So I would have: a packages.lisp, a fun1.lisp (with the first defstruct and related functions), a fun2.lisp (with the other defstruct and functions) and a funmix.lisp (with functions that use both). In an ideal world everything is sealed and compiling these in this order would be fine. As most of you know, in practice this almost never happens.
If I need to use struct2 functions from the struct1 ones, I would need to either reorder or add a dependency. But then if there's some kind of back call (that can't be done with a closure), I would have struct1.lisp depending on struct2.lisp and vice versa, which is obviously not valid. So what? I could break the loop by putting the defstructs in a separate file (say, structs.lisp), but what if either struct's functions need to access the common functions in the third file? I would like to avoid style notes.
What's the common way to solve this, i.e. keeping loosely related code in the same file but still being able to interface with the other ones? Is the correct solution to seal everything in a compilation unit (a single file)? To use a package for every file, with exports?
Lisp dependencies are simple, because in many cases, a Lisp implementation doesn't need to process the definition of something in order to compile its use.
Some exceptions to the rule are:
Macros: macros must be loaded in order to be expanded. There is a compile-time dependency between a file which uses macros and the file which defines them.
Packages: a package foo must be defined in order to use symbols like foo:bar or foo::priv. If foo is defined by a defpackage form in some foo.lisp file, then that file has to be loaded (either in source or compiled form).
Constants: constants defined with defconstant should be seen before their use. Similar remarks apply to inline functions, compiler macros.
Any custom things in a "domain specific language" which enforces definition before use. E.g. if Whizbang Inference Engine needs rules to be defined when uses of the rules are compiled, you have to arrange for that.
For certain diagnostics to be suppressed, like calls to undefined functions, the defining and using files must be treated as a single compilation unit. (See below.)
All the above remarks also have implications for incremental recompilation.
When there is a dependency like the above between files, so that one is a prerequisite of the other, then whenever the prerequisite is touched, the dependent one must be recompiled.
How to split code into files is going to be influenced by all the usual things: cohesion, coupling and what have you. A Common-Lisp-specific reason to keep certain things together in one file is inlining: the call to a function which is in the same file as the caller may be inlined. If your program supports any in-service upgrade, the granularity of code loading is individual files. If some functions foo and bar should be independently redefinable, don't put them in the same file.
Now about compilation units. Suppose you have a file foo.lisp which defines a function called foo and bar.lisp which calls (foo). If you just compile bar.lisp, you will likely get a warning that an undefined function foo has been called. You could compile foo.lisp first and then load it, and then compile bar.lisp. But that will not work if there is a circular reference between the two: say foo.lisp also calls (bar) which bar.lisp defines.
In Common Lisp, you can defer such warnings to the end of a compilation unit, and what defines a compilation unit isn't a single file, but a dynamic scope established by a macro called with-compilation-unit. Simply put, we can do this:
(with-compilation-unit ()            ; note the (empty) options list
  (compile-file "foo.lisp")          ; contains (defun foo () (bar))
  (compile-file "bar.lisp"))         ; contains (defun bar () (foo))
If a compile-file isn't surrounded by with-compilation-unit then there is a compilation unit spanning that file. Otherwise, the outermost nesting of the with-compilation-unit macro determines the scope of what is in the compilation unit.
Warnings about undefined functions (and such) are deferred to the end of the compilation unit. So by putting foo.lisp and bar.lisp compilation into one unit, we suppress the warnings about either foo or bar not being defined and we can compile the two in any order.
Build systems use with-compilation-unit under the hood, as appropriate.
The compilation unit isn't about dependencies but diagnostics. Above, we don't have a compile time dependency. If we touch foo.lisp, bar.lisp doesn't have to be recompiled or vice versa.
By and large, Lisp codebases don't have a lot of hard dependencies among the files. Incremental compilation often means that just the affected files that were changed have to be recompiled. The C or C++ problem that everything has to be rebuilt because a core header file was touched is essentially nonexistent.
but what if
No matter how you first organize your code, if you change it significantly you are going to have to refactor. IMO there is no ideal way of grouping dependencies in advance.
As a rule of thumb it is generally safe to define generic functions first, then types, then actual methods, for example. For non-generic functions, you can cut circular dependencies by adding forward declarations:
(declaim (ftype function ...))
Having too much circular dependency is a bit of a code smell.
Is the correct solution to seal everything in a compilation unit
Yes, if you group the definitions in the same compilation unit (the same file), the file compiler will be able to silence the style notes until it reaches the end of file: at this point it knows if there are still missing references or if all the cross-references are resolved.
But then if there's some kind of back call (that can't be done with a closure)
If you have a specific example in mind please share, but typically you can define struct1 and its functions in a way that can be self-contained; maybe it can accept a map that binds event names to callbacks:
(make-struct-1 :callbacks (list :on-empty one-is-empty
:on-full one-is-full))
Similarly, struct2 can accept callbacks too (Dependency Injection) and the main struct ties them using closures (?).
Alternatively, you can design your data structures so that they signal conditions, and then in the caller code you intercept them to bind things together.

What's the difference between inline and block compilation in SBCL?

Several weeks ago, SBCL was updated to 2.0.2 and brought the block compilation feature. I have read this article to understand what it is.
I have a question: what's the difference between (declaim (inline 'some-function)) and block compilation? Is block compilation done automatically by the compiler?
Thanks.
Inline compilation is a specific optimization technique. A function being called is directly integrated into the calling function - usually using its source code - and then compiled.
This means that the inlined function might not be inlined only in one function, but in multiple functions.
Advantage: the overhead of calling a function disappears.
Disadvantage: the code size increases, and the calling function(s) need to be recompiled when the inlined function changes and we want this change to become visible. Macros have the same problem.
Block compilation means that a bunch of code gets compiled together with different semantic constraints and that this enables the compiler to do a bunch of new optimizations.
Common Lisp has in the standard support for block compilation of single files. It allows the file compiler to assume that a file is such a block of code.
Example from the Common Lisp standard:
3.2.2.3 Semantic Constraints
A call within a file to a named function that is defined in the same file refers to that function, unless that function has been declared notinline. The consequences are unspecified if functions are redefined individually at run time or multiply defined in the same file.
This allows the code to call a global function and not use the symbol's function cell for the call. Thus this disables late binding for global function calls - in this file and for functions in this file.
It's not said how this can be achieved, but the compiler might just allocate the code somewhere and the calls just jump there.
So this part of block compilation is defined in the standard and some compilers are doing that.
Block compilation for multiple files
If the file compiler can use block compilation for one file, then what about multiple files? A few compilers can also tell the file compiler that several files make a block for compilation. CMUCL does that. SBCL was derived and simplified from CMUCL and has lacked this feature until now. I think Lucid Common Lisp (which is no longer actively sold) supported something like that, too.
Might be useful to add this to SBCL, too.

c++ Hidden Unique Pointer

I have some code which depends on include files, some of which are included at the start of source files (which is usual) and others which are included within functions.
A typical example of that are the OpenFOAM solver sources.
The scheme of this code is highly procedural, but I want to put all of this into a class which provides init(), run() and maybe release(), so I plan to turn some of the variables into private members of the class.
I don't want to modify the included files because they belong to a library.
The reason for using a class is that other classes and routines run together with this code.
Here is the thing: init() must prepare some variables, and there are situations where these variables (being of other class types) have no default constructors and need special arguments. It is called once; run() is called several times. The procedural code consists of a single loop, and the contents of that loop are put into the run() method.
So the best solution was to put these variables into std::unique_ptr so that init() can construct whatever it needs to. Obviously with that trick the variable's type changed, so I created a second declaration of a reference like this:
std::unique_ptr<volScalarField> mp_p;
volScalarField &p = *mp_p;
Now this is a bit tedious so I created a macro
FOAMPTR(volVectorField, p)
which does all the work for me:
#define FOAMPTR(TYPE,NAME) std::unique_ptr<TYPE> mp_##NAME; TYPE &NAME=*mp_##NAME
It works pretty well, but I'm not fan of macros in general, especially if you need to debug code.
Now my question is: Is there a better way to tackle this and use something else like a template definition which might do all the magic?
Edit: By 'works pretty well' I mean that the compiler can translate it. The reference, though, is still invalid, because it is bound before init() has set the pointer.
Edit: Okay, I solved the invalid pointer problem using two Macros:
#define FOAMPTR(TYPE,NAME) std::unique_ptr<TYPE> mp_##NAME
#define FETCHFOAMREF(NAME) auto &NAME=*mp_##NAME
Now I put FOAMPTR(TYPE,NAME) to the member and I get my unique ptrs. In the run() method the second macro FETCHFOAMREF(NAME) is used. Of course init() must be sure to correctly initialize the object or else the program is going to crash.
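A minimal sketch of how the two macros fit together in such a class (volScalarField is only a simplified stand-in here for the real OpenFOAM type):
#include <memory>

#define FOAMPTR(TYPE,NAME) std::unique_ptr<TYPE> mp_##NAME
#define FETCHFOAMREF(NAME) auto &NAME = *mp_##NAME

struct volScalarField { double value = 0.0; };   // hypothetical stand-in

class Solver
{
public:
    void init() { mp_p = std::make_unique<volScalarField>(); }
    void run()
    {
        FETCHFOAMREF(p);   // expands to: auto &p = *mp_p;
        p.value += 1.0;    // use the reference just as the procedural code did
    }
private:
    FOAMPTR(volScalarField, p);   // expands to: std::unique_ptr<volScalarField> mp_p;
};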
I still leave the question open because I'm not satisfied with that solution.

Difference between API and ABI

I am new to Linux system programming and I came across API and ABI while reading
Linux System Programming.
Definition of API:
An API defines the interfaces by which
one piece of software communicates
with another at the source level.
Definition of ABI:
Whereas an API defines a source
interface, an ABI defines the
low-level binary interface between two
or more pieces of software on a
particular architecture. It defines
how an application interacts with
itself, how an application interacts
with the kernel, and how an
application interacts with libraries.
How can a program communicate at a source level? What is a source level? Is it related to source code in any way? Or does the source of the library get included in the main program?
The only difference I know of is that an API is mostly used by programmers and an ABI is mostly used by a compiler.
API: Application Program Interface
This is the set of public types/variables/functions that you expose from your application/library.
In C/C++ this is what you expose in the header files that you ship with the application.
ABI: Application Binary Interface
This is how the compiler builds an application.
It defines things such as (but not limited to):
How parameters are passed to functions (registers/stack).
Who cleans parameters from the stack (caller/callee).
Where the return value is placed for return.
How exceptions propagate.
The API is what humans use. We write source code. When we write a program and want to use some library function we write code like:
long howManyDecibels = 123L;
int ok = livenMyHills(howManyDecibels);
and we needed to know that there is a method livenMyHills() which takes a long integer parameter. So as a Programming Interface it's all expressed in source code. The compiler turns this into executable instructions which conform to the implementation of this language on this particular operating system, and in this case it results in some low-level operations on an Audio unit. So particular bits and bytes are squirted at some hardware. So at runtime there's lots of Binary-level action going on which we don't usually see.
At the binary level there must be a precise definition of what bytes are passed at the Binary level, for example the order of bytes in a 4 byte integer, or the layout of a complex data structure - are there padding bytes to align some values. This definition is the ABI.
I mostly come across these terms in the sense of an API-incompatible change, or an ABI-incompatible change.
An API change is essentially where code that would have compiled with the previous version won't work anymore. This can happen because you added an argument to a function, or changed the name of something accessible outside of your local code. Any time you change a header, and it forces you to change something in a .c/.cpp file, you've made an API-change.
An ABI change is where code that has already been compiled against version 1 will no longer work with version 2 of a codebase (usually a library). This is generally trickier to keep track of than API-incompatible change since something as simple as adding a virtual method to a class can be ABI incompatible.
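As a hedged illustration of that last point (made-up types; the exact sizes depend on the platform ABI): adding a virtual method leaves the source-level API untouched, but the object now carries a vtable pointer, so its size and field offsets change and binaries built against the old layout read the wrong bytes.
#include <iostream>

struct WidgetV1 { int value; void print() {} };           // v1: no virtual functions
struct WidgetV2 { int value; virtual void print() {} };   // v2: a virtual method was added

int main()
{
    // On typical ABIs WidgetV2 is larger because of the vtable pointer.
    std::cout << sizeof(WidgetV1) << " vs " << sizeof(WidgetV2) << "\n";
}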
I've found two extremely useful resources for figuring out what ABI compatibility is and how to preserve it:
The list of Do's and Dont's with C++ for the KDE project
Ulrich Drepper's How to Write Shared Libraries.pdf (primary author of glibc)
Linux shared library minimal runnable API vs ABI example
This answer has been extracted from my other answer: What is an application binary interface (ABI)? but I felt that it directly answers this one as well, and that the questions are not duplicates.
In the context of shared libraries, the most important implication of "having a stable ABI" is that you don't need to recompile your programs after the library changes.
As we will see in the example below, it is possible to modify the ABI, breaking programs, even though the API is unchanged.
main.c
#include <assert.h>
#include <stdlib.h>
#include "mylib.h"
int main(void) {
mylib_mystruct *myobject = mylib_init(1);
assert(myobject->old_field == 1);
free(myobject);
return EXIT_SUCCESS;
}
mylib.c
#include <stdlib.h>
#include "mylib.h"
mylib_mystruct* mylib_init(int old_field) {
mylib_mystruct *myobject;
myobject = malloc(sizeof(mylib_mystruct));
myobject->old_field = old_field;
return myobject;
}
mylib.h
#ifndef MYLIB_H
#define MYLIB_H
typedef struct {
int old_field;
} mylib_mystruct;
mylib_mystruct* mylib_init(int old_field);
#endif
Compiles and runs fine with:
cc='gcc -pedantic-errors -std=c89 -Wall -Wextra'
$cc -fPIC -c -o mylib.o mylib.c
$cc -L . -shared -o libmylib.so mylib.o
$cc -L . -o main.out main.c -lmylib
LD_LIBRARY_PATH=. ./main.out
Now, suppose that for v2 of the library, we want to add a new field to mylib_mystruct called new_field.
If we added the field before old_field as in:
typedef struct {
int new_field;
int old_field;
} mylib_mystruct;
and rebuilt the library but not main.out, then the assert fails!
This is because the line:
myobject->old_field == 1
had generated assembly that is trying to access the very first int of the struct, which is now new_field instead of the expected old_field.
Therefore this change broke the ABI.
If, however, we add new_field after old_field:
typedef struct {
int old_field;
int new_field;
} mylib_mystruct;
then the old generated assembly still accesses the first int of the struct, and the program still works, because we kept the ABI stable.
Here is a fully automated version of this example on GitHub.
Another way to keep this ABI stable would have been to treat mylib_mystruct as an opaque struct, and only access its fields through method helpers. This makes it easier to keep the ABI stable, but would incur a performance overhead as we'd do more function calls.
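A rough sketch of that opaque-struct variant (the accessor name mylib_get_old_field is made up for this illustration): the header only forward-declares the struct, so the library is free to reorder or add fields without breaking already-compiled callers:
/* mylib.h -- opaque version: callers only see a forward declaration */
#ifndef MYLIB_H
#define MYLIB_H
typedef struct mylib_mystruct mylib_mystruct;
mylib_mystruct* mylib_init(int old_field);
int mylib_get_old_field(const mylib_mystruct *myobject);  /* hypothetical accessor */
#endif

/* mylib.c -- the layout stays private to the library */
#include "mylib.h"
struct mylib_mystruct { int old_field; };  /* fields can change here without breaking callers */
int mylib_get_old_field(const mylib_mystruct *myobject) { return myobject->old_field; }
/* mylib_init() stays as in the example above */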
API vs ABI
In the previous example, it is interesting to note that adding new_field before old_field only broke the ABI, not the API.
What this means is that if we had recompiled our main.c program against the library, it would have worked regardless.
We would also have broken the API however if we had changed for example the function signature:
mylib_mystruct* mylib_init(int old_field, int new_field);
since in that case, main.c would stop compiling altogether.
Semantic API vs Programming API
We can also classify API changes in a third type: semantic changes.
The semantic API, is usually a natural language description of what the API is supposed to do, usually included in the API documentation.
It is therefore possible to break the semantic API without breaking the program build itself.
For example, if we had modified
myobject->old_field = old_field;
to:
myobject->old_field = old_field + 1;
then this would have broken neither the programming API nor the ABI, but the semantic API that main.c relies on would break.
There are two ways to programmatically check the semantic contract of the API:
test a bunch of corner cases. Easy to do, but you might always miss one.
formal verification. Harder to do, but produces mathematical proof of correctness, essentially unifying documentation and tests into a "human" / machine verifiable manner! As long as there isn't a bug in your formal description of course ;-)
Tested in Ubuntu 18.10, GCC 8.2.0.
This is my layman explanations:
API - think of include files. They provide programming interfaces.
ABI - think of kernel module. When you run it on some kernel, it has to agree on how to communicate without include files, i.e. as low-level binary interface.
(Application Binary Interface) A specification for a specific hardware platform combined with the operating system. It is one step beyond the API (Application Program Interface), which defines the calls from the application to the operating system. The ABI defines the API plus the machine language for a particular CPU family. An API does not ensure runtime compatibility, but an ABI does, because it defines the machine language, or runtime, format.
Let me give a specific example of how ABI and API differ in Java.
An ABI-incompatible change is if I change a method A#m() from taking a String as an argument to taking a String... argument. This is not ABI compatible because you have to recompile code that is calling it, but it is API compatible as you can resolve it by recompiling without any code changes in the caller.
Here is the example spelled out. I have my Java library with class A
// Version 1.0.0
public class A {
public void m(String string) {
System.out.println(string);
}
}
And I have a class that uses this library
public class Main {
public static void main(String[] args) {
(new A()).m("string");
}
}
Now, the library author compiled their class A, I compiled my class Main and it is all working nicely. Imagine a new version of A comes
// Version 2.0.0
public class A {
public void m(String... string) {
System.out.println(string[0]);
}
}
If I just take the new compiled class A and drop it together with the previously compiled class Main, I get an exception on attempt to invoke the method
Exception in thread "main" java.lang.NoSuchMethodError: A.m(Ljava/lang/String;)V
at Main.main(Main.java:5)
If I recompile Main, this is fixed and all is working again.
Your program (source code) can be compiled with modules that provide a proper API.
Your program (binary) can run on platforms that provide a proper ABI.
An API restricts the type definitions, function definitions, macros, and sometimes the global variables a library should expose.
An ABI restricts what a "platform" should provide for your program to run on. I like to consider it at 3 levels:
processor level - the instruction set, the calling convention
kernel level - the system call convention, the special file path convention (e.g. the /proc and /sys files in Linux), etc.
OS level - the object format, the runtime libraries, etc.
Consider a cross-compiler named arm-linux-gnueabi-gcc. "arm" indicates the processor architecture, "linux" indicates the kernel, "gnu" indicates that its target programs use GNU's libc as the runtime library, different from arm-linux-androideabi-gcc, which uses Android's libc implementation.
API - Application Programming Interface is a compile-time interface which is used by a developer to use non-project functionality like a library, the OS, or core calls in source code.
ABI - Application Binary Interface is a runtime interface which is used by a program during execution for communication between components in machine code.
The ABI refers to the layout of an object file / library and the final binary from the perspective of successfully linking, loading and executing certain binaries without link errors or logic errors occurring due to binary incompatibility. The ABI involves:
The binary format specification (PE, COFF, ELF, .obj, .o, .a, .lib (import library, static library), .NET assembly, .pyc, COM .dll): the headers, the header format, defining where the sections are and where the import / export / exception tables are and the format of those
The instruction set used to encode the bytes in the code section, as well as the specific machine instructions
The actual signature of the functions and data as defined in the API (as well as how they are represented in the binary (the next 2 points))
The calling convention of the functions in the code section, which may be called by other binaries (particularly relevant to ABI compatibility being the functions that are actually exported)
The way data is represented and aligned in the data section with respect to its type (particularly relevant to ABI compatibility being the data that is actually exported)
The system call numbers or interrupt vectors hooked in the code
The name decoration of exported functions and data
Linker directives in object files
Preprocessor / compiler / assembler / linker flags and directives used by the API programmer and how they are interpreted to omit, optimise, inline or change the linkage of certain symbols or code in the library or final binary (be that binary a .dll or the executable in the event of static linking)
The bytecode format of .NET C# is an ABI (general), which includes the .NET assembly .dll format. The virtual machine that interprets the bytecode has a specific ABI that is C++ based, where types need to be marshalled between native C++ types that the native code's specific ABI uses and the boxed types of the virtual machine's ABI when calling bytecode from native code and native code from bytecode. Here I am calling an ABI of a specific program a specific ABI, whereas an ABI in general, such as 'MS ABI' or 'C ABI' simply refers to the calling convention and the way structures are organised, but not a specific embodiment of the ABI by a specific binary that introduces a new level of ABI compatibility concerns.
An API refers to the set of type definitions exported by a particular library imported and used in a particular translation unit, from the perspective of the compiler of a translation unit, to successfully resolve and check type references to be able to compile a binary, and that binary will adhere to the standard of the target ABI, such that if the library that actually implements the API is also compiled to a compatible ABI, it will link and work as intended. If the API is updated the application may still compile, but there will now be a binary incompatibility and therefore a new binary needs to be used.
An API involves:
Functions, variables, classes, objects, constants, their names, types and definitions presented in the language in which they are coded in a syntactically and semantically correct manner
What those functions actually do and how to use them in the source language
The source code files that need to be included / binaries that need to be linked to in order to make use of them, and the ABI compatibility thereof
I'll begin by answering your specific questions.
1. What is a source level? Is it related to source code in any way?
Yes, the term source level refers to the level of source code. The term level refers to the semantic level of the computation requirements as they get translated from the application domain level to the source code level, and from the source code level to the machine code level (binary codes). The application domain level refers to what end-users of the software want and specify as their computation requirements. The source code level refers to what programmers make of the application level requirements and then specify as a program in a certain language.
2. How can a program communicate at a source level? Or does the source of the library get included in the main program?
Language API refers specifically to all that a language requires (specifies), hence interfaces, to write reusable modules in that language. A reusable program conforms to these interface (API) requirements to be reused in other programs in the same language. Every reuse needs to conform to the same API requirements as well. So, the word "communicate" refers to reuse.
Yes, source code (of a reusable module; in the case of C/C++, .h files) getting included (copied at the pre-processing stage) is the common way of reusing in C/C++ and is thus part of the C++ API. Even when you just write a simple function foo() in the global space of a C++ program and then call the function as foo(); any number of times, that is reuse as per the C++ language API. Java classes in Java packages are reusable modules in Java. The Java beans specification is also a Java API, enabling reusable programs (beans) to be reused by other modules (which could be another bean) with the help of runtimes/containers (conforming to that specification).
Coming to your overall question of the difference between a language API and an ABI, and how service-oriented APIs compare with language APIs, my answer here on SO should be helpful.
