Go: build multiple binary versions, each against a different shared library

I want to create a Go executable which communicates with Xen through its native interface. There is a C shared library (actually two) for this purpose, and I created a simple Go wrapper with cgo.
The problem is that I want to target three Xen versions (3.2, 3.4, 4.0), each of which has a different shared library. The library itself provides basically the same API, but the sizes and shapes of the structs defined in the C headers are different, and thus the same compiled Go binary cannot be used with these different shared libraries.
I want to have a Go binary holding the main package and a Go pkg which is the wrapper for Xen.
I was thinking about two solutions:
I could build three different versions of the compiled pkg and also three different versions of the main binary, each linked with the corresponding pkg version. This solution requires writing the makefiles manually so that I can pass the correct paths, etc.
I could build a thin C wrapper as a shared library and build it in three versions against the three versions of the Xen C bindings. This C wrapper would then export a stable C interface that is used by one single Go pkg. I could then deploy the correct C wrapper shared library to the hosts and have it resolved at runtime.
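For illustration, the build of such a wrapper under the second option might look roughly like this (xenwrap.c, the output names, and the per-version include/lib paths are placeholders; the real Xen bindings may need different flags):
# Build the thin wrapper once per Xen version, each against that
# version's headers and libraries (paths are hypothetical).
XENVERS = 3.2 3.4 4.0

all: $(XENVERS:%=libxenwrap-%.so)

libxenwrap-%.so: xenwrap.c
	gcc -shared -fPIC -I/usr/include/xen-$* -L/usr/lib/xen-$* -o $@ $< -lxenctrl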
Is there any other way to handle this? I would prefer to use pure (c)go code, but I don't like the additional burden of maintaining complicated makefiles.
EDIT: more details about why I feel uncomfortable handling this manually in the makefiles:
For example, the name of the _obj dir is hardcoded in Make.inc and company, and these makefiles rely on some generated .c files containing special information about the name of the shared library, which I have to clean up before building the next version of the pkg. A snippet of my makefile:
all:
	rm -f _obj/_*
	$(MAKE) -f Makefile.common XENVERSION=3.2
	rm -f _obj/_*
	$(MAKE) -f Makefile.common XENVERSION=3.4
	rm -f _obj/_*
	$(MAKE) -f Makefile.common XENVERSION=4.0
where Makefile.common is basically a normal pkg makefile that uses TARG=$(XENVERSION)/vimini/xen, as I cannot encode the version in the package name because I'd have to modify the imports in the source.
By encoding the version in the package directory I can use GCIMPORTS=-I../../pkg/xen/_obj/$(XENVERSION) to select the right one from the main cmd's Makefile.
Of course I could roll my own makefile that invokes 6c, 6l, cgo, etc., but I prefer to leverage the existing make infrastructure, since there seems to be some wisdom in it.

Have you tried this approach?
Architecture- and operating system-specific code
It could easily be adapted to use a $XENVER environment variable.
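A minimal sketch of what that adaptation might look like with the Make.inc/Make.pkg infrastructure, assuming one cgo wrapper file per Xen version and an XENVER variable chosen at build time (the file names and the default are made up):
XENVER ?= 4.0

include $(GOROOT)/src/Make.inc

TARG=vimini/xen
CGOFILES=\
	xen_$(XENVER).go\

include $(GOROOT)/src/Make.pkg
Running XENVER=3.4 make would then compile only the wrapper matching the target Xen version, much as GOOS/GOARCH select the platform-specific files in the standard library.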

Related

gcc makefile and parallel building

I need to pick up, inside the makefile, the number of processors used for the parallel compilation.
e.g.
make -j32 .....
I need to pick up the number 32 inside the Makefile.
I know this comes in the MAKEFLAGS variable, so I could parse it, but is there some other variable that gives this information directly?
For example:
NUMCPU = 32
Already solved in GNU Make: Check number of parallel jobs:
NUMPROC = $(patsubst -j%,%,$(filter -j%,$(MAKEFLAGS)))
@DevSolar I too would prefer not to mix make and ninja builds, but I have to: this is a big project involving several libraries and several teams, so I cannot decide the build process alone.
To explain the build process: I have a target, some libraries that use meson/ninja as their build system, and other libraries that use make.
During the official release phase all the libraries must be recompiled: first the ones built with the legacy make, then the ones built with meson, and finally the binary/executable that links all of the previously compiled libraries.
At the moment everything is triggered by a make command, and the production team wants to use the -j option for both make and ninja.
For this reason I am trying to pass the -j value through to the libraries and the final binary/executable built with ninja.
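A hedged sketch of how that forwarding might look, reusing the MAKEFLAGS trick from the linked answer (the builddir path and target name are placeholders, and whether an explicit -jN shows up in MAKEFLAGS depends on the make version):
# Extract N from -jN if present, otherwise fall back to 1.
NUMPROC = $(or $(patsubst -j%,%,$(filter -j%,$(MAKEFLAGS))),1)

meson_libs:
	ninja -C builddir -j$(NUMPROC)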

Configure compilation options and compiler with autoconf

I'm working on a personal project with Rust and Tcl, but I still want to use the classic makefile structure.
I know that to compile a multi-file program I just need to declare mod second in main.rs, and rustc automatically connects the modules. So I use
$ rustc main.rs -o output -C debuginfo=2
Now I tried to integrate autoconf and automake, because I want to make a configure script to check for tcl, rustup, etc. But I don't know how to make them compile with rustc and its options instead of cc and C options (for example, they try to build a .o that then fails to link because it has no main function).
For configure.ac I used:
AC_CONFIG_SRCDIR([source/main.rs])
AC_CONFIG_AUX_DIR(config)
# I manually checked for rustup and tclsh
AM_INIT_AUTOMAKE
AC_CONFIG_FILES([Makefile])
AC_OUTPUT
For the Makefile.am:
AUTOMAKE_OPTIONS = foreign
bin_PROGRAMS = output
SUBDIRS = sources
output_SOURCES = sources/main.rs
I have the main directory with configure.ac and Makefile.am, and the sources directory with all the stuff (and also the config directory for autoconf).
Now I tried to integrate autoconf and automake because I want to make a configure script to check for tcl, rustup etc...
The configure script is the responsibility of Autoconf. It is not obligatory to use Automake together with Autoconf, and you should consider whether it would be sensible for you to use Autoconf alone. That would give you complete control over the generated Makefile, as you would write a Makefile.in directly instead of relying on Automake to do that for you. Presumably, you would write a much simpler Makefile.in than Automake generates, and that's fine.
Automake is not necessarily out of the question, but its manual has this to say about language support:
Automake currently only includes full support for C, C++ (see C++ Support), Objective C (see Objective C Support), Objective C++ (see Objective C++ Support), Fortran 77 (see Fortran 77 Support), Fortran 9x (see Fortran 9x Support), and Java (see Java Support with gcj). There is only rudimentary support for other languages, support for which will be improved based on user demand.
Some limited support for adding your own languages is available via the suffix rule handling (see Suffixes).
The referenced section about suffix rules shows how you might use such a rule to teach Automake how to build Rust programs. It might look something like this:
.rs:
	$(RUSTC) $< -o $@ $(AM_RUSTFLAGS) $(RUSTFLAGS)
SUFFIXES = .rs
That assumes that configure will identify the Rust compiler and export its name as RUSTC. AM_RUSTFLAGS is for defining compilation flags internally in your project (typically in your Makefile.am), and RUSTFLAGS is for the builder to add or override compilation flags at build time.
But since the compiler does not produce intermediate object files (or so I gather), I would expect that defining sources in output_SOURCES would not yield a working Makefile, and that you would probably need the name of the top-level Rust source to match the name of the wanted binary (i.e. output.rs instead of main.rs). The single-suffix rule should, then, get your binary built without any sources being explicitly specified. You would also want to name all contributing Rust sources in the EXTRA_SOURCES variable, else they would be omitted from distribution packages built via make dist.
Note, too, that the above does not define all the build dependencies that actually exist if you're building multifile programs. I would suggest doing that by adding an appropriate prerequisite-only rule, such as
output: $(output_extra_sources)
(with no recipe) in multifile cases. This will ensure that make will recognize when output needs to be rebuilt as a result of a modification to one of its sources other than output.rs.

Configure Linux kernel, modules, apps Makefiles to generate assembly files

Is there a way to configure the Makefiles compiling the Linux kernel, modules, and userspace applications so that they generate assembly files (.s files)? I'm primarily interested in the assembly files for ./lib and for the userspace applications (which I want to modify for some experiment, then compile and integrate in a second run). I'm aware that this ultimately requires passing gcc the -S option, but I'm a little confused about how to set this via the HOSTCFLAGS, CFLAGS_MODULE, CFLAGS_KERNEL, CFLAGS_GCOV, KBUILD_CFLAGS, KBUILD_CFLAGS_KERNEL, KBUILD_CFLAGS_MODULE variables, etc.
You can use objdump -S to create an assembly file from a compiled .o file. For example:
objdump -S amba-pl011.o > amba-pl011.S
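If you want compiler-generated assembly rather than a disassembly, kbuild can also produce it directly; a hedged example for an in-tree file (the paths are only illustrations):
make lib/string.s                       # kbuild's dir/file.[ois] single-target support: compiles that one file with the usual kernel flags plus -S
make KCFLAGS=-save-temps=obj lib/       # alternatively, append a user flag so the intermediate .s files are kept for a whole directory
The KCFLAGS route touches every compilation in that directory, so it may or may not be acceptable for a full build; the single-target form is the less invasive option.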

Multiple versions of a library: how to compile with GCC/g++ without a version number

I am working on a library in C; let us call it ninja.
Ninja depends upon some underlying libraries which we also provide (e.g. jutsu, goku, bla).
These are all placed in a shared library folder, let us say /usr/lib/secret/.
The clients who are using this project want to be able to have ninja version 1 and version 2 lying side by side; that is not so hard. The problem comes when ninja 1 depends upon, for instance, jutsu 1 and ninja 2 depends upon jutsu 3. How do we arrange things so that, when installing ninja from our package repository, it knows the correct version of jutsu? Of course the rpm/deb package should depend upon the correct version of the jutsu package.
So what we want is that when we execute, for instance, zypper in ninja, and it installs and compiles on the system, it knows which jutsu library to take without being given a version number.
So that in the makefile we don't have to do this:
gcc ninja.c -o ninja -L /usr/local/lib/secret/ -l jutsu_2
But just
gcc ninja.c -o ninja -L /usr/local/lib/secret/ -l jutsu
NOTE: I know it is random to use ninja and so on, but I am not allowed to publish the real library names
You want to use an SONAME. Describing all the steps necessary is probably too large a scope for a good StackOverflow answer, but I can give an overview and point to some documentation.
An SONAME is a special data field inside a shared library. It is typically used to indicate compatibility with other versions of the same library; if two different versions of a shared library have the same SONAME, the linkers will know that either one can fill the dependency on that library. If they have a different SONAME, they can't.
Example: I have libdns88 and libbind-dev version 1:9.8.4.dfsg.P1-6+nmu2+deb7u1 installed on a Debian wheezy system. I build a binary called samurai with -ldns. The GNU linker finds "libdns.so" in my library search path and dynamically links samurai with it. It reads the SONAME field from libdns.so (which is a symlink to libdns.so.88.1.1). The SONAME there is "libdns.so.88".
$ objdump -p /usr/lib/libdns.so | grep SONAME
SONAME libdns.so.88
The libdns developers (or maybe packagers) chose that SONAME to indicate that any version 88.* of libdns is expected to be binary compatible with any other version 88.*. They use that same SONAME for all versions with a compatible ABI. When the ABI had a change, they changed the SONAME to libdns.so.89, and so on. (Most well-managed libraries don't change their ABI that often.)
So the library dependency written into the samurai binary is just libdns.so.88. When I run samurai later, the dynamic linker/loader looks for a file called "libdns.so.88" instead of just "libdns.so".
Also by convention, the name of an rpm or deb package should change when the SONAME of the library contained changes. That's why there is a libdns88 package separate from the libdns100 package, and they can be installed side by side without interfering with each other. My samurai package will have a dependency on "libdns88" and I can expect that any package called libdns88 will have a compatible ABI to the one I built it against. Tools like dpkg-shlibdeps make it simple to create the right shared library package dependencies when SONAMEs and versioned symbols are used.
http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html
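As a concrete sketch using the question's made-up names (the version numbers and paths are purely illustrative):
gcc -shared -fPIC -Wl,-soname,libjutsu.so.2 -o libjutsu.so.2.0.0 jutsu.c
ln -s libjutsu.so.2.0.0 libjutsu.so.2     # the name the dynamic loader will look for
ln -s libjutsu.so.2 libjutsu.so           # the name -ljutsu finds at link time

gcc ninja.c -o ninja -L/usr/local/lib/secret -ljutsu
objdump -p ninja | grep NEEDED            # the binary records libjutsu.so.2, not a path or full version
Packaging follows the same split: the runtime package ships libjutsu.so.2 and the real file, while the -dev package ships the unversioned libjutsu.so symlink used only at build time.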

What is the difference between make and gcc?

The last sentence in the article caught my eye
[F]or C/C++ developers and students interested in learning to program in C/C++ rather than users of Linux. This is because the compiling of source code is made simple in GNU/Linux by the use of the 'make' command.
I have always used gcc to compile my C/C++ programs, whereas javac to compile my Java programs. I have only used make to install programs to my computer by configure/make/make install.
It seems that you can compile apparently all your programs with the command make.
What is the difference between make and gcc?
Well... gcc is a compiler, make is a tool to help build programs. The difference is huge. You can never build a program purely using make; it's not a compiler. What make does is introduce a separate file of "rules" that describes how to go from source code to finished program. It then interprets this file, figures out what needs to be compiled, and calls gcc for you. This is very useful for larger projects, with hundreds or thousands of source code files, and for keeping track of things like compiler options, include paths, and so on.
gcc compiles and/or links a single file. It supports multiple languages, but does not know how to combine several source files into a non-trivial, running program; you will usually need at least two invocations of gcc (compile and link) to create even the simplest of programs.
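For example, building even a two-file program means separate compile steps followed by a link step (the file names here are arbitrary):
gcc -c main.c -o main.o     # compile only: C source to object file
gcc -c util.c -o util.o
gcc main.o util.o -o app    # link the object files into an executable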
The Wikipedia page on GCC describes it as a "compiler system":
The GNU Compiler Collection (usually shortened to GCC) is a compiler system produced by the GNU Project supporting various programming languages.
make is a "build tool" that invokes the compiler (which could be gcc) in a particular sequence to compile multiple sources and link them together. It also tracks dependencies between various source files and object files that result from compilation of sources and does only the operations on components that have changed since last build.
GNU make is one popular implementation of make. Its description reads as follows:
Make is a tool which controls the generation of executables and other non-source files of a program from the program's source files.
Make gets its knowledge of how to build your program from a file called the makefile, which lists each of the non-source files and how to compute it from other files.
gcc is a C compiler: it takes a C source file and creates machine code, either in the form of unlinked object files or as an actual executable program, which has been linked to all object modules and libraries.
make is useful for controlling the build process of a project. A typical C program consists of several modules (.c) and header files (.h). It would be time-consuming to always compile everything after you change anything, so make is designed to only compile the parts that need to be re-compiled after a change.
It does this by following rules created by the programmer. For example:
foo.o: foo.c foo.h
	cc -c foo.c
This rule tells make that the file foo.o depends on the files foo.c and foo.h, and if either of them changes, foo.o can be rebuilt by running the command on the second line. (Note that make requires the command line to be indented with a TAB character.)
make reads its rules from a file that is usually called a Makefile. Since these files are (traditionally) written by hand, make has a lot of magic to let you shorten the rules. For example, it knows that a foo.o can be built from a foo.c, and it knows what the command to do so is. Thus, the above rule could be shortened to this:
foo.o: foo.h
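(For reference, the built-in rule that make falls back on for this looks roughly like the following in GNU make; the exact definition is written in terms of internal variables such as COMPILE.c.)
%.o: %.c
	$(CC) $(CPPFLAGS) $(CFLAGS) -c -o $@ $<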
A small program consisting of three modules might have a Makefile like this:
mycmd: main.o foo.o bar.o
	$(CC) $(LDFLAGS) -o mycmd main.o foo.o bar.o
foo.o: foo.h bar.h
bar.o: bar.h
make can do more than just compile programs. A typical Makefile will have a rule to clean out unwanted files:
clean:
	rm -f *.o core myapp
Another rule might run tests:
check: myapp
	./myapp < test.input > test.output
	diff -u test.correct test.output
A Makefile might "build" documentation: run a tool to convert documentation from some markup language to HTML and PDF, for example.
A Makefile might have an install rule to copy the binary program it builds to wherever the user or system administrator wants it installed.
And so on. Since make is generic and powerful, it is typically used to automate the whole process from unpacking a source tarball to the point where the software is ready to be used by the user.
There is a whole lot to learn about make if you want to learn it fully. The GNU version of make has particularly good documentation: http://www.gnu.org/software/make/manual/ has it in various forms.
Make often uses gcc to compile a multitude of C or C++ files.
Make is a tool for building any complex system where there are dependencies between the various system components, by doing the minimal amount of work necessary.
If you want to find out all the things make can be used for, the GNU make manual is excellent.
make uses a Makefile in the current directory to apply a set of rules to its input arguments. make also knows some default rules, so it can execute even if it doesn't find a Makefile (or similar) file in the current directory. The rule executed for .cpp files just happens to call gcc on many systems.
Notice that you don't call make with the input file names but rather with rule names, which reflect the output. So calling make xyz will strive to execute rule xyz, which by default builds a file xyz (for example, based on a source code file xyz.cpp).
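For instance, with only a xyz.cpp in the directory and no Makefile at all, GNU make's built-in rules are enough to build it (the file name is just an example):
make xyz        # runs roughly: $(CXX) xyz.cpp -o xyz, i.e. g++ on most Linux systems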
gcc is a compiler like javac. You give it source files, it gives you a program.
make is a build tool. It takes a file that describes how to build the files in your project based on dependencies between files, so when you change one source file, you don't have to rebuild everything (like if you used a build script). make usually uses gcc to actually compile source files.
make is essentially an expert system for building code. You set up rules for how things are built, and what they depend on. Make can then look at the timestamps on all your files and figure out exactly what needs to be rebuilt at any time.
gcc is the "gnu compiler collection". There are many languages it supports (C, C++, Ada, etc depending on your setup), but still it is just one tool out of many that make may use to build your system.
You can use make to compile your C and C++ programs by calling gcc or g++ in your makefile to do all the compilation and linking steps, allowing you to do all these steps with one simple command. It is not a replacement for the compiler.
'gcc' is the compiler - the program that actually turns the source code into an executable. You have to tell it where the source code is, what to output, and various other things like libraries and options.
'make' is more like a scripting language for compiling programs. It's a way to hide all the details of compiling your source (all those arguments you have to pass the compiler). You script all of the above details once in the Makefile, so you don't have to type them every time for every file. It will also do nifty things like only recompiling source files that have been updated, and handling dependencies (if I recompile this file, I will then need to recompile THAT file).
The biggest difference is that make is Turing complete (Are makefiles Turing complete?) while gcc is not.
Let's take the gcc compiler for example.
It only knows how to compile a given .cpp file into a .o file, given the files needed for compilation to succeed (i.e. dependencies such as .h files).
However, those dependencies form a graph: e.g., b.o might require a.o in the build process, which means a.o needs to be built beforehand.
Do you, as a programmer, want to keep track of all those dependencies and build them in the right order for your target .o file to be produced?
Of course not. You want something to do that task for you.
That is what build tools are for: tools that make the build process (i.e. building the artifacts like .o files) easier. One such tool is make.
I hope that clarifies the difference :)
