git2go with libssl and libssh2 in single binary - go

Could anyone offer some suggestions (or resources) on how I could package a Go program that uses git2go, libssl and libssh2 such that it doesn't require the end user to install these libraries separately?
I am only targeting Linux distros (if it matters)

One way would be to build those dependencies statically as well and use PKG_CONFIG_PATH to point to your own copies so everything gets linked statically. That should make CMake choose the static versions.
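Roughly something like this (the paths are placeholders and the exact flags depend on how libgit2 and the other libraries were built):

# Point pkg-config at your own static builds of libgit2, libssh2 and OpenSSL.
export PKG_CONFIG_PATH=$HOME/static-deps/lib/pkgconfig
# Then ask the Go toolchain to link the cgo parts statically.
go build -ldflags '-extldflags "-static"' -o myapp .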
But if the goal is to avoid depending on the user-installed libraries, rather than making everything a single executable, I would recommend shipping the libraries and adjusting the load path so they get found. With gcc you'd pass -Wl,-R to set the search path in the binary itself, so you can control where the dynamic linker looks for the shared libraries you ship with your app. With Go it looks like you can pass -r to the linker (via -ldflags or manually) to do the same thing.
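Something along these lines, assuming the shared objects are shipped in a libs/ directory next to the binary (the layout and names are assumptions):

# Embed a run-time search path; $ORIGIN expands to the executable's directory.
go build -ldflags '-r $ORIGIN/libs' -o myapp .
# The same idea when linking with gcc directly:
# gcc main.o -Wl,-R'$ORIGIN/libs' -lgit2 -lssh2 -lssl -lcrypto -o myapp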
libgit2 is rather extensible, so there is a third option which is to implement the TLS stream and SSH transport in Go and plug those into a version of libgit2 without support for these. This is however a significant amount of work.

Related

Compile single static library for Cortex M3, M4, M23 and M33

I'm currently working on a rather generic communication stack. It gets bytes in on one end, parses the packet and calls a callback.
I want to have this stack in a static library (i.e. libcommstack.a).
The library is aimed towards embedded ARM Cortex-M devices. At the moment we have specified that at least a Cortex-M3 should be used (but it should also work for an M4 or M33).
Right now I'm integrating it into another application to verify that linking it is possible. In the future the idea is that we will ship this .a file to customers so they can build their application around it, without having direct access to our sources (to encapsulate our IP).
We are using GCC ARM v7.2.1 to compile both the library and the application that is linked to it.
The application I'm trying to integrate it with is compiled for a Cortex-M33 with -mfloat-abi=hard -mfpu=fpv5-sp-d16.
The code for the library does not use any floating point and is compiled using -march=armv7-m (both builds have the -mthumb flag).
Linking seemed to all go well, until I actually called a function from the lib. At that point the linker starts to complain:
application.elf uses VFP register arguments, libcommstack.a(somefile.c.obj) does not
failed to merge target specific data of file libcommstack.a(somefile.c.obj)
Since I'm not using floating point in the library and I don't know (up front) whether the target application has an FPU (or even uses floats), I'm not sure how to approach this.
I figured there would be two approaches:
Compile a single version of the lib, using an instruction set that all of the microcontrollers understand. I was hoping that this would be the case with ARMv7 (although I'm not yet 100% confident that the M23/M33 also support this).
Compile a lot of different libs for the different flavors based on the different architectures, FPU, etc.
As you can imagine, I would prefer to keep it simple and go for option 1, but I'm not sure how to "convince" the linker to link these two (or perhaps how to convince the compiler NOT to care about floating point for the lib).
Does anyone know if option 1 is feasible and how it can be achieved?
If it is not feasible, what would be the variables to keep in mind to determine the different build flavors?
Does anyone know if option 1 is feasible
Well, feasible, probably.
how it can be achieved?
Get all the processors you want to support and determine the instruction sets available on all of these processors. Then compile for that common instruction set.
But, please don't, that is a workaround.
If it is not feasible, what would be the variables to keep in mind to determine the different build flavors?
GCC has something like "multilib profiles". See the output of arm-none-eabi-gcc --print-multi-lib. If you have newlib installed, look in /usr/arm-none-eabi/lib/thumb/ and see the directories there - newlib is compiled once for each profile, a separate library is installed for each, and a different one is picked up depending on the compile options. Compile your library for each of those profiles and package it by putting the libraries in the proper /usr/arm-none-eabi/lib/proper/directory/here, and the compiler will pick them up by itself (see gcc -v output for the library search paths). For an example, search the newlib sources for where this happens - I can't find it right now. (Here's my example). With CMake as the build system, for example, you could compile and install as follows:
arm-none-eabi-gcc --print-multi-lib |
while IFS=';' read -r dir opts; do
    # Each line looks like "thumb/v7-m/nofp;@mthumb@march=armv7-m@mfloat-abi=soft":
    # the profile directory, then its options with '@' standing in for ' -'.
    flags="${opts//@/ -}"
    cmake -S . -B "build/$dir" -DCMAKE_C_FLAGS="$flags" -DCMAKE_INSTALL_LIBDIR="lib/$dir"
    cmake --build "build/$dir"
    cmake --install "build/$dir" --prefix "/usr/arm-none-eabi/"
done
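To check which profile directory a given set of application flags selects (and therefore which copy of your library will be linked), you can ask the compiler directly; the flags below are just an example:

arm-none-eabi-gcc -mthumb -mcpu=cortex-m33 -mfloat-abi=hard -mfpu=fpv5-sp-d16 -print-multi-directory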

Compile libraries for ARM toolchain (buildroot)

I am using buildroot's toolchain to cross compile applications for ARM. However, some applications require libraries that are not compiled for that toolchain. I have those libraries in my host toolchain, like -ljack, -lfftw etc.
I need to know: if I get a tarball of the required packages, how can I configure them so that the libraries are compiled by arm-gcc and the headers/libraries are copied to the /usr and /include directories of the buildroot?
That way I should be able to access these libraries via buildroot's toolchain.
Thanks,
Well, you need to integrate them into Buildroot.
Take fftw for example: in that particular case, fftw is already available in Buildroot, and you just have to enable it in your build. Go to Target packages->Libraries->Other and enable fftw.
If you don't know where to find a package, run make menuconfig and type Ctrl-/ to get a search box. There you could type e.g. fftw and learn where in the menu system it is located and what dependencies it has.
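For instance, assuming a standard Buildroot tree (the per-package make target is Buildroot's usual convention):

make menuconfig   # Target packages -> Libraries -> Other -> fftw
make fftw         # build just that package with the cross toolchain
make              # or rebuild the whole target filesystem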
If fftw (or some other library you need) isn't already available in Buildroot, you need to add it yourself. See e.g. adding packages to Buildroot.

Building GCC with MPFR, GMP and MPC

Of course we all know building GCC version >= 4.1.x requires the supplementary packages MPFR, GMP and MPC to be present.
There are a few ways to handle these GCC dependencies:
1) Download and build each supporting package separately and then tell make where the binaries are located during GCC build time.
2) Download each supporting package, untar it and move the source into your GCC source directory, and make will automatically build each of the packages when needed.
(Executing the gcc-src/contrib/download_prerequisites script does the same as option 2)
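In rough terms, the two options look something like this (target, prefix and paths are just examples):

# Option 2: let the GCC tree build the prerequisites in-tree.
cd gcc-src && ./contrib/download_prerequisites && cd ..
mkdir gcc-build && cd gcc-build
../gcc-src/configure --target=arm-none-eabi --prefix=/opt/cross
make && make install
# Option 1: build each prerequisite separately, then point GCC's configure at the installs.
../gcc-src/configure --target=arm-none-eabi --prefix=/opt/cross \
    --with-gmp=/opt/deps --with-mpfr=/opt/deps --with-mpc=/opt/deps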
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Is there an advantage to either method? Does pre-compiling the binaries provide something I'm missing by taking the "easy route" and just dumping the packages' sources into my GCC source directory and letting make figure it out?
I've seen it done more frequently in various build scripts by pre-compiling each package to a binary, and then telling make where they are located during gcc compilation. Is this the "preferred" way to do it? Why?
To add context, I'm mainly building cross-compilers targeting various ARM platforms.
For most use cases I believe that option 2 is just as good as option 1. However, I can see a few situations in which one would want to do it manually.
A package maintainer wants to build separately as they want separate packages for mpfr et al.
Someone who wants to pass different configure arguments/CFLAGS to each of the packages.
A GCC developer who wants to keep their source and build trees small as they don't make any changes to MPFR/GMP/etc.
I haven't done too much work with the (rather ugly) GCC build system, but I haven't seen any obvious differences in how the binaries are built.
I'm not the biggest authority on this though, so YMMV; I may be wrong.

When/how to specify configure/make target

A large variety of open-source projects are distributed as source code and are meant to be compiled with the ./configure && make approach. But if I want to cross-compile, at which of those two steps am I supposed to say what target platform I want the binary for?
Is this handled by configure/make in general, or is it specific to every project? What would be an example of compiling some project, library or console application and specifying the target?
I know many projects have a page on their websites dedicated to "cross compiling this program", so it seems to be a project-specific setting. But the projects still use configure/make, so what is the relation between all of that?
If your system is using standard GNU autoconf, then you would always define the cross-compilation at configure time, not at make time. If the configure script does not know you're cross-compiling it may obtain incorrect answers when it probes the system looking for what is supported and what is not supported.
Cross-compilation is what the --build, --host, and --target flags to configure are for. You should never need to set --build: it always refers to the system you're running configure on, and configure can figure that out for itself. For a normal cross-compilation you set --host to the triplet of the system the program will run on; --target only matters for packages that themselves generate code for another system, such as compilers and binutils. You may also need to set CC (for C programs) and/or CXX (for C++ programs), LD, AR, STRIP, and a few others. Personally I prefer to build in a separate directory as well, although some packages unfortunately don't support it:
tar xzf foo-1.1.tar.gz
mkdir obj
cd obj
../foo-1.1/configure --host=... CC=...-gcc CXX=...-g++ ...
make
Note this is all provided by basic autoconf / automake, so all projects will do it the same way (although in my experience, projects that don't exercise cross-compilation regularly often get something wrong, so it doesn't work quite so well).
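As a concrete (hypothetical) illustration for a 32-bit ARM Linux target, assuming the cross toolchain is already on PATH:

tar xzf foo-1.1.tar.gz
mkdir obj && cd obj
../foo-1.1/configure --host=arm-linux-gnueabihf \
    CC=arm-linux-gnueabihf-gcc CXX=arm-linux-gnueabihf-g++
make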

Is it possible to link some — but not all — libraries statically with libtool?

I am working on a project which is built using autoconf, automake and libtool. The project is distributed in both binary and source form.
On Linux, by default the build script links to all libraries dynamically. This makes sense since Linux users can rely on their distribution’s package manager to handle dependencies.
On Windows, by default the build script links to all libraries statically using libtool’s -all-static option. This makes sense since none of the dependencies are provided with Windows, and it’s helpful to be able to distribute a single binary containing all dependencies rather than mucking about distributing tons of DLLs.
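In Makefile.am terms that is roughly the following (myprog is just a placeholder name):

myprog_LDFLAGS = -all-static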
On OSX, some of the dependencies are provided by the OS, and some are not. Therefore it would be helpful to link to the OS-provided libraries dynamically and to the other libraries statically. Unfortunately libtool’s all-or-nothing -all-static option is not helpful here.
Is there a good way to get libtool to link to some libraries statically, but not all?
Note: I realise I could carefully compile the dependencies so that only static builds are available. However, I’d rather the build system for my project were robust in the common case of static and dynamic builds of dependencies being available.
Note: Of course, I am not concerned with really low level dependencies like the C/C++ runtime libraries, which are always linked dynamically on all three of the above platforms.
After some research I have answered my own question.
If you have static and dynamic builds of a library installed, and you link to that library using the -l parameter, libtool links by preference to the dynamic build. It links to a static build if there is no dynamic build available, or if you pass the -static or -all-static options.
libtool can be forced to link to the static library by giving the full path to that library in place of the -l option.
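For example, in a Makefile.am (the library names and path are purely illustrative):

myprog_LDADD = /opt/local/lib/libfoo.a -lbar

Here libfoo is linked statically because its archive is named by full path, while -lbar is resolved the usual way and prefers the shared version if one is installed.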
