Checking and compiling Protocol Buffers using GNU build tools - makefile

So I have this project which depends on Google's Protocol Buffers compiler and libraries. Checking for the libraries is easy, since a pkg-config file is provided, so the check reduces to PKG_CHECK_MODULES([protobuf], [protobuf]). But I'd also like to check for the protoc compiler (or a similar tool), so that my .proto files can be built auto-magically.
Could anyone provide some form of macro for this, or point me to a good tutorial on writing macros? (I haven't found anything useful so far.)
Julian.

To check for the presence of particular programs, you should use either AC_CHECK_PROG or AC_PATH_PROG. See the GNU Autoconf Manual.
AC_PATH_PROG([PROTOC], [protoc], [no])
if test "x$PROTOC" = "xno" ; then
    AC_MSG_ERROR([protoc not found])
fi
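If you also want the .proto files built automatically, a minimal Makefile.am sketch is below. It assumes GNU make pattern rules, a placeholder messages.proto, and the protobuf_CFLAGS / protobuf_LIBS variables that the PKG_CHECK_MODULES([protobuf], [protobuf]) call already provides; adjust the names to your project:

PROTO_FILES = messages.proto
BUILT_SOURCES = messages.pb.cc messages.pb.h
bin_PROGRAMS = myprog
myprog_SOURCES = main.cc messages.pb.cc
myprog_CXXFLAGS = $(protobuf_CFLAGS)
myprog_LDADD = $(protobuf_LIBS)

# generate C++ sources with the protoc found by AC_PATH_PROG above
%.pb.cc %.pb.h: %.proto
	$(PROTOC) --proto_path=$(srcdir) --cpp_out=. $<

EXTRA_DIST = $(PROTO_FILES)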
See also this other question.

Related

Can I use GCC compiler AND Clangd Language Server?

I am working on a project that uses a GCC library (SFML), which is not available for clang, as far as I know. I am using COC with vim for code completions, but for C++ it needs clangd. Is there a way to use GCC as my compiler, but still use the clangd language server?
I have also heard that there may be a way to make clang recognize GCC libraries/headers, but I've never been able to make it work right. If somebody could point me in the right direction there, that would be helpful too. But I'm used to GCC (I've been using it since I started programming in C++), so being able to use clangd and GCC would be preferable.
Yes, it is. I do it with ccls (which is clang-based as well).
Given that my installation of clang is not the standard one (I compile it myself, tune it to use libc++ by default, and install it somewhere in my personal space), I have to inject paths to header files known to clang but unknown to other clang-based tools.
I obtain them with
clang++ -E -xc++ - -Wp,-v < /dev/null
Regarding the other options related to the current project, I make sure to have a compile_commands.json compilation database (generated by CMake, or bear if I have no other choice), and ccls can work from there. I expect clangd to be quite similar in these aspects.
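For what it's worth, generating that compilation database is usually a one-liner; a rough sketch (assuming a CMake project, or bear 3.x for plain make-based builds):

# CMake: export compile_commands.json into the build directory
cmake -S . -B build -DCMAKE_EXPORT_COMPILE_COMMANDS=ON
ln -sf build/compile_commands.json .   # clangd/ccls look for it at the project root

# plain Makefile project: record the compile commands with bear
bear -- make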
Oops, I answered the wrong question.
But for those who use ccls:
create a .ccls file in your project directory and append --gcc-toolchain=/usr to it (a minimal sketch follows below);
use this tool to generate a compile_commands.json file
see https://github.com/MaskRay/ccls/wiki/FAQ#compiling-with-gcc
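As an illustration, such a .ccls file lists one argument per line; the driver name and the -std flag here are only examples:

clang++
--gcc-toolchain=/usr
-std=c++17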

using my own static/dynamic library: HOW TO compile and link against it (the right way to do things properly)

Sorry for this newbie question.
Context:
I have just created my own library (using CMake):
libmyownsomething.a <--- static version of the compiled library
libmyownsomething.so <--- dynamic version of the same library
libmyownsomething.h <--- the header file to be included in other project
Questions:
Where is the right place for these files? (I guess /usr/local/include/ and /usr/local/lib/.)
How do I compile other libs/projects against this one by adding only #include <myownsomething.h> and the right flag, LDFLAGS=-lmyownsomething?
To be able to link against your library with just -lmyownsomething, you need to have libmyownsomething.so (or .a, for static linking) in one of the directories your linker searches by default.
I found this in the Texinfo documentation for GNU ld (library search path in a linker script):
'SEARCH_DIR(PATH)'
The 'SEARCH_DIR' command adds PATH to the list of paths where 'ld'
looks for archive libraries. Using 'SEARCH_DIR(PATH)' is exactly
like using '-L PATH' on the command line
Now, GNU ld (ld.bfd, to be precise) uses a default linker script, which can be obtained with --verbose. Let's see which search dirs there are by default (on my system, anyway -- that may well depend on the configuration; if you are going to distribute your library, you probably want to make the most portable choice):
$ ld --verbose |& grep SEARCH_DIR
SEARCH_DIR("/usr/x86_64-pc-linux-gnu/lib64"); SEARCH_DIR("/usr/lib"); SEARCH_DIR("/usr/local/lib"); SEARCH_DIR("/usr/x86_64-pc-linux-gnu/lib");
To answer the question about #include-ing files, you need to consult the compiler documentation, or perhaps the POSIX standard. (However, I'd recommend either going with the simplest choices -- see below -- or providing a configurable way to install your files, e.g. with a --prefix build-time option. Then the user/packager can decide the best place to put them for themselves. But nothing keeps you from providing sane defaults. The same goes for libraries, but I tried to address that part of the question exactly as it was asked.)
Generally, this stuff varies from system to system, but most systems follow the Filesystem Hierarchy Standard. I think /usr/include and /usr/lib are the safest choice. Another good practice is to use the pkg-config mechanism (e.g. pkgconf).
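To illustrate that last point, a hand-written libmyownsomething.pc (all names, paths and the version are placeholders), typically installed into /usr/local/lib/pkgconfig/, could look like this:

prefix=/usr/local
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: myownsomething
Description: Example library
Version: 1.0.0
Libs: -L${libdir} -lmyownsomething
Cflags: -I${includedir}

Consumers can then build with something like gcc main.c $(pkg-config --cflags --libs myownsomething), or check for the library from Autotools with PKG_CHECK_MODULES.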
Hope this helps. I really should have asked about your use case first: who are you going to distribute your software to, and how? Anyway, be sure to post comments. Also, I wonder whether this is Stack Overflow material; perhaps it should be moved somewhere else.

setting up autotools to link single system library statically

I have a project, where I want to link one of system libraries statically. The project uses GNU build system.
In configure.ac I have:
AC_CHECK_LIB(foobar, foobar_init)
On the development machine this library is installed in /usr/lib/x86_64-linux-gnu. It is detected, but it is linked dynamically, which causes issues, as it is not present on some machines. Linking it statically (-Wl,-Bstatic etc.) works fine, but I don't know how to set this up in autotools. I tried forcing this into the Makefile.am link flags for the project, but it still gives preference to the dynamic library.
I also tried using --enable-static with ./configure, but it seems to have no effect on system libraries.
If you want to link the whole program statically, then you should pass the --disable-shared option to configure. You might or might not need also to pass --enable-static, depending on the default value for that option (which you can influence via your configure.ac file). You really should consider doing this.
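Concretely, for a libtool-based package that would be something along the lines of:

./configure --disable-shared --enable-static
make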
You should also consider making this the installer's problem, not the build system's. Let it be the installer's responsibility to ensure that all the shared libraries needed by the program are provided by the systems where it is installed. This is very common; in fact, it is one of the inspirations for package-management systems such as yum / dnf and apt, and their underlying packaging formats.
If you insist on linking only one library statically, while linking everything else dynamically, then you'll need to jump through a few more hoops. The objective will be to emit link options that cause just that library to be linked statically, without changing other libraries' linking. With the GNU toolchain, and supposing that the program is otherwise being linked dynamically, that would be this combination of options:
-Wl,-Bstatic -lfoobar -Wl,-Bdynamic
Now consider the documentation of the AC_CHECK_LIB() macro:
Macro: AC_CHECK_LIB (library, function, [action-if-found],
[action-if-not-found], [other-libraries])
[...] action-if-found is a list of shell commands to run if the link with the library succeeds; action-if-not-found is a list of shell
commands to run if the link fails. If action-if-found is not
specified, the default action prepends -llibrary to LIBS and defines
'HAVE_LIBlibrary' (in all capitals). [...]
Note in particular the default behavior in the event that the optional arguments are not provided (your present case) -- that's not quite what you want, at least not by itself. I suggest providing at least an alternative behavior for the action-if-found case, and you could also consider making configure fail in the action-if-not-found case. The latter is left as an exercise; implementing just the former might look like this:
AC_CHECK_LIB([foobar], [foobar_init], [
    LIBS="-Wl,-Bstatic -lfoobar -Wl,-Bdynamic $LIBS"
    AC_DEFINE([HAVE_LIBFOOBAR], [1], [Define to 1 if you have libfoobar.])
])
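If you also want configure to fail outright when the library is missing (the exercise mentioned above), one possible spelling, sketched here rather than tested, is:

AC_CHECK_LIB([foobar], [foobar_init],
    [LIBS="-Wl,-Bstatic -lfoobar -Wl,-Bdynamic $LIBS"
     AC_DEFINE([HAVE_LIBFOOBAR], [1], [Define to 1 if you have libfoobar.])],
    [AC_MSG_ERROR([libfoobar is required but was not found])])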
You should also pay attention to the order of your AC_CHECK_LIB() invocations. As its docs go on to say:
This macro is intended to support building LIBS in a right-to-left
(least-dependent to most-dependent) fashion such that library
dependencies are satisfied as a natural side effect of consecutive
tests. Linkers are sensitive to library ordering so the order in which
LIBS is generated is important to reliable detection of libraries.
If you find that you still aren't getting what you want, then have a look at the link commands that make actually executes. You need to understand what's wrong about them before you can determine how to fix the problem.
With all that said, I observe that the above treatment is basically a hack, and it makes your build system much less resilient. It introduces dependencies on GNU toolchain options (which some other toolchains may nevertheless accept), and it assumes dynamic linking is being performed overall. It may be possible to resolve those issues with additional Autoconf code, but I urge you to instead go with one of the first two alternatives I described.

Link go program vs GNU readline statically

I'm writing a Go program that uses the GNU readline library for a fancy command-line interface. In order to simplify the installation process and not worry about the library version and other details, I want to link it statically.
The problem is I don't really know how to do it. If I precompile the library, I would have to provide several versions of my code, one for each version of the .a or .lib readline library. To avoid this problem I was thinking of just including the current readline code in my Go project and letting the go tool compile it when it builds the project. However, building the readline library requires make. Is there a way of telling the go tool how to build the C code?
Yes, you can certainly do that. I've recently done something similar with a different project, mainly because the code was not available as a library (Ubuntu compiles just the command line tool for it). To achieve it, I've run the autoconf script with options that I figured would be sensible in most systems, and copied the C code together with the automatically built config.h header file into the Go package directory. Then, I've built the original C code with make once and observed which options gcc would get while compiling and linking it, and copied the appropriate ones into cgo's LDFLAGS and CFLAGS options (you can also inspect the Makefile, but that was easier).
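The cgo side of that boils down to a few directives in the Go source. A rough sketch follows; the include path, the -lncurses link flag and the assumption that the copied readline sources and config.h sit next to the Go file are illustrative, not something the readline build dictates:

package readline

// #cgo CFLAGS: -I${SRCDIR} -DHAVE_CONFIG_H
// #cgo LDFLAGS: -lncurses
// #include <stdio.h>
// #include <stdlib.h>
// #include "readline/readline.h"
import "C"

import "unsafe"

// Line shows the prompt, reads one line via the C readline() call and returns it.
func Line(prompt string) string {
	cprompt := C.CString(prompt)
	defer C.free(unsafe.Pointer(cprompt))

	cline := C.readline(cprompt)
	if cline == nil { // EOF
		return ""
	}
	defer C.free(unsafe.Pointer(cline))
	return C.GoString(cline)
}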
A couple of side notes:
Have you considered doing the readline work in Go itself? The ssh terminal package works at least as a pretty good seed, if it doesn't solve your problem completely.
Remember that readline, although a library, is GPL-licensed. You'll necessarily have to license your own software as GPL as well if you link against or embed it. There are other similar libraries available with less strict licenses, if you care.
I recommend avoiding readline; better alternatives exist, like https://github.com/edsrzf/fineline

Migrating from Winarm to Yagarto

This question must apply to so few people...
I am busy migrating my ARM C project from WinARM GCC 4.1.2 to Yagarto GCC 4.3.3.
I did not expect any differences, and both compile my project happily using the same makefile and .ld files.
However, while the WinARM version runs, the Yagarto version doesn't. The processor is an Atmel AT91SAM7S.
Any ideas on where to look would be most welcome. I am thinking that my assumption that a makefile is a makefile is incorrect, or that the .ld file for WinARM is not applicable to Yagarto.
Since they are both GCC toolchains and presumably use the same linker, they must surely be compatible.
TIA
Ends.
I agree that the gcc binaries and the other tools (ld) should be the same, or close enough for you not to notice the differences. But the startup code, whether it is yours or theirs, and the C library can make a big difference: enough to make the difference between success and failure when trying to use the same source and linker script. Now, if this is 100% your code, with no libraries or any other files being used from WinARM or Yagarto, then this doesn't make much sense. Going from 3.x.x to 4.x.x, yes, I had to re-spin my linker scripts, but from 4.1.x to 4.3.x I don't remember having problems.
It could also be a subtle difference in compiler behavior: code generation does change from gcc release to gcc release, and if your code contains pieces which are implementation-dependent for their semantics, it might well bite you in this way. Memory layouts of data might change, for example, and code that accidentally relied on it would break.
Seen that happen a lot of times.
Try it with different optimization options in the compile and see if that makes a difference.
Both WinARM and YAGARTO are based on gcc and should treat ld files equally. Also, both use the GNU make utility, so makefiles will be processed the same way. You can compare the two toolchains here and here.
If you are running your project with an OCD, then there may be differences in the implementation of the OpenOCD debugger. Also, the commands sent to the debugger to configure it could be different.
If you are producing a hex file, then this could differ as well, since the two toolchains do not use the same version of the newlib library.
In order to be on the safe side, make sure that in both cases the correct binutils are first in the path.
If I were you I'd check the compilation/linker flags - specifically the defaults. It is very common for different toolchains to have different default ABIs or FP conventions. It might even be compiling using an instruction set extension that isn't supported by your CPU.
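One cheap way to compare the two toolchains' defaults; the arm-elf- prefix is only an example, use whatever prefix each toolchain actually installs:

# how each compiler was configured (default CPU, ABI, float model, ...)
arm-elf-gcc -v 2>&1 | grep -i 'configured with'

# predefined macros often expose ABI / instruction-set differences
arm-elf-gcc -dM -E - < /dev/null | sort > winarm-macros.txt
# repeat with the other toolchain first in PATH, then diff the two files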
