How to check for a macro defined in a C file in a Makefile? - makefile

I have a #define ONB in a C file which (together with several #ifndef...#endif blocks) changes many aspects of a program's behavior. Now I want to change the project makefile (or even better, Makefile.am) so that if ONB is defined and some other options are set accordingly, it runs some special commands.
I searched the web but all I found was checking for environment variables... So is there a way to do this? Or must I change the C code to check environment variables instead? (I prefer not to change the code because it is a really big project and I do not know everything about it.)

Questions: My reputation level is insufficient to ask in comments, so I will have to ask here:
How and when is the define added to the target in the first place?
Do you essentially want a way to query the binaries after compilation to determine whether a particular define was used?
It would be helpful if you could give a concrete example, i.e. what are the special commands you want to run, and which .c and .h files are involved?
Possible solution: Depending on what you need, you could use LLVM tools to generate and examine the AST of your code to see whether a define is used. But this seems a little like over-engineering.
Possible solution: You could also use #include to pull in .c or header files and have a conditional #error generated, or try a compile (to a .o); if the compile fails, you know whether the define is set. But this has its own issues depending on how things are set up in your makefile.
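For instance, here is a minimal GNU make sketch of a grep-based check that branches on whether the define is present; the file name src/config.c and special-command.sh are hypothetical placeholders, and recipe lines must be indented with tabs:

# Naive detection: grep the source for the define when the makefile is read.
# Note the \# escape; an unescaped # would start a make comment.
ONB_DEFINED := $(shell grep -q '\#define ONB' src/config.c 2>/dev/null && echo yes)

all:
	@echo "normal build steps here"
ifeq ($(ONB_DEFINED),yes)
	@echo "ONB is defined - running the special commands"
	./special-command.sh
endif

This only detects the textual define; it will not notice a macro passed on the compiler command line or defined in another header.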

Related

Linking against an external object file (.o) with autoconf

For work purposes I need to link against an object file generated by another program and found in its folder, but I did not find information about this kind of linkage. I think that hardcoding the paths and putting name-of-obj.o at the front of the package_LDADD variable should work, but I don't want to do it that way.
If the object is not found, I want configure to fail and tell the user that name-of-obj.o is missing.
I tried using AC_LIBOBJ([name-of-obj.o]), but this will try to find a name-of-obj.c in the root directory and compile it.
Any tip or solution around this issue?
Thank you!
I need to link against an object file generated by another program and found in its folder
What you describe is a very unusual requirement, not among those that the Autotools are designed to handle cleanly or easily. In particular, Autoconf has no mechanisms specifically applicable to searching for bare object files, as opposed to libraries, and Automake has no particular automation around including such objects when it links. Nevertheless, these tools do have enough general purpose functionality to do what you want; it just won't be as tidy as you might like.
I think that hardcoding the paths and putting name-of-obj.o at the front of the package_LDADD variable should work, but I don't want to do it that way.
I take it that it is the "hardcode the paths" part that you want to avoid. Adding an item to an appropriate LDADD variable is not negotiable; it is the right way to get your object included in the link.
If the object is not found, I want configure to fail and tell the user that name-of-obj.o is missing.
Well, then, the key thing appears to be to get configure to perform a search for your object file. Autoconf does not have a built-in mechanism to perform such a search, but it's just a macro-based shell-script generator, so you can write such a search in shell script + Autoconf, maybe something like this:
AC_MSG_CHECKING([for name-of-obj.o])
OTHER_LOCATION=
for my_dir in \
    /some/location/other_program/src \
    /another/location/other_program.12345/src \
    $srcdir/../relative/location/other_program/src; do
  AS_IF([test -r "${my_dir}/name-of-obj.o"], [
    # optionally, perform any desired test to check that the object is usable
    # ... perhaps one using AC_LINK_IFELSE ...
    # if it passes, then
    OTHER_LOCATION=${my_dir}
    break
  ])
done
# Check whether the object was in fact discovered, and act appropriately
AS_IF([test "x${OTHER_LOCATION}" = x], [
  # Not found
  AC_MSG_RESULT([not found])
  AC_MSG_ERROR([Cannot configure without name-of-obj.o])
], [
  AC_MSG_RESULT([${OTHER_LOCATION}/name-of-obj.o])
  AC_SUBST([OTHER_LOCATION])
])
That's functional, but of course you could embellish, such as by providing for the package builder to specify a location to use via a command-line argument (AC_ARG_WITH(...)). And if you want to do this for multiple objects, then you would probably want to wrap up at least some of that into a custom macro.
The Automake side is much less involved. To get the object linked, you just need to add it to the appropriate LDADD variable, using the output variable created by the above, such as:
foo_LDADD = $(OTHER_LOCATION)/name-of-obj.o
Note that if you're building just one program target then you can use the general LDADD instead of foo_LDADD, but note that by default these are alternatives not complements.
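For example, a minimal Makefile.am sketch (the program name foo and its source list are hypothetical):

bin_PROGRAMS = foo
foo_SOURCES = main.c util.c
# OTHER_LOCATION is the output variable AC_SUBSTed by the configure check above
foo_LDADD = $(OTHER_LOCATION)/name-of-obj.o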
With that said, this is a bad idea overall. If you want to link something that is not part of your project, then you should get it from an installed library. That can be a local, custom-built library, of course, so long as it is a library, not a bare object file, and it is installed. It can be a static library if you don't want to rely on or distribute a separate shared library.
On the other hand, if your project is part of a larger build, then the best approach is probably to integrate it into that build, maybe as a subproject. It would still be best to link a library instead of a bare object file, but in a subproject context it might make sense to use a lib that was not installed to the build system. In conjunction with a command-line argument that tells it where to find the wanted lib, this could make the needed Autoconf code much cleaner and clearer.

How can I add built-in rules to make?

Make(1) has built-in rules, such that for simple tasks you don't need a makefile at all. I can type make prog and if the current directory has a prog.c, make will do something useful.
I have a number of rules like this (e.g., how to make .pdf from .html) that apply in many projects. If I have a makefile in a directory, I can simply include my rules from a file. Is there a way to tell make to use this file always? Like a dot file that make would always include before doing anything else.
Make's rules are truly built-in, not read from a file. This has advantages (the entirety of make is one executable and you can copy it and install it anywhere and get identical behavior) and disadvantages (you can't modify the default rules without modifying the source code and recompiling--if you want to do that it's easy to do, though: see the default.c file in the sources).
You can, however, specify an extra makefile (or makefiles) that should be parsed before the usual ones via an environment variable: create a makefile with some extra rules, then (in your ~/.bashrc or wherever) set the MAKEFILES environment variable to the name of the file (or files) containing those extra rules, and don't forget to export it.
Now every make invocation will load these rules as well.
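As a sketch, assuming a rules file at ~/make-extra-rules.mk and wkhtmltopdf as the HTML-to-PDF converter (both are just examples; the recipe line must be indented with a tab):

# ~/make-extra-rules.mk -- extra pattern rules made available everywhere
%.pdf: %.html
	wkhtmltopdf $< $@

# then, in ~/.bashrc (or equivalent):
# export MAKEFILES=$HOME/make-extra-rules.mk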
You may discover, though, that this isn't quite what you'd hoped, because it could cause other makefiles to fail or act in bizarre ways (for example if you download open source packages and want to build them locally, etc.) If you do this just remember you did it, so in a few months if you run into issues you'll remember to try undoing it and see if it helps :-)

Boost.Test: Specify function name instead of main

I am working on a project using the CMake build system. By default, CMake has a nice framework for generating a single executable from a set of C/C++ code; the CMake function is called create_test_sourcelist. What it does is generate a C/C++ dispatcher with a single main entry point which calls the other C/C++ code.
Therefore I have a bunch of C/C++ files with a function signature such as: int TestFunctionality1(int argc, char *argv[]), which I'd like to keep as-is, unless of course it means even more work.
How can I keep this system in place and start using BOOST_CHECK? I could not figure out how to specify that the actual entry point is not called int main(int argc, char *argv[]).
My goal is to have a framework for integration with Jenkins; since the project already uses Boost, I believe this should be doable without rewriting the existing CMake test suite and changing all tests into independent main functions.
Unfortunately, it seems there is no straightforward and clean way to do that.
On one side, the only useful function of create_test_sourcelist is to generate a test driver: a rather simple, naive C/C++ translation unit with little room for hacking or extension, based on ${cmake-prefix}/share/cmake/Templates/TestDriver.cxx.in (and there is no way to select a different template).
On the other side, Boost UTF offers its own test runner (the equivalent of a test driver in CMake's terminology), but every variant of it (static, dynamic, single-header) contains a definition of main() one way or another (even in the case of an external test runner).
…so you end up with two main() functions and no way to choose just one.
Digging a little into the sources of create_test_sourcelist, I really wonder why it was implemented as a built-in command and not as an ordinary (external) CMake module; it does nothing special that could not be implemented in the CMake language. The command is also rather dumb: it doesn't check that the required functions actually exist (you'll get compile errors if something is wrong), and there is no way to flexibly customize the output source file at all. All it does is strip paths and extensions from a list of source files and substitute them into the mentioned template using an ordinary configure_file()…
So, personally, I see no reason to use it at all. That is why I've done the same (but better ;) job in the module mentioned in the comment above.
If you still want to use that command: the generated test driver is completely useless if you want to use Boost UTF. You need to provide your own initialization function anyway (and it is not main()), where you can manually register your test cases into the master test suite (or organize your tests into a more complex tree). In that case there is absolutely no reason to use create_test_sourcelist! All you can get from it is a list of sources that needs to be given to add_executable()… but that is much easier to do with set()… The command can't even help you with the list of test functions to call (actually a list of file names without extensions), since that list is used internally and not exported. Do you still want to use that command?

Using precompiled headers in gcc

I am investigating using precompiled headers to reduce our compile times.
I have read the documentation on the subject here: https://gcc.gnu.org/onlinedocs/gcc/Precompiled-Headers.html, where I read the following:
Only one precompiled header can be used in a particular compilation.
On the project whose build time I would like to improve, there are often very long lists of includes. The above leads me to think that to get the most performance improvements, I would have to make a collection of common includes, put them into a single header file, compile and include that header file.
On the other hand, I prefer to list my dependencies in a particular file explicitly, so I would be inclined to first include the precompiled header, followed by the manual list of the actual header files.
I have two questions related to this:
Is my analysis and approach correct? Have I interpreted the statement correctly?
Doing this, I would use this file (say stdafx.h) in many places, thereby including files I don't need. However, I would like to explicitly list my dependencies for code documentation purposes.
Were I to do something like the following:
#ifdef USE_PRECOMPILED_HEADERS
#include "stdafx.h"
#else
#include "dep1.h"
#include "dep2.h"
#endif
I could periodically run a build without precompiled headers to check that all my dependencies are listed. This is a bit clunky, however. Does anyone have a better solution?
If anyone has information to help us obtain better results in our investigation, I am happy to hear it.
Yes, your observation is absolutely fine!
You "would have to make a collection of common includes, put them into a single Header file, compile and include that Header file". This common header file is generally named as stdafx.h (although you can name it anything you want!)
I am afraid I don't really understand this part of the question.
EDIT :
Do you also want the standard headers (like iostream, map, vector, etc.) to be included as dependencies in the code documentation?
Generally the answer is NO. Hence, you should include in stdafx.h only those header files which are not under your control (i.e., (1) standard language includes, and (2) includes from dependent modules, mostly exposed interface headers). All remaining includes (whose source is in the current project/module) should be explicitly included in each file wherever required, and not put in the precompiled stdafx.h.
The above leads me to think that to get the most performance improvements, I would have to make a collection of common includes, put them into a single header file, compile and include that header file.
Yes, this observation is correct: You put most (all?) includes in one single header file, which is then precompiled.
Which, in turn, means that...
- any compilation without the aid of that header being precompiled will take ages;
- you are relying on naming conventions or other means (documentation?) to make the information link between things referenced in your individual translation unit and their declaration.
I don't much like precompiled headers for those reasons...

Syntax Checking with unsupported languages

I have some files with a particular syntax that is similar to Ada (though not identical), and I would like to verify the syntax before running them. There isn't a compiler for these files, so I can't check them before using them. I tried to use the following:
gcc -c -gnats <file>
However, this says "compilation unit expected". I've tried a few variations on this, but to no avail.
I just want to make sure the file is syntactically correct before using it, but I'm not sure how to do it, and I really don't want to write an entire syntax checker just for this.
Is there some way to add an additional, unsupported language to gcc without recompiling it? And would this simply be a file that describes the syntax constructs to gcc, or what would be entailed? I don't need a full compile, only a syntax check.
Alternatively, are there any syntax checkers whose Ada syntax check I could update with the small number of changes required for this language?
I've listed Ada as a tag, since the syntax is nearly identical, and finding something that will do Ada syntax checking without compiling would be a 90% solution for me.
You could try running the files through gnatchop first. The GCC Ada compiler is rather unique in that it expects filenames to match up with the main unit names inside the file. That may be what your error message is trying to say.
gnatchop will go through any files you give it and write out Ada source files with the appropriate names to make gcc happy (even splitting files into multiple files if needed).
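As a sketch of that workflow, wrapped in a make target to keep it repeatable (the input file name my_input.ada and the chopped/ directory are placeholders; recipe lines must be indented with tabs):

# Split the input into properly named compilation units, then ask GNAT for a
# syntax-only pass (-gnats) on everything gnatchop produced.
syntax-check:
	rm -rf chopped && mkdir chopped
	gnatchop -w my_input.ada chopped
	gcc -c -gnats chopped/*

Since your language is only close to Ada, expect some genuine errors from the constructs that actually diverge.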
Another option you might be interested in is OpenToken. It is a parser construction toolkit, written in Ada, that allows you to build your own parsers fairly easily. It comes with a syntax recognizer for Ada, so you may just be able to tweak that a bit for your needs.
