How do I write a SCons builder that finds implicit dependencies?

First: Is it possible to write your own way of finding additional dependencies (e.g. #included headers) and somehow submit them back to your own custom builder?
Second: If not, is it possible to use SCons's hard-coded #include dependency-detecting system in a custom builder?
Third: If not, can I replace the compiler of, say, the Object() builder with my own (say, a custom llvm with some custom global options), have it also take per-object options (like multiple per-object -iquote flags), and still have it add #included headers as dependencies?
A side question: Is there a better build tool I should be using instead of SCons, one where I can write custom builders/tools/commands and manually get those tools to supply custom dependencies (e.g. use gcc's -H or -M options and convert their output into the build tool's input format)?
I have hacked up make to achieve this, but it's ultra-verbose, and I've resorted to preprocessing makefiles (and then using sed to add tabs). (Which I might write up here later as a question / example.)
Ok, I now have
import SCons.Scanner
env = Environment(
    BUILDERS={'T_s2r': Builder(
        action='g++ -x c++ -std=c++11 -o $TARGET $SOURCE $OPTS',
        source_scanner=SCons.Scanner.C.CScanner())})
env.T_s2r("gen/run/proj.run", "code/src/main.cpp", OPTS="-iquote code/include")
This doesn't update when I alter a header, though.
Ah - Scanner needs a search path.
my_scanner = SCons.Scanner.C.CScanner()
my_scanner.path = FindPathDirs('code/include')
...
Builder(... source_scanner = my_scanner) ...
...
Still no go.

Yes, it is possible to do what you are asking in your first question, using SCons Scanners. Since the answer to your first question is yes, the other two don't need to be answered :)
As for your side question, SCons is very flexible and very extensible. Adding custom builders and tools is quite simple, and I don't know of any other build tool with this much flexibility.
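For reference, here is a minimal sketch of that approach, reusing the names from the question. As far as I can tell, the stock CScanner resolves #include lines against the directories listed in $CPPPATH (assigning to the scanner's .path attribute by hand is not consulted), so the include directory has to appear there as well as on the compiler command line:
import SCons.Scanner

# The built-in C/C++ scanner looks up #included headers in the
# directories listed in $CPPPATH.
env = Environment(
    CPPPATH=['code/include'],
    BUILDERS={'T_s2r': Builder(
        action='g++ -x c++ -std=c++11 -o $TARGET $SOURCE $OPTS',
        source_scanner=SCons.Scanner.C.CScanner())})
env.T_s2r("gen/run/proj.run", "code/src/main.cpp",
          OPTS="-iquote code/include")
With CPPPATH set, editing a header under code/include should trigger a rebuild.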

Related

Linking against an external object file (.o) with autoconf

For work I need to link against an object file that is generated by another program and found in that program's folder; I did not find any information about this kind of linkage. I think that hardcoding the paths and putting name-of-obj.o in front of the package_LDADD variable should work, but I don't want to do it that way.
If the object is not found, I want configure to fail and tell the user that name-of-obj.o is missing.
I tried using AC_LIBOBJ([name-of-obj.o]), but that tries to find a name-of-obj.c in the root directory and compile it.
Any tip or solution around this issue?
Thank you!
I need to link against an object file generated by another program and found in that program's folder
What you describe is a very unusual requirement, not among those that the Autotools are designed to handle cleanly or easily. In particular, Autoconf has no mechanisms specifically applicable to searching for bare object files, as opposed to libraries, and Automake has no particular automation around including such objects when it links. Nevertheless, these tools do have enough general purpose functionality to do what you want; it just won't be as tidy as you might like.
I think that hardcoding the paths and putting name-of-obj.o in front of the package_LDADD variable should work, but I don't want to do it that way.
I take it that it is the "hardcode the paths" part that you want to avoid. Adding an item to an appropriate LDADD variable is not negotiable; it is the right way to get your object included in the link.
If the object is not found, I want configure to fail and tell the user that name-of-obj.o is missing.
Well, then, the key thing appears to be to get configure to perform a search for your object file. Autoconf does not have a built-in mechanism to perform such a search, but it's just a macro-based shell-script generator, so you can write such a search in shell script + Autoconf, maybe something like this:
AC_MSG_CHECKING([for name-of-obj.o])
OTHER_LOCATION=
for my_dir in \
    /some/location/other_program/src \
    /another/location/other_program.12345/src \
    $srcdir/../relative/location/other_program/src; do
  AS_IF([test -r "${my_dir}/name-of-obj.o"], [
    # optionally, perform any desired test to check that the object is usable
    # ... perhaps one using AC_LINK_IFELSE ...
    # if it passes, then:
    OTHER_LOCATION=${my_dir}
    break
  ])
done

# Check whether the object was in fact discovered, and act appropriately
AS_IF([test "x${OTHER_LOCATION}" = x], [
  # Not found
  AC_MSG_RESULT([not found])
  AC_MSG_ERROR([Cannot configure without name-of-obj.o])
], [
  AC_MSG_RESULT([${OTHER_LOCATION}/name-of-obj.o])
  AC_SUBST([OTHER_LOCATION])
])
That's functional, but of course you could embellish, such as by providing for the package builder to specify a location to use via a command-line argument (AC_ARG_WITH(...)). And if you want to do this for multiple objects, then you would probably want to wrap up at least some of that into a custom macro.
The Automake side is much less involved. To get the object linked, you just need to add it to the appropriate LDADD variable, using the output variable created by the above, such as:
foo_LDADD = $(OTHER_LOCATION)/name-of-obj.o
Note that if you're building just one program target, then you can use the general LDADD instead of foo_LDADD, but note that by default these are alternatives, not complements.
With that said, this is a bad idea overall. If you want to link something that is not part of your project, then you should get it from an installed library. That can be a local, custom-built library, of course, so long as it is a library, not a bare object file, and it is installed. It can be a static library if you don't want to rely on or distribute a separate shared library.
On the other hand, if your project is part of a larger build, then the best approach is probably to integrate it into that build, maybe as a subproject. It would still be best to link a library instead of a bare object file, but in a subproject context it might make sense to use a library that is not installed on the build system. In conjunction with a command-line argument that tells configure where to find the wanted library, this could make the needed Autoconf code much cleaner and clearer.

How to use multiple compilers with waf (Python)

I can't figure out how to use two different compilers in the same wscript. Nothing in the Waf book shows this clearly.
I tried something along these lines:
def configure(ctx):
    ctx.setenv('compiler1')
    ctx.env.CC = '/some/compiler'
    ctx.load('compiler_c')

    ctx.setenv('compiler2')
    ctx.env.CC = '/some/other/compiler'
    ctx.load('compiler_c')
This does not appear to work; Waf does not find any compiler when I do it that way. The only way I have managed to compile with two different compilers is by specifying the compiler on the command line:
$ CC='/some/compiler' waf configure
This is annoying because I have to change the CC variable by hand every time and rerun configure...
Thanks!
Well, you were close :) You just need to load the compiler tool (conf.load('compiler_c')) after setting the CC env variable, and use variant builds. I wrote a complete example in this answer; a sketch of the idea follows.
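A minimal wscript sketch of that approach, keeping the placeholder compiler paths from the question; the variant names are arbitrary, and waf puts the outputs under build/compiler1 and build/compiler2:
from waflib.Build import BuildContext

def options(opt):
    opt.load('compiler_c')

def configure(ctx):
    # one configuration set per compiler
    ctx.setenv('compiler1')
    ctx.env.CC = '/some/compiler'
    ctx.load('compiler_c')

    ctx.setenv('compiler2')
    ctx.env.CC = '/some/other/compiler'
    ctx.load('compiler_c')

def build(bld):
    if not bld.variant:
        bld.fatal('call "waf build_compiler1" or "waf build_compiler2"')
    bld.program(source='main.c', target='app')

# one build command per configuration variant
class build_compiler1(BuildContext):
    cmd = 'build_compiler1'
    variant = 'compiler1'

class build_compiler2(BuildContext):
    cmd = 'build_compiler2'
    variant = 'compiler2'
After a single waf configure, running waf build_compiler1 build_compiler2 builds with both compilers without touching CC again.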

How to check for a macro defined in a C file in a Makefile?

I have a #define ONB in a C file which (with several #ifndef...#endifs) changes many aspects of a program's behavior. Now I want to change the project Makefile (or even better, Makefile.am) so that if ONB is defined, and some other options are set accordingly, it runs some special commands.
I searched the web, but all I found was about checking for environment variables... So is there a way to do this? Or must I change the C code to check for it in environment variables? (I prefer not to change the code, because it is a really big project and I do not know everything about it.)
Questions: my reputation is insufficient to ask in comments, so I will have to ask here:
How and when is the define added to the target in the first place?
Do you essentially want a way to query the binaries after compilation to determine whether a particular define was used?
It would be helpful if you could give a concrete example: what are the special commands you want run, and what are the .c and .h files involved?
Possible solution: Depending on what you need, you could use LLVM tools to generate and examine the AST of your code to see if a define is used. But this seems a little like over-engineering.
Possible solution: You could also use #include to pull in the .c or header file and have a conditional #error generated, or compile it (to a .o); if the compile fails, you know whether it is defined. But this has its own issues, depending on how things are set up in your makefile. A sketch of this probe idea is shown below.
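As a rough illustration of that second idea, rendered here as a standalone Python helper rather than a make rule (the file name "the_file.h" and the cc invocation are assumptions, not from the original project):
import subprocess

# Feed the preprocessor a tiny probe that pulls in the file that may
# define ONB, and that expands a marker token only when it is defined.
# Run from the directory containing the file, or add -I flags as needed.
probe = '#include "the_file.h"\n#ifdef ONB\nONB_IS_DEFINED\n#endif\n'
result = subprocess.run(['cc', '-E', '-x', 'c', '-'],
                        input=probe, capture_output=True, text=True)
if 'ONB_IS_DEFINED' in result.stdout:
    print('ONB is defined')
else:
    print('ONB is not defined')
A Makefile rule could invoke such a helper and branch on its output or exit status.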

Trying to make SCons Ada Builder work with VariantDir

I'm struggling with the last pieces of logic to make our Ada builder work as expected with VariantDir. The problem is caused by the fact that the inflexible tools gnatbind and gnatlink don't allow the binder files to be placed in a directory other than the current one. This leaves me with two options:
Let gnatbind write the binder files to the top directory and then let gnatlink pick them up from there. This may however cause race conditions if we want to allow simultaneous builds for different architectures and compiler versions, which we do.
Modify the calls to gnatbind and gnatlink to temporarily go down to the build directory, in our case build/$ARCH/src-path. I successfully fixed the gnatbind step, as this is explicitly called using an env.Execute from within the Ada builder. To try to fix the linking step I've modified the Program env using
env["LINKCOM"] = SCons.Action.Action(ada_linkcom)
where ada_linkcom is defined as
def ada_linkcom(source, target, env):
    ....
    return ret
where ret is a string describing what should be done in the shell. I need this to be a function because it contains somewhat complicated logic to convert paths from being relative to the top level to just containing their basenames.
This however fails with an error in scons-2.3.1/SCons/Executor.py on line 347 in function do_execute. Isn't env["LINKCOM"] allowed to be a function with ada_linkcom's signature?
No, it's not. You seem to think that 'env["LINKCOM"]' is what actually calls/executes the final build command, and that's not quite correct. Instead, environment variables like LINKCOM get expanded by the Executor/Builder for each specified Action, and are then executed.
You can have Python functions as Actions, and also use a so-called "generator" to create your Action strings on-the-fly. But you have to assign this Action to a Builder, and can't set it as an environment variable directly.
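For illustration, a minimal sketch of the generator route (the gnatlink command line below is a placeholder, not the project's real link logic):
# A generator returns the command string to execute; SCons calls it
# with this signature. It is attached to a Builder via the generator
# keyword rather than stored in $LINKCOM.
def ada_linkcom_generator(source, target, env, for_signature):
    # the path-to-basename conversion logic would live here
    sources = ' '.join(str(s) for s in source)
    return 'gnatlink -o %s %s' % (target[0], sources)

ada_program = Builder(generator=ada_linkcom_generator)
env = Environment(BUILDERS={'AdaProgram': ada_program})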
Please also have a look at the UserGuide ( http://www.scons.org/doc/production/HTML/scons-user.html ), especially section 18.4 "Builders That Execute Python Functions". Our basic guide for writing Builders and Tools might also prove to be helpful: http://www.scons.org/wiki/ToolsForFools

Debugging a Makefile

Let me preface this question with the comment that I know very little about Makefiles or make.
There is a very large project that is automatically built nightly. It is built in both Debug and Release mode, Debug being used for utilities like Valgrind to provide code analysis. Somehow, some of the built libraries are losing the debug flag during the make process, which makes some analysis output unhelpful. I was tasked with finding the bug and I need some suggestions on how to go about locating/repairing the issue.
Thanks in advance
make itself also supports a debug flag, -d; depending on how your Makefiles call each other, it may be possible to pass it through (and if not, you can rewrite them to do so with a script); then if you feed the resulting output to a file you can start looking for clues.
Given the sparse information, I can only sketch a very general strategy based on what I've seen in terms of Makefile usage for a handful of large projects.
If you don't already know where the flags originate, search through the Makefiles to find out.
Something like:
find . -name Makefile -exec grep -nH -- -g {} \;
(Adjusting the -name pattern if your project uses included Makefiles like foo.mk or bar.mak or something. And adjusting the "-g" if your debug flag is something else.)
You'll probably find it assigned to a variable like CFLAGS. Look around the spot where this variable is assigned; it is probably set conditionally (e.g. ifeq ($(RELEASE),1)).
Now look at the Makefile(s) in the library that isn't getting those flags. Find the spot where the compile command lives. Is it using the right variable? Are these Makefiles overriding the variable?
It may also be helpful to capture the output of a build to a file and search around for any other places that might not have the debug flags set.
Use remake, it's really good.
