I have a fairly simple Boost.Python extension that I am building with bjam. The problem is that the order in which things happen doesn't make sense to me, and I can't see how to fix it.
My project consists of a root directory, with a Jamroot, and a single project subdirectory with a Jamfile, C++ file, header file, and Python script.
In the root I have a Jamroot file that looks like this, largely scraped together from examples and the docs. It is separate from the project's Jamfile because I actually want to share this amongst several projects that will exist in other subdirectories.
import python ;

if ! [ python.configured ]
{
    ECHO "notice: no Python configured in user-config.jam" ;
    ECHO "notice: will use default configuration" ;
    using python ;
}

use-project boost
  : ./boost ;

project
  : requirements <library>/boost/python//boost_python ;

# A little "rule" (function) to clean up the syntax of declaring tests
# of these extension modules.
rule run-test ( test-name : sources + )
{
    import testing ;
    testing.make-test run-pyd : $(sources) : : $(test-name) ;
}

build-project hello_world ;
# build-project [[other projects]]... ;
Then I have a subdirectory containing my 'hello_world' project (name changed to protect the innocent), which contains a Jamfile:
PROJECT_NAME = hello_world ;

import python ;

python-extension $(PROJECT_NAME)_ext :
    $(PROJECT_NAME).cpp
  :
    <define>FOO
  ;

# Put the extension and Boost.Python DLL in the current directory,
# so that running the script by hand works.
install convenient_copy
  : $(PROJECT_NAME)_ext
  : <install-dependencies>on <install-type>SHARED_LIB <install-type>PYTHON_EXTENSION
    <location>.
  ;

# Declare test targets.
run-test $(PROJECT_NAME) : $(PROJECT_NAME)_ext test_$(PROJECT_NAME)_ext.py ;
That 'convenient_copy' sure is convenient, but I haven't found much documentation about it, unfortunately.
Anyway, the idea is that while I'm in the "hello_world" project directory, I make code changes and type 'bjam' regularly. This has the effect of building the Python extension and then running the test_hello_world_ext.py file, which does an 'import hello_world_ext' to test that the extension has built correctly, and then a bunch of rather trivial unit-tests. If they all pass, then bjam reports success.
The problem seems to be that sometimes bjam runs the Python test before it has run the 'convenient_copy' rule, which means it tests the previous version of the extension and only then overwrites it with the new version. As a result I frequently have to run bjam twice. In fact, on the second run bjam clearly knows that something is out-of-date, because it actually does something; the third and subsequent runs do nothing until I make further source changes. It's like the classic double-make problem that occurs when a dependency isn't declared correctly.
The main problem is that this often reports a failure for a successful build (because the existing extension was bad), and at other times reports a bad build as successful. It actually took me several weeks to notice this behaviour, around the same time I thought I was going insane; perhaps not coincidentally...
It also seems to do this more often on Linux than OS X, but I'm not completely sure. Feels that way though, and I divide my time between both environments fairly equally.
Also, am I the only person who finds bjam's Jamfile syntax utterly confusing? There's a lot going on under the hood that I simply don't understand, nor can I find adequate documentation for it. I'd gladly use make or SCons instead, but I wasn't able to get those working either, due to broken examples here and there. What really confuses me is how bjam builds many, many other targets before getting to my files, which I think would make writing a makefile quite tricky. Since I am quite familiar with GNU Make and SCons, is it worth abandoning bjam for one of those instead?
The order of declaring targets in a jamfile doesn't determine the order of building the targets. Use dependencies to control the build order.
In your case it would be done like this:
Change the run-test rule to accept a requirements argument:
rule run-test ( test-name : sources + : requirements * )
{
    import testing ;
    testing.make-test run-pyd : $(sources) : $(requirements) : $(test-name) ;
}
Then modify the $(PROJECT_NAME) target declaration to add a dependency requirement on convenient_copy:
run-test $(PROJECT_NAME) : $(PROJECT_NAME)_ext test_$(PROJECT_NAME)_ext.py : <dependency>convenient_copy ;
Regarding the part on jamfile syntax etc.:
If you do anything with Boost.Build apart from really trivial things, you should definitely read its User Manual. My personal experience is that after reading it from start to finish, I would choose Boost.Build over other build systems any day. YMMV.
Related
It seems to me that SCons targets are not generated in declaration order. My problem is that I need to generate some code first: I'm using protoc to process a my.proto file into .h and .cc files. I need some pseudo-code like this (what should the working code look like?):
import os

env = Environment(ENV=os.environ, LIBPATH='/usr/local/lib')
env.ShellExecute('protoc', '--outdir=. --out-lang=cpp', 'my.proto')  # produces my.cc
myObj = Object('my.cc')  # should wait until 'my.cc' is generated by protoc
Dependency(myObj, 'my.cc')
mainObj = Object('main.cpp')
My question is:
How do I specify this shell execution of protoc in the SConstruct/SConscript?
How do I make sure that the compilation of 'main.cpp' depends on the existence of 'my.cc'; in other words, that it waits until 'my.cc' has been generated by protoc?
Your observations and assumptions are correct: SCons will not execute the individual build commands in the order that you list them in the SConstruct files. It will run them based on the dependencies of the targets and source files in your build, defined either implicitly (header includes in C++, for example) or explicitly (via the Depends() method).
So you have to define and set up your dependencies correctly, such that SCons delivers the output that you want. For the special protoc case in your example, a special Builder exists that will help you get the dependency graph right. It is available in our ToolsIndex, where support for a variety of other languages and dialects can also be found.
These special builders will emit the correct target nodes, e.g. when given a *.proto input file, and SCons is then able to automatically detect the dependency between the protoc input file and your main program if you say something like:
env = Environment(tools=['default', 'protoc'])
env.Protoc([], "test.proto")
env.Program('main', ['main.cpp'] + Glob('*.cc'))
The Glob('*.cc') will detect your *.cc files, coming out of the protoc Tool, and include them as dependencies for your final target main.
You can always write your own Builders and Emitters in SCons, which is the canonical way of making new tools/toolchains known to SCons' dependency analysis. You can find more info about this in the UserGuide, sect. "18 Writing Your Own Builders", and especially in our ToolsForFools guide.
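For illustration, here is a minimal sketch of a hand-rolled protoc Builder with an Emitter. The builder name MyProtoc, the --cpp_out flag, and the .pb.cc/.pb.h naming are assumptions for the example, not the ToolsIndex Tool's actual API:

import os

def protoc_emitter(target, source, env):
    # Declare the files protoc will produce for each .proto source, so
    # SCons knows about them before anything tries to compile them.
    target = []
    for src in source:
        base = os.path.splitext(str(src))[0]
        target += [base + '.pb.cc', base + '.pb.h']
    return target, source

env = Environment()
env.Append(BUILDERS={'MyProtoc': Builder(
    action='protoc --cpp_out=. $SOURCES',
    emitter=protoc_emitter)})

nodes = env.MyProtoc([], 'my.proto')  # returns the emitted .pb.cc/.pb.h nodes
env.Program('main', ['main.cpp'] + [n for n in nodes if str(n).endswith('.cc')])

Because the emitter registers my.pb.cc as a target of the protoc step, SCons schedules the code generation before any compile that consumes it, regardless of declaration order.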
I'm currently working on a C++ project that uses bjam (Boost.Build) as a builder. So far I have been quite happy with this build system and everything works nicely, with one exception for which I could not find an easy solution:
I would like to have a build configuration of this project in which the user is able to switch certain modules on or off, together with their dependents (ideally with automatic checks that disable a module if required software is not found). By dependents I mean, for example, applications that require a given module to work; these applications should also not be built when the module is disabled.
Since I found nothing out there that could do this job for me, I created some variables in the Jamroot (top-level Jamfile) that mirror the module structure of the project, and I use these variables in if statements within the appropriate Jamfiles to switch things on and off. See below for an example:
Jamroot excerpt:
constant build_DataReader : 1 ;
constant build_RootReader : 1 ;
constant build_AsciiReader : 1 ;

if $(build_DataReader) {
    build-project DataReader ;
}
Jamfile for DataReader Module:
sources = [ glob *.cpp ] ;

if $(build_RootReader)
{
    build-project RootReader ;
    sources = $(sources) $(DATAREADER)/RootReader//RootReader ;
}

if $(build_AsciiReader)
{
    build-project AsciiReader ;
    sources = $(sources) $(DATAREADER)/AsciiReader//AsciiReader ;
}

# Build libDataReader.so
lib DataReader :
    $(sources)
  ;

install dist : DataReader : <location>$(TOP)/lib ;
However, this is not a very elegant solution, as I would constantly have to update these hardcoded if statements whenever dependencies change. In addition, it is annoying to have to construct this tree structure of the project's modules myself, as Boost.Build constructs the same thing internally by itself. Is there some kind of option in Boost.Build to make the building of applications optional when some of their requirements are not built in the current build process?
The only solution I see at the moment would be to write a completely new configuration tool that generates the Jamfiles for me the way I want them (a preprocessor). However, that is work I do not want to start, and I don't really believe there is nothing out there that can do what seems to me like pretty common stuff. But probably I missed something completely...
Hope I explained it in an understandable way, thanks in advance!
Steve
I have got a plugin package to enhance our product. This package contains some additional files and some modified files from the main code-base repository, but we can't directly merge this package into our code-base. Our goal is to copy files from this package into the main code-base at build time, so we have to make some modifications to the makefiles.
This package follows a directory hierarchy similar to that of the main code-base directory tree. What would be the best method to do this? I'm thinking of creating some kind of script. Would that be a good option?
Without seeing any of your code, all I can suggest is creating a make target that will always get executed and putting it among the dependencies of your main code-base build. Something along these lines:
final_target : other_dependencies copy_plugin_files
	command_to_build_final_target

other_dependencies : source_files
	command_to_build_other_dependencies

.PHONY : copy_plugin_files   # this makes sure this will always execute
copy_plugin_files :
	[insert script or cp command here to copy your plugin files]
If you need the plugin files copied first, then list the copy_plugin_files prerequisite before other_dependencies in the final_target rule.
If you need the plugin files to run through their own make process first, then put cd path/to/plugin && $(MAKE) as part of the recipe for your copy_plugin_files target.
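For example, if the plugin package mirrors the main tree, the recipe can be a single copy. The plugin/ and src/ paths here are assumptions; adjust them to your layout:

.PHONY : copy_plugin_files
copy_plugin_files :
	cp -R plugin/. src/

The trailing /. makes cp copy the contents of plugin/ over src/, preserving the mirrored directory hierarchy.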
Hope that helps!
I recently decided to organize the files in my project directory. I moved the parsers I had for a few different file types into their own directory, and also decided to use ocamlbuild (as the project was getting more complicated and the simple shell script was no longer sufficient).
I was able to successfully include external projects by modifying myocamlbuild.ml with some basic rules (calling ocaml_lib; I'll use ocamlfind some other time), but I am stuck on how to properly include the folder as a module in the project. I created a parser.mlpack file and filled it with the proper modules to be included (e.g. "parser/Date", etc.), wrote a parser.mli in the root of the directory for their implementations, and modified the _tags file (see below).
During compilation, the parser directory is traversed properly, and parser.cmi and parser.mli.depends are both created in the _build directory, as well as all *.cm[xio] files in the parser subdirectory.
I feel I might be doing something redundant, but regardless, the project still cannot find the Parser module when I compile!
Thanks!
_tags
true : debug
<*.ml> : annot
"parser" : include
<parser/*.cmx>: for-pack(Parser)
<curlIO.*> : use_curl
<mySQL.*> : use_mysql
<**/*.native> or <**/*.byte> : use_str,use_unix,use_curl,use_mysql
compilation error
/usr/local/bin/ocamlopt.opt unix.cmxa str.cmxa -g -I /usr/local/lib/ocaml/site-lib/mysql mysql.cmxa -I /usr/local/lib/ocaml/curl curl.cmxa curlIO.cmx utilities.cmx date.cmx fraction.cmx logger.cmx mySQL.cmx data.cmx project.cmx -o project.native
File "_none_", line 1, characters 0-1:
Error: No implementations provided for the following modules:
       Parser referenced from project.cmx
Command exited with code 2.
You'll notice that -I parser is not included in the linking phase above; in fact, none of the parser-related files are included!
edit: Added new details from comments and answer below.
You need to "include" the parser directory in the search path. You can do this in _tags:
"parser": include
Then ocamlbuild can search the parser directory for interesting files.
I wonder if parser.mli is somehow interfering with the dependencies when the mlpack file is processed. parser.cmi will be generated by the pack operation when parser.mlpack is processed and compiled. Try building with the parser.mli file removed; if that works, then this can be re-processed into a real answer.
Also, you don't need the parser/ prefix on the modules in parser.mlpack if parser.mlpack is in the parser directory and you have the include tag set. But that shouldn't make a difference here.
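To make that concrete: a parser.mlpack at the project root lists the modules with the directory prefix, e.g.

parser/Date
parser/Fraction

(Fraction is just an illustrative second entry), whereas a parser.mlpack inside the parser directory itself, combined with the "parser": include tag, would list the bare module names:

Date
Fraction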
Update: this worked around the problem, but wasn't the root cause. Root cause, per comment below, was a file mentioned in the .mlpack that had been relocated.
I'm trying to use clang to profile a project I'm working on. The project includes a rather large static library that is included in Xcode as a dependency.
I would really like clang not to analyze the dependencies' files, as that seems to make clang fail. Is this possible? I've been reading the clang documentation and haven't found it.
As a last resort, there is a brute force option.
Add this to the beginning of a file:
// Omit from static analysis.
#ifndef __clang_analyzer__
Add this to the end:
#endif // not __clang_analyzer__
and clang --analyze won't see the contents of the file.
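Wrapped around a whole file, it looks like this (the file and function names are hypothetical):

// vendor_code.c -- compiled normally, but invisible to clang --analyze.
#ifndef __clang_analyzer__

void vendor_function(void)
{
    /* ... the entire file body goes here ... */
}

#endif // not __clang_analyzer__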
reference: Controlling Static Analyzer Diagnostics
So, this isn't really an answer, but it worked well enough.
What I ended up doing was building the static library ahead of time, and then building the project using scan-build. Since there was already an up-to-date build of the static library, it wasn't rebuilt and thus wasn't scanned.
I'd still love to have a real answer for this, though.
Finally, in 2018 the option was implemented.
Use the --exclude <path> [1] [2] option:
--exclude
Do not run static analyzer against files found in this directory
(You can specify this option multiple times). Could be useful when
project contains 3rd party libraries.
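For example, to skip two vendored directories during a scan (the paths and the make invocation are assumptions about your project):

scan-build --exclude third_party --exclude gen make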
I don't use Xcode, but using scan-build on Linux the following works for me. In my case, I want to run the static analysis on all first-party, non-generated code, but avoid running it on third-party code and generated code.
On the command line, the clang analyzer is hooked into the build when scan-build sets the CC and CXX environment variables to the ccc-analyzer and c++-analyzer locations. I wrote two simple scripts called ccc-analyzer.py and c++-analyzer.py and hooked them into the compile in place of the defaults. In these wrapper scripts I simply look at the path of the file being compiled, and then run either the raw compiler directly (if I wish to avoid static analysis) or the c*-analyzer (if I wish static analysis to occur). My script is in Python and tied to my specific build system, but as an example that needs modification:
import subprocess
import sys

def main(argv):
    is_third_party_code = False
    for i in range(len(argv)):
        arg = argv[i]
        if arg == '-c':
            # The argument after -c is the file being compiled; check
            # whether it lives in a third-party or generated-code path.
            file_to_compile = argv[i + 1]
            if '/third_party/' in file_to_compile or \
                    file_to_compile.startswith('gen/'):
                is_third_party_code = True
            break
    if is_third_party_code:
        # Skip analysis: call the real compiler directly.
        argv[0] = '/samegoal/bin/clang++'
    else:
        # Analyze: route the compile through scan-build's c++-analyzer.
        argv[0] = '/samegoal/scan-build/c++-analyzer'
    return subprocess.call(argv)

if __name__ == '__main__':
    sys.exit(main(sys.argv))
For Xcode users: you can exclude individual files from the static analyzer by adding the following flags in the Target -> Build Phases -> Compile Sources section: -Xanalyzer -analyzer-disable-all-checks
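Outside Xcode, the same pair of flags can be passed on an ordinary analyzer invocation, e.g. (file name hypothetical):

clang --analyze -Xanalyzer -analyzer-disable-all-checks some_file.m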