I'm currently working on a C++ project that uses bjam (Boost.Build) as its build system. So far I have been quite happy with this build system and everything works nicely, with one exception for which I could not find an easy solution:
I would like a build configuration for this project in which the user can switch certain modules on or off, together with their dependents (plus automatic checks: if required software is not found, disable the module). By dependents I mean, for example, applications that require a module in order to work; these applications should not be built when the module is disabled.
Since I found nothing out there that could do this job for me, I created some variables in the Jamroot (top-level Jamfile) that mirror the module structure of the project, and I use these variables in if statements within the appropriate Jamfiles to switch things on and off. See below for an example:
Jamroot excerpt:
constant build_DataReader : 1 ;
constant build_RootReader : 1 ;
constant build_AsciiReader : 1 ;
if $(build_DataReader) {
build-project DataReader ;
}
Jamfile for DataReader Module:
sources = [ glob *.cpp ] ;
if $(build_RootReader)
{
build-project RootReader ;
sources = $(sources) $(DATAREADER)/RootReader//RootReader ;
}
if $(build_AsciiReader)
{
build-project AsciiReader ;
sources = $(sources) $(DATAREADER)/AsciiReader//AsciiReader ;
}
# Build libDataReader.so
lib DataReader :
$(sources)
;
install dist : DataReader : <location>$(TOP)/lib ;
However, this is not a very elegant solution, as I would constantly have to update these hardcoded if statements whenever dependencies change. In addition, it is annoying to have to construct this tree structure of the modules myself, as Boost.Build constructs the same thing internally anyway. Is there some option in Boost.Build to make building an application optional when some of its requirements are not built in the current build process?
The only solution I see at the moment would be to write a completely new configuration tool that generates the Jamfiles for me the way I want them (a preprocessor). However, this is work I do not want to start, and I don't really believe there is nothing out there that can do what seems to me like a pretty common task. But probably I missed something completely...
Hope I explained it in an understandable way, thanks in advance!
Steve
Related
I'm working on a GTK3 application, but using Qt Creator as my IDE, just because I happen to like it. It works fine; I can easily turn off all the Qt-specific stuff and link the GTK libraries. There's just one little issue.
GTK uses XML files to define its resources. It comes with a program, "glib-compile-resources", which takes a .gresource.xml file and produces a .c file*, which can then be included in your project. The problem is that Qt Creator doesn't know about glib-compile-resources, so I have to remember to run the program manually every time I make any change to those files.
I've tried using a custom build step, but if I do that, Qt Creator rebuilds the file every time, even if it hasn't changed, which slows the process down. In addition, if the C file doesn't already exist, the build fails with a "No rule to make target 'x.c' needed by 'x.o'. Stop." error, so I have to run the program manually anyway.
Is there any way to tell Qt Creator to run glib-compile-resources whenever it encounters a .gresource.xml file, and include the resulting C file in the final compilation?
*There are other output options available than just a straight C source file, but C source is the simplest and easiest for me.
You can add a custom target in your qmake file (rather than in your QtCreator project config). See the qmake docs at https://doc.qt.io/qt-5/qmake-advanced-usage.html#adding-custom-targets and/or https://doc.qt.io/qt-5/qmake-advanced-usage.html#adding-compilers.
Update: this is a simple example which shows how to do this for a single file, using the custom target mechanism in your .pro file:
glib_resources.target = glib-resources.c
glib_resources.depends = glib-resources.xml
glib_resources.commands = glib-compile-resources --target $$glib_resources.target --generate-source $$glib_resources.depends
QMAKE_EXTRA_TARGETS += glib_resources
PRE_TARGETDEPS += glib-resources.c ## set this target as a dependency for the actual build
QMAKE_CLEAN += glib-resources.c ## delete the file at make clean
Here's how I wound up solving my own issue:
I found in the Qt documentation how to add your own custom compiler to a Qt project file. The exact lines needed are:
GLIB_RESOURCE_FILES += \
resources.gresource.xml
# Add more resource files here.
glib_resources.name = glibresources
glib_resources.input = GLIB_RESOURCE_FILES
glib_resources.output = ${QMAKE_FILE_IN_BASE}.c
glib_resources.depend_command = glib-compile-resources --generate-dependencies ${QMAKE_FILE_IN}
glib_resources.commands = glib-compile-resources --target ${QMAKE_FILE_OUT} --sourcedir ${QMAKE_FILE_IN_PATH} --generate-source ${QMAKE_FILE_IN}
glib_resources.variable_out = SOURCES
glib_resources.clean = ${QMAKE_FILE_OUT}
QMAKE_EXTRA_COMPILERS += glib_resources
(Thanks to @zgyarmati, whose post led me to the right answer.)
I have a plugin package to enhance our product. The package contains some additional files as well as modified versions of some files from the main code-base repository, but we can't merge this package directly into our code base. Our goal is to copy the files from the package into the main code base at build time, so we have to make some modifications to the makefiles.
The package follows a directory hierarchy similar to that of the main code-base directory tree. What would be the best way to do this? I'm thinking of writing some kind of script. Would that be a good option?
Without seeing any of your code, all I can suggest is creating a make target that will always be executed and adding it to the dependencies of your main code-base build. Something along these lines:
final_target: other_dependencies copy_plugin_files
	command_to_build_final_target

other_dependencies: source_files
	command_to_build_other_dependencies

.PHONY: copy_plugin_files  # this makes sure it will always execute
copy_plugin_files:
	[insert script or cp command here to copy your plugin files]
If you need the plugin files copied first, put the copy_plugin_files prerequisite before other_dependencies in final_target's prerequisite list.
If you need the plugin files to run through their own make process first, put cd path/to/plugin && $(MAKE) in the recipe of your copy_plugin_files target.
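Combining the two points above, a copy_plugin_files target might look something like this sketch (PLUGIN_DIR and the copied paths are made-up placeholders for your actual layout):

```make
# Hypothetical layout -- adjust PLUGIN_DIR and the destination to your tree.
PLUGIN_DIR := path/to/plugin

.PHONY: copy_plugin_files
copy_plugin_files:
	$(MAKE) -C $(PLUGIN_DIR)          # run the plugin's own make process first
	cp -r $(PLUGIN_DIR)/src/. src/    # then overlay its files onto the main code base
```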
Hope that helps!
I have a huge project that is built with CMake.
It has been developed for quite a long time and has 8000+ source/header files (over 500 MB, over 500 CMakeLists.txt files).
The directory structure looks like this:
PROJECT_NAME
    src
        subdir_name
        other_dir_name
        some_different_dir
        MY_SPECIFIC_DIR   <---
        yet_another_dir
    build
and it is built out-of-source, like this:
name@host:~/PROJECT_NAME/build> cmake ../src
name@host:~/PROJECT_NAME/build> make all
Then it is built as one BIG binary (the details are not important).
I cannot touch anything else, just the contents of MY_SPECIFIC_DIR - its sources and CMake files.
So, I have the source code in MY_SPECIFIC_DIR, and I would like to tweak its CMakeLists.txt files somehow and build it like this:
name@host:~/PROJECT_NAME/build_specific> cmake ../src/MY_SPECIFIC_DIR
name@host:~/PROJECT_NAME/build_specific> make all
This should build the things in MY_SPECIFIC_DIR into a single binary, with a few links to other subprojects, but also (obviously) change nothing about how the whole project is compiled.
My question is:
Is my desired setup possible using CMake? Can I somehow test in a CMakeLists.txt whether it is the root project, and build differently than when it is built as part of the whole? Otherwise, I will have to resort to other means and use plain make for this.
I don't know CMake, so I'm hoping for a YES/NO answer, preferably even with the technique for achieving this. Then I'll learn CMake and do it.
Also, I must use CMake version 2.6.
Thanks
The basic concept is to use:
if (CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
    # ... code for the stand-alone app
else()
    # ... what was in this file before
endif()
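For example, MY_SPECIFIC_DIR/CMakeLists.txt could be structured like this minimal sketch (the project, target, and source-file names are hypothetical):

```cmake
# MY_SPECIFIC_DIR/CMakeLists.txt -- names below are made up for illustration
if (CMAKE_SOURCE_DIR STREQUAL CMAKE_CURRENT_SOURCE_DIR)
    # Invoked directly as the root (cmake ../src/MY_SPECIFIC_DIR):
    # declare a stand-alone project and build a single binary.
    project(my_specific)
    add_executable(my_specific_binary main.cpp helper.cpp)
else()
    # Included from the top-level build: keep whatever was here before,
    # so the big build is unaffected.
    add_library(my_specific main.cpp helper.cpp)
endif()
```

When the top-level build includes this directory, CMAKE_SOURCE_DIR points at the overall root, so the else() branch runs; when you invoke cmake on MY_SPECIFIC_DIR itself, the two variables are equal and the stand-alone branch runs.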
I have a fairly simple Boost.Python extension that I am building with bjam. The problem is that the order that things happen in doesn't make sense to me, and I can't see how to fix it.
My project consists of a root directory, with a Jamroot, and a single project subdirectory with a Jamfile, C++ file, header file, and Python script.
In the root I have a Jamroot file that looks like this, largely scraped together from examples and the docs. It is separate from the project's Jamfile because I actually want to share this amongst several projects that will exist in other subdirectories.
import python ;
if ! [ python.configured ]
{
ECHO "notice: no Python configured in user-config.jam" ;
ECHO "notice: will use default configuration" ;
using python ;
}
use-project boost
: ./boost ;
project
: requirements <library>/boost/python//boost_python ;
# A little "rule" (function) to clean up the syntax of declaring tests
# of these extension modules.
rule run-test ( test-name : sources + )
{
import testing ;
testing.make-test run-pyd : $(sources) : : $(test-name) ;
}
build-project hello_world ;
# build-project [[other projects]]... ;
Then I have a subdirectory containing my 'hello_world' project (name changed to protect the innocent), which contains a Jamfile:
PROJECT_NAME = hello_world ;
import python ;
python-extension $(PROJECT_NAME)_ext :
$(PROJECT_NAME).cpp
:
<define>FOO
;
# Put the extension and Boost.Python DLL in the current directory, so that running script by hand works.
install convenient_copy
: $(PROJECT_NAME)_ext
: <install-dependencies>on <install-type>SHARED_LIB <install-type>PYTHON_EXTENSION
<location>.
;
# Declare test targets
run-test $(PROJECT_NAME) : $(PROJECT_NAME)_ext test_$(PROJECT_NAME)_ext.py ;
That 'convenient_copy' sure is convenient, but I haven't found much documentation about it, unfortunately.
Anyway, the idea is that while I'm in the "hello_world" project directory, I make code changes and type 'bjam' regularly. This has the effect of building the Python extension and then running the test_hello_world_ext.py file, which does an 'import hello_world_ext' to test that the extension has built correctly, and then a bunch of rather trivial unit-tests. If they all pass, then bjam reports success.
The problem seems to be that sometimes bjam runs the Python test before it has run the 'convenient_copy' rule, which means that it performs the test on the previous version of the extension, and then overwrites it with the new version. This means I'm frequently having to run bjam twice. In fact, the second time bjam knows that something is out-of-date because it actually does something. The third and subsequent time it does nothing until I make further source changes. It's like the classic double-make problem when a dependency isn't correct.
The main problem is that it often fails what should be a successful build (because the pre-existing extension was bad), and at other times shows a bad build as successful. It actually took me several weeks to notice this behaviour, around the same time I thought I was going insane; perhaps not coincidentally...
It also seems to do this more often on Linux than on OS X, but I'm not completely sure. It feels that way, though, and I divide my time fairly equally between the two environments.
Also, am I the only person who finds bjam's Jamfile syntax utterly confusing? There's a lot going on under the hood that I simply don't understand, nor can I find adequate documentation for it. I'd gladly use make or SCons instead, but I wasn't able to get those working either, due to broken examples here and there. What really confuses me is how bjam builds many, many other targets before getting to my files, which I think would make writing a makefile quite tricky. As I am quite familiar with GNU Make and SCons, is it worth abandoning bjam to use one of those instead?
The order in which targets are declared in a Jamfile doesn't determine the order in which they are built. Use dependencies to control the build order.
Here it would be done like this:
Change the run-test rule to accept a requirements argument:
rule run-test ( test-name : sources + : requirements * )
{
import testing ;
testing.make-test run-pyd : $(sources) : $(requirements) : $(test-name) ;
}
Modify the $(PROJECT_NAME) target declaration to add a dependency requirement on convenient_copy:
run-test $(PROJECT_NAME) : $(PROJECT_NAME)_ext test_$(PROJECT_NAME)_ext.py : <dependency>convenient_copy ;
Regarding the part on jamfile syntax etc.:
If you do anything with Boost.Build beyond really trivial things, you should definitely read its User Manual. My personal experience is that after reading it from start to end, I'd choose Boost.Build over other build systems any day. YMMV.
I'm trying to use clang to profile a project I'm working on. The project includes a rather large static library that is included in Xcode as a dependency.
I would really like clang not to analyze the dependency's files, as analyzing them seems to make clang fail. Is this possible? I've been reading the clang documentation and haven't found it.
As a last resort, there is a brute force option.
Add this to the beginning of a file:
// Omit from static analysis.
#ifndef __clang_analyzer__
Add this to the end:
#endif // not __clang_analyzer__
and clang --analyze won't see the contents of the file.
reference: Controlling Static Analyzer Diagnostics
So, this isn't really an answer, but it worked well enough.
What I ended up doing was building the static library ahead of time, and then building the project using scan-build. Since there was already an up-to-date build of the static library, it wasn't rebuilt and thus wasn't scanned.
I'd still love to have a real answer for this, though.
Finally, in 2018 the option was implemented.
Use --exclude <path> [1] [2] option
--exclude
Do not run static analyzer against files found in this directory
(You can specify this option multiple times). Could be useful when
project contains 3rd party libraries.
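Invocation then looks something like this (the directory names are examples; adjust them to your project, and substitute your actual build command for make):

```shell
# exclude third-party and generated code from analysis
scan-build --exclude third_party/ --exclude gen/ make
```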
I don't use Xcode, but with scan-build on Linux the following works for me. In my case, I want to run static analysis on all first-party, non-generated code, but avoid running it on third-party code and generated code.
On the command line, the clang analyzer is hooked into the build when scan-build sets the CC and CXX environment variables to the ccc-analyzer and c++-analyzer locations. I wrote two simple scripts called ccc-analyzer.py and c++-analyzer.py and hooked them into the compile in place of the defaults. In these wrappers I simply look at the path of the file being compiled and then run either the raw compiler directly (to skip static analysis) or the c*-analyzer (to perform it). The script is in Python and tied to my specific build system, but as an example that needs modification:
import subprocess
import sys

def main(argv):
    # Decide whether the file being compiled is third-party or generated code
    # by inspecting the argument after the compiler's -c flag.
    is_third_party_code = False
    for i in range(len(argv)):
        arg = argv[i]
        if arg == '-c':
            file_to_compile = argv[i + 1]
            if '/third_party/' in file_to_compile or \
                    file_to_compile.startswith('gen/'):
                is_third_party_code = True
            break
    if is_third_party_code:
        # Skip analysis: invoke the real compiler directly.
        argv[0] = '/samegoal/bin/clang++'
    else:
        # Analyze: invoke the scan-build compiler wrapper.
        argv[0] = '/samegoal/scan-build/c++-analyzer'
    return subprocess.call(argv)

if __name__ == '__main__':
    sys.exit(main(sys.argv))
For Xcode users, you can exclude individual files from static analyzer by adding the following flags in the Target -> Build Phases -> Compile Sources section: -Xanalyzer -analyzer-disable-all-checks