Can clang be told not to analyze certain files? - xcode

I'm trying to use clang to profile a project I'm working on. The project includes a rather large static library that Xcode builds as a dependency.
I would really like clang not to analyze the dependency's files, as that seems to make clang fail. Is this possible? I've been reading the clang documentation and haven't found a way to do it.

As a last resort, there is a brute force option.
Add this to the beginning of a file:
// Omit from static analysis.
#ifndef __clang_analyzer__
Add this to the end:
#endif // not __clang_analyzer__
and clang --analyze won't see the contents of the file.
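Assembled, a guarded file might look like this (the header and function are placeholders, not from the original answer):
// legacy.c -- omitted from static analysis
#ifndef __clang_analyzer__

#include "legacy.h"

int legacy_add(int a, int b)
{
    // Compiled normally, but invisible to clang --analyze.
    return a + b;
}

#endif // not __clang_analyzer__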
reference: Controlling Static Analyzer Diagnostics

So, this isn't really an answer, but it worked well enough.
What I ended up doing was building the static library ahead of time, and then building the project using scan-build. Since there was already an up-to-date build of the static library, it wasn't rebuilt and thus wasn't scanned.
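For example, with xcodebuild the two-step flow might look like this (target names are placeholders):
xcodebuild -target MyStaticLib build
scan-build xcodebuild -target MyApp build
The first command builds the dependency without the analyzer; the second analyzes only what still needs compiling.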
I'd still love to have a real answer for this, though.

Finally, in 2018, the option was implemented. Use the --exclude <path> option [1] [2]:
--exclude <path>
    Do not run the static analyzer against files found in this directory
    (you can specify this option multiple times). Could be useful when the
    project contains 3rd-party libraries.
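For example, a scan-build invocation that skips two dependency directories might look like this (directory names are illustrative):
scan-build --exclude third_party --exclude vendor make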

I don't use Xcode, but using scan-build on Linux, the following works for me. In my case, I want to run the static analysis on all first-party, non-generated code, while avoiding third-party code and generated code.
On the command line, clang-analyzer is hooked into the build when scan-build sets the CC and CXX environment variables to the ccc-analyzer and c++-analyzer locations. I wrote two simple scripts called ccc-analyzer.py and c++-analyzer.py and hooked them into the compile in place of the defaults. In these wrapper scripts, I simply look at the path of the file being compiled, then run either the raw compiler directly (if I wish to avoid static analysis) or the c*-analyzer (if I wish for static analysis to occur). My script is in Python and tied to my specific build system, but here it is as an example that needs modification:
import subprocess
import sys


def main(argv):
    # Decide whether the file being compiled is third-party or generated.
    is_third_party_code = False
    for i in range(len(argv)):
        arg = argv[i]
        if arg == '-c':
            file_to_compile = argv[i + 1]
            if '/third_party/' in file_to_compile or \
                    file_to_compile.startswith('gen/'):
                is_third_party_code = True
            break

    if is_third_party_code:
        # Plain compile, no static analysis.
        argv[0] = '/samegoal/bin/clang++'
    else:
        # Compile through the analyzer wrapper.
        argv[0] = '/samegoal/scan-build/c++-analyzer'
    return subprocess.call(argv)


if __name__ == '__main__':
    sys.exit(main(sys.argv))
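One way to hook the wrappers in (a sketch; the paths are illustrative and the env override assumes your build reads CC/CXX) is to let scan-build set up its environment and then override the compiler variables for the actual build command:
scan-build env CC=/samegoal/bin/ccc-analyzer.py CXX=/samegoal/bin/c++-analyzer.py make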

For Xcode users: you can exclude individual files from the static analyzer by adding the following flags in the Target -> Build Phases -> Compile Sources section: -Xanalyzer -analyzer-disable-all-checks

Related

Passing extra compilation flags to debug build in bitbake recipe

BitBake builds -dev and -dbg packages for recipes; is it possible to define compile definitions specific to the debug build of a particular recipe? Let's say I have some source code guarded by DEBUG_INFO in some recipe, i.e.,
#ifdef DEBUG_INFO
    ... do something
#endif /* DEBUG_INFO */
The recipe uses CMake in the BitBake environment.
I want this flag enabled only for the debug binaries that end up in the .debug folder. Is this possible?
If I use EXTRA_OECMAKE = "-DDEBUG_INFO" it gets enabled for both the dev and debug builds.
No, it is not possible. All packages of a recipe are built in one go; they are the same files, just split up afterwards.
The only exception is the "special flavors" of a recipe (native, nativesdk, target, multilib, toolchain-specific recipes, etc.): those can have different flags, but still, all packages resulting from the build of one flavor are built with the same flags.
If you want to build another variant of a package where a certain CMake flag is set in the compilation, you can create a variant of the recipe. If the main recipe is named my-app_git.bb you can create another recipe file named my-app-tweak_git.bb and a common base, my-app.inc. In the bb files, include the inc file:
require my-app.inc
Move most of what's now in my-app_git.bb to my-app.inc, e.g. SRC_URI, but define different contents for EXTRA_OECMAKE in the two .bb files.
Now you will have to decide which one of my-app and my-app-tweak goes into the image by specifying either my-app or my-app-tweak in an IMAGE_INSTALL definition.
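A minimal sketch of that layout (the SRC_URI is a placeholder):
# my-app.inc -- shared base
SRC_URI = "git://example.com/my-app.git;protocol=https"
S = "${WORKDIR}/git"
inherit cmake

# my-app_git.bb -- stock variant
require my-app.inc
EXTRA_OECMAKE = ""

# my-app-tweak_git.bb -- variant with the flag enabled
require my-app.inc
EXTRA_OECMAKE = "-DDEBUG_INFO"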
This is not exactly what you asked for, but as has been stated by qschulz, you cannot change the contents of the -dev and -dbg sub-packages.
Also note that dbg and dev can be considered reserved words for variants of the package name, so if you want to use something other than tweak, as in my example, you cannot use any of them.

How do you create custom build rules in QTCreator for code generation?

I'm working on a GTK3 application, but using QTCreator as my IDE, just because I happen to like it. It works fine, I can easily turn off all the QT-specific stuff and link the GTK libraries. There's just one little issue.
GTK uses XML files to define its resources. It comes with a program, "glib-compile-resources", which takes a .gresource.xml file and produces a .c file*, which can then be included in your project. The problem is that QTCreator doesn't know about glib-compile-resources, so I have to remember to run the program manually every time I make any change to them.
I've tried using a custom build step, but if I do that, then QT rebuilds the file every time, even if it hasn't changed, which slows the process down. In addition, if the C file doesn't already exist, it will fail with a "No rule to make target 'x.c' needed by 'x.o'. Stop." error, so I have to run the program manually anyway.
Is there any way to tell QTCreator to run glib-compile-resources whenever it encounters a .gresource.xml file, and include the resulting C file into the final compilation?
*There are other output options available than just a straight C source file, but C source is the simplest and easiest for me.
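For reference, the manual step being automated is roughly (file names are placeholders):
glib-compile-resources --target=resources.c --generate-source resources.gresource.xml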
You can add a custom target in your qmake file (rather than in your QtCreator project config). See the qmake docs at https://doc.qt.io/qt-5/qmake-advanced-usage.html#adding-custom-targets and/or https://doc.qt.io/qt-5/qmake-advanced-usage.html#adding-compilers.
Update: this is a simple example which shows how to do this for a single file, using the custom target mechanism in your .pro file:
glib_resources.target = glib-resources.c
glib_resources.depends = glib-resources.xml
glib_resources.commands = glib-compile-resources --target $$glib_resources.target --generate-source $$glib_resources.depends
QMAKE_EXTRA_TARGETS += glib_resources
PRE_TARGETDEPS += glib-resources.c ## set this target as a dependency for the actual build
QMAKE_CLEAN += glib-resources.c ## delete the file at make clean
Here's how I wound up solving my own issue:
I found in the QT documentation how to add your own custom compiler to a QT project file. The exact lines needed are:
GLIB_RESOURCE_FILES += \
    resources.gresource.xml
# Add more resource files here.
glib_resources.name = glibresources
glib_resources.input = GLIB_RESOURCE_FILES
glib_resources.output = ${QMAKE_FILE_IN_BASE}.c
glib_resources.depend_command = glib-compile-resources --generate-dependencies ${QMAKE_FILE_IN}
glib_resources.commands = glib-compile-resources --target ${QMAKE_FILE_OUT} --sourcedir ${QMAKE_FILE_IN_PATH} --generate-source ${QMAKE_FILE_IN}
glib_resources.variable_out = SOURCES
glib_resources.clean = ${QMAKE_FILE_OUT}
QMAKE_EXTRA_COMPILERS += glib_resources
(Thanks to @zgyarmati, whose post led me to the right answer.)

Compile a third-party library also using SCons from SCons build script

I'm using SCons to build my project.
A third-party library I've integrated also uses SCons, but it can be updated from Git at any time and I've got no control over the contents of its SConstruct file.
When compiled on its own, the library's SConstruct file accepts the parameters bits=32/64 and target=debug/release.
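That is, when built standalone, it expects an invocation like:
scons bits=64 target=release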
I tried building it with env.SConscript(), but this doesn't pass the parameters in a form that the target SConstruct file accepts (without using SCons' Import() function):
# Compile Godot-CPP, a wrapper library we depend on
if nuclex._is_debug_build(environment):
    compile_godot_cpp = environment.SConscript(
        'addons/godot-cpp/SConstruct', export='bits=64 target=debug'
    )
else:
    compile_godot_cpp = environment.SConscript(
        'addons/godot-cpp/SConstruct', export='bits=64 target=release'
    )
Can I compile another SConstruct file and pass parameters to it as if SCons had been invoked from the command line on its own?
I'm aware that I could just use env.Command() to start another SCons process, but then SCons couldn't parallelize the build (i.e. scons -j16) like it does in the case of env.SConscript().
There's no good way to do this beyond Command().
You might ask the Godot project whether they could move the bulk of their logic into a SConscript at the top level, which you could then import and pass the needed parameters to.
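For completeness, a Command()-based fallback might look like this (the output path and flags are assumptions about the library's build):
# Child SCons process; scons -j parallelism will not cross this boundary.
godot_cpp = environment.Command(
    'addons/godot-cpp/bin/libgodot-cpp.a',  # assumed library output
    'addons/godot-cpp/SConstruct',
    'scons -C addons/godot-cpp bits=64 target=release'
)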

Scons: how to specify file dependency for 3rd party compile result?

It seems to me that SCons targets are not built in declaration order. My problem is that I need to generate some code first: I'm using protoc to process a my.proto file into .h and .cc files. I need something like this pseudo-code (what should the working code look like?):
import os
env = Environment(ENV=os.environ, LIBPATH='/usr/local/lib')
env.ShellExecute('protoc', '--outdir=. --out-lang=cpp', 'my.proto')  # produces my.cc
myObj = Object('my.cc')  # should wait until 'my.cc' is generated by protoc
Dependency(myObj, 'my.cc')
mainObj = Object('main.cpp')
My question is:
How do I specify this shell execution of protoc in an SConstruct/SConscript?
How do I make sure that the compilation of 'main.cpp' depends on the existence of 'my.cc', in other words, waits until 'my.cc' is generated?
Your observations and assumptions are correct, SCons will not execute the single build commands in the order that you list them in the SConstruct files. It will run them based on the dependencies of the targets and source files in your build, either defined implicitly (header includes in C++, for example) or explicitly (via the Depends() method).
So you have to define and set up your dependencies correctly, such that SCons delivers the output that you want. For the special protoc case in your example, a special Builder exists that will help you get the dependency graph right. It is available in our ToolsIndex, where you can also find support for a variety of other languages and dialects.
These special builders will emit the correct target nodes, e.g. when given a *.proto input file, and SCons is then able to automatically detect the dependency between the protoc input file and your main program if you say something like:
env = Environment(tools=['default', 'protoc'])
env.Protoc([], "test.proto")
env.Program('main', ['main.cpp'] + Glob('*.cc'))
The Glob('*.cc') will detect your *.cc files, coming out of the protoc Tool, and include them as dependencies for your final target main.
You can always write your own Builders and Emitters in SCons, which is the canonical way of making new tools/toolchains known to SCons' dependency analysis. You can find more info about this in the UserGuide, sect. "18 Writing Your Own Builders", and especially in our ToolsForFools guide.
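For flavor, a hand-rolled minimal version of such a tool might look like this (a sketch only; it hard-codes protoc's .pb.cc/.pb.h naming and omits the emitter that the real Tool provides):
import os

env = Environment(ENV=os.environ)

# Minimal protoc builder: my.proto -> my.pb.cc (protoc also writes my.pb.h).
protoc = Builder(
    action='protoc --cpp_out=. $SOURCE',
    suffix='.pb.cc',
    src_suffix='.proto',
)
env.Append(BUILDERS={'Protoc': protoc})

generated = env.Protoc('my.proto')
env.Program('main', ['main.cpp'] + generated)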

Boost's bjam is running tests before the build has finished

I have a fairly simple Boost.Python extension that I am building with bjam. The problem is that the order that things happen in doesn't make sense to me, and I can't see how to fix it.
My project consists of a root directory, with a Jamroot, and a single project subdirectory with a Jamfile, C++ file, header file, and Python script.
In the root I have a Jamroot file that looks like this, largely scraped together from examples and the docs. It is separate from the project's Jamfile because I actually want to share this amongst several projects that will exist in other subdirectories.
import python ;

if ! [ python.configured ]
{
    ECHO "notice: no Python configured in user-config.jam" ;
    ECHO "notice: will use default configuration" ;
    using python ;
}

use-project boost
    : ./boost ;

project
    : requirements <library>/boost/python//boost_python ;

# A little "rule" (function) to clean up the syntax of declaring tests
# of these extension modules.
rule run-test ( test-name : sources + )
{
    import testing ;
    testing.make-test run-pyd : $(sources) : : $(test-name) ;
}

build-project hello_world ;
# build-project [[other projects]]... ;
Then I have a subdirectory containing my 'hello_world' project (name changed to protect the innocent), which contains a Jamfile:
PROJECT_NAME = hello_world ;

import python ;

python-extension $(PROJECT_NAME)_ext :
    $(PROJECT_NAME).cpp
    :
    <define>FOO
    ;

# Put the extension and Boost.Python DLL in the current directory,
# so that running the script by hand works.
install convenient_copy
    : $(PROJECT_NAME)_ext
    : <install-dependencies>on <install-type>SHARED_LIB <install-type>PYTHON_EXTENSION
      <location>.
    ;

# Declare test targets
run-test $(PROJECT_NAME) : $(PROJECT_NAME)_ext test_$(PROJECT_NAME)_ext.py ;
That 'convenient_copy' sure is convenient, but I haven't found much documentation about it, unfortunately.
Anyway, the idea is that while I'm in the "hello_world" project directory, I make code changes and type 'bjam' regularly. This has the effect of building the Python extension and then running the test_hello_world_ext.py file, which does an 'import hello_world_ext' to test that the extension has built correctly, and then a bunch of rather trivial unit-tests. If they all pass, then bjam reports success.
The problem seems to be that sometimes bjam runs the Python test before it has run the 'convenient_copy' rule, which means that it performs the test on the previous version of the extension, and then overwrites it with the new version. This means I'm frequently having to run bjam twice. In fact, the second time bjam knows that something is out-of-date because it actually does something. The third and subsequent time it does nothing until I make further source changes. It's like the classic double-make problem when a dependency isn't correct.
The main problem is that this often fails a good build (because the stale extension was bad) and other times reports a bad build as successful. It actually took me several weeks to notice this behaviour, around the same time I thought I was going insane; perhaps not coincidentally...
It also seems to do this more often on Linux than OS X, but I'm not completely sure. Feels that way though, and I divide my time between both environments fairly equally.
Also, am I the only person who finds bjam's 'jamfile' syntax utterly confusing? There's a lot going on under the hood that I simply don't understand and can't find adequate documentation for. I'd gladly use make or SCons instead, but I wasn't able to get those working either, due to broken examples here and there. What really confuses me is how bjam builds many, many other targets before getting to my files, which would make writing a makefile quite tricky, I think. As I am quite familiar with GNU Make and SCons, is it worth my time abandoning bjam to use one of those instead?
The order of declaring targets in a jamfile doesn't determine the order of building the targets. Use dependencies to control the build order.
It would be done like this here:
Change the run-test rule to accept a requirements argument:
rule run-test ( test-name : sources + : requirements * )
{
    import testing ;
    testing.make-test run-pyd : $(sources) : $(requirements) : $(test-name) ;
}
Modify the $(PROJECT_NAME) target declaration to add a dependency requirement on convenient_copy:
run-test $(PROJECT_NAME) : $(PROJECT_NAME)_ext test_$(PROJECT_NAME)_ext.py : <dependency>convenient_copy ;
Regarding the part on jamfile syntax etc.:
If you do anything with Boost.Build apart from really trivial things, you should definitely read its User Manual. My personal experience is that after reading it from start to end, I'd choose Boost.Build over other build systems any day. YMMV.
