I have a project that uses SCons to generate platform-dependent source files, which are compiled together with other shipped source files into static libraries and linked into the final executable. That's it: no project files are generated for my IDE (Xcode).
I managed to add SCons as an external build system in a new Xcode project to build and debug the executable.
What I want now is to customize the source code and add a few libraries, removing SCons altogether as the external build system. SCons is not practical in my case: it is too slow, and I don't want to mess with the scripts.
So the question is: is there a feature in SCons to skip the build process and just generate the platform-dependent source files?
Edit:
I would like to make some customizations to the project without messing with SCons, at least until I need to submit pull requests. That was my workflow with a previous project that used CMake to generate the Xcode project; with SCons I would have to modify the scripts.
Yes, you can explicitly specify the targets that you want built on the command line:
scons lib1/source1.cpp sourceb.cpp
would be an example.
Since you mentioned that SCons would be "too slow" for you, how exactly did you measure that? (See http://scons.org/wiki/WhySconsIsNotSlow and http://scons.org/wiki/GoFastButton.)
By overriding Export() in the SConstruct as in the code below, and adding a skip_build parameter to the script (read, e.g., from SCons' ARGUMENTS) that sets the value of __SkipBuild, I was able to skip the build process altogether (i.e. compiling and linking) and generate only the platform-dependent sources.
SConstruct
from SCons.Script.SConscript import call_stack  # SCons internal used by Export() to reach the caller's globals

__Export = Export                                             # keep a reference to the original Export()
__CommandsList = ['CC', 'CXX', 'AR', 'RANLIB', 'AS', 'LINK']  # the commands to skip in the build process
__SkipBuild = False                                           # set to True via the skip_build command-line parameter

def Export(*vars, **kw):
    for var in vars:
        env = call_stack[-1].globals[var]
        if call_stack[-1].globals['__SkipBuild']:
            for command in __CommandsList:
                if command in env:
                    # Prefix the tool with 'echo' so compile/link commands are printed, not run
                    env[command] = 'echo ' + env[command]
    call_stack[-1].globals.update(kw)
    __Export(*vars, **kw)
Related
I'm working on a CMake based project that has a dependency on a gigantic third party codebase that also uses CMake. I'm including the third party project via the ExternalProject_Add mechanism. That project defaults to using Makefiles, although the parent is an Xcode project.
The resulting build of the external project is painfully slow because it's only using a single core. I don't think that I can force the project to generate Xcode projects instead of Makefiles.
Assuming that I'm stuck with Makefiles, how can I inform ExternalProject_Add to use all the cores available for the titanic third party project?
Note that the addition of the inevitable '-j N' option (why doesn't 'make' do this by default?) needs to be made conditionally for the Mac and Linux builds, but not for Windows/Visual Studio.
You've basically already answered the question yourself: use another build generator. Ninja does parallel builds by default, and CMake supports it just fine.
include(ExternalProject)
ExternalProject_Add(foobar
    [...]
    ## configure options
    # cmake is used by default
    #CONFIGURE_COMMAND cmake
    # cmake will use the same generator as the main project, unless we override it
    CMAKE_GENERATOR Ninja
    ## build options
    BUILD_COMMAND ninja
    [...]
)
If you don't want to use a different generator, use make's own options: set MAKEFLAGS in your shell (e.g. export MAKEFLAGS=-j8), or do something like this in your CMakeLists.txt:
include(ProcessorCount)
ProcessorCount(N)
if(CMAKE_SYSTEM_NAME MATCHES "Linux|Darwin")
    include(ExternalProject)
    ExternalProject_Add(foobar
        BUILD_COMMAND make -j${N}
    )
else()
    ## do windows stuff
endif()
Also, remember many of the ExternalProject_Add() options (including the *_COMMAND options) override sensible defaults. So, when defining your external project, start small, and add options as needed.
tl;dr
Be sparse when defining your external project. Build up as needed.
Use another build generator that understands parallelism better than 'make'.
If you want to use a different CMake build generator for your external project than for your main project, you must specify it.
I have a system that produces generated code from a spec document which can change at any time. As such, the list of files being generated cannot be static, and must be able to be handled dynamically at build time.
From what I can tell the typical CMakeLists.txt is set up to define a rule for each file at cmake generation time.
Is there a way to get CMake to write generic rules so that the targets can be set at build time?
If not, what are the possible work-arounds?
First, you can put your code generation process into CMakeLists.txt in the form of a custom command (add_custom_command) and then make your target depend on it. This way the code generation step will be run every time you issue make.
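A minimal sketch of that first approach, assuming a hypothetical generator script gen.py that turns spec.txt into generated.cpp (all three names are illustrative):
# Hypothetical generator step: re-run gen.py whenever the spec (or the script) changes
add_custom_command(
    OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/generated.cpp
    COMMAND python ${CMAKE_CURRENT_SOURCE_DIR}/gen.py
            ${CMAKE_CURRENT_SOURCE_DIR}/spec.txt
            ${CMAKE_CURRENT_BINARY_DIR}/generated.cpp
    DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/spec.txt ${CMAKE_CURRENT_SOURCE_DIR}/gen.py
    COMMENT "Generating sources from spec"
)
# The target lists the generated file, so the command above runs before compilation
add_executable(myapp main.cpp ${CMAKE_CURRENT_BINARY_DIR}/generated.cpp)
Note that the OUTPUT list is still fixed at configure time; if the set of generated files itself changes, CMake has to re-run, which is what the hack below addresses.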
Alternatively, here is a hack:
add_custom_target(cmake_regen ALL
COMMAND ${CMAKE_COMMAND} -E remove ${CMAKE_BINARY_DIR}/CMakeFiles/Makefile.cmake)
And a less hackish variant, which should preserve already built targets:
add_custom_target(cmake_regen ALL
COMMAND ${CMAKE_COMMAND} --build ${CMAKE_BINARY_DIR} --target rebuild_cache)
Adding such code to CMakeLists.txt forces CMake to regenerate the makefiles on each run without running the full configuration process.
I'm using CMake on Windows to build a test suite based on Boost.Test. As I'm linking to Boost.Test dynamically, my executable needs to be able to find the DLL (which is under ../../../boost/boost_1_47/lib or something like that, relative to the executable).
So I need to either copy the DLL into the folder where the executable is, or make it findable in some other way. What's the best way to achieve this with CMake?
-- Additional info --
My CMakeLists.txt has this Boost related configuration at the moment:
set(Boost_ADDITIONAL_VERSIONS "1.47" "1.47.0")
set(BOOST_ROOT "../boost")
find_package(Boost 1.47 COMPONENTS unit_test_framework REQUIRED)
include_directories(${Boost_INCLUDE_DIR})
link_directories(${Boost_LIBRARY_DIR})
add_executable(test-suite test-suite.cpp)
target_link_libraries(test-suite ${Boost_LIBRARIES})
Assuming you are running your tests by building the RUN_TESTS target in Visual Studio:
1. I always add .../boost/boost_1_47/lib to my PATH environment variable, so the Boost unit_test_framework DLLs can be found at run time. That's what I recommend.
2. If for some reason changing your PATH is not possible, you could copy the files with CMake (untested):
get_filename_component(LIBNAME "${Boost_UNIT_TEST_FRAMEWORK_LIBRARY_RELEASE}" NAME)
add_custom_command(TARGET test-suite POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy "${Boost_UNIT_TEST_FRAMEWORK_LIBRARY_RELEASE}" "${CMAKE_CURRENT_BINARY_DIR}/${LIBNAME}"
)
3. If you are NOT only running the tests at build time (as I was assuming above), then you need a series of INSTALL commands, like Hans Passant suggested. In your snippet you don't have an INSTALL command for your executable; so even your executable won't end up "in the executable folder". First add a cmake INSTALL command to put your executable someplace in response to the cmake INSTALL target. Once you have that working, we can work on figuring out how to add another INSTALL command to put the boost unit_test_framework library into the same location. After that, if you want to make an installer using CPACK, the library will automatically be installed with the executable.
I ended up using the install command to copy the Boost DLL over to the executable's folder:
get_filename_component(UTF_BASE_NAME ${Boost_UNIT_TEST_FRAMEWORK_LIBRARY_RELEASE} NAME_WE)
get_filename_component(UTF_PATH ${Boost_UNIT_TEST_FRAMEWORK_LIBRARY_RELEASE} PATH)
install(FILES ${UTF_PATH}/${UTF_BASE_NAME}.dll
DESTINATION ../bin
CONFIGURATIONS Release RelWithDebInfo
)
get_filename_component(UTF_BASE_NAME_DEBUG ${Boost_UNIT_TEST_FRAMEWORK_LIBRARY_DEBUG} NAME_WE)
install(FILES ${UTF_PATH}/${UTF_BASE_NAME_DEBUG}.dll
DESTINATION ../bin
CONFIGURATIONS Debug
)
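If the test executable is not already installed somewhere else in the project, a matching rule could put it next to the DLLs (a sketch using the same DESTINATION as above):
# Sketch: install the test executable into the same ../bin destination as the DLLs
install(TARGETS test-suite
    RUNTIME DESTINATION ../bin
)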
I have a very similar problem, but the solution presented here is not really satisfactory.
Like the original poster, I want to run unit tests based on boost::test.
I have multiple test projects, one for each major component of our product.
Having to run the install target prior to every test means recompiling the whole thing just to run the tests belonging to a core component. That's what I want to avoid.
If I change something in a core component, I want to compile that core component and the associated tests. And then run the tests. When the tests succeed, only then do I want to compile and eventually install the rest of it.
For running the tests in the debugger, I found some very useful CMake scripts at:
https://github.com/rpavlik/cmake-modules
With this, I can specify all the directories of the required dlls, and the PATH environment variable is set for the new process:
# for debugging
INCLUDE(CreateLaunchers)
create_target_launcher(PLCoreTests
ARGS "--run-test=Core1"
RUNTIME_LIBRARY_DIRS ${PL_RUNTIME_DIRS_DEBUG} ${PROJECT_BINARY_DIR}/bin/Debug
WORKING_DIRECTORY ${PL_MAIN_DIR}/App/PL/bin
)
Where ${PL_RUNTIME_DIRS_DEBUG} contains the directories where the DLLs from Boost and all the other libraries can be found.
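For reference, that variable is just a CMake list of directories and might be populated with something like this (paths purely illustrative):
# Illustrative only: list every directory holding runtime DLLs that the tests need
set(PL_RUNTIME_DIRS_DEBUG
    "${BOOST_ROOT}/lib"
    "${PROJECT_SOURCE_DIR}/3rdparty/somelib/bin"
)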
Now I'm looking for how I can achieve something similar with ADD_CUSTOM_COMMAND()
Update:
ADD_CUSTOM_COMMAND() can have multiple commands that cmake writes into a batch file. So, you can first set the path with all the runtime directories, and then execute the test executable. To be able to easily execute the tests manually, I let cmake create an additional batch file in the build directory:
MACRO(RunUnitTest TestTargetName)
    IF(RUN_UNIT_TESTS)
        SET(TEMP_RUNTIME_DIR ${PROJECT_BINARY_DIR}/bin/Debug)
        FOREACH(TmpRuntimeDir ${PL_RUNTIME_DIRS_DEBUG})
            SET(TEMP_RUNTIME_DIR ${TEMP_RUNTIME_DIR} ${TmpRuntimeDir})
        ENDFOREACH(TmpRuntimeDir)
        ADD_CUSTOM_COMMAND(TARGET ${TestTargetName} POST_BUILD
            COMMAND echo "PATH=${TEMP_RUNTIME_DIR};%PATH%" > ${TestTargetName}_script.bat
            COMMAND echo ${TestTargetName}.exe --result_code=no --report_level=no >> ${TestTargetName}_script.bat
            COMMAND ${TestTargetName}_script.bat
            WORKING_DIRECTORY ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/Debug
        )
    ENDIF(RUN_UNIT_TESTS)
ENDMACRO()
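Hooked up to a test target, the macro could then be used like this (the target and source file names are illustrative):
# Illustrative: build the test executable, then let the macro run it after every build
add_executable(PLCoreTests Core1Tests.cpp)
target_link_libraries(PLCoreTests ${Boost_LIBRARIES})
RunUnitTest(PLCoreTests)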
With this, the unit tests catch the errors as soon as possible, without having to compile the whole lot first.
I develop C/C++ using the Eclipse IDE. Eclipse also generates a makefile which I don't want to edit as it will simply be overwritten.
I want to use that makefile for nightly builds within Hudson.
How do I pass #defines that are set in the IDE's project file to the makefile? (And why doesn't Eclipse already include them in the generated makefile?)
I actually had this figured out once, then accidentally overwrote it :-( But at least I know that it can be done...
If you are running make from the command line, use
make CPPFLAGS=-DFOO
which will add -DFOO to all compilations. See also CFLAGS, CXXFLAGS, LDFLAGS in the make manual.
You could write a small program to include the headers and write a makefile fragment which you include in the main makefile (requires GNU make).
This is a fairly ugly solution that requires a fair amount of hand hackery. More elegant would be to parse the project file and write the makefile fragment.
For GCC, use -D to define a macro on the command line (e.g. -DFOO or -DFOO=1).
OP commented below that he wants to pass the define into make and have it pass it on to GCC.
Make does not allow this. Typically you just add another make rule to add the defines, for instance 'make release' vs. 'make debug'. As the makefile creator, you write the two rules and put the defines right in the makefile. Now, if Eclipse is not putting the defines into the makefile for you, I would say Eclipse is broken.
If you're using autotools, another option is to have two directories, 'bin/debug' and 'bin/release'.
#!/bin/sh
# Simple bootstrap script.
# Remove previously generated files and call autoreconf.
autoreconf --force --install
# At the end, configure 2 separate builds.
mkdir -p bin/debug bin/release
echo "Setting up Debug configuration: bin/debug"
cd bin/debug/
../../configure CXXFLAGS="-g3 -O0 -DDEBUG=1"
cd ../..
echo "Setting up Release configuration: bin/release"
cd bin/release/
../../configure CXXFLAGS="-O2"
Setup Eclipse. Open the project's properties (Project->Properties->C/C++ Build->Builder Settings) and set the Build Location->Build Directory to
${workspace_loc:/helloworld/bin/debug}
Replacing 'helloworld' with your project's directory relative to the workspace (or you can supply an absolute path ${/abs/path/debug}). Do the same thing with the Release config, replacing "/debug" with "release" at the end of the path.
This method seems like a waste of disk space, but a valid alternative to achieve completely separate builds.
Simple question. Are there any tools for generating Xcode projects from the command line? We use SCons to build our cross-platform application, but that doesn't support intrinsic Xcode project generation. We'd like to avoid creating the project manually, since this would involve maintaining multiple file lists.
Look at CMake. You can generate Xcode projects from it automatically. I found a previous Stack Overflow question about its usage here. To get it to generate an Xcode project, you use it like this:
cmake -G Xcode
You can use premake (http://industriousone.com/premake) to generate Xcode projects. It can also generate Visual Studio projects.
For the benefit of anyone who lands on this question, I’ve actually just pushed an Xcode project file generator for SCons up to Bitbucket.
I think that your question should really be "Is there a way to generate an Xcode project from a SCons one?". Judging from your question and the other answers, I suppose the answer is 'no'.
The SCons people would know better. I think they will be happy if you contribute a SCons Xcode project generator.
In the meantime you may choose to switch to CMake, or to create your Xcode project by hand, which, given a good source tree organization, may be the best pragmatic solution.
qmake in the Qt toolchain generates Xcode projects. You can at least download it and take a look at its source here (LGPL).
You can generate an Xcode project using the Python-based build system called waf. You need to download and install waf with the xcode6 extension:
$ curl -o waf-1.9.7.tar.bz2 https://waf.io/waf-1.9.7.tar.bz2
$ tar xjvf waf-1.9.7.tar.bz2
$ cd waf-1.9.7
$ ./waf-light --tools=xcode6
That will create a waf executable which can build your project. You configure how to generate your Xcode project inside a file called wscript that should reside in your project folder. The wscript file uses Python syntax. Here's an example of how you could configure your project:
def configure(conf):
    # Use environment variables to set default project configuration
    # settings
    conf.env.FRAMEWORK_VERSION = '1.0'
    conf.env.ARCHS = 'x86_64'
    # This must be called at the end of configure()
    conf.load('xcode6')

# This will build an Xcode project with one target of type 'framework'
def build(bld):
    bld.load('xcode6')
    bld.framework(
        includes='include',
        # Specify source files.
        # These will become the groups (folders) inside Xcode.
        # Pass a dictionary to group by name. Use a list to add everything in one group
        source_files={
            'MyLibSource': bld.path.ant_glob('src/MyLib/*.cpp|*.m|*.mm'),
            'Include': bld.path.ant_glob(incl=['include/MyLib/*.h', 'include'], dir=True)
        },
        # export_headers will put the files in the
        # 'Headers Build Phase' in Xcode - i.e. tell Xcode to ship them with your .framework
        export_headers=bld.path.ant_glob(incl=['include/MyLib/*.h', 'include/MyLib/SupportLib'], dir=True),
        target='MyLib',
        install='~/Library/Frameworks'
    )
There are a bunch of settings you can use to configure it for your project.
Then, to actually generate the Xcode project, cd into the project folder where the wscript is and run your waf executable like this:
$ ./waf configure xcode6
A promising alternative to CMake which can generate Xcode projects is xmake. I haven’t tried it yet, but it looks good from the documentation.
Install xmake, create a simple project file (xmake.lua):
target("test")
add_files("src/*.cpp")
Then you can either do a command-line build:
xmake
or create an Xcode project:
xmake project -k xcode
Note that currently xmake seems to invoke CMake to generate the Xcode project, although they say they plan to add native Xcode project generation at some point.
You could use Automator to generate them for you.
I checked and there is no prebuilt action.
Therefore you would have to record your actions with Automator to do this.