I am using GCC 4.7 to compile my C++ project. The source files are distributed across several directories.
The RHEL server I build on has 16 cores, yet compilation is quite slow. Can you suggest makefile options or alternative approaches that would speed up the build? I have tried -j, but it only compiles some folders and then stops; it does not build the main binary.
I will be grateful for any help.
If your makefile fails when you compile with -j but works fine without it, then you need to fix your makefile so that it works properly with parallel compilation. Otherwise, those other 15 cores are of no use to you.
It's not uncommon for less experienced makefile writers to write something like:
final: step1 step2 step3
to mean "to build final, first build step1, then step2, then step3". This works fine when you are running with the default setup of -j 1 because make happens to build each of the dependencies in left-to-right order. But if you use -j 20 (say) then it will attempt to build them in parallel. It will attempt to start building all 3 steps at once, without first waiting until each successive step is complete.
The correct way to write this is:
final: step3
step3: step2
step2: step1
This tells make exactly what's happening: to build final you first need to build step3, for which you need step2, for which you need step1.
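Once the dependencies are declared correctly, the extra cores can actually be used. As a rough sketch (hypothetical directory, file and target names, not taken from the question): independent object files can be compiled in parallel, while steps that must be ordered stay chained as above.
# hypothetical project layout: sources spread across src/ and lib/
SRCS := $(wildcard src/*.cpp) $(wildcard lib/*.cpp)
OBJS := $(SRCS:.cpp=.o)
CXXFLAGS := -std=c++11 -O2

# every object is independent, so make may compile them concurrently;
# the link step waits until all of them exist
app: $(OBJS)
	$(CXX) -o $@ $(OBJS)

%.o: %.cpp
	$(CXX) $(CXXFLAGS) -c $< -o $@

# invoke with one job per core (recipe lines must start with a tab):
#   make -j 16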
I'm trying to learn several things at once (arguably my first problem...), namely: unit testing with Catch2 and building with CMake.
In the course of my investigations, CTest appeared on the radar as a pre-baked way of managing tests within CMake, and seems to 'support' Catch2.
While things seem to build okay, I can't run my tests as automatically as I'd hope.
Specifically, I have a source tree, which at some point contains the library I'm testing, and I'd like to be able to sit at the top of the tree and execute some sort of 'run my tests' command (and ideally run them as part of the full build, but that's for another day).
So here's my CMakeLists.txt file (L:\scratch\shared\testeroolib\CMakeLists.txt) for the library of interest:
cmake_minimum_required(VERSION 3.5)
project(testeroolib)
add_library(${PROJECT_NAME} STATIC src/testeroolib.cpp)
target_include_directories(${PROJECT_NAME} PUBLIC include)
set(PROJECT_TEST ${PROJECT_NAME}_test)
add_executable(${PROJECT_TEST} test/catch2_main.cpp)
target_link_libraries(${PROJECT_TEST} PRIVATE Catch)
target_link_libraries(${PROJECT_TEST} PRIVATE ${PROJECT_NAME})
enable_testing()
add_test(NAME TesterooLibTest COMMAND ${PROJECT_TEST})
If I do the naive thing and run ctest from the same location I run cmake, I get:
L:\scratch>ctest
*********************************
No test configuration file found!
*********************************
...
or
L:\scratch>ctest .
Test project L:/scratch
No tests were found!!!
From what I've read elsewhere, make test would do the trick with GCC and friends, but I'm using VS.
So here, the advice seems to be that I should use a build target of ALL_TESTS but that doesn't do the trick for me.
L:\scratch>cmake --build BUILD --target ALL_TESTS
...
MSBUILD : error MSB1009: Project file does not exist.
Switch: ALL_TESTS.vcxproj
Of course, I can just run the test:
L:\scratch>BUILD\shared\testeroolib\Debug\testeroolib_test.exe
===============================================================================
All tests passed (1 assertion in 1 test case)
I'm hoping I've made some tiny snafu, but there is every chance I've got completely the wrong end of the stick here!
I believe I've found the (two-part) answer.
The first part is that it's no good running ctest from the top of the source tree; you need to run it from the build folder. With hindsight, that's pretty obvious :(
cmake -S . -B BUILD
cmake --build BUILD
cd BUILD
ctest
The less obvious part I found in this answer: https://stackoverflow.com/a/13551858/11603085.
Namely, the enable_testing() call needs to be in the top-level CMakeLists.txt, not the one further down that actually builds the library.
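For example, a minimal top-level CMakeLists.txt at L:\scratch might look like this (a sketch; the shared/testeroolib subdirectory is inferred from the paths shown in the question):
cmake_minimum_required(VERSION 3.5)
project(scratch)

# enable_testing() has to live here, at the top level,
# so that CTest finds the tests registered in subdirectories
enable_testing()

add_subdirectory(shared/testeroolib)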
I believe the root cause of your problem is the missing Debug/Release configuration info, which ctest needs when the project was configured with a multi-config generator such as Visual Studio. Try:
ctest -C Debug
for example.
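Combining both parts, the full round trip with a Visual Studio (multi-config) generator would look roughly like this:
L:\scratch>cmake -S . -B BUILD
L:\scratch>cmake --build BUILD --config Debug
L:\scratch>cd BUILD
L:\scratch\BUILD>ctest -C Debug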
What is the difference between Skip_build and test_without_building on Fastlane/Scanfile?
I would really like to execute my test cases without building every time. I have several schemes that are run from a shell script in a loop, and I don't want to build on every iteration.
Has anybody tried those parameters before?
The skip_build parameter will omit the build command from the generated xcodebuild command that ultimately gets executed by scan. This means that if there is a built product in DerivedData, it will be used instead of recompiling your app; if there is no built product in DerivedData, the product will be built.
The test_without_building parameter is the equivalent of the --test-without-building flag that you can pass to xcodebuild. This allows you to pass other flags to the command to point to the product under test.
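For reference, the xcodebuild workflow underneath is roughly the following (a sketch; the scheme names, destination and derived-data path are placeholders, not from your setup):
# build each scheme's test bundle once
for SCHEME in SchemeA SchemeB SchemeC; do
  xcodebuild build-for-testing \
    -scheme "$SCHEME" \
    -destination 'platform=iOS Simulator,name=iPhone 8' \
    -derivedDataPath build/DerivedData
done

# then run the tests as often as needed without recompiling
for SCHEME in SchemeA SchemeB SchemeC; do
  xcodebuild test-without-building \
    -scheme "$SCHEME" \
    -destination 'platform=iOS Simulator,name=iPhone 8' \
    -derivedDataPath build/DerivedData
done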
Hope this helps! 🚀
I've got the following piece of script in my CMake file:
CONFIGURE_FILE(
${CMAKE_CURRENT_SOURCE_DIR}/version.hpp.cmake
${CMAKE_CURRENT_SOURCE_DIR}/version.hpp
)
But it's only run after executing cmake, not make. Is it possible to create the version.hpp file after each make?
Here is the content of version.hpp.cmake:
#ifndef _VERSION_HPP_
#define _VERSION_HPP_
#define MAJOR_VERSION "${MAJOR}"
#define MINOR_VERSION "${MINOR}"
#define PATCH_VERSION "${PATCH}"
#define RELEASE_VERSION "${RELEASE}"
#endif //_VERSION_HPP_
The MAJOR, MINOR, PATCH and RELEASE variables have been defined in the CMakeLists.txt file.
P.S. This post is apparently related to my question, but I can't get a grasp of it.
The problem is that configure_file runs at configure time, that is, when you run cmake, not at compile time, which is when you run make. There is no easy way around this.
The problem is that the information written by configure_file is dependent on variables from the CMake build environment. Changes to those variables cannot be detected without running CMake again. If you have that information mirrored somewhere else, you can use a custom command to extract it and perform the code generation for you, as Peter's answer suggested.
The approach suggested in the post from the CMake mailing list that you linked in your answer is based on a two-phase CMake run: The outer CMake project (which is run only once) adds a custom build step for building the inner CMake project (which is then run with every make) where the configure_file is performed. The underlying idea is the same as with Peter's answer, only instead of a Python script you use a CMake script for generating the file.
My personal recommendation: For a simple problem as a version header, I would not bother with such a complicated approach. Simply generate the file to your BINARY_DIR (not to your project dir, as you currently do! you want to retain the ability to do several out-of-source builds from the same source) and assume that it will be there for compilation. If a user is brave enough to mess with the generated files there, they can be expected to re-run CMake on their own.
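A minimal sketch of that simpler setup, assuming MAJOR, MINOR, PATCH and RELEASE are already set as described in the question:
# generate the header into the build tree, not the source tree
configure_file(
    ${CMAKE_CURRENT_SOURCE_DIR}/version.hpp.cmake
    ${CMAKE_CURRENT_BINARY_DIR}/version.hpp
)

# let the compiler find the generated header
include_directories(${CMAKE_CURRENT_BINARY_DIR})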
So I accidentally stumbled across this. I know it is probably too late, but calling configure_file at build time is possible, and it is exactly how I embed Mercurial version information.
The trick requires a few different tools, and I don't have time to shape this into a polished answer at the moment, but ask questions and I'll fill it in when I have time.
Tool 1: call exec_program to extract the revision information (this is really easy with Mercurial):
exec_program(hg ${PROJECT_SOURCE_DIR} ARGS "id" "-i" OUTPUT_VARIABLE ${PROJECT_NAME}_HG_HASH_CODE)
I'm probably doing something more complicated than you care about here, but the essential bits are hg, which you'll replace with whatever version control system you are using, ${PROJECT_SOURCE_DIR}, which you'll set to whatever working directory you want, and the custom args, which you fill in as needed.
I put all of the version extraction into a single macro (ReadProjectRevisionStatus()).
The next step is to make an entirely different CMake file that calls ReadProjectRevisionStatus() and then CONFIGURE_FILE. This file will assume that all the correct values are set when you come into it. In my case, I store the location of this file in ${CONFIG_FILE_LOC}.
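For illustration, the inner script stored in ${CONFIG_FILE_LOC} might look roughly like this; it assumes ReadProjectRevisionStatus() lives in a module found via the CMAKE_MODULE_PATH passed in below, and that the _HG_CONFIG_FILE_IN/_OUT variables arrive via -D definitions:
# sketch of ConfigureHGVersion.cmake, run with: cmake -D... -P ConfigureHGVersion.cmake
include(ReadProjectRevisionStatus)   # assumed module providing the macro

ReadProjectRevisionStatus()          # sets ${PROJECT_NAME}_HG_HASH_CODE etc.

configure_file(
    ${${PROJECT_NAME}_HG_CONFIG_FILE_IN}
    ${${PROJECT_NAME}_HG_CONFIG_FILE_OUT}
)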
The final step is to add a custom target that will call this script. For example:
ADD_CUSTOM_TARGET(${PROJECT_NAME}_HG_VERSION_CONFIG
COMMAND ${CMAKE_COMMAND}
ARGS -DPROJECT_SOURCE_DIR=${PROJECT_SOURCE_DIR}
-DPROJECT_BINARY_DIR=${PROJECT_BINARY_DIR}
-DPROJECT_NAME=${PROJECT_NAME}
-DCMAKE_MODULE_PATH=${CMAKE_MODULE_PATH}
"-D${PROJECT_NAME}_HG_CONFIG_FILE_IN=\"${${PROJECT_NAME}_HG_CONFIG_FILE_IN}\""
"-D${PROJECT_NAME}_HG_CONFIG_FILE_OUT=\"${${PROJECT_NAME}_HG_CONFIG_FILE_OUT}\""
${ARGN}
-P ${CONFIG_FILE_LOC})
One of the beauties of doing it this way is that the custom-target command can still be invoked outside of a CMake build, which I've done on a couple of projects, with a bash call similar to:
cmake -D PROJECT_SOURCE_DIR=$sourcedir -DPROJECT_BINARY_DIR=$sourcedir -DPROJECT_NAME=uControl -DCMAKE_MODULE_PATH=$sourcedir -DuControl_HG_CONFIG_FILE_IN=$sourcedir/tsi_software_version.h.in -DuControl_HG_CONFIG_FILE_OUT=$sourcedir/tsi_software_version.h -P $sourcedir/ConfigureHGVersion.cmake
One possibility is to generate version.hpp from Python and use ADD_CUSTOM_TARGET:
... find python ...
ADD_CUSTOM_TARGET(gen_version ALL ${PYTHON_EXECUTABLE} gen_version.py)
SET_SOURCE_FILES_PROPERTIES(version.hpp PROPERTIES GENERATED 1)
... link gen_version to your library/executable ...
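Filling in the elided pieces, a hedged version of this approach could look like the following; gen_version.py and myapp are hypothetical names, and the script is assumed to write version.hpp to the path given as its argument:
FIND_PACKAGE(PythonInterp REQUIRED)          # provides PYTHON_EXECUTABLE

# regenerate version.hpp on every build
ADD_CUSTOM_TARGET(gen_version ALL
    ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/gen_version.py
                         ${CMAKE_CURRENT_BINARY_DIR}/version.hpp)

SET_SOURCE_FILES_PROPERTIES(${CMAKE_CURRENT_BINARY_DIR}/version.hpp
    PROPERTIES GENERATED 1)

# "link" the generated header to the consuming target
ADD_EXECUTABLE(myapp main.cpp)
ADD_DEPENDENCIES(myapp gen_version)
INCLUDE_DIRECTORIES(${CMAKE_CURRENT_BINARY_DIR})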
I need some help debugging a Makefile system. I have a rather huge Makefile dependency tree, actually the Android source makefile system.
At some point the build fails because a file is missing:
/bin/bash: out/host/linux-x86/bin/mkfs.ubifs: No such file or directory
The file mkfs.ubifs is supposed to be built during the make process, and indeed it works if I do:
make out/host/linux-x86/bin/mkfs.ubifs
Then mkfs.ubifs is built and everything works, until I clean everything again and build from the beginning.
This indicates to me, that there is a missing dependency somewhere. So my question is, how do I go about debugging this? How do I discover exactly which target is missing a dependency? What options can I provide for make which will give me clues as to where the error is?
Any other suggestions will also be appreciated. Thanks. :)
Update
Using make -d produces quite a lot of output. How exactly do I determine from which make target (source file and line) the error occurred?
Problem solved. It seems make -p was the most useful way to debug this problem:
-p, --print-data-base
Print the data base (rules and variable values) that results from
reading the makefiles; then execute as usual or as otherwise
specified. This also prints the version information given by the -v
switch (see below). To print the data base without trying to
remake any files, use make -p -f/dev/null.
From that output it is relatively easy to determine which target was failing, and which dependency needed to be added.
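For example, two ways of digging through that output (the grep patterns are only illustrations for this particular missing file):
# dump the rule database without building and find which rule provides the file
make -pn 2>/dev/null | grep -A 10 'mkfs\.ubifs:'

# trace dependency decisions and look at what was being built when it failed
make -d 2>&1 | grep -B 20 'mkfs.ubifs: No such file'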
There is a discrepancy between a target's prerequisites and its commands; that is, a dependency is not specified for some target. I don't think you can debug that with make itself, because make cannot tell you that a dependency is missing.
However, you can try invoking make with -d switch. That is going to tell you which target it tries to build when it hits the missing file. The next step would be to find the rule for that target in the makefile and add the missing dependency.
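The eventual fix usually looks something like this sketch (hypothetical rule and variable names, not the real Android rules): the target whose recipe runs the tool simply gains it as a prerequisite.
# the recipe runs mkfs.ubifs, so the tool must be listed as a prerequisite;
# otherwise a clean parallel build can reach this recipe before the tool exists
system.img: $(STAGING_DIR) out/host/linux-x86/bin/mkfs.ubifs
	out/host/linux-x86/bin/mkfs.ubifs -r $(STAGING_DIR) -o $@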
I have a "run script" step that dynamically creates resources/files that I copy into the build dirs. Every run of this script produces different content so I want it to run on every build. The script gets run correctly on a clean build however once a build is made the step is not run again since no source has been modified.
I tried setting the input of the step to /dev/random, but it does not seem to register as a change and does not re-run the step.
Is there a way I can set this up so that this step is run every time Build is pressed, as opposed to only when the source is modified or after a clean?
You should put the Run Script build phase in a separate Aggregate Target, and make your main target dependent on the Aggregate Target. The Aggregate should be built every time.