I add gtest to my project as an external project, and for a clean install I download and recompile it, as shown in the code below. It works fine, but at every single development step, e.g. when I add a test case, it checks the repository, delaying the build, and when I am off-net even the make step fails.
How can I tell CMake that this download, check, etc. is ONLY needed when I build from scratch? (I.e. when gtest is already available, no action is needed.)
# Add gtest
ExternalProject_Add(
    googletest
    SVN_REPOSITORY http://googletest.googlecode.com/svn/trunk/
    SVN_REVISION -r 660
    TIMEOUT 10
    PATCH_COMMAND svn patch ${CMAKE_SOURCE_DIR}/gtest.patch ${CMAKE_BINARY_DIR}/ThirdParty/src/googletest
    # Force separate output paths for debug and release builds to allow easy
    # identification of the correct lib in subsequent TARGET_LINK_LIBRARIES commands
    CMAKE_ARGS -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
               -DCMAKE_ARCHIVE_OUTPUT_DIRECTORY_DEBUG:PATH=DebugLibs
               -DCMAKE_ARCHIVE_OUTPUT_DIRECTORY_RELEASE:PATH=ReleaseLibs
               -Dgtest_force_shared_crt=ON
    # Disable install step
    INSTALL_COMMAND ""
    # Wrap download, configure and build steps in a script to log output
    LOG_DOWNLOAD ON
    LOG_CONFIGURE ON
    LOG_BUILD ON)
The ExternalProject_Add function has the UPDATE_COMMAND option. Setting this option to an empty string "", just like you do for INSTALL_COMMAND, disables the update step.
According to the documentation for CMake 3.4, there is also an UPDATE_DISCONNECTED option to achieve the same result. I haven't tried it myself, so I'm not sure it works the way I understand it.
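For example, a minimal sketch of the fix, reusing the snippet from the question (UPDATE_COMMAND and UPDATE_DISCONNECTED are both real ExternalProject_Add arguments; use one or the other):

ExternalProject_Add(
    googletest
    SVN_REPOSITORY http://googletest.googlecode.com/svn/trunk/
    SVN_REVISION -r 660
    TIMEOUT 10
    # Skip the per-build repository check; the download step still runs
    # on a fresh build tree where no source has been fetched yet.
    UPDATE_COMMAND ""
    # Alternative on CMake >= 3.4 (see above):
    # UPDATE_DISCONNECTED 1
    INSTALL_COMMAND ""
    LOG_DOWNLOAD ON
    LOG_CONFIGURE ON
    LOG_BUILD ON)

With the update step disabled, re-running make no longer touches the network; only a build from scratch (or deleting the stamp files) triggers a fresh checkout.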
Related
I have a Conan recipe for a package, named Package, that requires boost as a shared library:
def requirements(self):
    self.requires("boost/1.79.0#")
    self.options["boost"].shared = True
    self.options["boost"].bzip2 = False
    self.options["boost"].without_stacktrace = True
The generators used are CMakeDeps and CMakeToolchain.
def generate(self):
    tc = CMakeToolchain(self)
    tc.variables['BUILD_SHARED_LIBS'] = "ON" if self.options.shared else "OFF"
    tc.variables['CMAKE_FIND_ROOT_PATH_MODE_PACKAGE'] = 'NEVER'
    tc.variables['CMAKE_POSITION_INDEPENDENT_CODE'] = 'ON'
    tc.generate()
The unit tests of this conan package use a CMakeLists.txt which defines a PackageTests target that links against Package and boost::boost.
Building Package and PackageTests works fine on both Ubuntu and Windows, but only on Ubuntu do the tests run without issues. On Windows I get exceptions for all the tests because the boost DLLs are not found. Using ldd PackageTests and readelf -d PackageTests on Ubuntu shows that the boost .so files are used from the conan cache.
Using Conan's VirtualRunEnv generator and then activating the generated environment makes PackageTests.exe run on Windows as well, but I would like to know whether there is another way, for example pure CMake, to install/copy the required boost DLLs into the folder of PackageTests.exe. Or is there a way to extend the Conan recipe to install the DLLs on Windows?
Why are the boost shared libraries found correctly in the conan cache on Ubuntu but not on Windows? Is some extra manual work needed, or shouldn't this be handled by Conan as well?
Edit:
Trying to use the following to copy the DLLs results in a CMake command usage error, because the TARGET_RUNTIME_DLLS generator expression expands to nothing.
add_custom_command(TARGET PackageTest POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E copy_if_different $<TARGET_RUNTIME_DLLS:PackageTest> $<TARGET_FILE_DIR:PackageTest>
    COMMAND_EXPAND_LISTS
)
Also, the IMPORTED_LOCATION property of the following targets is *-NOTFOUND:
get_target_property(VAR-boost boost::boost IMPORTED_LOCATION)
message(${VAR-boost})
> VAR-boost-NOTFOUND
get_target_property(VAR-Package Package IMPORTED_LOCATION)
message(${VAR-Package})
> VAR-Package-NOTFOUND
get_target_property(VAR-PackageTest PackageTest IMPORTED_LOCATION)
message(${VAR-PackageTest})
> VAR-PackageTest-NOTFOUND
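One way to narrow this down is to query the type of the generated target. This is a diagnostic sketch resting on an assumption on my part: per the CMake documentation, $<TARGET_RUNTIME_DLLS:...> is computed from the locations of imported SHARED library targets among the dependencies, so it expands to nothing when the generated targets are INTERFACE imported targets, which would also be consistent with the NOTFOUND results above.

get_target_property(BOOST_TARGET_TYPE boost::boost TYPE)
message(STATUS "boost::boost is a ${BOOST_TARGET_TYPE} target")

If this prints INTERFACE_LIBRARY, there is no IMPORTED_LOCATION to read and nothing for TARGET_RUNTIME_DLLS to collect, on any platform.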
From the boost Conan recipe's package_info() I can see that the CMakeDeps generator will only create the BoostConfig.cmake and BoostTargets.cmake scripts, which is also the case for Package. No FindBoost.cmake is generated. By default CMakeDeps only creates config scripts, unless a recipe defines cmake_find_mode to be both. I'm not sure, though, whether adding both to the recipe would help. Even if it would, it is no immediate solution, as the recipe is not directly under my control (it is hosted in the conan-center-index repo). I am still unable to see why everything works fine on Ubuntu while on Windows the DLLs are not found/copied at all by Conan.
A few months ago I installed an application from source by executing
./configure --whatever
make
sudo make install
I reinstalled my OS recently and now I would like to install this application again, but I don't remember which compile flags I used back then. However, since I kept all the files from my home directory, I still have the source code and my original build. Is there a way to recover the ./configure parameters I used originally?
Just re-run make? After all, I think you are mainly after re-running the installation phase rather than the configure step.
However, to specifically answer your question: autotools records your configure invocation in the config.status file.
You can call it to re-run your configure phase "under the same conditions":
./config.status --recheck
(This is mainly needed internally by autotools: if you hack your Makefile.am or your configure.ac, automake will rebuild the build system so that the changes take effect; in order to do this correctly, it must invoke configure in the same way as it was originally called by the user.)
First, a little bit of background as to why I'm asking this question: Our product's daily build script (as run under Debian Linux by Jenkins) does roughly this:
Creates and enters a build environment using debootstrap and chroot
Checks out our codebase (including some 3rd party libraries) from SVN
Runs configure and make as necessary to build all of the code
Packages up the results into an install file that can be uploaded to our headless server boxes using our install tool.
This mostly works fine, but every so often (maybe one daily build out of 10), the part of the script that builds one of our third-party libraries will error out with an error like this one:
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash
/root/software/3rdparty/libogg/missing autoconf
/root/software/3rdparty/libogg/missing: line 81: autoconf: command not found
WARNING: 'autoconf' is missing on your system.
You should only need it if you modified 'configure.ac',
or m4 files included by it.
The 'autoconf' program is part of the GNU Autoconf package:
<http://www.gnu.org/software/autoconf/>
It also requires GNU m4 and Perl in order to run:
<http://www.gnu.org/software/m4/>
<http://www.perl.org/>
make: *** [configure] Error 127
As far as I can tell, this happens occasionally because the timestamps of the files in the third-party library are different (e.g. off by a second or two from each other, just due to the timing of when they were checked out from the SVN server during that particular build). That causes make to think that the configure script needs to be regenerated, so it tries to call autoconf to do so, and errors out because autoconf is not installed.
Of course the obvious thing to do here would be to install autoconf (and the rest of the autotools) in the build environment, but the build environment is not one that I can easily modify (for institutional reasons), so I'd like to avoid that if possible. What I'd like to do instead is figure out how to get the configure scripts (which I can modify) to ignore the timestamps and always do the basic build they do when the timestamps are equal.
I tried to finesse the problem by manually running 'touch' on some files to force their timestamps to be the same, and that seemed to make the problem occur less often, but it still happens:
./configure --prefix="$PREFIX" --disable-shared --enable-static && \
touch config* aclocal* Makefile* && \
make clean && make install ) || Failure "libogg"
Can anyone familiar with how automake works supply some advice on how I might make the "configure" calls in our daily build work more reliably, without modifying the build environment?
You could try forcing SVN to use commit times on checkout on your Jenkins server (the use-commit-times = yes setting in the [miscellany] section of the Subversion client's config file). These commit times can also be adjusted in SVN if they don't work out for some reason. You could also use touch -d or touch -r instead of a plain touch, so all the stamped files get one deterministic timestamp rather than whatever time each touch happens to run at, avoiding race conditions there.
After upgrading from Go 1.2.1 to 1.3 (Windows 7 64 bit) "go build" execution time has increased from around 4 to over 45 seconds. There were no other changes except the go version update. Switching off the virus scanner seems to have no effect. Any clues?
You probably have dependencies that are being recompiled each time. Try go install -a mypackage to rebuild all dependencies.
Removing $GOPATH/pkg also helps to ensure you don't have old object files around.
Building with the -x flag will show you if the toolchain is finding incompatible versions.
I have the exact same problem; running this command solves it:
go get -u -v github.com/mattn/go-sqlite3
Another tip: http://kokizzu.blogspot.co.id/2016/06/solution-for-golang-slow-compile.html
Using go1.6, simply run go build -i.
It will compile all the dependencies and store them in $GOPATH/pkg/*/* as .a files.
Later, when you run go run main.go, everything is much faster.
What's really great is that if you use vendored dependencies (i.e. a vendor folder in your project), deps are built appropriately within $GOPATH/pkg/**/yourproject/vendor/**.
So you don't have to go get/install/whatever and end up with a mix of vendored and global dependencies.
I suspect you have to rebuild the .a files after a deps update (glide update or something like that), but I have not tested that yet.
Since Go 1.10, you just need to type go build; you no longer need go build -i.
From the draft Go 1.10 document:
Build & Install
The go build command now detects out-of-date packages purely based on the content of source files, specified build flags, and metadata stored in the compiled packages. Modification times are no longer consulted or relevant. The old advice to add -a to force a rebuild in cases where the modification times were misleading for one reason or another (for example, changes in build flags) is no longer necessary: builds now always detect when packages must be rebuilt. (If you observe otherwise, please file a bug.)
...
The go build command now maintains a cache of recently built packages, separate from the installed packages in $GOROOT/pkg or $GOPATH/pkg. The effect of the cache should be to speed builds that do not explicitly install packages or when switching between different copies of source code (for example, when changing back and forth between different branches in a version control system). The old advice to add the -i flag for speed, as in go build -i or go test -i, is no longer necessary: builds run just as fast without -i. For more details, see go help cache.
I just experienced the same problem when updating from 1.4 to 1.5. It seems that the old versions are somehow incompatible or are being rebuilt every time, as go build -x shows. Executing go get -v invalidates all packages or refetches them, I am not quite sure, and afterwards go build -x shows much less output.
You can build sqlite3 like this:
cd ./vendor/github.com/mattn/go-sqlite3/
go install
After that your project will be built much faster.
If you have tried everything the others suggested and it still doesn't work, I suggest removing the $GOPATH directory entirely:
sudo rm -rf $GOPATH
cd yourproject
go get -d
go get -u -v github.com/mattn/go-sqlite3
I have a library that is built and linked against as part of my project. I want to provide the facility to OPTIONALLY have the library installed system-wide (or wherever ${CMAKE_INSTALL_PREFIX} points). Otherwise, by default, the project's final build products will be linked statically to the library; those products get installed, but the library binaries stay in the build directory.
In other words:
$ make
$ make install
will build and install the programs, but only something like
$ make install.foo
will install the library to ${CMAKE_INSTALL_PREFIX}, building it first if needed.
I have something like this so far (simplified from the actual script, so there might be errors):
INCLUDE_DIRECTORIES( "${CMAKE_CURRENT_LIST_DIR}")
SET (FOO_LIBRARY "foo")
# The following builds the library and makes it available to
# be linked by other targets within the project via:
# TARGET_LINK_LIBRARIES(${progname} ${FOO_LIBRARY})
ADD_LIBRARY(${FOO_LIBRARY}
    foo/foo.cpp # and other sources ...
)
###########################################################
# Approach #1
# -----------
# Optionally allow users to install it by invoking:
#
# cmake .. -DINSTALL_FOO="yes"
#
# This works, but it means that users will have to run
# cmake again to switch back and forth between the library
# installation and non-library installation.
#
OPTION(INSTALL_FOO "Install foo" OFF)
IF (INSTALL_FOO)
    INSTALL(TARGETS ${FOO_LIBRARY} DESTINATION lib/foo)
    SET(FOO_HEADERS foo/foo.h)
    INSTALL(FILES ${FOO_HEADERS} DESTINATION include/foo)
    # Reset the cached option so the next CMake run defaults back to OFF
    UNSET(INSTALL_FOO CACHE)
ENDIF()
###########################################################
###########################################################
# Approach #2
# -----------
# Optionally allow users to install it by invoking:
#
# make install.foo
#
# Unfortunately, this gets installed by "make install",
# which I want to avoid
SET(FOO_INSTALL "install.foo")
ADD_CUSTOM_TARGET(${FOO_INSTALL}
    COMMAND ${CMAKE_COMMAND}
            -D COMPONENT=foo
            -P cmake_install.cmake)
ADD_DEPENDENCIES(${FOO_INSTALL} ${FOO_LIBRARY})
INSTALL(TARGETS ${FOO_LIBRARY}
    DESTINATION lib/foo COMPONENT foo)
SET(FOO_HEADERS foo/foo.h)
INSTALL(FILES ${FOO_HEADERS}
    DESTINATION include/foo COMPONENT foo)
###########################################################
As can be seen, approach #1 sort of works, but the required steps to install the library are:
$ cmake .. -DINSTALL_FOO="yes"
$ make && make install
And then, to go back to "normal" builds, the user has to remember to run cmake again without the "-DINSTALL_FOO" option, otherwise the library will be installed on the next "make install".
The second approach works when I run "make install.foo", but it also installs the library when I run "make install". I would like to avoid the latter.
Does anyone have any suggestions on how to go about achieving this?
You are on the right track with approach #2. You can trick CMake into avoiding the installation of the FOO related files by using the OPTIONAL switch of the install command.
For the FOO library target, the commands have to be modified in the following way:
SET (FOO_LIBRARY "foo")
ADD_LIBRARY(${FOO_LIBRARY} EXCLUDE_FROM_ALL
    foo/foo.cpp
)
INSTALL(TARGETS ${FOO_LIBRARY}
    DESTINATION lib/foo COMPONENT foo OPTIONAL)
EXCLUDE_FROM_ALL is added to ADD_LIBRARY to prevent the library from being built by a plain make. The installation of FOO_LIBRARY is made optional by adding the OPTIONAL switch.
Making the installation of the FOO_HEADERS optional requires the following changes:
SET(FOO_HEADERS foo/foo.h)
SET(BINARY_FOO_HEADERS "")
FOREACH (FOO_HEADER ${FOO_HEADERS})
    ADD_CUSTOM_COMMAND(
        OUTPUT ${CMAKE_CURRENT_BINARY_DIR}/${FOO_HEADER}
        COMMAND ${CMAKE_COMMAND} -E copy ${CMAKE_CURRENT_SOURCE_DIR}/${FOO_HEADER} ${CMAKE_CURRENT_BINARY_DIR}/${FOO_HEADER}
        DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/${FOO_HEADER})
    LIST (APPEND BINARY_FOO_HEADERS ${CMAKE_CURRENT_BINARY_DIR}/${FOO_HEADER})
ENDFOREACH()
INSTALL(FILES ${BINARY_FOO_HEADERS}
    DESTINATION include/foo COMPONENT foo OPTIONAL)
The FOREACH loop sets up custom commands which copy the FOO_HEADERS verbatim to the corresponding binary dir. Instead of directly using the headers in the current source dir, the INSTALL(FILES ...) command picks up the copied headers from the binary dir. The paths of the headers in the binary dir are collected in the variable BINARY_FOO_HEADERS.
Finally the FOO_INSTALL target has to be set up in the following way:
SET(FOO_INSTALL "install.foo")
ADD_CUSTOM_TARGET(${FOO_INSTALL}
    COMMAND ${CMAKE_COMMAND}
            -D COMPONENT=foo
            -P cmake_install.cmake
    DEPENDS ${BINARY_FOO_HEADERS})
ADD_DEPENDENCIES(${FOO_INSTALL} ${FOO_LIBRARY})
The custom FOO_INSTALL target is added with a dependency on BINARY_FOO_HEADERS to trigger the copying of the header files. A target-level dependency on FOO_LIBRARY triggers the building of the library when you run make install.foo.
The solution, however, has the following drawbacks:
Upon configuring, CMake will issue the warning Target "foo" has EXCLUDE_FROM_ALL set and will not be built by default but an install rule has been provided for it. CMake will, however, do the right thing anyway.
Once you have run make install.foo, a subsequent make install will also install all FOO-related files, because the built FOO library and the copied headers now exist in the binary directory.
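A possible way around the second drawback, assuming a newer CMake (3.6 or later, where the install() command itself accepts an EXCLUDE_FROM_ALL switch), is to exclude the rules from the default installation instead of marking them OPTIONAL; a minimal sketch for the library rule:

INSTALL(TARGETS ${FOO_LIBRARY}
    DESTINATION lib/foo
    COMPONENT foo
    EXCLUDE_FROM_ALL)

A plain make install then skips the foo component entirely, while the component-driven cmake -D COMPONENT=foo -P cmake_install.cmake call behind install.foo still installs it.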
Check if this solution works for you. You would have to use commands different from the ones you said you would like to use, but I think it addresses the problem fairly well.