I have always written build (compilation) and install as completely separate stages that are, as far as make(1) is concerned, unrelated. I could even put them in two separate Makefiles, since they are completely isolated.
About a year ago, however, I wrote a Makefile in which I specified the dependencies between install and compilation, and noticed some good things.
See the following minimal example:
.PHONY: all
all: build
.PHONY: build
build: foo
.PHONY: install
install: /usr/local/bin/foo
foo: foo.c
	cc -o $@ $<
/usr/local/bin/foo: foo
	install -m 755 -T $< $@
Should /usr/local/bin/foo depend on foo or not?
Reasons to state the dependency:
Only install if necessary (if foo was recompiled).
Of course, that saves considerable install time.
The Makefile can compile something if we forgot to compile it (but see below).
Reasons to not state the dependency:
Make sure that you never compile accidentally as root.
Make sure you install the already-built files, even if the sources were edited more recently than the compiled files.
This would be similar to the concept of a reinstall.
But for that rare scenario, one could run 'make uninstall && make install'.
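For that rare scenario, a minimal uninstall target matching the example above might look like this (a sketch; the path is the one used above):

.PHONY: uninstall
uninstall:
	rm -f /usr/local/bin/foo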
The GNU Make Manual, section Standard Targets, lists several standard targets and what make users should expect from them. About the install target:
Compile the program and copy the executables, libraries, and so on to the file names where they should reside for actual use. If there is a simple test to verify that a program is properly installed, this target should run that test.
As for running make as root: I think most people do sudo make install without a second thought. If you're not sure what the makefile will do, you can review the output of make -n install for any shenanigans (after running make all first, so that the dry run shows just the install part).
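In practice, a cautious sequence along those lines might be (a sketch using the example Makefile above):

make all            # build everything as an unprivileged user
make -n install     # dry run: print the install commands without executing them
sudo make install   # install for real once the dry run looks sane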
Related
First, a little bit of background as to why I'm asking this question: our product's daily build script (as run under Debian Linux by Jenkins) does roughly this:
Creates and enters a build environment using debootstrap and chroot
Checks out our codebase (including some 3rd party libraries) from SVN
Runs configure and make as necessary to build all of the code
Packages up the results into an install file that can be uploaded to our headless server boxes using our install tool.
This mostly works fine, but every so often (maybe one daily build out of 10), the part of the script that builds one of our third-party libraries will error out with an error like this one:
CDPATH="${ZSH_VERSION+.}:" && cd . && /bin/bash /root/software/3rdparty/libogg/missing autoconf
/root/software/3rdparty/libogg/missing: line 81: autoconf: command not found
WARNING: 'autoconf' is missing on your system.
You should only need it if you modified 'configure.ac',
or m4 files included by it.
The 'autoconf' program is part of the GNU Autoconf package:
<http://www.gnu.org/software/autoconf/>
It also requires GNU m4 and Perl in order to run:
<http://www.gnu.org/software/m4/>
<http://www.perl.org/>
make: *** [configure] Error 127
As far as I can tell, this happens occasionally because the timestamps of the files in the third-party library come out slightly different (e.g. off by a second or two from each other, just due to the timing of when they were checked out from the SVN server during that particular build). That makes the generated Makefile think that it needs to regenerate the configure script, so it tries to call autoconf (via the missing wrapper) to do so, and errors out because autoconf is not installed.
Of course the obvious thing to do here would be to install the missing autotools in the build environment, but the build environment is not one that I can easily modify (for institutional reasons), so I'd like to avoid that if possible. What I'd like to do instead is figure out how to get the configure scripts (which I can modify) to ignore the timestamps and always do the basic build that they do when the timestamps are equal.
I tried to finesse the problem by manually running 'touch' on some files to force their timestamps to be the same, and that seemed to make the problem occur less often, but it still happens:
./configure --prefix="$PREFIX" --disable-shared --enable-static && \
touch config* aclocal* Makefile* && \
make clean && make install ) || Failure "libogg"
Can anyone familiar with how automake works supply some advice on how I might make the "configure" calls in our daily build work more reliably, without modifying the build environment?
You could try forcing SVN to use commit times on checkout on your Jenkins server (the commit times themselves can also be adjusted in SVN if they don't work out for some reason). You could also use touch -d or touch -r instead of a plain touch, to avoid the race conditions there.
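For example (a sketch; the config location assumes the default per-user SVN client configuration for the Jenkins user, and the file list is illustrative):

# In ~/.subversion/config on the Jenkins build machine:
#   [miscellany]
#   use-commit-times = yes
#
# Alternatively, give the autotools outputs one common timestamp so that
# none of them appears newer than configure.ac:
touch -r configure.ac aclocal.m4 configure Makefile.in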
How do I include the linux kernel headers as part of the SDK package in Yocto?
I'm using Yocto 1.8 (fido) in an embedded project and want to do out-of-tree kernel module development. Currently, I can build my kernel modules (outside of bitbake) by pointing $KERNEL_PATH to the poky/build/tmp/work-shared/<machine>/kernel-source/ directory when I run make. I don't want to do this in the long term, however, since others need to be able to build the modules easily without installing and building a full image from bitbake.
I can generate an SDK using bitbake myimage -c populate_sdk. However, this doesn't include the kernel headers (all I ever see is sysroots/<mach>/usr/include/linux). How do I cause the kernel headers to be included in the SDK package? Also, I don't want the kernel headers to show up as part of my target image.
[Edit]
My image recipe is as follows:
EXTRA_IMAGE_FEATURES_append = " eclipse-debug debug-tweaks"
TOOLCHAIN_HOST_TASK_append = " nativesdk-cmake"
IMAGE_INSTALL = "packagegroup-core-boot ${CORE_IMAGE_EXTRA_INSTALL} util-linux kernel-modules netbase busybox base-passwd base-files sysvinit initscripts bash gdbserver strace sysfsutils dtc gawk ethtool grep sed wget iptables oprofile net-tools dropbear rsync stress-ng rt-tests i2c-tools"
inherit core-image
The kernel I'm using is linux-altera-ltsi-rt in the meta-altera layer.
As of the fido release, the handling of kernel builds has changed. In previous releases, you could usually just skip down to the usage example below.
In fido or any 1.8+ release, if you want the kernel source and build system available in the SDK, you should add
TOOLCHAIN_TARGET_TASK_append = " kernel-devsrc"
to your image recipe. That will ensure that the new kernel-devsrc package is installed into your toolchain.
The procedure below is just to ensure that the rest of the workflow is fully understood (even though it's not strictly part of the original question).
Usage Example
Let's assume a module Makefile as follows:
obj-m += hello-1.o
all:
	make -C $(KERNEL_SRC) M=$(PWD) modules
clean:
	make -C $(KERNEL_SRC) M=$(PWD) clean
Example taken from The Linux Kernel Module Programming Guide. (Note that the recipe commands need to be indented with a tab character.)
Then you'll have to define KERNEL_SRC to be sysroots/<mach>/usr/src/kernel/, either in the Makefile, or from your make call. (Using a variable like KERNEL_SRC will ensure that your module recipe automatically picks the right location when building using bitbake).
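A minimal sketch of that default in the module Makefile (SDKTARGETSYSROOT is assumed to be set by the SDK's environment-setup script; substitute the literal sysroot path if your SDK doesn't define it):

# Default to the kernel source staged in the SDK sysroot, but allow an
# override from the make command line or from a bitbake recipe:
KERNEL_SRC ?= $(SDKTARGETSYSROOT)/usr/src/kernel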
To manually build your kernel module:
Source the environment-* file for your SDK.
Go to your module's directory.
Run KERNEL_SRC=<sdk-install-path>/sysroots/<mach>/usr/src/kernel LDFLAGS="" make. However, this will fail, as fixdep can't be found. We'll fix this manually:
cd <sdk-install-path>/sysroots/<mach>/usr/src/kernel
make modules_prepare
If this needs to be run with sudo, be sure to source the environment file in the sudo environment: sudo bash -c "source <sdk-install-path>/environment-setup-<mach> && make modules_prepare"
Go back to your module's directory.
KERNEL_SRC=<sdk-install-path>/sysroots/<mach>/usr/src/kernel LDFLAGS="" make
This should now allow you to build your modules.
If you don't have the kernel source under sysroots/<mach>/usr/src/kernel/, we'll have to look into that.
Anders' answer is very good, but in recent versions of Yocto the way to add kernel-devsrc seems to be
IMAGE_INSTALL += "kernel-devsrc"
which I found here: https://www.mail-archive.com/yocto@yoctoproject.org/msg36448.html
With Yocto Zeus (3.0.x) add this to your image recipe:
TOOLCHAIN_TARGET_TASK += "kernel-devsrc"
EDIT: The same applies to Gatesgarth (3.2.x), but the make scripts step now has an additional host dependency on libyaml-dev.
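On a Debian/Ubuntu build host (an assumption about your distribution), that host dependency can be satisfied with:

sudo apt-get install libyaml-dev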
I'm currently learning how to use the autoconf/automake toolchain. I seem to have a general understanding of the workflow here - basically you have a configure.ac file, from which autoconf generates an executable configure script. The generated configure script is then executed by the end user to generate Makefiles, so the program can be built/installed.
So the installation for a typical end-user is basically:
./configure
make
make install
make clean
Okay, now here's where I'm confused:
As a developer, I've noticed that the auto-generated configure script sometimes won't run, and will error with:
config.status: error: cannot find input file: `somedir/Makefile.in'
This confuses me, because I thought the configure script is supposed to generate the Makefile.in. So Googling around for some answers, I've discovered that this can be fixed with an autogen.sh script, which basically "resets" the state of the autoconf environment. A typical autogen.sh script would be something like:
aclocal \
&& automake --add-missing \
&& autoconf
Okay fine. But as an end-user who's downloaded countless tarballs throughout my life, I've never had to use an autogen.sh script. All I did was uncompress the tarball, and do the usual configure/make/make install/make clean routine.
But as a developer who's now using autoconf, it seems that configure doesn't actually run unless you run autogen.sh first. So I find this very confusing, because I thought the end-user shouldn't have to run autogen.sh.
So why do I have to run autogen.sh first - in order for the configure script to find Makefile.in? Why doesn't the configure script simply generate it?
In order to really understand the autotools utilities you have to remember where they come from: they come from an open source world where there are (a) developers who are working from a source code repository (CVS, Git, etc.) and creating a tar file or similar containing source code and putting that tar file up on a download site, and (b) end-users who are getting the source code tar file, compiling that source code on their system and using the resulting binary. Obviously the folks in group (a) also compile the code and use the resulting binary, but the folks in group (b) don't have or need, often, all the tools for development that the folks in group (a) need.
So the use of the tools is geared towards this split, where the people in group (b) don't have access to autoconf, automake, etc.
When using autoconf, people generally check in the configure.ac file (input to autoconf) into source control but do not check in the output of autoconf, the configure script (some projects do check in the configure script of course: it's up to you).
When using automake, people generally check in the Makefile.am file (input to automake) but do not check in the output of automake: Makefile.in.
The configure script basically looks at your system for various optional elements that the package may or may not need, where they can be found, etc. Once it finds this information, it can use it to convert various XXX.in files (typically, but not solely, Makefile.in) into XXX files (for example, Makefile).
So the steps generally go like this: write configure.ac and Makefile.am and check them in. To build the project from a source-control checkout, run autoconf to generate configure from configure.ac. Run automake to generate Makefile.in from Makefile.am. Run configure to generate Makefile from Makefile.in. Run make to build the product.
When you want to release the source code (if you're developing an open source product that makes source code releases) you run autoconf and automake, then bundle up the source code with the configure and Makefile.in files, so that people building your source code release just need make and a compiler and don't need any autotools.
Because the order of running autoconf and automake (and libtool if you use it) can be tricky there are scripts like autogen.sh and autoreconf, etc. which are checked into source control to be used by developers building from source control, but these are not needed/used by people building from the source code release tar file etc.
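Put concretely, the two workflows look roughly like this (a sketch assuming an automake-based project; the tarball name is illustrative):

# Developer, from a fresh source-control checkout (needs the autotools installed):
autoreconf --install        # runs aclocal, autoconf, automake --add-missing, etc.
./configure && make         # normal development build
make dist                   # roll the release tarball, which ships configure and Makefile.in

# End user, from the release tarball (needs only make and a compiler):
tar xf package-1.0.tar.gz && cd package-1.0
./configure && make && sudo make install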
Autoconf and automake are often used together but you can use autoconf without automake, if you want to write your own Makefile.in.
For this error:
config.status: error: cannot find input file: `somedir/Makefile.in'
In the directory where configure.ac is located, add a line to the Makefile.am with the subdirectory somedir:
SUBDIRS = somedir
Inside somedir, put a Makefile.am with all the description, then run automake --add-missing.
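A minimal sketch of the pieces involved (somedir is the placeholder from the question; the program name foo is illustrative):

# configure.ac: every Makefile that config.status must generate has to be
# listed here, and each listed directory needs its own Makefile.am:
AC_CONFIG_FILES([Makefile somedir/Makefile])

# Top-level Makefile.am:
SUBDIRS = somedir

# somedir/Makefile.am (illustrative contents):
bin_PROGRAMS = foo
foo_SOURCES = foo.c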
A better description can be found in section 7.1, "Recursing subdirectories", of the Automake manual:
https://www.gnu.org/software/automake/manual/automake.html
I have recently started to work with automake and autoconf, and I am a little confused about how to distribute the code.
Usually when I get code that builds with a configure script, all I get is the configure script and the code itself with the Makefile.am and so on. Usually I do
./configure
make
sudo make install
and that's all; but when I generate my configure from a configure.ac file, it spits out lots of files that I thought were just temporary. If I give the code to a partner and he runs configure, it doesn't work: he either needs to rerun autoreconf or have all these files (namely install-sh, config.sub, ...).
Is there something that I am missing? How can I distribute the code easily and cleanly?
I have searched a lot, but I don't think I am searching for the right thing, because I cannot find anything useful.
Thanks a lot in advance!
Automake provides a make dist target. This automatically creates a .tar.gz from your project. This archive is set up in such a way that the recipient can simply extract it and run the usual ./configure && make && make install invocation.
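For example, from a configured source tree:

make dist        # produces package-version.tar.gz containing configure, Makefile.in, etc.
make distcheck   # additionally unpacks that tarball and verifies it configures, builds and installs cleanly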
It is generally not recommended to check the files generated by Autotools into your repository. This is because they are derived objects. You wouldn't check in your .o files!
Usually, it is a good idea to provide an autogen.sh script that carries out any actions required to re-create the Autotools build infrastructure in a fresh version control checkout. Often, it can be as simple as:
#!/bin/sh
autoreconf -i
Then make it executable (chmod +x autogen.sh), and the instructions for compiling from a clean checkout become ./autogen.sh && ./configure && make.
Maybe I am asking a silly question, but is there any way I can tell automake to include my project's header files when I do a "make dist" but not when I do a "make install"?
Maybe I am not going about this the right way, so to make it clearer I will explain what I need.
I need to deploy my applications on an embedded board, and I use "make install" in a script to create a package that can be copied to the target board.
On the other hand, I'd like to be able to update my toolchain with my libraries and include files.
In the first situation, I can't afford any fat wasting my limited flash memory - just the things needed to make the application run.
In the second one, I need the headers, pkg-config files and all of the other stuff needed for development.
How am I supposed to configure my "Makefile.am", and which rules should I expect, so that I can reach both goals?
Thanks a lot.
I just want to be able to set a given script setuid, give other data files arbitrary R/W permissions, and so on.
I think the $(DESTDIR) makefile variable does that. Since it is not defined by automake, a plain "make install" uses it empty, but dpkg-buildpackage defines it when it stages the install into a packaging directory.
(see: http://www.gnu.org/prep/standards/html_node/DESTDIR.html#DESTDIR)
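For illustration, a staged install driven by DESTDIR (the staging path is arbitrary):

# Install into a staging directory instead of the live filesystem; files end up
# under /tmp/staging$(prefix)/... and can be packaged from there:
make install DESTDIR=/tmp/staging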
It helped me to manage a setuid install:
configure.ac:
# Add option to disable setuid during install; used in distcheck
AC_ARG_ENABLE([setuid-install],
  AS_HELP_STRING([--disable-setuid-install],
                 [do not set setuid flags during install]),
  [enable_setuid_install=$enableval], [enable_setuid_install="yes"])
AM_CONDITIONAL(SETUID_INSTALL, test x"$enable_setuid_install" = "xyes")
Makefile.am:
if SETUID_INSTALL
install-data-hook:
	/bin/chmod 4755 $(DESTDIR)$(bindir)/myBinary
endif
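To let "make distcheck" pass without root, the configure flag from above can be passed automatically via the standard DISTCHECK_CONFIGURE_FLAGS hook (a sketch, added to Makefile.am):

DISTCHECK_CONFIGURE_FLAGS = --disable-setuid-install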
I don't think autoconf was really designed to be a generic installer/uninstaller that'll give you that kind of control without at least some pain. You're looking for something like dpkg-buildpackage or rpmbuild where you can split up the output of make install into specific subpackages so you can have:
Package foo for the embedded board and possibly the toolchain, depending on what's in the package (DSOs, executables, and other files needed at runtime)
Package foo-dev or foo-devel for the toolchain (headers, static libs, other files needed for development).
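As a rough sketch of that split in Debian terms (package names and file lists are illustrative), dh_install can divide a DESTDIR-staged "make install" into the two packages:

debian/foo.install:        (runtime package for the board)
    usr/bin/*
    usr/lib/*.so.*

debian/foo-dev.install:    (development package for the toolchain)
    usr/include/*
    usr/lib/*.so
    usr/lib/pkgconfig/*.pc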