Use PACKAGE_VERSION in the argument of AC_OUTPUT()

I apologize in advance for my lack of experience with m4sh.
I have a configure.ac file containing the following lines:
AC_INIT([libhelloworld], [2.5])
...
AC_OUTPUT([
Makefile
src/helloworld-${PACKAGE_VERSION}.pc
src/Makefile
])
The reason behind the argument of AC_OUTPUT() is that I would like to avoid copying and pasting the new version of my program into multiple places on every update. Therefore I decided to exploit the PACKAGE_VERSION macro, which is automatically defined when AC_INIT() is invoked at the beginning of configure.ac.
The line src/helloworld-${PACKAGE_VERSION}.pc then correctly expands into src/helloworld-2.5.pc and everything seems to be working fine. However, I have a couple of questions.
I use ${PACKAGE_VERSION} as a shell variable, but PACKAGE_VERSION itself is an m4 macro. Can I trust that this will always work? Will it always be defined as such when AC_OUTPUT() is invoked?
Are there other ways of obtaining the value of PACKAGE_VERSION within configure.ac? For example, if instead of configure.ac I were inside Makefile.am, I would not use the curly brackets but the make variable syntax instead, as in $(PACKAGE_VERSION). What is the correct way of doing what I want within configure.ac?

The documentation for AC_INIT states that PACKAGE_VERSION is an "output variable", meaning when you call AC_INIT, something like this gets executed:
AC_SUBST([PACKAGE_VERSION], [2.5])
This allows the configuration of input files such as Makefile.in (generated from Makefile.am) to rely on @PACKAGE_VERSION@ inside those files being replaced by 2.5.
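For example, given a hypothetical input file src/helloworld.pc.in containing the line
Version: @PACKAGE_VERSION@
the generated src/helloworld.pc would contain
Version: 2.5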
There's nothing wrong with your approach if it works, but you might consider using AS_VAR_SET([hello_version], [AC_PACKAGE_VERSION]) to set the hello_version shell variable, and then src/helloworld-${hello_version}.pc in the Autoconf input. This way, even if some future Autoconf release no longer exposes a PACKAGE_VERSION shell variable, your code won't break, because it relies on your own hello_version variable.
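A minimal sketch of that suggestion, using the same names as above:
AC_INIT([libhelloworld], [2.5])
AS_VAR_SET([hello_version], [AC_PACKAGE_VERSION])
...
AC_OUTPUT([
Makefile
src/helloworld-${hello_version}.pc
src/Makefile
])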
As an aside, it's a bit irregular to use helloworld-2.5.pc when the helloworld version is 1.0 or greater (i.e. the API is stable). It's common to see helloworld.pc, except then there's the problem of what happens when you release 3.0 and replace the installed 2.x version of helloworld.pc with the 3.0 version: assuming you're using semantic versioning, 3.0 is incompatible with 2.x, and any code relying on something like pkg-config --libs helloworld will break.
You might then consider using helloworld-2.pc instead and when you release 3.0, you'd instead have helloworld-3.pc to avoid users of your library linking the incorrect/incompatible library (and also allowing users the option of moving to the new version at their own pace); one can also apply this idea in Automake for a version-specific header directory:
## SOURCE PATH => INSTALL PATH
## include/hello.h => $(includedir)/helloworld-2/hello.h
helloincludedir = @includedir@/helloworld-@hello_major@
helloinclude_HEADERS = include/hello.h
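With a versioned helloworld-2.pc installed, consumers would then link along these lines (hypothetical usage):
pkg-config --cflags --libs helloworld-2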
Autoconf also allows you to specify an output file's inputs, so an output file src/helloworld-${hello_major}.pc in the build directory could be generated from src/helloworld.pc.in in the source directory, without you needing to rename src/helloworld.pc.in when moving from 2.x to 3.0. This can also be combined with AC_INIT if you're OK with macros, allowing you to keep the version information in one central location:
m4_define([hello_version_major], [2]) dnl
m4_define([hello_version_minor], [5]) dnl
m4_define([hello_version], [hello_version_major[.]hello_version_minor]) dnl
AC_PREREQ([2.69])
AC_INIT([libhelloworld], [hello_version])
AS_VAR_SET([hello_major], [hello_version_major])
AS_VAR_SET([hello_minor], [hello_version_minor])
# For automake and configuration of pkg-config file
AC_SUBST([hello_major])
AC_SUBST([hello_minor])
AC_SUBST([hello_version])
...
AC_CONFIG_FILES([
Makefile
src/Makefile
src/helloworld-]hello_version_major[.pc:src/helloworld.pc.in
])
AC_OUTPUT
I realize it looks more complicated than one might expect, but that's Autoconf for you. Note that I had to use some odd quoting in AC_CONFIG_FILES to make use of the macro. Using
src/helloworld-${hello_major}.pc:src/helloworld.pc.in
instead of the macro resulted in a crippled config.status file being generated with Autoconf 2.69 (try running config.status with no arguments, then config.status src/helloworld-2.pc, to see the issue); I haven't tested any other versions. I've reported the bug; until a fixed release, the macro-based spelling works.

Related

Makefile.am: How to link a dynamic library only if the library exists/is installed on the system

I have a dynamic library /usr/lib64/liba-3.2.so.1, and I am trying to change Makefile.am so that myprog_LDADD links against this library if the file exists. Is there any way to do it?
I tried this:
if [ -f /usr/lib64/liba-3.2.so.1 ]; then myprog_LDADD += /usr/lib64/liba-3.2.so.1 ; fi;
But this is not working. Any suggestions on how to make Makefile.am link against a library if the library exists? Thanks!
The usual way would of course be to use your configure(.ac) (autotools) to check for the existence of the library and then use the result to tell your Makefile(.am) to link against it.
Snippet from configure.ac:
AC_CHECK_LIB([a-3.2], [a_fun], [A_LIBS="-la-3.2"])
AC_SUBST([A_LIBS])
and the corresponding snippet from Makefile.am:
myprog_LDADD += @A_LIBS@
Note that this will look for liba-3.2.so in all the (default) search paths of the linker, and without the .1 suffix, but I think this is the correct behavior anyhow (and your explicitly linking against /usr/lib64/liba-3.2.so.1 is bound to fail in multiple scenarios, starting with non-64-bit platforms, so I'd consider this over-adaptation)
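If you prefer to make the LDADD addition itself conditional, an Automake conditional also works; a minimal sketch, reusing the a_fun symbol from the snippet above:
Snippet from configure.ac:
AC_CHECK_LIB([a-3.2], [a_fun], [have_liba=yes], [have_liba=no])
AM_CONDITIONAL([HAVE_LIBA], [test "x$have_liba" = "xyes"])
and the corresponding snippet from Makefile.am:
if HAVE_LIBA
myprog_LDADD += -la-3.2
endif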

How to link .cma files to my own Frama_C plugin?

I created my own Frama-C plugin by following the instructions of the Frama-C Development Guide (https://frama-c.com/download/frama-c-plugin-development-guide.pdf).
However, I need to use the Mutex module provided by the Ocaml manual (http://caml.inria.fr/pub/docs/manual-ocaml/libref/Mutex.html) in my .ml files. To use this module, I need a particular command line:
ocamlc -thread unix.cma threads.cma myfiles.ml
(as explained here: OCaml Mutex module cannot be found).
To compile my files I use the Makefile that builds the plugin (Plugin Development Guide, page 33). So I tried to pass these .cma files and the -thread option through this Makefile... and I did not succeed. How can I load this Mutex module?
What I tried:
I looked in the file automatically generated by Frama-C, .Makefile.plugin.generated, to see whether there was a variable I could set in my own Makefile (of the same kind as the PLUGIN_CMO variable that lists my .ml files), but I did not find such a variable.
I tried with some variables that are defined in the generated .Makefile.plugin.generated this way:
I wrote the following lines in my Makefile:
PLUGIN_EXTRA_BYTE = unix.cma threads.cma
or TARGET_TOP_CMA = unix.cma threads.cma
and for the thread option:
PLUGIN_OFLAGS = -thread
or PLUGIN_LINK_BFLAGS= -thread
or PLUGIN_BFLAGS= -thread
But the Mutex module was never recognized, and I don't know whether this is even a good approach...
Finally, I tested using the OldDynlink module provided by Frama-C (http://arvidj.eu/frama/frama-c-Aluminium-20160501_api/frama-c-api/html/FCDynlink.OldDynlink.html#VALloadfile) with its loadfile value, or the Dynlink module (http://caml.inria.fr/pub/docs/manual-ocaml/libref/Dynlink.html#VALloadfile) and its loadfile value, but that did not work either:
I wrote:
open Dynlink
loadfile "unix.cma";;
loadfile "threads.cma";;
in the .ml file concerned.
But I always got the same error: Unbound module Mutex.
Section 5.2.3 of the plugin development guide gives the list of variables that can be used to customize the Makefile. Notably, if you want to link against an external library, you can use PLUGIN_EXTRA_BYTE and PLUGIN_EXTRA_OPT, as well as PLUGIN_LINK_BFLAGS and PLUGIN_LINK_OFLAGS to add the -thread option. Here is a Makefile that should work (of course, you need to complete it depending on your actual source files).
ifndef FRAMAC_SHARE
FRAMAC_SHARE:=$(shell frama-c-config -print-share-path)
endif
PLUGIN_NAME:=Test_mutex
PLUGIN_BFLAGS:=-thread
PLUGIN_OFLAGS:=-thread
PLUGIN_EXTRA_BYTE:=$(shell ocamlfind query threads)/threads/threads.cma
PLUGIN_EXTRA_OPT:=$(shell ocamlfind query threads)/threads/threads.cmxa
PLUGIN_LINK_BFLAGS:=-thread
PLUGIN_LINK_OFLAGS:=-thread
PLUGIN_CMO:= # list of modules of the plugin
include $(FRAMAC_SHARE)/Makefile.dynamic
Note that in theory you should only have to use the PLUGIN_REQUIRES variable and let ocamlfind take care of everything, but threads seems to be a bit peculiar in this respect.
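For reference, the PLUGIN_REQUIRES route would look something like this (a sketch; threads.posix is the ocamlfind package name, and whether it resolves cleanly here is exactly the peculiarity mentioned above):
PLUGIN_NAME:=Test_mutex
PLUGIN_CMO:= # list of modules of the plugin
PLUGIN_REQUIRES:=threads.posix
include $(FRAMAC_SHARE)/Makefile.dynamic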

How to write a BitBake driver recipe which requires kernel source header files?

Introduction
I have a do_install task in a BitBake recipe which I've written for a driver where I execute a custom install script. The task fails because the installation script cannot find kernel source header files within <the image rootfs>/usr/src/kernel. This script runs fine on the generated OS.
What's Happening
Here's the relevant part of my recipe:
SRC_URI += "file://${TOPDIR}/example"
DEPENDS += " virtual/kernel linux-libc-headers "
do_install () {
( cd ${TOPDIR}/example/Install ; ./install )
}
Here's a relevant portion of the install script:
if [ ! -d "/usr/src/kernel/include" ]; then
echo ERROR: Linux kernel source include directory not found.
exit 1
fi
cd /usr/src/kernel
make scripts
...
./install_drv pci ${DRV_ARGS}
I checked changing to if [ ! -d "/usr/src/kernel" ], which also failed. install passes different options to install_drv, which I have a relevant portion of below:
cd ${DRV_PATH}/pci
make NO_SYSFS=${ARG_NO_SYSFS} NO_INSTALL=${ARG_NO_INSTALL} ${ARGS_HWINT}
if [ ${ARG_NO_INSTALL} == 0 ]; then
if [ `/sbin/lsmod | grep -ci "uceipci"` -eq 1 ]; then
./unload_pci
fi
./load_pci DEBUG=${ARG_DEBUG}
fi
The make target build: within ${DRV_PATH}/pci is essentially this:
make -C /usr/src/kernel SUBDIRS=${PWD} modules
My Research
I found these comments within linux-libc-headers.inc relevant:
# You're probably looking here thinking you need to create some new copy
# of linux-libc-headers since you have your own custom kernel. To put
# this simply, you DO NOT.
#
# Why? These headers are used to build the libc. If you customise the
# headers you are customising the libc and the libc becomes machine
# specific. Most people do not add custom libc extensions to the kernel
# and have a machine specific libc.
#
# But you have some kernel headers you need for some driver? That is fine
# but get them from STAGING_KERNEL_DIR where the kernel installs itself.
# This will make the package using them machine specific but this is much
# better than having a machine specific C library. This does mean your
# recipe needs a DEPENDS += "virtual/kernel" but again, that is fine and
# makes total sense.
#
# There can also be a case where your kernel is extremely old and you want
# an older libc ABI for that old kernel. The headers installed by this
# recipe should still be a standard mainline kernel, not your own custom
# one.
I'm a bit unclear on whether I can properly 'get' the headers from STAGING_KERNEL_DIR, since I'm not using make.
Within kernel.bbclass, provided in the meta/classes directory, there is this variable assignment:
# Define where the kernel headers are installed on the target as well as where
# they are staged.
KERNEL_SRC_PATH = "/usr/src/kernel"
This path is then packaged later within that .bbclass file here:
PACKAGES = "kernel kernel-base kernel-vmlinux kernel-image kernel-dev kernel-modules"
...
FILES_kernel-dev = "/boot/System.map* /boot/Module.symvers* /boot/config* ${KERNEL_SRC_PATH} /lib/modules/${KERNEL_VERSION}/build"
Update (1/21):
A suggestion on the yocto IRC channel was to use the following line:
do_configure[depends] += "virtual/kernel:do_shared_workdir"
which is corroborated by the Yocto Project Reference Manual, which states that in version 1.8, there was the following change:
The kernel build process was changed to place the source in a common shared work area and to place build artifacts separately in the source code tree. In theory, migration paths have been provided for most common usages in kernel recipes but this might not work in all cases. In particular, users need to ensure that ${S} (source files) and ${B} (build artifacts) are used correctly in functions such as do_configure and do_install. For kernel recipes that do not inherit from kernel-yocto or include linux-yocto.inc, you might wish to refer to the linux.inc file in the meta-oe layer for the kinds of changes you need to make. For reference, here is the commit where the linux.inc file in meta-oe was updated.
Recipes that rely on the kernel source code and do not inherit the module classes might need to add explicit dependencies on the do_shared_workdir kernel task, for example:
do_configure[depends] += "virtual/kernel:do_shared_workdir"
But I'm having difficulties applying this to my recipe. From what I understand, I should be able to change the above line to:
do_install[depends] += "virtual/kernel:do_shared_workdir"
This would mean that the do_install task must now run after the do_shared_workdir task of the virtual/kernel recipe, which means that I should be able to work with the shared workdir (see Question 3 below). But I still have the same missing kernel header issue.
My Questions
I'm using a custom Linux kernel (v3.14) from git.kernel.org, whose recipe inherits the kernel class. Here are some of my questions:
Shouldn't the package kernel-dev be a part of any recipe which inherits the kernel class? (this section of the variables glossary)
If I add the virtual/kernel to the DEPENDS variable, wouldn't that mean that the kernel-dev would be brought in?
If kernel-dev is part of the dependencies of my recipe, wouldn't I be able to point to the /usr/src/kernel directory from my recipe? According to this reply on the Yocto mailing list, I think I should.
How can I properly reference the kernel source header files, preferably without changing the installation script?
Consider your Environment
Remember that there are different environments within the build-time environment, consisting of:
sysroots
in the case of kernels, a shared work directory
target packages
kernel-dev is a target package, which you'd install into the rootfs of the target system for certain things like kernel symbol maps which are needed by profiling tools like perf/oprofile. It is not present at build time although some of its contents are available in the sysroots or shared workdir.
Point to the Correct Directories
Your do_install runs at build time, so this is within the directory structures of the build system, not the target one. In particular, /usr/src/ won't be correct; it would need to be some path within your build directory. The virtual/kernel do_shared_workdir task populates ${STAGING_KERNEL_DIR}, so you would want to change to that directory in your script.
Adding a Task Dependency
The:
do_install[depends] += "virtual/kernel:do_shared_workdir"
dependency looks correct for your use case, assuming nothing in do_configure or do_compile accesses the data there.
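For instance, combined with the recipe fragment from the question, it would look like this (a sketch; the install script itself still has to be pointed at the staged source instead of /usr/src/kernel, e.g. via the variables discussed under Other Suggestions below):
do_install[depends] += "virtual/kernel:do_shared_workdir"
do_install () {
    # ${STAGING_KERNEL_DIR} is populated by the time this task runs
    ( cd ${TOPDIR}/example/Install ; ./install )
}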
Reconsider the module BitBake class
The other answers are correct in the recommendation to look at module.bbclass, since this illustrates how common kernel modules can be built. If you want to use custom functions or make commands, this is fine, you can just override them. If you really don't want to use that class, I would suggest taking inspiration from it though.
Task Dependencies
Adding virtual/kernel to DEPENDS means virtual/kernel:do_populate_sysroot must run before your do_configure task. Since you need a dependency on the do_shared_workdir task here, a DEPENDS on virtual/kernel is not enough.
Answer to Question 3
The kernel-dev package would be built, however it would then need to be installed into your target image and used at runtime on a real target. You need this at build time so kernel-dev is not appropriate.
Other Suggestions
You'd likely want the kernel-devsrc package for what you're doing, not the kernel-dev package.
I don't think anyone can properly answer that last question here. You are using a non-standard install method: we can't know how to interact with it...
That said, take a look at what meta/classes/module.bbclass does. It sets several related variables for make: KERNEL_SRC=${STAGING_KERNEL_DIR}, KERNEL_PATH=${STAGING_KERNEL_DIR}, O=${STAGING_KERNEL_BUILDDIR}. Maybe your installer supports some of these environment variables and you could set them in your recipe?
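If it does, the recipe could pass them explicitly (a sketch; whether the vendor's install script honours KERNEL_SRC and friends is an assumption you'd have to verify):
do_install () {
    cd ${TOPDIR}/example/Install
    KERNEL_SRC=${STAGING_KERNEL_DIR} \
    KERNEL_PATH=${STAGING_KERNEL_DIR} \
    O=${STAGING_KERNEL_BUILDDIR} \
    ./install
}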

How to make created packages available on make menuconfig?

I'm trying to create a libxerces package for OpenWrt. Following the instructions from this site http://wiki.openwrt.org/doc/devel/packages, I created a folder called libxerces-c inside the packages directory and a simple Makefile to have the package listed in make menuconfig, but it's not showing up.
The Makefile is defined as the following:
#
# Copyright (C) 2006-2013 OpenWrt.org
#
# This is free software, licensed under the GNU General Public License v2.
# See /LICENSE for more information.
#
include $(TOPDIR)/rules.mk
# Name and release number of this package
PKG_NAME:=xerces-c
PKG_VERSION:=3.1.1
PKG_RELEASE:=1
PKG_BUILD_DIR:=$(BUILD_DIR)/$(PKG_NAME)-$(PKG_VERSION)
PKG_SOURCE:=$(PKG_NAME)-$(PKG_VERSION).tar.gz
PKG_SOURCE_URL:=http://apache.mirror.pop-sc.rnp.br/apache//xerces/c/3/sources/
PKG_CAT:=zcat
include $(INCLUDE_DIR)/package.mk
# Specify package information for this program.
# The variables defined here should be self explanatory.
define Package/libxerces
SECTION:=libs
CATEGORY:=Libraries
TITLE:=Validating XML parser written in a portable subset of C++.
URL:=http://xerces.apache.org/
endef
define Package/libxerces/description
Xerces-C++ is a validating XML parser written in a portable subset of
C++. Xerces-C++ makes it easy to give your application the ability
to read and write XML data. A shared library is provided for parsing,
generating, manipulating, and validating XML documents. Xerces-C++ is
faithful to the XML 1.0 recommendation and associated standards (DOM
1.0, DOM 2.0, SAX 1.0, SAX 2.0, Namespaces, XML Schema Part 1 and
Part 2). It also provides experimental implementations of XML 1.1
and DOM Level 3.0. The parser provides high performance, modularity,
and scalability.
endef
CONFIGURE_ARGS+= --host=mips-openwrt-linux
define Build/Configure
$(call Build/Configure/Default)
endef
define Build/Compile
$(call Build/Compile/Default)
endef
define Package/libxerces/install
endef
$(eval $(call BuildPackage,libxerces))
I already tried to execute the install script
./scripts/feeds install libxerces-c
But nothing happened. I still can't see the package after executing make menuconfig.
You need to
add the feed with the package to your feeds.conf.default or create a feeds.conf
then ./scripts/feeds update -a (update all feeds... you could just set the feed's name instead of using -a)
then ./scripts/feeds install foobar
[...]
... You obviously called install on libxerces-c while your package is called libxerces?
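Concretely, assuming the package lives in a local feed (the custom name and path below are hypothetical):
echo "src-link custom /path/to/custom/packages" >> feeds.conf
./scripts/feeds update custom
./scripts/feeds install libxerces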
You are probably no longer looking for this, but here is the answer.
For your package to appear in the menuconfig TUI you need to add the following
option to your Makefile inside the define Package clause:
MENU:=1
Thus this part of your Makefile will look like:
define Package/libxerces
SECTION:=libs
CATEGORY:=Libraries
MENU:=1
TITLE:=Validating XML parser written in a portable subset of C++.
URL:=http://xerces.apache.org/
endef
Can you do a make menuconfig and see if any error message about your package 'libxerces' is shown? My setup for custom packages is something like:
mkdir package/custom
ln -s /path/to/package/libxerces/ package/custom/
If your Makefile is correct, then Libraries->libxerces should appear in menuconfig; if not, an error message should be printed during make / make menuconfig. You will also be able to do make package/libxerces/compile etc. Note: your package name is libxerces, not libxerces-c.
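On a related note, the empty Package/libxerces/install section in your Makefile means nothing would actually be packaged even once it builds; a minimal install section usually looks something like this (the .libs path and the library filename pattern are assumptions about xerces-c's build layout):
define Package/libxerces/install
	$(INSTALL_DIR) $(1)/usr/lib
	$(CP) $(PKG_BUILD_DIR)/src/.libs/libxerces-c*.so* $(1)/usr/lib/
endef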

OpenVRML in snow-leopard (from macports)

Hey, I just downloaded openvrml from MacPorts
(port install openvrml)
Now I have a sample program (pretty_print.cpp from openvrml at SourceForge) that begins like this:
# ifdef HAVE_CONFIG_H
# include <config.h>
# endif
# include <openvrml/vrml97_grammar.h>
# include <openvrml/browser.h>
# include <fstream>
...
Then in Xcode, I added the following path and checked "recursive" for the Header Search Paths and Library Search Paths:
/opt/local/var/macports/software
And all '***.h file not found' errors disappeared, but now I have the following two:
complex.h 943 '__pow_helper' is not a member of std
c++locale.h 71 'vsnprintf' is not a member of std
/Developer/SDKs/MacOSX10.6.sdk/usr/include/c++/4.2.1/complex: In function 'std::complex<_Tp> std::pow(const std::complex<_Tp>&, int)':
/Developer/SDKs/MacOSX10.6.sdk/usr/include/c++/4.2.1/complex:943: error: '__pow_helper' is not a member of 'std'
both errors come from system files.
I wonder what is causing these errors...
Can anyone advise me on how to use openvrml samples on Macs?
thanks in advance.
I've had a similar problem. I had set the "recursive" flag for the '/opt/local/include' path. This pulled in some strange C++ headers from Boost compatibility includes.
In general, you do not want "recursive" flag on your include paths.
Try unchecking "recursive" from your paths.
If you put "recursive" on a path containing Boost headers, you'll use some random Boost headers, which are likely designed to be used in a different environment and/or with a different compiler, instead of the standard C++ headers; for example, you'll include a TR1 header instead of the standard one. This is likely to be the cause of your problem (it happened to me too).
Just locate the directory that contains the headers you need and put only that in the header search path, instead of being lazy and using the "recursive" flag, since there are a lot of header files that have the same name and differ only in location.
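For example, with a typical MacPorts layout (an assumption about your installation), the non-recursive settings would simply be:
Header Search Paths: /opt/local/include
Library Search Paths: /opt/local/lib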
