Make Makefile.am less repetitive when building a lot of similar executables (tests) - automake

I recently added a bunch of tests to a C project, with one executable per test. Adding them to make check was really easy, but it's starting to make the Makefile.am huge:
TESTS = tests/check_bitreader tests/check_cets_ecm tests/check_descriptors tests/check_mpd tests/check_psi
noinst_PROGRAMS = $(TESTS)
tests_check_bitreader_SOURCES = tests/bitreader.c tests/main.c
tests_check_bitreader_CFLAGS = $(CFLAGS) $(CHECK_CFLAGS)
tests_check_bitreader_LDADD = tslib/libts.a $(LDFLAGS) $(CHECK_LIBS)
tests_check_cets_ecm_SOURCES = tests/cets_ecm.c tests/main.c
tests_check_cets_ecm_CFLAGS = $(CFLAGS) $(CHECK_CFLAGS)
tests_check_cets_ecm_LDADD = tslib/libts.a $(LDFLAGS) $(CHECK_LIBS)
tests_check_descriptors_SOURCES = tests/descriptors.c tests/main.c
tests_check_descriptors_CFLAGS = $(CFLAGS) $(CHECK_CFLAGS)
tests_check_descriptors_LDADD = tslib/libts.a $(LDFLAGS) $(CHECK_LIBS)
tests_check_mpd_SOURCES = tests/mpd.c tests/main.c
tests_check_mpd_CFLAGS = $(CFLAGS) $(CHECK_CFLAGS)
tests_check_mpd_LDADD = tslib/libts.a $(LDFLAGS) $(CHECK_LIBS)
tests_check_psi_SOURCES = tests/psi.c tests/main.c
tests_check_psi_CFLAGS = $(CFLAGS) $(CHECK_CFLAGS)
tests_check_psi_LDADD = tslib/libts.a $(LDFLAGS) $(CHECK_LIBS)
I was able to avoid repeating myself with TESTS and noinst_PROGRAMS, but is there a way to avoid all of these duplicate tests_check_*_SOURCES, CFLAGS, and LDADD lines? In a plain Makefile I could write a pattern rule like test_%: %.c (or something similar), but I'm not aware of an equivalent in Makefile.am.
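For reference, a sketch of the kind of reduction I'm hoping for, relying on Automake's global AM_CFLAGS and LDADD, which apply to any program that does not define its own per-target variables (the user variables CFLAGS and LDFLAGS are passed automatically, so they should not need to be repeated):
TESTS = tests/check_bitreader tests/check_cets_ecm tests/check_descriptors \
        tests/check_mpd tests/check_psi
noinst_PROGRAMS = $(TESTS)

# Defaults used by every program that has no per-target *_CFLAGS / *_LDADD
AM_CFLAGS = $(CHECK_CFLAGS)
LDADD = tslib/libts.a $(CHECK_LIBS)

tests_check_bitreader_SOURCES   = tests/bitreader.c tests/main.c
tests_check_cets_ecm_SOURCES    = tests/cets_ecm.c tests/main.c
tests_check_descriptors_SOURCES = tests/descriptors.c tests/main.c
tests_check_mpd_SOURCES         = tests/mpd.c tests/main.c
tests_check_psi_SOURCES         = tests/psi.c tests/main.c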

Related

GCC Linking Error when Building Fast RCNN

I am trying to build the source code at https://github.com/craftGBD/craftGBD in order to reproduce the authors' published results and check whether they are reproducible for my term project. I realized that I have to install Fast RCNN by running the Makefile inside the craftGBD/evaluation/lib folder. However, I got the following output when I ran it with make:
/cta/users/byaman/craftEnv/bin/python setup.py build_ext --inplace
python setup.py build_ext --inplace
running build_ext
cythoning utils/bbox.pyx to utils/bbox.c
/cta/users/byaman/craftEnv/lib/python2.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /cta/users/byaman/craftGBD/evaluation/lib/utils/bbox.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
cythoning nms/cpu_nms.pyx to nms/cpu_nms.c
/cta/users/byaman/craftEnv/lib/python2.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /cta/users/byaman/craftGBD/evaluation/lib/nms/cpu_nms.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
cythoning nms/gpu_nms.pyx to nms/gpu_nms.cpp
/cta/users/byaman/craftEnv/lib/python2.7/site-packages/Cython/Compiler/Main.py:369: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /cta/users/byaman/craftGBD/evaluation/lib/nms/gpu_nms.pyx
tree = Parsing.p_module(s, pxd, full_module_name)
skipping 'pycocotools/_mask.c' Cython extension (up-to-date)
building 'utils.cython_bbox' extension
['-Wno-cpp', '-Wno-unused-function'] .c ['-I/cta/users/byaman/craftEnv/lib/python2.7/site-packages/numpy/core/include', '-I/cta/users/byaman/craftEnv/include/python2.7', '-c'] ['-Wno-cpp', '-Wno-unused-function'] ['-I/cta/users/byaman/craftEnv/lib/python2.7/site-packages/numpy/core/include', '-I/cta/users/byaman/craftEnv/include/python2.7']
/cta/users/byaman/craftEnv/bin/x86_64-conda-linux-gnu-cc -fno-strict-aliasing -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O3 -pipe -DNDEBUG -fwrapv -O3 -Wall -Wstrict-prototypes -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /cta/users/byaman/craftEnv/include -I/cta/apps/opt/spack/linux-ubuntu18.04-cascadelake/gcc-10.2.0/cuda-10.0.130-zjercki4memwdfwjztmfkq2yio2jcev4/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /cta/users/byaman/craftEnv/include -I/cta/apps/opt/spack/linux-ubuntu18.04-cascadelake/gcc-10.2.0/cuda-10.0.130-zjercki4memwdfwjztmfkq2yio2jcev4/include -fPIC -I/cta/users/byaman/craftEnv/lib/python2.7/site-packages/numpy/core/include -I/cta/users/byaman/craftEnv/include/python2.7 -c utils/bbox.c -o build/temp.linux-x86_64-2.7/utils/bbox.o -Wno-cpp -Wno-unused-function
x86_64-conda_cos6-linux-gnu-gcc -pthread -shared -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,-rpath,/cta/users/byaman/craftEnv/lib -L/cta/users/byaman/craftEnv/lib -Wl,--no-as-needed -Wl,-O2 -Wl,--sort-common -Wl,--as-needed -Wl,-z,relro -Wl,-z,now -Wl,--disable-new-dtags -Wl,--gc-sections -Wl,-rpath,/cta/users/byaman/craftEnv/lib -Wl,-rpath-link,/cta/users/byaman/craftEnv/lib -L/cta/users/byaman/craftEnv/lib -march=nocona -mtune=haswell -ftree-vectorize -fPIC -fstack-protector-strong -fno-plt -O2 -ffunction-sections -pipe -isystem /cta/users/byaman/craftEnv/include -I/cta/apps/opt/spack/linux-ubuntu18.04-cascadelake/gcc-10.2.0/cuda-10.0.130-zjercki4memwdfwjztmfkq2yio2jcev4/include -DNDEBUG -D_FORTIFY_SOURCE=2 -O2 -isystem /cta/users/byaman/craftEnv/include -I/cta/apps/opt/spack/linux-ubuntu18.04-cascadelake/gcc-10.2.0/cuda-10.0.130-zjercki4memwdfwjztmfkq2yio2jcev4/include build/temp.linux-x86_64-2.7/utils/bbox.o -L/cta/users/byaman/craftEnv/lib -lpython2.7 -o /cta/users/byaman/craftGBD/evaluation/lib/utils/cython_bbox.so
/cta/users/byaman/craftEnv/bin/../lib/gcc/x86_64-conda-linux-gnu/9.3.0/../../../../x86_64-conda-linux-gnu/bin/ld: /cta/users/byaman/craftEnv/lib/libc.a(__stack_chk_fail.o): relocation R_X86_64_32 against symbol `__stack_chk_guard' can not be used when making a shared object; recompile with -fPIC
collect2: error: ld returned 1 exit status
error: command 'x86_64-conda_cos6-linux-gnu-gcc' failed with exit status 1
Makefile:2: recipe for target 'all' failed
make: *** [all] Error 1
Note that my username is byaman and I run the code inside a Conda environment named craftEnv.
The setup.py script that the Makefile runs is:
# --------------------------------------------------------
# Fast R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick
# --------------------------------------------------------
import os
from os.path import join as pjoin
from setuptools import setup
from distutils.extension import Extension
from Cython.Distutils import build_ext
import subprocess
import numpy as np
def find_in_path(name, path):
    "Find a file in a search path"
    # Adapted fom
    # http://code.activestate.com/recipes/52224-find-a-file-given-a-search-path/
    for dir in path.split(os.pathsep):
        binpath = pjoin(dir, name)
        if os.path.exists(binpath):
            return os.path.abspath(binpath)
    return None

def locate_cuda():
    """Locate the CUDA environment on the system
    Returns a dict with keys 'home', 'nvcc', 'include', and 'lib64'
    and values giving the absolute path to each directory.
    Starts by looking for the CUDAHOME env variable. If not found, everything
    is based on finding 'nvcc' in the PATH.
    """
    # first check if the CUDAHOME env variable is in use
    if 'CUDAHOME' in os.environ:
        home = os.environ['CUDAHOME']
        nvcc = pjoin(home, 'bin', 'nvcc')
    else:
        # otherwise, search the PATH for NVCC
        default_path = pjoin(os.sep, 'usr', 'local', 'cuda', 'bin')
        nvcc = find_in_path('nvcc', os.environ['PATH'] + os.pathsep + default_path)
        if nvcc is None:
            raise EnvironmentError('The nvcc binary could not be '
                'located in your $PATH. Either add it to your path, or set $CUDAHOME')
        home = os.path.dirname(os.path.dirname(nvcc))

    cudaconfig = {'home':home, 'nvcc':nvcc,
                  'include': pjoin(home, 'include'),
                  'lib64': pjoin(home, 'lib64')}
    for k, v in cudaconfig.iteritems():
        if not os.path.exists(v):
            raise EnvironmentError('The CUDA %s path could not be located in %s' % (k, v))

    return cudaconfig

CUDA = locate_cuda()

# Obtain the numpy include directory. This logic works across numpy versions.
try:
    numpy_include = np.get_include()
except AttributeError:
    numpy_include = np.get_numpy_include()

def customize_compiler_for_nvcc(self):
    """inject deep into distutils to customize how the dispatch
    to gcc/nvcc works.
    If you subclass UnixCCompiler, it's not trivial to get your subclass
    injected in, and still have the right customizations (i.e.
    distutils.sysconfig.customize_compiler) run on it. So instead of going
    the OO route, I have this. Note, it's kindof like a wierd functional
    subclassing going on."""

    # tell the compiler it can processes .cu
    self.src_extensions.append('.cu')

    # save references to the default compiler_so and _comple methods
    default_compiler_so = self.compiler_so
    super = self._compile

    # now redefine the _compile method. This gets executed for each
    # object but distutils doesn't have the ability to change compilers
    # based on source extension: we add it.
    def _compile(obj, src, ext, cc_args, extra_postargs, pp_opts):
        if os.path.splitext(src)[1] == '.cu':
            # use the cuda for .cu files
            self.set_executable('compiler_so', CUDA['nvcc'])
            # use only a subset of the extra_postargs, which are 1-1 translated
            # from the extra_compile_args in the Extension class
            postargs = extra_postargs['nvcc']
        else:
            postargs = extra_postargs['gcc']

        super(obj, src, ext, cc_args, postargs, pp_opts)
        # reset the default compiler_so, which we might have changed for cuda
        self.compiler_so = default_compiler_so

    # inject our redefined _compile method into the class
    self._compile = _compile

# run the customize_compiler
class custom_build_ext(build_ext):
    def build_extensions(self):
        customize_compiler_for_nvcc(self.compiler)
        build_ext.build_extensions(self)

ext_modules = [
    Extension(
        "utils.cython_bbox",
        ["utils/bbox.pyx"],
        extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]},
        include_dirs = [numpy_include]
    ),
    Extension(
        "nms.cpu_nms",
        ["nms/cpu_nms.pyx"],
        extra_compile_args={'gcc': ["-Wno-cpp", "-Wno-unused-function"]},
        include_dirs = [numpy_include]
    ),
    Extension('nms.gpu_nms',
        ['nms/nms_kernel.cu', 'nms/gpu_nms.pyx'],
        library_dirs=[CUDA['lib64']],
        libraries=['cudart'],
        language='c++',
        runtime_library_dirs=[CUDA['lib64']],
        # this syntax is specific to this build system
        # we're only going to use certain compiler args with nvcc and not with
        # gcc the implementation of this trick is in customize_compiler() below
        extra_compile_args={'gcc': ["-Wno-unused-function"],
                            'nvcc': ['-arch=sm_35',
                                     '--ptxas-options=-v',
                                     '-c',
                                     '--compiler-options',
                                     "'-fPIC'"]},
        include_dirs = [numpy_include, CUDA['include']]
    ),
    Extension(
        'pycocotools._mask',
        sources=['pycocotools/maskApi.c', 'pycocotools/_mask.pyx'],
        include_dirs = [numpy_include, 'pycocotools'],
        extra_compile_args={
            'gcc': ['-Wno-cpp', '-Wno-unused-function', '-std=c99']},
    ),
]

setup(
    name='fast_rcnn',
    ext_modules=ext_modules,
    # inject our custom trigger
    cmdclass={'build_ext': custom_build_ext},
)
I don't know how to solve this problem, even though I investigated the following questions/answers:
"relocation R_X86_64_32S against " linking Error
How to recompile with -fPIC
Cython wrapping a class that uses another library
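For reference, a sketch of one thing that might be worth trying, purely on the assumption that the static libc.a inside the conda environment is being pulled in for __stack_chk_fail because of the -fstack-protector-strong flags (these commands and flags are not from the project's instructions):
cd /cta/users/byaman/craftGBD/evaluation/lib
rm -rf build                      # force the objects to be rebuilt with the new flags
# distutils appends $CFLAGS after its own flags, and GCC honours the last
# stack-protector option, so this should drop the __stack_chk_* references.
CFLAGS="-fno-stack-protector" python setup.py build_ext --inplace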

tinydtls configuration in Contiki

I am currently trying to configure tinydtls as described in the README in order to later include it into an application, or at least, make the examples run.
The first steps, including the resulting warnings:
/home/name/contiki/apps/tinydtls$ autoreconf
aclocal: warning: autoconf input should be named 'configure.ac', not 'configure.in'
/home/name/contiki/apps/tinydtls$ ./configure --with-contiki
/home/name/contiki/apps/tinydtls$ make
with both TARGET=native and TARGET=zoul
The compilation always ends with (many) undefined reference errors:
obj_zoul/dtls.o: In function `dtls_add_ecdsa_signature_elem':
dtls.c:(.text.dtls_add_ecdsa_signature_elem+0x10): undefined reference to `dtls_ec_key_from_uint32_asn1'
dtls_ec_key_from_uint32_asn1 is located in contiki/apps/tinydtls/crypto.c.
Adding #include "crypto.h" in dtls.c doesn't fix the problem, but #include "crypto.c" does (at least for this first error). Accordingly, I assume the problem has something to do with linking.
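For reference, a couple of checks that might narrow this down (a sketch only; it assumes the zoul objects land in obj_zoul/ as the error message suggests, and that the archive is the libtinydtls.a named in the Makefile below):
# Is the symbol compiled at all, or compiled out by an #ifdef?
nm obj_zoul/crypto.o | grep dtls_ec_key_from_uint32_asn1    # 'T' means defined
# Does crypto.o actually end up in the archive that gets linked?
ar t libtinydtls.a | grep crypto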
How should the Makefile, of which I have pasted a (hopefully relevant) part below, be adjusted?
SHELL = /bin/sh
MKDIR = mkdir
ETAGS = /bin/false
prefix = /usr/local
exec_prefix = ${prefix}
abs_builddir = /home/name/contiki/apps/tinydtls
top_builddir = .
libdir = ${exec_prefix}/lib
includedir = ${prefix}/include/tinydtls
package = tinydtls-0.8.2
install := cp
# files and flags
SOURCES:= dtls.c crypto.c ccm.c hmac.c netq.c peer.c dtls_time.c session.c
ifneq ("", "1")
SOURCES += debug.c
endif
SUB_OBJECTS:=aes/rijndael.o ecc/ecc.o sha2/sha2.o
OBJECTS:= $(patsubst %.c, %.o, $(SOURCES)) $(SUB_OBJECTS)
HEADERS:=dtls.h hmac.h debug.h dtls_config.h uthash.h numeric.h crypto.h global.h ccm.h \
netq.h t_list.h alert.h utlist.h prng.h peer.h state.h dtls_time.h session.h \
tinydtls.h
CFLAGS:=-Wall -pedantic -std=c99
CPPFLAGS:= -DDTLSv12 -DWITH_SHA256 -DDTLS_CHECK_CONTENTTYPE
SUBDIRS:=tests doc platform-specific sha2 aes ecc
DISTSUBDIRS:=$(SUBDIRS) examples/contiki
DISTDIR=$(top_builddir)/$(package)
FILES:=Makefile.in configure configure.in dtls_config.h.in tinydtls.h.in \
Makefile.tinydtls $(SOURCES) $(HEADERS)
LIB:=libtinydtls.a
LDFLAGS:=
ARFLAGS:=cru
doc:=doc
Edit: Changed the directory for this post to /home/name/...
Edit2: Added warnings after 'autoreconf'.

Makefile tree for debug/test/release builds with multiple targets

Structure:
makefile
system/
-> makefile
-> kernel/
-> -> makefile
-> -> src/
-> FutureModules
-> -> makefile
-> -> src/
userland/
-> makefile
-> FutureModules
-> -> makefile
-> -> src/
Currently I'm building it with make system.
I'd like to split it up into Debug/Test/Release builds so that I can do something like make debug system or make -d system with multiple targets (e.g. make debug system userland or something like that).
I'd also like to change the targets so I can build a target directly, instead of building system and having to manually add the desired targets in system/makefile.
Now in order to achieve this:
Do variables get shared between makefiles? So when I do make system and I define CFLAGS += -g -Og in the root makefile, does system/makefile pick up the values from the root makefile?
Do variables keep their values within one make session? So when I do make debug system userland and have something like debug: CFLAGS += -g -Og, do system and userland get the -g -Og flags?
EDIT: I managed to achieve 2. by using $(shell find -maxdepth 1 -type d) and some other commands.
Solved it with:
#Build mode. m=d => Debug | m=t => Test | m=r => release | default => release
ifeq ($(m), d)
NASBUILD = $(NASDEBUG)
GASBUILD = $(GASDEBUG)
CPPBUILD = $(CPPDEBUG)
CBUILD = $(CDEBUG)
else ifeq ($(m), t)
NASBUILD = $(NASTEST)
GASBUILD = $(GASTEST)
CPPBUILD = $(CPPTEST)
CBUILD = $(CTEST)
else ifeq ($(m), r)
NASBUILD = $(NASRELEASE)
GASBUILD = $(GASRELEASE)
CPPBUILD = $(CPPRELEASE)
CBUILD = $(CRELEASE)
else
NASBUILD = $(NASRELEASE)
GASBUILD = $(GASRELEASE)
CPPBUILD = $(CPPRELEASE)
CBUILD = $(CRELEASE)
endif
export NASBUILD
export GASBUILD
export CPPBUILD
export CBUILD
where xxxBUILD and so on get added to the corresponding xxxFLAGS.
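For reference, a minimal sketch of how the two questions above play out in GNU make, assuming the sub-makes are invoked recursively with $(MAKE) -C: variables only reach a sub-make when they are exported (or passed on its command line), and a target-specific variable is inherited by that target's prerequisites within the same invocation.
# Root makefile sketch (GNU make assumed)
debug: export CFLAGS += -g -Og     # exported, so the sub-makes see it too
debug: system userland             # prerequisites inherit the target-specific value

system userland:
	$(MAKE) -C $@

.PHONY: debug system userland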

Makefile: embedded statements

I have a file program.cc in the bin/ folder.
The following Makefile statements
BINS = $(wildcard bin/*.cc)
EXECS = $(notdir $(BINS))
EXECSR = $(EXECS:.cc=)
mean that EXECSR is program.
I tried to avoid the intermediate variable EXECS from the statements above:
BINS = $(wildcard bin/*.cc)
EXECSR = $($(notdir $(BINS)):.cc=)
but this approach fails: EXECSR is empty. How should I modify the Makefile to avoid the intermediate variable EXECS?
The outer $(...) in $($(notdir $(BINS)):.cc=) treats the result of notdir (program.cc) as the name of a variable to expand, and that variable is undefined, which is why EXECSR comes out empty. Applying the substitution directly to BINS works:
EXECSR = $(notdir $(BINS:.cc=))
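Equivalently, a small sketch (assuming a single file bin/program.cc) spelling the same thing out with patsubst:
BINS   := $(wildcard bin/*.cc)
EXECSR := $(notdir $(BINS:.cc=))                  # program
# or, with the substitution written as patsubst:
EXECSR := $(patsubst %.cc,%,$(notdir $(BINS)))    # program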

GNU Make: How to perform second expansion with suffix-changing substitution

What I'm going for (what's failing)
I have a list of dependencies for each file:
point_deps =
bounds_deps = point
triangle_deps = point bounds
Image_deps = types bounds triangle
main_deps = Image triangle bounds point types
I'd like to write a rule to include the relevant dependencies. Here's my best attempt:
out/%.o: src/%.cpp src/%.h $$($$*_deps:%=src/%.h)
	g++ -o $@ -c $<
I expect $* to evaluate to, for instance, "main". Then the suffix-changing substitution should change each entry in the dependency list to begin with "src/" and end with ".h".
When I try to run the code above, I get an error (on the out/%.o line):
makefile:26: *** multiple target patterns. Stop.
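For reference, a sketch of one workaround, assuming the parse error comes from the literal ':' inside the escaped substitution reference being read as a rule separator: patsubst performs the same suffix-changing substitution without a colon.
.SECONDEXPANSION:
out/%.o: src/%.cpp src/%.h $$(patsubst %,src/%.h,$$($$*_deps))
	g++ -o $@ -c $<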
What's working (non-optimal)
For now I have to create a separate variable for each file's header dependencies:
point_deps_h = $(point_deps:%=src/%.h)
bounds_deps_h = $(bounds_deps:%=src/%.h)
triangle_deps_h = $(triangle_deps:%=src/%.h)
Image_deps_h = $(Image_deps:%=src/%.h)
main_deps_h = $(main_deps:%=src/%.h)
Then I can use secondary-expansion to include the correct header files:
out/%.o: src/%.cpp src/%.h $$($$*_deps_h)
	g++ -o $@ -c $<
