Autoconf loop but stop after first success? - configure

I've got this snippet to detect and require the highest usable C++ standard. It works exactly the way I want it to - it checks the highest standard first and stops searching after the first success. It's also extremely ugly and hard to maintain.
# Require highest supported C++ standard
AC_LANG(C++)
AX_CHECK_COMPILE_FLAG([-std=c++20], [CXXFLAGS="$CXXFLAGS -std=c++20"], [
AX_CHECK_COMPILE_FLAG([-std=c++2a], [CXXFLAGS="$CXXFLAGS -std=c++2a"], [
AX_CHECK_COMPILE_FLAG([-std=c++17], [CXXFLAGS="$CXXFLAGS -std=c++17"], [
AX_CHECK_COMPILE_FLAG([-std=c++1z], [CXXFLAGS="$CXXFLAGS -std=c++1z"], [
AX_CHECK_COMPILE_FLAG([-std=c++14], [CXXFLAGS="$CXXFLAGS -std=c++14"], [
AX_CHECK_COMPILE_FLAG([-std=c++1y], [CXXFLAGS="$CXXFLAGS -std=c++1y"], [
AC_MSG_ERROR([Could not enable at least C++1y (C++14) - upgrade your compiler])
])
])
])
])
])
])
Is there a way to turn it into a single list? What I ideally want is something like the following pseudocode, so that adding or removing a specific C++ variant is simply a matter of changing the list instead of restructuring the whole tree:
foreach([-std=c++20, -std=c++2a, -std=c++17, etc],
[if(AX_CHECK_COMPILE_FLAG($1, [CXXFLAGS+=$1]), break)])
From what I can find, m4 simply cannot loop. It can recurse, but breaking out of a recursion is the kind of m4 black magic that I apparently can't get to work.
And no, changing to another build system is not an option. This is trivial in CMake, but this project is autotools.

From what I can find, m4 simply cannot loop. It can recurse, but breaking out of a recursion is the kind of m4 black magic that I apparently can't get to work.
M4 absolutely can loop, but you don't need that for your purposes, and you probably don't want it. In fact, one usually doesn't want to directly leverage M4 features in one's Autoconf code.
Remember always that M4 runs when you build the build system, not when you configure the project. It produces configure, a shell script, and for configure-time control flow you need configure to leverage shell features. Autoconf provides macros around some such shell features, but you are not obligated to use them, and you are certainly not obligated to avoid shell features just because there isn't an Autoconf macro wrapping them. The main caveat is that you should be careful to write portable shell code. Indeed, it is in service to that objective that Autoconf provides any macros at all for shell flow-control.
With that in mind, here's a way to use a shell loop to implement your check more cleanly:
# language versions to test, in priority order:
for version in 20 2a 17 1z 14 1y; do
    version_flag="-std=c++${version}"
    AX_CHECK_COMPILE_FLAG([${version_flag}], [
        # flag accepted: keep it and stop searching
        break
    ], [
        version_flag=none
    ])
done
AS_IF([test "$version_flag" = none], [
    AC_MSG_ERROR([Could not enable at least C++1y (C++14) - upgrade your compiler])
])
CXXFLAGS="$CXXFLAGS ${version_flag}"
If you want to change the versions that are tested, or your priority for which ones to prefer, then you can just change the list in the for statement.
HOWEVER, I offer a frame challenge: what's the point? If -std=c++1y is sufficient for your program's purposes, then what is gained by conditionally selecting a more recent version?
Perhaps there indeed is a point -- for instance, maybe the source contains preprocessor conditionals to enable additional features when built to a more recent standard. In that particular case, however, a better Autoconf idiom would be to add one or more --enable flags for the conditional features, and to select the required C++ version flag based on which, if any, of those are chosen. There's no need then to test any other options, and it's less magical for the builder who wants influence over what features are in fact enabled.
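To illustrate, here is a minimal sketch of that idiom, assuming a hypothetical --enable-fancy-feature option whose implementation requires C++17 (the option name, the WITH_FANCY_FEATURE macro, and the version numbers are invented for the example):
AC_ARG_ENABLE([fancy-feature],
    [AS_HELP_STRING([--enable-fancy-feature],
        [build the optional feature that requires C++17])],
    [], [enable_fancy_feature=no])

AS_IF([test "x$enable_fancy_feature" = xyes], [
    # The feature needs C++17, so fail clearly if the compiler lacks it.
    AX_CHECK_COMPILE_FLAG([-std=c++17],
        [CXXFLAGS="$CXXFLAGS -std=c++17"],
        [AC_MSG_ERROR([--enable-fancy-feature requires C++17 support])])
    AC_DEFINE([WITH_FANCY_FEATURE], [1], [Define to build the fancy feature])
], [
    # Baseline build: C++14 is enough.
    AX_CHECK_COMPILE_FLAG([-std=c++14],
        [CXXFLAGS="$CXXFLAGS -std=c++14"],
        [AC_MSG_ERROR([Could not enable at least C++14 - upgrade your compiler])])
])
Only the versions that the enabled features actually require get tested, and the builder sees an explicit knob instead of configure silently picking a standard.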

Related

Linking against an external object file (.o) with autoconf

For work purposes I need to link against an object file generated by another program and found in its folder, but I have not found information about this kind of linkage. I think that if I hardcode the paths and put the name-of-obj.o in front of the package_LDADD variable it should work, but I don't want to do it that way.
If the object is not found I want the configure to fail and tell the user that the name-of-obj.o is missing.
I tried using AC_LIBOBJ([name-of-obj.o]), but that tries to find a name-of-obj.c in the root directory and compile it.
Any tip or solution around this issue?
Thank you!
I need to link against an object file generated by another program and
found in its folder
What you describe is a very unusual requirement, not among those that the Autotools are designed to handle cleanly or easily. In particular, Autoconf has no mechanisms specifically applicable to searching for bare object files, as opposed to libraries, and Automake has no particular automation around including such objects when it links. Nevertheless, these tools do have enough general purpose functionality to do what you want; it just won't be as tidy as you might like.
I think that if I hardcode the paths and put the
name-of-obj.o in front of the package_LDADD variable should work, but
the case is that I don't want to do it that way.
I take it that it is the "hardcode the paths" part that you want to avoid. Adding an item to an appropriate LDADD variable is not negotiable; it is the right way to get your object included in the link.
If the object is not found I want the configure to fail and tell the
user that the name-of-obj.o is missing.
Well, then, the key thing appears to be to get configure to perform a search for your object file. Autoconf does not have a built-in mechanism to perform such a search, but it's just a macro-based shell-script generator, so you can write such a search in shell script + Autoconf, maybe something like this:
AC_MSG_CHECKING([for name-of-obj.o])
OTHER_LOCATION=
for my_dir in \
    /some/location/other_program/src \
    /another/location/other_program.12345/src \
    $srcdir/../relative/location/other_program/src; do
    AS_IF([test -r "${my_dir}/name-of-obj.o"], [
        # optionally, perform any desired test to check that the object is usable
        # ... perhaps one using AC_LINK_IFELSE ...
        # if it passes, then
        OTHER_LOCATION=${my_dir}
        break
    ])
done
# Check whether the object was in fact discovered, and act appropriately
AS_IF([test "x${OTHER_LOCATION}" = x], [
    # Not found
    AC_MSG_RESULT([not found])
    AC_MSG_ERROR([Cannot configure without name-of-obj.o])
], [
    AC_MSG_RESULT([${OTHER_LOCATION}/name-of-obj.o])
    AC_SUBST([OTHER_LOCATION])
])
That's functional, but of course you could embellish, such as by providing for the package builder to specify a location to use via a command-line argument (AC_ARG_WITH(...)). And if you want to do this for multiple objects, then you would probably want to wrap up at least some of that into a custom macro.
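As a hedged sketch of that embellishment (the option name --with-other-objdir is invented for illustration), a builder-specified directory could simply short-circuit the search:
AC_ARG_WITH([other-objdir],
    [AS_HELP_STRING([--with-other-objdir=DIR],
        [directory containing name-of-obj.o])],
    [], [with_other_objdir=])

# If the builder named a directory, search only there (still verifying
# the object exists); otherwise fall back to the candidate list above.
AS_IF([test "x$with_other_objdir" != x], [
    AS_IF([test -r "${with_other_objdir}/name-of-obj.o"],
        [OTHER_LOCATION=$with_other_objdir],
        [AC_MSG_ERROR([name-of-obj.o not found in $with_other_objdir])])
])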
The Automake side is much less involved. To get the object linked, you just need to add it to the appropriate LDADD variable, using the output variable created by the above, such as:
foo_LDADD = $(OTHER_LOCATION)/name-of-obj.o
Note that if you're building just one program target, you can use the general LDADD instead of foo_LDADD, but be aware that by default these are alternatives, not complements, as the sketch below illustrates.
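For example, a short Makefile.am sketch (the program name foo is invented); the per-program variable replaces the global one rather than extending it, so reference $(LDADD) explicitly if you want both:
bin_PROGRAMS = foo
foo_SOURCES = main.cpp
# foo_LDADD overrides the global LDADD for foo; mention $(LDADD)
# explicitly if any global additions should still apply.
foo_LDADD = $(OTHER_LOCATION)/name-of-obj.o $(LDADD)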
With that said, this is a bad idea overall. If you want to link something that is not part of your project, then you should get it from an installed library. That can be a local, custom-built library, of course, so long as it is a library, not a bare object file, and it is installed. It can be a static library if you don't want to rely on or distribute a separate shared library.
On the other hand, if your project is part of a larger build, then the best approach is probably to integrate it into that build, maybe as a subproject. It would still be best to link a library instead of a bare object file, but in a subproject context it might make sense to use a lib that was not installed to the build system. In conjunction with a command-line argument that tells it where to find the wanted lib, this could make the needed Autoconf code much cleaner and clearer.

Preprocess conditional arch/make file to get non-conditional file

I have a conditional makefile (well, actually I am dealing with the arch file that will be called when invoking make) that is quite involved, and I would like to preprocess it to get rid of all the 'ifeq' and 'ifneq' parts that only worsen the readability, in order to see better what is actually being done. I tried doing
make -n -d
which gives me the whole calls to the compiler, but that is also a pain, since I then need to separate all the flags manually. I just want to get my nice makefile with separate FLAGS, DFLAGS, LIBS statements, etc.
(My apologies if this has been said anywhere, but I am unable to find it).
Thanks!

Trying to make SCons Ada Builder work with VariantDir

I'm struggling with the last pieces of logic to make our Ada builder work as expected with VariantDir. The problem is caused by the fact that the inflexible tools gnatbind and gnatlink don't allow the binder files to be placed in a directory other than the current one. This leaves me with two options:
Let gnatbind write the binder files to topdir and then let gnatlink pick them up from there. This may however cause race conditions when there are simultaneous builds for different architectures and compiler versions, which we want to allow.
Modify the calls to gnatbind and gnatlink to temporarily go down to the build directory, in our case build/$ARCH/src-path. I successfully fixed the gnatbind step, as this is explicitly called using an env.Execute from within the Ada builder. To try to fix the linking step I've modified the Program env using
env["LINKCOM"] = SCons.Action.Action(ada_linkcom)
where ada_linkcom is defined as
def ada_linkcom(source, target, env):
    ...
    return ret
where ret is a string describing what should be done in the shell. I need this to be a function because it contains somewhat complicated logic to convert paths from being relative to the top level to just containing their basenames.
This however fails with an error in scons-2.3.1/SCons/Executor.py on line 347 in function do_execute. Isn't env["LINKCOM"] allowed to be a function with ada_linkcom's signature?
No, it's not. You seem to think that 'env["LINKCOM"]' is what actually calls/executes the final build command, and that's not quite correct. Instead, environment variables like LINKCOM get expanded by the Executor/Builder for each specified Action, and are then executed.
You can have Python functions as Actions, and also use a so-called "generator" to create your Action strings on-the-fly. But you have to assign this Action to a Builder, and can't set it as an environment variable directly.
Please also have a look at the UserGuide ( http://www.scons.org/doc/production/HTML/scons-user.html ), especially section 18.4 "Builders That Execute Python Functions". Our basic guide for writing Builders and Tools might also prove to be helpful: http://www.scons.org/wiki/ToolsForFools
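For a concrete starting point, here is a minimal sketch of the generator approach in an SConstruct; the gnatlink command line is a placeholder, since the real one needs your path-conversion logic:
import SCons.Builder

def ada_linkcom_gen(source, target, env, for_signature):
    # Called by SCons to build the command string on the fly; this is
    # where the relative-path-to-basename logic can live.
    srcs = ' '.join(str(s) for s in source)
    return 'gnatlink -o %s %s' % (target[0], srcs)

# Attach the generator to a Builder rather than setting LINKCOM directly.
ada_program = SCons.Builder.Builder(generator=ada_linkcom_gen)
env = Environment()
env.Append(BUILDERS={'AdaProgram': ada_program})

# usage: env.AdaProgram('prog', ['main.ali'])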

How to deal with a script that outputs multiple files in a Makefile?

So I have a script, myscript.py, that produces a few output files: out/a.pickle, out/b.pickle, and out/c.pickle.
And I have a Makefile that has the rule:
out/a.pickle: data/data.csv
	myscript.py
Now, firstly: if I update the script, make out/a.pickle says there's nothing to be done, even though the script has been modified. Isn't make supposed to check whether things have been updated and rerun them? Do I need to add myscript.py as a dependency of out/a.pickle, or something?
Secondly, is there a way to handle the fact that the script has multiple output files? Do I need to create a rule for each?
Make does not examine time stamps on executables. Otherwise, you would have to recompile the universe if gcc or echo or the shell is upgraded, and it's a slippery slope anyway; what if libraries or the kernel also changed in a way which requires you to recompile? You need human intervention at some point anyhow. So the designers of make simply drew the line at explicit dependencies.
(GNU Make has a lot of other built-in implicit dependencies, which are convenient. I vaguely believe that the original make didn't have any built-in dependencies at all. Anybody able to confirm?)
You can declare all the outputs in one rule:
out/a.pickle out/b.pickle out/c.pickle: myscript.py data/data.csv
	./$^
(Notice how the script is included in the dependencies now. You might want to change that after the script is considered stable. Then you'll need to change the action as well.)
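One caveat, assuming GNU Make: a plain multi-target rule tells make that the recipe can rebuild each target independently, so a parallel build (-j) may run the script up to three times. GNU Make 4.3 and newer can state that a single invocation produces all three files by using a grouped-target rule instead:
out/a.pickle out/b.pickle out/c.pickle &: myscript.py data/data.csv
	./$^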

Suggestions for using attributes beyond [[noreturn]]?

Coming from the discussions about the use of vendor-specific attributes in another question, I asked myself, "what rules should we tell people for using attributes that are not listed in the standard?"
The two attributes that are defined are [[noreturn]] and [[carries_dependency]]. The standard leaves open how compilers should react to unknown attributes -- thus, by the standard, they may stop with an error message. This is not what GCC does, for example: it emits a warning and continues. This is probably the behavior to expect from the most common compilers. For this reason I would have liked to read a "should" in the standard, but we don't have it.
The paper N2553 brings up flexible attributes. It lists further attributes used by GCC (unused, weak) and MSVC (dllimport). For OpenMP, the widely supported parallelizing framework, scoped attributes are suggested, e.g. omp::for(clause, clause), omp::parallel(clause, clause). So it is very likely that we will see some vendor-specific attributes very soon after compilers support the syntax at all.
Therefore, when we now go "out in the world" and tell people about C++11, what should the advice be about using attributes?
Only use noreturn and carries_dependency
Use your compiler's old syntax instead, e.g. __attribute__((noreturn)), and define a macro when you port the code (the current situation)
Use those attributes your favorite compiler supports freely, knowing this code might not be portable to another standard-conforming compiler, because if the standard allows a compiler to stop with an error, you have to consider that this will happen. This sounds a bit like advocating writing non-portable code.
Or, my guess, expect the most-used compilers to warn about unknown attributes, so you can use vendor-specific attributes, keeping in mind that in rare cases you may get problems.
Note the slight difference in the last two bullet items. While both say "use those attributes you need", item 3's message is "do not care about other compilers", while item 4 implicitly rephrases the standard's "implementation-defined behavior" as "the compiler should emit a diagnostic message".
What could be the suggestion for an upcoming Best Practice here?
The best practice — the only one that is reasonably portable in practical terms, never mind ambiguity in the Standard — is to use macros. It will be many years before we can forget about compilers that don't support attributes.
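For instance, a minimal sketch of such a macro (the MY_NORETURN name is arbitrary; adjust the compiler tests to the ones you care about):
// Portable "noreturn" annotation: prefer the standard attribute where
// the compiler advertises it, fall back to vendor syntax, else expand
// to nothing.
#if defined(__has_cpp_attribute)
#  if __has_cpp_attribute(noreturn)
#    define MY_NORETURN [[noreturn]]
#  endif
#endif
#ifndef MY_NORETURN
#  if defined(__GNUC__)
#    define MY_NORETURN __attribute__((noreturn))
#  elif defined(_MSC_VER)
#    define MY_NORETURN __declspec(noreturn)
#  else
#    define MY_NORETURN
#  endif
#endif

MY_NORETURN void fatal_error(const char *msg);  // example use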
The number of compilers and the number of custom __keywords__ defined by those compilers will always be increasing, and it makes sense for the language to define a way to contain the damage. It doesn't need to revolutionize the way people write unportable code, or make unportable code portable (although standard attributes do that). There is a benefit simply to giving caffeine-addled compiler backend engineers a sandbox for when they want to extend the grammar.
It is a bit alarming, though, that no attribute tokens are reserved to the implementation, or to the language besides the ones currently standard. So there will be trouble when they decide to standardize more of them.
