When using clearmake, MAKEFILE_LIST is an empty list - makefile

When I use clearmake -C gnu on makefiles that use the MAKEFILE_LIST variable, MAKEFILE_LIST is empty. But when I use regular GNU make on the same makefiles, MAKEFILE_LIST is a list of file paths and names (as it should be).
To see what MAKEFILE_LIST is equal to, I'm using $(info $$MAKEFILE_LIST is [${MAKEFILE_LIST}]). There are no spaces in any of the file names, so I know that's not causing any problems. The ClearCase manual lists some features of GNU make that clearmake does not support, but MAKEFILE_LIST is not among them.
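For reference, here is a minimal makefile showing what I'm doing (the file name is made up; I run the same file with both GNU make and clearmake -C gnu):

# test.mk (hypothetical name)
$(info $$MAKEFILE_LIST is [${MAKEFILE_LIST}])
all: ;
# GNU make prints the makefile path(s) inside the brackets;
# clearmake -C gnu prints nothing between them.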
Has anyone else experienced a similar problem with clearmake and MAKEFILE_LIST? If so, were you able to fix it and how?

I had to look up the MAKEFILE_LIST macro to see what it did. And yes, this isn't something that clearmake currently supports. It's also not something explicitly listed as unsupported.
I used the sample make snippet at https://www.gnu.org/software/make/manual/html_node/Special-Variables.html to confirm that it's missing.
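The check was roughly along these lines (adapted from memory rather than copied verbatim from the manual, and using $(info) instead of a recipe; inc.mk is any small makefile that exists next to it):

name1 := $(lastword $(MAKEFILE_LIST))
include inc.mk
name2 := $(lastword $(MAKEFILE_LIST))
$(info name1 = $(name1))
$(info name2 = $(name2))
all: ;
# Under GNU make, name1 is the top-level makefile and name2 ends with inc.mk;
# under clearmake -C gnu, MAKEFILE_LIST came out empty, confirming the lack noted above.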
I can't categorize this as a defect, so the best bet would be to avoid the middleman and enter the RFE in the Developerworks RFE community at https://www.ibm.com/developerworks/rfe/
If at all possible, please provide a "business justification" in Manager-Speak (dollars lost / man-hours needed to work around not having the capability). As in every other non-startup development shop, developer time is at a premium, so it's important to spell out the impact.
I can't say that you'll get the answer you want, but you should get a response.

From what you've written, it looks pretty clear that it's not supported; just because it's not listed in the manual doesn't mean that they support it. That manual might simply not have been updated. However, MAKEFILE_LIST was added in GNU make 3.80, which was released in 2002. Pretty lame if they indeed don't support it.
For the amount of $$ you're paying for ClearCase, I would recommend you contact their support and ask them directly rather than trying to get answers from StackOverflow :).
In any event if you want answers from people who use ClearCase I suggest you add that, or at the very least clearmake, to your list of tags.

As illustrated in "Rational ClearCase: clearmake -C gnu and GNUmake diferences.", clearmake is not gmake.
And this thread makes it clear that "base ClearCase" isn't exactly actively developed (though it is still very much maintained with ClearCase 9.x, as shown in this recent clearmake output format).
MAKEFILE_LIST is not mentioned in "env_ccase" (which includes standard UNIX and Linux environment variables that are particularly important for ClearCase and MultiSite), and I have never had occasion to use it.

Related

How do you make ctags more efficient for generating tags for C++ code

I have always used ctags -nR to generate ctags and believed that this should be sufficient. But I now find that it's struggling with C++ code where there are too many matches. Does it need more options to be passed on the command line to make it efficient?
I find Eclipse to be a bit better, although I believe it needs the location of headers, etc., to be specified for existing projects with a Makefile.
ctags -nR
Is this any different from Exuberant Ctags, or is it just an improvement?
Please do not reject or close this question as a non-programming question: I am sure programmers use such tools extensively, and people on S.O. would be the right audience to answer it, since as Linux programmers they will have used them too.
I'd struggled with this as well.
The problem here is certainly the C++ part.
ctags does not support it that well, and many times it finds string matches instead of global definitions. What really worked for me was to replace standard/exuberant ctags with universal-ctags. C/C++ support has been polished a lot. It also handles additional languages by default. I've been using it mainly for Go and C, and it works like a charm.
Here is the link:
https://github.com/universal-ctags/ctags
If you already have a ctags build environment set up, you probably only need to run the standard configure / make / make install and you're good to go!
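A hedged sketch of building universal-ctags from source and re-running it with extra C++ detail; the exact steps and flags are from my recollection of its README and may differ for your setup:

git clone https://github.com/universal-ctags/ctags.git
cd ctags
./autogen.sh                      # generates ./configure (requires autotools)
./configure --prefix=/usr/local
make
sudo make install
# Regenerate tags; the extra flags add prototypes, inheritance info and qualified tags:
ctags -nR --kinds-c++=+p --fields=+iaS --extras=+q .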

Why does FetchContent prefer subdirectory-subsumption vs installation of dependencies?

Consider two software projects, proj_a and proj_b, with the latter depending on the former; and with both using CMake.
When reading about modern CMake, one gets the message that the "appropriate" way to express dependencies is via target dependencies; and one should arrange it so that dependent projects are represented as (imported) targets you can depend on. More specifically, in our example, proj_b will idiomatically have:
find_package(proj_a)
# etc etc.
target_link_libraries(bar proj_a::foo)
and proj_a will need to have been installed, utilizing the CMake installation-and-export-related commands, someplace where proj_b's CMake invocation will search for proj_a-config.cmake.
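For concreteness, a minimal hedged sketch of the export/install side in proj_a's CMakeLists.txt; the target and destination names are illustrative, not prescribed:

install(TARGETS foo EXPORT proj_a-targets
        ARCHIVE DESTINATION lib
        LIBRARY DESTINATION lib
        RUNTIME DESTINATION bin)
install(EXPORT proj_a-targets
        NAMESPACE proj_a::
        DESTINATION lib/cmake/proj_a)
# plus an installed proj_a-config.cmake that include()s proj_a-targets.cmake,
# so that find_package(proj_a) can resolve the imported target proj_a::foo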
I like this approach and encourage others to adopt it. It offers flexibility in the choice of your own version of proj_a vs. the system version; and it also allows for non-CMake proj_a's via a Findproj_a.cmake script (which, again, can be system-level or part of proj_b).
So far so good, right? However, there are people who want to "take matters into their own hands" in terms of dependencies - and CMake officially condones this, with modules such as ExternalProject and, more recently, FetchContent: these allow proj_b's configuration stage to actually download a (built, or in our case source-form) version of proj_a.
The puzzling part to me is that, after proj_a is downloaded, say to an external/proj_a directory, CMake's default behavior will be to
add_subdirectory(external/proj_a)
that is, to use proj_a as a subproject of proj_b and build them together. This, while the idiomatic use above allows me, as the maintainer of proj_a, to "do my own thing" in my CMakeLists.txt, and only keep things neat and tidy for others via what I export/install.
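For reference, a hedged sketch of the FetchContent flow being described (CMake 3.14+); the repository URL and tag are made up:

include(FetchContent)
FetchContent_Declare(proj_a
  GIT_REPOSITORY https://example.com/proj_a.git   # hypothetical URL
  GIT_TAG        v1.0                             # hypothetical tag
)
FetchContent_MakeAvailable(proj_a)   # downloads proj_a and then add_subdirectory()'s it
# assumes proj_a also defines the proj_a::foo alias when built in-tree
target_link_libraries(bar PRIVATE proj_a::foo)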
My questions:
Why does it make sense to add_subdirectory(), rather than to build, install, and perform the equivalent of find_package() to meet the dependency? Or rather, why should the former, rather than the latter, be the default?
Should I really have to write my project-level CMakeLists.txt to be compatible with being add_subdirectory()'ed?
Note: Just to give some concrete examples of how this use constrains proj_a:
Must use unique option names which can't possibly clash with super-project names. So no more WITH_TESTS, BUILD_STATIC_LIB - it has to be: WITH_PROJ_A_TESTS and BUILD_PROJ_A_STATIC_LIB.
You have to account for the parent project having searched for other dependencies already, and perhaps differently than how you would like to search for them.
Following the discussion in comments, I decided to post a bug report about this:
#22904: Support FetchContent_MakeAvailable performing build+install+find_package rather than add_subdirectory
So maybe this will change and the question becomes moot.
Why does it make sense to add_subdirectory(), rather than to build, install, and perform the equivalent of find_package() to meet the dependency? Or rather, why should the former, rather than the latter, be the default?
FetchContent doesn't just have to be for project() dependencies. It can be used for fetching utility scripts too. I'm guessing it was designed with that kind of consideration in mind. If your utility script is just one file, you can just file(DOWNLOAD) it and include() it directly, but the utilities could be multiple files, as is the case with aminaya/project_options. FetchContent uses a lot of the same machinery as ExternalProject, so it can do a lot of the useful things that ExternalProject does. For example, you can use FetchContent to fetch aminaya/project_options as a remote git repo, or as one of its archive artifacts, e.g. v0.20.0.zip.
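A hedged sketch of what that might look like using the archive form; the repository name and tag are taken as written in this answer and may be out of date:

include(FetchContent)
FetchContent_Declare(project_options
  URL https://github.com/aminaya/project_options/archive/refs/tags/v0.20.0.zip)
FetchContent_MakeAvailable(project_options)
# the fetched content is then available under ${project_options_SOURCE_DIR}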
Should I really have to write my project-level CMakeLists.txt to be compatible with being add_subdirectory()'ed?
It's your choice! The reasoning here can be highly objective, or subjective. It's up to you. Some people just like to put in a lot of effort to support whatever their users might want. Some people have a lot of historical configuration baggage and are still catching up to newer CMake. And as you mentioned at the end of your question post, there are certain adjustments that need to be made to cleanly allow people to add_subdirectory() you as a dependency. One example of a project which chose "no" is glew (see issue #314 for the explanation).
Just to give another reference to some related work mentioned in responses to the Kitware/CMake ticket you raised, here's the ticket which tracked work on "FetchContent and find_package() integration".

shell - portable way to get `make` version

I'm writing my own "configure" script for my package. The Makefile uses some features (e.g. the "call" function) that were added to GNU make in certain versions. I want to check whether the installed `make` utility is GNU's and whether it supports those features, but I have no idea how to do it.
As the `configure` script is supposed to be portable (and I want to avoid Autoconf, SCons... and any other tool, although that's not the question), I need a portable solution. I have looked through my GNU make's documentation and did not find anything useful.
Thanks in advance (and, of course, sorry if this question is irrelevant or if you think that checking the presence & usability of the `make` tool is not part of configure's job).
PS: I've just found the `make -p` option, whose first line prints # GNU Make 3.81 for me, but, again, I don't know if that's another GNU extension or if it is "standard".
I would run make --version.
If it is GNU make it will be something like:
GNU Make 3.81
Copyright (C) 2006 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE.
So the first line contains the version number and indicates that it is GNU make. If the first line contains something else, including perhaps an error message saying that the version option is not supported, it is not.
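A hedged sketch of how the configure script might use this; the ${MAKE:-make} variable and the grep pattern are my own choices, not a standard:

# POSIX sh: detect GNU make from the first line of --version output
if "${MAKE:-make}" --version 2>/dev/null | head -n 1 | grep -q '^GNU Make'; then
    echo "GNU make detected"
else
    echo "warning: GNU make not found (or --version unsupported)" >&2
fi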
You could also emit a temporary Makefile from your configure script, containing the constructs that you require. If the make you find can run it without errors, i.e. with exit code 0, you're good. No need to determine the exact version. Checking for features is in the spirit of the "real" autoconf and probably more reliable than checking version numbers and relying on implicit knowledge of what they do and do not support.
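A hedged sketch of such a feature probe for $(call ...); the file name and the exact test are arbitrary:

# Write a throwaway makefile that only parses cleanly if $(call ...) works
cat > conftest.mk <<'EOF'
check = $(1)-ok
ifneq ($(call check,probe),probe-ok)
$(error no call support)
endif
all: ;
EOF
if "${MAKE:-make}" -f conftest.mk >/dev/null 2>&1; then
    echo "make supports \$(call ...)"
else
    echo "make lacks \$(call ...)" >&2
fi
rm -f conftest.mk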
Finally, consider looking into the "real" configure / autoconf tool to see how they do it - or even use it instead of your own.
make --version is a "local standard" for GNU tools. If you aren't going to support other makes, it's enough for your purpose, assuming:
that you know the version number of GNU make in which the features you use were added,
that some other implementation of make won't interpret --version in a harmful way, and won't describe itself as GNU Make 9.99.

Makefile examples and/or templates

After some time using make to build C++ programs, I still don't have a very good understanding of Makefiles. I am thinking about asking for a "good" example and using it from now on. I have been searching, but the ones I found are too complicated for me to understand. Please give me a template, with comments explaining how it works.
Thanks.
Makefiles have a tendency to get really hairy really fast, particularly when working with multiple directories. Many of the Makefiles I came across in my professional life were little more than glorified shell scripts, with the dependency part mostly non-existent. This kind of problem was noted by the seminal paper "Recursive Make Considered Harmful".
There, and in the follow-up article "Implementing non-recursive make", you can find a reasonable template.
Also, here and here on SO you can find people searching for the elusive Makefile template; a small commented sketch follows below.
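To make this concrete, here is a hedged sketch of a minimal commented GNU make template for a single-directory C++ program; all names, flags and paths are placeholders rather than taken from the linked sources, and recipe lines must start with a TAB character:

CXX      := g++
CXXFLAGS := -Wall -Wextra -O2 -MMD -MP   # -MMD/-MP make the compiler emit .d dependency files
SRCS     := $(wildcard src/*.cpp)        # every .cpp under src/
OBJS     := $(SRCS:.cpp=.o)              # one object per source
DEPS     := $(OBJS:.o=.d)
TARGET   := app

all: $(TARGET)

$(TARGET): $(OBJS)                       # link step
	$(CXX) $(CXXFLAGS) -o $@ $^

%.o: %.cpp                               # compile step (pattern rule)
	$(CXX) $(CXXFLAGS) -c -o $@ $<

clean:
	$(RM) $(TARGET) $(OBJS) $(DEPS)

-include $(DEPS)                         # pull in the auto-generated header dependencies

.PHONY: all clean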
Typically, the good Makefiles I have seen were the result of an expert who worked for several months and created an infrastructure that transformed the Makefile syntax into something almost completely different. The developers just needed to assign some special variables, include the magic code, and build.
The next step in this evolution is a more modern tool such as CMake. CMake will generate well-formed Makefiles for you. If you have control over it, please consider such a tool.
IMHO you will find it makes much more sense and makes you much more productive, gives you the added value of cross-platform support, and covers the entire build process (including configuration, packaging, continuous integration, etc.).

Tool for debugging makefiles

I have a large legacy codebase with very complicated makefiles, with lots of variables. Sometimes I need to change them, and I find that it's very difficult to figure out why the change isn't working the way I expect. What I'd like to find is a tool that basically does step-through-debugging of the "make" process, where I would give it a directory, and I would be able to see the value of different variables at different points in the process. None of the debug flags to make seem to show me what I want, although it's possible that I'm missing something. Does anyone know of a way to do this?
Have you been looking at the output from running make -n and make -np, and the biggie make -nd?
Are you using a fairly recent version of gmake?
Have you looked at the free chapter on Debugging Makefiles available on O'Reilly's site for their excellent book "Managing Projects with GNU Make" (Amazon link)?
I'm sure that remake is what you are looking for.
From the homepage:
remake is a patched and modernized version of GNU make utility that adds improved error reporting, the ability to trace execution in a comprehensible way, and a debugger.
It has a gdb-like interface and is supported by mdb-mode in (X)Emacs, which means breakpoints, watches, etc. And there's DDD if you don't like (X)Emacs.
From the man page on make command-line options:
-n, --just-print, --dry-run, --recon
Print the commands that would be executed, but do not execute them.
-d Print debugging information in addition to normal processing.
The debugging information says
which files are being considered for remaking,
which file-times are being compared and with what results,
which files actually need to be remade,
which implicit rules are considered and which are applied -- everything interesting about how make decides what to do.
--debug[=FLAGS] Print debugging information in addition to normal processing.
If the FLAGS are omitted, then the behaviour is the same as if -d was specified.
FLAGS may be:
'a' for all debugging output same as using -d,
'b' for basic debugging,
'v' for more verbose basic debugging,
'i' for showing implicit rules,
'j' for details on invocation of commands, and
'm' for debugging while remaking makefiles.
I'm not aware of any specific flag that does exactly what you want, but --print-data-base sounds like it might be useful.
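A hedged example of using it to inspect a variable's final value (CFLAGS is just an example name); the print-% rule at the end is a common idiom rather than something from the answers above:

# Dump make's internal database without running anything, and look for one variable;
# -B1 also shows make's note about where it was defined:
make -np | grep -B1 '^CFLAGS'

# Common idiom: add a pattern rule to the makefile, then ask for any variable by name:
print-%: ; @echo '$* = $($*)'
# usage: make print-CFLAGS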
remake --debugger all
More info https://vimeo.com/97397484
https://github.com/rocky/remake/wiki/Installing
There is a GNU make debugger project at http://gmd.sf.net which looks quite useful. The main feature supported by gmd is breakpointing, which may be more useful than stepping. To use this, you download gmd from http://gmd.sf.net and gmsl from http://gmsl.sf.net, and do an 'include gmd' in your makefile.
