I found that they are all configuration files for compiling code for an MCU, but when and where should I use which?
A script is an interpreted computer program. That is, a program that can be executed by another program (an executable called an interpreter) without first being translated into another language. A script written in a particular language needs an interpreter that understands that language.
A makefile is a script. It is interpreted by some version of Make. The language of makefiles is designed to be used in build systems, invoking shell commands to build files out of other files. It can do other things, but generally not as easily as other languages that are more general or designed with those things in mind.
Where and when to use which tool is a very broad question. Broadly speaking, Make is a good tool for building files from other files, and keeping them up to date. Other languages may be better for other purposes, such as computation, stream editing or regular expression handling.
Beta's answer is perfect. Just to give a glimpse of make, a good analogy could be a chef. When asked to cook a meal (the goal), the chef reads a cookbook (the makefile) and identifies the ingredients (files), the proper order of operations, and their descriptions.
An important aspect of make is that it compares the last modification times of files to decide whether a specific file is up to date or shall be rebuilt. A typical makefile contains a series of rules (whose order usually does not matter) like this:
target_file: prerequisite_file_1 prerequisite_file_2 ...
	recipe_line_1
	recipe_line_2
	...
(where each recipe line starts with a tab). This part tells make that target_file depends on prerequisite_file_1, prerequisite_file_2... That is, if target_file is needed to accomplish the goal, make will first decide whether it must be rebuilt or not:
If it does not exist, make will of course try to build it.
If it already exists, make will compare its last modification time with those of the prerequisite files. If target_file is older than any of its prerequisites, make will try to rebuild it.
If target_file must be rebuilt, make will pass each recipe line to the shell (a separate shell for each line); executing them is supposed to build the target, but make has no way to check this: it is your responsibility to write sound recipes.
Note that all this is recursive: if a prerequisite file is missing make will search for another rule that builds it. And if a prerequisite file exists but a rule also exists for which it is the target, make will check that it is up-to-date before building target_file. And it will first build missing or outdated prerequisite files before the top-level target file.
It is a bit like if the recipe in the cookbook was referring to a composite ingredient (let's say a sauce) that has its own separate recipe in the same cookbook.
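To make the recursion concrete, here is a hedged sketch of a two-rule makefile (the file names are invented for illustration):
program: main.o
	cc -o program main.o
main.o: main.c
	cc -c main.c
Asking make for program first applies the same decision to main.o: if main.c is newer than main.o (or main.o is missing), main.o is recompiled before program is linked.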
The main difference from a naive shell script that would also describe the build process of a project is thus that the shell script would probably always rebuild everything, while make rebuilds only what needs to be rebuilt. Bonus: most make implementations are also capable of parallelisation: knowing what depends on what, make can also discover what must be serialized and what can be done in parallel. On a multi-core computer, for large projects, the speed-up factor can be significant.
GNU Make under MinGW is known to be very slow under certain conditions due to how it executes implicit rules and how Windows exposes file information (per "MinGW “make” starts very slowly").
That previous question and all other resources on the issue that I've found on the internet suggest working around the problem by disabling implicit rules entirely with the -r flag. But is there another way?
I have a "portable" Makefile that relies on them, and I'd like to make it so that it does not take around a minute to start it up each time, rather than having to get the Makefile owner to alter it just for me.
You should use make -d to see all the things make is doing and try to see where the time is going. One common reason for lengthy make times are match-anything rules which are used to determine whether or not a makefile needs to be rebuilt. Most of the match-anything rules CAN be removed; they're rarely needed anymore.
You can add this to your makefile and see if it helps:
%:: %,v
%:: RCS/%,v
%:: RCS/%
%:: s.%
%:: SCCS/s.%
And, if you don't need to auto-create your makefile, you can add:
Makefile: ;
(also put any included makefiles there that you don't need to auto-create).
ETA
It seems your real question can be summed up as, "why does make take so much longer to start on Windows than on Linux, and what can I do to fix that without changing makefiles?"
The answer is, nothing. Make does exactly the same amount of work on both Windows and Linux: there are no extra rules or procedures happening on Windows that could be removed. The problem is that Windows NTFS is slower than typical Linux filesystems for these lookups. I know of no system setting, etc. that will fix this problem. Your only choice is to get make to do less work so that it's faster, and the only way to do that is by removing built-in rules you don't need.
If the problem is you really don't want to edit the actual makefiles, that's simple enough to solve: just write the rules above into a small separate makefile, maybe something like speedup.mk, then set the environment variable MAKEFILES=speedup.mk before invoking make. Make will parse that makefile as well without you having to change any makefiles.
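For example, a hedged sketch of such a file (the name speedup.mk is arbitrary), collecting the rules shown above:
# speedup.mk: cancel the rarely-needed built-in match-anything rules
%:: %,v
%:: RCS/%,v
%:: RCS/%
%:: s.%
%:: SCCS/s.%
# don't try to auto-create the makefile itself
Makefile: ;
Then run make with the variable set, e.g. MAKEFILES=speedup.mk make, and make will parse speedup.mk before your makefiles.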
Background
I have a (large) project A and a (large) project B, such that A depends on B.
I would like to have two separate makefiles -- one for project A and one for project B -- for performance and maintainability.
Based on the comments to an earlier question, I have decided to entirely rewrite B's makefile such that A's makefile can include it. This will avoid the evils of recursive make: allow parallelism, not remake unnecessarily, improve performance, etc.
Current solution
I can find the directory of the currently executing makefile by putting this at the top (before any other includes):
TOP := $(dir $(lastword $(MAKEFILE_LIST)))
I am writing each target as
$(TOP)/some-target: $(TOP)/some-src
and making the necessary changes to shell commands, e.g. changing find dir to find $(TOP)/dir.
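Put together, a minimal sketch of a self-locating includable makefile could look like this (the library and file names are invented; note that $(dir ...) already ends in a slash, so no extra / is strictly needed):
TOP := $(dir $(lastword $(MAKEFILE_LIST)))

$(TOP)libb.a: $(TOP)b.o
	ar rcs $@ $^

$(TOP)b.o: $(TOP)b.c
	$(CC) -c -o $@ $<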
While this solves the problem, it has a couple of disadvantages:
Targets and rules are longer and a little less readable. (This is likely unavoidable. Modularity has a price).
Using gcc -M to auto-generate dependencies requires post-processing to add $(TOP) everywhere (see the sketch after this list).
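For the second point, one hedged workaround is to let GCC emit the prefixed target name itself, so no sed post-processing is needed. The pattern rule below is a sketch; -MM, -MT and -MF are standard GCC preprocessor flags, and it covers the common case where header paths are already reported relative to the top-level directory:
# generate $(TOP)foo.d with its target already written as $(TOP)foo.o
$(TOP)%.d: $(TOP)%.c
	$(CC) -MM -MT '$(TOP)$*.o' -MF $@ $<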
Is this the usual way to write makefiles that can be included by others?
If by "usual" you mean, "most common", then the answer is "no". The most common thing people do, is to improvise some changes to the includee so the names do not clash with the includer.
What you did, however, is "good design".
In fact, I take your design even further.
I compute a stack of directories: if the inclusion is recursive, you need to keep the current directories on a stack as you parse the makefile tree. $D is the current directory (shorter for people to type than $(TOP)/),
and I prefix everything in the includee with $D/, so you have variables:
$D/FOOBAR :=
and phony targets:
$D/phony:
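A minimal sketch of that stack, following the well-known non-recursive make pattern (the names sp and dirstack are illustrative):
# push: save the parent's $D, then point $D at this makefile's directory
sp := $(sp).x
dirstack$(sp) := $(D)
D := $(patsubst %/,%,$(dir $(lastword $(MAKEFILE_LIST))))

# body of the includee: everything is prefixed with $D/
$D/FOOBAR := some-value
.PHONY: $D/phony
$D/phony: $D/some-target

# pop: restore the parent's $D on the way out
D := $(dirstack$(sp))
sp := $(basename $(sp))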
I know there are many linters for programming languages, like pep8 for python, but I have never come across one for a makefile. Are there any such linters for makefiles?
Any other ways to programmatically detect errors or problems in makefiles without actually running them?
As my makefile has grown, it keeps getting longer and more complicated, and a linter would help me keep it readable.
Things have apparently changed. I found the following:
Checkmake
Mint
Of the two, Checkmake has (as of 2018/11) more recent development, but I haven't tried either.
I also do not know where to find a makefile lint (a web search for "make file lint" got me here), but here is an incomplete list of shallow idea fragments for implementing a makefile lint utility...
Whitespace is one aspect of readability, as tabs and spaces have distinct semantics within a makefile. Emacs' makefile-mode by default warns you of "suspicious" lines when you try to save a makefile with badly spaced tabs. Maybe it would be feasible to run Emacs in batch mode and invoke the parsing and verification functions from that mode. If somebody were to start implementing such a makefile lint utility, that Emacs Lisp mode could be interesting to check.
About checking for correctness, @Mark Galeck in his answer already mentioned --warn-undefined-variables. The problem is that there is a lot of output from undefined but standardized variables. To improve on this idea, a simple wrapper could be added to filter out messages about those variables so that the real typos would be spotted. In this case make could be run with the option --just-print (aka -n or --dry-run) so as to not run the actual commands that build the targets.
It is not a good idea to perform any changes when make is run with the --just-print option, so it would be useful to grep for $(shell ...) function calls and try to ensure nothing is changed from within them. As a first iteration of what we could check for: $(shell pwd) and some other common non-destructive uses are okay; anything else should invoke a warning for a manual check.
We could grep for $ not followed by ( (maybe something like [$][^$(][[:space:]] expressed as a POSIX regular expression) to catch cases like $VARIABLE, which parses as $(V)ARIABLE, is possibly not what the author intended, and is not good style either.
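As a hedged sketch, that particular check could even live inside the makefile being checked (the lint target name and the exact regex are improvised):
# flag references like $VARIABLE: a bare $ followed by two or more name characters
lint:
	@grep -nE '[$$][A-Za-z_]{2,}' $(MAKEFILE_LIST) || echo 'no suspicious references found'
(Escaped shell variables written as $$VAR will also be flagged, so the output still needs a manual look.)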
The problem with makefiles is that they are so complex, with all the nested constructs like $(shell), $(call), $(eval) and rule evaluations; the results can change depending on input from the environment, the command line, or the calling make invocation; and there are many implicit rules and other built-in definitions that make any deeper semantic analysis problematic. I think an all-encompassing makefile lint utility is not feasible (except perhaps built into the make utility itself), but certain codified guidelines and heuristic checks would already prove useful indeed.
The only thing resembling lint behaviour is the command-line option --warn-undefined-variables.
I am currently confused as to how makefile targets work. I have a current understanding, and I don't know if it is correct because the tutorials I've been reading aren't very clear to me. Here is my current understanding
when you run 'make' in the terminal, the make utility finds the first target in the makefile and tries to build it, but before doing so it looks at all of that target's dependencies
(this is where I start getting confused): If the dependency is a target in the makefile, but does not exist as a file in the makefile's directory, make simply runs that target. If the dependency is a file name, but not a target in the makefile, the utility checks for the existence of the file, and if the file doesn't exist, the utility yells at you. If the dependency is a file that exists in the directory AND a target, the target is run provided that any of the files that the file-target depends on are newer than the associated file.
Do I have it down right? Is it simpler than I'm making it out to be?
You have it right, more or less, but it can be stated a little more clearly. You're right about how make chooses the initial target, except, of course, that if the user specifies a target on the make command line, that one is used instead of the first one.
Then make basically implements a recursive algorithm for each target, that works like this:
Find a rule to build that target. If there is no rule to build the target, make fails.
For each prerequisite of the target, run this algorithm with that prerequisite as the target.
If either the target does not exist, or if any prerequisite's modification time is newer than the target's modification time, run the recipe associated with the target. If the recipe fails, (usually) make fails.
That's it! Of course, this hides a number of complex issues: in particular item #1 (finding a rule) can be complex in situations where you have no implicit rule for the target. Also behaviors such as what to do when a rule fails can be modified.
But that's the basic algorithm!
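To see the algorithm in action, consider this hedged example (invented names):
app: app.o util.o
	cc -o app app.o util.o
app.o: app.c util.h
	cc -c app.c
util.o: util.c util.h
	cc -c util.c
Running make app applies the algorithm to app, which recurses into app.o and util.o (step 2); each of those compares its timestamp against its .c and .h prerequisites (step 3) and recompiles if needed, and finally app is relinked if either object file ended up newer than it.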
For the question you asked, your understanding is correct!
If you are still confused, have a look at this: http://www.jfranken.de/homepages/johannes/vortraege/make_inhalt.en.html
Once comfortable, move on to other, more complete manuals like the GNU Make manual.
I want to do some refactoring of code, especially the "include"-like relationships between files. There are quite a few of them, and to get started, it would be helpful to have a list, diagram, or even a columnar graph, so that I can see at a glance what is included from where.
(In many cases, a given file is included by multiple other files, so the graph would be a DAG, not a tree. There are no cycles.)
I'm working with TeX (actually ConTeXt), but the question would seem to apply to any programming language that has a facility like that of #include in C.
The obvious, easy answer is to do a grep or "Find in Files" on all the .tex files for the relevant keywords (\usemodule, \input, and a couple of other macros we've defined). This is better than nothing, but the output is long, and it's still difficult to see patterns in what includes what. For example, is file A usually included before file B? Is file C ever included multiple times by the same file?
I guess that brings out an additional, but optional feature: that such a tool would be able to show the sequence of includes from a particular file. So in that case the DAG could be a multigraph, i.e. there could be multiple arcs from one file to another.
Ideally, it would be nice to be able to annotate each file, giving a very brief summary of what's in it. This would form part of the text on the graph node for that file.
Probably this sort of thing could be done with a script that generates graphviz dot language. But I wanted to know if it has already been done, rather than reinvent the wheel.
As it is Friday in my country right now, and I'm waiting for my colleagues to go have a beer, I thought I'd do a little programming.
Here http://www.luki.webzdarma.cz/up/IncludeGraph.zip you can download the source of a really simple utility that looks for all files in one folder, parses their #includes, and generates a .dot file for them.
It supports and correctly handles relative paths, works on Windows, and should work on Linux as well. It is written in a very spartan way. My version of dot is not parsing the generated files; there is some bug, but I really need to go drinking now, so see if you can fix it. I'm not a regular dot user and I don't see it, though I'm sure it is pretty obvious.
Enjoy ...
PS - If you run into trouble compiling and/or running, please let me know. Thanks.
EDIT
OK, my bad, there were a few glitches on Linux. The dot problem was that it was using "graph" instead of "digraph". But it is working like a charm now. Here is the link. Just type make, and if that goes through, make test should generate the following diagram of the program itself:
It ignores preprocessor directives in the C++ files, so it is not very useful for that directly (this could be fixed by simply calling g++ with the preprocessor-output flag and processing that instead of the actual files). I didn't get to regexps today, but if you have any programming experience, you will find that modifying DotGraph.cpp to hard-code your inclusion token and change the list of file extensions shouldn't be very hard. I might get to regexps tomorrow or something.
A clever and general solution would be to trace the build system (using something like strace, LD_PRELOAD, patching the binaries, or some other debugging facility).
Once you'd collected the sequence of file open/close operations, you'd just have to filter out the uninteresting stuff; it should be easy to build the dependency tree for any language as long as the following assumptions are true:
The build system will open each file when it is included.
The build system will close each file when it reaches its end.
Unfortunately, a compiler, whether well-written or poorly-written, might violate these assumptions by, for instance, only opening a file the first time it is included, or never closing any files.
Perhaps because of these limitations, I'm not aware of any implementation of this idea.
On the other hand, clever build systems may include functionality to compute or extract dependencies themselves. gcc has the -M option to output dependencies, and javac figures out dependencies on its own (although I don't know how to get it to output them).
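As a rough sketch of how gcc -M output could feed a graph for C sources (the -MM flag is real GCC; the rest is an improvised, untested recipe):
# emit one Graphviz edge per source-to-header dependency reported by gcc -MM
SRCS := $(wildcard *.c)

includes.dot: $(SRCS)
	@echo 'digraph includes {' > $@
	@for f in $(SRCS); do \
		gcc -MM $$f | tr -d '\\' | tr ' ' '\n' | grep '\.h$$' | \
		sed "s|.*|\"$$f\" -> \"&\";|" >> $@; \
	done
	@echo '}' >> $@
Run dot -Tpng includes.dot -o includes.png to render it (file names containing spaces would break this sketch).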
As far as TeX goes, I don't know enough TeX to actually implement this, but conceptually it seems like it should be possible to redefine the low-level include command to:
write a log of what is about to be included
call the original include command to include it
write a log of what was included
You could then build your tree from the log output.