I have a Makefile which looks like this:
.PHONY: aws-deps

requirements.txt: Pipfile Pipfile.lock
    pipenv lock -r > $@

aws-deps: requirements.txt
    pip3 install --upgrade --target aws_src/ -r $<
If I run make requirements.txt more than once, it correctly says the target is up to date. But if I run make aws-deps, it doesn't behave as I expect a .PHONY target to: it runs every time, regardless of whether requirements.txt has changed. For example, deleting requirements.txt first:
$ make aws-deps
pipenv lock -r > requirements.txt
pip3 install --upgrade --target aws_src/ -r requirements.txt
<snip>
$ make aws-deps
pip3 install --upgrade --target aws_src/ -r requirements.txt
<snip>
Am I misunderstanding what .PHONY dependencies do? I want aws-deps to only do something if its prerequisite has changed, i.e. if requirements.txt has changed. Does anybody know what I'm missing to get that to work?
Thanks!
.PHONY targets tell make to treat a target as not being a file, even though there might be a file with the same name as the target. As there is no file named aws-deps here, .PHONY has no real influence in your case. Instead, make has nothing to compare the timestamp of requirements.txt against, so it assumes the rule for aws-deps must always be run. You can change this behavior with something like:
AWS_DEP = .aws-deps-done # hidden file to compare a timestamp against

.PHONY: aws-deps
aws-deps: $(AWS_DEP)

$(AWS_DEP): requirements.txt
    pip3 install --upgrade --target aws_src/ -r $<
    @touch $@
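With this change in place, a second make aws-deps (with Pipfile and Pipfile.lock untouched) should report that there is nothing to be done, because the .aws-deps-done stamp is now newer than requirements.txt.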
I went through a similar case. I wasn't a fan of creating another file, hidden or visible, but that is the pattern I've seen most often. So I went to the GNU make manual (who does that anymore?), hoping the authors had considered something so obvious, and found the special target .INTERMEDIATE, which does not expect the target file to be updated. Your example would then be:
requirements.txt: Pipfile Pipfile.lock
    pipenv lock -r > $@

.INTERMEDIATE: aws-deps
aws-deps: requirements.txt
    pip3 install --upgrade --target aws_src/ -r $<
It works well and does not require writing an extra file as a flag. I used this .INTERMEDIATE target to print a message once before a mass compilation of PDF files, and another message before a similar mass compilation of PNG files. If you use .PHONY, the compilation repeats; if you print the message inside the rule's recipe, it prints for every file being processed. Printing a one-time message is thus another use of .INTERMEDIATE.
What I am trying to achieve: a make rule that would create the virtual environment for a script, activate it, and install package dependencies. (I've created a repo with files needed to recreate, for convenience).
Here is my Makefile:
venv:
    @echo VENV
    virtualenv $@ -p python2

foo_requirements: requirements.txt venv .FORCE
    @echo PIP
    ( . venv/bin/activate && pip install -r $< )

.PHONY: foo_requirements

FOO_CMD_SCRIPT = foo.py
FOO_CMD = . venv/bin/activate && python2 $(FOO_CMD_SCRIPT)

$(FOO_CMD_SCRIPT): foo_requirements

#--- Usage ---
all: $(FOO_CMD_SCRIPT)
    $(FOO_CMD)

.FORCE:
The target all is there only for testing, in real life I would put the content in a foo.mk file, and include that from another makefile.
What I expect:
make all looks at its dependency $(FOO_CMD_SCRIPT) (actually the name of a file on disk). That file's dependency is the foo_requirements rule (phony).
the rule foo_requirements has the file dependencies requirements.txt and venv. There is .FORCE in here too, because I don't know how to check whether the package installation is already done. So what I think should happen is: 1. nothing for the dependency requirements.txt (the file exists, no rule); 2. run the rule for venv if it does not exist.
when venv rule has run and the directory is created, run the actual content of the rule: pip install.
after that, the dependencies for all should be finished, and the actual commands should run.
What actually happens:
venv gets created alright
pip never runs
the actual command never runs
Why doesn't the content of the rule foo_requirements run?
Likewise, the all rule content never runs.
Result:
$ make
VENV
virtualenv venv -p python2
created virtual environment CPython2.7.18.final.0-64 in 46ms
creator CPython2Posix(dest=/home/gauthier/tmp/test_mk/venv, clear=False, no_vcs_ignore=False, global=False)
seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/home/gauthier/.local/share/virtualenv)
added seed packages: pip==20.3.4, pkg_resources==0.0.0, setuptools==44.1.1, wheel==0.34.2
activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator
If you don't tell it otherwise, make will always build the first target in the makefile (along with any of its prerequisites) and then stop.
The first target in your makefile is venv and it has no prerequisites, so that target is built then make stops.
You can run make <target> to run a specific target, for example make all.
Or you can put the all target as the first one in the makefile.
Or you can set .DEFAULT_GOAL := all in your makefile (it is a special variable, not a rule).
See How make Processes a Makefile
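For example, here is a minimal sketch of the two fixes applied to the Makefile above (nothing else needs to change):

# Option 1: declare the default goal explicitly
.DEFAULT_GOAL := all

# Option 2: alternatively, move the `all` rule so it is the first target in the file
all: $(FOO_CMD_SCRIPT)
    $(FOO_CMD)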
This error occurs when I try to run make install under Ubuntu 16.04:
*** No rule to make target 'install'. Stop.
I have already run make, which produced several errors like fatal: bad revision 'HEAD' but did not halt. I have no idea whether these errors matter.
My makefile is:
SUBDIRS := $(wildcard */.)

all: $(SUBDIRS)

$(SUBDIRS):
    make -C $@

install:
    for dir in $(SUBDIRS); do \
        make -C $$dir install; \
    done

.PHONY: all $(SUBDIRS)
Specifically, I want to know how the Makefile works after install:.
The project should install an app on the connected phone (a Nexus 5), but there is no such app on my phone.
I suppose your Makefile is properly formatted, with tabs where they should be, etc.
Then, when you run make install in the top level directory, your Makefile does have a rule to make the target install: it says to loop on your subdirectories, enter each one of them, and run make install there (this is what the -C option does). One of those sub-makes fails, most probably because, in its respective subdirectory, it doesn’t find a Makefile with an install recipe in it. When the sub-make fails, the loop goes on with the remaining sub-makes (unless the shell was instructed otherwise by means of the -e switch), and the final return code of the whole recipe will be the return code of the last sub-make.
There are some points worth discussing in your Makefile (for example, install should be listed as a .PHONY target), but you don’t provide enough information to clarify them: for example, is it really necessary to have the shell loop through the subdirectories in a particular order? Usually, a better policy is to have make parallelize the sub-makes whenever possible (and, as a side effect, have make stop when the first submake fails...)
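For illustration only (this is a common recursive-make layout, not taken from the question's project), per-directory phony install targets let make run the sub-makes in parallel and stop on the first failure:

INSTALLDIRS := $(SUBDIRS:%=install-%)

.PHONY: install $(INSTALLDIRS)
install: $(INSTALLDIRS)

$(INSTALLDIRS):
    $(MAKE) -C $(@:install-%=%) install

Invoking make -j4 install then runs the per-directory installs concurrently.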
In my case I have a requirements target, which installs the needed Python packages, and a test target, which runs the tests and depends on the former.
Installing dependencies is a long operation and I want it to be executed only when requirements.txt changes. How can I achieve that?
Here is a simplified example of Makefile, that I have now:
.PHONY: test requirements

requirements: requirements.txt
    pip install -r $<

test: tests/ | requirements
    py.test $^
As @user1034749 pointed out, Make compares the modification times of files. If you want it to know when requirements.txt has been modified since the last installation, you must give it a file whose modification time is the same as the time of the last installation, so that it can compare the two. In other words, you must have a dummy file and modify it whenever you perform the installation. You can call it anything you like, but I will call it "installation":
.PHONY: test

installation: requirements.txt
    pip install -r $<
    touch $@

test: tests/ | installation
    py.test $^
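One optional addition (my suggestion, not part of the answer above): remove the stamp file in a clean rule, so a reinstall can be forced easily:

clean:
    rm -f installation

After make clean, the next make test will run pip install again.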
I know the steps for installing from source are:
./configure
make
make install
But why "make" before /etc/cups/cupsd.conf, why not just do "make install"?
My understanding so far is "make" only compile the source into executable file, and "make install" actually place them into executable PATH folder, am I right?
If we want to install executable on the machine, can we just do
./configure
make install
instead of the 3 steps shown above?
When you run make, you're instructing it to essentially follow a set of build steps for a particular target. When make is called with no parameters, it runs the first target, which usually simply compiles the project. make install maps to the install target, which usually does nothing more than copy binaries into their destinations.
Frequently, the install target depends upon the compilation target, so you can get the same results by just running make install. However, I can see at least one good reason to do them in separate steps: privilege separation.
Ordinarily, when you install your software, it goes into locations for which ordinary users do not have write access (like /usr/bin and /usr/local/bin). Often, then, you end up actually having to run make and then sudo make install, as the install step requires a privilege escalation. This is a "Good Thing™", because it allows your software to be compiled as a normal user (which actually makes a difference for some projects), limiting the scope of potential damage for a badly-behaving build procedure, and only obtains root privileges for the install step.
make without parameters takes the ./Makefile (or ./makefile) and builds the first target. By convention, this may be the all target, but not necessarily. make install builds the special target, install. By convention, this takes the results of make all, and installs them on the current computer.
Not everybody needs make install. For example, if you build a web app to be deployed on a different server, or if you use a cross-compiler (e.g. you build an Android application on a Linux machine), it makes no sense to run make install.
In most cases, the single line ./configure && make all install will be equivalent to the three-step process you describe, but this depends on the product, on your specific needs, and again, this is only by a convention.
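To make those conventions concrete, here is a toy, hand-written Makefile (the program name hello and the paths are made up for illustration) in which install depends on the build:

PREFIX ?= /usr/local

.PHONY: all install

all: hello

hello: hello.c
    $(CC) -o $@ $<

install: all
    install -d $(DESTDIR)$(PREFIX)/bin
    install -m755 hello $(DESTDIR)$(PREFIX)/bin

With a Makefile like this, make install alone is enough; if the install target lacked the all prerequisite, you would need make && make install.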
There are times I want to try to compile code changes but not deploy those changes. For instance, if I'm hacking the Asterisk C code base, and I want to make sure the changes I'm making still compile, I'll save and run make. However, I don't want to deploy those changes because I'm not done coding.
For me, running make is just a way to make sure I don't end up with too many compile errors in my code to where I have trouble locating them. Perhaps more experienced C programmers don't have that problem, but for me, limiting the number of changes between compiles helps reduce the number of possible changes that may have completely trashed my build, and this makes debugging easier.
Lastly, this also helps give me a stopping point. If I want to go to lunch, I know that someone can restart the application in its currently working state without having to come find me, since only make install would copy the binaries over to the actual application folder.
There may very well be other reasons, but this is my reason for embracing the fact that the two commands are separated. As others have said, if you want them combined, you can combine them using your shell.
A lot of software these days will do the right thing with only make install.
In those that won't, the install target has no dependency on the compiled binaries.
So most people use make && make install, or a variation thereof, just to be safe.
A simple (real-world) Makefile example, from https://github.com/jarun/googler#installation. Most of the comments were added by me.
make install PREFIX=YOUR_own_path
(Use zsh's autocompletion to see what you can choose, and you will learn many things!)
PREFIX ?= /usr/local
# These two are the same:
# FOO ?= bar
# ifeq ($(origin FOO), undefined)
# FOO = bar
# endif
# ---
BINDIR = $(DESTDIR)$(PREFIX)/bin
MANDIR = $(DESTDIR)$(PREFIX)/share/man/man1
DOCDIR = $(DESTDIR)$(PREFIX)/share/doc/googler
# the command `make YOUR_target_name`
# Call a specific target in ./Makefile (or ./makefile), which
# contains such pairs :
# targets:
# ^I shell_command_line_1
# ...
# ^I shell_command_line_n
# `make` can be regarded as using the default target: the first one in
# the Makefile, which is usually named `all`
# .PHONY: all install uninstall disable-self-upgrade
.PHONY: second all install uninstall disable-self-upgrade
# In terms of `Make`, whenever you ask `make <phony_target>`,
# it will run, independent from the state of what files you have,
# because a `phony target` is marked as always out-of-date
all:
    echo "hi, this is the 'all' target"

my_first:
    echo "hi, this is the first target"

second:
    echo "hi, this is the 2nd target"
# the target `install` can usually be found in Makefile. You can change it to `buy` or others
install:
# from tldr: the `install` command copies files and sets attributes.
#   -m, --mode=      set the permission mode
#   -d, --directory  treat all arguments as directory names and create them
    install --mode=755 -d $(BINDIR)
    install -m755 -d $(MANDIR)
    install -m755 -d $(DOCDIR)
    gzip --to-stdout googler.1 > googler.1.gz
    install -m755 googler $(BINDIR)
    install -m644 googler.1.gz $(MANDIR)
    install -m644 README.md $(DOCDIR)
    rm -f googler.1.gz

# same as above
buy:
# from tldr: the `install` command copies files and sets attributes.
#   -m, --mode=      set the permission mode
#   -d, --directory  treat all arguments as directory names and create them
    install --mode=755 -d $(BINDIR)
    install -m755 -d $(MANDIR)
    install -m755 -d $(DOCDIR)
    gzip --to-stdout googler.1 > googler.1.gz
    install -m755 googler $(BINDIR)
    install -m644 googler.1.gz $(MANDIR)
    install -m644 README.md $(DOCDIR)
    rm -f googler.1.gz
uninstall:
    rm -f $(BINDIR)/googler
    rm -f $(MANDIR)/googler.1.gz
    rm -rf $(DOCDIR)
# Ignore the section below if you don't use apt or other package managers to install this
# Disable the self-upgrade mechanism entirely. Intended for packagers
#
# We assume that sed(1) has the -i option, which is not POSIX but seems common
# enough in modern implementations.
disable-self-upgrade:
    sed -i.bak 's/^ENABLE_SELF_UPGRADE_MECHANISM = True$$/ENABLE_SELF_UPGRADE_MECHANISM = False/' googler
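A short usage note (my addition; the staging path is just an example): because every destination is prefixed with $(DESTDIR), packagers can do a staged install without touching the live system:

make install DESTDIR=/tmp/googler-staging PREFIX=/usr

The files then land under /tmp/googler-staging/usr/... and can be archived into a package.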
I'm trying to setup a parallel CMake-based build for my source tree, but when I issue
$ cmake .
$ make -j2
I get:
jobserver unavailable: using -j1. Add '+' to parent make rule
as a warning. Does anyone have an idea if it is possible to fix it somehow?
In the generated Makefile, when calling into a sub-make it needs to either use $(MAKE) (not just 'make') or else precede the line with a +. That is, a rule should look like this:
mysubdir:
    $(MAKE) -C mysubdir
or like this:
mysubdir:
    +make -C mysubdir
If you don't do it one of those two ways, make will give you that warning.
I don't know anything about cmake, so maybe it's generating Makefiles that aren't correct. Or maybe you did something incorrectly on your end.
In my case (with CMake 3.5.2) the trivial cd build && cmake .. && make -j5 works just fine.
But, I do get the jobserver unavailable error when building custom targets (as dependencies of other targets) via the cmake --build . --target foo idiom.
Like this:
add_custom_target(buildroot
    COMMAND ${CMAKE_COMMAND} --build . --target install
    COMMENT "Populating buildroot..."
)
add_dependencies(deb buildroot)
add_dependencies(rpm buildroot) # ... etc
— so that the user can make deb and it Just Works. CMake will regenerate makefiles if needed, run the compilation, install everything exactly as with make install, and then run my custom scripts to package up the populated buildroot into whatever shape or form I need.
Sure enough, I'd like to make -j15 deb — but that fails.
Now, as explained on the mailing list by CMake devs, the root cause lies, surprisingly (or not), within GNU Make; there is a workaround.
The root cause is that make will not pass its jobserver environment to child processes it thinks aren't make.
To illustrate, here's a process tree (ps -A f) branch:
…
\_ bash
\_ make -j15 deb
\_ make -f CMakeFiles/Makefile2 deb
\_ make -f CMakeFiles/buildroot.dir/build.make CMakeFiles/buildroot.dir/build
\_ /usr/bin/cmake --build . --target install ⦿
\_ /usr/bin/gmake install
…
At ⦿ point, make drops jobserver environment, ultimately causing single-threaded compilation.
The workaround which worked great for me, as given away in the linked email, is to prefix all custom commands with +env. Like this:
add_custom_target(buildroot
    #-- this ↓↓↓ here -- https://stackoverflow.com/a/41268443/531179
    COMMAND +env ${CMAKE_COMMAND} --build . --target install
    COMMENT "Populating buildroot..."
)
add_dependencies(deb buildroot)
add_dependencies(rpm buildroot) # ... etc
In the end, this appears in the rule for buildroot in the appropriate makefile (CMake generates a bunch of them), and causes GNU Make to behave properly and respect -j.
Hope this helps.
As pointed out by @Carlo Wood in his comment to this answer, trying to convince cmake to add + to the beginning of the command in the cmake-generated makefile is not possible.
A workaround I found is to shield the underlying make command from the make flags coming from cmake. This can be done by setting the environment variable MAKEFLAGS to an empty string for the custom command:
COMMAND ${CMAKE_COMMAND} -E env
        MAKEFLAGS=
        make <your target and make options>
Hope this helps.
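Put together (keeping the buildroot target name, comment, and placeholder from the earlier answer; adjust the make invocation to your own target and options), the custom target would look something like:

add_custom_target(buildroot
    COMMAND ${CMAKE_COMMAND} -E env
            MAKEFLAGS=
            make <your target and make options>
    COMMENT "Populating buildroot..."
)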