How to automatically generate a Makefile help command - macOS

I recently found this article online that explains how to set up a make help target that will automatically parse the Makefile for comments and display a nicely formatted help output.
It parses:
install: ## Install npm dependencies for the api, admin, and frontend apps
    @echo "Installing Node dependencies"
    @npm install
install-dev: install ## Install dependencies and prepared development configuration
    @./node_modules/.bin/selenium-standalone install
    @cp -n ./config/development.js-dist ./config/development.js | true
run-frontend-dev: webpack.PID ## Run the frontend and admin apps in dev (using webpack-dev-server)
Into:
install Install npm dependencies for the api, admin, and frontend apps
install-dev Install dependencies and prepared development configuration
run-frontend-dev Run the frontend and admin apps in dev (using webpack-dev-server)
But for some reason I can't get it working (on OSX at least). With this target:
help: ## Show the help prompt.
    @grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
When I run:
make help
I just get:
$ make help
make: Nothing to be done for `help'.
I've tried all of the different solutions from the comments in the article too, but nothing seems to work. What am I doing wrong?
Also, what would a variation of the script look like that allows comments to be placed on the line before the target, instead of after it? Like so:
# Build the source files with Babel.
build: $(shell find lib)
    @rm -rf build
    @mkdir -p build
    @$(babel) lib $(babel_options)

# Seed the database with fake data.
db-fake:
    @$(run) node bin/run-sql fake
Here is the full Makefile in question:
# Options.
DEBUG ?=
SOURCEMAPS ?= true
WATCH ?=

# Variables.
bin = ./node_modules/.bin
tests = $(shell find test/routes -depth 1)
shorthand-tests = $(patsubst test/routes/%.js,test-%,$(tests))

# Binaries.
babel = $(bin)/babel
mocha = $(bin)/mocha
node = node
nodemon = $(bin)/nodemon
run = heroku local:run

# Babel options.
babel_options = \
    --out-dir build \
    --copy-files

# Sourcemaps?
ifneq ($(SOURCEMAPS),false)
babel_options += --source-maps
endif

# Watch?
ifdef WATCH
babel_options += --watch
endif

# Development?
ifneq ($(NODE_ENV),production)
node = $(nodemon)
endif

# Debug?
ifdef DEBUG
node += debug
mocha += debug
endif

# Default.
default: help ## Default.

# Build the source files with Babel.
build: $(shell find lib) ## Build the source files with Babel.
    @rm -rf build
    @mkdir -p build
    @$(babel) lib $(babel_options)

# Seed the database with fake data.
db-fake: ## Seed the database with fake data.
    @$(run) node bin/run-sql fake

# Reset the database, dropping everything and then creating it again.
db-reset: ## Reset the database, dropping everything and then creating it again.
    @$(run) node bin/run-sql drop
    @$(run) node bin/run-sql create

# Seed the database with test data.
db-seed: ## Seed the database with test data.
    @$(run) node bin/run-sql seed

# Show the help prompt.
help: ## Show the help prompt.
    @awk '/^#/{c=substr($$0,3);next}c&&/^[[:alpha:]][[:alnum:]_-]+:/{print substr($$1,1,index($$1,":")),c}1{c=0}' mm.mk | column -s: -t

# Open the PSQL interface.
psql: ## Open the PSQL interface.
    @psql contentshq

# Run the development server.
server: ## Run the development server.
    @$(run) $(node) bin/api

# Start the local postgres server.
start: ## Start the local postgres server.
    @pg_ctl -D /usr/local/var/postgres -l /usr/local/var/postgres/server.log start

# Stop the local postgres server.
stop: ## Stop the local postgres server.
    @pg_ctl -D /usr/local/var/postgres stop -s -m fast

# Run all of the tests.
test: ## Run all of the tests.
    @$(run) $(mocha) $(tests)

# Run specific route tests.
$(shorthand-tests): ## Run specific route tests.
    @$(run) $(mocha) test/routes/$(subst test-,,$@).js

# Watch the source files with Babel.
watch: ## Watch the source files with Babel.
    @WATCH=true $(MAKE) build

# Phony targets.
.PHONY: help
.PHONY: test
.PHONY: watch

Here's a little awk script which will handle the case where the comment comes before the target name. I used column to put the text into columns, but you could do it in awk using a printf (and, if you wanted, adding the console codes to colorize the output); the advantage of using column is that it works out the column width automatically.
You'd insert it into the Makefile in the same way:
.PHONY: help

# Show this help.
help:
    @awk '/^#/{c=substr($$0,3);next}c&&/^[[:alpha:]][[:alnum:]_-]+:/{print substr($$1,1,index($$1,":")),c}1{c=0}' $(MAKEFILE_LIST) | column -s: -t
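For reference, here is roughly what the all-awk variant mentioned above might look like. This is only a sketch: the 30-character column width and the cyan colour code are arbitrary choices, not anything the question requires.

# Show this help (column formatting and colour done by awk instead of column).
help:
    @awk '/^#/{c=substr($$0,3);next} c&&/^[[:alpha:]][[:alnum:]_-]+:/{printf "\033[36m%-30s\033[0m %s\n", substr($$1,1,index($$1,":")-1), c} 1{c=0}' $(MAKEFILE_LIST)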

You need a help rule that will actually extract and print the messages. Assuming the article you're talking about is http://marmelab.com/blog/2016/02/29/auto-documented-makefile.html, this is what you're looking for.
.PHONY: help
help:
    @grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'
Your docs-before-the-target approach might be doable, but it will require some more work. Most UNIX tools work line by line, so it's easier to do with the comments placed as the article specifies.

Related

Is there a Buildroot setting for making kernel source available to compile against?

I've been given a .zip file containing source for a proprietary kernel module. Once unzipped, there is an install script that needs to be run. The install script untar's the actual source and builds the kernel module. It requires kernel headers to compile against.
Here is my Buildroot .mk file:
FOOCO_VERSION = 1.0
FOOCO_SOURCE = cust_kernel_drvr.zip
FOOCO_SITE = /mnt/third-party/fooco
FOOCO_SITE_METHOD = local

define FOOCO_CONFIGURE_CMDS
    unzip $(@D)/$(FOOCO_SOURCE) -d $(@D)
endef

define FOOCO_BUILD_CMDS
    chmod +x $(@D)/TOOLS/Linux_x64/DRIVER/install
    cd $(@D)/TOOLS/Linux_x64/DRIVER; $(SHELL) ./install
    rm -rf $(@D)
endef

$(eval $(generic-package))
This results in the following log output and error:
(Note: I enabled debugging that shows the start and end of each step.)
DEBUG: start | rsync | fooco
>>> fooco 1.0 Syncing from source dir /mnt/third-party/fooco
rsync -au --chmod=u=rwX,go=rX --exclude .svn --exclude .git --exclude .hg --exclude .bzr --exclude CVS /mnt/third-party/fooco/ /root/buildroot-2022.02.1/output/build/fooco-1.0
DEBUG: end | rsync | fooco
DEBUG: start | configure | fooco
>>> fooco 1.0 Configuring
unzip /root/buildroot-2022.02.1/output/build/fooco-1.0/cust_kernel_drvr.zip -d /root/buildroot-2022.02.1/output/build/fooco-1.0
Archive: /root/buildroot-2022.02.1/output/build/fooco-1.0/cust_kernel_drvr.zip
[snip]
creating: /root/buildroot-2022.02.1/output/build/fooco-1.0/TOOLS/Linux_x64/
creating: /root/buildroot-2022.02.1/output/build/fooco-1.0/TOOLS/Linux_x64/DRIVER/
inflating: /root/buildroot-2022.02.1/output/build/fooco-1.0/TOOLS/Linux_x64/DRIVER/install
inflating: /root/buildroot-2022.02.1/output/build/fooco-1.0/TOOLS/Linux_x64/DRIVER/cust_kernel_drvr-1.2.0.15-0.noarch.rpm
inflating: /root/buildroot-2022.02.1/output/build/fooco-1.0/TOOLS/Linux_x64/DRIVER/cust_kernel_drvr.tar.gz
inflating: /root/buildroot-2022.02.1/output/build/fooco-1.0/TOOLS/Linux_x64/DRIVER/license_gpl.txt
[snip]
DEBUG: end | configure | fooco
DEBUG: start | build | fooco
>>> fooco 1.0 Building
chmod +x /root/buildroot-2022.02.1/output/build/fooco-1.0/TOOLS/Linux_x64/DRIVER/install
cd /root/buildroot-2022.02.1/output/build/fooco-1.0/TOOLS/Linux_x64/DRIVER; /bin/bash ./install
Extracting archive..OK!
Compiling the driver...Error: make[1]: Entering directory '/root/buildroot-2022.02.1/output/build/fooco-1.0/TOOLS/Linux_x64/DRIVER/fooco_cust/src/linux/driver'
common.mk:82: *** Kernel header files not in any of the expected locations.
common.mk:83: *** Install the appropriate kernel development package, e.g.
common.mk:84: *** kernel-devel, for building kernel modules and try again. Stop.
make[1]: Leaving directory '/root/buildroot-2022.02.1/output/build/fooco-1.0/TOOLS/Linux_x64/DRIVER/fooco_cust/src/linux/driver'
Error: unable to find driver file (fooco_cust.ko) in /root/buildroot-2022.02.1/output/build/fooco-1.0/TOOLS/Linux_x64/DRIVER/fooco_cust/src/linux/driver
rm -rf /root/buildroot-2022.02.1/output/build/fooco-1.0
DEBUG: end | build | fooco
touch: cannot touch '/root/buildroot-2022.02.1/output/build/fooco-1.0/.stamp_built': No such file or directory
make: *** [/root/buildroot-2022.02.1/output/build/fooco-1.0/.stamp_built] Error 1
package/pkg-generic.mk:289: recipe for target '/root/buildroot-2022.02.1/output/build/fooco-1.0/.stamp_built' failed
I found that the makefiles that came with the kernel module look in several places for the kernel headers:
/lib/modules/${BUILD_KERNEL}/source \
/lib/modules/${BUILD_KERNEL}/build \
/usr/src/linux-${BUILD_KERNEL} \
/usr/src/linux-$(${BUILD_KERNEL} | sed 's/-.*//') \
/usr/src/kernel-headers-${BUILD_KERNEL} \
/usr/src/kernel-source-${BUILD_KERNEL} \
/usr/src/linux-$(${BUILD_KERNEL} | sed 's/\([0-9]*\.[0-9]*\)\..*/\1/') \
/usr/src/linux \
/usr/src/kernels/${BUILD_KERNEL} \
/usr/src/kernels
Why is the kernel source not visible to this build? I thought that, since Buildroot is building the kernel as part of the overall process, the header files would be available for subsequent kernel module compiles. Am I missing a setting? I feel that I'm not understanding the Buildroot process in a basic way, even after referring to the manual many times.
I'm using Buildroot 2022.02.1 and kernel 5.15.33.
Your download/extract logic is very convoluted. You should really use something like this:
FOO_SITE = /mnt/third-party/fooco
FOO_SOURCE = cust_kernel_drvr.zip
FOO_SITE_METHOD = file
define FOO_EXTRACT_CMDS
unzip $(FOO_DLDIR)/$(FOOCO_SOURCE) -d $(#D)
endef
Regarding the build issue: it is impossible to help without studying the specific build system of this kernel module. Very likely you will need to pass some environment variables to tell the build system where your kernel source code is located, and possibly other things, but without looking at the specific details it's hard to say more.
You can have a look at how standard out-of-tree kernel modules are handled by looking at the package/pkg-kernel-module.mk code. However, that will not be directly useful to a package like yours that uses a custom installation script.
The magic was the LINUX_DIR variable which, according to the Buildroot user manual:
contains the path to where the Linux kernel has been extracted and built.
I was able to patch the install script to pass this variable down to the makefile that was looking for the kernel headers.
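In case it helps others, here is a sketch of how that could look in the package's build commands. KERNEL_SRC_DIR is only a placeholder name here, since the variable the vendor install script actually reads depends on its common.mk; LINUX_DIR itself is the Buildroot-provided variable quoted above:

define FOOCO_BUILD_CMDS
    chmod +x $(@D)/TOOLS/Linux_x64/DRIVER/install
    # KERNEL_SRC_DIR stands in for whatever variable the vendor script expects;
    # LINUX_DIR points at the kernel tree Buildroot has already extracted and built.
    cd $(@D)/TOOLS/Linux_x64/DRIVER; \
        KERNEL_SRC_DIR=$(LINUX_DIR) $(SHELL) ./install
endef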

Where's the location of the app/binary defined in a Yocto recipe?

I have the following recipe, which runs a service that in turn runs an app on boot-up, but I am trying to understand where the install location of the app that ends up in the image is defined.
Currently, the appSource binary (defined in the Makefile) gets installed into /usr/bin, but I'm not sure where that destination location (/usr/bin) is defined.
The following command results in:
$ bitbake -e appSource | grep ^FILES_${PN}
FILES_appSource="/usr/bin/* /usr/sbin/* /usr/libexec/* /usr/lib/lib*.so.* /etc /com /var /bin/* /sbin/* /lib/*.so.* /lib/udev /usr/lib/udev /lib/udev /usr/lib/udev
Here's the recipe:
inherit autotools-brokensep pkgconfig

DESCRIPTION = "A sample recipe"
LICENSE = "CLOSED"

DEPENDS = "glib-2.0"

FILESPATH =+ "${THISDIR}:"
SRC_URI = "file://appSource"
S = "${WORKDIR}/appSource"

FILES_${PN} += "${systemd_unitdir}/*"
INIT_MANAGER = "systemd"

do_install_append() {
    if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
        install -d ${D}/etc/initscripts
        install -d ${D}${systemd_unitdir}/system
        install -m 0644 ${WORKDIR}/appService/appService.service ${D}${systemd_unitdir}/system/appService.service
        install -d ${D}${systemd_unitdir}/system/multi-user.target.wants/
        ln -sf ${systemd_unitdir}/system/appService.service ${D}${systemd_unitdir}/system/multi-user.target.wants/appService.service
    fi
}
Here is what I found out:
You are inheriting autotools-brokensep which has the following content:
# Autotools class for recipes where separate build dir doesn't work
# Ideally we should fix software so it does work. Standard autotools supports
# this.
inherit autotools
B = "${S}"
So it inherits autotools, which provides a do_install with the following content:
autotools_do_install() {
    oe_runmake 'DESTDIR=${D}' install
    # Info dir listing isn't interesting at this point so remove it if it exists.
    if [ -e "${D}${infodir}/dir" ]; then
        rm -f ${D}${infodir}/dir
    fi
}
So it runs the install target of your Makefile with DESTDIR set to ${D}, which is ${WORKDIR}/image.
So, I assume that your Makefile has an install target that copies the binary into /usr/bin.
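In other words, something along these lines in the Makefile would produce that result. This is a hypothetical sketch only, since the actual Makefile wasn't posted; DESTDIR is the variable that autotools_do_install() sets to ${D}:

PREFIX ?= /usr

# Hypothetical install target: the binary lands in $(DESTDIR)/usr/bin,
# i.e. ${D}/usr/bin during do_install, and therefore /usr/bin on the target.
install: appSource
    install -d $(DESTDIR)$(PREFIX)/bin
    install -m 0755 appSource $(DESTDIR)$(PREFIX)/bin/appSource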
For the FILES variable content, this is defined in bitbake.conf:
FILES_${PN} = "${bindir}/* ${sbindir}/* ${libexecdir}/* ${libdir}/lib*${SOLIBS} \ ...
Provide your Makefile to confirm my assumption, or I can do further research on the topic.

Parallel execution on multiple directories

I would like to create multiple directories containing some files using one Makefile.
I have a directory structure like this:
conf_a/conf.json
conf_b/conf.json
main.py
Makefile
requirements.txt
And I would like to type make conf_a and end up with a new tree like this:
build/conf_a/conf.json
build/conf_a/main.py
build/conf_a/requirements.txt
conf_a/conf.json
conf_b/conf.json
main.py
Makefile
requirements.txt
Or run something like make conf_b and end up with this:
build/conf_b/conf.json
build/conf_b/main.py
build/conf_b/requirements.txt
conf_a/conf.json
conf_b/conf.json
main.py
Makefile
requirements.txt
So I've made a Makefile like this:
# Disable built-in rules and variables
MAKEFLAGS += --no-builtin-rules
MAKEFLAGS += --no-builtin-variables

.ONESHELL:
.SHELLFLAGS := -ec
.SILENT:

BUILD_DIR := $(CURDIR)/build
CONF_FILE := conf.json
FILES_TO_COPY := requirements.txt main.py
FUNCTION_DIRS := $(shell ls */$(CONF_FILE) | xargs -n 1 -I {} dirname {})
HIDDEN_FUNCTION_DIRS := $(shell ls .*/$(CONF_FILE) 2> /dev/null | xargs -n 1 -I {} dirname {})

clean:
    rm -rf $(BUILD_DIR)

all: clean $(FUNCTION_DIRS) deploy

$(FUNCTION_DIRS) $(HIDDEN_FUNCTION_DIRS):
    tmp=$@
    FUNCTION_DIR=$${tmp%/}
    export FUNCTION=$${FUNCTION_DIR#.}
    mkdir -p $(BUILD_DIR)/$$FUNCTION
    cp -f $(FILES_TO_COPY) $$FUNCTION_DIR/$(CONF_FILE) $(BUILD_DIR)/$$FUNCTION/

test:
    for FUNCTION in $(shell ls $(BUILD_DIR))
    do
        echo "Testing $$FUNCTION"
    done

deploy:
    for FUNCTION in $(shell ls $(BUILD_DIR))
    do
        echo "Deploying $$FUNCTION"
    done
Well, it works...
So if I want to test a conf I do: make conf_a test.
If I want to deploy: make conf_b deploy
It works quite well, but the test and deploy targets are sequential (because of the for loop) when they could be parallel.
My problem is that I have too many configuration directories, and because the deploy is slow, running in parallel would be a lot better.
But I do not know how to structure the Makefile to make it work this way.
Any ideas?
Truth be told, the deploy task deploys a GCP Cloud Function, and the test target just runs the function locally.
Generally, the easiest way to structure a makefile to facilitate parallel operations is to define separate targets that can be processed in parallel. Then you can use make's -j option to request that it take care of the parallelization across (up to) a specific maximum number of parallel tasks.
For example:
deploy: deploy_a deploy_b

deploy_a: conf_a
    echo deploying conf_a

deploy_b: conf_b
    echo deploying conf_b
Then you can run make -j2 deploy and (probably) the deploy_a and deploy_b rules will be processed in parallel. But note that that might not help much. Even though you have separate processes for the two deployments, if you're deploying both to the same local disk then they won't truly be able to write to different files on the disk at the same time. As a result, you'll probably not see a significantly better time to completion, and it might even be worse.
Note, too, that the above example eschews dynamically determining the available component directories. Such dynamism is atypical of makefiles, and IMO it rarely provides a net benefit. Nevertheless, GNU make (on which specific implementation you are already relying) does offer mechanisms by which you could generate the needed per-directory deployment rules dynamically.
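For example, a sketch of what that dynamic generation could look like, reusing the conf.json discovery idea from the question (names like CONF_DIRS and the deploy-% pattern are just illustrative):

# Discover the configuration directories and derive one deploy target per directory.
CONF_DIRS   := $(patsubst %/,%,$(dir $(wildcard */conf.json)))
DEPLOY_TGTS := $(addprefix deploy-,$(CONF_DIRS))

.PHONY: deploy $(DEPLOY_TGTS)

deploy: $(DEPLOY_TGTS)

# One deploy target per configuration directory; add the existing per-directory
# build target as a prerequisite if the copy-into-build step must run first.
deploy-%:
    @echo "Deploying $*"

With that in place, make -j4 deploy asks make to fan the per-directory deployments out over up to four parallel jobs.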

Multi-dependency makefile target

The problem I'm experiencing is that an all target has dependencies on other targets that set a variable and then run matching dependencies.
Outcome: it will run the first dependency, then stop.
Expected: run both dependencies, setting the variable properly between each run.
Is make smart enough to see that pull and build were already run, and since the dependency targets themselves have nothing to execute, does it see all dependencies as complete? Or am I just abusing make in ways it should not be used?
Said makefile:
repo=rippio
image=default

pull:
    @docker pull $(repo)/$(image):latest

build: pull
    @sed -e 's/{{repo}}/$(repo)/' -e 's/{{image}}/$(image)/' Dockerfile.in > Dockerfile && \
        docker build -t=$(repo)/$(image):custom .
    @rm -f Dockerfile

node: image=node
node: | pull build

jdk8: image=jdk8
jdk8: | pull build

all: | node jdk8
TLDR
It is used to:
Pull the latest docker image
Run a generically designed Dockerfile against it to customize it
Tag it as :custom for internal use
Pretty handy for customizing images in a generic manner without managing many Dockerfiles.
Dockerfile template (Dockerfile.in), in case you're interested:
FROM {{repo}}/{{image}}:latest
... super secret sauce
UPDATE (ANSWER)
Thanks to @G.M., I ended up with:
IMAGE_NAMES := node jdk8
TARGETS := $(patsubst %,build-%,$(IMAGE_NAMES))
repo=rippio

all: $(TARGETS)

build-%: pull-%
    @sed -e 's/{{repo}}/$(repo)/' -e 's/{{image}}/$*/' Dockerfile.in > Dockerfile-$* && \
        docker build -f=Dockerfile-$* -t=$(repo)/$*:custom .
    @rm -f Dockerfile-$*

pull-%:
    @docker pull $(repo)/$*:latest
Which allows for:
Easy upkeep of 'all' targets, which constantly grows
Running parallel via make -j (note the Dockerfile-$* file pattern)
Much more beautiful than before
If you draw your dependency graph out long-hand you'll see that there are multiple paths from all to both pull and build -- one via each of node and jdk8. But make, having reached/updated pull and build via one path, will then assume that they are both up to date and, hence, not bother to update them further -- regardless of any change to target-specific variables.
I think what you're trying to do (assuming I've understood correctly) might be more easily achieved using pattern rules.
IMAGE_NAMES := node jdk8
TARGETS := $(patsubst %,build-%,$(IMAGE_NAMES))
repo=rippio
all: $(TARGETS)
build-%: pull-%
    @sed -e 's/{{repo}}/$(repo)/' -e 's/{{image}}/$*/' Dockerfile.in > Dockerfile && \
        docker build -t=$(repo)/$*:custom .
    @rm -f Dockerfile

pull-%:
    @docker pull $(repo)/$*:latest
Note: You currently have all build recipes using the same input/output file Dockerfile. That will cause problems if you ever want to use parallel builds -- make -j etc. It might be wise to use the stem from the pattern rule match to uniquely identify the output file if that's possible.
Normally, if you invoke make with:
make all
and if none of pull, build, node, jdk8 are existing files, make should build pull and build. If you see only pull being made, it may be because you invoked make without specifying a goal; in that case make builds the first target it finds in the Makefile (pull in your case).
Anyway, there are several strange aspects to your Makefile: you use order-only prerequisites on what look like phony targets, and these phony targets are not declared as such.
I am not sure I fully understand what you are trying to do but maybe something like this would be a good starting point:
repo=rippio
image=default

.PHONY: all build node jdk8

all: node jdk8

node: image = node
jdk8: image = jdk8

build node jdk8:
    @docker pull $(repo)/$(image):latest && \
        sed -e 's/{{repo}}/$(repo)/' -e 's/{{image}}/$(image)/' Dockerfile.in > Dockerfile && \
        docker build -t=$(repo)/$(image):custom . && \
        rm -f Dockerfile
Note: if, instead of build, you name the default target default, you could simplify even further with:
repo=rippio

.PHONY: all default node jdk8

all: node jdk8

default node jdk8:
    @docker pull $(repo)/$@:latest && \
        sed -e 's/{{repo}}/$(repo)/' -e 's/{{image}}/$@/' Dockerfile.in > Dockerfile && \
        docker build -t=$(repo)/$@:custom . && \
        rm -f Dockerfile

what, besides the source, would cause an explicit rule to execute to produce a makefile target?

It is clear that the target is newer than the source from these two ls commands:
[metaperl@andLinux ~/edan/pkg/gist.el] ls -l ../../wares/gist.el/gist.elc #target
-rw-r--r-- 1 metaperl metaperl 10465 Jul 18 10:56 ../../wares/gist.el/gist.elc
[metaperl@andLinux ~/edan/pkg/gist.el] ls -l yank/gist.el/gist.el #source
-rw-r--r-- 1 metaperl metaperl 13025 Jul 18 10:57 yank/gist.el/gist.el
[metaperl@andLinux ~/edan/pkg/gist.el]
However, when I run makepp -v I am told that this rule depends not only on the listed target, but also on the cd and mv commands.
makepplog: Targets `/home/metaperl/edan/wares/gist.el/gist.elc'
  depend on `/usr/local/bin/emacs',
  `/home/metaperl/edan/pkg/gist.el/yank/gist.el/gist.el',
  `/bin/mv'
What aspect of make logic dictates that the actions used to produce the target are part of the dependency chain when deciding whether to make the target?
To my mind, only the listed sources should affect whether or not the target is rebuilt.
The entire makepp -v output is quite long, and exists at:
http://gist.github.com/480468
My makepp file:
include main.makepp
#VER

PKG := gist.el
URL := http://github.com/defunkt/$(PKG).git
TARGET := $(WARES)gist.el/gist.elc

$(TARGET) : yank/gist.el/gist.el
    cd $(dir $(input)) && $(BYTECOMPILE) gist.el
    mv $(dir $(input)) $(WARES)
    perl {{
        print 'github username: ';
        my $username = <STDIN>;
        print 'github API token: ';
        my $api_token = <STDIN>;
        system "git config --global github.user $username";
        system "git config --global github.token $api_token";

        use File::Butler;
        my $lines = Butler('init.el', 'read');
        my $loc = sprintf '%s%s', $EDAN_PKG, "$PKG/";
        $lines =~ s/__LOC__/$loc/g;
        $lines =~ s/__PKG__/$PKG/g;
        Butler( $EDAN_EL, prepend => \$lines );
    }}

yank/gist.el/gist.el : yank
    cd yank && git clone http://github.com/defunkt/gist.el.git

yank:
    mkdir yank

$(phony clean):
    $(RM) -rf $(dir $(TARGET)) yank
With a standard make, the contents of the commands to make a target are not taken into account when deciding whether to rebuild the target. Only the dependencies are taken into account; this can go beyond the source if you have dependencies declared elsewhere.
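As a generic illustration (not taken from the makefile in the question), an extra prerequisite can be declared in a separate, recipe-less rule and it still influences whether the target is rebuilt:

# foo.o is rebuilt when either foo.c or config.h changes, even though
# config.h is only declared in a second rule that carries no commands.
foo.o: foo.c
    cc -c foo.c

foo.o: config.h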
You don't show your makeppfile, so I can't be sure, but the Parsing command... messages from makepp -v make me suspect that makepp behaves differently from standard make on this count.
makepp will rebuild a target if any of the dependencies have changed or if the command has changed. In your case, I suspect that either some of the variables that you use in the rule to make $(TARGET) have changed or that makepp is seeing that the commands are constructed dynamically and is automatically rebuilding the target. Try using the -m target_newer option to makepp to force it to use the old GNU make method (that is, only re-build if the source is newer than the target).