I am working on a Makefile which has a¹ recipe producing some file using M4. It uses some complex shell constructions to compute macro values which have to be passed to M4. How can I organize the code to avoid the redundant declarations shown in the following example?
M4TOOL= m4
M4TOOL+= -D PACKAGE=$$(cd ${PROJECTBASEDIR} && ${MAKE} -V PACKAGE)
M4TOOL+= -D VERSION=$$(cd ${PROJECTBASEDIR} && ${MAKE} -V VERSION)
M4TOOL+= -D AUTHOR=$$(cd ${PROJECTBASEDIR} && ${MAKE} -V AUTHOR)
M4TOOL+= -D RMD160=$$(openssl rmd160 ${DISTFILE} | cut -d ' ' -f 2)
M4TOOL+= -D SHA256=$$(openssl sha256 ${DISTFILE} | cut -d ' ' -f 2)
Portfile: Portfile.m4
	${M4TOOL} ${.ALLSRC} > ${.TARGET}
¹ Actually a lot!
You should define pseudo-commands using the -c option of the shell, like this:
PROJECTVARIABLE=sh -c 'cd ${PROJECTBASEDIR} && ${MAKE} -V $$1' PROJECTVARIABLE
OPENSSLHASH=sh -c 'openssl $$1 $$2 | cut -d " " -f 2' OPENSSLHASH
Note the use of $ or $$ to select bsdmake variable expansion or shell variable expansion. With these definitions you can reorganise your code like this:
M4TOOL+= -D PACKAGE=$$(${PROJECTVARIABLE} PACKAGE)
M4TOOL+= -D VERSION=$$(${PROJECTVARIABLE} VERSION)
M4TOOL+= -D AUTHOR=$$(${PROJECTVARIABLE} AUTHOR)
M4TOOL+= -D RMD160=$$(${OPENSSLHASH} rmd160 ${DISTFILE})
M4TOOL+= -D SHA256=$$(${OPENSSLHASH} sha256 ${DISTFILE})
The result is arguably easier to read and maintain. When you write such scripts, remember to use error codes and stderr to report errors.
PS: You can take a look at the COPYTREE_SHARE macro in /usr/ports/Mk/bsd.port.mk on a FreeBSD system. It illustrates the technique well.
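For reference, the word after the quoted script becomes $0 inside the inline script, and any further words become $1, $2, and so on; that is why the pseudo-command name is repeated at the end. A quick way to see this (plain shell, not escaped for make):
sh -c 'echo "$0 got $1 and $2"' DEMO first second
# prints: DEMO got first and second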
Related
I am using a debian-based docker container to build a LaTeX project. The following rule succeeds when run on the host (not inside docker):
.PHONY : timetracking
timetracking:
	$(eval TODAY := $(if $(PAGE),$(PAGE),$(shell TZ=$(TIMEZ) date +%Y-%m-%d)))
	touch $(PAGES)/$(WEEKLY)/$(TODAY).tex
	cat template/page-header-footer/head.tex > $(PAGES)/$(WEEKLY)/$(TODAY).tex;
	cat template/page-header-footer/pagestart.tex >> $(PAGES)/$(WEEKLY)/$(TODAY).tex;
	echo {Week of $(TODAY)} >> $(PAGES)/$(WEEKLY)/$(TODAY).tex;
	cat template/page-header-footer/timetracking.tex >> $(PAGES)/$(WEEKLY)/$(TODAY).tex;
	cat template/page-header-footer/tail.tex >> $(PAGES)/$(WEEKLY)/$(TODAY).tex;
	cat $(PAGES)/$(WEEKLY)/$(TODAY).tex \
	| sed 's/1 January/'"$$(TZ=$(TIMEZ) date +'%d %B')/g" \
	| sed 's/Jan 1/'"$$(TZ=$(TIMEZ) date +'%b %d')/g" \
	| sed 's/Jan 2/'"$$(TZ=$(TIMEZ) date +'%b %d' -d '+1 days')/g" \
	| sed 's/Jan 3/'"$$(TZ=$(TIMEZ) date +'%b %d' -d '+2 days')/g" \
	| sed 's/Jan 4/'"$$(TZ=$(TIMEZ) date +'%b %d' -d '+3 days')/g" \
	| sed 's/Jan 5/'"$$(TZ=$(TIMEZ) date +'%b %d' -d '+4 days')/g" \
	> $(PAGES)/$(WEEKLY)/$(TODAY).tex;
but when the same rule is run within the docker container, it has variable behavior:
Succeeds (file generated as expected)
Creates a blank file (unexpected)
Creates a file filled with NUL characters (unexpected)
This behavior is a result of the modifications made with sed. The template files have some text containing "January 1" and "Jan 1", "Jan 2", "Jan 3", etc. which are to be replaced.
I would like help understanding:
why does this rule behave erratically inside docker
how can I rewrite the rule to behave reliably with docker
At the moment I can run this rule (and others like it) on the host, so long as I have basic tools like Make and sed installed. But it would be ideal if I could dockerize the entire workflow.
By request, the Dockerfile contents are below. Most of the installation instructions are irrelevant, since this question is about make and sed. The tools directory contains a deb file for pandoc and is also irrelevant to this question.
FROM debian:buster
RUN apt -y update
RUN apt -y install vim
RUN apt -y install make
RUN apt -y install texlive-full
RUN apt -y install biber
RUN apt -y install bibutils
RUN apt -y install python-pygments
RUN apt -y install cysignals-tools
RUN apt -y install sagemath
RUN apt -y install python-sagetex
RUN apt -y install sagetex
COPY tools /tools
RUN dpkg -i /tools/*deb
WORKDIR /results
ENTRYPOINT ["/usr/bin/make"]
There's a race condition in your shell syntax. When you run
cat file.tex \
| sed ... \
> file.tex
first the shell opens the output file for writing (processing the > file.tex), then it creates and starts the various subprocesses, and only then does cat(1) open that same file for reading. It's possible, but not guaranteed, that the "open for write" step will truncate the file before the "open for read" step gets any content from it.
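You can reproduce this without make or Docker at all (plain shell; the file name is arbitrary):
printf 'hello\n' > demo.txt
cat demo.txt | sed 's/hello/world/' > demo.txt   # the > usually truncates demo.txt before cat gets to read it
cat demo.txt                                     # most of the time this prints nothing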
The easiest way to get around this is to have sed(1) edit the file in place using its -i option. This isn't a POSIX sed option, but both GNU sed (Debian/Ubuntu images) and BusyBox (Alpine images) support it. sed(1) supports multiple -e options to run multiple expressions, so you can use a single sed command to do this.
# (Bourne shell syntax, not escaped for Make)
sed \
-e 's/1 January/'"$(TZ=$(TIMEZ) date +'%d %B')/g" \
-e 's/Jan 1/'"$(TZ=$(TIMEZ) date +'%b %d')/g" \
-e 's/Jan 2/'"$(TZ=$(TIMEZ) date +'%b %d' -d '+1 days')/g" \
-e 's/Jan 3/'"$(TZ=$(TIMEZ) date +'%b %d' -d '+2 days')/g" \
-e 's/Jan 4/'"$(TZ=$(TIMEZ) date +'%b %d' -d '+3 days')/g" \
-e 's/Jan 5/'"$(TZ=$(TIMEZ) date +'%b %d' -d '+4 days')/g" \
-i \
$(PAGES)/$(WEEKLY)/$(TODAY).tex
Be careful with this option, though. In GNU sed, -i optionally takes an extension parameter to keep a backup copy of the file, and the optional parameter can have confusing syntax. In BusyBox sed, -i does not take a parameter. In BSD sed (MacOS hosts) the parameter is required.
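For reference, the two invocations look like this (a sketch; check your platform's sed man page):
sed -i.bak -e 's/Jan 1/Feb 1/' file.tex    # GNU sed: an optional backup suffix must be attached to -i
sed -i '' -e 's/Jan 1/Feb 1/' file.tex     # BSD sed: the suffix argument is required, but may be empty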
If you have to deal with this ambiguity, you can work around it by separately creating and renaming the file.
sed -e 's/.../.../g' -e 's/.../.../g' ... \
$(PAGES)/$(WEEKLY)/$(TODAY).tex \
> $(PAGES)/$(WEEKLY)/$(TODAY).tex.new
mv $(PAGES)/$(WEEKLY)/$(TODAY).tex.new $(PAGES)/$(WEEKLY)/$(TODAY).tex
In a Make context you might just treat these as separate files.
# lots of GNU Make extensions
export TZ=$(TIMEZ)
TODAY := $(if $(PAGE),$(PAGE),$(shell date +%Y-%m-%d))
BASENAME := $(PAGES)/$(WEEKLY)/$(TODAY)
.PHONY: timestamps
timestamps: $(BASENAME).pdf
$(BASENAME).pdf: $(BASENAME).tex
	pdflatex $<
$(BASENAME).tex: $(BASENAME)-original.tex
	sed \
	-e "s/1 January/$$(date +'%d %B')/g" \
	...
	$< > $@
$(BASENAME)-original.tex: \
template/page-header-footer/head.tex \
template/page-header-footer/pagestart.tex \
template/page-header-footer/timetracking.tex \
template/page-header-footer/tail.tex
	cat template/page-header-footer/head.tex > $@
	cat template/page-header-footer/pagestart.tex >> $@
	echo {Week of $(TODAY)} >> $@
	cat template/page-header-footer/timetracking.tex >> $@
	cat template/page-header-footer/tail.tex >> $@
I've taken advantage of Make's automatic variables to reduce repetition here: $@ is the current target (the file on the left-hand side of the rule, the one we're building) and $< is its first dependency (the first thing after the colon).
You also may consider whether some of this can be done in TeX itself. For example, there are packages to format date stamps and built-in macros to include files. If you can put all of this in the .tex file itself then you don't need the complex Make syntax.
In a directory I have a config file with my db variables.
This file (db/database.ini) looks like this:
[PostgreSQL]
host=localhost
database=...
user=postgres
password=...
I have another file (db/create_stmts.sql) with all my raw CREATE TABLE statements, and I am experimenting with using a Makefile to get a command like this:
make create-db from_file=db/create_stmts.sql
In order not to repeat myself, I thought of tailing the variables of db/database.ini into a file which I would then source, creating shell variables to pass to psql in the Makefile.
Here's my plan:
create-db:
	# from_file: path to .sql file with all create statements to create the database where to insert
	# how to run: make create-db from_file={insert path to sql file}
	file_path=$(PWD)/file.sh
	tail -n4 db/database.ini > file.sh && . $(file_path)
	# -U: --user
	# -d: --database
	# -q: --quiet
	# -f: --file
	psql -U $(user) -d $(database) -q -f $(from_file) && rm file.sh
Which I run by: make create-db from_file=db/create_stmts.sql
Which gives me this output, from which I kind of understand that the sourcing just did not work:
#from_file: path to .sql file with all create statements to create the database where to insert
# how to run: make create-db from_file={insert path to sql file}
file_path=/home/gabriele/Desktop/TIUK/companies-house/file.sh
tail -n4 db/database.ini > file.sh && .
# -U: --user
# -d: --database
# -q: --quiet
# -f: --file
psql -U -d -q -f db/schema_tables.sql && rm file.sh
psql: FATAL: Peer authentication failed for user "-d"
Makefile:3: recipe for target 'create-db' failed
make: *** [create-db] Error 2
Any help?
Another solution, perhaps simpler to understand:
create-db:
	file_path=$$PWD/file.sh; \
	tail -n4 db/database.ini > file.sh && . $$file_path; \
	psql -U $$user -d $$database -q -f $$from_file && rm file.sh
Note the use of ; and \ to convince make to run all the commands in a single shell, and of $$ to escape the $ so that the shell, not make, expands the variable references.
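If you are on GNU make 3.82 or newer, the .ONESHELL special target gets you the same single-shell behaviour without the trailing backslashes (a sketch; note that .ONESHELL affects every recipe in the makefile):
.ONESHELL:
create-db:
	file_path=$$PWD/file.sh
	tail -n4 db/database.ini > file.sh && . $$file_path
	psql -U $$user -d $$database -q -f $$from_file && rm file.sh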
The error is visible in the output, namely:
psql -U -d -q -f db/schema_tables.sql && rm file.sh
This happens because the make variables $(user) and $(database) aren't set. Every line within a target is executed in a separate sub-shell, so there is no way to use source the way you would in a regular script.
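A quick way to see this (each recipe line below runs in its own shell):
demo:
	foo=bar
	echo "foo is '$$foo'"   # prints: foo is '' -- the assignment happened in a different shell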
You could create a file named database.mk in which you define these variables and use include database.mk at the top of your makefile to include them:
Makefile
CONFILE ?= database
include $(CONFILE).mk
test:
	@echo $(user)
	@echo $(database)
database.mk
user := user
database := data
If you want to parse the ini file directly, you could do it like this:
CONFILE := db/database.ini
create-db: _setup_con
	echo $(user) $(database)
	# your target
_setup_con:
	$(eval user=$(shell grep "user=" $(CONFILE) | grep -Eo "[^=]*$$"))
	$(eval database=$(shell grep "database=" $(CONFILE) | grep -Eo "[^=]*$$"))
	# and so forth
I would do this in a more Make-like way, using Make's ability to generate and include makefiles automatically. Given that the configuration file is a simple properties file, its syntax is easily parsed by Make; it's sufficient to extract just the lines with variables, i.e.:
include database.mk
database.mk: db/database.ini
	grep -E '^\w+=\w+$$' $< > $@
.PHONY: create-db
create-db: $(from_file)
	psql -U $(user) -d $(database) -q -f $<
Some additional notes:
create-db should be made .PHONY to avoid the situation where nothing is done because somebody has created (accidentally or not) a file named create-db,
by making create-db depend on $(from_file), one gets a clean and readable error from make when the file does not exist, instead of a possibly cryptic error later.
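For concreteness, with the database.ini from the question the generated database.mk contains plain name=value lines, which make parses as ordinary variable assignments once the file is included; the [PostgreSQL] section header does not match the pattern and is filtered out. Only the two values actually spelled out in the question are shown here:
# database.mk, generated by: grep -E '^\w+=\w+$' db/database.ini
host=localhost
user=postgres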
I'm trying to run a script for pulling finance history from yahoo. Boris's answer from this thread
wget can't download yahoo finance data any more
works for me ~2 out of 3 times, but fails if the crumb returned from the cookie has a "\" character in it.
Code that sometimes works looks like this
#!/usr/bin/sh
symbol=$1
today=$(date +%Y%m%d)
tomorrow=$(date --date='1 days' +%Y%m%d)
first_date=$(date -d "$2" '+%s')
last_date=$(date -d "$today" '+%s')
wget --no-check-certificate --save-cookies=cookie.txt https://finance.yahoo.com/quote/$symbol/?p=$symbol -O C:/trip/stocks/stocknamelist/crumb.store
crumb=$(grep 'root.*App' crumb.store | sed 's/,/\n/g' | grep CrumbStore | sed 's/"CrumbStore":{"crumb":"\(.*\)"}/\1/')
echo $crumb
fileloc=$"https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$first_date&period2=$last_date&interval=1d&events=history&crumb=$crumb"
echo $fileloc
wget --no-check-certificate --load-cookies=cookie.txt $fileloc -O c:/trip/stocks/temphistory/hs$symbol.csv
rm cookie.txt crumb.store
But that doesn't seem to be processed by wget the way I intend either; it seems to be interpreted as described here:
https://askubuntu.com/questions/758080/getting-scheme-missing-error-with-wget
Any suggestions on how to pass the $crumb variable into wget so that wget doesn't error out if $crumb has a "\" character in it?
Edited to show the full script. To clarify, I've got Cygwin installed with the wget package. I call the script from the cmd prompt as follows (in this example the script above is named "stocknamedownload.sh", the stock symbol I'm downloading is "A", and the start date is 19800101):
c:\trip\stocks\StockNameList>bash stocknamedownload.sh A 19800101
This script seems to work fine - unless the crumb returned contains a "\" character in it.
The following implementation appears to work 100% of the time -- I'm unable to reproduce the claimed sporadic failures:
#!/usr/bin/env bash
set -o pipefail
symbol=$1
today=$(date +%Y%m%d)
tomorrow=$(date --date='1 days' +%Y%m%d)
first_date=$(date -d "$2" '+%s')
last_date=$(date -d "$today" '+%s')
# store complete webpage text in a variable
page_text=$(curl --fail --cookie-jar cookies \
"https://finance.yahoo.com/quote/$symbol/?p=$symbol") || exit
# extract the JSON used by JavaScript in the page
app_json=$(grep -e 'root.App.main = ' <<<"$page_text" \
| sed -e 's#^root.App.main = ##' \
-e 's#[;]$##') || exit
# use jq to extract the crumb from that JSON
crumb=$(jq -r \
'.context.dispatcher.stores.CrumbStore.crumb' \
<<<"$app_json" | tr -d '\r') || exit
# Perform our actual download
fileloc="https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$first_date&period2=$last_date&interval=1d&events=history&crumb=$crumb"
curl --fail --cookie cookies "$fileloc" >"hs$symbol.csv"
Note that the tr -d '\r' is only necessary when using a native-Windows jq mixed with an otherwise native-Cygwin set of tools.
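Invocation is the same as for the original script, for example (using the question's own example arguments):
bash stocknamedownload.sh A 19800101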
You are adding quotes to the value of the variable instead of quoting the expansion. You are also trying to use tools that don't know what JSON is to process JSON; use jq.
wget --no-check-certificate \
--save-cookies=cookie.txt \
"https://finance.yahoo.com/quote/$symbol/?p=$symbol" \
-O C:/trip/stocks/stocknamelist/crumb.store
# Something like this; it's hard to reverse engineer the structure
# of crumb.store from your pipeline.
crumb=$(jq '.CrumbStore.crumb' crumb.store)
echo "$crumb"
fileloc="https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$first_date&period2=$last_date&interval=1d&events=history&crumb=$crumb"
echo "$fileloc"
wget --no-check-certificate \
--load-cookies=cookie.txt "$fileloc" \
-O c:/trip/stocks/temphistory/hs$symbol.csv
On my Fedora machine I sometimes need to find out certain components of the kernel name, e.g.
VERSION=3.18.9-200.fc21
VERSION_ARCH=3.18.9-200.fc21.x86_64
SHORT_VERSION=3.18
DIST_VERSION=fc21
EXTRAVERSION = -200.fc21.x86_64
I know uname -a/-r/-m, but these do not give me all the components I need.
Of course I can just disassemble uname -r, e.g.
KERNEL_VERSION_ARCH=$(uname -r)
KERNEL_VERSION=$(uname -r | cut -d '.' -f 1-4)
KERNEL_SHORT_VERSION=$(uname -r | cut -d '.' -f 1-2)
KERNEL_DIST_VERSION=$(uname -r | cut -d '.' -f 4)
EXTRAVERSION="-$(uname -r | cut -d '-' -f 2)"
But this seems very cumbersome and not future-safe to me.
Question: is there an elegant way (i.e. more readable and distribution aware) to get all kernel version/name components I need?
Something like this would be nice:
kernel-ver -f "%M.%m.%p-%e.%a"
3.19.4-200.fc21.x86_64
kernel-ver -f "%M.%m"
3.19
kernel-ver -f "%d"
fc21
Of course the uname -r part would need a bit of sed/awk/grep magic. But there are some other options you can try:
cat /etc/os-release
cat /etc/lsb-release
Since it's fedora you can try: cat /etc/fedora-release
lsb_release -a is also worth a try.
cat /proc/version, but that gives nearly the same output as uname -a
In the files /etc/*-release the format is already VARIABLE=value, so you could source the file directly and access the variables later:
$ source /etc/os-release
$ echo $ID
fedora
To sum this up, a command that combines the above ideas and should work on nearly every system:
cat /etc/*_ver* /etc/*-rel* 2>/dev/null
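If you do end up splitting uname -r yourself, parameter expansion reads a bit better than a pile of cut calls (a sketch; it assumes the 3.18.9-200.fc21.x86_64 layout shown in the question):
#!/usr/bin/env bash
r=$(uname -r)                           # e.g. 3.18.9-200.fc21.x86_64
VERSION_ARCH=$r
VERSION=${r%.*}                         # strip the trailing ".x86_64" -> 3.18.9-200.fc21
EXTRAVERSION=-${r#*-}                   # everything after the first "-" -> -200.fc21.x86_64
SHORT_VERSION=$(cut -d. -f1-2 <<<"$r")  # -> 3.18
DIST_VERSION=$(cut -d. -f4 <<<"$r")     # -> fc21
echo "$VERSION $SHORT_VERSION $DIST_VERSION $EXTRAVERSION"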
I would like to implement this as a Makefile task:
# step 1:
curl -u username:password -X POST \
-d '{"name": "new_file.jpg","size": 114034,"description": "Latest release","content_type": "text/plain"}' \
https://api.github.com/repos/:user/:repo/downloads
# step 2:
curl -u username:password \
-F "key=downloads/octocat/Hello-World/new_file.jpg" \
-F "acl=public-read" \
-F "success_action_status=201" \
-F "Filename=new_file.jpg" \
-F "AWSAccessKeyId=1ABCDEF..." \
-F "Policy=ewogIC..." \
-F "Signature=mwnF..." \
-F "Content-Type=image/jpeg" \
-F "file=#new_file.jpg" \
https://github.s3.amazonaws.com/
In the first part, however, I need to get the file size (and the content type if it's easy, though that's not required), so some variable like:
{"name": "new_file.jpg","size": $(FILE_SIZE),"description": "Latest release","content_type": "text/plain"}
I tried this but it doesn't work (Mac 10.6.7):
$(shell du path/to/file.js | awk '{print $1}')
Any ideas how to accomplish this?
If you have GNU coreutils:
FILE_SIZE=$(stat -L -c %s $filename)
The -L tells it to follow symlinks; without it, if $filename is a symlink it will give you the size of the symlink rather than the size of the target file.
The MacOS stat equivalent appears to be:
FILE_SIZE=$(stat -L -f %z $filename)
but I haven't been able to try it. (I've written this as a shell command, not a make command.) You may also find the -s option useful:
Display information in "shell output", suitable for initializing variables.
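A minimal sketch of that approach (BSD/macOS stat; st_size is the field name used in its name=value output):
eval "$(stat -L -s "$filename")"   # defines st_size, st_mode, ... in the current shell
FILE_SIZE=$st_size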
For reference, an alternative method is using du with -b for output in bytes and -s for a summary only, then cut to keep only the first field of the output:
FILE_SIZE=$(du -sb $filename | cut -f1)
This should return the same result in bytes as @Keith Thompson's answer, but will also work for full directory sizes.
Extra: I usually use a macro for this.
define sizeof
$$(du -sb \
$(1) \
| cut -f1 )
endef
It can then be called like this:
$(call sizeof,$filename_or_dirname)
I think this is a case where parsing the output of ls is legitimate:
% FILE_SIZE=`ls -l $filename | awk '{print $5}'`
(no it's not: use stat, as noted by Keith Thompson)
For the type, you can use
% FILE_TYPE=`file --mime-type --brief $filename`
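Putting the size and the type back into the first curl call from the question, a sketch of that step as a Makefile recipe (GNU stat flags shown; on macOS use stat -L -f %z instead; the target name is illustrative, and the credentials and repository path are the question's placeholders):
upload-meta:
	FILE_SIZE=$$(stat -L -c %s new_file.jpg); \
	FILE_TYPE=$$(file --mime-type --brief new_file.jpg); \
	curl -u username:password -X POST \
	-d "{\"name\": \"new_file.jpg\",\"size\": $$FILE_SIZE,\"description\": \"Latest release\",\"content_type\": \"$$FILE_TYPE\"}" \
	https://api.github.com/repos/:user/:repo/downloads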