I'm trying to automate some commands I use regularly in a Makefile, but I can't seem to figure out the right syntax for it.
Given the following targets:
.PHONY: params-list
params-list:
@aws ssm get-parameters-by-path --path /${SERVICE}/${ENV} | jq -c -r '.Parameters[] | .Name'
.PHONY: params-get
params-get:
@aws ssm get-parameter --name ${PARAM} --with-decryption | jq -c -r .Parameter.Value
I was attempting to call params-list and then feed the results into params-get. My best attempt was along the lines of:
.PHONY: params
params:
for param in $(MAKE) params-list; do \
$(MAKE) params-get PARAM=$${param}; \
done
But that obviously doesn't work. What's the right way of achieving this?
One hint is to eschew shell constructs in favour of make ones. Let's go one step at a time.
For the get operation, we encode one target for each param. So for param p1 (say), we invent a target params-get<p1>. Inside the recipe (i.e., the shell commands), $@ will expand to params-get<p1>. Therefore ${@:params-get<%>=%} will expand to p1.
Writing this out in make syntax:
.PHONY: params-get<p1>
params-get<p1>:
@aws ssm get-parameter --name ${@:params-get<%>=%} --with-decryption | jq -c -r .Parameter.Value
I note that with an intermediate variable we can have exactly the same recipe you used in your earlier incantation.
PARAM = ${@:params-get<%>=%}
.PHONY: params-get<p1>
params-get<p1>:
@aws ssm get-parameter --name ${PARAM} --with-decryption | jq -c -r .Parameter.Value
If we have a second param pa (say), that's easy to add:
PARAM = ${@:params-get<%>=%}
.PHONY: params-get<p1> params-get<pa>
params-get<p1> params-get<pa>:
@aws ssm get-parameter --name ${PARAM} …
I hope you can see some boilerplate appearing here.
Moving towards more dynamically generated make: rather than hard-coding each param, let's put all the possible params in a list called ${PARAMS}.
PARAMS := p1 pa another
get-targets := ${PARAMS:%=params-get<%>}
PARAM = ${@:params-get<%>=%}
.PHONY: ${get-targets}
${get-targets}:
@aws ssm get-parameter --name ${PARAM} …
.PHONY: params-get
params-get: ${get-targets}
echo $@ Done
I have added your original params-get target. Now all the work is done by dependencies.
No shell loop in sight.
The exit code of each aws ssm get-parameter is checked by make.
The code is parallel safe: make -j9 will run 9 jobs in parallel until the work is exhausted. Nice if you have 8 CPUs. (If your makefile is not parallel safe then it is broken, by the way.)
We are pretty close now. We just have to ensure $PARAMS is set to the output of ssm get-parameters-by-path.
Something like:
.PHONY: params-list
params-list:
@aws ssm get-parameters-by-path --path /${SERVICE}/${ENV} | jq -c -r '.Parameters[] | .Name'
PARAMS := $(shell ${MAKE} params-list)
get-targets := ${PARAMS:%=params-get<%>}
PARAM = ${@:params-get<%>=%}
.PHONY: ${get-targets}
${get-targets}:
@aws ssm get-parameter --name ${PARAM} --with-decryption | jq -c -r .Parameter.Value
.PHONY: params-get
params-get: ${get-targets}
echo $@ Done
Definitely not a fan of using $(shell …), but this is just a sketch.
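For completeness, a hypothetical invocation of this sketch might look like the following (the SERVICE and ENV values here are made up):
make params-get SERVICE=my-service ENV=dev
make -j4 params-get SERVICE=my-service ENV=dev
The second form fetches up to 4 parameters in parallel. Variables given on the command line, like SERVICE and ENV here, are passed down to the sub-make, so the params-list step run by $(shell ${MAKE} params-list) sees them too.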
You could do a single (long) target/recipe as follows:
.PHONY: params-get
params-get:
@for p in $$(aws ssm get-parameters-by-path --path /${SERVICE}/${ENV} | jq -c -r '.Parameters[] | .Name'); do \
aws ssm get-parameter --name $$p --with-decryption | jq -c -r .Parameter.Value; \
done
This assumes that your parameters don't have whitespace in them.
------------ EDIT ---------------
To be able to access the parameters in another recipe, you can create a rule that writes the list of parameters to a file in one recipe, and then read that file in another recipe, like so:
# note that in this case .params-list is actually the name of a file,
# but it is declared as phony to force it to be rebuilt every time.
.PHONY: .params-list
.params-list:
aws ssm get-parameters-by-path --path /${SERVICE}/${ENV} | jq -c -r '.Parameters[] | .Name' > $@
.PHONY: params-get
params-get: .params-list
@for param in $$(cat .params-list); do \
aws ssm get-parameter --name $${param} --with-decryption | jq -c -r .Parameter.Value; \
done;
Related
I found a link that suggests I should use the following, but perhaps my logic within the Makefile is wrong. I need the Makefile to work for testing purposes on both Mac and Windows. The image is fine and the docker container works; I am just trying to make use of the fact that on Linux/Mac \ can be used to split long commands across lines, whereas on Windows you have to use the backtick (`).
Example:
.PHONY: validate-lookml
validate-lookml:
UNAME_S=$(shell uname -s)
ifeq ($(UNAME), Linux)
docker run --rm -it -e LOOKER_BASE_URL=${LOOKER_BASE_URL} \
-e LOOKER_CLIENTID=${LOOKER_CLIENTID} \
-e LOOKER_CLIENT_SECRET=${LOOKER_CLIENTSECRET} mirantis/mirantis_spectacles \
lookml \
--base-url ${LOOKER_BASE_URL} \
--client-id ${LOOKER_CLIENTID} \
--client-secret ${LOOKER_CLIENTSECRET} \
--project ${PROJECT} \
--branch ${BRANCH}
endif
ifeq ($(UNAME), Darwin)
docker run --rm -it -e LOOKER_BASE_URL=${LOOKER_BASE_URL} \
-e LOOKER_CLIENTID=${LOOKER_CLIENTID} \
-e LOOKER_CLIENT_SECRET=${LOOKER_CLIENTSECRET} mirantis/mirantis_spectacles \
lookml \
--base-url ${LOOKER_BASE_URL} \
--client-id ${LOOKER_CLIENTID} \
--client-secret ${LOOKER_CLIENTSECRET} \
--project ${PROJECT} \
--branch ${BRANCH}
endif
ifeq ($(UNAME), Windows_NT)
docker run --rm -it -e LOOKER_BASE_URL=${LOOKER_BASE_URL} `
-e LOOKER_CLIENTID=${LOOKER_CLIENTID} `
-e LOOKER_CLIENT_SECRET=${LOOKER_CLIENTSECRET} mirantis/mirantis_spectacles `
lookml `
--base-url ${LOOKER_BASE_URL} `
--client-id ${LOOKER_CLIENTID} `
--client-secret ${LOOKER_CLIENTSECRET} `
--project ${PROJECT} `
--branch ${BRANCH}
endif
Unfortunately, it doesn't work on Windows, and I need my Makefile to support analysts on Windows laptops:
Error:
C:\Users\richa\Git\Mirantis\dataops-looker [main ≡ +0 ~1 -0 !]> make -s validate-lookml
process_begin: CreateProcess(NULL, uname -s, ...) failed.
Makefile:9: pipe: No error
process_begin: CreateProcess(NULL, uname -s, ...) failed.
Makefile:12: pipe: No error
process_begin: CreateProcess(NULL, uname -s, ...) failed.
Makefile:15: pipe: No error
usage: spectacles lookml [-h] [--config-file CONFIG_FILE] --base-url BASE_URL
--client-id CLIENT_ID --client-secret CLIENT_SECRET
[--port PORT] [--api-version API_VERSION] [-v]
[--log-dir LOG_DIR] [--do-not-track]
[--severity {success,info,warning,error,fatal}]
--project PROJECT [--branch BRANCH]
[--remote-reset | --commit-ref COMMIT_REF | --pin-imports PIN_IMPORTS [PIN_IMPORTS ...]]
spectacles lookml: error: argument --base-url: expected one argument
make: *** [Makefile:40: validate-lookml] Error 2
Much confusion here.
A makefile consists of lines written in two completely different languages: one is the make language, and the other is the shell. You cannot send make operations to the shell, and you cannot run (directly) shell commands in make.
Make tells the difference between these two by use of the TAB character. Lines that are not indented with TAB are parsed by make, and lines that are indented with TAB are given to the shell. So, in your makefile:
validate-lookml:
UNAME_S=$(shell uname -s)
ifeq ($(UNAME), Linux)
docker run --rm -it -e LOOKER_BASE_URL=${LOOKER_BASE_URL} \
this is not right, because the first two indented lines here are make constructs while the third is a shell command. You should write it like this:
UNAME_S := $(shell uname -s)
validate-lookml:
ifeq ($(UNAME_S), Linux)
docker run --rm -it -e LOOKER_BASE_URL=${LOOKER_BASE_URL} \
...
endif
ifeq ($(UNAME_S), Darwin)
...
etc.
But there is no uname command on Windows, so when you run this it won't work; that's why you're getting the error process_begin: CreateProcess(NULL, uname -s, ...) failed. If you have GNU make 4.0 or better, I recommend that you look at the MAKE_HOST variable and use that instead of trying to run uname.
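For example, here is a minimal sketch of branching on MAKE_HOST (GNU make 4.0+). The exact host strings depend on how your make binary was built, so treat the ones below as assumptions and check the real value with $(info $(MAKE_HOST)) on each machine:
# Hypothetical platform detection via MAKE_HOST; no shelling out to uname.
ifneq (,$(findstring Windows,$(MAKE_HOST))$(findstring mingw,$(MAKE_HOST)))
PLATFORM := Windows_NT
else ifneq (,$(findstring darwin,$(MAKE_HOST)))
PLATFORM := Darwin
else
PLATFORM := Linux
endif

validate-lookml:
	@echo building on $(PLATFORM)
	docker run --rm -it -e LOOKER_BASE_URL=${LOOKER_BASE_URL} \
	...
Because this check happens at parse time, outside any recipe, it avoids the problem of mixing make conditionals into recipe lines.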
Finally, you don't have to worry about the backslash difference, because make will parse all the backslashes and remove them on its own BEFORE it starts the shell. So just use backslashes to continue all the lines in your recipe and it will work the same way on all different platforms.
I have the following code
#!/bin/bash
set -e
set -x
requestResponse=$(ssh jump.gcp.xxxxxxx """source dev2 spi-dev
kubectl get pods -o json | jq '.items[] |select(.metadata.name[0:3]=="fea")' | jq .status.podIP
2>&1""")
echo $requestResponse
In the above code, source dev2 spi-dev means we have moved to the spi-dev namespace inside the dev2 cluster. The kubectl get pods -o json | jq '.items[] | select(.metadata.name[0:3]=="fea")' | jq .status.podIP 2>&1 pipeline is meant to print the IP address of the pod whose name starts with fea. If I run the kubectl command manually, it works. I have also tried escaping fea like \"fea\".
These triple quotes """ are not working as you expect.
Try to change it like this:
ssh jump.gcp.xxxxxxx << EOF
source dev2 spi-dev
kubectl get pods -o json | \
jq '.items[] | select(.metadata.name[0:3]=="fea")' | \
jq .status.podIP 2>&1
EOF
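For completeness, here is how that might be folded back into the original script, capturing the output as before (an untested sketch; the host name is the one from the question):
#!/bin/bash
set -e
set -x
# Feed the remote commands to ssh via a here-document and capture the output.
requestResponse=$(ssh jump.gcp.xxxxxxx << EOF
source dev2 spi-dev
kubectl get pods -o json | jq '.items[] | select(.metadata.name[0:3]=="fea")' | jq .status.podIP 2>&1
EOF
)
echo "$requestResponse"
Because the here-document delimiter is unquoted, any $ in the body would be expanded locally before ssh runs; there are none here, but quote it as <<'EOF' if you add remote variable references later.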
I have a docker image that I want to run locally, and to make my life easier I am using a Makefile to pass the AWS environment variables.
aws_access_key_id := $(shell aws configure get aws_access_key_id)
aws_secret_access_key := $(shell aws configure get aws_secret_access_key)
aws_region := $(shell aws configure get region)
docker-run:
docker run -e AWS_ACCESS_KEY_ID="$(aws_access_key_id)" -e AWS_SECRET_ACCESS_KEY="$(aws_secret_access_key)" -e AWS_DEFAULT_REGION="$(aws_region)" --rm mydocker-image
And I need to find a way to do something like this in my terminal
make docker-run -d my_db -s dev -t my_table -u my_user -i URI://redshift
make docker-run --pre-actions "delete from dev.my_table where first_name = 'John'" -s dev -t my_table
make docker-run -s3 s3://temp-parquet/avro/ -s dev -t my_table -u myuser -i URI://redshift
These are the arguments that my docker (python application with argparse) will accept.
You can't do that, directly. The command line arguments to make are parsed by make, and must be valid make program command line arguments. Makefiles are not shell scripts and make is not a general interpreter: there's no facility for passing arbitrary options to it.
You can do this by putting them into a variable, like this:
make docker-run DOCKER_ARGS="-d my_db -s dev -t my_table -u my_user -i URI://redshift"
make docker-run DOCKER_ARGS="-d my_db -s dev -t my_table"
then use $(DOCKER_ARGS) in your makefile. But that's the only way.
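For example, a sketch of the corresponding rule, reusing the variables already defined in the makefile above (DOCKER_ARGS is just a variable name convention, not anything special to make):
docker-run:
	docker run -e AWS_ACCESS_KEY_ID="$(aws_access_key_id)" \
	           -e AWS_SECRET_ACCESS_KEY="$(aws_secret_access_key)" \
	           -e AWS_DEFAULT_REGION="$(aws_region)" \
	           --rm mydocker-image $(DOCKER_ARGS)
With DOCKER_ARGS left empty, this behaves exactly like the original docker-run target.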
If you want to do argument parsing yourself, you probably don't want a Makefile! You should probably write a Bash script instead.
Example:
#!/usr/bin/env bash
set -euo pipefail
aws_access_key_id="$(aws configure get aws_access_key_id)"
aws_secret_access_key="$(aws configure get aws_secret_access_key)"
aws_region="$(aws configure get region)"
docker run -e AWS_ACCESS_KEY_ID="$aws_access_key_id" -e AWS_SECRET_ACCESS_KEY="$aws_secret_access_key" -e AWS_DEFAULT_REGION="$aws_region" --rm mydocker-image "$@"
Note the "$@" at the end, which passes the script's arguments through to the docker command.
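Usage would then look something like this (assuming the script is saved as docker-run.sh and made executable; the file name is just an example):
chmod +x docker-run.sh
./docker-run.sh -d my_db -s dev -t my_table -u my_user -i URI://redshift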
You might want to try something like:
$ cat Makefile
all:
@echo make docker-run -d my_db -s dev -t my_table $${MYUSER+-u "$(MYUSER)"} $${URI+-i "URI://$(URI)"}
$ make
make docker-run -d my_db -s dev -t my_table
$ make MYUSER=myuser URI=redshift
make docker-run -d my_db -s dev -t my_table -u myuser -i URI://redshift
In a directory I have a config file with my db variables.
This file (db/database.ini) looks like this:
[PostgreSQL]
host=localhost
database=...
user=postgres
password=...
I have another file (db/create_stmts.sql) where I have all my raw create table statements, and I am trying to experiment with using a Makefile to get a command like this:
make create-db from_file=db/create_stmts.sql
In order not to repeat myself, I thought of tailing the variables of db/database.ini into a file which I would then source, creating shell variables to pass to psql in the Makefile.
Here's my plan:
make-db:
# from_file: path to .sql file with all create statements to create the database where to insert
# how to run: make create-db from_file={insert path to sql file}
file_path=$(PWD)/file.sh
tail -n4 db/database.ini > file.sh && . $(file_path)
# -U: --user
# -d: --database
# -q: --quiet
# -f: --file
psql -U $(user) -d $(database) -q -f $(from_file) && rm file.sh
Which I run by: make create-db from_file=db/create_stmts.sql
Which gives me this message, from which I kind of understand that the sourcing just did not work:
#from_file: path to .sql file with all create statements to create the database where to insert
# how to run: make create-db from_file={insert path to sql file}
file_path=/home/gabriele/Desktop/TIUK/companies-house/file.sh
tail -n4 db/database.ini > file.sh && .
# -U: --user
# -d: --database
# -q: --quiet
# -f: --file
psql -U -d -q -f db/schema_tables.sql && rm file.sh
psql: FATAL: Peer authentication failed for user "-d"
Makefile:3: recipe for target 'create-db' failed
make: *** [create-db] Error 2
Any help?
Another solution, perhaps simpler to understand:
make-db:
file_path=$$PWD/file.sh; \
tail -n4 db/database.ini > file.sh && . $$file_path; \
psql -U $$user -d $$database -q -f $$from_file && rm file.sh
Note the use of ; and \ to convince make to run all the commands in a single shell, and the use of $$ to escape the $ so that the shell sees the variable references.
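If your GNU make is 3.82 or newer, another option is .ONESHELL, which sends the whole recipe to a single shell so the trailing backslashes and semicolons are not needed. A rough, untested sketch (note that .ONESHELL affects every recipe in the makefile):
.ONESHELL:
make-db:
	tail -n4 db/database.ini > file.sh && . ./file.sh
	psql -U $$user -d $$database -q -f $$from_file && rm file.sh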
The error is in the text, namely
psql -U -d -q -f db/schema_tables.sql && rm file.sh
This happens because the variables $(user) and $(database) aren't set. Each line of a recipe is executed in its own separate shell, so there is no way to use source the way you would in a regular script.
You could create a file named database.mk in which you define these variables and use include database.mk at the top of your makefile to include them:
Makefile
CONFILE ?= database
include $(CONFILE).mk
test:
@echo $(user)
@echo $(database)
database.mk
user := user
database := data
If you want to parse the ini file, you could do it like this:
CONFILE := db/database.ini
make-db: _setup_con
echo $(user) $(database)
# your target
_setup_con:
$(eval user=$(shell grep "user=" $(CONFILE) | grep -Eo "[^=]*$$"))
$(eval database=$(shell grep "database=" $(CONFILE) | grep -Eo "[^=]*$$"))
# and so on for the other variables
I would do it in a more make-like way by using the automatic makefile generation feature. Given that the configuration file is a simple properties file, its syntax is easily parsed by make; it's sufficient to keep just the lines that assign variables, i.e.:
include database.mk
database.mk: db/database.ini
grep -E '^\w+=\w+$$' $< > $@
.PHONY: create-db
create-db: $(from_file)
psql -U $(user) -d $(database) -q -f $<
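To make the mechanics concrete: with the sample db/database.ini above (and hypothetical values substituted for the elided ones), the generated database.mk would contain plain make assignments, which is why $(user) and $(database) are defined by the time create-db runs:
# generated database.mk (values here are made up)
host=localhost
database=companies
user=postgres
password=secret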
Some additional notes:
create-db should be made .PHONY to avoid the situation where nothing is done because somebody has created (accidentally or not) a file named create-db,
by making create-db depend on from_file, one gets a clean and readable error from make when the file does not exist, instead of a possibly cryptic error later.
I'm using a Makefile to run docker, where I first collect some modules to download so that they can be cached, and then run docker. I wanted to parameterize this, but I don't think I'm doing it in the best way. Pointers to make this more concise would be really appreciated.
franz:
$(eval REPO_VERSION := $(shell grep franz requirements/github.txt | cut -d'@' -f3 | cut -d'#' -f1))
if [ -d docker/franz ]; then \
echo "Updating franz to [$(REPO_VERSION)]"; \
cd docker/franz && git fetch && git checkout $(REPO_VERSION); \
else \
echo "Cloning franz to [$(REPO_VERSION)]"; \
git clone --branch $(REPO_VERSION) git@github.com:dubizzle/franz.git docker/franz 2> /dev/null; \
fi \
lilith:
$(eval REPO_VERSION := $(shell grep lilith requirements/github.txt | cut -d'@' -f3 | cut -d'#' -f1))
if [ -d docker/lilith ]; then \
echo "Updating lilith to [$(REPO_VERSION)]"; \
cd docker/lilith && git fetch && git checkout $(REPO_VERSION); \
else \
echo "Cloning lilith to [$(REPO_VERSION)]"; \
git clone --branch $(REPO_VERSION) git@github.com:dubizzle/lilith.git docker/lilith 2> /dev/null; \
fi \
dependencies: franz lilith
git archive --format tar.gz --output docker/archive.tar.gz $(GIT_REF)
Basically, this first updates requirements that are on github, downloads them, checks what version is needed, and then updates to that version. If this could be made a function, a parameterised version would be:
$(eval REPO_VERSION := $(shell grep <repo-name> requirements/github.txt | cut -d'@' -f3 | cut -d'#' -f1))
if [ -d docker/<repo-name> ]; then \
echo "Updating <repo-name> to [$(REPO_VERSION)]"; \
cd docker/<repo-name> && git fetch && git checkout $(REPO_VERSION); \
else \
echo "Cloning <repo-name> to [$(REPO_VERSION)]"; \
git clone --branch $(REPO_VERSION) git@github.com:dubizzle/<repo-name>.git docker/<repo-name> 2> /dev/null; \
fi \
I've seen some examples using define, call, and eval, but I can't figure out the right combination to make it work.
Any help with this would be much appreciated.
From the tutorial mentioned above, here is the information pulled into SO.
This is GNU make, mind.
Defining a template:
define RULES_template
$(1)/obj/%.o: $(1)/src/%.c
$$(CC) $$(CFLAGS) $$(CFLAGS_global) $$(CFLAGS_$(1)) -c $$< -o $$@
endef
This uses one parameter ($(1)), which gets substituted as appropriate. The number of parameters is not declared; you just add $(1), $(2), etc. to the template. Note the doubled $$ everywhere else.
$(foreach module,$(MODULES),$(eval $(call RULES_template,$(module))))
This calls the template mentioned above for each token in $(MODULES).
call RULES_template,foo instantiates the template with one parameter, foo. eval then parses the output as Makefile syntax (as opposed to, for example, putting it into some variable).
This was ages ago, and I never used that code in a production environment, so I am a bit fuzzy on the details. I hope it helps, anyway.
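Applied to the franz/lilith targets from the question above, an untested sketch could look like this. Recipe lines must start with a real TAB, and the quadrupled $$$$ is needed because the text is expanded once by call/eval and again when the recipe runs:
define REPO_template
.PHONY: $(1)
$(1):
	REPO_VERSION=$$$$(grep $(1) requirements/github.txt | cut -d'@' -f3 | cut -d'#' -f1); \
	if [ -d docker/$(1) ]; then \
		echo "Updating $(1) to [$$$${REPO_VERSION}]"; \
		cd docker/$(1) && git fetch && git checkout $$$${REPO_VERSION}; \
	else \
		echo "Cloning $(1) to [$$$${REPO_VERSION}]"; \
		git clone --branch $$$${REPO_VERSION} git@github.com:dubizzle/$(1).git docker/$(1) 2> /dev/null; \
	fi
endef

REPOS := franz lilith
$(foreach repo,$(REPOS),$(eval $(call REPO_template,$(repo))))

dependencies: $(REPOS)
	git archive --format tar.gz --output docker/archive.tar.gz $(GIT_REF)
Here the version lookup is done with a shell variable inside the recipe rather than the $(eval REPO_VERSION := ...) from the question, which avoids the make-variable-inside-a-recipe pitfall discussed earlier.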
CMake is not only cross-platform, but it also has much better primitives for handling sophisticated build mechanics. I can recommend it.