Dictionaries/Maps/Lookup Tables in Makefiles

I need to create a lookup table/dictionary/map in my Makefile to look up key-value information.
I have been trying to use ifeq statements to do this, but they seem to fail:
# this gets the account id from the current user's ARN, you must have the AWS CLI and jq installed
AWS_ACCOUNT_ID:=$(shell aws iam get-user | jq -r '.User.Arn' | awk -F ':' '{print $$5;}')
# define a friendly account name for output
ifeq ($(AWS_ACCOUNT_ID), 123456)
AWS_ACCOUNT_FRIENDLY:=staging
endif
ifeq ($(AWS_ACCOUNT_ID), 789012)
AWS_ACCOUNT_FRIENDLY:=preprod
endif
ifeq ($(AWS_ACCOUNT_ID), 345678)
AWS_ACCOUNT_FRIENDLY:=production
endif
It only seems to work with the first value, 123456, and not with the others.
Is there a way to define a dictionary/map in Make to simply look up the account friendly name by the key of the account id?

I can't explain why you don't see the behavior you expect. I would verify that AWS_ACCOUNT_ID actually contains the value you think it does: maybe the shell command is not producing what you want. Try adding something like:
AWS_ACCOUNT_ID := $(shell ...)
$(info AWS_ACCOUNT_ID = '$(AWS_ACCOUNT_ID)')
and see what you get.
However, regarding your more general question, I prefer to use constructed macro names in situations like this, rather than lots of ifeq blocks:
AWS_123456_FRIENDLY := staging
AWS_789012_FRIENDLY := preprod
AWS_345678_FRIENDLY := production
AWS_ACCOUNT_ID := $(shell ...)
AWS_ACCOUNT_FRIENDLY := $(AWS_$(AWS_ACCOUNT_ID)_FRIENDLY)
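One thing to watch with constructed names: if the id has no matching variable, the lookup silently expands to an empty string. A minimal guard, assuming a fallback name of unknown is acceptable:
# fall back to "unknown" when no AWS_<id>_FRIENDLY variable is defined
AWS_ACCOUNT_FRIENDLY := $(or $(AWS_$(AWS_ACCOUNT_ID)_FRIENDLY),unknown)
$(info AWS account $(AWS_ACCOUNT_ID) is $(AWS_ACCOUNT_FRIENDLY))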

Build and configuration management is becoming a programming task in its own right. Unfortunately the tools for it lack coherence and broad acceptance, and teams often end up rolling their own zoo of scripts to glue build, test and release together. If you want to avoid some of the dilution that comes from mixing make with external helper scripts, you can use gmtt, which provides table selection for tasks like this using only GNU make built-ins:
include gmtt/gmtt.mk
AWS_ACCOUNT_ID:=$(shell aws iam get-user | jq -r '.User.Arn' | awk -F ':' '{print $$5;}')
# define a table of 3 columns: <AWS-id> <name> <admin>
define AWS_ACCOUNT_TBL
3
123456 staging kay
789012 preprod catbert
345678 production pointyhairedboss
endef
# select column 2 & 3 from table AWS_ACCOUNT_TBL where column 1 string-equals AWS_ACCOUNT_ID
AWS_ACCOUNT := $(call select,2 3,$(AWS_ACCOUNT_TBL),$$(call str-eq,$$1,$(AWS_ACCOUNT_ID)))
AWS_NAME := $(word 1,$(AWS_ACCOUNT))
ADMIN := $(word 2,$(AWS_ACCOUNT))
$(info AWS account $(AWS_NAME) administered by $(ADMIN))

Related

Kong custom golang plugin not working in kubernetes/helm setup

I have written a custom golang Kong plugin called go-wait, following the example from the GitHub repo https://github.com/redhwannacef/youtube-tutorials/tree/main/kong-gateway-custom-plugin
The only difference is that I created a custom Docker image so Kong would have the plugin by default in its /usr/local/bin directory.
Here's the Dockerfile:
FROM golang:1.18.3-alpine as pluginbuild
COPY ./charts/custom-plugins/ /app/custom-plugins
RUN cd /app/custom-plugins && \
for d in ./*/ ; do (cd "$d" && go mod tidy && GOOS=linux GOARCH=amd64 go build .); done
RUN mkdir /app/all-plugin-execs && cd /app/custom-plugins && \
find . -type f -not -name "*.*" | xargs -i cp {} /app/all-plugin-execs/
FROM kong:2.8
COPY --from=pluginbuild /app/all-plugin-execs/ /usr/local/bin/
COPY --from=pluginbuild /app/all-plugin-execs/ /usr/local/bin/plugin-ref/
# Loop through the plugin-ref directory and create an entry for all of them in
# both KONG_PLUGINS and KONG_PLUGINSERVER_NAMES env vars respectively
# Additionally append `bundled` to the KONG_PLUGINS list, as without it any unused plugin will cause Kong to error out
#### Example Env vars for a plugin named `go-wait`
# ENV KONG_PLUGINS=go-wait
# ENV KONG_PLUGINSERVER_NAMES=go-wait
# ENV KONG_PLUGINSERVER_GO_WAIT_QUERY_CMD="/usr/local/bin/go-wait -dump"
####
RUN cd /usr/local/bin/plugin-ref/ && \
PLUGINS=$(ls | tr '\n' ',') && PLUGINS=${PLUGINS::-1} && \
echo -e "KONG_PLUGINS=bundled,$PLUGINS\nKONG_PLUGINSERVER_NAMES=$PLUGINS" >> ~/.bashrc
# Loop through the plugin-ref directory and create an entry for QUERY_CMD entries needed to load the plugin
# format KONG_PLUGINSERVER_EG_PLUGIN_QUERY_CMD if the plugin name is `eg-plugin` and it should point to the
# plugin followed by `-dump` argument
RUN cd /usr/local/bin/plugin-ref/ && \
for f in *; do echo "$f" | tr "[:lower:]" "[:upper:]" | tr '-' '_' | \
xargs -I {} sh -c "echo 'KONG_PLUGINSERVER_{}_QUERY_CMD=' && echo '\"/usr/local/bin/{} -dump\"' | tr [:upper:] [:lower:] | tr '_' '-'" | \
sed -e '$!N;s/\n//' | xargs -i echo "{}" >> ~/.bashrc; done
This works fine with docker-compose and a plain Docker container. But when I tried to use the same image in a Kubernetes environment along with kong-ingress-controller, I started running into errors such as "failed to fill-in defaults for plugin: go-wait" and "plugin 'go-wait' enabled but not installed" in the logs, and I ended up unable to enable the plugin.
Has anyone tried including Go plugins in their Kubernetes/Helm Kong setup? If so, please shed some light on this.
Update: I found the answer I was looking for. Along with setting the environment variables generated by the image, modifications are needed in the _helpers.tpl file of the Kong Helm chart itself.
The reason is that the deployment charts expect plugins to be configured in values-custom.yml, which overrides the default settings.
But the Helm chart assumes plugins are loaded via ConfigMaps, which turned out to be a huge bottleneck: any binary plugin you build in Go for Kong will exceed the maximum size limit of a ConfigMap in Kubernetes.
That is the whole reason I set out to make the plugins part of my image in the first place.
TL;DR
I cloned the chart repo locally and applied the following patch so the Go plugins are loaded from values without having to lump them in with the Lua plugins. (Credit: the answer by thatbenguy in the discussion https://discuss.konghq.com/t/how-to-load-go-plugins-using-kong-helm-chart/5717/10)
--- a/charts/kong/templates/_helpers.tpl
+++ b/charts/kong/templates/_helpers.tpl
@@ -530,6 +530,9 @@ The name of the service used for the ingress controller's validation webhook
{{- define "kong.plugins" -}}
{{ $myList := list "bundled" }}
+{{- range .Values.plugins.goPlugins -}}
+{{- $myList = append $myList .pluginName -}}
+{{- end -}}
{{- range .Values.plugins.configMaps -}}
{{- $myList = append $myList .pluginName -}}
{{- end -}}
I added the following block to my values-custom.yml and I was good to go.
Hopefully this helps anyone else trying to write custom Kong plugins in Go for use in Helm charts.
env:
  database: "off"
  plugins: bundled,go-wait
  pluginserver_names: go-wait
  pluginserver_go_wait_query_cmd: "/usr/local/bin/go-wait -dump"

plugins:
  goPlugins:
    - pluginName: "go-wait"
NOTE: Remember that all of this still depends on having the prebuilt custom Kong plugins in your image. In my case I built an image from the Dockerfile above (in the question), pushed it to my own Docker Hub repo, and pointed the chart at that image in values-custom.yml using the following block:
image:
  repository: chalukyaj/kong-custom-image
  tag: "1.0.1"
PS: As you might have noticed, my only disappointment is that the environment variables couldn't simply be picked up from the Docker image's ~/.bashrc, which would have made this even nicer. Nonetheless this works, and I couldn't find a single post showing how to use the new go-pdk (instead of the older go-pluginserver) to build Go plugins and use them in Helm.

Use ansible for manual staged rollout using `serial` and unknown inventory size

Consider an Ansible inventory with an unknown number of servers in a nodes key.
The script I'm writing should be usable with different inventories, which need to stay as simple as possible and are out of my control, so I don't know the number of nodes ahead of time.
My command to run the playbook is pretty vanilla and I can freely change it. There could be two separate commands for both rollout stages.
ansible-playbook -i $INVENTORY_PATH playbooks/example.yml
And the playbook is pretty standard as well and can be adjusted:
- hosts: nodes
  vars:
    ...
  remote_user: '{{ sudo_user }}'
  gather_facts: no
  tasks:
    ...
How would I go about implementing a staged execution without changing the inventory?
I'd like to run one command to execute the playbook for 50% of the inventory first. Here the result needs to be checked manually by a human. Then I'd like to use another command to execute the playbook for the other half. The author of the inventory should not have to worry about this. All machines below the nodes key are the same.
I've looked into the serial keyword, but it doesn't seem like I could automatically end execution after one batch and then later come back to continue with the second half.
Maybe something creative could be done with variables passed to ansible-playbook? I'm just wondering, shouldn't this be a common use-case? Are all staged rollouts supposed to be fully automated?
Without even using serial, here is a possible, very simple scenario.
First, calculate $half of the inventory by inspecting the inventory itself. The following enables the json callback plugin for the ad hoc command, makes sure it is the only callback enabled, and uses jq to parse the result. You can adapt this to any other JSON parser (or even use the yaml callback with a YAML parser if you prefer). Anyway, adapt it to your own needs.
half=$( \
ANSIBLE_LOAD_CALLBACK_PLUGINS=1 \
ANSIBLE_STDOUT_CALLBACK=json \
ANSIBLE_CALLBACK_WHITELIST=json \
ansible localhost -i yourinventory.yml -m debug -a "msg={{ (groups['nodes'] | length / 2) | round(0, 'ceil') | int }}" \
| jq -r ".plays[0].tasks[0].hosts.localhost.msg" \
)
Then launch your playbook limited to the first $half nodes, with whatever vars are needed for the human check, and launch it again later for the remaining nodes without the check.
ansible-playbook -i yourinventory.yml example_playbook.yml -l nodes[0:$(($half-1))] -e human_check=true
ansible-playbook -i yourinventory.yml example_playbook.yml -l nodes[$half:] -e human_check=false
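How the playbook consumes human_check is up to you. As a purely hypothetical sketch (the pause task below is an illustration, not part of the original playbook), you could gate a manual checkpoint on it:
- hosts: nodes
  gather_facts: no
  tasks:
    - name: Wait for manual verification of this batch
      ansible.builtin.pause:
        prompt: "Verify this batch, then press Enter to continue"
      when: human_check | default(false) | bool
      run_once: true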

how can I run ansible molecule without colors?

When running molecule, the logs are displayed in color:
molecule lint -s preprod
--> [36mValidating schema /home/singuliere/software/enough/infrastructure/molecule/letsencrypt-nginx/molecule.yml.[0m
[0m[0m[0m[32mValidation completed successfully.[0m
[0m[0m[0m--> [36mValidating schema /home/singuliere/software/enough/infrastructure/molecule/postfix/molecule.yml.[0m
...
This can be disabled by piping the output to cat (the colors only show when the output is a tty):
molecule lint -s preprod | cat
--> Validating schema /home/singuliere/software/enough/infrastructure/molecule/letsencrypt-nginx/molecule.yml.
Validation completed successfully.
...
Is there a permanent way to do the same? I tried setting ANSIBLE_NOCOLOR=true in the environment but it does not have the desired effect.
It seems that this behaviour is hardcoded.
You can patch molecule's logger class to disable colours.
Find the module's path with python -c 'import molecule; print(molecule.__file__)'.
Modify logger.py in that folder:
def color_text(color, msg):
return msg
# return '{}{}{}'.format(color, msg, colorama.Style.RESET_ALL)
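If you prefer not to edit the installed file, the same idea can be applied as a monkey-patch from a sitecustomize.py placed on the virtualenv's sys.path (a sketch, assuming color_text keeps the (color, msg) signature shown above):
# sitecustomize.py -- imported automatically at Python startup when present on sys.path
try:
    import molecule.logger
    # return the message unchanged instead of wrapping it in colorama colour codes
    molecule.logger.color_text = lambda color, msg: msg
except ImportError:
    pass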

Snakemake conda env parameter is not taken from config.yaml file

I use a conda env that I create manually, not automatically using Snakemake. I do this to keep tighter version control.
Anyway, in my config.yaml I have the following line:
conda_env: '/rst1/2017-0205_illuminaseq/scratch/swo-406/snakemake'
Then, at the start of my Snakefile I read that variable (reading variables from config in your shell part does not seem to work, am I right?):
conda_env = config['conda_env']
Then, in a shell section, I reference that parameter like this:
rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    threads: 8
    shell:
        '''
        #!/bin/bash
        source activate {conda_env}
        rsem-calculate-expression \
            --paired-end \
            {input} \
            {rsem_ref_base} \
            {analyzed_dir}/{wildcards.sample} \
            --strandedness reverse \
            --num-threads {threads} \
            --star \
            --star-gzipped-read-file \
            --star-output-genome-bam
        '''
Notice the {conda_env}. Now this gives me the following error:
Could not find conda environment: None
You can list all discoverable environments with `conda info --envs`.
Now, if I replace {conda_env} with its value /rst1/2017-0205_illuminaseq/scratch/swo-406/snakemake directly, it does work! I don't have any trouble reading other parameters this way (like rsem_ref_base and analyzed_dir in the example rule above).
What could be wrong here?
Highest regards,
Freek.
The pattern I use is to load variables into params, so something along the lines of
rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    params:
        conda_env=config['conda_env']
    threads: 8
    shell:
        '''
        #!/bin/bash
        source activate {params.conda_env}
        rsem-calculate-expression \
        ...
        '''
Although, I'd also never do this with a conda environment, because Snakemake has conda environment management built-in. See this section in the docs on Integrated Package Management for details. This makes reproducibility much more manageable.
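For reference, a minimal sketch of that built-in approach, assuming a hypothetical envs/rsem.yaml that pins rsem and STAR, and running snakemake with --use-conda so the environment is created and activated per rule:
rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    conda:
        'envs/rsem.yaml'    # hypothetical environment file listing rsem and STAR
    threads: 8
    shell:
        'rsem-calculate-expression --paired-end {input} ... --num-threads {threads}'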

Ansible advanced variable parsing

I'm new to Ansible, so this question may seem silly to more advanced users. I'm not sure whether it's possible to do what I'm asking, since Ansible is quite limited when it comes to loops and conditionals.
I'm performing tasks on a Virtual Connect switch, so I'm limited to using the raw module.
I have the following STDOUT:
=========================================================================
Profile Port Network PXE/IP MAC Address Allocated Status
Name Boot Order Speed
(min-max)
=========================================================================
CLO01ES 1 CLO_355 UseBIOS/Au 00-17-A4-77-58-0 -- -- OK
X02 _1 to 0
-------------------------------------------------------------------------
CLO01ES 2 CLO_355 UseBIOS/Au 00-17-A4-77-58-0 -- -- OK
X02 _2 to 2
-------------------------------------------------------------------------
CLO01ES 3 Multipl UseBIOS/Au 00-17-A4-77-58-0 -- -- OK
X02 e to 4
Network
-------------------------------------------------------------------------
CLO01ES 4 Multipl UseBIOS/Au 00-17-A4-77-58-0 -- -- OK
X02 e to 6
Network
-------------------------------------------------------------------------
<omitted>
The issue is that STDOUT can contain multiple lines with different profiles, i.e. I don't know the line numbers or MAC addresses beforehand.
What I want to achieve is a status check: if profile CLO01ESX02 has the network name Multiple Network twice, then I want to skip the task.
Whenever I googled for parsing variables or STDOUT, I only got basic answers.
Is this possible with Ansible, or am I forced to write a custom script?
It can't be done directly with any native Ansible module, but the shell module should do the trick.
- name: Check output
  shell: <your_command_here> | grep CLO01ESX02 | grep "Multiple Network" | wc -l
  register: wc
  failed_when: wc.stdout|int > 1
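If you would rather skip a subsequent task than fail the play (which is what the question asks for), a hedged follow-up sketch reusing the registered count could look like this (the configuration command is a placeholder):
- name: Configure profile
  raw: <your_configuration_command_here>
  when: wc.stdout | int < 2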
