Kong custom Golang plugin not working in Kubernetes/Helm setup

I have written a custom Golang Kong plugin called go-wait, following the example from the GitHub repo https://github.com/redhwannacef/youtube-tutorials/tree/main/kong-gateway-custom-plugin
The only difference is that I created a custom Docker image so Kong would have the mentioned plugin in its /usr/local/bin directory by default.
Here's the Dockerfile:
FROM golang:1.18.3-alpine AS pluginbuild

COPY ./charts/custom-plugins/ /app/custom-plugins
RUN cd /app/custom-plugins && \
    for d in ./*/ ; do (cd "$d" && go mod tidy && GOOS=linux GOARCH=amd64 go build .); done
RUN mkdir /app/all-plugin-execs && cd /app/custom-plugins && \
    find . -type f -not -name "*.*" | xargs -i cp {} /app/all-plugin-execs/

FROM kong:2.8

COPY --from=pluginbuild /app/all-plugin-execs/ /usr/local/bin/
COPY --from=pluginbuild /app/all-plugin-execs/ /usr/local/bin/plugin-ref/

# Loop through the plugin-ref directory and create an entry for all of them in
# both the KONG_PLUGINS and KONG_PLUGINSERVER_NAMES env vars respectively.
# Additionally, append `bundled` to the KONG_PLUGINS list; without it, any unused
# plugin will cause Kong to error out.
#### Example env vars for a plugin named `go-wait`:
# ENV KONG_PLUGINS=bundled,go-wait
# ENV KONG_PLUGINSERVER_NAMES=go-wait
# ENV KONG_PLUGINSERVER_GO_WAIT_QUERY_CMD="/usr/local/bin/go-wait -dump"
####
RUN cd /usr/local/bin/plugin-ref/ && \
    PLUGINS=$(ls | tr '\n' ',') && PLUGINS=${PLUGINS::-1} && \
    echo -e "KONG_PLUGINS=bundled,$PLUGINS\nKONG_PLUGINSERVER_NAMES=$PLUGINS" >> ~/.bashrc

# Loop through the plugin-ref directory and create the QUERY_CMD entries needed to
# load each plugin, in the format KONG_PLUGINSERVER_EG_PLUGIN_QUERY_CMD for a plugin
# named `eg-plugin`, pointing to the plugin binary followed by the `-dump` argument.
RUN cd /usr/local/bin/plugin-ref/ && \
    for f in *; do echo "$f" | tr "[:lower:]" "[:upper:]" | tr '-' '_' | \
    xargs -I {} sh -c "echo 'KONG_PLUGINSERVER_{}_QUERY_CMD=' && echo '\"/usr/local/bin/{} -dump\"' | tr [:upper:] [:lower:] | tr '_' '-'" | \
    sed -e '$!N;s/\n//' | xargs -i echo "{}" >> ~/.bashrc; done
This works fine with docker-compose and in a plain Docker container. But when I tried to use the same image in a Kubernetes environment along with the kong-ingress-controller, I started running into errors such as "failed to fill-in defaults for plugin: go-wait" and/or a bunch of others, including "plugin 'go-wait' enabled but not installed" in the logs, and I ended up not being able to enable the plugin.
Has anyone tried including Go plugins in their Kubernetes/Helm Kong setup? If so, please shed some light on this.
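(For reference, with the Kong ingress controller a plugin is typically enabled through a KongPlugin resource. The manifest below is a minimal sketch of that usual shape, not the original manifest from this setup; it assumes the plugin is referenced by its name go-wait:)
# minimal sketch: referencing the custom plugin via the Kong ingress controller
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: go-wait
plugin: go-wait  # must match the name Kong loads the plugin under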

Update: I found the answer I was looking for. Along with setting the environment variables generated by the image, modifications are needed in the _helpers.tpl file of the Kong Helm chart itself.
The reason is that in the deployment charts, the configuration expects plugins to be listed in the values-custom.yml used to override the default settings.
But the Helm chart assumes plugins are loaded via ConfigMaps, which turned out to be a huge bottleneck: any plugin binary you build in Go for Kong will exceed the ~1 MiB size limit of a ConfigMap in Kubernetes.
That's the whole reason I had set out on this endeavor of making the plugins part of my image.
TL;DR
I cloned the chart repo to my local system and made the changes in the following patch, which loads the Go plugins from values without having to lump them in with the Lua plugins. (Credits: answer by thatbenguy in the discussion https://discuss.konghq.com/t/how-to-load-go-plugins-using-kong-helm-chart/5717/10)
--- a/charts/kong/templates/_helpers.tpl
+++ b/charts/kong/templates/_helpers.tpl
@@ -530,6 +530,9 @@ The name of the service used for the ingress controller's validation webhook
 {{- define "kong.plugins" -}}
 {{ $myList := list "bundled" }}
+{{- range .Values.plugins.goPlugins -}}
+{{- $myList = append $myList .pluginName -}}
+{{- end -}}
 {{- range .Values.plugins.configMaps -}}
 {{- $myList = append $myList .pluginName -}}
 {{- end -}}
I added the following block to my values-custom.yml and was good to go.
Hopefully this helps anyone else trying to write custom plugins for Kong in Golang for use in Helm charts.
env:
  database: "off"
  plugins: bundled,go-wait
  pluginserver_names: go-wait
  pluginserver_go_wait_query_cmd: "/usr/local/bin/go-wait -dump"
plugins:
  goPlugins:
    - pluginName: "go-wait"
NOTE: Remember that all of this still depends on having the prebuilt custom Kong plugins in your image. In my case I had built an image from the Dockerfile above (in the question), pushed it to my own Docker Hub repo, and swapped that image into values-custom.yml using the following block:
image:
  repository: chalukyaj/kong-custom-image
  tag: "1.0.1"
PS: As you might have noticed, the only disappointment I have with this is that the environment variables can't simply be picked up from the Docker image's ~/.bashrc, which would have made this awesome. Nonetheless this works, and I couldn't find a single post showing how to use the new go-pdk (instead of the older go-pluginserver) to build Go plugins and use them in Helm.

Related

How to use an anchor to prevent repetition of code sections?

Say I have a number of jobs that all run a similar series of scripts but need a few variables that change between them:
test a:
  stage: test
  tags:
    - a
  interruptible: true
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t a -f Dockerfile.a .

test b:
  stage: test
  tags:
    - b
  interruptible: true
  rules:
    - if: $CI_PIPELINE_SOURCE == 'merge_request_event'
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t b -f Dockerfile.b .
All I need is to be able to define e.g.
    - docker build -t ${WHICH} -f Dockerfile.${WHICH} .
If only I could make an anchor like:
.x: &which_ref
  - echo "env is $(env)"
  - echo etcetera
  - echo and so on
  - docker build -t $WHICH -f Dockerfile.$WHICH .
And include it there:
test a:
  script:
    - export WHICH=a
    <<: *which_ref
This doesn't work, and in a YAML validator I get errors like
Error: YAMLException: cannot merge mappings; the provided source object is unacceptable
I also tried making an anchor that contains the entries nested under a script key:
.x: &which_ref
  script:
    - echo "env is $(env)"
    - echo etcetera
    - echo and so on
    - docker build -t $WHICH -f Dockerfile.$WHICH .
This means I have to include it from one step higher up. It does not error, but all it accomplishes is that the later-declared script section overrides the first one.
So I'm losing hope. It seems I will just need to abstract the sections away into their own shell scripts and call them with arguments.
The YAML merge key << is a non-standard extension for YAML 1.1, which was superseded by YAML 1.2 about 14 years ago; its usage is discouraged.
The merge key works on mappings, not on sequences, and it cannot deep-merge. Thus what you want to do is not possible to implement with it.
Generally, YAML isn't designed to process data, it just loads it. The merge key is an outlier and didn't find its way into the standard for good reasons. You need a pre- or postprocessor to do complex processing, and GitLab CI doesn't offer anything besides simple variable expansion, so you're out of luck.
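(To make the distinction concrete, here is a minimal sketch based on the jobs above: the merge key can inject shared mapping keys such as stage and interruptible, but a script sequence can only be set or replaced wholesale, never spliced into:)
.defaults: &defaults
  stage: test
  interruptible: true

test a:
  <<: *defaults  # merges mapping keys: stage, interruptible
  script:        # a sequence; it can only be replaced as a whole
    - export WHICH=a
    - docker build -t $WHICH -f Dockerfile.$WHICH .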

Is this the correct way to write an if..else statement in a cloudbuild.yaml file?

I am trying to deploy a Cloud Function using cloudbuild.yaml. It works fine if I don't use any conditional statement, but I am facing an error when I execute my cloudbuild.yaml with an if conditional statement. What is the correct way to write it? Below is my code:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  id: deploy
  args:
    - '-c'
    - 'if [ $BRANCH_NAME != "xoxoxoxox" ]
      then
      [
        'functions', 'deploy', 'groups',
        '--region=us-central1',
        '--source=.',
        '--trigger-http',
        '--runtime=nodejs8',
        '--entry-point=App',
        '--allow-unauthenticated',
        '--service-account=xoxoxoxox@appspot.gserviceaccount.com'
      ]
      fi'
  dir: 'API/groups'
What am I doing wrong?
From the GitHub page https://github.com/GoogleCloudPlatform/cloud-sdk-docker, the entrypoint is not set to gcloud, so you cannot specify the arguments like that.
Good practice for specifying a directory is to start with /workspace.
The right way to write the step is:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  id: deploy
  dir: '/workspace/API/groups'
  entrypoint: bash
  args:
    - '-c'
    - |
      if [ "$BRANCH_NAME" != "xoxoxoxox" ]
      then
        gcloud functions deploy groups \
          --region=us-central1 \
          --source=. \
          --trigger-http \
          --runtime=nodejs8 \
          --entry-point=App \
          --allow-unauthenticated \
          --service-account=xoxoxoxox@appspot.gserviceaccount.com
      fi
I'm not sure you can do this.
In my case, I use the branch selector in the Cloud Build trigger to select which branch (or tag) to build, based on a pattern.
I wanted to delete the oldest version of each service, but only when at least two versions existed. This was my solution:
args:
  - "-c"
  - |
    if [[ $(gcloud app versions list --format="value(version.id)" --service=MY-SERVICE | wc -l) -ge 2 ]];
    then
      gcloud app versions list --format="value(version.id)" --sort-by="~version.createTime" --service=MY-SERVICE | tail -n 1 | xargs gcloud app versions delete --service=MY-SERVICE --quiet;
    fi

Snakemake conda env parameter is not taken from config.yaml file

I use a conda env that I create manually, not automatically through Snakemake. I do this to keep tighter version control.
Anyway, in my config.yaml I have the following line:
conda_env: '/rst1/2017-0205_illuminaseq/scratch/swo-406/snakemake'
Then, at the start of my Snakefile I read that variable (reading variables from config directly in your shell part does not seem to work, am I right?):
conda_env = config['conda_env']
Then in a shell part I use said parameter like this:
rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    threads: 8
    shell:
        '''
        #!/bin/bash
        source activate {conda_env}
        rsem-calculate-expression \
            --paired-end \
            {input} \
            {rsem_ref_base} \
            {analyzed_dir}/{wildcards.sample} \
            --strandedness reverse \
            --num-threads {threads} \
            --star \
            --star-gzipped-read-file \
            --star-output-genome-bam
        '''
Notice the {conda_env}. Now this gives me the following error:
Could not find conda environment: None
You can list all discoverable environments with `conda info --envs`.
Now, if I replace {conda_env} with its value directly (/rst1/2017-0205_illuminaseq/scratch/swo-406/snakemake), it does work! And I don't have any trouble reading other parameters using this method (like rsem_ref_base and analyzed_dir in the example rule above).
What could be wrong here?
Highest regards,
Freek.
The pattern I use is to load such variables into params, so something along the lines of:
rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    params:
        conda_env=config['conda_env']
    threads: 8
    shell:
        '''
        #!/bin/bash
        source activate {params.conda_env}
        rsem-calculate-expression \
            ...
        '''
Although, I'd never do this with a conda environment, because Snakemake has conda environment management built in; see the section on Integrated Package Management in the docs for details. This makes reproducibility much more manageable.
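(A minimal sketch of that built-in approach; the environment file path here is a placeholder, and the workflow must then be run with --use-conda:)
rule rsem_quantify:
    conda:
        'envs/rsem.yaml'  # hypothetical env file pinning the rsem/STAR versions
    shell:
        'rsem-calculate-expression ...'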

Passing Laravel .env variable to Dockerfile

I have code in my Dockerfile that installs the New Relic PHP client:
RUN \
    curl -L https://download.newrelic.com/php_agent/release/newrelic-php5-8.3.0.226-linux.tar.gz | tar -C /tmp -zx && \
    NR_INSTALL_USE_CP_NOT_LN=1 NR_INSTALL_SILENT=1 /tmp/newrelic-php5-*/newrelic-install install && \
    rm -rf /tmp/newrelic-php5-* /tmp/nrinstall* && \
    sed -i -e 's/"REPLACE_WITH_REAL_KEY"/"${MY_NEWRELIC_KEY}"/' \
        -e 's/newrelic.appname = "PHP Application"/newrelic.appname = "MyApp"/' \
        /usr/local/etc/php/conf.d/newrelic.ini
How do I pass the MY_NEWRELIC_KEY variable defined in Laravel's .env file to the Dockerfile?
You need to define ARG and ENV values.
ARGs are also known as build-time variables: they are only available from the moment they are 'announced' in the Dockerfile with an ARG instruction up to the moment the image is built.
ENV variables are also available during the build, as soon as you introduce them with an ENV instruction.
Here is a Dockerfile example, both with and without a default value:
ARG some_variable
# or with a hard-coded default:
# ARG some_variable=default_value

RUN echo "Oh dang look at that $some_variable"
When building a Docker image from the command line, you can set ARG values using --build-arg:
$ docker build --build-arg some_variable=a_value .
Running that command, with the above Dockerfile, will result in the following line being printed (among others):
Oh dang look at that a_value
Here is a basic Dockerfile using hard-coded ENV default values (note that, unlike ARG, an ENV instruction always requires a value):
# a default value
ENV foo /bar
# or ENV foo=/bar

# ENV values can be used during the build
ADD . $foo
# or ADD . ${foo}
# translates to: ADD . /bar
And here is an example of a Dockerfile using a dynamic, build-time ENV value:
# expect a build-time variable
ARG A_VARIABLE
# use the value to set the ENV var default
ENV an_env_var=$A_VARIABLE
# if not overridden, that value of an_env_var will be available to your containers!
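(Applied to the question, a minimal sketch could read the key from Laravel's .env and pass it in as a build arg; the grep/cut pipeline assumes the usual KEY=value format, and the Dockerfile must declare ARG MY_NEWRELIC_KEY before the RUN step that uses it:)
# sketch: pass MY_NEWRELIC_KEY from Laravel's .env at build time
docker build \
  --build-arg MY_NEWRELIC_KEY="$(grep '^MY_NEWRELIC_KEY=' .env | cut -d= -f2-)" \
  -t my_php .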
If you use docker-compose, you may set it in that file (link):
version: '3'
services:
  php:
    image: my_php
    environment:
      - MY_NEWRELIC_KEY=keykey
EDIT:
You can also specify a file to read values from.
The file below is called env_file (the name is arbitrary) and is located in the current directory. You can reference the file name, which is parsed to extract the environment variables to set:
$ docker run --env-file=env_file php env
With docker-compose.yml files, we just reference an env_file, and Docker parses it for the variables to set.
version: '3'
services:
  php:
    image: php
    env_file: env_file
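(For completeness, such an env_file is just a plain list of KEY=value pairs, one per line, e.g.:)
# env_file: one variable per line, no quoting or export needed
MY_NEWRELIC_KEY=keykey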

How can I update jenkins plugins from the terminal?

I am trying to create a bash script for setting up Jenkins. Is there any way to update the plugin list from the Jenkins terminal?
On first setup there are no plugins available in the list, i.e.:
java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin dry
won't work.
A simple but working way is to first list all installed plugins, look for updates, and install them.
java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ list-plugins
Each plugin that has an update available shows the new version in brackets at the end, so you can grep for those:
java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ list-plugins | grep -e ')$' | awk '{ print $1 }'
If you call install-plugin with a plugin name, the plugin is automatically upgraded to the latest version.
Finally you have to restart Jenkins.
Putting it all together (can be placed in a shell script):
UPDATE_LIST=$( java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ list-plugins | grep -e ')$' | awk '{ print $1 }' );
if [ ! -z "${UPDATE_LIST}" ]; then
    echo Updating Jenkins Plugins: ${UPDATE_LIST};
    java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ install-plugin ${UPDATE_LIST};
    java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ safe-restart;
fi
You can actually install plugins from the computer's terminal (rather than the Jenkins terminal):
1. Download the plugin from the plugin site (http://updates.jenkins-ci.org/download/plugins)
2. Copy that plugin into the $JENKINS_HOME/plugins directory
3. Either start Jenkins or call the reload service (http://yourservername:8080/jenkins/reload)
This enables the plugin in Jenkins, assuming Jenkins has been started. For example:
cd $JENKINS_HOME/plugins
curl -O http://updates.jenkins-ci.org/download/plugins/cobertura.hpi
curl http://yourservername:8080/reload
Here is how you can deploy Jenkins CI plugins using Ansible, which of course is run from the terminal. This code is part of roles/jenkins_ci/tasks/main.yaml:
- name: Plugins
  with_items:                                # PLUGIN NAME
    - name: checkstyle                       # Checkstyle
    - name: dashboard-view                   # Dashboard View
    - name: dependency-check-jenkins-plugin  # OWASP Dependency Check
    - name: depgraph-view                    # Dependency Graph View
    - name: deploy                           # Deploy
    - name: emotional-jenkins-plugin         # Emotional Jenkins
    - name: monitoring                       # Monitoring
    - name: publish-over-ssh                 # Publish Over SSH
    - name: shelve-project-plugin            # Shelve Project
    - name: token-macro                      # Token Macro
    - name: zapper                           # OWASP Zed Attack Proxy (ZAP)
  sudo: yes
  get_url: dest="{{ jenkins_home }}/plugins/{{ item.name | mandatory }}.jpi"
           url="https://updates.jenkins-ci.org/latest/{{ item.name }}.hpi"
           owner=jenkins group=jenkins mode=0644
  notify: Restart Jenkins
This is part of a more complete example that you can find at:
https://github.com/sakaal/service_platform_ansible/blob/master/roles/jenkins_ci/tasks/main.yaml
Feel free to adapt it to your needs.
You can update the plugin list with this command line:
curl -s -L http://updates.jenkins-ci.org/update-center.json | sed '1d;$d' | curl -s -X POST -H 'Accept: application/json' -d @- http://localhost:8080/updateCenter/byId/default/postBack
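(As a quick sanity check, not part of the original answer, you can confirm the update site data was accepted by querying the update center's JSON API, assuming anonymous read access:)
# sketch: inspect the update center state after the postBack
curl -s 'http://localhost:8080/updateCenter/api/json?pretty=true' | head -n 20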
FYI: some plugins (Mercurial in particular) don't install correctly from the command line unless you use their short name. I think this has to do with triggers in the Jenkins package info data. You can simulate Jenkins' own package update by visiting 127.0.0.1:8080/pluginManager/checkUpdates in a JavaScript-capable browser.
Or, if you're feeling masochistic, you can run this Python code:
import json
import urllib2
import requests

UPDATES_URL = 'https://updates.jenkins-ci.org/update-center.json?id=default&version=1.509.4'
PREFIX = 'http://127.0.0.1:8080'

def update_plugins():
    "look at the source for /pluginManager/checkUpdates and downloadManager in /static/<whatever>/scripts/hudson-behavior.js"
    raw = urllib2.urlopen(UPDATES_URL).read()
    jsontext = raw.split('\n')[1]  # ugh, JSONP
    json.loads(jsontext)  # i.e. error if not parseable
    print 'received updates json'
    # post the updates JSON back to Jenkins
    postback = PREFIX + '/updateCenter/byId/default/postBack'
    reply = requests.post(postback, data=jsontext)
    if not reply.ok:
        raise RuntimeError(("updates upload not ok", reply.text))
    print 'applied updates json'
And once you've run this, you should be able to run jenkins-cli -s http://127.0.0.1:8080 install-plugin mercurial -deploy.
With a current Jenkins version, the CLI can be used via SSH. This has to be enabled on the "Global Security Settings" page in the administration interface, as described in the docs. Furthermore, the user who triggers the updates must add their public SSH key.
With the modified shell script from the accepted answer, this can be automated as follows; you just have to replace HOSTNAME and USERNAME:
#!/bin/bash
jenkins_host=HOSTNAME  # e.g. jenkins.example.com
jenkins_user=USERNAME
jenkins_port=$(curl -s --head https://$jenkins_host/login | grep -oP "^X-SSH-Endpoint: $jenkins_host:\K[0-9]{4,5}")

function jenkins_cli {
    ssh -o StrictHostKeyChecking=no -l "$jenkins_user" -p $jenkins_port "$jenkins_host" "$@"
}

UPDATE_LIST=$( jenkins_cli list-plugins | grep -e ')$' | awk '{ print $1 }' );
if [ ! -z "${UPDATE_LIST}" ]; then
    echo Updating Jenkins Plugins: ${UPDATE_LIST};
    jenkins_cli install-plugin ${UPDATE_LIST};
    jenkins_cli safe-restart;
else
    echo "No updates available"
fi
This greps the SSH port used by the Jenkins CLI and then connects via SSH without checking the host key, since that key changes on every Jenkins restart.
All plugins with an available update are then upgraded, and Jenkins is restarted afterwards.
In Groovy
The Groovy path has one big advantage: it can be added as a 'system groovy script' build step in a job without any change.
Create a file update_plugins.groovy with this content:
jenkins.model.Jenkins.getInstance().getUpdateCenter().getSites().each { site ->
    site.updateDirectlyNow(hudson.model.DownloadService.signatureCheck)
}
hudson.model.DownloadService.Downloadable.all().each { downloadable ->
    downloadable.updateNow();
}
def plugins = jenkins.model.Jenkins.instance.pluginManager.activePlugins.findAll {
    it -> it.hasUpdate()
}.collect {
    it -> it.getShortName()
}
println "Plugins to upgrade: ${plugins}"
long count = 0
jenkins.model.Jenkins.instance.pluginManager.install(plugins, false).each { f ->
    f.get()
    println "${++count}/${plugins.size()}.."
}
if(plugins.size() != 0 && count == plugins.size()) {
    println "restarting Jenkins..."
    jenkins.model.Jenkins.instance.safeRestart()
}
Then execute this curl command:
curl --user 'username:token' --data-urlencode "script=$(< ./update_plugins.groovy)" https://jenkins_server/scriptText
