I've got a file that consists of a number of ConfigMaps, something like:
{{- define "config1" -}}
kind: ConfigMap
metadata:
name: config-{{.Chart.Nn}}
apiVersion: v1
data:
script.sh: |-
#!/bin/bash
echo "Hello World"
echo "Hello Planet"
{{- end -}}
How do I extract echo "Hello World" and echo "Hello Planet" into a function so I can simply refer to the function within script.sh for the configmaps that need to run these particular commands?
I'm trying to avoid having to write the same code over and over.
Thanks.
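One way to avoid repeating the commands (a sketch; the helper template name "common.script-body" is my own choice, not something Helm provides) is to define the shared lines as a named template and include them with indent so they line up under script.sh:
{{- define "common.script-body" -}}
echo "Hello World"
echo "Hello Planet"
{{- end -}}

{{- define "config1" -}}
kind: ConfigMap
metadata:
  name: config-{{ .Chart.Name }}
apiVersion: v1
data:
  script.sh: |-
    #!/bin/bash
{{ include "common.script-body" . | indent 4 }}
{{- end -}}
Any other ConfigMap template that needs the same commands can include "common.script-body" the same way.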
Related
I'm currently writing a bash script and struggling with something that looked fairly simple at first.
I'm trying to create a function that calls a kubectl (Kubernetes) command. The command expects the path to a file as an argument, but I'd like to pass the content itself (multiline YAML text). It works in the shell but I can't make it work in my function. I've tried many things, and the latest attempt looks like this (it's just a subset of the YAML content):
#!/bin/bash
AGENT_NAME="default"
deploy_agent_statefulset() {
  kubectl apply -n default -f - $(cat <<- END
kind: ConfigMap
metadata:
  name: $AGENT_NAME
apiVersion: v1
data:
  agent.yaml: |
    metrics:
      wal_directory: /var/lib/agent/wal
END
)
}
deploy_agent_statefulset
The initial command that works in the shell is the following:
cat <<'EOF' | NAMESPACE=default /bin/sh -c 'kubectl apply -n $NAMESPACE -f -'
kind: ConfigMap
metadata:
  name: grafana-agent
...
I'm sure I'm doing a lot of things wrong; keen to get some help.
Thank you.
In your function, you didn't construct stdin properly:
#!/bin/bash
AGENT_NAME="default"
deploy_agent_statefulset() {
  kubectl apply -n default -f - <<END
kind: ConfigMap
metadata:
  name: $AGENT_NAME
apiVersion: v1
data:
  agent.yaml: |
    metrics:
      wal_directory: /var/lib/agent/wal
END
}
deploy_agent_statefulset
This one should work:
#!/bin/bash
AGENT_NAME="default"
deploy_agent_statefulset() {
  cat << EOF | kubectl apply -n default -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: $AGENT_NAME
data:
  agent.yaml: |
    metrics:
      wal_directory: /var/lib/agent/wal
EOF
}
deploy_agent_statefulset
To point out what is wrong in your YAML: it is all indentation.
You don't need the indentation at the beginning of the document.
name goes under metadata, so it needs to be indented.
agent.yaml is the key for the data in the ConfigMap, so it needs to be indented as well.
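As a side note on the heredoc itself: leaving the delimiter unquoted (END rather than 'END') is what lets $AGENT_NAME expand before the manifest reaches kubectl. A minimal illustration of the difference, assuming AGENT_NAME="default":
cat <<END
name: $AGENT_NAME
END
# prints: name: default   (variable expanded)

cat <<'END'
name: $AGENT_NAME
END
# prints: name: $AGENT_NAME   (taken literally)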
I have a YAML file which looks like this.
Is there a way to get the "Corefile" value to be multi-line?
apiVersion: v1
data:
  Corefile: ".:53 {\n rewrite name regex (.*)\\.test\\.io {1}.default.svc.cluster.local \n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus :9153\n forward . /etc/resolv.conf {\n max_concurrent 1000\n }\n cache 30\n loop\n reload\n loadbalance\n}\n"
kind: ConfigMap
metadata:
  creationTimestamp: "2022-02-25T12:36:15Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "14874"
  uid: dc352ab8-1e43-4663-8c6a-0d404f4bb4f3
I tried yq -P, but this did not help
The basic command is this (e can be omitted in newer versions):
yq e '.data.Corefile style="literal"' test.yaml
However, this will not work in your case, since YAML says that trailing whitespace is ignored, and thus you cannot have data with trailing whitespace formatted as a literal block scalar. The relevant part of your data is:
default.svc.cluster.local \n
(note the trailing space between "local" and the \n)
This space does not seem to be relevant, so you can write additional code to remove it:
yq e '.data.Corefile |= sub("\s*(\n)", "${1}") | .data.Corefile style="literal"' test.yaml
(There is a curious bug where I cannot substitute with "\n" directly as that will create "\\n" in the data for some reason, so I use the captured newline instead.)
Result:
apiVersion: v1
data:
  Corefile: |
    .:53 {
        rewrite name regex (.*)\.test\.io {1}.default.svc.cluster.local
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-02-25T12:36:15Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "14874"
  uid: dc352ab8-1e43-4663-8c6a-0d404f4bb4f3
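If you want the change written back to the file instead of printed to stdout, the same expression can be run with yq's -i (in-place) flag:
yq e -i '.data.Corefile |= sub("\s*(\n)", "${1}") | .data.Corefile style="literal"' test.yaml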
Question
Given this single-line string:
PG_USER=postgres PG_PORT=1234 PG_PASS=icontain=and*symbols
What would be the right way to assign each value to its designated variable so that I can use it afterward?
Context
I'm parsing the content of a k8s secret within a CronJob so that I can periodically call a stored procedure in our Postgres database.
To do so, I plan on using:
PG_OUTPUT_VALUE=$(PGPASSWORD=$PG_PASSWD psql -qtAX -h $PG_HOST -p $PG_PORT -U $PG_USER -d $PG_DATABASE -c $PG_TR_CLEANUP_QUERY)
echo $PG_OUTPUT_VALUE
The entire Helm chart I'm currently trying to fix looks like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ template "fullname" $ }}-tr-cleanup-cronjob
spec:
  concurrencyPolicy: Forbid
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          volumes:
            - name: postgres
              secret:
                secretName: {{ template "fullname" $ }}-postgres
          containers:
            - name: {{ template "fullname" $ }}-tr-cleanup-pod
              image: postgres:12-alpine
              imagePullPolicy: Always
              env:
                - name: PG_PROPS
                  valueFrom:
                    secretKeyRef:
                      name: {{ template "fullname" $ }}-postgres
                      key: postgres.properties
              command:
                - /bin/sh
                - -c
                - echo "props:" && echo $PG_PROPS && PG_USER=$(grep "^PG_USER=" | cut -d"=" -f2-) && echo $PG_USER && PG_TR_CLEANUP_QUERY="SELECT something FROM public.somewhere;" && echo $PG_TR_CLEANUP_QUERY && PG_OUTPUT_VALUE=$(PGPASSWORD=$PG_PASSWD psql -qtAX -h $PG_HOST -p $PG_PORT -U $PG_USER -d $PG_DATABASE -c $PG_TR_CLEANUP_QUERY) && echo PG_OUTPUT_VALUE
              volumeMounts:
                - name: postgres
                  mountPath: /etc/secrets/postgres
Current approach
As you can see, I'm currently using:
PG_USER=$(grep "^PG_USER=" | cut -d"=" -f2-)
That is because I initially thought the secret would be output on multiple lines, but it turns out that I was wrong. The echo $PG_USER displays an empty string.
The bash declare command is appropriate here, and is safer than eval.
Suppose the input contains something potentially malicious
line='PG_USER=postgres PG_PORT=1234 PG_PASS=icontain=and*symbols`ls`'
I'm assuming none of the values contain whitespace. Let's split that string
read -ra assignments <<< "$line"
Now, declare each one
for assignment in "${assignments[@]}"; do declare "$assignment"; done
Everywhere we examine the input, we maintain double quotes.
Let's see what we ended up with:
$ declare -p PG_USER PG_PORT PG_PASS
declare -- PG_USER="postgres"
declare -- PG_PORT="1234"
declare -- PG_PASS="icontain=and*symbols\`ls\`"
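Putting the pieces together as a minimal, self-contained sketch (the input line is the example value from the question):
#!/bin/bash
line='PG_USER=postgres PG_PORT=1234 PG_PASS=icontain=and*symbols'

# Split the line on whitespace into an array of KEY=VALUE words
read -ra assignments <<< "$line"

# Declare each word as a shell variable
for assignment in "${assignments[@]}"; do
  declare "$assignment"
done

echo "$PG_USER $PG_PORT $PG_PASS"
# prints: postgres 1234 icontain=and*symbols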
Option 1
This function can be reused to assign each variable individually:
extract() {
  echo "$INPUT" | grep -o "$1=.*" | cut -d" " -f1 | cut -d"=" -f2-
}
And to use it (assuming the single-line string from the question is stored in $INPUT):
PG_USER=$(extract PG_USER)
PG_PORT=$(extract PG_PORT)
PG_PASS=$(extract PG_PASS)
Option 2
Another potential solution, with a security concern, is to simply use:
eval "$INPUT"
It should only be used if you have validated the input.
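If you do go the eval route, it is worth rejecting anything that is not a plain run of KEY=VALUE pairs first. A rough validation sketch (the character whitelist is an assumption about what your values may contain, not part of the original answer):
# Only eval if the line is nothing but KEY=VALUE pairs built from safe characters
if [[ "$INPUT" =~ ^([A-Za-z_][A-Za-z_0-9]*=[A-Za-z0-9=*._-]+[[:space:]]?)+$ ]]; then
  eval "$INPUT"
else
  echo "refusing to eval suspicious input" >&2
  exit 1
fi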
Contextual complete answer
And because I've presented the k8s context in the question, here is the answer plugged into that solution:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ template "fullname" $ }}-cronjob
spec:
  concurrencyPolicy: Forbid
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          volumes:
            - name: postgres
              secret:
                secretName: {{ template "fullname" $ }}-postgres
          containers:
            - name: {{ template "fullname" $ }}-cronjob-pod
              image: postgres:12-alpine
              imagePullPolicy: Always
              env:
                - name: PG_PROPS
                  valueFrom:
                    secretKeyRef:
                      name: {{ template "fullname" $ }}-postgres
                      key: postgres.properties
              command:
                - /bin/sh
                - -c
                - >-
                  extract() { echo "$PG_PROPS" | grep -o "$1=.*" | cut -d" " -f1 | cut -d"=" -f2- ; } &&
                  export PGHOST=$(extract PG_HOST) &&
                  export PGPORT=$(extract PG_PORT) &&
                  export PGDATABASE=$(extract PG_DATABASE) &&
                  export PGUSER=$(extract PG_USER) &&
                  PG_SCHEMA=$(extract PG_SCHEMA) &&
                  PG_QUERY="SELECT tenant_schema FROM $PG_SCHEMA.tenant_schema_mappings;" &&
                  PGPASSWORD=$(extract PG_PASSWD) psql --echo-all -c "$PG_QUERY"
              volumeMounts:
                - name: postgres
                  mountPath: /etc/secrets/postgres
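To try the job without waiting for the schedule, you can trigger the CronJob manually once it is deployed (a sketch; <release-fullname> stands for whatever the fullname template renders to):
kubectl create job tr-cleanup-manual --from=cronjob/<release-fullname>-cronjob
kubectl logs job/tr-cleanup-manual -f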
We have a dockerized Maven project that we deploy to Kubernetes via Jenkins and Helm. What I would like to do is pass the shell script below as a command in deployment.yaml, as shown, to initiate Selenium tests while the pod is being created. Before I deploy the service, however, I need to update the variables in the shell script according to conditional statements (Jenkins parameters) and then pass the updated shell script to the Kubernetes pod.
So, is there any way to add if statements, update the variables according to the Jenkins parameters, and pass them through to the pod with the proper values?
Shell Script
#!/bin/sh
mvn --projects ${projectName} --also-make clean test -Dcucumber.filter.tags="#${servicename} or #${tag}" -Denvironment=test -Ddhc=true -Djavax.net.ssl.trustStore=/usr/java/openjdk-14/lib/security/cacerts
Deployment.yaml
env:
{{- range $key, $value := .Values.extraEnv }}
  - name: {{ $key }}
    value: {{ $value | quote }}
{{- end }}
command: ["/test-script.sh"]
Jenkinsfile
parameters {
  string(name: 'projectName', defaultValue: "Xx", description: 'Which project do you want to test?')
  string(name: "service_name", defaultValue: 'Yy', description: 'Selenium tag for service name')
  string(name: "tag", defaultValue: 'must')
  //string(name: 'branchName', defaultValue: "origin/development", description: 'Environment for selenium tests')
}
stage('Deploy to dev') {
  steps {
    script {
      sh """
      helm upgrade --install --debug test-service --values values.${ENV}.yaml --namespace ${namespace} --set image.tag=${env.BUILD_NUMBER} .
      """
    }
  }
}
EDIT-1
I just updated my question according to the answer below. So, after running sed to update the script, how can I add an if statement to check the value of the service name parameter? Or can I use something like
if [ <PROJECTNAME> == "xx" ]
Here are the steps that I would like to do.
projectName=$1
serviceName=$2
tag=$3
if [ "$serviceName" == "xx" ]
then
  echo "Tests are running..."
  mvn --projects <PROJECTNAME> --also-make clean test -Dcucumber.filter.tags="#<SERVICENAME> or #<TAG>"
else
  echo 'All test scenarios are running on stage environment'
  mvn --projects <PROJECTNAME> --also-make clean test -Dcucumber.filter.tags="#<SERVICENAME> or #<SERVICENAME>"
fi
You can keep placeholders for values which need to be updated dynamically at build time.
I'd keep the shell script as below, with placeholders instead of variables:
#!/bin/sh
mvn --projects <PROJECTNAME> --also-make clean test -Dcucumber.filter.tags="#<SERVICENAME> or #<TAG>" -Denvironment=test -Ddhc=true -Djavax.net.ssl.trustStore=/usr/java/openjdk-14/lib/security/cacerts
And then add a sed line within the sh block, above your helm upgrade ... command, to replace the placeholders with build-time values so that the updated script can be passed to the next set of actions.
sh """
sed -i \"s/<PROJECTNAME>/${projectName}/g; s/<SERVICENAME>/${service_name}/g; s/<TAG>/${tag}/g\" /path/to/test-script.sh
helm upgrade . . .
"""
EDIT-1:
Example:
if (serviceName == 'xx') {
  env.TAG = serviceName
}
sh """
  .... other actions
"""
I have metrics like route/api_1_test/POST/time/200.avg, where api_1_test is the route, POST is the method, time is the metric_name, 200 is the status_code, and avg is the metric_type. I have arrays for route, method, status_code and metric_type. I would like to create a config file with all possible combinations of the arrays, with the following text:
- name: finagle_route_<metric_name>
  path: $.route/<route>/<method>/<metric_name>/<status_code>.<metric_type>
  labels:
    route: <route>
    method: <method>
    status: <status_code>
    type: <metric_type>
How do I write a for loop for this?
#!/bin/bash
set -f
for route in $(<routes); do
  for method in $(<methods); do
    for name in $(<metric_names); do
      for code in $(<http_status_codes); do
        for type in $(<types); do
          echo -e "- name: finagle_route_$name\n  path: \$.route/$route/$method/$name/$code.$type\n  labels:\n    route: $route\n    method: $method\n    status: $code\n    type: $type\n"
        done
      done
    done
  done
done
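Assuming the loop above is saved as generate-config.sh and the five input files sit next to it with one value per line (the file contents here are only illustrative), it can be used like this:
printf '%s\n' api_1_test api_2_test > routes
printf '%s\n' GET POST > methods
printf '%s\n' time > metric_names
printf '%s\n' 200 500 > http_status_codes
printf '%s\n' avg p99 > types

./generate-config.sh > metrics-config.yaml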