I have a YAML file which looks like this.
Is there a way to get the "Corefile" value formatted as a multi-line block?
apiVersion: v1
data:
  Corefile: ".:53 {\n rewrite name regex (.*)\\.test\\.io {1}.default.svc.cluster.local \n errors\n health {\n lameduck 5s\n }\n ready\n kubernetes cluster.local in-addr.arpa ip6.arpa {\n pods insecure\n fallthrough in-addr.arpa ip6.arpa\n ttl 30\n }\n prometheus :9153\n forward . /etc/resolv.conf {\n max_concurrent 1000\n }\n cache 30\n loop\n reload\n loadbalance\n}\n"
kind: ConfigMap
metadata:
  creationTimestamp: "2022-02-25T12:36:15Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "14874"
  uid: dc352ab8-1e43-4663-8c6a-0d404f4bb4f3
I tried yq -P, but this did not help
The basic command is this (e can be omitted in newer versions):
yq e '.data.Corefile style="literal"' test.yaml
However, this will not work in your case: YAML specifies that trailing whitespace is ignored, and thus you cannot have data with trailing whitespace formatted as a literal block scalar. The relevant part of your data is:
default.svc.cluster.local \n
                         ^
This space does not seem to be relevant, so you can write additional code to remove it:
yq e '.data.Corefile |= sub("\s*(\n)", "${1}") | .data.Corefile style="literal"' test.yaml
(There is a curious bug where I cannot substitute with "\n" directly as that will create "\\n" in the data for some reason, so I use the captured newline instead.)
Result:
apiVersion: v1
data:
  Corefile: |
    .:53 {
     rewrite name regex (.*)\.test\.io {1}.default.svc.cluster.local
     errors
     health {
     lameduck 5s
     }
     ready
     kubernetes cluster.local in-addr.arpa ip6.arpa {
     pods insecure
     fallthrough in-addr.arpa ip6.arpa
     ttl 30
     }
     prometheus :9153
     forward . /etc/resolv.conf {
     max_concurrent 1000
     }
     cache 30
     loop
     reload
     loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-02-25T12:36:15Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "14874"
  uid: dc352ab8-1e43-4663-8c6a-0d404f4bb4f3
Related
I want to build a pipeline function that replaces a value in a YAML file. For that, I want to make both the pattern and the replacement value variable. I have seen the env-variables-operators article in the yq docs, however I cannot find the relevant section.
I have a yaml file with the following content:
---
spec:
  source:
    helm:
      parameters:
        - name: "image.tag"
          value: "1.0.0"
I now want to build a pipeline function that will replace the value of the value key in the yaml.
I can do so with:
$ yq '.spec.source.helm.parameters[0].value = "2.0.0"' myyaml.yml
---
spec:
  source:
    helm:
      parameters:
        - name: "image.tag"
          value: "2.0.0"
Now I want to make this command customizable.
What works:
$ VALUE=3.0.0
$ replacement=$VALUE yq '.spec.source.helm.parameters[0].value = env(replacement)' myyaml.yml
---
spec:
  source:
    helm:
      parameters:
        - name: "image.tag"
          value: "3.0.0"
What doesn't work:
$ VALUE=3.0.0
$ PATTERN=.spec.source.helm.parameters[0].value
$ replacement=$VALUE pattern=$PATTERN yq 'env(pattern) = env(replacement)'
spec:
  source:
    helm:
      parameters:
        - name: "image.tag"
          value: "1.0.0"
I have also tried to use strenv and wrapping the replacement pattern in quotes, but it is not working.
Can anyone help me with the correct syntax?
You can import data with env, but not code. You could inject it (note the changes in the quoting), but this is bad practice as it makes your script very vulnerable:
VALUE='3.0.0'
PATTERN='.spec.source.helm.parameters[0].value'
replacement="$VALUE" yq "${PATTERN} = env(replacement)" myyaml.yml
---
spec:
  source:
    helm:
      parameters:
        - name: "image.tag"
          value: "3.0.0"
Better practice would be to import the path in a form that yq can interpret, e.g. as an array, and to use setpath:
VALUE='3.0.0'
PATTERN='["spec","source","helm","parameters",0,"value"]'
replacement="$VALUE" pattern="$PATTERN" yq 'setpath(env(pattern); env(replacement))' myyaml.yml
I'm currently writing a bash script and struggling with something that looked fairly simple at first.
I'm trying to create a function that calls a kubectl (Kubernetes) command. The command expects the path to a file as an argument, but I'd like to pass the content itself (multi-line YAML text). It works in the shell, but I can't make it work in my function. I've tried many things; the latest attempt looks like this (it's just a subset of the YAML content):
#!/bin/bash
AGENT_NAME="default"

deploy_agent_statefulset() {
  kubectl apply -n default -f - $(cat <<- END
kind: ConfigMap
metadata:
  name: $AGENT_NAME
apiVersion: v1
data:
  agent.yaml: |
    metrics:
      wal_directory: /var/lib/agent/wal
END
)
}
deploy_agent_statefulset
The initial command that works in the shell is the following.
cat <<'EOF' | NAMESPACE=default /bin/sh -c 'kubectl apply -n $NAMESPACE -f -'
kind: ConfigMap
...
I'm sure I'm doing a lot of things wrong. Keen to get some help.
Thank you.
In your function, you didn't construct stdin properly:
#!/bin/bash
AGENT_NAME="default"

deploy_agent_statefulset() {
  kubectl apply -n default -f - <<END
kind: ConfigMap
metadata:
  name: $AGENT_NAME
apiVersion: v1
data:
  agent.yaml: |
    metrics:
      wal_directory: /var/lib/agent/wal
END
}
deploy_agent_statefulset
this one should work:
#!/bin/bash
AGENT_NAME="default"

deploy_agent_statefulset() {
  cat <<EOF | kubectl apply -n default -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: $AGENT_NAME
data:
  agent.yaml: |
    metrics:
      wal_directory: /var/lib/agent/wal
EOF
}
deploy_agent_statefulset
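The key difference between the two variants: a here-document is wired to the command's standard input, while $(cat <<END ...) expands the document into command-line arguments, which is what broke the original function. Here is a minimal, kubectl-free sketch of the working pattern (the function and variable names are made up for illustration):

```shell
#!/bin/bash

NAME="demo"

# The unquoted EOF delimiter lets $NAME expand inside the body;
# cat writes the here-document body to stdout exactly as written.
print_manifest() {
  cat <<EOF
metadata:
  name: $NAME
EOF
}

print_manifest
```

Piping the output of such a function into kubectl apply -f - is equivalent to what the fixed deploy_agent_statefulset above does with its here-document.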
To point out what was wrong in your YAML: it is all indentation.
You don't need the indentation at the beginning of the document.
name goes under metadata, so it needs to be indented.
agent.yaml is the key for the data in the ConfigMap, so it needs to be indented as well.
apiVersion: v1
kind: Secret
metadata:
  name: {{ include "backstage.fullname" . }}-backend
type: Opaque
stringData:
  GH_APP_PRIVATEKEY_BASE:|-
{{ .Values.auth.ghApp.privateKey | quote | b64dec | indent 2 }}
I am getting "error converting YAML to JSON: yaml: line 22: could not find expected ':'" as the result when trying to store a base64-encoded string in GH_APP_PRIVATEKEY_BASE.
My application (Backstage) uses Helm charts to map the env secret. I keep having trouble storing/passing the multi-line RSA private key. I'm currently trying to base64-encode the private key into a one-liner, but it still fails validation of the secret file. I would love to know about other approaches, like passing a file with the key written in it.
BTW, I use GITHUB_PRVATE_KEY=$(echo -n $GITHUB_PRVATE_KEY | base64 -w 0) and helm_overrides="$helm_overrides --set auth.ghApp.clientSecret=$GITHUB_PRVATE_KEY" in a GitHub Action to encode the private key.
Try increasing the indent to 4:

...
stringData:
  GH_APP_PRIVATEKEY_BASE: |-
{{ .Values.auth.ghApp.privateKey | quote | b64dec | indent 4 }}
GH_APP_PRIVATEKEY_BASE:|-

You need a space in there: GH_APP_PRIVATEKEY_BASE: |-
Also, not sure why you have a b64dec in there, but I don't think that's the immediate problem.
I was trying to set up an Elasticsearch cluster in AKS using the Helm chart, but due to the Log4j vulnerability I wanted to set it up with the option -Dlog4j2.formatMsgNoLookups set to true. I am getting an unknown flag error when I pass the arguments in helm commands.
Ref: https://artifacthub.io/packages/helm/elastic/elasticsearch/6.8.16
helm upgrade elasticsearch elasticsearch --set imageTag=6.8.16 esJavaOpts "-Dlog4j2.formatMsgNoLookups=true"
Error: unknown shorthand flag: 'D' in -Dlog4j2.formatMsgNoLookups=true
I have also tried to add below in values.yaml file
esConfig: {}
  # elasticsearch.yml: |
  #   key:
  #     nestedkey: value
  log4j2.properties: |
    -Dlog4j2.formatMsgNoLookups = true
but the values are not adding to the /usr/share/elasticsearch/config/jvm.options, /usr/share/elasticsearch/config/log4j2.properties or in the environment variables.
First of all, here's a good source of knowledge about mitigating Log4j2 security issue if this is the reason you reached here.
Here's how you can write your values.yaml for the Elasticsearch chart:
esConfig:
  log4j2.properties: |
    logger.discovery.name = org.elasticsearch.discovery
    logger.discovery.level = debug
A ConfigMap will be generated by Helm:
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-master-config
  ...
data:
  log4j2.properties: |
    logger.discovery.name = org.elasticsearch.discovery
    logger.discovery.level = debug
And the Log4j configuration will be mounted into your Elasticsearch container as:
...
volumeMounts:
  ...
  - name: esconfig
    mountPath: /usr/share/elasticsearch/config/log4j2.properties
    subPath: log4j2.properties
Update: how to set and add multiple configuration files.
You can set up other ES configuration files in your values.yaml. All the files that you specify here will be part of the ConfigMap, and each of them will be mounted at /usr/share/elasticsearch/config/ in the Elasticsearch container. Example:
esConfig:
  elasticsearch.yml: |
    node.master: true
    node.data: true
  log4j2.properties: |
    logger.discovery.name = org.elasticsearch.discovery
    logger.discovery.level = debug
  jvm.options: |
    # You can also place a comment here.
    -Xmx1g
    -Xms1g
    -Dlog4j2.formatMsgNoLookups=true
  roles.yml: |
    click_admins:
      run_as: [ 'clicks_watcher_1' ]
      cluster: [ 'monitor' ]
      indices:
        - names: [ 'events-*' ]
          privileges: [ 'read' ]
          field_security:
            grant: [ 'category', '@timestamp', 'message' ]
          query: '{"match": {"category": "click"}}'
ALL of the configurations above are for illustration only to demonstrate how to add multiple configuration files in the values.yaml. Please substitute these configurations with your own settings.
If you update and put values under esConfig, you will need to remove the curly brackets:

esConfig:
  log4j2.properties: |
    key = value
I would rather suggest changing the /config/jvm.options file and adding at the end:

-Dlog4j2.formatMsgNoLookups=true
The helm chart has an option to set java options.
esJavaOpts: "" # example: "-Xmx1g -Xms1g"
In your case, setting it like this should be the solution:
esJavaOpts: "-Dlog4j2.formatMsgNoLookups=true"
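For completeness, the corrected command line would then look something like this (the elastic/elasticsearch chart reference and release name are assumed from the question; adjust them to your setup). The original error came from passing esJavaOpts "-Dlog4j2..." as a bare argument instead of through --set:

```shell
helm upgrade elasticsearch elastic/elasticsearch \
  --set imageTag=6.8.16 \
  --set esJavaOpts="-Dlog4j2.formatMsgNoLookups=true"
```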
As I see in the updated values.yml in the elastic repository:

esConfig: {}
#  log4j2.properties: |
#    key = value

You probably need to uncomment the log4j2.properties part.
Question
Given this single-line string:
PG_USER=postgres PG_PORT=1234 PG_PASS=icontain=and*symbols
What would be the right way to assign each value to its designated variable so that I can use it afterward?
Context
I'm parsing the content of a k8s secret within a CronJob so that I can periodically call a stored procedure in our Postgres database.
To do so, I plan on using:
PG_OUTPUT_VALUE=$(PGPASSWORD=$PG_PASSWD psql -qtAX -h $PG_HOST -p $PG_PORT -U $PG_USER -d $PG_DATABASE -c $PG_TR_CLEANUP_QUERY)
echo $PG_OUTPUT_VALUE
The actual entire helm chart I'm currently trying to fix looks like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ template "fullname" $ }}-tr-cleanup-cronjob
spec:
  concurrencyPolicy: Forbid
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          volumes:
            - name: postgres
              secret:
                secretName: {{ template "fullname" $ }}-postgres
          containers:
            - name: {{ template "fullname" $ }}-tr-cleanup-pod
              image: postgres:12-alpine
              imagePullPolicy: Always
              env:
                - name: PG_PROPS
                  valueFrom:
                    secretKeyRef:
                      name: {{ template "fullname" $ }}-postgres
                      key: postgres.properties
              command:
                - /bin/sh
                - -c
                - echo "props:" && echo $PG_PROPS && PG_USER=$(grep "^PG_USER=" | cut -d"=" -f2-) && echo $PG_USER && PG_TR_CLEANUP_QUERY="SELECT something FROM public.somewhere;" && echo $PG_TR_CLEANUP_QUERY && PG_OUTPUT_VALUE=$(PGPASSWORD=$PG_PASSWD psql -qtAX -h $PG_HOST -p $PG_PORT -U $PG_USER -d $PG_DATABASE -c $PG_TR_CLEANUP_QUERY) && echo PG_OUTPUT_VALUE
              volumeMounts:
                - name: postgres
                  mountPath: /etc/secrets/postgres
Current approach
As you can see, I'm currently using:
PG_USER=$(grep "^PG_USER=" | cut -d"=" -f2-)
That is because I initially thought the secret would be output on multiple lines, but it turns out that I was wrong. The echo $PG_USER displays an empty string.
The bash declare command is appropriate here, and is safer than eval.
Suppose the input contains something potentially malicious
line='PG_USER=postgres PG_PORT=1234 PG_PASS=icontain=and*symbols`ls`'
I'm assuming none of the values contain whitespace. Let's split that string
read -ra assignments <<< "$line"
Now, declare each one
for assignment in "${assignments[@]}"; do declare "$assignment"; done
Everywhere we examine the input, we maintain double quotes.
Let's see what we ended up with:
$ declare -p PG_USER PG_PORT PG_PASS
declare -- PG_USER="postgres"
declare -- PG_PORT="1234"
declare -- PG_PASS="icontain=and*symbols\`ls\`"
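Putting the splitting and declaring together as one copy-paste sketch:

```shell
#!/bin/bash

line='PG_USER=postgres PG_PORT=1234 PG_PASS=icontain=and*symbols`ls`'

# Split on whitespace into an array, then assign each KEY=VALUE pair.
# The input is only ever expanded inside double quotes, so the
# backticks are stored literally and never executed.
read -ra assignments <<< "$line"
for assignment in "${assignments[@]}"; do
  declare "$assignment"
done

echo "$PG_USER"
```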
Option 1
This function can be reused to assign each variable individually:
extract() {
  echo "$INPUT" | grep -o "$1=.*" | cut -d" " -f1 | cut -d"=" -f2- ;
}
And to use it:
PG_USER=$(extract PG_USER)
PG_PORT=$(extract PG_PORT)
PG_PASS=$(extract PG_PASS)
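A quick local sanity check of the helper against the sample line from the question (INPUT here stands in for whatever variable holds the secret's content):

```shell
#!/bin/sh

INPUT='PG_USER=postgres PG_PORT=1234 PG_PASS=icontain=and*symbols'

# grep -o keeps the match from KEY= to the end of the line, the first
# cut trims at the next space, the second strips the KEY= prefix.
extract() {
  echo "$INPUT" | grep -o "$1=.*" | cut -d" " -f1 | cut -d"=" -f2- ;
}

extract PG_USER
extract PG_PORT
extract PG_PASS
```

Note the assumption baked in here as well: values must not contain spaces, or the first cut will truncate them.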
Option 2
Another potential solution, with a security concern, is to simply use:
eval "$INPUT"
It should only be used if you have validated the input.
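One way to validate before eval is to whitelist the input's shape first. The character class below is an assumption about what your values may contain, so widen it deliberately for your own data rather than copying it as-is:

```shell
#!/bin/bash

line='PG_USER=postgres PG_PORT=1234 PG_PASS=icontain=and*symbols'

# Accept only KEY=VALUE pairs built from a conservative character set;
# anything else (backticks, $(), semicolons, ...) is rejected outright.
if [[ "$line" =~ ^([A-Za-z_][A-Za-z0-9_]*=[[:alnum:]=*._-]+[[:space:]]?)+$ ]]; then
  eval "$line"
else
  echo "refusing to eval suspicious input" >&2
  exit 1
fi

echo "$PG_USER"
```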
Contextual complete answer
And because I've presented the k8s context in the question, here is the answer as plugged into that solution.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: {{ template "fullname" $ }}-cronjob
spec:
  concurrencyPolicy: Forbid
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          volumes:
            - name: postgres
              secret:
                secretName: {{ template "fullname" $ }}-postgres
          containers:
            - name: {{ template "fullname" $ }}-cronjob-pod
              image: postgres:12-alpine
              imagePullPolicy: Always
              env:
                - name: PG_PROPS
                  valueFrom:
                    secretKeyRef:
                      name: {{ template "fullname" $ }}-postgres
                      key: postgres.properties
              command:
                - /bin/sh
                - -c
                - >-
                  extract() { echo "$PG_PROPS" | grep -o "$1=.*" | cut -d" " -f1 | cut -d"=" -f2- ; } &&
                  export PGHOST=$(extract PG_HOST) &&
                  export PGPORT=$(extract PG_PORT) &&
                  export PGDATABASE=$(extract PG_DATABASE) &&
                  export PGUSER=$(extract PG_USER) &&
                  PG_SCHEMA=$(extract PG_SCHEMA) &&
                  PG_QUERY="SELECT tenant_schema FROM $PG_SCHEMA.tenant_schema_mappings;" &&
                  PGPASSWORD=$(extract PG_PASSWD) psql --echo-all -c "$PG_QUERY"
              volumeMounts:
                - name: postgres
                  mountPath: /etc/secrets/postgres