Accessing environment variables from a pod - go

I wrote a Golang program which fetches values from environment variables set on my system using export VAR_NAME=somevalue.
cloudType = os.Getenv("CLOUD_TYPE")
clusterRegion = os.Getenv("CLUSTER_REGION")
clusterType = os.Getenv("CLUSTER_TYPE")
clusterName = os.Getenv("CLUSTER_NAME")
clusterID = os.Getenv("CLUSTER_ID")
As mentioned above, my program tries to fetch values from env vars set on the system using the Getenv func. The program works fine when I run it directly and fetches the values from the environment variables. But when I built an image and ran it inside a pod, it was not able to fetch values from the env vars; it gives empty values. Is there any way to access the local env vars from the pod?

Make a YAML file like this to define a ConfigMap:
apiVersion: v1
data:
  CLOUD_TYPE: "$CLOUD_TYPE"
  CLUSTER_REGION: "$CLUSTER_REGION"
  CLUSTER_TYPE: "$CLUSTER_TYPE"
  CLUSTER_NAME: "$CLUSTER_NAME"
  CLUSTER_ID: "$CLUSTER_ID"
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: foo
Ensure your config vars are set, then apply it to your cluster with env substitution first:
envsubst < foo.yaml | kubectl apply -f -
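For example, with placeholder values (assumed here, not taken from the question):
export CLOUD_TYPE=aws
export CLUSTER_REGION=us-east-1
export CLUSTER_TYPE=managed
export CLUSTER_NAME=mycluster
export CLUSTER_ID=12345
envsubst < foo.yaml | kubectl apply -f -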
Then, in the pod definition, use the ConfigMap:
spec:
  containers:
  - name: mypod
    envFrom:
    - configMapRef:
        name: foo

It seems you did not set the env var in the image.
First, you need to ensure that the env vars are set in your image or pod. In an image, you need to use ENV in your Dockerfile (doc). In a Kubernetes pod, see the doc.
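In a Dockerfile this would be an ENV line (e.g. ENV CLOUD_TYPE=aws); in a pod spec, a minimal sketch looks like this (the image name and values are placeholders, not from the question):
spec:
  containers:
  - name: mypod
    image: myimage        # placeholder image name
    env:
    - name: CLOUD_TYPE
      value: "aws"        # placeholder value
    - name: CLUSTER_REGION
      value: "us-east-1"  # placeholder value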
Second, since you mentioned you want to get the runtime env vars from your pod, you can run the command below.
kubectl exec -it ${POD_NAME} -- printenv

...haven't set the env var in the pod. I set it locally in my system
Environment variables set on your host are not automatically passed on to the Pod. You can set the env in your spec and access it from your container. A common approach to substituting environment variables in the spec with variables from the host is envsubst < draft-spec.yaml > final-spec.yaml. For example, if you have this spec:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["ash","-c","echo ${CONTAINER_MESSAGE}"]
    env:
    - name: CONTAINER_MESSAGE
      value: $HOST_MESSAGE
You can run HOST_MESSAGE='hello, world!' envsubst '{$HOST_MESSAGE}' < busybox.yaml | kubectl apply -f -. This will substitute $HOST_MESSAGE with "hello, world!" but will not touch ${CONTAINER_MESSAGE}. This approach does not depend on a ConfigMap, and it allows you to use kubectl set env to update the variable after deployment.
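For example, if the pod were managed by a Deployment named busybox (a hypothetical setup, since env vars on a bare Pod cannot be changed in place), you could later run:
kubectl set env deployment/busybox CONTAINER_MESSAGE='another message'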

How to use nslookup, passing the DNS name through a variable?

Friends, I am trying to implement an init container which will check if MySQL is ready for connections, and I am trying to use nslookup for this. The point is: how do I pass the DNS name through a variable?
It worked like this:
command: ['sh', '-c', 'until nslookup mysql-primary.default.svc.cluster.local; do echo waiting for mysql; sleep 2; done;']
But not like this:
command: ['sh', '-c', 'until nslookup $(MYSQL_HOST); do echo waiting for mysql; sleep 2; done;']
Any idea how I could get the second option working?
MYSQL_HOST seems to be an environment variable and not a command.
$(MYSQL_HOST) will execute MYSQL_HOST as a command in a subshell (and that will not work in this case).
You probably want to use "${MYSQL_HOST}" instead.
The problem is that $() executes a subshell and tries to evaluate a command in there. What you actually want is variable expansion via ${}.
Here a working example for you:
A pod with an init-container, with a MYSQL_HOST environment variable:
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
spec:
  containers:
  - name: busybox-container
    image: busybox
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: mysql-check
    image: busybox
    command: ['sh', '-c', 'until nslookup ${MYSQL_HOST}; do echo waiting for mysql; sleep 2; done;']
    env:
    - name: MYSQL_HOST
      value: "mysql-primary.default.svc.cluster.local"
The pod starts after you create a corresponding service:
kubectl create svc clusterip mysql-primary --tcp=3306
For the sake of completeness: YAML of the service (not necessarily relevant in this case)
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: mysql-primary
  name: mysql-primary
spec:
  ports:
  - name: "3306"
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql-primary
  type: ClusterIP
status:
  loadBalancer: {}

Get device mounts info on a Kubernetes node using a pod

Team,
I have the below pod.yaml that outputs the pod's mount info, but now I want it to show the node's mount info instead (or in addition). Any hint on how I can give the pod the privilege to run the same command on the k8s host on which it is running, and show that in the output of the pod's logs?
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["/bin/bash", "-c"]
    args:
    - |
      echo $HOSTNAME && mount | grep -Ec '/dev/sd.*\<csi' | awk '$0 <= 64 { print "Mounts are less than 64, that is found", $0 ;} $0 > 64 { print "Mounts are more than 64", $0 ;}'
  restartPolicy: OnFailure
kubectl logs pod/command-demo
command-demo
Mounts are less than 64, that is found 0
expected output:
k8s_node1 << this is the hostname of the k8s node on which the above pod is running
Mounts are more than 64, that is found 65
What change do I need to make in my pod.yaml so that it runs the shell command on the node and not in the pod?
You cannot access the host filesystem inside the container unless you mount parts of the host filesystem as a volume. You can try mounting the whole host filesystem into the pod as follows. You might need a privileged securityContext for the pod, depending on what you are trying to do.
apiVersion: v1
kind: Pod
metadata:
  name: dummy
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["/bin/sh"]
    args: ["-c", "sleep 3600"]
    volumeMounts:
    - name: host
      mountPath: /host
  volumes:
  - name: host
    hostPath:
      path: /
      type: Directory
An alternative, and probably better, method is to SSH into the host machine from the pod and run the command there. You can get the host IP using the downward API - https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
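For example, a minimal sketch of exposing the host IP to the container through the downward API (the variable name HOST_IP is arbitrary):
spec:
  containers:
  - name: busybox
    image: busybox
    env:
    - name: HOST_IP                  # arbitrary name
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # IP of the node the pod runs on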

How can I exec into a K8s pod but use a bash_profile from outside of it?

My .bash_profile has many aliases that I use regularly. When I exec into a kubernetes pod, though, those aliases become (understandably) inaccessible. And when I say "exec into" I mean:
kubectl exec -it [pod-name] -c [container-name] bash
Is there any way to make it so that I can still use my bash profile after exec'ing in?
You said those are only aliases. In that case, and only in that case, you could save the .bash_profile in a ConfigMap using --from-env-file:
kubectl create configmap bash-profile --from-env-file=.bash_profile
Keep in mind that each line in the env file has to be in VAR=VAL format.
Lines with # at the beginning and blank lines will be ignored.
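For example, a file in that format might look like this (hypothetical contents):
# comment lines like this are ignored
EDITOR=vim
PAGER=less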
You can then load all the key-value pairs as container environment variables:
apiVersion: v1
kind: Pod
metadata:
  name: bash-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
    - configMapRef:
        name: bash-profile
  restartPolicy: Never
Or Populate a Volume with data stored in a ConfigMap:
apiVersion: v1
kind: Pod
metadata:
  name: bash-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "ls -la /root/.bash_profile" ]
    volumeMounts:
    - name: config-volume
      mountPath: /root/.bash_profile
  volumes:
  - name: config-volume
    configMap:
      # Provide the name of the ConfigMap containing the files you want
      # to add to the container
      name: bash-profile
  restartPolicy: Never
The idea mentioned by #Mark should also work: you can do kubectl cp .bash_profile <pod_name>:/root/. If you need to put it into a specific container, you can add the option -c, --container='': Container name. If omitted, the first container in the pod will be chosen.
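A minimal sketch of that flow (the pod and container names are placeholders):
kubectl cp .bash_profile mypod:/root/.bash_profile -c mycontainer
kubectl exec -it mypod -c mycontainer -- bash --rcfile /root/.bash_profile
The --rcfile flag makes the interactive shell read the copied file instead of the container's own ~/.bashrc.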

Retrieving contents of j2 template file on stdout

I'm attempting to use Ansible to better manage my Kubernetes ConfigMaps in a multi-environment project (dev, stage, and prod). I've generalized each of the ConfigMaps as j2 templates, and I override the variables depending on how they might change in different environments (so that they aren't duplicated three times for basically the same file).
My playbook currently looks something like this:
---
- hosts: localhost
  vars_files:
    - "vars/{{ env }}.yml"
  tasks:
    - name: Generate YAML from j2 template
      template:
        src: templates/foo.j2
        dest: output/foo.yml
And this has been working great for testing so far. However, I'm at the point where I want to incorporate this into my already existing Jenkins CI/CD, but I'm having trouble understanding how it might work with what I am doing currently.
After generating what is basically a Kubernetes ConfigMap from the j2, I'll somehow do this within Jenkins:
kubectl apply -f <yaml>
However, the playbook is creating a YAML file every time I run it, and I am wondering if there is an alternative that would allow me to pipe the contents of the YAML file or somehow retrieve it from stdout.
Basically, I want to evaluate the template and retrieve it without necessarily creating a file.
If I do this, I could do something like the following:
echo result | kubectl apply -f -
where result of course is the contents of the YAML file that results after templating, and the short dash after the -f flag tells kubectl to read the manifest from stdin.
Sorry for so much explaining, I can clarify anything if needed.
I would like to retrieve the result of the template, and pipe it into that command, such as "echo result | kubectl apply -f -"
In which case, you'd use the stdin: parameter of the command: module:
- name: generate kubernetes yaml
  command: echo "run your command that generates yaml here"
  register: k8s_yaml

- name: feed the yaml to kubectl apply
  command: kubectl apply -f -
  args:
    stdin: '{{ k8s_yaml.stdout }}'
It isn't super clear what the relationship is in your question between the top part, dealing with template:, and the bottom part about apply -f -, but if you mean "how can I render a template to a variable, instead of a file?" then the answer is the template lookup plugin:
- name: render the yaml
  set_fact:
    k8s_yaml: '{{ lookup("template", "templates/foo.j2") }}'

- name: now feed it to apply
  command: kubectl apply -f -
  args:
    stdin: '{{ k8s_yaml }}'
You've got a couple of options here. I usually try to stay away from shelling out to command wherever possible. Check out the k8s module in Ansible. Note that as long as state is present, Ansible will patch your object.
- name: Apply your previously generated configmap if you so choose.
  k8s:
    state: present
    definition: "{{ lookup('file', 'output/foo.yml') }}"
Or even better, you could just directly create the configmap
- name: Create the configmap for {{ env }}
  k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: ConfigMap
        namespace: "{{ foo_namespace }}"
        labels:
          app: bar
          environment: "{{ bizzbang }}"

How to set multiple commands in one yaml file with Kubernetes?

This official document shows how to run a command in a YAML config file:
https://kubernetes.io/docs/tasks/configure-pod-container/
apiVersion: v1
kind: Pod
metadata:
  name: hello-world
spec:  # specification of the pod's contents
  restartPolicy: Never
  containers:
  - name: hello
    image: "ubuntu:14.04"
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/sh","-c"]
    args: ["/bin/echo \"${MESSAGE}\""]
If I want to run more than one command, how do I do that?
command: ["/bin/sh","-c"]
args: ["command one; command two && command three"]
Explanation: The command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions". The args are then passed as commands to the shell. In shell scripting, a semicolon separates commands, and && conditionally runs the following command if the first succeeds. In the above example, it always runs command one followed by command two, and only runs command three if command two succeeded.
Alternative: In many cases, some of the commands you want to run are probably setting up the final command to run. In this case, building your own Dockerfile is the way to go. Look at the RUN directive in particular.
My preference is to multiline the args; this is the simplest and easiest to read. Also, the script can be changed without affecting the image; you only need to restart the pod. For example, for a mysql dump, the container spec could be something like this:
containers:
- name: mysqldump
  image: mysql
  command: ["/bin/sh", "-c"]
  args:
    - echo starting;
      ls -la /backups;
      mysqldump --host=... -r /backups/file.sql db_name;
      ls -la /backups;
      echo done;
  volumeMounts:
  - ...
The reason this works is that YAML concatenates all the lines after the "-" into one, and sh runs one long string "echo starting; ls ... ; echo done;".
If you're willing to use a Volume and a ConfigMap, you can mount ConfigMap data as a script, and then run that script:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap
data:
  entrypoint.sh: |-
    #!/bin/bash
    echo "Do this"
    echo "Do that"
---
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: "ubuntu:14.04"
    command:
    - /bin/entrypoint.sh
    volumeMounts:
    - name: configmap-volume
      mountPath: /bin/entrypoint.sh
      readOnly: true
      subPath: entrypoint.sh
  volumes:
  - name: configmap-volume
    configMap:
      defaultMode: 0700
      name: my-configmap
This cleans up your pod spec a little and allows for more complex scripting.
$ kubectl logs my-pod
Do this
Do that
If you want to avoid concatenating all commands into a single command with ; or && you can also get true multi-line scripts using a heredoc:
command:
- sh
- "-c"
- |
  /bin/bash <<'EOF'
  # Normal script content possible here
  echo "Hello world"
  ls -l
  exit 123
  EOF
This is handy for running existing bash scripts, but has the downside of requiring both an inner and an outer shell instance for setting up the heredoc.
I am not sure if the question is still active, but since I did not find the solution in the above answers, I decided to write it down.
I use the following approach:
readinessProbe:
  exec:
    command:
    - sh
    - -c
    - |
      command1
      command2 && command3
I know my example relates to readinessProbe, livenessProbe, etc., but I suspect the same applies to container commands. This provides flexibility, as it mirrors standard script writing in Bash.
IMHO the best option is to use YAML's native block scalars. Specifically in this case, the folded style block.
By invoking sh -c you can pass arguments to your container as commands, but if you want to elegantly separate them with newlines, you'd want to use the folded style block, so that YAML will know to convert newlines to whitespaces, effectively concatenating the commands.
A full working example:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: busy
    image: busybox:1.28
    command: ["/bin/sh", "-c"]
    args:
    - >
      command_1 &&
      command_2 &&
      ...
      command_n
Here is my successful run
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  containers:
  - command:
    - /bin/sh
    - -c
    - |
      echo "running below scripts"
      i=0;
      while true;
      do
        echo "$i: $(date)";
        i=$((i+1));
        sleep 1;
      done
    name: busybox
    image: busybox
Here is one more way to do it, with output logging.
apiVersion: v1
kind: Pod
metadata:
  labels:
    type: test
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - name: log-vol
      mountPath: /var/mylog
    command:
    - /bin/sh
    - -c
    - >
      i=0;
      while [ $i -lt 100 ];
      do
      echo "hello $i";
      echo "$i : $(date)" >> /var/mylog/1.log;
      echo "$(date)" >> /var/mylog/2.log;
      i=$((i+1));
      sleep 1;
      done
  dnsPolicy: ClusterFirst
  restartPolicy: Always
  volumes:
  - name: log-vol
    emptyDir: {}
Here is another way to run multi-line commands.
apiVersion: batch/v1
kind: Job
metadata:
  name: multiline
spec:
  template:
    spec:
      containers:
      - command:
        - /bin/bash
        - -exc
        - |
          set +x
          echo "running below scripts"
          if [[ -f "if-condition.sh" ]]; then
            echo "Running if success"
          else
            echo "Running if failed"
          fi
        name: ubuntu
        image: ubuntu
      restartPolicy: Never
  backoffLimit: 1
Just to bring up another possible option: secrets can be used, as they are presented to the pod as volumes:
Secret example:
apiVersion: v1
kind: Secret
metadata:
  name: secret-script
type: Opaque
data:
  script_text: <<your script in b64>>
YAML extract:
....
containers:
- name: container-name
  image: image-name
  command: ["/bin/bash", "/your_script.sh"]
  volumeMounts:
  - name: vsecret-script
    mountPath: /your_script.sh
    subPath: script_text
....
volumes:
- name: vsecret-script
  secret:
    secretName: secret-script
I know many will argue this is not what secrets are meant to be used for, but it is an option.
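For completeness, a Secret like this could be created directly from a local script file, which handles the base64 encoding for you (the file name here is a placeholder):
kubectl create secret generic secret-script --from-file=script_text=./your_script.sh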
