Is there a way to pass environment variables through the services in Kubernetes?
I tried passing them in via my Service YAML like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: kafka
  name: kafka
spec:
  ports:
  - port: 9092
  selector:
    name: kafka
  env:
  - name: BROKER_ID
    value: "1"
The YAML is accepted by kubectl and the service is created.
I've confirmed the service is wired up to my container by running env | grep KAFKA: the number of variables increases, as expected, once my service is up.
However, I would like to pass in custom environment variables that differ depending on which instance of the container they are in.
Is this possible?
Kubernetes is designed so that Services are decoupled from Pods. You cannot inject a Secret or an env var into a running Pod; what you want is to configure the Pod itself to use the env var or Secret.
This is the best approach I've found so far (some reading required):
https://github.com/kubernetes/kubernetes/issues/4710
Roughly: mount a Secret as a file and source it before you execute your script.
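As a rough sketch of that pattern (the Secret contents, mount path, and start script below are made up for illustration), you mount the Secret as a file and source it in the container command before starting the real process; each instance can then mount a different Secret:

apiVersion: v1
kind: Secret
metadata:
  name: kafka-env            # hypothetical Secret holding per-instance values
stringData:
  broker.env: |
    export BROKER_ID=1
---
apiVersion: v1
kind: Pod
metadata:
  name: kafka-0
spec:
  containers:
  - name: kafka
    image: your-kafka-image   # placeholder image
    command: ["/bin/sh", "-c"]
    # source the mounted env file, then start the real entrypoint (placeholder path)
    args: [". /etc/kafka-env/broker.env && exec /start-kafka.sh"]
    volumeMounts:
    - name: kafka-env
      mountPath: /etc/kafka-env
      readOnly: true
  volumes:
  - name: kafka-env
    secret:
      secretName: kafka-env

Per-instance Secrets (or a StatefulSet whose ordinal picks the Secret) would then give each container instance different values.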
Related
What I'd like to ask is whether it is possible to create a Kubernetes Job that runs a bash command inside another Pod.
apiVersion: batch/v1
kind: Job
metadata:
  namespace: dev
  name: run-cmd
spec:
  ttlSecondsAfterFinished: 180
  template:
    spec:
      containers:
      - name: run-cmd
        image: <IMG>
        command: ["/bin/bash", "-c"]
        args:
        - <CMD> $POD_NAME
      restartPolicy: Never
  backoffLimit: 4
I considered using:
an environment variable to define the Pod name
the Kubernetes SDK to automate this
But if you have better ideas, I am open to them!
The Job manifest you shared looks like a valid approach.
Still, you need to take the points below into consideration:
Running a command inside another Pod (one of that Pod's containers) requires interacting with the Kubernetes API server, so you need a Kubernetes client (e.g. kubectl) to do it; that client has to be installed inside the Job's container image.
The Job's Pod's service account needs permissions on the pods/exec resource. See the docs and this answer.
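As a hedged sketch (the Role name and the run-cmd service account are assumptions, not from the question), the RBAC for that could look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-exec
rules:
# reading pods is needed so the client can look up the target pod
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
# exec sessions are created against the pods/exec subresource
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: run-cmd-pod-exec
subjects:
- kind: ServiceAccount
  name: run-cmd              # hypothetical service account used by the Job's pod
  namespace: dev
roleRef:
  kind: Role
  name: pod-exec
  apiGroup: rbac.authorization.k8s.io

With that bound, the <CMD> in the Job would be something along the lines of kubectl exec -n dev "$POD_NAME" -- <command to run in the target pod>.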
For the life of Bryan, how do I do this?
Terraform is used to create an SQL Server instance in GCP.
Root password and user passwords are randomly generated, then put into the Google Secret Manager.
The DB's IP is exposed via a private DNS zone.
How can I now get the username and password to access the DB into my K8s cluster? Running a Spring Boot app here.
This was one option I thought of:
In my deployment I add an initContainer:
- name: secrets
  image: gcr.io/google.com/cloudsdktool/cloud-sdk
  args:
  - echo "DB_PASSWORD=$(gcloud secrets versions access latest --secret=\"$NAME_OF_SECRET\")" >> super_secret.env
Okay, what now? How do I get it into my application container from here?
There are also options like bitnami/sealed-secrets, which I'm not keen on since the setup already uses Terraform and stores the secrets in GCP; with sealed-secrets I could skip the secret manager entirely. Same with Vault, in my opinion.
On top of the other answers and the suggestions in the comments, I would like to suggest two tools that you might find interesting.
The first one is secrets-init:
secrets-init is a minimalistic init system designed to run as PID 1 inside container environments, and it's integrated with multiple secrets manager services, e.g. Google Secret Manager.
The second one is kube-secrets-init:
kube-secrets-init is a Kubernetes mutating admission webhook that mutates any K8s Pod that uses specially prefixed environment variables, directly or from a Kubernetes Secret or ConfigMap.
It also supports integration with Google Secret Manager:
Users can put a Google secret name (prefixed with gcp:secretmanager:) as an environment variable value. secrets-init will then resolve that value to the referenced secret's value.
Here's a good article about how it works.
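As a small illustration (the project and secret names are made up, and the exact reference format is an assumption based on the project's docs), a Pod handled by kube-secrets-init would reference a secret roughly like this:

env:
- name: DB_PASSWORD
  # resolved at startup by secrets-init into the actual secret value
  value: gcp:secretmanager:projects/my-project/secrets/db-password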
How do I get it into my application container from here?
You could use a volume to store the secret and mount the same volume in both the init container and the main container, so the init container can share the secret with the main container.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: my-app:latest
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: secrets
    image: gcr.io/google.com/cloudsdktool/cloud-sdk
    command: ["/bin/sh", "-c"]
    # write the fetched secret into the shared volume so the main container can read it
    args:
    - echo "DB_PASSWORD=$(gcloud secrets versions access latest --secret="$NAME_OF_SECRET")" >> /data/super_secret.env
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
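The main container then has to load that file itself, for example by overriding its command to source the file before starting the app (a sketch; the java -jar path is just a placeholder for your Spring Boot jar):

    command: ["/bin/sh", "-c"]
    # load the env file written by the init container, then start the application
    args: [". /data/super_secret.env && exec java -jar /app/app.jar"]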
You can also use spring-cloud-gcp-starter-secretmanager to load secrets from the Spring application itself.
Documentation - https://cloud.spring.io/spring-cloud-gcp/reference/html/#secret-manager
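For example (a minimal sketch; the property and the db-password secret name are placeholders), with the starter on the classpath you can reference a Secret Manager secret directly in application.properties via the sm:// prefix:

# application.properties
spring.datasource.password=${sm://db-password}

The same ${sm://db-password} placeholder can also be injected with @Value in a bean.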
Use an emptyDir volume with medium: Memory to guarantee that the secret is never persisted to disk.
...
volumes:
- name: scratch
  emptyDir:
    medium: Memory
    sizeLimit: "1Gi"
...
If one has control over the image, it's possible to change the entry point and use berglas.
Dockerfile:
# or whatever base image you need
FROM adoptopenjdk/openjdk8:jdk8u242-b08-ubuntu
# Install berglas, see https://github.com/GoogleCloudPlatform/berglas
RUN mkdir -p /usr/local/bin/
ADD https://storage.googleapis.com/berglas/main/linux_amd64/berglas /usr/local/bin/berglas
RUN chmod +x /usr/local/bin/berglas
ENTRYPOINT ["/usr/local/bin/berglas", "exec", "--"]
Now we build the container and test it:
docker build -t image-with-berglas-and-your-app .
docker run \
-v /host/path/to/credentials_dir:/root/credentials \
--env GOOGLE_APPLICATION_CREDENTIALS=/root/credentials/your-service-account-that-can-access-the-secret.json \
--env SECRET_TO_RESOLVE=sm://your-google-project/your-secret \
-ti image-with-berglas-and-your-app env
This should print the environment variables with the sm:// substituted by the actual secret value.
In K8s we run it with Workload Identity, so the K8s service account on behalf of which the pod is scheduled needs to be bound to a Google service account that has the right to access the secret.
In the end your pod description would be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: your-app
spec:
  containers:
  - name: your-app
    image: image-with-berglas-and-your-app
    command: [start-sql-server]
    env:
    - name: AXIOMA_PASSWORD
      value: sm://your-google-project/your-secret
I am not sure how dumb or unreasonable this question is, but we are trying to see whether we can do this in any way.
I have a .bash file, and I want to run it when I invoke a URL.
Let's say the URL is https://domainname.com/jobapi
When I open this in the browser, it should invoke the .bash script inside the container.
Is this really possible?
If it is possible, I'd like to know whether I need to add this script as a Deployment or a Job.
The first step, before looking at Kubernetes, is to configure a web server to run your script. This could be a generic web server like nginx or Apache, and you could add your script as a CGI script. There are plenty of tutorials out there that explain how to write CGI scripts.
Depending on the requirements of your application, a simple HTTP hook server might be a better match. Have a look at, for example, https://github.com/adnanh/webhook.
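For reference, a rough sketch of such a hook config (the hook id, script path, and port are assumptions, not from the question): save something like this as hooks.json and start the server with webhook -hooks hooks.json -port 9000; a request to http://<host>:9000/hooks/jobapi then runs the script.

[
  {
    "id": "jobapi",
    "execute-command": "/scripts/jobapi.bash",
    "command-working-directory": "/scripts"
  }
]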
Either way, try this out with just Docker first, before trying to create a pod and potentially a service and an ingress in Kubernetes.
In a second step, to be able to access your service (the server invoking your script), you need to create a pod, probably through a deployment, and potentially a service and an ingress for it.
Kubernetes Jobs are for running a script (or other program) once. They're most useful for automating maintenance tasks for your application.
What I would try is running the shell script from a PHP file; otherwise you are going to need some sort of driver to trigger the script.
So you would keep the script as a regular executable, and upon a request PHP would execute it via the shell.
You can actually make it work like an API: domain.com/job1 could execute job1, domain.com/jobn could execute jobn, and so on.
Now, the approach I'm describing only works as a Deployment, since you want the server to always be up and ready to receive requests.
Create an Ingress (or a NodePort Service if it's external-facing) which routes to a Service.
The Service selects, via labels, the Pod that runs the script; this Pod can come from a Deployment or be a standalone Pod.
Make this Service expose that Pod/Deployment.
The Deployment can start Pods that run the shell script, or a standalone Pod can run the shell script as well.
Ingress:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: "domainname.com"
    http:
      paths:
      - path: /jobapi
        pathType: Prefix
        backend:
          serviceName: my-service
          servicePort: 8080
my-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - nodePort: 30007
    port: 8080
    targetPort: 8080
Run your bash script; this can be done by defining a Deployment or a Pod.
Pod:
kubectl run myapp --image=nginx --labels=app=MyApp --port=8080 -- /bin/sh -c "echo 'Im up'"
(The Pod name has to be lowercase; the app=MyApp label is what the Service selector matches.)
or
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: MyAppdep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: MyApp
  template:
    metadata:
      labels:
        app: MyApp
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        command: ["/bin/sh", "-c", "echo 'test'"]
        ports:
        - containerPort: 8080
In my Kubernetes cluster I have created an Endpoints object pointing to a Kafka cluster. The Endpoints object was created successfully.
Name - kafka
Endpoint - X.X.X.X:9092
In my Spring Boot application's deployment YAML I have an environment variable BROKER_IP, which I pointed at the endpoint:
env:
- name: BROKER_IP
  value: kafka
The Pod is in an Error state. In bootstrap-server I am getting the literal value kafka and not the actual endpoint that was created. Any thoughts?
UPDATE: I just tried kafka:9092 and it worked. So does the Endpoints object map only to the IP and not the port? Is my understanding correct?
Is it possible that you forgot to create the Service object matching the Endpoints? Because you are providing the IP/port pairs yourself, the Service needs to be selectorless.
This works for me:
kind: Endpoints
apiVersion: v1
metadata:
  name: kafka
subsets:
- addresses: [{ip: "1.2.3.4"}]
  ports: [{port: 9092}]
---
kind: Service
apiVersion: v1
metadata:
  name: kafka
spec:
  ports: [{port: 9092}]
Testing it:
$ kubectl run kafka-dns-test --image=busybox --attach --rm --restart=Never -- nslookup kafka
If you don't see a command prompt, try pressing enter.
Server: 10.96.0.10
Address: 10.96.0.10:53
Name: kafka.default.svc.cluster.local
Address: 10.96.220.40
Successful lookup, ignore extra *** Can't find xxx: No answer messages
Also, because there is a Service object you get some environment variables in your Pods (without having to declare them):
KAFKA_PORT='tcp://10.96.220.40:9092'
KAFKA_PORT_9092_TCP='tcp://10.96.220.40:9092'
KAFKA_PORT_9092_TCP_ADDR='10.96.220.40'
KAFKA_PORT_9092_TCP_PORT='9092'
KAFKA_PORT_9092_TCP_PROTO='tcp'
KAFKA_SERVICE_HOST='10.96.220.40'
KAFKA_SERVICE_PORT='9092'
But the most flexible way to use a Service is still to use its DNS name (kafka in this case).
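Applied to the question's setup (BROKER_IP is the asker's variable; whether the application expects a host:port pair there is an assumption), the Deployment would pass the Service DNS name together with the port:

env:
- name: BROKER_IP
  # service DNS name plus port, resolved via cluster DNS
  value: "kafka:9092"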
I'm trying to migrate applications based on the Netflix OSS stack to Kubernetes. The ideal way I found is to create a Service of type NodePort and register the applications with Eureka, so I'm setting eureka.hostname=hostIP and eureka.nonSecurePort=nodePort.
Here's what I've done:
Create a Service for sample-app-service with service type NodePort.
Inject the nodePort into a ConfigMap by running the command kubectl create configmap saas-event-reception-config --from-literal=nodePort=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services sample-app-service). (Question: is there a way I can specify this as YAML?)
Reference the nodePort using configMapKeyRef in the Deployment YAML.
The problem I'm facing is with automated deployment: ideally I'd like to deploy the application using a single deployment file which includes the Service, the ConfigMap, and the Deployment. Is there a way I can do this gracefully? Or are there any alternative suggestions for doing this?
I'm also looking at Helm, but even if I use --set to pass the nodePort to the ConfigMap by running kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services sample-app-service, the Service has to be deployed first so that the ConfigMap gets the nodePort value. Is there a way around this?
What you can do is specify a port from the service-node-port-range (30000-32767 by default) in your Service YAML, under ports as nodePort. Then you know the nodePort in advance. You could use Helm to pass the nodePort in as a parameter so that the same value is used in the Service YAML and also in your ConfigMap and Deployment; a sketch follows below.
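A minimal sketch, assuming a Helm value named nodePort; the chart file names, the app: sample-app selector, and the ports are assumptions, while the Service and ConfigMap names come from the question:

# values.yaml
nodePort: 30007

# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: sample-app-service
spec:
  type: NodePort
  selector:
    app: sample-app
  ports:
  - port: 8080
    targetPort: 8080
    # fixed, known in advance, taken from the chart values
    nodePort: {{ .Values.nodePort }}

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: saas-event-reception-config
data:
  # the same value, so the Deployment can read it via configMapKeyRef
  nodePort: "{{ .Values.nodePort }}"

Installing with something like helm install sample-app ./chart --set nodePort=30007 gives the Service and the ConfigMap the same value without having to query the live Service first.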