OpenShift - Variables in Config for Different Environments - YAML

I am currently trying to make deployments on two different OpenShift clusters, but I only want to use one DeploymentConfig file. Is there a good way to overcome the following problem?
apiVersion: v1
kind: DeploymentConfig
metadata:
  labels:
    app: my-app
    deploymentconfig: my-app
  name: my-app
spec:
  selector:
    app: my-app
    deploymentconfig: my-app
  strategy:
    type: Rolling
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailability: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
  replicas: 1
  template:
    metadata:
      labels:
        app: my-app
        deploymentconfig: my-app
    spec:
      containers:
      - name: my-app-container
        image: 172.0.0.1:5000/int-myproject/my-app:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
        env:
        - name: ROUTE_PATH
          value: /my-app
        - name: HTTP_PORT
          value: "8080"
        - name: HTTPS_PORT
          value: "8081"
      restartPolicy: Always
      dnsPolicy: ClusterFirst
Now, if you look at spec.template.spec.containers[0].image, there are two problems with this.
Nr.1
172.0.0.1:5000/int-myproject/my-app:latest
The IP of the internal registry will differ between the two environments
Nr.2
172.0.0.1:5000/int-myproject/my-app:latest
The namespace will also not be the same. In this scenario I want this to be int-myproject or prod-myproject, depending on the environment I want to deploy to. I was thinking maybe there is a way to use parameters in the YAML and pass them to OpenShift, somehow similar to this
oc create -f deploymentconfig.yaml --namespace=int-myproject
and have a parameter like ${namespace} in my YAML file. Is there a good way to achieve this?

Firstly, to answer your question: yes, you can use parameters with OpenShift templates and pass the values at creation time.
To do this, you add the required template parameters to your YAML file and, instead of using oc create, you use oc new-app -f deploymentconfig.yaml --param=SOME_KEY=someValue. Check out oc new-app --help for more info.
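As a rough sketch of what that could look like (the parameter names REGISTRY_IP and NAMESPACE are only illustrative, and the DeploymentConfig body is abbreviated to the parts that change):
apiVersion: v1
kind: Template
metadata:
  name: my-app-template
parameters:
- name: REGISTRY_IP
  description: Host/IP of the internal registry for this cluster
  required: true
- name: NAMESPACE
  description: Project to pull the image from (e.g. int-myproject or prod-myproject)
  required: true
objects:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: my-app
  spec:
    # ... rest of the DeploymentConfig from above, with the image reference parameterized:
    #     image: ${REGISTRY_IP}:5000/${NAMESPACE}/my-app:latest
You would then process it per environment with something like oc new-app -f deploymentconfig.yaml --param=REGISTRY_IP=172.0.0.1 --param=NAMESPACE=int-myproject.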
Some other points to note, though: if you are referencing images from the internal registry, you might be better off using image streams. These provide an abstraction for images pulled from the internal Docker registry on OpenShift, as is the case you have outlined.
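For instance, a minimal sketch (assuming an image stream named my-app already exists in the current project) would drop the hard-coded registry IP and let an image change trigger resolve the image for you:
triggers:
- type: ImageChange
  imageChangeParams:
    automatic: true
    containerNames:
    - my-app-container
    from:
      kind: ImageStreamTag
      name: my-app:latest
With this in the DeploymentConfig spec, the registry address no longer needs to appear in the file at all.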
Finally, the namespace value is available via the downward API in every Pod, so you should not (typically) need to inject that manually.
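For example, a small sketch of exposing the Pod's namespace to the container as an environment variable (the variable name MY_POD_NAMESPACE is arbitrary):
env:
- name: MY_POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace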

Related

How to deploy a simple Hello World program to local Kubernetes cluster

I have a very simple spring-boot Hello World program. When I run the application locally, I can navigate to http://localhost:8080/ and see the "Hello World" greeting displayed on the page. I have also created a Dockerfile and can build an image from it.
My next goal is to deploy this to a local Kubernetes cluster. I have used Docker Desktop to create a local kubernetes cluster. I want to create a deployment for my application, host it locally on the cluster, and access it from a browser.
I am not sure where to start with this deployment. I know that I will need to create charts, but I have no idea how to ultimately push this image to my cluster...
You need to create Kubernetes Deployment and Service definitions.
These definitions can be in JSON or YAML format. Here are example definitions; you can use them as a template for your deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-very-first-deployment
  labels:
    app: first-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: first-deployment
  template:
    metadata:
      labels:
        app: first-deployment
    spec:
      containers:
      - name: your-app
        image: your-image:with-version
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: your-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30180
    targetPort: 8080
  selector:
    app: first-deployment
Do not forget to update the image line in the deployment YAML with your image name and image version. After that replacement, save this file with a name such as deployment.yaml and then apply the definition with the kubectl apply -f deployment.yaml command.
Note that you need to use port 30180 to access your application, as stated in the service definition as the nodePort value (http://localhost:30180).
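If it helps, a quick way to verify the rollout once the file is applied (the exact pod name and output will differ in your cluster):
kubectl apply -f deployment.yaml
kubectl get pods -l app=first-deployment    # wait until the pod shows Running
kubectl get service your-service            # confirm the NodePort 30180 is assigned
curl http://localhost:30180/                # should return the Hello World greeting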
Links:
Kubernetes services: https://kubernetes.io/docs/concepts/services-networking/service/
Kubernetes deployments: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
To start, you need to define the Deployment first, specifying the Docker image and the required environment in that Deployment.

How to properly configure the environment in kubernetes cluster?

I have a Spring Boot application with two profiles, dev and prod; my Dockerfile is:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-Dspring.profiles.active=dev","-cp","app:app/lib/*","com.my.Application"]
Please note that, when building the image, I specify the active profile in the entrypoint as a command-line argument.
This is the containers section of my kubernetes deployment where I use this image:
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  imagePullPolicy: "Always"
  ports:
  - containerPort: 8080
    name: myapp
  readinessProbe:
    httpGet:
      path: /actuator/health
      port: 8080
    timeoutSeconds: 3
    periodSeconds: 20
    failureThreshold: 3
It works but has a major flaw: how can I now switch to the production environment without rebuilding the image?
The best would be to remove that ENTRYPOINT from my Dockerfile and provide this configuration in my Kubernetes YAML, so that I could always use the same image... is this possible?
Edit: I saw that there is a lifecycle instruction, but note that I have a readiness probe based on Spring Boot's actuator. It would always fail if I used this construct.
You can override an image's ENTRYPOINT by using the command property of a Kubernetes Pod spec. Likewise, you could override CMD by using the args property (also see the documentation):
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  imagePullPolicy: "Always"
  command: ["java","-Dspring.profiles.active=prod","-cp","app:app/lib/*","com.my.Application"]
  ports:
  - containerPort: 8080
    name: myapp
Alternatively, to provide a higher level of abstraction, you might write your own entrypoint script that reads the application profile from an environment variable:
#!/bin/sh
PROFILE="${APPLICATION_CONTEXT:-dev}"
exec java "-Dspring.profiles.active=$PROFILE" -cp 'app:app/lib/*' com.my.Application
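The Dockerfile would then copy this script and use it as the entrypoint; a minimal sketch, assuming the script is saved as entrypoint.sh next to the Dockerfile:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]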
Then, you could simply pass that environment variable into your pod:
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  imagePullPolicy: "Always"
  env:
  - name: APPLICATION_CONTEXT
    value: prod
  ports:
  - containerPort: 8080
    name: myapp
Rather than putting spring.profiles.active in the Dockerfile's entrypoint, make use of ConfigMaps and application.properties.
Your ENTRYPOINT in the Dockerfile should look like:
ENTRYPOINT ["java","-cp","app:app/lib/*","com.my.Application","--spring.config.additional-location=/config/application-dev.properties"]
Create a ConfigMap that acts as the application.properties for your Spring Boot application:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: flow
data:
  application-dev.properties: |
    spring.application.name=myapp
    server.port=8080
    spring.profiles.active=dev
NOTE: Here we have specified spring.profiles.active.
In the containers section of your Kubernetes deployment, mount the ConfigMap inside the container so that it acts as application.properties:
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  imagePullPolicy: "Always"
  command: ["java","-cp","app:app/lib/*","com.my.Application","--spring.config.additional-location=/config/application-dev.properties"]
  ports:
  - containerPort: 8080
    name: myapp
  volumeMounts:
  - name: myapp-application-config
    mountPath: "/config"
    readOnly: true
  readinessProbe:
    httpGet:
      path: /actuator/health
      port: 8080
    timeoutSeconds: 3
    periodSeconds: 20
    failureThreshold: 3
volumes:
- name: myapp-application-config
  configMap:
    name: myapp-config
    items:
    - key: application-dev.properties
      path: application-dev.properties
NOTE: --spring.config.additional-location points to the location of the application.properties that we created in the ConfigMap.
So, making use of ConfigMaps and application.properties, one can override any configuration of the application without rebuilding the image.
If you want to add a new config or update the value of an existing config, just make the appropriate changes in the ConfigMap and kubectl apply it. Then scale your application pod down and up to bring the new config into action.
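For example, something along these lines (the deployment name myapp and file name are assumed; kubectl rollout restart needs kubectl 1.15+, otherwise scale down and up as described):
kubectl apply -f myapp-config.yaml          # update the ConfigMap
kubectl rollout restart deployment myapp    # recreate the pod so it re-reads /config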
Hope this helps.
There are many many ways to set Spring configuration values. With some rules, you can use ordinary environment variables to specify individual property values. You might see if you can use this instead of having a separate Spring profile control.
Using environment variables has two advantages here: it means you (or your DevOps team) can change deploy-time settings without recompiling the application; and if you're using a deployment manager like Helm where some details like hostnames are intrinsically unpredictable, this lets you specify values that can't be known until deploy time.
For example, let's say you have a Redis dependency:
cache:
  redis:
    url: redis://localhost:6379/0
You could override this at deploy time by setting
containers:
- name: myapp
  env:
  - name: CACHE_REDIS_URL
    value: "redis://myapp-redis.default.svc.cluster.local:6379/0"
One way to do this is to use Spring Cloud Kubernetes, as described here:
https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#configmap-propertysource
You can define your profiles in a ConfigMap like the one below:
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo
data:
  application.yml: |-
    greeting:
      message: Say Hello to the World
    farewell:
      message: Say Goodbye
    ---
    spring:
      profiles: development
    greeting:
      message: Say Hello to the Developers
    farewell:
      message: Say Goodbye to the Developers
    ---
    spring:
      profiles: production
    greeting:
      message: Say Hello to the Ops
You can then select the desired profile by passing an environment variable in your Kubernetes Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
  labels:
    app: deployment-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-name
  template:
    metadata:
      labels:
        app: deployment-name
    spec:
      containers:
      - name: container-name
        image: your-image
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "development"

Retrieve Kubernetes Secrets mounted as volumes

Hi I am playing around with Kubernetes secrets.
My deployment file is :
---
apiVersion: v1
kind: Secret
metadata:
  name: my-secrets
  labels:
    app: my-app
data:
  username: dXNlcm5hbWU=
  password: cGFzc3dvcmQ=
I am able to create secrets and I am mounting them in my deployments as below:
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spring-service
  labels:
    app: spring-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: spring-service
  template:
    metadata:
      labels:
        app: spring-service
    spec:
      containers:
      - name: spring-service
        image: my-image:tag
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: my-secret-vol
          mountPath: "/app/secrets/my-secret"
          readOnly: true
      volumes:
      - name: my-secret-vol
        secret:
          secretName: my-secrets
My question is: how can I access the username and password I created in the Secret in my Spring Boot app?
I have tried loading them with ${my-secrets.username} and ${username}, but it fails to find the values.
I also tried adding the secrets as environment variables in deployment.yml, as below:
env:
- name: username
  valueFrom:
    secretKeyRef:
      name: my-secrets
      key: username
- name: password
  valueFrom:
    secretKeyRef:
      name: my-secrets
      key: password
In this case, the values are loaded from the Secret, but when I change the Secret's values in the minikube dashboard, the changes are not reflected.
Please help me understand how this works.
I am using minikube, with Docker as the container runtime.
You don't inject the secret into properties.yml. Instead, you use the content of the secret as properties.yml. The process looks like the following:
Create a properties.yml with the sensitive data (e.g. the password).
Base64-encode this file (e.g. base64 properties.yml).
Take the base64-encoded value and put it in the secret under the key properties.yml.
You should end up with a secret in the following format:
apiVersion: v1
kind: Secret
metadata:
  name: my-secrets
  labels:
    app: my-app
data:
  properties.yml: dXNlcm5hbWU=
Now when you mount this secret on your pod, Kubernetes will decode the secret and put the content under the relevant path, and you can just mount it.
The pattern is to have two configuration files: one with non-sensitive configuration that is stored with the code, and a second one (which includes the sensitive configuration) stored as a secret. I don't know whether it is possible to load multiple config files using Spring Boot.
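For what it's worth, Spring Boot's spring.config.additional-location does accept a comma-separated list of locations, so a sketch along these lines should work (the jar name and the first path are only illustrative; the second path matches the mount path used above):
java -jar app.jar \
  --spring.config.additional-location=file:/app/config/application.yml,file:/app/secrets/my-secret/properties.yml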
And one final comment: this process is cumbersome and error-prone. Each change to the configuration file requires decoding the original secret and repeating this manual process. Also, it's very hard to understand what changed - all you see is that the entire content has changed. For that reason, we built Kamus. It lets you encrypt only the sensitive value instead of the entire file. Let me know if that could be relevant for you :)
For the first approach, you'll find the values at:
- /app/secrets/my-secret/username
- /app/secrets/my-secret/password
And for the second approach: you can't change the value of environment variables at runtime; you need to restart or redeploy the pod.

k8s: Use parameterized image tag when creating deployment

I want to run a Kubernetes Deployment along the lines of the following:
apiVersion: v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: my-app
        image: our-own-registry.com/somerepo/my-app:${IMAGE_TAG}
        env:
        - name: FOO
          value: "BAR"
This will be delivered to the developers so that they can perform on-demand deployments using the image tag of their preference.
What is the best way / recommended pattern to pass the tag variable?
Performing an export on the command line to make it available as an env var in the shell from which the kubectl command will run?
Unfortunately, it's impossible with native Kubernetes tools. From here:
kubectl will never support variable substitution.
But that issue also has some good workarounds. The best way is to deploy your apps via Helm charts using templates.
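A minimal sketch of what that looks like with Helm (the chart layout and value names are only illustrative):
# values.yaml
image:
  repository: our-own-registry.com/somerepo/my-app
  tag: latest

# templates/deployment.yaml (fragment)
      containers:
      - name: my-app
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
Developers can then deploy with, for example, helm upgrade --install my-app ./chart --set image.tag=1.2.3.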
For simple use cases envsubst will do just fine:
IMAGE_TAG=1.2 envsubst < deployment.yaml | kubectl apply -f -

How do I access this Kubernetes service via kubectl proxy?

I want to access my Grafana Kubernetes service via the kubectl proxy server, but for some reason it won't work even though I can make it work for other services. Given the below service definition, why is it not available on http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana?
grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: grafana
  labels:
    app: grafana
spec:
  type: NodePort
  ports:
  - name: web
    port: 3000
    protocol: TCP
    nodePort: 30902
  selector:
    app: grafana
grafana-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: monitoring
  name: grafana
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:4.1.1
        env:
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_SECURITY_ADMIN_USER
          valueFrom:
            secretKeyRef:
              name: grafana-credentials
              key: user
        - name: GF_SECURITY_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: grafana-credentials
              key: password
        volumeMounts:
        - name: grafana-storage
          mountPath: /var/grafana-storage
        ports:
        - name: web
          containerPort: 3000
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
          limits:
            memory: 200Mi
            cpu: 200m
      - name: grafana-watcher
        image: quay.io/coreos/grafana-watcher:v0.0.5
        args:
        - '--watch-dir=/var/grafana-dashboards'
        - '--grafana-url=http://localhost:3000'
        env:
        - name: GRAFANA_USER
          valueFrom:
            secretKeyRef:
              name: grafana-credentials
              key: user
        - name: GRAFANA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: grafana-credentials
              key: password
        resources:
          requests:
            memory: "16Mi"
            cpu: "50m"
          limits:
            memory: "32Mi"
            cpu: "100m"
        volumeMounts:
        - name: grafana-dashboards
          mountPath: /var/grafana-dashboards
      volumes:
      - name: grafana-storage
        emptyDir: {}
      - name: grafana-dashboards
        configMap:
          name: grafana-dashboards
The error I'm seeing when accessing the above URL is "no endpoints available for service "grafana"", error code 503.
With Kubernetes 1.10 the proxy URL should be slightly different, like this:
http://localhost:8080/api/v1/namespaces/default/services/SERVICE-NAME:PORT-NAME/proxy/
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls
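Applied to the Service in this question (named port web in the monitoring namespace), that would presumably be:
http://localhost:8001/api/v1/namespaces/monitoring/services/grafana:web/proxy/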
As Michael says, quite possibly your labels or namespaces are mismatching. However in addition to that, keep in mind that even when you fix the endpoint, the url you're after (http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana) might not work correctly.
Depending on your root_url and/or static_root_path grafana configuration settings, when trying to login you might get grafana trying to POST to http://localhost:8001/login and get a 404.
Try using kubectl port-forward instead:
kubectl -n monitoring port-forward [grafana-pod-name] 3000
then access grafana via http://localhost:3000/
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
The issue is that Grafana's port is named web, and as a result one needs to append :web to the kubectl proxy URL: http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana:web.
An alternative is to not name the Grafana port at all, because then you don't have to append :web to the kubectl proxy URL for the service: http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana. I went with this option in the end since it's easier.
There are a few factors that might be causing this issue.
1. The service expects to find one or more supporting endpoints, which it discovers through matching rules on the labels. If the labels don't align, then the service won't find endpoints, and the network gateway function performed by the service will result in a 503.
2. The port declared by the Pod and the process within the container are misaligned from the --target-port expected by the service.
Either one of these might generate the error. Let's take a closer look.
First, kubectl describe the service:
$ kubectl describe svc grafana01-grafana-3000
Name: grafana01-grafana-3000
Namespace: default
Labels: app=grafana01-grafana
chart=grafana-0.3.7
component=grafana
heritage=Tiller
release=grafana01
Annotations: <none>
Selector: app=grafana01-grafana,component=grafana,release=grafana01
Type: NodePort
IP: 10.0.0.197
Port: <unset> 3000/TCP
NodePort: <unset> 30905/TCP
Endpoints: 10.1.45.69:3000
Session Affinity: None
Events: <none>
Notice that my grafana service has 1 endpoint listed (there could be multiple). The error above in your example indicates that you won't have endpoints listed here.
Endpoints: 10.1.45.69:3000
Let's take a look next at the selectors. In the example above, you can see I have 3 selector labels on my service:
Selector: app=grafana01-grafana,component=grafana,release=grafana01
I'll kubectl describe my pods next:
$ kubectl describe pod grafana
Name: grafana01-grafana-1843344063-vp30d
Namespace: default
Node: 10.10.25.220/10.10.25.220
Start Time: Fri, 14 Jul 2017 03:25:11 +0000
Labels: app=grafana01-grafana
component=grafana
pod-template-hash=1843344063
release=grafana01
...
Notice that the labels on the pod align correctly, hence my service finds pods which provide endpoints which are load balanced against by the service. Verify that this part of the chain isn't broken in your environment.
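A quick way to check this part in your case might be (namespace and label taken from the manifests above):
kubectl -n monitoring get endpoints grafana        # should list at least one IP:port
kubectl -n monitoring get pods --show-labels       # confirm the pods carry app=grafana
kubectl -n monitoring describe service grafana     # compare the Selector with the pod labels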
If you do find that the labels are correct, you may still have a disconnect in that the grafana process running within the container within the pod is running on a different port than you expect.
$ kubectl describe pod grafana
Name: grafana01-grafana-1843344063-vp30d
...
Containers:
grafana:
Container ID: docker://69f11b7828c01c5c3b395c008d88e8640c5606f4d865107bf4b433628cc36c76
Image: grafana/grafana:latest
Image ID: docker-pullable://grafana/grafana@sha256:11690015c430f2b08955e28c0e8ce7ce1c5883edfc521b68f3fb288e85578d26
Port: 3000/TCP
State: Running
Started: Fri, 14 Jul 2017 03:25:26 +0000
If for some reason, your port under the container listed a different value, then the service is effectively load balancing against an invalid endpoint.
For example, if it listed port 80:
Port: 80/TCP
Or was an empty value
Port:
Then even if your label selectors were correct, the service would never find a valid response from the pod and would remove the endpoint from the rotation.
I suspect your issue is the first problem above (mismatched label selectors).
If both the label selectors and ports align, then you might have a problem with the MTU setting between nodes. In some cases, if the MTU used by your networking layer (like calico) is larger than the MTU of the supporting network, then you'll never get a valid response from the endpoint. Typically, this last potential issue will manifest itself as a timeout rather than a 503 though.
Your Deployment may not have a label app: grafana, or be in another namespace. Could you also post the Deployment definition?
