How do I run a post-boot script on a container in kubernetes - bash

My application has a shell script that I need to execute once the container has started, and it should keep running in the background. I have tried using lifecycle hooks, but it does not work for me:
ports:
- name: php-port
  containerPort: 9000
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "sh /root/script.sh"]
I need an artisan command to keep running in the background once the container is started.

If the lifecycle hooks (e.g. postStart) do not work for you, you could add another container to your pod that runs in parallel to your main container (the sidecar pattern):
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: main
    image: some/image
    ...
  - name: sidecar
    image: another/container
If your second container should only start after your main container has started successfully, you need some kind of notification. For example, the main container could create a file on a shared volume (e.g. an emptyDir), and the second container waits for that file before it starts its main process. The docs have an example of a shared volume for two containers in the same pod. This obviously requires adding some additional logic to the main container; a sketch of that side follows the Pod example below.
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: main
    image: some/image
    volumeMounts:
    - name: shared-data
      mountPath: /some/path
  - name: sidecar
    image: another/image
    volumeMounts:
    - name: shared-data
      mountPath: /trigger
    command: ["/bin/bash"]
    args: ["-c", "while [ ! -f /trigger/triggerfile ]; do sleep 1; done; ./your/2nd-app"]

You can try using something like supervisor
http://supervisord.org/
We use that to start the main process and a monitoring agent in the background so we get metrics out of it. supervisor would also ensure those processes stay up if they crash or terminate.
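As a rough sketch (the program names and paths here are illustrative, assuming a PHP-FPM main process and an artisan queue worker as in the question), a supervisord.conf could look like this:

[supervisord]
nodaemon=true

; main process serving on the php-port (9000)
[program:php-fpm]
command=php-fpm -F
autorestart=true

; artisan command that should keep running in the background
[program:artisan-worker]
command=php /var/www/artisan queue:work
autorestart=true

The image would then start supervisord as its main command, e.g. CMD ["supervisord", "-n", "-c", "/etc/supervisord.conf"], so that both processes come up when the container starts.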

Related

Micro-service deploy using helm

I am totally new to Helm deployments. I want to deploy a microservice application using Helm. There are a couple of services, including Consul and a database. How can I configure it so that the Consul and database services are deployed first and the other services are deployed later?
You can make use of initContainers in your services to check that the services they depend on are available and that you can connect to them.
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app.kubernetes.io/name: MyApp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup myservice.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for myservice; sleep 2; done"]
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]
References:
https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

Kubernetes `client-go` - How to get container status in a pod

After following this and this, how do I watch container statuses (if a container crashed, completed, etc.) in a Pod and trigger events when a container status changes?
Let's say I have a Pod with 2 containers:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  containers:
  - image: busybox
    name: busybox5
    command:
    - sleep
    - "5"
    imagePullPolicy: IfNotPresent
  - image: busybox
    name: busybox50
    command:
    - sleep
    - "50"
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
I want to get notified when the busybox5 container finishes execution, not busybox50. I have done something like the below using informers:
UpdateFunc: func(oldObj, obj interface{}) {
    mObj := obj.(v1.Object)
    log.Printf("%s: Updated", mObj.GetName())
},
This is simple. But how does it work in a multi-container Pod? What if I want to handle events only for the busybox5 container in the Pod? How can I achieve this in Go?
I think you need the client-go informers. Here's a good tutorial about them: https://firehydrant.io/blog/stay-informed-with-kubernetes-informers/
You can register an asynchronous event handler for the Pod in which your containers are running. When the status of any container changes, the Pod object itself is updated, so you should listen for update events.
Once you receive the update event for your Pod, you can inspect its container statuses and filter for the container you care about; see the sketch below.
I hope this is what you were looking for :)
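A minimal sketch in Go, assuming client-go with a kubeconfig in the default location and the Pod/container names from the question; it registers an update handler on a Pod informer and logs only when the busybox5 container has terminated:

package main

import (
    "log"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Build a client from the local kubeconfig (default location assumed).
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }

    factory := informers.NewSharedInformerFactory(clientset, 0)
    podInformer := factory.Core().V1().Pods().Informer()

    podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        UpdateFunc: func(oldObj, newObj interface{}) {
            pod, ok := newObj.(*corev1.Pod)
            if !ok || pod.Name != "busybox" {
                return
            }
            // Only react to the container we care about.
            for _, cs := range pod.Status.ContainerStatuses {
                if cs.Name != "busybox5" {
                    continue // ignore busybox50 and any other containers
                }
                if cs.State.Terminated != nil {
                    log.Printf("busybox5 finished with exit code %d", cs.State.Terminated.ExitCode)
                }
            }
        },
    })

    stop := make(chan struct{})
    defer close(stop)
    factory.Start(stop)
    factory.WaitForCacheSync(stop)
    select {} // keep watching
}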

How to properly configure the environment in kubernetes cluster?

I have a Spring Boot application with two profiles, dev and prod. My Dockerfile is:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG DEPENDENCY=target/dependency
COPY ${DEPENDENCY}/BOOT-INF/lib /app/lib
COPY ${DEPENDENCY}/META-INF /app/META-INF
COPY ${DEPENDENCY}/BOOT-INF/classes /app
ENTRYPOINT ["java","-Dspring.profiles.active=dev","-cp","app:app/lib/*","com.my.Application"]
Please note that, when building the image, I specify the active profile as a command-line argument in the ENTRYPOINT.
This is the containers section of my kubernetes deployment where I use this image:
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  imagePullPolicy: "Always"
  ports:
  - containerPort: 8080
    name: myapp
  readinessProbe:
    httpGet:
      path: /actuator/health
      port: 8080
    timeoutSeconds: 3
    periodSeconds: 20
    failureThreshold: 3
It works, but it has a major flaw: how can I switch to the production environment without rebuilding the image?
The best option would be to remove that ENTRYPOINT from my Dockerfile and provide this configuration in my Kubernetes YAML so that I could always use the same image... is this possible?
Edit: I saw that there is a lifecycle instruction, but note that I have a readiness probe based on Spring Boot's actuator; it would always fail if I used this construct.
You can override an image's ENTRYPOINT by using the command property of a Kubernetes Pod spec. Likewise, you could override CMD by using the args property (also see the documentation):
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  imagePullPolicy: "Always"
  command: ["java","-Dspring.profiles.active=prod","-cp","app:app/lib/*","com.my.Application"]
  ports:
  - containerPort: 8080
    name: myapp
Alternatively, to provide a higher level of abstraction, you might write your own entrypoint script that reads the application profile from an environment variable:
#!/bin/sh
# Default to the dev profile unless APPLICATION_CONTEXT is set.
PROFILE="${APPLICATION_CONTEXT:-dev}"
exec java "-Dspring.profiles.active=$PROFILE" -cp 'app:app/lib/*' com.my.Application
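For this to take effect, the image's ENTRYPOINT has to point at that script instead of invoking java directly; a sketch, assuming the script is added to the image as /entrypoint.sh:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]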
Then, you could simply pass that environment variable into your pod:
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  imagePullPolicy: "Always"
  env:
  - name: APPLICATION_CONTEXT
    value: prod
  ports:
  - containerPort: 8080
    name: myapp
Rather than putting spring.profiles.active in the Dockerfile's ENTRYPOINT, make use of ConfigMaps and application.properties.
Your ENTRYPOINT in the Dockerfile should look like:
ENTRYPOINT ["java","-cp","app:app/lib/*","com.my.Application","--spring.config.additional-location=/config/application-dev.properties"]
Create a ConfigMap that acts as the application.properties of your Spring Boot application:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
  namespace: flow
data:
  application-dev.properties: |
    spring.application.name=myapp
    server.port=8080
    spring.profiles.active=dev
NOTE: Here we have specified spring.profiles.active.
In the containers section of your Kubernetes deployment, mount the ConfigMap inside the container so that it acts as the application.properties:
containers:
- name: myapp
  image: myregistry.azurecr.io/myapp:0.1.7
  imagePullPolicy: "Always"
  command: ["java","-cp","app:app/lib/*","com.my.Application","--spring.config.additional-location=/config/application-dev.properties"]
  ports:
  - containerPort: 8080
    name: myapp
  volumeMounts:
  - name: myapp-application-config
    mountPath: "/config"
    readOnly: true
  readinessProbe:
    httpGet:
      path: /actuator/health
      port: 8080
    timeoutSeconds: 3
    periodSeconds: 20
    failureThreshold: 3
volumes:
- name: myapp-application-config
  configMap:
    name: myapp-config
    items:
    - key: application-dev.properties
      path: application-dev.properties
NOTE: --spring.config.additional-location points to the location of the application.properties that we created in the ConfigMap.
So, by making use of ConfigMaps and application.properties, you can override any configuration of your application without rebuilding the image.
If you want to add a new property or update the value of an existing one, just make the appropriate changes in the ConfigMap and kubectl apply it. Then restart your application pods to bring the new config into effect.
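For example (assuming the ConfigMap manifest is saved as myapp-config.yaml and you are on kubectl 1.15+, which has rollout restart), that could look like:
kubectl apply -f myapp-config.yaml
kubectl rollout restart deployment/myapp
The rolling restart replaces the pods, so the application re-reads the mounted properties on startup.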
Hope this helps.
There are many, many ways to set Spring configuration values. Thanks to Spring Boot's relaxed binding rules, you can use ordinary environment variables to specify individual property values. You might see if you can use this instead of having a separate Spring profile control.
Using environment variables has two advantages here: it means you (or your DevOps team) can change deploy-time settings without recompiling the application; and if you're using a deployment manager like Helm where some details like hostnames are intrinsically unpredictable, this lets you specify values that can't be known until deploy time.
For example, let's say you have a Redis dependency:
cache:
  redis:
    url: redis://localhost:6379/0
You could override this at deploy time by setting
containers:
- name: myapp
  env:
  - name: CACHE_REDIS_URL
    value: "redis://myapp-redis.default.svc.cluster.local:6379/0"
One way to do this is to use Spring Cloud Kubernetes, as described here:
https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#configmap-propertysource
You can define your profiles in a ConfigMap like the one below:
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo
data:
  application.yml: |-
    greeting:
      message: Say Hello to the World
    farewell:
      message: Say Goodbye
    ---
    spring:
      profiles: development
    greeting:
      message: Say Hello to the Developers
    farewell:
      message: Say Goodbye to the Developers
    ---
    spring:
      profiles: production
    greeting:
      message: Say Hello to the Ops
You can then select the desired profile by passing an environment variable in your Kubernetes Deployment manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
  labels:
    app: deployment-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-name
  template:
    metadata:
      labels:
        app: deployment-name
    spec:
      containers:
      - name: container-name
        image: your-image
        env:
        - name: SPRING_PROFILES_ACTIVE
          value: "development"

Rebuild and Rerun Go application in Minikube

I'm building a micro service in Golang which is going to live in a Kubernetes cluster. I'm developing it and using Minikube to run a copy of the cluster locally.
The problem I ran into is that if I run my application inside of the container using go run main.go, I need to kill the pod for it to detect changes and update what is running.
I tried using a watcher so that the binary is rebuilt on every save and that binary runs inside the pod, but even after compiling the new version, Minikube keeps running the old one.
Any suggestions?
Here is my deployment file for running the MS locally:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: pokedex
  name: pokedex
spec:
  template:
    metadata:
      labels:
        name: pokedex
    spec:
      volumes:
      - name: source
        hostPath:
          path: *folder where source resides*
      containers:
      - name: pokedex
        image: golang:1.8.5-jessie
        workingDir: *folder where source resides*
        command: ["./pokedex"] # Here I tried both the binary and go run main.go
        ports:
        - containerPort: 8080
          name: go-server
          protocol: TCP
        volumeMounts:
        - name: source
          mountPath: /source
        env:
        - name: GOPATH
          value: /source

Set vm.max_map_count on cluster nodes

I am trying to install Elasticsearch (latest) on the cluster nodes of Google Container Engine, but Elasticsearch needs the variable vm.max_map_count to be >= 262144.
If I ssh into every node and manually run:
sysctl -w vm.max_map_count=262144
everything works fine, but any new node will not have the specified configuration.
So my question is: is there a way to load a system configuration on every node at boot time?
A DaemonSet would not be a good solution because inside a Docker container the system variables are read-only.
I'm using a freshly created cluster with the gci node image.
I found another solution while looking at this repository.
It relies on the use of an init container; the plus side is that only the init container runs with privileges:
annotations:
  pod.beta.kubernetes.io/init-containers: '[
    {
      "name": "sysctl",
      "image": "busybox",
      "imagePullPolicy": "IfNotPresent",
      "command": ["sysctl", "-w", "vm.max_map_count=262144"],
      "securityContext": {
        "privileged": true
      }
    }
  ]'
There is a new syntax available since Kubernetes 1.6 which still works for 1.7. Starting with 1.8 this new syntax is required. The declaration of init containers is moved to spec:
spec:
  initContainers:
  - name: init-sysctl
    image: busybox
    command:
    - sysctl
    - -w
    - vm.max_map_count=262144
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: true
You should be able to use a DaemonSet to emulate the behavior of a startup script. If the script needs to do root-level actions on the node, you can configure the DaemonSet pods to run in privileged mode.
For an example of how to do this, see https://github.com/kubernetes/contrib/tree/master/startup-script
As Robert pointed out, a DaemonSet can act as a startup script. Unfortunately, GKE will only let you run a DaemonSet with restartPolicy set to Always.
So, to prevent Kubernetes from continually restarting the container after it runs sysctl, the container has to sleep after the setup, and preferably it should run only on selected nodes. It isn't an elegant solution, but it's elastic at least.
Example:
es-host-setup Dockerfile:
FROM alpine
CMD sysctl -w vm.max_map_count=262144; sleep 365d
DaemonSet resource file:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: es-host-setup
spec:
template:
metadata:
labels:
name: es-host-setup
spec:
containers:
- name: es-host-setup
image: es-host-setup
securityContext:
privileged: true
restartPolicy: Always
nodeSelector:
pool: elasticsearch
