Kubernetes `client-go` - How to get container status in a pod - go

After following this and this, how do I watch container statuses (whether a container crashed, completed, etc.) in a Pod and trigger events when a container's status changes in a Pod?
Let's say I have a Pod with 2 containers:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    app: busybox
spec:
  containers:
  - image: busybox
    name: busybox5
    command:
    - sleep
    - "5"
    imagePullPolicy: IfNotPresent
  - image: busybox
    name: busybox50
    command:
    - sleep
    - "50"
    imagePullPolicy: IfNotPresent
  restartPolicy: Never
I want to be notified when the busybox5 container finishes executing, but not when busybox50 does. I have done something like the following using informers:
UpdateFunc: func(oldObj, obj interface{}) {
    mObj := obj.(v1.Object)
    log.Printf("%s: Updated", mObj.GetName())
},
This is simple. But how does it work in a multi-container Pod? What if I want to handle events only for the busybox5 container in the Pod? How can I achieve this in Go?

I think you need the client-go informers. Here's a good tutorial about them: https://firehydrant.io/blog/stay-informed-with-kubernetes-informers/
You can create an asynchronous event listener for the Pod in which your containers are running. When the status of one of its containers changes, the Pod's status changes as well, so you should listen for update events.
Once you receive the update event for your Pod, you can read its container statuses and react only to the container you care about.
I hope this is what you are looking for :)
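A minimal sketch of how that could look with client-go's shared informer factory (the kubeconfig path and resync period are assumptions; the pod and container names match the example above):

package main

import (
    "log"
    "time"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/cache"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the kubeconfig from the default location (~/.kube/config); adjust as needed.
    config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        log.Fatal(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }

    // Shared informer factory with a 30s resync period (arbitrary choice).
    factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
    podInformer := factory.Core().V1().Pods().Informer()

    podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
        UpdateFunc: func(oldObj, newObj interface{}) {
            pod, ok := newObj.(*corev1.Pod)
            if !ok || pod.Name != "busybox" {
                return
            }
            // Inspect only the container we care about.
            for _, cs := range pod.Status.ContainerStatuses {
                if cs.Name != "busybox5" {
                    continue
                }
                if cs.State.Terminated != nil {
                    log.Printf("container %s finished: reason=%s exitCode=%d",
                        cs.Name, cs.State.Terminated.Reason, cs.State.Terminated.ExitCode)
                }
            }
        },
    })

    stop := make(chan struct{})
    defer close(stop)
    factory.Start(stop)
    factory.WaitForCacheSync(stop)
    select {} // block forever; a real program would handle shutdown signals
}

In a real controller you would also compare the container status in oldObj against newObj, so that you react only to the actual transition and not to every periodic resync.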

Related

How do I run a post-boot script on a container in kubernetes

My application has a shell script that I need to execute once the container has started, and it must keep running in the background. I have tried using lifecycle, but it does not work for me:
ports:
- name: php-port
  containerPort: 9000
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "sh /root/script.sh"]
I need an artisan command to keep running in the background once the container is started.
If the lifecycle hooks (e.g. postStart) do not work for you, you could add another container to your pod that runs in parallel with your main container (sidecar pattern):
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: main
    image: some/image
    ...
  - name: sidecar
    image: another/container
If your second container should only start after your main container has started successfully, you need some kind of notification. For example, the main container could create a file on a shared volume (e.g. an emptyDir) that the second container waits for before starting its main process. The docs have an example of a shared volume for two containers in the same pod. This obviously requires adding some additional logic to the main container.
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: main
    image: some/image
    volumeMounts:
    - name: shared-data
      mountPath: /some/path
  - name: sidecar
    image: another/image
    volumeMounts:
    - name: shared-data
      mountPath: /trigger
    command: ["/bin/bash"]
    args: ["-c", "while [ ! -f /trigger/triggerfile ]; do sleep 1; done; ./your/2nd-app"]
You can try using something like supervisor: http://supervisord.org/
We use it to start the main process and a monitoring agent in the background so we get metrics out of it. Supervisor would also ensure those processes stay up if they crash or terminate.

Restart pod without losing changes

I have 2 pods that are meant to send logs to Elasticsearch. Logs in /var/log/messages get sent, but for some reason service_name.log doesn't get sent - I think it is due to the Elasticsearch configuration. There is a .conf file in these 2 pods that handles the connection to Elasticsearch.
I want to make changes to test if this is indeed the issue. I am not sure if the changes take effect as soon as I edit the file. Is there a way to restart/update the pod without losing changes I might make to this file?
To store non-confidential data as a configuration file in a volume, you could use ConfigMaps.
Here is an example of a Pod that mounts a ConfigMap in a volume:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    configMap:
      name: myconfigmap

Kubernetes + Minikube - How to see all stdout output?

I'm running a Ruby app on Kubernetes with Minikube.
However, whenever I look at the logs I don't see the output I would have seen in my terminal when running the app locally.
I presume it's because it only shows stderr?
What can I do to see all types of console logs (e.g. from puts or raise)?
From looking around, is this something to do with it running in detached mode? See the related Python issue: Logs in Kubernetes Pod not showing up
Thanks.
As requested - here is the deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: sample
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
      - name: sample
        image: someregistry
        imagePullPolicy: Always
        command: ["/bin/sh","-c"]
        args: ["bundle exec rake sample:default --trace"]
        envFrom:
        - configMapRef:
            name: sample
        - secretRef:
            name: sample
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: regsecret
As shown in this article, kubectl logs pod/apod should show you stdout and stderr for a pod deployed in Minikube.
By default in Kubernetes, Docker is configured to write a container's stdout and stderr to a file under /var/log/containers on the host system
Kubernetes adds:
There are two types of system components: those that run in a container and those that do not run in a container.
For example:
The Kubernetes scheduler and kube-proxy run in a container.
The kubelet and container runtime, for example Docker, do not run in containers.
And:
On machines with systemd, the kubelet and container runtime write to journald.
If systemd is not present, they write to .log files in the /var/log directory.
Similarly to the container logs, system component logs in the /var/log directory should be rotated.
In Kubernetes clusters brought up by the kube-up.sh script, those logs are configured to be rotated by the logrotate tool daily or once the size exceeds 100MB.
I presume it's because it only shows stderr?
Not really, unless something specific is disabled in your container or pod spec. I assume you are using Docker, so the default is to capture both stdout and stderr, and that's what you see when you run kubectl logs <pod-name>.
What can I do to see all types of console logs (e.g. from puts or raise)?
You should see them in the container logs. It would help to post your pod or deployment definition.

Rebuild and Rerun Go application in Minikube

I'm building a microservice in Go which is going to live in a Kubernetes cluster. I'm developing it and using Minikube to run a copy of the cluster locally.
The problem I ran into is that if I run my application inside the container using go run main.go, I need to kill the pod for it to detect changes and update what is running.
I tried using a watcher for the binary so that the binary is rebuilt on every save while running inside the pod, but even after compiling the new version, Minikube keeps running the old one.
Any suggestions?
Here is my deployment file for running the microservice locally:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: pokedex
  name: pokedex
spec:
  template:
    metadata:
      labels:
        name: pokedex
    spec:
      volumes:
      - name: source
        hostPath:
          path: *folder where source resides*
      containers:
      - name: pokedex
        image: golang:1.8.5-jessie
        workingDir: *folder where source resides*
        command: ["./pokedex"] # Here I tried both the binary and go run main.go
        ports:
        - containerPort: 8080
          name: go-server
          protocol: TCP
        volumeMounts:
        - name: source
          mountPath: /source
        env:
        - name: GOPATH
          value: /source

Set vm.max_map_count on cluster nodes

I am trying to install Elasticsearch (latest) on cluster nodes on Google Container Engine, but Elasticsearch needs the variable vm.max_map_count to be >= 262144.
If I ssh into every node and manually run:
sysctl -w vm.max_map_count=262144
everything works fine, but any new node will not have the specified configuration.
So my question is:
Is there a way to load a system configuration on every node at boot time?
A DaemonSet would not be a good solution because inside a Docker container the system variables are read-only.
I'm using a freshly created cluster with the gci node image.
I found another solution while looking at this repository.
It relies on the use of an init container; the plus side is that only the init container runs with privileges:
annotations:
  pod.beta.kubernetes.io/init-containers: '[
    {
      "name": "sysctl",
      "image": "busybox",
      "imagePullPolicy": "IfNotPresent",
      "command": ["sysctl", "-w", "vm.max_map_count=262144"],
      "securityContext": {
        "privileged": true
      }
    }
  ]'
There is a new syntax available since Kubernetes 1.6 which still works for 1.7. Starting with 1.8 this new syntax is required. The declaration of init containers is moved to spec:
initContainers:
- name: init-sysctl
  image: busybox
  command:
  - sysctl
  - -w
  - vm.max_map_count=262144
  imagePullPolicy: IfNotPresent
  securityContext:
    privileged: true
You should be able to use a DaemonSet to emulate the behavior of a startup script. If the script needs to do root-level actions on the node, you can configure the DaemonSet pods to run in privileged mode.
For an example of how to do this, see https://github.com/kubernetes/contrib/tree/master/startup-script
As Robert pointed out, a DaemonSet could run as a startup script. Unfortunately, GKE will only let you run a DaemonSet with restartPolicy set as Always.
So in order to prevent k8s from continually restarting the container after running sysctl, it has to sleep after the setup and should preferably run only on selected nodes. It isn't an elegant solution, but at least it's elastic.
Example:
es-host-setup Dockerfile:
FROM alpine
CMD sysctl -w vm.max_map_count=262144; sleep 365d
DaemonSet resource file:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: es-host-setup
spec:
template:
metadata:
labels:
name: es-host-setup
spec:
containers:
- name: es-host-setup
image: es-host-setup
securityContext:
privileged: true
restartPolicy: Always
nodeSelector:
pool: elasticsearch
