Set vm.max_map_count on cluster nodes - elasticsearch

I'm trying to install Elasticsearch (latest) on a Google Container Engine cluster, but Elasticsearch needs vm.max_map_count to be >= 262144.
If I ssh into every node and manually run:
sysctl -w vm.max_map_count=262144
everything works fine, but any new node will not have the specified configuration.
So my question is: is there a way to load a system configuration on every node at boot time?
A DaemonSet would not be a good solution because inside a Docker container the system variables are read-only.
I'm using a freshly created cluster with the gci node image.

I found another solution while looking at this repository.
It relies on the use of an init container; the plus side is that only the init container runs with privileges:
annotations:
  pod.beta.kubernetes.io/init-containers: '[
    {
      "name": "sysctl",
      "image": "busybox",
      "imagePullPolicy": "IfNotPresent",
      "command": ["sysctl", "-w", "vm.max_map_count=262144"],
      "securityContext": {
        "privileged": true
      }
    }
  ]'
There is a new syntax available since Kubernetes 1.6 which still works for 1.7. Starting with 1.8 this new syntax is required. The declaration of init containers is moved to spec:
- name: init-sysctl
  image: busybox
  command:
  - sysctl
  - -w
  - vm.max_map_count=262144
  imagePullPolicy: IfNotPresent
  securityContext:
    privileged: true
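For context, here is a minimal sketch of where that list sits inside a pod template; the surrounding StatefulSet fields and the Elasticsearch image tag are illustrative additions, not part of the original answer:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: elasticsearch   # illustrative
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0   # illustrative version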

You should be able to use a DaemonSet to emulate the behavior of a startup script. If the script needs to do root-level actions on the node, you can configure the DaemonSet pods to run in privileged mode.
For an example of how to do this, see https://github.com/kubernetes/contrib/tree/master/startup-script
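A rough sketch of such a DaemonSet, loosely following the linked startup-script example (the image name/tag and the STARTUP_SCRIPT convention come from that repository and should be treated as assumptions to verify against the current repo):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: startup-script
  labels:
    app: startup-script
spec:
  selector:
    matchLabels:
      app: startup-script
  template:
    metadata:
      labels:
        app: startup-script
    spec:
      hostPID: true
      containers:
      - name: startup-script
        image: gcr.io/google-containers/startup-script:v1   # assumption: image from the linked repo
        securityContext:
          privileged: true
        env:
        - name: STARTUP_SCRIPT
          value: |
            #!/bin/bash
            set -o errexit
            sysctl -w vm.max_map_count=262144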

As Robert pointed out, a DaemonSet could run as a startup script. Unfortunately, GKE will only let you run a DaemonSet with restartPolicy set to Always.
So in order to prevent k8s from continually restarting the container after running sysctl, it has to sleep after the setup and preferably run only on selected nodes. It isn't an elegant solution, but it's elastic at least.
Example:
es-host-setup Dockerfile:
FROM alpine
CMD sysctl -w vm.max_map_count=262144; sleep 365d
DaemonSet resource file:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: es-host-setup
spec:
  template:
    metadata:
      labels:
        name: es-host-setup
    spec:
      containers:
      - name: es-host-setup
        image: es-host-setup
        securityContext:
          privileged: true
      restartPolicy: Always
      nodeSelector:
        pool: elasticsearch
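Note that the extensions/v1beta1 API for DaemonSets has since been removed from Kubernetes; a roughly equivalent manifest for current clusters (a sketch, untested) would use apps/v1, which also requires an explicit selector:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: es-host-setup
spec:
  selector:
    matchLabels:
      name: es-host-setup
  template:
    metadata:
      labels:
        name: es-host-setup
    spec:
      containers:
      - name: es-host-setup
        image: es-host-setup
        securityContext:
          privileged: true
      restartPolicy: Always
      nodeSelector:
        pool: elasticsearch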

Related

How do I run a post-boot script on a container in kubernetes

I have a shell script in my application that I need to execute once the container is started, and it needs to keep running in the background. I have tried using lifecycle hooks, but it does not work for me:
ports:
- name: php-port
  containerPort: 9000
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "sh /root/script.sh"]
I need an artisan command to keep running in the background once the container is started.
When the lifecycle hooks (e.g. postStart) do not work for you, you could add another container to your pod, which runs parallel to your main container (sidecar pattern):
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: main
    image: some/image
    ...
  - name: sidecar
    image: another/container
If your second container should only start after your main container has started successfully, you need some kind of notification. This could be, for example, the main container creating a file on a shared volume (e.g. an emptyDir) that the second container waits for before starting its main process. The docs have an example of a shared volume for two containers in the same pod. This obviously requires adding some additional logic to the main container.
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: main
    image: some/image
    volumeMounts:
    - name: shared-data
      mountPath: /some/path
  - name: sidecar
    image: another/image
    volumeMounts:
    - name: shared-data
      mountPath: /trigger
    command: ["/bin/bash"]
    args: ["-c", "while [ ! -f /trigger/triggerfile ]; do sleep 1; done; ./your/2nd-app"]
You can try using something like supervisor
http://supervisord.org/
We use that to start the main process and a monitoring agent in the background so we get metrics out of it. supervisor would also ensure those processes stay up if they crash or terminate.

ElasticSearch CrashLoopBackoff when deploying with ECK in Kubernetes OKD 4.11

I am running Kubernetes using OKD 4.11 (running on vSphere) and have validated the basic functionality (including dyn. volume provisioning) using applications (like nginx).
I also applied
oc adm policy add-scc-to-group anyuid system:authenticated
to allow authenticated users to use anyuid (which seems to have been required to deploy the nginx example I was testing with).
Then I installed ECK using this quickstart with kubectl to install the CRD and RBAC manifests. This seems to have worked.
Then I deployed the most basic ElasticSearch quickstart example with kubectl apply -f quickstart.yaml using this manifest:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.4.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
The deployment proceeds as expected, pulling image and starting container, but ends in a CrashLoopBackoff with the following error from ElasticSearch at the end of the log:
"elasticsearch.cluster.name":"quickstart",
"error.type":"java.lang.IllegalStateException",
"error.message":"failed to obtain node locks, tried
[/usr/share/elasticsearch/data]; maybe these locations
are not writable or multiple nodes were started on the same data path?"
Looking into the storage, the PV and PVC are created successfully, the output of kubectl get pv,pvc,sc -A -n my-namespace is:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-9d7b57db-8afd-40f7-8b3d-6334bdc07241 1Gi RWO Delete Bound my-namespace/elasticsearch-data-quickstart-es-default-0 thin 41m
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-namespace persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0 Bound pvc-9d7b57db-8afd-40f7-8b3d-6334bdc07241 1Gi RWO thin 41m
NAMESPACE NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/thin (default) kubernetes.io/vsphere-volume Delete Immediate false 19d
storageclass.storage.k8s.io/thin-csi csi.vsphere.vmware.com Delete WaitForFirstConsumer true 19d
Looking at the pod yaml, it appears that the volume is correctly attached :
volumes:
- name: elasticsearch-data
  persistentVolumeClaim:
    claimName: elasticsearch-data-quickstart-es-default-0
- name: downward-api
  downwardAPI:
    items:
    - path: labels
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.labels
    defaultMode: 420
....
volumeMounts:
...
- name: elasticsearch-data
  mountPath: /usr/share/elasticsearch/data
I cannot understand why the volume would be read-only or rather why ES cannot create the lock.
I did find this similar issue, but I am not sure how to apply the UID permissions (in general I am fairly naive about how permissions work in OKD) when working with ECK.
Does anyone with deeper K8s / OKD or ECK/ElasticSearch knowledge have an idea how to better isolate and/or resolve this issue?
Update: I believe this has something to do with this issue and am researching the options related to OKD.
For posterity: ECK starts an init container that should take care of the chown on the data volume, but it can only do so if it is running as root.
The resolution for me was documented here:
https://repo1.dso.mil/dsop/elastic/elasticsearch/elasticsearch/-/issues/7
The manifest now looks like this:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.4.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
    # run init container as root to chown the volume to uid 1000
    podTemplate:
      spec:
        securityContext:
          runAsUser: 1000
          runAsGroup: 0
        initContainers:
        - name: elastic-internal-init-filesystem
          securityContext:
            runAsUser: 0
            runAsGroup: 0
And the pod starts up and can write to the volume as uid 1000.

Kibana with plugins running on Kubernetes

I'm trying to install Kibana with a plugin via the initContainers functionality and it doesn't seem to create the pod with the plugin in it.
The pod gets created and Kibana works perfectly, but the plugin is not installed using the yaml below.
initContainers Documentation
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.11.2
  count: 1
  elasticsearchRef:
    name: quickstart
  podTemplate:
    spec:
      initContainers:
      - name: install-plugins
        command:
        - sh
        - -c
        - |
          bin/kibana-plugin install https://github.com/fbaligand/kibana-enhanced-table/releases/download/v1.11.2/enhanced-table-1.11.2_7.11.2.zip
Got Kibana working with plugins by using a custom container image
dockerfile
FROM docker.elastic.co/kibana/kibana:7.11.2
RUN /usr/share/kibana/bin/kibana-plugin install https://github.com/fbaligand/kibana-enhanced-table/releases/download/v1.11.2/enhanced-table-1.11.2_7.11.2.zip
RUN /usr/share/kibana/bin/kibana --optimize
yaml
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.11.2
  image: my-container-path/kibana-with-plugins:7.11.2
  count: 1
  elasticsearchRef:
    name: quickstart
Building your own image would certainly work, though it can be avoided in this case.
Your initContainer is pretty much what you were looking for, with one exception: you need to add an emptyDir volume.
Mount it into both your initContainer and the regular Kibana container, sharing the plugins you install during init.
Although I'm not familiar with the Kibana CR, here's how I would do this with elastic.co's official images:
spec:
  template:
    spec:
      containers:
      - name: kibana
        image: official-kibana:x.y.z
        securityContext:
          runAsUser: 1000
        volumeMounts:
        - mountPath: /usr/share/kibana/plugins
          name: plugins
      initContainers:
      - command:
        - /bin/bash
        - -c
        - |
          set -xe
          if ! ./bin/kibana-plugin list | grep prometheus-exporter >/dev/null; then
            if ! ./bin/kibana-plugin install "https://github.com/pjhampton/kibana-prometheus-exporter/releases/download/7.12.1/kibanaPrometheusExporter-7.12.1.zip"; then
              echo WARNING: failed to install Kibana exporter plugin
            fi
          fi
        name: init
        image: official-kibana:x.y.z
        securityContext:
          runAsUser: 1000
        volumeMounts:
        - mountPath: /usr/share/kibana/plugins
          name: plugins
      volumes:
      - emptyDir: {}
        name: plugins
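Translated back to the question's ECK Kibana resource, a hedged sketch of the same pattern could look like the following (this assumes the Kibana CR's podTemplate behaves like a regular pod template; ECK is expected to default the init container's image to the Kibana image when none is set, but verify that against the ECK docs):
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.11.2
  count: 1
  elasticsearchRef:
    name: quickstart
  podTemplate:
    spec:
      volumes:
      - name: plugins
        emptyDir: {}
      initContainers:
      - name: install-plugins
        volumeMounts:
        - name: plugins
          mountPath: /usr/share/kibana/plugins
        command:
        - sh
        - -c
        - |
          bin/kibana-plugin install https://github.com/fbaligand/kibana-enhanced-table/releases/download/v1.11.2/enhanced-table-1.11.2_7.11.2.zip
      containers:
      - name: kibana
        volumeMounts:
        - name: plugins
          mountPath: /usr/share/kibana/plugins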

How can I use the port of a server running on localhost in kubernetes running spring boot app

I am new to Kubernetes and kubectl. I am basically running a gRPC server on my localhost. I would like to use this endpoint in a Spring Boot app running on Kubernetes (via kubectl) on my Mac. If I set the following config in application.yml and run it in Kubernetes, it doesn't work. The same config works if I run the app from my IDE.
grpc:
  client:
    local-server:
      address: static://localhost:6565
      negotiationType: PLAINTEXT
I see some people suggesting port-forward, but that is the other way round (it works when I want to reach a port that is already inside Kubernetes from localhost, just like reaching the Tomcat server running in Kubernetes from a browser on localhost).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testspringconfigvol
  labels:
    app: testspring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testspringconfigvol
  template:
    metadata:
      labels:
        app: testspringconfigvol
    spec:
      initContainers:
      # taken from https://gist.github.com/tallclair/849601a16cebeee581ef2be50c351841
      # This container clones the desired git repo to the EmptyDir volume.
      - name: git-config
        image: alpine/git # Any image with git will do
        args:
        - clone
        - --single-branch
        - --
        - https://github.com/username/fakeconfig
        - /repo # Put it in the volume
        securityContext:
          runAsUser: 1 # Any non-root user will do. Match to the workload.
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /repo
          name: git-config
      containers:
      - name: testspringconfigvol-cont
        image: username/testspring
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /usr/local/lib/config/
          name: git-config
      volumes:
      - name: git-config
        emptyDir: {}
What I need in simple terms:
I have servers running on ports on my localhost (localhost:6565, localhost:6566), and I need to access these ports somehow from inside Kubernetes. What should I set in the application.yml config? Will it stay localhost:6565 and localhost:6566, or will it be how-to-get-this-ip:6565 and how-to-get-this-ip:6566?
We can get the host IP from inside the Minikube VM with this command: minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'". For me it's 10.0.2.2 on Mac. If using Kubernetes on Docker for Mac, it's host.docker.internal.
Using these addresses, I managed to connect to the services running on the host machine from Kubernetes.
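Applied to the question's config, a sketch of application.yml pointing at the host (assuming Docker Desktop for Mac, where host.docker.internal resolves from inside the pod; on Minikube substitute the IP returned by the command above):
grpc:
  client:
    local-server:
      address: static://host.docker.internal:6565   # or static://10.0.2.2:6565 on Minikube
      negotiationType: PLAINTEXT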
1) Inside your application.properties define
server.port=8000
2) Create Dockerfile
# Start with a base image containing Java runtime (mine java 8)
FROM openjdk:8u212-jdk-slim
# Add Maintainer Info
LABEL maintainer="vaquar.khan#gmail.com"
# Add a volume pointing to /tmp
VOLUME /tmp
# Make port 8080 available to the world outside this container
EXPOSE 8080
# The application's jar file (when packaged)
ARG JAR_FILE=target/codestatebkend-0.0.1-SNAPSHOT.jar
# Add the application's jar to the container
ADD ${JAR_FILE} codestatebkend.jar
# Run the jar file
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/codestatebkend.jar"]
3) Make sure Docker is working fine:
docker run --rm -p 8080:8080 <your-image-name>
4)
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
use following command to find the pod name
kubectl get pods
then
kubectl port-forward <pod-name> 8080:8080
Useful links :
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
https://developer.okta.com/blog/2019/05/22/java-microservices-spring-boot-spring-cloud

Kubernetes + Minikube - How to see all stdout output?

I'm running a Ruby app on Kubernetes with Minikube.
However, whenever I look at the logs I don't see the output I would have seen in my terminal when running the app locally.
I presume it's because it only shows stderr?
What can I do to see all types of console logs (e.g. from puts or raise)?
Looking around, is this something to do with the app running in detached mode? See the related Python issue: Logs in Kubernetes Pod not showing up.
Thanks.
As requested, here is the deployment.yaml:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: sample
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
      - name: sample
        image: someregistry
        imagePullPolicy: Always
        command: ["/bin/sh","-c"]
        args: ["bundle exec rake sample:default --trace"]
        envFrom:
        - configMapRef:
            name: sample
        - secretRef:
            name: sample
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: regsecret
As shown in this article, kubectl logs <pod-name> should show you stdout and stderr for a pod deployed in a Minikube cluster.
By default in Kubernetes, Docker is configured to write a container's stdout and stderr to a file under /var/log/containers on the host system
Kubernetes adds:
There are two types of system components: those that run in a container and those that do not run in a container.
For example:
The Kubernetes scheduler and kube-proxy run in a container.
The kubelet and container runtime, for example Docker, do not run in containers.
And:
On machines with systemd, the kubelet and container runtime write to journald.
If systemd is not present, they write to .log files in the /var/log directory.
Similarly to the container logs, system component logs in the /var/log directory should be rotated.
In Kubernetes clusters brought up by the kube-up.sh script, those logs are configured to be rotated by the logrotate tool daily or once the size exceeds 100MB.
I presume it's because it only shows stderr?
Not really, unless something specific is disabled in your container or pod spec. I assume you are using Docker, so the default is to output stdout and stderr, and that's what you see when you do a kubectl logs <pod-name>.
What can I do to see all types of console logs (e.g. from puts or raise)?
You should see them in the container logs. It would help to post your pod or deployment definition.
