Can we access node/host processes from a Kubernetes pod? - elasticsearch

Is there a way to access the underlying host/node's processes from a Kubernetes pod, in the same way we access the host/node's filesystem using a hostPath volume mount?
PS: I am trying to monitor the node's processes with auditbeat deployed as a pod on Kubernetes.

I believe what you are looking for is hostPID: true in the PodSpec:
spec:
  hostPID: true
  containers:
  - name: show-init
    image: busybox   # image is an assumption; any image containing `ps` works
    command:
    - ps
    - -p
    - "1"
should, in theory, output "/sbin/init" since it is running in the host's PID namespace
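A complete, self-contained Pod manifest along those lines might look like this (the pod name and busybox image are assumptions; auditbeat would use its own image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: host-pid-demo      # name is an assumption
spec:
  hostPID: true            # share the host's PID namespace
  containers:
  - name: show-init
    image: busybox         # any image containing `ps` would do
    command: ["ps", "-p", "1"]
  restartPolicy: Never
```

Once it completes, `kubectl logs host-pid-demo` should show the host's PID 1.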

Related

Job that executes command inside a pod

What I'd like to ask is whether it is possible to create a Kubernetes Job that runs a bash command within another Pod.
apiVersion: batch/v1
kind: Job
metadata:
  namespace: dev
  name: run-cmd
spec:
  ttlSecondsAfterFinished: 180
  template:
    spec:
      containers:
      - name: run-cmd
        image: <IMG>
        command: ["/bin/bash", "-c"]
        args:
        - <CMD> $POD_NAME
      restartPolicy: Never
  backoffLimit: 4
I considered using:
an environment variable to define the pod name,
the Kubernetes SDK to automate this.
But if you have better ideas, I am open to them, please!
The Job manifest you shared seems like a valid idea.
Yet, you need to take the points below into consideration:
As running a command inside another pod (some container of that pod) requires interacting with the Kubernetes API server, you'd need to interact with it using a Kubernetes client (e.g. kubectl). This, in turn, requires the client to be installed inside the Job's container image.
The Job's pod's service account has to have permissions for the pods/exec resource. See the docs and this answer.
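As a sketch of that second point, a Role and RoleBinding granting exec access might look like the following (the object names and the use of the default service account are assumptions):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-exec            # name is an assumption
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "create"]   # exec requires create on pods/exec
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: pod-exec-binding    # name is an assumption
subjects:
- kind: ServiceAccount
  name: default             # or a dedicated service account for the Job
  namespace: dev
roleRef:
  kind: Role
  name: pod-exec
  apiGroup: rbac.authorization.k8s.io
```

With this in place, the Job's container can run `kubectl exec $POD_NAME -- <CMD>` against pods in the dev namespace.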

Minikube Accessing Images Added With Registry Addon

I've followed the instructions outlined on this page and pushed a local image to a local 3-node Minikube cluster with the registry add-on enabled and the cluster started with the insecure-registry flag, but I get the following error when I try to create a Pod with the image:
Normal Pulling 9m18s (x4 over 10m) kubelet Pulling image "192.168.99.100:5000/myapp:v1"
Warning Failed 9m18s (x4 over 10m) kubelet Failed to pull image "192.168.99.100:5000/myapp:v1": rpc error: code = Unknown desc = Error response from daemon: Get "https://192.168.99.100:5000/v2/": http: server gave HTTP response to HTTPS client
Any advice on resolving this would be greatly appreciated
My Minikube (v1.23.2) is on macOS (Big Sur 11.6) using the VirtualBox driver. It is a three-node cluster. My Docker Desktop version is 20.10.8.
These are the steps I followed:
Got my cluster's VMs' IP range: 192.168.99.0/24
Added the following entry to my Docker Desktop config:
"insecure-registries": [
  "192.168.99.0/24"
]
Started Minikube with the insecure-registry flag:
$ minikube start --insecure-registry="192.168.99.0/24"
Ran:
$ minikube addons enable registry
Tagged the image I want to push:
$ docker tag docker.io/library/myapp:v1 $(minikube ip):5000/myapp:v1
Pushed the image:
$ docker push $(minikube ip):5000/myapp:v1
The push works ok - when I exec onto the registry Pod I can see the image in the filesystem. However, when I try to create a Pod using the image, I get the error mentioned above.
My Pod manifest is:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: myapp
  name: myapp
spec:
  containers:
  - image: 192.168.99.100:5000/myapp:v1
    name: myapp
    imagePullPolicy: IfNotPresent
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
The issue was resolved by deleting the cluster and recreating it with the insecure-registry flag from the start. Originally I had created the cluster, stopped it, and then started it again with the flag; for some reason that didn't work, but starting it for the first time with the flag did.
If you're going to be creating clusters with the registry addon a lot, it might be worth adding the flag permanently to your config. Replace the IP with your cluster's subnet:
$ minikube config set insecure-registry "192.168.99.0/24"
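Putting that together, a clean recreation might look like the sequence below (the driver and node count follow the setup described above; treat the exact flags as assumptions for your environment):

```shell
# Persist the insecure-registry setting, then rebuild the cluster from scratch
minikube config set insecure-registry "192.168.99.0/24"
minikube delete
minikube start --driver=virtualbox --nodes=3
minikube addons enable registry
```

Because the flag is now in the config, it applies on the very first start of the new cluster, which is what mattered in this case.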

Equivalent of kind load or minikube cache in kubeadm

I have a 3-node Kubernetes cluster managed with kubeadm. Previously I used kind and minikube. When I wanted to make a deployment based on a local Docker image, I just needed to run:
kind load docker-image in kind, or
minikube cache add in minikube.
Now, when I make a deployment with kubeadm, I obviously get ImagePullBackOff.
Question: is there an equivalent command to add an image to a kubeadm cluster that I can't find, or is there an entirely different way to solve this problem?
EDIT
Maybe the question is not clear enough, so instead of deleting it I'll add more details.
I have three nodes (one control plane and two workers) with docker, kubeadm, kubelet and kubectl installed on each. One deployment of my future cluster is a machine learning module, so I need tensorflow:
docker pull tensorflow/tensorflow
Using this image I build my own:
docker build -t mlimage:cluster -f ml.Dockerfile .
Next I prepare deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mldeployment
spec:
  selector:
    matchLabels:
      app: mldeployment
      name: mldeployment
  replicas: 1
  template:
    metadata:
      labels:
        app: mldeployment
        name: mldeployment
    spec:
      containers:
      - name: mlcontainer
        image: mlimage:cluster
        imagePullPolicy: Never
        ports:
        - name: http
          containerPort: 6060
and create it:
kubectl create -f mldeployment.yaml
Now, when I type
kubectl describe pod
these events appear for mldeployment:
In the case of minikube or kind it was enough to simply add the image to the cluster by typing
minikube cache add ...
and
kind load docker-image ...
respectively.
The question is how to add an image from my machine to the cluster when managing it with kubeadm. I assume there is a similar way to do that as for minikube or kind (without any connection to Docker Hub, because everything is local).
You are getting ImagePullBackOff because the nodes in a kubeadm cluster try to pull the image from a registry, while both minikube cache and kind load exist precisely to load local images into those clusters.
As I now understand it, images for a cluster managed via kubeadm should be stored in a trusted registry like Docker Hub or a cloud registry. But if you want a fast solution on an isolated network, there is a possibility: a Docker registry.
There are also some ready-to-use tools, e.g. Trow, or a simpler solution.
I used the second approach, and it works (the code is a bit old, so it may need some changes; these links may be helpful: change apiVersion, add label).
After that changes, first create deployment and daemonSet:
kubectl create -f docker-private-registry.json
kubectl create -f docker-private-registry-proxy.json
Add localhost address to image:
docker tag image:tag 127.0.0.1:5000/image:tag
Check the full name of the docker private registry deployment, and forward its port (replace the x's with the exact deployment name):
kubectl get pod
kubectl port-forward docker-private-registry-deployment-xxxxxxxxx-xxxxx 5000:5000 -n default
Open next terminal window and push image to private registry:
docker push 127.0.0.1:5000/image:tag
Finally, change the container image in your deployment.yaml (add the 127.0.0.1:5000/ prefix) and create the deployment.
This solution is insecure, so use it wisely, and only on isolated networks for test and dev purposes.
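As an alternative sketch for small clusters, skipping a registry entirely: assuming SSH access to each worker node, you can copy the image over by hand (the hostnames below are assumptions):

```shell
# Export the locally built image to a tarball
docker save mlimage:cluster -o mlimage.tar

# Copy it to each worker node and load it into that node's Docker daemon
for node in worker1 worker2; do   # hostnames are assumptions
  scp mlimage.tar "$node":/tmp/
  ssh "$node" docker load -i /tmp/mlimage.tar
done
```

With imagePullPolicy: Never (as in the manifest above), the kubelet will then use the preloaded local image instead of trying to pull it.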

How can I diagnose why a k8s pod keeps restarting?

I deployed elasticsearch to minikube with the configuration file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  replicas: 1
  selector:
    matchLabels:
      name: elasticsearch
  template:
    metadata:
      labels:
        name: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: elasticsearch:7.10.1
        ports:
        - containerPort: 9200
        - containerPort: 9300
I run the command kubectl apply -f es.yml to deploy the elasticsearch cluster.
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
elasticsearch-fb9b44948-bchh2 1/1 Running 5 6m23s
The elasticsearch pod keeps restarting every few minutes. When I run the kubectl describe pod command, I can see these events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m11s default-scheduler Successfully assigned default/elasticsearch-fb9b44948-bchh2 to minikube
Normal Pulled 3m18s (x5 over 7m11s) kubelet Container image "elasticsearch:7.10.1" already present on machine
Normal Created 3m18s (x5 over 7m11s) kubelet Created container elasticsearch
Normal Started 3m18s (x5 over 7m10s) kubelet Started container elasticsearch
Warning BackOff 103s (x11 over 5m56s) kubelet Back-off restarting failed container
The last event is Back-off restarting failed container, but it doesn't tell me why the pod restarts. Is there any way I can check why it keeps restarting?
The first step (kubectl describe pod) you've already done. As a next step I suggest checking the container logs: kubectl logs <pod_name> (for a container that has already restarted, kubectl logs <pod_name> --previous shows the previous run). 99% of the time you'll get the reason from the logs in this case (I bet on a bootstrap check failure).
When neither describe pod nor logs has anything about the error, I get into the container with exec: kubectl exec -it <pod_name> -c <container_name> sh. With this you'll get a shell inside the container (of course, only if there IS a shell binary in it), so you can use it to investigate the problem manually. Note that to keep a failing container alive you may need to change command and args to something like this:
command:
- /bin/sh
- -c
args:
- cat /dev/stdout
Be sure to disable probes when doing this. A container may restart if its liveness probe fails; you will see that in kubectl describe pod if it happens. Since your snippet doesn't specify any probes, you can skip this.
Checking the logs of the pod using kubectl logs podname gives a clue about what could be going wrong.
ERROR: [2] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
[2]: the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
ERROR: Elasticsearch did not exit normally - check the logs at /usr/share/elasticsearch/logs/docker-cluster.log
Check this post for a solution
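Based on those two bootstrap check failures, a common fix for a single-node dev setup is sketched below: the env var addresses the discovery check, and a privileged init container raises vm.max_map_count on the node (the busybox image is an assumption):

```yaml
spec:
  initContainers:
  - name: sysctl
    image: busybox              # image is an assumption
    securityContext:
      privileged: true
    # Raise the node's vm.max_map_count to the required minimum
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
  containers:
  - name: elasticsearch
    image: elasticsearch:7.10.1
    env:
    # Disable the production discovery bootstrap checks
    - name: discovery.type
      value: single-node
    ports:
    - containerPort: 9200
    - containerPort: 9300
```

This fragment would be merged into the pod template spec of the Deployment shown in the question.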

Kubernetes + Minikube - How to see all stdout output?

I'm running a Ruby app on Kubernetes with Minikube.
However, whenever I look at the logs, I don't see the output I would see in my terminal when running the app locally.
I presume it's because it only shows stderr?
What can I do to see all types of console logs (e.g. from puts or raise)?
Looking around, is this something to do with it running in detached mode? See the related Python issue: Logs in Kubernetes Pod not showing up.
Thanks.
As requested - here is the deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: sample
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
      - name: sample
        image: someregistry
        imagePullPolicy: Always
        command: ["/bin/sh","-c"]
        args: ["bundle exec rake sample:default --trace"]
        envFrom:
        - configMapRef:
            name: sample
        - secretRef:
            name: sample
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: regsecret
As shown in this article, kubectl logs <pod-name> should show you stdout and stderr for a pod deployed in minikube.
By default in Kubernetes, Docker is configured to write a container's stdout and stderr to a file under /var/log/containers on the host system
Kubernetes adds:
There are two types of system components: those that run in a container and those that do not run in a container.
For example:
The Kubernetes scheduler and kube-proxy run in a container.
The kubelet and container runtime, for example Docker, do not run in containers.
And:
On machines with systemd, the kubelet and container runtime write to journald.
If systemd is not present, they write to .log files in the /var/log directory.
Similarly to the container logs, system component logs in the /var/log directory should be rotated.
In Kubernetes clusters brought up by the kube-up.sh script, those logs are configured to be rotated by the logrotate tool daily or once the size exceeds 100MB.
I presume it's because it only shows stderr?
Not really, unless something specific is disabled in your container or pod spec. I assume you are using Docker, so the default is to output stdout and stderr, and that's what you see when you do kubectl logs <pod-name>.
What can I do to see all types of console logs (e.g. from puts or raise)?
You should see them in the container logs. It would help to post your pod or deployment definition.
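One Ruby-specific cause worth ruling out (an assumption, based on the linked Python issue): when stdout is a pipe rather than a TTY, as it is inside a container, Ruby buffers it, so puts output may not show up in kubectl logs until the buffer flushes. Forcing synchronous output at the top of the entrypoint avoids this:

```ruby
# Flush stdout after every write so `puts` output appears
# immediately in `kubectl logs` (inside a container, stdout
# is a pipe, not a TTY, so it is block-buffered by default).
$stdout.sync = true

puts "this line is flushed immediately"
```

raise, by contrast, writes to stderr and is unbuffered, which would explain seeing errors but not regular output.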
