Restart pod without losing changes - elasticsearch

I have 2 pods that are meant to send logs to Elasticsearch. Logs in /var/log/messages get sent, but for some reason service_name.log doesn't get sent; I think it is due to the configuration for Elasticsearch. There is a .conf file in these 2 pods that handles the connection to Elasticsearch.
I want to make changes to test if this is indeed the issue. I am not sure if the changes take effect as soon as I edit the file. Is there a way to restart/update the pod without losing changes I might make to this file?

To store non-confidential data as a configuration file in a volume, you could use ConfigMaps.
Here is an example of a Pod that mounts a ConfigMap in a volume:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    configMap:
      name: myconfigmap
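Applied to the question, a minimal sketch (the ConfigMap, file and Deployment names below are placeholders): keep the .conf file in a ConfigMap, update the ConfigMap when you want to test a change, and restart the pods so they remount it.

# create or update the ConfigMap from the local copy of the file
kubectl create configmap logshipper-conf --from-file=service_name.conf \
  --dry-run=client -o yaml | kubectl apply -f -

# restart the pods managed by the Deployment so they pick up the new file
kubectl rollout restart deployment/logshipper

Files mounted from a ConfigMap (without subPath) are also refreshed by the kubelet on its own sync interval, so the explicit restart mainly makes the update deterministic.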

Related

How to mount a host volume in Kubernetes running on Docker Desktop (Windows 10) backed by WSL2?

I've figured out the syntax to mount a volume (Kubernetes YAML):
apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  containers:
    - name: php
      volumeMounts:
        - mountPath: /app/db_backups
          name: db-backups
          readOnly: true
  volumes:
    - hostPath:
        path: /mnt/c/Users/Mark/PhpstormProjects/proj/db_backups
        type: DirectoryOrCreate
      name: db-backups
And the volume does show when I drop into a shell:
kubectl --context docker-desktop exec --stdin --tty deploy/app-deployment-development -cphp -nmyns -- /bin/bash
But the db_backups directory is empty, so I guess the volume is backed by nothing -- it's not finding the volume on my Windows host machine.
I've tried setting the host path like C:\Users\Mark\PhpstormProjects\proj\db_backups but if I do that then my Deployment fails with a CreateContainerError:
Error: Error response from daemon: invalid volume specification: 'C:\Users\Mark\PhpstormProjects\proj\db_backups:/app/db_backups:ro'
So I guess it doesn't like the Windows-style filepath.
So what then? If neither style of path works, how do I get it to mount?
From here it is clear that, for WSL2, we need to add a specific prefix before the desired path on the host machine.
In your file you are using path: /mnt/c/Users/Mark/PhpstormProjects/proj/db_backups, but it needs to be written as path: /run/desktop/mnt/host/path_of_directory_in_local_machine. The key is to prepend /run/desktop/mnt/host/ to the actual directory path.
Since you set type: DirectoryOrCreate, Kubernetes creates an empty directory at the path you specified, because it is not actually resolving to your desired host path.
So try with this
apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  containers:
    - name: php
      volumeMounts:
        - mountPath: /app/db_backups
          name: db-backups
          readOnly: true
  volumes:
    - hostPath:
        path: /run/desktop/mnt/host/c/Users/Mark/PhpstormProjects/proj/db_backups
        # In my case, tested with path: /run/desktop/mnt/host/d/K8-files/voldir
        type: DirectoryOrCreate
      name: db-backups
It worked in our case: we created a directory on the D: drive, so we used the path /run/desktop/mnt/host/d/K8-files/voldir. So try prepending /run/desktop/mnt/host/ to the actual path.
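If the directory still looks empty, a quick way to verify which directory actually backs the mount is to create a file on the Windows side and list the mount from inside the pod, reusing the exec command from the question:

kubectl --context docker-desktop -n myns exec deploy/app-deployment-development -c php -- ls -la /app/db_backups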
For more information, refer to this link.

ElasticSearch CrashLoopBackoff when deploying with ECK in Kubernetes OKD 4.11

I am running Kubernetes using OKD 4.11 (running on vSphere) and have validated the basic functionality (including dynamic volume provisioning) using applications (like nginx).
I also applied
oc adm policy add-scc-to-group anyuid system:authenticated
to allow authenticated users to use anyuid (which seems to have been required to deploy the nginx example I was testing with).
Then I installed ECK using this quickstart with kubectl to install the CRD and RBAC manifests. This seems to have worked.
Then I deployed the most basic ElasticSearch quickstart example with kubectl apply -f quickstart.yaml using this manifest:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.4.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
The deployment proceeds as expected, pulling image and starting container, but ends in a CrashLoopBackoff with the following error from ElasticSearch at the end of the log:
"elasticsearch.cluster.name":"quickstart",
"error.type":"java.lang.IllegalStateException",
"error.message":"failed to obtain node locks, tried
[/usr/share/elasticsearch/data]; maybe these locations
are not writable or multiple nodes were started on the same data path?"
Looking into the storage, the PV and PVC are created successfully; the output of kubectl get pv,pvc,sc -A -n my-namespace is:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-9d7b57db-8afd-40f7-8b3d-6334bdc07241 1Gi RWO Delete Bound my-namespace/elasticsearch-data-quickstart-es-default-0 thin 41m
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-namespace persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0 Bound pvc-9d7b57db-8afd-40f7-8b3d-6334bdc07241 1Gi RWO thin 41m
NAMESPACE NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/thin (default) kubernetes.io/vsphere-volume Delete Immediate false 19d
storageclass.storage.k8s.io/thin-csi csi.vsphere.vmware.com Delete WaitForFirstConsumer true 19d
Looking at the pod yaml, it appears that the volume is correctly attached:
volumes:
- name: elasticsearch-data
  persistentVolumeClaim:
    claimName: elasticsearch-data-quickstart-es-default-0
- name: downward-api
  downwardAPI:
    items:
    - path: labels
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.labels
    defaultMode: 420
....
volumeMounts:
...
- name: elasticsearch-data
  mountPath: /usr/share/elasticsearch/data
I cannot understand why the volume would be read-only, or rather why ES cannot create the lock.
I did find this similar issue, but I am not sure how to apply the UID permissions when working with ECK (in general I am fairly naive about the way permissions work in OKD).
Does anyone with deeper K8s/OKD or ECK/Elasticsearch knowledge have an idea how to better isolate and/or resolve this issue?
Update: I believe this has something to do with this issue and am researching the options related to OKD.
For posterity: the ECK operator starts an init container that should take care of the chown on the data volume, but it can only do so if it is running as root.
The resolution for me was documented here:
https://repo1.dso.mil/dsop/elastic/elasticsearch/elasticsearch/-/issues/7
The manifest now looks like this:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.4.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
    # run init container as root to chown the volume to uid 1000
    podTemplate:
      spec:
        securityContext:
          runAsUser: 1000
          runAsGroup: 0
        initContainers:
        - name: elastic-internal-init-filesystem
          securityContext:
            runAsUser: 0
            runAsGroup: 0
And the pod starts up and can write to the volume as uid 1000.
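One way to confirm that the init container did its job is to check the effective uid and the ownership of the data path inside the running pod (the pod name follows ECK's <cluster>-es-<nodeSet>-<ordinal> convention, matching the PVC shown above):

kubectl exec quickstart-es-default-0 -- id
kubectl exec quickstart-es-default-0 -- ls -ld /usr/share/elasticsearch/data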

Encrypt the elasticsearch data in k8s

I have installed the Elasticsearch image in k8s on a PV which was created using ceph-rook DFS.
When installing ceph-rook, the encryption mode was enabled.
# PVC for the Elasticsearch pod
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: core-pv-claim
  namespace: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: rook-cephfs

# volume mount for the Elasticsearch pod
volumeMounts:
  - name: persistent-storage
    mountPath: /usr/local/elastic/data
volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: core-pv-claim
The pod got deployed successfully and the data is being saved in /usr/local/elastic/data.
When I logged into the pod and changed to that path, I could see the data at rest in /usr/local/elastic/data without any encryption:
#kubectl exec -it elastic-pod12 bash
#ls /usr/local/elastic/data
#your data
Is there a way to encrypt this data as well, or to restrict users from accessing it via kubectl?
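A minimal sketch for the second half of the question (restricting access via kubectl), assuming RBAC is in use and leaving encryption at rest aside: a namespaced Role that grants read-only access to pods but deliberately omits the pods/exec subresource, so users bound to it cannot open a shell in the pod. The role name is illustrative.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: elastic-read-only   # illustrative name
  namespace: test
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
  # "pods/exec" is intentionally not granted, so `kubectl exec` is denied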

Rebuild and Rerun Go application in Minikube

I'm building a micro service in Golang which is going to live in a Kubernetes cluster. I'm developing it and using Minikube to run a copy of the cluster locally.
The problem I ran into is that if I run my application inside of the container using go run main.go, I need to kill the pod for it to detect changes and update what is running.
I tried using a watcher that rebuilds the binary on every save while the binary runs inside the pod, but even after compiling the new version, Minikube keeps running the old one.
Any suggestions?
Here is my deployment file for running the MS locally:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: pokedex
  name: pokedex
spec:
  template:
    metadata:
      labels:
        name: pokedex
    spec:
      volumes:
        - name: source
          hostPath:
            path: *folder where source resides*
      containers:
        - name: pokedex
          image: golang:1.8.5-jessie
          workingDir: *folder where source resides*
          command: ["./pokedex"] # Here I tried both the binary and go run main.go
          ports:
            - containerPort: 8080
              name: go-server
              protocol: TCP
          volumeMounts:
            - name: source
              mountPath: /source
          env:
            - name: GOPATH
              value: /source
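A minimal sketch of one way to force the new binary to be picked up, assuming the Deployment above is applied with the pod label name: pokedex: since the source directory is mounted from the host, the container keeps running the process it started with, so the pod has to be recreated after each rebuild.

# rebuild the binary into the mounted source directory on the host
go build -o pokedex .

# delete the running pod; the Deployment recreates it and executes the new binary
kubectl delete pod -l name=pokedex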

Filebeat with ELK stack running in Kubernetes does not capture pod name in logs

I am using the ELK stack (Elasticsearch, Logstash, Kibana) for log processing and analysis in a Kubernetes (minikube) environment. To capture logs I am using filebeat. Logs are propagated successfully from filebeat through to Elasticsearch and are viewable in Kibana.
My problem is that I am unable to get the name of the actual pod issuing log records. Rather, I only get the name of the filebeat pod that is gathering the log files, not the name of the pod originating the log records.
The information I can get from filebeat is (as viewed in Kibana):
beat.hostname: the value of this field is the filebeat pod name
beat.name: value is the filebeat pod name
host: value is the filebeat pod name
I can also see/discern container information in Kibana which flows through from filebeat / logstash / elasticsearch:
app: value is {log-container-id}-json.log
source: value is /hostfs/var/lib/docker/containers/{log-container-id}-json.log
As shown above, I seem to be able to get the container Id but not the pod name.
To mitigate the situation, I could probably embed the pod-name in the actual log message and parse it from there, but I am hoping there is a solution in which I can configure filebeat to emit actual pod names.
Does anyone know how to configure filebeat (or other components) to capture Kubernetes (minikube) pod names in their logs?
My current filebeat configuration is listed below:
ConfigMap is shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat
  namespace: logging
  labels:
    component: filebeat
data:
  filebeat.yml: |
    filebeat.prospectors:
    - input_type: log
      tags:
        - host
      paths:
        - "/hostfs/var/log"
        - "/hostfs/var/log/*"
        - "/hostfs/var/log/*/*"
      exclude_files:
        - '\.[0-9]$'
        - '\.[0-9]\.gz$'
    - input_type: log
      tags:
        - docker
      paths:
        - /hostfs/var/lib/docker/containers/*/*-json.log
      json:
        keys_under_root: false
        message_key: log
        add_error_key: true
      multiline:
        pattern: '^[[:space:]]+|^Caused by:'
        negate: false
        match: after
    output.logstash:
      hosts: ["logstash:5044"]
    logging.level: info
DaemonSet is shown below:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logging
spec:
  template:
    metadata:
      labels:
        component: filebeat
    spec:
      containers:
      - name: filebeat
        image: giantswarm/filebeat:5.2.2
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat
          readOnly: true
        - name: hostfs-var-lib-docker-containers
          mountPath: /hostfs/var/lib/docker/containers
          readOnly: true
        - name: hostfs-var-log
          mountPath: /hostfs/var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: filebeat
      - name: hostfs-var-log
        hostPath:
          path: /var/log
      - name: hostfs-var-lib-docker-containers
        hostPath:
          path: /var/lib/docker/containers
disclaimer: I'm a beats developer
What you want to do is not yet supported by filebeat, but it's definitely something we want to put some effort into, so you can expect future releases to support this kind of mapping.
In the meantime, I think your approach is correct. You can append the info you need to your logs so you have it in Elasticsearch.
I have achieved what you are looking for by assigning a group of specific pods to a namespace; I can now query the logs I need using a combination of namespace, pod name and container name, which is included in the generated log piped by filebeat without any extra effort, as you can see here.
For future people coming here, this is now available out of the box via a filebeat processor:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/messages
    - /var/log/syslog
- type: docker
  containers.ids:
    - "*"
  processors:
    - add_kubernetes_metadata:
        in_cluster: true
    - drop_event:
        when:
          equals:
            kubernetes.container.name: "filebeat"
Helm chart default values: https://github.com/helm/charts/blob/master/stable/filebeat/values.yaml
Documentation: https://www.elastic.co/guide/en/beats/filebeat/current/add-kubernetes-metadata.html
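With add_kubernetes_metadata enabled, each event shipped by filebeat should carry Kubernetes fields roughly along these lines (the values here are illustrative), which is what makes the originating pod name searchable in Kibana:

kubernetes:
  namespace: default
  pod:
    name: myapp-5d4f8c7b6d-x2x9z
  container:
    name: myapp
  labels:
    app: myapp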
