Unable to create the fluentd containers in my kubernetes cluster on ubuntu - elasticsearch

I am trying to set up log monitoring for my Kubernetes cluster using Elasticsearch, Fluentd, and Kibana. Here is the link I was following for this task. I labeled the nodes with beta.kubernetes.io/fluentd-ds-ready: "true". Initially, I created the StatefulSet for Elasticsearch.
After that, I created fluentd-es-configmap.yaml and fluentd-es-ds.yaml and checked the pod status using kubectl get pods -n kube-system. The Fluentd pods are stuck in ContainerCreating. When I checked the logs of the Fluentd container, I got the following error:
Error from server (BadRequest): container "fluentd-es" in pod "fluentd-es-v2.0.1-csx96" is waiting to start: ContainerCreating
Here is the Fluentd pod description:
Name: fluentd-es-v2.0.1-csx96
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: ldap/192.168.1.191
Start Time: Wed, 10 Oct 2018 15:08:17 -0400
Labels: controller-revision-hash=5754d85c97
k8s-app=fluentd-es
kubernetes.io/cluster-service=true
pod-template-generation=1
version=v2.0.1
Annotations: scheduler.alpha.kubernetes.io/critical-pod:
Status: Pending
IP:
Controlled By: DaemonSet/fluentd-es-v2.0.1
Containers:
fluentd-es:
Container ID:
Image: gcr.io/google-containers/fluentd-elasticsearch:v2.0.1
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
memory: 500Mi
Requests:
cpu: 100m
memory: 200Mi
Environment:
FLUENTD_ARGS: --no-supervisor -q
Mounts:
/etc/fluent/config.d from config-volume (rw)
/host/lib from libsystemddir (ro)
/var/lib/docker/containers from varlibdockercontainers (ro)
/var/log from varlog (rw)
/var/run/secrets/kubernetes.io/serviceaccount from fluentd-es-token-l2b2m (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
varlog:
Type: HostPath (bare host directory volume)
Path: /var/log
HostPathType:
varlibdockercontainers:
Type: HostPath (bare host directory volume)
Path: /var/lib/docker/containers
HostPathType:
libsystemddir:
Type: HostPath (bare host directory volume)
Path: /usr/lib64
HostPathType:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: fluentd-es-config-v0.1.0
Optional: false
fluentd-es-token-l2b2m:
Type: Secret (a volume populated by a Secret)
SecretName: fluentd-es-token-l2b2m
Optional: false
QoS Class: Burstable
Node-Selectors: beta.kubernetes.io/fluentd-ds-ready=true
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
node.kubernetes.io/unschedulable:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedMount 14m (x42 over 107m) kubelet, ldap Unable to mount volumes for pod "fluentd-es-v2.0.1-csx96_kube-system(d80d9c78-ccbf-11e8-b7b5-525400e4ff36)": timeout expired waiting for volumes to attach or mount for pod "kube-system"/"fluentd-es-v2.0.1-csx96". list of unmounted volumes=[config-volume]. list of unattached volumes=[varlog varlibdockercontainers libsystemddir config-volume fluentd-es-token-l2b2m]
Warning FailedMount 3m23s (x60 over 109m) kubelet, ldap MountVolume.SetUp failed for volume "config-volume" : configmap "fluentd-es-config-v0.1.0" not found
Could anybody suggest how to resolve this issue?
Thanks in advance.

The problem seems to be a mismatch in the name of the ConfigMap. The DaemonSet is looking for a ConfigMap named fluentd-es-config-v0.1.0, but no ConfigMap with that name exists.
In the repository, the ConfigMap is named fluentd-es-config-v0.1.5 in both fluentd-es-ds.yaml and fluentd-es-configmap.yaml, so it should work if you simply use those files as they are.
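As a quick check (a rough sketch; adjust the resource names to whatever your cluster actually uses), you can list the ConfigMaps that exist in kube-system and compare them against the name the DaemonSet mounts:
# ConfigMaps that actually exist in kube-system
kubectl get configmaps -n kube-system | grep fluentd-es-config
# ConfigMap name referenced by the DaemonSet
kubectl get daemonset fluentd-es-v2.0.1 -n kube-system -o yaml | grep -A 2 configMap
If the two names differ, either rename the ConfigMap or update the DaemonSet so that they match.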

Related

ElasticSearch CrashLoopBackoff when deploying with ECK in Kubernetes OKD 4.11

I am running Kubernetes using OKD 4.11 (on vSphere) and have validated basic functionality, including dynamic volume provisioning, using applications such as nginx.
I also applied
oc adm policy add-scc-to-group anyuid system:authenticated
to allow authenticated users to use anyuid (which seems to have been required to deploy the nginx example I was testing with).
Then I installed ECK using this quickstart with kubectl to install the CRD and RBAC manifests. This seems to have worked.
Then I deployed the most basic ElasticSearch quickstart example with kubectl apply -f quickstart.yaml using this manifest:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.4.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
The deployment proceeds as expected, pulling the image and starting the container, but it ends in a CrashLoopBackOff with the following error from Elasticsearch at the end of the log:
"elasticsearch.cluster.name":"quickstart",
"error.type":"java.lang.IllegalStateException",
"error.message":"failed to obtain node locks, tried
[/usr/share/elasticsearch/data]; maybe these locations
are not writable or multiple nodes were started on the same data path?"
Looking into the storage, the PV and PVC are created successfully, the output of kubectl get pv,pvc,sc -A -n my-namespace is:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-9d7b57db-8afd-40f7-8b3d-6334bdc07241 1Gi RWO Delete Bound my-namespace/elasticsearch-data-quickstart-es-default-0 thin 41m
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
my-namespace persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0 Bound pvc-9d7b57db-8afd-40f7-8b3d-6334bdc07241 1Gi RWO thin 41m
NAMESPACE NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/thin (default) kubernetes.io/vsphere-volume Delete Immediate false 19d
storageclass.storage.k8s.io/thin-csi csi.vsphere.vmware.com Delete WaitForFirstConsumer true 19d
Looking at the pod YAML, it appears that the volume is correctly attached:
volumes:
- name: elasticsearch-data
  persistentVolumeClaim:
    claimName: elasticsearch-data-quickstart-es-default-0
- name: downward-api
  downwardAPI:
    items:
    - path: labels
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.labels
    defaultMode: 420
....
volumeMounts:
...
- name: elasticsearch-data
  mountPath: /usr/share/elasticsearch/data
I cannot understand why the volume would be read-only or rather why ES cannot create the lock.
I did find this similar issue, but I am not sure how to apply the UID permissions (in general I am fairly naive about the way permissions work in OKD) when working with ECK.
Does anyone with deeper K8s / OKD or ECK/ElasticSearch knowledge have an idea how to better isolate and/or resolve this issue?
Update: I believe this has something to do with this issue and am researching the options related to OKD.
For posterity: ECK starts an init container that should take care of the chown on the data volume, but it can only do so if it runs as root.
The resolution for me was documented here:
https://repo1.dso.mil/dsop/elastic/elasticsearch/elasticsearch/-/issues/7
The manifest now looks like this:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.4.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
    # run init container as root to chown the volume to uid 1000
    podTemplate:
      spec:
        securityContext:
          runAsUser: 1000
          runAsGroup: 0
        initContainers:
        - name: elastic-internal-init-filesystem
          securityContext:
            runAsUser: 0
            runAsGroup: 0
And the pod starts up and can write to the volume as uid 1000.
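A quick way to confirm this (a hedged check; the pod name follows ECK's <cluster>-es-<nodeSet>-<ordinal> convention and the namespace is assumed from the question) is to run id inside the Elasticsearch container:
# should report uid=1000 for the Elasticsearch process user
kubectl exec quickstart-es-default-0 -n my-namespace -- id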

`kubectl debug` hangs on 1.20 with feature gate enabled

I have a Kubernetes 1.20 cluster with kubectl 1.20 and the EphemeralContainers feature gate enabled.
I'm trying to run the commands in the kubectl debug documentation, but they don't seem to be working correctly. I can start a pod with:
$ kubectl run ephemeral-demo --image=k8s.gcr.io/pause:3.1 --restart=Never
pod/ephemeral-demo created
And when I try to attach a debug container to it:
$ kubectl debug -it ephemeral-demo --image=busybox --target=ephemeral-demo
Defaulting debug container name to debugger-g6pj6.
I never get a command line, no matter how long I wait or how many times I hit <enter>. If I examine the pod I can see the debug container is present:
$ kubectl describe pod ephemeral-demo
Name: ephemeral-demo
Namespace: nextcloud
Priority: 0
Node: k8s-htz-worker-02/78.47.15.149
Start Time: Tue, 15 Dec 2020 06:36:30 -0600
Labels: run=ephemeral-demo
Annotations: cni.projectcalico.org/podIP: 10.244.2.186/32
cni.projectcalico.org/podIPs: 10.244.2.186/32
Status: Running
IP: 10.244.2.186
IPs:
IP: 10.244.2.186
Containers:
ephemeral-demo:
Container ID: docker://b6d3ffa3d2ee8eb6a51a3b5ba823392cf57ed836833830510a2625788f8789d6
Image: k8s.gcr.io/pause:3.1
Image ID: docker-pullable://k8s.gcr.io/pause#sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 15 Dec 2020 06:36:32 -0600
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-btnzm (ro)
Ephemeral Containers:
debugger-g6pj6:
Image: busybox
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-btnzm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-btnzm
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 55s default-scheduler Successfully assigned nextcloud/ephemeral-demo to k8s-htz-worker-02
Normal Pulled 53s kubelet Container image "k8s.gcr.io/pause:3.1" already present on machine
Normal Created 53s kubelet Created container ephemeral-demo
Normal Started 53s kubelet Started container ephemeral-demo
But if I try to exec into it, I get a failure:
$ kc exec -it ephemeral-demo -c debugger-g6pj6 -- bash
error: unable to upgrade connection: container not found ("debugger-g6pj6")
Am I missing something?
The solution turned out to be that while I had enabled the feature gate on the master node (/etc/kubernetes/manifests/kube-apiserver.yaml), the change didn't propagate to the worker nodes in the cluster (/var/lib/kubelet/config.yaml). Manually applying the changes to the worker nodes and restarting kubelet (systemctl restart kubelet.service) resolved the issue.
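For reference, a minimal sketch of the two changes (assuming a kubeadm-style 1.20 cluster, as implied by the file paths above):
# /etc/kubernetes/manifests/kube-apiserver.yaml (control plane): add the flag to the apiserver command
#   - --feature-gates=EphemeralContainers=true
# /var/lib/kubelet/config.yaml (every worker node): enable the same gate for the kubelet
featureGates:
  EphemeralContainers: true
# then restart the kubelet on each worker:
#   systemctl restart kubelet.service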

deploy elk stack in kubernetes with helm VolumeBinding error

I'm trying to deploy the ELK stack in a Kubernetes cluster with Helm, using this chart. When I launch
helm install elk-stack stable/elastic-stack
I receive the following message:
NAME: elk-stack
LAST DEPLOYED: Mon Aug 24 07:30:31 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
The elasticsearch cluster and associated extras have been installed.
Kibana can be accessed:
* Within your cluster, at the following DNS name at port 9200:
elk-stack-elastic-stack.default.svc.cluster.local
* From outside the cluster, run these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=elastic-stack,release=elk-stack" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:5601 to use Kibana"
kubectl port-forward --namespace default $POD_NAME 5601:5601
But when I run
kubectl get pods
the result is:
NAME READY STATUS RESTARTS AGE
elk-stack-elasticsearch-client-7fcfc7b858-5f7fw 0/1 Running 0 12m
elk-stack-elasticsearch-client-7fcfc7b858-zdkwd 0/1 Running 1 12m
elk-stack-elasticsearch-data-0 0/1 Pending 0 12m
elk-stack-elasticsearch-master-0 0/1 Pending 0 12m
elk-stack-kibana-cb7d9ccbf-msw95 1/1 Running 0 12m
elk-stack-logstash-0 0/1 Pending 0 12m
Using the kubectl describe pods command, I see that for the Elasticsearch pods the problem is:
Warning FailedScheduling 6m29s default-scheduler running "VolumeBinding" filter plugin for pod "elk-stack-elasticsearch-data-0": pod has unbound immediate PersistentVolumeClaims
and for logstash pods:
Warning FailedScheduling 7m53s default-scheduler running "VolumeBinding" filter plugin for pod "elk-stack-logstash-0": pod has unbound immediate PersistentVolumeClaims
Output of kubectl get pv,pvc,sc -A:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/elasticsearch-data 10Gi RWO Retain Bound default/elasticsearch-data manual 16d
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default persistentvolumeclaim/claim1 Pending slow 64m
default persistentvolumeclaim/data-elk-stack-elasticsearch-data-0 Pending 120m
default persistentvolumeclaim/data-elk-stack-elasticsearch-master-0 Pending 120m
default persistentvolumeclaim/data-elk-stack-logstash-0 Pending 120m
default persistentvolumeclaim/elasticsearch-data Bound elasticsearch-data 10Gi RWO manual 16d
default persistentvolumeclaim/elasticsearch-data-elasticsearch-data-0 Pending 17d
default persistentvolumeclaim/elasticsearch-data-elasticsearch-data-1 Pending 17d
default persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0 Pending 16d
default persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0 Pending 17d
default persistentvolumeclaim/elasticsearch-master-elasticsearch-master-1 Pending 17d
default persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2 Pending 16d
NAMESPACE NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/slow (default) kubernetes.io/gce-pd Delete Immediate false 66m
Storage class slow and persistent volume claim claim1 are my own experiments; I created them using kubectl create and a YAML file. The others were created automatically by Helm (I think).
Output of kubectl get pvc data-elk-stack-elasticsearch-master-0 -o yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2020-08-24T07:30:38Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: elasticsearch
    release: elk-stack
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
          f:release: {}
      f:spec:
        f:accessModes: {}
        f:resources:
          f:requests:
            .: {}
            f:storage: {}
        f:volumeMode: {}
      f:status:
        f:phase: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-08-24T07:30:38Z"
  name: data-elk-stack-elasticsearch-master-0
  namespace: default
  resourceVersion: "201123"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-elk-stack-elasticsearch-master-0
  uid: de58f769-f9a7-41ad-a449-ef16d4b72bc6
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  volumeMode: Filesystem
status:
  phase: Pending
Can somebody please help me to fix this problem? Thanks in advance.
The pods are Pending because the PVCs below are Pending; the corresponding PVs have not been created:
data-elk-stack-elasticsearch-master-0
data-elk-stack-logstash-0
data-elk-stack-elasticsearch-data-0
Since you have mentioned this is for local development, you can use hostPath volumes for the PVs. Create a PV for each of the pending PVCs using the samples below, so you will create 3 PVs in total.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-master
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-logstash
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-data
  labels:
    type: local
spec:
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Minikube - Not able to get any results from Elasticsearch when it uses existing indices

I am trying to load my existing local Elasticsearch indices into an Elasticsearch pod on Kubernetes (minikube v1.9.2).
What I finally understood is that I have to use a mountPath and hostPath combination to do that. Additionally, if I want to provide a custom index location (not the default one), I have to use a ConfigMap to override path.data in config/elasticsearch.yml.
I did that as below; it created a directory at the mount path and updated the config/elasticsearch.yml file, but the mount path directory does not contain the host path directory's content.
I could not figure out the reason behind it. Could someone let me know what I am doing wrong here?
Then I went ahead and manually copied the indices from the local host to the Kubernetes pod using
kubectl cp localelasticsearhindexdirectory podname:/data/elk/
But then when I tried an Elasticsearch query, it gave me an empty result (even though the index was manually copied).
If I use the same index with a local Elasticsearch (not on Kubernetes), I do get results.
Could someone please give some advice on diagnosing the following issues:
Why does the mount path not contain the host path's content?
How should I debug / what steps should I follow to understand why Elasticsearch on the pod is not able to return results?
kind: Deployment
metadata:
  name: elasticsearch
spec:
  selector:
    matchLabels:
      run: elasticsearch
  replicas: 1
  template:
    metadata:
      labels:
        run: elasticsearch
    spec:
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
        name: elasticsearch
        imagePullPolicy: IfNotPresent
        env:
        - name: discovery.type
          value: single-node
        - name: cluster.name
          value: elasticsearch
        ports:
        - containerPort: 9300
          name: nodes
        - containerPort: 9200
          name: client
        volumeMounts:
        - name: storage
          mountPath: /data/elk
        - name: config-volume
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
      volumes:
      - name: config-volume
        configMap:
          name: elasticsearch-config
      - name: storage
        hostPath:
          path: ~/elasticsearch-6.6.1/data
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-config
data:
  elasticsearch.yml: |
    cluster:
      name: ${CLUSTER_NAME:elasticsearch-default}
    node:
      master: ${NODE_MASTER:true}
      data: ${NODE_DATA:true}
      name: ${NODE_NAME:node-1}
      ingest: ${NODE_INGEST:true}
      max_local_storage_nodes: ${MAX_LOCAL_STORAGE_NODES:1}
    processors: ${PROCESSORS:1}
    network.host: ${NETWORK_HOST:_site_}
    path:
      data: ${DATA_PATH:"/data/elk"}
      repo: ${REPO_LOCATIONS:[]}
    bootstrap:
      memory_lock: ${MEMORY_LOCK:false}
    http:
      enabled: ${HTTP_ENABLE:true}
      compression: true
      cors:
        enabled: true
        allow-origin: "*"
    discovery:
      zen:
        ping.unicast.hosts: ${DISCOVERY_SERVICE:elasticsearch-discovery}
        minimum_master_nodes: ${NUMBER_OF_MASTERS:1}
    xpack:
      license.self_generated.type: basic
**service.yaml**
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  ports:
  - name: client
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: nodes
    port: 9300
    protocol: TCP
    targetPort: 9300
  type: NodePort
  selector:
    run: elasticsearch
The solution in HostPath with minikube - Kubernetes worked for me.
To mount a local directory into a pod in minikube (v1.9.2), you have to mount that local directory into the minikube VM first and then use the minikube-mounted path in hostPath
(https://minikube.sigs.k8s.io/docs/handbook/mount/).
minikube mount ~/esData:/indexdata
📁 Mounting host path /esData into VM as /indexdata ...
▪ Mount type: <no value>
▪ User ID: docker
▪ Group ID: docker
▪ Version: 9p2000.L
▪ Message Size: 262144
▪ Permissions: 755 (-rwxr-xr-x)
▪ Options: map[]
▪ Bind Address: 192.168.5.6:55230
🚀 Userspace file server: ufs starting
✅ Successfully mounted ~/esData to /indexdata
📌 NOTE: This process must stay alive for the mount to be accessible ...
You have to run minikube mount in a separate terminal because it starts a process and stays there until you unmount.
Instead of doing it as a Deployment as in the original question, I am now doing it as a StatefulSet, but the same solution will work for a Deployment also.
Another issue I faced during mounting was that the Elasticsearch server pod was throwing java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes. Then I saw here that I have to use an initContainer to set the correct ownership of /usr/share/elasticsearch/data.
Please see my final YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: "elasticsearch"
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
      - name: set-permissions
        image: registry.hub.docker.com/library/busybox:latest
        command: ['sh', '-c', 'mkdir -p /usr/share/elasticsearch/data && chown 1000:1000 /usr/share/elasticsearch/data' ]
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: client
        - containerPort: 9300
          name: nodes
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      volumes:
      - name: data
        hostPath:
          path: /indexdata
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  ports:
  - port: 9200
    name: client
  - port: 9300
    name: nodes
  type: NodePort
  selector:
    app: elasticsearch

Kubernetes Persistent Volume is not working on GCE

I am trying to make my Elasticsearch pods persistent so that data is preserved when the deployment or pods are recreated. Elasticsearch is part of a Graylog2 setup.
After I set everything up, I sent a few logs to Graylog and I could see them appear on the dashboard. However, when I deleted the Elasticsearch pod, all the data was lost on the Graylog dashboard after the pod was recreated.
I am using GCE.
Here is my persistent volume config:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: elastic-pv
  labels:
    type: gcePD
spec:
  capacity:
    storage: 200Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    fsType: ext4
    pdName: elastic-pv-disk
Persistent volume claim config:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elastic-pvc
  labels:
    type: gcePD
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
and here is my elasticsearch deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elastic-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        type: elasticsearch
    spec:
      containers:
      - name: elastic-container
        image: gcr.io/project/myelasticsearch:v1
        imagePullPolicy: Always
        ports:
        - containerPort: 9300
          name: first-port
          protocol: TCP
        - containerPort: 9200
          name: second-port
          protocol: TCP
        volumeMounts:
        - name: elastic-pd
          mountPath: /data/db
      volumes:
      - name: elastic-pd
        persistentVolumeClaim:
          claimName: elastic-pvc
Output of kubectl describe pod:
Name: elastic-deployment-1423685295-jt6x5
Namespace: default
Node: gke-sd-logger-default-pool-2b3affc0-299k/10.128.0.6
Start Time: Tue, 09 May 2017 22:59:59 +0500
Labels: pod-template-hash=1423685295
type=elasticsearch
Status: Running
IP: 10.12.0.11
Controllers: ReplicaSet/elastic-deployment-1423685295
Containers:
elastic-container:
Container ID: docker://8774c747e2a56363f657a583bf5c2234ed2cff64dc21b6319fc53fdc5c1a6b2b
Image: gcr.io/thematic-flash-786/myelasticsearch:v1
Image ID: docker://sha256:7c25be62dbad39c07c413888e275ae419a66070d37e0d98bf5008e15d7720eec
Ports: 9300/TCP, 9200/TCP
Requests:
cpu: 100m
State: Running
Started: Tue, 09 May 2017 23:02:11 +0500
Ready: True
Restart Count: 0
Volume Mounts:
/data/db from elastic-pd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qtdbb (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
elastic-pd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elastic-pvc
ReadOnly: false
default-token-qtdbb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qtdbb
QoS Class: Burstable
Tolerations: <none>
No events.
Output of kubectl describe pv:
Name: elastic-pv
Labels: type=gcePD
StorageClass:
Status: Bound
Claim: default/elastic-pvc
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 200Gi
Message:
Source:
Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
PDName: elastic-pv-disk
FSType: ext4
Partition: 0
ReadOnly: false
No events.
Output of kubectl describe pvc:
Name: elastic-pvc
Namespace: default
StorageClass:
Status: Bound
Volume: elastic-pv
Labels: type=gcePD
Capacity: 200Gi
Access Modes: RWO
No events.
Confirmation that real disk exists:
What could be the reason the Persistent Volume is not persistent?
In the official images, Elasticsearch data is stored at /usr/share/elasticsearch/data, not /data/db. It would appear that you need to update the mount to /usr/share/elasticsearch/data instead to get the data stored on the persistent volume.
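A minimal sketch of that change in the deployment above (only the mountPath differs; the container and volume names stay as in the question):
        volumeMounts:
        - name: elastic-pd
          mountPath: /usr/share/elasticsearch/data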
