Minikube - Not able to get any result from Elasticsearch when it uses existing indices

I am trying to load my existing local Elasticsearch indices into a Kubernetes (minikube v1.9.2) Elasticsearch pod.
What I finally understood is that I have to use a mountPath and hostPath combination to do that. Additionally, if I want to provide a custom index location (not the default one), I have to use a ConfigMap to override path.data in config/elasticsearch.yml.
I did that as shown below; it created a directory at the mount path and updated config/elasticsearch.yml, but the mount path directory does not contain the host path directory's content.
I could not figure out the reason behind it. Could someone let me know what I am doing wrong here?
Then I went ahead and manually copied the indices from the local host into the Kubernetes pod using
kubectl cp localelasticsearhindexdirectory podname:/data/elk/
But when I then ran an Elasticsearch query it gave me an empty result (even though the index was manually copied).
If I use the same index with a local Elasticsearch (not on Kubernetes), I get results.
Could someone please give me some advice on how to diagnose the following issues (see the sketch after this list)?
Why does the mount path not have the host path's content?
How should I debug / what steps should I follow to understand why Elasticsearch on the pod is not able to return any results?
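A couple of checks can narrow this down. A minimal sketch, assuming the pod is named elasticsearch-xxxx, path.data is /data/elk, and curl is available inside the image (it is in the official Elasticsearch images):
# Is the host path content actually visible inside the container?
kubectl exec -it elasticsearch-xxxx -- ls -la /data/elk
# Which data path did Elasticsearch really pick up?
kubectl exec -it elasticsearch-xxxx -- curl -s 'localhost:9200/_nodes/settings?pretty'
# Which indices does the node see?
kubectl exec -it elasticsearch-xxxx -- curl -s 'localhost:9200/_cat/indices?v'
If the first command shows an empty directory, the hostPath mount is the problem; if the indices are there but _cat/indices is empty, the configured path.data is the place to look.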
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  selector:
    matchLabels:
      run: elasticsearch
  replicas: 1
  template:
    metadata:
      labels:
        run: elasticsearch
    spec:
      containers:
        - image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
          name: elasticsearch
          imagePullPolicy: IfNotPresent
          env:
            - name: discovery.type
              value: single-node
            - name: cluster.name
              value: elasticsearch
          ports:
            - containerPort: 9300
              name: nodes
            - containerPort: 9200
              name: client
          volumeMounts:
            - name: storage
              mountPath: /data/elk
            - name: config-volume
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
      volumes:
        - name: config-volume
          configMap:
            name: elasticsearch-config
        - name: storage
          hostPath:
            path: ~/elasticsearch-6.6.1/data
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: elasticsearch-config
data:
  elasticsearch.yml: |
    cluster:
      name: ${CLUSTER_NAME:elasticsearch-default}
    node:
      master: ${NODE_MASTER:true}
      data: ${NODE_DATA:true}
      name: ${NODE_NAME:node-1}
      ingest: ${NODE_INGEST:true}
      max_local_storage_nodes: ${MAX_LOCAL_STORAGE_NODES:1}
    processors: ${PROCESSORS:1}
    network.host: ${NETWORK_HOST:_site_}
    path:
      data: ${DATA_PATH:"/data/elk"}
      repo: ${REPO_LOCATIONS:[]}
    bootstrap:
      memory_lock: ${MEMORY_LOCK:false}
    http:
      enabled: ${HTTP_ENABLE:true}
      compression: true
      cors:
        enabled: true
        allow-origin: "*"
    discovery:
      zen:
        ping.unicast.hosts: ${DISCOVERY_SERVICE:elasticsearch-discovery}
        minimum_master_nodes: ${NUMBER_OF_MASTERS:1}
    xpack:
      license.self_generated.type: basic
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  ports:
    - name: client
      port: 9200
      protocol: TCP
      targetPort: 9200
    - name: nodes
      port: 9300
      protocol: TCP
      targetPort: 9300
  type: NodePort
  selector:
    run: elasticsearch

The solution in HostPath with minikube - Kubernetes worked for me.
To mount a local directory into a pod in minikube (version v1.9.2), you have to mount that local directory into minikube first and then use the minikube-mounted path as the hostPath
(https://minikube.sigs.k8s.io/docs/handbook/mount/).
minikube mount ~/esData:/indexdata
📁 Mounting host path /esData into VM as /indexdata ...
▪ Mount type: <no value>
▪ User ID: docker
▪ Group ID: docker
▪ Version: 9p2000.L
▪ Message Size: 262144
▪ Permissions: 755 (-rwxr-xr-x)
▪ Options: map[]
▪ Bind Address: 192.168.5.6:55230
🚀 Userspace file server: ufs starting
✅ Successfully mounted ~/esData to /indexdata
📌 NOTE: This process must stay alive for the mount to be accessible ...
You have to run minikube mount in a separate terminal because it starts a process and stays in the foreground until you unmount.
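Before wiring the hostPath to the mounted directory, it can be worth confirming that the mount is actually visible inside the minikube VM. A small check, using the /indexdata mount target from above:
minikube ssh -- ls -la /indexdata
If the files you expect are not listed there, the hostPath volume will also be empty inside the pod.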
Instead of doing it as a Deployment as in the original question, I am now doing it as a StatefulSet, but the same solution will work for a Deployment as well.
Another issue I faced during mounting was that the Elasticsearch server pod was throwing java.nio.file.AccessDeniedException: /usr/share/elasticsearch/data/nodes. Then I saw here that I have to use an initContainer to set full permissions on /usr/share/elasticsearch/data/nodes.
Please see my final yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: "elasticsearch"
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
        - name: set-permissions
          image: registry.hub.docker.com/library/busybox:latest
          command: ['sh', '-c', 'mkdir -p /usr/share/elasticsearch/data && chown 1000:1000 /usr/share/elasticsearch/data']
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
          env:
            - name: discovery.type
              value: single-node
          ports:
            - containerPort: 9200
              name: client
            - containerPort: 9300
              name: nodes
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: data
          hostPath:
            path: /indexdata
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  ports:
    - port: 9200
      name: client
    - port: 9300
      name: nodes
  type: NodePort
  selector:
    app: elasticsearch
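Once the StatefulSet is up, a quick way to confirm the pod actually serves the pre-existing indices is to hit the NodePort from outside the cluster. A rough sketch (the URL printed by minikube will differ on your machine):
# Prints something like http://<node-ip>:<node-port> for port 9200
minikube service elasticsearch --url
# Then list the indices the node knows about
curl 'http://<node-ip>:<node-port>/_cat/indices?v'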

Related

k8s windows pod - error when trying to connect to remote link [duplicate]

I have a k8s Windows pod and I want to add a service so it can connect to a remote link,
for example: www.webtest.com:1515
How do I add this option?
I have a remote server that I need to connect to from the k8s Windows pod.
pod yaml file (example):
apiVersion: v1
kind: Pod
metadata:
  name: test3
  namespace: cloud-extractor
  labels:
    app: test3
spec:
  volumes:
    - name: smb-cloud-data
      flexVolume:
        driver: microsoft.com/smb.cmd
        secretRef:
          name: smb-secret
        options:
          source: \\pt-rnd-stg1\falcon_extract\ufds
    - name: smb-reader-logs
      flexVolume:
        driver: microsoft.com/smb.cmd
        secretRef:
          name: smb-secret
        options:
          source: <source location>
  containers:
    - name: ufdr-converter-windows
      image: <image location>
      resources: {}
      volumeMounts:
        - name: smb-cloud-data
          mountPath: <mount location>
        - name: smb-reader-logs
          mountPath: <mount path location>
      imagePullPolicy: Always
      tty: true
  restartPolicy: Never
  nodeSelector:
    kubernetes.io/os: windows
  imagePullSecrets:
    - name: regcred
I added a new NodePort service with the specified port, but I still can't connect.
Example:
apiVersion: v1
kind: Service
metadata:
  name: license
  labels:
    app: license
spec:
  type: NodePort
  ports:
    - port: 1515
      targetPort: 1515
      protocol: TCP
  selector:
    app: nginx

How can I configure a different storage mount for each pod in an Elasticsearch cluster in K8S?

I am deploying an Elasticsearch cluster to K8S on EKS with a nodegroup. I claimed an EBS volume for the cluster's storage. When I launch the cluster, only one pod runs successfully and I get this error for the other pods:
Warning FailedAttachVolume 3m33s attachdetach-controller Multi-Attach error for volume "pvc-4870bd46-2f1e-402a-acf7-005de83e4588" Volume is already used by pod(s) es-0
Warning FailedMount 90s kubelet Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[es-config persistent-storage default-token-pqzkp]: timed out waiting for the condition
It means the storage is already in use. I understand that this volume is used by the first pod, so the other pods can't use it. But I don't know how to use a different mount path for each pod when they are using the same EBS volume.
Below is the full spec for the cluster.
apiVersion: v1
kind: ConfigMap
metadata:
  name: es-config
data:
  elasticsearch.yml: |
    cluster.name: elk-cluster
    network.host: "0.0.0.0"
    bootstrap.memory_lock: false
    # discovery.zen.minimum_master_nodes: 2
    node.max_local_storage_nodes: 9
    discovery.seed_hosts:
      - es-0.es-entrypoint.default.svc.cluster.local
      - es-1.es-entrypoint.default.svc.cluster.local
      - es-2.es-entrypoint.default.svc.cluster.local
  ES_JAVA_OPTS: -Xms4g -Xmx8g
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es
  namespace: default
spec:
  serviceName: es-entrypoint
  replicas: 3
  selector:
    matchLabels:
      name: es
  template:
    metadata:
      labels:
        name: es
    spec:
      volumes:
        - name: es-config
          configMap:
            name: es-config
            items:
              - key: elasticsearch.yml
                path: elasticsearch.yml
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: ebs-claim
      initContainers:
        - name: permissions-fix
          image: busybox
          volumeMounts:
            - name: persistent-storage
              mountPath: /usr/share/elasticsearch/data
          command: ['chown']
          args: ['1000:1000', '/usr/share/elasticsearch/data']
      containers:
        - name: es
          image: elasticsearch:7.10.1
          resources:
            requests:
              cpu: 2
              memory: 8Gi
          ports:
            - name: http
              containerPort: 9200
            - containerPort: 9300
              name: inter-node
          volumeMounts:
            - name: es-config
              mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
              subPath: elasticsearch.yml
            - name: persistent-storage
              mountPath: /usr/share/elasticsearch/data
---
apiVersion: v1
kind: Service
metadata:
  name: es-entrypoint
spec:
  selector:
    name: es
  ports:
    - port: 9200
      targetPort: 9200
      protocol: TCP
  clusterIP: None
You should be using volumeClaimTemplates with a StatefulSet so that each pod gets its own volume. Details:
volumeClaimTemplates:
  - metadata:
      name: es
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      # storageClassName: <omit to use default StorageClass or specify>
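With volumeClaimTemplates, the StatefulSet controller creates one PersistentVolumeClaim per replica, named <template-name>-<statefulset-name>-<ordinal>, so each pod mounts its own EBS volume instead of every pod fighting over the single ebs-claim. A quick check, assuming the names used in this sketch:
# Expect one Bound claim per replica, e.g. es-es-0, es-es-1, es-es-2
kubectl get pvc
Note that the volumeMounts entry in the pod template must then reference the claim template's name (es here) rather than the old persistent-storage claim.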

Kubernetes - Connecting to Elasticsearch from a Spring Boot app in minikube

I am trying to run a Kubernetes cluster locally using minikube. This is my first try with Kubernetes, so
I am not familiar with all aspects of it.
I am trying to deploy a Spring Boot app which connects to an Elasticsearch server.
springboot deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp1:latest
          imagePullPolicy: Never
Elasticsearch server deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
spec:
  selector:
    matchLabels:
      run: elasticsearch
  replicas: 1
  template:
    metadata:
      labels:
        run: elasticsearch
    spec:
      containers:
        - image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
          name: elasticsearch
          imagePullPolicy: IfNotPresent
          env:
            - name: discovery.type
              value: single-node
            - name: cluster.name
              value: elasticsearch
          ports:
            - containerPort: 9300
              name: nodes
            - containerPort: 9200
              name: client
I exposed the Elasticsearch service as follows:
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  ports:
    - name: client
      port: 9200
      protocol: TCP
      targetPort: 9200
    - name: nodes
      port: 9300
      protocol: TCP
      targetPort: 9300
  type: NodePort
  selector:
    run: elasticsearch
Similarly, I exposed the service of the Spring Boot app as well.
Now I am wondering how I can connect from the Spring Boot service to the Elasticsearch service.
When Spring Boot and Elasticsearch were normal deployments on the same machine (not in Kubernetes), I connected using
RestClient.builder(new HttpHost("localhost", 9200))
    .build();
What's the best way to connect to Elasticsearch from Spring Boot in Kubernetes?
Should I save the IP of the Elasticsearch service in an environment variable and use it in Spring Boot, or use the service name of the Elasticsearch service?
Please advise.
You should be able to get to the service, from within the cluster, using:
http://servicename.servicenamespace:serviceport
Kubernetes DNS internal to the cluster will resolve the service name as a host name. If they are in the same namespace you probably don't need the service namespace.
Given the yaml above, and if you used the default namespace for both elasticsearch and your myapp, then the myapp process can connect via:
http://elasticsearch:9200
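If the connection still fails, this can be checked from inside the cluster with a one-off exec. A rough sketch, assuming the myapp pod name and that curl (or wget) is present in that image:
kubectl exec -it <myapp-pod-name> -- curl -s http://elasticsearch:9200
Getting the JSON banner with the cluster name back confirms both DNS resolution and connectivity on port 9200.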
Now I am able to connect to Elasticsearch from my Spring Boot app.
Somehow Spring Boot is not able to connect using http://elasticsearch:9200.
Instead, I pass the IP and port of the exposed Elasticsearch service (the 9200 port's equivalent output of minikube service elasticsearch --url, i.e. IP of the node:exposed NodePort of 9200) to every Spring Boot request which connects to the Elasticsearch service, and now I am able to connect.
I know that it's not the ideal solution and I do not know why it cannot resolve the service name to an IP, but at least I am able to proceed.
It would be helpful if somebody could suggest some ways to fix/diagnose the issue.
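One way to narrow this down is to test DNS resolution separately from the application. A sketch of the checks, assuming the default namespace and the standard cluster DNS addon:
# Does the service exist and does it have endpoints behind it?
kubectl get svc elasticsearch
kubectl get endpoints elasticsearch
# Can a throwaway pod resolve the name?
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup elasticsearch
# Is the cluster DNS itself healthy?
kubectl get pods -n kube-system -l k8s-app=kube-dns
If nslookup resolves the name but the app still fails, the issue is more likely in how the Spring Boot client builds the host and port than in Kubernetes DNS.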
******* UPDATE ******
Finally Spring Boot is able to connect to Elasticsearch using http://elasticsearch:9200. I do not know which of my changes fixed it. I changed my Elasticsearch from a Deployment to a StatefulSet as shown in the following yaml, but that change was not made to fix this issue.
Another change I made is in the label: I changed it from "run": "elasticsearch" to "app": "elasticsearch", but I do not know whether this helped. (I am going to read more about label changes and will see whether this has any effect.)
Please see the final elasticsearch.yaml file (more explanation of the file can be seen at Minikube - Not able to get any result from Elasticsearch when it uses existing indices).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
spec:
  serviceName: "elasticsearch"
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
        - name: set-permissions
          image: registry.hub.docker.com/library/busybox:latest
          command: ['sh', '-c', 'mkdir -p /usr/share/elasticsearch/data && chown 1000:1000 /usr/share/elasticsearch/data']
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:6.6.1
          env:
            - name: discovery.type
              value: single-node
          ports:
            - containerPort: 9200
              name: client
            - containerPort: 9300
              name: nodes
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: data
          hostPath:
            path: /indexdata
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  labels:
    service: elasticsearch
spec:
  ports:
    - port: 9200
      name: client
    - port: 9300
      name: nodes
  type: NodePort
  selector:
    app: elasticsearch

Debugging uWSGI in kubernetes

I have a pair of kubernetes pods, one for nginx and one for Python Flask + uWSGI. I have tested my setup locally in docker-compose and it worked fine; however, after deploying to kubernetes it seems there is no communication between the two. The end result is that I get a 502 Gateway Error when trying to reach my location.
So my question is not really about what is wrong with my setup, but rather about what tools I can use to debug this scenario. Is there a test client for uWSGI? Can I use ncat? I don't seem to get any useful log output from nginx, and I don't know if uWSGI even has a log.
How can I debug this?
For reference, here is my nginx location:
location / {
    # Trick to avoid nginx aborting at startup (set server in variable)
    set $upstream_server ${APP_SERVER};
    include uwsgi_params;
    uwsgi_pass $upstream_server;
    uwsgi_read_timeout 300;
    uwsgi_intercept_errors on;
}
Here is my wsgi.ini:
[uwsgi]
module = my_app.app
callable = app
master = true
processes = 5
socket = 0.0.0.0:5000
die-on-term = true
uid = www-data
gid = www-data
Here is the kubernetes deployment.yaml for nginx:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: nginx
  name: nginx
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: nginx
    spec:
      imagePullSecrets:
        - name: docker-reg
      containers:
        - name: nginx
          image: <custom image url>
          imagePullPolicy: Always
          env:
            - name: APP_SERVER
              valueFrom:
                secretKeyRef:
                  name: my-environment-config
                  key: APP_SERVER
            - name: FK_SERVER_NAME
              valueFrom:
                secretKeyRef:
                  name: my-environment-config
                  key: SERVER_NAME
          ports:
            - containerPort: 80
            - containerPort: 10443
            - containerPort: 10090
          resources:
            requests:
              cpu: 1m
              memory: 200Mi
          volumeMounts:
            - mountPath: /etc/letsencrypt
              name: my-storage
              subPath: nginx
            - mountPath: /dev/shm
              name: dshm
      restartPolicy: Always
      volumes:
        - name: my-storage
          persistentVolumeClaim:
            claimName: my-storage-claim-nginx
        - name: dshm
          emptyDir:
            medium: Memory
Here is the kubernetes service.yaml for nginx:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: nginx
  name: nginx
spec:
  type: LoadBalancer
  ports:
    - name: "nginx-port-80"
      port: 80
      targetPort: 80
      protocol: TCP
    - name: "nginx-port-443"
      port: 443
      targetPort: 10443
      protocol: TCP
    - name: "nginx-port-10090"
      port: 10090
      targetPort: 10090
      protocol: TCP
  selector:
    service: nginx
Here is the kubernetes deployment.yaml for python flask:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: my-app
  name: my-app
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: my-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: my-app
    spec:
      imagePullSecrets:
        - name: docker-reg
      containers:
        - name: my-app
          image: <custom image url>
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
          resources:
            requests:
              cpu: 1m
              memory: 100Mi
          volumeMounts:
            - name: merchbot-storage
              mountPath: /app/data
              subPath: my-app
            - name: dshm
              mountPath: /dev/shm
            - name: local-config
              mountPath: /app/secrets/local_config.json
              subPath: merchbot-local-config-test.json
      restartPolicy: Always
      volumes:
        - name: merchbot-storage
          persistentVolumeClaim:
            claimName: my-storage-claim-app
        - name: dshm
          emptyDir:
            medium: Memory
        - name: local-config
          secret:
            secretName: my-app-local-config
Here is the kubernetes service.yaml for python flask:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: my-app
  name: my-app
spec:
  ports:
    - name: "my-app-port-5000"
      port: 5000
      targetPort: 5000
  selector:
    service: my-app
Debugging in kubernetes is not very different from debugging outside it; there are just some concepts that need to be overlaid for the kubernetes world.
A Pod in kubernetes is what you would conceptually see as a host in the VM world. Every container running in a Pod will see each other's services on localhost. From there, anything from a Pod to anything else will have a network connection involved (even if the endpoint is node-local). So start testing with services on localhost and work your way out through pod IP, service IP, and service name.
Some complexity comes from having the debug tools available in the containers. Generally containers are built slim and don't have everything available, so you either need to install tools while a container is running (if you can) or build a special "debug" container you can deploy on demand in the same environment. You can always fall back to testing from the cluster nodes, which also have access.
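For the on-demand debug container mentioned above, a throwaway pod is often enough. A minimal sketch (the image choice is an assumption; pick whatever has the tools you need):
kubectl run -it --rm debug --image=busybox --restart=Never -- sh
# or, for an image with more networking tools:
kubectl run -it --rm debug --image=nicolaka/netshoot --restart=Never -- bash
From that shell you can run the nc/curl tests below against pod IPs, service IPs and service names.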
Where you have python available you can test with uwsgi_curl
pip install uwsgi-tools
uwsgi_curl hostname:port /path
Otherwise nc/curl will suffice, to a point.
Pod to localhost
First step is to make sure the container itself is responding. In this case you are likely to have python/pip available to use uwsgi_curl
kubectl exec -ti my-app-XXXX-XXXX sh
nc -v localhost 5000
uwsgi_curl localhost:5000 /path
Pod to Pod/Service
Next include the kubernetes networking. Start with IP's and finish with names.
Less likely to have python here, or even nc but I think testing the environment variables is important here:
kubectl exec -ti nginx-XXXX-XXXX sh
nc -v my-app-pod-IP 5000
nc -v my-app-service-IP 5000
nc -v my-app-service-name 5000
echo $APP_SERVER
echo $FK_SERVER_NAME
nc -v $APP_SERVER 5000
# or
uwsgi_curl $APP_SERVER:5000 /path
Debug Pod to Pod/Service
If you do need to use a debug pod, try and mimic the pod you are testing as much as possible. It's great to have a generic debug pod/deployment to quickly test anything, but if that doesn't reveal the issue you may need to customise the deployment to mimic the pod you are testing more closely.
In this case the environment variables play a part in the connection setup, so that should be emulated for a debug pod.
Node to Pod/Service
Pods/Services will be available from the cluster nodes (if you are not using restrictive network policies) so usually the quick test is to check Pods/Services are working from there:
nc -v <pod_ip> <container_port>
nc -v <service_ip> <service_port>
nc -v <service_dns> <service_port>
In this case:
nc -v <my_app_pod_ip> 5000
nc -v <my_app_service_ip> 5000
nc -v my-app.<namespace>.svc.cluster.local 5000

Kubernetes Persistent Volume is not working on GCE

I am trying to make my Elasticsearch pods persistent so that data is preserved when the deployment or pods are recreated. Elasticsearch is part of a Graylog2 setup.
After I set everything up, I sent a few logs to Graylog and I could see them appear on the dashboard. However, I deleted elasticsearch pod and after it was recreated all the data was lost on Graylog dashboard.
I am using GCE.
Here is my persistent volume config:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: elastic-pv
  labels:
    type: gcePD
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    fsType: ext4
    pdName: elastic-pv-disk
Persistent volume claim config:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elastic-pvc
  labels:
    type: gcePD
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi
and here is my elasticsearch deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: elastic-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        type: elasticsearch
    spec:
      containers:
        - name: elastic-container
          image: gcr.io/project/myelasticsearch:v1
          imagePullPolicy: Always
          ports:
            - containerPort: 9300
              name: first-port
              protocol: TCP
            - containerPort: 9200
              name: second-port
              protocol: TCP
          volumeMounts:
            - name: elastic-pd
              mountPath: /data/db
      volumes:
        - name: elastic-pd
          persistentVolumeClaim:
            claimName: elastic-pvc
Output of kubectl describe pod:
Name: elastic-deployment-1423685295-jt6x5
Namespace: default
Node: gke-sd-logger-default-pool-2b3affc0-299k/10.128.0.6
Start Time: Tue, 09 May 2017 22:59:59 +0500
Labels: pod-template-hash=1423685295
type=elasticsearch
Status: Running
IP: 10.12.0.11
Controllers: ReplicaSet/elastic-deployment-1423685295
Containers:
elastic-container:
Container ID: docker://8774c747e2a56363f657a583bf5c2234ed2cff64dc21b6319fc53fdc5c1a6b2b
Image: gcr.io/thematic-flash-786/myelasticsearch:v1
Image ID: docker://sha256:7c25be62dbad39c07c413888e275ae419a66070d37e0d98bf5008e15d7720eec
Ports: 9300/TCP, 9200/TCP
Requests:
cpu: 100m
State: Running
Started: Tue, 09 May 2017 23:02:11 +0500
Ready: True
Restart Count: 0
Volume Mounts:
/data/db from elastic-pd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qtdbb (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
elastic-pd:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: elastic-pvc
ReadOnly: false
default-token-qtdbb:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qtdbb
QoS Class: Burstable
Tolerations: <none>
No events.
Output of kubectl describe pv:
Name: elastic-pv
Labels: type=gcePD
StorageClass:
Status: Bound
Claim: default/elastic-pvc
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 200Gi
Message:
Source:
Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
PDName: elastic-pv-disk
FSType: ext4
Partition: 0
ReadOnly: false
No events.
Output of kubectl describe pvc:
Name: elastic-pvc
Namespace: default
StorageClass:
Status: Bound
Volume: elastic-pv
Labels: type=gcePD
Capacity: 200Gi
Access Modes: RWO
No events.
Confirmation that real disk exists:
What could be the reason Persistent Volume is not persistent?
In the official images, the Elasticsearch data is stored at /usr/share/elasticsearch/data and not /data/db. It would appear that you need to update the mountPath to /usr/share/elasticsearch/data instead to get the data stored on the persistent volume.
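A quick way to confirm where the data is actually being written is to compare the two paths inside the running container. A sketch, using the pod name from the describe output above:
# Should contain the index data once the mount path is corrected
kubectl exec -it elastic-deployment-1423685295-jt6x5 -- ls /usr/share/elasticsearch/data
# The currently mounted (but unused) location
kubectl exec -it elastic-deployment-1423685295-jt6x5 -- ls /data/db
If the first directory fills up while the second stays empty, pointing the volumeMount at /usr/share/elasticsearch/data is what will make the data survive pod recreation.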
