Facing an issue with a Filestore persistent volume in GKE - yaml

I am using Google Filestore for a persistent volume in Kubernetes, but it is mounting only the root folder, not its contents.
I am using the GKE service and perform the following tasks:
volume-create:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: x.x.x.x
    path: /share
Persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume1-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: volume1
  resources:
    requests:
      storage: 3Gi
Deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: image1
spec:
  template:
    metadata:
      labels:
        name: image1
    spec:
      restartPolicy: Always
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: volume1-claim
      containers:
        - name: image1
          image: gcr.io/project-gcr/image:1.0
          imagePullPolicy: Always
          volumeMounts:
            - name: volume
              mountPath: "/app/data"
But it mounts an empty folder at /app/data, not the share's contents. I also referred to the URL below:
https://cloud.google.com/filestore/docs/accessing-fileshares
Any help is appreciated.

Are you trying to mount an already existing persistent disk?
If so, you will need to define the disk in your configuration file.
You can find more information here.
What kind of data is located in /app/data? Is it on a VM? Can you give me some more information about your deployment? How are you testing/viewing your data?
The more details you provide, the more specific the help can be.
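As a first check, you could also mount the share directly from a Compute Engine VM in the same VPC to confirm the files really live under /share (the server IP and share name below are the placeholders from your question):

sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/test
sudo mount -t nfs x.x.x.x:/share /mnt/test
ls -la /mnt/test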

Related

How to force volume mount refresh on Kubernetes docker

I am using a Kubernetes deployment on the local Docker VM (macOS) to develop.
Instead of creating a new image and killing the pod for each code change iteration, I mount the code from the IDE into the container:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: monitor-storage
  namespace: wielder-services
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitor-pvc
  namespace: wielder-services
  labels:
    app: monitor
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
  storageClassName: monitor-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: monitor-storage
  local:
    path: /<path to code folder>
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
This worked fine, and I used to immediately see changes made in the IDE reflected in the pod shell and vice versa. Now I can't see changes from the IDE in the pod, even if I delete and re-deploy the entire volume/deployment combination. It seems the directory content is copied to a different place in a cache or storage.
Following a few threads, I stopped the IDE to see if that had an effect, but changes to the files still don't appear in the mounted directory.
According to the documentation this should be possible:
https://docs.docker.com/desktop/mac/#file-sharing
Is there a way to force a remounting of the host (macOS) directory?
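One way to narrow this down would be to check whether Docker Desktop itself still sees the host directory, independently of Kubernetes, and compare that with what the pod sees (the host path placeholder is the one from the PV above; the pod name and mount path are hypothetical and should match your deployment):

docker run --rm -v /<path to code folder>:/host alpine ls /host
kubectl exec <pod-name> -- ls <mountPath-from-your-deployment>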

pod has unbound immediate PersistentVolumeClaims ECK (Elasticsearch on Kubernetes)

I am trying to deploy Elasticsearch on Kubernetes (https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html) on a local minikube cluster. I have already installed the operator.
When I apply the Elasticsearch cluster below, I get the following pod error:
running "VolumeBinding" filter plugin for pod "data-es-es-default-0":
pod has unbound immediate PersistentVolumeClaims
volume/claim:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
elastic.yml
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: data-es
spec:
  version: 7.4.2
  nodeSets:
    - name: default
      count: 2
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            storageClassName: standard
            resources:
              requests:
                storage: 10Gi
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.store.allow_mmap: false
        xpack.security.authc.realms:
          native:
            native1:
              order: 1
---
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: data-kibana
spec:
  version: 7.4.2
  count: 1
  elasticsearchRef:
    name: data-es
kubectl get pvc
pod has unbound immediate PersistentVolumeClaims
The above error means there is no PersistentVolume that can be bound to the PersistentVolumeClaim. By default, the local-storage storage class does not create a PersistentVolume dynamically.
To use the dynamic provisioning mechanism of the local-storage storage class, you need to configure the class so that it can provision PersistentVolumes. Check this discussion: Kubernetes: What is the best practice for create dynamic local volume to auto assign PVs for PVCs?.
Alternatively, without the dynamic provisioning mechanism of a storage class, you can create a PersistentVolume using hostPath that can be bound to the PersistentVolumeClaim. But this is not a recommended solution for production usage. Check the guide here.
The PersistentVolumeClaims will be created automatically based on the volumeClaimTemplates in the elastic YAML, so you should not create a PersistentVolumeClaim yourself.
Since the nodeSet count is 2, two PersistentVolumeClaims are created, so you need to create two PersistentVolumes.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data1
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data2
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Data is lost after changes are applied to the image

I am trying to add Elasticsearch to the EKS cluster. But whenever I apply changes, my data is lost. It looks like the volume I attached has been changed and reclaimed.
metadata:
  name: elasticsearch-uat
  labels:
    component: elasticsearch-uat
spec:
  replicas: 1
  serviceName: elasticsearch-uat
  template:
    metadata:
      ...
    spec:
      initContainers:
        - name: init-sysctl
          ...
      containers:
        - name: es
          securityContext:
            capabilities:
              add:
                - IPC_LOCK
          image: 559076975273.dkr.ecr.us-west-2.amazonaws.com/elasticsearch-s3:v2
          env:
            ...
          ports:
            - containerPort: 9200
              name: http
              protocol: TCP
            - containerPort: 9300
              name: transport
              protocol: TCP
          volumeMounts:
            - mountPath: /data
              name: es-storage-uat
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
    - metadata:
        namespace: k8
        name: es-storage-uat
      spec:
        storageClassName: gp2
        accessModes: [ ReadWriteOnce ]
        resources:
          requests:
            storage: 2Gi
This is a StatefulSet.
Please help me understand this concept. I do not want to lose my data under any condition.
Thanks in advance.
From the pv output it is clear that the PersistentVolume's reclaim policy is set to Delete, which means that if the PVC is deleted, the PersistentVolume is deleted automatically and you lose the data.
In your case, it is appropriate to use the Retain policy. With the Retain policy, if a user deletes a PersistentVolumeClaim, the corresponding PersistentVolume is not deleted.
Further to the above, if you are using dynamic provisioning, set the reclaimPolicy field of the StorageClass to an appropriate value. If no reclaimPolicy is specified when a StorageClass object is created, it defaults to Delete.
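For an already-provisioned volume you can switch the reclaim policy in place, and for new volumes you could create a separate StorageClass with reclaimPolicy set to Retain. A minimal sketch, assuming the in-tree EBS provisioner that backs gp2 on EKS (the class name gp2-retain is just an example):

kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-retain          # example name, not required
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain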

ReadWriteMany storage on Google Kubernetes Engine for StatefulSets

I used NFS to mount ReadWriteMany storage on a Deployment on Google Kubernetes Engine, as described in the following link:
https://medium.com/platformer-blog/nfs-persistent-volumes-with-kubernetes-a-case-study-ce1ed6e2c266
However, my particular use case (an Elasticsearch production cluster, for snapshots) requires mounting the ReadWriteMany volume on a StatefulSet.
When using the NFS volume created previously with a StatefulSet, the volumes are not provisioned for the different replicas of the StatefulSet.
Is there any way to overcome this, or another approach I can use?
The guide contains a small mistake, depending on how you follow it. The [ClusterIP] defined in the persistent volume should be "nfs-server.default..." instead of "nfs-service.default..."; "nfs-server" is the name used in the service definition.
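For reference, the corrected nfs: section of the guide's PersistentVolume would look something like this (the export path is a placeholder; use whatever path the guide's NFS server actually exports):

nfs:
  server: nfs-server.default.svc.cluster.local
  path: "/"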
Below is a very minimal setup I used for a StatefulSet. I deployed the first three files from the tutorial to create the PV and PVC, then used the YAML below in place of the busybox bonus YAML the author included. This deployed successfully. Let me know if you have trouble.
apiVersion: v1
kind: Service
metadata:
  name: stateful-service
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: thestate
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: thestate
  labels:
    app: thestate
spec:
  serviceName: stateful-service
  replicas: 3
  selector:
    matchLabels:
      app: thestate
  template:
    metadata:
      labels:
        app: thestate
    spec:
      containers:
        - name: nginx
          image: nginx:1.8
          volumeMounts:
            - name: my-pvc-nfs
              mountPath: /mnt
          ports:
            - containerPort: 80
              name: web
      volumes:
        - name: my-pvc-nfs
          persistentVolumeClaim:
            claimName: nfs
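To sanity-check that all replicas really share the same NFS-backed volume, you could write a file from one pod and read it from another (pod names follow the StatefulSet naming convention for the manifest above):

kubectl exec thestate-0 -- sh -c 'echo hello > /mnt/test.txt'
kubectl exec thestate-1 -- cat /mnt/test.txt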

Elasticsearch path.data

I have two webservers, each with their own installation of Elasticsearch.
Both these webservers have a shared folder on their D: drive.
I want to use the same data folder so that I have one set of indexes and each Elasticsearch install uses those same indexes, rather than having two sets, one on each server.
Therefore I have changed the path.data location in both elasticsearch.yml files to point to the same shared folder.
The problem is, only one webserver is able to retrieve data for queries; the other server just returns nothing when running a search query.
Am I missing a config setting?
Are the two Elasticsearch nodes in the same cluster?
Each node writes to its own folder even though they share the same base directory.
As it is, it seems you have two distinct Elasticsearch instances holding separate data.
Define a cluster and add the two nodes to it; that is the proper way to have the same data managed by multiple nodes.
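A minimal sketch of the clustering settings in each elasticsearch.yml (node names, host names and paths are placeholders; on versions before 7.x the discovery settings would instead be the legacy discovery.zen.ping.unicast.hosts and minimum_master_nodes options):

cluster.name: shared-cluster
node.name: web-node-1                 # web-node-2 on the second server
path.data: D:\elasticsearch\data      # keep a separate, local data path per node
network.host: 0.0.0.0
discovery.seed_hosts: ["webserver1", "webserver2"]
cluster.initial_master_nodes: ["web-node-1", "web-node-2"]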
The config below works fine:
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    name: production
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-prod
  namespace: production
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data-prod
  namespace: production
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: data-es
  namespace: production
spec:
  version: 8.4.3
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
      podTemplate:
        spec:
          containers:
            - name: elasticsearch
              # resources:
              #   limits:
              #     memory: 2Gi
              #     cpu: 2
              # env:
              #   - name: ES_JAVA_OPTS
              #     value: "-Xms2g -Xmx4g"
              volumeMounts:
                - name: elasticsearch-data-prod
                  mountPath: /usr/share/production/elasticsearch/data
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data-prod
          spec:
            accessModes:
              - ReadWriteOnce
            storageClassName: standard
            resources:
              requests:
                storage: 10Gi
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: data-kibana
  namespace: production
spec:
  version: 8.4.3
  count: 1
  elasticsearchRef:
    name: data-es
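After applying it, you can check the cluster status and fetch the generated elastic user password like this (the secret name follows ECK's <cluster-name>-es-elastic-user convention, so data-es-es-elastic-user here):

kubectl get elasticsearch -n production
kubectl get secret data-es-es-elastic-user -n production -o go-template='{{.data.elastic | base64decode}}'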
