Trying to create a local persistent volume for ECK
I am creating a persistent volume with the following definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mo/esdata"
And PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
I am getting an error when applying the PVC:
kubectl apply -f pvc-es.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
The PersistentVolumeClaim "elasticsearch-data-quickstart-es-default-0" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
First, have you created the storage class "manual"?
Second, the error message says the PVC spec is immutable after creation. Is it possible you created a PVC with the same name before? Please run kubectl get pvc and show the output; if that is the case, you can delete the PVC and re-apply the YAML.
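If the class does not exist yet, a minimal no-provisioner StorageClass along these lines is enough for manually created hostPath volumes (only the name "manual" is taken from your manifests; the rest is a suggested sketch):

# Sketch of a "manual" StorageClass with no dynamic provisioner, so PVCs
# only bind to PersistentVolumes that were created by hand.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

If a stale claim does exist, kubectl delete pvc elasticsearch-data-quickstart-es-default-0 followed by re-applying the YAML should clear it; note that the name pattern in the error suggests this claim was generated by ECK from its volumeClaimTemplates rather than from your pvc-es.yml.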
Related
I am using a Kubernetes deployment on the local Docker VM (macOS) to develop.
Instead of creating a new image and killing the pod for each code-change iteration, I mount the code from the IDE into the Docker container:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: monitor-storage
  namespace: wielder-services
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitor-pvc
  namespace: wielder-services
  labels:
    app: monitor
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
  storageClassName: monitor-storage
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: monitor-storage
  local:
    path: /<path to code folder>
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
This works fine, and I used to immediately see changes made to the files in the IDE reflected in the pod shell and vice versa. Now I can't see changes from the IDE, even if I delete and re-deploy the entire volume/deployment combination. It seems the directory content is copied to a different place in a cache or storage.
Following a few threads, I stopped the IDE to see if that had an effect, but the changes to the files still don't appear in the mounted directory.
According to the documentation this should be possible:
https://docs.docker.com/desktop/mac/#file-sharing
Is there a way to force a remounting of the host (macOS) directory?
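For completeness, this is roughly how the mount can be inspected from inside the pod to see what the container actually gets (the pod name and the container mountPath are placeholders, since the deployment itself is not shown above):

# Placeholders: substitute the real pod name and the container mountPath
# from the deployment.
kubectl -n wielder-services exec -it <monitor-pod-name> -- ls -la <container-mount-path>
kubectl -n wielder-services exec -it <monitor-pod-name> -- cat /proc/mounts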
I have 3 Kubernetes nodes: one of them is the master and the others are workers. I deployed a Laravel application to the master node and created volumes and a storage class which point to that folder.
These are my YAML files for the volumes and the persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qsinav-pv-www-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  hostPath:
    path: "/var/www/html/Test/qSinav-starter"
The problem is that the pods on every node try to mount the folder from their own node.
As I am running a web application with load balancing between these nodes, if I log in on node 1 and the next request goes to node 2, it redirects me to the login page because I don't have a session there.
So I need to share one folder from the master node with all worker nodes.
I don't know what I should do to achieve this, so please help me solve it.
Thanks in advance.
Yeah, that's expected and is clearly mentioned in the docs for the hostPath volume:
Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes.
You need to use something like NFS, which can be shared between nodes. Your PV definition would end up looking something like this (change the IP and path as per your setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  nfs:
    path: /tmp
    server: 172.17.0.2
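Before applying it, it is worth confirming that every node can actually reach the export. A quick manual check (assuming the NFS client utilities, nfs-common or nfs-utils, are installed on the nodes, and using the placeholder server and path from the PV above) could look like this:

# Run on each worker node; 172.17.0.2 and /tmp are the placeholder
# values from the PV definition above.
showmount -e 172.17.0.2
sudo mount -t nfs 172.17.0.2:/tmp /mnt && ls /mnt && sudo umount /mnt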
Don't use the hostPath type, because that type of volume is only for a single-node cluster. It works well only in a single-node environment, because if the pod is assigned to another node, it can't get the data it needs.
So use the local type instead. It remembers which node was used for provisioning the volume, thus making sure that a restarting pod will always find the data storage in the state it had left it before the reboot.
PS 1: once a node has died, the data of both hostPath and local persistent volumes on that node is lost.
PS 2: the local and hostPath types don't work with dynamic provisioning.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  local:
    path: "/var/www/html/Test/qSinav-starter"
  # nodeAffinity is required for local volumes; replace <node-name> with the
  # hostname of the node that holds the directory
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <node-name>
I am trying to deploy Elasticsearch on Kubernetes (https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html) on a local minikube cluster. I have already installed the operator.
When I apply the Elasticsearch cluster below, I get the following pod error:
running "VolumeBinding" filter plugin for pod "data-es-es-default-0": pod has unbound immediate PersistentVolumeClaims
volume/claim:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
elastic.yml
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: data-es
spec:
  version: 7.4.2
  nodeSets:
    - name: default
      count: 2
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            storageClassName: standard
            resources:
              requests:
                storage: 10Gi
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.store.allow_mmap: false
        xpack.security.authc.realms:
          native:
            native1:
              order: 1
---
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: data-kibana
spec:
  version: 7.4.2
  count: 1
  elasticsearchRef:
    name: data-es
kubectl get pvc
pod has unbound immediate PersistentVolumeClaims
The above error means there is no PersistentVolume that can be bound to the PersistentVolumeClaim. By default, local-storage does not create a PersistentVolume dynamically.
To use the dynamic provisioning mechanism of the local-storage storage class, you need to configure the class so that it can provision PersistentVolumes. Check this discussion: Kubernetes: What is the best practice for create dynamic local volume to auto assign PVs for PVCs?.
Alternatively, without the dynamic provisioning mechanism of a storage class, you can create a PersistentVolume using hostPath which can be bound to the PersistentVolumeClaim. But this is not a recommended solution for production usage. Check this guide here.
The PersistentVolumeClaims will be created automatically based on the volumeClaimTemplates in the elastic YAML, hence you should not create a PersistentVolumeClaim yourself.
Since the nodeSets count is 2, two PersistentVolumeClaims are created, so you need to create two PersistentVolumes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data1
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data2
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
I am trying to deploy an Elasticsearch cluster (replicas: 3) using a StatefulSet in Kubernetes and need to store the Elasticsearch data in a PersistentVolume (PV). Since each Elasticsearch instance has its own data folder, I need a separate data folder for each replica in the PV. I am trying to use volumeClaimTemplates and mountPath: /usr/share/elasticsearch/data, but this results in the error pod has unbound immediate PersistentVolumeClaims on the second pod. How can I achieve this using a StatefulSet?
Thanks in advance.
There is no information on how you are trying to install Elasticsearch; however, as an example please follow:
this tutorial,
helm-charts.
As per the documentation for StatefulSet limitations:
The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin.
This looks like your case: a problem with dynamic storage provisioning.
Please verify the storage class, whether the PV and PVC were created and bound together, and the storage class setting in volumeClaimTemplates:
volumeMounts:
  - name: "elasticsearch-master"
    mountPath: /usr/share/elasticsearch/data
volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: name # check whether you are using the default storage class; otherwise you should specify this parameter manually
      resources:
        requests:
          storage: 30Gi
Hope this helps.
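To see which storage classes exist, which one is marked as the default, and whether the PV and PVC ended up bound, a quick check looks like this (the namespace is a placeholder):

# The default class is shown with "(default)" next to its name.
kubectl get storageclass
kubectl get pv
kubectl get pvc -n <namespace>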
If you are using dynamic provisioning, the volume gets created automatically at the backend (for example, a disk as the storage for PVs in Azure, for ReadWriteOnce kinds of operations); otherwise you need to create it manually.
Once you have created the volume, just create a PVC in the appropriate namespace with a size matching the PV, pass the volume name in the PVC definition, and it will get bound automatically.
You can try something like this -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claimName
  namespace: namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: default
  volumeName: pv-volumeName
# the status block is filled in by the cluster and can be omitted when applying
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
Please share if you still face issues
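For reference, a statically created volume that the claim above can bind to by name could look roughly like this; hostPath is used purely for illustration (on Azure you would use the corresponding disk volume source instead), and pv-volumeName plus the 1Gi size are the placeholders from the claim:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volumeName
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: default
  # illustrative volume source only; replace with the real backend (e.g. an Azure disk)
  hostPath:
    path: /mnt/example-data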
I am using Google Filestore for a persistent volume in Kubernetes, but it is mounting only the root folder, not its contents.
I am using the GKE service and perform the following tasks:
Volume create:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: x.x.x.x
    path: /share
Persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume1-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: volume1
  resources:
    requests:
      storage: 3Gi
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: image1
spec:
  template:
    metadata:
      labels:
        name: image1
    spec:
      restartPolicy: Always
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: volume1-claim
      containers:
        - name: image1
          image: gcr.io/project-gcr/image:1.0
          imagePullPolicy: Always
          volumeMounts:
            - name: volume
              mountPath: "/app/data"
But it is mounting an empty folder at /app/data, not its contents. I also referred to the URL below:
https://cloud.google.com/filestore/docs/accessing-fileshares
Any help is appreciated.
Are you trying to mount an already existing persistent disk?
If so, you will need to define the disk in your configuration file.
You can find more information here.
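As a rough sketch of what that could look like for a pre-existing GCE persistent disk (the disk name is an assumption; the disk must already exist in the cluster's zone):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-disk-pv        # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-existing-disk    # assumed name of a disk created beforehand
    fsType: ext4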
What kind of data is located in /app/data? Is it on a VM? Can you give me some more information on your deployment? How are you testing/viewing your data?
The more details I have, the more specific we can be with the help we can provide.