Share Folder Between Kubernetes Nodes - Laravel

I have 3 Kubernetes nodes: one is the master and the others are workers. I deployed a Laravel application to the master node and created volumes and a storage class that point to a folder on it.
These are my YAML files for the persistent volume claim, storage class, and persistent volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qsinav-pv-www-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
Persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  hostPath:
    path: "/var/www/html/Test/qSinav-starter"
The problem is that the pods on each node mount the folder from the node they are running on, not a shared one.
Since I am running a web application with load balancing across these nodes, if I log in on node 1 and the next request goes to node 2, I am redirected to the login page because the session does not exist there.
So I need to share one folder from the master node with all worker nodes.
I don't know what I should do to achieve this, so please help me solve it.
Thanks in advance.

Yeah, that's expected and is clearly mentioned in the docs for the hostPath volume:
Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes
You need to use something like NFS, which can be shared between nodes. Your PV definition would end up looking something like this (change the IP and path as per your setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  nfs:
    path: /tmp
    server: 172.17.0.2
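For reference, the pods would then reach the share through the existing claim; a minimal sketch of a deployment mounting it (only the claim name comes from the question; the deployment name, labels, image, and mount path are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qsinav-web                          # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: qsinav-web
  template:
    metadata:
      labels:
        app: qsinav-web
    spec:
      containers:
        - name: laravel
          image: your-laravel-image:latest  # placeholder image
          volumeMounts:
            - name: www
              mountPath: /var/www/html      # placeholder mount path
      volumes:
        - name: www
          persistentVolumeClaim:
            claimName: qsinav-pv-www-claim
Since NFS can be mounted by several nodes at once, ReadWriteMany on the PV and PVC may fit this multi-node setup better than ReadWriteOnce.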

Don't use the hostPath type, because that kind of volume is only for a single-node cluster. It works well only in a single-node environment, because if the pod is assigned to another node, the pod can't get the data it needs.
So use the local type instead.
It remembers which node was used for provisioning the volume, thus making sure that a restarting pod will always find the data storage in the state it left it before the restart.
PS 1: once a node has died, the data of both hostPath and local persistent volumes on that node is lost.
PS 2: the local and hostPath types don't work with dynamic provisioning.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  local:
    path: "/var/www/html/Test/qSinav-starter"
  nodeAffinity:                    # required for local volumes
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <node-name>      # replace with the node that holds the data

Related

How to force volume mount refresh on Kubernetes Docker

I am using a Kubernetes deployment on the local Docker VM (macOS) for development.
Instead of creating a new image and killing the pod for each code-change iteration, I mount the code from the IDE into the container:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: monitor-storage
  namespace: wielder-services
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitor-pvc
  namespace: wielder-services
  labels:
    app: monitor
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
  storageClassName: monitor-storage

apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: monitor-storage
  local:
    path: /<path to code folder>
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
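(For context, the pod side is not shown above; a deployment mounting this claim would look roughly like the sketch below, where the deployment name, image, and mount path are placeholders:)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitor
  namespace: wielder-services
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monitor
  template:
    metadata:
      labels:
        app: monitor
    spec:
      containers:
        - name: monitor
          image: monitor-image:dev          # placeholder image
          volumeMounts:
            - name: code
              mountPath: /app               # placeholder mount path
      volumes:
        - name: code
          persistentVolumeClaim:
            claimName: monitor-pvc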
This worked fine, and I used to immediately see changes made in the IDE reflected in the pod shell and vice versa. Now I can't see changes from the IDE even if I delete and re-deploy the entire volume/deployment combination. It seems the directory content is copied to a different place in a cache or storage.
Following a few threads, I stopped the IDE to see if that had an effect, but changes to the files still don't appear in the mounted directory.
According to the documentation this should be possible:
https://docs.docker.com/desktop/mac/#file-sharing
Is there a way to force a remount of the host (macOS) directory?

How to mount a volume from Kubernetes on a Windows host to a Linux pod

I am trying to mount a volume within a Kubernetes pod (running Linux) to a host folder on Windows 10. The pod starts up without issue; however, the data in the volume isn't reflected within the pod, and data written in the pod isn't reflected on the Windows host.
Here is my persistent volume:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: "elastic-search-persistence"
  labels:
    volume: persistence
spec:
  storageClassName: hostpath
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /c/temp/es
Here is my persistent volume claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: "elastic-search-persistence-claim"
spec:
  storageClassName: hostpath
  volumeName: "elastic-search-persistence"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
And here is my Pod using the above persistent volumes...
apiVersion: v1
kind: Pod
metadata:
  name: windows-volume-demo
spec:
  containers:
    - name: windows-volume-demo
      image: busybox
      command: [ "sh", "-c", "sleep 1h" ]
      volumeMounts:
        - name: windows-volume-data-storage
          mountPath: /data/demo
      securityContext:
        allowPrivilegeEscalation: true
  volumes:
    - name: windows-volume-data-storage
      persistentVolumeClaim:
        claimName: elastic-search-persistence-claim
I can start everything fine, however, when I create a file in my C:\temp\es folder on my Windows host, that file doesn't show inside the /data/demo folder in the Pod. And the reverse is also true. When I exec into the Pod and create a file in the /data/demo folder, it doesn't show in the C:\temp\es folder on the Windows host.
The folder/file privileges are wide open for the C:\temp folder and the C:\temp\es folder. I also tried exec-ing into the Pod and changing the write privs for the /data/demo folder to wide open -- all with no success.
This configuration works as expected on a Mac host (changing the volume paths for the host to a Mac folder). I suspect it is a privilege/permissions issue for Windows, but I am at a loss as to how to find/fix it.
Any help would be greatly appreciated.
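(Not from the original thread, but one assumption worth checking: with Docker Desktop's WSL 2 backend, the Windows drive is typically exposed to the Kubernetes node under /run/desktop/mnt/host/, so a hostPath along these lines may be what actually maps to C:\temp\es; treat the path as an assumption to verify for your setup:)
kind: PersistentVolume
apiVersion: v1
metadata:
  name: "elastic-search-persistence"
  labels:
    volume: persistence
spec:
  storageClassName: hostpath
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    # assumption: Docker Desktop (WSL 2 backend) path mapping for C:\temp\es
    path: /run/desktop/mnt/host/c/temp/es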

ECK local persistent volume not created

I am trying to create a local persistent volume for ECK.
I am creating the persistent volume with the following definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mo/esdata"
And the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
I am getting an error on the PVC:
kubectl apply -f pvc-es.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
The PersistentVolumeClaim "elasticsearch-data-quickstart-es-default-0" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
First, have you created the storage class "manual"?
Second, the error message says the PVC is immutable after creation. Is it possible you created a PVC with the same name before? Please run kubectl get pvc and share the output. You can delete the PVC and reapply the YAML if that is the case, for example as sketched below.
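A rough sequence (the claim name is taken from the error message above; double-check it against the kubectl get pvc output before deleting anything):
kubectl get pvc
kubectl delete pvc elasticsearch-data-quickstart-es-default-0
kubectl apply -f pvc-es.yml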

Elasticsearch deployment on Kubernetes using Persistent Volume

I am trying to deploy an Elasticsearch cluster (replicas: 3) using a StatefulSet in Kubernetes and need to store the Elasticsearch data in a persistent volume (PV). Since each Elasticsearch instance has its own data folder, I need a separate data folder for each replica in the PV. I am trying to use volumeClaimTemplates and mountPath: /usr/share/elasticsearch/data, but this results in the error pod has unbound immediate PersistentVolumeClaims on the second pod. How can I achieve this using a StatefulSet?
Thanks in advance.
There is no information on how you are trying to install Elasticsearch; however, as an example please follow:
this tutorial,
helm-charts.
As per the documentation for StatefulSet limitations:
The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin.
This looks like your case: a problem with dynamic storage provisioning.
Please verify the storage class, whether the PV and PVC were created and bound together, and the storage class in volumeClaimTemplates:
volumeMounts:
  - name: "elasticsearch-master"
    mountPath: /usr/share/elasticsearch/data
volumeClaimTemplates:
  - metadata:
      name: elasticsearch-master
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: name # check whether you are using the default storage class; otherwise specify this parameter manually
      resources:
        requests:
          storage: 30Gi
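If you pre-provision manually rather than relying on dynamic provisioning, keep in mind that volumeClaimTemplates creates one PVC per replica, so three replicas need three matching PVs. A sketch of one such PV (the name, path, and backend are illustrative, not from the question):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-data-0               # repeat as es-data-1 and es-data-2 for the other replicas
spec:
  storageClassName: name        # must match the class used in volumeClaimTemplates
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /data/es-0            # illustrative path; use your real storage backend here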
Hope this helps.
If you are using dynamic provisioning, the volume gets created automatically on the backend (for example, a disk is the storage for PVs in Azure for ReadWriteOnce kinds of operations); otherwise you need to create it manually.
Once you create the volume, create a PVC in the appropriate namespace with a size matching the PV, and pass the volume name in the PVC definition; it will get bound automatically.
You can try something like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claimName
  namespace: namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: default
  volumeName: pv-volumeName
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi
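For completeness, the manually created PV this claim would bind to could look roughly like the following; the backend shown here is NFS purely as a placeholder, so substitute your actual storage (for example an Azure disk) and its parameters:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volumeName
spec:
  storageClassName: default
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:                          # placeholder backend; replace with your storage type
    server: <nfs-server-ip>
    path: /exported/path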
Please share if you still face issues

Facing issue with Filestore persistent volume in GKE

I am using Google Filestore for a persistent volume in Kubernetes, but it is mounting only the root folder, not its contents.
I am using the GKE service and performed the following tasks:
Volume creation:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: x.x.x.x
    path: /share
Persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume1-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: volume1
  resources:
    requests:
      storage: 3Gi
Deployment:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: image1
spec:
  template:
    metadata:
      labels:
        name: image1
    spec:
      restartPolicy: Always
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: volume1-claim
      containers:
        - name: image1
          image: gcr.io/project-gcr/image:1.0
          imagePullPolicy: Always
          volumeMounts:
            - name: volume
              mountPath: "/app/data"
But it is mounting an empty folder at /app/data, not its contents. I also referred to the URL below:
https://cloud.google.com/filestore/docs/accessing-fileshares
Any help is appreciated.
Are you trying to mount an already existing persistent disk?
If so, you will need to define the disk in your configuration file.
You can find more information here.
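For illustration, a PV backed by an already existing GCE persistent disk (as opposed to the Filestore NFS share above) might look roughly like this; the disk name is a placeholder:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume-from-disk
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce             # GCE persistent disks do not support ReadWriteMany for read-write use
  gcePersistentDisk:
    pdName: my-existing-disk    # placeholder: name of a pre-created GCE persistent disk
    fsType: ext4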
What kind of data is located in /app/data, and is it on a VM? Can you give me some more information on your deployment? How are you testing and viewing your data (for example, as in the sketch below)?
The more details I have, the more specific we can be with the help we can provide.
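One generic way to check what the Filestore share actually contains, independently of Kubernetes, is to mount it from a Compute Engine VM on the same network; the IP and share name below are placeholders matching the PV above:
sudo apt-get -y install nfs-common      # on a Debian/Ubuntu VM
sudo mkdir -p /mnt/filestore
sudo mount -t nfs x.x.x.x:/share /mnt/filestore
ls -la /mnt/filestore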
