How to force volume mount refresh on Kubernetes Docker - macOS

I am using a Kubernetes deployment on the local Docker Desktop VM (macOS) for development.
Instead of building a new image and killing the pod for each code-change iteration, I mount the code from the IDE into the container:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: monitor-storage
  namespace: wielder-services
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitor-pvc
  namespace: wielder-services
  labels:
    app: monitor
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
  storageClassName: monitor-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: monitor-storage
  local:
    path: /<path to code folder>
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
This worked fine, and I used to see changes made to a file in the IDE immediately in the pod shell, and vice versa. Now I can't see changes from the IDE in the pod, even if I delete and re-deploy the entire volume/deployment combination. It seems the directory content is copied to a different place in a cache or storage.
Following a few threads, I stopped the IDE to see if that had any effect, but the changes to the files still don't appear in the mounted directory.
According to the documentation this should be possible:
https://docs.docker.com/desktop/mac/#file-sharing
Is there a way to force a remount of the host (macOS) directory?
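For what it's worth, one way to tell whether the problem is in Docker Desktop's file sharing or in the local PV/PVC layer is to mount the same host directory as a plain hostPath volume in a throwaway pod and check whether IDE changes show up there. A minimal sketch, assuming the same placeholder path as above (the pod name and busybox image are arbitrary choices for this test):
apiVersion: v1
kind: Pod
metadata:
  name: mount-check
  namespace: wielder-services
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: code
          mountPath: /mnt/code   # exec into the pod and compare with the IDE folder
  volumes:
    - name: code
      hostPath:
        path: /<path to code folder>   # same host path used by the PV above
        type: Directory
If edits from the IDE appear under /mnt/code in this pod, Docker Desktop's file sharing itself is fine and the issue is with the PV/PVC binding; if they don't, the directory is probably not listed under Docker Desktop's shared paths (Preferences > Resources > File Sharing).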

Related

Share Folder Between Kubernetes Nodes

I have 3 Kubernetes nodes: one is the master and the others are workers. I deployed a Laravel application to the master node and created volumes and a storage class pointing to that folder.
These are my YAML files for the volumes and the persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qsinav-pv-www-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
Persistent volume:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  hostPath:
    path: "/var/www/html/Test/qSinav-starter"
The problem is that the pods on each node try to mount the folder from their own node.
Since I am running a web application with load balancing between these nodes, if I log in on node 1 and the next request goes to node 2, it redirects me to the login page because I don't have a session there.
So I need to share one folder from the master node with all worker nodes.
I don't know what I should do to achieve this, so please help me solve it.
Thanks in advance.
Yeah, that's expected and is clearly mentioned in the docs for the hostPath volume:
Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes
You need to use something like NFS, which can be shared between nodes. Your PV definition would end up looking something like this (change the IP and path as per your setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  nfs:
    path: /tmp
    server: 172.17.0.2
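Since the whole point is for pods on several nodes to mount the share at the same time, the access mode is usually ReadWriteMany for NFS-backed volumes rather than ReadWriteOnce. A hedged variant of the claim side, reusing the names from the question (the PV's accessModes would need the same change so the claim can bind):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qsinav-pv-www-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany   # NFS supports simultaneous mounts from multiple nodes
  resources:
    requests:
      storage: 5Gi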
Don't use the hostPath type: that kind of volume only works well in a single-node environment, because if the pod is assigned to another node it can't reach the data it needs.
Use the local type instead. It remembers which node was used for provisioning the volume, thus making sure that a restarted pod will always find the data storage in the state it was left in before the restart.
PS 1: once a node has died, the data of both hostPath and local persistent volumes on that node is lost.
PS 2: neither the local nor the hostPath type works with dynamic provisioning.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  local:
    path: "/var/www/html/Test/qSinav-starter"

How to mount a volume from Kubernetes on a Windows host to a Linux pod

I am trying to mount a volume within a Kubernetes pod (running Linux) to a host folder on Windows 10. The pod starts up without issue; however, data in the volume isn't reflected within the pod, and data written in the pod isn't reflected on the Windows host.
Here is my persistent volume:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: "elastic-search-persistence"
  labels:
    volume: persistence
spec:
  storageClassName: hostpath
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /c/temp/es
Here is my persistent claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: "elastic-search-persistence-claim"
spec:
  storageClassName: hostpath
  volumeName: "elastic-search-persistence"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
And here is my Pod using the above persistent volumes...
apiVersion: v1
kind: Pod
metadata:
  name: windows-volume-demo
spec:
  containers:
    - name: windows-volume-demo
      image: busybox
      command: [ "sh", "-c", "sleep 1h" ]
      volumeMounts:
        - name: windows-volume-data-storage
          mountPath: /data/demo
      securityContext:
        allowPrivilegeEscalation: true
  volumes:
    - name: windows-volume-data-storage
      persistentVolumeClaim:
        claimName: elastic-search-persistence-claim
I can start everything fine; however, when I create a file in my C:\temp\es folder on the Windows host, that file doesn't show up inside the /data/demo folder in the pod. The reverse is also true: when I exec into the pod and create a file in the /data/demo folder, it doesn't show up in the C:\temp\es folder on the Windows host.
The folder/file privileges are wide open for the C:\temp and C:\temp\es folders. I also tried exec-ing into the pod and opening up write permissions on the /data/demo folder, all with no success.
This configuration works as expected on a Mac host (changing the host volume path to a Mac folder). I suspect it is a privilege/permissions issue on Windows, but I am at a loss as to how to find and fix it.
Any help would be greatly appreciated.
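One thing worth checking, as an assumption to verify rather than a confirmed fix: Docker Desktop's Kubernetes node on Windows usually does not see the C: drive as /c/.... With the WSL 2 backend the host drives are typically exposed under /run/desktop/mnt/host/, so a PV pointing at C:\temp\es would look roughly like this (the exact prefix depends on the Docker Desktop backend and version):
kind: PersistentVolume
apiVersion: v1
metadata:
  name: "elastic-search-persistence"
  labels:
    volume: persistence
spec:
  storageClassName: hostpath
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /run/desktop/mnt/host/c/temp/es   # assumed WSL 2 mapping of C:\temp\es; verify on your setup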

pod has unbound immediate PersistentVolumeClaims ECK (Elasticsearch on Kubernetes)

I am trying to deploy Elasticsearch on Kubernetes (https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html) on a local minikube cluster. I have already installed the operator.
When I apply the Elasticsearch cluster below, I get the following pod error:
running "VolumeBinding" filter plugin for pod "data-es-es-default-0":
pod has unbound immediate PersistentVolumeClaims
volume/claim:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
elastic.yml
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: data-es
spec:
  version: 7.4.2
  nodeSets:
    - name: default
      count: 2
      volumeClaimTemplates:
        - metadata:
            name: elasticsearch-data
          spec:
            accessModes:
              - ReadWriteOnce
            storageClassName: standard
            resources:
              requests:
                storage: 10Gi
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.store.allow_mmap: false
        xpack.security.authc.realms:
          native:
            native1:
              order: 1
---
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: data-kibana
spec:
  version: 7.4.2
  count: 1
  elasticsearchRef:
    name: data-es
kubectl get pvc
pod has unbound immediate PersistentVolumeClaims
The above error means there is no persistentVolume that can be bound to the PersistentVolumeClaim. By default, the local-storage storage class does not create a persistentVolume dynamically.
To use the dynamic provisioning mechanism of the local-storage storage class, you need to configure the class so that it can provision persistentVolumes. Check this discussion: Kubernetes: What is the best practice for create dynamic local volume to auto assign PVs for PVCs?.
Alternatively, without the dynamic provisioning mechanism of a storage class, you need to create a persistentVolume using hostPath that can be bound to the PersistentVolumeClaim. This is not a recommended solution for production usage, though; check this guide here.
The PersistentVolumeClaims will be created automatically based on the volumeClaimTemplates in the Elasticsearch YAML, so you should not create a PersistentVolumeClaim yourself.
Since the nodeSets count is 2, two PersistentVolumeClaims are created, so you need to create two persistentVolumes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data1
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data2
  labels:
    type: local
spec:
  storageClassName: standard
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

ECK local persistent volume not created

I am trying to create a local persistent volume for ECK.
I am creating the persistent volume with the following definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mo/esdata"
And PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
I am getting an error on the PVC:
kubectl apply -f pvc-es.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
The PersistentVolumeClaim "elasticsearch-data-quickstart-es-default-0" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
First, have you created the storage class "manual"?
Second, the error message says the PVC is immutable after creation. Is it possible you created a PVC with the same name before? Please run kubectl get pvc and show the output; if that is the case, you can delete the PVC and re-apply the YAML.
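If the "manual" class really doesn't exist yet, a minimal sketch of a no-provisioner StorageClass with that name would be enough for the PV and PVC to reference (note this does not add dynamic provisioning):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Also note that the error mentions a claim named elasticsearch-data-quickstart-es-default-0, which looks like the claim generated by ECK's volumeClaimTemplates rather than the elasticsearch-data claim defined above, so kubectl get pvc may well show an older claim with that generated name.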

Facing issue with Filestore persistent volume in GKE

I am using Google Filestore for persistent volumes in Kubernetes, but it is mounting only the root folder, not its contents.
I am using the GKE service and perform the following tasks:
volume-create:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: x.x.x.x
    path: /share
Persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume1-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: volume1
  resources:
    requests:
      storage: 3Gi
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: image1
spec:
  template:
    metadata:
      labels:
        name: image1
    spec:
      restartPolicy: Always
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: volume1-claim
      containers:
        - name: image1
          image: gcr.io/project-gcr/image:1.0
          imagePullPolicy: Always
          volumeMounts:
            - name: volume
              mountPath: "/app/data"
But it mounts an empty folder at /app/data, not its contents. I also referred to the URL below:
https://cloud.google.com/filestore/docs/accessing-fileshares
Any help is appreciated.
Are you trying to mount an already existing persistent disk?
If so, you will need to define the disk in your configuration file.
You can find more information here.
What kind of data is located in /app/data? Is it on a VM? Can you give me some more information on your deployment? How are you testing/viewing your data?
The more details I have, the more specific we can be with the help we can provide.
