How to mount a volume from Kubernetes on a Windows host to a Linux pod

I am trying to mount a volume within a Kubernetes pod (running Linux) to a host folder on Windows 10. The pod starts up without issue; however, data in the host folder isn't reflected inside the pod, and data written inside the pod isn't reflected on the Windows host.
Here is my persistent volume:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: elastic-search-persistence
  labels:
    volume: persistence
spec:
  storageClassName: hostpath
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /c/temp/es
Here is my persistent claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: elastic-search-persistence-claim
spec:
  storageClassName: hostpath
  volumeName: elastic-search-persistence
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
And here is my Pod using the above persistent volumes...
apiVersion: v1
kind: Pod
metadata:
  name: windows-volume-demo
spec:
  containers:
  - name: windows-volume-demo
    image: busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: windows-volume-data-storage
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: true
  volumes:
  - name: windows-volume-data-storage
    persistentVolumeClaim:
      claimName: elastic-search-persistence-claim
I can start everything fine, however, when I create a file in my C:\temp\es folder on my Windows host, that file doesn't show inside the /data/demo folder in the Pod. And the reverse is also true. When I exec into the Pod and create a file in the /data/demo folder, it doesn't show in the C:\temp\es folder on the Windows host.
The folder/file privileges are wide open for the C:\temp folder and the C:\temp\es folder. I also tried exec-ing into the Pod and changing the write privs for the /data/demo folder to wide open -- all with no success.
This configuration works as expected on a Mac host (changing the volume paths for the host to a Mac folder). I suspect it is a privilege/permissions issue for Windows, but I am at a loss as to how to find/fix it.
Any help would be greatly appreciated.
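For context, when Kubernetes runs under Docker Desktop on Windows (which the hostpath storage class suggests), the node is a VM, so a hostPath like /c/temp/es is resolved inside that VM rather than on the Windows filesystem. With the WSL 2 backend the Windows drives are typically exposed to the node under /run/desktop/mnt/host/, so a hedged sketch of the PV would be (the exact mount prefix depends on the Docker Desktop backend):

```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: elastic-search-persistence
spec:
  storageClassName: hostpath
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    # On the WSL 2 backend, C:\temp\es is usually visible to the
    # Kubernetes node as /run/desktop/mnt/host/c/temp/es; older
    # Hyper-V backends exposed it as /host_mnt/c/temp/es instead.
    path: /run/desktop/mnt/host/c/temp/es
```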

Related

How to force volume mount refresh on Kubernetes docker

I am using the Kubernetes deployment on the local Docker VM (macOS) to develop. Instead of building a new image and killing the pod for each code-change iteration, I mount the code from the IDE into the container:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: monitor-storage
  namespace: wielder-services
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitor-pvc
  namespace: wielder-services
  labels:
    app: monitor
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
  storageClassName: monitor-storage
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv
spec:
  capacity:
    storage: 200Mi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: monitor-storage
  local:
    path: /<path to code folder>
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop
This worked fine, and I used to see changes to a file in the IDE immediately in the pod shell and vice versa. Now I can't see the changes in the pod even if I delete and re-deploy the entire volume/deployment combination. It seems the directory content is copied to a different place in cache or storage.
Following a few threads, I stopped the IDE to see if that had any effect, but changes to the files still don't appear in the mounted directory.
According to the documentation this should be possible:
https://docs.docker.com/desktop/mac/#file-sharing
Is there a way to force a remount of the host (macOS) directory?

Share Folder Between kubernetes Nodes

I have 3 Kubernetes nodes: one of them is the master and the others are workers. I deployed a Laravel application to the master node and created volumes and a storage class which points to that folder.
These are my YAML files to create volumes and the persistent volume claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qsinav-pv-www-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  hostPath:
    path: "/var/www/html/Test/qSinav-starter"
The problem is that the pods on each node mount the folder on their own node.
Since I am running a web application with load balancing between these nodes, if I log in on node 1 and the next request goes to node 2, it redirects me to the login page because I don't have a session there.
So I need to share 1 folder from a master node with all worker nodes.
I don't know what I should do to achieve my goal, so please help me solve it.
Thanks in advance
Yeah, that's expected and is clearly mentioned in the docs for the hostPath volume:
"Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes."
You need to use something like NFS, which can be shared between nodes. Your PV definition would end up looking something like this (change the IP and path as per your setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  nfs:
    path: /tmp
    server: 172.17.0.2
Don't use the hostPath type, because that kind of volume is only suitable for a single-node cluster: if the pod is assigned to another node, it can't reach the data it needs.
Use the local type instead. It remembers which node was used for provisioning the volume, making sure that a restarting pod will always find the data storage in the state it was left in before the restart.
PS 1: once a node has died, the data of both hostPath and local persistent volumes on that node is lost.
PS 2: neither the local nor the hostPath type works with dynamic provisioning.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  local:
    path: "/var/www/html/Test/qSinav-starter"
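One caveat with this sketch: the API server requires a local PersistentVolume to carry node affinity, so the volume is pinned to the node that actually holds the directory. A minimal addition to the spec above (the hostname value is an assumption; substitute the real node name):

```yaml
# extra fields under spec: of the local PV above
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - worker-1   # hypothetical node name, replace with your own
```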

Encrypt the elasticsearch data in k8s

I have installed the Elasticsearch image in Kubernetes on a PV created using rook-ceph distributed storage.
When installing rook-ceph, encryption mode was enabled.
# PVC for the Elasticsearch pod
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: core-pv-claim
  namespace: test
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: rook-cephfs
# volume mounts for the Elasticsearch pod
volumeMounts:
- name: persistent-storage
  mountPath: /usr/local/elastic/data
volumes:
- name: persistent-storage
  persistentVolumeClaim:
    claimName: core-pv-claim
The pod was deployed successfully and the data is being saved in /usr/local/elastic/data.
When I logged into the pod and changed to that path, I could see the data at rest in /usr/local/elastic/data without any encryption:
#kubectl exec -it elastic-pod12 bash
#ls /usr/local/elastic/data
#your data
Is there a way to encrypt this data as well, or to restrict users from accessing it via kubectl?
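For the second half of the question, one common way to keep users from reading the data through kubectl is an RBAC Role that allows reading pods but deliberately omits the pods/exec subresource. A minimal sketch, assuming the test namespace and a hypothetical role name:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader-no-exec   # hypothetical name
  namespace: test
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
  # "pods/exec" is intentionally absent, so "kubectl exec" into the
  # Elasticsearch pod is denied for subjects bound to this Role
```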

Facing issue with Filestore persistent volume in GKE

I am using Google Filestore for a persistent volume in Kubernetes, but it mounts only the root folder, not its contents.
I am using GKE service and perform the following tasks:
volume-create:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: volume1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: x.x.x.x
    path: /share
Persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: volume1-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""
  volumeName: volume1
  resources:
    requests:
      storage: 3Gi
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: image1
spec:
  template:
    metadata:
      labels:
        name: image1
    spec:
      restartPolicy: Always
      volumes:
      - name: volume
        persistentVolumeClaim:
          claimName: volume1-claim
      containers:
      - name: image1
        image: gcr.io/project-gcr/image:1.0
        imagePullPolicy: Always
        volumeMounts:
        - name: volume
          mountPath: "/app/data"
But it mounts an empty /app/data folder, not the share's contents. I also referred to the URL below:
https://cloud.google.com/filestore/docs/accessing-fileshares
Any help is appreciated.
Are you trying to mount an already existing persistent disk? If so, you will need to define the disk in your configuration file. You can find more information here.
What kind of data is located in /app/data? Is it on a VM? Can you give me some more information on your deployment? How are you testing and viewing your data?
The more details I have, the more specific the help we can provide.

Rebuild and Rerun Go application in Minikube

I'm building a microservice in Golang which is going to live in a Kubernetes cluster. I'm developing it and using Minikube to run a copy of the cluster locally.
The problem I ran into is that if I run my application inside the container using go run main.go, I need to kill the pod for it to detect changes and update what is running.
I tried using a watcher so that the binary is rebuilt on every save and the binary runs inside the pod, but even after compiling the new version, Minikube keeps running the old one.
Any suggestion?
Here is my deployment file for running the MS locally:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: pokedex
  name: pokedex
spec:
  template:
    metadata:
      labels:
        name: pokedex
    spec:
      volumes:
      - name: source
        hostPath:
          path: *folder where source resides*
      containers:
      - name: pokedex
        image: golang:1.8.5-jessie
        workingDir: *folder where source resides*
        command: ["./pokedex"] # Here I tried both the binary and go run main.go
        ports:
        - containerPort: 8080
          name: go-server
          protocol: TCP
        volumeMounts:
        - name: source
          mountPath: /source
        env:
        - name: GOPATH
          value: /source
