Rebuild and rerun a Go application in Minikube

I'm building a microservice in Go that will live in a Kubernetes cluster, and I'm using Minikube to run a copy of the cluster locally while developing.
The problem I ran into is that if I run my application inside the container with go run main.go, I need to kill the pod for it to detect changes and update what is running.
I tried using a watcher so that the binary is recompiled on every save while the binary runs inside the pod, but even after the new version is compiled, Minikube keeps running the old one.
Any suggestions?
Here is my deployment file for running the microservice locally:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: pokedex
  name: pokedex
spec:
  template:
    metadata:
      labels:
        name: pokedex
    spec:
      volumes:
        - name: source
          hostPath:
            path: *folder where source resides*
      containers:
        - name: pokedex
          image: golang:1.8.5-jessie
          workingDir: *folder where source resides*
          command: ["./pokedex"] # Here I tried both the binary and go run main.go
          ports:
            - containerPort: 8080
              name: go-server
              protocol: TCP
          volumeMounts:
            - name: source
              mountPath: /source
          env:
            - name: GOPATH
              value: /source
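One possible workaround, sketched here as an assumption rather than a tested fix: instead of launching the binary directly, wrap it in a small shell loop that polls the binary's modification time and restarts the process whenever the watcher writes a new build. This assumes a POSIX shell and GNU stat in the image (both present in golang:1.8.5-jessie):

command: ["/bin/sh", "-c"]
args:
  - |
    # Restart ./pokedex every time the watcher replaces the binary.
    while true; do
      ./pokedex &
      PID=$!
      LAST=$(stat -c %Y ./pokedex)
      # Poll the binary's modification time once per second.
      while [ "$(stat -c %Y ./pokedex)" = "$LAST" ]; do
        sleep 1
      done
      kill "$PID"
      wait "$PID" 2>/dev/null
    done

This keeps the pod itself alive; only the process inside it is restarted, so no kubectl delete pod is needed after each rebuild.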

Related

Kibana with plugins running on Kubernetes

I'm trying to install Kibana with a plugin via the initContainers functionality, but the pod doesn't seem to come up with the plugin in it.
The pod gets created and Kibana works perfectly, but the plugin is not installed using the YAML below.
initContainers Documentation
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.11.2
  count: 1
  elasticsearchRef:
    name: quickstart
  podTemplate:
    spec:
      initContainers:
        - name: install-plugins
          command:
            - sh
            - -c
            - |
              bin/kibana-plugin install https://github.com/fbaligand/kibana-enhanced-table/releases/download/v1.11.2/enhanced-table-1.11.2_7.11.2.zip
Got Kibana working with plugins by using a custom container image
Dockerfile:
FROM docker.elastic.co/kibana/kibana:7.11.2
RUN /usr/share/kibana/bin/kibana-plugin install https://github.com/fbaligand/kibana-enhanced-table/releases/download/v1.11.2/enhanced-table-1.11.2_7.11.2.zip
RUN /usr/share/kibana/bin/kibana --optimize
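With that Dockerfile, the custom image referenced in the manifest below could be built and pushed along these lines (the registry path is the author's placeholder):

docker build -t my-container-path/kibana-with-plugins:7.11.2 .
docker push my-container-path/kibana-with-plugins:7.11.2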
YAML:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.11.2
  image: my-container-path/kibana-with-plugins:7.11.2
  count: 1
  elasticsearchRef:
    name: quickstart
Building your own image would certainly work, though it can be avoided in this case.
Your initContainer is pretty much what you were looking for, with one exception: you need to add an emptyDir volume and mount it into both your initContainer and the regular Kibana container, sharing the plugins you install during init.
Although I'm not familiar with the Kibana CR, here's how I would do this with the official elastic.co images:
spec:
  template:
    spec:
      containers:
        - name: kibana
          image: official-kibana:x.y.z
          securityContext:
            runAsUser: 1000
          volumeMounts:
            - mountPath: /usr/share/kibana/plugins
              name: plugins
      initContainers:
        - command:
            - /bin/bash
            - -c
            - |
              set -xe
              if ! ./bin/kibana-plugin list | grep prometheus-exporter >/dev/null; then
                  if ! ./bin/kibana-plugin install "https://github.com/pjhampton/kibana-prometheus-exporter/releases/download/7.12.1/kibanaPrometheusExporter-7.12.1.zip"; then
                      echo WARNING: failed to install Kibana exporter plugin
                  fi
              fi
          name: init
          image: official-kibana:x.y.z
          securityContext:
            runAsUser: 1000
          volumeMounts:
            - mountPath: /usr/share/kibana/plugins
              name: plugins
      volumes:
        - emptyDir: {}
          name: plugins
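To confirm the init step actually populated the shared volume, one can list the installed plugins from the running container; a sketch, with the pod name as a placeholder:

kubectl exec <kibana-pod> -- ./bin/kibana-plugin list

If the emptyDir wiring is correct, the plugin installed by the initContainer should appear in the output.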

Kubernetes: how to correctly mount a Windows path in a WSL2-backed environment

I have a local image that runs fine this way:
docker run -p 8080:8080 -v C:\Users\moritz\Downloads\1\imageService\examples1:/images -v C:\Users\moritz\entwicklung\projekte\imageCluster\logs:/logs imageservice
Now I want this to run as a Kubernetes deployment (using the built-in Kubernetes v1.19.7 from Docker for Windows):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: image-service
spec:
  selector:
    matchLabels:
      app: image-service
  template:
    metadata:
      labels:
        app: image-service
    spec:
      containers:
        - name: image-service
          image: "imageservice"
          resources:
            limits:
              cpu: "0.9"
              memory: "1Gi"
          ports:
            - name: http
              containerPort: 8080
          volumeMounts:
            - mountPath: /images
              name: image-volume
            - mountPath: /logs
              name: log-volume
      volumes:
        - name: image-volume
          hostPath:
            path: "c:\\Users\\moritz\\Downloads\\1\\imageService\\examples1"
            type: Directory
        - name: log-volume
          hostPath:
            path: /mnt/c/Users/moritz/entwicklung/projekte/imageCluster/logs
            type: Directory
As you can see, I tried different ways to set up my host paths on the Windows machine, but I always get:
Warning FailedMount 0s (x4 over 4s) kubelet MountVolume.SetUp failed for volume "log-volume" : hostPath type check failed: /mnt/c/Users/moritz/entwicklung/projekte/imageCluster/logs is not a directory
Warning FailedMount 0s (x4 over 4s) kubelet MountVolume.SetUp failed for volume "image-volume" : hostPath type check failed: c:\Users\moritz\Downloads\1\imageService\examples1 is not a directory
I also tried other variants (for both):
C:\Users\moritz\entwicklung\projekte\imageCluster\logs
C:/Users/moritz/entwicklung/projekte/imageCluster/logs
So how do I correctly set up these Windows host paths? (The next step would be to set them via environment variables.)
Little update:
Removing type: Directory gets rid of the error and the pod starts, but the mounts don't work: if I look at /images inside the container, I don't see the images I have on my host, and no logs show up in the host log folder, even though /logs inside the container contains the expected files.
In the meantime I also tried the following, to no avail:
/host_mnt/c/...
/C/Users/...
//C/Users/...
As mentioned here, you can use the hostPath below to make it work on WSL2.

# C:\someDir\volumeDir
hostPath:
  path: /run/desktop/mnt/host/c/someDir/volumeDir
  type: DirectoryOrCreate
There is also an example you can use.
apiVersion: v1
kind: Pod
metadata:
  name: test-localpc
spec:
  containers:
    - name: test-webserver
      image: ubuntu:latest
      command: ["/bin/sh"]
      args: ["-c", "apt-get update && apt-get install curl -y && sleep 600"]
      volumeMounts:
        - mountPath: /run/desktop/mnt/host/c/aaa
          name: mydir
        - mountPath: /run/desktop/mnt/host/c/aaa/1.txt
          name: myfile
  volumes:
    - name: mydir
      hostPath:
        # Ensure the file directory is created.
        path: /run/desktop/mnt/host/c/aaa
        type: DirectoryOrCreate
    - name: myfile
      hostPath:
        path: /run/desktop/mnt/host/c/aaa/1.txt
        type: FileOrCreate
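A quick sanity check for the workaround (pod name taken from the example above): drop a file into C:\aaa on the Windows side and list the mount from inside the pod:

kubectl exec test-localpc -- ls /run/desktop/mnt/host/c/aaa

If the mount works, the file created on the host shows up in the listing.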

How to mount a volume from Kubernetes on a Windows host to a Linux pod

I am trying to mount a volume in a Kubernetes pod (running Linux) to a host folder on Windows 10. The pod starts up without issue; however, data in the volume isn't reflected inside the pod, and data created in the pod isn't reflected on the Windows host.
Here is my persistent volume:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: "elastic-search-persistence"
  labels:
    volume: persistence
spec:
  storageClassName: hostpath
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /c/temp/es
Here is my persistent claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: "elastic-search-persistence-claim"
spec:
storageClassName: hostpath
volumeName: "elastic-search-persistence"
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
And here is my Pod using the above persistent volumes...
apiVersion: v1
kind: Pod
metadata:
  name: windows-volume-demo
spec:
  containers:
    - name: windows-volume-demo
      image: busybox
      command: ["sh", "-c", "sleep 1h"]
      volumeMounts:
        - name: windows-volume-data-storage
          mountPath: /data/demo
      securityContext:
        allowPrivilegeEscalation: true
  volumes:
    - name: windows-volume-data-storage
      persistentVolumeClaim:
        claimName: elastic-search-persistence-claim
Everything starts fine; however, when I create a file in my C:\temp\es folder on the Windows host, that file doesn't show up inside the /data/demo folder in the pod, and the reverse is also true: when I exec into the pod and create a file in /data/demo, it doesn't show up in C:\temp\es on the host.
The folder/file privileges are wide open for the C:\temp folder and the C:\temp\es folder. I also tried exec-ing into the pod and opening up the write permissions on the /data/demo folder, all with no success.
This configuration works as expected on a Mac host (changing the volume paths for the host to a Mac folder). I suspect it is a privilege/permissions issue for Windows, but I am at a loss as to how to find/fix it.
Any help would be greatly appreciated.
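One thing worth trying, offered as a sketch rather than a confirmed fix: if this cluster runs on Docker Desktop with the WSL2 backend, the /run/desktop/mnt/host prefix from the neighboring WSL2 answers may apply to the PersistentVolume here as well:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: "elastic-search-persistence"
  labels:
    volume: persistence
spec:
  storageClassName: hostpath
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    # C:\temp\es as seen from the WSL2 VM that runs the kubelet
    path: /run/desktop/mnt/host/c/temp/es
    type: DirectoryOrCreate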

Volume mounts not working in Kubernetes with WSL 2 and Docker

I am unable to properly mount volumes using hostPath within Kubernetes running in Docker and WSL 2. This seems to be a WSL 2 issue when mounting volumes in Kubernetes running in Docker. Does anyone know how to fix this?
Here are the steps:
1. Deploy a debug build of my app to Kubernetes.
2. Attach Visual Studio Code using the Kubernetes extension.
3. Navigate to the project folder for my application that was attached using the volume mount. <= Problem right here
When you go and look at the volume mount, nothing is there.
C:\Windows\System32>wsl -l -v
  NAME                   STATE      VERSION
  Ubuntu                 Running    2
  docker-desktop-data    Running    2
  docker-desktop         Running    2
Docker Desktop v2.3.0.3
Kubernetes v1.16.5
Visual Studio Code v1.46.1
====================================================================
Dockerfile
====================================================================
#
# Base image for deploying and running based on Ubuntu
#
# Support ASP.NET and does not include .NET SDK or NodeJs
#
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-bionic AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
#
# Base image for building .NET based on Ubuntu
#
# 1. Uses .NET SDK image as the starting point
# 2. Restore NuGet packages
# 3. Build the ASP.NET Core application
#
# Destination is /app/build which is copied to /app later on
#
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-bionic AS build
WORKDIR /src
COPY ["myapp.csproj", "./"]
RUN dotnet restore "./myapp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "myapp.csproj" -c Release -o /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-bionic AS debug
RUN curl --silent --location https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get install --yes nodejs
ENTRYPOINT [ "sleep", "infinity" ]
#
# Base image for building React based on Node/Ubuntu
#
# Destination is /app/ClientApp/build which is copied to /clientapp later
#
# NOTE: npm run build puts the output in the build directory
#
FROM node:12.18-buster-slim AS clientbuild
WORKDIR /src
COPY ./ClientApp /app/ClientApp
WORKDIR "/app/ClientApp"
RUN npm install
RUN npm run build
#
# Copy clientbuild:/app/ClientApp to /app/ClientApp
#
# Copy build:/app to /app
#
FROM base as final
WORKDIR /app/ClientApp
COPY --from=clientbuild /app/ClientApp .
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "myapp.dll"]
====================================================================
Kubernetes Manifest
====================================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: localhost:6000/myapp
          ports:
            - containerPort: 5001
          securityContext:
            privileged: true
          volumeMounts:
            - mountPath: /local
              name: local
          resources: {}
      volumes:
        - name: local
          hostPath:
            path: /C/dev/myapp
            type: DirectoryOrCreate
      hostname: myapp
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 5001
      targetPort: 5001
  selector:
    app: myapp
According to the following thread, hostPath volumes are not officially supported for WSL2 yet. It suggests a workaround, though I had trouble getting it to work. I have found that prepending /run/desktop/mnt/host/c seems to work for me.
# C:\someDir\volumeDir
hostPath:
  path: /run/desktop/mnt/host/c/someDir/volumeDir
  type: DirectoryOrCreate
Thread Source: https://github.com/docker/for-win/issues/5325
Suggested workaround from thread: https://github.com/docker/for-win/issues/5325#issuecomment-567594291
Using #RyanDarnell's excellent answer above, here is what worked for me.
Objective: get message-db with a password via docker build --secret, running in local docker-desktop Kubernetes with a StatefulSet and a StorageClass, deployed using Skaffold.
$ kubectl get nodes -o wide
NAME             STATUS   ROLES                  AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                   CONTAINER-RUNTIME
docker-desktop   Ready    control-plane,master   21h   v1.21.1   <internalip>   <none>        Docker Desktop   5.4.72-microsoft-standard-WSL2   docker://20.10.7
#./k8s/storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  labels:
    app: postgres-database
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
#./k8s/persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv1
  labels:
    app: postgres-database
spec:
  capacity:
    storage: 128Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /run/desktop/mnt/host/c/kubernetes-mount-path # created this folder at C:\kubernetes-mount-path
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
#./k8s/stateful-set.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-database
spec:
  selector:
    matchLabels:
      app: postgres-database
  serviceName: postgres-service
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-database
    spec:
      containers:
        - name: message-db-container
          image: message-db-test
          volumeMounts:
            - name: postgres-disk
              mountPath: /var/lib/postgresql/data
          env:
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
            - name: POSTGRES_PASSWORD
              value: <postgres-password>
  volumeClaimTemplates:
    - metadata:
        name: postgres-disk
        labels:
          app: postgres-database
      spec:
        selector:
          matchLabels:
            app: postgres-database
        storageClassName: local-storage
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 128Mi
#./k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-loadbalancer
spec:
  selector:
    app: postgres-database
  type: LoadBalancer
  ports:
    - port: 5432
      targetPort: 5432
#./Dockerfile
# syntax = docker/dockerfile:1.3
FROM postgres:13.3-alpine3.14
RUN apk add --no-cache curl tar
RUN mkdir -p /usr/src/eventide \
&& curl -L https://github.com/message-db/message-db/tarball/v1.2.6 -o /usr/src/eventide/message-db.tgz
RUN tar -xf /usr/src/eventide/message-db.tgz --directory /usr/src/eventide
# change message_store login password
RUN --mount=type=secret,id=message-db-pass sed -i "s/WITH LOGIN/WITH LOGIN ENCRYPTED PASSWORD '$(cat /run/secrets/message-db-pass)'/g" /usr/src/eventide/message-db-message-db-759a4f3/database/roles/message-store.sql
RUN echo -e "#!/bin/sh\ncd /usr/src/eventide/message-db-message-db-759a4f3/database\n./install.sh" > /docker-entrypoint-initdb.d/rundbscripts.sh
RUN docker-entrypoint.sh postgres --version
ENTRYPOINT docker-entrypoint.sh postgres
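For reference, the same secret-mounting build can be run without Skaffold by invoking BuildKit directly; the tag mirrors the Skaffold artifact name:

DOCKER_BUILDKIT=1 docker build --secret id=message-db-pass,src=secret.txt -t message-db-test .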
#./secret.txt
<postgres-password>
#./skaffold.yaml
kind: Config
apiVersion: skaffold/v2beta20
build:
  artifacts:
    - image: message-db-test
      context: .
      docker:
        secret:
          id: message-db-pass
          src: secret.txt
  local:
    useBuildkit: true
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml
Run skaffold debug -v debug to start everything up.

Kubernetes application run error when using env from ConfigMaps

I have an application written in Go which reads its configuration from a config.toml file.
The config.toml file contains key/value pairs like this:
Server="mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
Database="nrfdb"
NRFAddrPort = ":9090"
In my application I read all the variables from the .toml file like this:
// Config represents database and server credentials
type Config struct {
    Server      string
    Database    string
    NRFAddrPort string
}

var NRFAddrPort string

// Read and parse the configuration file
func (c *Config) Read() {
    if _, err := toml.DecodeFile("config.toml", &c); err != nil {
        log.Print("Cannot parse .toml configuration file")
    }
    NRFAddrPort = c.NRFAddrPort
}
I would like to deploy my application in my Kubernetes cluster (3 VMs: a master and 2 worker nodes). After building a Docker image and pushing it to Docker Hub, when I deploy the application using a ConfigMap to supply the variables, it runs for a few seconds and then errors out.
It seems the application cannot read the values from the ConfigMap. Below are my ConfigMap and the deployment.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nrf-config
  namespace: default
data:
  config-toml: |
    Server="mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
    Database="nrfdb"
    NRFAddrPort = ":9090"
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nrf-instance
spec:
  selector:
    matchLabels:
      app: nrf-instance
  replicas: 1
  template:
    metadata:
      labels:
        app: nrf-instance
        version: "1.0"
    spec:
      nodeName: k8s-worker-node2
      containers:
        - name: nrf-instance
          image: grego/appapi:1.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9090
          volumeMounts:
            - name: config-volume
              mountPath: /home/ubuntu/appapi
      volumes:
        - name: config-volume
          configMap:
            name: nrf-config
Also, one thing I do not understand is the mountPath in volumeMounts. Do I need to copy config.toml to this mountPath?
When I hard-code these variables in my application and deploy the Docker image to Kubernetes, it runs without error.
My problem now is how to pass these variables to my application using a Kubernetes ConfigMap, or any other method, so it can run in my Kubernetes cluster instead of having them hard-coded. Any help?
Also attached is my Dockerfile:
# Dockerfile References: https://docs.docker.com/engine/reference/builder/
# Start from the latest golang base image
FROM golang:latest as builder
# Set the Current Working Directory inside the container
WORKDIR /app
# Copy go mod and sum files
COPY go.mod go.sum ./
# Download all dependencies. Dependencies will be cached if the go.mod and go.sum files are not changed
RUN go mod download
# Copy the source from the current directory to the Working Directory inside the container
COPY . .
# Build the Go app
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
######## Start a new stage from scratch #######
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# Copy the Pre-built binary file from the previous stage
COPY --from=builder /app/main .
# Expose port 9090 to the outside world
EXPOSE 9090
# Command to run the executable
CMD ["./main"]
Is there any problem with its content?
I also tried passing the values as env variables:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nrf-instance
spec:
  selector:
    matchLabels:
      app: nrf-instance
  replicas: 1
  template:
    metadata:
      labels:
        app: nrf-instance
        version: "1.0"
    spec:
      nodeName: k8s-worker-node2
      containers:
        - name: nrf-instance
          image: grego/appapi:1.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9090
          env:
            - name: Server
              valueFrom:
                configMapKeyRef:
                  name: nrf-config
                  key: config-toml
            - name: Database
              valueFrom:
                configMapKeyRef:
                  name: nrf-config
                  key: config-toml
            - name: NRFAddrPort
              valueFrom:
                configMapKeyRef:
                  name: nrf-config
                  key: config-toml
You cannot pass those values as separate environment variables as-is, because the whole file is read as one text blob instead of separate key: value pairs. The current ConfigMap looks like this:
Data
====
config.toml:
----
Server="mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo"
Database="nrfdb"
NRFAddrPort = ":9090"
To pass them as environment variables, you have to modify the ConfigMap so those values are stored as separate key: value pairs:
kind: ConfigMap
apiVersion: v1
metadata:
  name: example-configmap
data:
  Server: mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
  Database: nrfdb
  NRFAddrPort: :9090
This way those values will be separated and can be passed as env variables:
Data
====
Database:
----
nrfdb
NRFAddrPort:
----
:9090
Server:
----
mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
When you pass it to pod:
[...]
spec:
containers:
- name: nrf-instance
image: nginx
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9090
envFrom:
- configMapRef:
name: example-configmap
You can see that it was passed correctly, for example by executing the env command inside the pod:
kubectl exec -it env-6fb4b557d7-zw84w -- env
NRFAddrPort=:9090
Server=mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
Database=nrfdb
The values are read as separate env variables, for example Server value:
kubectl exec -it env-6fb4b557d7-zw84w -- printenv Server
mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo
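On the application side, the Go code could then skip the .toml parsing entirely and read these variables directly. A minimal sketch, assuming the env-variable ConfigMap above; field and variable names mirror the original Config struct:

package main

import (
    "fmt"
    "os"
)

// Config mirrors the fields previously read from config.toml.
type Config struct {
    Server      string
    Database    string
    NRFAddrPort string
}

// FromEnv fills the config from the environment variables
// injected via envFrom (Server, Database, NRFAddrPort).
func FromEnv() Config {
    return Config{
        Server:      os.Getenv("Server"),
        Database:    os.Getenv("Database"),
        NRFAddrPort: os.Getenv("NRFAddrPort"),
    }
}

func main() {
    c := FromEnv()
    fmt.Printf("%+v\n", c)
}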
What you currently have will create a file in the mount point for each key in your ConfigMap. Your code is looking for config.toml, but the key is config-toml, so it isn't finding it.
If you want to keep the key as-is, you can control which keys are written where (within the mount) like this:
volumes:
  - name: config-volume
    configMap:
      name: nrf-config
      items:
        - key: config-toml
          path: config.toml
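With that items mapping in place, the ConfigMap key is written into the mount as config.toml, so the existing toml.DecodeFile call can find it. One way to confirm, with the pod name as a placeholder:

kubectl exec <nrf-instance-pod> -- cat /home/ubuntu/appapi/config.toml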
