Why can I not TTY into the node:17-alpine image the way I can the NGINX one? - bash

I have the following yml...
apiVersion: v1
kind: Pod
metadata:
  name: simple-server
  labels:
    run: simple-server
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: web
      containerPort: 80
      protocol: TCP
Once applied, I can open a bash shell with sudo kubectl exec --stdin --tty simple-server -- /bin/bash. I have a simple Koa Docker image I created from node:17-alpine like this...
FROM node:17-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
EXPOSE 8080
CMD node server.mjs
apiVersion: v1
kind: Pod
metadata:
  name: simple-server-node
  labels:
    run: simple-server-node
spec:
  containers:
  - name: web
    image: partyk1d24/node-k8s-demo
    ports:
    - name: web
      containerPort: 8080
      protocol: TCP
This one deploys and I can access it using a service. However, when I try to open a tty bash shell I get...
node % sudo kubectl exec --stdin --tty pod/simple-server-node -- /bin/bash
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown
command terminated with exit code 126
Why can't I open a shell in the node image the same way I can in the nginx one?

Posting as answer for better visibility.
"Why can't I open a shell in the node image the same way I can in the nginx one?" - Because alpine does not come with bash installed. We can, however, use /bin/sh instead of /bin/bash.

Related

How do I run a post-boot script on a container in kubernetes

My application includes a shell script that I need to run once the container has started, and it must keep running in the background. I have tried using lifecycle hooks, but they do not work for me:
ports:
- name: php-port
  containerPort: 9000
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "sh /root/script.sh"]
I need an artisan command to keep running in the background once the container is started.
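As an aside, the hook as written likely fails before any background-process concerns: in the exec form, /bin/sh receives the single argument "sh /root/script.sh" and tries to execute a file with that literal name. The usual shape would be something like the snippet below, though whether a backgrounded process survives the hook is runtime-dependent, which is one reason the sidecar approach that follows is more robust:
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "sh /root/script.sh &"]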
When the lifecycle hooks (e.g. postStart) do not work for you, you could add another container to your pod, which runs parallel to your main container (sidecar pattern):
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: main
    image: some/image
    ...
  - name: sidecar
    image: another/container
If your 2nd container should only start after your main container has started successfully, you need some kind of notification. For example, the main container could create a file on a shared volume (e.g. an emptyDir) that the 2nd container waits for before starting its main process. The docs have an example of a shared volume for two containers in the same pod. This obviously requires adding some additional logic to the main container; a sketch of that logic follows the manifest below.
apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: main
    image: some/image
    volumeMounts:
    - name: shared-data
      mountPath: /some/path
  - name: sidecar
    image: another/image
    volumeMounts:
    - name: shared-data
      mountPath: /trigger
    command: ["/bin/bash"]
    args: ["-c", "while [ ! -f /trigger/triggerfile ]; do sleep 1; done; ./your/2nd-app"]
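On the main container's side, the notification logic can be as simple as touching the trigger file once the app is up. A rough sketch (start-main-app and the sleep are placeholders for however your app actually signals readiness):
  - name: main
    image: some/image
    volumeMounts:
    - name: shared-data
      mountPath: /some/path
    command: ["/bin/sh"]
    args: ["-c", "start-main-app & sleep 10; touch /some/path/triggerfile; wait"]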
You can try using something like supervisor
http://supervisord.org/
We use that to start the main process and a monitoring agent in the background so we get metrics out of it. Supervisor also ensures those processes stay up if they crash or terminate.
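A minimal supervisord.conf sketch (the program names and paths here are placeholders, not from our setup):
[supervisord]
nodaemon=true                 ; keep supervisord in the foreground as the container's main process

[program:main-app]
command=/app/run-main         ; hypothetical main process
autorestart=true

[program:monitoring-agent]
command=/usr/local/bin/agent  ; hypothetical monitoring agent
autorestart=true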

kubernetes service can't reach pod in mac docker desktop environment

I have installed docker-desktop on my Mac for Kubernetes study, but I have hit a problem where the service cannot reach the selected pod, and I am not sure of the root cause.
Here is my Dockerfile for a hello-world web app written in Go. I have verified that, once the pod is deployed, calling localhost:8081 from inside the container works.
FROM golang:1.16.15
RUN mkdir -p /var/lib/www && mkdir -p /var/lib/temp
WORKDIR /var/lib/temp
COPY . ./
RUN go env -w GOPROXY="https://goproxy.cn,direct"
RUN go mod tidy
RUN go build -o simplego
RUN mv ./simplego /var/lib/www/ && rm -rf /var/lib/temp
WORKDIR /var/lib/www
COPY ./build.sh ./
EXPOSE 8081
RUN chmod 777 ./simplego
RUN chmod 777 ./build.sh
ENTRYPOINT ["/bin/bash","./build.sh"]
build.sh just runs ./simplego to start the web app.
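In other words, presumably something like:
#!/bin/bash
# build.sh - launch the compiled binary in the foreground
./simplego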
Here is the deployment yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simplego-k8s-deployment
  labels:
    app: simplego-k8s-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simplego-k8s-p
  template:
    metadata:
      labels:
        app: simplego-k8s-p
    spec:
      containers:
      - name: simplego-k8s-pod
        image: simplego:1.0
        imagePullPolicy: Never
        ports:
        - containerPort: 8081
          name: simplego-port
And this is the service yaml
apiVersion: v1
kind: Service
metadata:
  name: simplego-service
  labels:
    app: simplego-service
spec:
  selector:
    app: simplego-k8s-p
  type: NodePort
  ports:
  - protocol: TCP
    port: 8081
    nodePort: 31082
    name: simplego-port
Checking the pod labels:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
simplego-k8s-deployment-6b4559b69f-pxsmh 1/1 Running 1 (3m46s ago) 5h53m 10.1.0.86 docker-desktop <none> <none> app=simplego-k8s-p,pod-template-hash=6b4559b69f
which lines up with the service selector:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 43d <none>
simplego-service NodePort 10.109.9.238 <none> 8081:31082/TCP 5h54m app=simplego-k8s-p
But when I try to browse the service from my Mac at simplego-service:31082 or localhost:31082, every attempt fails.
I am not sure of the reason and could use some help.

Kubernetes bash to POD after creation

I create the pod with the command
kubectl run --generator=run-pod/v1 mypod --image=myimage:1 -it bash
and after the pod is created it drops me straight into a bash shell inside the container.
Is there any way to achieve the same using a YAML file? I tried the YAML below, but it does not take me to bash directly after the pod is created; I still have to run kubectl exec -it POD_NAME -- bash manually. I want to avoid the exec step and have the YAML drop me into the container as soon as the pod is created. Is there any way to do this?
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespcae
  labels:
    app: mypod
spec:
  containers:
  - args:
    - bash
    name: mypod
    image: myimage:1
    stdin: true
    stdinOnce: true
    tty: true
This is a community wiki answer. Feel free to expand it.
As already mentioned by David, it is not possible to go to bash directly after a Pod is created by only using the YAML syntax. You have to use a proper kubectl command like kubectl exec in order to Get a Shell to a Running Container.
The key is to have a pod whose main process does not exit.
Here is an example for you.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespcae
  labels:
    app: mypod
spec:
  containers:
  - command:
    - bash
    - -c
    - yes > /dev/null
    name: mypod
    image: myimage:1
The command yes will continue to output the string yes until it is killed.
The part > /dev/null will make sure that you won't have a ton of garbage logs.
Then you can access your pod with these commands.
kubectl apply -f my-pod.yaml
kubectl exec -it mypod -n mynamespcae -- bash
Remember to remove the pod after you finish all the operations.
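For example:
kubectl delete pod mypod -n mynamespcae
# or, mirroring the apply:
kubectl delete -f my-pod.yaml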

Volume mounts not working Kubernetes and WSL 2 and Docker

I am unable to properly mount volumes using hostPath in Kubernetes running on Docker with WSL 2; it appears to be a WSL 2-specific issue with volume mounts. Does anyone know how to fix this?
Here are the steps:
Deploy debug build to Kubernetes for my app.
Attach Visual Studio Code using the Kubernetes extension
Navigate to the project folder for my application that was attached using the volume mount <= Problem Right Here
When you look at the volume mount, nothing is there.
C:\Windows\System32>wsl -l -v
NAME STATE VERSION
Ubuntu Running 2
docker-desktop-data Running 2
docker-desktop Running 2
Docker Desktop v2.3.0.3
Kubernetes v1.16.5
Visual Studio Code v1.46.1
====================================================================
Dockerfile
====================================================================
#
# Base image for deploying and running based on Ubuntu
#
# Support ASP.NET and does not include .NET SDK or NodeJs
#
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-bionic AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
#
# Base image for building .NET based on Ubuntu
#
# 1. Uses .NET SDK image as the starting point
# 2. Restore NuGet packages
# 3. Build the ASP.NET Core application
#
# Destination is /app/build which is copied to /app later on
#
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-bionic AS build
WORKDIR /src
COPY ["myapp.csproj", "./"]
RUN dotnet restore "./myapp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "myapp.csproj" -c Release -o /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-bionic AS debug
RUN curl --silent --location https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get install --yes nodejs
ENTRYPOINT [ "sleep", "infinity" ]
#
# Base image for building React based on Node/Ubuntu
#
# Destination is /app/ClientApp/build which is copied to /clientapp later
#
# NOTE: npm run build puts the output in the build directory
#
FROM node:12.18-buster-slim AS clientbuild
WORKDIR /src
COPY ./ClientApp /app/ClientApp
WORKDIR "/app/ClientApp"
RUN npm install
RUN npm run build
#
# Copy clientbuild:/app/ClientApp to /app/ClientApp
#
# Copy build:/app to /app
#
FROM base as final
WORKDIR /app/ClientApp
COPY --from=clientbuild /app/ClientApp .
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "myapp.dll"]
====================================================================
Kubernetes Manifest
====================================================================
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: localhost:6000/myapp
        ports:
        - containerPort: 5001
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /local
          name: local
        resources: {}
      volumes:
      - name: local
        hostPath:
          path: /C/dev/myapp
          type: DirectoryOrCreate
      hostname: myapp
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer
  ports:
  - name: http
    protocol: TCP
    port: 5001
    targetPort: 5001
  selector:
    app: myapp
According to the following thread, hostPath volumes are not officially supported for WSL 2 yet. They do suggest a workaround, though I had trouble getting it to work. I have found that prepending /run/desktop/mnt/host/c works for me.
# C:\someDir\volumeDir
hostPath:
  path: /run/desktop/mnt/host/c/someDir/volumeDir
  type: DirectoryOrCreate
Thread Source: https://github.com/docker/for-win/issues/5325
Suggested workaround from thread: https://github.com/docker/for-win/issues/5325#issuecomment-567594291
Using @RyanDarnell's excellent answer above, here is what worked for me.
Objective: run message-db, with its password injected via docker build --secret, in local docker-desktop Kubernetes, using a StatefulSet and a StorageClass, deployed with Skaffold.
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
docker-desktop Ready control-plane,master 21h v1.21.1 <internalip> <none> Docker Desktop 5.4.72-microsoft-standard-WSL2 docker://20.10.7
#./k8s/storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
  labels:
    app: postgres-database
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
#./k8s/persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv1
  labels:
    app: postgres-database
spec:
  capacity:
    storage: 128Mi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /run/desktop/mnt/host/c/kubernetes-mount-path # created this folder at C:\kubernetes-mount-path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - docker-desktop
#./k8s/stateful-set.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-database
spec:
  selector:
    matchLabels:
      app: postgres-database
  serviceName: postgres-service
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres-database
    spec:
      containers:
      - name: message-db-container
        image: message-db-test
        volumeMounts:
        - name: postgres-disk
          mountPath: /var/lib/postgresql/data
        env:
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        - name: POSTGRES_PASSWORD
          value: <postgres-password>
  volumeClaimTemplates:
  - metadata:
      name: postgres-disk
      labels:
        app: postgres-database
    spec:
      selector:
        matchLabels:
          app: postgres-database
      storageClassName: local-storage
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 128Mi
#./k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-loadbalancer
spec:
  selector:
    app: postgres-database
  type: LoadBalancer
  ports:
  - port: 5432
    targetPort: 5432
#./Dockerfile
# syntax = docker/dockerfile:1.3
FROM postgres:13.3-alpine3.14
RUN apk add --no-cache curl tar
RUN mkdir -p /usr/src/eventide \
&& curl -L https://github.com/message-db/message-db/tarball/v1.2.6 -o /usr/src/eventide/message-db.tgz
RUN tar -xf /usr/src/eventide/message-db.tgz --directory /usr/src/eventide
# change message_store login password
RUN --mount=type=secret,id=message-db-pass sed -i "s/WITH LOGIN/WITH LOGIN ENCRYPTED PASSWORD '$(cat /run/secrets/message-db-pass)'/g" /usr/src/eventide/message-db-message-db-759a4f3/database/roles/message-store.sql
RUN echo -e "#!/bin/sh\ncd /usr/src/eventide/message-db-message-db-759a4f3/database\n./install.sh" > /docker-entrypoint-initdb.d/rundbscripts.sh
RUN docker-entrypoint.sh postgres --version
ENTRYPOINT docker-entrypoint.sh postgres
#./secret.txt
<postgres-password>
#./skaffold.yaml
kind: Config
apiVersion: skaffold/v2beta20
build:
  artifacts:
  - image: message-db-test
    context: .
    docker:
      secret:
        id: message-db-pass
        src: secret.txt
  local:
    useBuildkit: true
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
Run skaffold debug -v debug to start everything up.
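If you ever want to build the image directly rather than through Skaffold, the BuildKit equivalent should be something like this (the tag just has to match what the manifests reference):
DOCKER_BUILDKIT=1 docker build --secret id=message-db-pass,src=secret.txt -t message-db-test .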

How can I use the port of a server running on localhost in kubernetes running spring boot app

I am new to Kubernetes and kubectl. I am running a gRPC server on my localhost, and I would like to use this endpoint from a Spring Boot app running on Kubernetes (via kubectl) on my Mac. With the following config in application.yml, the app works when I run it from my IDE, but not when it runs in Kubernetes.
grpc:
  client:
    local-server:
      address: static://localhost:6565
      negotiationType: PLAINTEXT
I have seen people suggest port-forward, but that is the opposite direction: it helps when I want to reach a port that is already inside Kubernetes from localhost (e.g. browsing a Tomcat server running in Kubernetes from my local browser), not the other way around.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testspringconfigvol
  labels:
    app: testspring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testspringconfigvol
  template:
    metadata:
      labels:
        app: testspringconfigvol
    spec:
      initContainers:
      # taken from https://gist.github.com/tallclair/849601a16cebeee581ef2be50c351841
      # This container clones the desired git repo to the EmptyDir volume.
      - name: git-config
        image: alpine/git # Any image with git will do
        args:
        - clone
        - --single-branch
        - --
        - https://github.com/username/fakeconfig
        - /repo # Put it in the volume
        securityContext:
          runAsUser: 1 # Any non-root user will do. Match to the workload.
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
        volumeMounts:
        - mountPath: /repo
          name: git-config
      containers:
      - name: testspringconfigvol-cont
        image: username/testspring
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: /usr/local/lib/config/
          name: git-config
      volumes:
      - name: git-config
        emptyDir: {}
What I need, in simple terms:
I have servers listening on ports on my localhost (localhost:6565, localhost:6566), and I need to reach those ports somehow from inside Kubernetes. What should I then put in the application.yml config? Will it still be localhost:6565 and localhost:6566, or some other address, i.e. how-to-get-this-ip:6565 and how-to-get-this-ip:6566?
We can get the VM host IP from minikube with this command: minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'". For me it's 10.0.2.2 on my Mac. If you are using Kubernetes in Docker Desktop for Mac, it's host.docker.internal.
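So on Docker Desktop for Mac, the application.yml from the question would become something like:
grpc:
  client:
    local-server:
      address: static://host.docker.internal:6565
      negotiationType: PLAINTEXT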
Using these addresses, I managed to connect from Kubernetes to services running on the host machine.
1) Inside your application.properties define
server.port=8000
2) Create Dockerfile
# Start with a base image containing Java runtime (mine java 8)
FROM openjdk:8u212-jdk-slim
# Add Maintainer Info
LABEL maintainer="vaquar.khan@gmail.com"
# Add a volume pointing to /tmp
VOLUME /tmp
# Make port 8080 available to the world outside this container
EXPOSE 8080
# The application's jar file (when packaged)
ARG JAR_FILE=target/codestatebkend-0.0.1-SNAPSHOT.jar
# Add the application's jar to the container
ADD ${JAR_FILE} codestatebkend.jar
# Run the jar file
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/codestatebkend.jar"]
3) Make sure the image runs fine in Docker
docker run --rm -p 8080:8080
4) Port-forward to the pod, as described in https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
Use the following command to find the pod name:
kubectl get pods
then
kubectl port-forward <pod-name> 8080:8080
Useful links :
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
https://developer.okta.com/blog/2019/05/22/java-microservices-spring-boot-spring-cloud
