Kubernetes: bash into pod after creation - bash

I tried to create the pod using the command
kubectl run --generator=run-pod/v1 mypod --image=myimage:1 -it bash
and after the pod was created successfully it dropped me into a bash shell inside the container.
Is there any way to achieve the same using a YAML file? I tried the YAML below, but it does not go to bash directly after the pod is created; I had to run kubectl exec -it POD_NAME bash manually. I want to avoid the exec step and have my YAML take me straight into the container once the pod is up. Is there any way to achieve this?
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespcae
  labels:
    app: mypod
spec:
  containers:
  - args:
    - bash
    name: mypod
    image: myimage:1
    stdin: true
    stdinOnce: true
    tty: true

This is a community wiki answer. Feel free to expand it.
As already mentioned by David, it is not possible to go to bash directly after a Pod is created by only using the YAML syntax. You have to use a proper kubectl command like kubectl exec in order to Get a Shell to a Running Container.

The key is to have a pod that will not exit.
Here is an example for you.
apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: mynamespcae
  labels:
    app: mypod
spec:
  containers:
  - command:
    - bash
    - -c
    - yes > /dev/null
    name: mypod
    image: myimage:1
The command yes will continuously output the string y until it is killed.
The part > /dev/null will make sure that you won't have a ton of garbage logs.
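Any long-running command works just as well; as a minimal sketch (an alternative not shown in the original answer, assuming the image ships a sleep that accepts infinity):
spec:
  containers:
  - name: mypod
    image: myimage:1
    # keeps the container alive without writing any log output
    command: ["sleep", "infinity"]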
Then you can access your pod with these commands.
kubectl apply -f my-pod.yaml
kubectl exec -it mypod -- bash
Remember to remove the pod after you finish all the operations.
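For example, cleanup could look like this (using the name and namespace from the manifest above):
kubectl delete pod mypod -n mynamespcae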

Related

Job that executes command inside a pod

I'd like to ask whether it is possible to create a Kubernetes Job that runs a bash command inside another Pod.
apiVersion: batch/v1
kind: Job
metadata:
  namespace: dev
  name: run-cmd
spec:
  ttlSecondsAfterFinished: 180
  template:
    spec:
      containers:
      - name: run-cmd
        image: <IMG>
        command: ["/bin/bash", "-c"]
        args:
        - <CMD> $POD_NAME
      restartPolicy: Never
  backoffLimit: 4
I considered using:
An environment variable to define the pod name
The Kubernetes SDK to automate this
But if you have better ideas, I am open to them!
The Job manifest you shared seems like a valid approach.
However, you need to take the following points into consideration:
Running a command inside another pod (more precisely, inside one of its containers) requires interacting with the Kubernetes API server, so you need a Kubernetes client (e.g. kubectl) to do it. This, in turn, requires the client to be installed in the Job's container image.
The Job's pod's service account has to have permissions on the pods/exec resource; see the docs and this answer, and the sketch below.
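As an illustrative sketch only, RBAC objects granting those exec permissions could look roughly like this; the Role, RoleBinding, and ServiceAccount names (pod-exec, pod-exec-binding, cmd-runner) are assumptions, not taken from the question:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-exec            # assumed name
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]
  verbs: ["get", "list", "create"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: pod-exec-binding    # assumed name
subjects:
- kind: ServiceAccount
  name: cmd-runner          # assumed service account referenced by the Job's pod spec
  namespace: dev
roleRef:
  kind: Role
  name: pod-exec
  apiGroup: rbac.authorization.k8s.io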

Why can I not TTY into the node:17-alpine image the way I can the NGINX one?

I have the following yml...
apiVersion: v1
kind: Pod
metadata:
  name: simple-server
  labels:
    run: simple-server
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - name: web
      containerPort: 80
      protocol: TCP
Once applied I can access a bash shell like sudo kubectl exec --stdin --tty simple-server -- /bin/bash. I have a simple Koa docker image I created from node:17-alpine like this...
FROM node:17-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
EXPOSE 8080
CMD node server.mjs
apiVersion: v1
kind: Pod
metadata:
  name: simple-server-node
  labels:
    run: simple-server-node
spec:
  containers:
  - name: web
    image: partyk1d24/node-k8s-demo
    ports:
    - name: web
      containerPort: 8080
      protocol: TCP
This one deploys and I can access it using a service. However, when I try to open a tty bash shell I get...
node % sudo kubectl exec --stdin --tty pod/simple-server-node -- /bin/bash
OCI runtime exec failed: exec failed: container_linux.go:380: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown
command terminated with exit code 126
Why can't I open a shell in the node image the same way I can in the nginx one?
Posting this as an answer for better visibility.
"Why can't I open a shell in the node image the same way I can in the nginx one?" - Because alpine does not come with bash installed. We can, however, use /bin/sh instead of /bin/bash.

Kubernetes + Minikube - How to see all stdout output?

I'm running a Ruby app on Kubernetes with Minikube.
However, whenever I look at the logs I don't see the output I would have seen in my terminal when running the app locally.
I presume it's because it only shows stderr?
What can I do to see all types of console logs (e.g. from puts or raise)?
From looking around, is this something to do with it running in detached mode? See the Python-related issue: Logs in Kubernetes Pod not showing up
Thanks.
As requested - here is the deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: sample
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
      - name: sample
        image: someregistry
        imagePullPolicy: Always
        command: ["/bin/sh","-c"]
        args: ["bundle exec rake sample:default --trace"]
        envFrom:
        - configMapRef:
            name: sample
        - secretRef:
            name: sample
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: regsecret
As shown in this article, kubectl logs <pod-name> should show you stdout and stderr for a pod deployed in Minikube.
By default in Kubernetes, Docker is configured to write a container's stdout and stderr to a file under /var/log/containers on the host system
Kubernetes adds:
There are two types of system components: those that run in a container and those that do not run in a container.
For example:
The Kubernetes scheduler and kube-proxy run in a container.
The kubelet and container runtime, for example Docker, do not run in containers.
And:
On machines with systemd, the kubelet and container runtime write to journald.
If systemd is not present, they write to .log files in the /var/log directory.
Similarly to the container logs, system component logs in the /var/log directory should be rotated.
In Kubernetes clusters brought up by the kube-up.sh script, those logs are configured to be rotated by the logrotate tool daily or once the size exceeds 100MB.
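As a hedged illustration (exact paths can vary with the container runtime and Minikube version), you can inspect those log files directly from inside the Minikube VM:
minikube ssh
sudo ls /var/log/containers/
sudo tail -n 50 /var/log/containers/<one-of-the-listed-files>   # placeholder file name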
I presume it's because it only shows stderr?
Not really, unless something specific is disabled in your container or pod spec. I assume you are using Docker, and by default it captures both stdout and stderr; that is what you see when you run kubectl logs <pod-name>.
What can I do to see all types of console logs (e.g. from puts or raise)?
You should see them in the container logs. It would help to post your pod or deployment definition.
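A few hedged examples of pulling those logs with kubectl (the pod name is whatever kubectl get pods shows for this deployment):
kubectl logs deployment/sample          # logs from one pod managed by the deployment
kubectl logs -f <pod-name>              # follow the log stream
kubectl logs --previous <pod-name>      # logs from the previous container instance, if it crashed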

k8s: Use parameterized image tag when creating deployment

I want to run a kubernetes deployment in the likes of the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: my-app
        image: our-own-registry.com/somerepo/my-app:${IMAGE_TAG}
        env:
        - name: FOO
          value: "BAR"
This will be delivered to the developers so that they can perform on demand deployments using the image tag of their preference.
What is the best way / recommended pattern to pass the tag variable?
Performing an export on the command line, so it is available as an env var in the shell from which the kubectl command will run?
Unfortunately, it's impossible via native Kubernetes tools. From here:
kubectl will never support variable substitution.
But that issue also has some good workarounds. The best way is to deploy your apps via Helm charts, using templates; see the sketch below.
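As a minimal sketch of that approach (the chart layout and the image.tag value name are assumptions, not from the original answer):
# templates/deployment.yaml (fragment)
      containers:
      - name: my-app
        image: our-own-registry.com/somerepo/my-app:{{ .Values.image.tag }}
# pass the tag at deploy time
helm upgrade --install my-app ./my-chart --set image.tag=1.2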
For simple use cases envsubst will do just fine:
IMAGE_TAG=1.2 envsubst < deployment.yaml | kubectl apply -f -

Set vm.max_map_count on cluster nodes

I am trying to install Elasticsearch (latest) on cluster nodes on Google Container Engine, but Elasticsearch needs the variable vm.max_map_count to be >= 262144.
If I ssh to every node and manually run:
sysctl -w vm.max_map_count=262144
everything works fine, but any new node will not have the specified configuration.
So my question is:
Is there a way to load a system configuration on every node at boot time?
A DaemonSet would not be a good solution, because inside a Docker container the system variables are read-only.
I'm using a freshly created cluster with the gci node image.
I found another solution while looking at this repository.
It relies on the use of an init container; the plus side is that only the init container runs with privileges:
annotations:
  pod.beta.kubernetes.io/init-containers: '[
    {
      "name": "sysctl",
      "image": "busybox",
      "imagePullPolicy": "IfNotPresent",
      "command": ["sysctl", "-w", "vm.max_map_count=262144"],
      "securityContext": {
        "privileged": true
      }
    }
  ]'
There is a new syntax available since Kubernetes 1.6 which still works for 1.7. Starting with 1.8 this new syntax is required. The declaration of init containers is moved to spec:
initContainers:
- name: init-sysctl
  image: busybox
  command:
  - sysctl
  - -w
  - vm.max_map_count=262144
  imagePullPolicy: IfNotPresent
  securityContext:
    privileged: true
You should be able to use a DaemonSet to emulate the behavior of a startup script. If the script needs to do root-level actions on the node, you can configure the DaemonSet pods to run in privileged mode.
For an example of how to do this, see https://github.com/kubernetes/contrib/tree/master/startup-script
As Robert pointed out, a DaemonSet could run as a startup script. Unfortunately, GKE will only let you run a DaemonSet with restartPolicy set to Always.
So, to prevent k8s from continually restarting the container after sysctl has run, the container has to sleep after the setup, and preferably run only on selected nodes. It isn't an elegant solution, but it's elastic at least.
Example:
es-host-setup Dockerfile:
FROM alpine
CMD sysctl -w vm.max_map_count=262144; sleep 365d
DaemonSet resource file:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: es-host-setup
spec:
  template:
    metadata:
      labels:
        name: es-host-setup
    spec:
      containers:
      - name: es-host-setup
        image: es-host-setup
        securityContext:
          privileged: true
      restartPolicy: Always
      nodeSelector:
        pool: elasticsearch
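A hedged usage note: the nodeSelector above only schedules the DaemonSet onto nodes carrying the pool=elasticsearch label, which you would need to apply yourself, for example:
kubectl label nodes <node-name> pool=elasticsearch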
