Executing a Kubernetes pod using the pod name - bash

I am writing a shell script to exec into a pod, for which the syntax is:
winpty kubectl --kubeconfig="C:\kubeconfig" -n namespace exec -it podname bash
This works fine, but since the pod name is not stable and changes with every deployment, is there an alternative?
Thanks.

You can still use the normal $ kubectl exec command, but resolve the changing pod name dynamically.
Assuming that you have a deployment and pods labeled app=example, simply execute:
$ kubectl exec -it $(kubectl get pods -l app=example -o custom-columns=:metadata.name) -- bash
EDIT:
You can also execute:
POD_NAME=$(kubectl get pods -l app=example -o custom-columns=":metadata.name")
or
POD_NAME=$(kubectl get pods -l app=example -o jsonpath="{.items[0].metadata.name}")
finally
$ winpty kubectl exec -ti $POD_NAME -- bash
Make sure that you execute the command in the proper namespace - you can also add the -n flag to specify it.
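If you do this regularly, you could wrap the lookup in a small helper function. This is only a sketch; the kexec name, the mynamespace namespace, and the app=example label are assumptions to adapt to your setup:
kexec() {
  local pod
  pod=$(kubectl --kubeconfig="C:\kubeconfig" -n mynamespace get pods -l app=example -o jsonpath="{.items[0].metadata.name}")
  winpty kubectl --kubeconfig="C:\kubeconfig" -n mynamespace exec -it "$pod" -- bash
}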

You can use the following command:
kubectl -n <namespace> exec -it deploy/<deployment-name> -- bash
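For example, with a hypothetical deployment named example in a namespace dev, kubectl resolves one of the deployment's pods for you:
kubectl -n dev exec -it deploy/example -- bash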

Add a service to your application:
As you know, pods are ephemeral; they come in and out of existence dynamically to keep your application in compliance with your configuration. This behavior implements the scaling and self-healing aspects of Kubernetes.
Your application will consist of one or more pods that are accessible through a service. The application's service name and address do not change, so the service acts as the stable interface for reaching your application.
This method works whether your application has one pod or many.
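As a rough sketch (the deployment name example, the service name example-svc, port 80, and the app=example label are all assumptions), you could expose the deployment as a service for a stable address, and keep using its labels to pick a pod whenever you need a shell:
kubectl expose deployment example --name=example-svc --port=80
kubectl exec -it $(kubectl get pods -l app=example -o jsonpath="{.items[0].metadata.name}") -- bash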
Does that help?

Related

Kubernetes can't get bash prompt when exec into pod

I am using Kubernetes to exec into a pod like this:
kubectl exec myPod bash -i
which works fine, except I don't get a prompt. So then I do:
export PS1="myPrompt "
Which I would expect to give me a prompt, but doesn't. Is there some workaround for this?
Exec'ing into a pod interactively requires the -ti options, where -i passes stdin to the container and -t allocates a TTY that connects your terminal to that stdin.
Take a look at the following example:
kubectl exec -it myPod -- bash
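With the TTY allocated you should get the container's default prompt, and setting your own from inside then behaves as expected (reusing the prompt string from the question):
export PS1="myPrompt "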

kubectl exec works on single commands, but I cannot enter a bash shell

I'm on macOS Catalina 10.15.4, and I'm using minikube v1.11.0 and kubernetes v1.18.3, both installed from brew. Minikube is initialized with the docker engine.
The container is configured like so:
containers:
  - name: database
    image: "mysql:5.6"
    imagePullPolicy: IfNotPresent
    env:
      - name: MYSQL_ROOT_PASSWORD
        value: "12345"
      - name: MYSQL_USER
        value: user
      - name: MYSQL_PASSWORD
        value: password
      - name: MYSQL_DATABASE
        value: db
I'm trying to get a bash script open for one of my running kubectl containers. From research online, it appears that this should be the command that will open a bash window in my terminal:
minikube kubectl exec -it --namespace=tools test-pod -- bash
However, when I run it, I get the following traceback:
Error: unknown shorthand flag: 'i' in -it
See 'minikube kubectl --help' for usage.
It doesn't seem to want me using any arguments in my command. Is there something I'm missing, or am I attempting to use a command that is deprecated?
Note: I am able to run exec, but not for opening a bash script. For example, I am able to run the following command:
minikube kubectl exec test-pod -- ls /
And it outputs this following:
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
bin
boot
dev
docker-entrypoint-initdb.d
entrypoint.sh
etc
home
lib
lib64
media
mnt
opt
proc
root
run
sbin
srv
sys
tmp
usr
var
Edit: I have attempted the following command:
minikube kubectl exec --stdin --tty --namespace=tools test-pod -- sh
And I got the following traceback:
Error: unknown flag: --stdin
See 'minikube kubectl --help' for usage.
It seems like any flags at all, short or long, are failing, and I cannot figure out why.
minikube kubectl needs a -- between it and the kubectl arguments when you want to pass flags:
$ minikube kubectl -- exec --stdin --tty --namespace=tools test-pod -- sh
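With the -- in place, the shorthand flags from your original command are passed through to kubectl unchanged, so the -it form should work as well:
$ minikube kubectl -- exec -it --namespace=tools test-pod -- bash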
You can also use plain kubectl.
I would just make sure that your ~/.kube/config is pointing to the right minikube context/cluster. Typically, any minikube command you run from the shell (e.g. minikube ssh) will switch the context to your minikube cluster.
Then just use kubectl
$ kubectl exec --stdin --tty --namespace=tools test-pod -- sh
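If you would rather not rely on a previous minikube command having switched the context, you can select and verify it explicitly (minikube is the default context name; yours may differ):
$ kubectl config use-context minikube
$ kubectl config current-context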
So, I figured out the solution:
With my configuration, initializing minikube with minikube start --driver=docker does not successfully initialize everything. I changed my driver to virtualbox, and minikube was able to ssh and continue without any issues.
Setup with a docker driver appears to be commonly issue-prone, as this GitHub thread shows: https://github.com/kubernetes/minikube/issues/7332
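A minimal sketch of switching drivers, in case someone hits the same thing (note that minikube delete recreates the cluster from scratch, so anything running in it is lost):
minikube delete
minikube start --driver=virtualbox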

How to get the running pod name when there are other pods terminating?

I am using a Kubernetes cluster to run dev environments for myself and other developers. I have written a few shell functions to help everyone deal with their pods without typing long kubectl commands by hand. For example, to get a prompt on one of the pods, my functions use the following:
kubectl exec -it $(kubectl get pods --selector=run=${service} --field-selector=status.phase=Running -o jsonpath="{.items[*].metadata.name}") -- bash;
where $service is set to a service label I want to access, like postgres or redis or uwsgi.
Since these are development environments, there is always exactly one pod of each type. The problem I am having is that if I delete a pod to make it pull a fresh image (all pods are managed by deployments, so deleting a pod just causes a new one to be created), for a while there are two pods: one shows as Terminating and the other as Running in the kubectl get pods output. I want to make sure that the command above selects the pod that is running, not the one terminating. I thought the --field-selector=status.phase=Running flag would do it, but it does not; apparently, even while a pod is terminating, it still reports Running in the status.phase field. What can I use to filter out terminating pods?
Use this one
$ kubectl exec -it $(kubectl get pods --selector=run=${service} | grep Running | awk '{print $1}') -- bash;
or
$ kubectl exec -it $(kubectl get pods --selector=run=${service} -o=jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}') -- bash;
Ref: https://kubernetes.io/docs/reference/kubectl/jsonpath/
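As a sketch of folding this back into the helper function from the question (kpod is a made-up name; the grep on the STATUS column is what skips Terminating pods, since their status.phase still reads Running):
kpod() {
  local service=$1
  kubectl exec -it $(kubectl get pods --selector=run=${service} --no-headers | grep Running | awk '{print $1}') -- bash
}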

shortcut for typing kubectl --all-namespaces every time

Is there any alias we can make for --all-namespaces, as kubectl doesn't recognise any shortened form of it, or any kind of shortcut to minimize typing the whole flag?
New in kubectl v1.14, you can use -A instead of --all-namespaces, e.g.:
kubectl get -A pod
(rejoice)
Is there any alias we can make for all-namespace
Based on this excellent SO answer, you can create an alias that inserts arguments between a prefix and a suffix, like so:
alias kca='f(){ kubectl "$@" --all-namespaces -o wide; unset -f f; }; f'
and then use it regularly like so:
kca get nodes
kca get pods
kca get svc,sts,deploy,pvc,pv
etc.
Note: -o wide is added as well, for fun, to get more detailed info about resources that are not namespaced at all, like nodes and pv.
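If you prefer the default columns, the same function trick works without the extra output flag (kc is just another made-up alias name):
alias kc='f(){ kubectl "$@" --all-namespaces; unset -f f; }; f'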

Kubernetes - kubectl exec bash - session drop and line width

I have a k8s cluster with 3 minions, a master, and haproxy in front. When I use
kubectl exec -p $POD -i -t -- bash -il
to access bash in the pod (a single container in this case), I get in, but after something like 5 minutes I get dropped out of the terminal. If I re-enter the container I can see my old bash process running, with a new one started for my new connection. Is there a way to prevent this from happening? When I use docker exec it works fine and doesn't drop me, so I guess it comes from Kubernetes.
As a bonus question - is there a way to increase the characters per line when using kubectl exec? I get truncated output that is different from docker exec.
Thanks in advance!
It is a known issue -
https://github.com/kubernetes/kubernetes/issues/9180
The kubelet webserver times out.
I resolved the line-width problem by setting COLUMNS=$COLUMNS LINES=$LINES via env before bash:
kubectl exec -ti busybox env COLUMNS=$COLUMNS LINES=$LINES bash
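A variation that picks up your current terminal size automatically (a sketch; it assumes tput is available on your local machine and uses the current exec syntax with --):
kubectl exec -ti busybox -- env COLUMNS=$(tput cols) LINES=$(tput lines) bash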
