Script for running port-forward automatically - bash

I want to create a script that runs port-forwarding for a pod automatically, for a specific pod name (app3; I have multiple apps in this namespace and I need to run it just for app3).
e.g.
kubectl port-forward pods/app3-86499b66d-dfwf7 8000:8000 -n web
I've started with
kubectl get pod -n webide-system | grep app3
The output is:
app3-86499b66d-dfwf7 1/1 Running 0 18h
However, I'm not sure how to take the output, which is the pod name, and run the port forwarding against it.
This prefix is constant:
pods/app3
And this suffix changes with each deployment:
-86499b66d-dfwf7
Any idea how to make it work with a script?

# note: the pod lives in the webide-system namespace, so both commands must target it
POD_NAME=$(kubectl get pod -n webide-system | grep app3 | sed 's/ .*//')
kubectl port-forward "pods/$POD_NAME" 8000:8000 -n webide-system
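A more robust variant avoids grep entirely by selecting on labels, assuming the app3 pods carry a label such as app=app3 (the label key and value here are an assumption; check yours with kubectl get pods --show-labels):
# Assumption: pods are labeled app=app3; adjust the selector to match your deployment
POD_NAME=$(kubectl get pod -n webide-system -l app=app3 -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward "pods/$POD_NAME" 8000:8000 -n webide-system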

The answer provided by @Beta is correct, but I would like to show you yet another approach.
You can expose the pod using a Service:
$ kubectl expose pod app3-86499b66d-dfwf7 --port 8000 --name app3
service/app3 exposed
But in this case the pod is referenced using its pod-template-hash, so when the pod is recreated, the service will no longer work.
This is why a much better option is to expose a deployment instead:
$ kubectl expose deployment app3 --port 8000
service/app3 exposed
And now you can access it using the port-forward functionality as follows:
$ kubectl port-forward service/app3 8000:8000
Forwarding from 127.0.0.1:8000 -> 8000
Forwarding from [::1]:8000 -> 8000
The second approach additionally takes care of load balancing, so if you had more than one replica, it would handle traffic distribution for you.
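If you want this as a reusable script, the two steps can be combined; this is a sketch that assumes a deployment named app3 in the web namespace and tolerates the service already existing:
#!/usr/bin/env bash
# one-time: create the service if it does not exist yet (assumes deployment app3 in namespace web)
kubectl get service app3 -n web >/dev/null 2>&1 || kubectl expose deployment app3 --port 8000 -n web
# every run: forward local port 8000 to the service
kubectl port-forward service/app3 8000:8000 -n web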

Related

How to SSH into a Kubernetes Pod

Scenario 1: I am deploying a Spring Boot app on a physical machine, where it opens an SSH connection on a port of the physical machine, let's say 9889. So once the application starts, that physical machine will open a connection on that port and listen for any incoming requests. Any client can connect to that port using the command
ssh admin@ipaddress -p 9889
It returns the connection I expect.
Scenario 2: I am deploying that app into a Kubernetes cluster. I set the external IP of the service to the IP of the master node. So when I type kubectl get services I get something like
NAME        TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
app-entry   LoadBalancer   172.x.x.xx   172.x.x.x     3000:3552
How can I SSH to the app in the Kubernetes cluster using the external IP and the port 3000? Every time I try to SSH using the command above, it returns connection refused.
As @zerkms mentioned in the comment, you can use kubectl exec to connect to the shell of a container running inside the Kubernetes cluster.
$ kubectl exec -it -n <namespace> <pod_name> -- /bin/bash
$ kubectl exec -it -n <namespace> <pod_name> -- /bin/sh
# if the pod has multiple containers
$ kubectl exec -it -n <namespace> <pod_name> -c <container_name> -- /bin/bash
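For example, reusing the namespace and pod name from the first question as illustrative values:
$ kubectl exec -it -n webide-system app3-86499b66d-dfwf7 -- /bin/bash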
If you have a server running on your pod that serves on a specific port, you can use kubectl port-forward to connect to it from the local machine (i.e. localhost:PORT).
# Pods
$ kubectl port-forward -n <namespace> pods/<pod_name> <local_port>:<container_port>
# Services
$ kubectl port-forward -n <namespace> svc/<service_name> <local_port>:<container_port>
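For instance, to reach a server listening on container port 8000 from localhost:8080, using the same illustrative names as above:
$ kubectl port-forward -n webide-system pods/app3-86499b66d-dfwf7 8080:8000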

Executing a Kubernetes pod with the use of the pod name

I am writing a shell script for executing a pod for which the syntax is:
winpty kubectl --kubeconfig="C:\kubeconfig" -n namespace exec -it podname bash
This works fine, but since the pod name is not stable and changes with every deployment, is there any alternative to this?
Thanks.
You can use the normal $ kubectl exec command, but define a value for the changing pod name.
Assuming that you have a deployment and pods labeled app=example, simply execute:
$ kubectl exec -it $(kubectl get pods -l app=example -o custom-columns=:metadata.name) -- bash
EDIT:
You can also execute:
POD_NAME=$(kubectl get pods -l app=example -o custom-columns=":metadata.name")
or
POD_NAME=$(kubectl get pods -l app=example -o jsonpath="{.items[0].metadata.name}")
and finally:
$ winpty kubectl exec -ti $POD_NAME -- bash
Make sure that you execute the command in the proper namespace - you can also add the -n flag to define it.
You can use the following command:
kubectl -n <namespace> exec -it deploy/<deployment-name> -- bash
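If you run this often, a small wrapper function keeps it short; the function name and arguments here are hypothetical:
# kexec <namespace> <deployment-name> - open a bash shell in one of the deployment's pods
kexec() {
  kubectl -n "$1" exec -it "deploy/$2" -- bash
}
# usage: kexec webide-system app3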
Add a service to your application:
As you know, pods are ephemeral; they come in and out of existence dynamically to ensure your application stays in compliance with your configuration. This behavior implements the scaling and self-healing aspects of Kubernetes.
Your application will consist of one or more pods that are accessible through a service. The application's service name and address do not change, so the service acts as the stable interface for reaching your application.
This method works whether your application has one pod or many.
Does that help?

Grep command to extract a Kubernetes pod name

I am currently working on a bash script to reduce the time it takes for me to build the db for a project.
Currently I have several databases running in the same namespace and I want to extract only the specific pod name.
I run kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
elastic   1/1     Running   0          37h
mysql     1/1     Running   0          37h
Now I want to save one of the pod names.
I'm currently running this foo=$(kubectl get pods | grep -e "mysql")
and it returns mysql 1/1 Running 0 37h, which is the expected result of the command. Now I just want to extract the pod name into that variable so that I can pass it on later.
This should work for you
foo=$(kubectl get pods | awk '{print $1}' | grep -e "mysql")
kubectl already allows you to extract only the names:
kubectl get pods -o=jsonpath='{range .items..metadata}{.name}{"\n"}{end}' | fgrep mysql
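Another option worth noting is -o name, which prints each resource as pod/<name>; stripping the prefix gives the bare name:
kubectl get pods -o name | grep mysql | cut -d/ -f2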
If I'm not mistaken, you merely need to get the pod names to reuse them later.
The kubectl get --help output provides a lot of good information on what you can achieve with just kubectl, without invoking the rest of the heavy artillery like awk, sed, etc.
List a single pod in JSON output format.
kubectl get -o json pod web-pod-13je7
List resource information in custom columns.
kubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image
Return only the phase value of the specified pod.
kubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}
In this particular case I see at least two workarounds:
1) Custom columns. You can get virtually any output (and then you can grep/tr/awk if needed):
$ kubectl get pods --no-headers=true -o custom-columns=NAME_OF_MY_POD:.metadata.name
mmmysql-6fff9ffdbb-58x4b
mmmysql-6fff9ffdbb-72fj8
mmmysql-6fff9ffdbb-p76hx
mysql-tier2-86dbb787d9-r98qw
nginx-65f88748fd-s8mgc
2) jsonpath (the one @vault provided):
kubectl get pods -o=jsonpath='{.items..metadata.name}'
Hope that sheds light on options you have to choose from.
Let us know if that helps.
kubectl get pods | grep YOUR_POD_STARTING_NAME
For example:
kubectl get pods | grep mysql
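Putting it together, capturing just the name into a variable might look like this (a sketch; it assumes the first matching pod is the one you want):
POD_NAME=$(kubectl get pods --no-headers -o custom-columns=":metadata.name" | grep mysql | head -n 1)
echo "$POD_NAME"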

How to get the running pod name when there are other pods terminating?

I am using kubernetes cluster to run dev environments for myself and other developers. I have written a few shell functions to help everyone deal with their pods without typing long kubectl commands by hand. For example, to get a prompt on one of the pods, my functions use the following
kubectl exec -it $(kubectl get pods --selector=run=${service} --field-selector=status.phase=Running -o jsonpath="{.items[*].metadata.name}") -- bash;
where $service is set to a service label I want to access, like postgres or redis or uwsgi.
Since these are development environments, there is always one of each type of pod. The problem I am having is that if I delete a pod to make it pull a fresh image (all pods are managed by deployments, so if I delete a pod a new one is created), for a while there are two pods: one shows as terminating and the other as running in the kubectl get pods output. I want to make sure that the command above selects the pod that is running and not the one terminating. I thought the --field-selector=status.phase=Running flag would do it, but it does not. Apparently, even if the pod is in the process of terminating, it still reports Running in the status.phase field. What can I use to filter out terminating pods?
Use this one
$ kubectl exec -it $(kubectl get pods --selector=run=${service} | grep "Running" | awk '{print $1}') -- bash;
or
$ kubectl exec -it $(kubectl get pods --selector=run=${service} -o=jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}') -- bash;
Ref: https://kubernetes.io/docs/reference/kubectl/jsonpath/
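Another angle, not shown in the answers above: terminating pods keep phase Running but gain a metadata.deletionTimestamp, so a go-template can skip them. This is a sketch using that field; since there is only one running pod per service in this setup, no separator between names is needed:
$ kubectl exec -it $(kubectl get pods --selector=run=${service} -o go-template --template='{{range .items}}{{if not .metadata.deletionTimestamp}}{{.metadata.name}}{{end}}{{end}}') -- bash;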

Running Docker Commands with a bash script inside a container

I'm trying to automate deployment with webhooks to Docker Hub based on this tutorial. One container runs the web app on port 80. On the same host I run another container that listens for POST requests from Docker Hub, triggering the host to update the web app image. The POST request triggers a bash script that looks like this:
echo pulling...
docker pull my_username/image
docker stop img
docker rm img
docker run --name img -d -p 80:80 my_username/image
A test payload successfully triggers the script. However, the container logs the following complaints:
pulling...
app/deploy.sh: line 4: docker: command not found
...
app/deploy.sh: line 7: docker: command not found
It seems that the bash script does not access the host implicitly. How to proceed?
Stuff I tried but did not work:
When firing up the listener container, I added the host IP like this, based on the docs:
HOSTIP=$(ip -4 addr show scope global dev eth0 | grep inet | awk '{print $2}' | cut -d / -f 1)
docker run --name listener --add-host=docker:${HOSTIP} -e TOKEN="test654321" -d -p 5000:5000 mjhea0/docker-hook-listener
Similarly, I substituted the --add-host argument with --add-host=dockerhost:$(ip route | awk '/docker0/ { print $NF }'), based on this suggestion.
Neither the docker binary nor the docker socket will be present in a container by default (why would they be?).
You can solve this fairly easily by mounting the binary and socket from the host when you start the container, e.g.:
$ docker run -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock debian docker --version
Docker version 1.7.0, build 0baf609
You seem to be a bit confused about how Docker works; I'm not sure exactly what you mean by "access the host implicitly" or how you think it would work. Think of a container as an isolated and ephemeral machine, completely separate from your host, something like a fast VM.
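Applied to the listener container from the question, the run command might then look like this (a sketch; mounting the host's docker binary assumes it has no library dependencies missing from the image):
docker run --name listener \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(which docker):/usr/bin/docker \
  -e TOKEN="test654321" -d -p 5000:5000 mjhea0/docker-hook-listener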
