Scenario 1: I am deploying a Spring Boot app on a physical machine, where it listens for SSH connections on a port of that machine, let's say 9889. Once the application starts, the physical machine listens on that port for any incoming request. Any client can connect to that port using the command
ssh admin@ipaddress -p 9889
It returns the connection I expect.
Scenario 2: I am deploying the same app into a Kubernetes cluster. I set the external IP of the service to the IP of the master node. When I type kubectl get services I get something like
NAME        TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
app-entry   LoadBalancer   172.x.x.xx   172.x.x.x     3000:3552
How can I ssh to the app in the Kubernetes cluster using the external IP and port 3000? Every time I try to ssh using the command above, it returns connection refused.
As @zerkms mentioned in the comments, you can use kubectl exec to connect to the shell of a container running inside the Kubernetes cluster.
$ kubectl exec -it -n <namespace> <pod_name> -- /bin/bash
$ kubectl exec -it -n <namespace> <pod_name> -- /bin/sh
# if the pod has multiple containers
$ kubectl exec -it -n <namespace> <pod_name> -c <container_name> -- /bin/bash
If you have a server running in your pod that serves on a specific port, you can use kubectl port-forward to connect to it from your local machine (i.e. localhost:PORT).
# Pods
$ kubectl port-forward -n <namespace> pods/<pod_name> <local_port>:<container_port>
# Services
$ kubectl port-forward -n <namespace> svc/<service_name> <local_port>:<container_port>
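For example, applied to the service from the first question, a rough sketch would be to forward a local port to service port 3000 and then ssh over the tunnel (this assumes the SSH server inside the pod is what sits behind that service port):
$ kubectl port-forward svc/app-entry 9889:3000
# in a second terminal
$ ssh admin@localhost -p 9889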
Related
I want to create a script that automatically runs port-forwarding for a specific pod by name (app3; I have multiple apps in this namespace and I need to run it just for app3).
e.g.
kubectl port-forward pods/app3-86499b66d-dfwf7 8000:8000 -n web
I've started with
kubectl get pod -n webide-system | grep app3
The output is:
app3-86499b66d-dfwf7 1/1 Running 0 18h
However, I'm not sure how to take the output, which is the pod name, and run the port forwarding against it.
In the pod name, the following prefix is constant:
pods/app3
And this suffix changes for each deployment:
-86499b66d-dfwf7
Any idea how to make it work with a script?
POD_NAME=$(kubectl get pod -n webide-system | grep app3 | sed 's/ .*//')
kubectl port-forward pods/$POD_NAME 8000:8000 -n web
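A slightly more robust sketch of the same idea selects the pod by label instead of grep. The app=app3 label and the webide-system namespace are assumptions here; check yours with kubectl get pods --show-labels:
#!/bin/bash
# look up the current app3 pod by label, then forward port 8000 to it
POD_NAME=$(kubectl get pods -n webide-system -l app=app3 -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
  echo "no app3 pod found" >&2
  exit 1
fi
kubectl port-forward "pods/$POD_NAME" 8000:8000 -n webide-system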
The answer provided by @Beta is correct, but I would like to show you yet another approach.
You can expose the pod using a Service:
$ kubectl expose pod app3-86499b66d-dfwf7 --port 8000 --name app3
service/app3 exposed
But in this case the pod is referenced by its pod-template-hash, so when the pod is recreated, the service will no longer work.
This is why a much better option is to expose the deployment:
$ kubectl expose deployment app3 --port 8000
service/app3 exposed
And now you can access it using the port-forward functionality as follows:
$ kubectl port-forward service/app3 8000:8000
Forwarding from 127.0.0.1:8000 -> 8000
Forwarding from [::1]:8000 -> 8000
The second approach additionally takes care of load balancing, so if you had more than one replica, it would handle traffic distribution for you.
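You can confirm that by listing the endpoints behind the service; each ready replica shows up as a separate address the service distributes traffic to:
$ kubectl get endpoints app3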
I am writing a shell script for executing a pod for which the syntax is:
winpty kubectl --kubeconfig="C:\kubeconfig" -n namespace exec -it podname bash
This works fine, but since the pod name is not stable and changes with every deployment, is there any alternative to this?
Thanks.
You can use the normal kubectl exec command, but you first have to determine the value of the changing pod name.
Assuming that you have a deployment with pods labeled app=example, simply execute:
$ kubectl exec -it $(kubectl get pods -l app=example -o custom-columns=:metadata.name) -- bash
EDIT:
You can also execute:
POD_NAME=$(kubectl get pods -l app=example -o custom-columns=":metadata.name")
or
POD_NAME=$(kubectl get pods -l app=example -o jsonpath="{.items[0].metadata.name}")
finally
$ winpty kubectl exec -ti $POD_NAME -- bash
Make sure that you execute the command in the proper namespace - you can also add the -n flag to specify it.
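Putting it together with the kubeconfig and namespace from your original command, a possible sketch (the app=example label is an assumption; adjust it to whatever labels your deployment sets):
#!/bin/bash
# resolve the current pod name by label, then exec into it
POD_NAME=$(kubectl --kubeconfig="C:\kubeconfig" -n namespace get pods -l app=example -o jsonpath='{.items[0].metadata.name}')
winpty kubectl --kubeconfig="C:\kubeconfig" -n namespace exec -it "$POD_NAME" -- bash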
You can use the following command:
kubectl -n <namespace> exec -it deploy/<deployment-name> -- bash
Add a service to your application:
As you know, pods are ephemeral; they come in and out of existence dynamically to ensure your application stays in compliance with your configuration. This behavior implements the scaling and self-healing aspects of Kubernetes.
Your application will consist of one or more pods that are accessible through a service. The application's service name and address do not change, so the service acts as the stable interface for reaching your application.
This method works whether your application has one pod or many.
Does that help?
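As a minimal sketch, assuming your pods are managed by a deployment called myapp serving on port 8080 (both placeholders), you could expose it once and then always address the app through the service:
# create a service with a stable name and address
$ kubectl expose deployment myapp --port 8080
# other pods reach it at myapp.<namespace>.svc.cluster.local:8080;
# from your own machine you can port-forward to the service
$ kubectl port-forward svc/myapp 8080:8080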
I am trying to run pg_dump in a Docker container via kubectl and save the output to my local machine.
Here's what I have so far:
kubectl exec -it MY_POD_NAME -- pg_dump -h DB_HOST -U USER_NAME SCHEMA_NAME > backup.sql
However, this just hangs currently. I am fairly certain it's due to the -- ignoring the >.
kubectl exec -it MY_POD_NAME -- pg_dump -h DB_HOST -U USER_NAME SCHEMA_NAME outputs to the console as expected.
Use kubectl port-forward POD_NAME 6000:5432 to forward your pod's port (assumed to be 5432) onto localhost:6000.
Then run pg_dump directly, with the hostname as localhost and the port as 6000:
$ pg_dump -h localhost -p 6000 -U USER_NAME SCHEMA_NAME > backup.sql
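The two steps can be wrapped in a small script, roughly like this (pod name, user and schema are the same placeholders as above; the sleep is a crude wait for the tunnel to come up):
#!/bin/bash
# open the tunnel in the background, dump through it, then tear it down
kubectl port-forward POD_NAME 6000:5432 &
PF_PID=$!
sleep 3
pg_dump -h localhost -p 6000 -U USER_NAME SCHEMA_NAME > backup.sql
kill $PF_PID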
Managed to solve it myself - not the most elegant solution, but it works.
First I open a shell on a pod in the cluster which has network access to the RDS instance:
kubectl exec -it psql-xxx-xxx -- sh
Once connected to the shell, run pg_dump to backup the database:
pg_dump -h db.internal.dns -U user schema_name > backup.sql
Once the backup completes, exit the container and copy the file from the pod to my local machine:
kubectl cp psql-xxx-xxx:/backup.sql ./backup.sql
Will continue searching for a streamlined way to do this.
If the database is indeed running as a Docker image in a remote Kubernetes instance, I was successful with
kubectl exec -it POD_NAME -- pg_dump -h localhost -U DB_USER DB_NAME > backup.sql
Here localhost points to localhost in the remote instance. Run directly from the local terminal, this saves the database dump to my local machine, even though the database itself is running in the cloud.
As part of a build pipeline I would like to start containers with a free port.
Looking for something like this:
docker run --name frontend -p $(gimme-a-free-port):80 frontend:latest
You can use port 0. The application passes 0 to the kernel, and the kernel assigns an unused port to it.
docker run --name frontend -p 0:80 frontend:latest
Or:
docker run --name frontend -p 80 frontend:latest
In the second example I'm only specifying the container port; the host port will be assigned automatically.
To verify:
docker port <containerid or container name>
80/tcp -> 0.0.0.0:32768
To get the random port value only:
docker inspect -f '{{ (index (index .NetworkSettings.Ports "80/tcp") 0).HostPort }}' <containerid or container name>
32768
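For the build-pipeline case from the question, you could capture that value into a variable and use it in later steps, roughly like this (the curl health check is just an illustration):
$ docker run --name frontend -d -p 80 frontend:latest
$ FRONTEND_PORT=$(docker inspect -f '{{ (index (index .NetworkSettings.Ports "80/tcp") 0).HostPort }}' frontend)
$ curl "http://localhost:${FRONTEND_PORT}/"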
If you don't assign the host port, Docker will automatically pick a random port when publishing the container port.
For example;
$ docker run --name frontend -p 80 -dit busybox
4439bdce51eee473b1e961664839a410754157bf69da2d2545ab51528a42111c
$ docker port 4439bdce51eee473b1e961664839a410754157bf69da2d2545ab51528a42111c
80/tcp -> 0.0.0.0:32768
(or);
$ docker inspect -f '{{json .NetworkSettings.Ports}}' 4439bdce51eee473b1e961664839a410754157bf69da2d2545ab51528a42111c
{"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"32768"}]}
Get Container's External Port
PORT="$(docker ps|grep some_container|sed 's/.*0.0.0.0://g'|sed 's/->.*//g')"
Reference: https://blog.dcycle.com/snippet/2016-10-04/get-docker-container-port/
I'm trying to automate deployment with webhooks to Docker Hub based on this tutorial. One container runs the web app on port 80. On the same host I run another container that listens for POST requests from Docker Hub, triggering the host to update the web app image. The POST request triggers a bash script that looks like this:
echo pulling...
docker pull my_username/image
docker stop img
docker rm img
docker run --name img -d -p 80:80 my_username/image
A test payload successfully triggers the script. However, the container logs the following complaints:
pulling...
app/deploy.sh: line 4: docker: command not found
...
app/deploy.sh: line 7: docker: command not found
It seems that the bash script does not access the host implicitly. How to proceed?
Stuff I tried but did not work:
When firing up the listener container I added the host IP like this, based on the docs:
HOSTIP=$(ip -4 addr show scope global dev eth0 | grep inet | awk '{print $2}' | cut -d / -f 1)
docker run --name listener --add-host=docker:${HOSTIP} -e TOKEN="test654321" -d -p 5000:5000 mjhea0/docker-hook-listener
Similarly, I substituted the --add-host option with --add-host=dockerhost:$(ip route | awk '/docker0/ { print $NF }') based on this suggestion.
Neither the docker binary nor the docker socket will be present in a container by default (why would they be?).
You can solve this fairly easily by mounting the binary and socket from the host when you start the container, e.g.:
$ docker run -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock debian docker --version
Docker version 1.7.0, build 0baf609
You seem to be a bit confused about how Docker works; I'm not sure exactly what you mean by "access the host implicitly" or how you think it would work. Think of a container as an isolated and ephemeral machine, completely separate from your host, something like a fast VM.
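Applied to the listener container from your question, that would look roughly like this (image, token and ports copied from your run command; the two -v mounts are the only addition):
$ docker run --name listener \
    -v $(which docker):/usr/bin/docker \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -e TOKEN="test654321" -d -p 5000:5000 mjhea0/docker-hook-listener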