Bash command to return a free port

As part of a build pipeline I would like to start containers with a free port.
Looking for something like this:
docker run --name frontend -p $(gimme-a-free-port):80 frontend:latest

You can use port 0. When an application passes port 0 to the kernel, the kernel assigns an unused port to it.
docker run --name frontend -p 0:80 frontend:latest
Or:
docker run --name frontend -p 80 frontend:latest
In the second example I'm only specifying the container port; the host port will be assigned automatically.
To verify:
docker port <containerid or container name>
80/tcp -> 0.0.0.0:32768
To get the random port value only:
docker inspect -f '{{ (index (index .NetworkSettings.Ports "80/tcp") 0).HostPort }}' <containerid or container name>
32768
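Putting this together, a small "gimme-a-free-port" helper like the one the question asks for can be written by binding port 0 and letting the kernel choose. This is only a sketch, and it assumes python3 is available on the build host (its stdlib socket module is used as a portable way to do the bind):

```shell
# Sketch of a gimme-a-free-port helper: bind port 0, let the kernel pick,
# print the chosen port, and release it again.
gimme_a_free_port() {
  python3 -c 'import socket
s = socket.socket()
s.bind(("", 0))            # port 0 = "kernel, pick any free port"
print(s.getsockname()[1])  # report which port was chosen
s.close()'
}

PORT=$(gimme_a_free_port)
echo "$PORT"
```

Usage would then be docker run --name frontend -p "$PORT":80 frontend:latest. Note there is a small race window between releasing the port and Docker binding it, so letting Docker assign the port (as above) is the more robust option.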

If you don't assign a host port, Docker will automatically pick a random port when publishing the container port.
For example:
$ docker run --name frontend -p 80 -dit busybox
4439bdce51eee473b1e961664839a410754157bf69da2d2545ab51528a42111c
$ docker port 4439bdce51eee473b1e961664839a410754157bf69da2d2545ab51528a42111c
80/tcp -> 0.0.0.0:32768
Or:
$ docker inspect -f '{{json .NetworkSettings.Ports}}' 4439bdce51eee473b1e961664839a410754157bf69da2d2545ab51528a42111c
{"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"32768"}]}
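To pull HostPort out of that JSON without the Go template, the output can be piped through a JSON parser. In this sketch the JSON is captured in a shell variable as a hypothetical stand-in for a live docker inspect call, and parsed with python3's stdlib (jq '."80/tcp"[0].HostPort' would work the same way):

```shell
# Hypothetical sample of the JSON that `docker inspect` printed above
json='{"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"32768"}]}'

# Extract the HostPort field with python3's stdlib json module
port=$(printf '%s' "$json" | python3 -c 'import json, sys
print(json.load(sys.stdin)["80/tcp"][0]["HostPort"])')
echo "$port"    # prints 32768
```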

Get Container's External Port
PORT="$(docker ps | grep some_container | sed 's/.*0.0.0.0://g' | sed 's/->.*//g')"
Note that parsing docker ps output like this is brittle; docker port or docker inspect (shown above) is more robust.
Reference: https://blog.dcycle.com/snippet/2016-10-04/get-docker-container-port/
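The sed logic above can be exercised against a sample line to see what it extracts. The line below is a hypothetical stand-in for one row of live docker ps output, in the same format shown earlier:

```shell
# Hypothetical sample row of `docker ps` output for a container named some_container
line='4439bdce51ee  busybox  "sh"  2 minutes ago  Up 2 minutes  0.0.0.0:32768->80/tcp  some_container'

# Same sed logic as the snippet above, with the dots escaped and combined into one call
PORT=$(printf '%s\n' "$line" | sed 's/.*0\.0\.0\.0://; s/->.*//')
echo "$PORT"    # prints 32768
```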

Related

How to ssh into a Kubernetes Pod

Scenario 1: I deploy a Spring Boot app on a physical machine, where it opens an SSH connection on a port of that machine, say 9889. Once the application starts, the machine listens on that port for incoming requests, and any client can connect using the command
ssh admin@ipaddress -p 9889
It returns the connection I expect.
Scenario 2: I deploy that app into a Kubernetes cluster and set the external IP of the service to the IP of the master node. When I type kubectl get services I get something like:
NAME       TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)
app-entry  LoadBalancer   172.x.x.xx   172.x.x.x     3000:3552
How can I ssh to the app in the Kubernetes cluster using the external IP and port 3000? Every time I try to ssh with the command above, the connection is refused.
As @zerkms mentioned in the comments, you can use kubectl exec to open a shell in a container running inside the Kubernetes cluster.
$ kubectl exec -it -n <namespace> <pod_name> -- /bin/bash
$ kubectl exec -it -n <namespace> <pod_name> -- /bin/sh
# if the pod has multiple containers
$ kubectl exec -it -n <namespace> <pod_name> -c <container_name> -- /bin/bash
If you have a server running in your pod that serves on a specific port, you can use kubectl port-forward to connect to it from your local machine (i.e. localhost:PORT).
# Pods
$ kubectl port-forward -n <namespace> pods/<pod_name> <local_port>:<container_port>
# Services
$ kubectl port-forward -n <namespace> svc/<service_name> <local_port>:<container_port>

docker: how to set port in dockerfile [duplicate]

This question already has an answer here:
Bind container port to host inside Dockerfile
(1 answer)
Closed 3 years ago.
When I call the docker run command in my terminal, the server starts up fine and is accessible, but when I try to set the port in the Dockerfile instead, it does not work.
Is there a way I can set the port in the dockerfile explicitly? Thanks for any help.
This works:
docker run -d -p 5555:4444 -v /dv/sm:/dv/sm sa:latest
I remove the -p flag and try to "pass" it in via the Dockerfile, but it does not work (error: This site can’t be reached).
Not working:
docker run -d -v /dv/sm:/dv/sm sa:latest
I've tried the following Dockerfiles:
FROM WorkingTestImage as MyImage
ENTRYPOINT "/opt/bin/entry_point.sh"
CMD ["-p","5555:4444"]
FROM WorkingTestImage as MyImage
ENTRYPOINT ["/opt/bin/entry_point.sh","-p","5555:4444"]
CMD ["-p","5555:4444"]
FROM WorkingTestImage as MyImage
ENTRYPOINT ["/opt/bin/entry_point.sh","-p","5555:4444"]
The -p option belongs to the docker run command, while CMD in a Dockerfile only supplies the command (or default arguments) that runs inside the container when it starts, so port publishing is out of its scope.
If you want to write the port mapping as code, you need to use Docker Compose or Kubernetes.
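For example, a minimal Compose file could carry the same mapping as the working docker run command. This is only a sketch, assuming the sa:latest image and volume path from the question:

```yaml
# docker-compose.yml (hypothetical; mirrors the flags of the working `docker run`)
services:
  sa:
    image: sa:latest
    ports:
      - "5555:4444"   # host:container, same as -p 5555:4444
    volumes:
      - /dv/sm:/dv/sm
```

Running docker compose up -d then publishes the port without any -p flag on the command line.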

Phlex Docker image is not reachable via http

I'm installing Phlex with Docker on my Windows 10 PC.
I have run the command docker create --name=Phlex --net=host -v /g/phlex:/config -e HTTPPORT=5666 -e HTTPSPORT=5667 -e FASTCGIPORT=9000 -p 5666:80 -p 5667:443 --privileged digitalhigh/phlex
and the container is created.
When I start the container (docker start Phlex), it runs successfully.
However, when I try to connect to localhost:5666 or localhost:5667, the connection is refused. What am I doing wrong here? Phlex EXPOSES ports 80 and 443, and the only suspicious thing in the log is ip: either "to" is duplicate, or "224.0.0.0" is garbage, and I have no idea what that means.
This is my full workflow I have done nothing else.
You need to use
docker run --name=Phlex -p 5666:5666 -p 5667:5667 -v /g/phlex:/config -e HTTPPORT=5666 -e HTTPSPORT=5667 -e FASTCGIPORT=9000 --privileged digitalhigh/phlex
When you use --net=host you should not be using port mappings, so no -p X:Y flags should be present. Conversely, when you want port mappings, don't use --net=host.
I also looked at the image: it runs nginx and FPM in the same container. If you are just testing Phlex, or it's not core to what you work on, this image is fine; otherwise you should build your own Dockerfile, as this one is not particularly optimized.

How to run a docker image, ssh into the container, and bind a port to "49" in a single command?

I wanted to run the image and open a shell (/bin/bash) in the container at the same time.
To start a container and get a shell:
docker run -i -t --entrypoint /bin/bash <Image ID>
But with the above command, I am not able to bind a port to 49.
I think I found it.
docker run -p 8046:49 -i -t --entrypoint /bin/bash [Image Id]
The above command starts a container from the image, binds host port "8046" to the service running on container port "49", and also gives you a terminal in that container.
Thanks for all your comments.

Running Docker Commands with a bash script inside a container

I'm trying to automate deployment with webhooks to Docker Hub based on this tutorial. One container runs the web app on port 80. On the same host, I run another container that listens for POST requests from Docker Hub, triggering the host to update the web app image. The POST request triggers a bash script that looks like this:
echo pulling...
docker pull my_username/image
docker stop img
docker rm img
docker run --name img -d -p 80:80 my_username/image
A test payload successfully triggers the script. However, the container logs the following complaints:
pulling...
app/deploy.sh: line 4: docker: command not found
...
app/deploy.sh: line 7: docker: command not found
It seems that the bash script does not access the host implicitly. How to proceed?
Stuff I tried but did not work:
when firing up the listener container I added the host IP like this based on the docs:
HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print\$2}' | cut -d / -f 1`
docker run --name listener --add-host=docker:${HOSTIP} -e TOKEN="test654321" -d -p 5000:5000 mjhea0/docker-hook-listener
similarly, I substituted the --add-host command with --add-host=dockerhost:$(ip route | awk '/docker0/ { print $NF }') based on this suggestion.
Neither the docker binary nor the docker socket will be present in a container by default (why would they be?).
You can solve this fairly easily by mounting the binary and socket from the host when you start the container e.g:
$ docker run -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock debian docker --version
Docker version 1.7.0, build 0baf609
You seem to be a bit confused about how Docker works; I'm not sure exactly what you mean by "access the host implicitly" or how you think it would work. Think of a container as an isolated and ephemeral machine, completely separate from your host, something like a fast VM.
