What are the port mapping complexities solved by Flannel?

Suppose I have 3 containers running on a single host and we are building a Hadoop cluster:
one is the master (NameNode) and the other two are slaves (DataNodes).
We need to map ports:
docker run -itd -p 50070:50070 --name master centos:bigdata
docker run -itd -p 50075:50075 -p 50010:50010 --name slave1 centos:bigdata
Now ports 50075, 50010, and 50070 are busy on the host, so we cannot map them for slave2.
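For example, trying to reuse one of those ports for slave2 fails immediately (a hypothetical run; the exact error text varies by Docker version):
docker run -itd -p 50075:50075 --name slave2 centos:bigdata
docker: Error response from daemon: driver failed programming external connectivity: Bind for 0.0.0.0:50075 failed: port is already allocated.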
And if we do some random mapping like:
docker run -p 123:50075 -p 234:50010 --name slave2 centos:bigdata
then the containers won't be able to communicate with each other, and the cluster won't work.
So, can Flannel solve this problem?

Related

Windows redis-client fails to connect to Docker Redis server

I am using Windows 10 and 64-bit Redis. I started a Redis container with the command:
docker run --name myredis -d redis redis-server --appendonly yes
When I try to connect to this container using:
redis-cli -h 192.168.99.1 -p 6379
it shows:
Could not connect to Redis at 192.168.99.1:6379: Unknown error
Here, 192.168.99.1 is my virtual machine's IP address. Does anyone know how to solve this issue? Thanks!
Your original docker run never published the Redis port to the host, so nothing is listening on 192.168.99.1:6379. To connect to a Redis container from a remote machine, publish the port:
Start the Redis container on the host (192.168.99.1):
docker run --name myredis -p 7000:6379 -d redis redis-server
Connect from the remote machine:
redis-cli -h 192.168.99.1 -p 7000
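You can verify the mapping before connecting (myredis is the container started above):
docker port myredis
6379/tcp -> 0.0.0.0:7000
redis-cli -h 192.168.99.1 -p 7000 ping
PONG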

Access Docker container by name on Windows

I created 2 Windows containers (I am trying to run a Windows application, not Docker for Windows in a VM) and want to add a link from a to b:
docker run -d --name a imageA
docker run -d --link a:a --name b imageB
I can access a from b by IP, but access by name is not working.
Create a docker network first:
docker network create myNetwork
Connect both containers (container1 and container2) to the network:
docker network connect myNetwork container1
docker network connect myNetwork container2
Run docker network inspect to verify that both containers are attached:
docker network inspect myNetwork
Open a shell in container1:
docker exec -it container1 /bin/bash
Now you can ping container2 by name:
ping container2
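Alternatively, you can attach the containers to the network when you create them instead of connecting them afterwards (a sketch reusing the image names from the question):
docker run -d --network myNetwork --name container1 imageA
docker run -d --network myNetwork --name container2 imageB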
Hope that helps!

Serving a web page with Docker using a custom /etc/hosts entry on the host machine

I have added a host/IP entry to my MacBook Pro's /etc/hosts file, something like:
192.168.0.0 example.test
What I would like to do is run a web server with Docker that is reachable via that hostname instead of localhost.
I can't figure out how to make this work. I have a Laravel project running, and can make it serve to localhost with Docker via:
php artisan serve --host=0.0.0.0
I have tried using the --add-host flag with Docker's run command when I start the container, something like:
docker container run -it -p 80:80 -v $(pwd):/app --add-host example.test:192.168.0.0 my-custom-container bash
Any help would be greatly appreciated. I am pretty stuck.
The --hostname argument sets the hostname of the container itself:
docker container run --hostname example.test -it -p 80:80 -v $(pwd):/app --add-host example.test:192.168.0.0 my-custom-container bash
Example:
$ docker run -it debian
root@2843ba8b9de5:/# hostname
2843ba8b9de5
root@2843ba8b9de5:/# exit
$ docker run -it --hostname foo.example.com debian
root@foo:/# hostname
foo.example.com
root@foo:/#
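Note that --hostname only affects the container's own view of its name. To browse the Laravel app as example.test from the Mac itself, the /etc/hosts entry needs to point at an address the host answers on, typically the loopback address (an assumption about the intended setup, since -p 80:80 already publishes the container's port on the host):
127.0.0.1 example.test
curl http://example.test/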

How to remove a Docker container using its port number

I have Node services which run in Docker containers.
I am using a shell script to run these services.
I want to run three different instances of the same service on 3 different ports, say 9011, 9022, and 9033.
I also want to configure it in such a way that after every new deployment it stops the previous service and removes it.
I am using docker rm test-service to remove it, but that would remove the other instances too;
with this approach only one instance can be running.
Is there any way to remove a Docker container running on a specific port?
Here is my shell script:
#!/bin/bash
ORGANISATION="$1"
SERVICE_NAME="$2"
VERSION="$3"
ENVIRONMENT="$4"
INTERNAL_PORT_NUMBER="$5"
EXTERNAL_PORT_NUMBER="$6"
NETWORK="$7"
# build the image, baking in the internal port and environment as build args
docker build -t ${ORGANISATION}/${SERVICE_NAME}:${VERSION} --build-arg PORT=${INTERNAL_PORT_NUMBER} --build-arg ENVIRONMENT=${ENVIRONMENT} --no-cache .
# stop and remove the existing container holding this name
docker stop ${SERVICE_NAME}
docker rm ${SERVICE_NAME}
sudo npm install
sudo npm install -g express
# start the new instance, publishing the internal port on the chosen host port
docker run -p ${EXTERNAL_PORT_NUMBER}:${INTERNAL_PORT_NUMBER} --network ${NETWORK} --name ${SERVICE_NAME} --restart always -itd ${ORGANISATION}/${SERVICE_NAME}:${VERSION}
I cannot run more than one container with the same name. Can I run the Docker service with the same name on 3 different ports? If yes, what modifications do I need to make in the above shell script?
That would be three docker run commands, each using the same internal port but mapped to a different host port, with three different names:
docker run -p ${EXTERNAL_PORT_NUMBER1}:${INTERNAL_PORT_NUMBER} --name ${SERVICE_NAME1} -itd ${ORGANISATION}/${SERVICE_NAME}:${VERSION}
docker run -p ${EXTERNAL_PORT_NUMBER2}:${INTERNAL_PORT_NUMBER} --name ${SERVICE_NAME2} -itd ${ORGANISATION}/${SERVICE_NAME}:${VERSION}
docker run -p ${EXTERNAL_PORT_NUMBER3}:${INTERNAL_PORT_NUMBER} --name ${SERVICE_NAME3} -itd ${ORGANISATION}/${SERVICE_NAME}:${VERSION}
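To stop and remove whichever container is bound to a given host port, recent Docker versions support a publish filter on docker ps (a sketch; 9011 stands in for whichever port is being redeployed):
CONTAINER_ID=$(docker ps -q --filter "publish=9011")
if [ -n "$CONTAINER_ID" ]; then docker rm -f "$CONTAINER_ID"; fi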
I want to perform load balancing for the service.
See Docker swarm mode:
The swarm manager uses ingress load balancing to expose the services you want to make available externally to the swarm.
The swarm manager can automatically assign the service a PublishedPort or you can configure a PublishedPort for the service. You can specify any unused port. If you do not specify a port, the swarm manager assigns the service a port in the 30000-32767 range.
Example:
The following command publishes port 80 in the nginx container to port 8080 on every node in the swarm:
$ docker service create \
--name my-web \
--publish 8080:80 \
--replicas 2 \
nginx
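Once the service is up, the swarm routing mesh answers on the published port from any node (node-ip below is a placeholder for any manager or worker address):
$ docker service ls
$ curl http://node-ip:8080/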

Dockerized process fails to communicate with Elasticsearch: "None of the configured nodes are available"

I have a Spring Boot application which communicates with Elasticsearch 5.0.0 alpha 2.
My application successfully communicates with Elasticsearch and performs several queries.
When I try to dockerize my application, it fails to communicate with Elasticsearch, and I get the following error:
None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{127.0.0.1:9300}]
I have spent a lot of time searching; I found reports of problems when Elasticsearch itself is dockerized, but in my case the client is dockerized, and it works fine without Docker.
The command I used to create the Docker image is: docker build -t my-service .
The Dockerfile is:
FROM java:8
VOLUME /tmp
ADD ./build/libs/myjarfile-2.0.0.jar app.jar
EXPOSE 8090
RUN sh -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
To execute the image I use: docker run --name myname -d -p 8090:8090 -t my-service
Can someone share his/her experience with this issue?
Thanks,
Guy Hudara
The problem is that your Elasticsearch is not reachable from the dockerized app. When you put something in a Docker container, it is also isolated at the network layer, so localhost means the localhost of the container, not of the host. Therefore, if Elasticsearch also runs in a Docker container, use container linking and environment-variable injection; otherwise point your app at the host machine's main network interface address (not the loopback address).
Option 1
Assuming that Elasticsearch exposes 9200, try to run the following:
$ docker run -d --name=elasticsearch elasticsearch
$ docker run -d --name=my-app --link elasticsearch:elasticsearch -p 8090:8090 my-app
Then you can read the Elasticsearch address in your app from the environment variable ${ELASTICSEARCH_PORT_9200_TCP_ADDR}.
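You can inspect the variables that linking injects into the app container (a quick check, assuming the container names used above; the actual address will vary):
$ docker exec my-app env | grep ELASTICSEARCH
ELASTICSEARCH_PORT_9200_TCP_ADDR=172.17.0.2
ELASTICSEARCH_PORT_9200_TCP_PORT=9200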
Option 2
Assuming your host machine runs on 192.168.1.10, you can also do the following:
$ docker run -d -p 9200:9200 elasticsearch
$ docker run -d -p 8090:8090 my-app
Note that naming the elasticsearch container is optional here, but publishing the Elasticsearch port is mandatory. In this case you'll have to configure the Elasticsearch host in your app to the address 192.168.1.10.
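One way to avoid hard-coding that address is to pass it to the app container as an environment variable and read it in the application configuration (ES_HOST is a hypothetical variable name, not one the image defines):
$ docker run -d -p 8090:8090 -e ES_HOST=192.168.1.10 my-app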
