can't connect to docker container localhost connection refused - spring-boot

Very new to Docker. I am following this tutorial: https://medium.com/thecodefountain/develop-a-spring-boot-and-mysql-application-and-run-in-docker-end-to-end-15b7cdf3a2ba
I followed all the instructions (my application is called accessing-data-mysql). I believe I should have two containers running: one for MySQL and one for the application, but when I run docker container ls I only see the MySQL container listed. Below I am creating a container from my application's image and linking it to the running MySQL container.
PS C:\projects\project1> docker run -d -p 8089:8089 --name accessing-data-mysql --link mysql-standalone:mysql accessing-data-mysql
82f499c6897d1f6bd2eeaabe4aa25ae786508146929a7039785e4ca37d691435
PS C:\projects\project1> docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
62029a53b9d4 mysql "docker-entrypoint.s…" 16 minutes ago Up 16 minutes 3306/tcp, 33060/tcp mysql-standalone
My Dockerfile:
FROM openjdk:12
ADD target/user-mysql.jar user-mysql.jar
EXPOSE 8089
ENTRYPOINT ["java", "-jar", "user-mysql.jar"]
When connecting via browser to localhost:8089, I get a connection refused error. I am not even sure the service is running.
Below is the result of running docker ps -a and docker logs:
PS C:\projects\project1> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
82f499c6897d accessing-data-mysql "java -jar accessing…" 50 minutes ago Exited (0) 50 minutes ago accessing-data-mysql
62029a53b9d4 mysql "docker-entrypoint.s…" About an hour ago Up About an hour 3306/tcp, 33060/tcp mysql-standalone
PS C:\projects\project1> docker logs accessing-data-mysql
Hibernate ORM core version 5.4.27.Final
PS C:\projects\project1>
EDIT:
When I run the application locally directly from IntelliJ IDEA, I see the error No such host is known (mysql-standalone), which is the MySQL URL I configured to connect to the Dockerized MySQL. As soon as I change the MySQL URL to localhost:3306, it can connect. Does this mean the MySQL Docker instance is somehow not accepting connections?

Your container is not up and running. You need to see the container listed as Up when you run docker ps. You should see something like this:
~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
62029a53b9d4 mysql "docker-entrypoint.s…" 16 minutes ago Up 16 minutes 3306/tcp, 33060/tcp mysql-standalone
XXXXXXXXXXXX openjdk "java" XX minutes ago Up xx minutes 8089/tcp accessing-data-mysql
If you cannot see the second container, it is not running. Get its logs with docker logs accessing-data-mysql to find out why it is not starting.
Also, consider creating a docker-compose file for both containers and establishing a separate network. This is not required, and your example can work without it, but it makes things much easier to manage and troubleshoot.
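As a rough sketch of that suggestion (the MySQL credentials and database name below are placeholders, not values from the question), a docker-compose.yml could look like this:
version: "3"
services:
  mysql-standalone:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder, use your own password
      MYSQL_DATABASE: test           # placeholder database name
  accessing-data-mysql:
    image: accessing-data-mysql
    ports:
      - "8089:8089"
    depends_on:
      - mysql-standalone
Both services share the default network Compose creates, so the application can reach the database at the hostname mysql-standalone without --link. Note that depends_on only controls start order; the app may still need to retry until MySQL has finished initializing.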

Related

Docker containers only stay up when I access the host with ssh

I have two containers that were built with the command docker-compose up --build -d.
All containers build normally and stay up, but after I leave the machine they only stay up for a couple of hours and then drop again.
These containers run an API written in the PHP Laravel framework and an nginx reverse proxy.
docker ps shows the image as created 46 hours ago but Up only 2 seconds.
When I start the application and leave the machine where Docker is installed, it runs for at most two hours. If I access the machine via ssh and then open the application, it is running again without me having to do a docker-compose up.
What do I have to do to make these containers stay up as they would in a production environment?
There is an option that can help you when the container goes down or stops:
sudo docker run --restart unless-stopped --name <name you want to give the container> <name of your image>
(don't include the <> in your command)
After this, whenever the container goes down, Docker will restart it automatically (unless you explicitly stop it).
I think this trick is really useful when you have multiple containers running, and helpful when you want to update the server packages too.
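If the containers already exist (for example they were created by docker-compose), one way to apply the same policy without recreating them is docker update; the equivalent in a compose file is restart: unless-stopped under each service. A sketch, where the container names api and nginx are assumptions based on the question's description:
docker update --restart unless-stopped api nginx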

is there any way to run a docker image on host from other docker image? [duplicate]

I am using a docker container to build and deploy my software to a collection of ec2's. In the deployment script I build my software and then package it in a docker image. The image is pushed to my private registry, pulled by my production ec2's and then run. So essentially I will need to run docker within a docker container.
The problem is that I can't actually start docker on my container. If I try
service docker start
I get
bash: service: command not found
And if I try
docker -d
I get
2014/10/07 15:54:35 docker daemon: 0.11.1-dev 02d20af/0.11.1; execdriver: native; graphdriver:
[e2feb6f9] +job serveapi(unix:///var/run/docker.sock)
[e2feb6f9] +job initserver()
[e2feb6f9.initserver()] Creating server
2014/10/07 15:54:35 Listening for HTTP on unix (/var/run/docker.sock)
[error] attach_loopback.go:42 There are no more loopback device available.
loopback mounting failed
[e2feb6f9] -job initserver() = ERR (1)
2014/10/07 15:54:35 loopback mounting failed
The service command doesn't exist in the docker container, so I can't start docker that way. I'm not sure what I should be doing to start docker, so I'm a bit stuck here; any help is appreciated.
A bit more information
Host machine is running fedora 20 (will eventually be running amazon linux on an ec2)
Docker container is running centos 7.0
Host is running Docker version 1.2.0, build fa7b24f/1.2.0
Container is running docker-0.11.1-22.el7.centos.x86_64
How about not running 'docker inside docker', and instead driving the docker daemon on your host from within your docker container? Just mount your docker.sock and the docker binary:
docker run -v /var/run/docker.sock:/run/docker.sock -v $(which docker):/bin/docker [your image]
https://github.com/sameersbn/docker-gitlab uses this approach to spin up docker containers, take a look at this image.
You can also take a look at: https://registry.hub.docker.com/u/mattgruter/doubledocker/
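A related sketch of the same socket-mounting idea, assuming you use an image that already ships the docker CLI (here the official docker image) instead of mounting the host binary, which can fail when library dependencies differ:
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:latest docker ps
This lists the host's containers from inside the container, since the CLI talks to the host daemon through the mounted socket.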
UPDATE (July 2016)
The most current approach is to use docker:dind image, as described here:
https://hub.docker.com/_/docker/
Short summary:
$ docker run --privileged --name some-docker -d docker:dind
and then:
$ docker run --rm --link some-docker:docker docker info
While in almost all cases I would suggest following #cthulhu's answer and not running "docker in docker", in the cases when you must (e.g. a test suite which tests against multiple docker versions), use the following to create additional loopback devices:
#!/bin/bash
# create the loopback device nodes /dev/loop0 .. /dev/loop6 inside the container
for i in {0..6}
do
  mknod -m0660 /dev/loop$i b 7 $i
done
(Taken from the thread for Docker Issue #7058)
You can simply run docker inside the docker container using dind. Try this image from Jerome, as follows:
docker run --privileged -t -i jpetazzo/dind
Check this page for more details:
https://github.com/jpetazzo/dind

Getting Docker container network to run on Heroku

I am a Docker novice, but locally I can successfully run my container with: docker run -d -v /var/run/docker.sock:/var/run/docker.sock --net example-net -p 8080:8080 my/repo
Typically I can just push a container to Heroku and it just works (which I thought was the whole idea of Docker: once you get it running it should run everywhere), but in this case the application doesn't load.
Presumably something about the above docker run is non-standard and the Heroku Docker environment isn't running my container in the same way my local environment is.
I don't know enough about Docker or Heroku to really debug further.
The underlying application is a Spring Boot web app and Heroku logs say Web process failed to bind to $PORT within 60 seconds of launch
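That last error points at Heroku's requirement that the web process listen on the port passed in the PORT environment variable rather than a fixed 8080. A minimal Dockerfile sketch (the base image and jar name are placeholders, not taken from the question) that makes Spring Boot bind to it:
FROM openjdk:8-jdk-alpine
ADD target/app.jar app.jar
# shell form so $PORT is expanded at runtime; Heroku ignores EXPOSE and -p mappings
CMD java -Dserver.port=$PORT -jar /app.jar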

docker ports not available

I have a spring-config-sever project that I am trying to run via Docker. I can run it from the command line and my other services and browser successfully connect via:
http://localhost:8980/aservice/dev
However, if I run it via Docker, the call fails.
My config-server has a Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=build/libs/my-config-server-0.1.0.jar
ADD ${JAR_FILE} my-config-server-0.1.0.jar
EXPOSE 8980
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/my-config-server-0.1.0.jar"]
I build via:
docker build -t my-config-server .
I am running it via:
docker run my-config-server -p 8980:8980
And then I confirm it is running via
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1cecafdf99fe my-config-server "java -Djava.securit…" 14 seconds ago Up 13 seconds 8980/tcp suspicious_brahmagupta
When I run it via Docker, the browser fails with ERR_CONNECTION_REFUSED and my calling services fail with:
Could not locate PropertySource: I/O error on GET request for "http://localhost:8980/aservice/dev": Connection refused (Connection refused);
Adding full answer based on comments.
First, you have to specify -p before the image name:
docker run -p 8980:8980 my-config-server
Second, just configuring localhost with the host port won't make your my-service container talk to the other container. localhost inside a container refers to the container itself, not the host. You will need to use an appropriate Docker networking model so both containers can talk to each other.
If you are on Linux, the default network is the bridge, so you can look up the my-config-server container's IP with docker inspect and use it as your config server endpoint.
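For example, using the auto-generated container name shown in the docker ps output above:
docker inspect -f '{{ .NetworkSettings.IPAddress }}' suspicious_brahmagupta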
For example, if your my-config-server IP is 172.17.0.2, then the endpoint is http://172.17.0.2:8980/
spring:
  cloud:
    config:
      uri: http://172.17.0.2:8980
Just follow the Docker documentation for a bit more understanding of how networking works.
https://docs.docker.com/network/network-tutorial-standalone/
https://docs.docker.com/v17.09/engine/userguide/networking/
If you want to spin up both containers using docker-compose, then you can reach each container by its service name. Just follow Networking in Compose.
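A minimal sketch of that Compose setup (the client service name my-service is an assumption based on the question; SPRING_CLOUD_CONFIG_URI maps to the spring.cloud.config.uri property, and the hostname is simply the config server's service name):
version: "3"
services:
  my-config-server:
    image: my-config-server
    ports:
      - "8980:8980"
  my-service:
    image: my-service
    environment:
      SPRING_CLOUD_CONFIG_URI: http://my-config-server:8980
    depends_on:
      - my-config-server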
I could imagine that the application only listens on localhost, i.e. 127.0.0.1.
You might want to try setting the property server.address to 0.0.0.0.
Then port 8980 should also be available externally.
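Since the image's ENTRYPOINT already runs java -jar, anything placed after the image name is passed as an argument to the Spring Boot app, so one way to try this suggestion without rebuilding is:
docker run -p 8980:8980 my-config-server --server.address=0.0.0.0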

Dockerized process fails to communicate with Elasticsearch: "None of the configured nodes are available"

I have a Spring Boot application which communicates with Elasticsearch 5.0.0 alpha 2.
My application successfully communicates with Elasticsearch and performs several queries.
When I try to dockerize my application, it fails to communicate with Elasticsearch, and I get the following error:
None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{127.0.0.1:9300}]
I have spent a lot of time searching the internet, but I only found problems where Elasticsearch itself is dockerized; in my case the client is dockerized, and it works fine without Docker.
The command I used to create the docker image is: docker build -t my-service .
The Dockerfile is:
FROM java:8
VOLUME /tmp
ADD ./build/libs/myjarfile-2.0.0.jar app.jar
EXPOSE 8090
RUN sh -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
To execute the image I use: docker run --name myname -d -p 8090:8090 -t my-service
Can someone share his/her experience with this issue?
Thanks
Guy Hudara
The problem is that your Elasticsearch is not reachable from your dockerized application. When you put something in a Docker container it is also isolated at the network layer, and localhost means the localhost of the Docker container, not of the host itself. Therefore, if you have Elasticsearch in a Docker container as well, use container linking and environment variable injection, or point your app at your host machine's address on its main network interface (not the loopback).
Option 1
Assuming that Elasticsearch exposes 9200, try to run the following:
$ docker run -d --name=elasticsearch elasticsearch
$ docker run -d --name=my-app --link elasticsearch:elasticsearch -p 8090:8090 my-app
Then you can define the Elasticsearch address in your app using the env variable ${ELASTICSEARCH_PORT_9200_TCP_ADDR} that the link injects.
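To see what the legacy --link actually injects (the container name my-app is taken from the command above), you can inspect the environment from the host:
$ docker exec my-app env | grep ELASTICSEARCH
# e.g. ELASTICSEARCH_PORT_9200_TCP_ADDR=172.17.0.2 (the value depends on your bridge network)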
Option 2
Assuming your host machine's address is 192.168.1.10, you can also do the following:
$ docker run -d -p 9200:9200 elasticsearch
$ docker run -d -p 8090:8090 my-app
Note that naming the Elasticsearch container is optional here, but exposing the Elasticsearch port is mandatory. In this case you'll have to configure the Elasticsearch host in your app with the address 192.168.1.10.
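For instance, since Spring Boot maps environment variables onto configuration properties, the host address could be passed in at run time; the property below is an assumption that depends on which Elasticsearch client the app actually uses:
$ docker run --name myname -d -p 8090:8090 -e SPRING_DATA_ELASTICSEARCH_CLUSTER_NODES=192.168.1.10:9300 -t my-service
# maps to spring.data.elasticsearch.cluster-nodes (assumed property) via relaxed binding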
