Getting Docker container network to run on Heroku - spring-boot

I am a Docker novice, but locally I can successfully run my container with: docker run -d -v /var/run/docker.sock:/var/run/docker.sock --net example-net -p 8080:8080 my/repo
Typically I can just push a container to Heroku and it just works (which I thought was the whole idea of Docker: once a container runs in one place, it should run everywhere), but in this case the application doesn't load.
Presumably something about the above docker run is non-standard and the Heroku Docker environment isn't running my container in the same way my local environment is.
I don't know enough about Docker or Heroku to really debug further.
The underlying application is a Spring Boot web app, and the Heroku logs say: Web process failed to bind to $PORT within 60 seconds of launch.
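For context on that error: Heroku's container runtime assigns a dynamic port at launch and passes it to the container in the PORT environment variable; an image hard-wired to listen on 8080 never binds to it and is killed after 60 seconds. A minimal sketch of a Dockerfile CMD that honors it (shell form so the variable expands at run time; the jar path is a placeholder, not taken from the question):

# shell-form CMD so $PORT is expanded; falls back to 8080 for local runs
CMD java -Dserver.port=${PORT:-8080} -jar /opt/app/my-app.jar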

Related

Docker containers only stay up when I access the host with ssh

I have two containers that were built with the command docker-compose up --build -d.
Both containers build normally and stay up, but when I leave the machine they stay up for at most two hours before going down again.
These containers run an API written in the PHP Laravel framework behind an nginx reverse proxy.
docker ps shows the image was started 46 hours ago but has been up for only 2 seconds.
When I start the application and then leave the machine where Docker is installed, it runs for at most two hours. But if I access the machine via ssh and then hit the application, it is running again without my needing to run docker-compose up.
What do I have to do to keep these containers up, as a production environment requires?
There is an option that can help when a container goes down or stops:
sudo docker run --restart unless-stopped --name <name you want to use> <name of your image>
(don't type the <> in your command; note that the last argument is the image name, not a container name)
After doing this, any time that container goes down Docker will restart it for you automatically.
I think this trick is really useful when you have multiple containers running, and helpful when you want to update the server packages too.
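Since the containers in the question are managed with docker-compose, the same policy can go in the compose file instead of on the command line; a minimal sketch (the service names here are assumptions, not taken from the question):

services:
  api:
    build: .
    restart: unless-stopped
  proxy:
    image: nginx
    restart: unless-stopped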

Docker hub jhipster-registry not accessible on port 8761

I have recently started exploring the microservice architecture using jhipster and was trying to install and run the jhipster-registry from docker hub. Docker shows that the registry is running, but I am unable to access it on port 8761.
Pulled the image with docker pull jhipster/jhipster-registry
Started the container with docker run --name jhipster-registry -d jhipster/jhipster-registry
Here's a snapshot of what docker container ls returns:
Am I missing something over here?
You are starting the JHipster Registry container, but you aren't exposing the port.
You can publish the port by passing the flag -p 8761:8761, which will let you connect to it via localhost:8761 or 127.0.0.1:8761 in a browser.
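For example, the original command with the port published:

docker run --name jhipster-registry -d -p 8761:8761 jhipster/jhipster-registry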
You may need to configure some environment variables for the JHipster Registry to start correctly. These may depend on your generated app's options, such as authentication type. For convenience JHipster apps come with a docker-compose.yml file. You can start it with docker-compose -f src/main/docker/jhipster-registry.yml up, as documented.

Is there any way to run a docker image on the host from another docker image? [duplicate]

I am using a docker container to build and deploy my software to a collection of EC2 instances. In the deployment script I build my software and then package it in a docker image. The image is pushed to my private registry, pulled by my production EC2 instances and then run. So essentially I need to run docker within a docker container.
The problem is that I can't actually start docker on my container. If I try
service docker start
I get
bash: service: command not found
And if I try
docker -d
I get
2014/10/07 15:54:35 docker daemon: 0.11.1-dev 02d20af/0.11.1; execdriver: native; graphdriver:
[e2feb6f9] +job serveapi(unix:///var/run/docker.sock)
[e2feb6f9] +job initserver()
[e2feb6f9.initserver()] Creating server
2014/10/07 15:54:35 Listening for HTTP on unix (/var/run/docker.sock)
[error] attach_loopback.go:42 There are no more loopback device available.
loopback mounting failed
[e2feb6f9] -job initserver() = ERR (1)
2014/10/07 15:54:35 loopback mounting failed
The service command doesn't exist in the docker container, so I can't start docker that way. I'm not sure what I should be doing to start docker, so I'm a bit stuck here; any help is appreciated.
A bit more information
The host machine is running Fedora 20 (it will eventually be an EC2 instance running Amazon Linux)
The docker container is running CentOS 7.0
The host is running Docker version 1.2.0, build fa7b24f/1.2.0
The container is running docker-0.11.1-22.el7.centos.x86_64
How about not running 'docker inside docker', and instead driving the host's docker from within your docker container? Just mount your docker.sock and the docker binary:
docker run -v /var/run/docker.sock:/run/docker.sock -v $(which docker):/bin/docker [your image]
https://github.com/sameersbn/docker-gitlab uses this approach to spin up docker containers; take a look at that image.
You can also take a look at: https://registry.hub.docker.com/u/mattgruter/doubledocker/
UPDATE, July 2016
The most current approach is to use the docker:dind image, as described here:
https://hub.docker.com/_/docker/
Short summary:
$ docker run --privileged --name some-docker -d docker:dind
and then:
$ docker run --rm --link some-docker:docker docker info
While in almost all cases I would suggest following #cthulhu's answer and not running "docker in docker", in the cases when you must (e.g. a test suite which tests against multiple docker versions), use the following to create additional loopback devices:
#!/bin/bash
for i in {0..6}; do
  mknod -m0660 /dev/loop$i b 7 $i
done
(Taken from the thread for Docker Issue #7058)
You can simply run docker inside the docker container using dind. Try this image from Jerome, as follows:
docker run --privileged -t -i jpetazzo/dind
Check this page for more details:
https://github.com/jpetazzo/dind

Testcontainers: Running Testcontainers tests inside Docker [Running Docker inside Docker]

How do I run Testcontainers-based test cases inside a docker container?
I have a simple Spring Boot app with integration tests (component level) that interact with containers using Testcontainers. The test cases run fine from outside the container (on my local machine).
We are running everything in containers, and the build runs on a dockerized Jenkins image.
The Dockerfile creates the jar and then the image, but Testcontainers is not able to find a docker installation.
Below is my Dockerfile.
FROM maven:3.6-jdk-11-openj9
VOLUME ["/var/run/docker.sock"]
RUN apt-get update
RUN apt-get -y install docker.io
COPY . /usr/src/app
WORKDIR /usr/src/app
RUN mvn -Dmaven.repo.local=/root/m2 --batch-mode -f pom.xml clean package
EXPOSE 8080
CMD ["/bin/bash"]
While running the build I get the error below:
org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy - ping failed with configuration Environment variables, system properties and defaults. Resolved dockerHost=unix:///var/run/docker.sock due to org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
What's the best way to handle this case? I want to run my component-level integration tests during the mvn build phase using the Dockerfile.
The reference below did not help me:
https://www.testcontainers.org/supported_docker_environment/continuous_integration/dind_patterns/
This is not a complete answer, but: you should enable access to a Docker daemon from inside your container. Installing Docker and running its daemon inside your container is complicated, so it is not recommended. Docker can be controlled via a Unix socket or over TCP (I assume the host system is Linux).
How Testcontainers looks for Docker:
By default it tries to connect to the Unix socket /var/run/docker.sock. You can specify another socket path or a TCP address by setting the DOCKER_HOST environment variable.
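For example, to point Testcontainers (and the docker CLI) at a daemon listening on TCP rather than the default socket (the address is an example value):

export DOCKER_HOST=tcp://127.0.0.1:2376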
How Docker exposes its control API:
By default via the Unix socket /var/run/docker.sock (on your host). You can expose the Docker API elsewhere by adding the following parameters to the daemon's start command (where that command lives is system dependent): -H fd:// -H tcp://127.0.0.1:2376. Note that you can specify more than one option: -H fd:// is the default, and tcp://127.0.0.1:2376 tells Docker to also listen on localhost port 2376.
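On distributions where the daemon reads /etc/docker/daemon.json, a rough equivalent is the hosts key; note that if your init system already passes -H on the daemon's command line, also setting hosts here will stop the daemon from starting, so configure it in only one place:

{
  "hosts": ["fd://", "tcp://127.0.0.1:2376"]
}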
How to make Docker available inside your container ("Docker in Docker"): if you enabled network access, no additional config is needed beyond pointing Testcontainers at it as mentioned above. If you want to use the default Unix socket, you can map (mount) it into the container via the volume option:
docker run --volume /var/run/docker.sock:/var/run/docker.sock your-image-id-here
The remaining problem is that the mounted docker.sock inside the container will still be owned by root:docker (with the same uid:gid as on your host system), so Testcontainers will only work if the user of the running process is root or happens to have exactly the same group id inside your container as the docker group has on your host.
I do not yet know a clean solution to this one; for starters you can run your tests inside the container as root, or hard-code the container user's group id to match your host's docker group id, as sketched below.
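A run-time sketch of that last idea: look up the group id that owns the socket on the host and attach it to the container's user as a supplementary group (GNU stat syntax; the image name is a placeholder):

# --group-add gives the container user the host's docker gid at run time
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  my-build-image mvn test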

Docker on Mac is running but refusing to expose port

Mac here, running Docker Community Edition Version 17.12.0-ce-mac49 (21995).
I have Dockerized a web app with a Dockerfile like so:
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
ADD application.yml /opt/myapp
ADD logback.groovy /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
I then build that image like so:
docker build -t myapp .
I then run a container of that image like so:
docker run -it -p 9200:9200 --net="host" --env-file ~/myapp-local.env --name myapp myapp
In the console I see the app start up without any errors, and all seems to be well. Even my metrics publishers (which emit heartbeat and other health metrics every 20 seconds) are printing to the console as I would expect them to. Everything seems to be fine.
Except when I go to run a curl against my app from another terminal/session:
curl -i -H "Content-Type: application/json" -X POST -d '{"username":"heyitsme","password":"12345"}' http://localhost:9200/v1/auth/signIn
curl: (7) Failed to connect to localhost port 9200: Connection refused
Now, if this were a situation where the /v1/auth/signIn path wasn't valid, or there was something wrong with my request entity/payload, the server would pick up on it and send back an error (I can confirm this exact same curl works when I run the server outside of Docker as a standalone service).
So this is definitely a situation where the curl command can't connect to localhost:9200. Again, when I run my app outside of Docker that same curl command works perfectly, so I know my app is trying to stand up on port 9200.
Any ideas as to what could be going wrong here, or how I could begin troubleshooting?
The way you run your container has 2 conflicting parts:
-p 9200:9200 says: "publish (bind) port 9200 of the container to port 9200 of the host"
--net="host" says: "use the host's networking stack"
According to Docker for Mac - Networking docs / Known limitations, use cases, and workarounds, you should only publish a port:
I want to connect to a container from the Mac
Port forwarding works for localhost; --publish, -p, or -P all work. Ports exposed from Linux are forwarded to the Mac.
Our current recommendation is to publish a port, or to connect from another container. This is what you need to do even on Linux if the container is on an overlay network, not a bridge network, as these are not routed.
The command to run the nginx webserver shown in Getting Started is an example of this.
$ docker run -d -p 80:80 --name webserver nginx
Check that your app binds to 0.0.0.0:9200 and not localhost:9200 or something similar.
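In Spring Boot that is controlled by the server.address and server.port properties; a minimal sketch of the relevant application.yml lines (assuming defaults everywhere else):

server:
  address: 0.0.0.0
  port: 9200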
The problem seems to be the network mode you are running the container in.
Quick test: log in to your container and run the curl command there; hopefully it works. That would isolate the problem to the request not being forwarded from the host to the container.
Try running your container on the default bridge network and test.
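Concretely, using the run command from the question, that means dropping the --net="host" flag so the -p mapping takes effect:

docker run -it -p 9200:9200 --env-file ~/myapp-local.env --name myapp myapp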
Refer to this blog for details on the network modes in docker
TL;DR: you will need to add an iptables entry to allow the traffic to enter your container.
