Docker containers only stay up when I access the host with SSH - Laravel

I have two containers that were built with the command docker-compose up --build -d.
All containers build normally and stay up, but when I leave the machine they stay up for at most two hours before dropping again.
These containers run an API written in the PHP Laravel framework behind an nginx reverse proxy.
docker ps shows the image was created 46 hours ago while the container status is only "Up 2 seconds".
When I start the application and leave the machine where Docker is installed, it stays running for at most two hours. If I then access the machine via SSH and open the application again, it is running, without any need to do a docker-compose up.
What do I have to do to keep these containers up, as in a production environment?

There is a flag that can help you when a container goes down or stops:
sudo docker run --restart unless-stopped --name <name you want to use> <your image name>
(don't include the <> in your command)
After doing this, any time that container is down, Docker will restart it for you automatically.
I think this trick is really useful when you have multiple containers running, and helpful when you want to update the server packages too.
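Since the question brings everything up with docker-compose, the same policy can be set per service in docker-compose.yml instead. A minimal sketch, assuming the services are named app and nginx (adjust to your own compose file):

version: "3"
services:
  app:
    build: .
    restart: unless-stopped
  nginx:
    image: nginx
    restart: unless-stopped
    ports:
      - "80:80"

With restart: unless-stopped, the daemon brings the containers back after a crash or a host reboot, unless you stopped them yourself.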

Related

How to access a website running on docker after closing the debug on Visual Studio

I built a very simple web app and web API in .NET Core and configured docker-compose to get them to communicate over the same network correctly.
In Visual Studio, when I hit play on the Docker Compose project, it runs fine; both the web app and the web API work and communicate correctly.
In the Docker Desktop app I see them running (green).
But when I close/stop the debugger in VS I can't access the websites anymore, even though the containers are still running. I thought Docker worked as a sort of IIS.
Am I misunderstanding Docker's capabilities, or do I need to run them again from a CLI, or publish them somewhere, or what?
I thought the fact that the containers are up and running should mean they're live for me to navigate to.
Help me out here, please.
You are correct: unless there is some special routing happening, the fact that the containers are running means your services are available.
You can see the ports being exposed in the docker ps -a output:
CONTAINER ID   IMAGE                  COMMAND                    CREATED              STATUS              PORTS                                           NAMES
560f78689902   moviedecisionweb:dev   "C:\\remote_debugger\\…"   About a minute ago   Up About a minute   0.0.0.0:52002->80/tcp, 0.0.0.0:52001->443/tcp   mdweb
1cd7f72426fe   moviedecisionapi:dev   "C:\\remote_debugger\\…"   About a minute ago   Up About a minute   0.0.0.0:52005->80/tcp, 0.0.0.0:52004->443/tcp   mdapi
Based on the provided output, you have two docker containers running.
I'm assuming the ports 80 & 443 are serving the HTTP & HTTPS services (respectively) from your apps.
Based on this...
For container "mdweb", you should be able to access the docker services from your docker host machine (PC) via:
http://0.0.0.0:52002
https://0.0.0.0:52001
For container "mdapi", you should be able to access the docker services from your docker host machine (PC) via:
http://0.0.0.0:52005
https://0.0.0.0:52004
I believe you can use localhost, 127.0.0.1 & 0.0.0.0 interchangeably in the above.
You cannot use the hostnames "mdweb" or "mdapi" from your docker HOST machine, unless you have explicitly set up your DNS to handle these names. However, you can use these hostnames from inside another docker container on the same docker network.
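To double-check the mappings for a single container, the docker port command prints just that container's published ports. Given the docker ps output above, it should print something like:

docker port mdweb
80/tcp -> 0.0.0.0:52002
443/tcp -> 0.0.0.0:52001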
If you provide more information (e.g. your docker-compose.yml), we could help you further...

Testcontainers: running Testcontainers-based tests inside a Docker container (Docker inside Docker)

How do I run Testcontainers-based test cases inside a Docker container?
I have a simple Spring Boot app with integration tests (component level) that interact with containers using Testcontainers. The test cases run fine from outside a container (on the local machine).
We run everything in containers, and the build runs on a Docker Jenkins image.
The Dockerfile creates the jar and then the image, but Testcontainers is not able to find a Docker installation.
Below is my Dockerfile:
FROM maven:3.6-jdk-11-openj9
# intended mount point for the host's Docker socket
VOLUME ["/var/run/docker.sock"]
# install the Docker CLI inside the image
RUN apt-get update && apt-get -y install docker.io
COPY . /usr/src/app
WORKDIR /usr/src/app
# tests (including Testcontainers) run during this package phase
RUN mvn -Dmaven.repo.local=/root/m2 --batch-mode -f pom.xml clean package
EXPOSE 8080
CMD ["/bin/bash"]
While running the build I get the error below:
org.testcontainers.dockerclient.EnvironmentAndSystemPropertyClientProviderStrategy - ping failed with configuration Environment variables, system properties and defaults. Resolved dockerHost=unix:///var/run/docker.sock due to org.rnorth.ducttape.TimeoutException: Timeout waiting for result with exception
What's the best way to handle this case? I want to run my component-level integration tests during the mvn build phase using the Dockerfile.
The reference below did not help me:
https://www.testcontainers.org/supported_docker_environment/continuous_integration/dind_patterns/
This is not a complete answer, but: you should enable access to a Docker daemon from inside your container. Installing Docker and running its daemon inside your container is complicated, so it is not recommended. Docker can be controlled via a Unix socket or over TCP (I assume the host system is Linux).
How Testcontainers looks for Docker:
By default it tries to connect to the Unix socket /var/run/docker.sock. You can specify another socket path or a TCP address by setting environment variables (DOCKER_HOST).
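For example, to point Testcontainers at a TCP endpoint instead of the default socket (the address here is illustrative):

export DOCKER_HOST=tcp://127.0.0.1:2376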
How Docker exposes its control API:
By default via the Unix socket /var/run/docker.sock (on your host). You can expose the Docker API elsewhere by adding the following parameters to the command that launches the Docker daemon (where that command lives is system dependent): -H fd:// -H tcp://127.0.0.1:2376. Note that you can specify more than one option: -H fd:// is the default, and tcp://127.0.0.1:2376 tells Docker to also listen on localhost port 2376.
How to make Docker available inside your container ("Docker in Docker"): if you enabled network access, no additional config is needed except pointing Testcontainers to it as mentioned above. If you want to use the default Unix socket, you can map (mount) it into the container via the volume option:
docker run --volume /var/run/docker.sock:/var/run/docker.sock your-image-id-here
The remaining problem is that the mounted docker.sock inside the container will also be owned by root:docker (with the same uid:gid as on your host system), so Testcontainers will only work if your container user can connect to that socket. That is, the user of the running process must be root, or must happen to have exactly the same group id inside your container as the docker group has on your host system.
I don't know a good solution to this one yet, so for starters you can run your tests inside the container as root, or hard-code the container user's group id to match your host's docker group id.
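Putting the pieces together, a minimal sketch of the outer docker run (the image name testcontainers-build is an assumption). The --group-add flag grants the container user the host's docker group id, so a non-root user can reach the socket. Note this runs the tests at container run time rather than during docker build, since you cannot mount the socket into a classic image build:

# mount the host's Docker socket and grant its group to the container user
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add "$(stat -c '%g' /var/run/docker.sock)" \
  testcontainers-build \
  mvn --batch-mode clean package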

How to visit a docker service by ip address

I'm new to Docker and I'm probably missing a lot, although I went through the basic documentation. I'm trying to deploy a simple Spring Boot API.
I've packaged my API as a .jar file (docker-spring-boot.jar), then I installed Docker and pushed the image with the following commands:
sudo docker login
sudo docker tag docker-spring-boot phillalexakis/myfirstapi:01
sudo docker push phillalexakis/myfirstapi:01
Then I started the API with the docker run command:
sudo docker run -p 7777:8085 phillalexakis/myfirstapi:01
When I visit localhost:7777/hello I get the desired response.
This is my Dockerfile:
FROM openjdk:8
ADD target/docker-spring-boot.jar docker-spring-boot.jar
EXPOSE 8085
ENTRYPOINT ["java","-jar","docker-spring-boot.jar"]
Based on this answered post, this is the command to get the IP address:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
So I ran it with container_name_or_id = phillalexakis/myfirstapi:01 and I got this error:
Template parsing error: template: :1:24: executing "" at <.NetworkSettings.Networks>: map has no entry for key "NetworkSettings"
If I manage somehow to get the IP, will I be able to visit it and get the same response?
This is how I have it in my mind: ip:7777/hello
You have used the image name and not the container name.
Get the container name by executing docker ps.
The container ID is the value in the first column, the container name is the value in the last column. You can use both.
Then, when you have the IP, you will be able to access your API at IP:8085/hello, not IP:7777/hello
The port 7777 is available on the Docker Host and maps to the port 8085 on the container. If you are accessing the container directly - which you do, when you use its IP address - you need to use the port that the container exposes.
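To make the difference concrete, both of these should return the same response (the container IP 172.17.0.2 is illustrative, and the second form only works from a native-Linux host):

curl http://localhost:7777/hello       # via the published host port
curl http://172.17.0.2:8085/hello      # directly to the container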
There is also another alternative:
You can give the container a name when you start it by specifying the --name parameter:
sudo docker run -p 7777:8085 --name spring_api phillalexakis/myfirstapi:01
Now you can refer to the container by that name. Note, though, that it is other containers on the same Docker network that can reach your API at spring_api:8085/hello; the name is not resolvable from the Docker host itself.
You should never need to look up that IP address, and it often isn't useful.
If you're trying to call the service from outside Docker space, you've done the right thing: use the docker run -p option to publish its port to the host, and use the name of the host to access it. If you're trying to call it from another container, create a network and make sure to run both containers with a --net option pointing at that network; then they can reach each other using the other's --name as a hostname, plus the container-internal port the other service is listening on (-p options have no effect and aren't required).
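A minimal sketch of that container-to-container path (the network name apinet and the curlimages/curl test image are my own choices for illustration):

docker network create apinet
docker run -d --net apinet --name spring_api phillalexakis/myfirstapi:01
# resolve the peer by its --name and use the container-internal port
docker run --rm --net apinet curlimages/curl http://spring_api:8085/hello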
The Docker-internal IP address just doesn't work in a variety of common situations. If you're on a different host, it will be unreachable. If your local Docker setup uses a virtual machine (Docker Machine, Docker for Mac, minikube, ...) you can't reach the IP address directly from the host. Even if it does work, when you delete and recreate the container, it's likely to change. Looking it up as you note also requires an additional (privileged) operation, which the docker run -p path avoids.
The invocation you have matches the docker inspect documentation (as #DanielHilgarth notes, make sure to run it on the container and not the image). In the specific situation where it will work (you are on the same native-Linux host as the container) you will need to use the unmapped port, e.g. http://172.17.0.2:8085/hello.

Docker on Windows 10 "driver failed programming external connectivity on endpoint"

I am trying to use $ docker-compose up -d for a project and am getting this error message:
ERROR: for api Cannot start service api: driver failed programming external connectivity on endpoint dataexploration_api_1 (8781c95937a0a4b0b8da233376f71d2fc135f46aad011401c019eb3d14a0b117): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:9000:tcp:172.19.0.2:80: input/output error
Encountered errors while bringing up the project.
I am wondering if it is maybe the port? I had been trying port 8080 previously. The project was originally set up on a Mac and I have cloned the repository from GitHub.
I got the same error message on Windows 10 Pro / Docker v17.06.1-ce-win24 / Docker-Compose v1.14.0, using Windows PowerShell x86 in admin mode.
The solution was to simply restart Docker.
If it happens once, restarting Docker will do the trick. In my case, it was happening every time I restarted my computer.
In that case, disable Fast Startup, or you will probably have to restart Docker every time your computer starts.
Simply restarting Docker didn't fix the problem for me on Windows 10.
In my case, I resolved the problem with the exact steps below:
1) Close "Docker Desktop"
2) Run the commands below:
net stop com.docker.service
net start com.docker.service
3) Launch "Docker Desktop" again
Hope this will help someone else.
I got that error too. The main reason it happens is that Docker is already running a similar container. To resolve the problem (and avoid restarting Docker):
docker container ls
You will get something similar to:
CONTAINER ID   IMAGE           COMMAND           CREATED
1fa4ab2cf395   friendlyhello   "python app.py"   28 seconds ago
This is the list of running containers; take the CONTAINER ID (copy it with Ctrl+C).
Now you have to end that process (so another image can run). Run this command:
docker container stop <CONTAINER_ID>
And that's all! Now you can create the container.
For more information, visit https://docs.docker.com/get-started/part2/
Normally this error happens when you try to start a container but the ports the container needs are already occupied, usually by Docker itself as the result of a previous container that failed to stop cleanly.
For me the solution is:
Open a Windows CMD as administrator and type netstat -oan to find the process (the Docker process) that is occupying your port.
In my case my Docker ports are 3306, 6001, 8000 and 9001.
Now we need to free those ports, so we kill that process by its PID (the PID column). Type:
TASKKILL /PID 9816 /F
Restart Docker.
Be happy.
Regards.
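As a refinement of the first step above, you can filter the netstat output down to a single port (9000 here is illustrative):

netstat -oan | findstr :9000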
I am aware there are already a lot of answers, but none of them solved the problem for me. Instead, I got rid of this error message by resetting Docker to factory defaults.
In my case, the problem was that the Docker container (nginx) used port 80, and IIS used the same one. Setting up another port in IIS solved the problem.
In most cases, the first thing you should think about is that an old service is still running and using that port.
In my case, since I changed the image name, docker-compose did not stop the old container (service) when stopping and bringing the stack back up, so the new container could not be started.
A bit of a late answer, but I will leave it here; it might help someone else.
On a Mac (Mojave), after a lot of restarts of both the Mac and Docker, I had to sudo apachectl stop.
The simplest way to solve this is restarting Docker. But in some cases that might not work, even though you don't have any containers running on the port. You can check the running containers using the docker ps command, and see all the containers that exited but were never cleaned up using docker ps -a.
Imagine there is a container with the id 8e35276e845e. You can use the command docker rm 8e35276e845e, or the short form docker rm 8e3, to remove it; the first three characters are a unique prefix of that container's id.
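If there are many exited containers to clean up, Docker's built-in prune command removes all stopped containers at once (it asks for confirmation):

docker container prune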
If restarting doesn't work, you can try changing the ports of the services in the docker-compose.yml file; to match the new Apache port you also have to change the port in the v-host (if there is one). This will resolve your problem.
Ex:
apache_server:
  build:
    context: ./apache
    dockerfile: Dockerfile
  working_dir: /var/www/
  volumes:
    - .:/var/www
  networks:
    - app-network
  ports:
    - 8082:80
  depends_on:
    - mysql_db
should be changed into
apache_server:
  build:
    context: ./apache
    dockerfile: Dockerfile
  working_dir: /var/www/
  volumes:
    - .:/var/www
  networks:
    - app-network
  ports:
    - 8083:80
  depends_on:
    - mysql_db
and the particular v-host also has to be changed,
Ex (according to the above scenario):
<VirtualHost *:80>
    ProxyPreserveHost On
    ServerAlias phpadocker.lk
    ProxyPass / http://localhost:8083/
    ProxyPassReverse / http://localhost:8083/
</VirtualHost>
This will help you to solve the above problem.
For the many Windows users out there that have the same issue, I would suggest restarting the computer as well, because most of the time (for me at least) restarting just Docker doesn't work. So, I would suggest you follow these steps:
Restart your PC.
Then start up PowerShell as admin and run this:
Set-NetConnectionProfile -interfacealias "vEthernet (DockerNAT)" -NetworkCategory Private
After that, restart Docker.
After completing these steps you will be able to run your containers without problems.
I hope that helps.

Creating docker containers on Windows

Getting boot2docker up and running and pulling containers from Docker Hub are non-issues in a Windows environment. But if I wish to create a container and run it, how do I go about doing this? I've read about using fig, but is fig installed on Windows or inside the container? I've attempted to do it from the container, but it often results in a permissions error, and even chowning the folder doesn't solve the issue of not being able to call fig in the container.
Is it even possible to run Docker via boot2docker on Windows as a development environment? Or should I just use Vagrant as the host VM and play with a bunch of Docker containers in it?
Some clarification and direction would be appreciated.
Fig is a tool for working with Docker. It runs on the host (which could mean your Windows host communicating with Docker via the TCP socket, or your boot2docker VM, which is a guest of your Windows machine and a host of your Docker containers).
All Fig does is streamline the process of pulling, building and starting Docker images. For example, this fig.yml
db:
  image: postgres
app:
  build: .
  links:
    - "db:db"
  environment:
    - FOO=bar
is (roughly) the same as this series of Docker commands in Bash:
docker run -d --name db postgres
docker build -t app .
docker run -d --name app --link=db:db --env=FOO=bar app
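For what it's worth, Fig was later folded into Docker Compose; if I remember correctly, essentially the same YAML (renamed to docker-compose.yml) can be brought up with:

docker-compose up -d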
