Docker on Windows 10 "driver failed programming external connectivity on endpoint" - windows

I am trying to use $ docker-compose up -d for a project and am getting this error message:
ERROR: for api Cannot start service api: driver failed programming external connectivity on endpoint dataexploration_api_1 (8781c95937a0a4b0b8da233376f71d2fc135f46aad011401c019eb3d14a0b117): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:9000:tcp:172.19.0.2:80: input/output error
Encountered errors while bringing up the project.
I am wondering if it is maybe the port? I had been trying port 8080 previously. The project was originally set up on a Mac, and I have cloned the repository from GitHub.

I got the same error message on my Windows 10 Pro / Docker v17.06.1-ce-win24 / Docker-Compose v1.14.0 using Windows PowerShell x86 in admin mode.
The solution was to simply restart Docker.

If it happens once, restarting Docker will do the trick. In my case, it was happening every time I restarted my computer.
In that case, disable Fast Startup, or you will probably have to restart Docker every time your computer starts. This solution was obtained from here.
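For reference, Fast Startup depends on hibernation, so disabling hibernation from an elevated prompt also disables Fast Startup (note this is a side effect of powercfg, not a Docker setting; skip it if you rely on hibernation):
powercfg /h off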

Simply restarting Docker didn't fix the problem for me on Windows 10.
In my case, I resolved the problem with the exact steps below:
1) Close "Docker Desktop"
2) Run the commands below:
net stop com.docker.service
net start com.docker.service
3) Launch "Docker Desktop" again
Hope this will help someone else.

I got that error too. If you want to know the main reason why the error happens: it's because Docker is already running a similar container. To resolve the problem (and avoid restarting Docker), run:
docker container ls
You will get something similar to:
CONTAINER ID   IMAGE           COMMAND           CREATED
1fa4ab2cf395   friendlyhello   "python app.py"   28 seconds ago
This is a list of the running containers. Take the CONTAINER ID (copy it with Ctrl+C).
Now you have to end that process (so another image can run) with this command:
docker container stop <CONTAINER_ID>
And that's all! Now you can create the container.
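If several leftover containers are holding ports, a sketch of stopping them all in one go (careful: this stops every running container, not just the offending one; the $() subexpression works in both bash and PowerShell):
docker container stop $(docker container ls -q)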
For more information, visit https://docs.docker.com/get-started/part2/

Normally this error happens when you are trying to start a container but the ports that the container needs are occupied, usually by Docker itself as the result of a previous stop that went wrong.
For me the solution is:
Open a Windows CMD prompt as administrator and type netstat -oan to find the process (Docker is that process) that is occupying your port.
In my case my Docker ports are 3306, 6001, 8000 and 9001.
Now we need to free those ports, so we kill the process by its PID (PID column); type:
TASKKILL /PID 9816 /F
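If the netstat listing is long, you can narrow it down to the one port you care about before killing anything (shown here for port 9000 from the original error; substitute your own):
netstat -aon | findstr :9000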
Restart Docker.
Be happy.
Regards.

I am aware there are already a lot of answers, but none of them solved the problem for me. Instead, I got rid of this error message by resetting Docker to factory defaults.

In my case, the problem was that the Docker container (nginx) uses port 80, and IIS uses the same. Setting up another port in IIS solved the problem.
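A sketch of moving the IIS default site off port 80 from an elevated prompt; the site name "Default Web Site" and port 8081 are assumptions to adjust to your setup:
%windir%\system32\inetsrv\appcmd set site /site.name:"Default Web Site" /bindings:http/*:8081: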

In most cases, the first thing you should think about is that there is an old service running and using that port.
In my case, since I had changed the image name, using docker-compose to stop (then up) wouldn't stop the old container (service), which led to the new container not being able to start.
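A sketch of cleaning that situation up: docker-compose can remove containers for services no longer in the compose file via the --remove-orphans flag (run from the project directory):
docker-compose down --remove-orphans
docker-compose up -d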

A bit of a late answer, but I will leave it here; it might help someone else.
On a Mac running Mojave, after a lot of restarts of both the Mac and Docker,
I had to run sudo apachectl stop.

The simplest way to solve this is restarting Docker. But in some cases it might not work, even though you don't have any containers running on the port. You can check the running containers using the docker ps command, and see all the containers that were not cleared but exited before using the docker ps -a command.
Imagine there is a container which has the container ID 8e35276e845e. You can use the command docker rm 8e35276e845e or docker rm 8e3 to remove the container. Note that the first 3 characters are enough to identify a particular container; thus, in the above scenario, 8e3 identifies 8e35276e845e.
If restarting doesn't work, you can try changing the ports of the services in the docker-compose.yml file, and according to the Apache port you have to change the port of the v-host (if there is any). This will resolve your problem.
Ex:
apache_server:
  build:
    context: ./apache
    dockerfile: Dockerfile
  working_dir: /var/www/
  volumes:
    - .:/var/www
  networks:
    - app-network
  ports:
    - 8082:80
  depends_on:
    - mysql_db
should be changed into
apache_server:
  build:
    context: ./apache
    dockerfile: Dockerfile
  working_dir: /var/www/
  volumes:
    - .:/var/www
  networks:
    - app-network
  ports:
    - 8083:80
  depends_on:
    - mysql_db
and the particular v-host also has to be changed,
Ex (according to the above scenario):
<VirtualHost *:80>
  ProxyPreserveHost On
  ServerAlias phpadocker.lk
  ProxyPass / http://localhost:8083/
  ProxyPassReverse / http://localhost:8083/
</VirtualHost>
This will help you to solve the above problem.

For the many Windows users out there that have the same issue, I would suggest restarting the computer as well, because most of the time (for me at least) restarting just Docker doesn't work. So I would suggest you follow these steps:
Restart your PC.
Then start up your PowerShell as admin and run this:
Set-NetConnectionProfile -interfacealias "vEthernet (DockerNAT)" -NetworkCategory Private
After that, restart your Docker.
After completing these steps you will be able to run without problems.
I hope that helps.

Related

Why is curl shutting down the docker container?

Good day!
I have a microservice that runs in a container and a registry that stores the addresses of the microservices.
I also have a script that runs when the container starts. The script gets its local IP and sends it to another server using curl. After the script executes, code 0 is returned and the container exits. How can I fix this problem?
#docker-compose realtime logs
nginx_1 | "code":"SUCCESSFUL_REQUEST"
nginx_1 exited with code 0
My bash script
#!/bin/bash
address=$(hostname -i)
curl -X POST http://registry/service/register -H 'Content-Type: application/json' -d '{"name":"'"$MICROSERVICE_NAME"'","address":"'"$address"'"}'
The script runs fine with no problems, but unfortunately it ends the container process. Is it possible to somehow intercept this exit code so that it does not shut down the container?
I would be grateful for any help or comment!🙏
EDIT:
Dockerfile (the script is called after the container starts):
FROM nginx:1.21.1-alpine
WORKDIR /var/www/
COPY ./script.sh /var/www/script.sh
RUN apk add --no-cache --upgrade bash && \
    apk add nano
#launch script
CMD /var/www/script.sh
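A side note on why the container stops: this CMD replaces the nginx image's default command, so the script becomes the container's main process, and the container exits when the script finishes. A minimal sketch of one common fix (my own illustration, not from the original post) is to chain the image's normal foreground command after the script:
CMD /var/www/script.sh && nginx -g 'daemon off;'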
EDIT 2:
my docker-compose.yml
version: "3.9"
services:
  #database
  pgsql:
    hostname: pgsql
    build: ./pgsql
    ports:
      - 5432:5432/tcp
    volumes:
      - ./pgsql/data:/var/lib/postgresql/data
  #registry
  registry_fpm:
    build: ./fpm/registry
    depends_on:
      - pgsql
    volumes:
      - ./microservices/registry:/var/www/registry
  registry_nginx:
    hostname: registry
    build: ./nginx/registry
    depends_on:
      - registry_fpm
    volumes:
      - ./microservices/registry:/var/www/registry
      - ./nginx/registry/nginx.conf:/etc/nginx/nginx.conf
  #server
  nginx:
    build: ./nginx
    environment:
      MICROSERVICE_NAME: Microservice_1
    depends_on:
      - registry_nginx
    ports:
      - 80:80/tcp
The purpose of the registry is to store only the IPs of all microservices. If you are familiar with microservices, then you probably know that the registry is like the custodian of all the microservices' addresses. The registry is used by other microservices to obtain microservice addresses so that microservices can communicate over HTTP.
There is no need for these addresses as far as I can tell; the microservices can easily use each other's hostnames.
You already do this with your curl: the POST request goes to the server registry, and so on.
Docker Compose may just be all the orchestration you require for your microservices.
Regarding IPs and networking:
If you prefer, for more isolation and consistency, you can configure in your compose.yaml:
- custom networks: virtualised network adapters; think of them as vLANs where the nodes are selected containers only (refer to the Docker networking docs for additional info)
- custom IP addresses for each container
- hostnames for each container
- links: deprecated; do not use; information only
Regarding heartbeat:
Keeping track of a heartbeat shouldn't be necessary.
But if you really need one, doing it from within the container is a no-no: a container should be only one running process, and creating a new record is redundant, as the Docker daemon is already keeping track of all IPs and state (and loads of other things).
The function of the registry (keeping track of the lifecycle) is instead played by the Docker daemon; try docker compose ps.
However, you can configure the container to restart automatically when it fails using the restart tag, as sketched below.
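A minimal sketch of that restart policy, applied to the nginx service from the question's compose file:
  nginx:
    build: ./nginx
    restart: on-failure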
If you need a way to monitor these without the CLI, listening on the Docker socket is the way to go.
You could make your own dashboard that taps into the Docker API, whose endpoints are listed here. NB: the socket might need to be protected and, if possible, ought to be mounted as read-only.
But a better solution would be using an image that already does this. I cannot give you recommendations, unfortunately; I have not used any.

Docker containers only stay up when accessing the host with SSH

I have two containers that were built with the command docker-compose up --build -d.
All containers build normally and stay up, but when I leave the machine the containers stay up for at most two hours and then drop again.
These containers run an API in the PHP Laravel framework and an nginx reverse proxy.
docker ps shows the image started 46 hours ago but has been up for only 2 seconds.
When I start the application and leave the machine where Docker is installed, it runs for at most two hours. If I access the machine via SSH and then access the application, it is running without the need to do a docker-compose up.
What do I have to do to make these containers stay up as a production environment?
There is a command that can help you when it goes down or stops:
sudo docker run --restart unless-stopped --name <name you want to use> <name of your image>
(don't use the <> in your command)
After doing this, anytime that container is down it will be restarted for you automatically.
I think this trick is really useful when you have multiple containers running, and helpful when you want to update the server packages too.
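Since the question uses docker-compose, a sketch of applying the same policy to containers that already exist, without recreating them (substitute your own container names):
docker update --restart unless-stopped <container_name>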

How to access a website running on docker after closing the debug on Visual Studio

I built a very simple web app and web API on .NET Core and configured the docker-compose to get them to communicate over the same network correctly.
In Visual Studio, when I hit play on the Docker Compose project, it runs fine; both the web app and the web API work and communicate correctly.
In the Docker Desktop app I see them running (green).
But when I close/stop the debugger on VS I can't access the websites anymore, even though the containers are still running. I thought Docker worked as a sort of IIS.
Am I misunderstanding Docker's capabilities, or do I need to run them again from a CLI, or publish them somewhere, or what?
I thought the fact that the containers are up and running should mean they're live for me to navigate to.
Help me out over here please.
You are correct: unless there is some special routing happening, the fact that the containers are running means your services are available.
You can see the ports being exposed from the docker ps -a command:
CONTAINER_ID: 560f78689902
IMAGE: moviedecisionweb:dev
COMMAND: "C:\\remote_debugger\\…"
CREATED: About a minute ago
STATUS: Up About a minute
PORTS: 0.0.0.0:52002->80/tcp, 0.0.0.0:52001->443/tcp
NAMES: mdweb
CONTAINER_ID: 1cd7f72426fe
IMAGE: moviedecisionapi:dev
COMMAND: "C:\\remote_debugger\\…"
CREATED: About a minute ago
STATUS: Up About a minute
PORTS: 0.0.0.0:52005->80/tcp, 0.0.0.0:52004->443/tcp
NAMES: mdapi
Based on the provided output, you have two docker containers running.
I'm assuming the ports 80 & 443 are serving the HTTP & HTTPS services (respectively) from your app/s.
Based on this...
For container "mdweb", you should be able to access the docker services from your docker host machine (PC) via:
http://0.0.0.0:52002
https://0.0.0.0:52001
For container "mdapi", you should be able to access the docker services from your docker host machine (PC) via:
http://0.0.0.0:52005
https://0.0.0.0:52004
I believe you can use localhost, 127.0.0.1 & 0.0.0.0 interchangeably in the above.
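For a quick sanity check from the host, something like this should answer if the services are listening (ports taken from the docker ps output above; -k skips certificate validation on the dev HTTPS endpoint):
curl http://localhost:52005
curl -k https://localhost:52004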
You cannot use the hostnames "mdweb" or "mdapi" from your docker HOST machine - unless you have explicitly setup your DNS to handle these names. However you can use these hostnames if you are inside a docker container on the same docker network.
If you provide more information (e.g. your docker-compose.yml), we could help you further...

Setting redis configuration with docker in windows

I want to set up redis configuration in docker.
I have my own redis.conf under D:/redis/redis.conf and have configured it to have bind 127.0.0.1 and have uncommented requirepass foobared
Then used this command to load this configuration in docker:
docker run --volume D:/redis/redis.conf:/usr/local/etc/redis/redis.conf --name myredis redis redis-server /usr/local/etc/redis/redis.conf
Next,
I have a docker-compose.yml in my application, in a Maven project under src/resources.
I have the following in my docker-compose.yml:
redis:
  image: redis
  ports:
    - "6379:6379"
And I execute the command:
docker-compose up
The server runs, but when I check with the command:
docker ps -a
it shows that the redis image runs at 0.0.0.0:6379.
I want it to run at 127.0.0.1.
How do I get that?
Isn't my configuration file loading, or is it wrong? Or are my commands wrong?
Any suggestions are of great help.
PS: I am using Windows.
Thanks
Try to execute:
docker inspect <container_id>
and use the "NetworkSettings" -> "Gateway" value (usually 172.17.0.1) instead of 127.0.0.1.
You can't use 127.0.0.1, as your Redis was run in an isolated environment.
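A sketch of pulling that gateway value directly with a Go template, assuming the container was named myredis as in the question:
docker inspect -f "{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}" myredis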
Or you can link your containers.
So first of all, you should not be worried about Redis saying it is listening on 0.0.0.0:6379, because Redis is running inside the container, and if it doesn't listen on 0.0.0.0 then you won't be able to make any connections.
Next, if you want the published port to listen only on the host's localhost, then you need to use the below:
redis:
  image: redis
  ports:
    - "127.0.0.1:6379:6379"
PS: I have not run a container on Docker for Windows with a 127.0.0.1 port mapping, so you will have to see if it works, because host networking on Windows, Mac and Linux is different and may not work this way.
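A quick check of the binding from the host side (assumes redis-cli is installed on the host; it should answer PONG):
redis-cli -h 127.0.0.1 -p 6379 ping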

How do I debug a network -- probably Hyperkit caused issue -- of a Docker setup on a Mac?

Problem: Network is not routed to the host machine.
e.g.:
docker run -tip 80:8080 httpd
does NOT result in Apache responding on localhost:8080 on the host machine, or on docker.local:8080, or anything like that. If I try to connect from inside, the container works fine:
docker run -ti debian
curl 172.17.0.2
<html><body><h1>It works!</h1></body></html>
It seems that on the Docker side itself, everything is just fine.
On docker ps you get: ... 80/tcp, 0.0.0.0:80->8080/tcp ...
Environment: New, clean OS installation - OSX Sierra 10.12.2, Docker.app Version 1.13.0 stable (plus 1.13.0. beta and 1.12.0 beta tried as well with same results).
Assumption: There is something broken in between Docker and the OS. I guess that this 'something' is Hyperkit (which is like a black box for me). There might be some settings broken by the build script from here: http://bigchaindb-examples.readthedocs.io/en/latest/install.html#the-docker-way which is docker-machine centric, a fact I've probably underestimated. A funny fact is also that this was a new install: this build script was the first thing I ran on it -- I don't know if the networking actually worked before.
Question: How do I diagnose this stuff? I would like to be able to trace where exactly the traffic gets lost and fix it accordingly.
Your command line has the ports reversed:
docker run -tip 8080:80 httpd
That's the host port first, with an optional interface to bind, followed by the container port. You can also see that in the docker ps output where port 80 on the host is mapped to port 8080 inside the container.
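Spelled out with the optional interface included (a sketch; -d just runs it detached, and any free host port works on the left):
docker run -d -p 127.0.0.1:8080:80 httpd
curl http://127.0.0.1:8080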
The other problem some have is the service inside the container needs to listen on all container interfaces (0.0.0.0), not the localhost interface of the container, otherwise the proxy can't forward traffic to it. However, the default settings from official images won't have this issue and your curl command shows that doesn't apply to you.
