How to use local proxy settings in docker-compose

I am setting up a new server for our Redmine installation, since the old installation was done by hand, which makes it difficult to update everything properly. I decided to go with a Docker image but am having trouble starting the Docker container due to an error message. The host is running behind a proxy server, which I think is causing this problem, as everything else such as wget, curl, etc. is working fine.
Error message:
Pulling redmine (redmine:)...
ERROR: Get https://registry-1.docker.io/v2/: dial tcp 34.206.236.31:443: connect: connection refused
I searched on Google about using Docker/Docker-Compose with a proxy server in the background and found a few websites where people had the same issue but none of these really helped me with my problem.
I checked with the Docker documentation and found a guide but this does not seem to work for me: https://docs.docker.com/network/proxy/
I also found an answered question here on Stack Overflow: Using proxy on docker-compose in server, which might be the solution I am after, but I am unsure where exactly I have to put it. I guess the person means the docker-compose.yml file, but I could be wrong.
This is what my docker-compose.yml looks like:
version: '3.1'
services:
  redmine:
    image: redmine
    restart: always
    ports:
      - 80:3000
    environment:
      REDMINE_DB_MYSQL: db
      REDMINE_DB_PASSWORD: SECRET_PASSWORD
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: SECRET_PASSWORD
      MYSQL_DATABASE: redmine
I expect to run the following command without the above error message
docker-compose -f docker-compose.yml up -d

I did a bit more research and seem to have used better keywords, because I found my solution now. I want to share the solution here in case someone else ever needs it.
Create a folder for configuring the Docker service through systemd:
mkdir /etc/systemd/system/docker.service.d
Create a service configuration file at /etc/systemd/system/docker.service.d/http-proxy.conf and put the following into the newly created file:
[Service]
# NO_PROXY is optional and can be removed if not needed
# Change proxy_url to your proxy IP or FQDN and proxy_port to your proxy port
# For proxy servers that require username and password authentication, add the username and password to the URL (see the example below)
# Example without authentication
Environment="HTTP_PROXY=http://proxy_url:proxy_port" "NO_PROXY=localhost,127.0.0.0/8"
# Example with authentication
Environment="HTTP_PROXY=http://username:password@proxy_url:proxy_port" "NO_PROXY=localhost,127.0.0.0/8"
# Example for SOCKS5
Environment="HTTP_PROXY=socks5://proxy_url:proxy_port" "NO_PROXY=localhost,127.0.0.0/8"
Reload systemd so that the new settings are read:
sudo systemctl daemon-reload
Verify that the Docker service's Environment is properly set:
sudo systemctl show docker --property Environment
Restart the Docker service so that it uses the updated Environment settings:
sudo systemctl restart docker
Now you can execute the docker-compose command on your machine without getting any connection refused error messages.
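If you want to double-check from the Docker side as well, the daemon reports the proxy values it actually picked up (assuming a reasonably recent Docker version that prints these fields in its info output):
docker info | grep -i proxy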

For a proxy server that requires username and password authentication: apart from adding the credentials in /etc/systemd/system/docker.service.d/http-proxy.conf, as suggested in this answer, I also had to add the same to the Dockerfile. The following is a snippet from the Dockerfile.
FROM ubuntu:16.04
ENV http_proxy http://username:password@proxy_url:proxy_port
ENV https_proxy http://username:password@proxy_url:proxy_port
RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install -y \
       build-essential \
       bla bla bla ...
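As an alternative to baking the credentials into the image with ENV (where they remain visible in docker history), the same values can be passed only for the duration of the build via Docker's predefined proxy build arguments. A sketch using the same placeholder values as above:
docker build \
    --build-arg http_proxy=http://username:password@proxy_url:proxy_port \
    --build-arg https_proxy=http://username:password@proxy_url:proxy_port .
In a compose file, the same values would go under build: / args: for the service.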

Related

Securing a localhost port for a Flask/Celery app running locally on 0.0.0.0 in Docker on MacOS

My app has a Flask backend and an Angular/Electron frontend. The app runs locally on Mac Catalina. Flask, Celery and Redis are in separate Docker containers, while the frontend is outside Docker. The Flask container is listening on 0.0.0.0:5078. I've set the CORS policy to allow messages from "127.0.0.1:4200" only, sent by the frontend. There's no need for internet connections. The backend containers will be launched by the frontend by emulating a terminal command. I'll install the app remotely on non-technical users' Catalina MacBooks.
Question: According to Docker might be exposing ports to the world, Beware of exposing ports in Docker and Docker not blocked by macOS firewall, this use of 0.0.0.0:5078 is a security threat. How can I resolve this threat, e.g. by blocking any external connections to this port?
Here's some Python 3.8 code:
# imports (abridged): from flask_cors import CORS; from waitress import serve
# blueprint and flask_app are defined elsewhere in the app
cors = CORS(blueprint, resources={r"/*": {"origins": ["http://127.0.0.1:4200"]}})
if __name__ == "__main__":
    serve(flask_app, host="0.0.0.0", port=5078, threads=8)
Here's the Dockerfile:
FROM python:3.8.3-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
ENV BUILD_DEPS="build-essential" \
    APP_DEPS="curl libpq-dev"
RUN apt-get update \
    && apt-get install -y ${BUILD_DEPS} ${APP_DEPS} --no-install-recommends \
    && pip install --default-timeout=10000 -r requirements.txt
ARG FLASK_ENV="development"
ENV FLASK_ENV="${FLASK_ENV}" \
    FLASK_APP="back5x.api.app" \
    PYTHONUNBUFFERED="true" \
    FLASK_DEBUG=1
COPY . .
RUN ["chmod", "+x", "/app/docker-entrypoint.sh"]
ENTRYPOINT ["/app/docker-entrypoint.sh"]
EXPOSE 5078
CMD ["python", "main.py"]
and the docker-compose:
version: "3.8"
services:
redis:
# ...
web:
build:
context: "."
args:
- "FLASK_ENV=development"
depends_on:
- "redis"
- "worker"
env_file:
- ".env"
environment:
FLASK_DEBUG: 1
FLASK_APP: back5x.api.app.py
healthcheck:
test: "${DOCKER_HEALTHCHECK_TEST:-curl localhost:5078/healthy}"
...
ports:
- "5078:5078"
restart: "unless-stopped"
volumes:
- ".:/app"
worker: #celery worker
...
volumes:
redis: {}
Tried:
The Docker-based solutions I've found use Linux iptables, eg Disallow egress from Docker containers on Docker for Mac and the above references. So I've added these to the Dockerfile:
RUN apt-get install -y iptables --no-install-recommends #after pip install
RUN iptables -N DOCKER-USER #after COPY . .
RUN iptables -I FORWARD -j DOCKER-USER
RUN iptables -A DOCKER-USER -j RETURN
RUN iptables -I DOCKER-USER -i eth0 ! -s 0.0.0.0 -j DROP
Without the middle three lines, I got an error that DOCKER-USER couldn't be found; with them, that I must run as root. I've tried privileged mode and cap_add, but as I'm new to Docker, I haven't got this to work.
I've also looked into defining a rule in Mac's PF firewall to block external connections to the port in question. However, this is not ideal for people who'll be using my app. A similar situation is with installing the paid-for "Little Snitch" app.
Before going this route, could there be a code or Docker-based solution? (Or perhaps there is an appropriate command for launching the backend?)
A working solution is based on David Maze's and Matt's comments and on this question. These are the steps:
Open Docker for Mac, then Preferences, then Docker Engine. Add "ip": "127.0.0.1" to the JSON config file.
In the docker-compose file, set ports to "127.0.0.1:5078:5078" for the web service.
Leave the Dockerfile and the Python code as before: i.e., the Flask host is still 0.0.0.0.
As I checked, messages from Electron's localhost 4201 went through. Also, running netstat -anvp tcp | awk 'NR<3 || /LISTEN/' shows that the insecure port 0.0.0.0.5078 is no longer exposed to the outside (the two changes are sketched below).
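For reference, the two configuration changes from the steps above look roughly like this (a sketch; everything else in the daemon JSON and in the compose service stays as it was):
In Docker Desktop's Docker Engine settings (daemon.json), add:
"ip": "127.0.0.1"
In docker-compose.yml, for the web service:
ports:
  - "127.0.0.1:5078:5078"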

GitLab CI gives curl: (7) Failed to connect to localhost port 8090: Connection refused

The issue is that I get the curl: (7) Failed to connect to localhost port 8090: Connection refused error in GitLab CI, but this does not happen on my laptop, where I get the source HTML of the webpage. The .gitlab-ci.yml below is a simple reproduction of the issue. I have spent numerous hours trying to figure this out, and I'm sure someone else has too.
Aside: This isn't a similar question - since they don't offer a solution.
GitLab Repo: https://gitlab.com/mudassir-ahmed/wordpress-testing-with-gitlab-ci/tree/another-approach but the only file it contains is the .gitlab-ci.yml shown below...
image: docker:stable
variables:
  # When using dind service we need to instruct docker, to talk with the
  # daemon started inside of the service. The daemon is available with
  # a network connection instead of the default /var/run/docker.sock socket.
  #
  # The 'docker' hostname is the alias of the service container as described at
  # https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#accessing-the-services
  #
  # Note that if you're using the Kubernetes executor, the variable should be set to
  # tcp://localhost:2375/ because of how the Kubernetes executor connects services
  # to the job container
  # DOCKER_HOST: tcp://localhost:2375/
  #
  # For non-Kubernetes executors, we use tcp://docker:2375/
  DOCKER_HOST: tcp://docker:2375/
  # When using dind, it's wise to use the overlayfs driver for
  # improved performance.
  DOCKER_DRIVER: overlay2
services:
  - docker:dind
before_script:
  - docker info
build:
  stage: build
  script:
    - apk update
    - apk add curl
    #- hostname -i
    - docker container ls
    - docker run -d -p 8090:80 --name nginx-server kitematic/hello-world-nginx
    - curl localhost:8090 # This works on my laptop but not on a GitLab runner.
Referring to the answer from here: gitlab-ci.yml & docker-in-docker (dind) & curl returns connection refused on shared runner
There are two ways to fix this:
Option 1: Replace localhost in curl localhost:8090 with docker like this curl docker:8090
Option 2:
services:
  - name: docker:dind
    alias: localhost
docker run -d -p 8090:80 --name nginx-server kitematic/hello-world-nginx
curl localhost:8090 # This works on my laptop but not on a GitLab runner.
Assuming that is your code, I think you should add some wait/timeout between docker run and curl.
I had similar issues some time ago: after starting a Docker container on the GitLab runner machine, I wasn't able to access my URL either. Adding a check that waits for the container to be running, for about one minute, resolved my problem.
The check itself is docker inspect -f '{{.State.Running}}' <containerName>, but to use it you need to add a small additional script (see the sketch below).

Docker on Mac is running but refusing to expose port

Mac here, running Docker Community Edition Version 17.12.0-ce-mac49 (21995).
I have Dockerized a web app with a Dockerfile like so:
FROM openjdk:8
RUN mkdir /opt/myapp
ADD build/libs/myapp.jar /opt/myapp
ADD application.yml /opt/myapp
ADD logback.groovy /opt/myapp
WORKDIR /opt/myapp
EXPOSE 9200
ENTRYPOINT ["java", "-Dspring.config=.", "-jar", "myapp.jar"]
I then build that image like so:
docker build -t myapp .
I then run a container of that image like so:
docker run -it -p 9200:9200 --net="host" --env-file ~/myapp-local.env --name myapp myapp
In the console I see the app start up without any errors, and all seems to be well. Even my metrics publishes (which publish heartbeat and other health metrics every 20 seconds) are printing to the console as I would expect them to. Everything seems to be fine.
Except when I go to run a curl against my app from another terminal/session:
curl -i -H "Content-Type: application/json" -X POST -d '{"username":"heyitsme","password":"12345"}' http://localhost:9200/v1/auth/signIn
curl: (7) Failed to connect to localhost port 9200: Connection refused
Now, if this were a situation where the /v1/auth/signIn path wasn't valid, or if there was something wrong with my request entity/payload, the server would pick up on it and send an error (I assure you; as I can confirm this exact same curl works when I run the server outside of Docker as just a standalone service).
So this is definitely a situation where the curl command can't connect to localhost:9200. Again, when I run my app outside of Docker, that same curl command works perfectly, so I know my app is trying to standup on port 9200.
Any ideas as to what could be going wrong here, or how I could begin troubleshooting?
The way you run your container has 2 conflicting parts:
-p 9200:9200 says: "publish (bind) port 9200 of the container to port 9200 of the host"
--net="host" says: "use the host's networking stack"
According to Docker for Mac - Networking docs / Known limitations, use cases, and workarounds, you should only publish a port:
I want to connect to a container from the Mac
Port forwarding works for localhost; --publish, -p, or -P all work. Ports exposed from Linux are forwarded to the Mac.
Our current recommendation is to publish a port, or to connect from another container. This is what you need to do even on Linux if the container is on an overlay network, not a bridge network, as these are not routed.
The command to run the nginx webserver shown in Getting Started is an example of this.
$ docker run -d -p 80:80 --name webserver nginx
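Applied to the command from the question, that means dropping --net="host" and keeping only the published port; a sketch based on the original command:
docker run -it -p 9200:9200 --env-file ~/myapp-local.env --name myapp myapp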
Check that your app binds to 0.0.0.0:9200 and not to localhost:9200 or something similar.
The problem seems to be the network mode in which you are running the container.
Quick test: log in to your container and run the curl command there; hopefully it works. That would isolate the problem to the request not being forwarded from the host to the container.
Try running your container on the default bridge network and test.
Refer to this blog for details on the network modes in docker
TL;DR: you will need to add an iptables entry to allow the traffic to enter your container.

Not able to access Kibana running in a Docker container on port 5601

I have built a docker image with the following Docker file.
# gunicorn-flask
FROM devdb/kibana
MAINTAINER John Doe <user.name@gmail.com>
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y python python-pip python-virtualenv gunicorn
# Setup flask application
RUN mkdir -p /deploy/app
COPY gunicorn_config.py /deploy/gunicorn_config.py
COPY app /deploy/app
RUN pip install -r /deploy/app/requirements.txt
WORKDIR /deploy/app
EXPOSE 5000 5601 9200
# Start gunicorn
CMD ["/usr/bin/gunicorn", "--config", "/deploy/gunicorn_config.py", "listener:app"]
I am running the container from the image created from this Docker file as follows.
sudo docker run -p 5601:5601 -p 9200:9200 -p 5000:5000 -v /home/Workspace/xits/config/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml -v /home/Workspace/xits/config/kibana.yml:/opt/kibana/config/kibana.yml es-kibana-gunicorn:latest
The issue I am facing is that I cannot access Kibana port 5601 on my host machine. My browser page says ERR_CONNECTION_REFUSED
I am able to access port 5000 though.
I can't figure out why this is. Any help would be greatly appreciated.
The parent Dockerfile devdb/kibana is using a script to start kibana and elasticsearch when the docker container is started. See CMD ["/sbin/my_init"] and the script itself.
When you use the CMD instruction in your own Dockerfile, you override the one from the parent Dockerfiles.
Since your CMD only starts gunicorn, elasticsearch and kibana won't ever be started. That's why there is no response on their respective network ports.
The Docker image you inherit from is itself based on phusion/baseimage, which has its own way of making multiple processes run in Docker containers. I recommend you follow the instructions in their README file to learn how to add your gunicorn to the list of services to start. Basically, you would have to define a script named run and add it to your Docker image within the /etc/service/<service name>/ directory.
In your Dockerfile, add:
COPY run /etc/service/gunicorn/
and the run script should be something similar to:
#!/bin/bash
cd /deploy/app
/usr/bin/gunicorn --config /deploy/gunicorn_config.py listener:app
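One detail to keep in mind: the run script has to be executable, otherwise the service supervisor will not start it. For example, on the host before building:
chmod +x run
or, alternatively, a RUN chmod +x /etc/service/gunicorn/run line in the Dockerfile.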

Cannot download Docker images behind a proxy

I installed Docker on my Ubuntu 13.10 (Saucy Salamander) and when I type in my console:
sudo docker pull busybox
I get the following error:
Pulling repository busybox
2014/04/16 09:37:07 Get https://index.docker.io/v1/repositories/busybox/images: dial tcp: lookup index.docker.io on 127.0.1.1:53: no answer from server
Docker version:
$ sudo docker version
Client version: 0.10.0
Client API version: 1.10
Go version (client): go1.2.1
Git commit (client): dc9c28f
Server version: 0.10.0
Server API version: 1.10
Git commit (server): dc9c28f
Go version (server): go1.2.1
Last stable version: 0.10.0
I am behind a proxy server with no authentication, and this is my /etc/apt/apt.conf file:
Acquire::http::proxy "http://192.168.1.1:3128/";
Acquire::https::proxy "https://192.168.1.1:3128/";
Acquire::ftp::proxy "ftp://192.168.1.1:3128/";
Acquire::socks::proxy "socks://192.168.1.1:3128/";
What am I doing wrong?
Here is a link to the official Docker documentation for HTTP proxy configuration:
https://docs.docker.com/config/daemon/systemd/#httphttps-proxy
A quick outline:
First, create a systemd drop-in directory for the Docker service:
mkdir /etc/systemd/system/docker.service.d
Now create a file called /etc/systemd/system/docker.service.d/http-proxy.conf that adds the HTTP_PROXY and HTTPS_PROXY environment variables:
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=http://proxy.example.com:80/"
If you have internal Docker registries that you need to contact without proxying you can specify them via the NO_PROXY environment variable:
Environment="HTTP_PROXY=http://proxy.example.com:80/"
Environment="HTTPS_PROXY=http://proxy.example.com:80/"
Environment="NO_PROXY=localhost,127.0.0.0/8,docker-registry.somecorporation.com"
Flush changes:
$ sudo systemctl daemon-reload
Verify that the configuration has been loaded:
$ sudo systemctl show --property Environment docker
Environment=HTTP_PROXY=http://proxy.example.com:80/
Environment=HTTPS_PROXY=http://proxy.example.com:80/
Restart Docker:
$ sudo systemctl restart docker
Footnote regarding HTTP_PROXY vs. HTTPS_PROXY: for a long time, setting HTTP_PROXY alone has been good enough. But with version 20.10.8, Docker has moved on to Go 1.16, which changes the semantics of this variable:
https://golang.org/doc/go1.16#net/http
For https:// URLs, the proxy is now determined by the HTTPS_PROXY variable, with no fallback on HTTP_PROXY.
Your APT proxy settings are not related to Docker.
Docker uses the HTTP_PROXY environment variable, if present. For example:
sudo HTTP_PROXY=http://192.168.1.1:3128/ docker pull busybox
But instead, I suggest you have a look at your /etc/default/docker configuration file: you should have a line to uncomment (and maybe adjust) to get your proxy settings applied automatically. Then restart the Docker server:
service docker restart
On CentOS the configuration file for Docker is at:
/etc/sysconfig/docker
Adding the below line helped me to get the Docker daemon working behind a proxy server:
HTTP_PROXY="http://<proxy_host>:<proxy_port>"
HTTPS_PROXY="http://<proxy_host>:<proxy_port>"
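The change only takes effect once the daemon is restarted, for example:
sudo systemctl restart docker
(or sudo service docker restart on older init systems).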
If you're using the new Docker for Mac (or Docker for Windows), just right-click the Docker tray icon and select Preferences (Windows: Settings), then go to Advanced, and under Proxies specify your proxy settings there. Click Apply and Restart and wait until Docker restarts.
On Ubuntu you need to set the http_proxy for the Docker daemon, not the client process. This is done in /etc/default/docker (see here).
To extend Arun's answer, for this to work in CentOS 7, I had to remove the "export" commands. So edit
/etc/sysconfig/docker
And add:
HTTP_PROXY="http://<proxy_host>:<proxy_port>"
HTTPS_PROXY="https://<proxy_host>:<proxy_port>"
http_proxy="${HTTP_PROXY}"
https_proxy="${HTTPS_PROXY}"
Then restart Docker:
sudo service docker restart
The source is this blog post.
Why a locally-bound proxy doesn't work
The Problem
If you're running a locally-bound proxy, e.g. listening on 127.0.0.1:8989, it WON'T WORK in Docker for Mac. From the Docker documentation:
I want to connect from a container to a service on the host
The Mac has a changing IP address (or none if you have no network access). Our current recommendation is to attach an unused IP to the lo0 interface on the Mac; for example: sudo ifconfig lo0 alias 10.200.10.1/24, and make sure that your service is listening on this address or 0.0.0.0 (ie not 127.0.0.1). Then containers can connect to this address.
The same applies to the Docker server side. (To understand the server side and client side of Docker, try running docker version.) The server side runs on a virtualization layer which has its own localhost. Therefore, it won't connect to the proxy server on the localhost of the host OS.
The solution
So, if you're using a locally-bound proxy like me, basically you would have to do the following things to make it work with Docker for Mac:
Make your proxy server listen on 0.0.0.0 instead of 127.0.0.1. Caution: you'll need proper firewall configuration to prevent malicious access to it.
Add a loopback alias to the lo0 interface, e.g. 10.200.10.1/24:
sudo ifconfig lo0 alias 10.200.10.1/24
Set HTTP and/or HTTPS proxy to 10.200.10.1:8989 from Preferences in Docker tray menu (assume that the proxy server is listening on port 8989).
After that, test the proxy settings by running a command in a new container from an image which is not downloaded:
$ docker rmi -f hello-world
...
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c04b14da8d14: Pull complete
Digest: sha256:0256e8a36e2070f7bf2d0b0763dbabdd67798512411de4cdcf9431a1feb60fd9
Status: Downloaded newer image for hello-world:latest
...
Notice: the loopback alias set by ifconfig does not persist after a reboot. Making it persistent is another topic. Please check this blog post in Japanese (Google Translate may help).
This is the fix that worked for me: Ubuntu, Docker version: 1.6.2
In the file /etc/default/docker, add the line:
export http_proxy='http://<host>:<port>'
Restart Docker
sudo service docker restart
To configure Docker to work with a proxy you need to add the HTTPS_PROXY / HTTP_PROXY environment variable to the Docker sysconfig file (/etc/sysconfig/docker).
Depending on whether you use init.d or the services tool, you need to add the "export" statement (due to Debian bug report #767441; the examples in /etc/default/docker are misleading regarding the supported syntax):
HTTPS_PROXY="https://<user>:<password>@<proxy-host>:<proxy-port>"
HTTP_PROXY="https://<user>:<password>@<proxy-host>:<proxy-port>"
export HTTP_PROXY="https://<user>:<password>@<proxy-host>:<proxy-port>"
export HTTPS_PROXY="https://<user>:<password>@<proxy-host>:<proxy-port>"
The Docker repository (Docker Hub) only supports HTTPS. To get Docker working with SSL-intercepting proxies, you have to add the proxy root certificate to the system's trust store.
For CentOS, copy the file to /etc/pki/ca-trust/source/anchors/ and update the CA trust store and restart the Docker service.
If your proxy uses NTLMv2 authentication - you need to use intermediate proxies like Cntlm to bridge the authentication. This blog post explains it in detail.
After installing Docker, do the following:
[mdesales@pppdc9prd1vq ~]$ sudo HTTP_PROXY=http://proxy02.ie.xyz.net:80 ./docker -d &
[2] 20880
Then, you can pull or do anything:
[mdesales@pppdc9prd1vq ~]$ sudo docker pull base
2014/04/11 00:46:02 POST /v1.10/images/create?fromImage=base&tag=
[/var/lib/docker|aa088847] +job pull(base, )
Pulling repository base
b750fe79269d: Download complete
27cf78414709: Download complete
[/var/lib/docker|aa088847] -job pull(base, ) = OK (0)
In the new version of Docker, docker-engine, on a systemd-based distribution, you should add the environment variable line to /lib/systemd/system/docker.service, as mentioned by others:
Environment="HTTP_PROXY=http://hostname_or_ip:port/"
As I am not allowed to comment yet:
For CentOS 7 I needed to activate the EnvironmentFile within "docker.service", as described here: Control and configure Docker with systemd.
Edit: I am adding my solution, as pointed out by Nilesh. I needed to open /etc/systemd/system/docker.service and add, within the section:
[Service]
EnvironmentFile=-/etc/sysconfig/docker
Only then was the file /etc/sysconfig/docker loaded on my system.
If you are using a SOCKS5 proxy, here is my test with Docker 17.03.1-ce, setting "all_proxy", and it worked:
# Set up socks5 proxy server
ssh sshUser@proxyServer -C -N -g -D \
    proxyServerIp:9999 \
    -o ExitOnForwardFailure=yes \
    -o ServerAliveInterval=60
# Configure dockerd and restart.
# NOTICE: using "all_proxy"
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<EOF
[Service]
Environment="all_proxy=socks5://proxyServerIp:9999"
Environment="NO_PROXY=localhost,127.0.0.1,private.docker.registry.com"
EOF
systemctl daemon-reload
systemctl restart docker
# Test whether can pull images
docker run -it --rm alpine:3.5
To solve the problem with curl in Docker build, I added the following inside the Dockerfile:
ENV http_proxy=http://infoprx2:8080
ENV https_proxy=http://infoprx2:8080
RUN apt-get update && apt-get install -y curl vim
Note that the ENV statement is BEFORE the RUN statement.
And in order to make the Docker daemon able to access the Internet (I use Kitematic with boot2docker), I added the following into /var/lib/boot2docker/profile:
export HTTP_PROXY=http://infoprx2:8080
export HTTPS_PROXY=http://infoprx2:8080
Then I restarted Docker with sudo /etc/init.d/docker restart.
The complete solution for Windows, to configure the proxy settings:
You can configure it directly by right-clicking the Docker icon, then Settings, and then Proxies.
There you can configure the proxy address, port, user name, and password, in this format:
<user>:<password>@<proxy-host>:<proxy-port>
Example:
geronimous:mypassword@192.168.44.55:8080
Nothing more than this.
If you are on Ubuntu, you should execute this command:
export https_proxy=http://your_name:password@ip_proxy:port
And reload Docker with:
service docker.io restart
Or go to /etc/docker.io with nano...
If you're in Ubuntu, execute these commands to add your proxy.
sudo nano /etc/default/docker
And uncomment the line that specifies the proxy:
#export http_proxy=http://username:password@10.0.1.150:8050
And replace it with your appropriate proxy server and username.
Then restart Docker using:
service docker restart
Now you can run Docker commands behind proxy:
docker search ubuntu
Perhaps you need to set up lowercase variables. In my case, my /etc/systemd/system/docker.service.d/http-proxy.conf file looks like this:
[Service]
Environment="ftp_proxy=http://<user>:<password>#<proxy_ip>:<proxy_port>/"
Environment="http_proxy=http://<user>:<password>#<proxy_ip>:<proxy_port>/"
Environment="https_proxy=http://<user>:<password>#<proxy_ip>:<proxy_port>/"
Good luck! :)
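As with the uppercase variants in the other answers, the drop-in only takes effect after reloading systemd and restarting the daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker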
I was also facing the same issue behind a firewall. Follow the below steps:
$ sudo vim /etc/systemd/system/docker.service.d/http_proxy.conf
[Service]
Environment="HTTP_PROXY=http://username:password#IP:port/"
Don’t use or remove the https_prxoy.conf file.
Reload and restart your Docker container:
$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ docker pull hello-world
Using default tag: latest
latest: Pulling from library/hello-world
1b930d010525: Pull complete
Digest: sha256:2557*********************************8
Status: Downloaded newer image for hello-world:latest
Simply setting proxy environment variables did not help me in version 1.0.1... I had to update the /etc/default/docker.io file with the correct value for the "http_proxy" variable.
On Ubuntu 14.04 (Trusty Tahr) with Docker 1.9.1, I just uncommented the http_proxy line, updated the value and then restarted the Docker service.
export http_proxy="http://proxy.server.com:80"
and then
service docker restart
Remove proxy from environment variables
unset http_proxy
unset https_proxy
unset no_proxy
and then restart your docker
On RHEL6.6 only this works (note the use of export):
/etc/sysconfig/docker
export http_proxy="http://myproxy.example.com:8080"
export https_proxy="http://myproxy.example.com:8080"
NOTE: Both can use the http protocol.
Thanks to https://crondev.com/running-docker-behind-proxy/
In my network, Ubuntu works behind a corporate ISA proxy server. And it requires authentication. I tried all the solutions mentioned above and nothing helped. What really helped was to write a proxy line in file /etc/systemd/system/docker.service.d/https-proxy.conf without a domain name.
Instead of
Environment="HTTP_PROXY=http://user#domain:password#proxy:8080"
or
Environment="HTTP_PROXY=http://domain\user:password#proxy:8080"
and some other replacement such as # -> %40 or \ -> \\ I tried to use
Environment="HTTP_PROXY=http://user:password#proxy:8080"
And it works now.
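A separate note on credentials (a general URL rule, not specific to this answer): if the password itself contains reserved characters such as @, they normally have to be percent-encoded inside the proxy URL, for example:
Environment="HTTP_PROXY=http://user:p%40ssword@proxy:8080"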
Try this:
sudo HTTP_PROXY=http://<IP address of proxy server>:<port> docker -d &
This doesn't exactly answer the question, but might help, especially if you don't want to deal with service files.
In case you are the one hosting the image, one way is to export the image as a tar archive instead, using something like the following on the server.
docker save <image-name> --output <archive-name>.tar
Simply download the archive and turn it back into an image.
docker load --input <archive-name>.tar
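In practice the archive is first copied to the host that sits behind the proxy and loaded there, for example over SSH (a sketch; the host name and paths are placeholders):
scp <archive-name>.tar user@target-host:/tmp/
ssh user@target-host docker load --input /tmp/<archive-name>.tar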
I resolved the issue by following the steps below:
step 1: sudo systemctl start docker
step 2: sudo systemctl enable docker
(Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.)
step 3: sudo systemctl status docker
step 4: sudo mkdir -p /etc/systemd/system/docker.service.d
step 5: sudo vi /etc/systemd/system/docker.service.d/proxy.conf
Set proxy as below
[Service]
Environment="HTTP_PROXY=http://proxy.server.com:80"
Environment="HTTPS_PROXY=http://proxy.server.com:80"
Environment="NO_PROXY=.proxy.server.com,*.proxy.server.com,localhost,127.0.0.1,::1"
step 6: sudo systemctl daemon-reload
step 7: sudo systemctl restart docker.service
step 8: vi /etc/environment and source /etc/environment
http_proxy=http://proxy.server.com:80
https_proxy=http://proxy.server.com:80
ftp_proxy=http://proxy.server.com:80
no_proxy=127.0.0.1,10.0.0.0/8,3.0.0.0/8,localhost,*.abc.com
I had a problem where I needed a proxy to reach Google's DNS for a project dependency, and at the same time API requests needed to reach a private server.
For RHEL 7 I configured the system like this:
Edit the file /etc/sysconfig/docker and add:
Environment=http_proxy="http://ip:port"
Environment=https_proxy="http://ip:port"
Environment=no_proxy="hostname"
Then save the file and run:
sudo systemctl restart docker
After that, configure your Dockerfile.
Set up the environment variables first:
ENV http_proxy http://ip:port
ENV https_proxy http://ip:port
ENV no_proxy "hostname"
that's all! :)
