docker-compose build and http_proxy - elasticsearch

I want to test ELK.
It works fine, but when I do a docker-compose up behind a proxy:
docker-compose up --no-recreate
Building kibana
Step 1 : FROM kibana:latest
---> 544887fbfa30
Step 2 : RUN apt-get update && apt-get install -y netcat
---> Running in 794342b9d807
It failed with:
W: Some index files failed to download. They have been ignored, or old ones used instead.
It's OK with:
docker build --build-arg http_proxy=http://proxy:3128 --build-arg https_proxy=http://proxy:3128 kibana
But when I redo a docker-compose up, it tries to rebuild and fails to go through the proxy.
Any help?

You will need docker-compose 1.6.0-rc1 in order to pass the proxy to your build through docker-compose.
See commit 47e53b4 from PR 2653 for issue 2163.
Move all build related configuration into a build: section in the service.
Example:
web:
  build:
    context: .
    dockerfile: Dockerfile.name
    args:
      key: value
As mkjeldsen points out in the comments:
If key should assume the value of an environment variable of the same name, value can be omitted (docker-compose ARGS):
Especially useful for https_proxy: if the envvar is unset or empty, the builder will not apply proxy, otherwise it will.
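For example, a service that forwards the caller's proxy settings this way could look like the following sketch (the kibana context path is just an assumption for this question):
kibana:
  build:
    context: ./kibana
    args:
      # only the keys: Compose fills the values from the environment it
      # runs in, so an unset proxy variable is simply not passed at all
      - http_proxy
      - https_proxy
      - no_proxy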

I ran into the same problem. What helped me was using the explicit compose file version 2.2 and then the args and network options under build, as described in the documentation.
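For reference, a sketch of what that combination can look like (service name and proxy address are placeholders, not from the original answer):
version: '2.2'
services:
  app:
    build:
      context: .
      network: host          # build-time network option from the 2.2 file format
      args:
        http_proxy: http://proxy.example.com:3128
        https_proxy: http://proxy.example.com:3128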

VonC is right; it works for me after adding an args section under build in the docker-compose file:
original:
ssh:
  build: ssh/.
  container_name: ssh
  ports:
    - "3000:22"
  networks:
    vault_net:
      ipv4_address: 172.16.238.20
Modified:
ssh:
  build:
    context: "ssh/."
    args:
      HTTP_PROXY: http://X.X.X.X:XXXX
      HTTPS_PROXY: http://X.X.X.X:XXXX
      NO_PROXY: .domain.ltd,127.0.0.1
  container_name: ssh
  ports:
    - "3000:22"
  networks:
    vault_net:
      ipv4_address: 172.16.238.20
Note that I had to quote the context value so it is parsed as a string.
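On the Dockerfile side, nothing extra is needed for these particular names: HTTP_PROXY, HTTPS_PROXY and NO_PROXY are predefined build args, so they are already visible to RUN steps. A minimal sketch (the base image is just an example):
FROM ubuntu:20.04
# HTTP_PROXY / HTTPS_PROXY / NO_PROXY arrive as predefined build args;
# no ARG declaration is needed, and tools that honour these variables
# (curl, git, most package managers) see them during RUN steps
RUN env | grep -i _proxy || true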
Thanks a lot.

Did you try it on a clean machine?
docker-machine stop default
docker-machine create -d virtualbox test
docker-machine start test
eval $(docker-machine env test)
docker-compose up

Related

docker-compose build context dockerfile envar image

I would like to use docker-compose to build/run Dockerfiles that have environment variables in their FROM instruction. The problem is that I seem to be unable to pass environment variables from my environment through docker-compose into the Dockerfile.
docker-compose.yml
version: "3.2"
services:
api:
build: 'api/'
restart: on-failure
depends_on:
- mysql
networks:
- frontend
- backend
volumes:
- ./api/php/:/var/www/html/
Dockerfile in 'api/'
FROM ${DOCKER_IMAGE_API}
RUN apk update
RUN apk upgrade
RUN docker-php-ext-install mysqli
Why?
I want to do this so that I can run docker-compose from a bash script that detects the host architecture and changes the base image of the underlying dockerfiles in the host application.
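(For illustration, roughly the kind of wrapper script I have in mind; the image tags are only examples:)
#!/bin/sh
# pick a base image for the api service depending on the host architecture
case "$(uname -m)" in
  x86_64)  export DOCKER_IMAGE_API="php:7.4-apache" ;;
  aarch64) export DOCKER_IMAGE_API="arm64v8/php:7.4-apache" ;;
esac
# the hope is that docker-compose forwards DOCKER_IMAGE_API into the FROM line
docker-compose up --build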
FROM instructions support variables that are declared by any ARG instructions that occur before the first FROM. So what you can do is this:
ARG IMAGE
FROM $IMAGE
when you run the build command, you then pass the --build-arg as follows:
docker build -t test --build-arg IMAGE=alpine .
You can also choose to have a default value for the IMAGE variable, to be used if the --build-arg flag isn't used.
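For instance (a minimal illustration, not part of the original answer):
ARG IMAGE=alpine
FROM $IMAGE
With the default in place, a plain docker build -t test . uses alpine, while --build-arg IMAGE=... still overrides it.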
Alternatively, if you use docker compose build rather than docker build (and I think this is your case), you can pass the variable with docker compose build --build-arg:
version: "3.9"
services:
api:
build: .
and then
docker compose build --build-arg IMAGE=alpine
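If you would rather not pass the flag on every build, the argument can also be declared in the compose file with only its key, so Compose resolves it from the environment it runs in (a sketch along the same lines, not from the original answer):
version: "3.9"
services:
  api:
    build:
      context: .
      args:
        - IMAGE   # value taken from the environment running Compose
and then: IMAGE=alpine docker compose build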

Weird behaviour passing build-args to Dockerfile through docker-compose

I'm facing a strange problem (or better: two different, weird problems) trying to pass build-args to my Dockerfile through docker-compose up.
My files - initial setup
Dockerfile:
ARG NODE_VERSION
FROM node:${NODE_VERSION}
ARG NPM_REGISTRY_TOKEN
RUN echo "=====> token ${NPM_REGISTRY_TOKEN}"
... ... ...
docker-compose.yml:
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
With this initial setup in place, I have the following behaviour (on Linux Mint 20, docker-compose version 1.26.2, build eefe0d31):
running docker build --build-arg NPM_REGISTRY_TOKEN=xyz123 produces =====> token xyz123 in the output: the NPM_REGISTRY_TOKEN arg flows to the Dockerfile
running docker-compose build --build-arg NPM_REGISTRY_TOKEN=xyz123 myservice produces =====> token xyz123 in the output: the NPM_REGISTRY_TOKEN arg flows to the Dockerfile
running NPM_REGISTRY_TOKEN=xyz123 docker-compose up myservice produces =====> token (empty) in the output: the NPM_REGISTRY_TOKEN env var should flow to the Dockerfile thanks to the bare - NPM_REGISTRY_TOKEN entry (according to https://docs.docker.com/compose/compose-file/#args: "You can omit the value when specifying a build argument, in which case its value at build time is the value in the environment where Compose is running"), but it does not seem to be available during the build (see the note right after this list)
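One thing worth ruling out here (a general note, not from the original post): docker-compose up reuses a previously built image unless a rebuild is forced, so an argument passed on the up command line may never reach a new build at all:
# force a rebuild so the build args are re-evaluated
NPM_REGISTRY_TOKEN=xyz123 docker-compose up --build myservice
# or build explicitly before bringing the service up
NPM_REGISTRY_TOKEN=xyz123 docker-compose build myservice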
My files - reloaded
Simply changing my docker-compose.yml file to
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
      dockerfile: ../Dockerfile
seems to solve the problem: switching the args and dockerfile entries in the yml file also unlocks the ability to pass environment variables to the Dockerfile as build args through docker-compose up. Problem solved. Or not?
Changing OS, getting new problem
So, developers in my team use a bunch of different operating systems: Linux, Mac Os, and Windows, too.
Running the same commands with the same docker-compose version (1.26.2) on Windows 10 Professional 1909, we get the same problem we faced initially, both with the initial version of the docker-compose.yml file and with the version that works on Linux.
We tried passing the env vars from the command line, setting them in the command prompt, setting them as system variables through the GUI... we tried launching docker-compose up from Git Bash, too, but we were not able to get the variable value into the Dockerfile.
I googled around a bit but have not found any reference to known bugs or limitations of the Windows version of docker-compose.
Anyone have any idea what the problem might be? Thank you very much in advance!
So, finally, after some trial and error on different OSs and with different configurations, I ended up with an explanation of my problem, and therefore with a viable workaround, which allowed me to reach a satisfactory configuration for my docker-compose.yml file.
Short answer: it wasn't a matter of OSs, nor of env var passing, nor of the order of the context / dockerfile sections; it was a matter of a clash between different services in my compose file.
In more detail: my docker-compose.yml file also contained an additional service, whose job was to initialize the database the application points to:
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
    depends_on:
      - persistence
      - db_initializer
    command: sh -c './wait-for localhost:5432 -- ./wait-for localhost:15672 -- npm run start:dev'
  persistence:
    # Setting up the DBMS here
  db_initializer:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
    depends_on:
      - persistence
    command: sh -c './wait-for localhost:5432 -- ./wait-for localhost:15672 -- npm run db:migrate'
So, the problem was that I was configuring two services based on the same self-built image, launched with different commands (npm run db:migrate for the db_initializer service, npm run start:dev for the application service). Apparently Compose took the configuration provided for the first service it built (db_initializer, because myservice depends on it) and used that configuration for both services, ignoring the (different) args section I was providing for the second container. So I was able to solve the problem (this time for real!) simply by merging the service declarations, including all the args I needed:
version: '3'
services:
  myservice:
    build:
      context: ./myservice
      dockerfile: ../Dockerfile
      args:
        - NODE_VERSION=10.15.1-alpine
        - NPM_REGISTRY_TOKEN
    depends_on:
      - persistence
    command: sh -c './wait-for localhost:5432 -- ./wait-for localhost:15672 -- npm run db:migrate && npm run start:dev'
  persistence:
    # Setting up the DBMS here
So, after a bunch of months without collecting answers, I think it's time to share my experience, hoping it can help someone encountering this weird behaviour.

Docker compose containers fail and exit with code 127 missing /bin/env bash

I'm new to Docker, so bear with me if I get a term wrong.
I have Docker Tools installed on Windows 7, and I'm trying to run the Docker Compose file of an existing proprietary project stored in a Git repository, one that has probably only ever been run on Linux.
These are the commands I ran:
docker-machine start
docker-machine env
@FOR /f "tokens=*" %i IN ('docker-machine env') DO @%i
(this command was the output of step 2)
docker-compose -f <docker-file.yml> up
Most of the Docker work has gone fine (image download, extraction, etc).
It is failing at container start, where some containers run fine - I recognize a working MongoDB instance since its log doesn't report any error - but other containers exit pretty soon with an error code, i.e.:
frontend_1 exited with code 127
Scrolling up a bit in the console, I can see lines like:
No such file or directoryr/bin/env: bash
I have no idea where to go from here. I tried launching docker-compose from a Cygwin terminal, too, but got the same result.
Docker Compose file
version: "2"
services:
frontend:
command: "yarn start"
image: company/application/frontend:1
build:
context: frontend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "3000:3000"
volumes:
- ./frontend:/opt/app
backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && yarn run dev"
image: company/application/backend:1
build:
context: backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "4000:4000"
volumes:
- ./backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
generator-backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && npm run dev"
image: company/generator/backend:1
build:
context: generator-backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "5000:5000"
volumes:
- ./generator-backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
db:
image: mongo:3.4
volumes:
- mongo:/data/db
ports:
- "27017:27017"
volumes:
mongo:
It turned out it was a matter of file line endings, caused by git clone, as pointed out by @mklement0 in his answer to the "env: bash\r: No such file or directory" question.
Disabling core.autocrlf and then re-cloning the repo solved it.
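For anyone hitting the same thing, the fix boils down to something like this (a sketch; adjust the autocrlf value to your team's conventions):
# disable automatic CRLF conversion on checkout (or use 'input')
git config --global core.autocrlf false
# re-clone so the files inside the build context keep Unix line endings
git clone <repository-url>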

Docker compose can not start service network not found after restart docker

I'm using Docker for Windows (version 18.03.0-ce-win59 (16762)) on Windows 10 Pro. All the containers run OK after running the command docker-compose up -d. The problem is when I restart the Docker service: once restarted, all the containers are stopped, and when I run docker-compose start the following error is shown:
Error response from daemon: network ccccccccccccc not found
I don't know what's happening. When I run the containers with docker run and the --restart=always option, everything works as expected: no error is shown on restart.
This is the docker-compose file:
version: '3'
services:
  service_1:
    image: image1
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "1"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
  service_2:
    image: image2
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "2"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
  service_3:
    image: image3
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/foo2
      dockerfile: Dockerfile
      args:
        ENTRY: "4"
    volumes:
      - C:/ProgramData/Docker/volumes/foo1:C:/foo1
      - C:/ProgramData/Docker/volumes/foo2:C:/foo2
The Dockerfiles look like this:
FROM microsoft/dotnet-framework:3.5
ARG ENTRY
ENV my_env=$ENTRY
WORKDIR C:\\foo2
ENTRYPOINT C:/foo2/app.exe %my_env%
The network has changed. I ran into the same problem after using the docker network prune command. Recreating the containers fixes it: Docker sets up the network again for the new containers.
#remove all containers
docker rm $(docker ps -qa)
#or
docker system prune
There might be some old container instances which were not removed. Check the instances with
docker container ls -a
You might get output like this if you have some instances which were not removed
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8b4678e6666b b4a75a01d539 "/bin/sh -c 'eval `s…" 6 weeks ago Exited (1) 6 weeks ago zealous_allen
ee862a3418f2 1eaaf48e9b42 "/bin/sh -c 'eval `s…" 6 weeks ago Exited (1) 6 weeks ago jolly_torvalds
Remove the containers by the container id
docker container rm 8b4678e6666b
docker container rm ee862a3418f2
Now start your container with docker-compose file
This worked for me. Hope it helps!
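If the project was originally started with docker-compose, an equivalent shortcut is usually:
docker-compose down    # removes the project's containers and its networks
docker-compose up -d   # recreates both, so the network exists again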
I found a possible solution by editing the docker-compose.yml file as follows:
version: '3'
services:
  cm04:
    image: tnc530_cm04
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC530
      dockerfile: Dockerfile
      args:
        ENTRY: "1"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC530/bin/x86/Release:C:/adontec
  cm06:
    image: tnc620_cm06
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620
      dockerfile: Dockerfile
      args:
        ENTRY: "2"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620/bin/x86/Release:C:/adontec
  cm08:
    image: tnc620_cm08
    networks:
      - test
    privileged: false
    restart: always
    build:
      context: C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620
      dockerfile: Dockerfile
      args:
        ENTRY: "4"
    volumes:
      - C:/ProgramData/Docker/volumes/sqlite:C:/sqlite
      - C:/ProgramData/Docker/volumes/adontec/LSV2_Lib/Heidenhain/TNC620/bin/x86/Release:C:/adontec
networks:
  test:
    external:
      name: nat
As you can see, I created a network called test linked to the external network nat. Now, when I restart the Docker service, the containers start with no errors.
Alternatively, you can just open your Docker app and manually delete the containers, then run docker-compose up in your terminal. Now it should be working. Go to port 9000 or 9001, or whichever port you are using, and see if MinIO is actually running.

How to make environmental variables available to Docker RUN commands from docker-compose?

I have a Dockerised application which I would like to run in both proxy and non-proxy host environments. I'm trying to resolve this problem by copying the normal environment variables, such as http_proxy, into the containers if and only if they exist in the host.
I can get 90% of the way there by running
set | grep -i _proxy= > proxies.env
in a top-level script, and then having, in my docker-compose.yml:
myserver:
  build: ./myserver
  env_file:
    - proxies.env
This copies the host's environmental proxy variables, if any, into the server container, and it works in the sense that these variables are available at container run time, in other words by the stage that the Dockerfile CMD or ENTRYPOINT executes.
However, I have one container which needs to run npm as a build step, i.e. from a RUN command in the Dockerfile, and these variables appear not to be present at that stage, so npm can't find the proxy and hangs. In other words, if I have
RUN set
in my Dockerfile, I can't see any variables from proxies.env, but if I do
docker exec -it myserver /bin/bash
and then run set, I can see everything from proxies.env.
Can anyone recommend a way to make these variables visible at container build time, without having to hard-code them, so that my docker-compose.yml and Dockerfile will still work both for hosts with proxies and hosts without proxies?
(Running with centos 7, docker-compose 1.3.1 and docker 1.7.0)
Update 2016, docker-compose 1.6.2, docker 1.10+, with a docker-compose.yml version 2:
You now have the args: sub-section of the build: section, which includes that very interesting possibility:
Build arguments with only a key are resolved to their environment value on the machine Compose is running on.
See PR 2653 (January 2016)
As a result, a way to introduce the proxy variables without hard-coding them in the docker-compose.yml file itself is with that precise syntax:
version: '2'
services:
  myservice:
    build:
      context: .
      args:
        - http_proxy
        - https_proxy
        - no_proxy
Before calling docker-compose up, you need to make sure your proxy environment variables are set:
export http_proxy=http://username:password@proxy.com:port
export https_proxy=http://username:password@proxy.com:port
export no_proxy=localhost,127.0.0.1,company.com
docker-compose up
Then the Dockerfile built by the docker-compose process will automatically pick up the proxy variable values, even though the docker-compose.yml does not include any hard-coded values.
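As a concrete sketch of what that means inside the image (base image and file layout are assumptions, not from the original answer): since http_proxy, https_proxy and no_proxy are predefined build args, the RUN steps see them in their environment, and npm honours them:
FROM node:14-alpine
WORKDIR /app
COPY package.json .
# npm reads http_proxy / https_proxy from the build environment,
# so this works behind the proxy without any ARG declarations
RUN npm install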
Maybe the environment option solves your problem. In your docker-compose file it would look like:
myserver:
  build: ./myserver
  environment:
    - HTTP_PROXY=192.168.1.8
    - VARIABLE=value
    - ...
Maybe you can try this:
Before you call RUN, ADD the .env file into the image
ADD proxies.env proxies.env
then prefix your RUN statement:
RUN export `cat proxies.env` && echo "FOO is $FOO and BAR is $BAR"
This produces the following output:
root#armenubuntudev:~/Dockers/set-env# docker build -t ashimoon/envtest .
Sending build context to Docker daemon 3.584 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu
---> 91e54dfb1179
Step 1 : ADD proxies.env proxies.env
---> Using cache
---> 181d0e082e65
Step 2 : RUN export `cat proxies.env` && echo "FOO is $FOO and BAR is $BAR"
---> Running in 30426910a450
FOO is 1 and BAR is 2
---> 5d88fcac522c
Removing intermediate container 30426910a450
Successfully built 5d88fcac522c
docker-compose.yml
...
server:
  build:
    context: .
    args:
      env: $ENV
...
Dockerfile
ARG env
ENV NODE_ENV $env
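For this to work, ENV has to be set in the shell that runs Compose, since $ENV is substituted before the build starts; for example:
export ENV=production
docker-compose build
docker-compose up -d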
This example fixes yum behind a proxy.
version: '2'
services:
  example-service:
    build:
      context: .
      args:
        http_proxy: proxy.example.com:80
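The matching Dockerfile needs no ARG line for this, since http_proxy is one of the predefined build args; a minimal sketch (base image and package are only examples):
FROM centos:7
# yum reads http_proxy from the build environment provided by the arg above
RUN yum install -y epel-release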
