I run Docker 1.8.1 on OS X 10.11 via a local docker-machine VM.
I have the following docker-compose.yml:
web:
  build: docker/web
  ports:
    - 80:80
    - 8080:8080
  volumes:
    - $PWD/cms:/srv/cms
My Dockerfile looks like this:
FROM alpine
# install nginx and php
RUN apk add --update \
nginx \
php \
php-fpm \
php-pdo \
php-json \
php-openssl \
php-mysql \
php-pdo_mysql \
php-mcrypt \
php-ctype \
php-zlib \
supervisor \
wget \
curl \
&& rm -rf /var/cache/apk/*
RUN mkdir -p /etc/nginx && \
mkdir -p /etc/nginx/sites-enabled && \
mkdir -p /var/run/php-fpm && \
mkdir -p /var/log/supervisor && \
mkdir -p /srv/cms
RUN rm /etc/nginx/nginx.conf
ADD nginx.conf /etc/nginx/nginx.conf
ADD thunder.conf /etc/nginx/sites-enabled/thunder.conf
ADD nginx-supervisor.ini /etc/supervisor.d/nginx-supervisor.ini
WORKDIR "/srv/cms"
VOLUME "/srv/cms"
EXPOSE 80
EXPOSE 8080
EXPOSE 22
CMD ["/usr/bin/supervisord"]
When I run everything with docker-compose up, it works fine and my volumes are mounted in the correct place.
But the permissions in the mounted folder /srv/cms look wrong: inside the container the owner is user "1000" and the group is "50". The webserver cannot create any files in this folder, because it runs as the user "root".
1) General idea: Docker is not Vagrant. It is wrong to put two different services into one container! Split it into two different images and link them together; don't build a monolithic image like this.
Check and follow https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/
Avoid installing unnecessary packages
Run only one process per container
Minimize the number of layers
If you do this:
you can remove supervisor
you can decrease the number of layers
It should be something like (example):
FROM alpine
RUN apk add --update \
wget \
curl
RUN apk add --update \
php \
php-fpm \
php-pdo \
php-json \
php-openssl \
php-mysql \
php-pdo_mysql \
php-mcrypt \
php-ctype \
php-zlib
RUN usermod -u 1000 www-data
RUN rm -rf /var/cache/apk/*
EXPOSE 9000
For nginx it is enough to use the default image and mount your configs.
The docker-compose file would look like:
nginx:
  image: nginx
  container_name: site.dev
  volumes:
    - ./myconf1.conf:/etc/nginx/conf.d/myconf1.conf
    - ./myconf2.conf:/etc/nginx/conf.d/myconf2.conf
    - $PWD/cms:/srv/cms
  ports:
    - "80:80"
  links:
    - phpfpm
phpfpm:
  build: ./phpfpm/
  container_name: phpfpm.dev
  command: php5-fpm -F --allow-to-run-as-root
  volumes:
    - $PWD/cms:/srv/cms
2)
Add RUN usermod -u 1000 www-data to the Dockerfile of the PHP container; it will fix the permission problem.
For the Alpine version you need to use:
RUN apk add shadow && usermod -u 1000 www-data && groupmod -g 1000 www-data
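For completeness, a minimal sketch of what ./phpfpm/Dockerfile from the compose example above could look like with that fix folded in. It assumes the image ends up with a www-data user to modify (the same assumption the usermod command makes) and that UID 1000 matches the owner of the mounted folder on the host:
FROM alpine
# packages as in the example above, plus shadow for usermod/groupmod
RUN apk add --update \
    php \
    php-fpm \
    shadow \
    && usermod -u 1000 www-data \
    && groupmod -g 1000 www-data \
    && rm -rf /var/cache/apk/*
# 9000 is the conventional php-fpm FastCGI port
EXPOSE 9000
# no CMD needed here: the compose file above starts php-fpm via its command: entry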
Related
I want to set up CI/CD with gitlab-runner and Docker Swarm. The problem is that when I deploy, the commit is not checked out (or is checked out without changes), and I am not sure whether the problem is GitLab, Docker, or the docker build. My .gitlab-ci.yml looks like:
stages:
  - build
  - deploy
build_image:
  stage: build
  image: docker:git
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker build -t registry.gitlab.com/insuretech1/backend:${CI_COMMIT_REF_SLUG} ./DockerFiles/Worker
    - docker push registry.gitlab.com/insuretech1/backend:${CI_COMMIT_REF_SLUG}
  only:
    - branches
deploy_staging:
  stage: deploy
  image: rastasheep/ubuntu-sshd:latest
  script:
    # add the server as a known host
    - ssh-keyscan 46.4.151.121 >> ~/.ssh/known_hosts
    - chmod 600 ~/.ssh/known_hosts
    # add ssh key stored in SSH_PRIVATE_KEY variable to the agent store
    - eval $(ssh-agent -s)
    - touch key.txt
    - echo "$SSH_PRIVATE_KEY" >> key.txt
    - chmod 600 key.txt
    - ssh-add key.txt
    # log into Docker registry
    - ssh alireza@46.4.151.121 "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com"
    # stop container, remove image.
    - ssh alireza@46.4.151.121 "docker stop dockergitlab_${CI_COMMIT_REF_SLUG}" || true
    - ssh alireza@46.4.151.121 "docker rm dockergitlab_${CI_COMMIT_REF_SLUG}" || true
    - ssh alireza@46.4.151.121 "docker rmi registry.gitlab.com/insuretech1/backend:${CI_COMMIT_REF_SLUG}" || true
    # start new container
    - ssh alireza@46.4.151.121 "docker run --name dockergitlab_${CI_COMMIT_REF_SLUG} -d registry.gitlab.com/insuretech1/backend:${CI_COMMIT_REF_SLUG}"
  only:
    - branches
  except:
    - master
I have also included my pipeline log below, which might help describe the problem:
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/insuretech1/backend/.git/
Created fresh repository.
From https://gitlab.com/insuretech1/backend
* [new ref] refs/pipelines/124187268 -> refs/pipelines/124187268
* [new branch] develop -> origin/develop
Checking out 735209a2 as develop...
Skipping Git submodules setup
$ docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
The content of the Dockerfile I use for my build:
FROM debian:buster
MAINTAINER Alireza Rahmani Khalili "alirezarahmani@live.com"
ENV TERM xterm
RUN apt-get update --fix-missing && apt-get install -y --force-yes curl sudo vim
RUN apt-get install -y --force-yes wget apt-transport-https lsb-release ca-certificates
RUN wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
RUN echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list
RUN echo "deb http://ftp.uk.debian.org/debian buster-backports main" >> /etc/apt/sources.list
RUN wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
RUN echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list
RUN DEBIAN_FRONTEND="noninteractive" apt-get update && apt-get install -y --force-yes \
nginx \
php7.3 \
php7.3-cli \
php7.3-fpm \
php7.3-curl \
php7.3-json \
php7.3-mysql \
php7.3-sqlite \
php7.3-xml \
php7.3-intl \
php7.3-mbstring \
php7.3-xdebug \
php-memcached \
git \
openssh-server \
php7.3-gd \
zip \
php7.3-zip
# configure php-fpm
RUN sed -i 's/^;*clear_env = .*/clear_env = no/' /etc/php/7.3/fpm/pool.d/www.conf
RUN curl -sS https://getcomposer.org/installer | php && \
mv composer.phar /usr/local/bin/composer && chmod +x /usr/local/bin/composer
RUN mkdir /var/run/sshd
RUN echo 'root:root' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
RUN echo "UseDNS no" >> /etc/ssh/sshd_config
RUN echo "KexAlgorithms diffie-hellman-group1-sha1" >> /etc/ssh/sshd_config
RUN echo "fastcgi_param PATH_TRANSLATED \$document_root\$fastcgi_script_name;" >> /etc/nginx/fastcgi_params
RUN mkdir /etc/nginx/ssl
RUN openssl ecparam -out /etc/nginx/ssl/nginx.key -name prime256v1 -genkey
RUN openssl req -new -batch -key /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/csr.pem
RUN openssl req -x509 -nodes -days 365 -key /etc/nginx/ssl/nginx.key -in /etc/nginx/ssl/csr.pem -out /etc/nginx/ssl/nginx.pem
RUN chmod 600 /etc/nginx/ssl/*
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
ADD docker-entrypoint.sh /usr/bin/docker-entrypoint
RUN chmod +x /usr/bin/docker-entrypoint
RUN sed -i 's/^user nginx;/user www-data;/' /etc/nginx/nginx.conf
RUN echo "apc.enable_cli=1" >> /etc/php/7.3/cli/php.ini
RUN echo "apc.shm_size=128M" >> /etc/php/7.3/fpm/conf.d/20-apcu.ini
RUN sed -i "s/\(max_execution_time *= *\).*/\1180/" /etc/php/7.3/fpm/php.ini
RUN sed -i "s/\(upload_max_filesize *= *\).*/\1100M/" /etc/php/7.3/fpm/php.ini
RUN sed -i "s/\(post_max_size *= *\).*/\1100M/" /etc/php/7.3/fpm/php.ini
RUN sed -i "s/\(^.*max_input_vars *= *\).*/max_input_vars = 10000/" /etc/php/7.3/fpm/php.ini
RUN sed -i "s/\(pm.max_children = 5\).*/\pm.max_children = 50/" /etc/php/7.3/fpm/pool.d/www.conf
RUN sed -i "s/\(pm.max_spare_servers = 3\).*/\pm.max_spare_servers = 10/" /etc/php/7.3/fpm/pool.d/www.conf
RUN echo "xdebug.default_enable=1" >> /etc/php/7.3/fpm/conf.d/20-xdebug.ini
RUN echo "xdebug.profiler_output_dir=/var/www/cachegrind/" >> /etc/php/7.3/fpm/conf.d/20-xdebug.ini
RUN echo "xdebug.profiler_enable_trigger=1" >> /etc/php/7.3/fpm/conf.d/20-xdebug.ini
RUN echo "xdebug.profiler_output_name= cachegrind.out" >> /etc/php/7.3/fpm/conf.d/20-xdebug.ini
RUN mkdir /root/.ssh/
ADD default.conf /etc/nginx/conf.d/default.conf
ADD default.conf /etc/nginx/sites-enabled/default
ADD default.conf /etc/nginx/sites-available/default
EXPOSE 22 443 80
WORKDIR /var/www/
ENTRYPOINT ["docker-entrypoint"]
CMD ["nginx", "-g", "daemon off;"]
And here is the content of the docker-compose file I use when I deploy in my CI/CD:
version: '3'
services:
  worker:
    image: registry.gitlab.com/insuretech1/backend:develop
    ports:
      - 0.0.0.0:80:80
    depends_on:
      - mysql
    deploy:
      mode: replicated
      replicas: 3
      # service resource management
      resources:
        # Hard limit - Docker does not allow to allocate more
        limits:
          cpus: '0.25'
          memory: 512M
        # Soft limit - Docker makes best effort to return to it
        reservations:
          cpus: '0.25'
          memory: 256M
      # service restart policy
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
      # service update configuration
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: continue
        monitor: 60s
        max_failure_ratio: 0.3
    volumes:
      - /var/www/backend:/var/www
  mysql:
    image: mariadb:10.4
    ports:
      - 0.0.0.0:3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - /opt/mysql_data:/var/lib/mysql
    deploy:
      placement:
        constraints: [node.role == manager]
  redis:
    image: redis
    deploy:
      placement:
        constraints: [node.role == manager]
The issue is that I cannot see the latest changes from my commit on my server (I have to run git pull manually to fetch them). Is there anything wrong?
First of all, in your Dockerfile you should copy the content of the directory into the Docker container, so the image built for each commit actually contains that commit's code. For example:
COPY . /var/www/
The other problem is that in your docker-compose file you have:
volumes:
  - /var/www/backend:/var/www
This bind mount overrides the code inside your container with whatever is on the host, which is why you cannot see the changes from your commits.
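As a sketch, this is how the worker service from the compose file above could look with the bind mount dropped, so that the code copied into the image at build time is what actually gets served:
worker:
  image: registry.gitlab.com/insuretech1/backend:develop
  ports:
    - 0.0.0.0:80:80
  depends_on:
    - mysql
  # no "volumes: - /var/www/backend:/var/www" entry here:
  # a bind mount on /var/www would hide the code baked into the image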
I have a Docker setup with Laravel and Apache alongside MySQL. When I try to run an artisan command in the VS Code terminal I get:
There is no existing directory at "/var/www/html/storage/logs" and its not buildable: Permission denied
Apache setup in docker compose:
laravel-app:
  build:
    context: ./docker/app
    args:
      uid: ${UID}
  container_name: laravel-app
  environment:
    - APACHE_RUN_USER=#${UID}
    - APACHE_RUN_GROUP=#${UID}
  volumes:
    - .:/var/www/html
  ports:
    - ${HOST_PORT}:80
  networks:
    backend:
      aliases:
        - laravel-app
The Dockerfile for Apache:
FROM php:7.2-apache
RUN apt-get update
# 1. development packages
RUN apt-get install -y \
git \
zip \
curl \
sudo \
unzip \
libicu-dev \
libbz2-dev \
libpng-dev \
libjpeg-dev \
libmcrypt-dev \
libreadline-dev \
libfreetype6-dev \
g++
# 2. apache configs + document root
ENV APACHE_DOCUMENT_ROOT=/var/www/html/public
RUN sed -ri -e 's!/var/www/html!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/sites-available/*.conf
RUN sed -ri -e 's!/var/www/!${APACHE_DOCUMENT_ROOT}!g' /etc/apache2/apache2.conf /etc/apache2/conf-available/*.conf
# 3. mod_rewrite for URL rewrite and mod_headers for .htaccess extra headers like Access-Control-Allow-Origin-
RUN a2enmod rewrite headers
# 4. start with base php config, then add extensions
RUN mv "$PHP_INI_DIR/php.ini-development" "$PHP_INI_DIR/php.ini"
RUN docker-php-ext-install \
bz2 \
intl \
iconv \
bcmath \
opcache \
calendar \
mbstring \
pdo_mysql \
zip
# 5. composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# 6. we need a user with the same UID/GID with host user
# so when we execute CLI commands, all the host file's ownership remains intact
# otherwise command from inside container will create root-owned files and directories
ARG uid
RUN useradd -G www-data,root -u $uid -d /home/devuser devuser
RUN mkdir -p /home/devuser/.composer && \
chown -R devuser:devuser /home/devuser
even though the directory exists, and the commands run successfully from within the container. Should I always run the Laravel artisan commands from inside the container, or is there something wrong?
Go to your project folder, open a terminal, and run this command:
sudo chmod -R 775 storage
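If the project is the bind-mounted directory from the compose file above, the same fix can also be applied from inside the container; a sketch (the service name is taken from that compose file, and php:7.2-apache's default working directory is /var/www/html, where the project is mounted):
docker-compose exec laravel-app chmod -R 775 storage
# then verify that the logs directory is writable
docker-compose exec laravel-app ls -ld storage/logs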
My Dockerfile:
FROM AWS_ECR_IMAGE
RUN apt-get update && apt-get install -y \
cron \
python-dev \
git \
zlib1g-dev \
libffi-dev \
libssl-dev \
autotools-dev \
automake \
libbz2-dev \
libaio-dev \
libsasl2-dev \
python-pip
RUN pip install boto boto3 awscli
# Install Nginx.
RUN \
add-apt-repository -y ppa:nginx/stable && \
apt-get update && \
apt-get install -y nginx && \
rm -rf /var/lib/apt/lists/* && \
echo "\ndaemon off;" >> /etc/nginx/nginx.conf && \
chown -R www-data:www-data /var/lib/nginx
# Define mountable directories.
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
# Define working directory.
WORKDIR /etc/nginx
# Define default command.
CMD ["nginx"]
COPY nginx_conf /etc/nginx/sites-available/default
# Start service, replace server name, update web ui
COPY main.sh /opt/annotation-pipeline-docs/main.sh
RUN chmod 0755 /opt/annotation-pipeline-docs/main.sh
ENTRYPOINT [ "sh", "-c", "/opt/annotation-pipeline-docs/main.sh" ]
# Expose ports.
EXPOSE 80
And my entrypoint bash file (I need to update the server name first when the container runs) is:
#!/bin/bash -e
/usr/local/bin/aws s3 sync s3://${S3_Bucket}/docs/${ENVIRONMENT}/HEAD/ /var/www/html/
if [ "$ENVIRONMENT" == "prod" ]
then
sed -i.bak "s/REPLACE_ME/example.com/g" /etc/nginx/sites-available/default
else
sed -i.bak "s/REPLACE_ME/example-$ENVIRONMENT.com/g" /etc/nginx/sites-available/default
fi
nginx
while true; do
sleep 60
echo "s3 sync again:"
/usr/local/bin/aws s3 sync s3://${S3_Bucket}/docs/${ENVIRONMENT}/HEAD/ /var/www/html/
done
The issue is that when nginx runs, it hangs forever in the terminal and the while loop is never reached. Does anyone know why it is hanging and how to resolve it? Thanks in advance.
The reason for my issue is that nginx keeps running in the foreground and never releases the shell, so the while loop is never reached while nginx is waiting for traffic.
The solution I tried was to run nginx in the background instead of as a foreground service. Since it is the only service in my container, that should not be a problem.
The change was simply to remove the following line from my Dockerfile:
echo "\ndaemon off;" >> /etc/nginx/nginx.conf
which is what made nginx run as a foreground service.
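An alternative sketch, not what was done above but another common pattern: keep nginx in the foreground (so the container stops if nginx ever dies) and push the sync loop into the background instead. Variables and paths are the ones from the entrypoint above:
#!/bin/bash -e
# run the periodic S3 sync in a background subshell
(
  while true; do
    sleep 60
    echo "s3 sync again:"
    /usr/local/bin/aws s3 sync s3://${S3_Bucket}/docs/${ENVIRONMENT}/HEAD/ /var/www/html/
  done
) &
# nginx stays in the foreground as the container's main process
exec nginx -g 'daemon off;'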
I am trying to install Phalcon in Docker and I cannot figure out how to do it.
I have searched the web for solutions but couldn't manage to make it work.
I successfully installed Docker for Windows and it seems to work fine, but I cannot find a way to install Phalcon.
Can anyone help me install Phalcon in Docker?
Thanks in advance
You can create a Dockerfile which compiles Phalcon for you:
FROM php:7.2-fpm
ENV PHALCON_VERSION=3.4.2
RUN curl -sSL "https://codeload.github.com/phalcon/cphalcon/tar.gz/v${PHALCON_VERSION}" | tar -xz \
&& cd cphalcon-${PHALCON_VERSION}/build \
&& ./install \
&& cp ../tests/_ci/phalcon.ini $(php-config --configure-options | grep -o "with-config-file-scan-dir=\([^ ]*\)" | awk -F'=' '{print $2}') \
&& cd ../../ \
&& rm -r cphalcon-${PHALCON_VERSION}
If you are looking for PHP 7 + Apache + MySQL with Phalcon 3.4.2, here is my solution. For Phalcon 4 additional steps are needed, such as installing the psr extension, otherwise it will report missing required dependencies.
Consider the following structure and put the files accordingly:
docker-compose.yml
www/         (directory where you will put your code)
    index.php
.docker/     (directory)
    Dockerfile
dump/        (directory just to persist your MySQL data)
Here is the Dockerfile, which you will put in the .docker directory:
FROM php:7.1.2-apache
RUN docker-php-ext-install pdo_mysql
RUN sed -i '/jessie-updates/d' /etc/apt/sources.list # Now archived
RUN apt update
RUN apt install -y \
apt-transport-https \
lsb-release \
ca-certificates \
wget \
curl \
nano \
dialog \
net-tools \
git \
sudo \
openssl \
libpcre3-dev
RUN wget -O /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
RUN sh -c 'echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" > /etc/apt/sources.list.d/php.list'
RUN apt update && apt install -y \
php7.1-curl \
php7.1-mbstring \
php7.1-gettext \
php7.1-gd \
php7.1-fileinfo \
php7.1-json \
php7.1-mcrypt \
php7.1-redis \
php7.1-intl \
php7.1-xml \
php7.1-zip
ARG PHALCON_VERSION=3.4.2
ARG PHALCON_EXT_PATH=php7/64bits
RUN set -xe && \
# Compile Phalcon
curl -LO https://github.com/phalcon/cphalcon/archive/v${PHALCON_VERSION}.tar.gz && \
tar xzf ${PWD}/v${PHALCON_VERSION}.tar.gz && \
docker-php-ext-install -j $(getconf _NPROCESSORS_ONLN) ${PWD}/cphalcon-${PHALCON_VERSION}/build/${PHALCON_EXT_PATH} && \
# Remove all temp files
rm -r \
${PWD}/v${PHALCON_VERSION}.tar.gz \
${PWD}/cphalcon-${PHALCON_VERSION}
RUN a2enmod rewrite
That takes care of your PHP and Phalcon setup along with Apache.
You can then use Docker Compose to run all the containers required to make up your application.
Here is the docker-compose file:
version: "2"
services:
www:
build: ./.docker
ports:
- "8001:80"
volumes:
- ./www:/var/www/html/
links:
- db
networks:
- default
db:
image: mysql:5.7.13
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: myDb
MYSQL_USER: user
MYSQL_PASSWORD: test
MYSQL_ROOT_PASSWORD: test
volumes:
- ./dump:/var/lib/mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin
links:
- db:db
ports:
- 8000:80
environment:
MYSQL_USER: user
MYSQL_PASSWORD: test
MYSQL_ROOT_PASSWORD: test
After this, just add index.php in www:
<?php echo phpinfo();?>
After this, localhost:8001 should be accessible. Happy coding! If there are any improvements to be made here, please let me know; as of now this works well for me. It is my first Phalcon configuration and I spent a lot of time on it.
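To bring the stack up, from the directory containing docker-compose.yml (ports as defined above):
docker-compose up -d --build
# application: http://localhost:8001
# phpMyAdmin:  http://localhost:8000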
You can install it through a Dockerfile.
Fetch the appropriate one for your operating system from this link:
https://github.com/phalcon/dockerfiles
You can also use the pre-built images from this repo:
https://hub.docker.com/u/phalconphp
For Phalcon it is important to compile on the same hardware/CPU, or to pass the flags explicitly to the build:
phpize \
&& ./configure CFLAGS="-O2 -g" \
&& make -B \
&& make install
Not sure if this is still relevant, but there is a solution here:
https://keepforyourself.com/coding/php/how-to-setup-phalcon-framework-in-a-docker-container-v2/
FROM alpine:latest
RUN apk add --no-cache \
apache2-proxy \
apache2-ssl \
apache2-utils \
curl \
git \
logrotate \
openssl \
git bash php php7-dev apache2 gcc \
libc-dev make php7-pdo php7-json \
php7-session php7-pecl-psr \
php7-apache2
WORKDIR /
RUN git clone --depth=1 "git://github.com/phalcon/cphalcon.git"
WORKDIR /cphalcon/build
RUN ./install
RUN echo "extension=phalcon.so" > /etc/php7/conf.d/phalcon.ini
RUN apk del libc-dev zlib-dev php7-dev libedit-dev musl-dev pcre2-dev ncurses-dev \
expat xz-libs curl musl-utils make libedit zlib ncurses-libs libstdc++ pcre git bash musl argon2-libs
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
RUN rm -rf /cphalcon
WORKDIR /var/www/localhost/htdocs
RUN echo "<?php phpinfo(); ?>" > /var/www/localhost/htdocs/index.php
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
I would like to run Hadoop and Flume dockerized. I have a standard Hadoop image with all the default values. I cannot see how these services can communicate with each other when placed in separate containers.
Flume's Dockerfile looks like this:
FROM ubuntu:14.04.4
RUN apt-get update && apt-get install -q -y --no-install-recommends wget
RUN mkdir /opt/java
RUN wget --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" -qO- \
https://download.oracle.com/otn-pub/java/jdk/8u20-b26/jre-8u20-linux-x64.tar.gz \
| tar zxvf - -C /opt/java --strip 1
RUN mkdir /opt/flume
RUN wget -qO- http://archive.apache.org/dist/flume/1.6.0/apache-flume-1.6.0-bin.tar.gz \
| tar zxvf - -C /opt/flume --strip 1
ADD flume.conf /var/tmp/flume.conf
ADD start-flume.sh /opt/flume/bin/start-flume
ENV JAVA_HOME /opt/java
ENV PATH /opt/flume/bin:/opt/java/bin:$PATH
CMD [ "start-flume" ]
EXPOSE 10000
You should link your containers. There are several ways to implement this.
1) Publish ports:
docker run -p 50070:50070 hadoop
The -p option binds port 50070 of your Docker container to port 50070 of the host machine.
2) Link containers (using docker-compose)
docker-compose.yml
version: '2'
services:
  hadoop:
    image: hadoop:2.6
  flume:
    image: flume:last
    links:
      - hadoop
The links option here connects your Flume container to the Hadoop container.
More info about this: https://docs.docker.com/engine/userguide/networking/default_network/dockerlinks/
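Once the containers are linked (or share a Compose network), the Hadoop container is reachable from the Flume container by its service name, so flume.conf can point its HDFS sink at the hostname hadoop instead of a hard-coded IP. A quick way to check name resolution, as a sketch (it overrides the Flume image's default command):
docker-compose run --rm flume getent hosts hadoop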