Scaling Docker Containers with isolated networks - shell

I'm using docker-compose (compose file version 3.3) and I have a set of services as shown below:
version: '3.3'
services:
  jboss:
    build:
      context: .
      dockerfile: jboss/Dockerfile
    ports:
      - 8080
    depends_on:
      - mysql
      - elasticsearch
    networks:
      - ${NETWORK}
      # - front-tier
      # - back-tier
  mysql:
    hostname: mysql
    image: mysql:latest
    ports:
      - 3306
    networks:
      - ${NETWORK}
      # - back-tier
  elasticsearch:
    image: elasticsearch:1.7.3
    ports:
      - 9200
    networks:
      - ${NETWORK}
      # - back-tier
networks:
  # front-tier:
  #   driver: bridge
The problem/question is whether it's possible to isolate these services in a kind of subnet when I scale the containers (by the way, I'm not using Swarm here), in the sense that jboss1 can only see mysql1 and elasticsearch1, jboss2 only mysql2 and es2, and so on. I know this is quite strange, but the task is to parallelize some tests and they must be completely isolated.
As you have probably realized, I've tried some approaches (commented out in the compose file above) of defining separate networks, but if I scale the containers they will obviously end up on the same network, which means that I could ping any mysql-n from jboss-1.
Then I tried another approach, explained by Arun Gupta here: http://blog.arungupta.me/docker-bridge-overlay-network-compose-variable-substitution/, where he mentions variable substitution in the network attribute (hence the ${NETWORK} above). But it turns out that if I try to bring up my containers via:
NETWORK=isolated-net1 docker-compose up -d
and then
NETWORK=isolated-net2 docker-compose up -d
it won't scale the containers but instead recreates them all:
Recreating docker_mysql_1 ...
Recreating docker_elasticsearch_1 ...
Recreating docker_jboss_1 ...
In a nutshell: Is there a way to isolate a group of services when doing a docker-compose up --scale?
Thanks

Well, it turns out that after a few days of beating my brains out trying to find a proper out-of-the-box solution for the problem, the feasible way to achieve this (or at least the one that fits my needs) was the following:
I. Mounting the Docker socket as a volume on the service that needs to reach the other containers (in this case, JBoss):
jboss:
  build:
    context: jboss
    dockerfile: Dockerfile
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  # ... omitting the rest
II. Slightly editing my Dockerfile to add a script that does the actual isolation:
FROM my-private-jboss:latest
ADD isolateNetwork.sh /opt/jboss/jboss-eap/jboss-as/server/default/bin/isolateNetwork.sh
RUN chmod +x /opt/jboss/jboss-eap/jboss-as/server/default/bin/isolateNetwork.sh
CMD ["/opt/jboss/jboss-eap/jboss-as/server/default/bin/isolateNetwork.sh"]
III. Finally, the isolateNetwork.sh script (note: I had to install jq in my JBoss container to help with these extractions from the Docker API):
#!/usr/bin/env bash
# Query the Docker API (via the mounted socket) for this container's own metadata
curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$(hostname)/json > container.json
# Extract the compose scale index and project name from the container labels
export DOCKER_CONTAINER_NUMBER=$(cat container.json | jq -r '.Config.Labels["com.docker.compose.container-number"]')
export DOCKER_PROJECT_NAME=$(cat container.json | jq -r '.Config.Labels["com.docker.compose.project"]')
# Build the name of the MySQL sibling with the same index and look up its IP
export MYSQL_CONTAINER_NAME=${DOCKER_PROJECT_NAME}_mysql_${DOCKER_CONTAINER_NUMBER}
export MYSQL_IP=$(curl -s --unix-socket /var/run/docker.sock http://localhost/containers/$MYSQL_CONTAINER_NAME/json | jq -r '.NetworkSettings.Networks[].IPAddress')
# Now it's just a matter of editing the datasources to point at that sibling
find . -iname "MySql*-ds.xml" -exec sed -i "s/mysql:3306/${MYSQL_CONTAINER_NAME}:3306/g" {} +
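The script above only rewrites the MySQL datasource. If the tests also need to pin Elasticsearch to the sibling with the same index, a similar lookup could presumably be added; a minimal sketch, where the file pattern and the elasticsearch:9200 reference are hypothetical placeholders for whatever the JBoss application actually uses:
export ES_CONTAINER_NAME=${DOCKER_PROJECT_NAME}_elasticsearch_${DOCKER_CONTAINER_NUMBER}
find . -iname "*es-client*.properties" -exec sed -i "s/elasticsearch:9200/${ES_CONTAINER_NAME}:9200/g" {} +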
As you can see, the volume mounted in step I serves to access the Docker API from inside the container. This way, I can query the info mentioned above and edit the configuration so that each JBoss only talks to the services with the same DOCKER_CONTAINER_NUMBER. Apart from that, it is important to mention that my scale command has a very specific requirement: every service is scaled to the same number of replicas, which guarantees that the DOCKER_CONTAINER_NUMBER values will always match.
docker-compose up --scale jboss=%1 --scale elasticsearch=%1 --scale mysql=%1 --no-recreate -d
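For example, to bring up three fully isolated test stacks, the call above (assuming it lives in a small wrapper script whose first argument %1 is the replica count) would expand to:
docker-compose up --scale jboss=3 --scale elasticsearch=3 --scale mysql=3 --no-recreate -d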

Related

How to check from inside a container if another container is running on port

I am running 2 containers at the same time (connected via docker-compose using links and depends_on settings).
depends_on alone is not enough, so I want the script that runs as the entrypoint of one of the containers to check whether the other container is already listening on some port.
I tried:
#!/bin/bash
until nc -z -w 10 <container_name> 3306
do
  echo waiting for db to be ready...
  sleep 2
done
echo code is ready
But this is not working.
Anyone got an idea?
I would suggest using the depends_on approach. However, you can use some of its more advanced settings. Please read the Compose documentation on Control startup and shutdown order.
You can use the wait-for-it.sh script to achieve exactly what you need. Extracted from the documentation:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
db:
image: postgres
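For reference, wait-for-it.sh can also be invoked on its own, e.g. from an entrypoint script; a rough example (check the script's own usage text for the exact flags):
./wait-for-it.sh db:5432 --timeout=30 -- python app.py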
Since you are already using docker-compose to orchestrate your services, a better way would be to use condition: service_healthy of the depends_on long syntax. Instead of manually waiting in one container for the other to become available, docker-compose will start the former only after the latter has become healthy, i.e. available.
If the depended-on container does not already have a HEALTHCHECK specified in its image, you can define one manually in docker-compose.yml with the healthcheck attribute.
Example with a mariadb database using the included healthcheck.sh script:
services:
  app:
    image: myapp/image
    depends_on:
      db:
        condition: service_healthy
  db:
    image: mariadb
    environment:
      - MARIADB_ROOT_PASSWORD=password
    healthcheck:
      test: "healthcheck.sh --connect"
With this, docker-compose up will first start the db service and wait until it becomes healthy, i.e. is ready to accept connections, and only then start the app service, which can immediately connect to the db.
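To confirm that the healthcheck is actually reporting, you can inspect the health state from the host; a quick check (the container name below is just an example and depends on your project name):
docker inspect --format '{{.State.Health.Status}}' myproject-db-1
# prints "starting", "healthy" or "unhealthy"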

Testing a container against DynamoDB-Local

I wanted to test a container locally before pushing it to AWS ECS.
I ran unit tests against a docker-compose stack including a dynamodb-local container, using a Go (aws-sdk-go-v2) endpoint resolver with http://localhost:8000 as the URL.
So I wanted to build and test the container locally and realised I needed to attach it to the default network created by docker-compose. I struggled with this a bit, so I built a stripped-down trial: I created an endpoint resolver with a URL of http://dynamo-local:8000 (the container is named dynamo-local in the compose file) and attached it to the default network within docker run.
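For illustration, that docker run attachment looked roughly like this (my-trial-image is just a placeholder; test-net is the network named in the compose file below):
docker run --rm --network test-net my-trial-image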
Now that all works, I can perform the various table operations successfully, but one of the things that confuses me is that if I run the AWS CLI:
aws --endpoint-url=http://localhost:8000 dynamodb list-tables
then the output shows that no tables exist, when there is definitely a table. I had naively assumed that, since I can reach port 8000 of the same container through different endpoints, I should be able to access the same resources. Wrong.
Obviously a gap in my education. What am I missing? I need to expand the trial to a proper test of the full app, so it's important to me that I understand what is going on here.
Is there a way I can use the aws cli to access the table?
docker-compose file:
version: '3.5'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_test
    ports:
      - '4566:4566'
    environment:
      - SERVICES=s3,sns,sqs, lambda
      - DEBUG=1
      - DATA_DIR=
    volumes:
      - './.AWSServices:/tmp/AWSServices'
      - '/var/run/docker.sock:/var/run/docker.sock'
  nginx:
    build:
      context: .
      dockerfile: Dockerfile
    image: chanonry/urlfiles-nginx:latest
    container_name: nginx
    ports:
      - '8080:80'
  dynamodb:
    image: amazon/dynamodb-local:1.13.6
    container_name: dynamo-local
    ports:
      - '8000:8000'
networks:
  default:
    name: test-net

Docker Compose - Starting already created containers

I am new to Docker and I am trying to run Selenium Grid tests on Docker. For this purpose, I created a docker-compose file and executed the below command:
docker-compose -f docker-compose.yaml up
Everything worked fine, but after a few hours I restarted the host machine and executed the above command again. This time I got the below error:
ERROR: for selenium-hub Cannot create container for service selenium-hub: Conflict. The container name "/selenium-hub" is already in use by container "some-hash". You have to remove (or rename) that container to be able to reuse that name.
I tried docker-compose -f docker-compose.yaml run selenium-hub, but this command does not start the selenium nodes. So my questions are:
1. Do I need to remove the containers every time before I run docker-compose again?
2. Is there any way I can use a docker-compose-like file, so that every time I restart Docker, I can just run that file to start all the containers together?
Below is the docker-compose file I used:
version: "3"
services:
selenium-hub:
image: selenium/hub:3.141.59-20200525
container_name: selenium-hub
ports:
- "4444:4444"
chrome:
image: selenium/node-chrome:3.141.59-20200525
volumes:
- /dev/shm:/dev/shm
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
firefox:
image: selenium/node-firefox:3.141.59-20200525
volumes:
- /dev/shm:/dev/shm
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
opera:
image: selenium/node-opera:3.141.59-20200525
volumes:
- /dev/shm:/dev/shm
depends_on:
- selenium-hub
environment:
- HUB_HOST=selenium-hub
- HUB_PORT=4444
There are a few possible ways:
docker system prune will clean up the build cache, remove dangling intermediate images, and delete containers that are not actively running, freeing up their names. This command has to be used carefully.
docker container prune will delete only dead/stopped containers and will free up their names.
docker rm -v $(docker ps -aq -f 'status=exited')
docker rmi $(docker images -aq -f 'dangling=true')
docker-compose rm --force removes one-off containers created by docker-compose up or docker-compose run.
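Based on the commands above, a minimal cleanup-and-restart sequence after a host reboot could look like this (a sketch):
docker container prune -f                    # remove the stopped hub/node containers and free their names
docker-compose -f docker-compose.yaml up -d  # recreate and start the grid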
Just use the docker-compose up command to start the already created containers.

Configuring Docker with Traefik, Nginx and Laravel

I am trying to figure out how to set up a simple stack for development and later deployment. I want to utilize Docker to serve Traefik in a container as the public-facing reverse proxy, which then interfaces as needed with an Nginx container that is used only to serve static frontend files (HTML, CSS, JS) and a backend PHP container that runs Laravel (I'm intentionally decoupling the frontend and API for this project).
I am trying my best to learn through all of the video and written tutorials out there, but things become complicated very quickly (at least, for my uninitiated brain) and it's a bit overwhelming. I have a one-week deadline to complete this project and I'm strongly considering dropping Docker altogether for the time being, out of fear that I'll spend the whole week messing around with the configuration instead of actually coding!
To get started, I have a simple docker-compose with the following configuration that I've verified at least runs correctly:
version: '3'
services:
  reverse-proxy:
    image: traefik
    command: --api --docker # Enables the Web UI and tells Traefik to listen to Docker.
    ports:
      - "80:80"     # HTTP port
      - "8080:8080" # Web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to Docker events.
Now, I need to figure out how to connect Nginx and PHP/Laravel effectively.
First of all, don't put yourself under stress to learn new stuff, because if you do, learning it won't feel comfortable anymore. Take your existing knowledge of technology and get stuff done. When you're done and you realize you have one or two days left before your deadline, try to overdeliver by including new technology. This way you won't blow your deadline and you will not be under stress figuring out new technology or configuration.
The configuration you see below is neither complete nor functionally tested. I just copied most of the stuff out of three of my main projects in order to give you a starting point. Traefik as-is can be complicated to set up properly.
version: '3'
# Instantiate your own configuration with a Dockerfile!
# This way you can build somewhere and just deploy your container
# anywhere without the need to copy files around.
services:
  # traefik as reverse-proxy
  traefik:
    build:
      context: .
      dockerfile: ./Dockerfile-for-traefik # including traefik.toml
    command: --docker
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # this file you'll have to create manually: `touch acme.json && chmod 600 acme.json`
      - /home/docker/volumes/traefik/acme.json:/opt/traefik/acme.json
    networks:
      - overlay
    ports:
      - 80:80
      - 443:443
  nginx:
    build:
      context: .
      dockerfile: ./Dockerfile-for-nginx
    networks:
      - overlay
    depends_on:
      - laravel
    volumes:
      # you can copy your assets to production with
      # `tar -c -C ./myassets . | docker cp - myfolder_nginx_1:/var/www/assets`
      # there are many other ways to achieve this!
      - assets:/var/www/assets
  # define your application + whatever it needs to run
  # important:
  # - "build:" will search for a Dockerfile in the directory you're specifying
  laravel:
    build: ./path/to/laravel/app
    environment:
      MYSQL_ROOT_PASSWORD: password
      ENVIRONMENT: development
      MYSQL_DATABASE: your_database
      MYSQL_USER: your_database_user
    networks:
      - overlay
    links:
      - mysql
    volumes:
      # this path is for development
      - ./path/to/laravel/app:/app
  # you need a database, right?
  mysql:
    image: mysql:5
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: your_database
      MYSQL_USER: your_database_user
    networks:
      - overlay
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:
  assets:
networks:
  overlay:
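Once the referenced Dockerfiles exist, the stack can be built and brought up roughly like this (a sketch):
docker-compose up -d --build     # build the traefik, nginx and laravel images and start everything
docker-compose logs -f traefik   # follow the reverse-proxy logs while testing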

How to use docker run with a Meteor image?

I have 2 containers: mgmt-app, which is a Meteor container, and mgmt-mongo, which is the MongoDB.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b65be4ac454 gitlab-lab:5005/dfc/mongo:latest "/entrypoint.sh mongo" About an hour ago Up About an hour 27017/tcp mgmt-mongo
dff0b3c69c5f gitlab-lab:5005/dfc/mgmt-docker-gui:lab "/bin/sh -c 'sh $METE" About an hour ago Up 42 minutes 0.0.0.0:80->80/tcp mgmt-app
From my Docker host I want to run docker run gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
but I get this error:
=> Starting meteor app on port:80
/app/programs/server/node_modules/fibers/future.js:280
throw(ex);
^
Error: MONGO_URL must be set in environment
So I tried:
docker run -e "MONGO_URL=mongodb://mgmt-mongo:27017/meteor" gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al
and then the error was:
/app/programs/server/node_modules/fibers/future.js:313
throw(ex);
^
MongoError: failed to connect to server [mgmt-mongo:27017] on first connect
I really don't understand because when I do a docker-compose up -d with this file:
mgmt-app:
  image: gitlab-lab:5005/dfc/mgmt-docker-gui:latest
  container_name: mgmt-app
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - $HOME/.docker:/root/.docker
    - /home/dockeradm/compose/area:/home/dockeradm/compose/area
  environment:
    - ROOT_URL=http://localhost:80
    - MONGO_URL=mongodb://mgmt-mongo:27017/meteor
  ports:
    - 80:80
  restart: always
mgmt-mongo:
  image: gitlab-lab:5005/dfc/mongo:latest
  container_name: mgmt-mongo
  volumes:
    - mgmt_mongo_data_config:/data/configdb
    - mgmt_mongo_data_db:/data/db
  restart: always
everything goes well.
So my question is: how should I write my docker run command to execute my command? (The real command is not a simple ls -al, but it's fine for the demo.)
When you run the containers separately with docker run, they are not linked on the same docker network so the mongo container is not accessible from the app container. To remedy this, you should use either:
--link to mark the app container as linked to the mongo container. This works, but is deprecated.
a defined docker network for both containers to be linked by; this is more complex, but is the recommended architecture
By contrast, docker-compose automatically adds both containers to the same docker network, so they are immediately connectable without any extra configuration required:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
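For reference, the user-defined network option could look roughly like this (the network name is arbitrary; image names are taken from the question):
docker network create mgmt-net
docker run -d --network mgmt-net --name mgmt-mongo gitlab-lab:5005/dfc/mongo:latest
docker run --rm --network mgmt-net -e MONGO_URL=mongodb://mgmt-mongo:27017/meteor gitlab-lab:5005/dfc/mgmt-docker-gui:lab ls -al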
