I created a simple two-container application consisting of nginx and a Python Flask app, which I can deploy to Bluemix using docker-compose.
The docker-compose file is docker-compose-bluemix.yml:
flask:
  image: registry.ng.bluemix.net/namespace/simple.flask
  restart: always
  expose:
    - "8000"
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app
nginx:
  image: registry.ng.bluemix.net/namespace/simple.nginx
  restart: always
  ports:
    - "80:80"
  links:
    - flask:flask
Once I assign an IP to the nginx container it works, in that I can access it like so:
curl http://ip/flask-api/v0.01/hello
and the correct response is returned
{"status": "hello"}
How do I enable https for this app? Must it be done by providing the nginx container self-signed certs, or can I leverage Bluemix to give me an https://xxx.mybluemix.net address for the containers? If so, how?
If you want Bluemix to assign a route like https://xxx.mybluemix.net then you need to deploy a Scalable Group instead of a Single Container. Scalable Groups can be assigned routes which will allow SSL (https://) access.
I don't believe that you can do this with Docker Compose because Docker is not aware of the container group capabilities in Bluemix. You could use the IBM Container extensions to the Cloud Foundry CLI to do this from the command line or from your DevOps pipeline tool of choice with the following commands:
cf ic group create --name simple-flask -m 64 -p 8000 --min 1 --max 3 --desired 2 registry.ng.bluemix.net/namespace/simple-flask:latest
cf ic route map -n simple-flask -d mybluemix.net simple-flask
At that point you don't need nginx because Bluemix will put a load balancer in front of your container group for you to direct traffic to the containers within it. You can then get to it via:
https://simple-flask.mybluemix.net/flask-api/v0.01/hello
This should give you what you were looking for.
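To verify, you can repeat the curl test from the question against the new HTTPS route (the hostname comes from the group and route names mapped above) and you should get the same {"status": "hello"} response:
curl https://simple-flask.mybluemix.net/flask-api/v0.01/hello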
I am connecting to Redis from a Spring Boot app on a machine other than the one where the Redis server Docker container is running. When the app tries to connect to Redis, it can't connect properly and the request eventually times out. Meanwhile, if I try to connect from:
the machine where the Redis server Docker container is running, using localhost as the host, I can connect. However, I don't know why I can't connect when I set the host to a numerical IP or a hostname; it only works with "localhost".
a machine where the Redis server Docker container is not running, using a Redis GUI client for management, I can also connect.
application.properties:
spring.redis.host=pc-1
spring.redis.port=6379
pc-1 is an alias for a numerical IP. I'm using the Windows hosts file to alias/redirect it.
.env:
REDIS_PORT=6379
docker-compose.yml:
redis:
  image: redis:latest
  ports:
    - "${REDIS_PORT}:6379"
  command:
  #  - redis-server
  #  - --requirepass "${REDIS_PASSWORD}"
  networks:
    - redis
  healthcheck:
    test: ["CMD-SHELL", "redis-cli ping"]
    interval: 10s
    timeout: 10s
    retries: 3
I need help on this issue.
Use the --service-ports flag with docker compose run to publish the ports you've defined in the docker-compose file.
Other debugging tips:
Hardcode the ${REDIS_PORT} variable in case the value is not getting set, or set a default like ${REDIS_PORT:-default} (see the sketch after this list)
Pass the env file explicitly, like docker compose --env-file ./somedir/.env up, in case the env file is not being picked up
Use docker inspect to get the container status and check the networking info
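For example, a minimal sketch combining these tips (assuming the .env file sits next to the compose file and 6379 is an acceptable fallback port):
redis:
  image: redis:latest
  ports:
    # falls back to 6379 if REDIS_PORT is not set in the environment or the .env file
    - "${REDIS_PORT:-6379}:6379"
and then, to pass the env file explicitly and check the networking info (the container name will depend on your project):
docker compose --env-file ./.env up -d
docker inspect <redis-container-name> --format '{{json .NetworkSettings}}'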
I'm pretty new to Docker, and I've tried searching about networking but haven't found a solution that works.
I have a Laravel app that is using Laradock.
I also have an external 3rd party API that runs in its own docker container.
I basically want to specify the container name of the API inside my Laravel .env file and have it dynamically resolve to the container IP so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API is located in an external container, my app can't connect to it.
I tried making a network and attaching the containers to it with:
docker network create my-network
Then inside my docker-compose.yml files for each of the containers, I specified:
networks:
  my-network:
    name: "my-network"
But if I try to ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. Was hoping someone familiar with docker might be able to explain why since I'm sure it's something very obvious I'm missing. Thanks!
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you is that, by default, Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker creates a default network and assigns the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (on its default port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies for your setup.
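Since your API lives in a separate Compose project, here is a minimal sketch of wiring both projects to the network you already created with docker network create my-network (the service names are assumptions; keep your existing ones):
# in the docker-compose.yml of each project (the Laradock one and the API one)
services:
  workspace:            # or my-api in the API project
    # ... existing configuration ...
    networks:
      - my-network
networks:
  my-network:
    external: true      # join the pre-created network instead of creating a project-scoped one
With both containers attached to the same external network, docker exec -ti laradock-workspace-1 ping my-api should resolve the API container by name.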
I have created a Spring app and I want to connect it to a Redis server that is deployed with docker-compose. I put the needed properties as follows:
spring.redis.host=redis
spring.redis.port=6379
But I keep getting a connection exception, so how can I know on which host Redis is running and how to connect to it?
Here is my docker-compose file:
version: '2'
services:
  redis:
    image: 'bitnami/redis:5.0'
    environment:
      # ALLOW_EMPTY_PASSWORD is recommended only for development.
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    volumes:
      - 'redis_data:/bitnami/redis/data'
volumes:
  redis_data:
    driver: local
From the Docker Compose documentation:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name
If you want to access Redis by container name ('redis' in this case), the Spring Boot application also has to be deployed as a docker-compose service, but it doesn't appear in the docker-compose file that you've provided in the question, so please add it.
Alternatively, if you're trying to run the Spring Boot application on the host machine, use 'localhost' instead of 'redis' in order to access the Redis container.
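For example, a minimal sketch of adding the app next to redis (the service name app and the image name my-spring-app are assumptions; use your own build or image):
version: '2'
services:
  app:
    image: my-spring-app:latest   # assumed image for your Spring Boot app
    depends_on:
      - redis
    environment:
      - SPRING_REDIS_HOST=redis   # maps to spring.redis.host
      - SPRING_REDIS_PORT=6379
  redis:
    image: 'bitnami/redis:5.0'
    # ... the rest of the redis configuration from your file above ...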
Another approach you can use is a Docker network. Below are the steps to follow:
Create a docker network for redis
docker network create redis-docker
Spin up the Redis container in the redis-docker network.
docker run -d --net redis-docker --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
Inspect the redis-docker network
docker network inspect redis-docker
Copy the "IPv4Address" IP and paster in application.yml
Now build , start your application.
I was looking for a step-by-step tutorial on how to run my Spring Boot, MySQL-backed app on AWS EKS (Elastic Container Service for Kubernetes) using an existing SSL wildcard certificate and wasn't able to find a complete solution.
The app is a standard self-contained Spring Boot application backed by a MySQL database, running on port 8080. I need to run it with high availability and high redundancy, including the MySQL db, which needs to handle a large number of writes as well as reads.
I decided to go with an EKS-hosted cluster, saving a custom Docker image to AWS's own ECR private Docker repo, going against an EKS-hosted MySQL cluster, and using an AWS-issued SSL certificate to communicate over HTTPS. Below is my solution, but I'll be very curious to see how it can be done differently.
This is a step-by-step tutorial. Please don't proceed until the previous step is complete.
CREATE EKS CLUSTER
Follow the standard tutorial to create an EKS cluster. Don't do step 4. When you're done you should have a working EKS cluster, and you must be able to use the kubectl utility to communicate with the cluster. When executed from the command line you should see the worker nodes and other cluster elements using the
kubectl get all --all-namespaces command
INSTALL MYSQL CLUSTER
I used helm to install the MySQL cluster following the steps from this tutorial. Here are the steps:
Install helm
Since I'm using a MacBook Pro with Homebrew, I used the brew install kubernetes-helm command
Deploy MySQL cluster
Note that in "MySQL cluster" and "Kubernetes (EKS) cluster" the word "cluster" refers to 2 different things. Basically you are installing a cluster into a cluster, just like a Russian Matryoshka doll, so your MySQL cluster ends up running on EKS cluster nodes.
I used the 2nd part of this tutorial (ignore the kops part) to prepare the helm chart and install the MySQL cluster. Quoting the helm configuration:
$ kubectl create serviceaccount -n kube-system tiller
serviceaccount "tiller" created
$ kubectl create clusterrolebinding tiller-crule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io "tiller-crule" created
$ helm init --service-account tiller --wait
$HELM_HOME has been configured at /home/presslabs/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
$ helm repo add presslabs https://presslabs.github.io/charts
"presslabs" has been added to your repositories
$ helm install presslabs/mysql-operator --name mysql-operator
NAME: mysql-operator
LAST DEPLOYED: Tue Aug 14 15:50:42 2018
NAMESPACE: default
STATUS: DEPLOYED
I ran all commands exactly as quoted above.
Before creating a cluster, you need a secret that contains the ROOT_PASSWORD key.
Create a file named example-cluster-secret.yaml and copy into it the following YAML code
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
data:
  # root password is required to be specified
  ROOT_PASSWORD: Zm9vYmFy
But what is that ROOT_PASSWORD? It turns out this is the base64-encoded password that you're planning to use with your MySQL root user. Say you want root/foobar (please don't actually use foobar). The easiest way to encode the password is to use one of the websites such as https://www.base64encode.org/, which encodes foobar into Zm9vYmFy.
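Alternatively, you can produce the same value locally on the command line (the -n matters so that no trailing newline gets encoded):
$ echo -n 'foobar' | base64
Zm9vYmFy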
When ready execute kubectl apply -f example-cluster-secret.yaml which will create a new secret
Then you need to create a file named example-cluster.yaml and copy into it the following YAML code:
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
  name: my-cluster
spec:
  replicas: 2
  secretName: my-secret
Note how the secretName matches the secret name you just created. You can change it to something more meaningful as long as it matches in both files. Now run kubectl apply -f example-cluster.yaml to finally create a MySQL cluster. Test it with
$ kubectl get mysql
NAME AGE
my-cluster 1m
Note that I did not configure a backup as described in the rest of the article. You don't need to do it for the database to operate. But how do you access your db? At this point the mysql service is there but it doesn't have an external IP. In my case I don't even want that, as long as my app that will run on the same EKS cluster can access it.
However, you can use kubectl port forwarding to access the db from your dev box that runs kubectl. Type in this command: kubectl port-forward services/my-cluster-mysql 8806:3306. Now you can access your db from 127.0.0.1:8806 using the user root and the non-encoded password (foobar). Type this into a separate command prompt: mysql -u root -h 127.0.0.1 -P 8806 -p. With this you can also use MySQL Workbench to manage your database; just don't forget to run port-forward. And of course you can change 8806 to another port of your choosing.
PACKAGE YOUR APP AS A DOCKER IMAGE AND DEPLOY
To deploy your Spring Boot app into the EKS cluster you need to package it into a Docker image and push it to a Docker repo. Let's start with the Docker image. There are plenty of tutorials on this, like this one, but the steps are simple:
Put your generated, self-contained Spring Boot jar file into a directory, create a text file with this exact name: Dockerfile in the same directory, and add the following content to it:
FROM openjdk:8-jdk-alpine
MAINTAINER me@mydomain.com
LABEL name="My Awesome Docker Image"
# Add spring boot jar
VOLUME /tmp
ADD myapp-0.1.8.jar app.jar
EXPOSE 8080
# Database settings (maybe different in your app)
ENV RDS_USERNAME="my_user"
ENV RDS_PASSWORD="foobar"
# Other options
ENV JAVA_OPTS="-Dverknow.pypath=/"
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar" ]
Now simply run a Docker command from the same folder to create an image. Of course that requires Docker client installed on your dev box.
$ docker build -t myapp:0.1.8 --force-rm=true --no-cache=true .
If all goes well you should see your image listed with the docker images command
Deploy to the private ECR repo
Deploying your new image to an ECR repo is easy, and ECR works with EKS right out of the box. Log into the AWS console and navigate to the ECR section. I found it confusing that apparently you need to have one repository per image, but when you click the "Create repository" button, put your image name (e.g. myapp) into the text field. Now you need to copy the ugly URL for your image and go back to the command prompt.
Tag and push your image. I'm using a fake URL as an example: 901237695701.dkr.ecr.us-west-2.amazonaws.com; you need to copy your own from the previous step.
$ docker tag myapp:0.1.8 901237695701.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
$ docker push 901237695701.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
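If the push is rejected with an authentication error, you may first need to log Docker into ECR. A sketch using the AWS CLI v2, where the account ID and region are the fake example values from above:
$ aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin 901237695701.dkr.ecr.us-west-2.amazonaws.com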
At this point the image should show up in the ECR repository you created
Deploy your app to EKS cluster
Now you need to create a Kubernetes deployment for your app's Docker image. Create a myapp-deployment.yaml file with the following content
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - image: 901237695701.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
          name: myapp
          ports:
            - containerPort: 8080
              name: server
          env:
            # optional
            - name: RDS_HOSTNAME
              value: "10.100.98.196"
            - name: RDS_PORT
              value: "3306"
            - name: RDS_DB_NAME
              value: "mydb"
      restartPolicy: Always
status: {}
Note how I'm using a full URL for the image parameter. I'm also using the private CLUSTER-IP of the mysql cluster, which you can get with the kubectl get svc my-cluster-mysql command. This will differ for your app, including any env names, but you do have to provide this info to your app somehow. Then in your app you can set something like this in the application.properties file:
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://${RDS_HOSTNAME}:${RDS_PORT}/${RDS_DB_NAME}?autoReconnect=true&zeroDateTimeBehavior=convertToNull
spring.datasource.username=${RDS_USERNAME}
spring.datasource.password=${RDS_PASSWORD}
Once you save the myapp-deployment.yaml you need to run this command
kubectl apply -f myapp-deployment.yaml
This will deploy your app into the EKS cluster and create 2 pods that you can see with the kubectl get pods command.
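For example, a quick way to check that both replicas came up and to look at their logs (pod names will differ in your cluster):
$ kubectl get pods -l app=myapp
$ kubectl logs deployment/myapp-deployment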
Rather than trying to access one of the pods directly, we can create a service to front the app pods. Create a myapp-service.yaml with this content:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  ports:
    - port: 443
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: myapp
  type: LoadBalancer
That's where the magic happens! Just by setting the port to 443 and type to LoadBalancer the system will create a Classic Load Balancer to front your app.
BTW if you don't need to run your app over HTTPS you can set port to 80 and you will be pretty much done!
After you run kubectl apply -f myapp-service.yaml, the service in the cluster will be created, and if you go to the Load Balancers section in the EC2 section of the AWS console you will see that a new balancer has been created for you. You can also run the kubectl get svc myapp-service command, which will give you the EXTERNAL-IP value, something like bl3a3e072346011e98cac0a1468f945b-8158249.us-west-2.elb.amazonaws.com. Copy that because we need to use it next.
It is worth mentioning that if you are using port 80 then simply pasting that URL into the browser should display your app.
Access your app over HTTPS
The following section assumes that you have an AWS-issued SSL certificate. If you don't, go to the AWS console's "Certificate Manager" and create a wildcard certificate for your domain.
Before your load balancer can work you need to access AWS console -> EC2 -> Load Balancers -> My new balancer -> Listeners and click on the "Change" link in the SSL Certificate column. Then in the pop-up select the AWS-issued SSL certificate and save.
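As an alternative to clicking through the console, Kubernetes can attach the certificate when it creates the load balancer if you annotate the Service with the ACM certificate ARN. A sketch (the ARN is a placeholder for your own certificate):
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:901237695701:certificate/your-cert-id
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  # ... same spec as above ...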
Go to the Route 53 section in the AWS console and select a hosted zone for your domain, say myapp.com. Then click "Create Record Set" and create a CNAME - Canonical name record with Name set to whatever alias you want, say cluster.myapp.com, and Value set to the EXTERNAL-IP from above. After you "Save Record Set", go to your browser and type in https://cluster.myapp.com. You should see your app running.
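Once DNS has propagated you can also verify from the command line (the hostname is the example alias created above):
$ curl -v https://cluster.myapp.com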
TL;DR: How do I have to change my below docker-compose.yml in order to allow one container to use a service of another over a custom (non-standard) port?
I have a pretty common setup: containers for a web app (Padrino [Ruby]), Postgres, Redis, and a queueing framework (Sidekiq). The web app comes with its custom Dockerfile; the remaining services either come from standard images (Postgres, Redis) or mount the data from the web app (Sidekiq). They are tied together via the following docker-compose.yml:
version: '2'
services:
  web:
    build: .
    command: 'bundle exec puma -C config/puma.rb'
    volumes:
      - .:/myapp
    ports:
      - "9000:3000"
    depends_on:
      - postgres
      - redis
  sidekiq:
    build: .
    command: 'bundle exec sidekiq -C config/sidekiq.yml -r ./config/boot.rb'
    volumes:
      - .:/myapp
    depends_on:
      - postgres
      - redis
  postgres:
    image: postgres:9.5
    environment:
      POSTGRES_USER: my-postgres-user
      POSTGRES_PASSWORD: my-postgres-pass
    ports:
      - '9001:5432'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
  redis:
    image: redis
    ports:
      - '9002:6379'
    volumes:
      - 'redis:/var/lib/redis/data'
volumes:
  redis:
  postgres:
One key point to notice here is that I am exposing the containers' services on non-standard ports (9000-9002).
If I start the setup with docker-compose up, the Redis and Postgres containers come up fine, but the containers for the web app and Sidekiq fail since they can't connect to Redis at redis:9002. Remarkably enough, the same setup works if I use 6379 (the standard Redis port) instead of 9002.
docker ps also looks fine afaik:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9148566c2509 redis "docker-entrypoint.sh" Less than a second ago Up About a minute 0.0.0.0:9002->6379/tcp rubydockerpadrino_redis_1
e6d47321c939 postgres:9.5 "/docker-entrypoint.s" Less than a second ago Up About a minute 0.0.0.0:9001->5432/tcp rubydockerpadrino_postgres_1
What's even more confusing: I can access the Redis container from the host via redis-cli -h localhost -p 9002 -n 0, but the web app and Sidekiq containers fail to establish a connection.
I am using this docker version on MacOS:
Docker version 1.12.3, build 6b644ec, experimental
Any ideas what I am doing wrong? I'd appreciate any hint how to get my setup running.
When you bind ports like this '9002:6379' you're telling Docker to forward traffic from localhost:9002 -> redis:6379. That's why this works from your host machine:
redis-cli -h localhost -p 9002 -n 0
However, when containers talk to each other, they are all connected to the same network by default (the Docker bridge, docker0). By default, containers can communicate with each other freely on this network, without needing any ports opened. Within this network, your redis container is listening for traffic on its usual port (6379); the host isn't involved at all. That's why your container-to-container communication works on 6379.
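So inside the Compose network your app containers should connect to the service name on the container port, not the published host port. A minimal sketch of what to pass to the web and Sidekiq services (the REDIS_URL variable name is just an example of how your app might receive it):
services:
  web:
    environment:
      # service name + container port (6379), not localhost:9002
      REDIS_URL: redis://redis:6379/0
  sidekiq:
    environment:
      REDIS_URL: redis://redis:6379/0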