Consul - Deploy different config for different hosts

I am trying to deploy a consul cluster. I have the following machines:
consul-server01
consul-server02
consul-server03
web01
database01
I have 3 separate config files, one for each role:
/etc/consul.d/server/config.json
/etc/consul.d/web/config.json
/etc/consul.d/database/config.json
If I add a new server (say web02), how can I have it automatically adopt the web server config?
Does consul support configuration discovery, or do I need to use chef/puppet/ansible/salt to deploy the web config to the web server?
Resources:
https://www.digitalocean.com/community/tutorials/how-to-configure-consul-in-a-production-environment-on-ubuntu-14-04

You can load your configurations into the initial Consul instance's (or cluster's) key/value store and then use consul-template to render the configuration onto additional nodes.
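For example, a minimal sketch of that flow (the KV key name config/web and the template path are illustrative assumptions, not something Consul mandates):
# store the web-tier config in the KV store once
consul kv put config/web @/etc/consul.d/web/config.json

# on a new web node, render it; the template file would contain: {{ key "config/web" }}
consul-template -once \
    -template "/etc/consul.d/web/config.json.ctmpl:/etc/consul.d/web/config.json"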

Create a data container derived from the consul image and mount a named volume, called myconfig here, onto /data.
Create a small ruby/whatever script, generate_key.rb, which generates a key into /data/consul/encrypt.json if it does not yet exist. The file ends up looking like this:
{ "encrypt": "some key generated by consul keygen" }
For generating a key, use: consul keygen
Start this script on container start (ENTRYPOINT or CMD); a sketch of such a script follows below.
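A minimal sketch of that script, written in shell rather than Ruby (the filename generate_key.sh is hypothetical; the path comes from the steps above):
#!/bin/sh
# generate_key.sh - create the gossip key file once, if it does not yet exist
KEY_FILE=/data/consul/encrypt.json
if [ ! -f "$KEY_FILE" ]; then
    mkdir -p /data/consul
    printf '{ "encrypt": "%s" }\n' "$(consul keygen)" > "$KEY_FILE"
fi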
Setting up the consul-server
In the Dockerfile do
FROM consul
VOLUME /data/consul
# create a placeholder for the optional gossip key
RUN mkdir -p /data/consul && \
    echo "{}" > /data/consul/encrypt.json && \
    mkdir -p /consul/config && \
    ln -s /data/consul/encrypt.json /consul/config/encrypt.json
# your server config
COPY consul-config.json /consul/config/server_config.json
CMD ["agent","-server"]
Your consul-config.json should look similar to this:
{
  "datacenter": "stable",
  "acl_datacenter": "stable",
  "data_dir": "/consul/data",
  "ui": true,
  "dns_config": {
    "allow_stale": false
  },
  "log_level": "INFO",
  "node_name": "consul",
  "client_addr": "0.0.0.0",
  "server": true,
  "bootstrap": true
}
For every consul client
Create the same placeholder and symlink:
RUN mkdir -p /data/consul && \
    echo "{}" > /data/consul/encrypt.json && \
    mkdir -p /consul/config && \
    ln -s /data/consul/encrypt.json /consul/config/encrypt.json
Why those symlinks and dummy files?
This ensures that when we mount the data volume, the encrypt key gets replaced by the one generated by the config container - and if not, the server still starts without it. Consul needs a proper JSON file there; it may neither be missing nor empty.
docker-compose example
version: "2"
services:
someconsuleclient:
image: mymongodb
container_name: someconsuleclient
depends_on:
- consul
volumes_from:
- dwconfig:ro
consul:
container_name: consul
image: myconsulimage
depends_on:
- config
volumes_from:
- config:ro
config:
image: myconfigimage
container_name: config
volumes:
- config:/data/
volumes:
config:
driver: local
So we have a config service to generate the encrypt.json, we have a consul server, and we have an example consul client. Now you can add new consul nodes very easily while keeping gossip encryption.
Of course, you can additionally put arbitrary per-client configuration into /data/consul/custom_client.json in the bootstrap of your config container and share it across all clients. All .json files in the consul config dir are merged, so you can easily build "additions"; see the sketch below.
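For instance, the merge behavior means a client ends up combining the shared key with its role-specific file (the custom_client.json contents here are a hypothetical example):
# /consul/config on a client now holds, e.g.:
#   encrypt.json       -> { "encrypt": "..." }        (shared via the config volume)
#   custom_client.json -> { "retry_join": ["consul"] } (role-specific addition)
# the agent merges every .json file in the directory on start:
consul agent -config-dir=/consul/config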

Related

Cannot open Minio in browser after dockerizing it in Spring Boot App

I have a problem opening MinIO in the browser. I just created a Spring Boot app that uses it.
Here is my application.yaml file shown below.
server:
  port: 8085

spring:
  application:
    name: springboot-minio

minio:
  endpoint: http://127.0.0.1:9000
  port: 9000
  accessKey: minioadmin # Login account
  secretKey: minioadmin # Login password
  secure: false
  bucket-name: commons # Bucket name
  image-size: 10485760 # Maximum size of picture file
  file-size: 1073741824 # Maximum file size
Here is my docker-compose.yaml file shown below.
version: '3.8'
services:
  minio:
    image: minio/minio:latest
    container_name: minio
    environment:
      MINIO_ROOT_USER: "minioadmin"
      MINIO_ROOT_PASSWORD: "minioadmin"
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001
I run it with these commands:
1 ) docker-compose up -d
2 ) docker ps -a
3 ) docker run minio/minio:latest
Here is the result shown below.
C:\Users\host\IdeaProjects\SpringBootMinio>docker run minio/minio:latest
NAME:
minio - High Performance Object Storage
DESCRIPTION:
Build high performance data infrastructure for machine learning, analytics and application data workloads with MinIO
USAGE:
minio [FLAGS] COMMAND [ARGS...]
COMMANDS:
server start object storage server
gateway start object storage gateway
FLAGS:
--certs-dir value, -S value path to certs directory (default: "/root/.minio/certs")
--quiet disable startup information
--anonymous hide sensitive information from logging
--json output server logs and startup information in json format
--help, -h show help
--version, -v print the version
VERSION:
RELEASE.2022-01-08T03-11-54Z
When I enter 127.0.0.1:9000 in the browser, I can't open the MinIO login page.
How can I fix my issue?
The MinIO documentation includes a MinIO Docker Quickstart Guide that has some recipes for starting the container. The important thing here is that you cannot just docker run minio/minio; it needs a command to run, probably server. This also needs to be translated into your Compose setup.
The first example on that page breaks down like so:
docker run \
-p 9000:9000 -p 9001:9001 \ # publish ports
-e "MINIO_ROOT_USER=..." \ # set environment variables
-e "MINIO_ROOT_PASSWORD=..." \
quay.io/minio/minio \ # image name
server /data --console-address ":9001" # command to run
That final command is important. In your example where you just docker run the image and get a help message, it's because you omitted the command. In the Compose setup you also don't have a command: line; if you look at docker-compose ps I expect you'll see the container is exited, and docker-compose logs minio will probably show the same help message.
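To check, something like this should confirm it (expected outcomes paraphrased in the comments, not captured from a real run):
docker-compose ps           # the minio service will likely show as exited
docker-compose logs minio   # should print the same NAME/USAGE help text as above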
You can include that command in your Compose setup with command::
version: '3.8'
services:
  minio:
    image: minio/minio:latest
    environment:
      MINIO_ROOT_USER: "..."
      MINIO_ROOT_PASSWORD: "..."
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001
    command: server /data --console-address :9001 # <-- add this

How to connect my spring boot app to redis container on docker?

I have created a Spring app and I want to connect it to a Redis server deployed with docker-compose. I put the needed properties as follows:
spring.redis.host=redis
spring.redis.port=6379
But I keep getting a connection exception, so how can I know which host Redis is running on and how to connect to it?
Here is my docker-compose file:
version: '2'
services:
  redis:
    image: 'bitnami/redis:5.0'
    environment:
      # ALLOW_EMPTY_PASSWORD is recommended only for development.
      - ALLOW_EMPTY_PASSWORD=yes
      - REDIS_DISABLE_COMMANDS=FLUSHDB,FLUSHALL
    ports:
      - '6379:6379'
    volumes:
      - 'redis_data:/bitnami/redis/data'
volumes:
  redis_data:
    driver: local
From the Docker Compose documentation:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name
If you want to access Redis by container name ('redis' in this case), the Spring Boot application also has to be deployed as a docker compose service, but it doesn't appear in the docker-compose file that you've provided in the question, so please add it (see the sketch below).
Alternatively, if you're trying to run the Spring Boot application on the host machine, use 'localhost' instead of 'redis' in order to access the redis container.
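A minimal sketch of the first approach (the app service name and image are assumptions; the environment variables are Spring Boot's relaxed-binding equivalents of the two properties above):
version: '2'
services:
  app:
    image: my-spring-app   # hypothetical image for your Spring Boot app
    depends_on:
      - redis
    environment:
      - SPRING_REDIS_HOST=redis
      - SPRING_REDIS_PORT=6379
  redis:
    image: 'bitnami/redis:5.0'
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'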
Another approach you can use is a Docker network. Below are the steps to follow:
Create a docker network for Redis:
docker network create redis-docker
Spin up the Redis container in the redis-docker network:
docker run -d --net redis-docker --name redis-stack -p 6379:6379 -p 8001:8001 redis/redis-stack:latest
Inspect the redis-docker network:
docker network inspect redis-docker
Copy the "IPv4Address" IP and paste it into application.yml (see the sketch below).
Now build and start your application.
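The resulting application.yml would look something like this (the address is a hypothetical example of what docker network inspect might report):
spring:
  redis:
    host: 172.18.0.2   # hypothetical IPv4Address copied from "docker network inspect"
    port: 6379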

Concourse - pass ssh keys via environment

I'm trying to stand up a Concourse CI inside Cloud Foundry for demo purposes. To avoid additional effort and cost I'd like to avoid using storage services, but the TSA keys for the SSH connection between the web service and the worker service need to be populated somehow. My question here is: is it possible to just pass the TSA keys via the environment in the docker-compose file?
I'd expect something like this in the docker-compose file:
web:
  image: concourse/concourse
  command: web
  links: [db]
  depends_on: [db]
  ports: ["9090:8080"]
  environment:
    CONCOURSE_EXTERNAL_URL: http://10.2.1.20:9090/
    CONCOURSE_POSTGRES_HOST: db
    CONCOURSE_POSTGRES_USER: concourse_user
    CONCOURSE_POSTGRES_PASSWORD: concourse_pass
    CONCOURSE_POSTGRES_DATABASE: concourse
    CONCOURSE_ADD_LOCAL_USER: test:test
    CONCOURSE_MAIN_TEAM_LOCAL_USER: test
    # TSA keys:
    CONCOURSE_SESSION_KEY: AA67/2C$AVG.....
    CONCOURSE_HOST_KEY: AA67/2C$AVG.....
    CONCOURSE_WORKER_KEY: AA67/2C$AVG.....
  logging:
    driver: "json-file"
    options:
      max-file: "5"
      max-size: "10m"
Yes, according to https://concourse-ci.org/concourse-web.html#web-running, you can set:
CONCOURSE_SESSION_SIGNING_KEY=path/to/session_signing_key
CONCOURSE_TSA_HOST_KEY=path/to/tsa_host_key
CONCOURSE_TSA_AUTHORIZED_KEYS=path/to/authorized_worker_keys
There are similar env vars you can set for running workers too.
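Note that these variables take file paths, not the key material itself. A sketch of producing the keys and handing them to the containers via a bind mount (the ./keys directory and the mount point are assumptions, not from the linked docs):
# generate the key pairs locally; newer Concourse releases also ship
# a `concourse generate-key` helper for the same purpose
mkdir -p keys
ssh-keygen -t rsa -b 4096 -m PEM -N '' -f ./keys/session_signing_key
ssh-keygen -t rsa -b 4096 -m PEM -N '' -f ./keys/tsa_host_key
ssh-keygen -t rsa -b 4096 -m PEM -N '' -f ./keys/worker_key
cp ./keys/worker_key.pub ./keys/authorized_worker_keys
The web service would then bind-mount ./keys into the container (e.g. volumes: ["./keys:/concourse-keys"]) and point the three CONCOURSE_* variables at the /concourse-keys/... paths.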

How can I use the port of a server running on localhost in kubernetes running spring boot app

I am new to Kubernetes and kubectl. I am basically running a gRPC server on my localhost. I would like to use this endpoint in a Spring Boot app running on Kubernetes using kubectl on my Mac. If I set the following config in application.yml and run it in Kubernetes, it doesn't work. The same config works if I run it in the IDE.
grpc:
  client:
    local-server:
      address: static://localhost:6565
      negotiationType: PLAINTEXT
I see some people suggesting port-forward, but that is the other way round: it works when I want to reach a port that is already in Kubernetes from localhost, just like reaching the Tomcat server running in Kubernetes from a browser on localhost.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testspringconfigvol
  labels:
    app: testspring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: testspringconfigvol
  template:
    metadata:
      labels:
        app: testspringconfigvol
    spec:
      initContainers:
        # taken from https://gist.github.com/tallclair/849601a16cebeee581ef2be50c351841
        # This container clones the desired git repo to the EmptyDir volume.
        - name: git-config
          image: alpine/git # Any image with git will do
          args:
            - clone
            - --single-branch
            - --
            - https://github.com/username/fakeconfig
            - /repo # Put it in the volume
          securityContext:
            runAsUser: 1 # Any non-root user will do. Match to the workload.
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
          volumeMounts:
            - mountPath: /repo
              name: git-config
      containers:
        - name: testspringconfigvol-cont
          image: username/testspring
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /usr/local/lib/config/
              name: git-config
      volumes:
        - name: git-config
          emptyDir: {}
What I need in simple terms:
I have servers listening on ports on my localhost (localhost:6565, localhost:6566) and I need to access these ports somehow from inside Kubernetes. What should I set in the application.yml config? Will it be the same localhost:6565, localhost:6566, or how-to-get-this-ip:6565, how-to-get-this-ip:6566?
We can get the VM host IP using minikube with this command: minikube ssh "route -n | grep ^0.0.0.0 | awk '{ print \$2 }'". For me it's 10.0.2.2 on Mac. If using Kubernetes on Docker for Mac, it's host.docker.internal.
By using these addresses, I managed to connect to the services running on the host machine from Kubernetes; see the sketch below.
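Applied to the config from the question, that would look something like this (assuming Kubernetes on Docker Desktop for Mac; substitute the minikube gateway IP otherwise):
grpc:
  client:
    local-server:
      address: static://host.docker.internal:6565
      negotiationType: PLAINTEXT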
1) Inside your application.properties define
server.port=8080
2) Create Dockerfile
# Start with a base image containing Java runtime (mine java 8)
FROM openjdk:8u212-jdk-slim
# Add Maintainer Info
LABEL maintainer="vaquar.khan@gmail.com"
# Add a volume pointing to /tmp
VOLUME /tmp
# Make port 8080 available to the world outside this container
EXPOSE 8080
# The application's jar file (when packaged)
ARG JAR_FILE=target/codestatebkend-0.0.1-SNAPSHOT.jar
# Add the application's jar to the container
ADD ${JAR_FILE} codestatebkend.jar
# Run the jar file
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/codestatebkend.jar"]
3) Build the image and make sure the container runs fine:
docker build -t codestatebkend .
docker run --rm -p 8080:8080 codestatebkend
4) Use port forwarding to access the application inside the cluster (see https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/).
Use the following command to find the pod name:
kubectl get pods
then:
kubectl port-forward <pod-name> 8080:8080
Useful links :
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#manually-constructing-apiserver-proxy-urls
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
https://developer.okta.com/blog/2019/05/22/java-microservices-spring-boot-spring-cloud

Trouble communicating between docker containers

I'm running an "elasticsearch" container. I can curl the container and get results, but when I try to communicate with it from within my "web" container, the connection is refused.
docker-compose up
curl localhost:9200 // works.
docker-compose run web curl localhost:9200 // connection refused.
docker-compose.yml
version: '2'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/src
    ports:
      - "5000:5000"
    depends_on:
      - elasticsearch
  elasticsearch:
    image: elasticsearch:5.1.2
    ports:
      - "9200:9200"
Dockerfile
FROM python:3.5
ADD . /src
WORKDIR /src
RUN pip install -r requirements.txt
CMD python project/wsgi.py
You cannot use localhost:9200 from within the web container to connect to the elasticsearch container. You could define a link or just use the service name (which is mapped by default):
curl elasticsearch:9200
Links allow you to define extra aliases by which a service is reachable from another service. They are not required to enable services to communicate - by default, any service can reach any other service at that service’s name.
Also see Docker Compose Links
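For instance, with the stack above running, you can verify from inside the web container (assuming curl is available in the image, which it is for the full python:3.5 base):
docker-compose exec web curl http://elasticsearch:9200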
You should be trying to curl elasticsearch:9200, not localhost:9200. The hostname elasticsearch is resolvable from inside the web container via Docker's built-in DNS.
