Concourse - pass ssh keys via environment - continuous-integration

I'm trying to ramp up a Concourse CI inside Cloud Foundry for demo purposes. To avoid additional effort and cost I'd like to avoid using storage services, but the TSA keys for the SSH connection between the web service and the worker service need to be populated somehow. My question here is: is it possible to just pass the TSA keys via the environment in the docker-compose file?
I'd expect something like this in the docker-compose file:
web:
  image: concourse/concourse
  command: web
  links: [db]
  depends_on: [db]
  ports: ["9090:8080"]
  environment:
    CONCOURSE_EXTERNAL_URL: http://10.2.1.20:9090/
    CONCOURSE_POSTGRES_HOST: db
    CONCOURSE_POSTGRES_USER: concourse_user
    CONCOURSE_POSTGRES_PASSWORD: concourse_pass
    CONCOURSE_POSTGRES_DATABASE: concourse
    CONCOURSE_ADD_LOCAL_USER: test:test
    CONCOURSE_MAIN_TEAM_LOCAL_USER: test
    # TSA keys:
    CONCOURSE_SESSION_KEY: AA67/2C$AVG.....
    CONCOURSE_HOST_KEY: AA67/2C$AVG.....
    CONCOURSE_WORKER_KEY: AA67/2C$AVG.....
  logging:
    driver: "json-file"
    options:
      max-file: "5"
      max-size: "10m"

Yes, according to https://concourse-ci.org/concourse-web.html#web-running, you can set:
CONCOURSE_SESSION_SIGNING_KEY=path/to/session_signing_key
CONCOURSE_TSA_HOST_KEY=path/to/tsa_host_key
CONCOURSE_TSA_AUTHORIZED_KEYS=path/to/authorized_worker_keys
There are similar env vars you can set for running workers too.
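For example, a minimal docker-compose sketch along those lines (the ./keys paths, the worker service, and the key file names are assumptions; the keys would be generated beforehand, e.g. with ssh-keygen) bind-mounts a local key directory and points the variables at the files, so no separate storage service is needed:
web:
  image: concourse/concourse
  command: web
  volumes:
    # hypothetical local directory containing session_signing_key,
    # tsa_host_key and authorized_worker_keys
    - ./keys/web:/concourse-keys
  environment:
    CONCOURSE_SESSION_SIGNING_KEY: /concourse-keys/session_signing_key
    CONCOURSE_TSA_HOST_KEY: /concourse-keys/tsa_host_key
    CONCOURSE_TSA_AUTHORIZED_KEYS: /concourse-keys/authorized_worker_keys
worker:
  image: concourse/concourse
  command: worker
  privileged: true
  volumes:
    # hypothetical local directory containing worker_key and tsa_host_key.pub
    - ./keys/worker:/concourse-keys
  environment:
    CONCOURSE_TSA_HOST: web:2222
    CONCOURSE_TSA_PUBLIC_KEY: /concourse-keys/tsa_host_key.pub
    CONCOURSE_TSA_WORKER_PRIVATE_KEY: /concourse-keys/worker_key
Note that these variables expect file paths rather than the key material itself, so the keys still have to exist as files inside the container; a bind mount like the above is enough.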

Related

Unable to connect from Spring Boot to Dockerized Redis in outside/inside machine

I am connecting to Redis from a Spring Boot app running on a machine other than the one hosting the Redis Docker container. When the app tries to connect, it fails and hangs until the request times out. Meanwhile, if I try to connect from:
Inside the machine where the Redis container is running, with the host set to localhost, I can connect. I don't know why connecting with a numerical IP or a hostname (URL) fails; it only works with "localhost".
An outside machine where the Redis container is not running, using a Redis GUI client for management, I can also connect.
application.properties:
spring.redis.host=pc-1
spring.redis.port=6379
pc-1 is an alias for a numerical IP; I'm using the Windows hosts file to alias/redirect it.
.env:
REDIS_PORT=6379
docker-compose.yml:
redis:
  image: redis:latest
  ports:
    - "${REDIS_PORT}:6379"
  command:
    # - redis-server
    # - --requirepass "${REDIS_PASSWORD}"
  networks:
    - redis
  healthcheck:
    test: ["CMD-SHELL", "redis-cli ping"]
    interval: 10s
    timeout: 10s
    retries: 3
I need help on this issue.
If you start the service with docker compose run, pass the --service-ports flag so the ports you've defined in the compose file are actually published (a plain docker compose up publishes them by default).
Other debugging tips:
Hardcode the ${REDIS_PORT} variable in case the value is not getting set, or give it a default like ${REDIS_PORT:-default} (see the sketch after this list)
Pass the env file explicitly, e.g. docker compose --env-file ./somedir/.env up, in case the env file is not being picked up
Use docker inspect to get the container status and check the networking info
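As a small sketch of the default-value syntax from the first tip (the 6379 fallback here is an assumption, not something from the original compose file):
redis:
  image: redis:latest
  ports:
    # falls back to 6379 if REDIS_PORT is not set in .env or the shell
    - "${REDIS_PORT:-6379}:6379"
Running docker compose config afterwards prints the fully substituted file, which makes it easy to see whether the variable was actually picked up.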

How to check from inside a container if another container is running on port

I am running 2 containers at the same time (connected via docker-compose using links and depends_on).
depends_on alone is not enough, so I want the script that runs in the entrypoint of one of the containers to check whether the other container is already listening on some port.
I tried:
#!/bin/bash
until nc -z w10 <container_name> 3306
do
  echo waiting for db to be ready...
  sleep 2
done
echo code is ready
But this is not working. Does anyone have an idea?
I would suggest sticking with the depends_on approach; however, you can use some of its more advanced settings. Please read the documentation on Control startup and shutdown order in Compose.
You can use the wait-for-it.sh script to achieve exactly what you need. Extracted from the documentation:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it.sh", "db:5432", "--", "python", "app.py"]
db:
image: postgres
Since you are already using docker-compose to orchestrate your services, a better way would be to use condition: service_healthy of the depends_on long syntax. Instead of manually waiting in one container for the other to become available, docker-compose will start the former only after the latter has become healthy, i.e. available.
If the depended-on container does not already have a HEALTHCHECK specified in its image, you can define one manually in the docker-compose.yml with the healthcheck attribute.
Example with a mariadb database using the included healthcheck.sh script:
services:
  app:
    image: myapp/image
    depends_on:
      db:
        condition: service_healthy
  db:
    image: mariadb
    environment:
      - MARIADB_ROOT_PASSWORD=password
    healthcheck:
      test: "healthcheck.sh --connect"
With this, docker-compose up will first start the db service and wait until it becomes healthy, i.e. is ready to accept connections, and only then start the app service, which can immediately connect to the db.
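Applied to the original question (a database on port 3306), a sketch might look like the following; the mysql image, credentials and mysqladmin-based probe are assumptions to be adapted to the image actually in use:
services:
  app:
    image: myapp/image          # hypothetical application image
    depends_on:
      db:
        condition: service_healthy
  db:
    image: mysql:8
    environment:
      - MYSQL_ROOT_PASSWORD=password
    healthcheck:
      # succeeds once the server accepts connections on 3306
      test: ["CMD", "mysqladmin", "ping", "-h", "127.0.0.1", "-ppassword"]
      interval: 5s
      timeout: 5s
      retries: 10
This replaces the nc polling loop from the question entirely: compose only starts app once the db healthcheck passes.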

How to network 2 separate docker containers to communicate with each other?

I'm pretty new to docker, and I've tried searching about networking but haven't found a solution that's worked.
I have a Laravel app that is using Laradock.
I also have an external 3rd party API that runs in its own docker container.
I basically want to specify the container name of the API inside my Laravel .env file and have it dynamically resolve the container IP so I can make API calls from my Laravel app. I can already do this with services that are part of Laradock, like mariadb/mysql, but since my API lives in an external container, Laravel can't connect to it.
I tried making a network and attaching them with:
docker network create my-network
Then inside my docker-compose.yml files for each of the containers, I specified:
networks:
  my-network:
    name: "my-network"
But if I try and ping them with:
docker exec -ti laradock-workspace-1 ping my-api
I can't connect and can't really figure out why. Was hoping someone familiar with docker might be able to explain why since I'm sure it's something very obvious I'm missing. Thanks!
By default Docker Compose uses a bridge network to provision inter-container communication. Read this article for more info about inter-container networking.
What matters for you, is that by default Docker Compose creates a hostname that equals the service name in the docker-compose.yml file.
Consider the following docker-compose.yml:
version: '3.9'
services:
  server:
    image: node:16.9.0
    container_name: server
    tty: true
    stdin_open: true
    depends_on:
      - mongo
    command: bash
  mongo:
    image: mongo
    environment:
      MONGO_INITDB_DATABASE: my-database
When you run docker-compose up, Docker will create a default network and assign the service name as the hostname for both mongo and server.
You can now access the backend container via:
docker exec -it server bash
And now you can reach the mongo container over Docker's internal network (by default on port 27017 in this case):
curl -v http://mongo:27017/my-database
That's it. The same applies to your setup.
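One difference in the question above is that the two containers live in separate docker-compose projects, so they do not share a default network. In that case the pre-created network can be marked as external in both compose files, so each project attaches its services to it instead of creating its own. A sketch, assuming the network was created with docker network create my-network and that my-api is the service name in the other project:
# in each project's docker-compose.yml
services:
  workspace:                   # hypothetical service name; the API project mirrors this
    image: laradock/workspace
    networks:
      - my-network
networks:
  my-network:
    external: true
With both services attached to the same external network, docker exec -ti laradock-workspace-1 ping my-api should resolve, provided my-api is the service name (or container_name) of the API container.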

Configuring ssl in rabbitmq.config using rabbitmq docker image

My goal is to set up RabbitMQ with SSL support, which was previously achieved using the rabbitmq.config file below, which resides in the host's /etc/rabbitmq path.
Now I want to be able to configure a RabbitMQ user and password other than the defaults guest/guest.
I'm using the rabbitmq Docker image with the following docker-compose configuration:
version: '2'
services:
  rabbitmq:
    build: ./rabbitmq
    ports:
      - "8181:8181"
    expose:
      - "15672"
      - "8181"
    volumes:
      - /etc/rabbitmq:/etc/rabbitmq
    environment:
      RABBITMQ_DEFAULT_USER: user123
      RABBITMQ_DEFAULT_PASS: 1234
Rabbitmq config:
[{rabbit,
  [
   {loopback_users, []},
   {heartbeat, 0},
   {ssl_listeners, [8181]},
   {ssl_options, [{cacertfile, "/etc/rabbitmq/ca/cacert.pem"},
                  {certfile, "/etc/rabbitmq/server/cert.pem"},
                  {keyfile, "/etc/rabbitmq/server/key.pem"},
                  {verify, verify_none},
                  {fail_if_no_peer_cert, false}]}
  ]}
].
Rabbitmq dockerfile:
FROM rabbitmq:management
# ...and some certificate-generating logic
I noticed that upon adding the environment section, the current rabbitmq.config file is overridden with an auto-generated configuration, possibly by the docker-entrypoint.sh file.
For building the configuration using the certs I found environment variables that can do this (look here).
However, I didn't find any reference for defining the ssl_listeners section with its port, as seen in the rabbitmq.config above.
My question is: how can I create the exact configuration shown above using env variables, OR how can I keep my own rabbitmq.config while defining RabbitMQ with a new user and password in some dynamic way (maybe by templating the config file)?
Try this
version: '2'
services:
  rabbitmq:
    build: ./rabbitmq
    ports:
      - "8181:8181"
    expose:
      - "15672"
      - "8181"
    volumes:
      - /etc/rabbitmq:/etc/rabbitmq
    command: rabbitmq-server
    entrypoint: ""
    environment:
      RABBITMQ_DEFAULT_USER: user123
      RABBITMQ_DEFAULT_PASS: 1234
This will override the docker-entrypoint and just run the RabbitMQ server. Note that ./docker-entrypoint.sh also sets certain environment variables which may be needed in your case, so check that you still have everything you need.
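Alternatively (a sketch, not part of the original answer), you can drop the RABBITMQ_DEFAULT_USER/RABBITMQ_DEFAULT_PASS variables and set the non-default credentials directly in the existing rabbitmq.config, since the classic config format supports default_user and default_pass under the rabbit application. This avoids the overwrite described in the question, which was triggered by adding the environment section:
[{rabbit,
  [
   %% non-default credentials instead of the RABBITMQ_DEFAULT_* env vars
   {default_user, <<"user123">>},
   {default_pass, <<"1234">>},
   {loopback_users, []},
   {heartbeat, 0},
   {ssl_listeners, [8181]},
   {ssl_options, [{cacertfile, "/etc/rabbitmq/ca/cacert.pem"},
                  {certfile, "/etc/rabbitmq/server/cert.pem"},
                  {keyfile, "/etc/rabbitmq/server/key.pem"},
                  {verify, verify_none},
                  {fail_if_no_peer_cert, false}]}
  ]}
].
This keeps the SSL listener definition exactly as before while still replacing the guest/guest defaults.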

How to enable https on bluemix with docker-compose

I created a simple application consisting of nginx and Python Flask, made up of two containers, which I can deploy to Bluemix using docker-compose.
The docker compose file is docker-compose-bluemix.yml
flask:
  image: registry.ng.bluemix.net/namespace/simple.flask
  restart: always
  expose:
    - "8000"
  command: /usr/local/bin/gunicorn -w 2 -b :8000 app:app
nginx:
  image: registry.ng.bluemix.net/namespace/simple.nginx
  restart: always
  ports:
    - "80:80"
  links:
    - flask:flask
Once I assign an IP to the nginx container it works, in that I can access it like so:
curl http://ip/flask-api/v0.01/hello
and the correct response is returned
{"status": "hello"}
How do I enable https for this app? Must it be done by providing the nginx container with self-signed certs, or can I leverage Bluemix to give me an https://xxx.mybluemix.net address for the containers? If so, how?
If you want Bluemix to assign a route like https://xxx.mybluemix.net then you need to deploy a Scalable Group instead of a Single Container. Scalable Groups can be assigned routes which will allow SSL (https://) access.
I don't believe that you can do this with Docker Compose because Docker is not aware of the container group capabilities in Bluemix. You could use the IBM Container extensions to the Cloud Foundry CLI to do this from the command line or from your DevOps pipeline tool of choice with the following commands:
cf ic group create --name simple-flask -m 64 -p 8000 --min 1 --max 3 --desired 2 registry.ng.bluemix.net/namespace/simple-flask:latest
cf ic route map -n simple-flask -d mybluemix.net simple-flask
At that point you don't need nginx because Bluemix will put a load balancer in front of your container group for you to direct traffic to the containers within it. You can then get to it via:
https://simple-flask.mybluemix.net/flask-api/v0.01/hello
This should give you what you were looking for.
~jr
