How can I reuse "services" code in multiple GitHub CI jobs - continuous-integration

I am trying to DRY up my GitHub ci.yml file somewhat. I have two jobs—one runs RSpec tests, the other runs Cucumber tests. There were a number of steps they shared, which I’ve extracted to an external action.
However, they both depend on postgres and chrome Docker images, and on some environment variables, so currently both jobs include the code below. Is there any way I can put this code in one place for both of them to use? Note that I'm not attempting to share the images themselves; I just don't want the repeated code.
services:
  postgres:
    image: postgres:13
    env:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    ports:
      - 5432:5432
    # Set health checks to wait until postgres has started
    # tmpfs for faster DB in RAM
    options: >-
      --mount type=tmpfs,destination=/var/lib/postgresql/data
      --health-cmd pg_isready
      --health-interval 10s
      --health-timeout 5s
      --health-retries 5
  chrome:
    image: seleniarm/standalone-chromium:4.1.2-20220227
    ports:
      - 4444:4444
env:
  DB_HOST: localhost
  CHROMEDRIVER_HOST: localhost
  RAILS_ENV: test
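GitHub Actions does not expand YAML anchors, so the usual way to remove this duplication is a reusable workflow: declare the services and env once in a workflow triggered by `workflow_call`, and have each job pass in its own test command. Below is a sketch only; the file name, input name, and the `bundle exec` commands are illustrative placeholders, not tested against your setup:

```yaml
# .github/workflows/test-suite.yml -- the shared job definition
name: test-suite
on:
  workflow_call:
    inputs:
      test-command:
        required: true
        type: string
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      DB_HOST: localhost
      CHROMEDRIVER_HOST: localhost
      RAILS_ENV: test
    services:
      postgres:
        image: postgres:13
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: postgres
        ports:
          - 5432:5432
        options: >-
          --mount type=tmpfs,destination=/var/lib/postgresql/data
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      chrome:
        image: seleniarm/standalone-chromium:4.1.2-20220227
        ports:
          - 4444:4444
    steps:
      - uses: actions/checkout@v3
      - run: ${{ inputs.test-command }}
```

Each caller job in ci.yml then shrinks to a `uses:` reference:

```yaml
jobs:
  rspec:
    uses: ./.github/workflows/test-suite.yml
    with:
      test-command: bundle exec rspec
  cucumber:
    uses: ./.github/workflows/test-suite.yml
    with:
      test-command: bundle exec cucumber
```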

Related

Hosts aren't accessible by name in docker compose on Windows

I have two Windows-based images that I'm using with docker compose.
The docker-compose.yaml:
services:
  application:
    image: myapp-win:latest
    container_name: "my-app"
    # for diagnosis
    entrypoint: ["cmd"]
    stdin_open: true
    tty: true
    # /diagnosis
    env_file: .myapp/.env
    environment:
      - POSTGRES_URI=jdbc:postgresql://db0:5432/mydatabase
    depends_on:
      db0:
        condition: service_healthy
  db0:
    image: stellirin/postgres-windows:10.10
    container_name: "my-db"
    ports:
      - 10000:5432 # this doesn't seem to work in windows
    env_file:
      - .postgres/.env
    volumes:
      - .postgres\initdb\:c:\docker-entrypoint-initdb.d\
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "${POSTGRES_DATABASE}", "-U", "${POSTGRES_USER}" ]
      timeout: 45s
      interval: 10s
      retries: 10
    restart: unless-stopped
With the two containers started, I accessed the terminal for the my-db container and got its IP address.
Next, I accessed the terminal for the my-app container. I was able to ping the my-db container by its IP address. However, it did not respond by its hostname:
c:\app> ping db0
Ping request could not find host db0.
This is symptomatic of why the application can't reach the database using the POSTGRES_URI variable.
Is there a different syntax for the hostname in a Windows container?
** edit **
I'm not able to ping outside the network, from either container:
c:\app> ping 8.8.8.8
Request timed out.
Not sure if this is relevant.
Regardless of container OS, to my knowledge the service name (db0) won't directly resolve inside the container; it is simply exposed to the other entries in the compose file.
Instead, set an environment variable based on the name and read it in the container:
environment:
  - "ADDRESS_DB=db0"
Then, if you want to be able to ping db0 or similar, have a startup script register the variable's value as an available host name.
Alternatively, you may have success with the extra_hosts field, but I haven't tested this and you may need to give it a different name to prevent interpolation:
extra_hosts:
  - db_url:db0

Testing a container against DynamoDB-Local

I wanted to test a container locally before pushing it to AWS ECS.
I ran unit tests against a docker-compose stack including a dynamodb-local container, using a Go (aws-sdk-go-v2) endpoint resolver with http://localhost:8000 as the URL.
Then I wanted to build and test the container locally, and realised I needed to attach it to the default network created by docker-compose. I struggled with this a bit, so I built a stripped-down trial. I created an endpoint resolver with a URL of http://dynamo-local:8000 (the container is named dynamo-local in docker-compose) and attached it to the default network via docker run.
Now that all works, I can perform the various table operations successfully, but one of the things that confuses me is that if I run aws cli:
aws --endpoint-url=http://localhost:8000 dynamodb list-tables
then the output shows no tables exist, when there is definitely a table. I had naively assumed that since I can reach port 8000 of the same container from different endpoints, I should be able to access the same resources. Wrong.
Obviously a gap in my education. What am I missing? I need to expand the trial into a proper test of the full app, so it's important to me that I understand what is going on here.
Is there a way I can use the aws cli to access the table?
docker-compose file:
version: '3.5'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_test
    ports:
      - '4566:4566'
    environment:
      - SERVICES=s3,sns,sqs,lambda
      - DEBUG=1
      - DATA_DIR=
    volumes:
      - './.AWSServices:/tmp/AWSServices'
      - '/var/run/docker.sock:/var/run/docker.sock'
  nginx:
    build:
      context: .
      dockerfile: Dockerfile
    image: chanonry/urlfiles-nginx:latest
    container_name: nginx
    ports:
      - '8080:80'
  dynamodb:
    image: amazon/dynamodb-local:1.13.6
    container_name: dynamo-local
    ports:
      - '8000:8000'
networks:
  default:
    name: test-net
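A likely explanation, hedged because it depends on how your Go client is configured: unless DynamoDB Local is started with the -sharedDb flag, it keeps a separate database per access key and region, so an SDK client and the aws cli presenting different credentials see different (empty-looking) table sets. A sketch of the compose service forcing one shared database:

```yaml
dynamodb:
  image: amazon/dynamodb-local:1.13.6
  container_name: dynamo-local
  # -sharedDb makes every client (SDK or aws cli) use the same database,
  # regardless of the access key and region they present
  command: "-jar DynamoDBLocal.jar -sharedDb -inMemory"
  ports:
    - '8000:8000'
```

Alternatively, run the aws cli with the same AWS_ACCESS_KEY_ID and region that the Go endpoint resolver's credentials use.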

Running Sonarqube with docker-compose using bind mount volumes

I’m trying to run Sonarqube in a Docker container on a Centos 7 server using docker-compose. Everything works as expected using named volumes as configured in this docker-compose.yml file:
version: "3"
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    environment:
      - sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
    volumes:
      - sonarqube_conf:/opt/sonarqube/conf
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_bundled_plugins:/opt/sonarqube/lib/bundled-plugins
  db:
    image: postgres
    networks:
      - sonarnet
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
networks:
  sonarnet:
    driver: bridge
volumes:
  sonarqube_conf:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_bundled_plugins:
  postgresql:
  postgresql_data:
However, my /var/lib/docker/volumes directory is not large enough to house the named volumes. So, I changed the docker-compose.yml file to use bind mount volumes as shown below.
version: "3"
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
    networks:
      - sonarnet
    environment:
      - sonar.jdbc.url=jdbc:postgresql://db:5432/sonar
    volumes:
      - /data/sonarqube/conf:/opt/sonarqube/conf
      - /data/sonarqube/data:/opt/sonarqube/data
      - /data/sonarqube/extensions:/opt/sonarqube/extensions
      - /data/sonarqube/bundled_plugins:/opt/sonarqube/lib/bundled-plugins
  db:
    image: postgres
    networks:
      - sonarnet
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - /data/postgresql:/var/lib/postgresql
      - /data/postgresql_data:/var/lib/postgresql/data
networks:
  sonarnet:
    driver: bridge
However, after running docker-compose up -d, the app starts up but none of the bind mount volumes are written to. As a result, the Sonarqube plugins are not loaded and the sonar postgreSQL database is not initialized. I thought it may be a selinux issue, but I temporarily disabled it with no success. I’m unsure what to look at next.
I think my answer from "How to persist configuration & analytics across container invocations in Sonarqube docker image" would help you as well.
For good measure I have also pasted it in here:
Notice the SONARQUBE_HOME line in the Dockerfile for the docker-sonarqube image. We can control this environment variable.
When using docker run, simply do:
docker run -d \
  ... \
  -e SONARQUBE_HOME=/sonarqube-data \
  -v /PERSISTENT_DISK/sonarqubeVolume:/sonarqube-data
This will make Sonarqube create the conf, data, and so forth folders and store data therein, as needed.
Or with Kubernetes, in your deployment YAML file, do:
...
env:
  - name: SONARQUBE_HOME
    value: /sonarqube-data
...
volumeMounts:
  - name: app-volume
    mountPath: /sonarqube-data
The name in the volumeMounts property points to a volume in the volumes section of the Kubernetes deployment YAML file.
This again will make Sonarqube use the /sonarqube-data mountPath for creating extensions, conf, and so forth folders, then save data therein.
And voila your Sonarqube data is thereby persisted.
I hope this will help others.
N.B. Notice that the YAML and Docker run examples are not exhaustive. They focus on the issue of persisting Sonarqube data.
Try it out BobC and let me know.
Have a great day.
I hope the below will help you in a single command.
Create a new docker-compose file named docker-compose.yaml:
version: "3"
services:
  sonarqube:
    image: sonarqube:8.2-community
    depends_on:
      - db
    ports:
      - "9000:9000"
    networks:
      - sonarqubenet
    environment:
      SONAR_JDBC_URL: jdbc:postgresql://db:5432/sonarqube
      SONAR_JDBC_USERNAME: sonar
      SONAR_JDBC_PASSWORD: sonar
    volumes:
      - sonarqube_data:/opt/sonarqube/data
      - sonarqube_extensions:/opt/sonarqube/extensions
      - sonarqube_logs:/opt/sonarqube/logs
      - sonarqube_temp:/opt/sonarqube/temp
    restart: on-failure
    container_name: sonarqube
  db:
    image: postgres
    networks:
      - sonarqubenet
    environment:
      POSTGRES_USER: sonar
      POSTGRES_PASSWORD: sonar
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
    restart: on-failure
    container_name: postgresql
networks:
  sonarqubenet:
    driver: bridge
volumes:
  sonarqube_data:
  sonarqube_extensions:
  sonarqube_logs:
  sonarqube_temp:
  postgresql:
  postgresql_data:
Then, execute the command,
$ docker-compose up -d
$ docker container ps
Sounds like the container is running and, as you mentioned, Sonarqube starts up. When it starts, is it showing that it's using the embedded H2 in-memory db? After running docker-compose up -d, use docker logs -f <container_name> to see what's happening on Sonarqube startup.
To simplify viewing your logs with a known name, I suggest you also add a container name to your Sonarqube service. For example, container_name: sonarqube.
Also, while I know the plan is to deprecate the use of environment variables for the username, password and jdbc connection, I've had better luck in docker-compose using environment variables rather than the corresponding property value. For the connection string, try: SONARQUBE_JDBC_URL: jdbc:postgresql://db/sonar without specifying the default port for postgres.
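One more thing worth checking with bind mounts on CentOS, offered as an assumption rather than a confirmed diagnosis of your setup: the host directories must be writable by the user the sonarqube container runs as, and with SELinux enforcing, Docker also needs to relabel them, which the :z volume flag requests. A sketch of the sonarqube volume section with the flag added:

```yaml
volumes:
  # :z asks Docker to apply an SELinux label that allows container access
  - /data/sonarqube/conf:/opt/sonarqube/conf:z
  - /data/sonarqube/data:/opt/sonarqube/data:z
  - /data/sonarqube/extensions:/opt/sonarqube/extensions:z
  - /data/sonarqube/bundled_plugins:/opt/sonarqube/lib/bundled-plugins:z
```

If the directories don't exist before docker-compose up, Docker creates them owned by root, which can also leave the container unable to write to them.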

Configuring Docker with Traefik, Nginx and Laravel

I am trying to figure out how to setup a simple stack for development and later deployment. I want to utilize Docker to serve Traefik in a container as the public facing reverse-proxy, which then interfaces as needed with a Nginx container that is used only to serve static frontend files (HTML, CSS, JS) and a backend PHP container that runs Laravel (I'm intentionally decoupling the frontend and API for this project).
I am trying my best to learn through all of the video and written tutorials out there, but things become complicated very quickly (at least, for my uninitiated brain) and it's a bit overwhelming. I have a one-week deadline to complete this project, and I'm strongly considering dropping Docker altogether for the time being, out of fear that I'll spend the whole week messing around with the configuration instead of actually coding!
To get started, I have a simple docker-compose with the following configuration that I've verified at least runs correctly:
version: '3'
services:
  reverse-proxy:
    image: traefik
    command: --api --docker # Enables Web UI and tells Traefik to listen to Docker.
    ports:
      - "80:80"     # HTTP Port
      - "8080:8080" # Web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock # So that Traefik can listen to the Docker events.
Now, I need to figure out how to connect Nginx and PHP/Laravel effectively.
First of all, don't put yourself under stress to learn new stuff; if you do, learning it won't feel comfortable anymore. Use the technology you already know and get the work done. When you're done and realize you have a day or two left before the deadline, try to overdeliver by adding the new technology then. This way you won't miss your deadline, and you won't be under stress figuring out new technology or configuration.
The configuration you see below is neither complete nor functionally tested. I just copied most of it out of 3 of my main projects to give you a starting point. Traefik as-is can be complicated to set up properly.
version: '3'
# Instantiate your own configuration with a Dockerfile!
# This way you can build somewhere and just deploy your container
# anywhere without the need to copy files around.
services:
  # traefik as reverse-proxy
  traefik:
    build:
      context: .
      dockerfile: ./Dockerfile-for-traefik # including traefik.toml
    command: --docker
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      # this file you'll have to create manually: `touch acme.json && chmod 600 acme.json`
      - /home/docker/volumes/traefik/acme.json:/opt/traefik/acme.json
    networks:
      - overlay
    ports:
      - 80:80
      - 443:443
  nginx:
    build:
      context: .
      dockerfile: ./Dockerfile-for-nginx
    networks:
      - overlay
    depends_on:
      - laravel
    volumes:
      # you can copy your assets to production with
      # `tar -c -C ./myassets . | docker cp - myfolder_nginx_1:/var/www/assets`
      # there are many other ways to achieve this!
      - assets:/var/www/assets
  # define your application + whatever it needs to run
  # important:
  # - "build:" will search for a Dockerfile in the directory you're specifying
  laravel:
    build: ./path/to/laravel/app
    environment:
      MYSQL_ROOT_PASSWORD: password
      ENVIRONMENT: development
      MYSQL_DATABASE: your_database
      MYSQL_USER: your_database_user
    networks:
      - overlay
    links:
      - mysql
    volumes:
      # this path is for development
      - ./path/to/laravel/app:/app
  # you need a database, right?
  mysql:
    image: mysql:5
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: your_database
      MYSQL_USER: your_database_user
    networks:
      - overlay
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  mysql-data:
  assets:
networks:
  overlay:
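To wire Nginx to the Laravel container, assuming the laravel image runs php-fpm listening on port 9000 and the app is mounted at /app as in the compose file above (both assumptions; adjust to your Dockerfile), an illustrative server block for Dockerfile-for-nginx could look like:

```nginx
server {
    listen 80;
    root /var/www/assets;   # the static frontend volume from the compose file
    index index.html;

    # forward API calls to php-fpm in the "laravel" service; compose makes the
    # service name resolvable on the shared overlay network
    location /api {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /app/public/index.php;
        fastcgi_pass laravel:9000;
    }
}
```

Everything else (the static frontend) is served directly by Nginx, while Traefik only needs a route to the nginx service.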

Traefik - Can't connect via https

I am trying to run Traefik on a Raspberry Pi Docker Swarm (specifally following this guide https://github.com/openfaas/faas/blob/master/guide/traefik_integration.md from the OpenFaaS project) but have run into some trouble when actually trying to connect via https.
Specifically there are two issues:
1) When I connect to http://192.168.1.20/ui I am given the username / password prompt. However the details (unhashed password) generated by htpasswd and used in the below docker-compose.yml are not accepted.
2) Visiting the https version (https://192.168.1.20/ui) does not connect at all. This is the same if I try to connect using the domain I have set in --acme.domains
When I explore /etc/ I can see that no /etc/traefik/ directory exists but should presumably be created so perhaps this is the root of my problem?
The relevant part of my docker-compose.yml looks like:
traefik:
  image: traefik:v1.3
  command: -c --docker=true
    --docker.swarmmode=true
    --docker.domain=traefik
    --docker.watch=true
    --web=true
    --debug=true
    --defaultEntryPoints=https,http
    --acme=true
    --acme.domains='<my domain>'
    --acme.email=myemail@gmail.com
    --acme.ondemand=true
    --acme.onhostrule=true
    --acme.storage=/etc/traefik/acme/acme.json
    --entryPoints=Name:https Address::443 TLS
    --entryPoints=Name:http Address::80 Redirect.EntryPoint:https
  ports:
    - 80:80
    - 8080:8080
    - 443:443
  volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
    - "acme:/etc/traefik/acme"
  networks:
    - functions
  deploy:
    labels:
      - traefik.port=8080
      - traefik.frontend.rule=PathPrefix:/ui,/system,/function
      - traefik.frontend.auth.basic=user:password <-- relevant credentials from htpasswd here
    restart_policy:
      condition: on-failure
      delay: 5s
      max_attempts: 20
      window: 380s
    placement:
      constraints: [node.role == manager]
volumes:
  acme:
Any help very much appreciated.
Due to https://community.letsencrypt.org/t/2018-01-09-issue-with-tls-sni-01-and-shared-hosting-infrastructure/49996, the TLS challenge (the default) for Let's Encrypt doesn't work anymore.
You must use the DNS challenge instead: https://docs.traefik.io/configuration/acme/#dnsprovider
Or wait for the merge of https://github.com/containous/traefik/pull/2701
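For Traefik 1.x, the switch is made in traefik.toml by naming a DNS provider under the [acme] section; the provider below is a placeholder, and each provider expects its own credential environment variables on the container (see the linked docs page):

```toml
[acme]
email = "you@example.com"
storage = "/etc/traefik/acme/acme.json"
# placeholder provider; pick yours and set its credential env vars
dnsProvider = "cloudflare"
```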
