Docker-compose: get pulled & deployed images - bash

Context:
I am currently trying to create a Jenkins job that builds periodically and updates the images that are in my docker-compose file. I managed to get a basic version of this to work by labeling my services in my docker-compose.yml. For example:
gitlab:
  image: 'gitlab/gitlab-ce:latest'
  container_name: 'gitlab'
  labels:
    update: 'notify'
  ...
letsencrypt:
  image: 'jrcs/letsencrypt-nginx-proxy-companion'
  container_name: 'letsencrypt-companion'
  labels:
    update: 'auto'
  ...
Here notify means that the job should pull new Docker images periodically and notify me that an image is ready to be updated; auto means that it is allowed to deploy the new image automatically.
Problem:
I want to make it so that when new images are pulled, Jenkins automatically notifies me that new images are ready / deployed. The problem, however, is that I have to interpret the output of docker-compose pull and docker-compose up -d to know which images were actually new and deployed. I need a solution that works in a Jenkins pipeline (declarative or scripted).
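A minimal sketch of what that check could look like, assuming only the update labels from the compose file above; the exact commands and the echo-based reporting are illustrative, not a fixed recipe. The idea is to pull first, then compare the image ID each running container uses with the image ID now behind its tag, and only run docker-compose up -d for services labelled auto:

#!/usr/bin/env bash
# Sketch: detect which services actually received a new image after a pull.
set -e

docker-compose pull

for svc in $(docker-compose config --services); do
  cid=$(docker-compose ps -q "$svc")
  [ -n "$cid" ] || continue

  running=$(docker inspect --format '{{.Image}}' "$cid")       # image ID the container runs
  ref=$(docker inspect --format '{{.Config.Image}}' "$cid")    # tag the service was started from
  latest=$(docker image inspect --format '{{.Id}}' "$ref")     # image ID now behind that tag
  policy=$(docker inspect --format '{{index .Config.Labels "update"}}' "$cid")

  if [ "$running" != "$latest" ]; then
    if [ "$policy" = "auto" ]; then
      docker-compose up -d "$svc"
      echo "redeployed $svc"
    else
      echo "new image available for $svc"   # surface this via the Jenkins build description or a mail step
    fi
  fi
done

In a Jenkins pipeline (declarative or scripted) this can run inside a sh step, with the echoed lines feeding the notification stage.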

Try watching this video: https://www.youtube.com/watch?v=ZL3hMP9BdmQ; I think that's what you are looking for.
https://github.com/v2tec/watchtower
"If you mount the config file as described below, be sure to also prepend the url for the registry when starting up your watched image (you can omit the https://). Here is a complete docker-compose.yml file that starts up a docker container from a private repo at dockerhub and monitors it with watchtower. Note the command argument changing the interval to 30s rather than the default 5 minutes."
version: "3"
services:
cavo:
image: index.docker.io/<org>/<image>:<tag>
ports:
- "443:3443"
- "80:3080"
watchtower:
image: v2tec/watchtower
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /root/.docker/config.json:/config.json
command: --interval 30
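With that file in place, starting the stack is the usual command; watchtower's own service log then shows when it detects and replaces an image:

docker-compose up -d
docker-compose logs -f watchtower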

How to choose correct profile on different environments for Docker and Docker-Compose?

I have already checked some similar questions.
What I do not understand is: if I change my docker-compose.yml and add a profile to it, should I leave the Dockerfile without a profile?
For example my docker-compose file:
backend:
  container_name: backend
  image: backend
  build: ./backend
  restart: always
  deploy:
    restart_policy:
      condition: on-failure
      max_attempts: 15
  ports:
    - '8080:8080'
  environment:
    - MYSQL_ROOT_PASSWORD=DbPass3008
    - MYSQL_PASSWORD=DbPass3008
    - MYSQL_USER=DbUser
    - MYSQL_DATABASE=db
  depends_on:
    - mysql
And I will add:
environment:
  - "SPRING_PROFILES_ACTIVE=test"
As far as I understand, I need to create three different compose files and run them with the -f parameter for the different environments, like:
docker-compose -f docker-compose-local/test/prod up -d
But my question is that my Dockerfile already specifies a profile:
FROM openjdk:17-oracle
ADD ./target/backend-0.0.1-SNAPSHOT.jar backend.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar", "-Dspring.profiles.active=TEST", "backend.jar"]
So how should I change this Dockerfile? Even if I create 3-4 different compose files, they all use the same Dockerfile. Should I create different Dockerfiles too (which seems ridiculous), or what is the correct way?
There's no need to add a java -Dspring.profiles.active=... command-line option; Spring will recognize the runtime SPRING_PROFILES_ACTIVE environment variable on its own. That means all of your environments can use the same image (which is generally a good practice).
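As a quick check of that claim (the image name backend is taken from the compose file above, and the profile values are just examples), once the hard-coded -Dspring.profiles.active flag is dropped from the ENTRYPOINT, the same image can be started with different profiles purely through the environment:

docker run --rm -e SPRING_PROFILES_ACTIVE=test backend
docker run --rm -e SPRING_PROFILES_ACTIVE=prod backend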
Compose can also expand host environment variables in some contexts, so you may be able to use a single Compose file with environment-variable references:
version: '3.8'
services:
  backend:
    environment:
      - SPRING_PROFILES_ACTIVE=${ENVIRONMENT:-dev}
ENVIRONMENT=test docker-compose up -d
I tend to discourage putting environment-specific settings in a src/main/resources/*.yml file, since it means you need to recompile the application jar file whenever you deploy to a new environment. Another possibility is to set most Spring properties as environment variables, and then use multiple Compose files to include environment-specific settings. The one downside here is that you need multiple docker-compose -f options and you need to repeat them on every docker-compose invocation.
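For the multiple-Compose-file variant mentioned above, the invocation looks roughly like this (docker-compose.test.yml is an illustrative name for the file carrying the environment-specific overrides; later -f files override matching keys in earlier ones):

docker-compose -f docker-compose.yml -f docker-compose.test.yml up -d
docker-compose -f docker-compose.yml -f docker-compose.test.yml logs -f backend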

Testing a container against DynamoDB-Local

I wanted to test a container locally before pushing it to AWS ECS.
I ran unit tests against a docker-compose stack including a dynamodb-local container, using a Go (aws-sdk-go-v2) endpoint resolver with http://localhost:8000 as the URL.
So I wanted to build and test the container locally and realised I needed to attach it to the default network created by docker-compose. I struggled with this a bit, so I built a stripped-down trial. I created an endpoint resolver with a URL of http://dynamo-local:8000 (the container is named dynamo-local in docker-compose) and attached it to the default network within docker run.
Now that all works, I can perform the various table operations successfully, but one of the things that confuses me is that if I run aws cli:
aws --endpoint-url=http://localhost:8000 dynamodb list-tables
then the output shows that no tables exist, when a table definitely does exist. I had assumed, naively, that since I can access port 8000 of the same container through different endpoints, I should be able to access the same resources. Wrong.
Obviously a gap in my education. What am I missing? I need to expand the trial into a proper test of the full app, so it's important to me that I understand what is going on here.
Is there a way I can use the aws cli to access the table?
docker-compose file:
version: '3.5'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_test
    ports:
      - '4566:4566'
    environment:
      - SERVICES=s3,sns,sqs, lambda
      - DEBUG=1
      - DATA_DIR=
    volumes:
      - './.AWSServices:/tmp/AWSServices'
      - '/var/run/docker.sock:/var/run/docker.sock'
  nginx:
    build:
      context: .
      dockerfile: Dockerfile
    image: chanonry/urlfiles-nginx:latest
    container_name: nginx
    ports:
      - '8080:80'
  dynamodb:
    image: amazon/dynamodb-local:1.13.6
    container_name: dynamo-local
    ports:
      - '8000:8000'
networks:
  default:
    name: test-net
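One thing worth ruling out here: unless DynamoDB Local is started with the -sharedDb flag, it keeps a separate table namespace per access key ID and region, so an aws cli call made with different (even dummy) credentials or a different region than the Go client will list no tables. A quick check is to run the CLI with the same values the endpoint resolver's client uses (the values below are placeholders):

AWS_ACCESS_KEY_ID=dummy \
AWS_SECRET_ACCESS_KEY=dummy \
AWS_DEFAULT_REGION=eu-west-1 \
aws --endpoint-url=http://localhost:8000 dynamodb list-tables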

can `bootBuildImage` create writeable volumes?

Given a Spring Boot app that writes files to /var/lib/app/files.
I create a Docker image with the Gradle task:
./gradlew bootBuildImage --imageName=app:latest
Then, I want to use it in docker-compose:
version: '3.5'
services:
  app:
    image: app:latest
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc.
volumes:
  app-storage:
This will fail, because the folder is created during docker-compose up and is owned by root; the app therefore has no write access to it.
The quick fix is to run the image as root by specifying user: root:
version: '3.5'
services:
  app:
    image: app:latest
    user: root # <------------ required
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc.
volumes:
  app-storage:
This works fine, but I do not want to run it as root, so I wonder how to achieve that. I could normally create a Dockerfile that creates the desired folder with the correct ownership and write permissions. But as far as I know, buildpacks do not use a custom Dockerfile, and hence bootBuildImage would not use it - correct? How can we create writable volumes then?
By inspecting the image I found that the buildpack uses /cnb/lifecycle/launcher to launch the application. Hence I was able to customize the docker command and fix the owner of the specific folder before launch:
version: '3.5'
services:
  app:
    image: app:latest
    # enable the app to write to the storage folder (docker will create it as root by default)
    user: root
    command: "/bin/sh -c 'chown 1000:1000 /var/lib/app/files && /cnb/lifecycle/launcher'"
    volumes:
      - app-storage:/var/lib/app/files
    # ...ports etc.
volumes:
  app-storage:
Still, this is not very nice, because it is not straightforward (so my future self will need to spend time understanding it again), and it is also very limited in its extensibility.
Update 30.10.2020 - Spring Boot 2.3
We ended up creating another Dockerfile/layer so that we do not need to hassle with this in the docker-compose file:
# The base_image should hold a reference to the image created by ./gradlew bootBuildImage
ARG base_image
FROM ${base_image}
ENV APP_STORAGE_LOCAL_FOLDER_PATH /var/lib/app/files
USER root
RUN mkdir -p ${APP_STORAGE_LOCAL_FOLDER_PATH}
RUN chown ${CNB_USER_ID}:${CNB_GROUP_ID} ${APP_STORAGE_LOCAL_FOLDER_PATH}
USER ${CNB_USER_ID}:${CNB_GROUP_ID}
ENTRYPOINT /cnb/lifecycle/launcher
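Since the base image comes in as a build argument, the extra layer can be built right after bootBuildImage, along these lines (the output tag app-with-storage is just a placeholder; point the compose file at whatever tag you choose):

./gradlew bootBuildImage --imageName=app:latest
docker build --build-arg base_image=app:latest -t app-with-storage:latest .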
Update 25.11.2020 - Spring Boot 2.4
Note that the above Dockerfile will result in this error:
ERROR: failed to launch: determine start command: when there is no default process a command is required
The reason is that the default entrypoint used by the Paketo builder changed. Changing the entrypoint from /cnb/lifecycle/launcher to the new one fixes it:
ENTRYPOINT /cnb/process/web
See also this question: ERROR: failed to launch: determine start command: when there is no default process a command is required

How to connect two Docker containers, one containing a Hazelcast in-memory data grid and one containing a WAR file

I have two Docker containers: one with a Hazelcast Java application (the core of the web application, a JAR package) and one with the REST service for the web application (a WAR package). I'm using docker-compose to build up the whole project in Docker, which looks like this:
version: "3"
services:
escomled_datagrid:
image: escomled/escomled_datagrid
build:
context: ./sh_scripts/escomled_data_grid
tomcat:
image: escomled/tomcat
build:
context: ./tomcat/app
ports:
- 8585:8080
depends_on:
- escomled_datagrid
links:
- escomled_datagrid:escomled_datagrid
I also have a Dockerfile for each container:
- escomled_datagrid:
FROM openjdk:8-jdk-alpine as build
WORKDIR /EscomledML
COPY ./. ./
COPY ./escomled.properties /home/escomled/escomled_server/config/escomled.properties
CMD ["sh","/EscomledML/escomled_data_grid.sh","start"]
EXPOSE 8085
- tomcat:
FROM tomcat:8.5-alpine
COPY ./sample.war /usr/local/tomcat/webapps/
COPY ./escomled-rest.war /usr/local/tomcat/webapps/
COPY ./escomled.properties /home/escomled/escomled_server/config/escomled.properties
RUN sh -c 'touch /usr/local/tomcat/webapps/sample.war'
RUN sh -c 'touch /usr/local/tomcat/webapps/escomled-rest.war'
EXPOSE 8080
The first container runs an sh script at runtime.
This way everything works fine; the containers start and stay active.
The only problem is that they don't see each other. The Hazelcast server starts and waits for a "member" to connect, and the WAR file (the Hazelcast member) also starts, but they don't "see" each other and won't connect. I put the "links" and "depends_on" keys into the docker-compose file, but that doesn't help.
The code for the project works fine when I start it locally: first I start the data grid server as a Java application, then I start the Tomcat containing the REST service, and the connection is established in no time.
So my question is: how do I link these two containers so they can see each other and work together?
Try putting the containers in the same network by specifying a bridge network:
version: "3"
services:
escomled_datagrid:
image: escomled/escomled_datagrid
build:
context: ./sh_scripts/escomled_data_grid
networks:
- networknamename
tomcat:
image: escomled/tomcat
build:
context: ./tomcat/app
ports:
- 8585:8080
depends_on:
- escomled_datagrid
links:
- escomled_datagrid:escomled_datagrid
networks:
- networknamename
networks:
networknamename:
driver: bridge
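Whether you add an explicit network or rely on the default one Compose already creates, it's worth confirming from the host that both containers are attached to it and that the service name resolves (the network name is prefixed with the Compose project name, shown here as a placeholder):

docker network ls
docker network inspect <project>_networknamename        # both containers should appear under "Containers"
docker-compose exec tomcat ping -c 1 escomled_datagrid   # the service name should resolve from the other container

Note also that inside a container localhost refers to that container itself, so the Hazelcast member's configured address needs to be the service name escomled_datagrid rather than localhost.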

Drone 0.8: build stuck in pending state

Installed Drone 0.8 on a virtual machine with the following Docker Compose file:
version: '2'
services:
  drone-server:
    image: drone/drone:0.8
    ports:
      - 8080:8000
      - 9000:9000
    volumes:
      - /var/lib/drone:/var/lib/drone/
    restart: always
    environment:
      - DATABASE_DRIVER=sqlite3
      - DATABASE_CONFIG=/var/lib/drone/drone.sqlite
      - DRONE_OPEN=true
      - DRONE_ORGS=my-github-org
      - DRONE_ADMIN=my-github-user
      - DRONE_HOST=${DRONE_HOST}
      - DRONE_GITHUB=true
      - DRONE_GITHUB_CLIENT=${DRONE_GITHUB_CLIENT}
      - DRONE_GITHUB_SECRET=${DRONE_GITHUB_SECRET}
      - DRONE_SECRET=${DRONE_SECRET}
      - GIN_MODE=release
  drone-agent:
    image: drone/agent:0.8
    restart: always
    depends_on: [ drone-server ]
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=drone-server:9000
      - DRONE_SECRET=${DRONE_SECRET}
All variable values are stored in an .env file and are correctly passed to the running containers. I am trying to run a build using a private GitHub repository. When pushing to the repository for the first time, the build starts and then fails.
Then, after clicking the Restart button, the build is stuck in a pending state.
Having the following containers running on the same machine:
root@ci:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
94e6a266e09d drone/agent:0.8 "/bin/drone-agent" 2 hours ago Up 2 hours root_drone-agent_1
7c7d9f93a532 drone/drone:0.8 "/bin/drone-server" 2 hours ago Up 2 hours 80/tcp, 443/tcp, 0.0.0.0:9000->9000/tcp, 0.0.0.0:8080->8000/tcp root_drone-server_1
Even with DRONE_DEBUG=true the only log entry in agent log is:
2017/09/10 15:11:54 pipeline: request next execution
So I think that for some reason my agent does not get the build from the queue. I noticed that the latest Drone versions use gRPC instead of WebSockets.
So how do I get the build started? What am I missing here?
The reason for the issue was a wrong .drone.yml file. Only the first red (failed) screen should be shown in that case; showing a pending state and a Restart button for incorrect YAML is a Drone issue.
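When tracking this kind of problem down, it can help to read the server's log as well as the agent's, since the agent here only reports that it is polling; with the container names from the docker ps output above:

docker logs root_drone-server_1
docker logs -f root_drone-agent_1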
