I am trying to set up my project's pom.xml and Maven's settings.xml to automate building a Docker image and pushing it to my private AWS ECR repository.
In my pom.xml, I added the dockerfile-maven-plugin and configured it as follows:
<plugin>
<groupId>com.spotify</groupId>
<artifactId>dockerfile-maven-plugin</artifactId>
<version>1.3.6</version>
<executions>
<execution>
<id>default</id>
<goals>
<goal>build</goal>
<goal>push</goal>
</goals>
</execution>
</executions>
<configuration>
<finalName>myproject/server</finalName>
<repository>137037344249.dkr.ecr.us-east-2.amazonaws.com/myproject/server</repository>
<tag>${docker.image.tag}</tag>
<serverId>ecs-docker</serverId>
<useMavenSettingsForAuth>true</useMavenSettingsForAuth>
<buildArgs>
<VERSION>${project.version}</VERSION>
<BUILD_NUMBER>${buildNumber}</BUILD_NUMBER>
<WAR_FILE>${project.build.finalName}.war</WAR_FILE>
</buildArgs>
</configuration>
</plugin>
Per the instructions for dockerfile-maven-plugin, I need to add a server entry to settings.xml for ECR authentication, but I don't know what username / password I need to provide. I doubt it's my AWS login user/pass.
<servers>
<server>
<id>ecs-docker</id>
<username>where_to_get_this</username>
<password>where_to_get_this</password>
</server>
</servers>
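For reference, ECR does not use your AWS console password for Docker authentication: the Docker username is the literal string AWS and the password is a short-lived authorization token. A minimal sketch of fetching one, assuming AWS CLI v2 and local credentials allowed to call ecr:GetAuthorizationToken:
# Prints a temporary token (valid for roughly 12 hours); "AWS" is the username ECR expects
aws ecr get-login-password --region us-east-2
You could paste that token as the <password> of the ecs-docker server entry, but it expires, which is why the answers below use a credential helper or an explicit docker login instead.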
Also, any suggestions to automate this Docker image generation / pushing to my repo in a better way are welcome.
To build the Docker image and push it to AWS ECR with the Spotify dockerfile-maven-plugin, you should:
Install amazon-ecr-credential-helper
go get -u github.com/awslabs/amazon-ecr-credential-helper/ecr-login/cli/docker-credential-ecr-login
Move it to a folder that is already on your PATH:
mv ~/go/bin/docker-credential-ecr-login ~/bin/
Add a credHelpers section to the ~/.docker/config.json file for your Amazon ECR registry ID:
{
"credHelpers": {
"<ecr-id>.dkr.ecr.<aws-region>.amazonaws.com": "ecr-login"
},
//...
}
(On Windows, remove the line "credsStore": "wincred", if it exists, from this file.)
Check that ~/.aws/config has your region
[default]
region = <aws-region>
and ~/.aws/credentials has your keys
[ecr-push-user]
aws_access_key_id = <id>
aws_secret_access_key = <secret>
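If you keep the keys under a named profile such as ecr-push-user (as above) instead of [default], note that the credential helper resolves credentials through the standard AWS SDK chain, so you can point it at that profile with the AWS_PROFILE environment variable; a small sketch assuming the profile name above:
# Tell the ECR credential helper which AWS profile to read before building
export AWS_PROFILE=ecr-push-user
mvn package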
Add Spotify dockerfile-maven-plugin to your pom.xml:
<properties>
<docker.image.prefix>xxxxxxxxxxxx.dkr.ecr.rrrrrrr.amazonaws.com</docker.image.prefix>
<docker.image.name>${project.artifactId}</docker.image.name>
<docker.image.tag>${project.version}</docker.image.tag>
<docker.file>Dockerfile</docker.file>
</properties>
<build>
<finalName>service</finalName>
<plugins>
<!-- Docker image mastering -->
<plugin>
<groupId>com.spotify</groupId>
<artifactId>dockerfile-maven-plugin</artifactId>
<version>1.4.10</version>
<configuration>
<repository>${docker.image.prefix}/${docker.image.name}</repository>
<tag>${docker.image.tag}</tag>
<dockerfile>${docker.file}</dockerfile>
</configuration>
<executions>
<execution>
<id>default</id>
<phase>package</phase>
<goals>
<goal>build</goal>
<goal>push</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
Make sure that Dockerfile exists, for example:
FROM openjdk:11-jre-slim
VOLUME /tmp
WORKDIR /service
COPY target/service.jar service.jar
ENTRYPOINT exec java -server \
-Djava.security.egd=file:/dev/./urandom \
$JAVA_OPTS \
-jar service.jar
Build and push the image with one command:
mvn package
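If you want to build the image locally without pushing, the plugin's skip properties can be set on the command line; a sketch assuming the property name from the plugin's documentation:
# Build and tag the image but skip the push goal (handy for local testing)
mvn package -Ddockerfile.push.skip=true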
To log in to ECR, you must use the AWS command line to generate a docker login command and then log your Docker daemon in with it. I don't think this use case is handled by any Docker Maven plugin.
What I do on my project is log the Docker daemon in before doing the push:
logstring=`aws --profile my-aws-profile ecr get-login --registry-ids my-registry-id`
`$logstring`
This manual step is required in my case because we have a single AWS account secured with a hardware token that generates one-time-use codes. That's not a problem, since we only need to do it once a day (an ECR login lasts 12 hours), and only on the days we deploy to ECR (as opposed to those where we only test locally).
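Note that aws ecr get-login was removed in AWS CLI v2; a rough equivalent there (assuming the same profile and your registry host name) pipes the token straight into docker login:
# AWS CLI v2 replacement for `aws ecr get-login`
aws --profile my-aws-profile ecr get-login-password | docker login --username AWS --password-stdin <ecr-id>.dkr.ecr.<aws-region>.amazonaws.com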
So the solutions:
Log in to ECR manually, so that your Docker pushes work without needing to log in from Maven.
Add a login step that scripts the external login directly in your pom
Try AWS CodePipeline to build your code directly when you commit, and deploy to ECR (what I recommend if you are not otherwise restricted)
Have fun!
I did not configure anything in my maven settings file.
I usually log in using the command below:
$(aws ecr get-login --no-include-email --region my-region)
Then I run the Maven commands (the Docker commands are embedded as part of the Maven goals) and it works fine.
For your reference, this is my pom file setup using the docker plugin:
<plugin>
<groupId>com.spotify</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>1.1.1</version>
<configuration>
<imageName>${docker.image.prefix}/${project.artifactId}:${project.version}</imageName>
<dockerDirectory>docker</dockerDirectory>
<!-- <serverId>docker-hub</serverId> -->
<registryUrl>https://${docker.image.prefix}</registryUrl>
<forceTags>true</forceTags>
<resources>
<resource>
<targetPath>/</targetPath>
<directory>${project.build.directory}</directory>
<include>${project.build.finalName}.jar</include>
</resource>
</resources>
</configuration>
<executions>
<execution>
<id>tag-image</id>
<phase>package</phase>
<goals>
<goal>build</goal>
</goals>
</execution>
<execution>
<id>push-image</id>
<phase>deploy</phase>
<goals>
<goal>push</goal>
</goals>
<configuration>
<imageName>${docker.image.prefix}/${project.artifactId}:${project.version}</imageName>
</configuration>
</execution>
</executions>
</plugin>
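With the executions above, the flow would roughly be (assuming the Docker daemon is already logged in to the registry):
mvn clean package   # runs the tag-image execution: builds and tags the image
mvn clean deploy    # additionally runs the push-image execution bound to the deploy phase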
I'm trying to add OpenTelemetry automated instrumentation to our spring boot app but I can't get it working.
The app is deployed as a docker image and the image is created via the spring-boot-maven-plugin.
I'm following these instructions: https://github.com/paketo-buildpacks/opentelemetry
I've added an env section to the spring-boot-maven-plugin config in the pom.xml:
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>repackage</goal>
</goals>
</execution>
</executions>
<configuration>
<image>
<name>appname</name>
<env>
<BP_OPENTELEMETRY_ENABLED>true</BP_OPENTELEMETRY_ENABLED>
</env>
</image>
</configuration>
</plugin>
I'm not sure this is the correct way to enable it but I'm having a hard time determining whether that step is working or not.
The docker image is created and run with:
mvn clean spring-boot:build-image
docker compose -f app.yml up
I've added environment variables to the app.yml file (hostname replaced with XXXXXX):
services:
appname:
image: appname:latest
environment:
- SPRING_PROFILES_ACTIVE=docker-local
- BPE_OTEL_TRACES_EXPORTER=zipkin
- BPE_OTEL_EXPORTER_ZIPKIN_ENDPOINT=http://XXXXXX:9411/api/v2/spans
- BPE_OTEL_SERVICE_NAME=appname
- BPE_OTEL_JAVAAGENT_ENABLED=true
I don't think these environment variables are being set, however, because I don't see them when I run:
docker run --entrypoint launcher -it appname:latest bash -c set
I don't see any traces going to zipkin and I don't see anything in the logs.
Without Docker, everything works fine.
I tried to figure out whether I just need to use a more recent version of Spring Boot, but I couldn't find a way to determine that.
I couldn't find any examples of apps that have this working.
Edit to include working solution:
Martin Theiss's solution is correct. Here is the section of the pom.xml that does everything:
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>repackage</goal>
</goals>
</execution>
</executions>
<configuration>
<image>
<name>appname</name>
<buildpacks>
<buildpack>paketo-buildpacks/java</buildpack>
<buildpack>gcr.io/paketo-buildpacks/opentelemetry</buildpack>
</buildpacks>
<env>
<BP_OPENTELEMETRY_ENABLED>true</BP_OPENTELEMETRY_ENABLED>
<BPE_OTEL_JAVAAGENT_ENABLED>true</BPE_OTEL_JAVAAGENT_ENABLED>
<BPE_OTEL_TRACES_EXPORTER>zipkin</BPE_OTEL_TRACES_EXPORTER>
<BPE_OTEL_EXPORTER_ZIPKIN_ENDPOINT>http://zipkinhost:9411/api/v2/spans</BPE_OTEL_EXPORTER_ZIPKIN_ENDPOINT>
<BPE_OTEL_SERVICE_NAME>appname</BPE_OTEL_SERVICE_NAME>
</env>
</image>
</configuration>
</plugin>
Note that this is sending traces directly to Zipkin; eventually I'll be sending traces to an OpenTelemetry collector.
Note also that I was wrong to try to put the environment variables in the app.yml file; they should be put in the pom.xml as per above.
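To double-check that the buildpack actually baked those variables into the image, the launcher trick from the question can be reused; a small sketch assuming the image name above:
# The launcher applies the buildpack-provided environment, so OTEL_* variables should show up
docker run --rm --entrypoint launcher appname:latest bash -c 'env | grep -i otel'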
The OpenTelemetry buildpack is not contained in paketo-buildpacks/java; you have to specify it additionally.
<image>
<buildpacks>
<buildpack>paketo-buildpacks/java</buildpack>
<buildpack>gcr.io/paketo-buildpacks/opentelemetry</buildpack>
</buildpacks>
<env>
<BP_OPENTELEMETRY_ENABLED>true</BP_OPENTELEMETRY_ENABLED>
</env>
</image>
I want to use the Spring Boot Maven plugin's capability to build and publish an OCI image to a remote repository.
My Goal
I want to use the following plugin configuration:
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<image>
<name>${docker.image.prefix}/${project.artifactId}:${project.version}</name>
</image>
</configuration>
<executions>
<execution>
<goals>
<goal>build-image</goal>
</goals>
</execution>
</executions>
</plugin>
And now I want to pass the docker.publishRegistry variables by command line.
What I've tried so far
I've tried to pass the parameter with the -Ddocker.publishRegistry.username property but that didn't work.
When you take a look at the source code of the plugin, the Docker field has no @Parameter property assigned to it:
/**
 * Alias for {@link Image#publish} to support configuration via command-line property.
 */
@Parameter(property = "spring-boot.build-image.publish", readonly = true)
Boolean publish;
/**
 * Docker configuration options.
 * @since 2.4.0
 */
@Parameter
private Docker docker;
https://github.com/spring-projects/spring-boot/blob/82b90d57496ba85be316b9eb88a36d81f2cc9baa/spring-boot-project/spring-boot-tools/spring-boot-maven-plugin/src/main/java/org/springframework/boot/maven/BuildImageMojo.java#L159
So I guess it is not possible to define this parameter by command line or is it?
Current Workaround
Currently I'm defining the values as global Maven properties and reusing them in the docker section.
My pom.xml:
<properties>
<docker-registry>https://example.org</docker-registry>
<docker-registry-username>username</docker-registry-username>
<docker-registry-password>password</docker-registry-password>
</properties>
<!-- ... -->
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<image>
<name>${docker.image.prefix}/${project.artifactId}:${project.version}</name>
</image>
<docker>
<publishRegistry>
<username>${docker-registry-username}</username>
<password>${docker-registry-password}</password>
<url>${docker-registry}</url>
</publishRegistry>
</docker>
</configuration>
<executions>
<execution>
<goals>
<goal>build-image</goal>
</goals>
</execution>
</executions>
</plugin>
And I'm building with:
./mvnw -B \
-Dspring-boot.build-image.publish=true \
-Ddocker-registry-username="$USERNAME" \
-Ddocker-registry-password="$PASSWORD" \
-Ddocker-registry="$REGISTRY" \
clean deploy
I don't have the exact solution to your question ("passing publishRegistry parameters on the command line"), but if I may, I have another workaround that shields you from exposing your credentials in the pom.xml.
What I have done is put the parameters and credentials in a profile in my .m2/settings.xml, like this:
<profiles>
<profile>
<id>docker-io-credentials</id>
<properties>
<docker-reg>docker.io</docker-reg>
<docker-reg.user>your-user-name</docker-reg.user>
<docker-reg.pwd>your-token-or-passwd</docker-reg.pwd>
<docker-reg.url>${docker-reg}/library/${docker-reg.user}</docker-reg.url>
</properties>
</profile>
</profiles>
Then on the command line you can simply pass the profile's name to merge the credentials into the current build.
mvn clean install -Pdocker-io-credentials
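Note that the pom still has to reference these properties (for example ${docker-reg.user} and ${docker-reg.pwd}) inside <publishRegistry>, as in the workaround above. To confirm the profile's values are actually visible to the build, one option (a sketch, assuming a recent maven-help-plugin) is:
# Should print your-user-name if the settings.xml profile was merged in
mvn help:evaluate -Dexpression=docker-reg.user -q -DforceStdout -Pdocker-io-credentials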
You can define placeholders in the spring-boot-maven-plugin's configuration that refer to environment variables. This is slightly less complex, like so:
...
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<mainClass>main.Class</mainClass>
<image>
<name>registry-url/library/image-name:${project.version}</name>
</image>
<docker>
<publishRegistry>
<username>docker-user</username>
<password>${env.docker_registry_password}</password>
<url>https://registry-url/v1/</url>
<email>user@example.com</email>
</publishRegistry>
</docker>
</configuration>
...
See more on this topic here: https://www.baeldung.com/maven-env-variables
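With that configuration, a publish run might look like the following sketch (the variable and image names are just the placeholders from the snippet above):
# The plugin reads ${env.docker_registry_password} from the environment at build time
export docker_registry_password='your-token-or-passwd'
./mvnw spring-boot:build-image -Dspring-boot.build-image.publish=true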
Just to mention that the Spring team is aware and did not consider this a bug but rather a documentation issue: https://github.com/spring-projects/spring-boot/issues/31024#issuecomment-1127905504
Very similar to what @twobiers suggested in his workaround:
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<docker>
<publishRegistry>
<url>${docker.publishRegistry.url}</url>
<username>${docker.publishRegistry.username}</username>
<password>${docker.publishRegistry.password}</password>
</publishRegistry>
</docker>
</configuration>
and then I build (and publish to the GitHub Packages registry) my project with:
./mvnw spring-boot:build-image \
-Ddocker.publishRegistry.username=${{ github.actor }} \
-Ddocker.publishRegistry.password=${{ secrets.GITHUB_TOKEN }} \
-Ddocker.publishRegistry.url=ghcr.io \
-Dspring-boot.build-image.publish=true \
-Dspring-boot.build-image.imageName="ghcr.io/${{ github.repository }}:latest" \
-DskipTests
I'm unable to push a Docker image to a private repository (hosted on https://hub.docker.com) with the fabric8 plugin. I created a repository on Docker Hub called manuzid/heap-dump-sample. It's a simple Spring Boot app with only a loop in the main function. The interesting part of the pom.xml is the following:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>0.27.2</version>
<configuration>
<registry>index.docker.io/v1</registry>
<!-- I think it's not necessary; the plugin uses the creds from the Docker config.json -->
<authConfig>
<username>user</username>
<password>pw</password>
</authConfig>
<images>
<image>
<name>manuzid/heap-dump-sample:%l</name>
<alias>${project.artifactId}</alias>
<build>
<from>greyfoxit/alpine-openjdk8</from>
<entryPoint>
<exec>
<arg>java</arg>
<arg>-jar</arg>
<arg>/opt/application/${project.artifactId}-${project.version}.jar</arg>
<arg>-XX:+HeapDumpOnOutOfMemoryError</arg>
<arg>-XX:HeapDumpPath=/dumps/oom.hprof</arg>
</exec>
</entryPoint>
<tags>
<tag>${project.version}</tag>
</tags>
<assembly>
<targetDir>/opt/application</targetDir>
<descriptorRef>artifact</descriptorRef>
</assembly>
<env>
<AB_ENABLED>jmx_exporter</AB_ENABLED>
</env>
</build>
<run>
<wait>
<log>Started HeapDumpSampleApplication</log>
<time>10000</time>
</wait>
<env>
<JAVA_OPTIONS>-Xmx64m</JAVA_OPTIONS>
</env>
<log>
<file>${project.build.directory}/heap-dump-sample.log</file>
</log>
</run>
</image>
</images>
</configuration>
<executions>
<execution>
<id>docker-build</id>
<phase>package</phase>
<goals>
<goal>build</goal>
</goals>
<configuration>
<filter>${project.artifactId}</filter>
</configuration>
</execution>
<execution>
<id>docker-push</id>
<phase>install</phase>
<goals>
<goal>push</goal>
</goals>
<configuration>
<filter>${project.artifactId}</filter>
</configuration>
</execution>
</executions>
</plugin>
I get the following error in the console: [ERROR] DOCKER> Unable to push 'manuzid/heap-dump-sample:latest' from registry 'index.docker.io/v1' : denied: requested access to the resource is denied [denied: requested access to the resource is denied ]
But the specified credentials are the same ones I use to log into the website (https://hub.docker.com). The specified registry URL index.docker.io/v1 was obtained with the command docker info.
Any suggestions on this? Thanks in advance.
Edit: This example can be found here: https://github.com/ManuZiD/heap-dump-sample
I have had issues with both pulling and pushing images, and through research (which I cannot fully remember) I was able to resolve them by modifying my Docker credentials store, ~/.docker/config.json.
Note that executing docker login will create this file and also overwrite its contents (I suggest making a backup!).
The content of config.json should be something like:
{
"HttpHeaders": {
"User-Agent": "Docker-Client/19.03.12 (windows)"
},
"auths": {
"https://hub.docker.com/v1/": {
"auth": "AUTH-TOKEN"
},
"https://index.docker.io/v1/": {
"auth": "AUTH-TOKEN"
}
},
"credsStore": "desktop",
"stackOrchestrator": "swarm"
}
AUTH-TOKEN needs to contain base64{docker-user-id:docker-password}; use -n so no trailing newline ends up in the token:
echo -n "docker-user-id:docker-password" | base64
Note this can be decoded using
echo AUTH-TOKEN | base64 -d
Warning Never share the contents of your config.json file!
This is my Windows client's configuration, as you will notice from the User-Agent details. macOS users may prefer to utilise the macOS keychain.
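As an alternative to hard-coding <authConfig> in the pom, doing a docker login beforehand stores the credentials where the fabric8 plugin can pick them up from ~/.docker/config.json; a sketch assuming Docker Hub and the setup from the question:
# Writes/updates ~/.docker/config.json (or the configured credsStore)
docker login -u docker-user-id
mvn clean install   # the docker-push execution is bound to the install phase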
I am trying to use Fabric8 to build an image for my Java application. However, I am new and this could be a duplicate question.
I have Docker installed and the fabric8 plugin added via Maven.
Below is my initial setup for the fabric8 Maven plugin.
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>build</goal>
</goals>
<configuration>
<dockerHost>/var/run/docker.sock</dockerHost>
<images>
<image>
<alias>${project.artifactId}</alias>
<name></name>
<build>
<from>java:8</from>
<maintainer>${project.maintainer}</maintainer>
<dockerFile>${project.basedir}/Dockerfile</dockerFile>
<dockerHost>/var/run/docker.sock</dockerHost>
<ports>
<port>8080</port>
<port>8081</port>
</ports>
</build>
</image>
</images>
</configuration>
</execution>
</executions>
</plugin>
Below is the error message I am getting.
Unable to parse configuration of mojo io.fabric8:docker-maven-plugin:0.30.0:build for parameter dockerHost: Cannot find 'dockerHost' in class io.fabric8.maven.docker.config.BuildImageConfiguration
Try to remove the dockerHost element from the image build configuration. There is no such option for the build configuration.
dockerHost specifies the connection to the Docker host, i.e. the machine where the image is going to be built and eventually run. This option is not actually needed unless the plugin cannot determine it by itself. The discovery sequence is detailed in the Global Configuration section of the docs.
If you build with Maven on the machine where the Docker daemon runs, you normally don't need this configuration. The plugin will connect to the Unix socket /var/run/docker.sock, which is the default URL of the Docker daemon.
If the requirement is to build and run the image on a remote host, then you either specify the dockerHost option (in the plugin's top-level configuration, not inside the image's <build> section) or set the DOCKER_HOST environment variable. On that host, the Docker daemon must be configured for remote access.
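For the remote case, a minimal sketch (the host name is hypothetical) is to export DOCKER_HOST before running the build instead of putting dockerHost inside the <build> section:
# Point the plugin at a remote Docker daemon that allows remote API access
export DOCKER_HOST=tcp://remote-docker-host:2375
mvn package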
I hope this helps.
This might be a duplicate, because I can't imagine that we're the first to encounter this, but I can't seem to find it.
So, we are deploying WAR files to a Tomcat 8.5 server with GitLab CI using Maven. The issue is that Tomcat mixes up the versions now that we've moved from 0.2.9 to 0.2.10. Apparently the server orders the deployed WARs alphabetically, so 0.2.10 lies between 0.2.1 and 0.2.2, and the running version is still 0.2.9 even though 0.2.10 was correctly deployed to the server.
Full webapp name looks like: WebappName##0.2.10-SNAPSHOT_201901010000.war
We thought about renaming our versions to 0.2.009 and 0.2.010, but that seems like a rather dirty workaround. Of course older versions will be deleted from time to time, so it's not a permanent problem, but it's somewhat annoying, and any hints on how to solve this would be great.
From the pom.xml
<version>0.2.10-SNAPSHOT</version>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.install.skip>true</maven.install.skip>
<timestamp>${maven.build.timestamp}</timestamp>
<maven.build.timestamp.format>yyyyMMddHHmm</maven.build.timestamp.format>
</properties>
[..]
<profile>
<id>deploy-stage</id>
<activation>
<activeByDefault>false</activeByDefault>
</activation>
<properties>
<war.name>WebappName##${project.version}_${timestamp}</war.name>
<tomcat.url>http://[..]/manager/text</tomcat.url>
<tomcat.server>[..]</tomcat.server>
<tomcat.webpath>/${war.name}</tomcat.webpath>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-war-plugin</artifactId>
<version>2.3</version>
<executions>
<execution>
<id>default-war</id>
<goals>
<goal>manifest</goal>
<goal>war</goal>
</goals>
<phase>package</phase>
</execution>
</executions>
<configuration>
<warName>${war.name}</warName>
<failOnMissingWebXml>true</failOnMissingWebXml>
<archive>
<addMavenDescriptor>true</addMavenDescriptor>
<forced>true</forced>
<manifest>
<addClasspath>true</addClasspath>
<packageName>true</packageName>
<useUniqueVersions>true</useUniqueVersions>
<addDefaultImplementationEntries>true</addDefaultImplementationEntries>
</manifest>
<manifestEntries>
<Build-Time>${maven.build.timestamp}</Build-Time>
<Archetype>${archetypeArtifactId}</Archetype>
<Archetype-Version>${archetypeVersion}</Archetype-Version>
</manifestEntries>
</archive>
<webResources>
<resource>
<filtering>true</filtering>
<directory>src/main/webapp</directory>
<includes>
<include>**/web.xml</include>
</includes>
</resource>
</webResources>
<warSourceDirectory>src/main/webapp</warSourceDirectory>
<webXml>src/main/webapp/WEB-INF/web.xml</webXml>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.tomcat.maven</groupId>
<artifactId>tomcat7-maven-plugin</artifactId>
<version>2.2</version>
<configuration>
<warFile>${project.build.directory}/${war.name}.war</warFile>
<url>${tomcat.url}</url>
<server>${tomcat.server}</server>
<path>${tomcat.webpath}</path>
</configuration>
<executions>
<execution>
<phase>deploy</phase>
<goals>
<goal>deploy</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
From gitlab-ci.yml
variables:
MAVEN_OPTS: "-Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN -Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true"
MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end --show-version -DinstallAtEnd=true -DdeployAtEnd=true"
# Cache downloaded dependencies and plugins between builds.
cache:
paths:
- /root/.m2/repository/
stages:
- build
- deploy
# Run deploy
deploy:staging:
stage: deploy
script:
- 'mvn $MAVEN_CLI_OPTS -Dsonar.branch=$CI_COMMIT_REF_NAME deploy -am -P deploy-stage'
only:
- staging
image: maven:3.3.9-jdk-8
As the Apache Tomcat documentation says:
String comparisons are used to determine version order.
This is simply not the same as the comparison of Maven artifact versions. A version of 2.0.2 is always larger by string comparison than 2.0.10 or even 2.0.15000, etc.
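You can see the effect with a plain lexicographic sort, which is essentially what Tomcat does with the version suffix:
# String (lexicographic) order: 0.2.10 sorts before 0.2.2 and 0.2.9
printf '0.2.1\n0.2.2\n0.2.9\n0.2.10\n' | sort
# output: 0.2.1  0.2.10  0.2.2  0.2.9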
I guess you have something like this in your pom.xml:
<properties>
<buildTimestamp>${maven.build.timestamp}</buildTimestamp>
<maven.build.timestamp.format>yyyyMMddHHmm</maven.build.timestamp.format>
</properties>
<build>
<finalName>${project.artifactId}##${project.version}_${maven.build.timestamp}</finalName>
</build>
You can change that to:
<finalName>${project.artifactId}##${maven.build.timestamp}_${project.version}</finalName>
which yields a file name like WebappName##201901010000_0.2.10-SNAPSHOT.war.
This way the most current build by timestamp will be deployed as the currently active application version.
Alternatively, you can keep your version scheme for the .war file name and instead have your app deployed using a versioned file name for its context XML file:
apache-tomcat/conf/Catalina/localhost/WebappName##201901010000.xml
with the content:
<Context docBase="/path/to/WebappName##0.2.10-SNAPSHOT_201901010000.war" path="/WebappName"/>
In Apache Tomcat Manager this will show up as version 201901010000 in the application version column. Again, the most current build by timestamp will be deployed as the currently active application version, independent of the Maven artifact version, since the deployment version string is taken from the .xml file name instead of the .war file name.