I'm unable to push a Docker image to a private repository (hosted on https://hub.docker.com) with the fabric8 plugin. I created a repository on Docker Hub called manuzid/heap-dump-sample. It's a simple Spring Boot app with only a loop in the main function. The interesting part of the pom.xml is the following:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>0.27.2</version>
<configuration>
<registry>index.docker.io/v1</registry>
<!-- I think this is not necessary; the plugin uses the creds from the Docker config.json -->
<authConfig>
<username>user</username>
<password>pw</password>
</authConfig>
<images>
<image>
<name>manuzid/heap-dump-sample:%l</name>
<alias>${project.artifactId}</alias>
<build>
<from>greyfoxit/alpine-openjdk8</from>
<entryPoint>
<exec>
<arg>java</arg>
<arg>-jar</arg>
<arg>/opt/application/${project.artifactId}-${project.version}.jar</arg>
<arg>-XX:+HeapDumpOnOutOfMemoryError</arg>
<arg>-XX:HeapDumpPath=/dumps/oom.hprof</arg>
</exec>
</entryPoint>
<tags>
<tag>${project.version}</tag>
</tags>
<assembly>
<targetDir>/opt/application</targetDir>
<descriptorRef>artifact</descriptorRef>
</assembly>
<env>
<AB_ENABLED>jmx_exporter</AB_ENABLED>
</env>
</build>
<run>
<wait>
<log>Started HeapDumpSampleApplication</log>
<time>10000</time>
</wait>
<env>
<JAVA_OPTIONS>-Xmx64m</JAVA_OPTIONS>
</env>
<log>
<file>${project.build.directory}/heap-dump-sample.log</file>
</log>
</run>
</image>
</images>
</configuration>
<executions>
<execution>
<id>docker-build</id>
<phase>package</phase>
<goals>
<goal>build</goal>
</goals>
<configuration>
<filter>${project.artifactId}</filter>
</configuration>
</execution>
<execution>
<id>docker-push</id>
<phase>install</phase>
<goals>
<goal>push</goal>
</goals>
<configuration>
<filter>${project.artifactId}</filter>
</configuration>
</execution>
</executions>
</plugin>
I get the following error in the console: [ERROR] DOCKER> Unable to push 'manuzid/heap-dump-sample:latest' from registry 'index.docker.io/v1' : denied: requested access to the resource is denied [denied: requested access to the resource is denied ]
But the specified credentials are the same ones I use to log into the website (https://hub.docker.com). The registry URL index.docker.io/v1 was obtained with the docker info command.
Any suggestions on this? Thanks in advance.
Edit: This example can be found here: https://github.com/ManuZiD/heap-dump-sample
I have had issues with both pulling and pushing images, and through research (which I cannot fully recall) I was able to resolve them by modifying my Docker credentials store, ~/.docker/config.json.
Note that executing docker login will create this file and also overwrite its contents (I suggest making a backup!).
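For example, a sketch assuming the default credentials file location, ~/.docker/config.json:
cp ~/.docker/config.json ~/.docker/config.json.bak  # keep a copy of the current credentials
docker login                                        # recreates/overwrites config.json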
The content of config.json should be something like:
{
"HttpHeaders": {
"User-Agent": "Docker-Client/19.03.12 (windows)"
},
"auths": {
"https://hub.docker.com/v1/": {
"auth": "AUTH-TOKEN"
},
"https://index.docker.io/v1/": {
"auth": "AUTH-TOKEN"
}
},
"credsStore": "desktop",
"stackOrchestrator": "swarm"
}
AUTH-TOKEN needs to contain base64{docker-user-id:docker-password} (use -n so the trailing newline does not end up in the encoded value):
echo -n "docker-user-id:docker-password" | base64
Note this can be decoded using
echo AUTH-TOKEN | base64 -d
Warning: never share the contents of your config.json file!
These are my Windows client credentials, as you will notice from the User-Agent details. macOS users may prefer to utilise the macOS keychain.
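As a quick sanity check outside of Maven (assuming the docker CLI is available), you can verify the credentials themselves; if this push is also denied, the problem lies with the login/registry rather than with the fabric8 plugin:
docker login
docker push manuzid/heap-dump-sample:latest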
I am building a self-hosted Chromium extension for Edge and Chrome. So far I have a nicely working CI pipeline using Maven with this plugin (https://github.com/bmatthews68/crx-maven-plugin), and I managed to automate the versioning, packaging and signing of the .crx file, as well as the upload to a Nexus repository, without much hassle (our intent was to point the upload URL to Nexus releases, using group policies to get the extension deployed to users).
But we have found that the plugin is a bit outdated and uses crx2 format for the extension packaging. Support for crx2 was dropped a while ago (chromium v75 or so), and current browser versions require crx3 or won't install the extension.
Seems like the only reliable way to package a crx3 extension right now is using the chrome executable itself, but it does not look like the best idea for a CI pipeline :-/
Any suggestion is welcome!
I finally found a way, though an indirect one. There is a crx3 NPM package that has been kept up to date with the CRX3 format: https://www.npmjs.com/package/crx3
Using the exec-maven-plugin to invoke NPM as detailed below, I've been able to package the crx file correctly (and this works both on local Windows workstations and on ALM Linux nodes):
<!-- Build crx file using NPM -->
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>3.0.0</version>
<executions>
<execution>
<phase>compile</phase>
<goals>
<goal>exec</goal>
</goals>
</execution>
</executions>
<configuration>
<executable>npm</executable>
<workingDirectory>${project.build.directory}</workingDirectory>
<commandlineArgs>install</commandlineArgs>
</configuration>
</plugin>
I used a package.json file for NPM with placeholders for version so I could keep on managing the version in the pom:
{
"name": "${project.artifactId}",
"version": "${project.version}",
"private": true,
"dependencies": {
"crx3": "^1.1.3"
},
"scripts": {
"install": "crx3 ${project.artifactId}-${project.version} --keyPath crx.pem --appVersion ${crx.version} --crxPath ${project.artifactId}-${project.version}.crx"
}
}
For the filtering to work correctly, I used the maven-resources-plugin in the pom as well:
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-resources-plugin</artifactId>
<version>3.2.0</version>
<executions>
<execution>
<id>copy-extension-resources</id>
<phase>generate-sources</phase>
<goals>
<goal>resources</goal>
</goals>
<configuration>
<outputDirectory>${project.build.directory}/${project.artifactId}-${project.version}</outputDirectory>
<resources>
<!-- Resource filtering to include version number in manifest.json and copy sources to a subfolder in /target -->
<resource>
<directory>src/main/chrome</directory>
<filtering>true</filtering>
<includes>
<include>**/manifest.json</include>
</includes>
</resource>
<resource>
<directory>src/main/chrome</directory>
<filtering>false</filtering>
<excludes>
<exclude>**/manifest.json</exclude>
</excludes>
</resource>
</resources>
</configuration>
</execution>
<execution>
<id>copy-external-resources</id>
<phase>generate-sources</phase>
<goals>
<goal>resources</goal>
</goals>
<configuration>
<outputDirectory>${project.build.directory}</outputDirectory>
<resources>
<!-- Resource filtering to include version number in update.xml and package.json and copy resources to /target folder -->
<resource>
<directory>src/main/resources</directory>
<filtering>true</filtering>
<includes>
<include>update.xml</include>
<include>package.json</include>
<include>package-lock.json</include>
</includes>
</resource>
<resource>
<filtering>false</filtering>
<directory>src/main/resources</directory>
<includes>
<include>crx.pem</include>
</includes>
</resource>
</resources>
</configuration>
</execution>
</executions>
</plugin>
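With the bindings above (generate-sources copies and filters the sources, compile runs npm install, which in turn runs the crx3 script), a regular build should leave the packaged extension in target/. A rough usage sketch:
mvn clean package
ls target/*.crx  # named ${project.artifactId}-${project.version}.crx per the crxPath above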
As you mentioned, CRX2 was deprecated in Chrome 75 two years ago; because of issues with CRX2, its support was completely removed in Chrome 78, so all extensions must move to the CRX3 format.
I'm not sure how you built it with Maven; maybe it was a script or something similar. In that case, you may need to adjust the script accordingly, or find reference documentation on CRX3 support for the tools you are using to build the extension.
Otherwise you have to package it in the CRX3 format yourself. Refer to this document.
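If you do fall back to the browser binary, a rough sketch (paths are placeholders, and "chrome" stands for your local Chrome/Chromium executable, e.g. google-chrome or chromium; recent versions emit CRX3 with these flags):
chrome --pack-extension=/path/to/extension-dir --pack-extension-key=/path/to/crx.pem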
I'm starting a Postgres container with the docker-maven-plugin and then using the DB to generate some artefacts in later steps.
<plugin>
…
<artifactId>docker-maven-plugin</artifactId>
…
<configuration>
<images>
<image>
...
<name>postgres:11</name>
...
</image>
</images>
</configuration>
<executions>
<execution>
<id>start-postgres-container</id>
<phase>generate-sources</phase>
<goals>
<goal>start</goal>
</goals>
</execution>
<execution>
<id>stop-postgres-container</id>
<phase>process-sources</phase>
<goals>
<goal>stop</goal>
</goals>
</execution>
</executions>
</plugin>
The issue I have is that when any operation between the start and stop above fails, Maven leaves the container running, and subsequent build attempts fail until the leftover container is stopped manually.
Is there a way/plugin in Maven to specify some "finally" action/phase, so that in case of a build failure one could still release resources that have already been reserved?
The goal can be achieved by applying a solution similar to what Testcontainers does with Ryuk: https://github.com/testcontainers/moby-ryuk
The image to be reaped should be labeled, e.g.:
<autoRemove>true</autoRemove>
<labels>
<killme>true</killme>
</labels>
<wait>
<time>3000</time>
</wait>
The wait above gives Ryuk some time to set up, as it must be started in parallel:
<image>
<alias>ryuk-summoned</alias>
<name>testcontainers/ryuk:0.3.0</name>
<run>
<ports>
<port>ryuk.port:8080</port>
</ports>
<volumes>
<bind>
<volume>/var/run/docker.sock:/var/run/docker.sock</volume>
</bind>
</volumes>
<autoRemove>true</autoRemove>
</run>
</image>
The challenge is that the Ryuk container must be fed a heartbeat at least once every 10 seconds over a TCP socket carrying the death note, e.g. like this: printf "label=killme" | nc localhost 8080.
This is easy to achieve with the maven-antrun-plugin below and a simple setup-ryuk.sh script calling the command mentioned above (a sketch of the script follows the plugin configuration). (NOTE: this sends only a single death note with no subsequent heartbeats, so Ryuk dies by itself within the next 10 seconds; at that point it reaps all items for which it received a note.)
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-antrun-plugin</artifactId>
<version>3.0.0</version>
<executions>
<execution>
<id>stop-postgres</id>
<phase>generate-sources</phase>
<goals>
<goal>run</goal>
</goals>
</execution>
</executions>
<configuration>
<target>
<exec executable="bash">
<arg value="setup-ryuk.sh"/>
<arg value="${ryuk.port}"/>
</exec>
</target>
</configuration>
</plugin>
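A minimal setup-ryuk.sh along those lines could look like this (a sketch of the single-death-note variant described above; the argument is the host port passed in by the antrun execution):
#!/usr/bin/env bash
# $1 is the host port mapped to Ryuk's 8080 (${ryuk.port} above)
printf "label=killme" | nc localhost "$1"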
To make this platform independent (the above works on Linux/macOS) and to keep Ryuk alive for a longer time, the best approach seems to be to write your own Maven plugin that sends the heartbeat messages over the TCP socket.
I tried passing the proxy settings in via <jvmArguments> just like you do with an install4j-generated installer:
<plugin>
<groupId>org.sonatype.install4j</groupId>
<artifactId>install4j-maven-plugin</artifactId>
<version>1.1.1</version>
<executions>
<execution>
<id>compile-installers</id>
<phase>package</phase>
<goals>
<goal>compile</goal>
</goals>
<configuration>
<jvmArguments>
<arg>-DproxySet=true</arg>
<arg>-Dhttps.proxyHost=...</arg>
<arg>-Dhttps.proxyPort=443</arg>
<arg>-DproxyAuth=true</arg>
<arg>-DproxyAuthUser=${...}</arg>
<arg>-DproxyPassword=${...}</arg>
</jvmArguments>
...
</configuration>
</execution>
</executions>
</plugin>
but that failed.
On a machine where proxy settings are injected via the IDE, the above works, even if I intentionally pass in a wrong password or even a nonexistent proxy server, so I guess I'm Doing It Wrong(tm).
Turns out it was a misconfiguration.
Lesson to take home: if you see "connection refused", "forbidden", or any other connection failure message, it might be the proxy or the target server talking; you can't tell which, and the install4j-maven-plugin output does not tell you.
It would be nice if a future install4j-maven-plugin version could output that information, but currently it does not.
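One way to narrow it down outside of the build (a sketch, assuming curl is available; proxy host, port, credentials and target are placeholders) is to send a request through the same proxy and see which side rejects it:
curl -v -x http://proxy.example.com:443 -U proxyuser:proxypass -I https://target.example.com/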
I am trying to set up my project's pom.xml and Maven's settings.xml to automate generating a Docker image and pushing it to my private AWS ECR Docker repository.
In my pom.xml, I added the dockerfile-maven-plugin and configured it as follows:
<plugin>
<groupId>com.spotify</groupId>
<artifactId>dockerfile-maven-plugin</artifactId>
<version>1.3.6</version>
<executions>
<execution>
<id>default</id>
<goals>
<goal>build</goal>
<goal>push</goal>
</goals>
</execution>
</executions>
<configuration>
<finalName>myproject/server</finalName>
<repository>137037344249.dkr.ecr.us-east-2.amazonaws.com/myproject/server</repository>
<tag>${docker.image.tag}</tag>
<serverId>ecs-docker</serverId>
<useMavenSettingsForAuth>true</useMavenSettingsForAuth>
<buildArgs>
<VERSION>${project.version}</VERSION>
<BUILD_NUMBER>${buildNumber}</BUILD_NUMBER>
<WAR_FILE>${project.build.finalName}.war</WAR_FILE>
</buildArgs>
</configuration>
</plugin>
Per the dockerfile-maven-plugin instructions, I need to configure authentication for my registry in Maven's settings.xml, but I don't know what username/password to provide. I doubt it's my AWS login user/pass.
<servers>
<server>
<id>ecs-docker</id>
<username>where_to_get_this</username>
<password>where_to_get_this</password>
</server>
</servers>
Also, any suggestions to automate this Docker image generation / pushing to my repo in a better way are welcome.
To build the Docker image and push it to AWS ECR with the Spotify dockerfile-maven-plugin, you should:
Install amazon-ecr-credential-helper
go get -u github.com/awslabs/amazon-ecr-credential-helper/ecr-login/cli/docker-credential-ecr-login
Move it to some folder that is already in the execution PATH:
mv ~/go/bin/docker-credential-ecr-login ~/bin/
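A quick way to confirm the helper is resolvable from the shell Docker will use (assuming a Unix-like shell):
command -v docker-credential-ecr-login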
Add a credHelpers section to the ~/.docker/config.json file for your Amazon ECR repository ID:
{
"credHelpers": {
"<ecr-id>.dkr.ecr.<aws-region>.amazonaws.com": "ecr-login"
},
//...
}
(on Windows, remove the line "credsStore": "wincred", if it exists, from this file)
Check that ~/.aws/config has your region
[default]
region = <aws-region>
and ~/.aws/credentials has your keys
[ecr-push-user]
aws_access_key_id = <id>
aws_secret_access_key = <secret>
(More info...)
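As an optional sanity check (assuming the AWS CLI is installed), you can request an ECR authorization token with that profile before wiring up Maven:
aws ecr get-authorization-token --profile ecr-push-user --region <aws-region>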
Add Spotify dockerfile-maven-plugin to your pom.xml:
<properties>
<docker.image.prefix>xxxxxxxxxxxx.dkr.ecr.rrrrrrr.amazonaws.com</docker.image.prefix>
<docker.image.name>${project.artifactId}</docker.image.name>
<docker.image.tag>${project.version}</docker.image.tag>
<docker.file>Dockerfile</docker.file>
</properties>
<build>
<finalName>service</finalName>
<plugins>
<!-- Docker image mastering -->
<plugin>
<groupId>com.spotify</groupId>
<artifactId>dockerfile-maven-plugin</artifactId>
<version>1.4.10</version>
<configuration>
<repository>${docker.image.prefix}/${docker.image.name}</repository>
<tag>${docker.image.tag}</tag>
<dockerfile>${docker.file}</dockerfile>
</configuration>
<executions>
<execution>
<id>default</id>
<phase>package</phase>
<goals>
<goal>build</goal>
<goal>push</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
Make sure that Dockerfile exists, for example:
FROM openjdk:11-jre-slim
VOLUME /tmp
WORKDIR /service
COPY target/service.jar service.jar
ENTRYPOINT exec java -server \
-Djava.security.egd=file:/dev/./urandom \
$JAVA_OPTS \
-jar service.jar
Build and push the image with one command:
mvn package
To log in to ECR, you must use the AWS command line to generate a docker login command, and then log your Docker daemon in with it. I don't think this use case is handled by any Docker Maven plugin.
What I do on my project is log the Docker daemon in before doing the push:
logstring=$(aws --profile my-aws-profile ecr get-login --registry-ids my-registry-id)
eval "$logstring"
This manual step is required in my case because we have a single AWS account secured with a hardware token that generates one-time codes. It is not a problem, though, since we only need to do it once a day (an ECR login lasts for 12 hours), and only on the days we deploy to ECR (as opposed to the ones where we only test locally).
So the solutions:
Login manually to ECR, so that your docker pushes work without needing to login from maven.
Add a login step that scripts the external login directly in your pom
Try AWS CodePipeline to build your code directly when you commit, and deploy to ECR (what I recommend if you are not otherwise restricted)
Have fun!
I did not configure anything in my Maven settings file.
I usually log in using the command below:
$(aws ecr get-login --no-include-email --region my-region)
Then I run the Maven commands (the Docker commands are embedded as part of the Maven goals) and it works fine.
For your reference, this is my pom file setup using the docker plugin:
<plugin>
<groupId>com.spotify</groupId>
<artifactId>docker-maven-plugin</artifactId>
<version>1.1.1</version>
<configuration>
<imageName>${docker.image.prefix}/${project.artifactId}:${project.version}</imageName>
<dockerDirectory>docker</dockerDirectory>
<!-- <serverId>docker-hub</serverId> -->
<registryUrl>https://${docker.image.prefix}</registryUrl>
<forceTags>true</forceTags>
<resources>
<resource>
<targetPath>/</targetPath>
<directory>${project.build.directory}</directory>
<include>${project.build.finalName}.jar</include>
</resource>
</resources>
</configuration>
<executions>
<execution>
<id>tag-image</id>
<phase>package</phase>
<goals>
<goal>build</goal>
</goals>
</execution>
<execution>
<id>push-image</id>
<phase>deploy</phase>
<goals>
<goal>push</goal>
</goals>
<configuration>
<imageName>${docker.image.prefix}/${project.artifactId}:${project.version}</imageName>
</configuration>
</execution>
</executions>
</plugin>
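With those bindings (build in the package phase, push in the deploy phase), the overall flow is roughly the following sketch (region and profile values are placeholders):
$(aws ecr get-login --no-include-email --region my-region)
mvn clean deploy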
I run some tests with the SoapUI plugin and want to integrate them with Maven/Jenkins. The tests are OK when I run them with the SoapUI tool, but I use the following pom file:
<plugin>
<groupId>com.github.redfish4ktc.soapui</groupId>
<artifactId>maven-soapui-extension-plugin</artifactId>
<version>4.6.4.2</version>
<executions>
<execution>
<id>SoapUITestOnDITx</id>
<phase>integration-test</phase>
<goals>
<goal>test-multi</goal>
</goals>
<configuration>
<projectFiles>
<scan>
<baseDirectory>src/test/ressources</baseDirectory>
<includes>
<include>*.xml</include>
</includes>
<excludes>
<exclude>*ToExclude*.xml</exclude>
</excludes>
</scan>
</projectFiles>
<outputFolder>target/soapui/</outputFolder>
<junitReport>true</junitReport>
<useOutputFolderPerProject>true</useOutputFolderPerProject>
<exportAll>true</exportAll>
<junitHtmlReport>false</junitHtmlReport>
<testFailIgnore>true</testFailIgnore>
<host>${soapui.host}</host>
<username>${soapui.username}</username>
<password>${soapui.password}</password>
</configuration>
</execution>
</executions>
</plugin>
When I run the tests with Maven, I get the following error:
ASSERTION FAILED -> XPathContains assertion failed for path [count(//initialInfos/item)>0] : RuntimeException: Trying XBeans path engine... Trying XQRL... Trying delegated path engine...
FAILED on count(//initialInfos/item)>0
whereas my response is good (at least one item is present in the response).
It seems that a dependency is missing, but which dependency is it?