I've successfully deployed my app to Elastic Beanstalk using
$ mvn beanstalk:upload-source-bundle beanstalk:create-application-version beanstalk:update-environment
I'm able to watch in the AWS console that the environment is updated correctly. When I try to chain beanstalk:wait-for-environment, Maven polls every 90 seconds and never detects that the environment is Ready.
One thing I noticed is that it's waiting for 'environment null to get into Ready', and it's looking for the environment to have a specific domain **.elasticbeanstalk.com. I don't know how to change that or disable that check.
$ mvn beanstalk:upload-source-bundle beanstalk:create-application-version beanstalk:update-environment beanstalk:wait-for-environment
...
[INFO] Will wait until Thu Aug 22 10:59:37 PDT 2013 for environment null to get into Ready
[INFO] ... as well as having domain ********.elasticbeanstalk.com
[INFO] Sleeping for 90 seconds
My plugin config in pom.xml looks like this (company-confidential names hidden):
<plugin>
  <groupId>br.com.ingenieux</groupId>
  <artifactId>beanstalk-maven-plugin</artifactId>
  <version>1.0.1</version>
  <configuration>
    <applicationName>********-web-testing</applicationName>
    <s3Key>********-2.0-${BUILDNUMBER}.war</s3Key>
    <s3Bucket>********-web-deployments</s3Bucket>
    <artifactFile>target/********-2.0-SNAPSHOT-${BUILDNUMBER}.war</artifactFile>
    <environmentName>********-web-testing-fe</environmentName>
  </configuration>
</plugin>
Does anyone have insight into using beanstalk:wait-for-environment to wait until the environment has been updated?
wait-for-environment is really only needed when you want a build pipeline with zero downtime (using CNAME swapping). It all boils down to cnamePrefix currently. You can safely ignore this warning if you're not concerned about downtime (which usually isn't a concern for testing environments).
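If you do want wait-for-environment to pass, both the "environment null" message and the **.elasticbeanstalk.com domain check key off the CNAME prefix. A minimal sketch of wiring that in, assuming the plugin exposes a cnamePrefix configuration parameter (the names below are placeholders, not the asker's real ones):

```xml
<plugin>
  <groupId>br.com.ingenieux</groupId>
  <artifactId>beanstalk-maven-plugin</artifactId>
  <version>1.0.1</version>
  <configuration>
    <applicationName>my-app-web-testing</applicationName>
    <environmentName>my-app-web-testing-fe</environmentName>
    <!-- assumed parameter: the prefix the wait goal matches against
         <cnamePrefix>.elasticbeanstalk.com -->
    <cnamePrefix>my-app-web-testing</cnamePrefix>
  </configuration>
</plugin>
```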
Actually, the best way is to use fast-deploy. Use the archetype as a starting point:
$ mvn archetype:generate -Dfilter=elasticbeanstalk
This problem seems to be all about my permissions in GCP, but my attempts to set the right permissions have so far not worked.
I'm using com.google.cloud.tools:jib-maven-plugin to package a Spring Boot project into a container and push it to Google Artifact Registry (GAR). It works just fine when I run it locally, but it fails when I run the maven build with Google Cloud Build. It says it fails because artifactregistry.repositories.downloadArtifacts permission is missing.
But this is one of the permissions enabled by default, according to Google's docs.
My target is a Google Artifact Registry (Docker format). If I change the target to a Google Container Registry (deprecated, which is why I need to move to GAR), it works fine under Cloud Build, with no permission problems. The build also downloads jar files from a Maven repository stored in a different GAR in the same Google project, so clearly it is okay with permissions for that Maven GAR.
I verified that running the Maven build locally works, including writing to the GAR, which rules out a problem in the jib plugin configuration or in the GAR configuration. This is using my own user credentials.
What have I tried?
I added the appropriate roles to the default service account Cloud Build uses (though they are supposed to be there anyway). The downloadArtifacts permission is included in role Artifact Registry Reader, so I added that role as well as Artifact Registry Writer.
I switched to a different Service Account (build service account, let's call it BSA) and, yes, made sure that had the same appropriate roles (see above).
I added the BSA as a principal to the target GAR and gave it the appropriate roles there as well (getting desperate).
My user credentials include the Owner role, so I added the Owner role to the BSA (not something I want to keep).
All of these gave me the same permission-denied error. Just in case I had really misunderstood something, I added a step to my Cloud Build yaml that runs gcloud info, and verified that, yes, it is using the BSA I have configured with the roles I need.
Is there something I missed?
Thanks
Edit:
More info: most of my builds use jib, but one uses the Spotify plugin to create a local Docker image and then uses docker to push to the registry.
And this works! So the problem is specific to jib. Somehow, under Cloud Build, jib is not seeing the creds, though it does see them locally.
Edit:
The actual error message:
Failed to execute goal com.google.cloud.tools:jib-maven-plugin:1.6.1:build (build-and-push-docker-image) on project knifethrower: Build image failed, perhaps you should make sure you have permissions for australia-southeast1-docker.pkg.dev/redacted/bonanza-platform/knifethrower and set correct credentials. See https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#what-should-i-do-when-the-registry-responds-with-forbidden-or-denied for help: Unauthorized for australia-southeast1-docker.pkg.dev/redacted/bonanza-platform/knifethrower: 403 Forbidden
[ERROR] {"errors":[{"code":"DENIED","message":"Permission \"artifactregistry.repositories.downloadArtifacts\" denied on resource \"projects/redacted/locations/australia-southeast1/repositories/bonanza-platform\" (or it may not exist)"}]}
[ERROR] -> [Help 1]
Also note I'm using the latest version of jib: 3.1.4
Edit:
I've tried a couple more things. I added an earlier step, before the maven build, which does a gcloud auth configure-docker --quiet --verbosity=debug australia-southeast1-docker.pkg.dev. That creates a /builder/home/.docker/config.json file. Because there seems to be confusion about where that file should really live I copied it to /root/.docker. But that did not help.
The second thing I tried was using the $DOCKER_CONFIG to point to the /builder/home/.docker directory (as suggested here) and that did not help either.
Same error in both cases. I do get a message from the gcloud auth configure-docker...
WARNING: `docker` not in system PATH.
`docker` and `docker-credential-gcloud` need to be in the same PATH in order to work correctly together.
gcloud's Docker credential helper can be configured but it will not work until this is corrected.
Adding credentials for: australia-southeast1-docker.pkg.dev
Docker configuration file updated.
INFO: Display format: "default"
I think this is trying to be helpful and tell me it cannot find docker installed (which is true), but it created the creds anyway (also true). And it should not matter, because jib doesn't rely on docker itself, just the creds. However, it still doesn't work.
Part of the problem (and thanks again to @ChanseokOh for flagging this) is that I was still using jib 1.6.1 when I thought I was using 3.1.4. Changing that, plus some other things, fixed the problem. So here's the full story for the next person who struggles with this:
First, this is what my pom file has:
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.1.4</version>
  <configuration>
    <from>
      <image>${base.image}</image>
    </from>
    <to>
      <image>${docker.image.repo}/${project.artifactId}:latest</image>
      <tags>
        <tag>${VERSION_ID}</tag>
        <tag>latest</tag>
      </tags>
    </to>
    <creationTime>USE_CURRENT_TIMESTAMP</creationTime>
    <allowInsecureRegistries>true</allowInsecureRegistries>
    <container>
      <ports>
        <port>8080</port>
      </ports>
    </container>
  </configuration>
  <executions>
    <execution>
      <id>build-and-push-docker-image</id>
      <phase>package</phase>
      <goals>
        <goal>build</goal>
      </goals>
    </execution>
  </executions>
</plugin>
The version is important. Although the old version (1.6.1) worked just fine locally, it did not work on Cloud Build.
My Cloud Build file looks like this:
...
- name: 'gcr.io/cloud-builders/gcloud'
  args:
    - '-c'
    - >
      gcloud auth configure-docker --quiet --verbosity=debug `echo
      ${_CONTAINER_REPO} | cut -d / -f 1`
    - /root
  id: gcloud auth
  entrypoint: /bin/bash
...
- name: 'gcr.io/cloud-builders/mvn:3.5.0-jdk-8'
  args:
    - '-Dmaven.test.skip=false'
    - '-Dmaven.repo.local=/workspace/.m2/repository'
    - '--settings'
    - custom-settings.xml
    - clean
    - install
    - '-DskipITs'
    - '-B'
    - '-X'
    - '-DVERSION_ID=$TAG_NAME'
    - '-DBRANCH_ID=master'
    - '-DPROJECT_ID=$PROJECT_ID'
    - '-DCONTAINER_REPO=${_CONTAINER_REPO}'
    - '-DMAVEN_REPO=${_MAVEN_REPO}'
    - '-DDOCKER_CONFIG=/builder/home/.docker'
    - '-P'
    - release
  id: build
The gcloud auth step gets the docker credentials file created.
The next step is the Maven build, and for that the trick is to define DOCKER_CONFIG pointing to the correct location of the Docker creds file. I believe creating the Docker creds file, defining DOCKER_CONFIG, and getting the version number right are all required for the solution.
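An alternative worth noting: rather than exporting DOCKER_CONFIG so Jib finds the credentials file, recent Jib versions let you name a credential helper directly in the plugin configuration. A hedged sketch (the image path is illustrative, and this assumes docker-credential-gcloud is on the builder's PATH):

```xml
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.1.4</version>
  <configuration>
    <to>
      <image>australia-southeast1-docker.pkg.dev/my-project/my-repo/my-image</image>
      <!-- Jib shells out to docker-credential-gcloud for this registry,
           bypassing the Docker config file lookup entirely -->
      <credHelper>gcloud</credHelper>
    </to>
  </configuration>
</plugin>
```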
An interesting aside: gcr.io/cloud-builders/gcloud and gcr.io/cloud-builders/mvn:3.5.0-jdk-8 still reference Google Container Registry, which is superseded by Artifact Registry and is the whole reason I got into this, but I have not seen any updated references for these images. Docs are here.
Adding another answer for the sake of completeness:
In the basic case, Jib may work out of the box on Google Cloud Build (GCB) without manually configuring credentials for Jib when pushing to your Google Container Registry (GCR) (mostly when pushing to GCR in the same GCP project). This is because Jib may automatically pick up the Application Default Credentials (ADC) that come from the environment where GCB runs. However, I am not sure if the ADC on GCB also works for Artifact Registry out of the box.
Of course, you can always configure registry credentials for Jib in many different ways.
I use the Alfresco SDK with the following command:
mvn install -Ddependency.surf.version=6.3 -Prun
All is fine, except when it gets stuck at this step of Building Alfresco Share WAR Aggregator:
[INFO] --- maven-war-plugin:2.6:war (default-war) @ share ---
[INFO] Packaging webapp
[INFO] Assembling webapp [share] in [/home/nico/aegif/projects/60_townpage/townpage-filing/townpage-filing/share/target/share-1.0-SNAPSHOT]
[INFO] Copying manifest...
[INFO] Processing war project
[INFO] Processing overlay [ id org.alfresco:share]
In such cases I just perform a clean and the problem is solved, but that takes time.
Is there anything I can do to avoid it getting stuck?
alfresco.version is 5.1.g
Ubuntu 16.10
Given the parameters you are using, I assume you are on Alfresco SDK 2.2 and trying to use a more recent version of Alfresco (5.1.f or newer) on an All-In-One (AIO) project.
Using Alfresco SDK AIO projects always adds some overhead during restarts, because the SDK is actually building your modules, fetching the wars, fetching additional referenced modules, and applying the modules to the wars (as in unzipping the war and unzipping the amps into the same folder before re-packaging the wars); then it starts up an embedded Tomcat with some special config from the runner project with the new wars. A complicated approach, if you ask me, and it is definitely expected to cost a considerable amount of time and performance (especially on disk IO), especially when you clean before you rebuild.
Back to your question: the step you are hanging on is when the SDK is trying to unzip the OOTB Share war prior to applying amps to it, and there are a lot of reasons why things could go south there. Unless you provide some more detailed output (as in adding -X or -e to your mvn command), I doubt anyone would be able to catch precisely what is going wrong.
Be careful with running your project without cleaning, as you might end up with some residual files that give you different behaviour from the one to be expected from the final artifacts... I can imagine at least a couple of such scenarios!
Alternatively, may I suggest that you switch from the AIO approach to separate projects for Repo and Share? You can install multiple Tomcats on your machine: say, a Tomcat for the repo on port 8080 and a Tomcat for Share on 8081. Then you can develop on one tier while a Tomcat service provides the other (stop the Share Tomcat service, and start up a Share amp from the SDK pointing to the local Alfresco Repo service on the other locally installed Tomcat). That way you can always rapidly clean and run, with this command for running Share:
mvn clean install -PampToWar -Dmaven.tomcat.port=8081 -Ddependency.surf.version=6.3
On Red Hat OpenShift I have created a Jenkins build server. To get it working for my custom build I've changed config.xml to have numExecutors=1. Then I added a Maven GWT project to build. The build fails because it cannot create the user preferences directory. See this log snippet.
[INFO] --- gwt-maven-plugin:2.5.0-rc2:compile (default) @ web ---
[INFO] auto discovered modules [de.hpfsc.parent]
[ERROR] Apr 09, 2014 2:06:42 AM java.util.prefs.FileSystemPreferences$1 run
[ERROR] WARNING: Couldn't create user preferences directory. User preferences are unusable.
[ERROR] Apr 09, 2014 2:06:42 AM java.util.prefs.FileSystemPreferences$1 run
[ERROR] WARNING: java.io.IOException: No such file or directory
To test that simple Maven builds work correctly, I created a new Jenkins project for https://github.com/abroer/jsltSpringLocaleProblem.git. This Maven project works correctly and is built without errors.
The problem can be recreated by building the https://github.com/steinsag/gwt-maven-example.git project.
Here's the job configuration:
<project>
  <actions/>
  <description></description>
  <keepDependencies>false</keepDependencies>
  <properties>
    <hudson.plugins.openshift.OpenShiftApplicationUUIDJobProperty plugin="openshift@1.4">
      <applicationUUID></applicationUUID>
    </hudson.plugins.openshift.OpenShiftApplicationUUIDJobProperty>
    <hudson.plugins.openshift.OpenShiftBuilderSizeJobProperty plugin="openshift@1.4">
      <builderSize>small</builderSize>
    </hudson.plugins.openshift.OpenShiftBuilderSizeJobProperty>
    <hudson.plugins.openshift.OpenShiftBuilderTimeoutJobProperty plugin="openshift@1.4">
      <builderTimeout>300000</builderTimeout>
    </hudson.plugins.openshift.OpenShiftBuilderTimeoutJobProperty>
    <hudson.plugins.openshift.OpenShiftBuilderTypeJobProperty plugin="openshift@1.4">
      <builderType></builderType>
    </hudson.plugins.openshift.OpenShiftBuilderTypeJobProperty>
  </properties>
  <scm class="hudson.plugins.git.GitSCM" plugin="git@1.1.12">
    <configVersion>2</configVersion>
    <userRemoteConfigs>
      <hudson.plugins.git.UserRemoteConfig>
        <name>origin</name>
        <refspec>+refs/heads/*:refs/remotes/origin/*</refspec>
        <url>https://github.com/steinsag/gwt-maven-example.git</url>
      </hudson.plugins.git.UserRemoteConfig>
    </userRemoteConfigs>
    <branches>
      <hudson.plugins.git.BranchSpec>
        <name>**</name>
      </hudson.plugins.git.BranchSpec>
    </branches>
    <recursiveSubmodules>false</recursiveSubmodules>
    <doGenerateSubmoduleConfigurations>false</doGenerateSubmoduleConfigurations>
    <authorOrCommitter>false</authorOrCommitter>
    <clean>false</clean>
    <wipeOutWorkspace>false</wipeOutWorkspace>
    <pruneBranches>false</pruneBranches>
    <remotePoll>false</remotePoll>
    <buildChooser class="hudson.plugins.git.util.DefaultBuildChooser"/>
    <gitTool>Default</gitTool>
    <submoduleCfg class="list"/>
    <relativeTargetDir></relativeTargetDir>
    <excludedRegions></excludedRegions>
    <excludedUsers></excludedUsers>
    <gitConfigName></gitConfigName>
    <gitConfigEmail></gitConfigEmail>
    <skipTag>false</skipTag>
    <scmName></scmName>
  </scm>
  <canRoam>true</canRoam>
  <disabled>false</disabled>
  <blockBuildWhenDownstreamBuilding>false</blockBuildWhenDownstreamBuilding>
  <blockBuildWhenUpstreamBuilding>false</blockBuildWhenUpstreamBuilding>
  <triggers class="vector"/>
  <concurrentBuild>false</concurrentBuild>
  <builders>
    <hudson.tasks.Maven>
      <targets>clean package</targets>
      <mavenName>(Default)</mavenName>
      <usePrivateRepository>false</usePrivateRepository>
      <settings class="jenkins.mvn.DefaultSettingsProvider"/>
      <globalSettings class="jenkins.mvn.DefaultGlobalSettingsProvider"/>
    </hudson.tasks.Maven>
  </builders>
  <publishers/>
  <buildWrappers/>
</project>
AFAICT, these messages shouldn't fail the compilation.
GWT uses preferences in only one place: update checks, to store when it last checked for updates. This is done asynchronously in a dedicated thread and shouldn't fail the build.
GWT supports disabling update checks by passing -XdisableUpdateCheck to the compiler, but the gwt-maven-plugin doesn't let you do that. Try using the exec-maven-plugin to call the GWT compiler and pass -XdisableUpdateCheck as an argument, to see if that fixes your problem.
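A sketch of that workaround, assuming the GWT compiler is on the project's classpath; the module name is taken from the log above, and the version, phase, and remaining compiler arguments are illustrative only:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.2.1</version>
  <executions>
    <execution>
      <phase>prepare-package</phase>
      <goals>
        <goal>java</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <mainClass>com.google.gwt.dev.Compiler</mainClass>
    <arguments>
      <!-- skip the update check that touches the preferences directory -->
      <argument>-XdisableUpdateCheck</argument>
      <!-- the GWT module to compile -->
      <argument>de.hpfsc.parent</argument>
    </arguments>
  </configuration>
</plugin>
```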
I have a problem getting Maven to release to a Nexus server. Seemingly, it refuses to use my provided username and password (but there might be other problems as well).
When I first type 'mvn release:perform', I get a 'not authorized' error. However, some files are created on the Nexus, namely a pom with checksums etc. When I try a second time (without changing anything), I get a different error: '400 bad request'.
When I delete the files and try again, I get the first error once again.
I have run this with the -X flag to see if I can make any sense of what is happening, and I have discovered that the first time I run the command, Maven omits the username and password provided in settings.xml:
[INFO] [DEBUG] Using connector WagonRepositoryConnector with priority 0 for http://nexus.example.com/content/repositories/releases
When I run it the second time, it includes my credentials:
[INFO] [DEBUG] Using connector WagonRepositoryConnector with priority 0 for http://nexus.example.com/content/repositories/releases/ as developers
Notice it says 'as developers'.
Of course, I don't know whether the fact that it prints this differently actually means anything, but it seems that way.
When I allow redeploy for the releases repository in Nexus, I always get the first variant (not authorized).
If anyone can tell me how I might force Maven to use my credentials (if that is indeed what it is not doing), or what else might be wrong, I would be very happy.
I have got it working now, by configuring the Maven release plugin to run only the deploy goal, instead of the default deploy plus site deploy.
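For reference, that change amounts to overriding the goals that release:perform runs; if I recall correctly the default is "deploy site-deploy", so dropping the site goal looks roughly like this sketch:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-release-plugin</artifactId>
  <configuration>
    <!-- run only deploy during release:perform; the default also runs
         the site deployment, which the http wagon cannot handle -->
    <goals>deploy</goals>
  </configuration>
</plugin>
```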
mvn site:deploy fails with the error: Wagon protocol 'http' does not support directory copying.
Of course, my original error message did not refer very much to site at all.
Way to produce useful error messages, Maven!
I found a way to force preemptive authentication here: http://maven.apache.org/guides/mini/guide-http-settings.html (it didn't solve my problem, but it does answer the question in the title).
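For completeness, the preemptive-authentication switch from that guide lives on the <server> entry in settings.xml; a sketch, where the id must match the repository id in your distributionManagement and the credentials are placeholders:

```xml
<server>
  <id>releases</id>
  <username>my-user</username>
  <password>my-password</password>
  <configuration>
    <httpConfiguration>
      <all>
        <!-- send credentials with the first request instead of
             waiting for a 401 challenge from the server -->
        <usePreemptive>true</usePreemptive>
      </all>
    </httpConfiguration>
  </configuration>
</server>
```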
I need to set up performance tests that are run automatically, triggered by a CI system. For that I want to use JMeter, because some scripts and experience already exist, and I want to combine it with Maven.
During my research for a reasonable plugin I found that two plugins exist:
jmeter-maven-plugin:
http://wiki.apache.org/jmeter/JMeterMavenPlugin
chronos-jmeter-maven-plugin:
http://mojo.codehaus.org/chronos/chronos-jmeter-maven-plugin/usage.html
Which one is better to use? Both seem to be currently maintained and under development. Is there any experience with this? Even the configuration is similar.
I would be happy to get some hints to help me decide without playing around with both plugins for days.
I haven't yet used .jmx files with Maven, and specifically not the plugins you mention.
But I can think of a way how to do it if I needed that.
So consider this: you can execute a JMeter test in non-GUI mode.
Create a shell script wrapper that executes the JMeter test in non-GUI mode, for example (jmeter_exe.sh):
$JMETER_HOME/bin/jmeter.sh -n -t MY_LOAD_TEST.jmx -l resultFile.jtl
This will execute the given script and store results in the .jtl file; you can use that to display your test results (maybe this post will be useful to you, though that's off topic for now).
With step one done, you can then create a scripts directory in your project root and put this in your pom.xml:
<plugin>
  <artifactId>exec-maven-plugin</artifactId>
  <groupId>org.codehaus.mojo</groupId>
  <executions>
    <execution>
      <id>Run load Test</id>
      <phase>generate-sources</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>${basedir}/scripts/jmeter_exe.sh</executable>
      </configuration>
    </execution>
  </executions>
</plugin>
And voilà, your test is executed during the generate-sources phase. This might have been easier with the plugins you mentioned, but I have no knowledge of those; this is just what came to my mind.
Use jmeter-maven-plugin: http://wiki.apache.org/jmeter/JMeterMavenPlugin.
It's the de facto one, and (as @Ardesco mentioned above) it doesn't require anything to be installed, which abstracts away where the JMeter executable is installed and all those kinds of problems...
Word(s) of warning on the apache plugin (lazerycode):
It suppresses JMeter output by default; add the following configuration settings to prevent that:
<configuration>
  <suppressJMeterOutput>false</suppressJMeterOutput>
  <!-- to override debug logging from the plugin (although also in jmeter.properties) -->
  <overrideRootLogLevel>debug</overrideRootLogLevel>
  <jmeterLogLevel>DEBUG</jmeterLogLevel>
</configuration>
Looking at the source (of version 1.8.1), it seems -Xms and -Xmx are limited to 512m.
The plugin swallows exceptions, so your tests may fail without you knowing why. It looks as if they've completed but not produced results.
The jmeter mojo kicks off JMeter as a new Java process but does not provide any way to pass arguments to this execution. So if exceptions are swallowed (see above) and logging isn't sufficient (which it may not be), it's not easy to debug the process to find out what's wrong. We (my colleague) added the debug args to the process execution and debugged the JMeter call to find out.
You get informative output running JMeter directly for dev purposes; I'd say the JMeter UI output is even more informative.
I've not used Chronos, mind.
The JMeter Maven Plugin by @Ardesco is updated every time a JMeter version is released.
It is very well documented and works perfectly.
It is easy to set up and allows easy addition of plugins like JMeter-Plugins or commercial plugins, along with any required libraries.
You can read a full blog post showing the setup for the old version 1.1.10:
http://www.ubik-ingenierie.com/blog/integrate-load-testing-in-build-process-with-jmeter-ubikloadpack-maven/
For the more recent version 2.5.1 (as of November 2017), make sure you read the documentation:
https://github.com/jmeter-maven-plugin/jmeter-maven-plugin/wiki
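As a starting point, a minimal configuration for the newer plugin coordinates might look like the sketch below (version and defaults as I understand them from the project wiki; by default the plugin picks up test plans from src/test/jmeter):

```xml
<plugin>
  <groupId>com.lazerycode.jmeter</groupId>
  <artifactId>jmeter-maven-plugin</artifactId>
  <version>2.5.1</version>
  <executions>
    <execution>
      <id>jmeter-tests</id>
      <goals>
        <!-- runs every .jmx file found in src/test/jmeter -->
        <goal>jmeter</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```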