Running selenium hub in maven

I'm trying to run the Selenium server in the hub role from Maven, using the selenium-maven-plugin, in order to use the PhantomJS driver from a remote control test. So far my plugin configuration is very straightforward:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>selenium-maven-plugin</artifactId>
  <version>2.3</version>
  <executions>
    <execution>
      <id>start-selenium</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>start-server</goal>
      </goals>
      <configuration>
        <background>true</background>
      </configuration>
    </execution>
    <execution>
      <id>stop-selenium</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>stop-server</goal>
      </goals>
    </execution>
  </executions>
</plugin>
Then I hook in PhantomJS using the exec-maven-plugin:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.2.1</version>
  <executions>
    <execution>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>exec</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <executable>phantomjs</executable>
    <arguments>
      <argument>--webdriver=8080</argument>
      <argument>--webdriver-selenium-grid-hub=http://localhost:4444</argument>
    </arguments>
  </configuration>
</plugin>
With this configuration the output is HTTP ERROR: 403 Forbidden for Proxy, and I can't get any further. Has anyone successfully configured this?

It wouldn't be too much of a stretch to just create a script that uses YAJSW (Yet Another Java Service Wrapper) to register the grid hub as a service. Then Maven can call the script to start it as its own separate process, and call the service's stop command to stop it. I think it would be elegant.
Here is my almost-working attempt. I'll need to solicit help from a Selenium expert to get it working; I'm hitting an unexpected error when registering the service. Most of the work is done, though. Once I get this working, it will be good to go for you.
Now, while you could run the Grid Hub as a service, you wouldn't want to do the same to the Node because it needs access to the desktop (and services can only access their own invisible private desktop). So, perhaps that brings us back to the same problem that you are trying to solve.

Related

How to continue and not fail build on error in Exec Maven Plugin execution?

How can the Maven build be made to continue despite an error in one of the executions added by the Maven exec plugin?
https://www.mojohaus.org/exec-maven-plugin/usage.html
An example solution using successCodes:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>3.0.0</version>
  <executions>
    <execution>
      <id>docker-rmi</id>
      <phase>clean</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>docker</executable>
        <workingDirectory>${project.basedir}</workingDirectory>
        <arguments>
          <argument>rmi</argument>
          <argument>${project.groupId}/${project.artifactId}:${project.version}</argument>
        </arguments>
        <successCodes>
          <successCode>0</successCode>
          <successCode>1</successCode>
        </successCodes>
      </configuration>
    </execution>
  </executions>
</plugin>
You can use successCodes to list the exit codes that you want to treat as success.
According to the docs this feature was created for non-compliant applications, but it is useful in this scenario as well.
I don't know of any wildcard solution, so you have to explicitly list the exit codes in successCodes.

Force post-integration phase to always complete after integration phase

Is there a way to force the post-integration-test phase to always run after the integration-test phase? By always I mean even in the event of test failures during the integration-test phase.
I am running an Angular / Spring Boot application. I use Protractor to run e2e tests that test the whole Angular + Spring Boot chain. I managed to integrate this into my Maven build so that I can:
set up the backend Spring Boot server
set up a DB with initial data
run Protractor during the integration-test phase
with the following plugins:
spring-boot-maven-plugin which starts and stops a test server for integration testing:
<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <configuration>
    ...
  </configuration>
  <executions>
    <execution>
      <id>pre-integration-test</id>
      <goals>
        <goal>start</goal>
      </goals>
    </execution>
    <execution>
      <id>post-integration-test</id>
      <goals>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
</plugin>
and frontend-maven-plugin which runs my protractor tests during the integration phase:
<plugin>
  <groupId>com.github.eirslett</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <configuration>
    ...
  </configuration>
  <executions>
    <execution>
      <id>install node and npm</id>
      <goals>
        <goal>install-node-and-npm</goal>
      </goals>
      <phase>generate-resources</phase>
    </execution>
    <execution>
      <id>npm install</id>
      <goals>
        <goal>npm</goal>
      </goals>
      <phase>generate-resources</phase>
      <configuration>
        <arguments>install</arguments>
      </configuration>
    </execution>
    <execution>
      <id>npm run build</id>
      <goals>
        <goal>npm</goal>
      </goals>
      <phase>generate-resources</phase>
      <configuration>
        <arguments>run build</arguments>
      </configuration>
    </execution>
    <execution>
      <id>npm run integration tests</id>
      <goals>
        <goal>npm</goal>
      </goals>
      <phase>integration-test</phase>
      <configuration>
        <arguments>run e2e</arguments>
        <testFailureIgnore>true</testFailureIgnore> <!-- this should probably be deleted -->
      </configuration>
    </execution>
  </executions>
</plugin>
I added testFailureIgnore = true to the frontend-maven-plugin because if any Protractor test fails, it stops my Maven build before it gets to execute the post-integration-test phase. This leaves the test server running and holding its port, so any subsequent run fails because the port is already in use, until that server is killed manually. The testFailureIgnore property lets the build ignore failed tests, effectively letting me continue with the post-integration-test phase.
The obvious downside is that my build will print SUCCESS even when tests have failed. I am looking for behavior similar to the failsafe plugin, where failed tests fail my build but the post-integration-test phase still executes first to clean up properly.
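For reference, failsafe gets that behavior from splitting its goals across phases: integration-test runs the tests during the integration-test phase and only records failures, while verify (bound to the verify phase, which comes after post-integration-test) is what actually fails the build. A minimal sketch of that standard binding:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>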
I can't seem to find a proper solution for this but surely I can't be the first to encounter this problem. What solutions/alternatives are available for this? I imagine using the exec-maven-plugin instead of the frontend-maven-plugin will cause the same issue.
I didn't manage to find a decent solution to this anywhere so I decided to try and create my own. I extended the frontend-maven-plugin with a parameter that logs integration test failures during the integration-test phase, but only fails the build during the verify phase. This allows the post-integration-test phase to finish.
My solution is available from my repository (version 1.9.1-failsafe). This implementation requires a configuration parameter integrationTestFailureAfterPostIntegration to be added. Unfortunately I did not figure out how to make a Mojo execution trigger another Mojo execution at a later phase without user intervention. Because of this, the user needs an execution that triggers during the verify phase, even if it doesn't do anything functionally useful (e.g. npm -version).
My working example:
<execution>
  <id>npm run integration tests</id>
  <goals>
    <goal>npm</goal>
  </goals>
  <phase>integration-test</phase>
  <configuration>
    <arguments>run e2e</arguments>
    <integrationTestFailureAfterPostIntegration>true</integrationTestFailureAfterPostIntegration>
  </configuration>
</execution>
<execution>
  <id>fail any integration tests</id>
  <goals>
    <goal>npm</goal>
  </goals>
  <phase>verify</phase>
  <configuration>
    <arguments>-version</arguments>
  </configuration>
</execution>
If any IT tests fail, they will be logged during integration-test phase and fail the build at verify. If all IT tests pass, the build will be successful.
I have an open pull request at the frontend-maven-plugin which might get added to the 1.9.2 version. I will still attempt to improve upon the change by removing the need for the verify execution phase to be added manually. Suggestions or improvements on the pull request are welcome!
UPDATE: I already went ahead and released my own version in case the pull request doesn't come through:
<dependency>
  <groupId>io.github.alexandertang</groupId>
  <artifactId>frontend-maven-plugin</artifactId>
  <version>1.9.1-failsafe</version>
</dependency>
In this version I added a verify mojo which simplifies the second execution to:
<execution>
  <id>fail any integration tests</id>
  <goals>
    <goal>verify</goal>
  </goals>
  <phase>verify</phase> <!-- default phase is verify, so this is optional -->
</execution>
I resolved this by putting these instructions in package.json:
"scripts": {
...
"e2e": "ng e2e && echo Success > e2e/result.txt || echo Error > e2e/result.txt"
}
This suppresses the exit code in the error situation and writes a file called result.txt with Success or Error as its content.
Then I added the maven-verifier-plugin to the build to verify the content of result.txt.
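For reference, a sketch of how that check could look with the maven-verifier-plugin; the plugin reads a verifications file (src/test/verifier/verifications.xml by default) and fails the build when a check does not pass. The file location and expected content below are assumptions matching the script above, not my exact setup:
<!-- src/test/verifier/verifications.xml -->
<verifications xmlns="http://maven.apache.org/verifications/1.0.0">
  <files>
    <file>
      <location>e2e/result.txt</location>
      <contains>Success</contains>
    </file>
  </files>
</verifications>
<!-- pom.xml: bind the check to the verify phase -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-verifier-plugin</artifactId>
  <executions>
    <execution>
      <phase>verify</phase>
      <goals>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>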

maven javadoc generation issue of viewing on local tomcat server

I have managed to generate javadocs for my maven java project.
I use the following in my pom.xml:
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <executions>
    <execution>
      <id>attach-javadocs</id>
      <goals>
        <goal>jar</goal>
      </goals>
      <configuration>
        <additionalparam>-Xdoclint:none</additionalparam>
      </configuration>
    </execution>
  </executions>
</plugin>
I use the goal javadoc:javadoc when building.
Is there a way I can start my tomcat server and view the generated javadocs via a URL on my tomcat server? Something like localhost:8080/...
Thanks
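One way to get there (a sketch, assuming the javadocs belong to a WAR project that you deploy to Tomcat; the apidocs directory is the javadoc plugin's default output for javadoc:javadoc in this era, and the targetPath name is arbitrary): generate the javadocs before packaging and let the maven-war-plugin copy them into the webapp, then browse them under the application context, e.g. localhost:8080/yourapp/apidocs/index.html.
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <executions>
    <execution>
      <id>javadoc-for-webapp</id>
      <phase>prepare-package</phase>
      <goals>
        <goal>javadoc</goal>
      </goals>
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-war-plugin</artifactId>
  <configuration>
    <webResources>
      <resource>
        <!-- default output directory of javadoc:javadoc -->
        <directory>${project.build.directory}/site/apidocs</directory>
        <targetPath>apidocs</targetPath>
      </resource>
    </webResources>
  </configuration>
</plugin>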

Using maven-failsafe with fabric8-maven to run integration tests that hit a containerised DB

I'm struggling to synthesise how to correctly use the maven-failsafe and fabric8-maven plugins together.
I want to run integration tests, but start a Docker container running a DB in the pre-integration-test phase, and stop the container in the post-integration-test phase.
Looking at the fabric8 docker-maven-plugin documentation, it states this is possible, but none of the examples seem to illustrate this.
Update #1:
This is the configuration that successfully worked for me:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.15.9</version>
  <executions>
    <execution>
      <id>start-neo4j</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>start</goal>
      </goals>
    </execution>
    <execution>
      <id>stop-neo4j</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <images>
      <image>
        <alias>neo4j</alias>
        <name>neo4j:2.3.2-enterprise</name>
        <run>
          <ports>
            <port>7474</port>
          </ports>
          <wait>
            <log>Starting...</log>
            <time>20000</time>
          </wait>
        </run>
      </image>
    </images>
  </configuration>
</plugin>
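To have maven-failsafe actually run the integration tests between those start and stop executions, one option is the plugin's dynamic port mapping, which writes the randomly chosen host port into a Maven property (neo4j.port below is an arbitrary property name, and the neo4j.url system property handed to the tests is my own invention for illustration). A sketch:
<!-- in the image's <run> section: map container port 7474 to a random free
     host port and publish it as the Maven property neo4j.port -->
<ports>
  <port>neo4j.port:7474</port>
</ports>
<!-- failsafe runs in the integration-test phase, i.e. between the
     pre- and post-integration-test executions above -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <systemPropertyVariables>
      <neo4j.url>http://localhost:${neo4j.port}</neo4j.url>
    </systemPropertyVariables>
  </configuration>
</plugin>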
There are several examples for the docker-maven-plugin which show how the bindings work:
https://github.com/fabric8io/docker-maven-plugin/blob/master/samples/data-jolokia-demo/ contains various configuration variants for running integration tests, where a Tomcat is fired up in the pre-test phase, the application deployed, the tests run, and the Tomcat torn down afterwards.
https://github.com/rhuss/docker-maven-sample is probably more interesting for you, as it covers your use case of starting a Postgres DB before the integration tests (including waiting until the DB has completely started). The binding is shown here:
<executions>
  <execution>
    <id>start</id>
    <phase>pre-integration-test</phase>
    <goals>
      <goal>build</goal>
      <goal>start</goal>
    </goals>
  </execution>
  <execution>
    <id>stop</id>
    <phase>post-integration-test</phase>
    <goals>
      <goal>stop</goal>
    </goals>
  </execution>
</executions>
But I recommend examining the pom.xml there in more detail, since it has even more info, e.g. on how to set up the wait section. Feel free to open issues in this project if something is still unclear.
The way we recommend doing integration and system testing in Maven with Docker is via the fabric8 arquillian plugin. That takes care of creating a new namespace for the test, provisioning all the Kubernetes resources, and then running your JUnit test case to run the assertions etc.
You'll need a Docker image for the database, and to wrap that up in a Kubernetes YAML/JSON file so that it can be run up front as your app is deployed by fabric8-arquillian.

maven :: install multiple third-party artifacts to local repository at once from filesystem

We're using non-public artifacts from third-party companies in our project. We don't have a Maven proxy installed (and there are no plans to do so, because we found it complicates things rather than solving problems, especially when no internet connection or VPN is available).
So I created a set of maven-install-plugin install-file executions, like this:
<plugin>
  <artifactId>maven-install-plugin</artifactId>
  <version>2.3.1</version>
  <inherited>false</inherited>
  <executions>
    <execution>
      <id>install-artifacts.1</id>
      <goals>
        <goal>install-file</goal>
      </goals>
      <phase>initialize</phase>
      <configuration>
        <pomFile>thirdparty/gwt-0.99.1.pom</pomFile>
        <file>thirdparty/gwt-0.99.1.jar</file>
      </configuration>
    </execution>
    <execution>
      <id>install-artifacts.2</id>
      <goals>
        <goal>install-file</goal>
      </goals>
      <phase>initialize</phase>
      <configuration>
        <pomFile>thirdparty/morphia-0.99.1.pom</pomFile>
        <file>thirdparty/morphia-0.99.1.jar</file>
      </configuration>
    </execution>
    <execution>
      <id>install-artifacts.3</id>
      <goals>
        <goal>install-file</goal>
      </goals>
      <phase>initialize</phase>
      <configuration>
        <pomFile>thirdparty/gwt-oauth2-0.2-alpha.pom</pomFile>
        <file>thirdparty/gwt-oauth2-0.2-alpha.jar</file>
      </configuration>
    </execution>
  </executions>
</plugin>
It works great and does exactly what we need. However, every time a new artifact is added, another big XML section has to be added as well.
Is there any way to avoid this, like using 'yet another plugin' that scans a folder and installs everything found in it?
The best solution for this kind of thing is to install a repository manager.
You've written that you won't install a proxy, but that's the wrong way: a repository manager really is the appropriate fix for this class of problem.
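If a repository manager really is not an option, a lighter-weight pattern (a sketch; the thirdparty-repo directory name is arbitrary) is to keep a file-based repository inside the project and declare it in the POM, so no per-artifact plugin executions are needed:
<repositories>
  <repository>
    <id>project-local</id>
    <url>file://${project.basedir}/thirdparty-repo</url>
  </repository>
</repositories>
Each artifact is laid out in the standard repository layout once, e.g. with mvn install:install-file -Dfile=thirdparty/gwt-0.99.1.jar -DpomFile=thirdparty/gwt-0.99.1.pom -DlocalRepositoryPath=thirdparty-repo, and afterwards resolves like any other dependency.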
