Forked execution of SpringBootTests with the Maven Failsafe plugin - does it work out of the box?

Consider you have a Spring Boot application and, within it, a bunch of integration tests which are annotated with @SpringBootTest and run with the SpringRunner class.
They are invoked by the Maven Failsafe plugin, which by default does not parallelise tests in any way. The tests all run fine without any issues.
What changes if you use Failsafe's forkCount feature - can you expect the test execution to work out of the box? Do you need to adjust some code? What do you need to look out for that could prevent these integration tests from running in a forked, "parallel" environment via this plugin?
From my understanding, the Failsafe plugin will create forkCount-many JVMs and execute some of the integration tests in each of them. That sounds like there is nothing to do: you don't need to make anything thread-safe, and you don't need to turn singleton beans into thread-scoped beans or anything, since having multiple JVMs already gives you a separate instance of each bean per process.
Sorry if the question appears odd; I tried researching it but could not find an answer.

From the Maven documentation:
The parameter forkCount defines the maximum number of JVM processes
This means that the tests will run in their own processes, and therefore you will have separate Spring Boot instances. So you really don't have to care about thread safety.
But you do have to care about memory consumption.
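For reference, here is a minimal sketch of what such a Failsafe configuration might look like; the forkCount value of 2 is just an example and the plugin version is omitted:

```xml
<!-- Sketch: forked execution with the Maven Failsafe plugin (example values) -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <!-- spawn two JVMs; each gets its own Spring Boot instance -->
    <forkCount>2</forkCount>
    <!-- reuse each forked JVM for several test classes instead of forking per class -->
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>
```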

Related

How to run JUnit 5 and TestNG together in a Maven pom

I have a set of test cases configured with TestNG. I developed the preconditions in JUnit 5, and these have to run before the tests start. So I wanted to run them in sequence: first the preconditions (JUnit 5) and then the test cases.
I am using dependencies for JUnit 5 and TestNG 7 in my pom.xml. Below is a snapshot of the pom.xml; htmlunitTest.java is for JUnit 5 and testng.xml is for the TestNG test cases. While running, the build terminates successfully without executing any tests.
You probably do not want to hear my answer. It is the best I have anyway: Don’t take that route! Don’t couple two frameworks that are not made to work together.
Although there might be a convoluted and hacky way to achieve what you describe, you’re setting yourself up for unnecessary technical complexity. Stick with one framework and use its own means for preconditions or fixing a test order.

Running jacoco report where integration tests are in one code base and source code is in another code base

I recently started working on creating jacoco reports for maven projects including unit and integration tests and they seem to work out correctly.
Now I have encountered a different scenario which I am not sure how to approach.
I have one workspace which contains the integration test cases - application A - but the source code does not live in the same workspace/code base. The source code that actually runs when these integration tests are invoked is in a different workspace/code base - application B. The tests invoke it through REST API calls to localhost URLs; the JBoss server is started for application B so that the localhost context is up.
The aim is to invoke the integration tests from application A, which in turn exercise the source code in application B, and to generate the jacoco code coverage report for application B.
I am not actually sure how to achieve this.
Can someone provide some input?
Thanks.
If I understand you correctly, you actually have 2 different processes in your scenario:
The "client" process that runs the integration tests and for which jacoco can be easily applied, but it's not what you need
The "server" process that runs the actual JBoss server and executes the actual code.
The client process contacts the server via HTTP.
In this case, I'm afraid jacoco won't be able to provide coverage for you if you're running the tests from Maven/Gradle, because jacoco only instruments bytecode on the JVM it runs in. So you have to be "creative" here :)
I'll list some possible approaches here.
Disclaimer: I haven't tried them though (didn't work with jboss/java ee), but maybe you'll be able to at least borrow some ideas
The first approach would be running the tests together with the application somehow, like it's done for example in Spring tests (I'm not sure whether JBoss provides similar capabilities).
The idea is simple:
You run the integration test, it runs JBoss "embedded in the same JVM", and you can inject beans / EJB session beans into the test (like autowiring with Spring).
The advantage of such a method is that you'll be able to just use the jacoco maven plugin and it will instrument everything for you.
I don't know how easy it will be to achieve this architecture technically; I know that recent JBoss versions support an embedded mode, so maybe you'll find this link to be a useful foundation.
Another direction is to take a look at the Arquillian project. They have a jacoco extension that will probably help, but I've never tried it.
And the last approach I can think of is running the JBoss server with the jacoco agent directly, instead of relying on the build system to run jacoco for you.
The idea here is to stream the coverage results of the server code into some file / TCP endpoint. So you run JBoss with -javaagent:[yourpath/]jacocoagent.jar and it streams the results wherever you need them. After the tests you gather these results and prepare a report. You can find more information about this approach here.
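To make the last approach more concrete, here is a rough sketch of the Maven side, assuming the JBoss JVM was started with the jacoco agent in TCP-server mode (for example -javaagent:[yourpath/]jacocoagent.jar=output=tcpserver,port=6300); the port, file names and phases are placeholders:

```xml
<!-- Sketch: pull execution data from a remote jacoco agent and build a report.
     Note: the report goal needs access to application B's class files to produce
     coverage for that code base. -->
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>dump-it-coverage</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>dump</goal>
      </goals>
      <configuration>
        <!-- host/port of the jacoco agent running inside the JBoss JVM -->
        <address>localhost</address>
        <port>6300</port>
        <destFile>${project.build.directory}/jacoco-it.exec</destFile>
      </configuration>
    </execution>
    <execution>
      <id>report-it-coverage</id>
      <phase>verify</phase>
      <goals>
        <goal>report</goal>
      </goals>
      <configuration>
        <dataFile>${project.build.directory}/jacoco-it.exec</dataFile>
      </configuration>
    </execution>
  </executions>
</plugin>
```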

Integration tests with Arquillian and Arquillian Spring Framework Extension

I would like to set up an infrastructure for integration testing.
Currently we bootstrap tomcat using maven and then execute httpunit tests.
But the current solution has a few drawbacks:
Any changes committed to the database need to be rolled back manually at the end of the test.
Running code coverage on integration tests is not straightforward (we are using Sonar).
My goals are:
Allow automatic rollback between tests (hopefully using Spring's @Transactional and @Rollback)
Simple, straightforward code coverage
Using @RunWith so that the system is bootstrapped from JUnit and not externally
Interacting with live servlets and JavaScript (I am considering switching from HttpUnit to Selenium…)
Reasonable execution time (at least not longer than the existing execution time)
The goals above look reasonable to me and common to many Java/J2EE projects.
I was thinking of achieving those goals by using Arquillian and the Arquillian Spring Framework Extension.
See also https://github.com/arquillian/arquillian-showcase/
Does anyone have any experience with Arquillian and with the Arquillian Spring Framework Extension?
Can you share issues, best practices and lessons learned?
Can anyone suggest an alternative approach to the above?
I can't fully answer your question, only give some tips.
Regarding the automatic rollback: in my case I use Liquibase to initialise the test data on HSQLDB or H2, which can be run in in-memory mode. Then there is no need to roll back.
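A minimal sketch of that idea in a Spring XML test context; the changelog path and database name are placeholders:

```xml
<!-- Sketch: in-memory H2 database initialised by Liquibase, so every test run starts fresh -->
<bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
  <property name="driverClassName" value="org.h2.Driver"/>
  <!-- DB_CLOSE_DELAY=-1 keeps the in-memory database alive for the whole JVM -->
  <property name="url" value="jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1"/>
  <property name="username" value="sa"/>
  <property name="password" value=""/>
</bean>

<bean id="liquibase" class="liquibase.integration.spring.SpringLiquibase">
  <property name="dataSource" ref="dataSource"/>
  <property name="changeLog" value="classpath:db/changelog/test-data.xml"/>
</bean>
```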
For Arquillian: it's a good approach for real testing. What I learned is that the Arquillian Spring Framework Extension is just an extension; you have to bind it to a specific container like JBoss, GlassFish or Tomcat to make the tests run.
But I don't know how to apply it to a Spring-based Java SE program which does not need application server support.
My lesson learned was a JBoss port conflict: jboss-dist uses 8080 as the default HTTP port, but our company proxy also uses 8080, so I couldn't use Maven to get the jboss-dist artifact.
Hope others can give more info.

TestNG: Use ApplicationContext in multiple Test-Classes

I've written some test cases inside a single TestNG test class which extends AbstractTestNGSpringContextTests. The ApplicationContext is set up correctly and I can use it inside my test cases.
The problem is that setting up the ApplicationContext can take some time, and I don't want to do this for every test class I have, as that would take time unnecessarily.
So my question is: is it possible to run more than one TestNG test class using the same Spring ApplicationContext, which is set up only once?
Thanks and best regards,
Robert
How about using a @BeforeSuite?
Spring may cache and re-use the ApplicationContext when you use the same locations in @ContextConfiguration annotations. See the related article by Tomasz Nurkiewicz at http://nurkiewicz.blogspot.com/2010/12/speeding-up-spring-integration-tests.html
Once the TestContext framework loads an ApplicationContext (or WebApplicationContext) for a test, that context will be cached and reused for all subsequent tests that declare the same unique context configuration within the same test suite.
The Spring TestContext framework stores application contexts in a static cache. This means that the context is literally stored in a static variable. In other words, if tests execute in separate processes the static cache will be cleared between each test execution, and this will effectively disable the caching mechanism.
To benefit from the caching mechanism, all tests must run within the same process or test suite. This can be achieved by executing all tests as a group within an IDE. Similarly, when executing tests with a build framework such as Ant, Maven, or Gradle it is important to make sure that the build framework does not fork between tests. For example, if the forkMode for the Maven Surefire plug-in is set to always or pertest, the TestContext framework will not be able to cache application contexts between test classes and the build process will run significantly slower as a result.
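To illustrate the last point, here is a minimal sketch of a Surefire configuration that keeps all test classes in one JVM so the context cache can be reused (forkCount/reuseForks are the modern replacement for the forkMode parameter mentioned above):

```xml
<!-- Sketch: run all tests in a single, reused JVM so Spring's static context cache
     survives across test classes -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkCount>1</forkCount>
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>
```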

Maven: Conditional Execution upon IT Success / Failure

Here is something where I've tried to come up with an idea, but I'm not sure about it.
We have a module which should be built, deployed, and then integration tested (via Failsafe, but other tools might be fine). We'd like to selectively invoke mojos based on the test results.
I think verify from Failsafe should do the trick (probably with some GMaven trickery), but how do we validate the results of Failsafe? Perhaps some test-listener magic with JUnit could help?
Any ideas how we could achieve that, considering a Maven (and probably Hudson) scenario?
Thank you
Let Hudson do the first part (run the integration tests): run failsafe:verify in your integration-test cycle, and based on the result you can start a dependent job in Hudson that does whatever that job needs to do. But you can't execute selective mojos based on the results (AFAIK).
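For reference, a rough sketch of the Failsafe binding this relies on; the verify goal evaluates the results written by integration-test and fails the build (and therefore the Hudson job) if any integration test failed:

```xml
<!-- Sketch: run the ITs and let the verify goal fail the build on IT failures,
     so a downstream Hudson job is only triggered on success -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```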
