Running Mockserver with parallel tests without starting application context - spring-boot

MockServer offers the following advice for running tests in parallel:
"it is recommended to either use a unique instance of MockServer per test or to run tests in series and clear all expectations prior to each test." (MockServer documentation on parallel usage)
Running in parallel
Therefore I have created a unique instance of MockServer per test. The tests are run in parallel using Spring Boot, and we use no tracking_ids or cookies since each test has its own MockServer instance.
The Problem
However, I have to create a new application context before running each test, because I need to store the random MockServer port; otherwise, when the tests make external API calls that need to be mocked, they wouldn't know which port to use in the MockServer base URL.
Example
Test 1: mockserver instance port 1445 (running in parallel)
Test 2: mockserver instance port 1446 (running in parallel)
Imagine both tests need to mock an endpoint like
www.google.com/test
Then test 1 will need to hit
www.mockserver.com:1445/test
And test 2 will need to hit
www.mockserver.com:1446/test
The only way this might work is by starting the application context before each test and updating all your mockable base URLs in your config using the newly generated MockServer port. But this is so slow that it completely defeats the point of running in parallel.
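For illustration, here is a minimal sketch of what that per-test-class wiring can look like, assuming Spring Boot 2.2.6+ (for @DynamicPropertySource) and the mockserver-netty client; the property name external.api.base-url is a placeholder for whatever your config uses as the mockable base URL. Because the port differs per class, each class produces a different context configuration and therefore its own (slow) context start:

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.Test;
import org.mockserver.integration.ClientAndServer;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;

@SpringBootTest
class ExternalApiTest {

    // One MockServer instance per test class, started on a dynamically
    // allocated free port before the Spring context is created
    static final ClientAndServer mockServer = ClientAndServer.startClientAndServer();

    @AfterAll
    static void stopMockServer() {
        mockServer.stop();
    }

    // Point the application's mockable base URL at this class's MockServer instance.
    // "external.api.base-url" stands in for your real config key.
    @DynamicPropertySource
    static void registerMockServerUrl(DynamicPropertyRegistry registry) {
        registry.add("external.api.base-url",
                () -> "http://localhost:" + mockServer.getLocalPort());
    }

    @Test
    void callsTheMockedEndpoint() {
        // exercise application code that calls <external.api.base-url>/test here
    }
}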
Why bad advice in documentation?
Why would the MockServer documentation give this advice when it leads down such a bad path?

Related

Separate WireMock QuarkusTestResources

In my Quarkus application I have multiple controllers which use multiple REST clients. I have multiple tests which all use a @QuarkusTestResource with a WireMock resource. My approach was for each controller to have its own WireMock resource that stubs whichever REST clients it needs, in whatever way the stubs need to be defined. So each test might stub out the same REST client but with different stubs.
When running my tests I found that, even though each test class is annotated with a different WireMock implementation, they seem to overwrite each other. The tests are probably run in parallel, and the configuration (/mp-rest/url) is shared between them and overwritten by whichever QuarkusTestResourceLifecycleManager ran last.
Any tips on how to solve this? Or should I just create one Wiremock class for each rest client?
I think you can use restrictToAnnotatedClass in @QuarkusTestResource for that. It is available in at least Quarkus 1.13. See: https://quarkus.io/guides/getting-started-testing
You can also use
wireMockServer = new WireMockServer(new WireMockConfiguration().dynamicPort());
to assign a dynamic port to each WireMock server.
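Putting the two suggestions together, a rough sketch could look like the following; the class names and the "user-service" config key are made up, so adjust the mp-rest key to your actual REST client:

import java.util.Map;
import com.github.tomakehurst.wiremock.WireMockServer;
import io.quarkus.test.common.QuarkusTestResource;
import io.quarkus.test.common.QuarkusTestResourceLifecycleManager;
import io.quarkus.test.junit.QuarkusTest;

import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig;

public class UserServiceWireMockResource implements QuarkusTestResourceLifecycleManager {

    private WireMockServer wireMockServer;

    @Override
    public Map<String, String> start() {
        // dynamicPort() avoids clashes when several WireMock servers run at once
        wireMockServer = new WireMockServer(wireMockConfig().dynamicPort());
        wireMockServer.start();
        wireMockServer.stubFor(get("/users/1").willReturn(aResponse().withStatus(200)));

        // Override the rest client URL; "user-service" is a placeholder for your real config key
        return Map.of("user-service/mp-rest/url", wireMockServer.baseUrl());
    }

    @Override
    public void stop() {
        if (wireMockServer != null) {
            wireMockServer.stop();
        }
    }
}

// In the test class: restrictToAnnotatedClass keeps this resource (and its config
// overrides) scoped to the annotated class instead of applying globally
@QuarkusTest
@QuarkusTestResource(value = UserServiceWireMockResource.class, restrictToAnnotatedClass = true)
class UserControllerTest {
    // ... tests that hit the controller backed by the stubbed REST client
}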

How to create spring boot test suite

Say I have 10 Spring Boot test classes (annotated with @RunWith(SpringRunner.class) and @SpringBootTest).
Each test class needs to launch the Spring container for around 10 seconds, even though the container might do the same initialization each time.
So I may need 100 seconds for "mvn test".
Is there a way I can group my 10 test classes into one suite and let the container start only once?
So I can:
Only run the suite for "mvn test" (with proper naming for each individual test class).
Optionally run individual tests in the IDE.
Spring uses Cache Management to cache the Application Context between tests:
By default, once loaded, the configured ApplicationContext is reused for each test. Thus, the setup cost is incurred only once per test suite, and subsequent test execution is much faster. In this context, the term “test suite” means all tests run in the same JVM — for example, all tests run from an Ant, Maven, or Gradle build for a given project or module. In the unlikely case that a test corrupts the application context and requires reloading (for example, by modifying a bean definition or the state of an application object) the TestContext framework can be configured to reload the configuration and rebuild the application context before executing the next test. (https://docs.spring.io/spring-framework/docs/current/spring-framework-reference/testing.html#integration-testing)
So this mechanism tries to execute your integration tests on an already running Application Context if possible. Since you see multiple Application Context launches, this indicates your tests somehow use different setups, e.g. different active profiles, test properties, MockBeans, etc.
The Spring documentation provides an overview of the factors that make up the key under which an Application Context is stored in its cache: https://docs.spring.io/spring-framework/docs/current/spring-framework-reference/testing.html#testcontext-ctx-management-caching
If you e.g. don't change any test property for your integration tests, Spring can run all of them on only one Application Context and be extremely efficient.
Another explanation for your current behaviour might be the use of @DirtiesContext, which leads to a fresh Application Context after your test executes.
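As a minimal illustration (class names are made up): both classes below declare exactly the same test configuration, so the TestContext framework starts one ApplicationContext for the first class to run and reuses it for the second.

// OrderServiceIT.java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.ApplicationContext;
import org.springframework.test.context.junit4.SpringRunner;

import static org.junit.Assert.assertNotNull;

@RunWith(SpringRunner.class)
@SpringBootTest
public class OrderServiceIT {

    @Autowired
    private ApplicationContext context;

    @Test
    public void contextIsAvailable() {
        assertNotNull(context); // context started here (first class to run)
    }
}

// CustomerServiceIT.java - identical configuration, so the cached context is reused
@RunWith(SpringRunner.class)
@SpringBootTest
public class CustomerServiceIT {

    @Autowired
    private ApplicationContext context;

    @Test
    public void reusesCachedContext() {
        assertNotNull(context); // same cached context, no second startup
    }
}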

How does Spring Boot Test keep the context across multiple test suites?

I was reading through the guide for using Spring Boot Test and there was a paragraph that got me confused.
“As our profiles get richer, it's tempting to swap every now and then in our integration tests. There are convenient tools to do so, like @ActiveProfiles. However, every time we pull a test with a new profile, a new ApplicationContext gets created.”
https://www.baeldung.com/spring-tests
So it assumes that if all tests are run under the same profile, there is only one ApplicationContext created. But how is that possible? I thought that all the objects were recreated for each test suite anyway. Am I missing something?
The official reference says that it's cached.
https://docs.spring.io/spring-framework/docs/current/spring-framework-reference/testing.html#testing-ctx-management
But how does it get loaded into the JUnit runner or Spock one across multiple test suites?
What was missing in my understanding is the fact that all the test suites are run as part of a single program, so it's easy to cache any objects that are required by all of them, including the Spring context.
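To make the quoted point concrete, a small sketch (class names made up, and assuming no profile is activated in application.properties): everything runs in one JVM, so the default-profile context is cached and shared, and only the class that switches profiles pays for a second context.

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.core.env.Environment;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.junit4.SpringRunner;

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

// DefaultProfileTest.java - runs against the cached default-profile context
@RunWith(SpringRunner.class)
@SpringBootTest
public class DefaultProfileTest {

    @Autowired
    private Environment env;

    @Test
    public void noProfileActive() {
        assertEquals(0, env.getActiveProfiles().length);
    }
}

// IntegrationProfileTest.java - @ActiveProfiles changes the context cache key,
// so Spring builds (and caches) a second ApplicationContext for this class
@RunWith(SpringRunner.class)
@SpringBootTest
@ActiveProfiles("integration")
public class IntegrationProfileTest {

    @Autowired
    private Environment env;

    @Test
    public void integrationProfileActive() {
        assertTrue(java.util.Arrays.asList(env.getActiveProfiles()).contains("integration"));
    }
}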

Start a springboot project for each test case

We are developing test cases for a microservice using Spring Boot. One of the requirements is that for each JUnit test case we need to:
start the project
test a unit case and
then stop the project.
I feel this is an anti-pattern, but this is the requirement.
I looked around on the internet but couldn't find a solution. I was able to start a web server, but it provided no response; this might be because the project is not deployed to the server.
Does anyone have any idea on how this can be achieved?
PS: We don't want to use Mockito
Beforehand I want to make clear that this is a very bad practice and should be avoided. This approach does not implement the unit test concept correctly, because you are testing an entire running system, so JUnit wouldn't be the correct tool.
I poked around and couldn't find a Runner that is able to do this (which doesn't surprise me). The most similar Runner may be SpringJUnit4ClassRunner, which provides you a complete Spring context in your test space but won't go live with the application.
An approach I'd suggest, if you really want to go with this, is to use tools like REST Assured to do end-to-end API-layer tests against the live application. This implies that you have to find another way to start the app and then point the REST Assured tests at that running app: maybe a shell script that starts the app, then runs the REST Assured test suites, and shuts the server down when the suite ends.
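A rough sketch of that kind of REST Assured test, assuming the application has been started separately (e.g. by a shell script) and its address is handed in as a system property; the property name app.base-uri and the /users/1 endpoint are placeholders:

import io.restassured.RestAssured;
import org.junit.BeforeClass;
import org.junit.Test;

import static io.restassured.RestAssured.given;

public class UserApiE2ETest {

    @BeforeClass
    public static void pointAtRunningApp() {
        // The application is started outside this test; only its address is passed in
        RestAssured.baseURI = System.getProperty("app.base-uri", "http://localhost:8080");
    }

    @Test
    public void returnsExistingUser() {
        given()
        .when()
            .get("/users/1")
        .then()
            .statusCode(200);
    }
}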
I highly suggest chatting with your product/management teams to avoid this kind of thing, since the tests will take FOREVER to run, and you will be polluting your local or remote DBs if you persist data, or other systems through REST or SOAP calls.

In TDD, why OpenEJB and why Arquillian?

I'm a web developer who ended up doing some Java EE development (RichFaces, Seam 2, EJB 3.1, JPA). To test JPA I use Hypersonic and Mockito, but I lack deeper EJB knowledge.
Some may argue that we should use OpenEJB and Arquillian, but for what?
When do I need to do container dependent tests? What are the possible test scenarios where I need OpenEJB and Arquillian?
Please enlighten me :)
There are two aspects in this case.
Unit tests. These are intended to be very fast (execute the whole test suite in seconds). They test very small chunks of your code, e.g. one method. To achieve this kind of granularity, you need to mock the whole environment using e.g. Mockito. You're not interested in:
invoking EntityManager and putting entities into the database,
testing transactions,
making asynchronous invocations,
hitting the JMS Endpoint, etc.
You mock this whole environment and just test each method separately. Unit tests are fine-grained and blazingly fast, so you can execute them every time you make an important change in the code. If they were more complex and time-consuming, the developer wouldn't hit the 'test' button as often as they should.
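For example, a unit test in the spirit described above might mock the EntityManager entirely; the Invoice and InvoiceService classes here are made up just to have something to test:

import javax.persistence.EntityManager;
import org.junit.Test;

import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

public class InvoiceServiceTest {

    // Minimal made-up domain class and service
    static class Invoice {
        private boolean paid;
        boolean isPaid() { return paid; }
        void markPaid() { paid = true; }
    }

    static class InvoiceService {
        private final EntityManager em;
        InvoiceService(EntityManager em) { this.em = em; }
        void markPaid(long id) { em.find(Invoice.class, id).markPaid(); }
    }

    @Test
    public void marksInvoiceAsPaid() {
        // No database, no transaction, no container: the EntityManager is a mock
        EntityManager em = mock(EntityManager.class);
        Invoice invoice = new Invoice();
        when(em.find(Invoice.class, 42L)).thenReturn(invoice);

        new InvoiceService(em).markPaid(42L);

        assertTrue(invoice.isPaid());
        verify(em).find(Invoice.class, 42L);
    }
}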
Integration tests. These are slower, as you want to test the integration between your modules. You want to test whether they 'talk' to each other appropriately, e.g.:
are transactions propagated the way you expect,
what happens if you invoke your business method with no transaction at all,
do the changes sent from your web service client really hit your endpoint method and get added to the database,
what if my JMS endpoint throws an ApplicationException - will it properly roll back all the changes?
As you can see, integration tests are coarse-grained, and since they're executed in the container (basically, in a production-like environment) they're much slower. These tests are normally not executed by the developer after each code change.
Of course, you can run the EJB container in embedded mode, just as you can run JPA in Java SE. The point is that the artificial environment gives you the basic services, but you'll end up tweaking it and still end up with less flexibility than in the real container.
Arquillian gives you the ability to recreate the production environment on the container of your choice and just execute tests in this environment (using the datasources, JMS destinations, and a whole lot of other configuration you expect to see in the production environment).
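A minimal Arquillian test, for illustration only: GreetingService is a made-up trivial bean, and the deployment content plus the container adapter (remote or managed) depend on your project; real integration tests would target your actual EJBs, transactions and datasources.

// GreetingService.java - a trivial EJB to deploy and test
import javax.ejb.Stateless;

@Stateless
public class GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// GreetingServiceIntegrationTest.java - runs inside the container Arquillian starts/attaches to
import javax.ejb.EJB;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class)
public class GreetingServiceIntegrationTest {

    @Deployment
    public static JavaArchive createDeployment() {
        // Package only the classes under test plus an empty beans.xml
        return ShrinkWrap.create(JavaArchive.class)
                .addClass(GreetingService.class)
                .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @EJB
    private GreetingService greetingService;

    @Test
    public void runsInsideTheContainer() {
        // The injected bean is a real container-managed EJB, not a mock
        assertEquals("Hello, Devoxx", greetingService.greet("Devoxx"));
    }
}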
Hope it helps.
I attended Devoxx this year and got a chance to put this question to the JBoss guys.
Some of the test scenarios (the stuff I managed to scribble down):
Configuration of the container
Container integration
Transaction boundaries
Entity callback methods
Integration tests
Selenium recordings
