I am trying to improve performance of medium tests in Spring Boot.
I am using the Spring Boot Testcontainers library.
For an individual test this works really well: with a few annotations I get access to Kafka, ZooKeeper, and Schema Registry. These are full services, so it takes a few seconds to start everything up; all together, setup takes about 40 seconds. The test accurately recreates a realistic deployment, and it's beautifully simple.
This would be fine if it only happened once, but it happens every time a Spring context is created. That means every test that uses @MockBean incurs that 40-second cost.
I've tried refactoring into a single TestConfiguration class and referencing that. I've looked into using ContextHierarchy, but I think that means I'd lose all of the Spring Boot niceties and would need to recreate the context by hand (which means it wouldn't look exactly like the context created by the production app).
Is there a better way to do this?
The Spring Framework already takes care of this scenario.
There is a concept of caching the application context across test classes.
See the documentation. A few lines from it:
The Spring TestContext framework stores application contexts in a static cache. This means that the context is literally stored in a static variable. In other words, if tests run in separate processes, the static cache is cleared between each test execution, which effectively disables the caching mechanism.
So essentially you need to structure your code or context configuration in such a way that your test cases reuse the cached context.
But use this capability wisely: if not thought through properly, it can lead to undesired side effects.
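One pattern that keeps the cached context reusable is to start the containers once, in a shared base class that every integration test extends; the context is then created (and the 40-second cost paid) only once per distinct context configuration. A minimal sketch, assuming the Testcontainers Kafka module (the image tag and property names are illustrative):

```java
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.utility.DockerImageName;

@SpringBootTest
public abstract class AbstractIntegrationTest {

    // static and started once: the container lives for the whole JVM and is
    // shared by every test class that extends this base class
    static final KafkaContainer KAFKA =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:7.4.0"));

    static {
        KAFKA.start();
        // schema-registry etc. would be started here the same way
    }

    @DynamicPropertySource
    static void containerProperties(DynamicPropertyRegistry registry) {
        registry.add("spring.kafka.bootstrap-servers", KAFKA::getBootstrapServers);
    }
}
```

Note that every distinct @MockBean combination still counts as a different context configuration, so the context (though not the containers) may still be rebuilt for those tests.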
I have come across multiple articles on integration testing of Spring Boot applications. Given that the application follows the three-layer pattern (Web Layer - Service Layer - Repository Layer), I have not seen a single article about integration testing the application up to just the service layer (omitting the web layer), where all the business logic is contained. All of the integration tests seem like controller unit tests - mostly verifying only request and response payloads, parameters, etc.
What I would like, however, is to verify the business logic using service integration tests. Since the web layer is responsible only for taking the results from services and exchanging them with the client, I think this makes much more sense. Such tests could also include some database state verification after running the services, e.g. to ensure that there are no detached leftovers.
Since I have never seen such a test, is it good practice to implement one? If not, then why?
There is no one true proper way to test Spring applications. A general approach is as you described:
slice tests (@DataJpaTest, @WebMvcTest, etc.) for components that heavily rely on Spring
unit tests for domain classes and service layer
a small number of e2e tests (@SpringBootTest) to see if everything works together properly
Spotify engineers, on the other hand, wrote about how they do almost no unit testing and cover everything with integration tests.
There is nothing stopping you from using @SpringBootTest to test your service layer with all its underlying components. There are a few things you need to consider:
it is harder to prepare test data (or put the system into a certain state), as you need to put it into the database
you need to clean the database yourself, as @SpringBootTest does not roll back transactions
it is harder to test edge cases
you need to mock external HTTP services with tools like WireMock - which is also harder than using plain Mockito
you need to watch how many application contexts you create during the tests - first because creating them is slow, and second because each application context opens its own connections to the database, so you get X connections per context and can eventually hit your database server's connection limit.
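To make the trade-offs concrete, here is a minimal sketch of such a service-layer test; OrderService and OrderRepository are hypothetical stand-ins for your own beans, and the manual cleanup reflects the no-rollback point above:

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

import static org.assertj.core.api.Assertions.assertThat;

@SpringBootTest
class OrderServiceIntegrationTest {

    @Autowired OrderService orderService;       // hypothetical service under test
    @Autowired OrderRepository orderRepository; // hypothetical repository

    @AfterEach
    void cleanDatabase() {
        // @SpringBootTest does not roll back for you, so clean up manually
        orderRepository.deleteAll();
    }

    @Test
    void placingAnOrderPersistsIt() {
        orderService.placeOrder("book-123", 2);

        assertThat(orderRepository.findAll()).hasSize(1);
    }
}
```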
This is borderline opinion-based, but still, I will share my take on this.
I usually follow Mike Cohn's original test pyramid: a broad base of unit tests, fewer service tests in the middle, and a small number of UI tests at the top.
The reason is that unit tests are not only easier to write but also faster, and they most likely cover much more than other, more coarse-grained tests.
Then we come to the service or integration tests, the ones you mention in your question. They are usually harder to write, simply because you are now testing the whole application rather than a single class, and they take longer to run. The benefit is that you can test a given scenario end to end, and they most probably do not require as much maintenance as unit tests when you change something in your code.
However, and here comes the opinion part, I usually prefer to focus much more on writing good and extensive unit tests (focusing less on test coverage and more on what I expect from the class) than on fully-fledged integration tests. What I do like to do is take advantage of Spring slice tests, which in the pyramid would sit between the unit tests and the service tests. They allow you to focus on a specific class (a Controller, for example) while also testing its integration with the underlying Spring Framework or infrastructure. This is, for me, the best of both worlds: you can still focus on a single class but also test the relevant components around it. You can test your web layer with @WebMvcTest or @WebFluxTest (so that you can test JSON deserialization and serialization, bean validation, etc.), or you can focus on your persistence layer with @DataJpaTest, @JdbcTest or @DataMongoTest (so that you can test the actual persistence and retrieval of data).
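For illustration, a web-layer slice test might look like the following sketch (OrderController, OrderService and Order are hypothetical):

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.test.web.servlet.MockMvc;

import static org.mockito.BDDMockito.given;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@WebMvcTest(OrderController.class)  // only the web layer is started
class OrderControllerTest {

    @Autowired MockMvc mockMvc;

    @MockBean OrderService orderService;  // the service behind the controller is mocked

    @Test
    void returnsOrderAsJson() throws Exception {
        given(orderService.findOrder("42")).willReturn(new Order("42", "book"));

        mockMvc.perform(get("/orders/42"))
               .andExpect(status().isOk())
               .andExpect(jsonPath("$.id").value("42"));
    }
}
```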
Wrapping up: I usually write a bunch of unit tests, then web-layer tests to check my controllers, and also some persistence-layer tests against a real database.
You can read more in the following interesting online resources:
https://martinfowler.com/articles/practical-test-pyramid.html
https://www.baeldung.com/spring-tests
We have Spring Boot integration tests, and keep writing new ones regularly.
I noticed database connections have been piling up: the more tests I run, the higher the connection peak to my PostgreSQL instance.
It reached a point where more than 300 connections were requested by Spring Boot when running all tests, and it started failing the build (our max_connections is set to 300).
After some research, I came to understand that connections are not released after the tests have run because of how Spring Boot tests work: if a context is not explicitly destroyed, its connections are not closed.
I find it quite weird, but to prove the point I tried using @DirtiesContext on all of our test classes. It did fix the issue, in the sense that it avoided the peaks (no more than 30 connections at once, instead of piling up to 300 like before), but since this annotation forces context recreation before each test class, the build got much slower. I find it unsatisfactory to have to recreate a Spring context every time just to make sure connections are closed properly.
The data source is a HikariDataSource, configured via a configuration class.
Another workaround I found is to lower Hikari's maximum pool size. I set it to something below the default value of 10 (I'm not sure it's useful to reserve 10 connections for each test class).
This change effectively lowers the total number of connections when I run all the tests, but they still pile up (just more slowly!).
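For reference, I apply the smaller pool roughly like this (the property key is the standard Spring Boot/Hikari one; the value of 2 is just an example, and putting it in src/test/resources/application.properties works as well):

```java
import org.springframework.boot.test.context.SpringBootTest;

// a smaller pool per context; with many contexts the total still adds up
@SpringBootTest(properties = "spring.datasource.hikari.maximum-pool-size=2")
class SomeIntegrationTest {
    // ...
}
```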
I think I'm missing something, how can I ensure that connections are closed after each test class? There has to be a better way than #DirtiesContext, I just can't find it. Thanks for your help.
It turns out that the context was being recreated for almost every test class because I was extensively using the @MockBean annotation in my tests. Since it affects the Spring context, each @MockBean/no-@MockBean combination across test classes counts as a different context, i.e.:
Test class 1: bean MyService is a @MockBean, MyOtherService is not
Test class 2: bean MyService is a @MockBean, MyOtherService is also a @MockBean
Test class 3: neither of these two beans is a @MockBean
In such a case, a new Spring context will be created for each class because the bean configuration is different, resulting in an increasing number of connections to the datasource.
To (partially) solve this, I looked for patterns in the bean combinations of my test classes and created a new class I called TestMockBeans.
Its sole purpose is to declare as many @MockBeans and/or @SpyBeans as possible for re-use across similar test configurations. I extend the corresponding test classes from TestMockBeans, and then, because they share this setup, Spring considers their context configurations equal and does not create a new one for every test class.
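A sketch of what that class looks like (the bean types are placeholders for your own services):

```java
import org.springframework.boot.test.mock.mockito.MockBean;
import org.springframework.boot.test.mock.mockito.SpyBean;

public abstract class TestMockBeans {

    // every test class extending this one contributes the exact same set of
    // mock definitions to the context configuration, so the context is reused
    @MockBean protected MyService myService;
    @MockBean protected MyOtherService myOtherService;
    @SpyBean protected MyThirdService myThirdService;
}
```

Test classes then simply declare extends TestMockBeans instead of their own @MockBean fields.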
As you can guess, not all of my tests throughout the Spring Boot app share the same need for @MockBeans (or the absence thereof), so it's only a partial solution, but I hope it helps someone experiencing the same issue to mitigate it.
I have a Spring-WS web service that has three issues:
Slow startup time
Slow generation of the dynamic WSDL
Heavy usage of PermGen (app has to be 1.6 compatible)
Currently, the spring-ws-servlet.xml file has several <context:component-scan> elements for autowired dependencies. Two of these scan nearly everything in two external libraries containing Hibernate DAO and Entity classes. Similarly, the Hibernate session factory bean scans a large number of entities from these two libraries.
So, my questions:
Obviously, we would see at least some performance improvement by limiting the scope of the <context:component-scan> elements. But really, would it be that much?
Similarly, would I see improvements by limiting the scope of what Entities are scanned by the session factory?
Making these changes will NOT be a quick process (alter code, test, etc.). Therefore, if anyone can share their wisdom, I would greatly appreciate it.
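For reference, the kind of narrowing I have in mind, sketched as the Java-config equivalent (package names are hypothetical; in the XML it corresponds to the base-package attribute of <context:component-scan> and the session factory's packagesToScan property):

```java
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

// scan only the packages this service actually needs, instead of nearly
// everything in the two external libraries
@Configuration
@ComponentScan(basePackages = "com.example.ws.endpoints")
public class WsConfig {
    // the session factory would likewise list only the entity packages
    // this service uses in its packagesToScan property
}
```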
I am actually developing a Spring-WS application on Google Cloud and I had the same problem with slow startup time. The biggest difference I noticed came from moving to AspectJ compile-time weaving using the aspectj-maven-plugin. If you haven't done this yet, give it a try. The results may vary depending on your code and deployment environment. On the cloud every file operation is much slower, which may be why this worked so well for me.
I have a large spring project, using xml configuration. I'm looking for a quick way to verify changes to the xml configuration.
I can load the whole project locally - the problem is that this takes more than 5 minutes and loads a huge amount of data.
My XML editor catches XML formatting errors.
I'm looking for something intermediate - to catch obvious problems like references to beans that aren't defined, or calling constructors with the wrong arguments. Is there a quick way to do this, without having to actually invoke all the constructors and bring up the whole environment?
I'm building with Maven and editing with Eclipse, although my question isn't specific to either.
Since you already use Eclipse, you could try the Spring Tool Suite (it comes either standalone or as an add-on). It's essentially Eclipse with extra Spring-specific features, like the Beans Validator. I'm not sure how thorough the validation is, but it should catch most configuration problems.
It's maintained by SpringSource, so its integration with Spring "just works", and it's guaranteed to stay more or less in sync with the Spring Framework's release cycle.
Beanoh:
http://beanoh.org/overview.html#Verify
This project does exactly what I'm looking for: it verifies obvious problems with your Spring config, but without the overhead of initializing everything.
You can use Spring's testing support to integration-test your Spring configuration. However, if loading the context takes 5 minutes, then the tests will take the same amount of time. Spring does cache the context, so if you have multiple tests using the same set of Spring contexts, then once cached, the tests should be very quick.
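A minimal sketch of such a test (the context location is a placeholder for your own file; note that it does instantiate the beans, so it is only as fast as the context itself):

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:applicationContext.xml")
public class ContextSanityTest {

    @Test
    public void contextLoads() {
        // passes if every bean definition resolves and wires up; missing bean
        // references or constructor mismatches fail the test at startup
    }
}
```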
I can suggest a few ways to test your configuration more efficiently:
Organize your project in modules, with each module responsible for its own Spring configuration - this way, each module can be developed and tested independently.
If you have a modular structure, the testing can be more localized by mocking out the dependent modules; again, this is for speed.
I'm a web developer who ended up doing some Java EE development (RichFaces, Seam 2, EJB 3.1, JPA). To test JPA I use Hypersonic (HSQLDB) and Mockito. But I lack deeper EJB knowledge.
Some may argue that we should use OpenEJB and Arquillian, but for what?
When do I need to do container dependent tests? What are the possible test scenarios where I need OpenEJB and Arquillian?
Please enlighten me :)
There are two aspects to this.
Unit tests. These are intended to be very fast (the whole test suite executes in seconds). They test very small chunks of your code - e.g. one method. To achieve this kind of granularity, you need to mock the whole environment using e.g. Mockito. You're not interested in:
invoking EntityManager and putting entities into the database,
testing transactions,
making asynchronous invocations,
hitting the JMS Endpoint, etc.
You mock the whole environment away and just test each method separately. Unit tests are fine-grained and blazingly fast, so you can execute them every time you make an important change in the code. If they were more complex and time-consuming, the developer wouldn't hit the 'test' button as often as he should.
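A sketch of what such a fine-grained test looks like with Mockito (the invoice classes are hypothetical):

```java
import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class InvoiceServiceTest {

    @Test
    public void computesTotalWithoutTouchingTheDatabase() {
        // the repository (and with it the EntityManager) is mocked away
        InvoiceRepository repository = mock(InvoiceRepository.class);
        when(repository.findAmounts("customer-1")).thenReturn(new double[] {10.0, 5.5});

        InvoiceService service = new InvoiceService(repository);

        assertEquals(15.5, service.totalFor("customer-1"), 0.001);
        verify(repository).findAmounts("customer-1");
    }
}
```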
Integration tests. These are slower, as you want to test the integration between your modules. You want to test whether they 'talk' to each other appropriately, e.g.:
are transactions propagated the way you expect,
what happens if you invoke your business method with no transaction at all,
do the changes sent from your web service client really hit your endpoint method, and is the data added to the database?
what if my JMS endpoint throws an ApplicationException - will it properly roll back all the changes?
As you can see, integration tests are coarse-grained, and as they're executed in the container (or basically in a production-like environment), they're much slower. These tests are normally not executed by the developer after each code change.
Of course, you can run the EJB container in embedded mode, just as you can run JPA in Java SE. The point is that this artificial environment gives you the basic services, but you'll end up tweaking it and will still have less flexibility than in a real container.
Arquillian gives you the ability to recreate the production environment on the container of your choice and execute your tests in that environment (using the datasources, JMS destinations, and the whole lot of other configuration you expect to see in a production environment).
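A short sketch of what such a test looks like (the deployed classes and the persistence unit are hypothetical):

```java
import javax.ejb.EJB;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class OrderServiceIT {

    @Deployment
    public static JavaArchive createDeployment() {
        // package only what this test needs and let the real container run it
        return ShrinkWrap.create(JavaArchive.class)
                .addClasses(OrderService.class, Order.class)
                .addAsManifestResource("test-persistence.xml", "persistence.xml");
    }

    @EJB
    private OrderService orderService;

    @Test
    public void persistsOrderInsideRealTransaction() {
        Order order = orderService.place("book-123");
        Assert.assertNotNull(order.getId()); // id assigned by the real JPA provider
    }
}
```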
Hope it helps.
I attended Devoxx this year and got a chance to ask the JBoss dudes this question.
Some of the test scenarios (the stuff I managed to scribble down):
Configuration of the container
Container integration
Transaction boundaries
Entity callback methods
Integration tests
Selenium recordings