I have a Cucumber test setup with Spring Boot. There are a large number of integration tests which take a while to run. Because they share a database, the new threaded mode in Cucumber 4+ does not work (as expected).
Ideally this would work in JUnit as well.
For each test, I would like to create a dynamic datasource with a new database/datasource instance that the test can use independently of others, allowing it to run multithreaded (and use the 12 cores I have available).
I have tried @Scope("cucumber-glue") and @Scope("prototype") on the DataSource bean, but this results in org.springframework.beans.factory.BeanCurrentlyInCreationException: Error creating bean with name 'dataSource': Requested bean is currently in creation: Is there an unresolvable circular reference?.
If I use the prototype scope, the bean creation method gets called each time but gives this error, as does the glue scope.
I have also added @DirtiesContext(classMode = DirtiesContext.ClassMode.BEFORE_EACH_TEST_METHOD) to classes and methods, without the scope, but this seems to do nothing.
Is there a way I can either:
1. create a new datasource for each test and have Hibernate create the tables?
2. pool a set of datasources that could be used by tests?
3. populate/reinitialize the context accordingly?
A side consequence of this is that I don't believe my context is getting recreated properly between tests for the other scoped beans either.
@Configuration
public class H2DynamicDataSource {

    @Autowired
    private Environment env;

    @Bean
    @Scope("prototype")
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .build();
    }
}
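To make option 1 concrete, this is roughly the shape I'm after: the configuration above, but scoped to the glue lifecycle and with generateUniqueName(true), so that each created bean really is a separate in-memory H2 database (by default the builder reuses a single database name, so even "new" datasources can end up pointing at the same H2 instance). The circular-reference error above would still need solving for this to actually be created once per scenario.
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

@Configuration
public class H2DynamicDataSource {

    @Bean
    @Scope("cucumber-glue")
    public DataSource dataSource() {
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .generateUniqueName(true) // each bean instance gets its own H2 database
                .build();
    }
}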
Cheers
R
Hopefully you've solved this.
I just went through something similar and felt like cruising around StackOverflow to see if anyone was doing something similar (or perhaps asking how to do something like this) and saw this post. Figured this would be a good place to drop this information in case anyone else tries to go down this road.
I think you're asking two questions:
How do you get multithreading working with Cucumber and the JUnit Cucumber runner?
Multithreading works great, but you are constrained to a single lane of test execution by JUnit if you're using either the Cucumber or SpringJUnit4ClassRunner JUnit runner classes. (I suspect this is why @CucumberOptions doesn't contain a threads argument. It's useless with the current Cucumber runner implementation.)
In order to get this working I had to write my own test runner that defers the entire test execution to Cucumber's Runtime object. I used a custom Cucumber plugin to cache the results and then relay those results back to JUnit when asked to 'run' the individual tests. With Cucumber behind the wheel I was able to allow any arbitrary number of threads for my tests.
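For what it's worth, if you don't need JUnit to report the individual scenarios, the simplest way to hand the whole run over to Cucumber's own executor is its CLI entry point, which accepts a --threads option as of Cucumber-JVM 4. This is only a rough sketch, not the custom-runner-plus-plugin approach described above; the entry point's package and the glue/feature paths are assumptions to check against your version (it moved from cucumber.api.cli.Main to io.cucumber.core.cli.Main in Cucumber 5).
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ParallelCucumberIT {

    @Test
    public void runAllFeaturesInParallel() throws Exception {
        byte exitStatus = cucumber.api.cli.Main.run(
                new String[] {
                        "--threads", "12",                // one thread per available core
                        "--glue", "com.example.glue",     // assumed glue package
                        "--plugin", "pretty",
                        "classpath:features"              // assumed feature location
                },
                Thread.currentThread().getContextClassLoader());

        assertEquals(0, exitStatus);
    }
}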
How do you ensure a new Spring datasource for each test?
If you're unfamiliar with how Spring caches contexts for tests, you should have a look through the documentation here https://docs.spring.io/spring/docs/current/spring-framework-reference/testing.html#testcontext-ctx-management-caching but the tl;dr is: Spring's TestContextManager (this is what's under the hood of Spring's JUnit runner) will create and store a Spring context in a map keyed by your test configuration. @DirtiesContext will force the context to reload, but otherwise two tests with the same parent class are effectively guaranteed to execute in the same test context. If your datasource is a Spring datasource that is initialized during boot, you definitely need to refresh this context between tests.
Putting those two concepts together: Cucumber-Spring provides a way to create a new context (consistent with Cucumber's new-'world'-per-test design) for every test, but it does so by disregarding any existing Spring contexts or data contained therein. This may actually be helpful to you if you can trust Cucumber-Spring to correctly stand up your datasource for you, but in my situation we had a bunch of issues using this fake context and really needed to pull objects from the default Spring context. In my case I had to incorporate Spring's TestContextManager into my custom plugin AND write my own Cucumber BackendSupplier implementation to hijack Cucumber's dependency injection mechanism (effectively replacing Cucumber-Spring).
All in all, what you're trying to do is a major PITA and I hope the Cucumber folks make this easier at some point, but what you're trying to do is definitely possible. Hope you got it working!
Related
I'm making some Integration Tests for my app and I'm encountering this problem I can't see how to solve.
I'm using Spring Boot 2.4.13 + Spring Data Neo4J 6.1.9
FYI, I deleted the Application default test that comes bundled when you create a project through Spring Initializr, and under /src/test/resources I have a .yml file named application.yml
My IT class looks like this:
@SpringBootTest
public class ClientIT {

    @Autowired
    private ClientServiceImpl service;

    @Autowired
    private ClientRepository repository;

    @Test
    void someTest() {
        //Given
        //When
        //Then
    }
}
But when I run this test I get the following Exception:
java.lang.IllegalStateException: Failed to load ApplicationContext
And this is the cause:
Caused by: java.lang.IllegalStateException: The provided database selection provider differs from the ReactiveNeo4jClient's one.
The thing is I don't use SDN's Reactive features at all in my project. I don't even understand why Spring tries to load it. I've created an Issue under the Spring Data Neo4j GitHub repository (https://github.com/spring-projects/spring-data-neo4j/issues/2488) but they could only tell me that ReactiveNeo4jDataAutoConfiguration gets automatically included if there's a Driver or Flux class in the classpath which I don't have.
I've been debugging the Spring internals while booting up the application after the JUnit Jupiter methods, without success.
What I could see is that at some point after JUnit Jupiter tests preparation/initialization, "reactiveNeo4jTemplate" gets injected into DefaultListableBeanFactory's beanDefinitionNames variable.
I've tried many combinations of different annotations intended to be used when making integration tests, but the one time it worked was after I explicitly excluded the ReactiveNeo4jDataAutoConfiguration class through
@EnableAutoConfiguration(exclude = ReactiveNeo4jDataAutoConfiguration.class)
What I've always seen in some blog posts is that by using @SpringBootTest I shouldn't have to worry about this kind of problem, but it looks like I need to add that annotation every time I want to write a new IT test.
My Integration Tests basically consist of bootstrapping the application + web server (tomcat) along with an embedded Neo4J instance and after that, making requests to check everything works as it should. Do I really need to worry about all of this just to make these simple tests?
Thank you
References:
How do I set up a Spring Data Neo4j integration test with JUnit 5 (in Kotlin)?
SprintBootTest - create only necessary beans
Answering my own question after finding what is causing this error:
In the linked GitHub issue, one of the developers says having Flux.class on the classpath forces SDN to instantiate Neo4jReactiveDataAutoConfiguration, which is what is causing the other reactive beans to instantiate.
Apparently, neo4j-harness brings io.projectreactor (where Flux.class belongs) as an indirect dependency through neo4j-fabric, which is the root of our problems.
The Spring Data Neo4j team will be fixing this issue in a patch later this week.
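For anyone hitting this before the fix is released: concretely, the combination that worked for me looks like this on the test class (until the patch lands it has to be repeated on each IT, or pulled up into a shared base class):
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
// plus an import for the excluded auto-configuration class; it is referred to above as both
// ReactiveNeo4jDataAutoConfiguration and Neo4jReactiveDataAutoConfiguration, so use whichever
// class your Boot version actually registers

@SpringBootTest
@EnableAutoConfiguration(exclude = ReactiveNeo4jDataAutoConfiguration.class)
public class ClientIT {

    @Autowired
    private ClientServiceImpl service;

    @Test
    void someTest() {
        // ...
    }
}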
We are in a situation where we cannot simply roll back data after a test, so we decided to use the Flyway Java API like this:
@Autowired
protected Flyway flyway;

@AfterEach
public void restoreDatabase() {
    flyway.clean();
    flyway.migrate();
}
Is it possible to execute clean and migrate after each test class instead of after each test method? I would need to call this in an @AfterAll annotated method, but that type of method has to be static, so I cannot use the autowired Flyway component. Can you advise me of any workaround? Thank you.
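A possible workaround (just a sketch, assuming JUnit 5, with an illustrative class name): with the per-class test instance lifecycle, an @AfterAll method no longer has to be static, so the autowired Flyway field can be used there.
import org.flywaydb.core.Flyway;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.TestInstance;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class SomeRepositoryIT {

    @Autowired
    private Flyway flyway;

    @AfterAll
    void restoreDatabase() {   // non-static is allowed with PER_CLASS
        flyway.clean();
        flyway.migrate();
    }
}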
The following solution may help you.
Besides the @Rollback annotation there is also the possibility to mark a class (or method) as "dirty" with the annotation org.springframework.test.annotation.DirtiesContext. This will provide the test cases a fresh context. From the Java Docs:
Test annotation which indicates that the ApplicationContext associated with a test is dirty and should therefore be closed and removed from the context cache.
Use this annotation if a test has modified the context — for example, by modifying the state of a singleton bean, modifying the state of an embedded database, etc. Subsequent tests that request the same context will be supplied a new context.
@DirtiesContext may be used as a class-level and method-level annotation within the same class or class hierarchy. In such scenarios, the ApplicationContext will be marked as dirty before or after any such annotated method as well as before or after the current test class, depending on the configured methodMode and classMode.
Let me show you an example:
@RunWith(SpringRunner.class)
@SpringBootTest
@DirtiesContext(classMode = ClassMode.BEFORE_CLASS)
public class SomeTestClass {

    @Autowired
    private ExampleController controller;

    @Test
    public void testSomething() {
        // Do some testing here
    }
}
Now in this case, with an embedded DB (like H2), a fresh DB will be started containing no changes from previous transactions.
Please note that this will most probably slow down your test cases because creating a new context can be time consuming.
Edit:
If you watch the log output you'll see that Spring creates a new application context with everything included. So, when using an embedded DB for the test cases, Spring drops the current DB, creates a new one and runs all specified migrations against it. It's like restarting the server, which also creates a new embedded DB.
A new DB doesn't contain any commits from previous actions. That's why it works. It's actually not hacky imo but a proper setup for integration tests, as integration tests mess up the DB and need the same clean setup. However, there are most probably other solutions as well, because creating new contexts for every test class may slow down the execution time. So, I would recommend annotating only classes (or methods) which really need it. On the other hand, unit tests are in most cases not very time critical, and with newer Spring versions lazy loading will speed up startup time.
I have a problem getting Spring Boot 2.0.5 to work nicely with Kotlintest 3.1.10.
I made a test project illustrating the problem I have.
The project is a Spring Boot 2 application
with two entities, ShoppingOrder and OrderLine (to be totally unimaginative).
There is also a test case ShoppingOrderSpec which just tests the mapping by storing and retrieving the Order.
The testcase is configured like this:
@ExtendWith(SpringExtension::class)
@Transactional
@SpringBootTest
class ShoppingOrderSpec : WordSpec() {

    override fun listeners() = listOf(SpringListener)
The test case is using the SpringExtension by Spring to hook into the JUnit 5 engine. It also uses the SpringListener and WordSpec from Kotlintest to structure the tests and do the assertions.
The SpringListener correctly autowires the dependencies, but somehow the transaction is not being created.
Running the testcase gives the following stack-trace:
2018-10-12 10:54:14.329 INFO 59374 --- [intest-engine-0] com.example.demo.ShoppingOrderSpec : Started ShoppingOrderSpec in 4.478 seconds (JVM running for 7.421)
org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.example.demo.ShoppingOrder.lines, could not initialize proxy - no Session
at org.hibernate.collection.internal.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:582)
...
at com.example.demo.ShoppingOrderSpec$1$1.invoke(ShoppingOrderSpec.kt:35)
at com.example.demo.ShoppingOrderSpec$1$1.invoke(ShoppingOrderSpec.kt:19)
So, somehow the org.springframework.transaction.annotation.Transactional annotation does not seem to work, as removing the annotation just gives the same response.
Does anyone have any ideas on how to get @Transactional applied and respected?
You can't use JUnit Jupiter extensions with KotlinTest as they are different engines. JUnit Jupiter is an implementation on top of JUnit Platform, like KotlinTest is, but anything written specifically for Jupiter won't work with KotlinTest. Anything written for JUnit Platform should work, however.
Unfortunately, the naming choices by the JUnit team are poor imo, and so people think JUnit Jupiter is the same thing as JUnit Platform.
Anyway, those
@ExtendWith(SpringExtension::class)
@Transactional
@SpringBootTest
extensions are not going to mean anything to KotlinTest, any more than they would for Spek or whatever. ExtendWith is a Jupiter-specific annotation that tells it to use the SpringExtension class. The KotlinTest equivalent is SpringListener, which you've already wired in.
I'm not sure if @SpringBootTest will be picked up or not by Spring. Support may need to be added for that depending on what it does.
Finally, @Transactional works by creating proxies around methods, but since in more advanced testing frameworks like KotlinTest the test containers are not methods but just arbitrary functions, it won't be able to intercept them.
I think in this case, you might need to create a proper method and annotate that, or try using AnnotationSpec rather than StringSpec or whatever other spec base class you are using, since AnnotationSpec uses actual methods that you could annotate.
What's the recommended way to run a Spring Boot test where only the one subject under test is configured in the context?
If I annotate the test with
@RunWith(SpringRunner.class)
@SpringBootTest(properties = "spring.profiles.active=test")
@ContextConfiguration(classes = MyTestBean.class)
Then it seems to work - the test passes, the context starts quickly and seems to only contain the bean that I want. However, this seems like an incorrect use of the @ContextConfiguration(classes = MyTestBean.class) annotation. If I understand correctly the class that I reference is supposed to be a Configuration class, not a regular spring service bean or component for example.
Is that right? Or is this indeed a valid way to achieve this goal? I know there are more complex examples like org.springframework.boot.test.autoconfigure.json.JsonTest which use @TypeExcludeFilters(JsonExcludeFilter.class) to control the context - but this seems overkill for my use case. I just want a context with my one bean.
Clarification
I know that I can just construct the one bean I am testing as a POJO, without a Spring context test, and remove the three annotations above. But in my precise use case I am actually reliant on some of the configuration applied to the context by settings in the application-test.properties file - which is why I've made this a Spring Boot test with a profile set. From my perspective this isn't a plain unit test of a single class in isolation of the Spring context configuration - the test is reliant on certain configuration being applied (which is currently provided by the Spring Boot app properties). I can indeed test the component as a POJO by creating a new instance outside of a Spring context - I'm using constructor injection, which makes providing the necessary dependencies simple - but the test does rely on things like the log level (the test actually makes assertions on certain logs being produced), which requires that the log level is set correctly (currently done via logging.level.com.example=DEBUG in a properties file which sets up the Spring context).
For starters, reading the documentation first (e.g., the JavaDoc linked below in this answer) is a recommended best practice, since it already answers your question.
If I understand correctly the class that I reference is supposed to be
a Configuration class, not a regular spring service bean or
component for example.
Is that right?
No, that's not completely correct.
Classes provided to @ContextConfiguration are typically @Configuration classes, but that is not required.
Here is an excerpt from the JavaDoc for @ContextConfiguration:
Annotated Classes
The term annotated class can refer to any of the following.
A class annotated with @Configuration
A component (i.e., a class annotated with @Component, @Service, @Repository, etc.)
A JSR-330 compliant class that is annotated with javax.inject annotations
Any other class that contains @Bean-methods
Thus you can pass any "annotated class" to @ContextConfiguration.
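For example, a minimal sketch that reuses the MyTestBean name from the question (assuming it is itself annotated as a component), stripped of the Boot-specific annotations for brevity:
import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@ContextConfiguration(classes = MyTestBean.class)
public class MyTestBeanTests {

    @Autowired
    private MyTestBean myTestBean;   // the only user bean registered in this context

    @Test
    public void beanIsAvailable() {
        assertNotNull(myTestBean);
    }
}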
Or is this indeed a valid way to achieve this goal?
It is in fact a valid way to achieve that goal; however, it is also a bit unusual to load an ApplicationContext that contains a single user bean.
Regards,
Sam (author of the Spring TestContext Framework)
It is definitely a reasonable and normal thing to only test a single class in a unit test.
There is no problem including just one single bean in your test context. Really, a @Configuration is (typically) just a collection of beans. You could hypothetically create a @Configuration class just with MyTestBean, but that would really be unnecessary, as you can accomplish the same thing by listing your contextual beans with @ContextConfiguration#classes.
However, I do want to point out that for only testing a single bean in a true unit test, best practice ideally leans towards setting up the bean via the constructor and testing the class that way. This is a key reason why the Spring guys recommend using constructor over property injection. See the section entitled Constructor-based or setter-based DI of this article, Oliver Gierke's comment (the head of the Spring Data project), and Google for more information. This is probably the reason you're getting a weird feeling about setting up the context for the one bean!
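For comparison, the plain-constructor style for a true unit test of that one bean would look something like this (SomeDependency and the constructor signature are purely illustrative):
import org.junit.Test;
import org.mockito.Mockito;

public class MyTestBeanTest {

    @Test
    public void doesItsJob() {
        SomeDependency dependency = Mockito.mock(SomeDependency.class);  // hypothetical collaborator
        MyTestBean bean = new MyTestBean(dependency);                    // no Spring context involved

        // assert on the bean's behaviour here
    }
}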
You can also use ApplicationContextRunner to create your context using a test configuration of your choice (even with one bean if you like, but as other people have already mentioned, for one bean it's more reasonable to use the constructor the classical way without any Spring magic).
What I like about this way of testing is the fact that tests run very fast, since you don't load the whole context. This method is best used when the tested bean doesn't have any @Autowired dependencies; otherwise it's more convenient to use @SpringBootTest.
Below is an example that illustrates the way you can use it to achieve your goal:
class MyTest {

    @Test
    void test_configuration_should_contains_my_bean() {
        new ApplicationContextRunner()
                .withUserConfiguration(TestConfiguration.class)
                .run(context -> {
                    assertThat(context.getBean(MyTestBean.class)).isNotNull();
                });
    }

    @Configuration
    public static class TestConfiguration {

        @Bean
        public MyTestBean myTestBean() {
            return new MyTestBean();
        }
    }
}
I'm looking for some guidance on the best way of writing integration tests (i.e. tests for the entire Spring Boot application) for Spring Cloud Task.
Based on existing documentation and samples I see two approaches for this:
1) Use the standard @SpringBootTest with @TestPropertySource(properties = {"spring.cloud.task.closecontext_enable=false"})
as described here
http://docs.spring.io/spring-cloud-task/docs/1.2.0.M2/reference/htmlsingle/#_writing_your_test
This seems to allow effectively only one test per test class, as the task is run when the Spring context is initialized, i.e. once per test class. However, @Autowired injection of beans from the context into the test class should work, e.g. to check the results of the task or to examine the state of the task repository (a rough sketch follows the two numbered options below).
2) Use SpringApplication.run(MyTaskApplication.class, myArguments); in each test method as in the example here
https://github.com/spring-cloud/spring-cloud-task/blob/master/spring-cloud-task-samples/batch-job/src/test/java/io/spring/BatchJobApplicationTests.java
This allows me to write multiple tests in the test class each with potentially different spring properties or batch job parameters.
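For reference, a bare-bones sketch of approach (1) might look like the following. The TaskExplorer injection is an assumption (any bean from the context, e.g. a JdbcTemplate, could be used instead to inspect the results); the property is the same closecontext_enable one from the docs above.
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.task.repository.TaskExplorer;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = MyTaskApplication.class,
        properties = {"spring.cloud.task.closecontext_enable=false"})
public class MyTaskApplicationTests {

    @Autowired
    private TaskExplorer taskExplorer;   // query the task repository after the run

    @Test
    public void taskHasRun() {
        // the task has already executed as part of context startup (once per test class);
        // assert on the recorded task execution(s) here via taskExplorer
    }
}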
The main problem I have with either approach is that I can't see how to get access (e.g. @Autowire) to beans in the context, such as JdbcTemplate (e.g. to insert test input data for a job into an embedded DB) or RestTemplate (to set up expectations using MockRestServiceServer),
after these beans are created but BEFORE the task is run - is this possible? If not, it's hard to see how to write meaningful integration tests for tasks.
What I've done for now is a variation on approach (2) above (i.e. I can run the task more than once / have multiple tests in the same test class).
I'm using
SpringApplication application = new SpringApplication(new Object[] {MyTaskApplication.class, TestConfig.class});
TestConfig is defined with @TestConfiguration in the test class and contains mock beans etc. overriding the actual beans.
I then use
application.addListeners()
to add a ContextRefreshedEvent listener, which allows me to set expectations on mocks (or execute JDBC calls) after the beans are created but before the task is run.
(I have a generic Listener class that allows me to pass in the behaviour as a lambda or method reference per bean)
Then run the task with
application.run(args);
(can use different args in different tests)
I can also pass "--spring.cloud.task.closecontext_enable=false" as an argument to keep the application open if I want to verify mocks / db state after the test runs. In this case I close it manually at the end of the test.
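Putting that together, the shape of one of my tests is roughly the following fragment (the listener body, SQL and job parameter are illustrative placeholders):
@Test
public void runsTaskAgainstPreparedData() {
    SpringApplication application = new SpringApplication(
            new Object[] {MyTaskApplication.class, TestConfig.class});

    // fires after the beans are created but before the task itself runs
    application.addListeners((ApplicationListener<ContextRefreshedEvent>) event -> {
        JdbcTemplate jdbcTemplate = event.getApplicationContext().getBean(JdbcTemplate.class);
        jdbcTemplate.update("insert into some_input_table (id) values (1)");   // placeholder SQL
    });

    ConfigurableApplicationContext context =
            application.run("--spring.cloud.task.closecontext_enable=false", "--some.job.param=x");

    // verify mocks / db state here, then close manually since closecontext_enable is false
    context.close();
}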
If this seems a sensible approach it might be useful if Spring Cloud Task itself provided some sort of generic listener or hook to allow the setting of test state between bean creation and task execution.