I am using @SpringBatchTest to run e2e tests on my Spring Batch application.
Everything works well except when I run both of my test classes (I divided my tests into positive and negative test classes) together. The first one runs and its tests pass, but the second fails when trying to launch the context again. Since the context is already launched, the tests fail with an InstanceAlreadyExistsException.
Both my test classes are defined with the following annotations:
@RunWith(SpringRunner.class)
@SpringBatchTest
@EnableAutoConfiguration
@ContextConfiguration(classes = {MyTestConfiguration.class})
@TestExecutionListeners({MockitoTestExecutionListener.class, DependencyInjectionTestExecutionListener.class, DirtiesContextTestExecutionListener.class})
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
EDIT:
In general, what my test does is:
@RunWith(SpringRunner.class)
@SpringBatchTest
@EnableAutoConfiguration
@ContextConfiguration(classes = {HardDeleteTestConfiguration.class})
@TestExecutionListeners({MockitoTestExecutionListener.class, DependencyInjectionTestExecutionListener.class, DirtiesContextTestExecutionListener.class})
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
public class TestClass1 {

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils;

    @Autowired
    private JobRepositoryTestUtils jobRepositoryTestUtils;

    @Before
    public void setUp() {
        jobRepositoryTestUtils.removeJobExecutions();
    }

    @Test
    public void springBatchTest() throws Exception {
        // preparing data for test
        // ...
        JobExecution jobExecution =
                jobLauncherTestUtils.launchJob(createJobParams("myKey", "myValue"));
        // Perform assertions
        // ...
    }

    private JobParameters createJobParams(String key, String value) {
        JobParameters uniqueJobParameters = jobLauncherTestUtils.getUniqueJobParameters();
        JobParametersBuilder paramsBuilder = new JobParametersBuilder(uniqueJobParameters);
        paramsBuilder.addString(key, value);
        return paramsBuilder.toJobParameters();
    }
}
TestClass2 is the same as TestClass1, differing only in data preparation and assertions.
Also my test properties are as follows:
# Spring Boot configuration
spring.main.allow-bean-definition-overriding=true
spring.batch.job.enabled=false
# Spring Batch configuration
spring.batch.job.names=myBatchJob
I have tried all combinations of true and false for the previous flags but it does not make any difference.
Since it is already launched, the tests fail on InstanceAlreadyExistsException.
This means the datasource is reused between tests, and when the second test runs, it will try to launch the same job instance.
In your createJobParams() method, you can use JobLauncherTestUtils#getUniqueJobParameters to create unique job parameters and run a different job instance for each test.
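For illustration, a minimal sketch of what that looks like inside a test method, assuming jobLauncherTestUtils is autowired as in the question (the key/value strings are placeholders):

    // Derive unique parameters from JobLauncherTestUtils so every launch creates a new JobInstance.
    JobParameters params = new JobParametersBuilder(jobLauncherTestUtils.getUniqueJobParameters())
            .addString("myKey", "myValue")
            .toJobParameters();
    JobExecution jobExecution = jobLauncherTestUtils.launchJob(params);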
Eventually we realized it was an in-house framework wrapping Spring that caused the problem (stupid static instantiations on context loading and such).
To solve it, we used @MockBean on one problematic class and @EnableAutoConfiguration(exclude = ProblematicConfiguration.class) in the annotations located above the test class.
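Roughly, a hedged sketch of that arrangement (ProblematicConfiguration and ProblematicService are placeholder names for the in-house classes, not real ones):

    @RunWith(SpringRunner.class)
    @SpringBatchTest
    // Exclude the in-house auto-configuration that triggers the static instantiations (placeholder name).
    @EnableAutoConfiguration(exclude = ProblematicConfiguration.class)
    @ContextConfiguration(classes = {MyTestConfiguration.class})
    public class TestClass1 {

        // Replace the problematic bean with a mock so it never initializes for real (placeholder name).
        @MockBean
        private ProblematicService problematicService;

        // ... tests as shown above ...
    }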
Related
I have a set of JUnit test cases for a Spring Boot application which are annotated with the @EnableCaching annotation. When these JUnit tests are run individually, everything works fine. But when they are run together with the other JUnit test classes, the @EnableCaching annotation seems to get ignored.
I'm using the @DirtiesContext annotation to clean the context after each test method, but this doesn't seem to make any difference to the issue mentioned above.
Please let me know whether @EnableCaching can be used in JUnit tests or not.
Please find below a sample of the JUnit test class.
@EnableCaching
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
@TestPropertySource(properties = { "a,b,c" })
@DirtiesContext(classMode = ClassMode.AFTER_EACH_TEST_METHOD)
public class SampleTest {

    @Autowired
    private TestController controller;

    @BeforeEach
    void setUpTest() {
        // setup steps
    }

    @Test
    void testCacheable() throws Exception {
        String result = controller.testCache();
    }
}
@RestController
public class TestController {

    private static final Logger logger = LoggerFactory.getLogger(TestController.class);

    @RequestMapping("/testCache")
    @Cacheable(cacheNames = "cache")
    public String testCache() throws InterruptedException {
        logger.info("Returning NOT from cache");
        return "cache";
    }
}
I have a sample project in which I experiment with different technologies.
I have the following setup:
Spring Boot 2.3.4.RELEASE
Flyway 7.0.1
Testcontainers 1.15.0-rc2
Junit 5.7.0
How can I test the Repository layer with testcontainer-junit5?
Example of code I have now for CompanyRepositoryTest.java:
@ExtendWith(SpringExtension.class)
@Testcontainers
public class CompanyRepositoryTest {

    @Autowired
    private CompanyRepository companyRepository;

    @Container
    public MySQLContainer mysqlContainer = new MySQLContainer()
            .withDatabaseName("foo")
            .withUsername("foo")
            .withPassword("secret");

    @Test
    public void whenFindByIdExecuted_thenNullReturned() throws Exception {
        assertEquals(companyRepository.findById(1L), Optional.ofNullable(null));
    }

    @Test
    public void whenFindAllExecuted_thenEmptyListReturned() {
        assertEquals(companyRepository.findAll(), new ArrayList<>());
    }
}
When I add @SpringBootTest, I need to set up the whole context and I run into application context loading issues.
The question is: can anyone demystify what the @Testcontainers annotation does? What is the best practice or correct way to use it while testing the repository?
The JUnit 5 extension provided by the @Testcontainers annotation scans for any containers declared with the @Container annotation, and then starts and stops those containers for your tests. Containers declared as static fields are shared by all tests, while containers declared as instance fields are started and stopped for every test.
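As an illustration, a minimal sketch contrasting the two lifecycles (the image tag is an arbitrary choice, not something required by the question):

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;
    import org.testcontainers.containers.MySQLContainer;
    import org.testcontainers.junit.jupiter.Container;
    import org.testcontainers.junit.jupiter.Testcontainers;

    @Testcontainers
    class ContainerLifecycleTest {

        // Shared container: started once before the first test and stopped after the last one.
        @Container
        private static final MySQLContainer<?> SHARED = new MySQLContainer<>("mysql:8.0");

        // Per-test container: restarted for every test method.
        @Container
        private final MySQLContainer<?> perTest = new MySQLContainer<>("mysql:8.0");

        @Test
        void bothContainersAreRunning() {
            assertTrue(SHARED.isRunning());
            assertTrue(perTest.isRunning());
        }
    }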
If you are using Spring Boot, the easiest way to set up Testcontainers for your tests is probably to provide the properties in application-test.yml. This will use the datasource JDBC URL to launch the Testcontainers container. Refer to the Testcontainers JDBC support for more information.
You can also test just the repository layer by using @DataJpaTest instead of @SpringBootTest:
@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@ActiveProfiles("test")
class CompanyRepositoryTest { }
Your application-test.yml file:
spring:
datasource:
url: jdbc:tc:mysql:8.0://hostname/databasename
driver-class-name: org.testcontainers.jdbc.ContainerDatabaseDriver
In some cases you might also want to use the @TestPropertySource annotation instead:
@DataJpaTest
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@TestPropertySource(
    properties = {
        "spring.datasource.url = jdbc:tc:mysql:8.0://hostname/test-database",
        "spring.datasource.driver-class-name = org.testcontainers.jdbc.ContainerDatabaseDriver"
    }
)
class CompanyRepositoryTest { }
Please note that the hostname and test-database are not actually used anywhere.
You said: "When I add @SpringBootTest, I need to set up all the context and have some Application load context issues?"
If you'd like to try an alternative and Testcontainers is not mandatory, you can do it differently.
You do not need to load everything when using the @SpringBootTest annotation; you can specify which classes are needed, such as
@SpringBootTest(classes = { TheService.class })
or use the @Import annotation,
and mock the others, such as
@MockBean
MyService service;
For the database connection you can use annotations such as the following (put together in the sketch after this list):
@ActiveProfiles("my-profile-for-jpa-test")
@DataJpaTest
@EnableJpaAuditing
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
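Put together, a hedged sketch of a repository test along those lines (the profile name is an assumption; CompanyRepository is assumed to be a JpaRepository as in the question):

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.autoconfigure.jdbc.AutoConfigureTestDatabase;
    import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
    import org.springframework.test.context.ActiveProfiles;

    @ActiveProfiles("my-profile-for-jpa-test") // assumed profile providing the datasource properties
    @DataJpaTest
    @AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
    class CompanyRepositoryTest {

        @Autowired
        private CompanyRepository companyRepository;

        @Test
        void findAllReturnsNothingOnAnEmptyDatabase() {
            // With no data inserted, the repository should return an empty list.
            assertTrue(companyRepository.findAll().isEmpty());
        }
    }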
EDIT: I feel like this should be a comment, but I wanted to address the SpringBootTest part of the question with proper formatting.
Here is an example of how I configured Liquibase (a framework similar to Flyway) with MySQL inside Spring:
@DataJpaTest
@TestPropertySource(properties = {"spring.jpa.hibernate.ddl-auto=validate"})
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
@ContextConfiguration(initializers = { MySqlLiquibaseBaseIT.Initializer.class })
@Testcontainers
public class MySqlLiquibaseBaseIT {

    @Container
    public static MySQLContainer<?> mysql = new MySQLContainer<>(
        DockerImageName
            .parse(MySQLContainer.NAME)
            .withTag("5.7.22"));

    @Configuration
    @EnableJpaRepositories
    @EntityScan
    static class Initializer implements ApplicationContextInitializer<ConfigurableApplicationContext> {

        @Override
        public void initialize(ConfigurableApplicationContext configurableApplicationContext) {
            TestPropertyValues.of(
                "spring.datasource.url=" + mysql.getJdbcUrl(),
                "spring.datasource.username=" + mysql.getUsername(),
                "spring.datasource.password=" + mysql.getPassword(),
                "spring.datasource.driver-class-name=" + mysql.getDriverClassName())
                .applyTo(configurableApplicationContext.getEnvironment());
        }

        @Bean
        public SpringLiquibase springLiquibase(DataSource dataSource) {
            SpringLiquibase liquibase = new SpringLiquibase();
            liquibase.setDropFirst(true);
            liquibase.setDataSource(dataSource);
            liquibase.setChangeLog("classpath:/db/changelog/db.changelog-master.yml");
            return liquibase;
        }
    }
}
Full MySqlLiquibaseBaseIT.java
As per the docs:
The Testcontainers extension finds all fields that are annotated with @Container and calls their container lifecycle methods. Containers declared as static fields will be shared between test methods. They will be started only once before any test method is executed and stopped after the last test method has executed. Containers declared as instance fields will be started and stopped for every test method.
So in your case it will recreate a container for every test method; the extension is only responsible for starting and stopping the container. If you need some test data, that has to be loaded manually. As I see you have Flyway, that should do it.
What "context issues" are you talking about?
Repositories are usually not tested separately; you can just test the services which call the repository methods instead of writing tests for both. If you want to test repos anyway, fill the database with some data in @Before (or JUnit 5's @BeforeEach), as sketched below.
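For example, a small sketch of seeding data per test, assuming a Company entity with a hypothetical name-only constructor:

    @BeforeEach
    void seedData() {
        // Hypothetical constructor; adapt to the real Company entity.
        companyRepository.save(new Company("ACME"));
    }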
If you have more questions please ask.
I have an issue with a spring integration test.
The behavior:
When I run the test below in isolation, it passes.
However, when all tests are run, many of them, including the one below, fail.
When I ignore the test below and run all tests, all of them pass.
I haven't included the error stack trace because it is highly related to our business logic, and I suspect the error is related to my usage of Spring Boot Test's @SpyBean.
Here is the test:
@RunWith(SpringRunner.class)
@ActiveProfiles(profiles = "test")
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
...
@Autowired
private TestRestTemplate restTemplate;

@Autowired
private DataKeyStore dataKeyStore;

@SpyBean
private TokenTools tokenTools;
...
@Test
public void myTest() throws Exception {
    doReturn("someGeneratedToken")
        .doReturn("someGeneratedToken")
        .doCallRealMethod()
        .when(tokenTools)
        .createToken(any(TokenProfile.class), anyString(), anyString());
...
Please note that DataKeyStore is a dependency of TokenTools.
As I said above, I suspect the tests are stepping on each other and my @SpyBean somehow leaks into other test classes...
My question is: how can I make sure this test does not step on the other tests? I have tried the @DirtiesContext annotation to no avail...
Also, what puzzles me is that the @SpyBean is already reset (as per the documentation/javadoc).
Can anyone please help?
Edit: Using my IDE to debug the tests indicates that TokenTools is instantiated only twice for all tests: once at the initialization of the tests and a second time for creating the @SpyBean for the test above. The remaining tests run after the test above use the second instance, i.e. the @SpyBean instance...
I recently ran into the same issue. Make sure to set the right classMode for your @DirtiesContext annotation.
By default, @DirtiesContext will reset the @SpyBean after the complete test class. You probably want to reset it before or after each test method.
So add @DirtiesContext(classMode = DirtiesContext.ClassMode.BEFORE_EACH_TEST_METHOD) or @DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD) to your test class.
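A minimal sketch of the resulting test class, reusing the names from the question (the class name itself is arbitrary):

    @RunWith(SpringRunner.class)
    @ActiveProfiles(profiles = "test")
    @SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
    // Discard the context (and therefore the spy) after every test method.
    @DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
    public class TokenToolsIT {

        @SpyBean
        private TokenTools tokenTools;

        // ... test methods as in the question ...
    }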
I can confirm that @DirtiesContext didn't work for us either. We had problems initializing the DB (using Liquibase) for the new context after the old context was closed (by the @DirtiesContext annotation).
We ended up naming the Spring test context differently for tests that fake some beans:
E.g.:
@RunWith(SpringRunner.class)
@SpringBootTest
@ContextConfiguration(classes = SpringBootApp.class, name = "mainContext")
public class TestThatDoesntFakeBeans {
}

@RunWith(SpringRunner.class)
@SpringBootTest
@ContextConfiguration(classes = SpringBootApp.class, name = "contextWithFakeBean")
public class TestThatFakeBeans {
    @SpyBean
    //...
}
This way a separate Spring context is created for each name. Contexts with the same name are reused by tests. But of course you need to make sure that tests with the same context name don't affect each other.
@SpyBean seems to not be reset after each test, which leads to unusual behavior. I would suggest using Mockito's @Spy instead and checking whether the problem still persists.
import org.mockito.Spy;
....
@RunWith(SpringRunner.class)
@ActiveProfiles(profiles = "test")
@SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT)
...
@Autowired
private TestRestTemplate restTemplate;

@Autowired
private DataKeyStore dataKeyStore;

@Spy
private TokenTools tokenTools;
...
@Test
public void myTest() throws Exception {
    doReturn("someGeneratedToken")
        .doReturn("someGeneratedToken")
        .doCallRealMethod()
        .when(tokenTools)
        .createToken(any(TokenProfile.class), anyString(), anyString());
...
I use Spring Batch with annotations only. I want to test a step which is configured like so:
@Bean("findMe")
@Qualifier("findMe")
public Step findMe() {
    return stepBuilderFactory.get("findMe"). ... some step configuration
}
Test:
@Test
public void shouldRunTheJob() {
    JobLauncherTestUtils.launchJob("findMe");
}
I was not able to address the job; apart from that, I was able to test all the other levels. How can I address a job annotated like this?
From what I understand from your question, you want to test a step and not a job.
Try using the following sample test class for your step test.
@RunWith(SpringRunner.class)
@ContextConfiguration(classes = YourClassToTest.class)
public class StepTest {

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils;

    @Test
    public void testStep() throws Exception {
        JobExecution jobExecution = jobLauncherTestUtils.launchStep("findMe");
        // your test case, e.g. assert something on the jobExecution
    }
}
For more information, please refer to the Spring Batch docs.
If your test class has this annotation:
@SpringApplicationConfiguration(classes = {Application.class})
this will cause Application.class, which implements the CommandLineRunner interface, to run the required method
public void run(String... args) throws Exception
I still think this is, mostly, unwanted behaviour, since in your test environment you may not want to launch the entire application.
I have in mind two solutions to circumvent this problem:
remove the CommandLineRunner interface from my Application class
have a different context for testing
Both of these solutions require a lot of coding.
Do you have a more convenient solution?
Jan's solution can be achieved more easily.
In your test class, activate the "test" profile:
@RunWith(SpringJUnit4ClassRunner.class)
@ActiveProfiles("test")
public class MyFancyTest {}
In your CommandLineRunner set the profile to NOT test:
@Component
@Profile("!test")
public class JobCommandLineRunner implements CommandLineRunner {}
Then you don't have to manually set the profile in the Application.
As mentioned in the Spring documentation, you can use @ContextConfiguration with a special initializer:
ConfigDataApplicationContextInitializer is an ApplicationContextInitializer that you can apply to your tests to load Spring Boot application.properties files. You can use it when you do not need the full set of features provided by @SpringBootTest.
In this example, anyComponent is initialized and properties are injected, but the run(args) method won't be executed. (Application.class is my main Spring entry point.)
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = Application.class,
        initializers = ConfigDataApplicationContextInitializer.class)
public class ExtractorTest {

    @Autowired
    AnyComponent anyComponent;

    @Test
    public void testAnyComponent() {
        anyComponent.anyMethod(anyArgument);
    }
}
You can define a test configuration in the same package as your application that looks exactly the same, except that it excludes beans implementing CommandLineRunner. The key here is @ComponentScan.excludeFilters:
@Configuration
@ComponentScan(excludeFilters = @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE, value = CommandLineRunner.class))
@EnableAutoConfiguration
public class TestApplicationConfiguration {
}
Then, just replace the configuration on your test:
@RunWith(SpringJUnit4ClassRunner.class)
@SpringBootTest(classes = TestApplicationConfiguration.class)
public class SomeApplicationTest {
    ...
}
No CommandLineRunner will be executed now, because they are not part of the configuration.
I'm a bit late to the party, but a reasonable approach is to mark the bean with @ConditionalOnProperty, e.g.
@Bean
@ConditionalOnProperty(prefix = "job.autorun", name = "enabled", havingValue = "true", matchIfMissing = true)
public CommandLineRunner myRunner() {...}
The following annotation will then disable it in tests:
@SpringBootTest(properties = {"job.autorun.enabled=false"})
If you have a mocking framework available (e.g. Mockito, which backs Spring Boot's @MockBean), you can create a mock instance of the CommandLineRunner implementation, more or less disabling it:
@MockBean
private TextProcessor myProcessor;
The previous answers didn't work for me. I ended up using different profiles. Example for the init method in Spring Boot:
SpringApplication app = new SpringApplication(AppConfig.class);
app.setAdditionalProfiles("production");
app.run(args);
This is not executed during the tests so we're safe here.
All tests have their own profile "test" (which is useful in many other ways, too):
@RunWith(SpringJUnit4ClassRunner.class)
@ActiveProfiles("test")
public class MyFancyTest {}
The command-line runner is annotated with the "production" profile so the tests ignore it:
@Component
@Profile("production")
public class JobCommandLineRunner implements CommandLineRunner {}
I solved this by not implementing CommandLineRunner. Just get a bean from the context and call a method on it, passing argv. That way you get the same result, and the application won't start the work automatically when running the tests.
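A minimal sketch of that approach, where MyRunner and its run method are placeholders for the bean that does the actual work:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.context.ConfigurableApplicationContext;

    @SpringBootApplication
    public class Application {

        public static void main(String[] args) {
            // Start the context; since nothing implements CommandLineRunner, nothing runs automatically.
            ConfigurableApplicationContext context = SpringApplication.run(Application.class, args);
            // Explicitly invoke the worker bean, passing the command-line arguments along.
            context.getBean(MyRunner.class).run(args);
        }
    }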