I have the following JUnit test, which is essentially a minimal version of a production test.
@Autowired
private MessageChannel messageChannel;

@SpyBean
@Autowired
private Handler handler;

@Test
public void testPublishing() throws InterruptedException {
    SomeEvent event = new SomeEvent(); // implements Message
    messageChannel.send(event);
    Thread.sleep(2000); // sleep 2 seconds to let async processing finish
    Mockito.verify(handler, times(1))
           .someMethod(Mockito.any());
}
The service activator is the someMethod method inside the Handler class. For some reason this test fails stating that someMethod was invoked twice even though only a single message was published to the channel. I even added code to someMethod to print the memory address of the message consumed and both invocations are the exact same address. Any idea what could cause this?
NOTE: I built this basic code example as a test case and it verifies a single invocation, as I'd expect. But what could possibly (in my production system test) cause the send operation to result in 2 separate invocations of the service activator?
NOTE2: I added a print statement inside my real service activator. When I have the @SpyBean annotation on the handler and use the Mockito.verify(...) call, I get two print-outs of the input. However, if I remove the annotation and the verify call, I only get one print-out. This does not happen in the simple demo I shared here.
NOTE3: It seems to be some sort of weird @SpyBean behavior, as I am only seeing the single event downstream. No idea why Mockito is giving me trouble on this.
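As an aside, the fixed Thread.sleep(2000) in the test is itself a common source of flakiness in async assertions; Mockito's verify(handler, Mockito.timeout(2000).times(1)) is usually a better fit. The same idea can be sketched with the plain JDK using a CountDownLatch (the handler and channel below are made-up stand-ins, not the asker's classes):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    // Hypothetical handler: counts invocations and releases a latch when called.
    static final AtomicInteger invocations = new AtomicInteger();
    static final CountDownLatch latch = new CountDownLatch(1);

    static void someMethod(Object message) {
        invocations.incrementAndGet();
        latch.countDown();
    }

    public static void main(String[] args) throws Exception {
        // Simulates an async message channel dispatching one event to the handler.
        ExecutorService channel = Executors.newSingleThreadExecutor();
        channel.submit(() -> someMethod(new Object()));

        // Wait for the handler instead of sleeping a fixed 2 seconds.
        boolean handled = latch.await(2, TimeUnit.SECONDS);
        channel.shutdown();

        System.out.println("handled=" + handled + ", invocations=" + invocations.get());
        // prints "handled=true, invocations=1"
    }
}
```

The test returns as soon as the handler fires, rather than always paying the full timeout.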
I want to dynamically schedule a task based on user input in a given popup.
The user should be able to schedule multiple tasks, and each task should be repeatable.
I have tried to follow some of the possibilities offered by Spring Boot using the examples below:
example 1: https://riteshshergill.medium.com/dynamic-task-scheduling-with-spring-boot-6197e66fec42
example 2: https://www.baeldung.com/spring-task-scheduler#threadpooltaskscheduler
The idea of example 1 is to send an HTTP POST request that then starts a scheduled task, as below:
Each HTTP call leads to console output as below:
But I am still not able to reach the needed behaviour. What I get as a result is task1 executing when invoked by action1, but as soon as task2 is started by action2, task1 stops executing.
Any idea how the needed logic could be implemented?
Example 1 demonstrates how to schedule a task from a REST API request, and Example 2 shows how to create a ThreadPoolTaskScheduler as the TaskScheduler. But you are missing an important point here: even if you create a thread pool, the TaskScheduler is not aware of it, so it needs to be configured to use that pool. For that, implement the SchedulingConfigurer interface. Here is an example:
@Configuration
@EnableScheduling
public class TaskConfigurer implements SchedulingConfigurer {

    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        // Create your ThreadPoolTaskScheduler here.
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(10); // enough threads to run several repeatable tasks concurrently
        scheduler.initialize();
        taskRegistrar.setTaskScheduler(scheduler);
    }
}
After creating such a configuration class, everything should work fine.
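The symptom described (task1 stalling as soon as task2 starts) is typical of a single-threaded scheduler, and the pool-size effect can be reproduced with the plain JDK, on which Spring's ThreadPoolTaskScheduler is built. A minimal sketch, with made-up task names:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class MultiTaskDemo {
    static final AtomicInteger task1Runs = new AtomicInteger();
    static final AtomicInteger task2Runs = new AtomicInteger();

    public static void main(String[] args) throws Exception {
        // A pool with 2 threads lets both periodic tasks run concurrently;
        // with a pool size of 1, one long-running task would starve the other.
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

        scheduler.scheduleAtFixedRate(task1Runs::incrementAndGet, 0, 50, TimeUnit.MILLISECONDS);
        scheduler.scheduleAtFixedRate(task2Runs::incrementAndGet, 0, 50, TimeUnit.MILLISECONDS);

        Thread.sleep(500); // let both tasks fire a few times
        scheduler.shutdown();

        // Both counters advance; neither task blocks the other.
        System.out.println("task1 runs: " + task1Runs.get());
        System.out.println("task2 runs: " + task2Runs.get());
    }
}
```

The same reasoning applies to the Spring scheduler: the default TaskScheduler has a single thread, which is why a properly sized ThreadPoolTaskScheduler matters.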
I have seen this code many times but don't know the advantages/disadvantages of it. In Spring Boot applications, I have seen people define this bean:
@Bean
@Qualifier("heavyLoadBean")
public ExecutorService heavyLoadBean() {
    return Executors.newWorkStealingPool();
}
Then whenever a CompletableFuture object is created in the service layer, that heavyLoadBean is used.
public CompletionStage<T> myService() {
    return CompletableFuture.supplyAsync(() -> doingVeryBigThing(), heavyLoadBean);
}
Then the controller will call the service.
@GetMapping("/some/path")
public CompletionStage<SomeModel> doIt() {
    return service.myService();
}
I don't see the point of doing that. Tomcat in Spring Boot has some number of threads, and all of them are used to process user requests. What is the point of using a different thread pool here? Either way, the user expects to see a response coming back.
CompletableFuture is used to process tasks asynchronously. Suppose your application has two tasks that are independent of each other; then you can execute the two tasks concurrently (to reduce processing time):
public void myService() {
    CompletableFuture<?> f1 = CompletableFuture.supplyAsync(() -> doingVeryBigThing(), heavyLoadBean);
    CompletableFuture<?> f2 = CompletableFuture.supplyAsync(() -> doingAnotherBigThing(), heavyLoadBean);
    CompletableFuture.allOf(f1, f2).join(); // wait for both tasks to finish
}
In the above example, doingVeryBigThing() and doingAnotherBigThing() are two tasks independent of each other, so they will be executed concurrently on two different threads from the heavyLoadBean thread pool. The example below will print two different thread names:
public void myService() {
    CompletableFuture.runAsync(() -> System.out.println(Thread.currentThread().getName()), heavyLoadBean);
    CompletableFuture.runAsync(() -> System.out.println(Thread.currentThread().getName()), heavyLoadBean);
}
If you don't provide a thread pool, by default the supplied Supplier will be executed by ForkJoinPool.commonPool(). From the Javadoc:
public static <U> CompletableFuture<U> supplyAsync(Supplier<U> supplier)
Returns a new CompletableFuture that is asynchronously completed by a task running in the ForkJoinPool.commonPool() with the value obtained by calling the given Supplier.
public static <U> CompletableFuture<U> supplyAsync(Supplier<U> supplier, Executor executor)
Returns a new CompletableFuture that is asynchronously completed by a task running in the given executor with the value obtained by calling the given Supplier.
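A runnable sketch of the executor variant (the pool and thread names below are made up): the Supplier runs on a thread from whatever Executor you pass in, which is the whole point of the heavyLoadBean bean.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDemo {
    public static String runOnCustomPool() throws Exception {
        // Custom pool whose threads are all named "heavy-worker"
        ExecutorService heavyLoadPool = Executors.newFixedThreadPool(2,
                r -> new Thread(r, "heavy-worker"));
        try {
            // The Supplier executes on a thread from the supplied executor,
            // not on the caller's thread or the common pool.
            return CompletableFuture
                    .supplyAsync(() -> Thread.currentThread().getName(), heavyLoadPool)
                    .get();
        } finally {
            heavyLoadPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("custom pool thread: " + runOnCustomPool());
        // prints "custom pool thread: heavy-worker"
    }
}
```

Omitting the second argument would instead report a ForkJoinPool.commonPool worker thread.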
Please check the comments in the main post and the other solutions; they will give you more understanding of Java 8's CompletableFuture. I just don't feel the right answer was given, though.
From our discussions, I can see that the purpose of having a separate thread pool, instead of using the default one, is that the default pool is also used by the main web server (Spring Boot's embedded Tomcat). Let's say it has 8 threads.
If we use up all 8 threads, the server appears unresponsive. However, if you use a different thread pool and exhaust that pool with your long-running processes, the errors stay confined to that pool, and the server can still respond to other user requests.
Correct me if I'm wrong.
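That isolation idea (sometimes called a bulkhead) can be sketched with the plain JDK: saturating a dedicated pool with slow work leaves a second pool, standing in for the server's request threads, free to respond. The pool names below are made up:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BulkheadDemo {
    public static String demo() throws Exception {
        // Dedicated pool for heavy work, separate pool standing in for the server.
        ExecutorService heavyPool = Executors.newFixedThreadPool(2);
        ExecutorService serverPool = Executors.newFixedThreadPool(2);
        try {
            // Saturate the heavy pool: more long-running tasks than threads.
            for (int i = 0; i < 4; i++) {
                heavyPool.submit(() -> {
                    try { Thread.sleep(5_000); } catch (InterruptedException ignored) {}
                });
            }
            // The "server" pool is unaffected and answers immediately.
            return serverPool.submit(() -> "still responsive").get(1, TimeUnit.SECONDS);
        } finally {
            heavyPool.shutdownNow();
            serverPool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // prints "still responsive"
    }
}
```

If both kinds of work shared one pool, the quick task would queue behind the slow ones and the get(1, TimeUnit.SECONDS) would time out.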
I'm trying to perform some checks at the startup of a Spring web application (e.g. check that the DB version is as expected). If the checks fail, the servlet should be killed (or better yet, never started) so as to prevent it from serving any pages. Ideally, the containing Tomcat/Netty/whatever service should also be killed (although this looks more tricky).
I can't call System.exit because my startup check depends on a lot of services that should be safely shut down (e.g. DB connections, etc...).
I found this thread, which suggests calling close on the spring context. However, except for reporting exceptions, spring merrily continues to start up the servlet (see below).
I've looked into the Java Servlet documentation -- it says not to call destroy on the servlet - and I've no idea whether I'd be calling Servlet.destroy from methods where the Servlet object appears further up the stack (don't want to eat my own tail). In fact, I'd rather the servlet was never created in the first place. Better to run my at-startup checks first, before starting any web-serving.
Here's what I have...
@Service
class StartupCheckService extends InitializingBean {
  @Autowired var a: OtherServiceToCheckA = null
  @Autowired var b: OtherServiceToCheckB = null

  override def afterPropertiesSet = {
    try {
      checkSomeEssentialStuff()
    } catch {
      case e: Any => {
        // DON'T LET THE SERVICE START!
        val ctx = getTheContext()
        ctx.close()
        throw e
      }
    }
  }
}
The close call causes the error:
BeanFactory not initialized or already closed - call 'refresh' before accessing beans via the ApplicationContext.
presumably because you shouldn't call close while bean initialization is happening (and refresh is likely to put us into an infinite loop). Here's my startup code...
class WebAppInitializer extends WebApplicationInitializer {
  def onStartup(servletContext: ServletContext): Unit = {
    val ctx = new AnnotationConfigWebApplicationContext()
    // Includes StartupCheckService
    ctx.register(classOf[MyAppConfig])
    ctx.registerShutdownHook() // add a shutdown hook for the above context
    // Can't access StartupCheckService bean here.
    val loaderListener = new ContextLoaderListener(ctx)
    // Make context listen for servlet events
    servletContext.addListener(loaderListener)
    // Make context know about the servletContext
    ctx.setServletContext(servletContext)
    val servlet: Dynamic = servletContext.addServlet(DISPATCHER_SERVLET_NAME, new DispatcherServlet(ctx))
    servlet.addMapping("/")
    servlet.setLoadOnStartup(1)
  }
}
I've tried doing this kind of thing in onStartup
ctx.refresh()
val ss: StartupCheckService = ctx.getBean(classOf[StartupCheckService])
ss.runStartupRoutines()
but apparently I'm not allowed to call refresh until onStartup exits.
Sadly, Spring's infinite onion of abstraction layers is making it very hard for me to grapple with this simple problem. All of the important details about the order in which things get initialized are hidden.
Before the "should have Googled it" Nazis arrive... A B C D E F
I'm not sure why you need to do this in a WebApplicationInitializer. If you want to configure a @Bean that does the health check for you, then do it in an ApplicationListener<ContextRefreshedEvent>. You can access the ConfigurableApplicationContext from there (the source of the event) and close it. That will shut down the Spring context. Throw an exception if you want the Servlet and the webapp to die.
You can't kill a container (Tomcat etc.) unless you started it. You could try using an embedded container (e.g. Spring Boot will do that for you easily).
As far as I understand, you don't have to explicitly call close().
Just let the exception escape afterPropertiesSet(); Spring should automatically stop instantiating the remaining beans and shut down the whole context.
You can use @PreDestroy if you have to do some cleanup on the beans that have been initialized so far.
In our recent project, Sonar was complaining about weak test coverage. We noticed that it didn't consider integration tests by default. Besides the fact that you can configure Sonar to consider them (JaCoCo plugin), we were discussing in our team whether there really is a need to write unit tests when you cover all your service and database layers with integration tests anyway.
What I mean by integration tests is that all our tests run against a dedicated Oracle instance of the same type we use in production. We don't mock anything. If a service depends on another service, we use the real service. Any data we need before running a test is constructed through factory classes that use our services/repositories (DAOs).
So from my point of view, writing integration tests for simple CRUD operations, especially when using frameworks like Spring Data/Hibernate, is not a big effort. It is sometimes even easier, because you don't have to think about what and how to mock.
So why should I write unit tests for my CRUD operations that are less reliable than the integration tests I can write?
The only point I see is that integration tests will take more time to run the bigger the project gets, so you don't want to run them all before check-in. But I am not so sure that this is so bad if you have a CI environment with Jenkins/Hudson that will do the job.
So - any opinions or suggestions are highly appreciated!
If most of your services simply pass through to your daos, and your daos do little but invoke methods on Spring's HibernateTemplate or JdbcTemplate, then you are correct that unit tests don't really prove anything that your integration tests haven't already proved. However, having unit tests in place is valuable for all the usual reasons.
Since unit tests only test single classes, run in memory with no disk or network access, and never really test multiple classes working together, they normally go like this:
Service unit tests mock the daos.
Dao unit tests mock the database driver (or spring template) or use an embedded database (super easy in Spring 3).
To unit test the service that just passes through to the dao, you can mock like so:
@Before
public void setUp() {
    service = new EventServiceImpl();
    dao = mock(EventDao.class);
    service.eventDao = dao;
}

@Test
public void creationDelegatesToDao() {
    service.createEvent(sampleEvent);
    verify(dao).createEvent(sampleEvent);
}

@Test(expected=EventExistsException.class)
public void creationPropagatesExistExceptions() {
    doThrow(new EventExistsException()).when(dao).createEvent(sampleEvent);
    service.createEvent(sampleEvent);
}

@Test
public void updatesDelegateToDao() {
    service.updateEvent(sampleEvent);
    verify(dao).updateEvent(sampleEvent);
}

@Test
public void findingDelegatesToDao() {
    when(dao.findEventById(7)).thenReturn(sampleEvent);
    assertThat(service.findEventById(7), equalTo(sampleEvent));

    service.findEvents("Alice", 1, 5);
    verify(dao).findEventsByName("Alice", 1, 5);

    service.findEvents(null, 10, 50);
    verify(dao).findAllEvents(10, 50);
}

@Test
public void deletionDelegatesToDao() {
    service.deleteEvent(sampleEvent);
    verify(dao).deleteEvent(sampleEvent);
}
But is this really a good idea? These Mockito assertions are asserting that a dao method got called, not that it did what was expected! You will get your coverage numbers but you are more or less binding your tests to an implementation of the dao. Ouch.
Now, this example assumed the service had no real business logic. Normally services will have business logic in addition to dao calls, and you surely must test that.
Now, for unit testing daos, I like to use an embedded database.
private EmbeddedDatabase database;
private EventDaoJdbcImpl eventDao = new EventDaoJdbcImpl();

@Before
public void setUp() {
    database = new EmbeddedDatabaseBuilder()
            .setType(EmbeddedDatabaseType.H2)
            .addScript("schema.sql")
            .addScript("init.sql")
            .build();
    eventDao.jdbcTemplate = new JdbcTemplate(database);
}
@Test
public void creatingIncrementsSize() {
    Event e = new Event(9, "Company Softball Game");
    int initialCount = eventDao.findNumberOfEvents();
    eventDao.createEvent(e);
    assertThat(eventDao.findNumberOfEvents(), is(initialCount + 1));
}

@Test
public void deletingDecrementsSize() {
    Event e = new Event(1, "Poker Night");
    int initialCount = eventDao.findNumberOfEvents();
    eventDao.deleteEvent(e);
    assertThat(eventDao.findNumberOfEvents(), is(initialCount - 1));
}

@Test
public void createdEventCanBeFound() {
    eventDao.createEvent(new Event(9, "Company Softball Game"));
    Event e = eventDao.findEventById(9);
    assertThat(e.getId(), is(9));
    assertThat(e.getName(), is("Company Softball Game"));
}

@Test
public void updatesToCreatedEventCanBeRead() {
    eventDao.createEvent(new Event(9, "Company Softball Game"));
    Event e = eventDao.findEventById(9);
    e.setName("Cricket Game");
    eventDao.updateEvent(e);
    e = eventDao.findEventById(9);
    assertThat(e.getId(), is(9));
    assertThat(e.getName(), is("Cricket Game"));
}

@Test(expected=EventExistsException.class)
public void creatingDuplicateEventThrowsException() {
    eventDao.createEvent(new Event(1, "Id1WasAlreadyUsed"));
}

@Test(expected=NoSuchEventException.class)
public void updatingNonExistentEventThrowsException() {
    eventDao.updateEvent(new Event(1000, "Unknown"));
}

@Test(expected=NoSuchEventException.class)
public void deletingNonExistentEventThrowsException() {
    eventDao.deleteEvent(new Event(1000, "Unknown"));
}

@Test(expected=NoSuchEventException.class)
public void findingNonExistentEventThrowsException() {
    eventDao.findEventById(1000);
}

@Test
public void countOfInitialDataSetIsAsExpected() {
    assertThat(eventDao.findNumberOfEvents(), is(8));
}
I still call this a unit test even though most people might call it an integration test. The embedded database resides in memory, and it is brought up and torn down when the tests run. But this relies on the embedded database looking the same as the production database. Will that be the case? If not, then all that work was pretty useless. If so, then, as you say, these tests aren't doing anything different than the integration tests. But I can run them on demand with mvn test, and I have the confidence to refactor.
Therefore, I write these unit tests anyway and meet my coverage targets. When I write integration tests, I assert that an HTTP request returns the expected HTTP response. Yes, it subsumes the unit tests, but hey, when you practice TDD you have those unit tests written before your actual dao implementation anyway.
If you write unit tests after your dao, then of course they are no fun to write. The TDD literature is full of warnings about how writing tests after your code feels like make-work, and no one wants to do it.
TL;DR: Your integration tests will subsume your unit tests and in that sense the unit tests are not adding real testing value. However when you have a high-coverage unit test suite you have the confidence to refactor. But of course if the dao is trivially calling Spring's data access template, then you might not be refactoring. But you never know. And finally, though, if the unit tests are written first in TDD style, you are going to have them anyway.
You only really need to unit test each layer in isolation if you plan to have the layers exposed to components outside your project. For a web app, the only way the repository layer can be invoked is by the service layer, and the only way the service layer can be invoked is by the controller layer. So testing can start and end at the controller layer. Background tasks are invoked in the service layer, so they need to be tested there.
Testing with a real database is pretty fast these days, so it doesn't slow your tests down too much if you design your setup/teardown well. However, any other dependencies that could be slow or problematic should be mocked/stubbed.
This approach will give you:
good coverage
realistic tests
minimum amount of effort
minimum amount of refactoring effort
However, testing layers in isolation does allow your team to work more concurrently, so one dev can do repository and another can do service for one piece of functionality, and produce independently tested work.
There will always be double coverage once Selenium/functional tests are incorporated, since you can't rely on those alone: they are too slow to run. However, functional tests don't necessarily need to cover all the code; core functionality alone can be sufficient, as long as the code has been covered by unit/integration tests.
I think there are two advantages to having finer-grained tests (I will intentionally not use the word unit test here) besides the high-level integration tests.
1) Redundancy: having the layers covered in more than one place acts like a safety switch. If one set of tests (the integration tests, for example) fails to locate the error, the second layer may catch it. I will draw a comparison here with electric switches, where redundancy is a must: you have a main switch and a specialized switch.
2) Let's suppose you have a process calling an external service. For one reason or another (a bug), the original exception gets consumed, and an exception that does not carry information about the technical nature of the error reaches the integration test. The integration test will catch the error, but you will have no clue what the error is or where it is coming from. Having a finer-grained test in place increases the chance of pointing in the correct direction of what exactly has failed, and where.
I personally think that certain level of redundancy in testing is not a bad thing.
In your particular case, if you write a CRUD test with an in-memory database, you will have the chance to test your Hibernate mapping layer, which can be quite complex if you are using things like cascading or fetching strategies.