How to dynamically schedule multiple tasks in Spring Boot - spring-boot

I want to dynamically schedule a task based on the user input in a given popup.
The user should be able to schedule multiple tasks, and each task should be repeatable.
I have tried to follow some of the possibilities offered by Spring Boot using the examples below:
example 1: https://riteshshergill.medium.com/dynamic-task-scheduling-with-spring-boot-6197e66fec42
example 2: https://www.baeldung.com/spring-task-scheduler#threadpooltaskscheduler
The idea of example 1 is to send an HTTP POST request that then invokes a scheduled task, as below:
Each HTTP call leads to console output as below:
But I am still not able to reach the needed behaviour; what I get is that task1 executes when invoked by action1, but as soon as task2 is started by action2, task1 stops executing.
Any idea how the needed logic could be implemented?

Example 1 demonstrates how to schedule a task from a REST API request, and example 2 shows how to create a ThreadPoolTaskScheduler for the TaskScheduler. But you are missing an important point here: even if you create a thread pool, the TaskScheduler is not aware of it, so it must be configured to use that pool. For that, implement the SchedulingConfigurer interface. Here is an example:
@Configuration
@EnableScheduling
public class TaskConfigurer implements SchedulingConfigurer {

    @Override
    public void configureTasks(ScheduledTaskRegistrar taskRegistrar) {
        // Create your ThreadPoolTaskScheduler here and register it,
        // sized for the number of concurrent tasks you expect.
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(10);
        scheduler.initialize();
        taskRegistrar.setTaskScheduler(scheduler);
    }
}
After creating such configuration class, everything should work fine.
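When starting task2 cancels task1, it usually means all schedules share a single thread or a single ScheduledFuture. A minimal plain-Java sketch of the idea of keeping one independent repeating schedule per task id (the class and method names here are hypothetical, not from the linked examples):

```java
import java.util.Map;
import java.util.concurrent.*;

// Registry that keeps one independent repeating task per id,
// so starting a second task never stops the first.
class TaskRegistry {
    private final ScheduledExecutorService pool = Executors.newScheduledThreadPool(4);
    private final Map<String, ScheduledFuture<?>> tasks = new ConcurrentHashMap<>();

    // Schedule (or replace) a repeating task under the given id.
    void schedule(String id, Runnable job, long periodMillis) {
        ScheduledFuture<?> previous = tasks.put(id,
                pool.scheduleAtFixedRate(job, 0, periodMillis, TimeUnit.MILLISECONDS));
        if (previous != null) previous.cancel(false); // replaces only the same id
    }

    void cancel(String id) {
        ScheduledFuture<?> f = tasks.remove(id);
        if (f != null) f.cancel(false);
    }

    void shutdown() { pool.shutdownNow(); }
}
```

With Spring, the same shape works against a ThreadPoolTaskScheduler (its schedule methods also return ScheduledFuture), with each POST from the popup adding one entry to the map.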

Related

ServiceActivator invoked twice when only one message is published

I have the following JUnit test that is basically an example of a production test.
@Autowired
private MessageChannel messageChannel;

@SpyBean
@Autowired
private Handler handler;

@Test
public void testPublishing() throws InterruptedException {
    SomeEvent event = new SomeEvent(); // implements Message
    messageChannel.send(event);
    Thread.sleep(2000); // sleep 2 seconds
    Mockito.verify(handler, times(1))
           .someMethod(Mockito.any());
}
The service activator is the someMethod method inside the Handler class. For some reason this test fails stating that someMethod was invoked twice even though only a single message was published to the channel. I even added code to someMethod to print the memory address of the message consumed and both invocations are the exact same address. Any idea what could cause this?
NOTE: I built this basic code example as a test case and it verifies as single invocation as I'd expect, but what could possibly (in my production system test) cause the send operation to result in 2 separate invocations of the service activator?
NOTE2: I added a print statement inside my real service activator. When I have the @SpyBean annotation on the handler and use the Mockito.verify(... call, I get two printouts of the input. However, if I remove the annotation and the verify call, I only get one printout. Again, this does not happen in the simple demo I shared here.
NOTE3: Seems to be some sort of weird SpyBean behavior as I am only seeing the single event downstream. No idea why Mockito is giving me trouble on this.

Spring webflux how to return 200 response to client before processing large file

I am working on a Spring Webflux project.
When a client makes an API call, I want to send a success message back immediately and perform a large file operation in the background, so the client does not have to wait until the entire file is processed.
For try out I made sample code as below
REST controller
@GetMapping(value = "/{jobId}/process")
@ApiOperation("Start import job")
public Mono<Integer> process(@PathVariable("jobId") long jobId) {
    return service.process(jobId);
}
File processing Service
public Mono<Integer> process(Integer jobId) {
    return repository
        .findById(jobId)
        .map(job -> {
            File file = new File("read.csv");
            return processFile(file);
        });
}
Following is my stack
Spring Webflux 2.2.2.RELEASE
I tried making this call using WebClient, but I do not get a response until the entire file has been processed.
As one of the options, you can run processing in a different thread.
For example:
Create an event listener
Enable async processing with @Async and @EnableAsync
Use different types of Executors from the Java concurrency package
Or manually run a thread
Also, for Kotlin you can use coroutines
You can use the subscribe method and start a job with its own scope in background.
Mono.delay(Duration.ofSeconds(10)).subscribeOn(Schedulers.newElastic("myBackgroundTask")).subscribe(System.out::println);
As long as you do not tie this to your response publisher using one of the zip/merge or similar operators, your job will run in the background on its own scheduler pool.
The subscribe() method returns a Disposable instance, which can later be used to cancel the background job by calling its dispose() method.
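The same fire-and-forget shape can be sketched without Reactor, using plain CompletableFuture: return to the caller immediately and let the heavy work finish on its own executor (processFile and the method names below are hypothetical stand-ins):

```java
import java.util.concurrent.*;

public class FireAndForget {
    private static final ExecutorService fileWorkers = Executors.newFixedThreadPool(2);

    // Returns immediately; the file processing continues on fileWorkers.
    static String process(long jobId) {
        CompletableFuture.runAsync(() -> processFile(jobId), fileWorkers);
        return "accepted job " + jobId; // this is what the client sees right away
    }

    private static void processFile(long jobId) {
        // stand-in for the large file operation
        try { Thread.sleep(500); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

In Webflux terms this corresponds to subscribing the processing Mono on its own Scheduler and returning the success Mono without chaining the two together.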

How to execute long running/polling operations in Eclipse Vert.x

I have a scenario where we need to keep polling a database table for all active users and perform an API call to fetch any unread emails from their inboxes. My approach is to use two verticles: one for polling and another for fetching emails for a user. When the first verticle finds a user, it sends a message (the userId) to the second verticle through the event bus to fetch the emails. That way, I can increase the number of instances of the second verticle when there are lots of users.
I found the following two ways to poll the database for active users and then perform an API call for each user:
vertx.setPeriodic
vertx.executeBlocking
But the manual mentions that for long-running/polling tasks, it is better to create an application-managed thread to handle the task.
Is my approach to the problem correct, or is there a better one?
If I should go with an application-managed thread, could you please illustrate it with an example?
Thanks.
You can create a dedicated worker thread pool for that, and run your periodic tasks on it:
public class PeriodicWorkerExample {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(new MyPeriodicWorker(), new DeploymentOptions()
                .setWorker(true)
                .setWorkerPoolSize(1)
                .setWorkerPoolName("periodic"));
    }
}

class MyPeriodicWorker extends AbstractVerticle {
    @Override
    public void start() {
        vertx.setPeriodic(1000, (r) -> {
            System.out.println(Thread.currentThread().getName());
        });
    }
}
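If you instead go the application-managed-thread route the manual mentions, the polling loop can run on a dedicated scheduler outside the event loop and hand each user off for processing. A plain-Java sketch (pollActiveUsers is a hypothetical stand-in for the database query; in the verticle design, the consumer would publish each userId on the event bus):

```java
import java.util.List;
import java.util.concurrent.*;
import java.util.function.Consumer;

class UserPoller {
    private final ScheduledExecutorService poller =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "user-poller");
                t.setDaemon(true); // don't keep the JVM alive just for polling
                return t;
            });

    // Poll at a fixed delay and hand each active user id to the consumer.
    void start(long intervalMillis, Consumer<String> onUser) {
        poller.scheduleWithFixedDelay(() -> {
            for (String userId : pollActiveUsers()) {
                onUser.accept(userId);
            }
        }, 0, intervalMillis, TimeUnit.MILLISECONDS);
    }

    private List<String> pollActiveUsers() {
        return List.of("user-1", "user-2"); // stand-in for the DB query
    }

    void stop() { poller.shutdownNow(); }
}
```

Using scheduleWithFixedDelay rather than a fixed rate means a slow database query cannot pile up overlapping polls.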

Is it good to have dedicated ExecutorService for Spring Boot With Tomcat

I have seen this code many times but don't know what its advantages and disadvantages are. In Spring Boot applications, I have seen people define this bean:
@Bean
@Qualifier("heavyLoadBean")
public ExecutorService heavyLoadBean() {
    return Executors.newWorkStealingPool();
}
Then whenever a CompletableFuture object is created in the service layer, that heavyLoadBean is used.
public CompletionStage<T> myService() {
    return CompletableFuture.supplyAsync(() -> doingVeryBigThing(), heavyLoadBean);
}
Then the controller will call the service.
@GetMapping("/some/path")
public CompletionStage<SomeModel> doIt() {
    return service.myService();
}
I don't see the point of doing that. Tomcat in Spring Boot has x number of threads, and all of them are used to process user requests. What is the point of using a different thread pool here? Either way, the user expects a response to come back.
CompletableFuture is used to process tasks asynchronously. If your application has two tasks that are independent of each other, you can execute them concurrently to reduce the processing time:
public CompletionStage<T> myService() {
    CompletableFuture<T> first = CompletableFuture.supplyAsync(() -> doingVeryBigThing(), heavyLoadBean);
    CompletableFuture<T> second = CompletableFuture.supplyAsync(() -> doingAnotherBigThing(), heavyLoadBean);
    return first.thenCombine(second, (a, b) -> a); // combine the two results as needed
}
In the above example, doingVeryBigThing() and doingAnotherBigThing() are two tasks independent of each other, so they will be executed concurrently on two different threads from the heavyLoadBean thread pool. The example below will print two different thread names:
public void myService() {
    CompletableFuture.runAsync(() -> System.out.println(Thread.currentThread().getName()), heavyLoadBean);
    CompletableFuture.runAsync(() -> System.out.println(Thread.currentThread().getName()), heavyLoadBean);
}
If you don't provide a thread pool, the supplied Supplier will be executed by ForkJoinPool.commonPool() by default:
public static <U> CompletableFuture<U> supplyAsync(Supplier<U> supplier)
Returns a new CompletableFuture that is asynchronously completed by a task running in the ForkJoinPool.commonPool() with the value obtained by calling the given Supplier.
public static <U> CompletableFuture<U> supplyAsync(Supplier<U> supplier, Executor executor)
Returns a new CompletableFuture that is asynchronously completed by a task running in the given executor with the value obtained by calling the given Supplier.
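A small, self-contained illustration of the point above: two independent tasks submitted to a dedicated pool and combined once both complete (the task bodies are placeholders):

```java
import java.util.concurrent.*;

public class TwoTasks {
    // Runs two independent placeholder tasks on a dedicated pool and combines them.
    static String combine() throws Exception {
        ExecutorService heavyLoadPool = Executors.newFixedThreadPool(2);
        try {
            CompletableFuture<String> a = CompletableFuture.supplyAsync(() -> slowTask("A"), heavyLoadPool);
            CompletableFuture<String> b = CompletableFuture.supplyAsync(() -> slowTask("B"), heavyLoadPool);
            // Both tasks run concurrently; thenCombine waits for both results.
            return a.thenCombine(b, (x, y) -> x + "+" + y).get();
        } finally {
            heavyLoadPool.shutdown();
        }
    }

    static String slowTask(String name) {
        try { Thread.sleep(200); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return name;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(combine()); // A+B
    }
}
```

Because both futures are submitted before either is awaited, the total time is roughly one task's duration rather than the sum of both.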
Please check the comments in the main post and the other answers; they will give you more understanding of Java 8's CompletableFuture. I just don't feel the right answer was given, though.
From our discussions, I can see that the purpose of a separate thread pool is isolation. Without an explicit executor, CompletableFuture runs its tasks on ForkJoinPool.commonPool(), which is shared by everything else in the JVM that uses it; say that is 8 threads.
If long-running tasks use up all 8 shared threads, the whole application appears unresponsive. With a dedicated pool, exhausting it with long-running processes only produces errors in that pool, and the server can still respond to other user requests.
Correct me if I'm wrong.

How to manage logs of two seperate cron in a single project in java

There is an issue with running multiple crons in Spring. I have created a multi-module Maven project in which two separate crons run, but at some point in time they coincide, and it becomes very tedious to debug from the log.
Is there a way to have separate logs, or some way to ensure that while one cron is running the other does not start, i.e. that only one cron runs at any point in time?
@Scheduled(cron = "0 0 */2 * * *")
public void startAnalysis() {
    logger.info("Inside of Analysis scheduler");
    // Doing Something
}

@Scheduled(cron = "0 0 */6 * * *")
public void startAnalysis1() {
    logger.info("Inside of Analysis1 scheduler");
    // Doing Something
}
Above are the two crons that I am running. Currently, I am using SLF4J for logging purposes.
You can create two instances of the logger, each with a different name and configure them to log to different files in your logging framework.
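For instance, with SLF4J backed by Logback, two named loggers can be routed to separate files; a sketch, assuming Logback is the backend (the file paths and logger names are assumptions):

```xml
<configuration>
  <appender name="ANALYSIS" class="ch.qos.logback.core.FileAppender">
    <file>logs/analysis.log</file>
    <encoder><pattern>%d %-5level %msg%n</pattern></encoder>
  </appender>
  <appender name="ANALYSIS1" class="ch.qos.logback.core.FileAppender">
    <file>logs/analysis1.log</file>
    <encoder><pattern>%d %-5level %msg%n</pattern></encoder>
  </appender>

  <logger name="analysis" level="INFO" additivity="false">
    <appender-ref ref="ANALYSIS"/>
  </logger>
  <logger name="analysis1" level="INFO" additivity="false">
    <appender-ref ref="ANALYSIS1"/>
  </logger>
</configuration>
```

Each scheduled method then obtains its logger by name, e.g. LoggerFactory.getLogger("analysis").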
By default, Spring uses a single-threaded executor, so no two @Scheduled tasks will execute simultaneously; even @Scheduled methods in completely unrelated classes will not overlap, because there is only a single thread executing the tasks.
The only way for tasks to be executed at the same time is to have a multi-threaded executor, defined like this:
@EnableScheduling
@Configuration
public class MyConfiguration implements SchedulingConfigurer {

    @Override
    public void configureTasks(ScheduledTaskRegistrar scheduledTaskRegistrar) {
        scheduledTaskRegistrar.setScheduler(taskExecutor());
    }

    @Bean(destroyMethod = "shutdown")
    public Executor taskExecutor() {
        return Executors.newScheduledThreadPool(5);
    }
}
If you have such a custom configuration defining a thread pool for scheduled tasks, you should remove it.
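The serialization behavior described above can be demonstrated with a plain single-thread scheduler (a plain-Java analogy, not Spring's actual scheduler): two tasks due at the same moment still run strictly one after the other.

```java
import java.util.List;
import java.util.concurrent.*;

public class SingleThreadDemo {
    // With one scheduler thread, overlapping tasks queue up instead of interleaving.
    public static List<String> run() throws Exception {
        ScheduledExecutorService one = Executors.newSingleThreadScheduledExecutor();
        List<String> events = new CopyOnWriteArrayList<>();

        Runnable slow = () -> {
            events.add("start");
            try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            events.add("end");
        };
        one.schedule(slow, 0, TimeUnit.MILLISECONDS);
        one.schedule(slow, 0, TimeUnit.MILLISECONDS); // due at the same time

        one.shutdown();
        one.awaitTermination(5, TimeUnit.SECONDS);
        return events; // start, end, start, end -- never start, start
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```

With Spring's default single-threaded scheduler the two @Scheduled crons behave the same way, which is exactly why they never run at the same time unless a pool is configured.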
With Log4j's programmatic configuration, you can separate output into different log files:
https://www.codejava.net/coding/how-to-configure-log4j-as-logging-mechanism-in-java?showall=&start=2#ProgrammaticConfiguration
