Configuring multiple tasks with Spring scheduler having configurable triggers - spring-boot

I need some help with design as well as with Spring scheduler related code. I am trying to write a few utility classes whereby all the tasks (which are going to do some async processing) are scheduled at regular intervals.
So, I create a Task interface with an execute() method that each task implements.
public interface Task
{
    void execute();
}

public class TaskOne implements Task
{
    @Override
    public void execute()
    {
        // do something
    }
}

public class TaskTwo implements Task
{
    @Override
    public void execute()
    {
        // do something
    }
}
@Component
public class ScheduledTasks
{
    @Scheduled(fixedRate = 5000)
    public void runTask()
    {
        Task taskOne = new TaskOne();
        taskOne.execute();

        Task taskTwo = new TaskTwo();
        taskTwo.execute();
    }
}
I need to run different tasks at different intervals, and I want the intervals to be configurable without restarting the server. By this, I mean that the time specified here can be changed without a restart.
@Scheduled(fixedRate = configurable, with some initial value)
Normally, what is the best way of doing this?
One approach I can think of is:
1. Keep the trigger (periodic or cron) for each task in db
2. Load these triggers at the time of start up
3. Somehow, annotate the execute() methods with the proper value. This is going to be technically tricky.
4. If we want to change the job interval timing, update the db and refresh the caches.
Also, if someone can share a code snippet or suggest something like a small library, that would be highly appreciated. Thanks.

For this configuration you don't need ScheduledTasks. Instead, you can annotate every task class with @Component and its execute() method with @Scheduled.
@Component
public class TaskOne implements Task
{
    @Override
    @Scheduled(...)
    public void execute()
    {
        // do something
    }
}
As for updating the schedule at runtime: you'll have to figure out how to differentiate the configuration for the different tasks.
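If the trigger itself has to be reloadable without a restart, one common approach is to skip the fixed @Scheduled value entirely and register the task through a SchedulingConfigurer with a Trigger that re-reads the interval after every run. A minimal sketch, assuming a hypothetical ConfigService that returns the current interval for a task (e.g. backed by your db and cache) and the Date-based Trigger API of older Spring versions:

import java.util.Date;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.SchedulingConfigurer;
import org.springframework.scheduling.config.ScheduledTaskRegistrar;

@Configuration
@EnableScheduling
public class DynamicScheduleConfig implements SchedulingConfigurer
{
    private final Task taskOne;                // one of your Task beans
    private final ConfigService configService; // hypothetical: reads the interval from the db

    public DynamicScheduleConfig(Task taskOne, ConfigService configService)
    {
        this.taskOne = taskOne;
        this.configService = configService;
    }

    @Override
    public void configureTasks(ScheduledTaskRegistrar registrar)
    {
        registrar.addTriggerTask(taskOne::execute, triggerContext -> {
            // Consulted after every completion, so a changed db value
            // takes effect without a restart.
            long intervalMs = configService.currentIntervalMs("taskOne");
            Date last = triggerContext.lastCompletionTime();
            long base = (last != null) ? last.getTime() : System.currentTimeMillis();
            return new Date(base + intervalMs);
        });
    }
}

Updating the db row (and whatever cache sits in front of it) is then enough; the next execution time is computed from the new value as soon as the current run completes.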

Related

Using event gateway to publish events (Axon)

I am very new to Axon, so bear with me. I have a command that triggers the sessionCreated event in my aggregate. After that happens, some other external code is executed multiple times. For each iteration I would like an event to be published without having to send a new command to the aggregate. I tried using EventGateway.publish and @EventHandler as shown below.
Code:
@Aggregate
public class SessionAggregate {

    // aggregate logic

    @EventHandler
    public void on(OtherEvent event) {
        // code that never runs
    }
}

public class ExternalLogic {

    private final EventGateway eventGateway;

    public ExternalLogic(EventGateway eventGateway) {
        this.eventGateway = eventGateway;
    }

    public void execute() {
        // other code
        eventGateway.publish(new OtherEvent());
    }
}
This won't work. The aggregate only reacts to @CommandHandler calls and (in case you are using event sourcing as storage) can replay its internal state via @EventSourcingHandler methods.
In addition, an aggregate is only active while handling a command; it is not permanently present in memory, so even if it were notified, you wouldn't have any guarantee that the handler was called.
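What does work, if the goal is simply to react to the published event, is handling it in a regular component outside the aggregate: events published through the EventGateway are delivered to @EventHandler methods on ordinary beans. A minimal sketch, reusing the OtherEvent name from the question:

import org.axonframework.eventhandling.EventHandler;
import org.springframework.stereotype.Component;

@Component
public class OtherEventHandler {

    // Unlike the handler inside the aggregate, this one is invoked
    // for events published via eventGateway.publish(...).
    @EventHandler
    public void on(OtherEvent event) {
        // react to the event here
    }
}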

Spring-Boot: scalability of a component

I am trying Spring Boot and thinking about scalability.
Let's say I have a component that does a job (e.g. checking for new mails).
It is done by a scheduled method, e.g.:
@Component
public class MailMan
{
    @Scheduled(fixedRateString = "5000")
    private void run() throws Exception
    {
        //...
    }
}
Now the application gets a new customer. So the job has to be done twice.
How can I scale this component to exist or run twice?
Interesting question, but why multiple components per customer? Can the scheduler not pull the data for every customer on each scheduled run and process the records per customer? Your component scaling should not be decided based on the entities involved in your application but on the resource utilization of the component. You can have dedicated component types for processing messages from queues and the same for REST, and scale them based on how much each of them is utilized.
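A minimal sketch of that idea, with a hypothetical CustomerRepository and per-customer check, so one schedule serves any number of customers:

@Component
public class MailMan
{
    private final CustomerRepository customerRepository; // hypothetical customer lookup
    private final MailChecker mailChecker;               // hypothetical per-customer logic

    public MailMan(CustomerRepository customerRepository, MailChecker mailChecker)
    {
        this.customerRepository = customerRepository;
        this.mailChecker = mailChecker;
    }

    @Scheduled(fixedRateString = "5000")
    public void run()
    {
        // One scheduled run handles every customer; a new customer is
        // a new row in the data store, not a new component.
        for (Customer customer : customerRepository.findAll()) {
            mailChecker.checkNewMails(customer);
        }
    }
}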
Instead of using annotations to schedule a task, you could do the same thing programmatically by using a ScheduledTaskRegistrar. You can register the same bean multiple times, even if it is a singleton.
public class SomeSchedulingConfigurer implements SchedulingConfigurer {

    private final SomeJob someJob; // a bean that is Runnable

    public SomeSchedulingConfigurer(SomeJob someJob) {
        this.someJob = someJob;
    }

    @Override
    public void configureTasks(@NonNull ScheduledTaskRegistrar taskRegistrar) {
        int concurrency = 2;
        IntStream.range(0, concurrency).forEach(
                __ -> taskRegistrar.addFixedDelayTask(someJob, 5000));
    }
}
Make sure the thread executor you are using is large enough to process the jobs concurrently; the default executor has exactly one thread :-). Be aware that this approach has scaling limits.
I also recommend adding a delay or skew between jobs, so that not all jobs run at exactly the same moment.
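Since the default executor has only that one thread, the same configurer can also size the pool. A small sketch of configureTasks with an explicit scheduler (the pool size of 4 is arbitrary):

@Override
public void configureTasks(@NonNull ScheduledTaskRegistrar taskRegistrar) {
    // Without an explicit scheduler, all registered tasks share one thread.
    taskRegistrar.setScheduler(Executors.newScheduledThreadPool(4)); // java.util.concurrent.Executors
    IntStream.range(0, 2).forEach(
            __ -> taskRegistrar.addFixedDelayTask(someJob, 5000));
}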
See SchedulingConfigurer and ScheduledTaskRegistrar for reference.
The job needs to run only once even with multiple customers; the component itself doesn't need to scale at all. It is just a mechanism to "signal" that some logic needs to run at some moment in time. I would keep the component really thin and just call the desired business logic that handles all the rest, e.g.:
@Component
public class MailMan {

    @Autowired
    private NewMailCollector newMailCollector;

    @Scheduled(fixedRateString = "5000")
    private void run() throws Exception {
        // Collects emails for customers
        newMailCollector.collect();
    }
}
If you want to check for new e-mails per customer, you might want to avoid scheduled tasks in a backend service, as they make the implementation very inflexible.
Better to make an endpoint available for clients to call to trigger that logic.
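A minimal sketch of that alternative, with a hypothetical controller and a hypothetical per-customer variant of the collect method, so the client decides when the check runs:

@RestController
@RequestMapping("/mail")
public class MailController {

    private final NewMailCollector newMailCollector;

    public MailController(NewMailCollector newMailCollector) {
        this.newMailCollector = newMailCollector;
    }

    // e.g. POST /mail/check?customerId=42 triggers the check for one customer
    @PostMapping("/check")
    public ResponseEntity<Void> checkMail(@RequestParam long customerId) {
        newMailCollector.collectFor(customerId); // hypothetical per-customer method
        return ResponseEntity.accepted().build();
    }
}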

How to get the PerformContext from hangfire API

In our project we are using the aspnetzero template. This template allows a simple but abstracted usage of Hangfire. Now we would like to add Hangfire.Console to our project, which would allow us to write logs to Hangfire's dashboard.
In order to write a log statement to the dashboard console, we have to access the PerformContext of the currently running job. Unfortunately, because of the abstraction in aspnetzero, we can't inject the PerformContext the way Hangfire intends. What we do have access to is the Hangfire namespace and all its static objects.
Therefore my question: is there a way to get the PerformContext other than passing null to the execution method?
What I have tried so far:
By using the IServerFilter interface, a method OnPerforming should be called. But unfortunately this is not the case within aspnetzero background jobs.
I tried to overwrite/extend the given base class BackgroundJob<T> of aspnetzero, but with no luck. Perhaps someone can give me a hint in this direction.
I used a JobFilterAttribute with an IServerFilter.
Example:
[AttributeUsage(AttributeTargets.Class)]
public class HangFirePerformContextAttribute : JobFilterAttribute, IServerFilter
{
    private static PerformContext _context;

    public static PerformContext PerformContext
    {
        get { return new PerformContext(_context); }
    }

    public void OnPerformed(PerformedContext filterContext)
    {
        _context = filterContext;
    }

    public void OnPerforming(PerformingContext filterContext)
    {
        _context = filterContext;
    }
}
And I created a new class AsyncBackgroundJobHangFire<TArgs> : AsyncBackgroundJob<TArgs>.
Example:
[HangFirePerformContext]
public abstract class AsyncBackgroundJobHangFire<TArgs> : AsyncBackgroundJob<TArgs>
{
    public PerformContext Context { get; set; }

    protected override async Task ExecuteAsync(TArgs args)
    {
        Context = HangFirePerformContextAttribute.PerformContext;
        await ExecuteAsync(args, Context);
    }

    protected abstract Task ExecuteAsync(TArgs args, PerformContext context);
}
It works.
In a job class I use AsyncBackgroundJobHangFire, and the method is:
[UnitOfWork]
protected override async Task ExecuteAsync(string args, PerformContext context)
{
}
I have suffered using abp's implementation of Hangfire jobs as well. I don't know how to answer your question precisely, but I was able to access a PerformingContext by implementing an attribute that extends JobFilterAttribute and implements IClientFilter, IServerFilter, IElectStateFilter and IApplyStateFilter. Which interfaces you need will depend on your requirements, but I was able to access the PerformingContext this way.
You should never use a static field for that, even if it is marked with a ThreadStaticAttribute; please refer to this link for more details:
https://discuss.hangfire.io/t/use-hangfire-job-id-in-the-code/2621/2

Reset state before each Spring scheduled (#Scheduled) run

I have a Spring Boot Batch application that needs to run daily. It reads a daily file, does some processing on its data, and writes the processed data to a database. Along the way, the application holds some state such as the file to be read (stored in the FlatFileItemReader and JobParameters), the current date and time of the run, some file data for comparison between read items, etc.
One option for scheduling is to use Spring's @Scheduled, such as:
@Scheduled(cron = "${schedule}")
public void runJob() throws Exception {
    jobRunner.runJob(); // runs the batch job by calling jobLauncher.run(job, jobParameters);
}
The problem here is that the state is maintained between runs. So, I have to update the file to be read, the current date and time of the run, clear the cached file data, etc.
Another option is to run the application via a Unix cron job. This will obviously meet the need to clear state between runs, but I prefer to tie the job scheduling to the application instead of the OS (and prefer it to be OS agnostic). Can the application state be reset between @Scheduled runs?
You could always move the code that performs your task (and more importantly, keeps your state) into a prototype-scoped bean. Then you can retrieve a fresh instance of that bean from the application context every time your scheduled method is run.
Example
I created a GitHub repository which contains a working example of what I'm talking about, but the gist of it is in these two classes:
ScheduledTask.java
Notice the @Scope annotation. It specifies that this component should not be a singleton. The randomNumber field represents the state that we want to reset with every invocation. "Reset" in this case means that a new random number is generated, just to show that it does change.
@Component
@Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
class ScheduledTask {

    private double randomNumber = Math.random();

    void execute() {
        System.out.printf(
                "Executing task from %s. Random number is %f%n",
                this,
                randomNumber);
    }
}
TaskScheduler.java
By autowiring the ApplicationContext, you can use it inside the scheduleTask method to retrieve a new instance of ScheduledTask.
@Component
public class TaskScheduler {

    @Autowired
    private ApplicationContext applicationContext;

    @Scheduled(cron = "0/5 * * * * *")
    public void scheduleTask() {
        ScheduledTask task = applicationContext.getBean(ScheduledTask.class);
        task.execute();
    }
}
Output
When running the code, here's an example of what it looks like:
Executing task from com.thomaskasene.example.schedule.reset.ScheduledTask#329c8d3d. Random number is 0.007027
Executing task from com.thomaskasene.example.schedule.reset.ScheduledTask#3c5b751e. Random number is 0.145520
Executing task from com.thomaskasene.example.schedule.reset.ScheduledTask#3864e64d. Random number is 0.268644
Thomas' approach seems to be a reasonable solution; that's why I upvoted it. What is missing is how this can be applied in the case of a Spring Batch job. Therefore I adapted his example a little bit:
@Component
public class JobCreatorComponent {

    @Bean
    @Scope(ConfigurableBeanFactory.SCOPE_PROTOTYPE)
    public Job createJob() {
        // use the jobBuilderFactory to create your job as usual
        return jobBuilderFactory.get() ...
    }
}
Your component with the launch method:
@Component
public class ScheduledLauncher {

    @Autowired
    private ... jobRunner;

    @Autowired
    private JobCreatorComponent creator;

    @Scheduled(cron = "${schedule}")
    public void runJob() throws Exception {
        // it would probably make sense to check the applicationContext and
        // remove any existing job
        creator.createJob(); // this should create a complete new instance of the Job
        jobRunner.runJob();  // runs the batch job by calling jobLauncher.run(job, jobParameters);
    }
}
I haven't tried out the code, but this is the approach I would try.
When constructing the job, it is important to ensure that all readers, processors and writers used in this job are completely new instances as well. This means that unless they are instantiated as pure Java objects (not as Spring beans) or as Spring beans with scope "step", you must ensure that a new instance is always used.
Edited:
How to handle singleton beans
Sometimes singleton beans cannot be avoided; in these cases there must be a way to "reset" them.
A simple approach would be to define an interface ResetableBean with a reset method that is implemented by such beans. Autowired can then be used to collect a list of all such beans.
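A minimal sketch of that interface together with one hypothetical singleton that caches file data between runs:

public interface ResetableBean {
    // clear any state accumulated during the previous run
    void reset();
}

@Component
public class CachedFileData implements ResetableBean {

    private final Map<String, String> cache = new HashMap<>();

    public void put(String key, String value) {
        cache.put(key, value);
    }

    @Override
    public void reset() {
        cache.clear();
    }
}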
@Component
public class ScheduledLauncher {

    @Autowired
    private List<ResetableBean> resetables;
    ...

    @Scheduled(cron = "${schedule}")
    public void runJob() throws Exception {
        // reset all the singletons
        resetables.forEach(bean -> bean.reset());
        ...

Spring Controller - forking request, return value before long run function ends

I have a controller with a long-running function in it, like:
@Controller
@RequestMapping("/deposit")
public class DepositController {

    @RequestMapping
    public ModelAndView getNewJob(long userId, Model model) {
        // execute function that can run a long time ...
        longRunFunction();
        return new ModelAndView("jobTasks");
    }

    public void longRunFunction() {
        // process long run function
    }
}
My question is: how can I execute longRunFunction() and return the ModelAndView("jobTasks") answer to the browser without waiting for the function to finish?
Thank you!
I found a nice example here: http://krams915.blogspot.co.il/2011/01/spring-3-task-scheduling-via.html
This can be done using the async support in the Spring Framework: essentially, delegate the long-running task to another service whose method is annotated with @Async. The task will then be executed by a thread pool and control will return to your caller immediately.
Here is a much more detailed reference: http://docs.spring.io/spring-framework/docs/3.2.3.RELEASE/spring-framework-reference/html/scheduling.html#scheduling-annotation-support-async
public class SampleBeanImpl implements SampleBean {

    @Async
    public void longRunFunction() { /* ... */ }
}
Add @Async to the method declaration of longRunFunction(). But to make this work without AspectJ weaving, you need to put this method in another bean.
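Putting those pieces together, a minimal sketch under the assumptions above (the DepositService name is made up; async support must also be enabled once):

@Configuration
@EnableAsync
public class AsyncConfig {
    // enables detection of @Async methods; they run on a task
    // executor instead of the request thread
}

@Service
public class DepositService {

    @Async
    public void longRunFunction() {
        // the long-running work happens in the background
    }
}

@Controller
@RequestMapping("/deposit")
public class DepositController {

    private final DepositService depositService;

    public DepositController(DepositService depositService) {
        this.depositService = depositService;
    }

    @RequestMapping
    public ModelAndView getNewJob(long userId, Model model) {
        depositService.longRunFunction(); // returns immediately
        return new ModelAndView("jobTasks"); // rendered without waiting for the task
    }
}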
