Scheduled tasks with multiple servers - single point of responsibility - spring

We have a Spring + JPA web application.
We use two Tomcat servers that both run the application and share the same DB.
One of our application requirements is to perform cron/scheduled tasks.
After a short research we found that the Spring framework delivers a very straightforward, annotation-based solution for cron jobs.
However, since both Tomcats run the same webapp, using Spring's solution would create a very problematic scenario where two crons run at the same time (one on each Tomcat).
Is there any way to solve this issue? Or is this alternative simply not suitable for our purpose?
thanks!

As a general rule, you're going to want to save a setting to indicate that a job is running. Similar to how "Spring Batch" does the trick, you might want to create a table in your database simply for storing a job execution. You can choose to implement this however you'd like, but ultimately, your scheduled tasks should check the database to see if an identical task is already running, and if not, proceed with the task execution. Once the task has completed, update the database appropriately so that a future execution will be able to proceed.
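A minimal sketch of that check-then-run pattern. The class and names here are illustrative, and a `ConcurrentHashMap` stands in for the database table; in a real setup the atomic "claim if absent" step would be an INSERT into the job-execution table protected by a unique constraint, so that only one server's insert can succeed.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of a job-execution guard. In a real setup the map below would be a
// database table with a UNIQUE constraint on the task name, e.g.
//   INSERT INTO task_lock (task_name, started_at) VALUES (?, now())
// which fails atomically on one server if the other has already claimed the task.
public class TaskLock {
    private final ConcurrentMap<String, Long> running = new ConcurrentHashMap<>();

    /** Try to claim the task; returns false if an identical task is already running. */
    public boolean tryAcquire(String taskName) {
        return running.putIfAbsent(taskName, System.currentTimeMillis()) == null;
    }

    /** Mark the task finished so a future execution can proceed.
        DB version: DELETE FROM task_lock WHERE task_name = ? */
    public void release(String taskName) {
        running.remove(taskName);
    }
}
```

A scheduled method would then call `tryAcquire` before doing any work and `release` in a `finally` block, so a crash on one server doesn't have to block the other forever.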

kungfuters' solution is certainly the better end goal, but as a simple first implementation you could use a property to enable or disable the tasks, and only enable them on one of the servers.
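A sketch of that property guard in plain Java. The property name `tasks.enabled` is just an example; in Spring the flag would typically come from `@Value("${tasks.enabled:false}")`, or the whole bean could be guarded with `@ConditionalOnProperty`.

```java
// Sketch of a property-guarded scheduled task. Set the flag to true on exactly
// one server so only that instance actually executes the work.
public class GuardedTask {
    private final boolean enabled; // in Spring: injected from "tasks.enabled"
    private int runs = 0;

    public GuardedTask(boolean enabled) {
        this.enabled = enabled;
    }

    // In Spring this method would carry @Scheduled(cron = "...") and would
    // still fire on every server; only the enabled one does the work.
    public void runNightly() {
        if (!enabled) {
            return; // this server is not the designated task runner
        }
        runs++; // stand-in for the real task body
    }

    public int runs() {
        return runs;
    }
}
```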

Related

Is it a valid pattern for a Spring Batch task launched through an SCDF API call to launch other tasks through SCDF API calls?

I have a requirement for a scheduled batch that will identify which batches I need to restart or re-submit (as new job instances). The scheduled batch will identify them and call the SCDF API to launch the tasks. Is such a batch really a good design pattern?
I can implement the above pattern, but is it good practice, or can anyone suggest an alternative way of doing it?
I see nothing wrong in implementing that requirement with Spring Batch (it would be a single-step job with a simple tasklet). The question you should be asking is whether you really need a Spring Batch job for that. What would you gain from using Spring Batch? Most of its interesting features won't be used for that job (restartability, fault tolerance, transaction management, etc.), so the benefit/cost ratio is low IMO.
I have seen folks put similar logic (DB query + REST call) in a shell script scheduled with crontab, and I see nothing wrong with that approach either.
So it really depends on how you want to implement that requirement.
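A plain-Java sketch of that "DB query + REST call" logic. The SCDF host is a placeholder, the list of failed task names stands in for the result of the database query, and the endpoint shape (`POST /tasks/executions?name=...`) should be verified against the SCDF REST API docs for your version.

```java
import java.net.URI;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.List;

// Sketch: relaunch failed tasks through the SCDF REST API. The actual HTTP call
// is elided; with the JDK client it would be an HttpClient.send(...) of a POST
// request built from the URI returned here.
public class TaskRelauncher {
    private final String scdfBase; // e.g. "http://scdf.example:9393" (placeholder)

    public TaskRelauncher(String scdfBase) {
        this.scdfBase = scdfBase;
    }

    /** Build the launch URI for one task name. */
    public URI launchUri(String taskName) {
        String encoded = URLEncoder.encode(taskName, StandardCharsets.UTF_8);
        return URI.create(scdfBase + "/tasks/executions?name=" + encoded);
    }

    /** One launch URI per task found by the (stubbed) DB query. */
    public List<URI> relaunch(List<String> failedTaskNames) {
        return failedTaskNames.stream().map(this::launchUri).toList();
    }
}
```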

Spring Boot: run multiple tasks concurrently

I have a question regarding Spring Boot. My web application has different users, and for each user I need to run several tasks (e.g. import/export CSV, receive a REST request, process it and answer it, ...).
So let's say we have about 4 tasks per user; with 100 users that would be 400 tasks.
Now my question is: how can I handle such a number of tasks? The tasks should be scheduled with a cron expression and need to run in parallel/concurrently so that no user is at a disadvantage.
The main problem is that I really don't know how to make all these tasks run concurrently. Which Spring Boot facilities do I need to use (e.g. a special scheduler)?
Is it even possible to handle such a number of tasks running in parallel/concurrently?
Thank you for your help :)
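Since the question is open here, one common direction (an assumption, not a confirmed answer): size a pool-backed scheduler for the expected parallelism. In Spring Boot that usually means configuring a `ThreadPoolTaskScheduler` (or `spring.task.scheduling.pool.size`), since the default scheduler uses a single thread; the plain-JDK equivalent below shows the idea.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Plain-JDK sketch of running many scheduled tasks concurrently on a sized
// pool. Spring's ThreadPoolTaskScheduler wraps the same mechanism.
public class ConcurrentTasks {
    public static int runAll(int taskCount, int poolSize) throws InterruptedException {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(poolSize);
        CountDownLatch done = new CountDownLatch(taskCount);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            // schedule() stands in for a cron trigger firing one per-user task
            pool.schedule(() -> {
                completed.incrementAndGet(); // stand-in for CSV import/export etc.
                done.countDown();
            }, 10, TimeUnit.MILLISECONDS);
        }
        done.await(5, TimeUnit.SECONDS);
        pool.shutdown();
        return completed.get();
    }
}
```

With a pool of, say, 16 threads, 400 short tasks are simply queued and drained; the pool size bounds how many run at the same instant, not how many can be scheduled.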

One time task with Kubernetes

We are implementing a utility that will apply DDLs to the database. The utility is built with Spring Boot (Java) and has a main program that runs just once on startup. Can someone share what kind of K8s recipe file fits this? My considerations: the pod is expected to be short-lived, and after the program executes I want the pod to be killed.
Kubernetes Jobs are what you want for that.
Here is a great example.
Once you start running jobs you'll also want to think of an automated way of cleaning up the old ones. There are custom controllers written to clean up jobs, so you could look at those, but first-class support for job clean-up is being built in. I believe it is still in alpha state, but you can already use it, of course.
It works by simply adding a TTL to your job manifests. Here is more info on the job clean-up mechanism with TTL.
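A minimal Job manifest illustrating this; the name and image are placeholders, and `ttlSecondsAfterFinished` only takes effect where the TTL clean-up mechanism mentioned above is available:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ddl-migrator                # placeholder name
spec:
  ttlSecondsAfterFinished: 300      # auto-delete the Job 5 min after it finishes
  backoffLimit: 2                   # retry the pod at most twice on failure
  template:
    spec:
      restartPolicy: Never          # let the Job controller handle retries
      containers:
        - name: ddl-migrator
          image: registry.example.com/ddl-migrator:latest   # placeholder image
```

The pod runs the main program once, exits, and the Job (and its pod) are deleted after the TTL elapses.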

Spring Batch Parallel Job Scaling

I'm currently working on a Spring Batch POC and have got a pretty good handle on most of the actual Spring Batch features. I currently have a program that uses Spring Integration to receive an HTTP request and uses message channels to eventually queue job executions for the job launcher. What we'd really like to do is implement some kind of "scheduler/load balancer" (not quite sure what to call it) in front of the job launcher that looks at the currently running worker nodes and the size of the input file, and decides how many worker nodes the job should be allowed to use. We would probably also want to be able to change the number of worker nodes a job has while it is running, to allow more jobs to run.
The idea is that we'd have a server running that could accept many job requests at any time, and a large cluster of machines that jobs will be partitioned onto. We'd like to be able to scale horizontally, so whenever the server isn't busy it can make full use of the hardware, as well as being able to make sure that small jobs don't get constantly blocked by larger jobs.
From my research it seems like we'd have to implement another framework to do this (do GridGain and Hadoop allow this?), but I figured I'd ask to see what people recommended to do something like this, and if there's a way to do it without implementing another large framework.
Sorry if anything is unclear or confusing, I'm just a lowly intern who started learning Spring and Spring Batch last month and I'm far from completely understanding everything, especially this scaling stuff. Just ask and I'll try to clear things up.
Thanks for any help!
Take a look at the 'spring-batch-integration' project under the spring-batch-admin umbrella project https://github.com/SpringSource/spring-batch-admin
It has a number of examples of using Spring Integration to distribute work to other nodes. In particular, see the chunk and partition packages. Just swap out the Spring Integration channels for JMS channel adapters. By distributing work partitions via JMS, you can scale out the number of worker nodes as needed.
There are a number of threads on this subject in the spring integration forum; search for 'PartitionHandler'.
Hope that helps.
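The partition-distribution idea can be sketched like this, with a `BlockingQueue` standing in for the JMS destination (an assumption for illustration; in the real spring-batch-integration setup the master side sends step-execution requests over JMS channel adapters, and worker JVMs polling the same destination are how you scale out):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;

// Sketch of master-side partitioning: split an input range into partitions and
// put them on a shared queue that worker nodes consume. The queue stands in
// for a JMS destination.
public class Partitioner {
    record Partition(long startLine, long endLine) {}

    /** Split [0, totalLines) into roughly equal partitions, one per worker. */
    public static List<Partition> partition(long totalLines, int workers) {
        List<Partition> parts = new ArrayList<>();
        long chunk = (totalLines + workers - 1) / workers; // ceiling division
        for (long start = 0; start < totalLines; start += chunk) {
            parts.add(new Partition(start, Math.min(start + chunk, totalLines)));
        }
        return parts;
    }

    /** In real life: send one step-execution request per partition over JMS. */
    public static void dispatch(List<Partition> parts, BlockingQueue<Partition> jmsStandIn) {
        parts.forEach(jmsStandIn::add);
    }
}
```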

Spring quartz/cron jobs in a distributed environment

I have a fleet of about 5 servers. I want to run an identical Spring/Tomcat app on each machine.
I also need a particular task to be executed every ten minutes. It should only run on one of the machines. I need some sort of election protocol or other similar solution.
Does Spring or Quartz have any sort of built-in distributed cron solution, or do I need to implement something myself?
Hazelcast has a distributed executor framework which you can use to run jobs using the JDK Executor framework (which, by the way, is possibly more testable than horrid Quartz... maybe). It has a number of modes of operation, including having it pick a single node "at random" to execute your job on.
See the documentation for more details
