Spring Boot Quartz to skip some jobs - spring-boot

I use Spring Boot + a Maven multi-module project. I am using a cluster-aware Quartz configuration, so job details are stored in the database. Let's say 2 Spring Boot Maven modules (projects) use Quartz, but I want each module to run its own jobs, which are different for each module. As it is, when I start one module, it actually tries to run the other module's jobs too, because the Quartz engine reads the jobs to run from the database. So how do I specify in each module that it should run only the specific jobs related to that module?
One way is to have different table prefixes for each module. Is there any other way to use the same Quartz tables between 2 different modules, but let each module decide which jobs to run?

I have almost the same setup. My solution was:
Job A writes a flag ("running_by" = jobA) into the DB table next to the job, and when it is done, it removes the flag.
Job B then checks whether the job already has a flag. If not, it proceeds; if yes, it skips the job because Job A is already running it.
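The check-then-set protocol above has to be atomic, or two nodes can both see "no flag" and both start the job. A minimal sketch in plain Java, with a ConcurrentHashMap standing in for the DB table (in a real setup, putIfAbsent would be an INSERT guarded by a unique key on the job name):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the "running_by" flag protocol; the map stands in for the DB table.
class JobFlagLock {
    private final ConcurrentMap<String, String> runningBy = new ConcurrentHashMap<>();

    /** Returns true if owner acquired the flag for jobName, false if another owner holds it. */
    public boolean tryAcquire(String jobName, String owner) {
        // putIfAbsent is atomic, like an INSERT guarded by a unique key on the job name
        String current = runningBy.putIfAbsent(jobName, owner);
        return current == null || current.equals(owner);
    }

    /** Clears the flag, but only if this owner set it. */
    public void release(String jobName, String owner) {
        runningBy.remove(jobName, owner);
    }
}
```

The owner check in release matters: without it, a slow Job B could clear the flag that Job A still holds.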

Configure a property in quartz.properties:
org.quartz.jobStore.isClustered = true
That makes Quartz work in cluster mode.
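For reference, a minimal clustered JDBC-job-store setup in quartz.properties could look like the sketch below; the instance name, table prefix, and data source name are placeholders to adapt:

```properties
# All nodes of one cluster must share the same scheduler instance name...
org.quartz.scheduler.instanceName = MyClusteredScheduler
# ...but each node must get its own instance id
org.quartz.scheduler.instanceId = AUTO

# JDBC job store running in clustered mode
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.dataSource = myDS
org.quartz.jobStore.clusterCheckinInterval = 20000
```

Note that clustering makes the nodes share jobs rather than partition them, so on its own it does not make each module run only its own jobs.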

Related

How can I customize the number of jobs per second (jobs/s) executed by the Quartz scheduler?

I'm practicing with Quartz Scheduler in a small Spring Boot application. I really want to know if there is any way to control the number of jobs per second (jobs/s) executed by the Quartz scheduler. I checked all the Quartz properties in the documentation but could not find anything.
Thanks all!
You can manage this from your property file, though the property name depends on which job store you are using.
For a JDBC job store, the property name is org.quartz.threadPool.threadCount.
Check out http://www.quartz-scheduler.org/documentation/quartz-2.1.7/configuration/ — the official documentation for the respective implementation.
[documentation is for 2.1.7]
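The thread count lives in the thread pool section of quartz.properties; it caps how many jobs can run concurrently, which indirectly bounds throughput rather than setting an exact jobs-per-second rate. Illustrative values:

```properties
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
```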

Spring task:scheduled or @Scheduled to restrict a job from running on multiple instances

I have one @Scheduled job which runs on multiple servers in a clustered environment. However, I want to restrict the job to run on only one server; the other servers should not run the same job once any server has started it.
I have explored Spring Batch, which has a lock mechanism using database tables, but I am looking for a solution using only Spring's task:scheduler.
I had the same problem, and the solution I implemented was a lock mechanism with Hazelcast. To make it easy to use, I also added a dedicated annotation and a bit of Spring AOP. With this trick I was able to enforce a single schedule across the cluster with a single annotation.
Spring Batch has a nice feature: it will not run a job with the same job parameters twice.
You can use this so that when a Spring Batch job kicks off on another server, it does not run.
Usually people pass a timestamp as an argument, which bypasses this logic, but you can change that.
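The guard Spring Batch applies can be sketched in plain Java: a job instance is identified by the job name plus its parameters, and a combination that has already run is rejected. The registry below is only a stand-in for Spring Batch's job repository:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Stand-in for Spring Batch's job repository: remembers (name, parameters) pairs already run.
class JobInstanceRegistry {
    private final Set<String> completed = new HashSet<>();

    /** Returns true and records the run if this (job, params) combination has not run yet. */
    public boolean tryStart(String jobName, Map<String, String> params) {
        // A sorted map makes the identity independent of parameter order
        String identity = jobName + new TreeMap<>(params);
        return completed.add(identity);
    }
}
```

This also shows why the usual timestamp parameter defeats the check: every launch gets fresh parameters, hence a fresh identity.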

spring boot batch to spring cloud task with multiple jobs

I have a Spring Boot Batch application with 5 unique jobs that are executed from the console using the command:
java -jar artifactName jobName param1
but now this project will be moved to the cloud, so I need to use Spring Cloud Task. So far so good.
I know that I have to add @EnableTask to the main class and also define this property in application.properties:
spring.application.name=cloudTask
Reading the Spring documentation, I understand that to trigger my jobs using the Spring Cloud Data Flow server, I can define a task, which in this case I should name cloudTask. But that does not make sense to me, because how will I trigger it? My application has 5 different jobs, so the question is:
how do I connect this task name with the jobs defined in the application?
Logic tells me that I also need to define 5 task names; but then how do I bind each task name to its respective job?
With the @EnableTask annotation, you should be able to register your batch application as a Task application in SCDF, under 'Apps'.
Once your batch application appears in Apps:
If all 5 jobs are independent, you should be able to create 5 different composed tasks with the same app name but different parameters,
OR
If they are interlinked, the linked jobs can be combined into one composed task by providing an alias and passing the corresponding set of parameters in the DSL.
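As an illustrative sketch (the labels are made up), a composed-task definition linking two jobs packaged in the same cloudTask app could use labels in the DSL like this:

```
job1Runner: cloudTask && job2Runner: cloudTask
```

where && makes the second task launch only after the first completes successfully.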
Once the composed task is launched, the task execution status can be viewed under 'Tasks -> Executions', and the corresponding job status can be viewed under 'Jobs'.
To pass custom parameters to tasks, @EnableConfigurationProperties / @ConfigurationProperties can be leveraged.
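One way to bind a launch to a specific job, assuming the 5 jobs are registered as Spring beans, is Spring Boot's spring.batch.job.names property (renamed to spring.batch.job.name in recent Spring Boot versions): only the named job runs, so the same task application can be launched 5 times with a different job selected each time. A sketch, with jobOne as a placeholder bean name:

```properties
# application.properties: avoid auto-running every job on startup
spring.batch.job.names=

# then select the job per launch, e.g.
#   java -jar artifactName --spring.batch.job.names=jobOne param1
```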

When a spring-boot app is deployed on multiple nodes, how to handle cron jobs?

I use a Spring task to handle a simple sync job. But when I deploy on multiple nodes, how do I make sure the cron job runs only once?
You might say:
1. Use a distributed lock to control a flag before the cron job runs.
2. Integrate Quartz's cluster function.
But I was hoping Spring's @EnableScheduling could take a flag argument, so we could set a flag when launching the app.
We are using https://github.com/lukas-krecan/ShedLock with success, the ZooKeeper provider in particular.
Spring Boot, in a nutshell, doesn't provide any type of coordination between multiple instances of the same microservice.
All the work of such coordination is done by the third parties that Spring Boot integrates with.
One example of this is indeed the @Scheduled annotation.
Another is DB migration support via Flyway.
When many nodes start and the migration has to be done, Flyway is responsible for locking the migration table by itself; Spring Boot has nothing to do with it.
So, bottom line: there is no such support, and all the options you've raised can work.

How to make a task run on only one instance in a spring-boot microservice architecture

I have one service named AccountService. 5 instances are needed to support the traffic, and all of them run in Docker containers. Now I need to run a scheduled task, and it should run on only a single AccountService instance, not all five; which one does not matter.
My question is how to configure this. Can Eureka do it? ZooKeeper seems to have the ability to manage a cluster; do I need to register the AccountService in ZooKeeper?
Hope someone can share their experience with me.
Consider using a shared data store like Redis or, if you're already using a DB, a table in that DB, to hold a task lock. The first instance to come up grabs the lock, runs the task, and releases the lock.
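With Redis, the grab-the-lock idea maps onto SET key value NX EX ttl (set-if-absent with an expiry, so a crashed instance cannot hold the lock forever). A plain-Java sketch of those semantics, with a map standing in for Redis:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a lock with expiry, mimicking Redis "SET key value NX EX ttl".
class ExpiringTaskLock {
    private final Map<String, Long> expiresAt = new ConcurrentHashMap<>();

    /** Atomically takes the lock if it is free or the previous holder's TTL has elapsed. */
    public boolean tryLock(String task, long nowMillis, long ttlMillis) {
        // compute() runs atomically per key, like a single Redis command
        long[] acquired = {0};
        expiresAt.compute(task, (k, deadline) -> {
            if (deadline == null || deadline <= nowMillis) { // free, or expired
                acquired[0] = 1;
                return nowMillis + ttlMillis;
            }
            return deadline; // still held by another instance
        });
        return acquired[0] == 1;
    }
}
```

Pick the TTL longer than the task's worst-case runtime, or a second instance can start the task while the first is still working.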
Include spring-cloud-task in your dependencies (it is suitable for scheduled tasks).
Then enable this property: spring.cloud.task.single-instance-enabled=true
Add the Spring Integration dependencies; copy/paste from here: https://docs.spring.io/spring-cloud-task/docs/current/reference/#features-single-instance-enabled
Note:
Locks are created and stored in the TASK_LOCK table. Make sure it is clean, otherwise you will have problems restarting.
Use Spring-based task scheduling with @Scheduled.