How does Spring-XD handle job execution?

I can't get this information out of the documentation. Can anyone tell me how Spring-XD executes jobs? Does it assign a job to a certain container so that the job is only executed on the container it is deployed to, or is each job execution assigned to a different container? Can I somehow control that a certain job may be executed in parallel (with different arguments) while others may not?
Thanks!
Peter

I am sure you would have seen some of the documentation here:
https://github.com/spring-projects/spring-xd/wiki/Batch-Jobs
To answer your questions:
Can anyone tell me how Spring-XD executes jobs? Does it assign a job to a certain container so that the job is only executed on the container it is deployed to, or is each job execution assigned to a different container?
After you create a new job definition using this:
xd>job create dailyfeedjob --definition "myfeedjobmodule" --deploy
the batch job module myfeedjobmodule gets deployed into an XD container. Once deployed, a job-launching queue is set up in the message broker (Redis, Rabbit, or local); its name is job:dailyfeedjob. Since this queue is bound to the job module deployed in the XD container, a request message sent to this queue is picked up by the job module deployed inside that specific container.
Now, you can send the job-launching request message (with job parameters) into the job:dailyfeedjob queue simply by setting up a stream that sends a message into this queue; for example, a trigger (fixed-delay, cron, or date trigger) could do that. There is also a job launch command in the shell, which launches the job just once.
This section explains it in more detail: https://github.com/spring-projects/spring-xd/wiki/Batch-Jobs#launching-a-job
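For example, a sketch of a cron-trigger stream along those lines (the stream name dailyfeedtrigger and the 2am schedule are just illustrations, not from the docs):
xd>stream create dailyfeedtrigger --definition "trigger --cron='0 0 2 * * *' > queue:job:dailyfeedjob" --deploy
And a one-off launch from the shell:
xd>job launch dailyfeedjob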
Hence, the job is launched (every time a job-launching request is received) only inside the container where the job module is deployed, and you can expect the original Spring Batch flow when the job is executed. (Refer to the shell documentation for all the job-related commands.)
Can I somehow control that a certain job may be executed in parallel (with different arguments) while others may not?
If you mean launching the same job definition with different job parameters, then every launch request still goes to the same container where the job module is deployed.
But you can still create a new job definition using the same batch job module:
xd>job create myotherdailyfeedjob --definition "myfeedjobmodule" --deploy
The only difference is that it lives under its own namespace, and the job-launching queue name would be job:myotherdailyfeedjob. It all depends on how you want to organize running your batch jobs.
Also, for parallel processing batch jobs you can use:
http://docs.spring.io/spring-batch/reference/html/scalability.html
and XD provides single-step partitioning support for running batch jobs. Include this in your job module:
<import resource="classpath:/META-INF/spring-xd/batch/singlestep-partition-support.xml"/>
with partitioner and tasklet beans defined.
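For instance, a minimal sketch of those two beans (MultiResourcePartitioner is just one possible Partitioner; com.example.MyFileTasklet and the file path are hypothetical placeholders for your own tasklet class and input):
<bean id="partitioner" class="org.springframework.batch.core.partition.support.MultiResourcePartitioner">
    <!-- one partition per matching file; each partition's step context gets a 'fileName' entry -->
    <property name="resources" value="file:/data/feeds/*.csv"/>
</bean>
<bean id="tasklet" class="com.example.MyFileTasklet" scope="step">
    <!-- step scope so each partition picks up its own file name -->
    <property name="resource" value="#{stepExecutionContext['fileName']}"/>
</bean>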
You can try out some of the XD batch samples from here:
https://github.com/spring-projects/spring-xd-samples

Related

Multiple instances of a partitioned Spring Batch job

I have a partitioned Spring Batch job. The job is always started with a unique set of parameters, so it is always a new job instance.
My remoting fabric is JMS with request/response queues configured for communication between the masters and slaves.
One instance of this partitioned job processes files in a given folder. Master step gets the file names from the folder and submits the file names to the slaves; each slave instance processes one of the files.
Job works fine.
Recently, I started to execute multiple instances (completely separate JVMs) of this job to process files from multiple folders. So I essentially have multiple master steps running but the same set of slaves.
Randomly, I sometimes notice the following behavior: the slaves finish their work, but the master keeps spinning, thinking the slaves are still doing something. The step status shows successful in the job repository, but at the job level the status is STARTING with an exit code of UNKNOWN.
All masters share the set of request/response queues; one queue for requests and one for responses.
Is this a supported configuration? Can you have multiple master steps sharing the same set of queues running concurrently? Because of the behavior above I'm thinking the responses back from the workers are going to the incorrect master.
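For reference, a rough sketch of the wiring described above (the destination names are placeholders, not from the original setup); every master JVM binds to the same pair of destinations:
<!-- master side: partition requests go out, replies come back in -->
<int-jms:outbound-channel-adapter channel="partitionRequests" destination-name="partition.requests"/>
<int-jms:message-driven-channel-adapter channel="partitionReplies" destination-name="partition.replies"/>
With several masters consuming from the single partition.replies destination, nothing in this wiring by itself guarantees that a reply is consumed by the master that sent the matching request.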

Interrupting a job in Quartz with multiple instances

I have 5 instances of an application using Quartz in cluster mode, all of them running the Quartz scheduler (with PostgreSQL):
org.quartz.jobStore.isClustered:true
org.quartz.scheduler.instanceName: myInstanceName
org.quartz.scheduler.instanceId: AUTO
So I have a job which starts, does some operations, and then either updates itself with a new scheduled time if necessary or deletes itself. (One job can contain only one trigger.)
The application has a UI that allows the user to cancel the job.
When the interrupt command is sent from the UI:
If the job is not currently running, I can pause or cancel it.
If the job is currently running at that time, how can I stop it on the correct instance and get its current state? Basically, I want to catch the job at the moment the user interrupts it and save its state at that time.
Does scheduler.interrupt(jobKey) correctly interrupt my job, which implements InterruptableJob?
Does scheduler.interrupt() know exactly which instance is currently running the job, find the correct instance, and get the right state of the job?
Can you correct me, or tell me which way I should go?
The interrupt() method and getCurrentlyExecutingJobs() in Quartz are not cluster aware, which means interrupt() has to be called on the instance that is executing the job; in other words, only jobs with the specified job key running in the current instance will be interrupted.
An interrupt request can, however, be broadcast to all running instances of Quartz to cancel all running instances of the job.
from: https://www.quartz-scheduler.org/api/2.1.7/org/quartz/Scheduler.html#interrupt(org.quartz.JobKey)
This method is not cluster aware. That is, it will only interrupt instances of the identified InterruptableJob currently executing in this Scheduler instance, not across the entire cluster.

Spring Scheduler task in an app with multiple instances on multiple JVMs

I have a Spring scheduler task configured with either fixedDelay or cron, and multiple instances of this app running on multiple JVMs.
The default behavior is that all the instances execute the scheduler task.
Is there a way to control this behavior so that only one instance executes the scheduler task and the others don't?
Please let me know if you know any approaches.
Thank you
We had a similar problem. We fixed it like this:
Removed all @Scheduled beans from our Spring Boot services.
Created an AWS Lambda function scheduled with the desired schedule.
The Lambda function hits our top-level domain with a scheduling request.
The load balancer forwards this request to one of the service instances.
This way we are sure that scheduled task is executed only once across the cluster of our services.
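A minimal sketch of such a Lambda handler (SchedulePing and the endpoint URL are hypothetical placeholders, not from the original answer):
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.net.HttpURLConnection;
import java.net.URL;

public class SchedulePing implements RequestHandler<Object, Integer> {

    @Override
    public Integer handleRequest(Object event, Context context) {
        try {
            // The load balancer behind this domain routes the request to
            // exactly one service instance, which then runs the task.
            URL url = new URL("https://example.com/internal/run-scheduled-task");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            int status = conn.getResponseCode();
            conn.disconnect();
            return status;
        } catch (Exception e) {
            throw new RuntimeException("Scheduled trigger failed", e);
        }
    }
}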
I have faced a similar problem where the same scheduled batch job was running on two servers when it was intended to run on one node at a time. Later I found a solution: do not execute the job if it is already running on another server.
// launch only if no node currently has a running execution of this job
Job someJob = ...
Set<JobExecution> jobs = jobExplorer.findRunningJobExecutions("someJobName");
if (jobs == null || jobs.isEmpty()) {
    jobLauncher.run(someJob, jobParametersBuilder.toJobParameters());
}
So before launching the job, check whether it is already executing on another node.
Please note that this approach will only work with a DB-based job repository.
We had the same problem: our three instances were running the same job and doing the task three times every day. We solved it by making use of Spring Batch. Spring Batch only allows a unique job id (the identifying job parameters), so if you start the job with a job id like the date, it prevents duplicate jobs from starting with the same id. In our case we used a date like '2020-1-1' (since the job runs only once a day). All three instances try to start the job with id '2020-1-1', but Spring rejects two of them as duplicates, stating that job '2020-1-1' is already running.
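A minimal sketch of that date-keyed launch (assuming jobLauncher and dailyJob are injected Spring Batch beans; the parameter name runDate is arbitrary):
JobParameters params = new JobParametersBuilder()
        .addString("runDate", LocalDate.now().toString())  // identifying parameter, e.g. "2020-01-01"
        .toJobParameters();
try {
    jobLauncher.run(dailyJob, params);
} catch (JobExecutionException e) {
    // another instance already started (or finished) today's run; skip
}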
If my understanding of your question is correct, that you want to run this scheduled job on a single instance, then I think you should look at ShedLock.
ShedLock makes sure that your scheduled tasks are executed at most once at the same time. If a task is being executed on one node, it acquires a lock which prevents execution of the same task from another node (or thread). Please note, that if one task is already being executed on one node, execution on other nodes does not wait, it is simply skipped.
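A minimal sketch of a ShedLock-guarded task (assuming ShedLock's Spring integration is on the classpath, @EnableSchedulerLock is configured, and a LockProvider bean backed by a shared database is defined; the class name, lock name, and schedule are illustrative):
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class NightlyReportTask {

    @Scheduled(cron = "0 0 2 * * *")
    @SchedulerLock(name = "nightlyReportTask", lockAtMostFor = "PT30M")
    public void run() {
        // executes on at most one node per firing; the other nodes skip this round
    }
}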

How to continue an interrupted batch job after the application has started

My batch tasks are triggered by the end user, so I do not want to execute all batch jobs at application startup (hence spring.batch.job.enabled=false).
But I hope there is a solution for the following situation:
when the application starts, Spring Batch should continue any batch job that was interrupted by an application restart or an exceptional interruption.
Resuming a failed/interrupted job in Spring Batch is achieved by resubmitting the same job with the same job parameters.
Therefore you can do the following to resubmit failed jobs (assuming you are using a DB to store job metadata); a code sketch follows below:
By joining the BATCH_JOB_INSTANCE and BATCH_JOB_EXECUTION tables, find all job instances with no completed job execution.
Find the latest BATCH_JOB_EXECUTION for each of those incomplete job instances, and look up the corresponding job parameters in BATCH_JOB_EXECUTION_PARAMS.
Resubmit the job using the job name from BATCH_JOB_INSTANCE and the job parameters from BATCH_JOB_EXECUTION_PARAMS.
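The same bookkeeping is also reachable through the Spring Batch APIs instead of raw SQL. A minimal sketch (assuming jobExplorer, jobRepository, and jobOperator are injected beans; the method name is a placeholder):
public void resumeInterruptedJobs() throws Exception {
    for (String jobName : jobExplorer.getJobNames()) {
        // executions still marked as running are leftovers from before the restart
        for (JobExecution execution : jobExplorer.findRunningJobExecutions(jobName)) {
            // mark the stale execution FAILED so Spring Batch allows a restart
            execution.setStatus(BatchStatus.FAILED);
            execution.setEndTime(new Date());
            jobRepository.update(execution);
            // restart() relaunches the same job with the same job parameters
            jobOperator.restart(execution.getId());
        }
    }
}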

Launching one Task at a time from a Stream

I am using the File Source stream component to read files from a directory and send a File instance to a custom processor that reads the file and launches a specific task using a TaskLauncher sink. If I drop 5 files in the directory, 5 tasks launch at the same time. What I am trying to achieve is to have the tasks executed one after the other, so I need to monitor the state of each task and ensure the prior task has completed before launching the next one. What are my options for implementing this? As a side note, I am running this on a YARN cluster.
Thanks,
-Frank
I think asynchronous task launching by the YARN TaskLauncher could be why it looks like all the tasks are launched at the same time. One possible approach you can try is a custom task launcher sink that launches the task and waits for the task status to reach completed before it starts processing the next trigger request.
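A rough sketch of that idea (SequentialTaskLauncherSink is a hypothetical name, and launchTask()/isTaskRunning() stand in for whatever launch and status APIs your platform exposes; they are not Spring XD calls):
import org.springframework.integration.annotation.MessageEndpoint;
import org.springframework.integration.annotation.ServiceActivator;

@MessageEndpoint
public class SequentialTaskLauncherSink {

    @ServiceActivator(inputChannel = "input")
    public synchronized void handle(String taskLaunchRequest) throws InterruptedException {
        // launch, then block this sink until the task reports completion,
        // so the next trigger request is not processed concurrently
        String taskId = launchTask(taskLaunchRequest);
        while (isTaskRunning(taskId)) {
            Thread.sleep(5000L);  // poll the task status
        }
    }

    private String launchTask(String request) {
        // platform-specific launch call goes here
        return "task-id";
    }

    private boolean isTaskRunning(String taskId) {
        // platform-specific status check goes here
        return false;
    }
}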
