Spring Boot + Job Framework - spring-boot

My requirement is that whenever a certain REST API is called from the UI or Postman, the backend should trigger a job that performs several operations/tasks.
Example:
Assume some POST REST API is invoked -
It should invoke "Identify-JOB" (which performs several activities). Based on a certain condition, it should then invoke PLANA-JOB or PLANB-JOB.
1> Suppose PLANA-JOB is invoked: on the success of this job, it should trigger another job called "finish-JOB". On failure, it should not invoke "finish-JOB".
Can you please help with how I can do this?

You can use async processing: the REST call triggers the first job, and that task triggers the next set of tasks.
You can build them like AWS Step Functions.
You can use Rqueue to enqueue an async task that will then be processed by one of the listeners.
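A minimal sketch of the async-processing approach using Spring's @Async; the job names, the String payload, and the identify(...) condition are placeholders taken from the question, not a definitive implementation:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.ResponseEntity;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@Configuration
@EnableAsync
class AsyncConfig { }

@Service
class JobOrchestrator {
    private static final Logger log = LoggerFactory.getLogger(JobOrchestrator.class);

    @Async // runs on a worker thread; the REST call returns immediately
    public void runIdentifyJob(String payload) {
        boolean planA = identify(payload); // placeholder branching condition
        try {
            if (planA) {
                runPlanAJob(payload);
            } else {
                runPlanBJob(payload);
            }
            runFinishJob(payload); // reached only if the plan job succeeded
        } catch (Exception e) {
            log.error("Plan job failed; finish-JOB is not triggered", e);
        }
    }

    private boolean identify(String payload) { /* Identify-JOB activities */ return true; }
    private void runPlanAJob(String payload) { /* PLANA-JOB work */ }
    private void runPlanBJob(String payload) { /* PLANB-JOB work */ }
    private void runFinishJob(String payload) { /* finish-JOB work */ }
}

@RestController
class JobController {
    private final JobOrchestrator orchestrator;

    JobController(JobOrchestrator orchestrator) { this.orchestrator = orchestrator; }

    @PostMapping("/jobs")
    public ResponseEntity<String> trigger(@RequestBody String payload) {
        orchestrator.runIdentifyJob(payload); // fire-and-forget
        return ResponseEntity.accepted().body("Identify-JOB triggered");
    }
}

Note that @Async only takes effect when the method is called through the Spring proxy, i.e. from another bean (here the controller), not via self-invocation.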

Related

Optimistic locking exception encountered in parallel service task

Problem: a service task (which calls an API) is invoked twice, with an OptimisticLockingException.
Case: I have two sub-processes; all service tasks call APIs, and the sub-processes have async before set to 'true', but I got an OptimisticLockingException.
Also, the same service task gets invoked twice before proceeding to the next service task in the sequence. When this happens, the API response received in the first invocation of the service task is not persisted in the DB, and the response of the 2nd API call is persisted.
I tried to resolve this by setting exclusive to true, but then my service tasks run sequentially across the sub-processes. Now I want to run my sub-processes in parallel, with a single invocation of each service task within the sub-process.
If you want to avoid the service task getting executed a second time in case of a rollback (triggered e.g. by an OLE), try setting async after on the service task. However, this is not required if it is followed by an async before on a subsequent parallel gateway.
To handle OLEs, the best practice is to set an async before on the parallel gateway. See
https://docs.camunda.io/docs/components/best-practices/development/understanding-transaction-handling-c7/
"Do configure a savepoint before [...] parallel joins (Parallel Join, Inclusive Join, Multi-instance Task): Parallel joins synchronize separate process paths, which is why one of two path executions arriving at a parallel join at the same time will be rolled back with an optimistic locking exception and must be retried later on. Therefore such a savepoint makes sure that the path synchronisation will be taken care of by Camunda's internal job executor. Note that for multi-instance activities, there exists a dedicated "multi instance asynchronous after" flag which saves every single instance of those multiple instances directly after their execution, hence still "before" their technical synchronization."

How to run Lambda functions in parallel with individual retries and only update final state once all complete successfully?

I need to orchestrate multiple jobs in parallel (each using Lambdas in AWS), ensure all of them finish by retrying individual jobs as needed, and then update a state only when all jobs have completed successfully. It will look like the parallel-execution diagram from the Step Functions documentation (image omitted here). I'm considering whether Step Functions might be the answer.
Any thoughts on how this might look using Lambda (with or without Step Functions)? I'm guessing dead-letter queues might be involved to facilitate retries? My biggest unknown is how to update the final state only after all jobs complete, considering that retries may have occurred.
You are correct: AWS Step Functions would solve your problem.
But since you are also looking at approaches using pure Lambda, you will need state persistence, as Lambda does not share state across different function invocations.
Create a data structure that is checked at the end of each Lambda execution, e.g. boolean attributes corresponding to each process that has to be executed.
At the end of each process (Lambda execution), set the attribute for that Lambda to true, then check whether all the attributes are true; if so, invoke the Lambda responsible for the next step of your pipeline.
If you need retries when errors come up, add a DLQ so you have more control over it.
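A hedged sketch of that completion-flags pattern in Java with the AWS SDK v2, assuming a hypothetical DynamoDB table "pipeline-state" keyed by runId and a hypothetical follow-up function "finalize-state":

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.*;
import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.InvokeRequest;
import java.util.List;
import java.util.Map;

public class JobCompletionTracker {
    private static final List<String> JOBS = List.of("jobA", "jobB", "jobC");
    private final DynamoDbClient dynamo = DynamoDbClient.create();
    private final LambdaClient lambda = LambdaClient.create();

    /** Called at the end of each job Lambda. */
    public void markDoneAndMaybeFinish(String runId, String jobName) {
        // Set this job's flag to true on the shared state item.
        dynamo.updateItem(UpdateItemRequest.builder()
            .tableName("pipeline-state")
            .key(Map.of("runId", AttributeValue.fromS(runId)))
            .updateExpression("SET #j = :t")
            .expressionAttributeNames(Map.of("#j", jobName))
            .expressionAttributeValues(Map.of(":t", AttributeValue.fromBool(true)))
            .build());

        // Re-read the item and check whether every flag is now true.
        Map<String, AttributeValue> item = dynamo.getItem(GetItemRequest.builder()
            .tableName("pipeline-state")
            .key(Map.of("runId", AttributeValue.fromS(runId)))
            .consistentRead(true)
            .build()).item();

        boolean allDone = JOBS.stream()
            .allMatch(j -> item.containsKey(j) && Boolean.TRUE.equals(item.get(j).bool()));

        if (allDone) {
            // Trigger the final state update asynchronously.
            lambda.invoke(InvokeRequest.builder()
                .functionName("finalize-state") // hypothetical finalizer Lambda
                .invocationType("Event")
                .payload(SdkBytes.fromUtf8String("{\"runId\":\"" + runId + "\"}"))
                .build());
        }
    }
}

Note that the check-then-invoke step is racy if two jobs finish at the same moment, so in practice the finalizer should be idempotent, or the final check should be guarded with a conditional write.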

How to prevent Quartz scheduler from missing few executions

We have a Quartz scheduler to trigger a Jenkins job, which is created on the fly through the Jenkins create API.
So I have placed my Jenkins create-job API call inside the executeInternal method. When multiple parallel requests are made, sometimes all the requests are picked up for execution and sometimes a few are missed.
The count of missed executions varies.
How do I stop this and make Quartz run all of my requests?
I tried increasing the thread count and the misfire threshold, but the issue persists.
It seems like all you need is to set the correct misfire instruction, based on your business logic and the type of trigger:
Trigger trigger = TriggerBuilder.newTrigger()
    .withIdentity(changePoolNameTriggerKey)
    .startAt(new DateTime()
        .plusSeconds(configuration.getInt(JobConstants.execution_latency))
        .toDate())
    .build();
// MISFIRE_INSTRUCTION_FIRE_NOW is defined on SimpleTrigger
((MutableTrigger) trigger).setMisfireInstruction(SimpleTrigger.MISFIRE_INSTRUCTION_FIRE_NOW);
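Equivalently, the schedule-builder API sets the same instruction without the MutableTrigger cast; a sketch assuming the same trigger key and latency configuration as above:

Trigger trigger = TriggerBuilder.newTrigger()
    .withIdentity(changePoolNameTriggerKey)
    .startAt(new DateTime()
        .plusSeconds(configuration.getInt(JobConstants.execution_latency))
        .toDate())
    .withSchedule(SimpleScheduleBuilder.simpleSchedule()
        .withMisfireHandlingInstructionFireNow()) // misfired triggers fire immediately
    .build();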

Execute a task after queue completion in laravel

I have a queue like the following in Laravel:
Mail::queue($notificationCreated->template, $data, function ($message) use ($data) {
$message->to($data['email'], $data['first_name'])->subject($data['subject']);
});
Is it possible to execute a task after the queue completes its execution, i.e. in my case after sending the mail?
Something like this is not present in the API, because that's not the point of a queue.
It's asynchronous: after calling Mail::queue you immediately get control back and code execution continues. That does not mean the actual job has been executed, just that it's been scheduled.
And there's no way of writing a Mail::whenJobIsComplete there, because that would mean the whole execution of your code would have to stop and wait for the job to be completed. There's no way this could work asynchronously.
You could, however, periodically poll for completed jobs and execute code when that happens. There's built-in functionality for polling for failed jobs in the API.
But the best approach would be to write your own custom queue listener and add functionality alongside the handleWorkerOutput call.
Again, this is asynchronous: this code will run at some indeterminate point in the future, not even close to the place where you initially called Mail::queue.

dbms_scheduler job chain exceptions

I'd like to find the best way to handle exceptions (failure of any step) from an Oracle Scheduler job chain (11gR2).
Say I have a chain that contains 20 steps. If at any point the chain exits with FAILURE, I'd like to perform a set of actions. These actions are specific to that chain, not to the individual steps (each step's procedure may be used outside the Scheduler or in other chains).
Thanks to 11gR2, I can now set up an email notification on FAILURE of the chain, but this is only one of several actions I need to take, so it's only a partial solution for me.
The only thing I can think of is to have another polling job check the status of my chain every x minutes and launch the failure actions when it sees that the latest run of the chain exited with FAILURE status. But that is a hack at best, in my opinion.
What is the best way to handle exceptions for a given job chain?
Thanks
The most flexible way to handle job exceptions in general is to use a job-exception monitoring procedure and to define the jobs so that they raise events upon job status changes. The monitoring procedure should watch the Scheduler event queue in a loop and react to events in whatever way you define.
Doing so takes away the burden of having to create a failure step for just about every job step in a chain. This is a very powerful mechanism.
For lack of time: there is a complete scenario of event-based scheduling in the book; I will dig one up later.
