Optimistic locking exception encountered in parallel service task - spring-boot

Problem: A service task (which calls an API) is invoked twice, with an OptimisticLockingException.
Case: I have two sub-processes, all service tasks call APIs, and the sub-processes are set to async before = 'true', but I got an OptimisticLockingException.
Also, the same service task gets invoked twice before proceeding to the next service task in the sequence. When this happens, the API response received as part of the first invocation of the service task is not persisted in the DB, and the response of the second API call is persisted.
I tried to resolve this by setting exclusive = 'true', but then my service tasks run sequentially across the sub-processes. Now I want to run my sub-processes in parallel, with a single invocation of each service task within a sub-process.

If you want to avoid the service task being executed a second time in case of a rollback (triggered e.g. by an OLE), try setting async after on the service task. However, this is not required if it is followed by an async before on a subsequent parallel gateway.
To handle OLEs, the best practice is to set async before on the parallel gateway. See
https://docs.camunda.io/docs/components/best-practices/development/understanding-transaction-handling-c7/
"Do configure a savepoint before [...] parallel joins (Parallel Join, Inclusive Join, Multi-instance Task): Parallel joins synchronize separate process paths, which is why one of two path executions arriving at a parallel join at the same time will be rolled back with an optimistic locking exception and must be retried later on. Therefore such a savepoint makes sure that the path synchronization will be taken care of by Camunda's internal job executor. Note that for multi-instance activities, there exists a dedicated "multi instance asynchronous after" flag which saves every single instance of those multiple instances directly after their execution, hence still "before" their technical synchronization."
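In Camunda 7 BPMN XML, that savepoint is the `camunda:asyncBefore` flag on the joining parallel gateway; a minimal sketch (the element id is illustrative):

```xml
<!-- async before creates a savepoint: each incoming path completes its own
     transaction here, and the job executor retries the join on an OLE
     instead of surfacing it to the caller (jobs are exclusive by default) -->
<bpmn:parallelGateway id="JoinGateway_1" camunda:asyncBefore="true" />
```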

Related

How to run Lambda functions in parallel with individual retries and only update final state once all complete successfully?

I need to orchestrate multiple jobs in parallel (each using Lambdas in AWS), ensure all finish by retrying individual jobs as needed, and then update a state only when all jobs have completed successfully. It will look like this:
This image was taken from the Step Functions documentation. I'm considering whether Step Functions might be an answer.
Any thoughts on how this might look using Lambda (with or without Step Functions)? I'm guessing deadletter queues might be involved to facilitate retries? My biggest unknown is how to update the final state only after all jobs complete and considering whether retries may have occurred.
You are correct: using AWS Step Functions would resolve your problem.
But since you are looking for other approaches using pure Lambda, you will need state persistence, as Lambda doesn't keep state across different functions.
Create a data structure that is checked at the end of each Lambda execution, e.g. boolean attributes corresponding to each process that has to be executed.
At the end of each process (Lambda execution), set the attribute for that Lambda's process to true, then verify whether all the attributes are true; if so, you can invoke the Lambda responsible for the next step of your pipeline.
If you need retries when errors come up, implement a DLQ to get more control over them.
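A minimal in-memory sketch of that completion-tracking structure, assuming Java; `JobTracker` and the job names are hypothetical, and in a real deployment the map would live in a persistent store (e.g. DynamoDB), since Lambda keeps no state across invocations:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory stand-in for the shared data structure described
// above; each boolean attribute corresponds to one process to be executed.
public class JobTracker {
    private final Map<String, Boolean> done = new ConcurrentHashMap<>();

    public JobTracker(List<String> jobNames) {
        for (String name : jobNames) done.put(name, false);
    }

    // Called at the end of each Lambda execution.
    public void markComplete(String jobName) {
        done.put(jobName, true);
    }

    // True once every tracked job has reported completion; at that point
    // the Lambda for the next step of the pipeline can be invoked.
    public boolean allComplete() {
        return done.values().stream().allMatch(b -> b);
    }

    public static void main(String[] args) {
        JobTracker tracker = new JobTracker(List.of("resize", "transcode"));
        tracker.markComplete("resize");
        System.out.println(tracker.allComplete()); // false
        tracker.markComplete("transcode");
        System.out.println(tracker.allComplete()); // true
    }
}
```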

Spring Boot + Job Framework

My requirement is that whenever a certain REST API is called from the UI/Postman, the backend should trigger a job that performs several operations/tasks.
Example:
Assume some POST REST API is invoked:
It should invoke "Identify-JOB" (performs several activities). Based on certain conditions, it should invoke PLANA-JOB or PLANB-JOB.
1> Suppose PLANA-JOB is invoked: on the success of this job, it should trigger another job called "finish-JOB". On failure, it should not invoke "finish-JOB".
Can you please help with how I can do this?
You can use async processing: the API call triggers the first job, and that task triggers the next set of tasks.
You can build them like AWS Step Functions.
You can use Rqueue to enqueue an async task, which will be processed by one of the listeners.
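A hedged sketch of that chaining idea with plain `CompletableFuture` (the job names come from the question; the method bodies are stubs standing in for the real work):

```java
import java.util.concurrent.CompletableFuture;

public class JobPipeline {

    static String identifyJob() { return "PLANA"; }      // stub: decides which plan to run
    static String planAJob()    { return "planA done"; } // stub for PLANA-JOB
    static String planBJob()    { return "planB done"; } // stub for PLANB-JOB
    static void   finishJob()   { System.out.println("finish-JOB"); }

    // Chain the jobs: each stage runs only if the previous one completed
    // without throwing, so finish-JOB is skipped on failure.
    static CompletableFuture<String> run() {
        return CompletableFuture
                .supplyAsync(JobPipeline::identifyJob)
                .thenApply(plan -> "PLANA".equals(plan) ? planAJob() : planBJob())
                .thenApply(result -> { finishJob(); return result; })
                .exceptionally(ex -> "pipeline failed: " + ex.getMessage());
    }

    public static void main(String[] args) {
        System.out.println(run().join());
    }
}
```

Because `thenApply` stages are skipped once a prior stage throws, "finish-JOB" naturally runs only on success, matching the requirement in the question.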

How to prevent Quartz scheduler from missing few executions

We have a Quartz scheduler to trigger a Jenkins job, which is created on the fly through the Jenkins create API.
So I placed my Jenkins job-creation API call inside the executeInternal method. When multiple parallel requests are made, sometimes all the requests are picked up for execution and sometimes a few are missed.
The count of missed executions varies.
How do I stop this and make Quartz run all of my requests?
I tried increasing the thread count and the misfire threshold, but the issue persists.
It seems all you need is to set the correct misfire instruction based on your business logic and the type of trigger:
Trigger trigger = TriggerBuilder.newTrigger()
        .withIdentity(changePoolNameTriggerKey)
        .startAt(new DateTime()
                .plusSeconds(configuration.getInt(JobConstants.execution_latency))
                .toDate())
        .build();
((MutableTrigger) trigger).setMisfireInstruction(MISFIRE_INSTRUCTION_FIRE_NOW);

Get status of asynchronous (InvocationType=Event) AWS lambda execution

I am creating an AWS Step Function where one of the steps, let's call it step X, starts a variable number of Lambdas. Since these Lambda functions are long-running (they take between 1 and 10 minutes each to complete), I don't want to wait for them in step X; I would be spending money just on waiting. I therefore start them with InvocationType=Event so that they all run asynchronously and in parallel.
Once step X is done starting all these lambdas, I want my step function to wait for all these asynchronous functions to complete. So, a little like described here, I would create some kind of while loop in my step function. This loop would wait until all my asynchronous invocations have completed.
So the problem is: is it possible to query for the status of an AWS lambda that was started with InvocationType=Event?
If it is not possible, I would need my lambdas to persist their status somewhere so that I can poll this status. I would like to avoid this strategy since it does not cover problems that occur outside of my lambda (ex: out of memory, throttling exceptions, etc.)
An asynchronously invoked Lambda is a "fire and forget" use case. There's no straightforward way to get its result. I'm afraid you'll have to write your own job synchronization logic.
Instead of polling (which, again, is expensive), you can provide a callback for the Lambda to post back to asynchronously. Once you get positive results for all Lambdas, continue the process.
Since the question was initially posted, AWS has added support for dynamic parallelism in workflows. Manually starting Lambda functions and polling for their completion from within a Step Function is therefore now an anti-pattern.
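With dynamic parallelism, the fan-out/fan-in becomes a single `Map` state in the state machine definition; a sketch in Amazon States Language (the state names, items path, and function ARN are illustrative):

```json
"ProcessAllJobs": {
  "Type": "Map",
  "ItemsPath": "$.jobs",
  "MaxConcurrency": 0,
  "Iterator": {
    "StartAt": "RunJob",
    "States": {
      "RunJob": {
        "Type": "Task",
        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:LongRunningJob",
        "Retry": [ { "ErrorEquals": [ "States.ALL" ], "MaxAttempts": 3 } ],
        "End": true
      }
    }
  },
  "Next": "UpdateFinalState"
}
```

The `Map` state only transitions to the next state once every iteration has succeeded (after per-item retries), which covers the "update final state only after all jobs complete" requirement without any hand-rolled polling.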

Explicitly setting max jobs / flow limit on a BW sub-process

I know we can set Max Jobs and Flow Limit on a TIBCO starter process, but is there any way of explicitly setting them on a sub-process (non-starter process)?
Max Jobs and Flow Limit can only be set on process starters or on spawned subprocesses. Flow control on regular (i.e. non-spawned) subprocesses is determined by the parent process starter's configuration and cannot be overridden.
If you want to be able to control the flow of a subprocess, I see 2 options:
1. Make it a spawnable process.
2. Make it an independent process with its own process starter (e.g. JMS Queue Receiver) and have the parent process invoke it with the appropriate protocol (e.g. JMS). This way you can control the process's flow as you would with any process starter.
I agree with Nicolas. However, if, say, your flow allows a maximum of 10 jobs to enter but you then want one job at a time to be executed, you could use a "Critical Section" to make sure only one job accesses the resources at any given time. (This is an example only.)
"Critical section groups are used to synchronize process instances so that only one process instance executes the grouped activities at any given time. Any concurrently running process instances that contain a corresponding critical section group wait until the process instance that is currently executing the critical section group completes.
Critical Section groups are particularly useful for controlling concurrent access to shared variables (see Synchronizing Access to Shared Variables for more information). However, other situations may occur where you wish to ensure that only one process instance is executing a set of activities at a time."
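Conceptually, a Critical Section group behaves like a mutex around the grouped activities; a minimal Java analogy (not TIBCO code) using `ReentrantLock`:

```java
import java.util.concurrent.locks.ReentrantLock;

public class CriticalSectionDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int sharedCounter = 0;

    // The "grouped activities": only one thread may execute them at a time.
    static void criticalSection() {
        lock.lock();
        try {
            sharedCounter++; // touches shared state, hence the lock
        } finally {
            lock.unlock();
        }
    }

    // Runs 10 concurrent "process instances", each entering the critical
    // section 1000 times; returns the final counter value.
    static int runAll() throws InterruptedException {
        sharedCounter = 0;
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) criticalSection();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return sharedCounter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAll()); // 10000
    }
}
```

Without the lock the increments could interleave and lose updates; with it, the result is deterministic, which is exactly the guarantee the Critical Section group gives concurrent process instances.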
