OpenWhisk parallel execution

I tried the code from this answer: Parallel Actions in OpenWhisk.
But instead of getting the result, I only get back the activation ID. It seems that the actions are not blocking when invoking an array of actions?

The calls to action1 and action2 in the code you linked to are made blocking. However, that does not affect how you invoke the enclosing action itself. To invoke it blocking, use
wsk action invoke <action-name> -b
This will give you a lot of output. You can reduce it by filtering down to the result only:
wsk action invoke <action-name> -br
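The same blocking, result-only invocation can also be done through the OpenWhisk REST API by passing blocking=true and result=true. Here is a minimal Python sketch; the API host, namespace, and auth key are placeholders — take the real values from `wsk property get --apihost --auth`.

```python
import base64
import json
import urllib.request

# Placeholder values -- replace with the output of `wsk property get`.
API_HOST = "https://openwhisk.example.com"
AUTH_KEY = "user:password"


def action_url(api_host, namespace, action):
    """Build the REST endpoint for invoking an action."""
    return f"{api_host}/api/v1/namespaces/{namespace}/actions/{action}"


def invoke_blocking(namespace, action, params=None):
    """POST with blocking=true&result=true -- the REST equivalent of
    `wsk action invoke <action-name> -br`. Returns the action's result
    rather than just an activation ID."""
    url = action_url(API_HOST, namespace, action) + "?blocking=true&result=true"
    req = urllib.request.Request(
        url,
        data=json.dumps(params or {}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic " + base64.b64encode(AUTH_KEY.encode()).decode(),
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Without the blocking flag, the same POST returns immediately with only an activation ID, which matches the behavior described in the question.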

Related

How to pass output of one action as an input to another action in state machine Step functions?

I have an architecture where I want the output of one Lambda function to be the input to another within a single state machine. Also, after I invoke all three Lambdas, I want the error of each of them to go to an error-handling Lambda.
How is this achievable in Step Functions?
The first part of your question is the very basis of state machines. This would be done in a Parallel state if you need them to run simultaneously; if not, you can simply chain them, since a Task state's output becomes the next state's input by default. You should follow the documentation: https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html
As for the second part:
If they are all actually erroring? No, it is not possible to catch all three errors in the state machine; only the first would be caught using a Catch, because as soon as one Lambda errors, the state machine stops its execution. If you are interested in catching the error that caused your state machine to fail (i.e., it should not be a normal scenario), then you can use the Catch properties of the state machine definition.
Again, refer to the documentation for how to code this.
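To make the two ideas concrete, here is a minimal Amazon States Language sketch (written as a Python dict for readability) that chains two Task states, so the first state's output feeds the second, and routes errors to a handler via Catch. The state names and Lambda ARNs are placeholders, not from the original question.

```python
import json

# Hypothetical ASL definition: StepOne's output becomes StepTwo's input by
# default; a Catch on each Task routes the first error to HandleError.
definition = {
    "StartAt": "StepOne",
    "States": {
        "StepOne": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:stepOne",
            "Next": "StepTwo",
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "HandleError"}],
        },
        "StepTwo": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:stepTwo",
            "Catch": [{"ErrorEquals": ["States.ALL"], "Next": "HandleError"}],
            "End": True,
        },
        "HandleError": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:errorHandler",
            "End": True,
        },
    },
}

# Serialize to the JSON form Step Functions actually accepts.
print(json.dumps(definition, indent=2))
```

Note that only the first error reaches HandleError: once a Catch fires, execution moves to the handler and the remaining states never run.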

Spring Boot + job framework

My requirement is: whenever a certain REST API is called from the UI or Postman, the backend should trigger a job that performs several operations/tasks.
Example:
Assume some POST REST API is invoked:
It should invoke "Identify-JOB" (which performs several activities). Based on a certain condition, it should then invoke either PLANA-JOB or PLANB-JOB.
Suppose PLANA-JOB is invoked: on success of this job, it should trigger another job called "finish-JOB"; on failure, it should not invoke "finish-JOB".
Can you please help with how I can do this?
You can use async processing: the REST call triggers the first job, and that task in turn triggers the next set of tasks, chaining them much like AWS Step Functions do. For example, you can use Rqueue to enqueue an async task, which will then be processed by one of the listeners.

Get status of asynchronous (InvocationType=Event) AWS lambda execution

I am creating an AWS step function where one of the steps, let's call it step X, starts a variable number of Lambdas. Since these Lambda functions are long-running (they take between 1 and 10 minutes each to complete), I don't want to wait for them in step X; I would be spending money just on waiting. I therefore start them with InvocationType=Event so that they all run asynchronously and in parallel.
Once step X is done starting all these lambdas, I want my step function to wait for all these asynchronous functions to complete. So, a little like described here, I would create some kind of while loop in my step function. This loop would wait until all my asynchronous invocations have completed.
So the problem is: is it possible to query for the status of an AWS lambda that was started with InvocationType=Event?
If it is not possible, I would need my Lambdas to persist their status somewhere so that I can poll it. I would like to avoid this strategy since it does not cover problems that occur outside of my Lambda (e.g., out of memory, throttling exceptions, etc.).
An asynchronously invoked Lambda is a "fire and forget" use case. There's no straightforward way to get its result. I'm afraid you'll have to write your own job synchronization logic.
Instead of polling (which again is expensive), you can provide a callback for the Lambda to post back to asynchronously. Once you get positive results for all Lambdas, continue the process.
Since the question was initially posted, AWS has added support for dynamic parallelism (the Map state) in workflows. Manually starting Lambda functions and polling for their completion from within a step function is therefore now an anti-pattern.
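A minimal sketch of that dynamic-parallelism approach, again as an ASL definition expressed as a Python dict: the Map state invokes the worker Lambda once per element of the input array and waits for all iterations to finish, so step X needs neither manual fan-out nor polling. The ARN, state names, and items path are placeholders.

```python
import json

# Hypothetical Map-state definition: Step Functions fans out over $.items
# and only proceeds once every iteration has completed (or failed).
definition = {
    "StartAt": "FanOut",
    "States": {
        "FanOut": {
            "Type": "Map",
            "ItemsPath": "$.items",       # one iteration per array element
            "MaxConcurrency": 0,          # 0 = no concurrency limit
            "Iterator": {
                "StartAt": "Worker",
                "States": {
                    "Worker": {
                        "Type": "Task",
                        "Resource": "arn:aws:lambda:us-east-1:123456789012:function:longWorker",
                        "End": True,
                    }
                },
            },
            "End": True,
        }
    },
}

print(json.dumps(definition, indent=2))
```

Because Step Functions itself tracks each iteration, failures such as timeouts or throttling surface as state-machine errors rather than being lost, which addresses the concern about problems occurring outside the Lambda code.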

How to debug an asynchronous RFC - Starting new task...Performing...on end of task?

I am a beginner in SAP ABAP, and I am debugging an asynchronous RFC (parallel processing). I have put a break-point at the calling point of the RFC, an external break-point inside the RFC, and an external break-point in the form that is called at the end of the task via PERFORM. I am able to debug the RFC function module.
Another session opens up, but I am not able to debug the PERFORM that is called after END OF TASK. After the RFC is debugged, control returns to the calling point of the FM; it doesn't go inside the form. Only when all the iterations are finished does it go inside the form. Why so? Shouldn't the PERFORM be executed in parallel?
Inside the form I have written RECEIVE RESULTS FROM FUNCTION xxx, but the debugger does not step into the form after returning from the RFC.
You have given very little information on the overall program flow, but there's a part of the documentation that might be relevant to your case:
A prerequisite for the execution of a registered callback routine is that the calling program still exists in its internal session when the remote function is terminated. It is then executed here at the next change of the work process in a roll-in. If the program was terminated or is located on the stack as part of a call sequence, the callback routine is not executed.
[...]
The time when the callback routines are executed can be programmed explicitly or be reached implicitly:
- The statement WAIT FOR ASYNCHRONOUS TASKS is used for explicit programming. As specified by a condition, this statement changes the work process and hence executes the callback routines registered up to this time. It waits for as many registered routines to end until the condition is met (the maximum wait time can be restricted). Explicit programming is recommended whenever the results of the remote function are required in the current program.
- If the results of the remote function are not required in the current program, the time at which the callback routines are executed can also be determined by an implicit change of the work process (for example, at the end of a dialog step). This can be a good idea, for example, in GUI scenarios in which uses of WAIT are not wanted. In this case, it must be ensured that the work process changes before the program is ended. There is also a risk that, if the work process is changed implicitly, not all callback routines are registered in time.
It is likely that the program issuing the call and registering the callback routine either is terminated or does not issue a WAIT FOR ASYNCHRONOUS TASKS, so the callback is only executed on the next roll-in.
Re-reading your question: you apparently assume that the callback routine will be executed in parallel with the program that registered it. That is not the case; ABAP is not multi-threaded.

Execute a task after queue completion in laravel

I have a queued mail call like the following in Laravel:
Mail::queue($notificationCreated->template, $data, function ($message) use ($data) {
$message->to($data['email'], $data['first_name'])->subject($data['subject']);
});
Is it possible to execute a task after the queue completes its execution, i.e., in my case, after sending the mail?
Something like this is not present in the API because that's not the point of a queue.
It's asynchronous, so after calling Mail::queue you immediately get back control and code execution continues. That does not mean the actual job has been executed, just that it has been scheduled.
And there's no way of writing a Mail::whenJobIsComplete, because that would mean the whole execution of your code would have to stop and wait for the job to be completed. There is no way this could work asynchronously.
You could, however, periodically poll for completed jobs and execute code when that happens. There's built-in functionality for polling for failed jobs in the API.
But the best approach would be to write your own custom queue listener and add functionality around the handleWorkerOutput call.
Again, this is asynchronous: that code will run at some indeterminate point in the future, not anywhere near the place where you initially called Mail::queue.