How to perform cleanup on Spring Cloud Task completion

I am writing an SCDF SPI implementation for supporting stream and task applications. As part of this we need to perform some cleanup operations once the task finishes.
Could someone provide info on whether SCDF will get a callback on task completion? If not, what are the alternative ways to perform cleanup?

A Task is a short-lived and finite operation. Depending on what you're trying to accomplish, you can do one of the following to invoke a custom cleanup routine.
1) If the task runs a batch job, you can define "n" steps as part of a workflow, and upon successful completion of the upstream steps, the last step can invoke the cleanup routine (see the sketch after this list).
2) You can have a stream in SCDF listening to task-complete events (there is a batch-job example of this), which can in turn kick off another task/job to invoke the cleanup routine.
3) You can define a composed-task graph (via Dashboard/shell) where each of the steps (aka tasks) runs its intended operation, and upon a successful transition or a failure event, you get the opportunity to kick off the cleanup routine.
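For option 1, a minimal sketch, assuming Spring Batch 4 Java configuration; the job, step, and bean names are illustrative:

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.configuration.annotation.EnableBatchProcessing;
import org.springframework.batch.core.configuration.annotation.JobBuilderFactory;
import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
import org.springframework.batch.repeat.RepeatStatus;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableBatchProcessing
public class CleanupJobConfig {

    @Bean
    public Job taskJob(JobBuilderFactory jobs, StepBuilderFactory steps) {
        Step work = steps.get("work")
                .tasklet((contribution, chunkContext) -> {
                    // ... the task's main work goes here ...
                    return RepeatStatus.FINISHED;
                }).build();

        Step cleanup = steps.get("cleanup")
                .tasklet((contribution, chunkContext) -> {
                    // placeholder cleanup routine; replace with your own
                    System.out.println("cleaning up temporary resources");
                    return RepeatStatus.FINISHED;
                }).build();

        // cleanup runs only after the upstream step completes successfully;
        // if "work" fails, the job fails without invoking "cleanup"
        return jobs.get("taskJob").start(work).next(cleanup).build();
    }
}
```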

Using a custom executor with the Quartz scheduling library

I am using the Quartz library, mainly to schedule multiple tasks that will run forever at specific times. I have a customized executor: a retry executor that reschedules a task a specified number of times in case of failure (the number of retries is customizable). I want to know if there is a way to set up Quartz to use this customized executor. Currently I am using the executor inside the Job, i.e. calling executor.execute() inside the Job's execute method.
The configuration guide only explains how to configure your own ThreadPool.
Fine-tuning the Scheduler
You can implement Quartz's ThreadExecutor interface, or adjust your "RetryExecutor" implementation to do so.
It can then be passed as a component via setThreadExecutor to the QuartzSchedulerResources, which in turn are used to configure QuartzScheduler, the heart of Quartz.
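A minimal sketch of such an adapter, assuming Quartz 2.x; RetryExecutor is the hypothetical class from the question, assumed to accept a Runnable (a Thread is one):

```java
import org.quartz.spi.ThreadExecutor;

// Adapts the custom RetryExecutor (hypothetical class from the question)
// to Quartz's ThreadExecutor SPI.
public class RetryThreadExecutor implements ThreadExecutor {

    private final RetryExecutor delegate = new RetryExecutor();

    @Override
    public void execute(Thread thread) {
        // hand the scheduler's worker threads to the custom executor
        delegate.execute(thread);
    }

    @Override
    public void initialize() {
        // no initialization needed in this sketch
    }
}
```

When using StdSchedulerFactory, the same class can also be plugged in declaratively via the org.quartz.threadExecutor.class property in quartz.properties.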
Keep the Job isolated
It's discouraged to modify scheduling or execute additional jobs from within a job's execute method. Quartz keeps this control flow outside of the jobs; it is part of the Scheduler's responsibilities. Thus your current solution:
using the executor inside the Job, i.e. calling executor.execute() inside the Job's execute method
can interfere with the correct functioning of the Scheduler itself.
Retry controlled from within the job
There are a couple of ways to deal with retries in Quartz.
Search Stack Overflow for [quartz-scheduler] retry:
Automatic retry of failed jobs
Count-based retries, increasing delays between retries, etc.
This question explains some:
Quartz retry when failure
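One commonly cited pattern for count-based retries from within the job itself, as a minimal sketch (the "attempts" key and MAX_RETRIES limit are illustrative):

```java
import org.quartz.Job;
import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.PersistJobDataAfterExecution;

// Tracks the attempt count in the job's data map and asks Quartz to
// refire the job immediately until the retry limit is reached.
@PersistJobDataAfterExecution
public class RetryableJob implements Job {

    private static final int MAX_RETRIES = 3; // illustrative limit

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        JobDataMap data = context.getJobDetail().getJobDataMap();
        int attempts = data.containsKey("attempts") ? data.getIntValue("attempts") : 0;
        try {
            doWork();
            data.put("attempts", 0); // reset after a successful run
        } catch (Exception e) {
            data.put("attempts", attempts + 1);
            // a second argument of true tells Quartz to refire this job
            // immediately; false gives up after the last attempt
            throw new JobExecutionException(e, attempts + 1 < MAX_RETRIES);
        }
    }

    private void doWork() {
        // ... the job's actual work (placeholder) ...
    }
}
```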

Is it possible to have both synchronous and asynchronous plugins running post creation of a record?

We have a business need where we need to create up to 50,000 records. Using a synchronous plugin or JavaScript is not an acceptable solution here because it takes too long; a SQL timeout will occur. Is it possible? Can we run an asynchronous and a synchronous plugin on the same PostOperation Create step of that entity?
Of course, yes. You cannot register the same step as both sync and async, but you can register two steps, one synchronous and the other asynchronous. Make sure you are not running the same logic in both steps of the same plugin.
You can split the logic into two plugins and register the two separate steps carefully with respect to what is needed in sync mode vs. async mode.
Normally, if you want to roll back the DB transaction when the logic fails, then a synchronous step is needed. If the logic failure is not a show stopper and the step can silently fail and move forward, then asynchronous is enough (write a plugin trace log entry in a try..catch for analysis).
An assembly (.dll) can contain two plugins (.cs files), and multiple steps for each plugin are possible. But maintain clarity, for less complexity and easier maintenance.

Failed Service Task state after application reboot

What happens to a failed Service Task when the application is restarted? Will it attempt to retry the Service Task once again?
If not, how does the process proceed? Is there a way to manually retry the Service Task?
I could see from the Flowable API that taskQuery lists only the UserTasks.
In Flowable, when an execution fails (due to some exception), the transaction is rolled back to the previous wait state in your workflow. This means that the next time it runs, it will continue from there.
You can make a Service Task async, which makes it a wait state. In that case an async job is created, and that job will be retried (by default 3 times, but that is configurable). If the Service Task fails every time (i.e. for the configured number of retries), the job is moved to the dead letter jobs, and then you need to manually trigger it.
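A minimal sketch of that manual trigger, assuming Flowable 6 and access to the engine's ManagementService; the retry count of 3 is illustrative:

```java
import java.util.List;

import org.flowable.engine.ManagementService;
import org.flowable.job.api.Job;

// Finds jobs that have exhausted their retries and moves them back to
// the executable jobs so the engine picks them up again.
public class DeadLetterJobRetry {

    public void retryDeadLetterJobs(ManagementService managementService) {
        List<Job> deadLetterJobs = managementService.createDeadLetterJobQuery().list();
        for (Job job : deadLetterJobs) {
            // give each job 3 fresh retries
            managementService.moveDeadLetterJobToExecutableJob(job.getId(), 3);
        }
    }
}
```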

Spring Cloud Data Flow Task execution monitoring

We have been looking into Spring Cloud Task. It looks very promising, but we seem to be missing how monitoring should work, especially for tasks that are executed from a stream.
For tasks manually executed from the dashboard there is an executions tab, but there does not seem to be a page where you can find an overview of the tasks executed from within a stream.
What is the way to monitor the executions, exit codes, progress, and other things for such tasks?
The tasks that are executed from your stream will create a TaskExecution entry in the TASK_EXECUTION table, just like the tasks executed from the dashboard. In that case the executions tab will fill this need.
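If you want the same information programmatically, here is a minimal sketch using Spring Cloud Task's TaskExplorer (assuming Spring Cloud Task 2.x and Spring Data 2.x; the page size is illustrative):

```java
import org.springframework.cloud.task.repository.TaskExecution;
import org.springframework.cloud.task.repository.TaskExplorer;
import org.springframework.data.domain.Page;
import org.springframework.data.domain.PageRequest;

// Reads recent task executions (name, id, exit code) from the task
// repository, regardless of whether they were launched from a stream
// or from the dashboard.
public class TaskExecutionReport {

    public void printRecentExecutions(TaskExplorer explorer) {
        Page<TaskExecution> page = explorer.findAll(PageRequest.of(0, 20));
        for (TaskExecution execution : page) {
            System.out.printf("%s (id=%d) exit code: %s%n",
                    execution.getTaskName(),
                    execution.getExecutionId(),
                    execution.getExitCode());
        }
    }
}
```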

Windows Workflow - Is there a way to guarantee only one workflow running?

The workflow is being published as a WCF service, and I need to guarantee that workflows execute sequentially. Is there a way, in code or in the config, to guarantee the runtime doesn't launch two workflows concurrently?
There is no way to configure the runtime to limit the number of workflows in progress.
Consider, though, that it is the responsibility of the workflow itself to control flow. Hence the workflow itself should have a means to determine whether another instance of itself is currently in progress.
I would consider creating an Activity that transactionally attempts to update a DB record to the effect that an instance of this workflow is in progress. If it finds that another is currently in progress, it could take the appropriate action: it could fail, or it could queue itself using an EventActivity to be alerted when the previous workflow has completed.
You will probably need to check at workflow start for another running instance.
If one is found, cancel it.
I don't agree that this needs to be handled at the WorkflowRuntime level. I like the idea of a custom Activity: a sort of MutexActivity, a CompositeActivity with a DB backend. The first execution would record in the database that it holds the mutex. Subsequent calls would queue up their workflow IDs and then go idle. When the MutexActivity completes, it would release the mutex, load the next workflow in the queue, and invoke the contained child activities.
