Updating a QuartzJob from the running job itself - spring-boot

Updating a QuartzJob within a Spring Boot application works while the job is not running (here or here). The Spring property spring.quartz.overwrite-existing-jobs: true is set.
However, when doing the same from within a running job, the job keeps firing itself in an endless loop without respecting the interval time (it fires again every few milliseconds). I even tried doing the same from within a TriggerListener, but that doesn't change anything.
As a code example I have nothing other than what is given in the second link above:
// retrieve the trigger
Trigger oldTrigger = sched.getTrigger(triggerKey("oldTrigger", "group1"));
// obtain a builder that would produce the trigger
TriggerBuilder tb = oldTrigger.getTriggerBuilder();
// update the schedule associated with the builder, and build the new trigger
// (other builder methods could be called, to change the trigger in any desired way)
Trigger newTrigger = tb.withSchedule(simpleSchedule()
        .withIntervalInSeconds(10)
        .withRepeatCount(10))
    .build();
sched.rescheduleJob(oldTrigger.getKey(), newTrigger);
Has anyone tried that from within a running job?

It works with the following trigger. It is the startAt call that makes the difference; without it, the trigger immediately fires again.
Trigger trigger = newTrigger()
    .withIdentity(triggerName, groupname)
    .startAt(Date.from(LocalDateTime.now().plusSeconds(intervalInSeconds).atZone(ZoneId.systemDefault()).toInstant()))
    .withSchedule(SimpleScheduleBuilder.simpleSchedule()
        .withIntervalInSeconds(intervalInSeconds)
        .repeatForever()
        .withMisfireHandlingInstructionIgnoreMisfires())
    .build();
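For completeness, a minimal sketch (not from the original thread) of a job that reschedules itself from inside execute() using the startAt trick above; the class name and NEW_INTERVAL_SECONDS are illustrative assumptions:
import java.time.Instant;
import java.util.Date;
import org.quartz.*;

public class SelfReschedulingJob implements Job {

    private static final int NEW_INTERVAL_SECONDS = 30; // assumed value

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            Trigger oldTrigger = context.getTrigger();
            Trigger newTrigger = TriggerBuilder.newTrigger()
                    .withIdentity(oldTrigger.getKey())
                    // a startAt in the future prevents the immediate re-fire loop
                    .startAt(Date.from(Instant.now().plusSeconds(NEW_INTERVAL_SECONDS)))
                    .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                            .withIntervalInSeconds(NEW_INTERVAL_SECONDS)
                            .repeatForever()
                            .withMisfireHandlingInstructionIgnoreMisfires())
                    .build();
            context.getScheduler().rescheduleJob(oldTrigger.getKey(), newTrigger);
        } catch (SchedulerException e) {
            throw new JobExecutionException(e);
        }
    }
}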

Related

Multiple NetSuite Script Event Types

I'm new to NetSuite and have been tasked with integrating another system with NetSuite. I've created a User Event script that needs to run against multiple NetSuite events. The deployment interface seems to only let me assign the script to Create OR Edit, but not both. Is this not possible or what am I doing wrong?
Thanks,
You can define the events on which the UE script runs within the script, and leave the event type assignment in the deployment record blank.
First, if you leave the event type blank in the UI and don't include logic within the script to limit when it runs, it will be triggered on all event types (create, edit, etc.) whenever the triggering event occurs (beforeLoad, beforeSubmit, afterSubmit).
Selecting the event type in the UI is an easy shortcut to limit when a script runs without having to worry about additional script logic; however, for maximum flexibility you can use script logic as follows, or modify it to suit your needs (in SS2.0):
function beforeSubmit(scriptContext) {
    log.debug('type', scriptContext.type);
    // exit unless the record is being created; add EDIT etc. to suit your needs
    if (scriptContext.type !== scriptContext.UserEventType.CREATE) {
        log.error('Exiting script', 'Context type is ' + scriptContext.type);
        return;
    }
    // Do your work here
}

Quartz .NET - Prevent parallel Job Execution

I am using Quartz .NET for job scheduling.
So I created one job class (implementing IJob).
public class TransferData : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        string tableName = context.JobDetail.JobDataMap.GetString("table");
        // Transfer the table here.
        return Task.CompletedTask;
    }
}
I want to transfer multiple different tables. For this purpose I am doing something like this:
foreach (Table table in tables)
{
    IJobDetail job = JobBuilder.Create<TransferData>()
        .WithIdentity(new JobKey(table.Name, "table_transfer"))
        .UsingJobData("table", table.Name)
        .Build();

    ITrigger trigger = TriggerBuilder.Create()
        .WithIdentity(new TriggerKey("trigger_" + table.Name, "table_trigger"))
        // Quartz cron expressions include a seconds field: fire every 5 minutes
        .WithCronSchedule("0 0/5 * * * ?")
        .ForJob(job)
        .Build();

    await this.scheduler.ScheduleJob(job, trigger);
}
So every table should be transferred every 5 minutes. To achieve this I create several jobs with different job names.
The question is: how to prevent the parallel job execution for the same jobName? (e.g. the previous run takes longer for one table, so I do not want to start the next transfer for the same table.)
I know about the [DisallowConcurrentExecution] attribute, but this is used to prevent the parallel execution of the same job class. I do not want to write an extra job class per table, because the "main" code for the transfer is always the same; the one and only difference is the table name. So I want to use the same job class for this purpose.
The Quartz.NET documentation is a little bit confusing.
DisallowConcurrentExecution is an attribute that can be added to the Job class that tells Quartz not to execute multiple instances of a given job definition (that refers to the given job class) concurrently. Notice the wording there, as it was chosen very carefully. In the example from the previous section, if "SalesReportJob" has this attribute, then only one instance of "SalesReportForJoe" can execute at a given time, but it can execute concurrently with an instance of "SalesReportForMike". The constraint is based upon an instance definition (JobDetail), not on instances of the job class. However, it was decided (during the design of Quartz) to have the attribute carried on the class itself, because it does often make a difference to how the class is coded.
Source: https://www.quartz-scheduler.net/documentation/quartz-3.x/tutorial/more-about-jobs.html
But if you read the API documentation, it says (the text in parentheses is the important part):
An attribute that marks a IJob class as one that must not have multiple instances executed concurrently (where instance is based-upon a IJobDetail definition - or in other words based upon a JobKey).
Source: https://quartznet.sourceforge.io/apidoc/3.0/html/
In other words: the DisallowConcurrentExecution attribute works for my purposes.
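For illustration only, here is the same pattern sketched in Java Quartz, whose @DisallowConcurrentExecution annotation carries the same per-JobKey semantics; the class name and comments are assumptions for this sketch, not code from the thread:
import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// One job class reused for many JobDetails: executions are serialized
// per JobKey (per table), while different tables still run in parallel.
@DisallowConcurrentExecution
public class TransferDataJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        String tableName = context.getJobDetail().getJobDataMap().getString("table");
        // transfer the table here; a second trigger firing for the same
        // table (same JobKey) waits until this execution finishes
    }
}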

How to retrieve the process's unique workflow number, while launching it through C# API?

I've been struggling with this for 4 days now.
There's this C# Process Engine API:
https://www.ibm.com/support/knowledgecenter/en/SSNW2F_5.2.1/com.ibm.p8.pe.dev.doc/web_services/ws_reference.htm
What I need to do is to retrieve the WorkflowNumber when launching the workflow, so later I can find that process in the system.
The issue here is that when you launch it, it returns the LaunchStep (the first step in the workflow), which doesn't have that ID assigned yet; it's null. The only thing available is the LaunchStep's WOBNumber.
In order to assign the Workflow ID to the step, you need to dispatch the step, so I do that:
UpdateStepRequest request = new UpdateStepRequest();
request.updateFlag = UpdateFlagEnum.UPDATE_DISPATCH;
request.stepElement = element; // add the launch step
session.Client.updateStep(request);
And here the funny part happens. From this point there is completely no way to retrieve it: StepElements are stateless, updateStep() returns nothing, and, the best part, the LaunchStep is now destroyed in the system; because it's a LaunchStep, it just gets destroyed after the launch.
Any tips would be appreciated!

Spring batch A job instance already exists

OK,
I know this has been asked before, but I still can't find a definite answer to my question. And my question is this: I am using Spring Batch to export data to a SOLR search server. It needs to run every minute, so I can export all the updates. The first execution passes OK, but the second one complains with:
2014-10-02 20:37:00,022 [defaultTaskScheduler-1] ERROR: catching
org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException: A job instance already exists and is complete for parameters={catalogVersionPK=3378876823725152, type=UPDATE}. If you want to run this job again, change the parameters.
    at org.springframework.batch.core.repository.support.SimpleJobRepository.createJobExecution(SimpleJobRepository.java:126)
    at ...
Of course I can add a date-time parameter to the job like this:
.addLong("time", System.getCurrentTimeMillis())
and then the job can be run more than once. However, I also want to query the last execution of the job, so I have code like this:
DateTime endTime = new DateTime(0);
JobExecution je = jobRepository.getLastJobExecution("searchExportJob",
        new JobParametersBuilder()
                .addLong("catalogVersionPK", catalogVersionPK)
                .addString("type", type)
                .toJobParameters());
if (je != null && je.getEndTime() != null) {
    endTime = new DateTime(je.getEndTime());
}
and this returns nothing, because I didn't provide the time parameter. So it seems like I can either run the job once and get the last execution time, or run it multiple times and not get the last execution time. I am really stuck :(
Assumption
Spring Batch uses meta-data tables to store each executed job together with its parameters.
If you run the job twice with the same parameters, the second run fails, because a job instance is identified by its job name and parameters.
Solution #1
You could use the JobExecution returned when you run a new job:
JobExecution execution = jobLauncher.run(job, new JobParameters());
// ...
// Use a JobExecutionDao to retrieve the JobExecution by ID
JobExecution ex = jobExecutionDao.getJobExecution(execution.getId());
Solution #2
You could implement a custom JobExecutionDao and perform a custom query to find your JobExecution in the BATCH_JOB_EXECUTION table.
See here for the Spring reference documentation.
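As a rough sketch of Solution #2, here is a query against the standard Spring Batch meta-data schema using Spring's JdbcTemplate; the method name and wiring are assumptions, not from the answer:
import java.util.Date;
import org.springframework.jdbc.core.JdbcTemplate;

// Returns the END_TIME of the most recent execution of the given job,
// or null if no execution has finished yet.
public Date findLastEndTime(JdbcTemplate jdbcTemplate, String jobName) {
    String sql = "SELECT MAX(e.END_TIME) FROM BATCH_JOB_EXECUTION e "
            + "JOIN BATCH_JOB_INSTANCE i ON e.JOB_INSTANCE_ID = i.JOB_INSTANCE_ID "
            + "WHERE i.JOB_NAME = ?";
    return jdbcTemplate.queryForObject(sql, Date.class, jobName);
}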
I hope my answer is helpful to you.
Use the JobExplorer as suggested by Luca Basso Ricci.
Because you do not know the job parameters, you need to look the job up by instance:
Look for the last instance of the job named searchExportJob.
Look for the last execution of that instance.
This way you use the Spring Batch API only.
// We can pass count 1 because job instances are ordered by instance id descending,
// so we know the first one returned is the last instance
List<JobInstance> instances = jobExplorer.getJobInstances("searchExportJob", 0, 1);
JobInstance lastInstance = instances.get(0);
List<JobExecution> jobExecutions = jobExplorer.getJobExecutions(lastInstance);
// JobExecutions are ordered by execution id descending, so the first
// result is the last execution
JobExecution je = jobExecutions.get(0);
if (je != null && je.getEndTime() != null) {
    endTime = new DateTime(je.getEndTime());
}
Note this code only works for Spring Batch 2.2.x and above; in 2.1.x the API was somewhat different.
There is another interface you can use: JobExplorer
From its javadoc:
Entry point for browsing executions of running or historical jobs and steps. Since the data may be re-hydrated from persistent storage, it may not contain volatile fields that would have been present when the execution was active.
If you are debugging your batch job and terminate it before it completes, you will get this error when you try to start it again.
To start it again, either change the name of your job so that a new job instance is created, or update the tables below:
BATCH_JOB_EXECUTION
BATCH_STEP_EXECUTION
You need to update the STATUS and END_TIME columns with non-null values.
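For example, a hedged sketch of that manual fix-up using JdbcTemplate; stuckExecutionId is a hypothetical variable holding the JOB_EXECUTION_ID of the execution you terminated:
// Mark the interrupted execution as failed so the job can be started again
java.util.Date now = new java.util.Date();
jdbcTemplate.update(
        "UPDATE BATCH_JOB_EXECUTION SET STATUS = 'FAILED', END_TIME = ? WHERE JOB_EXECUTION_ID = ?",
        now, stuckExecutionId);
jdbcTemplate.update(
        "UPDATE BATCH_STEP_EXECUTION SET STATUS = 'FAILED', END_TIME = ? WHERE JOB_EXECUTION_ID = ?",
        now, stuckExecutionId);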
Create a new job run id every time.
If your code creates the Job object using a JobBuilderFactory, then the snippet below would be useful for this problem:
return jobBuilderFactory
        .get("someJobName")
        .incrementer(new RunIdIncrementer()) // solution lies here: creates a new run id every time
        .flow( // and here
                stepBuilderFactory
                        .get("someTaskletStepName")
                        .tasklet(tasklet) // you can replace this with a step
                        .allowStartIfComplete(true) // lets the step run even if it completed in the last run
                        .build())
        .end()
        .build();
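Note that the incrementer only takes effect when the launcher derives the next parameters from it. A minimal usage sketch, assuming the Spring Batch 4.x API and already-wired jobExplorer, job, and jobLauncher beans:
// Builds parameters with an incremented run.id derived from the last execution
// (checked exceptions thrown by run() are omitted for brevity)
JobParameters params = new JobParametersBuilder(jobExplorer)
        .getNextJobParameters(job)
        .toJobParameters();
jobLauncher.run(job, params);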

Spring #Async cancel and start?

I have a Spring MVC app where a user can kick off report generation via a button click. This process could take a while, roughly 10-20 minutes.
I use Spring's @Async annotation around the service call so that report generation happens asynchronously, while I pop up a message to the user indicating a job is currently running.
Now what I want to do is: if another user (Admin) kicks off report generation via the button, it should cancel/stop the currently running @Async task and start the new task.
To do this, I call the following:
// ...
future = getCurrentTask(id); // returns the current task for the given report id
if (!future.isDone())
    future.cancel(true);
service.generateReport(id);
How can I make it so that service.generateReport() waits until the cancelled task has actually stopped all its running threads?
According to the documentation, after I call future.cancel(true), isDone() will return true, and isCancelled() will return true as well. So there is no way of knowing whether the job has actually stopped.
I can only start a new report generation once the old one is cancelled or completed, so that it does not dirty the data.
From the documentation of the cancel() method:
Subsequent calls to isCancelled() will always return true if this method returned true
Try this.
future = getCurrentTask(id); // returns the current task for the given report id
if (!future.isDone()) {
    // true means the cancellation request was accepted, not that the task has terminated
    boolean cancelRequested = future.cancel(true);
    if (cancelRequested) {
        service.generateReport(id);
    } else {
        // inform the user the existing job couldn't be stopped and to try again later
    }
}
Assuming the code above runs in thread A and your recently cancelled report is running in thread B, you need thread A to stop before service.generateReport(id) and wait until thread B completes or is cancelled.
One approach to achieve this is to use a Semaphore. Assuming there can be only one report running concurrently, first create a semaphore object accessible by all threads (normally on the report runner service class):
Semaphore semaphore = new Semaphore(1);
At any point in your code where you need to run the report, call the acquire() method; it blocks until a permit is available. Similarly, when the report execution finishes or is cancelled, make sure release() is called. The release() method puts the permit back and wakes up any waiting thread.
semaphore.acquire();
try {
    // run report..
} finally {
    semaphore.release(); // always return the permit, even when cancelled
}
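Putting it together, a minimal, hypothetical sketch of the report service; the class, method names, and helpers are illustrative, and it assumes one report at a time with an interruptible loop inside the work:
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.AsyncResult;
import org.springframework.stereotype.Service;

@Service
public class ReportService {

    // one permit: only one report generation may run at a time
    private final Semaphore semaphore = new Semaphore(1);

    @Async
    public Future<Void> generateReport(long id) throws InterruptedException {
        semaphore.acquire(); // blocks until the cancelled/finished run releases
        try {
            while (hasMoreWork(id)) {
                if (Thread.currentThread().isInterrupted()) {
                    return new AsyncResult<>(null); // future.cancel(true) interrupts us
                }
                processNextChunk(id);
            }
            return new AsyncResult<>(null);
        } finally {
            semaphore.release(); // wake up the waiting restart
        }
    }

    // hypothetical helpers standing in for the real report logic
    private boolean hasMoreWork(long id) { return false; }

    private void processNextChunk(long id) { /* ... */ }
}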
