Is there a way to unregister a Spring CronTask from ScheduledTaskRegistrar?

We have written a simple application in Spring Boot that triggers a cron job. We are able to launch it successfully. Below is the relevant piece of code.
CronTask task = new CronTask(new Runnable() {
    @Override
    public void run() {
        System.out.println("Job running ...");
    }
}, cronExpression);
taskRegistrar.addCronTask(task);
taskRegistrar.afterPropertiesSet();
Now, how can I unregister/kill/destroy/remove the task I started?
I can get the cron task back from the registry using
this.taskRegistrar.getCronTaskList();
but I don't see a method in the registry to unregister a task, nor any method on the task itself to destroy it.
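One approach that may work, depending on the Spring version: instead of registering the task through the registrar, schedule it against a TaskScheduler directly and keep the returned ScheduledFuture, which can be cancelled later. A minimal sketch under that assumption (the scheduler wiring and class name are illustrative; newer Spring versions also return a cancellable ScheduledTask from ScheduledTaskRegistrar.scheduleCronTask):

import java.util.concurrent.ScheduledFuture;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.scheduling.support.CronTrigger;

public class CancellableCronJob {

    private final TaskScheduler taskScheduler; // e.g. a ThreadPoolTaskScheduler bean
    private ScheduledFuture<?> future;

    public CancellableCronJob(TaskScheduler taskScheduler) {
        this.taskScheduler = taskScheduler;
    }

    public void start(String cronExpression) {
        // schedule() hands back a ScheduledFuture that we keep for later cancellation
        this.future = taskScheduler.schedule(
                () -> System.out.println("Job running ..."),
                new CronTrigger(cronExpression));
    }

    public void stop() {
        if (this.future != null) {
            // false = let an in-flight run finish; true would interrupt it
            this.future.cancel(false);
        }
    }
}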

Related

Spring Boot async - does not use max-size

I am trying out Spring Boot's async feature, but I am having trouble getting it to work the way I need.
This is my application.yml:
spring:
  task:
    execution:
      pool:
        max-size: 100
        queue-capacity: 5
        keep-alive: "10s"
        core-size: 10
Application class:
@SpringBootApplication
@EnableAsync
public class ServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(ServiceApplication.class, args);
    }
}
Service class:
for (int i = 0; i < 40; i++) {
    CompletableFuture.runAsync(() -> {
        try {
            System.out.println("------------------Starting thread------------------");
            // do some action here
            System.out.println("------------------Ending thread------------------");
        } catch (Exception e) {
            e.printStackTrace();
        }
    });
}
I am expecting to see the System.out lines printed 40 times. The operations in between take long enough, and I have even tried adding Thread.sleep(), but I never see the sysouts printed more than 8 times. Is there something wrong with my config, or does it not work the way I expect?
CompletableFuture has no idea about the pool used by Spring.
From the docs of the runAsync() method:
Returns a new CompletableFuture that is asynchronously completed by a task running in the ForkJoinPool.commonPool() after it runs the given action.
Params: runnable – the action to run before completing the returned CompletableFuture
Returns: the new CompletableFuture
So those tasks are being run on the ForkJoinPool, not on the executor used by Spring.
About the executor used by Spring with @EnableAsync:
By default, Spring will be searching for an associated thread pool definition: either a unique TaskExecutor bean in the context, or an Executor bean named "taskExecutor" otherwise. If neither of the two is resolvable, a SimpleAsyncTaskExecutor will be used to process async method invocations. Besides, annotated methods having a void return type cannot transmit any exception back to the caller. By default, such uncaught exceptions are only logged.
You could try autowiring that executor and passing it as an argument to
public static CompletableFuture<Void> runAsync(Runnable runnable, Executor executor)
Returns a new CompletableFuture that is asynchronously completed by a task running in the given executor after it runs the given action.
Params: runnable – the action to run before completing the returned CompletableFuture
executor – the executor to use for asynchronous execution
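A minimal sketch of that suggestion, assuming Spring Boot's auto-configured executor bean (named "applicationTaskExecutor", also aliased "taskExecutor"); the service class and method names are made up for illustration:

import java.util.concurrent.CompletableFuture;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.core.task.TaskExecutor;
import org.springframework.stereotype.Service;

@Service
public class AsyncDemoService {

    private final TaskExecutor taskExecutor;

    // Spring Boot builds this pool from the spring.task.execution.* properties
    public AsyncDemoService(@Qualifier("applicationTaskExecutor") TaskExecutor taskExecutor) {
        this.taskExecutor = taskExecutor;
    }

    public void runAll() {
        for (int i = 0; i < 40; i++) {
            // pass the Spring-managed executor instead of defaulting to ForkJoinPool.commonPool()
            CompletableFuture.runAsync(() -> {
                System.out.println("------------------Starting thread------------------");
                // do some action here
                System.out.println("------------------Ending thread------------------");
            }, taskExecutor);
        }
    }
}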

Launch spring cloud task with JVM args via DeployerPartitionHandler

I am planning to execute a Spring Batch job on Pivotal Cloud Foundry. The job executes fine on a single JVM with multiple threads (local partitioning). I am looking to scale the job, and the first option I considered is running the worker processes as Spring Cloud Tasks.
Before running it on PCF, I am running it locally. I created a partition handler in the following manner.
@Bean
public PartitionHandler partitionHandler(TaskLauncher taskLauncher, JobExplorer jobExplorer, TaskRepository taskRepository) throws Exception {
    Resource resource = this.resourceLoader
            .getResource("file:I:/Project/target/batch.jar");
    DeployerPartitionHandler partitionHandler =
            new DeployerPartitionHandler(taskLauncher, jobExplorer, resource, "workerStep", taskRepository);
    List<String> commandLineArgs = new ArrayList<>(5);
    commandLineArgs.add("--spring.profiles.active=worker");
    commandLineArgs.add("--spring.cloud.task.initialize-enabled=false");
    commandLineArgs.add("--spring.batch.initializer.enabled=false");
    commandLineArgs.add("--java.security.krb5.conf=I:/krb5.conf");
    commandLineArgs.add("--java.security.auth.login.config=I:/jaas.conf");
    partitionHandler
            .setCommandLineArgsProvider(new PassThroughCommandLineArgsProvider(commandLineArgs));
    partitionHandler
            .setEnvironmentVariablesProvider(new SimpleEnvironmentVariablesProvider(this.environment));
    partitionHandler.setMaxWorkers(2);
    partitionHandler.setApplicationName("PartitionedBatchJobTask");
    return partitionHandler;
}

@Bean
@Profile("worker")
public DeployerStepExecutionHandler stepExecutionHandler(JobExplorer jobExplorer) {
    return new DeployerStepExecutionHandler(this.context, jobExplorer, this.jobRepository);
}
The task is started, but as mentioned, the jar has to run with certain JVM args, and these args are not being properly passed when the app is launched as a task. (I am connecting to a DB using Kerberos and need to pass these properties as JVM args.)
I see the command below being executed for the task:
Command to be executed: I:/java.exe -jar I:/Project/target/batch.jar --spring.profiles.active=worker --spring.cloud.task.initialize-enabled=false --spring.batch.initializer.enabled=false --java.security.krb5.conf=I:/krb5.conf --java.security.auth.login.config=I:/jaas.conf --jdk.tls.client.protocols=TLSv1.2 --spring.cloud.task.job-execution-id=1 --spring.cloud.task.step-execution-id=3 --spring.cloud.task.step-name=workerStep --spring.cloud.task.name=application-1_migrateProfileJob_migrateProfileFollowerStep:partition0 --spring.cloud.task.parentExecutionId=29 --spring.cloud.task.executionid=30
Since the JVM args are passed after the jar on the command line, the app is not able to recognize the properties. Can anyone please let me know what I am doing wrong?
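One direction that might help, offered as an assumption rather than a verified fix: flags like -Djava.security.krb5.conf are JVM system properties and must appear before -jar, so they cannot be delivered as trailing application arguments. With the local deployer, they may be passable as deployment properties instead, assuming your spring-cloud-task version's DeployerPartitionHandler exposes setDeploymentProperties and the local deployer honors its javaOpts key:

import java.util.HashMap;
import java.util.Map;

// Sketch only: property keys differ per platform (local deployer vs. Cloud Foundry),
// so check the documented keys for the deployer you are actually using.
Map<String, String> deploymentProperties = new HashMap<>();
deploymentProperties.put("spring.cloud.deployer.local.javaOpts",
        "-Djava.security.krb5.conf=I:/krb5.conf -Djava.security.auth.login.config=I:/jaas.conf");
partitionHandler.setDeploymentProperties(deploymentProperties);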

Spring Batch - Running a particular job in an application with multiple jobs

I have just got into a new project, which is all about migrating COBOL jobs to Spring Batch.
Before I joined, my colleague started on this migration and created the Spring Batch application with one job. My colleague was/is able to run that job from the command line just by running the application's jar file.
After I joined, I followed the first job and developed the second one. But my job does not run from the command line by running the application's jar file the way the first job does.
I tried to explicitly launch my job (the second job) from a CommandLineRunner's run method, and it worked. (The first job didn't need this.)
Once the jobs are developed and tested, each job will be run from the Tivoli Workload Scheduler, scheduled at different times.
The command that I used from the command line:
java -Dspring.profiles.active=dev -jar target/SPRINGBATCHJOBS-0.0.2-SNAPSHOT.jar job.name=JOB2 dateCode=$$$$
Package structure:
App
  src
    main
      java
        com.test
          BatchApplication.java
          common
            dao
              ...
            entity
              ...
            service
              ...
          job1
            config
              ...
            core
              ...
            dao
              ...
            entity
              ...
          job2
            config
              ...
            core
              ...
            dao
              ...
            entity
              ...
Code snippet (at a very high level):
public class BatchApplication {
    public static void main(String[] args) {
        SpringApplication.run(BatchApplication.class, args);
    }
}

public class Job2Config {

    public Job JOB2() {
        return jobBuilderFactory.get("JOB2")
                .preventRestart()
                .incrementer(batchJobIncrementer)
                .listener(jobExecutionListener())
                .flow(job2Step())
                .end()
                .build();
    }

    public Step job2Step() {
        return stepBuilderFactory.get("job2Step")
                .<String, String>chunk(Integer.parseInt(chunkSize))
                .reader(job2ItemReader())
                .writer(this.job2ItemWriter)
                .faultTolerant()
                .skipLimit(Integer.parseInt(this.skipLimit))
                .skipPolicy(skipPolicy())
                .retryPolicy(new NeverRetryPolicy())
                .listener(chunkListener())
                .allowStartIfComplete(true)
                .build();
    }

    public JdbcCursorItemReader<String> job2ItemReader() {
        return new JdbcCursorItemReaderBuilder<String>()
                .dataSource(oracleDataSource)
                .name("job2ItemReader")
                .sql(this.sqlQuery)
                .fetchSize(Integer.parseInt(this.fetchSize))
                .rowMapper((resultSet, rowNum) -> resultSet.getString(1))
                .build();
    }
}

public class Job2CommandLineRunner implements CommandLineRunner {

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    private Job JOB2;

    public void run(String... args) throws Exception {
        // parse "key=value" command-line arguments into a map
        Map<String, String> params = new HashMap<>();
        for (String arg : args) {
            String[] parts = arg.split("=", 2);
            if (parts.length == 2) {
                params.put(parts[0], parts[1]);
            }
        }
        String jobName = params.get("job.name");
        if (jobName != null && jobName.equalsIgnoreCase("JOB2")) {
            JobParameters jobParameters = new JobParametersBuilder()
                    .addLong("time", System.currentTimeMillis())
                    .addString("dateCode", params.get("dateCode"))
                    .toJobParameters();
            this.jobLauncher.run(JOB2, jobParameters);
        }
    }
}
I have a couple of questions:
1. The second job simply followed the first job's implementation, but I'm not sure why it doesn't get invoked from the command line the way the first job is. To invoke the second job, I had to launch it explicitly from the CommandLineRunner.
2. When there are multiple jobs in a Spring Batch application, is invoking a particular job explicitly through a CommandLineRunner the right way of doing it? If not, what is the right way to invoke a particular job (of an application with multiple jobs) from the command line (eventually the jobs need to be called through the Tivoli Workload Scheduler)?
It would be great if someone could help clarify these questions. Thanks in advance.
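For what it's worth, one conventional route (hedged, since property names vary by version): Spring Boot's batch auto-configuration can restrict which jobs run at startup via the spring.batch.job.names property (renamed spring.batch.job.name in Spring Boot 3), which avoids a hand-rolled CommandLineRunner entirely. Against the command above, that would look something like:
java -Dspring.profiles.active=dev -jar target/SPRINGBATCHJOBS-0.0.2-SNAPSHOT.jar --spring.batch.job.names=JOB2 dateCode=$$$$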

Can the fixed delay of a @Scheduled task be configured at runtime?

I have a database poller that I have implemented as a scheduled task. I am using Spring 3.0. I want to control the fixed delay of the scheduled task at runtime through a RESTful API. Is there a way to achieve this in Spring?
As of now I have:
@Scheduled(fixedDelay = 30000)
public void poll() {
    if (pollingEnabled) {
        // code to do stuff in database
    }
}
I want the fixedDelay to be dynamic. Thanks.
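One common pattern, sketched under assumptions: implement SchedulingConfigurer and register a TriggerTask whose Trigger recomputes the next execution time from a mutable delay field; the REST layer then just updates that field. This needs @EnableScheduling (so slightly newer than Spring 3.0) and uses the pre-Spring-6 Trigger API; the class and method names are illustrative:

import java.util.Date;
import java.util.concurrent.atomic.AtomicLong;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.SchedulingConfigurer;
import org.springframework.scheduling.config.ScheduledTaskRegistrar;

@Configuration
@EnableScheduling
public class DynamicDelayConfig implements SchedulingConfigurer {

    // updated at runtime, e.g. by a REST controller calling setDelay(...)
    private final AtomicLong delayMillis = new AtomicLong(30000);

    public void setDelay(long millis) {
        delayMillis.set(millis);
    }

    @Override
    public void configureTasks(ScheduledTaskRegistrar registrar) {
        registrar.addTriggerTask(this::poll, triggerContext -> {
            // recompute the "fixed delay" on every cycle: last completion + current delay
            Date last = triggerContext.lastCompletionTime();
            long base = (last != null) ? last.getTime() : System.currentTimeMillis();
            return new Date(base + delayMillis.get());
        });
    }

    private void poll() {
        // database polling work goes here
    }
}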

How to start a standalone Quartz job

public class CronTriggerExample {

    public static void main(String[] args) throws Exception {
        try {
            JobDetail job = JobBuilder.newJob(HelloJob.class)
                    .withIdentity("dummyJobName", "group1")
                    .build();
            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("dummyTriggerName", "group1")
                    .withSchedule(CronScheduleBuilder.cronSchedule("0/2 * * * * ?"))
                    .build();
            // schedule it
            Scheduler scheduler = new StdSchedulerFactory().getScheduler();
            scheduler.start();
            scheduler.scheduleJob(job, trigger);
        } catch (SchedulerException e) {
            e.printStackTrace();
        }
    }
}
I am using Quartz to set up some crons on my server. But how can I execute this file on the server so that it schedules the cron? I tried to use the "org.codehaus.mojo" plugin to execute the Java file, but it always creates a new trigger whenever I run mvn install as a daemon. What should I do so that it reinitializes the cron on mvn install?
The way you have written your main method, the application will exit just after scheduling the job. Although the Quartz task is scheduled on a different thread, when the process ends it kills all active threads.
Simply add a while (true) {} statement after your scheduler.scheduleJob call to keep the application running.
Then just have Maven build your jar and execute java -jar myjar.jar.
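As a side note, while (true) {} keeps a core busy-spinning; a blocking keep-alive is gentler on the CPU. A small sketch of one alternative (the shutdown-hook wiring is illustrative):

import java.util.concurrent.CountDownLatch;

// keep the JVM alive without burning a core in a busy loop
CountDownLatch keepAlive = new CountDownLatch(1);
Runtime.getRuntime().addShutdownHook(new Thread(keepAlive::countDown));
keepAlive.await(); // blocks the main thread until shutdown is requested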
