How to start a standalone Quartz job - Maven

import org.quartz.CronScheduleBuilder;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class CronTriggerExample
{
    public static void main(String[] args) throws Exception
    {
        try
        {
            // job definition bound to the HelloJob class
            JobDetail job = JobBuilder.newJob(HelloJob.class)
                    .withIdentity("dummyJobName", "group1").build();

            // cron trigger that fires every 2 seconds
            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("dummyTriggerName", "group1")
                    .withSchedule(CronScheduleBuilder.cronSchedule("0/2 * * * * ?"))
                    .build();

            // schedule it
            Scheduler scheduler = new StdSchedulerFactory().getScheduler();
            scheduler.start();
            scheduler.scheduleJob(job, trigger);
        }
        catch (SchedulerException e)
        {
            e.printStackTrace();
        }
    }
}
I am using Quartz to set up some cron jobs on my server. But how can I execute this file on the server so that it schedules the cron? I tried to use the plugin "org.codehaus.mojo" to execute the Java file, but it always creates a new trigger whenever I run mvn install as a daemon. What should I do so that it reinitializes the cron on "mvn install"?

The way you have written your main method, the application exits just after scheduling the job. Although the Quartz task is scheduled on a different thread, when the process ends it kills all active threads.
Simply add a while (true) {} statement after scheduler.scheduleJob(job, trigger) to keep the application running.
Then just have Maven build your jar and execute java -jar myjar.jar.
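For illustration, a minimal sketch of the suggested change, reusing the HelloJob class from the question; the class name and the sleep interval are just illustrative, and a plain while (true) {} also works but busy-waits:

import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class StandaloneCronRunner
{
    public static void main(String[] args) throws Exception
    {
        JobDetail job = JobBuilder.newJob(HelloJob.class)
                .withIdentity("dummyJobName", "group1").build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("dummyTriggerName", "group1")
                .withSchedule(CronScheduleBuilder.cronSchedule("0/2 * * * * ?"))
                .build();

        Scheduler scheduler = new StdSchedulerFactory().getScheduler();
        scheduler.start();
        scheduler.scheduleJob(job, trigger);

        // Keep the main thread alive so the process (and with it the scheduler) keeps running.
        while (true)
        {
            Thread.sleep(60_000L);
        }
    }
}

Package this class as an executable jar (for example with the maven-shade-plugin) and start it once on the server, instead of scheduling it from the Maven build itself.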

Related

Launch spring cloud task with JVM args via DeployerPartitionHandler

I am planning to execute a Spring Batch job on Pivotal Cloud Foundry. The job executes fine on a single JVM with multiple threads (local partitioning). I am looking to scale the job, and the first option I considered is running the worker processes as Spring Cloud Tasks.
Before running it in PCF I am running it locally. I created a partition handler in the following manner:
@Bean
public PartitionHandler partitionHandler(TaskLauncher taskLauncher, JobExplorer jobExplorer, TaskRepository taskRepository) throws Exception {
    Resource resource = this.resourceLoader
            .getResource("file:I:/Project/target/batch.jar");

    DeployerPartitionHandler partitionHandler =
            new DeployerPartitionHandler(taskLauncher, jobExplorer, resource, "workerStep", taskRepository);

    List<String> commandLineArgs = new ArrayList<>(3);
    commandLineArgs.add("--spring.profiles.active=worker");
    commandLineArgs.add("--spring.cloud.task.initialize-enabled=false");
    commandLineArgs.add("--spring.batch.initializer.enabled=false");
    commandLineArgs.add("--java.security.krb5.conf=I:/krb5.conf");
    commandLineArgs.add("--java.security.auth.login.config=I:/jaas.conf");

    partitionHandler
            .setCommandLineArgsProvider(new PassThroughCommandLineArgsProvider(commandLineArgs));
    partitionHandler
            .setEnvironmentVariablesProvider(new SimpleEnvironmentVariablesProvider(this.environment));
    partitionHandler.setMaxWorkers(2);
    partitionHandler.setApplicationName("PartitionedBatchJobTask");

    return partitionHandler;
}

@Bean
@Profile("worker")
public DeployerStepExecutionHandler stepExecutionHandler(JobExplorer jobExplorer) {
    return new DeployerStepExecutionHandler(this.context, jobExplorer, this.jobRepository);
}
The task is started, but as mentioned, the jar has to run with certain JVM args, and these args are not being passed properly to the task when it starts the app. (I am connecting to a DB using Kerberos, so I need to send these properties as JVM args.)
I see the below being executed as the task:
Command to be executed: I:/java.exe -jar I:/Project/target/batch.jar --spring.profiles.active=worker --spring.cloud.task.initialize-enabled=false --spring.batch.initializer.enabled=false --java.security.krb5.conf=I:/krb5.conf --java.security.auth.login.config=I:/jaas.conf --jdk.tls.client.protocols=TLSv1.2 --spring.cloud.task.job-execution-id=1 --spring.cloud.task.step-execution-id=3 --spring.cloud.task.step-name=workerStep --spring.cloud.task.name=application-1_migrateProfileJob_migrateProfileFollowerStep:partition0 --spring.cloud.task.parentExecutionId=29 --spring.cloud.task.executionid=30
Since the JVM args are placed after the jar on the command line, the app does not recognize them as JVM properties. Can anyone please let me know what I am doing wrong?
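For context, -D system properties only take effect when they appear before -jar; anything after the jar is handed to the application as program arguments. One workaround, sketched below, is to pass JVM-level settings through the JAVA_TOOL_OPTIONS environment variable, which the launched JVM reads at startup. The EnvironmentVariablesProvider callback signature shown is an assumption based on the interface that SimpleEnvironmentVariablesProvider implements, so verify it against your Spring Cloud Task version:

import java.util.HashMap;
import java.util.Map;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.cloud.task.batch.partition.EnvironmentVariablesProvider;

// Sketch only: inject -D properties into the worker JVM via JAVA_TOOL_OPTIONS,
// since the deployer cannot place them before -jar on the command line.
public class KerberosEnvVarsProvider implements EnvironmentVariablesProvider {

    @Override
    public Map<String, String> getEnvironmentVariables(ExecutionContext executionContext) {
        Map<String, String> env = new HashMap<>();
        env.put("JAVA_TOOL_OPTIONS",
                "-Djava.security.krb5.conf=I:/krb5.conf "
                + "-Djava.security.auth.login.config=I:/jaas.conf");
        return env;
    }
}

In the partition handler bean this would replace the SimpleEnvironmentVariablesProvider line, e.g. partitionHandler.setEnvironmentVariablesProvider(new KerberosEnvVarsProvider()); again, only a sketch of the idea.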

Spring Batch - Running a particular job in an application with multiple jobs

I have just joined a new project, which is all about migrating COBOL jobs to Spring Batch.
Before I joined, my colleague started on this migration and created the Spring Batch application with one job. My colleague was/is able to run that job from the command line just by running the application's jar file.
After I joined, I used the 1st job as a template and developed the 2nd job. But my job does not run from the command line by running the application's jar file the way the 1st job does.
I tried to explicitly launch my job (the 2nd job) from a CommandLineRunner's run method and it worked. (The 1st job didn't need this.)
Once the jobs are developed and tested, each job will be run from the Tivoli Workload Scheduler, scheduled at different times.
The command that I used from command line:
java -Dspring.profiles.active=dev -jar target/SPRINGBATCHJOBS-0.0.2-SNAPSHOT.jar job.name=JOB2 dateCode=$$$$
Package structure
App
  src
    main
      java
        com.test
          BatchApplication.java
          common
            dao
              ...
            entity
              ...
            service
              ...
          job1
            config
              ...
            core
              ...
            dao
              ...
            entity
              ...
          job2
            config
              ...
            core
              ...
            dao
              ...
            entity
              ...
Code snippet (at a very high level)

@SpringBootApplication
public class BatchApplication {
    public static void main(String[] args) {
        SpringApplication.run(BatchApplication.class, args);
    }
}

@Configuration
public class Job2Config {

    @Bean
    public Job JOB2() {
        return jobBuilderFactory.get("JOB2")
                .preventRestart()
                .incrementer(batchJobIncrementer)
                .listener(jobExecutionListener())
                .flow(job2Step())
                .end()
                .build();
    }

    @Bean
    public Step job2Step() {
        return stepBuilderFactory.get("job2Step")
                .<String, String>chunk(Integer.parseInt(chunkSize))
                .reader(job2ItemReader())
                .writer(this.job2ItemWriter)
                .faultTolerant()
                .skipLimit(Integer.parseInt(this.skipLimit))
                .skipPolicy(skipPolicy())
                .retryPolicy(new NeverRetryPolicy())
                .listener(chunkListener())
                .allowStartIfComplete(true)
                .build();
    }

    @Bean
    public JdbcCursorItemReader<String> job2ItemReader() {
        return new JdbcCursorItemReaderBuilder<String>()
                .dataSource(oracleDataSource)
                .name("job2ItemReader")
                .sql(this.sqlQuery)
                .fetchSize(Integer.parseInt(this.fetchSize))
                .rowMapper((resultSet, rowNum) -> resultSet.getString(1))
                .build();
    }
}

@Component
public class Job2CommandLineRunner implements CommandLineRunner {

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    private Job JOB2;

    public void run(String... args) throws Exception {
        // "params" below stands for the key=value command-line arguments parsed into a map
        String jobName = params.get("job.name");
        if (jobName != null && jobName.equalsIgnoreCase("JOB2")) {
            JobParameters jobParameters =
                    new JobParametersBuilder()
                            .addLong("time", System.currentTimeMillis())
                            .addString("dateCode", params.get("dateCode"))
                            .toJobParameters();
            this.jobLauncher.run(JOB2, jobParameters);
        }
    }
}
I have a couple of questions:
1. The 2nd job simply followed the 1st job's implementation, but I'm not sure why it doesn't get invoked from the command line the way the 1st job does. To invoke the 2nd job, I had to launch it explicitly from a CommandLineRunner.
2. When there are multiple jobs in a Spring Batch application, is invoking a particular job explicitly through a CommandLineRunner the right way of doing it? If not, what is the right way to invoke a particular job (of an application with multiple jobs) from the command line? (Eventually the jobs need to be called through the Tivoli Workload Scheduler.)
It would be great if someone could help me clarify these questions.
Thanks in advance.
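For context, one common pattern for an application with several jobs is a single runner that launches whichever job the job.name argument names; a rough sketch under that assumption is below (the bean and argument names simply mirror the question). Spring Boot also has a spring.batch.job.names property that restricts which jobs the default launcher runs at startup, which may remove the need for a custom runner.

import java.util.Arrays;
import java.util.Map;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

// Sketch only: one runner that launches whichever Job bean the job.name argument names.
@Component
public class JobByNameRunner implements CommandLineRunner {

    private final JobLauncher jobLauncher;
    private final Map<String, Job> jobs; // all Job beans, keyed by bean name (e.g. JOB1, JOB2)

    public JobByNameRunner(JobLauncher jobLauncher, Map<String, Job> jobs) {
        this.jobLauncher = jobLauncher;
        this.jobs = jobs;
    }

    @Override
    public void run(String... args) throws Exception {
        String jobName = Arrays.stream(args)
                .filter(a -> a.startsWith("job.name="))
                .map(a -> a.substring("job.name=".length()))
                .findFirst()
                .orElse(null);

        Job job = (jobName == null) ? null : jobs.get(jobName);
        if (job != null) {
            JobParameters params = new JobParametersBuilder()
                    .addLong("time", System.currentTimeMillis())
                    .toJobParameters();
            jobLauncher.run(job, params);
        }
    }
}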

Is there a way to unregister Spring Cron Task from ScheduledTaskRegistrar?

We have written a simple Spring Boot application that triggers a cron job. We are able to launch it successfully. Below is a piece of the code:
CronTask task = new CronTask(new Runnable() {
    @Override
    public void run() {
        System.out.println("Job running ...");
    }
}, cronExpression);

taskRegistrar.addCronTask(task);
taskRegistrar.afterPropertiesSet();
Now, how do I unregister/kill/destroy/remove the task I started?
I can get the cron tasks back from the registrar using
this.taskRegistrar.getCronTaskList();
but I don't see a method in the registrar to unregister a task, nor any method on the task to destroy it.
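One commonly used workaround is to schedule through a TaskScheduler and keep the returned ScheduledFuture, which can later be cancelled; a minimal sketch of that idea is below (the cron expression and timings are illustrative). Newer Spring versions also return a ScheduledTask with a cancel() method from ScheduledTaskRegistrar.scheduleCronTask, which is worth checking against the version in use.

import java.util.concurrent.ScheduledFuture;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;
import org.springframework.scheduling.support.CronTrigger;

// Sketch only: keep the ScheduledFuture so the cron task can be stopped later.
public class CancellableCronExample {

    public static void main(String[] args) throws Exception {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.initialize();

        ScheduledFuture<?> future = scheduler.schedule(
                () -> System.out.println("Job running ..."),
                new CronTrigger("0/2 * * * * *"));

        Thread.sleep(10_000L);   // let the task fire a few times
        future.cancel(false);    // effectively unregisters the task
        scheduler.shutdown();
    }
}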

Rerunning a completed Job from Spring Framework?

I am developing a web application using the Spring Framework that does some jobs as the application starts up; these jobs primarily consist of loading data from CSVs and building Java objects out of them.
Currently, I am building a RESTful API using Restlet and Spring, and one of the endpoints is supposed to take a job name as a parameter and restart that job even if it has been marked as COMPLETED. How do I accomplish such a restart? I have tried the JobOperator interface's startNextInstance() method and have also tried manually incrementing the JobParameters so that there is no "job instance already running" exception.
Does anyone have a code snippet or an alternative idea on how to restart a job in Spring that has been marked as COMPLETED?
Any help would be greatly appreciated, thanks!
Because of the terms you're using, I'm quite certain you're using Spring Batch.
In Batch terms you cannot actually restart a COMPLETED instance or execution. A single job instance is identified by its job parameters. If you need to run the job again with the same parameters, one way is to include some unique parameter, for example the current timestamp, in the JobParameters before launching.
So restarting a completed job really means starting a new instance of the job with similar parameters. Here's a slightly modified snippet I've used before that uses JobLauncher and JobRegistry to launch a new job by name:
@Autowired
@Qualifier("asyncJobLauncher")
private JobLauncher asyncJobLauncher;

@Autowired
private JobRegistry jobRegistry;

...

public JobExecution startJob(String jobName) {
    Job job;
    try {
        job = jobRegistry.getJob(jobName);
    } catch (NoSuchJobException e) {
        // handle invalid job name
        return null;
    }

    JobParametersBuilder jobParams = new JobParametersBuilder();
    jobParams.addLong("currentTime", System.currentTimeMillis());
    // add other relevant parameters

    try {
        return asyncJobLauncher.run(job, jobParams.toJobParameters());
    } catch (JobExecutionAlreadyRunningException e) {
        // handle instance already running with given params
    } catch (Exception e) {
        // handle other errors
    }
    return null;
}
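A hedged usage example, assuming a job registered under the name "importUserJob" (the name is purely illustrative); the returned JobExecution describes the newly started instance:

JobExecution execution = startJob("importUserJob");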
Hope it helps; here's some reading about the subject.

How can a Hadoop job kill itself

Is there any way for a Hadoop job to kill itself, or to send a signal to kill it?
I've read the configuration settings from the JobConf; if a user specifies the wrong settings I need to kill the job or throw an error, but the map/reduce configure() method does not allow throwing an exception.
public void configure(JobConf job) {
    System.out.println("Inside config start processing");
    try {
        String strFileName = job.get("hadoop.rules");
        LoadFile(strFileName);
    } catch (Exception e) {
        e.printStackTrace();
        // Here I need to write code to kill the job
    }
}
In the configure() method, just throw a RuntimeException.
Better yet, if possible, perform your validation step before the job is run at all.
Alternatively, save the state into a boolean variable called kill, evaluate the variable inside the map step, and then throw an IOException.
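For illustration, a rough sketch of the boolean-flag approach using the old mapred API; the key/value types are placeholders and the LoadFile call from the question is left as a comment:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Sketch only: remember the configuration failure in configure() and fail fast in map().
public class RuleMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    private boolean kill = false;   // set when configuration validation fails

    @Override
    public void configure(JobConf job) {
        try {
            String strFileName = job.get("hadoop.rules");
            // LoadFile(strFileName);  // the question's own loading/validation step
        } catch (Exception e) {
            kill = true;             // cannot throw a checked exception here
        }
    }

    @Override
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> output, Reporter reporter) throws IOException {
        if (kill) {
            throw new IOException("Invalid job configuration: hadoop.rules could not be loaded");
        }
        // normal map logic goes here
    }
}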
