Spring Batch Admin: Schedule new jobs through web GUI

A newbie question on Spring Batch Admin.
My requirement is that the user should be able to schedule new jobs (passing some parameters for the job's functionality) through a web UI. These jobs should be persistent and recurring, and it should be possible to cancel or delete them. It should also be possible to generate a report of the last job runs and to list all existing jobs with their next run dates.
Perhaps my most important requirement is that this should be possible "on the fly", without redeploying the web application or restarting the server.
Can this be done using Spring Batch Admin? (I see that the guide talks about uploading an XML file to add a job, but that seems tedious; if there is an API, why shouldn't we be able to create a job on the fly through the Batch Admin web UI?) Or do the JDK Timer or Quartz support it?

Once a job has been created, it can't be deleted, but it can be stopped. Allowing deletion from the DB is a risky operation: Spring Batch might already have started the job execution while the DB has not been updated yet, and if one removes the job at that moment you end up with an inconsistency.
Scheduling a new job is described in Launch Job. It is not possible to create new types of jobs at runtime, as jobs generally have complicated configuration that is parsed only once, when the Spring context is loaded.
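For reference, launching an already-configured job with fresh parameters through the API looks roughly like this (a minimal sketch; the bean and parameter names are assumptions):

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;

public class OnTheFlyLauncher {

    private final JobLauncher jobLauncher;
    private final Job job; // an existing job bean, defined when the context was loaded

    public OnTheFlyLauncher(JobLauncher jobLauncher, Job job) {
        this.jobLauncher = jobLauncher;
        this.job = job;
    }

    public void launchWith(String someParam) throws Exception {
        // Parameters are persisted with the JobInstance; the timestamp keeps each run unique
        JobParameters params = new JobParametersBuilder()
                .addString("someParam", someParam)
                .addLong("time", System.currentTimeMillis())
                .toJobParameters();
        jobLauncher.run(job, params);
    }
}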

Dynamic deployment (on the fly) of jobs and configurations, without requiring a server restart, is a feature we implemented in the Trooper Batch profile. It is not exactly Spring Batch Admin, but it builds on it. You continue to write your jobs using Spring Batch; only the container changes, as in Trooper you would use its Batch profile runtime. Screenshots and features are here: https://github.com/regunathb/Trooper/wiki/Writing-Batch-jobs-in-Trooper

I think we can deploy each Spring Batch job through SBA: each batch job is compiled into its own WAR file, and we deploy them together on the server. This way we get the following URLs to monitor each job:
http://batchjobserver/job1
http://batchjobserver/job2
http://batchjobserver/job3
http://batchjobserver/job4
The downside is that each WAR file contains its own lib files, which makes each WAR around 10MB in size.
At the same time, I tried manually adding new-job.xml to war-file\WEB-INF\classes\META-INF\spring\batch\jobs, and new-job.jar to war-file\WEB-INF\lib, without stopping JBoss. It works: the new job shows up in the SBA UI and is runnable.
But obviously this would lead to a lot of maintenance and troubleshooting effort, so it is not really practical.

Related

Correct scope for multi-threaded batch jobs in Spring

I believe I've got a scoping issue here.
Project explanation:
The goal is to process any incoming file (on disk), including meta data (which is stored in an SQL database). For this I have two tasklets (FileReservation and FileProcessorTask) which are the steps in the overarching "worker" jobs. They wait for an event to start their work. There are several threads dealing with jobs for concurrency. The FileReservation tasklet sends the fileId to FileProcessorTask using the job context.
A separate job (which runs indefinitely) checks for new file meta data records in the database and upon discovering new records "wakes up" the FileReservationTask tasklets using a published event.
With the current configuration the second step in a job can receive a null message when the FileReservation tasklets are awoken.
If you uncomment the code in BatchConfiguration you'll see that it works when we have separate instances of the beans.
Any pointers are greatly appreciated.
Thanks!
Polling a folder for new files is not suitable for a batch job. So using a Spring Batch job (filePollingJob) is not a good idea IMO.
Any pointers are greatly appreciated.
Polling a folder for new files and running a job for each incoming file is a common use case, which can be implemented using a java.nio.file.WatchService or a FileInboundChannelAdapter from Spring Integration. See How do I kickoff a batch job when input file arrives? for more details.
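As an illustration, a bare-bones WatchService loop could look like this (a minimal sketch; the folder path is hypothetical, and each detected file would be turned into job parameters for a JobLauncher):

import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class FolderWatcher {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("/var/batch/incoming"); // hypothetical inbox folder
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
        while (true) {
            WatchKey key = watcher.take(); // blocks until a file system event arrives
            for (WatchEvent<?> event : key.pollEvents()) {
                Path file = dir.resolve((Path) event.context());
                // here you would build JobParameters from the file name and call jobLauncher.run(...)
                System.out.println("New file, launching job for: " + file);
            }
            key.reset(); // re-arm the key, otherwise no further events are delivered
        }
    }
}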

Activate Batch on only one Server instance

I have an nginx load balancer in front of two Tomcat instances, each containing a Spring Boot application. Each Spring Boot application executes a batch that writes data to a database.
The batch executes every day at 1am.
The problem is that both instances execute the batch simultaneously, which I don't want.
Is there a way to keep the batches deployed on both instances and tell Tomcat or nginx to start the batch on the master server only (so the slave server doesn't run it)?
If one of the servers stops, the second server could start the batch on its behalf.
Is there a tool in nginx or Tomcat (or some other technology) to do that?
Thank you in advance.
Here is a simplistic design approach.
Since you have two scheduled methods in the two VMs triggered at the same time, add a random delay to both. This answer has many options on how to delay the trigger for a random duration: Spring @Scheduled annotation random delay.
Inside the method, run the job only if it is NOT already started (by the other VM). This can be tracked with a new database table.
Here is the pseudo code for this design:
@Scheduled(cron = "0 0 1 * * *") // every day at 1am
public void batchUpdateMethod() {
    // Check the shared database for signs of the job running now
    if (!isJobRunning()) {
        markJobRunning();   // update database table to indicate the job is running
        runBatchJob();      // run the batch job
        markJobFinished();  // update database table to indicate the job is finished
    }
}
The database (or some common file location) should act as a lock to synchronize the two runs, since the two VMs are independent of each other.
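A minimal sketch of such a lock with Spring's JdbcTemplate, assuming a hypothetical BATCH_LOCK table with JOB_NAME and RUNNING columns:

import org.springframework.jdbc.core.JdbcTemplate;

public class DbJobLock {

    private final JdbcTemplate jdbc;

    public DbJobLock(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Returns true only for the VM that wins; the single UPDATE is atomic on the shared DB
    public boolean tryAcquire(String jobName) {
        int updated = jdbc.update(
                "UPDATE BATCH_LOCK SET RUNNING = 1 WHERE JOB_NAME = ? AND RUNNING = 0",
                jobName);
        return updated == 1;
    }

    public void release(String jobName) {
        jdbc.update("UPDATE BATCH_LOCK SET RUNNING = 0 WHERE JOB_NAME = ?", jobName);
    }
}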
For a more robust design, consider Spring Batch itself.
Spring Batch uses a database for its jobs (the JobRepository). By default an in-memory datasource is used to keep track of running jobs and their status, so in your setup the two instances are (most likely) each using their own in-memory database.
Multiple instances of Spring Batch can coordinate with each other as a cluster, with one running jobs while the other acts as a backup, if the JobRepository database is shared.
For this you need to configure the two instances to use a common datasource.
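In a Spring Boot application that can be as simple as pointing both instances at the same database in application.properties (host and credentials here are made up):

spring.datasource.url=jdbc:mysql://shared-db-host:3306/batch
spring.datasource.username=batch
spring.datasource.password=secret
# Spring Boot then auto-configures the Batch JobRepository tables against this shared DataSource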
Here are some docs:
https://docs.spring.io/spring-batch/docs/current/reference/html/index-single.html#jobrepository
https://docs.spring.io/spring-batch/docs/current/reference/html/job.html#configuringJobRepository
If you design two app server instances to run the same job at the same time, then by design, one will succeed in creating the job instance and the other will fail (and this failure can be ignored). See the Javadoc of JobRepository. This is one of the roles of the job repository: to act as a safeguard against duplicate job executions in a clustered environment.
If one of the servers stops, the second server could start the batch on its behalf. Is there a tool in nginx or Tomcat (or some other technology) to do that?
I believe there is no need for such a tool or technology. If one of the servers is down at the time of the schedule, the other will be able to take over and succeed in launching the job.
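On the losing instance, the expected failure can simply be swallowed; a minimal sketch:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.batch.core.repository.JobExecutionAlreadyRunningException;
import org.springframework.batch.core.repository.JobInstanceAlreadyCompleteException;

public class SafeLauncher {
    void launchIgnoringDuplicates(JobLauncher launcher, Job job, JobParameters params) throws Exception {
        try {
            launcher.run(job, params);
        } catch (JobExecutionAlreadyRunningException | JobInstanceAlreadyCompleteException e) {
            // The other instance created this job instance first; by design this is safe to ignore
        }
    }
}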
I implemented a simple BCM-server functionality where all servers register themselves (create a Server-table entry) with their unique IP. The servers need to re-register within a defined time (e.g. 10 seconds). If a server does not register in time (last update timestamp older than 10 seconds), it gets de-registered (its Server-table entry deleted) by whichever server does register.
In the end I have a table of ordered server entries and can assign each task uniquely to one of the registered servers.
The implementation is very simple and works perfectly.
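Roughly, the registration step could look like this (a sketch only; the SERVER table, its columns, and the MySQL-style interval syntax are assumptions):

import java.net.InetAddress;
import org.springframework.jdbc.core.JdbcTemplate;

public class ServerRegistry {

    private final JdbcTemplate jdbc;

    public ServerRegistry(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Run on a short schedule, well below the 10-second window
    public void heartbeat() throws Exception {
        String ip = InetAddress.getLocalHost().getHostAddress();
        // Refresh our own entry, inserting it on first registration
        int updated = jdbc.update("UPDATE SERVER SET LAST_SEEN = NOW() WHERE IP = ?", ip);
        if (updated == 0) {
            jdbc.update("INSERT INTO SERVER (IP, LAST_SEEN) VALUES (?, NOW())", ip);
        }
        // De-register servers whose last update is older than 10 seconds
        jdbc.update("DELETE FROM SERVER WHERE LAST_SEEN < NOW() - INTERVAL 10 SECOND");
    }
}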
Before that I also had the Spring Batch job-sharing functionality in mind, but I wanted a more lightweight and more flexible solution.
I currently use it in all my projects where I need batch processing.

Logging for Talend job running within spring-boot

We have Talend jobs triggered within a Spring Boot application. Is there any way to direct the output of the Talend jobs to the application log files?
One workaround we found is to write the logs directly to an external file (the filePath passed as a context param). But we wanted to know if there is a better way to configure this seamlessly.
Not sure if I understood the question correctly, but I guess your concern might be about what happened to the triggered jobs.
Logging
With respect to logging for Talend, you could configure it using Log4j:
https://help.talend.com/reader/5DC~TBhDsBie5JTXyVLW4g/QSGCZJKXo~uhKvZDq1DxUg
Monitoring
Regarding the status of the executed job, you can retrieve the execution details using a REST call (the Talend MetaServlet API):
getTaskExecutionStatus
https://help.talend.com/reader/oYf9gKhmYrkWCiSua4qLeg/SLiAyHyDTjuznLR_F~MiQQ
By modifying the existing Talend job, you could also design a feedback loop, i.e. trigger a REST call back to your application with the execution details from the Talend job.

Spring Batch: Horizontal scaling of Job Repository

I have read a lot about how to enable parallel processing and chunking of an individual job using the master/slave paradigm. Consider an already implemented Spring Batch solution that was intended to run on a standalone server. With minimal refactoring I would like to enable it to scale horizontally and be more resilient in production operation. Speed and efficiency are not goals.
http://www.mkyong.com/spring-batch/spring-batch-hello-world-example/
In the following example a Job Repository is used that connects to and initializes a database schema for the Job Repository. Job initiation requests are fed to a message queue that a single server, with a single Java process, listens on via Spring JMS. When it receives a request it spawns a new Java process that runs the Spring Batch job. If the job has not been started according to the Job Repository, it will begin. If the job had failed, it will pick up where it left off. If the job is in process, the request will be ignored.
The single point of failure is the single server and single listening process for job initiation. I would like to increase resiliency by horizontally scaling identical server instances all competing for who can first grab the job initiation message when it first appears in the queue. That server instance will now attempt to run the job.
I was conceiving that all instances of the JobRepository would share the same schema, so they can all query for when the status is currently in process and decide what they will do. I am unsure though if this schema or JobRepository implementation is meant to be utilized by multiple instances.
Is there a risk in pursuing this that this approach could result in deadlocking the database? There are other constraints to where the Partition features of Spring Batch will not work for my application.
I decided to build a prototype to test whether the Spring Batch Job Repository schema and SimpleJobRepository can be used in a load-balanced way, with multiple Spring Batch Java processes running concurrently. I was afraid that deadlock scenarios might occur at the database, where all running job processes get stuck.
My Test
I started with the mkyong Spring Batch HelloWorld example and made some changes so that it could be packaged into a JAR executable from the command line. I also removed the initialize-database step defined in the database.config file and manually set up a local MySQL server with the proper schema elements. I added a job parameter for time, set to the current time in millis, so that each job instance would be unique.
Next, I wrote a separate Java main class that used the Apache Commons Exec framework to create 50 subprocesses with no wait between them. Each of these processes also has a Thread.sleep for 1 second within its Processor objects, so that a number of processes all kick off at the same time and all attempt to access the database at the same time.
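The spawning part might look something like this (a sketch; the jar name is hypothetical):

import org.apache.commons.exec.CommandLine;
import org.apache.commons.exec.DefaultExecuteResultHandler;
import org.apache.commons.exec.DefaultExecutor;

public class ConcurrencyTest {
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 50; i++) {
            // Each subprocess is one full Spring Batch job run against the shared schema
            CommandLine cmd = CommandLine.parse("java -jar spring-batch-helloworld.jar");
            DefaultExecutor executor = new DefaultExecutor();
            // The result handler makes execute() asynchronous, so all 50 start back-to-back
            executor.execute(cmd, new DefaultExecuteResultHandler());
        }
    }
}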
Results
After running this test a number of times in a row, I see that all 50 Spring Batch processes consistently complete successfully and update the same database schema correctly. I see no indication that multiple Spring Batch job processes running on multiple servers against the same database would interfere with each other on the schema, nor any indication that a deadlock could happen.
So it sounds as if load balancing Spring Batch jobs without the use of advanced master/slave and step-partitioning approaches is a valid use case.
If anybody would like to comment on my test or suggest ways to improve it I would appreciate it.
Here is an excerpt from the Spring Batch docs on how Spring Batch handles database updates for its repository:
Spring Batch employs an optimistic locking strategy when dealing with updates to the database. This means that each time a record is 'touched' (updated) the value in the version column is incremented by one. When the repository goes back to save the value, if the version number has changed it throws an OptimisticLockingFailureException, indicating there has been an error with concurrent access. This check is necessary, since, even though different batch jobs may be running in different machines, they all use the same database tables.

Java Quartz is internally caching jobs and their parameters

I am involved in a project which requires me to create a job scheduler using Quartz Scheduler to schedule various jobs which in turn trigger Pentaho Kettle transformation(s). The Kettle transformations are essentially ETL scripts performing some mundane activities in our case. I am facing a critical issue while running the scheduler:
We have around 10 jobs scheduled using the job scheduler. For some 3 to 4 specific jobs it throws the following exception:
Unable to load the job from XML file [/home /transformations/jobs/TestJob.kjb] Unable to read file [file:///home /transformations/jobs/ TestJob.kjb] Could not read from "file:///home /transformations/jobs/TestJob.kjb" because it is a not a file.
org.pentaho.di.job.JobMeta.(JobMeta.java:715)
org.pentaho.di.job.JobMeta.(JobMeta.java:679)
com. XYZ.transformation.jobs.impl.JobBootstrapImpl.executeJob(JobBootstrapImpl.java:115)
com. XYZ.transformation.jobs.impl.JobBootstrapImpl.startJobsExecution(JobBootstrapImpl.java:100)
com. XYZ.transformation.jobs.impl.QuartzJobsScheduler.executeInternal(QuartzJobsScheduler.java:25)
org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:86)
org.quartz.core.JobRunShell.run(JobRunShell.java:223)
org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549)
The weird thing is that, upon verifying the specified path, i.e. “/home /transformations/jobs/TestJob.kjb”, the file is present and I am able to read it. Moreover, the job runs successfully and does all the things it is supposed to, yet it throws the exception detailed above.
After observing closely, I strongly feel that Quartz is internally caching jobs and/or their parameters. We load certain parameters required for the job to execute after it is triggered. Would it be possible to delete/purge the cache used by Quartz? I also tried killing all the Java processes running on the box (thinking that it might kill Quartz itself, as Quartz runs within a Java process) and restarting Quartz and its jobs afresh, but couldn't make it work as expected. It still stores the old parameters somewhere, perhaps in some cache.
Versions used –
Spring Framework (spring-core & spring-beans) - 3.0.6.RELEASE
Quartz Scheduler - 1.8.6
Platform – Redhat Linux - 2.6.18-308.el5
Pentaho Kettle – Spoon Stable Release – 4.3.0
I would do it this way:
Ensure that the Pentaho job can run standalone first, with a shell script, a Java service wrapper, or whatever
In the Quartz job, then use Quartz's NativeJob to call the same standalone script (see the sketch below)
Just my two cents
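For illustration, wiring that up with the Quartz 1.x API might look like this (a sketch only; the script path is made up, and I'm assuming NativeJob's "command" data-map key):

import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SimpleTrigger;
import org.quartz.impl.StdSchedulerFactory;
import org.quartz.jobs.NativeJob;

public class NativeJobSetup {
    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        // Quartz 1.x style JobDetail pointing at NativeJob, which shells out to the OS
        JobDetail job = new JobDetail("testJob", Scheduler.DEFAULT_GROUP, NativeJob.class);
        job.getJobDataMap().put(NativeJob.PROP_COMMAND, "/opt/scripts/run-test-job.sh");
        // Fire once, immediately; a CronTrigger would be used for the real schedule
        scheduler.scheduleJob(job, new SimpleTrigger("testTrigger", Scheduler.DEFAULT_GROUP));
        scheduler.start();
    }
}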
Looks to me like you have an extra space in the path.
/home /transformations/jobs/TestJob.kjb
Between the e of home and the /
Remove that space, I can't possibly believe you actually have a home directory called "home "!!
