I want to run a Spring Batch job every 10 minutes between 2:30 AM and 10:30 PM.
Please suggest the expression for it to be added to the @Scheduled annotation in Spring Boot.
You can try these expressions. Spring's @Scheduled cron format has six fields (second, minute, hour, day of month, month, day of week), and since the 2:30-22:30 window does not align with whole hours you need three expressions to cover it:

0 30-50/10 2 * * *   (2:30, 2:40, 2:50)
0 */10 3-21 * * *    (every 10 minutes from 3:00 to 21:50)
0 0-30/10 22 * * *   (22:00, 22:10, 22:20, 22:30)
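A minimal sketch of wiring these up, assuming a Spring version where @Scheduled is repeatable (otherwise use one method per expression); the class and method names are just placeholders:

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class BatchJobScheduler {

    // Together these cover every 10 minutes from 2:30 to 22:30.
    @Scheduled(cron = "0 30-50/10 2 * * *")   // 2:30, 2:40, 2:50
    @Scheduled(cron = "0 */10 3-21 * * *")    // 3:00 through 21:50
    @Scheduled(cron = "0 0-30/10 22 * * *")   // 22:00 through 22:30
    public void runBatchJob() {
        // launch the Spring Batch job here
    }
}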
I'm working on an AWS Lambda using the Serverless Framework. I'm trying to get my Lambda function to run on an EventBridge trigger, more specifically a cron schedule: every 30 minutes between 20:00 and 07:00, Monday to Friday. That is great for prod, but in QA/UAT I will most likely want a different schedule, so I'm looking to implement stage-based cron triggers: daytime in QA/UAT, evening in production.
I originally tried a single cron schedule trigger of cron(0/30 20:30-06:30 ? * 1-5 *) for UAT, but that didn't work for some reason; my Lambda function ran only twice after 20:30, which I've yet to figure out.
My serverless file contains:
custom:
  stage: "${opt:stage, self:provider.stage, 'dev'}"
  stages:
    - dev
    - uat
  eveningSchedule:
    dev: cron(0/30 08:00-12:59 ? * 1-5 *)
    uat: cron(0/30 20:30-23:59 ? * 1-5 *)
  morningSchedule:
    dev: cron(0/30 12:01-17:30 ? * 1-5 *)
    uat: cron(0/30 00:30-06:30 ? * 1-5 *)
The function is defined as:
functions:
  handler123:
    handler: foo::bar::functionName
    package:
      artifact: ./bin/Release/net6.0/foo.bar.zip
    events:
      - schedule: "${self:custom.eveningSchedule.stage}"
      - schedule: "${self:custom.morningSchedule.stage}"
The error I get when running sls deploy is:
Cannot resolve variable at "functions.CifFileRetriever.events.0": Value not found at "self" source and "functions.CifFileRetriever.events.1": Value not found at "self"
Would be massively grateful for any help on this one.
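For what it's worth, "${self:custom.eveningSchedule.stage}" looks up a literal key named stage under eveningSchedule, which does not exist in the map above. If the intent is to index the schedule maps by the active stage, the Serverless Framework supports nesting variables; an untested sketch of the events block under that assumption:

events:
  - schedule: "${self:custom.eveningSchedule.${self:custom.stage}}"
  - schedule: "${self:custom.morningSchedule.${self:custom.stage}}"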
I'm using a Spring Boot 2.4.x app with Spring Batch 4.3.x, and I've created a simple job.
It has a FlatFileItemReader which reads from a CSV file and an ImportKafkaItemWriter which writes to a Kafka topic, combined in one step. I'm using a SimpleJobLauncher with a ThreadPoolTaskExecutor as the JobLauncher's TaskExecutor. It is working as I expected. But I have one resilience use case: if I kill the app, then restart it and trigger the job again, it should carry on and finish the remaining work. Unfortunately that is not happening. I investigated further and found that when I forcibly close the app, the Spring Batch job repository key tables look like this:
job_execution_id: 1
version: 1
job_instance_id: 1
create_time: 2021-06-16 09:32:43
start_time: 2021-06-16 09:32:43
end_time: (null)
status: STARTED
exit_code: UNKNOWN
exit_message: (empty)
last_updated: 2021-06-16 09:32:43
job_configuration_location: (null)
and
step_execution_id: 1
version: 4
step_name: productImportStep
job_execution_id: 1
start_time: 2021-06-16 09:32:43
end_time: (null)
status: STARTED
commit_count: 3
read_count: 6
filter_count: 0
write_count: 6
read_skip_count: 0
write_skip_count: 0
process_skip_count: 0
rollback_count: 0
exit_code: EXECUTING
exit_message: (empty)
last_updated: 2021-06-16 09:32:50
If I manually update these tables, setting a valid end_time and the status to FAILED, then I can restart the job and it works absolutely fine. May I know what I need to do so that Spring Batch updates those repository tables appropriately and I can avoid these manual steps? I can provide more information about the code if needed.
When a job is killed abruptly, Spring Batch won't have a chance to update its status in the job repository, so the status is stuck at STARTED. When the job is restarted, the only information Spring Batch has is the status in the job repository, and by looking at the database alone it cannot distinguish between a job that is effectively running and a job that was killed abruptly (in both cases, the status is STARTED).
The way to go is indeed to manually update the tables, either marking the status as FAILED to be able to restart the job, or as ABANDONED to abandon it. This is a business decision that you have to make, and there is no way to automate it on the framework side. For more details, please refer to the reference documentation here: Aborting a Job.
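If your business decision is that a stale STARTED execution should always become FAILED (which is only safe when a single app instance uses this job repository), one way to apply that decision is a cleanup run at startup, before any job is launched. A minimal sketch with JdbcTemplate, assuming the default BATCH_ table prefix; failStaleExecutions is a hypothetical helper:

import org.springframework.jdbc.core.JdbcTemplate;

public void failStaleExecutions(JdbcTemplate jdbcTemplate) {
    // Mark step and job executions left behind by a crash as FAILED
    // so that the job instance becomes restartable.
    jdbcTemplate.update(
        "UPDATE BATCH_STEP_EXECUTION SET STATUS = 'FAILED', END_TIME = CURRENT_TIMESTAMP"
            + " WHERE STATUS = 'STARTED' AND END_TIME IS NULL");
    jdbcTemplate.update(
        "UPDATE BATCH_JOB_EXECUTION SET STATUS = 'FAILED', END_TIME = CURRENT_TIMESTAMP"
            + " WHERE STATUS = 'STARTED' AND END_TIME IS NULL");
}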
You can add a fake job parameter, for example a Version counter that you increment for every new job execution, so you don't have to touch the job repository tables.
What I mean is: build with mvn clean package, then launch the program like this:

java -jar my-jarfile.jar dest=/tmp/foo Version="0"
java -jar my-jarfile.jar dest=/tmp/foo Version="1"
java -jar my-jarfile.jar dest=/tmp/foo Version="2"

and so on. Alternatively, you can build JobParameters and launch the job programmatically via the JobLauncher, using a date parameter such as date = new Date().toString(), which gives a fresh timestamp on every new job execution.
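For illustration, a minimal sketch of that programmatic launch; jobLauncher and job are assumed to be injected Spring beans:

import java.util.Date;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;

public JobExecution launchWithFreshParameters(JobLauncher jobLauncher, Job job) throws Exception {
    JobParameters params = new JobParametersBuilder()
            .addString("dest", "/tmp/foo")
            .addDate("date", new Date()) // fresh timestamp -> new JobInstance every run
            .toJobParameters();
    return jobLauncher.run(job, params);
}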
You can use "JVM Shutdown Hook":
Something like this:
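// This hook runs on graceful JVM shutdown (SIGTERM, Ctrl+C); a kill -9 or
// an OS crash bypasses it, so the execution can still end up stuck at STARTED.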
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    if (jobExecution.isRunning()) {
        jobExecution.setEndTime(new Date());
        jobExecution.setStatus(BatchStatus.FAILED);
        jobExecution.setExitStatus(ExitStatus.FAILED);
        jobRepository.update(jobExecution);
    }
}));
I have scheduled a coordinator using the cron expression
frequency="20 3 * * 2-4", but it gives an error.
The Oozie coordinator logs say: java.lang.IllegalArgumentException: parameter [frequency]=[20 3 * * 2-4] must be an integer. Parsing error for input String: "20 3 * * 2-4"
HDP version : 2.5.3
Oozie Client build version : 4.2.0.2.5.3.0-37
You are requesting Oozie to apply the XML schema for Coordinator... in version 0.2 of that schema.
The documentation hints that CRON syntax worked with schema 0.2, but I'm pretty sure that CRON scheduling was introduced in Oozie V4.0 (and documented in V4.1) -- and since Oozie V4.0 introduced schema 0.4, I believe that the documentation is wrong.
Bottom line: requesting xmlns="uri:oozie:coordinator:0.4" should allow Oozie to parse your CRON schedule correctly.
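In other words, the coordinator header would look something like this (the name is a placeholder, the frequency is reused from the question, and the rest of the app definition stays as it was):

<coordinator-app name="cron-coord" frequency="20 3 * * 2-4" start="${start}" end="${end}" timezone="UTC" xmlns="uri:oozie:coordinator:0.4">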
I am trying to use the Rufus Scheduler (within Dashing) to schedule a cron job, but also have it run once upon the server spinning up. I am following the README here, where it says to do the following:
scheduler.cron '00 14 * * *', :first_in => '3d' do
  # ... every day at 14h00, but start after 3 * 24 hours
end
When I try to do this, I get the following error in my job:
`cron': unknown option: :first_in (ArgumentError)
Has anyone come across this?
Dashing is using rufus-scheduler 2.0.24 (https://github.com/Shopify/dashing/blob/55f90939eae4d6eb64822fd3590f694418396510/dashing.gemspec#L24), which doesn't support the first_in feature for cron.
first_in was introduced for cron in rufus-scheduler 3.0.
It seems you're reading the rufus-scheduler 3.x documentation instead of the 2.x one.
The documentation for rufus-scheduler is at https://github.com/jmettraux/rufus-scheduler#rufus-scheduler ; at the top of it there is a link to the 2.x documentation (https://github.com/jmettraux/rufus-scheduler/blob/two/README.rdoc). You'll have better luck there.
A 2.x alternative would be:
scheduler.in '3d' do
  scheduler.cron '00 14 * * *' do
    # ... every day at 14h00
  end
end
I have an Oozie installation as part of the Cloudera installation.
I'm trying to execute the coordinator workflow from the example, with the following configuration in the coordinator.xml:
<coordinator-app name="cron-coord" frequency="${coord:minutes(60)}" start="${start}" end="${end}" timezone="UTC" xmlns="uri:oozie:coordinator:0.2">
With this configuration I expected the workflow to be executed every hour, but it seems that the workflow has been executed every 5 minutes. Does anyone have an answer for this issue?
Are you setting the start time prior to the current time? If so, Oozie will work in catch-up mode until all delayed actions have been scheduled. The "frequency" setting does not apply to catch-up mode.
You may specify the frequency in hours instead of minutes:
<coordinator-app name="cron-coord" frequency="${coord:hours(1)}" start="${start}" end="${end}" timezone="UTC" xmlns="uri:oozie:coordinator:0.2">