Quartz Integration with Spring

I have a web application and I am trying to start the Quartz scheduler programmatically in Spring. I have a service class where I create an instance of SchedulerFactory and then get a scheduler.
The code is as follows.
@Service("auctionWinnerService")
public class NormalAuctionWinnerServiceImpl implements AuctionWinnerService {

    public static final String NORMAL_AUCTION = "NORMAL AUCTION";
    public static int NORMAL_AUCTION_COUNTER = 0;

    private SchedulerFactory schedulerFactory;
    private Scheduler scheduler;

    public void declareWinner(int auctionId, Map<String, Object> parameterMap) {
        System.out.println("INSIDE declareWinner of NormalAuctionWinner");
        schedulerFactory = new StdSchedulerFactory();
        try {
            scheduler = schedulerFactory.getScheduler();
            System.out.println("GOT SCHEDULER : " + scheduler);
        } catch (SchedulerException e1) {
            e1.printStackTrace();
        }

        JobDetail jd = new JobDetail();
        jd.setName(NORMAL_AUCTION + " JOB " + NORMAL_AUCTION_COUNTER);
        jd.setJobClass(NormalAuctionWinnerJob.class);

        /** CREATE CRON TRIGGER INSTANCE **/
        CronTrigger t = new CronTrigger();
        t.setName(NORMAL_AUCTION + ++NORMAL_AUCTION_COUNTER);
        t.setGroup("Normal Auction");

        Date d = new Date();
        Date d1 = new Date();
        d1.setMinutes(d.getMinutes() + 5);
        t.setStartTime(d);
        t.setEndTime(d1);

        try {
            t.setCronExpression("10 * * * * ? *");
            scheduler.scheduleJob(jd, t);
        } catch (SchedulerException e) {
            e.printStackTrace();
        } catch (ParseException e) {
            e.printStackTrace();
        }
    }
}
The schedulerFactory and scheduler are instantiated, but my jobs do not run.
Could someone point out what I am missing here?
Also, I need only one Factory instance and one Scheduler instance. I tried making them static, but it didn't work. Any pointers in this direction would be helpful.
Thanks

Unless you have a specific requirement on Quartz's proprietary functionality, I recommend getting rid of it and using Spring's built-in scheduling capability. As of Spring 3, this includes support for cron-style expressions, very similar to Quartz's cron trigger.
As well as bringing simplicity to your application and its configuration, it is inherently more reliable than Quartz, and it provides an easier API for programmatic usage via the TaskScheduler interface.
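As a rough illustration only (the class name below is made up, not taken from the question), the same per-auction scheduling could look something like this with Spring's TaskScheduler:

import java.util.Date;
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.scheduling.support.CronTrigger;
import org.springframework.stereotype.Service;

@Service("auctionWinnerService")
public class SpringSchedulingAuctionWinnerService implements AuctionWinnerService {

    // e.g. a ThreadPoolTaskScheduler bean defined in the Spring configuration
    @Autowired
    private TaskScheduler taskScheduler;

    public void declareWinner(final int auctionId, Map<String, Object> parameterMap) {
        Runnable task = new Runnable() {
            public void run() {
                // hypothetical winner-declaration logic
                System.out.println("Declaring winner for auction " + auctionId);
            }
        };

        // fire once at a fixed point in time, e.g. five minutes from now ...
        taskScheduler.schedule(task, new Date(System.currentTimeMillis() + 5 * 60 * 1000L));

        // ... or drive it with a cron expression, much like Quartz's CronTrigger:
        // taskScheduler.schedule(task, new CronTrigger("0 0/5 * * * *"));
    }
}

The scheduler bean is a singleton managed by Spring, which also addresses the "only one scheduler instance" requirement from the question.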

First of all, how well do you know Quartz or cron trigger expressions? I may be mistaken, but 10 * * * * ? * would make the trigger fire on the 10th second of every minute; I've never seen such an expression, so it may not fire at all.
Are you trying to create a trigger that fires every 10 seconds? In that case, use a simple trigger like this:
new SimpleTrigger(NORMAL_AUCTION + ++NORMAL_AUCTION_COUNTER,
                  "Normal Auction",
                  d,
                  d1,
                  SimpleTrigger.REPEAT_INDEFINITELY,
                  10000L);
Edit:
OK, so if that's your requirement, you need a trigger that will fire just once, at the end time of the auction. For that, use a SimpleTrigger like this:
new SimpleTrigger(NORMAL_AUCTION + ++NORMAL_AUCTION_COUNTER,
                  "Normal Auction",
                  d1,
                  null,
                  0,
                  0L);
In this case the original start date is no longer needed: the trigger's start time (d1) is set to the appropriate moment, the auction's end time, and the trigger fires just once; the end date can be left null.
As an additional note, do not calculate dates like that. I suggest you try the Joda-Time library; it is a simple and well-known replacement for the clumsy standard Date/Calendar API.
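For illustration, here is a small sketch of how the five-minute window from the question could be computed with Joda-Time, reusing the trigger variable t (this assumes Joda-Time is on the classpath):

import org.joda.time.DateTime;

DateTime start = new DateTime();          // now
DateTime end = start.plusMinutes(5);      // five minutes later, e.g. the auction end

t.setStartTime(start.toDate());
t.setEndTime(end.toDate());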

You have forgotten to start the scheduler! scheduler.start();
...
try {
    t.setCronExpression("10 * * * * ? *");
    scheduler.scheduleJob(jd, t);
    scheduler.start();
} catch (SchedulerException e) {
    e.printStackTrace();
} catch (ParseException e) {
    e.printStackTrace();
}
I have verified this: after adding the missing statement (and replacing the job with a dummy), it worked for me.

Related

Schedule a method dynamically using the cron value of the @Scheduled annotation

I would like to schedule a method using the @Scheduled annotation with a cron expression. For example, I want the method to be executed every day at the time specified by the client.
So I would like to get the cron value from the DB, in order to give the client the possibility of executing the method whenever he wants.
Here is my method; it sends emails automatically at 10:00 am to the given addresses, so my goal is to make the 10:00 dynamic.
Thanks for your help.
@Scheduled(cron = "0 00 10 * * ?")
public void periodicNotification() {
    JavaMailSenderImpl jms = (JavaMailSenderImpl) sender;
    MimeMessage message = jms.createMimeMessage();
    MimeMessageHelper helper;
    try {
        helper = new MimeMessageHelper(message, MimeMessageHelper.MULTIPART_MODE_MIXED_RELATED, StandardCharsets.UTF_8.name());
        List<EmailNotification> emailNotifs = enr.findAll();
        for (EmailNotification i : emailNotifs) {
            helper.setFrom("smsender4@gmail.com");
            List<String> recipients = fileRepo.findWantedEmails(i.getDaysNum());
            //List<String> emails = recipientsRepository.getScheduledEmails();
            String[] to = recipients.stream().toArray(String[]::new);
            helper.setTo(to);
            helper.setText(i.getMessage());
            helper.setSubject(i.getSubject());
            sender.send(message);
            System.out.println("Email successfully sent to: " + Arrays.toString(to));
        }
    } catch (MessagingException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
So I'm thinking of the following solution (in combination with the answer accepted here).
Let's say you have a class that implements the Runnable interface -> this will be your job that gets executed. Let's call it MyJob.
Also assume that we have a map that holds the id of each job and its execution reference (you'll see in a sec what I'm talking about). Call it something like currentExecutingJobs.
Assume you have an endpoint that gets the name of the job and a cron expression from the client.
When that endpoint gets called:
You'll look in the map above to see if there is any entry with that job id. If it exists, you cancel the job.
After that, you'll create an instance of that job (you can do that by using reflection and having a custom annotation on your job classes in which you provide an id, for example @MyJob("myCustomJobId")).
And from the link provided, you'll schedule the job using
// Schedule a task with the given cron expression
ScheduledFuture<?> myJobScheduledFuture = executor.schedule(myJob, new CronTrigger(cronExpression));
And put the result in the above map: currentExecutingJobs.put("myCustomJobId", myJobScheduledFuture)
ScheduledFuture docs
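Putting these steps together, a minimal sketch could look like the following (class and method names such as DynamicSchedulingService and reschedule are illustrative only, not part of the accepted answer):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ScheduledFuture;
import javax.annotation.PostConstruct;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;
import org.springframework.scheduling.support.CronTrigger;
import org.springframework.stereotype.Service;

@Service
public class DynamicSchedulingService {

    // scheduler backing all dynamically registered jobs
    private final ThreadPoolTaskScheduler executor = new ThreadPoolTaskScheduler();

    // job id -> handle of the currently scheduled execution
    private final Map<String, ScheduledFuture<?>> currentExecutingJobs = new ConcurrentHashMap<>();

    @PostConstruct
    public void init() {
        executor.initialize();
    }

    public void reschedule(String jobId, Runnable job, String cronExpression) {
        // cancel the previous schedule for this job id, if there is one
        ScheduledFuture<?> existing = currentExecutingJobs.get(jobId);
        if (existing != null) {
            existing.cancel(false);
        }
        // schedule the job with the cron expression supplied by the client (or read from the DB)
        ScheduledFuture<?> future = executor.schedule(job, new CronTrigger(cronExpression));
        currentExecutingJobs.put(jobId, future);
    }
}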
In case you want to read the property from the database at startup, you can implement EnvironmentPostProcessor, read the necessary values from the DB and add them to the Environment object; more details are available at https://docs.spring.io/spring-boot/docs/current/reference/html/howto.html#howto-spring-boot-application
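A rough sketch of that approach might look like this (the property name notification.cron and the lookup helper are made up for illustration; the class also has to be registered in META-INF/spring.factories under the org.springframework.boot.env.EnvironmentPostProcessor key):

import java.util.HashMap;
import java.util.Map;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.env.EnvironmentPostProcessor;
import org.springframework.core.env.ConfigurableEnvironment;
import org.springframework.core.env.MapPropertySource;

public class DatabaseCronPostProcessor implements EnvironmentPostProcessor {

    @Override
    public void postProcessEnvironment(ConfigurableEnvironment environment, SpringApplication application) {
        Map<String, Object> props = new HashMap<>();
        // no Spring beans exist yet at this point, so the lookup has to use plain JDBC
        props.put("notification.cron", loadCronFromDatabase());
        environment.getPropertySources().addLast(new MapPropertySource("dbProperties", props));
    }

    private String loadCronFromDatabase() {
        // hypothetical plain-JDBC lookup; the value can then be referenced as
        // @Scheduled(cron = "${notification.cron}")
        return "0 0 10 * * ?";
    }
}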

Long running Spring Service is locking DB table

I have a Spring Service that goes through multiple items in a list and, for each one, makes an extra WS call to external services. The Service is called by a Job on a fixed time interval.
As a first step, the Service saves the status of the Job (STARTED) in a JOB_CONTROL table, then it iterates through the list, and at the end it saves the status FINISHED.
There are 2 issues:
the JOB_CONTROL table doesn't get updated gradually - only the "FINISHED" value is saved and never "STARTED"
if the flush method is used in order to force the commit, the table gets locked, i.e. no other select can be made on it until the Service finishes
@Service
public class PromotionSchedulerService implements Runnable {

    @Autowired
    GeofencingAreaDAO storeDao;

    @Autowired
    promotionsWSClient promotionsWSClient;

    @Autowired
    private JobControlDAO jobControlDAO;

    public void run() {
        JobControl job = jobControlDAO.findByClassName(this.getClass().getSimpleName());
        job.setState(JobControlStateTypes.RUNNING.getStateType());
        job.setLastRunDate(new Date());
        // LINE BELOW DOES NOT GET COMMITTED TO THE DB
        jobControlDAO.save(job);

        List<GeofencingArea> stores = storeDao.findAllStores();
        for (GeofencingArea store : stores) {
            /** Call WS **/
            GetActivePromotionsResponse rsp = null;
            try {
                rsp = promotionsWSClient.getpromotions();
            } catch (Exception e) {
                e.printStackTrace();
                job.setState(JobControlStateTypes.FAILED.getStateType());
                job.setLastRunStatus("There was an error calling promagic promotions");
                jobControlDAO.save(job);
                return;
            }
            List<PromotionBean> promos = rsp.getReturn();
            for (PromotionBean promo : promos) {
                BackendPromotionPOJO backendPromotionsPOJO = new BackendPromotionPOJO();
                backendPromotionsPOJO.setDescription(promo.getDescription());
            }
        }

        // ONLY THIS JOB STATE GOES TO THE DB. IT ACTUALLY SEEMS TO OVERWRITE THE "RUNNING" VALUE SET BY THE SAVE ABOVE
        job.setLastRunStatus("COMPLETED");
        job.setState(JobControlStateTypes.SUCCESS.getStateType());
        jobControlDAO.save(job);
    }
}
I would like to force the commit after changing the job state, without locking the table while doing so.
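One common way to get an intermediate state committed immediately, sketched here only as an illustration rather than a confirmed fix for this particular setup, is to move the save into a separate bean method that runs in its own transaction (Propagation.REQUIRES_NEW):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class JobControlStateService {

    @Autowired
    private JobControlDAO jobControlDAO;

    // REQUIRES_NEW suspends the surrounding transaction and commits this change
    // on its own, so the RUNNING state becomes visible right away and the row
    // lock is released before the long-running loop starts.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void saveInNewTransaction(JobControl job) {
        jobControlDAO.save(job);
    }
}

Note that the call has to go through another Spring bean (not a local method call inside PromotionSchedulerService) for the transactional proxy to apply.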

How to perform new operation on #RetryOnFailure by jcabi

I am using jcabi-aspects to retry the connection to my URL http://xxxxxx:8080/hello until the connection comes back. As you know, @RetryOnFailure from jcabi has two fields: attempts and delay.
I want the relation attempts (12) = expiryTime (1 min = 60000 millis) / delay (5 sec = 5000 millis) to hold with jcabi's @RetryOnFailure. How do I do this? The code snippet is below.
@RetryOnFailure(attempts = 12, delay = 5)
public String load(URL url) {
    return url.openConnection().getContent();
}
You can combine two annotations:
@Timeable(unit = TimeUnit.MINUTES, limit = 1)
@RetryOnFailure(attempts = Integer.MAX_VALUE, delay = 5)
public String load(URL url) {
    return url.openConnection().getContent();
}
@RetryOnFailure will retry forever, but @Timeable will stop it in a minute.
The library you picked (jcabi) does not have this feature. But luckily the very handy RetryPolicies from Spring Batch have been extracted, so you can use them on their own, without the batching:
Spring-Retry
One of the many classes you could use from there is TimeoutRetryPolicy:
RetryTemplate template = new RetryTemplate();

TimeoutRetryPolicy policy = new TimeoutRetryPolicy();
policy.setTimeout(30000L);

template.setRetryPolicy(policy);

Foo result = template.execute(new RetryCallback<Foo>() {
    public Foo doWithRetry(RetryContext context) {
        // Do stuff that might fail, e.g. webservice operation
        return result;
    }
});
The whole spring-retry project is very easy to use and full of features, like backOffPolicies, listeners, etc.

OrmLiteSqliteOpenHelper onDowngrade

I'm using ORMLite for Android and I have a database table class that extends OrmLiteSqliteOpenHelper. I've had some reports in Google Play that the application had a force close caused by:
android.database.sqlite.SQLiteException: Can't downgrade database from version 3 to 2
The users probably got the downgrade via a backup or something. The problem is that I cannot implement the onDowngrade method that exists on SQLiteOpenHelper.
Does ORMLite support downgrading? Is there any workaround for this? At least to avoid the force close.
Interesting. So the onDowngrade(...) method was added in API 11. I can't just add support for it into ORMLite. Unfortunately this means that you are going to have to write your own onDowngrade method, shaped like the onUpgrade(...) in OrmLiteSqliteOpenHelper. Something like the following:
public void onDowngrade(SQLiteDatabase database, ConnectionSource connectionSource, int oldVersion,
        int newVersion) {
    // your downgrade code goes here
}
public final void onDowngrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    ConnectionSource cs = getConnectionSource();
    /*
     * The method is called by Android database helper's get-database calls when Android detects that we need to
     * create or update the database. So we have to use the database argument and save a connection to it on the
     * AndroidConnectionSource, otherwise it will go recursive if the subclass calls getConnectionSource().
     */
    DatabaseConnection conn = cs.getSpecialConnection();
    boolean clearSpecial = false;
    if (conn == null) {
        conn = new AndroidDatabaseConnection(db, true);
        try {
            cs.saveSpecialConnection(conn);
            clearSpecial = true;
        } catch (SQLException e) {
            throw new IllegalStateException("Could not save special connection", e);
        }
    }
    try {
        onDowngrade(db, cs, oldVersion, newVersion);
    } finally {
        if (clearSpecial) {
            cs.clearSpecialConnection(conn);
        }
    }
}
For more information about the onDowngrade(...) method see below:
public void onDowngrade (SQLiteDatabase db, int oldVersion, int newVersion);
To quote from the javadocs:
Called when the database needs to be downgraded. This is strictly similar to onUpgrade(SQLiteDatabase, int, int) method, but is called whenever current version is newer than requested one. However, this method is not abstract, so it is not mandatory for a customer to implement it. If not overridden, default implementation will reject downgrade and throws SQLiteException
Also see:
Can't downgrade database from version 2 to 1

Non-Blocking Endpoint: Returning an operation ID to the caller - Would like to get your opinion on my implementation?

Boot Pros,
I recently started to program in Spring Boot and I stumbled upon a question where I would like to get your opinion.
What I'm trying to achieve:
I created a Controller that exposes a GET endpoint named nonBlockingEndpoint. This nonBlockingEndpoint executes a pretty long operation that is resource heavy and can run between 20 and 40 seconds (in the attached code it is mocked by a Thread.sleep()).
Whenever the nonBlockingEndpoint is called, the Spring application should register that call and immediately return an operation ID to the caller.
The caller can then use this ID to query the status of the operation on another endpoint, queryOpStatus. At the beginning it will be STARTED, and once the controller is done serving the request it will be set to a code such as SERVICE_OK. The caller then knows that his request was successfully completed on the server.
The solution that I found:
I have the following controller (note that it is explicitly not tagged with @Async).
It uses an APIOperationsManager to register that a new operation was started.
I use the CompletableFuture Java construct to supply the long-running code as a new async process by using CompletableFuture.supplyAsync(() -> {}).
I immediately return a response to the caller, telling them that the operation is in progress.
Once the async task has finished, I use cf.thenRun() to update the operation status via the APIOperationsManager.
Here is the code:
@GetMapping(path = "/nonBlockingEndpoint")
public @ResponseBody ResponseOperation nonBlocking() {

    // Register a new operation
    APIOperationsManager apiOpsManager = APIOperationsManager.getInstance();
    final int operationID = apiOpsManager.registerNewOperation(Constants.OpStatus.PROCESSING);

    ResponseOperation response = new ResponseOperation();
    response.setMessage("Triggered non-blocking call, use the operation id to check status");
    response.setOperationID(operationID);
    response.setOpRes(Constants.OpStatus.PROCESSING);

    CompletableFuture<Boolean> cf = CompletableFuture.supplyAsync(() -> {
        try {
            // Here we will
            Thread.sleep(10000L);
        } catch (InterruptedException e) {}
        // whatever the return value was
        return true;
    });

    cf.thenRun(() -> {
        // We are done with the super long process, so update our Operations Manager
        APIOperationsManager a = APIOperationsManager.getInstance();
        boolean asyncSuccess = false;
        try {
            asyncSuccess = cf.get();
        } catch (Exception e) {}
        if (true == asyncSuccess) {
            a.updateOperationStatus(operationID, Constants.OpStatus.OK);
            a.updateOperationMessage(operationID, "success: The long running process has finished and this is your result: SOME RESULT");
        } else {
            a.updateOperationStatus(operationID, Constants.OpStatus.INTERNAL_ERROR);
            a.updateOperationMessage(operationID, "error: The long running process has failed.");
        }
    });

    return response;
}
Here is also the APIOperationsManager.java for completeness:
public class APIOperationsManager {

    private static APIOperationsManager instance = null;
    private Vector<Operation> operations;
    private int currentOperationId;

    private static final Logger log = LoggerFactory.getLogger(Application.class);

    protected APIOperationsManager() {}

    public static APIOperationsManager getInstance() {
        if (instance == null) {
            synchronized (APIOperationsManager.class) {
                if (instance == null) {
                    instance = new APIOperationsManager();
                    instance.operations = new Vector<Operation>();
                    instance.currentOperationId = 1;
                }
            }
        }
        return instance;
    }

    public synchronized int registerNewOperation(OpStatus status) {
        cleanOperationsList();
        currentOperationId = currentOperationId + 1;
        Operation newOperation = new Operation(currentOperationId, status);
        operations.add(newOperation);
        log.info("Registered new Operation to watch: " + newOperation.toString());
        return newOperation.getId();
    }

    public synchronized Operation getOperation(int id) {
        for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                return op;
            }
        }
        Operation notFound = new Operation(-1, OpStatus.INTERNAL_ERROR);
        notFound.setCrated(null);
        return notFound;
    }

    public synchronized void updateOperationStatus(int id, OpStatus newStatus) {
        iteration: for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                op.setStatus(newStatus);
                log.info("Updated Operation status: " + op.toString());
                break iteration;
            }
        }
    }

    public synchronized void updateOperationMessage(int id, String message) {
        iteration: for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if (op.getId() == id) {
                op.setMessage(message);
                log.info("Updated Operation status: " + op.toString());
                break iteration;
            }
        }
    }

    private synchronized void cleanOperationsList() {
        Date now = new Date();
        for (Iterator<Operation> iterator = operations.iterator(); iterator.hasNext();) {
            Operation op = iterator.next();
            if ((now.getTime() - op.getCrated().getTime()) >= Constants.MIN_HOLD_DURATION_OPERATIONS) {
                log.info("Removed operation from watchlist: " + op.toString());
                iterator.remove();
            }
        }
    }
}
The questions that I have:
Is this concept a valid one that also scales? What could be improved?
Will I run into concurrency issues / race conditions?
Is there a better way to achieve the same in Spring Boot that I just didn't find yet? (Maybe with the @Async directive?)
I would be very happy to get your feedback.
Thank you so much,
Peter P
It is a valid pattern to submit a long running task with one request, returning an id that allows the client to ask for the result later.
But there are some things I would suggest reconsidering:
Do not use an Integer as the id, as it allows an attacker to guess ids and get the results for those ids. Instead use a random UUID.
If you need to restart your application, all ids and their results will be lost. You should persist them to a database.
Your solution will not work in a cluster with many instances of your application, as each instance would only know its 'own' ids and results. This could also be solved by persisting them to a database or a Redis store.
The way you are using CompletableFuture gives you no control over the number of threads used for the asynchronous operation. It is possible to do this with standard Java, but I would suggest using Spring to configure the thread pool.
Annotating the controller method with @Async is not an option; that simply does not work. Instead put all asynchronous operations into a simple service and annotate that with @Async (see the sketch after this list). This has some advantages:
You can use this service also synchronously, which makes testing a lot easier
You can configure the thread pool with Spring
The /nonBlockingEndpoint should not return just the id, but a complete link to queryOpStatus, including the id. The client can then directly use this link without any additional information.
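To illustrate the @Async suggestion above, a minimal sketch of such a service could look like this (the class name LongRunningService is made up; APIOperationsManager and Constants.OpStatus are taken from the question, with the manager assumed to be a Spring bean as suggested further below):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class LongRunningService {

    @Autowired
    private APIOperationsManager operations; // assumed to be a normal Spring bean here

    // Runs on the thread pool configured for @Async, not on the request thread
    @Async
    public void process(int operationId) {
        try {
            Thread.sleep(10000L); // stand-in for the 20-40 second operation
            operations.updateOperationStatus(operationId, Constants.OpStatus.OK);
        } catch (Exception e) {
            operations.updateOperationStatus(operationId, Constants.OpStatus.INTERNAL_ERROR);
        }
    }
}

For @Async to take effect, a configuration class needs @EnableAsync, the method has to be called from another bean so the proxy applies, and the pool can be sized via a TaskExecutor bean.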
Additionally, there are some low-level implementation issues which you may also want to change:
Do not use Vector, it synchronizes on every operation. Use a List instead. Iterating over a List is also much easier, you can use for-loops or streams.
If you need to lookup a value, do not iterate over a Vector or List, use a Map instead.
APIOperationsManager is a singleton. That makes no sense in a Spring application. Make it a normal POJO, create a bean of it, and get it autowired into the controller. Spring beans are singletons by default.
You should avoid doing complicated operations in a controller method. Instead move everything into a service (which may be annotated with @Async). This makes testing easier, as you can test the service without a web context.
Hope this helps.
Do I need to make database access transactional?
As long as you write/update only one row, there is no need to make this transactional, as it is indeed 'atomic'.
If you write/update many rows at once, you should make it transactional to guarantee that either all rows are updated or none.
However, if two operations (maybe from two clients) update the same row, the last one will always win.
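As a small illustration of the multi-row case (operationRepository is a hypothetical repository, not something from the question):

// Either all rows are updated or, if an exception is thrown, none of them are.
@Transactional
public void updateAllStatuses(List<Operation> operations, Constants.OpStatus status) {
    for (Operation op : operations) {
        op.setStatus(status);
        operationRepository.save(op);
    }
}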
