Apache Camel CronScheduledRoutePolicy not stopping route?
I am trying to start and stop a route that reads from the google-pubsub component and pushes to a JDBC datasource (Oracle). I only want it to run at certain times of day, because the Oracle database is down from roughly 10 PM to 12 AM every night, during which time I don't want the route to keep processing incoming Pub/Sub messages. But when the stop time is reached, the route logs:
'{"severity":"WARN","message":"o.a.c.r.q.ScheduledRoutePolicy | Route is not in a started/suspended state and cannot be stopped. The current route state is Stopped"}'
@Override
public void configure() {
    CronScheduledRoutePolicy routePolicy = new CronScheduledRoutePolicy();
    routePolicy.setRouteStartTime("0 15 00 * * ?"); // 12:15 AM
    routePolicy.setRouteStopTime("0 00 22 * * ?");  // 10:00 PM
    System.out.println("am here!!");

    onException(Exception.class)
        .log(LoggingLevel.ERROR, " Error processing message: ${header['CamelGooglePubsub.MessageId']} : ${exception}")
        .to("log:app_error.log?level=DEBUG&showAll=true&showException=true")
        .markRollbackOnlyLast()
        .end();

    CamelContext camelContext = getContext();
    System.out.println("Route Status is " + camelContext.getRouteStatus("{{routeID}}"));

    from("google-pubsub:{{google_project_name}}:{{google_pubsub_subscription}}" +
            "?concurrentConsumers={{concurrent_consumers}}" +
            "&maxMessagesPerPoll={{max_messages_per_poll}}" +
            "&connectionFactory=#googlePubsubConnectionFactory")
        .routeId("{{routeID}}")
        .routePolicy(routePolicy)
        .noAutoStartup()
        // ... rest of the route pushes the messages to the Oracle JDBC datasource, as described above
I want the route to start at 12:15 AM (routePolicy.setRouteStartTime("0 15 00 * * ?")) and stop at 10 PM (routePolicy.setRouteStopTime("0 00 22 * * ?")).
Am I doing this right, or should I use process() on the route to stop it forcefully? How can I do it?
Thank you
After going through the comments, I think this is more of a Spring Boot question than a Camel question.
I don't think the Camel cron component can do the job here. You might consider shutting down the whole Spring Boot application from the outside, e.g. by running a cron script in your Docker image that sends a shutdown command to the application.
Here is an example you can use as a reference.
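As a rough, hypothetical sketch of what such an externally triggered shutdown hook could look like (the controller class, the /shutdown path, and the exit code below are made-up names for illustration; Spring Boot Actuator's built-in shutdown endpoint is another option):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.context.ApplicationContext;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ShutdownController {

    @Autowired
    private ApplicationContext context;

    // An external cron script (e.g. a curl POST to /shutdown from the Docker image)
    // can hit this endpoint shortly before the Oracle outage window.
    @PostMapping("/shutdown")
    public void shutdown() {
        int exitCode = SpringApplication.exit(context, () -> 0);
        System.exit(exitCode);
    }
}
You would obviously want to secure such an endpoint, and you still need something outside the JVM (a Docker restart policy, another cron entry, etc.) to start the application again after midnight.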
Related
Spring Boot here. I have the following scheduled task:
@Component
public class AdminWatchdog {

    @Autowired
    private EmailService emailService;

    // Ctors, getters & setters here

    @Scheduled(cron = "* * */12 * * *")
    public void runReports() {
        // Doesn't matter what it does, really
    }
}
When I run this, it appears to be firing either every minute or every second (can't tell based on the logs) for the entire duration of the 12th hour of every day!
I only want this task to run once a day, at noon (12 PM). Is the Spring cron configured incorrectly, or is something else going on in my app?
Your cron expression is incorrect. To run your job at noon every day, use this:
"0 0 12 * * ?"
The expression is self-explanatory once you know what each field represents:
0 0 12 * * ?
<second> <minute> <hour> <day-of-month> <month> <day-of-week>
For reference, you can use a tool like http://www.cronmaker.com/ to design your cron expression.
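Applied to the scheduled task from the question, a minimal sketch (reusing the class and method names from the question; only the cron expression changes) would be:
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class AdminWatchdog {

    // Fires once per day at 12:00:00 noon.
    @Scheduled(cron = "0 0 12 * * ?")
    public void runReports() {
        // Doesn't matter what it does, really
    }
}
This assumes scheduling is already enabled (e.g. via @EnableScheduling), which appears to be the case since the original task was firing.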
I have a Spring Boot based cron job running:
@Scheduled(cron = "30 * * * * *")
// @Scheduled(initialDelay = -1, fixedDelay = 60000)
public void cronCheck()
{
instance.refreshStatus();
if (instance.status.isVerified() && !instance.status.isExpired())
{
instance.updateCheckTime();
}
}
In most cases it runs perfectly. But when I set the system time backward, for example by one month, it runs a single time and then never runs again. However, if I set the system time forward again, it runs as scheduled.
Does anyone have an idea why this is happening, and maybe a solution?
Highly appreciate it!
Here is my use case.
A legacy system updates a database queue table QUEUE.
I want a scheduled recurring job that
- checks the contents of QUEUE
- if there are rows in the table, locks a row and does some work
- deletes the row in QUEUE
If the previous job is still running, then a new thread will be created to do the work. I want to configure the maximum number of concurrent threads.
I am using Spring 3 and my current solution is to do the following (using a fixedRate of 1 millisecond to get the threads to run basically continuously)
@Scheduled(fixedRate = 1)
@Async
public void doSchedule() throws InterruptedException {
    log.debug("Start schedule");
    publishWorker.start();
    log.debug("End schedule");
}
<task:executor id="workerExecutor" pool-size="4" />
This created 4 threads straight off and the threads correctly shared the workload from the queue. However I seem to be getting a memory leak when the threads take a long time to complete.
java.util.concurrent.ThreadPoolExecutor # 0xe097b8f0 | 80 | 373,410,496 | 89.74%
|- java.util.concurrent.LinkedBlockingQueue # 0xe097b940 | 48 | 373,410,136 | 89.74%
| |- java.util.concurrent.LinkedBlockingQueue$Node # 0xe25c9d68
So
1: Should I be using @Async and @Scheduled together?
2: If not, how else can I use Spring to achieve my requirements?
3: How can I create the new threads only when the other threads are busy?
Thanks all!
EDIT: I think the queue of jobs was getting infinitely long... Now using
<task:executor id="workerExecutor"
pool-size="1-4"
queue-capacity="10" rejection-policy="DISCARD" />
Will report back with results
You can try the following:
Run a scheduler with a one-second delay, which will lock and fetch all QUEUE records that weren't locked so far.
For each record, call an @Async method, which will process that record and delete it.
The executor's rejection policy should be ABORT, so that the scheduler can unlock the QUEUE rows that weren't handed out for processing. That way the scheduler can try processing those rows again in the next run.
Of course, you'll have to handle the scenario where the scheduler has locked a QUEUE row but the handler didn't finish processing it for whatever reason.
Pseudo code:
public class QueueScheduler {

    @Autowired
    private QueueHandler queueHandler;

    @Scheduled(fixedDelay = 1000)
    public void doSchedule() throws InterruptedException {
        log.debug("Start schedule");
        // lockAndFetchAllUnlockedQueues() locks the unprocessed QUEUE rows and returns their ids
        List<Long> queueIds = lockAndFetchAllUnlockedQueues();
        for (long id : queueIds) {
            queueHandler.process(id);
        }
        log.debug("End schedule");
    }
}

public class QueueHandler {

    @Async
    public void process(long queueId) {
        // process the QUEUE row & delete it from the DB
    }
}

<task:executor id="workerExecutor" pool-size="1-4" queue-capacity="10"
    rejection-policy="ABORT"/>
// using a fixedRate of 1 millisecond to get the threads to run basically continuously
@Scheduled(fixedRate = 1)
When you use @Scheduled, a new thread will be created and will invoke the doSchedule method at the specified fixedRate of 1 millisecond. When you run your app you can already see 4 threads competing for the QUEUE table, and possibly a deadlock.
Investigate if there is a deadlock by taking thread dump.
http://helpx.adobe.com/cq/kb/TakeThreadDump.html
The @Async annotation will not be of any use here.
A better way to implement this is to make your class a task by implementing Runnable and submitting it to a TaskExecutor configured with the required number of threads.
Using Spring threading and TaskExecutor, how do I know when a thread is finished?
Also check your design: it doesn't seem to handle synchronization properly. If a previous job is running and holding a lock on a row, the next job you create will still see that row and will wait to acquire the lock on that particular row.
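A minimal sketch of that idea, assuming Spring's ThreadPoolTaskExecutor (the QueueWorker class and the pool/queue sizes here are illustrative, not taken from the question):
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

public class QueueWorkerDemo {

    // Hypothetical worker: fetches one locked QUEUE row, processes it, deletes it.
    static class QueueWorker implements Runnable {
        @Override
        public void run() {
            // do the work for a single QUEUE row here
        }
    }

    public static void main(String[] args) {
        // Bounded executor: at most 4 concurrent workers, internal queue capped at 10
        // so it cannot grow without limit.
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(1);
        executor.setMaxPoolSize(4);
        executor.setQueueCapacity(10);
        executor.initialize();

        executor.execute(new QueueWorker());

        executor.shutdown();
    }
}
Bounding the queue capacity is what prevents the unbounded LinkedBlockingQueue growth seen in the heap dump above.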
We have set up a component to run batch jobs remotely using Spring Batch. We send a JMS message with the job XML path, name, parameters, etc., and wait on the calling batch client for a response from the server.
The server reads the queue and calls the appropriate method to run the job and return the result, which our messaging framework does by:
this.jmsTemplate.send(queueName, messageCreator);
this.LOGGER.debug("Message sent to '" + queueName + "'");
try {
final Destination replyTo = messageCreator.getReplyTo();
final String correlationId = messageCreator.getMessageId();
this.LOGGER.debug("Waiting for the response '" + correlationId + "' back on '" + replyTo + "' ...");
final BytesMessage message = (BytesMessage) this.jmsTemplate.receiveSelected(replyTo, "JMSCorrelationID='"
+ correlationId + "'");
this.LOGGER.debug("Response received");
Ideally, we want to be able to call our runJobSync method twice and have two jobs operate simultaneously. We have a unit test that does something similar, without jobs. I realize this code isn't great, but here it is:
final List result = Collections.synchronizedList(new ArrayList());
Thread thread1 = new Thread(new Runnable(){
@Override
public void run() {
client.pingWithDelaySync(1000);
result.add(Thread.currentThread().getName());
}
}, "thread1");
Thread thread2 = new Thread(new Runnable(){
@Override
public void run() {
client.pingWithDelaySync(500);
result.add(Thread.currentThread().getName());
}
}, "thread2");
thread1.start();
Thread.sleep(250);
thread2.start();
thread1.join();
thread2.join();
Assert.assertEquals("both thread finished", 2, result.size());
Assert.assertEquals("thread2 finished first", "thread2", result.get(0));
Assert.assertEquals("thread1 finished second", "thread1", result.get(1));
When we run that test, thread 2 completes first since it only has a 500 millisecond wait, while thread 1 does a 1 second wait:
Thread.sleep(delayInMs);
return result;
That works great.
When we run two remote jobs in the wild, one which takes about 50 seconds to complete and one which is designed to fail immediately and return, this does not happen.
We start the 50-second job, then immediately start the instant-fail job. The client prints that it sent a message requesting that the job run, and the server prints that it received the 50-second request, but the server waits until that 50-second job has completed before handling the second message at all, even though we use the ThreadPoolExecutor.
We are running transactional with auto-acknowledge.
Doing some remote debugging, the Consumer from AbstractPollingMessageListenerContainer shows no unhandled messages (so consumer.receive() just returns null over and over). The web GUI for the ActiveMQ broker shows 2 enqueued, 1 dequeued, 1 dispatched, and 1 in the dispatched queue. This suggests to me that something is preventing ActiveMQ from letting the consumer "have" the second message (prefetch is 1000, by the way).
This shows as the only consumer for the particular queue.
A few other developers and I have poked around for the last few days and are pretty much getting nowhere. Any suggestions on either what we have misconfigured (if this is expected behavior) or what might be broken here?
Does the method that is being remotely called matter at all? Currently the job handler method uses an executor to run the job in a different thread and does a future.get() (the extra thread is for reasons related to logging).
Any help is greatly appreciated
Not sure I follow completely, but off the top of my head, you should try the following...
set the concurrentConsumers/maxConcurrentConsumers greater than the default (1) on the MessageListenerContainer
set the prefetch to 0 to better promote balancing messages between consumers, etc.
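For example, a minimal sketch of a listener container configured that way (the broker URL, queue name, consumer counts, and listener body are placeholders, not taken from the question; the container would normally be declared as a Spring bean so its lifecycle is managed for you):
import javax.jms.MessageListener;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ConcurrentListenerConfig {

    public static DefaultMessageListenerContainer jobRequestListenerContainer() {
        ActiveMQConnectionFactory connectionFactory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        // prefetch 0: the broker hands messages out one at a time instead of
        // pre-dispatching a batch to the first (busy) consumer
        connectionFactory.getPrefetchPolicy().setQueuePrefetch(0);

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("job.request.queue");
        container.setConcurrentConsumers(2);      // more than the default of 1
        container.setMaxConcurrentConsumers(5);   // scale up under load
        container.setMessageListener((MessageListener) message -> {
            // hand the job request off to the existing job handler here
        });
        return container;
    }
}
With queuePrefetch set to 0, the broker will not pre-dispatch the second message to the consumer that is still busy with the 50-second job, so another consumer can pick it up.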
I am using Camel to read messages from an ActiveMQ queue, process them, and post them to another queue. The route looks as follows:
from("jms:incoming.queue")
.process(new MyProcessor())
.to("jms:outgoing.queue");
I need to specify a timeout such that if there are no messages in "incoming.queue" for more than 3 minutes, the route is stopped. I can use onCompletion(), but it gets called after each message. I can specify a timeout for sending messages to "outgoing.queue". Is there a way to specify a timeout such that, if there are no messages in "incoming.queue" for more than 3 minutes, I can stop the route?
Thanks in advance for your help.
two options I can think of...
use a CronScheduledRoutePolicy to start/stop your route automatically at specified times...
CronScheduledRoutePolicy myPolicy = new CronScheduledRoutePolicy();
myPolicy.setRouteStartTime("0 20 * * * ?");
myPolicy.setRouteStopTime("0 0 * * * ?");
from("jms:incoming.queue")
.routePolicy(myPolicy).noAutoStartup()
.process(new MyProcessor())
.to("jms:outgoing.queue");
use a camel-quartz route and a polling consumer to drain the queue on a schedule
MyCoolBean cool = new MyCoolBean();
cool.setProducer(template);
cool.setConsumer(consumer);
from("quartz://myGroup/myTimerName?cron=0+20+*+*+*+?")
.bean(cool);
//MyCoolBean snippet
while (true) {
// receive the message from the queue, wait at most 60s
Object msg = consumer.receiveBody("jms:incoming.queue", 60000);
if (msg == null) {
break;
}
producer.sendBody("jms:outgoing.queue", msg);
}
Based on your comment above, it appears you are just looking to start and stop the route on a schedule. You can use a Quartz job to call the start and stop methods on your JMS route. You could even make the Quartz logic a route as well, using the quartz endpoint, if you like.
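If you prefer to keep it all inside Camel, one sketch of that approach (assuming camel-quartz is on the classpath and using Camel's controlbus component; the route id and cron expressions are placeholders, and MyProcessor is the processor from the question):
import org.apache.camel.builder.RouteBuilder;

public class ScheduledControlRoutes extends RouteBuilder {

    @Override
    public void configure() {
        // the JMS route to control, given an explicit id
        from("jms:incoming.queue")
            .routeId("jmsRoute")
            .process(new MyProcessor())
            .to("jms:outgoing.queue");

        // quartz-triggered routes that stop/start it (same example times as the policy above)
        from("quartz://control/stopJmsRoute?cron=0+0+*+*+*+?")
            .to("controlbus:route?routeId=jmsRoute&action=stop");

        from("quartz://control/startJmsRoute?cron=0+20+*+*+*+?")
            .to("controlbus:route?routeId=jmsRoute&action=start");
    }
}
Because the controlbus calls run on separate quartz-triggered routes, the JMS route is stopped and started from the outside rather than from within itself.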