Allow Camel context to run forever - spring

I am using the camel-spring jar for a SpringCamelContext. When I start the Camel context, it runs for 5 minutes (the default). I can make my thread sleep for a specific time, i.e.:
try {
    camelContext.start();
    Thread.sleep(50 * 60 * 1000);
    camelContext.stop();
} catch (Exception e) {
    e.printStackTrace();
}
BUT what I want is for my camelContext to run FOREVER, because this application is going to be deployed and will be listening for messages from a Kafka server. I know there is a class
org.apache.camel.spring.Main
But I don't know how to configure it with SpringCamelContext, and I am not sure whether there is another way. Thanks.
Update: Even if I remove camelContext.stop(), the context is stopped after some time and I get the following logs:
[Thread-1] INFO org.apache.camel.spring.SpringCamelContext - Apache Camel 2.17.2 (CamelContext: camel-1) is shutting down
[Thread-1] INFO org.apache.camel.impl.DefaultShutdownStrategy - Starting to graceful shutdown 1 routes (timeout 300 seconds)
[Camel (camel-1) thread #1 - ShutdownTask] INFO org.apache.camel.component.kafka.KafkaConsumer - Stopping Kafka consumer
[Camel (camel-1) thread #1 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Route: route1 shutdown complete, was consuming from: Endpoint[kafka://localhost:9092?groupId=group0&serializerClass=org.springframework.integration.kafka.serializer.avro.AvroSerializer&topic=my-topic]
[Thread-1] INFO org.apache.camel.impl.DefaultShutdownStrategy - Graceful shutdown of 1 routes completed in 0 seconds
[Thread-1] INFO org.apache.camel.spring.SpringCamelContext - Apache Camel 2.17.2 (CamelContext: camel-1) uptime 4 minutes
[Thread-1] INFO org.apache.camel.spring.SpringCamelContext - Apache Camel 2.17.2 (CamelContext: camel-1) is shutdown in 0.022 seconds

Here is a minimal example which runs forever and only copies files from one folder to another:
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class FileWriteRoute {

    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                from("file://D:/dev/playground/camel-activemq/src/data")
                    .to("file://D:/dev/playground/camel-activemq/src/data_out");
            }
        });
        main.run();
    }
}

Or if you have your route defined in a class, try:
public static void main(String[] args) throws Exception {
    Main main = new Main();
    CamelContext context = main.getOrCreateCamelContext();
    try {
        context.addRoutes(new YOURROUTECLASS());
        context.start();
        main.run(); // blocks here, keeping the context running
    } catch (Exception e) {
        e.printStackTrace();
    }
}
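
If the routes live in a Spring XML file (the question asks specifically about org.apache.camel.spring.Main), a minimal sketch along these lines should keep the SpringCamelContext alive until the JVM is terminated; the class name and classpath location are hypothetical placeholders:

import org.apache.camel.spring.Main;

public class RunForever {
    public static void main(String[] args) throws Exception {
        Main main = new Main();
        // Hypothetical path: point this at the XML file that defines
        // your <camelContext> (and the Kafka route) as Spring beans.
        main.setApplicationContextUri("META-INF/spring/camel-context.xml");
        // run() blocks until the JVM is shut down (e.g. Ctrl+C or SIGTERM),
        // so the context is never stopped by a timer or a finished main().
        main.run();
    }
}

Either way, main.run() is what keeps the process alive; with the plain Thread.sleep() approach, the context shuts down as soon as the sleep elapses and the main thread exits.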

Related

Failed Jobs not being executed by Quartz in a clustered environment

I have built a project using Spring Quartz in a clustered environment. I am trying to test whether jobs can be picked up in case the server that initiated them shuts down. While this works perfectly as expected for cron triggers, the same cannot be said for the SimpleTrigger job.
For Cron Triggers which have not been executed yet, Quartz runs the job without any hassle.
Steps to Reproduce:
Start the servers at port 8080 and port 8081.
Schedule a job using server at port 8081.
Shut down the server while the job is running.
This is what I get when the ClusterManager picks up the jobs:
2022-06-22 13:37:52.659 INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore : ClusterManager: detected 2 failed or restarted instances.
2022-06-22 13:37:52.661 INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore : ClusterManager: Scanning for instance "MacBook-Pro.local1655885157031"'s failed in-progress jobs.
2022-06-22 13:37:52.677 INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore : ClusterManager: Scanning for instance "MacBook-Pro.local1655885169333"'s failed in-progress jobs.
2022-06-22 13:37:52.720 INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore : ClusterManager: ......Deleted 1 complete triggers(s).
2022-06-22 13:37:52.722 INFO 23852 --- [_ClusterManager] o.s.s.quartz.LocalDataSourceJobStore : ClusterManager: ......Cleaned-up 1 other failed job(s).
This is what my Job looks like:
public class ApiJob implements Job {

    final static Logger log = LoggerFactory.getLogger(ApiJob.class);

    private JobExecutionContext context;

    @Override
    public void execute(JobExecutionContext context) {
        this.context = context;
        log.info("Job Execution Started");
        JobDataMap map = context.getMergedJobDataMap();
        ApiRequest request = new ApiRequest(map.getString("message"));
        try {
            Thread.sleep(2 * 60 * 1000);
            log.info("Job scheduled...{}", context.getJobDetail().getKey().getName());
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
Trigger:
Trigger trigger = TriggerBuilder.newTrigger().forJob(jobDetail)
        .withIdentity(jobDetail.getKey().getName(), "quartz-jobs-triggers")
        .withDescription("Random trigger")
        .startAt(Date.from(startTime.toInstant()))
        // Add custom logic here?
        .withSchedule(SimpleScheduleBuilder.simpleSchedule().withMisfireHandlingInstructionFireNow())
        .build();
And finally, this is how I set up the clustering in application.properties:
#Quartz Properties
spring.quartz.job-store-type=jdbc
spring.quartz.properties.org.quartz.threadPool.threadCount=5
spring.quartz.properties.org.quartz.scheduler.instanceId=AUTO
spring.quartz.properties.org.quartz.jobStore.isClustered = true
spring.quartz.properties.org.quartz.jobStore.clusterCheckinInterval = 20000
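
One thing worth checking, not shown in the question: Quartz only re-fires a failed node's in-progress jobs on another cluster member when the JobDetail was stored with requestsRecovery=true; otherwise the ClusterManager merely cleans them up, which matches the "Cleaned-up 1 other failed job(s)" log line above. A hedged sketch (the identity and job data are hypothetical):

import org.quartz.JobBuilder;
import org.quartz.JobDetail;

// Mark the job as recoverable so that, if the executing instance dies
// mid-run, another cluster node re-executes it instead of discarding it.
JobDetail jobDetail = JobBuilder.newJob(ApiJob.class)
        .withIdentity("apiJob", "quartz-jobs")   // hypothetical identity
        .usingJobData("message", "hello")        // hypothetical payload
        .requestRecovery(true)
        .storeDurably()
        .build();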

When is it safe to depend on Spring's @PreDestroy?

Per Spring's documentation here, I added a shutdown hook:
SpringApplication app = new SpringApplication(App.class);
DefaultProfileUtil.addDefaultProfile(app);
appContext = app.run(args);
appContext.registerShutdownHook();
However, the @PreDestroy method does not get called if the application is killed shortly after starting.
import org.springframework.stereotype.Service;

import javax.annotation.PreDestroy;
import javax.annotation.PostConstruct;

@Service
public class Processor {

    public Processor() {
        ...
    }

    @PostConstruct
    public void init() {
        System.err.println("processor started");
    }

    // not called reliably
    @PreDestroy
    public void shutdown() {
        System.err.println("starting shutdown");
        try {
            Thread.sleep(1000 * 10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.err.println("shutdown completed properly");
    }
}
All I ever see is processor started...
processor started
^C
If I wait at least 30 seconds for Spring to complete starting up, and THEN kill the process, then the @PreDestroy annotated function does get called.
processor started
[...]
2018-12-26 17:01:09.050 INFO 31398 --- [ restartedMain] c.App : Started App in 67.555 seconds (JVM running for 69.338)
2018-12-26 17:01:09.111 INFO 31398 --- [ restartedMain] c.App :
----------------------------------------------------------
Application 'App' is running! Access URLs:
Local: http://localhost:8081
External: http://10.10.7.29:8081
Profile(s): [dev]
----------------------------------------------------------
2018-12-26 17:01:09.111 INFO 31398 --- [ restartedMain] c.app :
----------------------------------------------------------
^Cstarting shutdown
shutdown completed properly
How do I determine when it is safe to depend on the calling of all @PreDestroy annotated functions?
I know how to register a shutdown hook with the JVM, and that is what I am currently doing; however, it seems to me that @PreDestroy should be doing that.
By "safe to depend on" I am assuming a normal shutdown sequence (i.e. requested by SIGTERM or SIGINT), not power outages or killing the process, etc.

AWS Lambda - Spring boot is not handling the request

I am trying to run a Spring Boot application as serverless in AWS Lambda, and I get the exception below when calling the Lambda function. The Spring Boot application starts successfully, but it then fails to map the request.
2018-09-25 06:11:50.717 INFO 1 --- [ main] o.s.j.e.a.AnnotationMBeanExporter : Registering beans for JMX exposure on startup
2018-09-25 06:11:50.823 INFO 1 --- [ main] my.service.Application : Started Application in 7.405 seconds (JVM running for 8.939)
START RequestId: decfc13c-c089-11e8-bacd-a37f1ba65629 Version: $LATEST
2018-09-25 06:11:50.994 ERROR 1 --- [ main] c.a.s.p.i.s.AwsProxyHttpServletRequest : Called set character encoding to UTF-8 on a request without a content type. Character encoding will not be set
2018-09-25 06:11:51.175 ERROR 1 --- [ main] o.s.boot.web.support.ErrorPageFilter : Forwarding to error page from request [/] due to exception [null]
java.lang.NullPointerException: null
at com.amazonaws.serverless.proxy.internal.servlet.AwsProxyHttpServletRequest.getRemoteAddr(AwsProxyHttpServletRequest.java:575) ~[task/:na]
at org.springframework.web.servlet.FrameworkServlet.publishRequestHandledEvent(FrameworkServlet.java:1075) ~[task/:na]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1005) ~[task/:na]
.........
2018-09-25 06:11:51.535 ERROR 1 --- [ main] s.p.i.s.AwsLambdaServletContainerHandler : Could not forward request
This is my StreamLambdaHandler Java file:
public class StreamLambdaHandler implements RequestStreamHandler {

    private static SpringBootLambdaContainerHandler<AwsProxyRequest, AwsProxyResponse> handler;

    static {
        try {
            handler = SpringBootLambdaContainerHandler.getAwsProxyHandler(Application.class);
        } catch (ContainerInitializationException e) {
            throw new RuntimeException("Could not initialize Spring Boot application", e);
        }
    }

    @Override
    public void handleRequest(InputStream inputStream, OutputStream outputStream, Context context)
            throws IOException {
        handler.proxyStream(inputStream, outputStream, context);
        outputStream.close();
    }
}
Looks like you might be hitting https://github.com/awslabs/aws-serverless-java-container/issues/172. According to the ticket, the fix will be available as part of the upcoming 1.2 release.

Not able to shut down the JMS listener which posts messages to Kafka (Spring Boot application) with Runtime.exit, context.close(), System.exit()

I am developing a Spring Boot application which will listen to IBM MQ with
@JmsListener(id = "abc", destination = "${queueName}", containerFactory = "defaultJmsListenerContainerFactory")
I have a JmsListenerEndpointRegistry which starts the listenerContainer.
On message, it will try to push the same message, after some business logic, to Kafka. The posting code is:
kafkaTemplate.send(kafkaProp.getTopic(), uniqueId, message)
Now, in case the Kafka producer fails, I want my Boot application to be terminated. So I have added a custom setErrorHandler, and in it I have tried System.exit(1), configurableApplicationContextObject.close(), and Runtime.getRuntime().exit(1). But none of them work. Below is the log that gets generated after System.exit(0) or the others:
2018-05-24 12:12:47.981 INFO 18904 --- [ Thread-4] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#1d08376: startup date [Thu May 24 12:10:35 IST 2018]; root of context hierarchy
2018-05-24 12:12:48.027 INFO 18904 --- [ Thread-4] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 2147483647
2018-05-24 12:12:48.028 INFO 18904 --- [ Thread-4] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 0
2018-05-24 12:12:48.028 INFO 18904 --- [ Thread-4] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
2018-05-24 12:12:48.028 INFO 18904 --- [ Thread-4] o.a.k.clients.producer.KafkaProducer : Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2018-05-24 12:12:48.044 INFO 18904 --- [ Thread-4] o.a.k.clients.producer.KafkaProducer : Closing the Kafka producer with timeoutMillis = 30000 ms.
But the application is still running and below are the running threads
Daemon Thread [Tomcat JDBC Pool Cleaner[14341596:1527144039908]] (Running)
Thread [DefaultMessageListenerContainer-1] (Running)
Thread [DestroyJavaVM] (Running)
Daemon Thread [JMSCCThreadPoolMaster] (Running)
Daemon Thread [RcvThread: com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection#12474910[qmid=*******,fap=**,channel=****,ccsid=***,sharecnv=***,hbint=*****,peer=*******,localport=****,ssl=****]] (Running)
Thread [Thread-4] (Running)
Any help is much appreciated. Thanks in advance. I simply want the application to exit.
Below is the thread dump before I call System.exit(1)
"DefaultMessageListenerContainer-1"
java.lang.Thread.State: RUNNABLE
at sun.management.ThreadImpl.getThreadInfo1(Native Method)
at sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:174)
at com.QueueErrorHandler.handleError(QueueErrorHandler.java:42)
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeErrorHandler(AbstractMessageListenerContainer.java:931)
at org.springframework.jms.listener.AbstractMessageListenerContainer.handleListenerException(AbstractMessageListenerContainer.java:902)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:326)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:235)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1166)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1158)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1055)
at java.lang.Thread.run(Thread.java:745)
You should take a thread dump to see what Thread [DefaultMessageListenerContainer-1] (Running) is doing.
Now in case a kafka producer fails
What kind of failure? If the broker is down, the thread will block in the producer library for up to 60 seconds by default.
You can reduce that time by setting the max.block.ms producer property.
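A minimal sketch of that setting in the producer configuration (the 5-second value is illustrative):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;

public class ProducerProps {
    static Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Fail send() after 5s instead of blocking up to the default 60s
        // when the broker is unreachable, so the error handler runs promptly.
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);
        return props;
    }
}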
A couple of solutions worked for me to solve the above.
Solution 1: Get all threads in the error handler, interrupt them all, and then exit the system.
// Interrupt every live thread except the current one, then exit;
// ThreadInfo objects from ThreadMXBean cannot be interrupted directly,
// so the actual Thread objects are used.
for (Thread thread : Thread.getAllStackTraces().keySet()) {
    if (thread != Thread.currentThread()) {
        thread.interrupt();
    }
}
System.exit(1);
Solution 2: Define an application context manager, like:
public class AppContextManager implements ApplicationContextAware {

    private static ApplicationContext _appCtx;

    @Override
    public void setApplicationContext(ApplicationContext ctx) {
        _appCtx = ctx;
    }

    public static ApplicationContext getAppContext() {
        return _appCtx;
    }

    public static void exit(Integer exitCode) {
        System.exit(SpringApplication.exit(_appCtx, () -> exitCode));
    }
}
Then use the same manager to exit in the error handler:
Executors.newSingleThreadExecutor().execute(new Runnable() {
    @Override
    public void run() {
        jmsListenerEndpointRegistry.stop();
        AppContextManager.exit(-1);
    }
});

How to stop micro service with Spring Kafka Listener, when connection to Apache Kafka Server is lost?

I am currently implementing a micro service which reads data from an Apache Kafka topic. I am using "spring-boot, version: 1.5.6.RELEASE" for the micro service and "spring-kafka, version: 1.2.2.RELEASE" for the listener in the same micro service. This is my Kafka configuration:
@Bean
public Map<String, Object> consumerConfigs() {
    return new HashMap<String, Object>() {{
        put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, servers);
        put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        put(ConsumerConfig.GROUP_ID_CONFIG, groupIdConfig);
        put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, autoOffsetResetConfig);
    }};
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    return new DefaultKafkaConsumerFactory<>(consumerConfigs());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}
I have implemented the listener via the @KafkaListener annotation:
@KafkaListener(topics = "${kafka.dataSampleTopic}")
public void receive(ConsumerRecord<String, String> payload) {
    // business logic
    latch.countDown();
}
I need to be able to shut down the micro service when the listener loses connection to the Apache Kafka server.
When I kill the kafka server I get the following message in the spring boot log:
2017-11-01 19:58:15.721 INFO 16800 --- [ 0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Marking the coordinator 192.168.0.4:9092 (id: 2145482646 rack: null) dead for group TestGroup
When I start the Kafka server, I get:
2017-11-01 20:01:37.748 INFO 16800 --- [ 0-C-1] o.a.k.c.c.internals.AbstractCoordinator : Discovered coordinator 192.168.0.4:9092 (id: 2145482646 rack: null) for group TestGroup.
So clearly the Spring Kafka listener in my micro service is able to detect when the Kafka server is up and running and when it is not. In the Confluent book Kafka: The Definitive Guide, in the chapter "But How Do We Exit?", it is said that the wakeup() method needs to be called on the consumer so that a WakeupException is thrown. So I tried to capture the two events (Kafka server down and Kafka server up) with the @EventListener annotation, as described in the Spring for Apache Kafka documentation, and then call wakeup(). But the example in the documentation shows how to detect an idle consumer, which is not my case. Could someone please help me with this? Thanks in advance.
I don't know how to get a notification of the server down condition (in my experience, the consumer goes into a tight loop within the poll()).
However, if you figure that out, you can stop the listener container(s) which will wake up the consumer and exit the tight loop...
@Autowired
private KafkaListenerEndpointRegistry registry;
...
this.registry.stop();
2017-11-01 16:29:54.290 INFO 21217 --- [ad | so47062346] o.a.k.c.c.internals.AbstractCoordinator : Marking the coordinator localhost:9092 (id: 2147483647 rack: null) dead for group so47062346
2017-11-01 16:29:54.346 WARN 21217 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : Connection to node 0 could not be established. Broker may not be available.
...
2017-11-01 16:30:00.643 WARN 21217 --- [ntainer#0-0-C-1] org.apache.kafka.clients.NetworkClient : Connection to node 0 could not be established. Broker may not be available.
2017-11-01 16:30:00.680 INFO 21217 --- [ntainer#0-0-C-1] essageListenerContainer$ListenerConsumer : Consumer stopped
You can improve the tight loop by adding reconnect.backoff.ms, but the poll() never exits so we can't emit an idle event.
spring:
  kafka:
    consumer:
      enable-auto-commit: false
      group-id: so47062346
      properties:
        reconnect.backoff.ms: 1000
I suppose you could enable idle events and use a timer to detect if you've received no data (or idle events) for some period of time, and then stop the container(s).
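A hedged sketch of that idea: record the last time either a message or an idle event was seen, and stop the containers from a scheduled check when both have been absent for too long. It assumes idle events are enabled via factory.getContainerProperties().setIdleEventInterval(...) and that @EnableScheduling is present; the class name and thresholds are illustrative.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.kafka.event.ListenerContainerIdleEvent;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ListenerWatchdog {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    private volatile long lastActivity = System.currentTimeMillis();

    // Idle events keep arriving while the broker is reachable; their
    // absence (with no records either) suggests poll() is stuck.
    @EventListener
    public void onIdle(ListenerContainerIdleEvent event) {
        lastActivity = System.currentTimeMillis();
    }

    // Call this from the @KafkaListener method on every record.
    public void recordReceived() {
        lastActivity = System.currentTimeMillis();
    }

    @Scheduled(fixedDelay = 30_000)
    public void check() {
        if (System.currentTimeMillis() - lastActivity > 120_000) {
            registry.stop(); // wakes the consumer and exits the poll loop
        }
    }
}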