When is it safe to depend on Spring's @PreDestroy? - spring

Per Spring's documentation here, I added a shutdown hook:
SpringApplication app = new SpringApplication(App.class);
DefaultProfileUtil.addDefaultProfile(app);
appContext = app.run(args);
appContext.registerShutdownHook();
However, the @PreDestroy method does not get called if the application is killed soon after starting.
import org.springframework.stereotype.Service;
import javax.annotation.PreDestroy;
import javax.annotation.PostConstruct;

@Service
public class Processor {

    public Processor() {
        ...
    }

    @PostConstruct
    public void init() {
        System.err.println("processor started");
    }

    // not called reliably
    @PreDestroy
    public void shutdown() {
        System.err.println("starting shutdown");
        try {
            Thread.sleep(1000 * 10);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.err.println("shutdown completed properly");
    }
}
All I ever see is processor started...
processor started
^C
If I wait at least 30 seconds for Spring to finish starting up, and THEN kill the process, then the @PreDestroy-annotated function does get called.
processor started
[...]
2018-12-26 17:01:09.050 INFO 31398 --- [ restartedMain] c.App : Started App in 67.555 seconds (JVM running for 69.338)
2018-12-26 17:01:09.111 INFO 31398 --- [ restartedMain] c.App :
----------------------------------------------------------
Application 'App' is running! Access URLs:
Local: http://localhost:8081
External: http://10.10.7.29:8081
Profile(s): [dev]
----------------------------------------------------------
2018-12-26 17:01:09.111 INFO 31398 --- [ restartedMain] c.app :
----------------------------------------------------------
^Cstarting shutdown
shutdown completed properly
How do I determine when it is safe to depend on all @PreDestroy-annotated functions being called?
I know how to register a shutdown hook with the JVM, and that is what I am currently doing, but it seems to me that @PreDestroy should be doing that.
By "safe to depend on" I am assuming a normal shutdown sequence (i.e. one requested by SIGTERM or SIGINT), not power outages, forcible kills, etc.

Related

Is it possible to enforce message order on ActiveMQ topics using Spring Boot and JmsTemplate?

In playing around with Spring Boot, ActiveMQ, and JmsTemplate, I noticed that it appears that message order is not always preserved. In reading on ActiveMQ, "Message Groups" are offered as a potential solution to preserving message order when sending to a topic. Is there a way to do this with JmsTemplate?
Additional note: I'm starting to think that JmsTemplate is nice for "getting launched", but has too many issues.
Sample code and console output posted below...
@RestController
public class EmptyControllerSB {

    @Autowired
    MsgSender msgSender;

    @RequestMapping(method = RequestMethod.GET, value = { "/v1/msgqueue" })
    public String getAccount() {
        msgSender.sendJmsMessageA();
        msgSender.sendJmsMessageB();
        return "Do nothing...successfully!";
    }
}
@Component
public class MsgSender {

    @Autowired
    JmsTemplate jmsTemplate;

    void sendJmsMessageA() {
        jmsTemplate.convertAndSend(new ActiveMQTopic("VirtualTopic.TEST-TOPIC"), "message A");
    }

    void sendJmsMessageB() {
        jmsTemplate.convertAndSend(new ActiveMQTopic("VirtualTopic.TEST-TOPIC"), "message B");
    }
}
@Component
public class MsgReceiver {

    private final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC";
    private final String consumerTwo = "Consumer.myConsumer2.VirtualTopic.TEST-TOPIC";

    @JmsListener(destination = consumerOne)
    public void receiveMessage1(String strMessage) {
        System.out.println("Received on #1a -> " + strMessage);
    }

    @JmsListener(destination = consumerOne)
    public void receiveMessage2(String strMessage) {
        System.out.println("Received on #1b -> " + strMessage);
    }

    @JmsListener(destination = consumerTwo)
    public void receiveMessage3(String strMessage) {
        System.out.println("Received on #2 -> " + strMessage);
    }
}
Here's the console output (note the order of output in first sequence)...
2019-04-03 09:23:08.408 INFO 13936 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2019-04-03 09:23:08.408 INFO 13936 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 672 ms
2019-04-03 09:23:08.705 INFO 13936 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-04-03 09:23:08.845 INFO 13936 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2019-04-03 09:23:08.877 INFO 13936 --- [ main] mil.navy.msgqueue.MsgqueueApplication : Started MsgqueueApplication in 1.391 seconds (JVM running for 1.857)
2019-04-03 09:23:14.949 INFO 13936 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2019-04-03 09:23:14.949 INFO 13936 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2019-04-03 09:23:14.952 INFO 13936 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 3 ms
Received on #2 -> message A
Received on #1a -> message B
Received on #1b -> message A
Received on #2 -> message B
<HIT DO-NOTHING ENDPOINT AGAIN>
Received on #1b -> message A
Received on #2 -> message A
Received on #1a -> message B
Received on #2 -> message B
BLUF - Add "?consumer.exclusive=true" to the declaration of the destination for the JmsListener annotation.
It seems that the solution is not that complex, especially if one abandons ActiveMQ's "message groups" in favor of "exclusive consumers". The drawback to "message groups" is that the sender has to have prior knowledge of the potential partitioning of message consumers. If the producer has this knowledge, then "message groups" are a nice solution, as the solution is somewhat independent of the consumer.
But, a similar solution can be implemented from the consumer side, by having the consumer declare "exclusive consumer" on the queue. While I did not see anything in the JmsTemplate implementation that directly supports this, it seems that Spring's JmsTemplate implementation passes the queue name to ActiveMQ, and then ActiveMQ "does the right thing" and enforces the exclusive consumer behavior.
So...
Change the following...
private final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC";
to...
private final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC?consumer.exclusive=true";
Once I did this, only one of the two declared receive methods was invoked, and message order was maintained in all my test runs.
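For illustration, a hedged sketch of the receiver after the change (same class as in the question, with the exclusive-consumer option baked into the destination):

@Component
public class MsgReceiver {

    // With consumer.exclusive=true, ActiveMQ routes all messages on this queue to a
    // single consumer, so only one of the listeners below becomes active and ordering holds.
    private final String consumerOne = "Consumer.myConsumer1.VirtualTopic.TEST-TOPIC?consumer.exclusive=true";

    @JmsListener(destination = consumerOne)
    public void receiveMessage1(String strMessage) {
        System.out.println("Received on #1a -> " + strMessage);
    }

    @JmsListener(destination = consumerOne)
    public void receiveMessage2(String strMessage) {
        System.out.println("Received on #1b -> " + strMessage);
    }
}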

HikariCP restart with Spring Cloud Config

I have recently configured my application to use Spring Cloud Config with Github as a configuration repository.
Spring Boot - 2.1.1.RELEASE
Spring Cloud Dependencies - Greenwich.RC2
My application is using pretty much everything out of the box. I have just configured the database in application.yml, and the HikariCP autoconfiguration does the magic in the background.
I am refreshing my application using this scheduled job, which calls the refresh() method on the RefreshEndpoint.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.endpoint.RefreshEndpoint;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@EnableScheduling
@Component
public class ConfigRefreshJob {

    private static final Logger LOG = LoggerFactory.getLogger(ConfigRefreshJob.class);

    private static final int ONE_MINUTE = 60 * 1000;

    private final RefreshEndpoint refreshEndpoint;

    @Autowired
    public ConfigRefreshJob(final RefreshEndpoint refreshEndpoint) {
        this.refreshEndpoint = refreshEndpoint;
    }

    @Scheduled(fixedDelay = ONE_MINUTE)
    public void refreshConfigs() {
        LOG.info("Refreshing Configurations - {}", refreshEndpoint.refresh());
    }
}
Everything seems to be working fine, but I see the following logs every time I refresh the configurations. They show the HikariCP pool being shut down and restarted on each refresh.
2019-01-16 18:54:55.817 INFO 14 --- [taskScheduler-9] o.s.b.SpringApplication : Started application in 0.155 seconds (JVM running for 144.646)
2019-01-16 18:54:55.828 INFO 14 --- [taskScheduler-9] c.z.h.HikariDataSource : HikariPool-1555 - Shutdown initiated...
2019-01-16 18:54:55.828 INFO 14 --- [taskScheduler-9] c.z.h.HikariDataSource : HikariPool-1555 - Shutdown completed.
2019-01-16 18:54:55.828 INFO 14 --- [taskScheduler-9] c.d.ConfigRefreshJob : Refreshing Configurations - []
2019-01-16 18:55:03.094 INFO 14 --- [ XNIO-1 task-5] c.z.h.HikariDataSource : HikariPool-1556 - Starting...
2019-01-16 18:55:03.123 INFO 14 --- [ XNIO-1 task-5] c.z.h.HikariDataSource : HikariPool-1556 - Start completed.
If I look at the timestamps of these logs, it takes around 8 seconds for HikariCP to be configured again.
I haven't seen any issues in my application so far, since the load on it is not that high right now, but I have a couple of questions.
Does this restart of HikariCP cause issues once the load on the application increases?
If the restarting can cause issues, is there a way to keep HikariCP from being refreshed?
HikariCP is made refreshable by default because of a change that seals its configuration once the pool is started.
To disable this, set spring.cloud.refresh.refreshable to an empty set.
Here is an example of restricting the refreshable set in YAML:
spring:
  cloud:
    refresh:
      refreshable:
        - com.example.app.config.ConfigProperties
where ConfigProperties is the class annotated with @RefreshScope.
This worked for me (spring-boot-2.2.7.RELEASE, spring-cloud-Hoxton.SR4):
spring.cloud.refresh.extra-refreshable=com.zaxxer.hikari.HikariDataSource
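For comparison, a hedged one-line sketch of the empty-set form from the first answer in properties syntax (this assumes an empty value binds to an empty set in your Spring Boot version):

spring.cloud.refresh.refreshable=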

Not able to shut down the JMS listener which posts messages to Kafka in a Spring Boot application with Runtime.exit, context.close, System.exit()

I am developing a Spring Boot application which listens to IBM MQ with
@JmsListener(id = "abc", destination = "${queueName}", containerFactory = "defaultJmsListenerContainerFactory")
I have a JmsListenerEndpointRegistry which starts the listenerContainer.
On each message it will try to push the same message, after some business logic, to Kafka. The code that posts it is
kafkaTemplate.send(kafkaProp.getTopic(), uniqueId, message)
Now, in case the Kafka producer fails, I want my Boot application to terminate. So I have added a custom ErrorHandler via setErrorHandler.
So I have tried
`System.exit(1)`, `configurableApplicationContextObject.close()`, and `Runtime.getRuntime().exit(1)`.
But none of them work. Below is the log that gets generated after System.exit(0) or the others above.
2018-05-24 12:12:47.981 INFO 18904 --- [ Thread-4] s.c.a.AnnotationConfigApplicationContext : Closing org.springframework.context.annotation.AnnotationConfigApplicationContext#1d08376: startup date [Thu May 24 12:10:35 IST 2018]; root of context hierarchy
2018-05-24 12:12:48.027 INFO 18904 --- [ Thread-4] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 2147483647
2018-05-24 12:12:48.028 INFO 18904 --- [ Thread-4] o.s.c.support.DefaultLifecycleProcessor : Stopping beans in phase 0
2018-05-24 12:12:48.028 INFO 18904 --- [ Thread-4] o.s.j.e.a.AnnotationMBeanExporter : Unregistering JMX-exposed beans on shutdown
2018-05-24 12:12:48.028 INFO 18904 --- [ Thread-4] o.a.k.clients.producer.KafkaProducer : Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.
2018-05-24 12:12:48.044 INFO 18904 --- [ Thread-4] o.a.k.clients.producer.KafkaProducer : Closing the Kafka producer with timeoutMillis = 30000 ms.
But the application is still running and below are the running threads
Daemon Thread [Tomcat JDBC Pool Cleaner[14341596:1527144039908]] (Running)
Thread [DefaultMessageListenerContainer-1] (Running)
Thread [DestroyJavaVM] (Running)
Daemon Thread [JMSCCThreadPoolMaster] (Running)
Daemon Thread [RcvThread: com.ibm.mq.jmqi.remote.impl.RemoteTCPConnection#12474910[qmid=*******,fap=**,channel=****,ccsid=***,sharecnv=***,hbint=*****,peer=*******,localport=****,ssl=****]] (Running)
Thread [Thread-4] (Running)
Any help is much appreciated. Thanks in advance. I simply want the application to exit.
Below is the thread dump before I call System.exit(1)
"DefaultMessageListenerContainer-1"
java.lang.Thread.State: RUNNABLE
at sun.management.ThreadImpl.getThreadInfo1(Native Method)
at sun.management.ThreadImpl.getThreadInfo(ThreadImpl.java:174)
at com.QueueErrorHandler.handleError(QueueErrorHandler.java:42)
at org.springframework.jms.listener.AbstractMessageListenerContainer.invokeErrorHandler(AbstractMessageListenerContainer.java:931)
at org.springframework.jms.listener.AbstractMessageListenerContainer.handleListenerException(AbstractMessageListenerContainer.java:902)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:326)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:235)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1166)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1158)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1055)
at java.lang.Thread.run(Thread.java:745)
You should take a thread dump to see what Thread [DefaultMessageListenerContainer-1] (Running) is doing.
"Now in case a kafka producer fails"
What kind of failure? If the broker is down, the thread will block in the producer library for up to 60 seconds by default.
You can reduce that time by setting the max.block.ms producer property.
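For illustration, a hedged sketch of lowering that timeout when building the producer factory (the bootstrap server and serializer choices are assumptions; only max.block.ms is the point here):

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;

Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
// Fail fast instead of blocking for the default 60 seconds when the broker is unreachable.
props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 5000);
KafkaTemplate<String, String> kafkaTemplate =
        new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));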
A couple of solutions worked for me to solve the above.
Solution 1.
Get all threads in the error handler, interrupt them all, and then exit the system.
// ThreadInfo from ThreadMXBean cannot be interrupted directly, so take the live
// Thread objects from Thread.getAllStackTraces() and interrupt each one.
for (Thread thread : Thread.getAllStackTraces().keySet()) {
    if (thread != Thread.currentThread()) {
        thread.interrupt();
    }
}
System.exit(1);
Solution 2. Define an application context manager, like:
import org.springframework.boot.SpringApplication;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;

public class AppContextManager implements ApplicationContextAware {

    private static ApplicationContext _appCtx;

    @Override
    public void setApplicationContext(ApplicationContext ctx) {
        _appCtx = ctx;
    }

    public static ApplicationContext getAppContext() {
        return _appCtx;
    }

    public static void exit(Integer exitCode) {
        System.exit(SpringApplication.exit(_appCtx, () -> exitCode));
    }
}
Then use the same manager to exit in the error handler:
Executors.newSingleThreadExecutor().execute(new Runnable() {
    public void run() {
        jmsListenerEndpointRegistry.stop();
        AppContextManager.exit(-1);
    }
});
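For completeness, a hedged sketch of how such an error handler might be wired into the defaultJmsListenerContainerFactory named in the question (the connection-factory wiring and bean structure are assumptions):

@Bean
public DefaultJmsListenerContainerFactory defaultJmsListenerContainerFactory(
        ConnectionFactory connectionFactory,
        JmsListenerEndpointRegistry jmsListenerEndpointRegistry) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // On an unrecoverable listener error, stop the listeners and exit off the container thread.
    factory.setErrorHandler(t -> Executors.newSingleThreadExecutor().execute(() -> {
        jmsListenerEndpointRegistry.stop();
        AppContextManager.exit(-1);
    }));
    return factory;
}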

Allow Camel context to run forever

I am using the camel-spring jar for springCamelContext. When I start the Camel context, it runs for 5 minutes (the default time). I can make my thread sleep for some specific time, i.e.
try {
    camelContext.start();
    Thread.sleep(50 * 60 * 1000);
    camelContext.stop();
} catch (Exception e) {
    e.printStackTrace();
}
BUT what I want is my camelContext to run FOREVER, because this application is going to be deployed and will be listening for messages from a Kafka server. I know there is a class
org.apache.camel.spring.Main
But I don't know how to configure it with springCamelContext, and I'm not sure if there is any other way. Thanks.
Update: Even if I remove camelContext.stop(), the context is stopped after some time and I get the following logs:
[Thread-1] INFO org.apache.camel.spring.SpringCamelContext - Apache Camel 2.17.2 (CamelContext: camel-1) is shutting down
[Thread-1] INFO org.apache.camel.impl.DefaultShutdownStrategy - Starting to graceful shutdown 1 routes (timeout 300 seconds)
[Camel (camel-1) thread #1 - ShutdownTask] INFO org.apache.camel.component.kafka.KafkaConsumer - Stopping Kafka consumer
[Camel (camel-1) thread #1 - ShutdownTask] INFO org.apache.camel.impl.DefaultShutdownStrategy - Route: route1 shutdown complete, was consuming from: Endpoint[kafka://localhost:9092?groupId=group0&serializerClass=org.springframework.integration.kafka.serializer.avro.AvroSerializer&topic=my-topic]
[Thread-1] INFO org.apache.camel.impl.DefaultShutdownStrategy - Graceful shutdown of 1 routes completed in 0 seconds
[Thread-1] INFO org.apache.camel.spring.SpringCamelContext - Apache Camel 2.17.2 (CamelContext: camel-1) uptime 4 minutes
[Thread-1] INFO org.apache.camel.spring.SpringCamelContext - Apache Camel 2.17.2 (CamelContext: camel-1) is shutdown in 0.022 seconds
Here is a minimal example which runs forever and only copies files from one folder to another:
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class FileWriteRoute {

    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.addRouteBuilder(new RouteBuilder() {
            public void configure() {
                from("file://D:/dev/playground/camel-activemq/src/data")
                    .to("file://D:/dev/playground/camel-activemq/src/data_out");
            }
        });
        main.run();
    }
}
Or if you have your Route defined in a class, try:
public static void main(String[] args) throws Exception {
    Main main = new Main();
    CamelContext context = main.getOrCreateCamelContext();
    try {
        context.addRoutes(new YOURROUTECLASS());
        context.start();
        main.run();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
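Since the question specifically mentions org.apache.camel.spring.Main, here is a hedged sketch of using it to keep a Spring-defined Camel context running (the context file location is an assumption):

import org.apache.camel.spring.Main;

public class RunForever {

    public static void main(String[] args) throws Exception {
        Main main = new Main();
        // Load the Spring XML that defines the camelContext; the path is an assumption.
        main.setApplicationContextUri("META-INF/spring/camel-context.xml");
        // Blocks until the JVM is terminated, keeping the routes running.
        main.run(args);
    }
}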

Spring Web Application: Post-DispatcherServlet initialization

I am using Spring 3.2 DispatcherServlet. I am looking for an initialization hook that takes place after the DispatcherServlet initialization completes; either a standard Spring solution or servlet solution. Any suggestions?
As a point of reference, the final logging statements after servlet startup follow. I want my initialization method to execute right after the configured successfully log statement.
DEBUG o.s.w.s.DispatcherServlet - Published WebApplicationContext of servlet 'mySpringDispatcherServlet' as ServletContext attribute with name [org.springframework.web.servlet.FrameworkServlet.CONTEXT.mySpringDispatcherServlet]
INFO o.s.w.s.DispatcherServlet - FrameworkServlet 'mySpringDispatcherServlet': initialization completed in 5000 ms
DEBUG o.s.w.s.DispatcherServlet - Servlet 'mySpringDispatcherServlet' configured successfully
From my research, so far the following have not had the desired effect:
Extending ContextLoaderListener/implementing ServletContextListener per this answer.
Implementing WebApplicationInitializer per the javadoc.
My beans use @PostConstruct successfully; I'm looking for a Servlet or container level hook that will be executed essentially after the container initializes and post-processes the beans.
The root issue was that I couldn't override the final method HttpServletBean.init(). I found a nearby @Override-able method, initWebApplicationContext(), that ensured my beans and context were fully initialized:
@Override
protected WebApplicationContext initWebApplicationContext() {
    WebApplicationContext wac = super.initWebApplicationContext();
    // do stuff with initialized Foo beans via:
    // wac.getBean(Foo.class);
    return wac;
}
From Spring's Standard and Custom Events.
import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Component;

@Component
public class ApplicationContextListener implements ApplicationListener<ContextRefreshedEvent> {

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        System.out.println("ApplicationContext was initialized or refreshed: "
                + event.getApplicationContext().getDisplayName());
    }
}
The event above will be fired when the DispatcherServlet is initialized, such as when it prints:
INFO org.springframework.web.servlet.DispatcherServlet - FrameworkServlet 'ServletName': initialization completed in 1234 ms
You can implement ApplicationListener<ContextStartedEvent> within your application context. This event listener will then be called once for your root context and once for each servlet context.
public class StartupListener implements ApplicationListener<ContextStartedEvent> {

    public void onApplicationEvent(ContextStartedEvent event) {
        ApplicationContext context = (ApplicationContext) event.getSource();
        System.out.println("Context '" + context.getDisplayName() + "' started.");
    }
}
If you define this listener within your servlet context, it should be called just once, for the servlet context itself.
Try this: change your port number.
In my case I changed server.port=8001 to server.port=8002.
