How to gracefully shut down a Spring Boot app using configuration and programmatically? - spring

I want to gracefully shut down my Spring Boot application. I wanted to know what the difference is between configuring it via the application.yaml file and doing it programmatically.
In my application.yaml file, I have added this, following https://www.baeldung.com/spring-boot-web-server-shutdown:
server:
  shutdown: graceful
spring:
  lifecycle:
    timeout-per-shutdown-phase: 30s
and in my code I am also configuring a ThreadPoolTaskExecutor because I am using the @EnableAsync annotation:
#Bean("threadPoolTaskExecutor")
public TaskExecutor getAsyncExecutor() {
ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
executor.setCorePoolSize(20);
executor.setMaxPoolSize(200);
executor.setWaitForTasksToCompleteOnShutdown(true);
executor.setAwaitTerminationSeconds(30);
executor.setThreadNamePrefix("async-");
executor.setQueueCapacity(50);
executor.initialize();
return executor;
}
where I added executor.setAwaitTerminationSeconds(30).
So my question is: do we need both ways to configure it or not, and what is the difference? I also found that Spring provides other configuration properties, from here:
https://www.appsloveworld.com/springboot/100/57/spring-boot-gracefully-shutdown-by-controlling-termination-order-involving-mongo
spring.task.execution.shutdown.await-termination=true
spring.task.execution.shutdown.await-termination-period=60
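For reference, in application.yaml form those two properties would presumably look like the sketch below. I'm assuming they bind to Spring Boot's TaskExecutionProperties, i.e. the auto-configured task executor rather than a hand-built @Bean like mine, and that an explicit unit on the period avoids Duration-binding ambiguity:
spring:
  task:
    execution:
      shutdown:
        await-termination: true
        await-termination-period: 60s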
This is confusing to me. Can someone please explain the different ways?
Thanks in advance.

Related

StreamsException: Unable to initialize state, this can happen if multiple instances of Kafka Streams are running in the same state directory

This is regarding upgrading an existing code base in production which uses windowing, from kafka-clients, kafka-streams, spring-kafka 2.4.0 to 2.6.x, and also upgrading spring-boot-starter-parent from 2.2.2.RELEASE to 2.3.x, as 2.2 is incompatible with kafka-streams 2.6.
The existing code had these beans, shown below, with the old versions (2.4.0, Spring 2.2 release):
#Bean("DataCompressionCustomTopology")
public Topology customTopology(#Qualifier("CustomFactoryBean") StreamsBuilder streamsBuilder) {
//Your topology code
return streamsBuilder.build();
}
#Bean("GenericKafkaStreams")
public KafkaStreams kStream() {
//Your kafka streams code
return kafkaStreams;
}
Now, after upgrading kafka-streams and kafka-clients to 2.6.2 and spring-kafka to 2.6.x, the following exception was observed:
2021-05-13 12:33:51.954 [Persistence-Realtime-Transformation] [main] WARN o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'CustomFactoryBean'; nested exception is org.springframework.kafka.KafkaException: Could not start stream: ; nested exception is org.apache.kafka.streams.errors.StreamsException: Unable to initialize state, this can happen if multiple instances of Kafka Streams are running in the same state directory
A similar error can happen when you are running multiple instances of the same application (name/id) on the same machine.
Please visit State.dir to get the idea.
You can add that to the Kafka configuration and make it unique per instance.
In case you are using Spring Cloud Stream (you can't have the same port on the same machine):
spring.cloud.stream.kafka.streams.binder.configuration.state.dir: ${spring.application.name}${server.port}
UPDATE:
In the case of Spring Kafka's Streams support:
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
KafkaStreamsConfiguration kStreamsConfig() {
    Map<String, Object> props = new HashMap<>();
    props.put(APPLICATION_ID_CONFIG, springApplicationName);
    props.put(BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    props.put(StreamsConfig.STATE_DIR_CONFIG, String.format("%s%s", springApplicationName, serverPort));
    return new KafkaStreamsConfiguration(props);
}
or:
spring.kafka:
  bootstrap-servers: ....
  streams:
    properties:
      application.server: localhost:${server.port}
      state.dir: ${spring.application.name}${server.port}
The problem here is that newer versions of spring-kafka automatically initialize one more Kafka Streams instance based on the topology bean, while another instance is initialized from the GenericKafkaStreams bean in the existing code base, which results in multiple threads trying to lock the state directory and thus the error.
Even disabling KafkaAutoConfiguration at the Spring Boot level does not disable this behavior. This was such a pain to identify and cost a lot of time.
The fix is to get rid of the topology bean and have our own custom Kafka Streams bean, as in the code below:
protected Topology customTopology() {
    //topology code
    return streamsBuilder.build();
}

/**
 * This starts kafka stream application and sets the state listener and state
 * store listener.
 *
 * @return KafkaStreams
 */
@Bean("GenericKafkaStreams")
public KafkaStreams kStream() {
    KafkaStreams kafkaStreams = new KafkaStreams(customTopology(), kstreamsconfigs);
    return kafkaStreams;
}
If you have a sophisticated Kafka Streams topology in your Spring Cloud Stream Kafka Streams Binder 3.0 style application, you might need to specify different application ids for different functions, like the following:
spring.cloud.stream.function.definition: myFirstStream;mySecondStream
...
spring.cloud.stream.kafka.streams:
  binder:
    functions:
      myFirstStream:
        applicationId: app-id-1
      mySecondStream:
        applicationId: app-id-2
I've handled the problem on these versions:
org.springframework.boot version 2.5.3
org.springframework.kafka:spring-kafka:2.7.5
org.apache.kafka:kafka-clients:2.8.0
org.apache.kafka:kafka-streams:2.8.0
Check this: State directory
By default it is created in the temp folder with the Kafka Streams app id, like:
/var/folders/xw/xgslnvzj1zj6wp86wpd8hqjr0000gn/T/kafka-streams/${spring.kafka.streams.application-id}/.lock
If two or more Kafka Streams apps use the same spring.kafka.streams.application-id, then you get this exception.
So just change your Kafka Streams apps' ids.
Or set the state directory manually via StreamsConfig.STATE_DIR_CONFIG in the streams config.
The above answers on setting the state dir work perfectly for me. Thanks.
Adding one observation that might be helpful for someone working with Spring Boot. When working on the same machine and trying to bring up multiple Kafka Streams application instances, if you have enabled the property spring.devtools.restart.enabled (which is mostly the case in a dev profile), you might want to disable it, because when the same application instance restarts automatically it might not get the store lock. This is what I was facing, and I was able to resolve it by disabling the restart behavior.
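A minimal sketch of that setting, e.g. in the dev profile's property file:
spring.devtools.restart.enabled=false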
In my case, what works perfectly is specifying a separate @TestConfiguration class in which I keep a counter for changing the application name for each Spring Boot test context.
@TestConfiguration
public class TestKafkaStreamsConfig {

    private static final AtomicInteger COUNTER = new AtomicInteger();

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    KafkaStreamsConfiguration kStreamsConfig() {
        final var props = new HashMap<String, Object>();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-application-id-" + COUNTER.getAndIncrement());
        // rest of configuration
        return new KafkaStreamsConfiguration(props);
    }
}
Of course I had to enable Spring bean overriding to replace the primary configuration.
Edit: I'm using Spring Boot v2.5.10, so in my case, to make use of @TestConfiguration, I have to pass it to the @SpringBootTest(classes = ...) annotation.
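For reference, bean definition overriding is typically switched on with the property below (a sketch; in a test it can also be supplied via the properties attribute of @SpringBootTest):
spring.main.allow-bean-definition-overriding=true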
I was facing the same problem: a single topology in Spring Boot, and I was trying to access the state store for interactive queries. In order to do so I needed a KafkaStreams object, as shown below.
GlobalKTable<String, String> configTable = builder.globalTable("config",
        Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("config-store")
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.String()));
KafkaStreams streams = new KafkaStreams(builder.build(), kconfig.asProperties());
streams.start();
ReadOnlyKeyValueStore<String, String> configView = streams.store(StoreQueryParameters.fromNameAndType("config-store", QueryableStoreTypes.keyValueStore()));
The problem is that the Spring Kafka factory bean starts a topology, and calling streams.start() then causes the lock on the state store because a second start is called.
This can be fixed by setting the auto-startup property to false:
spring.kafka.streams.auto-startup=false
That's all you need.
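Since disabling auto-startup leaves the application in charge of the KafkaStreams lifecycle, it is usually worth registering a close hook as well. A minimal sketch, assuming a KafkaStreamsConfiguration bean like the ones shown earlier in this thread (the class and bean names here are illustrative, not from the original code):
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.KafkaStreamsConfiguration;

@Configuration
public class InteractiveQueryStreamsConfig {

    // With spring.kafka.streams.auto-startup=false the Spring Kafka factory bean
    // does not start its own KafkaStreams instance, so this bean creates, starts
    // and (via destroyMethod) closes the single instance used for interactive queries.
    @Bean(destroyMethod = "close")
    public KafkaStreams interactiveQueryStreams(StreamsBuilder builder, KafkaStreamsConfiguration kconfig) {
        KafkaStreams streams = new KafkaStreams(builder.build(), kconfig.asProperties());
        streams.start();
        return streams;
    }
}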

When running a springboot app - using spring 5 - how are apache nio properties configured?

We are running a Spring Boot application using the new reactive features. For downstream calls we are using the new WebClient provided by Spring. When we configure the max threads, the configuration is honored. We would like to experiment with additional poller threads or with changing some of the timeouts. However, the NIO-specific Apache configuration is not honored.
Any help is greatly appreciated!
in application.properties
server.tomcat.max-threads=3 <- this is working
server.tomcat.accept-count=1000 <- this is working
server.tomcat.poller-thread-count=5 <- this is not working/ignored
server.tomcat.poller-thread-priority=5 <- this is not working/ignored
server.tomcat.selector-timeout=2000 <- this is not working/ignored
The Connector that is used by Tomcat, e.g. the NIO connector, can be configured by providing your own customizer:
@Bean
public EmbeddedServletContainerFactory servletContainerFactory() {
    TomcatEmbeddedServletContainerFactory factory = new TomcatEmbeddedServletContainerFactory();
    factory.addConnectorCustomizers(connector ->
            ((AbstractProtocol) connector.getProtocolHandler()).setMaxConnections(200));
    // configure some more properties
    return factory;
}
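Note that EmbeddedServletContainerFactory is the Spring Boot 1.x API; on Spring Boot 2.x (Spring 5, as in the question) the equivalent hook is TomcatServletWebServerFactory. A hedged sketch follows, where the attribute names passed to connector.setProperty(...) are assumptions based on the Tomcat NIO connector documentation and depend on your Tomcat version:
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatNioConfig {

    @Bean
    public TomcatServletWebServerFactory servletWebServerFactory() {
        TomcatServletWebServerFactory factory = new TomcatServletWebServerFactory();
        factory.addConnectorCustomizers(connector -> {
            // setProperty delegates to the underlying protocol handler/endpoint;
            // the attribute names mirror the server.tomcat.* settings from the question.
            connector.setProperty("pollerThreadPriority", "5");
            connector.setProperty("selectorTimeout", "2000");
        });
        return factory;
    }
}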
You can also read:
https://docs.spring.io/spring-boot/docs/current/reference/htmlsingle/#howto-configure-tomcat

Register Hazelcast Cluster instances on Eureka

I have a Spring Boot service in which I've started a Hazelcast instance.
@Bean
public Config config() {
    return new Config();
}

@Bean
public HazelcastInstance hazelcastInstance(Config config) {
    return Hazelcast.newHazelcastInstance(config);
}
Now I want to register my Hazelcast instances in Eureka so that my Hazelcast clients can retrieve the cluster instances dynamically.
The Hazelcast plugin page points me to the Eureka plugin, but that one is from 2015 and contains a lot of deprecated code, recommending that I use EurekaModule and DI instead.
Does someone have an example?
Have a look at https://github.com/hazelcast/hazelcast-code-samples/tree/master/hazelcast-integration/springboot-eureka-partition-groups
Should do what you need

Spring boot Artemis embedded broker behaviour

Morning all,
I've been struggling lately with the spring-boot-artemis-starter.
My understanding of its Spring Boot support was the following:
set spring.artemis.mode=embedded and, like Tomcat, Spring Boot will instantiate a broker reachable through TCP (server mode). The following command should be successful: nc -zv localhost 61616
set spring.artemis.mode=native and Spring Boot will only configure the JmsTemplate according to the spring.artemis.* properties (client mode).
The client mode works just fine with a standalone Artemis server on my machine.
Unfortunately, I could never manage to reach the TCP port in server mode.
I would be grateful if somebody could confirm my understanding of the embedded mode.
Thank you for your help.
After some digging I noted that the implementation provided out of the box by spring-boot-starter-artemis uses the org.apache.activemq.artemis.core.remoting.impl.invm.InVMAcceptorFactory acceptor. I'm wondering if that's not the root cause (again, I'm by no means an expert).
But it appears that there is a way to customize the Artemis configuration.
Therefore I tried the following configuration, without any luck:
@SpringBootApplication
public class MyBroker {

    public static void main(String[] args) throws Exception {
        SpringApplication.run(MyBroker.class, args);
    }

    @Autowired
    private ArtemisProperties artemisProperties;

    @Bean
    public ArtemisConfigurationCustomizer artemisConfigurationCustomizer() {
        return configuration -> {
            try {
                configuration.addAcceptorConfiguration("netty", "tcp://localhost:" + artemisProperties.getPort());
            } catch (Exception e) {
                throw new RuntimeException("Failed to add netty transport acceptor to artemis instance");
            }
        };
    }
}
You just have to add a Connector and an Acceptor to your Artemis Configuration. With the Spring Boot Artemis starter, Spring creates a Configuration bean which is used for the EmbeddedJMS configuration. You can see this in the ArtemisEmbeddedConfigurationFactory class, where an InVMAcceptorFactory is set for the configuration. You can edit this bean and change Artemis behaviour through a custom ArtemisConfigurationCustomizer bean, which will be picked up by Spring auto-configuration and applied to the Configuration.
An example config class for your Spring Boot application:
import org.apache.activemq.artemis.api.core.TransportConfiguration;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptorFactory;
import org.apache.activemq.artemis.core.remoting.impl.netty.NettyConnectorFactory;
import org.springframework.boot.autoconfigure.jms.artemis.ArtemisConfigurationCustomizer;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ArtemisConfig implements ArtemisConfigurationCustomizer {

    @Override
    public void customize(org.apache.activemq.artemis.core.config.Configuration configuration) {
        configuration.addConnectorConfiguration("nettyConnector", new TransportConfiguration(NettyConnectorFactory.class.getName()));
        configuration.addAcceptorConfiguration(new TransportConfiguration(NettyAcceptorFactory.class.getName()));
    }
}
My coworker and I had exactly the same problem, as the documentation at this link (chapter Artemis Support) says nothing about adding an individual ArtemisConfigurationCustomizer - which is sad, because we realized that without this customizer our Spring Boot app would start and act as if everything was okay, but actually it wouldn't do anything.
We also realized that without the customizer the application.properties file is not being loaded, so no matter what host or port you mentioned there, it would not count.
After adding the customizer as shown in the two examples, it worked without a problem.
Here are some results we figured out:
It only loaded the application.properties after configuring an ArtemisConfigurationCustomizer.
You don't need the broker.xml anymore with an embedded Spring Boot Artemis client.
Many examples showing the use of Artemis use an "in-vm" protocol, while we just wanted to use the Netty TCP protocol, so we needed to add it to the configuration.
For me the most important parameter was pub-sub-domain, as I was using topics and not queues. If you are using topics, this parameter needs to be set to true or the JmsListener won't read the messages.
See this page: stackoverflow jmslistener-usage-for-publish-subscribe-topic
When using a @JmsListener it uses a DefaultMessageListenerContainer,
which extends JmsDestinationAccessor, which by default has the
pubSubDomain set to false. When this property is false it is
operating on a queue. If you want to use topics you have to set this
property's value to true.
In application.properties:
spring.jms.pub-sub-domain=true
If anyone is interested in the full example I have uploaded it to my github:
https://github.com/CorDharel/SpringBootArtemisServerExample
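For illustration, a minimal hedged sketch of a topic listener that relies on that property (the destination and class names are made up, not taken from the uploaded example):
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;

@Component
public class OrderTopicListener {

    // With spring.jms.pub-sub-domain=true the auto-configured listener container
    // factory operates in pub/sub mode, so this listener subscribes to a topic.
    @JmsListener(destination = "orders.topic")
    public void onMessage(String payload) {
        System.out.println("Received: " + payload);
    }
}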
The embedded mode starts the broker as part of your application. There is no network protocol available with such a setup; only InVM calls are allowed. The auto-configuration exposes the necessary pieces you can tune, though I am not sure you can actually have a TCP/IP channel with the embedded mode.

Spring Boot same broker messages repeated to console

I'm working on a Spring Boot project at the moment, and this text keeps being printed to the console every second for thirty seconds before stopping.
15:18:02.416 o.a.activemq.broker.TransportConnector : Connector vm://localhost started
15:18:03.480 o.a.activemq.broker.TransportConnector : Connector vm://localhost stopped
15:18:03.480 o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.10.1 (localhost, ID:Jordan-801993-L.local-55074-1432703875573-0:7) is shutting down
15:18:03.481 o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.10.1 (localhost, ID:Jordan-801993-L.local-55074-1432703875573-0:7) uptime 1.069 seconds
15:18:03.481 o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.10.1 (localhost, ID:Jordan-801993-L.local-55074-1432703875573-0:7) is shutdown
15:18:03.542 o.apache.activemq.broker.BrokerService : Using Persistence Adapter: MemoryPersistenceAdapter
15:18:03.543 o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.10.1 (localhost, ID:Jordan-801993-L.local-55074-1432703875573-0:8) is starting
15:18:03.543 o.apache.activemq.broker.BrokerService : Apache ActiveMQ 5.10.1 (localhost, ID:Jordan-801993-L.local-55074-1432703875573-0:8) started
15:18:03.543 o.apache.activemq.broker.BrokerService : For help or more information please see: http://activemq.apache.org
15:18:03.544 o.a.a.broker.jmx.ManagementContext : JMX consoles can connect to service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi
15:18:03.544 o.a.activemq.broker.TransportConnector : Connector vm://localhost started
The project still works fine, it's just annoying. Anyone know why this would be happening?
I cannot explain in-depth why this happens, but it has something to do with the way the ConnectionFactory is auto-configured.
One way to get rid of this constant restarting of the embedded broker is to enable pooling in your application.properties:
spring.activemq.pooled=true
In order to use this you also have to add the following dependency to your pom.xml:
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-pool</artifactId>
</dependency>
I've dug through some documentation and eventually found this
At the bottom of the page it reads:
Using ActiveMQConnectionFactory
...
The broker will be created upon creation of the first connection.
...
Again, this doesn't fully explain what's going on, but I stopped digging once I found that enabling pooling stopped this behaviour from happening.
Met the same problem, but the accepted answer did not help. Found the thread with the explanation:
ActiveMQ will only keep an in-memory queue alive if there is something holding onto it.
If the queue has been set up in Spring to not cache the queue, then ActiveMQ will keep starting/stopping the connector. A full appserver would effectively cache queue stuff in a pool, thus keeping it alive.
Stick the cache level setting (CACHE_CONNECTION) in the bean def for the Spring JMS container (org.springframework.jms.listener.DefaultMessageListenerContainer) to fix the problem.
@Bean
public JmsListenerContainerFactory<?> myFactory(ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setCacheLevelName("CACHE_CONNECTION"); //<-- the line fixed the problem
    configurer.configure(factory, connectionFactory);
    return factory;
}
We have Spring Boot 1.5 services which consume and publish via JmsTemplate and do not have this problem, and then we have one which only publishes (so no JmsListener) and fills the logs with these messages - plus the related WARN Temporary Store limit is 51200 mb ... resetting to maximum available disk space - writing them on every GET to /healthcheck. You'll want to consider the implications, but this behaviour, and hence all the related logging, can be disabled with the property setting:
management.health.jms.enabled=false
You'll see the jms block disappear from your /healthcheck output as a result - so check whether it is actually reflecting anything required by your service.
We had no success with the other answers on this thread, nor with those we tried from Startup error of embedded ActiveMQ: Temporary Store limit is 51200 mb.
What I did was explicitly create a broker and tell Spring to use it instead of its implicit one. This way, the broker will be kept alive for sure:
@Configuration
public class MessagingConfig {

    private static final String BROKER_URL = "tcp://localhost:61616";

    @Bean
    public BrokerService brokerService() throws Exception {
        BrokerService broker = new BrokerService();
        broker.addConnector(BROKER_URL);
        broker.setPersistent(false);
        broker.start();
        return broker;
    }

    @Bean
    public ActiveMQConnectionFactory connectionFactory() {
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
        connectionFactory.setBrokerURL(BROKER_URL);
        return connectionFactory;
    }
}
The accepted answer did not suit me; I found another solution. Maybe it will be helpful:
If you are using a Spring Boot application and do not need a PooledConnectionFactory, then add this line to your Application class:
@SpringBootApplication(exclude = {ActiveMQAutoConfiguration.class})
public class MyApplication {...}
This will exclude the ActiveMQ auto-configuration and you won't see that garbage in the logs.
