StreamsException: Unable to initialize state, this can happen if multiple instances of Kafka Streams are running in the same state directory - spring-boot

This is regarding upgrading an existing production code base that uses windowing, from kafka-clients, kafka-streams, and spring-kafka 2.4.0 to 2.6.x, and also upgrading spring-boot-starter-parent from 2.2.2.RELEASE to 2.3.x, since Spring Boot 2.2 is incompatible with kafka-streams 2.6.
The existing code had the beans below with the old versions (kafka-streams 2.4.0, Spring Boot 2.2):
#Bean("DataCompressionCustomTopology")
public Topology customTopology(#Qualifier("CustomFactoryBean") StreamsBuilder streamsBuilder) {
//Your topology code
return streamsBuilder.build();
}
#Bean("GenericKafkaStreams")
public KafkaStreams kStream() {
//Your kafka streams code
return kafkaStreams;
}
After upgrading kafka-streams and kafka-clients to 2.6.2 and spring-kafka to 2.6.x, the following exception was observed:
2021-05-13 12:33:51.954 [Persistence-Realtime-Transformation] [main] WARN o.s.b.w.s.c.AnnotationConfigServletWebServerApplicationContext - Exception encountered during context initialization - cancelling refresh attempt: org.springframework.context.ApplicationContextException: Failed to start bean 'CustomFactoryBean'; nested exception is org.springframework.kafka.KafkaException: Could not start stream: ; nested exception is org.apache.kafka.streams.errors.StreamsException: Unable to initialize state, this can happen if multiple instances of Kafka Streams are running in the same state directory

A similar error can happen when you run multiple instances of the same application (same name/id) on the same machine.
Please see the state.dir config to get the idea: you can add it to your Kafka configuration and make it unique per instance.
In case you are using Spring Cloud Stream (two instances can't share the same port on the same machine):
spring.cloud.stream.kafka.streams.binder.configuration.state.dir: ${spring.application.name}${server.port}
UPDATE:
In the case of spring-kafka:
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
KafkaStreamsConfiguration kStreamsConfig() {
    Map<String, Object> props = new HashMap<>();
    props.put(APPLICATION_ID_CONFIG, springApplicationName);
    props.put(BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
    props.put(StreamsConfig.STATE_DIR_CONFIG, String.format("%s%s", springApplicationName, serverPort));
    return new KafkaStreamsConfiguration(props);
}
or:
spring.kafka:
  bootstrap-servers: ....
  streams:
    properties:
      application.server: localhost:${server.port}
      state.dir: ${spring.application.name}${server.port}

The problem here is that newer versions of spring-kafka automatically initialize one instance of Kafka Streams from the Topology bean, while another instance is initialized from the GenericKafkaStreams bean in the existing code base. The result is multiple threads trying to take a lock on the same state directory, hence the error.
Even disabling KafkaAutoConfiguration at the Spring Boot level does not disable this behavior. This was painful to identify and cost a lot of time.
The fix is to get rid of the Topology bean and keep only our own custom Kafka Streams bean, as in the code below:
protected Topology customTopology() {
    // topology code
    return streamsBuilder.build();
}

/**
 * This starts the kafka streams application and sets the state listener and
 * state store listener.
 *
 * @return KafkaStreams
 */
@Bean("GenericKafkaStreams")
public KafkaStreams kStream() {
    KafkaStreams kafkaStreams = new KafkaStreams(customTopology(), kstreamsconfigs);
    return kafkaStreams;
}
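If you prefer to keep a Topology bean instead, another option (also hinted at by a later answer in this thread) is to stop Spring Boot from auto-starting its own streams instance, so only your GenericKafkaStreams bean runs. A minimal sketch using the standard Spring Boot property:

spring.kafka.streams.auto-startup=false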

If you have a sophisticated Kafka Streams topology in your Spring Cloud Streams Kafka Streams Binder 3.0 style application, you might need to specify different application ids for different functions like the following:
spring.cloud.stream.function.definition: myFirstStream;mySecondStream
...
spring.cloud.stream.kafka.streams:
  binder:
    functions:
      myFirstStream:
        applicationId: app-id-1
      mySecondStream:
        applicationId: app-id-2

I've handled this problem on these versions:
org.springframework.boot version 2.5.3
org.springframework.kafka:spring-kafka:2.7.5
org.apache.kafka:kafka-clients:2.8.0
org.apache.kafka:kafka-streams:2.8.0
Check this: State directory.
By default it is created in the temp folder with the kafka streams application id, like:
/var/folders/xw/xgslnvzj1zj6wp86wpd8hqjr0000gn/T/kafka-streams/${spring.kafka.streams.application-id}/.lock
If two or more Kafka Streams apps use the same spring.kafka.streams.application-id, then you get this exception.
So just change your Kafka Streams apps' ids.
Or set the directory manually via StreamsConfig.STATE_DIR_CONFIG in the streams config.
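As a minimal plain-Java sketch of that second option (the application id and bootstrap server are placeholder values), giving each process its own state directory:

import java.io.IOException;
import java.nio.file.Files;
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

static Properties uniqueStateDirProps() throws IOException {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder id
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
    // A unique state dir per process means no two instances compete for the same .lock file
    props.put(StreamsConfig.STATE_DIR_CONFIG,
            Files.createTempDirectory("kafka-streams-state-").toString());
    return props;
}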

The answers above about setting the state dir worked perfectly for me. Thanks.
Adding one observation that might be helpful for someone working with spring-boot. When running multiple Kafka Streams application instances on the same machine with the property spring.devtools.restart.enabled set (which is mostly the case in a dev profile), you might want to disable it: when the same application instance restarts automatically, it might not get the store lock. This is what I was facing, and I was able to resolve it by disabling the restart behavior.
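For reference, the property in question (standard Spring Boot devtools configuration):

spring.devtools.restart.enabled=false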

In my case, what works perfectly is specifying a separate @TestConfiguration class in which I keep a counter that changes the application id for each Spring Boot test context.
@TestConfiguration
public class TestKafkaStreamsConfig {

    private static final AtomicInteger COUNTER = new AtomicInteger();

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    KafkaStreamsConfiguration kStreamsConfig() {
        final var props = new HashMap<String, Object>();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test-application-id-" + COUNTER.getAndIncrement());
        // rest of configuration
        return new KafkaStreamsConfiguration(props);
    }
}
Of course I had to enable Spring bean overriding to replace the primary configuration.
Edit: I'm using Spring Boot v2.5.10, so to make use of @TestConfiguration I have to pass it to the @SpringBootTest(classes = ...) annotation.
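A minimal usage sketch (the test class name is hypothetical, and you may need to list your application class alongside the test configuration):

@SpringBootTest(classes = TestKafkaStreamsConfig.class)
class MyStreamsIntegrationTest {
    // test methods run against a streams app with a unique application id
}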

I was facing the same problem: a single topology in Spring Boot, and I was trying to access the state store for interactive queries. In order to do so, I needed a KafkaStreams object, as shown below.
GlobalKTable<String, String> configTable = builder.globalTable("config",
        Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("config-store")
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.String()));
KafkaStreams streams = new KafkaStreams(builder.build(), kconfig.asProperties());
streams.start();
ReadOnlyKeyValueStore<String, String> configView = streams.store(
        StoreQueryParameters.fromNameAndType("config-store", QueryableStoreTypes.keyValueStore()));
The problem is that the Spring Kafka factory bean already starts the topology, so calling streams.start() is a second start and causes the lock on the state store.
This can be fixed by setting the auto-startup property to false:
spring.kafka.streams.auto-startup=false
That's all you need.
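Alternatively (a sketch not taken from the answer above, assuming the default spring-kafka setup), you can leave auto-startup on and query the Spring-managed instance instead of creating and starting your own, via the StreamsBuilderFactoryBean:

@Autowired
private StreamsBuilderFactoryBean factoryBean;

public ReadOnlyKeyValueStore<String, String> configView() {
    // Reuse the KafkaStreams instance Spring already started: no second start(),
    // so no competing lock on the state directory
    KafkaStreams streams = factoryBean.getKafkaStreams();
    return streams.store(
            StoreQueryParameters.fromNameAndType("config-store", QueryableStoreTypes.keyValueStore()));
}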

Related

NoSuchMethodError when trying to map Kafka binder to input method

The following appears in the console when trying to launch a Spring Cloud Stream project with the Kafka binder active:
org.springframework.context.ApplicationContextException: Failed to start bean 'inputBindingLifecycle';
Caused by: java.lang.NoSuchMethodError: java.util.List.of(Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;Ljava/lang/Object;)Ljava/util/List;
My input method goes as follows using Spring Cloud functions:
@Bean
public Function<Message<String>, byte[]> exec() {
    return input -> ...
Now, having Kafka in place, my .properties file looks as follows:
spring.cloud.stream.function.bindings.exec-in-0=in
spring.cloud.stream.bindings.in.destination=topic-0
spring.cloud.stream.function.bindings.exec-out-0=out
spring.cloud.stream.bindings.out.destination=topic-1
spring.cloud.stream.bindings.in.binder=kafka
spring.cloud.stream.kafka.bindings.in.consumer.configuration.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.cloud.stream.bindings.out.binder=kafka
spring.cloud.stream.kafka.bindings.out.producer.configuration.value.serializer=org.apache.kafka.common.serialization.ByteArraySerializer
Am I missing any configs for the input method? Should that method be different for Kafka (already tested this with PubSub and it works)?
The error in your stack trace gives us a clue that you are facing this problem: https://github.com/spring-projects/spring-integration/issues/3761.
So, either upgrade to the latest Spring Cloud Stream: https://spring.io/projects/spring-cloud-stream#learn
or to the latest Spring Integration: https://spring.io/projects/spring-integration#learn
or just use Java > 8! The java.util.List.of(...) overload in your NoSuchMethodError only exists from Java 9 onwards.
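If staying on a newer JDK is the route you take, make sure the build actually targets it; for example, assuming a Maven build (hypothetical, adapt to your build tool):

<properties>
    <!-- java.util.List.of(...) needs Java 9+; 11 is a safe LTS target -->
    <maven.compiler.release>11</maven.compiler.release>
</properties>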

Deploying Spring Integration to WebSphere ND 8.5.5

I am looking for some guidance on deploying a simple Spring Integration application to WebSphere. The overall scope of the application is quite simple - it reads from a RabbitMQ endpoint, transforms any messages received to a specific xml format, and then posts the message to a JMS endpoint in WAS.
Initially, I built the application as a JAR. I was able to get it to work well enough with SSL turned off on the IIOP endpoints in WAS, but despite hours of debugging I never could get it to communicate properly with WAS over SSL. The initial handshake and communication with the bootstrap port was successful, but the SIB endpoint rejected the exact same certificate chain with the usual PKIX chaining error, and no amount of certificate importing made any difference.
So I elected to work out deploying the application as a web app into WAS itself, which would have been the end goal anyway. This caused a number of issues that I've had to work through:
I have not gotten properties to work in the normal Spring fashion. I assume that in this context Spring needs to be explicitly told where to look, but I've sidestepped this with hardcoding for now. Perhaps using @Resource annotations would be the way to do this in this context?
Various jar versioning issues, which I've mostly worked out by setting the application classloader to PARENT_LAST and judiciously removing things that seemed redundant.
Oddly, I did have to add some jars related to parameter validation which don't seem to have been present in my original Maven build.
Needing to set some values in web.xml in order for Spring to locate configuration beans: specifically, setting an init-param with contextClass (org.springframework.web.context.support.AnnotationConfigWebApplicationContext) and contextConfigLocation set to a list of the classes that would normally be loaded via the @Configuration annotation.
It may or may not be necessary, but I did move from Maven to IID in order to hopefully avoid versioning issues with IBM-related jars.
Now I would like to know if there are other items generally needed to be done to deploy Spring (especially Spring Integration) to WAS, and whether the above seems like enough.
In addition, I have an issue with the actual JMS connection to WAS. I have tried to use the UserCredentialsConnectionFactoryAdapter, and this worked with standalone Spring. However, when deployed in WAS, an exception is thrown:
Caused by: java.lang.ClassCastException: com.ibm.ws.sib.api.jms.impl.JmsManagedQueueConnectionFactoryImpl incompatible with javax.jms.ConnectionFactory
I believe this is thrown when the setTargetConnectionFactory method is called, since if I use the connection factory without the UserCredentialsConnectionFactoryAdapter it works fine, except that the connection as "anonymous" is rejected by the bus:
[03/03/21 15:23:32:934 EST] 0000016c SibMessage W [BPM.WorkflowServer.Bus:Node1.server1-BPM.WorkflowServer.Bus] CWSII0212W: The bus BPM.WorkflowServer.Bus denied an anonymous user access to the bus.
If you want to see the code, this works fine (but doesn't authenticate):
@Bean
public ConnectionFactory jmsConnectionFactory() throws NamingException {
    ConnectionFactory connectionFactory = null;
    Context ctx = null;
    Properties p = new Properties();
    p.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
    p.put(Context.PROVIDER_URL, providerUrl);
    p.put(Context.SECURITY_AUTHENTICATION, "simple");
    p.put(Context.SECURITY_PRINCIPAL, jmsUsername);
    p.put(Context.SECURITY_CREDENTIALS, jmsPassword);
    ctx = new InitialContext(p);
    if (null != ctx)
        System.out.println("Got naming context");
    connectionFactory = (QueueConnectionFactory) ctx.lookup("javax.jms.QueueConnectionFactory");
    if (null != connectionFactory)
        System.out.println("Got connection factory");
    return connectionFactory;
}
Whereas this throws the class cast exception:
@Bean
public UserCredentialsConnectionFactoryAdapter jmsConnectionFactory() throws NamingException {
    ConnectionFactory connectionFactory = null;
    Context ctx = null;
    Properties p = new Properties();
    p.put(Context.INITIAL_CONTEXT_FACTORY, "com.ibm.websphere.naming.WsnInitialContextFactory");
    p.put(Context.PROVIDER_URL, providerUrl);
    p.put(Context.SECURITY_AUTHENTICATION, "simple");
    p.put(Context.SECURITY_PRINCIPAL, jmsUsername);
    p.put(Context.SECURITY_CREDENTIALS, jmsPassword);
    ctx = new InitialContext(p);
    if (null != ctx)
        System.out.println("Got naming context");
    connectionFactory = (ConnectionFactory) ctx.lookup("javax.jms.QueueConnectionFactory");
    if (null != connectionFactory)
        System.out.println("Got connection factory");
    UserCredentialsConnectionFactoryAdapter adapter = new UserCredentialsConnectionFactoryAdapter();
    adapter.setTargetConnectionFactory(connectionFactory);
    adapter.setUsername(jmsUsername);
    adapter.setPassword(jmsPassword);
    return adapter;
    // return connectionFactory;
}
Note: the credentials set in the Context properties seem to have no effect.
I am using this connection factory with Spring Integration Java DSL:
.handle(Jms.outboundAdapter(jmsConfig.jmsConnectionFactory())
        .destination(jmsDestination))
I understand from WebSphere documentation that supplying credentials happens on the ConnectionFactory.getConnection() call. So I wonder whether there is any hook in the DSL where I could override the getConnection so as to provide parameters and avoid the class cast exception that I am seeing.
Alternately I am considering just explicitly calling jms template methods to send the message using a lambda in the handler and creating the connection manually.
So, finally, what I would like to ask for is:
Any overall guidance on deploying a Spring application to WebSphere traditional
What may be causing the class cast exception
PS: I have placed all of the Spring (et al.) jars in a shared library. These are the contents:
c:/IBM/IID/sharedlibs/spring/accessors-smart-1.2.jar
c:/IBM/IID/sharedlibs/spring/amqp-client-5.10.0.jar
c:/IBM/IID/sharedlibs/spring/android-json-0.0.20131108.vaadin1.jar
c:/IBM/IID/sharedlibs/spring/apiguardian-api-1.1.0.jar
c:/IBM/IID/sharedlibs/spring/asm-5.0.4.jar
c:/IBM/IID/sharedlibs/spring/assertj-core-3.18.1.jar
c:/IBM/IID/sharedlibs/spring/byte-buddy-1.10.19.jar
c:/IBM/IID/sharedlibs/spring/byte-buddy-agent-1.10.19.jar
c:/IBM/IID/sharedlibs/spring/hamcrest-2.2.jar
c:/IBM/IID/sharedlibs/spring/hamcrest-core-2.2.jar
c:/IBM/IID/sharedlibs/spring/hamcrest-library-2.2.jar
c:/IBM/IID/sharedlibs/spring/http-client-3.8.0.RELEASE.jar
c:/IBM/IID/sharedlibs/spring/jackson-annotations-2.11.4.jar
c:/IBM/IID/sharedlibs/spring/jackson-core-2.11.4.jar
c:/IBM/IID/sharedlibs/spring/jackson-databind-2.11.4.jar
c:/IBM/IID/sharedlibs/spring/jackson-dataformat-xml-2.11.4.jar
c:/IBM/IID/sharedlibs/spring/jackson-datatype-jdk8-2.11.4.jar
c:/IBM/IID/sharedlibs/spring/jackson-datatype-jsr310-2.11.4.jar
c:/IBM/IID/sharedlibs/spring/jackson-module-jaxb-annotations-2.11.4.jar
c:/IBM/IID/sharedlibs/spring/jackson-module-parameter-names-2.11.4.jar
c:/IBM/IID/sharedlibs/spring/jakarta.activation-api-1.2.2.jar
c:/IBM/IID/sharedlibs/spring/jakarta.annotation-api-1.3.5.jar
c:/IBM/IID/sharedlibs/spring/jakarta.el-3.0.3.jar
c:/IBM/IID/sharedlibs/spring/json-path-2.4.0.jar
c:/IBM/IID/sharedlibs/spring/json-smart-2.3.jar
c:/IBM/IID/sharedlibs/spring/jsonassert-1.5.0.jar
c:/IBM/IID/sharedlibs/spring/objenesis-3.1.jar
c:/IBM/IID/sharedlibs/spring/reactive-streams-1.0.3.jar
c:/IBM/IID/sharedlibs/spring/reactor-core-3.4.2.jar
c:/IBM/IID/sharedlibs/spring/snakeyaml-1.27.jar
c:/IBM/IID/sharedlibs/spring/spring-amqp-2.3.4.jar
c:/IBM/IID/sharedlibs/spring/spring-aop-5.3.3.jar
c:/IBM/IID/sharedlibs/spring/spring-beans-5.3.3.jar
c:/IBM/IID/sharedlibs/spring/spring-boot-2.4.2.jar
c:/IBM/IID/sharedlibs/spring/spring-boot-autoconfigure-2.4.2.jar
c:/IBM/IID/sharedlibs/spring/spring-boot-starter-2.4.2.jar
c:/IBM/IID/sharedlibs/spring/spring-boot-starter-amqp-2.4.2.jar
c:/IBM/IID/sharedlibs/spring/spring-boot-starter-json-2.4.2.jar
c:/IBM/IID/sharedlibs/spring/spring-boot-starter-logging-2.4.2.jar
c:/IBM/IID/sharedlibs/spring/spring-boot-starter-web-2.4.2.jar
c:/IBM/IID/sharedlibs/spring/spring-context-5.3.3.jar
c:/IBM/IID/sharedlibs/spring/spring-core-5.3.3.jar
c:/IBM/IID/sharedlibs/spring/spring-expression-5.3.3.jar
c:/IBM/IID/sharedlibs/spring/spring-integration-amqp-5.4.3.jar
c:/IBM/IID/sharedlibs/spring/spring-integration-core-5.4.3.jar
c:/IBM/IID/sharedlibs/spring/spring-integration-jms-5.4.3.jar
c:/IBM/IID/sharedlibs/spring/spring-integration-xml-5.4.3.jar
c:/IBM/IID/sharedlibs/spring/spring-jcl-5.3.3.jar
c:/IBM/IID/sharedlibs/spring/spring-jms-5.3.3.jar
c:/IBM/IID/sharedlibs/spring/spring-messaging-5.3.3.jar
c:/IBM/IID/sharedlibs/spring/spring-oxm-5.3.3.jar
c:/IBM/IID/sharedlibs/spring/spring-rabbit-2.3.4.jar
c:/IBM/IID/sharedlibs/spring/spring-rabbit-junit-2.3.4.jar
c:/IBM/IID/sharedlibs/spring/spring-retry-1.3.1.jar
c:/IBM/IID/sharedlibs/spring/spring-tx-5.3.3.jar
c:/IBM/IID/sharedlibs/spring/spring-web-5.3.3.jar
c:/IBM/IID/sharedlibs/spring/spring-webmvc-5.3.3.jar
c:/IBM/IID/sharedlibs/spring/spring-xml-3.0.10.RELEASE.jar
c:/IBM/IID/sharedlibs/spring/stax2-api-4.2.1.jar
c:/IBM/IID/sharedlibs/spring/woodstox-core-6.2.4.jar
c:/IBM/IID/sharedlibs/spring/xmlunit-core-2.7.0.jar
c:/IBM/IID/sharedlibs/spring/slf4j-api-1.7.30.jar
c:/IBM/IID/sharedlibs/spring/jakarta.validation-api-2.0.2.jar
c:/IBM/IID/sharedlibs/spring/hibernate-validator-6.1.7.Final.jar
c:/IBM/IID/sharedlibs/spring/jboss-logging-3.4.1.Final.jar
c:/IBM/IID/sharedlibs/spring/classmate-1.5.1.jar
c:/IBM/IID/sharedlibs/spring/javax.jms-api-2.0.1.jar
UPDATE
So what I finally realized is that:
WAS 8.5.5 uses Java EE 6, which means JMS 1.1.
Spring JMS is built against JMS 2.0.
When I switched to the UserCredentialsConnectionFactoryAdapter, it tried to use the JmsContext interface, which is part of the JMS 2.0 API and not provided by the WAS JEE container; this was the reason for the class cast exception.
What I did was do the JMS sending manually instead of using a Spring Integration gateway. A better solution might be to create my own adapter that extends the connection factory and supplies credentials in the connect method, but this works well enough for now:
.handle(m -> {
    try {
        jmsConfig.sendMessage(m.getPayload().toString());
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
})
JmsConfig being a bean that manages the connection.
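A sketch of what that JmsConfig.sendMessage helper could look like (the field names here are hypothetical; it sticks to the JMS 1.1 API available in WAS 8.5.5 and passes credentials directly to createConnection, sidestepping the JMS 2.0 adapter entirely):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class JmsConfig {

    private ConnectionFactory connectionFactory; // looked up from JNDI as shown earlier
    private Queue queue;                         // target SIB destination
    private String jmsUsername;
    private String jmsPassword;

    public void sendMessage(String payload) throws JMSException {
        // JMS 1.1: credentials go directly on createConnection, so neither
        // UserCredentialsConnectionFactoryAdapter nor the JMS 2.0 JmsContext is involved
        Connection connection = connectionFactory.createConnection(jmsUsername, jmsPassword);
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage(payload));
        } finally {
            connection.close();
        }
    }
}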

Kafka, Spring Kafka and redelivering old messages

I use Kafka and Spring Boot with Spring Kafka. After an abnormal application termination and restart, my application started receiving old, already processed messages from the Kafka topic.
What may be the reason for that and how to find and resolve the issue?
my Kafka properties:
spring.kafka.bootstrap-servers=${kafka.host}:${kafka.port}
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.group-id=postfenix
spring.kafka.consumer.enable-auto-commit=false
My Spring Kafka factory and listener:
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Post> postKafkaListenerContainerFactory(KafkaProperties kafkaProperties) {
    ConcurrentKafkaListenerContainerFactory<String, Post> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setAckMode(AckMode.MANUAL);
    factory.setConsumerFactory(postConsumerFactory(kafkaProperties));
    return factory;
}

@KafkaListener(topics = "${kafka.topic.post.send}", containerFactory = "postKafkaListenerContainerFactory")
public void sendPost(ConsumerRecord<String, Post> consumerRecord, Acknowledgment ack) {
    Post post = consumerRecord.value();
    // do some logic here
    ack.acknowledge();
}
When using Kafka, the clients need to commit offsets themselves. This is in contrast to other message brokers, such as AMQP brokers, where the broker keeps track of the messages a client has already received.
In your case, you do not commit offsets automatically, and therefore Kafka expects you to commit them manually (because of this setting: spring.kafka.consumer.enable-auto-commit=false). If you do not commit offsets manually in your program, the behaviour you describe is pretty much the expected one. Kafka simply does not know which messages your program processed successfully. Each time you restart your program, Kafka sees that your program has not committed any offsets yet and applies the strategy you provide in spring.kafka.consumer.auto-offset-reset=earliest, which means starting from the earliest available message.
If this is all new to you, I suggest reading up on this documentation on Kafka and this Spring documentation, because Kafka is quite different from other message brokers.
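To verify whether offsets are actually being committed for your group, one option (assuming shell access to a broker host; postfenix is the group id from the question) is the stock Kafka CLI:

kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group postfenix

If CURRENT-OFFSET shows no committed value, the acknowledgments are never reaching the broker, and the listener or ack mode configuration is the place to look.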

Spring Boot 2 integrate Brave MySQL-Integration into Zipkin

I am trying to integrate the Brave MySQL instrumentation into my Spring Boot 2.x service to automatically let its interceptor enrich my traces with spans for MySQL queries.
The current Gradle dependencies are the following:
compile 'io.zipkin.zipkin2:zipkin:2.4.5'
compile('io.zipkin.reporter2:zipkin-sender-okhttp3:2.3.1')
compile('io.zipkin.brave:brave-instrumentation-mysql:4.14.3')
compile('org.springframework.cloud:spring-cloud-starter-zipkin:2.0.0.M5')
I already configured Sleuth successfully to send traces for HTTP requests to my Zipkin server, and now I want to add some spans for each MySQL query the service makes.
The TracingConfiguration is this:
@Configuration
public class TracingConfiguration {

    /** Configuration for how to send spans to Zipkin */
    @Bean
    Sender sender() {
        return OkHttpSender.create("https://myzipkinserver.com/api/v2/spans");
    }

    /** Configuration for how to buffer spans into messages for Zipkin */
    @Bean
    AsyncReporter<Span> spanReporter() {
        return AsyncReporter.create(sender());
    }

    @Bean
    Tracing tracing(Reporter<Span> spanListener) {
        return Tracing.newBuilder()
                .spanReporter(spanReporter())
                .build();
    }
}
The query interceptor works properly, but my problem now is that the spans are not added to the existing trace; each is added to a new one.
I guess it's because of the creation of a new sender/reporter in the configuration, but I have not been able to reuse the existing one created by the Spring Boot auto-configuration.
That would moreover remove the need to redundantly define the Zipkin URL (it is already defined for Zipkin in my application.yml).
I already tried autowiring the Zipkin reporter into my bean, but all I got is a SpanReporter - and the Brave Tracer builder requires a Reporter<Span>.
Do you have any advice on how to properly wire things up?
Please use the latest snapshots. Sleuth in the latest snapshots uses Brave internally, so the integration will be extremely simple.

How to delay spring beans startup?

Having a Spring application (actually a Grails app) that runs an Apache ActiveMQ server as a Spring bean plus a couple of Apache Camel routes. The application uses Hibernate to work with the database. The problem is simple: ActiveMQ+Camel starts up BEFORE Grails injects its special methods into the Hibernate domain objects (the save/update methods, etc.). So if ActiveMQ already has some data on startup, Camel starts processing messages without the Grails DAO methods having been injected, which fails with grails.lang.MissingMethodException. I must delay the ActiveMQ/Camel startup until Grails has injected the special methods into the domain objects.
If all of these are defined as Spring beans, you can use:
<bean id="activeMqBean" depends-on="anotherBean" />
This will make sure anotherBean is initialized before activeMqBean.
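If the beans are configured in Java rather than XML, the annotation-based equivalent is a @DependsOn sketch like this (bean names are placeholders):

import org.apache.activemq.broker.BrokerService;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.DependsOn;

@Bean
@DependsOn("anotherBean")
public BrokerService activeMqBean() {
    // The embedded ActiveMQ broker is only created after anotherBean has been initialized
    return new BrokerService();
}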
Can you move the MQ management into a plugin? It would increase modularity, and if you declare in the plugin descriptor
def loadAfter = ['hibernate']
you should get the desired behavior. Works for the JBPM plugin.
I am not sure it applies in your case, but lazy loading may also help, e.g.
<bean id="lazybean" class="com.xxx.YourBean" lazy-init="true">
A lazily initialized bean tells the IoC container to create the bean instance only when it is first requested. This can help you delay the loading of the beans you want.
I know this question is pretty old, but I am now facing the same problem in the year 2015, and this thread does not offer a solution for me.
I came up with a custom processor bean holding a CountDownLatch, which I count down after bootstrapping the application. Messages are held back until the app has fully started, and it's working for me.
/**
 * bootstrap latch processor
 */
@Log4j
class BootstrapLatchProcessor implements Processor {

    private final CountDownLatch latch = new CountDownLatch(1)

    @Override
    void process(Exchange exchange) throws Exception {
        if (latch.count > 0) {
            log.info "waiting for bootstrapped @ ${exchange.fromEndpoint}"
            latch.await()
        }
        exchange.out = exchange.in
    }

    /**
     * mark the application as bootstrapped
     */
    public void setBootstrapped() {
        latch.countDown()
    }
}
Then use it as a bean in your application and call the method setBootstrapped in your Bootstrap.groovy.
Then, in your RouteBuilder, put the processor between your endpoint and destination for all routes where you expect messages to come in before the app has started:
from("activemq:a.in").processRef('bootstrapProcessor').to("bean:handlerService?method=handle")
