Spring Integration Kafka outbound adapter error handling

How can I handle messages that fail to be produced to Kafka in Spring Integration?
I didn't see that error-channel is an option on int-kafka:outbound-channel-adapter, so I'm wondering where I should add the error-channel information so that my ErrorHandler receives "failed to produce to Kafka" types of errors
(including all types of failure: configuration, network, etc.).
Also, inputToKafka is a queued channel; where should I add an error-channel to handle a potential queue-full error?
<int:gateway id="myGateway"
service-interface="someGateway"
default-request-channel="transformChannel"
error-channel="errorChannel"
default-reply-channel="replyChannel"
async-executor="MyThreadPoolTaskExecutor"/>
<int:transformer id="transformer" input-channel="transformChannel" method="transform" output-channel="inputToKafka">
<bean class="Transformer"/>
</int:transformer>
<int-kafka:outbound-channel-adapter id="kafkaOutboundChannelAdapter"
kafka-template="template"
auto-startup="false"
channel="inputToKafka"
topic="foo"
message-key-expression="'bar'"
partition-id-expression="2">
<int:poller fixed-delay="200" time-unit="MILLISECONDS" receive-timeout="0"
task-executor="kafkaExecutor"/>
</int-kafka:outbound-channel-adapter>
<bean id="kafkaExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
....
</bean>
<bean id="template" class="org.springframework.kafka.core.KafkaTemplate">
<constructor-arg>
<bean class="org.springframework.kafka.core.DefaultKafkaProducerFactory">
<constructor-arg>
<map>
<entry key="bootstrap.servers" value="localhost:9092" />
...
</map>
</constructor-arg>
</bean>
</constructor-arg>
</bean>
<int:service-activator input-channel='errorChannel' output-channel="replyChannel" method='process'>
<bean class="ErrorHandler"/>
</int:service-activator>
Edit: I added the following producerListener property to the KafkaTemplate bean:
<property name="producerListener">
    <bean id="producerListener" class="org.springframework.kafka.support.ProducerListenerAdapter"/>
</property>

Any errors on the downstream flow will be sent to the error-channel on your gateway. However, since Kafka sends are asynchronous by default, you won't get any errors that way. You can set sync="true" on the outbound adapter and an exception will then be thrown if there's a problem.
Bear in mind, though, it will be much slower.
You can get async exceptions by adding a ProducerListener to your KafkaTemplate.
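For example, a minimal listener might look like the sketch below. This is only a rough illustration: it subclasses the ProducerListenerAdapter mentioned in the edit, the String generics and the class name are placeholders, and the onError signature is the spring-kafka 1.x one, so check it against the version you are using.

import org.springframework.kafka.support.ProducerListenerAdapter;

// Sketch: override onError on the no-op adapter to capture asynchronous send failures.
public class LoggingProducerListener extends ProducerListenerAdapter<String, String> {

    @Override
    public void onError(String topic, Integer partition, String key, String value, Exception exception) {
        // Replace with real handling, e.g. publish to an error channel or alerting system.
        System.err.println("Failed to send to topic " + topic + ": " + exception.getMessage());
    }
}

You can register it with template.setProducerListener(...) or via the producerListener property shown in the edit above.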

Related

Solace JMS didn't run with parallel threads in Camel

In order to perform parallel processing of JMS messages, I have configured the JmsComponent and connectionFactory as below.
After reading some posts and the official tutorial, it seems that the configuration below should work for ActiveMQ. However, my testing shows that it doesn't work with Solace. Can someone give me a hint on this? Thanks.
// Route Definition - Camel Java DSL
from(INBOUND_ENDPOINT).setExchangePattern(ExchangePattern.InOnly).threads(5).bean(ThroughputMeasurer.class);
<!-- JMS Config -->
<bean id="jms" class="org.apache.camel.component.jms.JmsComponent">
    <property name="connectionFactory" ref="cachedConnectionFactory" />
    <property name="acknowledgementModeName" value="AUTO_ACKNOWLEDGE" />
    <property name="deliveryPersistent" value="false" />
    <property name="asyncConsumer" value="true" />
    <property name="concurrentConsumers" value="5" />
</bean>

<!-- jndiTemplate is omitted here -->
<bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
    <property name="jndiTemplate" ref="jndiTemplate" />
    <property name="jndiName" value="ceConnectionFactory" />
</bean>

<bean id="cachedConnectionFactory"
      class="org.springframework.jms.connection.CachingConnectionFactory">
    <property name="targetConnectionFactory" ref="connectionFactory" />
    <property name="sessionCacheSize" value="30" />
</bean>
I believe the problem here is that your consumer is bound to an exclusive queue, and only one consumer is allowed to process messages. Binding to a non-exclusive queue should solve the problem.
Exclusive queues only allow the first consumer to consume from the queue.
All other consumers that are bound to the queue will not receive data. In the event that the first consumer disconnects, the next oldest consumer will begin receiving data. The other consumers can be thought of as being "standby" consumers that will take over the moment the first consumer disconnects.
Non-exclusive queues are used for load balancing. Messages that are spooled to the queue will be distributed to all consumers in a round-robin manner.
Check:
Are there any exception messages in the log?
Is the jndiName correct? Perhaps it should be jms/ceConnectionFactory?
Is the URI INBOUND_ENDPOINT correct?
...
Try to set up ActiveMQ first, then migrate the configuration to Solace.
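If it helps to rule out the Camel side first, here is a rough standalone sketch of an equivalent route running against a local ActiveMQ broker. The broker URL, queue name and log endpoint are placeholders, not taken from the question.

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.ExchangePattern;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.jms.JmsComponent;
import org.apache.camel.impl.DefaultCamelContext;

public class ParallelConsumerTest {
    public static void main(String[] args) throws Exception {
        // Assumed local ActiveMQ broker; swap in the Solace JNDI-looked-up factory afterwards.
        JmsComponent jms = JmsComponent.jmsComponentAutoAcknowledge(
                new ActiveMQConnectionFactory("tcp://localhost:61616"));
        jms.setConcurrentConsumers(5);

        CamelContext context = new DefaultCamelContext();
        context.addComponent("jms", jms);
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                from("jms:queue:inbound.test")                // placeholder queue name
                        .setExchangePattern(ExchangePattern.InOnly)
                        .threads(5)
                        .to("log:throughput?showBody=false"); // stand-in for the ThroughputMeasurer bean
            }
        });
        context.start();
        Thread.sleep(60000); // let it consume for a while
        context.stop();
    }
}

If the route scales to five consumers against ActiveMQ but not against Solace, the queue's exclusivity setting on the broker is the most likely culprit.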

Spring JMS DMLC messages not picked up until restart

I am using a DMLC (DefaultMessageListenerContainer) to listen to Tibco EMS queues (on Tomcat). After some time, messages stop being delivered. After a restart, messages are delivered again. I am using a SingleConnectionFactory.
Connection Factory:
<bean id="jmsConnectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<property name="jndiTemplate" ref="jndiTemplate" />
<property name="jndiName" value="${connectionQueueFactory}" />
<property name="cache" value="false"/>
<property name="lookupOnStartup" value="false"/>
<property name="proxyInterface" value="javax.jms.ConnectionFactory"/>
</bean>
Authenticated Connection Factory:
<bean id="authenticationConnectionFactory"
class="com.my.service.AuthenticationConnectionFactory"> <-- extends SingleConnectionFactory
<property name="targetConnectionFactory" ref="jmsConnectionFactory" />
<property name="username" value="${userName}" />
<property name="password" value="${password}" />
<property name="sessionCacheSize" value="1"/>
</bean>
Destination Resolver:
<bean id="destinationResolver" class="org.springframework.jms.support.destination.JndiDestinationResolver">
<property name="jndiTemplate" ref="jndiTemplate" />
<property name="cache" value="true"/>
</bean>
Container:
<jms:listener-container concurrency="10-15" container-type="default"
connection-factory="simpleAuthenticationConnectionFactory"
destination-type="queue"
cache="consumer"
prefetch="1"
destination-resolver="destinationResolver"
acknowledge="transacted">
..... listeners.....
</jms:listener-container>
Thank you.
acknowledge="transacted" will not confirm the message if an exception is thrown during processing, which can result in this behaviour as the EMS daemon thinks your application is still busy processing the messages that have been delivered but not confirmed.
However, transacted is also the only acknowledge mode that guarantees re-delivery in case an exception is thrown.
What this means is that you must not let exceptions be thrown unless your application is shutting down, which can be a real pain. I've covered the various options in an article, The Chain of Custody Problem. The short version is:
Discard the message and ignore the error;
Send the message (and/or the error) to an error log or queue, which is monitored by people;
Send the message (and/or the error) to an error queue, which is processed by the sending application.
All of these options have problems, which is why Fire-and-Forget messaging sucks, but in your case you'll probably find option 2 to be the most pragmatic.
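As a very rough illustration of option 2, assuming a plain javax.jms.MessageListener and SLF4J logging (the class and method names are placeholders, not from the question):

import javax.jms.Message;
import javax.jms.MessageListener;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch: never let the exception escape onMessage, so the transacted session still
// commits; instead, record the failed message somewhere a human will actually look.
public class GuardedListener implements MessageListener {

    private static final Logger log = LoggerFactory.getLogger(GuardedListener.class);

    @Override
    public void onMessage(Message message) {
        try {
            handle(message);
        } catch (Exception e) {
            // Alternatively forward message + error to a dedicated error queue here.
            log.error("Failed to process message {}", message, e);
        }
    }

    private void handle(Message message) throws Exception {
        // ... the real business logic goes here ...
    }
}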

Huge latency observed while committing transacted persistent AMQ messages

I have the following AMQ consumer configuration file, which tries to consume 'persistent' messages from the queue. The messages are also 'transacted' (as I need to roll back if a message can't be processed in the expected way).
I see a problem with this configuration:
Whenever the consumer calls session.commit() after consuming a message, the commit call takes ~8 seconds to return. I believe this is not expected.
Can someone point out whether there are any issues with the config below?
<bean id="amqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://localhost:61616"/>
</bean>
<bean id="simpleMessageListener" class="listener.StompSpringListener" scope="prototype" />
<bean id="destination" class="org.apache.activemq.command.ActiveMQQueue">
<property name="physicalName" value="JOBS.notifications"/>
</bean>
<bean id="poolMessageListener" class="org.springframework.aop.target.CommonsPoolTargetSource">
<property name="targetBeanName" value="simpleMessageListener"/>
<property name="maxSize" value="200"/>
</bean>
<bean id="messageListenerBean" class="org.springframework.aop.framework.ProxyFactoryBean">
<property name="targetSource" ref="poolMessageListener"/>
</bean>
<jms:listener-container
container-type="default"
connection-factory="amqConnectionFactory"
acknowledge="transacted"
concurrency="200-200"
cache="consumer"
prefetch="10">
<jms:listener destination="JOBS.notifications" ref="messageListenerBean" method="onMessage"/>
</jms:listener-container>
Take a look at the ActiveMQ Spring support page. What I can see immediately is that you should be using a PooledConnectionFactory instead of a plain ActiveMQConnectionFactory. This ensures that only a single TCP connection is set up to the broker from your code, and that it is shared between the polling threads.
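A rough Java sketch of that change, reusing the broker URL from the question (the pool size is an arbitrary example):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.pool.PooledConnectionFactory;

// Sketch: wrap the raw ActiveMQ factory in a pooled one so the listener threads
// share pooled connections and sessions instead of each opening their own.
public class ConnectionFactoryConfig {

    public static PooledConnectionFactory pooledConnectionFactory() {
        ActiveMQConnectionFactory amq = new ActiveMQConnectionFactory("tcp://localhost:61616");
        PooledConnectionFactory pooled = new PooledConnectionFactory();
        pooled.setConnectionFactory(amq);
        pooled.setMaxConnections(1); // a single TCP connection to the broker
        return pooled;
    }
}

The pooled factory then replaces amqConnectionFactory in the listener-container configuration.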

Failover queue should not deliver messages until the actual queue starts delivering messages

I am using an ActiveMQ implementation for sending messages to the queue. When there is a problem with the queue, I redirect all my messages to another queue using a failover mechanism.
But my requirement is that the messages on the failover queue should not be consumed until the messages in the first queue have been consumed.
Can anyone suggest how to implement this scenario? Thanks in advance.
Here is my XML configuration:
<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://172.16.121.146:61617" />
</bean>
<bean id="cscoDest" class="org.apache.activemq.command.ActiveMQQueue">
<constructor-arg value="STOCKS.CSCO" />
</bean>
<!--The message listener-->
<bean id="portfolioListener" class="my.test.jms.Listener"></bean>
<!--Spring DMLC-->
<bean id="cscoConsumer" class="org.springframework.jms.listener.DefaultMessageListenerContainer102">
<property name="connectionFactory" ref="jmsConnectionFactory" />
<property name="destination" ref="cscoDest" />
<property name="messageListener" ref="portfolioListener" />
<property name="sessionAcknowledgeModeName" value="CLIENT_ACKNOWLEDGE"/>
</bean>
<!--Spring JMS Template-->
<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
<property name="connectionFactory" ref="jmsConnectionFactory" />
</bean>
<bean id="stockPublisher" class="my.test.jms.SpringPublisher">
<property name="template" ref="jmsTemplate" />
<property name="destinations">
<list>
<ref local="cscoDest" />
</list>
</property>
</bean>
The ActiveMQ failover mechanism is there to let the client fail over to another ActiveMQ broker if the connection with the current broker goes down.
Your requirement that the second queue should not deliver messages until the first queue is empty is very odd. What happens if someone terminates the ActiveMQ server by pulling the plug? There might still be unprocessed messages on that server, but you cannot process them.
What you want is a master/slave setup that shares a disk area (a network share somewhere). Then a second broker can pick up where the master broker stopped working.
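On the client side, a shared-storage master/slave pair is usually addressed with a failover URI, roughly like this sketch (the two broker addresses are placeholders):

import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch: the client lists both brokers; only the master (holding the shared store lock)
// accepts connections, and the client fails over when the slave takes over.
public class FailoverClientConfig {

    public static ActiveMQConnectionFactory connectionFactory() {
        return new ActiveMQConnectionFactory(
                "failover:(tcp://broker-a:61616,tcp://broker-b:61616)?randomize=false");
    }
}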

Handle activemq-spring connection errors

I have configured (with Spring) my application to listen to a JMS queue with ActiveMQ, and everything works fine.
My ActiveMQ server is installed on another machine, and sometimes it goes offline; I would like to handle the connection error. Is that possible?
Here is my Spring configuration:
<amq:connectionFactory id="jmsFactory" brokerURL="tcp://xxx.xxx.xxx.xxx:61616" />

<bean id="messageConverter" class="com.unic.thesting.main.jms.message.TheStingMessageConverter" scope="tenant"/>

<jms:listener-container concurrency="10" connection-factory="thestingJmsFactory"
                        destination-type="queue" message-converter="thestingMessageConverter">
    <jms:listener destination="in" ref="orderStatusConsumer" method="consume"/>
</jms:listener-container>

<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate" scope="tenant">
    <property name="messageConverter" ref="messageConverter" />
    <property name="connectionFactory">
        <bean class="org.springframework.jms.connection.SingleConnectionFactory" scope="tenant">
            <property name="targetConnectionFactory">
                <ref local="jmsFactory" />
            </property>
        </bean>
    </property>
</bean>
The DefaultMessageListenerContainer, which gets registered when you use <jms:listener-container>, handles recovering the connection to the JMS provider if it gets dropped for any reason (by default it retries every 5 seconds until the connection is restored), so you don't have to do anything on the listener front.
On the sending side with jmsTemplate, you will receive a runtime org.springframework.jms.JmsException if there are any issues sending a message. You should be able to catch it for any custom processing.
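For example, a send wrapped in a try/catch might look like the sketch below; the destination name "in" is taken from the listener configuration above, while the sender class and the fallback behaviour are placeholders.

import org.springframework.jms.JmsException;
import org.springframework.jms.core.JmsTemplate;

// Sketch: JmsException is a RuntimeException, so it only needs catching where you
// actually want to react to the broker being unreachable while sending.
public class OrderStatusSender {

    private final JmsTemplate jmsTemplate;

    public OrderStatusSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void send(Object payload) {
        try {
            jmsTemplate.convertAndSend("in", payload);
        } catch (JmsException e) {
            // Custom handling: retry later, persist locally, alert, etc.
            System.err.println("ActiveMQ unreachable, message not sent: " + e.getMessage());
        }
    }
}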
