How to fix ListenerExecutionFailedException: Listener threw exception amqp.AmqpRejectAndDontRequeueException: Reply received after timeout - spring-boot

I set up RabbitMQ in my Java Spring Boot application and it seems to work properly, but after running for a while (and at roughly the same time interval each time) it throws the exception below.
org.springframework.amqp.rabbit.listener.exception.ListenerExecutionFailedException: Listener threw exception
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.wrapToListenerExecutionFailedExceptionIfNeeded(AbstractMessageListenerContainer.java:1646) ~[spring-rabbit-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:1550) ~[spring-rabbit-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.actualInvokeListener(AbstractMessageListenerContainer.java:1473) ~[spring-rabbit-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:1461) ~[spring-rabbit-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:1456) ~[spring-rabbit-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.executeListener(AbstractMessageListenerContainer.java:1405) ~[spring-rabbit-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.amqp.rabbit.listener.DirectMessageListenerContainer$SimpleConsumer.callExecuteListener(DirectMessageListenerContainer.java:995) [spring-rabbit-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.amqp.rabbit.listener.DirectMessageListenerContainer$SimpleConsumer.handleDelivery(DirectMessageListenerContainer.java:955) [spring-rabbit-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at com.rabbitmq.client.impl.ConsumerDispatcher$5.run(ConsumerDispatcher.java:149) [amqp-client-5.4.3.jar!/:5.4.3]
at com.rabbitmq.client.impl.ConsumerWorkService$WorkPoolRunnable.run(ConsumerWorkService.java:104) [amqp-client-5.4.3.jar!/:5.4.3]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_201]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_201]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_201]
Caused by: org.springframework.amqp.AmqpRejectAndDontRequeueException: Reply received after timeout
at org.springframework.amqp.rabbit.core.RabbitTemplate.onMessage(RabbitTemplate.java:2523) ~[spring-rabbit-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.amqp.rabbit.listener.DirectReplyToMessageListenerContainer.lambda$setMessageListener$1(DirectReplyToMessageListenerContainer.java:115) ~[spring-rabbit-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
at org.springframework.amqp.rabbit.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:1547) ~[spring-rabbit-2.1.4.RELEASE.jar!/:2.1.4.RELEASE]
... 11 common frames omitted
Below you can find the consumer code for the RabbitMQ configuration:
@Bean
public DirectExchange exchange() {
    return new DirectExchange("rpc");
}

@Bean
@Qualifier("Consumer")
public Queue queue() {
    return new Queue(RoutingEngine.class.getSimpleName() + "_" + config.getDatasetName());
}

@Bean
public Binding binding(Queue queue, DirectExchange exchange) {
    return BindingBuilder.bind(queue).to(exchange).with(Consumer.class.getSimpleName() + "_" + config.getDatasetName());
}

@Bean
@Qualifier("ConsumerExport")
public AmqpInvokerServiceExporter exporter(RabbitTemplate template, Consumer service) {
    AmqpInvokerServiceExporter exporter = new AmqpInvokerServiceExporter();
    exporter.setAmqpTemplate(template);
    exporter.setService(service);
    exporter.setServiceInterface(Consumer.class);
    return exporter;
}

@Bean
public SimpleMessageListenerContainer container(ConnectionFactory connectionFactory, @Qualifier("consumer") Queue queue,
        @Qualifier("RoutingEngineExport") AmqpInvokerServiceExporter exporter) {
    SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(connectionFactory);
    container.setPrefetchCount(5);
    container.setQueues(queue);
    container.setMessageListener(exporter);
    logger.info("initialize rabbitmq with {} Consumers", config.getCount());
    container.setConcurrency(1 + "-" + config.getCount());
    return container;
}

@Bean
public FanoutExchange fanoutExchange() {
    return new FanoutExchange("event");
}

@Bean
@Qualifier("reinitialize")
public Queue reInitQueue() {
    return new Queue("bus." + config.getConsumerName(), false, true, true);
}

@Bean
public Binding topicBinding(@Qualifier("reinitialize") Queue queue, FanoutExchange fanoutExchange) {
    return BindingBuilder
            .bind(queue)
            .to(fanoutExchange);
}

@Bean
public MessageListener<Consumer> messageListener(RabbitTemplate rabbitTemplate, Consumer target) {
    return new MessageListener<>(rabbitTemplate, target, "engine", config.getConsumerName());
}
And the producer configuration code is:
@Bean
public AmqpProxyFactoryBean rerouteProxy(RabbitTemplate template) {
    AmqpProxyFactoryBean proxy = new AmqpProxyFactoryBean();
    proxy.setAmqpTemplate(template);
    proxy.setServiceInterface(ConsumerService.class);
    proxy.setRoutingKey(ConsumerService.class.getSimpleName());
    return proxy;
}

@Bean
public Map<String, Consumer> consumerEngines(RabbitTemplate template) {
    Map<String, Consumer> ret = new ConcurrentHashMap<>();
    // FIXME read from config
    List<String> lst = Arrays.asList(config.getEngines());
    lst.parallelStream().forEach(k -> {
        AmqpProxyFactoryBean proxy = new AmqpProxyFactoryBean();
        template.setReceiveTimeout(400);
        template.setReplyTimeout(400);
        proxy.setAmqpTemplate(template);
        proxy.setServiceInterface(Consumer.class);
        proxy.setRoutingKey(Consumer.class.getSimpleName() + "_" + k);
        proxy.afterPropertiesSet();
        ret.put(k, (Consumer) proxy.getObject());
    });
    return ret;
}
What causes this problem and how can I fix it?
NOTE 1: I have 3 producers and 3 consumers on different servers, and RabbitMQ is running on another server.
NOTE 2: Consumers are very fast; their response time is less than 100 milliseconds.

Caused by: org.springframework.amqp.AmqpRejectAndDontRequeueException: Reply received after timeout
This is caused by one of two reasons:
1. The reply took too long to arrive (in which case the send-and-receive operation has already returned null), or
2. a consumer sent more than one reply for the same request.
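If the first cause applies here, note that consumerEngines() above sets the shared RabbitTemplate's replyTimeout (and receiveTimeout) to 400 ms, so any RPC reply that takes longer than that, for example under broker or network load, arrives after the template has already given up and is then rejected with this exception. A minimal sketch of one way to address it, assuming a dedicated RabbitTemplate per proxy and an illustrative 5-second timeout (not a value from the question):
@Bean
public Map<String, Consumer> consumerEngines(ConnectionFactory connectionFactory) {
    Map<String, Consumer> ret = new ConcurrentHashMap<>();
    for (String k : config.getEngines()) {
        // a dedicated template per proxy, so no other code mutates its timeouts
        RabbitTemplate template = new RabbitTemplate(connectionFactory);
        template.setReplyTimeout(5000); // illustrative value; size it to the worst-case reply latency
        AmqpProxyFactoryBean proxy = new AmqpProxyFactoryBean();
        proxy.setAmqpTemplate(template);
        proxy.setServiceInterface(Consumer.class);
        proxy.setRoutingKey(Consumer.class.getSimpleName() + "_" + k);
        proxy.afterPropertiesSet();
        ret.put(k, (Consumer) proxy.getObject());
    }
    return ret;
}
For the second cause, make sure the exported service answers each request exactly once.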

Related

Spring Kafka's MessageListener still consuming after MessageListenerContainer is stopped

My goal is to create my own Camel component for Spring Kafka.
I have managed to create it and start consuming. I also want to be able to stop the component and the consumption (via JMX, from another Camel route, ...) without losing any messages.
To do that, when stopping the Camel component, I need to stop the MessageListenerContainer and, ultimately, the MessageListener that is registered with it.
My problem is that after the MessageListenerContainer is stopped, the MessageListener keeps processing messages.
@Override
protected void doStart() throws Exception {
    super.doStart();
    if (kafkaMessageListenerContainer != null) {
        return;
    }
    kafkaMessageListenerContainer = kafkaListenerContainerFactory.createContainer(endpoint.getTopicName());
    kafkaMessageListenerContainer.setupMessageListener(messageListener());
    kafkaMessageListenerContainer.start();
}

@Override
protected void doStop() throws Exception {
    LOG.info("STOPPING kafkaMessageListenerContainer");
    kafkaMessageListenerContainer.stop();
    LOG.info("STOPPED kafkaMessageListenerContainer");
    super.doStop();
}

private MessageListener<Object, Object> messageListener() {
    return new MessageListener<Object, Object>() {
        @Override
        public void onMessage(ConsumerRecord<Object, Object> data) {
            LOG.info("Record received: {}", data.offset());
            // ...pass the message to the Camel processing route
            LOG.info("Record processed: {}", data.offset());
        }
    };
}
This is a snippet from the log:
{"time":"2020-11-27T14:01:57.047Z","message":"Record received: 2051","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"consumer-0-C-1","level":"INFO","tId":"c5efc5db-5981-4477-925a-83ffece49572"}
{"time":"2020-11-27T14:01:57.153Z","message":"STOPPED kafkaMessageListenerContainer","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"Camel (camelContext) thread #2 - ShutdownTask","level":"INFO"}
{"time":"2020-11-27T14:01:57.153Z","message":"Route: testTopic.consumer shutdown complete, was consuming from: my-kafka://events.TestTopic","logger":"org.apache.camel.impl.DefaultShutdownStrategy","thread-id":"Camel (camelContext) thread #2 - ShutdownTask","level":"INFO"}
{"time":"2020-11-27T14:01:57.159Z","message":"Record processed: 2051","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"consumer-0-C-1","level":"INFO","tId":"8c835691-ba8d-43c2-b3e0-90a2f768ed7f"}
{"time":"2020-11-27T14:01:57.165Z","message":"Record received: 2052","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"consumer-0-C-1","level":"INFO","tId":"8c835691-ba8d-43c2-b3e0-90a2f768ed7f"}
{"time":"2020-11-27T14:01:57.275Z","message":"Record processed: 2052","logger":"com.my.lib.springboot.camel.component.kafka.KafkaAdapterConsumer","thread-id":"consumer-0-C-1","level":"INFO","tId":"f7bcebb4-9e5e-46a1-bc5b-569264914b05"}
...
I would expect the MessageListener to stop consuming after the MessageListenerContainer is gracefully stopped. I must be missing something; any suggestions?
Many thanks!
I found the issue that caused my problem.
For some reason I was overriding the consumerFactory bean, which was not correct:
@Bean
public ConsumerFactory<Object, Object> consumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
    props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}
After removing this and using the default one, which is configured in application.yml, the problem was resolved.
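For reference, if a custom container factory is ever still needed, a minimal sketch (bean name and type parameters here are illustrative) is to reuse the ConsumerFactory that Spring Boot auto-configures from application.yml instead of redefining it:
@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory) {
    // reuse the ConsumerFactory built by Spring Boot from spring.kafka.* properties
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    return factory;
}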

Two queue managers with different channels not allowed

I use Spring Boot 2.3.2 with IBM MQ.
I use a property file to set up MQ:
ibm.mq.queueManager=
ibm.mq.channel=
ibm.mq.connName
ibm.mq.user
ibm.mq.password
ibm.mq.useIBMCipherMappings=false
ibm.mq.userAuthenticationMQCSP=false
ibm.mq.sslCipherSuite=TLS_RSA_WITH_AES_128_CBC_SHA256
That works fine.
Now I need another factory so that I can connect to another channel, i.e., a factory similar to the one created by default but with a different channel name.
So I created a config class:
@EnableJms
@Configuration
public class JmsConfig {

    @Bean
    public MQQueueConnectionFactory jmsMQConnectionFactoryPayment() throws JMSException {
        MQQueueConnectionFactory connectionFactory = new MQQueueConnectionFactory();
        connectionFactory.setQueueManager("AD_TEST");
        connectionFactory.setChannel("FROM.PAYMENTMNG");
        connectionFactory.setConnectionNameList("wmqd1.int.test.com(1818),wmqd2.int.test.com(1818)");
        connectionFactory.setBooleanProperty(WMQConstants.USER_AUTHENTICATION_MQCSP, false);
        connectionFactory.setStringProperty(WMQConstants.WMQ_SSL_CIPHER_SUITE, "TLS_RSA_WITH_AES_128_CBC_SHA256");
        connectionFactory.setIntProperty(CommonConstants.WMQ_CONNECTION_MODE, CommonConstants.WMQ_CM_CLIENT);
        System.setProperty("com.ibm.mq.cfg.useIBMCipherMappings", String.valueOf(Boolean.FALSE));
        connectionFactory.createConnection("123", "123");
        return connectionFactory;
    }

    @Bean
    JmsListenerContainerFactory<?> jmsContainerFactoryPayment() throws JMSException {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
        factory.setConnectionFactory(jmsMQConnectionFactoryPayment());
        return factory;
    }

    @Bean("payment")
    JmsTemplate jmsTemplatePayment() throws JMSException {
        JmsTemplate template = new JmsTemplate();
        template.setConnectionFactory(jmsMQConnectionFactoryPayment());
        return template;
    }
}
In a class I have
@JmsListener(destination = "xxx", containerFactory = "jmsContainerFactoryPayment") {
    ....
}
When I start the application, I get:
com.ibm.msg.client.jms.DetailedIllegalStateException: JMSWMQ0018: E Failed to connect to queue manager 'AD_TEST' with connection mode 'Client' and host name 'wmqd1.int.test.com(1818),wmqd2.int.test.com(1818)'.
at com.ibm.msg.client.wmq.common.internal.Reason.reasonToException(Reason.java:489) ~[com.ibm.mq.allclient-9.2.0.0.jar:9.2.0.0 - p920-L200710.DE]
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:215) ~[com.ibm.mq.allclient-9.2.0.0.jar:9.2.0.0 - p920-L200710.DE]
at com.ibm.msg.client.wmq.internal.WMQConnection.<init>(WMQConnection.java:450) ~[com.ibm.mq.allclient-9.2.0.0.jar:9.2.0.0 - p920-L200710.DE]
at com.ibm.msg.client.wmq.factories.WMQConnectionFactory.createV7ProviderConnection(WMQConnectionFactory.java:8475) ~[com.ibm.mq.allclient-9.2.0.0.jar:9.2.0.0
Caused by: com.ibm.mq.MQException: JMSCMQ0001: IBM MQ call failed with compcode '2' ('MQCC_FAILED') ; reason '2538' ('MQRC_HOST_NOT_AVAILABLE').
at com.ibm.msg.client.wmq.common.internal.Reason.createException(Reason.java:203) ~[com.ibm.mq.allclient-9.2.0.0.jar:9.2.0.0 - p920-L200710.DE]
... 51 common frames omitted
It seems like we can't have two queue managers with the same host but different channels.
I have configured Spring Boot with IBM MQ to test this. Starting with the sample provided in this IBM Messaging GitHub Repo, I added two additional listeners as follows.
First I added some additional properties to the application.properties file
my.mq.queueManager=QM2
my.mq.channel=DEV.APP.SVRCONN
my.mq.connName=localhost(1415)
my.mq.user=<your_user_name>
my.mq.password=<your_password>
I left Application.java unchanged, copied Listener.java to create ListenerTwo.java and ListenerThree.java. I then added a new ListenerBeanConfig.java class to the sample.
ListenerTwo.java was changed to bind to a new ConnectionFactory configuration, listenerTwoFactory, which is created later:
@JmsListener(destination = Application.qName, containerFactory = "listenerTwoFactory")
ListenerThree.java was changed to bind to a new listenerThreeFactory configuration and queue:
@JmsListener(destination = "DEV.QUEUE.2", containerFactory = "listenerThreeFactory")
The ListenerBeanConfig.java class declaration was annotated so that I can access my properties, with a String field for each property (e.g., queueManager, channel, connName) and a setter method for each of them, e.g.:
@Configuration
@EnableJms
@ConfigurationProperties(prefix = "my.mq")
public class ListenerBeanConfig {

    String connName;

    public void setConnName(String value) {
        System.out.println("connName is set to: " + value);
        connName = value;
    }
I registered the two new Listener beans:
@Bean
public ListenerTwo myListenerTwo() {
    return new ListenerTwo();
}

@Bean
public ListenerThree myListenerThree() {
    return new ListenerThree();
}
I then created the new connection factory configurations, listenerTwoFactory and listenerThreeFactory.
For listenerTwoFactory I used the JMS classes provided by com.ibm.mq.jms in the Spring Boot config:
JmsConnectionFactory cf;

@Bean
public DefaultJmsListenerContainerFactory listenerTwoFactory() {
    DefaultJmsListenerContainerFactory containerFactory = new DefaultJmsListenerContainerFactory();
    try {
        JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
        cf = ff.createConnectionFactory();
        cf.setStringProperty(WMQConstants.WMQ_CONNECTION_NAME_LIST, connName);
        cf.setStringProperty(WMQConstants.WMQ_CHANNEL, channel);
        cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
        cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, queueManager);
        cf.setStringProperty(WMQConstants.WMQ_APPLICATIONNAME, "Spring Boot ListenerTwo");
        cf.setBooleanProperty(WMQConstants.USER_AUTHENTICATION_MQCSP, true);
        cf.setStringProperty(WMQConstants.USERID, user);
        cf.setStringProperty(WMQConstants.PASSWORD, password);
    } catch (JMSException jmsex) {
        System.out.println(jmsex);
    }
    containerFactory.setConnectionFactory(cf);
    return containerFactory;
}
For the listenerThreeFactory I used the MQ JMS helper classes from com.ibm.mq.spring.boot.
@Bean
public DefaultJmsListenerContainerFactory listenerThreeFactory() {
    MQConfigurationProperties myProps = new MQConfigurationProperties();
    myProps.setUser(user);
    myProps.setChannel(channel);
    myProps.setConnName(connName);
    myProps.setPassword(password);
    myProps.setQueueManager(queueManager);
    // No customizer
    MQConnectionFactoryFactory mqcff = new MQConnectionFactoryFactory(myProps, null);
    MQConnectionFactory mqcf = mqcff.createConnectionFactory(MQConnectionFactory.class);
    DefaultJmsListenerContainerFactory containerFactory = new DefaultJmsListenerContainerFactory();
    containerFactory.setConnectionFactory(mqcf);
    return containerFactory;
}
Finally, I compiled and ran the new sample configuration. Using the IBM MQ Console for two IBM MQ queue manager Docker instances, I put messages on QM1: DEV.QUEUE.1 and QM2: DEV.QUEUE.1, DEV.QUEUE.2. On the terminal I see the following output:
========================================
Received message is: message 1
========================================
========================================
ListenerTwo received message is: message 2
========================================
========================================
ListenerThree received message is: message 3
========================================
I also tested with all three listeners connected to QM2 via two different channels: DEV.APP.SVRCONN and DEV.APP.SVRCONN.TWO.
I am sure there are far more elegant ways to manage the additional properties.
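For example, one possible simplification (a sketch, not taken from the sample; it assumes class-level binding under the my.* prefix and Spring Boot's nested-object binding through a getter) is to let Boot bind my.mq.* straight onto a nested MQConfigurationProperties object instead of hand-writing a field and setter per property:
@Configuration
@EnableJms
@ConfigurationProperties(prefix = "my")
public class ListenerBeanConfig {

    // my.mq.queueManager, my.mq.channel, my.mq.connName, ... are bound onto this nested object
    private final MQConfigurationProperties mq = new MQConfigurationProperties();

    public MQConfigurationProperties getMq() {
        return mq;
    }

    @Bean
    public DefaultJmsListenerContainerFactory listenerThreeFactory() {
        MQConnectionFactoryFactory mqcff = new MQConnectionFactoryFactory(mq, null);
        MQConnectionFactory mqcf = mqcff.createConnectionFactory(MQConnectionFactory.class);
        DefaultJmsListenerContainerFactory containerFactory = new DefaultJmsListenerContainerFactory();
        containerFactory.setConnectionFactory(mqcf);
        return containerFactory;
    }
}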

DestinationResolutionException: no output-channel or replyChannel header available

I am implementing a flow where, using a MongoDbMessageSource, I get a list of users, and I want to process each document in parallel. For this I use the default behavior of .split().
But the following error occurs after the split:
o.s.i.channel.PublishSubscribeChannel : preSend on channel 'errorChannel', message: ErrorMessage [payload=org.springframework.messaging.MessagingException: Dispatcher failed to deliver Message; nested exception is org.springframework.messaging.core.DestinationResolutionException: no output-channel or replyChannel header available, failedMessage=GenericMessage [payload=UserEntity{id=5974dfe53681ac160c78dc0f, firstName=David, lastName=García, age=14, socialMedia=[]}, headers={sequenceNumber=4, correlationId=8f8f7b7a-832a-8942-1922-26b6a7529091, id=bb373e42-d59c-42e6-d221-68bf1f56fec3, mongo_collectionName=users, sequenceSize=5, timestamp=1500831727759}], headers={id=9187ffcd-8c79-eb1e-8791-1cfe558ab134, timestamp=1500831727762}]
The code is as follows:
@Configuration
@IntegrationComponentScan
public class InfrastructureConfiguration {

    private static Logger logger = LoggerFactory.getLogger(InfrastructureConfiguration.class);

    /**
     * The Pollers builder factory can be used to configure common bean definitions or
     * those created from IntegrationFlowBuilder EIP-methods.
     */
    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata poller() {
        return Pollers.fixedDelay(10, TimeUnit.SECONDS).get();
    }

    @Bean
    public TaskExecutor taskExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(5);
        executor.setMaxPoolSize(10);
        executor.setQueueCapacity(25);
        return executor;
    }

    /**
     * MongoDbMessageSource is an instance of MessageSource which returns a Message with a payload
     * which is the result of execution of a Query.
     */
    @Bean
    @Autowired
    public MessageSource<Object> mongoMessageSource(MongoDbFactory mongo) {
        MongoDbMessageSource messageSource = new MongoDbMessageSource(mongo, new LiteralExpression("{}"));
        messageSource.setExpectSingleResult(false);
        messageSource.setEntityClass(UserEntity.class);
        messageSource.setCollectionNameExpression(new LiteralExpression("users"));
        return messageSource;
    }

    @Bean
    public DirectChannel inputChannel() {
        return new DirectChannel();
    }

    @Bean
    @Autowired
    public IntegrationFlow processUsers(MongoDbFactory mongo, PollerMetadata poller) {
        return IntegrationFlows.from(mongoMessageSource(mongo), c -> c.poller(poller))
                .split()
                .channel(MessageChannels.executor("executorChannel", this.taskExecutor()))
                .handle((GenericHandler<UserEntity>) (payload, headers) -> {
                    logger.debug("user:" + payload + " on thread "
                            + Thread.currentThread().getName());
                    return payload;
                })
                .aggregate()
                .get();
    }
}
Does anyone know what I'm doing wrong? Thanks in advance.
Update: using a MessageHandler as suggested by Barath:
@Bean
@Autowired
public IntegrationFlow processUsers(MongoDbFactory mongo, PollerMetadata poller) {
    return IntegrationFlows.from(mongoMessageSource(mongo), c -> c.poller(poller))
            .split()
            .channel(MessageChannels.executor("executorChannel", this.taskExecutor()))
            .wireTap(sf -> sf.handle(user -> logger.debug("user:" + user.getPayload().toString() + " on thread " + Thread.currentThread().getName())))
            .aggregate()
            .get();
}
The error persists; the full error trace is below:
2017-07-23 21:46:36.785 DEBUG 15148 --- [ taskExecutor-4] o.s.integration.handler.LoggingHandler : _org.springframework.integration.errorLogger.handler received message: ErrorMessage [payload=org.springframework.messaging.MessagingException: Dispatcher failed to deliver Message; nested exception is org.springframework.messaging.core.DestinationResolutionException: no output-channel or replyChannel header available, failedMessage=GenericMessage [payload=UserEntity{id=5974fd123681ac3b2c5c343a, firstName=David, lastName=García, age=14, socialMedia=[]}, headers={sequenceNumber=4, correlationId=da7be297-992b-8f5a-d41c-58a89e654fcc, id=eb55d6bf-8108-4be5-8b32-6260d4ceea9b, mongo_collectionName=users, sequenceSize=5, timestamp=1500839196770}], headers={id=83a833cd-ac87-f6be-fb75-8d6907e3d194, timestamp=1500839196784}]
2017-07-23 21:46:36.788 ERROR 15148 --- [ taskExecutor-4] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Dispatcher failed to deliver Message; nested exception is org.springframework.messaging.core.DestinationResolutionException: no output-channel or replyChannel header available, failedMessage=GenericMessage [payload=UserEntity{id=5974fd123681ac3b2c5c343a, firstName=David, lastName=García, age=14, socialMedia=[]}, headers={sequenceNumber=4, correlationId=da7be297-992b-8f5a-d41c-58a89e654fcc, id=eb55d6bf-8108-4be5-8b32-6260d4ceea9b, mongo_collectionName=users, sequenceSize=5, timestamp=1500839196770}]
at org.springframework.integration.dispatcher.AbstractDispatcher.wrapExceptionIfNecessary(AbstractDispatcher.java:133)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:120)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:148)
at org.springframework.integration.dispatcher.UnicastingDispatcher.access$000(UnicastingDispatcher.java:53)
at org.springframework.integration.dispatcher.UnicastingDispatcher$3.run(UnicastingDispatcher.java:129)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.messaging.core.DestinationResolutionException: no output-channel or replyChannel header available
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutput(AbstractMessageProducingHandler.java:353)
at org.springframework.integration.handler.AbstractMessageProducingHandler.produceOutput(AbstractMessageProducingHandler.java:269)
at org.springframework.integration.handler.AbstractMessageProducingHandler.sendOutputs(AbstractMessageProducingHandler.java:186)
at org.springframework.integration.aggregator.AbstractCorrelatingMessageHandler.completeGroup(AbstractCorrelatingMessageHandler.java:671)
at org.springframework.integration.aggregator.AbstractCorrelatingMessageHandler.handleMessageInternal(AbstractCorrelatingMessageHandler.java:418)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:127)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116)
... 7 more
2017-07-23 21:46:36.788 DEBUG 15148 --- [ taskExecutor-4] o.s.i.channel.PublishSubscribeChannel : postSend (sent=true) on channel 'errorChannel', message: ErrorMessage [payload=org.springframework.messaging.MessagingException: Dispatcher failed to deliver Message; nested exception is org.springframework.messaging.core.DestinationResolutionException: no output-channel or replyChannel header available, failedMessage=GenericMessage [payload=UserEntity{id=5974fd123681ac3b2c5c343a, firstName=David, lastName=García, age=14, socialMedia=[]}, headers={sequenceNumber=4, correlationId=da7be297-992b-8f5a-d41c-58a89e654fcc, id=eb55d6bf-8108-4be5-8b32-6260d4ceea9b, mongo_collectionName=users, sequenceSize=5, timestamp=1500839196770}], headers={id=83a833cd-ac87-f6be-fb75-8d6907e3d194, timestamp=1500839196784}]
Your problem is that the Aggregator is a request-reply component, but you don't have anything after it in your flow. That's why you get that error. You have to decide what to do with the aggregator's result.
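For example (a sketch based on the flow from the question; the terminal logging handler is illustrative), you could end the flow with a one-way handler that consumes the aggregated result, or drop .aggregate() entirely if the combined result is not needed:
@Bean
public IntegrationFlow processUsers(MongoDbFactory mongo, PollerMetadata poller) {
    return IntegrationFlows.from(mongoMessageSource(mongo), c -> c.poller(poller))
            .split()
            .channel(MessageChannels.executor("executorChannel", this.taskExecutor()))
            .handle((GenericHandler<UserEntity>) (payload, headers) -> {
                logger.debug("user:" + payload + " on thread " + Thread.currentThread().getName());
                return payload;
            })
            .aggregate()
            // terminal one-way handler: gives the aggregator somewhere to send its reply
            .handle(aggregated -> logger.debug("aggregated users: " + aggregated.getPayload()))
            .get();
}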

Caused by: org.apache.commons.net.MalformedServerReplyException: Could not parse response code

I'm developing a Spring Boot application which reads data from an FTP connection. I have been referring to this article: http://docs.spring.io/spring-integration/reference/html/ftp.html
I've added the dependency below to pom.xml:
<dependency>
    <groupId>org.springframework.integration</groupId>
    <artifactId>spring-integration-ftp</artifactId>
    <version>4.3.2.RELEASE</version>
</dependency>
Here is my Spring Boot application:
@SpringBootApplication
public class FtpApplication {

    public static void main(String[] args) {
        SpringApplication.run(FtpApplication.class, args);
    }

    @Bean
    public SessionFactory<FTPFile> ftpSessionFactory() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(14147);
        sf.setUsername("root");
        sf.setPassword("root");
        return new CachingSessionFactory<FTPFile>(sf);
    }

    @Bean
    public FtpInboundFileSynchronizer ftpInboundFileSynchronizer() {
        FtpInboundFileSynchronizer fileSynchronizer = new FtpInboundFileSynchronizer(ftpSessionFactory());
        fileSynchronizer.setDeleteRemoteFiles(false);
        fileSynchronizer.setRemoteDirectory("/");
        fileSynchronizer.setFilter(new FtpSimplePatternFileListFilter("*.xml"));
        return fileSynchronizer;
    }

    @Bean
    @InboundChannelAdapter(channel = "ftpChannel")
    public MessageSource<File> ftpMessageSource() {
        FtpInboundFileSynchronizingMessageSource source = new FtpInboundFileSynchronizingMessageSource(
                ftpInboundFileSynchronizer());
        source.setLocalDirectory(new File("ftp-inbound"));
        source.setAutoCreateLocalDirectory(true);
        source.setLocalFilter(new AcceptOnceFileListFilter<File>());
        return source;
    }

    @Bean
    @ServiceActivator(inputChannel = "ftpChannel")
    public MessageHandler handler() {
        return new MessageHandler() {
            @Override
            public void handleMessage(Message<?> message) throws MessagingException {
                File file = (File) message.getPayload();
                BufferedReader br;
                String sCurrentLine;
                try {
                    br = new BufferedReader(new FileReader(file.getPath()));
                    while ((sCurrentLine = br.readLine()) != null) {
                        System.out.println(sCurrentLine);
                    }
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                System.out.println(message.getPayload());
            }
        };
    }

    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata defaultPoller() {
        PollerMetadata pollerMetadata = new PollerMetadata();
        pollerMetadata.setTrigger(new PeriodicTrigger(10));
        return pollerMetadata;
    }
}
From Windows Explorer I'm adding a file. Now, when control comes to the MessageHandler, I see the error below. I can get the file and read its contents correctly, but I'm unable to figure out what the error is about:
2016-09-27 08:25:07.548 ERROR 10292 --- [ask-scheduler-1] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessagingException: Problem occurred while synchronizing remote to local directory; nested exception is org.springframework.messaging.MessagingException: Failed to obtain pooled item; nested exception is java.lang.IllegalStateException: failed to create FTPClient
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:274)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:193)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizingMessageSource.doReceive(AbstractInboundFileSynchronizingMessageSource.java:59)
at org.springframework.integration.endpoint.AbstractMessageSource.receive(AbstractMessageSource.java:134)
at org.springframework.integration.endpoint.SourcePollingChannelAdapter.receiveMessage(SourcePollingChannelAdapter.java:209)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:245)
at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:58)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:190)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:186)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:353)
at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55)
at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50)
at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:51)
at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:344)
at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54)
at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.messaging.MessagingException: Failed to obtain pooled item; nested exception is java.lang.IllegalStateException: failed to create FTPClient
at org.springframework.integration.util.SimplePool.getItem(SimplePool.java:178)
at org.springframework.integration.file.remote.session.CachingSessionFactory.getSession(CachingSessionFactory.java:123)
at org.springframework.integration.file.remote.RemoteFileTemplate.execute(RemoteFileTemplate.java:433)
at org.springframework.integration.file.remote.synchronizer.AbstractInboundFileSynchronizer.synchronizeToLocalDirectory(AbstractInboundFileSynchronizer.java:232)
... 22 more
Caused by: java.lang.IllegalStateException: failed to create FTPClient
at org.springframework.integration.ftp.session.AbstractFtpSessionFactory.getSession(AbstractFtpSessionFactory.java:169)
at org.springframework.integration.ftp.session.AbstractFtpSessionFactory.getSession(AbstractFtpSessionFactory.java:41)
at org.springframework.integration.file.remote.session.CachingSessionFactory$1.createForPool(CachingSessionFactory.java:81)
at org.springframework.integration.file.remote.session.CachingSessionFactory$1.createForPool(CachingSessionFactory.java:78)
at org.springframework.integration.util.SimplePool.doGetItem(SimplePool.java:188)
at org.springframework.integration.util.SimplePool.getItem(SimplePool.java:169)
... 25 more
Caused by: org.apache.commons.net.MalformedServerReplyException: Could not parse response code.
Server Reply: FZS ..... some special characters here.....
at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:336)
at org.apache.commons.net.ftp.FTP.__getReply(FTP.java:292)
at org.apache.commons.net.ftp.FTP._connectAction_(FTP.java:418)
at org.apache.commons.net.ftp.FTPClient._connectAction_(FTPClient.java:966)
at org.apache.commons.net.ftp.FTPClient._connectAction_(FTPClient.java:954)
at org.apache.commons.net.SocketClient.connect(SocketClient.java:189)
at org.apache.commons.net.SocketClient.connect(SocketClient.java:209)
at org.springframework.integration.ftp.session.AbstractFtpSessionFactory.createClient(AbstractFtpSessionFactory.java:191)
at org.springframework.integration.ftp.session.AbstractFtpSessionFactory.getSession(AbstractFtpSessionFactory.java:166)
... 30 more
I'm new to Spring Integration; please help. Let me know which concepts I should study further.
You are most probably connecting to the FileZilla FTP server administrative port (14147).
That port uses a proprietary protocol understood by the "FileZilla Server Interface", not FTP, and you are not supposed to connect to it from your application.
Connect to the FTP port instead; by default that is 21. It is configured in the "FileZilla Server Interface" on the "General Settings" page of the "FileZilla Server Options" as "Listen to these ports".
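In other words, in the ftpSessionFactory() bean from the question, the port should point at the FTP listener rather than the admin interface; a sketch assuming the default FTP port 21:
@Bean
public SessionFactory<FTPFile> ftpSessionFactory() {
    DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
    sf.setHost("localhost");
    sf.setPort(21); // the FTP listening port, not FileZilla's administrative port 14147
    sf.setUsername("root");
    sf.setPassword("root");
    return new CachingSessionFactory<FTPFile>(sf);
}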

Camel lookup RemoteConnectionFactory on Wildfly

I'm very new to Apache Camel and completely new to Spring. I'm trying to send some JMS messages to the embedded HornetQ in WildFly (ver. 8.1.0). Here is my code:
public class CamelJMS {

    public static void main(String[] args) throws Exception {
        try {
            CamelContext cc = new DefaultCamelContext();
            Properties prop = new Properties();
            prop.put(Context.INITIAL_CONTEXT_FACTORY,
                    "org.jboss.naming.remote.client.InitialContextFactory");
            prop.put(Context.URL_PKG_PREFIXES,
                    "org.jboss.jms.jndi.JNDIProviderAdapter");
            prop.put(Context.PROVIDER_URL, "http-remoting://localhost:8080");
            prop.put(Context.SECURITY_PRINCIPAL,
                    System.getProperty("username", "usr"));
            prop.put(Context.SECURITY_CREDENTIALS,
                    System.getProperty("password", "pwd"));
            JndiTemplate jndiT = new JndiTemplate(prop);
            jndiT.bind("ccf", "jms/RemoteConnectionFactory");
            JndiObjectFactoryBean jndiCFB = new JndiObjectFactoryBean();
            jndiCFB.setJndiTemplate(jndiT);
            JmsComponent jmsC = JmsComponent.jmsComponent((ConnectionFactory) jndiCFB.getObject());
            cc.addComponent("jmsrc", jmsC);
            cc.addRoutes(new RouteBuilder() {
                @Override
                public void configure() throws Exception {
                    System.out.println("Go!");
                    onException(Throwable.class)
                            .handled(true)
                            .process(new Processor() {
                                @Override
                                public void process(Exchange arg0) throws Exception {
                                    System.out.println("error.");
                                    ((Exception) arg0.getProperty("CamelExceptionCaught", Exception.class)).printStackTrace();
                                }
                            });
                    from("file:///Users/Foo/Desktop/IN")
                            .process(new Processor() {
                                @Override
                                public void process(Exchange arg0) throws Exception {
                                    System.out.println(arg0.getIn().getHeader("CamelFileAbsolutePath", String.class));
                                    System.out.println(arg0.getIn().getBody(String.class));
                                }
                            })
                            .to("jms:jms/generatoreQueue?connectionFactory=ccf");
                }
            });
            cc.start();
            Thread.sleep(10000);
            cc.stop();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I'm sure about my WildFly configuration because I can access the same queue using a non-Camel client. When I launch my client I get this error:
org.jboss.naming.remote.protocol.NamingIOException: Failed to bind [Root exception is java.io.IOException: Internal server error.]
at org.jboss.naming.remote.client.ClientUtil.namingException(ClientUtil.java:49)
at org.jboss.naming.remote.protocol.v1.Protocol$2.execute(Protocol.java:220)
at org.jboss.naming.remote.protocol.v1.Protocol$2.execute(Protocol.java:179)
at org.jboss.naming.remote.protocol.v1.RemoteNamingStoreV1.bind(RemoteNamingStoreV1.java:108)
at org.jboss.naming.remote.client.HaRemoteNamingStore$2.operation(HaRemoteNamingStore.java:288)
at org.jboss.naming.remote.client.HaRemoteNamingStore$2.operation(HaRemoteNamingStore.java:285)
at org.jboss.naming.remote.client.HaRemoteNamingStore.namingOperation(HaRemoteNamingStore.java:137)
at org.jboss.naming.remote.client.HaRemoteNamingStore.bind(HaRemoteNamingStore.java:284)
at org.jboss.naming.remote.client.RemoteContext.bind(RemoteContext.java:133)
at org.jboss.naming.remote.client.RemoteContext.bind(RemoteContext.java:137)
at javax.naming.InitialContext.bind(InitialContext.java:419)
at org.springframework.jndi.JndiTemplate$2.doInContext(JndiTemplate.java:198)
at org.springframework.jndi.JndiTemplate.execute(JndiTemplate.java:87)
at org.springframework.jndi.JndiTemplate.bind(JndiTemplate.java:196)
at edu.pezzati.camel.jms.broker.CamelJMSBroker.main(CamelJMSBroker.java:38)
Caused by: java.io.IOException: Internal server error.
at org.jboss.naming.remote.protocol.v1.RemoteNamingServerV1$MessageReciever$1.run(RemoteNamingServerV1.java:82)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Looking at the server log, I found this:
...
09:12:02,585 INFO [org.jboss.as.naming] (default task-5) JBAS011806: Channel end notification received, closing channel Channel ID 793d9f9e (inbound) of Remoting c
onnection 4406e6f5 to /127.0.0.1:49289
09:12:20,121 ERROR [org.jboss.as.naming] (pool-1-thread-1) JBAS011807: Unexpected internal error: java.lang.UnsupportedOperationException: JBAS011859: Naming context is read-only
at org.jboss.as.naming.WritableServiceBasedNamingStore.requireOwner(WritableServiceBasedNamingStore.java:161)
at org.jboss.as.naming.WritableServiceBasedNamingStore.bind(WritableServiceBasedNamingStore.java:66)
at org.jboss.as.naming.NamingContext.bind(NamingContext.java:253)
at org.jboss.naming.remote.protocol.v1.Protocol$2.handleServerMessage(Protocol.java:249)
at org.jboss.naming.remote.protocol.v1.RemoteNamingServerV1$MessageReciever$1.run(RemoteNamingServerV1.java:73)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [rt.jar:1.7.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [rt.jar:1.7.0_45]
at java.lang.Thread.run(Thread.java:744) [rt.jar:1.7.0_45]
...
Of course I'm misconfiguring something in Spring's JndiTemplate but I can't figure out what.
I'm not familiar with WildFly and only a little familiar with JBoss, but you said the configuration parameters should be correct. So, based on my experience configuring the Camel JmsComponent...
Try this:
Properties prop = new Properties();
prop.put(Context.INITIAL_CONTEXT_FACTORY,
        "org.jboss.naming.remote.client.InitialContextFactory");
prop.put(Context.URL_PKG_PREFIXES,
        "org.jboss.jms.jndi.JNDIProviderAdapter");
prop.put(Context.PROVIDER_URL, "http-remoting://localhost:8080");
prop.put(Context.SECURITY_PRINCIPAL,
        System.getProperty("username", "usr"));
prop.put(Context.SECURITY_CREDENTIALS,
        System.getProperty("password", "pwd"));
Context context = new InitialContext(prop);
ConnectionFactory connectionFactory = (ConnectionFactory) context.lookup("jms/RemoteConnectionFactory");
JmsComponent jmsC = new JmsComponent(connectionFactory);
cc.addComponent("jms", jmsC);
And change your endpoint to:
.to("jms:queue:generatoreQueue");
You shouldn't need the "jms/" in front of the queue name, and the component name should be the same as what you bound it to above in the addComponent method.
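Alternatively, if you want to keep Spring's JndiObjectFactoryBean, a sketch of a lookup-based equivalent (assuming the same JNDI name) is to look the factory up by name instead of calling bind(), which tries to write an entry into WildFly's read-only remote naming context:
JndiTemplate jndiT = new JndiTemplate(prop);
JndiObjectFactoryBean jndiCFB = new JndiObjectFactoryBean();
jndiCFB.setJndiTemplate(jndiT);
jndiCFB.setJndiName("jms/RemoteConnectionFactory"); // look up the existing entry, do not bind a new one
jndiCFB.setProxyInterface(ConnectionFactory.class);
jndiCFB.afterPropertiesSet();                       // performs the JNDI lookup
JmsComponent jmsC = JmsComponent.jmsComponent((ConnectionFactory) jndiCFB.getObject());
cc.addComponent("jms", jmsC);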
