Spring Integration logging error channel (SLF4J + Logback)

I am using Spring Integration with this configuration:

@Bean
public MessageChannel errorChannel() {
    return new PublishSubscribeChannel();
}

@MessagingGateway(name = "gatewayInbound",
        defaultRequestChannel = "farsRequestChannel", errorChannel = "errorChannel")
With this configuration I avoid the errors being shown, but I still want to create a basic log entry such as LOGGER.error(). Additionally, I am working with SLF4J and Logback, so the ideal scenario would be to integrate this error message with a similar configuration in my logback XML. For this reason:
Can I use Logback to log Spring Integration errorChannel messages?
Can I show the error sent to an errorChannel?
Can I customize this error with an expression like the following in Logback? If I use LoggingHandler, I see the complete stack trace, and I want to customize this message.
[%-5level] - %d{dd/MM/YYYY HH:mm:ss} - [%file:%line] - %msg%n

@Bean
@ServiceActivator(inputChannel = "myErrorChannel")
public MessageHandler myLogger() {
    return new MessageHandler() {

        @Override
        public void handleMessage(Message<?> message) throws MessagingException {
            ErrorMessage em = (ErrorMessage) message;
            String errorMessage = em.getPayload().getMessage();
            // log it, e.g. LOGGER.error(errorMessage);
            throw (MessagingException) em.getPayload();
        }

    };
}
If you don't want the exception to be propagated, you can just consume it, but you need to set defaultReplyTimeout=0 on the gateway (and null will be returned).
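For example, a sketch of the gateway with that setting (the interface name and method signature are assumptions, not from the question; on older Spring Integration versions the attribute is a long, i.e. defaultReplyTimeout = 0 without quotes):

@MessagingGateway(name = "gatewayInbound",
        defaultRequestChannel = "farsRequestChannel",
        errorChannel = "errorChannel",
        defaultReplyTimeout = "0")
public interface GatewayInbound {

    Object send(Object request);

}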
Alternatively:
@Bean
@ServiceActivator(inputChannel = "myErrorChannel")
public MessageHandler loggingHandler() {
    LoggingHandler loggingHandler = new LoggingHandler("ERROR");
    loggingHandler.setExpression("payload.message");
    return loggingHandler;
}
(the error will be consumed in this case).
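On the Logback side: LoggingHandler logs through commons-logging (bridged to SLF4J/Logback), by default under a category named after the handler class, org.springframework.integration.handler.LoggingHandler, so a regular logger/appender pair with your pattern should apply to it. A minimal sketch of the logback XML (the appender name is an assumption; note lowercase yyyy in the date pattern, since uppercase YYYY is the week-based year):

<appender name="INTEGRATION_ERRORS" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
        <pattern>[%-5level] - %d{dd/MM/yyyy HH:mm:ss} - [%file:%line] - %msg%n</pattern>
    </encoder>
</appender>

<logger name="org.springframework.integration.handler.LoggingHandler"
        level="ERROR" additivity="false">
    <appender-ref ref="INTEGRATION_ERRORS"/>
</logger>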

Related

Transactional Kafka listener retry

I'm trying to create a Spring Kafka @KafkaListener which is both transactional (Kafka and database) and uses retry. I am using Spring Boot. The documentation for error handlers says that
When transactions are being used, no error handlers are configured, by default, so that the exception will roll back the transaction. Error handling for transactional containers is handled by the AfterRollbackProcessor. If you provide a custom error handler when using transactions, it must throw an exception if you want the transaction rolled back (source).
However, when I configure my listener with a @Transactional("kafkaTransactionManager") annotation, even though I can clearly see that the template rolls back produced messages when an exception is raised, the container actually uses a non-null commonErrorHandler rather than an AfterRollbackProcessor. This is the case even when I explicitly configure the commonErrorHandler to null in the container factory. I do not see any evidence that my configured AfterRollbackProcessor is ever invoked, even after the commonErrorHandler exhausts its retry policy.
I'm uncertain how Spring Kafka's error handling works in general at this point, and am looking for clarification. The questions I want to answer are:
What is the recommended way to configure transactional Kafka listeners with Spring Kafka 2.8.0? Have I done it correctly?
Should the common error handler indeed be used rather than the after-rollback processor? Does it roll back the current transaction before trying to process the message again according to the retry policy?
In general, when I have a transactional Kafka listener, is there ever more than one layer of error handling I should be aware of? E.g., if my common error handler re-throws exceptions of kind T, will another handler catch that and potentially start a retry of its own?
Thanks!
My code:
@Configuration
@EnableScheduling
@EnableKafka
public class KafkaConfiguration {

    private static final Logger LOGGER = LoggerFactory.getLogger(KafkaConfiguration.class);

    @Bean
    public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
            ConsumerFactory<Object, Object> consumerFactory) {
        var factory = new ConcurrentKafkaListenerContainerFactory<Integer, Object>();
        factory.setConsumerFactory(consumerFactory);
        var afterRollbackProcessor =
                new DefaultAfterRollbackProcessor<Object, Object>(
                        (record, e) -> LOGGER.info("After rollback processor triggered! {}", e.getMessage()),
                        new FixedBackOff(1_000, 1));
        // Configures different error handling for different listeners.
        factory.setContainerCustomizer(
                container -> {
                    var groupId = container.getContainerProperties().getGroupId();
                    if (groupId.equals("InputProcessorHigh") || groupId.equals("InputProcessorLow")) {
                        container.setAfterRollbackProcessor(afterRollbackProcessor);
                        // If I set commonErrorHandler to null, it is defaulted instead.
                    }
                });
        return factory;
    }
}
@Component
public class InputProcessor {

    private static final Logger LOGGER = LoggerFactory.getLogger(InputProcessor.class);

    private final KafkaTemplate<Integer, Object> template;
    private final AuditLogRepository repository;

    @Autowired
    public InputProcessor(KafkaTemplate<Integer, Object> template, AuditLogRepository repository) {
        this.template = template;
        this.repository = repository;
    }

    @KafkaListener(id = "InputProcessorHigh", topics = "input-high", concurrency = "3")
    @Transactional("kafkaTransactionManager")
    public void inputHighProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    @KafkaListener(id = "InputProcessorLow", topics = "input-low", concurrency = "1")
    @Transactional("kafkaTransactionManager")
    public void inputLowProcessor(ConsumerRecord<Integer, Input> input) {
        processInputs(input);
    }

    public void processInputs(ConsumerRecord<Integer, Input> input) {
        var key = input.key();
        var message = input.value().getMessage();
        var output = new Output().setMessage(message);
        LOGGER.info("Processing {}", message);
        template.send("output-left", key, output);
        repository.createIfNotExists(message); // idempotent insert
        template.send("output-right", key, output);
        if (message.contains("ERROR")) {
            throw new RuntimeException("Simulated processing error!");
        }
    }
}
My application.yaml (minus my bootstrap-servers and security config):

spring:
  kafka:
    consumer:
      auto-offset-reset: 'earliest'
      key-deserializer: 'org.apache.kafka.common.serialization.IntegerDeserializer'
      value-deserializer: 'org.springframework.kafka.support.serializer.JsonDeserializer'
      isolation-level: 'read_committed'
      properties:
        spring.json.trusted.packages: 'java.util,java.lang,com.github.tomboyo.silverbroccoli.*'
    producer:
      transaction-id-prefix: 'tx-'
      key-serializer: 'org.apache.kafka.common.serialization.IntegerSerializer'
      value-serializer: 'org.springframework.kafka.support.serializer.JsonSerializer'
[EDIT] (solution code)
I was able to figure it out with Gary's help. As they say, we need to set the Kafka transaction manager on the container so that the container can start transactions. The transactions documentation doesn't cover how to do this, and there are a few ways. First, we can get the mutable container properties object from the factory and set the transaction manager on that:

@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory() {
    var factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.getContainerProperties().setTransactionManager(...);
    return factory;
}
If we are in Spring Boot, we can re-use some of the auto configuration to set sensible defaults on our factory before we customize it. We can see that the KafkaAutoConfiguration module imports KafkaAnnotationDrivenConfiguration, which produces a ConcurrentKafkaListenerContainerFactoryConfigurer bean. This appears to be responsible for all the default configuration in a Spring-Boot application. So, we can inject that bean and use it to initialize our factory before adding customizations:
@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer bootConfigurer,
        ConsumerFactory<Object, Object> consumerFactory) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    // Apply default spring-boot configuration.
    bootConfigurer.configure(factory, consumerFactory);
    factory.setContainerCustomizer(
            container -> {
                ... // do whatever
            });
    return factory;
}
Once that's done, the container uses the AfterRollbackProcessor for error handling, as expected. As long as I don't explicitly configure a common error handler, this appears to be the only layer of exception handling.
The AfterRollbackProcessor is only used when the container knows about the transaction; you must provide a KafkaTransactionManager to the container so that the Kafka transaction is started by the container, and the offsets sent to the transaction. Using @Transactional is not the correct way to start a Kafka transaction.
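For reference, a minimal non-Boot sketch of that wiring (the ProducerFactory injection, bean names, and generic types are assumptions for illustration, matching the question's code):

@Bean
public KafkaTransactionManager<Integer, Object> kafkaTransactionManager(
        ProducerFactory<Integer, Object> producerFactory) {
    return new KafkaTransactionManager<>(producerFactory);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<?, ?> kafkaListenerContainerFactory(
        ConsumerFactory<Object, Object> consumerFactory,
        KafkaTransactionManager<Integer, Object> kafkaTransactionManager) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Integer, Object>();
    factory.setConsumerFactory(consumerFactory);
    // With the transaction manager on the container, listener exceptions roll back
    // the transaction and go to the AfterRollbackProcessor, not a CommonErrorHandler.
    factory.getContainerProperties().setTransactionManager(kafkaTransactionManager);
    return factory;
}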
See https://docs.spring.io/spring-kafka/docs/current/reference/html/#transactions

Producer callback in Spring Cloud Stream with reactor core publisher

I have written a Spring Cloud Stream application where producers publish messages to designated Kafka topics. My query is how I can add a producer callback to receive an ack/confirmation that a message has been successfully published to the topic, like we do in Spring Kafka with producer.send(record, new Callback() { ... }) (keeping the producer asynchronous). Below is my code:
private final Sinks.Many<Message<?>> responseProcessor = Sinks.many().multicast().onBackpressureBuffer();

@Bean
public Supplier<Flux<Message<?>>> event() {
    return responseProcessor::asFlux;
}

public Message<?> publishEvent(String status) {
    try {
        String key = ...;
        Message<?> response = MessageBuilder.withPayload(payload)
                .setHeader(KafkaHeaders.MESSAGE_KEY, key)
                .build();
        responseProcessor.tryEmitNext(response);
        return response;
    }
    ...
}
How can I make sure that tryEmitNext has successfully written to the topic?
Is implementing a ProducerListener a possible solution? I couldn't find a concrete solution or documentation for this in Spring Cloud Stream.
UPDATE
I have now implemented the below, and it seems to work as expected:
@Component
public class MyProducerListener<K, V> implements ProducerListener<K, V> {

    @Override
    public void onSuccess(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata) {
        // Do nothing on onSuccess
    }

    @Override
    public void onError(ProducerRecord<K, V> producerRecord, RecordMetadata recordMetadata, Exception exception) {
        log.error("Producer exception occurred while publishing message : {}, exception : {}",
                producerRecord, exception);
    }
}

@Bean
ProducerMessageHandlerCustomizer<KafkaProducerMessageHandler<?, ?>> customizer(MyProducerListener pl) {
    return (handler, destinationName) -> handler.getKafkaTemplate().setProducerListener(pl);
}
See the Kafka Producer Properties.
recordMetadataChannel
The bean name of a MessageChannel to which successful send results should be sent; the bean must exist in the application context. The message sent to the channel is the sent message (after conversion, if any) with an additional header KafkaHeaders.RECORD_METADATA. The header contains a RecordMetadata object provided by the Kafka client; it includes the partition and offset where the record was written in the topic.
RecordMetadata meta = sendResultMsg.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
Failed sends go to the producer error channel (if configured); see Error Channels. Default: null.
You can add a @ServiceActivator to consume from this channel asynchronously.
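For example, a minimal sketch of such a consumer (the sendResults channel bean name is an assumption for illustration; it presumes the producer binding is configured with recordMetadataChannel: sendResults):

@Bean
public MessageChannel sendResults() {
    return new DirectChannel();
}

@ServiceActivator(inputChannel = "sendResults")
public void handleSendResult(Message<?> sent) {
    // The header carries the RecordMetadata for the record that was written.
    RecordMetadata meta = sent.getHeaders().get(KafkaHeaders.RECORD_METADATA, RecordMetadata.class);
    if (meta != null) {
        log.info("Published to {}-{}@{}", meta.topic(), meta.partition(), meta.offset());
    }
}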

How to dead-letter RabbitMQ messages when an exception happens in a service after an aggregator's forceRelease

I am trying to figure out the best way to handle errors that might occur in a service that is called after an aggregator's group timeout has fired, mimicking the same flow as if the releaseExpression had been met.
Here is my setup:
I have an AmqpInboundChannelAdapter that takes in messages and sends them to my aggregator.
When the releaseExpression has been met before the groupTimeout has expired, and an exception gets thrown in my ServiceActivator, all the messages in that MessageGroup get sent to my dead letter queue (10 messages in my example below, which is only used for illustrative purposes). This is what I would expect.
If my releaseExpression hasn't been met but the groupTimeout has elapsed and the group times out, and an exception gets thrown in my ServiceActivator, then the messages do not get sent to my dead letter queue and are acked instead.
After reading another blog post (link1), I learned that this happens because the processing takes place on a different thread, belonging to the MessageGroupStoreReaper, and not the one the SimpleMessageListenerContainer was on. Once processing moves away from the listener's thread, the messages are auto-acked.
I added the configuration mentioned in the link above and now see the error messages getting sent to my error handler. My main question is: what is considered the best way to handle this scenario so as to minimize messages getting lost?
Here are the options I was exploring:
Use the BatchingRabbitTemplate in my custom error handler to publish the failed messages to the same dead letter queue they would have gone to if the releaseExpression had been met. (This is the approach I outlined below, but I am worried about messages getting lost if an error happens during publishing.)
Investigate whether there is a way to let the SimpleMessageListenerContainer know about the error that occurred and have it send the batch of failed messages to a dead letter queue. I doubt this is possible, since it seems the messages are already acked.
Don't set the SimpleMessageListenerContainer to AcknowledgeMode.AUTO, and manually ack the messages when they get processed by the service after the releaseExpression is met or the groupTimeout fires. (This seems kind of messy, since there can be 1..N messages in the MessageGroup, but I wanted to see what others have done.)
Ideally, I want a flow that mimics the flow when the releaseExpression has been met, so that the messages don't get lost.
Does anyone have a recommendation on the best way to handle this scenario, based on what they have used in the past?
Thanks for any help and/or advice!
Here is my current configuration using the Spring Integration DSL:

@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    container.setTransactionManager(transactionManager);
    container.setChannelTransacted(true);
    container.setTxSize(10);
    container.setAcknowledgeMode(AcknowledgeMode.AUTO);
    return container;
}

@Bean
public AmqpInboundChannelAdapter inboundRabbitMessages() {
    AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(workListenerContainer());
    return adapter;
}
I have defined an error channel and my own taskScheduler for the MessageGroupStoreReaper to use:
@Bean
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler ts = new ThreadPoolTaskScheduler();
    MessagePublishingErrorHandler mpe = new MessagePublishingErrorHandler();
    mpe.setDefaultErrorChannel(myErrorChannel());
    ts.setErrorHandler(mpe);
    return ts;
}

@Bean
public PollableChannel myErrorChannel() {
    return new QueueChannel();
}
@Bean
public IntegrationFlow aggregationFlow() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
            })
            .handle("someService", "processMessages")
            .get();
}
Here is my custom error flow:

@Bean
public IntegrationFlow errorResponse() {
    return IntegrationFlows.from("myErrorChannel")
            .<MessagingException, Message<?>>transform(MessagingException::getFailedMessage,
                    e -> e.poller(p -> p.fixedDelay(100)))
            .channel("myErrorChannelHandler")
            .handle("myErrorHandler", "handleFailedMessage")
            .log()
            .get();
}
Here is the custom error handler:

@Component
public class MyErrorHandler {

    @Autowired
    BatchingRabbitTemplate batchingRabbitTemplate;

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(Message<?> message) {
        ArrayList<SomeObject> payload = (ArrayList<SomeObject>) message.getPayload();
        payload.forEach(m -> batchingRabbitTemplate.convertAndSend("some.dlq", "#", m));
    }
}
Here is the BatchingRabbitTemplate bean:

@Bean
public BatchingRabbitTemplate batchingRabbitTemplate() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(5);
    scheduler.initialize();
    BatchingStrategy batchingStrategy = new SimpleBatchingStrategy(10, Integer.MAX_VALUE, 30000);
    BatchingRabbitTemplate batchingRabbitTemplate = new BatchingRabbitTemplate(batchingStrategy, scheduler);
    batchingRabbitTemplate.setConnectionFactory(rabbitConnectionFactory);
    return batchingRabbitTemplate;
}
Update 1, showing the custom MessageGroupProcessor:

public class CustomAggregtingMessageGroupProcessor extends AbstractAggregatingMessageGroupProcessor {

    @Override
    protected final Object aggregatePayloads(MessageGroup group, Map<String, Object> headers) {
        return group;
    }
}
Example Service:

@Slf4j
public class SomeService {

    @ServiceActivator
    public void processMessages(MessageGroup messageGroup) throws IOException {
        Collection<Message<?>> messages = messageGroup.getMessages();
        // Do business logic

        // ack messages in the group
        for (Message<?> m : messages) {
            com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                    m.getHeaders().get("amqp_channel");
            long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
            log.debug("deliveryTag = {}", deliveryTag);
            log.debug("Channel = {}", channel);
            channel.basicAck(deliveryTag, false);
        }
    }
}
Updated integration flow:

@Bean
public IntegrationFlow aggregationFlowWithCustomMessageProcessor() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
                a.outputProcessor(new CustomAggregtingMessageGroupProcessor());
            })
            .handle("someService", "processMessages")
            .get();
}
New ErrorHandler to do the nack:

public class MyErrorHandler {

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(MessageGroup messageGroup) throws IOException {
        if (messageGroup != null) {
            log.debug("Nack messages size = {}", messageGroup.getMessages().size());
            Collection<Message<?>> messages = messageGroup.getMessages();
            for (Message<?> m : messages) {
                com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                        m.getHeaders().get("amqp_channel");
                long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
                log.debug("deliveryTag = {}", deliveryTag);
                log.debug("channel = {}", channel);
                channel.basicNack(deliveryTag, false, false);
            }
        }
    }
}
Update 2: added a custom ReleaseStrategy and changed the aggregator:

public class CustomMeasureGroupReleaseStratgedy implements ReleaseStrategy {

    private static final int MAX_MESSAGE_COUNT = 10;

    @Override
    public boolean canRelease(MessageGroup messageGroup) {
        return messageGroup.getMessages().size() >= MAX_MESSAGE_COUNT;
    }
}

@Bean
public IntegrationFlow aggregationFlowWithCustomMessageProcessorAndReleaseStratgedy() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.transactional(true);
                a.releaseStrategy(new CustomMeasureGroupReleaseStratgedy());
                a.outputProcessor(new CustomAggregtingMessageGroupProcessor());
            })
            .handle("someService", "processMessages")
            .get();
}
There are some flaws in your understanding. If you use AUTO, only the last message will be dead-lettered when an exception occurs. Messages successfully deposited in the group, before the release, will be ack'd immediately.
The only way to achieve what you want is to use MANUAL acks.
There is no way to "tell the listener container to send messages to the DLQ". The container never sends messages to the DLQ; it rejects a message, and the broker sends it to the DLX/DLQ.
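A minimal sketch of that change against the container above (with MANUAL mode, the basicAck loop in SomeService and the basicNack loop in MyErrorHandler become the only acknowledgments):

@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    // MANUAL: nothing is acked by the container; the service acks on success
    // and the error handler nacks (dead-letters) on failure.
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}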

Can we batch up groups of 10 messages from Mosquitto using Spring Integration?

This is how I have defined my MQTT connection using Spring Integration. I am not sure whether this is possible, but can we set up an MQTT subscriber that only does its work after receiving a load of 10 messages? Right now the subscriber works after each published message, as it should.
@Autowired
ConnectorConfig config;

@Bean
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    factory.setServerURIs(config.getUrl());
    factory.setUserName(config.getUser());
    factory.setPassword(config.getPass());
    return factory;
}

@Bean
public MessageProducer inbound() {
    MqttPahoMessageDrivenChannelAdapter adapter =
            new MqttPahoMessageDrivenChannelAdapter(config.getClientid(), mqttClientFactory(), "ALERT", "READING");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    adapter.setOutputChannel(mqttRouterChannel());
    return adapter;
}
/** this is the router **/
@MessageEndpoint
public class MessageRouter {

    private final Logger logger = LoggerFactory.getLogger(MessageRouter.class);

    static final String ALERT = "ALERT";
    static final String READING = "READING";

    @Router(inputChannel = "mqttRouterChannel")
    public String route(@Header("mqtt_topic") String topic) {
        String route = null;
        switch (topic) {
            case ALERT:
                logger.info("alert message received");
                route = "alertTransformerChannel";
                break;
            case READING:
                logger.info("reading message received");
                route = "readingTransformerChannel";
                break;
        }
        return route;
    }
}
I need to batch up groups of 10 messages at a time.
That is not a MqttPahoMessageDrivenChannelAdapter responsibility.
We use the MqttCallback there with these semantics:

/**
 * @param topic name of the topic on the message was published to
 * @param message the actual message.
 * @throws Exception if a terminal error has occurred, and the client should be
 * shut down.
 */
public void messageArrived(String topic, MqttMessage message) throws Exception;
So we can't batch them in this channel adapter, by the nature of the Paho client.
What we can suggest from the Spring Integration perspective is an Aggregator EIP implementation.
In your case, you should add a @ServiceActivator for an AggregatorFactoryBean @Bean before that mqttRouterChannel, i.e. before sending to the router.
That may be as simple as:
@Bean
@ServiceActivator(inputChannel = "mqttAggregatorChannel")
AggregatorFactoryBean mqttAggregator() {
    AggregatorFactoryBean aggregator = new AggregatorFactoryBean();
    aggregator.setProcessorBean(new DefaultAggregatingMessageGroupProcessor());
    aggregator.setCorrelationStrategy(m -> 1);
    aggregator.setReleaseStrategy(new MessageCountReleaseStrategy(10));
    aggregator.setExpireGroupsUponCompletion(true);
    aggregator.setSendPartialResultOnExpiry(true);
    aggregator.setGroupTimeoutExpression(new ValueExpression<>(1000));
    aggregator.setOutputChannelName("mqttRouterChannel");
    return aggregator;
}
See more information in the Reference Manual.
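One wiring detail implied above: the adapter's output channel must then point at the aggregator's input channel rather than directly at the router. A minimal sketch (the mqttAggregatorChannel bean is an assumption, matching the inputChannel name used above):

@Bean
public MessageChannel mqttAggregatorChannel() {
    return new DirectChannel();
}

In inbound(), adapter.setOutputChannel(mqttRouterChannel()) then becomes adapter.setOutputChannel(mqttAggregatorChannel()), and the aggregator releases batches of 10 to mqttRouterChannel through its outputChannelName.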

Error when sending bytes through TCP: Unexpected message - no endpoint registered with connection interceptor

I'm trying to rewrite integration.xml in an application to Java config using the DSL. My integration flow goes like this:
Communication object comes to sendCommunication channel
sendCommunication channel is routed into two different channels
objects in each channel are transformed to byte[]
data is logged with custom logger (wire tapped)
bytes from each channel are sent through TCP using two different TcpSendingMessageHandlers
Here's the part of my Integration.java related to this flow (some beans, like the custom logger, are skipped):
#Bean(name = "sendCommunicationRouter")
public IntegrationFlow routeRoundRobin() {
return IntegrationFlows.from(getSendCommunication())
.route(roundRobinRouter, "route",
s -> s.channelMapping("sendCommunication1",
"sendCommunication1")
.channelMapping("sendCommunication2",
"sendCommunication2"))
.get();
}
#Bean(name = "sendCommunication")
public MessageChannel getSendCommunication() {
return getDefaultMessageChannel();
}
#Bean(name = "sendCommunication1")
public MessageChannel getSendCommunication1() {
return getDefaultMessageChannel();
}
#Bean(name = "sendCommunication2")
public MessageChannel getSendCommunication2() {
return getDefaultMessageChannel();
}
#Bean(name = "tcpClientOutbound1")
public TcpSendingMessageHandler getTcpClientOutbound1() {
return getDefaultTcpClientOutbound(getOutboundConnectionFactory1());
}
#Bean(name = "tcpClientOutbound2")
public TcpSendingMessageHandler getTcpClientOutbound2() {
return getDefaultTcpClientOutbound(getOutboundConnectionFactory2());
}
private TcpSendingMessageHandler getDefaultTcpClientOutbound(TcpNetClientConnectionFactory connectionFactory) {
TcpSendingMessageHandler handler = new TcpSendingMessageHandler();
handler.setConnectionFactory(connectionFactory);
handler.setTaskScheduler(myScheduler);
handler.setClientMode(true);
handler.setRetryInterval(DEFAULT_CHANNEL_RETRY_INTERVAL);
handler.start();
return handler;
}
#Bean
public IntegrationFlow handleOutgoingCommunication1() {
return handleOutgoingCommunication(getSendCommunication1(), getTcpClientOutbound1());
}
#Bean
public IntegrationFlow handleOutgoingCommunication2() {
return handleOutgoingCommunication(getSendCommunication2(), getTcpClientOutbound2());
}
private IntegrationFlow handleOutgoingCommunication(MessageChannel inputChannel, TcpSendingMessageHandler handler) {
return IntegrationFlows.from(inputChannel)
.<Communication, byte[]>transform(communication -> communicationTransformer.toBytes(communication))
.wireTap(getLogger())
.handle(handler)
.get();
}
I'm getting this error when I try to send data through the sendCommunication channel (IPs hidden intentionally):
2016-10-09 19:52:45 WARN TcpNetConnection:186 - Unexpected message -
no endpoint registered with connection interceptor:
IP:PORT:37007:b2347dad-b65c-4686-b016-5ef5ee613bd5 - GenericMessage [payload=byte[267], headers={ip_tcp_remotePort=PORT,
ip_connectionId=IP:PORT:37007:b2347dad-b65c-4686-b016-5ef5ee613bd5,
ip_localInetAddress=/LOCAL IP, ip_address=IP,
id=c6fb70b1-6d06-a909-cfc4-4eac7c715de5, ip_hostname=IP,
timestamp=1476035565330}]
I'd kindly appreciate any help; this error has been giving me a headache since yesterday. I couldn't find any explanation myself; Google gives me only this source code on GitHub.
It's just a warning that an inbound message (a reply?) was received from the server to which you sent a message, and there's no inbound channel adapter configured to handle incoming messages.
Perhaps if you show the XML you are trying to replace with Java config, someone can help.
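In case the server's data should actually be consumed, a minimal sketch of an inbound adapter sharing the first client connection factory (the bean and channel names are assumptions for illustration):

@Bean
public TcpReceivingChannelAdapter tcpClientInbound1() {
    TcpReceivingChannelAdapter adapter = new TcpReceivingChannelAdapter();
    // Sharing the outbound connection factory routes data arriving on the same
    // connection here instead of triggering the "no endpoint registered" warning.
    adapter.setConnectionFactory(getOutboundConnectionFactory1());
    adapter.setOutputChannel(tcpReplies());
    return adapter;
}

@Bean
public MessageChannel tcpReplies() {
    return new DirectChannel();
}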
