Performing aggregation of records and launching a Spring Cloud Task in a single Processor in Spring Cloud Stream - spring-boot

I am trying to perform the following actions:
Aggregating messages
Launching Spring Cloud Task
but I am not able to pass the aggregated message to the method that launches the task. Below is the piece of code:
@Autowired
private TaskProcessorProperties processorProperties;

@Autowired
Processor processor;

@Autowired
private AppConfiguration appConfiguration;

@Transformer(inputChannel = MyProcessor.intermidiate, outputChannel = Processor.OUTPUT)
public Object setupRequest(String message) {
    Map<String, String> properties = new HashMap<>();
    if (StringUtils.hasText(this.processorProperties.getDataSourceUrl())) {
        properties.put("spring_datasource_url", this.processorProperties.getDataSourceUrl());
    }
    if (StringUtils.hasText(this.processorProperties.getDataSourceDriverClassName())) {
        properties.put("spring_datasource_driverClassName", this.processorProperties
                .getDataSourceDriverClassName());
    }
    if (StringUtils.hasText(this.processorProperties.getDataSourceUserName())) {
        properties.put("spring_datasource_username", this.processorProperties
                .getDataSourceUserName());
    }
    if (StringUtils.hasText(this.processorProperties.getDataSourcePassword())) {
        properties.put("spring_datasource_password", this.processorProperties
                .getDataSourcePassword());
    }
    properties.put("payload", message);
    TaskLaunchRequest request = new TaskLaunchRequest(
            this.processorProperties.getUri(), null, properties, null,
            this.processorProperties.getApplicationName());
    System.out.println("inside task launcher **************************");
    System.out.println(request.toString() + "**************************");
    return new GenericMessage<>(request);
}

@ServiceActivator(inputChannel = Processor.INPUT, outputChannel = MyProcessor.intermidiate)
@Bean
public MessageHandler aggregator() {
    AggregatingMessageHandler aggregatingMessageHandler =
            new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(),
                    new SimpleMessageStore(10));
    AggregatorFactoryBean aggregatorFactoryBean = new AggregatorFactoryBean();
    //aggregatorFactoryBean.setMessageStore();
    //aggregatingMessageHandler.setOutputChannel(processor.output());
    //aggregatorFactoryBean.setDiscardChannel(processor.output());
    aggregatingMessageHandler.setSendPartialResultOnExpiry(true);
    aggregatingMessageHandler.setSendTimeout(1000L);
    aggregatingMessageHandler.setCorrelationStrategy(new ExpressionEvaluatingCorrelationStrategy("'FOO'"));
    aggregatingMessageHandler.setReleaseStrategy(new MessageCountReleaseStrategy(3)); //ExpressionEvaluatingReleaseStrategy("size() == 5")
    aggregatingMessageHandler.setExpireGroupsUponCompletion(true);
    aggregatingMessageHandler.setGroupTimeoutExpression(new ValueExpression<>(3000L)); //size() ge 2 ? 5000 : -1
    aggregatingMessageHandler.setExpireGroupsUponTimeout(true);
    return aggregatingMessageHandler;
}
To pass the message between the aggregator and the task launcher method (setupRequest(String message)), I am using a channel MyProcessor.intermidiate, defined as below:
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.stereotype.Indexed;

public interface MyProcessor {

    String intermidiate = "intermidiate";

    @Output("intermidiate")
    MessageChannel intermidiate();
}
The application.properties used is below:
aggregator.message-store-type=persistentMessageStore
spring.cloud.stream.bindings.input.destination=output
spring.cloud.stream.bindings.output.destination=input
It is not working with the above-mentioned approach.
In this class, if I change the channel name from my defined channel MyProcessor.intermidiate to Processor.INPUT or Processor.OUTPUT, then one of the two things works (depending on which Processor.* channel it is changed to).
I want to aggregate the messages first and then launch the task on the aggregated messages in the processor, which is not happening.

See here:
public Object setupRequest(String message) {
So, you expect some string as a request payload.
Your aggregator uses a DefaultAggregatingMessageGroupProcessor, which does exactly this:
List<Object> payloads = new ArrayList<Object>(messages.size());
for (Message<?> message : messages) {
    payloads.add(message.getPayload());
}
return payloads;
So, it is definitely not a String.
It is strange that you don't show what exception happens with your configuration, but I assume you need to change the setupRequest() signature to expect a List of payloads, or you need to provide a custom MessageGroupProcessor to build that String from the group of messages you have aggregated.
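For illustration, a minimal sketch of the first option (this is not the poster's working code): accept the aggregated List of payloads and build one String from it before creating the TaskLaunchRequest. The join format is an assumption; the rest mirrors the question's setupRequest() method.
@Transformer(inputChannel = MyProcessor.intermidiate, outputChannel = Processor.OUTPUT)
public Object setupRequest(List<String> messages) {
    // the aggregator releases a List of payloads, so accept the whole list here;
    // joining with "," is only an example of building a single String from the group
    String message = String.join(",", messages);
    Map<String, String> properties = new HashMap<>();
    properties.put("payload", message);
    TaskLaunchRequest request = new TaskLaunchRequest(
            this.processorProperties.getUri(), null, properties, null,
            this.processorProperties.getApplicationName());
    return new GenericMessage<>(request);
}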

Related

Spring RabbitMq Listener Configuration

We are using RabbitMQ with the default Spring Boot configuration. We have a use case in which we want no parallelism for one of the listeners. That is, we want only one thread of the consumer to be running at any given point in time. We want this because the nature of the use case is such that the messages must be consumed in order; if there are multiple threads per consumer, the messages may be processed out of order.
Since we are using the defaults and have not explicitly tweaked the container, we are using the SimpleMessageListenerContainer. Looking at the documentation, I tried fixing the number of consumers using concurrency = "1". The annotation on the target method looks like this: @RabbitListener(queues = ["queue-name"], concurrency = "1").
As per the documentation, this should have ensured that there is only one consumer thread.
{@link org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer
* SimpleMessageListenerContainer} if this value is a simple integer, it sets a fixed
* number of consumers in the {@code concurrentConsumers} property
2021-10-29 06:11:26.361 INFO 29752 --- [ntContainer#4-1] c.t.t.i.p.s.xxx : Created xxx
2021-10-29 06:11:26.383 INFO 29752 --- [ntContainer#0-1] c.t.t.i.p.s.xxx : Created xxx
ThreadIds to be noted here are [ntContainer#4-1] and [ntContainer#0-1].
So the question is- how can we ensure that there is only one thread per consumer at any given point in time ?
Edit: Adding the code of the consumer class for more context
@ConditionalOnProperty(value = ["rabbitmq.sharebooking.enabled"], havingValue = "true", matchIfMissing = false)
class ShareBookingConsumer @Autowired constructor(
    private val shareBookingRepository: ShareBookingRepository,
    private val objectMapper: ObjectMapper,
    private val shareDtoToShareBookingConverter: ShareBookingDtoToShareBookingConverter
) {
    private val logger = LoggerFactory.getLogger(javaClass)

    init {
        logger.info("start sharebooking created consumer")
    }

    @RabbitListener(queues = ["tax_engine.share_booking"], concurrency = "1-1", exclusive = true)
    @Timed
    @Transactional
    fun consumeShareBookingCreatedEvent(message: Message) {
        try {
            consumeShareBookingCreatedEvent(message.body)
        } catch (e: Exception) {
            throw AmqpRejectAndDontRequeueException(e)
        }
    }

    private fun consumeShareBookingCreatedEvent(event: ByteArray) {
        toShareBookingCreationMessageEvent(event).let { creationEvent ->
            RmqMetrics.measureEventMetrics(creationEvent)
            val shareBooking = shareDtoToShareBookingConverter.convert(creationEvent.data)
            val persisted = shareBookingRepository.save(shareBooking)
            logger.info("Created shareBooking ${creationEvent.data.id}")
        }
    }

    private fun toShareBookingCreationMessageEvent(event: ByteArray) =
        objectMapper.readValue(event, shareBookingCreateEventType)

    companion object {
        private val shareBookingCreateEventType =
            object : TypeReference<RMQMessageEnvelope<ShareBookingCreationDto>>() {}
    }
}
Edit: Adding application thread analysis using visualvm
5 threads get created for 5 listeners.
[1]: https://i.stack.imgur.com/gQINE.png
Set concurrency = "1-1". Note that the concurrency of the Listener depends not only on concurrentConsumers, but also on maxConcurrentConsumers:
If you are using a custom factory:
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(CachingConnectionFactory cachingConnectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(cachingConnectionFactory);
    factory.setConcurrentConsumers(1);
    factory.setMaxConcurrentConsumers(1);
    return factory;
}
See: https://docs.spring.io/spring-amqp/docs/current/reference/html/#simplemessagelistenercontainer for detail.
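If you rely on Spring Boot's auto-configured container factory instead of defining your own, the same limits can also be set through configuration properties; a sketch using the standard Boot property names (adjust to your setup):
spring.rabbitmq.listener.simple.concurrency=1
spring.rabbitmq.listener.simple.max-concurrency=1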
EDIT:
I did a simple test with 2 consumers and 2 threads:
@RabbitListener(queues = "myQueue111", concurrency = "1-1")
public void handleMessage(Object message) throws InterruptedException {
    LOGGER.info("Received message : {} in {}", message, Thread.currentThread().getName());
}

@RabbitListener(queues = "myQueue222", concurrency = "1-1")
public void handleMessag1e(Object message) throws InterruptedException {
    LOGGER.info("Received message222 : {} in {}", message, Thread.currentThread().getName());
}
Try this:
@RabbitListener(queues = ["queue-name"], exclusive = true)
See https://docs.spring.io/spring-amqp/docs/current/reference/html/#exclusive-consumer

Spring Integration - Dynamic MailReceiver configuration

I'm pretty new to spring-integration; anyway, I'm using it to receive mails and process them.
I used this Spring configuration class:
@Configuration
@EnableIntegration
@PropertySource(value = { "classpath:configuration.properties" }, encoding = "UTF-8", ignoreResourceNotFound = false)
public class MailReceiverConfiguration {

    private static final Log logger = LogFactory.getLog(MailReceiverConfiguration.class);

    @Autowired
    private EmailTransformerService emailTransformerService;

    // AE configuration
    @Bean
    public MessageChannel inboundChannelAE() {
        return new DirectChannel();
    }

    @Bean(name = {"aeProps"})
    public Properties aeProps() {
        Properties javaMailPropertiesAE = new Properties();
        javaMailPropertiesAE.put("mail.store.protocol", "imap");
        javaMailPropertiesAE.put("mail.debug", Boolean.TRUE);
        javaMailPropertiesAE.put("mail.auth.debug", Boolean.TRUE);
        javaMailPropertiesAE.put("mail.smtp.socketFactory.fallback", "false");
        javaMailPropertiesAE.put("mail.imap.socketFactory.class", "javax.net.ssl.SSLSocketFactory");
        return javaMailPropertiesAE;
    }

    @Bean(name = "mailReceiverAE")
    public MailReceiver mailReceiverAE(@Autowired MailConfigurationBean mcb, @Autowired @Qualifier("aeProps") Properties javaMailPropertiesAE) throws Exception {
        return ConfigurationUtil.getMailReceiver("imap://USERNAME:PASSWORD@MAILSERVER:PORT/INBOX", new BigDecimal(2), javaMailPropertiesAE);
    }

    @Bean
    @InboundChannelAdapter(autoStartup = "true",
            channel = "inboundChannelAE",
            poller = {@Poller(fixedRate = "${fixed.rate.ae}",
                    maxMessagesPerPoll = "${max.messages.per.poll.ae}") })
    public MailReceivingMessageSource pollForEmailAE(@Autowired MailReceiver mailReceiverAE) {
        MailReceivingMessageSource mrms = new MailReceivingMessageSource(mailReceiverAE);
        return mrms;
    }

    @Transformer(inputChannel = "inboundChannelAE", outputChannel = "transformerChannelAE")
    public MessageBean transformitAE(MimeMessage mailMessage) throws Exception {
        // email inbox administrator
        MessageBean messageBean = emailTransformerService.transformit(mailMessage);
        return messageBean;
    }

    @Splitter(inputChannel = "transformerChannelAE", outputChannel = "nullChannel")
    public List<Message<?>> splitIntoMessagesAE(final MessageBean mb) {
        final List<Message<?>> messages = new ArrayList<Message<?>>();
        for (EmailFragment emailFragment : mb.getEmailFragments()) {
            Message<?> message = MessageBuilder.withPayload(emailFragment.getData())
                    .setHeader(FileHeaders.FILENAME, emailFragment.getFilename())
                    .setHeader("directory", emailFragment.getDirectory()).build();
            messages.add(message);
        }
        return messages;
    }
}
So far so good: I start my micro-service, this component listens on the specified mail server, and mails are downloaded.
Now I have this requirement: the mail server configuration (I mean the string "imap://USERNAME:PASSWORD@MAILSERVER:PORT/INBOX") must be taken from a database and it must be configurable. At any time a system administrator can change it, and the mail receiver must use the new configuration.
As far as I understood, I should create a new instance of MailReceiver when a new configuration is present and use it in the InboundChannelAdapter.
Is there any best practice for doing this? I found this solution: ImapMailReceiver NO STORE attempt on READ-ONLY folder (Failure) [THROTTLED];
In this solution I can inject the ThreadPoolTaskScheduler if I define it in my Configuration class; I can also inject the DirectChannel, but every time I would have to create a new MailReceiver and a new ImapIdleChannelAdapter, not to mention this WARN message I get when the
ImapIdleChannelAdapter starts:
java.lang.RuntimeException: No beanfactory at org.springframework.integration.expression.ExpressionUtils.createStandardEvaluationContext(ExpressionUtils.java:79) at org.springframework.integration.mail.AbstractMailReceiver.onInit(AbstractMailReceiver.java:403)
Is there a better way to satisfy my scenario?
Thank you
Angelo
The best way to do this is to use the Java DSL and dynamic flow registration.
Documentation here.
That way, you can unregister the old flow and register a new one, each time the configuration changes.
It will automatically handle injecting dependencies such as the bean factory.
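As a rough sketch of that approach (not taken from the answer's documentation link): register the mail flow at runtime through IntegrationFlowContext and replace it whenever the administrator changes the connection string; the static @InboundChannelAdapter bean would then be dropped. Bean names, the poll rate, and routing into the existing inboundChannelAE are assumptions for illustration.
import java.util.Properties;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.context.IntegrationFlowContext;
import org.springframework.integration.mail.ImapMailReceiver;
import org.springframework.integration.mail.MailReceivingMessageSource;
import org.springframework.stereotype.Service;

@Service
public class ReloadableMailFlowService {

    private final IntegrationFlowContext flowContext;

    private volatile IntegrationFlowContext.IntegrationFlowRegistration registration;

    public ReloadableMailFlowService(IntegrationFlowContext flowContext) {
        this.flowContext = flowContext;
    }

    public synchronized void applyConfiguration(String storeUrl, Properties javaMailProperties) {
        // drop the previously registered flow (and its polling adapter), if any
        if (this.registration != null) {
            this.flowContext.remove(this.registration.getId());
        }
        ImapMailReceiver receiver = new ImapMailReceiver(storeUrl);
        receiver.setJavaMailProperties(javaMailProperties);

        IntegrationFlow flow = IntegrationFlows
                .from(new MailReceivingMessageSource(receiver),
                        e -> e.poller(p -> p.fixedRate(5000)))
                .channel("inboundChannelAE")   // reuse the existing transformer/splitter chain
                .get();

        // registering through the flow context wires in the BeanFactory, which avoids
        // the "No beanfactory" problem seen with a manually created receiver
        this.registration = this.flowContext.registration(flow).register();
    }
}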

How to dead-letter RabbitMQ messages when an exception happens in a service after an aggregator's forceRelease

I am trying to figure out the best way to handle errors that might occur in a service that is called after an aggregator's group timeout has occurred, mimicking the same flow as if the releaseExpression had been met.
Here is my setup:
I have an AmqpInboundChannelAdapter that takes in messages and sends them to my aggregator.
When the releaseExpression has been met and before the groupTimeout has expired, if an exception gets thrown in my ServiceActivator, the messages get sent to my dead letter queue for all the messages in that MessageGroup (10 messages in my example below, which is only used for illustrative purposes). This is what I would expect.
If my releaseExpression hasn't been met but the groupTimeout has been met and the group times out, and an exception gets thrown in my ServiceActivator, then the messages do not get sent to my dead letter queue and are acked.
After reading another blog post,
link1
it mentions that this happens because the processing happens on another thread, via the MessageGroupStoreReaper, and not the one the SimpleMessageListenerContainer was on. Once processing moves away from the SimpleMessageListenerContainer's thread, the messages will be auto-acked.
I added the configuration mentioned in the link above and see the error messages getting sent to my error handler. My main question is: what is considered the best way to handle this scenario to minimize messages getting lost?
Here are the options I was exploring:
Use a BatchingRabbitTemplate in my custom error handler to publish the failed messages to the same dead letter queue that they would have gone to if the releaseExpression had been met. (This is the approach I outlined below, but I am worried about messages getting lost if an error happens during publishing.)
Investigate whether there is a way to let the SimpleMessageListenerContainer know about the error that occurred and have it send the batch of failed messages to a dead letter queue. (I doubt this is possible since it seems the messages are already acked.)
Don't set the SimpleMessageListenerContainer to AcknowledgeMode.AUTO and manually ack the messages when they get processed by the service, when the releaseExpression is met or the groupTimeout happens. (This seems kind of messy, since there can be 1..N messages in the MessageGroup, but I wanted to see what others have done.)
Ideally, I want a flow that mimics the same flow as when the releaseExpression has been met, so that the messages don't get lost.
Does anyone have recommendation on the best way to handle this scenario they have used in the past?
Thanks for any help and/or advice!
Here is my current configuration using Spring Integration DSL
@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    container.setTransactionManager(transactionManager);
    container.setChannelTransacted(true);
    container.setTxSize(10);
    container.setAcknowledgeMode(AcknowledgeMode.AUTO);
    return container;
}

@Bean
public AmqpInboundChannelAdapter inboundRabbitMessages() {
    AmqpInboundChannelAdapter adapter = new AmqpInboundChannelAdapter(workListenerContainer());
    return adapter;
}
I have defined an error channel and my own taskScheduler to use for the MessageGroupStoreReaper:
@Bean
public ThreadPoolTaskScheduler taskScheduler() {
    ThreadPoolTaskScheduler ts = new ThreadPoolTaskScheduler();
    MessagePublishingErrorHandler mpe = new MessagePublishingErrorHandler();
    mpe.setDefaultErrorChannel(myErrorChannel());
    ts.setErrorHandler(mpe);
    return ts;
}

@Bean
public PollableChannel myErrorChannel() {
    return new QueueChannel();
}
public IntegrationFlow aggregationFlow() {
    return IntegrationFlows.from(inboundRabbitMessages())
            .transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
            })
            .handle("someService", "processMessages")
            .get();
}
Here is my custom error flow
@Bean
public IntegrationFlow errorResponse() {
    return IntegrationFlows.from("myErrorChannel")
            .<MessagingException, Message<?>>transform(MessagingException::getFailedMessage,
                    e -> e.poller(p -> p.fixedDelay(100)))
            .channel("myErrorChannelHandler")
            .handle("myErrorHandler", "handleFailedMessage")
            .log()
            .get();
}
Here is the custom error handler
@Component
public class MyErrorHandler {

    @Autowired
    BatchingRabbitTemplate batchingRabbitTemplate;

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(Message<?> message) {
        ArrayList<SomeObject> payload = (ArrayList<SomeObject>) message.getPayload();
        payload.forEach(m -> batchingRabbitTemplate.convertAndSend("some.dlq", "#", m));
    }
}
Here is the BatchingRabbitTemplate bean
@Bean
public BatchingRabbitTemplate batchingRabbitTemplate() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.setPoolSize(5);
    scheduler.initialize();
    BatchingStrategy batchingStrategy = new SimpleBatchingStrategy(10, Integer.MAX_VALUE, 30000);
    BatchingRabbitTemplate batchingRabbitTemplate = new BatchingRabbitTemplate(batchingStrategy, scheduler);
    batchingRabbitTemplate.setConnectionFactory(rabbitConnectionFactory);
    return batchingRabbitTemplate;
}
Update 1) showing the custom MessageGroupProcessor:
public class CustomAggregtingMessageGroupProcessor extends AbstractAggregatingMessageGroupProcessor {

    @Override
    protected final Object aggregatePayloads(MessageGroup group, Map<String, Object> headers) {
        return group;
    }
}
Example Service:
@Slf4j
public class SomeService {

    @ServiceActivator
    public void processMessages(MessageGroup messageGroup) throws IOException {
        Collection<Message<?>> messages = messageGroup.getMessages();
        // Do business logic
        // ack messages in the group
        for (Message<?> m : messages) {
            com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                    m.getHeaders().get("amqp_channel");
            long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
            log.debug(" deliveryTag = {}", deliveryTag);
            log.debug("Channel = {}", channel);
            channel.basicAck(deliveryTag, false);
        }
    }
}
Updated IntegrationFlow:
public IntegrationFlow aggregationFlowWithCustomMessageProcessor() {
    return IntegrationFlows.from(inboundRabbitMessages()).transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.releaseExpression("size() == 10");
                a.transactional(true);
                a.outputProcessor(new CustomAggregtingMessageGroupProcessor());
            }).handle("someService", "processMessages").get();
}
New ErrorHandler doing the nack:
public class MyErrorHandler {

    @ServiceActivator(inputChannel = "myErrorChannelHandler")
    public void handleFailedMessage(MessageGroup messageGroup) throws IOException {
        if (messageGroup != null) {
            log.debug("Nack messages size = {}", messageGroup.getMessages().size());
            Collection<Message<?>> messages = messageGroup.getMessages();
            for (Message<?> m : messages) {
                com.rabbitmq.client.Channel channel = (com.rabbitmq.client.Channel)
                        m.getHeaders().get("amqp_channel");
                long deliveryTag = (long) m.getHeaders().get("amqp_deliveryTag");
                log.debug("deliveryTag = {}", deliveryTag);
                log.debug("channel = {}", channel);
                channel.basicNack(deliveryTag, false, false);
            }
        }
    }
}
Update 2) Added a custom ReleaseStrategy and changed the aggregator:
public class CustomMeasureGroupReleaseStratgedy implements ReleaseStrategy {

    private static final int MAX_MESSAGE_COUNT = 10;

    public boolean canRelease(MessageGroup messageGroup) {
        return messageGroup.getMessages().size() >= MAX_MESSAGE_COUNT;
    }
}

public IntegrationFlow aggregationFlowWithCustomMessageProcessorAndReleaseStratgedy() {
    return IntegrationFlows.from(inboundRabbitMessages()).transform(Transformers.fromJson(SomeObject.class))
            .aggregate(a -> {
                a.sendPartialResultOnExpiry(true);
                a.groupTimeout(3000);
                a.expireGroupsUponCompletion(true);
                a.expireGroupsUponTimeout(true);
                a.correlationExpression("T(Thread).currentThread().id");
                a.transactional(true);
                a.releaseStrategy(new CustomMeasureGroupReleaseStratgedy());
                a.outputProcessor(new CustomAggregtingMessageGroupProcessor());
            }).handle("someService", "processMessages").get();
}
There are some flaws in your understanding. If you use AUTO, only the last message will be dead-lettered when an exception occurs. Messages successfully deposited in the group, before the release, will be ack'd immediately.
The only way to achieve what you want is to use MANUAL acks.
There is no way to "tell the listener container to send messages to the DLQ". The container never sends messages to the DLQ, it rejects a message and the broker sends it to the DLX/DLQ.
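For reference, a minimal sketch of what the MANUAL-ack advice implies for the container bean shown in the question (only the acknowledge mode changes; everything else stays as posted):
@Bean
public SimpleMessageListenerContainer workListenerContainer() {
    SimpleMessageListenerContainer container =
            new SimpleMessageListenerContainer(rabbitConnectionFactory);
    container.setQueues(worksQueue());
    container.setConcurrentConsumers(4);
    container.setDefaultRequeueRejected(false);
    // MANUAL acks: the container acks nothing itself; the service acks each message with
    // channel.basicAck(...) and the error handler nacks with channel.basicNack(...),
    // as in the processMessages() and handleFailedMessage() methods above
    container.setAcknowledgeMode(AcknowledgeMode.MANUAL);
    return container;
}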

Spring Integration: how to access the returned values from the last subscriber

I'm trying to implement an SFTP file upload of 2 files which has to happen in a certain order: first a PDF file and, after successful upload of that, a text file with meta information about the PDF.
I followed the advice in this thread, but can't get it to work properly.
My Spring Boot Configuration:
@Bean
public SessionFactory<LsEntry> sftpSessionFactory() {
    final DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
    final Properties jschProps = new Properties();
    jschProps.put("StrictHostKeyChecking", "no");
    jschProps.put("PreferredAuthentications", "publickey,password");
    factory.setSessionConfig(jschProps);
    factory.setHost(sftpHost);
    factory.setPort(sftpPort);
    factory.setUser(sftpUser);
    if (sftpPrivateKey != null) {
        factory.setPrivateKey(sftpPrivateKey);
        factory.setPrivateKeyPassphrase(sftpPrivateKeyPassphrase);
    } else {
        factory.setPassword(sftpPasword);
    }
    factory.setAllowUnknownKeys(true);
    return new CachingSessionFactory<>(factory);
}

@Bean
@BridgeTo
public MessageChannel toSftpChannel() {
    return new PublishSubscribeChannel();
}

@Bean
@ServiceActivator(inputChannel = "toSftpChannel")
@Order(0)
public MessageHandler handler() {
    final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
    handler.setRemoteDirectoryExpression(new LiteralExpression(sftpRemoteDirectory));
    handler.setFileNameGenerator(message -> {
        if (message.getPayload() instanceof byte[]) {
            return (String) message.getHeaders().get("filename");
        } else {
            throw new IllegalArgumentException("File expected as payload.");
        }
    });
    return handler;
}

@ServiceActivator(inputChannel = "toSftpChannel")
@Order(1)
public String transferComplete(@Payload byte[] file, @Header("filename") String filename) {
    return "The SFTP transfer complete for file: " + filename;
}

@MessagingGateway
public interface UploadGateway {

    @Gateway(requestChannel = "toSftpChannel")
    String upload(@Payload byte[] file, @Header("filename") String filename);
}
My Test Case:
final String pdfStatus = uploadGateway.upload(content, documentName);
log.info("Upload of {} completed, {}.", documentName, pdfStatus);
From the return of the gateway upload call I expect to get the String confirming the upload, e.g. "The SFTP transfer complete for file:...", but I get the returned content of the uploaded file as byte[]:
Upload of 123456789.1.pdf completed, 37,80,68,70,45,49,46,54,13,37,-30,-29,-49,-45,13,10,50,55,53,32,48,32,111,98,106,13,60,60,47,76,105,110,101,97,114,105,122,101,100,32,49,47,76,32,50,53,52,55,49,48,47,79,32,50,55,55,47,69,32,49,49,49,55,55,55,47,78,32,49,47,84,32,50,53,52,51,53,57,47,72,32,91,32,49,49,57,55,32,53,51,55,93,62,62,13,101,110,100,111,98,106,13,32,32,32,32,32,32,32,32,32,32,32,32,13,10,52,55,49,32,48,32,111,98,106,13,60,60,47,68,101,99,111,100,101,80,97,114,109,115,60,60,47,67,111,108,117,109,110,115,32,53,47,80,114,101,100,105,99,116,111,114,32,49,50,62,62,47,70,105,108,116,101,114,47,70,108,97,116,101,68,101,99,111,100,101,47,73,68,91,60,57,66,53,49,56,54,69,70,53,66,56,66,49,50,52,49,65,56,50,49,55,50,54,56,65,65,54,52,65,57,70,54,62,60,68,52,50,68,51,55,54,53,54,65,67,48,55,54,52,65,65,53,52,66,52,57,51,50,56,52,56,68,66 etc.
What am I missing?
I think @Order(0) doesn't work together with @Bean.
To fix it you should do this in that bean definition instead:
final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
handler.setOrder(0);
See the Reference Manual for more info:
When using these annotations on consumer @Bean definitions, if the bean definition returns an appropriate MessageHandler (depending on the annotation type), attributes such as outputChannel, requiresReply etc, must be set on the MessageHandler @Bean definition itself.
In other words: if you can use a setter, you have to. We don't process annotations for this case because there is no guarantee which should take precedence. So, to avoid such confusion, we have left you only the setter choice.
UPDATE
I see your problem and it is here:
@Bean
@BridgeTo
public MessageChannel toSftpChannel() {
    return new PublishSubscribeChannel();
}
That is confirmed by the logs:
Adding {bridge:dmsSftpConfig.toSftpChannel.bridgeTo} as a subscriber to the 'toSftpChannel' channel
Channel 'org.springframework.context.support.GenericApplicationContext#b3d0f7.toSftpChannel' has 3 subscriber(s).
started dmsSftpConfig.toSftpChannel.bridgeTo
So, you really have one more subscriber to that toSftpChannel, and it is a BridgeHandler with an output to the replyChannel header. With its default order (private volatile int order = Ordered.LOWEST_PRECEDENCE;), this one becomes the first subscriber to reply, and exactly this one returns you that byte[], just because it is the payload of the request.
You need to decide if you really need such a bridge. There is no workaround for the @Order though...
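To illustrate both points, a sketch (assuming the rest of the configuration stays as posted) that drops the bridge and moves the order onto the handler itself, so the gateway receives the reply from transferComplete() instead of the request payload:
@Bean
public MessageChannel toSftpChannel() {
    // no @BridgeTo here: only the two ordered subscribers remain on this channel
    return new PublishSubscribeChannel();
}

@Bean
@ServiceActivator(inputChannel = "toSftpChannel")
public MessageHandler handler() {
    final SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
    // set the order via the setter; @Order on a @Bean MessageHandler is not processed
    handler.setOrder(0);
    handler.setRemoteDirectoryExpression(new LiteralExpression(sftpRemoteDirectory));
    handler.setFileNameGenerator(message -> (String) message.getHeaders().get("filename"));
    return handler;
}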

Spring-Boot MQTT Configuration

I have a requirement to send a payload to a lot of devices whose names are picked from a database. Then, I have to send to different topics, which will be like settings/{device name here}.
Below is the configuration I was using, which I got from the Spring Boot reference documents.
MQTTConfiguration.java
@Configuration
@IntegrationComponentScan
public class MQTTConfiguration {

    @Autowired
    private Settings settings;

    @Autowired
    private DevMqttMessageListener messageListener;

    @Bean
    MqttPahoClientFactory mqttClientFactory() {
        DefaultMqttPahoClientFactory clientFactory = new DefaultMqttPahoClientFactory();
        clientFactory.setServerURIs(settings.getMqttBrokerUrl());
        clientFactory.setUserName(settings.getMqttBrokerUser());
        clientFactory.setPassword(settings.getMqttBrokerPassword());
        return clientFactory;
    }

    @Bean
    MessageChannel mqttOutboundChannel() {
        return new DirectChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "mqttOutboundChannel")
    public MessageHandler mqttOutbound() {
        MqttPahoMessageHandler messageHandler = new MqttPahoMessageHandler("dev-client-outbound",
                mqttClientFactory());
        messageHandler.setAsync(true);
        messageHandler.setDefaultTopic(settings.getMqttPublishTopic());
        return messageHandler;
    }

    @MessagingGateway(defaultRequestChannel = "mqttOutboundChannel")
    public interface DeviceGateway {
        void sendToMqtt(String payload);
    }
}
Here, I am sending to only one topic, so I added the bean below to send to multiple topics:
@Bean
public MqttClient mqttClient() throws MqttException {
    MqttClient mqttClient = new MqttClient(settings.getMqttBrokerUrl(), "dev-client-outbound");
    MqttConnectOptions connOptions = new MqttConnectOptions();
    connOptions.setUserName(settings.getMqttBrokerUser());
    connOptions.setPassword(settings.getMqttBrokerPassword().toCharArray());
    mqttClient.connect(connOptions);
    return mqttClient;
}
and I send using:
try {
    mqttClient.publish(settings.getMqttPublishTopic() + device.getName(), mqttMessage);
} catch (MqttException e) {
    LOGGER.error("Error While Sending Mqtt Messages", e);
}
This works.
But my question is: can I achieve the same using the output channel, for better performance? If yes, any help is greatly appreciated. Thank you.
MqttClient is synchronous.
The MqttPahoMessageHandler uses an MqttAsyncClient and can be configured (set async to true) to not wait for the confirmation, but publish the confirmation later as an application event.
If you are using your own code and sending multiple messages in a loop, it will probably be faster to use an async client, and wait for the IMqttDeliveryToken completions later.
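Building on that, one way to publish to a per-device topic through the existing asynchronous MqttPahoMessageHandler (this is not stated in the answer above; it is a sketch that assumes Spring Integration's mqtt_topic header, which the outbound handler checks before falling back to the default topic) is to add a topic argument to the gateway:
import org.springframework.integration.annotation.Gateway;
import org.springframework.integration.annotation.MessagingGateway;
import org.springframework.integration.mqtt.support.MqttHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;

@MessagingGateway(defaultRequestChannel = "mqttOutboundChannel")
public interface DeviceGateway {

    @Gateway(requestChannel = "mqttOutboundChannel")
    void sendToMqtt(@Payload String payload, @Header(MqttHeaders.TOPIC) String topic);
}

// usage, e.g. while iterating over the devices loaded from the database:
// deviceGateway.sendToMqtt(payload, settings.getMqttPublishTopic() + device.getName());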
