Spring Integration Channeling With Bean Name vs Method Name

I have a PublishSubscribeChannel like this:
@Bean(name = {"publishCha.input", "publishCha2.input"}) // 2 subscribers
public MessageChannel publishAction() {
    PublishSubscribeChannel ps = MessageChannels.publishSubscribe().get();
    ps.setMaxSubscribers(8);
    return ps;
}
I also have subscriber flows:
@Bean
public IntegrationFlow publishCha() {
    return f -> f
            .handle(m -> System.out.println("In publishCha channel..."));
}
@Bean
public IntegrationFlow publishCha2() {
    return f -> f
            .handle(m -> System.out.println("In publishCha2 channel..."));
}
And finally another subscriber:
@Bean
public IntegrationFlow anotherChannel() {
    return IntegrationFlows.from("publishAction")
            .handle(m -> System.out.println("ANOTHER CHANNEL IS HERE!"))
            .get();
}
The problem is, when I reference the channel by method name ("publishAction", as below) from another flow, it only prints "ANOTHER CHANNEL IS HERE!" and ignores the other subscribers. However, if I use .channel("publishCha.input"), it enters the publishCha and publishCha2 subscribers but ignores the third one.
@Bean
public IntegrationFlow flow() {
    return f -> f
            .channel("publishAction");
}
My question is: why do these two channeling methods yield different results?
.channel("publishAction") // channeling by method name executes the third subscriber
.channel("publishCha.input") // channeling by bean name executes the first and second subscribers
Edit: narayan-sambireddy asked how I send messages to the channel. I send them via a Gateway:
@MessagingGateway
public interface ExampleGateway {
    @Gateway(requestChannel = "flow.input")
    void flow(Order orders);
}
In Main:
Order order = new Order();
order.addItem("PC", "TTEL", 2000, 1);
ConfigurableApplicationContext ctx = SpringApplication.run(Start.class, args);
ctx.getBean(ExampleGateway.class).flow(order);

Your problem with the third subscriber is that you miss the purpose of the name attribute in @Bean:
/**
 * The name of this bean, or if several names, a primary bean name plus aliases.
 * <p>If left unspecified, the name of the bean is the name of the annotated method.
 * If specified, the method name is ignored.
 * <p>The bean name and aliases may also be configured via the {@link #value}
 * attribute if no other attributes are declared.
 * @see #value
 */
@AliasFor("value")
String[] name() default {};
So, the method name as a bean name is ignored in this case; therefore the Spring Integration Java DSL doesn't find a bean named publishAction and creates one - a DirectChannel.
You can use a method reference though:
IntegrationFlows.from(publishAction())
Or, if that is in a different configuration class, you can re-use one of the predefined names:
IntegrationFlows.from("publishCha.input")
This way the DSL will re-use the existing bean and just add one more subscriber to that pub-sub channel.
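A third option (a hypothetical sketch, not from the original answer): include the method name itself in the alias list, so that all three names resolve to the same pub-sub channel bean.

```java
// Hypothetical variant of the bean above: listing "publishAction" among
// the names keeps the method name resolvable alongside the aliases, so
// the third flow's IntegrationFlows.from("publishAction") attaches to
// the same PublishSubscribeChannel instead of creating a DirectChannel.
@Bean(name = {"publishAction", "publishCha.input", "publishCha2.input"})
public MessageChannel publishAction() {
    PublishSubscribeChannel ps = MessageChannels.publishSubscribe().get();
    ps.setMaxSubscribers(8);
    return ps;
}
```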

Related

Spring SFTP Outbound Adapter - determining when files have been sent

I have a Spring SFTP outbound adapter that I start via adapter.start() in my main program. Once started, the adapter transfers and uploads all the files in the specified directory as expected. But I want to stop the adapter after all the files have been transferred. How do I detect when all the files have been transferred so I can issue an adapter.stop()?
@Bean
public IntegrationFlow sftpOutboundFlow() {
    return IntegrationFlows.from(Files.inboundAdapter(new File(sftpOutboundDirectory))
                    .filterExpression("name.endsWith('.pdf') OR name.endsWith('.PDF')")
                    .preventDuplicates(true),
            e -> e.id("sftpOutboundAdapter")
                    .autoStartup(false)
                    .poller(Pollers.trigger(new FireOnceTrigger())
                            .maxMessagesPerPoll(-1)))
            .log(LoggingHandler.Level.INFO, "sftp.outbound", m -> m.getPayload())
            .log(LoggingHandler.Level.INFO, "sftp.outbound", m -> m.getHeaders())
            .handle(Sftp.outboundAdapter(outboundSftpSessionFactory())
                    .useTemporaryFileName(false)
                    .remoteDirectory(sftpRemoteDirectory))
            .get();
}
@Artem Bilan has already given the answer. But here's a concrete implementation of what he said - for those who are a Spring Integration noob like me:
Define a service to get the PDF files on demand:
@Service
public class MyFileService {
    public List<File> getPdfFiles(final String srcDir) {
        File[] files = new File(srcDir).listFiles((dir, name) -> name.toLowerCase().endsWith(".pdf"));
        return Arrays.asList(files == null ? new File[]{} : files);
    }
}
Define a Gateway to start the SFTP upload flow on demand:
@MessagingGateway
public interface SFtpOutboundGateway {
    @Gateway(requestChannel = "sftpOutboundFlow.input")
    void uploadFiles(List<File> files);
}
Define the Integration Flow to upload the files to the SFTP server via Sftp.outboundGateway:
@Configuration
@EnableIntegration
public class FtpFlowIntegrationConfig {
    // could also be bound via @Value
    private String sftpRemoteDirectory = "/path/to/remote/dir";

    @Bean
    public SessionFactory<ChannelSftp.LsEntry> outboundSftpSessionFactory() {
        DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
        factory.setHost("localhost");
        factory.setPort(22222);
        factory.setUser("client1");
        factory.setPassword("password123");
        factory.setAllowUnknownKeys(true);
        return new CachingSessionFactory<>(factory);
    }

    @Bean
    public IntegrationFlow sftpOutboundFlow(RemoteFileTemplate<ChannelSftp.LsEntry> remoteFileTemplate) {
        return e -> e
                .log(LoggingHandler.Level.INFO, "sftp.outbound", Message::getPayload)
                .log(LoggingHandler.Level.INFO, "sftp.outbound", Message::getHeaders)
                .handle(
                        Sftp.outboundGateway(remoteFileTemplate, AbstractRemoteFileOutboundGateway.Command.MPUT, "payload")
                );
    }

    @Bean
    public RemoteFileTemplate<ChannelSftp.LsEntry> remoteFileTemplate(SessionFactory<ChannelSftp.LsEntry> outboundSftpSessionFactory) {
        RemoteFileTemplate<ChannelSftp.LsEntry> template = new SftpRemoteFileTemplate(outboundSftpSessionFactory);
        template.setRemoteDirectoryExpression(new LiteralExpression(sftpRemoteDirectory));
        template.setAutoCreateDirectory(true);
        template.setUseTemporaryFileName(false);
        template.afterPropertiesSet(); // call last, after all properties are set
        return template;
    }
}
Wiring up:
public class SpringApp {
    public static void main(String[] args) {
        ConfigurableApplicationContext ctx = SpringApplication.run(SpringApp.class, args);
        final MyFileService fileService = ctx.getBean(MyFileService.class);
        final SFtpOutboundGateway sFtpOutboundGateway = ctx.getBean(SFtpOutboundGateway.class);
        // trigger the sftp upload flow manually - only once
        sFtpOutboundGateway.uploadFiles(fileService.getPdfFiles("/path/to/local/dir"));
    }
}
Important notes:
1.
@Gateway(requestChannel = "sftpOutboundFlow.input")
void uploadFiles(List<File> files);
Here the DirectChannel sftpOutboundFlow.input is used to pass the message with the payload (the List<File>) to the receiver. If this channel does not exist yet, the Gateway creates it implicitly.
2.
@Bean
public IntegrationFlow sftpOutboundFlow(RemoteFileTemplate<ChannelSftp.LsEntry> remoteFileTemplate) { ... }
Since IntegrationFlow is a functional interface whose lambda form consumes an IntegrationFlowDefinition, we can simplify the flow a little. During the bean registration phase, the IntegrationFlowBeanPostProcessor converts this inline (lambda) IntegrationFlow to a StandardIntegrationFlow and processes its components. An IntegrationFlow defined with a lambda gets a DirectChannel as its inputChannel, registered in the application context under the name sftpOutboundFlow.input in the sample above (flow bean name + ".input"). That's why we use that name in the SFtpOutboundGateway.
Ref: https://spring.io/blog/2014/11/25/spring-integration-java-dsl-line-by-line-tutorial
3.
@Bean
public RemoteFileTemplate<ChannelSftp.LsEntry> remoteFileTemplate(SessionFactory<ChannelSftp.LsEntry> outboundSftpSessionFactory) { ... }
see: Remote directory for sftp outbound gateway with DSL
Flowchart: (image not included here)
But I want to stop the adapter after all the files have been transferred.
Logically, this is not what this kind of component has been designed for. Since you are not going to have a constantly changing local directory, it is probably better to think about an event-driven solution that lists the files in the directory via some action. Yes, it can be a call from the main, but only once for the whole content of the dir, and that's all.
And for this reason the Sftp.outboundGateway() with a Command.MPUT is there for you:
https://docs.spring.io/spring-integration/reference/html/sftp.html#using-the-mput-command.
You still can trigger an IntegrationFlow, but it could start from a #MessagingGateway interface to be called from a main with a local directory to list files for uploading:
https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl-gateway

Spring integration RecipientListRouter doesn't create multiple payloads

Please can anyone help me with this issue? I configured my RecipientListRouter as the documentation suggests:
@Bean
public IntegrationFlow routerFlow() {
    return IntegrationFlows.from(CHANNEL_INPUT)
            .routeToRecipients(r -> r
                    .applySequence(true)
                    .ignoreSendFailures(true)
                    .recipient(CHANNEL_OUTPUT_1)
                    .recipient(CHANNEL_OUTPUT_2)
                    .sendTimeout(1_234L))
            .get();
}
@ServiceActivator(inputChannel = CHANNEL_OUTPUT_1, outputChannel = CHANNEL_END)
public Object foo(Message<?> message) {
    message.getPayload();
    // processing1() ...
}
@ServiceActivator(inputChannel = CHANNEL_OUTPUT_2, outputChannel = CHANNEL_END)
public Object bar(Message<?> message) {
    message.getPayload();
    // processing2() ...
}
I expect to get this workflow:
CHANNEL_INPUT(payload-1) |----> CHANNEL_OUTPUT_1(payload-2)
                         |----> CHANNEL_OUTPUT_2(payload-3)
where payload-2 on the input of the foo activator equals payload-1, and payload-3 on the input of the bar activator equals payload-1.
But the actual workflow is:
payload-2 on the input of the foo activator equals payload-1, but payload-3 on the input of the bar activator equals the payload-2 message output by the foo activator.
It seems like this is the actual workflow:
CHANNEL_INPUT(payload-1) ----> CHANNEL_OUTPUT_1(payload-2) ----> CHANNEL_OUTPUT_2(payload-3)
After debugging I noticed that the message headers (message.getHeaders()) are not the same (they actually contain "sequenceNumber" and "sequenceSize"), but the payloads behave as described above.
While the message is immutable, the payload is not (unless it's an immutable object such as a String).
If you mutate the payload in service1, the mutation will be seen in service2.
You need to clone/copy the payload before mutating it if you don't want service2 to see the mutation.
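A plain-Java sketch of that shared-reference behavior (no Spring involved; the class and field names here are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class SharedPayloadDemo {
    public static void main(String[] args) {
        // the router duplicates the *message*, but both recipients
        // receive a reference to the same payload object
        Map<String, Object> payload = new HashMap<>();
        payload.put("status", "NEW");

        // the first service activator mutates the payload in place ...
        payload.put("status", "PROCESSED-BY-FOO");

        // ... so the second activator observes the mutation
        System.out.println(payload.get("status")); // PROCESSED-BY-FOO

        // a defensive copy before mutating keeps the original intact
        Map<String, Object> copy = new HashMap<>(payload);
        copy.put("status", "PROCESSED-BY-BAR");
        System.out.println(payload.get("status")); // still PROCESSED-BY-FOO
    }
}
```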

@RabbitListener (having id set) not registered with RabbitListenerEndpointRegistry

I am using @RabbitListener to consume messages from RabbitMQ. I want the ability to pause/resume message consumption based on some threshold.
As per this and this SO posts, I should be able to use RabbitListenerEndpointRegistry, get the list of containers, and pause/resume consumption as needed. I also understand that to be registered with RabbitListenerEndpointRegistry, I need to specify an id in the @RabbitListener annotation.
However, I have added an id to the @RabbitListener annotation and RabbitListenerEndpointRegistry.getListenerContainers() still returns an empty collection.
I am not sure what the issue is that prevents me from getting the ListenerContainers collection.
Here is how I create the SimpleRabbitListenerContainerFactory. (I use SimpleRabbitListenerContainerFactory because I need to consume from different queues.)
public static SimpleRabbitListenerContainerFactory getSimpleRabbitListenerContainerFactory(
        final ConnectionFactory connectionFactory,
        final Jackson2JsonMessageConverter jsonConverter,
        final AmqpAdmin amqpAdmin,
        final String queueName, final String bindExchange,
        final String routingKey, final int minConsumerCount,
        final int maxConsumerCount, final int msgPrefetchCount,
        final AcknowledgeMode acknowledgeMode) {
    LOGGER.info("Creating SimpleRabbitListenerContainerFactory to consume from Queue '{}', " +
                    "using {} (min), {} (max) concurrent consumers, " +
                    "prefetch count set to {} and Ack mode is {}",
            queueName, minConsumerCount, maxConsumerCount, msgPrefetchCount, acknowledgeMode);
    /*
     * Before creating the connection factory, ensure that the model with the
     * appropriate binding exists.
     */
    createQueueAndBind(amqpAdmin, bindExchange, queueName, routingKey);
    // create the listener factory
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    // set the connection factory
    factory.setConnectionFactory(connectionFactory);
    // set the converter
    factory.setMessageConverter(jsonConverter);
    // set to false so that the app will not crash if a queue is missing
    factory.setMissingQueuesFatal(false);
    // min concurrent consumers
    factory.setConcurrentConsumers(minConsumerCount);
    // max concurrent consumers
    factory.setMaxConcurrentConsumers(maxConsumerCount);
    // how many messages should be fetched in one go
    factory.setPrefetchCount(msgPrefetchCount);
    // set the acknowledge mode
    factory.setAcknowledgeMode(acknowledgeMode);
    // captures errors on the consumer side, if any
    factory.setErrorHandler(errorHandler());
    return factory;
}
@RabbitListener annotation declaration on a method:
@RabbitListener(queues = "${object.scan.queue-name}",
        id = "object-scan",
        containerFactory = "object.scan")
/*
 * Before creating the connection factory, ensure that the model with the
 * appropriate binding exists.
 */
createQueueAndBind(amqpAdmin, bindExchange, queueName, routingKey);
You should not be doing this during the bean definition phase; it's too early in the application context's lifecycle.
All you need to do is add the queue/exchange/binding @Bean definitions to the application context, and Spring (RabbitAdmin) will automatically perform the declarations when the container is first started.
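A minimal sketch of such declarations (the queue, exchange, and routing-key names here are placeholders, not from the original post):

```java
// Declaring the AMQP objects as @Beans lets RabbitAdmin create them on
// the broker when the first container starts - no manual
// createQueueAndBind(...) call during bean definition is needed.
@Bean
public Queue scanQueue() {
    return new Queue("object.scan.queue", true); // durable
}

@Bean
public TopicExchange scanExchange() {
    return new TopicExchange("object.scan.exchange");
}

@Bean
public Binding scanBinding() {
    return BindingBuilder.bind(scanQueue()).to(scanExchange()).with("object.scan.#");
}
```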

How to route messages which would cause a DestinationResolutionException to a customized error channel in Spring Integration

In my case, the application receives MQTT messages and routes them to a certain channel based on the type value in the messages. So, I defined an IntegrationFlow to route the messages as follows:
@Bean
public IntegrationFlow mqttInbound() {
    return IntegrationFlows.from(inbound())
            .transform(new PojoTransformer())
            .<Data, String>route(Data::getType,
                    m -> m.prefix("Channel."))
            .get();
}
And also, I defined some other IntegrationFlows to handle the messages in these channels, e.g.
@Bean
public IntegrationFlow normalProcess() {
    return IntegrationFlows.from("Channel.1")
            .handle("normalHandler", "handle")
            .channel("mqttOutboundChannel")
            .get();
}
The problem is, if there is no defined mapping (e.g. type is "4"), an exception occurs which says something like org.springframework.messaging.core.DestinationResolutionException: failed to look up MessageChannel with name 'Channel.4' in the BeanFactory. My question is: how can I route all these unmapped messages to a certain error channel so that I can do some exception handling?
Set resolutionRequired to false and add a default output channel.
.<Data, String>route(Data::getType,
        m -> m.prefix("Channel.")
                .resolutionRequired(false)
                .defaultOutputChannel("noRouteChannel"))
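The unmatched messages can then be handled by a flow subscribed to that default channel (a sketch; the noRouteChannel name matches the router above, while the logging handler is illustrative):

```java
// Messages whose type has no matching "Channel.<type>" bean land here
// instead of raising a DestinationResolutionException.
@Bean
public IntegrationFlow noRouteFlow() {
    return IntegrationFlows.from("noRouteChannel")
            .handle(m -> System.out.println("Unmapped message: " + m.getPayload()))
            .get();
}
```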

Spring AMQP @RabbitListener convert to original object

I am trying to send a message based on a flattened Map using Spring Boot and AMQP. The message should then be received using @RabbitListener and transferred back to a Map.
First I have a nested JSON String, which I flatten and send using the following code:
// Flatten the JSON String returned into a map
Map<String, Object> jsonMap = JsonFlattener.flattenAsMap(result);
rabbitTemplate.convertAndSend(ApplicationProperties.rmqExchange, ApplicationProperties.rmqTopic, jsonMap, new MessagePostProcessor() {
    @Override
    public Message postProcessMessage(Message message) throws AmqpException {
        message.getMessageProperties().setHeader("amqp_Key1", "wert1");
        message.getMessageProperties().setHeader("amqp_Key2", "Wert2");
        message.getMessageProperties().setDeliveryMode(MessageDeliveryMode.PERSISTENT);
        return message;
    }
});
So far so good.
On the receiving side I try to use a listener and convert the message payload back to the Map as it was sent before.
The problem is that I have no idea how to do it.
I receive the message with the following code:
@RabbitListener(queues = "temparea")
public void receiveMessage(Message message) {
    log.info("Receiving data from RabbitMQ:");
    log.info("Message is of type: " + message.getClass().getName());
    log.info("Message: " + message.toString());
}
As I mentioned before, I have no idea how I can convert the message back to my old Map. The __TypeId__ of the message is: com.github.wnameless.json.flattener.JsonifyLinkedHashMap
I would be more than glad if somebody could assist me in getting this message back to a Java Map.
BR
Update after answer from Artem Bilan:
I added the following code to my configuration file:
@Bean
public SimpleRabbitListenerContainerFactory myRabbitListenerContainerFactory() {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setMessageConverter(new Jackson2JsonMessageConverter());
    factory.setConnectionFactory(connectionFactory());
    factory.setMaxConcurrentConsumers(5);
    return factory;
}
But still I have no idea how to get the Map out of my message.
The new code block does not change anything.
You have to configure a Jackson2JsonMessageConverter bean, and Spring Boot will pick it up for the SimpleRabbitListenerContainerFactory bean definition, which is used to build listener containers for the @RabbitListener methods.
UPDATE
Pay attention to the Spring AMQP JSON Sample.
There is a bean like jsonConverter(). According to Spring Boot auto-configuration, this bean is injected into the default:
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(SimpleRabbitListenerContainerFactoryConfigurer configurer, ConnectionFactory connectionFactory) {
which is really used for @RabbitListener by default, when the containerFactory attribute is empty.
So, you just need to configure that bean and don't need any custom SimpleRabbitListenerContainerFactory. Or, if you do define one, you should specify its bean name in the containerFactory attribute of your @RabbitListener definitions.
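For example (a sketch, assuming Spring Boot auto-configuration picks the converter up and the queue name from the question):

```java
// With this bean in place, Boot's auto-configured
// rabbitListenerContainerFactory uses JSON conversion, so the listener
// can declare the target type directly instead of taking a raw Message.
@Bean
public Jackson2JsonMessageConverter jsonConverter() {
    return new Jackson2JsonMessageConverter();
}

@RabbitListener(queues = "temparea")
public void receiveMessage(Map<String, Object> flatMap) {
    // the payload arrives already converted back to a Map
    flatMap.forEach((k, v) -> System.out.println(k + " = " + v));
}
```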
Another option to consider is Jackson2JsonMessageConverter.setTypePrecedence():
/**
 * Set the precedence for evaluating type information in message properties.
 * When using {@code @RabbitListener} at the method level, the framework attempts
 * to determine the target type for payload conversion from the method signature.
 * If so, this type is provided in the
 * {@link MessageProperties#getInferredArgumentType() inferredArgumentType}
 * message property.
 * <p> By default, if the type is concrete (not abstract, not an interface), this will
 * be used ahead of type information provided in the {@code __TypeId__} and
 * associated headers provided by the sender.
 * <p> If you wish to force the use of the {@code __TypeId__} and associated headers
 * (such as when the actual type is a subclass of the method argument type),
 * set the precedence to {@link TypePrecedence#TYPE_ID}.
 * @param typePrecedence the precedence.
 * @since 1.6
 * @see DefaultJackson2JavaTypeMapper#setTypePrecedence(Jackson2JavaTypeMapper.TypePrecedence)
 */
public void setTypePrecedence(Jackson2JavaTypeMapper.TypePrecedence typePrecedence) {
So, if you still want to have a Message as a method argument but gain JSON conversion based on the __TypeId__ header, you should consider configuring the Jackson2JsonMessageConverter with Jackson2JavaTypeMapper.TypePrecedence.TYPE_ID.
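A sketch of that precedence configuration:

```java
// Forces conversion to honor the __TypeId__ header (here the flattener's
// JsonifyLinkedHashMap) over the type inferred from the method argument.
@Bean
public Jackson2JsonMessageConverter jsonConverter() {
    Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
    converter.setTypePrecedence(Jackson2JavaTypeMapper.TypePrecedence.TYPE_ID);
    return converter;
}
```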
