How to send partial source data attributes to different targets - Spring

I am building an integration between one source and two targets. The source data object has 10 attributes, of which one target needs around 6 attributes and the other needs only 4. I would appreciate any help on how I can achieve this with Spring.

You can configure the source to send the Message to a PublishSubscribeChannel.
Then configure two Transformers to subscribe to this pub-sub channel. One transformer will transform the message to the 6-attribute version, while the other transforms it to the 4-attribute version. Each transformer then sends its transformed message to a separate channel. The two target systems will look for the messages sent to these separate channels and process them.
In terms of annotation configuration, it looks like the following (assuming the message the source sends out is Foo):
@Bean
public MessageChannel pubSubChannel() {
    return new PublishSubscribeChannel();
}

@Bean
public MessageChannel outputChannelWith4Attributes() {
    return new DirectChannel();
}

@Bean
public MessageChannel outputChannelWith6Attributes() {
    return new DirectChannel();
}

@Component
public class MyTransformer {

    @Transformer(inputChannel = "pubSubChannel", outputChannel = "outputChannelWith4Attributes")
    public Foo transformTo4Attribute(Foo foo) {
        Foo result = new Foo();
        // do the transformation logic here: copy only the 4 attributes this target needs
        return result;
    }

    @Transformer(inputChannel = "pubSubChannel", outputChannel = "outputChannelWith6Attributes")
    public Foo transformTo6Attribute(Foo foo) {
        Foo result = new Foo();
        // do the transformation logic here: copy only the 6 attributes this target needs
        return result;
    }
}
And configure the source to send the message with payload Foo to pubSubChannel. Also configure the targets to process messages from outputChannelWith4Attributes and outputChannelWith6Attributes.
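For completeness, each target side could consume from its channel with a @ServiceActivator. A minimal sketch (the handler class and the target-client calls are illustrative, not part of the original answer):

@Component
public class MyTargetHandlers {

    @ServiceActivator(inputChannel = "outputChannelWith4Attributes")
    public void sendTo4AttributeTarget(Foo foo) {
        // call the target system that needs the 4-attribute payload
    }

    @ServiceActivator(inputChannel = "outputChannelWith6Attributes")
    public void sendTo6AttributeTarget(Foo foo) {
        // call the target system that needs the 6-attribute payload
    }
}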

Related

Spring SFTP Outbound Adapter - determining when files have been sent

I have a Spring SFTP output adapter that I start via "adapter.start()" in my main program. Once started, the adapter transfers and uploads all the files in the specified directory as expected. But I want to stop the adapter after all the files have been transferred. How do I detect if all the files have been transferred so I can issue an adapter.stop()?
@Bean
public IntegrationFlow sftpOutboundFlow() {
    return IntegrationFlows.from(Files.inboundAdapter(new File(sftpOutboundDirectory))
                    .filterExpression("name.endsWith('.pdf') OR name.endsWith('.PDF')")
                    .preventDuplicates(true),
            e -> e.id("sftpOutboundAdapter")
                    .autoStartup(false)
                    .poller(Pollers.trigger(new FireOnceTrigger())
                            .maxMessagesPerPoll(-1)))
            .log(LoggingHandler.Level.INFO, "sftp.outbound", m -> m.getPayload())
            .log(LoggingHandler.Level.INFO, "sftp.outbound", m -> m.getHeaders())
            .handle(Sftp.outboundAdapter(outboundSftpSessionFactory())
                    .useTemporaryFileName(false)
                    .remoteDirectory(sftpRemoteDirectory))
            .get();
}
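(FireOnceTrigger is a custom trigger referenced above; a minimal sketch of such a trigger, assuming the intent is a single immediate poll, could look like the following. Note that newer Spring versions use Instant instead of Date in the Trigger contract.)

import java.util.Date;

import org.springframework.scheduling.Trigger;
import org.springframework.scheduling.TriggerContext;

public class FireOnceTrigger implements Trigger {

    private volatile boolean fired;

    @Override
    public Date nextExecutionTime(TriggerContext triggerContext) {
        if (fired) {
            return null; // no further polls
        }
        fired = true;
        return new Date(); // poll once, immediately
    }
}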
@Artem Bilan has already given the answer. But here's kind of a concrete implementation of what he said - for those who are Spring Integration noobs like me:
Define a service to get the PDF files on demand:
@Service
public class MyFileService {

    public List<File> getPdfFiles(final String srcDir) {
        File[] files = new File(srcDir).listFiles((dir, name) -> name.toLowerCase().endsWith(".pdf"));
        return Arrays.asList(files == null ? new File[]{} : files);
    }
}
Define a Gateway to start the SFTP upload flow on demand:
@MessagingGateway
public interface SFtpOutboundGateway {

    @Gateway(requestChannel = "sftpOutboundFlow.input")
    void uploadFiles(List<File> files);
}
Define the Integration Flow to upload the files to the SFTP server via Sftp.outboundGateway:
@Configuration
@EnableIntegration
public class FtpFlowIntegrationConfig {

    // could also be bound via @Value
    private String sftpRemoteDirectory = "/path/to/remote/dir";

    @Bean
    public SessionFactory<ChannelSftp.LsEntry> outboundSftpSessionFactory() {
        DefaultSftpSessionFactory factory = new DefaultSftpSessionFactory(true);
        factory.setHost("localhost");
        factory.setPort(22222);
        factory.setUser("client1");
        factory.setPassword("password123");
        factory.setAllowUnknownKeys(true);
        return new CachingSessionFactory<>(factory);
    }

    @Bean
    public IntegrationFlow sftpOutboundFlow(RemoteFileTemplate<ChannelSftp.LsEntry> remoteFileTemplate) {
        return e -> e
                .log(LoggingHandler.Level.INFO, "sftp.outbound", Message::getPayload)
                .log(LoggingHandler.Level.INFO, "sftp.outbound", Message::getHeaders)
                .handle(
                        Sftp.outboundGateway(remoteFileTemplate, AbstractRemoteFileOutboundGateway.Command.MPUT, "payload")
                );
    }

    @Bean
    public RemoteFileTemplate<ChannelSftp.LsEntry> remoteFileTemplate(SessionFactory<ChannelSftp.LsEntry> outboundSftpSessionFactory) {
        RemoteFileTemplate<ChannelSftp.LsEntry> template = new SftpRemoteFileTemplate(outboundSftpSessionFactory);
        template.setRemoteDirectoryExpression(new LiteralExpression(sftpRemoteDirectory));
        template.setAutoCreateDirectory(true);
        template.setUseTemporaryFileName(false);
        template.afterPropertiesSet(); // initialize after all properties are set
        return template;
    }
}
Wiring up:
@SpringBootApplication
public class SpringApp {

    public static void main(String[] args) {
        // assuming a Spring Boot application; obtain the context from SpringApplication.run
        final ConfigurableApplicationContext ctx = SpringApplication.run(SpringApp.class, args);
        final MyFileService fileService = ctx.getBean(MyFileService.class);
        final SFtpOutboundGateway sFtpOutboundGateway = ctx.getBean(SFtpOutboundGateway.class);
        // trigger the sftp upload flow manually - only once
        sFtpOutboundGateway.uploadFiles(fileService.getPdfFiles("/path/to/local/dir")); // local source dir (placeholder)
    }
}
Important notes:
1.
@Gateway(requestChannel = "sftpOutboundFlow.input")
void uploadFiles(List<File> files);
Here the DirectChannel sftpOutboundFlow.input will be used to pass the message with the payload (= List<File> files) to the receiver. If this channel has not been created yet, the Gateway will create it implicitly.
2.
@Bean
public IntegrationFlow sftpOutboundFlow(RemoteFileTemplate<ChannelSftp.LsEntry> remoteFileTemplate) { ... }
Since IntegrationFlow is a Consumer functional interface, we can simplify the flow a little by using a lambda over the IntegrationFlowDefinition. During the bean registration phase, the IntegrationFlowBeanPostProcessor converts this inline (lambda) IntegrationFlow to a StandardIntegrationFlow and processes its components. An IntegrationFlow definition using a lambda populates a DirectChannel as the inputChannel of the flow, and in the sample above it is registered in the application context as a bean with the name sftpOutboundFlow.input (flow bean name + ".input"). That's why we use that name in the SFtpOutboundGateway gateway.
Ref: https://spring.io/blog/2014/11/25/spring-integration-java-dsl-line-by-line-tutorial
3.
@Bean
public RemoteFileTemplate<ChannelSftp.LsEntry> remoteFileTemplate(SessionFactory<ChannelSftp.LsEntry> outboundSftpSessionFactory) {}
see: Remote directory for sftp outbound gateway with DSL
But I want to stop the adapter after all the files have been transferred.
Logically, this is not what this kind of component has been designed for. Since you are not going to have a constantly changing local directory, it is probably better to think about an event-driven solution that lists the files in the directory via some explicit action. Yes, it can be a call from the main method, but only once for the entire content of the dir, and that's all.
And for this reason the Sftp.outboundGateway() with a Command.MPUT is there for you:
https://docs.spring.io/spring-integration/reference/html/sftp.html#using-the-mput-command.
You can still trigger an IntegrationFlow, but it could start from a @MessagingGateway interface to be called from a main with a local directory in which to list files for uploading:
https://docs.spring.io/spring-integration/reference/html/dsl.html#java-dsl-gateway

Use Function to replyTo RPC request

I would like to use the java.util.function.Function approach to reply to a request sent via RabbitTemplate.convertSendAndReceive. It works fine with the RabbitListener, but I cannot get it working with the functional approach.
Client (working)
class Client(private val template: RabbitTemplate) {

    fun send() = template.convertSendAndReceive(
        "rpc-exchange",
        "rpc-routing-key",
        "payload message"
    )
}
Server (approach 1, working)
class Server {

    @RabbitListener(queues = ["rpc-queue"])
    fun receiveRequest(message: String) = "Response Message"

    @Bean
    fun queue(): Queue {
        return Queue("rpc-queue")
    }

    @Bean
    fun exchange(): DirectExchange {
        return DirectExchange("rpc-exchange")
    }

    @Bean
    fun binding(exchange: DirectExchange, queue: Queue): Binding {
        return BindingBuilder.bind(queue).to(exchange).with("rpc-routing-key")
    }
}
Server (approach 2, not working) --> goal
class Server {

    @Bean
    fun receiveRequest(): Function<String, String> {
        return Function { value: String ->
            "Response Message"
        }
    }
}
With the config (approach 2):
spring.cloud.function.definition: receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination: rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group: rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey: rpc-routing-key
With approach 2 the server receives the request, but unfortunately the response is lost. Does anybody know how to use the RPC pattern with the functional approach? I don't want to use the RabbitListener.
Spring Cloud Stream is not really designed for RPC on the server side, so it won't handle this automatically the way @RabbitListener does.
You can, however, achieve it by adding an output binding that routes the reply to the default exchange, using the replyTo header as the routing key:
spring.cloud.function.definition=receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination=rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group=rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey=rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression=headers['amqp_replyTo']
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> {
            return str.toUpperCase();
        };
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }
}
Result:
PAYLOAD MESSAGE
Note that the reply will come as a byte[]; you can use a custom message converter on the template to convert it to a String.
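For example, a minimal sketch of such a converter (this assumes every reply body is UTF-8 text, which may not hold for all flows):

import java.nio.charset.StandardCharsets;

import org.springframework.amqp.core.Message;
import org.springframework.amqp.support.converter.SimpleMessageConverter;

// ... on the client side, before sending:
template.setMessageConverter(new SimpleMessageConverter() {

    @Override
    public Object fromMessage(Message message) {
        // treat the reply body as UTF-8 text (assumption for this sketch)
        return new String(message.getBody(), StandardCharsets.UTF_8);
    }
});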
EDIT
In reply to the third comment below.
The RabbitTemplate uses direct reply-to by default, so the reply address is not a real queue; it is a pseudo queue created by the broker and associated with a consumer in the template.
You can also configure the template to use temporary reply queues, but they, too, are routed to via the default exchange ("").
You can, however, configure an external reply container, with the template as the listener.
You can then route back using whatever exchange and routing key you want.
Putting it all together:
spring.cloud.function.definition=receiveRequest
spring.cloud.stream.bindings.receiveRequest-in-0.destination=rpc-exchange
spring.cloud.stream.bindings.receiveRequest-in-0.group=rpc-queue
spring.cloud.stream.rabbit.bindings.receiveRequest-in-0.consumer.bindingRoutingKey=rpc-routing-key
spring.cloud.stream.bindings.receiveRequest-out-0.destination=reply-exchange
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression='reply-routing-key'
spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.declare-exchange=false
spring.rabbitmq.template.reply-timeout=10000
#logging.level.org.springframework.amqp=debug
@SpringBootApplication
public class So66586230Application {

    public static void main(String[] args) {
        SpringApplication.run(So66586230Application.class, args);
    }

    @Bean
    Function<String, String> receiveRequest() {
        return str -> {
            return str.toUpperCase();
        };
    }

    @Bean
    SimpleMessageListenerContainer replyContainer(SimpleRabbitListenerContainerFactory factory,
            RabbitTemplate template) {

        // the template itself is the listener on the external reply container
        template.setReplyAddress("reply-queue");
        SimpleMessageListenerContainer container = factory.createListenerContainer();
        container.setQueueNames("reply-queue");
        container.setMessageListener(template);
        return container;
    }

    @Bean
    public ApplicationRunner runner(RabbitTemplate template, SimpleMessageListenerContainer replyContainer) {
        return args -> {
            System.out.println(new String((byte[]) template.convertSendAndReceive(
                    "rpc-exchange",
                    "rpc-routing-key",
                    "payload message")));
        };
    }
}
IMPORTANT: if you have multiple instances of the client side, each needs its own reply queue.
In that case, the routing key must be the queue name, and you should revert to the previous example's routing key expression (to get the queue name from the header).
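That is, as in the first example above:

spring.cloud.stream.rabbit.bindings.receiveRequest-out-0.producer.routing-key-expression=headers['amqp_replyTo']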

Performing aggregation of records and launching a Spring Cloud Task in a single processor in Spring Cloud Stream

I am trying to perform the following actions:
Aggregating messages
Launching a Spring Cloud Task
But I am not able to pass the aggregated message to the method that launches the task. Below is the piece of code:
@Autowired
private TaskProcessorProperties processorProperties;

@Autowired
Processor processor;

@Autowired
private AppConfiguration appConfiguration;

@Transformer(inputChannel = MyProcessor.intermidiate, outputChannel = Processor.OUTPUT)
public Object setupRequest(String message) {
    Map<String, String> properties = new HashMap<>();
    if (StringUtils.hasText(this.processorProperties.getDataSourceUrl())) {
        properties.put("spring_datasource_url", this.processorProperties.getDataSourceUrl());
    }
    if (StringUtils.hasText(this.processorProperties.getDataSourceDriverClassName())) {
        properties.put("spring_datasource_driverClassName", this.processorProperties
                .getDataSourceDriverClassName());
    }
    if (StringUtils.hasText(this.processorProperties.getDataSourceUserName())) {
        properties.put("spring_datasource_username", this.processorProperties
                .getDataSourceUserName());
    }
    if (StringUtils.hasText(this.processorProperties.getDataSourcePassword())) {
        properties.put("spring_datasource_password", this.processorProperties
                .getDataSourcePassword());
    }
    properties.put("payload", message);
    TaskLaunchRequest request = new TaskLaunchRequest(
            this.processorProperties.getUri(), null, properties, null,
            this.processorProperties.getApplicationName());
    System.out.println("inside task launcher **************************");
    System.out.println(request.toString() + "**************************");
    return new GenericMessage<>(request);
}
@ServiceActivator(inputChannel = Processor.INPUT, outputChannel = MyProcessor.intermidiate)
@Bean
public MessageHandler aggregator() {
    AggregatingMessageHandler aggregatingMessageHandler =
            new AggregatingMessageHandler(new DefaultAggregatingMessageGroupProcessor(),
                    new SimpleMessageStore(10));
    AggregatorFactoryBean aggregatorFactoryBean = new AggregatorFactoryBean();
    //aggregatorFactoryBean.setMessageStore();
    //aggregatingMessageHandler.setOutputChannel(processor.output());
    //aggregatorFactoryBean.setDiscardChannel(processor.output());
    aggregatingMessageHandler.setSendPartialResultOnExpiry(true);
    aggregatingMessageHandler.setSendTimeout(1000L);
    aggregatingMessageHandler.setCorrelationStrategy(new ExpressionEvaluatingCorrelationStrategy("'FOO'"));
    aggregatingMessageHandler.setReleaseStrategy(new MessageCountReleaseStrategy(3)); //ExpressionEvaluatingReleaseStrategy("size() == 5")
    aggregatingMessageHandler.setExpireGroupsUponCompletion(true);
    aggregatingMessageHandler.setGroupTimeoutExpression(new ValueExpression<>(3000L)); //size() ge 2 ? 5000 : -1
    aggregatingMessageHandler.setExpireGroupsUponTimeout(true);
    return aggregatingMessageHandler;
}
To pass the message between the aggregator and the task launcher method (setupRequest(String message)), I am using a channel, MyProcessor.intermidiate, defined as below:
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;

public interface MyProcessor {

    String intermidiate = "intermidiate";

    @Output("intermidiate")
    MessageChannel intermidiate();
}
The application.properties used is below:
aggregator.message-store-type=persistentMessageStore
spring.cloud.stream.bindings.input.destination=output
spring.cloud.stream.bindings.output.destination=input
It's not working with the above-mentioned approach.
If I change the channel name in this class from my own MyProcessor.intermidiate to Processor.INPUT or Processor.OUTPUT, then one of the two steps works (depending on which Processor.* channel name I use).
I want to aggregate the messages first and then launch the task for the aggregated messages in the processor, which is not happening.
See here:
public Object setupRequest(String message) {
So, you expect some String as the request payload.
But your aggregator uses a DefaultAggregatingMessageGroupProcessor, which does exactly this:
List<Object> payloads = new ArrayList<Object>(messages.size());
for (Message<?> message : messages) {
    payloads.add(message.getPayload());
}
return payloads;
So, the released payload is definitely not a String.
It is strange that you don't show what exception happens with your configuration, but I assume you need to change the setupRequest() signature to expect a List of payloads, or you need to provide some custom MessageGroupProcessor to build that String from the group of messages you have aggregated.
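A minimal sketch of the first suggestion (assuming the aggregated payloads are Strings and that joining them with commas is acceptable for the task's payload property):

@Transformer(inputChannel = MyProcessor.intermidiate, outputChannel = Processor.OUTPUT)
public Object setupRequest(List<String> messages) {
    // the aggregator releases a List of payloads, so accept a List here
    String message = String.join(",", messages);
    Map<String, String> properties = new HashMap<>();
    properties.put("payload", message);
    TaskLaunchRequest request = new TaskLaunchRequest(
            this.processorProperties.getUri(), null, properties, null,
            this.processorProperties.getApplicationName());
    return new GenericMessage<>(request);
}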

Multiple @RabbitListeners sending replies to the same queue when using sendAndReceive() in the producer

I am using Spring Boot with Spring AMQP, and I want to use the RPC pattern with the synchronous sendAndReceive method in the producer. My configuration assumes 1 exchange with 2 distinct bindings (1 for each operation on the same resource). I want to send 2 messages with 2 different routing keys and receive the responses on distinct reply-to queues.
The problem is that, as far as I know, sendAndReceive will wait for the reply on a queue named "<exchange name>.replies", so both replies will be sent to the products.replies queue (at least that is my understanding).
My publisher config:
@Bean
public DirectExchange productsExchange() {
    return new DirectExchange("products");
}

@Bean
public OrderService orderService() {
    return new MqOrderService();
}

@Bean
public RabbitTemplate rabbitTemplate(final ConnectionFactory connectionFactory) {
    final RabbitTemplate rabbitTemplate = new RabbitTemplate(connectionFactory);
    rabbitTemplate.setMessageConverter(producerJackson2MessageConverter());
    return rabbitTemplate;
}

@Bean
public Jackson2JsonMessageConverter producerJackson2MessageConverter() {
    return new Jackson2JsonMessageConverter();
}
and the 2 senders:
...
final Message response = template.sendAndReceive(productsExchange.getName(), "products.get", message);
...
final Message response = template.sendAndReceive(productsExchange.getName(), "products.stock.update", message);
...
consumer config:
@Bean
public Queue getProductQueue() {
    return new Queue("getProductBySku");
}

@Bean
public Queue updateStockQueue() {
    return new Queue("updateProductStock");
}

@Bean
public DirectExchange exchange() {
    return new DirectExchange("products");
}

@Bean
public Binding getProductBinding(DirectExchange exchange) {
    return BindingBuilder.bind(getProductQueue())
            .to(exchange)
            .with("products.get");
}

@Bean
public Binding modifyStockBinding(DirectExchange exchange) {
    return BindingBuilder.bind(updateStockQueue())
            .to(exchange)
            .with("products.stock.update");
}
and @RabbitListeners with the following signatures:
@RabbitListener(queues = "getProductBySku")
public Message getProduct(GetProductResource getProductResource) {...}

@RabbitListener(queues = "updateProductStock")
public Message updateStock(UpdateStockResource updateStockResource) {...}
I noticed that the second sender receives 2 responses, one of which is of an invalid type (from the first receiver). Is there any way to make these connections distinct? Or is using a separate exchange for each operation the only reasonable solution?
as far as I know, sendAndReceive will wait for the reply on a queue named "<exchange name>.replies"
Where did you get that idea?
Depending on which version you are using, either a temporary reply queue is created for each request, or RabbitMQ's "direct reply-to" mechanism is used, which again means each request is replied to on a dedicated pseudo queue called amq.rabbitmq.reply-to.
I don't see any way for one producer to get another's reply; even if you use an explicit reply container (which is generally not necessary any more), the template will correlate the replies to the requests.
Try enabling DEBUG logging to see if it provides any hints.
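For example, in application.properties:

logging.level.org.springframework.amqp=DEBUG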

Right way to split, enrich items then send each item to another channel?

Is this the right way to split a list of items, enrich each item, and then send each of the enriched items to another channel?
It seems like even though each item is being enriched, only the last one is sent to the output channel...
Here is the snippet from my test, where I see the flow being invoked for only page2.
this.sitePackage = new Package();
this.sitePackage.add(page1);
this.sitePackage.add(page2);
this.sitePackage.add(page3);
//Publish using gateway
this.publishingService.publish(sitePackage);
If I do this however...
this.sitePackage.add(page1);
this.sitePackage.add(page1);
this.sitePackage.add(page2);
this.sitePackage.add(page2);
this.sitePackage.add(page3);
this.sitePackage.add(page3);
I see all the pages being published, but the last one is page2, not page3 (even though from debugging I can see the instance has page3's properties).
It seems like every other item is being seen by the flows...
My flows go like this...
Starting with the PublishPackage flow. This is the main entry flow, intended to split the items out of the package and send each of them, after enriching the payload, to the flows attached to the publishPackageItem channel...
@Bean
IntegrationFlow flowPublishPackage()
{
    return flow -> flow
            .channel(this.publishPackageChannel())
            .<Package>handle((p, h) -> this.savePackage(p))
            .split(Package.class, this::splitPackage)
            .channel(this.publishPackageItemChannel());
}

@Bean
@PublishPackageChannel
MessageChannel publishPackageChannel()
{
    return MessageChannels.direct().get();
}

@Bean
@PublishPackageItemChannel
MessageChannel publishPackageItemChannel()
{
    return MessageChannels.direct().get();
}

@Splitter
List<PackageEntry> splitPackage(final Package bundle)
{
    final List<PackageEntry> enrichedEntries = new ArrayList<>();
    for (final PackageEntry entry : bundle.getItems())
    {
        enrichedEntries.add(entry);
    }
    return enrichedEntries;
}

@Bean
GatewayProxyFactoryBean publishingGateway()
{
    final GatewayProxyFactoryBean proxy = new GatewayProxyFactoryBean(PublishingService.class);
    proxy.setBeanFactory(this.beanFactory);
    proxy.setDefaultRequestChannel(this.publishPackageChannel());
    proxy.setDefaultReplyChannel(this.publishPackageChannel());
    proxy.afterPropertiesSet();
    return proxy;
}
Next, the CMS publish flows are attached to the publishPackageItem channel, and based on the type after splitting, each item is routed to a specific element channel for handling. After splitting the page, only specific element types may have a subscribing flow.
@Inject
public CmsPublishFlow(@PublishPackageItemChannel final MessageChannel channelPublishPackageItem)
{
    this.channelPublishPackageItem = channelPublishPackageItem;
}

@Bean
@PublishPageChannel
MessageChannel channelPublishPage()
{
    return MessageChannels.direct().get();
}

@Bean
IntegrationFlow flowPublishContent()
{
    return flow -> flow
            .channel(this.channelPublishPackageItem)
            .filter(PackageEntry.class, p -> p.getEntry() instanceof Page)
            .transform(PackageEntry.class, PackageEntry::getEntry)
            .split(Page.class, this::traversePageElements)
            .<Content, String>route(Content::getType, mapping -> mapping
                    .resolutionRequired(false)
                    .subFlowMapping(PAGE, sf -> sf.channel(channelPublishPage()))
                    .subFlowMapping(IMAGE, sf -> sf.channel(channelPublishAsset()))
                    .defaultOutputToParentFlow());
    //.channel(IntegrationContextUtils.NULL_CHANNEL_BEAN_NAME);
}
Finally, my goal is to subscribe to the channel and handle each element accordingly. I subscribe this flow to the channelPublishPage. Each subscriber may handle the element differently.
@Inject
@PublishPageChannel
private MessageChannel channelPublishPage;

@Bean
IntegrationFlow flowPublishPage()
{
    return flow -> flow
            .channel(this.channelPublishPage)
            .publishSubscribeChannel(c -> c
                    .subscribe(s -> s
                            .<Page>handle((p, h) -> this
                                    .generatePage(p))));
}
I somehow feel that the problem is here:
proxy.setDefaultRequestChannel(this.publishPackageChannel());
proxy.setDefaultReplyChannel(this.publishPackageChannel());
Consider not using the same channel for requests and for waiting for replies. This way you bring in a loop and really unexpected behavior: on a DirectChannel with the default round-robin dispatcher, the gateway's reply listener and the flow's first handler take turns consuming messages, which is why only every other item reaches the flow.
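A minimal sketch of the suggested change (the separate publishReplyChannel() bean is illustrative, not from the original code):

@Bean
GatewayProxyFactoryBean publishingGateway()
{
    final GatewayProxyFactoryBean proxy = new GatewayProxyFactoryBean(PublishingService.class);
    proxy.setBeanFactory(this.beanFactory);
    proxy.setDefaultRequestChannel(this.publishPackageChannel());
    proxy.setDefaultReplyChannel(this.publishReplyChannel()); // a distinct reply channel
    proxy.afterPropertiesSet();
    return proxy;
}

@Bean
MessageChannel publishReplyChannel()
{
    return MessageChannels.direct().get();
}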
