Spring Integration: How to know when a workflow completes?

I am adding logging/monitoring functionality to multiple Spring Integration deployments. I want to open a logger transaction at the start of a workflow and close it at the end of the workflow; when the transaction is closed, I send the logs and metrics to a centralized logging server. The goal is to be able to see, at the end of the day, the logs for every message that went through a workflow across all the Spring Integration deployments.
The deployments are owned by many different teams and I can't count on each team to add the logging code for me, so I want to write code that runs across all the Spring Integration deployments.
To start a log transaction, my solution is a global channel interceptor on a set of inbound messaging channels. All the workflows built across the deployments use the same set of inbound channels, so the interceptor that starts the log transaction is guaranteed to run.
I also pass the logger transaction details along as message headers.
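For illustration, the starting interceptor is registered along these lines (a rough sketch only; the channel-name pattern and header name are placeholders, not the real values):

    import java.util.UUID;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.config.GlobalChannelInterceptor;
    import org.springframework.integration.support.MessageBuilder;
    import org.springframework.messaging.Message;
    import org.springframework.messaging.MessageChannel;
    import org.springframework.messaging.support.ChannelInterceptor;

    @Configuration
    public class StartLogTransactionConfig {

        @Bean
        @GlobalChannelInterceptor(patterns = "inbound*") // placeholder pattern matching the shared inbound channels
        public ChannelInterceptor startLogTransactionInterceptor() {
            return new ChannelInterceptor() {

                @Override
                public Message<?> preSend(Message<?> message, MessageChannel channel) {
                    // open the logger "transaction" and carry its id along in a header (header name is made up)
                    return MessageBuilder.fromMessage(message)
                            .setHeader("logTransactionId", UUID.randomUUID().toString())
                            .build();
                }
            };
        }
    }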
But I am having trouble figuring out a solution for ending the transaction. The workflows can be synchronous as well as asynchronous, and not all endpoints within a workflow have an output channel.
Any strategies/ideas on how I can close the logger transaction?
An example workflow is shown in the image below:

When you have a flow that ends with a channel adapter (or other endpoint that produces no result), make the last channel a publish-subscribe-channel; subscribe a second endpoint to that channel that terminates the "transaction".
I generally prefer to add an order attribute on such endpoints - make the regular endpoint order="1" and the flow terminator order="2" - clearly indicating it is called after the main endpoint.
It is important that no task executor is added to the channel so the endpoints are called serially on the same thread.
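For example, with the Java DSL the tail of such a flow might look something like this (a sketch; the channel and handler bodies are made up, and since no executor is configured on the publish-subscribe channel the subscribers are invoked serially, in the order they are declared):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.config.EnableIntegration;
    import org.springframework.integration.dsl.IntegrationFlow;
    import org.springframework.integration.dsl.IntegrationFlows;

    @Configuration
    @EnableIntegration
    public class WorkflowTailConfig {

        @Bean
        public IntegrationFlow workflowTail() {
            return IntegrationFlows.from("lastChannel") // hypothetical final channel of the workflow
                    // publish-subscribe channel with NO executor: subscribers run serially on the same thread
                    .publishSubscribeChannel(pubSub -> pubSub
                            // subscriber 1: the regular terminal endpoint (produces no result)
                            .subscribe(flow -> flow.handle(message -> {
                                // ... the real work of the final endpoint ...
                            }))
                            // subscriber 2: runs after the main endpoint and "closes" the logger transaction
                            .subscribe(flow -> flow.handle(message -> {
                                Object txId = message.getHeaders().get("logTransactionId");
                                // ... close the log transaction and ship logs/metrics for txId ...
                            })))
                    .get();
        }
    }

The XML equivalent is a publish-subscribe-channel with the two subscribing endpoints carrying order="1" and order="2", as described above.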

Related

JMS message processing with Spring Integration in a cloud environment

I'm currently trying to refactor the processing of JMS messages to work in a distributed/cloud environment. To allow better retry and error handling, the messages are first stored in the database as a JPA entity and then read by a Spring Integration JPA inbound adapter. This works fine as long as just a single instance of my service is running. However, when multiple instances are running, they try to process the same message, even after I introduced a processing state on the persisted messages.
I have already tried saving the JMS messages in a JDBC message store, but then I would have to define a group identifier by which an instance could select a message, which is not really possible since the number of instances is dynamic and I cannot assign a group id to each instance. Another possibility could be some kind of distributed lock with a LockRegistry, but I couldn't make that work.
Do you have any hints/advice on how I could best implement the following requirements with Spring Integration:
The JMS message should be persisted
Any instance can pick up the message and process it
If the processing fails, it is retried up to x times (possibly by another instance)
If an instance crashes or is killed during processing, the message must not be lost
Is there maybe some Spring Cloud component that could be helpful?
I'm happy about every hint on which direction I should go.
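For reference, the LockRegistry idea mentioned above could be wired up roughly like this with Spring Integration's JdbcLockRegistry (a sketch of the bean wiring only; nothing application-specific is implied):

    import javax.sql.DataSource;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.jdbc.lock.DefaultLockRepository;
    import org.springframework.integration.jdbc.lock.JdbcLockRegistry;

    @Configuration
    public class DistributedLockConfig {

        @Bean
        public DefaultLockRepository lockRepository(DataSource dataSource) {
            // backed by the INT_LOCK table from the Spring Integration JDBC schema
            return new DefaultLockRepository(dataSource);
        }

        @Bean
        public JdbcLockRegistry lockRegistry(DefaultLockRepository lockRepository) {
            return new JdbcLockRegistry(lockRepository);
        }
    }

Each instance would then wrap its processing in lockRegistry.obtain(messageId).tryLock(), releasing the lock in a finally block, so that only one instance works on a given message at a time.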

Can we restrict Spring Boot RabbitMQ message processing to specific times only?

Using Spring Boot's @RabbitListener, we are able to process AMQP messages.
Whenever a message is sent, it is immediately published to the destination exchange and routed to the queue.
With @RabbitListener we then process the message immediately.
But we need to process messages only between specific times, for example 1 AM to 6 AM.
How can we achieve that?
First of all, you can take a look at the Delayed Exchange feature of RabbitMQ: https://docs.spring.io/spring-amqp/docs/current/reference/html/#delayed-message-exchange
With this approach, the producer side determines how long a message should be delayed before it is routed to the target exchange for actual consumption.
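For example, the producer side could look roughly like this (exchange and routing-key names are made up; the delayed exchange type requires the rabbitmq_delayed_message_exchange plugin on the broker):

    import org.springframework.amqp.core.DirectExchange;
    import org.springframework.amqp.core.ExchangeBuilder;
    import org.springframework.amqp.rabbit.core.RabbitTemplate;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class DelayedExchangeConfig {

        @Bean
        public DirectExchange workExchange() {
            // declared as a delayed exchange; messages carry an x-delay header
            return ExchangeBuilder.directExchange("work.exchange").delayed().build();
        }

        // producer-side helper: hold the message back until the processing window opens
        public void sendDelayed(RabbitTemplate rabbitTemplate, Object payload, long millisUntilWindow) {
            rabbitTemplate.convertAndSend("work.exchange", "work", payload, message -> {
                message.getMessageProperties().setDelay((int) millisUntilWindow);
                return message;
            });
        }
    }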
Another way is to take a look at Spring Integration and its Delayer component: https://docs.spring.io/spring-integration/docs/5.2.0.BUILD-SNAPSHOT/reference/html/messaging-endpoints.html#delayer
This way you consume messages from RabbitMQ but delay them in the target application's logic.
Yet another way I see is to start()/stop() the listener container according to your timing requirements. This way the messages stay in RabbitMQ until you start the listener container: https://docs.spring.io/spring-amqp/docs/current/reference/html/#containerAttributes
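A sketch of that last option using Spring scheduling (the listener id, queue name, and cron expressions for the 1 AM to 6 AM window are assumptions; @EnableScheduling must be present in the application):

    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Component
    public class NightlyWindowConsumer {

        private final RabbitListenerEndpointRegistry registry;

        public NightlyWindowConsumer(RabbitListenerEndpointRegistry registry) {
            this.registry = registry;
        }

        // container is not started automatically; messages accumulate in RabbitMQ meanwhile
        @RabbitListener(id = "nightly", queues = "work.queue", autoStartup = "false")
        public void handle(String payload) {
            // process the message
        }

        @Scheduled(cron = "0 0 1 * * *") // 1 AM: start consuming
        public void openWindow() {
            registry.getListenerContainer("nightly").start();
        }

        @Scheduled(cron = "0 0 6 * * *") // 6 AM: stop consuming again
        public void closeWindow() {
            registry.getListenerContainer("nightly").stop();
        }
    }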

Deferred consumption of message queue

Sorry this might sound naive to JMS gurus, but still.
I have a requirement where a Spring-based application cannot call a SAP back-end synchronously (via its web-service interface) because the response from SAP is far too slow. We are thinking of a solution where the updates from the GUI are saved by the Spring middleware in a local database while a message is simultaneously sent to a JMS queue. Then, every few hours (or maybe nightly), a batch job runs that consumes the messages from the JMS queue, queries the local database based on the message contents, and sends the result to the SAP web service.
Is this approach correct? Would I need a batch job to trigger the JMS message consumption (since I don't want to consume the messages immediately, but deferred and at a pre-decided time)? Is there any way to implement this gracefully in Spring (something like Camel)? Appreciate your help.
Spring Batch has a JmsItemReader that can be used in a batch program; an empty queue signals the end of the batch. Spring Cloud Task is built on top of batch and can be used for cloud deployments.
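A rough sketch of such a step (queue name and chunk size are arbitrary; the JmsTemplate needs a finite receive timeout so that an empty queue ends the step):

    import javax.jms.ConnectionFactory;

    import org.springframework.batch.core.Step;
    import org.springframework.batch.core.configuration.annotation.StepBuilderFactory;
    import org.springframework.batch.item.jms.JmsItemReader;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.jms.core.JmsTemplate;

    @Configuration
    public class SapForwardingJobConfig {

        @Bean
        public JmsItemReader<Object> jmsItemReader(ConnectionFactory connectionFactory) {
            JmsTemplate jmsTemplate = new JmsTemplate(connectionFactory);
            jmsTemplate.setDefaultDestinationName("sap.outbound.queue"); // hypothetical queue name
            jmsTemplate.setReceiveTimeout(2000); // finite timeout: a null read means the queue is empty
            JmsItemReader<Object> reader = new JmsItemReader<>();
            reader.setJmsTemplate(jmsTemplate);
            return reader;
        }

        @Bean
        public Step forwardToSapStep(StepBuilderFactory steps, JmsItemReader<Object> reader) {
            return steps.get("forwardToSapStep")
                    .<Object, Object>chunk(10)
                    .reader(reader)
                    .writer(items -> {
                        // for each queued message: query the local database and call the SAP web service
                    })
                    .build();
        }
    }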

SCDF: Can I use an outside microservice as a source?

I am trying to work through a solution where the workflow is like this:
User hits a microservice to upload images
That microservice de-duplicates the image and if it really is new, queues it up for processing
The processing chain lives in Spring Cloud Dataflow
The microservice already exists, and we are trying to extend it to do the fancy processing. My initial cut was to use the Http Source from the sample starter pack, since that would be something I didn't have to create. The problem is that the source doesn't register itself with the Spring discovery server, so there is no way to get an endpoint without making gross assumptions (like it lives on the dataflow server at port XYZ).
We can create a Queue endpoint and send the data directly to a Queue source that receives the outside event and forwards it to an SCDF queue.
What would be awesome is if DataFlow could connect the start of the queue for me, without repackaging the microservice as a Source.
The major issue with Spring Cloud Data Flow is that it does not automatically start up deployed streams when the server starts up, and we need to be reasonably sure that the microservice is always up.
The lifecycle of the server is decoupled from the apps it deploys; that was intentional.
I'm not following your thoughts on how Data Flow could connect the start of the queue, but from your description there are a few things you could do:
You would need to modify the app in order to have it registered with Eureka, but this is a very simple operation, no more than a few lines of code:
You can either start from a stream-app perspective: https://start-scs.cfapps.io/ , select the http source and your binder, and then add the spring-cloud-netflix library as well as @EnableDiscoveryClient on the main Boot class.
Or start with http://start.spring.io: select Stream Rabbit or Stream Kafka, add the Web and Netflix libraries, then add the @EnableDiscoveryClient and @EnableBinding annotations and create a simple HTTP endpoint for your use case.
In any case it should be a small addition.
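As a sketch of that second option (class name, endpoint path, and payload type are made up; this assumes the annotation-based Spring Cloud Stream programming model of that era):

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.messaging.Source;
    import org.springframework.http.ResponseEntity;
    import org.springframework.messaging.support.MessageBuilder;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @EnableDiscoveryClient       // registers the app with Eureka
    @EnableBinding(Source.class) // binds an output channel to the configured binder (Rabbit/Kafka)
    @RestController
    public class ImageIngestApplication {

        private final Source source;

        public ImageIngestApplication(Source source) {
            this.source = source;
        }

        // simple HTTP endpoint that forwards new images onto the stream
        @PostMapping("/images")
        public ResponseEntity<Void> accept(@RequestBody byte[] image) {
            source.output().send(MessageBuilder.withPayload(image).build());
            return ResponseEntity.accepted().build();
        }

        public static void main(String[] args) {
            SpringApplication.run(ImageIngestApplication.class, args);
        }
    }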
You can also open an issue at https://github.com/spring-cloud-stream-app-starters/http/issues suggesting that we add @EnableDiscoveryClient to the http source app; we can take that into consideration in our next iteration as well.
I'll try to clarify a few bits.
upload images -> if it really is new -> queues it up for processing
Upon a new upload event, you'd want to process the image. Here's a similar use-case, but more of a real-time streaming style solution. This is not what you're looking to do, but I thought it might be useful.
Porting the image processing code to a Spring Cloud Stream application is as simple as adding @EnableBinding(Processor.class). It is the same business logic - whether you're running it separately or orchestrating it via SCDF, it is still a standalone microservice. However, SCDF expects it to be either a Source, Processor, Sink, or Task application type. We will be opening this up to support arbitrary "functions" (lambdas) in a future release.
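For instance, the image-processing step could be as small as this (class and method names are hypothetical):

    import org.springframework.cloud.stream.annotation.EnableBinding;
    import org.springframework.cloud.stream.annotation.StreamListener;
    import org.springframework.cloud.stream.messaging.Processor;
    import org.springframework.messaging.handler.annotation.SendTo;

    @EnableBinding(Processor.class)
    public class ImageProcessor {

        // consumes from the processor's input binding, publishes the result to its output binding
        @StreamListener(Processor.INPUT)
        @SendTo(Processor.OUTPUT)
        public byte[] process(byte[] image) {
            // ... the existing image-processing business logic ...
            return image;
        }
    }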
We can create a Queue endpoint and send the data directly to a Queue source that receives the outside event and forwards it to an SCDF queue.
This is one of the standard solutions. You can directly consume new events (images) from a queue/topic and process them in the image processor that we created in the previous step. The named-channel support in the DSL facilitates just that.
What would be awesome is if DataFlow could connect the start of the queue for me, without repackaging the microservice as a Source.
I'm not sure I understand this. If I had to guess, you're looking for a "named channel" as the source, and that is supported.
The major issue with Spring Cloud Data Flow is that it does not automatically start up deployed streams when the server starts up, and we need to be reasonably sure that the microservice is always up.
The moment you deploy a stream in SCDF, all the individual steps included in the DSL (i.e., the stream definition) are resolved and deployed as standalone apps in the target runtime (Cloud Foundry, Kubernetes, etc.). Once deployed, lifecycle management is left to the platform where the apps run. SCDF does not retain or track the app states.

JMS with Spring Integration or Spring Batch

Our project is to integrate two applications, using the REST API of each and using JMS (to make the integration asynchronous). Application 1 writes the message to the queue. The next step is to read the message from the queue, process it, and send it to Application 2.
I have two questions:
Should we use one more queue for storing messages after processing and before sending them to Application 2?
Should we use Spring Batch or Spring Integration to read/process the data?
Either you aren't showing the whole premise, or you are really over-engineering your app. If you just need to read messages from the queue, it is enough to use Spring JMS directly... On the other hand, with Spring Integration and the power of its adapters, you can simply flow messages from an <int-jms:message-driven-channel-adapter> to an <int-http:outbound-channel-adapter>.
I don't see a reason to store the message anywhere else during the read-and-send process, because if some exception occurs you simply roll the message back to the JMS queue.
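A minimal Java DSL equivalent of that adapter chain might look like this (the queue name and target URL are placeholders):

    import javax.jms.ConnectionFactory;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.http.HttpMethod;
    import org.springframework.integration.config.EnableIntegration;
    import org.springframework.integration.dsl.IntegrationFlow;
    import org.springframework.integration.dsl.IntegrationFlows;
    import org.springframework.integration.http.dsl.Http;
    import org.springframework.integration.jms.dsl.Jms;

    @Configuration
    @EnableIntegration
    public class QueueToRestConfig {

        @Bean
        public IntegrationFlow queueToRest(ConnectionFactory connectionFactory) {
            return IntegrationFlows
                    // message-driven consumption from the queue written by Application 1
                    .from(Jms.messageDrivenChannelAdapter(connectionFactory)
                            .destination("app1.outbound.queue"))
                    // one-way HTTP call to Application 2's REST API
                    .handle(Http.outboundChannelAdapter("http://application2/api/messages")
                            .httpMethod(HttpMethod.POST))
                    .get();
        }
    }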
