How to architect event notification with Spring Framework

I need an event-notification engine for my microservice product.
For example:
Abstract: an application "A" on node "A-7" sends an asynchronous task to an application "B". Application "B" on a random node (e.g. "B-3") receives the asynchronous task, processes it, and stores the task result. Application "A" knows how to get the task result, but doesn't know when it will be available.
Problem: application "B" must notify application "A" on node "A-7" (only "A-7", not other nodes) that the task is ready. Application "A" must receive the notification immediately (i.e. with low latency).
I can use Spring Boot, Apache Kafka, Apache ZooKeeper and PostgreSQL for the solution.
Is there a library or solution that solves my problem? I think it is a common problem. I read the Spring and Spring Cloud documentation but didn't find anything.
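One common pattern, given that Kafka is on your list, is a per-node notification topic: "B" publishes the completion event to a topic named after the requesting node, and each "A" node subscribes only to its own topic. A minimal sketch assuming Spring Kafka, a hypothetical node.id property, and a made-up task-ready.<node> naming convention:

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// Application "B": publish the completion event to the requesting node's topic.
@Service
class TaskCompletionNotifier {

    private final KafkaTemplate<String, String> kafkaTemplate;

    TaskCompletionNotifier(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    void notifyTaskReady(String requestingNodeId, String taskId) {
        // e.g. "task-ready.A-7"; only node A-7 consumes this topic
        kafkaTemplate.send("task-ready." + requestingNodeId, taskId);
    }
}

// Application "A": each node listens only on its own topic.
@Component
class TaskReadyListener {

    @KafkaListener(topics = "task-ready.${node.id}", groupId = "app-a-${node.id}")
    void onTaskReady(String taskId) {
        // the result is ready; fetch it from wherever "B" stored it
    }
}
```

This assumes "A" includes its node id in the task it sends, so "B" knows where to address the notification; latency is then a single Kafka hop rather than polling.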

Related

Parallel processing in multiple instances of a Spring Boot application

I am not able to work out how to proceed. I am using Spring Boot 2, Oracle, and IBM MQ.
I have made two async requests to external applications. I need to perform some operation once I have received both responses.
I am not able to set this up because there are multiple instances of the application running, all listening to the same queue for responses.
I tried using @Transactional and a CyclicBarrier, but I guess they only work within the scope of their own instance, not across multiple instances.
How should I proceed?
It is also really difficult to reproduce the scenario where one message is read by one instance and the other by another instance at the same time, with both eventually trying to update the database simultaneously.
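Since the two responses can land on different instances, the shared state has to live somewhere all instances can see, and the database is the natural place. A common approach is an atomic counter plus a conditional "claim" update, so that exactly one instance performs the follow-up operation. A sketch, assuming a hypothetical request_state table keyed by correlation id:

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Service;

// Assumed (hypothetical) table:
//   CREATE TABLE request_state (
//     correlation_id     VARCHAR2(64) PRIMARY KEY,
//     responses_received NUMBER DEFAULT 0,
//     completed          NUMBER DEFAULT 0
//   );
@Service
public class ResponseCoordinator {

    private final JdbcTemplate jdbc;

    public ResponseCoordinator(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    /** Called from the MQ listener for each incoming response. */
    public void onResponse(String correlationId) {
        // Atomic increment: safe even if two instances process the two
        // responses at exactly the same time.
        jdbc.update(
            "UPDATE request_state SET responses_received = responses_received + 1"
            + " WHERE correlation_id = ?", correlationId);

        // Conditional claim: row-level atomicity guarantees only one
        // instance can flip the flag from 0 to 1.
        int claimed = jdbc.update(
            "UPDATE request_state SET completed = 1"
            + " WHERE correlation_id = ? AND responses_received = 2 AND completed = 0",
            correlationId);

        if (claimed == 1) {
            // both responses are in, and this instance won the claim
        }
    }
}
```

This sidesteps CyclicBarrier entirely: the database's row-level atomicity does the cross-instance coordination.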

Using RabbitMQ to send a String/custom object from one Spring Boot application to another

My requirement is, for starters, to send a string from one Spring Boot application to another using AMQP.
I am new to it, and I have gone through the Spring Boot guide, so I know the basic fundamentals of Queue, Exchange, Binding, Container and Listener.
However, the guide shows the steps for when the AMQP message is sent and received within the same application.
I am a little confused about where to start if I want to achieve this type of communication between two different Spring Boot applications.
What are the properties needed for that, etc.?
Let me know if any details are required.
Just divide the application into two:
one containing only the Sender, and
another containing only the Receiver.
Make sure the rest of the application and configuration stays the same in both. With Spring Boot's built-in RabbitMQ support, you will be able to run them fine.
The next step is to call the sender as and when needed from your business logic; a sketch of the two halves follows below.
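A minimal sketch, assuming both are Spring Boot apps pointing at the same broker (spring.rabbitmq.host etc. in each application.properties) and agreeing on a made-up queue name, demo-queue:

```java
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

// --- Application 1 (sender only) ---
@Configuration
class SenderConfig {
    @Bean
    Queue demoQueue() {
        return new Queue("demo-queue", true); // durable; auto-declared by Boot's RabbitAdmin
    }
}

@Service
class Sender {
    private final RabbitTemplate rabbitTemplate;

    Sender(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    void send(String message) {
        // the default exchange routes by queue name
        rabbitTemplate.convertAndSend("demo-queue", message);
    }
}

// --- Application 2 (receiver only) ---
@Component
class Receiver {
    @RabbitListener(queues = "demo-queue")
    void receive(String message) {
        System.out.println("Received: " + message);
    }
}
```

For a custom object instead of a String, declare a Jackson2JsonMessageConverter bean in both applications so the payload travels as JSON.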

Spring Batch remote partitioning: how to shut down slaves

I want to use Spring Batch remote partitioning to handle large workloads on the cloud, and spin up/shutdown VMs on demand.
However, when configuring the slave steps, I'm using the StepExecutionRequestHandler to handle the step requests from a JMS queue. Right now the application just hangs. How can I shut down the application after the queue is depleted?
How can I shut down the application after the queue is depleted?
In a remote partitioning setup, workers are listeners on a queue on which StepExecutionRequests are coming. The question is how to know, from the listener point of view, that the queue is depleted? This is a tricky design problem. There are some known solutions like the "End-Of-Stream" message or "Poison" record but those are tricky too since you have to make sure all listeners get one such message.
If you are using Spring Cloud Task to launch your workers, you can use the DeployerPartitionHandler which provides an elegant way to dynamically create workers on demand up to a maximum configurable number. You can find more details about it here: https://docs.spring.io/spring-cloud-task/docs/2.0.0.RELEASE/reference/htmlsingle/#batch-partitioning and an example in this github repo: https://github.com/mminella/scaling-demos/blob/master/partitioned-demo/src/main/java/io/spring/batch/partitiondemo/configuration/BatchConfiguration.java#L75
The icing on the cake is that this is based on Spring Cloud Deployer, which means you can use it on any cloud provider that implements the SCD SPI. Here is how to do it for:
Kubernetes: https://docs.spring.io/spring-cloud-task/docs/2.0.0.RELEASE/reference/htmlsingle/#_notes_on_developing_a_batch_partitioned_application_for_the_kubernetes_platform
Cloud Foundry: https://docs.spring.io/spring-cloud-task/docs/2.0.0.RELEASE/reference/htmlsingle/#_notes_on_developing_a_batch_partitioned_application_for_the_cloud_foundry_platform
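For reference, a rough sketch of the wiring, adapted from the linked example; the step name, application name and worker resource are placeholders:

```java
import org.springframework.batch.core.explore.JobExplorer;
import org.springframework.cloud.deployer.spi.task.TaskLauncher;
import org.springframework.cloud.task.batch.partition.DeployerPartitionHandler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.Resource;

@Configuration
public class PartitionHandlerConfig {

    @Bean
    public DeployerPartitionHandler partitionHandler(TaskLauncher taskLauncher,
                                                     JobExplorer jobExplorer,
                                                     Resource workerAppResource) {
        // "workerStep" is the step each launched worker JVM executes
        DeployerPartitionHandler handler = new DeployerPartitionHandler(
                taskLauncher, jobExplorer, workerAppResource, "workerStep");
        handler.setMaxWorkers(3);                  // cap on concurrent workers
        handler.setApplicationName("worker-app");  // name used for the launched tasks
        return handler;
    }
}
```

Because workers are launched as short-lived tasks rather than permanent queue listeners, each one exits on its own when its step completes, which removes the "how do I know the queue is depleted" problem.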

SCDF: Can I use an outside microservice as a source?

I am trying to work through a solution where the workflow is like this:
User hits a microservice to upload images
That microservice de-duplicates the image and if it really is new, queues it up for processing
The processing chain lives in Spring Cloud Dataflow
The microservice already exists, and we are trying to extend it to do the fancy processing. My initial cut was to use the Http Source from the sample starter pack, since that would be something I didn't have to create. The problem is that the source doesn't register itself with the Spring Discovery server, so there is no way to get an endpoint without making gross assumptions (like that it lives on the dataflow server at port XYZ).
We can create a Queue endpoint and send the data directly to a Queue source that receives the outside event and forwards it to an SCDF queue.
What would be awesome is if DataFlow could connect the start of the queue for me, without repackaging the microservice as a Source.
The major issue with Spring Cloud Data Flow is that it does not automatically start up deployed streams when the server starts up, and we need to be reasonably sure that the microservice is always up.
The lifecycle of the server is decoupled from the apps it deploys; that was intentional.
I'm not following your thoughts on how Data Flow could connect the start of the queue, but from your description there are a few things you could do:
You would need to modify the app in order to have it registered with Eureka, but this is a very simple operation, no more than a few lines of code:
You can either start from a stream app perspective: go to https://start-scs.cfapps.io/, select the http source and your binder, and then add the spring-cloud-netflix library as well as @EnableDiscoveryClient on the main Boot class.
Or start with http://start.spring.io: select Stream Rabbit or Stream Kafka, add the Web and Netflix libraries, then add the @EnableDiscoveryClient and @EnableBinding annotations and create a simple HTTP endpoint for your use case.
In either case it should be a small addition; a sketch of the second option follows below.
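A sketch of the second option, using the Spring Cloud Stream annotation model of that generation; the class name, endpoint and payload type are illustrative:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@EnableDiscoveryClient        // registers this app with Eureka
@EnableBinding(Source.class)  // binds an output channel to Rabbit/Kafka
@RestController
public class HttpSourceApplication {

    private final Source source;

    public HttpSourceApplication(Source source) {
        this.source = source;
    }

    @PostMapping("/images")
    public void accept(@RequestBody byte[] payload) {
        // forward the uploaded payload onto the binder destination
        source.output().send(MessageBuilder.withPayload(payload).build());
    }

    public static void main(String[] args) {
        SpringApplication.run(HttpSourceApplication.class, args);
    }
}
```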
You can also open an issue at https://github.com/spring-cloud-stream-app-starters/http/issues suggesting that we add @EnableDiscoveryClient to the http source app; we can take that into consideration in our next iteration as well.
I'll try to clarify a few bits.
upload images -> if it really is new -> queues it up for processing
Upon a new upload event, you'd want to process the image. Here's a similar use-case, but more of a real-time streaming style solution. This is not what you're looking to do, but I thought it might be useful.
Porting the image-processing code to a Spring Cloud Stream application is as simple as adding @EnableBinding(Processor.class). It is the same business logic: whether you're running it separately or orchestrating it via SCDF, it is still a standalone microservice. However, SCDF expects it to be one of the Source, Processor, Sink, or Task application types. We will be opening this up to support arbitrary "functions" (lambdas) in a future release.
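As a sketch, wrapping existing image-processing logic as a Processor (annotation model of that era; the method and byte[] payload type are illustrative):

```java
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

@EnableBinding(Processor.class)
public class ImageProcessor {

    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public byte[] process(byte[] image) {
        // existing image-processing business logic goes here, unchanged
        return image;
    }
}
```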
We can create a Queue endpoint and send the data directly to a Queue source that receives the outside event and forwards it to an SCDF queue.
This is one of the standard solutions. You can directly consume new events (images) from a queue/topic and process them in the image-processor that we created in the previous step. The named-channel support in the DSL facilitates just that.
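For example, a stream definition that consumes from a named destination (all names here are made up; your upload microservice would publish to the imageUploads destination on the binder):

```
dataflow:> stream create --name image-flow --definition ":imageUploads > image-processor | log" --deploy
```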
What would be awesome is if DataFlow could connect the start of the queue for me, without repackaging the microservice as a Source.
I'm not sure I understand this. If I were to guess, you're looking for a "named-channel" as the source, and that is supported (as shown above).
The major issue with Spring Cloud Data Flow is that it does not automatically start up deployed streams when the server starts up, and we need to be reasonably sure that the microservice is always up.
The moment you deploy a stream in SCDF, all the individual steps included in the DSL (i.e., the stream definition) are resolved and deployed as standalone apps in the target runtime (Cloud Foundry, Kubernetes, etc.). Once deployed, lifecycle management is left to the platform where the apps run; SCDF does not retain or track the app states.

Send feedback to Spring from a jBPM process

My application is written with Spring Framework 3.x. I want to integrate jBPM 5.x processes into it using the jBPM REST API. I have done this, but I also want to send feedback, such as a simple message, back to my Spring application so that I know the status of my process when it exits. I am not able to find any way to send this kind of feedback using the REST API.
Please point me in the right direction, or suggest any other way to integrate Spring and jBPM so that my Spring application and jBPM process can run in different application container instances (same application server, but two different instances).
I'm not sure the REST API would be capable of handling something like that; it is basically a stateless API to get/send information and doesn't handle async notifications.
What I would recommend is registering a custom process listener with the engine; it will be notified when a process completes, and at that point you can do whatever you want, for example send a JMS message or any other type of async message that could be picked up by your application (sketched below).
This information is probably already stored in the history log as well, so if your Spring application could take advantage of that, that might be another option.
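A sketch against the jBPM 5.x API (the packages moved in later versions); the JMS hand-off is illustrative, not a prescribed integration:

```java
import org.drools.event.process.DefaultProcessEventListener;
import org.drools.event.process.ProcessCompletedEvent;

public class ProcessFeedbackListener extends DefaultProcessEventListener {

    @Override
    public void afterProcessCompleted(ProcessCompletedEvent event) {
        String processId = event.getProcessInstance().getProcessId();
        long instanceId = event.getProcessInstance().getId();
        // e.g. publish a JMS message that the Spring application listens for
        sendCompletionMessage(processId, instanceId);
    }

    private void sendCompletionMessage(String processId, long instanceId) {
        // hypothetical helper: hand off to JMS/AMQP so the Spring app is notified
    }
}

// Registered on the session that runs the process:
//   ksession.addEventListener(new ProcessFeedbackListener());
```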
