Spring Boot Rabbit consumer vs REST API prioritization

We are trying to figure out how Spring Boot behaves in a service that BOTH
a) pulls events from a Rabbit queue
b) provides the UI with REST APIs
The problem is that we would like Spring Boot configured in a way that prioritizes the REST APIs over the Rabbit queue. I googled for things like "Spring Boot REST controller buffer" but haven't found anything viable.
Spring Boot should have some kind of mechanism that, after processing an event (REST API call or Rabbit pull), checks whether there is anything in a REST buffer (if such a thing even exists), and only if that is empty pulls another event from the queue.
We are not even sure whether Spring Boot prioritizes Rabbit over REST, but after some UAT it seems it does.
Switching to a push pattern with Rabbit seems like an option, but we would prefer something else.
Another option was to create replica services: the same business logic in two services, one just consuming from Rabbit and the other offering REST APIs for the UI, but this of course adds to DevOps complexity.

The two mechanisms are completely independent; the framework provides no coordination between them. REST requests are served by the web container's thread pool, while @RabbitListener containers consume on their own listener threads, so neither side is prioritized over the other. If you want REST to win under load, you have to arrange it yourself, e.g. by keeping listener concurrency and prefetch low, or by stopping and starting the listener container based on web load.
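A minimal sketch of the stop/start approach, assuming a listener registered with id "ordersListener" and some application-specific load signal (both illustrative, not anything Spring provides out of the box):

```java
import org.springframework.amqp.rabbit.listener.MessageListenerContainer;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Requires @EnableScheduling on a configuration class.
@Component
public class RabbitThrottle {

    private final RabbitListenerEndpointRegistry registry;

    public RabbitThrottle(RabbitListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    @Scheduled(fixedDelay = 1000)
    public void adjust() {
        // "ordersListener" is the (illustrative) id set via @RabbitListener(id = ...).
        MessageListenerContainer container = registry.getListenerContainer("ordersListener");
        if (container == null) {
            return;
        }
        if (restLoadIsHigh() && container.isRunning()) {
            container.stop();   // pause Rabbit consumption while REST is busy
        } else if (!restLoadIsHigh() && !container.isRunning()) {
            container.start();  // resume once REST traffic has drained
        }
    }

    private boolean restLoadIsHigh() {
        // Assumption: an application-specific signal, e.g. the servlet
        // container's busy-thread count exposed via Micrometer.
        return false;
    }
}
```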

Related

How to initialize a continuously running stream using Alpakka, Spring Boot & Akka Streams?

All,
I am developing an application which uses the Alpakka Spring Boot integration to read data from Kafka. I have most of the code ready; the only place I am stuck is how to initialize a continuously running stream, as this is going to be a backend application and won't have any API that could be called to start it.
As far as I know, Alpakka's Spring integration is basically designed around exposing Akka Streams via a Spring HTTP controller. So I'm not sure what purpose bringing Spring into this serves, since there's quite an impedance mismatch between the way an Akka application tends to work and the way a Spring application tends to work.
Assuming you're talking about using Alpakka Kafka, the most idiomatic thing to do would be to just start a stream fed by an Alpakka Kafka Source in your main method; it will run until it fails or is killed. You may want to wrap the consumer and business logic in a RestartSource to ensure that the stream restarts in the event of a failure (note that one should generally expect messages for which the offset commit hadn't happened to be processed again, as Kafka in typical cases can only guarantee at-least-once processing).
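A minimal sketch of that shape in Java, assuming Akka 2.6-era APIs; the bootstrap servers, group id, and topic name are illustrative, and a real at-least-once pipeline would use Consumer.committableSource plus a committer sink rather than plainSource:

```java
import java.time.Duration;

import org.apache.kafka.common.serialization.StringDeserializer;

import akka.actor.ActorSystem;
import akka.kafka.ConsumerSettings;
import akka.kafka.Subscriptions;
import akka.kafka.javadsl.Consumer;
import akka.stream.RestartSettings;
import akka.stream.javadsl.RestartSource;
import akka.stream.javadsl.Sink;

public class Main {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("kafka-consumer");

        ConsumerSettings<String, String> settings =
            ConsumerSettings.create(system, new StringDeserializer(), new StringDeserializer())
                .withBootstrapServers("localhost:9092")
                .withGroupId("my-group");

        // Restart the whole consume-and-process stream with backoff on failure.
        RestartSettings restart =
            RestartSettings.create(Duration.ofSeconds(1), Duration.ofSeconds(30), 0.2);

        RestartSource.onFailuresWithBackoff(restart, () ->
                Consumer.plainSource(settings, Subscriptions.topics("events")))
            .runWith(Sink.foreach(record -> System.out.println(record.value())), system);

        // Nothing else needed: the running stream keeps the application
        // alive until the ActorSystem is terminated.
    }
}
```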

Spring Reactive Stack with Spring for Apache Kafka

In a few words:
I'm trying to decide between using the default Spring for Apache Kafka stack with KafkaTemplate, or the pair ReactiveKafkaProducerTemplate and ReactiveKafkaConsumerTemplate, for my Reactor-based application.
Some more context:
At the company I work for, we're developing a high-availability application that aims to publish a set of requests directly to a Kafka broker. Since this is an API-centric application expecting to receive a few million requests per week, we decided to go with a stack based on Project Reactor with Spring WebFlux and Kotlin.
After doing some digging I've discovered that Spring for Apache Kafka has a simple wrapper designed around the Reactor Kafka implementation, but this wrapper lacks a lot of the functionality present in the default KafkaTemplate mentioned before, things like: a metrics binder out of the box (for Prometheus integration), associated factories, extensive documentation, auto-configuration, etc.
I'm trying to understand what I'm really giving up when using the default implementation in favor of the reactive one. Am I giving up back-pressure functionality? Am I sacrificing the reactive stack present in my application? Will this take a toll in the future? Does anyone have experience working with a reactive stack alongside a non-reactive solution?
I also have a few concerns regarding the DLT flow facilitated by the default implementation, things like the SeekToCurrentErrorHandler strategy.
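For reference, the producer side of the reactive wrapper is a thin layer over Reactor Kafka; a minimal send, shown here in Java for illustration, looks roughly like this (broker address, topic, and payload are illustrative assumptions):

```java
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.reactive.ReactiveKafkaProducerTemplate;
import reactor.core.publisher.Mono;
import reactor.kafka.sender.SenderOptions;
import reactor.kafka.sender.SenderResult;

public class ReactiveSendSketch {
    public static void main(String[] args) {
        Map<String, Object> props = Map.of(
            ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
            ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class,
            ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        ReactiveKafkaProducerTemplate<String, String> template =
            new ReactiveKafkaProducerTemplate<>(SenderOptions.create(props));

        // send(...) returns a Mono: nothing happens until it is subscribed,
        // and back pressure propagates through the reactive pipeline.
        Mono<SenderResult<Void>> result = template.send("requests", "payload");
        result.block(); // blocking only for this standalone demo
    }
}
```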

Using RabbitMQ to send a String/custom object from one Spring Boot application to another

My requirement is, for starters, to send a string from one Spring Boot application to another using AMQP.
I am new to it and I have gone through this Spring Boot guide, so I know the basic fundamentals of Queue, Exchange, Binding, Container and Listener.
The above guide shows the steps when the AMQP message is sent and received in the same application.
I am a little confused about where to start if I want to achieve the above type of communication between 2 different Spring Boot applications.
What are the properties needed for that, etc.?
Let me know if any details required.
Just divide the application into two:
One without the Receiver and ...
Another without the Sender.
Make sure your configuration (queue names, broker connection, etc.) stays the same in both. With Spring Boot's built-in RabbitMQ support, you will be able to run it alright.
The next step is to call the sender as and when needed from your business logic; a rough sketch of the two halves follows.
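A minimal sketch under those assumptions, with each class living in its own Spring Boot project pointing at the same broker (spring.rabbitmq.host and friends in application.properties); the queue name "demo-queue" is illustrative:

```java
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

// ----- Application A (its own project): the sender -----
@SpringBootApplication
public class SenderApp {

    @Bean
    public Queue demoQueue() {
        return new Queue("demo-queue"); // declared so the broker creates it if absent
    }

    @Bean
    public CommandLineRunner send(RabbitTemplate template) {
        return args -> template.convertAndSend("demo-queue", "hello from app A");
    }

    public static void main(String[] args) {
        SpringApplication.run(SenderApp.class, args);
    }
}

// ----- Application B (a separate project, same broker): the receiver -----
@SpringBootApplication
class ReceiverApp {

    @Bean
    public Queue demoQueue() {
        return new Queue("demo-queue"); // declare here too, in case B starts first
    }

    @RabbitListener(queues = "demo-queue")
    public void receive(String message) {
        System.out.println("Received: " + message);
    }

    public static void main(String[] args) {
        SpringApplication.run(ReceiverApp.class, args);
    }
}
```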

Does Spring Boot with its Blocking IO really fit well with Microservices?

There are a lot of tutorials and articles (including the official site) promoting Spring Boot as a good tool for building microservices.
Let's say we have some REST API endpoint (user profile) which aggregates data from multiple services (User service, Stat service, Friends service).
To achieve this, the user profile endpoint makes 3 HTTP calls to those services.
But in Spring MVC, request handling is blocking, and as I see it, the server will quickly run out of available resources (threads) to serve requests in such a system.
So to me it seems quite an inefficient way to build such systems (compared to non-blocking frameworks like the Play! Framework or Node.js).
Am I missing something?
P.S.: I do not mean Spring 5 with its new WebFlux framework here.
No one prevents you from building an asynchronous microservice architecture with Spring Boot :).
Something along these lines:
Instead of one service calling another synchronously, a service can put events to a queue (e.g. RabbitMQ). The events are delivered to services that subscribe to those events.
Using RabbitMQ and its "exchange" concept, the event-producing service doesn't even need to know the consumers of its events.
A blog post detailing this with Spring Boot code can be found here: https://reflectoring.io/event-messaging-with-spring-boot-and-rabbitmq/
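A rough sketch of that decoupling with Spring AMQP; the exchange name "app.events" and the routing key are illustrative:

```java
import org.springframework.amqp.core.TopicExchange;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Service;

@Configuration
class EventConfig {
    @Bean
    TopicExchange eventsExchange() {
        return new TopicExchange("app.events");
    }
}

@Service
class EventPublisher {

    private final RabbitTemplate template;

    EventPublisher(RabbitTemplate template) {
        this.template = template;
    }

    // The producer only knows the exchange and a routing key; any number of
    // consumers can bind their own queues to patterns like "user.*" without
    // the producer ever knowing about them.
    void publishUserCreated(String userId) {
        template.convertAndSend("app.events", "user.created", userId);
    }
}
```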
This is not a limitation of Spring; rather, it is more to do with the application architecture.
For instance, the scenario that you have is commonly solved using the Aggregate design pattern.
While this solution is quite prevalent, it has the limitation of being synchronous, and thus blocking. Asynchronous behaviour in such scenarios should be implemented in an application-specific way.
Having said that, if you have to call other services in order to be able to serve a response to a request from an outside client, this is typically an architectural problem. It really doesn't matter if you are using HTTP or asynchronous message passing (with a request-reply pattern); the overall response time for the outside client will be bad.
Also, I have seen quite a few applications that use synchronous REST calls for external clients, but when communication is needed between internal microservices, it should always be asynchronous. You can read an interesting paper on this topic here: MicroServices Messaging Patterns.

SCDF: Can I use an outside microservice as a source?

I am trying to work through a solution where the workflow is like this:
User hits a microservice to upload images
That microservice de-duplicates the image and if it really is new, queues it up for processing
The processing chain lives in Spring Cloud Dataflow
The microservice already exists, and we are trying to extend it to do the fancy processing. My initial cut was to use the Http Source from the sample starter pack, since that would be something I didn't have to create. The problem is that the source doesn't register itself with the Spring discovery server, so there is no way to get an endpoint without making gross assumptions (like it lives on the dataflow server at port XYZ).
We can create a Queue endpoint and send the data directly to a Queue source that receives the outside event and forwards it to an SCDF queue.
What would be awesome is if Data Flow could connect the start of the queue for me, without repackaging the microservice as a Source.
The major issue with Spring Cloud Data Flow is that it does not automatically start up deployed streams when the server starts up, and we need to be reasonably sure that the microservice is always up.
The lifecycle of the server is decoupled from the apps it deploys; that was intentional.
I'm not following your thoughts on how Data Flow could connect the start of the queue, but from your description there are a few things you could do:
You would need to modify the app in order to have it registered with Eureka, but this is a very simple operation, no more than a few lines of code:
You can either start from a stream app perspective: https://start-scs.cfapps.io/ , select the http source and your binder, and then add the spring-cloud-netflix library as well as @EnableDiscoveryClient at the main Boot class.
Or start with http://start.spring.io : select Stream Rabbit or Stream Kafka, add the Web and Netflix libraries, then add the @EnableDiscoveryClient and @EnableBinding annotations and create a simple HTTP endpoint for your use case.
In either case it should be a small addition; a rough sketch follows.
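A minimal sketch of the second option, assuming the annotation-based Spring Cloud Stream model of that era; the endpoint path and payload type are illustrative:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.http.ResponseEntity;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// An http-source-style app that also registers itself with Eureka.
@EnableDiscoveryClient
@EnableBinding(Source.class)
@RestController
@SpringBootApplication
public class HttpSourceApplication {

    @Autowired
    private Source source;

    // Illustrative endpoint: forwards the uploaded payload to the output binding.
    @PostMapping("/images")
    public ResponseEntity<Void> accept(@RequestBody byte[] payload) {
        source.output().send(MessageBuilder.withPayload(payload).build());
        return ResponseEntity.accepted().build();
    }

    public static void main(String[] args) {
        SpringApplication.run(HttpSourceApplication.class, args);
    }
}
```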
You can also open an issue at https://github.com/spring-cloud-stream-app-starters/http/issues suggesting that we add @EnableDiscoveryClient to the http source app; we can take that into consideration in our next iteration as well.
I'll try to clarify a few bits.
upload images -> if it really is new -> queues it up for processing
Upon a new upload event, you'd want to process the image. Here's a similar use-case, but more of a real-time streaming style solution. This is not what you're looking to do, but I thought it might be useful.
Porting the image-processing code to a Spring Cloud Stream application is as simple as adding @EnableBinding(Processor.class); see the sketch below. It is the same business logic: whether you're running it separately or orchestrating it via SCDF, it is still a standalone microservice. However, SCDF expects it to be either a Source, Processor, Sink, or Task application type. We will be opening this up to support arbitrary "functions" (lambdas) in a future release.
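A minimal sketch of such a processor under the same annotation-based model; the byte[] payload type is an assumption:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.messaging.handler.annotation.SendTo;

@EnableBinding(Processor.class)
@SpringBootApplication
public class ImageProcessorApplication {

    // Consume from the input binding, apply the existing business logic,
    // and emit the result on the output binding.
    @StreamListener(Processor.INPUT)
    @SendTo(Processor.OUTPUT)
    public byte[] process(byte[] image) {
        // ... existing image-processing logic goes here, unchanged ...
        return image;
    }

    public static void main(String[] args) {
        SpringApplication.run(ImageProcessorApplication.class, args);
    }
}
```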
We can create a Queue endpoint and send the data directly to a Queue source that receives the outside event and forwards it to an SCDF queue.
This is one of the standard solutions. You can directly consume new events (images) from a queue/topic and process them in the image-processor that we created in the previous step. The named-channel support in the DSL facilitates just that; see the example below.
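A hypothetical stream definition using a named destination as the source; the destination and app names are illustrative:

```
stream create image-flow --definition ":imageQueue > image-processor | log" --deploy
```

Here ":imageQueue" binds the stream's input directly to an existing queue/topic on the broker, so the external microservice only needs to publish there.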
What would be awesome is if DataFlow could connect the start of the queue for me, without repackaging the microservice as a Source.
I'm not sure I understand this. If I were to assume, you're looking for a "named-channel" as a source, and that is supported.
The major issue with Spring Cloud Data Flow is that it does not automatically start up deployed streams when the server starts up, and we need to be reasonably sure that the microservice is always up.
The moment you deploy a stream in SCDF, all the individual steps included in the DSL (i.e., the stream definition) are resolved and deployed as standalone apps in the target runtime (Cloud Foundry, Kubernetes, etc.). Once deployed, lifecycle management is left to the platform where the apps run; SCDF does not retain or track the app states.
