Thread model for async API implementation using Spring Boot

I am working on a micro-service developed using Spring Boot. I have implemented the following layers:
Controller layer: Invoked when the user sends an API request
Service layer: Processes the request; either sends a request to a third-party service or to the database
Repository layer: Used to interact with the database.
Methods in all of the above layers return a CompletableFuture. I have the following questions related to this setup:
Is it good practice to return a CompletableFuture from all methods across all layers?
Is it always recommended to use the @Async annotation when using CompletableFuture? What happens when I use the default fork-join pool to process the requests?
How can I configure the threads for the above methods? Would it be a good idea to configure a thread pool per layer? What other configurations can I consider here?
Which metrics should I focus on while optimizing performance for this micro-service?

If the work your application is doing can be done on the request thread without too much latency, I would recommend it. You can always move to an async model if you find that your web server is running out of worker threads.
The @Async annotation basically helps with scheduling. If you can, use it - it keeps the code free of references to the thread pool on which the work will be scheduled. As for which thread actually does your async work, that's really up to you. If you can, use your own pool. That will make sure you can add instrumentation and expose configuration options that you may need once your service is running.
Technically you will have two pools in play: one that Spring will use to consume the result of your future, and another that you will use to do the async work. If I recall correctly, Spring Boot will configure a default pool if you don't provide one, and will log a warning if you didn't explicitly configure it. As for your worker threads, start simple. Consider using Spring's ThreadPoolTaskExecutor.
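For illustration, here is a minimal sketch of what that could look like: a dedicated ThreadPoolTaskExecutor bean combined with @Async, so the service method returns a CompletableFuture that completes on that pool. The pool sizes, bean name, and service class are placeholders to adapt, not recommendations:

```java
import java.util.concurrent.CompletableFuture;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync
class AsyncConfig {

    // Dedicated pool for service-layer work; Spring initializes the bean itself.
    @Bean("serviceExecutor")
    ThreadPoolTaskExecutor serviceExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(8);      // tune these from measurements
        executor.setMaxPoolSize(32);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("svc-");
        return executor;
    }
}

@Service
class ProfileService {

    // Runs on the named pool instead of the default one; the caller gets a
    // CompletableFuture immediately while the work proceeds on "svc-" threads.
    @Async("serviceExecutor")
    public CompletableFuture<String> loadProfile(long userId) {
        String profile = "profile-" + userId; // placeholder for the real lookup
        return CompletableFuture.completedFuture(profile);
    }
}
```

Having the pool as a named bean also makes it easy to expose its queue depth and active-thread count as metrics later.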
Regarding which metrics to monitor, start by choosing how you will monitor. Something like Spring Sleuth coupled with Spring Boot Actuator will give you a lot of information out of the box. There are many services that can collect the metrics Actuator generates into time-series databases, which you can then use to analyze performance and get ideas on what to tweak.
One final recommendation: Spring WebFlux is designed from the start to be async. It has a learning curve for sure, since reactive code is very different from the usual MVC style. However, that framework has also thought through all the questions you are asking, so it might be better suited for your application, especially if you want to make everything async by default.

Related

Advisable to run a Kafka producer + consumer in same application?

Spring + Apache Kafka noob here. I'm wondering if it's advisable to run a single Spring Boot application that handles both producing and consuming messages.
A lot of the applications I've seen using Kafka lately have one application that sends/emits the messages to a Kafka topic, and another one that consumes/processes the messages from that topic. For larger applications, I can see a case for separate producer and consumer applications, but what about smaller ones?
For example: I have a simple app that processes HTTP requests and sends requests to a third-party service; to ensure retryability, should I put the request on a Kafka queue and consume it with a service using the @Retryable annotation?
And what other considerations might come into play since it would be on the Spring framework?
Note: As your question suggests, what I'll say is more advice based on my beliefs and experience than some absolute truth written in stone.
Your use case seems more like a proxy than an actual application with business logic. You should make sure that making this an asynchronous service makes sense - maybe it's good enough to simply hold the connection until you get a response from the third party, and let your client handle retries if you get an error - of course, you can also retry until some timeout.
This would avoid common asynchronous issues, such as making your client poll or expose a webhook to get a result, or having to check whether a record still makes sense to process after a lot of time has elapsed due to an outage or high consumer lag.
If your client doesn't care about the result as long as it gets done, and you don't expect high throughput on either side, a single Spring Boot application should be enough to handle both the producer and consumer sides - while also keeping things simple.
If you do expect high throughput, I'd look into building a WebFlux-based application with the reactor-kafka library - high-throughput proxies are an excellent use case for reactive applications.
Another option would be having a simple serverless function that handles the HTTP requests and produces the records, and a standard Spring Boot application to consume them.
TBH, I don't see a use case where having two full-fledged Java applications handling proxy duty would pay off, unless you have infrastructure sound enough that managing two applications instead of one makes no difference and the extra resource usage is not an issue.
Actually, if you expect really high traffic and a serverless function wouldn't work, or you want to stick to Java-based solutions, then you could have a simple WebFlux-based application handle the HTTP requests and send the messages, and a standard Spring Boot or another WebFlux application handle consumption. This way you'd be able to scale up the former to accommodate the high traffic, and independently scale the latter in line with your performance requirements. A rough sketch of the producing side is shown below.
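As an illustration of the reactor-kafka option, this sketch shows a sender that a WebFlux handler could compose into its response chain. The class name, topic, and bootstrap server are invented for the example:

```java
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import reactor.core.publisher.Mono;
import reactor.kafka.sender.KafkaSender;
import reactor.kafka.sender.SenderOptions;
import reactor.kafka.sender.SenderRecord;

class OutboundRequestProducer {

    private final KafkaSender<String, String> sender;

    OutboundRequestProducer() {
        Map<String, Object> props = Map.of(
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class,
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        this.sender = KafkaSender.create(SenderOptions.create(props));
    }

    // Returns a Mono that a WebFlux handler can compose into its response
    // chain; nothing blocks while the record is in flight.
    Mono<Void> publish(String key, String payload) {
        ProducerRecord<String, String> record =
                new ProducerRecord<>("outbound-requests", key, payload);
        return sender.send(Mono.just(SenderRecord.create(record, key))).then();
    }
}
```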
As for the retry part, if you stick to non-reactive Spring Kafka applications, you might want to look into the non-blocking retries feature of Spring Kafka. This enables your consumer application to process other records while waiting to retry a failed one - the @Retryable approach is deprecated in favor of DefaultErrorHandler, and both block consumption while waiting.
Note that with that you lose ordering guarantees, so use it only if the order in which requests are processed is not important.
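To make that concrete, here is a minimal sketch of a non-blocking retry setup using Spring Kafka's @RetryableTopic. The topic name, attempt count, backoff values, and handler bodies are illustrative only:

```java
import org.springframework.kafka.annotation.DltHandler;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.RetryableTopic;
import org.springframework.retry.annotation.Backoff;
import org.springframework.stereotype.Component;

@Component
class ProxyConsumer {

    // Failed records are forwarded to dedicated retry topics with increasing
    // backoff, so the main topic keeps flowing while retries wait.
    @RetryableTopic(attempts = "4", backoff = @Backoff(delay = 1000, multiplier = 2.0))
    @KafkaListener(topics = "outbound-requests")
    void consume(String request) {
        forwardToThirdParty(request); // throwing here triggers a non-blocking retry
    }

    // Called once a record has exhausted all retries.
    @DltHandler
    void handleDlt(String request) {
        // log or park the record for manual inspection
    }

    private void forwardToThirdParty(String request) {
        // placeholder for the HTTP call to the third-party service
    }
}
```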

Performance tuning Spring RestTemplate

Background: I am using Spring Boot with embedded Jetty. My app calls a bunch of REST APIs. For calling these REST APIs I use Spring RestTemplate.
Question: Is Spring RestTemplate any good at high concurrency? Searching the web suggests moving to reactive, but there are still apps that are written in a blocking way and need to continue that way. The question is what alternative there is, or what can be done to make RestTemplate more responsive under heavy load. PoolingHttpClientConnectionManager improves things a bit, but it is still not on par with what is required.
There are suggestions to move to RESTEasy and other HTTP clients, but no solid reasoning behind them. At the end of the day, they all make a pool of connections and essentially work the same. Please note, reactive is not an option yet. This question is very specific to traditional blocking REST calls. Any suggestions on optimizing connection pooling or using RestTemplate right will be of great help.
RestTemplate does not do an actual REST call by itself; it's just a "wrapper" - a convenient API.
When it comes to connection pooling, by default it doesn't use any kind of pooling and just opens the URL connections available in plain Java. No third parties are required, but performance is not so good.
You can configure RestTemplate to use, say, the OkHttp client under the hood. See here for different ways to work with different clients. The interesting part is that it's possible to configure connection pools there and achieve better performance.
So you should really check what exactly the expected performance is and configure the connection pool accordingly.
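As a concrete starting point, a RestTemplate backed by Apache HttpClient's pooling connection manager might be wired roughly like this. The pool sizes and timeouts below are placeholders to derive from measured load, not recommendations:

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

class RestTemplateFactory {

    static RestTemplate pooledRestTemplate() {
        PoolingHttpClientConnectionManager cm = new PoolingHttpClientConnectionManager();
        cm.setMaxTotal(200);           // total connections across all hosts
        cm.setDefaultMaxPerRoute(50);  // connections per target host

        CloseableHttpClient client = HttpClients.custom()
                .setConnectionManager(cm)
                .build();

        HttpComponentsClientHttpRequestFactory factory =
                new HttpComponentsClientHttpRequestFactory(client);
        factory.setConnectTimeout(2_000); // milliseconds
        factory.setReadTimeout(5_000);

        return new RestTemplate(factory);
    }
}
```

The per-route limit matters most here: if all your calls go to one or two hosts, the default per-route cap is what throttles you under load, not the total.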
One more thing about the reactive stuff: it won't give you a performance gain, but it will allow you to serve multiple concurrent requests better by reusing resources more efficiently. However, if you measure how long one single request takes, it is not expected to complete faster.
In other words, you should consider the transition to the reactive stack if the application has more concurrent requests than it can serve, not if you want to process every single request faster.
Spring RestTemplate is used to write application-level code. It obtains the HTTP connection from a ClientHttpRequestFactory implementation, which is what glues a low-level HTTP client library to Spring, e.g. HttpComponentsClientHttpRequestFactory for Apache HttpClient.
Bottom line: in most cases you have to tune the underlying low-level HTTP client library, not RestTemplate, when you are tuning outgoing requests to external APIs.
You are confusing a lot of concepts in your question. Try understanding what reactive programming, HTTP, HTTP pipelining, and TCP/IP are before you start tuning anything. Otherwise you won't find where your code's bottleneck is, and you will end up tuning the wrong part of the software stack.

Does Spring Boot with its Blocking IO really fit well with Microservices?

There are a lot of tutorials and articles (including the official site) promoting Spring Boot as a good tool for building microservices.
Let's say we have some REST API endpoint (User profile) which aggregates data from multiple services (User service, Stat service, Friends service).
To achieve this, the user profile endpoint makes 3 HTTP calls to those services.
But in Spring, requests are blocking and, as I see it, the server will quickly run out of available resources (threads) to serve requests in such a system.
So to me it seems quite an inefficient way to build such systems (compared to non-blocking frameworks like the Play framework or Node.js).
Am I missing something?
P.S.: I do not mean Spring 5 with its new WebFlux framework here.
No one prevents you from building an asynchronous microservice architecture with Spring Boot :).
Something along these lines:
Instead of one service calling another synchronously, a service can put events on a queue (e.g. RabbitMQ). The events are delivered to the services that subscribe to them.
Using RabbitMQ and its "exchange" concept, the event-producing service doesn't even need to know the consumers of its events.
A blog post detailing this with Spring Boot code can be found here: https://reflectoring.io/event-messaging-with-spring-boot-and-rabbitmq/
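As a small sketch of the idea with Spring AMQP (the exchange, routing key, and queue names here are invented for the example): one service publishes an event and each interested service consumes it from its own queue, at its own pace:

```java
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

@Service
class UserEventPublisher {

    private final RabbitTemplate rabbitTemplate;

    UserEventPublisher(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Fire-and-forget: returns as soon as the event is handed to the broker;
    // the exchange decides which bound queues receive it.
    void publishUserUpdated(String userId) {
        rabbitTemplate.convertAndSend("user-events", "user.updated", userId);
    }
}

@Component
class StatServiceListener {

    // Each subscribing service binds its own queue to the exchange.
    @RabbitListener(queues = "stat-service.user-updated")
    void onUserUpdated(String userId) {
        // recompute stats for this user
    }
}
```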
This is not a limitation of Spring; rather, it has more to do with the application architecture.
For instance, the scenario that you have is commonly solved using the Aggregator design pattern.
While this solution is quite prevalent, it has the limitation of being synchronous, and thus blocking. Asynchronous behaviour in such scenarios should be implemented in an application-specific way.
Having said that, if you have to call other services in order to serve a response to a request from an outside client, this is typically an architectural problem. It really doesn't matter whether you are using HTTP or asynchronous message passing (with a request-reply pattern); the overall response time for the outside client will be bad.
Also, I have seen quite a few applications that use synchronous REST calls for external clients, but when communication is needed between internal microservices, it should always be asynchronous. You can read an interesting paper on this topic here: MicroServices Messaging Patterns.

Is Spring Compatible with Serverless Computing

I've seen this post here: https://dzone.com/articles/making-spring-boot-application-run-serverless-with which gives an example of how to use Spring in a serverless scenario, but I believe that this still involves creating the Spring context, an expensive thing to do every time a request comes in. I am wondering whether Spring, and traditional web application frameworks in general, are truly compatible with the serverless model. They all tend to assume the server initializes only on start, and not again until it is restarted, as opposed to being immediately ready to handle a request without needing to initialize a Spring context, for instance. So these frameworks tend to do a lot of work in the start-up phase, which I believe is not good when you don't have a server per se and effectively need to start up every time what would be a Lambda in AWS is invoked.
So my question is: are these traditional web frameworks, such as Spring, which perform a lot of compute when starting up, still applicable in the serverless model, for instance AWS Lambda?
Spring can indeed be applicable to the serverless model, but as you suggest, IMHO it is not suitable for all use cases.
For the reasons that you mention (comparatively long start up times for a "cold" Lambda), I would advise against using Spring when implementing a web app that is deployed to an AWS Lambda function behind an API Gateway as the response times will suffer.
However, there are scenarios when the long start up time of a JVM based function handler implementation in a cold AWS Lambda function is less of a headache and where you may consider this option. One example is as a consumer of a Kinesis stream. The cold start will still be as bad as in the previous case, but if you have a steady stream of events the cold start will only occur once per shard. Another difference is that when using Kinesis you have already chosen an asynchronous application flow. In other words, the event producer can continue its work as soon as the event has been put on the stream without waiting for the event to be processed.
There are some Spring sub-projects that try to deal with this scenario, like Spring Cloud Function:
https://spring.io/blog/2017/07/05/introducing-spring-cloud-function
The deployment profiles even extend into the realm of Serverless (a.k.a. Functions-as-a-Service) providers, such as AWS Lambda and Apache OpenWhisk (as well as Azure Functions and Google Cloud Functions once they provide support for Java)
However, context initialization is still needed, so I guess it is up to the developer to make it as small as possible to guarantee a quick startup.
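For reference, the core of a Spring Cloud Function application is just a Function bean; the adapter for the target platform (e.g. the AWS Lambda adapter) routes incoming events to it. A minimal sketch:

```java
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class UppercaseApp {

    public static void main(String[] args) {
        SpringApplication.run(UppercaseApp.class, args);
    }

    // Spring Cloud Function exposes this bean as the function to invoke;
    // the same code can run locally, in a container, or behind a FaaS adapter.
    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }
}
```

The appeal is that the business logic is platform-agnostic; the context-initialization cost on a cold start, however, remains.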
EDIT: Today I attended a talk given by Dave Syer at the Spring I/O conference, and he presented some solutions to make Spring Boot more suitable for serverless computing:
Spring Boot Mini Applications: They are Spring Boot applications but with reduced contexts:
https://github.com/dsyer/spring-boot-thin-launcher
Spring Boot thin launcher:
https://github.com/dsyer/spring-boot-thin-launcher
Some benchmarks on how long it takes to launch several configurations:
https://github.com/dsyer/spring-boot-startup-bench

Spring RMI load balancing / Scalability

I am looking to implement a web application in which the end user is likely to trigger invocations of business logic methods that are both CPU-heavy and require a fair amount of memory to run.
My initial thought is to provide these methods as part of a standalone, stateless business service, which can run on a separate machine from the web application. This can then be horizontally scaled as much as I need.
As these service methods are synchronous, I am opting to use RMI as opposed to JMS.
My first question is whether the above approach seems viable and good, or whether my thought process has gone astray somewhere (this will be the first time I don't work on a standalone application).
Should that be the case, I have been looking at Spring RMI, which seems to do an excellent job of exposing remote services non-intrusively. However, I am unsure how I could use this API to load balance between multiple servers. Are there any ways of doing this using Spring, or do I need a separate API?
JBoss has the ability to provide RMI proxies that are automatically load-balanced: http://docs.jboss.org/jbossas/jboss4guide/r4/html/cluster.chapt.html
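For context, plain Spring RMI wiring (without any load balancing) looks roughly like the sketch below; the BusinessService interface, host, and port are placeholders, and note these classes were deprecated in later Spring versions. Balancing across instances would come from something external, such as the JBoss clustered proxies mentioned above:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.remoting.rmi.RmiProxyFactoryBean;
import org.springframework.remoting.rmi.RmiServiceExporter;

// Hypothetical service interface shared between the web app and business tier.
interface BusinessService {
    String process(String input);
}

@Configuration
class RmiServerConfig {

    // Business-tier side: exposes the service over RMI on port 1099.
    @Bean
    RmiServiceExporter businessServiceExporter(BusinessService businessService) {
        RmiServiceExporter exporter = new RmiServiceExporter();
        exporter.setServiceName("BusinessService");
        exporter.setServiceInterface(BusinessService.class);
        exporter.setService(businessService);
        exporter.setRegistryPort(1099);
        return exporter;
    }
}

@Configuration
class RmiClientConfig {

    // Web-app side: a proxy bound to a single host; spreading calls over
    // several hosts needs an external mechanism.
    @Bean
    RmiProxyFactoryBean businessService() {
        RmiProxyFactoryBean proxy = new RmiProxyFactoryBean();
        proxy.setServiceUrl("rmi://business-host:1099/BusinessService");
        proxy.setServiceInterface(BusinessService.class);
        return proxy;
    }
}
```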
