Spring Boot on AWS EKS API limit

I need to design a solution able to process an average of 150k requests per day.
I was thinking of exposing a REST API from a Spring Boot app running on AWS EKS (the current tech stack approved by our CTO), but I'm wondering about the limits.
Is there a knowledge base where I can read about any cap for such a scenario (API limits for a Spring Boot app on EKS, taking the pod replicas into account)?
If this wouldn't work, how would you do it? I was thinking my customer could write to a Kafka topic that my Spring Boot app would read from (a streaming approach).
The goal is to take requests from my customer's app and forward them to my backend system, which then does its own processing.

I don't see any reason why it wouldn't work. EKS provides managed Kubernetes, so as long as you provision the required resources, the performance of the app depends directly on the application itself.
150K requests per day is around 1.7 requests per second on average, which is definitely a manageable volume (though of course it depends on the logic your app runs per request).
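To make the forwarding idea concrete, here is a minimal sketch of the REST-in, Kafka-out variant mentioned in the question, using spring-kafka's KafkaTemplate. The endpoint path, topic name, and String payload are illustrative assumptions, not anything specified above:

```java
import org.springframework.http.ResponseEntity;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class IngestController {

    // Hypothetical topic; the backend system would consume from it.
    private static final String TOPIC = "customer-requests";

    private final KafkaTemplate<String, String> kafkaTemplate;

    public IngestController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Accept the customer's request and hand it off asynchronously,
    // releasing the HTTP thread immediately.
    @PostMapping("/requests")
    public ResponseEntity<Void> ingest(@RequestBody String payload) {
        kafkaTemplate.send(TOPIC, payload);
        return ResponseEntity.accepted().build();
    }
}
```

At ~1.7 requests/second average, either the plain REST approach or this buffered variant is well within what a single small pod can handle; the queue mainly buys you burst absorption and decoupling from the backend's processing speed.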

Related

Is there a way to build a separate application as a rate limiter in Spring?

I have a system composed of at least 20 microservices that work together.
I want to build a rate limiter that limits the API by the user's IP address. The problem is that we need it to run as a separate server that handles rate limiting for all of the microservices.
I searched for this, and all I found were Guava and other components that can do the job inside a specific Spring app, but I want a separate application for rate limiting.
Is there a way to do this job separately? What should I do, and which component should I use?
You can use Bucket4j, a library agnostic of any web framework, to implement rate limiting.
Guava's RateLimiter can be used in the same way; it's not tied to Spring Boot either.
Spring Cloud Gateway also has rate limiting and fits a microservice architecture with a dedicated gateway, although its documentation on the subject is fairly sparse.
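For the standalone-limiter part of the question, here is a minimal per-IP sketch with Bucket4j (builder API as of Bucket4j 8.x; the 100 requests/minute limit is an arbitrary example). A dedicated gateway service would call tryConsume() before proxying each request:

```java
import io.github.bucket4j.Bandwidth;
import io.github.bucket4j.Bucket;
import io.github.bucket4j.Refill;

import java.time.Duration;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class IpRateLimiter {

    // One token bucket per client IP; 100 requests/minute is an example limit.
    private final Map<String, Bucket> buckets = new ConcurrentHashMap<>();

    public boolean tryConsume(String ip) {
        Bucket bucket = buckets.computeIfAbsent(ip, k -> Bucket.builder()
                .addLimit(Bandwidth.classic(100, Refill.greedy(100, Duration.ofMinutes(1))))
                .build());
        return bucket.tryConsume(1); // false => reject, e.g. with HTTP 429
    }
}
```

Note that with several limiter instances you'd need to back the buckets with a shared store rather than an in-memory map (Bucket4j offers JCache-based integrations for this) so all instances count against the same limits.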

Scale SpringBoot App based on Thread Pool State

We have a Spring Boot microservice which has to fetch data from an old / legacy system. This microservice exposes a modern external REST API. Sometimes we have to issue 7-10 requests to the legacy system in order to get all the data we need for a single API call. Unfortunately, we can't use Reactor / WebClient and have to stick with WebServiceTemplate to issue those "legacy" calls; we also can't use the approach from Reactive Spring WebClient - Making a SOAP call.
What is the best way to scale such a microservice in Kubernetes? We have serious concerns that the thread pool used for parallel WebServiceTemplate invocations will be depleted very quickly, but I'm not sure whether creating and exposing a custom metric based on active thread count / thread pool size is a good idea.
Any advice would be helpful.
Enable the Prometheus exporter in Spring.
Make sure the metrics are scraped. You're going to watch a thread pool size metric (a sketch for exposing one is shown after the list below). Refer to your k8s/Prometheus distro docs to get Prometheus service discovery working for you.
Write a Horizontal Pod Autoscaler (HPA) based on a Prometheus metric:
Set up Prometheus Adapter and follow the HPA walkthrough.
Or follow this guide https://github.com/stefanprodan/k8s-prom-hpa
Depending on which k8s distro you are using, you might have different ways to get Prometheus and Prometheus service discovery:
(example platform built-in) https://cloud.google.com/stackdriver/docs/solutions/gke/prometheus
(example product) https://docs.datadoghq.com/integrations/prometheus/
(example opensource) https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
any other Prometheus solution
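On the metric itself: with Micrometer, which backs Spring Boot's Prometheus exporter, instrumenting the pool is a one-liner rather than a hand-rolled custom metric. A minimal sketch, where the bean name, pool size, and tag name are illustrative:

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.binder.jvm.ExecutorServiceMetrics;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

@Configuration
public class LegacyClientPoolConfig {

    // Wrap the pool used for the parallel WebServiceTemplate calls so Micrometer
    // publishes executor.active, executor.pool.size, executor.queued, and friends.
    // The Prometheus endpoint exposes them (exact names vary slightly by Micrometer
    // version), and the HPA's custom metric can then target the pool-size/active gauges.
    @Bean
    public ExecutorService legacyClientPool(MeterRegistry registry) {
        ExecutorService pool = Executors.newFixedThreadPool(10); // size: illustrative
        return ExecutorServiceMetrics.monitor(registry, pool, "legacy-client");
    }
}
```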

Spring Cloud Data Flow and GCP Pub/Sub

I'm building an event-driven microservice architecture which is supposed to be cloud-agnostic (as much as possible). Since this is initially going into GCP and I don't want to spend a long time on configuration and all that, I was going to use GCP's Pub/Sub directly for the event queue and take care of other cloud implementations later, but then I came across Spring Cloud Data Flow, which seemed nice because these are Spring Boot microservices and I needed a way to orchestrate them.
Does Spring Cloud Data Flow support Pub/Sub as its event queue?
Would it make my life easier in terms of configuration and setup to go down that path, rather than choosing a non-native broker?
It'd be useful first to unpack Spring Cloud Stream's "binder abstraction": by building on this framework, you get a portable event-driven streaming application that can run locally on your laptop or in any cloud of your choice, against the desired message broker.
Learn more about the binder abstraction here. Here are all the available binder implementations to choose from. Google Pub/Sub is an option, and it is maintained by Google here.
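To show how broker-agnostic the binder abstraction keeps your code, here is a minimal Spring Cloud Stream processor in the functional style; the uppercase logic is a placeholder, and whether it talks to Pub/Sub, Kafka, or RabbitMQ is decided purely by the binder dependency on the classpath (e.g. spring-cloud-gcp-pubsub-stream-binder), not by the code:

```java
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class UppercaseProcessorApplication {

    public static void main(String[] args) {
        SpringApplication.run(UppercaseProcessorApplication.class, args);
    }

    // Spring Cloud Stream binds this function's input and output to destinations
    // on whichever broker the classpath binder targets; swapping brokers means
    // swapping a dependency, not rewriting this class.
    @Bean
    public Function<String, String> uppercase() {
        return String::toUpperCase;
    }
}
```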
Now, let's talk about Spring Cloud Data Flow (SCDF). Once you have built the streaming applications, you can use SCDF to design and create a data pipeline made of such applications. There's also the option to mix in and reuse the collection of utility applications that we build, maintain, and release. The utility applications can be packaged with the Google Pub/Sub or other binders. More details here.
When you deploy the data pipeline, SCDF resolves and downloads the individual applications and deploys them natively on platforms like Kubernetes or Cloud Foundry. We have users doing the same on a variety of cloud infrastructure (VMs, bare metal, EC2, Rackspace, etc.), including DIY platforms.
While automating the deployment of the applications, SCDF also automates the configuration setup, based on naming conventions derived from the stream/task and application names in combination. So when the apps bootstrap, they automatically receive their connection configuration (from SCDF) as well as the destination/topic to connect to, along with the other metadata needed to reason about a collection of apps as a "stream" or a "task/batch" data pipeline. This lets you monitor and manage the pipelines centrally.
Lastly, SCDF has the native ability to rolling-upgrade/rolling-downgrade one or many applications in a data pipeline without impacting the upstream or downstream consumers in production. More details here. There's also a webinar recording (the demo starts at ~41:25) on how to do this with CI/CD automation.

Is Spring Compatible with Serverless Computing?

I've seen this post: https://dzone.com/articles/making-spring-boot-application-run-serverless-with, which gives an example of how to use Spring in a serverless scenario, but I believe this still involves creating the Spring context, an expensive thing to do every time a request comes in. I'm wondering whether Spring, and the traditional web application frameworks in general, are truly compatible with the serverless model, as they all tend to assume the server only initializes on start and not again until it is restarted, as opposed to being immediately ready to handle a request without needing to initialize a Spring context, for instance. These frameworks tend to do a lot of work in the startup phase, which I believe is a problem when you don't have a server per se and effectively need to start up every time what would be a Lambda in AWS is invoked.
So my question is: are these traditional web frameworks, such as Spring, which perform a lot of computation when starting up, still applicable in the serverless model, for instance AWS Lambda?
Spring can indeed be used with the serverless model but, as you suggest, IMHO it is not suitable for all use cases.
For the reasons that you mention (comparatively long start up times for a "cold" Lambda), I would advise against using Spring when implementing a web app that is deployed to an AWS Lambda function behind an API Gateway as the response times will suffer.
However, there are scenarios when the long start up time of a JVM based function handler implementation in a cold AWS Lambda function is less of a headache and where you may consider this option. One example is as a consumer of a Kinesis stream. The cold start will still be as bad as in the previous case, but if you have a steady stream of events the cold start will only occur once per shard. Another difference is that when using Kinesis you have already chosen an asynchronous application flow. In other words, the event producer can continue its work as soon as the event has been put on the stream without waiting for the event to be processed.
There are some Spring sub-projects that try to deal with this scenario, like Spring Cloud Function:
https://spring.io/blog/2017/07/05/introducing-spring-cloud-function
The deployment profiles even extend into the realm of Serverless (a.k.a. Functions-as-a-Service) providers, such as AWS Lambda and Apache OpenWhisk (as well as Azure Functions and Google Cloud Functions once they provide support for Java)
However, context initialization is still needed, so I guess it's up to the developer to keep the context as small as possible to guarantee a quick startup.
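For reference, a Spring Cloud Function application boils down to a plain function bean; here is a minimal sketch (the greeting logic and names are illustrative), which the framework can expose as a local web endpoint or, with the AWS adapter (spring-cloud-function-adapter-aws) on the classpath, as a Lambda handler:

```java
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class GreeterApplication {

    public static void main(String[] args) {
        SpringApplication.run(GreeterApplication.class, args);
    }

    // The same bean runs unchanged locally and in a FaaS environment; only the
    // adapter on the classpath changes. The context-initialization cost on a
    // cold start is exactly the concern discussed above.
    @Bean
    public Function<String, String> greet() {
        return name -> "Hello, " + name;
    }
}
```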
EDIT: Today I was at a talk given by Dave Syer at the Spring I/O conference, and he presented some solutions for making Spring Boot more suitable for serverless computing:
Spring Boot mini applications: Spring Boot applications, but with reduced contexts:
https://github.com/dsyer/spring-boot-thin-launcher
Spring Boot thin launcher:
https://github.com/dsyer/spring-boot-thin-launcher
Some benchmarks on how long it takes to launch several configurations:
https://github.com/dsyer/spring-boot-startup-bench

How to monitor streaming apps inside SCDF?

I am a novice with Spring Cloud Data Flow and Spring Cloud Stream applications.
Currently my project looks like the following:
I route a POST request from an outside client through a Zuul API gateway to a microservice called Composite. Composite creates a stream using a REST POST and deploys it onto the Spring Cloud Data Flow server. As far as I know, the microservices mongodb and file run as co-existing JVM processes. If my client has to know the status of the stream and of the processed data, how should the Composite microservice interact with the Spring Cloud Data Flow server? Currently, when I make the POST call to deploy the stream, I don't even get the status from the SCDF server. Does SCDF expose any hooks to look at the individual apps? Also, how can I change the flow at runtime to create a dynamic mesh?
Currently I am using the local Spring Cloud Data Flow server for development.
Runtime platform is local
The local runtime is recommended only for development purposes. If you're preparing for production, please make sure to choose a platform variant (e.g., cf, k8s, yarn, ...) that comes with the non-functional requirements to support reliable and durable execution of all the applications running in the streaming pipeline.
As far as I know the microservices mongodb and file run as co-existing JVM processes.
If your stream definition is file | mongodb, you'd have 2 different JVMs even when using the local runtime. They're independent Boot applications.
How should Composite Microservice interact with Spring Cloud Data Flow Server?
It's not clear what you mean by "composite" here. All the microservice applications in SCDF communicate via messaging middleware such as Kafka or RabbitMQ. SCDF provides the orchestration capability to deploy such applications to various runtime platforms.
Currently when I make POST call to deploy the stream I dont even get the status from SCDF Server
You can use SCDF's REST APIs to query the current status of the apps, and this is platform-agnostic. You can view the list of supported APIs by hitting the root URL of the server (there's a gap in the docs here, which we will fix). The following APIs could be useful for status checks; a sketch of calling them from Java is shown below.
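As an illustration, here is a minimal sketch of polling the SCDF server from the Composite service with a plain RestTemplate (http://localhost:9393 is the local server's default address; /streams/definitions lists each stream definition together with its current status). SCDF also ships a Java client, DataFlowTemplate, that wraps these endpoints:

```java
import org.springframework.web.client.RestTemplate;

public class ScdfStatusClient {

    private final RestTemplate rest = new RestTemplate();

    // Default address of a local SCDF server; adjust for your deployment.
    private final String scdfUrl = "http://localhost:9393";

    // GET /streams/definitions returns every stream definition along with its
    // current status (e.g. deploying, deployed, failed). The raw JSON is
    // returned here; in practice you'd map it onto a DTO.
    public String listStreamsWithStatus() {
        return rest.getForObject(scdfUrl + "/streams/definitions", String.class);
    }
}
```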
Does SCDF expose any hooks to look at the individual apps?
Once the apps are deployed in a runtime platform, you can take advantage of Boot's actuator endpoints to explore more details, such as trace, metrics, health, and env, at each individual application level. See Boot's actuator endpoints for more details. For instance, if your mongodb app is running locally on port 23000, you can check granular metrics for that application at: http://localhost:23000/metrics.
[As an FYI: future SCDF releases would include integrating Spring Boot + Spring Cloud Sleuth metrics and visual representation of the same.]
Also, how can I change the flow at runtime to create a dynamic mesh?
If you're referring to editing a running streaming pipeline with additions/deletions, we are currently exploring the design approach to support this functionality.
