Spring WebFlux CPU in a container environment

In our Kubernetes environment, our pods usually have less than 1 CPU core reserved.
Given that Spring WebFlux works with the concept of an event loop plus workers, how would that work? Is it recommended that we reserve at least 1 CPU core for such a pod?
If I still use WebFlux with less than 1 CPU requested in Kubernetes, will my event loop underperform?
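Not an authoritative answer, but a quick way to see what WebFlux will actually get: Reactor Netty (Spring Boot's default WebFlux server) sizes its event loop from Runtime.getRuntime().availableProcessors(), which on a cgroup-aware JVM reflects the container's CPU limit rather than the Kubernetes request. A minimal sketch, assuming the default Netty server and a recent JVM:

    // Sketch: inspect what the JVM and Reactor Netty will see inside the container.
    public class EventLoopSizing {

        public static void main(String[] args) {
            // With a cgroup-aware JVM (Java 10+, or 8u191+), this reflects the
            // container's CPU *limit*, not the Kubernetes *request*.
            int cpus = Runtime.getRuntime().availableProcessors();
            System.out.println("availableProcessors = " + cpus);

            // Reactor Netty's default worker count is roughly max(availableProcessors, 4),
            // so even a sub-1-CPU pod still gets several event-loop threads; they simply
            // share whatever CPU time the cgroup quota allows.
            int defaultWorkers = Math.max(cpus, 4);
            System.out.println("expected event-loop workers ≈ " + defaultWorkers);

            // The count can be pinned explicitly (before Netty initializes),
            // e.g. with -Dreactor.netty.ioWorkerCount=2 on the command line:
            System.out.println("override = " + System.getProperty("reactor.netty.ioWorkerCount"));
        }
    }

So with a request below 1 CPU you still get a few event-loop threads; whether they underperform depends mainly on the CPU limit/quota they have to share under load, not on the request value itself.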

Related

Request handling capacity of a Spring Boot application with 1 instance

The number of requests that can be handled by a deployed Spring Boot application depends on the configuration property server.tomcat.threads.max, which defaults to 200.
However, I believe the request handling capacity of an application also depends on various other capacities of the server, such as CPU, RAM, disk, etc.
So a deployed Spring Boot instance with higher capacity should be able to handle more requests than a lower-capacity one. However, I am not clear how server.tomcat.threads.max decides this for different server sizes. Can somebody please clarify?
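One way to see why threads.max alone doesn't settle it: the setting caps concurrency, while throughput also depends on how long each request occupies a thread, which is where CPU, RAM, and I/O speed come in. A back-of-the-envelope sketch with made-up numbers:

    // Illustrative arithmetic only: server.tomcat.threads.max caps *concurrency*,
    // but throughput also depends on per-request latency, which the hardware drives.
    public class CapacityEstimate {

        public static void main(String[] args) {
            int maxThreads = 200;           // server.tomcat.threads.max (Spring Boot default)
            double avgRequestSeconds = 0.1; // hypothetical average request latency

            // Little's law: max throughput ≈ concurrency / latency
            System.out.printf("Upper bound ≈ %.0f requests/second%n", maxThreads / avgRequestSeconds);

            // On a smaller server the same 200 threads contend for CPU, latency rises
            // (say to 0.5 s), and the achievable throughput drops accordingly.
            System.out.printf("Slower box ≈ %.0f requests/second%n", maxThreads / 0.5);
        }
    }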

Spring Boot API request limit issue

I have developed a microservice using Spring Boot and deployed it as a Docker container. When performance testing the service, I see that the maximum number of threads created for the service is 20 at any point in time, even though the number of calls made is much higher. I have even set max threads to 4000 and max connections to 10000, along with all the DB configuration, and the server has 24 cores and 64 GB RAM, yet there is still no improvement. Are there any limitations on the number of calls that can be made to a microservice developed using Spring Boot, is the issue with the Docker container, or is this normal?
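Not a definitive diagnosis, but one thing worth ruling out first: Tomcat only grows its pool on demand, so a steady 20 busy threads often just means about 20 requests are actually in flight at once (a load-test client limit or connection-pool limit, for instance). A small diagnostic sketch, assuming Spring Boot 2.3+ with embedded Tomcat, that logs the settings the server really started with:

    // Diagnostic sketch: confirm the 4000/10000 values were actually picked up.
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.boot.context.event.ApplicationReadyEvent;
    import org.springframework.context.event.EventListener;
    import org.springframework.stereotype.Component;

    @Component
    public class TomcatSettingsLogger {

        @Value("${server.tomcat.threads.max:200}")
        private int maxThreads;

        @Value("${server.tomcat.max-connections:8192}")
        private int maxConnections;

        @EventListener(ApplicationReadyEvent.class)
        public void logEffectiveSettings() {
            // Defaults shown above are the Spring Boot defaults; if these log lines
            // still print the defaults, the custom values never reached the server.
            System.out.println("server.tomcat.threads.max = " + maxThreads);
            System.out.println("server.tomcat.max-connections = " + maxConnections);
        }
    }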

Spring Boot microservice consumes a lot of memory in a container deployed in AWS ECS

I have a Spring Boot microservice deployed in a container with Corretto 11 on ECS.
The component is deployed with 512 MB and its initial consumption is close to 50%. As traffic increases, memory grows and is never freed, to the point where the ECS task crashes and a new one must be started.
The following image shows the behavior of memory over time: as traffic increases, the periods during which the ECS tasks stay up get shorter.
Memory Consumption in a Container on AWS ECS
Spring Boot Version: 2.4.3
JDK Image: Corretto 11
UPDATE:
I ran a profiler and analyzed the heap dump, and I see high consumption in the Spring libraries.
Heapdump analysis with VisualVM
According to the screenshot, the problem seems to be in DefaultListableBeanFactory objects.
Please check your code and make sure that you don't instantiate Spring beans for every request. If you do, you need to autowire them only once instead.
For example, check the following solution:
Memory leak in jboss service due to DefaultListableBeanFactory objects
Here is the explanation of why there can be a memory leak in serializable bean factory management:
https://github.com/spring-projects/spring-framework/issues/12159?focusedCommentId=57240&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-57240
P.S. I don't think there is a bug in the Spring Framework itself, but if there is, please upgrade Spring Boot to the latest version; your version may contain some bug that causes a memory leak.
See details in this reported issue:
https://github.com/spring-projects/spring-framework/issues/25619
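To make the "don't instantiate Spring beans per request" advice concrete, here is a minimal sketch; the controller and service names are made up for illustration:

    import org.springframework.stereotype.Service;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @Service
    class ReportService {
        String build() {
            return "ok";
        }
    }

    @RestController
    class ReportController {

        private final ReportService reportService;

        // Good: the container wires the singleton bean once; every request reuses it.
        ReportController(ReportService reportService) {
            this.reportService = reportService;
        }

        @GetMapping("/report")
        String report() {
            // Bad (the leak pattern): creating a fresh application context or bean
            // factory per request, e.g. new AnnotationConfigApplicationContext(...),
            // keeps allocating DefaultListableBeanFactory instances that pile up on the heap.
            return reportService.build();
        }
    }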

From an Application Server to Spring Boot - How-Tos for Performance Tuning

Currently we have Java applications deployed on an application server (WebSphere, to be exact). To fix the common performance- and memory-related problems we encounter, we do tweaks like:
1. Adjust the thread pool settings - to prevent waiting threads.
2. Adjust the application server's garbage collection behavior.
Now there is a plan to move them to containers (via Docker, using Spring Boot), so essentially they would be converted to Spring Boot apps running in Docker containers. My question is: what is the equivalent of doing #1 and #2 in this kind of setup? Is there still a way to adjust the thread pool and garbage collection, or is it done differently now? Or should this not be an issue at all because Docker Swarm can manage and scale all of this?
Edit: for the time being, Docker Swarm will be used for managing containers. Kubernetes is not in the picture yet.
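For what it's worth, both knobs still exist after the move; they just live in different places. The embedded Tomcat thread pool can be tuned with Spring Boot properties or a customizer bean, and garbage collection is tuned with the usual JVM flags passed to the java process inside the image (for example via the JAVA_TOOL_OPTIONS environment variable); Docker Swarm only schedules and scales containers, it does not tune the JVM for you. A sketch assuming Spring Boot 2.x with embedded Tomcat and placeholder values:

    // Programmatic thread-pool tuning for embedded Tomcat. The same values can be
    // set in application.properties via server.tomcat.threads.max /
    // server.tomcat.threads.min-spare. GC tuning has no Spring equivalent: pass the
    // usual JVM flags (e.g. -XX:+UseG1GC -XX:MaxRAMPercentage=75) to the container's
    // java command, for instance through JAVA_TOOL_OPTIONS.
    import org.apache.coyote.AbstractProtocol;
    import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
    import org.springframework.boot.web.server.WebServerFactoryCustomizer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class TomcatTuningConfig {

        @Bean
        public WebServerFactoryCustomizer<TomcatServletWebServerFactory> tomcatThreadPool() {
            return factory -> factory.addConnectorCustomizers(connector -> {
                // Placeholder values: size the pool for the container's CPU/memory
                // limits, not for the old WebSphere host.
                if (connector.getProtocolHandler() instanceof AbstractProtocol) {
                    AbstractProtocol<?> protocol = (AbstractProtocol<?>) connector.getProtocolHandler();
                    protocol.setMaxThreads(100);
                    protocol.setMinSpareThreads(10);
                }
            });
        }
    }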

How to automatically scale microservice instances built using Spring Boot and Spring Cloud up and down?

I didn't find much info about this on the web.
Please help me understand the possible approaches.
If this is not provided by your environment (e.g. AWS Lambda), then you probably have to do it yourself.
For this you need a way of programmatically scaling the microservices up/down (e.g. docker service scale xyz=2) and a means of determining that a service needs scaling up or down. For that you need to be able to read the relevant metrics from the microservice, plus a scaling controller that uses those metrics to compute the scaling requirements. For example: if CPU usage is at least 90% for at least 5 seconds, scale up; if it is below 10% for at least 5 seconds, scale down (a rough sketch of such a controller follows below).
You can even design the microservice to report its own metrics to the controller for more business-specific metrics.
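A rough sketch of such a controller, with everything hypothetical: fetchCpuUsage() stands in for whatever metrics endpoint the service exposes (Micrometer/Prometheus, for instance), and scaling shells out to docker service scale with the example service name from the answer:

    import java.io.IOException;
    import java.util.concurrent.TimeUnit;

    public class NaiveScalingController {

        private static final double SCALE_UP_THRESHOLD = 0.90;
        private static final double SCALE_DOWN_THRESHOLD = 0.10;
        private static final int SUSTAINED_SECONDS = 5;

        public static void main(String[] args) throws Exception {
            int replicas = 1;
            int hotSeconds = 0;
            int coldSeconds = 0;

            while (true) {
                double cpu = fetchCpuUsage(); // 0.0 .. 1.0, from your metrics source
                hotSeconds  = cpu >= SCALE_UP_THRESHOLD   ? hotSeconds + 1  : 0;
                coldSeconds = cpu <= SCALE_DOWN_THRESHOLD ? coldSeconds + 1 : 0;

                if (hotSeconds >= SUSTAINED_SECONDS) {
                    scaleTo(++replicas);
                    hotSeconds = 0;
                } else if (coldSeconds >= SUSTAINED_SECONDS && replicas > 1) {
                    scaleTo(--replicas);
                    coldSeconds = 0;
                }
                TimeUnit.SECONDS.sleep(1);
            }
        }

        private static double fetchCpuUsage() {
            return 0.5; // placeholder: read from the service's metrics endpoint
        }

        private static void scaleTo(int replicas) throws IOException {
            // "xyz" is the example service name used in the answer above.
            new ProcessBuilder("docker", "service", "scale", "xyz=" + replicas)
                    .inheritIO()
                    .start();
        }
    }

In practice you would add hysteresis, a cap on the replica count, and error handling, but the control loop above is the core idea.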
