I have a Spring Boot service deployed to a Linux server. When I check it in Spring Boot Admin the service is consuming around 684 MB, and then it drops to 38 MB for a while, but this service is just a simple controller which sends a modal HTML to the front end. We get a call to this service roughly every 30 seconds. Is there a way we can reduce the memory consumption of this service? Please let me know any memory optimization techniques that I can use. Appreciate your help. It does have Spring Boot Actuator (FYI).
If you are using Gradle you can see the full dependency tree with the command gradle dependencies (or gradle build --scan for a hosted build scan), and then you can exclude some of the repeated or unused dependencies.
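Independently of trimming dependencies, it is worth confirming what the 684 MB figure actually represents: most dashboards report committed heap, which the JVM keeps reserved between garbage collections even when the live data is tiny. A minimal sketch using the standard java.lang.management API; the controller class and path are made up for illustration and nothing here is specific to your service:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;
import java.util.LinkedHashMap;
import java.util.Map;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical debug endpoint: drop it into the existing service to see whether
// the number on the dashboard is "used" heap (live objects) or "committed" heap
// (memory the JVM has reserved from the OS).
@RestController
public class HeapDebugController {

    @GetMapping("/debug/heap")
    public Map<String, Long> heap() {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        Map<String, Long> out = new LinkedHashMap<>();
        out.put("usedMb", heap.getUsed() / (1024 * 1024));
        out.put("committedMb", heap.getCommitted() / (1024 * 1024));
        out.put("maxMb", heap.getMax() / (1024 * 1024));
        return out;
    }
}
```

If usedMb stays low while committedMb sits in the hundreds of megabytes, the service is simply not handing committed heap back to the OS; capping the heap with a lower -Xmx will usually bring the reported number down more than any code change.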
I have developed a microservice using Spring Boot and deployed it as a Docker container, and when performance testing the service I see that the maximum number of threads created for the service is 20 at any point in time, even though the number of calls made is much higher. I have even set max threads to 4000 and max connections to 10000, along with all the DB configuration, and the server has 24 cores and 64 GB of RAM, but still there is no improvement. Are there any limitations with respect to the number of calls that can be made to a microservice developed using Spring Boot, or is the issue with the Docker container? Or is this normal?
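One way to see from inside the container whether Tomcat is really capped, rather than simply not needing more threads, is to count the HTTP worker threads while the load test is running. A rough sketch, assuming the default embedded Tomcat NIO connector, whose workers are named http-nio-<port>-exec-N; the controller and path are made up for illustration:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.LinkedHashMap;
import java.util.Map;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical debug endpoint: call it during the load test to see how many
// Tomcat worker threads exist and how many are actually doing work.
@RestController
public class WorkerThreadDebugController {

    @GetMapping("/debug/worker-threads")
    public Map<String, Long> workerThreads() {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        long total = 0;
        long runnable = 0;
        for (ThreadInfo info : mx.getThreadInfo(mx.getAllThreadIds())) {
            if (info == null || !info.getThreadName().contains("-exec-")) {
                continue; // not a Tomcat worker thread
            }
            total++;
            if (info.getThreadState() == Thread.State.RUNNABLE) {
                runnable++;
            }
        }
        Map<String, Long> out = new LinkedHashMap<>();
        out.put("workerThreads", total);
        out.put("runnableWorkerThreads", runnable);
        return out;
    }
}
```

Tomcat only grows the pool when requests actually arrive concurrently, so if the count stays at 20 even while all of them are runnable, something between the client and Tomcat (the load generator's concurrency, a proxy, or container CPU limits) is more likely to be the ceiling than the maxThreads setting.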
I have a Spring Boot microservice deployed in a container with Corretto 11 on ECS.
The component is deployed with 512 MB and its initial consumption is close to 50%. As traffic increases, memory also increases and is never freed, to the point where the task in ECS crashes and a new one must be started.
The following image shows the behavior of memory over time; as traffic increases, the periods for which the ECS tasks stay up get shorter.
Memory Consumption in a Container on AWS ECS
Spring Boot Version: 2.4.3
JDK Image: Corretto 11
UPDATE:
I ran a profiler and analyzed the heap dump, and I see high consumption in Spring libraries.
Heapdump analysis with VisualVM
According to the screenshot, the problem seems to be in DefaultListableBeanFactory objects.
Please check your code and make sure that you don't instantiate Spring beans for every request. If you do, you need to autowire them only once instead.
For example, check the following solution:
Memory leak in jboss service due to DefaultListableBeanFactory objects
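As an illustration only (the class names below are hypothetical, not taken from the question): the pattern that tends to accumulate DefaultListableBeanFactory instances is building a new application context per request, instead of letting the running context inject the singleton once.

```java
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical bean used by the controller below.
class ReportService {
    String build() { return "<div>report</div>"; }
}

@Configuration
class AppConfig {
    @Bean
    ReportService reportService() { return new ReportService(); }
}

@RestController
class ReportController {

    private final ReportService reportService;

    // Good: the singleton bean is created once by the running context and injected.
    ReportController(ReportService reportService) {
        this.reportService = reportService;
    }

    @GetMapping("/report")
    String report() {
        return reportService.build();
    }

    // Bad: every call builds a fresh context that is never closed; each one owns
    // its own DefaultListableBeanFactory, so they pile up on the heap.
    @GetMapping("/report-leaky")
    String reportLeaky() {
        AnnotationConfigApplicationContext ctx =
                new AnnotationConfigApplicationContext(AppConfig.class);
        return ctx.getBean(ReportService.class).build();
    }
}
```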
Here is the explanation of why there can be a memory leak in serializable bean factory management:
https://github.com/spring-projects/spring-framework/issues/12159?focusedCommentId=57240&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-57240
P.S. I don't think that there is a bug in the Spring Framework itself, but if there is, then please upgrade Spring Boot to the latest version; maybe your version contains some bug which causes the memory leak.
See details in this reported issue:
https://github.com/spring-projects/spring-framework/issues/25619
I am working on an application (banking) which has a TPS requirement of 100 and multiple concurrent users.
Will Spring Boot 1.x.x allow me to achieve this?
Note: I would have used Spring Boot 2.x.x, which supports the reactive paradigm, but there is some legacy code which I have to use and it does not work on 2.x.x.
You can hit these numbers running a Java application on any reasonable hardware. LMAX claims that Disruptor can do over 100k TPS with 1ms latency. Spring Boot, or Java in general, won't be the limiting factor.
What will be the problem is the business requirements. If your application has to produce complex reports from an over-utilised database located in another data centre, well, just the packet round trip from CA to the Netherlands is 150 ms. If your SQL queries take 30+ seconds, you are toast.
You can take a look at Tuning Tomcat For A High Throughput, Fail Fast System. It gives good insight into what can be tuned in a standard Tomcat deployment (assuming you will use Tomcat in Spring Boot). However, it's unlikely that HTTP connections (assuming you will expose an HTTP API) will be the initial bottleneck.
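For reference, a rough sketch of raising the connector limits programmatically in a Spring Boot 1.x application; the same values can be set through the server.tomcat.* properties, and the numbers below are placeholders, not recommendations:

```java
import org.apache.coyote.AbstractProtocol;
import org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TomcatTuningConfig {

    // Spring Boot 1.x: provide the embedded container factory with a larger
    // worker pool. In Spring Boot 2.x the equivalent hook is a
    // WebServerFactoryCustomizer<TomcatServletWebServerFactory>.
    @Bean
    public TomcatEmbeddedServletContainerFactory tomcatFactory() {
        TomcatEmbeddedServletContainerFactory factory = new TomcatEmbeddedServletContainerFactory();
        factory.addConnectorCustomizers(connector -> {
            if (connector.getProtocolHandler() instanceof AbstractProtocol) {
                AbstractProtocol<?> protocol = (AbstractProtocol<?>) connector.getProtocolHandler();
                protocol.setMaxThreads(400);        // placeholder value
                protocol.setMaxConnections(10000);  // placeholder value
            }
        });
        return factory;
    }
}
```

That said, the default of 200 worker threads is already far above what 100 TPS needs unless individual requests are slow, so measure before raising anything.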
The response time of my Spring Boot REST service running on embedded Tomcat sometimes goes really high. I have isolated the external dependencies, and all of that is pretty quick.
I am at the point where I think it has something to do with Tomcat's default thread pool size of 200 that it reserves for incoming requests to the service.
What I believe is that under heavy load (100 requests per second) all 200 threads are held up, and other requests get queued, which leads to the higher response times.
I was wondering if there is a definitive way to find out whether the incoming requests are really getting queued. I have done extensive research in the Tomcat documentation and the Spring Boot embedded container documentation, but unfortunately I don't see anything relevant.
Does anyone have any ideas on how to check this?
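If the actuator and Micrometer are on the classpath (Spring Boot 2.x), Tomcat already publishes thread pool gauges, and comparing the busy count to the configured maximum under load tells you whether requests must be waiting for a worker. A sketch of a throwaway probe endpoint; the metric names are the ones recent Spring Boot versions register, so check /actuator/metrics for the exact names in your version:

```java
import java.util.LinkedHashMap;
import java.util.Map;

import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical probe: if tomcat.threads.busy is pinned at tomcat.threads.config.max
// while latency climbs, new requests are queuing for a worker thread.
@RestController
public class TomcatThreadProbeController {

    private final MeterRegistry registry;

    public TomcatThreadProbeController(MeterRegistry registry) {
        this.registry = registry;
    }

    @GetMapping("/debug/tomcat-threads")
    public Map<String, Double> tomcatThreads() {
        Map<String, Double> out = new LinkedHashMap<>();
        String[] names = {"tomcat.threads.busy", "tomcat.threads.current", "tomcat.threads.config.max"};
        for (String name : names) {
            Gauge gauge = registry.find(name).gauge();
            out.put(name, gauge == null ? -1.0 : gauge.value());
        }
        return out;
    }
}
```

The same gauges are visible without any extra code at /actuator/metrics/tomcat.threads.busy if that endpoint is exposed.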
Over time my unused but running Spring Boot v1.3.2 application gradually increases memory consumption until it eventually falls over. By unused I mean no client requests are being served apart from the regular ping of the /health endpoint.
According to the Eclipse Memory Analyser, org.springframework.boot.loader.LaunchedURLClassLoader is taking up a massive 920 MB.
It appears as though Spring Boot is continually loading classes.
Any ideas what's going on?
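One cheap way to confirm that classes really are being loaded continuously, rather than the class loader merely retaining something that grows, is to watch the JVM's class-loading counters over time. A rough sketch using a scheduled logger (requires @EnableScheduling on a configuration class; the component name is made up):

```java
import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical monitor: if totalLoaded keeps climbing while the app is idle,
// something is defining new classes on every cycle.
@Component
public class ClassLoadingMonitor {

    private static final Logger log = LoggerFactory.getLogger(ClassLoadingMonitor.class);
    private final ClassLoadingMXBean classLoading = ManagementFactory.getClassLoadingMXBean();

    @Scheduled(fixedRate = 60_000)
    public void report() {
        log.info("classes loaded={} totalLoaded={} unloaded={}",
                classLoading.getLoadedClassCount(),
                classLoading.getTotalLoadedClassCount(),
                classLoading.getUnloadedClassCount());
    }
}
```

The same counters are visible without code changes via JConsole or jstat -class <pid> if attaching a tool to the box is easier.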
EDIT
Looks like it's Spring Cloud Consul that's causing the issue:
It appears to be a memory leak in Spring Cloud Consul. Raised issue: https://github.com/spring-cloud/spring-cloud-consul/issues/183