Over time, my idle Spring Boot v1.3.2 application gradually increases its memory consumption until it eventually falls over. By idle I mean no client requests are being served apart from the regular ping of the /health endpoint.
According to the Eclipse Memory Analyzer, org.springframework.boot.loader.LaunchedURLClassLoader is taking up a massive 920 MB.
It appears as though Spring Boot is continually loading classes.
Any ideas what's going on?
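For anyone wanting to confirm similar behavior in their own app, here is a minimal sketch of a scheduled logger that tracks the loaded-class count via the standard ClassLoadingMXBean (it assumes @EnableScheduling is on somewhere in your configuration); a count that climbs without bound while the app is idle points at a classloader leak:

    import java.lang.management.ClassLoadingMXBean;
    import java.lang.management.ManagementFactory;

    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Component
    public class ClassLoadWatch {

        // Log once a minute; a steadily growing loaded count on an idle
        // application suggests something keeps defining new classes.
        @Scheduled(fixedRate = 60_000)
        public void logClassCounts() {
            ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();
            System.out.printf("classes loaded=%d totalLoaded=%d unloaded=%d%n",
                    classes.getLoadedClassCount(),
                    classes.getTotalLoadedClassCount(),
                    classes.getUnloadedClassCount());
        }
    }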
EDIT
Looks like it's Spring Cloud Consul that's causing the issue:
There appears to be a memory leak in Spring Cloud Consul. I've raised issue https://github.com/spring-cloud/spring-cloud-consul/issues/183
Related
I have a Spring Boot 2.5.7 application where I set up Micrometer to expose metrics:
runtimeOnly("io.micrometer:micrometer-registry-prometheus")
When I make a request locally to http://localhost:8081/actuator/prometheus, there are no performance problems with my application.
But when I make a request to the actuator on a server under high load,
https://myserver:8081/actuator/prometheus
returns a lot more data in the response, and it also slows down all requests currently running on my server.
The problem appears even after a single request to /actuator/prometheus.
Is there any way to optimize Micrometer's work (while returning the same amount of metrics) so that it does not slow down my application?
Without sufficient data it is hard to give a recommendation. If the slowness is due to insufficient memory or garbage-collection pressure, try increasing the memory available to your application (e.g. a higher -Xmx).
Reviewing the metrics being returned may also give you some ideas. For example, if you have a high thread count, I believe there is a pause while Micrometer iterates over the thread states on each scrape; you could look into disabling that metric.
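A minimal sketch of disabling it, assuming the jvm.threads.states meter really is the bottleneck in your case (Spring Boot should also honor the equivalent property management.metrics.enable.jvm.threads.states=false):

    import io.micrometer.core.instrument.config.MeterFilter;

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class MetricsConfig {

        // jvm.threads.states walks all live threads on every scrape, which
        // can be slow with a very high thread count; deny it if you can live
        // without per-state thread gauges.
        @Bean
        public MeterFilter denyThreadStateMetrics() {
            return MeterFilter.denyNameStartsWith("jvm.threads.states");
        }
    }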
I have a Spring Boot microservice deployed in a container with Corretto 11 on ECS.
The component is deployed with 512 MB, and its initial consumption is close to 50%. As traffic increases, memory increases and is never freed, to the point where the ECS task crashes and a new one must be started.
The following image shows the behavior of memory over time; as traffic increases, the ECS tasks stay up for shorter and shorter periods.
[Image: Memory consumption in a container on AWS ECS]
Spring Boot Version: 2.4.3
JDK Image: Corretto 11
UPDATE:
I ran a profiler and analyzed the heap dump, and I see high consumption in Spring libraries.
[Image: Heap dump analysis with VisualVM]
According to the screenshot, the problem seems to be in DefaultListableBeanFactory objects.
Please check your code and make sure you don't instantiate Spring beans for every request. If you do, refactor so that they are autowired only once.
For example, check the following solution:
Memory leak in jboss service due to DefaultListableBeanFactory objects
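In code, the difference looks roughly like this (a sketch; the controller, config, and service names are hypothetical):

    import org.springframework.context.annotation.AnnotationConfigApplicationContext;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    class MyService {
        String doWork() { return "ok"; }
    }

    @Configuration
    class AppConfig {
        @Bean
        MyService myService() { return new MyService(); }
    }

    // Anti-pattern: building a new application context (and with it a new
    // DefaultListableBeanFactory) on every request; these pile up in the
    // heap exactly as in the screenshot above.
    @RestController
    class LeakyController {
        @GetMapping("/leaky")
        String handle() {
            AnnotationConfigApplicationContext ctx =
                    new AnnotationConfigApplicationContext(AppConfig.class);
            return ctx.getBean(MyService.class).doWork(); // ctx is never closed
        }
    }

    // Preferred: let Spring inject the singleton once, at startup.
    @RestController
    class FixedController {
        private final MyService service;

        FixedController(MyService service) {
            this.service = service;
        }

        @GetMapping("/fixed")
        String handle() {
            return service.doWork();
        }
    }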
Here is an explanation of why there can be a memory leak in serializable bean factory management:
https://github.com/spring-projects/spring-framework/issues/12159?focusedCommentId=57240&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-57240
P.S. I don't think there is a bug in the Spring Framework itself, but if there is, please upgrade Spring Boot to the latest version; maybe your version contains a bug that causes the memory leak.
See details in this reported issue:
https://github.com/spring-projects/spring-framework/issues/25619
I have a Spring Boot service deployed to a Linux server. Spring Boot Admin shows it consuming about 684 MB, then dropping to 38 MB for a while, yet the service is just a simple controller that sends a modal HTML fragment to the front end, and it is called roughly every 30 seconds. Is there a way to reduce this service's memory consumption? Please let me know any memory optimization techniques I can use. Appreciate your help. It does have Spring Boot Actuator (FYI).
If you are using Gradle, you can see the full dependency tree with the command gradle dependencies (or generate a build scan with gradle build --scan), and then exclude repeated or unused transitive dependencies.
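A sketch of an exclusion in the Gradle Kotlin DSL (the starter and the excluded module here are only placeholders; exclude only what you have confirmed is unused):

    dependencies {
        implementation("org.springframework.boot:spring-boot-starter-web") {
            // Hypothetical example: drop a transitive module you have
            // verified the application never uses.
            exclude(group = "commons-logging", module = "commons-logging")
        }
    }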
We have Tomcat in Docker inside an EC2 instance. A Java Spring Boot application which uses SQS runs in that Tomcat. As the load increases, we see that system memory grows slowly, and we have to restart the EC2 instance to clear it.
During all this time, the heap remains stable.
We were able to slow down the memory growth with a change that makes the JAXB context a singleton.
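The change was essentially the following (a sketch; MyMessage stands in for our actual bound class):

    import javax.xml.bind.JAXBContext;
    import javax.xml.bind.JAXBException;
    import javax.xml.bind.Marshaller;
    import javax.xml.bind.Unmarshaller;

    public final class JaxbHolder {

        // JAXBContext is thread-safe but expensive to create; building a new
        // one per message churns memory over time. Create it once and reuse.
        private static final JAXBContext CONTEXT;

        static {
            try {
                CONTEXT = JAXBContext.newInstance(MyMessage.class);
            } catch (JAXBException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        private JaxbHolder() {}

        // Marshaller and Unmarshaller are NOT thread-safe, so still create
        // those per use; they are cheap compared to the context.
        public static Marshaller newMarshaller() throws JAXBException {
            return CONTEXT.createMarshaller();
        }

        public static Unmarshaller newUnmarshaller() throws JAXBException {
            return CONTEXT.createUnmarshaller();
        }
    }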
Not sure what is using the system memory, or how to clear or prevent it. Could Spring Boot and its listener threads be taking this memory?
Thanks in advance.
The response time of my Spring Boot REST service running on embedded Tomcat sometimes goes really high. I have isolated the external dependencies, and all of that is pretty quick.
I am at the point where I think it has something to do with Tomcat's default pool of 200 threads that it reserves for incoming requests to the service.
What I believe is that under heavy load (100 requests per second) all 200 threads are tied up, so further requests are queued, which leads to the higher response times.
I was wondering if there is a definitive way to find out whether incoming requests are really getting queued. I have done extensive research in the Tomcat documentation and the Spring Boot embedded container documentation; unfortunately I don't see anything relevant.
Does anyone have any ideas on how to check this?
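For context, this is roughly how I have been trying to watch the pool so far, assuming Actuator registers Micrometer's Tomcat metrics (tomcat.threads.busy and tomcat.threads.config.max) and that @EnableScheduling is on; if busy sits pinned at the configured max under load, new requests are waiting for a worker thread:

    import io.micrometer.core.instrument.MeterRegistry;

    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Component
    public class TomcatSaturationLogger {

        private final MeterRegistry registry;

        public TomcatSaturationLogger(MeterRegistry registry) {
            this.registry = registry;
        }

        // Logged every 5 seconds; busy == max for sustained periods is a
        // strong sign that requests are queueing for a thread.
        @Scheduled(fixedRate = 5_000)
        public void logThreadPool() {
            double busy = registry.get("tomcat.threads.busy").gauge().value();
            double max = registry.get("tomcat.threads.config.max").gauge().value();
            System.out.printf("tomcat threads: busy=%.0f max=%.0f%n", busy, max);
        }
    }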