We have some REST services which are running out of memory frequently.
I was doing a code review and saw heavy usage of static methods.
Can that be an issue?
Java 8, spring-boot 2.1.5
How do I debug this issue?
Get a heap dump using the jvisualvm tool.
Download the Eclipse Memory Analyzer (MAT) and start analyzing the following factors:
Heap
Garbage Collection
OutOfMemory
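If you would rather capture the dump programmatically instead of attaching jvisualvm interactively, here is a minimal sketch using the HotSpot diagnostic MBean (it assumes a HotSpot/OpenJDK JVM; the class name and output path are just examples). Open the resulting .hprof file in Eclipse MAT:

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Obtain the HotSpot-specific diagnostic MBean (HotSpot/OpenJDK only)
        HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);

        // Dump only live objects; the path is an example, pick any writable location
        diagnostic.dumpHeap("/tmp/rest-service.hprof", true);
    }
}

On HotSpot you can also have the JVM write a dump automatically when it runs out of memory by starting it with -XX:+HeapDumpOnOutOfMemoryError.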
In WSO2 EI 6.6, a proxy stopped working abruptly. Upon analyzing, we observed an error in the WSO2 Carbon log, "GC Overhead limit exceeded"; after this error nothing happens in the EI.
The proxy logic is to get data from a SQL Server table, form an XML payload, and send it to an external API. The proxy runs at a 5-minute interval, and in each interval a maximum of 5 records is pushed to the API.
After restarting the WSO2 Carbon services, the proxy starts working again. Currently we are restarting the services every 3 days to avoid this issue.
I need to know how to identify the potential issue and resolve it.
This means the JVM is spending nearly all of its time in garbage collection while reclaiming very little memory, i.e. the allocated heap is effectively exhausted. There can be many reasons for this. For example, if you haven't allocated enough memory to the JVM you can easily run out of memory. If that's not the case you need to analyze a memory dump and see what's occupying the memory and causing it to fill up.
Generally, when you see the mentioned error the JVM automatically creates a heap dump (heap-dump.hprof) in the <EI_HOME>/repository/logs directory. You can try analyzing the dump to find the root cause. If the server doesn't generate a memory dump, manually take one when heap usage is higher than the expected level and analyze it.
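If you want to watch heap occupancy to decide when that manual dump is worth taking, here is a small sketch based on the standard MemoryMXBean (the polling interval and the 90% guideline in the comment are only illustrative, and the code has to run inside the EI JVM or be adapted to a remote JMX connection):

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapWatcher {
    public static void main(String[] args) throws InterruptedException {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        while (true) {
            MemoryUsage heap = memory.getHeapMemoryUsage();
            double usedRatio = (double) heap.getUsed() / heap.getMax();
            System.out.printf("Heap used: %d MB of %d MB (%.0f%%)%n",
                    heap.getUsed() >> 20, heap.getMax() >> 20, usedRatio * 100);
            // If usage stays above roughly 90% even right after GC cycles,
            // it is time to capture a dump and analyze it.
            Thread.sleep(60_000);
        }
    }
}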
I see a degradation in response times within my applications. After a server restart, response times are acceptable. However, after some time, which depends on the workload on the system, the response times degrade and the server has to be restarted to return to good performance.
Are you monitoring the Java heap usage with verbose garbage collection (GC) logs?
The behavior you describe can happen if the heap has enough free space after a restart, then gradually fills with long-lived objects as the workload runs. This may be caused by the heap simply being too small, or the application may have a memory leak, using heap and not releasing it for collection when the associated work is completed. When there is not enough free heap space, the application work slows down because the JVM spends excessive time running GC.
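If verbose GC logs are not enabled yet, you can still get a rough picture of how much time the JVM spends in GC using the standard GarbageCollectorMXBean. This is only a sketch (the class name and the one-minute sampling interval are illustrative), not a replacement for proper verbose GC logs:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcTimeLogger {
    // Sum the cumulative time (ms) spent in all collectors so far
    private static long totalGcTime() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long time = gc.getCollectionTime();
            if (time > 0) {
                total += time;
            }
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        long lastGc = totalGcTime();
        long lastWall = System.currentTimeMillis();
        while (true) {
            Thread.sleep(60_000);                       // sample once a minute
            long gc = totalGcTime();
            long wall = System.currentTimeMillis();
            double percent = 100.0 * (gc - lastGc) / (wall - lastWall);
            System.out.printf("Time spent in GC over the last minute: %.1f%%%n", percent);
            lastGc = gc;
            lastWall = wall;
        }
    }
}

A healthy application typically spends only a few percent of wall-clock time in GC; steadily climbing numbers usually point to a heap that is too small or to objects that are being retained.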
You can learn more about Java GC troubleshooting in our documentation:
https://www.eclipse.org/openj9/docs/vgclog/
You can also open a support case to get assistance from WebSphere/Java troubleshooting experts, if you have a support arrangement with IBM.
We have a large TeamCity Server (10.0.3), with around 2,000 build configurations and around 50 build agents.
Frequently, we encounter performance issues related to garbage collection.
Inside the teamcity-server.log, we found this:
[2017-11-28 12:30:54,339] WARN - jetbrains.buildServer.SERVER - GC usage exceeded 50% threshold and is now 60%. GC was fired 82987 times since server start and consumed total 18454595ms. Current memory usage: 1.09 GB.
We are unable to figure out the source of the issue.
According to the documentation, a 64-bit version of Java should be used, with only 4 GB of memory. We encountered some issues, and decided to use the -Xmx6g parameter instead.
Do you know where we can enable/find more traces in order to figure out the source of our over-consumption of memory?
First, you can try disabling third-party plugins and see if it helps.
Then you can try benchmarking the server according to this blog post and see if increased memory limits improve the situation.
But the best way to investigate the memory over-consumption would be capturing a memory dump and investigating the content using profiling tools. You can create a memory dump from the Administration | Server Administration | Diagnostics page of your TeamCity web UI using the Dump Memory Snapshot button.
You can investigate the dump on your own or send it to JetBrains for investigation.
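As a lightweight complement to the full memory dump, a sketch like the one below reports usage per memory pool, which can show whether the old generation stays permanently full (a sign of an undersized heap or retained objects). It is only illustrative and has to run inside the server JVM or be adapted to a remote JMX connection:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class PoolUsageReport {
    public static void main(String[] args) {
        // Print current usage per memory pool (eden, survivor, old gen, metaspace, ...)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            MemoryUsage usage = pool.getUsage();
            long maxMb = usage.getMax() > 0 ? usage.getMax() >> 20 : -1;  // -1 means undefined
            System.out.printf("%-30s %6d MB used / %6d MB max%n",
                    pool.getName(), usage.getUsed() >> 20, maxMb);
        }
    }
}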
I am working with the Grails framework. It takes too much time to respond to a request from the browser. Due to this issue I have to restart the server many times. I would highly appreciate an accurate answer.
You may be suffering from performance issues due to insufficient memory (heap). In this case you will need to increase the maximum heap size. You can use the Grails JavaMelody plugin, which integrates the JavaMelody system monitoring tool into a Grails application, to track such issues.
To increase the maximum heap size when running the application with the run-app command, add the following JVM option: -Xmx1024m
If running with the run-war command add the following to your BuildConfig.groovy:
grails.tomcat.jvmArgs = ['-Xmx1g']
In both cases set the maximum heap size according to your needs.
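If you want to confirm the new limit is actually picked up by the running JVM, a minimal check looks like this (plain Java for the sketch; the same Runtime call can be made from any Grails controller or service):

public class MaxHeapCheck {
    public static void main(String[] args) {
        // Reports the maximum heap the JVM will attempt to use (i.e. the effective -Xmx)
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.printf("Max heap available to the JVM: %d MB%n", maxBytes >> 20);
    }
}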
We have an application built using Grails 2.0.1 and MongoDB. As our user base has grown, we did some performance research and noticed that each typical request makes Grails consume about 150 MB of RAM, and when memory usage is about to reach its maximum it performs GC.
We've set singleton scope for controllers and made the services non-transactional. We use JRockit.
I'd like to know whether this can be considered normal for a Grails app or not. Our website is nothing more than a usual website, with no extra memory usage, just a user management system, and the code itself seems to be OK.
Here are the plugins we use:
app.grails.version=2.0.1,
app.servlet.version=2.4,
app.version=0.1,
plugins.cache-headers=1.1.3,
plugins.code-coverage=1.2.5,
plugins.codenarc=0.12,
plugins.crypto=2.0,
plugins.gsp-arse=1.3
plugins.jaxrs=0.6,
plugins.mongodb=1.0.0.RC5,
plugins.navigation=1.2,
plugins.quartz=0.4.2,
plugins.redis=1.0.0.M9,
plugins.rendering=0.4.3,
plugins.selenium=0.8,
plugins.selenium-rc=1.0.2,
plugins.spring-security-core=1.2.7.2,
plugins.springcache=1.3.1,
plugins.svn=1.0.1,
plugins.tomcat=2.0.1,
plugins.ui-performance=1.2.2
On a Sun JDK, fire up jvisualvm (or the JRockit equivalent, if there is one; otherwise get yourself a proper profiler that works with JRockit), attach it to your running server, start the profiler, and analyze the output. This will give you an idea of where to look.
Maybe you are actually loading that much information from the backend storage, but that's just a guess.
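If you want a rough sanity check of the 150 MB per request figure before reaching for a profiler, something like the sketch below can help. handleTypicalRequest() is a hypothetical stand-in for whatever one request actually does, and the result is only indicative because a GC can run between the two measurements; a profiler or heap dump remains the reliable tool:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

public class AllocationCheck {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        long before = memory.getHeapMemoryUsage().getUsed();
        handleTypicalRequest();   // hypothetical stand-in for one request's work
        long after = memory.getHeapMemoryUsage().getUsed();
        // Rough heap growth for a single request; negative values mean a GC ran in between
        System.out.printf("Approximate heap growth: %d MB%n", (after - before) >> 20);
    }

    private static void handleTypicalRequest() {
        // placeholder: call the service/DAO code that a typical request exercises
    }
}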