I am getting a java.lang.OutOfMemoryError: Metaspace exception since a new deployment to our production environment. (Before this change we used a separate JAR for the scheduling; it worked fine, but it kept stopping due to a network issue, so we moved the scheduler into the WildFly server alongside the other WARs.) We are running a WildFly 11.0.0.Final server with 4 WAR files deployed; one of them has an @Scheduled job (Spring scheduler) that runs every 10 minutes. We normally stop the WildFly service and start it again after deploying a new WAR, but after a certain time (4 to 5 hours) the application starts slowing down, and in the server console I can see java.lang.OutOfMemoryError: Metaspace, as below:
WARN [org.jboss.modules] (default task-11) Failed to define class com.arjuna.ats.jta.cdi.TransactionScopeCleanup$1 in Module "org.jboss.jts" from local module loader #1e802ef9 (finder: local module finder #2b6faea6 (roots: E:\Data\wildfly-11.0.0.Final\modules,E:\Data\wildfly-11.0.0.Final\modules\system\layers\base)): java.lang.OutOfMemoryError: Metaspace
ERROR [org.jboss.as.ejb3.invocation] (default task-55) WFLYEJB0034: EJB Invocation failed on component AuditLoggerHandler for method public void com.banctec.caseware.server.logger.AuditLoggerHandlerBean.publishCaseAudit(java.lang.String,com.banctec.caseware.server.helpers.SessionHolder,com.banctec.caseware.resources.Resource[],java.lang.Long) throws com.banctec.caseware.exceptions.CaseWareException: javax.ejb.EJBTransactionRolledbackException: WFLYEJB0457: Unexpected Error
After that, every operation fails with a similar error caused by java.lang.OutOfMemoryError: Metaspace.
As a first attempt, I removed the plain code from the @Scheduled method and used the Executor framework with a fixed thread pool of 5 threads. We deployed again with this change, but the same issue keeps coming back.
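For reference, the change is roughly of this shape (a simplified sketch with placeholder class and method names, not the real code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class AuditScheduler { // placeholder name, assumes @EnableScheduling is set on a configuration class

    // fixed pool of 5 worker threads, created once and reused for every run
    private final ExecutorService workers = Executors.newFixedThreadPool(5);

    @Scheduled(fixedDelay = 600_000) // still fires every 10 minutes
    public void runScheduledWork() {
        // hand the work off to the pool instead of running it inline
        workers.submit(this::processPendingWork);
    }

    private void processPendingWork() {
        // actual business logic omitted
    }
}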
I am not sure what keeps bringing the server down and causing this memory leak.
All 4 WARs use Spring Boot 2.0.2.
Any help is appreciated.
You need to increase your Metaspace size. And check if you have a memory leak. Please take a look at the following link: http://www.mastertheboss.com/java/solving-java-lang-outofmemoryerror-metaspace-error/
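For example, with a default WildFly layout on Windows, the Metaspace limits can be raised in bin\standalone.conf.bat (illustrative values only; tune them for your deployment):

rem bin\standalone.conf.bat -- raise the Metaspace ceiling for the server JVM (values are illustrative)
set "JAVA_OPTS=%JAVA_OPTS% -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m"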
You can use tools like JProfiler to find the memory leak. It works like a charm. Check out the following link: https://www.youtube.com/watch?v=032aTGa-1XM
Related
Recently I added a batch of code to my project, and the Gradle build started failing in GitHub Actions: some tests throw an OOM error when running the Gradle test task.
The project tech stack is Spring Boot 3/R2DBC + Kotlin 1.8/Kotlin Coroutines + Java 17 (Gradle Java language level set to 17).
The build tooling stack:
Local system: Windows 10 Pro (16 GB memory) / Oracle JDK 17 / Gradle 7.6 (project Gradle wrapper)
GitHub Actions: custom Ubuntu runner with 16 GB memory / Amazon JDK 17
After some research, we switched to a custom larger runner with 16 GB memory and increased the Gradle JVM heap size to 8 GB, but it did not help:
org.gradle.jvmargs=-Xmx8g -Xms4g
We still get the following errors when running tests. The test code itself is not the problem; the tests pass on my local machine.
*** java.lang.instrument ASSERTION FAILED ***: "!errorOutstanding" with message can't create name string at src/java.instrument/share/native/libinstrument/JPLISAgent.c line: 827
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "ClassGraph-worker-439"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "ClassGraph-worker-438"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "boundedElastic-evictor-1"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "ClassGraph-worker-435"
*** java.lang.instrument ASSERTION FAILED ***: "!errorOutstanding" with message can't create name string at src/java.instrument/share/native/libinstrument/JPLISAgent.c line: 827
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "ClassGraph-worker-433"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "ClassGraph-worker-436"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "ClassGraph-worker-432"
Update: after raising this in the Spring Boot and other project discussions, it is now confirmed that the failure is caused by ClassGraph. ClassGraph is used by springdoc to scan and analyze the OpenAPI endpoints. If I remove springdoc from the project, the build works again.
The problem is that even after I set a global springdoc.packagesToScan to shrink the scan scope, it still failed with an OOM error.
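For reference, the global setting I tried is of this shape (assuming the standard springdoc property name; the package here is a placeholder for the real base package):

# application.properties
springdoc.packagesToScan=com.example.myapp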
It looks like the error happens in a Gradle test worker. Gradle runs tests in separate JVM processes whose memory settings are different from those of the main Gradle process; a test worker defaults to 512 MB of heap.
You can do different things to solve this: either increase the heap for that worker, or reduce the number of tests executed in each worker. You can reduce the number in two ways: by forking multiple parallel processes per module, or by forking a new process after a fixed number of tests in serial mode. Increasing the heap for the Gradle test worker is probably the best option, but if you have many modules executing their tests in parallel you might also exhaust the total memory of your agent.
Please take a look at the Gradle testing documentation for more details on these options.
You can control all of these settings with the code below (I do not recommend applying all of them together; this is just to illustrate the options):
tasks.withType<Test>().configureEach {
    maxHeapSize = "1g"       // heap for each forked test worker JVM (default is 512m)
    forkEvery = 100          // recycle the worker after every 100 test classes
    maxParallelForks = 4     // how many test workers run in parallel
}
In any case, my recommendation would be to profile the build to figure out exactly which processes are exhausting the memory, what the most suitable memory settings are, and potentially where the leak producing this is.
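As a starting point, Gradle's built-in reporting can show where the build spends its effort (standard Gradle command-line flags; they report build activity rather than a full memory profile):

./gradlew test --profile   # writes an HTML timing report under build/reports/profile
./gradlew test --scan      # publishes a more detailed build scan (you will be asked to accept the terms)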
I cannot make my app boot on Heroku (hobby plan, JHipster 5.7) because it is using too much memory, as the support service says:
Your app is crashing because it is using too much memory when it boots:
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaConfiguration.class]: Invocation of init method failed; nested exception is java.lang.OutOfMemoryError: Java heap space
You will need to use less than 512MB of RAM both at boot and at runtime. We do
have this page https://devcenter.heroku.com/articles/java-memory-issues that
provides some pointers for debugging memory issues.
So I've read the https://devcenter.heroku.com/articles/java-memory-issues document and I have several questions:
1) Is there an easy solution to this problem (for a programmer who is not interested in administering the machine)?
2) Which tool should I use to check how much memory I'm using: VisualVM?
3) What should I look for in order to reduce memory usage without creating a bigger problem?
Thanks
EDIT FROM HEROKU CUSTOMER SERVICE:
Java tends to be a little bit more memory hungry, but it should be obeying the JAVA_OPTS set. What happens when you just unset the JAVA_OPTS and rely on our defaults? You can unset it with heroku config:unset JAVA_OPTS -a jhipsterpress. I ask because typically we will automatically set JAVA_OPTS to something that should run correctly on the platform given available memory.
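For reference, inspecting or overriding JAVA_OPTS from the CLI looks like this (the app name comes from the message above; the flag values are only illustrative for a 512 MB dyno, not Heroku's actual defaults):

heroku config:get JAVA_OPTS -a jhipsterpress      # see what is currently set
heroku config:unset JAVA_OPTS -a jhipsterpress    # fall back to Heroku's defaults
heroku config:set JAVA_OPTS="-Xmx256m -Xss512k" -a jhipsterpress   # or set conservative values manually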
I would like to ask why the Spring classloader loads Java classes multiple times when <context:load-time-weaver aspectj-weaving="on"/> is used in the XML config.
I can see Spring is using the
org.springframework.context.support.ContextTypeMatchClassLoader$ContextOverridingClassLoader
classloader which, as I read in the documentation, creates a new classloader instance for each loaded class. In our current project this results in 11 loaded copies of the same class: 1 loaded by the parent classloader and 10 more by ContextOverridingClassLoader instances (each in its own classloader). What could be the cause of this? If we start up many applications in parallel, these duplicate classes eat up too much PermGen memory (resulting in a crash). We could just increase the PermGen memory, of course, but I was curious whether there is anything else we could do.
As soon as I remove this configuration parameter, Spring loads each class only once. I checked this using the -XX:+TraceClassLoading VM option and heap dumps.
We are using Spring 3.2.4 and AspectJ 1.7.4.
Update:
After upgrading to Spring 4.2.1, each class is now loaded 15 times. Could it somehow be connected to Spring Aspects?
We ended up calling GC after application context initialization, which reduces the amount of memory used during the parallel startup of many applications (a reasonable trade-off against a longer application startup), as each application cleans up after itself once initialized.
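The hook we use is roughly like this (a simplified sketch with a placeholder class name; System.gc() is only a hint to the JVM, not a guaranteed collection):

import org.springframework.context.ApplicationListener;
import org.springframework.context.event.ContextRefreshedEvent;
import org.springframework.stereotype.Component;

@Component
public class PostInitGcTrigger implements ApplicationListener<ContextRefreshedEvent> {

    @Override
    public void onApplicationEvent(ContextRefreshedEvent event) {
        // Once the context is fully refreshed, suggest a collection so the temporary
        // ContextOverridingClassLoader instances used during weaving can be reclaimed.
        System.gc();
    }
}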
I have a web app running on Tomcat 6.0.35, which makes use of Spring 3.1.2, Hibernate 4.1.8 and MySQL Connector 5.1.21.
I have been trying to figure out what is causing Tomcat to keep running out of memory (Perm Gen) after a few redeploys.
Note: don't tell me to increase Tomcat's JVM memory, because that will simply postpone the problem.
Specifically, I used the VisualVM tool and was able to eliminate some problems, including some MySQL and Google thread issues. I also discovered and fixed a problem caused by using Velocity as a singleton in the web app, as well as some thread-local variables that were not being cleared at the correct time/place (sketch below). But I still cannot completely eliminate or explain this Hibernate issue.
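For the thread locals, the fix was roughly of this shape (a simplified sketch with placeholder names, not the actual code): clear them in a finally block so the pooled Tomcat worker threads do not keep references alive.

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

public class ThreadLocalCleanupFilter implements Filter {

    // placeholder for the application's own per-request ThreadLocal holder
    static final ThreadLocal<Object> REQUEST_STATE = new ThreadLocal<Object>();

    public void init(FilterConfig filterConfig) {
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        try {
            chain.doFilter(request, response);
        } finally {
            // always clear, even when the request fails, so the worker thread holds no reference
            REQUEST_STATE.remove();
        }
    }

    public void destroy() {
    }
}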
Here is what I'm doing:
Deploy my webapp from my development IDE
Open a Tomcat manager window in my browser
Start VisualVM and take a heap dump of the Tomcat instance
Go to the Tomcat manager and redeploy my webapp
Take another heap dump in VisualVM
My first observation is that the WebappClassLoader for the original webapp is not garbage collected.
When I scrutinize the retained objects from the second heap dump, the class org.hibernate.internal.SessionFactoryImpl features prominently, which leads me to believe that it IS NOT being destroyed/closed by Spring or something along those lines (and hence the WebappClassLoader still holds a reference to it).
Has anyone encountered this problem and identified the correct fix for it?
I don't currently have an idea of what could be amiss in your setup, but what I do know is that with Plumbr you'll most likely find the actual leak(s).
When I run my application it gives me this exception:
Exception sending context initialized event to listener instance of class
org.springframework.web.util.Log4jConfigListener
java.lang.ExceptionInInitializerError
But this problem is not permanent. If I remove the log4j entries from web.xml, restart the PC, and then add the log4j entries back and start the server, the application works fine.
I noticed that the problem emerges when the application is already deployed and I undeploy and then redeploy the same application.
Please help me; I have been facing this problem for 3 months.
I investigated it a little and found a similar issue:
It occurs when multiple log4j JAR files are available to the application:
one from the web/application server and another from the build path
(included from a path other than the web/application server).
You can see the entire thread here:
http://www.coderanch.com/t/551933/Spring/Exception-sending-context-initialized-event
It looks like this is your problem.
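A quick way to confirm which JAR the Log4j classes are actually coming from is something like this (standard JDK calls; assumes log4j 1.x, which Log4jConfigListener targets, is on the classpath):

import java.net.URL;

public class Log4jSourceCheck {
    public static void main(String[] args) {
        // Prints the JAR (or directory) that the log4j Logger class was loaded from.
        // Run the same two lines inside a servlet or JSP to see the view from inside the web app.
        URL location = org.apache.log4j.Logger.class
                .getProtectionDomain().getCodeSource().getLocation();
        System.out.println("log4j loaded from: " + location);
    }
}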