I have a question about Out of Memory errors in IBM WebSphere 8.5.5.7. We have a Spring RESTful web services application deployed on IBM WAS 8.5.5.7 that has been hitting the OutOfMemoryError below for the last five days:
[2/3/16 13:12:51:651 EST] 000000ab BBFactoryImpl E CWOBB9999E: Something unexpected happened; the data (if any) is <null> and the exception (if any) is java.lang.OutOfMemoryError: Java heap space
	at com.ibm.oti.vm.VM.getClassNameImpl(Native Method)
	at com.ibm.oti.vm.AbstractClassLoader.getPackageName(AbstractClassLoader.java:384)
	at com.ibm.oti.vm.BootstrapClassLoader.loadClass(BootstrapClassLoader.java:65)
	at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:691)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:680)
	at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:693)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:680)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:358)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:663)
	at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:502)
	at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
	at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
	at org.eclipse.osgi.internal.loader.buddy.RegisteredPolicy.loadClass(RegisteredPolicy.java:79)
	at org.eclipse.osgi.internal.loader.buddy.PolicyHandler.doBuddyClassLoading(PolicyHandler.java:135)
	at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:494)
	at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
	at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
	at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
	at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:693)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:680)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:663)
	at sun.reflect.DelegatingClassLoader.loadClass(DelegatingClassLoader.java:51)
	at sun.misc.Unsafe.defineClass(Native Method)
	at sun.reflect.ClassDefiner.defineClass(ClassDefiner.java:57)
	at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:437)
	at java.security.AccessController.doPrivileged(AccessController.java:363)
	at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:433)
	at sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:149)
	at sun.reflect.ReflectionFactory.newConstructorForSerialization(ReflectionFactory.java:316)
	at java.io.ObjectStreamClass.getSerializableConstructor(ObjectStreamClass.java:1409)
	at java.io.ObjectStreamClass.access$1500(ObjectStreamClass.java:63)
	at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:515)
	at java.security.AccessController.doPrivileged(AccessController.java:363)
	at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:491)
	at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:338)
	at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:625)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1619)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1619)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1768)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:364)
	at com.ibm.son.util.Util.deserialize(Util.java:434)
	at com.ibm.son.mesh.AbstractTCPImpl.procReceivedMessage(AbstractTCPImpl.java:478)
	at com.ibm.son.mesh.CfwTCPImpl.completedRead(CfwTCPImpl.java:1248)
	at com.ibm.son.mesh.CfwTCPImpl.complete(CfwTCPImpl.java:1061)
	at com.ibm.ws.ssl.channel.impl.SSLReadServiceContext$SSLReadCompletedCallback.complete(SSLReadServiceContext.java:1818)
	at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:175)
	at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
	at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
	at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138)
	at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204)
	at com.ibm.io.async.ResultHandler.runEventProces
Analyzing the heap dump with Introscope and Heap Analyzer, we observed that the lion's share of the memory (>60%) is consistently consumed by com/ibm/xml/xlxp2/scan/util/SimpleDataBufferFactory, which is used by the IBM StAX parser bundled with WAS.
The Introscope analysis also shows a sudden spike in thread count and memory usage, and a gradual increase in connection count, at the time the OOM happened.
Looking into the com.ibm.xml.xlxp2.scan.util.DataBuffer classes taking so much heap, we found that IBM has fixed Out Of Memory issues in classes under com.ibm.xml.xlxp.scan.util/com.ibm.xml.xlxp2.scan.util in WAS 6, WAS 7, and WAS 8:
http://www-01.ibm.com/support/docview.wss?uid=swg1PM39346
http://www-01.ibm.com/support/docview.wss?uid=swg1PM08333
Can anyone share any idea whether this is a known issue with IBM WAS 8.5.5.7? I have not been able to find a definitive answer.
Many of the out of memory problems concerning com.ibm.xml.xlxp2.scan.util.DataBuffer were addressed with system properties that users can configure to reduce the memory used by the IBM StAX parser.
The following system properties can be helpful in resolving out of memory issues with the IBM StAX parser. Each of them should be available in WebSphere Application Server v8.5.5.7.
com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength
System property which controls the size of the StAX parser's data buffers. The default value is 65536.
Setting this property to a smaller value such as 2048 may reduce memory usage if the 64KB buffers are only partially filled by the InputStream while in use. The buffers are cached within the StAX parser (inside com/ibm/xml/xlxp2/scan/util/SimpleDataBufferFactory), so a reduction in memory usage there reduces the overall memory tied to each StAX parser object.
com.ibm.xml.xlxp2.api.util.Pool.STRONG_REFERENCE_POOL_MAXIMUM_SIZE
System property (introduced by APAR PM42465) which limits the number of XMLStreamReaders (and XMLStreamWriters) that will be cached using strong references. Follow the instructions at the link provided on how to set this property.
com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLoadFactor
The value of this system property is a non-negative integer n which determines the minimum number of bytes (as a percentage of the buffer size) that will be loaded into each buffer; the percentage is calculated as 1 / (2^n).
When the system property is not set, its default value is 3. Setting the property to a lower value than the default can improve memory usage but may also reduce throughput.
com.ibm.xml.xlxp2.scan.util.SymbolMap.maxSymbolCount
System property (introduced by APAR PI08415). The value of this property is a non-negative integer which determines the maximum size of the StAX parser's symbol map. Follow the instructions at the link provided on how to set this property.
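These properties are normally passed as generic JVM arguments in the WAS admin console, but for a quick experiment they can also be set programmatically before the first parser is created. A minimal sketch (the values below are illustrative assumptions, not recommendations; tune them for your workload):

```java
public class StaxTuning {
    public static void main(String[] args) {
        // Example values only. The property names are the IBM StAX parser
        // properties described above; "2048" and "10" are hypothetical choices.
        System.setProperty(
            "com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength", "2048");
        System.setProperty(
            "com.ibm.xml.xlxp2.api.util.Pool.STRONG_REFERENCE_POOL_MAXIMUM_SIZE", "10");

        // Confirm what the parser will see when it reads the property.
        System.out.println("bufferLength = " + System.getProperty(
            "com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength"));
    }
}
```

Note that setting these in code only works if it happens before the StAX implementation reads them; the JVM argument route (-Dcom.ibm.xml.xlxp2...=...) is the safer option in WAS.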
Related
When I run my program, load a 10 MB Excel file, and calculate something, I get this error:
Exception in thread "AWT-EventQueue-0" java.lang.OutOfMemoryError: Java heap space
at java.base/java.util.HashMap.resize(HashMap.java:705)
at java.base/java.util.HashMap.putVal(HashMap.java:630)
at java.base/java.util.HashMap.put(HashMap.java:613)
at java.base/java.util.HashSet.add(HashSet.java:221)
at java.base/java.util.Collections.addAll(Collections.java:5593)
at org.logicng.formulas.FormulaFactory.or(FormulaFactory.java:532)
at org.logicng.formulas.FormulaFactory.naryOperator(FormulaFactory.java:372)
at org.logicng.formulas.FormulaFactory.naryOperator(FormulaFactory.java:359)
at org.logicng.formulas.NAryOperator.restrict(NAryOperator.java:130)
at org.logicng.formulas.NAryOperator.restrict(NAryOperator.java:129)
at org.logicng.formulas.NAryOperator.restrict(NAryOperator.java:129)
at org.logicng.transformations.qe.ExistentialQuantifierElimination.apply(ExistentialQuantifierElimination.java:74)
at ToPue.calculatePueForPos(ToPue.java:59)
at PosvHandler.calculatePosv(PosvHandler.java:21)
at PueChecker$EqualBtnClicked.actionPerformed(PueChecker.java:192)
at java.desktop/javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:1967)
at java.desktop/javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2308)
at java.desktop/javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:405)
at java.desktop/javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:262)
at java.desktop/javax.swing.plaf.basic.BasicButtonListener.mouseReleased(BasicButtonListener.java:279)
at java.desktop/java.awt.Component.processMouseEvent(Component.java:6636)
at java.desktop/javax.swing.JComponent.processMouseEvent(JComponent.java:3342)
at java.desktop/java.awt.Component.processEvent(Component.java:6401)
at java.desktop/java.awt.Container.processEvent(Container.java:2263)
at java.desktop/java.awt.Component.dispatchEventImpl(Component.java:5012)
at java.desktop/java.awt.Container.dispatchEventImpl(Container.java:2321)
at java.desktop/java.awt.Component.dispatchEvent(Component.java:4844)
at java.desktop/java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4919)
at java.desktop/java.awt.LightweightDispatcher.processMouseEvent(Container.java:4548)
at java.desktop/java.awt.LightweightDispatcher.dispatchEvent(Container.java:4489)
at java.desktop/java.awt.Container.dispatchEventImpl(Container.java:2307)
at java.desktop/java.awt.Window.dispatchEventImpl(Window.java:2764)
I searched everywhere and tried setting the Maven memory higher.
I set the Java heap space to 12 GB.
Yet the maximum heap usage is 201 MB. Why isn't it using the whole memory?
Can somebody help?
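One thing worth checking before anything else is which -Xmx value the running JVM actually picked up, since an IDE, Maven fork, or launcher script can silently override it. A quick sketch:

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects the effective -Xmx. If this prints a value far
        // below the 12 GB you configured, the flag is not reaching this JVM
        // (e.g. it was set on the Maven process, not the forked application JVM).
        System.out.println("max heap  : " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("total heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free heap : " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}
```

If max heap comes back small here, the OutOfMemoryError is consistent with the 201 MB observation: the process is simply not running with the heap you think it is.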
We are having an issue with virtual servers (VMs) running out of native memory. These VMs are running:
Linux 7.2(Maipo)
Wildfly 9.0.1
Java 1.8.0_151 (different JVMs have different heap sizes, ranging from 0.5 GB to 2 GB)
The JVM args are:
-XX:+UseG1GC
-XX:SurvivorRatio=1
-XX:NewRatio=2
-XX:MaxTenuringThreshold=15
-XX:-UseAdaptiveSizePolicy
-XX:G1HeapRegionSize=16m
-XX:MaxMetaspaceSize=256m
-XX:CompressedClassSpaceSize=64m
-javaagent:/<path to new relic.jar>
After about a month, sometimes longer, the VMs start to use all of their swap space, and eventually the OOM killer notices that Java is using too much memory and kills one of our JVMs.
The amount of memory used by the Java process is larger than heap + Metaspace + compressed class space, as revealed by -XX:NativeMemoryTracking=detail.
Are there tools that could tell me what is in this native memory (like a heap dump, but not for the heap)?
Are there any tools that can map Java heap usage to native memory usage (outside the heap), other than jemalloc? I have used jemalloc to try to achieve this, but the resulting graph contains only hex addresses rather than human-readable class names, so I can't really get anything out of it. Maybe I'm doing something wrong, or perhaps I need another tool.
Any suggestions would be greatly appreciated.
You can use jcmd.
Start the application with -XX:NativeMemoryTracking=summary or -XX:NativeMemoryTracking=detail.
Then use jcmd to monitor the NMT (Native Memory Tracking) data:
jcmd <pid> VM.native_memory baseline    // take the baseline
jcmd <pid> VM.native_memory detail.diff // show how native memory has changed since the baseline
The Nashorn release notes claim the JSON parser bugs were fixed, but I can still reproduce a (different) bug on the new 8u60 patch. This time it is an OutOfMemoryError.
Refer to the attached JSON [1] (it is a typical category and subcategory relation). When I try to invoke JSON.parse() on it, it fails.
[1] http://jsfiddle.net/manivannandsekaran/rfftavkz/
I tried increasing the heap size. It didn't help; instead of getting the OOM error quickly, it was just delayed a bit.
When I replace all the integer keys with alphanumeric ones, parsing is super fast. [2]
[2] https://jsfiddle.net/manivannandsekaran/8yw3ojmu/
We waited almost 4 months to get the original bug fixed, and now the new patch has introduced another bug (it is really frustrating; I am not sure how these bugs escape regression testing). Is there any workaround available? Is it possible to override the default JSON parser with another well-known JSON parser (like GSON or Jackson)?
Here is the stack trace of the failure from jjs:
jjs> load("catsubcat/test.js")
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at jdk.nashorn.internal.runtime.arrays.IntArrayData.toObjectArray(IntArrayData.java:138)
at jdk.nashorn.internal.runtime.arrays.IntArrayData.convertToObject(IntArrayData.java:180)
at jdk.nashorn.internal.runtime.arrays.IntArrayData.convert(IntArrayData.java:192)
at jdk.nashorn.internal.runtime.arrays.IntArrayData.set(IntArrayData.java:243)
at jdk.nashorn.internal.runtime.arrays.ArrayFilter.set(ArrayFilter.java:99)
at jdk.nashorn.internal.runtime.arrays.DeletedRangeArrayFilter.set(DeletedRangeArrayFilter.java:144)
at jdk.nashorn.internal.parser.JSONParser.addArrayElement(JSONParser.java:246)
at jdk.nashorn.internal.parser.JSONParser.parseObject(JSONParser.java:210)
at jdk.nashorn.internal.parser.JSONParser.parseLiteral(JSONParser.java:165)
at jdk.nashorn.internal.parser.JSONParser.parseObject(JSONParser.java:207)
at jdk.nashorn.internal.parser.JSONParser.parseLiteral(JSONParser.java:165)
at jdk.nashorn.internal.parser.JSONParser.parseObject(JSONParser.java:207)
at jdk.nashorn.internal.parser.JSONParser.parseLiteral(JSONParser.java:165)
at jdk.nashorn.internal.parser.JSONParser.parse(JSONParser.java:148)
at jdk.nashorn.internal.runtime.JSONFunctions.parse(JSONFunctions.java:80)
at jdk.nashorn.internal.objects.NativeJSON.parse(NativeJSON.java:105)
at java.lang.invoke.LambdaForm$DMH/1880587981.invokeStatic_L3_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$BMH/1095293768.reinvoke(LambdaForm$BMH)
at java.lang.invoke.LambdaForm$MH/1411892748.exactInvoker(LambdaForm$MH)
at java.lang.invoke.LambdaForm$MH/22805895.linkToCallSite(LambdaForm$MH)
at jdk.nashorn.internal.scripts.Script$5$test.:program(file:catsubcat/test.js:1)
at java.lang.invoke.LambdaForm$DMH/1323165413.invokeStatic_LL_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$MH/653687670.invokeExact_MT(LambdaForm$MH)
at jdk.nashorn.internal.runtime.ScriptFunctionData.invoke(ScriptFunctionData.java:640)
at jdk.nashorn.internal.runtime.ScriptFunction.invoke(ScriptFunction.java:228)
at jdk.nashorn.internal.runtime.ScriptRuntime.apply(ScriptRuntime.java:393)
at jdk.nashorn.internal.runtime.Context.evaluateSource(Context.java:1219)
at jdk.nashorn.internal.runtime.Context.load(Context.java:841)
at jdk.nashorn.internal.objects.Global.load(Global.java:1536)
at java.lang.invoke.LambdaForm$DMH/1323165413.invokeStatic_LL_L(LambdaForm$DMH)
at java.lang.invoke.LambdaForm$BMH/1413378318.reinvoke(LambdaForm$BMH)
at java.lang.invoke.LambdaForm$reinvoker/40472007.dontInline(LambdaForm$reinvoker)
The problem is just that Nashorn switches to a sparse array representation too late. I filed a bug for this: https://bugs.openjdk.java.net/browse/JDK-8137281
I'm testing with Apache JMeter. I'm simply accessing one page of my company's website and turning up the number of users until it reaches a threshold. The problem is that at around 3000 threads, JMeter doesn't run all of them; looking at the Aggregate Graph, it only runs about 2,536 (this number varies but is always around there).
The partial run comes with the following exception in the logs:
01:16 ERROR - jmeter.JMeter: Uncaught exception:
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Unknown Source)
at org.apache.jmeter.threads.ThreadGroup.start(ThreadGroup.java:293)
at org.apache.jmeter.engine.StandardJMeterEngine.startThreadGroup(StandardJMeterEngine.java:476)
at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:395)
at java.lang.Thread.run(Unknown Source)
This behavior is consistent. In addition, one of the times JMeter crashed in the middle, outputting a file that said:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32756 bytes for ChunkPool::allocate
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (allocation.cpp:211), pid=10748, tid=11652
#
# JRE version: 6.0_31-b05
# Java VM: Java HotSpot(TM) Client VM (20.6-b01 mixed mode, sharing windows-x86 )
Any ideas?
I tried changing the heap size in jmeter.bat, but that didn't seem to help at all.
The JVM is simply not capable of running so many threads. And even if it were, JMeter would consume a lot of CPU purely switching contexts. In other words, above some point you are no longer benchmarking your web application but the client machine hosting JMeter.
You have a few choices:
experiment with JVM options, e.g. decrease default -Xss512K to something smaller
run JMeter in a cluster
use tools taking radically different approach like Gatling
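To put numbers on the stack-size point above: each thread reserves its own native stack outside the Java heap, so the total scales directly with the thread count times -Xss. A back-of-the-envelope sketch (the stack sizes are illustrative; the actual default varies by platform and JVM):

```java
public class ThreadStackEstimate {
    public static void main(String[] args) {
        int threads = 3000;
        long xssDefault = 512 * 1024; // assume -Xss512k, as suggested above
        long xssSmall = 256 * 1024;   // a reduced -Xss256k

        // Reserved stack memory alone, ignoring per-thread JVM bookkeeping:
        System.out.println("3000 threads @ 512k: "
                + (threads * xssDefault) / (1024 * 1024) + " MB"); // 1500 MB
        System.out.println("3000 threads @ 256k: "
                + (threads * xssSmall) / (1024 * 1024) + " MB");   // 750 MB
    }
}
```

On a 32-bit JVM, 1.5 GB of stacks on top of the heap easily exhausts the process address space, which is why "unable to create new native thread" appears long before the heap itself fills up.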
I had a similar issue and increased the heap size in jmeter.bat to 1024M and that fixed the issue.
set HEAP=-Xms1024m -Xmx1024m
For the JVM, if you read the crash log it suggests some solutions, among which are:
switch to a 64-bit JVM (> 6u25)
with this you will be able to allocate more heap (-Xmx); ensure you have the RAM for it
reduce the thread stack size with:
-Xss256k
Then for JMeter, follow best-practices:
http://jmeter.apache.org/usermanual/best-practices.html
http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
Finally, ensure you use the latest JMeter version.
Use a Linux OS preferably.
Tune the TCP stack and OS limits.
Success will depend on your machine's power (CPU and memory) and your test plan.
If this is not enough (for 3000 threads it should be OK), you may need to use distributed testing.
Increasing the heap size in jmeter.bat works fine
set HEAP=-Xms1024m -Xmx1024m
OR
you can do something like below if you are using jmeter.sh:
JVM_ARGS="-Xms512m -Xmx1024m" jmeter.sh etc.
I ran into this same problem and the only solution that helped me is: https://stackoverflow.com/a/26190804/5796780
proper 100k threads on linux:
ulimit -s 256
ulimit -i 120000
echo 120000 > /proc/sys/kernel/threads-max
echo 600000 > /proc/sys/vm/max_map_count
echo 200000 > /proc/sys/kernel/pid_max
If you don't have root access:
echo 200000 | sudo dd of=/proc/sys/kernel/pid_max
After increasing the Xms and Xmx heap sizes, I had to make Java run in 64-bit mode. In jmeter.bat:
set JM_LAUNCH=java.exe -d64
Obviously, you need to be running a 64-bit OS and have 64-bit Java installed (see https://www.java.com/en/download/manual.jsp).
Hi, I know that the error I'm about to show can't be fixed through code. I just want to know why and how it is caused. I also understand it is due to the JVM trying to access the address space of another program.
A fatal error has been detected by the Java Runtime Environment:
EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x6dcd422a, pid=4024, tid=3900
JRE version: 6.0_14-b08
Java VM: Java HotSpot(TM) Server VM (14.0-b16 mixed mode windows-x86 )
Problematic frame:
V [jvm.dll+0x17422a]
An error report file with more information is saved as:
C:\PServer\server\bin\hs_err_pid4024.log
If you would like to submit a bug report, please visit:
http://java.sun.com/webapps/bugreport/crash.jsp
The book "Modern Operating Systems" by Tanenbaum, which is available online here:
http://lovingod.host.sk/tanenbaum/Unix-Linux-Windows.html
covers the topic in depth (Chapter 4 is on memory management and Section 4.8 is on memory segmentation). The short version:
It would be very bad if several programs on your PC could access each other's memory. Actually, even within one program, even in one thread, you have multiple areas of memory that must not influence one another. Usually a process has at least one memory area called the "stack" and one called the "heap" (commonly every process has one heap plus one stack per thread; there may be more segments, but that is implementation dependent and does not matter for this explanation).

On the stack, things like your function's arguments and your local variables are saved. On the heap, variables are saved whose size and lifetime cannot be determined by the compiler at compile time (in Java, that is everything you use the new operator on). Example:
public void bar(String hi, int myInt)
{
    // "foo" is a reference on the stack; the String object it points to is on the heap
    String foo = new String("foobar");
}
In this example there are two String objects (referenced by "foo" and "hi"). Both objects live on the heap (you know this because at some point both Strings were allocated using new). And in this example three values are on the stack: the values of "myInt", "hi", and "foo". It is important to realize that "hi" and "foo" don't really contain Strings directly; instead they contain a reference that tells them where on the heap the String can be found. (This is not as easy to explain using Java because Java abstracts a lot. In C, "hi" and "foo" would be pointers, which are actually just integers representing the address in the heap where the actual value is stored.)
You might ask yourself why there are a stack and a heap at all. Why not put everything in the same place? Explaining that unfortunately exceeds the scope of this answer. Read the book I linked ;-). The short version is that stack and heap are managed differently, and the separation is done for reasons of optimization.
The sizes of the stack and heap are limited. (On Linux, execute ulimit -a and you'll get a list including "data seg size" (the heap) and "stack size" (yeah... the stack :-)).)
The stack is something that just grows, like an array that gets bigger and bigger as you append more and more data. Eventually you run out of space, and then you may end up writing into a memory area that no longer belongs to you. That would be extremely bad, so the operating system notices and stops the program. On Linux you get a "Segmentation fault"; on Windows you get an "Access violation".
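In Java, running off the end of a thread's stack is normally detected by the JVM itself and surfaced as a StackOverflowError rather than a raw segmentation fault. A small sketch to observe this safely:

```java
public class StackDemo {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse(); // unbounded recursion eventually exhausts this thread's stack
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // The JVM caught the overflow for us instead of letting the
            // process write outside its stack segment and crash natively.
            System.out.println("caught StackOverflowError at depth " + depth);
        }
    }
}
```

The exact depth reached depends on the -Xss setting and the frame size, so the printed number will vary from run to run and machine to machine.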
In other languages like C, you need to manage your memory manually. A tiny error can easily cause you to accidentally write into some space that does not belong to you. In Java you have "automatic memory management", which means the JVM does all this for you. You don't need to care, and that takes a load off your shoulders as a developer (it usually does; I bet there are people out there who would disagree about the "load" part ;-)). This means that it /should/ be impossible to produce segmentation faults with Java. Unfortunately, the JVM is not perfect. Sometimes it has bugs and screws up. And then you get what you got.