ObjectMapper writeValueAsString throwing OOM Exception - spring-boot

I'm repeatedly converting an object to a String using Jackson's writeValueAsString method, a couple of thousand times in a loop. Each JSON payload is around 1 KB. After a while, my program exits with an OOM error. Here is the stack trace:
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
Caused by: java.lang.OutOfMemoryError: Java heap space
at com.fasterxml.jackson.core.util.TextBuffer.carr(TextBuffer.java:864)
at com.fasterxml.jackson.core.util.TextBuffer.expand(TextBuffer.java:825)
at com.fasterxml.jackson.core.util.TextBuffer.append(TextBuffer.java:590)
at com.fasterxml.jackson.core.io.SegmentedStringWriter.write(SegmentedStringWriter.java:58)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator._writeString2(WriterBasedJsonGenerator.java:1013)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator._writeString(WriterBasedJsonGenerator.java:982)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator.writeString(WriterBasedJsonGenerator.java:377)
at com.fasterxml.jackson.databind.ser.std.StringSerializer.serialize(StringSerializer.java:41)
at com.fasterxml.jackson.databind.ser.std.MapSerializer.serializeFields(MapSerializer.java:718)
at com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(MapSerializer.java:639)
at com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(MapSerializer.java:33)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(DefaultSerializerProvider.java:480)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:319)
at com.fasterxml.jackson.databind.ObjectMapper._configAndWriteValue(ObjectMapper.java:3893)
at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(ObjectMapper.java:3207)
at com.ad2pro.neonmigration.neondatamigration.utils.NeonMetricsProducerUtil.produceImpressions(NeonMetricsProducerUtil.java:121)
at com.ad2pro.neonmigration.neondatamigration.scheduler.NeonScheduler.gerMetrics(NeonScheduler.java:100)
at com.ad2pro.neonmigration.neondatamigration.NeonDataMigrationApplication.main(NeonDataMigrationApplication.java:18)
... 8 more
java.lang.OutOfMemoryError: Java heap space
at javax.crypto.CipherSpi.bufferCrypt(CipherSpi.java:814)
at javax.crypto.CipherSpi.engineUpdate(CipherSpi.java:555)
at javax.crypto.Cipher.update(Cipher.java:2002)
at sun.security.ssl.CipherBox.decrypt(CipherBox.java:544)
at sun.security.ssl.EngineInputRecord.decrypt(EngineInputRecord.java:200)
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:974)
at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:907)
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
at com.amazon.support.channels.TLSSocketChannel.read(Unknown Source)
at com.amazon.jdbc.communications.InboundMessagesThread.run(Unknown Source)
There is 1 GB of free memory before my program starts. Is ObjectMapper holding onto so much memory that even 1 GB is not sufficient to convert objects to String? Any help is appreciated.

You could try setting the JVM heap size with the -Xms and -Xmx parameters. For example, -Xmx512m tells the JVM to use a maximum of 512 MB of heap.
The default maximum heap depends on the JVM version and the machine (older JVMs defaulted to as little as 64 MB; modern ones typically use a fraction of physical RAM), and it may simply not be enough for your program.
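To see what maximum heap the JVM actually ended up with, a quick stdlib-only check can help (the class name here is just for illustration):

```java
public class HeapInfo {
    public static void main(String[] args) {
        // maxMemory() reflects -Xmx (or the JVM default if -Xmx was not set)
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        // totalMemory() is the heap currently committed by the JVM
        long totalMb = Runtime.getRuntime().totalMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB, committed: " + totalMb + " MB");
    }
}
```

Running this with and without your -Xmx setting confirms whether the flag is actually being picked up.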

There are multiple aspects to this issue:
1. As mentioned in the answer above, increase the JVM heap (for example to 512 MB).
2. Make sure a new ObjectMapper is not created on every call; configure one instance and reuse it.
3. Check that writeValueAsString is not producing an unbounded string (for example via a cyclic object graph).
4. Use VisualVM to check what exactly is causing the heap to grow.
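A minimal sketch of the reuse pattern from point 2, assuming standard Jackson 2.x on a Java 9+ runtime (the JsonUtil class name is hypothetical):

```java
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;

public class JsonUtil {
    // ObjectMapper is thread-safe once configured; create it once and reuse it
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static String toJson(Object value) throws JsonProcessingException {
        return MAPPER.writeValueAsString(value);
    }

    public static void main(String[] args) throws Exception {
        // Thousands of serializations reuse the same mapper and its buffers,
        // instead of allocating a fresh ObjectMapper per call
        for (int i = 0; i < 2000; i++) {
            toJson(Map.of("id", i, "name", "item-" + i));
        }
        System.out.println(toJson(Map.of("ok", true)));
    }
}
```

Creating an ObjectMapper per call is expensive (it rebuilds serializer caches each time), but note that by itself it should be garbage-collected; if heap keeps growing, a heap dump in VisualVM will show whether mappers, buffers, or your own objects are being retained.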

Related

Spark object runtime error

While running the program on my local system I get the error below. My machine has 3 GB of RAM; I need a solution.
Exception in thread "main" java.lang.IllegalArgumentException: System memory 259522560 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.
at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:216)
at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:198)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:330)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:174)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:257)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:432)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
at SparkCore.cartesianTransformation$.main(cartesianTransformation.scala:11)
at SparkCore.cartesianTransformation.main(cartesianTransformation.scala)
It seems your Spark driver is running with too little memory; try increasing the driver memory.
You can pass --driver-memory 4g to spark-submit to set the driver's memory size.
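The failing check in UnifiedMemoryManager compares the driver JVM's max heap against the reserved minimum stated in the error (471859200 bytes, i.e. 450 MB). A stdlib-only sketch of the same comparison, run inside the driver, shows whether your heap clears that bar:

```java
public class DriverHeapCheck {
    public static void main(String[] args) {
        // Minimum system memory from the error message above: 471859200 bytes (450 MB)
        long required = 471_859_200L;
        long max = Runtime.getRuntime().maxMemory();
        System.out.println("Driver max heap: " + max + " bytes; required: " + required
                + "; sufficient: " + (max >= required));
    }
}
```

If "sufficient" prints false, the --driver-memory setting (or -Xmx in local mode) is not reaching the driver JVM.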
Hope this helps!

Can I control the max memory consumed by chronicle queue?

I encounter the following exception in a 32-bit Java process run with -Xmx1200m:
java.nio.BufferOverflowException
at net.openhft.chronicle.bytes.MappedBytes.acquireNextByteStore(MappedBytes.java:147)
at net.openhft.chronicle.bytes.MappedBytes.writeCheckOffset(MappedBytes.java:135)
at net.openhft.chronicle.bytes.AbstractBytes.compareAndSwapInt(AbstractBytes.java:165)
at net.openhft.chronicle.wire.AbstractWire.writeFirstHeader(AbstractWire.java:402)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueue$StoreSupplier.acquire(SingleChronicleQueue.java:514)
at net.openhft.chronicle.queue.impl.WireStorePool.acquire(WireStorePool.java:65)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.storeForCycle(SingleChronicleQueue.java:262)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.cycle(SingleChronicleQueueExcerpts.java:1249)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.moveToIndex(SingleChronicleQueueExcerpts.java:1094)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.moveToIndexResult(SingleChronicleQueueExcerpts.java:1080)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.moveToIndex(SingleChronicleQueueExcerpts.java:1073)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.next(SingleChronicleQueueExcerpts.java:828)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.readingDocument(SingleChronicleQueueExcerpts.java:808)
at net.openhft.chronicle.queue.ExcerptTailer.readingDocument(ExcerptTailer.java:41)
at net.openhft.chronicle.wire.MarshallableIn.readBytes(MarshallableIn.java:38)
at com.pingway.platform.tb.InboundQueue.pop(InboundQueue.java:74)
at com.pingway.platform.tb.RecordUpdateExecutor$1.run(RecordUpdateExecutor.java:23)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: java.lang.OutOfMemoryError: Map failed
at net.openhft.chronicle.core.OS.asAnIOException(OS.java:306)
at net.openhft.chronicle.core.OS.map(OS.java:282)
at net.openhft.chronicle.bytes.MappedFile.acquireByteStore(MappedFile.java:186)
at net.openhft.chronicle.bytes.MappedFile.acquireByteStore(MappedFile.java:141)
at net.openhft.chronicle.bytes.MappedBytes.acquireNextByteStore(MappedBytes.java:143)
... 23 more
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.reflect.GeneratedMethodAccessor131.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at net.openhft.chronicle.core.OS.map0(OS.java:290)
at net.openhft.chronicle.core.OS.map(OS.java:278)
... 26 more
If I decrease -Xmx to 768m, the exception disappears.
When running a 32-bit process, you do need to leave some address space for memory mappings. A heap of 1.2 GB is close to the maximum heap size you can have on Windows XP (and thus on the 32-bit emulation layer of later Windows versions).
What you can do is reduce the block/chunk size from the default of 64 MB to, say, 1 MB. This reduces the size of each memory mapping.
However, a much better/simpler/faster solution is to use a 64-bit JVM. This gives you roughly 100,000x more virtual memory in practice.
If you can't use a 64-bit JVM just yet, you can use a Java client connection to Chronicle Engine. This lets you run a server with Chronicle Queue on a 64-bit JVM and have a 32-bit client access that data.

how to avoid mapreduce OutOfMemory Java heap space error while using kite-dataset to import data?

On my Hortonworks HDP 2.6 cluster, I'm using the kite-dataset tool to import data:
./kite-dataset -v csv-import ml-100k/u.data ratings
I'm getting this error:
java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:986)
at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:402)
at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:698)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
My cluster nodes have 16 GB of RAM, some of which is listed as available.
What can I do to avoid this error?
My first impulse would be to ask what your startup parameters are. Typically, when you run MapReduce and hit an out-of-memory error, you would use something like the following startup parameters:
-Dmapred.map.child.java.opts=-Xmx1G -Dmapred.reduce.child.java.opts=-Xmx1G
The key here is that these two amounts are cumulative, so the amounts you specify, added together, should not come close to exceeding the memory available on your system after you start MapReduce.

Out Of Memory in IBM Websphere 8.5.5.7

I have a question about Out of Memory errors in IBM WebSphere 8.5.5.7. We have an application, primarily Spring RESTful web services, deployed on IBM WAS 8.5.5.7. We have been getting the Out of Memory error below for the last 5 days:
[2/3/16 13:12:51:651 EST] 000000ab BBFactoryImpl E CWOBB9999E: Something unexpected happened; the data (if any) is <null> and the exception (if any) is java.lang.OutOfMemoryError: Java heap space
at com.ibm.oti.vm.VM.getClassNameImpl(Native Method)
at com.ibm.oti.vm.AbstractClassLoader.getPackageName(AbstractClassLoader.java:384)
at com.ibm.oti.vm.BootstrapClassLoader.loadClass(BootstrapClassLoader.java:65)
at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:691)
at java.lang.ClassLoader.loadClass(ClassLoader.java:680)
at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:693)
at java.lang.ClassLoader.loadClass(ClassLoader.java:680)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:358)
at java.lang.ClassLoader.loadClass(ClassLoader.java:663)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:502)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
at org.eclipse.osgi.internal.loader.buddy.RegisteredPolicy.loadClass(RegisteredPolicy.java:79)
at org.eclipse.osgi.internal.loader.buddy.PolicyHandler.doBuddyClassLoading(PolicyHandler.java:135)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:494)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:693)
at java.lang.ClassLoader.loadClass(ClassLoader.java:680)
at java.lang.ClassLoader.loadClass(ClassLoader.java:663)
at sun.reflect.DelegatingClassLoader.loadClass(DelegatingClassLoader.java:51)
at sun.misc.Unsafe.defineClass(Native Method)
at sun.reflect.ClassDefiner.defineClass(ClassDefiner.java:57)
at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:437)
at java.security.AccessController.doPrivileged(AccessController.java:363)
at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:433)
at sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:149)
at sun.reflect.ReflectionFactory.newConstructorForSerialization(ReflectionFactory.java:316)
at java.io.ObjectStreamClass.getSerializableConstructor(ObjectStreamClass.java:1409)
at java.io.ObjectStreamClass.access$1500(ObjectStreamClass.java:63)
at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:515)
at java.security.AccessController.doPrivileged(AccessController.java:363)
at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:491)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:338)
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:625)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1619)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1619)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1768)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:364)
at com.ibm.son.util.Util.deserialize(Util.java:434)
at com.ibm.son.mesh.AbstractTCPImpl.procReceivedMessage(AbstractTCPImpl.java:478)
at com.ibm.son.mesh.CfwTCPImpl.completedRead(CfwTCPImpl.java:1248)
at com.ibm.son.mesh.CfwTCPImpl.complete(CfwTCPImpl.java:1061)
at com.ibm.ws.ssl.channel.impl.SSLReadServiceContext$SSLReadCompletedCallback.complete(SSLReadServiceContext.java:1818)
at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:175)
at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138)
at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204)
at com.ibm.io.async.ResultHandler.runEventProces
Analyzing the heap dump with Introscope and a heap analyzer, we observed that the lion's share of the memory (>60%) is consistently consumed by com/ibm/xml/xlxp2/scan/util/SimpleDataBufferFactory, used by the IBM StAX parser shipped with WAS.
The Introscope analysis shows a sudden spike in thread count and memory usage, and a gradual increase in connection count, at the time the OOM happened.
Looking into the com.ibm.xml.xlxp2.scan.util.DataBuffer classes taking such a large share of the heap, it appears IBM has fixed Out of Memory issues in com.ibm.xml.xlxp.scan.util/com.ibm.xml.xlxp2.scan.util for WAS 6, WAS 7, and WAS 8 servers:
http://www-01.ibm.com/support/docview.wss?uid=swg1PM39346
http://www-01.ibm.com/support/docview.wss?uid=swg1PM08333
Can anyone share any idea whether this is a known issue with IBM WAS 8.5.5.7? I could not find a definitive answer.
Many of the out of memory problems concerning com.ibm.xml.xlxp2.scan.util.DataBuffer were addressed with system properties that users can configure to reduce the memory used by the IBM StAX parser.
The following system properties can be helpful in resolving out of memory issues with the IBM StAX parser. Each of them should be available in WebSphere Application Server v8.5.5.7.
com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength
System property which controls the size of the StAX parser's data buffers. The default value is 65536.
Setting this property to a smaller value such as 2048 may reduce memory usage if the 64 KB buffers are only partially filled by the InputStream while in use. The buffers are cached within the StAX parser (inside com/ibm/xml/xlxp2/scan/util/SimpleDataBufferFactory), so a reduction in memory usage there reduces the overall memory tied to each StAX parser object.
com.ibm.xml.xlxp2.api.util.Pool.STRONG_REFERENCE_POOL_MAXIMUM_SIZE
System property (introduced by APAR PM42465) which limits the number of XMLStreamReaders (and XMLStreamWriters) that will be cached using strong references. Follow the instructions at the link provided on how to set this property.
com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLoadFactor
The value of this system property is a non-negative integer n which determines the minimum number of bytes (as a percentage) that will be loaded into each buffer. The percentage is calculated as 1 / (2^n).
When the system property is not set, its default value is 3; with n = 3, each buffer is loaded with at least 1/8 (12.5%) of its capacity. Setting the property to a lower value than the default can improve memory usage but may also reduce throughput.
com.ibm.xml.xlxp2.scan.util.SymbolMap.maxSymbolCount
System property (introduced by APAR PI08415). The value of this property is a non-negative integer which determines the maximum size of the StAX parser's symbol map. Follow the instructions at the link provided on how to set this property.
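These are ordinary JVM system properties, typically set as -D arguments in the WebSphere generic JVM arguments. A stdlib-only sketch of setting and verifying them programmatically (the property names come from the answer above; the values are illustrative, not recommendations):

```java
public class StaxTuning {
    public static void main(String[] args) {
        // Must be set before the IBM StAX parser classes are first loaded/used
        System.setProperty(
            "com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength", "2048");
        System.setProperty(
            "com.ibm.xml.xlxp2.api.util.Pool.STRONG_REFERENCE_POOL_MAXIMUM_SIZE", "10");

        // Verify the values took effect
        System.out.println("bufferLength = " + System.getProperty(
            "com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength"));
    }
}
```

In practice, prefer the -D form (e.g. -Dcom.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength=2048) in the server's JVM arguments, since setting properties in application code risks running after the parser has already initialized.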

Squash and Stretch ball animation in Java

I'm new to Java programming and I was trying to find example code for a ball animation in 3D with accurate physics.
This is the source:
http://www.java2s.com/Code/Java/3D/AnimationandInteractionaBouncingBall.htm
And here is the error:
Exception in thread "main" java.lang.UnsatisfiedLinkError: no J3D in java.library.path
at java.lang.ClassLoader.loadLibrary(Unknown Source)
at java.lang.Runtime.loadLibrary0(Unknown Source)
at java.lang.System.loadLibrary(Unknown Source)
at javax.media.j3d.MasterControl$22.run(MasterControl.java:889)
at java.security.AccessController.doPrivileged(Native Method)
at javax.media.j3d.MasterControl.loadLibraries(MasterControl.java:886)
at javax.media.j3d.VirtualUniverse.<clinit>(VirtualUniverse.java:229)
at BouncingBall.<init>(BouncingBall.java:81)
at BouncingBall.main(BouncingBall.java:140)
The native libraries are missing, and I suspect you are using an obsolete version of Java 3D; please follow these instructions.
