Can I control the max memory consumed by chronicle queue?
I encounter the following exception in a 32-bit Java process with the -Xmx1200m parameter:
java.nio.BufferOverflowException
at net.openhft.chronicle.bytes.MappedBytes.acquireNextByteStore(MappedBytes.java:147)
at net.openhft.chronicle.bytes.MappedBytes.writeCheckOffset(MappedBytes.java:135)
at net.openhft.chronicle.bytes.AbstractBytes.compareAndSwapInt(AbstractBytes.java:165)
at net.openhft.chronicle.wire.AbstractWire.writeFirstHeader(AbstractWire.java:402)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueue$StoreSupplier.acquire(SingleChronicleQueue.java:514)
at net.openhft.chronicle.queue.impl.WireStorePool.acquire(WireStorePool.java:65)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueue.storeForCycle(SingleChronicleQueue.java:262)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.cycle(SingleChronicleQueueExcerpts.java:1249)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.moveToIndex(SingleChronicleQueueExcerpts.java:1094)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.moveToIndexResult(SingleChronicleQueueExcerpts.java:1080)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.moveToIndex(SingleChronicleQueueExcerpts.java:1073)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.next(SingleChronicleQueueExcerpts.java:828)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreTailer.readingDocument(SingleChronicleQueueExcerpts.java:808)
at net.openhft.chronicle.queue.ExcerptTailer.readingDocument(ExcerptTailer.java:41)
at net.openhft.chronicle.wire.MarshallableIn.readBytes(MarshallableIn.java:38)
at com.pingway.platform.tb.InboundQueue.pop(InboundQueue.java:74)
at com.pingway.platform.tb.RecordUpdateExecutor$1.run(RecordUpdateExecutor.java:23)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: java.lang.OutOfMemoryError: Map failed
at net.openhft.chronicle.core.OS.asAnIOException(OS.java:306)
at net.openhft.chronicle.core.OS.map(OS.java:282)
at net.openhft.chronicle.bytes.MappedFile.acquireByteStore(MappedFile.java:186)
at net.openhft.chronicle.bytes.MappedFile.acquireByteStore(MappedFile.java:141)
at net.openhft.chronicle.bytes.MappedBytes.acquireNextByteStore(MappedBytes.java:143)
... 23 more
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)
at sun.reflect.GeneratedMethodAccessor131.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at net.openhft.chronicle.core.OS.map0(OS.java:290)
at net.openhft.chronicle.core.OS.map(OS.java:278)
... 26 more
If I decrease -Xmx to 768m, the exception disappears.
When running in a 32-bit process, you do need to leave some address space for memory mappings. A heap of 1.2 GB is close to the maximum heap size you can have on Windows XP (and thus on the 32-bit emulation layer of later versions of Windows).
What you can do is reduce the block/chunk size from the default of 64 MB to, say, 1 MB. This will reduce the size of each memory mapping; a sketch follows.
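A minimal sketch, assuming Chronicle Queue 4.x; the queue directory and the message are hypothetical:
import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

public class SmallBlockQueue {
    public static void main(String[] args) {
        // Shrink each memory-mapped chunk from the 64 MB default to 1 MB so
        // mappings claim far less of a 32-bit process's address space.
        try (ChronicleQueue queue = SingleChronicleQueueBuilder
                .binary("queue-dir")
                .blockSize(1 << 20)
                .build()) {
            queue.acquireAppender().writeText("hello");
        }
    }
}
Note that a smaller block size also caps the largest message you can write, so pick a value comfortably above your biggest excerpt.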
However, a much better/simpler/faster solution is to use a 64-bit JVM. This will give you about 100,000x more virtual memory in practice.
If you can't use a 64-bit JVM just yet, you can use a Java client connection to Chronicle Engine. This would allow you to run a server with Chronicle Queue running 64-bit, and have a 32-bit client access that data.
Related
I'm repeatedly converting an object to a String using Jackson's writeValueAsString method, a couple of thousand times in a row. Each JSON is around 1 KB. But after a while, my program exits with an OutOfMemoryError. Following is the stack trace:
Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87)
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50)
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)
Caused by: java.lang.OutOfMemoryError: Java heap space
at com.fasterxml.jackson.core.util.TextBuffer.carr(TextBuffer.java:864)
at com.fasterxml.jackson.core.util.TextBuffer.expand(TextBuffer.java:825)
at com.fasterxml.jackson.core.util.TextBuffer.append(TextBuffer.java:590)
at com.fasterxml.jackson.core.io.SegmentedStringWriter.write(SegmentedStringWriter.java:58)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator._writeString2(WriterBasedJsonGenerator.java:1013)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator._writeString(WriterBasedJsonGenerator.java:982)
at com.fasterxml.jackson.core.json.WriterBasedJsonGenerator.writeString(WriterBasedJsonGenerator.java:377)
at com.fasterxml.jackson.databind.ser.std.StringSerializer.serialize(StringSerializer.java:41)
at com.fasterxml.jackson.databind.ser.std.MapSerializer.serializeFields(MapSerializer.java:718)
at com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(MapSerializer.java:639)
at com.fasterxml.jackson.databind.ser.std.MapSerializer.serialize(MapSerializer.java:33)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(DefaultSerializerProvider.java:480)
at com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:319)
at com.fasterxml.jackson.databind.ObjectMapper._configAndWriteValue(ObjectMapper.java:3893)
at com.fasterxml.jackson.databind.ObjectMapper.writeValueAsString(ObjectMapper.java:3207)
at com.ad2pro.neonmigration.neondatamigration.utils.NeonMetricsProducerUtil.produceImpressions(NeonMetricsProducerUtil.java:121)
at com.ad2pro.neonmigration.neondatamigration.scheduler.NeonScheduler.gerMetrics(NeonScheduler.java:100)
at com.ad2pro.neonmigration.neondatamigration.NeonDataMigrationApplication.main(NeonDataMigrationApplication.java:18)
... 8 more
java.lang.OutOfMemoryError: Java heap space
at javax.crypto.CipherSpi.bufferCrypt(CipherSpi.java:814)
at javax.crypto.CipherSpi.engineUpdate(CipherSpi.java:555)
at javax.crypto.Cipher.update(Cipher.java:2002)
at sun.security.ssl.CipherBox.decrypt(CipherBox.java:544)
at sun.security.ssl.EngineInputRecord.decrypt(EngineInputRecord.java:200)
at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:974)
at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:907)
at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781)
at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624)
at com.amazon.support.channels.TLSSocketChannel.read(Unknown Source)
at com.amazon.jdbc.communications.InboundMessagesThread.run(Unknown Source)
There is 1 GB of free memory before my program starts. Is ObjectMapper holding onto so much memory that even 1 GB is not sufficient to convert objects to Strings? Any help is appreciated.
You could try setting the JVM heap by passing the -Xmx and -Xms parameters. For example, to tell the JVM to take a maximum of 512 MB of heap, pass -Xmx512m.
The default max heap size can be as small as 64 MB on older JVMs; maybe that's not enough for your program.
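For example, a hedged launch line (the jar name is hypothetical):
java -Xms512m -Xmx512m -jar neon-data-migration.jar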
There are multiple aspects to this issue:
As mentioned in the answer above, increase the JVM heap to 512 MB.
Check that the ObjectMapper instance is not created every time the conversion is called; reuse a single instance (see the sketch after this list).
writeValueAsString should not produce an unbounded string.
Use VisualVM to check what exactly is causing the heap growth.
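A minimal sketch of the reuse advice above (class and method names are illustrative):
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonUtil {
    // ObjectMapper is thread-safe once configured; creating it once avoids
    // re-allocating its internal buffers on every conversion.
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static String toJson(Object value) throws JsonProcessingException {
        return MAPPER.writeValueAsString(value);
    }
}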
While running the program on my local system, I get the error below. My RAM size is 3 GB, and I need a solution:
Exception in thread "main" java.lang.IllegalArgumentException: System memory 259522560 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.
at org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory(UnifiedMemoryManager.scala:216)
at org.apache.spark.memory.UnifiedMemoryManager$.apply(UnifiedMemoryManager.scala:198)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:330)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:174)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:257)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:432)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)
at SparkCore.cartesianTransformation$.main(cartesianTransformation.scala:11)
at SparkCore.cartesianTransformation.main(cartesianTransformation.scala)
It seems your Spark driver is running with too little memory; try increasing the driver memory size.
You can use --driver-memory 4g to provide the memory size to the driver.
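If you prefer setting it in code, here is a hedged sketch; note that in local mode spark.driver.memory set this way may be ignored because the driver JVM has already started, so --driver-memory or spark-defaults.conf is the reliable route. The master setting is hypothetical; the app name is taken from the stack trace:
import org.apache.spark.sql.SparkSession;

public class CartesianTransformation {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("cartesianTransformation")
                .master("local[*]")
                .config("spark.driver.memory", "4g")
                .getOrCreate();
        // ... job logic ...
        spark.stop();
    }
}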
Hope this helps!
On my Hortonworks HDP 2.6 cluster, I'm using the kite-dataset tool to import data:
./kite-dataset -v csv-import ml-100k/u.data ratings
I'm getting this error:
java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(MapTask.java:986)
at org.apache.hadoop.mapred.MapTask.createSortingCollector(MapTask.java:402)
at org.apache.hadoop.mapred.MapTask.access$100(MapTask.java:81)
at org.apache.hadoop.mapred.MapTask$NewOutputCollector.<init>(MapTask.java:698)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:770)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
My cluster nodes have 16 GB of RAM, some of which is listed as available.
What can I do to avoid this error?
My first impulse would be to ask what your startup parameters are. Typically, when you run MapReduce and experience an out-of-memory error, you would use something like the following as your startup params:
-Dmapred.map.child.java.opts=-Xmx1G -Dmapred.reduce.child.java.opts=-Xmx1G
The key here is that these two amounts are cumulative, so the amounts you specify, added together, should not come close to exceeding the memory available on your system after you start MapReduce.
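A hedged sketch of the same per-task heap settings expressed through the newer mapreduce.* property names (the mapred.*.child.java.opts names above are the older equivalents; the job name is hypothetical):
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class ImportJobSetup {
    public static Job configuredJob() throws IOException {
        Configuration conf = new Configuration();
        // Give each map and reduce task JVM a 1 GB heap.
        conf.set("mapreduce.map.java.opts", "-Xmx1g");
        conf.set("mapreduce.reduce.java.opts", "-Xmx1g");
        return Job.getInstance(conf, "csv-import");
    }
}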
I am asking a question about an Out of Memory error in IBM WebSphere 8.5.5.7. We have an application, primarily a Spring RESTful web services application, deployed in IBM WAS 8.5.5.7, that has been getting the Out of Memory error below for the last 5 days:
[2/3/16 13:12:51:651 EST] 000000ab BBFactoryImpl E CWOBB9999E: Something unexpected happened; the data (if any) is <null> and the exception (if any) is java.lang.OutOfMemoryError: Java heap space
at com.ibm.oti.vm.VM.getClassNameImpl(Native Method)
at com.ibm.oti.vm.AbstractClassLoader.getPackageName(AbstractClassLoader.java:384)
at com.ibm.oti.vm.BootstrapClassLoader.loadClass(BootstrapClassLoader.java:65)
at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:691)
at java.lang.ClassLoader.loadClass(ClassLoader.java:680)
at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:693)
at java.lang.ClassLoader.loadClass(ClassLoader.java:680)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:358)
at java.lang.ClassLoader.loadClass(ClassLoader.java:663)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:502)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
at org.eclipse.osgi.internal.loader.buddy.RegisteredPolicy.loadClass(RegisteredPolicy.java:79)
at org.eclipse.osgi.internal.loader.buddy.PolicyHandler.doBuddyClassLoading(PolicyHandler.java:135)
at org.eclipse.osgi.internal.loader.BundleLoader.findClassInternal(BundleLoader.java:494)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:422)
at org.eclipse.osgi.internal.loader.BundleLoader.findClass(BundleLoader.java:410)
at org.eclipse.osgi.internal.baseadaptor.DefaultClassLoader.loadClass(DefaultClassLoader.java:107)
at java.lang.ClassLoader.loadClassHelper(ClassLoader.java:693)
at java.lang.ClassLoader.loadClass(ClassLoader.java:680)
at java.lang.ClassLoader.loadClass(ClassLoader.java:663)
at sun.reflect.DelegatingClassLoader.loadClass(DelegatingClassLoader.java:51)
at sun.misc.Unsafe.defineClass(Native Method)
at sun.reflect.ClassDefiner.defineClass(ClassDefiner.java:57)
at sun.reflect.MethodAccessorGenerator$1.run(MethodAccessorGenerator.java:437)
at java.security.AccessController.doPrivileged(AccessController.java:363)
at sun.reflect.MethodAccessorGenerator.generate(MethodAccessorGenerator.java:433)
at sun.reflect.MethodAccessorGenerator.generateSerializationConstructor(MethodAccessorGenerator.java:149)
at sun.reflect.ReflectionFactory.newConstructorForSerialization(ReflectionFactory.java:316)
at java.io.ObjectStreamClass.getSerializableConstructor(ObjectStreamClass.java:1409)
at java.io.ObjectStreamClass.access$1500(ObjectStreamClass.java:63)
at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:515)
at java.security.AccessController.doPrivileged(AccessController.java:363)
at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:491)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:338)
at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:625)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1619)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1619)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1514)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1768)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1347)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:364)
at com.ibm.son.util.Util.deserialize(Util.java:434)
at com.ibm.son.mesh.AbstractTCPImpl.procReceivedMessage(AbstractTCPImpl.java:478)
at com.ibm.son.mesh.CfwTCPImpl.completedRead(CfwTCPImpl.java:1248)
at com.ibm.son.mesh.CfwTCPImpl.complete(CfwTCPImpl.java:1061)
at com.ibm.ws.ssl.channel.impl.SSLReadServiceContext$SSLReadCompletedCallback.complete(SSLReadServiceContext.java:1818)
at com.ibm.ws.tcp.channel.impl.AioReadCompletionListener.futureCompleted(AioReadCompletionListener.java:175)
at com.ibm.io.async.AbstractAsyncFuture.invokeCallback(AbstractAsyncFuture.java:217)
at com.ibm.io.async.AsyncChannelFuture.fireCompletionActions(AsyncChannelFuture.java:161)
at com.ibm.io.async.AsyncFuture.completed(AsyncFuture.java:138)
at com.ibm.io.async.ResultHandler.complete(ResultHandler.java:204)
at com.ibm.io.async.ResultHandler.runEventProces
Analyzing the heap dump with Introscope and Heap Analyzer, it is observed that, consistently, the lion's share of the memory (>60%) is consumed by com/ibm/xml/xlxp2/scan/util/SimpleDataBufferFactory, used by the IBM StAX parser shipped with WAS.
The Introscope analysis shows a sudden spike in thread count and memory usage, and a gradual increase in connection count, when the OOM happened.
Looking into the issue of com.ibm.xml.xlxp2.scan.util.DataBuffer taking so much heap, it can be seen that IBM has been fixing Out of Memory issues for classes belonging to com.ibm.xml.xlxp.scan.util/com.ibm.xml.xlxp2.scan.util in WAS 6, WAS 7 and WAS 8 servers:
http://www-01.ibm.com/support/docview.wss?uid=swg1PM39346
http://www-01.ibm.com/support/docview.wss?uid=swg1PM08333
Can anyone share any idea whether this is a known issue with IBM WAS 8.5.5.7? I could not get a solid answer.
Many of the out of memory problems concerning com.ibm.xml.xlxp2.scan.util.DataBuffer were addressed with system properties that users can configure to reduce the memory used by the IBM StAX parser.
The following system properties can be helpful in resolving out of memory issues with the IBM StAX parser. Each of them should be available in WebSphere Application Server v8.5.5.7.
com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength
System property which controls the size of the StAX parser's data buffers. The default value is 65536.
Setting this property to a smaller value such as 2048 may reduce the memory usage if the 64KB buffers were only being partially filled by the InputStream when they are in use. The buffers are cached within the StAX parser (inside com/ibm/xml/xlxp2/scan/util/SimpleDataBufferFactory) so a reduction in memory usage there would reduce the overall memory linked to each StAX parser object.
com.ibm.xml.xlxp2.api.util.Pool.STRONG_REFERENCE_POOL_MAXIMUM_SIZE
System property (introduced by APAR PM42465) which limits the number of XMLStreamReaders (and XMLStreamWriters) that will be cached using strong references. Follow the instructions at the link provided on how to set this property.
com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLoadFactor
The value of this system property is a non-negative integer which determines the minimum number of bytes (as a percentage) that will be loaded into each buffer. The percentage is calculated with the formula 1 / (2^n), where n is the value of the property.
When the system property is not set, its default value is 3. Setting the property to a lower value than the default can improve memory usage but may also reduce throughput.
com.ibm.xml.xlxp2.scan.util.SymbolMap.maxSymbolCount
System property (introduced by APAR PI08415). The value of this property is a non-negative integer which determines the maximum size of the StAX parser's symbol map. Follow the instructions at the link provided on how to set this property.
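A hedged sketch of setting two of these properties programmatically. In WebSphere they are normally set as generic JVM arguments (-D...) in the admin console, and System.setProperty only takes effect if it runs before the StAX parser classes are first loaded; the values below are illustrative, not recommendations:
public class StaxTuning {
    public static void apply() {
        // Shrink the parser's data buffers from the 64 KB (65536-byte) default.
        System.setProperty(
            "com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLength",
            "2048");
        // Make the documented default buffer load factor of 3 explicit.
        System.setProperty(
            "com.ibm.xml.xlxp2.api.util.encoding.DataSourceFactory.bufferLoadFactor",
            "3");
    }
}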
I'm testing using Apache's JMeter. I'm simply accessing one page of my company's website and turning up the number of users until it reaches a threshold. The problem is that when I get to around 3000 threads, JMeter doesn't run all of them: looking at the Aggregate Graph, it only runs about 2,536 (this number varies but is always around there).
The partial run comes with the following exception in the logs:
01:16 ERROR - jmeter.JMeter: Uncaught exception:
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Unknown Source)
at org.apache.jmeter.threads.ThreadGroup.start(ThreadGroup.java:293)
at org.apache.jmeter.engine.StandardJMeterEngine.startThreadGroup(StandardJMeterEngine.java:476)
at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:395)
at java.lang.Thread.run(Unknown Source)
This behavior is consistent. In addition, one of the times JMeter crashed mid-run, outputting a file that said:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 32756 bytes for ChunkPool::allocate
# Possible reasons:
# The system is out of physical RAM or swap space
# In 32 bit mode, the process size limit was hit
# Possible solutions:
# Reduce memory load on the system
# Increase physical memory or swap space
# Check if swap backing store is full
# Use 64 bit Java on a 64 bit OS
# Decrease Java heap size (-Xmx/-Xms)
# Decrease number of Java threads
# Decrease Java thread stack sizes (-Xss)
# Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
# Out of Memory Error (allocation.cpp:211), pid=10748, tid=11652
#
# JRE version: 6.0_31-b05
# Java VM: Java HotSpot(TM) Client VM (20.6-b01 mixed mode, sharing windows-x86 )
Any ideas?
I tried changing the heap size in jmeter.bat, but that didn't seem to help at all.
The JVM is simply not capable of running so many threads. And even if it were, JMeter would consume a lot of CPU resources purely on context switching. In other words, above some point you are no longer benchmarking your web application but the client computer hosting JMeter.
You have a few choices:
experiment with JVM options, e.g. decrease the default -Xss512K to something smaller
run JMeter in a cluster
use tools taking radically different approach like Gatling
I had a similar issue and increased the heap size in jmeter.bat to 1024M and that fixed the issue.
set HEAP=-Xms1024m -Xmx1024m
For the JVM, if you read the crash log above, it gives you some solutions, among which are:
switch to a 64-bit JVM (> 6u25)
with this you will be able to allocate more heap (-Xmx); ensure you have that much RAM
reduce Xss with:
-Xss256k
Then, for JMeter, follow the best practices:
http://jmeter.apache.org/usermanual/best-practices.html
http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
Finally, ensure you use the latest JMeter version.
Preferably, use a Linux OS.
Tune the TCP stack and OS limits.
Success will depend on your machine's power (CPU and memory) and your test plan.
If this is not enough (it should be OK for 3000 threads), you may need to use distributed testing.
Increasing the heap size in jmeter.bat works fine:
set HEAP=-Xms1024m -Xmx1024m
OR
you can do something like the following if you are using jmeter.sh:
JVM_ARGS="-Xms512m -Xmx1024m" jmeter.sh etc.
I ran into this same problem and the only solution that helped me is: https://stackoverflow.com/a/26190804/5796780
Proper 100k threads on Linux:
ulimit -s 256
ulimit -i 120000
echo 120000 > /proc/sys/kernel/threads-max
echo 600000 > /proc/sys/vm/max_map_count
echo 200000 > /proc/sys/kernel/pid_max
If you don't have root access:
echo 200000 | sudo dd of=/proc/sys/kernel/pid_max
After increasing the Xms and Xmx heap sizes, I had to make my Java run in 64-bit mode. In jmeter.bat:
set JM_LAUNCH=java.exe -d64
Obviously, you need to run a 64-bit OS and have 64-bit Java installed (see https://www.java.com/en/download/manual.jsp).