I am trying to run a load test for an application. For this I am using JMeter (v4 & v5) on a Linux Red Hat 7.5 VM with 16 GB RAM and 8 vCPUs. The goal is to reach 20k users connected via a micro-service.
However, during the test runs I get the following error on the console:
Uncaught Exception java.lang.OutOfMemoryError: unable to create new native thread.
Here is my JMeter JVM configuration:
cat bin/jmeter | grep HEAP
HEAP (Optional) Java runtime options for memory management
: "${HEAP:="-Xms1g -Xmx4g -XX:MaxMetaspaceSize=256m"}"
Any ideas?
I tried changing the heap size in JMeter, but that didn't seem to help at all.
"unable to create new native thread" is not something you can work around by increasing the JVM heap; you're exceeding the maximum number of threads threshold, which is defined at the OS level.
You will need to amend the nproc value via the ulimit command or by modifying the /etc/security/limits.conf file to look like:
your_user soft nproc 1024
your_user hard nproc 32768
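For example, you can check the current limits and raise the soft limit just for the shell session that launches JMeter before making the permanent change above (the value is illustrative):

ulimit -u          # show the current soft limit on user processes
ulimit -Hu         # show the hard limit
ulimit -u 32768    # raise the soft limit for this session (must not exceed the hard limit)

Note that changes made in /etc/security/limits.conf only take effect after the user logs in again.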
Reference: Unable to create new native thread
If you are still receiving this error after raising the maximum number of processes at the OS level, most probably you will have to go for Distributed Testing.
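If you do go distributed, a minimal sketch (host names and the test plan name are placeholders) is to start jmeter-server on each load generator and drive them from the controller in non-GUI mode:

# on each injector machine
./bin/jmeter-server

# on the controller
./bin/jmeter -n -t your_test.jmx -R injector1,injector2,injector3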
I just downloaded a new Docker image. When I try to run it I get this log on my console:
Setting Active Processor Count to 4
Calculating JVM memory based on 381456K available memory
unable to calculate memory configuration
fixed memory regions require 654597K which is greater than 381456K available for allocation: -XX:MaxDirectMemorySize=10M, -XX:MaxMetaspaceSize=142597K, -XX:ReservedCodeCacheSize=240M, -Xss1M * 250 threads
Please, how can I fix this?
I am assuming that you have multiple services and you are starting them all at the same time. The issue is related to the memory that Docker and Spring Boot use.
Try this:
environment:
  - JAVA_TOOL_OPTIONS=-Xmx128000K
deploy:
  resources:
    limits:
      memory: 800m
You have to provide the memory settings I mentioned above, using the .yaml file syntax.
At startup each service takes a lot of memory, so there is no memory left for the rest of the services, and because of that the other services start failing with the memory-related message.
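For reference, a minimal docker-compose service entry with both of those settings in place could look like this (the service and image names are placeholders; tune the values to the memory your host actually has free):

services:
  my-service:
    image: my-service:latest
    environment:
      - JAVA_TOOL_OPTIONS=-Xmx128000K
    deploy:
      resources:
        limits:
          memory: 800m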
I keep getting this error when running some of my steps:
Container [pid=5784,containerID=container_1482150314878_0019_01_000015] is running beyond physical memory limits. Current usage: 5.6 GB of 5.5 GB physical memory used; 10.2 GB of 27.5 GB virtual memory used. Killing container.
I searched the web and people say to increase the memory limits. This error occurs even after I already increased the limits to the maximum allowed on the instance type I'm using (c4.xlarge). Can I get some assistance with this error and how to solve it?
Also, I don't understand why MapReduce throws this error instead of just swapping, or even working slower but continuing to run...
NOTE: This error started happening after I changed to a custom output compression, so it should be related to that.
Thanks!
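In case it helps anyone hitting the same message: the "physical memory" in that error is the YARN container size, and for MapReduce jobs it is usually controlled by properties like the following in mapred-site.xml (or per job with -D). The values below are only illustrative, and the -Xmx in java.opts should stay well below the container size:

mapreduce.map.memory.mb=5632
mapreduce.map.java.opts=-Xmx4608m
mapreduce.reduce.memory.mb=5632
mapreduce.reduce.java.opts=-Xmx4608m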
I'm unsuccessfully trying to increase the driver memory for my Spark interpreter.
I just set spark.driver.memory in the interpreter settings and everything looks great at first.
But in the Docker container that Zeppelin runs in there is
Zeppelin 0.6.2
Spark 2.0.1
2:06 /usr/lib/jvm/java-8-openjdk-amd64/bin/java -cp /usr/zeppelin/int.....-2.7.2/share/hadoop/tools/lib/* -Xmx1g ..... --class org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer /usr/zeppelin/interpreter/spark/zeppelin-spark_2.11-0.6.2.jar 42651
a max heap setting that kind of breaks everything.
My main issue is that I am trying to run the Latent Dirichlet Allocation from MLlib, and it always runs out of memory and just dies on the driver.
The Docker container has 26 GB of RAM now, so that should be enough.
Zeppelin itself should be fine with its 1 GB of RAM.
But the Spark driver simply needs more.
My executor processes have RAM, but the driver is reported in the UI as:
Executor ID Address Status RDD Blocks Storage Memory Disk Used Cores Active Tasks Failed Tasks Complete Tasks Total Tasks Task Time (GC Time) Input Shuffle Read Shuffle Write Thread Dump
driver 172.17.0.6:40439 Active 0 0.0 B / 404.7 MB 0.0 B 20 0 0 1 1 1.4 s (0 ms) 0.0 B 0.0 B 0.0 B Thread Dump
which is pretty abysmal.
Setting ZEPPELIN_INTP_MEM='-Xms512m -Xmx12g' does not seem to change anything.
I thought zeppelin-env.sh was not loaded correctly, so I passed this variable directly in docker create -e ZE... but that did not change anything.
SPARK_HOME is set and it connects to a standalone Spark cluster. That part works; only the driver runs out of memory.
I even tried starting a local[*] process with 8 GB of driver memory and a 6 GB executor, but I get the same abysmal ~450 MB of driver memory.
The interpreter reports a Java heap out-of-memory error, which halts the LDAModel training.
I just came across this in a search while running into the exact same problem! Hopefully you've found a solution by now, but in case anyone else runs into this issue and is looking for a solution like me, here's the explanation:
The process you're looking at here isn't considered an interpreter process by Zeppelin; it's actually a Spark driver process. This means it gets its options set differently than via the ZEPPELIN_INTP_MEM variable. Add this to your zeppelin-env.sh:
export SPARK_SUBMIT_OPTIONS="--driver-memory 12G"
Restart Zeppelin and you should be all set! (Tested and works with the latest 0.7.3; I assume it also works with earlier versions.)
https://issues.apache.org/jira/browse/ZEPPELIN-1263 fixes this issue. After that you can use any standard Spark configuration, e.g. you can specify the driver memory by setting spark.driver.memory in the Spark interpreter settings.
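For example, in the Zeppelin UI under Interpreter → spark, add or edit the property (the value is just an example) and restart the interpreter:

spark.driver.memory    12g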
I'm running multiple microservices (Spring Cloud + Docker) on small/medium machines on AWS, and recently I found that these machines are often exhausted and need rebooting.
I'm investigating the causes of this exhaustion, thinking of possible memory leaks or misconfigurations in the instance/container.
I tried to limit the amount of memory these containers can use by doing:
docker run -m 500M --memory-swap 500M -d my-service:latest
At this point my service (a standard Spring Cloud service with one single endpoint that writes to a Redis DB using spring-data-redis) didn't even start.
I increased the memory to 760M and it worked, but monitoring it with docker I see the minimum is:
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
cd5f64aa371e 0.18% 606.9 MiB / 762.9 MiB 79.55% 102.4 MB / 99 MB 1.012 MB / 4.153 MB 60
I added some parameters to limit the JVM heap, but they don't seem to reduce it very much:
_JAVA_OPTIONS: "-Xms8m -Xss256k -Xmx512m"
I'm running
Spring Cloud Brixton.M5
Spring Boot 1.3.2
Java 8 (Oracle JVM)
Docker
Spring data Redis 1.7.1
Is there a reason why such a simple service uses so much memory to run? Are there any features I should disable to improve that?
We've investigated a number of things in a similar setup, in terms of the JVM itself. A quick way to save some memory when using Java 8 is to use the following options:
-Xms256m -Xmx512m -XX:-TieredCompilation -Xss256k -XX:+UseG1GC -XX:+UseStringDeduplication
G1GC is well documented; UseStringDeduplication reduces heap usage by de-duplicating the storage of Strings in the heap (we found about 20% in a JSON/XML web-service type environment); and disabling TieredCompilation makes a big difference in CodeCache usage (from 70 MB down to 10 MB), as well as about 10% less Metaspace, at the expense of about 10% longer startup time.
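For example, these flags can be passed the same way the question already passes _JAVA_OPTIONS (a sketch reusing the image name and limits from the question):

docker run -m 760M --memory-swap 760M \
  -e _JAVA_OPTIONS="-Xms256m -Xmx512m -Xss256k -XX:-TieredCompilation -XX:+UseG1GC -XX:+UseStringDeduplication" \
  -d my-service:latest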
According to Spring's "Installing Spring Boot applications" page, you can customize the application startup script via either an environment variable or a configuration file using the JAVA_OPTS variable.
For example: JAVA_OPTS=-Xmx64m
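If the app is installed as a fully executable jar started by the Spring Boot launch script, the same options can also live in a .conf file next to the jar that shares its base name (the path and file name here are hypothetical):

# /opt/apps/my-service.conf
JAVA_OPTS="-Xmx64m -Xss256k"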
WebLogic 10.3 gives an out-of-memory error.
Here is what I have done:
Increased -Xms to 512m
Increased -Xmx to 1024m
Increased the max perm size in setdomainenv.bat
Is there any other way to resolve this issue? I have a 2 GB system.
It is a production machine and the size of the log is around 4 GB. When I analysed the log I found many connection-refused errors.
You'll need to profile your application to find the memory leak. It could be open database connections or other resources not being handled properly.
Just increasing Xms and Xmx won't work beyond a point.
Take a heap dump into an HPROF file and analyse it using the Eclipse Memory Analyzer Tool or VisualVM,
or monitor the JVM live using JConsole.
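For example, you can capture the HPROF file mentioned above with jmap (assuming a HotSpot JVM; the PID and path are placeholders), or have the JVM write one automatically on the next failure:

# on-demand heap dump of the running server JVM
jmap -dump:format=b,file=/tmp/weblogic-heap.hprof <weblogic_pid>

# or add to the server JVM options so a dump is written on the next OutOfMemoryError
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp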