The Spark GC Tuning documentation (https://spark.apache.org/docs/latest/tuning.html#garbage-collection-tuning) suggests increasing the size of the Young generation with the -Xmn option when there are too many minor GCs. I've tried putting it in "spark.executor.extraJavaOptions", but it doesn't work. Where should I configure these JVM settings?
Actually, this is a false alarm. It works with the following configuration:
spark.executor.extraJavaOptions="-Xmn1g -verbose:gc -XX:+UseParallelGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution"
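For reference, here is a minimal sketch of passing the same options through spark-submit instead of a properties file; the GC flags are copied from the configuration above, while the class name and jar path are placeholders:
spark-submit \
  --conf 'spark.executor.extraJavaOptions=-Xmn1g -verbose:gc -XX:+UseParallelGC -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution' \
  --class com.example.MyApp \
  my-app.jar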
Related
The Sonar build succeeds,
but the analysis fails with java.lang.OutOfMemoryError: Java heap space.
I set the following in sonar.properties, but it has no effect:
sonar.web.javaOpts=-Xmx6144m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
sonar.ce.javaOpts=-Xmx6144m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
sonar.search.javaOpts=-Xms512m -Xmx6144m -XX:+HeapDumpOnOutOfMemoryError
I also checked the UI settings, but that didn't help.
Increase the memory via the SONAR_SCANNER_OPTS environment variable:
export SONAR_SCANNER_OPTS="-Xmx512m"
On Windows environments, avoid the double-quotes, since they get misinterpreted and combine the two parameters into a single one.
set SONAR_SCANNER_OPTS=-Xmx512m
This doesn't help; my project is scanned (by Maven) successfully, but the SonarQube background task fails with an out-of-memory error.
https://docs.sonarqube.org/display/SONARqube71/Java+Process+Memory
As stated in the official documentation,
you need to define SONAR_SCANNER_OPTS as an environment variable with the desired heap space.
Documentation link here
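As a minimal sketch, assuming the analysis is triggered through Maven as in the comment above (the heap value is only an illustration):
export SONAR_SCANNER_OPTS="-Xmx2048m"
mvn sonar:sonar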
I'm trying to build a Spark application which uses ZooKeeper and Kafka. Maven is used for the build. The project I'm trying to build is here. On executing:
mvn clean package exec:java -Dexec.mainClass="com.iot.video.app.spark.processor.VideoStreamProcessor"
It shows
ERROR SparkContext:91 - Error initializing SparkContext.
java.lang.IllegalArgumentException: System memory 253427712 must be at least 471859200. Please increase heap size using the --driver-memory option or spark.driver.memory in Spark configuration.
I tried adding spark.driver.memory 4g to spark-defaults.conf but I still get the error. How can I fix it?
You can send extra JVM options to your workers by using dedicated spark-submit arguments:
spark-submit --conf 'spark.executor.memory=1g' \
    --conf 'spark.executor.extraJavaOptions=-Xms1024m -Xmx4096m'
Similarly, you can set the option for your driver (useful if your application is submitted in cluster mode, or launched by spark-submit):
--conf 'spark.driver.extraJavaOptions=-Xms512m -Xmx2048m'
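For completeness, a minimal sketch of a full spark-submit invocation for this case, using the --driver-memory option that the error message itself suggests; the main class comes from the question, while the jar path is a placeholder:
spark-submit \
  --driver-memory 4g \
  --class com.iot.video.app.spark.processor.VideoStreamProcessor \
  target/video-stream-processor.jar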
I am trying to increase my IntelliJ IDEA startup memory. So I:
right-click on IntelliJ IDEA > Show Package Contents > Contents > bin > idea.vmoptions
and modified idea.vmoptions as follows:
-Xms256m
-Xmx2048m
-XX:ReservedCodeCacheSize=480m
-XX:+UseCompressedOops
-Dfile.encoding=UTF-8
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=50
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-Xverify:none
-XX:ErrorFile=$USER_HOME/java_error_in_idea_%p.log
-XX:HeapDumpPath=$USER_HOME/java_error_in_idea.hprof
-Xbootclasspath/a:../lib/boot.jar
But it changed nothing. When I look at the IDEA process parameters, the heap is still -Xmx768m instead of the -Xmx2048m I configured:
/Applications/IntelliJ IDEA.app/Contents/jdk/Contents/Home/jre/bin/java -d64 -Djava.awt.headless=true -Didea.version==2017.2.5 -Xmx768m -Didea.maven.embedder.version=3.3.9 -Dfile.encoding=UTF-8 -classpath /Applications/IntelliJ ....
I also copied /Applications/IntelliJ\ IDEA.app/Contents/bin/idea.vmoptions to /Users/wuchang/Library/Preferences/IntelliJIdea2017.2/idea.vmoptions, which doesn't help either.
Could anyone give me some suggestions?
My IDEA version is 2017.2.5 and my macOS version is 10.12.6. I have also tried Help -> Edit Custom VM Options to modify it, which doesn't help either.
The issue is that there is another process started for Maven importing, and its heap size is configured elsewhere. You are checking the heap size of the Maven importer process instead of the IntelliJ IDEA process.
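If it is indeed the Maven importer, its heap is adjusted in the IDE settings rather than in idea.vmoptions; as a sketch (assuming IDEA 2017.2, where the exact path may differ slightly): Preferences > Build, Execution, Deployment > Build Tools > Maven > Importing > "VM options for importer", for example:
-Xmx2048m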
I am running a 6450-user test in a distributed environment on AWS Ubuntu machines.
I am getting the following error when the test reaches peak load:
ERROR - jmeter.JMeter: Uncaught exception: java.lang.OutOfMemoryError: GC overhead limit exceeded
Machine Details:
m4.4xlarge
HEAP="-Xms512m -Xmx20480m" (jmeter.sh file)
I allocated 20 GB for the heap in jmeter.sh.
But when I run the ps -eaf | grep java command, it gives the following response:
root 11493 11456 56 15:47 pts/9 00:00:03 java -server -XX:+HeapDumpOnOutOfMemoryError -Xms512m -Xmx512m -XX:MaxTenuringThreshold=2 -XX:PermSize=64m -XX:MaxPermSize=128m -XX:+CMSClassUnloadingEnabled -jar ./ApacheJMeter.jar
I have no idea what changes I need to make now.
Make the change in the jmeter file, not in jmeter.sh; as you can see with ps, it is not being applied.
Also with such a heap you may need to add:
-XX:-UseGCOverheadLimit
And switch to the G1 garbage collector algorithm.
Also check that you follow these recommendations:
http://jmeter.apache.org/usermanual/best-practices.html
http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
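As a minimal sketch of what these suggestions might look like in the JMeter startup script (the HEAP and JVM_ARGS variable names follow the typical JMeter startup script, but check your version; the values are illustrative):
HEAP="-Xms1g -Xmx20g"
JVM_ARGS="-XX:+UseG1GC -XX:-UseGCOverheadLimit"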
First of all, the answer is in your question: you say that ps -eaf|grep java shows this:
XX:+HeapDumpOnOutOfMemoryError -Xms512m -Xmx512m
That memory is still very low. So either you changed jmeter.sh but are using another shell script to actually start JMeter, or you didn't change it in a valid way, so JMeter uses the defaults.
But on top of that, I really doubt you can run 6450 users on one machine unless your script is very light. An unconfigured machine can usually handle 200-400 users, and a well-configured machine can probably deal with up to 2000.
You need to amend the line in the jmeter file, not the jmeter.sh file. Locate the HEAP="-Xms512m -Xmx512m" line and update the Xmx value accordingly.
Also ensure you're starting JMeter using the jmeter file.
If your environment explicitly relies on the jmeter.sh file, you should amend the HEAP size a little differently, like:
export JVM_ARGS="-Xms512m -Xmx20480m" && ./jmeter.sh
or add the relevant line to the jmeter.sh file.
See the JMeter Best Practices and 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure articles for comprehensive information on tuning JMeter.
Some time ago I found out that each of our data nodes is constantly reading from disk at ~10 MB/s aggregate speed. I found this out with the iotop utility.
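For context, a sketch of the kind of iotop invocation used for this check (-o shows only processes actually doing I/O, -P aggregates threads by process, -a shows accumulated totals):
sudo iotop -o -P -a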
What I've done so far to diagnose it:
I tried stopping different services on the cluster, but the reading only stops when I stop the HDFS service completely
I checked the logs of a data node, but can only see some HDFS_WRITE operations happening every 1-2 minutes, nothing about reading data. I checked during idle time, of course
Some info on our system:
we're using a CDH distro, 5.8 now, but the problem started several versions ago
no running jobs in YARN at that moment
the issue has been present 24/7 for several months, and it wasn't there before
My prime suspect for now is some auditing process in CDH. Unfortunately, I couldn't find any good documentation on administering these processes.
Here is information on a data node process from the ps -ef output:
hdfs 58093 6398 10 Oct11 ? 02:56:30 /usr/lib/jvm/java-8-oracle/bin/java -Dproc_datanode -Xmx1000m -Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop-hdfs -Dhadoop.log.file=hadoop-cmf-hdfs-DATANODE-hadoop-worker-03.srv.mycompany.org.log.out -Dhadoop.home.dir=/opt/cloudera/parcels/CDH-5.8.2-1.cdh5.8.2.p0.3/lib/hadoop -Dhadoop.id.str=hdfs -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/opt/cloudera/parcels/CDH-5.8.2-1.cdh5.8.2.p0.3/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Xms1073741824 -Xmx1073741824 -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=70 -XX:+CMSParallelRemarkEnabled -XX:OnOutOfMemoryError=/usr/lib/cmf/service/common/killparent.sh -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
I'll be really grateful for any clues on this issue.