I installed GeoNetwork 4.0.6 with Elasticsearch on Debian 11, but I have a problem: Java overloads the processor and I cannot log in to my server through SSH. I have limited Java's memory usage in the /etc/default/tomcat9 config file and in /etc/elasticsearch/jvm.options. My configs are:
In /etc/default/tomcat9
JAVA_OPTS="-Djava.awt.headless=true -Xmx6144m -XX:MaxPermSize=1536m -Dfile.encoding=UTF-8"
In /etc/elasticsearch/jvm.options
-Xms2048m
-Xmx2048m
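Note: on Java 8 and later, -XX:MaxPermSize is ignored (PermGen was removed), so the Tomcat line effectively only caps the heap. To check what each running JVM actually received and which one is loading the CPU, something like this works (service names are the Debian defaults):
ps -C java -o pid,user,%cpu,%mem,args   # shows the -Xmx each JVM was really started with, plus CPU/memory use
systemctl status tomcat9 elasticsearch  # confirms which units own those JVM processes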
I am trying to increase my IDEA IDE's startup memory, so I:
right-click on IntelliJ IDEA > Show Package Contents > Contents > bin > idea.vmoptions
and modified idea.vmoptions as follows:
-Xms256m
-Xmx2048m
-XX:ReservedCodeCacheSize=480m
-XX:+UseCompressedOops
-Dfile.encoding=UTF-8
-XX:+UseConcMarkSweepGC
-XX:SoftRefLRUPolicyMSPerMB=50
-ea
-Dsun.io.useCanonCaches=false
-Djava.net.preferIPv4Stack=true
-XX:+HeapDumpOnOutOfMemoryError
-XX:-OmitStackTraceInFastThrow
-Xverify:none
-XX:ErrorFile=$USER_HOME/java_error_in_idea_%p.log
-XX:HeapDumpPath=$USER_HOME/java_error_in_idea.hprof
-Xbootclasspath/a:../lib/boot.jar
But it changed nothing. When I look at the IDEA process parameters, the heap is still -Xmx768m instead of the -Xmx2048m I configured:
/Applications/IntelliJ IDEA.app/Contents/jdk/Contents/Home/jre/bin/java -d64 -Djava.awt.headless=true -Didea.version==2017.2.5 -Xmx768m -Didea.maven.embedder.version=3.3.9 -Dfile.encoding=UTF-8 -classpath /Applications/IntelliJ ....
I also copied /Applications/IntelliJ\ IDEA.app/Contents/bin/idea.vmoptions to /Users/wuchang/Library/Preferences/IntelliJIdea2017.2/idea.vmoptions, which didn't help either.
Could anyone give me some suggestions?
My IDEA version is 2017.2.5 and my macOS version is 10.12.6. I have also tried Help -> Edit Custom VM Options to modify it, which didn't help either.
The issue is that there is a separate process started for Maven importing, and its heap size is configured elsewhere. You are checking the heap size of the Maven importer process instead of the IntelliJ IDEA process itself.
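You can tell the two apart on the command line: the importer process carries the -Didea.maven.embedder.version flag (as in the output above), while the IDE itself is launched from Contents/MacOS/idea. A quick check, assuming macOS and the default install location:
ps -ef | grep -i intellij | grep -v grep
# the line containing -Didea.maven.embedder.version is the Maven importer, not the IDE
If you also want to raise the importer's heap, it has its own setting, roughly under Preferences > Build, Execution, Deployment > Build Tools > Maven > Importing > "VM options for importer" (its default of -Xmx768m matches what you are seeing).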
I've just added a new DataNode to my Hortonworks cluster (machines running RHEL 7), but clearly I must have missed something when I installed the Java JDK 1.8 on it. All the node's roles are installed, but DataNode, Metrics Monitor and NodeManager show up as stopped in the Ambari manager. Whenever I run 'DataNode start' it fails with the following message:
==> /var/log/hadoop/hdfs/jsvc.out <==
==> /var/log/hadoop/hdfs/jsvc.err <==
Cannot find any VM in Java Home /usr/java/jdk1.8.0_77
Cannot locate JVM library file
Output when running java -version (logged in as root):
java version "1.8.0_77"
Java(TM) SE Runtime Environment (build 1.8.0_77-b03)
Java HotSpot(TM) Server VM (build 25.77-b03, mixed mode)
I figure it must be something along the lines of exporting JAVA_HOME or setting PATH so that it looks inside the JDK's bin folder. I can't make it work, though. Maybe that's because I'm exporting it in root's bash profile instead of for whichever account Ambari uses to run the DataNode start? Any ideas?
It turned out Ambari doesn't automatically 'see' the changes you make to the JDK (if, like me, you have been messing with it). To solve this I recommissioned the DataNode and then restarted it; it then worked right away.
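If the 'Cannot locate JVM library file' message comes back, it is also worth confirming that the JDK under the configured Java home actually contains the shared JVM library that jsvc loads. A minimal check, assuming a 64-bit Linux JDK 8 layout (the arch directory may differ):
ls /usr/java/jdk1.8.0_77/jre/lib/amd64/server/libjvm.so
# if this file is missing (for example because a 32-bit JDK was installed),
# install the matching 64-bit JDK and point Ambari's Java home at it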
I have a JavaFX application which is deployed via Java Web Start.
I need to pass GC VM args to the application and I am having issues doing so.
I have the following in my JNLP:
<j2se version="1.8+" href="http://java.sun.com/products/autodl/j2se" initial-heap-size="1024m" max-heap-size="1024m" java-vm-args="-XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -XX:+DisableExplicitGC"/>
When the application starts, it looks like most of them are not passed to the VM.
ps -ef | grep java gives the output below:
133768645 2448 1 0 4:31PM ttys020 0:37.80 /Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home/bin/java -XX:+DisableExplicitGC -XX:CMSInitiatingOccupancyFraction=75 -Xmx1g -Xms1g
The min and max heap get set as expected, but not the other VM arguments.
Can you please let me know why the other VM args are not being passed to the VM?
Am I doing something wrong?
Appreciate your help.
Thanks
Make sure the JNLP file you changed is the one javaws is actually using. If the JNLP has an href attribute in its header, Web Start will fetch the JNLP from that URL even if you launch it from your local machine.
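If the href points at a server copy, also make sure a cached version is not being reused; clearing the Web Start cache and relaunching from the authoritative URL (illustrative here) forces the updated descriptor and its java-vm-args to be read:
javaws -uninstall                          # removes cached Web Start applications
javaws http://yourserver/app/yourapp.jnlp  # relaunch from the JNLP the href points to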
I have deployed an Amazon EC2 cluster with Spark like so:
~/spark-ec2 -k spark -i ~/.ssh/spark.pem -s 2 --region=eu-west-1 --spark-version=1.3.1 launch spark-cluster
I first copy a file I need to the master, and then from the master to HDFS, using:
ephemeral-hdfs/bin/hadoop fs -put ~/ANTICOR_2_10000.txt ~/user/root/ANTICOR_2_10000.txt
I have a jar I want to run which was compiled with JDK 8 (I am using a lot of Java 8 features) so I copy it over with scp and run it with:
spark/bin/spark-submit --master spark://public_dns_with_port --class package.name.to.Main job.jar -f hdfs://public_dns:~/ANTICOR_2_10000.txt
The problem is that spark-ec2 sets the cluster up with JDK 7, so I am getting Unsupported major.minor version 52.0 errors.
My question is: what are all the places where I need to change JDK 7 to JDK 8?
The steps I have done so far on the master are (sketched concretely below):
Install JDK8 with yum
Use sudo alternatives --config java and change the preferred Java to java-8
export JAVA_HOME=/usr/lib/jvm/openjdk-8
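Concretely, on the master that looks something like the following; the JDK path under /usr/lib/jvm is illustrative and depends on how JDK 8 was installed:
sudo alternatives --config java                    # pick the 1.8 entry from the menu
java -version                                      # should now report 1.8.0_xx
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk   # match the path alternatives points at
readlink -f "$(which java)"                        # confirm which binary actually resolves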
Do I have to do that on all the nodes? Also, do I need to change the Java path that Hadoop uses in ephemeral-hdfs/conf/hadoop-env.sh, or are there any other spots I missed?
Unfortunately, Amazon doesn't offer out-of-the-box Java 8 installations yet: see available versions.
Have you seen this post on how to install it on running instances?
Here is what I have been doing for all Java installations that differ from the versions provided by the default installation:
Configure the JAVA_HOME environment variable on each machine/node:
export JAVA_HOME=/home/ec2-user/softwares/jdk1.7.0_25
Modify the default PATH and place the JDK's bin directory before the rest of the PATH on all nodes/machines:
export PATH=/home/ec2-user/softwares/jdk1.7.0_25/bin/:$M2:$SCALA_HOME/bin/:$HIVE_HOME/bin/:$PATH
And the above needs to be done as the same OS user that runs/owns the Spark master and worker processes; a sketch for applying it across the nodes follows.
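A minimal way to push the same two exports to every node, assuming a spark-ec2-style layout where the worker hostnames are listed in /root/spark/conf/slaves and root SSH uses the launch key (both of those are assumptions; adjust to your cluster):
# write the two exports to a small profile snippet
cat > java-env.sh <<'EOF'
export JAVA_HOME=/home/ec2-user/softwares/jdk1.7.0_25
export PATH=$JAVA_HOME/bin:$PATH
EOF
# copy it to every worker so login shells there get the same JAVA_HOME and PATH
for host in $(cat /root/spark/conf/slaves); do
  scp -i ~/.ssh/spark.pem java-env.sh root@$host:/etc/profile.d/java-env.sh
done
Since the daemons are not started from login shells, also setting JAVA_HOME in ephemeral-hdfs/conf/hadoop-env.sh and spark/conf/spark-env.sh on each node is the surest way to make them pick up the new JDK.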
I am using WebLogic to deploy a Java project. When I try to start the WebLogic server I get the following error. Kindly guide me on this.
weblogic.application.utils.StateChangeException: java.lang.OutOfMemoryError: PermGen space
You need to increase the memory settings of your WebLogic instance, probably in your setDomainEnv.sh. Since the error is about PermGen space, raise -XX:MaxPermSize in addition to the heap, for example:
-Xms256m -Xmx512m -XX:MaxPermSize=256m
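A minimal sketch of where this usually goes, assuming a standard domain layout; the variable name follows the usual setDomainEnv.sh convention and the values are illustrative:
# in $DOMAIN_HOME/bin/setDomainEnv.sh
USER_MEM_ARGS="-Xms256m -Xmx512m -XX:MaxPermSize=256m"
export USER_MEM_ARGS
Restart the server afterwards so the new JVM options take effect.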