How to increase Java heap size for Carrot2?

How can I increase Java heap size for Carrot2 Document Clustering Workbench?

Via command line:
carrot2-workbench -vmargs -Xmx256m
Tip: You can use the above pattern to specify any other JVM options if needed.

Another tip:
You can also add the JVM path and options to the eclipse.ini file located in the Carrot2 Document Clustering Workbench installation directory. Please see the Eclipse Wiki for a list of all available options.
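As a minimal sketch (the INI_FILE path is a placeholder, not from the original tip), the heap option can be appended to eclipse.ini like this; note that -vmargs must be the last section of the file, with the JVM options after it:

```shell
# Minimal sketch, assuming eclipse.ini does not already contain a
# -vmargs section. INI_FILE is a placeholder; point it at the
# eclipse.ini in your Carrot2 Workbench installation directory.
INI_FILE="${INI_FILE:-./eclipse.ini}"
# Everything after -vmargs is passed verbatim to the JVM.
printf '%s\n' '-vmargs' '-Xmx512m' >> "$INI_FILE"
```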

Related

Flume Exception in thread "main" java.lang.OutOfMemoryError: Java heap space

arun#arun-admin:/usr/lib/apache-flume-1.6.0-bin/bin$ ./flume-ng agent --conf ./conf/ -f /usr/lib/apache-flume-1.6.0properties -Dflume.root.logger=DEBUG,console -n agent
Info: Including Hadoop libraries found via (/usr/share/hadoop/bin/hadoop) for HDFS access
Info: Excluding /usr/share/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/share/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Including Hive libraries found via (/usr/lib/apache-hive-3.1.2-bin) for Hive access
+ exec /usr/lib/jvm/java-11-openjdk-amd64/bin/java -Xmx20m -Dflume.root.logger=DEBUG,console -cp './conf/:/usr/lib/apache-flume-1.6.0-bin/lib/:<long list of Hadoop common, HDFS, YARN and MapReduce jars trimmed for brevity>:/usr/lib/apache-hive-3.1.2-bin/lib/*' -Djava.library.path=:/usr/share/hadoop/lib org.apache.flume.node.Application -f
It is reporting an out-of-memory error, so change your Xmx value when running the application. Currently you are giving it only 20 MB via -Xmx20m, and that much memory is probably not enough. Change it to a higher value, say 1000 MB with -Xmx1000m, and see if that helps.
You need to find the right value for this setting. That is possible if you know the size of the data that has to flow through; if you cannot anticipate it, trial and error is the only option.
You can try increasing the heap size in your Flume command by passing -Xmx512m. If you still face the same error, please try increasing it further, to -Xmx1000m.
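To avoid editing the launch command each time, the heap can also be set via Flume's conf/flume-env.sh, which the flume-ng script sources and from which it picks up JAVA_OPTS. A minimal sketch, with CONF_DIR as an assumed placeholder for your Flume conf directory:

```shell
# Minimal sketch: CONF_DIR is a placeholder (e.g.
# /usr/lib/apache-flume-1.6.0-bin/conf in the question's setup).
CONF_DIR="${CONF_DIR:-./conf}"
mkdir -p "$CONF_DIR"
# flume-ng sources flume-env.sh if present and honours JAVA_OPTS.
echo 'export JAVA_OPTS="-Xms512m -Xmx1000m"' >> "$CONF_DIR/flume-env.sh"
```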

Setting sonar.properties has no effect after restarting the service

The Sonar build succeeds,
but the analysis fails with java.lang.OutOfMemoryError: Java heap space.
I set the following in sonar.properties, but it has no effect:
sonar.web.javaOpts=-Xmx6144m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
sonar.ce.javaOpts=-Xmx6144m -Xms128m -XX:+HeapDumpOnOutOfMemoryError
sonar.search.javaOpts=-Xms512m -Xmx6144m -XX:+HeapDumpOnOutOfMemoryError
Changing the setting in the UI does not help either.
Increase the memory via the SONAR_SCANNER_OPTS environment variable:
export SONAR_SCANNER_OPTS="-Xmx512m"
On Windows environments, avoid the double-quotes, since they get misinterpreted and combine the two parameters into a single one.
set SONAR_SCANNER_OPTS=-Xmx512m
This was not helpful: my project is scanned successfully (by Maven), but the SonarQube background task fails with an out-of-memory error.
https://docs.sonarqube.org/display/SONARqube71/Java+Process+Memory
As stated in the official documentation,
you need to define SONAR_SCANNER_OPTS as an environment variable with the desired heap space.
Documentation link here
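Note that SONAR_SCANNER_OPTS only sizes the scanner JVM; if the background (Compute Engine) task is what fails, sonar.ce.javaOpts in sonar.properties is the relevant setting (as the question already attempts), followed by a server restart. A minimal sketch of setting it (SONAR_HOME is an assumed placeholder):

```shell
# Minimal sketch: SONAR_HOME is a placeholder for the SonarQube
# installation directory.
SONAR_HOME="${SONAR_HOME:-./sonarqube}"
mkdir -p "$SONAR_HOME/conf"
# The Compute Engine (background task) heap is configured separately
# from the web and search JVMs; restart SonarQube afterwards.
echo 'sonar.ce.javaOpts=-Xmx2g -Xms512m' >> "$SONAR_HOME/conf/sonar.properties"
```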

How to set Heap size for Jmeter Slave instance

I have a distributed JMeter master-slave setup. On increasing the throughput to a higher number, I started getting an OOM exception for heap space.
I found this post:
How to Increase Heap size
to increase the HEAP size in the jmeter.bat file (Windows in my case). However, for JMeter slave machines we don't launch JMeter via jmeter.bat but rather via the jmeter-server.bat file. I checked, and this file doesn't have any HEAP memory parameter.
Any suggestions on how to increase the heap memory size on slave instances?
Looking into jmeter-server.bat source code:
It respects the JVM_ARGS environment variable
It calls jmeter.bat under the hood, which in turn respects the HEAP environment variable
So given you're on Windows you can do something like:
set HEAP=-Xms1G -Xmx10G -XX:MaxMetaspaceSize=256M && jmeter-server.bat
and the JVM heap will be increased to 10 gigabytes for the slave instance.
The above instructions apply to JMeter 4.0; the behavior might differ in previous versions.
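The same environment-variable approach works on Linux, where the jmeter and jmeter-server start scripts also honour JVM_ARGS. A minimal sketch (starting the slave itself is left commented out):

```shell
# Minimal sketch: JVM_ARGS is picked up by JMeter's start scripts and
# prepended to the JVM options.
export JVM_ARGS="-Xms1g -Xmx4g"
# ./jmeter-server   # start the slave as usual; path is illustrative
echo "$JVM_ARGS"
```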
The command to start the JMeter slaves looks like:
nohup java -jar "/bin/ApacheJMeter.jar" "-Djava.rmi.server.hostname=127.0.0.1" -Dserver_port=10000 -s -j jmeter-server.log > /dev/null 2>&1
So if you want to change Java parameters, just pass them after java:
nohup java -Xms512m -Xmx512m -XX:+UseCMSInitiatingOccupancyOnly ...

JMETER : ERROR - jmeter.JMeter: Uncaught exception: java.lang.OutOfMemoryError: GC overhead limit exceeded

I am running a 6450-user test in a distributed environment on AWS Ubuntu machines.
I am getting the following error when the test reaches peak load:
ERROR - jmeter.JMeter: Uncaught exception: java.lang.OutOfMemoryError: GC overhead limit exceeded
Machine Details:
m4.4xlarge
HEAP="-Xms512m -Xmx20480m" (jmeter.sh file)
I allocated 20 GB for the heap size in jmeter.sh.
But when I run ps -eaf | grep java, it gives the following response:
root 11493 11456 56 15:47 pts/9 00:00:03 java -server -XX:+HeapDumpOnOutOfMemoryError -Xms512m -Xmx512m -XX:MaxTenuringThreshold=2 -XX:PermSize=64m -XX:MaxPermSize=128m -XX:+CMSClassUnloadingEnabled -jar ./ApacheJMeter.jar
I don't have any idea what changes I have to do now.
Make the change in the jmeter file, not in jmeter.sh, since as you can see with ps it is not being applied.
Also with such a heap you may need to add:
-XX:-UseGCOverheadLimit
and switch to the G1 garbage collector algorithm.
Also check that you follow these recommendations:
http://jmeter.apache.org/usermanual/best-practices.html
http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
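Combining those suggestions, a minimal sketch of the flags (values are illustrative, set via the JVM_ARGS variable that JMeter's start scripts honour):

```shell
# Minimal sketch: larger heap, GC-overhead limit disabled, G1 collector.
export JVM_ARGS="-Xms1g -Xmx20g -XX:-UseGCOverheadLimit -XX:+UseG1GC"
# ./jmeter.sh ...   # start JMeter as usual (illustrative)
echo "$JVM_ARGS"
```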
First of all, the answer is in your question: you say that ps -eaf|grep java shows this:
XX:+HeapDumpOnOutOfMemoryError -Xms512m -Xmx512m
That is, the memory is still very low. So either you changed jmeter.sh but use another shell script to actually start JMeter, or you didn't change it in a valid way, so JMeter uses the defaults.
But on top of that, I really doubt you can run 6450 users on one machine unless your script is very light. An unconfigured machine can usually handle 200-400 users, and a well-configured machine can probably deal with up to 2000.
You need to amend the line in jmeter file, not jmeter.sh file. Locate HEAP="-Xms512m -Xmx512m" line and update the Xmx value accordingly.
Also ensure you're starting JMeter using jmeter file.
If you have environment which explicitly relies on jmeter.sh file you should be amending HEAP size a little bit differently, like:
export JVM_ARGS="-Xms512m -Xmx20480m" && ./jmeter.sh
or add the relevant line to jmeter.sh file.
See the JMeter Best Practices and 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure articles for comprehensive information on tuning JMeter.

How to change Elasticsearch max memory size

I have an Apache server with a default configuration of Elasticsearch, and everything works perfectly, except that the default configuration has a maximum heap size of 1 GB.
I don't have such a large number of documents to store in Elasticsearch, so I want to reduce the memory.
I have seen that I have to change the -Xmx parameter in the Java configuration, but I don't know how.
I have seen I can execute this:
bin/ElasticSearch -Xmx=2G -Xms=2G
But when I have to restart Elasticsearch this will be lost.
Is it possible to change max memory usage when Elasticsearch is installed as a service?
In ElasticSearch >= 5 the documentation has changed, which means none of the above answers worked for me.
I tried changing ES_HEAP_SIZE in /etc/default/elasticsearch and in /etc/init.d/elasticsearch, but when I ran ps aux | grep elasticsearch the output still showed:
/usr/bin/java -Xms2g -Xmx2g # aka 2G min and max ram
I had to make these changes in:
/etc/elasticsearch/jvm.options
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms1g
-Xmx1g
# the settings shipped with ES 5 were: -Xms2g
# the settings shipped with ES 5 were: -Xmx2g
Updated on Nov 24, 2016: Elasticsearch 5 apparently has changed the way to configure the JVM. See this answer here. The answer below still applies to versions < 5.
tirdadc, thank you for pointing this out in your comment below.
I have a pastebin page that I share with others when wondering about memory and ES. It's worked OK for me: http://pastebin.com/mNUGQCLY. I'll paste the contents here as well:
References:
https://github.com/grigorescu/Brownian/wiki/ElasticSearch-Configuration
http://www.elasticsearch.org/guide/reference/setup/installation/
Edit the following files to modify memory and file number limits. These instructions assume Ubuntu 10.04 and may work on later versions and other distributions/OSes. (Edit: this works for Ubuntu 14.04 as well.)
/etc/security/limits.conf:
elasticsearch - nofile 65535
elasticsearch - memlock unlimited
/etc/default/elasticsearch (on CentOS/RH: /etc/sysconfig/elasticsearch ):
ES_HEAP_SIZE=512m
MAX_OPEN_FILES=65535
MAX_LOCKED_MEMORY=unlimited
/etc/elasticsearch/elasticsearch.yml:
bootstrap.mlockall: true
For anyone looking to do this on CentOS 7 or with another system running systemd, you change it in
/etc/sysconfig/elasticsearch
Uncomment the ES_HEAP_SIZE line, and set a value, eg:
# Heap Size (defaults to 256m min, 1g max)
ES_HEAP_SIZE=16g
(Ignore the comment about 1g max - that's the default)
Create a new file with the extension .options inside /etc/elasticsearch/jvm.options.d and put the options there. For example:
sudo nano /etc/elasticsearch/jvm.options.d/custom.options
and put this content in it:
# JVM Heap Size - see /etc/elasticsearch/jvm.options
-Xms2g
-Xmx2g
This sets the maximum heap size to 2 GB. Don't forget to restart Elasticsearch:
sudo systemctl restart elasticsearch
Now you can check the logs:
sudo cat /var/log/elasticsearch/elasticsearch.log | grep "heap size"
You'll see something like so:
… heap size [2gb], compressed ordinary object pointers [true]
Doc
Instructions for Ubuntu 14.04:
sudo vim /etc/init.d/elasticsearch
Set
ES_HEAP_SIZE=512m
then in:
sudo vim /etc/elasticsearch/elasticsearch.yml
Set:
bootstrap.memory_lock: true
There are comments in the files for more info
The previous answers were insufficient in my case, probably because I'm on Debian 8 while they referred to some earlier distribution.
On Debian 8, modify the service script, normally placed in /usr/lib/systemd/system/elasticsearch.service, and add Environment=ES_HEAP_SIZE=8G
just below the other "Environment=*" lines.
Now reload the service script with systemctl daemon-reload and restart the service. The job should be done!
If you use the service wrapper provided in Elasticsearch's GitHub repository, found at https://github.com/elasticsearch/elasticsearch-servicewrapper, then the conf file at elasticsearch-servicewrapper/service/elasticsearch.conf controls memory settings. At the top of elasticsearch.conf is a parameter:
set.default.ES_HEAP_SIZE=1024
Just reduce this parameter, say to "set.default.ES_HEAP_SIZE=512", to reduce Elasticsearch's allotted memory.
Note that if you use the elasticsearch-wrapper, the ES_HEAP_SIZE provided in elasticsearch.conf OVERRIDES ALL OTHER SETTINGS. This took me a bit to figure out, since from the documentation, it seemed that heap memory could be set from elasticsearch.yml.
If your service wrapper settings are set somewhere else, such as at /etc/default/elasticsearch as in James's example, then set the ES_HEAP_SIZE there.
If you installed ES using the RPM/DEB packages as provided (as you seem to have), you can adjust this by editing the init script (/etc/init.d/elasticsearch on RHEL/CentOS). If you have a look in the file you'll see a block with the following:
export ES_HEAP_SIZE
export ES_HEAP_NEWSIZE
export ES_DIRECT_SIZE
export ES_JAVA_OPTS
export JAVA_HOME
To adjust the size, simply change the ES_HEAP_SIZE line to the following:
export ES_HEAP_SIZE=xM/xG
(where x is the number of MB/GB of RAM that you would like to allocate)
Example:
export ES_HEAP_SIZE=1G
Would allocate 1GB.
Once you have edited the script, save and exit, then restart the service. You can check if it has been correctly set by running the following:
ps aux | grep elasticsearch
And checking for the -Xms and -Xmx flags in the java process that returns:
/usr/bin/java -Xms1G -Xmx1G
Hope this helps :)
Elasticsearch will assign the entire heap specified in jvm.options via the Xms (minimum heap size) and Xmx (maximum heap size) settings.
-Xms12g
-Xmx12g
Set the minimum heap size (Xms) and maximum heap size (Xmx) to be equal to each other.
Don’t set Xmx to above the cutoff that the JVM uses for compressed object pointers (compressed oops), the exact cutoff varies but is near 32 GB.
It is also possible to set the heap size via an environment variable:
ES_JAVA_OPTS="-Xms2g -Xmx2g" ./bin/elasticsearch
ES_JAVA_OPTS="-Xms4000m -Xmx4000m" ./bin/elasticsearch
The file to change the heap size in is /etc/elasticsearch/jvm.options.
If you are using nano, run sudo nano /etc/elasticsearch/jvm.options and update -Xms and -Xmx accordingly.
(You can use any file editor to edit it.)
In the Elasticsearch home directory (typically /usr/share/elasticsearch)
there is a config file, bin/elasticsearch.in.sh.
Edit the parameters ES_MIN_MEM and ES_MAX_MEM in this file to change -Xms and -Xmx respectively (e.g. -Xms2g, -Xmx4g).
Please make sure you restart the node after this config change.
If you are using docker-compose to run a ES cluster:
Open <your docker compose>.yml file
If you have set the volumes property, you won't lose anything. Otherwise, you must first move the indexes.
Look for the ES_JAVA_OPTS value under environment and change it in all nodes; the result could be something like "ES_JAVA_OPTS=-Xms2g -Xmx2g"
Rebuild all nodes: docker-compose -f <your docker compose>.yml up -d
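A minimal sketch of the idea as a compose override file (the service name and image tag are assumptions for illustration, not taken from the question):

```shell
# Minimal sketch: write an illustrative override file; merge the
# environment entry into your real compose file for every ES node.
cat > ./docker-compose.override.yml <<'EOF'
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
    environment:
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
EOF
```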
One-liner for CentOS 7 & Elasticsearch 7 (2g = 2 GB):
$ echo $'-Xms2g\n-Xmx2g' > /etc/elasticsearch/jvm.options.d/2gb.options
and then
$ service elasticsearch restart
If you use Windows Server, you can set the environment variable, restart the server to apply the new value, and start the Elastic service. More detail in Install Elastic in Windows Server.
In elasticsearch 2.x :
vi /etc/sysconfig/elasticsearch
Go to the block of code
# Heap size defaults to 256m min, 1g max
# Set ES_HEAP_SIZE to 50% of available RAM, but no more than 31g
#ES_HEAP_SIZE=2g
Uncomment the last line so it reads:
ES_HEAP_SIZE=2g
Update the Elasticsearch configuration in /etc/elasticsearch/jvm.options:
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## The heap size is automatically configured by Elasticsearch
## based on the available memory in your system and the roles
## each node is configured to fulfill. If specifying heap is
## required, it should be done through a file in jvm.options.d,
## and the min and max should be set to the same value. For
## example, to set the heap to 4 GB, create a new file in the
## jvm.options.d directory containing these lines:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################
-Xms1g
-Xmx1g
These settings allocate 1 GB of RAM to the Elasticsearch service.
If you use Ubuntu 15.04+ or any other distro that uses systemd, you can set the max memory size by editing the elasticsearch systemd service and setting the ES_HEAP_SIZE environment variable. I tested this on Ubuntu 20.04 and it works fine:
systemctl edit elasticsearch
Add the environment variable ES_HEAP_SIZE with the desired max memory, here 2 GB as an example:
[Service]
Environment=ES_HEAP_SIZE=2G
Reload the systemd daemon:
systemctl daemon-reload
Then restart Elasticsearch:
systemctl restart elasticsearch
To check whether it worked as expected:
systemctl status elasticsearch
You should see in the status -Xmx2G:
CGroup: /system.slice/elasticsearch.service
└─2868 /usr/bin/java -Xms2G -Xmx2G
Windows 7, Elasticsearch memory problem:
In elasticsearch-7.14.1\config\jvm.options add:
-Xms1g
-Xmx1g
In elasticsearch-7.14.1\config\elasticsearch.yml uncomment:
bootstrap.memory_lock: true
Then download the service file from https://github.com/elastic/elasticsearch-servicewrapper and paste it into elasticsearch-7.14.1\bin.
Run bin\elasticsearch.bat.
Elasticsearch 7.x and above, tested with Ubuntu 20:
Create a file in /etc/elasticsearch/jvm.options.d. The file name must end with .options,
for example heap_limit.options.
Add these lines to the file:
## Initial memory allocation
-Xms1g
## Maximum memory allocation
-Xmx1g
Restart the Elasticsearch service:
sudo service elasticsearch restart
or
sudo systemctl restart elasticsearch
