Elasticsearch: Job for elasticsearch.service failed - elasticsearch

I am currently trying to set up Elasticsearch for a project. I have installed Elasticsearch 7.4.1, and I have also installed Java (OpenJDK 11.0.4).
But when I try to start Elasticsearch using the command
sudo systemctl start elasticsearch
I get the error below
Job for elasticsearch.service failed because the control process exited with error code.
See "systemctl status elasticsearch.service" and "journalctl -xe" for details.
And when I try to run the command
systemctl status elasticsearch.service
I get the error message
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vend
Active: failed (Result: exit-code) since Fri 2019-11-01 06:09:54 UTC; 12s ago
Docs: http://www.elastic.co
Process: 5960 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DI
Main PID: 5960 (code=exited, status=1/FAILURE)
I have removed/purged Elasticsearch from my machine and re-installed several times, but it doesn't seem to fix the issue.
I have tried to modify the default network.host and host.port settings in /etc/default/elasticsearch to network.host: 0.0.0.0 and http.port: 9200 to fix the issue, but no luck yet.

Here's how I solved it
Firstly, Open /etc/elasticsearch/elasticsearch.yml in your nano editor using the command below:
sudo nano /etc/elasticsearch/elasticsearch.yml
Your network settings should be:
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
This allows Elasticsearch to accept connections from localhost and to listen on port 9200.
Next, run the code below to determine the cause of the error:
journalctl -xe
Error 1
There is insufficient memory for the Java Runtime Environment to continue
Solution
As a JVM application, the Elasticsearch main server process only uses memory devoted to the JVM. The required memory may depend on the JVM used (32- or 64-bit). The memory used by the JVM usually consists of:
heap space (configured via -Xms and -Xmx)
metaspace (limited by the amount of available native memory)
internal JVM overhead (usually tens of MB)
OS-dependent memory features like memory-mapped files.
Elasticsearch mostly depends on heap memory, and this can be set manually by passing the -Xms and -Xmx (heap space) options to the JVM running the Elasticsearch server.
Open /etc/elasticsearch/jvm.options in your nano editor using the command below:
sudo nano /etc/elasticsearch/jvm.options
First, uncomment the -Xms and -Xmx lines.
Next, set -Xms and -Xmx to no more than 50% of your physical RAM. The right value depends on the amount of RAM available on your server; Elasticsearch requires memory for purposes other than the JVM heap, so it is important to leave space for that.
Minimum requirements: If your physical RAM is <= 1 GB
Then, your settings should be:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms128m
-Xmx128m
OR
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms256m
-Xmx256m
Medium requirements: If your physical RAM is >= 2 GB but <= 4 GB
Then, your settings should be:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms512m
-Xmx512m
OR
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms750m
-Xmx750m
Large requirements: If your physical RAM is >= 4 GB but <= 8 GB
Then, your settings should be:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms1024m
-Xmx1024m
OR
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms2048m
-Xmx2048m
Note: If your physical RAM is >= 8 GB you can decide how much heap space you want to allocate to Elasticsearch. You can allocate -Xms2048m and -Xmx2048m OR -Xms4g and -Xmx4g or even higher for better performance based on your available resources.
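The tiers above can be summed up in a small helper. This is only a sketch: the function name, the tier boundaries, and the choice of one of the two suggested values per tier are mine, not an official sizing rule.

```shell
#!/bin/sh
# Sketch: map physical RAM (in MB) to one of the heap sizes suggested above.
heap_for_ram_mb() {
  ram_mb=$1
  if [ "$ram_mb" -le 1024 ]; then
    echo "256m"    # minimum tier (<= 1 GB RAM)
  elif [ "$ram_mb" -le 4096 ]; then
    echo "512m"    # medium tier (2-4 GB RAM)
  elif [ "$ram_mb" -le 8192 ]; then
    echo "1024m"   # large tier (4-8 GB RAM)
  else
    echo "2048m"   # 8 GB+: size to your workload, up to ~50% of RAM
  fi
}

# Example: on Linux, total RAM can be read from /proc/meminfo:
# ram_mb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
# heap_for_ram_mb "$ram_mb"
```

Whatever value it suggests goes into both -Xms and -Xmx.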
Error 2
Initial heap size not equal to the maximum heap size
Solution
Ensure the values of -Xms and -Xmx are equal. For example, if you are using the minimum requirements because your physical RAM is <= 1 GB, instead of this:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms128m
-Xmx256m
it should be this:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms128m
-Xmx128m
OR this:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms256m
-Xmx256m
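This check can be scripted; a minimal sketch (the helper name is mine, and it assumes one uncommented -Xms and -Xmx line each, as in a typical jvm.options file):

```shell
#!/bin/sh
# Sketch: succeed only if the uncommented -Xms and -Xmx values
# in a jvm.options-style file are present and equal.
heap_sizes_match() {
  xms=$(grep '^-Xms' "$1" | head -n1 | cut -c5-)  # value after "-Xms"
  xmx=$(grep '^-Xmx' "$1" | head -n1 | cut -c5-)  # value after "-Xmx"
  [ -n "$xms" ] && [ "$xms" = "$xmx" ]
}

# Example: heap_sizes_match /etc/elasticsearch/jvm.options && echo "heap OK"
```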
Error 3
the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
Solution
Open /etc/elasticsearch/elasticsearch.yml in your nano editor using the command below:
sudo nano /etc/elasticsearch/elasticsearch.yml
Your discovery settings should be:
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: []
Once all the errors are fixed, run the commands below to start Elasticsearch and confirm its status:
sudo systemctl start elasticsearch
sudo systemctl status elasticsearch
That's all.

Had the same problem with a small virtual machine. The above configurations were already set. The only thing that helped was to increase the start timeout. The standard systemd timeout was just not enough.
As a precaution, I set the timeout to 5 minutes as follows.
sudo nano /usr/lib/systemd/system/elasticsearch.service
Add it under the [Service] section in the elasticsearch.service file:
TimeoutStartSec=300
Activate the change to the service:
sudo /bin/systemctl enable elasticsearch.service
Start the service again:
service elasticsearch start
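As a side note (a sketch of the same fix): instead of editing the unit file under /usr/lib/systemd/system, which a package upgrade may overwrite, you can put the timeout in a drop-in override, e.g. via sudo systemctl edit elasticsearch:

```ini
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
TimeoutStartSec=300
```

Then run sudo systemctl daemon-reload before starting the service again.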

A basic solution to this problem is to uninstall Elasticsearch and Kibana and re-install them.
For uninstalling Elasticsearch:
sudo apt-get remove --purge elasticsearch
The message was:
dpkg: warning: while removing elasticsearch, directory '/var/lib/elasticsearch' not empty so not removed
dpkg: warning: while removing elasticsearch, directory '/etc/elasticsearch' not empty so not removed
Removed those directories as well:
sudo rm -rf /etc/elasticsearch
sudo rm -rf /var/lib/elasticsearch
Then install it again:
sudo apt-get install elasticsearch=7.10.1
sudo systemctl start elasticsearch
curl http://localhost:9200/
For uninstalling Kibana:
sudo apt-get remove --purge kibana
Removed those directories as well:
sudo rm -rf /etc/kibana
sudo rm -rf /var/lib/kibana
Then install it again:
sudo apt-get install kibana=7.10.1
sudo systemctl start kibana
For opening Kibana on browser:
http://localhost:5601

If you installed using package management, check whether the /etc/elasticsearch directory is owned by the elasticsearch user.
sudo chown -R elasticsearch:elasticsearch /etc/elasticsearch/
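To see whether ownership is actually the problem before running chown, a small sketch (the helper name is mine):

```shell
#!/bin/sh
# Sketch: print every path under a directory NOT owned by the given user.
# Empty output means ownership is already correct and no chown is needed.
files_not_owned_by() {
  find "$2" ! -user "$1"
}

# Example: files_not_owned_by elasticsearch /etc/elasticsearch
```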

First verify that this is the same problem with command:
journalctl -xe
If you see an error like this: java.lang.NoClassDefFoundError: Could not initialize class
then do this (I got my solution from https://github.com/elastic/elasticsearch/issues/57018):
sudo nano /etc/sysconfig/elasticsearch
Add this at the end or beginning of the file:
# Elasticsearch temp directory
ES_TMPDIR=/var/log/elasticsearch

Try a system restart or just logging out
I had the same issue as the OP on a fresh install of ES. Before going down a rabbit hole of logs and Google searches, I simply logged out of my OS (Ubuntu 20.04) and logged back in. I opened a fresh terminal and Elasticsearch was able to start successfully.
For reference, I used:
sudo service elasticsearch restart
and checked it with:
sudo service elasticsearch status

In my case, Java was missing from my server
When I reconfigured my new server, I did not check for Java. After installing Java, Elasticsearch started working.
It may help someone: first check whether Java is pre-installed, because it is a prerequisite for Elasticsearch.
# systemctl status elasticsearch.service
# which java
# locate java
# java --version
# sudo apt install openjdk-11-jre-headless
# java --version
# sudo systemctl stop elasticsearch
# sudo systemctl start elasticsearch
Thank you.

Steps to install elasticsearch 7.15.2
Follow this digital ocean article
If you see this error
Job for elasticsearch.service failed because the control process exited with error code.
See "systemctl status elasticsearch.service" and "journalctl -xe" for
details.
Open sudo nano /etc/elasticsearch/elasticsearch.yml
Uncomment these:
network.host: 127.0.0.1
http.port: 9200
Open sudo nano /etc/elasticsearch/jvm.options
Uncomment these:
-Xms4g
-Xmx4g
Open sudo nano /etc/elasticsearch/elasticsearch.yml
Update this
discovery.seed_hosts: []
At last run these
sudo systemctl start elasticsearch
sudo systemctl status elasticsearch
To check whether it's working, run this command:
curl -X GET 'http://localhost:9200'

I am using Ubuntu 20.04, and in my case the issue was with the installation part. I followed the automatic installation steps in the official documentation.
After searching for a while, I tried the manual approach described in the same documentation, and it worked like magic for me.

This might be the case for some people, as it was for me, so it might help someone.
So got this error
Exception in thread "main" org.elasticsearch.bootstrap.BootstrapException: org.elasticsearch.cli.UserException: unable to create temporary keystore a>
Likely root cause: java.nio.file.AccessDeniedException: /etc/elasticsearch/elasticsearch.keystore.tmp
elasticsearch.service: Failed with result 'exit-code'.
elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
For Ubuntu 20.04, the workaround was to run these two commands:
sudo chmod g+w /etc/elasticsearch
The command above grants group write permission on /etc/elasticsearch so the keystore can be created manually, and the command below creates it:
sudo -u elasticsearch -s /usr/share/elasticsearch/bin/elasticsearch-keystore create

I also faced the same problem.
I tried to start the Elasticsearch service:
sudo systemctl start elasticsearch
Next, I ran the command below to determine the cause of the error:
journalctl -xe
I saw a lot of lines about Performance Analyzer:
Dec 20 21:07:37 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:07:37 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:07:37 PM org.jooq.tools.JooqLogger info
Dec 20 21:07:37 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
[... the same two lines repeated every five seconds ...]
This noise made it hard to analyze the issue, so first I silenced it. In /usr/lib/systemd/system/opendistro-performance-analyzer.service, under [Service], add StandardOutput=null, then reload systemd via /bin/systemctl daemon-reload for it to take effect.
For more detail, follow the link below:
https://discuss.opendistrocommunity.dev/t/performance-analyzer-agent-cli-spamming-syslog-in-od-1-3-0/2040/4
With the log clear, I easily found the issue: there were duplicate properties in the elasticsearch.yml file that I had forgotten to comment out. I commented out the duplicate property and restarted the Elasticsearch service:
sudo systemctl start elasticsearch
sudo systemctl status elasticsearch
That's all.
I hope it helps.
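Incidentally, spotting such duplicate keys can be scripted; a sketch (the helper name is mine, and it only handles simple top-level key: value lines):

```shell
#!/bin/sh
# Sketch: print any top-level YAML key that appears more than once
# (uncommented lines starting with a letter; the key is everything
# before the first colon).
find_duplicate_keys() {
  awk -F: '/^[A-Za-z]/ {print $1}' "$1" | sort | uniq -d
}

# Example: find_duplicate_keys /etc/elasticsearch/elasticsearch.yml
```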

I had to also disable security in /etc/elasticsearch/elasticsearch.yml
xpack.security.enabled: false

Related

Elasticsearch uses more memory than JVM heap settings allow

The link here, from the official Elasticsearch documentation, mentions that to limit Elasticsearch's memory use, you have to set Xms and Xmx to proper values.
Current setup is:
-Xms1g
-Xmx1g
On my server (CentOS 8), Elasticsearch is using more memory than is allowed by the JVM heap settings, causing the server to crash.
The following errors were observed at the same time:
[2021-09-06T13:11:08,810][WARN ][o.e.m.f.FsHealthService ] [dev.localdomain] health check of [/var/lib/elasticsearch/nodes/0] took [8274ms] which is above the warn threshold of [5s]
[2021-09-06T13:11:20,579][WARN ][o.e.c.InternalClusterInfoService] [dev.localdomain] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
[2021-09-06T13:12:14,585][WARN ][o.e.g.DanglingIndicesState] [dev.localdomain] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
At the same time, the following errors were issued in /var/log/messages:
Sep 6 13:11:08 dev kernel: out_of_memory+0x1ba/0x490
Sep 6 13:11:08 dev kernel: Out of memory: Killed process 277068 (elasticsearch) total-vm:4145008kB, anon-rss:3300504kB, file-rss:0kB, shmem-rss:86876kB, UID:1001
Am I missing some settings to limit elasticsearch memory usage?

HDFS fails to start with Hadoop 3.2 : bash v3.2+ is required

I'm building a small Hadoop cluster composed of 2 nodes : 1 master + 1 worker. I'm using the latest version of Hadoop (3.2) and everything is executed by the root user. In the installation process, I've been able to hdfs namenode -format. Next step is to start the HDFS daemon with start-dfs.sh.
$ start-dfs.sh
Starting namenodes on [master]
bash v3.2+ is required. Sorry.
Starting datanodes
bash v3.2+ is required. Sorry.
Starting secondary namenodes [master]
bash v3.2+ is required. Sorry.
Here's the generated logs in the journal:
$ journalctl --since "1 min ago"
-- Logs begin at Thu 2019-08-29 11:12:27 CEST, end at Thu 2019-08-29 11:46:40 CEST. --
Aug 29 11:46:40 master su[3329]: (to root) root on pts/0
Aug 29 11:46:40 master su[3329]: pam_unix(su-l:session): session opened for user root by root(uid=0)
Aug 29 11:46:40 master su[3329]: pam_unix(su-l:session): session closed for user root
Aug 29 11:46:40 master su[3334]: (to root) root on pts/0
Aug 29 11:46:40 master su[3334]: pam_unix(su-l:session): session opened for user root by root(uid=0)
Aug 29 11:46:40 master su[3334]: pam_unix(su-l:session): session closed for user root
Aug 29 11:46:40 master su[3389]: (to root) root on pts/0
Aug 29 11:46:40 master su[3389]: pam_unix(su-l:session): session opened for user root by root(uid=0)
Aug 29 11:46:40 master su[3389]: pam_unix(su-l:session): session closed for user root
As I'm using Zsh (with Oh-my-Zsh), I logged into a bash console to give it a try. Sadly, I got the same result. In fact, this error happens for all sbin/start-*.sh scripts. However, the hadoop and yarn commands work like a charm.
Since I didn't find much information on this error on the Internet, here I am. Would be glad to have any advice!
Other technical details
Operating system info:
$ lsb_release -d
Description: Debian GNU/Linux 10 (buster)
$ uname -srm
Linux 4.19.0-5-amd64 x86_64
Available Java versions (tried with both):
$ update-alternatives --config java
There are 2 choices for the alternative java (providing /usr/bin/java).
Selection Path Priority Status
------------------------------------------------------------
0 /usr/lib/jvm/java-11-openjdk-amd64/bin/java 1111 auto mode
* 1 /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/bin/java 1081 manual mode
2 /usr/lib/jvm/java-11-openjdk-amd64/bin/java 1111 manual mode
Some ENV variables you might be interested in:
$ env
USER=root
LOGNAME=root
HOME=/root
PATH=/root/bin:/usr/local/bin:/usr/local/hadoop/bin:/usr/local/hadoop/sbin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SHELL=/usr/bin/zsh
TERM=rxvt-unicode
JAVA_HOME=/usr/lib/jvm/adoptopenjdk-8-hotspot-amd64
HADOOP_HOME=/usr/local/hadoop
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
ZSH=/root/.oh-my-zsh
Output of the Hadoop executable:
$ hadoop version
Hadoop 3.2.0
Source code repository https://github.com/apache/hadoop.git -r e97acb3bd8f3befd27418996fa5d4b50bf2e17bf
Compiled by sunilg on 2019-01-08T06:08Z
Compiled with protoc 2.5.0
From source with checksum d3f0795ed0d9dc378e2c785d3668f39
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-3.2.0.jar
My Zsh and Bash installation:
$ zsh --version
zsh 5.7.1 (x86_64-debian-linux-gnu)
$ bash --version
GNU bash, version 5.0.3(1)-release (x86_64-pc-linux-gnu)
# only available in a console using *bash*
$ echo ${BASH_VERSINFO[@]}
5 0 3 1 release x86_64-pc-linux-gnu
TL;DR: use a different user (e.g. hadoop) instead of root.
I found the solution, but not a deep understanding of what is going on. Still, here's the solution I found:
Running with root user:
$ start-dfs.sh
Starting namenodes on [master]
bash v3.2+ is required. Sorry.
Starting datanodes
bash v3.2+ is required. Sorry.
Starting secondary namenodes [master_bis]
bash v3.2+ is required. Sorry
Then I created a hadoop user and gave this user privileges on the Hadoop installation (R/W access). After logging in with this new user I have the following output for the command that caused me some troubles:
$ start-dfs.sh
Starting namenodes on [master]
Starting datanodes
Starting secondary namenodes [master_bis]
Moreover, I noticed that processes created by start-yarn.sh were not listed in the output of jps while using Java 11. Switching to Java 8 solved my problem (don't forget to update all $JAVA_HOME variables, both in /etc/environment and hadoop-env.sh).
Success \o/. However, I'd be glad to understand why the root user cannot do this. I know it's a bad habit to use root but in an experimental environment this is not of our interest to have a clean "close-to" production environment. Any information about this will be kindly appreciated :).
Try
chsh -s /bin/bash
to change the default shell back to bash.

alternative to sudo /etc/init.d/elasticsearch start

I am trying to run elasticsearch through supervisord. To do this I need a command to start elasticsearch without running it in the background. My current supervisord script looks like
[program:elasticsearch]
command=/etc/init.d/elasticsearch start
autostart=true
autorestart=true
startretries=3
user=root
stdout_logfile=/var/www/elasticsearch_std.log
but since the '/etc/init.d/elasticsearch start' command runs elasticsearch in the background, it tries to start elasticsearch again as soon as the command returns a successful launch, which results in
DEBG 'elasticsearch' stdout output:
* Already running.
...done.
Since I told supervisord to restart 3 times, it will do that three times before giving up. However, the purpose of this is of course that supervisord should restart elasticsearch in case of a crash.
So I need a command which starts elasticsearch in the foreground.
EDIT:
Following the suggestion below and the elasticsearch instruction from https://www.elastic.co/guide/en/elasticsearch/reference/current/settings.html I tried to run
/usr/share/elasticsearch/bin/elasticsearch -Epath.conf=/etc/elasticsearch -Epath.logs=/var/log/elasticsearch -Epath.data=/var/lib/elasticsearch
Error: encountered environment variables that are no longer supported
Use jvm.options or ES_JAVA_OPTS to configure the JVM
ES_HEAP_SIZE=256m: set -Xms256m and -Xmx256m in jvm.options or add "-Xms256m -Xmx256m" to ES_JAVA_OPTS
I do not understand this error message since I already set
-Xms256m
-Xmx256m
in /etc/elasticsearch/jvm.options
EDIT2: I also tried to set these parameters through the environment, which did not work either
ES_JAVA_OPTS="-Xms256m -Xmx256m" /usr/share/elasticsearch/bin/elasticsearch -Epath.conf=/etc/elasticsearch -Epath.logs=/var/log/elasticsearch -Epath.data=/var/lib/elasticsearch
Error: encountered environment variables that are no longer supported
Use jvm.options or ES_JAVA_OPTS to configure the JVM
ES_HEAP_SIZE=256m: set -Xms256m and -Xmx256m in jvm.options or add "-Xms256m -Xmx256m" to ES_JAVA_OPTS
the /etc/default/elasticsearch file has all lines commented out except
ES_STARTUP_SLEEP_TIME=5
Start elasticsearch directly with bin/elasticsearch. Using the init file will daemonize and exit immediately, which is not suitable for supervisor.
Instead, set the command attribute to something like:
command=/usr/share/elasticsearch/bin/elasticsearch
-Edefault.path.conf=/etc/elasticsearch
-Edefault.path.logs=/var/log/elasticsearch
-Edefault.path.data=/var/lib/elasticsearch
replacing the paths accordingly.
You can also set the default.path.conf and edit the YAML file inside for the data and log settings (amongst others).
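Putting the pieces together, the [program:elasticsearch] block from the question might then look like this (a sketch; the paths and log file come from the question, and running as the elasticsearch user rather than root is an assumption, since recent Elasticsearch versions refuse to start as root):

```ini
[program:elasticsearch]
; start in the foreground so supervisord can track and restart the process
command=/usr/share/elasticsearch/bin/elasticsearch
    -Edefault.path.conf=/etc/elasticsearch
    -Edefault.path.logs=/var/log/elasticsearch
    -Edefault.path.data=/var/lib/elasticsearch
user=elasticsearch
autostart=true
autorestart=true
startretries=3
stdout_logfile=/var/www/elasticsearch_std.log
```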

booting elasticsearch on machine with 2GB RAM

I continue to get the following error while trying to run Elasticsearch on an SSD machine with 2 GB RAM.
elasticsearch[1234] : # There is insufficient memory for the Java Runtime Environment to continue.
elasticsearch[1234] : # Native memory allocation (mmap) failed to map 1973026816 bytes for committing reserved memory.
I modified the default config /etc/init.d/elasticsearch with the following options:
ES_JAVA_OPTS="-Xms1g -Xmx1g"
ES_HEAP_SIZE=1g
I restarted elasticsearch but I continue to get the same error.
sudo /bin/systemctl restart elasticsearch.service
Any ideas?
You should set Xms and Xmx in the jvm.options file. (/etc/elasticsearch/jvm.options)
You can also use environment variables (ES_JAVA_OPTS="-Xms1g -Xmx1g"), but you need to comment out the settings in jvm.options for that to work.
PS: Assuming 5.x since you didn't specify the version.

How to change Elasticsearch max memory size

I have an Apache server with a default configuration of Elasticsearch and everything works perfectly, except that the default configuration has a max size of 1GB.
I don't have such a large number of documents to store in Elasticsearch, so I want to reduce the memory.
I have seen that I have to change the -Xmx parameter in the Java configuration, but I don't know how.
I have seen I can execute this:
bin/ElasticSearch -Xmx=2G -Xms=2G
But when I have to restart Elasticsearch this will be lost.
Is it possible to change max memory usage when Elasticsearch is installed as a service?
In Elasticsearch >= 5 the configuration mechanism has changed, which means none of the above answers worked for me.
I tried changing ES_HEAP_SIZE in /etc/default/elasticsearch and in /etc/init.d/elasticsearch, but when I ran ps aux | grep elasticsearch the output still showed:
/usr/bin/java -Xms2g -Xmx2g # aka 2G min and max ram
I had to make these changes in:
/etc/elasticsearch/jvm.options
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms1g
-Xmx1g
# the settings shipped with ES 5 were: -Xms2g
# the settings shipped with ES 5 were: -Xmx2g
Updated on Nov 24, 2016: Elasticsearch 5 apparently has changed the way to configure the JVM. See this answer here. The answer below still applies to versions < 5.
tirdadc, thank you for pointing this out in your comment below.
I have a pastebin page that I share with others when wondering about memory and ES. It's worked OK for me: http://pastebin.com/mNUGQCLY. I'll paste the contents here as well:
References:
https://github.com/grigorescu/Brownian/wiki/ElasticSearch-Configuration
http://www.elasticsearch.org/guide/reference/setup/installation/
Edit the following files to modify memory and file number limits. These instructions assume Ubuntu 10.04, may work on later versions and other distributions/OSes. (Edit: This works for Ubuntu 14.04 as well.)
/etc/security/limits.conf:
elasticsearch - nofile 65535
elasticsearch - memlock unlimited
/etc/default/elasticsearch (on CentOS/RH: /etc/sysconfig/elasticsearch ):
ES_HEAP_SIZE=512m
MAX_OPEN_FILES=65535
MAX_LOCKED_MEMORY=unlimited
/etc/elasticsearch/elasticsearch.yml:
bootstrap.mlockall: true
For anyone looking to do this on CentOS 7 or another system running systemd, you change it in
/etc/sysconfig/elasticsearch
Uncomment the ES_HEAP_SIZE line, and set a value, eg:
# Heap Size (defaults to 256m min, 1g max)
ES_HEAP_SIZE=16g
(Ignore the comment about 1g max - that's the default)
Create a new file with the extension .options inside /etc/elasticsearch/jvm.options.d and put the options there. For example:
sudo nano /etc/elasticsearch/jvm.options.d/custom.options
and put the content there:
# JVM Heap Size - see /etc/elasticsearch/jvm.options
-Xms2g
-Xmx2g
It will set the maximum heap size to 2GB. Don't forget to restart elasticsearch:
sudo systemctl restart elasticsearch
Now you can check the logs:
sudo cat /var/log/elasticsearch/elasticsearch.log | grep "heap size"
You'll see something like so:
… heap size [2gb], compressed ordinary object pointers [true]
Doc
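To sanity-check which heap flags an options file will contribute before restarting, a small sketch (the helper name is mine):

```shell
#!/bin/sh
# Sketch: print the uncommented -Xms/-Xmx lines from a jvm.options-style
# file, so you can verify them before restarting the service.
get_heap_opts() {
  grep -E '^-Xm[sx]' "$1"
}

# Example: get_heap_opts /etc/elasticsearch/jvm.options.d/custom.options
```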
Instructions for ubuntu 14.04:
sudo vim /etc/init.d/elasticsearch
Set
ES_HEAP_SIZE=512m
then in:
sudo vim /etc/elasticsearch/elasticsearch.yml
Set:
bootstrap.memory_lock: true
There are comments in the files for more info
Previous answers were insufficient in my case, probably because I'm on Debian 8 while they referred to earlier distributions.
On Debian 8, modify the service script, normally placed in /usr/lib/systemd/system/elasticsearch.service, and add Environment=ES_HEAP_SIZE=8G just below the other "Environment=*" lines.
Now reload the service script with systemctl daemon-reload and restart the service. The job should be done!
If you use the service wrapper provided in Elasticsearch's Github repository, found at https://github.com/elasticsearch/elasticsearch-servicewrapper, then the conf file at elasticsearch-servicewrapper / service / elasticsearch.conf controls memory settings. At the top of elasticsearch.conf is a parameter:
set.default.ES_HEAP_SIZE=1024
Just reduce this parameter, say to "set.default.ES_HEAP_SIZE=512", to reduce Elasticsearch's allotted memory.
Note that if you use the elasticsearch-wrapper, the ES_HEAP_SIZE provided in elasticsearch.conf OVERRIDES ALL OTHER SETTINGS. This took me a bit to figure out, since from the documentation, it seemed that heap memory could be set from elasticsearch.yml.
If your service wrapper settings are set somewhere else, such as at /etc/default/elasticsearch as in James's example, then set the ES_HEAP_SIZE there.
If you installed ES using the RPM/DEB packages as provided (as you seem to have), you can adjust this by editing the init script (/etc/init.d/elasticsearch on RHEL/CentOS). If you have a look in the file you'll see a block with the following:
export ES_HEAP_SIZE
export ES_HEAP_NEWSIZE
export ES_DIRECT_SIZE
export ES_JAVA_OPTS
export JAVA_HOME
To adjust the size, simply change the ES_HEAP_SIZE line to the following:
export ES_HEAP_SIZE=xM/xG
(where x is the number of MB/GB of RAM that you would like to allocate)
Example:
export ES_HEAP_SIZE=1G
Would allocate 1GB.
Once you have edited the script, save and exit, then restart the service. You can check if it has been correctly set by running the following:
ps aux | grep elasticsearch
And checking for the -Xms and -Xmx flags in the java process that returns:
/usr/bin/java -Xms1G -Xmx1G
Hope this helps :)
Elasticsearch will assign the entire heap specified in jvm.options via the Xms (minimum heap size) and Xmx (maximum heap size) settings.
-Xms12g
-Xmx12g
Set the minimum heap size (Xms) and maximum heap size (Xmx) to be equal to each other.
Don’t set Xmx to above the cutoff that the JVM uses for compressed object pointers (compressed oops), the exact cutoff varies but is near 32 GB.
It is also possible to set the heap size via an environment variable
ES_JAVA_OPTS="-Xms2g -Xmx2g" ./bin/elasticsearch
ES_JAVA_OPTS="-Xms4000m -Xmx4000m" ./bin/elasticsearch
File path to change heap size /etc/elasticsearch/jvm.options
If you are using nano then do sudo nano /etc/elasticsearch/jvm.options and update -Xms and -Xmx accordingly.
(You can use any file editor to edit it)
In the Elasticsearch home directory (typically /usr/share/elasticsearch),
there is a config file, bin/elasticsearch.in.sh.
Edit the ES_MIN_MEM and ES_MAX_MEM parameters in this file to change -Xms and -Xmx respectively (e.g. -Xms2g, -Xmx4g).
And please make sure you restart the node after this config change.
If you are using docker-compose to run an ES cluster:
Open your <your docker compose>.yml file.
If you have set the volumes property, you won't lose anything. Otherwise, you must first move the indexes.
Look for the ES_JAVA_OPTS value under environment and change it in all nodes; the result could be something like "ES_JAVA_OPTS=-Xms2g -Xmx2g"
Rebuild all nodes: docker-compose -f <your docker compose>.yml up -d
Oneliner for Centos 7 & Elasticsearch 7 (2g = 2GB)
$ echo $'-Xms2g\n-Xmx2g' > /etc/elasticsearch/jvm.options.d/2gb.options
and then
$ service elasticsearch restart
If you use Windows Server, you can change the environment variable, restart the server to apply the new value, and start the Elastic service. More detail in Install Elasticsearch in Windows Server.
In elasticsearch 2.x :
vi /etc/sysconfig/elasticsearch
Go to the block of code
# Heap size defaults to 256m min, 1g max
# Set ES_HEAP_SIZE to 50% of available RAM, but no more than 31g
#ES_HEAP_SIZE=2g
Uncomment the last line, like so:
ES_HEAP_SIZE=2g
Update elastic configuration in path /etc/elasticsearch/jvm.options
################################################################
## IMPORTANT: JVM heap size
################################################################
##
## The heap size is automatically configured by Elasticsearch
## based on the available memory in your system and the roles
## each node is configured to fulfill. If specifying heap is
## required, it should be done through a file in jvm.options.d,
## and the min and max should be set to the same value. For
## example, to set the heap to 4 GB, create a new file in the
## jvm.options.d directory containing these lines:
##
## -Xms4g
## -Xmx4g
##
## See https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
## for more information
##
################################################################
-Xms1g
-Xmx1g
This config allocates 1 GB of RAM to the Elasticsearch service.
If you use Ubuntu 15.04+ or any other distro that uses systemd, you can set the max memory size by editing the Elasticsearch systemd service and setting the ES_HEAP_SIZE environment variable. I tested it using Ubuntu 20.04 and it works fine:
systemctl edit elasticsearch
Add the environement variable ES_HEAP_SIZE with the desired max memory, here 2GB as example:
[Service]
Environment=ES_HEAP_SIZE=2G
Reload the systemd daemon:
systemctl daemon-reload
Then restart Elasticsearch:
systemctl restart elasticsearch
To check if it worked as expected:
systemctl status elasticsearch
You should see in the status -Xmx2G:
CGroup: /system.slice/elasticsearch.service
└─2868 /usr/bin/java -Xms2G -Xmx2G
Windows 7 Elasticsearch memory problem
In elasticsearch-7.14.1\config\jvm.options, add this:
-Xms1g
-Xmx1g
In elasticsearch-7.14.1\config\elasticsearch.yml, uncomment:
bootstrap.memory_lock: true
Then download the service file from https://github.com/elastic/elasticsearch-servicewrapper and paste it into elasticsearch-7.14.1\bin.
Finally, run bin\elasticsearch.bat.
Elasticsearch 7.x and above, tested with Ubuntu 20
Create a file in /etc/elasticsearch/jvm.options.d. The file name must end with .options, for example heap_limit.options.
Add these lines to the file:
## Initial memory allocation
-Xms1g
## Maximum memory allocation
-Xmx1g
Restart the Elasticsearch service:
sudo service elasticsearch restart
or
sudo systemctl restart elasticsearch
