Elasticsearch uses more memory than JVM heap settings allow

The linked page from the official Elasticsearch documentation says that to limit Elasticsearch's memory use, you have to set Xms and Xmx to appropriate values.
Current setup is:
-Xms1g
-Xmx1g
On my server (CentOS 8), Elasticsearch is using more memory than the JVM heap settings allow and is causing the server to crash.
The following warnings were observed at the time of the crash:
[2021-09-06T13:11:08,810][WARN ][o.e.m.f.FsHealthService ] [dev.localdomain] health check of [/var/lib/elasticsearch/nodes/0] took [8274ms] which is above the warn threshold of [5s]
[2021-09-06T13:11:20,579][WARN ][o.e.c.InternalClusterInfoService] [dev.localdomain] Failed to update shard information for ClusterInfoUpdateJob within 15s timeout
[2021-09-06T13:12:14,585][WARN ][o.e.g.DanglingIndicesState] [dev.localdomain] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
At the same time, the following errors appeared in /var/log/messages:
Sep 6 13:11:08 dev kernel: out_of_memory+0x1ba/0x490
Sep 6 13:11:08 dev kernel: Out of memory: Killed process 277068 (elasticsearch) total-vm:4145008kB, anon-rss:3300504kB, file-rss:0kB, shmem-rss:86876kB, UID:1001
Am I missing some settings to limit Elasticsearch's memory usage?
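The kernel log above already hints at the answer: the heap is only part of what the process uses. A quick back-of-the-envelope check (a sketch; the numbers are copied verbatim from the OOM-killer line above) shows how much memory sits outside the 1 GiB heap:

```shell
# Compare the OOM-killer's resident-set figure with the configured 1 GiB heap.
anon_rss_kb=3300504            # anon-rss reported by the OOM killer, in kB
heap_kb=$((1 * 1024 * 1024))   # -Xmx1g, expressed in kB
awk -v rss="$anon_rss_kb" -v heap="$heap_kb" 'BEGIN {
    printf "resident: %.1f GiB, heap: %.1f GiB, off-heap: ~%.1f GiB\n",
           rss / 1048576, heap / 1048576, (rss - heap) / 1048576
}'
```

The roughly 2 GiB of off-heap usage is why capping the heap alone does not cap the process.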

Related

Error in configuring elasticsearch cluster

I'm configuring a three-node Elasticsearch cluster. I get the following error when I try to start the first node with the following startup command:
[cloud_user@mishai3c elasticsearch-6.2.4]$ ./bin/elasticsearch -d -p pid
Error message:
[2019-11-11T04:50:39,634][INFO ][o.e.b.BootstrapChecks ] [master] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2019-11-11T04:50:39,636][ERROR][o.e.b.Bootstrap ] [master] node validation exception
[1] bootstrap checks failed
[1]: max number of threads [3581] for user [cloud_user] is too low, increase to at least [4096]
[2019-11-11T04:50:39,666][INFO ][o.e.n.Node ] [master] stopping ...
I tried to raise the ulimit in the /etc/security/limits.conf file by adding the following line:
cloud_user hard nproc 4096
Any help would be highly appreciated.
After changing limits.conf, I checked the max thread limit by running ulimit -u in the terminal; it still showed the previous value.
Then I logged out, logged back into the server, ran ulimit -u again, and it showed 4096.
Then I tried to start Elasticsearch, and it worked.
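The re-login step matters because limits.conf is only read when a session starts. A quick way to confirm whether the current session has picked up the new limit (a sketch; ulimit -Su and -Hu print the soft and hard per-user process limits) is:

```shell
# Print the soft and hard nproc limits that the Elasticsearch
# bootstrap check compares against 4096.
soft=$(ulimit -Su)
hard=$(ulimit -Hu)
echo "soft nproc: $soft, hard nproc: $hard"
```

If the soft value still shows the old limit after editing limits.conf, the session has not re-read the file yet.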

Elasticsearch: Job for elasticsearch.service failed

I am currently trying to set up Elasticsearch for a project. I have installed Elasticsearch 7.4.1, and I have also installed Java (OpenJDK 11.0.4).
But when I try to start Elasticsearch using the command
sudo systemctl start elasticsearch
I get the error below
Job for elasticsearch.service failed because the control process exited with error code.
See "systemctl status elasticsearch.service" and "journalctl -xe" for details.
And when I try to run the command
systemctl status elasticsearch.service
I get the error message
● elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vend
Active: failed (Result: exit-code) since Fri 2019-11-01 06:09:54 UTC; 12s ago
Docs: http://www.elastic.co
Process: 5960 ExecStart=/usr/share/elasticsearch/bin/elasticsearch -p ${PID_DI
Main PID: 5960 (code=exited, status=1/FAILURE)
I have removed/purged Elasticsearch from my machine and re-installed several times, but it doesn't seem to fix the issue.
I have tried to modify the default network.host and host.port settings in /etc/default/elasticsearch to network.host: 0.0.0.0 and http.port: 9200 to fix the issue, but no luck yet.
Here's how I solved it.
Firstly, Open /etc/elasticsearch/elasticsearch.yml in your nano editor using the command below:
sudo nano /etc/elasticsearch/elasticsearch.yml
Your network settings should be:
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
This allows Elasticsearch to accept connections from localhost and to listen on port 9200.
Next, run the code below to determine the cause of the error:
journalctl -xe
Error 1
There is insufficient memory for the Java Runtime Environment to continue
Solution
As a JVM application, the Elasticsearch main server process only uses memory devoted to the JVM. The required memory may depend on the JVM used (32- or 64-bit). The memory used by the JVM usually consists of:
heap space (configured via -Xms and -Xmx)
metaspace (limited by the amount of available native memory)
internal JVM overhead (usually tens of MB)
OS-dependent memory features such as memory-mapped files.
Elasticsearch depends mostly on heap memory, which is set manually by passing the -Xms and -Xmx (heap space) options to the JVM running the Elasticsearch server.
Solution
Open /etc/elasticsearch/jvm.options in your nano editor using the command below:
sudo nano /etc/elasticsearch/jvm.options
First, un-comment the value of Xmx and Xms
Next, modify the values of -Xms and -Xmx to no more than 50% of your physical RAM. The right values depend on how much RAM your server has; Elasticsearch needs memory for purposes other than the JVM heap, so it is important to leave room for that.
Minimum requirements: If your physical RAM is <= 1 GB
Then, your settings should be:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms128m
-Xmx128m
OR
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms256m
-Xmx256m
Medium requirements: If your physical RAM is >= 2 GB but <= 4 GB
Then, your settings should be:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms512m
-Xmx512m
OR
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms750m
-Xmx750m
Large requirements: If your physical RAM is >= 4 GB but <= 8 GB
Then, your settings should be:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms1024m
-Xmx1024m
OR
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms2048m
-Xmx2048m
Note: If your physical RAM is >= 8 GB you can decide how much heap space you want to allocate to Elasticsearch. You can allocate -Xms2048m and -Xmx2048m OR -Xms4g and -Xmx4g or even higher for better performance based on your available resources.
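The 50% rule above can be computed directly from /proc/meminfo as a rough helper (a sketch, Linux-only; the 31 GB cap is an extra assumption to keep the heap below the compressed-oops threshold):

```shell
# Print -Xms/-Xmx lines equal to half of physical RAM, capped at ~31 GB.
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
half_mb=$((total_kb / 2 / 1024))
[ "$half_mb" -gt 31744 ] && half_mb=31744
echo "-Xms${half_mb}m"
echo "-Xmx${half_mb}m"
```

Paste the two printed lines into jvm.options, or round them down to one of the tiers above.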
Error 2
Initial heap size not equal to the maximum heap size
Solution
Ensure the values of -Xms and -Xmx are equal. For example, if you are using the minimum requirements because your physical RAM is <= 1 GB, instead of this:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms128m
-Xmx256m
it should be this:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms128m
-Xmx128m
OR this:
# Xms represents the initial size of total heap space
# Xmx represents the maximum size of total heap space
-Xms256m
-Xmx256m
Error 3
the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured
Solution
Open /etc/elasticsearch/elasticsearch.yml in your nano editor using the command below:
sudo nano /etc/elasticsearch/elasticsearch.yml
Your discovery settings should be:
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: []
Once all the errors are fixed run the command below to start and confirm the status of Elasticsearch:
sudo systemctl start elasticsearch
sudo systemctl status elasticsearch
That's all.
Had the same problem with a small virtual machine. The above configurations were already set. The only thing that helped was to increase the start timeout. The standard systemd timeout was just not enough.
As a precaution, I set the timeout to 5 minutes as follows.
sudo nano /usr/lib/systemd/system/elasticsearch.service
Add the following under the [Service] section of the elasticsearch.service file:
TimeoutStartSec=300
Reload systemd so the change takes effect:
sudo /bin/systemctl daemon-reload
Start the service again:
service elasticsearch start
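An alternative that survives package upgrades is a drop-in override rather than editing the unit file under /usr/lib directly (a sketch of the same TimeoutStartSec change; assumes a systemd host and root access):

```shell
# Create a drop-in that overrides only TimeoutStartSec for the unit.
sudo mkdir -p /etc/systemd/system/elasticsearch.service.d
sudo tee /etc/systemd/system/elasticsearch.service.d/startup-timeout.conf <<'EOF' >/dev/null
[Service]
TimeoutStartSec=300
EOF
sudo systemctl daemon-reload
sudo systemctl restart elasticsearch
```

Package updates replace /usr/lib/systemd/system/elasticsearch.service, but drop-ins under /etc are left alone.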
A blunt solution to this problem is simply to uninstall Elasticsearch and Kibana and re-install them.
For uninstalling Elasticsearch:
sudo apt-get remove --purge elasticsearch
The message was:
dpkg: warning: while removing elasticsearch, directory '/var/lib/elasticsearch' not empty so not removed
dpkg: warning: while removing elasticsearch, directory '/etc/elasticsearch' not empty so not removed
Removed those directories as well:
sudo rm -rf /etc/elasticsearch
sudo rm -rf /var/lib/elasticsearch
Then install it again:
sudo apt-get install elasticsearch=7.10.1
sudo systemctl start elasticsearch
curl http://localhost:9200/
For uninstalling Kibana:
sudo apt-get remove --purge kibana
Removed those directories as well:
sudo rm -rf /etc/kibana
sudo rm -rf /var/lib/kibana
Then install it again:
sudo apt-get install kibana=7.10.1
sudo systemctl start kibana
For opening Kibana on browser:
http://localhost:5601
If you installed using package management, check if the owner of /etc/elasticsearch directory is elasticsearch.
sudo chown -R elasticsearch:elasticsearch /etc/elasticsearch/
First verify that this is the same problem with command:
journalctl -xe
If you see an error like java.lang.NoClassDefFoundError: Could not initialize class, then do this (I got my solution from https://github.com/elastic/elasticsearch/issues/57018):
sudo nano /etc/sysconfig/elasticsearch
Add this at the beginning or end of the file:
# Elasticsearch temp directory
ES_TMPDIR=/var/log/elasticsearch
Try a system restart or just logging out
I had the same issue as OP on a fresh install of ES. Before going down a rabbit hole of logs and Google searches, I simply logged out of my OS (Ubuntu 20.04) and logged back in. I opened a fresh terminal, and Elasticsearch was able to start successfully.
For reference, I used:
sudo service elasticsearch restart
and checked it with:
sudo service elasticsearch status
In my case, Java was missing from my server.
When I reconfigured my new server, I did not check for Java.
After installing Java, Elasticsearch started working.
It may help someone:
First check whether Java is pre-installed, because it is a prerequisite for Elasticsearch.
# systemctl status elasticsearch.service
# which java
# locate java
# java --version
# sudo apt install openjdk-11-jre-headless
# java --version
# sudo systemctl stop elasticsearch
# sudo systemctl start elasticsearch
Thank you.
Steps to install elasticsearch 7.15.2
Follow this digital ocean article
If you see this error
Job for elasticsearch.service failed because the control process exited with error code.
See "systemctl status elasticsearch.service" and "journalctl -xe" for
details.
Open /etc/elasticsearch/elasticsearch.yml:
sudo nano /etc/elasticsearch/elasticsearch.yml
Uncomment these lines:
network.host: 127.0.0.1
http.port: 9200
Open /etc/elasticsearch/jvm.options:
sudo nano /etc/elasticsearch/jvm.options
Uncomment these lines (a 4 GB heap assumes at least 8 GB of physical RAM; use smaller values otherwise):
-Xms4g
-Xmx4g
In /etc/elasticsearch/elasticsearch.yml, also update this:
discovery.seed_hosts: []
Finally, run these:
sudo systemctl start elasticsearch
sudo systemctl status elasticsearch
To check whether it's working, run this command:
curl -X GET 'http://localhost:9200'
I am using Ubuntu 20.04, and in my case the issue was with the installation itself. I followed the automatic installation steps in the official documentation.
After searching for a while, I tried the manual approach described in the same documentation; it worked like magic for me.
This might be the case for some people, as it was for me, so it may help someone. I am a noob at writing such things, so bear with me.
So I got this error:
Exception in thread "main" org.elasticsearch.bootstrap.BootstrapException: org.elasticsearch.cli.UserException: unable to create temporary keystore a>
Likely root cause: java.nio.file.AccessDeniedException: /etc/elasticsearch/elasticsearch.keystore.tmp
elasticsearch.service: Failed with result 'exit-code'.
elasticsearch.service: Main process exited, code=exited, status=1/FAILURE
On Ubuntu 20.04, the workaround was to run these two commands:
sudo chmod g+w /etc/elasticsearch
The command above changes the directory permissions to allow the keystore to be created manually, and the command below creates it:
sudo -u elasticsearch -s /usr/share/elasticsearch/bin/elasticsearch-keystore create
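What the chmod does can be illustrated with a temporary directory standing in for /etc/elasticsearch (a sketch; the directory here is just a stand-in, not the real config path):

```shell
# Add group write permission and show the resulting mode bit.
dir=$(mktemp -d)        # created with mode 700
chmod g+w "$dir"
# Character 6 of 'ls -ld' output is the group write bit:
ls -ld "$dir" | cut -c6
rmdir "$dir"
```

With the group write bit set, a process running as a member of the directory's group (here, the elasticsearch user) can create files such as elasticsearch.keystore inside it.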
I also faced the same problem.
I started the Elasticsearch service:
sudo systemctl start elasticsearch
Next, I ran the command below to determine the cause of the error:
journalctl -xe
I saw a lot of lines about Performance Analyzer:
Dec 20 21:07:37 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:07:37 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:07:37 PM org.jooq.tools.JooqLogger info
Dec 20 21:07:37 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:07:42 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:07:42 PM org.jooq.tools.JooqLogger info
Dec 20 21:07:42 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:07:42 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:07:42 PM org.jooq.tools.JooqLogger info
Dec 20 21:07:42 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:07:47 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:07:47 PM org.jooq.tools.JooqLogger info
Dec 20 21:07:47 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:07:47 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:07:47 PM org.jooq.tools.JooqLogger info
Dec 20 21:07:47 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:07:52 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:07:52 PM org.jooq.tools.JooqLogger info
Dec 20 21:07:52 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:07:52 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:07:52 PM org.jooq.tools.JooqLogger info
Dec 20 21:07:52 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:07:57 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:07:57 PM org.jooq.tools.JooqLogger info
Dec 20 21:07:57 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:07:57 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:07:57 PM org.jooq.tools.JooqLogger info
Dec 20 21:07:57 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:08:02 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:08:02 PM org.jooq.tools.JooqLogger info
Dec 20 21:08:02 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:08:02 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:08:02 PM org.jooq.tools.JooqLogger info
Dec 20 21:08:02 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:08:07 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:08:07 PM org.jooq.tools.JooqLogger info
Dec 20 21:08:07 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
Dec 20 21:08:07 my performance-analyzer-agent-cli[13112]: Dec 20, 2019 9:08:07 PM org.jooq.tools.JooqLogger info
Dec 20 21:08:07 my performance-analyzer-agent-cli[13112]: INFO: Single batch : No bind variables have been provided with a single statement batch execution. This may be due to accidental API misuse
This noise made it hard to analyze the issue, so first I tried to silence it, and I found a link explaining how:
In /usr/lib/systemd/system/opendistro-performance-analyzer.service, add StandardOutput=null under the [Service] section.
After that, reload systemd via /bin/systemctl daemon-reload for it to take effect.
For more detail, follow the link below:
https://discuss.opendistrocommunity.dev/t/performance-analyzer-agent-cli-spamming-syslog-in-od-1-3-0/2040/4
Now that the picture was clear, I easily found the issue: there were duplicate properties in the elasticsearch.yml file that I had forgotten to comment out. I commented out the duplicate property and restarted the Elasticsearch service:
sudo systemctl start elasticsearch
sudo systemctl status elasticsearch
That's all.
I hope it helps.
I had to also disable security in /etc/elasticsearch/elasticsearch.yml
xpack.security.enabled: false

Failed Elasticsearch 7.3 install on Mac El Capitan. Repeated installation failure due to problems enabling bootstrap memory locking

Good evening everyone,
I have been trying to install a single stand-alone instance (locally) of Elasticsearch 7.3 on my MacBook Pro running El Capitan (10.11.6, 4 GB RAM). I really thought this would be fairly straightforward, but alas, ES is having memory-locking issues while being installed on my Mac.
Details:
I downloaded, and have been attempting to install, Elasticsearch 7.3. It was downloaded from here: https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started-install.html
After extracting the archive, I proceeded to follow the installation instructions, starting with running the binary executable ("./elasticsearch"):
cd elasticsearch-7.3.0/bin
./elasticsearch
Upon running that command, I have repeatedly been getting this error:
"1 bootstrap checks failed 1: memory locking requested for elasticsearch process but memory is not locked"
After conducting some research, I now realize that Elasticsearch is having problems enabling memory locking. I fully understand that Elasticsearch does not like memory swapping and that memory locking needs to be enabled with the bootstrap.memory_lock: true setting; however, it seems I have a case where this setting is being passed to Elasticsearch, but Elasticsearch is not able to actually lock the memory (the Java heap) and complete the startup of my ES instance.
I have tried everything to try to enable "memory locking", to no avail. I have set the following config parameters in the following files:
A) I added the following lines to /etc/security/limits.conf file:
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited
* - memlock unlimited
* - nofile 100000
* - nproc 32768
* - as unlimited
B) I added the following lines to the jvm.options file:
-Xms2g (initial size of total heap space, set to half of RAM)
-Xmx2g (maximum size of heap space, set to half of RAM)
-Des.enforce.bootstrap.checks=true (enforcing memory locking checks)
-Djna.tmpdir=chosenpath/elasticsearch-7.3.0/tmp (this seemed important)
C) I edited the following lines in the elasticsearch.yml file:
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
D) I added the '/etc/launchd.conf' file (in trying to increase my max processes and max files available) and added the following lines:
limit maxproc 2048 2048
limit maxfiles 1024 unlimited
E) I added the '/etc/sysctl.conf' file (in trying to increase my max processes, and max processes available per user) and added the following lines:
# Turn up maxproc
kern.maxproc=2048
# Turn up the maxproc per user
kern.maxprocperuid=1024
# Remove core files
kern.coredump=0
F) My ulimit -as output has remained unchanged no matter what I do, and still gives me the following output:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 256
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 709
virtual memory (kbytes, -v) unlimited
But no matter what I try, I always get these 2 errors:
A) "Unable to lock JVM Memory: error=78, reason=Function not implemented"
B) "ERROR: 1 bootstrap checks failed 1: memory locking requested for elasticsearch process but memory is not locked"
I have looked into disabling memory swapping entirely on my Mac, but decided that was too drastic an action; I would prefer that behavior (memory locks, no swapping) to apply only while my ES is active. I also cannot find anywhere to set LimitMEMLOCK=infinity, because an elasticsearch.service unit does not exist when installing ES on a Mac.
I thought installing ES would be as simple as editing the "Elasticsearch.yml" and "jvm.options" file and that would be it. Boy was I wrong.
I would love your assistance guys. Thanks in advance.
So, I finally got ES 7.3 to install on my Mac (El Capitan).
I removed this entry from my jvm.options file:
-Des.enforce.bootstrap.checks=true
With that removed, ES no longer failed while validating memory locking. Removing this parameter disables ES's enforcement of bootstrap checks, even if bootstrap.memory_lock: true is set in the elasticsearch.yml config file. Now all I have to figure out is how to add 2 additional nodes to this same ES instance without getting this error:
uncaught exception in thread [main]
org.elasticsearch.bootstrap.StartupException: BindTransportException[Failed to bind to
[9300]]; nested: BindException[Address already in use]
It seems like I need to make some adjustments to my elasticsearch.yml file. The current pertinent settings are here:
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
transport.host: localhost
#transport.tcp.port: 9300-9400 (commented out for now)
#node.master: true (commented out for now)
#node.data: true (commented out for now)
#discovery.type: single-node
cluster.initial_master_nodes: ["node-1", "node-2"]
Anyone have any ideas?

booting elasticsearch on machine with 2GB RAM

I keep getting the following error while trying to run Elasticsearch on an SSD machine with 2 GB RAM.
elasticsearch[1234] : # There is insufficient memory for the Java Runtime Environment to continue.
elasticsearch[1234] : # Native memory allocation (mmap) failed to map 1973026816 bytes for committing reserved memory.
I modified the default config /etc/init.d/elasticsearch with the following options:
ES_JAVA_OPTS="-Xms1g -Xmx1g"
ES_HEAP_SIZE=1g
I restarted elasticsearch but I continue to get the same error.
sudo /bin/systemctl restart elasticsearch.service
Any ideas?
You should set Xms and Xmx in the jvm.options file. (/etc/elasticsearch/jvm.options)
You can also use environment variables (ES_JAVA_OPTS="-Xms1g -Xmx1g"), but you need to comment out the settings in jvm.options for that to work.
PS: Assuming 5.x since you didn't specify the version.
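The clash the answer describes is easy to check for: list which heap flags in jvm.options are still active and would conflict with ES_JAVA_OPTS (a sketch with a sample file; point it at /etc/elasticsearch/jvm.options in practice):

```shell
# List uncommented -Xms/-Xmx lines; commented ones (## -Xms1g) are ignored.
cat > /tmp/jvm.options.sample <<'EOF'
## -Xms1g
## -Xmx1g
EOF
grep -E '^-Xm[sx]' /tmp/jvm.options.sample || echo "no active heap flags"
```

If the grep prints any line, those values win over (or conflict with) whatever ES_JAVA_OPTS sets, so comment them out first.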

Elasticsearch 5.2.0 - Could not reserve enough space for 2097152KB object heap

On my Windows 10 machine I'm trying to start Elasticsearch 5.2.0, which fails with the following error:
D:\Tools\elasticsearch-5.2.0\bin>elasticsearch.bat
Error occurred during initialization of VM
Could not reserve enough space for 2097152KB object heap
Right now I have 20GB free RAM.
How can I resolve this issue?
Change the JVM options of Elasticsearch before launching it.
Go to config/jvm.options and change the values of
-Xms2g -> a few hundred megabytes (e.g. 200 MB)
-Xmx2g -> a few hundred megabytes (e.g. 500 MB)
Here 2g means 2 GB, so for 200 MB the value should be 200m.
For example, change it to the values below:
-Xms200m
-Xmx500m
It worked for me.
I updated to the latest JDK version, 1.8.0_121 64-bit (I had 1.8.0_90), and the issue is gone.
