elasticsearch JVM memory red - elasticsearch

I have a 3-node elasticsearch cluster.
My ES heap size is configured to 1 GB on each node.
Max locked memory is set to unlimited, and memory lock is set to true on all nodes.
ulimit -v shows unlimited.
My nodes are up on KVM VMs.
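For reference, that setup corresponds roughly to the following on each node (paths assume a package install, and "mem alloc" is taken here to mean the bootstrap memory-lock setting):

    # /etc/elasticsearch/jvm.options -- heap pinned to 1 GB as described above
    -Xms1g
    -Xmx1g

    # /etc/elasticsearch/elasticsearch.yml -- lock the heap in RAM
    bootstrap.memory_lock: true

    # systemd override so "max locked memory" stays unlimited
    # /etc/systemd/system/elasticsearch.service.d/override.conf
    [Service]
    LimitMEMLOCK=infinity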
I started Logstash / Elasticsearch, and once I ingest a substantial amount of logs, the Elasticsearch JVM memory % turns red, climbing close to 100%, and the JVM never releases the memory.
Does anyone know how to tune the JVM so that garbage collection works effectively and the heap doesn't stay saturated? My nodes end up crashing.
Thank you

Related

memory management for elasticsearch

I am trying to work out a good balance of total memory in a three-node ES cluster.
I have a three-node Elasticsearch cluster, each node with 32 GB memory and 8 vCPUs. Which combination would be most suitable for balancing memory between all the components? I know there is no fixed answer, but I am trying to get as close as I can.
The components in use will be Beats (Filebeat, Metricbeat, Heartbeat), Logstash, Elasticsearch, and Kibana.
The main use case for this cluster is indexing application logs and querying them through curl calls, e.g. average response time over 7 or 30 days, or counts of the different status codes over the last 24 hours or 7 days, so aggregations will be used. The other use case is monitoring, i.e. viewing logs through Kibana, but no ML jobs, dashboard creation, etc.
After going through the official docs below, the recommended heap sizes are:
logstash -
https://www.elastic.co/guide/en/logstash/current/jvm-settings.html#heap-size
The recommended heap size for typical ingestion scenarios should be no less than 4GB and no more than 8GB.
elasticsearch -
https://www.elastic.co/guide/en/elasticsearch/reference/current/advanced-configuration.html#set-jvm-heap-size
Set Xms and Xmx to no more than 50% of your total memory. Elasticsearch requires memory for purposes other than the JVM heap.
Kibana -
I haven't found a default or recommended memory figure for Kibana, but in our single-node test cluster with 8 GB memory it is taking about 1.4 GB in total (256 MB / 1.4 GB).
beats -
I haven't found a default or recommended memory figure for Beats either, but they will also consume some amount.
Which of the combinations below would be ideal?
Option 1 (per the official recommendations above, i.e. 50% for the OS and 50% for Elasticsearch):
32 GB = 16 GB for the OS + 16 GB for the Elasticsearch heap.
Out of the OS's 16 GB: 4 GB for Logstash, say 4 GB for the three Beats, and 2 GB for Kibana.
That leaves the OS with 6 GB, and if any new component has to be installed in the future (say APM, or anything else OS-related), it will have to share that 6 GB with the OS.
Option 2 (25% for Elasticsearch):
32 GB = 8 GB for the Elasticsearch heap.
4 GB for Logstash + 4 GB for Beats + 2 GB for Kibana.
That leaves 14 GB for the OS and any future component.
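As a rough illustration only (the values are simply the ones from option 2 above, and the file locations assume default package installs), pinning those heap sizes would look like this:

    # Elasticsearch: /etc/elasticsearch/jvm.options (or a file under jvm.options.d/ on 7.x+)
    -Xms8g
    -Xmx8g

    # Logstash: config/jvm.options
    -Xms4g
    -Xmx4g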
Am I missing anything that could change this memory split?
Any suggestion, whether a tweak to the combinations above or an entirely new one, is appreciated.
Thanks,

Elasticsearch shard sizes and JVM memory for elastic cloud configuration

I have a 2-node ES cluster (Elastic Cloud) with a 60 GB heap size.
Below are my indices and their shard allocation (columns: health, status, index, primary shards, replicas, docs.count, docs.deleted, store.size, pri.store.size):
green open prod-master-account 6 0 6871735 99067 4.9gb 4.9gb
green open prod-master-categories 1 1 221 6 3.5mb 1.7mb
green open prod-v1-apac 4 1 10123830 1405510 11.4gb 5.6gb
green open prod-v1-emea 9 1 28608447 2405254 30.6gb 15gb
green open prod-v1-global 10 1 94955647 12548946 128.1gb 61.2gb
green open prod-v1-latam 2 1 4398361 471038 4.7gb 2.3gb
green open prod-v1-noram 9 1 51933712 6188480 60.1gb 29.2gb
JVM memory usage is above 60%. I want to downgrade this cluster to a lower heap size,
but the downgrade fails each time with a circuit-breaker error because JVM memory is high.
I want to know why the JVM memory is still so high. How can I keep it low? Am I doing something wrong with the sharding?
The guides say to keep at most 20 shards per GB of heap, and looking at my configuration I am under that limit.
How can I downgrade this cluster to one with a smaller heap?
Much appreciated!
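(For context, the per-node shard distribution behind these numbers can be pulled with the standard _cat APIs; the endpoint below is just a placeholder.)

    # How many shards and how much data each node carries
    curl -s 'https://<cluster-endpoint>/_cat/allocation?v'

    # Shard-by-shard sizes, handy to check against the ~20-shards-per-GB-of-heap guideline
    curl -s 'https://<cluster-endpoint>/_cat/shards?v&h=index,shard,prirep,store,node'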
A 60 GB heap is not recommended for Elasticsearch (or any other JVM process): beyond roughly 32 GB the JVM can no longer use compressed object pointers (compressed oops), so you won't get optimal performance.
Please refer to the official Elasticsearch doc on heap settings for more info.
You can try the following to optimize the ES heap size:
If you have machines with a lot of RAM, prefer mid-size machines and allocate 50% of the RAM to the ES heap (without crossing the ~32 GB threshold).
Assign fewer primary shards and add replica shards for better search performance.
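A quick way to sanity-check both points (a sketch only; the endpoint is a placeholder) is to confirm whether compressed oops are still in use and how much heap pressure each node is under:

    # Are compressed object pointers in use? (they are dropped above ~32 GB of heap)
    curl -s 'https://<cluster-endpoint>/_nodes/jvm?filter_path=nodes.*.jvm.using_compressed_ordinary_object_pointers'

    # Current heap pressure per node
    curl -s 'https://<cluster-endpoint>/_cat/nodes?v&h=name,heap.percent,heap.max'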

Why does Elasticsearch Cluster JVM Memory Pressure keep increasing?

The JVM memory pressure of my AWS Elasticsearch cluster has been increasing consistently. The pattern I have seen over the last 3 days is that it rises by about 1.1% every hour. This is on one of the 3 master nodes I have provisioned.
All other metrics seem to be in the normal range. CPU is under 10%, and there are barely any indexing or search operations being performed.
I have tried clearing the fielddata cache for all indices as mentioned in this document, but that has not helped.
Can anyone help me understand what might be the reason for this?
Got this answer from AWS Support
I checked the particular metric and can also see the JVM usage increasing over the last few days. However, I do not think this is an issue, as JVM usage is expected to grow over time. Garbage collection in ES runs once the JVM reaches 75% (it is currently around 69%), after which you should see a drop in the JVM metric of your cluster. If the JVM stays continuously above 75% and does not come down after GCs, that is a problem and should be investigated.
The other thing you mentioned, that clearing the fielddata cache for all indices did not help reduce JVM usage: that is because the dedicated master nodes do not hold any index data or the related caches. Clearing caches should help reduce JVM usage on the data nodes.
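For what it's worth, checking where the cache memory actually sits and clearing only the fielddata cache would look roughly like this (the domain endpoint is a placeholder); as the answer notes, any drop would show on the data nodes rather than the dedicated masters:

    # Heap, fielddata and query-cache memory per node
    curl -s 'https://<domain-endpoint>/_cat/nodes?v&h=name,node.role,heap.percent,fielddata.memory_size,query_cache.memory_size'

    # Clear only the fielddata cache across all indices
    curl -s -XPOST 'https://<domain-endpoint>/_cache/clear?fielddata=true'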

heap size when running elasticsearch cluster on kubernetes

I am running an Elasticsearch cluster on a Kubernetes cluster and need to increase the heap size. Right now the heap size is 4 GB and the memory allocated to the pod is 8 GB. When setting up Elasticsearch clusters on VMs/bare metal I have always followed the principle that the heap size should not be more than 50% of physical RAM (see https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html).
My question is: does this principle apply in the same way when running Elasticsearch on Kubernetes, and if not, how should I decide the heap size there?
The heap size should be half the RAM allocated to the Pod.
From the Elasticsearch guide:
The heap size should be half the size of RAM allocated to the Pod. To minimize disruption caused by Pod evictions due to resource contention, you should run Elasticsearch pods at the "Guaranteed" QoS level by setting both requests and limits to the same value.
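A minimal sketch of what that looks like in a Pod/StatefulSet container spec, assuming the heap is passed via ES_JAVA_OPTS and using the 8 GB pod / 4 GB heap split from the question (names and values here are illustrative):

    containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
        env:
          - name: ES_JAVA_OPTS
            value: "-Xms4g -Xmx4g"   # half of the 8Gi below
        resources:
          requests:
            memory: "8Gi"
            cpu: "2"
          limits:
            memory: "8Gi"            # requests == limits -> "Guaranteed" QoS
            cpu: "2"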

how to limit memory usage of elasticsearch in ubuntu 17.10?

My Elasticsearch service is consuming around 1 GB.
My total memory is 2 GB, and the Elasticsearch service keeps getting shut down. I guess the reason is the high memory consumption. How can I limit the usage to just 512 MB?
This is the memory usage before starting Elasticsearch; after running sudo service elasticsearch start, the memory consumption jumps.
I appreciate any help! Thanks!
From the official doc
The default installation of Elasticsearch is configured with a 1 GB heap. For just about every deployment, this number is usually too small. If you are using the default heap values, your cluster is probably configured incorrectly.
So you can change it like this
There are two ways to change the heap size in Elasticsearch. The easiest is to set an environment variable called ES_HEAP_SIZE. When the server process starts, it will read this environment variable and set the heap accordingly. As an example, you can set it via the command line as follows: export ES_HEAP_SIZE=512m
But it's not recommended: you simply can't run Elasticsearch optimally with so little RAM available.
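One caveat: ES_HEAP_SIZE only applies to older releases; on Elasticsearch 5.x and later the heap is set through jvm.options or ES_JAVA_OPTS instead, so the same 512 MB cap would look roughly like this (path assumes a package install):

    # /etc/elasticsearch/jvm.options
    -Xms512m
    -Xmx512m

    # or, equivalently, as an environment variable
    export ES_JAVA_OPTS="-Xms512m -Xmx512m"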

Resources