Heap size when running an Elasticsearch cluster on Kubernetes

I am running an Elasticsearch cluster on a Kubernetes cluster and need to increase the heap size. Right now the heap size is 4 GB and the memory allocated to the pod is 8 GB. When setting up an Elasticsearch cluster on VMs or bare-metal machines, I have always followed the principle that the heap size should be no more than 50% of physical RAM (see https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html).
Now my question is: does this principle apply the same way when running Elasticsearch on Kubernetes, and if not, how should the heap size be decided?

The heap size should be half the size of the RAM allocated to the pod.
From the Elasticsearch guide:
The heap size should be half the size of RAM allocated to the Pod. To minimize disruption caused by Pod evictions due to resource contention, you should run Elasticsearch pods at the "Guaranteed" QoS level by setting both requests and limits to the same value.
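
For illustration, here is a minimal sketch of a pod spec that follows both points: requests equal limits (which gives the pod "Guaranteed" QoS), and the heap is pinned to half the pod's memory via ES_JAVA_OPTS. The pod name and image tag are placeholders; the 8Gi/4g split matches the numbers in the question.

    apiVersion: v1
    kind: Pod
    metadata:
      name: es-data-0                # hypothetical name
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:8.13.0   # example tag
          env:
            - name: ES_JAVA_OPTS
              value: "-Xms4g -Xmx4g"   # heap = half of the 8Gi pod memory
          resources:
            requests:
              memory: "8Gi"
              cpu: "2"
            limits:
              memory: "8Gi"            # limits == requests -> Guaranteed QoS
              cpu: "2"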

Related

Memory management for Elasticsearch

I am trying to work out a good balance of total memory in a three-node ES cluster.
If I have a three-node Elasticsearch cluster, each node with 32 GB memory and 8 vCPUs, which combination would best balance memory between all the components? I know there is no fixed answer; I am just trying to get as close as I can.
The Elasticsearch components used will be Beats (Filebeat, Metricbeat, Heartbeat), Logstash, Elasticsearch, and Kibana.
The main use case for this cluster is indexing application logs and querying them via curl calls, e.g. fetching the average response time for the last 7 or 30 days, or counts of the different status codes for the last 24 hours or 7 days, so aggregations will be used. The other use case is monitoring and viewing logs through Kibana, but no ML jobs, dashboard creation, etc.
After going through the official docs below, the recommended heap sizes are:
Logstash -
https://www.elastic.co/guide/en/logstash/current/jvm-settings.html#heap-size
The recommended heap size for typical ingestion scenarios should be no less than 4GB and no more than 8GB.
Elasticsearch -
https://www.elastic.co/guide/en/elasticsearch/reference/current/advanced-configuration.html#set-jvm-heap-size
Set Xms and Xmx to no more than 50% of your total memory. Elasticsearch requires memory for purposes other than the JVM heap
Kibana -
I haven't found a default or recommended memory figure for Kibana, but in our test cluster (a single node with 8 GB memory) it is using 1.4 GB in total (256 MB/1.4 GB).
Beats -
I have not found a default or recommended memory figure for Beats either, but they will also consume some amount.
Which of the combinations below would be ideal?
Option 1: 32 GB = 16 GB for the OS + 16 GB for the Elasticsearch heap.
Out of the 16 GB on the OS side: 4 GB for Logstash, say 4 GB for the three Beats, and 2 GB for Kibana.
That leaves the OS with 6 GB, and if any new component has to be installed in the future (say APM or anything else on the OS side), it will have to share that 6 GB with the OS.
This follows the official recommendation for all components (i.e. 50% for the OS and 50% for Elasticsearch).
Option 2: 32 GB = 8 GB for the Elasticsearch heap (25% for Elasticsearch).
4 GB for Logstash + 4 GB for Beats + 2 GB for Kibana (see the sketch after this question for how these heap values would be set).
That leaves 14 GB for the OS and any future components.
Am I missing anything that could change this memory split?
Any suggestion for changing the above combinations, or any new combination, is appreciated.
Thanks,
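
For concreteness, here is a minimal sketch of how option 2's heap sizes could be applied. The file paths are the defaults for package installs, and the jvm.options.d override directory assumes Elasticsearch 7.7 or later; the values simply mirror the numbers above.

    # /etc/elasticsearch/jvm.options.d/heap.options -- Elasticsearch heap: 8 GB
    -Xms8g
    -Xmx8g

    # /etc/logstash/jvm.options -- Logstash heap: 4 GB
    -Xms4g
    -Xmx4g

Setting -Xms and -Xmx to the same value avoids heap resizing pauses, which both products recommend.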

Could decreasing the Elasticsearch heap size help improve search performance?

Since 01-13, search performance has slowed down; maybe some metric has reached a critical value, e.g. the index doc count or store size.
From the official documentation I got:
Elasticsearch heavily relies on the filesystem cache in order to make search fast. In general, you should make sure that at least half the available memory goes to the filesystem cache so that Elasticsearch can keep hot regions of the index in physical memory.
Right now the maximum used heap is 11.72 GB, while the Elasticsearch app is configured with a 16 GB heap (-Xms16g -Xmx16g).
So if I change the Elasticsearch heap size to 12 GB (-Xms12g -Xmx12g), will the filesystem cache be able to use more memory, and could search performance improve?
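
For reference, a sketch of the proposed change, using a jvm.options.d override file (supported on Elasticsearch 7.7+; the file name is illustrative):

    # /etc/elasticsearch/jvm.options.d/heap.options
    # Shrink the heap from 16 GB to 12 GB; the ~4 GB freed stays with the OS,
    # where the page cache (the "filesystem cache" in the docs) can use it.
    -Xms12g
    -Xmx12g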

How to change the Elasticsearch heap size in Elastic Cloud?

I am using a 14-day trial account of Elasticsearch. The account shows I have a 4.6 GB heap size. I want to reduce my heap size to 2 GB, so how can I do that? I have checked the following ways of changing the heap size:

    export ES_HEAP_SIZE=2g

or

    ES_JAVA_OPTS="-Xms2g -Xmx2g" ./bin/elasticsearch

But how can I reduce the heap size using one of the above options in Elastic Cloud?
Since Elastic Cloud is a managed service, end users do not have access to the backend master and data nodes in the cluster. Unfortunately, you cannot change the heap size setting of an Elastic Cloud Elasticsearch cluster. You can, however, scale down your cluster so that the heap memory allocated is also reduced. Alternatively, you could try emailing Elastic support at support@elastic.co and asking whether they can change the heap size for you, but I highly doubt that level of customization is offered for the Elastic Cloud service.

How to limit memory usage of Elasticsearch on Ubuntu 17.10?

My Elasticsearch service is consuming around 1 GB.
My total memory is 2 GB, and the Elasticsearch service keeps getting shut down. I guess the reason is the high memory consumption. How can I limit the usage to just 512 MB?
This was the memory usage before starting Elasticsearch; after running sudo service elasticsearch start, the memory consumption jumps.
I appreciate any help! Thanks!
From the official doc
The default installation of Elasticsearch is configured with a 1 GB heap. For just about every deployment, this number is usually too small. If you are using the default heap values, your cluster is probably configured incorrectly.
So you can change it like this:
There are two ways to change the heap size in Elasticsearch. The easiest is to set an environment variable called ES_HEAP_SIZE. When the server process starts, it will read this environment variable and set the heap accordingly. As an example, you can set it via the command line as follows:

    export ES_HEAP_SIZE=512m
But it's not recommended. You just can't run Elasticsearch in an optimal way with so little RAM available.
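
Note that ES_HEAP_SIZE is honored by older Elasticsearch releases (2.x and earlier); from 5.0 on, the heap is set in the jvm.options file instead. A sketch of the equivalent setting, assuming a package install:

    # /etc/elasticsearch/jvm.options
    -Xms512m
    -Xmx512m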

Elasticsearch JVM memory red

I have a three-node Elasticsearch cluster.
My ES heap size is configured to 1 GB on each node.
Max locked memory is set to unlimited, and memory lock (mlockall) is set to true on all nodes.
ulimit -v shows unlimited.
My nodes run on KVM VMs.
I started Logstash/Elasticsearch, and once I ingest a substantial amount of logs, the Elasticsearch JVM memory % turns red, close to 100%, and the JVM never releases memory.
Does anyone know how to tune the JVM effectively so that GC works properly and memory doesn't get saturated? My nodes end up crashing.
Thank you
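
As a first diagnostic step, the nodes stats API reports per-node heap usage and garbage-collection activity; a heap_used_percent that stays pinned near 100 across collections suggests the heap is simply undersized for the load. The host and port below are illustrative.

    # Check heap usage and GC counts on every node
    curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty'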
