Understanding Elasticsearch JVM heap usage

Folks,
I am trying to reduce the memory usage of my Elasticsearch deployment (single-node cluster).
I can see 3 GB of JVM heap space being used.
To optimize, I first need to understand the bottleneck.
I have a limited understanding of how the JVM heap usage is split.
Field data looks to consume 1.5 GB, and the filter cache and query cache combined consume less than 0.5 GB, which adds up to 2 GB at most.
Can someone help me understand where Elasticsearch eats up the rest of the 1 GB?

I can't tell for your exact setup, but in order to know what's going on in your heap, you can use the jvisualvm tool (bundled with the JDK) together with Marvel or the bigdesk plugin (my preference) and the _cat APIs to analyze it.
As you've rightly noticed, the heap hosts three main caches, namely:
the fielddata cache: unbounded by default, but can be controlled with indices.fielddata.cache.size (in your case it seems to be around 50% of the heap, probably due to the fielddata circuit breaker)
the node query/filter cache: 10% of the heap
the shard request cache: 1% of the heap but disabled by default
There is a nice mind map available here (kudos to Igor Kupczyński) that summarizes the roles of the caches. That leaves more or less ~30% of the heap (1 GB in your case) for all the other object instances that ES needs to create in order to function properly (more about this later).
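If you want to see how much heap each of these caches actually uses on your node, the node stats API breaks it down. Here's a minimal sketch (the metric names fielddata, query_cache and request_cache match reasonably recent Elasticsearch versions; older releases report filter_cache instead):
$ curl -s 'localhost:9200/_nodes/stats/indices/fielddata,query_cache,request_cache,segments?human&pretty'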
Here is how I proceeded in my local environment. First, I started my node fresh (with -Xmx1g) and waited for green status. Then I started jvisualvm and hooked it onto my Elasticsearch process. I took a heap dump from the Sampler tab so that I could compare it later with another dump. My heap initially looked like this (only 1/3 of the max heap allocated so far):
I also checked that my field data and filter caches were empty:
Just to make sure, I also ran /_cat/fielddata and as you can see there's no heap used by field data yet since the node just started.
$ curl 'localhost:9200/_cat/fielddata?bytes=b&v'
id host ip node total
TMVa3S2oTUWOElsBrgFhuw iMac.local 192.168.1.100 Tumbler 0
This is the initial situation. Now, we need to warm this all up a bit, so I started my back- and front-end apps to put some pressure on the local ES node.
After a while, my heap looked like this, so its size had increased by more or less 300 MB (139 MB -> 452 MB; not much, but I ran this experiment on a small dataset):
My caches have also grown a bit to a few megabytes:
$ curl 'localhost:9200/_cat/fielddata?bytes=b&v'
id host ip node total
TMVa3S2oTUWOElsBrgFhuw iMac.local 192.168.1.100 Tumbler 9066424
At this point I took another heap dump to gain insight into how the heap had evolved; I computed the retained sizes of the objects and compared them with the first dump I took just after starting the node. The comparison looks like this:
Among the objects that increased in retained size, the usual suspects are maps, of course, and any cache-related entities. But we can also find the following classes:
NIOFSDirectory instances, used to read Lucene segment files from the filesystem
A lot of interned strings in the form of char arrays or byte arrays
Doc values related classes
Bit sets
etc
As you can see, the heap hosts the three main caches, but it is also where all the other Java objects that the Elasticsearch process needs reside, and those are not necessarily cache-related.
So if you want to control your heap usage, you obviously have no control over the internal objects that ES needs to function properly, but you can definitely influence the sizing of your caches. If you follow the links in the first bullet list, you'll get a precise idea of what settings you can tune.
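For example, here's a hedged sketch of the kind of knobs involved, using setting names from the links above (exact names, and whether they are static or dynamic, vary by version; paths are illustrative):
# Static setting: goes into elasticsearch.yml and needs a node restart
$ echo 'indices.fielddata.cache.size: 20%' >> /etc/elasticsearch/elasticsearch.yml
# The fielddata circuit breaker limit is dynamic and can be changed via the API
# (recent versions also require -H 'Content-Type: application/json'):
$ curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "persistent": { "indices.breaker.fielddata.limit": "40%" }
}'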
Also, tuning the caches might not be the only option; maybe you need to rewrite some of your queries to be more memory-friendly, or change your analyzers or some field types in your mapping, etc. Hard to tell in your case without more information, but this should give you some leads.
Go ahead and launch jvisualvm the same way I did here and watch how your heap grows while your app (searching + indexing) hits ES; you should quickly gain some insight into what's going on in there.

Marvel only plots some of the heap consumers that need to be monitored, such as the caches in this case.
The caches represent only a portion of the total heap usage. There are many other objects that occupy heap memory, and those may not be plotted directly in the Marvel interface.
Hence, not all of the heap used by ES is occupied by the caches.
To clearly understand the exact heap usage of the different objects, you should take a heap dump of the process and then analyze it with a memory analyzer tool, which can give you the exact picture.
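For instance, a minimal sketch of taking such a dump with the JDK's jmap (assuming the JDK tools are on the PATH and you can attach to the Elasticsearch process):
# Find the Elasticsearch PID and dump only live objects to a file
$ ES_PID=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch | head -n 1)
$ jmap -dump:live,format=b,file=/tmp/es-heap.hprof "$ES_PID"
# Open /tmp/es-heap.hprof in Eclipse MAT or jvisualvm and compare retained sizes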

Related

How to determine what causes ES's query API instability

Normally, my ES query API takes less than 1 s, but sometimes these queries get slow.
The cluster consists of three 32 GB machines (16 GB allocated to ES). The index consists of 20 primary shards and 1 replica, with a doc count of 303,000,000, 500 GB of primary storage, and 1 TB of total storage.
Here's Kibana's monitoring data:
Personally, I think it's the result of GC. I want to add machines, but I need to find a reason to convince my leader.
Yes, it could be a GC problem. But can you be more specific? What do you mean by slow?
Anyway, it seems the allocated heap is way too large for your needs. You have a collection when the heap is at 12 GB (75% of 16 GB) and it goes back to 5 GB every time. That generates huge garbage collections.
You should try lowering the heap to something like 10 GB and check the impact on performance, GC count, and GC duration.
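A hedged sketch of how you could do that (depending on the Elasticsearch version, the heap is set either in jvm.options or through an environment variable; the 10 GB value is just the suggestion above):
# Elasticsearch 5.x and later: in jvm.options set
#   -Xms10g
#   -Xmx10g
# Older releases: set the heap through the environment before starting the node
$ export ES_HEAP_SIZE=10g
Then watch the GC count and duration in your monitoring to see whether the smaller heap helps.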
I recommend you also read this article https://www.elastic.co/blog/a-heap-of-trouble, especially the "Together We Can Prevent Forest Fires" part.

Ignite Query / SqlQuery / SqlFieldsQuery: are "lazy" and "page size" actually doing anything?

Some background: we are building a machine learning farm off Ignite cluster. Part of the use case is generating training data sets, which are gigantic matrices (up to billions of rows x thousands of columns in theory), one row per data entry in Ignite cache.
We are using SqlQuery to fetch the records matching a predicate locally on each node, then iterate over records, generate vectors, write them into an external storage for further consumption. Each node exports data independently, so for 32 nodes we end up with 32 exported data sets.
The problem: small data set generations worked OK, but larger data set generations (queries expected to return 10M+ rows per node) basically kill the entire cluster, blowing out the nodes due to OOME and GC hell. We looked at the "Performance and Debugging" section of the Ignite docs (https://apacheignite-sql.readme.io/docs/performance-and-debugging#result-set-lazy-load) and tried the lazy result set and page size settings. Nope.
The investigation (profiling, memory dumps, debugger, etc.) suggested that the query result sets are loaded into memory completely before our code gets to read the first row, even though we are using QueryCursor and iteration. Query#pageSize and SqlFieldsQuery#setLazy do not seem to have any effect on that whatsoever. Digging deeper, it turned out that H2 (which Ignite indexing uses) does not support server-side cursors in result sets at all, and can only be configured to buffer records onto disk (even with SSDs, this is a performance non-starter). Reading through the Ignite source code suggested that Query#pageSize and SqlFieldsQuery#setLazy are only used in distributed queries / scan queries, and Ignite still does full reads of result sets into memory on the nodes.
Sigh. These are the remediations we thought of:
Literally run thousands of tiny Ignite nodes with lots of unused memory, as opposed to dozens of big ones. Looks really cost-ineffective.
Use h2.maxMemoryRows set to some small value. Seems like a silly solution (memory-to-memory in the same JVM through a buffer on disk? really?).
Ditch SqlQuery / SqlFieldsQuery, bypass H2, and use ScanQuery. This is a ton of work (we would have to parse predicates and compile them into IgniteBiPredicate, plus these are full table scans, therefore no indexes / optimization, which sort of makes the whole ordeal pointless).
Persuade Thomas Mueller to somehow do server-side cursors in H2 (see How to set h2 to stream resultset?)?
Allocate gigantic ram drives on each ignite node to buffer H2 record sets there (alright, this got weird, I'd probably stop).
The question: is our assessment correct, and is result-set streaming simply not a thing for local queries? Are there any more sensible workarounds?
Would appreciate some insight from Ignite comrades if they read this. Thanks in advance!
UPDATE: in case it matters, the cluster is 8 nodes as follows:
Architecture: x86_64
CPU(s): 32
Model name: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
CPU MHz: 2345.904
BogoMIPS: 4600.05
Hypervisor vendor: Xen
Virtualization type: full
RAM: 240 GB
These are designated EC2 instances with ephemeral volumes for the Ignite /data mount.
lazy definitely does something: for a SELECT * FROM cache query, it's the difference between cluster failure and normal operation. Page size should work together with lazy.
Can you show your query? Also, I'm not sure that it really applies to local queries.
Updating this after seemingly solving the issue.
Note: the question was about local queries.
It looks like using SqlFieldsQuery with 'lazy' helped for local queries. Local SqlQuery seems to be ignoring the lazy flag.

How to decide the memory requirement for my Elasticsearch server

I have a scenario here,
The Elasticsearch DB has about 1.4 TB of data, with:
_shards": {
"total": 202,
"successful": 101,
"failed": 0
}
Each index is approximately 3 GB to 30 GB in size, and in the near future about 30 GB is expected to be added on a daily basis.
OS information:
NAME="Red Hat Enterprise Linux Server"
VERSION="7.2 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.2 (Maipo)"
The system has 32 GB of RAM and the filesystem is 2 TB (1.4 TB utilised). I have configured a maximum of 15 GB for the Elasticsearch server.
But this is not enough for me to query this DB. The server hangs on a single query hit.
I will be adding 1 TB to the filesystem on this server, so that the total available filesystem size will be 3 TB.
I am also planning to increase the memory to 128 GB, which is an approximate estimate.
Could someone help me work out how to determine the minimum RAM required for the server to handle at least 50 simultaneous requests?
It would be greatly appreciated if you could suggest any tool or formula to analyze this requirement. It would also be helpful if you could give me another scenario with numbers so that I can use it to determine my resource needs.
You will need to scale out using several nodes to stay efficient.
Elasticsearch has its per-node memory sweet spot at 64 GB of RAM, with 32 GB reserved for ES.
See https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html#_memory for more details. The book is a very good read if you are using Elasticsearch for serious work.
If you're here for a rule of thumb, I'd say that on modern ES and Java, 10-20GB of heap per TB of data (I'm thinking of the typical ELK use-case) should be enough. Multiplying by 2, that's 20-40GB of total RAM per TB; for the 1.4 TB in the question, that works out to roughly 14-28GB of heap and 28-56GB of total RAM.
Now for the detailed answer :) There are two types of memory that are relevant here:
JVM heap
OS cache (the OS will use free memory to cache index files)
OS cache is down to your IO requirements (queries do lots of small random IO). If you have a query-intensive use-case (e.g. E-commerce), you'll want to fit your whole index in the OS cache (or at least most of it). For logs and other time-series data, you typically have more expensive, rarer queries. There, if you have a local SSD you can make do with only a fraction of your data in the cache. I've seen servers with 4TB of disk space on 32GB of OS cache.
JVM heap can also be divided in two:
static memory, required even when the server is idle
transient memory, required by ongoing indexing/search operations
You'd see most of the static memory if you hit the _nodes/stats endpoint. It's best if you have these plotted in your Elasticsearch monitoring tool. You'll see it as segments_memory and various caches. For recent versions of Elasticsearch (e.g. 7.7 or higher), there's not a lot of memory like this - at least for most use-cases. I've seen ELK deployments with multiple TB of data definitely using less than 10GB of RAM for static memory. That said, you may reduce it by not storing info that you don't need. For example by not indexing fields you don't search on.
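If your monitoring tool isn't plotting these yet, the _cat API gives a quick per-node snapshot (a sketch; the column names below come from recent versions and may differ slightly in older ones):
$ curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.current,heap.percent,segments.memory,fielddata.memory_size,query_cache.memory_size,request_cache.memory_size'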
Transient memory will mainly depend on your queries: how often they run and how expensive they are. One-off expensive queries tend to be more dangerous, so avoid using too many levels of aggregations, massive size values, or queries that expand to too many terms (wildcards, fuzzy...). To accommodate those, you simply need heap. How much? It's really a matter of monitor-and-adjust.
Side-note: I don't agree with the general suggestion that you should stay under 32GB at all costs. With Java 11+ and G1GC, I've seen deployments with over 100GB of heap that run just fine. The overhead of uncompressed oops is not 10-20GB at every 30GB, like the docs suggest - that's an extrapolation of a worst-case scenario. In my experience, it's more like a few GB every 30GB - something like 10% for many deployments. This doesn't mean you have to use 100GB of heap; it's just that if you need a lot of heap in your cluster, you don't have to have hundreds of nodes (you can have fewer, bigger ones).
Speaking of GC, it may fall behind if you run many queries that aren't terribly expensive. And then you'd run out of heap, even if you have plenty. Monitoring should tell you this, as a full GC will eventually clean up the heap with a big pause (read: cluster instability). Here, Java 11 with G1GC and a low -XX:GCTimeRatio (e.g. 3) should fix the issue.
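As a sketch of those GC flags: on Elasticsearch 7.7+ custom JVM flags can go into a jvm.options.d file (the flag names are standard HotSpot options; drop any conflicting CMS defaults first, and treat the path as illustrative):
$ cat <<'EOF' > /etc/elasticsearch/jvm.options.d/gc.options
-XX:+UseG1GC
-XX:GCTimeRatio=3
EOF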
These links give a good overview of heap sizing and memory management, and from them you will be able to answer this yourself:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
https://www.elastic.co/guide/en/elasticsearch/guide/master/_limiting_memory_usage.html

Elasticsearch with different Java heaps, does it matter?

Say I've got 2 servers, one of which has -Xmx and -Xms set to 4G and the other to 2G.
Will Elasticsearch handle those performance differences when balancing? Or will both servers be hit purely based on indices, resulting in a (much more) likely OOM on the latter than the former?
By the way, I've set the properties indices.fielddata.cache.size, indices.breaker.fielddata.limit, indices.breaker.request.limit, and indices.breaker.total.limit on both servers, as Elasticsearch suggests.
This is important to me because, if it does, I'd have to change the index sharding based on guessed index strain, which would be a hassle (if not impossible).
Elasticsearch treats every node the same and balances documents equally between them. This means that Elasticsearch won't readjust based on hardware to get you optimal performance.
One thing to remember here is that a herd of bulls is only as fast as its slowest bull, and the same applies here. If the load is small enough that it does not eat up all the hardware of the 2 GB machine, then you should not see any issue. Otherwise, you will see a difference in memory-aggressive operations like aggregations.

Why does ElasticSearch Heap Size not return to normal?

I have setup elasticsearch and it works great.
I've done a few bulk inserts and a bit of load testing. However, it's been idle for a while and I'm not sure why the heap size doesn't shrink back to about 50 MB, which is what it was when it started. I'm guessing GC hasn't happened?
Please note the nodes are running on different machines on AWS. They are all on small instances and each instance has 1.7GB of RAM.
Any ideas?
Probably. Hard to say; the JVM manages the memory and does what it thinks is best. It may be avoiding GC cycles because they simply aren't necessary. In fact, it's recommended to set mlockall to true, so that the heap is fully allocated at startup and will never change.
It's not really a problem that ES is using memory for heap...memory is to be used, not saved. Unless you are having memory problems, I'd just ignore it and continue on.
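For reference, the mlockall setting mentioned above looks roughly like this (bootstrap.mlockall is the old name; newer Elasticsearch versions call it bootstrap.memory_lock, the usual memlock/ulimit prerequisites still apply, and the path is illustrative):
$ echo 'bootstrap.mlockall: true' >> /etc/elasticsearch/elasticsearch.yml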
Elasticsearch and Lucene maintain cache data to perform fast sorting and faceting.
If your queries do sorting, this may increase the Lucene FieldCache size, which may not be released because the objects there are not eligible for GC.
So the default threshold (CMSInitiatingOccupancyFraction) of 75% does not apply here.
You can manage FieldCache duration as explained here : http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-fielddata.html
