We are using Hazelcast (v4.2) as a caching solution, with only the IMap data structure used to store the data.
In Management Center, we see that Map memory is less than 5% of the heap while "Others" takes up to 90%. The value for "Others" fluctuates between 50% and 90%.
We cannot work out what is taking this much memory. It sometimes brings down the cluster as well, when it reaches 100%.
Has anyone faced a similar issue?
Normally, my ES query API takes less than 1 s, but sometimes these queries get slow.
The cluster consists of three 32 GB machines (16 GB allocated to ES). The index consists of 20 primaries and 1 replica, with a doc count of 303,000,000, 500 GB of primary storage, and 1 TB of total storage.
Here's kibana's monitoring data:
[Kibana monitoring screenshot]
Personally, I think it's the result of GC. I want to add machines, but I need to find a reason to convince my manager.
Yes, it could be a GC problem. But can you be more specific? What do you mean by slow?
Anyway, it seems the allocated heap is way too large for your needs. A collection kicks in when the heap reaches 12 GB (75% of 16 GB) and it drops back to 5 GB every time, which generates huge garbage collections.
You should try lowering the heap to around 10 GB and check the impact on performance, GC count, and GC duration.
I also recommend reading this article, https://www.elastic.co/blog/a-heap-of-trouble, especially the "Together We Can Prevent Forest Fires" part.
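If it helps with convincing your manager, here is a minimal sketch (assuming Java 11+ and a node reachable at localhost:9200, which you will need to adjust) for pulling the GC counters from the node stats, so you can compare collection count and duration before and after changing the heap:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GcStats {
    public static void main(String[] args) throws Exception {
        // Node stats, JVM section only; the address is an assumption, adjust to your cluster.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_nodes/stats/jvm"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // In the JSON body, look at jvm.gc.collectors.*.collection_count and
        // collection_time_in_millis, and compare the numbers across test runs.
        System.out.println(response.body());
    }
}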
There are 3 objects stored in my map, a couple of MB each. They don't change, so it makes sense to cache them locally on the node. That's what I thought I was doing, until I realized the average get latency is huge and that slows down my computations considerably. See this Hazelcast console output:
This makes me wonder where that comes from. Is it those 90 and 48 misses, which I think happened at the start? The computations run in parallel, so I figure they could all have issued a get request before the entries were even cached, and thus none of them would benefit from the Near Cache at that point. Is there some pre-loading method I could run before I trigger all those parallel tasks? By the way, why is the entry memory 0 even though there are entries in that Near Cache data table?
Here is my map config:
<map name="commons">
    <in-memory-format>BINARY</in-memory-format>
    <backup-count>0</backup-count>
    <async-backup-count>0</async-backup-count>
    <eviction-policy>NONE</eviction-policy>
    <near-cache>
        <in-memory-format>OBJECT</in-memory-format>
        <max-size>0</max-size>
        <time-to-live-seconds>0</time-to-live-seconds>
        <max-idle-seconds>0</max-idle-seconds>
        <eviction-policy>NONE</eviction-policy>
        <invalidate-on-change>true</invalidate-on-change>
        <cache-local-entries>true</cache-local-entries>
    </near-cache>
</map>
The actual question is: why are there so many misses in the Near Cache, and is that where the huge average get latency may come from?
The latency that Management Center shows is the latency after a request hits the server. If you have a Near Cache and the request is served from it, that will not show up in Management Center. I suspect you should not be observing the high latency from your application. I see that there have been 34 events, so I assume this entry has been updated. When an entry is updated, it is evicted from the Near Cache, and the subsequent read will hit the server.
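If you want to warm the Near Cache up front, before triggering the parallel tasks, a minimal sketch could look like this (assuming Hazelcast 4.x package names - in 3.x, IMap lives in com.hazelcast.core - and placeholder keys, since I don't know the real ones):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

import java.util.List;

public class NearCacheWarmup {
    public static void main(String[] args) {
        // Assumes a member using the "commons" map config shown above; for a client,
        // use HazelcastClient.newHazelcastClient() with the same Near Cache config.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Object> commons = hz.getMap("commons");

        // Placeholder keys: replace with the real keys of the three cached objects.
        List<String> keys = List.of("key1", "key2", "key3");

        // A plain get() populates the Near Cache on this member, so the parallel
        // tasks that run afterwards should be served locally.
        for (String key : keys) {
            commons.get(key);
        }
    }
}

After this warm-up pass, the parallel reads should stay local as long as the entries are not updated in the meantime (an update invalidates the Near Cache entry, as described above).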
I have a scenario here.
The Elasticsearch DB has about 1.4 TB of data, with:
_shards": {
"total": 202,
"successful": 101,
"failed": 0
}
Each index is approximately 3 GB to 30 GB in size, and in the near future we expect to add around 30 GB per day.
OS information:
NAME="Red Hat Enterprise Linux Server"
VERSION="7.2 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.2"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.2 (Maipo)"
The system has 32 GB of RAM and the filesystem is 2 TB (1.4 TB utilised). I have configured a maximum of 15 GB of heap for the Elasticsearch server.
But this is not enough for me to query this DB; the server hangs on a single query.
I will be adding 1 TB to the filesystem on this server, so the total available filesystem size will be 3 TB.
I am also planning to increase the memory to 128 GB, which is a rough estimate.
Could someone help me determine the minimum RAM required for the server to respond to at least 50 simultaneous requests?
It would be greatly appreciated if you could suggest a tool or formula to analyse this requirement. It would also be helpful if you could share another scenario with numbers, so that I can use it to determine my resource needs.
You will need to scale using several nodes to stay efficient.
Elasticsearch has its per-node memory sweet spot at 64 GB of RAM, with 32 GB reserved for the ES heap.
See https://www.elastic.co/guide/en/elasticsearch/guide/current/hardware.html#_memory for more details. The book is a very good read if you are using Elasticsearch for serious work.
If you're here for a rule of thumb, I'd say that on modern ES and Java, 10-20GB of heap per TB of data (I'm thinking of the typical ELK use-case) should be enough. Multiplying by 2, that's 20-40GB of total RAM per TB.
Now for the detailed answer :) There are two types of memory that are relevant here:
JVM heap
OS cache (the OS will use free memory to cache index files)
OS cache is down to your IO requirements (queries do lots of small random IO). If you have a query-intensive use-case (e.g. E-commerce), you'll want to fit your whole index in the OS cache (or at least most of it). For logs and other time-series data, you typically have more expensive, rarer queries. There, if you have a local SSD you can make do with only a fraction of your data in the cache. I've seen servers with 4TB of disk space on 32GB of OS cache.
JVM heap can also be divided in two:
static memory, required even when the server is idle
transient memory, required by ongoing indexing/search operations
You'd see most of the static memory if you hit the _nodes/stats endpoint. It's best if you have these plotted in your Elasticsearch monitoring tool. You'll see it as segments_memory and various caches. For recent versions of Elasticsearch (e.g. 7.7 or higher), there's not a lot of memory like this - at least for most use-cases. I've seen ELK deployments with multiple TB of data definitely using less than 10GB of RAM for static memory. That said, you may reduce it by not storing info that you don't need. For example by not indexing fields you don't search on.
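For example, a small sketch (Java 11+; the localhost:9200 address is an assumption) that pulls the segments part of the node stats, which is where most of that static memory shows up:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SegmentsMemory {
    public static void main(String[] args) throws Exception {
        // Node stats restricted to the indices/segments metric.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_nodes/stats/indices/segments"))
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Look at indices.segments and its *_memory_in_bytes fields per node.
        System.out.println(response.body());
    }
}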
Transient memory will mainly depend on your queries: how often they run and how expensive they are. One-off expensive queries tend to be more dangerous, so avoid using too many levels of aggregations, massive size values, or queries that expand to too many terms (wildcards, fuzzy...). To accommodate those, you simply need heap. How much? It's really a matter of monitor-and-adjust.
Side-note: I don't agree with the general suggestion that you should stay under 32GB at all costs. With Java 11+ and G1GC, I've seen deployments with over 100GB of heap that run just fine. The overhead of uncompressed oops is not 10-20GB at every 30GB, like the docs suggest - that's an extrapolation of a worst-case scenario. In my experience, it's more like a few GB every 30GB - something like 10% for many deployments. This doesn't mean you have to use 100GB of heap, it's just that if you need a lot of heap in your cluster, you don't have to have hundreds of nodes (you can have fewer bigger ones).
Speaking of GC, it may fall behind if you run many queries that aren't terribly expensive. And then you'd run out of heap, even if you have plenty. Monitoring should tell you this, as a full GC will eventually clean up the heap with a big pause (read: cluster instability). Here, Java 11 with G1GC and a low -XX:GCTimeRatio (e.g. 3) should fix the issue.
These links give a good overview of heap sizing and memory management, and should let you answer this yourself:
https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html
https://www.elastic.co/guide/en/elasticsearch/guide/master/_limiting_memory_usage.html
I'm trying to insert about 250 million documents that are each roughly 400 bytes into MongoDB 3.0 with WiredTiger. I need to search on only one short string key, _user_lower. Although I'm using WiredTiger now, which is much better than MMAPv1, I did use MMAPv1 first and had similar issues.
My server (a very cheap VPS) has:
250 GB magnetic disk
1 GB RAM
2 GB Swap
2.1 GHz single-core CPU
I know that this machine is really slow, and I'm asking it to do something a bit unrealistic. But I'm confused about how it started so fast with one index, and the second just ruined the performance:
I inserted all the data that I had at the time (about 250M rows) without any index except on _id. This performed very well, considering my awful hardware:
Approximately 5000 inserts per second (totally acceptable)
This rate was nearly constant for the 14 hours it took to complete
The index size on _id once complete was nearly 2.5GB. Note that this is more than double my physical RAM.
The RES of the process didn't exceed 450 MB according to mongostat.
No swapping
top seemed to indicate that CPU time wasn't all being spent waiting for the disk (so a significant amount was spent in userspace, presumably with WiredTiger in the snappy code)
Then I built a (non-unique) index on the only field I need to query by, _user_lower. This took 7.7 hours, which is fine since that's a one-time deal. The index ended up being 1.6 GB, which seems really low to me when compared to the _id index. The RES went up to about 750 MB.
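For reference, building that index with the current MongoDB Java driver (mongodb-driver-sync) would look roughly like this; the connection string, database and collection names below are placeholders, not the ones I actually use:

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Indexes;
import org.bson.Document;

public class BuildUserLowerIndex {
    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> users =
                    client.getDatabase("mydb").getCollection("users"); // placeholder names
            // Non-unique (the default) ascending index on the only field I query by.
            users.createIndex(Indexes.ascending("_user_lower"));
        }
    }
}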
Then, I downloaded a new data set to load. It was only 102 MB (238 K documents). I loaded it in the same way, using mongoimport, but this time:
Only 80 inserts per second (slower at times)
RES stayed at around 750 MB
top says almost 100% of the CPU was spent waiting for IO
Of course, load went through the roof.
I could understand a sizable performance hit, since that index has to be updated. But I didn't expect this much. I've read all over the place that my indexes should fit in RAM, but the performance was great during the initial insert, where the index quickly outgrew my memory.
Can I optimize the _user_lower index at all? I don't know what this would even mean, but maybe only index the first few characters? I'm definitely willing to halve the query performance in exchange for tripling the insert performance.
What accounts for the massive performance hit? How do I fix it without new hardware? I'm not really attached to MongoDB, so alternatives that don't have these performance characteristics are fine. I have an idea that just uses flat files which would probably work but I don't want to write all that code.
When adding new items to a collection, the database will have to keep the index up-to-date. Since the index in MongoDB is a B-Tree by default, that means it will have to insert an item in the tree. While that isn't a particularly expensive operation in the best case, it comes with two potential performance problems:
performance jitter: from time to time, the B-Tree bucket might be full, requiring a bucket split and hence a lot more operations than the 'simple' insert
the insert destination must be readily available
In this case, the latter is likely to cause trouble: because the insertion of a name hits a random node in the tree (i.e., the name insertion doesn't follow a pattern) and your RAM is smaller than the index, chances are high that the destination must be fetched from disk. Unfortunately, the performance of disk seeks is orders of magnitude lower than main memory references. If you're unlucky, the first ref location requires another disk seek, so a single insert needs multiple disk reads before MongoDB can even begin writing. That can take hundreds of milliseconds, and with spinning disks or some contention on typical IaaS infrastructure, even seconds.
Because ObjectIds are generated monotonically (the timestamp is the most significant part), the insertion always happens at the end and it is possible to keep the destination largely in RAM. Performance jitter, i.e. problem 1 might still be an issue since a bucket split might require a disk seek, but it happens so rarely compared to the first case that it doesn't wreck average performance, which should explain the observed behavior.
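To illustrate the monotonicity, here is a tiny sketch (assuming the org.bson classes from the MongoDB Java driver are on the classpath):

import org.bson.types.ObjectId;

public class ObjectIdOrder {
    public static void main(String[] args) {
        ObjectId previous = new ObjectId();
        for (int i = 0; i < 5; i++) {
            ObjectId current = new ObjectId();
            // The leading bytes are a timestamp (plus a per-process counter), so ids
            // generated by one client compare in increasing order and land at the
            // right edge of the _id index.
            System.out.println(current + " >= " + previous + " : " + (current.compareTo(previous) >= 0));
            previous = current;
        }
    }
}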
Also, when the bucket is filled by a monotonically increasing value, MongoDB will split the bucket when it is 90% filled; with random insertion, splits will happen a lot earlier, at 50%, so the tree is a little more 'dense' in that case.
I have setup elasticsearch and it works great.
I've done a few bulk inserts and did a bit of load testing. However, it's been idle for a while and I'm not sure why the heap size doesn't go back down to about 50 MB, which is what it was at startup. I'm guessing GC hasn't happened?
Please note the nodes are running on different machines on AWS. They are all on small instances and each instance has 1.7GB of RAM.
Any ideas?
Probably. It's hard to say; the JVM manages the memory and does what it thinks is best. It may be avoiding GC cycles because they simply aren't necessary. In fact, it's recommended to set mlockall to true, so that the heap is fully allocated at startup and never changes.
It's not really a problem that ES is using memory for heap...memory is to be used, not saved. Unless you are having memory problems, I'd just ignore it and continue on.
Elasticsearch and Lucene maintain cached data to perform fast sorting and faceting.
If your queries do sorts, this may increase the Lucene FieldCache size, which may not be released because the objects in it are not eligible for GC.
So the default threshold (CMSInitiatingOccupancyFraction) of 75% does not help here.
You can manage the FieldCache duration as explained here: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-fielddata.html
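If you just want to release that memory without a restart, the clear cache API can drop fielddata. A minimal sketch (Java 11+; the localhost:9200 address is an assumption, and note that very old versions used the parameter name field_data instead of fielddata):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ClearFielddata {
    public static void main(String[] args) throws Exception {
        // POST /_cache/clear?fielddata=true drops fielddata (FieldCache) entries on all indices.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/_cache/clear?fielddata=true"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}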