In general, will increasing the number of shards utilize more CPU? - elasticsearch

Right now I have 5 nodes and indices are set to 5 shards and 1 replica. Shards are generally around 4 gigs.
Performance is good, CPU utilization is low, there are no heap or memory issues, and IO wait is acceptable, but sometimes if I search back very far in Kibana it times out. When it times out, CPU doesn't spike and the load average on the nodes doesn't blow up.
The nodes have 8 cores and CPU averages only around 3%.
So I know more shards add overhead, but I think I have the capacity to handle that.
My question is: will more shards improve query performance by opening up more threads? I know I shouldn't manually tweak the thread pool settings, so I was thinking about increasing the number of shards instead. Essentially, I see big queries time out while resources sit underutilized, so I want to tune things to make use of those resources.
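A quick way to check whether the search thread pools are actually queuing or rejecting work while one of those long Kibana queries runs (a minimal check, assuming the nodes answer on the default localhost:9200 and a 5.x-or-later cluster where the pool name can be given in the path):

curl -s 'localhost:9200/_cat/thread_pool/search?v'

If "active" stays low and "rejected" stays at 0 during the slow query, the timeout is probably not thread starvation that extra shards would fix.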

Related

Elastic search down after several months

I have an Elasticsearch cluster with 2 nodes running on an instance with a 2-core CPU and 8GB of RAM. Each node has "ES_JAVA_OPTS=-Xms3g -Xmx3g" passed in. I have 4 indices, each with 2 shards and 1 replica. After 2 months, it went down. I checked the instance monitoring and didn't see any CPU or memory spike, and the disk has plenty of free space. I checked the ES log; the only thing I see is
[gc][2845340] overhead, spent [339ms] collecting in the last [1s]
Any idea why?
When the garbage collector starts reporting that it spends ~30% of the time collecting (here, 339ms out of the last 1s), it usually means there's no longer enough heap.
You should increase the heap a little until the GC stops reporting this. You can increase the heap up to half the available memory, but not more than ~30GB.
To this end, change the setting below, and make sure that Xms is always equal to Xmx and never goes over 30g.
ES_JAVA_OPTS=-Xms4g -Xmx4g
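If you want to confirm the heap pressure before (or after) resizing, the node stats API exposes heap usage and GC totals; a minimal check, assuming the node is reachable on the default localhost:9200:

curl -s 'localhost:9200/_nodes/stats/jvm?pretty' | grep -E 'heap_used_percent|collection_count|collection_time_in_millis'

A heap_used_percent that keeps hovering near the top, together with steadily growing old-generation collection times, is the usual confirmation that the heap is too small.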

Elasticsearch: High CPU usage by Lucene Merge Thread

I have an ES 2.4.1 cluster with 3 master and 18 data nodes that collects log data, with a new index being created every day. Over a day an index grows to about 2TB. Indices older than 7 days get deleted. Very few searches are performed on the cluster, so the main goal is to increase indexing throughput.
I see a lot of the following exceptions, which are another symptom of what I describe next:
EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4#5a7d8a24 on EsThreadPoolExecutor[bulk, queue capacity = 50, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor#5f9ef44f[Running, pool size = 8, active threads = 8, queued tasks = 50, completed tasks = 68888704]]]
The nodes in the cluster are constantly pegging CPU. I increased the index refresh interval to 30s, but that had little effect. When I check hot threads I see multiple "Lucene Merge Thread" entries per node using 100% CPU. I also noticed that the segment count stays at around 1000 per shard, which seems like a lot. The following is an example of a segment stat:
"_2zo5": {
"generation": 139541,
"num_docs": 5206661,
"deleted_docs": 123023,
"size_in_bytes": 5423948035,
"memory_in_bytes": 7393758,
"committed": true,
"search": true,
"version": "5.5.2",
"compound": false
}
The extremely high "generation" number worries me, and I'd like to optimize segment creation and merging to reduce CPU load on the nodes.
Details about indexing and cluster configuration:
Each node is an i2.2xl AWS instance with 8 CPU cores and 1.6T SSD drives
Documents are indexed constantly by 6 client threads with bulk size 1000
Each index has 30 shards with 1 replica
It takes about 25 sec per batch of 1000 documents
/_cat/thread_pool?h=bulk*&v shows that bulk.completed are equally spread out across nodes
Index buffer size and transaction durability are left at default
_all is disabled, but dynamic mappings are enabled
The number of merge threads is left at default, which should be OK given that I am using SSDs
What's the best way to go about it?
Thanks!
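For reference, the hot-thread and segment figures above come from the standard monitoring endpoints; a minimal way to reproduce those checks, assuming the default localhost:9200 and taking a hypothetical daily index name as an example:

curl -s 'localhost:9200/_nodes/hot_threads?threads=3'
curl -s 'localhost:9200/_cat/segments/logs-2016.11.01?v'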
Here are the optimizations I made to the cluster to increase indexing throughput:
Increased threadpool.bulk.queue_size to 500 because index requests were frequently overloading the queues
Increased the disk watermarks, because the default settings were too aggressive for the large SSDs we were using. I set "cluster.routing.allocation.disk.watermark.low": "100gb" and "cluster.routing.allocation.disk.watermark.high": "10gb" (a sketch of applying these settings follows this list)
Deleted unused indexes to free up resources ES uses to manage their shards
Increased the number of primary shards to 175, with the goal of keeping shard size under 50GB and having approximately one shard per processor
Set the client index batch size to 10MB, which seemed to work very well for us because the size of the indexed documents varied drastically (from KBs to MBs)
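A minimal sketch of applying the queue-size and watermark settings on an ES 2.x cluster (the yml key goes on each node; the watermarks can be applied dynamically), assuming the cluster answers on localhost:9200:

# elasticsearch.yml on each node
threadpool.bulk.queue_size: 500

# disk watermarks, applied dynamically to the whole cluster
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "10gb"
  }
}'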
Hope this helps others
I have run similar workloads, and your best bet is to switch to hourly indices and run optimize on the older ones to keep segment counts in check, for example as sketched below.
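A minimal sketch of that kind of cleanup, assuming a 2.x cluster on localhost:9200 and a hypothetical hourly index name (on 2.1+ the endpoint is _forcemerge, the successor to _optimize):

curl -XPOST 'localhost:9200/logs-2016.11.01-08/_forcemerge?max_num_segments=1'

Force-merging is I/O- and CPU-heavy, so it is best run only on indices that are no longer being written to.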

Elasticsearch sharding basics with respect to query speed

Pardon my ignorance if this question is too broad or vague. I'm playing with Elasticsearch with out-of-the-box settings on my laptop and it works just great.
It's an 8-core MacBook, 6GB of heap is given to Elasticsearch, and it works pretty well for a large dataset (just over 7 million documents).
I'm keen to set up a multinode cluster (2 machines) and before I assume a few things, I would like to get expert views on a few key points.
I understand "How many shards per node" is a very subjective question and one answer will not fit all situations.
I understand that sharding helps to distribute the indices to multiple nodes so that the storage footprint is optimal per node.
But mainly, I'd like to understand how sharding affects query speed and effective utilization of the CPU cores.
When a single search query comes in, does ES fire internal subqueries to all the shards in parallel, and can it therefore keep all the cores busy (if the number of shards equals the number of cores)?
Can I also be pointed to a few useful links that will help me? Thanks.
Your understanding is pretty much spot on.
The basic concept to understand is that one query on a single shard will use one thread, and one thread ultimately runs on one CPU core. If the query needs to touch multiple shards, ES will make sure all the shards involved are queried, and each shard will do its part of the job using one thread.
The size of the shard and the complexity of the query determine how much time is spent in that thread. But the OS will not dedicate one CPU core to that thread all the time; the OS schedules jobs, and other processes also get a slice of the CPU core.
Ideally, yes, you would have number of shards = number of cores, but few clusters out there actually use this setup, mainly those that handle a lot of concurrent requests per second and demand a strict response time.
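If you want to see that fan-out for yourself, the search-shards API reports which shard copies (and on which nodes) a search against a given index would touch; a minimal check, assuming a local node and a hypothetical index called myindex:

curl -s 'localhost:9200/myindex/_search_shards?pretty'

Each group in the returned "shards" array is one shard of the index, and whichever copy is chosen will execute its slice of the query on one thread of its node.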
Thanks for the response.
Just a summary of my understanding to get it validated.
No. of shards == no. of cores
(-)
Bigger shards: a thread could take more time to search a single shard, so other threads could be queued up and made to wait.
(+)
Optimal core utilization.
Less chance of context-switching overhead, as the number of threads is limited.
No. of shards > no. of cores
(-)
More threads will be spawned for queries, and context-switching overhead may apply.
More threads may need more memory for thread stacks etc.
More shards could potentially mean more housekeeping (i.e. managing file handles etc.) by Elasticsearch.
(+)
A thread could take less time (relatively) to search a single, smaller shard.
Could process more concurrent requests (as there are more threads).
Eventually, it depends on which of these gives the best balance between the available hardware, query speed, and concurrency, and I think it requires quite a bit of experimenting. Or in other words, which one hurts least.

Elastic Search - Maximum Shard Size

This is something I came across while learning Elasticsearch and couldn't reach a final conclusion on.
What is the maximum shard size for ElasticSearch?
How many shards can an index have? Is there any maximum limit?
After reading multiple articles and blogs and running my own load tests, I came to the conclusion that the number of shards and the maximum size of each shard depend on many factors, such as:
Size of the data inserted
Rate at which the data is inserted
Whether data retrieval/search happens at the same time; if yes, what is the frequency of searches and how many concurrent searches are done?
Server configuration details, like the number of CPU cores, hard disk size, memory size, etc.
So, to find the optimal size for each shard and the optimal number of shards for a deployment, one good way is to run tests using various combinations of parameters and loads and draw conclusions from there.
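As a starting point for that kind of experiment, you can create otherwise-identical test indices that differ only in shard count and run the same load against each; a minimal sketch, with hypothetical index names, assuming a local node:

curl -XPUT 'localhost:9200/loadtest_3_shards' -H 'Content-Type: application/json' -d '{
  "settings": { "number_of_shards": 3, "number_of_replicas": 1 }
}'
curl -XPUT 'localhost:9200/loadtest_6_shards' -H 'Content-Type: application/json' -d '{
  "settings": { "number_of_shards": 6, "number_of_replicas": 1 }
}'

Running the same indexing and query load against each and comparing latencies gives concrete datapoints for the comparison.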
Simple: don't cross roughly 2 billion documents per shard.
Think of the 32-bit limit: Lucene addresses documents within a shard with a signed 32-bit integer, so a single shard can hold at most about 2.1 billion documents, even on 64-bit systems. On the heap side, ES recommends giving the JVM half of the available memory, but no more than about 32GB (the compressed-object-pointer limit, which still applies on 64-bit systems); memory beyond the heap is not wasted, because Lucene relies on the OS filesystem cache.
For further details : https://www.elastic.co/guide/en/elasticsearch/guide/current/heap-sizing.html and https://qbox.io/blog/optimizing-elasticsearch-how-many-shards-per-index .
As others have said, the theoretical maximum is very large, however depending on your system, there can be practical limits.
I've found that shards start to become less performant around 150GB. I've had 50GB shards that perform reasonably well. In both cases, the shard was the only shard on the node, and the node had 54GB of system memory, with 31GB devoted to Elasticsearch. At 50GB I was getting results from relatively heavy-duty queries in around 100ms, while at 150GB the same kind of queries were taking 500ms or longer.
I'm sure this depends on the mappings I've used, and a host of other factors, but perhaps it's useful if you're polling for datapoints.
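If you want to gather the same kind of datapoints on your own cluster, shard sizes can be read from the cat API, and every search response carries a took field in milliseconds; a minimal check, assuming a local node:

curl -s 'localhost:9200/_cat/shards?v&h=index,shard,prirep,docs,store'
curl -s 'localhost:9200/_search?size=0' | grep -o '"took":[0-9]*'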

Why use shards when there are replicas

I am using Solr and have a single collection with dynamic fields.
My goal is to setup a SolrCloud, and SolrWiki recommend this:
https://wiki.apache.org/solr/SolrCloud#Example_C:_Two_shard_cluster_with_shard_replicas_and_zookeeper_ensemble
From my understanding, replication gives you load balancing and redundancy, since each replica is a straight copy.
Sharding gives you load balancing and needs only half the memory for the index on each node, but you are dependent on both shards working.
So when they set up the cluster like this with 4 servers, would requests be approximately 4 times faster?
If you instead have 1 shard with 4 replicas, does it get 4 times faster, just with more redundancy?
I took it for granted that there is no point in having virtual servers, because that wouldn't give you more CPUs to work with simultaneously.
In SolrCloud, adding more replicas improves concurrency, while adding more shards improves query response time.
In other words, if your original query returned in 1 second, adding more replicas will probably not improve the response time, but it will let you serve more queries per time period.
Splitting your index into more shards, however, will definitely reduce the response time.
So, if you split your index from 1 shard into 4 shards, you will get almost 4 times faster queries.
But if you choose to have 1 shard with 4 replicas, your query response time will probably improve only slightly.
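For reference, a minimal sketch of creating the two layouts being compared with the SolrCloud Collections API (the collection names are just examples, and the Solr host/port is assumed to be the default localhost:8983):

# 2 shards x 2 replicas across 4 servers
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=logs_sharded&numShards=2&replicationFactor=2'

# 1 shard x 4 replicas
curl 'http://localhost:8983/solr/admin/collections?action=CREATE&name=logs_replicated&numShards=1&replicationFactor=4'

The first layout halves the per-query work on each node; the second keeps each query on one full copy of the index but lets four such queries run at once.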
