ElasticSearch - Should I Shard by Partition? - elasticsearch

I have an ES cluster of 80mm documents with 4 data nodes and 3 master nodes. Searching in the cluster is pretty fast depending on the query, but scrolling is always painfully slow when I need to pull millions of documents out at once.
I do have logical partitions in my data, and only search on a given partition at a time (client id). Though these partitions don't necessarily have an even distribution of documents. One partition may have 1mm documents while another only has 100k.
For this reason I never considered partitioning my shards since I'm certain it wouldn't be an even distribution.
Is my thinking correct or could I see faster query/scroll times by keeping partitioned data localized to a shard?

The outcome of routing depends on the use case, but if applied correctly it can make the difference between a hard-working cluster and a performant one.
With routing enabled, write and search operations hit only the single shard that is relevant according to the routing parameter. This reduces the load on many layers of the cluster: request distribution, network traffic, threads/IOPS on the nodes, merging of results, and so on.
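For illustration, here is a minimal sketch of what routed requests look like, assuming a hypothetical index my_index and a client_id value of client_42 (the index, field names and values are just placeholders):
PUT my_index/_doc/1?routing=client_42
{
  "client_id": "client_42",
  "order_id": 1001
}

GET my_index/_search?routing=client_42
{
  "query": {
    "term": { "client_id": "client_42" }
  }
}
With the routing parameter on the search request, only the shard(s) that client_42 hashes to are queried instead of every shard of the index.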
But the data will be distributed unevenly across the shards of the index. You'll potentially get heavily loaded shards on the one side and barely used shards on the other. The optimal size for a shard (40-50 GB) will be violated for the same reason: for small shards there is too much overhead for handling the shard compared to the data it holds, and for large shards there is too much data to search through.
In order to overcome this downside, there is another option for routed indices: increase the partition size. All routed requests then go to a larger partition, not a single shard but a subset of the available shards. This reduces the risk of imbalanced shards while keeping most of the reduced search impact. Just set index.routing_partition_size at index creation to a value larger than 1 but lower than index.number_of_shards. Requests are then routed across the shards of the partition rather than one shard (basic routing) or all shards (no routing) of the index. It's a reasonable trade-off between route-optimized reading and balanced data distribution.
I see another potential improvement: with routing enabled, there will still be more than one logical partition (the client in your case) per shard, and irrelevant data has to be visited while searching. That's why you should think about using index sorting to improve reading speed in the underlying segment files of a shard. This feature is available with or without _routing. Having all associated data stored together will help reduce search time too, but it comes at the cost of write speed because the documents have to be ordered during flush and segment merge operations.
Here is an example index creation request using it all together, assuming your most common case is writing and reading data routed by client_id and mostly querying for a specific order_id:
PUT my_index
{
  "settings": {
    "index": {
      "number_of_shards": 10,
      "routing_partition_size": 2,
      "sort.field": ["client_id", "order_id"],
      "sort.order": ["asc", "desc"]
    }
  },
  "mappings": {
    ...
  }
}
This answer has been written while 7.5 was the current version of elasticsearch.

Related

ElasticSearch - How does sharding affect indexing performance?

I'm doing some benchmarks on a single-node cluster of ElasticSearch.
I ran into a situation where more shards reduce indexing performance, at least on a single node (both in latency and throughput).
These are some of my numbers:
Index with 1 shard: 6K+ documents indexed per minute
Index with 5 shards: 3K+ documents indexed per minute
Index with 20 shards: 1K+ documents indexed per minute
I got the same results with the bulk API, so I'm wondering what the relationship is and why this happens.
Note: I don't have a resource problem; resources (CPU and memory) are not saturated.
Just to get us on the same page:
Your data is organized in indices, each made of shards and distributed across multiple nodes. When a new document needs to be indexed, an id is generated and the destination shard is calculated based on this id. After that, the write is delegated to the node holding the calculated destination shard. This distributes your documents fairly evenly across all of your shards.
Finding documents by id is now easy, as the shard containing the wanted document can be calculated just from the id. There is no need to search all shards. By the way, that's the reason why you can't change the number of shards afterwards: a changed shard count results in a different document distribution across your shards.
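For reference, the routing formula documented by Elasticsearch boils down to this (simplified, ignoring the split-index refinements added in newer versions):
shard_num = hash(_routing) % number_of_primary_shards
where _routing defaults to the document's _id. This is why the number of primary shards is baked into the document distribution.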
Now, just to make it clear: each shard is a separate Lucene index, made of segment files located on your disk. When writing, new segments are created, and once a certain number of segment files is reached, the segments are merged.
So just introducing more shards without distributing them to other nodes will just introduce a higher I/O and memory consumption for your single node.
While searching, the query will be executed against each shard. Afterwards the results of all shards need to be merged into one result: more shards, more CPU work to do...
Coming back to your question:
For your write heavy indexing case, with just one node, the optimal number of indices and shards is 1!
But for the search case (not accessing by id), the optimal number of shards per node is the number of CPUs available. In such a way, searching can be done in multiple threads, resulting in better search performance. Correction: Searching and indexing are multithreaded, a single shard can fully utilize all CPU cores from a node.
But what are the benefits of sharding?
Availability: By replicating the shards to other nodes you can still serve requests if some of your nodes can't be reached anymore!
Performance: Distributing the primary shards to different nodes distributes the workload too.
So if your scenario is write heavy, keep the number of shards per index low. If you need better search performance, increase the number of shards, but keep the "physics" in mind. If you need reliability, take the number of nodes/replicas into account.
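As a rough sketch of those two ends of the spectrum (index names, shard counts and replica counts are placeholders; the search-oriented index assumes a 4-core node, per the guidance above):
PUT write_heavy_index
{
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  }
}

PUT search_heavy_index
{
  "settings": {
    "number_of_shards": 4,
    "number_of_replicas": 1
  }
}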
Further readings:
https://www.elastic.co/guide/en/elasticsearch/reference/current/_basic_concepts.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/tune-for-indexing-speed.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/tune-for-search-speed.html
https://www.elastic.co/de/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster
https://thoughts.t37.net/designing-the-perfect-elasticsearch-cluster-the-almost-definitive-guide-e614eabc1a87
I faced to the situation that more shards will reduce the indexing
performance -at least in a single node- (both in latency and
throughput)
For reference: Elasticsearch is a distributed database. Data is stored in an "index", the index is split into "shards". Each "shard" is allocated on a node (a different node if possible).
Having more shards allows you to use more machines. This is very much how the "distributed" in "distributed database" actually works. Elasticsearch automatically allocates and moves shards in the background to balance disk usage across all machines.
With 1 shard, the data lives on one node; this gives you a baseline of N reads and M writes per second.
With 3 shards, the data is split onto three nodes, this gives you 3 times the throughput.
Of course this assumes that there are 3 machines available. If there is a single machine, then the machine is doing all the processing either way and having more shards has no effect.
There is a bit of overhead with sharding: queries have to be distributed and results merged back, so doubling the number of shards will not exactly double performance (expect something on the order of +90%).
Your cluster has a single machine. You lose performance when you increase the number of shards because it just increases the overhead.
P.S. Shards have a replica by default; the replica takes over if the primary is gone (machine failed). This is how resiliency works. An index with 5 shards and 5 replicas can fully utilize 10 nodes, meaning it takes only a few shards to make use of many nodes.
P.P.S. In my experience a configuration of 5 shards is a maximum. You should never set more than that unless you are working with large clusters (10+ machines) or terabyte-scale indices.

Max value of number_of_routing_shards in Elasticsearch 6.x

What is the max recommended value of number_of_routing_shards for an index?
Can I specify a very high value like 30000? What are the side effects if I do so?
Shards are "slices" of an index created by elasticsearch to have flexibility to distribute indexed data. For example, among several datanodes.
Shards, in the low level are independent sets of lucene segments that work autonomously, which can be queried independently. This makes possible the high performance because search operations can be split into independent processes.
The more shards you have the more flexible becomes the storage assignment for a given index. This obviously has some caveats.
Distributed searches must wait each other to merge step-results into a consistent response. If there are many shards, the query must be sliced into more parts, (which has a computing overhead). The query is distributed to each shard, whose hashes match any of the current search (not all shards are necesary hit by every query) therefore the most busy (slower) shard, will define the overall performance of your search.
It's better to have a balanced number of indexes. Each index has a memory footprint that is stored in the cluster state. The more indexes you have the bigger the cluster state, the more time it takes to be shared among all cluster nodes.
The more shards an index has, the complexer it becomes, therefore the size taken to serialize it into the cluster state is bigger, slowing things down globally.
This will give you an index with 30.000 shards (according https://www.elastic.co/guide/en/elasticsearch/reference/6.x/indices-split-index.html), which is ... useless.
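To make the relationship concrete, here is a sketch of how number_of_routing_shards is normally used: a modest value that leaves room for a later _split, rather than 30000 (index names and numbers are placeholders):
PUT my_index
{
  "settings": {
    "index.number_of_shards": 5,
    "index.number_of_routing_shards": 30
  }
}

PUT my_index/_settings
{
  "settings": { "index.blocks.write": true }
}

POST my_index/_split/my_split_index
{
  "settings": { "index.number_of_shards": 30 }
}
The split target must be a multiple of the source shard count and at most the routing-shard ceiling, and the source index has to be made read-only before splitting.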
As all software tuning, recommended values vary with your:
use case
hardware (VM / network / disk ...)
metrics

Why would I need to shrink index in elasticsearch

I tried searching for an answer to my question but couldn't find one. This is my first time dealing with big data and Elasticsearch, and I'm trying to learn how it works by going through the online tutorial. While reading, I came across the topic of shrinking indices and how it can be done. OK, now I know how to do it, but unfortunately I don't know why I would need to do it.
Why do I need to shrink my index and decrease my shards? Is it a space-related change, or what?
Every Elasticsearch index consists of multiple shards (default 5), each of which is a Lucene index. Each one of these has an overhead (in terms of memory, file handles, ...) but allows more parallelization. In case you don't need that much parallelization any more at some point (think of a daily index for logs: after a few days there are no more writes and only few reads), you might want to reduce the number of shards to cut down on their overhead.
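As a sketch of what that looks like in practice (the index and node names are placeholders): before shrinking, all shards of the source index have to be relocated onto one node and the index made read-only; then the _shrink API creates the smaller copy:
PUT logs-2019.10.01/_settings
{
  "settings": {
    "index.routing.allocation.require._name": "shrink-node-1",
    "index.blocks.write": true
  }
}

POST logs-2019.10.01/_shrink/logs-2019.10.01-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}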
The number of shards is tied to query performance in the following way:
How does shard size affect performance?
In Elasticsearch, each query is executed in a single thread per shard.
Multiple shards can however be processed in parallel, as can multiple
queries and aggregations against the same shard.
This means that the minimum query latency, when no caching is
involved, will depend on the data, the type of query, as well as the
size of the shard. Querying lots of small shards will make the
processing per shard faster, but as many more tasks need to be queued
up and processed in sequence, it is not necessarily going to be faster
than querying a smaller number of larger shards. Having lots of small
shards can also reduce the query throughput if there are multiple
concurrent queries.
https://www.elastic.co/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster

ElasticSearch - Optimal number of Shards per node

I would appreciate if someone could suggest the optimal number of shards per ES node for optimal performance or provide any recommended way to arrive at the number of shards one should use, given the number of cores and memory foot print.
I'm late to the party, but I just wanted to point out a couple of things:
The optimal number of shards per index is always 1. However, that provides no possibility of horizontal scale.
The optimal number of shards per node is always 1. However, then you cannot scale horizontally more than your current number of nodes.
The main point is that shards have an inherent cost to both indexing and querying. Each shard is actually a separate Lucene index. When you run a query, Elasticsearch must run that query against each shard, and then compile the individual shard results together to come up with a final result to send back. The benefit to sharding is that the index can be distributed across the nodes in a cluster for higher availability. In other words, it's a trade-off.
Finally, it should be noted that any more than 1 shard per node will introduce I/O considerations. Since each shard must be indexed and queried individually, a node with 2 or more shards would require 2 or more separate I/O operations, which can't be run at the same time. If you have SSDs on your nodes then the actual cost of this can be reduced, since all the I/O happens much quicker. Still, it's something to be aware of.
That, then, begs the question of why would you want to have more than one shard per node? The answer to that is planned scalability. The number of shards in an index is fixed. The only way to add more shards later is to recreate the index and reindex all the data. Depending on the size of your index that may or may not be a big deal. At the time of writing, Stack Overflow's index is 203GB (see: https://stackexchange.com/performance). That's kind of a big deal to recreate all that data, so resharding would be a nightmare. If you have 3 nodes and a total of 6 shards, that means that you can scale out to up to 6 nodes at a later point easily without resharding.
There are three situations to consider before sharding.
Situation 1) You want to use Elasticsearch with failover and high availability. Then you go for sharding.
In this case, you need to select the number of shards according to the number of nodes [ES instances] you want to use in production.
Consider that you want 3 nodes in production. Then you should choose 1 primary shard and 2 replicas for every index; choosing more shards than you need only adds overhead.
Situation 2) Your current server can hold the current data, but because the data keeps growing you may eventually run out of disk space, or your server may no longer be able to handle that much data. In that case you need to configure more shards, say 2 or 3 (it's up to your requirements), for each index, but there shouldn't be any replicas.
Situation 3) This is the combination of situations 1 and 2, so you need to combine both configurations. Say your data grows dynamically and you also need high availability and failover. Then configure an index with 2 shards and 1 replica. You can then share data among nodes and get optimal performance (a sketch follows below).
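A minimal sketch of the situation 3 configuration (the index name is a placeholder):
PUT my_index
{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}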
Note: The query is processed on each shard, a map-reduce is performed on the results from all shards, and the final result is returned to us. This map-reduce process is expensive, so a minimal number of shards gives optimal performance.
If you are using only one node in production, then one primary shard is the optimal number of shards for each index.
Hope it helps..!
Just got back from configuring some log storage for 10 TB so let's talk sharding :D
Node limitations
Main source: The definitive guide to elasticsearch
HEAP: 32 GB at most:
If the heap is less than 32 GB, the JVM can use compressed pointers, which saves a lot of memory: 4 bytes per pointer instead of 8 bytes.
HEAP: 50% of the server memory at most. The rest is left to filesystem caches (thus 64 GB servers are a common sweet spot):
Lucene makes good use of the filesystem caches, which are managed by the kernel. Without enough filesystem cache space, performance will suffer. Furthermore, the more memory dedicated to the heap means less available for all your other fields using doc values.
[An index split in] N shards can spread the load over N servers:
1 shard can use all the processing power from 1 node (it's like an independent index). Operations on sharded indices are run concurrently on all shards and the result is aggregated.
Less shards is better (the ideal is 1 shard):
The overhead of sharding is significant. See this benchmark for numbers https://blog.trifork.com/2014/01/07/elasticsearch-how-many-shards/
Less servers is better (the ideal is 1 server with 1 shard):
The load on an index can only be split across nodes by sharding (A shard is enough to use all resources on a node). More shards allow to use more servers but more servers bring more overhead for data aggregation... There is no free lunch.
Configuration
Usage: A single big index
We put everything in a single big index and let elasticsearch do all the hard work relating to sharding data. There is no logic whatsoever in the application so it's easier to dev and maintain.
Let's suppose that we plan for the index to be at most 111 GB in the future and we've got 50 GB servers (25 GB heap) from our cloud provider.
That means we should have 5 shards: 111 GB / 5 ≈ 22 GB per shard, which keeps each shard comfortably below a single server's 25 GB heap.
Note: Most people tend to overestimate their growth, try to be realistic. For instance, this 111 GB example is already a BIG index. For comparison the stackoverflow index is 430 GB (2016) and it's a top 50 site worldwide, made entirely of text written by millions of people.
Usage: Index by time
When there is too much data for a single index, or it gets too annoying to manage, the next step is to split the index by time period.
The most extreme example is logging applications (logstash and graylog), which use a new index every day.
The ideal configuration of a single shard per index makes perfect sense in this scenario. The index rotation period can be adjusted, if necessary, to keep the index smaller than the heap.
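A sketch of how that is usually wired up: an index template so that every new time-based index gets one shard (the template name and pattern are placeholders, and the exact template API field names vary between Elasticsearch versions):
PUT _template/logs_template
{
  "index_patterns": ["logs-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}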
Special case: Let's imagine a popular internet forum with monthly indices. 99% of requests hit the last index. We have to set multiple shards (e.g. 3) to spread the load over multiple nodes. (Note: it's probably an unnecessary optimization. A 99% hit rate is unlikely in the real world and the shard replicas could distribute part of the read-only load anyway.)
Usage: Going Exascale (just for the record)
Elasticsearch is magic. It's the easiest database to set up as a cluster and it's one of the very few able to scale to many nodes (excluding Spanner).
It's possible to go exascale with hundreds of Elasticsearch nodes. There must be many indices and shards to spread the load on that many machines, and that takes an appropriate sharding configuration (eventually adjusted per index).
The final bit of magic is to tune elasticsearch routing to target specific nodes for specific operations.
It might also be a good idea to have more than one primary shard per node, depending on the use case. I found that bulk indexing was pretty slow and only one CPU core was used, so we had idle CPU power and very low IO; hardware was definitely not the bottleneck. Thread pool stats showed that during indexing only one bulk thread was active. We have a lot of analyzers and a complex tokenizer (decomposed analysis of German words). Increasing the number of shards per node resulted in more bulk threads being active (one per shard on the node) and dramatically improved indexing speed.
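If you want to check this on your own cluster, the _cat thread pool API shows active indexing threads per node (the pool is called bulk on older versions and write on newer ones, so adjust the name to your version):
GET _cat/thread_pool/write?v&h=node_name,name,active,queue,rejected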
The number of primary shards and replicas depends on the following parameters:
Number of data nodes: The replica shards for a given primary shard are meant to live on different data nodes, which means that if there are 3 data nodes DN1, DN2, DN3 and the primary shard is on DN1, the replica shards should be on DN2 and/or DN3. Hence the number of replicas should be less than the total number of data nodes.
Capacity of each data node: A shard cannot be larger than the data node's disk, so the number of primary shards should be defined based on the expected size of the given index.
Recovery mechanism in case of failure: If the data in the given index has a quick recovery mechanism, then 1 replica should be enough.
Performance requirements for the given index: Since sharding helps direct the client node to the appropriate shard to improve performance, the query parameters and the size of the data belonging to those parameters should be considered when defining the number of primary shards.
These are the ideal and basic guidelines to follow; they should be tuned for the actual use case.
I have not tested this yet, but AWS has a good article about ES best practices. Look at the Choosing Instance Types and Testing parts.
Elastic.co recommends to:
[…] keep the number of shards per node below 20 per GB heap it has configured

Understanding Elastic Search

Sorry to say this but ES' documentation ( http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index.html ) is confusing me.
Thanks to the glossary I understand the terms for database, table and row but I have read substantial sections of the documentation and I cannot find answers to:
Why do I need to add number_of_shards and number_of_replicas to index creation? I did look here http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules.html but ironically it leaves those two settings out :/
How can I have 3 shards with 2 replicas? If the glossary is anything to go by shouldn't that be impossible considering that a shard is "is a single Lucene instance"?
If I add more nodes later how can I change these values to span the new nodes?
How does sharding work in ES?
How does replica sets work in ES?
How can I manage sharding? I understand it is auto join ( http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html#cluster-name ) but how do I define the difference between replicas and shards?
How can I manage replica sets? I.e. how do I add replicas, promote primaries etc?
For reference I read these links first:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/glossary.html
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index.html
If that information exists in the documentation then I would be very grateful if you can point me towards it.
Edit:
I am also unsure how auto-discovery works on a distributed network. Short of pinging every public network around, how does it connect to the right one, which could possibly be on the other side of the world?
Please see below for answers to your points.
Why do I need do to add number_of_shards and number_of_replicas to index creation? I did look here
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules.html
but ironically it leaves those two settings out :/
You don't "have" to, but you probably should in especially will want to in production. The default is five shards and one replication.
The number of replications defined is just the number of times your entire index is replicated throughout all of the nodes in your elasticsearch cluster. Think of it as being multiple read copies of a RDBMS database (but in this case, we read and write all copies).
A shard is the number of times I split up, or shard, an index. So, I can have an index with a single shard, or I can have an index with multiple shards. This is similar in concept to sharding a RDBMS database by primary key, but not identical.
So, the total number of shards you will have in an index is the product of number_of_shards and number_of_replicas.
When you do a search, elasticsearch will distribute your search to all possible nodes containing the shards in your index and aggregagate the result for you. You can think of this as a map/ reduce where the map is sending the search out to each shard and the reduce is collecting the results.
Also, you can change the replication number_of_replicas at any time, but you can never change the number_of_shards. This must be set at index creation.
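For example, bumping the replica count on an existing index is a simple settings update (the index name is a placeholder):
PUT my_index/_settings
{
  "index": {
    "number_of_replicas": 2
  }
}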
How can I have 3 shards with 2 replicas? If the glossary is anything to go by shouldn't that be impossible considering that a
shard is "is a single Lucene instance"?
I think the above mostly answers this, but it's important to remember that elasticsearch is primarily a distributed computing solution to search. We are splitting the work up to multiple shards and possibly machines.
If I add more nodes later how can I change these values to span the new nodes?
Once the cluster is aware of another node in the cluster, no other action is needed by you. The settings propagate throughout the cluster on their own. In your example above of three shards and two replicas, if you had two nodes initially and added a third, the nine shard copies would be rebalanced to roughly three per node, and this shard movement happens without your intervention (again, provided the cluster is aware of the new node).
How does sharding work in ES?
See above
How does replica sets work in ES?
See above
How can I manage sharding? I understand it is auto join ( http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html#cluster-name ) but how do I define the difference between replicas and shards?
You don't have to "manage" it actively. As stated earlier, sharding and everything else you define at index creation, is propagated to new nodes within the cluster.
You define replicas and shards like this:
{
  "settings": {
    "index": {
      "number_of_shards": 20,
      "number_of_replicas": 1
    }
  },
  "mappings": {
    "some_type": {
      "properties": {
        "some_field": {
          "type": "long"
        }
      }
    }
  }
}
How can I manage replica sets? I.e. how do I add replicas, promote primaries etc?
You do that through the update index settings API; documentation for this specific case is found on their site here:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-update-settings.html
I just noticed your edit, please see below:
I am also unsure how auto-discovery works on a distributed network.
In the YML config file you set the unicast like this:
discovery.zen.ping.multicast.enabled: false
#discovery.zen.minimum_master_nodes: 3
discovery.zen.ping.unicast.hosts: ["ip.add.r.ess", "ip.add.r.ess"]
The middle setting is an important one, but I commented it out here. That number should always be (number of master-eligible nodes / 2) + 1; this is to avoid split-brain situations. Generally I set all nodes to be master-eligible.
These settings are for unicast, which is what I think you are going for with your question and not multicast.
In short, an index is broken into shards. Shards can be replicated, meaning multiple copies of the same shard can exist in the same cluster. So if an index has 3 shards and 2 replicas, you have nine shards in total, of which six are replicas of the three primary shards.
ES will try to balance shards and their replicas across the cluster so that if a node goes down it can fail over from the primary shards on that node to their replicas. This can confuse people: "primary" refers to shards, not to a particular node (the cluster master is a separate, node-level role), so a single node can hold a mix of replica and primary shards.
If you come from the Lucene world, a Lucene index is not the same thing as an Elasticsearch index. An Elasticsearch index is a logical group of indexed documents with types, mappings and documents, more or less the same as a database schema. A Lucene index, on the other hand, is a group of several files that contain indexed data. When Elasticsearch creates an index, what it does is create several Lucene indices (one per shard), and when it replicates, it is basically copying the files of these Lucene indices around.
You can't change the number of shards of an index, but you can change the number of replicas. Typically, when you need more shards, you create a new index and reindex the data.
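A sketch of that workflow, assuming a version with the _reindex API and placeholder index names:
PUT my_index_v2
{
  "settings": {
    "number_of_shards": 10
  }
}

POST _reindex
{
  "source": { "index": "my_index" },
  "dest": { "index": "my_index_v2" }
}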
In terms of shard management beyond deciding on the number of shards, there's not much to manage by default, and ES is pretty good at coordinating things by itself. There are a ton of options you can fiddle with once you gain a better understanding of how it works, but the defaults are pretty OK for most. In terms of cluster management, you can do a lot via the API: shutting down nodes in a controlled way, using index aliases, changing the number of replicas, and so on.
As for autodiscovery, ES uses local network multicast by default. You can switch to unicast, and you probably want to change the default cluster name to prevent accidents (I had some fun in coffee shops with unintended clusters forming). You probably don't want to cluster globally; I don't see that ending well.
It's quite a coincidence that about 80% of your questions are answered in the video presentation given by Shay Banon (the creator of Elasticsearch). This presentation contains much more than you can find anywhere else. Hope this helps.
http://www.infoq.com/presentations/ElasticSearch
The video is a bit low-resolution, so if you want the code shown in the presentation, follow this:
https://github.com/kimchy/talks/tree/master/2011/wsnparis
