Number of nodes in an Elasticsearch cluster

At our university we have an Elasticsearch cluster with 1 node. Now we have money to install more powerful servers. We produce 7-10 million access logs / day.
Which is better for creating the cluster:
a. 3 powerful servers, each with 64 GB RAM, 16 CPU cores and an SSD, or
b. 14 less powerful servers, each with 32 GB RAM, 8 CPU cores and an SSD?
PS: a & b cost the same.
c. Or maybe some other recommendation?
Thank you in advance

It depends on the scenario. For the logging case you are describing, option b seems more flexible to me. Let me explain my reasoning:
Since you are in a logging scenario, implement the hot/warm architecture. You'll mainly write to and read from recent indices; only occasionally will you access older data, and you'll probably want to shrink old indices and close even older ones.
Set up at least 3 master-eligible nodes to prevent split-brain problems. Configure the same nodes also as coordinating nodes (11 nodes left).
Install 2 ingest nodes to move the ingestion workload onto dedicated nodes (9 nodes left).
Install 3 hot data nodes for storing the most recent indices (6 nodes left).
Install 6 warm data nodes for holding older, shrunk and closed indices (0 nodes left).
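As a rough sketch of how the hot/warm split is usually wired up with node attributes (the attribute name box_type and the index name are just examples, assuming Elasticsearch 5.x or later):

    # elasticsearch.yml on a hot data node
    node.master: false
    node.data: true
    node.ingest: false
    node.attr.box_type: hot

    # elasticsearch.yml on a warm data node
    node.master: false
    node.data: true
    node.ingest: false
    node.attr.box_type: warm

    # new daily indices get pinned to the hot nodes (via an index template or explicitly)
    PUT logs-2018.06.01/_settings
    { "index.routing.allocation.require.box_type": "hot" }

    # after a few days, curator or a cron job moves them over to the warm nodes
    PUT logs-2018.06.01/_settings
    { "index.routing.allocation.require.box_type": "warm" }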
The previous setup is just an example; the node counts/roles should be adjusted to your situation:
If you need more resiliency, add more master nodes and increase the replica count for the indices. This will also reduce the total capacity.
The more old data you need to keep searchable or hold in closed indices, the more warm nodes you'll need; rebalance the hot/warm node count according to your needs. If you can drop your old data early, increase the hot node count instead.
If you have an X-Pack license, consider installing ML/alerting nodes. Add these roles to the master nodes, or reduce the data node count in favor of ML/alerting.
Do you need Kibana/Logstash? Depending on the workload, set aside one or two nodes exclusively for them.
Assuming both options use the same mainboards, you have more potential to quickly scale the 14 boxes up just by adding more RAM/CPU/storage. With 3 nodes already maxed out on specs, you'd need to set up new boxes and join them to the cluster in order to scale up; on the other hand, that may also give you more recent hardware in your rack over time.
Please also have a look at this: https://www.elastic.co/pdf/architecture-best-practices.pdf
If you need some background on sharding configuration, please see "ElasticSearch - How does sharding affect indexing performance?"
BTW: Thomas is right with his comment about the heap size. Please have a look at this if you want the background: https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
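The short version of that advice: give the JVM a fixed heap of roughly half the machine's RAM, but stay below ~31 GB so compressed object pointers remain enabled. In recent versions that goes into config/jvm.options; the values below are only illustrative:

    # config/jvm.options on a 64 GB box
    -Xms26g
    -Xmx26g

    # config/jvm.options on a 32 GB box
    -Xms16g
    -Xmx16g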

Related

How many clusters should I use in the Elastic Stack?

I am confused about which approach would be better: a single cluster with 12 nodes, or 3 clusters with 4 nodes each. What are the advantages and disadvantages of a single cluster? Does Elastic charge me per cluster? As far as I know they charge per node, but can someone clarify which approach would be better and which would be the more cost-effective solution?
I am planning to use these node roles in my cluster:
master
data_content
data_hot
ingest
ml
remote_cluster_client
What the optimal cluster size is depends on various requirements / tradeoffs:
Do you have multiple users / systems that you might want to isolate against each other (so that one running wild won't overload the cluster for everyone)? Then you might be better off with multiple clusters.
On the other hand a single larger cluster would be able to absorb extra load from one user / system better.
Smaller clusters are quicker to upgrade and you don't have one "big bang" upgrade. Or you might just upgrade some part but not everything at once.
Every cluster should have 3 master eligible nodes.
Most features in the Elastic Stack are free, but some are paid. Besides the cloud service, where pricing is resource-based, there are 2 pricing models:
The classic node-based pricing: every Elasticsearch process needs a license. So larger nodes (within the technical limits) would cost you less than many smaller ones, but the cluster size itself doesn't matter.
The newer pricing model for ECE / ECK is resource based where you buy chunks of memory and you can slice that into as many nodes or clusters as you want.

Elasticsearch cluster setup

I'm currently running a single-node ES instance. As there are some limitations with a single-server setup in ES, and the queries are sometimes becoming pretty slow, I want to upgrade to a full cluster.
The ES instance currently only stores data and is not doing any fancy stuff (transformations, ingest pipelines, ...). All I currently need is a place to store my data and to retrieve it (search + aggregations). There are more reads than writes.
In a lot of forums and blog posts I read about the "Split-Brain" issue. To circumvent this, the minimum node count should be 3.
The idea is to keep the number of machines low, because this is a private project and I do not want to manage a lot of operating systems in my spare time.
The structure I thought about was:
- 1 Coordinator + Voting-only Node
- 2 Master-eligible + Data Nodes
minimum_master_nodes: 2 to circumvent Split-Brains
Send all ES queries to the coordinator, which will then issue the requests to the data nodes and reduce the final results.
My question is: Does this make sense? Or is it better to use 3 master-eligible + Data nodes?
Online I found no guidance for ES-Newbies to get an idea of the structure of a simple cluster.
You are headed in the right direction, and most of your thinking is also right, so don't consider yourself an ES newbie :).
Anyway, since you are going to have 3 nodes in your cluster, why not make all three nodes master-eligible? And why create a dedicated coordinating node when by default every ES node already acts as a coordinating node, and in your small project you won't need a dedicated one? This way you will have a simple configuration: just don't assign any explicit role to any node, as by default every ES node is a master-eligible, data and coordinating node.
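As a minimal sketch of what that looks like on a recent (7.x) version, where you also no longer set minimum_master_nodes yourself (cluster and node names are placeholders):

    # elasticsearch.yml, essentially identical on all three nodes
    cluster.name: my-cluster
    node.name: node-1                                   # node-2 / node-3 on the others
    discovery.seed_hosts: ["node-1", "node-2", "node-3"]
    cluster.initial_master_nodes: ["node-1", "node-2", "node-3"]
    # no node.master / node.data / node.roles lines:
    # each node keeps the default master-eligible, data, ingest and coordinating roles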
Also, you should invest some time in identifying the slow queries and their causes to make the cluster more performant, rather than adding more resources (especially in a personal project). Please refer to my short tips on improving search performance.
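For finding the slow queries, the search slow log is the usual starting point; something along these lines (index name and thresholds are just examples):

    PUT my-index/_settings
    {
      "index.search.slowlog.threshold.query.warn": "2s",
      "index.search.slowlog.threshold.query.info": "800ms",
      "index.search.slowlog.threshold.fetch.warn": "500ms"
    }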

Elasticsearch cluster setup information

I'm a newbie to search and Elasticsearch. I have gone through some online docs and developed an app against an Elasticsearch setup in our test environment. So far, development and testing have been smooth. Now, to go to production and set up the cluster, I need some expert advice on:
Number of shards
Number of replicas
Should I separate out the master and data nodes?
Can all the nodes be data nodes?
I don't have any advanced search use case, but I at least need plural matching: (phone) should match all docs containing "phones" and vice versa. Is any special stemming needed in this case?
My use case and traffic patterns are:
Up to 100M reads per day
Up to 1M writes/updates per day
Initial data size 10 GB, growth rate 1 GB every 6 months
Cluster info
1. Initial cluster size: 14 machines, each with 28 GB RAM / 120 GB spinning hard disk / 12 cores
2. A load balancer with DNS distributes the traffic across the 14 machines.
I have used unicast discovery, and I have bootstrap.mlockall: true and index.routing.allocation.disable_allocation: false.
Please advise.
Thanks
1. Number of shards
The number of shards in an Elasticsearch index is a one-time setting: once the index is created, you cannot change its shard count. So you need to plan how many shards your cluster requires, taking into consideration your current dataset size plus any index growth. To do this, set up one Elasticsearch node with one shard and zero replicas on a box that has the same specifications as your production boxes.
The capacity of a single shard will depend on a number of factors:
The size of your documents
The size of your fields
The amount of RAM you assign to the JVM that runs Elasticsearch. If you have lots of aggregations, sorting and parent/child documents, you will need to make sure that you have assigned enough RAM to Elasticsearch so it can cache the results.
Your required number of queries per second.
The maximum search request response time allowed.
Index documents into your single-shard node in iterations of x million (or fewer); at each iteration, run benchmarks by executing x queries per second using a testing tool like JMeter. When the queries in your tests return response times approaching your maximum allowed search request time, you have found the number of documents a single shard can hold. Once you have this value you can calculate the number of shards required for your full dataset, plus how many you will need for index growth.
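The test index for that benchmark is simply a one-shard, zero-replica index (the index name is arbitrary):

    PUT benchmark-index
    {
      "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 0
      }
    }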
2. Number of replicas
Start with 1 replica. A replica shard is placed on a different node from its primary shard, so if one node goes down you still have the full dataset available. One replica is usually sufficient; if you find you need more, you can always add them later on.
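Since the replica count is a dynamic setting, raising it later is a one-line settings change (index name is a placeholder):

    PUT my-index/_settings
    { "number_of_replicas": 2 }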
3. Should I separate out the master and data nodes?
It depends on the size of your cluster. If you have more than 5 nodes in your cluster, it is advisable to have master-only nodes whose only job is to maintain the cluster state.
4. Can all the nodes be data nodes?
There must always be at least one master node in your cluster; the master node maintains the cluster state. If you have a small cluster (< 5 nodes), you can make every node both a data node and a master-eligible node. One of the nodes will be elected as the master; if the master node goes down, another node in the cluster will be elected in its place. If you have master-only nodes as described in point 3, the rest of the nodes in the cluster can be data-only nodes.
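In the Elasticsearch versions this question is about, the split is done with two boolean settings in elasticsearch.yml (a sketch, not a full config):

    # dedicated master node: maintains cluster state, holds no data
    node.master: true
    node.data: false

    # data-only node: holds data, never elected master
    node.master: false
    node.data: true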
5. I don't have any advanced search use case, but I at least need plural matching: (phone) should match all docs containing "phones" and vice versa. Is any special stemming needed in this case?
Yes, stemming will handle your use case.
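A small sketch (recent-version syntax; index and field names are made up): the built-in english analyzer applies stemming, so "phone" and "phones" reduce to the same term at index and search time:

    PUT catalog
    {
      "mappings": {
        "properties": {
          "title": { "type": "text", "analyzer": "english" }
        }
      }
    }

    # sanity check: "phones" is analyzed to the token "phone"
    GET catalog/_analyze
    { "analyzer": "english", "text": "phones" }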
Also, Elasticsearch comes with very good configuration out of the box; you should start out by only changing the settings listed in the link below.
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_important_configuration_changes.html

Elasticsearch cluster leaving shards unassigned

We're running an Elasticsearch cluster for logging, indexing logs from multiple locations using Logstash. We recently added two additional nodes for extra capacity whilst we await further hardware for the cluster's expansion. Ultimately we aim to have 2 nodes for "realtime" data running on SSDs to provide fast access to recent data, ageing the data over to HDDs for older indices. The new nodes we put in had a lot less storage than the existing boxes (700GB vs 5TB), but given this will be similar to the situation we'd have once we implement SSDs, I didn't foresee it being much of a problem.
As a first attempt, I threw the nodes into the cluster, trusting that the new disk-space-based allocation rules would mean they wouldn't instantly get filled up. Unfortunately this wasn't the case: I awoke to find the cluster had merrily reallocated shards onto the new nodes, filling them in excess of 99%. After some jigging of settings I managed to remove all data from these nodes and return the cluster to its previous state (all shards assigned, cluster state green).
As a next approach I tried to implement index/node tagging similar to my plans for when we implement SSDs. This left us with the following configuration:
Node 1 - 5TB, tags: realtime, archive
Node 2 - 5TB, tags: realtime, archive
Node 3 - 5TB, tags: realtime, archive
Node 4 - 700GB, tags: realtime
Node 5 - 700GB, tags: realtime
(all nodes running elasticsearch 1.3.1 and oracle java 7 u55)
Using curator I then tagged indices older than 10 days as "archive" and more recent ones as "realtime". Behind the scenes this sets the index shard allocation "require" setting, which as I understand it requires the node to have the tag, but not ONLY that tag.
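In other words, what curator is doing per index boils down to something like this (the index name here is just an example):

    PUT logstash-2014.07.20/_settings
    { "index.routing.allocation.require.tag": "archive" }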
Unfortunately this doesn't appear to have had the desired effect. Most worryingly, no indices tagged as archive are allocating their replica shards, leaving 295 unassigned shards. Additionally, the realtime-tagged indices are only using nodes 4, 5 and, oddly, 3. Node 3 has no shards except the very latest index and some kibana-int shards.
If I remove the tags and use exclude._ip to pull shards off the new nodes, I can (slowly) return the cluster to green, as this is the approach I took when the new nodes had filled up completely, but I'd really like to get this setup sorted so I can have confidence the SSD configuration will work when the new kit arrives.
I have attempted to set cluster.routing.allocation.allow_rebalance to always, on the theory that the cluster wasn't rebalancing because of the unassigned replicas.
I've also tried setting cluster.routing.allocation.enable to all, but again, this has had no discernible impact.
Have I done something obviously wrong? Or are there diagnostics of some sort I could use? I've been visualising the allocation of shards using the Elasticsearch Head plugin.
Any assistance would be appreciated, hopefully it's just a stupid mistake that I can fix easily!
Thanks in advance
This probably doesn't fully answer your question, but seeing as I was looking at these docs this morning:
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-allocation.html#disk
You should be able to set watermarks on disk usage in your version to avoid this recurring.
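Roughly, the knobs look like this (the percentages are only example values; check the defaults for your version):

    PUT _cluster/settings
    {
      "transient": {
        "cluster.routing.allocation.disk.threshold_enabled": true,
        "cluster.routing.allocation.disk.watermark.low": "85%",
        "cluster.routing.allocation.disk.watermark.high": "90%"
      }
    }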
For (manual) monitoring of clusters I quite like
https://github.com/lmenezes/elasticsearch-kopf
Currently watching my cluster sort out its shards again (so slow) after a similar problem, but I'm still running an ancient version.

Shards / Replicas settings for high availability

We have a Java application with embedded Elasticsearch in a cluster of 14 nodes. All the data resides in a central database and is indexed in Elasticsearch for querying. A full reindex can be done at any time.
The system is very query-heavy; the amount of writes is small. The number of documents will not be higher than, say, 300,000.
The size of each document varies greatly, from just a couple of IDs to text extracted from, e.g., Word documents several pages long.
I want to make sure that in case of a total breakdown, having just one or two nodes available is sufficient for the system to keep working.
Write consistency should not be a problem, since the master copy of the data is in the database, and it seems that ES is capable of resolving conflicting data by using the newest version (which should be all right in our case).
My first thought is to use 1 shard and 13 replicas. This naturally ensures that all nodes have access to all data. The same could also be accomplished with 2 shards / 13 replicas, so it follows that, to ensure all data is available on every node, the number of replicas should be the number of nodes - 1, regardless of the number of shards (which could be anything).
If the requirement is relaxed to "2 nodes should be up at any time", then a shard/replica distribution of "x / number of nodes - 2" should be sufficient.
So, for the question:
Assuming the above setup and that my thinking is correct, would a setup with 1 shard / 13 replicas make sense, or would there be anything to gain by adding more shards and running, e.g., a 4 shards / 13 replicas setup?
After a good bit of research and talking to ES gurus:
As long as the shard size is small enough, the most efficient way of setting up this cluster would indeed be 1 shard only, with 13 replicas. I have not been able to pinpoint the threshold shard size at which this starts to perform worse.
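As index settings, that is simply (index name is a placeholder):

    PUT myindex
    {
      "settings": {
        "number_of_shards": 1,
        "number_of_replicas": 13
      }
    }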
If the index is big... you will need more than one shard (if you want performance). Do you really need 13 replicas? If you configure only 2 replicas, ES manages to keep it that way: if the node holding the primary fails, ES will create a new replica. Maybe you will need a balancer (coordinating) node too.

Resources