I am confused about which approach would be better: a single cluster with 12 nodes, or 3 clusters with 4 nodes each in the Elastic Stack. What are the advantages and disadvantages of a single cluster? Does Elastic charge me for 3 clusters? As far as I know they charge per node, but can someone clarify which approach would be better and which would be the more cost-effective solution?
I am planning to use these node roles in my cluster:
master
data_content
data_hot
ingest
ml
remote_cluster_client
The optimal cluster size depends on various requirements and tradeoffs:
Do you have multiple users / systems that you might want to isolate from each other (so that one running wild won't overload the cluster for everyone)? Then you might be better off with multiple clusters.
On the other hand, a single larger cluster would be better able to absorb extra load from one user / system.
Smaller clusters are quicker to upgrade and you don't have one "big bang" upgrade; you can upgrade some parts without upgrading everything at once.
Every cluster should have 3 master-eligible nodes.
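Whichever layout you pick, it is easy to sanity-check the role distribution once the nodes are up. Here is a minimal sketch using the _cat/nodes API through Python's requests library (the endpoint is a placeholder; add authentication for a secured cluster):

import requests

ES = "http://localhost:9200"  # placeholder endpoint

# List every node with its compact role string; the "master" column marks the elected master (*).
resp = requests.get(f"{ES}/_cat/nodes", params={"v": "true", "h": "name,node.role,master"})
print(resp.text)

# A node is master-eligible if its role string contains "m".
rows = [line.split() for line in resp.text.splitlines()[1:]]
master_eligible = sum(1 for name, roles, master in rows if "m" in roles)
print(f"master-eligible nodes: {master_eligible} (aim for at least 3 per cluster)")

The roles themselves are assigned per node via node.roles in each node's elasticsearch.yml (in recent Elasticsearch versions).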
Most features in the Elastic Stack are free, but some are paid. Besides the cloud service, where pricing is resource-based, there are 2 pricing models:
The classic node-based pricing. Every Elasticsearch process needs a license, so fewer, larger nodes (within the technical limits) cost you less than many smaller ones, but the number of clusters itself doesn't matter.
The newer pricing model for ECE / ECK is resource-based: you buy chunks of memory and you can slice that into as many nodes or clusters as you want.
I'm new to working with the ELK stack, and I'm dealing with 10 TB of data stored on physical servers. Is there a recommendation on how many data nodes and master nodes I should use, and on best practices for configuring our cluster to run smoothly in production? Are there other tools or technologies used with Elasticsearch to improve performance?
#ameur, you can refer to these pages:
https://www.elastic.co/guide/en/elasticsearch/reference/current/general-recommendations.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/tune-for-indexing-speed.html
https://www.elastic.co/guide/en/elasticsearch/reference/current/tune-for-search-speed.html
Regarding master nodes, you should have a minimum of 3 nodes (go for 5 if possible).
For data nodes, there are multiple factors involved, for example:
resources such as RAM, CPU, and disk
throughput such as queries per second (QPS), writes per second (WPS), etc.
So there is no straightforward answer; you will need to do some performance testing to arrive at the right number.
Don't forget to read about sharding strategy: https://www.elastic.co/guide/en/elasticsearch/reference/current/size-your-shards.html
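To make the sharding point concrete, here is a minimal sketch that creates a daily log index with explicit shard and replica settings via Python's requests. The endpoint, index name, and shard counts are placeholder assumptions, not recommendations; the right numbers should come out of the size-your-shards guidance and your own performance tests:

import requests

ES = "http://localhost:9200"  # placeholder endpoint

# Hypothetical daily log index with explicit shard/replica settings.
# Tune number_of_shards so each shard stays in a manageable size range,
# and number_of_replicas for the resiliency you need.
resp = requests.put(
    f"{ES}/accesslog-2024.01.01",
    json={"settings": {"number_of_shards": 3, "number_of_replicas": 1}},
)
print(resp.status_code, resp.json())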
At our university we have an Elasticsearch cluster with 1 node. Now we have money to install more powerful servers. We produce 7-10 million access logs per day.
What is better for creating a cluster:
a. 3 powerful servers, each with 64 GB RAM, 16 CPUs, and SSDs
b. 14 less powerful servers, each with 32 GB RAM, 8 CPUs, and SSDs
PS: a and b have the same price.
c. or maybe some other recommendation?
Thank you in advance
It depends on the scenario. For the logging case you are describing, option b seems more flexible to me. Let me explain my reasoning:
Since you are in a logging scenario, implement the hot/warm architecture. You'll mainly write to and read from recent indices; only occasionally will you want to access older data, and you'll probably want to shrink old indices and close even older ones (a configuration sketch follows after this list).
Set up at least 3 master-eligible nodes to prevent split-brain problems. Configure the same nodes also as coordinating nodes (11 nodes left).
Install 2 ingest nodes to move the ingestion workload onto dedicated nodes (9 nodes left).
Install 3 hot data nodes for storing the most recent indices (6 nodes left).
Install 6 warm data nodes for holding older, shrunken, and closed indices (0 nodes left).
The previous setup is just an example; the node numbers/roles should be changed depending on your situation:
If you need more resiliency, add more master nodes and increase the replica count for your indices. This will also reduce the total capacity.
The more old data you need to keep searchable, or to hold in already-closed indices, the more warm nodes you'll need; rebalance the hot/warm node count according to your needs. If you can drop your old data early, increase the hot node count instead.
If you have X-Pack licensed, consider installing ML/alerting nodes. Add these roles to the master nodes, or reduce the data node count in favor of ML/alerting.
Do you need Kibana/Logstash? Depending on the workload, set aside one or two nodes exclusively for them.
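Here is the configuration sketch mentioned above: a minimal hot/warm example against the REST API using Python's requests. It assumes the hot nodes were started with node.attr.box_type: hot and the warm nodes with node.attr.box_type: warm in their elasticsearch.yml; the attribute name, endpoint, and index names are just placeholder conventions:

import requests

ES = "http://localhost:9200"  # placeholder endpoint

# A new daily index is pinned to the hot tier while it is being written to.
requests.put(f"{ES}/accesslog-2024.01.15", json={
    "settings": {
        "index.routing.allocation.require.box_type": "hot",
        "number_of_shards": 3,
        "number_of_replicas": 1,
    }
})

# Once an index is no longer written to, relocate it to the warm tier;
# shrinking and closing would follow as separate steps.
requests.put(f"{ES}/accesslog-2023.12.01/_settings", json={
    "index.routing.allocation.require.box_type": "warm",
})

In practice you would automate this rollover with index lifecycle management or Curator rather than doing it by hand.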
Assuming the same mainboards in both options, you have more potential to quickly scale the 14 boxes up just by adding more RAM/CPU/storage. With 3 nodes already maxed out on specs, you would need to set up new boxes and join them to the cluster in order to scale up, but that also means you may get more recent hardware into your rack over time.
Please also have a look at this: https://www.elastic.co/pdf/architecture-best-practices.pdf
If you need some background on sharding configuration, please see "ElasticSearch - How does sharding affect indexing performance?"
BTW: Thomas is right in his comment about the heap size. Please have a look at this if you want to know the background: https://www.elastic.co/guide/en/elasticsearch/reference/current/heap-size.html
I am currently running h2o's DRF algorithm on a 3-node EC2 cluster (the h2o server spans all 3 nodes).
My data set has 1m rows and 41 columns (40 predictors and 1 response).
I use the R bindings to control the cluster, and the RF call is as follows:
model <- h2o.randomForest(x = x,
                          y = y,
                          ignore_const_cols = TRUE,
                          training_frame = train_data,
                          seed = 1234,
                          mtries = 7,
                          ntrees = 2000,
                          max_depth = 15,
                          min_rows = 50,
                          stopping_rounds = 3,
                          stopping_metric = "MSE",
                          stopping_tolerance = 2e-5)
For the 3-node cluster (c4.8xlarge, enhanced networking turned on), this takes about 240 seconds; CPU utilization is between 10-20%, RAM utilization between 20-30%, and network transfer between 10-50 MB/s (in and out). 300 trees are built before early stopping kicks in.
On a single-node cluster, I can get the same results in about 80 seconds. So, instead of an expected 3-fold speed-up, I get a 3-fold slowdown on the 3-node cluster.
I did some research and found a few resources that were reporting the same issue (not as extreme as mine though). See, for instance:
https://groups.google.com/forum/#!topic/h2ostream/bnyhPyxftX8
Specifically, the author of http://datascience.la/benchmarking-random-forest-implementations/ notes that
While not the focus of this study, there are signs that running the distributed random forests implementations (e.g. H2O) on multiple nodes does not provide the speed benefit one would hope for (because of the high cost of shipping the histograms at each split over the network).
Also, https://www.slideshare.net/0xdata/rf-brighttalk points to 2 different DRF implementations, one of which has a larger network overhead.
I think that I am running into the same problems as described in the links above.
How can I improve h2o's DRF performance on a multi-node cluster?
Are there any settings that might improve runtime?
Any help highly appreciated!
If your Random Forest is slower on a multi-node H2O cluster, it just means that your dataset is not big enough to take advantage of distributed computing. There is an overhead to communicate between cluster nodes, so if you can train your model successfully on a single node, then using a single node will always be faster.
Multi-node is designed for when your data is too big to train on a single node. Only then will it be worth using multiple nodes. Otherwise, you are just adding communication overhead for no reason and will see the type of slowdown that you observed.
If your data fits into memory on a single machine (and you can successfully train a model without running out of memory), the way to speed up your training is to switch to a machine with more cores. You can also play around with certain parameter values that affect training speed to see if you can get a speed-up, but that usually comes at a cost in model performance.
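As a rough illustration of that advice, here is a minimal single-node sketch using the h2o Python module (the R bindings expose the same options); the memory size, file path, and column names are placeholder assumptions:

import h2o
from h2o.estimators import H2ORandomForestEstimator

# One H2O node using every core on the box; memory size is a placeholder.
h2o.init(nthreads=-1, max_mem_size="48G")

train = h2o.import_file("s3://my-bucket/train.csv")  # hypothetical path
y = "response"                                       # hypothetical response column
x = [c for c in train.columns if c != y]

# The parameters that most affect training speed: tree count and depth,
# min_rows, and the early-stopping settings. Tightening them trades
# model quality for runtime.
rf = H2ORandomForestEstimator(ntrees=2000, max_depth=15, min_rows=50, mtries=7,
                              stopping_rounds=3, stopping_metric="MSE",
                              stopping_tolerance=2e-5, seed=1234)
rf.train(x=x, y=y, training_frame=train)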
As Erin says, often adding more nodes just adds the capability for bigger data sets, not quicker learning. Random forest might be the worst; I get fairly good results with deep learning (e.g. 3x quicker with 4 nodes, 5-6x quicker with 8 nodes).
In your comment on Erin's answer you mention the real problem is you want to speed up hyper-parameter optimization? It is frustrating that h2o.grid() doesn't support building models in parallel, one on each node, when the data will fit in memory on each node. But you can do that yourself, with a bit of scripting: set up one h2o cluster on each node, do a grid search with a subset of hyper-parameters on each node, have them save the results and models to S3, then bring the results in and combine them at the end. (If doing a random grid search, you can run exactly the same grid on each cluster, but it might be a good idea to explicitly use a different seed on each.)
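A minimal sketch of that per-node scripting, using the h2o Python module for brevity (the R bindings follow the same pattern); the host name, S3 paths, and hyper-parameter subsets are placeholder assumptions. Run one copy of this against each independent single-node H2O cluster, giving each copy a different slice of the grid (or a different seed for a random search):

import h2o
from h2o.estimators import H2ORandomForestEstimator
from h2o.grid.grid_search import H2OGridSearch

# Each worker script connects to its own single-node H2O cluster (hypothetical host/port).
h2o.init(ip="node-1.example.com", port=54321)

train = h2o.import_file("s3://my-bucket/train.csv")  # hypothetical path
y = "response"                                       # hypothetical response column
x = [c for c in train.columns if c != y]

# This node's slice of the hyper-parameter space; other nodes get other slices.
hyper_params = {"mtries": [5, 7], "max_depth": [10, 15, 20]}

grid = H2OGridSearch(
    model=H2ORandomForestEstimator(ntrees=2000, stopping_rounds=3,
                                   stopping_metric="MSE", seed=1234),
    hyper_params=hyper_params,
    grid_id="rf_grid_node1",
)
grid.train(x=x, y=y, training_frame=train)

# Persist the models so the partial grids can be compared and combined later
# (assumes the cluster has S3 credentials configured).
for m in grid.models:
    h2o.save_model(m, path="s3://my-bucket/models/node1/", force=True)

Afterwards you can load the saved models (h2o.load_model) or just collect the stored validation metrics and pick the overall winner.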
I am planning to launch three EC2 instances across Amazon regions, say Region-A, Region-B, and Region-C.
Based on the above plan, each region acts as a cluster (or datacenter) and has one node (correct me if I am wrong).
Using this infrastructure, can I attain the configuration below?
Replication factor: 2
Write and read consistency level: QUORUM
My basic intention in doing this is to achieve: "If two regions go down, I can survive with the remaining one region."
Please help me with your inputs.
Note: I am very new to Cassandra, so whatever input you give will be useful to me.
Thanks
If you have a replication factor of 2 and use a consistency level of QUORUM, you cannot tolerate failure: a quorum for RF 2 is floor(2/2) + 1 = 2 acknowledgements, so if a node goes down and you only get 1 ack, that's not a majority of responses.
If you deploy across multiple regions, each region is, as you mention, a DC in your cluster. Each individual DC is a complete replica of all your data, i.e. it will hold all the data for your keyspace. If you read/write at a LOCAL_* consistency level (e.g. LOCAL_ONE, LOCAL_QUORUM) within each region, then you can tolerate the loss of the other regions.
The number of replicas in each DC/region and the consistency level you are using to read/write in that DC will determine how much failure you can tolerate. If you are using QUORUM, this is a cross-DC consistency level: it requires a majority of acks from ALL replicas in your cluster across all DCs. If you lose 2 regions, then it's unlikely that you will get a quorum of responses.
Also, it's worth remembering that Cassandra can be made aware of the AZs it is deployed in within a region and will do its best to place replicas of your data in multiple AZs. This will give you even better tolerance to failure.
If this were me and I didn't need a strong cross-DC consistency level (like QUORUM), I would have 4 nodes in each region, deployed across the AZs, with a replication factor of 3 in each region. I would then read/write at LOCAL_QUORUM or (preferably) LOCAL_ONE. If you go with LOCAL_ONE you could have fewer replicas in each DC, e.g. a replication factor of 2 with LOCAL_ONE means you could tolerate the loss of 1 replica.
However, this would be more expensive than what you're initially suggesting, but (for me) that would be the minimum setup I would need if I wanted to be in multiple regions and tolerate the loss of 2 of them. You could go with 3 nodes in each region if you really wanted to save costs.
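To make the per-region replication and LOCAL_* consistency concrete, here is a minimal sketch using the DataStax Python driver (cassandra-driver); the contact point, datacenter names, keyspace, and table are placeholder assumptions, and the DC names in the replication map must match what your snitch reports:

import uuid

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Connect via a contact point in the local region (placeholder address).
cluster = Cluster(["10.0.0.1"])
session = cluster.connect()

# Each region/DC keeps its own full set of replicas (RF 3 per DC here).
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS app WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'region_a': 3, 'region_b': 3, 'region_c': 3
    }
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS app.events (id uuid PRIMARY KEY, payload text)
""")

# LOCAL_QUORUM only needs a majority of the local DC's replicas (2 of 3),
# so losing the other regions does not block reads or writes here.
insert = SimpleStatement(
    "INSERT INTO app.events (id, payload) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
session.execute(insert, (uuid.uuid4(), "hello"))

select = SimpleStatement(
    "SELECT payload FROM app.events LIMIT 10",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
for row in session.execute(select):
    print(row.payload)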
I am trying to find out whether a 3-node HA cluster is common practice. Most of the references on Google point to 2-node clusters, but I am not able to convince myself that an application requiring five nines can be implemented as a 2-node HA cluster on commodity hardware.
The reason is simple: if the machine hosting one node goes offline, there will be only one node left, without any backup.
To reduce dependency on the node that went offline, I think a 3-node cluster is a minimum requirement.
In order to give a factual answer, much more data would be required.
But from an anecdotal perspective, two nodes of commodity hardware are not nearly enough to give you five-nines with any level of reliability (or at least sleep-at-night comfort).
Most cluster diagrams are likely drawn with only two nodes for ease of explanation, "If A fails, B keeps working".
Given your five-nines target and "commodity hardware", however, I would consider more than three nodes a requirement; perhaps as many as five or more.
Remember to allow for network, power and perhaps even geographical diversity if you are really after that kind of reliability.