Faster Logstash to Elastic indexing from flat files

I'm indexing JSON files out of S3 into Elasticsearch with Logstash's S3 input plugin, running on an EC2 t2.medium instance. This works fine, but it's incredibly slow. I'm looking for some advice on faster ways of doing this, as I realise that multithreading with multiple Logstash instances reading out of S3 isn't an option.
My source data is actually in Google BigQuery tables, so if there were a way I could index from there directly, that would be great, but I can't find a plugin or obvious way of doing this. I've been exploring the idea of pushing the BigQuery data into Redis first, but with the volume of data I'm looking to index, I'm concerned this adds extra overhead, both technical and cost-wise, that could be avoided.
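For what it's worth, the most direct alternative I can think of is to skip the S3/Logstash hop entirely and bulk-index straight from BigQuery. The sketch below is untested and only illustrative; it assumes the google-cloud-bigquery and elasticsearch Python packages, and the project, dataset, table, index, host and credential values are all placeholders.

```python
# Minimal sketch: stream rows out of BigQuery and bulk-index them into
# Elasticsearch. All names and credentials below are placeholders.
from google.cloud import bigquery
from elasticsearch import Elasticsearch, helpers

bq = bigquery.Client()
es = Elasticsearch(["https://my-cluster.example.com:9243"],
                   http_auth=("user", "password"))

table = bq.get_table("my_project.my_dataset.my_table")

def actions():
    # list_rows pages through the table, so it never holds 100M rows in memory
    for row in bq.list_rows(table):
        yield {
            "_index": "my-index",
            "_type": "doc",   # needed on 2.x/5.x clusters; drop on 7.x+
            "_source": dict(row.items()),
        }

# parallel_bulk batches the documents and sends several bulk requests at once
for ok, item in helpers.parallel_bulk(es, actions(),
                                      chunk_size=5000, thread_count=4):
    if not ok:
        print(item)
```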
My Elastic cluster is very simple: single node, single shard. I ran a test on a multi-node cluster to see if there was any indexing speed increase, and it stayed the same. I'm using Elastic's hosted cloud service, formerly Found, so I'm not sure if that would have any bearing on this.
At present I'm happily indexing around 5M rows a day, albeit slowly. I'm aiming to be able to index around 100M per day in as quick a time as possible. At the current EPS, it'll take days!
Any general pointers would be much appreciated.

Related

What are the general guidelines for Elasticsearch cluster configuration for instance size, data nodes and sharding?

We regularly encounter several issues with Elasticsearch. They seem to be as follows:
Out of disk space
Slow query evaluation time
Slow/throttled data write times
Timeouts on queries
There are various areas of an Elasticsearch cluster that can be configured:
Cluster disk space
Instance type/size
Number of data nodes
Sharding
Depending on which of the problems above you're hitting, it can be confusing which area of the cluster you should be tuning.
Increasing the ES cluster's total disk space is easy enough. Boosting the ES instance type seems to help when we experience slow data write times and slow query response times. Implementing sharding seems to be best when one particular ES index is extremely large. But it's never quite clear when we should boost the number of data nodes versus boosting the instance size.
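A quick way to sanity-check where the pressure actually is, before picking a knob, is to look at the standard _cat APIs. The sketch below is just one rough way to do that; the host is a placeholder and the comments are rules of thumb rather than guarantees.

```python
# Rough diagnostic sketch: see which resource is under pressure before
# deciding between bigger instances and more data nodes. Host is a placeholder.
import requests

ES = "http://localhost:9200"

# Per-node heap, RAM and load: consistently high values here usually point
# at needing a bigger instance type (or more heap, up to ~31GB).
print(requests.get(ES + "/_cat/nodes?v").text)

# Per-node disk usage and shard spread: full or uneven disks point at adding
# disk or adding data nodes.
print(requests.get(ES + "/_cat/allocation?v").text)

# Index sizes: one extremely large index points at giving it more shards.
print(requests.get(ES + "/_cat/indices?v").text)
```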

Migrating very large elasticsearch indices

We ran out of space due to a very large index (5TB primary | 5TB replica). This index has 5 shards (each shard is 1TB). We are planning to migrate this index to a bigger AWS instance type. Please let me know which settings can be modified for the migration to go fast and smoothly.
Note: We are using default elasticsearch settings.
First of all, I'd like to point out that a 1TB shard is way off from the recommended 30GB limit. I'd also assume that, due to this, your cluster probably isn't as optimised as expected, even in extreme scenarios.
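(As a rough illustration, at the commonly cited 30-50GB per shard, a 5TB index would want somewhere in the region of 100-170 primary shards rather than 5.)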
Secondly, the recommended settings would depend on the route you're taking to migrate this index.
I'd personally let snapshot/restore take care of the process, as it would use the least bandwidth and hence reduce the transfer time. Once done, since the snapshot is already in an AWS region, it would be faster to restore.
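As a rough sketch of that route (not a definitive procedure): register a shared S3 repository on both clusters, snapshot the index on the old one, and restore it on the new one. This assumes the S3 repository plugin is installed on both sides; the bucket, repository, index and host names below are all placeholders.

```python
# Sketch of a snapshot/restore migration via a shared S3 repository.
# Assumes the S3 repository plugin is installed; all names are placeholders.
import requests

OLD = "http://old-cluster:9200"
NEW = "http://new-cluster:9200"
repo = {"type": "s3", "settings": {"bucket": "my-snapshot-bucket"}}

# 1. Register the same repository on both clusters
requests.put(OLD + "/_snapshot/migration", json=repo)
requests.put(NEW + "/_snapshot/migration", json=repo)

# 2. Snapshot the large index on the old cluster and wait for it to finish
requests.put(OLD + "/_snapshot/migration/big-index-snap",
             params={"wait_for_completion": "true"},
             json={"indices": "big-index"})

# 3. Restore on the new cluster, with replicas off until the restore completes
requests.post(NEW + "/_snapshot/migration/big-index-snap/_restore",
              json={"indices": "big-index",
                    "index_settings": {"index.number_of_replicas": 0}})
```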
Again, I'm assuming a lot here, so a lot depends on your limitations and preferred method.
All the best.

What is the fastest way of indexing to ElasticSearch

We've been working with ElasticSearch 2.x for quite a while. Everything meets our requirements perfectly except for one weak point: the performance of writing/indexing to the ElasticSearch cluster is not very good.
In our case, we have an 8-node ES cluster, and the indices we are putting into ES are around 100 fields wide. The indexing rate is around 50,000 documents per minute, which is way too slow for our scenario. We've tried all the tuning methods recommended by www.elastic.co. The fastest way we've found is to construct the JSON payloads as files, then dump them into ES using the bulk API. But still, the indexing pace is just too slow.
I've seen the ES-Hadoop connector, and Elasticsearch also has Spark support where you can use saveToEs() to save an RDD to ES. I suspect they all use the ES bulk API underneath. Can anyone share some experience with them? What is the fastest way of writing indices in ElasticSearch?
No matter what third-party tool you use outside ES, everything needs to use the ES ways of putting data in. Whether it's Spark, Logstash, or your own app, they all need to use the bulk or index API in one way or another. There's no backdoor magic here.
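To make that concrete, here is a minimal sketch of what every one of those tools ultimately sends over the wire: a newline-delimited _bulk payload of alternating action and document lines. The host and index names are placeholders, and the _type line is only relevant on older (2.x/5.x) clusters.

```python
# Minimal sketch of a raw _bulk request: alternating action and document
# lines, newline-delimited, with a trailing newline. Names are placeholders.
import json
import requests

docs = [{"user": "a", "msg": "hello"}, {"user": "b", "msg": "world"}]

lines = []
for doc in docs:
    # "_type" is required on 2.x/5.x clusters; drop it on 7.x and later
    lines.append(json.dumps({"index": {"_index": "my-index", "_type": "doc"}}))
    lines.append(json.dumps(doc))
payload = "\n".join(lines) + "\n"   # the trailing newline is required

resp = requests.post("http://localhost:9200/_bulk", data=payload,
                     headers={"Content-Type": "application/x-ndjson"})
print(resp.json()["errors"])   # per-document results are under "items"
```

Beyond that, the usual levers are bigger bulk batches, several concurrent senders, a relaxed index.refresh_interval, and dropping replicas to 0 during the initial load.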

ELK Stack and scaling

Bear with me here. I have spent the last week or so familiarising myself with the ELK Stack.
I have a working single box solution running the ELK stack, and I have the basics down on how to forward more than one type of log, and how to put them into different ES indexes.
This is all working pretty well, and I would like to expand operations.
My question is more how to scale the solution out to cover more data needs/requirements.
The current solution is handling a smaller subset of data, and working fine, but I would like to aggregate a lot more data. For example, I am currently pushing message tracking logs from 4 mailbox servers; I want to do the same for 40 mailbox servers, and much, much busier ones.
I would also like to push over IIS log files from the Client Access servers. There are 18 CAS servers, and around 30 minutes of IIS logs per server during peak time comes to 120MB, with almost 1 million records.
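(Back-of-envelope, assuming the other servers are similarly busy: 18 servers × ~1 million records per 30 minutes is roughly 18 million records per half hour at peak, or on the order of 10,000 events per second.)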
This volume of data would most likely collapse a single box running ELK.
I haven't really looked into it, but I read that ES allows for some form of clustering to add more instances. Does the same apply to Logstash as well? Should Kibana be run on more than one server? Or on a different server from both Logstash and ES?
You will hit limits with logstash if you're doing a lot of processing on the records - groks, conditionals, etc. Watch the cpu utilization of the machine for hints.
For elasticsearch itself, it's about RAM and disk IO. Having more nodes in a cluster should provide both.
With two elasticsearch nodes, you'll get redundancy (a copy on both machines). Add a third, and you can start to realize an IO benefit (writing two copies to three machines spreads the IO).
The ultimate data node will have 64GB of RAM on the machine, with 31GB allocated to Elasticsearch (keeping the heap just under the JVM's compressed-pointer threshold).
You'll probably want to add non-data nodes, which handle the routing of data to be indexed and the 'reduce' phase when running queries. Put two of them behind a load balancer.
As Alain mentioned, adding more ES nodes will improve performance (and give you redundancy).
On the Logstash front, we have two Logstash servers feeding into ES. At the moment we just direct different servers to log to the different Logstash servers, but we're likely to add an HAProxy layer in front to do this automatically and, again, provide redundancy.
With Kibana, I wouldn't worry too much. As far as I'm aware, most of the processing is done in the client browser, and what isn't is more dependent on the performance of the ES cluster.

Which is better, Apache Solr or Elasticsearch?

I started creating my new search application. In my earlier application I used Apache Solr. Now I want to know which is better in terms of performance and usability.
Personally, I want to know the performance benchmarks of Elasticsearch and Solr. If there are other alternatives, suggestions are most welcome.
Disclaimer: I work at elasticsearch.com
I would just say: give elasticsearch a try. I think that after some hours (minutes?), you will already have formed an opinion.
Start 2 or 3 or 4 nodes, and you will see how things are rebalanced nicely.
About performance, I'd say that elasticsearch will give you consistent query throughput even while you are doing massive indexing operations.
I have used both quite a bit, and much prefer ElasticSearch. The API is more flexible and accessible. It is easier to get started with. Replication happens automatically by default. In general all the defaults are easier to work with. Everything generally works out of the box (safe defaults) and you only need to tune what you find needs to work better.
I have not worked much with SOLR 4, only with 3.x. Once I switched I never looked back, but I hear that there are many improvements in 4 with regards to replication and clustering that make it a usable competitor.
With regards to performance, I think that generally they are comparable as they both rely on Lucene. That is why there is a lack of valid benchmarks that make this general comparison. That said, there are certainly use cases where one will perform better than the other.
If you look at the utilization trends, while there are many more people currently using SOLR, it is in decline. That decline correlates closely with the increase in Elasticsearch users, which is very much on the rise. As Dadoonet said, give ElasticSearch a try; it won't take long, and you won't want to use SOLR again.
UPDATE
I just spent two weeks on a client site consulting on a SOLR Cloud installation. I am now much more familiar with the updates to SOLR, and can say quite confidently that I still prefer ElasticSearch, although it seems SOLR has some momentum again.
ElasticSearch is, hands down, more elastic. That is, having an elastic cluster where nodes come and go, or even where you just need to add nodes, is much, much easier in ElasticSearch than SOLR. Anyone who tells you it is easy in SOLR has not done it in ElasticSearch. ElasticSearch will automatically join a cluster and assume an active role in that cluster, taking over serving available shards and replicas. Over the last week I decommissioned a 2-node cluster, replacing it with two new nodes. I simply added the 2 new nodes and, one at a time, marked the other two nodes as non-data nodes. Once the shard migration completed, I decommissioned the old nodes. I had set minimum_master_nodes = 2 ((2/2)+1), and had no issue with split brain.
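For reference, a minimal sketch of applying that quorum setting on a pre-7.x cluster (where discovery.zen.minimum_master_nodes still exists); the host is a placeholder, and the same value can of course just be set in elasticsearch.yml instead.

```python
# Sketch: set the split-brain quorum dynamically on a pre-7.x cluster.
# Host is a placeholder; on 7.x+ this setting no longer exists.
import requests

master_eligible = 2
quorum = master_eligible // 2 + 1   # (2 // 2) + 1 = 2, as in the example above

requests.put("http://localhost:9200/_cluster/settings",
             json={"persistent": {
                 "discovery.zen.minimum_master_nodes": quorum}})
```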
During the same week, I had to add a node to a SOLR cluster. The process was poorly documented, especially considering the changes from 4.1 to 4.3 and the mishmash of existing documentation, much of which says you can't even do it, based on old versions of SOLR. I finally found documentation which clarified the process. It requires manually adding a core to the collection and then adding replicas to existing shards within the cluster. Finally, you manually decommission the redundant shards on some other node. At some point this node may become master for one of those shards, but not immediately.
With SOLR, if you do not have sufficient shards to distribute, you can just add replicas, or you can go through a shard split to create two new shards. Again, this is a poorly documented feature, but it is functionality that does not exist in ElasticSearch. You must split and then remove the original shard, something none of the documentation clearly explains.
SolrCloud has a couple of other advantages as well if integrating with Hadoop. If you are indexing data in HDFS or HBase, there are now both MapReduce and real-time methods of ingesting data into SOLR. This provides some real power to your Big Data platform and allows you to do full-text search over data that is otherwise barely accessible.
While you can index Hadoop data into ElasticSearch, the implementation is not as clean as the SolrCloud/Cloudera Search implementations. Having the MapReduce job directly build the shards is a far superior solution with significant performance benefits. Reducers talking directly to a cluster works, but it is not the same. I do not know if anything similar to the Lily connector for HBase exists for ElasticSearch; if not, I may look into writing one. This allows indexing directly from the HBase replication logs.
So in summary, there are certainly situations where either is beneficial. If you are looking for tight integration with Hadoop, SOLR (Cloudera Search specifically) is a good option. If you are looking for ease in managing an elastic cluster, Elasticsearch will be a much better option. For me, I'll continue with my hacky Hadoop integrations to make it work with Elasticsearch, until something better emerges.
