ElasticSearch vs. ElasticSearch+Cassandra

My main question is what is the benefit of integrating Cassandra and Elasticsearch versus using only Elasticsearch?
In fact, there are answers to similar questions on StackOverflow (e.g., here and here). But there are some points:
A lot of those answers are old, and much may have changed in the intervening years.
One point that is mentioned is that "Sometimes ElasticSearch loses writes". However, those alleged losses may have been caused by bugs that have since been fixed. Presumably Cassandra can also have bugs that cause data loss. Are there any fundamental differences between Cassandra and Elasticsearch that cause Elasticsearch to lose data but not Cassandra?
It is mentioned that "Schema changes are difficult to do in ElasticSearch without blowing everything away and reloading." This may not be a major problem for us, assuming that our data model is relatively stable, or at least backward-compatible. Also, because of dynamic mapping, Elasticsearch may adapt itself to new requirements (e.g., extra fields).
With respect to the indexing delay in Elasticsearch: Cassandra does not provide strong consistency by default either, so in Cassandra you may also face delays in reading data you have just written.
Overall, what extra features does Cassandra offer when used in conjunction with Elasticsearch?
P.S. It may be better if the question is answered in general. But if necessary, assume that we only append rows to the database and never delete or update anything, and that we want to be able to do full-text search over the data.

So as the author of one of the linked answers (Elasticsearch vs Cassandra vs Elasticsearch with Cassandra), I suppose that I should weigh in here.
those alleged losses may have been caused by bugs that have since been fixed.
This is an absolutely true statement. The answer I wrote is almost six years old, and ElasticSearch has grown to be a much more reliable product in that time. That being said, there are some things which Cassandra can do that ElasticSearch just wasn't designed to do (and vice-versa).
what extra features does Cassandra offer...
I can think of a few, which I'll summarize here:
Write throughput/performance/latency
ElasticSearch is a search engine based on the Lucene project. Handling large amounts of write throughput at low latencies is just not something that it was designed to do; at least not "out of the box." There are ways to configure ElasticSearch to be better at this, as described here: Techniques to Achieve High Write Throughput With ElasticSearch. But in terms of building a new cluster with minimal config, you'll spend less time engineering Cassandra to accomplish this.
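For illustration, here is a minimal sketch of that kind of tuning, assuming the official elasticsearch-py client and a hypothetical "events" index (keyword spellings follow the 8.x client; older clients take a body dict instead): relax the refresh interval and replica count during a bulk load, then restore them.

    # A minimal sketch, not a definitive recipe: trade read freshness and
    # redundancy for write throughput during a bulk load.
    from elasticsearch import Elasticsearch, helpers

    es = Elasticsearch("http://localhost:9200")

    # Hypothetical "events" index; settings= is the 8.x client spelling.
    es.indices.put_settings(
        index="events",
        settings={"index": {"refresh_interval": "30s", "number_of_replicas": 0}},
    )

    # Batched _bulk requests instead of one HTTP round-trip per document.
    actions = ({"_index": "events", "_source": {"user": f"u{i}", "msg": "hello"}}
               for i in range(100_000))
    helpers.bulk(es, actions)

    # Restore production settings once the load is done.
    es.indices.put_settings(
        index="events",
        settings={"index": {"refresh_interval": "1s", "number_of_replicas": 1}},
    )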
"Sometimes ElasticSearch loses writes"
Yes, I wrote that. Again, ElasticSearch has improved. A lot. But I still see this happen under high write throughput conditions. When a cluster is engineered for a certain level of throughput, and an application exceeds those tolerances causing a node to become overwhelmed from the write back-pressure, writes will be lost.
Cassandra is not immune to this problem, either. It just has a higher tolerance for it. If you were to use them both together, architecting something like Kafka to "throttle" the write throughput to each would be a good approach.
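As a rough sketch of that approach (assuming kafka-python, cassandra-driver, and elasticsearch-py; the topic, keyspace, and schema here are hypothetical): a consumer drains the topic at its own pace, so a producer spike becomes consumer lag instead of write back-pressure on either cluster.

    import json

    from cassandra.cluster import Cluster
    from elasticsearch import Elasticsearch, helpers
    from kafka import KafkaConsumer

    consumer = KafkaConsumer("events", bootstrap_servers="localhost:9092",
                             value_deserializer=lambda b: json.loads(b))
    session = Cluster(["127.0.0.1"]).connect("app")
    insert = session.prepare("INSERT INTO events (id, body) VALUES (?, ?)")
    es = Elasticsearch("http://localhost:9200")

    batch = []
    for msg in consumer:              # blocks; throughput is consumer-paced
        batch.append(msg.value)
        if len(batch) >= 500:         # flush in bounded chunks
            for e in batch:
                session.execute(insert, (e["id"], json.dumps(e)))
            helpers.bulk(es, ({"_index": "events", "_id": e["id"], "_source": e}
                              for e in batch))
            batch.clear()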
Multi-Data Center High Availability (MDHA)
With the ability to define logical data centers and availability zones (racks), Cassandra has always been good at replicating a data set over multiple regions.
This is problematic for ElasticSearch, as it does not have a concept of a logical data center, and its "master" nodes are not active/active.
Peer nodes vs. role-based nodes
As a follow-up to my MDHA point, ElasticSearch now allows for nodes to be designated with a "role" in the cluster. You can specify multiple nodes to act in the "master" role, in charge of adding and updating indexes. Any node can direct search traffic to the nodes working under the "data" role. In fact, one way to improve write throughput (my first talking point) is to designate a node or two with the "ingest" role, which can prevent read and write traffic from interfering with each other.
This deviates from Cassandra's approach, where every node is a peer and can handle reads and writes. Being able to treat all nodes the same simplifies maintenance and administration. And "no," despite popular misconception, a "seed" node is not anything special.
Query vs. Search
To me, this is the fundamental difference between the two. Querying is not the same as searching. They may seem similar, but they are quite different.
Retrieving data by matching a pattern on one or more columns/properties is searching. With searching, the number of results is also more of an unknown beforehand. Sure, Cassandra has added some features in the last few years to allow pattern matching with LIKE queries (though I don't recommend using them). But when the ability to "search" a data set is required, Cassandra can't compete with ElasticSearch.
Retrieving data by providing a specific value for a specific key (column) is querying. With querying, it is also easier to have accurate expectations about the number of results returned. If I were building an app and knew that I'd only ever retrieve data based on a static, pre-defined query with a specific key, I'd choose Cassandra every time.
With Cassandra, I can also tune query consistency, requiring operational acknowledgement from more or fewer replicas. Likewise, I can also direct those operations to a specific geographic region, based on the locality of the application.
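To make the distinction concrete, here is a small sketch assuming cassandra-driver and elasticsearch-py, with hypothetical keyspace, table, and index names:

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement
    from elasticsearch import Elasticsearch

    # Query: fetch by a known key, with consistency tunable per statement.
    session = Cluster(["127.0.0.1"]).connect("app")
    stmt = SimpleStatement(
        "SELECT * FROM users WHERE user_id = %s",
        consistency_level=ConsistencyLevel.LOCAL_QUORUM,  # quorum of local replicas
    )
    row = session.execute(stmt, ("alice-123",)).one()

    # Search: match analyzed text; the number of hits is unknown up front.
    es = Elasticsearch("http://localhost:9200")
    hits = es.search(index="users", query={"match": {"bio": "rock climbing"}})
    print(hits["hits"]["total"])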
...when used in conjunction with Elasticsearch?
They complement each other well. Cassandra is good at some things (detailed above) that ElasticSearch is not (and vice-versa...saying that a lot). Requirements for an application may require both searching and querying. Sometimes you've got an app that needs that high-speed key lookup "oh, and we also want search."
Summary (tl;dr)
So while I've written quite a bit here, the main point that I'll keep coming back to is picking the right tool for the job. When I need to search, I'll pick ElasticSearch. When I need to query in a highly-available, geographically-aware scenario, I'll pick Cassandra. I still see applications use both (in tandem), so both have their merits.

Related

Is there any issue if I use ElasticSearch instead of a relational database?

As the title says: if I CRUD data directly through Elasticsearch, without a relational database (MySQL/PostgreSQL), are there any issues?
I know Elasticsearch is good at searching, but if I update data frequently, might I get bad performance?
And if every update request uses setRefreshPolicy(IMMEDIATE), might that also hurt performance?
ElasticSearch will likely outperform a relational database on similar hardware, though workloads vary. However, ElasticSearch can do this because it has made certain design decisions that differ from those of a relational database.
ElasticSearch is eventually consistent. This means that queries immediately after your insert might still return old results. There are things that can be done to mitigate this, but nothing eliminates the possibility.
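For example, here is a small sketch of the read-after-write gap and one common mitigation, assuming elasticsearch-py and a hypothetical "notes" index (refresh is a real Elasticsearch option; document= is the 8.x client spelling):

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    es.index(index="notes", id="1", document={"text": "draft"})
    # A search right here may not see the document yet; it only becomes
    # visible after the next refresh (default refresh_interval is 1s).

    # Mitigation: block the index call until the next refresh has happened.
    # This buys read-your-writes behaviour at the cost of write latency.
    es.index(index="notes", id="2", document={"text": "final"}, refresh="wait_for")
    hit = es.search(index="notes", query={"match": {"text": "final"}})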
Prior to version 5.x, ElasticSearch was pretty good at losing data when bad things happened. The 5.x release was all about making Elastic more robust in those regards, and data loss is no longer the problem it once was, though the potential for it still exists, particularly if you make configuration mistakes.
If you frequently modify documents in ElasticSearch you will generate large numbers of deleted documents as every update generates a new document and marks an old document as deleted. Over time those old documents fall off, or you can force the system to clean them out, but if you are doing rapid modifications this could present a problem for you.
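If the deleted documents do pile up, a sketch of the cleanup (assuming elasticsearch-py) looks like this; note that force-merging is an expensive maintenance operation, not something to run after every batch of updates:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Every update rewrites the document and tombstones the old version;
    # background segment merges clean these up over time. To force it:
    es.indices.forcemerge(index="notes", only_expunge_deletes=True)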
The application I am working on uses Elasticsearch as the backend. There are 9 microservices connecting to this backend. Writes are fewer compared to reads. Our write APIs have a performance requirement of at most 3 seconds.
We have configured a 1-second refresh interval and always use WAIT_FOR instead of IMMEDIATE, occasionally using NONE in the case of asynchronous updates.

CQRS (Lagom) elasticsearch read-side

I've read that ElasticSearch isn't the most reliable in terms of durability, but I would like to use it to store data on the read-side for optimal searching.
If we store events (write-side) in a Cassandra database, that means that data is never really lost.
I don't really understand what is meant by 'data durability'.
If we use ES on the read-side, does that mean some data may not be properly imported? Does it mean that data may randomly be lost one day, or that there is a risk all the data may one day just disappear?
The use case is a Twitter-like geolocation-based app.
How reliable is it in the end to use ES exclusively on the read-side, without needing a more reliable datastore (write-side) to store the data?
Depending on what is meant by this "durability", I wonder what measures should be taken to replay events and keep ES consistent at all times.
Thanks
I don't have a huge amount of experience running ES in production, but essentially, ensuring that data stays persisted once you persist it, especially in a distributed system, is hard. There are many, many edge cases that are very hard to get right, and it takes time for a database to mature and sort those edge cases out. A less durable database is one that probably hasn't ironed out all of these issues yet.
Of course, ElasticSearch is a popular open source database with a thriving community maintaining it, so there are likely no well-defined cases where "your data will be lost in this circumstance". Rather, there are likely cases that either haven't been come across yet, or that, when users did come across them in the wild, weren't debugged, because those users were only using ES as a secondary data store and were able to rebuild it from their primary store. Whenever a case is identified in which ES loses data under well-understood circumstances, the maintainers of ES are quick to fix it.
The most typical use case for ES is as a secondary database, and in that use case durability isn't as important because the data store can be rebuilt from the primary. Accordingly, you'll find durability isn't as high a priority to the maintainers of ES because their users aren't asking for it. That's not to say it's not a priority, just that relative to other databases it's not as high.
So if you use ES, you've got a higher chance of encountering data-loss bugs than with databases that are either more mature or put more focus on durability in their development.
As to whether you should regularly drop your ES database and replay the events, it really depends on your use case and how important it is for your ES database to be consistent. A lot of the edge cases around ES's durability probably result in major corruption with significant data loss (i.e., you'll know if it happens), so there's no need to drop and replay regularly in that case. Another thing to consider is that, because of the way CQRS read sides work, you'll only have a limited number of writers to your ES store, and you can easily control that concurrency. This means a spike in load won't result in a spike in concurrent writers; instead, your ES store might temporarily lag behind your primary store in consistency. Because of this, you're less likely to encounter the edge cases that might trigger ES to lose data.
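One way to make replays harmless, sketched here with elasticsearch-py and a hypothetical event shape, is to derive the document _id deterministically from the entity, so re-processing an event overwrites the same document instead of duplicating it; a full rebuild is then just a replay:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    def project(event: dict) -> None:
        # e.g. event = {"entity_id": "tweet-42", "lat": 48.8, "lon": 2.3, ...}
        es.index(
            index="tweets",
            id=event["entity_id"],   # deterministic id => idempotent upsert
            document={k: v for k, v in event.items() if k != "entity_id"},
        )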
So, you're probably fine not bothering dropping and rebuilding unless something catastrophic happens, unless the consequences of silently losing small amounts of data in a way that you won't notice are so high that the incredibly small chance that that might happen is unacceptable.
I know this topic is more than 3 years old, but I am also using Elasticsearch for the read side of CQRS, and I think there are other platforms that fit the write side better. It is not just a matter of database technology; in today's Event Sourced paradigm more is necessary. I am using Akka's Finite State Machine with Cassandra, which in my opinion handles that sort of extreme write load better than Elasticsearch.
I wrote a blog post about it, if anybody would like to see: Write Side for Elasticsearch CQRS

ElasticSearch Analytical queries

I am evaluating a few different options for powering an analytics application using an open-source technology. One of the options is ElasticSearch, though I haven't been able to find any examples of companies using it for large-scale analytics implementations, hence my question here.
For datasets of 1B-10B points, what limitations (if any) would ElasticSearch have? For example, could it support a feature set like Google Analytics?
Here's one user who seems to do analytics on largeish amounts of data - https://digitalgov.gov/2015/01/07/elk - plus a description of what they do, including downsides.
With Elasticsearch there is no black-and-white answer to a question as open-ended as yours. The number of records is not everything: how much disk space are we talking about, how many nodes, how many indices, how many shards for each, what kind of analytics you need, hardware specs, etc. Two things are certain from the data you mentioned: you need dedicated master nodes and, more importantly, good client nodes, and depending on the queries and the concurrent search count you will need more or fewer of them.
In Elasticsearch 5 the client node is called a coordinating node, but it has the same role. One limitation I can think of is the heap/RAM of such a coordinating node. The heap of an Elasticsearch node shouldn't be set to values larger than ~30 GB, due to the longer garbage collection cycles of the JVM (the larger the heap to clean, the longer a collection takes, and the less usable the node is while it runs). During GC nothing else runs on that JVM, so you could be limited by the size of the memory.
I said you will most likely need coordinating nodes because heavy aggregations (probably the most used feature of an analytics platform) consume CPU and memory in the final phase of a query, where the node gathers the results from all shards involved and performs the final sorting and aggregation. Thus it needs more memory than a normal data node would need for aggregations alone.
I doubt, though, that a single aggregation will use many GBs of memory, but it theoretically could if the query/aggregation is built in a reckless way. Depending on how many concurrent searches there are and how much memory they use, you might need more or fewer coordinating nodes so that GC cycles don't become too frequent.
Bottom line: I think this is possible, but some common sense is needed (see my comment about reckless aggregations) and some estimations regarding load that are as close to reality as possible.
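To illustrate the "reckless aggregation" point, here is a sketch assuming elasticsearch-py and a hypothetical "pageviews" index (keyword spellings follow the 8.x Python client); the final reduce happens on the coordinating node, so its memory cost scales with the number of buckets requested:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Bounded: top 10 URLs, no hits returned, cheap to reduce.
    es.search(index="pageviews", size=0,
              aggs={"top_pages": {"terms": {"field": "url", "size": 10}}})

    # Reckless: millions of buckets with a nested per-user breakdown. Recent
    # versions reject this via the search.max_buckets limit, but a query
    # shaped like this is what drives coordinating-node heap pressure.
    es.search(index="pageviews", size=0,
              aggs={"all_pages": {"terms": {"field": "url", "size": 5_000_000},
                                  "aggs": {"by_user": {"terms": {"field": "user"}}}}})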
Google Analytics Pros:
Easy to Install
Can be used in multiple environments (e.g. web, mobile, other)
Customized data collection
Google Analytics Cons:
Custom reporting is limited
Upgrading to Premium is expensive
Requires continual training
Slices data into smaller samples to deal with large data volumes (sampling)
ElasticSearch Pros:
Distributed by design
Easier to scale horizontally
Good at full text search
Fast indexing & querying
ElasticSearch Cons:
Not a relational database, therefore does not benefit from things like foreign-key constraints
Data consistency can be affected
No built-in authentication or authorization system

How reliable is ElasticSearch as a primary datastore against factors like write loss, data availability

I am working on a project with a requirement of building a generic dashboard where users can do different kinds of grouping, filtering and drill-down on different fields. For this we are looking for a search store that allows slicing and dicing of data.
There would be multiple sources of data, which we would store in the search store. Some pre-computation may be required on the source data, which can be done by an intermediate component.
I have looked through several blogs to understand whether ES can be used reliably as a primary datastore too. It mostly depends on the use case. Some information about our use case:
Around 300 million records each year, 1-2 KB each.
Assuming we store 1 year of data, we are at 300 GB today, but the use case can grow to 400-500 GB given data growth.
As of now we are not sure how we will push data, but roughly it can go up to ~2-3 million records per 5 minutes.
Search requests are low in volume, but require complex queries that can search data from the last 6 weeks to 6 months.
Documents will be indexed across almost all of their fields.
Some blogs say that it is reliable enough to use as a primary data store -
http://chrisberkhout.com/blog/elasticsearch-as-a-primary-data-store/
http://highscalability.com/blog/2014/1/6/how-hipchat-stores-and-indexes-billions-of-messages-using-el.html
https://karussell.wordpress.com/2011/07/13/jetslide-uses-elasticsearch-as-database/
And some blogs say that ES have few limitations -
https://www.found.no/foundation/elasticsearch-as-nosql/
https://www.found.no/foundation/crash-elasticsearch/
http://www.quora.com/Why-should-I-NOT-use-ElasticSearch-as-my-primary-datastore
Has anyone used Elasticsearch as the sole source of truth, without a primary storage like PostgreSQL, DynamoDB or RDS? I have read that ES has had issues like split-brain and index corruption that can lead to data loss. So I am looking to know if anyone has used ES this way and has run into trouble with their data.
Thanks.
Short answer: it depends on your use case, but you probably don't want to use it as a primary store.
Longer answer: You should really understand all of the possible issues that can come up around resiliency and data loss. Elastic has some great documentation of these issues which you should really understand before using it as a primary data store. In addition Aphyr's post on the topic is a good resource.
If you understand the risks you are taking and you believe that those risks are acceptable (e.g. because small data loss is not a problem for your application) then you should feel free to go ahead and try it.
It is generally a good idea to design redundant data storage solutions. For example, it can be a fast and reliable approach to first push everything as flat data to static storage like S3, then have ES pull and index the data from there. If you need more flexibility, leveraging some ORM, you could put an RDS or Redshift layer in between. This way the data can always be rebuilt in ES.
It depends on your needs and requirements how you balance redundancy against flexibility/performance. If a lot of data is involved, you could store the raw data statically and have ES index only parts of it.
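As a rough sketch of that pattern (assuming boto3 and elasticsearch-py; the bucket, key, and index names are hypothetical, and the flat file is newline-delimited JSON), rebuilding ES from S3 is a single pass:

    import json

    import boto3
    from elasticsearch import Elasticsearch, helpers

    s3 = boto3.client("s3")
    es = Elasticsearch("http://localhost:9200")

    def rebuild(bucket: str, key: str) -> None:
        # Stream the flat file and bulk-index one document per line.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"]
        records = (json.loads(line) for line in body.iter_lines())
        helpers.bulk(es, ({"_index": "records", "_id": r["id"], "_source": r}
                          for r in records))

    rebuild("my-raw-data", "exports/records.ndjson")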
Amazon Lambda offers great features:
Many developers store objects in Amazon S3 while using Amazon DynamoDB to store and index the object metadata and enable high speed search. AWS Lambda makes it easy to keep everything in sync by running a function to automatically update the index in Amazon DynamoDB every time objects are added or updated from Amazon S3.
Since 2015, when this question was originally posted, a lot of resiliency issues have been found and addressed, and in recent years many features, specifically stability and resiliency features, have been added, so it's definitely something to consider given the right use cases, leveraging the right features in the right way.
So as of 2022, my answer to this question is - yes you can, as long as you do it correctly and for the right use-case.

Is it appropriate to use a search engine as a caching layer?

We're talking about a normalized dataset, with several different entities that must often be accessed along with related records. We want to be able to search across all of this data. We also want to use a caching layer to store view-ready denormalized data.
Since search engines like Elasticsearch and Solr are fast, and since it seems appropriate in many cases to put the same data into both a search engine and a caching layer, I've read at least anecdotal accounts of people combining the two roles. This makes sense on a surface level, at least, but I haven't found much written about the pros and cons of this architecture. So: is it appropriate to use a search engine as a cache, or is using one layer for two roles a case of being penny wise but pound foolish?
These guys have done this...
http://www.artirix.com/elasticsearch-as-a-smart-cache/
The problem I see is not in the read speed, but in the write speed. You are incurring a pretty hefty cost for adding things to the cache (forcing spool to disk and index merge).
Things like memcached, or ElastiCache if you are on AWS, are much more efficient at both inserts and reads.
"Elasticsearch and Solr are fast" is relative: caching infrastructure is often measured in the single-digit-millisecond range, for inserts as well as reads. These search engines are measured in tens of milliseconds at best for reads, and much higher for writes.
I've heard of setups where ES was used for what it is really good at, full-text search, in parallel with a secondary storage. In those setups the data was not stored in ES (though it can be) - "store": "no" - and after searching ES's indices, the actual records were retrieved from the second storage level, usually an RDBMS, given that ES held a reference to the actual record in the RDBMS (an ID of some sort). If you're not happy with what the secondary storage gives you in terms of speed and "search" in general, I don't see why you couldn't set up an ES cluster to provide the missing piece.
The disadvantage here is the time spent architecting the ES data structures, because ES is not as good as an RDBMS at representing relationships. It doesn't really need to be; its main job and purpose are different, and it is in fact happier with a denormalized data set to search over.
Another disadvantage is the complexity of keeping the two storage systems in sync, which requires some thinking ahead. But once the initial setup and architecture are in place, it should be easy afterwards.
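A small sketch of that two-level lookup, assuming elasticsearch-py and sqlite3 as a stand-in RDBMS (index, table, and field names are hypothetical): ES holds only the searchable text plus a row reference, and the authoritative record is fetched from the relational store.

    import sqlite3

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")
    db = sqlite3.connect("app.db")

    def search_articles(text: str):
        res = es.search(index="articles",
                        query={"match": {"body": text}},
                        source=["db_id"])    # only the reference, not the record
        ids = [h["_source"]["db_id"] for h in res["hits"]["hits"]]
        if not ids:
            return []
        marks = ",".join("?" * len(ids))
        return db.execute(
            f"SELECT * FROM articles WHERE id IN ({marks})", ids).fetchall()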
The only recommended way of using a search engine here is to create indices that match your most frequently accessed denormalised data access patterns. You can call it a cache if you want. For searching it's perfect, as it's fast enough.
A recommended thing to cache there is statistics for "aggregated" queries; "Top 100 hotels in Europe" is a good example.
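As a toy sketch of caching such an aggregated query (assuming elasticsearch-py; a real deployment would put the result in memcached/Redis with a TTL rather than an in-process dict):

    import time

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")
    _cache: dict = {}

    def top_hotels(region: str, ttl: float = 300.0):
        entry = _cache.get(region)
        if entry and time.time() - entry[0] < ttl:
            return entry[1]                    # serve the cached aggregation
        res = es.search(index="hotels", size=0,
                        query={"term": {"region": region}},
                        aggs={"top": {"terms": {"field": "hotel_id", "size": 100}}})
        buckets = res["aggregations"]["top"]["buckets"]
        _cache[region] = (time.time(), buckets)
        return buckets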
Maybe you can consider in-memory Lucene indexes instead of Solr or Elasticsearch. Here is an example
