I've been confused about this for a while. RethinkDB is schemaless, so given the way it stores documents (or the way any other NoSQL database stores them), would a shorter JSON key make for smaller documents?
{"avg": 312} vs {"average": 312}
Thanks
For now, yes, since RethinkDB doesn't compress fields.
See https://github.com/rethinkdb/rethinkdb/issues/1396
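A quick way to see the difference is to compare the serialized sizes yourself. Here's a minimal Node.js/TypeScript sketch; RethinkDB's on-disk format isn't literal JSON text, so treat this only as an approximation of the per-document overhead:

```typescript
// Each document stores its own key names, so shorter keys mean
// fewer bytes per document until field-name compression lands.
const short = JSON.stringify({ avg: 312 });     // {"avg":312}
const long  = JSON.stringify({ average: 312 }); // {"average":312}

console.log(short.length); // 11
console.log(long.length);  // 15
```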
Related
I have two indexes that share one common field (basically a relationship).
Since Elasticsearch does not support filtering across multiple indexes, should we load the results into a variable in memory and filter them in Node.js (which basically means my application itself is now acting as a database server)?
We were previously using MongoDB, which is also a NoSQL DB, and there we could manage this through aggregation queries, but Elasticsearch does not seem to provide that.
So even if we use both databases together, we have to store their results somewhere to filter them further, because we give users advanced search functionality where they can filter data across multiple collections.
So should we store results in memory to filter the data further? We currently give customers advanced search over 100 million records, but without the advanced text search that Elasticsearch provides; now we are planning to offer Elasticsearch's text search to customers.
What approach do you suggest for using MongoDB and Elasticsearch together? We are using Node.js to serve the data.
Or which of these options should we choose?
Denormalizing: Flatten your data
Application-side joins: Run multiple queries on normalized data
Nested objects: Store arrays of objects
Parent-child relationships: Store multiple documents through joins
https://blog.mimacom.com/parent-child-elasticsearch/
https://spoon-elastic.com/all-elastic-search-post/simple-elastic-usage/denormalize-index-elasticsearch/
Storing things client side in memory is not the solution.
First of all, the simplest way to solve this problem is to make one combined index. It's trivial to do: just insert all the documents from index-2 into index-1, prefixing every field coming from index-2 with something like "idx2" so you don't overwrite any similarly named fields. You can use an ingest pipeline to do this, or just do it client-side. You will only ever need to do this once.
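A minimal sketch of that one-time copy, assuming the official Node.js client with the v8-style API and the index names index-1 and index-2 from above; the Painless script prefixes every field before the document lands in index-1:

```typescript
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Copy every document from index-2 into index-1, prefixing each
// field with "idx2_" so nothing collides with index-1's own fields.
await client.reindex({
  source: { index: 'index-2' },
  dest: { index: 'index-1' },
  script: {
    lang: 'painless',
    source: `
      Map prefixed = new HashMap();
      for (entry in ctx._source.entrySet()) {
        prefixed.put('idx2_' + entry.getKey(), entry.getValue());
      }
      ctx._source = prefixed;
    `,
  },
});
```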
After that you can perform aggregations on the single index, since you have all the data in one place.
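For example, a single bool query can now filter on fields that originally lived in separate indexes (the field names here are made up; client is the instance from the sketch above):

```typescript
const results = await client.search({
  index: 'index-1',
  query: {
    bool: {
      must: [
        { match: { description: 'advanced search' } }, // original index-1 field
        { term: { idx2_status: 'active' } },           // field copied from index-2
      ],
    },
  },
});
```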
If you are using something other than ES as your primary data store, you need to reconfigure the indexing operation so that everything that was previously going into index-2 also goes into index-1 (with the prefixed fields).
100 million records is trivial for something like Elasticsearch. Doing any kind of "joins" client-side is NOT RECOMMENDED, as it obviates the entire value of using ES.
If you need any further help executing this, feel free to contact me. I have 11 years of experience with ES, and 99% of the struggles I have seen involve "joins". :)
The first thing to do when coming from MySQL/Postgres or even MongoDB is to restructure the indices to suit the needs of your queries. Never try to work with multiple indices; ES is not built for that.
HTH.
In my Cassandra database I have a table of users, and I want a function to search for users by their unique usernames. For that I need to query all usernames from the user table so that I can filter them server-side: for the input "nark" I should also find the usernames "Mark", "Narkis", and so on, so I can't just use the username as a partition key and search for the exact value.
If I put them all in the same partition, I get a hot partition. If I distribute them over multiple partitions, I have to search all of them.
How can I query that efficiently for millions of users? Is there a way to search like that without scanning every username?
Thank you for your help!
Cassandra natively is not a good fit for such a use case. Even extensive use of secondary indexes will be of minimal help here.
Nevertheless, if you already have all your data in C*, then to achieve such functionality you essentially need an indexing framework on top of it; the most widely used is Apache Solr (built on Lucene). I have seen Solr work like magic for fuzzy searching on C*, though nothing beats having something like Elasticsearch for this use case.
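If you do move the usernames into Elasticsearch, the query side is straightforward. A rough sketch, assuming a hypothetical users index with a username field (v8-style Node.js client): the fuzzy clause catches "Mark" for the input "nark" (one edit away), and the prefix clause catches "Narkis":

```typescript
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

const input = 'nark';
const results = await client.search({
  index: 'users',
  query: {
    bool: {
      should: [
        // "nark" -> "Mark": within edit distance 1
        { fuzzy: { username: { value: input, fuzziness: 'AUTO' } } },
        // "nark" -> "Narkis": prefix match
        { match_phrase_prefix: { username: input } },
      ],
      minimum_should_match: 1,
    },
  },
});
```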
I am new to Elasticsearch and starting to sync my database tables into Elasticsearch indexes. I have started by using the table ID (UUID) as the Elasticsearch _id, but I am starting to wonder if this is a mistake in terms of performance or flexibility in the long term. Any advice would be appreciated.
I think this approach should actually be a best practice. When you update data in your ES index from the (changed) DB, you can address the document directly.
It has worked great for us to use the _bulk update API, which requires an explicit id per item.
On every change on the DB side, we enqueue a change notification; the changed object gets JSON-serialized and sent to ES asynchronously, in larger batches. That makes a huge performance difference. Search performance, on the other hand, does not depend on the length of the _id AFAIK, not even when you look up by _id. So your DB UUID should be just fine, especially since _ids can be alphanumeric; they are not limited to numbers.
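For illustration, a bulk-indexing sketch that uses the DB UUID as the _id (v8-style Node.js client; the index name and rows are made up):

```typescript
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Rows as they might come off the change-notification queue.
const rows = [
  { id: '550e8400-e29b-41d4-a716-446655440000', name: 'Alice' },
  { id: '9b2b9f5e-1c3a-4d2f-8e6a-0c1d2e3f4a5b', name: 'Bob' },
];

// _bulk alternates an action line with a document line. Because the
// DB UUID is the _id, re-syncing the same row overwrites the same
// ES document instead of creating a duplicate.
await client.bulk({
  operations: rows.flatMap((row) => [
    { index: { _index: 'users', _id: row.id } },
    { name: row.name },
  ]),
});
```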
Having a 1:1 relationship via _id between the ES result and your system of record (I assume that's what your DB is for) is advantageous also for transparency purposes. In any case, you want to store the database ID as some field, ideally indexed, at least, to help you understand where that document came from.
So, rather than creating your own ID field, you may as well use the built-in _id field right away, with your DB-supplied data.
We were comparing these search solutions and started to wonder why one needs a schema and the other does not. What are the tradeoffs? Is it because one is like SQL and the other is like NoSQL in the sense of schema configuration?
ES does have a schema, defined via templates and mappings. You don't have to use it, but in practice you will. A schema is actually a good thing, and if you notice a database claiming to be purely schemaless, there will be performance implications.
A schema is a tradeoff between ease of development and adoption on one side and performance on the other. It is easy to read from and write to a schemaless database, but it will be less performant, particularly for any non-trivial query.
Elasticsearch definitely has a schema. If you think it does not, try indexing a date into a field and then an int into the same field. Or even into different types with the same name (I think ES 2.0 disallows that now).
What Elasticsearch does is simplify the auto-creation of a schema. That has tradeoffs, such as possible incorrect type detection, fields that appear single-valued or multivalued in the result output depending on the number of elements they contain (under the covers they are always multivalued), and so on. Elasticsearch has some ways to work around that, mostly by defining some of the schema elements up front with an explicit schema mapping, as Oleksii wrote.
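For example, you can pin down just the fields where auto-detection might guess wrong and let dynamic mapping handle the rest (index and field names here are illustrative; v8-style Node.js client):

```typescript
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Explicitly map the ambiguous fields up front; anything not listed
// still gets a dynamically detected type on first use.
await client.indices.create({
  index: 'events',
  mappings: {
    properties: {
      created_at: { type: 'date' },  // avoid date-vs-string guessing
      count: { type: 'integer' },    // dynamic detection would pick long
      note: { type: 'text' },
    },
  },
});
```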
Solr also has a schemaless mode that closely matches Elasticsearch's, down to storing the whole JSON as a single field. When you enable it, you get benefits and disadvantages similar to Elasticsearch's, except that in Solr you can change things like the order of the auto-type strategies and the mapping to field types; in Elasticsearch (1.x at least) that was hard-coded. You can see a slightly dated comparison in my presentation from 2014.
As Slomo said, they both use Lucene underneath for storage and for most of the search, so the core engine approach does not differ.
I have been working with Elasticsearch for the past two months. I have used both the REST API and client libraries in different languages to index, get, and search data. I have also read a lot about Elasticsearch and found that it is said to not be a good option to use as a data store. Why is this? I'm also curious how Elasticsearch internally stores the indexed data. Any good link or explanation?
Elasticsearch is built on top of Apache Lucene; here's a reference doc on the Lucene index file structure:
http://lucene.apache.org/core/4_7_2/core/org/apache/lucene/codecs/lucene46/package-summary.html#package_description
Regarding whether or not it's a good option as a data store, I think that's more a matter of individual opinion and specific use cases than a fact that can be proven. It does not have the transaction support that something like MySQL does, if that's what you are looking for; in that respect it's roughly on par with other NoSQL solutions. This is a pretty decent writeup on the trade-offs and issues: https://www.found.no/foundation/elasticsearch-as-nosql/
In the end it depends on what you are doing with your data and what level of robustness you require.