How to make a query to both parent and child indices? - elasticsearch

I have a parent index users and a child purchase. Purchase has a field purchase_count, which is the number of purchases made by the user: for example, a user's first purchase will have purchase_count = 1, the second 2, and so on.
I want to make a query that returns the total number of users, the number of users who made a first purchase, the number who made a second, etc. For example: All: 100, 1: 10, 2: 6, 3: 3.
I know how to do it in two requests: first get the count of all users, then run a terms aggregation on purchases over the purchase_count field. But can I somehow do it in a single query?

There is a datatype in Elasticsearch called parent-join (previously parent-child): https://www.elastic.co/guide/en/elasticsearch/reference/current/parent-join.html
That datatype needs to live in a single index. There are no joins across indices in Elasticsearch.
You probably want to look into parent-join for your use case, but you'll have to restructure your data to reside in a single index.
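For illustration, a minimal sketch of what that restructuring could look like, using the official Python client against Elasticsearch 6+ (the index name shop and the join-field name relation are assumptions, not from the question). A single search then returns the total user count and the per-purchase_count breakdown at once:

    from elasticsearch import Elasticsearch

    es = Elasticsearch()

    # Users and purchases live in ONE index, tied together by a join field.
    es.indices.create(index="shop", body={
        "mappings": {
            "properties": {
                "relation": {"type": "join",
                             "relations": {"user": "purchase"}},
                "purchase_count": {"type": "integer"},
            }
        }
    })

    # One search, no hits needed: a filter on the join field counts all
    # users, and a terms aggregation over the purchase docs buckets them
    # by purchase_count. Because purchase_count is unique per user, the
    # bucket for 1 equals "users who had a first purchase", and so on.
    resp = es.search(index="shop", body={
        "size": 0,
        "aggs": {
            "all_users": {"filter": {"term": {"relation": "user"}}},
            "purchases": {
                "filter": {"term": {"relation": "purchase"}},
                "aggs": {"by_purchase_count":
                         {"terms": {"field": "purchase_count"}}},
            },
        },
    })
    print(resp["aggregations"]["all_users"]["doc_count"])
    print(resp["aggregations"]["purchases"]["by_purchase_count"]["buckets"])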

Related

Performance in Elasticsearch

I am just beginning with Elasticsearch.
I have two cases of data in a relational database, and in both cases I want to find the records from the first table as quickly as possible.
Case 1: tables bound 1:n (example: Invoice - Invoice items)
Should I save to Elasticsearch all rows from the slave table, or the master_id with all the slave data grouped into a single string?
Case 2: tables bound n:1 (example: Invoice - Customer)
Should I save the data as in case 1 to an independent index, or add another column to the previous index?
The problem is that sometimes I only need to search for records that contain a specific invoice item, sometimes a specific customer, and sometimes both an invoice item and a customer.
Should I create one index containing all the data, or all 3 variants?
Another problem: is it possible to speed up searching in Elasticsearch when the stored data is e.g. only an EAN (a 13-digit number) rather than plain text?
Thanks,
Jaroslav
You should denormalize and just use a single index for all your data (invoices, items, and customers) for the best performance. Although Elasticsearch supports joins and parent-child relationships, their performance is nowhere near that of keeping all the data in a single index, and a quick benchmark on your data will prove it easily.
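As a rough sketch (all field names below are hypothetical), a denormalized invoice document embeds its customer and items, so one index answers item-only, customer-only, and combined searches; mapping a field like the EAN as a keyword also gives fast exact matching without full-text analysis:

    from elasticsearch import Elasticsearch

    # Hypothetical denormalized invoice: customer and items are embedded,
    # so a single index serves every search combination in one query.
    invoice_doc = {
        "invoice_id": "INV-2016-001",
        "customer": {"name": "Acme s.r.o.", "city": "Prague"},
        "items": [
            {"ean": "8594001234567", "description": "Widget", "qty": 5},
            {"ean": "8594007654321", "description": "Gadget", "qty": 2},
        ],
    }
    Elasticsearch().index(index="invoices", body=invoice_doc)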

Complex search query from 2 documents

I'm new to Elasticsearch and I need to execute a complex query, but I need some help.
Here is my use case:
I would like to recommend a new place to each of my users everyday.
However:
The place must be open on this day of the week
The chosen place must be near the user (closer places get a higher score)
The place should not be one of the last 10 places already visited by/suggested to the user (if a place appears in the user's last 10 visits, it should get a lower score)
My first guess is to have 2 document types as follows (a mapping sketch appears after the list):
user_history
  user_id
  place_id
  date
place
  place_id
  opening_days (array of the week days the place is open)
  location (geo position of the place)
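A minimal sketch of those two mappings (index names and field types are my assumptions, on a recent Elasticsearch version; the geo_point type on location is what enables distance-based scoring):

    from elasticsearch import Elasticsearch

    es = Elasticsearch()

    es.indices.create(index="user_history", body={
        "mappings": {"properties": {
            "user_id": {"type": "keyword"},
            "place_id": {"type": "keyword"},
            "date": {"type": "date"},
        }}
    })

    es.indices.create(index="place", body={
        "mappings": {"properties": {
            "place_id": {"type": "keyword"},
            "opening_days": {"type": "keyword"},  # e.g. ["monday", "friday"]
            "location": {"type": "geo_point"},
        }}
    })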
Given a user with position [lat, lon] and id user_1, what search query would retrieve X places sorted by score? (A better score means closer to the user and not among the last 10 places the user has already been.)
This seems like a basic query, but I can't figure out how to "mix" data from user_history and place to get the places I want.
But that's not all!
With this query, if I want to assign a place to each user, I need 3 steps:
retrieve all users (with their position)
for each user, search for the best place
once I have this place, add it to the user_history
This seems like a very time-consuming task. Is it possible to simplify it with fewer Elasticsearch queries?
For instance, having something like this:
retrieving for each user his best place (with 1 query: search for all users and find the best place for each of them)
add the place to the history
Or even better:
retrieving for each user his best place and adding it to the history (with 1 query performing all 3 tasks above)
I don't know if it's possible to create queries that complex. That's why I need your help to tell me if it's possible and how it could be accomplished.

OBIEE Merge two queries (join)

I need help.
I am new to OBIEE (recently moved from Business Objects and re-creating all reports in OBIEE).
Here is an example I need help with. I have created an analysis listing all orders with their target delivery dates and the number of products in each order:
Order Id | Target Delivery Date | No of products
Abc      | 1/1/2016             | 5
I want to add a column next to "No of products" called "No of prods delivered on time": compare the delivery date of each product within an order with the target delivery date and give the count of products delivered within the target date. So the output should be:
Order Id | Target Delivery Date | No of products | No of prods delivered on time
Abc      | 1/1/2016             | 5              | 3
where 3 is the number of products delivered on time.
I could do this in BO by running two queries and merging them; however, in OBIEE I am not able to add a second query to my analysis. I did try, at the product level, CASE WHEN target_date >= delivery_date THEN 1 ELSE 0 END wrapped in SUM to aggregate, but it didn't work.
I appreciate your help with this. Searching for this topic only gives me results about running queries from multiple subject areas :(
You also have unions in OBIEE: you union the results of 2 queries that return the same structure. So you have query A with Order ID, Target Date, No of Products, and a dummy column fixed at 0 with default aggregation Sum; and a second query with Order ID, Target Date, a dummy column summing 0, and the number of products delivered on time.
You do all this in the criteria tab of the analysis. The order in which you place your columns is important, because that's what OBIEE uses to line up the union.
Regards

Random exhaustive (non-repeating) selection from a large pool of entries

Suppose I have a large (300-500k) collection of text documents stored in a relational database. Each document can belong to one or more (up to six) categories. I need users to be able to randomly select documents in a specific category so that a single entity is never repeated, much like how StumbleUpon works.
I don't really see how I could implement this with slow NOT IN queries given a large number of users and documents, so I figured I might need to implement some custom data structure for this purpose. Perhaps there is already a paper describing an algorithm that could be adapted to my needs?
Currently I'm considering the following approach (sketched in code after the list):
Read all the entries from the database
Create a linked-list-based index for each category from the IDs of documents belonging to that category, and shuffle it
Create a Bloom filter containing all of the entries viewed by a particular user
Traverse the index with an iterator, using the Bloom filter to skip already-viewed items
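A minimal sketch of that approach in pure Python (the filter size, hash count, and in-memory index are illustrative; in practice you would keep one filter per user and one shuffled index per category):

    import hashlib
    import random

    class BloomFilter:
        def __init__(self, size_bits=1 << 20, num_hashes=7):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item):
            # Derive num_hashes independent bit positions from the item.
            for i in range(self.num_hashes):
                h = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def add(self, item):
            for pos in self._positions(item):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def might_contain(self, item):
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(item))

    # One shuffled id list per category (ids would come from the database).
    category_index = {"news": random.sample(range(100_000), 100_000)}
    viewed = BloomFilter()   # one filter per user in practice

    def next_unseen(category):
        # Walk the shuffled index, skipping ids the filter has seen.
        for doc_id in category_index[category]:
            if not viewed.might_contain(doc_id):
                viewed.add(doc_id)
                return doc_id
        return None          # the user has exhausted the category

Note that Bloom-filter false positives mean a few never-viewed documents will occasionally be skipped; that is usually an acceptable trade-off for the memory savings.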
If you track via a table which entries the user has seen... try this. I'm going to use MySQL because that's the quickest example I can think of, but the gist should be clear.
On a link being 'used'...
insert into viewed (userid, url_id) values ("jj", 123)
On looking for a link...
select p.url_id
from pages p left join viewed v on v.url_id = p.url_id
where v.url_id is null
order by rand()
limit 1
This causes the database to do a one-for-one join, and you're limiting your query to return only one entry that the user has not seen yet.
Just a suggestion.
Edit: It is possible to make this one operation, but there's no guarantee that the URL will be passed successfully to the user.
It depends on how users get their random entries.
Option 1:
A user pages through some entities and stops after a couple of them: for example, the user sees the current random entity, moves to the next one, reads it, repeats a couple of times, and that's it.
The next time this user (or another) gets an entity from this category, the already-viewed entities are cleared, so you may return an already-viewed entity.
For that option I would recommend saving a (hash) set of already-viewed entity ids, and every time a user asks for a random entity, randomly choosing one from the DB and checking that it is not already in the set.
Because the set is so small and your data is so big, the chance of drawing an already-viewed id is tiny, so this takes O(1) expected time.
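A minimal sketch of Option 1 (fetch_random_id is a hypothetical helper that pulls one random document id for the category from the DB):

    def pick_unseen(category, seen_ids, fetch_random_id):
        # seen_ids is tiny relative to the pool, so this retry loop
        # almost always succeeds on the first draw: expected O(1).
        while True:
            doc_id = fetch_random_id(category)
            if doc_id not in seen_ids:
                seen_ids.add(doc_id)
                return doc_id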
Option 2:
A user pages through the entities, the viewed entities are shared between all users, and they persist across visits to your page.
In that case you will probably eventually use all the entities in each category, and saving all the viewed entities plus checking whether an entity has been viewed will take some time.
For that option I would get all the ids for the category, shuffle them, and store them in a linked list. When you want a random not-yet-viewed entity, just take the head of the list and delete it (O(1)).
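A minimal sketch of Option 2, with a deque standing in for the linked list:

    import random
    from collections import deque

    def build_queue(ids):
        # Shuffle all the category's ids once, up front.
        ids = list(ids)
        random.shuffle(ids)
        return deque(ids)

    queue = build_queue(range(1000))   # ids would come from the DB
    next_id = queue.popleft()          # O(1) per pick, never repeats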
I assume that for any given <user, category> pair, the number of documents viewed is pretty small relative to the total number of documents available in that category.
So can you just store indexed triples <user, category, document> indicating which documents have been viewed, and then just take an optimistic approach with respect to randomly selected documents? In the vast majority of cases, the randomly selected document will be unread by the user. And you can check quickly because the triples are indexed.
I would opt for a pseudorandom approach:
1.) Determine the number of elements in the category to be viewed (SELECT COUNT(*) WHERE ...).
2.) Pick a random number in the range 1 ... count.
3.) Select a single document (SELECT * FROM ... WHERE [same as when counting] ORDER BY [some stable order]). Depending on the SQL dialect in use, there are different clauses for retrieving only the part of the result set you want (MySQL's LIMIT clause, SQL Server's TOP clause, etc.).
If the number of documents is large, the chance of serving the same user the same document twice is negligibly small. Using the scheme described above, you don't have to store any state information at all.
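A minimal sketch of this stateless scheme, using sqlite3 purely for illustration (the table and column names are hypothetical; LIMIT/OFFSET plays the role of MySQL's LIMIT or SQL Server's TOP):

    import random
    import sqlite3

    conn = sqlite3.connect("docs.db")

    def random_document(category):
        # Step 1: count the candidates.
        (count,) = conn.execute(
            "SELECT COUNT(*) FROM documents WHERE category = ?",
            (category,)).fetchone()
        if count == 0:
            return None
        # Step 2: pick a random position in 0 .. count-1.
        offset = random.randrange(count)
        # Step 3: fetch just that row from a stable ordering.
        return conn.execute(
            "SELECT * FROM documents WHERE category = ?"
            " ORDER BY id LIMIT 1 OFFSET ?",
            (category, offset)).fetchone()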
You may want to consider a nosql solution like Apache Cassandra. These seem to be ideally suited to your needs. There are many ways to design the algorithm you need in an environment where you can easily add new columns to a table (column family) on the fly, with excellent support for a very sparsely populated table.
Edit: one of many possible solutions below:
Create a CF (column family, i.e. table) for each category (creating these on the fly is quite easy).
Add a row to each category CF for each document belonging to the category.
Whenever a user hits a document, you add a column named after that user to the document's row and set it to true. This table will obviously be huge, with millions of columns and probably quite sparsely populated, but that's no problem: reading it is still constant time.
Now finding a new document for a user in a category is simply a matter of selecting any row where that user's column is null.
You should get constant-time writes and reads, amazing scalability, etc., if you can accept Cassandra's "eventually consistent" model (i.e., it is not mission-critical that a user never gets a duplicate document).
I've solved a similar problem in the past by indexing the relational database into a document-oriented form using Apache Lucene. This was before the recent rise of NoSQL servers and is basically the same thing, but it's still a valid alternative approach.
You would create a Lucene Document for each of your texts with a textId (relational database id) field and multi-valued categoryId and userId fields. Populate the categoryId field appropriately. When a user reads a text, add their id to the userId field. A simple query will return the set of documents with a given categoryId and without a given userId; pick one randomly and display it.
Store a user's past X selections in a cookie or something.
Return the last selections to the server with the user's new criteria.
Randomly choose one of the texts satisfying the criteria until it is not a member of the user's last X selections.
Return this choice of text and update the list of last X selections.
I would experiment to find the best value of X, but I have in mind something like X = 16 (a sketch of the scheme follows).
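A minimal sketch of this scheme (pick_random is a hypothetical DB helper; the deque is what round-trips to the client as the cookie):

    from collections import deque

    X = 16

    def choose(criteria, last_selections, pick_random):
        # last_selections is a deque(maxlen=X): appending the new pick
        # automatically drops the oldest entry.
        recent = set(last_selections)
        while True:
            doc_id = pick_random(criteria)
            if doc_id not in recent:
                last_selections.append(doc_id)
                return doc_id

    history = deque(maxlen=X)   # serialized into the user's cookie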

Querying MongoDB for last-items-before

Consider that I have two collections in MongoDB. One for products, with documents like:
{'_id': ObjectId('lalala'), 'title': 'Yellow banana'}
And another stores price changes with documents like:
{'product': DBRef('products', ObjectId('lalala')),
'since': datetime(2011, 4, 5),
'new_price': 150 }
One product may have many price changes. A price lasts until a new change with a later timestamp. I guess you've caught the idea.
Say I have 100 products. I want to query my DB to find out the price of each product as of June 9, 2011. What is the most efficient (quickest) way to perform this query in MongoDB? Suppose I have no cache solution, or the cache is empty.
I thought about a group statement on the prices collection, where the reduce function would select the last since before the provided date, grouping by product.$id. But in that case I would not benefit from an index on the since field, and all documents would be scanned.
Any ideas?
I had a similar problem, but for GPS locations. I found the fastest way was to set up a query for each item, which is rather counter-intuitive if you're used to SQL databases.
Query for the item where its timestamp is less than or equal to the date you're looking for, sort descending by timestamp, and limit the result to 1. Repeat for each item. To really speed things up, run multiple queries in parallel to utilise all the cores on the MongoDB server.
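A minimal pymongo sketch of that per-product query (collection and field names follow the question; the descending sort on since is what makes limit-1 return the last change at or before the cut-off, and it can be served by an index on product.$id plus since):

    from datetime import datetime
    from pymongo import DESCENDING, MongoClient

    db = MongoClient().shop   # hypothetical database name

    def price_at(product_id, when):
        # Last price change at or before `when` for one product.
        doc = db.prices.find_one(
            {"product.$id": product_id, "since": {"$lte": when}},
            sort=[("since", DESCENDING)],
        )
        return doc["new_price"] if doc else None

    # e.g. the price in effect on June 9, 2011:
    # price_at(some_product_id, datetime(2011, 6, 9))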
