How do search engines merge results from an inverted index?

For example, if I searched for the words "dog" and "bat", there would be two huge inverted-index lists, each containing every document that mentions one of the two words.
I doubt that a search engine walks through these lists one document at a time trying to find documents that appear in both. What is done algorithmically to make this merging process blazing fast?

Search engines actually do merge these document lists. They get good performance from other techniques, the most important of which is pruning: for each word, the documents are stored in order of decreasing PageRank, so to find results that have a chance of making it into the first 10 (which is what the user sees) you only need to traverse a fairly small prefix of the dog and bat lists, say the first thousand entries of each. (And of course there is caching, but that is not part of the query execution algorithm itself.)
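A minimal sketch of that pruning idea (the class and method names are made up for illustration, and real engines are far more sophisticated): take a bounded prefix of each rank-ordered posting list, intersect them with a hash set, and stop once enough candidates for the first result page have been collected.

import java.util.*;

public class PrunedIntersection {
    // postings are document ids, already ordered by decreasing PageRank
    static List<Integer> topMatches(int[] dogPostings, int[] batPostings,
                                    int prefixLength, int wanted) {
        Set<Integer> dogPrefix = new HashSet<>();
        for (int i = 0; i < Math.min(prefixLength, dogPostings.length); i++) {
            dogPrefix.add(dogPostings[i]);
        }
        List<Integer> results = new ArrayList<>();
        // scan the bat prefix in rank order; matches come out roughly by rank
        for (int i = 0; i < Math.min(prefixLength, batPostings.length)
                        && results.size() < wanted; i++) {
            if (dogPrefix.contains(batPostings[i])) {
                results.add(batPostings[i]);
            }
        }
        return results;
    }
}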
Besides, there simply aren't that many documents about dogs or about bats: even if it's millions, a good implementation handles it in a fraction of a second.
P.S. I worked at my country's leading search engine, though not on the core of the flagship search product itself. But I talked to its developers and was surprised to learn that the query execution algorithms are actually fairly dumb: it turns out you can squeeze a huge amount of computation into acceptable time bounds. It is all heavily optimized, of course, but there is no magic and no miracles.

Since inverted indices are ordered by docId, they can be merged very fast: if one of the words starts at docId 23 and the second at docId 100001, you can immediately skip ahead to docId 100001 or higher in the first list as well.
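A minimal sketch of such a merge over two docId-sorted posting lists, with a binary search standing in for the skip pointers or galloping search a real engine would use (names are made up for illustration):

import java.util.*;

public class PostingListIntersect {
    // both arrays are sorted by docId, ascending
    static List<Integer> intersect(int[] dog, int[] bat) {
        List<Integer> hits = new ArrayList<>();
        int i = 0, j = 0;
        while (i < dog.length && j < bat.length) {
            if (dog[i] == bat[j]) {
                hits.add(dog[i]);                  // same document contains both words
                i++;
                j++;
            } else if (dog[i] < bat[j]) {
                i = skipTo(dog, i + 1, bat[j]);    // fast-forward instead of stepping one by one
            } else {
                j = skipTo(bat, j + 1, dog[i]);
            }
        }
        return hits;
    }

    // first index >= from whose value is >= target
    static int skipTo(int[] list, int from, int target) {
        int pos = Arrays.binarySearch(list, from, list.length, target);
        return pos >= 0 ? pos : -pos - 1;
    }
}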
Since the typical document intersections are at most a few million, they can be sorted by rank very fast. I searched for 'dog cat' (two very common words), which returned only 54 million hits.
Sorting 10 million random integers took only 2.3 seconds on my Mac with single-threaded code (1 million took 206 ms!), and since we typically need to pick only the top 10, not even a full sort is required (see the selection sketch after the benchmark code below).
Here is the code, in case someone wants to test the sort speed and is too lazy to write it themselves!
import java.util.*;

public class SortTest {
    public static void main(String[] args) {
        int count = Integer.parseInt(args[0]);
        Random random = new Random();
        int[] values = new int[count];
        int[] bogusValues = new int[100000]; // extra array, sorted afterwards to disturb the cache
        for (int i = 0; i < values.length; ++i) {
            values[i] = random.nextInt(count);
        }
        for (int i = 0; i < bogusValues.length; ++i) {
            bogusValues[i] = random.nextInt(count);
        }
        long start = System.currentTimeMillis();
        System.out.println(start);
        Arrays.sort(values);                                     // the measured sort
        System.out.println(System.currentTimeMillis());
        System.out.println(System.currentTimeMillis() - start);  // elapsed milliseconds
        Arrays.sort(bogusValues);
    }
}
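As noted above, a full sort isn't even needed for the top 10. A minimal sketch of that selection step using a size-bounded min-heap (the class and method names are just for illustration):

import java.util.*;

public class TopKSelect {
    // returns the k largest scores, highest first, without sorting the whole array
    static int[] topK(int[] scores, int k) {
        PriorityQueue<Integer> heap = new PriorityQueue<>(); // min-heap of the best k so far
        for (int s : scores) {
            if (heap.size() < k) {
                heap.offer(s);
            } else if (s > heap.peek()) {
                heap.poll();
                heap.offer(s);
            }
        }
        int[] result = new int[heap.size()];
        for (int i = result.length - 1; i >= 0; i--) {
            result[i] = heap.poll();   // smallest comes out first, so fill from the back
        }
        return result;
    }
}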

Related

Algorithm to find a set of words in a large list of words

I'm looking for the fastest algorithm that can find a set of words matching another set of words in a list of 9 million records.
Problem: I have a list of almost 100,000 sets of words, and I need to find a match for each of those word sets in another list of 9 million sets of words.
My current solution goes like this: I read all the records (from a text file) and keep them in memory as an array, let's call it the 'search list'. While building this array I sort each set of words alphabetically, and once all the word sets are added I sort the whole list. I do the same with the other, big list; let's call that the 'data list'.
Now I iterate over each element in my search list and try to find a match. Once a match is found I remember the position at which it matched, and the next search starts from that same position. This saves me from iterating over the whole data list again and again for every element in the search list.
I assumed this would be super fast, but unfortunately it's not: it takes almost 15 to 20 minutes to complete a full pass over the search list. This is not acceptable.
Here is a snippet of my code
int lastPointer = 0
for (int i = 0; i < searchList.size(); i++) {
    def thisMatchedOut = []
    for (int j = lastPointer; j < dataList.size(); j++) {
        if (dataList[j].containsAll(searchList[i])) {
            thisMatchedOut.add(dataList[j])
            lastPointer = j
        }
    }
    if (thisMatchedOut.size() > 0) println "found a match for search " + searchList[i]
    else println "No match found for " + searchList[i]
}
Can anybody suggest a better algorithm, or am I doing something wrong here?
Use a hash table. A lookup takes O(1) time no matter how big your set of words is.
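A minimal sketch of that suggestion in Java, assuming a 'match' means the two sets contain exactly the same words (all names here are made up for illustration): canonicalize each word set by sorting it, key a hash map on that canonical form, and each of the 100,000 searches becomes a single O(1) lookup.

import java.util.*;

public class WordSetLookup {
    // canonical key: words sorted and joined, so equal sets map to equal keys
    static String key(Collection<String> words) {
        List<String> sorted = new ArrayList<>(words);
        Collections.sort(sorted);
        return String.join(" ", sorted);
    }

    public static void main(String[] args) {
        List<Set<String>> dataList = List.of(
                Set.of("dog", "bat"), Set.of("cat", "dog"), Set.of("bat", "cat", "dog"));
        List<Set<String>> searchList = List.of(Set.of("dog", "cat"), Set.of("bat"));

        // one pass over the big data list builds the table
        Map<String, Set<String>> index = new HashMap<>();
        for (Set<String> record : dataList) {
            index.put(key(record), record);
        }

        // every search is now a single hash lookup
        for (Set<String> search : searchList) {
            Set<String> match = index.get(key(search));
            System.out.println(match != null ? "found a match for " + search
                                             : "No match found for " + search);
        }
    }
}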

Ruby - Optimize code finding the optimal choice from an array

I asked a question that was basically a knapsack problem: I needed to find the combination of several different arrays of objects that gave the optimal output, for example the highest total "value" of the objects subject to a limit on their total "cost". The answer I received was the following:
a.product(b,c)
.select{ |arr| arr.reduce(0) { |sum,h| sum + h[:cost] } < 30 }
.max_by{ |arr| arr.reduce(0) { |sum,h| sum + h[:value] } }
This works great, but once I get to 6 arrays with ~40 choices each, the possible combinations run upwards of 4 million and take too long to process. I made some changes to the code that made processing faster:
# creating the array doesn't take too long
combinations = a.product(b, c, d, e)
possibles = []
combinations.each do |array_of_objects|
  # max_cost is a numeric parameter, and I can't have the same exact object used twice,
  # so both conditions must hold
  if !(array_of_objects.sum(&:salary) > max_cost) && !(array_of_objects.uniq.count < array_of_objects.count)
    possibles << array_of_objects
  end
end
possibles.max_by { |ar| ar.sum(&:std_proj) }
Breaking it into two separate steps helped performance a lot, since max_by only has to run over the much smaller set of combinations that fit the criteria.
Does anyone see a way to optimize this code? Since I'm typically dealing with tens of thousands or millions of combinations, any little bit could greatly help. Thanks.
If we are talking about millions of rows, and the operations are things like unique and max, I suggest solving it with DISTINCT and MAX() in your database query; you can even filter by cost with a WHERE clause.
Looping over the objects in Ruby is clearly more expensive.

Store and update huge (and sparse?) multi-dimensional array efficiently to count conditional probabilities

Just for fun I would like to count the conditional probabilities that a word (from a natural language) appears in a text, depending on the last and next-to-last word. I.e. I would take a huge bunch of, e.g., English texts and count how often each combination n(i|jk) and n(jk) appears (where j, k, i are successive words).
The naive approach would be a 3-D array (for n(i|jk)), using a mapping of words to positions in the 3 dimensions. The position look-up could be done efficiently using tries (at least that's my best guess), but already for O(1000) words I would run into memory constraints. And I expect this array would be only sparsely filled, most entries being zero, so I would waste lots of memory. So no 3-D array.
What data structure would be suited better for such a use case and still be efficient to do a lot of small updates like I do them when counting the appearances of the words? (Maybe there is a completely different way of doing this?)
(Of course I also need to count n(jk), but that's easy, because it's only 2-D :)
The language of choice is C++ I guess.
C++ code:
#include <map>
using std::map;

struct bigram_key {
    int i, j;  // words - indexes of the words in a dictionary
    // a constructor to be easily constructible
    bigram_key(int a_i, int a_j) : i(a_i), j(a_j) {}
    // keys must be ordered to be used in a map container
    bool operator<(bigram_key const &other) const {
        return i < other.i || (i == other.i && j < other.j);
    }
};

struct bigram_data {
    int count;                    // n(ij)
    map<int, int> trigram_counts; // n(k|ij) = trigram_counts[k]
};

map<bigram_key, bigram_data> trigrams;
The dictionary could be a vector of all found words like:
vector<string> dictionary;
but for a faster word->index lookup it could be a map:
map<string, int> dictionary;
When you read a new word, you add it to the dictionary and get its index k. You already have the indexes i and j of the previous two words, so then you just do:
trigrams[bigram_key(i,j)].count++;
trigrams[bigram_key(i,j)].trigram_counts[k]++;
For better performance you can look up the bigram only once:
bigram_data &bigram = trigrams[bigram_key(i,j)];
bigram.count++;
bigram.trigram_counts[k]++;
Is it understandable? Do you need more details?

Best data structure to retrieve by max values and ID?

I have quite a large number of fixed-size records. Each record has many fields; ID and Value are among them. I am wondering what kind of data structure would be best so that I can
locate a record by ID (unique) very fast,
list the 100 records with the biggest values.
A max-heap seems to work, but it's far from perfect; do you have a smarter solution?
Thank you.
A hybrid data structure will most likely be best. For efficient lookup by ID, a good structure is obviously a hash table. To support top-100 iteration, a max-heap or a binary tree is a good fit. When inserting and deleting, you just do the operation on both structures. If the 100 for the iteration case is fixed, iteration happens often, and insertions/deletions aren't heavily skewed toward the top 100, just keep the top 100 as a sorted array with overflow into a max-heap. That won't change the big-O complexity of the structure, but it will give a really good constant-factor speed-up for the iteration case.
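A minimal sketch of that hybrid in Java (class and method names are made up for illustration): a HashMap for O(1) lookup by ID plus a TreeSet ordered by value for the top-100 scan. Note that changing a record's value would require removing and re-inserting it in the tree.

import java.util.*;

public class RecordStore {
    static class Record {
        final int id;
        long value;
        Record(int id, long value) { this.id = id; this.value = value; }
    }

    // O(1) lookup by ID
    private final Map<Integer, Record> byId = new HashMap<>();
    // ordered by value (descending), ties broken by id so distinct records never collide
    private final TreeSet<Record> byValue = new TreeSet<>(
            Comparator.comparingLong((Record r) -> r.value).reversed()
                      .thenComparingInt(r -> r.id));

    public void add(Record r) {
        byId.put(r.id, r);
        byValue.add(r);
    }

    public void remove(int id) {
        Record r = byId.remove(id);
        if (r != null) byValue.remove(r);
    }

    public Record getById(int id) {
        return byId.get(id);
    }

    // walk the tree in value order and stop after n records
    public List<Record> topByValue(int n) {
        List<Record> top = new ArrayList<>(n);
        for (Record r : byValue) {
            if (top.size() == n) break;
            top.add(r);
        }
        return top;
    }
}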
I know you want a pseudo-code algorithm, but in Java, for example, I would use a TreeSet and add all the records as ID/value pairs.
The tree keeps them sorted by value, so querying the first 100 gives you the top 100. Retrieving by ID is straightforward.
The underlying structure is a balanced binary search tree; in TreeSet's case, a red-black tree.
Max heap would match the second requirement, but hash maps or balanced search trees would be better for the first one. Make the choice based on frequency of these operations. How often would you need to locate a single item by ID and how often would you need to retrieve top 100 items?
Pseudo code:
add(Item t)
{
    //Add the same object instance to both data structures
    heap.add(t);
    hash.add(t);
}

remove(int id)
{
    heap.removeItemWithId(id); //this is gonna be slow
    hash.remove(id);
}

getTopN(int n)
{
    return heap.topNitems(n);
}

getItemById(int id)
{
    return hash.getItemById(id);
}

updateValue(int id, String value)
{
    Item t = hash.getItemById(id);
    //now t is the same object referred to by the heap and hash
    t.value = value;
    //updated in both.
}

Best algorithm to index sentences

Imagine I have a situation where I need to index sentences. Let me explain it a little bit deeper.
For example I have these sentences:
The beautiful sky.
Beautiful sky dream.
Beautiful dream.
As far as I can imagine the index should look something like this:
(index diagram, originally an image: http://img7.imageshack.us/img7/4029/indexarb.png)
But also I would like to do search by any of these words.
For example, if I search by "the", it should give me a connection to "beautiful".
If I search by "beautiful", it should give me connections to (previous) "the" and (next) "sky" and "dream". If I search by "sky", it should give a (previous) connection to "beautiful", and so on.
Any ideas? Maybe you know an already existing algorithm for this kind of problem?
Short Answer
Create a struct with two vectors of previous/forward links.
Then store the word structs in a hash table with the key as the word itself.
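A minimal sketch of that short answer, in Java rather than a raw struct (names are made up for illustration): each word gets a node holding lists of previous/next links, and the nodes live in a hash table keyed by the word itself.

import java.util.*;

public class WordIndex {
    static class WordNode {
        final String term;
        final List<WordNode> previous = new ArrayList<>(); // words seen right before this one
        final List<WordNode> next = new ArrayList<>();     // words seen right after this one
        WordNode(String term) { this.term = term; }
    }

    private final Map<String, WordNode> nodes = new HashMap<>();

    public void addSentence(String sentence) {
        String[] words = sentence.toLowerCase().split("\\s+");
        WordNode prev = null;
        for (String w : words) {
            WordNode node = nodes.computeIfAbsent(w, WordNode::new);
            if (prev != null) {            // record the link in both directions
                prev.next.add(node);
                node.previous.add(prev);
            }
            prev = node;
        }
    }

    public WordNode lookup(String word) {
        return nodes.get(word.toLowerCase());  // O(1) lookup by the word itself
    }
}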
Long Answer
This is a linguistic parsing problem that is not easily solved unless you don't mind gibberish.
I went to the park basketball court.
Would you park the car.
Your linking algorithm will create sentences like:
I went to the park the car.
Would you park basketball court.
I'm not quite sure of the SEO applications of this, but I would not welcome another gibberish spam site taking up a search result.
I imagine you would want some sort of inverted index structure. You would have a hash map with the words as keys, pointing to lists of pairs of the form (sentence_id, position). You would then store your sentences as arrays or linked lists. Your example would look like this:
sentence[0] = ['the','beautiful', 'sky'];
sentence[1] = ['beautiful','sky', 'dream'];
sentence[2] = ['beautiful', 'dream'];
inverted_index =
{
'the': {(0,0)},
'beautiful': {(0,1), (1,0), (2,0)},
'sky' : {(0,2),(1,1)},
'dream':{(1,2), (2,1)}
};
Using this structure lookups on words can be done in constant time. Having identified the word you want, finding the previous and subsequent word in a given sentence can also be done in constant time.
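A minimal sketch of those constant-time lookups in Java (names are made up for illustration; the sample sentences mirror the structure above):

import java.util.*;

public class SentenceIndex {
    // word -> list of (sentenceId, position) occurrences
    private final Map<String, List<int[]>> invertedIndex = new HashMap<>();
    private final List<String[]> sentences = new ArrayList<>();

    public void addSentence(String[] words) {
        int sentenceId = sentences.size();
        sentences.add(words);
        for (int pos = 0; pos < words.length; pos++) {
            invertedIndex.computeIfAbsent(words[pos], k -> new ArrayList<>())
                         .add(new int[] { sentenceId, pos });
        }
    }

    // previous and next word for every occurrence of the given word
    public List<String> neighbours(String word) {
        List<String> result = new ArrayList<>();
        for (int[] occ : invertedIndex.getOrDefault(word, List.of())) {
            String[] sentence = sentences.get(occ[0]);
            int pos = occ[1];
            if (pos > 0) result.add("previous: " + sentence[pos - 1]);
            if (pos < sentence.length - 1) result.add("next: " + sentence[pos + 1]);
        }
        return result;
    }

    public static void main(String[] args) {
        SentenceIndex index = new SentenceIndex();
        index.addSentence(new String[] { "the", "beautiful", "sky" });
        index.addSentence(new String[] { "beautiful", "sky", "dream" });
        index.addSentence(new String[] { "beautiful", "dream" });
        System.out.println(index.neighbours("beautiful"));
    }
}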
Hope this helps.
You could dig into Markov chains formed from the words of the sentences. You'll need the chain to work both ways (i.e. to find next and previous words), so store the probable words that appear just after a given word and just before it.
Of course, a Markov chain is a stochastic process for generating content, but a similar approach can be used to store the information you need.
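A minimal sketch of such a both-way chain in Java (names are made up for illustration): for each word, count how often every neighbour appears immediately after it and immediately before it; the counts can later be normalized into probabilities.

import java.util.*;

public class WordChain {
    // word -> (following word -> count) and word -> (preceding word -> count)
    private final Map<String, Map<String, Integer>> after = new HashMap<>();
    private final Map<String, Map<String, Integer>> before = new HashMap<>();

    public void addSentence(String[] words) {
        for (int i = 0; i + 1 < words.length; i++) {
            bump(after, words[i], words[i + 1]);   // words[i+1] follows words[i]
            bump(before, words[i + 1], words[i]);  // words[i] precedes words[i+1]
        }
    }

    private static void bump(Map<String, Map<String, Integer>> table, String key, String neighbour) {
        table.computeIfAbsent(key, k -> new HashMap<>()).merge(neighbour, 1, Integer::sum);
    }

    public Map<String, Integer> nextWords(String word)     { return after.getOrDefault(word, Map.of()); }
    public Map<String, Integer> previousWords(String word) { return before.getOrDefault(word, Map.of()); }
}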
That looks like it could be stored in a very simple database with the following tables:
Words:
Id integer primary-key
Word varchar(20)
Following:
WordId1 integer foreign-key Words(Id) indexed
WordId2 integer foreign-key Words(Id) indexed
Then, whenever you parse a sentence, just insert the ones that aren't already there, as follows:
The beautiful sky.
Words (1,'the')
Words (2, 'beautiful')
Words (3, 'sky')
Following (1, 2)
Following (2, 3)
Beautiful sky dream.
Words (4, 'dream')
Following (3, 4)
Beautiful dream.
Following (2, 4)
Then you can query to your heart's content about which words follow or precede other words.
This oughta get you close, in C#:
using System.Collections.Generic;
using System.Linq;

class Program
{
    public class Node
    {
        private string _term;
        // for each phrase this word occurs in, the previous and next word's nodes
        private Dictionary<string, KeyValuePair<Node, Node>> _related =
            new Dictionary<string, KeyValuePair<Node, Node>>();

        public Node(string term)
        {
            _term = term;
        }

        public void Add(string phrase, Node previous, string[] phraseRemainder, Dictionary<string, Node> existing)
        {
            Node next = null;
            if (phraseRemainder.Length > 0)
            {
                // reuse the node for this word if we've seen it before, otherwise create it
                if (!existing.TryGetValue(phraseRemainder[0], out next))
                {
                    existing[phraseRemainder[0]] = next = new Node(phraseRemainder[0]);
                }
                next.Add(phrase, this, phraseRemainder.Skip(1).ToArray(), existing);
            }
            _related.Add(phrase, new KeyValuePair<Node, Node>(previous, next));
        }
    }

    static void Main(string[] args)
    {
        string[] sentences =
            new string[] {
                "The beautiful sky",
                "Beautiful sky dream",
                "beautiful dream"
            };
        Dictionary<string, Node> parsedSentences = new Dictionary<string, Node>();
        foreach (string sentence in sentences)
        {
            string[] words = sentence.ToLowerInvariant().Split(' ');
            Node startNode;
            if (!parsedSentences.TryGetValue(words[0], out startNode))
            {
                parsedSentences[words[0]] = startNode = new Node(words[0]);
            }
            if (words.Length > 1)
                startNode.Add(sentence, null, words.Skip(1).ToArray(), parsedSentences);
        }
    }
}
I took the liberty of assuming you wanted to preserve the actual initial phrase. At the end of this, you'll have a list of words in the phrases, and in each one, a list of phrases that use that word, with references to the next and previous words in each phrase.
Using an associative array will allow you to parse sentences quickly in Perl. It is much faster than you would anticipate, and the result can easily be dumped out in a tree-like structure for subsequent use by a higher-level language.
Tree search algorithms (like BST, etc.)
