Task:
What is the most common first letter found in all the words in this document?
- unweighted (count a word once, regardless of how many times it shows up)
- weighted (count a word separately for each time it shows up)
What is the most common word of a given length in this document?
I'm thinking of using a hashmap to count the most common first letter. But should I use a hashmap for both the unweighted and weighted counts?
And for the most common word of a given length (e.g. 5), could I use something simpler, like an array list?
For the unweighted count, you need a hash table to keep track of the words you've already seen, as well as a hash map to count occurrences of first letters. That is, you need to write:
if words_seen does not contain word
add word to words_seen
update hash map with first letter of word
end-if
For the weighted count, you don't need that hash table, because every occurrence of a word counts; you don't care whether you've already seen the word. So you can just write:
update hash map with first letter of word
For the most common word of a given length, you need a hash map to keep track of all the unique words you see and the number of times you see each word. After you've scanned the entire document, make a pass through that hash map to determine the most frequent word with the desired length.
You probably don't want to use an array list for the last task, because you want to count occurrences. If you used an array list then after scanning the entire document you'd have to sort that list and count frequencies. That would take more memory and more time than just using the hash map.
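Here is a minimal Java sketch of all three counts; the sample word list and variable names are illustrative, not from the question:

    import java.util.*;

    public class WordStats {
        public static void main(String[] args) {
            String[] words = {"apple", "ant", "apple", "bear", "bear", "crane"}; // stand-in for the document

            Map<Character, Integer> unweighted = new HashMap<>();
            Map<Character, Integer> weighted = new HashMap<>();
            Map<String, Integer> wordCounts = new HashMap<>();
            Set<String> seen = new HashSet<>();

            for (String word : words) {
                char first = word.charAt(0);
                weighted.merge(first, 1, Integer::sum);        // weighted: every occurrence counts
                if (seen.add(word)) {                          // unweighted: only the first time we see the word
                    unweighted.merge(first, 1, Integer::sum);
                }
                wordCounts.merge(word, 1, Integer::sum);       // per-word counts for the "given length" task
            }

            // One pass over the word counts to find the most common word of length 5.
            int targetLength = 5;
            String best = null;
            for (Map.Entry<String, Integer> e : wordCounts.entrySet()) {
                if (e.getKey().length() == targetLength
                        && (best == null || e.getValue() > wordCounts.get(best))) {
                    best = e.getKey();
                }
            }
            System.out.println(unweighted); // a=2, b=1, c=1 (map order may vary)
            System.out.println(weighted);   // a=3, b=2, c=1
            System.out.println(best);       // apple
        }
    }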
I was not able to find a solution to the problem below in a coding contest.
Problem:
We have an input string of "good words" separated by underscores, and a list of user reviews (an array of strings where each element contains some words separated by underscores).
We have to sort the list of user reviews so that elements containing a greater number of good words come first.
Example:
input:
good words: "pool_clean_food".
user review array:["food_bedroom_environment","view_sea_desert","clean_pool_table"].
output: [2,0,1]
Explanation:
Array[2]="clean_pool_table" has 2 good words, i.e. pool and clean
Array[0]="food_bedroom_environment" has 1 good word, i.e. food
Array[1]="view_sea_desert" has 0 good words, i.e. none
How can I approach this problem, and which data structure should I use so that my code can handle large inputs?
Split the input good-words string on underscores and store the words in a hashset.
Now, for each review, start with a score of 0. Split the review on underscores as well and check its words against the hashset one by one. If a word is present, add 1 to the score of that review.
Now treat each review as a <review, score> pair and sort the reviews by score in descending order, so that reviews with more good words come first. You can use any standard O(n log n) sorting algorithm for this.
Instead of a hashset, you can use a trie, which will speed up the algorithm if the words are very long.
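A minimal Java sketch of this approach, using the example from the question; sorting an array of indices (rather than the review strings themselves) is just one way to produce the [2, 0, 1]-style output:

    import java.util.*;

    public class ReviewSort {
        public static void main(String[] args) {
            String goodWords = "pool_clean_food";
            String[] reviews = {"food_bedroom_environment", "view_sea_desert", "clean_pool_table"};

            Set<String> good = new HashSet<>(Arrays.asList(goodWords.split("_")));

            // Score each review: the number of its words that are good words.
            int[] score = new int[reviews.length];
            Integer[] order = new Integer[reviews.length];
            for (int i = 0; i < reviews.length; i++) {
                order[i] = i;
                for (String w : reviews[i].split("_")) {
                    if (good.contains(w)) score[i]++;
                }
            }

            // Sort indices by score, highest first, so reviews with more good words come first.
            Arrays.sort(order, (a, b) -> score[b] - score[a]);
            System.out.println(Arrays.toString(order)); // [2, 0, 1]
        }
    }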
I had a telephone interview recently for an SE role and was asked how I'd determine whether two words were anagrams. I gave a reply that involved something along the lines of taking a character, iterating over the other word, exiting the loop if it exists, and so on. I think it was an N^2 solution, as it was one loop per word with an inner loop for the comparison.
After the call I did some digging and wrote a new solution, one that I plan on handing over tomorrow at the next-stage interview. It uses a hash map with a unique prime number representing each character of the alphabet.
I'm then looping through the list of words, calculating the value of each word and checking whether it matches the value of the word I'm comparing against. If the values match, we have a winner (the whole fundamental-theorem-of-arithmetic business).
It means one loop instead of two, which is much better, but I've started to doubt myself and am wondering whether the extra hash-map lookups and multiplications are more expensive than my original suggestion.
I'm 99% certain the hash map is going to be faster but...
Can anyone confirm or deny my suspicions? Thank you.
Edit: I forgot to mention that I check the size of the words first before even considering doing anything.
An anagram contains all the letters of the original word, in a different order. You are on the right track to use a HashMap to process a word in linear time, but your prime number idea is an unnecessary complication.
Your data structure is a HashMap that maintains the counts of various letters. You can add letters from the first word in O(n) time. The key is the character, and the value is the frequency. If the letter isn't in the HashMap yet, put it with a value of 1. If it is, replace it with value + 1.
When iterating over the letters of the second word, subtract one from your count instead, removing a letter when it reaches 0. If you attempt to remove a letter that doesn't exist, then you can immediately state that it's not an anagram. If you reach the end and the HashMap isn't empty, it's not an anagram. Else, it's an anagram.
Alternatively, you can replace the HashMap with an array. The index of the array corresponds to the character, and the value is the same as before. It's not an anagram if a value drops to -1, and it's not an anagram at the end if any of the values aren't 0.
You can always compare the lengths of the original strings, and if they aren't the same, then they can't possibly be anagrams. Including this check at the beginning means that you don't have to check if all the values are 0 at the end. If the strings are the same length, then either something will produce a -1 or there will be all 0s at the end.
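A minimal Java sketch of the HashMap version described above, including the up-front length check (method and variable names are mine, not from the answer):

    import java.util.HashMap;
    import java.util.Map;

    public class AnagramCheck {
        static boolean isAnagram(String a, String b) {
            if (a.length() != b.length()) return false;      // different lengths can't be anagrams

            Map<Character, Integer> counts = new HashMap<>();
            for (char c : a.toCharArray()) {
                counts.merge(c, 1, Integer::sum);             // count the letters of the first word
            }
            for (char c : b.toCharArray()) {
                Integer n = counts.get(c);
                if (n == null) return false;                  // b has a letter a doesn't, or too many of it
                if (n == 1) counts.remove(c);                 // remove a letter when its count reaches 0
                else counts.put(c, n - 1);
            }
            return true;                                      // equal lengths and every letter consumed
        }

        public static void main(String[] args) {
            System.out.println(isAnagram("listen", "silent")); // true
            System.out.println(isAnagram("listen", "litten")); // false
        }
    }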
The problem with multiplying is that the numbers can get big. For example, if the letter 'c' were assigned the prime 11, then a word with ten c's would overflow a 32-bit integer (11^10 is roughly 2.6 * 10^10, well above 2^31 - 1).
You could reduce the result modulo some other number, but then you risk having false positives.
If you use big integers, then it will go slowly for long words.
Alternative solutions are to sort the two words and then compare for equality, or to use a histogram of letter counts as suggested by chrylis in the comments.
The idea is to have an array initialized to zero containing the number of times each letter appears.
Go through the letters in the first word, incrementing the count for each letter. Then go through the letters in the second word, decrementing the count.
If the counts reach zero at the end of this process, then the words are anagrams.
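A sketch of that histogram variant, assuming lower-case ASCII words (a real solution would normalise case first):

    public class AnagramHistogram {
        static boolean isAnagram(String a, String b) {
            if (a.length() != b.length()) return false;
            int[] counts = new int[26];
            for (char c : a.toCharArray()) counts[c - 'a']++;  // increment for the first word
            for (char c : b.toCharArray()) {
                if (--counts[c - 'a'] < 0) return false;       // b has more of this letter than a
            }
            return true; // equal lengths plus no negative count means every count is zero
        }

        public static void main(String[] args) {
            System.out.println(isAnagram("dusty", "study")); // true
            System.out.println(isAnagram("dusty", "dusts")); // false
        }
    }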
Consider there are 10 billion words that people have searched for in Google. Corresponding to each word, you have the sorted list of all document ids. The list looks like this:
[Word 1]->[doc_i1,doc_j1,.....]
[Word 2]->[doc_i2,doc_j2,.....]
...
...
...
[Word N]->[doc_in,doc_jn,.....]
I am looking for an algorithm to find 100 rare word-pairs.
A rare word-pair is a pair of words that occur together (not necessarily contiguously) in exactly 1 document.
I am looking for something better than O(n^2) if possible.
1. Order the words according to the number of documents they occur in. The idea here is that words that occur rarely at all will also occur rarely in pairs. If you find words that occur in exactly one document, just pick any other word from that document and you are done.
2. Then start inverting the index, beginning with the rarest word. That means you create a map where each document points to the set of words in it. At first you create that inverted index with the rarest word only. After you have inserted all documents associated with that rarest word into the inverted index, you have a map where each document points to exactly one word.
3. Then add the next word with all its documents, still following the order created in (1.). At some point you will find that a document associated with a word is already present in your inverted map. Here you check all words associated with that document to see whether they form such a rare word pair.
The performance of this depends heavily on how far you have to go to find 100 such pairs; the idea is that you are done after processing only a small fraction of the total data set. To take advantage of the fact that you only process a small fraction of the data, you should use in (1.) a sorting algorithm that yields the smallest elements long before the entire set has been sorted, like quicksort. The sorting can then be done in roughly O(N * log(N1)), where N1 is the number of words you actually need to add to the inverted index before finding 100 pairs.
The complexity of the other operations, namely adding a word to the inverted index and checking whether a word pair occurs in more than one document, is linear in the number of documents per word, so those operations are speedy at the beginning and slow down later, because later words come with more documents each.
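A rough Java sketch of the idea, under simplifying assumptions: the whole index fits in memory, each word's document list is sorted, and the tiny hand-made index below is purely illustrative:

    import java.util.*;

    public class RareWordPairs {
        public static void main(String[] args) {
            // Toy index: word -> sorted array of document ids (made-up data).
            Map<String, int[]> index = new HashMap<>();
            index.put("aardvark", new int[]{4});
            index.put("zymurgy",  new int[]{4, 9});
            index.put("quinoa",   new int[]{2, 4, 9});
            index.put("the",      new int[]{1, 2, 3, 4, 9});

            // (1.) Process words rarest-first.
            List<String> words = new ArrayList<>(index.keySet());
            words.sort(Comparator.comparingInt((String w) -> index.get(w).length));

            // (2.) and (3.) Build the inverted map incrementally: document id -> words added so far.
            Map<Integer, List<String>> docToWords = new HashMap<>();
            List<String[]> rarePairs = new ArrayList<>();
            outer:
            for (String word : words) {
                for (int doc : index.get(word)) {
                    for (String other : docToWords.getOrDefault(doc, List.of())) {
                        // Candidate pair: keep it if the two words share exactly one document.
                        if (sharedDocs(index.get(word), index.get(other)) == 1) {
                            rarePairs.add(new String[]{other, word});
                            if (rarePairs.size() == 100) break outer;
                        }
                    }
                    docToWords.computeIfAbsent(doc, d -> new ArrayList<>()).add(word);
                }
            }
            for (String[] p : rarePairs) System.out.println(p[0] + ", " + p[1]);
        }

        // Count documents two sorted lists share, stopping as soon as we know it is more than one.
        static int sharedDocs(int[] a, int[] b) {
            int i = 0, j = 0, shared = 0;
            while (i < a.length && j < b.length && shared < 2) {
                if (a[i] == b[j]) { shared++; i++; j++; }
                else if (a[i] < b[j]) i++;
                else j++;
            }
            return shared;
        }
    }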
This is the opposite of "Frequent Itemset Mining".
See, for example, this recent publication: Rare pattern mining: challenges and future perspectives
I am thinking about writing a program to collect for me the most common phrases in a large volume of text. Had the problem been reduced to just finding words, then that would be as simple as storing each new word in a hashmap and then increasing the count on each occurrence. But with phrases, storing each permutation of a sentence as a key seems infeasible.
Basically the problem is narrowed down to figuring out how to extract every possible phrase from a large enough text. Counting the phrases and then sorting by the number of occurrences becomes trivial.
I assume that you are searching for common patterns of consecutive words appearing in the same order (e.g. "top of the world" would not be counted as the same phrase as "top of a world" or "the world of top").
If so then I would recommend the following linear-time approach:
Split your text into words and remove things you don't consider significant (e.g. capitalisation, punctuation, etc.)
Convert your text into an array of integers (one integer per unique word), e.g. every instance of "cat" becomes 1 and every "dog" becomes 2. This can be done in linear time by using a hash-based dictionary to store the conversion from words to numbers; if a word is not in the dictionary yet, assign it a new id. (A short sketch of these first two steps appears at the end of this answer.)
Construct a suffix array for the array of integers. This is a sorted list of all the suffixes of your array and can be constructed in linear time, e.g. using the algorithm and C code here.
Construct the longest common prefix (LCP) array for your suffix array. (This can also be done in linear time, for example using this C code.) The LCP array gives the number of common words at the start of each pair of consecutive suffixes in the suffix array.
You are now in a position to collect your common phrases.
It is not quite clear how you wish to determine the end of a phrase. One possibility is to simply collect all sequences of 4 words that repeat.
This can be done in linear time by working through your suffix array looking at places where the longest common prefix array is >= 4. Each run of indices x in the range [start+1...start+len] where the LCP[x] >= 4 (for all except the last value of x) corresponds to a phrase that is repeated len times. The phrase itself is given by the first 4 words of, for example, suffix start+1.
Note that this approach will potentially spot phrases that cross sentence ends. You may prefer to convert some punctuation such as full stops into unique integers to prevent this.
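A short Java sketch of the first two steps only; the sample text is made up, and the suffix-array and LCP construction are left to the linear-time algorithms referenced above:

    import java.util.*;

    public class TextToIds {
        public static void main(String[] args) {
            String text = "The cat sat. The cat ran, the cat sat.";

            // Step 1: normalise -- lower-case and drop punctuation (a deliberately simplistic choice).
            String[] words = text.toLowerCase().replaceAll("[^a-z ]", " ").trim().split("\\s+");

            // Step 2: map each distinct word to a small integer id, in linear time.
            Map<String, Integer> ids = new HashMap<>();
            int[] encoded = new int[words.length];
            for (int i = 0; i < words.length; i++) {
                Integer id = ids.get(words[i]);
                if (id == null) {                 // first time we see this word: assign a new id
                    id = ids.size() + 1;
                    ids.put(words[i], id);
                }
                encoded[i] = id;
            }
            System.out.println(Arrays.toString(encoded)); // [1, 2, 3, 1, 2, 4, 1, 2, 3]
        }
    }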
Here's the scenario:
I have an array of millions of random strings of letters of length 3-32, and an array of words (the dictionary).
I need to test whether a random string can be formed by concatenating 1, 2, or 3 different dictionary words.
As the dictionary words would be somewhat fixed, I can do any kind of pre-processing on them.
Ideally, I'd like something that optimizes lookup speeds by doing some kind of pre-processing on the dictionary.
What kind of data structures / algorithms should I be looking at to implement this?
First, build a trie (prefix tree) from your dictionary. Each child of the root corresponds to a first letter; each second-level subtree then covers all of the words that start with those two letters, and so on.
Then take your string, start with the first letter, and walk down the trie until you reach a node that marks a complete dictionary word; then recursively apply this algorithm to the rest of the string. If you don't find a match at any point, you know you can't form the string by concatenation.
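A rough Java sketch of this trie approach, assuming lower-case ASCII words; the 1-, 2-, or 3-word limit from the question is enforced with a simple budget parameter, and the names are mine:

    import java.util.List;

    public class ConcatTrie {
        static class Node {
            Node[] children = new Node[26];
            boolean isWord;
        }

        private final Node root = new Node();

        void insert(String word) {
            Node node = root;
            for (char c : word.toCharArray()) {
                int i = c - 'a';
                if (node.children[i] == null) node.children[i] = new Node();
                node = node.children[i];
            }
            node.isWord = true;
        }

        // Can s[from..] be split into at most `remaining` dictionary words?
        boolean canSplit(String s, int from, int remaining) {
            if (from == s.length()) return true;      // consumed the whole string
            if (remaining == 0) return false;         // word budget used up
            Node node = root;
            for (int i = from; i < s.length(); i++) {
                node = node.children[s.charAt(i) - 'a'];
                if (node == null) return false;       // no dictionary word starts this way
                if (node.isWord && canSplit(s, i + 1, remaining - 1)) return true;
            }
            return false;
        }

        public static void main(String[] args) {
            ConcatTrie trie = new ConcatTrie();
            for (String w : List.of("book", "case", "worm")) trie.insert(w);
            System.out.println(trie.canSplit("bookcaseworm", 0, 3)); // true
            System.out.println(trie.canSplit("bookcases", 0, 3));    // false
        }
    }

If "different" in the question means the parts must be distinct words, an extra check would be needed on top of this.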
Store the dictionary strings in a hashed set data structure. Iterate through all possible splits of the string you want to check into 1, 2 or 3 parts, and for each such split look up all the parts in the hash set.
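A sketch of this hash-set version, with a made-up dictionary; since the strings are at most 32 characters, each check is at most a few hundred substring lookups:

    import java.util.Set;

    public class ConcatHashSet {
        // True if s is one dictionary word or a concatenation of 2 or 3 of them.
        static boolean isConcat(String s, Set<String> dict) {
            if (dict.contains(s)) return true;                       // 1 part
            for (int i = 1; i < s.length(); i++) {
                if (!dict.contains(s.substring(0, i))) continue;
                String rest = s.substring(i);
                if (dict.contains(rest)) return true;                // 2 parts
                for (int j = 1; j < rest.length(); j++) {            // 3 parts
                    if (dict.contains(rest.substring(0, j)) && dict.contains(rest.substring(j))) {
                        return true;
                    }
                }
            }
            return false;
        }

        public static void main(String[] args) {
            Set<String> dict = Set.of("book", "case", "worm");
            System.out.println(isConcat("bookcaseworm", dict)); // true
            System.out.println(isConcat("wormhole", dict));     // false
        }
    }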
Make a regex matching every word in your dictionary.
Put parentheses around it.
Put a + on the end.
Compile it with any correct (DFA-based) regex engine.
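As an illustration of the construction only (Java's built-in java.util.regex engine is backtracking rather than DFA-based, so the advice above about using a DFA-based engine still stands for large dictionaries):

    import java.util.List;
    import java.util.regex.Pattern;
    import java.util.stream.Collectors;

    public class DictRegex {
        public static void main(String[] args) {
            List<String> dict = List.of("book", "case", "worm");   // made-up dictionary

            // Build (?:book|case|worm)+ ; quote each word in case it contains regex metacharacters.
            String alternation = dict.stream().map(Pattern::quote).collect(Collectors.joining("|"));
            Pattern p = Pattern.compile("(?:" + alternation + ")+");

            System.out.println(p.matcher("bookcaseworm").matches()); // true
            System.out.println(p.matcher("bookends").matches());     // false
        }
    }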