Does the sentimentr package account for the number of words in a sentence and the number of sentences in a paragraph? - sentiment-analysis

Can anyone help explain whether the sentimentr package accounts for word count? I am trying to work out how the number of words affects the sentiment score. Does it take into account whether people have more sentences in an answer or more words per sentence? Thanks!

Sentimentr does account for the number of words. This is from the documentation:
The researcher may provide a weight z to be utilized with amplifiers/de-amplifiers (default is .8; de-amplifier weight is constrained to -1 lower bound). Last, these weighted context clusters (c_i,j,l) are summed (c'_i,j) and divided by the square root of the word count (√w_i,jn), yielding an unbounded polarity score (δ) for each sentence.
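For intuition, here is a tiny Python illustration of just that final step (dividing the summed polarities by the square root of the word count). The polarity values are invented for the example; this is not sentimentr's actual code, only a sketch of why longer sentences with the same total polarity get smaller scores:

import math

# Two sentences whose signed word polarities sum to the same total (+1.0),
# but with different word counts. Dividing by sqrt(word count) shrinks the
# score of the longer sentence. The per-word values below are made up.
def unbounded_polarity(word_polarities):
    return sum(word_polarities) / math.sqrt(len(word_polarities))

short_sentence = [0.0, 0.75, 0.0, 0.0, 0.25]              # 5 words
long_sentence = [0.0, 0.75, 0.0, 0.0, 0.25] + [0.0] * 15  # 20 words

print(unbounded_polarity(short_sentence))  # 1.0 / sqrt(5)  ~= 0.447
print(unbounded_polarity(long_sentence))   # 1.0 / sqrt(20) ~= 0.224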

Related

Efficiently finding a list of near matches for list of words and phrases

I am looking for an algorithm, but I don't know the name of the problem so I can't find anything. Hopefully my explanation of the problem makes sense!
Let's say you have a long list of phrases, where each phrase is a set of words. The user inputs a list of words, and their list "matches" a phrase if every word in the phrase is found in their list. A list's "score" is the number of phrases it matches. The goal is to provide the user with a list of words that would most improve their list's score.
Here's a simple example. We have ten phrases:
wood cabin
camping in woods
camping cabin
fun camping
bon fire
camping fire
swimming hole
fun cabin
wood fire
fire place
And the user provides this list:
wood
fun
camping
We match phrases 1 and 4, so the score is 2. But if the user adds "cabin" to their list, they will match 3 more phrases and get a score of 5. "fire" would add 2 to the score.
With the trivially short list, there isn't any complicated problem, as you can just iterate through the options in almost no time. But as the list grows to the hundreds of thousands, it starts taking hundreds of milliseconds. It feels like there should be a way to build an index to make the process faster, but I can't think of what the index's structure would be.
Anyone who took the time to read all this, thank you! Hopefully someone knows what I'm talking about.
You need to map each word to its number of occurrences. If you use a hash table you can do it very quickly (O(N), with N being the number of words in the phrases): loop over all phrases, break them into words, and if a word is already in the map increment its count; if not, add it to the map with count 1.
To compute the score of the input, just loop over the input words and accumulate their occurrence counts: O(M), this time with M being the number of input words.
I doubt you can get better complexity (you need to scan the phrases at least once), and with a proper implementation of a map (available in almost all modern languages) it will be fast in practice as well.
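A minimal Python sketch of this counting approach (assuming phrases are whitespace-separated strings; note that, as described, it scores by summed word occurrences rather than by exact whole-phrase matches):

from collections import Counter

def build_word_counts(phrases):
    # Map each word to the number of times it occurs across all phrases.
    counts = Counter()
    for phrase in phrases:
        counts.update(phrase.split())
    return counts

def score(counts, input_words):
    # Accumulate the occurrence counts of the input words.
    return sum(counts.get(word, 0) for word in input_words)

phrases = ["wood cabin", "camping in woods", "camping cabin", "fun camping"]
counts = build_word_counts(phrases)
print(score(counts, ["wood", "fun", "camping"]))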
Suffix tree.
They're rather fiddly and complicated things, but basically we store a node for each character (26 * 2), then we store the suffixes for each character, so entries for th and an and so on, but presumably not for qj or other combinations which won't occur. Then you get the suffixes for those (so the, thr, and and so on, with plenty of three-letter combinations disallowed).
It allows for very fast searching, which doesn't have to be exact. If we want to match a*d we simply follow all the suffixes of a, then only the d suffixes, then we insist on the end-of-string marker (nul).
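A full suffix tree is considerably more involved, but as a rough illustration of the node-walking and wildcard idea, here is a minimal character-trie sketch in Python (the names and the one-character '*' wildcard are only for illustration):

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

def insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.is_word = True

def wildcard_match(node, pattern, i=0):
    # '*' matches exactly one character; returns True if any stored word matches.
    if i == len(pattern):
        return node.is_word
    ch = pattern[i]
    if ch == "*":
        return any(wildcard_match(child, pattern, i + 1)
                   for child in node.children.values())
    child = node.children.get(ch)
    return child is not None and wildcard_match(child, pattern, i + 1)

root = TrieNode()
for w in ["and", "ard", "add", "art"]:
    insert(root, w)
print(wildcard_match(root, "a*d"))  # True: matches "and", "ard" and "add"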

Partitioning an ordered list of weights into N sub-lists of approximately equal weight

Suppose I have an ordered list of weights, having length M. I want to divide this list into N ordered non-empty sublists, where the sum of the weights in each sublist are as close to each other as possible. Finally, the length of the list will always be greater than or equal to the number of partitions.
For example:
A reader of epic fantasy wants to read the entire Wheel of Time series in N = 90 days. She wants to read approximately the same number of words each day, but she doesn't want to break a single chapter across two days, and she obviously doesn't want to read the series out of order. The series has a total of M chapters, and she has a list of the word counts of each.
What algorithm could she use to calculate the optimum reading schedule?
In this example, the weights probably won't vary much, but the algorithm I'm seeking should be general enough to handle weights that vary widely.
As for what I consider optimum: given the choice, having two or three partitions vary a small amount from the average would be better than having one partition vary a lot. In other words, she would rather have several days where she reads a few hundred words more or fewer than the average, if it means she can avoid having to read a thousand words more or fewer than the average, even once. My thinking is to use something like this to compute the score of any given solution:
Let W_1, W_2, ... , W_N be the weights of each partition (calculated by simply summing the weights of its elements).
Let x be the total weight of the list divided by the number of partitions N, i.e. the ideal weight of a partition.
Then the score would be the sum, for i from 1 to N, of (x - W_i)^2.
So, I think I know a way to score each solution. The question is, what's the best way to minimize the score, other than brute force?
Any help or pointers in the right direction would be much appreciated!
You are probably looking for a "minimum raggedness word wrap" algorithm; searching for that term should turn up suitable approaches.
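For a concrete starting point, here is a small dynamic-programming sketch in Python that minimizes the sum of squared deviations from the ideal per-partition weight, as proposed in the question (function and variable names are illustrative):

def partition_weights(weights, n_parts):
    # dp[k][i] = best score for splitting the first i weights into k non-empty parts,
    # where a part's cost is (ideal - part_sum)**2 and ideal = total / n_parts.
    m = len(weights)
    prefix = [0] * (m + 1)
    for i, w in enumerate(weights):
        prefix[i + 1] = prefix[i] + w
    ideal = prefix[m] / n_parts

    INF = float("inf")
    dp = [[INF] * (m + 1) for _ in range(n_parts + 1)]
    cut = [[0] * (m + 1) for _ in range(n_parts + 1)]
    dp[0][0] = 0.0
    for k in range(1, n_parts + 1):
        for i in range(k, m + 1):          # need at least k items for k non-empty parts
            for j in range(k - 1, i):      # the last part covers items j..i-1
                cost = dp[k - 1][j] + (ideal - (prefix[i] - prefix[j])) ** 2
                if cost < dp[k][i]:
                    dp[k][i] = cost
                    cut[k][i] = j
    # Walk the cut table backwards to recover the partition.
    parts, i = [], m
    for k in range(n_parts, 0, -1):
        j = cut[k][i]
        parts.append(weights[j:i])
        i = j
    return parts[::-1]

print(partition_weights([5, 1, 1, 1, 5, 2, 4], 3))  # three runs of roughly equal total

This runs in O(N * M^2) time, which is instant for something like 90 days over a few hundred chapters.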

How can I merge similar words into one?

The similar_text gem can calculate the pairwise similarity of words. I want to merge words whose similarity is greater than 50% into one, keeping the longest.
Original
[
"iphone 6",
"iphone 5c",
"iphone 6",
"macbook air",
"macbook",
]
Expected
[
"iphone 5c",
"macbook air",
]
But I don't know how to implement the algorithm to filter the expected results efficiently.
This is not a trivial problem, and the following is not 100% of what you're looking for.
In particular, it isn't obvious how to handle transitive similarities: if a is similar to b and b is similar to c, are a and c in the same group (even if they aren't similar to each other)?
Here is a piece of code where you can find all similar pairs in an array:
def find_pairs(ar)
  ar.product(ar).reject { |l, r| l == r }.map(&:sort).uniq
    .map { |l, r| [[l, r], l.similar(r)] }
    .reject { |pair, similarity| similarity < 50.0 }
    .map { |pair, _| pair }
end
For an answer on how to find the groups in the matches see:
Finding All Connected Components of an Undirected Graph
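One way to get those connected components is a small union-find over the similar pairs, keeping the longest word of each component. Here is a rough Python sketch; difflib's ratio is only a stand-in for the similar_text gem's metric, and the 50% threshold follows the question:

from difflib import SequenceMatcher

def similarity(a, b):
    # Stand-in for the similar_text gem: percentage similarity of two strings.
    return SequenceMatcher(None, a, b).ratio() * 100

def merge_similar(words, threshold=50.0):
    # Union-find: words joined by any similar pair end up in the same group.
    parent = {w: w for w in set(words)}

    def find(w):
        while parent[w] != w:
            parent[w] = parent[parent[w]]
            w = parent[w]
        return w

    uniq = list(parent)
    for i, a in enumerate(uniq):
        for b in uniq[i + 1:]:
            if similarity(a, b) >= threshold:
                parent[find(a)] = find(b)

    groups = {}
    for w in uniq:
        groups.setdefault(find(w), []).append(w)
    # Keep the longest word from each connected group.
    return [max(group, key=len) for group in groups.values()]

print(merge_similar(["iphone 6", "iphone 5c", "iphone 6", "macbook air", "macbook"]))
# e.g. ['iphone 5c', 'macbook air'] (order may vary)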
First of all, there is no efficient way to do this, as you must calculate all pairs, which can take a long time on long lists.
Having said that...
I am not familiar with this specific gem, but I'm assuming it will give you some sort of distance between two words (the smaller the better) or a probability that the words are the same (the higher the better). Let's go with distance, as changing the algorithm to work with probabilities is trivial.
This is just an algorithm description that you may find useful. It helped me in a similar case.
What I suggest is to put all the words in a 2 dimensional array as the headers of the rows and columns. If you have N words you need NxN matrix.
In each cell put the calculated distance between the words (the row and column headers).
You will get a matrix of all the possible distances. Remember that in this case we look for minimum distance between words.
So, for each row, look for the minimum cell (not the one with a zero value which is the distance of the word with itself).
If this minimum is bigger than some threshold, then this word has no similar words. If not, look for all the words in this row with a distance up to the threshold (actually you can skip the previous stage and just do this search).
All of the words you found belong to the same group. Look for the longest and use it in the new list you are building.
Also note the column indexes in which you found minimum distances, and skip the rows with those indexes (so you will not add the same words to different groups).
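A compact Python sketch of that matrix-and-threshold procedure, assuming a distance function where smaller means more similar (the names and the distance itself are only illustrative; a threshold of 0.5 roughly mirrors the 50% similarity cut-off):

from difflib import SequenceMatcher

def distance(a, b):
    # Illustrative distance: 0 means identical, 1 means completely different.
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def group_by_distance(words, threshold=0.5):
    # Build the full NxN distance matrix (the expensive, unavoidable part).
    n = len(words)
    dist = [[distance(words[i], words[j]) for j in range(n)] for i in range(n)]
    used = set()      # indexes already assigned to a group
    result = []
    for i in range(n):
        if i in used:
            continue
        # Every word in this row within the threshold belongs to the same group.
        group = [j for j in range(n)
                 if j == i or (j not in used and dist[i][j] <= threshold)]
        used.update(group)
        result.append(max((words[j] for j in group), key=len))  # keep the longest
    return result

print(group_by_distance(["iphone 6", "iphone 5c", "iphone 6", "macbook air", "macbook"]))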

relevant text search in a large text file

I have a large text document and a search query (e.g. "rock climbing"). I want to return the 5 most relevant sentences from the text. What approaches can be followed? I am a complete newbie in this text retrieval domain, so any help is appreciated.
One approach I can think of is :
Scan the file sentence by sentence, look for the whole search query in the sentence, and if it matches, return the sentence.
The above approach works only if some of the sentences contain the whole search query. What should I do if no sentences contain the whole query and some of the sentences contain just one of the words? Or what if they contain none of the words?
Any help?
Another question I have is: can we preprocess the text document to make building an index easier? Is a trie a good data structure for preprocessing?
In general, relevance is something you define using some sort of scoring function. I will give you an example of a naive scoring algorithm, as well as one of the common search engine ranking algorithms (used for documents, but I modified it for sentences for educational purposes).
Naive ranking
Here's an example of a naive ranking algorithm. The ranking could be as simple as:
Sentences are ranked based on the average proximity between the query terms (e.g. the biggest number of words between all possible query term pairs), meaning that a sentence "Rock climbing is awesome" is ranked higher than "I am not a fan of climbing because I am lazy like a rock."
More word matches are ranked higher, e.g. "Climbing is fun" is ranked higher than "Jogging is fun."
Pick alphabetical or random favorites in case of a tie, e.g. "Climbing is life" is ranked higher than "I am a rock."
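A minimal Python sketch of such a naive scorer: more matched terms rank first, then smaller average gaps between matched terms, then alphabetical order as a tie-break (the exact weighting is only illustrative):

from itertools import combinations

def naive_rank(sentences, query_terms):
    query_terms = [t.lower() for t in query_terms]

    def key(sentence):
        words = sentence.lower().split()
        positions = {t: [i for i, w in enumerate(words) if t in w] for t in query_terms}
        matched = [t for t in query_terms if positions[t]]
        # Average of the closest gaps between each pair of matched terms.
        gaps = [min(abs(i - j) for i in positions[a] for j in positions[b])
                for a, b in combinations(matched, 2)]
        avg_gap = sum(gaps) / len(gaps) if gaps else float("inf")
        return (-len(matched), avg_gap, sentence)

    return sorted(sentences, key=key)

sentences = ["Rock climbing is awesome",
             "I am not a fan of climbing because I am lazy like a rock",
             "Jogging is fun"]
print(naive_rank(sentences, ["rock", "climbing"])[:2])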
Some common search engine ranking
BM25
BM25 is a good robust algorithm for scoring documents with relation to the query. For reference purposes, here's a Wikipedia article about BM25 ranking algorithm. You would want to modify it a little because you are dealing with sentences, but you can take a similar approach by treating each sentence as a 'document'.
Here it goes. Assuming your query consists of keywords q1, q2, ... , qm, the score of a sentence S with respect to the query Q is calculated as follows:
SCORE(S, Q) = SUM(i=1..m) IDF(qi) * f(qi, S) * (k1 + 1) / (f(qi, S) + k1 * (1 - b + b * |S| / AVG_SENT_LENGTH))
k1 and b are free parameters (commonly k1 in [1.2, 2.0] and b = 0.75; you can find good values empirically), f(qi, S) is the term frequency of qi in sentence S (you could treat it as just the number of times the term occurs), |S| is the length of your sentence (in words), and AVG_SENT_LENGTH is the average sentence length of the sentences in your document. Finally, IDF(qi) is the inverse document frequency (or, in this case, inverse sentence frequency) of qi, which is usually computed as:
IDF(qi) = log ((N - n(qi) + 0.5) / (n(qi) + 0.5))
Where N is the total number of sentences, and n(qi) is the number of sentences containing qi.
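A small Python sketch of this sentence-level BM25 scoring, following the formula above directly (the parameter values are the usual defaults, not tuned, and there is no pre-computation or indexing yet):

import math

def bm25_scores(sentences, query_terms, k1=1.5, b=0.75):
    # Each "document" is a sentence; everything is recomputed on the fly here.
    docs = [s.lower().split() for s in sentences]
    n_sents = len(docs)
    avg_len = sum(len(d) for d in docs) / n_sents
    scores = []
    for words in docs:
        score = 0.0
        for q in query_terms:
            q = q.lower()
            tf = words.count(q)                                   # f(qi, S)
            n_q = sum(1 for d in docs if q in d)                  # n(qi)
            idf = math.log((n_sents - n_q + 0.5) / (n_q + 0.5))   # IDF(qi)
            # Note: this raw IDF can go negative for very common terms;
            # many implementations clamp it at zero.
            score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(words) / avg_len))
        scores.append(score)
    return scores

sentences = ["Rock climbing is awesome", "Jogging is fun", "I like the rock"]
print(bm25_scores(sentences, ["rock", "climbing"]))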
Speed
Assume you don't store an inverted index or any additional data structure for fast access.
These are the terms that could be pre-computed: N and AVG_SENT_LENGTH.
First, notice that the more terms are matched, the higher a sentence will be scored (because of the summed terms). So if you only score the top k candidate sentences, you really need to compute the values f(qi, S), |S|, and n(qi), which takes O(AVG_SENT_LENGTH * m * k) time, or, if you are ranking all the sentences, O(DOC_LENGTH * m) time in the worst case, where k is the number of sentences with the highest number of matched terms and m is the number of query terms. This assumes each sentence is about AVG_SENT_LENGTH words long and that you go over the m query terms for each of the k sentences.
Inverted index
Now let's look at an inverted index to allow fast text searches. We will treat your sentences as documents for educational purposes. The idea is to build a data structure for your BM25 computations. We will need to store term frequencies using inverted lists:
wordi: (sent_id1, tf1), (sent_id2, tf2), ... , (sent_idk, tfk)
Basically, you have hashmaps where the key is a word and the value is a list of pairs (sent_id_j, tf_j) corresponding to ids of sentences and the frequency of the word in them. For example, it could be:
rock: (1, 1), (5, 2)
This tells us that the word rock occurs in the first sentence 1 time and in the fifth sentence 2 times.
This pre-processing step will allow you to get O(1) access to the term frequencies for any particular word, so it will be as fast as you want.
Also, you would want to have another hashmap to store sentence length, which should be a fairly easy task.
How do you build the inverted index? I am skipping stemming and lemmatization in your case, but you are welcome to read more about them. In short, you traverse your document, continuously creating pairs and increasing frequencies in your hashmap of words. Here are some slides on building the index.
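A minimal sketch of that pre-processing step in Python (no stemming or lemmatization, as noted above; sentence ids and names are illustrative):

from collections import defaultdict

def build_inverted_index(sentences):
    # word -> list of (sentence_id, term_frequency) pairs, plus sentence lengths.
    postings = defaultdict(dict)
    sent_lengths = {}
    for sent_id, sentence in enumerate(sentences, start=1):
        words = sentence.lower().split()
        sent_lengths[sent_id] = len(words)
        for w in words:
            postings[w][sent_id] = postings[w].get(sent_id, 0) + 1
    index = {w: sorted(p.items()) for w, p in postings.items()}
    return index, sent_lengths

index, lengths = build_inverted_index(["A rock", "Rock rock climbing", "Jogging"])
print(index["rock"])   # [(1, 1), (2, 2)]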

Solve Hangman in AI way

I call it the "AI way" because I want the application to play the hangman game without any human interaction.
The scenario is like this:
An available word list containing hundreds of thousands of English words.
The application will pick a certain number of words, e.g. 20, from the list.
The application plays Hangman against each word until it either wins or fails.
The restriction here is the maximum number of wrong guesses.
26 obviously does not make sense, so let's say 6 for the maximum number of wrong guesses.
I tried the strategy mentioned on the wiki page but it does not work well.
Basically the success rate is about 30%.
Any suggestions / comments regarding strategy, as well as which field I should dig into in order to find a fairly good strategy?
Thanks a lot.
-Simon
PS: A JavaScript implementation that seems to work fairly well.
(https://github.com/freizl/play-hangman-game)
Updated Idea
Download a dictionary of words and put it into some database or structure of your choice
When presented with a word, narrow your guesses to words of the same length and perform a letter frequency distribution (you can use a dictionary and/or list collection for fast distribution analysis and sorting)
Pick the most common letter from this list
If the letter is found, create a regex pattern based on the known letter(s) and the word length and repeat from step 2
You should be able to quickly narrow down to a single word resulting from your pattern search; a rough sketch of these steps follows.
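A minimal Python sketch of those steps (the word list is a tiny stand-in, and '_' marks unknown letters in the pattern):

import re
from collections import Counter

def best_guess(candidates, guessed):
    # Letter frequency distribution over the remaining candidates,
    # ignoring letters that have already been guessed.
    counts = Counter(ch for word in candidates for ch in set(word) if ch not in guessed)
    return counts.most_common(1)[0][0]

def narrow(candidates, pattern, wrong_letters):
    # Keep words matching the known-letter pattern (e.g. "_a___a_" -> ".a...a.")
    # and not containing any letter already known to be wrong.
    regex = re.compile("^" + pattern.replace("_", ".") + "$")
    return [w for w in candidates
            if regex.match(w) and not any(ch in w for ch in wrong_letters)]

words = ["hangman", "happens", "regular", "history"]
candidates = [w for w in words if len(w) == 7]       # same length as the target
candidates = narrow(candidates, "_a___a_", set())    # after 'a' has been revealed
print(candidates, best_guess(candidates, {"a"}))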
For posterity:
Take a look at this wiki page. It includes a table of frequencies of the first letters of words which may help you tune your algorithm.
You could also take into account the fact that if you find a vowel or two in a word, the likelihood of finding other vowels decreases significantly, and you should then try more common consonants instead. The example from the wiki page you listed starts with E, then T, and then tries three vowels in a row: A, O and I. The first two letters are missed, but once the third letter is found (twice), the process should switch to common consonants and skip trying for more vowels, since there will likely be fewer.
Any useful strategy will certainly employ frequency distribution charts on letters and possibly words, e.g. some words are very common while others are rarely used, so performing a letter frequency distribution on a set of more common words might help. That said, it depends on your word selection algorithm, which might not take "common" usage into account.
You could also build specialized letter frequency tables, possibly even on the fly. For example, given the Wikipedia "hangman" example: you find the letter A twice in a word, in the 2nd and 6th positions. You know that the word has seven letters, and with a fairly simple regex you could isolate the words from a dictionary that match this pattern:
_ a _ _ _ a _
Then perform a letter frequency analysis on the set of words that matches this pattern, and use that set for your next guess. Rinse and repeat. I think doing some of the things I mentioned, but especially the last, will really increase your odds of success.
The strategies in the linked page seem to be "order guesses by letter frequency" and "guess the vowels, then order guesses by letter frequency"
A couple of observations about hangman:
1) Since guessing a letter that isn't in the word hurts us, we should guess letters by word frequency (the percentage of words that contain letter X), not letter frequency (the number of times X appears across all words). This should minimise our chances of guessing a bad letter.
2) Once we've guessed some letters correctly, we know more about the word we're trying to guess.
Here are two strategies that should beat the letter frequency strategy. I'm going to assume we have a dictionary of words that might come up.
If we expect the word to be in our dictionary:
1) We know the length of the target word, n. Remove all words in the dictionary that aren't of length n
2) Calculate the word frequency of all letters in the dictionary
3) Guess the most frequent letter that we haven't already guessed.
4) If we guessed correctly, remove all words from the dictionary that don't match the revealed letters.
5) If we guessed incorrectly, remove all words that contain the incorrectly guessed letter
6) Go to step 2
For maximum effect, instead of calculating word frequencies of all letters in step 2, calculate the word frequencies of all letters in positions that are still blank in the target word.
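A Python sketch of this dictionary strategy (steps 1 to 6 above, in its basic form without the positional refinement); the tiny word list is only a stand-in:

def play_hangman(target, dictionary, max_wrong=6):
    # Keep only candidate words consistent with everything known so far, and
    # always guess the letter that appears in the most remaining candidates
    # ("word frequency", not raw letter counts).
    candidates = [w for w in dictionary if len(w) == len(target)]
    revealed = ["_"] * len(target)
    guessed, wrong = set(), 0
    while wrong < max_wrong and "_" in revealed:
        freq = {}
        for w in candidates:
            for ch in set(w) - guessed:
                freq[ch] = freq.get(ch, 0) + 1
        if not freq:
            break
        guess = max(freq, key=freq.get)
        guessed.add(guess)
        if guess in target:
            revealed = [ch if ch == guess or r != "_" else "_"
                        for ch, r in zip(target, revealed)]
            # Keep words that match the revealed letters exactly and do not
            # hide the guessed letter in a still-blank position.
            candidates = [w for w in candidates
                          if all((w[i] == r) if r != "_" else (w[i] != guess)
                                 for i, r in enumerate(revealed))]
        else:
            wrong += 1
            candidates = [w for w in candidates if guess not in w]
    return "_" not in revealed

dictionary = ["hangman", "monster", "history", "housing", "hanging"]
print(play_hangman("hangman", dictionary))  # True if solved within 6 wrong guesses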
If we don't expect the word to be in our dictionary:
1) From the dictionary, build up a table of n-grams for some value of n (say 2). If you haven't come across n-grams before, they are groups of consecutive letters inside the word. For example, if the word is "word", the 2-grams are {^w,wo,or,rd,d$}, where ^ and $ mark the start and the end of the word. Count the word frequency of these 2-grams.
2) Start by guessing single letters by word frequency as above
3) Once we've had some hits, we can use the table of word frequency of n-grams to determine either letters to eliminate from our guesses, or letters that we're likely to be able to guess. There are a lot of ways you could achieve this:
For example, you could use 2-grams to determine that the blank in w_rd is probably not z. Or, you could determine that the character at the end of the word ___e_ might (say) be d or s.
Alternatively you could use the n-grams to generate the list of possible characters (though this might be expensive for long words). Remember that you can always cross off all n-grams that contain letters you've guessed that aren't in the target word.
Remember that at each step you're trying not to make a wrong guess, since that keeps us alive. If the n-grams tell you that one position is likely to only be (say) a,b or c, and your word frequency table tells you that a appears in 30% of words, but b and c only appear in 10%, then guess a.
For maximum benefit, you could combine the two strategies.
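As a small illustration of the 2-gram idea, here is a Python sketch that builds a 2-gram table (with ^ and $ markers) and scores candidate letters for a blank from its known neighbours, along the lines of the w_rd example above (the names and word list are illustrative):

from collections import Counter

def bigram_table(dictionary):
    # Word frequency of 2-grams: in how many words each 2-gram appears.
    table = Counter()
    for w in dictionary:
        padded = "^" + w + "$"
        table.update({padded[i:i + 2] for i in range(len(padded) - 1)})
    return table

def letter_scores_for_blank(pattern, pos, table, alphabet="abcdefghijklmnopqrstuvwxyz"):
    # Score each candidate letter for the blank at `pos` in a pattern like "w_rd",
    # using the 2-grams it would form with its known neighbours.
    padded = "^" + pattern + "$"
    i = pos + 1
    left, right = padded[i - 1], padded[i + 1]
    scores = {}
    for ch in alphabet:
        score = 0
        if left != "_":
            score += table[left + ch]
        if right != "_":
            score += table[ch + right]
        scores[ch] = score
    return sorted(scores.items(), key=lambda kv: -kv[1])[:5]

table = bigram_table(["word", "ward", "wild", "cord", "care"])
print(letter_scores_for_blank("w_rd", 1, table))  # 'a' and 'o' score highest for the blank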
The strategy discussed is one suitable for humans to implement. Since you're writing an AI you can throw computing power at it to get a better result.
Take your word list, filter it down to only those words which match what information you have about the target word. (At the start that will only be the word length.) For each letter A through Z, note how many words contain at least one of them (this is different from the total count of the letters). Pick the letter with the highest score.
You MIGHT even be able to run multiple cycles of this in computing a guess but that might prove too much even for modern CPUs.
Clarification: I'm saying that you might be able to run a look-ahead. If we choose "A" at this level what options does that present for the next level? This is an O(x^n) algorithm, obviously you can't go too far down that path.
