About the TextRank summary

How can I select the number of sentences for the TextRank summary? After applying the TextRank algorithm to a text data set, how many sentences, or what portion of the text, should the summary contain? Is there a standard?
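There is no single standard; a common convention is to keep a fixed fraction of the sentences (say the top 20% by score) or a fixed sentence count. Below is a minimal sketch of that idea, assuming the sentences are already split, using a TF-IDF similarity graph scored with networkx's PageRank; the 0.2 ratio is an arbitrary choice, not part of the TextRank algorithm.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def textrank_summary(sentences, ratio=0.2):
    # Build a sentence-similarity graph and score sentences with PageRank.
    tfidf = TfidfVectorizer().fit_transform(sentences)
    sim = cosine_similarity(tfidf)
    scores = nx.pagerank(nx.from_numpy_array(sim))
    # Keep the top `ratio` of sentences (at least one), restored to original order.
    n = max(1, int(len(sentences) * ratio))
    top = sorted(scores, key=scores.get, reverse=True)[:n]
    return [sentences[i] for i in sorted(top)]
```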

Related

How can I vectorize a list of words?

I am working on SMS data where I have a list of words in one column of a dataframe.
I want to train a classifier to predict its type and subtype.
How would I convert the words into numerical format, given that they are in a list?
The idea is to use as the vocabulary all the words found in this column across instances, except that the least frequent words should be removed (to avoid overfitting). Then, for every instance, the column is represented as a vector of boolean features, where the nth value represents the nth word in the vocabulary: 1 if it is in the list for this instance, 0 if not.
In Python you can use CountVectorizer, treating every list in the column as a sentence.
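A minimal sketch of that idea, assuming the column is called "words" and holds lists of tokens; binary=True gives the 1/0 features and min_df drops the least frequent words (the threshold of 2 here is arbitrary).

```python
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer

df = pd.DataFrame({"words": [["win", "free", "prize"],
                             ["free", "entry", "now"],
                             ["meet", "me", "for", "lunch"]]})

# Join each list back into a pseudo-sentence so CountVectorizer can tokenize it.
corpus = df["words"].apply(" ".join)

# binary=True -> boolean features; min_df=2 -> keep only words seen in >= 2 messages.
vectorizer = CountVectorizer(binary=True, min_df=2)
X = vectorizer.fit_transform(corpus)
print(vectorizer.get_feature_names_out(), X.toarray())
```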

Arrange words to get max height

In my app I have a frame inside which the user can form a "sentence" from random source words. I need to calculate a font size for the words so that they fit this frame (any order and any number of words, even all of them, can be used).
Words are placed one by one from the top-left corner. If the next word would exceed the frame width, it is moved to a new line. So the resulting height of the words shouldn't exceed the frame's height. To simplify the algorithm, let's ignore padding (between words and between lines). All words have the same height and font.
Can't find anything better than NP-completeness... Any ideas?
UPDATE:
This question is: "How can I calculate the words' maximum height?" The problem is that this height depends on the order of the words.
UPDATE2:
Exact question by @Teepeemm: "What arrangement of the words will need the most lines to display?"
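Not an answer to the complexity question, but here is a minimal brute-force sketch of the layout rule described above (assumptions: word widths and the frame width are in the same units, all words share one line height, padding is ignored). It tries every ordering, so it only scales to a handful of words.

```python
from itertools import permutations

def lines_needed(word_widths, frame_width):
    # Greedy top-left layout: wrap to a new line when the next word would overflow.
    lines, current = 1, 0.0
    for w in word_widths:
        if current > 0 and current + w > frame_width:
            lines += 1
            current = w
        else:
            current += w
    return lines

def worst_case_lines(word_widths, frame_width):
    # Brute force over all orderings; exponential, so only for small word sets.
    return max(lines_needed(p, frame_width) for p in permutations(word_widths))

print(worst_case_lines([3, 3, 4], 6))  # -> 3 (e.g. the ordering 3, 4, 3)
```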

relevant text search in a large text file

I have a large text document and a search query (e.g. "rock climbing"). I want to return the 5 most relevant sentences from the text. What approaches can be followed? I am a complete newbie in this text retrieval domain, so any help is appreciated.
One approach I can think of is:
scan the file sentence by sentence, look for the whole search query in the sentence, and if it matches, return the sentence.
The above approach works only if some of the sentences contain the whole search query. What should I do if no sentence contains the whole query and some sentences contain just one of the words? Or what if they contain none of the words?
Any help?
Another question I have is: can we preprocess the text document to make building an index easier? Is a trie a good data structure for preprocessing?
In general, relevance is something you define using some sort of scoring function. I will give you an example of a naive scoring algorithm, as well as one of the common search engine ranking algorithms (used for documents, but I modified it for sentences for educational purposes).
Naive ranking
Here's an example of a naive ranking algorithm (a small sketch in code follows the list). The ranking could be as simple as:
Sentences are ranked based on the proximity between the query terms (e.g. the largest gap, in words, over all possible query term pairs, with smaller gaps ranked higher), meaning that a sentence "Rock climbing is awesome" is ranked higher than "I am not a fan of climbing because I am lazy like a rock."
More word matches are ranked higher, e.g. "Climbing is fun" is ranked higher than "Jogging is fun."
Pick alphabetical or random favorites in case of a tie, e.g. "Climbing is life" is ranked higher than "I am a rock."
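A minimal sketch of such a naive scorer (the whitespace tokenization and the exact tie-breaking are assumptions, not a fixed recipe): more matched terms dominate, and a smaller span between matched terms wins among equals.

```python
def naive_score(sentence, query_terms):
    words = sentence.lower().split()
    positions = [i for i, w in enumerate(words) if w in query_terms]
    matches = len(set(words) & set(query_terms))
    # Smaller span between matched terms scores higher; the match count dominates.
    span = (max(positions) - min(positions)) if len(positions) > 1 else 0
    return (matches, -span)

sentences = ["Rock climbing is awesome",
             "I am not a fan of climbing because I am lazy like a rock",
             "Jogging is fun"]
query = {"rock", "climbing"}
for s in sorted(sentences, key=lambda s: naive_score(s, query), reverse=True):
    print(naive_score(s, query), s)
```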
Some common search engine ranking
BM25
BM25 is a good, robust algorithm for scoring documents in relation to a query. For reference purposes, here's a Wikipedia article about the BM25 ranking algorithm. You would want to modify it a little because you are dealing with sentences, but you can take a similar approach by treating each sentence as a 'document'.
Here it goes. Assuming your query consists of keywords q1, q2, ... , qm, the score of a sentence S with respect to the query Q is calculated as follows:
SCORE(S, Q) = SUM(i=1..m) IDF(qi) * f(qi, S) * (k1 + 1) / (f(qi, S) + k1 * (1 - b + b * |S| / AVG_SENT_LENGTH))
k1 and b are free parameters (commonly chosen as k1 in [1.2, 2.0] and b = 0.75 -- you can find good values empirically), f(qi, S) is the term frequency of qi in the sentence S (you could treat it as just the number of times the term occurs), |S| is the length of the sentence (in words), and AVG_SENT_LENGTH is the average sentence length in your document. Finally, IDF(qi) is the inverse document frequency (or, in this case, inverse sentence frequency) of qi, which is usually computed as:
IDF(qi) = log ((N - n(qi) + 0.5) / (n(qi) + 0.5))
Where N is the total number of sentences, and n(qi) is the number of sentences containing qi.
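A minimal sketch of the formula above, treating each sentence as a 'document' (the whitespace tokenization and the parameter values k1 = 1.5 and b = 0.75 are assumptions):

```python
import math

def bm25_score(sentence, query_terms, sentences, k1=1.5, b=0.75):
    words = sentence.lower().split()
    n_sentences = len(sentences)
    avg_len = sum(len(s.split()) for s in sentences) / n_sentences
    score = 0.0
    for q in query_terms:
        f = words.count(q)                                        # f(qi, S)
        n_q = sum(q in s.lower().split() for s in sentences)      # n(qi)
        idf = math.log((n_sentences - n_q + 0.5) / (n_q + 0.5))   # IDF(qi)
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(words) / avg_len))
    return score

sentences = ["Rock climbing is awesome",
             "I went jogging yesterday",
             "The rock was heavy",
             "Lunch was great",
             "Climbing ropes wear out"]
query = ["rock", "climbing"]
print(sorted(sentences, key=lambda s: bm25_score(s, query, sentences), reverse=True)[:2])
```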
Speed
Assume you don't store inverted index or any additional data structure for fast access.
These are the terms that can be pre-computed: N and AVG_SENT_LENGTH.
First, notice that the more query terms a sentence matches, the higher it will be scored (because of the sum over terms). So if you take the top k sentences by the number of matched terms, you really only need to compute f(qi, S), |S|, and n(qi), which takes O(AVG_SENT_LENGTH * m * k) time, or, if you rank all the sentences in the worst case, O(DOC_LENGTH * m) time, where k is the number of sentences with the highest number of matched terms and m is the number of query terms. Each sentence is about AVG_SENT_LENGTH words long, and you pass over it m times for each of the k sentences.
Inverted index
Now let's look at an inverted index to allow fast text searches. We will treat your sentences as documents for educational purposes. The idea is to build a data structure for your BM25 computations. We will need to store term frequencies using inverted lists:
wordi: (sent_id1, tf1), (sent_id2, tf2), ... ,(sent_idk, tfk)
Basically, you have a hashmap where the key is a word and the value is a list of pairs (sent_id_j, tf_j), giving the ids of the sentences the word occurs in and its frequency in each. For example, it could be:
rock: (1, 1), (5, 2)
This tells us that the word rock occurs in the first sentence 1 time and in the fifth sentence 2 times.
This pre-processing step will give you O(1) access to the term frequencies of any particular word, so it will be as fast as you want.
Also, you would want to have another hashmap to store sentence length, which should be a fairly easy task.
How do you build the inverted index? I am skipping stemming and lemmatization in your case, but you are welcome to read more about them. In short, you traverse your document, continuously creating pairs and increasing frequencies in the hashmap containing the words. Here are some slides on building the index.
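A minimal sketch of that pre-processing step (whitespace tokenization, no stemming, and sentence ids starting at 1 are all assumptions):

```python
from collections import defaultdict

def build_inverted_index(sentences):
    index = defaultdict(dict)   # word -> {sentence_id: term frequency}
    lengths = {}                # sentence_id -> sentence length in words
    for sent_id, sentence in enumerate(sentences, start=1):
        words = sentence.lower().split()
        lengths[sent_id] = len(words)
        for w in words:
            index[w][sent_id] = index[w].get(sent_id, 0) + 1
    return index, lengths

index, lengths = build_inverted_index(["The rock was heavy",
                                       "Rock climbing is awesome",
                                       "Rock rock rock"])
print(dict(index)["rock"])   # {1: 1, 2: 1, 3: 3}
```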

Algorithm to find keywords of a text

Given a set of texts (might be books, articles, documents, etc.) how would you find relevant keywords for each text?
Common sense suggests to:
split words
exclude common words (also called stop-words, like "a", "to", "for", "in")
count words frequencies
give a score to each word, with a formula that takes into account the frequency of the word in this document and in the other documents, the number of words in the document, and the total number of words in all documents
The question is: which is a good formula to do that?
I've developed one.
For each word calculate this ratio:
(frequency of word in this text) * (total number of words in all texts)
-----------------------------------------------------------------------
(number of words in this text) * (frequency of word in all texts)
Keywords are those words whose ratio is in the highest 20% (for this document).
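A minimal sketch of this ratio (lowercase whitespace tokenization and the 20% cut-off are taken as given; stop-word removal is omitted for brevity):

```python
from collections import Counter

def keyword_scores(texts, doc_index):
    doc_counts = Counter(texts[doc_index].lower().split())
    all_counts = Counter(w for t in texts for w in t.lower().split())
    doc_words = sum(doc_counts.values())
    all_words = sum(all_counts.values())
    return {w: (doc_counts[w] * all_words) / (doc_words * all_counts[w])
            for w in doc_counts}

texts = ["rock climbing gear and rock shoes",
         "cooking recipes and kitchen gear"]
scores = keyword_scores(texts, 0)
n_keep = max(1, len(scores) // 5)   # highest 20% of this document's distinct words
print(sorted(scores, key=scores.get, reverse=True)[:n_keep])
```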
Ankerl also proposes his own formula (a small sketch in code follows the definitions below):
tanh(curVal/curWords*200) - 5*tanh((allVal-curVal)/(allWords-curWords)*200)
Where:
curVal: How often the word to score is present in the to-be-analyzed text
curWords: Total number of words in the to-be-analyzed text
allVal: How often the word to score is present in the indexed dataset
allWords: Total number of words of the indexed dataset
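A minimal sketch of Ankerl's formula, using the variable names defined above (the example numbers are made up):

```python
import math

def ankerl_score(cur_val, cur_words, all_val, all_words):
    return (math.tanh(cur_val / cur_words * 200)
            - 5 * math.tanh((all_val - cur_val) / (all_words - cur_words) * 200))

# e.g. a word seen 3 times in a 500-word text and 10 times in a 100,000-word dataset
print(ankerl_score(3, 500, 10, 100_000))
```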
Both algorithms work pretty well, and results often coincide. Do you know any way to do it better?

Get most unique text from a group of text

I have a number of texts, for example 100.
I would like to keep the 10 most unique among them. I built a 100x100 matrix in which I compared each text to every other with the Levenshtein algorithm.
Is there an algorithm to select the 10 most unique?
EDIT:
What I want is the N most unique texts, i.e. the N texts that maximize the distances among themselves, regardless of the first element of my set.
I want the most unique because I will publish these texts to the web and I want to avoid near duplicates.
A long comment rather than an answer ...
I don't think you've specified your requirement(s) clearly enough. How do you select the 1st element of your set of 10 strings ? Is it the string with the largest distance from any other string (in which case you are looking for the largest element in your array) or the one with the largest distance from all the other strings (in which case you are looking for the largest row- or column-sum in the array).
Moving on to the N (or 10 as you suggest) most distant strings, you have a number of choices.
You could select the N largest distances in the array. I suspect, not having seen your data, that the string which is furthest from one particular string may also be furthest from several other strings -- I mean you may find that several of the N largest entries in your array occur in the same row or column.
You could simply select the N strings with the largest row sums.
Or perhaps you are looking for a cluster of N strings which maximises the distance between all the strings in that cluster and all the strings in the remaining 100-N strings. This might lead you towards looking at, rather obviously, clustering algorithms.
I suggest you clarify your requirements and edit your question.
Since this looks like an eigenvalue problem, I would try to execute the power iteration on the matrix, and reject the 90 highest values from the resulting vector. The power iteration normally converges very fast, within ~ten iterations. BTW: this solution assumes a similarity matrix. If the entries of your matrix are a measure of dissimilarity ("distance"), you might need to use their inverses instead.
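A minimal sketch of that idea (it assumes sim is a symmetric, non-negative similarity matrix; with a distance matrix you would first convert it, e.g. sim = 1 / (1 + dist)):

```python
import numpy as np

def most_unique(sim, n_keep=10, iterations=50):
    v = np.ones(sim.shape[0])
    for _ in range(iterations):     # power iteration
        v = sim @ v
        v /= np.linalg.norm(v)
    # The largest entries of the dominant eigenvector mark the most "central"
    # (similar) texts; reject those and keep the smallest entries as the most unique.
    return np.argsort(v)[:n_keep]

rng = np.random.default_rng(0)
sim = rng.random((100, 100))
sim = (sim + sim.T) / 2             # make the toy matrix symmetric
print(most_unique(sim, n_keep=10))
```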

Resources