I found that these values can be marked as interchangeable in a Language Understanding Intelligent Service (LUIS) phrase list.
What does interchangeable mean for a LUIS phrase list?
What is this used for?
Regarding your question on phrase lists, I'm happy to give a high-level overview of what the feature does :)
So ultimately the goal with LUIS is to understand the meaning of the user's input (utterance); through its calculations it returns a score for how confident it is about the meaning of that input. Using phrase lists is one of the ways to improve the accuracy of determining the meaning of the user's utterance. More specifically, adding features to a phrase list can put more weight on the score of an intent or entity.
Here are a couple of examples to illustrate, at a high level, how features help determine the intent/entity score and, in turn, predict the meaning of the user's utterance:
For example, if I wanted to describe a class called Tablet, features I could use to describe it could include screen, size, battery, color, etc. If an utterance mentions any of the features, it'll add points/weight to the score of predicting that the utterance is describing Tablet. However, features that are particularly good to include in a phrase list are words that are foreign, proprietary, or perhaps just rare. For example, maybe I would add "SurfacePro", "iPad", or "Wugz" (a made-up tablet brand) to the phrase list of Tablet. Then if a user's utterance includes "Wugz", more points/weight would be put toward predicting that Tablet is the right entity for the utterance.
Or maybe the intent is Book.Flight and the features include "Book", "Flight", "Cairo", "Seattle", etc. If the utterance is "Book me a flight to Cairo", points/weight would be added toward the score of the Book.Flight intent for "Book", "flight", and "Cairo".
Now, regarding interchangeable vs. non-interchangeable phrase lists. Maybe I had a Cities phrase list that included "Seattle", "Cairo", "L.A.", etc. I would make sure that phrase list is non-interchangeable, because that indicates that yes, "Seattle" and "Cairo" are somehow similar to one another, but they are not synonyms: I can't use them interchangeably, or rather one in place of the other ("book flight to Cairo" is different from "book flight to Seattle"). But if I had a Coffee phrase list that included the features "Coffee", "Starbucks", "Joe", and marked the list as interchangeable, I'm specifying that the features in the list are interchangeable ("I'd like a cup of coffee" means the same as "I'd like a cup of Joe").
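To make the distinction concrete, here is a small conceptual sketch in Python. This is not LUIS code and not its actual scoring; it only illustrates the idea of treating an interchangeable list as a bag of synonyms and a non-interchangeable list as a shared hint whose members keep their identity:

# Conceptual illustration only: not the LUIS API or its real scoring.
INTERCHANGEABLE = {"coffee", "starbucks", "joe"}       # synonyms for "coffee"
NON_INTERCHANGEABLE = {"seattle", "cairo", "l.a."}     # related cities, not synonyms

def features(utterance):
    feats = []
    for word in utterance.lower().replace(",", "").split():
        if word in INTERCHANGEABLE:
            feats.append("PHRASELIST:coffee")   # every member maps to one feature
        if word in NON_INTERCHANGEABLE:
            feats.append("PHRASELIST:city")     # shared hint...
            feats.append(word)                  # ...but the word keeps its identity
    return feats

print(features("I'd like a cup of Joe"))     # ['PHRASELIST:coffee']
print(features("Book a flight to Cairo"))    # ['PHRASELIST:city', 'cairo']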
For more on Phrase Lists - Phrase List features in LUIS
For more on improving prediction - Tutorial: Add phrase list to improve predictions
How are keyword clouds constructed?
I know there are a lot of NLP methods, but I'm not sure how they solve the following problem:
You can have several items that each have a list of keywords relating to them.
(In my own program, these items are articles where I can use NLP methods to detect proper nouns, people, places, and possibly subjects. This will be a very large list given a sufficiently sized article, but I will assume that I can winnow the list down using some method of comparing articles. How to do this properly is what I am confused about.)
Each item can have a list of keywords, but how are those keywords picked so that they aren't overly specific or overly general across items?
For example, trivially, "the" could be a keyword that appears in a lot of items,
while "supercalifragilistic" might only be in one.
I suppose I could create a heuristic where, if a word exists in no more than n% of the items, with n sufficiently small but still yielding a decent group (say 5% of 1000 articles is 50, which seems reasonable), then I could just use that word. However, the issue I take with this approach is that, given two entirely different sets of items, there is most likely some difference in interrelatedness between the items, and I'm throwing away that information.
This is very unsatisfying.
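To make the heuristic concrete, a minimal sketch of that document-frequency cutoff might look like the following; the articles and the cutoff value are placeholders:

# Keep a word as a candidate keyword only if it appears in fewer than
# max_fraction of the items. The corpus and threshold are made up.
from collections import Counter

articles = [
    {"the", "queen", "visited", "seattle"},
    {"the", "queen", "opened", "parliament"},
    {"the", "team", "won", "in", "seattle"},
]

def candidate_keywords(docs, max_fraction=0.67):
    doc_freq = Counter(w for doc in docs for w in doc)   # docs are sets of words
    limit = max_fraction * len(docs)
    return {w for w, df in doc_freq.items() if df < limit}

print(candidate_keywords(articles))   # drops "the", keeps the rarer words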
I feel that given the popularity of keyword clouds there must have been a solution created already. I don't want to use a library however as I want to understand and manipulate the assumptions in the math.
If anyone has any ideas please let me know.
Thanks!
EDIT:
freenode/programming/guardianx has suggested https://en.wikipedia.org/wiki/Tf%E2%80%93idf
tf-idf is ok btw, but the issue is that the weighting needs to be determined a priori. Given that two distinct collections of documents will have a different inherent similarity between documents, assuming an a priori weighting does not feel correct.
freenode/programming/anon suggested https://en.wikipedia.org/wiki/Word2vec
I'm not sure I want something that uses a neural net (a little complicated for this problem?), but still considering.
Tf-idf is still a pretty standard method for extracting keywords. You can try a demo of a tf-idf-based keyword extractor (whose idf vector is, as you say, determined a priori, estimated from Wikipedia). A popular alternative is the TextRank algorithm based on PageRank, which has an off-the-shelf implementation in Gensim.
If you decide on your own implementation, note that all of these algorithms typically need plenty of tuning and text preprocessing to work correctly.
The minimum you need to do is remove stopwords that you know can never be keywords (prepositions, articles, pronouns, etc.). If you want something fancier, you can use, for instance, spaCy to keep only the desired parts of speech (nouns, verbs, adjectives). You can also include frequent multiword expressions (Gensim has a good function for automatic collocation detection) and named entities (spaCy can do this). You can get better results if you run coreference resolution and substitute pronouns with what they refer to... There are endless options for improvement.
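If you go the tf-idf route, a minimal sketch using scikit-learn's TfidfVectorizer might look like the following. The toy corpus, the built-in English stopword list, and the choice of three keywords per document are placeholder assumptions; a real pipeline would add the preprocessing described above:

# Minimal tf-idf keyword extraction sketch (assumes a recent scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [   # hypothetical mini-corpus standing in for your articles
    "The new tablet has a bright screen and a long battery life.",
    "The election results were announced in Cairo on Tuesday.",
    "Seattle coffee shops are famous for their espresso.",
]

# The stopword list removes "the", "a", etc.; idf downweights words that
# appear in many documents, which is essentially the n% intuition above.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)
vocab = vectorizer.get_feature_names_out()

for doc_id in range(tfidf.shape[0]):
    row = tfidf[doc_id].toarray().ravel()
    top = row.argsort()[::-1][:3]                 # top 3 keywords per document
    print(doc_id, [vocab[i] for i in top if row[i] > 0])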
I'm looking to do the opposite of what is described here: Tools for text simplification (Java)
Finding meaningful sub-sentences from a sentence
That is, take two simple sentences and combine them as a compound sentence.
Are there any algorithms to do this?
I'm fairly sure that you will not be able to compound sentences like the example from the linked question (John played golf. John was the CEO of a company. -> John, who was the CEO of a company, played golf), because that requires a level of language understanding that is still far out of reach.
So it seems that the best option is to bluntly replace the period with a comma and concatenate the simple sentences (if you have to choose which sentences from a text to compound, you can try simple heuristics like approximating semantic similarity by the number of common words, or tools based on WordNet). I guess that in most cases human readers can infer the missing conjunction from the context.
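A minimal sketch of that blunt strategy follows; the tokenization, the shared-word similarity and the sample sentences are all placeholder assumptions, and the missing conjunction is simply left implicit:

# Pick the pair of sentences with the most words in common and join them
# with a comma. Purely illustrative heuristics.
import re
from itertools import combinations

def words(sentence):
    return set(re.findall(r"[a-z']+", sentence.lower()))

def overlap(a, b):
    return len(words(a) & words(b))      # crude similarity: shared words

def compound(a, b):
    return a.rstrip(". ") + ", " + b     # period-to-comma concatenation

sentences = [
    "John played golf.",
    "Mary read a book.",
    "John likes whisky.",
]

best_pair = max(combinations(sentences, 2), key=lambda p: overlap(*p))
print(compound(*best_pair))   # John played golf, John likes whisky.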
Of course, you could develop more sophisticated solutions, but that requires either a narrow domain (e.g. all sentences sharing a very similar structure) or tools that can determine relations between sentences, e.g. cause-and-effect relationships. I'm not aware of such tools and doubt they exist, because this level (sentences and phrases) is much more diverse and sparse than the level of words and collocations.
We are creating a website for a client that wants a website based around a survey of people's '10 favourite things'. There are 10 questions that each user must answer, e.g. 'What is your favourite colour', 'Who is your favourite celebrity', etc., and then the results are collated into a global Top 10 list on the home page.
The conundrum lies in both allowing the user to input anything they want, e.g. their favourite holiday destination might be 'Grandma's house', and being able to count the votes accurately, e.g. User A might say their favourite celebrity is 'The Queen' and User B might say it's 'Queen of England' - we need those two answers to be counted as two votes for the same 'thing'.
If we force the user to choose from a large but predetermined list for each question, it restricts users' ability to define literally anything as their 'favourite thing'. Whereas, if we have a plain text input field and try to interpret answers after they have been submitted, it's going to be much more difficult to count votes where there are variations in names or spelling for the same answer.
Is it possible to automatically moderate their answers in real-time through some form of search phrase suggestion engine? How can we make sure that, if a plain text field is the input method, we make allowances for variations in spelling?
If anyone has any ideas as to possible solutions to this functionality, perhaps a piece of software, a plugin, an API, anything, then please do let us know.
Thank you and please just ask for any clarification.
If you want to automate counting "The Queen" and "The Queen of England", you're in for work that might be more complex than it's worth for a "fun little survey". If the volume is light enough, consider just manually counting the results. Just to give you a feel for the problem: what if someone enters "The Queen of Sweden" or "Queen Latifah Concerts"?
If you really want to go down that route, look into Natural Language Processing (NLP). Specifically, the field of categorization.
For a general introduction to NLP, I recommend the relevant Wikipedia article
http://en.wikipedia.org/wiki/Natural_language_processing
RapidMiner is an open source NLP solution that would be worth looking into.
As Eric J said, this is getting into cutting-edge NLP applications. These are fields of study that are very important for AI/automation researchers and computer science in general, but are still very much fledgling. There are a number of programs and algorithms you can use, the drawbacks and benefits of which vary widely. RapidMiner is good, WordNet is widely used in medical applications and should be relatively easy to adapt to your own corpus, and there are more advanced methods like latent Dirichlet allocation. Here are a few resources you should start with (in addition to the Wikipedia article provided above):
http://www.semanticsearchart.com/index.html
http://www.mitpressjournals.org/loi/coli
http://marimba.d.umn.edu/ (try the SenseClusters calculator)
http://wordnet.princeton.edu/
The best way to classify short answers is k-means clustering. You need to apply stemming. Then you need to convert words into indexes using an elementary dictionary; you can use EverGroingDictionary.cs from semanticsearchart.com. After running a phrase through the dictionary, it is converted to a sequence of numbers, i.e. a vector. Introduce a measure of proximity based on the number of words in common and apply k-means, which is a lightning-fast algorithm. k-means will organize all the answers into groups, and the most frequent words in each group will be the signature of that group. The whole program, in C++, C# or Java, should be less than 1000 lines.
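As a rough illustration of the clustering idea, here is a sketch that swaps the hand-built dictionary described above for scikit-learn's TF-IDF features; the sample answers and the number of clusters are assumptions:

# Cluster short survey answers with k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

answers = [
    "The Queen", "Queen of England", "the queen of england",
    "David Beckham", "Beckham", "Grandma's house",
]

vectorizer = TfidfVectorizer(lowercase=True)
X = vectorizer.fit_transform(answers)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Group the original answers by cluster label; the most frequent words in
# each group act as that group's signature, as suggested above.
for label in range(kmeans.n_clusters):
    group = [a for a, l in zip(answers, kmeans.labels_) if l == label]
    print(label, group)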
Is there an algorithm for finding temporal characteristics of verbs? Meaning if it's an "event", "accomplishment", "achievement" or "state"? As described in Zeno Vendler's paper "Verbs and Times"?
http://semantics.uchicago.edu/kennedy/classes/s07/events/vendler57.pdf
Or maybe someone has an idea of what would be the best way to implement such thing?
Thanks!
As far as I can see, there is no way to do this without the use of a database. The "algorithm" itself would then be a union of the structure of the database and the queries made to it.
For example, a relational database with a table of English words in two columns (word, and one or more parts of speech) is the most basic language-processing database conceivable. A more complex one would also have a verb table with two columns: word and "temporal characteristics".
As an example, the word "be" always describes a state. Therefore, a program that sees the word be (or its conjugations: is, are, was, etc.) can immediately recognize the clause as describing a state-of-being. Obviously, the word accomplish will immediately denote an accomplishment, and "achieve" will always denote an achievement. But don't forget that out of the four categories you listed, only "state" and "event" are mutually exclusive (with the exception of a present participle such as in the sentence "An event is taking place."). Other than that, a state can also be an accomplishment or achievement ("I am an Olympic gold medalist.") and so can an event ("I graduate tomorrow.").
Accomplishment and achievement are also subjective terms and depend on the sensibilities of the speaker and reader alike. Words like "achieve", "accomplished" and "succeeded" are deliberate expressions of a feeling of achievement, and can therefore always be categorized as such. However, this is a priori information, and would therefore require a relational database to be realized.
Finally, some words' "temporal characteristics" change depending on the other words in the sentence. For example, in the sentence "I smell good.", "smell" is a state-of-being verb; in the sentence "I smell bacon.", it is an action verb. These kinds of verbs are action verbs when followed by a noun (transitive), state-of-being verbs when followed by an adjective (predicate adjective), and action verbs when followed by neither (intransitive). A parser would therefore have to inspect the word that follows the verb in the sentence, classify it as a noun or an adjective, and from that recognize the verb's role in the sentence. This is a joint effort between the database knowing the parts of speech of each word and the algorithm being able to parse the sentence correctly (and simply knowing that it needs to parse it at all).
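As a toy sketch of that lookup-plus-context idea: the tiny verb lexicon, the part-of-speech table and the rule below are illustrative stand-ins for the relational database described above, not a full implementation of Vendler's classification:

# Hand-built lexicons standing in for the word and verb tables.
VERB_CLASS = {
    "be": "state",
    "achieve": "achievement",
    "accomplish": "accomplishment",
    "smell": None,              # None: the class depends on what follows
}
POS = {"good": "ADJ", "bad": "ADJ", "bacon": "NOUN", "roses": "NOUN"}

def classify(verb, next_word):
    fixed = VERB_CLASS.get(verb.lower())
    if fixed:
        return fixed                      # verbs with a stable class
    following = POS.get(next_word.lower())
    if following == "ADJ":                # "I smell good." -> state-of-being
        return "state"
    if following == "NOUN":               # "I smell bacon." -> action
        return "action"
    return "unknown"                      # intransitive, or not in the toy tables

print(classify("smell", "good"))    # state
print(classify("smell", "bacon"))   # action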
This is just a brief overview of lexicographical computing, and just my knowledge of the subject. There's a lot more to it, and obviously populating a database with words and their parts of speech, definitions, roles, etc. is tedious. There may exist databases pre-populated with the information that a lexicographical computer scientist would need to implement such a system (but I don't claim to know where one would find them).
Hope I've helped, and good luck!
I would like to create an algorithm to distinguish people writing on a forum under different nicknames.
The goal is to discover people registering a new account to flame the forum anonymously, rather than under their main account.
Basically, I was thinking about stemming the words they use and comparing users according to the similarity of these words.
As shown in the picture, user3 and user4 use the same words, which suggests there is probably one person behind both accounts.
It's clear that there are a lot of common words used by all users, so I should focus on "user-specific" words.
Input is (related to the image above):
<word1, user1>
<word2, user1>
<word2, user2>
<word3, user2>
<word4, user2>
<word5, user3>
<word5, user4>
... etc. The order doesn't matter.
Output should be:
user1
user2
user3 = user4
I am doing this in Java but I want this question to be language independent.
Any ideas how to do it?
1) How should I store the words/users? What data structures should I use?
2) How do I get rid of the common words that everybody uses? I have to somehow ignore them and keep only the user-specific words. Maybe I could just leave them in because they get lost in the noise, but I am afraid they would hide the significant differences in the "user-specific" words.
3) How do I recognize the same user? Should I somehow count the shared words between each pair of users?
Thanks in advance for any advice.
In general, this is the task of author identification, and there are several good papers like this that may give you a lot of information. Here are my own suggestions on the topic.
1. User recognition/author identification itself
The simplest kind of text classification is classification by topic, and there you take meaningful words first of all. That is, if you want to distinguish text about Apple the company from text about apple the fruit, you count words like "eat", "oranges", "iPhone", etc., but you commonly ignore things like articles, word forms, part-of-speech (POS) information and so on. However, many people may talk about the same topics but use different styles of speech, that is, articles, word forms and all the things you ignore when classifying by topic. So the first and main thing you should consider is collecting the most useful features for your algorithm. An author's style may be expressed by the frequency of words like "a" and "the", POS information (e.g. some people tend to use the present tense, others the future), common phrases ("I would like" vs. "I'd like" vs. "I want") and so on. Note that topic words should not be discarded completely - they still show the topics the user is interested in. However, you should treat them somewhat specially; e.g. you can pre-classify texts by topic and then rule out users who are not interested in it.
When you are done with feature collection, you may use one of the machine learning algorithms to find the best guess for the author of a text. For me, the two best suggestions here are probability and cosine similarity between the text vector and the user's aggregate vector.
2. Discriminating common words
Or, in the context above, common features. The best way I can think of to get rid of words that are used by all people more or less equally is to compute the entropy of each such feature:
entropy(x) = -sum(P(Ui|x) * log(P(Ui|x)))
where x is a feature, Ui is the i-th user, P(Ui|x) is the conditional probability of the i-th user given feature x, and the sum is taken over all users.
A high entropy value indicates that the feature's distribution across users is close to uniform, and thus the feature is almost useless for discriminating between them.
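A small sketch of that computation; the per-user counts are made up, and P(Ui|x) is estimated simply as each user's share of the feature's total occurrences:

# Entropy of a feature over users, following the formula above.
from math import log

def feature_entropy(counts_per_user):
    total = sum(counts_per_user)
    probs = [c / total for c in counts_per_user if c > 0]   # P(Ui|x)
    return -sum(p * log(p) for p in probs)

# Used roughly equally by everyone -> high entropy, not informative.
print(feature_entropy([105, 98, 110, 97]))
# Used almost only by one user -> low entropy, very informative.
print(feature_entropy([1, 0, 47, 0]))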
3. Data representation
A common approach here is to have a user-feature matrix. That is, you just build a table where the rows are user ids and the columns are features. E.g. cell [3][12] shows how many times user #3 used feature #12 (don't forget to normalize these frequencies by the total number of features the user has ever used!).
Depending on the features you are going to use and the size of the matrix, you may want to use a sparse matrix implementation instead of a dense one. E.g. if you use 1000 features and, for any particular user, around 90% of the cells are 0, it doesn't make sense to keep all these zeros in memory, and a sparse implementation is the better option.
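For illustration, such a matrix could be assembled roughly as follows; the (user, feature, count) triples are made up, and scipy plus scikit-learn are assumed to be available:

# Build a normalized user-feature matrix in sparse form.
from scipy.sparse import csr_matrix
from sklearn.preprocessing import normalize

triples = [          # (user_id, feature_id, count): placeholder data
    (0, 0, 3), (0, 1, 1),
    (1, 1, 2), (1, 2, 4), (1, 3, 1),
    (2, 4, 5),
    (3, 4, 6),
]
users, features, counts = zip(*triples)
matrix = csr_matrix((counts, (users, features)), shape=(4, 5), dtype=float)

# Normalize each row by the user's total feature count, as advised above.
matrix = normalize(matrix, norm="l1")

print(matrix.toarray())   # dense view, just for inspection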
I recommend a language modelling approach. You can train a language model (unigram, bigram, parsimonious, ...) on each of your user accounts' words. That gives you a mapping from words to probabilities, i.e. numbers between 0 and 1 (inclusive) expressing how likely it is that a user uses each of the words you encountered in the complete training set. Language models can be stored as arrays of pairs, hash tables or sparse vectors. There are plenty of libraries on the web for fitting LMs.
Such a mapping can be considered a high-dimensional vector, in the same way documents are considered vectors in the vector space model of information retrieval. You can then compare these vectors using KL-divergence or any of the popular distance metrics: Euclidean distance, cosine distance, etc. A strong similarity/small distance between two accounts' vectors might then indicate that they belong to one and the same user.
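A minimal sketch of that idea with unigram models and cosine similarity; the posts are made up, and add-one smoothing plus a hand-rolled cosine are simplifying assumptions (a real setup would use a proper LM library, and KL-divergence could be used instead):

# Unigram language model per account, compared with cosine similarity.
from collections import Counter
from math import sqrt

posts = {
    "user1": "i really hate this forum and all of you trolls",
    "user2": "great tutorial thanks for sharing the link",
    "user3": "you trolls ruin every thread on this forum",
}

vocab = sorted({w for text in posts.values() for w in text.split()})

def unigram_lm(text):
    counts = Counter(text.split())
    total = sum(counts.values()) + len(vocab)           # add-one smoothing
    return [(counts[w] + 1) / total for w in vocab]     # P(word | account)

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

models = {user: unigram_lm(text) for user, text in posts.items()}
print(cosine(models["user1"], models["user3"]))   # shared vocabulary -> likely higher
print(cosine(models["user1"], models["user2"]))   # little overlap -> likely lower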
How should I store the words/users? What data structures?
You probably have some kind of representation for the users and the posts that they have made. I think you should have a list of words, and a list corresponding to each word containing the users who use it. Something like:
<word: <user#1, user#4, user#5, ...> >
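For example, that structure maps naturally onto a dictionary of sets; the pairs below mirror the input format from the question:

# Inverted index: word -> set of users who used it.
from collections import defaultdict

pairs = [
    ("word1", "user1"), ("word2", "user1"), ("word2", "user2"),
    ("word3", "user2"), ("word4", "user2"), ("word5", "user3"),
    ("word5", "user4"),
]

index = defaultdict(set)
for word, user in pairs:
    index[word].add(user)

print(dict(index))
# Words shared by several users (like word5) are the ones that can link accounts.
print({w: u for w, u in index.items() if len(u) > 1})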
How do I get rid of the common words that everybody uses?
Hopefully, you have a set of stopwords. Why not extend it to include commonly used words from your forum? For example, for Stack Overflow, the names of some of the most frequently used tags would qualify.
How do I recognize the same user?
In addition to similarity or word-frequency based measures, you can also try using interactions between users. For example, user3 likes/upvotes/comments on each and every post by user8, or a new user does similar things for some other (older) user.