WEKA PartitionMembership filter

I have a question regarding the supervised PartitionMembership filter in WEKA.
When applying this filter with J48 as the partition generator, I am able to achieve a much higher accuracy in combination with the KStar classifier.
What exactly does this filter do? The documentation provided by WEKA is quite limited. And is it valid to use this filter to get increased accuracy?
When I apply this filter to my training set, it generates a number of classes. When I try to reapply the model to my test set, the filter generates a different number of classes. Hence, I am not able to use the supervised PartitionMembership filter trained on the training set for my test set. How can I use the PartitionMembership filter that was trained on the training set for the test set as well?

You are asking two or three questions here. Regarding the first two (what does the PartitionMembership filter do, and how do I use it?): I don't know how to answer that properly; ultimately you can read the source code to find out.
For the latter question (how do I get it to evaluate my test set): use the FilteredClassifier, and choose your filter and your classifier in that classifier's dialog box.
NAME weka.classifiers.meta.FilteredClassifier
SYNOPSIS Class for running an arbitrary classifier on data that has
been passed through an arbitrary filter. Like the classifier, the
structure of the filter is based exclusively on the training data and
test instances will be processed by the filter without changing their
structure.
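Equivalently, a minimal sketch of this setup through the Weka Java API (assuming Weka 3.7+, where J48 implements PartitionGenerator; the ARFF file names are placeholders):

    import weka.classifiers.Evaluation;
    import weka.classifiers.lazy.KStar;
    import weka.classifiers.meta.FilteredClassifier;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.supervised.attribute.PartitionMembership;

    public class PartitionMembershipExample {
        public static void main(String[] args) throws Exception {
            // Placeholder file names -- replace with your own data sets.
            Instances train = DataSource.read("train.arff");
            Instances test = DataSource.read("test.arff");
            train.setClassIndex(train.numAttributes() - 1);
            test.setClassIndex(test.numAttributes() - 1);

            // The filter derives its partitions from the training data only;
            // test instances are mapped through the same trained partitions,
            // so the attribute structure stays consistent.
            PartitionMembership pm = new PartitionMembership();
            pm.setPartitionGenerator(new J48());

            FilteredClassifier fc = new FilteredClassifier();
            fc.setFilter(pm);
            fc.setClassifier(new KStar());
            fc.buildClassifier(train);

            Evaluation eval = new Evaluation(train);
            eval.evaluateModel(fc, test);
            System.out.println(eval.toSummaryString());
        }
    }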


Does H2OAutoML handle hyperparameter optimization?

I know that there are different methods in H2O, such as H2OGridSearch and H2ORandomSearch, to perform hyperparameter optimization. However, is there a way to include a hyperparameter optimization method when we use H2OAutoML to train many models at once? Does it already include one by default?
Any inputs would be beneficial.
Yes, H2O's AutoML also performs hyperparameter optimization, mainly via random grid search. From the docs:
AutoML performs a hyperparameter search over a variety of H2O
algorithms in order to deliver the best model. In the table below, we
list the hyperparameters, along with all potential values that can be
randomly chosen in the search. If these models also have a non-default
value set for a hyperparameter, we identify it in the list as well.
Random Forest and Extremely Randomized Trees are not grid searched (in
the current version of AutoML), so they are not included in the list
below.
Note: AutoML does not run a grid search for GLM. Instead AutoML builds
a single model with lambda_search enabled and passes a list of alpha
values. It returns only the model with the best alpha-lambda
combination rather than one model for each alpha.

Contextual Search: Classifying shopping products

I have got a new (non-traditional) task from my client; it involves machine learning.
As I have never worked with machine learning beyond some basic data mining, I need your help.
My task is to classify a product listed on any shopping site on the basis of gender (whom the product is intended for), age group, etc. The training data we have is the product's title, its keywords (available in the HTML of the product page), and the product description.
I did a lot of R&D: I found image recognition APIs (cloudsight, vufind) that returned details of the product image, but that did not fulfill the need; I used Google suggest queries; I looked into many machine learning algorithms; and finally...
I came to know about the decision tree learning algorithm, but I cannot figure out how it applies to my problem.
I tried out the "PlayingTennis" dataset but couldn't make sense of what to do next.
Can you give me some direction on where to start this journey? Should I focus on the decision tree learning algorithm, or is there another algorithm you would suggest for categorizing products on the basis of context?
If it helps, I can share in detail what I have already tried in order to solve this problem.
I would suggest the following:
1. Go through the items in your dataset and classify them manually (decide which gender each item is for). Store each decision so that you can link each item in the original dataset to a target class.
2. Develop an algorithm for converting each item from your dataset into a feature vector, i.e. a vector of numbers (more on how to do this below).
3. Convert your whole dataset, with the appropriate classes, into a dataset that looks like this:
   Feature_1, Feature_2, Feature_3, ..., Gender
   value_1, value_2, value_3, ..., male
   It is a good idea to store it in a CSV file, since then you can load it and process it in different machine learning tools (more on those below).
4. Load the dataset you created at step 3 into the machine learning tool of your choice and try to come up with the best model that can classify items in your dataset by gender (see the sketch after this list).
5. Store the model created at step 4. It will be part of your production system.
6. Develop production code that takes an unclassified product, creates a feature vector from it, and passes that vector to the model you saved at step 5. The result of this operation is the predicted gender.
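A minimal sketch of steps 4 and 5 with the Weka Java API (the file names are placeholders, and J48 merely stands in for whichever classifier turns out to work best on your data):

    import java.io.File;
    import java.util.Random;
    import weka.classifiers.Evaluation;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.SerializationHelper;
    import weka.core.converters.CSVLoader;

    public class TrainGenderModel {
        public static void main(String[] args) throws Exception {
            // Load the CSV built at step 3 (hypothetical file name).
            CSVLoader loader = new CSVLoader();
            loader.setSource(new File("products.csv"));
            Instances data = loader.getDataSet();
            data.setClassIndex(data.numAttributes() - 1); // Gender is the last column

            // Step 4: estimate accuracy with 10-fold cross-validation.
            J48 tree = new J48();
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(tree, data, 10, new Random(1));
            System.out.printf("Accuracy: %.2f%%%n", eval.pctCorrect());

            // Step 5: train on all data and store the model for production use.
            tree.buildClassifier(data);
            SerializationHelper.write("gender-model.model", tree);
        }
    }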
Details
If there are too many items (say, tens of thousands) in your original dataset, it may be impractical to classify them all yourself. You can use Amazon Mechanical Turk to farm the task out. If you are unable to use it (the last time I checked, you had to have a USA address), you can classify a few hundred items to start working on your model and classify the rest later to improve the accuracy of your classification (the more training data you use, the better the accuracy, but only up to a certain point).
How to extract features from a dataset
If a keyword has the form tag=true/false, it is a boolean feature.
If a keyword has the form tag=42, it is a numerical or ordinal feature. For example, it can be a price value or a price range (0-10, 10-50, 50-100, etc.).
If a keyword has the form tag=string_value, you can convert it into a categorical value.
The class (gender) is simply a boolean value 0/1.
You can experiment with how you extract your features, since it may influence the resulting accuracy.
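As a purely illustrative sketch (the tag names and helper are made up, not taken from any library), converting one product's keywords into a CSV row following the rules above might look like:

    import java.util.Map;

    public class FeatureExtractor {
        // Hypothetical conversion of keyword tags into one CSV row.
        static String toCsvRow(Map<String, String> tags, String gender) {
            // Boolean feature: tag=true/false -> 1/0.
            String onSale = "true".equals(tags.get("on_sale")) ? "1" : "0";
            // Numerical feature: tag=42 -> use the number directly (0 if absent).
            String price = tags.getOrDefault("price", "0");
            // Categorical feature: tag=string_value -> keep the string as a category.
            String brand = tags.getOrDefault("brand", "unknown");
            return String.join(",", onSale, price, brand, gender);
        }
    }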
How to extract features from product description
There are different ways to convert a text into a feature vector. Look into TF-IDF or similar techniques.
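For example, Weka's StringToWordVector filter can produce TF-IDF features from a string attribute. A minimal sketch, with a placeholder file name:

    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;
    import weka.filters.Filter;
    import weka.filters.unsupervised.attribute.StringToWordVector;

    public class TextFeatures {
        public static void main(String[] args) throws Exception {
            // Assumes a dataset whose string attribute holds the product description.
            Instances raw = DataSource.read("descriptions.arff");
            raw.setClassIndex(raw.numAttributes() - 1);

            StringToWordVector tfidf = new StringToWordVector();
            tfidf.setTFTransform(true);   // term-frequency weighting
            tfidf.setIDFTransform(true);  // inverse-document-frequency weighting
            tfidf.setLowerCaseTokens(true);
            tfidf.setWordsToKeep(1000);   // cap the vocabulary size
            tfidf.setInputFormat(raw);

            Instances vectors = Filter.useFilter(raw, tfidf);
            System.out.println(vectors.numAttributes() + " features extracted");
        }
    }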
Machine learning tools
You can use one of the existing machine learning libraries and hack together some code that loads your CSV dataset, trains a model, and checks the accuracy, but at first I would suggest using something like Weka. It has a more or less intuitive UI, and you can quickly start experimenting with different machine learning algorithms, converting features in your dataset from strings to categories, from real values to ordinal values, etc. A good thing about Weka is that it has a Java API, so you can automate the whole process: data conversion, training models programmatically, etc.
What algorithms to choose
I would suggest decision tree algorithms like C4.5. They are fast and show good results on a wide range of machine learning tasks. Additionally, you can use an ensemble of classifiers: there are various algorithms that combine several models (google for boosting or random forest to find out more). They usually give better results but work more slowly, since you need to run a single feature vector through several models.
Another trick you can use to make your system more accurate is to build models that work on different sets of features (say, one model uses features extracted from the tags and another uses features extracted from the product description). You can then combine them using algorithms like stacking to come up with a final result (see the sketch below).
For classification on the basis of features extracted from text, you can try the Naive Bayes algorithm or SVM. They both show good results in text classification.
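A sketch of the stacking idea with Weka (the base learners and the file name are illustrative choices, not a recommendation):

    import weka.classifiers.Classifier;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.functions.Logistic;
    import weka.classifiers.functions.SMO;
    import weka.classifiers.meta.Stacking;
    import weka.classifiers.trees.J48;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class StackingExample {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("products.arff"); // placeholder file
            data.setClassIndex(data.numAttributes() - 1);

            Stacking stack = new Stacking();
            // Base learners: e.g. one tree and two text-friendly classifiers.
            stack.setClassifiers(new Classifier[] { new J48(), new NaiveBayes(), new SMO() });
            // Meta-learner that combines the base learners' predictions.
            stack.setMetaClassifier(new Logistic());
            stack.buildClassifier(data);
        }
    }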
Do consider the Support Vector Classifier (SVC), or, for Google's sake, the Support Vector Machine (SVM). If you have a large training set (which I suspect you do), search for implementations that are described as "fast" or "scalable".

Algorithm to recognize keywords' categories in a One-search-box-for-all model query

I'm aiming at providing a one-search-box-for-everything model in a search engine project, like LinkedIn's.
I've tried to express my problem using an analogy.
Let's assume that each result is an article and has multiple dimensions like author, topic, conference (if that's a publication), hosted website, etc.
Some sample queries:
"information retrieval papers at IEEE by authorXYZ": three dimensions {topic, conf-name, authorname}
"ACM paper by authoABC on design patterns" : three dimensions {conf-name, author, topic}
"Multi-threaded programming at javaranch" : two dimensions {topic, website}
I have to identify those dimensions and the corresponding keywords in a big query before I can retrieve the final result from the database.
Points
I have access to all the possible values of all the dimensions. For example, I have all the conference names, author names, etc.
There's very little overlap of terms across dimensions.
My approach (naive)
Using Lucene, index all the keywords in each dimension with a dedicated field called "dimension" and another field with actual value.
Ex:
1) {name:IEEE, dimension:conference}, etc.
2) {name:ooad, dimension:topic}, etc.
3) {name:xyz, dimension:author}, etc.
Search the index with the query as-is.
Iterate through the results up to some extent and recognize the first document with a new dimension.
Problems
I am not sure when to stop recognizing dimensions from the result set. For example, the query may contain only two dimensions, but the results may match three.
If I want to include spell-checking as well, it becomes more complex and the results tend to be less accurate.
References to papers, articles, or pointing-out the right terminology that describes my problem domain, etc. would certainly help.
Any guidance is highly appreciated.
Solution 1: How about solving your problem using Natural Language Processing, specifically Named Entity Recognition (NER)? NER can be done with simple regular expressions (in cases where the data is very static), or you can use a machine learning technique such as Hidden Markov Models to figure out the named entities in your sequence data. The reason I stress HMMs over other supervised machine learning algorithms is that your data is sequential, with each state dependent on the previous or next state. NER will output the dimensions along with the corresponding names. After that, your search becomes a vertical search problem: you simply search for the identified words in different Solr/Lucene fields and set your boosts accordingly.
Now, coming to the implementation: I assume you know Java, since you are working with Lucene, so Mahout is a good choice. Mahout has an HMM built in, and you can train and test the model on your dataset. I am also assuming you have a large dataset.
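Once the dimensions are recognized, the vertical search part might look like this in Lucene (a sketch assuming a recent Lucene version, 5+, and the field layout from your question; the boost values are arbitrary):

    import org.apache.lucene.index.Term;
    import org.apache.lucene.search.BooleanClause;
    import org.apache.lucene.search.BooleanQuery;
    import org.apache.lucene.search.BoostQuery;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.TermQuery;

    public class VerticalSearch {
        // After NER has labelled the query terms, search each term in the field
        // matching its recognized dimension, boosting the dimensions you trust most.
        static Query build(String author, String topic) {
            BooleanQuery.Builder b = new BooleanQuery.Builder();
            b.add(new BoostQuery(new TermQuery(new Term("author", author)), 2.0f),
                  BooleanClause.Occur.SHOULD);
            b.add(new TermQuery(new Term("topic", topic)),
                  BooleanClause.Occur.SHOULD);
            return b.build();
        }
    }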
Solution 2: Try to model this as a property graph problem. Check out something like Neo4j. I suggest this because your problem falls in a schema-less domain: your schema is not fixed, and the problem can very well be modelled as a graph where each node is a set of key-value pairs.
Solution 3: Since you say you have all the possible values of the dimensions, why not simply convert your unstructured text into structured data using regular expressions and, given that you do not have a fixed schema, store the data in a NoSQL key-value database? Most of them provide Lucene integrations for full-text search, so you can then simply query those databases.
What you need to do is calculate the similarity between the query and the document set you are searching in. Measures like cosine similarity should serve your need. A hack you can use is to calculate the TF-IDF score for each document and build an index using that score, from which you can choose the appropriate documents. I would recommend looking into the Vector Space Model to find a method that serves your need.
Also give this algorithm a look:
http://en.wikipedia.org/wiki/Okapi_BM25
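For reference, a minimal sketch of cosine similarity over sparse term-weight (e.g. TF-IDF) vectors, in plain Java with no library assumed:

    import java.util.Map;

    public class Cosine {
        // Cosine similarity between two sparse vectors, each represented
        // as a map from term to weight.
        static double similarity(Map<String, Double> a, Map<String, Double> b) {
            double dot = 0, normA = 0, normB = 0;
            for (Map.Entry<String, Double> e : a.entrySet()) {
                Double w = b.get(e.getKey());
                if (w != null) dot += e.getValue() * w; // shared terms only
                normA += e.getValue() * e.getValue();
            }
            for (double w : b.values()) normB += w * w;
            return dot == 0 ? 0 : dot / (Math.sqrt(normA) * Math.sqrt(normB));
        }
    }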

How to improve the accuracy of a Naive Bayes Classifier?

I am using a Naive Bayes classifier, following this tutorial.
For the training data, I am using 308 questions, categorized into 26 manually tagged categories.
Before feeding in the data I perform NLP preprocessing (punctuation removal, tokenization, stopword removal, and stemming).
I use this filtered data as input for Mahout.
Using Mahout's NBC I train on this data and get the model file. Now when I run the
mahout testnb
command, I get 96% Correctly Classified Instances.
For my test data I am using 100 questions which I have manually tagged. When I use the trained model with the test data, I get 1% Correctly Classified Instances.
This is driving me crazy.
Can anyone suggest what I am doing wrong, or suggest some ways to increase the performance of the NBC?
Also, ideally how much of the question data should I use for training versus testing?
This appears to be the classic problem of overfitting: you get a very high accuracy on the training set but a low one in real situations.
You probably need more training instances. There is also the possibility that the 26 categories don't correlate with the features you have. Machine learning isn't magical; it needs some sort of statistical relationship between the variables and the outcomes. What the NBC might be doing here is effectively "memorizing" the training set, which is completely useless for questions outside that memory.

Find most accurate classifier for my data automatically

I am using Weka, and I'm trying to find the most accurate classifier for my dataset.
Its classifier selection interface works fine, but it only lets me select one classifier at a time, which is not very practical.
Can I somehow make it run all the available classifiers on my data, so I can easily find the most accurate one?
You need to use the Weka Experimenter, which lets you run several classification algorithms against one or more datasets in a single experiment; in my test I chose one dataset and two different classification algorithms.
See the following tutorial for more information:
WEKA Experimenter Tutorial.
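If you prefer doing it programmatically, a sketch using the Weka Java API that cross-validates a hand-picked list of classifiers on one dataset (the file name and the candidate list are placeholders):

    import java.util.Random;
    import weka.classifiers.Classifier;
    import weka.classifiers.Evaluation;
    import weka.classifiers.bayes.NaiveBayes;
    import weka.classifiers.functions.SMO;
    import weka.classifiers.lazy.IBk;
    import weka.classifiers.trees.J48;
    import weka.classifiers.trees.RandomForest;
    import weka.core.Instances;
    import weka.core.converters.ConverterUtils.DataSource;

    public class CompareClassifiers {
        public static void main(String[] args) throws Exception {
            Instances data = DataSource.read("mydata.arff");
            data.setClassIndex(data.numAttributes() - 1);
            Classifier[] candidates = {
                new J48(), new NaiveBayes(), new SMO(), new IBk(), new RandomForest()
            };
            // 10-fold cross-validate each candidate and print its accuracy.
            for (Classifier c : candidates) {
                Evaluation eval = new Evaluation(data);
                eval.crossValidateModel(c, data, 10, new Random(1));
                System.out.printf("%-45s %.2f%%%n", c.getClass().getName(), eval.pctCorrect());
            }
        }
    }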
