The AutoML stops on a clock. I compared two AutoML runs where one used a subset of what the other had to make the same predictions, and at a 3600-second runtime the fuller model looked better. I repeated this with a 5000-second re-run, and the subset model looked better. They traded places, and that isn't supposed to happen.
I think it is a convergence issue. Is there any way to track the convergence history of stacked ensemble learners to determine whether they are relatively stable? We have that for parallel and series CART ensembles (bagging and boosting), so I don't see why a heterogeneous ensemble couldn't do the same.
I have plenty of data, and especially with cross-validation, I would like to rule out the possibility that the difference came down to the random draws of the training vs. validation sets.
I'm running on relatively high-performance hardware, so I don't think the runtime is simply too short. My "all" model count is between several hundred and a thousand, for what it's worth.
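To make the question concrete, the kind of trace I have in mind could be approximated by hand: grow a stack one base learner at a time and record the cross-validated score of the resulting ensemble, then check whether the curve has flattened before the time budget runs out. A minimal sketch with scikit-learn; the learners, data set, and loop below are placeholders, not my actual AutoML's internals.

```python
# Hand-rolled "convergence history" for a stacked ensemble (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

candidate_learners = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
]

history = []
for k in range(1, len(candidate_learners) + 1):
    stack = StackingClassifier(
        estimators=candidate_learners[:k],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=5,
    )
    score = cross_val_score(stack, X, y, cv=5).mean()
    history.append((k, score))
    print(f"{k} base learners -> CV accuracy {score:.4f}")

# If this curve is flat well before the clock expires, the 3600 s vs. 5000 s
# rank flip is less likely to be an artifact of an unconverged search.
```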
Related
I have about 44 million training examples across about 6200 categories.
After training, the model comes out to be ~ 450MB
And while testing, with 5 parallel mappers (each given enough RAM), the classification proceeds at a rate of ~4 items a second, which is WAY too slow.
How can I speed things up?
One way I can think of is to reduce the word corpus, but I fear losing accuracy. I had maxDFPercent set to 80.
Another way I thought of was to run the items through a clustering algorithm, empirically maximizing the number of clusters while keeping the items within each category restricted to a single cluster. This would allow me to build separate models for each cluster and thereby (possibly) decrease training and testing time.
Any other thoughts?
Edit:
After some of the answers given below, I started contemplating doing some form of down-sampling: running a clustering algorithm, identifying groups of items that are "highly" close to one another, and then taking a union of a few samples from those tight groups along with samples that are not so tightly clustered.
I also started thinking about using data normalization techniques that incorporate edit distances over n-grams (http://lucene.apache.org/core/4_1_0/suggest/org/apache/lucene/search/spell/NGramDistance.html).
I'm also considering using the Hadoop Streaming API to leverage some of the ML libraries available in Python, listed here http://pydata.org/downloads/ and here http://scikit-learn.org/stable/modules/svm.html#svm (these, I think, use the liblinear mentioned in one of the answers below).
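For the streaming idea, the mapper I have in mind would be an ordinary Python script that reads records from stdin and writes predictions to stdout, with a pre-trained model shipped to the workers. A rough sketch; the model.pkl file name, the id<TAB>text input format, and the batch size are just assumptions.

```python
#!/usr/bin/env python3
# Hypothetical Hadoop Streaming mapper that scores documents with a pickled
# scikit-learn pipeline (e.g. TfidfVectorizer + LinearSVC). Ship the pickle to
# the workers with the streaming job's -files option.
import sys
import pickle

with open("model.pkl", "rb") as f:
    pipeline = pickle.load(f)

batch_ids, batch_texts = [], []

def flush():
    if not batch_texts:
        return
    # Predicting in batches avoids paying per-item call overhead.
    for doc_id, label in zip(batch_ids, pipeline.predict(batch_texts)):
        sys.stdout.write(f"{doc_id}\t{label}\n")
    batch_ids.clear()
    batch_texts.clear()

for line in sys.stdin:
    doc_id, _, text = line.rstrip("\n").partition("\t")
    batch_ids.append(doc_id)
    batch_texts.append(text)
    if len(batch_texts) >= 1000:
        flush()
flush()
```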
Prune stopwords and otherwise useless words (too low support etc.) as early as possible.
Depending on how you use clustering, it may actually make the test phase in particular even more expensive.
Try tools other than Mahout. I found Mahout to be really slow in comparison; it seems to come with a really high overhead somewhere.
Using fewer training examples would be an option. You will see that after a certain number of training examples, your classification accuracy on unseen examples won't increase any further. I would recommend trying to train with 100, 500, 1000, 5000, ... examples per category and using 20% for cross-validating the accuracy. When it doesn't increase anymore, you have found the amount of data you need, which may be a lot less than you use now.
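A rough sketch of that experiment with scikit-learn; the 20 newsgroups data and MultinomialNB are only stand-ins for your corpus and classifier, and for brevity it varies the total training size rather than examples per category.

```python
# Learning-curve style check: grow the training set and watch held-out accuracy.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

data = fetch_20newsgroups(subset="train")
X = TfidfVectorizer(stop_words="english").fit_transform(data.data)
y = np.array(data.target)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

for n in (500, 1000, 5000, len(y_train)):
    clf = MultinomialNB().fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_val, clf.predict(X_val))
    print(f"{n:>6} training examples -> validation accuracy {acc:.3f}")
```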
Another approach would be to use another library. For document classification I find liblinear very, very fast. It may be more low-level than Mahout.
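If you want to try liblinear without leaving Python, scikit-learn's LinearSVC wraps it. A minimal sketch; the 20 newsgroups data is just a placeholder for your own documents.

```python
# TF-IDF + LinearSVC (liblinear backend, one-vs-rest for multiclass).
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

clf = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
clf.fit(train.data, train.target)
print("test accuracy:", clf.score(test.data, test.target))
```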
"but i fear losing accuracy" Have you actually tried using less features or less documents? You may not lose as much accuracy as you fear. There may be a few things at play here:
Such a high number of documents are not likely to be from the same time period. Over time, the content of a stream will inevitably drift and words indicative of one class may become indicative of another. In a way, adding data from this year to a classifier trained on last year's data is just confusing it. You may get much better performance if you train on less data.
The majority of features are not helpful, as #Anony-Mousse said already. You might want to perform some form of feature selection before you train your classifier; this will also speed up training. I've had good results in the past with mutual information (see the sketch after this list).
I've previously trained classifiers for a data set of similar scale and found the system worked best with only 200k features, and using any more than 10% of the data for training did not improve accuracy at all.
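A minimal feature-selection sketch for the mutual-information suggestion above; the three toy documents and k=2 are only for illustration, and on a real corpus k would be in the tens or hundreds of thousands.

```python
# Keep only the k most informative terms before training the classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

docs = ["cheap watches for sale",          # placeholder corpus
        "meeting rescheduled to monday",
        "win a free prize now"]
labels = [1, 0, 1]

X = CountVectorizer().fit_transform(docs)
selector = SelectKBest(mutual_info_classif, k=2)   # e.g. k=200_000 on real data
X_reduced = selector.fit_transform(X, labels)
print(X.shape, "->", X_reduced.shape)
```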
PS Could you tell us a bit more about your problem and data set?
Edit after question was updated:
Clustering is a good way of selecting representative documents, but it will take a long time. You will also have to re-run it periodically as new data come in.
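If clustering is only used to pick representative documents, something like MiniBatchKMeans keeps the cost tolerable at this scale. A sketch, assuming the documents are already vectorized; the blob data, the 20 clusters, and the "closest to centroid" selection rule are illustrative choices.

```python
# Pick one representative document per cluster (the one nearest each centroid).
from sklearn.cluster import MiniBatchKMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances_argmin_min

X, _ = make_blobs(n_samples=10_000, n_features=50, centers=20, random_state=0)

km = MiniBatchKMeans(n_clusters=20, batch_size=1024, random_state=0).fit(X)
rep_idx, _ = pairwise_distances_argmin_min(km.cluster_centers_, X)
representatives = X[rep_idx]
print(representatives.shape)   # (20, 50): one representative per cluster
```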
I don't think edit distance is the way to go. Typical algorithms are quadratic in the length of the input strings, and you would have to run them for every pair of words in the corpus. That's a long time!
I would again suggest that you give random sampling a shot. You say you are concerned about accuracy, but you are using Naive Bayes. If you wanted the best model money can buy, you would go for a non-linear SVM, and you probably wouldn't live to see it finish training. People resort to classifiers with known issues (there's a reason Naive Bayes is called Naive) because they are much faster than the alternative, and their performance is often just a tiny bit worse. Let me give you an example from my experience:
RBF SVM: 85% F1 score, training time ~1 month
Linear SVM: 83% F1 score, training time ~1 day
Naive Bayes: 82% F1 score, training time ~1 day
You find the same thing in the literature: paper. Out of curiosity, what kind of accuracy are you getting?
This is a semi-broad question, but it's one that I feel on some level is answerable or at least approachable.
I've spent the last month or so building a fairly extensive simulation. In order to protect the interests of my employer, I won't state specifically what it does, but what it does can be explained by analogy: a high school dance.
A girl or boy enters the dance floor, and based on the selection of free dance partners, an optimal choice is made. After a period of time, two dancers finish dancing and are now free for a new partnership.
I've been making partner selection algorithms designed to maximize average match outcome while not sacrificing wait time for a partner too much.
I want a way to gauge and compare versions of my algorithms in order to select the optimal algorithm for any situation. This is difficult, however, since the inputs to my simulation are extremely large matrices of input parameters (2-5 per dancer), and the simulation takes several minutes to run, which makes it difficult to test a large number of simulation inputs. I have a few output metrics, but linking them to the large number of inputs is extremely hard. I'm also interested in finding which algorithms completely fail under certain input conditions...
Any pro tips or online resources that might help me define input constraints and output variables, and give clarity on which algorithm is optimal?
I might not understand exactly what you want, but here is my suggestion. Let me know if my solution is inaccurate or irrelevant and I will edit/delete accordingly.
Assume you have a certain metric (say, compatibility of the pairs, or waiting time). If you just have the average or total of this metric over all the users, it is of limited use. Instead, you might want to find the distribution of this metric over all users; at the very least, you should keep track of the variance. Once you have the distribution, you can calculate the probability that a particular algorithm A is better than B on a certain metric.
If you do not have the distribution of the metric within an experiment, you can always run multiple experiments; the number of experiments you need to run depends on the variance of the metric and on the difference between the two algorithms.
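A small sketch of both ideas: keep the per-dancer distribution rather than just the mean, then estimate the probability that algorithm A beats algorithm B by bootstrap resampling. The metric values below are synthetic normal draws standing in for your simulation output.

```python
# Compare two algorithms on a per-dancer metric via the bootstrap.
import numpy as np

rng = np.random.default_rng(0)
match_quality_a = rng.normal(0.70, 0.15, size=5000)   # algorithm A, per dancer
match_quality_b = rng.normal(0.68, 0.15, size=5000)   # algorithm B, per dancer

n_boot = 10_000
wins = 0
for _ in range(n_boot):
    a = rng.choice(match_quality_a, size=match_quality_a.size, replace=True)
    b = rng.choice(match_quality_b, size=match_quality_b.size, replace=True)
    wins += int(a.mean() > b.mean())

print("mean A, B:", match_quality_a.mean(), match_quality_b.mean())
print("std  A, B:", match_quality_a.std(), match_quality_b.std())
print("P(A beats B on mean match quality):", wins / n_boot)
```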
I wrote the Apriori data mining algorithm; it works well on small test data, but I am having trouble running it on bigger data sets.
I am trying to generate rules of items which were bought together frequently.
My small test data is 5 transactions and 10 products.
My big test data is 11 million transactions and around 2700 products.
Problem: min-support and filtering of non-frequent items.
Let's imagine we are interested in items whose frequency is 60% or more.
frequency = 0.60;
When I compute min-support for the small data set with 60% frequency, the algorithm removes all items which were bought fewer than 3 times (Min-support = numberOfTransactions * frequency;).
But when I try to do the same thing for the large data set, the algorithm filters out almost all itemsets after the first iteration; only a couple of items are able to clear such a bar.
So I started lowering that bar further and further, running the algorithm many times, but not even 5% gave the desired results. I had to lower the frequency all the way to 0.0005 to get at least 50% of the items involved in the first iteration.
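To make the effect concrete, here is a rough sketch of the check I have been doing by hand: count the single-item supports once, then see how many items survive at various thresholds. The randomly generated baskets below are only a stand-in for my data, and the basket sizes are an assumption.

```python
# Scan several min-frequency thresholds against single-item support counts.
import random
from collections import Counter

random.seed(0)
n_transactions, n_products = 100_000, 2700
transactions = [random.sample(range(n_products), k=random.randint(1, 8))
                for _ in range(n_transactions)]

support = Counter(item for t in transactions for item in set(t))

for freq in (0.60, 0.05, 0.01, 0.001, 0.0005):
    min_support = freq * n_transactions
    surviving = sum(1 for count in support.values() if count >= min_support)
    print(f"min frequency {freq:>7}: {surviving} of {n_products} items survive")
```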
What do you think about the current situation: might it be a data problem, since the data is generated artificially (the Microsoft AdventureWorks sample)?
Or is it a problem with my code or my min-support computation?
Maybe you can offer any other solution or better way of doing this?
Thanks!
Maybe that is just what your data is like.
If you have a lot of different items, and few items per transaction, the chances of items co-occurring are low.
Did you verify the result: is it pruning incorrectly, or is the algorithm correct and your parameters bad?
Can you actually name an itemset that Apriori pruned but shouldn't have?
The problem is, yes, choosing the parameters is hard. And no, Apriori cannot use an adaptive threshold, because that wouldn't satisfy the monotonicity requirement: you must use the same threshold for all itemset sizes.
Actually, it all depends on your data. For some real datasets, I had to set the support threshold lower than 0.0002 to get any results. For other datasets, I used 0.9. It really depends on your data.
By the way, there exist variations of Apriori and FP-Growth that can consider multiple minimum supports at the same time, using different thresholds for different items, for example CFP-Growth or MIS-Apriori. There are also algorithms specialized for mining rare itemsets or rare association rules. If you are interested in this topic, you could check my software, which offers some of these algorithms: http://www.philippe-fournier-viger.com/spmf/
I'm building a single-layer perceptron that has a reasonably long feature vector (30-200k), all normalised.
Let's say I have 30k features which are somewhat useful at predicting a class but then add 100 more features which are excellent predictors. The accuracy of the predictions only goes up a negligible amount. However, if I manually increase the weights on the 100 excellent features (say by 5x), the accuracy goes up several percent.
It was my impression that the nature of the training process should give better features a higher weight naturally. However, it seems like the best features are being 'drowned out' by the worse ones.
I tried running it with a larger number of iterations, but that didn't help.
How can I adapt the algorithm to weight features better in a reasonably simple way? Also, a reasonably fast way: if I had fewer features it'd be easy to just run the algorithm leaving one out at a time, but that's not really feasible with 30k.
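For reference, the "manual boost" I describe is roughly the following; the data here is synthetic, and because make_classification is called with shuffle=False the informative features are known to be the first columns, whereas on my real data I already know which features are the strong ones.

```python
# Scale a chosen subset of columns before training a perceptron and compare.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=3000, n_informative=100,
                           shuffle=False, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

good_cols = np.arange(100)   # the columns believed to be strong predictors

for boost in (1.0, 5.0):
    Xb_train, Xb_test = X_train.copy(), X_test.copy()
    Xb_train[:, good_cols] *= boost
    Xb_test[:, good_cols] *= boost
    acc = Perceptron(max_iter=1000).fit(Xb_train, y_train).score(Xb_test, y_test)
    print(f"boost x{boost}: test accuracy {acc:.3f}")
```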
My experience with implementing perceptron-based networks is that it takes a lot of iterations to learn something. I believe I used each sample about 1k times to learn the XOR function (with only 4 inputs). So if you have 200k inputs, it will take a lot of samples and a lot of time to train your network.
I have a few suggestions for you:
Try to reduce the size of the input (aggregate several inputs into a single one, or remove redundant ones).
Try to use each sample more times. As I said, it may take a lot of iterations to learn even a simple function.
Hope this helps.
I am creating a tool for predicting the time and cost of software projects based on past data. The tool uses a neural network to do this and so far, the results are promising, but I think I can do a lot more optimisation just by changing the properties of the network. There don't seem to be any rules or even many best-practices when it comes to these settings so if anyone with experience could help me I would greatly appreciate it.
The input data is made up of a series of integers that could go as high as the user wants, but most will be under 100,000, I would have thought. Some will be as low as 1. They are details like the number of people on a project and the cost of a project, as well as details about database entities and use cases.
There are 10 inputs in total and 2 outputs (the time and cost). I am using Resilient Propagation to train the network. Currently it has: 10 input nodes, 1 hidden layer with 5 nodes and 2 output nodes. I am training to get under a 5% error rate.
The algorithm must run on a webserver so I have put in a measure to stop training when it looks like it isn't going anywhere. This is set to 10,000 training iterations.
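For concreteness, the architecture and stopping rule I describe correspond roughly to the sketch below. PyTorch and the synthetic data are just stand-ins, not my actual implementation, and the loss cutoff is only a crude proxy for a 5% error rate.

```python
# 10 inputs -> 5 hidden -> 2 outputs, resilient propagation, 10,000-iteration cap.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.rand(200, 10)      # stand-in for 10 normalized project inputs
y = torch.rand(200, 2)       # stand-in for the (time, cost) targets

model = nn.Sequential(nn.Linear(10, 5), nn.Tanh(), nn.Linear(5, 2))
optimizer = torch.optim.Rprop(model.parameters())
loss_fn = nn.MSELoss()

for iteration in range(10_000):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    if loss.item() < 0.05 ** 2:   # crude stand-in for "under 5% error"
        print(f"stopped at iteration {iteration}, loss {loss.item():.5f}")
        break
```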
Currently, when I try to train it with some data that is a bit varied, but well within the limits of what we expect users to put into it, it takes a long time to train, hitting the 10,000 iteration limit over and over again.
This is the first time I have used a neural network and I don't really know what to expect. If you could give me some hints on what sort of settings I should be using for the network and for the iteration limit I would greatly appreciate it.
Thank you!
First of all, thanks for providing so much information about your network! Here are a few pointers that should give you a clearer picture.
You need to normalize your inputs. If one node sees a mean value of 100,000 and another just 0.5, the two inputs won't have an equal impact, which is why you'll need to normalize them (see the sketch after this list).
Only 5 hidden neurons for 10 input nodes? I remember reading somewhere that you need at least double the number of inputs; try 20+ hidden neurons. This will give your neural network the capacity to develop a more complex model. However, with too many neurons your network will just memorize the training data set.
Resilient backpropagation is fine. Just remember that there are other training algorithms out there like Levenberg-Marquardt.
How many training sets do you have? Neural networks usually need a large dataset to be good at making useful predictions.
Consider adding a momentum factor to your weight-training algorithm to speed things up if you haven't done so already.
Online training tends to be better for making generalized predictions than batch training. The former updates the weights after each individual training example, while the latter updates them only after the entire data set has been passed through. It's your call.
Is your data discrete or continuous? Neural networks tend to do a better job with 0s and 1s than with continuous functions. If it is the former, I'd recommend using the sigmoid activation function. A combination of tanh and linear activation functions for the hidden and output layers tends to do a good job with continuously varying data.
Do you need another hidden layer? It may help if your network is dealing with complex input-output surface mapping.
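On the normalization point from the first item above, a minimal sketch; the three toy columns stand in for the 10 real inputs, and z-scoring is one reasonable choice among several.

```python
# Z-score the raw inputs so 100,000-scale and 0.5-scale columns are comparable.
import numpy as np
from sklearn.preprocessing import StandardScaler

X_raw = np.array([[12.0, 250_000.0, 0.4],   # toy rows: team size, cost, ...
                  [ 3.0,  40_000.0, 0.9],
                  [ 7.0,  90_000.0, 0.1]])

scaler = StandardScaler().fit(X_raw)        # fit on the training data only
X_scaled = scaler.transform(X_raw)
print(X_scaled.mean(axis=0), X_scaled.std(axis=0))   # ~0 means, unit stds
```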