How many resources does preprocessing generally use? - performance

I'm learning about CNN's (Convolutional Neural Networks) and the wiki page says, "Convolutional networks were inspired by ... and are variations of multilayer perceptrons which are designed to use minimal amounts of preprocessing."
So could one say that a characteristic of the variations of multilayer perceptrons leads to minimal amounts of preprocessing?
If so, how many resources does preprocessing usually take up?
Does using this technique of data mining introduce a new model for minimizing preprocessing?

Related

Number of layers in a convolutional neural network

I am a beginner in convolutional networks. I use DIGITS to implement them and am facing a few doubts.
While trying out a basic image classification problem, how do we decide on the number of layers - how many conv layers, fully connected layers, etc.?
In DIGITS we have three standard networks (from published papers) implemented; for a particular dataset, is there any way to find out which architecture to use, or when should we use our own architecture?
How can the hidden layers be helpful in solving the problem - i.e., what possible decisions can we take by looking at the results in the hidden layers?
Deciding how many layers or neurons are needed, or what the best architecture is for a neural network, has never been clear-cut or fully solvable. The usual procedure is to build with some set of parameters, measure performance on the training and test sets to check that the model is neither biased nor overfitting the data, and settle on the best parameters - or to try some other algorithm, such as a genetic algorithm.
In conclusion: either you start from scratch every time and measure the network's performance, or you apply approaches that don't need to start from scratch and can build incrementally, by applying transfer learning and fine-tuning to an existing network architecture.
The core philosophy that makes deep learning so democratic and amazing is simple: "Don't be a hero."
What it means is that in most cases the best deep learning models take millions of data points and weeks to train, something most of us cannot achieve with our low-performance PCs (yes, a single-GPU system is low performance). So why would you want to waste your time building and training NN architectures from scratch? Simple: you don't.
Transfer learning is your solution! Try to find models that were trained on data similar to your problem and fine-tune their pre-trained weights on your dataset. Doing this, you not only get an already-proven NN architecture but also a major head start in training.
The best place to find pre-trained models is the Caffe Model Zoo, so go have a look at it.
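To make that workflow concrete, here is a hedged sketch of two-stage fine-tuning. It uses Keras and an ImageNet-pretrained VGG16 as a stand-in for a Model Zoo network; the class count, input size, and the commented-out dataset objects (train_ds, val_ds) are assumptions, not part of the original answer.

    # Hedged sketch: fine-tuning a pretrained image classifier.
    # VGG16/Keras stand in for a Caffe Model Zoo network; NUM_CLASSES,
    # the input shape, and the (commented-out) datasets are assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 5  # assumption: number of classes in your own small dataset

    # Stage 1: load a network pretrained on ImageNet without its classifier head,
    # freeze it, and train only a small new head on your data.
    base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       input_shape=(224, 224, 3))
    base.trainable = False

    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)   # your data here

    # Stage 2: unfreeze the base and fine-tune everything with a much lower
    # learning rate so the pretrained weights move only slightly.
    base.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=5)

The key design choice is the two learning rates: a normal one while only the new head is trainable, then a much smaller one once the pretrained weights are unfrozen.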

Relational Data Mining without ILP

I have a huge dataset from a relational database for which I need to create a classification model. Normally in this situation I would use ILP, but due to special circumstances I can't do that.
The other way to tackle this would be to aggregate values over the foreign-key relations. However, I have thousands of important and distinct rows for some nominal attributes (e.g., a patient related to several distinct drug prescriptions), so I can't do that without creating a new attribute for each distinct value of the nominal attribute, and most of the new columns would have NULL values if I did.
Is there any non-ILP algorithm that allows me to mine relational databases without resorting to techniques like pivoting, which would create thousands of new columns?
First, some caveats
I'm not sure why you can't use your preferred programming (sub-)paradigm, Inductive Logic Programming (ILP), or what it is that you're trying to classify. Giving more detail would probably lead to a much better answer, especially as it's a little unusual to approach the selection of classification algorithms on the basis of the programming paradigm with which they're associated. If your real-world example is confidential, then simply make up a fictional-but-analogous example.
Big Data Classification without ILP
Having said that, after ruling out ILP we have 4 other logic programming paradigms in our consideration set:
Abductive
Answer Set
Constraint
Functional
in addition to the dozens of paradigms and sub-paradigms outside of logic programming.
Within Functional Logic Programming, for instance, there exist extensions of ILP called Inductive Functional Logic Programming, which are based on inversion narrowing (i.e., inversion of the narrowing mechanism). This approach overcomes several limitations of ILP and (according to some scholars, at least) is just as suitable in terms of representation, with the benefit of allowing problems to be expressed in a more natural way.
Without knowing more about the specifics of your database and the barriers you face to using ILP, I can't know if this solves your problem or suffers from the same problems. As such, I'll throw out a completely different approach as well.
ILP is contrasted with "classical" or "propositional" approaches to data mining. Those approaches include the meat and bones of machine learning: decision trees, neural networks, regression, bagging, and other statistical methods. Rather than give up on these approaches due to the size of your data, you can join the ranks of the many data scientists, Big Data engineers, and statisticians who utilize High Performance Computing (HPC) to employ these methods on massive data sets. (There are also sampling and other statistical techniques you may choose to use to reduce the computational resources and time required to analyze the Big Data in your relational database.)
HPC includes things like utilizing multiple CPU cores, scaling up your analysis with elastic use of servers with high memory and large numbers of fast CPU cores, using high-performance data warehouse appliances, employing clusters or other forms of parallel computing, etc. I'm not sure what language or statistical suite you're analyzing your data with, but as an example this CRAN Task View lists many HPC resources for the R language which would allow you to scale up a propositional algorithm.
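As a rough illustration (in Python rather than R, purely as an example), here is a hedged sketch of the cheap end of that spectrum: work on a random subsample of the relational extract and parallelize a propositional learner across all CPU cores. The file name, column names, and sizes are assumptions.

    # Hedged sketch: a "propositional" learner scaled up the simple way --
    # train on a subsample and spread tree building across every core.
    # The CSV extract and its 'label' column are illustrative assumptions.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    # Assume the relational data has already been joined/sampled into a flat
    # table (e.g. an "... ORDER BY random() LIMIT ..." extract from the DB).
    df = pd.read_csv("flattened_sample.csv")
    X, y = df.drop(columns=["label"]), df["label"]

    clf = RandomForestClassifier(n_estimators=200, n_jobs=-1)  # n_jobs=-1: all cores
    clf.fit(X, y)
    print(clf.feature_importances_[:10])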

Details of all MPI algorithms?

Is there any document about how MPI functions such as MPI_Allgather, MPI_Alltoall, MPI_Allreduce, etc., are implemented?
I would like to learn about their algorithms and compute their complexity in terms of uni-directional or bi-directional bandwidth and total data transfer size, for a given number of nodes and a fixed data size.
I think the exact implementation of those algorithms varies depending on the communication mechanism: for example, a network will have tree-based reduction algorithms, while shared-memory models will have different ones.
I'm not exactly sure where to find answers to such questions, but I think a search for papers on Google Scholar, or having a look at this paper list at open-mpi.org, should be useful.
http://www.amazon.com/Parallel-Programming-MPI-Peter-Pacheco/dp/1558603395/ref=sr_1_10?s=books&ie=UTF8&qid=1314807638&sr=1-10
The link above is to a great book that explains all the basic MPI algorithms and lets you implement a simple version yourself. However, when comparing the algorithms you have implemented against the MPI library's, you will see that many optimizations have been made depending on the message size and the number of nodes you are running on. Hopefully this helps.
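If you want to measure those collectives empirically rather than derive the complexity on paper, a hedged sketch like the following (using mpi4py, which wraps whatever MPI implementation is installed) times a single Allreduce for a fixed per-rank buffer. The buffer size is an assumption, and the internal algorithm (ring, recursive doubling, tree, ...) is chosen by the library, not by this script.

    # Hedged sketch (mpi4py): time one MPI_Allreduce for a fixed per-rank buffer.
    # Run with e.g.: mpiexec -n 4 python allreduce_bench.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 1 << 20                                   # assumption: ~8 MB of doubles per rank
    sendbuf = np.full(n, rank, dtype=np.float64)
    recvbuf = np.empty(n, dtype=np.float64)

    comm.Barrier()                                # line all ranks up before timing
    t0 = MPI.Wtime()
    comm.Allreduce(sendbuf, recvbuf, op=MPI.SUM)
    t1 = MPI.Wtime()

    if rank == 0:
        print(f"Allreduce of {n * 8 / 1e6:.1f} MB across {size} ranks took {t1 - t0:.4f} s")

Sweeping n and the number of ranks then gives you the empirical bandwidth/size behaviour the question asks about.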

Anomaly Detection Algorithms

I am tasked with detecting anomalies (known or unknown) using machine-learning algorithms from data in various formats - e.g. emails, IMs etc.
What are your favorite and most effective anomaly detection algorithms?
What are their limitations and sweet-spots?
How would you recommend those limitations be addressed?
All suggestions very much appreciated.
Statistical filters like Bayesian filters, or the somewhat bastardised versions employed by some spam filters, are easy to implement. Plus, there is a lot of online documentation about them.
The big downside is that it cannot really detect unknown things. You train it with a large sample of known data so that it can categorize new incoming data. But you can turn the traditional spam filter upside down: train it to recognize legitimate data instead of illegitimate data so that anything it doesn't recognize is an anomaly.
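As a hedged sketch of that inverted-filter idea: fit a very simple token model on legitimate messages only and flag anything the model finds too unlikely. The toy corpus, whitespace tokenizer, and threshold below are illustrative assumptions.

    # Hedged sketch: the "spam filter turned upside down" idea.
    # Train a Laplace-smoothed unigram model on legitimate messages only;
    # anything with a low enough average token probability is an anomaly.
    import math
    from collections import Counter

    normal_messages = [                     # assumption: your known-good traffic
        "meeting moved to 3pm",
        "please review the attached report",
        "lunch on friday?",
    ]

    counts = Counter(w for m in normal_messages for w in m.lower().split())
    total, vocab = sum(counts.values()), len(counts)

    def avg_log_prob(message: str) -> float:
        """Average per-token log probability under the 'legitimate' model."""
        tokens = message.lower().split()
        return sum(math.log((counts[t] + 1) / (total + vocab)) for t in tokens) / len(tokens)

    THRESHOLD = -3.0                        # assumption: tune on held-out legitimate data
    msg = "URGENT wire transfer needed immediately"
    if avg_log_prob(msg) < THRESHOLD:
        print("anomaly: message does not look like normal traffic")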
There are various types of anomaly detection algorithms, depending on the type of data and the problem you are trying to solve:
Anomalies in time series signals:
A time series signal is anything you can draw as a line graph over time (e.g., CPU utilization, temperature, emails per minute, rate of visitors on a webpage, etc.). Example algorithms are Holt-Winters, ARIMA models, Markov models, and more. I gave a talk on this subject a few months ago - it might give you more ideas about algorithms and their limitations.
The video is at: https://www.youtube.com/watch?v=SrOM2z6h_RQ
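For a flavour of how the time-series approach works in practice, here is a hedged sketch (not the talk's exact method): fit Holt-Winters to a synthetic seasonal series and flag points whose residual exceeds three standard deviations. The series, the seasonal period, and the 3-sigma rule are illustrative assumptions.

    # Hedged sketch: Holt-Winters forecasting + residual thresholding on a
    # synthetic seasonal series with one injected spike.
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(0)
    t = np.arange(500)
    series = 100 + 10 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 2, 500)
    series[300] += 40                                    # the anomaly to be found

    fit = ExponentialSmoothing(series, trend="add",
                               seasonal="add", seasonal_periods=50).fit()
    residuals = series - fit.fittedvalues
    anomalies = np.where(np.abs(residuals) > 3 * residuals.std())[0]
    print("anomalous indices:", anomalies)               # should include 300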
Anomalies in tabular data: These are cases where you have a feature vector that describes something (e.g., transforming an email into a feature vector that describes it: number of recipients, number of words, number of capitalized words, counts of keywords, etc.). Given a large set of such feature vectors, you want to detect the ones that are anomalous compared to the rest (sometimes called "outlier detection"). Almost any clustering algorithm is suitable in these cases, but which one is most suitable depends on the type of features and their behavior: real-valued, ordinal, nominal, or anything else. The type of features determines whether certain distance functions are suitable (the basic requirement for most clustering algorithms), and some algorithms handle certain types of features better than others.
The simplest algorithm to try is k-means clustering, where anomalous samples show up either as very small clusters or as vectors that are far from all cluster centers. A one-class SVM can also detect outliers, and has the flexibility of choosing different kernels (and, effectively, different distance functions). Another popular algorithm is DBSCAN.
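A hedged sketch of the k-means variant of this: use the distance to the nearest centroid as an outlier score. The synthetic feature matrix, k, and the percentile cutoff are assumptions; scikit-learn's OneClassSVM or DBSCAN would be drop-in alternatives.

    # Hedged sketch: distance-to-nearest-centroid as an outlier score.
    # Two dense clusters plus 3 far-away rows; the far rows should be flagged.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (200, 5)),
                   rng.normal(8, 1, (200, 5)),
                   rng.normal(30, 1, (3, 5))])           # the "anomalies"

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    dist = km.transform(X).min(axis=1)                   # distance to nearest center
    cutoff = np.percentile(dist, 99)                     # assumption: top 1% are outliers
    print("flagged rows:", np.where(dist > cutoff)[0])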
When anomalies are known, the problem becomes a supervised learning problem, so you can use classification algorithms and train them on the known anomaly examples. However, as mentioned, this would only detect those known anomalies, and if the number of training samples for anomalies is very small, the trained classifiers may not be accurate. Also, because the number of anomalies is typically very small compared to non-anomalies, when training the classifiers you might want to use techniques like boosting/bagging with oversampling of the anomaly class(es), while optimizing for a very small false-positive rate. There are various techniques for this in the literature; one idea that I found works very well in many cases is what Viola-Jones used for face detection - a cascade of classifiers. See: http://www.vision.caltech.edu/html-files/EE148-2005-Spring/pprs/viola04ijcv.pdf
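A hedged sketch of the supervised route on synthetic data: oversample the rare labelled anomalies so the classes are roughly balanced, then pick a strict score cutoff to keep the false-positive rate tiny. The data, classifier choice, and cutoff are illustrative assumptions, not the cascade approach from the paper.

    # Hedged sketch: oversampling a ~1% anomaly class, then thresholding
    # the classifier's scores strictly so false positives stay rare.
    import numpy as np
    from sklearn.utils import resample
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(5000, 10))
    y = np.zeros(5000, dtype=int)
    y[:50] = 1                                   # ~1% labelled anomalies
    X[y == 1] += 3                               # give them some signal

    # Over-sample the anomaly rows so the classes are roughly balanced.
    X_anom, y_anom = resample(X[y == 1], y[y == 1],
                              n_samples=int((y == 0).sum()),
                              replace=True, random_state=0)
    X_bal = np.vstack([X[y == 0], X_anom])
    y_bal = np.concatenate([y[y == 0], y_anom])

    clf = LogisticRegression(max_iter=1000).fit(X_bal, y_bal)
    scores = clf.predict_proba(X)[:, 1]
    cutoff = np.quantile(scores, 0.995)          # assumption: flag only the top 0.5%
    print("flagged:", int((scores >= cutoff).sum()), "rows")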
(DISCLAIMER: I am the chief data scientist for Anodot, a commercial company doing real time anomaly detection for time series data).

Artificial Neural Network Question

Generally speaking what do you get out of extending an artificial neural net by adding more nodes to a hidden layer or more hidden layers?
Does it allow for more precision in the mapping, or does it allow for more subtlety in the relationships it can identify, or something else?
There's a very well-known result in machine learning stating that a single hidden layer is enough to approximate any smooth, bounded function (the paper was called "Multilayer feedforward networks are universal approximators" and it's now almost 20 years old). There are several things to note, however.
The single hidden layer may need to be arbitrarily wide.
This says nothing about the ease with which an approximation may be found; in general, large networks are hard to train properly and fall victim to overfitting quite frequently (the exception being so-called "convolutional neural networks", which are really only meant for vision problems).
This also says nothing about the efficiency of the representation. Some functions require exponential numbers of hidden units if done with one layer but scale much more nicely with more layers (for more discussion of this read Scaling Learning Algorithms Towards AI)
The problem with deep neural networks is that they're even harder to train. You end up with very very small gradients being backpropagated to the earlier hidden layers and the learning not really going anywhere, especially if weights are initialized to be small (if you initialize them to be of larger magnitude you frequently get stuck in bad local minima). There are some techniques for "pre-training" like the ones discussed in this Google tech talk by Geoff Hinton which attempt to get around this.
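To make the first point tangible, here is a hedged sketch (my own toy example, not from the answer): fit sin(3x) with single-hidden-layer networks of increasing width and watch the fit improve. The widths, the target function, and the scikit-learn model are assumptions.

    # Hedged sketch of the universal-approximation point: one hidden layer,
    # increasing width, fitting a smooth bounded function.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(2000, 1))
    y = np.sin(3 * X).ravel()

    for width in (2, 8, 64):
        net = MLPRegressor(hidden_layer_sizes=(width,), activation="tanh",
                           max_iter=5000, random_state=0).fit(X, y)
        mse = np.mean((net.predict(X) - y) ** 2)
        print(f"hidden units = {width:3d}   training MSE = {mse:.4f}")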
This is a very interesting question, but it's not so easy to answer. It depends on the problem you are trying to solve and on the type of neural network you are trying to use. There are several neural network types.
In general, it's not so clear that more nodes equal more precision. Research shows that for most problems you need only one hidden layer. The number of nodes should be the minimal number required to solve the problem; if you don't have enough of them, you will not reach a solution.
On the other hand, once you have reached the number of nodes sufficient to solve the problem, you can add more and more of them and you will not see any further progress in the estimation of the result.
That's why there are so many types of neural networks: they try to solve different types of problems. So you have NNs for static problems, NNs for time-related problems, and so on. The number of nodes is not as important as their design.
When you have a hidden layer, you are creating combined features of the input. So, is the problem better tackled by more features of the existing input, or by higher-order features that come from combining existing features? That is the trade-off for a standard feed-forward network.
You have a theoretical reassurance that any function can be represented by a neural network with two hidden layers and non-linear activation.
Also, consider using additional resources for boosting, instead of adding more nodes, if you're not certain of the appropriate topology.
Very rough rules of thumb:
Generally, use more elements per layer for bigger input vectors.
More layers may let you model more non-linear systems.
If the kind of network you are using has delays in propagation, more layers may allow modelling of time series. Take care to have time jitter in the delays or it won't work very well. If this is just gobbledegook to you, ignore it.
More layers let you insert recurrent features. This can be very useful for discrimination tasks. Your ANN implementation may not permit this.
HTH
The number of units per hidden layer accounts for the ANN's potential to describe an arbitrarily complex function. Some (complicated) functions may require many hidden nodes, or possibly more than one hidden layer.
When a function can be roughly approximated by a certain number of hidden units, any extra nodes will provide more accuracy - but this is only true if there are enough training samples to justify the addition; otherwise what happens is "overconvergence". Overconvergence means that your ANN has lost its generalization ability because it has over-emphasized the particular samples.
In general, it is best to use as few hidden units as possible, provided the resulting network gives good results. The additional training patterns required to justify more hidden nodes cannot be found easily in most cases, and accuracy is not the NN's strong point.
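As a hedged illustration of that "overconvergence" warning (a toy example of my own, with assumed sample counts and layer width): train a wide single hidden layer on very few noisy samples and compare training error against held-out error.

    # Hedged sketch: a wide hidden layer on a tiny noisy dataset typically
    # fits the training set far better than held-out data (overfitting).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(40, 1))                  # deliberately tiny dataset
    y = np.sin(3 * X).ravel() + rng.normal(0, 0.1, 40)    # noisy target
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    net = MLPRegressor(hidden_layer_sizes=(200,), activation="tanh",
                       max_iter=20000, random_state=0).fit(X_tr, y_tr)
    print("train MSE:", np.mean((net.predict(X_tr) - y_tr) ** 2))
    print("test  MSE:", np.mean((net.predict(X_te) - y_te) ** 2))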
