How to combine multiple algorithms in WEKA?

I've visited this tutorial and got the idea of combining multiple algorithms using Vote, but I'm not clear about the actual mechanism of how it works. I want to understand whether the first algorithm is applied to the data set first and then the second algorithm is applied to the classifier produced by the first.
Suppose I choose Naive Bayes and Bayes Net. What happens then? Is Naive Bayes applied to the given data set first, giving a classifier C1, and then Bayes Net applied to C1, finally giving a classifier C*?
Or is it that at each step both algorithms are working, and the higher-voted result proceeds further?

Each ensemble member (algorithm) is trained independently on its own training data. Once all of them have been trained, their outputs are combined using a specific voting scheme.
Generally, when test cases are presented for estimation, each of the algorithms generates its own estimate, and the voting scheme determines how the classifiers' weights are applied and selects the best output as the ensemble estimate.
That's not to say that it always works this way. One proposed model I used in the past selected a subset of algorithms depending on the locality of the test case in the problem space and weighted each member's vote differently. Each voting scheme works in a different way, and Weka ships a few common models that can be tried out.
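To make the mechanism concrete, here is a minimal sketch using scikit-learn's VotingClassifier, which mirrors Weka's Vote meta-classifier: the members are trained independently and only their predictions are combined. This is an illustration of the idea, not Weka's Java API; GaussianNB and a decision tree stand in for NaiveBayes and BayesNet.

```python
# Conceptual sketch of a Vote-style ensemble: members are trained
# independently on the same data, then their predictions are combined
# (here by averaging class probabilities).
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

ensemble = VotingClassifier(
    estimators=[
        ("nb", GaussianNB()),                # stand-in for Weka's NaiveBayes
        ("tree", DecisionTreeClassifier()),  # stand-in for a second member
    ],
    voting="soft",  # average predicted probabilities; "hard" = majority vote
)
ensemble.fit(X, y)              # each member is fitted independently on (X, y)
print(ensemble.predict(X[:5]))  # combined (voted) predictions
```

Note that no classifier is ever "applied to" another classifier; the combination happens purely at prediction time.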

Related

How is a genetic algorithm different from random selection and evaluation of the fittest?

I have been learning about genetic algorithms for two months. I know about the process of initial population creation, selection, crossover, mutation, etc., but I cannot understand how we are able to get better results in each generation and how it's different from a random search for the best solution. I'll use an example to explain my problem.
Let's take the travelling salesman problem. Say we have several cities X1, X2, ..., X18 and we have to find the shortest path through them. When we do the crossover after selecting the fittest individuals, how do we know that the crossover will give us a better chromosome? The same applies to mutation.
It feels like we could just take one arrangement of cities, calculate its travel distance, and store the arrangement and distance. Then choose another arrangement; if it is better than the previous one, save it, otherwise discard it. Doing this would also give us some solution.
I just want to know where the point is that makes the difference between random selection and a genetic algorithm. In a genetic algorithm, is there any criterion preventing us from re-selecting an arrangement of cities that we have already evaluated?
I am not sure if my question is clear, but I am happy to explain more. Please let me know if it is not clear.
A random algorithm starts with a completely blank sheet every time: a new random solution is generated each iteration, with no memory of what happened during previous iterations.
A genetic algorithm has a history, so it does not start with a blank sheet, except at the very beginning. Each generation, the best of the solution population are selected, mutated in some way, and advanced to the next generation; the least fit members of the population are dropped.
Genetic algorithms build on previous success, so they are able to advance faster than random algorithms. A classic example of a very simple genetic algorithm is the Weasel program. It finds its target far more quickly than random chance because each generation starts from a partial solution, and over time those partial solutions get closer to the required one.
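For concreteness, here is a minimal Python sketch of the Weasel program (the mutation rate and offspring count are arbitrary choices):

```python
# Dawkins' Weasel program: each generation keeps the best mutant of the
# previous best, so partial matches are never thrown away.
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    # Number of characters already matching the target.
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    # Each character is replaced with probability `rate`.
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in s)

parent = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
generation = 0
while parent != TARGET:
    offspring = [mutate(parent) for _ in range(100)]
    parent = max(offspring + [parent], key=fitness)  # memory of past success
    generation += 1
print(generation, parent)
```

A purely random search over strings of this length would essentially never hit the target; the retained partial solutions are what make the difference.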
I think there are two things you are asking about: a mathematical proof that GAs work, and an empirical one that would allay your concerns.
Although I am not aware of a fully general proof, I am quite sure at least a good sketch of one was given by John Holland in his book Adaptation in Natural and Artificial Systems, for optimization problems using binary coding. There is something called Holland's schema theorem. But you know, these are heuristics, so technically it does not have to hold. It basically says that short schemata in the genotype that raise the average fitness appear exponentially more often in successive generations; crossover then combines them. I think the proof was given only for binary coding, and it has received some criticism as well.
Regarding your concerns: of course you have no guarantee that a crossover will produce a better result, just as two intelligent or beautiful parents might have ugly, stupid children. The premise of a GA is that this is less likely to happen. (As I understand it) the proof for binary coding hinges on the theorem that good partial patterns start emerging, and given that the genotype is long enough, such patterns residing in different specimens have a chance to be combined into one specimen, improving its fitness overall.
I think this is fairly easy to understand in terms of TSP: crossover helps to accumulate good sub-paths into one specimen. Of course it all depends on the choice of the crossover method.
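To illustrate, here is a sketch of a simple order-based crossover variant (close to the classic OX operator) for permutation-encoded tours; a sub-path from one parent is copied intact, which is exactly how good sub-paths accumulate:

```python
import random

def order_crossover(p1, p2):
    """Copy a random sub-path from p1 intact, then fill the remaining
    cities in the order they appear in p2. The child is always a valid
    permutation, and good sub-paths from both parents can survive."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]                   # inherit a sub-path from p1
    remaining = [c for c in p2 if c not in child]  # rest of the cities, in p2's order
    for k in range(n):
        if child[k] is None:
            child[k] = remaining.pop(0)
    return child

tour1 = list(range(10))
tour2 = random.sample(range(10), 10)
print(order_crossover(tour1, tour2))
```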
Also, a GA's path towards the solution is not purely random. It moves in a certain direction, with stochastic mechanisms to escape traps (you can lose the best solutions if you allow it). It works because it keeps moving towards the current best solutions, but you have a population of specimens that, in a sense, share knowledge. They are all similar, but provided you preserve diversity, new and better partial patterns can be introduced to the whole population and incorporated into the best solutions. This is why diversity in the population is regarded as very important.
As a final note, please remember that GAs are a very broad topic and you can modify the basic scheme in nearly every way you want. You can introduce elitism, tabu lists, niches, etc. There is no one-and-only approach/implementation.

Is there a way to discover a formula with the help of a genetic algorithm?

I am trying to find how well two images (say "image1" and "image2") match each other.
Several parameters are calculated for each possible position of "image2" relative to "image1", and I have an empirical formula that gives a "score" to each position depending on those parameters.
I tried to match image pairs with the help of neural networks, but failed: the empirical formula works much better. This got me thinking about improving the formula with the help of a genetic algorithm.
So, the question is: I have a bunch of image pairs, and for each pair I know the "right" match position. Can a genetic algorithm be used for such a task? Any examples?
Suggestions and links are appreciated.
Thanks.
Basically, yes! The parameters of your score function could be the parameters that your GA is going to evolve. You may want to use a real-coded genetic algorithm or an evolution strategy (e.g. CMA-ES) if your parameters are in the real domain.
There exist several possible choices for crossover:
Average / Intermediate
Blend-Alpha (BLX-a)
Blend-Alpha-Beta (BLX-a-b)
Discrete
Heuristic
Local
Random Convex
Simulated Binary (SBX)
Single Point
And also some mutation operators:
Normally distributed, N(0, sigma), e.g. with adaptation to reduce sigma over time
Uniformly distributed (at some positions)
Polynomial mutation
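To make a couple of these concrete, here is a minimal Python sketch of blend-alpha crossover and normally distributed mutation for real-valued parameter vectors (the alpha, sigma and rate values are arbitrary choices):

```python
import random

def blend_alpha_crossover(p1, p2, alpha=0.5):
    """BLX-alpha: each child gene is drawn uniformly from an interval that
    extends `alpha` times the parent range beyond both parent genes."""
    child = []
    for a, b in zip(p1, p2):
        lo, hi = min(a, b), max(a, b)
        d = hi - lo
        child.append(random.uniform(lo - alpha * d, hi + alpha * d))
    return child

def gaussian_mutation(genes, sigma=0.1, rate=0.2):
    """N(0, sigma) mutation: each gene is perturbed with probability `rate`.
    In practice sigma is often adapted (reduced) over the generations."""
    return [g + random.gauss(0, sigma) if random.random() < rate else g
            for g in genes]

parent1, parent2 = [1.0, 2.0, 3.0], [1.5, 1.0, 4.0]
child = gaussian_mutation(blend_alpha_crossover(parent1, parent2))
print(child)
```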
Another metaheuristic suitable for real coded problems is particle swarm optimization (PSO).
With genetic programming you would evolve the formula itself (e.g. represented as a tree). I'm not so sure why you mention it; maybe I'm still misunderstanding something. Clarify your problem, just in case.
EDIT:
Okay, it seems it's not the weights that you want to optimize, but the whole formula. Genetic algorithms can still be used with this representation as well. I want to mention HeuristicLab due to its good support for genetic programming.
I assume you have a more complex problem, since you want to optimize the scoring function while still having another algorithm that optimizes the placement according to that scoring function. You could try an easy approach: generate a dataset with several predefined positions and the features calculated accordingly. Then you could formulate a classification problem and find a model that allows you to identify the positionings that are optimal.
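A minimal sketch of that last suggestion, assuming each candidate positioning can be reduced to a fixed-length feature vector and labelled as correct or not. The feature values below are made up for illustration; computing them from your image pairs is your own parameter calculation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical dataset: one row of precomputed matching features per
# candidate position, labelled 1 if it is the known-correct position.
X = np.array([[0.90, 0.10, 3.2],   # features for position A of some pair
              [0.20, 0.80, 1.1],   # features for position B
              [0.85, 0.15, 2.9],
              [0.10, 0.90, 0.7]])
y = np.array([1, 0, 1, 0])         # 1 = correct match position, 0 = not

model = RandomForestClassifier(n_estimators=100).fit(X, y)

# Score a new candidate position by its predicted probability of being correct.
print(model.predict_proba([[0.8, 0.2, 3.0]])[0, 1])
```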

How to mix a genetic algorithm with some heuristic

I'm working on a university scheduling problem and using a simple genetic algorithm for it. It actually works great and optimizes the objective function value from 0% to roughly 90% within an hour, but then the process slows down dramatically and it takes days to reach the best solution. I've seen many papers saying it is reasonable to mix other algorithms with the genetic one. Could you please give me some advice on which algorithms can be mixed with a genetic one, and how such an algorithm can be applied to speed up the solving process? The main question is: how can any heuristic be applied to such a complex-structured problem? I have no idea how, for instance, greedy heuristics could be applied here.
Thanks to everyone in advance! Really appreciate your help!
Problem description:
I have:
array filled by ScheduleSlot objects
array filled by Lesson objects
I do:
Standard two-point crossover
Mutation (move a random lesson to a random position)
Rough selection (keep only the n best individuals for the next population)
Additional information for @Dougal and @izomorphius:
I'm trying to construct a university schedule that has no breaks between lessons, no overlaps, and no geographically distributed lessons for groups and professors.
The fitness function is really simple: fitness = -1000*numberOfOverlaps - 1000*numberOfDistributedLessons - 20*numberOfBreaks (or something like that; we can simply change the coefficients in front of the variables).
At the very beginning I generate my individuals by just placing lessons in a random room, time and day.
Mutation and crossover, as described above, are really trivial:
Crossover - take two parent schedules, randomly choose the point and range of the crossover, and just exchange the corresponding parts of the parent schedules, generating two child schedules.
Mutation - take a child schedule and move n random lessons to random positions.
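For reference, here is a runnable sketch of a simplified version of that fitness function. The lesson representation is my assumption, the distributed-lessons term is omitted, and "overlaps" here are room clashes only:

```python
from collections import defaultdict

# A lesson is a (group, room, day, slot) tuple. Weights follow the quoted
# fitness: hard constraints (overlaps) dominate soft ones (breaks).
def fitness(schedule):
    by_room = defaultdict(int)
    by_group = defaultdict(list)
    for group, room, day, slot in schedule:
        by_room[(room, day, slot)] += 1
        by_group[(group, day)].append(slot)
    # Two lessons in the same room at the same time count as an overlap.
    overlaps = sum(n - 1 for n in by_room.values() if n > 1)
    # A "break" is an empty slot between a group's first and last lesson of a day.
    breaks = sum(max(slots) - min(slots) + 1 - len(set(slots))
                 for slots in by_group.values())
    return -1000 * overlaps - 20 * breaks

schedule = [("g1", "r1", 0, 1), ("g1", "r1", 0, 3), ("g2", "r1", 0, 1)]
print(fitness(schedule))  # -1000 (room clash) - 20 (one-slot break) = -1020
```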
My initial observation: you have chosen the coefficients in front of numberOfOverlaps, numberOfDistributedLessons and numberOfBreaks somewhat arbitrarily. My experience shows that such choices are usually not the best, and you should let the computer choose them instead. I propose writing a second algorithm to choose them - it could be a neural network, a second genetic algorithm, or hill climbing. The idea is to compute how good a result you get after a certain amount of time and to optimize the choice of these three values.
Another idea: after getting a result, you may try to brute-force optimize it. What I mean is the following: for the original problem, the "silly" solution would be a backtracking search that checks all the possibilities, usually done with DFS. This would be very slow, but you may try depth-first search with iterative deepening, or simply a depth-restricted DFS.
For many problems, I find that a Lamarckian-style GA works well, combining a local search into the GA.
For your case, I would try to introduce a partial systematic search as the local search. There are two obvious ways to do this, and you should probably try both.
1. Alternate GA iterations with local-search iterations. For your local search you could, for example, brute-force all the lessons assigned on a single day while leaving everything else unchanged. Another possibility is to move a randomly selected lesson to every free slot to find the best choice for it. The key is to minimise the cost of the brute-force search while still having a chance of finding local improvements.
2. Add a new operator alongside mutation and crossover that performs your local search. (You might find that the mutation operator is less useful in the hybrid scheme, so simply replacing it could be viable.)
In essence, you will be combining the global exploration of the GA with an efficient local search. Several GA frameworks include features to assist with this combination. For example, GAUL implements the alternating scheme 1 above, applied to either the full population or just the new offspring at each iteration.
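Here is a sketch of the "move a randomly selected lesson to all free slots" local search, using the same simplified lesson representation as the fitness sketch above (the slot and day counts are arbitrary):

```python
import random

def local_search_step(schedule, fitness, n_days=5, n_slots=8):
    """Try one randomly chosen lesson in every (day, slot) position and
    keep the best resulting schedule. This is the cheap exhaustive local
    improvement of scheme 1; wrapped as an operator it gives scheme 2."""
    i = random.randrange(len(schedule))
    group, room, _, _ = schedule[i]
    best, best_fit = schedule, fitness(schedule)
    for day in range(n_days):
        for slot in range(n_slots):
            candidate = list(schedule)
            candidate[i] = (group, room, day, slot)
            f = fitness(candidate)
            if f > best_fit:
                best, best_fit = candidate, f
    return best

# Demo with a toy fitness that simply rewards early days and slots:
toy = [("g1", "r1", 4, 7), ("g2", "r2", 3, 5)]
print(local_search_step(toy, lambda s: -sum(d + t for _, _, d, t in s)))
```

In a Lamarckian scheme the improved schedule replaces the original individual in the population, so the gain is inherited by its offspring.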

Group detection in data sets

Assume a group of data points, such as the ones plotted here (this graph isn't specific to my problem; it's just a suitable example):
Inspecting the scatter graph visually, it's fairly obvious that the data points form two 'groups', with some random points that don't obviously belong to either.
I'm looking for an algorithm, that would allow me to:
start with a data set of two or more dimensions
detect such groups in the dataset without prior knowledge of how many (or whether any) might be there
once the groups have been detected, 'ask' the model of groups whether a new sample point seems to fit any of them
There are many choices, but if you are interested in the probability that a new data point belongs to a particular mixture component, I would use a probabilistic approach such as Gaussian mixture modelling, estimated either by maximum likelihood or by Bayesian methods.
Maximum likelihood estimation of mixture models is implemented in Matlab.
Your requirement that the number of components be unknown makes the model more complex. The dominant probabilistic approach is to place a Dirichlet process prior on the mixture distribution and estimate it by some Bayesian method; for instance, see this paper on infinite Gaussian mixture models. The DP mixture model gives you inference over the number of components and the component each element belongs to, which is exactly what you want. Alternatively, you could perform model selection on the number of components, but this is generally less elegant.
There are many implementations of DP mixture models, but they may not be as convenient. For instance, here's a Matlab implementation.
Your graph suggests you are an R user. In that case, if you are looking for prepackaged solutions, the answer to your question lies in this Task View for cluster analysis.
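For a quick start outside R, here is a minimal scikit-learn sketch. BayesianGaussianMixture is a truncated approximation of the DP mixture idea, so the effective number of components is inferred rather than fixed; the synthetic data below mimics the two-groups-plus-noise setting:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Two synthetic 2-D groups plus a few stray points.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, (50, 2)),
               rng.normal([4, 4], 0.5, (50, 2)),
               rng.uniform(-2, 6, (5, 2))])

# n_components is only an upper bound; with a Dirichlet-process prior the
# fit drives the weights of unneeded components toward zero.
model = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0,
).fit(X)

# "Ask" the model whether a new point fits any detected group:
new_point = [[0.2, -0.1]]
print(model.predict(new_point))        # most likely component
print(model.predict_proba(new_point))  # membership probabilities
```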
I think you are looking for something along the lines of a k-means clustering algorithm.
You should be able to find adequate implementations in most general purpose languages.
You need one of the clustering algorithms. All of them can be divided into two groups:
you specify the number of groups (clusters) - 2 clusters in your example
the algorithm tries to guess the correct number of clusters by itself
If you want an algorithm of the first type, then k-means is what you really need.
If you want an algorithm of the second type, you probably need one of the hierarchical clustering algorithms. I haven't ever implemented any of them, but I can see an easy way to extend k-means so that specifying the number of clusters becomes unnecessary.
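A minimal sketch of both types with scikit-learn (the synthetic data and the distance threshold are arbitrary choices):

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.4, (30, 2)),
               rng.normal([3, 3], 0.4, (30, 2))])

# Type 1: you specify the number of clusters up front.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:5])

# Type 2: hierarchical clustering cut at a distance threshold instead of a
# fixed cluster count; the number of clusters falls out of the threshold.
agg = AgglomerativeClustering(n_clusters=None, distance_threshold=2.0).fit(X)
print(agg.n_clusters_)
```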

Determining the best k for a k nearest neighbour

I need to do some cluster analysis on a set of 2-dimensional data (I may add extra dimensions along the way).
The analysis itself will form part of the data being fed into a visualisation, rather than the input to another process (e.g. Radial Basis Function Networks).
To this end, I'd like to find a set of clusters which primarily "looks right", rather than one that elucidates some hidden pattern.
My intuition is that k-means would be a good starting place for this, but that finding the right number of clusters to run the algorithm with would be problematic.
The problem I'm coming to is this:
How to determine the 'best' value for k such that the clusters formed are stable and visually verifiable?
Questions:
Assuming that this isn't NP-complete, what is the time complexity of finding a good k (probably measured in the number of times the k-means algorithm is run)?
Is k-means a good starting point for this type of problem? If so, what other approaches would you recommend? A specific example, backed by an anecdote/experience, would be maxi-bon.
What shortcuts/approximations would you recommend to increase performance?
For problems with an unknown number of clusters, agglomerative hierarchical clustering is often a better route than k-means.
Agglomerative clustering produces a tree structure, where the closer you are to the trunk, the fewer clusters there are, so it's easy to scan through all numbers of clusters. The algorithm starts by assigning each point to its own cluster and then repeatedly groups the two closest centroids. Keeping track of the grouping sequence allows an instant snapshot for any number of possible clusters, which is why this technique is often preferable to k-means when you don't know how many groups you'll want.
There are other hierarchical clustering methods (see the paper suggested in Imran's comments). The primary advantage of an agglomerative approach is that there are many implementations out there, ready-made for your use.
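A minimal sketch with SciPy showing that property: the merge tree is built once, and cluster assignments for any number of clusters are then read off without refitting.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.3, (20, 2)),
               rng.normal([3, 3], 0.3, (20, 2))])

# Build the full merge tree once...
Z = linkage(X, method="ward")

# ...then take an instant snapshot at any number of clusters.
for k in (2, 3, 4):
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, np.bincount(labels)[1:])  # cluster sizes for this k
```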
In order to use k-means, you should know how many clusters there are. You can't try a naive meta-optimisation, since the more clusters you add (up to one cluster per data point), the more you will overfit. You may look at cluster validation methods and optimise the k hyperparameter with them, but in my experience it rarely works well, and it's very costly too.
If I were you, I would run a PCA, possibly in a polynomial feature space (mind your available time), depending on what you know about your input, and cluster along the most representative components.
More info on your data set would be very helpful for a more precise answer.
Here's my approximate solution:
1. Start with k = 2.
2. For a number of tries: run the k-means algorithm to find k clusters, and find the mean square distance from the origin to the cluster centroids.
3. Repeat step 2 to find the standard deviation of those distances; this is a proxy for the stability of the clusters.
4. If the stability of the clusters for k is less than the stability for k - 1, then return k - 1.
5. Otherwise, increment k by 1 and go back to step 2.
The thesis behind this algorithm is that the number of distinct sets of k clusters is small for "good" values of k.
If we can find a local optimum for this stability, or an optimal delta for the stability, then we can find a good set of clusters which cannot be improved by adding more clusters.
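Here is a direct Python transcription of this heuristic (a sketch only; the centroid statistic and stopping rule are exactly the informal ones described above, with no guarantee they behave well on every dataset):

```python
import numpy as np
from sklearn.cluster import KMeans

def mean_sq_centroid_dist(X, k, seed):
    # Mean square distance from the origin to the k cluster centroids.
    km = KMeans(n_clusters=k, n_init=1, random_state=seed).fit(X)
    return np.mean(np.sum(km.cluster_centers_ ** 2, axis=1))

def choose_k(X, tries=10, max_k=10):
    """For each k, repeat k-means and use the spread (std) of the centroid
    statistic across tries as an instability proxy; stop when instability
    increases, returning the previous k."""
    prev_instability = None
    for k in range(2, max_k + 1):
        d = [mean_sq_centroid_dist(X, k, seed) for seed in range(tries)]
        instability = np.std(d)
        if prev_instability is not None and instability > prev_instability:
            return k - 1
        prev_instability = instability
    return max_k

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, (30, 2)) for c in ([0, 0], [4, 0], [2, 3])])
print(choose_k(X))
```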
In a previous answer, I explained how Self-Organizing Maps (SOM) can be used in visual clustering.
Otherwise, there exists a variation of the k-means algorithm called X-Means, which is able to find the number of clusters by optimizing the Bayesian Information Criterion (BIC) and also addresses scalability by using KD-trees.
Weka includes an implementation of X-Means along with many other clustering algorithms, all in an easy-to-use GUI tool.
Finally, you might refer to this page, which discusses the Elbow Method among other techniques for determining the number of clusters in a dataset.
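As a quick illustration of the Elbow Method mentioned there, using scikit-learn rather than Weka (synthetic data with three true clusters; you look for the k after which the drop in within-cluster variance levels off):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, (40, 2)) for c in ([0, 0], [4, 0], [2, 3])])

# Within-cluster sum of squares (inertia) for each k; the "elbow" is the k
# after which adding clusters stops paying off.
for k in range(1, 8):
    inertia = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
    print(k, round(inertia, 1))
```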
You might look at papers on cluster validation. Here's one that is cited in papers that involve microarray analysis, which involves clustering genes with related expression levels.
One such technique is the Silhouette measure, which evaluates how close a labelled point is to its centroid. The general idea is that if a point is assigned to one centroid but is still close to others, perhaps it was assigned to the wrong centroid. By counting such events across training sets and across various k-means clusterings, one looks for the k at which the labelled points overall fall into the "best", or minimally ambiguous, arrangement.
It should be said that clustering is more of a data visualization and exploration technique, and it can be difficult to establish with certainty that one clustering explains the data better than all others. It's best to combine your clusterings with any other relevant information: is there something functional or otherwise informative about your data, such that you know some clusterings are impossible? This can reduce your solution space considerably.
From your wikipedia link:
Regarding computational complexity, the k-means clustering problem is:
NP-hard in general Euclidean space d, even for 2 clusters
NP-hard for a general number of clusters k, even in the plane
If k and d are fixed, the problem can be exactly solved in time O(n^(dk+1) log n), where n is the number of entities to be clustered
Thus, a variety of heuristic algorithms are generally used.
That said, finding a good value of k is usually a heuristic process (i.e. you try a few and select the best).
I think k-means is a good starting point: it is simple and easy to implement (or copy). Only look further if you have serious performance problems.
If the set of points you want to cluster is exceptionally large, a first-order optimisation would be to randomly select a small subset and use that subset to find your k means.
Choosing the best k can be seen as a model selection problem. One possible approach is Minimum Description Length, which in this context means: you could store a table with all the points (in which case k = n); at the other extreme, you have k = 1, and all the points are stored as their distances from a single centroid. This section from Introduction to Information Retrieval by Manning and Schutze suggests minimising the Akaike Information Criterion as a heuristic for an optimal k.
This problem belongs to the "internal evaluation" class of clustering optimisation problems, for which the current state-of-the-art solution seems to be the Silhouette coefficient, as stated here:
https://en.wikipedia.org/wiki/Cluster_analysis#Applications
and here:
https://en.wikipedia.org/wiki/Silhouette_(clustering) :
"silhouette plots and averages may be used to determine the natural number of clusters within a dataset"
scikit-learn provides a sample implementation of the methodology here:
http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html
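A condensed version of that example, computing just the average silhouette per k and picking the maximiser (without the plots):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, (40, 2)) for c in ([0, 0], [4, 0], [2, 3])])

# Average silhouette for each candidate k; pick the k that maximises it.
scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)
best_k = max(scores, key=scores.get)
print(scores, "-> best k:", best_k)
```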
