Genetic algorithm best way to mutate list/gene - genetic-algorithm

So if I have a list of x integers what is the best way to mutate that list?
One way to do this would be to pick a random integer and change it, but I suspect this would cause a problem if two or more elements have to change to achieve a better result.
So what would be the best way? Generating a new random list? Change random number of elements? Maybe another way?

First of all, and this may be slightly off-topic, you should take note that some crossover operations may introduce mutations as a side effect. For instance, suppose that you use binary encoding and that some discrete element E of the solution is encoded with multiple bits; if the crossover point falls in the middle of the binary representation of E, the resulting child will most likely have an E element different from that of either of its parents.
In your case, I would suggest that you mutate a certain percentage of the elements of your list. This percentage does not need to be fixed, and it could possibly be better if it were not (e.g., you could choose to mutate between 1% and 3% of the integers, chosen randomly).
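As a rough illustration, here is a minimal Python sketch of that idea; the value range and the 1-3% bounds are just placeholders to adapt to your own encoding:

```python
import random

def mutate(genome, low=0.01, high=0.03, value_range=(0, 100)):
    """Mutate a randomly chosen 1-3% of the integers in the genome.

    The mutation rate itself is drawn at random on each call, as suggested
    above; value_range stands in for whatever bounds your integers have.
    """
    genome = genome[:]                      # work on a copy
    rate = random.uniform(low, high)        # e.g. somewhere between 1% and 3%
    n = max(1, round(rate * len(genome)))   # always mutate at least one gene
    for i in random.sample(range(len(genome)), n):
        genome[i] = random.randint(*value_range)
    return genome
```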
If you use binary encoding, take a look at Gray code, which can be interesting if you want your children's mutations to be more "local" (i.e., remain closer to the neighborhoods of their parents).

There is no best/optimal way. All your suggestions seem good. Try each and see which works best.
What's important (and is often neglected) is to define what "better performance" means. Having this measure will help you tune your parameters (e.g. the mutation rate). Typically, performance is some form of fitness convergence rate. Good luck!

Related

Identify more Compressible Dataset by observing Input distribution

This may be a repeat of the question here: Predict Huffman compression ratio without constructing the tree
So basically, I have the probability distributions of two datasets with the same variables but different probabilities. Now, is there any way that, by looking at the variable distributions, I can say with some degree of confidence that one dataset, when passed through a Huffman coding implementation, would achieve a higher compression ratio than the other?
One of the solutions that I came across was to calculate the upper bound using conditional entropy and then compute the average code length. Is there any other approach that I can explore before using the said method?
Thanks a lot.
I don't know what "to some degree confidently" means, but you can get a lower bound on the compressed size of each set by computing the zero-order entropy as done in the linked question (the negative of the sum of the probabilities times the log of the probabilities). Then the lower entropy very likely produces a shorter Huffman coding than the higher entropy. It is not definite, as I am sure that one could come up with a counter-example.
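If it helps, computing that lower bound is a one-liner; here is a small sketch with two made-up distributions:

```python
from math import log2

def zero_order_entropy(probs):
    """Zero-order entropy in bits per symbol: -sum(p * log2(p))."""
    return -sum(p * log2(p) for p in probs if p > 0)

# Hypothetical distributions over the same four symbols:
dataset_a = [0.5, 0.25, 0.125, 0.125]
dataset_b = [0.25, 0.25, 0.25, 0.25]

# The lower-entropy dataset (a) will very likely compress better.
print(zero_order_entropy(dataset_a))  # 1.75 bits/symbol
print(zero_order_entropy(dataset_b))  # 2.0 bits/symbol
```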
You also need to send a description of the code itself if you want to decode it on the other end, which adds a wrinkle to the comparison. However if the data is much larger than the code description, then that will be lost in the noise.
Simply generating the code, the coded data, and the code description is very fast. The best solution is to do that, and compare the resulting number of bits directly.
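A minimal sketch of that direct comparison (it builds the Huffman code lengths from the distributions but ignores the cost of transmitting the code description):

```python
import heapq
from itertools import count

def huffman_code_lengths(probs):
    """Code length per symbol under a Huffman code built from `probs`."""
    tiebreak = count()
    # Heap items: (weight, tiebreak, symbols under this subtree).
    heap = [(p, next(tiebreak), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        w1, _, s1 = heapq.heappop(heap)
        w2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:            # each merge adds one bit to these symbols
            lengths[i] += 1
        heapq.heappush(heap, (w1 + w2, next(tiebreak), s1 + s2))
    return lengths

def expected_bits_per_symbol(probs):
    return sum(p * l for p, l in zip(probs, huffman_code_lengths(probs)))

# Compare the two hypothetical distributions from above:
print(expected_bits_per_symbol([0.5, 0.25, 0.125, 0.125]))  # 1.75
print(expected_bits_per_symbol([0.25, 0.25, 0.25, 0.25]))   # 2.0
```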

Find mutually compatible options from list of list of options

For the purposes of this question, let us call a list of mutually incompatible options an "OptionS". I have a list of such OptionS, where each Option, apart from disqualifying all other Options in its own OptionS list, also disqualifies some Options from the other OptionS lists. These rules are symmetrical, so if A forbids B, B forbids A.
I want to pick exactly one Option from each list, such that no Options disqualify each other. There are too many Options (and OptionS) and too few disqualifications in each step to brute force a backtracking solution.
It reminds me a bit of Sudoku, but it is not an exact analog. From certain external factors, I have a rough likelihood for the different Options, or at least an ordering.
Is there a known better solution to this problem? Is it in NP?
Currently, I plan to just take random "paths" through the solution space, weighted by likelihood. A sort of simulated annealing.
EDIT - Clarification
I have a number, let's say between 5 and 500, of vectors.
Each vector contains a number, between 10 and 10000, of elements
Each element rules out a number of elements in the other vectors
This relation is symmetric
I want to pick exactly one element from each vector in a way that no elements disqualify each other
If there is no way to choose one from each vector, I want to at least choose as many as possible. The nature of the data is such that there will always be at least one (and at most a few) solution (or almost-solution - with just a few misses).
I cannot share the real data, but an example would be that the elements are integers between 1 and 10e9 and that only elements whose pairwise sum has more than P prime factors are allowed. Some numbers are more likely than others to "fit" other numbers, since larger numbers tend to have more factors, which makes some choices more likely, just as in the real data.
Pick P and the sizes and number of vectors as needed to make it suitably challenging :).
My naive solution:
I order the elements by how many other elements they rule out and try those that rule out few first (because that gives a larger chance of being able to pick one from each).
Then I order the vectors by how many elements their "best" element rules out. Vectors whose elements rule out many other elements come first. So the most constrained vector is tried first, even though the least constraining elements of that vector are tried first.
I then search depth first.
The problem with this approach is that if the first choice is wrong, then the depth first search will never have time to reach the next choice.
A better way, which I try to explain in a comment below, would be to score each partial choice (node in the search tree) of elements according to how many you have chosen and how many elements are left. Then I could look deeper in the highest scoring node at each step, so the first choice is less rigid.
A similar way, which I might try first because it is slightly easier, is to do simulated annealing and take random paths, weighted by how many possibilities they keep, down the tree.
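For concreteness, here is a small Python sketch of the ordered depth-first search described above (the names and the `incompatible` callback are placeholders; the scored best-first and annealing variants would replace the plain recursive loop):

```python
def pick_one_from_each(vectors, incompatible):
    """Ordered depth-first search: most constrained vector first,
    least constraining element first within each vector.

    `vectors` is a list of lists of elements; `incompatible(a, b)` returns
    True when two elements may not be chosen together (assumed symmetric).
    Returns the chosen elements in the order the vectors were searched.
    """
    def ruled_out(element, vi):
        # How many elements in the *other* vectors does this element rule out?
        return sum(incompatible(element, other)
                   for vj, vec in enumerate(vectors) if vj != vi
                   for other in vec)

    # Least constraining elements first inside each vector...
    ordered = [sorted(vec, key=lambda e, vi=vi: ruled_out(e, vi))
               for vi, vec in enumerate(vectors)]
    # ...and the most constrained vectors first overall.
    order = sorted(range(len(vectors)),
                   key=lambda vi: -ruled_out(ordered[vi][0], vi))

    chosen = []
    def dfs(k):
        if k == len(order):
            return True
        for element in ordered[order[k]]:
            if all(not incompatible(element, c) for c in chosen):
                chosen.append(element)
                if dfs(k + 1):
                    return True
                chosen.pop()
        return False

    return chosen if dfs(0) else None
```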
Depending on what constraints are allowed, I think you can reduce SAT to this.
Take a SAT expression e.g. (A|B|C)(~A|C|~D)...
Replace ~A by a and make a vector out of each term giving you {A,B,C} {a,C,d}...
You now have the problem of choosing one element from each vector, subject to the constraint that you cannot choose both versions of a variable - the constraints say that A is incompatible with a, B is incompatible with b, and so on.
If you can solve this instance of your problem you can solve SAT: set the variables chosen in your problem as A, B, C, ... to true, set the variables chosen as a, b, c, ... to false, and make an arbitrary choice for anything not chosen. Therefore your problem is at least as hard as SAT. (Except if you don't encounter these sorts of constraints, in which case I have not proved that your problem is this hard.)
Given an instance of your problem, associate a variable with each element, write the constraints as boolean expressions (typically with only 2 variables) to give something which looks like 2-SAT, except that you need an expression for each vector of the form (A|B|C|D|...) to say that you must choose at least one element from each vector - so the exact solution version of your problem, at least, might code up quite nicely as input for a SAT-solver - so it is in NP and since we have already shown it is NP-hard it is NP-complete.
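As a rough sketch of that encoding (variable numbering and names are up to you; this only emits the at-least-one clause per vector and a binary clause per incompatible pair, in DIMACS form that any SAT solver accepts):

```python
def to_dimacs(vectors, incompatible_pairs):
    """Encode the problem as CNF in DIMACS format.

    One boolean variable per element, numbered 1..n. `vectors` is a list of
    lists of variable numbers; `incompatible_pairs` is an iterable of
    (variable, variable) pairs. Each vector gives an at-least-one clause;
    each incompatible pair (a, b) gives a binary clause (~a | ~b).
    """
    clauses = []
    for vec in vectors:                      # choose at least one per vector
        clauses.append(list(vec))
    for a, b in incompatible_pairs:          # never choose two incompatible ones
        clauses.append([-a, -b])
    n_vars = max(v for vec in vectors for v in vec)
    lines = [f"p cnf {n_vars} {len(clauses)}"]
    lines += [" ".join(map(str, c)) + " 0" for c in clauses]
    return "\n".join(lines)

# Tiny illustrative instance: two vectors, elements 1..4, with 1 and 3 clashing.
print(to_dimacs([[1, 2], [3, 4]], [(1, 3)]))
```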
My first recommendation would be to find an off-the-shelf constraint solver and try that (request a maximum-weight solution with the log-likelihoods as weights), but if you're determined to implement a solver from scratch, then I would suggest that you start with something like WalkSAT. To summarize the link in the language of your question: at all times, keep a list of option choices (one from each option list, not necessarily compatible) and a list of conflicts (i.e., a set of pairs of indexes into the list of option lists). Repeatedly choose a conflict at random and resolve it by choosing differently for one half of the conflict or the other (most of the time) so as to decrease the number of conflicts afterward as much as possible or (some of the time) randomly, perhaps according to the likelihoods. Good data structures will be essential in making this run fast.
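A bare-bones sketch of that repair loop, with made-up names and a naive conflict recount on every step (a real implementation would maintain the conflict list incrementally, which is where the good data structures come in):

```python
import random

def local_search(vectors, incompatible, steps=100000, p_random=0.2):
    """WalkSAT-flavoured repair loop over one choice per option list.

    `vectors` is the list of option lists; `incompatible(a, b)` says whether
    two options conflict (assumed symmetric).
    """
    choice = [random.choice(vec) for vec in vectors]

    def conflicts():
        return [(i, j)
                for i in range(len(choice))
                for j in range(i + 1, len(choice))
                if incompatible(choice[i], choice[j])]

    for _ in range(steps):
        confl = conflicts()
        if not confl:
            return choice                       # fully compatible selection
        i, j = random.choice(confl)
        side = random.choice((i, j))            # re-choose one end of the conflict
        if random.random() < p_random:
            choice[side] = random.choice(vectors[side])
        else:
            # Greedy move: pick the option that leaves the fewest conflicts.
            def cost(opt):
                return sum(incompatible(opt, choice[k])
                           for k in range(len(choice)) if k != side)
            choice[side] = min(vectors[side], key=cost)
    return None                                 # give up after `steps` tries
```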

GA Chromosome Representation with bits of different importance

In a genetic algorithm, is it OK to encode the chromosome in a way such that some bits have more importance than other bits in the same chromosome? For example, the bits at even indices (2, 4, 6, ...) could be more important than the bits at odd indices (1, 3, 5, ...): if bit 2 has a value in the range [1,5], we consider the value of bit 3, and if bit 2 has the value 0, the value of bit 3 has no effect.
For example, if the problem is that we have multiple courses to be offered by a school and we want to know which course should be offered in the next semester and which should not, and if a course should be offered who should teach that course and when he/she should teach it. So one way to represent the problem is to use a vector of length 2n, where n is the number of courses. Each course is represented by a 2-tuple (who,when), where when is when the course should be taught and who is who should teach it. The tuple in the i-th position holds assignment for the i-th course. Now the possible values for who are the ids of the teachers [1-10], and the possible values for when are all possible times plus 0, where 0 means at no time which means the course should not be offered.
Now, is it OK to have two different tuples with the same fitness? For instance, (3,0) and (2,0) are different values for the i-th course but they mean the same thing: this course should not be offered, since we don't care about who if when=0. Or should I add 0 to the domain of who, so that 0 means taught by no one, and a tuple means that the corresponding course should not be offered if and only if its value is (0,0)? But what about (0,v) and (v,0), where v>0? Should I consider these to mean that the course should not be offered? I need help with this, please.
I'm not sure I fully understand your question but I'll try to answer as best I can.
When using genetic algorithms to solve problems you can have a lot of flexibility in how it's encoded. Broadly, there are two places where certain bits can have more prominence: In the fitness function or in the implementation of the algorithms (namely selection, crossover and mutation). If you want to change the prominence of certain bits in the fitness function I'd go ahead. This would encourage the behaviour you want and generally lead towards a solution where certain bits are more prominent.
I've worked with a lot of genetic algorithms where the fitness function gives some bits (or groupings of bits) more weight than others. It's fairly standard.
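For what it's worth, that weighting usually amounts to nothing more than a per-position weight vector in the fitness function; a toy sketch, with made-up weights and a hypothetical target pattern:

```python
# Hypothetical weights: even-indexed genes count five times as much as odd ones.
WEIGHTS = [5 if i % 2 == 0 else 1 for i in range(20)]

def fitness(chromosome, target):
    """Weighted count of positions where the chromosome matches the target."""
    return sum(w for gene, t, w in zip(chromosome, target, WEIGHTS) if gene == t)
```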
I'd be a lot more careful when making certain bits more prominent than others in the genetic algorithm implementation. I've worked with algorithms that only allow certain bits to mutate, or that can only crossover at certain points. Sometimes they can work well (sometimes they're necessary given the problem) but for the most part they're a lot harder to get right, and more prone to problems like premature convergence.
EDIT:
In answer to the second part of your question, and your comments:
The best way to deal with situations where a course should not be offered is probably in the fitness function. Simply give a low score (or no score) to these. The same applies to course duplicates in a chromosome. In theory, this should discourage them from becoming a prevalent part of your population. Alternatively, you could apply a form of "culling" every generation, which completely removes chromosomes which are not viable from the population. You can probably mix the two by completely excluding chromosomes with no fitness score from selection.
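A minimal sketch of where those two hooks would sit, assuming you already have a problem-specific base_fitness and an is_viable check of your own:

```python
def penalised_fitness(chromosome, base_fitness, is_viable):
    """Give non-viable chromosomes no score so selection ignores them."""
    return base_fitness(chromosome) if is_viable(chromosome) else 0.0

def cull(population, is_viable):
    """Optional per-generation culling: drop non-viable chromosomes entirely."""
    return [c for c in population if is_viable(c)]
```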
From what you've said about the problem it sounds like having non-viable chromosomes is probably going to be common. This doesn't have to be a problem. If your fitness function is encoded well, and you use the correct selection and crossover methods it shouldn't be an issue. As long as the more viable solutions are fitter you should be able to evolve a good solution.
In some cases it's a good idea to stop crossover at certain points in the chromosomes. It sounds like this might be the case, but again, without knowing more about your implementation it's hard to say.
I can't really give a more detailed answer without knowing more about how you plan to implement the algorithm. I'm not really familiar with the problem either; it's not something I've ever done. If you add a bit more detail on how you plan to encode the problem and fitness function I may be able to give more specific advice.

Finding the average of large list of numbers

Came across this interview question.
Write an algorithm to find the mean (average) of a large list. This list could contain trillions or quadrillions of numbers. Each individual number is manageable in size, in the hundreds, thousands or millions.
Googling it gave me all Median of Medians solutions. How should I approach this problem?
Is divide and conquer enough to deal with trillions of numbers?
How should I deal with a list of such a large size?
If the size of the list is computable, it's really just a matter of how much memory you have available, how long it's supposed to take and how simple the algorithm is supposed to be.
Basically, you can just add everything up and divide by the size.
If you don't have enough memory, dividing first might work (Note that you will probably lose some precision that way).
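For instance, a streaming version in Python might look like the sketch below; the incremental update avoids both holding the list in memory and building one enormous intermediate sum, at the cost of some floating point rounding:

```python
def running_mean(numbers):
    """Single pass over the data, never holding it all in memory."""
    mean, n = 0.0, 0
    for x in numbers:           # `numbers` can be any iterator, e.g. a file reader
        n += 1
        mean += (x - mean) / n  # incremental mean update
    return mean
```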
Another approach would be to recursively split the list into 2 halves and calculate the mean of the sublists' means. Your recursion termination condition is a list size of 1, in which case the mean is simply the only element of the list. If you encounter a list of odd size, make either the first or second sublist longer; this is pretty much arbitrary and doesn't even have to be consistent.
If, however, your list is so giant that its size can't be computed, there's no way to split it into 2 sublists. In that case, the recursive approach works pretty much the other way around. Instead of splitting into 2 lists with n/2 elements, you split into n/2 lists with 2 elements (or rather, calculate their means immediately). So basically, you calculate the mean of elements 1 and 2, and that becomes your new element 1; the mean of elements 3 and 4 becomes your new element 2, and so on. Then apply the same algorithm to the new list until only 1 element remains. If you encounter a list of odd size, either add an element at the end or ignore the last one. If you add one, you should try to get as close as possible to your expected mean.
While this won't calculate the mean mathematically exactly, for lists of that size, it will be sufficiently close. This is pretty much a mean of means approach. You could also go the median of medians route, in which case you select the median of sublists recursively. The same principles apply, but you will generally want to get an odd number.
You could even combine the approaches and calculate the mean if your list is of even size and the median if it's of odd size. Doing this over many recursion steps will generate a pretty accurate result.
First of all, this is an interview question. The problem as stated would not arise in practice. Also, the question as stated here is imprecise. That is probably deliberate. (They want to see how you deal with solving an imprecisely specified problem.)
Write an algorithm to find the mean(average) of a large list.
The word "find" is rubbery. It could mean calculate (to some precision) or it could mean estimate.
The phrase "large list" is rubbery. If could mean a list or array data structure in memory, or the "list" could be the result of a database query, the contents of a file or files.
There is no mention of the hardware constraints on the system where this will be implemented.
So the first thing >>I<< would do would be to try to narrow the scope by asking some questions of the interviewer.
But assuming that you can't, then a complete answer would need to cover the following points:
The dataset probably won't all fit in memory at the same time. (But if it does, then that is good.)
Calculating the average of N numbers is O(N) if you do it serially. For N this size, it could be an intractable problem.
An alternative is to split the data into sublists of equal size, calculate their averages, and then take the average of the averages (see the sketch after these points). In theory, this gives you O(N/P) where P is the number of partitions. The parallelism could be implemented with multiple threads, with multiple processes on the same machine, or distributed.
In practice, the limiting factors are going to be computational, memory and/or I/O bandwidth. A parallel solution will be effective if you can address these limits. For example, you need to balance the benefit of each "worker" having uncontended access to its "sublist" against the cost of making copies of the data so that this is possible.
If the list is represented in a way that allows sampling, then you can estimate the average without looking at the entire dataset. In fact, this could be O(C) depending on how you sample. But there is a risk that your sample will be unrepresentative, and the average will be too inaccurate.
In all cases, when doing the calculations you need to guard against (integer) overflow and (floating point) rounding errors, especially while calculating the sums.
It would be worthwhile discussing how you would solve this with a "big data" platform (e.g. Hadoop) and the limitations of that approach (e.g. time taken to load up the data ...)
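To make the average-of-averages point concrete, here is a tiny sketch of how the partial results would be combined; each "worker" only has to report its partition's mean and count:

```python
def combine_partition_means(partials):
    """Combine per-partition (mean, count) pairs into the overall mean."""
    total_count = sum(count for _, count in partials)
    weighted_sum = sum(mean * count for mean, count in partials)
    return weighted_sum / total_count

# e.g. three workers reporting (mean, count):
print(combine_partition_means([(10.0, 1000), (12.0, 3000), (11.0, 2000)]))  # 11.33...
```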

data structure for NFA representation

In my lexical analyzer generator I use the McNaughton-Yamada algorithm for NFA construction, and one of its properties is that a transition from state I to state J is marked with the character at position J.
So each node of the NFA can be represented simply as a list of next possible states.
Which data structure is best suited for storing this type of data? It must provide fast lookup of all possible next states and use little space, but insertion time is not so important.
My understanding is that you want to encode a graph, where the nodes are states and the edges are transitions, and where every edge is labelled with a character. Is that correct?
The dull but practical answer is to have an object for each state, and to encode the transitions in some little structure in that object.
The simplest one would be an array, indexed by character code: that's as fast as it gets, but not naturally space-efficient. You can make it more space efficient by using a sort of offset, truncated array: store only the part of the array which contains transitions, along with the start and end indices of that part. When looking up a character in it, check that its code is within the bounds; if it isn't, treat it as a null edge (or an edge back to the start state or whatever), and if it is, fetch the element at index (character code - start). Does that make sense?
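Roughly, in Python (names are mine, and this skips the fallback policy for out-of-range characters):

```python
class TruncatedRow:
    """One state's transitions as an offset plus a dense slice of an array."""

    def __init__(self, transitions):           # transitions: {char_code: next_state}
        self.lo = min(transitions)
        self.hi = max(transitions)
        self.row = [None] * (self.hi - self.lo + 1)
        for code, state in transitions.items():
            self.row[code - self.lo] = state

    def next_state(self, code):
        if self.lo <= code <= self.hi:
            return self.row[code - self.lo]     # may still be None (a hole)
        return None                             # outside the stored span

# A state with transitions only on 'a'..'c':
row = TruncatedRow({ord('a'): 3, ord('b'): 3, ord('c'): 7})
print(row.next_state(ord('b')))   # 3
print(row.next_state(ord('z')))   # None
```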
A more complex option would be a little hashtable, which would be more compact but slightly slower. I would suggest closed hashing, because collision lists will use too much memory; linear probing should be enough. You could look into using perfect hashing (look it up), which takes a lot of time to generate the table but then gives collision-free lookup. The generation process is quite complex, though.
A clever approach is to use both arrays and hashtables, and to pick one or the other based on the number of edges: if the compacted array would be more than, say, a third full, use it, but if not, use a hashtable.
Now, something a bit more radical you could do would be to use arrays, but to overlap them: if they're sparse, they'll have lots of holes in them, and if you're clever, you can arrange them so that the entries in each array line up with holes in the others. That will give you fast lookups, but also excellent memory efficiency. You will need some scheme for distinguishing when a lookup has found something from when it's found an empty slot containing some other state's transition, but I'm sure you can think of something.
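One way to make the overlap bookkeeping concrete is the classic base/next/check layout used by table-driven scanners; a first-fit packing sketch (names are assumptions, packing is not tuned at all, and each state is assumed to have at least one transition):

```python
class CombTable:
    """Transition rows overlapped in one shared pair of next/check arrays.

    Each state gets a base offset; its entries land at base + char code.
    The check array records which state owns each slot, so hitting another
    state's entry (or a hole) is correctly reported as "no transition".
    """

    def __init__(self):
        self.base, self.next, self.check = {}, [], []

    def add_state(self, state, transitions):   # transitions: {char_code: next_state}
        codes = sorted(transitions)
        b = 0
        while not all(b + c >= len(self.next) or self.check[b + c] is None
                      for c in codes):
            b += 1                              # first-fit: slide until no clash
        need = b + codes[-1] + 1
        self.next += [None] * (need - len(self.next))
        self.check += [None] * (need - len(self.check))
        self.base[state] = b
        for c in codes:
            self.next[b + c] = transitions[c]
            self.check[b + c] = state

    def next_state(self, state, code):
        i = self.base[state] + code
        if i < len(self.next) and self.check[i] == state:
            return self.next[i]
        return None

t = CombTable()
t.add_state(0, {ord('a'): 1, ord('b'): 2})
t.add_state(1, {ord('a'): 1})
print(t.next_state(0, ord('b')))   # 2
print(t.next_state(1, ord('b')))   # None
```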
