I'm using JGAP to generate test vectors for a schematic. I got maximum coverage for a single test vector by setting the genes of a chromosome to be bits. Now I need to get 100% coverage with a minimum number of test vectors.
If I design each gene to be a test vector, I'd need to calculate a fitness function based on the number of genes and the total coverage, and I'd also need to evolve both the chromosome length and the bits of each test vector (gene).
Is it even possible to have a variable-length chromosome?
Are there any standard designs for this type of task?
Sounds a bit similar to the vehicle routing problem (VRP). There the solution is often encoded as a list of lists: each list represents the tour of one vehicle, and together they represent a solution to the problem.
I assume you could encode it in a similar way. Consider each point that you want to cover as a customer that you want to visit, and consider each vehicle to be a "test vector". You want to cover all points (just as a VRP solution typically visits all customers), but you want to cover them with a minimum number of vehicles (= test vectors).
What are the specific constraints of your problem? I assume you have some kind of limit on which points a given test vector can cover.
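A minimal sketch of such a VRP-style encoding (plain Python rather than JGAP, since JGAP's own classes aren't shown here); the bit width, the coverage oracle, and the penalty factor are placeholders you'd replace with your schematic's values:

```python
# Sketch: a candidate solution is a variable-length list of test vectors,
# each test vector a list of bits. `covered(vector)` is a hypothetical
# oracle returning the coverage points a single test vector hits.
import random

N_BITS = 16                    # width of one test vector (assumption)
ALL_POINTS = set(range(100))   # coverage points to hit (assumption)

def covered(vector):
    # placeholder: in your setup this would run the schematic simulation
    return {hash(tuple(vector)) % 100}

def fitness(solution):
    points = set().union(*(covered(v) for v in solution)) if solution else set()
    coverage = len(points & ALL_POINTS) / len(ALL_POINTS)
    # reward coverage first, then penalize the number of test vectors
    return coverage - 0.01 * len(solution)

def mutate(solution):
    s = [v[:] for v in solution]
    op = random.choice(("flip", "add", "remove"))
    if op == "flip" and s:
        v = random.choice(s)
        v[random.randrange(N_BITS)] ^= 1        # flip one bit in one vector
    elif op == "add":
        s.append([random.randint(0, 1) for _ in range(N_BITS)])
    elif op == "remove" and len(s) > 1:
        s.pop(random.randrange(len(s)))         # drop a whole test vector
    return s
```

The "add"/"remove" mutations are what make the chromosome length itself evolve, so the count of test vectors is optimized together with their contents.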
I've been given the challenge of finding the seed of a series of pseudo-randomly generated alphanumeric IDs, and after some analysis I'm stuck in a dead end that I hope you'll be able to get me out of.
Each ID is obtained by passing the previous one through the encryption algorithm that I'm supposed to reverse engineer in order to find the seed. The list given to me consists of the first 2070 IDs (without the seed, obviously). The IDs start as 4 alphanumeric characters and switch to 5 after some time (e.g. "2xm1", "34nj", "avdfe", "2lgq9").
This switch happens once the algorithm, after encrypting an ID, returns an ID that has already been generated previously. At that point, it adds one character to the returned ID, making it longer and thus unique. It then proceeds as usual, generating IDs of the new length. This effectively means that every generated ID is unique (the generation is injective).
My first reflex was to convert those IDs from base 36 to some other base, notably decimal. I used the results to scatter-plot the IDs' decimal values against their rank in the list, and noticed a pattern whose origin I couldn't explain.
After splitting the list by ID length, I made the same scatter plot for the 4-character and 5-character sub-lists, which made the strange density patterns visible.
After some analysis, I've observed two things:
For each sub-list, the boundary between the two densities is 6 × 36^(n-1), where n is the number of characters in the ID. In other words, it is 1/6 of the entire range of values for a given ID length, that range being [0; 36^n - 1].
The split of the IDs relative to this limit tends towards 50/50: half of them fall above the 1/6 boundary and half below it.
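For what it's worth, a small sketch of the check described above, decoding the base-36 IDs and counting how they split around the 6 × 36^(n-1) boundary (the sample IDs are placeholders for your list of 2070):

```python
# Decode base-36 IDs and check the split around the 1/6 boundary per length.
ids = ["2xm1", "34nj", "2lgq9"]   # placeholder sample; use the full list

def decode(s):
    return int(s, 36)             # base 36 -> decimal

for n in sorted({len(s) for s in ids}):
    sub = [decode(s) for s in ids if len(s) == n]
    boundary = 6 * 36 ** (n - 1)  # 1/6 of the range [0, 36^n - 1]
    above = sum(v >= boundary for v in sub)
    print(f"{n} chars: {above} of {len(sub)} IDs at or above the boundary")
```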
I've tried to correlate this behaviour with scatter plots of other known PRNGs, but none of them matched what I get on my graphs.
I'm hoping some of you might know about an encryption method, formula, or function matching such a specific scatter plot, or have any idea about what could be going on behind the scenes.
Thanks in advance for your answers.
This answer may not be very useful, but I think it can help. The plot you've shown most likely doesn't come from one of the well-known PRNGs, and it certainly doesn't come from a cryptographic PRNG.
But I have an observation, though I don't know if it helps. This PRNG seems to have a full period equal to the full cycle of numbers that can be generated for a fixed number of characters. That is, it follows one pattern for 4 characters, then repeats the same pattern at a higher magnitude for 5 characters, which probably means the same distribution pattern will repeat again for 6 characters at an even higher magnitude.
So, in summary, this pattern could be exploited: if you know the value of this magnitude, you know the increments for the 6-character plot, and you can simply stretch the 5-character graph along the Y axis to get some kind of solution (which would be the seed for the 6-character graph).
EDIT: To clarify things regarding your comment: what I mean is that this PRNG generates random numbers, but those numbers are not unrepeated forever; there is some point in time where the same sequence starts to be regenerated. This is confirmed by the fact that when it encounters a number it has generated before (i.e. it has reached the point where the sequence repeats), it just adds one extra character to the sequence, which does not change the distribution on the graph but instead makes the graph look as if it were stretched along the Y axis (as if the Y intercept of the graph function just got bigger).
I'm fairly new to genetic algorithms and would like to ask one question. All the resources on genetic algorithms I've come across talk about using binary numbers or real numbers to represent a gene. I'm working on an itinerary generator that makes use of a genetic algorithm. Normally an itinerary consists of points of interest, but mine is made up of cities, each represented by a binary string. Each bit position encodes information such as whether the city has museums, or whether it has a car rental service. For example, if the city has car rental, the bit position that represents car rental service will be set to 1. The number of cities in an itinerary is determined by the duration of the stay. So, in terms of genetic algorithm representation, each itinerary is a chromosome and each city is a gene. I haven't seen that kind of representation (each gene is a binary string and each chromosome is made up of multiple binary strings) in any of the resources I have read, so I would like to know if I'm on the right track or not.
Edit: So crossover would operate between multiple bit strings, and mutation is basically replacing an existing city with another city from the population.
It looks like you could represent it as a string of integers, each integer being the integer representation of the binary number that describes the city. Then you are good to go: crossover is just crossover, and mutation is, as you described, changing one city (i.e. one number) to another.
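A minimal sketch of that integer encoding (outside any particular GA library); the city values, itinerary length, and feature bit positions are all made up for illustration:

```python
# Each city is the integer value of its feature bit string; a chromosome is
# a fixed-length list of such integers (one per day of the stay).
import random

CITY_POOL = [0b1011, 0b0110, 0b1110, 0b0001]   # hypothetical city encodings

def random_itinerary(days):
    return [random.choice(CITY_POOL) for _ in range(days)]

def crossover(a, b):
    # one-point crossover over whole cities, never inside a bit string
    point = random.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(itinerary, rate=0.1):
    # mutation = swap a city for another one from the pool
    return [random.choice(CITY_POOL) if random.random() < rate else c
            for c in itinerary]

def has_feature(city, bit):
    return (city >> bit) & 1     # e.g. bit 0 = car rental, bit 1 = museum
```

Treating a whole city (one integer) as the atomic gene keeps crossover and mutation from producing nonsense cities that don't exist in your data.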
The representation of your chromosome can be anything. Having it represented as a bit string is convenient, easy to understand, and easy to manipulate if you have something that can already manipulate bit strings.
But it could be anything. Usually it's a collection of some data type (or several different ones put together).
What representation you choose will influence how it mutates and what the shape of the fitness landscape is.
I'm not a trained statistician so I apologize for the incorrect usage of some words. I'm just trying to get some good results from the Weka Nearest Neighbor algorithms. I'll use some redundancy in my explanation as a means to try to get the concept across:
Is there a way to normalize a multi-dimensional space so that the distances between any two instances are always proportional to the effect on the dependent variable?
In other words, I have a statistical data set and I want to use a "nearest neighbor" algorithm to find instances that are most similar to a specified test instance. Unfortunately, my initial results are useless, because two attributes that are very close in value but only weakly correlated with the dependent variable will incorrectly bias the distance calculation.
For example, let's say you're trying to find the nearest neighbor of a given car based on a database of cars: make, model, year, color, engine size, number of doors. We know intuitively that the make, model, and year have a bigger effect on price than the number of doors. So a car that matches only on color and door count should not be considered a nearer neighbor than a car with a different color and door count but the same make/model/year. What algorithm(s) can be used to appropriately set the weights of each independent variable in the nearest-neighbor distance calculation so that the distance is statistically proportional (correlated, whatever) to the dependent variable?
Application: this can be used for a more accurate "show me products similar to this other product" feature on shopping websites. Back to the car example, this would have cars of the same make and model bubbling up to the top, with year used as a tie-breaker, and then within cars of the same year, it might sort the ones with the same number of cylinders (4 or 6) ahead of the ones with the same number of doors (2 or 4). I'm looking for an algorithmic way to derive something similar to the weights that I know intuitively (make >> model >> year >> engine >> doors) and to actually assign numerical values to them to be used in the nearest-neighbor search for similar cars.
A more specific example:
Data set:
Blue,Honda,6-cylinder
Green,Toyota,4-cylinder
Blue,BMW,4-cylinder
now find cars similar to:
Blue,Honda,4-cylinder
In this limited example, it would match the Green,Toyota,4-cylinder ahead of the Blue,Honda,6-cylinder, because the two brands are statistically almost interchangeable and cylinder count is a stronger determinant of price than color. The BMW would rank lower because that brand tends to double the price, i.e. it places the item at a larger distance.
Final note: the prices are available during training of the algorithm, but not during calculation.
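One hedged, simple way to get such weights (not a standard named algorithm): since prices are available at training time, weight each categorical attribute by the share of price variance it explains when the training cars are grouped by that attribute, then use those weights in the nearest-neighbor distance. Everything below, field names and sample prices included, is purely illustrative:

```python
# Derive per-attribute weights from prices at training time, then use them
# in a weighted matching distance at query time (no prices needed then).
from collections import defaultdict

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def attribute_weight(cars, prices, attr):
    groups = defaultdict(list)
    for car, price in zip(cars, prices):
        groups[car[attr]].append(price)
    within = sum(len(g) * variance(g) for g in groups.values()) / len(prices)
    total = variance(prices)
    return 1 - within / total if total else 0.0   # share of variance explained

def weighted_distance(a, b, weights):
    # simple matching distance on categorical attributes, scaled per attribute
    return sum(w * (a[attr] != b[attr]) for attr, w in weights.items())

# usage sketch with made-up data
cars = [{"make": "Honda",  "color": "Blue",  "cyl": 6},
        {"make": "Toyota", "color": "Green", "cyl": 4},
        {"make": "BMW",    "color": "Blue",  "cyl": 4}]
prices = [9000, 8800, 18000]
weights = {a: attribute_weight(cars, prices, a) for a in ("make", "color", "cyl")}
query = {"make": "Honda", "color": "Blue", "cyl": 4}
ranked = sorted(cars, key=lambda c: weighted_distance(query, c, weights))
```

Attributes that separate prices well (make) end up with weights near 1, while attributes that don't (color) end up near 0, which is the intuition described above in numeric form.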
Possibly you should look at Solr/Lucene for this. Solr provides similarity search based on field value frequency, and it already has MoreLikeThis functionality for finding similar items.
Maybe nearest neighbor is not a good algorithm for this case? Since you want to classify discrete values, it can become quite hard to define reasonable distances. I think a C4.5-like algorithm may better suit the application you describe. At each step the algorithm optimizes the information entropy, so you always select the feature that gives you the most information.
I found something on the IEEE website. The algorithm is called DKNDAW ("dynamic k-nearest-neighbor with distance and attribute weighted"). I couldn't locate the actual paper (it probably requires a paid subscription). This looks very promising, assuming the attribute weights are computed by the algorithm itself.
Let TARGET be a set of strings that I expect to be spoken.
Let SOURCE be the set of strings returned by a speech recognizer (that is, the possible sentences that it has heard).
I need a way to choose a string from TARGET. I read about the Levenshtein distance and the Damerau-Levenshtein distance, which basically returns the distance between a source string and a target string, that is the number of changes needed to transform the source string into the target string.
But, how can I apply this algorithm to a set of target strings?
I thought I'd use the following method:
For each string that belongs to TARGET, I calculate the distance from each string in SOURCE. In this way we obtain an m-by-n matrix, where m is the cardinality of TARGET and n is the cardinality of SOURCE. We could say that the i-th row represents the similarity of the sentences detected by the speech recognizer to the i-th target.
Calculating the average of the values on each row, you can obtain the average distance between the i-th target and the output of the speech recognizer. Let's call it average_on_row(i), where i is the row index.
Finally, for each row I calculate the standard deviation of all the values in the row, and I combine it with the row's values into a single score. The result is a column vector in which each element (call it standard_deviation_sum(i)) refers to a string of TARGET.
The string associated with the smallest standard_deviation_sum could be the sentence pronounced by the user. Can the method I used be considered correct? Or are there other methods?
Obviously, values that are too high indicate that the sentence pronounced by the user probably does not belong to TARGET.
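For concreteness, a small sketch of the procedure described above, using a plain Levenshtein implementation; since the combination of row average and row standard deviation is a bit ambiguous, the score here is simply their sum, which is only one possible reading:

```python
# Build the TARGET x SOURCE distance matrix and rank targets by row statistics.
from statistics import mean, pstdev

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

TARGET = ["turn on the light", "turn off the light"]     # placeholder strings
SOURCE = ["turn of the light", "turn off the lights"]    # recognizer outputs

matrix = [[levenshtein(s, t) for s in SOURCE] for t in TARGET]
scores = [(mean(row) + pstdev(row), t) for row, t in zip(matrix, TARGET)]
best_score, best_target = min(scores)
```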
I'm not an expert, but your proposal does not make sense. First of all, in practice I'd expect the cardinality of TARGET to be very large, if not infinite. Second, I don't believe the Levenshtein distance or a similar similarity metric will be useful.
If :
you could really define SOURCE and TARGET sets,
all strings in SOURCE were equally probable,
all strings in TARGET were equally probable,
the strings in SOURCE and TARGET consisted of phonemes rather than characters,
then I believe your best bet would be to find the pair p in SOURCE, q in TARGET such that distance(p,q) is minimal. Since, in particular, you cannot guarantee the equal-probability part, I think you should think about the problem from scratch, do some research, and come up with a completely different design. The usual methodology for speech recognition is to use Hidden Markov Models; I would start from there.
Answer to your comment: Choose whichever is more probable. If you don't consider probabilities, it is hopeless.
[Suppose the following example is on phonemes, not characters]
Suppose the recognized word is "chees" and the target set is {"cheese", "chess"}. You must calculate P(cheese|chees) and P(chess|chees). What I'm trying to say is that not every substitution is equiprobable. If you model probabilities as distances between strings, then at the very least you must allow that, for example, d("c","s") < d("c","q") (it is common to confuse the letters c and s, but not common to confuse c and q). Adapting the distance calculation algorithm is easy; coming up with good values for all pairs is difficult.
You must also somehow estimate P(cheese|context) and P(chess|context). If we are talking about board games, chess is more probable; if we are talking about dairy products, cheese is more probable. This is why you'll need large amounts of data to come up with such estimates, and it is also why Hidden Markov Models are well suited to this kind of problem.
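As a hedged illustration of non-uniform substitution costs, here is a Levenshtein variant with a pluggable cost table; the cost values are invented purely for the example:

```python
# Edit distance with per-pair substitution costs, e.g. sub_cost("c","s") < sub_cost("c","q").
SUB_COST = {("c", "s"): 0.3, ("s", "c"): 0.3}   # acoustically confusable pair (made up)

def sub_cost(a, b):
    if a == b:
        return 0.0
    return SUB_COST.get((a, b), 1.0)            # default cost for other pairs

def weighted_edit_distance(source, target, ins=1.0, dele=1.0):
    prev = [j * ins for j in range(len(target) + 1)]
    for i, cs in enumerate(source, 1):
        cur = [i * dele]
        for j, ct in enumerate(target, 1):
            cur.append(min(prev[j] + dele,                   # delete from source
                           cur[j - 1] + ins,                 # insert into source
                           prev[j - 1] + sub_cost(cs, ct)))  # substitute
        prev = cur
    return prev[-1]

print(weighted_edit_distance("chees", "cheese"),
      weighted_edit_distance("chees", "chess"))
```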
You need to calculate these probabilities first: the probability of insertion, deletion, and substitution. Then use the log of these probabilities as the penalty for each operation.
In a "context-independent" situation, if pi is the probability of insertion, pd the probability of deletion, and ps the probability of substitution, then the probability of observing the same symbol is pp = 1 - ps - pd.
In this case use log(pi/pp/k), log(pd/pp) and log(ps/pp/(k-1)) as penalties for insertion, deletion and substitution respectively, where k is the number of symbols in the system.
Essentially, if you use this distance measure between a source and a target, you get the log probability of observing that target given the source. If you have a bunch of training data (i.e. source-target pairs), choose some initial estimates for these probabilities, align the source-target pairs, and re-estimate the probabilities (i.e. an EM strategy).
You can start with one set of probabilities and assume context independence. Later you can assume some kind of clustering among the contexts (e.g. assume there are k different sets of letters whose substitution rates differ...).
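A small sketch of turning those probabilities into penalties; the probability values and alphabet size are placeholders, and the sign is flipped (negative logs) so that a smaller total cost corresponds to a more probable target, which is an equivalent restatement of summing the log probabilities:

```python
# Turn estimated operation probabilities into additive edit-distance penalties.
import math

k = 26                       # number of symbols in the system (assumption)
pi, pd, ps = 0.02, 0.03, 0.05   # placeholder estimates; re-estimate via the
                                # EM-style alignment loop described above
pp = 1 - ps - pd             # probability of observing the same symbol

ins_penalty = -math.log(pi / pp / k)
del_penalty = -math.log(pd / pp)
sub_penalty = -math.log(ps / pp / (k - 1))

# These can be plugged in as the insertion/deletion/substitution costs of a
# weighted edit distance (e.g. the sketch earlier in this thread), so the
# total cost is the negative log probability of the target given the source.
```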
I am a data mining student and I have a problem that I was hoping that you guys could give me some advice on:
I need a genetic algorithm that optimizes the weights of three inputs. The weights need to be positive values AND they need to sum to 100%.
The difficulty is in creating an encoding that satisfies the sum to 100% requirement.
As a first pass, I thought I could simply create a chromosome with a series of numbers (e.g. 4, 7, 9). Each weight would simply be its number divided by the sum of all the chromosome's numbers (e.g. 4/20 = 20%).
The problem with this encoding method is that any change to the chromosome will change the sum of all the chromosome's numbers resulting in a change to all of the chromosome's weights. This would seem to significantly limit the GA's ability to evolve a solution.
Could you give any advice on how to approach this problem?
I have read about real-valued encoding and I do have an implementation of a GA, but it gives me weights that may not necessarily add up to 100%.
It is mathematically impossible to change one value without changing at least one more if you need the sum to remain constant.
One way to make changes would be exactly what you suggest: weight = value/sum. In this case when you change one value, the difference to be made up is distributed across all the other values.
The other extreme is to only change pairs. Start with a set of values that add to 100, and whenever 1 value changes, change another by the opposite amount to maintain your sum. The other could be picked randomly, or by a rule. I'd expect this would take longer to converge than the first method.
If your chromosome is only 3 values long, then mathematically, these are your only two options.
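A hedged sketch of both options for a three-weight chromosome that must stay non-negative and sum to 1 (i.e. 100%); the mutation step sizes are arbitrary:

```python
# Two mutation schemes that preserve the sum-to-1 constraint.
import random

def mutate_normalize(genes, sigma=0.1):
    # option 1: perturb the raw genes, then renormalize so weights sum to 1
    raw = [max(1e-9, g + random.gauss(0, sigma)) for g in genes]
    total = sum(raw)
    return [g / total for g in raw]

def mutate_pairwise(weights, sigma=0.05):
    # option 2: move mass between one random pair, leaving the sum untouched
    i, j = random.sample(range(len(weights)), 2)
    delta = random.uniform(-sigma, sigma)
    delta = max(-weights[i], min(delta, weights[j]))  # keep both non-negative
    w = list(weights)
    w[i] += delta
    w[j] -= delta
    return w
```

The first scheme spreads any change across all weights (the coupling described above); the second confines each change to a pair, which keeps the rest of the chromosome stable but may converge more slowly.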