Which algorithm or statistical method will be best?

I have a table of 21 students (A1…A21) and their 25 characteristics (Table 1), and I have another matrix (Table 2) which shows whether each student likes another student or not (0 means like, 100 means dislike).
How can I find the smallest number of characteristics whose distances in space best match the likeability matrix?
For example:
If we end up with 5 dimensions, say characteristics C1, C3, C4, C5, C10, then the points A1, ..., A21 plotted in that 5-characteristic space should have distances proportional to the values in the likeability matrix.
That is, if A3 and A2 are a small distance apart in that 5D characteristic space, then they should have a correspondingly small value in the likeability matrix.
[Table 1: the 21 x 25 matrix of student characteristics]
[Table 2: the 21 x 21 likeability matrix]

You can make this look like a well-known statistical problem, but you have already made assumptions (that similar students like each other), I will make further assumptions below, and most of the standard solutions to that statistical problem are not very respectable, so you should take the results with a pinch of salt.
With 21 students, you have 21*20/2 = 210 pairs of students. Treat each pair as a separate observation. You have a likeability value for that pair. For each pair compute, for each characteristic, the absolute value of the difference between the values for each of the two students. This gives you a vector of 25 elements for each observation. You will now try and predict the 210 likeabilities given the 210 25-long vectors of absolute differences.
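A minimal Python sketch of that pairing step (assuming the characteristics sit in a 21 x 25 NumPy array X for Table 1 and the likeability matrix in a 21 x 21 array L for Table 2; both names are placeholders):

```python
import numpy as np
from itertools import combinations

def build_pairwise_data(X, L):
    """Turn 21 students into 210 pair observations.

    Each observation is the vector of absolute differences of the 25
    characteristics for a pair of students; the target is that pair's
    likeability value.
    """
    features, targets = [], []
    for i, j in combinations(range(X.shape[0]), 2):   # 21*20/2 = 210 pairs
        features.append(np.abs(X[i] - X[j]))          # 25 absolute differences
        targets.append(L[i, j])                       # likeability for the pair
    return np.array(features), np.array(targets)

# usage (with made-up data of the right shape):
# X = np.random.rand(21, 25) * 100
# L = np.random.randint(0, 101, size=(21, 21))
# A, y = build_pairwise_data(X, L)   # A is 210 x 25, y has 210 entries
```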
Procedures for this go under the names of all-subsets regression and stepwise regression. See https://www.r-bloggers.com/variable-selection-using-automatic-methods/ for an overview. One way to compute these is to use the free, open-source statistical package R from https://www.r-project.org/.
For each possible selection of variables you can use linear regression to predict likeability from the vector of absolute differences. From that regression you get a measure of how good the prediction is, and so of whether that particular selection of variables was any good. All-subsets regression uses a variation on branch and bound to work out, for each N, the set of N variables that predicts best. Stepwise regression starts from a possibly incomplete selection of variables and performs a sort of hill climb, adding or removing one variable at each stage, trying all of the variables and choosing the change that gives the best prediction. Typically you start with no variables and add one at a time, or start with all variables and remove one at a time. Stepwise selection isn't guaranteed to find the absolute best selection that all-subsets regression would find, but all-subsets regression can be very expensive.
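For illustration, here is a greedy forward-selection sketch in Python (this stands in for the R procedures linked above; A and y are the 210 x 25 difference matrix and the 210 likeability values from the previous sketch, and in-sample R^2 is just one possible selection criterion):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def forward_stepwise(A, y, max_vars=5):
    """Greedy forward selection: repeatedly add the single characteristic
    (column of A) that most improves the in-sample R^2 of a linear fit."""
    selected, remaining, history = [], list(range(A.shape[1])), []
    for _ in range(max_vars):
        best_score, best_col = -np.inf, None
        for col in remaining:
            cols = selected + [col]
            score = LinearRegression().fit(A[:, cols], y).score(A[:, cols], y)
            if score > best_score:
                best_score, best_col = score, col
        selected.append(best_col)
        remaining.remove(best_col)
        history.append((list(selected), best_score))
    return history   # list of (selected columns, R^2) for each step
```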
From this you will get a best selection of variables (probably one best selection for each number of variables) and you may get some indication of statistical significance. You have broken so many rules about multiple testing and independence (inflating 21 observations to 210) that you shouldn't take any statistical significance seriously. If you want some idea of whether you are looking at real information or prettied-up random noise, automate the procedure and see what it looks like on fake data where there is no underlying effect at all, and perhaps on fake data where you have constructed data from which there is an underlying effect that you know about because you have constructed it. See also https://en.wikipedia.org/wiki/Bootstrapping_(statistics) and https://en.wikipedia.org/wiki/Resampling_(statistics)#Permutation_tests
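One rough way to automate the fake-data check is a permutation-style comparison (a sketch that reuses forward_stepwise from above; note it shuffles the 210 pair values directly and therefore ignores the pair-dependence issue just mentioned, so treat it as a sanity check rather than a proper test):

```python
import numpy as np

def null_score_distribution(A, y, n_reps=200, max_vars=5, seed=None):
    """Rerun the selection on data where any real effect has been destroyed
    by shuffling the likeability values, to see what R^2 pure noise achieves."""
    rng = np.random.default_rng(seed)
    null_scores = []
    for _ in range(n_reps):
        y_shuffled = rng.permutation(y)
        history = forward_stepwise(A, y_shuffled, max_vars=max_vars)
        null_scores.append(history[-1][1])   # R^2 of the final selection
    return np.array(null_scores)

# If the R^2 you get on the real data is not clearly larger than most of
# these null scores, you are probably looking at prettied-up noise.
```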

Related

What algorithm do I use to calculate voltage across a combination circuit?

I'm trying to programmatically calculate voltage changes over a very large circuit.
(This question may seem geared toward electronics, but it's really more about applying an algorithm over a set of data.)
To keep things simple, here is a complete circuit, with the voltages already calculated:
[circuit diagram with voltages shown]
I'm originally only given the battery voltage and the resistances:
[same circuit with only the battery voltage and resistances]
The issue I have is that voltage is calculated differently for parallel and series sections of a circuit.
A somewhat similar question was asked on SO.
Some formulas:
When resistors are in parallel:
Rtotal = 1/(1/R1 + 1/R2 + 1/R3 ... + 1/Rn)
When resistors are in series:
Rtotal = R1 + R2 + R3 ... + Rn
Ohm's Law:
V = IR
I = V/R
R = V/I
V is voltage (volts)
I is current (amps)
R is resistance (ohms)
Every tutorial I've found on the internet consists of people conceptually grouping together parallel circuits to get the total resistance, and then using that resistance to calculate the resistance in series.
This is fine for small examples, but it's difficult to derive an algorithm out of it for large scale circuits.
My question:
Given a matrix of all complete paths, is there a way for me to calculate all the voltage drops?
I currently have the system as a graph data structure.
All of the nodes are represented by (and can be looked up by) an id number.
So for the example above, if I run the traversals, I'll get back a list of paths like this:
[[0,1,2,4,0]
,[0,1,3,4,0]]
Each number can be used to look up the actual node and its corresponding data. What kind of transformations/algorithms do I need to perform on this set of data?
It's very likely that portions of the circuit will be compound, and those compound sections may find themselves being in parallel or series with other compound sections.
I think my problem is akin to this:
http://en.wikipedia.org/wiki/Series-parallel_partial_order
Some circuits cannot even be analyzed in terms of series and parallel, for example a circuit which includes the edges of a cube (there's some code at the bottom of that web page that might be helpful; I haven't looked at it). Another example that can't be analyzed into series/parallel is a pentagon/pentagram shape.
A more robust solution than thinking about series and parallel is to use Kirchhoff's laws.
1. Make variables for the currents in each linear section of the circuit.
2. Apply Kirchhoff's current law (KCL) to the nodes where linear sections meet.
3. Apply Kirchhoff's voltage law (KVL) to as many cycles as you can find.
4. Use Gaussian elimination to solve the resulting linear system of equations.
The tricky part is identifying cycles. In the example you give, there are three cycles: through battery and left resistor, battery and right resistor, and through left and right resistors. For planar circuits it's not too hard to find a complete set of cycles; for three dimensional circuits, it can be hard.
You don't actually need all the cycles. In the above example, two would be enough (corresponding to the two bounded regions into which the circuit divides the plane). Then you have three variables (currents in three linear parts of the circuit) and three equations (sum of currents at the top node where three linear segments meet, and voltage drops around two cycles). That is enough to solve the system for currents by Gaussian elimination, then you can calculate voltages from the currents.
If you throw in too many equations (e.g., currents at both nodes in your example, and voltages over three cycles instead of two), things will still work out: Gaussian elimination will just eliminate the redundancies and you'll still get the unique, correct answer. The real problem is if you have too few equations. For example, if you use KCL on the two nodes in your example and KVL around just one cycle, you'll have three equations, but one is redundant, so you'll only really have two independent equations, which is not enough. So I would say throw in every equation you can find and let Gaussian elimination sort it out.
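As a concrete illustration of "throw in every equation and let the solver sort it out", here is a minimal Python/NumPy sketch for the two-resistor example (the battery voltage and resistances are made-up values; with redundant rows the system is over-determined, so a least-squares solve stands in for Gaussian elimination):

```python
import numpy as np

# Example circuit: battery V with two resistors R_left, R_right in parallel.
# Unknown currents: i_bat (through the battery), i_left, i_right.
V, R_left, R_right = 9.0, 100.0, 200.0   # made-up values

# Each row is one equation in the unknowns [i_bat, i_left, i_right].
A = np.array([
    [ 1.0, -1.0,     -1.0    ],  # KCL at top node:   i_bat - i_left - i_right = 0
    [-1.0,  1.0,      1.0    ],  # KCL at bottom node (redundant with the row above)
    [ 0.0,  R_left,   0.0    ],  # KVL, battery + left resistor:  R_left*i_left  = V
    [ 0.0,  0.0,      R_right],  # KVL, battery + right resistor: R_right*i_right = V
    [ 0.0,  R_left,  -R_right],  # KVL, left/right resistor loop (also redundant)
])
b = np.array([0.0, 0.0, V, V, 0.0])

currents, *_ = np.linalg.lstsq(A, b, rcond=None)  # happily absorbs the redundant rows
i_bat, i_left, i_right = currents
print(i_bat, i_left, i_right)                     # 0.135, 0.09, 0.045 amps
drops = {"left": R_left * i_left, "right": R_right * i_right}  # voltage drops (9 V each)
```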
And hopefully you can restrict to planar circuits, for which it is easy to find a nice set of cycles. Otherwise you'll need a graph cycle enumeration algorithm. I'm sure you can find one if you need it.
Use a maximum flow algorithm (Dijkstra is your friend).
http://www.cs.princeton.edu/courses/archive/spr04/cos226/lectures/maxflow.4up.pdf
You pretend to be in front of a water flow problem (well, actually it IS a flow problem). You have to compute the flow of water on each segment (the current). Then you can easily compute the voltage drop (water pressure) across every resistor.
I think the way to go here would be something like this:
1. Sort all your paths into groups of the same length.
2. While there is more than one group, choose the group with the largest length and:
2a. Find two paths that differ in only one item.
2b. "Merge" them into a path whose length is smaller by one; how the merge works depends on the actual items that differ.
2c. Add the new path to the relevant group.
2d. If there are only paths with more than one differing item, merge the differing items so that only one item differs between the paths.
2e. When there is only one item left, find an item from a "lower" group (i.e. one with a smaller length) with minimum differences, and merge items to match.
3. When there is one group left with more than one item, keep doing step 2 until there is one group left with one item.
4. Calculate the value of that item directly.
This is very initial, but I think the main idea is clear.
Any improvements are welcome.

Neural Network-like Data Structure

So I'm working on a little side-project for the purpose of experimenting with genetic algorithms. The project involves two classes, Critters and Food. The critters gain hunger each tick and lose hunger when they find food. The Critters can move, and the Food is stationary. Each Critter has a genome which is just a string of randomly generated characters. The goal is that after several generations, the Critters will evolve specialized movement patterns that result in optimal food consumption.
Right now, each Critter is governed by a neural network. The neural network is initialized with weights and biases derived from the Critter's genome. The first input into the neural network is [0,0]. The neural network produces two outputs which dictate the direction of the Critter's x and y movement respectively. This output is used as the input for the neural network at the next tick. For example:
1: [0,0]->NN->[.598.., -.234...] // Critter moves right and up
2: [.598...,-.234...]->NN->[-.409...,-.232...] // Critter moves left and up
3: [-.409...,-.232...]->NN-> etc.
The problem is that, regardless of how the weights are initialized, the neural network is finding a sort of "fixed point." That is, after two or three iterations the output and input are practically the same so the Critter always moves in the same direction. Now I'm not training the neural net and I don't really want to. So what I'm looking for is an alternative method of generating the output.
More specifically, let's say I have n random weights generated by the genome. I need a relation determined by those n weights that can map (in the loosest sense of the word) 2 inputs in the range [-1,1] to 2 outputs in the same range. The main thing is that I want the weights to have a significant impact on the behavior of the function; I don't want it to be something like y = mx + b where we're only changing m and b.
I know that's a pretty vague description. At first I thought a neural network would be perfect, but it seems as though the inputs have virtually no effect on the outputs without training (which is fair, since neural networks are meant to be trained).
Any advice?
Just an idea.
You have f(genome) -> (w_1, w_2, ..., w_n), where f generates w based on the genome.
You could for example use a hash function h and compute [h(w_1, ..., w_(n/2)), h(w_(n/2+1), ..., w_n))].
Normally a hash function gives very different outputs for a small change in the input, but not always: you can look for hash functions that are continuous (small change in input, small change in output). This kind of function is used for similarity search; http://en.wikipedia.org/wiki/Locality_sensitive_hashing might provide some ideas. That way you could actually use the hash directly on the genome.
Otherwise, you could try to split the genome or the weights and give the splits different purposes. Let's say n = 4.
(w_1, w_2) affects x
(w_3, w_4) affects y
You could then compute x as (w_1 + w_2*Random_[-1,1])/2, where Random_[-1,1] is a random number from the interval [-1,1] and assuming that w_i \in [-1,1] for all i.
Similarly for y.
Your genetic algorithm would then optimize how fast and how randomly the Critters move in order to optimally find food. If you have more weights (or longer genome), you could try to come up with a fancier function in a similar spirit.
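A minimal Python sketch of that split-weights idea for n = 4 (the function form is just the one suggested above; the division by 2 keeps the outputs in [-1, 1] as long as the weights are):

```python
import random

def move(weights):
    """Map 4 genome-derived weights in [-1, 1] to an (x, y) step in [-1, 1].

    (w1, w2) drive x and (w3, w4) drive y: w1/w3 set the deterministic drift,
    w2/w4 set how strongly randomness is mixed in.
    """
    w1, w2, w3, w4 = weights
    x = (w1 + w2 * random.uniform(-1, 1)) / 2
    y = (w3 + w4 * random.uniform(-1, 1)) / 2
    return x, y

# usage: each tick, step the critter by move(critter.weights)
# dx, dy = move([0.8, 0.1, -0.3, 0.9])  # steady drift right, very random vertically
```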
This actually shows that with genetic algorithms the problem solving shifts to finding a good genome representation and a good fitness function so don't worry if you get stuck on it a bit.

"Covering" the space of all possible histogram shapes

There is a very expensive computation I must make frequently.
The computation takes a small array of numbers (with about 20 entries) that sums to 1 (i.e. the histogram) and outputs something that I can store pretty easily.
I have 2 things going for me:
1. I can accept approximate answers.
2. The "answers" change slowly. For example: [.1 .1 .8 0] and [.1 .1 .75 .05] will yield similar results.
Consequently, I want to build a look-up table of answers off-line. Then, when the system is running, I can look-up an approximate answer based on the "shape" of the input histogram.
To be precise, I plan to look-up the precomputed answer that corresponds to the histogram with the minimum Earth-Mover-Distance to the actual input histogram.
I can only afford to store about 80 to 100 precomputed (histogram , computation result) pairs in my look up table.
So, how do I "spread out" my precomputed histograms so that, no matter what the input histogram is, I'll always have a precomputed result that is "close"?
Finding N points in M-space that are a best spread-out set is more-or-less equivalent to hypersphere packing (1,2) and in general answers are not known for M>10. While a fair amount of research has been done to develop faster methods for hypersphere packings or approximations, it is still regarded as a hard problem.
It probably would be better to apply a technique like principal component analysis or factor analysis to as large a set of histograms as you can conveniently generate. The results of either analysis will be a set of M numbers such that linear combinations of histogram data elements weighted by those numbers will predict some objective function. That function could be the “something that you can store pretty easily” numbers, or could be case numbers. Also consider developing and training a neural net or using other predictive modeling techniques to predict the objective function.
Building on jwpat7's answer, I would apply k-means clustering to a huge set of randomly generated (and hopefully representative) histograms. This would ensure that your space is spanned with whatever number of exemplars (precomputed results) you can support, with roughly equal weighting for each cluster.
The trick, of course, will be generating representative data to cluster in the first place. If you can recompute from time to time, you can recluster based on the actual data in the system so that your clusters might get better over time.
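A hedged sketch of that clustering idea with scikit-learn (random Dirichlet draws stand in for "representative" histograms, expensive_computation is a placeholder callable you would supply, and 100 exemplars matches the storage budget in the question):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_lookup_table(expensive_computation, n_exemplars=100, n_bins=20,
                       n_samples=50_000, seed=0):
    """Cluster a large set of candidate histograms and precompute one
    answer per cluster centre."""
    rng = np.random.default_rng(seed)
    # Dirichlet samples are non-negative and sum to 1, like the input histograms.
    candidates = rng.dirichlet(np.ones(n_bins), size=n_samples)
    km = KMeans(n_clusters=n_exemplars, n_init=10, random_state=seed).fit(candidates)
    exemplars = km.cluster_centers_ / km.cluster_centers_.sum(axis=1, keepdims=True)
    table = [expensive_computation(h) for h in exemplars]
    return exemplars, table

def lookup(hist, exemplars, table):
    """Return the precomputed answer for the nearest exemplar.
    (Euclidean nearest neighbour here; swap in Earth Mover's Distance if needed.)"""
    i = np.argmin(np.linalg.norm(exemplars - hist, axis=1))
    return table[i]
```

Reclustering on real inputs later, as suggested above, just means rerunning build_lookup_table with the observed histograms in place of the Dirichlet samples.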
I second jwpat7's answer, but my very naive approach was to consider the count of items in each histogram bin as a y value, to consider the x values as just 0..1 in 20 steps, and then to obtain parameters a,b,c that describe x vs y as a cubic function.
To get a "covering" of the histograms I just iterated through "possible" values for each parameter.
e.g. to get 27 histograms to cover the "shape space" of my cubic histogram model I iterated the parameters through -1 .. 1, choosing 3 values linearly spaced.
Now, you could change the histogram model to be quartic if you think your data will often be represented that way, or whatever model you think is most descriptive, as well as generate however many histograms to cover. I used 27 because three partitions per parameter for three parameters is 3*3*3=27.
For a more comprehensive covering, like 100, you would have to choose your ranges for each parameter more carefully. The cube root of 100 isn't an integer, so the simple num_covers**(1/num_params) split wouldn't work, but for 3 parameters 4*5*5 would.
Since the actual values of the parameters can vary greatly and still produce the same shape, it would probably be best to store ratios of them for comparison instead, e.g. for my 3 parameters b/a and b/c.
Here is an 81 histogram "covering" using a quartic model, again with parameters chosen from linspace(-1,1,3):
Edit: Since you said your histograms are described by arrays of ~20 elements, I figured fitting the parameters would be very fast.
Edit 2: On second thought, I think using a constant term in the model is pointless; all that matters is the shape.
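A small sketch of the covering procedure described above, for the cubic model with no constant term (the linspace(-1, 1, 3) grid is the one mentioned; the shift-and-normalise step is my assumption about how to turn each curve into a valid histogram):

```python
import numpy as np
from itertools import product

def cubic_model_covering(n_bins=20, grid=np.linspace(-1, 1, 3)):
    """Generate covering histograms from a cubic shape model y = a*x^3 + b*x^2 + c*x.

    Iterates each of the three parameters over `grid` (3*3*3 = 27 shapes),
    then shifts and normalises each curve so it is non-negative and sums to 1.
    """
    x = np.linspace(0, 1, n_bins)
    histograms = []
    for a, b, c in product(grid, grid, grid):
        y = a * x**3 + b * x**2 + c * x
        y = y - y.min() + 1e-9          # make non-negative (flat case stays valid)
        histograms.append(y / y.sum())  # normalise to sum to 1 like a histogram
    return np.array(histograms)

covering = cubic_model_covering()   # 27 histograms with 20 bins each
```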

Genetic algorithm on a knapsack-like optimization problem

I have an optimization problem I'm trying to solve using a genetic algorithm. Basically, there is a list of 10 bounded real-valued variables (-1 <= x <= 1), and I need to maximize some function of that list. The catch is that at most 4 variables in the list may be != 0 (the subset condition).
Mathematically speaking:
For some function f: [-1, 1]^10 -> R
min f(X)
s.t.
|{var in X with var != 0}| <= 4
Some background on f: The function is NOT similar to any kind of knapsack objective function like Sum x*weight or anything like that.
What I have tried so far:
Just a basic genetic algorithm over the genome [-1, 1]^10 with 1-point crossover and some Gaussian mutation on the variables. I tried to encode the subset condition in the fitness function by using just the first 4 nonzero values (zero as in close enough to 0). This approach doesn't work that well: the algorithm gets stuck on the first 4 variables and never uses values beyond them. I saw some kind of GA for the 0-1 knapsack problem where this approach worked well, but apparently it only works with binary variables.
What would you recommend me to try next?
If your fitness function is quick and dirty to evaluate then it's cheap to increase your total population size.
The problem you are running into is that you're trying to select two completely different things simultaneously. You want to select which 4 genomes you care about, and then what values are optimal.
I see two ways to do this.
You create 210 different "species" (10 choose 4 = 210). Each species is defined by which 4 of the 10 genome positions it is allowed to use. Then you can run a genetic algorithm on each species separately (either serially, or in parallel within a cluster).
Each organism has only 4 genome values (when creating random offspring, choose which positions to use at random). When two organisms mate, you only cross over genome positions that match. If your pair of organisms has 3 positions in common, you could randomly pick which parent's remaining position to keep as the 4th. You could also, as a heuristic, avoid mating organisms that appear too genetically different (i.e. a pair that shares two or fewer positions may make for a bad offspring).
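A minimal Python sketch of that second idea, with each organism stored as 4 (variable index, value) genes; the crossover details are my guess at one reasonable implementation, not the answer's exact scheme:

```python
import random

N_VARS, N_ACTIVE = 10, 4

def random_organism():
    """4 genes, each a (variable index -> value) entry; all other variables are 0."""
    indices = random.sample(range(N_VARS), N_ACTIVE)
    return {i: random.uniform(-1, 1) for i in indices}

def to_vector(org):
    """Expand the sparse genome into the full 10-variable vector for f."""
    return [org.get(i, 0.0) for i in range(N_VARS)]

def mate(a, b):
    """Cross over only on shared positions; fill the rest from either parent."""
    shared = set(a) & set(b)
    child = {i: random.choice((a[i], b[i])) for i in shared}
    leftovers = [(i, v) for parent in (a, b) for i, v in parent.items() if i not in child]
    random.shuffle(leftovers)
    for i, v in leftovers:
        if len(child) == N_ACTIVE:
            break
        child.setdefault(i, v)
    return child

def compatible(a, b):
    """Heuristic from above: skip pairs sharing two or fewer positions."""
    return len(set(a) & set(b)) >= 3
```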
I hope that gives you some ideas you can work from.
You could try a "pivot"-style step: choose one of the existing nonzero values to become zero, and replace it by setting one of the existing zero values to become nonzero. (My "pivot" term comes from linear programming, in which a pivot is the basic step in the simplex method).
The simplest case would be to be even-handedly random in the selection of each of these values; you can choose a random value, or multiple values, for the new nonzero variable. A more local kind of step would be to use a Gaussian step only on the existing nonzero variables, but if one of those variables crosses zero, spawn variations that pivot to one of the zero values. (Note that these are not mutually exclusive, as you can easily add both kinds of steps.)
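For example, the uniformly random version of the pivot step might look like this (a sketch operating on the full 10-element vector):

```python
import random

def pivot_mutation(x, low=-1.0, high=1.0):
    """Zero out one currently nonzero variable and give a currently zero
    variable a fresh random value, keeping the nonzero count unchanged."""
    x = list(x)
    nonzero = [i for i, v in enumerate(x) if v != 0.0]
    zero = [i for i, v in enumerate(x) if v == 0.0]
    if nonzero and zero:
        x[random.choice(nonzero)] = 0.0
        x[random.choice(zero)] = random.uniform(low, high)
    return x
```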
If you have any information about the local behavior of your fitness score, you can try to use that to guide your choice. Just because actual evolution doesn't look at the fitness function, doesn't mean you can't...
Does your GA solve the problem well without the subset constraint? If not, you might want to tackle that first.
Secondly, you might make your constraint soft instead of hard: penalize a solution's fitness for each nonzero variable it has beyond 4. (You might start by loosening the constraint even further, allowing 9 nonzero variables, then 8, etc., making sure the GA can handle those problem variants before making the problem more difficult.)
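For example, a soft-constraint fitness might look like this (a sketch assuming the fitness is being maximised; the penalty weight and the near-zero threshold are arbitrary knobs to tune):

```python
def penalised_fitness(x, f, max_nonzero=4, penalty=100.0, eps=1e-6):
    """Evaluate f on the full vector, then subtract a penalty for every
    variable beyond the allowed number of (effectively) nonzero entries."""
    n_nonzero = sum(1 for v in x if abs(v) > eps)
    return f(x) - penalty * max(0, n_nonzero - max_nonzero)
```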
Thirdly, maybe try 2-point or multi-point crossover instead of 1-point.
Hope that helps.
-Ted

Frequency determination from sparsely sampled data

I'm observing a sinusoidally-varying source, i.e. f(x) = a sin(bx + d) + c, and want to determine the amplitude a, offset c and period/frequency b; the shift d is unimportant. Measurements are sparse, with each source measured typically between 6 and 12 times, and observations are at (effectively) random times, with intervals between observations roughly between a quarter and ten times the period (just to stress, the spacing of observations is not constant for each source). In each source the offset c is typically quite large compared to the measurement error, while amplitudes vary: at one extreme they are only on the order of the measurement error, while at the other extreme they are about twenty times the error. Hopefully that fully outlines the problem; if not, please ask and I'll clarify.
Thinking naively about the problem, the average of the measurements will be a good estimate of the offset c, while half the range between the minimum and maximum value of the measured f(x) will be a reasonable estimate of the amplitude, especially as the number of measurements increase so that the prospects of having observed the maximum offset from the mean improve. However, if the amplitude is small then it seems to me that there is little chance of accurately determining b, while the prospects should be better for large-amplitude sources even if they are only observed the minimum number of times.
Anyway, I wrote some code to do a least-squares fit to the data for the range of periods, and it identifies best-fit values of a, b and d quite effectively for the larger-amplitude sources. However, I see it finding a number of possible periods, and while one is the 'best' (in as much as it gives the minimum error-weighted residual) in the majority of cases the difference in the residuals for different candidate periods is not large. So what I would like to do now is quantify the possibility that the derived period is a 'false positive' (or, to put it slightly differently, what confidence I can have that the derived period is correct).
Does anybody have any suggestions on how best to proceed? One thought I had was to use a Monte-Carlo algorithm to construct a large number of sources with known values for a, b and c, construct samples that correspond to my measurement times, fit the resultant samples with my fitting code, and see what percentage of the time I recover the correct period. But that seems quite heavyweight, and I'm not sure that it's particularly useful other than giving a general feel for the false-positive rate.
And any advice for frameworks that might help? I have a feeling this is something that can likely be done in a line or two in Mathematica, but (a) I don't know it, and (b) I don't have access to it. I'm fluent in Java, competent in IDL and can probably figure out other things...
This looks tailor-made for working in the frequency domain. Apply a Fourier transform and identify the frequency based on where the power is located, which should be clear for a sinusoidal source.
ADDENDUM: To get an idea of how accurate your estimate is, I'd try a resampling approach such as cross-validation. I think this is the direction you're heading with the Monte Carlo idea; a lot of work is out there, so hopefully that's a wheel you won't need to reinvent.
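Since the observations are at irregular times, a plain FFT can't be applied directly; one common frequency-domain tool for unevenly sampled data is the Lomb-Scargle periodogram (a specific substitute for the generic Fourier transform suggested above, and my assumption rather than something this answer names). A minimal SciPy sketch:

```python
import numpy as np
from scipy.signal import lombscargle

def scan_frequencies(t, y, b_min, b_max, n_freqs=2000):
    """Scan candidate angular frequencies b and return the periodogram peak.

    t: irregular observation times; y: measured values.
    b_min/b_max: range of plausible b values in radians per unit time (b_min > 0).
    """
    freqs = np.linspace(b_min, b_max, n_freqs)
    power = lombscargle(t, y - y.mean(), freqs)   # subtract the mean as a rough c estimate
    return freqs, power, freqs[np.argmax(power)]
```

The location of the strongest peak gives a candidate b, and the relative height of the secondary peaks gives some feel for how ambiguous that choice is.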
The trick here is to do what might at first seem to make the problem more difficult. Rewrite f in the equivalent form:
f(x) = a1*sin(b*x) + a2*cos(b*x) + c
This is based on the angle-addition identity for sin(u + v).
Recognize that if b is known, then the problem of estimating {a1, a2, c} is a simple LINEAR regression problem. So all you need to do is use a 1-variable minimization tool, working on the value of b, to minimize the sum of squares of the residuals from that linear regression model. There are many such univariate optimizers to be found.
Once you have those parameters, it is easy to find the parameter a in your original model, since that is all you care about.
a = sqrt(a1^2 + a2^2)
The scheme I have described is called partitioned least squares.
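A compact Python sketch of that partitioned least-squares scheme (the grid scan over b is my choice of univariate search, chosen because the residual has many local minima; error weighting is omitted for brevity):

```python
import numpy as np

def fit_sinusoid(t, y, b_grid):
    """Partitioned least squares: for each trial frequency b, {a1, a2, c} is a
    plain linear least-squares problem, so only b needs a 1-D search.

    b_grid: candidate values of b to scan.
    """
    best = None
    for b in b_grid:
        # design matrix for f(t) = a1*sin(b*t) + a2*cos(b*t) + c
        M = np.column_stack((np.sin(b * t), np.cos(b * t), np.ones_like(t)))
        coeffs, *_ = np.linalg.lstsq(M, y, rcond=None)
        sse = np.sum((M @ coeffs - y) ** 2)
        if best is None or sse < best[0]:
            best = (sse, b, coeffs)
    sse, b, (a1, a2, c) = best
    return {"a": np.hypot(a1, a2), "b": b, "c": c, "sse": sse}

# usage: params = fit_sinusoid(times, values, np.linspace(0.1, 10.0, 5000))
```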
If you have a reasonable estimate of the size and the nature of your noise (e.g. white Gaussian with SD sigma), you can (a) invert the Hessian matrix to get an estimate of the error in your fitted parameters, and (b) fairly easily derive a significance statistic for your fit residuals.
For (a), compare http://www.physics.utah.edu/~detar/phys6720/handouts/curve_fit/curve_fit/node6.html
For (b), assume that your measurement errors are independent and thus the variance of their sum is the sum of their variances.
