Not sure if this is a 'mathematical' problem or a 'numerical/computational' problem, but I thought I would post it here in the hope that someone can point me in the right direction.
Consider a function g(t) in the time domain that consists of a sum of time-scaled versions of an unknown basis function f(t), i.e.
g(t) = f(a_0 t) + f(a_1 t) + ... + f(a_N t)
where N and the time-scaling coefficients (a_0, a_1, ..., a_N) are known. We can assume all coefficients are positive and no greater than one, i.e. 0 < a_i <= 1.
Observations of g(t) are made at M uniformly sampled discrete points, i.e. t = [0, 1, ..., M-1]. It is assumed that g(t) and f(t) are continuous and otherwise "well behaved".
The objective is to obtain an estimate of the function f(t) at the sample points.
One approach I have tried is to express both f(t) and g(t) as polynomials of degree K, equate polynomial coefficients, and solve, i.e.
g(t) = \sum_{k=0}^{K} g_k t^k
f(t) = \sum_{k=0}^{K} f_k t^k
and hence
f_K = g_K / (a_0^K + a_1^K + ... + a_N^K)
f_{K-1} = g_{K-1} / (a_0^{K-1} + a_1^{K-1} + ... + a_N^{K-1})
... etc. ...
f_0 = g_0 / (N + 1)
Directly implementing this approach (naively?) suffers from the sensitivity of estimating g(t) with a degree-K polynomial. Does anyone know of other approaches to this problem that may be more numerically stable?
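For concreteness, here is a minimal sketch of the coefficient-matching approach just described, in Python/numpy; the exponential f_true used to synthesize g is made up purely for illustration, and polyfit stands in for whatever polynomial fit you use.

import numpy as np

M, K = 64, 6                       # number of samples, polynomial degree
a = np.array([1.0, 0.7, 0.4])      # known time-scaling coefficients, 0 < a_i <= 1
t = np.arange(M, dtype=float)

f_true = lambda u: np.exp(-u / 20.0)           # hypothetical unknown f, for testing only
g = sum(f_true(ai * t) for ai in a)            # observed g(t)

# Fit g with a degree-K polynomial (this is the numerically sensitive step).
g_coeffs = np.polynomial.polynomial.polyfit(t, g, K)    # g_0 ... g_K, low to high

# Divide each g_k by sum_i a_i^k to recover f's coefficients.
k = np.arange(K + 1)
f_coeffs = g_coeffs / np.sum(a[:, None] ** k, axis=0)

f_est = np.polynomial.polynomial.polyval(t, f_coeffs)   # estimate of f at the samples
print(np.max(np.abs(f_est - f_true(t))))                # fit error, for inspection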
I am working on a genetic algorithm. Here is how it works:
Input: a list of 2D points
Input: the degree of the curve
Output: the equation of the curve that fits the points best (trying to minimize the sum of vertical distances from each point's y to the curve)
The algorithm finds good equations for simple straight lines and for degree-2 equations.
But for 4 points and degree-3 equations and beyond, it gets more complicated. I cannot find the right combination of parameters: sometimes I have to wait 5 minutes and the curve found is still very bad. I have tried modifying many parameters, from population size to the number of parents selected...
Are there well-known parameter combinations or theorems in GA programming that can help me?
Thank you! :)
Based on what is given, you would need polynomial interpolation, in which the degree of the equation is the number of points minus 1:
n = (Number of points) - 1
Now, having said that, let's assume you have 5 points that need to be fitted, which I am going to define in a variable:
var points = [[0,0], [2,3], [4,-1], [5,7], [6,9]]
Note that the array of points has been ordered by the x values, which you need to do.
Then the equation would be:
f(x) = a1*x^4 + a2*x^3 + a3*x^2 + a4*x + a5
Now, based on the definition (https://en.wikipedia.org/wiki/Polynomial_interpolation#Constructing_the_interpolation_polynomial), the coefficients are computed by solving the linear (Vandermonde) system described on the referenced page.
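A sketch of that construction, in Python/numpy rather than JavaScript, solving the Vandermonde system directly for the example points above:

import numpy as np

points = [[0, 0], [2, 3], [4, -1], [5, 7], [6, 9]]   # ordered by x, as noted
x = np.array([p[0] for p in points], dtype=float)
y = np.array([p[1] for p in points], dtype=float)

V = np.vander(x)                   # rows are [x^4, x^3, x^2, x, 1]
coeffs = np.linalg.solve(V, y)     # a1..a5, highest power first
print(coeffs)                      # f(x) = a1*x^4 + a2*x^3 + a3*x^2 + a4*x + a5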
It is not that complicated: for polynomial interpolation of degree n you get the following equation:
p(x) = c0 + c1 * x + c2 * x^2 + ... + cn * x^n = y
This means we need n + 1 genes for the coefficients c0 to cn.
The fitness function is the sum of all squared distances from the points to the curve; below is the formula for the squared distance. This way a smaller value is obviously better; if you don't want that, you can take the inverse (1 / sum of squared distances):
d_squared(xi, yi) = (yi - p(xi))^2
I think for faster convergence you could limit the mutation, e.g. when mutating, choose a new value with 20% probability uniformly between min and max (e.g. -1000 and 1000), and with 80% probability multiply the old value by a random factor between 0.8 and 1.2.
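A sketch of these two pieces in Python; the 20%/80% split and the (-1000, 1000) range come from the text, while the function names and everything else are minimal stand-ins (population handling is omitted):

import random

def fitness(coeffs, points):
    # sum of squared vertical distances; smaller is better
    total = 0.0
    for xi, yi in points:
        p = sum(c * xi**k for k, c in enumerate(coeffs))   # p(xi), genes c0..cn
        total += (yi - p) ** 2
    return total

def mutate(value, lo=-1000.0, hi=1000.0):
    if random.random() < 0.2:                  # 20%: resample uniformly in [lo, hi]
        return random.uniform(lo, hi)
    return value * random.uniform(0.8, 1.2)    # 80%: small multiplicative nudge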
So I am learning about the k-means algorithm for clustering and have seen a couple of different cost functions that can be used, in particular, $$J_{avg} = \sum_{i=1}^{k}\sum_{x\in C_i}d(x,m_i)^2,$$ $$J_{IC}=\sum_{i=1}^{k}\frac{1}{|C_i|}\sum_{x\in C_i}\sum_{x'\in C_i}d(x,x')^2.$$ Now I am trying to show that if $m_i=\frac{1}{|C_i|}\sum_{x\in C_i}x$ then $J_{IC}=2J_{avg}.$ This makes intuitive sense to me since it relates the average distance to the centroid and the average distance between two points (which should be double that to the center). Would appreciate any help, thanks!
For the cost functions to be equivalent they don't have to be exactly equal, just monotonically related so that optimizing one means optimizing the other.
$$\sum_{i,j} (X_i - X_j)^2 = \sum_{i,j} (X_i - x + x - X_j)^2 = \sum_{i,j}\left[(X_i - x)^2 + (X_j - x)^2 + 2(X_i - x)\cdot(x - X_j)\right].$$
If $x$ is the mean of the $X_i$ then $\sum_j (x - X_j) = 0$, so the dot-product term goes away and you get the sort of connection between the sum of squared distances from the mean and the sum of squared distances between any two points that I think you need.
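A quick numerical check of the identity for a single cluster, in Python/numpy (random data, so only the identity itself is being tested):

import numpy as np

X = np.random.randn(100, 2)        # one cluster C of 100 points in R^2
m = X.mean(axis=0)                 # centroid

J_avg = np.sum((X - m) ** 2)
diffs = X[:, None, :] - X[None, :, :]          # all pairwise differences x - x'
J_IC = np.sum(diffs ** 2) / len(X)             # (1/|C|) * sum_{x,x'} d(x,x')^2
print(J_IC, 2 * J_avg)                         # agree up to rounding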
So I've been studying sorting algorithms.
I am stuck on finding the complexity of merge sort.
Can someone please explain to me how the height of the recursion tree is h = 1 + lg(n)?
If you keep dividing n by 2, you'll eventually get to 1.
Namely, it takes log2(n) divisions by 2 to make this happen, by definition of the logarithm.
Every time we divide by 2, we add a new level to the recursion tree.
Add that to the root level (which didn't require any divisions), and we have log2(n) + 1 levels total.
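A tiny Python illustration of that counting argument (exact for powers of 2):

import math

def levels(n):
    count = 1            # the root level requires no division
    while n > 1:
        n //= 2          # each halving adds one level to the tree
        count += 1
    return count

for n in [1, 2, 8, 1024]:
    print(n, levels(n), int(math.log2(n)) + 1)   # the two counts agree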
Here's a cooler proof. Notice that, rearranging, we have T(2n) - 2 T(n) = 2 c n.
If n = 2^k, then we have T(2^(k+1)) - 2 T(2^k) = 2 c 2^k.
Let's simplify the mess. Define U(k) = T(2^k) / (2c).
Then we have U(k + 1) - 2 U(k) = 2^k, or, if we define U'(k) = U(k + 1) - U(k):
U'(k) - U(k) = 2^k
k is discrete here, but we can let it be continuous, and if we do, then U' is the derivative of U.
At that point the solution is obvious: if you've ever taken derivatives, then you know that if the difference between a function and its derivative is exponential, the function itself has to be exponential (since only in that case will the derivative be some multiple of the function).
Once you know U(k) is exponential, you can plug in an exponential with unknown coefficients, solve for them, and substitute back to recover T.
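To make that concrete, here is a small Python check that the recurrence T(n) = 2T(n/2) + cn with T(1) = c has the closed form T(n) = cn(log2(n) + 1), using c = 1 and powers of 2:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(1) = c, T(n) = 2*T(n/2) + c*n, with c = 1
    return 1 if n == 1 else 2 * T(n // 2) + n

for k in range(11):
    n = 2 ** k
    assert T(n) == n * (math.log2(n) + 1)   # closed form holds exactly
print("ok")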
The problem:
N points are given on a 2-dimensional plane. What is the maximum number of points on the same straight line?
The problem has an O(N^2) solution: go through each point and find the number of points which have the same dx/dy relative to the current point. Store the dx/dy ratios in a hash map for efficiency.
Is there a better solution to this problem than O(N^2)?
There is likely no solution to this problem that is significantly better than O(n^2) in a standard model of computation.
The problem of finding three collinear points reduces to the problem of finding the line that goes through the most points, and finding three collinear points is 3SUM-hard, meaning that solving it in less than O(n^2) time would be a major theoretical result.
See the previous question on finding three collinear points.
For your reference (using the known proof), suppose we want to answer a 3SUM instance, such as finding x, y, z in a list X such that x + y + z = 0. If we had a fast algorithm for the collinear-point problem, we could use that algorithm to solve the 3SUM problem as follows.
For each x in X, create the point (x, x^3) (for now we assume the elements of X are distinct). Next, check whether there exist three collinear points among the created points.
To see that this works, note that if x + y + z = 0 then the slope of the line from x to y is
(y^3 - x^3) / (y - x) = y^2 + yx + x^2
and the slope of the line from x to z is
(z^3 - x^3) / (z - x) = z^2 + zx + x^2 = (-(x + y))^2 - (x + y)x + x^2
= x^2 + 2xy + y^2 - x^2 - xy + x^2 = y^2 + yx + x^2
Conversely, if the slope from x to y equals the slope from x to z then
y^2 + yx + x^2 = z^2 + zx + x^2,
which implies that
(y - z) (x + y + z) = 0,
so either y = z or z = -x - y, which suffices to prove that the reduction is valid.
If there are duplicates in X, you first check whether x + 2y = 0 for any x and duplicate element y (in linear time using hashing or O(n lg n) time using sorting), and then remove the duplicates before reducing to the collinear point-finding problem.
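A sketch of the reduction in Python; note that the collinearity search below is a brute-force O(n^3) stand-in for whatever fast collinear-point oracle is being assumed, so it only illustrates the mapping, not the speedup:

from itertools import combinations

def collinear(p, q, r):
    # exact integer cross-product test, no division
    return (q[0] - p[0]) * (r[1] - p[1]) == (r[0] - p[0]) * (q[1] - p[1])

X = [-4, -1, 2, 3, 5]                  # example 3SUM instance: -4 + (-1) + 5 = 0
pts = [(x, x ** 3) for x in X]         # lift each x onto the curve y = x^3
print(any(collinear(p, q, r) for p, q, r in combinations(pts, 3)))   # True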
If you limit the problem to lines passing through the origin, you can convert the points to polar coordinates (angle, distance from origin) and sort them by angle. All points with the same angle lie on the same line. O(n log n)
I don't think there is a faster solution in the general case.
The Hough Transform can give you an approximate solution. It is approximate because the binning technique has a limited resolution in parameter space, so the maximum bin will give you some limited range of possible lines.
Here is another O(n^2) solution, with pseudocode. The idea is to create a hash table with the line itself as the key. A line is defined by the slope between two points together with the point where it cuts the x-axis (or, if it never does, the point where it cuts the y-axis).
The solution assumes a language like Java or C#, where an object's equals and hashCode methods are used by the hashing function.
Create an object (call it SlopeObject) with 3 fields:
Slope // can be Infinity
Point of interception with the x-axis -- poix // will be (Infinity, some y value) or (x value, 0)
Count
poix is an (x, y) pair. If the line crosses the x-axis, poix will be (some number, 0). If the line is parallel to the x-axis, then poix = (Infinity, some number), where the y value is where the line crosses the y-axis.
Override the equals method so that 2 objects are equal if their Slope and poix are equal.
Override hashCode with a function that derives a hash from the combined values of Slope and poix. Some pseudocode below:
HashMap map;
foreach (point a in the array) {
    foreach (every other point b) {
        slope = calculateSlope(a, b);
        poix = calculateXInterception(a, b);
        SlopeObject so = new SlopeObject(slope, poix, 1); // slope, poix, and initial count 1
        SlopeObject inMapSlopeObj = map.get(so);
        if (inMapSlopeObj == null) {
            map.put(so, so); // first time this line is seen
        } else {
            inMapSlopeObj.setCount(inMapSlopeObj.getCount() + 1);
        }
    }
}
SlopeObject maxCounted = getObjectWithMaxCount(map);
print("line is through " + maxCounted.poix + " with slope " + maxCounted.slope);
Move to the dual plane using the point-line duality transform: the point p = (a, b) maps to the line p*: y = a*x + b.
Now use a line sweep algorithm to find all intersection points; a Bentley-Ottmann sweep takes O((n + k) log n) time, where k is the number of intersections.
(If you have points that are one above the other, just rotate all the points by some small angle.)
Each intersection point in the dual plane corresponds to a line through two of the points in the primal plane.
To whoever said that 3SUM reduces to this problem and therefore its complexity is O(n^2): please note that the best known complexity of 3SUM is slightly below n^2.
Please check https://en.wikipedia.org/wiki/3SUM and also read
https://tmc.web.engr.illinois.edu/reduce3sum_sosa.pdf
As already mentioned, there probably isn't a way to solve the general case of this problem better than O(n^2). However, if you assume a large number of points lie on the same line (say the probability that a random point in the set of points lie on the line with the maximum number of points is p) and don't need an exact algorithm, a randomized algorithm is more efficient.
maxPoints = 0
Repeat for k iterations:
    1. Pick 2 distinct points uniformly at random.
    2. maxPoints = max(maxPoints, number of points that lie on the
       line defined by the 2 points chosen in step 1)
Note that in the first step, if you pick 2 points which lie on the line with the maximum number of points, you'll get the optimal solution. Assuming n is very large (i.e. we can treat the probability of finding 2 desirable points as sampling with replacement), the probability of this happening is p^2. Therefore the probability of still having a suboptimal solution after k iterations is (1 - p^2)^k.
Suppose you can tolerate a false negative rate err. Then this algorithm runs in O(nk) = O(n * log(err) / log(1 - p^2)). If both n and p are large enough, this is significantly more efficient than O(n^2). (For example, suppose n = 1,000,000 and you know at least 100,000 points lie on the same line, so p = 0.1. Then n^2 requires on the order of 10^12 operations, while the randomized algorithm requires on the order of 10^9 operations to get an error rate below 5*10^-5.)
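A sketch of the randomized scheme in Python; the exact cross-product test avoids floating-point slope issues, and k is whatever the (1 - p^2)^k bound above gives you:

import random

def count_on_line(points, p, q):
    # O(n): count points r collinear with p and q (exact integer test)
    return sum(1 for r in points
               if (q[0] - p[0]) * (r[1] - p[1]) == (r[0] - p[0]) * (q[1] - p[1]))

def max_collinear_randomized(points, k):
    best = 0
    for _ in range(k):
        p, q = random.sample(points, 2)                    # step 1
        best = max(best, count_on_line(points, p, q))      # step 2
    return best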
It is unlikely for a $o(n^2)$ algorithm to exist, since the problem (of even checking if 3 points in R^2 are collinear) is 3Sum-hard (http://en.wikipedia.org/wiki/3SUM)
This is not a solution better than O(n^2), but you can do the following (see the sketch below):
1. For each point, translate it to (0, 0), and apply the equivalent translation to all the other points by moving them the same x, y distance needed to move the originally chosen point.
2. Convert each translated point to its angle with respect to the new (0, 0).
3. Keep track of the maximum number of points (MSN) that share each angle.
4. Choose the maximum stored number (MSN) over all choices of the first point; that is the solution.
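A sketch of those steps in Python; binning by floating-point angle is approximate, so exact inputs may want a rational-slope key instead:

import math
from collections import Counter

def max_collinear_by_angle(points):
    best = 1
    for i, (px, py) in enumerate(points):          # step 1: anchor each point at (0, 0)
        angles = Counter()
        for j, (qx, qy) in enumerate(points):
            if i != j:
                # step 2: angle mod pi folds opposite rays onto the same line
                angles[math.atan2(qy - py, qx - px) % math.pi] += 1
        if angles:
            best = max(best, max(angles.values()) + 1)   # steps 3-4: +1 for the anchor
    return best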
Using assorted matrix math, I've solved a system of equations resulting in coefficients for a polynomial of degree 'n'
Ax^(n-1) + Bx^(n-2) + ... + Z
I then evaluate the polynomial over a given x range; essentially I'm rendering the polynomial curve. Now here's the catch: I've done this work in one coordinate system, which we'll call "data space". Now I need to present the same curve in another coordinate space. It is easy to transform input/output to and from the coordinate spaces, but the end user is only interested in the coefficients [A, B, ..., Z], since they can reconstruct the polynomial on their own. How can I present a second set of coefficients [A', B', ..., Z'] which represent the same shaped curve in the different coordinate system?
If it helps, I'm working in 2D space, plain old x's and y's. I also feel like this may involve multiplying the coefficients by a transformation matrix. Would it somehow incorporate the scale/translation factor between the coordinate systems? Would it be the inverse of this matrix? I feel like I'm headed in the right direction...
Update: the coordinate systems are linearly related. Would have been useful info, eh?
The problem statement is slightly unclear, so first I will clarify my own interpretation of it:
You have a polynomial function
f(x) = C_n x^n + C_{n-1} x^{n-1} + ... + C_0
[I changed A, B, ..., Z into C_n, C_{n-1}, ..., C_0 to more easily work with the linear algebra below.]
Then you also have a transformation such as z = ax + b that you want to use to find coefficients for the same polynomial, but in terms of z:
f(z) = D_n z^n + D_{n-1} z^{n-1} + ... + D_0
This can be done pretty easily with some linear algebra. In particular, you can define an (n+1)×(n+1) matrix T which allows us to do the matrix multiplication
d = T * c ,
where d is a column vector with top entry D_0 and last entry D_n, the column vector c is similar for the C_i coefficients, and the matrix T has (i,j)-th [i-th row, j-th column] entry t_ij given by
t_ij = (j choose i) a^i b^(j-i),
where (j choose i) is the binomial coefficient, which is 0 when i > j. Also, unlike standard matrices, i and j each range from 0 to n (usually you start at 1).
This is basically a nice way to write out the expansion and re-compression of the polynomial when you plug in z=ax+b by hand and use the binomial theorem.
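A sketch of this in Python/numpy, with a spot check that applying T to the coefficients reproduces f(ax + b); the example polynomial and values are made up:

import numpy as np
from math import comb

def transform_matrix(n, a, b):
    # t[i][j] = (j choose i) * a^i * b^(j-i), zero when i > j
    T = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        for j in range(i, n + 1):
            T[i, j] = comb(j, i) * a**i * b**(j - i)
    return T

# spot check with f(x) = 1 + 2x + 3x^2 and z = 2x + 5
c = np.array([1.0, 2.0, 3.0])                      # C_0, C_1, C_2
d = transform_matrix(2, 2.0, 5.0) @ c              # D_0, D_1, D_2
x = 1.7
assert abs(np.polynomial.polynomial.polyval(x, d)
           - np.polynomial.polynomial.polyval(2 * x + 5, c)) < 1e-9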
If I understand your question correctly, there is no guarantee that the function will remain polynomial after you change coordinates. For example, let y=x^2, and the new coordinate system x'=y, y'=x. Now the equation becomes y' = sqrt(x'), which isn't polynomial.
Tyler's answer is the right one if you have to compute this change of variable z = ax + b many times (that is, for many different polynomials). On the other hand, if you have to do it just once, it is much faster to combine the computation of the matrix coefficients with the final evaluation. The best way to do that is to symbolically evaluate your polynomial at the point (ax + b) by Horner's method:
you store the polynomial coefficients in a vector V (at the beginning, all coefficients are zero), and for i = n down to 0, you multiply V by (ax + b) and add C_i.
Adding C_i means adding it to the constant term.
Multiplying by (ax + b) means multiplying all coefficients by b into a vector K1, multiplying all coefficients by a and shifting them up one degree (away from the constant term) into a vector K2, and putting K1 + K2 back into V.
This will be easier to program, and faster to compute.
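A sketch of that procedure in Python (input coefficients given highest power first, result returned constant term first; the function name is just illustrative):

def substitute(coeffs_high_to_low, a, b):
    # symbolically evaluate the polynomial at (a*x + b) by Horner's method
    V = [coeffs_high_to_low[0]]                # start from the leading coefficient
    for c in coeffs_high_to_low[1:]:           # V <- V * (a*x + b) + c
        K1 = [b * v for v in V] + [0.0]        # all coefficients times b
        K2 = [0.0] + [a * v for v in V]        # all coefficients times a, shifted up one degree
        V = [k1 + k2 for k1, k2 in zip(K1, K2)]
        V[0] += c                              # add c to the constant term
    return V                                   # coefficients of f(a*x + b), constant term first

print(substitute([3.0, 2.0, 1.0], 2.0, 5.0))   # 3x^2+2x+1 at (2x+5) -> [86.0, 64.0, 12.0]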
Note that changing y into w = cy+d is really easy. Finally, as mattiast points out, a general change of coordinates will not give you a polynomial.
Technical note: if you still want to compute the matrix T (as defined by Tyler), you should compute it using a weighted version of Pascal's rule (this is what the Horner computation does implicitly):
t_{i,j} = b * t_{i,j-1} + a * t_{i-1,j-1}
This way, you compute it simply, column after column, from left to right.
You have the equation:
y = Ax^(n-1) + Bx^(n-2) + ... + Z
In xy space, and you want it in some x'y' space. What you need are transformation functions f(x) = x' and g(y) = y' (or h(x') = x and j(y') = y). In the first case you need to solve for x and for y; once you have x and y, you can substitute those results into your original equation and solve for y'.
Whether or not this is trivial depends on the complexity of the functions used to transform from one space to another. For example, equations such as:
5x = x' and 10y = y'
are extremely easy to solve, giving the result
y' = (10A/5^(n-1))x'^(n-1) + (10B/5^(n-2))x'^(n-2) + ... + 10Z
If the input spaces are linearly related, then yes, a matrix should be able to transform one set of coefficients to another. For example, if you had your polynomial in your "original" x-space:
ax^3 + bx^2 + cx + d
and you wanted to transform into a different w-space where w = px+q
then you want to find a', b', c', and d' such that
ax^3 + bx^2 + cx + d = a'w^3 + b'w^2 + c'w + d'
and with some algebra,
a'w^3 + b'w^2 + c'w + d' = a'p^3x^3 + 3a'p^2qx^2 + 3a'pq^2x + a'q^3 + b'p^2x^2 + 2b'pqx + b'q^2 + c'px + c'q + d'
therefore
a = a'p^3
b = 3a'p^2q + b'p^2
c = 3a'pq^2 + 2b'pq + c'p
d = a'q^3 + b'q^2 + c'q + d'
which can be rewritten as a triangular matrix problem and solved by back-substitution.
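Since the system is triangular, the back-substitution is short; a Python sketch with a spot check that both forms agree (the example values are made up):

def to_w_coeffs(a, b, c, d, p, q):
    # back-substitute through the triangular system above
    ap = a / p**3
    bp = (b - 3 * ap * p**2 * q) / p**2
    cp = (c - 3 * ap * p * q**2 - 2 * bp * p * q) / p
    dp = d - ap * q**3 - bp * q**2 - cp * q
    return ap, bp, cp, dp

a, b, c, d, p, q = 1.0, -2.0, 0.5, 3.0, 2.0, -1.0
ap, bp, cp, dp = to_w_coeffs(a, b, c, d, p, q)
x = 0.3
w = p * x + q
assert abs((a*x**3 + b*x**2 + c*x + d) - (ap*w**3 + bp*w**2 + cp*w + dp)) < 1e-9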