I'm having trouble with a homework exercise.
I need to describe an efficient algorithm which solves the polynomial interpolation problem:
Let P[i,j] be the polynomial interpolation of the points (x_i, y_i), ..., (x_j, y_j). Find 3 simple polynomials q(x), r(x), s(x), each of degree 0 or 1, such that:
P[i,j+1](x) = (q(x)*P[i,j](x) - r(x)*P[i+1,j+1](x)) / s(x)
Given the points (x_1, y_1), ..., (x_n, y_n), describe an efficient dynamic programming algorithm, based on the recurrence relation you found in part 1, for computing the coefficients a_0, ..., a_{n-1} of the interpolating polynomial.
Well, I know how to solve the polynomial interpolation problem using the Newton polynomial, which looks quite similar to the above recurrence relation, but I don't see how it helps me find q(x), r(x), s(x) of degree 0 or 1. And assuming I have the correct q(x), r(x), s(x), how do I solve this problem using dynamic programming?
Any help will be much appreciated.
q(x) = x_{j+1} - x
r(x) = x_i - x
s(x) = x_{j+1} - x_i
Here x_i and x_{j+1} denote the x-coordinates of the i-th and (j+1)-th points in the ordered list of input points.
Some explanations:
First we need to understand what P[i,j](x) means.
Put all your initial (x, y) pairs on the main diagonal of an n x n matrix.
Now you can read off P[0,0](x): it is the constant polynomial whose value is the y value of the point at position (0,0) of your matrix.
P[0,1] is the linear interpolation of the points at (0,0) and (1,1) in your matrix. This will be a straight-line function:
P[0,1](x) = ((x_1 - x) * y_0 - (x_0 - x) * y_1) / (x_1 - x_0)
P[0,2] is the linear interpolation of the two previous linear interpolations, which means that your y values are now the linear functions you calculated in the previous step.
This recurrence is also the dynamic programming algorithm that builds up the full polynomial.
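Here is a minimal Python sketch of that DP (the helper names and the coefficient-list representation are my own choices, not part of the exercise). P[i][j] holds the coefficient list of the interpolant of points i..j, and the table is filled over intervals of increasing length:

    def poly_sub(p, q):
        # coefficient-wise subtraction, padding the shorter list with zeros
        n = max(len(p), len(q))
        p = p + [0.0] * (n - len(p))
        q = q + [0.0] * (n - len(q))
        return [u - v for u, v in zip(p, q)]

    def poly_mul_linear(p, c):
        # multiply p(x) by the degree-1 factor (c - x)
        out = [0.0] * (len(p) + 1)
        for i, a in enumerate(p):
            out[i] += c * a
            out[i + 1] -= a
        return out

    def interp_coeffs(xs, ys):
        n = len(xs)
        P = [[None] * n for _ in range(n)]
        for i in range(n):
            P[i][i] = [float(ys[i])]           # base case: constant y_i
        for length in range(1, n):
            for i in range(n - length):
                j = i + length
                # q(x) = x_j - x, r(x) = x_i - x, s(x) = x_j - x_i
                num = poly_sub(poly_mul_linear(P[i][j - 1], xs[j]),
                               poly_mul_linear(P[i + 1][j], xs[i]))
                P[i][j] = [a / (xs[j] - xs[i]) for a in num]
        return P[0][n - 1]                     # coefficients a_0, ..., a_{n-1}

    # (0,1), (1,2), (2,5) lie on x^2 + 1:
    print(interp_coeffs([0, 1, 2], [1, 2, 5]))   # [1.0, 0.0, 1.0]

Each cell is combined from two previously computed cells, so there are O(n^2) table entries and each combination costs O(n) coefficient operations.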
I highly recommend you have a look at this very good lecture, and the full lecture notes.
I'm looking for an algorithm that can decide whether a given point (x, y) satisfies an equation written in polar form, like r - φ = 0. The main problem is that the angle φ is bounded to (0, 2π), so in this example I only get one cycle of the spiral. How can I get all the possible solutions for any polar equation written in this form?
I tried bounding the r value to the (0, 2π) range, but that didn't work on more complicated examples like logarithmic spirals.
You can use the following transformation equations:
r = √(x² + y²)
φ = arctan(y, x) + 2kπ
where arctan(y, x) is the four-quadrant arctangent (atan2) and k is an arbitrary integer.
In the case of your Archimedean spiral r = φ, check whether
(√(x² + y²) - arctan(y, x)) / 2π
is an integer.
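For example, in Python (a small sketch; on_spiral is my own name, and I assume the curve is exactly the spiral r = φ):

    import math

    def on_spiral(x, y, tol=1e-9):
        r = math.hypot(x, y)             # r = sqrt(x^2 + y^2)
        phi = math.atan2(y, x)           # four-quadrant angle in (-pi, pi]
        k = (r - phi) / (2 * math.pi)    # r = phi + 2*k*pi for some integer k
        return abs(k - round(k)) < tol

    # the point at phi = pi/2 on the first turn of the spiral:
    print(on_spiral(0.0, math.pi / 2))   # True

For another polar equation F(r, φ) = 0, the same idea applies: substitute φ + 2kπ for the angle and ask whether some integer k satisfies the equation.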
This is part of a bigger question. It's actually a mathematical problem, so it would be really great if someone could direct me to an algorithm for obtaining the solution of this problem; pseudocode would also help.
The question: given an equation, check whether it has an integral solution.
For example:
(26a+5)/32=b
Here a is an integer. Is there an algorithm to predict or find whether b can be an integer? I need a general solution, not one specific to this question; the equation can vary. Thanks.
Your problem is an example of a linear Diophantine equation. About that, Wikipedia says:
This Diophantine equation [i.e., a x + b y = c] has a solution (where x and y are integers) if and only if c is a multiple of the greatest common divisor of a and b. Moreover, if (x, y) is a solution, then the other solutions have the form (x + k v, y - k u), where k is an arbitrary integer, and u and v are the quotients of a and b (respectively) by the greatest common divisor of a and b.
In this case, (26 a + 5)/32 = b is equivalent to 26 a - 32 b = -5. The gcd of the coefficients of the unknowns is gcd(26, -32) = 2. Since -5 is not a multiple of 2, there is no solution.
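You can verify this directly; math.gcd is in the Python standard library:

    import math

    # 26a - 32b = -5 is solvable iff gcd(26, 32) divides -5:
    print(-5 % math.gcd(26, 32) == 0)   # False, so there is no integer solution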
A general Diophantine equation is a polynomial in the unknowns, and can only be solved (if at all) by more complex methods. A web search might turn up specialized software for that problem.
Linear Diophantine equations take the form ax + by = c. If c is the greatest common divisor of a and b, this means a = z'c and b = z''c, and dividing through by c turns the equation into Bézout's identity
z'x + z''y = 1
(that is, ax + by = gcd(a, b) with a = z' and b = z''), which always has an infinite number of solutions. More generally, a solution exists exactly when c is a multiple of gcd(a, b). So instead of a trial-search method you can check whether gcd(a, b) divides c.
If it does, then x and y can be computed using the extended Euclidean algorithm, which finds integers x and y (one of which is typically negative) that satisfy Bézout's identity.
(As a side note, this also holds in any other Euclidean domain, e.g. a polynomial ring; every Euclidean domain is a unique factorization domain.) You can use the iterative method to find these solutions:
Integral solution to equation `a + bx = c + dy`
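For illustration, here is a small Python sketch along these lines (ext_gcd and solve_dioph are my own helper names); it checks divisibility by the gcd and then scales the Bézout coefficients returned by the extended Euclidean algorithm:

    def ext_gcd(a, b):
        # returns (g, x, y) with a*x + b*y = g = gcd(a, b), for a, b >= 0
        if b == 0:
            return (a, 1, 0)
        g, x, y = ext_gcd(b, a % b)
        return (g, y, x - (a // b) * y)

    def solve_dioph(a, b, c):
        # one integer solution (x, y) of a*x + b*y = c, or None
        g, x, y = ext_gcd(abs(a), abs(b))
        if c % g != 0:
            return None                  # c is not a multiple of gcd(a, b)
        x, y = x * (c // g), y * (c // g)
        return (-x if a < 0 else x, -y if b < 0 else y)

    print(solve_dioph(26, -32, -5))   # None: gcd is 2, which does not divide -5
    print(solve_dioph(26, -32, -6))   # (-15, -12): 26*(-15) - 32*(-12) = -6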
Thank you for the attention you're paying to my question :)
My question is about finding an (efficient enough) algorithm for finding orthogonal polynomials of a given weight function f.
I've tried simply applying the Gram-Schmidt algorithm, but it is not efficient enough: it requires O(n^2) integrals. My goal is to use this algorithm to find Hankel determinants of a function f, and a "direct" computation, which consists of simply computing the matrix and taking its determinant, requires only 2n - 1 integrals.
But I want to use the theorem stating that the Hankel determinant of order n of f is a product of the first n leading coefficients of the orthogonal polynomials of f. The reason is that when n gets large (say about 20), the Hankel determinant gets really big, and my goal is to divide it by another big constant (for n = 20, the constant is of order 10^103). My idea is then to "dilute" the computation of the constant in the product of the leading coefficients.
I hope there is an O(n) algorithm to compute the first n orthogonal polynomials :) I've done some digging and found nothing in that direction for a general function f (f can be any smooth function, actually).
EDIT: I'll define precisely the objects I'm talking about.
1) A Hankel determinant of order n is the determinant of a square matrix which is constant on the skew diagonals. Thus for example
a b c
b c d
c d e
is a Hankel matrix of size 3 by 3.
2) If you have a function f : R -> R, you can associate to f its "kth moment" which is defined as (I'll write it in tex) f_k := \int_{\mathbb{R}} f(x) x^k dx
With this, you can create a Hankel matrix A_n(f) whose entries are (A_n(f))_{ij} = f_{i+j-2}, that is, something like
f_0 f_1 f_2
f_1 f_2 f_3
f_2 f_3 f_4
With this in mind, it is easy to define the Hankel determinant of f which is simply
H_n(f) := det(A_n(f)). (Of course, it is understood that f has sufficient decay at infinity, so that all the moments are well defined. A typical choice for f could be the Gaussian f(x) = exp(-x^2), or any continuous function on a compact subset of R...)
3) What I call orthogonal polynomials of f is a set of polynomials (p_n) such that
\int_{\mathbb{R}} f(x) p_j(x) p_k(x) dx is 1 if j = k and 0 otherwise.
(They are called that because they form an orthonormal basis of the vector space of polynomials with respect to the scalar product
(p|q) = \int_{\mathbb{R}} f(x) p(x) q(x) dx.)
4) Now, it is basic linear algebra that from any basis of a vector space equipped with a scalar product, you can build an orthonormal basis thanks to the Gram-Schmidt algorithm. This is where the n^2 integrations come from: you start from the basis 1, x, x^2, ..., x^n; you need n(n-1) integrals to make the family orthogonal, and n more to normalize them.
5) There is a theorem saying that if f : R -> R is a function having sufficient decay at infinity, then we have that its Hankel determinant H_n(f) is equal to
H_n(f) = \prod_{j = 0}^{n-1} \kappa_j^{-2}
where \kappa_j is the leading coefficient of the (j+1)-th orthogonal polynomial of f.
Thank you for your answer!
(PS: I tagged octave because I work in Octave, so, with a bit of luck (but I doubt it), there is a built-in function or an existing package managing this kind of thing.)
Orthogonal polynomials obey a recurrence relation, which we can write as
P[n+1] = (X-a[n])*P[n] - b[n-1]*P[n-1]
P[0] = 1
P[1] = X-a[0]
and we can compute the a, b coefficients by
a[n] = <X*P[n]|P[n]> / c[n]
b[n-1] = c[n]/c[n-1]
where
c[n] = <P[n]|P[n]>
(Here < | > is your inner product).
However I cannot vouch for the stability of this process at large n.
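Here is one possible Python sketch of this recurrence (the names are my own, and I assume the Gaussian weight f(x) = exp(-x^2) mentioned in the question); it needs only two integrals per degree, O(n) in total. Polynomials are numpy coefficient arrays, lowest degree first:

    import numpy as np
    from numpy.polynomial import polynomial as P
    from scipy.integrate import quad

    def inner(p, q, f=lambda x: np.exp(-x**2)):
        # <p|q> = integral over R of f(x) p(x) q(x) dx
        pq = P.polymul(p, q)
        return quad(lambda x: f(x) * P.polyval(x, pq), -np.inf, np.inf)[0]

    def monic_orthogonal(n):
        # returns monic P[0], ..., P[n] (n >= 1) and the norms c[k] = <P[k]|P[k]>
        X = np.array([0.0, 1.0])                  # the polynomial "x"
        polys = [np.array([1.0])]                 # P[0] = 1
        c = [inner(polys[0], polys[0])]
        a = inner(P.polymul(X, polys[0]), polys[0]) / c[0]
        polys.append(np.array([-a, 1.0]))         # P[1] = x - a[0]
        for k in range(1, n):
            c.append(inner(polys[k], polys[k]))
            a = inner(P.polymul(X, polys[k]), polys[k]) / c[k]
            b = c[k] / c[k - 1]                   # b[k-1] = c[k] / c[k-1]
            polys.append(P.polysub(P.polymul(np.array([-a, 1.0]), polys[k]),
                                   b * polys[k - 1]))
        return polys, c

    polys, c = monic_orthogonal(3)
    print(polys[2])   # for f = exp(-x^2): [-0.5, 0, 1], i.e. x^2 - 1/2

Since P[j] is monic, the orthonormal polynomial is P[j]/sqrt(c[j]) and its leading coefficient is \kappa_j = 1/sqrt(c[j]); by the theorem in your point 5, H_n(f) = \prod \kappa_j^{-2} = \prod_{j=0}^{n-1} c[j], so the norms c[j] computed along the way are exactly the factors into which you can dilute your big constant.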
I have a set of points interpolated by an unknown method; to be more precise, the method is known to be one of several (polynomial interpolation, spline, simple linear, ...). I also have a line, which, let's say for now, is given in the simple form y = ax + b.
For the interpolation, I don't know which method is used (i.e. the function is hidden), so I can only determine y for some x and, equally, x for a given y value.
What is the usual way to go about finding an intersection between the two?
Say your unknown function is y = f(x) and the line is y = g(x) = ax + b. The intersections of these curves are the zeroes of Δy = f(x) - g(x). Just use any iterative method to find the roots of Δy; the simplest would be the bisection method.
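A minimal sketch of that in Python, assuming f is only available as a black-box evaluator (which matches your setup); bisect_intersection is my own name:

    def bisect_intersection(f, g, lo, hi, tol=1e-10):
        # find x in [lo, hi] with f(x) = g(x), given that f - g changes sign there
        d = lambda x: f(x) - g(x)
        if d(lo) * d(hi) > 0:
            raise ValueError("f - g must change sign on [lo, hi]")
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if d(lo) * d(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    # a hidden curve x^2 against the line x + 1; the root is the golden ratio:
    print(bisect_intersection(lambda x: x * x, lambda x: x + 1, 0.0, 2.0))

Bisection needs only point evaluations and is guaranteed to converge once a sign change is bracketed; if there may be several intersections, sample the interval first to find the brackets.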
You have f1(x) (an interpolation polynomial) and f2(x) (a line), and you want to solve f(x) = f1(x) - f2(x) = 0. Use any method for solving this equation, e.g. Newton-Raphson or even bisection. This may not be optimal for your case; pay attention to convergence guarantees and possible multiple roots.
Spline: Bézier clipping.
Polynomial: Viète's formulas (to get the zeroes, I think).
Line: line-line.
Not a trivial question (or solution) under any circumstance.
I am finding it very hard to understand how the inverse of the matrix is calculated in the Hill cipher algorithm. I get the idea of it all being done in modulo arithmetic, but somehow things are not adding up. I would really appreciate a simple explanation!
Consider the following Hill Cipher key matrix:
5 8
17 3
Please use the above matrix for illustration.
You must study the linear congruence theorem and the extended GCD algorithm, which belong to number theory, in order to understand the maths behind modular arithmetic.
The inverse of a matrix K, for example, is (1/det(K)) * adjoint(K); modulo m this requires det(K) to be invertible, i.e. gcd(det(K), m) = 1.
I assume that you don't understand how to calculate 1/det(K) in modular arithmetic, and this is where linear congruences and the GCD come into play.
Your K has det(K) = -121. Let's say that the modulus m is 26. We want x*(-121) = 1 (mod 26). [a = b (mod m) means that a - b = N*m for some integer N.]
We can easily find that for x = 3 the above congruence holds, because 26 divides 3*(-121) - 1 = -364 exactly. Of course, the proper way is to run the GCD computation in reverse to calculate x; check the extended GCD algorithm :)
Now, inv(K) = 3*([3 -8], [-17 5]) (mod 26) = ([9 -24], [-51 15]) (mod 26) = ([9 2], [1 15]).
Update: check out Basics of Computational Number Theory to see how to calculate modular inverses with the extended Euclidean algorithm. Note that -121 mod 26 = 9, and gcd(9, 26) = 1 with Bézout coefficients 26*(-1) + 9*3 = 1, so the inverse of the determinant modulo 26 is 3.
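Putting the steps together in Python (a sketch for the 2x2 case; hill_inverse_2x2 is my own name, and pow(a, -1, m), available since Python 3.8, computes the modular inverse via the extended Euclidean algorithm):

    def hill_inverse_2x2(key, m=26):
        (a, b), (c, d) = key
        det = (a * d - b * c) % m       # -121 mod 26 = 9 for the example key
        det_inv = pow(det, -1, m)       # 3, since 9 * 3 = 27 = 1 (mod 26)
        adj = [[d, -b], [-c, a]]        # adjugate of the 2x2 matrix
        return [[(det_inv * e) % m for e in row] for row in adj]

    print(hill_inverse_2x2([[5, 8], [17, 3]]))   # [[9, 2], [1, 15]]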
In my very humble opinion it is much easier to calculate the inverse matrix (modular or otherwise) by using the Gauss-Jordan method. That way you don't have to calculate the determinant, and the method scales very simply to arbitrarily large systems.
Just look up 'Gauss-Jordan matrix inverse', but to summarise: you adjoin a copy of the identity matrix to the right of the matrix to be inverted, then use row operations to reduce the matrix to be solved until it is itself an identity matrix. At that point, the adjoined identity matrix has become the inverse of the original matrix. Voila!
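A sketch of this in Python, working modulo m (gj_modinv is my own name; note that this simple row-swap pivoting always succeeds when m is prime, while a composite m like 26 can occasionally require combining rows to expose an invertible pivot, which this sketch does not do):

    import math

    def gj_modinv(mat, m=26):
        n = len(mat)
        # adjoin the identity: aug = [A | I]
        aug = [[x % m for x in row] + [int(i == j) for j in range(n)]
               for i, row in enumerate(mat)]
        for col in range(n):
            # the pivot must be invertible mod m; swap such a row into place
            piv = next((r for r in range(col, n)
                        if math.gcd(aug[r][col], m) == 1), None)
            if piv is None:
                raise ValueError("no invertible pivot found")
            aug[col], aug[piv] = aug[piv], aug[col]
            inv = pow(aug[col][col], -1, m)
            aug[col] = [(inv * x) % m for x in aug[col]]   # scale pivot row to 1
            for r in range(n):                             # clear the column
                if r != col and aug[r][col]:
                    f = aug[r][col]
                    aug[r] = [(x - f * y) % m
                              for x, y in zip(aug[r], aug[col])]
        return [row[n:] for row in aug]    # the right half is now the inverse

    print(gj_modinv([[5, 8], [17, 3]]))    # [[9, 2], [1, 15]]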