Asymptotic notation on both sides of an equation - algorithm

I know what f(n) = theta(g(n)) or f(n) = BigOh(g(n)) means, but I get confused when there is something like theta(f(n)) = theta(g(n)), i.e. when the asymptotic notation is on both sides. Can anyone please explain what this means?
I came across this when solving a problem like this: there are 3 algorithms
X : is polynomial
Y : is exponential
Z : is double exponential
There are 4 options in the answers:
a) theta(X) = theta(Y)
b) theta(X) = theta(Z)
c) theta(Y) = theta(Z)
d) BigOh(Z) = X
The correct answer is option C.
Can anyone please explain?

C = θ(D), in simple language, means there are two tight bounds, say A and B, such that C can be sandwiched between them; that is, A <= C <= B.
A and B depend upon D. That is, A = aD and B = bD, where a and b are constants.
In general, theta(P) = theta(Q) means the bounds specified by P (aP and bP) and Q (aQ and bQ)
are equal, i.e. aP = aQ and bP = bQ, or
one pair of bounds is contained inside the other,
i.e. aP <= aQ <= bQ <= bP or aQ <= aP <= bP <= bQ.
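For reference, the standard textbook definition (not part of the original answer): f(n) = theta(g(n)) means there exist constants c1, c2 > 0 and a threshold n0 such that c1*g(n) <= f(n) <= c2*g(n) for all n >= n0. Writing theta(P) = theta(Q) then compares the whole classes of functions admitting such bounds with respect to P and with respect to Q.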
Y = exponential = 1.5^x
Z = double exponential = 1.5^1.5^x
Here, it can be seen from the graph that the bounds on the exponential function (1.5^x) can contain the bounds of the double exponential function (1.5^1.5^x). Hence θ(Y) = θ(Z). In fact, the bounds of the exponential function can be used as bounds of the double exponential function.

Related

Calculate integer powers with a given loop invariant

I need to derive an algorithm in C++ to calculate integer powers m^n that uses the loop invariant r = y^n and the loop condition y != m.
I tried using the instruction y = y+1 to advance, but I don't know how to obtain (y+1)^n from y^n, and it shouldn't be difficult to find. So, probably, this isn't the correct path to follow.
Could you help me to derive the program?
EDIT: this is a problem from the subject Data Structures and Algorithms. The difficulty (if there is any at all) shouldn't be mathematical.
EDIT2: Just to clarify, the difficulty of the problem is using the invariant r = y^n and the loop condition y != m. If I vary n, I'm not achieving that.
Given w and P such that 2^w > m, P > 2^(wn), and 2^((P-1)/2) = -1 mod P,
then 2 is a generator mod P, and there will be some x such that 2^x = m mod P, so:
if (m <= 1 || n == 1)
    return m;
if (n == 0)
    return 1;
int y = 2;
int r = (1 << n) % P;        // invariant: r = y^n mod P
while (y != m)
{
    y = (y * 2) % P;         // keep doubling y until it reaches m
    r = (r * (1 << n)) % P;  // multiplying y by 2 multiplies y^n by 2^n
}
return r;
Unless your function needs to produce bignum results, you can just pick the largest P that fits into an integer in your language.
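As a quick sanity check with small numbers (my own example, not from the original answer): take m = 3, n = 2 and P = 19. Then 2^w = 4 > 3 with w = 2, P = 19 > 2^(2*2) = 16, and 2^((19-1)/2) = 512 = -1 mod 19, so the conditions hold. The loop starts with y = 2, r = 4 and doubles y modulo 19; after 12 iterations y reaches 3 (since 2^13 = 3 mod 19) and the loop returns r = 9 = 3^2, as expected.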
There is no useful relation between (y+1)^n and y^n (you can write (y+1)^n = ((y^n)^(1/n)+1)^n or (y+1)^n = (1+1/y)^n y^n, but this leads you nowhere).
If y was factored, you could exploit (a.b)^n = (a^n).(b^n), but you would need a table of the nth powers of the primes.
I can't see an answer that makes sense.
You can also think of the Binomial theorem,
(y+1)^n = y^n + n y^(n-1) + n(n-1)/2 y^(n-2) + ... + 1
but this is worse than anything: you need to compute n binomial coefficients, and update all powers of y from 0 to n. The total cost of the computation would be ridiculously high.

How to linearize a minmax constraint

Currently I have this linear programming model:
Max X
such that:
Max_a(Min_b(F(a,b,X))) <= some constant
*Max_a means maximizing the expression that follows by changing only a, and the same applies to Min_b.
Now, the problem becomes how to linearize the constraint part. Most of the current minmax linearization papers talk about minmax as an objective. But how do you linearize it when it is a constraint?
Thanks
Preliminary remark: the problem you describe is not a "linear programming model", and there is no way to transform it into a linear model directly (which doesn't mean it can't be solved).
First, note that the Max in the constraint is not necessary, i.e. your problem can be reformulated as:
Max X
subject to: Min_b F(a, b, X) <= K forall a
Now, since you are speaking of 'linear model', I assume that at least F is linear, i.e.:
F(a, b, X) = Fa.a + Fb.b + FX.X
And the constraint can obviously be written:
Fa.a + Min_b Fb.b + FX.X <= K forall a
The interesting point is that the minimum on b does not depend on the value of a and X. Hence, it can be solved beforehand: first find u = Min_b Fb.b, and then solve
Max X
subject to Fa.a + FX.X <= K - u forall a
This assumes, of course, that the domains of a and b are independent (of the form AxB): if there are other constraints coupling a and b, it is a different problem (in that case please write the complete problem in the question).
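As a purely hypothetical illustration (numbers invented for this note): take F(a, b, X) = 2a - 3b + X with a in [0, 1], b in [0, 1] and K = 5. Then u = Min_b(-3b) = -3, so the reduced constraint reads 2a + X <= 5 - (-3) = 8 for all a; the binding case a = 1 gives X <= 6, which is the same bound you get by evaluating Max_a(Min_b(F(a, b, X))) <= 5 directly.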

Finding a Perfect Square efficiently

How can I find the first perfect square from the function f(n) = An² + Bn + C? B and C are given. A, B, C and n are always integers, and A is always 1. The problem is finding n.
Example: A=1, B=2182, C=3248
The answer for the first perfect square is n=16, because sqrt(f(16))=196.
My algorithm increments n and tests if the square root of f(n) is an integer.
This algorithm is very slow when B or C is large, because it takes n calculations to find the answer.
Is there a faster way to do this calculation? Is there a simple formula that can produce an answer?
What you are looking for are integer solutions to a special case of the general quadratic Diophantine equation1
Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
where you have
ax^2 + bx + c = y^2
so that A = a, B = 0, C = -1, D = b, E = 0, F = c where a, b, c are known integers and you are looking for unknown x and y that satisfy this equation. Once you recognize this, solutions to this general problem are in abundance. Mathematica can do it (use Reduce[eqn && Element[x|y, Integers], x, y]) and you can even find one implementation here including source code and an explanation of the method of solution.
1: You might recognize this as a conic section. It is, and people have been studying them for thousands of years. As such, our understanding of them is very deep and your problem is actually quite famous. The study of them is an immensely deep and still active area of mathematics.
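To make the mapping concrete, here is a small hypothetical C++ check (not part of the original answer) that plugs the question's example values into that form and verifies the known solution n = 16, sqrt(f(16)) = 196:

#include <cstdio>

int main() {
    // ax^2 + bx + c = y^2  rewritten as  Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
    // with A = a, B = 0, C = -1, D = b, E = 0, F = c
    long long a = 1, b = 2182, c = 3248;  // example values from the question
    long long x = 16, y = 196;            // the known first perfect square
    long long residual = a*x*x + b*x + c - y*y;
    std::printf("residual = %lld\n", residual);  // prints 0: (x, y) satisfies the equation
    return 0;
}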

A fast algorithm to minimize a pseudo Diophantine equation

We're looking for an algorithm to solve this problem in under O(N).
Given two real numbers a and b (without loss of generality you can assume they are both between 0 and 1)
Find an integer n between -N and N that minimizes the expression:
|a n - b - round(a n - b)|
We have thought that the Euclidean Algorithm might work well for this, but can't figure it out. It certainly looks like there should be much faster ways to do this than via an exhaustive search over integers n.
Note: in our situation a and b could be changing often, so while fixing a and b for a lookup table is possible, it gets kind of ugly since N can vary as well. We haven't looked in detail at the lookup table yet to see how small we can get it as a function of N.
It sounds like you may be looking for something like continued fractions...
How are they related? Suppose you can substitute b with a rational number b1/b2. Now you are looking for integers n and m such that an - b1/b2 is approximately m. Put otherwise, you are looking for n and m such that (m + (b1/b2))/n = (m*b2 + b1)/(n*b2), a rational number, is approximately a. Set a1 = m*b2 + b1 and a2 = n*b2. Find values for a1 and a2 from a continued fraction approximation and solve for n and m.
Another approach could be this:
Find good rational approximations for a and b: a ~ a1/a2 and b ~ b1/b2.
Solve n(a1/a2)-(b1/b2) = m for n and m.
I'm not too sure it would work though. The accuracy needed for a depends on n and b.
You are effectively searching for the integer N that makes the expression aN - b as close as possible to an integer. Are a and b fixed? If yes you can pre-compute a lookup table and have O(1) :-)
If not, consider looking for the N that makes aN close to I + b for some integer I.
You can compute a continued fraction for the ratio a/b. You can stop when the denominator is greater than N, or when your approximation is good enough.
// Initialize with the first convergent of the continued fraction of a/b:
double ratio = a / b;
int ak = (int)ratio;
double remainder = ratio - ak;
int n0 = 1;   // previous convergent: numerator
int d0 = 0;   // previous convergent: denominator
int n1 = ak;  // current convergent: numerator
int d1 = 1;   // current convergent: denominator
do {
    if (remainder == 0)
        break;               // a/b is (numerically) rational; avoid dividing by zero
    ratio = 1 / remainder;
    ak = (int)ratio;
    int n2 = ak * n1 + n0;
    int d2 = ak * d1 + d0;
    n0 = n1;
    d0 = d1;
    n1 = n2;
    d1 = d2;
    remainder = ratio - ak;
} while (d1 < N);
The value for n you're looking for is d0 (or d1 if it is still smaller than N).
This doesn't necessarily give you the minimum solution, but it will likely be a very good approximation.
First, let us consider a simpler case where b=0 and 0 < a < 1. F(a,n) = |an-round(an)|
Let step_size = 1
Step 1. Let v=a
Step 2. Let period size p = upper_round( 1/v ).
Step 3. Now, among n = 1..p, there must be a number i such that F(v,i) < v.
Step 4. v = F(v,i), step_size = step_size * i
Step 5. Go to step 2
As you can see, you can reduce F(v, *) to any level you want. The final solution is n = step_size.
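Here is a rough C++ sketch of that reduction as I read it (for the simplified b = 0 case; the function name, the tolerance and the step cap are my own additions, and the original bound N is only handled crudely through maxStep):

#include <cmath>
#include <cstdio>

// F(v, i) = |v*i - round(v*i)|. Each round searches i = 1..ceil(1/v) for the value
// that shrinks F the most, multiplies that i into step_size, and repeats with v = F(v, i).
long long reduce(double a, double tolerance, long long maxStep) {
    double v = std::fabs(a - std::round(a));           // F(a, 1); equals a itself when 0 < a < 0.5
    long long step_size = 1;
    while (v > tolerance && step_size < maxStep) {
        long long p = (long long)std::ceil(1.0 / v);   // period size from step 2
        long long best_i = 1;
        double best_f = v;
        for (long long i = 1; i <= p; ++i) {           // step 3: scan 1..p
            double f = std::fabs(v * i - std::round(v * i));
            if (f < best_f) { best_f = f; best_i = i; }
        }
        if (best_f >= v)
            break;                                     // no further improvement possible
        v = best_f;                                    // step 4
        step_size *= best_i;
    }
    return step_size;                                  // candidate n
}

int main() {
    // e.g. a = 0.1234: the returned n should make |a*n - round(a*n)| very small
    double a = 0.1234;
    long long n = reduce(a, 1e-9, 1000000);
    std::printf("n = %lld, error = %.3g\n", n, std::fabs(a * n - std::round(a * n)));
}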

How can I transform a polynomial to another coordinate system?

Using assorted matrix math, I've solved a system of equations resulting in coefficients for a polynomial of degree 'n'
Ax^(n-1) + Bx^(n-2) + ... + Z
I then evaluate the polynomial over a given x range; essentially, I'm rendering the polynomial curve. Now here's the catch. I've done this work in one coordinate system we'll call "data space". Now I need to present the same curve in another coordinate space. It is easy to transform input/output to and from the coordinate spaces, but the end user is only interested in the coefficients [A,B,....,Z] since they can reconstruct the polynomial on their own. How can I present a second set of coefficients [A',B',....,Z'] which represent the same shaped curve in a different coordinate system?
If it helps, I'm working in 2D space. Plain old x's and y's. I also feel like this may involve multiplying the coefficients by a transformation matrix? Would it somehow incorporate the scale/translation factor between the coordinate systems? Would it be the inverse of this matrix? I feel like I'm headed in the right direction...
Update: Coordinate systems are linearly related. Would have been useful info eh?
The problem statement is slightly unclear, so first I will clarify my own interpretation of it:
You have a polynomial function
f(x) = C_n x^n + C_(n-1) x^(n-1) + ... + C_0
[I changed A, B, ... Z into C_n, C_(n-1), ..., C_0 to more easily work with the linear algebra below.]
Then you also have a transformation such as: z = ax + b that you want to use to find coefficients for the same polynomial, but in terms of z:
f(z) = D_n z^n + D_(n-1) z^(n-1) + ... + D_0
This can be done pretty easily with some linear algebra. In particular, you can define an (n+1)×(n+1) matrix T which allows us to do the matrix multiplication
d = T * c,
where d is a column vector with top entry D_0 down to last entry D_n, the column vector c is similar for the C_i coefficients, and the matrix T has (i,j)-th [i-th row, j-th column] entry t_ij given by
t_ij = (j choose i) a^i b^(j-i).
Here (j choose i) is the binomial coefficient, which is 0 when i > j. Also, unlike standard matrices, I'm taking i and j to each range from 0 to n (usually you start at 1).
This is basically a nice way to write out the expansion and re-compression of the polynomial when you plug in z=ax+b by hand and use the binomial theorem.
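A minimal hypothetical C++ sketch of this matrix approach (the function name and the storage convention, coefficients lowest degree first, are my own assumptions):

#include <cstdio>
#include <vector>

// Builds d = T * c with t_ij = (j choose i) * a^i * b^(j-i) for i <= j, 0 otherwise.
std::vector<double> transformCoefficients(const std::vector<double>& c, double a, double b) {
    int n = (int)c.size() - 1;
    // binomial coefficients (j choose i) via Pascal's triangle
    std::vector<std::vector<double>> choose(n + 1, std::vector<double>(n + 1, 0.0));
    for (int j = 0; j <= n; ++j) {
        choose[j][0] = 1.0;
        for (int i = 1; i <= j; ++i)
            choose[j][i] = choose[j - 1][i - 1] + (i < j ? choose[j - 1][i] : 0.0);
    }
    std::vector<double> d(n + 1, 0.0);
    for (int i = 0; i <= n; ++i)
        for (int j = i; j <= n; ++j) {
            double t = choose[j][i];
            for (int k = 0; k < i; ++k)     t *= a;  // a^i
            for (int k = 0; k < j - i; ++k) t *= b;  // b^(j-i)
            d[i] += t * c[j];
        }
    return d;
}

int main() {
    // spot check: c = {5, 3, 2} encodes 2x^2 + 3x + 5; with a = 2, b = 1 the result
    // is {10, 14, 8}, i.e. 8z^2 + 14z + 10, which matches expanding 2(2z+1)^2 + 3(2z+1) + 5.
    std::vector<double> d = transformCoefficients({5, 3, 2}, 2, 1);
    std::printf("%g %g %g\n", d[0], d[1], d[2]);
}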
If I understand your question correctly, there is no guarantee that the function will remain polynomial after you change coordinates. For example, let y=x^2, and the new coordinate system x'=y, y'=x. Now the equation becomes y' = sqrt(x'), which isn't polynomial.
Tyler's answer is the right answer if you have to compute this change of variable z = ax+b many times (I mean for many different polynomials). On the other hand, if you have to do it just once, it is much faster to combine the computation of the coefficients of the matrix with the final evaluation. The best way to do it is to symbolically evaluate your polynomial at the point (ax+b) by Horner's method:
you store the polynomial coefficients in a vector V (at the beginning, all coefficients are zero), and for i = n down to 0, you multiply it by (ax+b) and add C_i.
adding C_i means adding it to the constant term
multiplying by (ax+b) means multiplying all coefficients by b into a vector K1, multiplying all coefficients by a and shifting them one degree up from the constant term into a vector K2, and putting K1+K2 back into V.
This will be easier to program, and faster to compute.
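A short hypothetical C++ sketch of that procedure (my own naming; coefficients stored constant term first):

#include <cstdio>
#include <vector>

// Symbolically evaluates the polynomial with coefficients C at (a*x + b), Horner style:
// V starts at zero, and for i = n down to 0 we set V = V*(a*x + b) + C[i].
std::vector<double> substituteHorner(const std::vector<double>& C, double a, double b) {
    int n = (int)C.size() - 1;
    std::vector<double> V(C.size(), 0.0);
    for (int i = n; i >= 0; --i) {
        std::vector<double> next(C.size(), 0.0);
        for (int k = 0; k <= n; ++k) {
            next[k] += b * V[k];                      // K1: every coefficient times b
            if (k + 1 <= n) next[k + 1] += a * V[k];  // K2: times a, shifted one degree up
        }
        next[0] += C[i];                              // add C_i to the constant term
        V = next;
    }
    return V;
}

int main() {
    // same spot check as for the matrix version: 2x^2 + 3x + 5 with a = 2, b = 1
    std::vector<double> V = substituteHorner({5, 3, 2}, 2, 1);
    std::printf("%g %g %g\n", V[0], V[1], V[2]);   // prints 10 14 8
}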
Note that changing y into w = cy+d is really easy. Finally, as mattiast points out, a general change of coordinates will not give you a polynomial.
Technical note: if you still want to compute the matrix T (as defined by Tyler), you should compute it by using a weighted version of Pascal's rule (this is what the Horner computation does implicitly):
t_(i,j) = b*t_(i,j-1) + a*t_(i-1,j-1)
This way, you compute it simply, column after column, from left to right.
You have the equation:
y = Ax^(n-1) + Bx^(n-2) + ... + Z
In xy space, and you want it in some x'y' space. What you need are transformation functions f(x) = x' and g(y) = y' (or h(x') = x and j(y') = y). In the first case you need to solve for x and solve for y. Once you have x and y, you can substitute those results into your original equation and solve for y'.
Whether or not this is trivial depends on the complexity of the functions used to transform from one space to another. For example, equations such as:
5x = x' and 10y = y'
are extremely easy to solve for the result
y' = (10A/5^(n-1))x'^(n-1) + (10B/5^(n-2))x'^(n-2) + ... + 10Z
If the input spaces are linearly related, then yes, a matrix should be able to transform one set of coefficients to another. For example, if you had your polynomial in your "original" x-space:
ax^3 + bx^2 + cx + d
and you wanted to transform into a different w-space where w = px+q
then you want to find a', b', c', and d' such that
ax^3 + bx^2 + cx + d = a'w^3 + b'w^2 + c'w + d'
and with some algebra,
a'w^3 + b'w^2 + c'w + d' = a'p^3x^3 + 3a'p^2qx^2 + 3a'pq^2x + a'q^3 + b'p^2x^2 + 2b'pqx + b'q^2 + c'px + c'q + d'
therefore
a = a'p^3
b = 3a'p^2q + b'p^2
c = 3a'pq^2 + 2b'pq + c'p
d = a'q^3 + b'q^2 + c'q + d'
which can be rewritten as a matrix problem and solved.
