For the following code:
int func(int x, int y)
{
    int flag = 0;
    for (flag = 0; flag < x; flag++)
    {
        ....
    }
    for (flag = 0; flag < y; flag++)
    {
        ....
    }
    return 0;
}
my understanding of the time complexity for the following cases is:
x > y => O(x+y)
x < y => O(x+y)
x = y => O(2x)
Can someone verify whether I am right?
Thanks.
x > y => O(x+y) -- yes. But since y < x here, x + y < 2x, so this is just O(x).
x < y => O(x+y) -- yes. By the same reasoning, this is just O(y).
x = y => O(2x) -- not quite. You ignore constant factors in Big O analysis: as x goes to infinity, a constant factor like the 2 does not change the rate of growth of the function, so this is written O(x).
x = y^2 => O(y^2) -- Another characteristic of Big O analysis is that you keep only the dominant term.
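Both rules follow directly from the definition of Big O; as a quick worked check (my addition):

f(x) = O(g(x)) iff there are constants c > 0 and x0 such that f(x) <= c·g(x) for all x >= x0
2x      <= 2·x    for all x >= 1, so 2x = O(x)          (take c = 2)
y^2 + y <= 2·y^2  for all y >= 1, so y^2 + y = O(y^2)   (take c = 2)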
An excellent introduction to Big O analysis can be found here in video lecture format. Check out the second lecture for Big O analysis.
Big-O notation doesn't use constant multipliers; i.e., there is no O(2n), only O(n). This is in contrast to exponentials, where it is common to see O(2^n), etc.
Your function is linear, so if x == y, then its complexity is simply written as O(x).
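As a concrete check (a sketch of my own, not from the question), counting the iterations of the two sequential loops shows the total is exactly x + y:

#include <iostream>

// Two sequential loops: the first runs x times and the second y times,
// so the total work is exactly x + y iterations, i.e., O(x + y).
long long countIterations(int x, int y)
{
    long long count = 0;
    for (int i = 0; i < x; i++)
        count++;
    for (int i = 0; i < y; i++)
        count++;
    return count;
}

int main()
{
    std::cout << countIterations(1000, 500) << "\n"; // prints 1500
    return 0;
}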
We are covering the class P in my course, and one part is tripping me up: whether the primality problem is in P.
Our program:
"{prime(x): i =2; while i < x { if n mod i == 0 { return 0 } i++ } return 1 }"
Complexity for the program:
If x is n digits long, then x is in the rough vicinity of 10^n. (Assuming no leading 0s, 10^(n-1) <= x < 10^n.) The division algorithm that you learned in elementary school divides an m-digit number by an n-digit number in time O(mn). Putting that all together, we find that our algorithm for testing whether an integer is prime takes time O(n^2 · 10^n).
My questions:
Where in the world does the professor get that x is 10^n? For example, if x is 17, how does that turn into x being 10^2 = 100 operations long?
Furthermore, where is the n^2 coming from in the final big O notation?
This trial division algorithm has to try x - 2 divisors (i.e., Θ(10^n) of them) when x is prime. The vast majority of these divisors have n or n-1 digits, so each division takes Θ(n^2) time on average, since m = Θ(n).
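Multiplying the two quantities gives exactly the professor's bound:

total time = (number of trial divisions) × (cost per division)
           = Θ(10^n) × Θ(n^2)
           = Θ(n^2 · 10^n)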
"{prime(x): i =2; while i < x { if n mod i == 0 { return 0 } i++ } return 1 }"
By n you presumably mean x, so your code should look like:
prime(x) {
    if (x == 2) return 1;           // 2 is prime
    for (int i = 2; i < x; i++)
        if (x mod i == 0) return 0; // found a divisor, so x is not prime
    return 1;
}
This algorithm is the naive solution, which costs O(x).
It tests all natural numbers from 2 up to x as divisors. Supposing modulo is a constant-time operation, you will do x - 2 iterations in the worst case, hence O(x - 2) = O(x).
Obviously there are better solutions, like using a sieve to precompute primes so that you don't have to try every natural number as a divisor of x.
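For illustration, here is a minimal sieve of Eratosthenes sketch (my addition, not part of the original answer); after an O(limit · log(log(limit))) precomputation, every primality query is an O(1) table lookup:

#include <vector>

// Sieve of Eratosthenes: mark every composite in [0, limit],
// crossing out the multiples of each prime starting from p*p.
std::vector<bool> sieve(int limit)
{
    std::vector<bool> isPrime(limit + 1, true);
    isPrime[0] = false;
    if (limit >= 1)
        isPrime[1] = false;
    for (int p = 2; (long long)p * p <= limit; p++)
        if (isPrime[p])
            for (long long multiple = (long long)p * p; multiple <= limit; multiple += p)
                isPrime[multiple] = false;
    return isPrime;
}

// Usage: auto table = sieve(100); then table[17] is true, table[25] is false.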
Where in the world does the professor get that x is 10^n
They don't.
What they actually said is:
If x is n digits long, then x is in the rough vicinity of 10^n
In your case it's off by a factor of 100/17 ≈ 5.9. But that's a constant factor (and not a big one); in the worst case it's off by a factor of 10. And in complexity classes we ignore such constant factors, so it doesn't matter for their analysis.
Well, the primality problem is in P; see the AKS primality test for details.
However, the naive algorithm (the one in the question) does not run in polynomial time. For a given x we have
Time complexity t = O(x):
{prime(x):
    i = 2;
    while i < x {         # we loop x - 2 times, so t = O(x)
        if x mod i == 0 {
            return 0
        }
        i++
    }
    return 1
}
Size of the problem: s = O(log(x)), since we must write down all the digits of x:
x = 123456789  # the size is not 123456789, but 27 bits (or 9 digits)
So the time complexity t, as a function of the problem size s, is O(2^s) if s is measured in bits, or O(10^s) if s is measured in decimal digits. That is definitely not polynomial.
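To spell out that last step (my addition), consider the worst case, a prime x, where the loop runs all the way to x:

s = number of decimal digits of x  =>  x >= 10^(s-1)
t = Θ(x) when x is prime           =>  t >= 10^(s-1), i.e., exponential in s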
Let's have a look at your example: if x is n digits long (n ≈ log10(x)), t will be ~10^n:
x               : size (digits) : time (iterations)
---------------------------------------------------
17              : 1.23          : 15                # your example
~17_000_000_000 : 10.23         : ~17_000_000_000   # input only ~8x longer, runtime ~10^9x longer
Can you see now how the time grows exponentially in the size?
I do understand the algorithm, but I can't find a way to determine its complexity. The only thing I know is that it has something to do with the second parameter, because if it were smaller there would be fewer steps. Do you have any idea how I can do this? And is there a general way to determine the time complexity of any given algorithm?
Egyptian multiplication algorithm:
def egMul(x, y):
    res = 0
    while y > 0:
        if y % 2 == 0:
            x = x * 2
            y = y // 2  # integer division (y / 2 would give a float in Python 3)
        else:
            y = y - 1
            res = res + x
    return res
This code performs Θ(log(y)) arithmetic operations. Consider the binary representation of y: the else branch runs once for each 1 that appears in that representation, and the first branch of the if (the one that divides y by 2) runs floor(log_2(y)) times.
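As a quick worked trace (my addition), take egMul(3, 5) with y = 5 = 101 in binary:

y = 5 (odd)  -> y = 4, res = 3     # one iteration per 1-bit
y = 4 (even) -> x = 6, y = 2       # one iteration per halving
y = 2 (even) -> x = 12, y = 1
y = 1 (odd)  -> y = 0, res = 15    # 3 * 5 = 15

That is 2 iterations for the two 1-bits plus floor(log_2(5)) = 2 halving iterations: Θ(log y) in total.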
I need to derive an algorithm in C++ to calculate integer powers m^n that uses the loop invariant r = y^n and the loop condition y != m.
I tried using the instruction y = y + 1 to advance, but I don't know how to obtain (y+1)^n from y^n, and it shouldn't be difficult to find. So, probably, this isn't the correct path to follow.
Could you help me to derive the program?
EDIT: this is a problem from the subject Data Structures and Algorithms. The difficulty (if there is any) shouldn't be mathematical.
EDIT2: Just to clarify, the difficulty of the problem is using the invariant r = y^n and the loop condition y != m. If I vary n instead, I'm not achieving that.
Given w and P such that 2^w > m, P > 2^(wn), and 2^((P-1)/2) = -1 mod P,
then 2 is a generator mod P, and there will be some x such that 2^x = m mod P, so:
if (m <= 1 || n == 1)
    return m;
if (n == 0)
    return 1;
let y = 2;
let r = 1 << n;                // r = y^n holds, since y = 2 and 2^n = 1 << n
while (y != m)
{
    y = (y * 2) % P;           // advance y to the next power of 2 mod P
    r = (r * (1 << n)) % P;    // multiply by 2^n to restore the invariant r = y^n mod P
}
return r;
Unless your function needs to produce bignum results, you can just pick the largest P that fits into an integer in your language.
There is no useful relation between (y+1)^n and y^n (you can write (y+1)^n = ((y^n)^(1/n) + 1)^n or (y+1)^n = (1 + 1/y)^n · y^n, but this leads you nowhere).
If y were factored, you could exploit (a·b)^n = (a^n)·(b^n), but you would need a table of the nth powers of the primes.
I can't see an answer that makes sense.
You can also think of the binomial theorem:
(y+1)^n = y^n + n·y^(n-1) + n(n-1)/2 · y^(n-2) + ... + 1
but this is worse than anything: you need to compute n binomial coefficients and update all the powers of y from 0 to n. The total cost of the computation would be ridiculously high.
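A rough count (my addition, assuming one multiplication per term update) makes that cost concrete:

cost per step y -> y+1 : Θ(n) updates (n binomial terms and the powers of y)
steps to reach y = m   : m - 2 (advancing from y = 2)
total                  : Θ(m·n) operations just to maintain the expansion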
I'm trying to implement Pollard Rho based on pseudocode I found on Wikipedia, but it doesn't appear to work for the numbers 4, 8, and 25, and I have no clue why.
Here's my code:
long long x = initXY;
long long y = initXY;
long long d = 1;
while (d == 1) {
    x = polynomialModN(x, n);
    y = polynomialModN(polynomialModN(y, n), n);
    d = gcd(labs(x - y), n);
}
if (d == n)
    return getFactor(n, initXY + 1);
return d;
This is my polynomial function:
long long polynomialModN(long long x, long long n) {
    return (x * x + 1) % n;
}
And this is example pseudocode from Wikipedia:
x ← 2; y ← 2; d ← 1
while d = 1:
    x ← g(x)
    y ← g(g(y))
    d ← gcd(|x - y|, n)
if d = n:
    return failure
else:
    return d
The only difference: I don't return failure but instead retry with different initializing values, since Wikipedia also notes this:
Here x and y correspond to x_i and x_j in the section about the core idea. Note that this algorithm may fail to find a nontrivial factor even when n is composite. In that case, the method can be tried again, using a starting value other than 2 or a different g(x).
Does Pollard-Rho just not work for certain numbers? What are their characteristics? Or am I doing something wrong?
Pollard Rho does not work on even numbers. If you have an even number, first remove all factors of 2 before applying Pollard Rho to find the odd factors.
Pollard Rho properly factors 25, but it finds both factors of 5 at the same time, so the divisor it returns is 25 itself. That's correct, but not useful. So Pollard Rho will not find the factors of a perfect power (square, cube, and so on).
Although I didn't run it, your Pollard Rho function looks okay. Wikipedia's advice to change the starting point might work, but generally doesn't. It is better, as Wikipedia also suggests, to change the random function g. The easiest way to do that is to increase the addend: instead of x^2 + 1, use x^2 + c, where c starts at 1 and increases to 2, 3, … after each failure.
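A minimal sketch of that retry strategy (my own illustration, not the questioner's code; it uses the GCC/Clang __int128 extension to keep x*x from overflowing, and std::gcd from C++17):

#include <cstdlib>  // std::abs(long long)
#include <numeric>  // std::gcd (C++17)

// g(x) = (x*x + c) mod n; the 128-bit intermediate keeps x*x exact
// even when x is close to n - 1.
long long g(long long x, long long c, long long n) {
    return (long long)(((__int128)x * x + c) % n);
}

// Pollard Rho with a parameterized addend c; returns n itself on failure.
long long pollardRho(long long n, long long c) {
    long long x = 2, y = 2, d = 1;
    while (d == 1) {
        x = g(x, c, n);
        y = g(g(y, c, n), c, n);
        d = std::gcd(std::abs(x - y), n);
    }
    return d;
}

// Retry with c = 1, 2, 3, ... after each failure.
// Assumes n is odd, composite, and not a perfect power (see the caveats
// above); otherwise this loop may never find a nontrivial factor.
long long factor(long long n) {
    for (long long c = 1; ; c++) {
        long long d = pollardRho(n, c);
        if (d != n)
            return d; // nontrivial factor
    }
}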
Here, as x can be as big as n-1, the product in your polynomialModN function will overflow.
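One way to avoid that overflow (a sketch, relying on the GCC/Clang __int128 extension) is to do the squaring in a 128-bit intermediate before reducing mod n:

// Overflow-safe version of the questioner's polynomialModN: (x*x + 1) mod n.
// The 128-bit product holds x*x exactly even when x is close to n - 1.
long long polynomialModN(long long x, long long n) {
    unsigned __int128 xx = (unsigned __int128)x * (unsigned __int128)x;
    return (long long)((xx + 1) % (unsigned __int128)n);
}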
I know what f(n) = Θ(g(n)) or f(n) = O(g(n)) means, but I get confused when I see something like Θ(f(n)) = Θ(g(n)), i.e., when the asymptotic notation appears on both sides. Can anyone please explain what this means?
I came across this while solving the following problem: there are 3 algorithms:
X : is polynomial
Y : is exponential
Z : is double exponential
There are 4 options in the answers:
a) theta(X) = theta(Y)
b) theta(X) = theta(Z)
c) theta(Y) = theta(Z)
d) BigOh(Z) = X
The correct answer is option C.
Can anyone please explain why?
C = Θ(D), in simple language, means there are two tight bounds, say A and B, such that C can be sandwiched between them; that is, A <= C <= B.
A and B depend on D. That is, A = aD and B = bD, where a and b are constants.
In general, Θ(P) = Θ(Q) means either that the bounds specified by P (aP and bP) and by Q (aQ and bQ) are equal, i.e., aP = aQ and bP = bQ, or that one pair of bounds is contained inside the other, i.e., aP <= aQ <= bQ <= bP or aQ <= aP <= bP <= bQ.
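For a small worked instance of the "contained bounds" case (my addition), take P = x^2 and Q = 2x^2:

1·x^2 <= 2x^2 <= 2·x^2   for all x >= 0
so Q is sandwiched between constant multiples of P, and Θ(2x^2) = Θ(x^2)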
Y = exponential = 1.5^x
Z = double exponential = 1.5^(1.5^x)
Plotting the two, it can be seen that the bounds on the exponential function (1.5^x) can contain the bounds of the double exponential function (1.5^(1.5^x)). Hence Θ(Y) = Θ(Z). In fact, the bounds of the exponential function can be used as bounds of the double exponential function.