How can I prove the following algorithm?

Exp(n)
    If n = 0
        Return 1
    End If
    If n % 2 == 0
        temp = Exp(n/2)
        Return temp × temp
    Else  // n is odd
        temp = Exp((n−1)/2)
        Return temp × temp × 2
    End If
How can I prove by strong induction on n that, for all n ≥ 1, the number of multiplications made by Exp(n) is ≤ 2 log2 n?
PS: Exp(n) = 2^n

A simple way is to use strong induction.
First, prove that Exp(0) terminates and returns 2^0.
Let N be some arbitrary even nonnegative number.
Assume the function Exp correctly calculates and returns 2^n for every n in [0, N].
Under this assumption, prove that Exp(N+1) and Exp(N+2) both terminate and correctly return 2^(N+1) and 2^(N+2).
You're done! By induction it follows that for any nonnegative N, Exp(N) correctly returns 2^N.
PS: Note that in this post, 2^N means "two to the power of N" and not "bitwise xor of the binary representations of 2 and N".

The program exactly applies the following recurrence:
P[0] = 1
n even → P[n] = P[n/2]²
n odd → P[n] = 2 · P[(n−1)/2]²
The program always terminates because, for n > 0, both n/2 and (n−1)/2 are strictly less than n, so the argument of the recursive call always decreases.
P[n] = 2^n is the solution of the recurrence. Indeed,
n = 0 → 2^0 = 1
n = 2m → 2^n = (2^m)²
n = 2m+1 → 2^n = 2 · (2^m)²
and this covers all cases.
As every call decreases the number of significant bits of n by one and performs one or two multiplications, the total number of multiplications does not exceed twice the number of significant bits of n (which is ⌊log2 n⌋ + 1).
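To see this bound concretely, here is a small sketch (my own illustration, not from the original post; the helper name exp_counted is mine) that counts the multiplications Exp performs and checks them against twice the bit length of n:

def exp_counted(n):
    # Returns (2**n, number of multiplications performed).
    if n == 0:
        return 1, 0
    if n % 2 == 0:
        temp, mults = exp_counted(n // 2)
        return temp * temp, mults + 1        # one multiplication
    else:
        temp, mults = exp_counted((n - 1) // 2)
        return temp * temp * 2, mults + 2    # two multiplications

for n in range(64):
    value, mults = exp_counted(n)
    assert value == 2 ** n
    assert mults <= 2 * n.bit_length()       # at most 2 per significant bit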

Related

Find if N! is divisible by N^2

I have got an exercise which requires writing a program that finds whether N! is divisible by N^2, where
1 ≤ N ≤ 10^9
I wanted to do this the easy way, by computing the factorial and dividing it by N squared, but obviously that won't work for N this large.
Just an algorithm or pseudocode would be enough.
For any n > 4, if n is a prime, then n! is not evenly divisible by n^2.
Here is a simple explanation to support my argument:
After n! is divided by n, we are left with (n-1)! in the numerator that needs to be divided by n. So we need n or a multiple of n in the numerator in order for (n-1)! to be evenly divisible by n, which can never happen when n is prime.
When n is composite (with the sole exception of n = 4), the numerator does contain such a multiple, so the division always works out. Check it out for yourself by diving into a bit of number theory.
Hope it helps!
Edit: Here is simple Python code for the above. Complexity is O(sqrt(N)):

def checkPrime(n):
    # Trial division up to and including sqrt(n).
    i = 2
    while i * i <= n:
        if n % i == 0:
            return "Yes"  # n is composite, so n! is divisible by n^2
        i = i + 1
    return "No"  # n is prime, so n! is not divisible by n^2

def main():
    n = int(input())
    if n == 1:
        print("Yes")
    elif n == 4:
        print("No")
    else:
        print(checkPrime(n))

main()
Input:
7
Output:
No
This is related to, though easier than, Wilson's Theorem, which says that a number n > 1 is prime if and only if
(n − 1)! ≡ −1 (mod n)
This is algebraically equivalent to saying that n > 1 is prime if and only if
n! ≡ −n (mod n^2)
Furthermore, it is known and easy to prove that (to quote the Wikipedia article)
With the sole exception of 4, where 3! = 6 ≡ 2 (mod 4), if n is
composite then (n − 1)! is congruent to 0 (mod n).
Hence, with the sole exception of 4, if n is composite then (n − 1)! ≡ 0 (mod n), hence n! ≡ 0 (mod n^2); and if n is prime, n! ≡ −n ≡ n^2 − n (mod n^2), hence n! isn't congruent to 0 in that case.
The full power of Wilson's theorem is needed if you want to show that for prime n, n! leaves a remainder of exactly n^2-n upon division by n^2. For this problem all you need to know is that it isn't zero.
In any event, you could just write a program which runs a primality check, although whether or not that would be considered a valid solution is up to whoever assigned the problem.
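As a quick numeric illustration (my own sketch, not part of the original answer), you can check these congruences directly for small n:

from math import factorial

for n in range(2, 12):
    r = factorial(n) % (n * n)
    # Prime n gives n^2 - n; composite n (except 4) gives 0.
    print(n, r)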

Order of growth of the following functions

Can someone tell me if the following ranking of functions by order of growth is correct? (decreasing order)
2^n, n^2, (n lg n and lg(n!), which are the same order), n^(1/lg n), 4
Most of these are right. However, look at n^(1/lg n).
Notice that, for n > 1, we have n = 2^(lg n), so
n^(1/lg n) = (2^(lg n))^(1/lg n) = 2^(lg n / lg n) = 2^1 = 2.
So while 2 and 4 do grow at the same (nonexistent) rate, n^(1/lg n) is always smaller than 4 for any n > 1.
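A one-line numeric check (my own sketch, assuming base-2 logarithms as in the answer) makes this concrete:

import math

for n in [2, 10, 100, 10**6]:
    print(n ** (1 / math.log2(n)))  # prints 2.0 (up to floating-point error) each time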

Time complexity of iterating over a k-layer deep loop nest always Θ(nᵏ)?

Many algorithms have loops in them that look like this:
for a from 1 to n
    for b from 1 to a
        for c from 1 to b
            for d from 1 to c
                for e from 1 to d
                    ...
                    // Do O(1) work
In other words, the loop nest is k layers deep, the outer layer loops from 1 to n, and each inner layer loops up from 1 to the index above it. This shows up, for example, in code to iterate over all k-tuples of positions inside an array.
Assuming that k is fixed, is the runtime of this code always Θ(n^k)? For the special case where k = 1, the work is Θ(n) because it's just a standard loop over an array, and for the case where k = 2 the work is Θ(n^2) because the work done by the inner loop is given by
1 + 2 + ... + n = n(n + 1)/2 = Θ(n^2)
Does this pattern continue when k gets large? Or is it just a coincidence?
Yes, the time complexity will be Θ(n^k). One way to measure the complexity of this code is to look at what values it generates. One particularly useful observation is that, because each loop counts up to (and including) the index above it, these loops will iterate over all possible k-element multisets drawn from {1, 2, 3, ..., n} (all non-increasing k-tuples; since the indices may repeat, these are multisets rather than subsets) and will spend O(1) time producing each one of them. Therefore, we can say that the runtime is given by the number of such multisets. Given an n-element set, the number of k-element multisets is (n + k − 1) choose k, which is equal to
(n + k − 1)! / k!(n − 1)!
This is given by
(n + k − 1)(n + k − 2) ... (n + 1) n / k!
Each of the k factors is no greater than n + k − 1, and n + k − 1 ≤ 2n once n ≥ k − 1, so this value is certainly no greater than this one:
2n · 2n · ... · 2n / k! (with k copies of 2n)
= 2^k n^k / k!
This expression is O(n^k), since the 2^k / k! term is a fixed constant. Similarly, each factor is at least n, so this expression is greater than or equal to
n · n · ... · n / k! (with k copies of n)
= n^k / k!
This is Ω(n^k), since 1 / k! is a fixed constant.
Since the runtime is O(n^k) and Ω(n^k), the runtime is Θ(n^k).
Hope this helps!
You may also use the following equation:
total operations = c · ((n + r − 1) choose r)
where c is the number of constant-time operations inside the innermost loop, n is the number of elements, and r is the number of nested loops.
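A brute-force count (my own sketch; the helper name count_iterations is mine) confirms the multiset formula for a 3-deep nest:

from math import comb

def count_iterations(n):
    total = 0
    for a in range(1, n + 1):
        for b in range(1, a + 1):
            for c in range(1, b + 1):
                total += 1
    return total

for n in [5, 10, 20]:
    assert count_iterations(n) == comb(n + 2, 3)  # C(n + 3 - 1, 3), which is Θ(n^3)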

How to solve the recurrence of the given algorithm?

int gcd(int n, int m)
{
    if (n % m == 0) return m;
    n = n % m;
    return gcd(m, n);
}
I solved this and I got
T(n, m) = 1 + T(m, n % m) if n > m
        = 1 + T(m, n)     if n < m
        = 1               if n % m == 0
I am confused about how to proceed further to get the final result. Please help me solve this.
The problem here is that the size of the next values of m and n depend on exactly what the previous values were, not just their size. Knuth goes into this in detail in "The Art of Computer Programming" Vol 2: Seminumerical algorithms, section 4.5.3. After about five pages he proves what you might have guessed, which is that the worst case is when m and n are consecutive fibonacci numbers. From this (or otherwise!) it turns out that in the worst case the number of divisions required is linear in the logarithm of the larger of the two arguments.
After a great deal more heavy-duty math, Knuth proves that the average case is also linear in the logarithm of the arguments.
mcdowella has given a perfect answer to this.
For an intuitive explanation you can think of it this way:
if n ≥ m, then n mod m < n/2.
This can be shown by cases:
if m ≤ n/2, then n mod m < m ≤ n/2;
if m > n/2, then n mod m = n − m < n/2.
So effectively you are halving the larger input, and in two calls both of the arguments will be halved.
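To see the worst case mcdowella describes, here is a quick experiment (my own sketch; the helper name gcd_calls is mine): consecutive Fibonacci numbers maximize the number of recursive calls, and the call count grows with the logarithm of the inputs:

def gcd_calls(n, m):
    # Returns (gcd, number of calls made).
    if n % m == 0:
        return m, 1
    g, calls = gcd_calls(m, n % m)
    return g, calls + 1

a, b = 1, 1
while b < 10**6:
    a, b = b, a + b  # step to the next pair of consecutive Fibonacci numbers
    print(b, gcd_calls(b, a)[1])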

How do I find the time complexity T(n) and show that it is tightly bounded (Big Theta)?

I'm trying to figure out how to give a worst-case time complexity, and I'm not sure about my analysis. I have read that nested for loops are O(n^2); is this also correct for a for loop with a while loop inside it?
// A is an array of real numbers.
// The size of A is n. i, j are of type int, key is of type real.
Procedure IS(A)
    for j = 2 to length[A]
    {
        key = A[j]
        i = j - 1
        while i > 0 and A[i] > key
        {
            A[i+1] = A[i]
            i = i - 1
        }
        A[i+1] = key
    }
So far I have:
j = 2 (+1 op)
i > 0 (+n ops)
A[i] > key (+n ops)
so T(n) = 2n + 1?
But I'm not sure whether I have to go inside the while and for loops to analyze the worst-case time complexity...
Now I have to prove that it is tightly bounded, that is, Big Theta.
I've read that nested for loops have Big O of n^2. Is this also true for Big Theta? If not, how would I go about finding Big Theta?
**Here C1 = C sub 1, C2 = C sub 2, and n0 = n naught; all are positive real numbers.
To find T(n) I looked at the values of j and at how many times the while loop executes:
values of j: 2, 3, 4, ..., n
loop executes: 1, 2, 3, ..., n − 1
Analysis:
Take the summation of the while-loop executions and recognize that it is n(n − 1)/2.
I will assign this as my T(n) and prove it is tightly bounded by n^2.
That is, n(n − 1)/2 = Θ(n^2).
Scratch work:
Find C1, C2, n0 ∈ R+ such that 0 ≤ C1(n^2) ≤ n(n − 1)/2 ≤ C2(n^2) for all n ≥ n0.
To make 0 ≤ C1(n^2) true, C1 and n0 can be any positive reals.
To make C1(n^2) ≤ n(n − 1)/2, C1 = 1/4 works once n ≥ 2.
To make n(n − 1)/2 ≤ C2(n^2), C2 must be ≥ 1/2.
PF:
Find C1, C2, n0 ∈ R+ such that 0 ≤ C1(n^2) ≤ n(n − 1)/2 ≤ C2(n^2) for all n ≥ n0.
Let C1 = 1/4, C2 = 1 and n0 = 2.
1. Show that 0 ≤ C1(n^2) is true:
C1(n^2) = n^2/4
n^2/4 ≥ n0^2/4 = 1 > 0
Therefore C1(n^2) ≥ 0 is proven true!
2. Show that C1(n^2) ≤ n(n − 1)/2 is true:
n^2/4 ≤ n(n − 1)/2
n^2 ≤ 2n(n − 1)
n^2 ≤ 2n^2 − 2n
2n ≤ n^2
2 ≤ n
This we know is true since n ≥ n0 = 2.
Therefore C1(n^2) ≤ n(n − 1)/2 is proven true!
3. Show that n(n − 1)/2 ≤ C2(n^2) is true:
n(n − 1)/2 ≤ C2(n^2)
(n − 1)/2 ≤ C2(n)
n − 1 ≤ 2 C2(n)
n − 1 ≤ 2n
−1 ≤ n
This also we know to be true since n ≥ n0 = 2.
Hence by 1, 2 and 3, n(n − 1)/2 = Θ(n^2) since
0 ≤ C1(n^2) ≤ n(n − 1)/2 ≤ C2(n^2) for all n ≥ n0.
Tell me what you think, guys... I'm trying to understand this material and would like y'all's input!
You seem to be implementing the insertion sort algorithm, which Wikipedia claims is O(N^2).
Generally, you break the analysis down in terms of your variable N rather than your constants C when dealing with Big-O. In your case, all you need to do is look at the loops.
Your two loops are (worst case):
for j = 2 to length[A]
    i = j - 1
    while i > 0
        /*action*/
        i = i - 1
The outer loop is O(N), because it directly relates to the number of elements.
Notice how your inner loop depends on the progress of the outer loop. That means that (ignoring off-by-one issues) the inner and outer loops are related as follows:
 j's     inner
value    loops
-----    -----
  2        1
  3        2
  4        3
 ...      ...
  N       N-1
-----    -----
total   (N-1)·N/2
So the total number of times that /*action*/ is encountered is (N^2 − N)/2, which is O(N^2).
Looking at the number of nested loops isn't the best way to go about getting a solution. It's better to look at the "work" that's being done in the code, under a heavy load N. For example,
for(int i = 0; i < a.size(); i++)
{
    for(int j = 0; j < a.size(); j++)
    {
        // Do stuff
        i++;   // note: the inner loop also advances the outer index
    }
}
is O(N), because the i++ inside the inner loop advances the outer index as well, so the total work across both loops is proportional to N rather than N^2.
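To make that claim checkable, here is a small counter (my own sketch; the helper name count_work is mine) for the same pattern:

def count_work(n):
    work = 0
    i = 0
    while i < n:
        j = 0
        while j < n:
            work += 1
            i += 1  # the inner loop advances the outer index too
            j += 1
        i += 1
    return work

for n in [10, 100, 1000]:
    print(n, count_work(n))  # work equals n, not n^2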
A function f is in Big-Theta of g if it is both in Big-Oh of g and Big-Omega of g.
The worst case happens when the data in A is monotonically decreasing. Then, for every iteration of the outer loop, the while loop runs all the way down. If each statement contributed a time value of 1, the total time would be 5 · (1 + 2 + ... + (n − 1)) = 5 · n(n − 1)/2. This gives a quadratic dependence on the data.
However, if the data in A is a monotonically increasing sequence, the condition A[i] > key will always fail, so the while loop body never executes. The outer loop then takes constant time per iteration, executing n − 1 times, so the best case of f has a linear dependence on the data.
For the average case, we take the next number in A and find its place in the portion already sorted. On average, this number will be in the middle of that range, which implies the inner while loop will run half as often as in the worst case, still giving a quadratic dependence on the data.
Big O is (basically) about how many times the elements in your input will be looked at in order to complete a task.
For example, an O(n) algorithm will iterate through every element just once.
An O(1) algorithm will not have to iterate through every element at all; it will know exactly where to look because it has an index. An example of this is an array or hash table lookup.
The reason a loop inside of a loop is O(n^2) is that, for each of the n elements, the inner loop iterates over all n elements again, giving n · n iterations in total. Changing the type of the loop has nothing to do with it, since it's essentially about the number of iterations.
There are approaches to algorithms that will allow you to reduce the number of iterations you need. Examples of these are "divide & conquer" algorithms like Quicksort, which, if I recall correctly, is O(n log n) on average.
It's tough to come up with a better alternative to your example without knowing more specifically what you're trying to accomplish.
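For completeness, here is a quick empirical check (my own sketch; the helper name insertion_sort_shifts is mine) of the best and worst cases of the insertion sort above, counting how many times the inner while-loop body runs:

def insertion_sort_shifts(a):
    a = list(a)
    shifts = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:
            a[i + 1] = a[i]
            i -= 1
            shifts += 1
        a[i + 1] = key
    return shifts

n = 100
print(insertion_sort_shifts(range(n, 0, -1)))  # worst case (reversed): 4950 = n(n-1)/2
print(insertion_sort_shifts(range(n)))         # best case (already sorted): 0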
