Find if N! is divisible by N^2 - algorithm

I have got an exercise which requires writing a program that determines whether N! is divisible by N^2, where
1 ≤ N ≤ 10^9
I wanted to do this the easy way, by building a factorial function and dividing the result by N^2, but for N this large that obviously won't work.
Just an algorithm or pseudo-code would be enough.

For any n > 4, if n is a prime, then n! is not evenly divisible by n^2.
Here is a simple explanation to support my argument:
After n! is divided by n, we are left with (n-1)! in the numerator, which still needs to be divided by n. So we need n or a multiple of n among the factors of the numerator for (n-1)! to be evenly divisible by n, which can never happen when n is prime, since none of 1, 2, ..., n-1 is divisible by n.
When n is composite (and not 4), on the other hand, this always works out, because the factors of n already appear among 1, 2, ..., n-1. Check it out for yourself by diving into a bit of number theory.
Hope it helps!!!
Edit: Here is a simple Python program for the above. Complexity is O(sqrt(N)):

def checkPrime(n):
    # Trial division up to sqrt(n); using i*i <= n so that squares of primes
    # such as 9 and 25 are correctly detected as composite.
    i = 2
    while i * i <= n:
        if n % i == 0:
            return "Yes"  # non-prime, so n! is divisible by n^2
        i = i + 1
    return "No"           # prime, so n! is not divisible by n^2

def main():
    n = int(input())
    if n == 1:
        print("Yes")      # 1! = 1 is divisible by 1^2 = 1
    elif n == 4:
        print("No")       # 4! = 24 is not divisible by 16
    else:
        print(checkPrime(n))

main()
Input:
7
Output:
No

This is related to, though easier than, Wilson's Theorem, which says that a number n > 1 is prime if and only if
(n-1)! ≡ -1 (mod n)
This is algebraically equivalent to saying that n > 1 is prime if and only if
n! ≡ -n (mod n^2)
Furthermore, it is known and easy to prove that (to quote the Wikipedia article)
With the sole exception of 4, where 3! = 6 ≡ 2 (mod 4), if n is
composite then (n − 1)! is congruent to 0 (mod n).
Hence, with the sole exception of 4, if n is composite then (n-1)! ≡ 0 (mod n), hence n! ≡ 0 (mod n^2); and if n is prime then n! ≡ -n ≡ n^2 - n (mod n^2), hence n! isn't congruent to 0 in that case.
The full power of Wilson's theorem is needed if you want to show that for prime n, n! leaves a remainder of exactly n^2-n upon division by n^2. For this problem all you need to know is that it isn't zero.
In any event, you could just write a program which runs a primality check, although whether or not that would be considered a valid solution is up to whoever assigned the problem.
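To see the pattern concretely, here is a minimal brute-force sketch, assuming Python 3 (the helper name verify_pattern is mine, not from the question), that computes n! mod n^2 for small n:

import math

def verify_pattern(limit):
    # For each n, print n! mod n^2: the argument above predicts 0 for n = 1 and
    # for composite n other than 4, and n^2 - n for prime n.
    for n in range(1, limit + 1):
        remainder = math.factorial(n) % (n * n)
        print(n, remainder, "divisible" if remainder == 0 else "not divisible")

verify_pattern(12)

For n = 4 the remainder comes out as 8, which is neither 0 nor n^2 - n, matching the "sole exception" noted above.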

Related

Give an example of a sequence of n integers with Ω(n^2) inversions

I'm working on an exercise but it's confusing to understand.
I assume that the worst case for inversions is a sequence sorted in descending order, [n, n-1, ..., 0], which has n(n-1)/2 inversions. Then we have to prove that n(n-1)/2 >= C · n^2, based on the definition of Omega, which is g(n) >= C · f(n). But when n goes to infinity, the ratio g(n)/f(n) goes to 1/2, so there doesn't exist a constant C with C >= 1 and n0 >= 1 that satisfies the inequality.
What am I missing?
I'm assuming that the question you are asked is, Give an example of a sequence of n integers with Ω(n^2) inversions.
They worded this a bit sloppily. But here is what you need to come up with:
The rule for generating sequences.
A number 0 < n_0.
A constant 0 < C such that if n_0 < n then the number of inversions of the nth sequence is greater than C * n^2.
You've already given a rule that you think might work: n numbers sorted in descending order.
I would suggest that you try n_0 = 2 and C = 1/4.
Can you prove this statement?
If 2 < n, the descending sequence of length n has more than (1/4) * n^2 inversions.
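As a quick sanity check (not a proof), here is a minimal sketch, assuming Python 3 (count_inversions is my helper, not from the exercise), that counts the inversions of the descending sequence by brute force and compares them with n^2 / 4:

def count_inversions(seq):
    # Brute-force O(n^2) count of pairs (i, j) with i < j and seq[i] > seq[j].
    n = len(seq)
    return sum(1 for i in range(n) for j in range(i + 1, n) if seq[i] > seq[j])

for n in (3, 5, 10, 50):
    descending = list(range(n, 0, -1))            # n, n-1, ..., 1
    inversions = count_inversions(descending)
    print(n, inversions, n * n / 4, inversions > n * n / 4)   # True for every n > 2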

How can I prove the following algorithm?

Exp(n)
    If n = 0
        Return 1
    End If
    If n % 2 == 0
        temp = Exp(n/2)
        Return temp × temp
    Else  // n is odd
        temp = Exp((n−1)/2)
        Return temp × temp × 2
    End If
How can I prove by strong induction on n that, for all n ≥ 1, the number of multiplications made by Exp(n) is ≤ 2 log2(n)?
P.S.: Exp(n) = 2^n
A simple way is to use strong induction.
First, prove that Exp(0) terminates and returns 2^0.
Let N be some arbitrary even nonnegative number.
Assume the function Exp correctly calculates and returns 2^n for every n in [0, N].
Under this assumption, prove that Exp(N+1) and Exp(N+2) both terminate and correctly return 2^(N+1) and 2^(N+2).
You're done! By induction it follows that for any nonnegative N, Exp(N) correctly returns 2^N.
PS: Note that in this post, 2^N means "two to the power of N" and not "bitwise xor of the binary representations of 2 and N".
The program applies exactly the following recurrence:
P[0] = 1
n even -> P[n] = P[n/2]²
n odd  -> P[n] = P[(n-1)/2]² · 2
The program always terminates because, for n > 0, both n/2 and (n-1)/2 are less than n, so the argument of the recursive calls always decreases.
P[n] = 2^n is the solution of the recurrence. Indeed,
n = 0 -> 2^0 = 1
n = 2m -> 2^n = (2^m)²
n = 2m+1 -> 2^n = 2 · (2^m)²
and this covers all cases.
As every call decreases the number of significant bits of n by one and performs one or two multiplications, the total number of multiplications does not exceed twice the number of significant bits of n.
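To check that bound numerically, here is a minimal sketch, assuming Python 3 (exp_count is my instrumented variant of the pseudocode, not part of the question), that counts the multiplications and compares them with twice the bit length of n:

def exp_count(n):
    # Mirrors Exp(n) above, returning (2**n, number of multiplications performed).
    if n == 0:
        return 1, 0
    if n % 2 == 0:
        temp, mults = exp_count(n // 2)
        return temp * temp, mults + 1        # one multiplication
    temp, mults = exp_count((n - 1) // 2)
    return temp * temp * 2, mults + 2        # two multiplications

for n in (1, 5, 16, 1000):
    value, mults = exp_count(n)
    assert value == 2 ** n
    print(n, mults, 2 * n.bit_length())      # mults never exceeds 2 * bit length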

What is the correct time complexity for an n-factorial-time function?

I am very new to this topic and I am trying to grasp everything related to the asymptotic notations. I want to ask for your opinion on the following question:
If we have, for an algorithm, that T(n)=n!, then we can say for its time complexity that:
1 x 1 x 1 ... x 1 <= n! <= n x n x n ... x n
This relation means that n! = O(n^n) and n! = Ω(1). However, can't we do better? We want the big-oh to be as close as we can to the function T(n). If we do the following:
n! <= 1 x 2 x 3 x 4 ... x n x n
That is, for the second-to-last element, we replace (n-1) with n. Now isn't this relation true? So isn't it true that n! = O(1 x 2 ... x n x n)? Something similar can be said for the lower bound Ω.
I am not sure if there is an error in my thought process, so I would really appreciate your input. Thanks in advance.
The mathematical statement n! = O(1 x 2 ... x n x n) is true. But also not terribly helpful nor enlightening. In what situations do you want to write n! = O(...)?
Either you are satisfied with n! = n!, and you don't need to write n! = O(1 x 2 ... x n x n). Or you are not satisfied with n! = n!; you want something that explains better exactly how large n! is; then you shouldn't be satisfied with n! = O(1 x 2 ... x n x n) either, as it is not any easier to understand.
Personally, I am satisfied with polynomials, like n^2. I am satisfied with exponentials, like 2^n. I am also somewhat satisfied with n^n, because I know n^n = 2^(n log n), and I also know I can't hope to find a better expression for n^n.
But I am not satisfied with n!. I would like to be able to compare it to exponentials.
Here are two comparisons:
n! < n^n
2^n < n!
The first one is obtained by upperbounding every factor by n in the product; the second one is obtained by lowerbounding every factor by 2 in the product.
That's already pretty good; it tells us that n! is somewhere between the exponential 2^n and the superexponential n^n.
But you can easily tell that the upperbound n^n is too high; for instance, you can find the following tighter bounds quite easily:
n! < n^(n-1)
n! < 2 * n^(n-2)
n! < 6 * n^(n-3)
Note that n^(n-3) is a lot smaller than n^n when n is big! This is slightly better, but still not satisfying.
You could go even further, and notice that half the factors are smaller than n/2, thus:
n! < (n/2)^(n/2) * n^(n/2) = (1/2)^(n/2) * n^n = (n / sqrt(2))^n =~ (0.7 n)^n
This is a slightly tighter upper bound! But can we do even better? I am still not satisfied.
If you are not satisfied either, I encourage you to read: https://en.wikipedia.org/wiki/Stirling%27s_approximation
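To get a numeric feel for how tight Stirling's formula is, here is a minimal sketch assuming Python 3 (the values printed are illustrative, not a proof):

import math

def stirling(n):
    # Stirling's approximation: n! is roughly sqrt(2*pi*n) * (n/e)**n.
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in (5, 10, 20, 50):
    print(n, stirling(n) / math.factorial(n))   # ratio approaches 1 as n grows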

Is complexity O(log(n)) equivalent to O(sqrt(n))?

My professor just taught us that, as a rule of thumb, any operation that halves the length of the input has an O(log(n)) complexity. Why is it not O(sqrt(n))? Aren't both of them equivalent?
They are not equivalent: sqrt(N) will increase a lot more quickly than log2(N). There is no constant C such that you would have sqrt(N) < C·log(N) for all values of N greater than some minimum value.
An easy way to grasp this is that log2(N) will be a value close to the number of (binary) digits of N, while sqrt(N) will be a number that has itself half the number of digits that N has. Or, to state that with an equality:
        log2(N) = 2log2(sqrt(N))
So you need to take the logarithm(!) of sqrt(N) to bring it down to the same order of complexity as log2(N).
For example, for a binary number with 11 digits, 0b10000000000 (= 2^10 = 1024), the square root is 0b100000 (= 32), but the logarithm is only 10.
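To illustrate the digit-count argument numerically, here is a minimal sketch assuming Python 3 (the chosen exponents are arbitrary):

import math

for e in (10, 20, 40, 80):
    N = 2 ** e
    # log2(N) is close to the number of binary digits of N,
    # while sqrt(N) has roughly half as many binary digits as N itself.
    print("bits of N:", N.bit_length(),
          "log2(N):", int(math.log2(N)),
          "bits of sqrt(N):", math.isqrt(N).bit_length())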
Assuming natural logarithms (otherwise just multiply by a constant), we have
lim {n->inf} log n / sqrt(n) = (inf / inf)
= lim {n->inf} (1/n) / (1/(2*sqrt(n)))    (by L'Hospital)
= lim {n->inf} 2*sqrt(n)/n
= lim {n->inf} 2/sqrt(n)
= 0 < inf
Refer to https://en.wikipedia.org/wiki/Big_O_notation for an alternative definition of O(·); from the above we can say log n = O(sqrt(n)).
Also, comparing the growth of the two functions: with the natural logarithm, log n is bounded above by sqrt(n) for all n > 0.
Just compare the two functions:
sqrt(n) ---------- log(n)
n^(1/2) ---------- log(n)
Take the log of both:
log( n^(1/2) ) --- log( log(n) )
(1/2) log(n) ----- log( log(n) )
It is clear that: const · log(n) > log(log(n))
No, it's not equivalent.
@trincot gave an excellent explanation with an example in his answer. I'm adding one more point. Your professor taught you that
any operation that halves the length of the input has an O(log(n)) complexity
It's also true that
any operation that reduces the length of the input by 2/3rds has an O(log3(n)) complexity,
any operation that reduces the length of the input by 3/4ths has an O(log4(n)) complexity,
any operation that reduces the length of the input by 4/5ths has an O(log5(n)) complexity,
and so on.
It's even true for any reduction of the length of the input by (B-1)/Bths; the complexity is then O(logB(n)).
N.B.: O(logB(n)) means the base-B logarithm of n.
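To make those step counts concrete, here is a minimal sketch, assuming Python 3 (the helper steps is mine, not from any of the answers), that counts how many times n can be divided by a base B before reaching 1 and compares the count with logB(n):

import math

def steps(n, b):
    # Count how many integer divisions by b it takes to reduce n to 1.
    count = 0
    while n > 1:
        n //= b
        count += 1
    return count

n = 10**6
for b in (2, 3, 4, 5):
    print(b, steps(n, b), math.log(n, b))   # the step count tracks log_b(n)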
One way to approach the problem is to compare the rates of growth of the two functions, i.e. their derivatives:
(1)  d/dn sqrt(n) = 1/(2·sqrt(n))
(2)  d/dn log(n)  = 1/n
As n increases we see that (2) is less than (1): when n = 10,000, (1) equals 0.005 while (2) equals 0.0001.
Hence log(n), the more slowly growing function, is better as n increases.
No, they are not equivalent; you can even prove that
O(n**k) > O(log(n, base))
for any k > 0 and base > 1 (k = 1/2 in case of sqrt).
When talking about O(f(n)) we want to investigate the behaviour for large n;
limits are a good means for that. Suppose that both big-Os are equivalent:
O(n**k) = O(log(n, base))
which means there's a some finite constant C such that
O(n**k) <= C * O(log(n, base))
starting from some large enough n; to put it in other terms (log(n, base) is not 0 for large enough n, and both functions are continuously differentiable):
lim(n**k / log(n, base)) <= C
n->+inf
To find out the limit's value we can use L'Hospital's Rule, i.e. take the derivatives of the numerator and denominator and divide them:
lim(n**k / log(n, base)) =
lim([k * n**(k-1)] / [1 / (n * ln(base))]) =
ln(base) * k * lim(n**k) = +infinity
so we can conclude that there's no constant C such that n**k < C * log(n, base) for all large enough n, or in other words
O(n**k) > O(log(n, base))
No, it isn't.
When we are dealing with time complexity, we think of the input as a very large number. So let's take n = 2^18. Now for sqrt(n) the number of operations will be 2^9, and for log(n) it will be equal to 18 (we consider log with base 2 here). Clearly 2^9 is much, much greater than 18.
So, we can say that O(log n) is smaller than O(sqrt n).
To prove that sqrt(n) grows faster than lg(n) (base 2), you can take the limit of the second over the first and show that it approaches 0 as n approaches infinity.
lim(n -> inf) of (lg(n) / sqrt(n))
Applying L'Hopital's Rule:
= lim(n -> inf) of (2 / (sqrt(n) * ln 2))
Since sqrt(n) increases without bound as n increases, while 2 and ln 2 are constants, this proves
lim(n -> inf) of (2 / (sqrt(n) * ln 2)) = 0
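As a numeric complement to these limit arguments, here is a minimal sketch, assuming Python 3, that evaluates the ratio log2(n)/sqrt(n) for growing n and shows it shrinking toward 0:

import math

for e in (4, 8, 16, 32):
    n = 2 ** e
    # The ratio log2(n) / sqrt(n) tends to 0, so log2(n) = O(sqrt(n)) but not vice versa.
    print(n, math.log2(n) / math.sqrt(n))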

How to solve the recursion of the given algorithm?

int gcd(n, m)
{
    if (n % m == 0) return m;
    n = n % m;
    return gcd(m, n);
}
I solved this and I got:
T(n, m) = 1 + T(m, n%m) if n > m
= 1 + T(m, n) if n < m
= m if n%m == 0
I am confused about how to proceed further to get the final result. Please help me solve this.
The problem here is that the size of the next values of m and n depend on exactly what the previous values were, not just their size. Knuth goes into this in detail in "The Art of Computer Programming" Vol 2: Seminumerical algorithms, section 4.5.3. After about five pages he proves what you might have guessed, which is that the worst case is when m and n are consecutive fibonacci numbers. From this (or otherwise!) it turns out that in the worst case the number of divisions required is linear in the logarithm of the larger of the two arguments.
After a great deal more heavy-duty math, Knuth proves that the average case is also linear in the logarithm of the arguments.
mcdowella has given a perfect answer to this.
For an intuitive explanation you can think of it this way:
if n >= m, then n mod m < n/2.
This can be shown as follows:
if m <= n/2, then n mod m < m <= n/2;
if m > n/2, then n mod m = n - m < n/2.
So effectively you are halving the larger input, and in two calls both the arguments will be halved.
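To see both answers in action, here is a minimal sketch, assuming Python 3 (gcd_calls is my instrumented variant of the code in the question), that counts the recursive calls and feeds it consecutive Fibonacci numbers, the worst case mentioned above:

def gcd_calls(n, m):
    # Same algorithm as in the question, but also returns how many calls were made.
    if n % m == 0:
        return m, 1
    g, calls = gcd_calls(m, n % m)
    return g, calls + 1

# Consecutive Fibonacci numbers are the worst case: the number of calls grows
# linearly in the logarithm of the larger argument.
a, b = 1, 2
for _ in range(10):
    a, b = b, a + b
    g, calls = gcd_calls(b, a)
    print(b, a, g, calls)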
