I just started studying sorting algorithms, so I need help solving problems on big Omega ($\Omega$).
How can I prove that $n! = \Omega(n^{100})$?
I know that we write $f(x) = \Omega(g(x))$ if $g(x) = O(f(x))$. This means that there is a constant $c>0$ and a value $x_0$ such that $|f(x)| \ge c\,g(x)$ whenever $x>x_0$.
Hence from the definition above, I can write
$$n^{100} = O(n!)$$
We can find a constant $c$ and a value $n_0$ such that $n^{100} \le c \cdot n!$ for all $n>n_0$.
We could take $c=1$ and $n_0=1$.
I don't know if I am correct. How should I continue and complete the proof?
The meaning of $n!$ being $\Omega(n^{100})$ is that there is some $c$ and some $n_0$ such that $n! \ge c\,n^{100}$ for all $n \ge n_0$. Your choice of $c = n_0 = 1$ says that $3!$ is bigger than $3^{100}$, which it clearly isn't.
Think about how fast $n!$ grows: $(n+1)!$ is $n+1$ times bigger than $n!$.
Think about how fast $n^{100}$ grows: $(n+1)^{100}$ is $((n+1)/n)^{100}$ times bigger than $n^{100}$. For large $n$, that factor gets closer and closer to 1.
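Not part of the original answer, but here is a small numerical sketch of that hint in Python (the helper name first_crossover is my own): the per-step growth factor of $n!$ is $n+1$ and keeps increasing, while the per-step growth factor of $n^{100}$ tends to 1, so $n!$ must eventually overtake $c \cdot n^{100}$ for any fixed $c$.

from math import factorial

def first_crossover(c=1):
    """Smallest n with n! >= c * n**100 (it exists because n! grows faster)."""
    n = 1
    while factorial(n) < c * n**100:
        n += 1
    return n

for n in (100, 1000, 10000):
    # factorial growth factor vs. power growth factor at step n -> n+1
    print(n, n + 1, ((n + 1) / n) ** 100)
print("n! first exceeds n**100 at n =", first_crossover())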
fun root(n) =
    if n > 0 then
        let
            val x = root(n div 4)
        in
            if (2*x+1)*(2*x+1) > n then 2*x
            else 2*x+1
        end
    else 0;

fun isPrime(n, c) =
    if c <= root(n) then
        if n mod c = 0 then false
        else isPrime(n, c+1)
    else true;
The time complexity for the root(n) function here is O(log(n)): the number is divided by 4 at every step, and the work inside the function itself is O(1). The time complexity for the isPrime function is O(sqrt(n)), as it runs iteratively from 1 to sqrt(n). The issue I face now is: what would be the order of both functions together? Would it just be O(sqrt(n)), or would it be O(sqrt(n)*log(n)), or something else altogether?
I'm new to big O notation in general. I have gone through multiple websites and YouTube videos trying to understand the concept, but I can't seem to calculate it with any confidence... If you could point me towards a few resources to help me practice calculating, it would be a great help.
root(n) is O(log₄(n)), yes.
isPrime(n,c) is O((√n - c) · log₄(n)):
You recompute root(n) in every step even though it never changes, causing the "... · log₄(n)".
You iterate c from some value up to root(n); while it is bounded above by root(n), it is not bounded below: c could start at 0, or at an arbitrarily large negative number, or at a positive number less than or equal to √n, or at a number greater than √n. If you assume that c starts at 0, then isPrime(n,c) is O(√n · log₄(n)).
You probably want to prove this using either induction or the Master Theorem. You may want to simplify isPrime so that it does not take c as an argument in its outer signature, and so that it does not recompute root(n) unnecessarily on every iteration.
For example:
fun isPrime n =
    let
        val sq = root n
        fun check c = c > sq orelse (n mod c <> 0 andalso check (c + 1))
    in
        check 2
    end
This isPrime(n) is O(√n + log₄(n)), or just O(√n) if we omit lower-order terms.
First it computes root n once at O(log₄(n)).
Then it loops from 2 up to root n once at O(√n).
Note that neither of us have proven anything formally at this point.
(Edit: Changed check (n, 0) to check (n, 2), since duh.)
(Edit: Removed n as argument from check since it never varies.)
(Edit: As you point out, Aryan, looping from 2 to root n is indeed O(√n) even though computing root n takes only O(log₄(n))!)
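For readers who don't write SML, here is a rough Python sketch of the same idea (mine, not from the post above): compute the integer square root once, then do a single O(√n) trial-division loop. math.isqrt plays the role of root here.

from math import isqrt   # plays the role of root(n); available in Python 3.8+

def is_prime(n):
    if n < 2:
        return False
    sq = isqrt(n)             # computed once, not on every iteration
    for c in range(2, sq + 1):
        if n % c == 0:
            return False
    return True

print([p for p in range(2, 30) if is_prime(p)])   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]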
In Cracking the Coding Interview there's an example where the runtime for a recursive algorithm that counts the nodes in a binary search tree is O(2^(logN)). The book explains how we simplify to get O(N) like so...
2^P = Q
log Q = P
Let P = 2^(logN).
but I am lost at the step when they say Let P = 2^(logN). I don't understand how we know to set those two equal to one another, and I also don't understand this next step... (Although they tell me they do it by the definition of log base 2)
logP = logN
P = N
2^(logN) = N
Therefore the runtime of the code is O(N)
Assuming log N means log₂ N:
This line:
Let P = 2^(logN).
just assumes that P is equal to 2^(logN). You do not know N yet; you just define how P and N relate to each other.
Later, you can apply the log function to both sides of the equation. And since log(2^(logN)) is logN, the next step is:
logP = logN
And, obviously, when logP = logN, then:
P = N
And since you previously assumed that P = 2^(logN):
2^(logN) = N
Moreover, all of this could be simplified to 2^logN = N by definition of the log function.
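To spell that last remark out (my wording, assuming log means log₂), the whole argument is just:
$$P = 2^{\log_2 N} \;\Rightarrow\; \log_2 P = \log_2\!\big(2^{\log_2 N}\big) = \log_2 N \;\Rightarrow\; P = N.$$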
The short answer is that the original question probably implicitly assumed that the logarithm was supposed to be in base 2, so that 2^(log_2(N)) is just N, by definition of log_2(x) as the inverse function of 2^y.
However, it's interesting to examine this a bit more carefully if the logarithm is to a different base. Standard results allow us to write the logarithm to base b as follows:
$$\log_b(x) = \frac{\ln(x)}{\ln(b)}$$
where ln(x) is the natural logarithm (using base e). Similarly, one can rewrite 2^x as follows:
$$2^x = e^{x \ln(2)}$$
We can then rewrite the original order-expression as follows:
$$2^{\log_b(N)} = e^{\ln(2) \cdot \frac{\ln(N)}{\ln(b)}}$$
which can be reduced to:
$$N^{\frac{\ln(2)}{\ln(b)}}$$
So, if the base b of our logarithm is 2, then this is clearly just N. However, if the base is different, then we get N raised to a power. For example, if b=10 we get N raised to the power 0.301, which is definitely a more slowly increasing function than O(N).
We can check this directly with the following Python script:
import numpy
import matplotlib.pyplot as plt
N = numpy.arange(1, 100)
plt.figure()
plt.plot(N, 2**(numpy.log2(N)))
plt.xlabel('N')
plt.ylabel(r'$2^{\log_2 N}$')
plt.figure()
plt.plot(N, 2**(numpy.log10(N)))
plt.xlabel('N')
plt.ylabel(r'$2^{\log_{10} N}$')
plt.show()
The graph this produces when we assume the logarithm is to base two (a straight line, since 2^(log₂ N) = N) is very different from the graph when the logarithm is taken to base ten (which grows only like N^0.301).
The definition of logarithm is “to what power does the base need to be raised to get this value” so if the base of the logarithm is 2, then raising 2 to that power brings us to the original value.
Example: N is 256. If we take the base 2 log of it we get 8. If we raise 2 to the power of 8 we get 256. So it is linear and we can make it to be just N.
If the log were in a different base, for example 10, the conversion would just require dividing the exponent by a constant, making the more accurate form N = 2^(log N / log 2). Here the divisor in the exponent, log 2, is a constant, so the argument is that we can ignore it when discussing complexity and again write N = 2^(log N).
You can also test it by hand. Log₂ of 256 is 8; log₂ of 128 is 7; 8/7 is about 1.14. Log₁₀ of 256 is 2.4; log₁₀ of 128 is 2.1; 2.4/2.1 is about 1.14. So the base doesn't change the ratios: logs in different bases differ only by a constant factor. So mathematically N doesn't equal 2^(log₁₀ N), but in complexity terms it does.
Hello, I am weak in math, but I am trying to solve the problem below. Am I doing it correctly?
Given: determine whether A is big O, big Omega, or big Theta of B.
The question is:
A = n^3 + n * log(n);
B = n^3 + n^2 * log(n);
As an example, I take n=2.
A = 2^3 + 2*log(2) ≈ 8.6
B = 2^3 + 2^2*log(2) ≈ 9.2
A is a lower bound of B.
I have other questions as well, but I need to confirm whether the method I am applying is correct, or whether there is another way to do this.
Am I doing this right? Thanks in advance.
The idea behind big-O notation is to compare long-term behaviour. Your idea (to insert n=2) reveals whether A or B is larger for small values of n. However, O is all about large values. Part of the problem is to figure out what a large value is.
One way to get a feel of the problem is to make a table of A and B for larger and larger values of n:
             A              B
n=10
n=100
n=1000
n=10000
n=100000
n=1000000
The first entry in the table is A for n=10: A=10^3 + 10*log(10) = 1000+10*1 = 1010.
The next thing to do, is to draw graphs of A and B in the same coordinate system. Can you spot any long term relation between the two?
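Here is a small script (not part of the original answer) that fills in that table and also prints the ratio A/B; the logarithm is taken base 10 to match the n=10 example above.

from math import log10

print(f"{'n':>10} {'A':>22} {'B':>22} {'A/B':>8}")
for n in (10, 100, 1000, 10000, 100000, 1000000):
    A = n**3 + n * log10(n)
    B = n**3 + n**2 * log10(n)
    print(f"{n:>10} {A:>22.0f} {B:>22.0f} {A/B:>8.5f}")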
$$\frac{A}{B} \;=\; \frac{n^3 + n\,\log(n)}{n^3 + n^2\log(n)} \;=\; \frac{1 + \log(n)/n^2}{1 + \log(n)/n}$$
Since log(n)/n and also log(n)/n^2 have limit zero as n tends to infinity, the expressions 1 + log(n)/n and 1 + log(n)/n^2 in the reduced quotient A/B are bounded above and below by positive constants. For instance, there is a threshold N such that both expressions fall into the interval [1/2, 3/2] for all n > N. This means that all three relations hold: A = O(B), A = Ω(B), and therefore A = Θ(B).
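Spelled out (my wording), for all n > N the quotient is squeezed between constants:
$$\frac{A}{B} = \frac{1 + \log(n)/n^2}{1 + \log(n)/n} \in \left[\frac{1/2}{3/2},\, \frac{3/2}{1/2}\right] = \left[\tfrac{1}{3},\, 3\right],$$
so $\tfrac{1}{3} B \le A \le 3B$ for n > N, which gives A = O(B), A = Ω(B), and hence A = Θ(B).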
So I've been studying sorting algorithms.
I am stuck on finding the complexity of merge sort.
Can someone please explain to me how h = 1 + lg(n)?
If you keep dividing n by 2, you'll eventually get to 1.
Namely, it takes log2(n) divisions by 2 to make this happen, by definition of the logarithm.
Every time we divide by 2, we add a new level to the recursion tree.
Add that to the root level (which didn't require any divisions), and we have log2(n) + 1 levels total.
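A quick sanity check in Python (mine, not from the answer): count how many times n can be halved with integer division before reaching 1, add the root level, and compare with ⌊log₂(n)⌋ + 1.

from math import floor, log2

def levels(n):
    count = 1          # the root level, before any division
    while n > 1:
        n //= 2        # each halving adds one level to the recursion tree
        count += 1
    return count

for n in (1, 2, 8, 16, 1000, 1 << 20):
    print(n, levels(n), floor(log2(n)) + 1)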
Here's a cooler proof. Notice that, rearranging, we have $T(2n) - 2T(n) = 2cn$.
If $n = 2^k$, then we have $T(2^{k+1}) - 2T(2^k) = 2c \cdot 2^k$.
Let's simplify the mess. Let's define $U(k) = T(2^k) / (2c)$.
Then we have $U(k+1) - 2U(k) = 2^k$, or, if we define $U'(k) = U(k+1) - U(k)$:
$$U'(k) - U(k) = 2^k$$
k is discrete here, but we can let it be continuous, and if we do, then U' is the derivative of U.
At that point the solution is obvious: if you've ever taken derivatives, then you know that if the difference of a function and its derivative is exponential, then the function itself has to be exponential (since only in that case will the derivative be some multiple of itself).
At that point you know U(k) is exponential, so you can just plug in an exponential for the unknown coefficients in the exponential, and plug it back in to solve for T.
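If you prefer to stay discrete, here is one way to finish the sketch (my working, not the original answer). The homogeneous solution of U(k+1) − 2U(k) = 0 is A·2^k; since the forcing term 2^k collides with it, try a particular solution of the form U_p(k) = B·k·2^k:
$$U_p(k+1) - 2U_p(k) = B(k+1)2^{k+1} - 2Bk\,2^k = B\,2^{k+1} = 2^k \;\Rightarrow\; B = \tfrac12,$$
so $U(k) = (k/2 + A)\,2^k$. With $T(2^k) = 2c\,U(k)$ and $n = 2^k$ (so $k = \log_2 n$), this gives
$$T(n) = c\,n\log_2 n + 2cA\,n = \Theta(n \log n).$$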
As part of a programming assignment I saw recently, students were asked to find the big O value of their function for solving a puzzle. I was bored, and decided to write the program myself. However, my solution uses a pattern I saw in the problem to skip large portions of the calculations.
Big O shows how the time increases as n scales, but once n reaches the point where the pattern resets, the time it takes drops back to low values as well. My thought was that it was O(n log n % k), where the reset happens at k+1. Another thought is that, since it has a hard limit, the value is O(1), since that is the big O of any constant. Is one of those right, and if not, how should the limit be represented?
As an example of the reset, the k value is 31336.
At n=31336, it takes 31336 steps but at n=31337, it takes 1.
The code is:
def Entry(a1, q):
    F = [a1]
    lastnum = a1
    q1 = q % 31336
    rows = q // 31336          # number of complete 31336-long rows
    for i in range(1, q1):
        lastnum = (lastnum * 31334) % 31337
        F.append(lastnum)
    F = MergeSort(F)           # MergeSort: standard merge sort, defined elsewhere
    print(lastnum * rows + F.index(lastnum) + 1)
MergeSort is a standard merge sort with O(n log n) complexity.
It's O(1), and you can derive this from big O's definition. If f(x) is the complexity of your solution, then:
$$|f(x)| \le M \cdot g(x)$$
with
$$g(x) = 1$$
and with any M > 470040 (it's n log n for n = 31336) and x > 0. And this implies from the definition that:
$$f(x) = O(1)$$
Well, an easy way that I use to think about big-O problems is to imagine n as so big it may as well be infinity. If you don't get particular about byte-level operations on very big numbers (because the cost of computing q % 31336 grows as q goes to infinity and is not actually constant), then your intuition is right about it being O(1).
Imagining q as close to infinity, you can see that q % 31336 is obviously between 0 and 31335, as you noted. This fact limits the number of array elements, which limits the sort time to be some constant amount (n * log(n) ==> 31335 * log(31335) * C, for some constant C). So it is constant time for the whole algorithm.
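Here is a small check (not part of the original answer) that the list the code sorts never holds more than 31335 elements, no matter how large q gets, which is exactly what bounds the sort time by a constant:

def sorted_list_length(q):
    q1 = q % 31336
    return 1 + max(0, q1 - 1)   # F starts as [a1]; the loop appends q1 - 1 values

for q in (10, 31335, 31338, 10**9, 10**18):
    print(q, sorted_list_length(q))    # never exceeds 31335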
But, in the real world, multiplication, division, and modulus all do scale based on input size. You can look up the Karatsuba algorithm if you are interested in figuring that out. I'll leave it as an exercise.
If there are a few different instances of this problem, each with its own k value, then the complexity of the method is not O(1), but instead O(k·ln k).