Right way to prove Asymptotic notations - algorithm

I have always been confused with this and I wanted to clarify this.
How exactly do you prove that an asymptotic notation is true?
Example 1:
sin(n) = Ω(cos(n))
What I have been doing is to rewrite the question into the big-omega
form which says:
f(n) >= c * g(n) for all n >= n0
Which is: sin(n) >= c * cos(n)
I then proceed to randomly pick an n and a c that make the inequality true.
In this case, I found c = 5 and n = 3, which makes:
sin(3) >= 5 * cos(3) true.
Example 2:
2^√(lg n) = ω(lg n)
I tried n = 5 with c = 6, as well as n = 6 with c = 5, and both made the inequality false; yet the answer is actually true.
Conversely, for Example 1 I managed to make the inequality true for one pair of values, while the answer says that statement is false. So I am not sure if this is the correct way to do it.
I believe I've been doing it all wrong by guessing values for n and c that make the inequality true. When I watch some YouTube videos, they seem to shift the expressions around using algebra before picking a value for n, and I am not sure how to do that for these examples.
What is the correct way to do it?

Related

Prove that $n! = \Omega(n^{100})$

I just started studying sorting algorithms, so I need help solving problems on big Omega ($\Omega$).
How can I prove that $n! = \Omega(n^{100})$?
I know that we write $f(x) = \Omega(g(x))$ if $g(x) = O(f(x))$. This means that there is a constant $c>0$ and a value $x_0$ such that $|f(x)| \ge c\,g(x)$ whenever $x>x_0$.
Hence from the definition above, I can write
$$n^{100} = O(n!)$$
We need to find a constant $c$ and a value $n_0$ such that $n^{100} \le c \cdot n!$ for all $n>n_0$.
We could take $c=1$ and $n_0=1$.
I don't know if I am correct. Please, how should I continue and complete the proof?
The meaning of $n!$ being $\Omega(n^{100})$ is that there is some $c$ and some $n_0$ such that $n! \ge c\,n^{100}$ for all $n \ge n_0$. Your choice of $c = n_0 = 1$ says that $3!$ is bigger than $3^{100}$, which it clearly isn't.
Think about how fast $n!$ grows: $(n+1)!$ is $n+1$ times bigger than $n!$.
Think about how fast $n^{100}$ grows: $(n+1)^{100}$ is $((n+1)/n)^{100}$ times bigger than $n^{100}$. For large $n$, that factor gets closer and closer to 1.
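One way to finish, sketched with deliberately loose constants (and taking $n$ even for simplicity): the top $n/2$ factors of $n!$ are each at least $n/2$, so
$$n! \;\ge\; \Bigl(\frac{n}{2}\Bigr)^{n/2} \;\ge\; n^{100} \quad\text{whenever}\quad \frac{n}{2}\log\frac{n}{2} \;\ge\; 100\log n,$$
and the last inequality holds for all $n \ge 400$, since there $n/2 \ge 200$ and $\log(n/2) \ge \tfrac12\log n$. So $c = 1$ and $n_0 = 400$ witness $n! = \Omega(n^{100})$.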

How do I find the space and time complexities for this code

(* root(n) computes the integer square root of n, i.e. the largest x with
   x*x <= n, by recursing on n div 4. *)
fun root(n) =
    if n > 0 then
        let
            val x = root(n div 4)
        in
            if (2*x+1)*(2*x+1) > n then 2*x
            else 2*x+1
        end
    else 0;
(* isPrime(n, c) does trial division, checking divisors from c up to root(n). *)
fun isPrime(n, c) =
    if c <= root(n) then
        if n mod c = 0 then false
        else isPrime(n, c+1)
    else true;
The time complexity of the root(n) function here is O(log(n)): the argument is divided by 4 at every step, and the work in the function body itself is O(1). The time complexity of the isPrime function is O(sqrt(n)), as it runs iteratively from its starting value c up to sqrt(n). The issue I face now is: what is the order of both functions together? Would it just be O(sqrt(n)), or O(sqrt(n)*log(n)), or something else altogether?
I'm new to big-O notation in general. I have gone through multiple websites and YouTube videos trying to understand the concept, but I can't seem to calculate it with any confidence. If you could point me towards a few resources to help me practice these calculations, it would be a great help.
root(n) is O(log₄(n)), yes.
isPrime(n,c) is O((√n - c) · log₄(n)):
You recompute root(n) in every step even though it never changes, causing the "... · log₄(n)".
You iterate c from some value up to root(n); while c is bounded above by root(n), it is not bounded below: it could start at 0, at an arbitrarily large negative number, at a positive number less than or equal to √n, or at a number greater than √n. If you assume that c starts at 0, then isPrime(n,c) is O(√n · log₄(n)).
You probably want to prove this using either induction or by reference to the Master Theorem. You may want to simplify isPrime so that it does not take c as an argument in its outer signature, and so that it does not recompute root(n) unnecessarily on every iteration.
For example:
fun isPrime n =
    let
        val sq = root n
        fun check c = c > sq orelse (n mod c <> 0 andalso check (c + 1))
    in
        check 2
    end
This isPrime(n) is O(√n + log₄(n)), or just O(√n) if we omit lower-order terms.
First it computes root n once at O(log₄(n)).
Then it loops from 2 up to root n once at O(√n).
Note that neither of us has proven anything formally at this point.
(Edit: Changed check (n, 0) to check (n, 2), since duh.)
(Edit: Removed n as argument from check since it never varies.)
(Edit: As you point out, Aryan, looping from 2 to root n is indeed O(√n) even though computing root n takes only O(log₄(n))!)
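To sketch how the Master Theorem route mentioned above would go for root (my own framing, not part of the original posts): root(n) satisfies the recurrence
$$T(n) = T(n/4) + \Theta(1),$$
so with a = 1, b = 4 and constant extra work per call, log_b(a) = 0 matches the exponent of the non-recursive work, and the "equal" case of the theorem gives T(n) = Θ(log n).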

Calculation of log for efficiency in math

Hello, I am weak in maths, but I am trying to solve the problem below. Am I doing it correctly?
Given: determine whether A is big O, big Omega, or big Theta of B.
The question is:
A = n^3 + n * log(n)
B = n^3 + n^2 * log(n)
As an example, I take n = 2 (using base-10 logs):
A = 2^3 + 2*log(2) ≈ 8.6
B = 2^3 + 2^2*log(2) ≈ 9.2
So A is a lower bound of B.
I have other questions as well, but I first need to confirm whether the method I am applying is correct, or whether there is another way to do it.
Am I doing this right? Thanks in advance.
The idea behind big-O notation is to compare long-term behaviour. Your idea (to insert n=2) reveals whether A or B is larger for small values of n. However, O is all about large values. Part of the problem is to figure out what a large value is.
One way to get a feel for the problem is to make a table of A and B for larger and larger values of n:
              A          B
n = 10
n = 100
n = 1000
n = 10000
n = 100000
n = 1000000
The first entry in the table is A for n=10: A=10^3 + 10*log(10) = 1000+10*1 = 1010.
The next thing to do is to draw graphs of A and B in the same coordinate system. Can you spot any long-term relation between the two?
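If you would rather compute the table than fill it in by hand, here is a minimal Standard ML sketch (base-10 logs, as in the calculation above; the names a, b and row are just for this sketch):
(* Print A = n^3 + n*log10(n) and B = n^3 + n^2*log10(n) for growing n. *)
fun a n = Math.pow (n, 3.0) + n * Math.log10 n
fun b n = Math.pow (n, 3.0) + Math.pow (n, 2.0) * Math.log10 n
fun row n =
    print (Real.toString n ^ "\t" ^ Real.toString (a n) ^ "\t" ^
           Real.toString (b n) ^ "\n")
val _ = List.app row [10.0, 100.0, 1000.0, 10000.0, 100000.0, 1000000.0]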
$$\frac{A}{B} \;=\; \frac{n^3 + n\log n}{n^3 + n^2\log n} \;=\; \frac{1 + \log(n)/n^2}{1 + \log(n)/n}$$
Since log(n)/n and log(n)/n^2 both tend to zero as n tends to infinity, the expressions 1 + log(n)/n and 1 + log(n)/n^2 in the reduced quotient A/B are bounded away from zero and from infinity. For instance, there is a threshold N such that both expressions fall into the interval [1/2, 3/2] for all n > N. This means that all three possibilities (A = O(B), A = Ω(B), and hence A = Θ(B)) are true.
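Spelled out (my own phrasing of the conclusion above): for all n > N the quotient satisfies
$$\frac{1/2}{3/2} \;\le\; \frac{A}{B} \;\le\; \frac{3/2}{1/2}, \qquad\text{i.e.}\qquad \tfrac{1}{3}\,B \;\le\; A \;\le\; 3\,B,$$
so A = Θ(B), which contains both A = O(B) and A = Ω(B) as special cases.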

Solving a recurrence: T(n)=3T(n/2)+n

I need to find the solution of the recurrence, for n a power of two, if T(n) = 3T(n/2) + n for n > 1 and T(n) = 1 otherwise.
Using the substitution n = 2^m, S(m) = T(2^m), I can get down to:
S(m) = 2^m + 3*2^(m-1) + 3^2*2^(m-2) + ... + 3^(m-1)*2 + 3^m
But I have no idea how to simplify that.
These types of recurrences are most easily solved by the Master Theorem for the analysis of algorithms, which can be stated as follows:
Let a be an integer greater than or equal to 1, b be a real number greater than 1, and c be a positive real number. Given a recurrence of the form -
T(n) = a * T(n/b) + n^c where n > 1, then for n a power of b, if
log_b(a) < c, T(n) = Θ(n^c);
log_b(a) = c, T(n) = Θ(n^c * log n);
log_b(a) > c, T(n) = Θ(n^(log_b(a))).
English translation of your recurrence
The most critical thing to understand in Master Theorem is the constants a, b, and c mentioned in the recurrence. Let's take your own recurrence - T(n) = 3T(n/2) + n - for example.
This recurrence is actually saying that the algorithm represented by it is such that,
(Time to solve a problem of size n) = (Time taken to solve 3 problems of size n/2) + n
The n at the end is the cost of merging the results of those 3 n/2 sized problems.
Now, intuitively you can understand that:
if the cost of "solving 3 problems of size n/2" has more weight than "n" then the first item will determine the overall complexity;
if the cost "n" has more weight than "solving 3 problems of size n/2" then the second item will determine the overall complexity; and,
if both parts have the same weight, then solving the sub-problems and merging their results contribute equally, and the overall cost picks up an extra logarithmic factor.
These three intuitive observations are exactly where the three cases of the Master Theorem come from.
In your example, a = 3, b = 2 and c = 1. So it falls in case 3, as log_b(a) = log_2(3), which is greater than 1 (the value of c).
The complexity therefore is straightforward: Θ(n^(log_b(a))) = Θ(n^(log_2(3))).
You can solve this using the Master Theorem, but also by expanding the recursion tree in the following way:
At the root of the recursion tree, you will have a work of n.
In the second stage, the tree splits into three parts, and in each part, the work will be n / 2.
Keep going until you reach the leaves: at the leaves the work per node is O(1) = O(n / 2^k), where n = 2^k.
Note that at level m there are 3^m subproblems.
Now combine all the levels, using the rules for geometric progressions and logarithms. In the end you get:
T(n) = 3T(n/2) + n = 3*n^(log_2(3)) - 2n (for T(1) = 1 and n a power of two); the calculation is sketched below.
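A sketch of that geometric-series calculation (assuming n = 2^k and T(1) = 1, as in the question):
$$T(2^k) \;=\; \sum_{i=0}^{k-1} 3^i\,2^{k-i} \;+\; 3^k \;=\; 2^k\sum_{i=0}^{k-1}\Bigl(\tfrac{3}{2}\Bigr)^i + 3^k \;=\; 2\,(3^k - 2^k) + 3^k \;=\; 3\cdot 3^k - 2\cdot 2^k,$$
and with n = 2^k this is 3*n^(log_2(3)) - 2n = Θ(n^(log_2(3))).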
Have a look here at page 60 http://www.cs.columbia.edu/~cs4205/files/CM2.pdf.
And maybe you should have asked here https://math.stackexchange.com/
Problems like this can be solved using the Master Theorem.
In your case a = 3, b = 2 and f(n) = n.
The critical exponent log_b(a) = log_2(3) is bigger than 1, the exponent of f(n) = n, so you fall into the case where the recursive calls dominate. Your complexity is therefore:
O(n^{log_2(3)}) ≈ O(n^{1.58})

Finding a Perfect Square efficiently

How do I find the first perfect square produced by the function f(n) = An^2 + Bn + C? B and C are given; A, B, C and n are always integers, and A is always 1. The problem is finding n.
Example: A=1, B=2182, C=3248
The answer for the first perfect square is n=16, because sqrt(f(16))=196.
My algorithm increments n and tests whether the square root of f(n) is an integer.
This algorithm is very slow when B or C is large, because it takes on the order of n iterations to find the answer.
Is there a faster way to do this calculation? Is there a simple formula that can produce an answer?
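For reference, a minimal Standard ML sketch of the brute-force search described above (the names isSquare and firstSquare are just for this sketch; it assumes f(n) >= 0 and searches upward from n = 0):
(* Test whether k is a perfect square; the r+1 check guards against
   floating-point rounding in Math.sqrt. Assumes k >= 0. *)
fun isSquare k =
    let val r = Real.floor (Math.sqrt (Real.fromInt k))
    in r * r = k orelse (r + 1) * (r + 1) = k end
(* Increment n until f(n) = n^2 + b*n + c is a perfect square. *)
fun firstSquare (b, c) =
    let fun loop n = if isSquare (n * n + b * n + c) then n else loop (n + 1)
    in loop 0 end
With the question's example, firstSquare (2182, 3248) should evaluate to 16, matching sqrt(f(16)) = 196.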
What you are looking for are integer solutions to a special case of the general quadratic Diophantine equation1
Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
where you have
ax^2 + bx + c = y^2
so that A = a, B = 0, C = -1, D = b, E = 0, F = c where a, b, c are known integers and you are looking for unknown x and y that satisfy this equation. Once you recognize this, solutions to this general problem are in abundance. Mathematica can do it (use Reduce[eqn && Element[x|y, Integers], x, y]) and you can even find one implementation here including source code and an explanation of the method of solution.
1: You might recognize this as a conic section. It is, and people have been studying them for thousands of years. As such, our understanding of them is very deep and your problem is actually quite famous. The study of them is an immensely deep and still active area of mathematics.

Resources