In steps, how do I analyse the running time of a certain algorithm in Big-Theta notation?

This algorithm is giving me trouble; I cannot find any sources online about dealing with a while loop that is also affected by the outer for loop. Is there an involved process, or can you tell just from the loops that it is simply (outer loop = n, inner loop = ?)? Any help is appreciated, thank you.

Have you ever heard of the logarithm operator? If a, b, n are real numbers such that a^n = b, then log_a(b) = n. The inner loop tells the computer: multiply the number j by 2 some number of times (we don't know exactly how many, so let's call it x) until j equals or exceeds n.
Mathematically, this can be written as 2n > j · 2^x ≥ n.
Solve for x: 2n/j > 2^x ≥ n/j ⇔ log2(2n/j) > x ≥ log2(n/j) ⇔ log2(n/j) + 1 > x ≥ log2(n/j)
As j increases from 1 to n, x decreases. From this point on I'll solve the problem in Big-O notation; your job is to convert it to Big-Theta notation.
Since 1 is a constant, it can be omitted. So x ≈ log2(n/j), which is always less than log2(n). We can therefore say the running time of the inner loop is bounded above by O(log2 n), which means the whole algorithm is bounded above by O(n · log2 n).
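The original algorithm isn't shown in the question, but based on the description above a minimal sketch of the loop structure might look like this (the function name and exact loop bounds are my assumptions):

```python
def count_operations(n):
    """Count inner-loop steps: for each outer value, double j until it reaches n."""
    ops = 0
    for i in range(1, n + 1):   # outer loop: n iterations
        j = i
        while j < n:            # inner loop: double j until j >= n
            j *= 2              # about log2(n / i) doublings
            ops += 1
    return ops
```

Note that summing log2(n/i) over i = 1..n gives n·log2(n) − log2(n!), which Stirling's approximation shows is only about n·log2(e); so the total is actually Θ(n), and the O(n · log2 n) bound above is valid but not tight (this is the correction the edit below refers to).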

Edit: For a better approximation and some corrections, please read Paul Hankin's useful comments below this answer. Thanks to him.
PS: Stirling's approximation.

Related

Complexity/big theta of loop with multiplicative increment of i

I'm trying to find the time complexity/ big theta of the following:
def f(n):
    i = 2
    while i < n:
        print(i)
        i = i * i
The only approach I know for solving this is to find a general formula for i_k and then solve the equation i_k >= n. However, I end up with log(log n / log 2) / log 2 as my k value, and that seems awfully wrong to me; I'm not sure how I would translate it into a big-theta expression. Any help would be appreciated!
That answer looks good, actually! If you rewrite log x / log 2 as log2 x (or lg x, for short), what you have is that the number of iterations is lg lg n. Since the value of i in iteration k of the loop is 2^(2^k), this means the loop stops when i reaches the value 2^(2^(lg lg n)) = 2^(lg n) = n, which matches the loop bound.
More generally, the number of times you can square a value before it exceeds n is Θ(log log n), and similarly the number of square roots you can take before you drop a number n down to a constant is Θ(log log n), so your answer is pretty much what you’d expect.
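As a sanity check (my own sketch, not from the question), you can count the iterations directly and compare against lg lg n:

```python
def iterations(n):
    """Count how many times the loop body of f(n) runs."""
    i, count = 2, 0
    while i < n:
        i = i * i      # squaring: i goes 2, 4, 16, 256, 65536, ...
        count += 1
    return count
```

For n = 2^32 the loop runs 5 times, and lg lg 2^32 = lg 32 = 5, as predicted.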

concept confusion, advice on solving these 2 pieces of code

In O() notation, write the complexity of the following code:

For i = 1 to x
    call funct(i)

funct(x):
    if (x <= 0)
        return some value
    else
        call funct(x - 1)

In O() notation, write the complexity of the following code:

For x = 1 to N
    ...
I'm really lost at solving these 2 big O notation complexity problem, please help!
They both appear to me to be O(N).
The first one subtracts 1 when it calls itself; this means that if it is given N, it runs N times.
The second one divides N by 2, but Big-O is determined by the worst-case scenario, which means we must assume N gets significantly larger. When you take that into account, dividing by 2 does not make much of a difference: while it is originally O(N/2), the constant factor drops and it reduces to O(N).
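A hedged sketch of the first (recursive) snippet, instrumented to count calls; the function name matches the question, but the counting is my addition:

```python
def funct(x):
    """Each call subtracts 1 from x, so funct(N) makes N + 1 calls -> O(N)."""
    if x <= 0:
        return 1                # one call at the base case
    return 1 + funct(x - 1)     # this call plus all the recursive ones
```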

Finding Big-O, Omega and theta

I've looked through the links, and I'm too braindead to understand the mechanical process of figuring them out. I understand the ideas of O, theta and omega, and I understand the "Rules". So let me work on this example with you guys to clear this up in my head :)
f(n) = 100n + log n
g(n) = n + (log n)^2
I need to find: whether f = O(g), or f = Ω(g), or both (in which case f = Θ(g))
so I know that 100n and n are the same up to a constant, and log(n) grows more slowly than both. I just need to figure out whether (log(n))^2 is slower or faster, but I can't really remember anything about logs. If the log(n) is bigger, does that mean the number gets bigger or smaller?
Let me add that my real struggle is figuring out BOTH omega and theta. By definition f = O(g) if there is a constant c that makes c·g(n) at least as big as f(n), and the reverse for omega. But how do I actually test this?
You can usually figure it out from these rules:
Broadly, k < log(n)^k < n^k < k^n. You can replace each k with any number greater than 1 and it remains true for large enough n.
If x is big, then 1/x is very close to 0.
For positive x and y, x < y if and only if log(x) < log(y). (Sometimes taking logs can help with complicated and messy products.)
log(k^n) = n · log(k).
For O, theta, and omega, you can ignore everything except the biggest term that doesn't cancel out.
Rules 1 and 5 suffice for your specific questions. But learn all of the rules.
You don't need to remember rules, but rather learn general principles.
Here, all you need to know is that log(n) is increasing and grows without limit, and the definition of big-O, namely f = O(g) if there's a c such that for all sufficiently large n, f(n) <= c * g(n). You might learn the fact about log by remembering that log(n) grows like the number of digits of n.
Can log^2(n) be O(log(n))? That would mean (using the definition of big-O) that log^2(n) <= c·log(n) for all sufficiently large n, so log^2(n)/log(n) <= c for sufficiently large n (*). But log^2(n)/log(n) = log(n), which grows without limit, so it can't be bounded by c. So log^2(n) is not O(log(n)).
Can log(n) be O(log^2(n))? Well, at some point log(n) > 1 (since it's increasing without limit), and from that point on, log(n) < log^2(n). That proves that log(n) = O(log^2(n)), with the constant c equal to 1.
(*) If you're being extra careful, you need to exclude the possibility that log(n) is infinitely many times zero.
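A quick numeric check (my own sketch, not a proof): if f = Θ(g), the ratio f(n)/g(n) should stay between two positive constants as n grows.

```python
import math

def f(n):
    return 100 * n + math.log2(n)

def g(n):
    return n + math.log2(n) ** 2

# The ratio settles near 100 as n grows, consistent with f = Theta(g):
# the dominant terms are 100n and n, and the log terms become negligible.
ratios = [f(10 ** k) / g(10 ** k) for k in range(1, 8)]
```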

Algorithm Peer Review

So I am currently taking an algorithms class and have been asked to prove
Prove: ((n^2 / log n) + 10^5 n * sqrt(n)) / n^2 = O(n^2 / log n)
I have come up with n0 = 1 and c = 5; when solving it I end up with 1 <= 5. I just wanted to see if someone could verify this for me.
I'm not sure if this is the right forum to post in, if it's wrong I apologize and if you could point me in the right direction to go to that would be wonderful.
If I am not wrong, you have to prove that an upper bound of the given function is n^2 / log n.
Which is the case if, for very large values of n,
n^2 / log n >= n * sqrt(n)
which is equivalent to
n >= sqrt(n) * log(n)
Since log(n) < sqrt(n), we have log(n) * sqrt(n) < sqrt(n) * sqrt(n) = n. Hence the inequality is correct, and the upper bound is O(n^2 / log n).
You can use the limit method:
Assuming functions f and g are increasing, f(x) = O(g(x)) if the limit of f(x)/g(x) as x → ∞ exists and is finite. If you substitute your functions for f and g and simplify the expression, you will see that the limit trivially comes out to be 0.
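A numeric illustration of the limit method for this exercise (my own sketch):

```python
import math

def f(n):
    # the expression from the exercise, including the division by n^2
    return (n ** 2 / math.log(n) + 1e5 * n * math.sqrt(n)) / n ** 2

def g(n):
    return n ** 2 / math.log(n)

# f(n)/g(n) shrinks toward 0 as n grows, so f = O(g).
ratios = [f(10 ** k) / g(10 ** k) for k in (2, 4, 6, 8)]
```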

Meaning of lg * N in Algorithmic Analysis

I'm currently reading about algorithmic analysis and I read that a certain algorithm (weighted quick union with path compression) is of order N + M lg * N. Apparently though this is linear because lg * N is a constant in this universe. What mathematical operation is being referred to here. I am unfamiliar with the notation lg * N.
The answers given here so far are wrong. lg* n (read "log star") is the iterated logarithm. It is defined recursively as
lg* n = 0                if n <= 1
lg* n = 1 + lg*(lg n)    if n > 1
Another way to think of it is the number of times that you have to iterate logarithm before the result is less than or equal to 1.
It grows extremely slowly. You can read more on Wikipedia which includes some examples of algorithms for which lg* n pops up in the analysis.
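The recursive definition above translates directly into code (a small sketch of mine, using base-2 logs):

```python
import math

def lg_star(n):
    """Iterated logarithm: how many times lg must be applied until the result is <= 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count
```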
I'm assuming you're talking about the algorithm analyzed on slide 44 of this lecture:
http://www.cs.princeton.edu/courses/archive/fall05/cos226/lectures/union-find.pdf
Where they say "lg * N is a constant in this universe" I believe they aren't being entirely literal.
lg*N does appear to increase with N, as per their table on the right side of the slide; it just happens to grow at such a slow rate that it can hardly be considered anything but constant (N = 2^65536 -> lg*N = 5). As such, it seems they're saying you can treat lg*N as a constant because it will never grow enough to cause a problem.
I could be wrong, though. That's simply how I read it.
edit: it might help to note that for this equation they're defining "lg*N" to be 2^(lg*(N-1)). Meaning that an N value of 2^(2^(65536)) [a far larger number] would give lg*N = 6, for example.
The recursive definition of lg* n by Jason is equivalent to
lg* n = m  when  2↑↑m <= n < 2↑↑(m+1)
where
2↑↑m = 2^2^...^2 (repeated exponentiation, m copies of 2)
is Knuth's double-up-arrow notation. Thus
lg*2= 1, lg*2^2= 2, lg*2^{2^2}= 3, lg*2^{2^{2^2}} = 4, lg*2^{2^{2^{2^2}}} = 5.
Hence lg*n=4 for 2^{16} <= n < 2^{65536}.
The function lg*n approaches infinity extremely slowly.
(Faster than an inverse of the Ackermann function A(n,n) which involves n-2 up arrows.)
Stephen
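The tower function 2↑↑m used in the answer above can be sketched as follows (my code):

```python
def tower(m):
    """2 ↑↑ m: a tower of m twos, 2^2^...^2."""
    result = 1
    for _ in range(m):
        result = 2 ** result
    return result
```

tower(4) = 2^16 = 65536 and tower(5) = 2^65536, matching the claim that lg* n = 4 for 2^16 <= n < 2^65536.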
lg is "log", the inverse of exponentiation. lg typically refers to base 2, but for algorithmic analysis the base usually doesn't matter.
lg n refers to log base 2 of n. It is the answer to the equation 2^x = n. In Big-O complexity analysis the base of the log is irrelevant. Powers of 2 crop up often in CS, so it is no surprise that if we have to choose a base, it will be base 2.
A good example of where it crops up is a full binary tree of height h, which has 2^h - 1 nodes. If we let n be the number of nodes, this relationship says the tree has height about lg n. An algorithm traversing this tree takes at most about lg n steps to see whether a value is stored in the tree.
As to be expected, wiki has great additional info.
Logarithm is denoted by log or lg. In your case I guess the correct interpretation is N + M * log(N).
EDIT: The base of the logarithm does not matter when doing asymptotic complexity analysis.
