Complexity of a particular pseudocode algorithm - complexity-theory

I'm just studying for my data structures & algorithms final. The following question was on my midterm and I got it wrong, so I'm just trying to figure it out:
What is the complexity of the following pseudocode?
x <- 0
for x <- 0 to n:
    for y <- 0 to n:
        y <- y + 1
        y <- y * 2
On the midterm I answered O(n^2), but now that I'm looking at it again I think it might be O(n log n). See my answer below showing my attempt.
What is the correct answer?
Any help gets me closer to passing my exam!
Cheers!

The following is my answer for the moment...
The outer loop for x <- 0 to n executes n times, definitely.
The inner loop for y <- 0 to n appears to execute n times; however, each time its body runs, y is incremented and then doubled, which pushes y toward n exponentially fast. So I believe the inner loop actually executes with O(log n) complexity.
Thus, the whole algorithm executes with O(n log n) time complexity.

Empirically, I'd describe your algorithm's behaviour like this: the outer loop runs n times, while the inner loop only runs about log2(n) times per pass, because y is incremented and then doubled on every iteration.
A snapshot ("sum" is the total number of inner-loop iterations): for n = 500 the inner loop runs 8 times per outer pass, so sum = 500 * 8 = 4000.
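To make that concrete, here is a minimal Python sketch (my reading of the pseudocode: the inner loop keeps running while y < n, with the body mutating y) that counts the iterations:

def count_iterations(n):
    """Count inner-loop iterations, reading 'for y <- 0 to n' as
    'keep looping while y < n' with the body mutating y (my interpretation)."""
    total = 0
    for x in range(n):      # outer loop: n passes
        y = 0
        while y < n:        # inner loop: y roughly doubles each time
            y = y + 1
            y = y * 2
            total += 1
    return total

for n in (500, 10_000, 100_000):
    print(n, count_iterations(n))

For n = 500 this prints 4000 (8 inner iterations per outer pass), matching the snapshot above, and the totals grow like n * log2(n) rather than n^2.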

Related

In steps, how do I analyse the running time for a certain algorithm as Big-Theta?

This algorithm is giving me trouble; I cannot find any sources online about dealing with a while loop that is also affected by the outer for loop. Is there a complicated process, or can you tell from the loop that it is simply (outer loop = n, inner loop = %%%%)? Any help is appreciated, thank you.
Have you ever heard of the logarithm operator? If a, b, n are real numbers such that a^n = b, then log_a(b) = n. The inner loop tells the computer: multiply the number j by 2 some number of times (we don't know exactly what this number is; call it x) such that afterwards j equals or exceeds n.
Mathematically, this can be written as 2n > j * 2^x ≥ n.
Solve for x: 2n/j > 2^x ≥ n/j ⇔ log2(2n/j) > x ≥ log2(n/j) ⇔ log2(n/j) + 1 > x ≥ log2(n/j)
As j increases from 1 to n, x decreases. From this point on I'll solve the problem in Big-O notation; your work is to convert it to Big-Theta notation.
Since 1 is a constant, it can be omitted. So x = log2(n/j), which is always less than log2(n). So we can say the running time of the inner loop is bounded above by O(log2 n), which means the whole algorithm is bounded above by O(n log2 n).
Edit: For a better approximation and some corrections, please read Paul Hankin's useful comments below this answer. Thanks to him.
PS: Stirling's approximation.
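The question's actual code isn't shown above, so here is a Python sketch of the loop as this answer describes it (an assumption on my part): for each j from 1 to n, keep doubling a copy of j until it reaches or exceeds n, and count the total inner iterations.

import math

def total_inner_iterations(n):
    """For each j from 1 to n, double a copy of j until it reaches n,
    i.e. roughly ceil(log2(n/j)) doublings per j (reconstructed loop)."""
    total = 0
    for j in range(1, n + 1):
        k = j
        while k < n:
            k *= 2
            total += 1
    return total

n = 100_000
print(total_inner_iterations(n))       # measured total: grows linearly in n
print(round(n * math.log2(n)))         # the O(n log2 n) upper bound from this answer
print(round(n * math.log2(math.e)))    # ~1.44*n, Stirling's estimate of sum of log2(n/j)

The measured total grows like a small constant times n rather than like n log2 n, which is presumably the tighter estimate the Stirling's-approximation PS and the linked comments point at; O(n log2 n) is still a correct upper bound.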

Concept confusion, advice on solving these 2 code problems

In O() notation, write the complexity of the following code:

For i = 1 to x
    call funct(i)

funct(x):
    if (x <= 0)
        return some value
    else
        ...

In O() notation, write the complexity of the following code:

For x = 1 to N
    ...
I'm really lost at solving these 2 Big-O notation complexity problems, please help!
They both appear to me to be O(N).
The first one subtracts by 1 when it calls itself, this means if given N, then it runs N times.
The second one divides N by 2, but Big-O describes growth as N gets significantly larger, and constant factors are dropped. When you take that into account, dividing by 2 makes no asymptotic difference: while you might write it as O(N/2), it simplifies to O(N).
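The snippets above are incomplete, so here is a hedged Python sketch of the first one as this answer reads it (funct calls itself with x - 1 until x <= 0; the call counter is mine), which shows the O(N) call count:

import sys
sys.setrecursionlimit(20_000)   # the default limit is too low for N = 10_000

calls = 0

def funct(x):
    """Hypothetical reconstruction of the first snippet: the function
    calls itself with x - 1 until x <= 0 (per the answer's reading)."""
    global calls
    calls += 1
    if x <= 0:
        return 0            # "return some value"
    return funct(x - 1)

N = 10_000
funct(N)
print(calls)                # N + 1 calls in total, i.e. O(N)

The same counting argument explains the second case: halving the number of iterations only changes the constant factor, so O(N/2) is the same class as O(N).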

Intro to Algorithms (chapter 1-1)

Just reading this book for fun, this isn't homework.
However I am already confused on the first main assignment:
1-1 Comparison of running times
For each function f(n) and time t in the following table, determine the largest size n of a problem that can be solved in time t, assuming that the algorithm to solve the problem takes f(n) microseconds.
What does this even mean?
The next table shows a bunch of times along one axis (1 second, 1 minute, one hour, etc), and the other axis shows different f(n) such as lg n, sqrt(n), n, etc.
I am not sure how to fill in the matrix because I can't understand the question. So if f(n) = lg n, it's asking the largest n that can be solved in, for example, 1 second, but the problem takes f(n) = lg n microseconds to solve? What does that last part even mean? I don't even know how to set up the equations / ratios to solve this problem because I literally can't even put together the meaning of the question.
My hangup is over the sentence "assuming that the algorithm to solve the problem takes f(n) microseconds" because I don't know what this refers to. The time for what algorithm to solve what problem takes f(n) microseconds? So if I call f(100) it'll take lg 100 microseconds? So I need to find some n where f(n) = lg n microseconds = 1 second?
Does this mean lg n microseconds = 1 second when lg n microseconds = 10^6 microseconds, so n = 2^(10^6)?
For each time T and each function f(n), you are required to find the maximal integer n such that f(n) <= T, with T expressed in microseconds (since the algorithm takes f(n) microseconds).
For example, f(n) = n^2, T = 1 second = 10^6 microseconds:
n^2 <= 10^6
n <= sqrt(10^6)
n <= 1000
(If the bound had come out as a non-integer, say 31.63, you would round down: n <= 31.)
Given any function f(n), and some time T, you are required to similarly find the maximal value of n, and fill in the table.
I will do the first two as an example to help you do the rest. A second is 10^6 microseconds, so by solving an equation that relates f(n) to the time we allow f(n) to run, we can solve for the largest input n that f can run on within the time limit.
1 second:
log(n^2) = 1,000,000 ⟹ n^2 = e^1,000,000 ⟹ n = e^500,000
1 minute:
log(n^2) = 60,000,000 ⟹ n^2 = e^60,000,000 ⟹ n = e^30,000,000
The rest can be done similarly.
P.S. Make sure to floor the values of n you get from these equations, because n is an integer input size.
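If you want to check your table entries numerically, here is a small Python sketch; the helper largest_n and the particular functions chosen are mine, not from the book. It finds the largest integer n with f(n) microseconds within each time budget.

import math

BUDGETS = {"1 second": 10**6, "1 minute": 60 * 10**6, "1 hour": 3600 * 10**6}  # microseconds

def largest_n(f, budget):
    """Largest integer n with f(n) <= budget, found by doubling then binary
    search. Assumes f is nondecreasing and f(1) <= budget."""
    lo = hi = 1
    while f(hi) <= budget:
        lo, hi = hi, hi * 2
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if f(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

functions = {
    "sqrt(n)": math.sqrt,
    "n": lambda n: n,
    "n lg n": lambda n: n * math.log2(n),
    "n^2": lambda n: n * n,
}

for name, f in functions.items():
    for label, budget in BUDGETS.items():
        print(f"f(n) = {name:7}  {label:9} -> n = {largest_n(f, budget)}")

(lg n is omitted here because the corresponding n is astronomically large; for that row you solve algebraically, as in the examples above.)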

Algorithm Peer Review

So I am currently taking an algorithms class and have been asked to prove
Prove: ((n^2 / log n) + 10^5 n * sqrt(n)) / n^2 = O(n^2 / log n)
I have come up with n0 = 1 and c = 5; when solving it I end up with 1 <= 5. I just wanted to see if I could get someone to verify this for me.
I'm not sure if this is the right forum to post in, if it's wrong I apologize and if you could point me in the right direction to go to that would be wonderful.
If I am not wrong, you have to prove that the upper bound of the given function is n^2 / log n.
This can be the case if, for very large values of n,
n^2 / log(n) >= n * sqrt(n)
which, after dividing both sides by n and multiplying by log(n), is equivalent to
n >= sqrt(n) * log(n)
Since log(n) < sqrt(n) for large n, sqrt(n) * log(n) < sqrt(n) * sqrt(n) = n, so the inequality holds. Hence the upper bound is O(n^2 / log n).
You can use the limit method.
The solution for your case should look like this:
Assuming functions f and g are increasing, f(x) = O(g(x)) if the limit of f(x)/g(x) as x -> ∞ is finite. If you substitute your functions for f and g and simplify the expression, you will see that the limit trivially comes out to be 0.
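For a quick sanity check of that limit, here is a SymPy computation (SymPy is my choice, not something the answer used), taking the expression exactly as written in the question:

import sympy as sp

n = sp.symbols('n', positive=True)

f = (n**2 / sp.log(n) + 10**5 * n * sp.sqrt(n)) / n**2   # the question's function
g = n**2 / sp.log(n)                                      # the claimed bound

# If the limit of f/g as n -> infinity is finite, then f = O(g).
print(sp.limit(f / g, n, sp.oo))   # prints 0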

Meaning of lg * N in Algorithmic Analysis

I'm currently reading about algorithmic analysis and I read that a certain algorithm (weighted quick union with path compression) is of order N + M lg* N. Apparently, though, this is linear because lg* N is a constant in this universe. What mathematical operation is being referred to here? I am unfamiliar with the notation lg* N.
The answers given here so far are wrong. lg* n (read "log star") is the iterated logarithm. It is defined recursively as
lg* n = 0                if n <= 1
lg* n = 1 + lg*(lg n)    if n > 1
Another way to think of it is the number of times that you have to iterate the logarithm before the result is less than or equal to 1.
It grows extremely slowly. You can read more on Wikipedia which includes some examples of algorithms for which lg* n pops up in the analysis.
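If it helps to see it in code, here is a direct Python translation of that definition: count how many times log2 has to be applied before the value drops to 1 or below.

import math

def log_star(n):
    """Iterated logarithm lg* n: how many times log2 must be applied
    before the result is less than or equal to 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count

print(log_star(16))         # 3
print(log_star(65536))      # 4
print(log_star(2**65536))   # 5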
I'm assuming you're talking about the algorithm analyzed on slide 44 of this lecture:
http://www.cs.princeton.edu/courses/archive/fall05/cos226/lectures/union-find.pdf
Where they say "lg * N is a constant in this universe" I believe they aren't being entirely literal.
lg* N does appear to increase with N, as per their table on the right side of the slide; it just happens to grow at such a slow rate that it can't be treated as anything but a constant in practice (N = 2^65536 -> lg* N = 5). As such it seems they're saying that you can just ignore lg* N as a constant, because it will never increase enough to cause a problem.
I could be wrong, though. That's simply how I read it.
Edit: it might help to note that in their table each successive N is 2 raised to the previous N. That means an N value of 2^(2^65536) [a far larger number] would give lg* N = 6, for example.
The recursive definition of lg* n given by Jason is equivalent to
lg* n = m  when  2↑↑m <= n < 2↑↑(m+1)
where
2↑↑m = 2^2^...^2 (repeated exponentiation, m copies of 2)
is Knuth's double up-arrow notation. Thus
lg* 2 = 1,  lg* 2^2 = 2,  lg* 2^(2^2) = 3,  lg* 2^(2^(2^2)) = 4,  lg* 2^(2^(2^(2^2))) = 5.
Hence lg* n = 4 for 2^16 <= n < 2^65536.
The function lg*n approaches infinity extremely slowly.
(It approaches infinity faster than the inverse of the Ackermann function A(n,n), which involves n-2 up arrows.)
lg is "log", the inverse of the exponential. lg typically refers to base 2, but for algorithmic analysis the base usually doesn't matter.
lg n refers to log base 2 of n. It is the answer to the equation 2^x = n. In Big-O complexity analysis the base of the log is irrelevant; powers of 2 crop up everywhere in CS, so it is no surprise that if we have to choose a base, it will be base 2.
A good example of where it crops up is a full binary tree of height h, which has 2^h - 1 nodes. If we let n be the number of nodes, this relationship means a tree with n nodes has height about lg n. Traversing such a tree to see whether a value is stored in it takes at most lg n steps.
As is to be expected, Wikipedia has great additional info.
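To make the height/node-count relationship concrete, a quick Python check (the range of heights here is just illustrative):

import math

# A full binary tree of height h has 2^h - 1 nodes, so a tree with n nodes
# has height about lg n (log base 2), as described above.
for h in range(1, 6):
    n = 2**h - 1
    print(f"height {h}: {n} nodes, lg(n + 1) = {math.log2(n + 1):.0f}")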
Logarithm is denoted by log or lg. In your case I guess the correct interpretation is N + M * log(N).
EDIT: The base of the logarithm does not matter when doing asymptotic complexity analysis.
