Complexity of Operation That Isn't Performed - complexity-theory

If I want to describe the time complexity of an operation that isn't performed in some program, how could I do this? For example, given the following trivial function:
def trivial():
    return
How could I describe the upper bound on the time consumed by an operation this function never performs, say a call to Sort? Could I say that the time required by the call to Sort is O(0)? This seems to be true given the definition of O-notation.

If a program runs a fixed, finite number of statements, its complexity is of order 1.
Complexity is calculated for cases where the input size determines the number of statements executed:
if the input size is n and the program runs n times, the complexity is of order n;
if the input size is n and the program runs n*n times, the complexity is of order n^2, and so on.
If the number of times a function executes doesn't depend on the input size (or it doesn't take any input at all),
then it is of order 1, no matter how long that function is.
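To make the cases above concrete, here is a small illustrative sketch (the function names are mine, not from the answer):
def constant_work():
    # a fixed number of statements, regardless of any input: O(1)
    return 1 + 1

def linear_work(items):
    # one pass over n inputs: O(n)
    total = 0
    for x in items:
        total += x
    return total

def quadratic_work(items):
    # n passes of n steps each: O(n^2)
    pairs = 0
    for x in items:
        for y in items:
            pairs += 1
    return pairs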

Related

Determine running time for multiple calls of a function

Assume I have a function f(K) that runs in amortised logarithmic time in K, but linear worst-case time. What is the running time of:
for i in range(N): f(N)
(Choose the smallest correct estimate.)
A. Linearithmic in N
B. Amortised linear in N
C. Quadratic in N
D. Impossible to say from the information given
Let's say f(N) just prints "Hello World", so it doesn't depend on how big the parameter is. Can we say the total running time is amortised linear in N?
This kinda looks like a test question, so instead of just saying the answer, allow me to explain what each of these algorithmic complexity concepts means.
Let's start with the claim that a function f(n) runs in constant time. I am aware it's not even mentioned in the question, but it's really the basis for understanding all other algorithmic complexities. If some function runs in constant time, it means that its runtime does not depend on its input parameters. Note that it could be as simple as print "Hello World" or return n, or as complex as finding the first 1,000,000,000 prime numbers (which obviously takes a while, but takes the same amount of time on every invocation). However, this last example is more of an abuse of the mathematical notation; usually constant-time functions are fast.
Now, what does it mean if a function f(n) runs in amortized constant time? It means that if you call the function once, there is no guarantee on how fast it will finish; but if you call it n times, the sum of time spent will be O(n) (and thus, each invocation on average took O(1)). Here is a lengthier explanation from another StackOverflow answer. I can't think of any extremely simple functions that run in amortized constant time (but not constant time), but here is one non-trivial example:
called = 0
next_heavy = 1

def f(n):
    # n is unused here; the amount of work depends only on the call count
    global called, next_heavy
    called += 1
    if called == next_heavy:
        # heavy call: do work proportional to the call count
        for i in range(called):
            print(i)
        next_heavy *= 2
On the 512th call, this function prints 512 numbers; however, before that it printed a total of only 511, so its total number of prints after n calls is at most 2n - 1, which is O(n). (Why 511? Because the heavy calls before the 512th happen at call numbers 1, 2, 4, …, 256, and 1 + 2 + 4 + … + 256 = 2^9 - 1 = 511.)
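As a sanity check, here is a hypothetical driver (the names work, calls, and threshold are mine) that counts the work units instead of printing them:
work = 0
calls = 0
threshold = 1

def g():
    global work, calls, threshold
    calls += 1
    if calls == threshold:
        work += calls   # the heavy call does work proportional to the call count
        threshold *= 2

for _ in range(512):
    g()
print(work)   # 1023, which is exactly 2 * 512 - 1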
Note that every constant time function is also an amortized constant time function, because it takes O(n) time for n calls. So non-amortized complexity is a bit stricter than amortized complexity.
Now your question mentions a function with amortized logarithmic time, which, similarly to the above, means that after n calls to this function, the total runtime is O(n log n) (and the average runtime per call is O(log n)). Then, per the question, this function is called in a loop from 1 to N, and we just said that by definition those N calls together run in O(N log N). This is linearithmic.
As for the second part of your question, can you deduce what's the total running time of the loop based on our previous observations?

What is the complexity of this while loop?

Let m be the size of Array A and n be the size of Array B. What is the complexity of the following while loop?
while (i < n && j < m) {
    if (some condition)
        i++
    else
        j++
}
Example: for A=[1,2,3,4] and B=[1,2,3,4], the while loop executes at most m + n times, i.e. O(m+n).
Example: for A=[1,2,3,4,7,8,9,10] and B=[1,2,3,4], the while loop may finish after only about n = 4 iterations, i.e. O(n).
I am not able to figure out how to represent the complexity of the while loop.
One common approach is to describe the worst-case time complexity. In your example, the worst-case time complexity is O(m + n), because no matter what some condition is during a given loop iteration, the total number of loop iterations is at most m + n.
If it's important to emphasize that the time complexity has a lesser upper bound in some cases, then you'll need to figure out what those cases are, and find a way to express them. (For example, if a given algorithm takes an array of size n and has worst-case O(n²) time, it might also be possible to describe it as "O(mn) time, where m is the number of distinct values in the array" — only if that's true, of course — where we've introduced an extra variable m to let us capture the impact on the performance of having more vs. fewer duplicate values.)
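To see the m + n bound concretely, here is a small sketch mirroring the loop's shape (the comparison a[i] <= b[j] stands in for "some condition" and is my choice, not from the question):
def two_pointer_steps(a, b):
    # count iterations of a loop with the same shape as the one above
    i = j = steps = 0
    while i < len(a) and j < len(b):
        steps += 1
        if a[i] <= b[j]:   # stand-in for "some condition"
            i += 1
        else:
            j += 1
    return steps           # each iteration advances i or j, so steps <= len(a) + len(b)

print(two_pointer_steps([1, 2, 3, 4], [1, 2, 3, 4]))   # 7 iterations; never more than 4 + 4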

Time complexity of an algorithm - n or n*n?

I'm trying to find out the Θ complexity of this algorithm.
(a is a list of integers)
def sttr(a):
    s = []   # stack of indices; missing in the original snippet
    for i in range(len(a)):
        while s != [] and a[i] >= a[s[-1]]:
            s.pop()
        s.append(i)
    return s
On the one hand, I can say that append is executed n times (n being the length of the array a), so pop is too, and the last thing I should consider is the while condition, which could be evaluated at most about 2n times.
From this I can say that this algorithm performs at most about 4n operations, so it is Θ(n).
But isn't that amortised analysis?
On the other hand, I can say this:
There are 2 nested cycles. The for cycle executes exactly n times. The while cycle could execute at most n times, since I have to remove an item on each iteration. So the complexity is Θ(n*n).
I want to compute Θ but don't know which of these two options is correct. Could you give me advice?
The answer is Θ(n) and your arguments are correct.
This is not amortized analysis.
To get to amortized analysis, you have to look at the inner loop. You can't easily say how fast the while loop will execute if you ignore the rest of the algorithm. The naive bound would be O(n), and that's correct, since that's the maximum number of iterations. However, since we know that the total number of inner-loop executions is O(n) (your argument) and that the inner loop is reached n times, we can say that the complexity of the inner loop is O(1) amortized.
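As an illustration (the counters are mine, not part of the original algorithm), instrumenting the function shows that the total number of pushes and pops stays linear:
def sttr_instrumented(a):
    # same algorithm, with counters for the total stack operations
    s = []
    pushes = pops = 0
    for i in range(len(a)):
        while s != [] and a[i] >= a[s[-1]]:
            s.pop()
            pops += 1
        s.append(i)
        pushes += 1
    return pushes, pops   # pushes == len(a), and pops can never exceed pushes

print(sttr_instrumented([5, 4, 3, 2, 1, 10]))   # (6, 5): the last element pops all earlier indices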

time complexity when while not execute

What is the "best case" time complexity of the following program segment?
n = 0
sum = 0
input(x)
while x != -999 do
    n = n + 1
    sum = sum + x
    input(x)
end {while}
mean = sum / n
Does "best case" could be O(1) when user in the first time type “-999”
note: when user type -999 in first time, "mean" will be 0/0, result of function is undefined
The worst case is infinite, which means this program may never stop. I would not even call it an algorithm, as some definitions of "algorithm" require a finite set of inputs, while others require that it terminate within a given number of calculation steps.
This means O() is not applicable here.
An algorithm's complexity is usually defined in relation to some kind of data. It can be the input data, as in your case.
Imagine for a while that the data is not entered manually, but provided in an array in the app. What would the complexity be then?
In this case the amount of numbers processed is N, and the loop runs once for each of them, so you can say that the complexity is O(N).
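For instance, a hypothetical array-based version of the same loop (the function name and sentinel parameter are mine) runs in O(N):
def mean_until_sentinel(xs, sentinel=-999):
    # process values until the sentinel, mirroring the loop above
    n = 0
    total = 0
    for x in xs:
        if x == sentinel:
            break
        n += 1
        total += x
    return total / n if n > 0 else None   # guard the 0/0 case noted in the question

print(mean_until_sentinel([4, 8, 6, -999]))   # 6.0, after N = 3 loop iterations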

Determining time complexity of an algorithm

Below is some pseudocode I wrote that, given an array A and an integer value k, returns true if there are two different integers in A that sum to k, and returns false otherwise. I am trying to determine the time complexity of this algorithm.
I'm guessing that the complexity of this algorithm in the worst case is O(n^2). This is because the first for loop runs n times, and the for loop within this loop also runs n times. The if statement makes one comparison and returns a value if true, which are both constant time operations. The final return statement is also a constant time operation.
Am I correct in my guess? I'm new to algorithms and complexity, so please correct me if I went wrong anywhere!
Algorithm ArraySum(A, n, k)
    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (A[i] + A[j] = k)
                return true
    return false
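For reference, here is a direct Python translation of this pseudocode (a sketch of mine, not part of the original question):
def array_sum(A, k):
    # returns True if two elements at different positions sum to k
    n = len(A)
    for i in range(n):
        for j in range(i + 1, n):
            if A[i] + A[j] == k:
                return True
    return False

print(array_sum([1, 4, 7, 2], 9))     # True: 7 + 2 == 9
print(array_sum([1, 4, 7, 2], 100))   # False, after n(n-1)/2 = 6 comparisons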
Azodious's reasoning is incorrect. The inner loop does not simply run n-1 times. Thus, you should not use (outer iterations)*(inner iterations) to compute the complexity.
The important thing to observe is that the inner loop's runtime changes with each iteration of the outer loop.
It is correct that the first time the loop runs, it will do n - 1 iterations. But after that, the number of iterations always decreases by one:
n - 1
n - 2
n - 3
…
2
1
We can use Gauss' trick (second formula) to sum this series to get n(n-1)/2 = (n² - n)/2. This is how many times the comparison runs in total in the worst case.
From this, we can see that the bound can not get any tighter than O(n²). As you can see, there is no need for guessing.
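A quick, illustrative check of that sum in Python:
n = 1000
assert sum(range(1, n)) == n * (n - 1) // 2   # 1 + 2 + ... + (n-1) comparisons in the worst case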
Note that you cannot provide a meaningful lower bound, because the algorithm may complete after any step. This implies the algorithm's best case is O(1).
Yes. In the worst case, your algorithm is O(n²).
Your algorithm is O(n²) because every input instance takes O(n²) time.
Your algorithm is Ω(1) because there exists an input instance that needs only constant time.
The following appears in Chapter 3, Growth of Functions, of Introduction to Algorithms by Cormen, Leiserson, Rivest, and Stein:
When we say that the running time (no modifier) of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.
Given an input in which the sum of the first two elements equals k, this algorithm takes only one addition and one comparison before returning true.
Therefore, this input costs constant time, making the running time of this algorithm Ω(1).
No matter what the input is, this algorithm takes at most n(n-1)/2 additions and n(n-1)/2 comparisons before returning a value.
Therefore, the running time of this algorithm is O(n²).
In conclusion, we can say that the running time of this algorithm falls between Ω(1) and O(n²).
We could also say that the worst-case running time of this algorithm is Θ(n²).
You are right, but let me explain a bit:
This is because the first for loop runs n times, and the for loop within this loop also runs n times.
Actually, the second loop will run (n-i-1) times, but in terms of complexity it'll be taken as n. (Updated based on phant0m's comment.)
So, in the worst-case scenario, it'll run n * (n-i-1) * 1 * 1 times, which is O(n^2).
In the best-case scenario, it runs 1 * 1 * 1 * 1 times, which is O(1), i.e. constant.
