Trying to better understand time complexity - multiple operations of the same speed - runtime

When analyzing the time complexity of a function, do we account for multiple instances of operations with the same runtime? For example, let's say we have the function below:
void my_fake_function(int v) {
    for(int i = 0; i < v; i++) { // the loop body runs O(v) times
        someDatastructure.insert("test"); // insert() is O(log(n))
        someDatastructure.insert("test"); // insert() is O(log(n))
    }
}
So we know this loop runs v times and that, for every iteration, insert() is called twice. Given that insert() has a hypothetical runtime of O(log(n)), would that make the overall runtime of the function O(v*log(n)) or O(v*2log(n))? The insert() operation is running twice, but I'm not entirely clear on whether we show that or not.

Your reasoning is correct. Your function makes 2v insert calls, thus making the overall time complexity O(2 v log(n)).
However, if you study the definition of the big-O notation, you will find that O(2 v log(n)) is the same class of functions as O(v log(n)). As a rule, constant coefficients (if > 0) can be omitted: O(c f(n)) = O(f(n)) for constant c > 0.
Thus, it is as correct to claim that your function runs in O(2 v log(n)) time as it is to say it runs in O(v log(n)) time. However, we tend to describe complexity classes by a simplified expression, so in practice you will see O(v log(n)).
I suggest you take some time to understand the definition of big-O notation, as it would have answered this question for you already. If the formal mathematical definition is not for you, there are also plenty of online sources that do a good job at explaining what it means in practical terms.
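For reference, here is the definition written out (a short LaTeX sketch of the standard statement, not from the original answer), which shows directly why the factor 2 drops out:

f(n) \in O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 0 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0

\text{With}\ c = 2:\quad 2\, v \log n \le 2 \cdot (v \log n),\ \text{so}\ O(2\, v \log n) = O(v \log n)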

Related

What is the best case time complexity of the following algorithm?

I am given the following function, where g is some other function that runs in Θ(n²). What is the best case time complexity for this function?
void f(int n) {
if(n % 2 == 0) {
return;
} else {
g(n);
}
}
Clearly, the function runs in constant time if n is even, which tempts me to say Θ(1), but I don't think that's the correct answer because I don't think that's how an asymptotic tight bound is defined.
I've looked at a lot of similar questions on SO regarding big-theta notation and best-case analysis, but they all pertain to inputs that are arrays of length n, rather than just an integer. I think the best-case analysis makes sense in those cases because it depends on the elements in the array.
However, there is no analog to "elements in the array" for this question that seems to matter in determining the complexity, other than g, whose complexity is fixed.
This leads me to conclude that the actual best case time complexity is Θ(n²). Is my understanding correct? Is it Θ(n²) or Θ(1)?
You seem quite confused about the usage of Θ notation.
I don't think that's the correct answer because I don't think that's how an asymptotic tight bound is defined.
Θ notation is an asymptotic tight bound and it may be defined as:
For any two functions f(n) and g(n), we have f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
O notation gives an asymptotic upper bound and Ω notation gives an asymptotic lower bound.
In the algorithm you provided,
void f(int n)
{
if(n % 2 == 0) return;
else g(n);
}
where g(n) = Θ(n²).
The running time of the algorithm belongs to both O(n²) and Ω(1). We cannot use the Θ notation to describe the running time of your algorithm if we are considering all the possible values of n.
However, if we look at the running time of the algorithm when the only value that n can take is even, which is the best case, then we can say that in the best case, the running time of the algorithm belongs to both O(1) and Ω(1). Hence, we can say that the best case complexity of the algorithm is Θ(1).
Do notice the difference between saying,
The running time of the algorithm is Θ(1). //Wrong in this case
and
The best case running time of the algorithm is Θ(1). //Correct in this case
Similarly, if we look at the running time of the algorithm when the only value that n can take is odd, then we find that the worst-case complexity of the algorithm is Θ(n²).
I hope I have helped you.
I don't agree with #aksingh's answer.
From the given,
when n is even, all of the best case, worst case and plain running times are Θ(1), and
when n is odd, all of the best case, worst case and plain running times are Θ(n²).
In fact, there is no difference between the three times (in the asymptotic sense), because of the Θ.
It is important to notice that the function that describes these running times need not be "a single expression".
If you want to obtain results that do not depend on parity, you can write that the best, worst and plain case running times are Ω(1) and O(n²).
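Written out as a piecewise function (a small LaTeX sketch of the point that the running time need not be "a single expression"):

T(n) = \begin{cases} \Theta(1), & n \text{ even} \\ \Theta(n^2), & n \text{ odd} \end{cases}
\qquad\text{so over all } n:\ T(n) = \Omega(1) \ \text{and}\ T(n) = O(n^2).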
To illustrate, the (omitted) plot showed some hypothetical running times as red dots, with the best and worst cases for odd n in green and magenta, the points for even n in blue, and the best-case function in gray.

What does the complexity of a polynomial function mean?

I am new to studying algorithmic complexity, big-O, omega notation, etc.
I can understand examples like nested for loops: I calculate how many times the inner statement will be executed depending on n, and that gives the complexity. For example,
int a = 1;
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++){
        a = a * 2;
    }
}
I can understand that the big-O for the one above is O(n^2). However, I also came across this kind of question:
O( ) complexity of the function (n^3 + 7) / (n + 1)
At first look, as an intuition, I want to think that as n goes to infinity, n^3 is the dominant term in the numerator and n is the dominant term in the denominator, so the complexity is O(n^3/n) = O(n^2). However, as much as that seems logical, it also seems wrong. In short, I am not sure whether such a calculus-ish approach is correct or not.
But more importantly, I also do not understand what complexity means in the case of a function. It looks like, for any value of n, only a limited number of operations will be done: take the cube of the number n, add 7, divide by (n+1). So I don't understand how the complexity is affected by n.
This is a good question because you have spotted an ambiguity in the wording of the exercise you were given. The "O()-complexity of a function" is not well defined. You need to distinguish between the following:
1. The (e.g. time) complexity of an implementation of an algorithm in terms of O(...). Example: Quicksort with linked list.
2. The (e.g. time) complexity of an algorithm in terms of O(...). Example: Quicksort.
3. The (e.g. time) complexity of a decision problem in terms of O(...). Example: is a given list sorted?
4. The (e.g. time) complexity of a function problem in terms of O(...). Example: sort a given list.
5. An asymptotic upper bound on a mathematical function in terms of O(...). Example: upper bound on (n^3 + 7) / (n + 1).
My guess is that #5 is the intended meaning, but the wording was misleading, or even arguably wrong. It could also be #4, but that would mean: find an upper bound on the time complexity of any possible algorithm that could be used to compute the given polynomial, a rather unusual question to ask about polynomials.
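If reading #5 is the intended one, here is a short worked bound (my own sketch, using polynomial long division):

\frac{n^3 + 7}{n + 1} \;=\; n^2 - n + 1 + \frac{6}{n + 1} \;=\; \Theta(n^2),

which matches the intuition of dividing the dominant terms: the expression is O(n^2), and in fact Ω(n^2) as well.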

Describing space complexity of algorithms

I am currently a CS major, and I am practising different algorithmic questions. I make sure I try to analyse time and space complexity for every question.
However, I have a doubt:
If there are two steps in the algorithm (steps which call recursive functions on inputs of varying sizes), i.e.
int a = findAns(arr1);
int b = findAns(arr2);
return max(a, b);
Would the worst-case time complexity of this be O(N1) + O(N2) or simply O(max(N1, N2))? I ask because, at any one time, we would be calling the function with only a single input array.
While calculating the worst-case space complexity, if it comes out to be O(N) + O(logN), would we discard the O(logN) term because N > logN (even though logN also depends on N) and say that the worst-case space complexity is O(N) only?
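As a hedged sketch of the situation described above (findAns here is a hypothetical stand-in for whatever recursive function is meant, not any real library call), the two calls run strictly one after the other, each on its own array:

#include <algorithm>
#include <vector>

// Hypothetical recursive helper: its cost and stack depth grow with the size of its own input.
int findAns(const std::vector<int>& arr) {
    if (arr.empty()) return 0;                          // base case
    std::vector<int> rest(arr.begin() + 1, arr.end());  // recurse on a smaller copy of the input
    return std::max(arr.front(), findAns(rest));
}

int solve(const std::vector<int>& arr1, const std::vector<int>& arr2) {
    int a = findAns(arr1);  // first call: only arr1's recursion is active here
    int b = findAns(arr2);  // second call starts only after the first has fully returned
    return std::max(a, b);
}

int main() {
    return solve({1, 4, 2}, {3, 5}) == 5 ? 0 : 1;
}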

The smallest asymptotic time complexity of an algorithm as a function of n

We know that the asymptotic time complexity of some algorithms is a function of n, such as
O(log* n), O(log n), O(log log n), O(n^c) with 0 < c < 1, ....
May I know what the smallest asymptotic time complexity of an algorithm, as a function of n, is?
Update 1: we are looking for an asymptotic time complexity that is a function of n. O(1) is the smallest, but it does not involve n.
Update 2: O(1) is the smallest time complexity we can go to, but what are the next smallest well-known functions of n? So far, my research has found:
O(α(n)) (inverse Ackermann): amortized time per operation on a disjoint set
or O(log* n) (iterated logarithm): the find algorithm of Hopcroft and Ullman on a disjoint set
Apart from the trivial O(1), the answer is: there isn't one.
If something isn't O(1) (that is, with n -> infinity, computing time goes to infinity), whatever bounding function of n you find, there's always a smaller one: just take a logarithm of the bounding function. You can do this infinitely, hence there's no smallest non-constant bounding function.
However in practice you should probably stop worrying when you reach the inverse Ackermann function :)
It is not necessary that the complexity of a given algorithm be expressible via well-known functions. Also note that big-oh is not the complexity of a given algorithm. It is an upper bound of the complexity.
You can construct functions growing as slowly as you want, for instance n^(1/k) for any k.
O(1) is as low as you can go in terms of complexity and strictly speaking 1 is a valid function, it simply is constant.
EDIT: a really slow-growing function that I can think of is the iterated logarithm, as the complexity of a disjoint-set forest implemented with both path compression and union by rank.
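For concreteness, here is a minimal sketch (my own, not from the answer above) of such a disjoint-set forest with path compression and union by rank; the amortized cost per operation is O(α(n)), which is also bounded by O(log* n):

#include <utility>
#include <vector>

// Minimal disjoint-set forest with path compression and union by rank.
struct DisjointSet {
    std::vector<int> parent, rank_;

    explicit DisjointSet(int n) : parent(n), rank_(n, 0) {
        for (int i = 0; i < n; i++) parent[i] = i;  // each element starts as its own root
    }

    int find(int x) {
        if (parent[x] != x)
            parent[x] = find(parent[x]);  // path compression: point x directly at the root
        return parent[x];
    }

    void unite(int a, int b) {
        int ra = find(a), rb = find(b);
        if (ra == rb) return;
        if (rank_[ra] < rank_[rb]) std::swap(ra, rb);  // union by rank: shallower tree goes under the deeper one
        parent[rb] = ra;
        if (rank_[ra] == rank_[rb]) rank_[ra]++;
    }
};

int main() {
    DisjointSet ds(5);
    ds.unite(0, 1);
    ds.unite(1, 2);
    return ds.find(0) == ds.find(2) ? 0 : 1;  // 0 and 2 now share a root
}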
There is always a "smaller" complexity than whatever is suggested.
O(log log log log(n)) < O(log log log(n)) < O(log log(n)) < O(log(n)).
You can put as many logs as you want, but I don't know if there are real-life examples of these.
So my answer is you will get closer and closer to O(1).
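To get a feel for how slowly these grow, here is a small sketch (my own illustration) that computes the iterated logarithm log*(n), i.e. how many times log2 has to be applied before the value drops to 1 or below:

#include <cmath>
#include <cstdio>

// log*(n): number of times log2 must be applied to n before the result is <= 1.
int log_star(double n) {
    int count = 0;
    while (n > 1.0) {
        n = std::log2(n);
        count++;
    }
    return count;
}

int main() {
    std::printf("%d\n", log_star(16));     // prints 3  (16 -> 4 -> 2 -> 1)
    std::printf("%d\n", log_star(65536));  // prints 4  (65536 -> 16 -> 4 -> 2 -> 1)
    // Even for astronomically large n the value stays tiny: log*(2^65536) is only 5.
    return 0;
}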

Is "best case performance Θ(1) -> running time ≠ Θ(log n)" valid?

This is an argument for justifying that the running time of an algorithm can't be considered Θ(f(n)) but should be O(f(n)) instead.
E.g. this question about binary search: Is binary search theta log (n) or big O log(n)
MartinStettner's response is even more confusing.
Consider *-case performances:
Best-case performance: Θ(1)
Average-case performance: Θ(log n)
Worst-case performance: Θ(log n)
He then quotes Cormen, Leiserson, Rivest: "Introduction to Algorithms":
What we mean when we say "the running time is O(n^2)" is that the worst-case running time (which is a function of n) is O(n^2) ...
Doesn't this suggest the terms running time and worst-case running time are synonymous?
Also, if running time refers to a function with natural input f(n), then there has to be a Θ class which contains it, e.g. Θ(f(n)), right? This indicates that you are obligated to use O notation only when the running time is not known very precisely (i.e. only an upper bound is known).
When you write O(f(n)), that means that the running time of your algorithm is bounded above by a function c*f(n), where c is a constant. That also means that your algorithm can complete in far fewer steps than c*f(n). We often use the big-O notation because we want to include the possibility that the algorithm completes faster than we indicate. On the other hand, Θ(f(n)) means that the number of steps the algorithm takes is bounded both above and below by constant multiples of f(n). Binary search is O(log(n)) because usually it will complete in about log(n) steps, but it could complete in one step if you get lucky (best-case performance).
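As a hedged sketch (my own illustration, not from the original answer), a standard iterative binary search makes this concrete; the comments mark where the best case Θ(1) and worst case Θ(log n) come from:

#include <vector>

// Returns the index of target in the sorted vector a, or -1 if it is absent.
int binarySearch(const std::vector<int>& a, int target) {
    int lo = 0, hi = static_cast<int>(a.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == target) return mid;   // best case: the very first probe hits -> Θ(1)
        if (a[mid] < target) lo = mid + 1;  // otherwise the range is halved; at most about
        else hi = mid - 1;                  // log2(n) halvings are possible -> worst case Θ(log n)
    }
    return -1;  // not found
}

int main() {
    std::vector<int> v{1, 3, 5, 7, 9};
    return binarySearch(v, 5) == 2 ? 0 : 1;  // 5 sits exactly in the middle: the lucky one-step case
}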
I always get confused when I read about running times.
For me, a running time is the time an implementation of an algorithm needs to be executed on a computer. This can differ in many ways, and so it is a complicated thing.
So I think "complexity of an algorithm" is a better term.
Now, the complexity is (in most cases) a worst-case complexity. If you know an upper bound for the worst case, you also know that it can only get better in other cases.
So, if you know that there exist some (maybe trivial) cases where your algorithm does only a few (a constant number of) steps and stops, you don't have to care about a lower bound, and so you (normally) use an upper bound in big-O or little-o notation.
If you do your calculations carefully, you can also use the Θ notation.
But notice: all complexities only hold for the cases they are attached to. This means: if you make assumptions like "the input is a best case", this affects your calculation and also the resulting complexity. In the case of binary search, you posted the complexity for three different assumptions.
You can generalize it by saying: "The complexity of Binary Search is in O(log n)", since Θ(log n) means "Ω(log n) and O(log n)" and O(1) is a subset of O(log n).
To summarize:
- If you know the very precise function for the complexity, you can give the complexity in Θ-notation.
- If you want to get an overall upper bound, you have to use O-notation, unless the lower bound over all input cases coincides with the upper bound.
- In most cases you have to use the O-notation, since the algorithms are too complex to get a tight upper and lower bound.

Resources