I am reading some material on time complexity and I'm confused about how the following time complexities are derived, and whether there is a particular set of rules or methods for working this out?
1)
Input: int n
for(int i = 0; i < n; i++){
print("Hello World, ");
}
for(int j = n; j > 0; j--){
print("Hello World");
}
Tight: 6n + 5
Big O: O(n)
2)
Input: l = array of comparable items
Output: l = array of sorted items
Sort:
for(int i = 0; i < l.length; i++){
    for(int j = 0; j < l.length; j++){
        if(l[i] > l[j]){
            Swap(l[i], l[j]);
        }
    }
}
return l;
Worst Case Time Complexity: 4n² + 3n + 2 = O(n²)
For a given algorithm, time complexity (Big O) is a way to provide a fair estimate of the total number of elementary operations performed by the algorithm in relation to the input size n.
Type-1
Let's say you have an algorithm like this:
a=n+1;
b=a*n;
There are 2 elementary operations in the above code. No matter how big n is, a computer will always perform exactly 2 operations, because the algorithm does not depend on the size of the input. So the Big-O of the above code is O(1).
Type-2
For this code:
for(int i = 0; i < n; i++){
a=a+i;
}
Here the Big-O is O(n), as the elementary operation count depends directly on the size of n.
Type-3
Now what about this code:
//Loop-1
for(int i = 0; i < n; i++){
print("Hello World, ");
}
//Loop-2
for(int i = 0; i < n; i++){
for(int j = 0; j < n; j++) {
x=x+j;
}
}
As you can see, loop-1 is O(n) and loop-2 is O(n²). So it feels like the total complexity should be O(n) + O(n²). But no: the time complexity of the above code is O(n²). Why? Because we are trying to get a fair estimate of the number of elementary operations performed by the algorithm for a given input size n, one that is easy for another person to understand. By this logic, O(n) + O(n²) becomes O(n²), and O(n²) + O(n³) + O(n⁴) becomes O(n⁴)!
Again, you may ask: how do all the lower powers of Big-O become so insignificant, once added to a higher power, that we can completely omit them when describing the complexity of our algorithm to another human?
I will try to show the reason for the case O(n) + O(n²) = O(n²).
Let's say n = 1000. Then the exact count for the O(n) part is 1000 operations, and the exact count for the O(n²) part is 1000 * 1000 = 1,000,000, so the O(n²) part is 1000 times bigger than the O(n) part. This means your program will spend most of its execution time in the O(n²) part, and it is not worth mentioning that your algorithm also does some O(n) work.
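To make the n = 1000 arithmetic concrete, here is a minimal sketch (the class name and counter variables are mine) that counts the iterations of an O(n) loop and an O(n²) loop side by side and prints their ratio:

public class DominantTerm {
    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            long linear = 0, quadratic = 0;
            for (int i = 0; i < n; i++) {        // the O(n) part
                linear++;
            }
            for (int i = 0; i < n; i++) {        // the O(n^2) part
                for (int j = 0; j < n; j++) {
                    quadratic++;
                }
            }
            System.out.println("n=" + n + " linear=" + linear
                    + " quadratic=" + quadratic + " ratio=" + (quadratic / linear));
        }
    }
}

The ratio printed is exactly n, so as the input grows, the quadratic part accounts for essentially all of the work.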
In the first example, you iterate n times twice: the first loop runs i from 0 up to n, and the second runs j from n down to 0. To simplify, we can say that this takes 2n steps. When dealing with Big O notation, you should keep in mind that we care about the bounds:
As a result, O(2n)=O(n)
and O(an+b)=O(n)
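To see why the a and b drop out (a one-line bound, assuming a, b > 0): for all n ≥ 1,
an + b ≤ an + bn = (a + b)n
so an + b is at most the constant multiple (a + b) of n, which is exactly what O(an + b) = O(n) asserts.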
Input: int n // operation 1
for(int i = 0; i < n; i++){ // operation 2
print("Hello World, "); // Operation 3
}
for(int j = n; j > 0; j--) // Operation 4
{
print("Hello World"); //Operation 5
}
As you can see, we have a total of 5 operations outside the loops.
Inside the first loop, we do three internal operations: checking whether i is less than n, printing "Hello World, ", and incrementing i.
Inside the second loop, we also have three internal operations.
So the total number of operations that we need is 3n (first loop) + 3n (second loop) + 5 (operations outside the loops). As a result, the total number of steps required is 6n + 5 (that is your tight bound).
As I mentioned before, O(an + b) = O(n), because once an algorithm is linear, a and b do not have much impact when n is very large.
So your time complexity becomes: O(6n + 5) = O(n).
You can use the same logic for the second example, keeping in mind that two nested loops take n² instead of n.
I will slightly modify John's answer. Defining n is one constant operation; defining the integer i and assigning it 0 is 2 constant operations; defining the integer j and assigning it n is another 2 constant operations. The condition checks for i and j inside the for loops, the increments, and the print statements depend on n, so the total is 3n + 3n + 5, which equals 6n + 5. Since we cannot skip any of the statements during execution, the average case running time is also the worst case running time, which is O(n).
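To sanity-check the 6n + 5 count, here is a small sketch (the ops counter is mine) that applies exactly the counting convention above: 5 constant operations for the definitions and assignments, and 3 operations per loop iteration, not counting the final failed loop test:

public class TightBound {
    static long countOps(int n) {
        long ops = 1;                  // defining n
        ops += 2;                      // defining i and assigning it 0
        for (int i = 0; i < n; i++) {
            ops += 3;                  // condition check, print, increment
        }
        ops += 2;                      // defining j and assigning it n
        for (int j = n; j > 0; j--) {
            ops += 3;                  // condition check, print, increment
        }
        return ops;
    }

    public static void main(String[] args) {
        for (int n : new int[]{1, 10, 100}) {
            System.out.println("n=" + n + " ops=" + countOps(n) + " 6n+5=" + (6 * n + 5));
        }
    }
}

For every n, countOps(n) prints the same value as 6n + 5.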
So many loops! I'm stuck counting how many times the last loop runs.
I also don't know how to simplify the summations to get big Theta. Could somebody please help me out?
int fun(int n) {
    int sum = 0;
    for (int i = n; i > 0; i--) {
        for (int j = i; j < n; j *= 2) {
            for (int k = 0; k < j; k++) {
                sum += 1;
            }
        }
    }
    return sum;
}
Any problem has 2 stages:
You guess the answer
You prove it
In easy problems, step 1 is easy, and then you skip step 2 or wave it away as "obvious". This problem is trickier, so both steps require more formal thinking. If you guess incorrectly, you will get stuck on your proof.
The outer loop runs i from n down to 1, so the number of iterations is O(n). The middle loop is uncomfortable to analyze because its bounds depend on the current value of i. As we usually do when guessing O-rates, let's just replace its bounds so it runs from 1 to n.
for (int i = n; i > 0; i--) {
for (int j = 1; j < n; j *= 2) {
perform j steps
}
}
The run-time of this new middle loop, including the inner loop, is 1 + 2 + 4 + ... + n, which is approximately 2n (for n a power of two the sum is exactly 2n - 1), and that is O(n). Together with the outer loop, you get O(n²). This is my guess.
I edited the code, so I may have changed the O-rate when I did. So I must now prove that my guess is right.
To prove this, use the "sandwich" technique: edit the program in 2 different ways, one that makes its run-time smaller and one that makes its run-time greater. If you manage to make both new programs have the same O-rate, you will prove that the original code has that same O-rate.
Here is a "smaller" or "faster" code:
do n/2 iterations; set i=n/2 for each of them {
do just one iteration, where you set j = i {
perform j steps
}
}
This code is faster because each loop does less work. It does roughly n²/4 iterations: n/2 outer iterations, each performing n/2 steps.
Here is a "greater" or "slower" code:
do n iterations; set i=n for each of them {
for (int j = 1; j <= 2 * n; j *= 2) {
perform j steps
}
}
I made the upper bound for the middle loop 2n to make sure its last iteration runs with j = n or greater.
This code is slower because each loop does more work. The number of iterations of the middle loop (and everything under it) is 1 + 2 + 4 + ... + n + 2n, which is roughly 4n. So the number of iterations for the whole program is roughly 4n².
We got, in a somewhat formal manner:
n²/4 ≤ runtime ≤ 4n²
So runtime = O(n²).
Here I use O where it should be Θ. O is usually defined as an upper bound, while sometimes it means "upper or lower bound, depending on context". In my answer, O means both an upper and a lower bound.
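As a quick empirical check of the sandwich bounds, here is a harness (my own; it wraps the fun from the question, whose return value counts exactly the innermost iterations) that prints the ratio of that count to n²:

public class SandwichCheck {
    static long fun(int n) {
        long sum = 0;
        for (int i = n; i > 0; i--) {
            for (int j = i; j < n; j *= 2) {
                for (int k = 0; k < j; k++) {
                    sum += 1;
                }
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int n : new int[]{100, 1000, 10000}) {
            double ratio = (double) fun(n) / ((double) n * n);
            // The sandwich argument says this ratio stays between 1/4 and 4.
            System.out.println("n=" + n + " innermost iterations / n^2 = " + ratio);
        }
    }
}

The printed ratio stays bounded (well inside the [1/4, 4] window), which is what Θ(n²) looks like numerically.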
I found some pretty good replies to similar questions; however, I want the time complexity equation of the following piece of code:
sum = 0;                          // c1 = 11
for (i = 0; i < N; i++)           // c2 = 6
    for (j = 0; j < N; j++)       // c2 = 6
        for (j = 0; j < N; j++)   // c2 = 7
            sum += arr[i][j]      // c3 = 2
While each statement has a cost associated with it, I need the complete time complexity equation and its answer.
Regards
The comments section got quite long so I am going to write up an answer summarizing everything.
Measuring Time Complexity
In Computer Science, we measure time complexity by the number of steps/iterations your algorithm takes to evaluate.
So if you have a simple array of length n and you go through this array only once, say to print all the elements, we say that this algorithm is O(n), because the time it takes to run grows proportionally to the size of the array you have, thus n.
You can think of Big-O O(..) as a higher-order function that compares other functions. If we say f(x) = O(n), it means that your function grows at most as fast as y = n, thus linearly. This means that if you were to plot these functions on a graph, there would be a point x = c after which the graph of n would always be on top of the graph of f(x) for any x > c. Big-O signifies an upper bound of a function in terms of another function.
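For reference, the formal definition behind that picture (my phrasing of the standard textbook statement): f(n) = O(g(n)) if and only if there are constants c > 0 and n₀ such that
f(n) ≤ c·g(n) for all n ≥ n₀
The point after which one graph stays on top is this n₀, and the multiplier c is what lets us ignore coefficients like the 6 in 6n + 5.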
So let's look at your original question and what it means to be constant time. Say we have this function
def printFirst5(arr: Array[Int]): Unit = {
  for (i <- 0 until 5) {
    println(arr(i))
  }
}
This is what we call a constant-time algorithm. Can you see why? No matter what array you pass into it (as long as it has at least 5 elements), it will only go through the first 5 elements. You can pass it an array of length 100000000000 or an array of length 10; it doesn't matter, in each case it will only look at the first 5 elements. This means printFirst5 will never go above the line y = 5 in terms of time complexity. These kinds of functions are denoted O(1), for constant time.
Now, finally, let's look at your edited version. (I am not sure what you are trying to do in your example because it is syntactically wrong, so I will write my own example.)
def groupAllBy3(array: Array[Int]): Unit = {
  for (i <- array.indices) {
    for (j <- array.indices) {
      for (k <- array.indices) {
        println(s"${array(i)}, ${array(j)}, ${array(k)}")
      }
    }
  }
}
This function's time complexity is O(N³). Why? Let's take a look.
The innermost loop will go through N elements for every j.
How many j's are there? There will be N j's for every i.
How many i's are there? N of them.
So in total we get (number of i's) * (number of j's) * (number of k's) = N * N * N = O(N³)
Just to make sure you understand this correctly, let's take a look at another example. What would happen if these loops weren't nested? If we had:
def printAllx3(array: Array[Int]): Unit = {
  for (i <- array.indices) {
    println(array(i))
  }
  for (j <- array.indices) {
    println(array(j))
  }
  for (k <- array.indices) {
    println(array(k))
  }
}
What is the case here?
The first loop goes through N elements, the second loop goes through N elements, and the third loop goes through N elements. But they don't depend on each other in terms of iterations, so we get N + N + N = 3N = O(N)
Do you see the difference?
With all due respect, I believe you are missing some of the fundamentals of what time complexity is and how we measure it. There is only so much I can explain here; I highly recommend you do some reading on the subject and ask about anything you don't understand.
Hope this helps
I'm having some trouble calculating the big-O of this algorithm:
public void foo(int[] arr){
int count = 0;
for(int i = 0; i < arr.length; i++){
for(int j = i; j > 0; j--){
count++;
}
}
}
I know the first for loop is O(n) time, but I can't figure out what the nested loop is. I was thinking O(log n), but I don't have solid reasoning. I'm sure I'm missing something pretty easy, but some help would be nice.
Let n be the length of the array.
If you consider the second loop alone, it is just a function of i: since it iterates over all values from i down to 1, its complexity is O(i). Since you know that i < n, you can say that it is O(n). However, there is no logarithm involved: in the worst case, when i is close to n, you perform about n iterations.
As for evaluating the complexity of both loops, observe that for each value of i, the second loop goes through i iterations, so the total number of iterations is
1 + 2 + ... + (n-1) = n(n-1)/2 = (1/2)(n² - n)
which is O(n²).
If we let c be the number of times count is incremented in the inner loop, then the total number of times count is incremented is given by the sum
0 + 1 + 2 + ... + (n-1) = n(n-1)/2
As you can see, the total time complexity of the algorithm is O(n²).
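A tiny check of that closed form (my own variant of foo, which returns the counter that the original discards):

public class CountCheck {
    static int foo(int[] arr) {
        int count = 0;
        for (int i = 0; i < arr.length; i++) {
            for (int j = i; j > 0; j--) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[]{2, 10, 100}) {
            System.out.println("n=" + n + " count=" + foo(new int[n])
                    + " n(n-1)/2=" + (n * (n - 1) / 2));
        }
    }
}

The two columns agree for every n.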
After looking at this question, this article, and several other questions, I have still not been able to find a general way to determine the computational complexity of algorithms with looping variables dependent on the parent loop's variable. For example,
for (int i = 0; i < n; i++) {
for (int j = i; j < n; j++) {
for (int k = i; k < j; k++) {
//one statement
}
}
}
I know that the first loop has a complexity of n, but the inner loops are confusing me. The second loop seems to be executed n-i times and the third loop seems to be executed j-i times. However, I'm not sure how to turn this into a regular Big-O statement. I don't think I can say O(n(n-i)(j-i)), so how can I get rid of the i and j variables here?
I know this is something on the order of n^3, but how can I show this? Do I need to use series?
Thanks for your help!
(If you were wondering, this is from a brute force implementation of the maximum sum contiguous subsequence problem.)
The first loop hits N items on average.
The second loop hits N/2 items on average.
The third loop hits N/4 items on average.
O(N * N/2 * N/4) is about O(N³/8), which is about O(N³).
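The averaging heuristic above can be backed by the exact series the question asks about (my derivation, not part of the original answer). The innermost statement runs (j - i) times for each pair (i, j), so the exact count is the double sum over i from 0 to n-1 and j from i to n-1 of (j - i). For a fixed i, the inner sum is
0 + 1 + ... + (n-1-i) = (n-i)(n-i-1)/2
and summing that over i (substitute m = n - i) gives
1·0/2 + 2·1/2 + ... + n(n-1)/2 = (n³ - n)/6
which is Θ(n³), so the O(N³) estimate is exact up to the constant factor 1/6.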
I have the below pseudocode that takes a given unsorted array of length size and finds the range by finding the max and min values in the array. I'm just learning about the various time efficiency methods, but I think the below code is Θ(n), as each additional array element adds a fixed number of actions (two comparisons).
For example, ignoring the actual assignments to max and min (as the unsorted array is arbitrary and these assignments are not known in advance), an array of length 2 requires only 5 actions in total (including the final range calculation). An array of length 4 uses only 9 actions in total, again including the final range calculation. An array of length 12 uses 25 actions.
This all points me to Θ(n), as it is a linear relationship. Is this correct?
Pseudocode:
// Traverse each element of the array, storing the max and min values
// Assuming int size exists that is size of array a[]
// Assuming array is a[]
min = a[0];
max = a[0];
for(i = 0; i < size; i++) {
if(min > a[i]) { // If current min is greater than val,
min = a[i]; // replace min with val
}
if(max < a[i]) { // If current max is smaller than val,
max = a[i]; // replace max with val
}
}
range = max - min; // range is largest value minus smallest
You're right. It's O(n).
An easy way to tell in simple code (like the one above) is to see how many for() loops are nested, if any. Each nested "normal" loop (running from i = 0 to n) multiplies the work by a factor of n.
[Edit2]: That is, if you have code like this:
array a[n]; //Array with n elements.
for(int i = 0; i < n; ++i){ //Happens n times.
for(int j = 0; j < n; ++j){ //Happens n*n times.
//something //Happens n*n times.
}
}
//Overall complexity is O(n^2)
Whereas
array a[n]; //Array with n elements.
for(int i = 0; i < n; ++i){ //Happens n times.
//something //Happens n times.
}
for(int j = 0; j < n; ++j){ //Happens n times.
//something //Happens n times.
}
//Overall complexity is O(2n) = O(n)
This is pretty rudimentary, but useful if someone has not taken an Algorithm course.
The procedures within your for() loops, as long as they take constant time, are irrelevant in a complexity question.
[Edit]: This assumes that size actually means the size of array a.
Yes, this would be Θ(n). Your reasoning is a little skewed though.
You have to look at every item in your loop so you're bounded above by a linear function. Conversely, you are also bounded below by a linear function (the same one in fact), because you can't avoid looking at every element.
O(n) only requires that you bound above, Omega(n) requires that you bound below.
Θ(n) says you're bounded on both sides.
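Written out with the standard definitions (my summary, with witness constants c and n₀):
f(n) = O(g(n)) means f(n) ≤ c·g(n) for some c > 0 and all n ≥ n₀ (upper bound).
f(n) = Ω(g(n)) means f(n) ≥ c·g(n) for some c > 0 and all n ≥ n₀ (lower bound).
f(n) = Θ(g(n)) means both hold, possibly with different constants c₁ and c₂.
Here the roughly 2n operations are sandwiched between c₁·n and c₂·n, so the algorithm is Θ(n).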
Let size be n; then it's clear that you always perform 2n comparisons, plus the single assignment at the end. So you always have at least 2n + 1 operations in this algorithm.
In the worst case, you also have 2n assignments, thus 2n + 1 + 2n = 4n + 1 = O(n).
In the best case, you have 0 assignments, thus 2n + 1 + 0 = 2n + 1 = Ω(n).
Therefore, we have that both the best and worst cases perform in linear time. Hence, Θ(n).
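Here is an instrumented version of the pseudocode (the counters and the use of java.util.Random to build an arbitrary unsorted array are my additions) that tallies comparisons and assignments, so you can watch the total land between 2n + 1 and 4n + 1:

import java.util.Random;

public class RangeOps {
    public static void main(String[] args) {
        int[] a = new Random().ints(12, 0, 100).toArray(); // arbitrary unsorted array
        int min = a[0], max = a[0];
        long comparisons = 0, assignments = 0;
        for (int i = 0; i < a.length; i++) {
            comparisons++;             // min > a[i] ?
            if (min > a[i]) {
                min = a[i];
                assignments++;
            }
            comparisons++;             // max < a[i] ?
            if (max < a[i]) {
                max = a[i];
                assignments++;
            }
        }
        int range = max - min;
        assignments++;                 // the final range calculation
        System.out.println("n=" + a.length + " comparisons=" + comparisons
                + " assignments=" + assignments + " total=" + (comparisons + assignments));
    }
}

With n = 12 you always see 24 comparisons; the assignment count depends on the array's order, which is exactly the gap between the best and worst cases above.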
Yeah, this surely is an O(n) algorithm. I don't think you really need to drill down into the number of comparisons to reach a conclusion about the complexity of the algorithm. Just look at how the number of comparisons changes as the input size grows. For O(n), the comparisons should increase linearly with the input; for O(n²), they grow with n², and so on.