Algorithm Time Complexity Analysis (for loop with inner while loop)

Here's the code I've implemented, in a nutshell. The for loop should have a complexity of O(n); I just can't figure out the time complexity of the inner while loop.
int x, n; // Inputted by the user.
for (int i = 0; i < n; i++)
{
    int done = 0;
    if (Condition)
    {
        while (done < x)
        {
            done++; // Based on a lot of operations
        }
    }
}
I can post the whole code if you want. Thanks in advance :)

Here, the complexity is measured by studying the number of times the program will run the operations of the inner loop.
Each time Condition is triggered, the inner loop runs x times. Thus the inner loop complexity is O(x).
This loop can run at most n times, which gives you an overall worst-case complexity of O(x·n).
Having additional knowledge about Condition can get you a more precise analysis. You may be able to compute average complexity for example.
As an example: let Condition be !(i & (i - 1)). This is true if and only if i is either 0 or a power of 2. In this case, Condition is triggered exactly ⌊log2(n−1)⌋ + 2 times over the whole outer loop (⌊·⌋ being the floor, i.e. integer-part, function). In the end, the overall complexity knowing this becomes O(x·log n).
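To see both cases concretely, here's a quick empirical check — a sketch with made-up values for n and x, and with the condition translated into Java (where ! doesn't apply to ints):

public class InnerLoopCount {
    public static void main(String[] args) {
        int n = 1000, x = 50;             // made-up example inputs
        int triggers = 0, work = 0;
        for (int i = 0; i < n; i++) {
            if ((i & (i - 1)) == 0) {     // Java form of !(i & (i-1)): true for i == 0 and powers of 2
                triggers++;
                for (int done = 0; done < x; done++) {
                    work++;               // stands in for the "lot of operations"
                }
            }
        }
        // Expect triggers == floor(log2(n-1)) + 2 == 11 and work == triggers * x == 550.
        System.out.println(triggers + " triggers, " + work + " inner-loop steps");
    }
}

Replace the condition with one that is always true and the same harness shows the worst case, x·n steps.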

Related

ambiguity of complexity analysis of for loop containing O(1) operations

I have been writing a program in which we pop the elements of a queue until the queue becomes empty. We know that the time complexity of the pop operation is O(1), and it is running in a loop, so we have a loop running O(1) operations until the queue is empty.
Suppose q has some elements, say 1, 2, 3, 4, 5, 6:
while (!q->empty())
{
    cout << q->front() << " ";
    q->pop();
}
And I have read on GeeksforGeeks that if a loop runs a constant number of times and performs O(1) operations inside it, then that loop is considered to have a time complexity of O(1).
It was stated like this:
A loop or recursion that runs a constant number of times is also considered as O(1). For example the following loop is O(1).
// Here c is a constant
for (int i = 1; i <= c; i++) {
    // some O(1) expressions
}
So does the while loop for the queue also have O(1) complexity, or is it O(n)? Or am I mistaken? Please help me clear up my confusion.
The link for geeksforgeeks article is:
https://www.geeksforgeeks.org/analysis-of-algorithms-set-4-analysis-of-loops/
You can look at it this way: if the number of steps in your algorithm does not depend on the size of your data n, it has constant complexity.
If you have to do "something" once for each of your n elements, you have linear complexity, or O(n). That's the case with your queue example: each pop is O(1), but you do one per element.
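Here's the same idea in Java — a minimal sketch using java.util.ArrayDeque with the 1..6 contents from the question. Each poll() is O(1), but the loop body runs once per element, so draining the queue is O(n) overall.

import java.util.ArrayDeque;
import java.util.Queue;

public class DrainQueue {
    public static void main(String[] args) {
        Queue<Integer> q = new ArrayDeque<>();
        for (int v = 1; v <= 6; v++) q.add(v);   // the queue holds n = 6 elements

        int iterations = 0;
        while (!q.isEmpty()) {                   // runs once per element: n times
            System.out.print(q.poll() + " ");    // each poll() is O(1)
            iterations++;
        }
        System.out.println("\n" + iterations + " iterations for 6 elements");
    }
}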

For loop - big-O

I am trying to do this problem out of a book and am struggling to understand the answer.
for (i = 0; i < N; ++i) {
    if ((i % 2) == 0) {
        outVal[i] = inVals[i] * i;
    }
}
Here's how I was breaking it down:
i = 0 -> executes 1 time.
i < N and ++i each execute once every iteration, so 1n + 1n = 2n.
The if statement contains 2 operations (the modulo and the comparison), so now we are at 4n + 1.
The contents of the if statement only execute n/2 times, so we are at 4n + 1 + n/2.
However, big O drops those terms off, leaving us with N as the answer.
Here's what I don't get: the explanation for the answer of my problem says this:
outVal[i] = inVals[i] * i; executes every other loop iteration, so the total number of operations include: 1 operation at the start of the loop, 3 operations every loop iteration, 1 operation every other loop iteration, and 1 operation for the final loop condition check.
how are there only 3 operations in the loop? I counted 4 as stated above. Please let me know the rationale behind this.
Complexity is measured by the time/space you take to accomplish a task. i < N and ++i each take constant time, independent of your size variable N (the length of the loop).
You must not count how many times every operation is done and sum them all; you must, instead, choose the one that takes the most time or space, as that's the algorithm's bottleneck. In a loop, most of the operations run an equal number of times, so we use the length of the loop as its space or time complexity.
The loop will run N times, so that's its complexity -> O(n)
Inside the loop, the if scope will run N/2 times, as you correctly said -> O(n/2)
But those runs already happen inside the loop's N iterations; you do not add them on top, since there are no extra iterations.
So, the complexity of the algorithm is O(n).
Regarding the operations, the 3 are:
Checking i < N
Incrementing i (++i)
Evaluating the if condition
All of them are done in every iteration.
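To make the book's tally concrete, here's an instrumented version of the loop (my own sketch, with a made-up N; the counting convention follows the quoted explanation):

public class OpCount {
    public static void main(String[] args) {
        int N = 8;                         // made-up size
        int[] inVals = new int[N], outVal = new int[N];
        long ops = 1;                      // 1 operation at the start: i = 0
        for (int i = 0; i < N; ++i) {
            ops += 3;                      // per iteration: i < N, ++i, and the if test
            if ((i % 2) == 0) {
                outVal[i] = inVals[i] * i;
                ops += 1;                  // assignment: every other iteration
            }
        }
        ops += 1;                          // the final i < N check that ends the loop
        // Total: 1 + 3N + N/2 + 1 = 30 for N = 8; the 3N term dominates, so O(N).
        System.out.println("ops = " + ops);
    }
}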

What is time complexity of the following sort function?

I've written this code for bubble sort. Can someone explain the time complexity to me? It works similarly to two for loops, but I still want to confirm the time complexity.
public int[] sortArray(int[] inpArr)
{
    int i = 0;
    int j = 0;
    while (i != inpArr.length - 1 && j != inpArr.length - 1)
    {
        if (inpArr[i] > inpArr[i + 1])
        {
            int temp = inpArr[i];
            inpArr[i] = inpArr[i + 1];
            inpArr[i + 1] = temp;
        }
        else
        {
            i++;
        }
        if (i == inpArr.length - 1)
        {
            j++;
            i = 0;
        }
    }
    return inpArr;
}
This would have O(n^2) time complexity. Actually, this would probably be both O(n^2) and Θ(n^2).
Look at the logic of your code. You are performing the following:
Loop through the input array
If the current item is bigger than the next, switch the two
If that is not the case, increase the index (and essentially check the next item, so recursively walk through steps 1-2)
Once your index reaches length-1 of the input array, i.e. it has gone through the entire array, the index is reset (the i = 0 line), j is increased, and the process restarts.
This essentially ensures that the given array is traversed once per element, i.e. n full passes, meaning that you have a WORST-CASE (big O) time complexity of O(n^2); and given this code, your AVERAGE (theta) time complexity will be Θ(n^2) as well.
There are SOME situations where a sort can have a BEST CASE of n·lg(n), i.e. Ω(n·lg n), but that situation is rare and I'm not even sure it's achievable with this code.
Your time complexity is O(n^2) in the worst case, and because there is no early exit, this code still performs O(n^2) comparisons even on already-sorted input; it just makes fewer swaps then. This is because you're essentially doing the same thing as having two for loops. If you're interested in algorithmic efficiency, I'd recommend checking out pre-existing library sorts. The computer scientists that work on these sorts of things really are intense. Java's Arrays.sort() for object arrays uses Timsort, which originated in Python and is based on merge sort (combined with insertion sort). The disadvantage of your (and every) bubble sort is that it's really inefficient for big, disordered arrays.
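If you want to see the quadratic growth for yourself, here's a small harness (my own, purely illustrative) that counts the comparisons and swaps your sortArray makes; on reversed input, doubling the array length should roughly quadruple both counts.

public class BubbleCount {
    static long comparisons = 0, swaps = 0;

    static int[] sortArray(int[] inpArr) {          // the posted sort, instrumented
        int i = 0, j = 0;
        while (i != inpArr.length - 1 && j != inpArr.length - 1) {
            comparisons++;
            if (inpArr[i] > inpArr[i + 1]) {
                int temp = inpArr[i];
                inpArr[i] = inpArr[i + 1];
                inpArr[i + 1] = temp;
                swaps++;
            } else {
                i++;
            }
            if (i == inpArr.length - 1) {           // end of a pass: reset
                j++;
                i = 0;
            }
        }
        return inpArr;
    }

    public static void main(String[] args) {
        sortArray(new int[]{5, 4, 3, 2, 1});        // reversed input: worst case
        System.out.println("comparisons=" + comparisons + ", swaps=" + swaps);
    }
}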

Big O sum of integers runtime

I am trying to learn Big O and am confused by an algorithm I just came across. The algorithm is:
void pairs(int[] array){
    for (int i = 0; i < array.length; i++){
        for (int j = i + 1; j < array.length; j++){
            System.out.println(array[i] + "," + array[j]);
        }
    }
}
I think the first for loop is O(n) and I know the second for loop is O(1/2*n(n+1)). The answer to the problem was that the run time for the function is O(n^2). I simplified O(1/2*n(n+1)) to O(1/2*(n^2+n)). So I'm confused because I thought that you needed to multiply the two run time terms since the for loop is nested, which would give O(n) * O(1/2*(n^2+n)). I simplified this to O(1/2n^3 + 1/2n^2). From what I understand of Big O, you only keep the largest term so this would reduce to O(n^3). Can anyone help me out with where I went wrong? Not sure how the answer is O(n^2) instead of O(n^3).
When you say the inner loop is O(1/2*n(n+1)), you are actually describing the big-O complexity of both loops.
To say that the outer loop has complexity O(N) basically means its body runs N times. But for your calculation of the inner loop's complexity, you already took account of all iterations of the outer loop, because you added up the number of times the inner loop runs over all iterations of the outer loop. If you multiply again by N, you would be saying that the outer loop itself is re-run another N times.
Put another way, what your analysis shows is that the inner loop body (the System.out.println call) runs 1/2*n(n+1) times overall. That means the overall complexity of the two-loop combination is O(1/2*n(n+1)) = O(n^2). The overall complexity of the two-loop combination describes how many times the innermost code is run.
Your mistake is counting the second loop as O(1/2·n^2) and then multiplying by N again.
First, you can clearly see that a single run of the inner loop is capped at N-1 iterations (when i = 0).
The first loop is clearly N.
The second loop is at MOST N-1...
Therefore, O(N^2)...
If we examine it a little more closely:
the second loop will run N-1 times when i = 0,
then N-2 times for i = 1,
and a single time for i = n-2 (and zero times for i = n-1).
This is 1/2·n(n-1) = 1/2·n^2 - 1/2·n = O(n^2).
Notice this already includes all iterations of the outer loop!
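Both answers are making the same point; here's a quick check (my own) that the innermost body of pairs() runs n(n-1)/2 times in total, outer loop already included:

public class PairsCount {
    public static void main(String[] args) {
        int n = 10;                          // made-up size
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                count++;                     // stands in for the println
            }
        }
        // Both print 45: the nested loops and the closed form agree.
        System.out.println(count + " vs " + (n * (n - 1) / 2));
    }
}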

Inconsistencies in Big-O Analysis of a Basic "Algorithm"

I recently learned about formal Big-O analysis of algorithms; however, I don't see why these 2 algorithms, which do virtually the same thing, would have drastically different running times. The algorithms both print numbers 0 up to n. I will write them in pseudocode:
Algorithm 1:
def countUp(int n){
    for(int i = 0; i <= n; i++){
        print(i);
    }
}
Algorithm 2:
def countUp2(int n){
    for(int i = 0; i < 10; i++){
        for(int j = 0; j < 10; j++){
            ... (continued so that this can print out all values 0 - Integer.MAX_VALUE)
            for(int z = 0; z < 10; z++){
                print("" + i + j + ... + z);
                if(("" + i + j + ... + z).stringToInt() == n){
                    quit();
                }
            }
        }
    }
}
So, the first algorithm runs in O(n) time, whereas the second algorithm (depending on the programming language) runs in something close to O(n^10). Is there anything with the code that causes this to happen, or is it simply the absurdity of my example that "breaks" the math?
In countUp, the loop hits all numbers in the range [0,n] once, thus resulting in a runtime of O(n).
In countUp2, you do essentially the same thing, a bunch of times. The bound on each of your loops is 10.
Say you have 3 loops running with a bound of 10. The outer loop does 10, the inner does 10x10, the innermost does 10x10x10. So, worst case your innermost loop will run 1000 times, which is essentially constant time. So, for k nested loops with bounds [0, 10), your runtime is 10^k which, again, can be called constant time, O(1), since it does not depend on n for worst-case analysis.
Assuming you can write enough loops and that the size of n is not a factor, you would need a loop for every single digit of n. The number of digits in n is int(math.floor(math.log10(n))) + 1; let's call this dig. So a stricter upper bound on the number of iterations would be 10^dig (which can be kinda reduced to O(n); proof is left to the reader as an exercise).
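For concreteness, that digit count translated from the Python expression into Java (illustrative only; the names are mine):

public class DigitCount {
    public static void main(String[] args) {
        int n = 98765;                                    // made-up input
        int dig = (int) Math.floor(Math.log10(n)) + 1;    // number of digits: 5
        long bound = (long) Math.pow(10, dig);            // at most 10^dig iterations
        System.out.println("dig=" + dig + ", 10^dig=" + bound + ", n=" + n);
    }
}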
When analyzing the runtime of an algorithm, one key thing to look for is the loops. In algorithm 1, you have code that executes n times, making the runtime O(n). In algorithm 2, you have nested loops that each run 10 times, so you have a runtime of O(10^3), which is constant, O(1). This is because your code runs the innermost loop 10 times for each run of the middle loop, which in turn runs 10 times for each run of the outermost loop. So the code runs 10x10x10 times. (This is purely an upper bound, however, because your if-statement may end the algorithm before the looping is complete, depending on the value of n.)
To count up to n in countUp2, you need the same number of loops as the number of digits in n: so about log10(n) loops. Each loop can run 10 times, so the total number of iterations is 10^(log10 n) = n, which is O(n).
The first runs in O(n log n) time, since print(i) outputs O(log n) digits per call.
The second program assumes an upper limit for n, so it is trivially O(1). When we do complexity analysis, we assume a more abstract version of the programming language where (usually) integers are unbounded but arithmetic operations still perform in O(1). In your example you're mixing up the actual programming language (which has bounded integers) with this more abstract model (which doesn't).
If you rewrite the program[*] so that it has a dynamically adjustable number of loops depending on n (so if your number n has k digits, then there are k+1 nested loops), then it does one iteration of the innermost code for each number from 0 up to the next power of 10 after n. The inner loop does O(log n) work[**] as it constructs the string, so overall this program too is O(n log n).
[*] you can't use for loops and variables to do this; you'd have to use recursion or something similar, and an array instead of the variables i, j, k, ..., z.
[**] that's assuming your programming language optimizes the addition of k length-1 strings so that it runs in O(k) time. The obvious string concatenation implementation would be O(k^2) time, meaning your second program would run in O(n(log n)^2) time.
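To make footnote [*] concrete, here's one possible shape of that rewrite (a sketch under my own naming: recursion over a digit array plays the role of the nested loops, and a flag plays the role of quit()):

public class CountUp2Recursive {
    static boolean found = false;

    static void countUp2(int n) {
        int digits = String.valueOf(n).length();     // enough digit positions to reach n
        enumerate(new int[digits], 0, n);
    }

    // Fills positions [pos..] with digits 0-9, like a stack of nested for loops.
    static void enumerate(int[] d, int pos, int n) {
        if (found) return;                           // the quit() from the original
        if (pos == d.length) {
            StringBuilder sb = new StringBuilder();  // O(k) string construction
            for (int digit : d) sb.append(digit);
            String s = sb.toString();
            System.out.println(s);
            if (Integer.parseInt(s) == n) found = true;
            return;
        }
        for (int digit = 0; digit < 10; digit++) {
            d[pos] = digit;
            enumerate(d, pos + 1, n);
        }
    }

    public static void main(String[] args) {
        countUp2(42);                                // prints 00, 01, ..., 42 and stops
    }
}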
