Runtime of this Program

I'm currently in an Intro to Java course and studying for a midterm. I came across this problem:
public void wug() {
    int j = 0;
    for (int i = 0; i < N; i += 1) {
        for (; j < M; j += 1) {
            if (bump(i, j))
                break;
        }
    }
}
N and M are defined elsewhere; treat them as given.
The solution says the runtime is Θ(M+N) for the worst case and Θ(N) for the best case. I understand the best case, but I thought the worst case was Θ(N*M). Could someone explain why the worst case is Θ(M+N)? I'm really shaky on algorithm complexity. Thank you!

Note that j is never reset, so the inner loop body executes at most M times in total across all iterations of the outer loop. To get N*M iterations you'd have to reset j to zero at the start of each outer iteration.

Note that j isn't initialised in the inner loop so each execution of the inner loop continues from where the previous loop exited. Think through how that changes the different values that j takes as the program executes.
You'll gain better understanding of this code by setting it up in a debugger and single-stepping through it. That's because you're seeing what you think the code is doing, not what's actually happening. Single-stepping the code helps you to focus on the details that make the difference.
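If a debugger isn't handy, instrumenting the code works too. Here is a sketch with hypothetical sizes N = 5 and M = 8, and a stand-in bump that always returns false so the inner loop runs as long as it possibly can:

```java
public class WugTrace {
    static final int N = 5, M = 8;   // hypothetical sizes
    static int outerSteps = 0;       // total outer-loop iterations
    static int innerSteps = 0;       // total inner-loop body executions

    // Stand-in for bump: never breaks, so the inner loop runs as far as it can.
    static boolean bump(int i, int j) {
        return false;
    }

    static void wug() {
        outerSteps = 0;
        innerSteps = 0;
        int j = 0;
        for (int i = 0; i < N; i += 1) {
            outerSteps++;
            for (; j < M; j += 1) {
                innerSteps++;
                if (bump(i, j))
                    break;
            }
        }
    }

    public static void main(String[] args) {
        wug();
        // Because j is never reset, the inner body runs only M times in total,
        // spread across the N outer iterations: Θ(N + M), not Θ(N*M).
        System.out.println("outer = " + outerSteps + ", inner = " + innerSteps);
    }
}
```

Running this prints outer = 5 and inner = 8, i.e. N + M units of work rather than N*M.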

Related

Analyze the time cost of the following algorithms using Θ notation

So many loops; I'm stuck at counting how many times the last loop runs.
I also don't know how to simplify the summations to get a big-Theta bound. Please, somebody help me out!
int fun(int n) {
    int sum = 0;
    for (int i = n; i > 0; i--) {
        for (int j = i; j < n; j *= 2) {
            for (int k = 0; k < j; k++) {
                sum += 1;
            }
        }
    }
    return sum;
}
Any problem has 2 stages:
You guess the answer
You prove it
In easy problems, step 1 is easy and then you skip step 2 or explain it away as "obvious". This problem is a bit more tricky, so both steps require some more formal thinking. If you guess incorrectly, you will get stuck at your proof.
The outer loop goes from n down to 1, so its number of iterations is O(n). The middle loop is uncomfortable to analyze because its bounds depend on the current value of i. As we usually do when guessing O-rates, let's just replace its bounds so it runs from 1 to n:
for (int i = n; i > 0; i--) {
    for (int j = 1; j < n; j *= 2) {
        perform j steps
    }
}
The run-time of this new middle loop, including the inner loop, is 1+2+4+...+n, which is approximately 2n, or O(n). Together with the outer loop, you get O(n²). This is my guess.
I edited the code, so I may have changed the O-rate when I did. So I must now prove that my guess is right.
To prove this, use the "sandwich" technique - edit the program in 2 different ways, one which makes its run-time smaller and one which makes its run-time greater. If you manage to make both new programs have the same O-rate, you will prove that the original code has the same O-rate.
Here is a "smaller" or "faster" code:
do n/2 iterations; set i = n/2 for each of them {
    do just one iteration, where you set j = i {
        perform j steps
    }
}
This code is faster because each loop does less work. It does something like n²/4 iterations.
Here is a "greater" or "slower" code:
do n iterations; set i = n for each of them {
    for (int j = 1; j <= 2 * n; j *= 2) {
        perform j steps
    }
}
I made the upper bound for the middle loop 2n to make sure its last iteration is for j=n or greater.
This code is slower because each loop does more work. The number of iterations of the middle loop (and everything under it) is 1+2+4+...+n+2n, which is something like 4n. So the number of iterations for the whole program is something like 4n².
We got, in a somewhat formal manner:
n²/4 ≤ runtime ≤ 4n²
So runtime = O(n²).
Here I use O where it should be Θ. O is usually defined as "upper bound", while sometimes it means "upper or lower bound, depending on context". In my answer O means "both upper and lower bound".
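As a sanity check on the Θ(n²) guess, here is a runnable version of the question's fun (semicolons added, otherwise unchanged), together with the sandwich bounds the argument above gives, roughly n(n-1)/2 below and 2n² above:

```java
public class FunBound {
    // The question's function: fun(n) returns the total number of
    // innermost iterations performed.
    static int fun(int n) {
        int sum = 0;
        for (int i = n; i > 0; i--) {
            for (int j = i; j < n; j *= 2) {
                for (int k = 0; k < j; k++) {
                    sum += 1;
                }
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int n = 100;
        // The sandwich argument predicts n(n-1)/2 <= fun(n) < 2n^2,
        // i.e. fun(n) is Θ(n^2).
        System.out.println("fun(" + n + ") = " + fun(n));
        System.out.println("n(n-1)/2   = " + n * (n - 1) / 2);
        System.out.println("2n^2       = " + 2 * n * n);
    }
}
```

For n = 100 the count lands between 4950 and 20000, consistent with the quadratic sandwich.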

How to better analyze runtime of complex nested loops?

When you have nested for-loops where the iteration count of the inner one changes on each pass, what is the easiest approach to analyzing the total runtime? It's hard for me to conceptualize how to factor in the changing bound, since I've only ever analyzed nested loops where both bounds were n, which led to a pretty simple O(n^2) runtime. Should I write a summation and work from that?
For Example:
int val = 1;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < val; j++) {
        val++;
    }
}
My intuition tells me this is 2^n, but I have no practical way of really proving that.
In general, to find the time complexity of loops, you need to find how many times they execute, as a function of the input. Sometimes it is straightforward, sometimes it is not. You may end up with complex mathematical expressions, and in some cases you may not be able to decide at all.
As for your example: the outer loop would, at first glance, run exactly n times. The inner loop, however, checks the condition j < val, which is true the first time because j = 0 and val = 1. It then increments val on every iteration of its body, in lockstep with j, so j < val remains true forever. The inner loop is therefore infinite, and the program never terminates (informally, O(∞)).
(As a side note, in practice, depending on the language of implementation, eventually val may overflow and become smaller than j, which will cause the loop to finish. In this case, it only depends on the integer size you are using.)
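A short trace makes the non-termination visible: val is incremented exactly in step with j, so the gap val - j never shrinks. This sketch runs the question's inner loop for a bounded number of steps (the cutoff exists only so the demo halts) and reports the gap:

```java
public class InfiniteLoopTrace {
    // Run the question's inner loop for at most `steps` iterations and
    // return the gap val - j. The cutoff is artificial; the real loop
    // has no such bound and never terminates.
    static int gapAfter(int steps) {
        int val = 1;
        int j = 0;
        while (j < val && j < steps) {
            val++;   // loop body from the question
            j++;     // loop increment
        }
        return val - j;
    }

    public static void main(String[] args) {
        // The gap stays at 1 no matter how long we run, so the loop
        // condition j < val can never become false.
        System.out.println("gap after 10 steps:   " + gapAfter(10));
        System.out.println("gap after 1000 steps: " + gapAfter(1000));
    }
}
```

Both lines print a gap of 1, confirming the invariant behind the O(∞) conclusion.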

What is the time complexity of nested loop over single loop with same task

What is the difference between the two functions below in terms of performance, and what is the time complexity of each? They do exactly the same task, one with two loops and one with a single loop.
With two loops:
RecipeModel returnRecipe(String? suggestion) {
  for (int i = 0; i < _appData.recipeCategories!.length; i++) {
    for (int j = 0; j < _appData.recipeCategories![i].recipes!.length; j++) {
      if (_appData.recipeCategories![i].recipes![j].recipeName! == suggestion) {
        return _appData.recipeCategories![i].recipes![j];
      }
    }
  }
  return recipe;
}
With a single loop:
RecipeModel returnRecipe(String? suggestion) {
  int recCategoriesLen = _appData.recipeCategories!.length;
  int i = 0;
  for (int j = 0; j < _appData.recipeCategories![i].recipes!.length;) {
    if (_appData.recipeCategories![i].recipes![j].recipeName! == suggestion) {
      return _appData.recipeCategories![i].recipes![j];
    }
    j++;
    if (_appData.recipeCategories![i].recipes!.length == j && i < recCategoriesLen - 1) {
      i++;
      j = 0;
    }
  }
  return recipe;
}
It's common for people, when first learning about big-O notation, to assume that big-O notation is calculated by looking at how many loops there are and then multiplying those loops together in some way. While that's often the case, the real guiding principle behind big-O analysis is to think through, conceptually, what it is that the loops are actually doing.
In your case, you have a 2D array of items indexed by i and j. The first version of the code explicitly enumerates all pairs of possible i's and j's, with the outer loop visiting all choices of i and the inner loop visiting all choices of j. The total work done is then proportional to the number of items visited.
The second loop does essentially the same thing, but less explicitly. Specifically, it still generates all possible combinations of i and j, except that there's just a single loop driving the changes to both i and j. Because you're still iterating over all choices, the amount of work done is likely to be pretty similar to what you started with. The actual performance will depend on what optimizations the compiler/interpreter does for you.
To actually reduce the amount of work you're doing, you'll need to find a fundamentally different strategy. Since you're asking the question "across all combinations of i and j, does this item exist?," you might want to store an auxiliary hash table (dictionary) that stores each recipe keyed by its name. That way, you don't need to loop over everything and can instead just do a dictionary lookup, which is much faster.
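To illustrate the lookup-table idea, here is a sketch in Java rather than Dart; RecipeModel, the nested category structure, and buildIndex are hypothetical stand-ins for the question's types. The index is built once in time proportional to the total number of recipes, after which each lookup is an expected O(1) hash probe instead of a full scan:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RecipeIndex {
    // Hypothetical stand-in for the question's RecipeModel.
    record RecipeModel(String recipeName) {}

    // Build a name -> recipe index once, visiting every recipe exactly once.
    static Map<String, RecipeModel> buildIndex(List<List<RecipeModel>> categories) {
        Map<String, RecipeModel> byName = new HashMap<>();
        for (List<RecipeModel> category : categories) {
            for (RecipeModel recipe : category) {
                byName.put(recipe.recipeName(), recipe);
            }
        }
        return byName;
    }

    public static void main(String[] args) {
        List<List<RecipeModel>> categories = List.of(
            List.of(new RecipeModel("pancakes"), new RecipeModel("omelette")),
            List.of(new RecipeModel("ramen")));
        Map<String, RecipeModel> byName = buildIndex(categories);
        // Expected O(1) per lookup, versus O(total recipes) for the scans above.
        System.out.println(byName.get("ramen"));
    }
}
```

The same pattern works in Dart with a `Map<String, RecipeModel>`; the design trade-off is a one-time indexing pass plus extra memory in exchange for constant-time queries.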

How to determine computational complexity for algorithms with nested loops?

After looking at this question, this article, and several other questions, I have still not been able to find a general way to determine the computational complexity of algorithms with looping variables dependent on the parent loop's variable. For example,
for (int i = 0; i < n; i++) {
    for (int j = i; j < n; j++) {
        for (int k = i; k < j; k++) {
            // one statement
        }
    }
}
I know that the first loop has a complexity of n, but the inner loops are confusing me. The second loop seems to be executed n-i times and the third loop seems to be executed j-i times. However, I'm not sure how to turn this into a regular Big-O statement. I don't think I can say O(n(n-i)(j-i)), so how can I get rid of the i and j variables here?
I know this is something on the order of n^3, but how can I show this? Do I need to use series?
Thanks for your help!
(If you were wondering, this is from a brute force implementation of the maximum sum contiguous subsequence problem.)
The first loop runs N times.
The second loop runs about N/2 times on average.
The third loop runs some constant fraction of N times on average.
So the total is roughly N * N/2 * cN for some constant c, which is O(N^3): the constant factor doesn't matter for the asymptotic bound.
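The averaging heuristic can be checked against the exact summation the asker mentions: counting the innermost statement gives Σ_{i=0}^{n-1} Σ_{j=i}^{n-1} (j - i) = (n³ - n)/6, still Θ(n³). This sketch verifies the closed form by brute force:

```java
public class TripleLoopCount {
    // Count how many times the innermost statement executes.
    static long countOps(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i; j < n; j++) {
                for (int k = i; k < j; k++) {
                    count++;   // the "one statement"
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        for (int n : new int[] {10, 50, 100}) {
            long exact = ((long) n * n * n - n) / 6;   // closed form (n^3 - n)/6
            System.out.println("n = " + n + ": count = " + countOps(n)
                    + ", (n^3 - n)/6 = " + exact);
        }
    }
}
```

For n = 10 both columns read 165; the 1/6 constant differs from the heuristic's 1/8, but the Θ(n³) conclusion is the same.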

Time complexity of nested loop

I'm trying to find the worst-case time complexity of the following algorithm.
for (i = N*N; i > 1; i = i/2) {
    for (j = 0; j < i; j++) {
        counter++;
    }
}
I managed to figure out that the outer loop executes a logarithmic number of times (it counts down rather than up, but I think that doesn't matter). But I'm really unsure how to account for the inner loop.
As @ClintPowell pointed out, the inner loop is O(i). The trick, then, is to add up the various values that i takes.
Suppose the outer loop were i = 1; i <= K; i = i*2 (as you point out, the direction doesn't matter) and solve in terms of K. Then substitute N*N for K and simplify as necessary.
The outer loop executes about log(N²) = 2·log N times, and on each pass the inner loop executes i times. But don't multiply those two counts: sum the inner counts over the passes. The values of i are N², N²/2, N²/4, ..., so the total is N²·(1 + 1/2 + 1/4 + ...) < 2N², a geometric series. The worst-case complexity is therefore Θ(N²).
Hope this helps.
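A quick check of the geometric-series argument: when N² is a power of two, the i values are N², N²/2, ..., 2, and their sum is exactly 2N² − 2. This sketch counts the iterations directly:

```java
public class HalvingLoopCount {
    // Run the question's loops and return the final counter value.
    static long countOps(int n) {
        long counter = 0;
        for (long i = (long) n * n; i > 1; i = i / 2) {
            for (long j = 0; j < i; j++) {
                counter++;
            }
        }
        return counter;
    }

    public static void main(String[] args) {
        // For these N, N*N is a power of two, so the sum is exactly 2*N*N - 2.
        for (int n : new int[] {4, 16, 64}) {
            System.out.println("N = " + n + ": counter = " + countOps(n)
                    + ", 2*N*N - 2 = " + (2L * n * n - 2));
        }
    }
}
```

Doubling N quadruples the counter, which is exactly the Θ(N²) behavior the summation predicts.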