I am aware intuitively that two nested for loops make an O(n^2) function, but what if the loops are unrelated? How is that expressed?
For example:
for (x = 1; x < t; x++)
    for (y = 1; y < z; y++)
        do something trivial
    end
end
Is the big-O of this O(t*z)? Or is it O(n^2), or O(t^2)? I have always overlooked this, but I would like to know now.
Thanks
It's O(t*z). If you have two nested loops, each doing n iterations, you get n^2 because of n*n :)
It's like computing an area: for every one of the t outer iterations you iterate z times in the inner loop, so intuitively it's t*z.
Or imagine putting a counter inside the loops: what will its final value be?
for (x = 1; x < t; x++)
    for (y = 1; y < z; y++)
        do something trivial
    end
end
As written, these loops execute (t-1)*(z-1) = t*z - t - z + 1 times -> O(t*z)
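To make that counter idea concrete, here is a minimal Java sketch (the values of t and z are just examples):
public class NestedCount {
    public static void main(String[] args) {
        int t = 7, z = 5;              // arbitrary example sizes
        int counter = 0;
        for (int x = 1; x < t; x++) {
            for (int y = 1; y < z; y++) {
                counter++;             // the "something trivial"
            }
        }
        // prints 24, which is (t - 1) * (z - 1) = 6 * 4
        System.out.println(counter);
    }
}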
Basically it's really O(t*z), but unless there is something specific about the problem that says otherwise, you would normally just say O(n^2). The reasoning is simple: assume you have t and z with t ≠ z. Then for any particular t and z, the ratio t/z is a constant. You can factor that constant out of the expression, and you are left with n^2. O(n^2) is the same as O(t^2) for our purposes; it's a bit more correct to say O(t^2), but most people would understand if you use the generic n.
Update
Okay, sorry, let's take this a bit further. We're given t and z, both positive natural numbers with t ≠ z, and with no specific functional relationship between them. (Yes, there could be such a relationship, but it's not in the problem statement. If we can't make that assumption, then the problem can't be answered: consider, e.g., that z = t^x. We don't know x, so we can't ever say what the run time would be. Or consider z = s*t. If I can assert that some functional relation might exist, then the answer is indeterminate.)
Now, by examination we can see it's going to be O(t*z). Call the function that's the real run time f(n) = n^2. By definition, O(f(tz)) means the run time f(tz) ≤ k·g(tz) for some constant k > 0. Divide by z. Then f(t)/z ≤ (k/z)·g(t), and thus f(t) ≤ k·g(t). We substitute and get f(t) = t^2, and renaming the variable makes that O(n^2).
Normally, two for loops nested one inside the other give O(n^2), which is read "O of n squared."
I don't know that I would consider this O(n^2) without some further information about how the loop will be used.
If z will always be less than or equal to 1, the inner loop never executes, and you would get O(n) or even O(1).
Related
Determining Number of Iterations in a loop (Java)
The formula stated in this question works for for loops where a value is added to or subtracted from i. My question is: how can the formula be changed to give the number of iterations of a for loop where i is multiplied or divided by a value?
@WJS
for(int i = M; i < N; i*=k)
...
The critical point is when we have been through this loop s times, so that i = M·k^s = N. Take the logarithm of both sides:
log(M) + s log(k) = log(N)
s = (log(N)-log(M))/log(k)
Use the floor function for <= and the ceiling function (minus one) for < (which is what WJS did in the linked question). I don't know Java, so I won't attempt to write Java code.
(There might be a problem if you are multiplying an integer counter by a floating-point factor, or something like that, but I will not accept responsibility for such foolishness.)
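For anyone who wants to check it, here is a minimal Java sketch (M, N, and k are arbitrary example values; a plain ceiling happens to match this strict < case):
public class GeometricLoop {
    public static void main(String[] args) {
        int M = 3, N = 1000, k = 2;    // arbitrary example values
        int count = 0;
        for (int i = M; i < N; i *= k) {
            count++;
        }
        // s = ceil((log(N) - log(M)) / log(k)) = ceil(8.38...) = 9
        long s = (long) Math.ceil((Math.log(N) - Math.log(M)) / Math.log(k));
        System.out.println(count + " vs " + s);    // 9 vs 9
    }
}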
I'm having trouble understanding something. I'm not even sure if it's correct.
In Cracking the Coding Interview, there is a section that asks you to determine the Big O of a number of functions. For the most part, they're predictable.
However, one of them throws me for a loop.
Apparently, this evaluates to O(ab):
void printUnorderedPairs(int[] arrayA, int[] arrayB) {
    for (int i = 0; i < arrayA.length; i++) {
        for (int j = 0; j < arrayB.length; j++) {
            for (int k = 0; k < 100000; k++) {
                System.out.println(arrayA[i] + "," + arrayB[j]);
            }
        }
    }
}
with the rationale that
"100,000 units of work is still constant, so the runtime is O(ab)."
I'm trying to see why this could make sense, but I just can't yet; naturally, I expected O(abc).
Yes, 100,000 is a constant and arrayA and arrayB are arrays, but we're taking the lengths of the arrays. At the time these for loops run, won't arrayA.length and arrayB.length be constants too (assuming the sizes of the arrays don't change during execution)?
So, is the book right? If so, I would really appreciate insight and intuition so I don't fall into the same trap in the future.
Thanks!
Time complexity is generally expressed as the number of required elementary operations on an input of size n, where elementary operations are assumed to take a constant amount of time on a given computer and change only by a constant factor when run on a different computer.
O(ab) is the complexity in the above case, as arrayA and arrayB are of variable length and depend entirely on the caller, while 100000 is a constant that won't be changed by any external factor.
Complexity is a measure of the unknown.
The arrays A and B have unspecified lengths, and all you can do is give the complexity as a function of those two lengths. Nothing else in the given code is variable.
What the authors mean by a constant is a value that is fixed regardless of the input size, unlike the lengths of the input arrays, which can change. For instance, printUnorderedPairs might be called with different arrays as parameters, and those arrays may differ in size.
The point of Big-O is to examine how the calculation grows as the inputs grow. It's clear that it would double if A doubled, and likewise if B doubled. So linear in those two.
What might be confusing you is that the 100,000 could easily have been a third input c, which would make the runtime linear in it too; but as written it is not a variable, it's a constant.
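To see the difference, compare a hypothetical variant in which the bound is a third parameter c instead of the literal 100,000; this version really would be O(abc):
void printUnorderedPairs(int[] arrayA, int[] arrayB, int c) {
    for (int i = 0; i < arrayA.length; i++) {
        for (int j = 0; j < arrayB.length; j++) {
            // c is now an input, so this loop contributes a factor of c
            for (int k = 0; k < c; k++) {
                System.out.println(arrayA[i] + "," + arrayB[j]);
            }
        }
    }
}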
A similar thing in Big-O problems is where you step through an array a fixed number of times. That doesn't change the Big-O. For example if you step through an array to find the max, that's O(n). Stepping through it twice to find the min and the max is... also O(n). And in fact it's the same as stepping through it once to find the min and max in a single sweep.
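A minimal sketch of that combined sweep (assuming a non-empty array):
// one pass over the n elements finds both the min and the max: O(n)
static int[] minMax(int[] a) {
    int min = a[0], max = a[0];
    for (int v : a) {
        if (v < min) min = v;
        if (v > max) max = v;
    }
    return new int[] { min, max };
}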
I have come across a lot of examples teaching the big-O notation of a while loop where the variable is multiplied inside the loop. I still can't get the understanding right.
Code like this:
for (int i = 1; i <= n; i = i * 2)
is considered lg n because it multiplies the value by 2 each time.
I have code like this too:
while (i > N)
{
    i /= 2;
}
which is also considered lg n, since the variable is divided by 2 each time. However, does it mean the same if I change the code to something like:
while (x > 0.01) {
    x = x * 0.8;
    y = y + x;
}
The main concern is: can I safely say that the runtime complexity of this loop is log base 0.8?
Or is it supposed to be log base 1.25?
I do understand that log base 0.8 and log base 1.25 are not defined, and that therefore the runtime complexity of the while loop should technically be O(n).
The number of iterations n is given by the stopping condition x0 * 0.8^n <= 0.01, i.e. n = ceil(log(x0 / 0.01) / log(1.25)), where x0 is the starting value of x.
Thus the base is indeed 1.25. However, a change of base only means a constant multiplicative factor overall, which does not affect the complexity of the algorithm, so it will still be O(log n).
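A quick empirical check of that formula (the starting value x0 = 100 is just an example):
public class ShrinkLoop {
    public static void main(String[] args) {
        double x0 = 100.0, x = x0;     // arbitrary starting value
        int count = 0;
        while (x > 0.01) {
            x = x * 0.8;
            count++;
        }
        // ceil(log(x0 / 0.01) / log(1.25)) = ceil(41.28...) = 42
        long n = (long) Math.ceil(Math.log(x0 / 0.01) / Math.log(1.25));
        System.out.println(count + " vs " + n);    // 42 vs 42
    }
}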
for (int i = 0; i < n; ++i)
{
    for (int x = i; x < n; ++x)
    {
        // work...
    }
}
What is the big-O notation for this type of algorithm? Also, please explain how you came up with the solution.
Also, sorry for the vague title but I didn't know the name of this type of algorithm.
Here is what I tried:
If n is:
1, there will be 1 work execution.
2, there will be 3 work executions.
3, there will be 6 work executions.
4, there will be 10 work executions.
5, there will be 15 work executions.
People in the comments say it is n^2, but the numbers I'm getting don't match: 5^2 is 25, not 15.
Big O notation is derived by calculating the time complexity: you must take into account the amount of work your algorithm is doing.
Please see my answer below, in which I derive the Big O. It is written in LaTeX, which is a nice tool for writing equations.
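The derivation was originally attached as an image; the following LaTeX is a reconstruction of its gist, consistent with the notes below (the numbers label the lines referred to there):
\begin{aligned}
T(n) &= \sum_{i=0}^{n-1} (n - i)                    && (1)\\
     &= \sum_{i=0}^{n-1} n \;-\; \sum_{i=0}^{n-1} i && (2)\\
     &= n \cdot n \;-\; \frac{n(n-1)}{2}            && (3)\\
     &= \frac{n^2}{2} + \frac{n}{2} \;\in\; O(n^2)  && (4)
\end{aligned}
For n = 5 this gives 25/2 + 5/2 = 15, matching the counts above.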
Notes
The giant E-like symbol is called a Sigma (Σ). This is the mathematical symbol used when writing up algorithms to denote a looping sum. Think of it as your for loop: the bottom term is like your i = 0, and the top term is like your i < n.
The (n - i) term represents the work of the inner loop. To calculate this, we break the equation into two separate Sigmas, as the sum over i is more complex to derive.
Notice how the inner loop does not run n times but n - i times. Also, on line (3), to evaluate the sum over i we use the standard summation laws (Law 6, maybe?).
To get n^2, we eliminate constants from the equation as well as the terms that do not dominate the growth of the function.
Yesterday I applied for a computer engineering master's degree, and this was one of their questions. I could not solve it, so I was very curious.
...
i = 1;
while (i <= n)
{
    i = i * 2;
}
...
How many times will this while loop get executed? Please give your answer as a formula, e.g. log n.
Thanks
On the x-th iteration of the loop, i equals 2^x (you can easily prove this by induction). Suppose the loop stops after X iterations, which means n < 2^X. This also means that on iteration X-1 the loop was still running, and so 2^(X-1) ≤ n. In other words:
2^(X-1) ≤ n < 2^X
From there, finding X as a function of log2(n) is easy: take base-2 logarithms of all three parts to get X-1 ≤ log2(n) < X, i.e. X = floor(log2(n)) + 1.
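A small Java sketch that checks this formula against the actual loop (Integer.numberOfLeadingZeros computes floor(log2(n)) exactly for n >= 1):
public class DoublingLoop {
    public static void main(String[] args) {
        for (int n = 1; n <= 16; n++) {
            int i = 1, count = 0;
            while (i <= n) {
                i = i * 2;
                count++;
            }
            // X = floor(log2(n)) + 1
            int X = (31 - Integer.numberOfLeadingZeros(n)) + 1;
            System.out.println("n=" + n + ": loop=" + count + ", formula=" + X);
        }
    }
}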