A loop whose variable is multiplied/divided by a constant factor at each iteration is considered to run in O(log(n)) time.
For example:

for (i = 1; i <= n; i *= 2) {
    // ... some O(1) operations ...
}
How do I calculate or establish that this loop will run log(n) times?
It was previously explained to me that the factor by which the variable is divided/multiplied becomes the base of the log.
I understand running times and their meanings, I just do not understand the math/reasoning that is required to arrive at this particular solution.
How does one mathematically arrive at this solution from knowing that the loop runs from i=1 to i=n multiplying i by 2 each time?
(I am trying to understand this as a basis to understanding how increasing the variable by a constant power leads to a running time of log(log(n)).)
This is how I make sense of it myself: try to come up with a function f(x) to model your for loop, such that on the xth iteration of your for loop, your iterator i = f(x). For the simple case of for(i=0;i<n;i++), it is easy to see that i goes up by one on every iteration, so we can say that f(x) = x: on the xth iteration of the loop, i = x. On the 0th iteration i=0, on the first i=1, on the second i=2, and so on.
For the other case, for (i=1;i<n;i*=2), we need to come up with an f(x) that will model the fact that for every xth iteration, i is doubled. Successive doubling can be expressed as powers of 2, so let f(x)=2^x. On the 0th iteration, i=1, and 2^0=1. On the first, i=2, and 2^1=2, on the second, i=4, and 2^2=4, then i=8, 2^3=8, then i=16, and 2^4=16. So we can say that f(x)=2^x accurately models our loop.
To figure out how many steps the loop takes to reach a certain n, solve the equation f(x) = n. Using an example of 16, i.e. for (i=1;i<16;i*=2), the equation becomes 2^x = 16; taking log base 2 of both sides gives x = log2(16) = 4, which agrees with the fact that our loop completes in 4 iterations.
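To see the same thing numerically, here is a small Python sketch (my own illustration, using the i < n bound from the loop above) that counts the iterations and compares the count against log2(n):

import math

def count_iterations(n):
    i, count = 1, 0
    while i < n:          # same bounds as for (i = 1; i < n; i *= 2)
        count += 1
        i *= 2
    return count

for n in (16, 1024, 10**6):
    print(n, count_iterations(n), math.ceil(math.log2(n)))   # 4 4, 10 10, 20 20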
According to Wikipedia:
the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.
So let's assume we have a function T of the input length N: T(N) means it takes T(N) seconds to run the algorithm on an array of size N.

for (i = 1; i <= n; i *= 2)

When we double the array size in your algorithm, we get the recurrence:

T(2*N) = T(N) + C, which means that if we double the array length, it takes the same time as T(N) plus a constant time C for the extra operation.
There is a theory about how to solve such recurrences, but you can also use a simple approach such as the Wolfram Alpha solver; for this particular case the result is T(N) = C*log2(N) + T(1), i.e. logarithmic time.
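As a quick sanity check (a sketch of my own, not a formal proof), you can also iterate the recurrence numerically and watch it grow like log2(N):

def T(N, C=1):
    t, size = C, 1            # T(1) = C
    while size < N:
        size *= 2             # doubling the size...
        t += C                # ...adds one constant-cost step
    return t

for N in (16, 1024, 2**20):
    print(N, T(N))            # 5, 11, 21 -> grows like log2(N), i.e. O(log N)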
When you go through a loop via

for (i = 1; i <= n; i += 3)

then T(N) = T(N-3) + C, which leads to a linear execution time.
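The same kind of numeric check (again just a sketch) shows the i += 3 loop's iteration count growing linearly with n:

def count_steps(n):
    i, steps = 1, 0
    while i <= n:             # same bounds as for (i = 1; i <= n; i += 3)
        steps += 1
        i += 3
    return steps

for n in (30, 300, 3000):
    print(n, count_steps(n))  # 10, 100, 1000 -> roughly n/3, i.e. O(n)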
Related
Assume I have a function f(K) that runs in amortised logarithmic time in K, but linear worst case time. What is the running time of:
for i in range(N): f(N) (Choose the smallest correct estimate)
A. Linearithmic in N
B. Amortised linear in N
C. Quadratic in N
D. Impossible to say from the information given
Let's say f(N) just prints "Hello World" so it doesn't depend on how big the parameter is. Can we say the total running time is amortised linear in N?
This kinda looks like a test question, so instead of just saying the answer, allow me to explain what each of these algorithmic complexity concepts mean.
Let's start with the claim that a function f(n) runs in constant time. I am aware it's not even mentioned in the question, but it's really the basis for understanding all other algorithmic complexities. If some function runs in constant time, it means that its runtime is not dependent on its input parameters. Note that it could be as simple as print Hello World or return n, or as complex as finding the first 1,000,000,000 prime numbers (which obviously takes a while, but takes the same amount of time on every invocation). However, this last example is more of an abuse of the mathematical notation; usually constant-time functions are fast.
Now, what does it mean if a function f(n) runs in amortized constant time? It means that if you call the function once, there is no guarantee on how fast it will finish; but if you call it n times, the sum of time spent will be O(n) (and thus, each invocation on average took O(1)). Here is a lengthier explanation from another StackOverflow answer. I can't think of any extremely simple functions that run in amortized constant time (but not constant time), but here is one non-trivial example:
called = 0
next_heavy = 1

def f(n):
    global called, next_heavy
    called += 1
    if called == next_heavy:   # every power-of-two call is a "heavy" call
        for i in range(n):
            print(i)
        next_heavy *= 2
On the 512th call, this function would print 512 numbers; however, before that it only printed a total of 511, so its total number of prints is 2*n - 1, which is O(n). (Why 511? Because the sum of the powers of two from 1 to 2^k equals 2^(k+1) - 1.)
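If you want to verify that counting argument, here is a small variant of the function above (my own sketch) that adds up the would-be prints instead of printing them, called with n equal to the call index as in the analysis:

called = 0
next_heavy = 1
total_prints = 0

def f_counting(n):
    global called, next_heavy, total_prints
    called += 1
    if called == next_heavy:      # heavy call
        total_prints += n         # stands in for the n prints of a heavy call
        next_heavy *= 2

for k in range(1, 513):
    f_counting(k)                 # the k-th call passes n = k
print(total_prints)               # 1 + 2 + 4 + ... + 256 + 512 = 1023 = 2*512 - 1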
Note that every constant time function is also an amortized constant time function, because it takes O(n) time for n calls. So non-amortized complexity is a bit stricter than amortized complexity.
Now your question mentions a function with amortized logarithmic time, which similarly to the above means that after n calls to this function, the total runtime is O(n log n) (and the average runtime per call is O(log n)). And then, per the question, this function is called in a loop from 1 to N, and we just said that by definition those N calls together would run in O(N log N). This is linearithmic.
As for the second part of your question, can you deduce what's the total running time of the loop based on our previous observations?
Let m be the size of Array A and n be the size of Array B. What is the complexity of the following while loop?
while (i < n && j < m) {
    if (some condition)
        i++;
    else
        j++;
}
Example: for A=[1,2,3,4] and B=[1,2,3,4], the while loop executes at most 5+4 times, O(m+n).
Example: for A=[1,2,3,4,7,8,9,10] and B=[1,2,3,4], the while loop executes at most 4 times, O(n).
I am not able to figure out how to represent the complexity of the while loop.
One common approach is to describe the worst-case time complexity. In your example, the worst-case time complexity is O(m + n), because no matter what some condition is during a given loop iteration, the total number of loop iterations is at most m + n.
If it's important to emphasize that the time complexity has a lesser upper bound in some cases, then you'll need to figure out what those cases are, and find a way to express them. (For example, if a given algorithm takes an array of size n and has worst-case O(n^2) time, it might also be possible to describe it as "O(mn) time, where m is the number of distinct values in the array" (only if that's true, of course), where we've introduced an extra variable m to let us capture the impact on performance of having more vs. fewer duplicate values.)
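To make the worst-case bound concrete, here is a small Python sketch (my own, with a stand-in for "some condition") that counts the loop iterations; whatever the condition decides, the count never exceeds m + n:

def count_iterations(A, B, condition):
    m, n = len(A), len(B)          # m = size of A, n = size of B, as in the question
    i = j = iterations = 0
    while i < n and j < m:
        iterations += 1
        if condition(i, j):
            i += 1
        else:
            j += 1
    return iterations              # at most (n - 1) + (m - 1) + 1 <= m + n

print(count_iterations([1, 2, 3, 4], [1, 2, 3, 4], lambda i, j: True))     # 4
print(count_iterations([1, 2, 3, 4], [1, 2, 3, 4], lambda i, j: i <= j))   # 7 = m + n - 1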
I'm trying to prove that the following algorithm runs in O(n^2) time.
I'm given the code below, which is mostly pseudocode:
function generalFunction(value)
    for (i = 1 to value.length - 1)       // value is an array, and this runs `n-1` times
        for (j = 1 to value.length - i)   // This runs `(n-1)(n-1)` times?
            if value[j-1] > value[j]      // This runs n^2 times?
                swap value[j-1] and value[j]  // How would this run?
For the first line, I calculated that it runs n-1 times, because the loop goes n times, but since we are subtracting 1 from the length of the arbitrary array, it would be n-1 times (I believe).
The same can be said for the second line, except we have to multiply it by the original for loop.
However, I'm not sure about the last two lines. Would the third one run n^2 times? I only wrote n^2 because of the two nested loops. I'm not sure how to approach the last line either; any input would be much appreciated.
Yes, this will run in n^2 time, as per your comments. Note that whether the inner if statement (and the swap) executes is irrelevant to the fact that the double loop runs n^2 times. Also, the minus-one part (n-1) still makes it n^2, since you are basically looking for an upper-bound approximation, and n^2 is the tightest such bound. Basically, (n-1)(n-1) = n^2 - 2n + 1 is dominated by the n^2 term.
For a definition and a worked example similar to this, see the Wikipedia article on Big O notation (example section).
P.S. Big O is about the worst-case scenario. So in the worst case, the if statement will always be true, hence the swap will execute on each loop cycle, meaning if you put a breakpoint there, it will get hit (n-1)*(n-1) times. Expanding that gives n^2 - 2n + 1 times.
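If it helps, here is a runnable Python version of the pseudocode (my interpretation, assuming the "to" bounds are inclusive) with a counter, so you can see the comparison count for yourself:

def count_comparisons(values):
    n = len(values)
    comparisons = 0
    for i in range(1, n):               # outer loop: n - 1 passes
        for j in range(1, n - i + 1):   # inner loop: n - i comparisons in pass i
            comparisons += 1
            if values[j - 1] > values[j]:
                values[j - 1], values[j] = values[j], values[j - 1]
    return comparisons

print(count_comparisons(list(range(8, 0, -1))))   # 28 = 8*7/2, which is <= (n-1)^2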
To find the time complexity, I set a value for n, but once I iterated through the algorithm, I was unable to determine what it was. Any suggestions on how to find a formula for it so I can determine what the big-O is?
for (int i = 0; i < 2*n; i++) {
    X
    X
}

for (int i = n; i > 0; i--) {
    X
}
X are just operations in the algorithm.
I set n to two, and it increases very fast; every time it goes through the loop, n doubles. It looks like it might be 2^n.
Since i increases by 1 each time through the first loop and n doubles each time through the loop, do you think the loop would ever terminate? (It probably would terminate when 2*n overflows, but if you're operating in an environment that, say, automatically switches to a Big Integer package when numbers exceed the size of a primitive int, the program will probably simply crash when the system runs out of memory.)
But let's say that this is not what's happening and that n is constant. You don't say whether the execution time for X depends on the value of i, but let's say it does not.
The way to approach this is to answer the question: Since i increases by 1 each time through the first loop, how many iterations will be required to reach the loop termination condition (for i to be at least 2*n)? The answer, of course, is O(n) iterations. (Constant coefficients are irrelevant for O() calculations.)
By the same reasoning, the second loop requires O(n) iterations.
Thus the overall program requires O(n + n) = O(n) iterations (and O(n) executions of X).
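As a quick sanity check, here is a small Python sketch (treating each X as one unit of constant-time work) that counts the X executions for a few values of n:

def count_operations(n):
    ops = 0
    for i in range(2 * n):         # first loop: 2n iterations, two X's each
        ops += 2
    for i in range(n, 0, -1):      # second loop: n iterations, one X each
        ops += 1
    return ops

for n in (10, 100, 1000):
    print(n, count_operations(n))  # 50, 500, 5000 -> 5n, i.e. O(n)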
Your time complexity should be O(n). I assume there are no other loops containing X besides the ones shown. Using 2*n just doubles the bound of the loop, so the running time still increases linearly.
If you, for example, use Floyd's algorithm, which consists of 3 nested for loops, you can conclude that it has a time complexity of O(n^3), where n is the number of elements and the exponent 3 corresponds to the 3 nested for loops.
You may proceed as follows. Note that you can improve your algorithm (the first loop) by avoiding multiplying n by 2 at every iteration: compute 2*n once before the loop.
Below is some pseudocode I wrote that, given an array A and an integer value k, returns true if there are two different integers in A that sum to k, and returns false otherwise. I am trying to determine the time complexity of this algorithm.
I'm guessing that the complexity of this algorithm in the worst case is O(n^2). This is because the first for loop runs n times, and the for loop within this loop also runs n times. The if statement makes one comparison and returns a value if true, which are both constant time operations. The final return statement is also a constant time operation.
Am I correct in my guess? I'm new to algorithms and complexity, so please correct me if I went wrong anywhere!
Algorithm ArraySum(A, n, k)
    for (i=0, i<n, i++)
        for (j=i+1, j<n, j++)
            if (A[i]+A[j] = k)
                return true
    return false
Azodious's reasoning is incorrect. The inner loop does not simply run n-1 times. Thus, you should not use (outer iterations)*(inner iterations) to compute the complexity.
The important thing to observe is, that the inner loop's runtime changes with each iteration of the outer loop.
It is correct, that the first time the loop runs, it will do n-1 iterations. But after that, the amount of iterations always decreases by one:
n - 1
n - 2
n - 3
…
2
1
We can use Gauss' trick (second formula) to sum this series to get n(n-1)/2 = (n² - n)/2. This is how many times the comparison runs in total in the worst case.
From this, we can see that the bound can not get any tighter than O(n²). As you can see, there is no need for guessing.
Note that you cannot provide a meaningful lower bound, because the algorithm may complete after any step. This implies the algorithm's best case is O(1).
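Here is a runnable Python version of the pseudocode with a comparison counter (my own sketch), which makes both the n(n-1)/2 worst case and the constant best case visible:

def count_comparisons(A, k):
    n = len(A)
    comparisons = 0
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1
            if A[i] + A[j] == k:
                return comparisons        # found a pair: stop early
    return comparisons                    # no pair found: n(n-1)/2 comparisons

print(count_comparisons([1] * 10, 100))       # 45 = 10*9/2, the worst case
print(count_comparisons([1, 99, 3, 4], 100))  # 1: the best case, the first pair already sums to k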
Yes. In the worst case, your algorithm is O(n^2).
Your algorithm is O(n^2) because every instance of the input needs at most O(n^2) time.
Your algorithm is Ω(1) because there exists one instance of the input that needs only Ω(1) time.
The following appears in Chapter 3, Growth of Functions, of Introduction to Algorithms, co-authored by Cormen, Leiserson, Rivest, and Stein.
When we say that the running time (no modifier) of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.
Given an input in which the sum of the first two elements is equal to k, this algorithm would take only one addition and one comparison before returning true.
Therefore, this input costs constant time and makes the running time of this algorithm Ω(1).
No matter what the input is, this algorithm takes at most n(n-1)/2 additions and n(n-1)/2 comparisons before returning a value.
Therefore, the running time of this algorithm is O(n^2).
In conclusion, we can say that the running time of this algorithm falls between Ω(1) and O(n^2).
We could also say that the worst-case running time of this algorithm is Θ(n^2).
You are right but let me explain a bit:
This is because the first for loop runs n times, and the for loop within this loop also runs n times.
Actually, the second loop will run for (n-i-1) times, but in terms of complexity it'll be taken as n only. (updated based on phant0m's comment)
So, in the worst-case scenario, it'll run n * (n-i-1) * 1 * 1 times, which is O(n^2).
In the best-case scenario, it runs 1 * 1 * 1 * 1 times, which is O(1), i.e. constant.