I had another question regarding loops. I know two nested for loops make the run time O(n^2), since you iterate through the list n * n times.
But what about two while loops?
while (array1 is not empty)
    if (~~~)
        do ~~~
    else (~~~)
        do ~~~

while (array2 is not empty)
    if (~~~)
        do ~~~
    else (~~~)
        do ~~~
So a while loop is nested inside another while loop. Does this make the run time n^2 also, since we iterate through the first loop n times and the second loop n times? Any help would be appreciated.
Thanks!
As written, it doesn't look like they are nested: there are two loops one after the other, each containing an if/else. In that case, it would be O(n).
If the while loops were nested and based on input size, it would indeed be O(n^2). It's not important what 'type' of loop you are using, but rather the fact that you're looping over the input of size n.
A nested for loop runs in O(n²), as you said. Two of these in sequence take about 2n² steps, which is still O(n²), since constant factors are dropped. Likewise, two while loops of n iterations each, run in sequence, take 2n steps, which is O(n).
while (array1 isn't empty) {
    while (array2 isn't empty) {
        // code goes here
    }
}
If the first array has n elements and the second array has m elements, then the runtime is O(n * m).
In the special case where n and m are the same, it is O(n * n).
while (array1 isn't empty) {
    // code
}
while (array2 isn't empty) {
    // code
}
In this case the runtime is O(n) + O(m), which is O(n) if n is greater than or equal to m and O(m) otherwise; in other words, O(max(n, m)).
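To make the difference concrete, here is a minimal C++ sketch (the function names are my own, purely illustrative) that counts how many times the loop bodies execute in each arrangement:

#include <cstdio>
#include <vector>

// Nested loops: the inner loop runs m times for each of the n outer
// iterations, so the body executes n * m times -> O(n * m).
long long nestedCount(const std::vector<int>& a, const std::vector<int>& b) {
    long long ops = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < b.size(); ++j)
            ++ops;                        // constant-time body
    return ops;
}

// Sequential loops: n iterations followed by m iterations,
// so the bodies execute n + m times -> O(n + m).
long long sequentialCount(const std::vector<int>& a, const std::vector<int>& b) {
    long long ops = 0;
    for (std::size_t i = 0; i < a.size(); ++i) ++ops;
    for (std::size_t j = 0; j < b.size(); ++j) ++ops;
    return ops;
}

int main() {
    std::vector<int> a(1000), b(500);
    std::printf("nested: %lld, sequential: %lld\n",
                nestedCount(a, b), sequentialCount(a, b));  // 500000 vs 1500
}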
I have written an algorithm to read in a text file and extract its contents into two arrays, then sort them. The program is working, but I am confused about calculating the time complexity. I just need someone to clarify this.
Say I have two functions, a main and a helper.
Helper function:

insertion(int array[], int length)
    ...

Main function:

int main()
    while (...)            // this while loop reads the input text file and pushes integers into a vector
        ...
        while (...)
            ...
            if (...)
                for (...)  // this for loop validates array B only
    insertion(arrayA, lengthA)
    insertion(arrayB, lengthB)
The program reads in the text file
Pushes line 1 to array A, pushes line 2 to array B
A 'for loop' validates array B's integers, with an outer 'if'
Insertion sort is performed on array A and array B
From what I learnt, I have to let the number of data items be 'n' before calculating the Big-O or the number of operations. Now, obviously there are two data sets here: one for array A and one for array B.
So, array A = n and array B = m.
However, I am unsure whether the number of data items in the helper function should be 'n' or 'm'. Likewise for the nested while loop, whether its number of data items should be 'n' or 'm'.
I tried my best to explain my difficulty in understanding this time complexity, along with a simplified form of my program (the actual program has tons of loops...). Hopefully someone can understand what I mean and provide some clarification; otherwise I will modify it further to make it clearer. Thanks!
Edit: I am required to calculate the number of operations before finding the Big-O for my algorithm.
I understand that after you read the file, you will have arrays A and B.
If m and n are close, then you can say that m = n. Otherwise, pick the bigger of the two and call it n.
Then you read n elements twice: n + n = 2n, but in Big-O you can drop the constant, so at this point you have O(n) time.
If validation makes only one pass over your array B, then you have 3n operations, but 3 is still a constant, so the time complexity is still O(n).
But the worst case insertion sort can hit is O(n^2). You do it twice: n^2 + n^2 = 2n^2, and two is a constant, so the insertion sort piece takes O(n^2).
Finally, you have O(n) + O(n^2). Since it's Big-O notation, only the most expensive part is significant: O(n^2) is your complexity.
For example, if you ran insertion sort n times, then you'd have O(n * n^2) time, which is O(n^3).
A computer does about 10^9 operations per second, so small values of n don't count for much.
If you are not sure whether n and m are close, say 0 < n < 10^9 and 0 < m < 10^3, you'd say the time complexity of reading the inputs is O(n + m), and insertion sort is O(n^2) + O(m^2). But here m << n (m is much less than n), so you can just as well ignore m (m here is almost optional IF YOU ARE not being strict!). If you need to be strict, do not ignore these small cases at first.
If 0 < n < 10^9 and 0 < m < 10^9, then you shouldn't say m = n or ignore either one, because n could be one and m a million.
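For reference, here is a minimal C++ sketch of insertion sort (the standard algorithm, not the asker's exact code), with a comment marking where the worst-case O(n^2) comes from. Run it once on array A and once on array B to get the O(n^2) + O(m^2) term above:

#include <vector>

// Standard insertion sort. The outer loop runs n - 1 times; in the
// worst case (reverse-sorted input) the inner loop shifts all i earlier
// elements, giving 1 + 2 + ... + (n-1) = n(n-1)/2 shifts -> O(n^2).
void insertion(std::vector<int>& a) {
    for (std::size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        std::size_t j = i;
        while (j > 0 && a[j - 1] > key) {  // shift larger elements right
            a[j] = a[j - 1];
            --j;
        }
        a[j] = key;
    }
}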
I have a method called BinarySum:
Algorithm BinarySum(A, i, n):
    Input: An array A and integers i and n
    Output: The sum of the n integers in A starting at index i

    if n = 1 then
        return A[i]
    return BinarySum(A, i, n/2) + BinarySum(A, i + n/2, n/2)
Ignoring the fact that this makes a simple problem complicated, I have been asked to find the Big-O. Here is my thought process: for an array of size N I will be making 1 + 2 + 4 + ... + N recursive calls. This is close to half the sum from 1 to N, so I will say it is about N(N + 1)/4. After making this many calls, I now need to add the results together, so once again I need to perform about N(N + 1)/4 additions. Adding them together, we are left with N^2 as the dominant term.
So would the Big-O of this algorithm be O(N^2)? Or am I doing something wrong? It feels strange to have binary recursion and not have a 2^n or log n term in the final answer.
There are in fact 2^n and log n terms in the final result... sort of.
For each call on a sub-array of length n, two recursive calls are made on the two halves of that array, plus a constant amount of work (the if-statement, an addition, pushing onto the call stack, etc.). Thus the recurrence relation is:

T(1) = c
T(n) = 2T(n/2) + c
At this point we could just use the Master theorem to arrive directly at the final result, O(n). But let's instead derive it by repeated expansion:

T(n) = 2T(n/2) + c
     = 4T(n/4) + 2c + c
     = 8T(n/8) + 4c + 2c + c
     = ...
     = 2^m T(n/2^m) + (2^(m-1) + ... + 2 + 1)c
     = 2^m T(n/2^m) + (2^m - 1)c        (*)
The stopping condition n = 1 gives the maximum value of m (ignoring rounding): n/2^m = 1, i.e. m = log2(n), so 2^m = n, and therefore

T(n) = n T(1) + (n - 1)c = (2n - 1)c = O(n).
In step (*) we used the standard formula for a geometric series, 1 + 2 + ... + 2^(m-1) = 2^m - 1. So as you can see, the answer does involve log n and 2^n terms in a sense, but they "cancel out" to give a simple linear term, the same as for a simple loop.
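A quick way to convince yourself of the linear bound: the following C++ transliteration of BinarySum (the call counter is my addition) makes 2n - 1 calls for n a power of two, each doing constant work:

#include <cstdio>

static long long calls = 0;  // counts every invocation of binarySum

// Sum of the n integers in A starting at index i, by binary splitting.
int binarySum(const int A[], int i, int n) {
    ++calls;
    if (n == 1)
        return A[i];
    return binarySum(A, i, n / 2) + binarySum(A, i + n / 2, n / 2);
}

int main() {
    int A[16];
    for (int i = 0; i < 16; ++i) A[i] = i;
    std::printf("sum = %d, calls = %lld\n",
                binarySum(A, 0, 16), calls);  // sum = 120, calls = 31 = 2*16 - 1
}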
Can someone explain to me in plain English how merge sort is O(n log n)? I know that the n comes from the fact that it takes n appends to merge two sorted lists of size n/2. What confuses me is the log. If we were to draw a tree of the function calls when running merge sort on a 32-element list, it would have 5 levels, and log2(32) = 5. That makes sense; however, why do we use the levels of the tree, rather than the actual function calls and merges, in the Big-O definition?
In the usual merge sort diagram we can see that for an 8-element list there are 3 levels. In this context, Big O is trying to find how the number of operations behaves as the input increases; my question is, how are the levels (of function calls) considered operations?
The levels of function calls are accounted for like this (from the book [Introduction to Algorithms](https://mitpress.mit.edu/books/introduction-algorithms), Chapter 2.3.2):
We reason as follows to set up the recurrence for T(n), the worst-case running time of merge sort on n numbers. Merge sort on just one element takes constant time. When we have n > 1 elements, we break down the running time as follows.
Divide: The divide step just computes the middle of the subarray, which takes constant time. Thus, D(n) = Θ(1).
Conquer: We recursively solve two subproblems, each of size n/2, which contributes 2T(n/2) to the running time.
Combine: We have already noted that the MERGE procedure on an n-element subarray takes time Θ(n), and so C(n) = Θ(n).
When we add the functions D(n) and C(n) for the merge sort analysis, we are adding a function that is Θ(n) and a function that is Θ(1). This sum is a linear function of n, that is, Θ(n). Adding it to the 2T(n/2) term from the “conquer” step gives the recurrence for the worst-case running time T(n) of merge sort:
T(n) = Θ(1), if n = 1; T(n) = 2T(n/2) + Θ(n), if n > 1.
Then using the recursion tree or the master theorem, we can calculate:
T(n) = Θ(n lg n).
Simple analysis:

Say the length of the array to be sorted is n. Every time, it is divided in half, so we get the following tree:

                  n
           n/2         n/2
        n/4   n/4   n/4   n/4
       ..........................
       1   1   1   ...........   1

As you can see, the height of the tree will be log n (2^k = n, so k = log n).
At every level the total is n (n/2 + n/2 = n, n/4 + n/4 + n/4 + n/4 = n).
So finally: levels = log n and every level takes n.
Combining, we get n log n.
Now, regarding your question of how levels are considered operations, consider the following:

Take the array 9, 5, 7 and suppose it is split into [9, 5] and [7].
For [9, 5], merging converts it to [5, 9] (at this level one swap is required).
Then at the upper level, merging [5, 9] with [7] produces [5, 7, 9] (again, one swap is required at this level).

In the worst case, the number of operations on any level can be O(n), and the number of levels is log n. Hence n log n.

For more clarity, try to code merge sort yourself; you will be able to visualise it.
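If you want to try that, here is a minimal top-down C++ sketch (one of many ways to write merge sort) whose call tree is exactly the picture above: log n levels, with the Θ(n) merge work spread across each level:

#include <vector>

// Merge the sorted halves a[lo..mid) and a[mid..hi) via buffer tmp.
// This is the linear-time work done at each level of the tree.
void merge(std::vector<int>& a, std::vector<int>& tmp, int lo, int mid, int hi) {
    int i = lo, j = mid;
    for (int k = lo; k < hi; ++k)
        tmp[k] = (j >= hi || (i < mid && a[i] <= a[j])) ? a[i++] : a[j++];
    for (int k = lo; k < hi; ++k)
        a[k] = tmp[k];
}

// Each call halves its range, so the recursion tree has depth log2(n).
void mergeSort(std::vector<int>& a, std::vector<int>& tmp, int lo, int hi) {
    if (hi - lo <= 1) return;            // a 0- or 1-element range is sorted
    int mid = lo + (hi - lo) / 2;
    mergeSort(a, tmp, lo, mid);
    mergeSort(a, tmp, mid, hi);
    merge(a, tmp, lo, mid, hi);
}

Calling mergeSort(a, tmp, 0, a.size()) with tmp sized to match sorts the whole array.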
Let's take your 8-item array as an example. We start with [5,3,7,8,6,2,1,4].
As you noted, there are three passes. In the first pass, we merge 1-element subarrays. In this case, we'd compare 5 with 3, 7 with 8, 2 with 6, and 1 with 4. Typical merge sort behavior is to copy items to a secondary array. So every item is copied; we just change the order of adjacent items when necessary. After the first pass, the array is [3,5,7,8,2,6,1,4].
On the next pass, we merge two-element sequences. So [3,5] is merged with [7,8], and [2,6] is merged with [1,4]. The result is [3,5,7,8,1,2,4,6]. Again, every element was copied.
In the final pass the algorithm again copies every item.
There are log(n) passes, and at every pass all n items are copied. (There are also comparisons, of course, but the number is linear and no more than the number of items.) Anyway, if you're doing n operations log(n) times, then the algorithm is O(n log n).
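For completeness, here is a sketch of that bottom-up variant in C++ (my rendering of the pass-by-pass description above): the outer loop runs once per pass, i.e. about log2(n) times, and each pass copies all n items:

#include <algorithm>
#include <vector>

// Bottom-up merge sort: merge runs of width 1, then 2, then 4, ...
// There are about log2(n) passes, and each pass copies all n items.
void mergeSortBottomUp(std::vector<int>& a) {
    int n = static_cast<int>(a.size());
    std::vector<int> tmp(n);
    for (int width = 1; width < n; width *= 2) {      // one pass per level
        for (int lo = 0; lo < n; lo += 2 * width) {   // merge adjacent runs
            int mid = std::min(lo + width, n);
            int hi  = std::min(lo + 2 * width, n);
            int i = lo, j = mid;
            for (int k = lo; k < hi; ++k)
                tmp[k] = (j >= hi || (i < mid && a[i] <= a[j])) ? a[i++] : a[j++];
        }
        std::copy(tmp.begin(), tmp.end(), a.begin()); // every item copied
    }
}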
I don't mean to be asking for help with something simple, but I can't seem to figure out how to answer this question.
Compute the time complexity of the following program fragment:
sum = 0;
for i = 1 to n do
    for j = 1 to i do
        k = n * 2
        while k > 0 do
            sum = sum + 1;
            k = k div 2;
I recognize that what is inside the while loop takes O(1) and that the while loop itself takes O(log n), but I don't follow how that connects to the nested for loops, since I am used to just writing nested sigma notations for for loops.
Thanks!
A formal demonstration which shows, step by step, the order of growth of your algorithm: the while loop performs floor(log2(2n)) + 1 iterations, and it is reached once for every (i, j) pair, so the total is the sum over i = 1..n of the sum over j = 1..i of (log2(2n) + 1), which equals (log2(2n) + 1) * n(n+1)/2 = O(n^2 * log n).
Here are some hints on how to break down this function's complexity:
Look at the inner loop, where k = n*2. Let's assume n = 8, so k = 16. k keeps being divided by 2 until it is 0 or less (I'll assume here that integer division rounds 0.5 down to 0). So the series describing k until the end of the loop is 16, 8, 4, 2, 1, 0. Try to work out what function describes the number of elements in this series if you know the first value is k.
You have two nested for loops: the first loop simply iterates n times, and the second (inner) loop iterates until it reaches the iteration number of the first loop (represented by i). So at first it iterates once, then twice, and so on up to n. The number of iterations performed by the inner loop can therefore be described by the series 1, 2, 3, ..., n. This is a simple arithmetic progression, and its sum gives the total number of iterations of the inner for loop. This is also the number of times you enter the while loop (whose work is not affected by the current iteration, since k depends on n, which is constant, and not on i or j).
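Putting the hints together: the while loop runs floor(log2(2n)) + 1 times and is entered n(n+1)/2 times, so the total is O(n^2 log n). A direct C++ transliteration of the fragment, with the sum used as an operation counter (the 'expected' formula is just the closed form above), confirms this:

#include <cmath>
#include <cstdio>

int main() {
    long long n = 64;
    long long sum = 0;                                // counts while-loop iterations
    for (long long i = 1; i <= n; ++i)
        for (long long j = 1; j <= i; ++j)
            for (long long k = n * 2; k > 0; k /= 2)  // floor(log2(2n)) + 1 steps
                ++sum;
    long long expected = n * (n + 1) / 2
                       * (static_cast<long long>(std::log2(2.0 * n)) + 1);
    std::printf("sum = %lld, expected = %lld\n", sum, expected);  // both 16640
}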
I'm calculating the running time for this algorithm:
                                      Cost    Number of times
for (j = 1; j <= n-1; j++) {           c1     n             (the loop runs n-1 times, +1 for the failed condition)
    for (i = 0; i <= n-2; i++) {       c2     n*(n-1)       (n-1 from the outer loop, n from the inner)
        if (a[i] > a[i+1]) {           c3     (n-1)*(n-1)
            swap                       c4     (n-1)*(n-1)   (in the worst case)
        }
    }
}
In the worst case:

T(n) = c1*n + c2*(n-1)*n + c3*(n-1)*(n-1) + c4*(n-1)*(n-1)

which is O(n^2).

In the best case:

T(n) = c1*n + c2*(n-1)*n + c3*(n-1)*(n-1)

which is also O(n^2).

BUT in the best case bubble sort actually has time complexity O(n). Can anyone explain?
Bubble Sort has O(n) time complexity in the best case because it is possible to pass an already sorted list to it.
You have to check whether you did any swaps after the inner loop finishes. If no swaps were done, the list is sorted and there's no need to continue, so you can break out of the outer loop.
For an already-sorted list, you'd have iterated over all n elements just once in that case.
Your algorithm for implementing bubble sort is correct, but not efficient:
// n is the total number of elements
do {
    swp = false                    // swp records whether any pair was swapped in the inner loop
    for (i = 0; i <= n-2; i++) {
        if (a[i] > a[i+1]) {
            swap(a[i], a[i+1])
            swp = true
        }
    }
    n = n - 1                      // after each pass the largest remaining element is in place
} while (swp == true && n > 0)
swp is a variable which records whether there has been any swap in the inner loop or not; if there has not been any swap, this means that our array is sorted.
The best case for bubble sort is when the elements are already sorted in ascending order. In that case the inner loop runs just once, but its if condition is never satisfied, so swp remains false and we exit the outer loop after one iteration, which gives bubble sort O(n) complexity.
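A tiny driver (hypothetical input, just to illustrate the best case) shows this: on already-sorted input the do-while body runs exactly once:

#include <cstdio>
#include <utility>

int main() {
    int a[] = {1, 2, 3, 4, 5};                 // already sorted: best case
    int n = 5;
    int passes = 0;
    bool swp;
    do {
        ++passes;
        swp = false;
        for (int i = 0; i <= n - 2; ++i)
            if (a[i] > a[i + 1]) {
                std::swap(a[i], a[i + 1]);
                swp = true;
            }
        n = n - 1;
    } while (swp && n > 0);
    std::printf("passes = %d\n", passes);      // prints 1 -> O(n) best case
}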
You can compute the number of iterations (what's inside the loop is irrelevant, because it's of constant time) using sigma notation: the sum over j = 1..n-1 of the sum over i = 0..n-2 of 1, which is (n-1)(n-1) = O(n^2).
Bubble sort with an O(n) best-case running time is actually an enhanced version of the basic algorithm.
During the first pass (the first iteration of the outer loop), if no swap is performed, that is decisive information that the array is already sorted, and it is pointless to continue.
Therefore the outer loop iterates once and the inner loop iterates about n times: roughly n + 1 iterations overall ==> O(n).