I am having problems with analysing the time complexity of an algorithm. For example, the following Haskell code, which sorts a list:
sort xs
  | isSorted xs = xs
  | otherwise   = sort (check xs)
  where
    isSorted xs = all (== True) (zipWith (<=) xs (drop 1 xs))
    check []  = []
    check [x] = [x]
    check (x:y:xs)
      | x <= y    = x : check (y:xs)
      | otherwise = y : check (x:xs)
So, with n being the length of the list and t_isSorted(n) the running-time function, there is a constant t_drop(n) = c, and t_all(n) = n, t_zipWith(n) = n, so:
t_isSorted(n) = c + n + n
For t_check:
t_check(1) = c1
t_check(n) = c2 + t_check(n-1), where c2 is the cost of comparing (and possibly swapping) one pair
...
t_check(n) = i*c2 + t_check(n-i), with i = n-1
           = (n-1)*c2 + t_check(1)
           = n*c2 - c2 + c1
And how exactly do I have to combine those to get t_sort(n)? I guess that in the worst case, sort xs has to run n-1 times.
isSorted is indeed O(n): it is dominated by zipWith, which in turn is O(n) since it does a linear pass over its arguments.
check itself is O(n), since it calls itself at most once per step and each step consumes a constant number of elements from the list. The fastest comparison-based sorting algorithm (without knowing anything more about the list) runs in O(n*log(n)) time (equivalently, O(log(n!))). There is a mathematical proof of this lower bound, and a single call to check is faster than that, so it cannot possibly be sorting the whole list.
check only moves things one step; it's effectively a single pass of bubble sort.
Consider sorting this list: [3,2,1]
check [3,2,1] = 2:(check [3,1]) -- since 3 > 2
check [3,1] = 1:(check [3]) -- since 3 > 1
check [3] = [3]
which would return the "sorted" list [2,1,3].
Then, as long as the list is not sorted, we loop. Since we might only put one element in its correct position (as 3 did in the example above), we might need O(n) loop iterations.
This gives a total time complexity of O(n) * O(n) = O(n^2).
The time complexity is O(n^2).
You're right: one step takes O(n) time (for both the isSorted and check functions). check is called no more than n times (maybe even n - 1; it doesn't really matter for the time complexity), because after the first call the largest element is guaranteed to be last, after the second call the second largest is in place, and in general the last k elements are the largest and correctly sorted after k calls. Each call swaps only adjacent elements, so it removes at most one inversion per swap. Since the number of inversions is O(n^2) in the worst case (namely n * (n - 1) / 2), the time complexity is O(n^2).
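To answer the combination question directly, here is a rough worked bound using the constants from the question and the guess that sort makes at most n passes (c3 below is a hypothetical constant for the per-call overhead of sort itself):

    t_sort(n) <= n * (t_isSorted(n) + t_check(n) + c3)
              =  n * ((c + 2n) + (n*c2 - c2 + c1) + c3)
              =  O(n^2)

The exact constants do not matter; what matters is the n * O(n) shape, which matches the inversion argument above.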
I have this algorithm, and I am trying to calculate its complexity.
A = {a_1, a_2, a_3, ...}
w = 0
while A != empty
    a' = argmin(A)    # a' is the element with smallest y_a
    if (N_a' + w > C)
        A = A - {a'}
    else
        x_a' = x_a' + 1
        w = w + N_a'
        Update the y_a' value in A using x_a'
A is a set, and whenever the condition (N_a' + w > C) is true we remove an element from the set, until the set is empty. I know that the algorithm runs for at least n iterations, but it can run for more if the if condition is false. Assume the last line (the update) takes constant time.
How can I calculate the complexity here?
Let's first determine how often the then and else branches can run in the worst case. In the then branch the set A becomes smaller by one element, so it can be executed at most n times (where n is the initial number of elements in A). The else branch can be executed at most C times: each execution increases w by N_a', which must be >= 1, so after at most C executions we have w >= C and the condition N_a' + w > C holds for every remaining element. C is a constant, so this is O(1). The total number of iterations is therefore O(n).
The critical point is now the data structure used for A. Three operations have to be supported: finding the min, removing the minimum element, and the update in the last line. When we choose a min-heap, each of these operations can be done in O(log n). Note that the update is not O(1) time in this case. The total run time is now O(n log n).
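To make the data-structure choice concrete, here is a minimal Haskell sketch that uses Data.Set from containers as the priority structure; findMin, delete and insert are all O(log n), so it plays the same role as the min-heap. The item layout, the capacity c and the updateY rule are hypothetical stand-ins, since the question does not pin them down:

    import qualified Data.Set as Set

    -- Hypothetical item layout: (current y value, identifier, weight N_a, counter x_a).
    -- Ordering on y first means Set.findMin returns the element with the smallest y_a.
    type Item = (Double, Int, Int, Int)

    run :: Int -> Int -> Set.Set Item -> [Item]
    run c w items
      | Set.null items = []                                    -- A is empty: stop
      | n + w > c      = item : run c w rest                   -- "then" branch: remove a'
      | otherwise      = run c (w + n) (Set.insert item' rest) -- "else" branch: update a'
      where
        item@(y, ident, n, x) = Set.findMin items              -- a' = argmin over y_a: O(log n)
        rest  = Set.delete item items                          -- O(log n)
        item' = (updateY y (x + 1), ident, n, x + 1)           -- new y from the new x_a'

    -- Hypothetical update rule for y_a'; the real one comes from the problem statement.
    updateY :: Double -> Int -> Double
    updateY y x = y + fromIntegral x

Each iteration performs a constant number of O(log n) set operations, and by the argument above there are at most n + C iterations, so the total is O(n log n).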
A naive minimum search (i.e. using an unordered array for A) makes the operations min, remove element, and update O(n), O(1), and O(1) respectively. The total run time would therefore be O(n*n).
Using an ordered array to represent A we get run times of O(1), O(1), and O(n) respectively for our three operations. The min-search O(1) operation is executed in each iteration, so O(n) times. The remove-element O(1) operation is in the then branch, so it is executed O(n) times; the update O(n) operation is in the else branch, so it is executed O(1) times. Taking it all together gives a runtime of O(n).
However, if the set has to be sorted in the beginning we are again at O(n log n).
Can someone explain to me in plain English how Merge Sort is O(n*log n)? I know that the 'n' comes from the fact that it takes n appends to merge two sorted lists of size n/2. What confuses me is the log. If we were to draw a tree of the function calls when running Merge Sort on a 32-element list, it would have 5 levels, and log2(32) = 5. That makes sense; however, why do we use the levels of the tree, rather than the actual function calls and merges, in the Big O definition?
For an 8-element list, for example, the call tree has 3 levels. In this context, Big O is trying to capture how the number of operations behaves as the input grows; my question is, how are the levels (of function calls) considered operations?
The levels of function calls are considered like this (from the book [Introduction to Algorithms](https://mitpress.mit.edu/books/introduction-algorithms), Chapter 2.3.2):
We reason as follows to set up the recurrence for T(n), the worst-case running time of merge sort on n numbers. Merge sort on just one element takes constant time. When we have n > 1 elements, we break down the running time as follows.
Divide: The divide step just computes the middle of the subarray, which takes constant time. Thus, D(n) = Θ(1).
Conquer: We recursively solve two subproblems, each of size n/2, which contributes 2T(n/2) to the running time.
Combine: We have already noted that the MERGE procedure on an n-element subarray takes time Θ(n), and so C(n) = Θ(n).
When we add the functions D(n) and C(n) for the merge sort analysis, we are adding a function that is Θ(n) and a function that is Θ(1). This sum is a linear function of n, that is, Θ(n). Adding it to the 2T(n/2) term from the “conquer” step gives the recurrence for the worst-case running time T(n) of merge sort:
T(n) = Θ(1), if n = 1; T(n) = 2T(n/2) + Θ(n), if n > 1.
Then using the recursion tree or the master theorem, we can calculate:
T(n) = Θ(n lg n).
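To see the recursion-tree calculation concretely, you can unroll the recurrence by hand, writing cn for the Θ(n) term (this is a sketch, not part of the book's text):

    T(n) = cn + 2T(n/2)
         = cn + 2(c(n/2) + 2T(n/4)) = 2cn + 4T(n/4)
         = 3cn + 8T(n/8)
         = ...
         = k*cn + 2^k * T(n/2^k)

Setting k = lg n makes 2^k = n, so T(n) = cn*lg n + n*Θ(1) = Θ(n lg n); the lg n simply counts how many times n can be halved before reaching the base case.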
Simple analysis:
Say the length of the array to be sorted is n.
Every time it is divided in half, so the picture looks like this:
n
n/2 n/2
n/4 n/4 n/4 n/4
............................
1 1 1 ......................
As you can see, the height of the tree will be log n (2^k = n, so k = log n).
At every level the sum is n (n/2 + n/2 = n, n/4 + n/4 + n/4 + n/4 = n).
So finally there are log n levels and every level costs n;
combining the two gives n log n.
Now, regarding your question of how levels are counted as operations, consider this:
array 9, 5, 7
suppose it is split into [9,5] and [7]
[9,5] gets merged into [5,9] (one comparison and one swap at this level)
then at the upper level [5,9] and [7] are merged into [5,7,9]
(again, comparisons and moves are needed at this level).
In the worst case the number of operations at any level can be O(N), and the number of levels is log n. Hence n log n.
For more clarity, try to code merge sort yourself; you will be able to visualise it (a sketch follows below).
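For reference, here is a minimal top-down merge sort in Haskell; it is a sketch of the textbook algorithm, not any particular library's implementation:

    mergeSort :: Ord a => [a] -> [a]
    mergeSort []  = []
    mergeSort [x] = [x]
    mergeSort xs  = merge (mergeSort left) (mergeSort right)
      where
        (left, right) = splitAt (length xs `div` 2) xs   -- divide: O(n)

    -- Merge two already-sorted lists in a single linear pass.
    merge :: Ord a => [a] -> [a] -> [a]
    merge [] ys = ys
    merge xs [] = xs
    merge (x:xs) (y:ys)
      | x <= y    = x : merge xs (y:ys)
      | otherwise = y : merge (x:xs) ys

There are about log n levels of recursion (the list length halves at each level), and the merges across any one level touch every element once, i.e. O(n) work per level; multiplying the two gives the n log n.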
Let's take your 8-item array as an example. We start with [5,3,7,8,6,2,1,4].
As you noted, there are three passes. In the first pass, we merge 1-element subarrays. In this case, we'd compare 5 with 3, 7 with 8, 2 with 6, and 1 with 4. Typical merge sort behavior is to copy items to a secondary array. So every item is copied; we just change the order of adjacent items when necessary. After the first pass, the array is [3,5,7,8,2,6,1,4].
On the next pass, we merge two-element sequences. So [3,5] is merged with [7,8], and [2,6] is merged with [1,4]. The result is [3,5,7,8,1,2,4,6]. Again, every element was copied.
In the final pass the algorithm again copies every item.
There are log(n) passes, and at every pass all n items are copied. (There are also comparisons, of course, but the number is linear and no more than the number of items.) Anyway, if you're doing n operations log(n) times, then the algorithm is O(n log n).
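The pass-based description above is the bottom-up variant of merge sort. Here is a hedged Haskell sketch of it; the merge helper is the usual one-pass merge of two sorted lists:

    -- Bottom-up merge sort: start from runs of length 1 and repeatedly merge
    -- adjacent runs, doubling the run length on every pass.
    mergeSortBU :: Ord a => [a] -> [a]
    mergeSortBU xs = go (map (:[]) xs)
      where
        go []    = []
        go [run] = run                       -- a single run left: the list is sorted
        go runs  = go (mergePairs runs)      -- one pass over the data

        mergePairs (a:b:rest) = merge a b : mergePairs rest
        mergePairs rest       = rest         -- zero or one run left over

        merge [] ys = ys
        merge xs' [] = xs'
        merge (x:xs') (y:ys)
          | x <= y    = x : merge xs' (y:ys)
          | otherwise = y : merge (x:xs') ys

The number of runs halves on every pass, so there are about log(n) passes, and each pass copies every element once; that is exactly the n items times log(n) passes described above.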
I believe that BubbleSort is of the order O(n^2). As I read in previous postings, this has to do with the nested iteration. But when I dry-run a simple unsorted list (see below), I have the list sorted in 10 comparisons.
In my example, here is my list of integer values:
5 4 3 2 1
To get 5 into position, I did n-1 swap operations. (4)
To get 4 into position, I did n-2 swap operations. (3)
To get 3 into position, I did n-3 swap operations. (2)
To get 2 into position, I did n-4 swap operations. (1)
I can't see where (n^2) comes from, as when I have a list of n=5 items, I only need 10 swap operations.
BTW, I've seen (n-1).(n-1) which doesn't make sense to me, as this would give 16 swap operations.
I'm only concerned with basic BubbleSort...a simple nested FOR loop, in the interest of simplicity and clarity.
You don't seem to understand the concept of big O notation very well. It refers to how the number of operations or the time grows in relation to the size of the input, asymptotically, considering only the fastest-growing term, and without considering the constant of proportionality.
A single measurement like your 5:10 result is completely meaningless. Imagine looking for a function that maps 5 to 10. Is it 2N? N + 5? 4N - 10? 0.4N^2? N^2 - 15? 4 log_5(N) + 6? The possibilities are limitless.
Instead, you have to analyze the algorithm to see how the number of operations grows as N does, or measure the operations or time over many runs, using various values of N and the most general datasets you can devise. Note that your test case is not general at all: when checking the average performance of a sorting algorithm, you want the input to be in random order (the most likely case), not sorted or reverse-sorted.
If you want to be precise, there are n*(n-1)/2 swaps in the worst case, because you are actually computing (n-1) + (n-2) + ... + 1: the first element needs at most n-1 swaps, the second element needs at most n-2 swaps, and so on. So that is (n^2 - n)/2, which in asymptotic notation is O(n^2). But what actually happens in bubble sort is slightly different: you perform passes over the array, swapping misplaced neighbours, until a pass makes no swap, which means the array has become sorted. Each pass over the array takes O(n) time, and in the worst case you have to perform n passes, so the algorithm is O(n^2). Note that in this second count we are counting the number of comparisons, not the number of swaps.
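For the concrete n = 5 case from the question, that sum is exactly the 10 swaps observed:

    (n-1) + (n-2) + (n-3) + (n-4) = 4 + 3 + 2 + 1 = 10 = 5*4/2 = n*(n-1)/2

n*(n-1)/2 = (n^2 - n)/2 grows like n^2/2, and big O discards both the factor 1/2 and the lower-order n term, which is where the O(n^2) comes from even though the count for n = 5 is only 10.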
There are two versions of bubble sort mentioned on Wikipedia:
procedure bubbleSort( A : list of sortable items )
    n = length(A)
    repeat
        swapped = false
        for i = 1 to n-1 inclusive do
            /* if this pair is out of order */
            if A[i-1] > A[i] then
                /* swap them and remember something changed */
                swap( A[i-1], A[i] )
                swapped = true
            end if
        end for
    until not swapped
end procedure
In the worst case this version performs up to n passes of n-1 comparisons each, i.e. about n*(n-1) comparisons -> O(n^2).
Optimizing bubble sort
The bubble sort algorithm can be easily optimized by observing that the n-th pass finds the n-th largest element and puts it into its final place. So, the inner loop can avoid looking at the last n-1 items when running for the n-th time:
procedure bubbleSort( A : list of sortable items )
    n = length(A)
    repeat
        swapped = false
        for i = 1 to n-1 inclusive do
            if A[i-1] > A[i] then
                swap(A[i-1], A[i])
                swapped = true
            end if
        end for
        n = n - 1
    until not swapped
end procedure
This version performs (n-1)+(n-2)+(n-3)+...+1 comparisons in the worst case, which is n*(n-1)/2 -> O(n^2).
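For comparison with the pseudocode, here is a rough Haskell sketch of the same loop (keep doing swap passes until a whole pass makes no swap); it is list-based and meant only to illustrate the counting, not to be an efficient implementation:

    -- One pass: swap out-of-order neighbours and report whether anything moved.
    bubblePass :: Ord a => [a] -> ([a], Bool)
    bubblePass (x:y:rest)
      | x > y     = let (rest', _)     = bubblePass (x:rest) in (y : rest', True)
      | otherwise = let (rest', moved) = bubblePass (y:rest) in (x : rest', moved)
    bubblePass xs = (xs, False)

    -- Repeat passes until nothing was swapped ("until not swapped").
    bubbleSort :: Ord a => [a] -> [a]
    bubbleSort xs = case bubblePass xs of
      (ys, True)  -> bubbleSort ys
      (ys, False) -> ys

A reverse-sorted input still needs about n passes of up to n-1 comparisons each, which gives the same O(n^2) behaviour as the array versions above.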
I am struggling with my homework and need a little push. The question is to design an algorithm that will, in O(n log m) time, find the elements of ranks k_1 < k_2 < ... < k_m (there are m given ranks, each between 1 and n). I know that a simple selection algorithm takes O(n) time to find the k-th smallest element, but how do you get the m into the recurrence? I thought of handling both k_1 and k_m in each run, but that only takes out 2 of the ranks, not m/2.
Would appreciate some directions.
Thanks
If I understand the question correctly, you have a vector K containing m indices, and you want to find the k'th ranked element of A for each k in K. If K contains the smallest m indices (i.e. k=1,2,...,m) then this can be done easily in linear time T=O(n) by using quickselect to find the element k_m (since all the smaller elements will be on the left at the end of quickselect). So I'm assuming that K can contain any set of m indices.
One way to accomplish this is by running quickselect on all of K at the same time. Here is the algorithm:

QuickselectMulti(A, K)
    If K is empty, then return an empty result set
    Pick a pivot p from A at random
    Partition A into sets A0 < p and A1 > p
    i = A0.size + 1
    if K contains i, then remove i from K and add (i => p) to the result set
    Partition K into sets K0 < i and K1 > i
    add QuickselectMulti(A0, K0) to the result set
    subtract i from each k in K1
    call QuickselectMulti(A1, K1), add i to each index of the output, and add this to the result set
    return the result set
If K contains just one element, this is the same as randomized quickselect. To see why the running time is O(n log m) on average, first consider what happens when each pivot exactly splits both A and K in half. In this case, you get two recursive calls, so you have
T = n + 2T(n/2,m/2)
= n + n + 4T(n/4,m/4)
= n + n + n + 8T(n/8,m/8)
Since m drops in half each time, then n will show up log m times in this summation. To actually derive the expected running time requires a little more work, because you can't assume that the pivot will split both arrays in half, but if you work through the calculations, you will see that the running time is in fact O(n log m) on average.
Edit: the analysis of this algorithm can be made simpler by choosing the pivot as p = Quickselect(A, k_i), where k_i is the middle element of K, rather than choosing p at random. This guarantees that K is split in half each time, so the number of recursive calls is exactly log m, and since quickselect runs in linear time the result is still O(n log m).
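Here is a hedged Haskell sketch of the QuickselectMulti idea above, returning (rank, value) pairs. For simplicity it uses the head of the list as the pivot instead of a random element, so the O(n log m) average-case bound applies to the randomized version described above rather than literally to this code:

    import Data.List (partition)

    -- Ranks in ks are 1-based; the pivot choice here is deterministic (a simplification).
    quickselectMulti :: Ord a => [a] -> [Int] -> [(Int, a)]
    quickselectMulti _  []  = []
    quickselectMulti [] _   = []
    quickselectMulti (p:rest) ks = left ++ hit ++ right
      where
        (a0, a1) = partition (<= p) rest          -- elements <= p / elements > p
        i        = length a0 + 1                  -- rank of the pivot in this sublist
        hit      = [(i, p) | i `elem` ks]         -- the pivot answers rank i, if requested
        k0       = [k     | k <- ks, k < i]       -- ranks that fall in the left part
        k1       = [k - i | k <- ks, k > i]       -- ranks in the right part, re-based
        left     = quickselectMulti a0 k0
        right    = [(k + i, v) | (k, v) <- quickselectMulti a1 k1]

For example, quickselectMulti [9,1,8,2,7,3] [1,3,5] evaluates to [(1,1),(3,3),(5,8)]: the elements of rank 1, 3 and 5.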
Can someone explain to me in simple English how merge sort is O(n log n), or give an easy way to see it?
Merge sort uses the divide-and-conquer approach to solve the sorting problem. First, it divides the input in half using recursion. After dividing, it sorts the halves and merges them into one sorted output.
This means it is better to sort each half of the problem first and then do a simple merge subroutine. So it is important to know the complexity of the merge subroutine and how many times it will be called in the recursion.
The pseudo-code for the merge subroutine is really simple.
# C = output [length = N]
# A = 1st sorted half [length = N/2]
# B = 2nd sorted half [length = N/2]
# (bounds checks for i or j running past the end of A or B are omitted for simplicity)
i = j = 1
for k = 1 to N
    if A[i] < B[j]
        C[k] = A[i]
        i++
    else
        C[k] = B[j]
        j++
It is easy to see that in every loop iteration you have 4 operations: k++, i++ or j++, the if comparison, and the assignment C[k] = A[i] or C[k] = B[j]. So you have at most 4N + 2 operations, giving O(N) complexity. For the sake of the proof, 4N + 2 will be treated as 6N, since 4N + 2 <= 6N already holds for N = 1 (and for every larger N).
So assume you have an input with N elements, and assume N is a power of 2. At every level you have twice as many subproblems, each with an input of half the elements of the previous level. This means that at level j = 0, 1, 2, ..., lg N there are 2^j subproblems, each with an input of length N / 2^j. The number of operations at each level j is at most
2^j * 6(N / 2^j) = 6N
Observe that no matter which level you look at, you always have at most 6N operations.
Since there are lg N + 1 levels, the complexity will be
O(6N * (lg N + 1)) = O(6N*lg N + 6N) = O(N lg N)
References:
Coursera course Algorithms: Design and Analysis, Part 1
On a "traditional" merge sort, each pass through the data doubles the size of the sorted subsections. After the first pass, the file will be sorted into sections of length two. After the second pass, length four. Then eight, sixteen, etc. up to the size of the file.
It's necessary to keep doubling the size of the sorted sections until there's one section comprising the whole file. It will take lg(N) doublings of the section size to reach the file size, and each pass of the data will take time proportional to the number of records.
After splitting the array to the stage where you have single elements (call them sublists), at each stage we merge each sublist with its adjacent sublist.
At Stage-1 each element is compared with its adjacent one, so n/2 comparisons.
At Stage-2, each sublist is merged with its adjacent sublist. Since each sublist is sorted, merging two sublists takes at most (total number of elements in the two sublists - 1) comparisons, i.e. at most 3 comparisons per merge at Stage-2, 7 at Stage-3 and so on, as the sublists keep doubling in length. This means the maximum number of comparisons at each stage is bounded by the total number of elements, n.
As you've observed, the total number of stages is log(n) to base 2.
So the total complexity is (max number of comparisons at each stage) * (number of stages) = O(n * log(n)) ==> O(n log(n))
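A concrete count for n = 8, using the fact that merging two sorted lists with L elements in total needs at most L - 1 comparisons:

    Stage-1: 4 merges of 1+1 elements: 4 * 1 = 4 comparisons
    Stage-2: 2 merges of 2+2 elements: 2 * 3 = 6 comparisons
    Stage-3: 1 merge  of 4+4 elements: 1 * 7 = 7 comparisons
    Total:   4 + 6 + 7 = 17  <=  8 * 3 = n * log2(n)

Every stage stays below n comparisons, and there are log2(n) stages, which is the O(n log n) bound.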
Algorithm merge-sort sorts a sequence S of size n in O(n log n)
time, assuming two elements of S can be compared in O(1) time.
This is because, whether it is the worst case or the average case, merge sort just divides the array into two halves at each stage, which gives the lg(n) component, and the other N component comes from the comparisons made at each stage. Combining the two gives roughly O(n lg n). No matter whether it is the average case or the worst case, the lg(n) factor is always present; the remaining N factor depends on the number of comparisons made, and in the worst case about N comparisons happen per stage for an input of size N. So it comes out to O(n lg n).
Many of the other answers are great, but I didn't see any mention of height and depth related to the "merge-sort tree" examples. Here is another way of approaching the question, with a lot of focus on the tree.
Just a recap: as other answers have pointed out we know that the work of merging two sorted slices of the sequence runs in linear time (the merge helper function that we call from the main sorting function).
Now looking at this tree, where we can think of each node (other than the root) as a recursive call to the sorting function, let's try to assess how much time we spend on each node. Since the slicing of the sequence and the merging (both together) take linear time, the running time of any node is linear with respect to the length of the sequence at that node.
Here's where tree depth comes in. If n is the total size of the original sequence, the size of the sequence at any node is n/2^i, where i is the depth of that node. Putting this together with the linear amount of work for each slice, we have a running time of O(n/2^i) for every node in the tree. Now we just have to sum that up over all the nodes. One way to do this is to recognize that there are 2^i nodes at each level of depth in the tree. So for any level, we have O(2^i * n/2^i), which is O(n) because the 2^i factors cancel out! If each depth contributes O(n), we just have to multiply that by the height of this binary tree, which is log n. Answer: O(n log n)
reference: Data Structures and Algorithms in Python
The recursion tree will have depth log(N), and at each level of that tree you will do a combined N work merging pairs of sorted arrays.
Merging sorted arrays
To merge two sorted arrays A = [1,5] and B = [3,4], you simply iterate over both, starting at the beginning, picking the lower of the two current elements and incrementing the pointer for that array. You're done when both pointers reach the end of their respective arrays.
[1,5] [3,4] --> []
^ ^
[1,5] [3,4] --> [1]
^ ^
[1,5] [3,4] --> [1,3]
^ ^
[1,5] [3,4] --> [1,3,4]
^ x
[1,5] [3,4] --> [1,3,4,5]
x x
Runtime = O(A + B)
Merge sort illustration
Your recursive call stack will look like this. The work starts at the bottom leaf nodes and bubbles up.
beginning with [1,5,3,4], N = 4, depth k = log(4) = 2
[1,5] [3,4] depth = k-1 (2^1 nodes) * (N/2^1 values to merge per node) == N
[1] [5] [3] [4] depth = k (2^2 nodes) * (N/2^2 values to merge per node) == N
Thus you do N work at each of k levels in the tree, where k = log(N)
N * k = N * log(N)
The MergeSort algorithm takes three steps:
The Divide step computes the mid position of the sub-array and takes constant time, O(1).
The Conquer step recursively sorts two sub-arrays of approximately n/2 elements each.
The Combine step merges a total of n elements at each pass, requiring at most n comparisons, so it takes O(n).
The algorithm requires approximately log n passes to sort an array of n elements, so the total time complexity is n log n.
1. Let's take an example of 8 elements {1,2,3,4,5,6,7,8}. You first divide it in half, n/2 = 4 ({1,2,3,4} {5,6,7,8}). These two sections take O(n/2) and O(n/2) time, so the first step takes O(n/2 + n/2) = O(n) time.
2. The next step divides into n/2^2, which means (({1,2} {3,4}) ({5,6} {7,8})). The parts take (O(n/4), O(n/4), O(n/4), O(n/4)) respectively, so this step takes O(n/4 + n/4 + n/4 + n/4) = O(n) time in total.
3. Next, similarly to the previous step, we divide further by 2, i.e. n/2^3: (({1} {2} {3} {4}) ({5} {6} {7} {8})), whose time is O(n/8 + n/8 + n/8 + n/8 + n/8 + n/8 + n/8 + n/8) = O(n).
So every step takes O(n) time. Let the number of steps be a; then the time taken by merge sort is O(a*n), and a must be log(n) because each step always divides by 2. So finally the time complexity of merge sort is O(n log(n)).