I was in the middle of reading the multithreaded merge sort section in Introduction to Algorithms, 3rd edition. However, I am confused about the number of processors required for the following merge-sort algorithm:
MERGE-SORT(A, p, r)
    if p < r
        q = (p + r) / 2
        spawn MERGE-SORT(A, p, q)
        MERGE-SORT(A, q + 1, r)
        sync
        MERGE(A, p, q, r)
Here MERGE is the standard merge algorithm. Now, what is the number of processors required for this algorithm? I am assuming it should be O(n), but the book claims it is O(log n). Why? Note that I am not multithreading the MERGE procedure. An explanation with an example would be really helpful. Thanks in advance.
The O(log n) value is not the number of CPUs "required" to run the algorithm, but the actual "parallelism" achieved by the algorithm. Because MERGE itself is not parallelized, you don't get the full benefit of O(n) processors even if you have them all available.
That is, the single-threaded, serial time complexity of merge sort is O(n log n). You can think of the 'n' as the cost of the merge and the 'log n' as the factor contributed by the recursive invocations of merge sort that get the array into a state where you can merge it. When you parallelize the recursion but the merge is still serial, you save the O(log n) factor, while the O(n) factor stays. Therefore the parallelism is on the order of O(log n) when you have enough processors available, but you can't get to O(n).
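To make that concrete with the work/span analysis used in the same chapter: the work is T1(n) = 2 T1(n/2) + Θ(n) = Θ(n log n), while the span (critical-path length) with a serial MERGE is T∞(n) = T∞(n/2) + Θ(n) = Θ(n), because even though the two halves sort in parallel, every level along the longest path still pays the full Θ(n) merge. The parallelism is the ratio T1/T∞ = Θ(n log n)/Θ(n) = Θ(log n), which is the figure the book quotes.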
In other words, even if you have O(n) CPUs available, most of them fall idle very soon, and fewer and fewer CPUs do the work once the large MERGEs start to take place.
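If it helps to see the spawn/sync structure in real code, here is a rough Python rendering using one thread per spawn. It is only a sketch that mirrors the pseudocode; because of Python's GIL (and the cost of one thread per call) it won't actually run faster, but it shows where the parallel recursion is and where the serial MERGE sits.

import threading

def merge(A, p, q, r):
    # Standard serial merge of the sorted runs A[p..q] and A[q+1..r].
    left, right = A[p:q + 1], A[q + 1:r + 1]
    i = j = 0
    for k in range(p, r + 1):
        if j >= len(right) or (i < len(left) and left[i] <= right[j]):
            A[k] = left[i]; i += 1
        else:
            A[k] = right[j]; j += 1

def merge_sort(A, p, r):
    if p < r:
        q = (p + r) // 2
        t = threading.Thread(target=merge_sort, args=(A, p, q))  # "spawn"
        t.start()
        merge_sort(A, q + 1, r)
        t.join()                                                  # "sync"
        merge(A, p, q, r)                                         # serial merge

A = [5, 2, 9, 1, 7, 3]
merge_sort(A, 0, len(A) - 1)
print(A)  # [1, 2, 3, 5, 7, 9]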
I've got this problem in my theoretical homework on algorithms and data structures related to search trees:
Given n numbers a1, ..., an, initially each in its own set. There are two types of queries:
unite two sets;
find the smallest element bigger than x in a specific set.
In these queries, a set is specified by the index of one of its elements in {ai}. The task is to process q queries in O(n + q log(n)) time.
I've tried using AVL trees to store sets' elements, but this approach results in O(n log(n)) or O(n) merge time, so the overall time complexity requirement is not satisfied. At the moment I have only these few ideas (but actually they don't quite help):
There are at most n unite queries.
If q > n, eventually, we'll need to build a search tree containing all n elements of {ai} to process the last (q - n) queries of type (2). Thus, it seems to be reasonable to first solve the problem with q ≤ n and then naturally extend the solution to q > n.
To create a set containing (k + 1) elements, at least k merge operations are needed (this is easy to prove by mathematical induction), so at each step of processing queries we only need to work with "not-so-big" sets. This might yield some tight asymptotic estimates.
Probably there is a way to somehow scan several first queries before processing them, understand which sets are involved in type (2) queries, and merge them only, ignoring other unite requests.
There is no memory limit, so this might be abused in some way.
Actually your solution of using self-balancing binary search trees to represent the sets was correct, and your ideas (1) - (3) are essential to achieve a tighter asymptotic bound.
Setting up the sets initially is O(n), and searching (finding the smallest element larger than some x) within each set is O(log n), so q searches has a cost of O(q log n).
Now let's consider the merge operations. To merge two binary search trees of size a and b, insert all elements of the smaller tree into the larger tree. This can be done in O(min(a,b)*log(max(a,b)+1)).
But what is the complexity of q successive merge operations, if we start with singleton sets? We can prove by induction that for q < n the cost is O(q log n). (As you have noted, after n - 1 useful merges everything is in one set, so any further merge operation is merging a set with itself, which is a no-op.)
So the cost of q merge operations is the cost of q-1 merges plus the cost of the last merge. By the inductive hypothesis, the cost of q-1 merges is O((q-1)log n).
The cost of the last merge is O(min(a,b)*log(max(a,b)+1)). But a and b are at most q, so for the last merge we get an upper bound of O(q * log(q + 1)). Since q < n, this is within O(q log n). So the total cost of q merge operations is O((q-1) log n + q log n) = O(q log n).
Therefore, the total complexity is bounded by O(n + q log n).
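For what it's worth, here is a minimal Python sketch of that small-to-large merging pattern. The Sets class and its method names are made up for this illustration, and it keeps each set as a plain sorted list with bisect, so the insertions are not really O(log n) as they would be with a balanced BST; it is only meant to show the shape of the argument.

import bisect

class Sets:
    def __init__(self, values):
        self.vals = [[v] for v in values]               # sorted values of each set
        self.idx = [[i] for i in range(len(values))]    # element indices in each set
        self.leader = list(range(len(values)))          # element index -> set id

    def unite(self, i, j):
        a, b = self.leader[i], self.leader[j]
        if a == b:
            return
        if len(self.vals[a]) < len(self.vals[b]):       # keep 'a' as the larger set
            a, b = b, a
        for v in self.vals[b]:                          # insert smaller into larger
            bisect.insort(self.vals[a], v)
        for k in self.idx[b]:                           # relabel only the smaller set
            self.leader[k] = a
        self.idx[a].extend(self.idx[b])
        self.vals[b], self.idx[b] = [], []

    def succ(self, i, x):
        # Smallest element > x in the set containing element i, or None.
        lst = self.vals[self.leader[i]]
        pos = bisect.bisect_right(lst, x)
        return lst[pos] if pos < len(lst) else None

Because the smaller set is always the one that moves, any given element is moved (and relabeled) at most O(log n) times over all of the unions.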
Was reading CLRS when I encountered this:
Why do we not ignore the constant k in the big-O expressions in parts (a), (b), and (c)?
In this case, you aren't considering the run time of a single algorithm, but of a family of algorithms parameterized by k. Considering k lets you compare, say, sorting a single list of n elements (k = n) with sorting n/2 two-element lists (k = 2). Somewhere in the middle there is a value of k, which you are asked to compute in part (c), for which Θ(nk + n lg(n/k)) and Θ(n lg n) are equal.
Going into more detail, insertion sort is O(n^2) because (roughly speaking) in the worst case, any single insertion could take O(n) time. However, if the sublists have a fixed length k, then you know the insertion step is O(1), independent of how many lists you are sorting. (That is, the bottleneck is no longer in the insertion step, but the merge phase.)
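For reference, this is where the shape of the Θ(nk + n lg(n/k)) bound comes from: sorting n/k sublists of length k with insertion sort costs (n/k) · Θ(k^2) = Θ(nk), and merging them back together takes lg(n/k) levels of Θ(n) work each, i.e. Θ(n lg(n/k)). Part (c) then asks for the largest k at which the nk term does not dominate n lg n; plugging in k = Θ(lg n) makes both terms Θ(n lg n).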
The parameter k is not a constant when you compare different algorithms with different values of k.
Can someone tell me which of the two algorithms, TriMergeSort and MergeSort, is better?
The time complexity of MergeSort is n log n with log base 2.
The time complexity of TriMergeSort is n log n with log base 3.
Since TriMergeSort uses base 3 and MergeSort uses base 2, I am assuming TriMergeSort is faster than MergeSort.
Please correct me if I am wrong.
While you are right that the number of levels in the recursive structure is log_2 n in the case of regular mergesort and log_3 n in the case of three-way mergesort, it's important to remember that the work done per level increases as the splitting factor increases. Specifically, in your merge step, you need to switch from a normal 2-way merge to a special 3-way merge. At each step in the merge, you need to determine which of the lists has the smallest unused element. In a two-way merge, you just compare the front elements of the two lists against one another. In a three-way merge, more comparisons are required because you have to find the lowest of three elements.
Generalizing this to a k-way mergesort, the number of layers will be log_k n, but the work per merge level will be higher than in the 2-way case. It's possible to do a k-way merge of n total elements in time O(n log k) by using binary heaps, so more work is required as k increases.
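As a concrete sketch of that heap-based k-way merge (the function name kway_merge is just for this example): each of the n output elements costs one pop and at most one push on a heap of size k, hence O(n log k) overall.

import heapq

def kway_merge(lists):
    # Seed the heap with the first element of every non-empty list.
    heap = [(lst[0], i, 0) for i, lst in enumerate(lists) if lst]
    heapq.heapify(heap)
    out = []
    while heap:
        val, i, j = heapq.heappop(heap)      # smallest front element of any list
        out.append(val)
        if j + 1 < len(lists[i]):            # refill from the same list
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out

print(kway_merge([[1, 4, 7], [2, 5, 8], [3, 6, 9]]))  # [1, 2, ..., 9]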
Interestingly, if we talk about the amount of work required overall, then we can see that we need to do O(n log k) work across log_k n levels. This gives us a total runtime of O(n log k · log_k n). Using the change-of-base formula for logarithms, which says that log_k n = log_2 n / log_2 k, we see that the runtime will be
O(n log k · log_k n)
= O(n log k (log n / log k))
= O(n log n)
In other words, there isn't an asymptotic difference between the algorithms when you choose different values of k. The drop in levels due to a higher splitting factor is offset by an increased amount of work per level.
To figure out which algorithm is best, the best option would be to run them all and see what happens. Due to caching effects and locality of reference, I suspect that the answer might at some level depend on the particular architecture you're using.
As far as Big-O complexity, it doesn't matter.
Regular merge sort is n * log_2(n) which is equivalent to n * (log(n) / log(2)). The log(2) is constant, so merge sort is simply n * log(n)
Tri-merge sort is n * log_3(n) which, using the same logic for regular merge sort, is simply n * log(n)
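As a concrete check, take n = 1,000,000: log_2(n) ≈ 19.9 while log_3(n) = log(n)/log(3) ≈ 12.6, so the 3-way version has roughly 37% fewer levels, but each level has to do correspondingly more comparison work per element, which is exactly the constant factor that big-O hides.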
Given that both reduce to O(n * log(n)), it's not really possible to say which is better.
An alternate way to demonstrate why you can't just assume tri-merge to be better:
Assume a 3-way merge is better than a 2-way merge.
In general, assume an (N+1)-way merge is better than an N-way merge.
If this were true, it would be best to use an N-way merge where N is the number of elements you're sorting. However, the merge step requires choosing the least element from N sources which requires O(N) time.
This means that the N-way merge sort runs in O(N^2) time, effectively making it selection sort.
I am asking this question to make sure I understand a concept of parallel computing.
Let's take a simple example: we have a set of n numbers. What's the best running time to search for an item in it if we have at least n/3 parallel computers?
I think this will still be O(n), but I'm not sure if I am right, since the constant part of the big-O expression can be dropped.
Thank you
It could be O(1) or O(ln n).
Give each of your n/3 computers n/(n/3) numbers; they each get essentially 3 values. It takes them individually constant time to search their constant-sized set and return a result ("0 --> not found", or k if found at the kth position in the array, assuming each machine is given K*(n/3) as its starting index into the array). So the value is found in time O(1).
The issue comes in reporting the answer. Something has to choose among the responses from the n/3 machines to pick a unique result. Typically this requires a "repeated" choice among subsets of machines, which you can do in O(n) time, but in parallel systems it is often done with a "reduction" operator (such as SUM or MAX or ...). Such reduction operators can be (and usually are) implemented as a reduction tree, which is logarithmic.
Some parallel hardware has very fast reduction support, but it is still logarithmic.
Weirdly enough, if you have n/1000 CPUs, you'll still get O(1) search times (with a big constant), and O(ln n) reduction times with a very small constant. It'll "look" like constant time if you ignore the O notation.
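As a rough illustration of the chunk-then-reduce structure (the function names below are made up, and Python processes only stand in for "real" parallel hardware): each worker scans about n/p elements, and the final max() is a serial stand-in for the O(log p) tree reduction described above.

from concurrent.futures import ProcessPoolExecutor

def search_chunk(args):
    # Each worker scans its own chunk of about n/p elements: O(n/p).
    chunk, offset, target = args
    for k, v in enumerate(chunk):
        if v == target:
            return offset + k
    return -1

def parallel_search(data, target, p):
    step = (len(data) + p - 1) // p
    chunks = [(data[i:i + step], i, target) for i in range(0, len(data), step)]
    with ProcessPoolExecutor(max_workers=p) as pool:
        results = pool.map(search_chunk, chunks)
    return max(results)  # -1 if not found, otherwise an index of a match

if __name__ == "__main__":          # guard needed for process-based pools
    data = list(range(1000))
    print(parallel_search(data, 437, p=4))   # 437
    print(parallel_search(data, -5, p=4))    # -1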
This strictly depends on the underlying parallel model. Indeed, the final reduction step in which every processor defines a flag Found x and all processors perform a parallel reduction may have a different complexity. See in particular the COMMON CRCW PRAM case.
For a message-passing setting:
T(n) = O(n/p + log p) for p < n
T(n) = O(log n) for p = O(n)
For a shared-memory setting:
a) EREW PRAM
T(n) = O(n/p + log p) for p < n
T(n) = O(log n) for p = O(n)
b) CREW PRAM
concurrent reads do not help: the final reduction step still takes O(log p) time anyway
T(n) = O(n/p + log p) for p < n
T(n) = O(log n) for p = O(n)
c) COMMON CRCW PRAM
concurrent writes really help: the final reduction step takes now O(1) time, those processors with the flag Found x set can write simultaneously the same value in a shared location
T(n) = O(n/p) for p < n
T(n) = O(1) for p = O(n)
I know there are quite a bunch of questions about big O notation, I have already checked:
Plain english explanation of Big O
Big O, how do you calculate/approximate it?
Big O Notation Homework--Code Fragment Algorithm Analysis?
to name a few.
I know by "intuition" how to calculate it for n, n^2, n! and so on; however, I am completely lost on how to calculate it for algorithms that are log n, n log n, n log log n and so on.
What I mean is, I know that Quick Sort is n log n (on average)... but why? Same thing for merge sort, comb sort, etc.
Could anybody explain to me, in a not-too-mathy way, how you calculate this?
The main reason is that I'm about to have a big interview and I'm pretty sure they'll ask for this kind of stuff. I have researched for a few days now, and everybody seems to have either an explanation of why bubble sort is n^2 or an explanation that is unreadable (for me), like the one on Wikipedia.
The logarithm is the inverse operation of exponentiation. An example of exponentiation is when you double the number of items at each step. Thus, a logarithmic algorithm often halves the number of items at each step. For example, binary search falls into this category.
Many algorithms require a logarithmic number of big steps, but each big step requires O(n) units of work. Mergesort falls into this category.
Usually you can identify these kinds of problems by visualizing them as a balanced binary tree. For example, here's merge sort:
6 2 0 4 1 3 7 5
2 6 0 4 1 3 5 7
0 2 4 6 1 3 5 7
0 1 2 3 4 5 6 7
At the top is the input, as the leaves of the tree. The algorithm creates each new row by merging pairs of sorted runs from the row above it. We know the height of a balanced binary tree is O(log n), so there are O(log n) big steps. However, creating each new row takes O(n) work. O(log n) big steps of O(n) work each means that mergesort is O(n log n) overall.
Generally, O(log n) algorithms look like the function below. They get to discard half of the data at each step.
def function(data, n):
    if n <= constant:
        return do_simple_case(data, n)
    if some_condition():
        function(data[:n // 2], n // 2)          # Recurse on first half of data
    else:
        function(data[n // 2:], n - n // 2)      # Recurse on second half of data
While O(n log n) algorithms look like the function below. They also split the data in half, but they need to consider both halves.
def function(data, n):
    if n <= constant:
        return do_simple_case(data, n)
    part1 = function(data[:n // 2], n // 2)      # Recurse on first half of data
    part2 = function(data[n // 2:], n - n // 2)  # Recurse on second half of data
    return combine(part1, part2)
Where do_simple_case() takes O(1) time and combine() takes no more than O(n) time.
The algorithms don't need to split the data exactly in half. They could split it into one-third and two-thirds, and that would be fine. For average-case performance, splitting it in half on average is sufficient (like QuickSort). As long as the recursion is done on pieces of (n/something) and (n - n/something), it's okay. If it's breaking it into (k) and (n-k) then the height of the tree will be O(n) and not O(log n).
You can usually claim log n for algorithms where it halves the space/time each time it runs. A good example of this is any binary algorithm (e.g., binary search). You pick either left or right, which then axes the space you're searching in half. The pattern of repeatedly doing half is log n.
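A minimal iterative version, just to make the halving visible; each pass through the loop throws away half of the remaining range, so it runs O(log n) iterations:

def binary_search(sorted_list, target):
    lo, hi = 0, len(sorted_list)
    while lo < hi:
        mid = (lo + hi) // 2          # probe the middle of the remaining range
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            lo = mid + 1              # keep only the right half
        else:
            hi = mid                  # keep only the left half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1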
For some algorithms, getting a tight bound for the running time through intuition is close to impossible (I don't think I'll ever be able to intuit a O(n log log n) running time, for instance, and I doubt anyone will ever expect you to). If you can get your hands on the CLRS Introduction to Algorithms text, you'll find a pretty thorough treatment of asymptotic notation which is appropriately rigorous without being completely opaque.
If the algorithm is recursive, one simple way to derive a bound is to write out a recurrence and then set out to solve it, either iteratively or using the Master Theorem or some other way. For instance, if you're not looking to be super rigorous about it, the easiest way to get QuickSort's running time is through the Master Theorem -- QuickSort entails partitioning the array into two relatively equal subarrays (it should be fairly intuitive to see that this is O(n)), and then calling QuickSort recursively on those two subarrays. Then if we let T(n) denote the running time, we have T(n) = 2T(n/2) + O(n), which by the Master Method is O(n log n).
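Spelling that last step out: in T(n) = 2T(n/2) + O(n) we have a = 2 and b = 2, so n^(log_b a) = n^1, which matches the Θ(n) cost of partitioning; that is the "balanced" case of the Master Theorem, and it gives T(n) = Θ(n log n).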
Check out the "phone book" example given here: What is a plain English explanation of "Big O" notation?
Remember that Big-O is all about scale: how many more operations will this algorithm require as the data set grows?
O(log n) generally means you can cut the dataset in half with each iteration (e.g. binary search)
O(n log n) means you're performing an O(log n) operation for each item in your dataset
I'm pretty sure 'O(n log log n)' doesn't make any sense. Or if it does, it simplifies down to O(n log n).
I'll attempt to do an intuitive analysis of why Mergesort is n log n and if you can give me an example of an n log log n algorithm, I can work through it as well.
Mergesort is a sorting algorithm that works by splitting a list of elements repeatedly until only single-element lists exist, and then merging these lists back together. The primary operation in each of these merges is comparison, and each merge requires at most n comparisons, where n is the combined length of the two lists. From this you can derive the recurrence and easily solve it, but we'll avoid that method here.
Instead, consider how Mergesort is going to behave: we're going to take a list and split it, then take those halves and split them again, until we have n partitions of length 1. I hope it's easy to see that this recursion only goes log(n) deep before we have split the list into our n partitions.
Each of these n partitions will then need to be merged; once those are merged, the next level will need to be merged, until we have a list of length n again. Refer to Wikipedia's graphic for a simple example of this process: http://en.wikipedia.org/wiki/File:Merge_sort_algorithm_diagram.svg
Now consider the amount of time this process will take: we're going to have log(n) levels, and at each level we will have to merge all of the lists. As it turns out, each level takes n time to merge, because we'll be merging a total of n elements each time. Then you can fairly easily see that it takes n log(n) time to sort an array with mergesort, if you take the comparison operation to be the most important operation.
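To make the "n work per level" point concrete, this is roughly what the linear merge looks like; each comparison moves exactly one element into the output, so merging lists with n total elements costs O(n):

def merge(left, right):
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:       # one comparison, one element output
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])              # one side is exhausted; copy the rest
    out.extend(right[j:])
    return out

print(merge([0, 2, 4, 6], [1, 3, 5, 7]))  # [0, 1, 2, 3, 4, 5, 6, 7]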
If anything is unclear or I skipped somewhere please let me know and I can try to be more verbose.
Edit: second explanation:
Let me think if I can explain this better.
The problem is broken into a bunch of smaller lists and then the smaller lists are sorted and merged until you return to the original list which is now sorted.
When you break up the problem, you have several different levels of size: first you'll have two lists of size n/2; at the next level, four lists of size n/4; at the next level, eight lists of size n/8; and this continues until n/2^k is equal to 1 (each subdivision is the length divided by a power of 2; not all lengths will divide evenly, so it won't be quite this pretty). This is repeated division by two, and it can continue at most log_2(n) times, because 2^(log_2(n)) = n, so any further division by 2 would yield a list of size zero.
Now the important thing to note is that at every level we have n elements so for each level the merge will take n time, because merge is a linear operation. If there are log(n) levels of the recursion then we will perform this linear operation log(n) times, therefore our running time will be n log(n).
Sorry if that isn't helpful either.
When applying a divide-and-conquer algorithm, where you partition the problem into sub-problems until they are so simple that they are trivial, the size of each sub-problem is n/2 or thereabouts if the partitioning goes well. This is often the origin of the log(n) that crops up in big-O complexity: O(log(n)) is the depth of the recursion when the partitioning goes well.