Merge a sorted array with an unsorted array to give a final sorted array. Can this be done better than the obvious way below?
final_sorted_array = merge(sort(unsorted_array), sorted_array)
I am assuming a merge step similar to the one found in merge sort,
and we know any comparison-based sort is limited by O(n log n) in general. I am trying to understand to what extent knowing something about the data (here, that part of it is already ordered) can help in general.
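For concreteness, a minimal Python sketch of that obvious approach (the merge is the standard two-pointer merge from merge sort; the arrays are just example data):

def merge(a, b):
    # standard two-pointer merge of two already sorted lists: O(len(a) + len(b))
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    out.extend(a[i:])
    out.extend(b[j:])
    return out

unsorted_array = [5, 1, 4]          # example data
sorted_array = [2, 3, 6]
final_sorted_array = merge(sorted(unsorted_array), sorted_array)
print(final_sorted_array)           # [1, 2, 3, 4, 5, 6]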
Suppose your first array is called A1, with n1 elements, and your second one is called A2, with n2 elements:
Extend A1 and append A2 to it at a cost of O(n2). Then apply quicksort to get the whole thing ordered at a further cost of O((n1+n2)*log(n1+n2)). You mentioned you didn't want the obvious way; however, you wanted to get the job done the best way. This is likely the best way, because it lets you keep using regular arrays, which are usually the fastest data structures to work with in general (of course it depends on the specific task you are working on).
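A minimal Python sketch of this first approach (the built-in sort here is Timsort rather than quicksort, but the asymptotic cost is the same):

A1 = [1, 4, 9]            # example data: the sorted array
A2 = [7, 2, 5]            # example data: the unsorted array

combined = A1 + A2                     # append A2 to A1: O(n2)
final_sorted_array = sorted(combined)  # O((n1 + n2) * log(n1 + n2))
print(final_sorted_array)              # [1, 2, 4, 5, 7, 9]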
However, a different possibility is to make A1 a linked list instead of a regular array; for the purposes of this analysis we will not count the cost of transforming A1 from a regular array into a linked list. The approach would then be to insert each element of A2 into A1 in order. The obvious way to find each insertion point would be to walk through A1 and decide where to insert, at a cost of O(n1) per insertion. A smarter method would be a binary search over A1 to decide where each element of A2 should be inserted, at a cost of O(log(n1)) per insertion. There are n2 insertions, so the total cost is O(n2*log(n1)). With this approach you don't need to move any of A1's elements, since we are assuming a linked list; all you need to do is examine them. That's why this asymptotic bound looks better: this data structure gives you the power to only look at A1's original elements without actually moving them.
To decide which approach you should use, I recommend that you analyse the algorithm which comes after the arrays merge, choosing the option that best fits your needs.
I've been reading up on algorithms from the book Algorithms by Robert Sedgewick and I've been stuck on an exercise problem for a while. Here is the question:
Given 3 lists of N names each, find an algorithm to determine if there is any name common to all three lists. The algorithm must have O(N log N) complexity. You're only allowed to use sorting algorithms, and the only data structures you can use are stacks and queues.
I figured I could solve this problem using a HashMap, but the question restricts us from doing so. Even then, that still wouldn't have a complexity of O(N log N).
If you sort each of the lists, then you can trivially check whether all three lists share any one name in O(n) time: pick the first name of list A and compare it to the first name of list B; if the list B element is less than the list A element, pop it from list B and repeat until the head of B is >= the head of A. If you find a match, repeat the process on C. If you find a match in C as well, return true; otherwise move on to the next element of A.
Now you have to sort all of the lists in O(n log n) time, which you can do with your favorite sorting algorithm, though you would have to be a little creative using just stacks and queues. I would probably recommend merge sort.
The pseudocode below is a little rough because I am consuming the lists as I iterate over them.
Pseudocode (assume listA, listB and listC are sorted queues where the smallest name is at the front of the queue):
eltB = listB.pop()
eltC = listC.pop()
for eltA in listA:
    while eltB < eltA and listB is not empty:
        eltB = listB.pop()
    if eltB == eltA:
        while eltC < eltA and listC is not empty:
            eltC = listC.pop()
        if eltC == eltA:
            return true
return false
Steps:
1. Sort the three lists using an O(N lg N) sorting algorithm.
2. Pop one item from each list.
3. If any of the lists from which you tried to pop is empty, you are done: no common element exists.
4. Else, compare the three elements.
5. If the elements are equal, you are done: you have found the common element.
6. Else, keep the maximum of the three elements (constant time) and replenish from the lists whose elements were discarded.
7. Go to step 3.
Step 1 takes O(N lg N) and the rest of the steps take O(N), so the overall complexity is O(N lg N).
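A rough Python sketch of these steps, using deques in place of the queues (has_common_name and the example name lists are just for illustration):

from collections import deque

def has_common_name(list_a, list_b, list_c):
    # step 1: sort each list, O(N log N)
    qa, qb, qc = deque(sorted(list_a)), deque(sorted(list_b)), deque(sorted(list_c))
    # steps 2-7: compare the fronts, discarding anything smaller than the
    # current maximum; O(N) pops in total
    while qa and qb and qc:
        a, b, c = qa[0], qb[0], qc[0]
        if a == b == c:
            return True                 # found a common name
        biggest = max(a, b, c)
        if a < biggest: qa.popleft()
        if b < biggest: qb.popleft()
        if c < biggest: qc.popleft()
    return False                        # some list ran out: no common name

print(has_common_name(["ann", "bob", "eve"], ["bob", "cal"], ["bob", "dan"]))  # True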
Using JavaScript notation:
A = {color:'red',size:8,type:'circle'};
L = [{color:'gray',size:15,type:'square'},
{color:'pink',size:4,type:'triangle'},
{color:'red',size:8,type:'circle'},
{color:'red',size:12,type:'circle'},
{color:'blue',size:10,type:'rectangle'}];
The answer for this case would be 2, because L[2] is identical to A. You could find the answer in O(n) by testing each possibility. What representation/algorithm would allow finding that answer faster?
I would just create a HashMap and put all the objects into it. We would also need to define a hash function that is a function of the data in the object (something similar to overriding Object.hashCode() in Java).
Suppose the given array L is [B, C, D], where B, C and D are objects. Then the HashMap would be {B=>1, C=>2, D=>3}. Now suppose D is a copy of A: we would just look up A in this map and get the answer. Also, as suggested by Eric P in a comment, we would need to keep the HashMap updated with respect to any change in array L. This can also be done in O(1) per operation on L.
The cost of looking up an object in the HashMap is O(1), so we can achieve O(1) lookup complexity.
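A minimal Python sketch of this idea, reusing the example data from the question and deriving the hash key from the object's contents (key and index are just illustrative names):

A = {'color': 'red', 'size': 8, 'type': 'circle'}
L = [{'color': 'gray', 'size': 15, 'type': 'square'},
     {'color': 'pink', 'size': 4,  'type': 'triangle'},
     {'color': 'red',  'size': 8,  'type': 'circle'},
     {'color': 'red',  'size': 12, 'type': 'circle'},
     {'color': 'blue', 'size': 10, 'type': 'rectangle'}]

def key(obj):
    # hashable key built from the object's data, akin to overriding hashCode()
    return tuple(sorted(obj.items()))

index = {key(obj): i for i, obj in enumerate(L)}  # built once, O(n)
print(index.get(key(A)))                          # 2, looked up in O(1)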
I think it's not possible to do it faster than O(n) with your preconditions.
It's possible to find an element in O(log n) using binary search, but:
A) you need elements with a single variable to compare, and
B) the list must be sorted by that variable.
Maybe with some techniques (ordering, skip lists, etc.) you can find the answer in fewer than N iterations, but the worst case is still O(n).
Since the goal is to find all objects which are clones of A, you must test every object at least once to determine whether it is a clone of A, so the minimum number of tests is N. Passing through the list once and testing each object performs exactly N tests, so this method is optimal.
First, I assume that you are talking about an array, not a list. The word 'list' is reserved for a specific type of data structure that has O(n) indexing complexity, so the mean time for any search in it is at least linear.
For an unsorted array, the only algorithm is a full scan in linear time. However, if the array is sorted, you can use binary or interpolation search to get a better time.
The problem with sorted arrays is that they have linear insertion time. No good. So if you will be updating your set a lot and both update and search times are important, you should look for an optimized container, which in C++ and Haskell is called a set (the std::set template in the <set> header and the Data.Set module in the containers package, respectively). I don't know if there is one in JS.
The setup: I have two arrays which are not sorted and are not of the same length. I want to see if one of the arrays is a subset of the other. Each array is a set in the sense that there are no duplicates.
Right now I am doing this subset check sequentially in a brute-force manner, so it isn't very fast. I have been having trouble finding any algorithms online that (a) go faster and (b) can run in parallel. Say the maximum size of either array is N; then right now it scales something like N^2. I was thinking that maybe if I sorted them and did something clever I could bring it down to something like N log(N), but I'm not sure.
The main thing is I have no idea how to parallelize this operation at all. I could have each processor look at an equal share of the first array and compare those entries to all of the second array, but I'd still be doing N^2 work. I guess it would be better, though, since it would run in parallel.
Any ideas on how to improve the work and make it parallel at the same time?
Thanks
Suppose you are trying to decide if A is a subset of B, and let len(A) = m and len(B) = n.
If m is a lot smaller than n, then it makes sense to me to sort A and then iterate through B, doing a binary search on A for each element to see whether there is a match or not. You can partition B into k parts and have a separate thread iterate through each part doing the binary searches.
To count the matches you can do two things. Either you could have a num_matched variable incremented every time you find a match (you would need to guard this variable with a mutex, though, which might hinder your program's concurrency) and then check whether num_matched == m at the end of the program. Or you could have another array or bit vector of size m, and have a thread update the k'th bit if it found a match for the k'th element of A; then at the end, you check that this array is all 1's. (On second thought, a bit vector might not work without a mutex, because threads might overwrite each other's updates when they load the integer containing the bit relevant to them.) The array approach, at least, would not need any mutex that could hinder concurrency.
Sorting costs you O(m log m) and then, if you only had a single thread doing the matching, that would cost you O(n log m). So if n is a lot bigger than m, this is effectively O(n log m). Your worst case still remains O(N log N), but I think concurrency would really help you make this fast.
Summary: Just sort the smaller array.
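A sequential Python sketch of this (is_subset and matched are illustrative names; the loop over B is the part that could be split across k threads, with each thread writing into its own slice of matched):

import bisect

def is_subset(A, B):
    # is every element of A also in B?  Sort the smaller array A once,
    # then binary-search it for each element of B and mark the hits.
    A_sorted = sorted(A)                       # O(m log m)
    matched = [False] * len(A_sorted)
    for x in B:                                # O(n log m) in total
        i = bisect.bisect_left(A_sorted, x)
        if i < len(A_sorted) and A_sorted[i] == x:
            matched[i] = True
    return all(matched)

print(is_subset([2, 7], [7, 1, 2, 9]))   # True
print(is_subset([2, 8], [7, 1, 2, 9]))   # False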
Alternatively if you are willing to consider converting A into a HashSet (or any equivalent Set data structure that uses some sort of hashing + probing/chaining to give O(1) lookups), then you can do a single membership check in just O(1) (in amortized time), so then you can do this in O(n) + the cost of converting A into a Set.
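And a short sketch of the hashing variant (again with illustrative names); since neither array has duplicates, A is a subset of B exactly when every element of A is hit by some element of B:

def is_subset_hashed(A, B):
    a_set = set(A)                              # O(m) to build
    hits = sum(1 for x in B if x in a_set)      # O(n) scan, O(1) per lookup
    return hits == len(a_set)                   # every element of A was found

print(is_subset_hashed([2, 7], [7, 1, 2, 9]))   # True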
I need to calculate a 1D histogram that must be dynamically maintained and looked up frequently. One idea I had involves keeping an ordered array with the data (because that way I can determine percentiles in O(1), which suffices for quickly building a histogram with non-uniform bins containing exactly the same number of points each).
So, is there a way that is less than O(N) to insert a number into an ordered array while keeping it ordered?
I guess the answer is very well known but I don't know a lot about algorithms (physicists doing numerical calculations rarely do).
In the general case, you could use a more flexible tree-like data structure. This allows access, insertion and deletion in O(log n) time and is also relatively easy to get ready-made from a library (e.g. C++'s STL map).
(Or a hash map...)
An ordered array with binary search does the same things as a tree, but is more rigid. It will probably be faster for access and lighter on memory, but you pay when you have to insert or delete things in the middle (O(n) cost).
Note, however, that an ordered array might be enough for you: if your data points are often the same, you can maintain a list of {key, count} pairs, ordered by key, so you can quickly add another instance of an existing item (but still have to do more work to add a new item), as sketched below.
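A small sketch of that {key, count} idea, assuming Python lists and the bisect module (add_point is a hypothetical helper):

import bisect

keys = []    # sorted distinct data values
counts = []  # counts[i] is how many times keys[i] has been seen

def add_point(x):
    i = bisect.bisect_left(keys, x)
    if i < len(keys) and keys[i] == x:
        counts[i] += 1        # existing value: O(log n) search, O(1) update
    else:
        keys.insert(i, x)     # new value: the shift is still O(n)
        counts.insert(i, 1)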
You could use binary search to find the insertion point. That search is O(log(n)).
If you would like to insert a number x, take the number in the middle of your array and compare it to x; if x is smaller, take the number in the middle of the first half, otherwise the number in the middle of the second half, and so on.
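In Python this is exactly what the bisect module does (a minimal sketch; note that the search is O(log n), though shifting the array elements to make room is still linear):

import bisect

data = [1, 3, 4, 8, 9]        # already ordered
bisect.insort(data, 5)        # binary search for the position, then insert
print(data)                   # [1, 3, 4, 5, 8, 9]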
You can perform insertions in O(1) time if you rearrange your array as a bunch of linked-lists hanging off of each element:
keys = Array([0][1][2][3][4]......)
a c b e f . .
d g i . . .
h j .
|__|__|__|__|__|__|__/linked lists
There's also the strategy of keeping two data structures at the same time, if your update workload supports it without increasing the time complexity of common operations.
So, is there a way that is less than O(N) to insert a number into an
ordered array while keeping it ordered?
Yes, you can implement a binary search tree on top of an array and do the insertion in O(log n) time. How?
Keep index 0 empty and put the root at index 1. If a node is the left child of its parent, its index is 2 * the parent's index; if it is the right child, its index is 2 * the parent's index + 1.
Insertion will thus be O(log n). Unfortunately, you might notice that a binary search tree built from an ordered stream can degenerate to a linear chain if you don't balance it, i.e. O(n), which defeats the purpose. To keep the height balanced you may have to implement a red-black tree. That is quite complicated, but insertion can then be done with arrays in O(log n). Note that the array elements will no longer be plain ints; they'll have to be objects with a colour attribute.
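For illustration, a minimal (unbalanced) sketch of that indexing scheme, using a Python dict as a sparse array (bst_insert is a hypothetical helper):

tree = {}   # index 1 is the root; index 0 is left unused

def bst_insert(value):
    i = 1
    while i in tree:
        # left child lives at 2*i, right child at 2*i + 1
        i = 2 * i if value < tree[i] else 2 * i + 1
    tree[i] = value   # O(height): O(log n) only if the tree stays balanced

for v in [5, 2, 8, 1, 9]:
    bst_insert(v)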
I wouldn't recommend it.
Any particular reason this demands an array? You need a data structure which keeps data ordered and allows you to insert quickly. Why not a binary search tree, or better still, a red-black tree? In C++, you could use std::set from the Standard Template Library, which is implemented as a red-black tree. It gives you O(log(n)) insertion time and the ability to iterate over it like an array.
I have a set of A's and a set of B's, each with an associated numerical priority, where each A may match some or all B's and vice versa, and my main loop basically consists of:
Take the best A and B in priority order, and do stuff with A and B.
The most obvious way to do this is with a single priority queue of (A,B) pairs, but if there are 100,000 A's and 100,000 B's then the set of O(N^2) pairs won't fit in memory (and disk is too slow).
Another possibility is for each A, loop through every B. However this means that global priority ordering is by A only, and I really need to take priority of both components into account.
(The application is theorem proving, where the above options are called the pair algorithm and the given clause algorithm respectively; the shortcomings of each are known, but I haven't found any reference to a good solution.)
Some kind of two layer priority queue would seem indicated, but it's not clear how to do this without using either O(N^2) memory or O(N^2) time in the worst case.
Is there a known method of doing this?
Clarification: each A must be processed with all corresponding B's, not just one.
Maybe there's something I'm not understanding, but why not keep the A's and B's in separate heaps, call get_max on each heap, do your work, remove each max from its heap, and continue?
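A sketch of that in Python with heapq (heapq is a min-heap, hence the negated priorities; priority_a, priority_b and process are placeholders for whatever the application defines):

import heapq

def run(As, Bs, priority_a, priority_b, process):
    # negate the priorities so the largest comes out of the min-heap first;
    # the index is just a tie-breaker so the items themselves never get compared
    heap_a = [(-priority_a(a), i, a) for i, a in enumerate(As)]
    heap_b = [(-priority_b(b), i, b) for i, b in enumerate(Bs)]
    heapq.heapify(heap_a)
    heapq.heapify(heap_b)
    while heap_a and heap_b:
        _, _, best_a = heapq.heappop(heap_a)
        _, _, best_b = heapq.heappop(heap_b)
        process(best_a, best_b)                  # "do stuff with A and B"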
You could handle the best pairs first, and if nothing good comes up mop up the rest with the given clause algorithm for completeness' sake. This may lead to some double work, but I'd bet that this is insignificant.
Have you considered ordered paramodulation or superposition?
It appears that the items in A have an individual priority, the items in B have an individual priority, and the (A,B) pairs have a combined priority. Only the combined priority matters, but hopefully we can use the individual priorities along the way. There is also a matching relation between items in A and items in B that is independent of priority.
I assume that, for all a in A and b1, b2 in B such that Match(a,b1) and Match(a,b2), Priority(b1) >= Priority(b2) implies CombinedPriority(a,b1) >= CombinedPriority(a,b2).
Now, begin by sorting B in decreasing order of priority. Let B(j) denote the jth element in this sorted order, and let A(i) denote the ith element of A (which may or may not be in sorted order).
Let nextb(i,j) be a function that finds the smallest j' >= j such that Match(A(i),B(j')). If no such j' exists, the function returns null (or some other suitable error value). Searching for j' may just involve looping upward from j, or we may be able to do something faster if we know more about the structure of the Match relation.
Create a priority queue Q containing (i,nextb(i,0)) for all indices i in A such that nextb(i,0) != null. The pairs (i,j) in Q are ordered by CombinedPriority(A(i),B(j)).
Now just loop until Q is empty. Pull out the highest-priority pair (i,j) and process (A(i),B(j)) appropriately. Then re-insert (i,nextb(i,j+1)) into Q (unless nextb(i,j+1) is null).
Altogether, this takes O(N^2 log N) time in the worst case where all pairs match. In general, it takes O(N^2 + M log N), where M is the number of matches. The N^2 component can be reduced if there is a faster way of calculating nextb(i,j) than just looping upward, but that depends on knowledge of the Match relation.
(In the above analysis, I assumed both A and B were of size N. The formulas could easily be modified if they are different sizes.)
You seemed to want something better than O(N^2) time in the worst case, but if you need to process every match, then you have a lower bound of M, which can be N^2 itself. I don't think you're going to be able to do better than O(N^2 log N) time unless there is some special structure to the combined priority that lets you use a better-than-log-N priority queue.
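A Python sketch of the scheme described above (match, priority, combined_priority and process stand in for the application-specific functions, and nextb is the helper defined earlier in this answer):

import heapq

def process_best_pairs(A, B, priority, match, combined_priority, process):
    B = sorted(B, key=priority, reverse=True)       # decreasing priority

    def nextb(i, j):
        # smallest j' >= j with Match(A[i], B[j']), or None if there is none
        while j < len(B) and not match(A[i], B[j]):
            j += 1
        return j if j < len(B) else None

    q = []                                          # min-heap, so negate
    for i in range(len(A)):
        j = nextb(i, 0)
        if j is not None:
            heapq.heappush(q, (-combined_priority(A[i], B[j]), i, j))

    while q:
        _, i, j = heapq.heappop(q)
        process(A[i], B[j])                         # handle the current best pair
        nj = nextb(i, j + 1)
        if nj is not None:
            heapq.heappush(q, (-combined_priority(A[i], B[nj]), i, nj))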
So you have a set of A's and a set of B's, and you need to pick an (A, B) pair from these sets such that some f(a, b) is higher than for any other (A, B) pair.
This means you can either store all possible (A, B) pairs and order them, and just pick the highest each time through the loop (O(1) per iteration but O(N*M) memory).
Or you could loop through all possible pairs and keep track of the current maximum and use that (O(N*M) per iteration, but only O(N+M) memory).
If I am understanding you correctly this is what you are asking.
I think it very much depends on f() to determine if there is a better way to do it.
If f(a, b) = a + b, then it is obviously very simple, the highest A, and the highest B are what you want.
I think your original idea will work; you just need to keep your As and Bs in separate collections and stick references to them in your priority queue. If each reference takes 16 bytes (just to pick a number), then 10,000,000 A/B references will only take ~300 MB. Assuming your As and Bs themselves aren't too big, it should be workable.