From a given array of numbers, find all groups of 3 numbers with sum value N - algorithm

Given is an array of numbers:
1, 2, 8, 6, 9, 0, 4
We need to find all the groups of three numbers which sum to a value N (say 11 in this example). Here, the possible groups of three are:
{1,2,8}, {1,4,6}, {0,2,9}
The first solution I could think of was O(n^3). Later I improved it a little (O(n^2 log n)) with this approach:
1. Sort the array.
2. Select any two numbers and perform a binary search for the third element.
Can it be improved further with some other approaches?

You can certainly do it in O(n^2): for each i in the array, test whether two other values sum to N-i.
You can test in O(n) whether two values in a sorted array sum to k by sweeping from both ends at once. If the sum of the two elements you're on is too big, decrement the "right-to-left" index to make it smaller. If the sum is too small, increment the "left-to-right" index to make it bigger. If there's a pair that works, you'll find it, and you perform at most 2*n iterations before you run out of road at one end or the other. You might need code to ignore the value you're using as i, depending on what the rules are.
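As a rough sketch of that O(n^2) approach in Python (the function name, the set-of-triples output, and the duplicate-index handling are my own choices):

def three_sums(arr, n_target):
    # Sort once, then for each i sweep the remainder with two pointers: O(n^2).
    a = sorted(arr)
    results = set()
    for i in range(len(a)):
        lo, hi = 0, len(a) - 1
        while lo < hi:
            if lo == i:                 # skip the index used as i
                lo += 1
                continue
            if hi == i:
                hi -= 1
                continue
            s = a[i] + a[lo] + a[hi]
            if s == n_target:
                results.add(tuple(sorted((a[i], a[lo], a[hi]))))
                lo += 1
            elif s < n_target:
                lo += 1                 # sum too small: move left pointer right
            else:
                hi -= 1                 # sum too big: move right pointer left
    return results

print(three_sums([1, 2, 8, 6, 9, 0, 4], 11))  # {(0, 2, 9), (1, 2, 8), (1, 4, 6)}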
You could instead use some kind of dynamic programming, working down from N, and you probably end up with time something like O(n*N) or so. Realistically I don't think that's any better: it looks like all your numbers are non-negative, so if n is much bigger than N then before you start you can quickly throw out any large values from the array, and also any duplicates beyond 3 copies of each value (or 2 copies, as long as you check whether 3*i == N before discarding the 3rd copy of i). After that step, n is O(N).

Related

Is the computational complexity of counting runs in cribbage O(N*log(N)) in the worst case?

In the card game cribbage, counting the runs for a hand during the show (one of the stages of a turn in the game) means reporting the longest increasing subsequence which consists only of values that increase by 1. If duplicate values are part of this subsequence, then a double run (or triple, quadruple, et cetera) is reported.
Some examples:
("A","2","3","4","5") => (1,5) Single run for 5
("A","2","3","4","4") => (2,4) Double run for 4
("A","2","3","3","3") => (3,3) Triple run for 3
("A","2","3","4","6") => (1,4) Single run for 4
("A","2","3","5","6") => (1,3) Single run for 3
("A","2","4","5","7") => (0,0) No runs
To address cases that arise with hands larger than the cribbage hand size of 5, a run will be selected if it has the maximum product of the number of duplicates of a subsequence and that subsequence's length.
Some relevant examples:
("A","2","2","3","5","6","7","8","9","T","J") => (1,7) Single run for 7
("A","2","2","3","5","6","7","8") => (2,3) Double run for 3
My method for finding the maximum scoring run is as follows:
Create a list of ranks and sort it. O(N*log(N))
Create a list to store the maximum run length and how many duplicates of it exist. Initialize it to [1 duplicate, 1 long].
Create an identical list as above to store the current run.
Create a flag that indicates whether the duplicate you've encountered is not the initial duplicate of this value. Initialize it to False.
Create a variable to store the increase in duplicate subsequences if additional duplicate values are found after the initial duplicate. Initialize it to 1.
Iterate over the differences between adjacent elements. O(N)
If the difference is greater than one, the run has ended. Check whether the product of the elements of the max run is less than that of the current run and the current run has length 3 or greater. If so, the current run becomes the maximum run. Reset the current run list to [1,1], the flag to False, and the increment for duplicate subsequences to 1. Iterate to the next value.
If the difference is 1, increment the length of the current run by 1 and set the flag to False. Iterate to the next value.
If the difference is 0 and the flag is False, set the increment for duplicate subsequences equal to the current number of duplicates for the run. Then, double the number of duplicates for the run and set the flag to True. Iterate to the next value.
If the difference is 0 and the flag is True, increase the number of duplicates for the run by the increment for duplicate subsequences.
After the iteration, check the current run list as in step 7 against the max run and set max run accordingly.
I believe this is O(N*(1+log(N))). I believe this is the best time complexity, but I am not sure how to prove it or what a better algorithm would look like. Is there a way to do this without sorting the list first that achieves a better time complexity? If not, how does one go about proving this is the best time complexity?
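For reference, here is a sketch of the procedure described above in Python (the function name, the numeric-rank input, and the (duplicates, length) return pair are my own choices):

def max_scoring_run(ranks):
    # ranks: numeric card values, e.g. A=1 ... K=13
    ranks = sorted(ranks)                     # O(N log N)
    best = (1, 1)                             # (duplicates, length)
    cur = [1, 1]
    repeat_dup = False                        # flag: past the initial duplicate
    increment = 1
    for prev, nxt in zip(ranks, ranks[1:]):   # O(N) pass over differences
        diff = nxt - prev
        if diff > 1:                          # run ended
            if cur[1] >= 3 and cur[0] * cur[1] > best[0] * best[1]:
                best = (cur[0], cur[1])
            cur, repeat_dup, increment = [1, 1], False, 1
        elif diff == 1:                       # run extends by one
            cur[1] += 1
            repeat_dup = False
        elif not repeat_dup:                  # first duplicate of this value
            increment = cur[0]
            cur[0] *= 2
            repeat_dup = True
        else:                                 # further duplicate of this value
            cur[0] += increment
    if cur[1] >= 3 and cur[0] * cur[1] > best[0] * best[1]:
        best = (cur[0], cur[1])
    return best if best[1] >= 3 else (0, 0)

print(max_scoring_run([1, 2, 3, 4, 4]))  # (2, 4): double run for 4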
Time complexity of an algorithm is a well-traveled path. Proving the complexity of an algorithm varies slightly among mathematician clusters; rather, the complexity community usually works with modular pseudo-code and standard reductions. For instance, a for loop based on the input length is O(N) (surprise); sorting a list is known to be O(N log N) at best (in the general case). For a good treatment, see Big O, how do you calculate/approximate it?.
Note: O(N*(1+log(N))) is slightly sloppy notation. Only the greatest complexity factor -- the one that dominates as N approaches infinity -- is kept. Drop the 1+: it's simply O(N log N).
As I suggested in a comment, you can simply count elements. Keep a list of counts, indexed by your card values. For discussing the algorithm, don't use the "dirty" data of character representations: "A23456789TJQK"; simply use their values, either 0-12 or 1-13.
for rank in hand:
    count[rank] += 1
This is a linear pass through the data, O(N).
Now, traverse your array of counts, finding the longest sequence of non-zero values. This is a fixed-length list of 13 elements, touching each element only once: O(1). If you accumulate a list of multiples (card counts), then you'll also have your combinatoric factors at the end.
The resulting algorithm and code are, therefore, O(N).
For instance, let's shorten this to 7 card values, 0-6. Given the input integers
1 2 1 3 6 1 3 5 0
You make the first pass to count items:
[1 3 1 2 0 1 1]
A second pass gives you a max run length of 4, with counts [1 3 1 2].
You report a run of 4, a triple and a double, or the point count
4 * (1 * 3 * 1 * 2)
You can also count the pair points from the same counts:
2 * C(3,2) + 2 * C(2,2) = 6 + 2 = 8
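A sketch of the whole counting approach in Python (the function names, the sentinel scan, and the return format are my own; it assumes ranks are already numeric):

from math import comb

def best_run(ranks, max_rank=13):
    counts = [0] * (max_rank + 1)
    for rank in ranks:                  # O(N) counting pass
        counts[rank] += 1
    best_len, best_mult = 0, 0
    run_len, run_mult = 0, 1
    for c in counts + [0]:              # O(1): fixed-size scan, 0 as sentinel
        if c > 0:
            run_len += 1
            run_mult *= c               # combinatoric factor of the run
        else:
            if run_len >= 3 and run_len * run_mult > best_len * best_mult:
                best_len, best_mult = run_len, run_mult
            run_len, run_mult = 0, 1
    return (best_mult, best_len) if best_len else (0, 0)

def pair_points(ranks, max_rank=13):
    counts = [0] * (max_rank + 1)
    for rank in ranks:
        counts[rank] += 1
    return sum(2 * comb(c, 2) for c in counts)  # 2 points per pair

print(best_run([1, 2, 1, 3, 6, 1, 3, 5, 0], max_rank=6))     # (6, 4)
print(pair_points([1, 2, 1, 3, 6, 1, 3, 5, 0], max_rank=6))  # 8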

Min number of elements to generate all other elements using xor

I have n integers a_1, ..., a_n. I want to pick the minimum number of them such that their xors form all the others.
For example, consider [1,2,3]: 1^3=2, so you don't need 2 in the array and can remove it, ending up with [1,3]. So the min number of elements is 2, and they can form all the original elements in the array by xoring any 2 of them. Would a greedy approach work here? Or DP?
Edit: To explain what I am thinking: a greedy approach I thought about relies on the fact that if a^b=c then a^c=b and b^c=a. First I delete all duplicates. Then, at the beginning, I list all the pairs that each element can form with another to produce a third element in the array; this takes O(n^3) preprocessing. Then I pick the element with the least contribution, delete it, and subtract 1 from the pair counts of the other elements. I repeat this until all elements have <= 2 pairs, and I stop. This also takes O(n^3), for a total of O(n^3). Does this greedy approach work? Is there a DP way to do it?
If n is bounded by 50 I think backtracking should work.
Suppose at some step we have already selected a subset S of numbers (that should produce all the others) and want to include a new number to that subset.
Then we can do the following:
Consider all remaining numbers R and include in S all numbers that can't be produced by others (in S and R)
Include in S a random (or "best" in some way) number from R
Remove from R all numbers that can be produced by those in updated S
Also you should keep track of the current best solution and cut off all the branches that won't allow you to get a better result.
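As a baseline for that backtracking, here is a naive exhaustive sketch in Python under the pair-xor interpretation (every removed number must be the xor of two kept numbers); the function name is mine, and the backtracking above is essentially a pruned version of this search:

from itertools import combinations

def min_generating_subset(nums):
    nums = list(set(nums))                  # duplicates add nothing
    full = set(nums)
    for size in range(1, len(nums) + 1):    # try smaller subsets first
        for subset in combinations(nums, size):
            producible = set(subset)
            for x, y in combinations(subset, 2):
                producible.add(x ^ y)       # pair xors of kept elements
            if full <= producible:
                return list(subset)
    return nums

print(min_generating_subset([1, 2, 3]))  # e.g. [1, 2] (minimum size 2)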

Binary search in 2 sorted integer arrays

There is a big array which consists of 2 small integer arrays written one after the other. Both small arrays are sorted ascending. We have to find an element in the big array as fast as possible. My idea was to find the end of the left array by binsearch in the big array and then run 2 binsearches on the small arrays. The problem is that I don't know how to find that end. If you have an idea how to find the element without finding the borders of the smaller arrays, you're welcome!
Information about arrays: both small arrays have integer elements, both are sorted by ascending, they both can have length from 0 to any positive integer number, but there can be only one copy of an element.
Here are some examples of big arrays:
1 2 3 4 5 6 7 (all the elements of the second array are bigger than the maximum of the first array)
100 1 (both arrays have only one element)
1 3 5 2 4 6 or 2 4 6 1 3 5 (most common situations)
This problem is impossible to solve in guaranteed time complexity faster than O(n), and not possible to solve at all for certain arrays. Binary search runs in O(log n) for a sorted array, but the big array is not guaranteed to be sorted and will in the worst case require one or more comparisons per element, which is O(n). The best guaranteed time complexity is O(n) with the trivial algorithm: compare every item with its neighbour until you find the "turning point" with A[i] > A[i+1]. However, if you use a breadth-first search, you may get lucky and find the "turning point" early.
Proof that the problem is unsolvable for some arrays: let the array M = [A B] be our big array. To find the point where the arrays meet we're looking for an index i where M[i] > M[i+1]. Now let A=[1 2 3] and B=[4 5]. There is no index in the array M for which the condition holds true, thus the problem is unsolvable for some arrays.
Informal proof for the former: let M=[A B] with A=[1..x] and B=[(x+1)..y] be two sorted arrays. Then swap the positions of elements x and y in M. We have no way of finding the index of x without (in the worst case) checking every index, thus the problem is O(n).
Binary search relies on being able to eliminate half the solution space with each comparison, but in this case we cannot eliminate anything from the array, and so we cannot do better than a linear search.
(From a practical standpoint, you should never do this in a program. The two arrays should be separate. If this isn't possible, append the length of either array to the bigger array.)
Edit: changed my answer after question was updated. It's possible to do it faster than linear time for some arrays, but not all possible arrays. Here's my idea for an algorithm using breadth-first search:
Start with the interval [0..n-1] where n is the length of the big array.
Make a list of intervals and put the starting interval in it.
For each interval in the list:
    if the interval is only two elements and the first element is greater than the last
        we found the turning point, return it
    else if the interval is two elements or less
        remove it from the list
    else if the first element of the interval is greater than the last
        turning point is in this interval
        clear the list
        split this interval in two equal parts and add them to the list
    else
        split this interval in two equal parts and replace this interval in the list with the two parts
I think a breadth-first approach will increase the odds of finding an interval where A[first] > A[last] early. Note that this approach will not work if the turning point is between two intervals, but it's something to get you started. I would test this myself, but unfortunately I don't have the time now.
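A possible Python rendering of that idea (my own sketch; splitting so that the two halves share the midpoint sidesteps the between-intervals caveat mentioned above):

from collections import deque

def find_turning_point(arr):
    # Returns an index i with arr[i] > arr[i+1], or None if no such index
    # exists (e.g. the big array happens to be fully sorted).
    queue = deque()
    if len(arr) >= 2:
        queue.append((0, len(arr) - 1))
    while queue:
        lo, hi = queue.popleft()
        if hi - lo == 1:
            if arr[lo] > arr[hi]:
                return lo                 # found the turning point
            continue                      # two sorted elements: discard
        mid = (lo + hi) // 2
        if arr[lo] > arr[hi]:
            queue.clear()                 # turning point must be in here
        # the halves share the midpoint, so no boundary falls between them
        queue.append((lo, mid))
        queue.append((mid, hi))
    return None

print(find_turning_point([1, 3, 5, 2, 4, 6]))  # 2 (arr[2]=5 > arr[3]=2)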

Algorithms for dividing an array into n parts

In a recent campus Facebook interview I was asked to divide an array into 3 parts such that the sum in each part is roughly equal to sum/3.
My approach:
1. Sort the array.
2. Fill part[k] (k=0) until its sum <= sum/3.
3. After that, increment k and repeat the above step for part[k].
Is there any better algorithm for this, or is it an NP-hard problem?
This is a variant of the partition problem (see http://en.wikipedia.org/wiki/Partition_problem for details). In fact a solution to this can solve that one (take an array, pad with 0s, and then solve this problem), so this problem is NP-hard.
There is a dynamic programming approach that is pseudo-polynomial. For each i from 0 to the size of the array, you keep track of all possible combinations of current sizes for the sub-arrays, and their current sums. As long as there are a limited number of possible sums of subsets of the array, this runs acceptably fast.
The solution that I would have suggested is to just go for "good enough" closeness. First let's consider the simpler problem with all values positive. Then sort by value descending. Take that array in threes. Build up the three subsets by always adding the largest of the triple to the one with the smallest sum, the smallest to the one with the largest, and the middle to the middle. You will end up dividing the array evenly, and the difference will be no more than the value of the third smallest element.
For the general case you can divide into positive and negative, use the above approach on each, and then brute force all combinations of a group of positives, a group of negatives, and the few leftover values in the middle that did not divide evenly.
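A sketch of the greedy described above for the all-positive case (the function name and return format are mine):

def split_three_greedy(values):
    # Assumes non-negative values. Sort descending, take triples, and give
    # the largest of each triple to the currently smallest bin, the smallest
    # to the largest bin, and the middle one to the middle bin.
    vals = sorted(values, reverse=True)
    parts, sums = [[], [], []], [0, 0, 0]
    for i in range(0, len(vals), 3):
        triple = vals[i:i + 3]                 # already in descending order
        bins = sorted(range(3), key=lambda b: sums[b])
        for b, v in zip(bins, triple):
            parts[b].append(v)
            sums[b] += v
    return parts, sums

print(split_three_greedy([9, 8, 7, 6, 5, 4, 3, 2, 1]))
# ([[9, 4, 3], [8, 5, 2], [7, 6, 1]], [16, 15, 14])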
Here are details on a dynamic programming solution if you are interested. The running time and memory usage is O(n*(sum)^2) where n is the size of your array and sum is the sum of absolute values of your array values. For each array index j from 1 to n, store all the possible values you can get for your 3 subset sums when you split the array from index 1 to j into 3 subsets. Also for each possibility, store one possible way to split the array to get the 3 sums. Then to extend this information for 1 to (j+1) given the information from 1 to j, simply take each possible combination of 3 sums for splitting 1 to j and form the 3 combinations of 3 sums you get when you choose to add the (j+1)th array element to any one of the 3 subsets. Finally, when you are done and reach j = n, go through the set of all combinations of 3 subset sums you can get when you split array positions 1 to n into 3 sets, and choose the one whose maximum deviation from sum/3 is minimized. At first this may seem like O(n*(sum)^3) complexity, but for each j and each combination of the first 2 subset sums, the 3rd subset sum is uniquely determined. (because you are not allowed to omit any elements of the array). Thus the complexity really is O(n*(sum)^2).
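A compact version of that DP in Python (my own sketch; for clarity it stores full (s1, s2, s3) triples even though the third sum is determined by the other two, and it omits the bookkeeping needed to reconstruct the actual split):

def best_three_way_sums(arr):
    # states: all reachable (sum1, sum2, sum3) triples over a prefix of arr
    states = {(0, 0, 0)}
    for v in arr:
        states = (
            {(a + v, b, c) for a, b, c in states}
            | {(a, b + v, c) for a, b, c in states}
            | {(a, b, c + v) for a, b, c in states}
        )
    target = sum(arr) / 3
    # pick the triple whose worst deviation from sum/3 is smallest
    return min(states, key=lambda t: max(abs(s - target) for s in t))

print(best_three_way_sums([1, 2, 8, 6, 9, 0, 4]))  # (10, 10, 10)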

Algorithm to generate a 'nearly sorted' or 'k sorted' list?

I want to generate some test data to test a function that merges 'k sorted' lists (lists where each element is at most k positions away from its correct sorted position) into a single fully sorted list. I have an approach that works, but I'm not sure how well randomized it is, and I feel there should be a simpler / more elegant way to do this. My current approach:
Generate n random elements paired with an integer index.
Sort random elements.
Set paired index for each element to its sorted position.
Work backwards through the elements, swapping each element with an element a random distance between 1 and k positions behind it in the list. Only swap with the target element if its paired index is its current index (this avoids swapping an element that is already out of place and moving it further than k positions away from where it should be).
Copy the perturbed elements out into another list.
Like I say, this works but I'm interested in alternative / better approaches.
I think you could just fill an array with random integers and then run quicksort on it with a custom stopping condition.
If in a particular quicksort recursion your start and end indexes are less than k apart, then just return instead of continuing to recur.
Because of how quicksort works, every number in the start..end interval belongs somewhere in that region; worst case is that array[start] might really belong at array[end] (or vice versa) in truly sorted order. So, assuring that start and end are no more than k apart should be sufficient.
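A sketch of that idea (my own code; the pivot choice and the exact stopping threshold are implementation details):

import random

def partial_quicksort(a, lo, hi, k):
    # Ordinary in-place quicksort, except recursion stops once an interval
    # is less than k long, leaving each element within k of its sorted spot.
    if hi - lo < k:
        return
    pivot = a[random.randint(lo, hi)]
    i, j = lo, hi
    while i <= j:
        while a[i] < pivot:
            i += 1
        while a[j] > pivot:
            j -= 1
        if i <= j:
            a[i], a[j] = a[j], a[i]
            i += 1
            j -= 1
    partial_quicksort(a, lo, j, k)
    partial_quicksort(a, i, hi, k)

def nearly_sorted_list(n, k):
    a = [random.randrange(10 * n) for _ in range(n)]
    partial_quicksort(a, 0, n - 1, k)
    return a

print(nearly_sorted_list(20, 3))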
You can generate an array of random numbers and then h-sort it as in Shellsort, but skip the last few sorting passes, where h is less than k.
Step 1: Randomly permute disjoint segments of length k (e.g. 1 to k, k+1 to 2k, ...).
Step 2: Permute again on segments shifted by t (1+t to k+t, k+1+t to 2k+t, ...), swapping only where the swaps don't break the k-sorted assumption, where t is a number between 1 and k (most preferably k/2).
Probably repeat step 2 multiple times with different t.
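One way this could look in Python (my own sketch; it starts from the identity permutation 0..n-1 so that an element's sorted position equals its value, which makes the k-sorted check in step 2 a simple comparison):

import random

def k_shuffle(n, k, rounds=3):
    a = list(range(n))                    # sorted start: position == value
    # Step 1: shuffle disjoint length-k blocks; moves each element < k away.
    for s in range(0, n, k):
        block = a[s:s + k]
        random.shuffle(block)
        a[s:s + k] = block
    # Step 2 (repeated): on blocks shifted by t, swap a pair only if both
    # elements would stay within k of their sorted positions.
    for _ in range(rounds):
        t = random.randint(1, k)
        for s in range(t, n, k):
            end = min(s + k, n)
            i, j = random.randrange(s, end), random.randrange(s, end)
            if abs(a[i] - j) <= k and abs(a[j] - i) <= k:
                a[i], a[j] = a[j], a[i]
    return a

print(k_shuffle(20, 4))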
If I understand the problem, you want an algorithm to randomly pick a single k-sorted list of length n, uniformly selected from the universe U of all k-sorted lists of length n. (You will then run this algorithm m times to produce m lists as input test data.)
The first step is to count them. What is the size of U, i.e. |U|?
The next step is to enumerate them. Create any one-to-one mapping F between the integers (1,2,...,|U|) and k-sorted lists of length n.
Then randomly select an integer x between 1 and |U| inclusive, and then apply F(x) to get the list.
