I have n integers a_1, ..., a_n. I want to pick the minimum number of elements such that the XORs of the picked elements form all the others.
For example, consider [1,2,3]: since 1^3=2, you don't need 2 in the array, so you can remove it and end up with [1,3]. The minimum number of elements is therefore 2, and they can form all the original elements of the array by XORing. Would a greedy approach work here, or DP?
Edit: To explain what I am thinking: a greedy approach I thought about uses the fact that if a^b=c then a^c=b and b^c=a. First I delete all duplicates. Then, as preprocessing, I list for each element all the pairs it can form to produce another element in the array; this takes O(n^3). Then I repeatedly pick the element with the least contribution, delete it, and subtract 1 from the counts of the other elements, until every element is in at most 2 pairs, and stop. This also takes O(n^3), for a total of O(n^3). Does this greedy approach work? Is there a DP way to do it?
If n is bounded by 50 I think backtracking should work.
Suppose at some step we have already selected a subset S of numbers (which should produce all the others) and want to add a new number to it.
Then we can do the following:
Consider all remaining numbers R and move into S every number that can't be produced by the others (in S and R).
Add to S a random (or in some sense "best") number from R.
Remove from R all numbers that can be produced by those in the updated S.
You should also keep track of the current best solution and cut off all branches that can't lead to a better result.
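Here is a minimal Python sketch of that branch and bound (the function and variable names are mine, and it assumes n is small, since the span set can double with each selected number):

    def min_generating_subset(nums):
        """Find a smallest subset S of nums whose subset XORs produce
        every element of nums.  span holds all XORs of subsets of the
        current S; branches that cannot beat the best are cut off."""
        nums = list(set(nums))                    # delete duplicates first
        best = [list(nums)]                       # the whole array trivially works

        def rec(i, S, span):
            if len(S) >= len(best[0]):
                return                            # prune: cannot improve
            if i == len(nums):
                if all(x in span for x in nums):  # all numbers producible?
                    best[0] = list(S)
                return
            x = nums[i]
            # branch 1: put nums[i] into S, doubling the reachable span
            rec(i + 1, S + [x], span | {v ^ x for v in span})
            # branch 2: leave nums[i] out, hoping later picks produce it
            rec(i + 1, S, span)

        rec(0, [], {0})
        return best[0]

    print(min_generating_subset([1, 2, 3]))       # 2 elements suffice, e.g. [1, 2]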
Given an array of integers and a threshold value, determine the maximum sum of any subsequence of the array that is less than or equal to the threshold. For all but at most 15 of the elements, every pair satisfies either array[i] >= 2*array[j] or array[j] >= 2*array[i] (j != i).
The threshold can be up to 10^17, the length of the array can be up to 60, and array[i] can be up to 10^16.
Here the threshold is too high, so we cannot solve it with the normal knapsack method. I tried dividing the array into three parts, getting the lists of possible sums by backtracking on each part, and then merging the three lists to find the result. But I think there could be a more optimal way of doing this.
This problem has been carefully set up so that all the usual approaches will run out of space. You have to use the hint.
Step 1: sort the array descending, then divide it into up to 15 "weird ones" and a chain of elements such that b1 >= 2*b2, b2 >= 2*b3, and so on.
You do that by taking the largest element into your chain, then putting elements into the weird array until you find one at most half the size of the last chain element, adding that to the chain, putting more into the weird array, and so on.
Now for each of the up to 32768 subsets of the weird ones, figure out which subset of the chain gets you closest. Here you can use the following observation: for any chain element that you have a choice about including, either it is too big to include, or it must be included. (Because each chain element is larger than the sum of all the chain elements after it, skipping it can never be made up later.) That gives you a maximum of 45 decision points to consider.
In other words:

    for each subset of the weird ones:
        for each element of the chain:
            if we can add this element:
                add it to the set we are looking at
        if sum(this set) is the best so far, update our max
    return the best found
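A Python sketch of that loop (the names are mine; weird and chain are the two groups from step 1, with chain sorted descending):

    def best_sum_under(threshold, weird, chain):
        """For each subset of the <= 15 weird elements, greedily add
        chain elements.  The greedy is safe: each chain element exceeds
        the sum of all chain elements after it, so if one fits, taking
        it always beats skipping it."""
        best = 0
        for mask in range(1 << len(weird)):       # <= 32768 subsets
            total = sum(w for i, w in enumerate(weird) if mask >> i & 1)
            if total > threshold:
                continue
            for b in chain:                       # largest chain element first
                if total + b <= threshold:
                    total += b                    # it fits, so it must be included
            best = max(best, total)
        return best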
In a recent campus Facebook interview I was asked to divide an array into 3 parts such that the sum in each part is roughly equal to sum/3.
My approach:
1. Sort the array.
2. Fill part[k] (k=0) until its sum reaches sum/3.
3. Increment k and repeat the above step for part[k].
Is there a better algorithm for this, or is it an NP-hard problem?
This is a variant of the partition problem (see http://en.wikipedia.org/wiki/Partition_problem for details). In fact a solution to this can solve that one (take a partition instance, add one extra element equal to half the total sum; the result splits into 3 equal parts exactly when the original splits into 2 equal halves), so this problem is NP-hard.
There is a dynamic programming approach that is pseudo-polynomial. For each i from 0 to the size of the array, you keep track of all possible combinations of current sizes for the subarrays and their current sums. As long as there is a limited number of possible subset sums of the array, this runs acceptably fast.
The solution that I would have suggested is to just go for "good enough" closeness. First consider the simpler problem where all values are positive. Sort by value descending and take the array in threes. Build up the three subsets by always adding the largest of each triple to the subset with the smallest sum, the smallest of the triple to the subset with the largest sum, and the middle to the middle. You will divide the array evenly, and the difference will be no more than the value of the third-smallest element.
For the general case you can divide into positive and negative, use the above approach on each, and then brute force all combinations of a group of positives, a group of negatives, and the few leftover values in the middle that did not divide evenly.
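A Python sketch of the positive-values heuristic above (the names are mine):

    def split3_roughly(values):
        """'Good enough' 3-way split for positive values: sort descending,
        then for each triple send the largest value to the bin with the
        smallest sum, the smallest to the bin with the largest sum, and
        the middle one to the middle bin."""
        vals = sorted(values, reverse=True)
        bins, sums = [[], [], []], [0, 0, 0]
        cut = len(vals) - len(vals) % 3
        for i in range(0, cut, 3):
            order = sorted(range(3), key=lambda b: sums[b])  # ascending sums
            for b, v in zip(order, vals[i:i + 3]):           # big -> small bin
                bins[b].append(v)
                sums[b] += v
        for v in vals[cut:]:                                 # 0-2 leftovers
            b = min(range(3), key=lambda b: sums[b])
            bins[b].append(v)
            sums[b] += v
        return bins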
Here are details on a dynamic programming solution, if you are interested. The running time and memory usage are O(n*(sum)^2), where n is the size of your array and sum is the sum of the absolute values of your array values.

For each array index j from 1 to n, store all the possible values you can get for your 3 subset sums when you split array positions 1 to j into 3 subsets, and for each possibility store one way to split the array that achieves those sums. To extend this information from 1..j to 1..(j+1), take each possible combination of 3 sums for splitting 1..j and form the 3 new combinations you get by adding the (j+1)th array element to each of the 3 subsets. Finally, when you reach j = n, go through the set of all combinations of 3 subset sums you can get when splitting positions 1 to n into 3 sets, and choose the one whose maximum deviation from sum/3 is minimized.

At first this may seem like O(n*(sum)^3) complexity, but for each j and each combination of the first 2 subset sums, the 3rd subset sum is uniquely determined (because you are not allowed to omit any elements of the array). Thus the complexity really is O(n*(sum)^2).
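A sketch of that DP in Python, tracking just the reachable (sum1, sum2) pairs (the third sum is implied; recovering an actual split would mean also storing a back-pointer per state, which is omitted here):

    def min_deviation_3split(values):
        """DP over reachable (sum1, sum2) pairs; sum3 = total - sum1 - sum2
        because every element must be placed somewhere."""
        total = sum(values)
        states = {(0, 0)}
        for v in values:
            states = ({(s1 + v, s2) for s1, s2 in states}    # v into subset 1
                      | {(s1, s2 + v) for s1, s2 in states}  # v into subset 2
                      | states)                              # v into subset 3
        target = total / 3
        def worst(p):
            s1, s2 = p
            s3 = total - s1 - s2
            return max(abs(s1 - target), abs(s2 - target), abs(s3 - target))
        return min(states, key=worst)                        # best (sum1, sum2)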
I had the following question
Find the smallest two nonadjacent values in an array, such that none of these elements is at the array edge (i.e., not A[0] and not A[n-1]).
The runtime of the algorithm should be O(n).
I first thought about sorting the array, but sorting would cost O(n log n).
Ignoring that for a second: if we sort the array, we cannot just take the first two values, since they might violate the conditions mentioned above. And then what? Take the next element and try, and if that fails take the next? I can't see an easy solution there.
Another solution is to generate all allowed pairs and find the pair with the minimum sum, but generating all pairs costs O(n^2).
Any ideas?
In linear time, find the ith smallest entry (excluding the first and last) for i from 1 to 4. The best possibility is a pair of these. If 1 and 2 are nonadjacent, then that's the best. Otherwise, if 1 and 3 are nonadjacent, then that's the best. Otherwise, 2 and 3 are bordering 1 (hence not each other), and the possibilities are 1 and 4, or 2 and 3.
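A Python sketch of that case analysis (heapq.nsmallest does the linear-time selection; this assumes at least four interior elements):

    import heapq

    def min_nonadjacent_interior_pair(a):
        """Minimal sum of two nonadjacent elements, excluding A[0] and
        A[n-1].  The optimal pair lies among the four smallest interior
        elements, per the case analysis above."""
        interior = list(enumerate(a))[1:-1]           # drop the edges
        (i1, v1), (i2, v2), (i3, v3), (i4, v4) = heapq.nsmallest(
            4, interior, key=lambda p: p[1])
        if abs(i1 - i2) > 1:                          # 1st and 2nd nonadjacent
            return v1 + v2
        if abs(i1 - i3) > 1:                          # 1st and 3rd nonadjacent
            return v1 + v3
        return min(v1 + v4, v2 + v3)                  # 2 and 3 border 1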
You could go with your sort first and then, unless I am missing something, take elements 0 and 2. Those would be the smallest non-adjacent values.
As long as the array has 3 or more elements, you are assured that the values at positions 0 and 2 are the smallest (and, if you need them to be, non-consecutive) as well as non-adjacent in the sorted array.
If your array is sorted, you would only have to compare alternate elements (indices (0,2), (1,3), (2,4), and so on) and then find the pair with the smallest sum. But without sorting, you are right that the runtime complexity would become O(n^2), since you would have to compare every element with every other element in the array.
I want to generate some test data to test a function that merges 'k-sorted' lists (lists where each element is at most k positions away from its correct sorted position) into a single fully sorted list. I have an approach that works, but I'm not sure how well randomized it is, and I feel there should be a simpler / more elegant way to do this. My current approach:
Generate n random elements paired with an integer index.
Sort random elements.
Set paired index for each element to its sorted position.
Work backwards through the elements, swapping each element with an element a random distance between 1 and k positions behind it in the list. Only swap with the target element if its paired index is its current index (this avoids swapping an element that is already out of place and moving it further than k positions away from where it should be).
Copy the perturbed elements out into another list.
Like I say, this works but I'm interested in alternative / better approaches.
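In code, the approach looks roughly like this (a Python sketch; note that I apply the in-place check to the element being moved backwards, since that is the displacement the check has to bound):

    import random

    def k_sorted_by_swaps(n, k, max_value=10**6):
        """Generate a k-sorted list by perturbing a sorted one: walk
        backwards, swapping each still-in-place element with one a random
        1..k positions behind it, so nothing drifts more than k away."""
        elems = sorted(random.randrange(max_value) for _ in range(n))
        pos = list(range(n))                 # pos[i]: sorted index of elems[i]
        for i in range(n - 1, 0, -1):
            j = i - random.randint(1, k)
            if j >= 0 and pos[i] == i:       # only move an in-place element
                elems[i], elems[j] = elems[j], elems[i]
                pos[i], pos[j] = pos[j], pos[i]
        return elems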
I think you could just fill an array with random integers and then run quicksort on it with a custom stopping condition.
If in a particular quicksort recursion your start and end indexes are less than k apart, then just return instead of continuing to recur.
Because of how quicksort works, every number in the start..end interval belongs somewhere in that region; the worst case is that array[start] might really belong at array[end] (or vice versa) in truly sorted order. So ensuring that start and end are no more than k apart should be sufficient.
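A Python sketch of this truncated quicksort (the names are mine):

    import random

    def k_sorted_by_quicksort(n, k, max_value=10**6):
        """Random data, quicksorted with a cutoff: the recursion stops
        once a range is at most k long, so after partitioning every
        element is within k of its sorted position."""
        a = [random.randrange(max_value) for _ in range(n)]

        def qsort(lo, hi):                   # operates on a[lo..hi] inclusive
            if hi - lo < k:
                return                       # range is <= k long: stop here
            p = a[random.randint(lo, hi)]    # random pivot value
            i, j = lo, hi
            while i <= j:                    # Hoare-style partition
                while a[i] < p:
                    i += 1
                while a[j] > p:
                    j -= 1
                if i <= j:
                    a[i], a[j] = a[j], a[i]
                    i, j = i + 1, j - 1
            qsort(lo, j)
            qsort(i, hi)

        qsort(0, n - 1)
        return a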
You can generate an array of random numbers and then h-sort it as in Shellsort, but skip the last sorting passes, where h is less than k.
Step 1: Randomly permute disjoint segments of length k (e.g. 1 to k, k+1 to 2k, ...).
Step 2: Permute again on segments shifted by an offset t (1+t to k+t, k+1+t to 2k+t, ...), where t is a number between 1 and k (preferably around k/2), swapping only where the swap doesn't break the k-sorted property.
Probably repeat step 2 multiple times with different t; a sketch follows.
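A Python sketch of the two steps (starting from the sorted array 0..n-1, so each element's value is its sorted index and the k-sorted check is just a distance test):

    import random

    def k_sorted_by_segments(n, k, rounds=3):
        """Step 1: shuffle disjoint k-blocks of a sorted array.  Step 2:
        on blocks shifted by an offset t, do random swaps, rejecting any
        swap that would move an element more than k from its position."""
        a = list(range(n))                   # a[i] is its own sorted position
        for lo in range(0, n, k):            # step 1: disjoint k-blocks
            block = a[lo:lo + k]
            random.shuffle(block)
            a[lo:lo + k] = block
        for _ in range(rounds):              # step 2, repeated with fresh t
            t = random.randint(1, k)         # k // 2 is a good default
            for lo in range(t, n, k):
                hi = min(lo + k, n)
                for _ in range(k):
                    i, j = random.randrange(lo, hi), random.randrange(lo, hi)
                    if abs(a[i] - j) <= k and abs(a[j] - i) <= k:
                        a[i], a[j] = a[j], a[i]   # swap keeps both within k
        return a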
If I understand the problem, you want an algorithm to randomly pick a single k-sorted list of length n, uniformly selected from the universe U of all k-sorted lists of length n. (You will then run this algorithm m times to produce m lists as input test data.)
The first step is to count them: what is the size |U| of U?
The next step is to enumerate them. Create any one-to-one mapping F between the integers (1,2,...,|U|) and k-sorted lists of length n.
Then randomly select an integer x between 1 and |U| inclusive, and then apply F(x) to get the list.
What is the minimum number of comparisons needed to find the largest element among 4 distinct elements? I know that for 5 distinct numbers it is 6, i.e. floor(5/2) * 3; this is from the CLRS book. But I know there is no one general formula for finding this, or is there?
Edit: clarification
These 4 elements could be in any order (all permutations of the 4 elements are possible). I'm not interested in a counting technique that keeps track of the largest element as you traverse the elements, but in comparisons like > or <.
For 4 elements the minimum number of comparisons is 3.
In general, to find the largest of N elements you need N-1 comparisons. This gives you 4 for 5 numbers, not 6.
Proof:
there is always a solution with N-1 comparisons: just compare the first two, then compare the larger with the next one, then the larger of those with the next one, and so on;
there cannot be a shorter solution: every element other than the largest must lose at least one comparison (otherwise it could still be the largest), and each comparison produces at most one new loser, so at least N-1 comparisons are needed.
QED.
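The constructive half of the proof, in Python:

    def largest(xs):
        """Find the max of xs using exactly len(xs) - 1 comparisons."""
        best, comparisons = xs[0], 0
        for x in xs[1:]:
            comparisons += 1             # one comparison per remaining element
            if x > best:
                best = x
        return best, comparisons

    print(largest([3, 1, 4, 2]))         # (4, 3): 3 comparisons for 4 elements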
I know it does not answer the original question, but I enjoyed reading this not-so-intuitive post on the minimum number of comparisons needed to find the smallest AND the largest number from an unsorted array (with proof).
Think of it as a competition. Comparing two elements gives you a loser and a winner.
So if you have n elements and need 1 final winner, you need n-1 comparisons to rule out the other ones.
For elements a, b, c, d:
if a > b+c+d, then only one comparison is required to know that a is the biggest (assuming the values are nonnegative).
You do have to get lucky, though.