Find the two minimum non-edge, non-adjacent entries in an array - algorithm

I had the following question
Find the smallest two nonadjacent values in an array, such that none of these elements is at the array's edge (neither A[0] nor A[n-1]).
The runtime of the algorithm should be O(n)
I first thought about sorting the array, but sorting would cost O(n log n).
Ignoring that for a second: even if we sort the array, we cannot simply take the first two values, since they might violate the conditions mentioned above. And then what? Take the next element and try, and if that fails, the next? I can't see an easy solution there.
Another solution is to generate all allowed pairs and find the pair with the minimum sum. But finding all pairs costs O(n^2).
Any ideas?

In linear time, find the ith smallest entry (excluding the first and last) for i from 1 to 4. The best possibility is a pair of these. If 1 and 2 are nonadjacent, then that's the best. Otherwise, if 1 and 3 are nonadjacent, then that's the best. Otherwise, 2 and 3 are bordering 1 (hence not each other), and the possibilities are 1 and 4, or 2 and 3.
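For illustration, here is a minimal Python sketch of that case analysis (function and helper names are mine, and I assume "smallest pair" means the pair with the minimum sum):

import heapq

def min_nonadjacent_interior_pair(a):
    """Return the two interior (non-edge), non-adjacent entries with the
    minimum sum, as (value, index) pairs, or None if no valid pair exists."""
    interior = [(v, i) for i, v in enumerate(a) if 0 < i < len(a) - 1]
    s = heapq.nsmallest(4, interior)  # the 4 smallest interior entries, in O(n)

    def adjacent(p, q):
        return abs(p[1] - q[1]) == 1

    if len(s) < 2:
        return None
    if not adjacent(s[0], s[1]):                 # 1 and 2 are nonadjacent
        return s[0], s[1]
    if len(s) > 2 and not adjacent(s[0], s[2]):  # 1 and 3 are nonadjacent
        return s[0], s[2]
    # Otherwise 2 and 3 border 1 (hence not each other): try {2,3} and {1,4}.
    candidates = []
    if len(s) > 2:
        candidates.append((s[1], s[2]))
    if len(s) > 3:
        candidates.append((s[0], s[3]))
    return min(candidates, key=lambda p: p[0][0] + p[1][0], default=None)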

You could go with your sort first, then, unless I am missing something, take elements 0 and 2. Those would be the smallest non-adjacent values.
As long as the array has 3 or more elements, you should be assured that the values in positions 0 and 2 are the smallest (and, if you need them to be, non-consecutive) as well as non-adjacent in the array.

If your array is sorted, you would only have to keep comparing alternate elements (like indices (0,2), (1,3), (2,4), and so on) and then find the pair with the smallest sum. But without sorting, you are right that the run-time complexity would become O(n^2), as you would have to compare every element with every other element in the array.

Related

Min number of elements to generate all other elements using XOR

I have n integers a_1, ..., a_n. I want to pick the minimum number of them such that the picked elements can generate all the others by xoring.
For example, consider [1,2,3]: 1^3=2, so you don't need 2 in the array and can remove it, ending up with [1,3]. So the minimum number of elements is 2, and they can form all the original elements in the array by xoring any 2 of them. Would a greedy approach work here, or DP?
Edit: to explain what I am thinking. A greedy approach I thought about uses the fact that if a^b=c, then a^c=b and b^c=a. First I delete all duplicates. Then, at the beginning, I list for each element all the pairs it can form with another element to produce a third element in the array; this preprocessing takes O(n^3). Then I pick the element with the least contribution, delete it, and subtract 1 from the count of each of the other elements. I repeat this until every element is in <= 2 pairs, and then I stop. This also takes O(n^3), for a total of O(n^3). Does this greedy approach work? Is there a DP way to do it?
If n is bounded by 50, I think backtracking should work.
Suppose at some step we have already selected a subset S of numbers (that should produce all the others) and want to include a new number in that subset.
Then we can do the following:
Consider all remaining numbers R and include in S all numbers that can't be produced by others (in S and R)
Include in S a random (or "best" in some way) number from R
Remove from R all numbers that can be produced by those in updated S
Also, you should keep track of the current best solution and cut off all branches that cannot lead to a better result.
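For small n, a stripped-down backtracking sketch might look like this (names are mine; it keeps only the size-based pruning and leaves out the forced-inclusion and R-reduction refinements described above, and it assumes the pairwise-XOR interpretation from the example):

from itertools import combinations

def covers(subset, nums):
    """True if every number is in the subset or is the XOR of two of its members."""
    producible = set(subset)
    for a, b in combinations(subset, 2):
        producible.add(a ^ b)
    return all(x in producible for x in nums)

def min_generators(nums):
    nums = list(set(nums))        # duplicates never help
    best = [list(nums)]           # trivial solution: keep every number

    def backtrack(S, rest):
        if len(S) >= len(best[0]):     # prune: cannot beat the current best
            return
        if covers(S, nums):
            best[0] = list(S)
            return
        if not rest:
            return
        x, *others = rest
        backtrack(S + [x], others)     # branch 1: include x in S
        backtrack(S, others)           # branch 2: leave x out

    backtrack([], nums)
    return best[0]

# min_generators([1, 2, 3]) -> a 2-element answer such as [1, 2]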

Minimum number of swaps needed so that no number has two neighbours that are both greater?

The problem statement goes like this: Given a list of N < 500 000 distinct numbers, find the minimum number of swaps of adjacent elements required such that no number has two neighbours that are both greater. A number can only be swapped with a neighbour.
Hint given: Use a segment tree or a fenwick tree.
I don't really get the idea of how I should use a sum-tree to solve this problem.
Example inputs:
Input 1:
5 (number of elements in the list)
3 1 4 2 0
Output 1: 1
Input 2:
6
4 5 2 0 1 3
Output 2: 4
I can do it in O(n log n) time and O(n) extra space. But first let's look at the quadratic solution I've hinted at earlier:
initialize the result accumulator to 0
while the input list has more than two elements
find the lowest element in the list
add its distance from the closer end of the list to the accumulator
remove the element from the list.
output the accumulator.
Why does this work? First, let's look at what a sequence that requires zero swaps looks like. Since there are no duplicates, if the lowest element is anywhere but at either end, it is surrounded by two elements that are both greater, violating the requirement; thus the lowest element must be at one of the ends. Recurse into the subsequence that excludes this element. To bring a sequence into this state: at least as many swaps involving the lowest element as in the greedy algorithm are required to move the lowest element to one end, and since swaps involving the lowest element do not change the relative ordering of the rest, there is no penalty to reordering them to the front either.
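Concretely, the greedy procedure can be written directly in Python like this (a sketch; names are mine):

def min_swaps_quadratic(lst):
    """Repeatedly charge the current minimum its distance to the nearer end,
    then drop it from the list. O(n^2) because of index/min/del on a list."""
    lst = list(lst)
    swaps = 0
    while len(lst) > 2:
        i = lst.index(min(lst))
        swaps += min(i, len(lst) - 1 - i)
        del lst[i]
    return swaps

# min_swaps_quadratic([3, 1, 4, 2, 0]) == 1
# min_swaps_quadratic([4, 5, 2, 0, 1, 3]) == 4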
Unfortunately, just implementing this with a list is quadratic. How do you make it faster? Have a finger tree that tracks the subtree weight and minimum value of each subtree and update these as you remove individual minima:
To initialize the tree: first, think of each element in the list as a one-element subsequence whose minimum equals its value. Then, while you have more than one subsequence, group the subsequences in pairs, building a tree of subsequences. The length of a sequence is the sum of the lengths of its two halves, and its minimum is the lower of the two halves' minima.
To remove the minimum from a subsequence while tracking its index in the sequence:
Decrease the length of the subsequence
Remove the minimum from whichever half's minimum is equal to this subsequence minimum
The new minimum is the lower of its halves' new minima
The index of the minimum is equal to its index in its respective half, plus the length of the left half if the minimum was in the right half.
The distance from one end is then equal to either the index or (length before removal - index - 1), whichever is lower.
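Here is a sketch of the O(n log n) version using a flat array-based segment tree (the question's hint) instead of a finger tree; each node stores the count of still-present elements and the minimum value of its range, and all names are mine:

def min_swaps_nlogn(a):
    n = len(a)
    size = 1
    while size < n:
        size *= 2
    INF = float('inf')
    cnt = [0] * (2 * size)           # how many elements are still present
    mn = [INF] * (2 * size)          # minimum value in each node's range
    for i, v in enumerate(a):
        cnt[size + i] = 1
        mn[size + i] = v
    for i in range(size - 1, 0, -1):
        cnt[i] = cnt[2 * i] + cnt[2 * i + 1]
        mn[i] = min(mn[2 * i], mn[2 * i + 1])

    swaps, remaining = 0, n
    while remaining > 2:
        # Descend to the minimum, counting surviving elements to its left.
        node, left = 1, 0
        while node < size:
            if mn[2 * node] == mn[node]:
                node = 2 * node
            else:
                left += cnt[2 * node]
                node = 2 * node + 1
        swaps += min(left, remaining - 1 - left)
        # Remove the minimum and repair counts/minima on the path to the root.
        cnt[node], mn[node] = 0, INF
        node //= 2
        while node:
            cnt[node] = cnt[2 * node] + cnt[2 * node + 1]
            mn[node] = min(mn[2 * node], mn[2 * node + 1])
            node //= 2
        remaining -= 1
    return swaps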

Binary search in 2 sorted integer arrays

There is a big array which consists of 2 small integer arrays written one after the other. Both small arrays are sorted in ascending order. We have to find an element in the big array as fast as possible. My idea was to find the end of the left array by binary search in the big array and then run 2 binary searches on the small arrays. The problem is that I don't know how to find that end. If you have an idea of how to find the element without finding the borders of the smaller arrays, you're welcome!
Information about the arrays: both small arrays have integer elements, both are sorted in ascending order, each can have length from 0 to any positive integer, and there can be only one copy of each element.
Here are some examples of big arrays:
1 2 3 4 5 6 7 (all the elements of the second array are bigger, than the maximum of the first array)
100 1 (both arrays have only one element)
1 3 5 2 4 6 or 2 4 6 1 3 5 (most common situations)
This problem is impossible to solve in guaranteed time complexity faster than O(n), and not possible to solve at all for certain arrays. Binary search runs in O(log n) on a sorted array, but the big array is not guaranteed to be sorted, so in the worst case one or more comparisons per element are required, which is O(n). The best guaranteed time complexity is O(n), with the trivial algorithm: compare every item with its neighbour until you find the "turning point" with A[i] > A[i+1]. However, if you use a breadth-first search, you may get lucky and find the turning point early.
Proof that the problem is unsolvable for some arrays: let the array M = [A B] be our big array. To find the point where the arrays meet we're looking for an index i where M[i] > M[i+1]. Now let A=[1 2 3] and B=[4 5]. There is no index in the array M for which the condition holds true, thus the problem is unsolvable for some arrays.
Informal proof of the former: let M=[A B], where A=[1..x] and B=[(x+1)..y] are two sorted arrays. Then swap the positions of elements x and y in M. We have no way of finding the index of x without (in the worst case) checking every index; thus the problem requires O(n) time.
Binary search relies on being able to eliminate half the solution space with each comparison, but in this case we cannot eliminate anything from the array, so we cannot do better than a linear search.
(From a practical standpoint, you should never do this in a program. The two arrays should be separate. If this isn't possible, append the length of either array to the bigger array.)
Edit: I changed my answer after the question was updated. It's possible to do this faster than linear time for some arrays, but not for all possible arrays. Here's my idea for an algorithm using breadth-first search:
Start with the interval [0..n-1] where n is the length of the big array.
Make a list of intervals and put the starting interval in it.
For each interval in the list:
if the interval is only two elements and the first element is greater than the last
we found the turning point, return it
else if the interval is two elements or less
remove it from the list
else if the first element of the interval is greater than the last
turning point is in this interval
clear the list
split this interval in two equal parts and add them to the list
else
split this interval in two equal parts and replace this interval in the list with the two parts
I think a breadth-first approach will increase the odds of finding an interval where A[first] > A[last] early. Note that this approach will not work if the turning point is between two intervals, but it's something to get you started. I would test this myself, but unfortunately I don't have the time now.
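Here's a sketch of that idea (names are mine; one deliberate deviation: neighbouring intervals share their midpoint element, which closes the "turning point between two intervals" gap mentioned above):

from collections import deque

def find_turning_point(arr):
    """Breadth-first search for an index i with arr[i] > arr[i+1].
    Returns None when no such index exists (e.g. a fully sorted big array)."""
    queue = deque([(0, len(arr) - 1)])       # inclusive intervals [lo, hi]
    while queue:
        lo, hi = queue.popleft()
        if hi - lo == 1 and arr[lo] > arr[hi]:
            return lo                        # two elements, out of order: found it
        if hi - lo <= 1:
            continue                         # too small and in order: discard
        if arr[lo] > arr[hi]:
            queue.clear()                    # turning point is certainly in here
        mid = (lo + hi) // 2
        queue.append((lo, mid))              # halves overlap at mid on purpose
        queue.append((mid, hi))
    return None

# find_turning_point([2, 4, 6, 1, 3, 5]) == 2
# find_turning_point([1, 2, 3, 4, 5]) is None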

Algorithms for dividing an array into n parts

In a recent campus Facebook interview I was asked to divide an array into 3 parts such that the sum in each part is roughly equal to sum/3.
My approach:
1. Sort the array.
2. Fill part array[k] (k=0) until array[k] <= sum/3.
3. After that, increment k and repeat the above step for array[k].
Is there any better algorithm for this, or is it NP-hard?
This is a variant of the partition problem (see http://en.wikipedia.org/wiki/Partition_problem for details). In fact, a solution to this problem would solve that one (take an array, pad it with 0s, and then solve this problem), so this problem is NP-hard.
There is a dynamic programming approach that is pseudo-polynomial. For each i from 0 to the size of the array, you keep track of all possible combinations of current sizes for the sub-arrays and their current sums. As long as there is a limited number of possible sums of subsets of the array, this runs acceptably fast.
The solution that I would have suggested is to just go for "good enough" closeness. First let's consider the simpler problem with all values positive. Then sort by value descending. Take that array in threes. Build up the three subsets by always adding the largest of the triple to the one with the smallest sum, the smallest to the one with the largest, and the middle to the middle. You will end up dividing the array evenly, and the difference will be no more than the value of the third smallest element.
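A small sketch of that distribution step, assuming all values are positive (names are mine):

def three_way_greedy(values):
    """Sort descending, then hand out each consecutive triple: the largest
    value goes to the currently lightest part, the smallest to the heaviest,
    and the middle value to the middle part."""
    vals = sorted(values, reverse=True)
    parts, sums = [[], [], []], [0, 0, 0]
    for i in range(0, len(vals), 3):
        triple = vals[i:i + 3]                             # last group may be short
        by_load = sorted(range(3), key=lambda j: sums[j])  # lightest part first
        for part, v in zip(by_load, triple):               # big value -> light part
            parts[part].append(v)
            sums[part] += v
    return parts, sums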
For the general case you can divide into positive and negative, use the above approach on each, and then brute force all combinations of a group of positives, a group of negatives, and the few leftover values in the middle that did not divide evenly.
Here are details on a dynamic programming solution, if you are interested. The running time and memory usage are O(n*(sum)^2), where n is the size of your array and sum is the sum of the absolute values of your array values.
For each array index j from 1 to n, store all the possible values you can get for your 3 subset sums when you split the array from index 1 to j into 3 subsets. Also, for each possibility, store one possible way to split the array to get the 3 sums. Then, to extend this information from 1..j to 1..(j+1), simply take each possible combination of 3 sums for splitting 1..j and form the 3 combinations of 3 sums you get when you choose to add the (j+1)th array element to any one of the 3 subsets. Finally, when you are done and reach j = n, go through the set of all combinations of 3 subset sums you can get when you split array positions 1 to n into 3 sets, and choose the one whose maximum deviation from sum/3 is minimized.
At first this may seem like O(n*(sum)^3) complexity, but for each j and each combination of the first 2 subset sums, the 3rd subset sum is uniquely determined (because you are not allowed to omit any elements of the array). Thus the complexity really is O(n*(sum)^2).
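A compact sketch of that DP in Python, tracking only the first two subset sums since the third is determined by the running total (names are mine; reconstructing the actual split is omitted for brevity):

def three_partition_dp(arr):
    total = sum(arr)
    states = {(0, 0)}                 # reachable (sum1, sum2) pairs so far
    for x in arr:
        states = {t for (a, b) in states
                  for t in ((a + x, b), (a, b + x), (a, b))}
    target = total / 3

    def deviation(ab):
        a, b = ab
        return max(abs(a - target), abs(b - target),
                   abs((total - a - b) - target))

    a, b = min(states, key=deviation)
    return a, b, total - a - b       # the three subset sums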

Minimum number of comparisons needed

What is the minimum number of comparisons needed to find the largest element among 4 distinct elements? I know for 5 distinct numbers it is 6, floor(5/2) * 3; this is from the CLRS book. But I know there is no single general formula for finding this, or is there?
Edit: clarification
These 4 elements could be in any order (all permutations of these 4 elements are possible). I'm not interested in a counting technique that keeps track of the largest element as you traverse the elements, but in comparisons like > or <.
For 4 elements the minimum number of comparisons is 3.
In general, to find the largest of N elements you need N-1 comparisons. This gives you 4 for 5 numbers, not 6.
Proof:
there is always a solution with N-1 comparisons: just compare the first two, select the larger and compare it with the next one, select the larger and compare with the next one, and so on;
there cannot be a shorter solution: each comparison eliminates at most one element as a candidate for the largest, and all N-1 other elements must be eliminated, so with fewer comparisons at least two elements would remain that could still be the largest.
QED.
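To make the upper bound concrete, here is a linear scan that counts its comparisons (a sketch; names are mine):

def largest(values):
    """Finds the largest of n items using exactly n - 1 comparisons."""
    it = iter(values)
    best = next(it)           # no comparison needed for the first element
    count = 0
    for v in it:
        count += 1
        if v > best:
            best = v
    return best, count

# largest([3, 1, 4, 2]) == (4, 3)   # 4 elements, 3 comparisons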
I know it does not answer the original question, but I enjoyed reading this not-so-intuitive post on the minimum number of comparisons needed to find the smallest AND the largest number from an unsorted array (with proof).
Think of it as a competition. By comparing two elements you get a loser and a winner.
So if you have n elements and need 1 final winner, you need n-1 comparisons to rule out the other ones.
For elements a, b, c, d: if a > b+c+d (assuming nonnegative values), then only one comparison is required to know that a is the biggest.
You do have to get lucky, though.
