Sort sub-array minimally - algorithm

This is an interview question.
Given an array of integers, write a method to find indices m and n such that if you sorted elements m through n, the entire array would be sorted. Minimize n - m, i.e. find the smallest such sub-array.

Observation
The integers before m must be ascending and smaller than (or equal to) any integers after them.
Algorithm
Start from the first element, and stop at the first decrease; call this ascending prefix SA.
Find the minimum of everything after SA (MIN).
The start point m is just after the largest integer in SA that is smaller than (or equal to) MIN.
Do the same symmetrically from the right to find n.
Complexity
O(N)

You need to keep track of four things:
1. The end of the sorted region at the beginning of the array
2. The start of the sorted region at the end of the array
3. The minimum value after region 1
4. The maximum value before region 2
Start by finding preliminary values for 1 and 2, by scanning the array from the start and from the end until you find a misplaced value.
Then scan everything after your preliminary 1 to find the minimum value; this is your 3. Find 4 in the same way.
Now backtrack through the start region of the array until you find the place where that minimum value belongs. This is the exact answer to 1, and also your m.
Find n in the same way, by backtracking through the end region to find where the maximum value belongs.
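Both answers above describe the same O(N) plan; here is a minimal Python sketch of it (the four tracked values appear as i, j, suffix_min, and prefix_max; all names are mine):

```python
def min_unsorted_window(arr):
    """Find (m, n) such that sorting arr[m..n] sorts the whole array, in O(N)."""
    size = len(arr)
    i = 0                                   # 1: end of the sorted region at the start
    while i + 1 < size and arr[i] <= arr[i + 1]:
        i += 1
    if i == size - 1:
        return None                         # already fully sorted
    j = size - 1                            # 2: start of the sorted region at the end
    while j > 0 and arr[j - 1] <= arr[j]:
        j -= 1
    suffix_min = min(arr[i + 1:])           # 3: minimum after the beginning region
    prefix_max = max(arr[:j])               # 4: maximum before the end region
    m = 0                                   # backtrack: where suffix_min belongs
    while arr[m] <= suffix_min:
        m += 1
    n = size - 1                            # backtrack: where prefix_max belongs
    while arr[n] >= prefix_max:
        n -= 1
    return m, n

print(min_unsorted_window([1, 2, 4, 7, 10, 11, 7, 12, 6, 7, 16, 18, 19]))  # (3, 9)
```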

Related

Count palindromic permutations ("mirrors") of an array

I've been trying to find a solution for this question:
Given an array of integers, count the distinct permutations that are palindromes ("mirrors"); that is, find the number of distinct ways that the array's elements can be rearranged so that they read the same way backward as forward. For example:
If the array is [1,1,2], then there is only one distinct palindromic permutation (namely [1,2,1]), so the desired result is 1.
If the array is [1,1,2,2], then there are two distinct palindromic permutations (namely [1,2,2,1] and [2,1,1,2]), so the desired result is 2.
If the array is [2,2,2,3,3], then there are two distinct palindromic permutations (namely [3,2,2,2,3] and [2,3,2,3,2]), so the desired result is 2.
I've been trying to solve this and have been stuck for quite a while, and I can't find any solution online. Any help will be appreciated (I'm just starting out on algorithms & data structures).
My idea is to find the index of the median of that array (e.g., in example #1, the median is at index 1) and move all numbers after it to before it (so, [1,2,1]), and check using two pointers (one at end, one at start) if all numbers are equal.
However, this won't work if, let's say, example #1 were arr = [1,2,2], as doing the above would leave the array as [1,2,2]. What I should have done in this case is move the 1 in between the 2s (sort of a median from the end, if that makes sense); sort of like the above method, but in reverse.
Here is the general idea:
Count the frequency of each unique value.
If the array's length is odd, then exactly one frequency should be odd. If not, there are no mirrors. If so, that value will have to be placed in the center. The number of mirrors is then equal to what you would get for an array with one value less -- that value removed.
Now the array length is even. No frequencies should be odd, or else there are no mirrors. Now halve all those frequencies.
Determine how many permutations can be formed with those values and their (halved) frequencies. The formula is:
n! / (n1! n2! n3! ... nk!)
where n is the sum of all (halved) frequencies (i.e. half the size of the array), and the ni are the individual (halved) frequencies.
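A minimal Python sketch of this counting procedure (the function name and structure are mine):

```python
from collections import Counter
from math import factorial

def count_mirrors(arr):
    freq = Counter(arr)
    odd = [v for v, c in freq.items() if c % 2 == 1]
    if len(arr) % 2 == 0:
        if odd:                        # even length: no odd frequency allowed
            return 0
    else:
        if len(odd) != 1:              # odd length: exactly one odd frequency
            return 0
        freq[odd[0]] -= 1              # that value must occupy the center
    halves = [c // 2 for c in freq.values()]
    count = factorial(sum(halves))     # n! / (n1! n2! ... nk!)
    for h in halves:
        count //= factorial(h)
    return count

print(count_mirrors([1, 1, 2]))        # 1
print(count_mirrors([1, 1, 2, 2]))     # 2
print(count_mirrors([2, 2, 2, 3, 3]))  # 2
```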

Toggling bits pairs in an array to maximize its dot product with another array

Suppose two arrays A and B are given. A consists of integers and B consists of 0s and 1s.
Now an operation is given: you can choose any two adjacent bits in array B and toggle both of them (for example, 00->11, 01->10, 10->01, 11->00), and you can perform this operation any number of times.
The output should be the maximum possible value of the sum A[0]*B[0]+A[1]*B[1]+...+A[N-1]*B[N-1].
During the interview, my approach to this problem was to get the maximum number of 1's in array B in order to maximize the sum.
So to do that, I first calculated the total number of 1s in B in O(n) time. Let x be that count.
Then I started traversing the array and toggled a pair only if the count became greater than x, or based on the elements of array A (for example, if B[i]=0 and B[i+1]=1 with A[i]=51 and A[i+1]=50,
I would toggle B[i] and B[i+1] because A[i]>A[i+1]).
But the interviewer was not quite satisfied with my approach and asked me to develop an algorithm with lower time complexity.
Can anyone suggest a better approach with lower time complexity?
You can create any B-vector that differs from the original in an even number of positions, just by repeatedly flipping the pair that starts at the first bit in the wrong state.
So, pick all the positive numbers in A, and then drop the smallest one if you ended up with a count whose parity differs from that of the number of 1s in B. If you can't do that, because B has an odd number of 1s and A is all negative, then just pick the negative number closest to 0.
Then turn on all the bits corresponding to the numbers you chose, and turn off the other ones.
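A Python sketch of this idea (the only invariant of the pair-toggle operation is the parity of the count of 1s in B; the sketch goes slightly beyond the wording above by picking whichever parity fix is cheaper):

```python
def max_dot_product(A, B):
    """Maximize sum(A[i]*B[i]) over all B reachable by adjacent-pair toggles."""
    parity = sum(B) % 2                # toggles change the count of 1s by -2, 0, or +2
    positives = [x for x in A if x > 0]
    best = sum(positives)
    if len(positives) % 2 == parity:
        return best
    # Parity mismatch: either drop the smallest chosen positive,
    # or additionally take the non-positive value closest to zero.
    candidates = []
    if positives:
        candidates.append(best - min(positives))
    non_positives = [x for x in A if x <= 0]
    if non_positives:
        candidates.append(best + max(non_positives))
    return max(candidates)
```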

Algorithms for dividing an array into n parts

In a recent campus Facebook interview I was asked to divide an array into 3 parts such that the sum of each part is roughly equal to sum/3.
My approach:
1. Sort the array.
2. Fill part[k] (k = 0) until its running sum reaches sum/3.
3. Increment k and repeat the above step for part[k].
Is there any better algorithm for this, or is it an NP-hard problem?
This is a variant of the partition problem (see http://en.wikipedia.org/wiki/Partition_problem for details). In fact a solution to this can solve that one (take an array, pad with 0s, and then solve this problem), so this problem is NP-hard.
There is a dynamic programming approach that is pseudo-polynomial. For each i from 0 to the size of the array, you keep track of all possible combinations of current sizes for the sub-arrays, and their current sums. As long as there is a limited number of possible sums of subsets of the array, this runs acceptably fast.
The solution that I would have suggested is to just go for "good enough" closeness. First let's consider the simpler problem with all values positive. Then sort by value descending. Take that array in threes. Build up the three subsets by always adding the largest of the triple to the one with the smallest sum, the smallest to the one with the largest, and the middle to the middle. You will end up dividing the array evenly, and the difference will be no more than the value of the third smallest element.
For the general case you can divide into positive and negative, use the above approach on each, and then brute force all combinations of a group of positives, a group of negatives, and the few leftover values in the middle that did not divide evenly.
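A Python sketch of the positive-values heuristic described above (names are mine):

```python
def split_in_three(nums):
    """Greedy 'good enough' three-way split; assumes all values are positive."""
    parts, sums = [[], [], []], [0, 0, 0]
    ordered = sorted(nums, reverse=True)
    for i in range(0, len(ordered), 3):
        triple = ordered[i:i + 3]                          # descending order
        lightest_first = sorted(range(3), key=lambda s: sums[s])
        # largest of the triple to the smallest sum, smallest to the largest
        for value, s in zip(triple, lightest_first):
            parts[s].append(value)
            sums[s] += value
    return parts, sums

print(split_in_three([9, 8, 7, 6, 5, 4, 3, 2, 1]))  # sums come out as 16, 15, 14
```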
Here are details on a dynamic programming solution, if you are interested. The running time and memory usage are O(n*(sum)^2), where n is the size of your array and sum is the sum of the absolute values of your array values.
For each array index j from 1 to n, store all the possible values you can get for your 3 subset sums when you split the array from index 1 to j into 3 subsets. Also, for each possibility, store one possible way to split the array to get those 3 sums.
Then, to extend this information from 1 to j to 1 to (j+1), simply take each possible combination of 3 sums for splitting 1 to j and form the 3 combinations of 3 sums you get when you choose to add the (j+1)th array element to any one of the 3 subsets.
Finally, when you reach j = n, go through the set of all combinations of 3 subset sums you can get when you split array positions 1 to n into 3 sets, and choose the one whose maximum deviation from sum/3 is minimized.
At first this may seem like O(n*(sum)^3) complexity, but for each j and each combination of the first 2 subset sums, the 3rd subset sum is uniquely determined (because you are not allowed to omit any elements of the array). Thus the complexity really is O(n*(sum)^2).
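And a compact sketch of that dynamic program in Python (it stores one witness split per reachable pair of subset sums; the third sum is implied, which is what keeps the state space at O((sum)^2)):

```python
def best_three_split(nums):
    """Pseudo-polynomial DP over reachable (sum1, sum2) pairs."""
    states = {(0, 0): []}            # (sum1, sum2) -> one assignment reaching it
    for x in nums:
        nxt = {}
        for (s1, s2), assign in states.items():
            for label, key in ((0, (s1 + x, s2)),
                               (1, (s1, s2 + x)),
                               (2, (s1, s2))):
                if key not in nxt:
                    nxt[key] = assign + [label]
        states = nxt
    total, target = sum(nums), sum(nums) / 3.0

    def deviation(key):
        s1, s2 = key                 # the third sum is total - s1 - s2
        return max(abs(s1 - target), abs(s2 - target),
                   abs(total - s1 - s2 - target))

    best = min(states, key=deviation)
    return best, states[best]        # two of the sums, plus a subset label per element
```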

Algorithm to generate a 'nearly sorted' or 'k sorted' list?

I want to generate some test data to test a function that merges 'k sorted' lists (lists where each element is at most k positions away from its correct sorted position) into a single fully sorted list. I have an approach that works, but I'm not sure how well randomized it is, and I feel there should be a simpler / more elegant way to do this. My current approach:
Generate n random elements paired with an integer index.
Sort random elements.
Set paired index for each element to its sorted position.
Work backwards through the elements, swapping each element with an element a random distance between 1 and k positions behind it in the list. Only swap with the target element if its paired index is its current index (this avoids swapping an element that is already out of place and moving it further than k positions away from where it should be).
Copy the perturbed elements out into another list.
Like I say, this works but I'm interested in alternative / better approaches.
I think you could just fill an array with random integers and then run quicksort on it with a custom stopping condition.
If in a particular quicksort recursion your start and end indexes are less than k apart, then just return instead of continuing to recur.
Because of how quicksort works, every number in the start..end interval belongs somewhere in that region; the worst case is that array[start] might really belong at array[end] (or vice versa) in truly sorted order. So, ensuring that start and end are no more than k apart should be sufficient.
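A Python sketch of this truncated quicksort (Hoare-style partition; names are mine):

```python
import random

def k_sorted_array(n, k, lo=0, hi=10**6):
    a = [random.randint(lo, hi) for _ in range(n)]

    def sort(start, end):            # inclusive bounds
        if end - start < k:          # segment shorter than k: stop recursing
            return
        pivot = a[random.randint(start, end)]
        i, j = start, end
        while i <= j:                # Hoare-style partition
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i, j = i + 1, j - 1
        sort(start, j)
        sort(i, end)

    sort(0, n - 1)
    return a
```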
You can generate an array of random numbers and then h-sort it as in Shellsort, but skip the last sorting passes, where the gap h is less than k.
Step 1: Randomly permute disjoint segments of length k (e.g. 1 to k, k+1 to 2k, ...).
Step 2: Permute again with an offset t, swapping only in ways that don't break the k-sorted property (segments 1+t to k+t, k+1+t to 2k+t, ...), where t is a number between 1 and k (preferably around k/2).
Probably repeat step 2 multiple times with different values of t.
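A sketch of these two steps in Python, starting from a sorted base array so that each value doubles as its correct position (the acceptance test in step 2 is my reading of "permute conditionally"):

```python
import random

def k_sorted_by_segments(n, k, passes=3):
    a = list(range(n))               # sorted base: value == correct position
    # Step 1: shuffle disjoint segments of length k
    for s in range(0, n, k):
        block = a[s:s + k]
        random.shuffle(block)
        a[s:s + k] = block
    # Step 2: mix across segment boundaries with an offset t, swapping
    # neighbors only when both stay within k of their sorted positions
    for _ in range(passes):
        t = random.randint(1, k)
        for i in range(t, n - 1):
            j = i + 1
            if abs(a[i] - j) <= k and abs(a[j] - i) <= k and random.random() < 0.5:
                a[i], a[j] = a[j], a[i]
    return a
```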
If I understand the problem, you want an algorithm to randomly pick a single k-sorted list of length n, uniformly selected from the universe U of all k-sorted lists of length n. (You will then run this algorithm m times to produce m lists as input test data.)
The first step is to count them: what is the size |U| of the universe?
The next step is to enumerate them. Create any one-to-one mapping F between the integers (1,2,...,|U|) and k-sorted lists of length n.
Then randomly select an integer x between 1 and |U| inclusive, and then apply F(x) to get the list.

How to find the lowest value possible of a series of ints?

I have a sequence of integers (positive and negative) like this one:
12,-54,32,1,-2,-4,-8,12,56,-22,-21,4,17,35
And I need to find the worst result possible (the smallest sum of values) taking any contiguous subsequence of this sequence (and of course the start index and end index of that subsequence).
Is there a way of doing this that is not 2^n (computing all the possible subsequences one by one)?
For example, with this simple sequence:
1,2,-3,4,-6,4,-10,3,-2
The smallest sum of values would be given by the subsequence:
-6,4,-10 (with start index 4 and end index 6)
The problem of finding the minimum can be transformed into a maximum search by changing the sign of each item.
For the maximum subsequence there exist well-known algorithms, e.g. Kadane's algorithm.
You can either transform your list and apply such an algorithm, or slightly modify the algorithm itself (min instead of max, or minus instead of plus) so that it works with your original list.
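For instance, Kadane's algorithm flipped to track a minimum directly (a sketch; indices returned are 0-based, and the input is assumed non-empty):

```python
def min_sum_subarray(xs):
    """Kadane's algorithm with min instead of max; returns (sum, start, end)."""
    best_sum, best_start, best_end = xs[0], 0, 0
    cur_sum, cur_start = xs[0], 0
    for i in range(1, len(xs)):
        if cur_sum > 0:              # a positive running prefix only hurts a minimum
            cur_sum, cur_start = xs[i], i
        else:
            cur_sum += xs[i]
        if cur_sum < best_sum:
            best_sum, best_start, best_end = cur_sum, cur_start, i
    return best_sum, best_start, best_end

print(min_sum_subarray([1, 2, -3, 4, -6, 4, -10, 3, -2]))  # (-12, 4, 6)
```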

Resources