I was solving a problem that asks for the minimum total amount that needs to be added to the elements of an array so that the bitwise AND of all elements is greater than 0.
For example, given the array [4, 4, 3, 2], the output should be 3 (adding one to the 1st, 2nd and 4th elements).
My approach: first I decided to find the position of the rightmost set bit in each element and check for the overall minimum amount to be added so that the AND is greater than zero, but this is not working. Can anyone help with an alternative algorithm?
Let's solve a slightly different problem first:
What is the minimum number that should be added to value to ensure a 1 in the kth bit position (zero-based)?
We have two cases here:
if value has a 1 at position k, we add 0 (do nothing);
if value has a 0 at position k, we add
min = 100...000 - (value & 11...111)
      <- k zeroes ->      <- k ones ->
that is, min = 2^k - (value & (2^k - 1)).
Code (C#)
private static long AddToEnsureOne(long value, int position) {
    if ((value & (1L << position)) != 0)
        return 0;
    long shift = 1L << position;
    return shift - (value & (shift - 1));
}
Demo: if we have 3 and we want a 1 at position 2:
0b011
  ^
  we want 1 here
we should add
0b100 - (0b011 & 0b011) == 4 - 3 == 1
let's add: 3 + 1 == 4 == 0b100, which has a 1 at position 2
Now we can scan all 32 positions (assuming a standard 32-bit int); C# code:
private static long MinToAdd(IEnumerable<int> items) {
    long best = 0;
    for (int i = 0; i < 32; ++i) {
        long sum = 0;
        foreach (int item in items)
            sum += AddToEnsureOne(unchecked((uint)item), i); // uint - get rid of sign
        if (i == 0 || sum < best)
            best = sum;
    }
    return best;
}
One can improve the solution by looping not over all 32 positions but only up to the leftmost 1 of the maximum item. Here the maximum is 4, which is 0b100, so the leftmost 1 is at position 2; thus for (int i = 0; i <= 2; ++i) is enough in this context.
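For reference, a rough sketch of that tighter bound (the helpers HighestBitPosition and MinToAddBounded are my own names, not part of the original solution):

private static int HighestBitPosition(uint value) {
    // Position of the most significant set bit (0 for value 0 or 1).
    int position = 0;
    while (value > 1) {
        value >>= 1;
        position++;
    }
    return position;
}

private static long MinToAddBounded(IEnumerable<int> items) {
    uint max = 0;
    foreach (int item in items)
        max = Math.Max(max, unchecked((uint)item));
    int top = HighestBitPosition(max); // 2 for the sample [4, 4, 3, 2]
    long best = 0;
    for (int i = 0; i <= top; ++i) {
        long sum = 0;
        foreach (int item in items)
            sum += AddToEnsureOne(unchecked((uint)item), i);
        if (i == 0 || sum < best)
            best = sum;
    }
    return best;
}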
Simple test:
Console.Write(MinToAdd(new int[] { 4, 4, 3, 2}));
Outcome:
3
Related question: The input is in a sequential file. The file contains at most 4 billion integers. Find a missing integer.
Solution as per my understanding:
1. Make two temporary files, one with the numbers whose leading bit is 0 and the other with those whose leading bit is 1.
2. One of the two MUST (4.3B pigeon-holes and 4B pigeons) have fewer than 2B entries.
3. Pick that file and repeat steps 1 & 2 on the 2nd bit, then on the 3rd bit, and so on.
what is the end condition of this iteration?
Also, the book mentions the efficiency of the algorithm being O(n), but:
1st iteration => n probe operations
2nd iteration => n/2 probe operations
...
n + n/2 + n/4 + ... + 1 => n log n??
Am I missing something?
You'll check both files and pick the one with the fewest elements.
You'll repeat the process until you've gone through all 32 bits, and at the end you'll have a file with 0 elements. This is where one of the missing numbers was supposed to be. So, if you've been keeping track of the bits you've filtered on so far, you'll know what the number is supposed to be.
Note that this finds a (i.e. 'any') missing number. If you are given an (unordered) sequential list of 4 billion (not 2^32 = 4294967296) integers with one specific value missing, which you have to find, this won't work, as the partition containing that missing integer can be discarded right at the beginning.
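Here is a minimal in-memory sketch of that bit-partition idea in C# (using lists instead of the temporary files from the question; the method name FindAnyMissing is my own):

private static uint FindAnyMissing(List<uint> numbers) {
    uint result = 0;
    for (int bit = 31; bit >= 0 && numbers.Count > 0; bit--) {
        var zeros = new List<uint>();
        var ones = new List<uint>();
        foreach (uint n in numbers) {
            if (((n >> bit) & 1) == 0) zeros.Add(n);
            else ones.Add(n);
        }
        // Keep the smaller half: it has fewer pigeons than pigeon-holes,
        // so a missing value must be in there.
        if (zeros.Count <= ones.Count) {
            numbers = zeros;          // this bit of the answer is 0
        } else {
            numbers = ones;
            result |= 1u << bit;      // this bit of the answer is 1
        }
    }
    return result;
}

With 32 rounds and the smaller half kept each time, the total work is n + n/2 + n/4 + ... <= 2n, which is the O(n) bound discussed below.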
Also:
n + n/2 + n/4 + ... 1 <= 2n
Not n log n.
It's a geometric sequence with a = n, r = 1/2, whose partial sum is given by the formula:

    n (1 - (1/2)^m)
    ---------------
       1 - (1/2)

Since 0 < (1/2)^m < 1 for any positive number m (since 0 < 1/2 < 1), we can say (1 - r^m) < 1 and thus the maximum is:

    n * 1       n
    -------  =  ---  =  2n
    1 - 1/2     1/2
If there is only 1 missing value, meaning that you have the following criteria:
File contains all numbers ranging from a lowest value N up to and including a highest value M, except for 1 of those numbers, and each of the numbers present occurs only once (thanks #maraca)
File does not have to be sorted
There is only 1 of those values missing (just making sure)
Then the solution is quite simple:
ADD or XOR together all the numbers in the file.
ADD or XOR together all the numbers you're supposed to have.
The missing number is one minus the other (in the case of ADD) or one XORed with the other (in the case of XOR).
Here is a LINQPad program you can experiment with:
void Main()
{
    var input = new[] { 1, 2, 3, 4, 5, 6, 8, 9, 10 };
    var lowest = input[0];
    var highest = input[0];
    int xor = 0;
    foreach (var value in input)
    {
        lowest = Math.Min(lowest, value);
        highest = Math.Max(highest, value);
        xor ^= value;
    }
    int requiredXor = 0;
    for (int index = lowest; index <= highest; index++)
        requiredXor ^= index;
    var missing = xor ^ requiredXor;
    missing.Dump();
}
Basically, it will:
XOR all values in the file together (value 1)
Find the lowest and highest numbers at the same time
XOR all values from lowest up to highest (value 2)
XOR the two values (value 1 and value 2) together to find the missing value
This method will not detect the missing value if it is the lowest value - 1 or the highest value + 1. For instance, if the file is supposed to hold 1..10 but is missing 10 or 1, the above approach will not find it.
This solution is O(2n) (we loop the numbers twice), which translates to O(n).
Here is a more complete example showing both the ADD and the XOR solution (again in LINQPad):
void Main()
{
    var input = new[] { 1, 2, 3, 4, 5, 6, 8, 9, 10 };
    MissingXOR(input).Dump("xor");
    MissingADD(input).Dump("add");
}

public static int MissingXOR(int[] input)
{
    var lowest = input[0];
    var highest = input[0];
    int xor = 0;
    foreach (var value in input)
    {
        lowest = Math.Min(lowest, value);
        highest = Math.Max(highest, value);
        xor ^= value;
    }
    int requiredXor = 0;
    for (int index = lowest; index <= highest; index++)
        requiredXor ^= index;
    return xor ^ requiredXor;
}

public static int MissingADD(int[] input)
{
    var lowest = input[0];
    var highest = input[0];
    int sum = 0;
    foreach (var value in input)
    {
        lowest = Math.Min(lowest, value);
        highest = Math.Max(highest, value);
        sum += value;
    }
    var sumToHighest = (highest * (highest + 1)) / 2;
    var sumToJustBelowLowest = (lowest * (lowest - 1)) / 2;
    int requiredSum = sumToHighest - sumToJustBelowLowest;
    return requiredSum - sum;
}
I am trying to solve a Data Structures and Algorithms problem, which states that, given an array of 1s and 0s, you must group the digits such that all 0s are together and all 1s are together. What is the minimum number of swaps required to accomplish this if one can only swap two adjacent elements? It does not matter which group is at which end.
E.g.:
[0,1,0,1] -> [0,0,1,1], 1 swap
[1,1,1,1,0,1,0] -> [1,1,1,1,1,0,0], 1 swap
[1,0,1,0,0,0,0,1] -> [1,1,1,0,0,0,0,0], 6 swaps
Note that this is different from the questions asked here:
Find the minimum number of swaps required such that all the 0s and all the 1s are together
I am not sorting the array; I am just trying to group all the 0s and all the 1s together, and it does not matter which group is at which end.
I really have no clue where to even start. Can someone help me?
Let's focus on zeroes. Each swap moves a single zero a single position closer to the final order. Then we can find the number of swaps by finding the number of displaced zeroes, and the severity of the displacement.
Let's start by assuming that the zeroes end up at the start of the array. We'll keep track of two things: count_of_ones, and displacement, both initialized to zero. Each time we find a 1, we increment count_of_ones. Each time we find a 0, we increase displacement by count_of_ones.
Then we do this in the other direction. Both ways are linear, so this is linear.
E.g. 1010001
1: count_of_ones: 0 -> 1
0: displacement: 0 -> 1
1: count_of_ones: 1 -> 2
0: displacement: 1 -> 3
0: displacement: 3 -> 5
0: displacement: 5 -> 7
1: count_of_ones: 2 -> 3
The answer for this direction is the final displacement, or 7. Going the other way we get 5. Final answer is 5.
In fact, the sum of the final displacements (starting vs ending with all zeroes) will always equal num_zeroes * num_ones. This halves the work (though it's still linear).
From the comments it seems some people didn't understand my answer. Here's a Ruby implementation to make things clearer.
def find_min_swaps(arr)
  count_of_ones = 0
  displacement = 0
  arr.each do |v|
    count_of_ones += 1 if v == 1
    displacement += count_of_ones if v == 0
  end
  count_of_zeroes = arr.length - count_of_ones
  reverse_displacement = count_of_ones * count_of_zeroes - displacement
  return [displacement, reverse_displacement].min
end
The zeroes end up on the left if displacement < reverse_displacement, on the right if displacement > reverse_displacement, and on either end if the two are equal.
Let SUM0 be the sum of the (0-based) indexes of all the zeros, and let SUM1 be the sum of the indexes of all the ones. Every time you swap 10 -> 01, SUM0 goes down by one, and SUM1 goes up by one. They go the other way when you swap 01 -> 10.
Let's say you have N0 zeros and N1 ones. If the zeros were packed together at the start of the array, then you would have SUM0 = N0*(N0-1)/2. That's the smallest SUM0 you can have.
Since a single adjacent swap can reduce SUM0 by exactly one, it takes exactly SUM0 - N0*(N0-1)/2 swaps to pack the zeros together at the front. Similarly, it takes SUM1 - N1*(N1-1)/2 swaps to pack the ones together at the front.
Your answer is the smaller of these numbers: min( SUM0 - N0*(N0-1)/2 , SUM1 - N1*(N1-1)/2 )
Those values are all easy to calculate in linear time.
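A short sketch of that calculation (in C#; the method name MinSwapsBySums is mine, not from the original answer):

private static long MinSwapsBySums(int[] arr) {
    long sum0 = 0, sum1 = 0, n0 = 0, n1 = 0;
    for (int i = 0; i < arr.Length; i++) {
        if (arr[i] == 0) { sum0 += i; n0++; }
        else             { sum1 += i; n1++; }
    }
    long packZerosLeft = sum0 - n0 * (n0 - 1) / 2; // swaps to pack zeros at the front
    long packOnesLeft  = sum1 - n1 * (n1 - 1) / 2; // swaps to pack ones at the front
    return Math.Min(packZerosLeft, packOnesLeft);
}

For [1, 0, 1, 0, 0, 0, 0, 1] this gives min(9, 6) = 6, matching the example above.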
A simple approach using Bubble Sort, which takes O(n^2), would be this:
public class MainClass {
    public static void main(String[] args) {
        int[] arr = new int[]{1, 0, 0, 0, 0, 0, 0, 1, 0};
        int minSwaps = minimumSwaps(arr);
        System.out.println("Minimum swaps required: " + minSwaps);
    }

    public static int minimumSwaps(int[] array) {
        int[] arr1 = array.clone(), arr2 = array.clone();
        int swapsForRight = 0, swapsForLeft = 0;
        boolean sorted = false;
        // Bubble sort ascending: zeros end up on the left, ones on the right.
        while (!sorted) {
            sorted = true;
            for (int i = 0; i < arr1.length - 1; i++) {
                if (arr1[i + 1] < arr1[i]) {
                    int temp = arr1[i + 1];
                    arr1[i + 1] = arr1[i];
                    arr1[i] = temp;
                    sorted = false;
                    swapsForRight++;
                }
            }
        }
        sorted = false;
        // Bubble sort descending: ones end up on the left, zeros on the right.
        while (!sorted) {
            sorted = true;
            for (int i = 0; i < arr2.length - 1; i++) {
                if (arr2[i + 1] > arr2[i]) {
                    int temp = arr2[i + 1];
                    arr2[i + 1] = arr2[i];
                    arr2[i] = temp;
                    sorted = false;
                    swapsForLeft++;
                }
            }
        }
        return swapsForLeft > swapsForRight ? swapsForRight : swapsForLeft;
    }
}
I have an algorithm problem. I am trying to find all unique subsets of values from a larger set of values.
For example, say I have the set {1,3,7,9}. What algorithm can I use to find all subsets of size 3?
{1,3,7}
{1,3,9}
{1,7,9}
{3,7,9}
Subsets should not repeat, and order is unimportant; the set {1,2,3} is the same as the set {3,2,1} for these purposes. Pseudocode (or the regular kind) is encouraged.
A brute force approach is obviously possible, but not desired.
For example such a brute force method would be as follows.
for i = 0 to size
    for j = i + 1 to size
        for k = j + 1 to size
            subset[] = {set[i], set[j], set[k]}
Unfortunately this requires an additional loop for each element desired in the subset, which is undesirable if, for example, you want a subset of 8 elements.
Some Java code using recursion.
The basic idea is to try to swap each element with the current position and then recurse on the next position (but we also need startPos here to indicate what the last position that we swapped with was, otherwise we'll get a simple permutation generator). Once we've got enough elements, we print all those and return.
static void subsets(int[] arr, int pos, int depth, int startPos)
{
    if (pos == depth)
    {
        for (int i = 0; i < depth; i++)
            System.out.print(arr[i] + " ");
        System.out.println();
        return;
    }
    for (int i = startPos; i < arr.length; i++)
    {
        // optimization - not enough elements left
        if (depth - pos + i > arr.length)
            return;

        // swap pos and i
        int temp = arr[pos];
        arr[pos] = arr[i];
        arr[i] = temp;

        subsets(arr, pos+1, depth, i+1);

        // swap pos and i back - otherwise things just gets messed up
        temp = arr[pos];
        arr[pos] = arr[i];
        arr[i] = temp;
    }
}

public static void main(String[] args)
{
    subsets(new int[]{1,3,7,9}, 0, 3, 0);
}
Prints:
1 3 7
1 3 9
1 7 9
3 7 9
A more detailed explanation (through example):
First things first - in the above code, an element is kept in the same position by performing a swap with itself - it doesn't do anything, just makes the code a bit simpler.
Also note that at each step we revert all swaps made.
Say we have input 1 2 3 4 5 and we want to find subsets of size 3.
First we just take the first 3 elements - 1 2 3.
Then we swap the 3 with 4 and 5 respectively,
and the first 3 elements gives us 1 2 4 and 1 2 5.
Note that we've just finished doing all sets containing 1 and 2 together.
Now we want sets of the form 1 3 X, so we swap 2 and 3 and get 1 3 2 4 5. But we already have sets containing 1 and 2 together, so here we want to skip 2. So we swap 2 with 4 and 5 respectively, and the first 3 elements gives us 1 3 4 and 1 3 5.
Now we swap 2 and 4 to get 1 4 3 2 5. But we want to skip 3 and 2, so we start from 5. We swap 3 and 5, and the first 3 elements gives us 1 4 5.
And so on.
Skipping elements here is perhaps the most complex part. Note that whenever we skip elements, it just involves continuing from after the position we swapped with (when we swapped 2 and 4, we continued from after the 4 was). This is correct because there's no way an element can get to the left of the position we're swapping with without having been processed, nor can a processed element get to the right of that position, because we process all the elements from left to right.
Think in terms of the for-loops
It's perhaps the simplest to think of the algorithm in terms of for-loops.
for i = 0 to size
    for j = i + 1 to size
        for k = j + 1 to size
            subset[] = {set[i], set[j], set[k]}
Each recursive step would represent a for-loop.
startPos is 0, i+1 and j+1 respectively.
depth is how many for-loops there are.
pos is which for-loop we're currently at.
Since we never go backwards in a deeper loop, it's safe to use the start of the array as storage for our elements, as long as we revert the changes when we're done with an iteration.
If you are interested only in subsets of size 3, then this can be done using three simple nested for loops.
for ( int i = 0; i < arr.size(); i++ )
    for ( int j = i+1; j < arr.size(); j++ )
        for ( int k = j+1; k < arr.size(); k++ )
            std::cout << "{ " << arr[i] << "," << arr[j] << "," << arr[k] << " }";
For a more general case you will have to use recursion.
void recur( std::set<int> soFar, std::set<int> remaining, int subSetSize ) {
    if ((int)soFar.size() == subSetSize) {
        for (int v : soFar) std::cout << v << " ";
        std::cout << "\n";
        return;
    }
    // Take each element out of remaining, push it into soFar and recurse.
    // Only the elements after the chosen one stay in remaining, so the
    // same subset is never generated twice.
    for (int v : remaining) {
        std::set<int> newSoFar = soFar;
        newSoFar.insert(v);
        std::set<int> newRemaining(remaining.upper_bound(v), remaining.end());
        recur(newSoFar, newRemaining, subSetSize);
    }
}
I have been trying to formulate an algorithm to solve a problem. In this problem, we have a photo containing some buildings. The photo is divided into n vertical regions (called pieces) and the height of a building in each piece is given.
One building may span several consecutive pieces, but each piece can only contain one visible building, or no buildings at all. We are required to find the minimum number of buildings.
e.g. given:
3 (no. of pieces)
1 2 3 (heights) -> ans = 3
3
1 2 1 -> ans = 2
6
1 2 3 1 2 3 -> ans = 5 (a figure would help show the overlap).
Though I feel like I get it, I am unable to get a solid algorithm for it. Any ideas?
You can find the lowest number in the given array and account for all occurrences of this number. This will split the array into multiple subarrays, and now you need to recursively solve the problem for each of them.
In the example:
1 2 3 1 2 3 (total = 0)
Smallest number is 1:
x 2 3 x 2 3 (total = 1)
Now you have 2 subarrays.
Solve for the first one - the smallest number is 2:
x 3 (total = 2)
Finally you have a single element: total = 3
Solving the other subarray makes it 5.
Here is some code in C#:
int Solve(int[] ar, int start, int end) {
    // base for the recursion -> the subarray has a single element
    if (end - start == 1) return 1;
    // base for the recursion -> the subarray is empty
    if (end - start < 1) return 0;

    // find the minimum height in [start, end)
    int m = int.MaxValue;
    for (int i = start; i < end; i++)
        if (ar[i] < m) m = ar[i];

    // one building of the minimum height spans the whole range
    int total = 1;

    // find the subarrays between occurrences of the minimum
    // and add their contributions recursively
    int subStart = start;
    for (int subEnd = start; subEnd < end; subEnd++) {
        if (ar[subEnd] == m) {
            total += Solve(ar, subStart, subEnd);
            subStart = subEnd + 1;
        }
    }
    // the trailing subarray after the last occurrence of the minimum
    total += Solve(ar, subStart, end);
    return total;
}
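A quick usage sketch against the examples from the question (assuming Solve is reachable, e.g. as a static method or local function):

Console.WriteLine(Solve(new int[] { 1, 2, 3 }, 0, 3));          // 3
Console.WriteLine(Solve(new int[] { 1, 2, 1 }, 0, 3));          // 2
Console.WriteLine(Solve(new int[] { 1, 2, 3, 1, 2, 3 }, 0, 6)); // 5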
Given an array of length N, how will you find the minimum-length contiguous sub-array whose sum is S and whose product is P?
For example, given 5 6 1 4 6 2 9 7: for S = 17, Ans = [6, 2, 9]; for P = 24, Ans = [4, 6].
Just go from left to right and keep a running sum; whenever the sum exceeds S, throw away numbers from the left.
import java.util.Arrays;

public class test {
    public static void main(String[] args) {
        int[] array = {5, 6, 1, 4, 6, 2, 9, 7};
        int length = array.length;
        int S = 17;

        int sum = 0;                       // current sum of sub array, assume all positive
        int start = 0;                     // current start of sub array
        int minLength = array.length + 1;  // length of minimum sub array found
        int minStart = 0;                  // start of minimum sub array found

        for (int index = 0; index < length; index++) {
            sum = sum + array[index];

            // Find by adding to the right
            if (sum == S && index - start + 1 < minLength) {
                minLength = index - start + 1;
                minStart = start;
            }

            while (sum >= S) {
                sum = sum - array[start];
                start++;

                // Find by removing from the left
                if (sum == S && index - start + 1 < minLength) {
                    minLength = index - start + 1;
                    minStart = start;
                }
            }
        }

        // Found
        if (minLength != length + 1) {
            System.out.println(Arrays.toString(Arrays.copyOfRange(array, minStart, minStart + minLength)));
        }
    }
}
For your example, I think the condition is OR (a sub-array whose sum is S, or one whose product is P), not both at once.
The product case is no different from the sum case, except for the arithmetic used.
pseudocode:
subStart = 0;
Sum = 0;
for (i = 0; i < array.Length; i++)
    Sum = Sum + array[i];
    if (Sum < targetSum) continue;
    if (Sum == targetSum) result = min(result, i - subStart + 1);
    while (Sum >= targetSum)
        Sum = Sum - array[subStart];
        subStart++;
        if (Sum == targetSum) result = min(result, i - subStart + 1);  // re-check after shrinking
I think that'll find the result with one pass through the array. There's a bit of detail missing there in the result value. Needs a bit more complexity there to be able to return the actual subarray if needed.
To find the Product sub-array, just substitute multiplication/division for addition/subtraction in the above algorithm, as in the sketch below.
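A hedged sketch of that substitution (in C#; the method name MinProductSubarray is mine). Like the sum version, it assumes all elements are positive, since the shrinking step divides:

private static int[] MinProductSubarray(int[] array, long P) {
    long product = 1;
    int start = 0, minLength = array.Length + 1, minStart = 0;
    for (int index = 0; index < array.Length; index++) {
        product *= array[index];                     // "add" to the right
        if (product == P && index - start + 1 < minLength) {
            minLength = index - start + 1;
            minStart = start;
        }
        while (product >= P && start <= index) {     // "subtract" from the left
            product /= array[start];
            start++;
            if (product == P && index - start + 1 < minLength) {
                minLength = index - start + 1;
                minStart = start;
            }
        }
    }
    if (minLength == array.Length + 1) return new int[0];  // not found
    int[] result = new int[minLength];
    Array.Copy(array, minStart, result, 0, minLength);
    return result;
}

For the example above, MinProductSubarray(new[] { 5, 6, 1, 4, 6, 2, 9, 7 }, 24) yields [4, 6].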
Put two indices on the array. Let's call them i and j. Initially j = 1 and i = 0. If the product between i and j is less than P, increment j. If it is greater than P, increment i. If we get something equal to P, sum up the elements (instead of summing up every time, maintain an array where S(i) is the sum of everything to the left of it; compute the sum from i to j as S(j) - S(i)) and see whether you get S. Stop when j falls out of the array length.
This is O(n).
You can use a hashmap to find the answer for product in O(N) time with extra space.
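One way to read that hashmap idea, sketched in C# under the same assumptions as above (positive elements, running products that fit in a long); the method name MinProductSubarrayLength is mine:

private static int MinProductSubarrayLength(int[] array, long P) {
    // Map a prefix product to the latest index where it occurred;
    // the empty prefix (product 1) occurs at index 0.
    var lastIndexOfPrefix = new Dictionary<long, int> { [1L] = 0 };
    long prefix = 1;
    int best = int.MaxValue;
    for (int j = 1; j <= array.Length; j++) {
        prefix *= array[j - 1];
        // subarray (i, j] has product P  <=>  prefix at i == prefix at j divided by P
        if (prefix % P == 0 && lastIndexOfPrefix.TryGetValue(prefix / P, out int i))
            best = Math.Min(best, j - i);
        lastIndexOfPrefix[prefix] = j; // keep the latest index for the shortest window
    }
    return best == int.MaxValue ? -1 : best;
}

For the example array and P = 24 this returns 2, corresponding to [4, 6].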