Generating Ascending Sequence 2^p*3^q - algorithm

I was interested in implementing a specific Shellsort method I read about that has the same time complexity as a bitonic sort. However, it requires the gap sequence to be the numbers in [1, N-1] of the form 2^p * 3^q for non-negative integers p and q; in layman's terms, all the numbers in that range whose only prime factors are 2 and 3. Is there a relatively efficient method for generating this sequence?

Numbers of that form are called 3-smooth. Dijkstra studied the closely related problem of generating 5-smooth or regular numbers, proposing an algorithm that generates the sequence S of 5-smooth numbers by starting S with 1 and then doing a sorted merge of the sequences 2S, 3S, and 5S. Here's a rendering of this idea in Python for 3-smooth numbers, as an infinite generator.
def threesmooth():
    S = [1]
    i2 = 0  # current index in 2S
    i3 = 0  # current index in 3S
    while True:
        yield S[-1]
        n2 = 2 * S[i2]
        n3 = 3 * S[i3]
        S.append(min(n2, n3))
        i2 += n2 <= n3
        i3 += n2 >= n3
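For the Shellsort use case you can then slice the infinite generator, e.g. with itertools.takewhile (a usage sketch reusing the threesmooth generator above):

from itertools import takewhile

# All 3-smooth gaps in [1, N-1], here for N = 100
gaps = list(takewhile(lambda v: v < 100, threesmooth()))
print(gaps)  # [1, 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 27, 32, 36, 48, 54, 64, 72, 81, 96]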

The simplest method I can think of is to run a nested loop over p and q and then sort the result. In Python:
N = 100
products_of_powers_of_2and3 = []
power_of_2 = 1
while power_of_2 < N:
    product_of_powers_of_2and3 = power_of_2
    while product_of_powers_of_2and3 < N:
        products_of_powers_of_2and3.append(product_of_powers_of_2and3)
        product_of_powers_of_2and3 *= 3
    power_of_2 *= 2
products_of_powers_of_2and3.sort()
print(products_of_powers_of_2and3)
Result:
[1, 2, 3, 4, 6, 8, 9, 12, 16, 18, 24, 27, 32, 36, 48, 54, 64, 72, 81, 96]
(Before sorting, products_of_powers_of_2and3 is
[1, 3, 9, 27, 81, 2, 6, 18, 54, 4, 12, 36, 8, 24, 72, 16, 48, 32, 96, 64].)
Given that the size of products_of_powers_of_2and3 is on the order of log2(N) * log3(N), the list doesn't grow very fast, and sorting it isn't particularly expensive. E.g. even for N = 1 million the list is very short, 142 items, so you don't need to worry.
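If you want to check that claim yourself, here is a quick count (count_3smooth is a hypothetical helper wrapping the nested loop above):

def count_3smooth(N):
    """Count numbers of the form 2**p * 3**q below N."""
    count = 0
    power_of_2 = 1
    while power_of_2 < N:
        product = power_of_2
        while product < N:
            count += 1
            product *= 3
        power_of_2 *= 2
    return count

print(count_3smooth(10**6))  # 142, as stated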

You can do it very easily in JavaScript:
arr = [];
n = 20;

function generateSeries() {
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      arr.push(Math.pow(2, i) * Math.pow(3, j));
    }
  }
  sort();
}

function sort() {
  arr.sort((a, b) => a - b);
}

function solution(N) {
  arr = [];
  if (N >= 0 && N <= 200) {
    generateSeries();
    console.log("arr >>>>>", arr);
    console.log("result >>>>>", arr[N]);
    return arr[N];
  }
}

Another nested-loop variant in Python, with early exits once the products exceed the limit:
N = 200
res = []
a, b = 2, 3
for i in range(N):
    temp1 = a ** i
    if temp1 > N:
        break  # 2**i already exceeds the limit
    for j in range(N):
        temp = temp1 * b ** j
        if temp > N:
            break  # the 3**j factor has grown past the limit
        res.append(temp)
res = sorted(res)
print(res)


Why "studentRequired < m" returns true in minimum allocation problem

In the book allocation problem, in the isPossible() function, why does curr_min count as a viable solution (we return true) even when studentsRequired is less than m (the number of students we need to allocate books to)? Doesn't that mean the books are not allocated to all the students?
For example, if the array is [5, 82, 52, 66, 16, 37, 38, 44, 1, 97, 71, 28, 37, 58, 77, 97, 94, 4, 9] with m = 16, the maximum value studentsRequired reaches is 13. Doesn't that mean that only 13 students got books when 16 should have?
Here is the code in JS; for other languages (C++, Java, Python) please see the GFG page.
function isPossible(arr, n, m, curr_min) {
  let studentsRequired = 1;
  let curr_sum = 0;
  for (let i = 0; i < n; i++) {
    if (arr[i] > curr_min) return false;
    if (curr_sum + arr[i] > curr_min) {
      studentsRequired++;
      curr_sum = arr[i];
      if (studentsRequired > m) return false;
    } else {
      curr_sum += arr[i];
    }
  }
  return true;
}

function findPages(arr, n, m) {
  let sum = 0;
  if (n < m) return -1;
  for (let i = 0; i < n; i++) sum += arr[i];
  let start = 0,
      end = sum;
  let result = Number.MAX_VALUE;
  while (start <= end) {
    let mid = Math.floor((start + end) / 2);
    if (isPossible(arr, n, m, mid)) {
      result = Math.min(result, mid);
      end = mid - 1;
    } else {
      start = mid + 1;
    }
  }
  return result;
}

const ans = findPages(
  [5, 82, 52, 66, 16, 37, 38, 44, 1, 97, 71, 28, 37, 58, 77, 97, 94, 4, 9],
  19,
  16
);
console.log("Ans : ", ans);
What I think is happening: when I dry-run the code for the test case arr = [10, 20, 30, 40] with m = 4, the result is S1 = [10, 20], S2 = [30], S3 = [40], and we did not allocate a book to S4.
Are we assuming that students with more books (like S1 with [10, 20]) can transfer their last book (20 in this case) to the student on their right? Then S2 would have [20, 30], so 30 would be transferred to S3, and in the end S3 would transfer its last book to S4, making the condition "every student must have a book" true, as now S1 = [10], S2 = [20], S3 = [30], S4 = [40]? Or is it something else?
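That transfer intuition can be checked directly: cutting a segment in two never increases any segment sum, and since findPages rejects n < m there are always enough books to cut a valid split into exactly m pieces. A small illustrative sketch in Python (hypothetical helper, not part of the original code):

def min_students_needed(books, limit):
    # Greedy segment count with each segment sum <= limit (mirrors isPossible)
    students, curr = 1, 0
    for pages in books:
        if curr + pages > limit:
            students, curr = students + 1, pages
        else:
            curr += pages
    return students

books = [10, 20, 30, 40]
print(min_students_needed(books, 40))  # 3: [10, 20] [30] [40]
# Splitting [10, 20] into [10] and [20] gives 4 segments whose max sum is
# still 40, so needing <= m students implies a valid m-way allocation exists.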

Generating integer partition by its number

I'm trying to generate the partition of a given integer N that has number K in lexicographic order, e.g. for N = 5, K = 3 we get:
5 = 1 + 1 + 1 + 1 + 1
5 = 1 + 1 + 1 + 2
5 = 1 + 1 + 3
5 = 1 + 2 + 2
5 = 1 + 4
5 = 2 + 3
5 = 5
And the third one is 1 + 1 + 3.
How can I generate this without generating every partition (I'm working in C, but most of all I need the algorithm)?
My plan: find the maximal number in the partition (assuming we can compute the number of partitions d[i][j], where i is the number and j is the maximal integer allowed in its parts), then decrease both the original number and the index we are looking for. So yes, I'm trying to use dynamic programming; a sketch of the table is below. Still working on the code.
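To make the table concrete, here is that recurrence in illustrative Python (the same recurrence appears as m[i][j] in the C attempt below):

def partition_counts(n):
    # d[i][j] = number of partitions of i whose parts are all <= j
    d = [[0] * (n + 1) for _ in range(n + 1)]
    for j in range(n + 1):
        d[0][j] = 1  # only the empty partition sums to 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            d[i][j] = d[i][j - 1]       # no part equal to j
            if i >= j:
                d[i][j] += d[i - j][j]  # at least one part equal to j
    return d

print(partition_counts(5)[5][5])  # 7, matching the seven partitions listed above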
My C attempt so far doesn't work at all:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

FILE *F1, *F2;

int main()
{
    long long i, j, p, n, k, l, m[102][102];
    short c[102];
    F1 = fopen("num2part.in", "r");
    F2 = fopen("num2part.out", "w");
    n = 0;
    fscanf(F1, "%lld %lld", &n, &k);
    p = 0;
    m[0][0] = 1;
    for (i = 0; i <= n; i++)
    {
        for (j = 1; j <= i; j++)
        {
            m[i][j] = m[i - j][j] + m[i][j - 1];
        }
        for (j = i + 1; j <= n; j++)
        {
            m[i][j] = m[i][i];
        }
    }
    l = n;
    p = n;
    j = n;
    while (k > 0)
    {
        while (k < m[l][j])
        {
            if (j == 0)
            {
                while (l > 0)
                {
                    c[p] = 1;
                    p--;
                    l--;
                }
                break;
            }
            j--;
        }
        k -= m[l][j];
        c[p] = j + 1;
        p--;
        l -= c[p + 1];
    }
    // printing answer here; the answer is contained in the array from c[p] to c[n]
}
Here is some example Python code that generates the partitions:
cache = {}

def p3(n, val=1):
    """Return the number of ascending partitions of n if all values are >= val."""
    if n == 0:
        return 1  # no choice left in partitioning
    key = n, val
    if key in cache:
        return cache[key]
    # Choose the next value x
    r = sum(p3(n - x, x) for x in range(val, n + 1))
    cache[key] = r
    return r

def ascending_partition(n, k):
    """Generate the k-th (0-indexed) lexicographically ordered partition of n into integer parts."""
    P = []
    val = 1  # all values must be >= this
    while n:
        # Choose the next number
        for x in range(val, n + 1):
            count = p3(n - x, x)
            if k >= count:
                # keep looking for the correct digit
                k -= count
            elif count:  # check that there are some valid partitions with this digit
                # This must be the correct digit for this location
                P.append(x)
                n -= x
                val = x
                break
    return P

n = 5
for k in range(p3(n)):
    print(k, ascending_partition(n, k))
It prints:
0 [1, 1, 1, 1, 1]
1 [1, 1, 1, 2]
2 [1, 1, 3]
3 [1, 2, 2]
4 [1, 4]
5 [2, 3]
6 [5]
This can be used to generate an arbitrary partition without generating all the intermediate ones. For example, there are 9253082936723602 partitions of 300.
print(ascending_partition(300, 10**15))
prints
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 4, 4, 4, 4, 4, 4, 5, 7, 8, 8, 11, 12, 13, 14, 14, 17, 17, 48, 52]
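As a sanity check of that partition count, the memoized counter above agrees:

print(p3(300))  # 9253082936723602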
def _yieldParts(num, lt):
    """Generate the partitions of num as tuples with parts no larger than lt."""
    if not num:
        yield ()
    for i in range(min(num, lt), 0, -1):
        for parts in _yieldParts(num - i, i):
            yield (i,) + parts

def partition(number, kSum, maxIntInTuple):
    """Print the partitions of number into at most kSum parts, zero-padded to
    exactly kSum entries; maxIntInTuple caps the largest integer in a tuple."""
    for p in _yieldParts(number, maxIntInTuple):
        if len(p) <= kSum:
            while len(p) < kSum:
                p += (0,)
            print(p)

partition(40, 8, 40)
Output:
-------
(40, 0, 0, 0, 0, 0, 0, 0)
(39, 1, 0, 0, 0, 0, 0, 0)
...

closest to zero [absolute value] sum of consecutive subsequence of a sequence of real values

This is an algorithmic playground for me! I've seen variations of this problem tackling the maximum consecutive subsequence, but this is yet another variation.
The formal definition: given A[1..n], find i and j so that abs(A[i] + A[i+1] + ... + A[j]) is closest to zero among all choices.
I'm wondering how to get an O(n log^2 n), or even an O(n log n), solution.
1. Calculate the cumulative sums.
2. Sort them.
3. Find the adjacent pair with the least difference.
The sort dominates, so the whole algorithm is O(n log n).
function leastSubsequenceSum(values) {
  var n = values.length;
  // Store the cumulative sum along with the index.
  var sums = [];
  sums[0] = { index: 0, sum: 0 };
  for (var i = 1; i <= n; i++) {
    sums[i] = {
      index: i,
      sum: sums[i - 1].sum + values[i - 1]
    };
  }
  // Sort by cumulative sum.
  sums.sort(function (a, b) {
    return a.sum == b.sum ? b.index - a.index : a.sum - b.sum;
  });
  // Find the sequential pair with the least difference.
  var bestI = -1;
  var bestDiff = null;
  for (var i = 1; i <= n; i++) {
    var diff = Math.abs(sums[i - 1].sum - sums[i].sum);
    if (bestDiff === null || diff < bestDiff) {
      bestDiff = diff;
      bestI = i;
    }
  }
  // Just to make sure start < stop.
  var start = sums[bestI - 1].index;
  var stop = sums[bestI].index;
  if (start > stop) {
    var tmp = start;
    start = stop;
    stop = tmp;
  }
  return [start, stop - 1, bestDiff];
}
Examples:
>>> leastSubsequenceSum([10, -5, 3, -4, 11, -4, 12, 20]);
[2, 3, 1]
>>> leastSubsequenceSum([5, 6, -1, -9, -2, 16, 19, 1, -4, 9]);
[0, 4, 1]
>>> leastSubsequenceSum([3, 16, 8, -10, -1, -8, -3, 10, -2, -4]);
[6, 9, 1]
In the first example, [2, 3, 1] means: sum from index 2 to 3 (inclusive), and you get an absolute sum of 1:
[10, -5, 3, -4, 11, -4, 12, 20]
         ^^^^^

Find common elements in N sorted arrays with no extra space

Given N sorted arrays, each of size N, and without being allowed to use extra space, how would you find their common data efficiently, or with less time complexity?
For example:
1. 10 160 200 500 500
2. 4 150 160 170 500
3. 2 160 200 202 203
4. 3 150 155 160 300
5. 3 150 155 160 301
This is an interview question; I found some similar questions, but they didn't include the extra conditions of the input being sorted and of not being able to use extra memory.
I couldn't think of any solution with complexity below O(n^2 lg n). In that case, I'd prefer to go with the simplest solution that gives me this complexity, which is:
for each element 'v' in row 1
    not_found_flag = false
    for each row 'i' in the remaining set
        perform binary search for 'v' in row 'i'
        if 'v' not found in row 'i'
            not_found_flag = true
            break
    if not not_found_flag
        print element 'v' as it is one of the common elements
We could improve this by comparing the min and max of each row and deciding, based on those, whether a number 'num' can fall between the 'min_num' and 'max_num' of that row.
Binary search: O(log n)
Searching for one number in the other n-1 rows: O(n log n)
Binary search for each number of the first row: O(n^2 log n)
I selected the first row, but we can pick any row; if no element of the picked row is found in any of the (N-1) remaining rows, then we don't really have common data. A Python sketch of this approach follows.
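Something like this (bisect does the O(log n) search; the result list sidesteps the no-extra-space constraint purely for illustration):

from bisect import bisect_left

def contains(row, v):
    # Binary search for v in a sorted row: O(log n)
    i = bisect_left(row, v)
    return i < len(row) and row[i] == v

def common_elements(rows):
    first, rest = rows[0], rows[1:]
    return [v for v in first if all(contains(row, v) for row in rest)]

rows = [
    [10, 160, 200, 500, 500],
    [4, 150, 160, 170, 500],
    [2, 160, 200, 202, 203],
    [3, 150, 155, 160, 300],
    [3, 150, 155, 160, 301],
]
print(common_elements(rows))  # [160]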
It seems this can be done in O(n^2); i.e., just looking at each element once. Note that if an element is common to all the arrays then it must exist in any one of them. Also for purposes of illustration (and since you used the for loop above) I will assume we can keep an index for each of the arrays, but I'll talk about how to get around this later.
Let's call the arrays A_1 through A_N, and use indices starting at 1. Pseudocode:
# Initial index values set to first element of each array
for i = 1 to N:
    x_i = 1
for x_1 = 1 to N:
    val = A_1[x_1]
    print_val = true
    for i = 2 to N:
        while A_i[x_i] < val:
            x_i = x_i + 1
        if A_i[x_i] != val:
            print_val = false
    if print_val:
        print val
Explanation of algorithm. We use the first array (or any arbitrary array) as the reference array, and iterate through all the other arrays in parallel (kind of like the merge step of a merge sort, except with N arrays). Every value of the reference array that is common to all the arrays must be present in all the other arrays. So for each other array (since they are sorted), we increase the index x_i until the value at that index, A_i[x_i], is at least the value we are looking for (we don't care about lesser values; they can't be common). We can do this since the arrays are sorted and thus monotonically nondecreasing. If all the arrays had this value, then we print it; otherwise we increment x_1 in the reference array and keep going. We have to do this even if we don't print the value.
By the end, we've printed all the values that are common to all the arrays, while only having examined each element once.
Getting around the extra storage requirement. There are many ways to do this, but I think the easiest way would be to check the first element of each array and take the max as the reference array A_1. If they are all the same, print that value, and then store the indices x_2 ... x_N as the first element of each array itself.
Java implementation (for brevity, without the extra hack), using your example input:
public static void main(String[] args) {
    int[][] a = {
            { 10, 160, 200, 500, 500 },
            { 4, 150, 160, 170, 500 },
            { 2, 160, 200, 202, 203 },
            { 3, 150, 155, 160, 300 },
            { 3, 150, 155, 160, 301 } };
    int n = a.length;
    int[] x = new int[n];
    for ( ; x[0] < n; x[0]++) {
        int val = a[0][x[0]];
        boolean print = true;
        for (int i = 1; i < n; i++) {
            while (x[i] < n - 1 && a[i][x[i]] < val) x[i]++;
            if (a[i][x[i]] != val) print = false;
        }
        if (print) System.out.println(val);
    }
}
Output:
160
This is an O(n^2) solution in Python; it uses no extra space but destroys the lists:
def find_common(lists):
    num_lists = len(lists)
    first_list = lists[0]
    for j in first_list[::-1]:
        common_found = True
        for i in range(1, num_lists):
            curr_list = lists[i]
            while curr_list[-1] > j:
                curr_list.pop()
            if curr_list[-1] != j:
                common_found = False
                break
        if common_found:
            return j
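For example, with the arrays from the question (remember it consumes the lists and returns only the largest common element):

data = [
    [10, 160, 200, 500, 500],
    [4, 150, 160, 170, 500],
    [2, 160, 200, 202, 203],
    [3, 150, 155, 160, 300],
    [3, 150, 155, 160, 301],
]
print(find_common(data))  # 160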
An O(n^2) Python version that doesn't use extra storage but modifies the original arrays. It lets you store the common elements rather than just printing them:
data = [
    [10, 160, 200, 500, 500],
    [4, 150, 160, 170, 500],
    [2, 160, 200, 202, 203],
    [3, 150, 155, 160, 300],
    [3, 150, 155, 160, 301],
]

for k in range(len(data) - 1):
    A, B = data[k], data[k + 1]
    # x marks overwritten (non-common) slots; -inf keeps the comparisons
    # valid in Python 3 (the original Python 2 code used None here)
    i, j, x = 0, 0, float("-inf")
    while i < len(A) or j < len(B):
        while i < len(A) and (j >= len(B) or A[i] < B[j]):
            A[i] = x
            i += 1
        while j < len(B) and (i >= len(A) or B[j] < A[i]):
            B[j] = x
            j += 1
        if i < len(A) and j < len(B):
            x = A[i]
            i += 1
            j += 1

print(data[-1])
What I'm doing is basically taking every array in the data and comparing it with the next one, element by element, overwriting the elements that are not common.
Here is a Java implementation:
public static Integer[] commonElementsInNSortedArrays(int[][] arrays) {
    int baseIndex = 0, currentIndex = 0, totalMatchFound = 0;
    int[] indices = new int[arrays.length - 1];
    boolean smallestArrayTraversed = false;
    List<Integer> result = new ArrayList<Integer>();
    while (!smallestArrayTraversed && baseIndex < arrays[0].length) {
        totalMatchFound = 0;
        for (int array = 1; array < arrays.length; array++) {
            currentIndex = indices[array - 1];
            while (currentIndex < arrays[array].length && arrays[array][currentIndex] < arrays[0][baseIndex]) {
                currentIndex++;
            }
            if (currentIndex < arrays[array].length) {
                if (arrays[array][currentIndex] == arrays[0][baseIndex]) {
                    totalMatchFound++;
                }
            } else {
                smallestArrayTraversed = true;
            }
            indices[array - 1] = currentIndex;
        }
        if (totalMatchFound == arrays.length - 1) {
            result.add(arrays[0][baseIndex]);
        }
        baseIndex++;
    }
    return result.toArray(new Integer[0]);
}
Here are the unit tests:
@Test
public void commonElementsInNSortedArrayTest() {
    int arr[][] = { { 1, 5, 10, 20, 40, 80 },
            { 6, 7, 20, 80, 100 },
            { 3, 4, 15, 20, 30, 70, 80, 120 }
    };
    Integer result[] = ArrayUtils.commonElementsInNSortedArrays(arr);
    assertThat(result, equalTo(new Integer[] { 20, 80 }));

    arr = new int[][] {
            { 23, 34, 67, 89, 123, 566, 1000 },
            { 11, 22, 23, 24, 33, 37, 185, 566, 987, 1223, 1234 },
            { 23, 43, 67, 98, 566, 678 },
            { 1, 4, 5, 23, 34, 76, 87, 132, 566, 665 },
            { 1, 2, 3, 23, 24, 344, 566 }
    };
    result = ArrayUtils.commonElementsInNSortedArrays(arr);
    assertThat(result, equalTo(new Integer[] { 23, 566 }));
}
This Swift solution makes a copy of the original arrays, but it could be modified to take an inout parameter so that it uses no additional space. I left it as a copy because I think it is better not to modify the original, since the algorithm deletes elements. It would be possible to keep indices instead of removing elements, but this version removes them to keep track of where it is. It is a functional approach, and may not be super efficient, but it works, and since it is functional, less conditional logic is necessary. I'm posting it because it might be a different approach that's interesting to others, and maybe others can figure out ways of making it more efficient.
func findCommonInSortedArrays(arr: [[Int]]) -> [Int] {
    var copy = arr
    var result: [Int] = []
    while true {
        // Get the first element of each array.
        let m = copy.indices.compactMap { copy[$0].first }
        // Find the max value of those elements.
        let mm = m.reduce(0) { max($0, $1) }
        // Find the index of that value in each array, or nil.
        let ii = copy.indices.map { copy[$0].firstIndex { $0 == mm } }
        // If the value is missing from one of the arrays, return the result.
        if (ii.map { $0 }).count != (ii.compactMap { $0 }).count { return result }
        // Drop the elements that precede the target value in each array.
        copy.indices.forEach { copy[$0].removeFirst(ii[$0] ?? 0) }
        // Add the matching value to the result.
        result += [mm]
        // Remove the matched element from all arrays.
        copy.indices.forEach { copy[$0].removeFirst() }
    }
}
findCommonInSortedArrays(arr: [[9, 10, 12, 13, 14, 29],
                               [3, 5, 9, 10, 13, 14],
                               [3, 9, 10, 14]])
findCommonInSortedArrays(arr: [[], [], []])
findCommonInSortedArrays(arr: [[9, 10, 12, 13, 14, 29],
                               [3, 5, 9, 10, 13, 14],
                               [3, 9, 10, 14],
                               [9, 10, 29]])

Count all subsets of an array where the largest number is the sum of the remaining numbers

I've been struggling with level 3 of the Greplin challenge. For those not familiar, here is the problem:
you must find all subsets of an array where the largest number is the sum of the remaining numbers. For example, for an input of:
(1, 2, 3, 4, 6)
the subsets would be
1 + 2 = 3
1 + 3 = 4
2 + 4 = 6
1 + 2 + 3 = 6
Here is the list of numbers you should run your code on. The password is the number of subsets. In the above case the answer would be 4.
3, 4, 9, 14, 15, 19, 28, 37, 47, 50, 54, 56, 59, 61, 70, 73, 78, 81, 92, 95, 97, 99
I was able to code a solution that builds all 4-million-plus combinations of the 22 numbers and then tests them all, which gets me the right answer. The problem is that it takes over 40 minutes to crunch through. Searching around the web, it seems several people were able to write an algorithm that gets the answer in less than a second. Can anyone explain in pseudocode a better way to tackle this than the computationally expensive brute-force method? It's driving me nuts!
The trick is that you only need to keep track of counts of how many ways there are to do things. Since the numbers are sorted and positive, this is pretty easy. Here is an efficient solution. (It takes under 0.03s on my laptop.)
#! /usr/bin/python
numbers = [
    3, 4, 9, 14, 15, 19, 28, 37, 47, 50, 54, 56,
    59, 61, 70, 73, 78, 81, 92, 95, 97, 99]
max_number = max(numbers)
counts = {0: 1}
answer = 0
for number in numbers:
    if number in counts:
        answer += counts[number]
    prev = list(counts.items())
    for (s, c) in prev:
        val = s + number
        if max_number < val:
            continue
        if val not in counts:
            counts[val] = c
        else:
            counts[val] += c
print(answer)
We know the values are nonzero and grow monotonically from left to right.
The idea is to enumerate the possible sums (any order; left to right is fine) and then enumerate the subsets to the left of that value, because values to the right can't possibly participate (they'd make the sum too big). We don't have to instantiate the set; as we consider each value, we just see how it affects the sum. It can be too big (ignore that value, it can't be in the set), just right (it's the last member of the set), or too small, in which case it might or might not be in the set.
[This problem made me play with Python for the first time. Fun.]
Here's the Python code; according to cProfile.run this takes .00772 seconds on my P8700 2.54 GHz laptop.
values = [3, 4, 9, 14, 15, 19, 28, 37, 47, 50, 54, 56, 59, 61, 70, 73, 78, 81, 92, 95, 97, 99]

def count():
    # sort(values)  # force strictly increasing order
    last_value = -1
    duplicates = 0
    totalsets = 0
    for sum_value in values:  # enumerate sum values
        if last_value == sum_value: duplicates += 1
        last_value = sum_value
        totalsets += ways_to_sum(sum_value, 0)  # faster, uses monotonicity of values
    return totalsets - len(values) + duplicates

def ways_to_sum(sum, member_index):
    value = values[member_index]
    if sum < value:
        return 0
    if sum > value:
        return ways_to_sum(sum - value, member_index + 1) + ways_to_sum(sum, member_index + 1)
    return 1
The resulting count I get is 179. (Matches another poster's result.)
EDIT: ways_to_sum can be partly implemented using a tail-recursion loop:
def ways_to_sum(sum, member_index):
    c = 0
    while True:
        value = values[member_index]
        if sum < value: return c
        if sum == value: return c + 1
        member_index += 1
        c += ways_to_sum(sum - value, member_index)
This takes .005804 seconds to run :-} Same answer.
This runs in less than 5 ms (Python). It uses a variant of dynamic programming called memoized recursion. The go function computes the number of subsets of the first p+1 elements which sum up to target. Because the list is sorted, it's enough to call the function once for every element (as the target) and sum the results:
from datetime import datetime

startTime = datetime.now()
li = [3, 4, 9, 14, 15, 19, 28, 37, 47, 50, 54, 56, 59, 61, 70, 73, 78, 81, 92, 95, 97, 99]
memo = {}

def go(p, target):
    if (p, target) not in memo:
        if p == 0:
            if target == li[0]:
                memo[(p, target)] = 1
            else:
                memo[(p, target)] = 0
        else:
            c = 0
            if li[p] == target: c = 1
            elif li[p] < target: c = go(p - 1, target - li[p])
            c += go(p - 1, target)
            memo[(p, target)] = c
    return memo[(p, target)]

ans = 0
for p in range(1, len(li)):
    ans += go(p - 1, li[p])
print(ans)
print(datetime.now() - startTime)
This works:
import java.util.ArrayList;
import java.util.List;

public class A {
    static int[] a = {3, 4, 9, 14, 15, 19, 28, 37, 47, 50, 54, 56, 59, 61, 70, 73, 78, 81, 92, 95, 97, 99};

    public static void main(String[] args) {
        List<Integer> b = new ArrayList<Integer>();
        int count = 0;
        for (int i = 0; i < a.length; i++) {
            System.out.println(b);
            for (Integer t : b) {
                if (a[i] == t) {
                    System.out.println(a[i]);
                    count++;
                }
            }
            int size = b.size();
            for (int j = 0; j < size; j++) {
                if (b.get(j) + a[i] <= 99)
                    b.add(b.get(j) + a[i]);
            }
            b.add(a[i]);
        }
        System.out.println(count);
    }
}
Pseudocode (with explanation):
1. Store the following variables:
   i.) 'count' of subsets found so far;
   ii.) an array b which contains the sums of all possible subsets.
2. Iterate through the array (say a). For each element a[i]:
   i.) go through array b and count the number of occurrences of a[i]; add this to 'count';
   ii.) go through array b and, for each element b[j], add (a[i] + b[j]) to b, because this is a possible subset sum (if a[i] + b[j] > the max element of a, you can skip adding it);
   iii.) add a[i] to b.
3. You have the count. :)
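The same steps read naturally in Python; here is a direct transcription of this pseudocode (with the hard-coded 99 cutoff generalized to the maximum element):

def count_subsets(a):
    # a must be sorted ascending; b holds the sums of all subsets seen so far
    b = []
    count = 0
    largest = a[-1]
    for x in a:
        count += sum(1 for s in b if s == x)         # step 2.i
        b += [s + x for s in b if s + x <= largest]  # step 2.ii
        b.append(x)                                  # step 2.iii
    return count

nums = [3, 4, 9, 14, 15, 19, 28, 37, 47, 50, 54, 56,
        59, 61, 70, 73, 78, 81, 92, 95, 97, 99]
print(count_subsets(nums))  # 179, matching the other answers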
I used the combination generator class in Java available here:
http://www.merriampark.com/comb.htm
Iterating through the combos and looking for valid subsets took less than a second. (I don't think using outside code is in keeping with the challenge, but I also didn't apply.)
public class Solution {
    public static void main(String arg[]) {
        int[] array = { 3, 4, 9, 14, 15, 19, 28, 37, 47, 50, 54, 56, 59, 61,
                70, 73, 78, 81, 92, 95, 97, 99 };
        int N = array.length;
        System.out.println(N);
        int count = 0;
        for (int i = 1; i < 1 << N; i++) {
            int sum = 0;
            int max = 0;
            for (int j = 0; j < N; j++) {
                if (((i >> j) & 1) == 1) {
                    sum += array[j];
                    max = array[j];
                }
            }
            if (sum == 2 * max)
                count++;
        }
        System.out.println(count);
    }

    public static boolean isP(int N) {
        for (int i = 3; i <= (int) Math.sqrt(1.0 * N); i++) {
            if (N % i == 0) {
                System.out.println(i);
                // return false;
            }
        }
        return true;
    }
}
Hope it helps, but don't just copy and paste.
I don't want to beat a dead horse, but most of the solutions posted here miss a key opportunity for optimization and therefore take 6 times longer to execute.
Rather than iterating through the input array and searching for sums that match each value, it is far more efficient to calculate all the possible RELEVANT sums only once, then see which of those sums appear in the original input array. (A "relevant" sum is any subset sum <= the max value in the array.)
The second approach runs approximately 6 times faster -- generally milliseconds rather than centiseconds -- simply because it calls the recursive sum-finding function about 1/6th as many times!
The code for this approach and full explanation can be found in this github repo (it's in PHP because that's what was called for by the person who gave me this challenge):
https://github.com/misterrobinson/greplinsubsets
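In Python, the sums-first idea might look like this (a sketch of the described optimization, not the code from the linked repo; it relies on the elements being distinct and positive):

def count_subsets_sums_first(nums):
    # Tabulate every subset sum <= max(nums) once, then read off, for each
    # element x, how many subsets sum to x (minus the singleton {x} itself).
    largest = max(nums)
    counts = {0: 1}  # counts[s] = number of subsets with sum s
    for x in nums:
        for s, c in list(counts.items()):  # snapshot: each subset uses x once
            if s + x <= largest:
                counts[s + x] = counts.get(s + x, 0) + c
    # Any subset of two or more distinct positive values summing to x has all
    # of its members smaller than x, so every such subset qualifies.
    return sum(counts.get(x, 0) - 1 for x in nums)

nums = [3, 4, 9, 14, 15, 19, 28, 37, 47, 50, 54, 56,
        59, 61, 70, 73, 78, 81, 92, 95, 97, 99]
print(count_subsets_sums_first(nums))  # 179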
