This is more of an algorithms question than a programming one. I'm wondering if the prefix sum (or any) parallel algorithm can be modified to accomplish the following. I'd like to generate a result from two input lists on a GPU in less than O(N) time.
The rule is: Carry forth the first number from data until the same index in keys contains a lesser value.
Whenever I try mapping it to a parallel scan, it doesn't work, because I can't be sure which values of data to propagate in the upsweep: it's not possible to know which prior data might have carried far enough to compare against the current key. This problem reminds me of a ripple carry, where we need to consider the current index AND all past indices.
Again, I don't need code for a parallel scan (though that would be nice); I'm more looking to understand how it can be done or why it can't be done.
int data[N] = {5, 6, 5, 5, 3, 1, 5, 5};
int keys[N] = {5, 6, 5, 5, 4, 2, 5, 5};
int result[N];
serial_scan(N, keys, data, result);
// Print result. Should be {5, 5, 5, 5, 3, 1, 1, 1}
The code to do the scan in serial is below:
void serial_scan(int N, int *k, int *d, int *r)
{
    r[0] = d[0];
    for (int i = 1; i < N; i++)
    {
        if (k[i] >= r[i-1]) {
            r[i] = r[i-1];
        } else if (k[i] >= d[i]) {
            r[i] = d[i];
        } else {
            r[i] = 0;
        }
    }
}
The general technique for a parallel scan can be found here, described in the functional language Standard ML. This can be done for any associative operator, and I think yours fits the bill.
One intuition pump is that you can calculate the sum of an array in O(log(n)) span (running time with infinite processors) by recursively calculating the sums of the two halves of the array and adding them together. In calculating the scan you just need to know the sum of the array before the current point.
So we could calculate the scan of an array by doing the two halves in parallel: first calculate the sum of the 1st half using the above technique, then calculate the scans of the two halves in parallel, where the 1st half's scan starts at 0 and the 2nd half's starts at the sum you calculated before. The full algorithm is a little trickier, but uses the same idea.
Here's some pseudo-code for doing a parallel scan in a different language (for the specific case of ints and addition, but the logic is identical for any associative operator):
//assume input.length is a power of 2
int[] scanadd(int[] input) {
    if (input.length == 1)
        return input
    else {
        //calculate a new collapsed sequence which is the sum of sequential even/odd pairs
        //assume this for loop is done in parallel
        int[] collapsed = new int[input.length/2]
        for (i <- 0 until collapsed.length)
            collapsed[i] = input[2*i] + input[2*i+1]

        //recursively scan the collapsed values
        int[] scancollapse = scanadd(collapsed)

        //now we can use the scan of the collapsed seq to calculate the full sequence
        //also assume this for loop is in parallel
        int[] output = new int[input.length]
        for (i <- 0 until input.length)
            //an odd index lines up with the end of a collapsed pair, so we can
            //read its value directly out of the collapsed scan
            //an even index takes the scan value just before its pair and adds
            //the current element (index 0 is just input[0])
            if (i % 2 == 1)
                output[i] = scancollapse[(i-1)/2]
            else if (i == 0)
                output[i] = input[0]
            else
                output[i] = scancollapse[i/2 - 1] + input[i]
        return output
    }
}
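To make the above concrete, here is a rough Java translation of the same sketch (the class and method names are just for illustration). It is written sequentially for clarity; the two for loops are the ones that would run in parallel. It is generalised over the operator via java.util.function.BinaryOperator, since, as noted above, the recursion is identical for any associative operator.

import java.util.function.BinaryOperator;

class PairScan {
    // input.length must be a power of 2
    static int[] scan(int[] input, BinaryOperator<Integer> op) {
        if (input.length == 1)
            return input.clone();
        // collapse sequential even/odd pairs (parallel in a real implementation)
        int[] collapsed = new int[input.length / 2];
        for (int i = 0; i < collapsed.length; i++)
            collapsed[i] = op.apply(input[2 * i], input[2 * i + 1]);
        // recursively scan the collapsed sequence
        int[] scancollapse = scan(collapsed, op);
        // expand back to the full sequence (also parallel in a real implementation)
        int[] output = new int[input.length];
        for (int i = 0; i < input.length; i++) {
            if (i % 2 == 1)
                output[i] = scancollapse[(i - 1) / 2];
            else if (i == 0)
                output[i] = input[0];
            else
                output[i] = op.apply(scancollapse[i / 2 - 1], input[i]);
        }
        return output;
    }
}

For example, PairScan.scan(new int[] {1, 2, 3, 4}, Integer::sum) returns {1, 3, 6, 10}.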
Related
I am new to dynamic programming and I am trying to understand the basics of recursion and memoization while trying to solve the max sum of non-adjacent elements problem. After reading some theory, one of the basic properties of a recursive function is that it should have overlapping subproblems. For the naïve brute force approach in my code below, I am having a hard time seeing the overlapping subproblems.
1. Is the reason I am not able to see overlapping subproblems that I don't have a recurrence relation?
2. Do I always have to have a recurrence relation in order to have subproblems? That is, may a recursive problem have no subproblems if there is no recurrence relation?
3. How can I add memoization if 1 or 2 holds and I just made a mistake in my analysis?
public static int maxSubsetSum(int[] arr) {
HashMap<Integer, Integer> cache = new HashMap<Integer, Integer>();
return maxSumHelper(arr, 0, arr.length , 0, new ArrayList<Integer>(), cache);
}
public static int maxSumHelper(int[]arr, int start, int end, int sum, ArrayList<Integer>combo, HashMap<Integer, Integer> cache){
/*
* if(cache.containsKey(start)) { return; }
*/
if(start>=end){
for(int i = 0; i <combo.size();i++ ) {
System.out.print(combo.get(i));
}
System.out.println();
return sum;
}
int max = 0;
for (int i = start; i < arr.length; i++) {
sum+= arr[i];
combo.add(arr[i]);
int withMax = maxSumHelper(arr, i + 2, end, sum, combo, cache);
sum-=arr[i];
combo.remove(combo.size() - 1);
int withoutMax = maxSumHelper(arr, i+ 2, end, sum, combo, cache);
//cache.put(i, Math.max(withMax, withoutMax));
max = Math.max(max,Math.max(withMax, withoutMax));
}
return max;
}
one of the basic properties of a recursive function is that it should have overlapping subproblems
This is a condition for getting a benefit from dynamic programming, but is not a condition for a recursive function in general.
I am having a hard time seeing the overlapping subproblems.
They are there. But first a correction: the second recursive call should pass i+1 as argument, not i+2. This is because it deals with the case where you do not include the value at i for the sum, and so it is allowed to include the value at i+1.
Now take this example input, where we are looking for a maximised sum:
{ 1, 2, 3, 2, 0, 3, 2, 1 }
^
i=5
Let's focus on the call that gets as argument start=5: the most we can add to the current sum is 3 + 1 = 4. This fact is independent of the value of the sum argument, and so we could benefit from a cache that tells us what the maximised additional value is at a given index start.
There are many paths that lead to this call with start=5. The path (combo) could include any of these values from the input before index 5:
1, 3, 0
1, 2
2, 2
2, 0
3, 0
2
0
nothing
So if the first case drills down to the end and determines that for index 5 the greatest additional value is 4 (3+1), there is no need to do this search again for the other cases, and you can just do return sum + cache.get(start).
Side note
It is better practice to let the recursive function return that "additional" sum, and not pass it the sum-so-far as argument. Just add the value of the current selection (if any) to the greatest of the sums that come back from recursive calls and return that. This way it is also clearer how you can use memoization.
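As a sketch of what that restructuring might look like (not the poster's original code; the helper name best is illustrative, and the same imports as above are assumed): the recursion returns the best additional sum obtainable from start onward, which no longer depends on the path taken to get there, so it can be cached by start. Note the second recursive call uses start + 1, per the correction above.

public static int maxSubsetSum(int[] arr) {
    return best(arr, 0, new HashMap<Integer, Integer>());
}

// best additional sum obtainable from index 'start' onward
private static int best(int[] arr, int start, HashMap<Integer, Integer> cache) {
    if (start >= arr.length)
        return 0;
    if (cache.containsKey(start))
        return cache.get(start);
    // either include arr[start] (next candidate is start + 2),
    // or skip it (next candidate is start + 1)
    int withCurrent = arr[start] + best(arr, start + 2, cache);
    int withoutCurrent = best(arr, start + 1, cache);
    int result = Math.max(withCurrent, withoutCurrent);
    cache.put(start, result);
    return result;
}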
I am fairly new to dynamic programming and don't yet understand most of the types of problems it can solve. Hence I am facing problems in understanding the solution to the Jewelry TopCoder problem.
Can someone at least give me some hints as to what the code is doing?
Most importantly, is this problem a variant of the subset-sum problem? Because that's what I am studying to make sense of this problem.
What are these two functions actually counting? Why are we actually using two DP tables?
void cnk() {
    nk[0][0]=1;
    FOR(k,1,MAXN) {
        nk[0][k]=0;
    }
    FOR(n,1,MAXN) {
        nk[n][0]=1;
        FOR(k,1,MAXN)
            nk[n][k] = nk[n-1][k-1]+nk[n-1][k];
    }
}
void calc(LL T[MAXN+1][MAX+1]) {
    T[0][0] = 1;
    FOR(x,1,MAX) T[0][x]=0;
    FOR(ile,1,n) {
        int a = v[ile-1];
        FOR(x,0,MAX) {
            T[ile][x] = T[ile-1][x];
            if(x>=a) T[ile][x] += T[ile-1][x-a];
        }
    }
}
How is the original solution constructed by using the following logic?
    FOR(u,1,c) {
        int uu = u * v[done];
        FOR(x,uu,MAX)
            res += B[done][x-uu] * F[n-done-u][x] * nk[c][u];
    }
    done=p;
}
Any help would be greatly appreciated.
Let's consider the following task first:
"Given a vector V of N positive integers less than K, find the number of subsets whose sum equals S".
This can be solved in polynomial time with dynamic programming using some extra-memory.
The dynamic programming approach goes like this:
instead of solving the problem for N and S, we will solve all the problems of the following form:
"Find the number of ways to write sum s (with s ≤ S) using only the first n ≤ N of the numbers".
This is a common characteristic of the dynamic programming solutions: instead of only solving the original problem, you solve an entire family of related problems. The key idea is that solutions for more difficult problem settings (i.e. higher n and s) can efficiently be built up from the solutions of the easier settings.
Solving the problem for n = 0 is trivial (sum s = 0 can be expressed in one way -- using the empty set -- while all other sums can't be expressed in any way).
Now consider that we have solved the problem for all values up to a certain n and that we have these solutions in a matrix A (i.e. A[n][s] is the number of ways to write sum s using the first n elements).
Then, we can find the solutions for n+1, using the following formula:
A[n+1][s] = A[n][s - V[n+1]] + A[n][s].
Indeed, when we write the sum s using the first n+1 numbers we can either include or not V[n+1] (the n+1th term).
This is what the calc function computes. (the cnk function uses Pascal's rule to compute binomial coefficients)
Note: in general, if in the end we are only interested in answering the initial problem (i.e. for N and S), then the array A can be uni-dimensional (with length S + 1, for sums 0 through S) -- this is because whenever trying to construct solutions for n + 1 we only need the solutions for n, and not for smaller values.
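As a small sketch of that uni-dimensional variant (in Java, with an illustrative method name): iterating the sums downwards makes sure each element is counted at most once.

static long countSubsetSums(int[] V, int S) {
    long[] ways = new long[S + 1];
    ways[0] = 1;                       // the empty set expresses sum 0
    for (int value : V) {
        // iterate s downwards so each element is used at most once
        for (int s = S; s >= value; s--) {
            ways[s] += ways[s - value];
        }
    }
    return ways[S];                    // number of subsets summing to S
}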
This problem (the one initially stated in this answer) is indeed related to the subset sum problem (finding a subset of elements with sum zero).
A similar type of dynamic programming approach can be applied if we have a reasonable limit on the absolute values of the integers used (we need to allocate an auxiliary array to represent all possible reachable sums).
In the zero-sum problem we are not actually interested in the count, thus the A array can be an array of booleans (indicating whether a sum is reachable or not).
In addition, another auxiliary array, B can be used to allow reconstructing the solution if one exists.
The recurrence would now look like this:
if (!A[s] && A[s - V[n+1]]) {
    A[s] = true;
    // the index of the last value used to reach sum _s_,
    // allows going backwards to reproduce the entire solution
    B[s] = n + 1;
}
Note: the actual implementation requires some additional care for handling the negative sums, which can not directly represent indices in the array (the indices can be shifted by taking into account the minimum reachable sum, or, if working in C/C++, a trick like the one described in this answer can be applied: https://stackoverflow.com/a/3473686/6184684).
I'll detail how the above ideas apply in the TopCoder problem and its solution linked in the question.
The B and F matrices.
First, note the meaning of the B and F matrices in the solution:
B[i][s] represents the number of ways to reach sum s using only the smallest i items
F[i][s] represents the number of ways to reach sum s using only the largest i items
Indeed, both matrices are computed using the calc function, after sorting the array of jewelry values in ascending order (for B) and descending order (for F).
Solution for the case with no duplicates.
Consider first the case with no duplicate jewelry values, using this example: [5, 6, 7, 11, 15].
For the remainder of the answer I will assume that the array was sorted in ascending order (thus "first i items" will refer to the smallest i ones).
Each item given to Bob has value less (or equal) to each item given to Frank, thus in every good solution there will be a separation point such that Bob receives only items before that separation point, and Frank receives only items after that point.
To count all solutions we would need to sum over all possible separation points.
When, for example, the separation point is between the 3rd and 4th item, Bob would pick items only from the [5, 6, 7] sub-array (smallest 3 items), and Frank would pick items from the remaining [11, 15] sub-array (largest 2 items). In this case there is a single sum (s = 11) that can be obtained by both of them. Each time a sum can be obtained by both, we need to multiply the number of ways that each of them can reach the respective sum (e.g. if Bob could reach a sum s in 4 ways and Frank could reach the same sum s in 5 ways, then we could get 20 = 4 * 5 valid solutions with that sum, because each combination is a valid solution).
Thus we would get the following code by considering all separation points and all possible sums:
res = 0;
for (int i = 0; i < n; i++) {
for (int s = 0; s <= maxS; s++) {
res += B[i][s] * F[n-i][s]
}
}
However, there is a subtle issue here. This would often count the same combination multiple times (for various separation points). In the example provided above, the same solution with sum 11 would be counted both for the separation [5, 6] - [7, 11, 15], as well as for the separation [5, 6, 7] - [11, 15].
To alleviate this problem we can partition the solutions by "the largest value of an item picked by Bob" (or, equivalently, by always forcing Bob to include in his selection the largest valued item from the first sub-array under the current separation).
In order to count the number of ways to reach sum s when Bob's largest valued item is the ith one (sorted in ascending order), we can use B[i][s - v[i]]. This holds because using the v[i] valued item implies requiring the sum s - v[i] to be expressed using subsets from the first i items (indices 0, 1, ... i - 1).
This would be implemented as follows:
res = 0;
for (int i = 0; i < n; i++) {
for (int s = v[i]; s <= maxS; s++) {
res += B[i][s - v[i]] * F[n - 1 - i][s];
}
}
This is getting closer to the solution on TopCoder (in that solution, done corresponds to the i above, and uu = v[i]).
Extension for the case when duplicates are allowed.
When duplicate values can appear in the array, it's no longer easy to directly count the number of solutions when Bob's most valuable item is v[i]. We need to also consider the number of such items picked by Bob.
If there are c items that have the same value as v[i], i.e. v[i] = v[i+1] = ... v[i + c - 1], and Bob picks u such items, then the number of ways for him to reach a certain sum s is equal to:
comb(c, u) * B[i][s - u * v[i]] (1)
Indeed, this holds because the u items can be picked from the total of c which have the same value in comb(c, u) ways. For each such choice of the u items, the remaining sum is s - u * v[i], and this should be expressed using a subset from the first i items (indices 0, 1, ... i - 1), thus it can be done in B[i][s - u * v[i]] ways.
For Frank, if Bob used u of the v[i] items, the number of ways to express sum s will be equal to:
F[n - i - u][s] (2)
Indeed, since Bob uses the smallest i + u values, Frank can use any of the largest n - i - u values to reach the sum s.
By combining relations (1) and (2) from above, we obtain that the number of solutions where both Frank and Bob have sum s, when Bob's most valued item is v[i] and he picks u such items is equal to:
comb(c, u) * B[i][s - u * v[i]] * F[n - i - u][s].
This is precisely what the given solution implements.
Indeed, the variable done corresponds to variable i above, variable x corresponds to sums s, the index p is used to determine the c items with same value as v[done], and the loop over u is used in order to consider all possible numbers of such items picked by Bob.
Here's some Java code for this that references the original solution. It also incorporates qwertyman's fantastic explanations (to the extent feasible). I've added some of my comments along the way.
import java.util.*;
public class Jewelry {
int MAX_SUM=30005;
int MAX_N=30;
long[][] C;
// Generate all possible sums
// ret[i][sum] = number of ways to compute sum using the first i numbers from val[]
public long[][] genDP(int[] val) {
int i, sum, n=val.length;
long[][] ret = new long[MAX_N+1][MAX_SUM];
ret[0][0] = 1;
for(i=0; i+1<=n; i++) {
for(sum=0; sum<MAX_SUM; sum++) {
// Carry over the sum from i to i+1 for each sum
// Problem definition allows excluding numbers from calculating sums
// So we are essentially excluding the last number for this calculation
ret[i+1][sum] = ret[i][sum];
// DP: (Number of ways to generate sum using i+1 numbers =
// Number of ways to generate sum-val[i] using i numbers)
if(sum>=val[i])
ret[i+1][sum] += ret[i][sum-val[i]];
}
}
return ret;
}
// C(n, r) - all possible combinations of choosing r numbers from n numbers
// Leverage Pascal's polynomial co-efficients for an n-degree polynomial
// Leverage Dynamic Programming to build this upfront
public void nCr() {
C = new long[MAX_N+1][MAX_N+1];
int n, r;
C[0][0] = 1;
for(n=1; n<=MAX_N; n++) {
C[n][0] = 1;
for(r=1; r<=MAX_N; r++)
C[n][r] = C[n-1][r-1] + C[n-1][r];
}
}
/*
General Concept:
- Sort array
- Incrementally divide array into two partitions
+ Accomplished by using two different arrays - L for left, R for right
- Take all possible sums on the left side and match with all possible sums
on the right side (multiply these numbers to get totals for each sum)
- Adjust for common sums so as to not overcount
- Adjust for duplicate numbers
*/
public long howMany(int[] values) {
int i, j, sum, n=values.length;
// Pre-compute C(n,r) and store in C[][]
nCr();
/*
Incrementally split the array and calculate sums on either side
For eg. if val={2, 3, 4, 5, 9}, we would partition this as
{2 | 3, 4, 5, 9} then {2, 3 | 4, 5, 9}, etc.
First, sort it ascendingly and generate its sum matrix L
Then, sort it descendingly, and generate another sum matrix R
In later calculations, manipulate indexes to simulate the partitions
So at any point L[i] would correspond to R[n-i-1]. eg. L[1] = R[5-1-1]=R[3]
*/
// Sort ascendingly
Arrays.sort(values);
// Generate all sums for the "Left" partition using the sorted array
long[][] L = genDP(values);
// Sort descendingly by reversing the existing array.
// Arrays.sort cannot sort a primitive int[] in descending order
// (a Comparator only works on object arrays), so reverse manually.
for(i=0; i<n/2; i++) {
int tmp = values[i];
values[i] = values[n-i-1];
values[n-i-1] = tmp;
}
// Generate all sums for the "Right" partition using the re-sorted array
long[][] R = genDP(values);
// Re-sort in ascending order as we will be using values[] as reference later
Arrays.sort(values);
long tot = 0;
for(i=0; i<n; i++) {
int dup=0;
// How many duplicates of values[i] do we have?
for(j=0; j<n; j++)
if(values[j] == values[i])
dup++;
/*
Calculate total by iterating through each sum and multiplying counts on
both partitions for that sum
However, there may be count of sums that get duplicated
For instance, if val={2, 3, 4, 5, 9}, you'd get:
{2, 3 | 4, 5, 9} and {2, 3, 4 | 5, 9} (on two different iterations)
In this case, the subset {2, 3 | 5} is counted twice
To account for this, exclude the current largest number, val[i], from L's
sum and exclude it from R's i index
There is another issue of duplicate numbers
Eg. If values={2, 3, 3, 3, 4}, how do you know which 3 went to L?
To solve this, group the same numbers
Applying to {2, 3, 3, 3, 4} :
- Exclude 3, 6 (3+3) and 9 (3+3+3) from L's sum calculation
- Exclude 1, 2 and 3 from R's index count
We're essentially saying that we will exclude the sum contribution of these
elements to L and ignore their count contribution to R
*/
for(j=1; j<=dup; j++) {
int dup_sum = j*values[i];
for(sum=dup_sum; sum<MAX_SUM; sum++) {
// (ways to pick j numbers from dup) * (ways to get sum-dup_sum from i numbers) * (ways to get sum from n-i-j numbers)
if(n-i-j>=0)
tot += C[dup][j] * L[i][sum-dup_sum] * R[n-i-j][sum];
}
}
// Skip past the duplicates of values[i] that we've now accounted for
i += dup-1;
}
return tot;
}
}
Let's say I have two sets:
A = [1, 3, 5, 7, 9, 11]
and
B = [1, 3, 9, 11, 12, 13, 14]
Both sets can have arbitrary (and differing) numbers of elements.
I am writing a performance critical application that requires me to perform a search to determine the number of elements which both sets have in common. I don't actually need to return the matches, only the number of matches.
Obviously, a naive method would be a brute force, but I suspect that is nowhere near optimal. Is there an algorithm for performing this type of operation?
If it helps, in all cases the sets will consists of integers.
If both sets are roughly the same size, walking over them in sync, similar to a merge sort merge operation, is about as fast as it gets.
Look at the first elements.
If they match, you add that element to your result, and move both pointers forward.
Otherwise, you move the pointer that points to the smallest value forward.
Some pseudo-Python:
a = []
b = []
res = []
ai = 0
bi = 0
while ai < len(a) and bi < len(b):
    if a[ai] == b[bi]:
        res.append(a[ai])
        ai += 1
        bi += 1
    elif a[ai] < b[bi]:
        ai += 1
    else:
        bi += 1
return res
If one set is significantly larger than the other, you can use binary search to look for each item from the smaller in the larger.
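A quick sketch of that variant in Java (illustrative name; assumes both arrays are sorted ascending without duplicates), giving O(m log n) for a smaller array of size m:

static int countCommon(int[] small, int[] large) {
    int count = 0;
    for (int x : small) {
        // binarySearch returns a non-negative index iff the key is present
        if (java.util.Arrays.binarySearch(large, x) >= 0)
            count++;
    }
    return count;
}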
Here is the idea (very high level description though).
By the way, I'll take the liberty of assuming that the numbers in each set appear no more than once; for instance, [1,3,5,5,7,7,9,11] will not occur.
You define two variables that will hold the indices you are examining in each array.
You start with the first number of each set and compare them. Two possible conditions: they are equal or one is bigger than the other.
If they are equal, you count the event and move the pointers in both arrays to the next element.
If they differ, you move the pointer of the lower value to the next element in the array and repeat the process (compare both values).
The loop ends when you reach the last element of either array.
Hope I was able to explain it in a clear way.
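In code, the walk described above could look like this (a sketch in Java, again assuming sorted arrays without duplicates):

static int countMatches(int[] a, int[] b) {
    int i = 0, j = 0, count = 0;
    while (i < a.length && j < b.length) {
        if (a[i] == b[j]) {            // equal: count the match, advance both
            count++; i++; j++;
        } else if (a[i] < b[j]) {      // advance the pointer at the lower value
            i++;
        } else {
            j++;
        }
    }
    return count;
}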
If both sets are sorted, the smallest element overall is either the minimum of the first set or the minimum of the second set. If it's the min of the first set, then the next smallest element is either the minimum of the second set or the 2nd minimum of the first set. If you repeat this till the end of both sets, you have ordered both sets. For your specific problem you just need to additionally check whether the elements are equal.
You can iterate over the union of both sets with the following algorithm:
intersection_set_cardinality(s1, s2)
{
    iterator i = begin(s1);
    iterator j = begin(s2);
    count = 0;
    while(i != end(s1) && j != end(s2))
    {
        if(elt(i) == elt(j))
        {
            count = count + 1;
            i = i + 1;
            j = j + 1;
        }
        else if(elt(i) < elt(j))
        {
            i = i + 1;
        }
        else
        {
            j = j + 1;
        }
    }
    return count
}
In the merge step of merge sort, I don't understand why we have to use auxiliary arrays L and R. Why can't we just keep 2 pointers tracking which elements we're comparing in the 2 subarrays L and R, so that the merge sort algorithm remains in place?
Thanks.
Say you split your array such that L uses the first half of the original array and R uses the second half.
Then say that during the merge the first few elements of R are smaller than the smallest element of L. If you want to put them in the correct place for the merge result, you will have to overwrite elements of L that have not been processed during the merge step yet. For example, merging L = [3, 4] with R = [1, 2] in place would require writing 1 and 2 over the 3 and 4 before those have been consumed.
Of course you can make a different split, but you can always construct such a (then slightly different) example.
My first post here. Be gentle!
Here's my solution for a simple and easy-to-understand stable in-place merge-sort. I wrote this yesterday. I'm not sure it hasn't been done before, but I've not seen it about, so maybe?
The one drawback to the following in-place merge algorithm is that it can degenerate into O(n²) under certain conditions, but it is typically O(n.log₂n) in practice. This degeneracy can be mitigated with certain changes, but I wanted to keep the base algorithm pure in the code sample so it can be easily understood.
Coupled with the O(log₂n) recursion depth of the driving merge_sort() function, this presents us with a typical time complexity of O(n.(log₂n)²) overall, and O(n².log₂n) in the worst case, which is not fantastic. Again, with some tweaks it can be made to almost always run in O(n.(log₂n)²) time, and with its good CPU cache locality it is decent even for n values up to 1M, but it is always going to be slower than quicksort.
// Stable Merge In Place Sort
//
//
// The following code is written to illustrate the base algorithm. A good
// number of optimizations can be applied to boost its overall speed
// For all its simplicity, it does still perform somewhat decently.
// Average case time complexity appears to be: O(n.(log₂n)²)
#include <stddef.h>
#include <stdio.h>
#define swap(x, y) (t=(x), (x)=(y), (y)=t)
// Both sorted sub-arrays must be adjacent in 'a'
// Assumes that both 'an' and 'bn' are always non-zero
// 'an' is the length of the first sorted section in 'a', referred to as A
// 'bn' is the length of the second sorted section in 'a', referred to as B
static void
merge_inplace(int A[], size_t an, size_t bn)
{
int t, *B = &A[an];
size_t pa, pb; // Swap partition pointers within A and B
// Find the portion to swap. We're looking for how much from the
// start of B can swap with the end of A, such that every element
// in A is less than or equal to any element in B. This is quite
// simple when both sub-arrays come at us pre-sorted
for(pa = an, pb = 0; pa>0 && pb<bn && B[pb] < A[pa-1]; pa--, pb++);
// Now swap the last part of A with the first part of B according to the
// indices we found
for (size_t index=pa; index < an; index++)
swap(A[index], B[index-pa]);
// Now merge the two sub-array pairings. We need to check that either array
// didn't wholly swap out the other and cause the remaining portion to be zero
if (pa>0 && (an-pa)>0)
merge_inplace(A, pa, an-pa);
if (pb>0 && (bn-pb)>0)
merge_inplace(B, pb, bn-pb);
} // merge_inplace
// Implements a recursive merge-sort algorithm with an optional
// insertion sort for when the splits get too small. 'n' must
// ALWAYS be 2 or more. It enforces this when calling itself
static void
merge_sort(int a[], size_t n)
{
size_t m = n/2;
// Sort first and second halves only if the target 'n' will be > 1
if (m > 1)
merge_sort(a, m);
if ((n-m)>1)
merge_sort(a+m, n-m);
// Now merge the two sorted sub-arrays together. We know that since
// n > 1, then both m and n-m MUST be non-zero, and so we will never
// violate the condition of not passing in zero length sub-arrays
merge_inplace(a, m, n-m);
} // merge_sort
// Print an array
static void
print_array(int a[], size_t size)
{
if (size > 0) {
printf("%d", a[0]);
for (size_t i = 1; i < size; i++)
printf(" %d", a[i]);
}
printf("\n");
} // print_array
// Test driver
int
main()
{
int a[] = { 17, 3, 16, 5, 14, 8, 10, 7, 15, 1, 13, 4, 9, 12, 11, 6, 2 };
size_t n = sizeof(a) / sizeof(a[0]);
merge_sort(a, n);
print_array(a, n);
return 0;
} // main
If you ever try to write a merge sort in place, you will soon find out why you can't when you are merging the 2 sub-arrays: you basically need to read from and write to the same range of the array, and the writes would overwrite values that haven't been read yet. Hence we need an auxiliary array:
#include <vector>
using std::vector;

vector<int> merge_sort(vector<int>& vs, int l, int r, vector<int>& temp)
{
    if(l==r) return vs; // recursion must have an end condition
    int m = (l+r)/2;
    merge_sort(vs, l, m, temp);
    merge_sort(vs, m+1, r, temp);
    int il = l, ir = m+1, i = l;
    while(il <= m && ir <= r)
    {
        if(vs[il] <= vs[ir])
            temp[i++] = vs[il++];
        else
            temp[i++] = vs[ir++];
    }
    // copy leftover items (only one of the loops below will run)
    while(il <= m) temp[i++] = vs[il++];
    while(ir <= r) temp[i++] = vs[ir++];
    for(i=l; i<=r; ++i) vs[i] = temp[i];
    return vs;
}
I have a set of given integers:
A[] = { 2, 3, 4, 5, 6, 7, 8, 10, 15, 20, 25, 30, 40, 50, 100, 500 }
I want to check if a given integer T can be written as a product of numbers from A[].
EDIT CLARIFICATION:
Any number in A[] can be used; if used, it can be used only one time.
E.g. 60 is a valid T: 60 = 30*2.
Also 90 is valid: 90 = 3*5*6.
Check which numbers can form that integer T.
Also return the 2 closest integers to the given T (that can be written that way) if the number T cannot be written that way.
Parts 2 and 3 I think I can sort out on my own if someone helps me with part 1.
I know this is an algorithmic question or even a mathematical one but if anyone can help, please do.
NOT HOMEWORK. SEE COMMENT BELOW.
SOLUTION.
Thank you very much for all the answers, one answer in particular (but the author chose to remove it and I don't really know why, since it was correct). Thanks, author (I don't remember your name).
Solution code, with a twist (the author's algorithm used each multiplier multiple times; this one uses each multiplier only one time):
int oldT = 0;
HashMap<Integer, Boolean> used = new HashMap<Integer, Boolean>();
while (T != 1 && T != -1) {
    oldT = T;
    for (int multiple : A) {
        if (!used.containsKey(multiple)) {
            if (T % multiple == 0) {
                T = T / multiple;
                used.put(multiple, true);
            }
        }
    }
    if (oldT == T)
        return false;
}
return true;
If T is not very big (say, < 10^7), this is straight DP.
a[1] = true; // means that number '1' can be achieved with no multipliers
for (every multiplier x) {
    for (int i = T; i > 0; --i) {
        if (a[i] and (i * x <= T)) {
            // if 'i' can be achieved and 'x' is a valid multiplier,
            // then 'i * x' can be achieved too
            a[i * x] = true;
        }
    }
}
That's assuming every multiplier can be used only once.
Now, you can find decomposition of T if you have another array b[i] storing which multiplier was used to achieve i.
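Here's a small sketch of that DP in Java (illustrative names; assumes T is positive and not too large, and that every value in A is at least 2), including the b[] array for reconstructing one decomposition:

static boolean canWriteAsProduct(int T, int[] A) {
    boolean[] a = new boolean[T + 1];
    int[] b = new int[T + 1];          // last multiplier used to reach each product
    a[1] = true;                       // 1 is reachable with no multipliers
    for (int x : A) {
        // go downwards so each multiplier is used at most once
        for (int i = T / x; i > 0; --i) {
            if (a[i] && !a[i * x]) {
                a[i * x] = true;
                b[i * x] = x;
            }
        }
    }
    if (!a[T])
        return false;
    // walk backwards through b[] to print one decomposition of T
    for (int i = T; i > 1; i /= b[i])
        System.out.print(b[i] + " ");
    System.out.println();
    return true;
}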
There's a lot of online content to get familiar with dynamic programming, if you have little time. It should give you an idea of how to approach such problems. For example, this one seems not bad:
http://www.topcoder.com/tc?module=Static&d1=tutorials&d2=dynProg
I'm not sure exactly what you're asking.
If the user can select n*(one of those numbers), then note that you have the primes 2, 3, 5, and 7, so if your number is divisible by 2, 3, 5, or 7, then divide that out, and then you have n*(that one).
If you have to multiply the numbers by each other, but can do so multiple times, then note again that all of your numbers factor into powers of 2, 3, 5, and 7. Check if a bet is divisible only by these (divide each out until you can't divide out any more, then see if you're left with 1) and count the number of times you've divided by each one.
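For instance, the divisibility check in the second case could be sketched like this (in Java; 2, 3, 5 and 7 are the primes occurring in A, and counting the divisions, as suggested above, is a straightforward extension):

// returns true if t factors entirely into the primes 2, 3, 5 and 7
static boolean onlySmallPrimeFactors(int t) {
    for (int p : new int[] { 2, 3, 5, 7 }) {
        while (t % p == 0)             // divide each prime out until we can't
            t /= p;
    }
    return t == 1;                     // nothing else remains
}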
If you have to multiply the numbers by each other without replacement, then again find the prime factorization and remove from the list whichever number makes the powers present the most even. If you manage to remove all the multiples, you're done.
In all but the last case, the numbers that can be bet is very dense, so you can find the nearest just by going up or down and checking again. In the last case, searching for something near could be kind of tricky; you might just want to form a table of possible (low) bets and suggest something from that, assuming that users aren't going to bet 2*3*4*5*6*7*8*10*15*....