Subset sum with unlimited elements - algorithm

I am trying to solve a coding problem at topcoder for practice. I believe I have solved it partly, but am struggling with the other half.
The essence of the problem is: "Given a set P of positive integers, find the smallest collection of numbers from P that adds up to a sum S. You may use an element of the set more than once. The sum may also be unattainable."
For small inputs, the exponential algorithm of searching through all possible subsets works. However, the size of the set can go up to 1024.
What is the idea behind solving this problem? Is this problem even an extension of subset-sum?
[EDIT]
This is the problem on TopCoder: https://community.topcoder.com/stat?c=problem_statement&pm=8571
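
For reference, here is a minimal sketch of the usual dynamic-programming idea (it is essentially the unbounded coin-change recurrence that comes up in the related threads below; the function name and the -1 "unreachable" return value are my own choices):

```python
def min_elements_for_sum(P, S):
    """Fewest elements of P (repetition allowed) that sum to exactly S, or -1."""
    INF = float("inf")
    # best[s] = fewest elements needed to reach the sum s exactly
    best = [0] + [INF] * S
    for s in range(1, S + 1):
        for p in P:
            if p <= s and best[s - p] + 1 < best[s]:
                best[s] = best[s - p] + 1
    return best[S] if best[S] != INF else -1

print(min_elements_for_sum({3, 5}, 11))  # 3  (3 + 3 + 5)
print(min_elements_for_sum({3, 5}, 7))   # -1 (unreachable)
```

This runs in O(S * |P|) time, which is fine even with up to 1024 elements as long as S itself stays reasonably small.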

Related

Algorithms for bucketizing integers into buckets with zero sums

Suppose we have an array of integers (both negative and positive) A[1 ... n] such that all the elements sum to zero. Whenever I have a bunch of integers that sum to zero, I will call them a group, and I want to split A into as many disjoint groups as possible. Can you suggest any paper discussing this same problem?
It sounds like your problem consists of two NP-Complete problems.
The first would be finding all subsets that solve the Subset Sum problem. This problem does have an exponential time complexity (as implied by amit in the comments), but it is a very reasonable extension of the Subset Sum problem from a theoretical standpoint. For example, if you can solve the Subset Sum problem by dynamic programming and generate the canonical 2D array as a result, this array will contain enough information to generate all possible solutions using a traceback.
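To make that first half concrete, here is a hedged sketch of what such a traceback could look like (the names are mine, and the canonical 2D boolean table is replaced by a list of reachable-sum sets so that negative values need no index offset):

```python
def all_zero_sum_subsets(A):
    """Enumerate every nonempty index subset of A whose elements sum to zero."""
    n = len(A)
    # reachable[i] = set of sums achievable using some subset of A[i:]
    reachable = [set() for _ in range(n + 1)]
    reachable[n] = {0}
    for i in range(n - 1, -1, -1):
        reachable[i] = reachable[i + 1] | {s + A[i] for s in reachable[i + 1]}

    def trace(i, need, chosen):
        if i == n:
            if chosen:          # skip the empty subset
                yield list(chosen)
            return
        if need in reachable[i + 1]:          # branch 1: leave A[i] out
            yield from trace(i + 1, need, chosen)
        if need - A[i] in reachable[i + 1]:   # branch 2: take A[i]
            chosen.append(i)
            yield from trace(i + 1, need - A[i], chosen)
            chosen.pop()

    yield from trace(0, 0, [])
```

On A = [1, -1, 2, -2] it yields the index groups [2, 3], [0, 1] and [0, 1, 2, 3].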
The second NP-Complete problem embedded within your problem is the Integer Linear Programming problem. Given all N subsets that solve the Subset Sum problem, we want to select n of them, 0 <= n <= N, such that n is maximized and no element of A is repeated.
I doubt there is a publication devoted to describing this problem because it seems to involve a straightforward application of known theory.

An algorithm to remove the least amount of numbers from the list to leave it in increasing or decreasing order?

Consider the following numbers:
9,44,32,12,7,45,31,98,35,37,41,8,20,27,83,64,61,28,39,93,29,92,17,13,14,55,21,66,72,23,73,99,1,2,88,77,3,65,83,84,62,5,11,74,68,76,78,67,75,69,70,22,71,24,25,26.
I am trying to implement an algorithm that removes the least amount of numbers from the list to make the sequence
a) increasing order
b) decreasing order
I already tried with the shortest and longest subsequence. I don't want the code, only an explanation or pseudocode; I can't understand how to solve the problem. Thanks!
This is a lightly camouflaged Longest Increasing (Decreasing) Subsequence problem. The algorithm for solving your problem is as follows:
Find the longest increasing (decreasing) subsequence in the array
Remove all elements that do not belong to the longest increasing subsequence.
Since the increasing/decreasing subsequence is longest, the amount of numbers that you will remove is the smallest.
The Wikipedia article has nice pseudocode for solving the LIS/LDS problem. You can substitute a simple linear search for the binary one unless the original sequence is 1000+ elements long.
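Here is a hedged Python sketch of that recipe, using the O(n log n) construction from the Wikipedia pseudocode (the function names are my own):

```python
from bisect import bisect_left

def lis_indices(seq):
    """Indices of one longest strictly increasing subsequence of seq, in O(n log n)."""
    tail_vals, tail_idx = [], []   # best (smallest) tail value/index for each run length
    prev = [-1] * len(seq)         # predecessor links used to rebuild the subsequence
    for i, x in enumerate(seq):
        k = bisect_left(tail_vals, x)      # length of the run this element extends
        if k > 0:
            prev[i] = tail_idx[k - 1]
        if k == len(tail_vals):
            tail_vals.append(x)
            tail_idx.append(i)
        else:
            tail_vals[k] = x
            tail_idx[k] = i
    out, i = [], tail_idx[-1] if tail_idx else -1
    while i != -1:
        out.append(i)
        i = prev[i]
    return out[::-1]

def removals_to_make_increasing(seq):
    """Minimum set of elements to delete so that what remains is strictly increasing."""
    keep = set(lis_indices(seq))
    return [x for i, x in enumerate(seq) if i not in keep]

# For the decreasing variant, run the same functions on the reversed list
# (or on the negated values) and map the result back.
```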
Since the approach has already been mentioned, I will add my 2 cents. This question is most likely to come up in an interview, and in those circumstances running time (efficiency) is a major concern, so the same problem can be tackled with different algorithms depending on how long they take to execute.
The best known algorithm runs in O(n log n). The dynamic programming paradigm can also be applied to yield an O(n^2) solution.
O(n^2) here
O(nlogn) here
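For comparison, a hedged sketch of the O(n^2) dynamic-programming formulation mentioned above:

```python
def lis_length_quadratic(seq):
    """Classic O(n^2) DP: best[i] = length of the longest increasing run ending at i."""
    best = [1] * len(seq)
    for i in range(len(seq)):
        for j in range(i):
            if seq[j] < seq[i] and best[j] + 1 > best[i]:
                best[i] = best[j] + 1
    return max(best, default=0)
```

Reconstructing the actual subsequence works the same way as in the previous sketch, by keeping a predecessor link for each position.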

Is there any algorithm to solve counting change for finite number of denominations?

I know the algorithm for solving the coin change problem with an unlimited supply of each denomination, but is there an algorithm for a finite number of coins of each denomination using DP?
Yes. Modify the initial algorithm so that, when it is about to add a coin that would exceed the number of available coins of that denomination, it skips that coin instead. Then it will only produce the valid combinations.
Another, simpler way: run the algorithm without bounds, then filter out the invalid combinations from its output. Thinking of it this way makes it obvious that the problem is indeed solvable.
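
A hedged sketch of the bounded version described in the first answer (the coins-as-a-dict input format and the function name are my assumptions):

```python
def count_change_bounded(amount, coins):
    """Number of ways to make `amount` when each denomination has a limited supply.

    `coins` maps denomination -> how many coins of it are available (my assumed
    input format); the unbounded version simply drops the per-denomination limit.
    """
    denoms = sorted(coins)

    def ways(remaining, i):
        if remaining == 0:
            return 1
        if i == len(denoms):
            return 0
        d, limit = denoms[i], coins[denoms[i]]
        total = 0
        # Try using 0..limit coins of denomination d, never exceeding the supply.
        for used in range(limit + 1):
            if used * d > remaining:
                break
            total += ways(remaining - used * d, i + 1)
        return total

    return ways(amount, 0)

print(count_change_bounded(6, {1: 3, 2: 2, 5: 1}))  # 2  (1+5 and 1+1+2+2)
```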

Given n integers, find the m whose sum's absolute value is minimal

I have n integers given; both positive and negative values are included. What is a good algorithm to find m integers from that list such that the absolute value of the sum of those m integers is the smallest possible?
The problem is NP-hard, since solving it efficiently would solve the subset-sum decision problem efficiently.
Given that, you're not going to find an efficient algorithm to solve it unless you believe that P=NP.
You can always come up with some heuristics to direct your search but in the worst case you'll have to check every subset of m integers.
If "good" means "correct", then just try every possibility. This will take you about n choose m time. Very slow. Unfortunately, this is the best you can do in general, because for any set of integers you can always add one more that is the negative of a sum of m-1 other ones--and those others could all have the same sign, so you have no way to search.
If "good" means "fast and usually works okay", then there are various ways to proceed. E.g.:
Suppose you can solve the problem for m=2, and suppose further you can solve it for both the positive and the negative answer (and then take the smaller of the two). Now suppose you want to solve m=4: solve for m=2, then throw those two numbers out and solve for m=2 again on what remains... it should be obvious what to do next! Now, what about m=6?
Now suppose you can solve the problem for m=3 and m=2. Think you can get a decent answer for m=5?
Finally, note that if you sort the numbers, you can solve for m=2 in one pass, and for m=3 you have an annoying quadratic search to do, but at least you can do it on only about a quarter of the list twice (the small halves of the positive and negative numbers) and look for a number of opposite sign to cancel.
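
A hedged sketch of that m = 2 pass (sort once, then walk two pointers inward; the function name is mine and it assumes at least two numbers):

```python
def closest_pair_sum(nums):
    """Pair (m = 2) whose sum is closest to zero: sort once, then two pointers, O(n log n)."""
    a = sorted(nums)
    lo, hi = 0, len(a) - 1
    best = (a[lo], a[hi])
    while lo < hi:
        s = a[lo] + a[hi]
        if abs(s) < abs(best[0] + best[1]):
            best = (a[lo], a[hi])
        if s > 0:
            hi -= 1        # sum too large: try a smaller right partner
        elif s < 0:
            lo += 1        # sum too small: try a larger left partner
        else:
            break          # exactly zero, cannot improve
    return best

print(closest_pair_sum([-8, 4, 5, -3]))  # (-3, 4), sum 1
```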

test whether k elements of a set add up to a certain number

Is there a way to determine if k elements of a set add up to a certain number in polynomial time?
How big is the number?
This is a variation on the subset sum problem, which is well known and NP-complete. However, dynamic programming techniques will make it polynomial if the set of possible values that the subsets can take grows only polynomially. With general integers that isn't true, but with numbers picked from a restricted range it happens surprisingly often.
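
A hedged sketch of that pseudo-polynomial dynamic program, tracking which sums are reachable with exactly c of the elements (the names are mine):

```python
def k_subset_sum_exists(nums, k, target):
    """Can exactly k distinct elements of nums sum to target?

    Pseudo-polynomial DP: reachable[c] holds the sums achievable with exactly
    c elements, so the cost depends on how many distinct sums actually appear.
    """
    reachable = [set() for _ in range(k + 1)]
    reachable[0].add(0)
    for x in nums:
        # Walk the counts downward so each element is used at most once.
        for c in range(k - 1, -1, -1):
            for s in reachable[c]:
                reachable[c + 1].add(s + x)
    return target in reachable[k]

print(k_subset_sum_exists([3, 34, 4, 12, 5, 2], 3, 9))  # True: 3 + 4 + 2
```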
