Variation of subset sum - algorithm

Given an array of numbers, I want to find a subset of numbers whose sum is a multiple of a given number.
I know this is a variation of subset sum. But the problem is that there are infinitely many multiples of a number, so I can't think of a dynamic programming solution to the problem.
So how do I extend the subset sum problem to handle this?

The pseudo-polynomial DP solution to subset sum uses the DP state:
DP(n, s) = number of ways of getting a sum of s using the first n elements of the set
and takes O(ns) time. If I want to find all the multiples of d, I am only interested in the remainders of subset sums modulo d. Remember that remainders distribute over addition: (a + b) mod d = ((a mod d) + (b mod d)) mod d. Therefore, I change the DP state to
DP(n, m) = number of subsets whose sum ≡ m (mod d), using the first n elements
Space is reduced to O(nd), and time is also O(nd).
One convention followed in the actual pseudo-polynomial solution is to traverse the DP array from the end, allowing you to use only O(s) space. That cannot be done here, since adding an element can wrap a remainder around. The best you can do is to use O(2d) = O(d) memory to store the previous and current DP arrays.
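For illustration, here is a minimal Python sketch of this modular DP (my own example, counting subsets whose sum is divisible by d; note it counts the empty subset too):

import random  # not needed here; stdlib only

def count_subsets_divisible_by(nums, d):
    # prev[m] = number of subsets seen so far with sum ≡ m (mod d)
    prev = [0] * d
    prev[0] = 1  # the empty subset has sum 0
    for x in nums:
        cur = prev[:]  # subsets that skip x
        for m in range(d):
            cur[(m + x) % d] += prev[m]  # subsets that take x
        prev = cur
    return prev[0]

print(count_subsets_divisible_by([1, 2, 3, 4], 5))  # 4: {}, {1,4}, {2,3}, {1,2,3,4}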

Although there are infinitely many multiples of every (nonzero) number, there are only finitely many multiples of a number that will be less than the sum of all the elements in your set. In other words, you can always upper-bound the maximum multiple that could be generated by the sum of the elements of the set. This should enable you to use standard pseudopolynomial-time DP techniques to solve the problem.
Hope this helps!

Here is code for finding the number of subsets that produce a given sum value.
import java.util.Scanner;

public class SubsetSumCount {
    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        int n = scan.nextInt(); // number of elements in the set
        int s = scan.nextInt(); // the sum to be formed
        int[] setValue = new int[n];
        for (int i = 0; i < n; i++) {
            setValue[i] = scan.nextInt();
        }
        // setSplit[i][j] = number of subsets of the first i elements with sum j
        long[][] setSplit = new long[n + 1][s + 1];
        setSplit[0][0] = 1; // the empty subset has sum 0; no other sum is reachable with 0 elements
        for (int i = 1; i <= n; i++) {
            for (int j = 0; j <= s; j++) {
                setSplit[i][j] = setSplit[i - 1][j]; // leave element i out
                if (j >= setValue[i - 1]) {
                    setSplit[i][j] += setSplit[i - 1][j - setValue[i - 1]]; // take element i once
                }
            }
        }
        System.out.println(setSplit[n][s]); /* the number of subsets that sum to s */
    }
}

Related

Given N arrays of integers of N elements, check if X can be formed by taking one element from each array

Given N arrays of N elements, and a number X, check if X can be formed by taking an element from each array.
You have to take 1 and only 1 element from each array.
For N = 2, we need to find a pair which can be done in O(n) time (using a set or assuming that the array was sorted).
For N = 3, will the complexity be O(n^2)?
For N = 4, it can be done in O(n^2 log n) time.
How can this be generalized for N arrays?
This looks like a variant of the subset sum problem, and it can be solved similarly using dynamic programming in pseudo-polynomial time:
D[0][0] = true
D[0][w] = false | w != 0
D[i][w] = OR { D[i-1][w-x] | for each x in array i }
Then, you are looking for the value of D[n][X].
Intuitively, for each element in the ith array you check where it leads you for each w <= X, and then check recursively whether that can be achieved. Note that the recursion here is symbolic, as dynamic programming doesn't require a recursive solution.
Time complexity of the solution is O(n^2 * X), since each entry of the table takes O(n) time to calculate, and there are n*X entries in the table.
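A short Python sketch of this DP (a boolean table over partial sums, assuming nonnegative elements so that w stays in [0, X]):

def can_form(arrays, X):
    # D[w] = True if w can be formed taking exactly one element from each array so far
    D = [False] * (X + 1)
    D[0] = True
    for arr in arrays:
        nxt = [False] * (X + 1)
        for w in range(X + 1):
            if D[w]:
                for x in arr:
                    if w + x <= X:
                        nxt[w + x] = True
        D = nxt
    return D[X]

print(can_form([[1, 4, 5], [2, 3, 6], [7, 1, 3], [8, 4, 2]], 26))  # True: 5 + 6 + 7 + 8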
I am kind of a beginner in the world of algorithms, but here is my attempt to solve the above problem.
So the time complexity of my algorithm is somewhere around O(n*n*l*log n). Here n is the number of arrays and l is the length of each array; the example I considered while solving this problem has n = 4 and l = 3.
Please let me know if I made any mistakes and I will improve my answer accordingly.
So my algorithm is something like this:
1. I store all the array values in an n*l 2-D array, where each row represents one array and each of its l columns holds one element.
2. I keep a global set of integers; while iterating over the i arrays, I store all the sums that are possible so far.
The example I have considered for this code is given below:
4 3 26
1 4 5
2 3 6
7 1 3
8 4 2
Input Structure:-
1. The first line contains three integers n, l, sum, where n is the number of arrays, l is the length of each array, and sum is the value we need to check whether it's possible to achieve.
2. Each of the next n lines contains the l elements of the ith array.
Now below is my code, written in C++. I think it's pretty straightforward to understand.
#include <bits/stdc++.h>
using namespace std;

int main()
{
    int n, l, sum;
    cin >> n >> l >> sum;
    vector<vector<int> > A(n, vector<int>(l));
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < l; j++) {
            cin >> A[i][j];
        }
    }
    // sums reachable by taking exactly one element from each array processed so far
    set<int> sum_intervals;
    for (int j = 0; j < l; j++) {
        sum_intervals.insert(A[0][j]);
    }
    for (int i = 1; i < n; i++) {
        set<int> temp;  // built fresh, so every new sum uses exactly one element of array i
        for (int j = 0; j < l; j++) {
            for (set<int>::iterator it = sum_intervals.begin(); it != sum_intervals.end(); it++) {
                temp.insert(*it + A[i][j]);
            }
        }
        sum_intervals = temp;
    }
    if (sum_intervals.count(sum)) {
        cout << "Yayy answer exists" << endl;
    } else {
        cout << "Sorry, no answer exists" << endl;
    }
    return 0;
}
Hope this helps!

Find minimum cost to convert array to arithmetic progression

I recently encountered this question in an interview. I couldn't really come up with an algorithm for this.
Given an array of unsorted integers, we have to find the minimum cost at which this array can be converted to an arithmetic progression, where a cost of 1 unit is incurred if any element is changed in the array. Also, the values of the elements range over (-inf, inf).
I sort of realised that DP can be used here, but I couldn't solve the equation. There were some constraints on the values, but I don't remember them. I am just looking for high level pseudo code.
EDIT
Here's a correct solution; unfortunately, while simple to understand, it's not very efficient at O(n^3).
function costAP(arr) {
    if (arr.length < 3) { return 0; }
    var minCost = arr.length;
    for (var i = 0; i < arr.length - 1; i++) {
        for (var j = i + 1; j < arr.length; j++) {
            var delta = (arr[j] - arr[i]) / (j - i);
            var cost = 0;
            for (var k = 0; k < arr.length; k++) {
                if (k == i) { continue; }
                // element k lies on this AP iff arr[k] + delta * (i - k) == arr[i]
                if ((arr[k] + delta * (i - k)) != arr[i]) { cost++; }
            }
            if (cost < minCost) { minCost = cost; }
        }
    }
    return minCost;
}
Find the relative delta between every distinct pair of indices in the array
Use the relative delta to test the cost of transforming the whole array to AP using that delta
Return the minimum cost
Louis Ricci had the right basic idea of looking for the largest existing arithmetic progression, but assumed that it would have to appear in a single run, when in fact the elements of this progression can appear in any subset of the positions, e.g.:
1 42 3 69 5 1111 2222 8
requires just 4 changes: replace 42, 69, 1111 and 2222, keeping the existing AP elements 1, 3, 5, 8 in place.
To calculate this, notice that every AP has a rightmost element. We can consider each element i of the input vector as the rightmost AP position in turn. For each such i, consider every position j to the left of i, determine the step size implied by the (i, j) combination and, when this step is an integer (indicating a valid AP), add one to the number of elements that imply this step size and end at position i -- since all such elements belong to the same AP. The overall maximum is then the longest AP:
struct solution {
    int len;
    int pos;
    int step;
};

solution longestArithProg(vector<int> const& v) {
    if (v.empty()) return { 0, 0, 0 };
    solution best = { 0, 0, 0 };  // len 0, so a 1-element array yields an AP of length 1 below
    for (int i = 1; i < v.size(); ++i) {
        unordered_map<int, int> bestForStep;
        for (int j = 0; j < i; ++j) {
            int step = (v[i] - v[j]) / (i - j);
            if (step * (i - j) == v[i] - v[j]) {
                // This j gives an integer step size: record that j lies on this AP
                int len = ++bestForStep[step];
                if (len > best.len) {
                    best.len = len;
                    best.pos = i;
                    best.step = step;
                }
            }
        }
    }
    ++best.len; // We never counted the final element in the AP
    return best;
}
The above C++ code uses O(n^2) time and O(n) space, since it loops over every pair of positions i and j, performing a single hash read and write for each. To answer the original problem:
int howManyChangesNeeded(vector<int> const& v) {
    return v.size() - longestArithProg(v).len;
}
This problem has a simple geometric interpretation, which shows that it can be solved in O(n^2) time and probably can't be solved any faster than that (reduction from 3SUM). Suppose our array is [1, 2, 10, 3, 5]. We can write that array as a sequence of points
(0,1), (1,2), (2,10), (3,3), (4,5)
in which the x-value is the index of the array item and the y-value is the value of the array item. The question now becomes one of finding a line which passes through the maximum possible number of points in that set. The cost of converting the array is the number of points not on the line, which is minimized when the number of points on the line is maximized.
A fairly definitive answer to that question is given in this SO posting: What is the most efficient algorithm to find a straight line that goes through most points?
The idea: for each point P in the set from left to right, find the line passing through that point and a maximum number of points to the right of P. (We don't need to look at points to the left of P because they would have been caught in an earlier iteration).
To find the maximum number of P-collinear points to the right of P, for each such point Q calculate the slope of the line segment PQ. Tally up the different slopes in a hash map. The slope which maps to the maximum number of hits is what you're looking for.
Technical issue: you probably don't want to use floating point arithmetic to calculate the slopes. On the other hand, if you use rational numbers, you potentially have to calculate the greatest common divisor in order to compare fractions by comparing numerator and denominator, which multiplies running time by a factor of log n. Instead, you should check equality of rational numbers a/b and c/d by testing whether ad == bc.
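For illustration, here is a Python sketch of the slope-tally step for a single point (it uses gcd-normalized hash keys, accepting the log-factor caveat above; the names points and p are mine):

from math import gcd
from collections import defaultdict

def max_collinear_through(points, p):
    # Count the maximum number of points collinear with p. The x-values are
    # array indices here, so they are all distinct and dx is never 0.
    px, py = p
    slopes = defaultdict(int)
    for qx, qy in points:
        if (qx, qy) == (px, py):
            continue
        dy, dx = qy - py, qx - px
        if dx < 0:
            dy, dx = -dy, -dx  # canonical sign, so equal slopes get equal keys
        g = gcd(abs(dy), dx)
        slopes[(dy // g, dx // g)] += 1
    return max(slopes.values(), default=0) + 1  # +1 for p itself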
The SO posting referenced above gives a reduction from 3SUM, i.e., this problem is 3SUM-hard which shows that if this problem could be solved substantially faster than O(n^2), then 3SUM could also be solved substantially faster than O(n^2). This is where the condition that the integers are in (-inf,inf) comes in. If it is known that the integers are from a bounded set, the reduction from 3SUM is not definitive.
An interesting further question is whether the idea in the Wikipedia for solving 3SUM in O(n + N log N) time when the integers are in the bounded set (-N,N) can be used to solve the minimum cost to convert an array to an AP problem in time faster than O(n^2).
Given the array a = [a_1, a_2, ..., a_n] of unsorted integers, let diffs = [a_2-a_1, a_3-a_2, ..., a_n-a_(n-1)].
Find the maximum occurring value in diffs and adjust any values in a as necessary so that all neighboring values differ by this amount.
Interestingly, even I had the same question in my campus recruitment test today. While doing the test itself, I realised that this logic of altering elements based on the most frequent difference between two subsequent elements of the array fails in some cases.
E.g. 4, 5, 8, 9. According to the a2-a1, a3-a2 logic proposed above, the answer should be 1, which is not the case.
As you suggested DP, I feel it could be along the lines of considering two values for each element of the array -- the cost when it is modified and the cost when it is not -- and returning the minimum of the two, terminating when you reach the end of the array.

Minimal Number of Extract + Inserts required to sort a list

Context
This problem arises from trying to minimize the number of expensive function calls.
Problem Definition
Please note that extract_and_insert != swap. In particular, we take the element from position "from", insert it at position "to", and SHIFT all intermediate elements.
int n;
int A[n]; // all elements are integer and distinct

function extract_and_insert(from, to) {
    int old_value = A[from];
    if (from < to) {
        for (int i = from; i < to; ++i)
            A[i] = A[i+1];
        A[to] = old_value;
    } else {
        for (int i = from; i > to; --i)
            A[i] = A[i-1];
        A[to] = old_value;
    }
}
Question
We know there are O(n log n) algorithms for sorting a list of numbers.
Now: is there an O(n log n) algorithm which returns the minimum number of calls to extract_and_insert required to sort the list?
The answer is Yes.
This problem is essentially equivalent to finding the longest increasing subsequence (LIS) of the array, and you can use known LIS algorithms to solve it.
Why is this question equivalent to longest increasing subsequence?
Because each extract_and_insert operation will, at its most effective use, correct the relative position of exactly one element in the array. In other words, each operation can increase the length of the longest increasing subsequence of the array by at most 1, and the elements already on the LIS never need to move. So the minimum number of required calls is:
length_of_array - length_of_LIS
and therefore by finding the length of LIS, we will be able to find the minimum number of operations required.
Do read up the linked Wikipedia page to see how to implement the algorithm.
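For reference, here is a short Python sketch that computes this in O(n log n) using patience sorting:

import bisect

def min_moves_to_sort(a):
    # Minimum extract_and_insert calls = len(a) - LIS(a); elements are distinct.
    tails = []  # tails[k] = smallest possible tail of an increasing subsequence of length k+1
    for x in a:
        k = bisect.bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(a) - len(tails)

print(min_moves_to_sort([3, 1, 2, 5, 4]))  # LIS has length 3 (e.g. 1, 2, 4), so 2 moves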

Sample an index of a maximal number in an array, with a probability of 1/(number of maximal numbers)

This is one of the recent interview questions that I faced: write a program to return the index of the maximum number in the array (note: the array may or may not contain multiple copies of the maximum number) such that each index holding the maximum has probability 1/(number of max numbers) of being returned.
Examples:
[-1 3 2 3 3]: each of positions [1, 3, 4] has probability 1/3 of being returned (the three 3s)
[2 4 6 6 3 1 6 6]: each of [2, 3, 6, 7] has probability 1/4 of being returned (corresponding to the positions of the 6s)
First, I gave an O(n) time and O(n) space algorithm where I collect the set of max indexes and then return a random member of the set. But he asked for an O(n) time and O(1) space program, and then I came up with this.
int find_maxIndex(vector<int> a)
{
    max = a[0];
    max_index = 0;
    count = 0;
    for (i = 1 to a.size())
    {
        if (max < a[i])
        {
            max = a[i];
            count = 0;
        }
        if (max == a[i])
        {
            count++;
            if (rand < 1/count) // rand = a random number in the range [0,1]
                max_index = i;
        }
    }
    return max_index;
}
I gave him this solution. But my doubt is whether this procedure selects one of the indexes of the max numbers with equal probability. Hope I am clear. Is there any other method to do this?
What you have is reservoir sampling! There is another easy-to-understand solution, but it requires two passes.
int find_maxIndex(vector<int> a) {
    int count = 1;
    int maxElement = a[0];
    for (int i = 1; i < a.size(); i++) {
        if (a[i] == maxElement) {
            count++;
        } else if (a[i] > maxElement) {
            count = 1;
            maxElement = a[i];
        }
    }
    int occurrence = rand() % count + 1; // pick the k-th occurrence, k in [1, count]
    int occur = 0;
    for (int i = 0; i < a.size(); i++) {
        if (a[i] == maxElement) {
            occur++;
            if (occur == occurrence) return i;
        }
    }
    return -1; // unreachable; silences compiler warnings
}
The algorithm is pretty simple: in the first pass, find the number of times the max element occurs; then choose a random occurrence and return the index of that occurrence in a second pass. It takes two passes, but it is very easy to understand.
Your algorithm works fine, and you can prove it via induction.
That is, assuming it works for any array of size N, prove it works for any array of size N+1.
So, given an array of size N+1, think of it as a sub-array of size N followed by a new element at the end. By assumption, your algorithm uniformly selects one of the max elements of the sub-array... And then it behaves as follows:
If the new element is larger than the max of the sub-array, return that element. This is obviously correct.
If the new element is less than the max of the sub-array, return the result of the algorithm on the sub-array. Also obviously correct.
The only slightly tricky part is when the new element equals the max element of the sub-array. In this case, let the number of max elements in the sub-array be k. Then, by hypothesis, your algorithm selected one of them with probability 1/k. By keeping that same element with probability k/(k+1), you make the overall probability of selecting that same element equal 1/k * k /(k+1) == 1/(k+1), as desired. You also select the last element with the same probability, so we are done.
To complete the inductive proof, just verify the algorithm works on an array of size 1. Also, for quality of implementation purposes, fix it not to crash on arrays of size zero :-)
[Update]
Incidentally, this algorithm and its proof are closely related to the Fisher-Yates shuffle (which I always thought was "Knuth's card-shuffling algorithm", but Wikipedia says I am behind the times).
The idea is sound, but the devil is in the details.
First off, what language are you using? It might make a difference. The rand() from C and C++ will return an integer, which isn't likely to be less than 1/count unless it returns 0. Even then, if 1/count is an integer division, that result is always going to be 0.
Also your count is off by 1: it is initialized to 0 for a[0] and the loop never counts that first occurrence, so when a later element ties the first one, count is 1 instead of 2 and index 0 can never be selected. Initialize count to 1 instead.
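Putting those fixes together, here is a sketch of a corrected one-pass version in Python (assuming random.random() supplies the [0,1) draw):

import random

def find_max_index(a):
    # Returns the index of a maximum element, uniformly among all maxima, in one pass.
    max_val, max_index = a[0], 0
    count = 1  # occurrences of the current max seen so far, including a[0]
    for i in range(1, len(a)):
        if a[i] > max_val:
            max_val, max_index, count = a[i], i, 1
        elif a[i] == max_val:
            count += 1
            if random.random() < 1.0 / count:  # keep index i with probability 1/count
                max_index = i
    return max_index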

Most efficient way of randomly choosing a set of distinct integers

I'm looking for the most efficient algorithm to randomly choose a set of n distinct integers, where all the integers are in some range [0..maxValue].
Constraints:
maxValue is larger than n, and possibly much larger
I don't care if the output list is sorted or not
all integers must be chosen with equal probability
My initial idea was to construct a list of the integers [0..maxValue] then extract n elements at random without replacement. But that seems quite inefficient, especially if maxValue is large.
Any better solutions?
Here is an optimal algorithm, assuming that we are allowed to use hashmaps. It runs in O(n) time and space (and not O(maxValue) time, which is too expensive).
It is based on Floyd's random sample algorithm. See my blog post about it for details.
The code is in Java:
private static Random rnd = new Random();

public static Set<Integer> randomSample(int max, int n) {
    HashSet<Integer> res = new HashSet<Integer>(n);
    int count = max + 1;
    for (int i = count - n; i < count; i++) {
        Integer item = rnd.nextInt(i + 1);
        if (res.contains(item))
            res.add(i);    // item was already picked: take i itself (Floyd's replacement trick)
        else
            res.add(item);
    }
    return res;
}
For small values of maxValue such that it is reasonable to generate an array of all the integers in memory then you can use a variation of the Fisher-Yates shuffle except only performing the first n steps.
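As a sketch in Python (assuming it is acceptable to materialize the whole range):

import random

def partial_fisher_yates(max_value, n):
    # Run only the first n steps of a Fisher-Yates shuffle of [0..max_value].
    a = list(range(max_value + 1))
    for i in range(n):
        j = random.randrange(i, len(a))  # uniform over the not-yet-fixed tail
        a[i], a[j] = a[j], a[i]
    return a[:n]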
If n is much smaller than maxValue and you don't wish to generate the entire array then you can use this algorithm:
Keep a sorted list l of the numbers picked so far, initially empty.
Pick a random number x between 0 and maxValue - (elements in l).
For each number in l, if it is smaller than or equal to x, add 1 to x.
Add the adjusted value of x to the sorted list and repeat.
If n is very close to maxValue then you can randomly pick the elements that aren't in the result and then find the complement of that set.
Here is another algorithm that is simpler but has potentially unbounded execution time:
Keep a set s of the elements picked so far, initially empty.
Pick a number at random between 0 and maxValue.
If the number is not in s, add it to s.
Go back to step 2 until s has n elements.
In practice if n is small and maxValue is large this will be good enough for most purposes.
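That loop, as a Python sketch:

import random

def sample_distinct(n, max_value):
    # Rejection sampling: keep drawing until we have n distinct values.
    # The expected number of draws stays close to n when n << max_value.
    s = set()
    while len(s) < n:
        s.add(random.randint(0, max_value))
    return list(s)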
One way to do it without generating the full array.
Say I want a randomly selected subset of m items from a set {x1, ..., xn} where m <= n.
Consider element x1. I add x1 to my subset with probability m/n.
If I do add x1 to my subset then I reduce my problem to selecting (m - 1) items from {x2, ..., xn}.
If I don't add x1 to my subset then I reduce my problem to selecting m items from {x2, ..., xn}.
Lather, rinse, and repeat until m = 0.
This algorithm is O(n) where n is the number of items I have to consider.
I rather imagine there is an O(m) algorithm where at each step you consider how many elements to remove from the "front" of the set of possibilities, but I haven't convinced myself of a good solution and I have to do some work now!
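Here is what that "lather, rinse, repeat" looks like as a single pass in Python (this is selection sampling; the function name is mine):

import random

def select_m_of(items, m):
    # Include each item with probability (still needed) / (still remaining).
    chosen = []
    remaining = len(items)
    for x in items:
        if random.random() * remaining < m - len(chosen):
            chosen.append(x)
        remaining -= 1
    return chosen

print(select_m_of(list(range(10)), 3))  # e.g. [2, 5, 9]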
If you are selecting M elements out of N, the strategy changes depending on whether M is of the same order as N or much less (i.e. less than about N/log N).
If they are similar in size, then you go through each item from 1 to N. You keep track of how many items you've got so far (let's call that m items picked out of n that you've gone through), and then you take the next number with probability (M-m)/(N-n) and discard it otherwise. You then update m and n appropriately and continue. This is a O(N) algorithm with low constant cost.
If, on the other hand, M is significantly less than N, then a resampling strategy is a good one. Here you will want to sort M so you can find them quickly (and that will cost you O(M log M) time--stick them into a tree, for example). Now you pick numbers uniformly from 1 to N and insert them into your list. If you find a collision, pick again. You will collide about M/N of the time (actually, you're integrating from 1/N to M/N), which will require you to pick again (recursively), so you'll expect to take M/(1-M/N) selections to complete the process. Thus, your cost for this algorithm is approximately O(M*(N/(N-M))*log(M)).
These are both such simple methods that you can just implement both--assuming you have access to a sorted tree--and pick the one that is appropriate given the fraction of numbers that will be picked.
(Note that picking numbers is symmetric with not picking them, so if M is almost equal to N, then you can use the resampling strategy, but pick those numbers to not include; this can be a win, even if you have to push all almost-N numbers around, if your random number generation is expensive.)
My solution is the same as Mark Byers'. It takes O(n^2) time, hence it's useful when n is much smaller than maxValue. Here's the implementation in Python:
import bisect
import random

def pick(n, maxValue):
    chosen = []
    for i in range(n):
        r = random.randint(0, maxValue - i)
        # shift r up past the already-chosen values so it lands in the complement
        for e in chosen:
            if e <= r:
                r += 1
            else:
                break
        bisect.insort(chosen, r)
    return chosen
The trick is to use a variation of the shuffle, in other words a partial shuffle.
function random_pick( a, n )
{
    N = len(a);
    n = min(n, N);
    picked = array_fill(0, n, 0);
    backup = array_fill(0, n, 0);
    // partially shuffle the array, and generate unbiased selection simultaneously
    // this is a variation on the fisher-yates-knuth shuffle
    for (i=0; i<n; i++) // O(n) times
    {
        selected = rand( 0, --N ); // unbiased sampling N * N-1 * N-2 * .. * N-n+1
        value = a[ selected ];
        a[ selected ] = a[ N ];
        a[ N ] = value;
        backup[ i ] = selected;
        picked[ i ] = value;
    }
    // restore partially shuffled input array from backup
    // optional step, if needed it can be ignored
    for (i=n-1; i>=0; i--) // O(n) times
    {
        selected = backup[ i ];
        value = a[ N ];
        a[ N ] = a[ selected ];
        a[ selected ] = value;
        N++;
    }
    return picked;
}
NOTE: the algorithm is strictly O(n) in both time and space, produces unbiased selections (it is a partial unbiased shuffle) and does not need hashmaps (which may not be available and/or usually hide complexity behind their implementation; e.g. fetch time is not O(1), it might even be O(n) in the worst case).
adapted from here
Linear congruential generator modulo maxValue+1. I'm sure I've written this answer before, but I can't find it...
UPDATE: I am wrong. The output of this is not uniformly distributed. Details on why are here.
I think the algorithm below is optimal, i.e. you cannot get better performance than this.
For choosing n numbers out of m numbers, the best algorithm offered so far is presented below. Its expected run time is O(n), and it needs only a single array to store the original numbers. It partially shuffles the first n elements of the array, and then you pick those first n shuffled numbers as your solution.
This is also a fully working C program. What you will find is:
Function getrand: a PRNG helper that returns an unbiased random number from 0 up to (but not including) its limit argument, by rejection.
Function randselect: the function that randomly chooses n unique numbers out of m many numbers. This is what this question is about.
Function main: this only demonstrates a use of the other functions, so that you can compile it into a program and have fun.
#include <stdio.h>
#include <stdlib.h>

int getrand(int limit) {
    /* rejection sampling: accept only r in [0, limit), so the result is unbiased */
    long int r;
    do {
        r = rand();
    } while (r >= limit);
    return r;
}

void randselect(int *all, int end, int select) {
    int c;
    for (c = 0; c < select; c++) {
        /* the candidates for slot c are positions c..end-1 */
        int range = end - c;
        int limit = RAND_MAX - (RAND_MAX % range); /* largest multiple of range */
        int bin = c + getrand(limit) % range;      /* uniform in [c, end-1] */
        /* swap c with bin */
        int tmp = all[c];
        all[c] = all[bin];
        all[bin] = tmp;
    }
}

int main() {
    int end = 1000;
    int select = 5;
    /* initialize all numbers up to end */
    int *all = malloc(end * sizeof(int));
    int c;
    for (c = 0; c < end; c++) {
        all[c] = c;
    }
    /* select `select` unique numbers randomly */
    srand(0);
    randselect(all, end, select);
    for (c = 0; c < select; c++) printf("%d ", all[c]);
    putchar('\n');
    free(all);
    return 0;
}
Here is the output of example code where I generate 4-permutations out of a pool of 8 numbers 100,000,000 times. I then use those permutations to compute the probability of each unique permutation occurring, and sort the permutations by this probability. The numbers come out fairly close, which I think means the distribution is uniform. The theoretical probability is 1/1680 = 0.000595238095238095; note how close the empirical results are to the theoretical value.
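For reproducibility, here is a quick Python sketch of the same kind of empirical check (fewer trials here for speed, with my own helper re-implementing the partial shuffle):

import random
from collections import Counter

def pick_4_of_8():
    a = list(range(8))
    for c in range(4):
        b = random.randrange(c, 8)  # partial Fisher-Yates step
        a[c], a[b] = a[b], a[c]
    return tuple(a[:4])

counts = Counter(pick_4_of_8() for _ in range(1_000_000))
print(len(counts))                                 # expect all 8*7*6*5 = 1680 permutations
print(min(counts.values()), max(counts.values()))  # expect both near 1000000/1680 ~ 595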
