Implementation: Algorithm for a special distribution problem

We are given a number x, and a set of n coins with denominations v1, v2, …, vn.
The coins are to be divided between Alice and Bob, with the restriction that each person's coins must add up to at least x.
For example, if x = 1, n = 2, and v1 = v2 = 2, then there are two possible distributions: one where Alice gets coin #1 and Bob gets coin #2, and one with the reverse. (These distributions are considered distinct even though both coins have the same denomination.)
I'm interested in counting the possible distributions. I'm pretty sure this can be done in O(nx) time and O(n+x) space using dynamic programming; but I don't see how.

Count the ways for one person to get less than x, double it, and subtract that from double the total number of ways to divide the collection in two (the Stirling number of the second kind, {n, 2}).
For example, with coins {2, 3, 3, 5} and x = 5, the ways-array evolves like this:

i  coin
0   2:   1
1   3:   1    (adding to the 2 would reach 5, too much)
2   3:   2
3   5:   N/A  (5 >= x)

3 ways for one person to get less than 5.
The total number of ways to partition a set of 4 items into 2 is {4, 2} = 7.
2 * 7 - 2 * 3 = 8
The Python code below uses MBo's routine. If you like this answer, please consider up-voting that answer.
# Stirling Algorithm
# Cod3d by EXTR3ME
# https://extr3metech.wordpress.com
def stirling(n, k):
    # Stirling number of the second kind: S(n, k) = k*S(n-1, k) + S(n-1, k-1)
    if n <= 0:
        return 1
    elif k <= 0:
        return 0
    elif n == k:
        return 1
    elif n < k:
        return 0
    else:
        return k * stirling(n - 1, k) + stirling(n - 1, k - 1)

def f(coins, x):
    # Code by MBo
    # https://stackoverflow.com/a/53418438/2034787
    # a[i] = number of subsets of the coins processed so far that sum to i (i < x)
    a = [1] + (x - 1) * [0]
    for c in coins:
        for i in range(x - 1, c - 1, -1):
            if a[i - c] > 0:
                a[i] = a[i] + a[i - c]
    return 2 * (stirling(len(coins), 2) - sum(a) + 1)

print(f([2, 3, 3, 5], 5))      # 8
print(f([1, 2, 3, 4, 4], 5))   # 16

If the sum of all coins is S, then the first person can get anywhere from x to S-x of money.
Make an array A of length S-x+1 and fill A[i] with the number of ways to pick coins summing to i (a kind of Coin Change counting problem).
To provide uniqueness (so that C1+C2 and C2+C1 are not counted as two variants), fill the array in reverse direction:
A[0] = 1
for C in Coins:
    for i = S-x downto C:
        if A[i - C] > 0:
            A[i] = A[i] + A[i - C]   // we can compose value i as (i - C) and C

then sum the A entries in the range x..S-x
Example for coins 2, 3, 3, 5 and x = 5.
S = 13, S-x = 8
Array state (indices 0..8) after using the coins in order:

after coin 2:        A = [1, 0, 1, 0, 0, 0, 0, 0, 0]
after the first 3:   A = [1, 0, 1, 1, 0, 1, 0, 0, 0]
after the second 3:  A = [1, 0, 1, 2, 0, 2, 1, 0, 1]
after coin 5:        A = [1, 0, 1, 2, 0, 3, 1, 1, 3]
The entries A[5..8] sum to 3 + 1 + 1 + 3 = 8, so there are 8 variants to distribute these coins. Quick check (3' denotes the second coin 3; each line shows the first person's coins | the second person's coins):

2 3    | 3' 5
2 3'   | 3  5
2 3 3' | 5
2 5    | 3 3'
3 3'   | 2 5
3 5    | 2 3'
3' 5   | 2 3
5      | 2 3 3'
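
For completeness, here is a small Python sketch of this approach, following the pseudocode above (my own illustration, not MBo's original code; it assumes sum(coins) >= 2*x):

def count_distributions(coins, x):
    # A[i] = number of subsets of the coins seen so far that sum to exactly i
    S = sum(coins)
    A = [0] * (S - x + 1)
    A[0] = 1
    for c in coins:
        # iterate downwards so each coin is used at most once per subset
        for i in range(S - x, c - 1, -1):
            A[i] += A[i - c]
    # a subset summing to i in [x, S-x] leaves the other person with at least x as well
    return sum(A[x:])

print(count_distributions([2, 3, 3, 5], 5))     # 8
print(count_distributions([1, 2, 3, 4, 4], 5))  # 16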

You can also solve it in O(|A| * x^2) time and memory by adding memoization to this DP:

solve(A, pos, sum1, sum2):
    if (pos == A.length) return sum1 == x && sum2 == x
    return solve(A, pos + 1, min(sum1 + A[pos], x), sum2) +
           solve(A, pos + 1, sum1, min(sum2 + A[pos], x))

print(solve(A, 0, 0, 0))

So, in terms of time complexity, depending on whether x^2 < sum you could use this or the answer provided by @MBo. If you care more about space, this approach is better only when |A| * x^2 < sum - x.
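
A minimal Python sketch of this memoized DP (my own code, not the answerer's; the name count_splits is made up). Both running sums are capped at x, so a capped sum equal to x means that person already has at least x:

from functools import lru_cache

def count_splits(coins, x):
    coins = tuple(coins)

    @lru_cache(maxsize=None)
    def solve(pos, s1, s2):
        if pos == len(coins):
            return 1 if (s1 == x and s2 == x) else 0
        c = coins[pos]
        # give coin `pos` either to person 1 or to person 2
        return (solve(pos + 1, min(s1 + c, x), s2) +
                solve(pos + 1, s1, min(s2 + c, x)))

    return solve(0, 0, 0)

print(count_splits([2, 3, 3, 5], 5))  # 8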

Related

Summing elements of a set of numbers to a given number

I have been battling to put together an algorithm to solve this problem.
Let's say I have the set of numbers {1, 2, 5}, each element of the set has unlimited supply, and I am given another number, 6. I am then asked to determine the number of ways the elements can be summed to get 6. For illustration:
1 + 1 + 1 + 1 + 1 + 1 = 6
1 + 1 + 2 + 2 = 6
2 + 2 + 2 = 6
1 + 5 = 6
1 + 1 + 1 + 1 + 2 = 6
So in this case the program will output 5 as the number of ways. Again, say you are to find the sums for 4:
1 + 1 + 1 + 1 = 4
2 + 2 = 4
1 + 1 + 2 = 4
In this case the algorithm will output 3 as the number of ways.
This is similar to the subset sum problem. I am sure you have to use a branch and bound or backtracking method.
1) Create a state space tree which consists of all possible cases.

        0
      / | \
     1  2  5
    /|\
   1  2  5  ........

2) Continue the process in a depth-first manner until the running sum of the nodes is greater than or equal to your desired number.
3) Count the number of full branches that satisfy your condition.
The Python implementation of a similar problem can be found here.
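
As a rough illustration of the backtracking idea (my own sketch, not the linked implementation): picking numbers only in non-decreasing order ensures each combination is counted once, and a branch is pruned as soon as the running sum would exceed the target.

def count_ways(numbers, target):
    numbers = sorted(numbers)

    def backtrack(remaining, start):
        if remaining == 0:
            return 1  # a full branch that hits the target exactly
        total = 0
        for i in range(start, len(numbers)):
            if numbers[i] > remaining:
                break  # prune: every later number is at least as large
            total += backtrack(remaining - numbers[i], i)
        return total

    return backtrack(target, 0)

print(count_ways([1, 2, 5], 6))  # 5
print(count_ways([1, 2, 5], 4))  # 3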
This is a good problem to use recursion and dynamic programming techniques. Here is an implementation in Python using the top-down approach (memoization) to avoid doing the same calculation multiple times:

# Remember answers for subsets
cache = {}

# Return the ways to get the desired sum from combinations of the given numbers
def possible_sums(numbers, desired_sum):
    # See if we have already calculated this possibility
    key = (tuple(set(numbers)), desired_sum)
    if key in cache:
        return cache[key]
    answers = {}
    for n in numbers:
        if desired_sum % n == 0:
            # The sum is a multiple of the number
            answers[tuple([n] * (desired_sum // n))] = True
        if n < desired_sum:
            for a in possible_sums(numbers, desired_sum - n):
                answers[tuple([n] + a)] = True
    cache[key] = [list(k) for k in answers.keys()]
    return cache[key]

# Return only distinct combinations of sums, ignoring order
def unique_possible_sums(numbers, desired_sum):
    answers = {}
    for s in possible_sums(numbers, desired_sum):
        answers[tuple(sorted(s))] = True
    return [list(k) for k in answers.keys()]

for s in unique_possible_sums([1, 2, 5], 6):
    print('6: ' + repr(s))
for s in unique_possible_sums([1, 2, 5], 4):
    print('4: ' + repr(s))
For a smaller target number (~1,000,000) and around 1000 supply numbers, try this:

The supply of numbers you have:
    supply {a, b, c, ...}
The target you need:
    steps[n]
There is 1 way to get to 0: use nothing
    steps[0] = 1
Scan up to the target number:
    for i from 1 to n:
        for each supply x:
            if i - x >= 0:
                steps[i] += steps[i-x]
steps at n will contain the number of ways:
    steps[n]
Visualization of the above, for supply {1, 2, 5} and target 6 (each line shows the value of steps[i] after adding steps[i-x]):
i = 1, x = 1: steps[1] = 1
i = 2, x = 1: steps[2] = 1
i = 2, x = 2: steps[2] = 2
i = 3, x = 1: steps[3] = 2
i = 3, x = 2: steps[3] = 3
i = 4, x = 1: steps[4] = 3
i = 4, x = 2: steps[4] = 5
i = 5, x = 1: steps[5] = 5
i = 5, x = 2: steps[5] = 8
i = 5, x = 5: steps[5] = 9
i = 6, x = 1: steps[6] = 9
i = 6, x = 2: steps[6] = 14
i = 6, x = 5: steps[6] = 15
Some Java code:

private int test(int targetSize, int supply[]) {
    int target[] = new int[targetSize + 1];
    target[0] = 1;
    for (int i = 0; i <= targetSize; i++) {
        for (int x : supply) {
            if (i - x >= 0) {
                target[i] += target[i - x];
            }
        }
    }
    return target[targetSize];
}

@Test
public void test() {
    System.err.println(test(12, new int[]{1, 2, 3, 4, 5, 6}));
}

Algorithm to find whether the numbers in a list, when added or subtracted, can equal a mod b

I was doing some interview problems when I ran into an interesting one that I could not think of a solution for. The problem states:
Design a function that takes in an array of integers. The last two numbers in this array are 'a' and 'b'. The function should find if all of the numbers in the array, when summed/subtracted in some fashion, are equal to a mod b, except the last two numbers a and b.
So, for example, let us say we have the array:
array = [5, 4, 3, 3, 1, 3, 5]
I need to find out if there exists any possible "placement" of +/- in this array so that the numbers can equal 3 mod 5. The function should print True for this array because 5 + 4 - 3 + 3 - 1 = 8 = 3 mod 5.
The "obvious" and easy solution would be to try to add/subtract everything in all possible ways, but that is an egregiously time-complex solution, roughly O(2^n).
Is there any better way to do this?
Edit: The question requires the function to use all numbers in the array, not just some of them. Except, of course, the last two.
If there are n numbers, then there is a simple algorithm that runs in O(b * n): for k = 2 to n, calculate the set of residues x such that the sum or difference of the first k numbers can be equal to x modulo b.
For k = 2, the set contains (a_0 + a_1) modulo b and (a_0 - a_1) modulo b. For k = 3, 4, ..., n you take the residues in the previous set, then either add or subtract the next number in the array. Finally, check whether a is an element of the last set.
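
A short Python sketch of this set-based DP (my own code, not the answerer's; the first number is taken with a plus sign, as in the description above):

def signed_sum_hits(arr):
    *nums, a, b = arr          # the last two entries are a and b
    reachable = {nums[0] % b}  # residues reachable so far
    for x in nums[1:]:
        reachable = ({(r + x) % b for r in reachable} |
                     {(r - x) % b for r in reachable})
    return a % b in reachable

print(signed_sum_hits([5, 4, 3, 3, 1, 3, 5]))  # True: 5 + 4 - 3 + 3 - 1 = 8 = 3 mod 5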
O(b * n). Let's take your example, [5, 4, 3, 3, 1]. Let m[i][j] represent whether a solution exists for j mod 5 up to index i:
i = 0:
5 = 0 mod 5
m[0][0] = True
i = 1:
0 + 4 = 4 mod 5
m[1][4] = True
but we could also subtract
0 - 4 = 1 mod 5
m[1][1] = True
i = 2:
Examine the previous possibilities:
m[1][4] and m[1][1]
4 + 3 = 7 = 2 mod 5
4 - 3 = 1 = 1 mod 5
1 + 3 = 4 = 4 mod 5
1 - 3 = -2 = 3 mod 5
m[2][1] = True
m[2][2] = True
m[2][3] = True
m[2][4] = True
i = 3:
1 + 3 = 4 mod 5
1 - 3 = 3 mod 5
2 + 3 = 0 mod 5
2 - 3 = 4 mod 5
3 + 3 = 1 mod 5
3 - 3 = 0 mod 5
4 + 3 = 2 mod 5
4 - 3 = 1 mod 5
m[3][0] = True
m[3][1] = True
m[3][2] = True
m[3][3] = True
m[3][4] = True
We could actually stop there, but let's follow a solution different from the one in your example backwards:
i = 4:
m[3][2] == True means we had a solution for 2 at i = 3
=> 2 + 1 means m[4][3] = True
Tracing the signs back from the last element:
+ 1
+ 3
+ 3
- 4
(and the leading 5 contributes 0 mod 5), so 0 - 4 + 3 + 3 + 1 = 3 mod 5
I coded a solution based on the mathematical explanation provided here. I didn't comment the solution, so if you want an explanation, I recommend you read the answer!

def kmodn(l):
    k, n = l[-2], l[-1]
    A = [0] * n
    count = -1
    domath(count, A, l[:-2], k, n)

def domath(count, A, l, k, n):
    if count == len(l):
        boolean = A[k] == 1
        print(boolean)
    elif count == -1:
        A[0] = 1  # because the empty set is possible
        count += 1
        domath(count, A, l, k, n)
    else:
        indices = [i for i, x in enumerate(A) if x == 1]
        b = [0] * n
        for i in indices:
            idx1 = (l[count] + i) % n
            idx2 = (i - l[count]) % n
            b[idx1], b[idx2] = 1, 1
        count += 1
        A = b
        domath(count, A, l, k, n)
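
For the example array from the question, the call below prints True (my added check, consistent with the walk-through above):

kmodn([5, 4, 3, 3, 1, 3, 5])  # prints True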

Algorithmic puzzle for calculating the number of combinations of numbers summing to a fixed result

This is a puzzle I have been thinking about since last night. I have come up with a solution, but it's not efficient, so I want to see if there is a better idea.
The puzzle is this:
given positive integers N and T, you will need to have:
for i in [1, T], A[i] from {-1, 0, 1}, such that SUM(A) == N
additionally, every prefix sum of A must stay in [0, N], and once the prefix sum PSUM[A, t] == N it is necessary to have A[i] == 0 for all i in [t + 1, T]
here the prefix sum is defined as PSUM[A, t] = SUM(A[i] for i in [1, t])
the puzzle asks how many such A's exist for given fixed N and T
For example, when N = 2, T = 4, the following A's work:
1 1 0 0
1 -1 1 1
0 1 1 0
but the following don't:
-1 1 1 1 # prefix sum -1
1 1 -1 1 # non-0 following a prefix sum == N
1 1 1 -1 # prefix sum > N
The following Python code can verify the rule, given N as expect and an instance of A as seq (some people find it easier to read code than a literal description):

def verify(expect, seq):
    s = 0
    for j, i in enumerate(seq):
        s += i
        if s < 0:
            return False
        if s == expect:
            break
    else:
        return s == expect
    for k in range(j + 1, len(seq)):
        if seq[k] != 0:
            return False
    return True
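
As a quick check against the N = 2, T = 4 examples above (calls added by me):

print(verify(2, [1, -1, 1, 1]))  # True
print(verify(2, [1, 1, -1, 1]))  # False: non-zero entry after the prefix sum reached N
print(verify(2, [-1, 1, 1, 1]))  # False: prefix sum goes negative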
I have coded up my solution, but it's too slow. Here is mine:
I decompose the problem into two parts: a part without -1 in it (only {0, 1}) and a part with -1.
So if SOLVE(N, T) is the correct answer, I define a function SOLVE'(N, T, B), where a positive B allows me to extend the prefix-sum range to [-B, N] instead of [0, N];
so in fact SOLVE(N, T) == SOLVE'(N, T, 0).
I soon realized the solution is actually:
let the prefix of A be some valid {0, 1} combination with positive length l and with o ones in it;
at position l + 1, start adding one or more -1s and use B to track their number; the maximum is B + o or the number of slots remaining in A, whichever is less;
recursively call SOLVE'(N, T, B).
In the previous N = 2, T = 4 example, in one of the search cases, I will do:
let the prefix of A be [1], then we have A = [1, -, -, -];
start adding -1. Here I will add only one: A = [1, -1, -, -];
recursively call SOLVE'; here I will call SOLVE'(2, 2, 0) to solve the last two spots. It returns [1, 1] only, so one of the combinations yields [1, -1, 1, 1].
But this algorithm is too slow.
I am wondering how I can optimize it, or whether there is a different way to look at this problem that can boost the performance (I just need the idea, not an implementation).
EDIT:
Some sample values:
T N SOLVE(N, T)
3 2 3
4 2 7
5 2 15
6 2 31
7 2 63
8 2 127
9 2 255
10 2 511
11 2 1023
12 2 2047
13 2 4095
3 3 1
4 3 4
5 3 12
6 3 32
7 3 81
8 3 200
9 3 488
10 3 1184
11 3 2865
12 3 6924
13 3 16724
4 4 1
5 4 5
6 4 18
A brute-force exponential-time solution, in general, looks like this (in Python):

import itertools
choices = [-1, 0, 1]
print(len([l for l in itertools.product(*([choices] * t)) if verify(n, l)]))
An observation: assuming that n is at least 1, every solution to your stated problem ends in something of the form [1, 0, ..., 0]: i.e., a single 1 followed by zero or more 0s. The portion of the solution prior to that point is a walk that lies entirely in [0, n-1], starts at 0, ends at n-1, and takes fewer than t steps.
Therefore you can reduce your original problem to a slightly simpler one, namely that of determining how many t-step walks there are in [0, n] that start at 0 and end at n (where each step can be 0, +1 or -1, as before).
The following code solves the simpler problem. It uses the lru_cache decorator to cache intermediate results; this is in the standard library in Python 3, or there's a recipe you can download for Python 2.
from functools import lru_cache

@lru_cache()
def walks(k, n, t):
    """
    Return the number of length-t walks in [0, n]
    that start at 0 and end at k. Each step
    in the walk adds -1, 0 or 1 to the current total.
    Inputs should satisfy 0 <= k <= n and 0 <= t.
    """
    if t == 0:
        # If no steps allowed, we can only get to 0,
        # and then only in one way.
        return k == 0
    else:
        # Count the walks whose last step is 0 ...
        total = walks(k, n, t-1)
        if 0 < k:
            # ... plus the walks whose last step is +1 ...
            total += walks(k-1, n, t-1)
        if k < n:
            # ... plus the walks whose last step is -1.
            total += walks(k+1, n, t-1)
        return total
Now we can use this function to solve your problem.
def solve(n, t):
    """
    Find number of solutions to the original problem.
    """
    # All solutions stick at n once they get there.
    # Therefore it's enough to find all walks
    # that lie in [0, n-1] and take us to n-1 in
    # fewer than t steps.
    return sum(walks(n-1, n-1, i) for i in range(t))
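
As a quick sanity check against the table in the question (my own addition, not part of the original answer):

print(solve(2, 4))  # 7   (matches the T=4, N=2 row)
print(solve(3, 7))  # 81  (matches the T=7, N=3 row)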
Result and timings on my machine for solve(10, 100):
In [1]: solve(10, 100)
Out[1]: 250639233987229485923025924628548154758061157
In [2]: %timeit solve(10, 100)
1000 loops, best of 3: 964 µs per loop

Arithmetic operation on sequence on integers

I have N integer numbers: 1, 2, 3, ..., N.
The task is to use +, -, *, / to make an expression equal to 0.
For example -1*2+3+4-5=0
How can I do it?
Maybe some code in C/C++?
If N % 4 == 0, for every four consecutive integers a, b, c, d, take a - b - c + d
If N % 4 == 1, use 1 * 2 to start, then proceed as before (i.e., 1*2 - 3 - 4 + 5 + 6 - 7 - 8 + 9 ...).
If N % 4 == 2, start with 1 - 2 + 3 * 4 - 5 - 6, then proceed as in the N % 4 == 0 example.
If N % 4 == 3, start with 1 + 2 - 3, then proceed as in the N%4 == 0 example.
All of these find a way to get zero out of the first few integers, leaving a multiple of four integers to work on, and then take advantage of the fact that a - b - c + d = 0 for any four consecutive integers a, b, c, d.
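
A small Python sketch of this construction (my own code, not part of the original answer; it assumes N >= 3, since N = 1 and N = 2 have no solution):

def zero_expression(n):
    parts = []
    start = 1
    if n % 4 == 1:
        parts.append("1*2-3-4+5")      # 2 - 3 - 4 + 5 = 0
        start = 6
    elif n % 4 == 2:
        parts.append("1-2+3*4-5-6")    # 1 - 2 + 12 - 5 - 6 = 0
        start = 7
    elif n % 4 == 3:
        parts.append("1+2-3")          # 1 + 2 - 3 = 0
        start = 4
    # the rest comes in groups of four consecutive integers: a - b - c + d = 0
    for a in range(start, n + 1, 4):
        parts.append("+%d-%d-%d+%d" % (a, a + 1, a + 2, a + 3))
    return "".join(parts).lstrip("+")

for n in (4, 5, 6, 7, 12):
    e = zero_expression(n)
    print(n, e, eval(e))  # the last column is always 0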
This is essentially SAT, unless you know that the numbers form a consecutive sequence (e.g. 2 1 8 is forbidden). What about negative numbers?
If the sequence is not too large, I would recommend simply brute-forcing it. A greedy approach would be to reduce the problem by finding subsets which can be evaluated to zero.

How does this work? Weird Towers of Hanoi Solution

I was lost on the internet when I discovered this unusual, iterative solution to the towers of Hanoi:
for (int x = 1; x < (1 << nDisks); x++)
{
    FromPole = (x & x-1) % 3;
    ToPole = ((x | x-1) + 1) % 3;
    moveDisk(FromPole, ToPole);
}
This post also has similar Delphi code in one of the answers.
However, for the life of me, I can't seem to find a good explanation for why this works.
Can anyone help me understand it?
The recursive solution to Towers of Hanoi works so that if you want to move N disks from peg A to C, you first move N-1 from A to B, then you move the bottom one to C, and then you move the N-1 disks from B to C. In essence,

hanoi(from, to, spare, N):
    hanoi(from, spare, to, N-1)
    moveDisk(from, to)
    hanoi(spare, to, from, N-1)
Clearly hanoi( _ , _ , _ , 1) takes one move, and hanoi ( _ , _ , _ , k) takes as many moves as 2 * hanoi( _ , _ , _ , k-1) + 1. So the solution length grows in the sequence 1, 3, 7, 15, ... This is the same sequence as (1 << k) - 1, which explains the length of the loop in the algorithm you posted.
If you look at the solutions themselves, for N = 1 you get
FROM TO
; hanoi(0, 2, 1, 1)
0 2 ; movedisk
For N = 2 you get
FROM TO
; hanoi(0, 2, 1, 2)
; hanoi(0, 1, 2, 1)
0 1 ; movedisk
0 2 ; movedisk
; hanoi(1, 2, 0, 1)
1 2 ; movedisk
And for N = 3 you get
FROM TO
; hanoi(0, 2, 1, 3)
; hanoi(0, 1, 2, 2)
; hanoi(0, 2, 1, 1)
0 2 ; movedisk
0 1 ; movedisk
; hanoi(2, 1, 0, 1)
2 1 ; movedisk
0 2 ; movedisk ***
; hanoi(1, 2, 0, 2)
; hanoi(1, 0, 2, 1)
1 0 ; movedisk
1 2 ; movedisk
; hanoi(0, 2, 1, 1)
0 2 ; movedisk
Because of the recursive nature of the solution, the FROM and TO columns follow a recursive logic: if you take the middle entry of the columns, the parts above and below are copies of each other, but with the numbers permuted. This is an obvious consequence of the algorithm itself, which does not perform any arithmetic on the peg numbers but only permutes them. In the case N=3 the middle row is at x=4 (marked with three stars above).
Now the expression (X & (X-1)) unsets the least significant set bit of X, so it maps e.g. the numbers from 1 to 7 like this:
1 -> 0
2 -> 0
3 -> 2
4 -> 0 (***)
5 -> 4 % 3 = 1
6 -> 4 % 3 = 1
7 -> 6 % 3 = 0
The trick is that because the middle row is always at an exact power of two and thus has exactly one bit set, the part after the middle row equals the part before it when you add the middle row value (4 in this case) to the rows (i.e. 4 = 0 + 4, 6 = 2 + 4). This implements the "copy" property, and the addition of the middle row value implements the permutation part. The expression (X | (X-1)) + 1 sets the lowest zero bit which has ones to its right, and clears those ones, so it has similar properties as expected:
1 -> 2
2 -> 4 % 3 = 1
3 -> 4 % 3 = 1
4 -> 8 (***) % 3 = 2
5 -> 6 % 3 = 0
6 -> 8 % 3 = 2
7 -> 8 % 3 = 2
As to why these sequences actually produce the correct peg numbers, let's consider the FROM column. The recursive solution starts with hanoi(0, 2, 1, N), so at the middle row (2 ** (N-1)) you must have movedisk(0, 2). Now by the recursion rule, at (2 ** (N-2)) you need to have movedisk(0, 1) and at (2 ** (N-1)) + 2 ** (N-2) movedisk (1, 2). This creates the "0,0,1" pattern for the from pegs which is visible with different permutations in the table above (check rows 2, 4 and 6 for 0,0,1 and rows 1, 2, 3 for 0,0,2, and rows 5, 6, 7 for 1,1,0, all permuted versions of the same pattern).
Now then of all the functions that have this property that they create copies of themselves around powers of two but with offsets, the authors have selected those that produce modulo 3 the correct permutations. This isn't an overtly difficult task because there are only 6 possible permutations of the three integers 0..2 and the permutations progress in a logical order in the algorithm. (X|(X-1))+1 is not necessarily deeply linked with the Hanoi problem or it doesn't need to be; it's enough that it has the copying property and that it happens to produce the correct permutations in the correct order.
antti.huima's solution is essentially correct, but I wanted something more rigorous, and it was too big to fit in a comment. Here goes:
First note: at the middle step x = 2^(N-1) of this algorithm, the "from" peg is 0, and the "to" peg is 2^N % 3. This leaves 2^(N-1) % 3 for the "spare" peg.
This is also true for the last step of the algorithm, so we see that the authors' algorithm is actually a slight "cheat": they're moving the disks from peg 0 to peg 2^N % 3, rather than to a fixed, pre-specified "to" peg. This could be changed with not much work.
The original Hanoi algorithm is:
hanoi(from, to, spare, N):
    hanoi(from, spare, to, N-1)
    move(from, to)
    hanoi(spare, to, from, N-1)
Plugging in "from" = 0, "to" = 2^N % 3, "spare" = 2^(N-1) % 3, we get (suppressing the % 3's):
hanoi(0, 2**N, 2**(N-1), N):
    (a) hanoi(0, 2**(N-1), 2**N, N-1)
    (b) move(0, 2**N)
    (c) hanoi(2**(N-1), 2**N, 0, N-1)
The fundamental observation here is:
In line (c), the pegs are exactly the pegs of hanoi(0, 2^(N-1), 2^N, N-1) shifted by 2^(N-1) % 3, i.e. they are exactly the pegs of line (a) with this amount added to them.
I claim that it follows that when we run line (c), the "from" and "to" pegs are the corresponding pegs of line (a), shifted by 2^(N-1) % 3. This follows from the easy, more general lemma that in hanoi(a+x, b+x, c+x, N), the "from" and "to" pegs are shifted exactly x from those in hanoi(a, b, c, N).
Now consider the functions
f(x) = (x & (x-1)) % 3
g(x) = ((x | (x-1)) + 1) % 3
To prove that the given algorithm works, we only have to show that:
f(2^(N-1)) == 0 and g(2^(N-1)) == 2^N % 3
for 0 < i < 2^(N-1), we have f(2^(N-1) + i) == (f(i) + 2^(N-1)) % 3 and g(2^(N-1) + i) == (g(i) + 2^(N-1)) % 3.
Both of these are easy to show.
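
A small Python cross-check of this claim (my own sketch, not part of the original answer); it compares the bit-trick loop against the recursion with the shifted target peg 2^N % 3 described above:

def recursive_moves(n, frm, to, spare):
    # move list of the standard recursive solution
    if n == 0:
        return []
    return (recursive_moves(n - 1, frm, spare, to) +
            [(frm, to)] +
            recursive_moves(n - 1, spare, to, frm))

def iterative_moves(n):
    # the bit-trick loop from the question, collected into a list
    return [((x & (x - 1)) % 3, ((x | (x - 1)) + 1) % 3)
            for x in range(1, 1 << n)]

for n in range(1, 10):
    assert iterative_moves(n) == recursive_moves(n, 0, 2**n % 3, 2**(n - 1) % 3)
print("bit-trick moves match the shifted recursion for N = 1..9")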
This isn't directly answering the question, but it was too long to put in a comment.
I had always done this by analyzing the size of disk you should move next. If you look at the disks moved, it comes out to:
1 disk : 1
2 disks : 1 2 1
3 disks : 1 2 1 3 1 2 1
4 disks : 1 2 1 3 1 2 1 4 1 2 1 3 1 2 1
Odd sizes always move in the opposite direction of even ones, in order of pegs (0, 1, 2, repeat) or (2, 1, 0, repeat).
If you take a look at the pattern, the ring to move is the highest bit set of the xor of the number of moves and the number of moves + 1.
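
A one-line check of that observation in Python (my own addition): the disk to move on 1-based move number x is the position of the highest set bit of (x-1) XOR x, i.e. one plus the number of trailing zeros of x.

print([((x - 1) ^ x).bit_length() for x in range(1, 16)])
# [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]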
