Convert to a divide and conquer algorithm (Kotlin)

convert method "FINAL" to divide and conquer algorithm
the task sounded like this: The buyer has n coins of
H1,...,Hn.
The seller has m
coins in denominations of
B1,...,Bm.
Can the buyer purchase the item
the cost S so that the seller has an exact change (if
necessary).
fun Final(H: ArrayList<Int>, B: ArrayList<Int>, S: Int): Boolean {
    var Clon_Price = false
    var Temp: Int
    for (i in H) {
        if (i == S)
            return true
    }
    for (i in H.withIndex()) {
        Temp = i.value - S
        for (j in B) {
            if (j == Temp)
                Clon_Price = true
        }
    }
    return Clon_Price
}
fun main(args: Array<String>) {
    val H: ArrayList<Int> = ArrayList()
    val B: ArrayList<Int> = ArrayList()
    println("Enter the number of coins the buyer has:")
    var n: Int = readln().toInt()
    println("Enter their nominal value:")
    while (n > 0) {
        H.add(readln().toInt())
        n--
    }
    println("Enter the number of coins the seller has:")
    var m: Int = readln().toInt()
    println("Enter their nominal value:")
    while (m > 0) {
        B.add(readln().toInt())
        m--
    }
    println("Enter the product price:")
    val S = readln().toInt()
    if (Final(H, B, S)) {
        println("YES")
    } else {
        println("No!")
    }
}

Introduction
Since this is an assignment, I will only give you insights to solve this problem and you will need to do the coding yourself.
The algorithm
Receives two ArrayList<Int> parameters and an Int parameter
If the searched element (S) can be found in H, then the result is true
Otherwise it loops over H
Computes the difference between the current element and S
Searches for a match in B; if one is found, true is returned
If the method has not returned yet, then it returns false
Divide et impera (Divide and conquer)
Divide and conquer is the process of breaking a complicated task down into similar but simpler subtasks, repeating this breakdown until the subtasks become trivial (this is the divide part). Then, using the results of the trivial subtasks, we solve the slightly more complicated subtasks and work upwards through our layers of unsolved complexity until the problem is solved (this is the conquer part).
A very handy data structure to use here is the Stack. You can use the stack of your memory, which is a fancy way of saying recursion, or you can solve it iteratively by managing such a stack yourself.
This specific problem
This algorithm does not seem to necessitate divide and conquer, given the fact that you only have two array lists that can be iterated, so, I guess, this is an early assignment.
To make sure this is divide and conquer, you can add two parameters to your method (which are 0 and length - 1 at the start) that reflect the current problem-space. And upon each call, check whether the starting and ending index (the two new parameters) are equal. If they are, you already have a trivial, simplified subtask and you just iterate the second ArrayList.
If they are not equal, then you still need to divide. You can simply
//... Some code here
val mid = (start + end) / 2
return Final(H, B, S, start, mid) || Final(H, B, S, mid + 1, end)
(there you go, I couldn't resist writing code, after all)
for your nontrivial cases. This automatically breaks down the problem into sub-problems.
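Putting those pieces together, the converted method might look like the sketch below (my sketch, not a reference solution; the base case folds in both checks from the original method):
fun Final(H: ArrayList<Int>, B: ArrayList<Int>, S: Int, start: Int, end: Int): Boolean {
    if (start == end) {
        // Trivial subtask: a single buyer coin.
        if (H[start] == S) return true      // exact payment, no change needed
        return B.contains(H[start] - S)     // one seller coin gives exact change
    }
    val mid = (start + end) / 2
    return Final(H, B, S, start, mid) || Final(H, B, S, mid + 1, end)
}
The first call is Final(H, B, S, 0, H.size - 1); guard against an empty H before calling.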
Self-criticism
The idea above is a simplistic solution for you to get the gist. But, in reality, programmers dislike recursion, as it can lead to trouble. So, once you complete the implementation of the above, you are well-advised to convert your algorithm into an iterative one, which should be fairly easy once you have succeeded in implementing the recursive version.
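For instance, a possible iterative rewrite manages the stack of index ranges by hand instead of recursing (again a sketch under the same assumptions; finalIterative is a made-up name):
fun finalIterative(H: List<Int>, B: List<Int>, S: Int): Boolean {
    if (H.isEmpty()) return false
    val stack = ArrayDeque<Pair<Int, Int>>()    // pending [start, end] ranges
    stack.addLast(0 to H.lastIndex)
    while (stack.isNotEmpty()) {
        val (start, end) = stack.removeLast()
        if (start == end) {
            // trivial subtask: one buyer coin
            if (H[start] == S || B.contains(H[start] - S)) return true
        } else {
            val mid = (start + end) / 2         // divide the range in two
            stack.addLast(start to mid)
            stack.addLast(mid + 1 to end)
        }
    }
    return false
}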

Related

How to solve divisible triangular problem (ProjectEuler) efficiently in Kotlin?

I was solving problems on Project Euler, and I am stuck on the 12th problem. The following code takes too long: it is not even done in five minutes, and my CPU got warm.
Essentially what I am doing is generate a sequence of triangular numbers by adding successive natural numbers like:
1 -> 1 (i.e. 1)
2 -> 3 (i.e. 1+2)
3 -> 6 (i.e. 1+2+3)
And so on, then finding the first triangular number which has more than 500 factors (i.e. 501 factors).
fun main() {
    val numbers = generateTriangularNumbers()
    val result = numbers.first {
        val count = factorOf(it).count()
        // println(count) // just to see the count
        count > 500
    }
    println(result)
}

// Finds factor of input [x]
private fun factorOf(x: Long): Sequence<Long> = sequence {
    var current = 1L
    while (current <= x) {
        if (x % current == 0L) yield(current++) else current++
    }
}

// generates triangular numbers, like 1, 3, 6, 10. By adding numbers like 1+2+3+...n.
private fun generateTriangularNumbers(from: Long = 1): Sequence<Long> = sequence {
    val mapper: (Long) -> Long = { (1..it).sum() }
    var current = from
    while (true) yield(mapper(current++))
}
The count (the number of factors of the triangular numbers) hardly gets over 200. Is there a way to solve this problem efficiently, maybe within a minute?
Project Euler is about math. Programming comes second. You need to do some homework.
Prove that triangular numbers are of the form n*(n+1)/2. Trivial.
Prove that n and n+1 are coprime. Trivial.
Prove, or at least convince yourself, that d(n), the number of divisors of n, is multiplicative.
Combine this knowledge to come up with an efficient algorithm. You wouldn't need to actually compute the triangular numbers, and you would only need to factorize much smaller numbers; memoization would also avoid quite a few factorizations.
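A sketch of where those hints lead (my code, not the answerer's; countDivisors is a hypothetical helper doing plain trial division): since n and n+1 are coprime and d is multiplicative, d(T(n)) = d(n/2) * d(n+1) for even n and d(n) * d((n+1)/2) for odd n. Only numbers around n, never T(n) itself, get factorized, and a small memo map avoids factorizing each half twice:
fun countDivisors(x: Long): Int {
    // plain trial division up to sqrt(x)
    var count = 0
    var i = 1L
    while (i * i <= x) {
        if (x % i == 0L) count += if (i * i == x) 1 else 2
        i++
    }
    return count
}

fun main() {
    val cache = HashMap<Long, Int>()
    fun d(x: Long) = cache.getOrPut(x) { countDivisors(x) }
    var n = 1L
    while (true) {
        // d(T(n)) via the multiplicative split of n*(n+1)/2
        val divisors = if (n % 2 == 0L) d(n / 2) * d(n + 1) else d(n) * d((n + 1) / 2)
        if (divisors > 500) {
            println(n * (n + 1) / 2)    // the triangular number itself
            return
        }
        n++
    }
}
This should finish in well under a second.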

Proving that there are no overlapping sub-problems?

I just got the following interview question:
Given a list of float numbers, insert “+”, “-”, “*” or “/” between each consecutive pair of numbers to find the maximum value you can get. For simplicity, assume that all operators are of equal precedence order and evaluation happens from left to right.
Example:
(1, 12, 3) -> 1 + 12 * 3 = 39
If we built a recursive solution, we would find that we would get an O(4^N) solution. I tried to find overlapping sub-problems (to increase the efficiency of this algorithm) and wasn't able to find any. The interviewer then told me that there weren't any overlapping sub-solutions.
How can we detect when there are overlapping sub-problems and when there aren't? I spent a lot of time trying to "force" sub-solutions to appear, and eventually the interviewer told me that there weren't any.
My current solution looks as follows:
def maximumNumber(array, current_value=None):
    if current_value is None:
        current_value = array[0]
        array = array[1:]
    if len(array) == 0:
        return current_value
    return max(
        maximumNumber(array[1:], current_value * array[0]),
        maximumNumber(array[1:], current_value - array[0]),
        maximumNumber(array[1:], current_value / array[0]),
        maximumNumber(array[1:], current_value + array[0])
    )
Looking for "overlapping subproblems" sounds like you're trying to do bottom up dynamic programming. Don't bother with that in an interview. Write the obvious recursive solution. Then memoize. That's the top down approach. It is a lot easier to get working.
You may get challenged on that. Here was my response the last time that I was asked about that.
There are two approaches to dynamic programming, top down and bottom up. The bottom up approach usually uses less memory but is harder to write. Therefore I do the top down recursive/memoize and only go for the bottom up approach if I need the last ounce of performance.
It is a perfectly true answer, and I got hired.
Now you may notice that tutorials about dynamic programming spend more time on bottom up. They often even skip the top down approach. They do that because bottom up is harder. You have to think differently. It does provide more efficient algorithms because you can throw away parts of that data structure that you know you won't use again.
Coming up with a working solution in an interview is hard enough already. Don't make it harder on yourself than you need to.
EDIT Here is the DP solution that the interviewer thought didn't exist.
def find_best(floats):
    current_answers = {floats[0]: ()}
    floats = floats[1:]
    for f in floats:
        next_answers = {}
        for v, path in current_answers.items():
            next_answers[v + f] = (path, '+')
            next_answers[v * f] = (path, '*')
            next_answers[v - f] = (path, '-')
            if 0 != f:
                next_answers[v / f] = (path, '/')
        current_answers = next_answers
    best_val = max(current_answers.keys())
    return (best_val, current_answers[best_val])
Generally the overlapping-subproblems approach is one where the problem is broken down into smaller subproblems, the solutions to which, when combined, solve the big problem. When these subproblems exhibit optimal substructure, DP is a good way to solve it.
The decision about what you do with a new number that you encounter has little to do with the numbers you have already processed, other than accounting for signs, of course.
So I would say this is an overlapping-subproblems solution but not a dynamic programming problem. You could use divide and conquer or even more straightforward recursive methods.
Initially let's forget about negative floats.
Process each new float according to the following rules:
If the new float is less than 1, insert a / before it
If the new float is more than 1, insert a * before it
If it is 1, then insert a +.
If you see a zero, just don't divide or multiply
This would solve it for all positive floats.
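As a sketch, those positive-only rules transcribe almost directly into code (Kotlin; maxValuePositive is a made-up name, and correctness rests entirely on the rules as stated, i.e., the running total staying positive):
fun maxValuePositive(nums: List<Double>): Double {
    var acc = nums[0]
    for (f in nums.drop(1)) {
        acc = when {
            f == 0.0 || f == 1.0 -> acc + f   // zero or one: add (never divide/multiply by them)
            f < 1.0 -> acc / f                // dividing by 0 < f < 1 grows the total
            else -> acc * f                   // multiplying by f > 1 grows the total
        }
    }
    return acc
}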
Now let's handle the case of negative numbers thrown into the mix.
Scan the input once to figure out how many negative numbers you have.
Isolate all the negative numbers in a list, and convert all the numbers whose absolute value is less than 1 to their multiplicative inverse. Then sort them by magnitude. If you have an even number of elements, we are all good. If you have an odd number of elements, store the head of this list in a special var, say k, associate a processed flag with it, and set the flag to False.
Proceed as before with some updated rules
If you see a negative number less than 0 but more than -1, insert a / before it
If you see a negative number less than -1, insert a * before it
If you see the special var and the processed flag is False, insert a - before it. Set processed to True.
There is one more optimization you can perform, which is removing pairs of negative ones as candidates for blanket subtraction from our initial negative-numbers list, but this is just an edge case and I'm pretty sure your interviewer won't care.
Now the sum is only a function of the number you are adding and not the sum you are adding to :)
Computing max/min results for each operation from the previous step. Not sure about overall correctness.
Time complexity O(n), space complexity O(n)
const max_value = (nums) => {
  const ops = [(a, b) => a + b, (a, b) => a - b, (a, b) => a * b, (a, b) => a / b]
  const dp = Array.from({length: nums.length}, _ => [])
  dp[0] = Array.from({length: ops.length}, _ => [nums[0], nums[0]])
  for (let i = 1; i < nums.length; i++) {
    for (let j = 0; j < ops.length; j++) {
      // If the current number is zero, division is not a usable op at this step;
      // mark the state unreachable instead of mutating ops/dp mid-loop
      if (nums[i] === 0 && j === 3) {
        dp[i].push([-Infinity, -Infinity])
        continue
      }
      let mx = -Infinity
      let mn = Infinity
      for (let k = 0; k < ops.length; k++) {
        if (!isFinite(dp[i - 1][k][0])) continue // skip unreachable states
        const opMax = ops[j](dp[i - 1][k][0], nums[i])
        const opMin = ops[j](dp[i - 1][k][1], nums[i])
        mx = Math.max(opMax, opMin, mx)
        mn = Math.min(opMax, opMin, mn)
      }
      dp[i].push([mx, mn])
    }
  }
  return Math.max(...dp[nums.length - 1].map(v => Math.max(...v)))
}

// Tests
console.log(max_value([1, 12, 3]))
console.log(max_value([1, 0, 3]))
console.log(max_value([17, -34, 2, -1, 3, -4, 5, 6, 7, 1, 2, 3, -5, -7]))
console.log(max_value([59, 60, -0.000001]))
console.log(max_value([0, 1, -0.0001, -1.00000001]))

Divide N cakes among M people with minimum waste

So here is the question:
At a party there are n different-flavored cakes of volumes V1, V2, V3 ... Vn. We need to divide them among the K people present at the party such that:
All members of the party get an equal volume of cake (say V, which is the solution we are looking for)
Each member gets cake of a single flavor only (you cannot distribute parts of different-flavored cakes to one member).
Some volume of cake will be wasted after distribution; we want to minimize the waste, or, equivalently, we are after a maximum-distribution policy.
A known condition is that if V is an optimal solution, then at least one cake, X, can be divided by V without any volume left over, i.e., Vx mod V == 0.
I am trying to look for a solution with best time complexity (brute force will do it, but I need a quicker way).
Any suggestion would be appreciated.
Thanks
PS: It is not an assignment, it is an interview question. Here is the pseudocode for the brute force:
int return_Max_volume(List volumeList)
{
    maxVolume = 0;
    minimaxLeft = Integer.MAX_VALUE;
    for (Volume v : volumeList)
        for (i = 1 to K people)
            targetVolume = v / i;
            numberOfPeopleWhoCanGetCake = v1/targetVolume +
                v2/targetVolume + ... + vn/targetVolume
            if (numberOfPeopleWhoCanGetCake >= k)
                remainVolume = (v1 mod targetVolume) + (v2 mod targetVolume)
                    + (v3 mod targetVolume) + ... + (vn mod targetVolume)
                if (remainVolume < minimaxLeft)
                    update maxVolume to be targetVolume;
                    update minimaxLeft to be remainVolume
    return maxVolume
}
This is a somewhat classic programming-contest problem.
The answer is simple: do a basic binary search on volume V (the final answer).
(Note the title says M people, yet the problem description says K. I'll be using M)
Given a volume V during the search, you iterate through all of the cakes, calculating how many people each cake can "feed" with single-flavor slices (fed += floor(Vi/V)). If you reach M (or 'K') people "fed" before you're out of cakes, this means you can obviously also feed M people with any volume < V with whole single-flavor slices, by simply consuming the same amount of (smaller) slices from each cake. If you run out of cakes before reaching M slices, it means you cannot feed the people with any volume > V either, as that would consume even more cake than what you've already failed with. This satisfies the condition for a binary search, which will lead you to the highest volume V of single-flavor slices that can be given to M people.
The complexity is O(n * log((sum(Vi)/m)/eps) ). Breakdown: the binary search takes log((sum(Vi)/m)/eps) iterations, considering the upper bound of sum(Vi)/m cake for each person (when all the cakes get consumed perfectly). At each iteration, you have to pass through at most all N cakes. eps is the precision of your search and should be set low enough, no higher than the minimum non-zero difference between the volume of two cakes, divided by M*2, so as to guarantee a correct answer. Usually you can just set it to an absolute precision such as 1e-6 or 1e-9.
To speed things up for the average case, you should sort the cakes in decreasing order, such that when you are trying a large volume, you instantly discard all the trailing cakes with total volume < V (e.g. you have one cake of volume 10^6 followed by a bunch of cakes of volume 1.0. If you're testing a slice volume of 2.0, as soon as you reach the first cake of volume 1.0 you can already return that this run failed to provide M slices)
Edit:
The search is actually done with floating point numbers, e.g.:
double mid, lo = 0, hi = sum(Vi)/people;
while (hi - lo > eps) {
    mid = (lo + hi) / 2;
    if (works(mid)) lo = mid;
    else hi = mid;
}
final_V = lo;
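The works(mid) feasibility check referenced above is not spelled out; here is a minimal sketch of it in Kotlin (volumes and m are assumed parameters):
fun works(volumes: DoubleArray, m: Int, v: Double): Boolean {
    var fed = 0
    for (vi in volumes) {
        fed += (vi / v).toInt()     // whole single-flavor slices this cake yields
        if (fed >= m) return true   // early exit once everyone can be fed
    }
    return false
}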
By the end, if you really need more precision than your chosen eps, you can simply take an extra O(n) step:
// (this step is exclusively to retrieve an exact answer from the final
// answer above, if a precision of 'eps' is not acceptable)
foreach (cake_volume vi) {
    int slices = round(vi / final_V);
    double difference = abs(vi - (final_V * slices));
    if (difference < best) {
        best = difference;
        volume = vi;
        denominator = slices;
    }
}
// exact answer is volume/denominator
Here's the approach I would consider:
Let's assume that all of our cakes are sorted in the order of non-decreasing size, meaning that Vn is the largest cake and V1 is the smallest cake.
Generate the first intermediate solution by dividing only the largest cake between all k people. I.e. V = Vn / k.
Immediately discard all cakes that are smaller than V - any intermediate solution that involves these cakes is guaranteed to be worse than our intermediate solution from step 1. Now we are left with cakes Vb, ..., Vn, where b is greater or equal to 1.
If all cakes got discarded except the biggest one, then we are done. V is the solution. END.
Since we have more than one cake left, let's improve our intermediate solution by redistributing some of the slices to the second biggest cake Vn-1, i.e. find the biggest value of V so that floor(Vn / V) + floor(Vn-1 / V) = k. This can be done by performing a binary search between the current value of V and the upper limit (Vn + Vn-1) / k, or by something more clever.
Again, just like we did on step 2, immediately discard all cakes that are smaller than V - any intermediate solution that involves these cakes is guaranteed to be worse than our intermediate solution from step 4.
If all cakes got discarded except the two biggest ones, then we are done. V is the solution. END.
Continue to involve the new "big" cakes in right-to-left direction, improve the intermediate solution, and continue to discard "small" cakes in left-to-right direction until all remaining cakes get used up.
P.S. The complexity of step 4 seems to be equivalent to the complexity of the entire problem, meaning that the above can be seen as an optimization approach, but not a real solution. Oh well, for what it is worth... :)
Here's one approach to a more efficient solution. Your brute force solution in essence generates an implicit list of possible volumes, filters them by feasibility, and returns the largest. We can modify it slightly to materialize the list and sort it so that the first feasible solution found is the largest.
First task for you: find a way to produce the sorted list on demand. In other words, we should do O(n + m log n) work to generate the first m items.
Now, let's assume that the volumes appearing in the list are pairwise distinct. (We can remove this assumption later.) There's an interesting fact about how many people are served by the volume at position k. For example, with volumes 11, 13, 17 and 7 people, the list is 17, 13, 11, 17/2, 13/2, 17/3, 11/2, 13/3, 17/4, 11/3, 17/5, 13/4, 17/6, 11/4, 13/5, 17/7, 11/5, 13/6, 13/7, 11/6, 11/7.
Second task for you: simulate the brute force algorithm on this list. Exploit what you notice.
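One possible shape for the first task (a sketch, not necessarily what the answerer has in mind): a max-heap lazily merges the n descending sequences Vi/1, Vi/2, ..., Vi/k, so producing the first m candidates costs O(n + m log n):
import java.util.PriorityQueue

fun candidateVolumes(volumes: DoubleArray, k: Int): Sequence<Double> = sequence {
    // heap entries: (current fraction Vi/j, cake index i, divisor j)
    val cmp = compareByDescending<Triple<Double, Int, Int>> { it.first }
    val heap = PriorityQueue(cmp)
    for (i in volumes.indices) heap.add(Triple(volumes[i], i, 1))
    while (heap.isNotEmpty()) {
        val (v, i, j) = heap.poll()
        yield(v)                    // next largest candidate volume
        if (j < k) heap.add(Triple(volumes[i] / (j + 1), i, j + 1))
    }
}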
So here is the algorithm I thought would work:
Sort the volumes from largest to smallest.
Divide the largest cake among 1...k people, i.e., target = volume[0]/i, where i = 1,2,3,4,...,k
If target would lead to a total number of pieces greater than k, decrease the number i and try again.
Find the first number i that results in a total number of pieces greater than or equal to k, while (i-1) leads to a total number of pieces less than k. Record this volume as baseVolume.
For each remaining cake, find the smallest fraction of remaining volume divided by the number of people it serves, i.e., division = (V_cake - (baseVolume * Math.floor(V_cake/baseVolume))) / Math.floor(V_cake/baseVolume)
Add this amount to baseVolume (baseVolume += division) and recalculate the total number of pieces all volumes could be divided into. If the new volume results in fewer pieces, return the previous value; otherwise, repeat this step.
Here is the Java code:
public static int getKonLagestCake(Integer[] sortedVolumesList, int k) {
    int result = 0;
    for (int i = k; i >= 1; i--) {
        double volumeDividedByLargestCake = (double) sortedVolumesList[0] / i;
        int totalNumber = totalNumberofCakeWithGivenVolumn(
                sortedVolumesList, volumeDividedByLargestCake);
        if (totalNumber < k) {
            result = i + 1;
            break;
        }
    }
    return result;
}

public static int totalNumberofCakeWithGivenVolumn(
        Integer[] sortedVolumnsList, double givenVolumn) {
    int totalNumber = 0;
    for (int volume : sortedVolumnsList) {
        totalNumber += (int) (volume / givenVolumn);
    }
    return totalNumber;
}

public static double getMaxVolume(int[] volumesList, int k) {
    List<Integer> list = new ArrayList<Integer>();
    for (int i : volumesList) {
        list.add(i);
    }
    Collections.sort(list, Collections.reverseOrder());
    Integer[] sortedVolumesList = new Integer[list.size()];
    list.toArray(sortedVolumesList);
    int previousValidK = getKonLagestCake(sortedVolumesList, k);
    double baseVolume = (double) sortedVolumesList[0] / (double) previousValidK;
    int totalNumberofCakeAvailable = totalNumberofCakeWithGivenVolumn(sortedVolumesList, baseVolume);
    if (totalNumberofCakeAvailable == k) {
        return baseVolume;
    } else {
        do {
            double minimumAmountAdded = minimumAmountAdded(sortedVolumesList, baseVolume);
            if (minimumAmountAdded == 0) {
                return baseVolume;
            } else {
                baseVolume += minimumAmountAdded;
                int newTotalNumber = totalNumberofCakeWithGivenVolumn(sortedVolumesList, baseVolume);
                if (newTotalNumber == k) {
                    return baseVolume;
                } else if (newTotalNumber < k) {
                    return (baseVolume - minimumAmountAdded);
                } else {
                    continue;
                }
            }
        } while (true);
    }
}

public static double minimumAmountAdded(Integer[] sortedVolumesList, double volume)
{
    double mimumAdded = Double.MAX_VALUE;
    for (Integer i : sortedVolumesList)
    {
        int assignedPeople = (int) (i / volume);
        if (assignedPeople == 0)
        {
            continue;
        }
        double leftPiece = (double) i - assignedPeople * volume;
        if (leftPiece == 0)
        {
            continue;
        }
        double division = leftPiece / (double) assignedPeople;
        if (division < mimumAdded)
        {
            mimumAdded = division;
        }
    }
    if (mimumAdded == Double.MAX_VALUE)
    {
        return 0;
    }
    else
    {
        return mimumAdded;
    }
}
Any Comments would be appreciated.
Thanks

Lazy Shuffle Algorithms

I have a large list of elements that I want to iterate in random order. However, I cannot modify the list and I don't want to create a copy of it either, because 1) it is large and 2) it can be expected that the iteration is cancelled early.
List<T> data = ...;
Iterator<T> shuffled = shuffle(data);
while (shuffled.hasNext()) {
    T t = shuffled.next();
    if (System.console().readLine("Do you want %s?", t).startsWith("y")) {
        return t;
    }
}
System.out.println("That's all");
return null; // nothing was accepted
I am looking for an algorithm where the code above would run in O(n) (and preferably require only O(log n) space), so caching the elements that were produced earlier is not an option. I don't care if the algorithm is biased (as long as it's not obvious).
(I used pseudo-Java in my question, but you can use other languages if you wish.)
Here is the best I got so far.
Iterator<T> shuffle(final List<T> data) {
    int p = data.size();
    while ((data.size() % p) == 0) p = randomPrime();
    final int prime = p;
    return new Iterator<T>() {
        int n = 0, i = 0;
        public boolean hasNext() { return i < data.size(); }
        public T next() {
            i++;
            n = (n + prime) % data.size(); // wrap around so the index stays in range
            return data.get(n);
        }
    };
}
Iterating all elements in O(n), constant space, but obviously biased as it can produce only data.size() permutations.
The easiest shuffling approaches I know of work with indices. If the List is not an ArrayList, you may end up with a very inefficient algorithm if you try to use one of the approaches below (a LinkedList does have a get by index, but it's O(n), so you'll end up with O(n^2) time).
If O(n) space is fine, which I'm assuming it's not, I'd recommend the Fisher-Yates / Knuth shuffle: it's O(n) time and easy to implement. You can optimise it so you only need to perform a single operation before being able to get the first element, but you'll need to keep track of the rest of the modified list as you go.
My solution:
Ok, so this is not very random at all, but I can't see a better way if you want less than O(n) space.
It takes O(1) space and O(n) time.
There may be a way to push the space usage up a little and get more random results, but I haven't figured that out yet.
It has to do with relative primes. The idea is that, given two relatively prime numbers a (the generator) and b, when you loop through a % b, 2a % b, 3a % b, 4a % b, ..., you will see every integer 0, 1, 2, ..., b-2, b-1 before seeing any integer twice. Unfortunately I don't have a link to a proof (the wikipedia link may mention or imply it; I didn't check in too much detail).
I start off by increasing the length until we get a prime, since this implies that any other number will be a relative prime, which is a whole lot less limiting (I just skip any number greater than the original length); then I generate a random number and use it as the generator.
I'm iterating through and printing out all the values, but it should be easy enough to modify it to generate the next value given the current one.
Note that I'm skipping 1 and len-1 with my nextInt, since these will produce 1,2,3,... and ...,3,2,1 respectively, but you can include them, though probably not if the length is below a certain threshold.
You may also want to generate a random number to multiply the generator by (mod the length) to start from.
Java code:
static Random gen = new Random();

static void printShuffle(int len)
{
    // get first prime >= len
    int newLen = len - 1;
    boolean prime;
    do
    {
        newLen++;
        // prime check
        prime = true;
        for (int i = 2; prime && i < len; i++)
            prime &= (newLen % i != 0);
    }
    while (!prime);

    long val = gen.nextInt(len - 3) + 2;
    long oldVal = val;
    do
    {
        if (val < len)
            System.out.println(val);
        val = (val + oldVal) % newLen;
    }
    while (oldVal != val);
}
This is an old thread, but in case anyone comes across this in future, a paper by Andrew Kensler describes a way to do this in constant time and constant space. Essentially, you create a reversible hash function, and then use it (and not an array) to index the list. Kensler describes a method for generating the necessary function, and discusses "cycle walking" as a way to deal with a domain that is not identical to the domain of the hash function. Afnan Enayet's summary of the paper is here: https://afnan.io/posts/2019-04-05-explaining-the-hashed-permutation/.
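For a flavor of the construction, here is a rough Kotlin sketch (my own simplification, not Kensler's exact function): a few Feistel rounds over a power-of-two domain give a reversible hash, and cycle walking re-applies it until the value lands back inside [0, n). Indexing the list through it yields a permutation with O(1) space and no mutation:
import kotlin.random.Random

class HashedPermutation(private val n: Int, seed: Long) {   // assumes n < 2^30
    private var halfBits = 1
    init { while ((1 shl (2 * halfBits)) < n) halfBits++ }  // domain 2^(2*halfBits) >= n
    private val mask = (1 shl halfBits) - 1
    private val keys = LongArray(4).also { ks ->
        val rng = Random(seed)
        for (i in ks.indices) ks[i] = rng.nextLong()
    }
    // cheap keyed mixing on halfBits bits (the constant is 2^64/phi as a signed Long)
    private fun f(x: Int, key: Long): Int =
        (((x + key) * -0x61c8864680b583ebL) ushr 37).toInt() and mask
    private fun permuteOnce(v: Int): Int {
        var left = (v ushr halfBits) and mask
        var right = v and mask
        for (key in keys) {                 // Feistel rounds: reversible by construction
            val t = left xor f(right, key)
            left = right
            right = t
        }
        return (left shl halfBits) or right
    }
    operator fun get(i: Int): Int {         // i-th element of a fixed permutation of 0..n-1
        var v = i
        do { v = permuteOnce(v) } while (v >= n)   // cycle walking
        return v
    }
}
An iterator can then simply return data.get(perm[i]) for i = 0..n-1.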
You may try using a buffer to do this. Iterate through a limited set of the data and put it in a buffer. Extract random values from that buffer and send them to the output (or wherever you need them). Iterate through the next set and keep overwriting this buffer. Repeat this step.
You'll end up with n + n operations, which is still O(n). Unfortunately, the result will not be truly random; it will be close to random if you choose your buffer size properly.
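A sketch of that buffer idea in Kotlin (sometimes called a shuffle buffer; bufSize trades memory for how random the order looks):
import kotlin.random.Random

fun <T> shuffleBuffer(data: Iterable<T>, bufSize: Int, rng: Random = Random): Sequence<T> = sequence {
    val buf = ArrayList<T>(bufSize)
    for (t in data) {
        if (buf.size < bufSize) { buf.add(t); continue }  // fill the buffer first
        val j = rng.nextInt(buf.size)
        yield(buf[j])          // emit a random buffered element...
        buf[j] = t             // ...and overwrite it with the incoming one
    }
    buf.shuffle(rng)           // drain whatever is left
    yieldAll(buf)
}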
On a different note, check these two: Python - run through a loop in non linear fashion, random iteration in Python
Perhaps there's a more elegant algorithm to do this better. I'm not sure though. Looking forward to other replies in this thread.
This is not a perfect answer to your question, but perhaps it's useful.
The idea is to use a reversible random number generator and the usual array-based shuffling algorithm done lazily: to get the i'th shuffled item, swap a[i] with a randomly chosen a[j] where j is in [i..n-1], then return a[i]. This can be done in the iterator.
After you are done iterating, reset the array to its original order by "unswapping", using the reverse direction of the RNG.
The unshuffling reset will never take longer than the original iteration, so it doesn't change the asymptotic cost. Iteration is still linear in the number of iterations.
How to build a reversible RNG? Just use an encryption algorithm. Encrypt the previously generated pseudo-random value to go forward, and decrypt to go backward. If you have a symmetric encryption algorithm, then you can add a "salt" value at each step forward to prevent a cycle of two and subtract it for each step backward. I mention this because RC4 is simple and fast and symmetric. I've used it before for tasks like this. Encrypting 4-byte values then computing mod to get them in the desired range will be quick indeed.
You can press this into the Java iterator pattern by extending Iterator to allow resets. See below. Usage will look like:
ShuffledList<Integer> lst = new ShuffledList<>();
... build the list with the usual operations
ResettableIterator<Integer> i = lst.iterator();
while (i.hasNext()) {
    int val = i.next();
    ... use the randomly selected value
    if (anyConditionAtAll) break;
}
i.reset(); // Unshuffle the array
I know this isn't perfect, but it will be fast and give a good shuffle. Note that if you don't reset, the next iterator will still be a new random shuffle, but the original order will be lost forever. If the loop body can generate an exception, you'd want the reset in a finally block.
class ShuffledList<T> extends ArrayList<T> implements Iterable<T> {

    @Override
    public Iterator<T> iterator() {
        return null;
    }

    public interface ResettableIterator<T> extends Iterator<T> {
        public void reset();
    }

    class ShufflingIterator<T> implements ResettableIterator<T> {
        int mark = 0;

        @Override
        public boolean hasNext() {
            return true;
        }

        @Override
        public T next() {
            return null;
        }

        @Override
        public void remove() {
            throw new UnsupportedOperationException("Not supported.");
        }

        @Override
        public void reset() {
            throw new UnsupportedOperationException("Not supported yet.");
        }
    }
}

Dynamic programming - Coin change decision

I'm reviewing some old notes from my algorithms course and the dynamic programming problems are seeming a bit tricky to me. I have a problem where we have an unlimited supply of coins, with some denominations x1, x2, ... xn and we want to make change for some value X. We are trying to design a dynamic program to decide whether change for X can be made or not (not minimizing the number of coins, or returning which coins, just true or false).
I've done some thinking about this problem, and I can see a recursive method of doing this where it's something like...
MakeChange(X, x[1..n this is the coins])
    for (int i = 1; i < n; i++)
    {
        if ( (X - x[i] == 0) || MakeChange(X - x[i]) )
            return true;
    }
    return false;
Converting this to a dynamic program is not coming so easily to me. How might I approach this?
Your code is a good start. The usual way to convert a recursive solution to a dynamic-programming one is to do it "bottom-up" instead of "top-down". That is, if your recursive solution calculates something for a particular X using values for smaller x, then instead calculate the same thing starting at smaller x, and put it in a table.
In your case, change your MakeChange recursive function into a canMakeChange table.
canMakeChange[0] = True
for X = 1 to (your max value):
    canMakeChange[X] = False
    for i = 1 to n:
        if X >= x[i] and canMakeChange[X - x[i]] == True:
            canMakeChange[X] = True
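For concreteness, here is the same table sketched in Kotlin:
fun canMakeChange(coins: IntArray, target: Int): Boolean {
    val can = BooleanArray(target + 1)
    can[0] = true                                      // zero is always makeable
    for (x in 1..target)
        for (c in coins)
            if (x >= c && can[x - c]) { can[x] = true; break }
    return can[target]
}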
My solution below is a greedy approach, calculating all the solutions and caching the latest optimal one. If the currently executing solution is already as long as the cached solution, abort the path. Note: for best performance, denominations should be in decreasing order.
import java.util.ArrayList;
import java.util.List;

public class CoinDenomination {

    int denomination[] = new int[]{50, 33, 21, 2, 1};
    int minCoins = Integer.MAX_VALUE;
    String path;

    class Node {
        public int coinValue;
        public int amtRemaining;
        public int solutionLength;
        public String path = "";
        public List<Node> next;

        public String toString() { return "C: " + coinValue + " A: " + amtRemaining + " S:" + solutionLength; }
    }

    public List<Node> build(Node node)
    {
        if (node.amtRemaining == 0)
        {
            if (minCoins > node.solutionLength) {
                minCoins = node.solutionLength;
                path = node.path;
            }
            return null;
        }
        if (node.solutionLength == minCoins) return null;
        List<Node> nodes = new ArrayList<Node>();
        for (int deno : denomination)
        {
            if (node.amtRemaining >= deno)
            {
                Node nextNode = new Node();
                nextNode.amtRemaining = node.amtRemaining - deno;
                nextNode.coinValue = deno;
                nextNode.solutionLength = node.solutionLength + 1;
                nextNode.path = node.path + "->" + deno;
                System.out.println(node);
                nextNode.next = build(nextNode);
                nodes.add(nextNode); // add the child node, not the current one
            }
        }
        return nodes;
    }

    public void start(int value)
    {
        Node root = new Node();
        root.amtRemaining = value;
        root.solutionLength = 0;
        root.path = "start";
        root.next = build(root);
        System.out.println("Smallest solution of coins count: " + minCoins + " \nCoins: " + path);
    }

    public static void main(String args[])
    {
        CoinDenomination coin = new CoinDenomination();
        coin.start(35);
    }
}
Just add a memoization step to the recursive solution, and the dynamic algorithm falls right out of it. The following example is in Python:
cache = {}

def makeChange(amount, coins):
    # note: coins must be a tuple (or other hashable) for the cache key to work
    if (amount, coins) in cache:
        return cache[amount, coins]
    if amount == 0:
        ret = True
    elif not coins:
        ret = False
    elif amount < 0:
        ret = False
    else:
        ret = makeChange(amount - coins[0], coins) or makeChange(amount, coins[1:])
    cache[amount, coins] = ret
    return ret
Of course, you could use a decorator to auto-memoize, leading to more natural code:
def memoize(f):
    cache = {}
    def ret(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return ret

@memoize
def makeChange(amount, coins):
    if amount == 0:
        return True
    elif not coins:
        return False
    elif amount < 0:
        return False
    return makeChange(amount - coins[0], coins) or makeChange(amount, coins[1:])
Note: even the non-dynamic-programming version you posted had all kinds of edge-case bugs, which is why the makeChange above is slightly longer than yours.
This paper is very relevant: http://ecommons.library.cornell.edu/handle/1813/6219
Basically, as others have said, making optimal change totaling an arbitrary X with arbitrary denomination sets is NP-hard, meaning dynamic programming won't yield a timely algorithm. This paper proposes a polynomial-time algorithm (that is, polynomial in the size of the input, which is an improvement upon previous algorithms) for determining whether the greedy algorithm always produces optimal results for a given set of denominations.
Here is a C# version, just for reference, that finds the minimal number of coins required for a given sum
(one may refer to my blog at http://codingworkout.blogspot.com/2014/08/coin-change-subset-sum-problem-with.html for more details):
public int DP_CoinChange_GetMinimalDemoninations(int[] coins, int sum)
{
    coins.ThrowIfNull("coins");
    coins.Throw("coins", c => c.Length == 0 || c.Any(ci => ci <= 0));
    sum.Throw("sum", s => s <= 0);
    int[][] DP_Cache = new int[coins.Length + 1][];
    for (int i = 0; i <= coins.Length; i++)
    {
        DP_Cache[i] = new int[sum + 1];
    }
    for (int i = 1; i <= coins.Length; i++)
    {
        for (int s = 0; s <= sum; s++)
        {
            if (coins[i - 1] == s)
            {
                // ok, we can reach this sum using just the current coin,
                // so assign 1; no need to process further
                DP_Cache[i][s] = 1;
            }
            else
            {
                // initialize with the value that excludes the current coin
                int minNoOfCoinsWithoutUsingCurrentCoin_I = DP_Cache[i - 1][s];
                DP_Cache[i][s] = minNoOfCoinsWithoutUsingCurrentCoin_I;
                if ((s > coins[i - 1]) // current coin can participate
                    && (DP_Cache[i][s - coins[i - 1]] != 0))
                {
                    int noOfCoinsUsedIncludingCurrentCoin_I =
                        DP_Cache[i][s - coins[i - 1]] + 1;
                    if (minNoOfCoinsWithoutUsingCurrentCoin_I == 0)
                    {
                        // so far we couldn't identify coins that sum to 's'
                        DP_Cache[i][s] = noOfCoinsUsedIncludingCurrentCoin_I;
                    }
                    else
                    {
                        int min = this.Min(noOfCoinsUsedIncludingCurrentCoin_I,
                            minNoOfCoinsWithoutUsingCurrentCoin_I);
                        DP_Cache[i][s] = min;
                    }
                }
            }
        }
    }
    return DP_Cache[coins.Length][sum];
}
In the general case, where coin values can be arbitrary, the problem you are presenting is called the Knapsack Problem, and it is known to be NP-complete (Pearson, D. 2004), and therefore not solvable in polynomial time by techniques such as dynamic programming.
Take the pathological example of x[2] = 51, x[1] = 50, x[0] = 1, X = 100. Then the algorithm must 'consider' the possibilities of making change with coin x[2], and alternatively making change beginning with x[1]. The first step used with national coinage, otherwise known as the Greedy Algorithm -- to wit, "use the largest coin less than the working total" -- will not work with pathological coinages. Instead, such algorithms experience a combinatorial explosion that qualifies them as NP-complete.
For certain special coin value arrangements, such as practically all those in actual use, and including the fictitious system X[i+1] == 2 * X[i], there are very fast algorithms, even O(1) in the fictitious case given, for determining the best output. These algorithms exploit properties of the coin values.
I am not aware of a dynamic programming solution: one which takes advantage of optimal sub-solutions as required by the programming motif. In general a problem can only be solved by dynamic programming if it can be decomposed into sub-problems which, when optimally solved, can be re-composed into a solution which is provably optimal. That is, if the programmer cannot mathematically demonstrate ("prove") that re-composing optimal sub-solutions of the problem results in an optimal solution, then dynamic programming cannot be applied.
An example commonly given of dynamic programming is an application to multiplying several matrices. Depending on the size of the matrices, the choice to evaluate A·B·C as either of the two equivalent forms: ((A·B)·C) or (A·(B·C)) leads to the computations of different quantities of multiplications and additions. That is, one method is more optimal (faster) than the other method. Dynamic programming is a motif which tabulates the computational costs of different methods, and performs the actual calculations according to a schedule (or program) computed dynamically at run-time.
A key feature is that computations are performed according to the computed schedule and not by an enumeration of all possible combinations -- whether the enumeration is performed recursively or iteratively. In the example of multiplying matrices, at each step, only the least-cost multiplication is chosen. As a result, the possible costs of intermediate-cost sub-optimal schedules are never calculated. In other words, the schedule is not calculated by searching all possible schedules for the optimal, but rather by incrementally building an optimal schedule from nothing.
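To make the matrix-chain example concrete, here is the standard cost table in Kotlin (a textbook sketch, separate from the coin problem): dims[i] x dims[i+1] is the shape of matrix i, and m[i][j] holds the cheapest way to multiply matrices i..j, built from optimal sub-solutions exactly as described above:
fun matrixChainCost(dims: IntArray): Long {
    val n = dims.size - 1                       // number of matrices in the chain
    val m = Array(n) { LongArray(n) }           // m[i][j] = min scalar multiplications for i..j
    for (len in 2..n) {                         // chain length, shortest first
        for (i in 0..n - len) {
            val j = i + len - 1
            m[i][j] = Long.MAX_VALUE
            for (k in i until j) {              // try every split point
                val cost = m[i][k] + m[k + 1][j] +
                        dims[i].toLong() * dims[k + 1] * dims[j + 1]
                if (cost < m[i][j]) m[i][j] = cost
            }
        }
    }
    return m[0][n - 1]
}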
The nomenclature 'dynamic programming' may be compared with 'linear programming' in which 'program' is also used in the sense meaning 'to schedule.'
To learn more about dynamic programming, consult the greatest book on algorithms yet known to me, "Introduction to Algorithms" by Cormen, Leiserson, Rivest, and Stein. "Rivest" is the 'R' of "RSA" and dynamic programming is but one chapter of scores.
If you write it in a recursive way, that is fine; just use memory-based search. You have to store what you have calculated, so it will not be calculated again.
int memory[X + 1]; // initialize to -1, which means "hasn't been calculated";
                   // key the memo by the remaining amount, not the coin index

MakeChange(X, x[1..n this is the coins]) {
    if (X == 0) return true;
    if (X < 0) return false;
    if (memory[X] != -1) return memory[X] == 1;
    for (int i = 1; i <= n; i++) {
        if (MakeChange(X - x[i])) {
            memory[X] = 1;
            return true;
        }
    }
    memory[X] = 0;
    return false;
}
