Coin change memoization - algorithm

Does the following algorithm to find all possible ways of making change for a particular sum really use memoization?
func count( n, m )
    for i from 0 to n
        for j from 0 to m
            if i equals 0
                table[i, j] = 1
            else if j equals 0
                table[i, j] = 0
            else if S_j greater than i
                table[i, j] = table[i, j - 1]
            else
                table[i, j] = table[i - S_j, j] + table[i, j - 1]
    return table[n, m]
Each time the function count is called, it starts filling the table from scratch. Even if the table's already been initialized for certain values, the next time count is called, it won't use these values, but will start again from i = 0 and j = 0.

This is not Memoization. This is an example of Dynamic Programming code.
In order to analyze your code, we first need to distinguish between Memoization and Dynamic Programming.
Memoization is a Top Down approach, whereas Dynamic Programming is a Bottom Up approach.
Consider the problem of finding the factorial of a number n.
If you are finding n! by using the following facts,
n! = n * (n-1)! and 0!=1
this is an example of a top-down approach.
The value of n is kept in memory until the values of 0! to (n-1)! are returned. The disadvantage is that you use a lot of stack memory. The advantage is that you don't have to recalculate subproblems if they are already solved; the solutions to subproblems are stored in memory.
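As a minimal illustration of that idea (my own sketch, not part of the original question), a memoized top-down factorial looks like this:

    #include <vector>

    // Top-down factorial with memoization: each k! is computed at most once,
    // and later calls that share the same memo reuse the stored answers.
    long long factorial(int n, std::vector<long long>& memo) {
        if (n == 0) return 1;                       // base case: 0! = 1
        if (memo[n] != -1) return memo[n];          // reuse a previously stored value
        return memo[n] = n * factorial(n - 1, memo); // n! = n * (n-1)!
    }

Called as factorial(n, memo) with memo initialized to -1, the recursion keeps n on the stack until the smaller subproblems return, exactly as described above.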
But your code doesn't take a top-down approach, hence there is no memoization.
Every entry in the table is obtained directly from previously calculated subproblem solutions; therefore it uses a bottom-up approach. Hence you have a piece of code that uses dynamic programming.
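For contrast, here is a hedged top-down sketch of the same count (the naming is mine; it assumes S[1..m] holds the coin values, as in the pseudocode, and memo entries start at -1). Each table entry has the same meaning as before, the number of ways to form sum i with the first j coins, but it is only filled when the recursion asks for it, which is what memoization means:

    #include <vector>

    // count(i, j) = number of ways to make sum i using the first j coin values S[1..j].
    // memo[i][j] == -1 means "not computed yet".
    long long countWays(int i, int j, const std::vector<int>& S,
                        std::vector<std::vector<long long>>& memo) {
        if (i == 0) return 1;                 // one way to make sum 0: take no coins
        if (j == 0 || i < 0) return 0;        // no coins left, or we overshot the sum
        if (memo[i][j] != -1) return memo[i][j];           // memoization: reuse the stored answer
        long long without = countWays(i, j - 1, S, memo);  // skip coin S[j] entirely
        long long with    = countWays(i - S[j], j, S, memo); // use coin S[j] (possibly again)
        return memo[i][j] = without + with;
    }

Unlike the bottom-up table, repeated calls with the same memo do not start from scratch: previously computed entries are simply returned.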

Related

Prove that merge sort outputs a permutation of the input

I am beginning to study Computational Logic, and as an exercise, I want to prove the correctness of merge sort algorithm.
Currently, I'm having difficulty proving that the output of this algorithm will always correspond to a permutation of a given input.
I’d be very glad if someone can assist me with this.
Thank you very much 😄
The core of this proof will need to show that the "merge" procedure inserts each element once and only once into the result. Since the merge procedure works using a loop, you need to use a loop invariant to show this.
Loop invariants can usually be discovered by asking, "what do I know halfway through the loop?"
to merge arrays A and B:
    let n = length of A, m = length of B
    let R = new array of length (n + m)
    let i = 0, j = 0
    while i < n or j < m:
        if i < n and (j == m or A[i] <= B[j]):
            R[i+j] = A[i]
            i = i + 1
        else:
            R[i+j] = B[j]
            j = j + 1
    return R
In this loop, we always know that the first i+j elements of R are some permutation of the first i elements of A and the first j elements of B. That's the loop invariant, so you need to show that:
This is true before the loop starts (when i = j = 0).
If this is true before an iteration of the loop, then it remains true after that iteration, i.e. the invariant is preserved.
If this is true when the loop terminates (when i = n and j = m), then the array R has the required property.
In general, the hard parts of a proof like this are discovering the loop invariant, and showing that the invariant is preserved by each iteration of the loop.
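To make the invariant concrete, here is a small C++ version of the merge above (purely illustrative, my own code) that asserts the invariant at the top of every iteration by comparing multisets:

    #include <cassert>
    #include <set>
    #include <vector>

    // Merge two sorted vectors A and B while checking the loop invariant:
    // the first i+j entries of R are a permutation of A[0..i) together with B[0..j).
    std::vector<int> merge(const std::vector<int>& A, const std::vector<int>& B) {
        std::size_t n = A.size(), m = B.size(), i = 0, j = 0;
        std::vector<int> R(n + m);
        while (i < n || j < m) {
            std::multiset<int> written(R.begin(), R.begin() + (i + j));
            std::multiset<int> consumed(A.begin(), A.begin() + i);
            consumed.insert(B.begin(), B.begin() + j);
            assert(written == consumed);     // the invariant holds before each iteration
            if (i < n && (j == m || A[i] <= B[j])) {
                R[i + j] = A[i]; ++i;        // take the next element from A
            } else {
                R[i + j] = B[j]; ++j;        // take the next element from B
            }
        }
        // On exit i == n and j == m, so R is a permutation of all of A and B.
        return R;
    }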
What are the preconditions of merge sort? What are the postconditions? Do you have any loop invariants?
These are the three questions you have to ask yourself before you can start writing your proof.
Then: what are your base cases? Presumably you know how merge sort works if you are working on a proof, so what happens when you have an array of length 1 passed to the mergesort function? What is the postcondition there?
Here's a decent primer from Berkeley on how to prove the correctness of a function. It might take some discrete math (induction) to write the proof.

Is there a way of making a simple change to this sorting algorithm so that it runs faster than quadratic time?

I have the following sorting algorithm which runs in O(n²):
function sortingArray (U, n)
    // U is the array and n is the size of the array
    // 1. First create a temp array and initialize it to 0
    for i = 0 to n - 1
        Temp[i] = 0
    // 2. This is where it gets skewed - essentially Temp is used to store indices for the correct position of each element of U
    for i = 0 to n - 2
        for j = i + 1 to n - 1
            if U[i] <= U[j]
                Temp[j]++
            else
                Temp[i]++
    // 3. Now Temp has the correct "index" order, so create an array S and place the elements of U into it using Temp
    for i = 0 to n - 1
        S[Temp[i]] = U[i]
    return S
I have verified it on several initial unsorted arrays and it sorts correctly every time. The scope of this homework question is not to scrap this algorithm and write a well-known sorting algorithm. Rather, I am to determine whether it is possible to "simply" modify the above algorithm such that the time complexity is less than n². Clearly what makes this quadratic is the nested loops, but I don't see a way of using Temp in the same manner without two nested for loops.
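For reference, a direct C++ translation of the pseudocode (the naming is mine) looks like this; it is still O(n²), but it makes the role of Temp as a rank array explicit:

    #include <vector>

    // Temp[i] ends up holding the final position (rank) of U[i];
    // S is then built by scattering U through Temp.
    std::vector<int> sortingArray(const std::vector<int>& U) {
        int n = static_cast<int>(U.size());
        std::vector<int> Temp(n, 0), S(n);
        for (int i = 0; i <= n - 2; ++i)
            for (int j = i + 1; j <= n - 1; ++j)
                if (U[i] <= U[j]) ++Temp[j];   // U[j] is at least as large, push its rank up
                else              ++Temp[i];   // U[i] is larger, push its rank up
        for (int i = 0; i < n; ++i)
            S[Temp[i]] = U[i];                 // place each element at its rank
        return S;
    }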

Using binary indexed trees for a RMQ extension

The RMQ problem can be extended like so:
Given is an array of n integers A.
query(x, y): given two integers 1 ≤ x, y ≤ n, find the minimum of A[x], A[x+1], ... A[y];
update(x, v): given an integer v and 1 ≤ x ≤ n do A[x] = v.
This problem can be solved in O(log n) for both operations using segment trees.
This is an efficient solution on paper, but in practice, segment trees involve a lot of overhead, especially if implemented recursively.
I know for a fact that there is a way to solve the problem in O(log^2 n) for one (or both, I'm not sure) of the operations, using binary indexed trees (more resources can be found, but this and this are, IMO, the most concise and exhaustive, respectively). This solution, for values of n that fit into memory, is faster in practice, because BITs have a lot less overhead.
However, I do not know how the BIT structure is used to perform the given operations. I only know how to use it to query an interval sum for example. How can I use it to find the minimum?
If it helps, I have code that others have written that does what I am asking for, but I cannot make sense of it. Here is one such piece of code:
int que( int l, int r ) {
    int p, q, m = 0;
    for( p=r-(r&-r); l<=r; r=p, p-=p&-p ) {
        q = ( p+1 >= l ) ? T[r] : (p=r-1) + 1;
        if( a[m] < a[q] )
            m = q;
    }
    return m;
}

void upd( int x ) {
    int y, z;
    for( y = x; x <= N; x += x & -x )
        if( T[x] == y ) {
            z = que( x-(x&-x) + 1, x-1 );
            T[x] = (a[z] > a[x]) ? z : x;
        }
        else
            if( a[ T[x] ] < a[ y ] )
                T[x] = y;
}
In the above code, T is initialized with 0, a is the given array, N is its size (they index from 1 for whatever reason), and upd is called initially for every value read. Before upd is called, a[x] = v is executed.
Also, p & -p is the same as the p ^ (p & (p - 1)) seen in some BIT sources; indexing starts from 1, with the zero element initialized to infinity.
Can anyone explain how the above works or how I could solve the given problem with a BIT?
I haven't looked at the code in detail, but it seems to be roughly consistent with the following scheme:
1) Keep the structure of the BIT, that is, impose a tree structure based on powers of two on the array.
2) At each node of the tree, keep the minimum value found at any descendant of that node.
3) Given an arbitrary range, put pointers at the start and end of the range and move them both upwards until they meet. If you move a pointer upwards and towards the other pointer then you have just entered a node in which every descendant is a member of the range, so take note of that value at that node. If you move a pointer upwards and away from the other pointer the node you have just joined records a minimum derived from values including those outside the range, and you have already taken note of every relevant value below that node inside the range, so ignore the value at that node.
4) Once the two pointers are the same pointer, the minimum in the range is the minimum value in any node that you have taken note of.
From a level above the bit fiddling, this is what we have:
A normal BIT array g for integer data array a stores range sums.
g[k] = sum{ i = D(k) + 1 .. k } a[i]
where D(k) is just k with the lowest-order 1 bit set to 0. Here we have instead
T[k] = min{ i = D(k) + 1 .. k } a[i]
The query works exactly like a normal BIT range sum query with the change that minima of subranges are taken as the query proceeds rather than sums. For N items in a, there are ceiling(log N) bits in N, which determines the run time.
The update takes more work because O(log N) subrange minima - i.e. elements of T - are affected by the change, and each takes an O(log N) query by itself to resolve. This makes the update O(log^2 N) overall.
At the bit fiddling level this is fiendishly clever code. The statement x += x & -x clears the lowest-order consecutive string of 1's in x and then sets the next highest-order zero to 1. This is just what you need to "traverse" the BIT for the original integer x.
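Here is a hypothetical, cleaner sketch of that structure in C++ (this is not the code quoted in the question; names, layout, and the update strategy are mine). T[k] holds the minimum of a over the block (k - (k & -k), k], 1-indexed; to build, initialize both arrays to a large sentinel and call update(i, value) for i = 1..N:

    #include <algorithm>
    #include <climits>
    #include <vector>

    int N;
    std::vector<int> a, T;   // a[1..N]: values; T[k]: min of a over (k - (k & -k), k]

    // Minimum of a[l..r], l >= 1: walk r downwards, consuming a whole BIT block
    // when it fits inside [l, r] and a single element when it would stick out left.
    int rangeMin(int l, int r) {
        int res = INT_MAX;
        while (r >= l) {
            if (r - (r & -r) + 1 >= l) { res = std::min(res, T[r]); r -= r & -r; }
            else                       { res = std::min(res, a[r]); --r; }
        }
        return res;
    }

    // Point assignment a[x] = v. Every block containing x is rebuilt from its own
    // element a[k] and the smaller blocks that tile it, so the update costs
    // O(log N) blocks times O(log N) work each, i.e. O(log^2 N) as described above.
    void update(int x, int v) {
        a[x] = v;
        for (int k = x; k <= N; k += k & -k) {
            T[k] = a[k];
            for (int step = 1; step < (k & -k); step <<= 1)
                T[k] = std::min(T[k], T[k - step]);
        }
    }

The query mirrors a normal BIT prefix-sum walk with minima instead of sums; the update rebuilds each affected block instead of adding a delta to it, which is where the extra log factor comes from.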
Segment trees are an efficient solution in practice too. You don't implement them as trees, though. Round n up to the next power of two and use an array rmq of size 2*n. The last n entries of rmq are A. If j < n, then rmq[j] = min(rmq[2*j], rmq[2*j+1]). You only need to look at logarithmically many entries of rmq to answer a range-minimum query. And you only need to update logarithmically many entries of rmq when an entry of A is updated.
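A hedged sketch of that array layout (assuming n has already been rounded up to a power of two; the struct and method names are mine):

    #include <algorithm>
    #include <climits>
    #include <vector>

    // Array-based segment tree: rmq has size 2*n, rmq[n..2n-1] holds A,
    // and rmq[j] = min(rmq[2*j], rmq[2*j+1]) for internal nodes j = 1..n-1.
    struct RMQ {
        int n;
        std::vector<int> rmq;

        explicit RMQ(const std::vector<int>& A) {      // A.size() is a power of two
            n = static_cast<int>(A.size());
            rmq.assign(2 * n, INT_MAX);
            std::copy(A.begin(), A.end(), rmq.begin() + n);
            for (int j = n - 1; j >= 1; --j)
                rmq[j] = std::min(rmq[2 * j], rmq[2 * j + 1]);
        }

        void update(int x, int v) {                    // A[x] = v, 0-indexed
            rmq[x += n] = v;
            for (x /= 2; x >= 1; x /= 2)               // only the ancestors change
                rmq[x] = std::min(rmq[2 * x], rmq[2 * x + 1]);
        }

        int query(int l, int r) {                      // min of A[l..r], 0-indexed, inclusive
            int res = INT_MAX;
            for (l += n, r += n + 1; l < r; l /= 2, r /= 2) {
                if (l & 1) res = std::min(res, rmq[l++]); // left end is a right child: take it
                if (r & 1) res = std::min(res, rmq[--r]); // right end is a right child: take it
            }
            return res;
        }
    };

Both update and query touch only logarithmically many entries, as described above.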
I don't understand your code, though, so I'm not going to remark on it.

Error in finding two subsets having equal sum

I have been trying to divide an array into two non-empty disjoint subsets such that their sums are equal.
e.g. A = {1,2,3,6,88,55,29}
one possible answer: 1+2+3 and 6
I have read the MIT tutorial on the balanced partition problem, but my constraints are different. I don't have to consider the whole of set A (meaning it is not necessary that A1 U A2 equals A). Another difference is the limit on N: there are at most 100 distinct elements, each <= 100.
I have also read THIS post related to my problem, but I couldn't get anything out of it.
My present algorithm:
p[1][a[0]] = 1
for i = 2 to n
    for j = 0 to n*n
        if( p[i][j] >= 2 ) stop
        p[i][j] += j - a[i] > 0 ? ( p[i-1][j] + p[i-1][j-a[i]] ) : 0
        p[i][j] += j == a[i] ? 1 : 0
        p[i][j] += j < a[i] ? p[i-1][j] : 0
Explanation:
Search for sum j at position i. If the count at position j is >= 2, it means
there are at least two ways of summing up to j.
HERE is sample working code by me
I know this method can't take care of disjoint sets, but I am unable to figure out any other approach.
I am in the learning phase of dynamic programming and I find it somewhat difficult. Can someone please help me find the error in my current algorithm?
It seems your code doesn't go over all the subsets. The power set of a set of size n has 2^n - 1 non-empty elements, so I think this is the lower limit for the algorithmic complexity. You need to find an appropriate way to enumerate the subsets, as discussed in this other question on SO.
In general, subset generation is done by adding elements one by one. This allows you to compute the sum of an individual set with one addition if you use dynamic programming. Indeed, if you have {1,2,3,6} and you have saved its sum, 12, you just need to add 88 to find the sum of {1,2,3,6,88}.
You can find further optimizations beyond the basic DP. For instance, if you test
{88} > {1,2,3,6,29}
first, then you don't need to test any subset of {1,2,3,6,29} (the smaller sum) against {88}. At the same time you don't need to test any set containing 88 against {1,2,3,6,29}, as it will always be bigger... This requires recursing from bigger sets to smaller ones.
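To make the "one addition per subset" idea concrete, here is a rough C++ sketch (my own illustration, only practical for small n since it enumerates all 2^n - 1 non-empty subsets as bitmasks):

    #include <unordered_map>
    #include <vector>

    // For each non-empty subset (bitmask) of A, compute its sum with one addition
    // (DP over masks), then look for two disjoint masks that share the same sum.
    bool twoDisjointEqualSubsets(const std::vector<int>& A) {
        int n = static_cast<int>(A.size());
        std::vector<long long> sum(1 << n, 0);
        std::unordered_map<long long, std::vector<int>> bySum;  // sum -> masks seen so far
        for (int mask = 1; mask < (1 << n); ++mask) {
            int lowBit = mask & -mask;
            int lowIdx = __builtin_ctz(static_cast<unsigned>(mask)); // GCC/Clang builtin: index of lowest set bit
            sum[mask] = sum[mask ^ lowBit] + A[lowIdx];              // one addition per subset
            for (int other : bySum[sum[mask]])
                if ((other & mask) == 0)                             // disjoint subsets, equal sums
                    return true;
            bySum[sum[mask]].push_back(mask);
        }
        return false;
    }

This is only meant to illustrate the incremental-sum enumeration; with up to 100 elements you would still need the pruning described above, or a different formulation entirely.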

Problem k-subvector using dynamic programming

Given a vector V of n integers and an integer k, k <= n, you want a subvector (a sequence of consecutive elements of the vector) of maximum length containing at most k distinct elements.
The technique I have to use to solve the problem is dynamic programming.
The complexity of this algorithm must be O(n*k).
The main problem is how to count the distinct elements of the vector. How would you solve it?
How do I write the recurrence equation?
Thank you!!!
I don't know why you would insist on O(n*k); this can be solved in O(n) with a 'sliding window' approach:
1. Maintain a current 'window' [left..right].
2. At each step, if we can increase right by 1 (without violating the 'at most k distinct elements' requirement), do it.
3. Otherwise, increase left by 1.
4. Check whether the current window is the longest so far and go back to step 2.
Checking whether we can increase right in step 2 is a little tricky. We can use a hashtable storing, for each element inside the window, how many times it occurs there.
So, the condition to allow right increase would look like
hash.size < k || hash.contains(V[right + 1])
And each time left or right is increased, we'll need to update hash (decrease or increase number of occurrences of the given element).
I'm pretty sure any DP solution here would be longer and more complicated.
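For completeness, a sketch of that sliding window in C++ (the names are mine; shrinking with a while loop is equivalent to increasing left one step at a time):

    #include <algorithm>
    #include <unordered_map>
    #include <vector>

    // Longest subvector of V containing at most k distinct elements, O(n) expected time.
    // The window is [left, right]; counts stores how often each value occurs inside it.
    int longestWithAtMostKDistinct(const std::vector<int>& V, int k) {
        std::unordered_map<int, int> counts;
        int best = 0, left = 0;
        for (int right = 0; right < static_cast<int>(V.size()); ++right) {
            ++counts[V[right]];                        // extend the window to the right
            while (static_cast<int>(counts.size()) > k) {
                if (--counts[V[left]] == 0)            // shrink from the left until
                    counts.erase(V[left]);             // at most k distinct values remain
                ++left;
            }
            best = std::max(best, right - left + 1);   // the window is valid here
        }
        return best;
    }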
The main problem is how to count the distinct elements of the vector. How would you solve it?
If you are allowed to use hashing, you could do the following:
init Hashtable h
distinct_count := 0
for each element v of the vector V
    if h does not contain v (O(1) time on average)
        insert v into h (O(1) time on average)
        distinct_count := distinct_count + 1
return distinct_count
This is O(n) time on average.
If not, here is an O(n log n) solution - this time worst case:
sort V (O(n log n) comparisons)
Then it should be easy to determine the number of different elements in O(n) time ;-)
I could also tell you an algorithm to sort V in O(n*b) where b is the bit count of the integers - if this helps you.
Here is the algorithm:
sort(vector, begin_index, end_index, currentBit)
    reorder vector[begin_index to end_index] so that the elements that have a 1 at bit currentBit come after those that have a 0 there (O(end_index - begin_index) time)
    let c be the count of elements that have a 0 at bit currentBit (O(end_index - begin_index) time; can be obtained from the step before)
    if (currentBit is not 0)
        call sort(vector, begin_index, begin_index + c - 1, currentBit - 1)
        call sort(vector, begin_index + c, end_index, currentBit - 1)
Call it with
vector = V
begin_index = 0
end_index = n-1
currentBit = bit count of the integers (=: b)-1.
This even uses dynamic programming as requested.
As you can easily determine, this is O(n*b) time with a recursion depth of b.
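A hedged C++ sketch of that bitwise sort (the buffer-based partition and the naming are mine), called as bitSort(V, 0, n - 1, b - 1):

    #include <vector>

    // Recursive MSD radix sort on non-negative integers: partition [lo, hi] so that
    // elements with a 0 at 'bit' come before those with a 1, then recurse on both
    // halves with the next lower bit. O(n * b) overall, recursion depth b.
    void bitSort(std::vector<int>& v, int lo, int hi, int bit) {
        if (lo >= hi || bit < 0) return;
        std::vector<int> zeros, ones;
        for (int i = lo; i <= hi; ++i)
            ((v[i] >> bit) & 1 ? ones : zeros).push_back(v[i]);
        int pos = lo;
        for (int x : zeros) v[pos++] = x;          // 0-bit elements first
        for (int x : ones)  v[pos++] = x;          // 1-bit elements after
        int c = static_cast<int>(zeros.size());
        bitSort(v, lo, lo + c - 1, bit - 1);       // recurse on the 0-bit half
        bitSort(v, lo + c, hi, bit - 1);           // recurse on the 1-bit half
    }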
