Memoization in dynamic programming - algorithm

Can someone please tell me how memoization works in this DP example?
DP example problem, CodeChef
The part where I'm stuck: when the input is 4, why does the code calculate n-1, i.e. 4-1, when the optimal step would be 4/2? And for input = 10, why do we calculate n-1 all the way down to 1? Any help would be appreciated.
I'm new to dynamic programming, so please bear with me.

Memoization in dynamic programming is just storing the solutions to subproblems. For input n=4 you calculate its solution. So you try step 1: subtract 1, giving 1 + the solution to the subproblem n=3. To evaluate this you need to solve the problem n=3, because you have not solved it previously. So you again try step 1, and so on, until you get to the base problem n = 1, where you output 0.
After you have tried step 1 for the current problem, you try step 2, which means dividing n by 2, and afterwards you try step 3, dividing by 3. You try every step for every subproblem, but because you store the best value at every subproblem, you can reuse it when the same subproblem occurs again.
For example, when you get back to n=4 after trying step 1 on it, you try step 2 and see that you can use n / 2. Because you have already calculated the optimal value for n=2, which is 1, you can output 1 + 1 = 2 in total.
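A rough sketch of that idea in Python (not the CodeChef code itself; the function name and the use of lru_cache as the memo are just for illustration):

from functools import lru_cache

@lru_cache(maxsize=None)
def min_steps(n):
    # minimum number of steps to reduce n to 1 using the moves n-1, n/2, n/3
    if n == 1:
        return 0                             # base case: already at 1
    best = min_steps(n - 1)                  # step 1: subtract 1
    if n % 2 == 0:
        best = min(best, min_steps(n // 2))  # step 2: divide by 2
    if n % 3 == 0:
        best = min(best, min_steps(n // 3))  # step 3: divide by 3
    return 1 + best                          # lru_cache stores this, so each n is solved once

print(min_steps(4))   # 2  (4 -> 2 -> 1)
print(min_steps(10))  # 3  (10 -> 9 -> 3 -> 1)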

The link explains it fairly clearly. If F(n) is the minimal number of steps to convert n to 1, then for any n > 1 we have the following recurrence relation:
F(n) = 1 + min(F(n-1), F(n/2), F(n/3)) // if n divisible by 2 and 3
F(n) = 1 + min(F(n-1), F(n/2)) // if n divisible by 2 and not 3
F(n) = 1 + min(F(n-1), F(n/3)) // if n divisible by 3 and not 2
F(n) = 1 + F(n-1) // all other cases
For your case, n=4, we have to compute F(n-1) and F(n/2) to decide which one is minimum.
As for the second question, when n=10 we first evaluate F(9). During this evaluation all values F(8), F(7), ..., F(2) are computed and memoized. Then, when we evaluate F(10/2) = F(5), it is simply a matter of looking up the value in the array of memoized values. This saves a lot of computing.
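A bottom-up sketch of the same recurrence in Python (the array F here plays the role of the memo table; the names are illustrative, not from the linked code):

def min_steps_table(n):
    # F[k] = minimal number of steps to reduce k to 1; filled from small k upwards
    F = [0] * (n + 1)
    for k in range(2, n + 1):
        best = F[k - 1]                      # F(k-1) is always already in the table
        if k % 2 == 0:
            best = min(best, F[k // 2])
        if k % 3 == 0:
            best = min(best, F[k // 3])
        F[k] = 1 + best
    return F

F = min_steps_table(10)
print(F[10])  # 3 -- F[5] was already memoized by the time F[10] needed it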

Maybe you can do it as follows in JS:
function getSteps(n){
  // try dividing by 3, then dividing by 2, then subtracting 1 (a fixed greedy order of moves)
  var fs = [i => i%3 ? false : i/3, i => i%2 ? false : i/2, i => i-1],
      res = [n],   // res records the sequence of values, starting from n
      chk;
  while (res[res.length-1] > 1) {
    chk = false;
    // apply the first move that succeeds on the last value and push the result
    fs.forEach(f => !chk && (chk = f(res[res.length-1])) && res.push(chk));
  }
  return res;
}
var result = getSteps(1453);
console.log(result);
console.log("The number of steps:", result.length - 1); // the path includes the starting value


Efficiently grab some subsets that meet criteria [duplicate]

This question already has an answer here: Count the total number of subsets that don't have consecutive elements (1 answer). Closed 4 years ago.
Given a set of consecutive numbers from 1 to n, I'm trying to find the number of subsets that do not contain consecutive numbers.
E.g., for the set [1, 2, 3], some possible subsets are [1, 2] and [1, 3]. The former would not be counted while the latter would be, since 1 and 3 are not consecutive numbers.
Here is what I have:
def f(n)
  consecutives = Array(1..n)
  stop = (n / 2.0).round
  (1..stop).flat_map { |x|
    consecutives.combination(x).select { |combo|
      consecutive = false
      combo.each_cons(2) do |l, r|
        consecutive = l.next == r
        break if consecutive
      end
      combo.length == 1 || !consecutive
    }
  }.size
end
It works, but I need it to work faster, under 12 seconds for n <= 75. How do I optimize this method so I can handle high n values no sweat?
I looked at:
Check if array is an ordered subset
How do I return a group of sequential numbers that might exist in an array?
Check if an array is subset of another array in Ruby
and some others. I can't seem to find an answer.
The suggested duplicate is Count the total number of subsets that don't have consecutive elements, although that question is slightly different, as I was asking for this optimization in Ruby and I do not want the empty subset in my answer. That question would have been very helpful had I found it initially, though! But SergGr's answer is exactly what I was looking for.
Although #user3150716's idea is correct, the details are wrong. In particular, you can see that for n = 3 there are 4 subsets: [1], [2], [3], [1,3], while his formula gives only 3. That is because he missed the subset [3] (i.e. the subset consisting of just [i]), and that error accumulates for larger n. Also, I think it is easier to reason about if you start from 1 rather than from n. So the correct formulas would be
f(1) = 1
f(2) = 2
f(n) = f(n-1) + f(n-2) + 1
Those formulas are easy to code using a simple loop, in constant space and O(n) time:
def f(n)
  return 1 if n == 1
  return 2 if n == 2
  # calculate
  #   f(n) = f(n-1) + f(n - 2) + 1
  # using simple loop
  v2 = 1
  v1 = 2
  i = 3
  while i <= n do
    i += 1
    v1, v2 = v1 + v2 + 1, v1
  end
  v1
end
You can see this online, together with the original code, here.
This should be pretty fast for any n <= 75. For much larger n you might need some additional tricks, like noticing that f(n) is actually one less than a Fibonacci number,
f(n) = Fib(n+2) - 1
and there is a closed formula for Fibonacci numbers that, in theory, can be computed faster for big n.
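A sketch of that direction in Python, using fast-doubling Fibonacci rather than the closed formula (the function names are just illustrative):

def fib(n):
    # returns (Fib(n), Fib(n+1)), with Fib(1) = Fib(2) = 1, via fast doubling
    if n == 0:
        return (0, 1)
    a, b = fib(n // 2)           # a = Fib(k), b = Fib(k+1), where k = n // 2
    c = a * (2 * b - a)          # Fib(2k)
    d = a * a + b * b            # Fib(2k+1)
    return (d, c + d) if n % 2 else (c, d)

def f_fast(n):
    return fib(n + 2)[0] - 1     # f(n) = Fib(n+2) - 1

print(f_fast(3))                 # 4 -- the subsets [1], [2], [3], [1,3]
print(f_fast(75))                # same value as the O(n) loop, in O(log n) steps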
Let the number of subsets with no consecutive numbers from {i...n} be f(i); then f(i) is the sum of:
1) f(i+1), the number of such subsets without i in them.
2) f(i+2) + 1, the number of such subsets with i in them (hence leaving i+1 out of the subset).
So,
f(i)=f(i+1)+f(i+2)+1
f(n)=1
f(n-1)=2
f(1) will be your answer.
You can solve it using matrix exponentiation (http://zobayer.blogspot.in/2010/11/matrix-exponentiation.html) in O(log n) time.
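A minimal sketch of that matrix-exponentiation idea in Python (the 3x3 matrix carries the +1 constant term along with f(i) and f(i+1); the helper names are illustrative):

def mat_mul(A, B):
    # 3x3 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_pow(M, p):
    # fast exponentiation by squaring
    R = [[int(i == j) for j in range(3)] for i in range(3)]
    while p:
        if p & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        p >>= 1
    return R

def f_matrix(n):
    if n == 1:
        return 1
    if n == 2:
        return 2
    # [f(i), f(i+1), 1] = M * [f(i+1), f(i+2), 1]
    M = [[1, 1, 1], [1, 0, 0], [0, 0, 1]]
    P = mat_pow(M, n - 2)
    # start from [f(n-1), f(n), 1] = [2, 1, 1] and step down to f(1)
    return 2 * P[0][0] + 1 * P[0][1] + 1 * P[0][2]

print(f_matrix(3))   # 4
print(f_matrix(75))  # same answer as the linear recurrence, in O(log n) matrix products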

How to determine the time complexity of this algorithm?

The following function calculates a^b.
Assume that we already have a prime_list which contains all needed primes and is sorted from small to large.
The code is written in Python.
from math import sqrt

def power(a, b):
    if b == 0:
        return 1
    prime_range = int(sqrt(b)) + 1
    for prime in prime_list:       # prime_list is assumed to exist (see above)
        if prime > prime_range:
            break
        if b % prime == 0:
            # b has a small prime factor: compute (a^prime)^(b/prime)
            return power(power(a, prime), b // prime)
    return a * power(a, b - 1)
How to determine its time complexity?
P.S. The code isn't perfect, but as you can see, the idea is to use primes to reduce the number of arithmetic operations.
I am still looking for an ideal implementation, so please help if you come up with something. Thanks!
The worst case is when the for loop is exhausted. But in that case b has no prime factor up to sqrt(b), so b is prime and the call returns a * power(a, b-1); since b-1 is then even, b gets divided by 2 in the next recursive call.
So in the worst case we divide b by a factor of 2 at the cost of approximately sqrt(b) loop iterations per step, until b reaches 1.
So if we set up the equations
f(1) = 1 and f(n) = f(n/2) + sqrt(n)
we get, using Wolfram Alpha,
f(n) = (1+sqrt(2)) (sqrt(2) sqrt(n) - 1)
and that is
O(sqrt(b))
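To see where that comes from, unrolling the recurrence (assuming for simplicity that n is a power of 2) gives a geometric series; in LaTeX:

f(n) = \sum_{k=0}^{\log_2 n} \sqrt{n / 2^k}
     \le \sqrt{n} \sum_{k \ge 0} \left(\frac{1}{\sqrt{2}}\right)^{k}
     = \frac{\sqrt{n}}{1 - 1/\sqrt{2}}
     = (2 + \sqrt{2})\,\sqrt{n}
     = O(\sqrt{n})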

How to calculate the T(N) for this primes finder algorithm

This algorithm finds all prime numbers below N:
var f = function(n){
    var primes = [2];   // 1
    var flag;           // 1
    for(var i = 3; i < n; i += 2){   // ( from 3 to n-1 ) / 2
        flag = true;        // 1
        var startI = 0;     // 1
        for(var y = primes[startI]; y <= Math.sqrt(i); y = primes[++startI]){   // ???
            if(i % y === 0)     // 1
                flag = false;   // 1
        }
        if(flag)                // 1
            primes.push(i);     // 1
    }
    return primes;  // 1
}
So far my analysis only covers the first loop; I'm not sure how to handle the second sum (the one that is using primes.length and Math.sqrt).
T(n) = 1 + 1 + sum( (1 + 1 + ??weird sum???), from i = 3 to n-1 ) / 2 + 1 + 1
I understand how to analyze it up to the second nested loop. I suspect that loop is around log(N) or something like that, but I would like to know the exact number of iterations.
Questions:
How can I handle that kind of loop that is using arrays in memory to iterate ?
If not possible to get the exact number, how can I get a good approximation ?
Any help is appreciated (links to similar cases, books, etc ).
The inner loop iterates over the array of all primes below sqrt(i).
So you have to calculate the number of elements in that array. In the case of an array of primes, you have to use approximations for π(i), the number of primes below i.
You can approximate it by x/ln(x) or (much better) by li(x). More details here.
For the analysis, x/ln(x) is easier to work with.
In total you get (assuming n = 2k+1)
T(n) = T(n-2) + O( sqrt(n) / ((1/2)⋅ln(n)) ) = O( Σ_{i=1,...,k} 2⋅sqrt(2⋅i+1) / ln(2⋅i+1) )
You get the recursive formula from the inner for loop, which iterates over the array of primes below sqrt(n) (their count approximated by sqrt(n)/((1/2)⋅ln(n))), plus the work you have to do to come this far, represented by T(n-2).
Maybe you can simplify this more. I don't want to take the fun from you :)
Hint: Maybe you can use an integral to get an approximation of the sum. But I think there is no "nice" way to write it down.
If you forget about the 1/ln(i) part, you can see
T(n) ∈ o(n^(3/2)) and T(n) ∈ ω(n). Maybe you can do better.
As #vib mentioned, you can get a tighter upper bound of O(n^(3/2)/ln(n)). But notice that sqrt(n)/ln(n) is only an approximation for the number of primes below sqrt(n). Thus you get better bounds with better approximations. Since these approximations do not provide the exact value of π(n), we cannot say that this algorithm runs in Θ(n^(3/2)/ln(n)).
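For a rough numerical feel (a throwaway Python sketch, not part of the answer's derivation), you can compare the approximated sum with the n^(3/2)/ln(n) bound:

from math import sqrt, log

def approx_inner_work(n):
    # sum over odd i = 3, 5, ..., n-1 of the approximate inner-loop length 2*sqrt(i)/ln(i)
    return sum(2 * sqrt(i) / log(i) for i in range(3, n, 2))

for n in (10**3, 10**4, 10**5):
    print(n, round(approx_inner_work(n)), round(n**1.5 / log(n)))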

Recurrence relation on Factorial

I was studying recurrences from a set of slides (slides 7 and 8):
http://www.cs.ucf.edu/courses/cop3502h/spring2012/Lectures/Lec8_RecurrenceRelations.pdf
I just can't accept (probably I'm not seeing it right) that the recurrence equation for factorial is:
T(n) = T(n-1)+2
T(1) = 1
when considering the number of operations ("*" and "-") in the function:
int factorial(int n) {
    if (n == 1)
        return 1;
    return n * factorial(n - 1);
}
If we use n = 5, we get 6 by the formula above, while the real number of subtractions and multiplications is 8.
My teacher also told us that if we analyze only the number of "*" it would be:
T(n) = T(n-1) + 1.
Again, if I use n = 5 I get 5, but if you do it on paper you get 4 multiplications.
I also checked on the forum, but this question is even more of a mess:
Recurrence Relation
Could anyone help me understand this? Thanks.
If we use n = 5, we get 6 by the formula above, while the real number of subtractions and multiplications is 8.
It seems that the slides are counting the number of operations, not just subtractions and multiplications. In particular, the return statement is counted as one operation. (The slides say, "if it’s the base case just one operation to return.")
Thus, the real number of subtractions and multiplications is 8, but the number of operations is 9. If n is 5, then, unrolling the recursion, we get 1 + 2 + 2 + 2 + 2 = 9 operations, which looks right to me.
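A quick instrumented check in Python (the counting mirrors the slides' model; the counter names are just illustrative):

def factorial(n, counts):
    if n == 1:
        counts["return"] += 1          # base case: "just one operation to return"
        return 1
    counts["sub"] += 1                 # the subtraction in factorial(n - 1)
    counts["mul"] += 1                 # the multiplication n * ...
    return n * factorial(n - 1, counts)

counts = {"return": 0, "sub": 0, "mul": 0}
factorial(5, counts)
print(counts)                          # {'return': 1, 'sub': 4, 'mul': 4}
print(sum(counts.values()))            # 9 operations total; 8 of them are "*" and "-"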

Is my base case wrong? - Recurrence relation for algorithm with multiple recurrences

I need to create a recurrence relation to capture the number of comparisons performed in this algorithm:
Func(n)
    if n = 1
        print "!"
        return 1
    else
        return Func(n-1) * Func(n-1) * Func(n-1)
This is what I came up with - but I can't seem to figure out what I did wrong...
Base Case: M(1) = 0
M(n) = 3[M(n-1)]
= 3[3[M(n-2)]]
= 3[3[3[M(n-3)]]]
= 3^i[M(n-i)]
i = n-1 //to get base case
M(n) = 3^(n-1)[M(n-(n-1))]
= 3^(n-1)[M(1)]
= 3^(n-1)[0]
= 0 //????????????
Is my base case wrong? If so, why? Please and thank you for your help.
For the base case (n equal to 1), M(1) should be taken as 1 (a constant amount of work); then
M(n) = 3^(n-1)
The question is about the number of comparisons.
Every time you call the function, you execute exactly one comparison.
When the outcome is n=1, you are done, and when the outcome is n>1, you perform three recursive calls with n-1.
Clearly,
M(1) = 1
M(n) = 1 + 3 M(n-1)
By computing M for increasing values of n, you easily spot the pattern: 1, 4, 13, 40, 121, ..., that is, M(n) = (3^n - 1) / 2.
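A tiny Python cross-check of that count (a sketch, not the original pseudocode; Func's return value is irrelevant here):

def func_count(n):
    # returns the number of comparisons performed by Func(n)
    comparisons = 1                    # the "if n = 1" test in this call
    if n == 1:
        return comparisons
    for _ in range(3):                 # the three recursive calls Func(n-1)
        comparisons += func_count(n - 1)
    return comparisons

print([func_count(n) for n in range(1, 6)])   # [1, 4, 13, 40, 121] = (3**n - 1) // 2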
