Understanding a Particular Recursive Algorithm

Which problem does the algorithm Delta below solve, where m, n >= 0 are integers?
I'm finding the algorithm very hard to break down due to the nature of the nested recursion and how it calls on another recursive algorithm. If I had to guess, I would say that Delta solves the LCS (longest common subsequence) problem, but I'm not able to give a good explanation as to why.
Could someone help me break down the algorithm and explain the recursion and how it works?

As you found out yourself, Delta computes the product of two integers.
The recursion indeed makes this confusing to look at, but the best way to gain intuition is to perform the computation by hand on some example data. By looking at the functions separately, you will find that:
Gamma is just summation. Gamma(n, m) = Gamma(n, m - 1) + 1 essentially performs a naive addition where you count down the second argument while adding 1 each time. Example:
3 + 3 =
(3 + 2) + 1 =
((3 + 1) + 1) + 1 =
(((3 + 0) + 1) + 1) + 1 =
6
Knowing this, we can simplify Delta:
Delta(n, m) = n + Delta(n, m - 1) (if m!=0, else return 0).
In the same way, we are counting down on the second factor, but instead of adding 1, we add n. This is indeed one definition of multiplication. It is easy to understand this if you manually work through an example just like the one above.
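To make this concrete, here is a small Python sketch of the two functions as described above; the original pseudocode isn't shown, so the exact names and base cases are assumptions.

    # Hypothetical sketch of Gamma and Delta as described above (names and base
    # cases are assumed, since the original pseudocode is not reproduced here).
    def gamma(n, m):
        # addition: count m down to 0, adding 1 to the result each step
        if m == 0:
            return n
        return gamma(n, m - 1) + 1

    def delta(n, m):
        # multiplication as repeated addition: add n to the running total m times
        if m == 0:
            return 0
        return gamma(n, delta(n, m - 1))

    print(gamma(3, 3))   # 6
    print(delta(4, 5))   # 20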

Related

Solve recurrence relation in which there is a separate relation for even and odd values

Can someone help me figure out how to solve this type of question? What kind of approach should I follow?
Looking over the question: since you will be asked to evaluate the recurrence lots of times for very large inputs, you will likely need to either find a closed-form solution to the recurrence, or find a way to evaluate the nth term of the recurrence in sublinear time.
The question, now, is how to do this. Let's take a look at the recurrence, which was defined as
f(1) = f(2) = 1,
f(n+2) = 3f(n) if n is odd, and
f(n+2) = 2f(n+1) - f(n) + 2 if n is even.
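Before analyzing it, it helps to have a slow but direct reference implementation to check any closed form against. Here is a minimal Python sketch of the recurrence as stated (my own illustration, not part of the original answer):

    # Straightforward (linear-time) evaluation of the recurrence above, useful
    # for sanity-checking closed forms. Indexing follows f(1) = f(2) = 1.
    def f_slow(n):
        vals = [None, 1, 1]                    # vals[i] = f(i); index 0 unused
        for m in range(1, n - 1):              # compute f(m + 2) for m = 1 .. n - 2
            if m % 2 == 1:                     # "n is odd" case of the definition
                vals.append(3 * vals[m])
            else:                              # "n is even" case of the definition
                vals.append(2 * vals[m + 1] - vals[m] + 2)
        return vals[n]

    print([f_slow(i) for i in range(1, 9)])    # [1, 1, 3, 7, 9, 13, 27, 43]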
Let's start off by just exploring the recurrence to see if any patterns arise. Something that stands out here - the odd terms of this recurrence only depend on other odd terms in the recurrence. This means that we can imagine trying to split this recurrence into two smaller recurrences: one that purely deals with the odd terms, and one that purely deals with the even terms. Let's have D(n) be the sequence of the odd terms, and E(n) be the sequence of the even terms. Then we have
D(1) = 1
D(n+2) = 3D(n)
We only need to evaluate D on odd numbers, so we can play around with that to see if a pattern emerges:
D(2·0 + 1) = 1 = 3^0
D(2·1 + 1) = 3 = 3^1
D(2·2 + 1) = 9 = 3^2
D(2·3 + 1) = 27 = 3^3
The pattern here is that D(2n+1) = 3^n. And hey, that's great news! That means that we have a direct way of computing D(2n+1).
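A two-line check of this pattern (my own sketch; D here is just the recurrence restricted to odd arguments):

    # D restricted to odd arguments: D(1) = 1, D(m) = 3 * D(m - 2)
    def D(m):
        return 1 if m == 1 else 3 * D(m - 2)

    assert all(D(2 * n + 1) == 3 ** n for n in range(10))   # D(2n+1) = 3^n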
With that in mind, notice that E(n) is defined as
E(2) = 1 = D(1)
E(n+2) = 2D(n+1) - E(n) + 2
Remember that we know the exact value of D(n+1), which is going to make our lives a lot easier. Let's see what happens if we iterate on this recurrence a bit. For example, notice that
E(8)
= 2D(7) - E(6) + 2
= 2D(7) + 2 - (2D(5) - E(4) + 2)
= 2D(7) - 2D(5) + E(4)
= 2D(7) - 2D(5) + (2D(3) - E(2) + 2)
= 2D(7) - 2D(5) + 2D(3) + 2 - D(1)
= 2D(7) - 2D(5) + 2D(3) - D(1) + 2
Okay... that's really, really interesting. It seems like we're getting an alternating sum of the D recurrence, where we alternate between including and excluding 2. At this point, if I had to make a guess, I'd say that the way to solve this recurrence is going to be to think about subdividing the even case further into cases where the inputs are 2n for an even n and 2n for an odd n. In fact, notice that if the input is 2n for an odd n, then there won't be a +2 term at the end (all the +2's are balanced out by -2's), whereas if the input is 2n for an even n, then there will be a +2 term at the end (one +2 survives the cancellation).
Now, let's turn to a different aspect of the problem. You weren't asked to query for individual terms of the recurrence. You were asked to query for the sum of the recurrence, evaluated over a range of inputs. The fact that we're getting alternating sums and differences of the D terms here is really, really interesting. For example, what is f(10) + f(11) + f(12)? Well, we know that f(11) = D(11), which we can compute directly. And we also know that f(10) and f(12) are E(10) and E(12). And watch what happens if we evaluate E(10) + E(12):
E(10) + E(12)
= (2D(9) - 2D(7) + 2D(5) - 2D(3) + D(1)) + (2D(11) - 2D(9) + 2D(7) - 2D(5) + 2D(3) - D(1) + 2)
= 2D(11) + (2D(9) - 2D(9)) + (2D(7) - 2D(7)) + (2D(5) - 2D(5)) + (2D(3) - 2D(3)) + (D(1) - D(1)) + 2
= 2D(11) + 2.
Now that's interesting. Notice that all of the terms have cancelled out except for the 2D(11) term and the +2 term! More generally, this might lead us to guess that there's some rule about how to simplify E(n+2) + E(n). In fact, there is; it falls right out of the recurrence:
E(2n) + E(2n+2) = 2D(2n+1) + 2
This means that if we're summing up lots of consecutive values in a range, every pair of adjacent even terms will simplify instantly to something of the form 2D(2n+1) + 2.
There's still some more work to be done here. For example, you'll need to be able to sum up enormous numbers of D(n) terms, and you'll need to factor in the effects of all the +2 terms. I'll leave those to you to figure out.
One hint: all the values you're asked to return are modulo some number P. This means that the sequence of values 0, D(1), D(1) + D(3), D(1) + D(3) + D(5), D(1) + D(3) + D(5) + D(7), etc. eventually has to reach 0 again (mod P). You can both compute how many terms have to happen before this occurs and write down all the values encountered when doing this by just computing these values explicitly. That will enable you to sum up huge numbers of consecutive D terms in a row - you can mod the number of terms by the length of the cycle, then look up the residual sum in the table.
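As a sketch of that hint (my own illustration, assuming P is not divisible by 3 so that the state eventually returns to its start): precompute the prefix sums of the powers of 3 modulo P until the pair (running sum, current power) comes back to (0, 1); after that, summing any number of consecutive D terms only needs a lookup.

    # Prefix sums of 3^k (i.e. of D(1), D(3), D(5), ...) modulo P, precomputed up
    # to the point where the state (sum, power) returns to its starting value.
    # Assumes gcd(3, P) == 1 so that this eventually happens.
    def build_prefix_cycle(P):
        sums = [0]                      # sums[i] = (3^0 + ... + 3^(i-1)) mod P
        power = 1 % P                   # current 3^i mod P
        while True:
            sums.append((sums[-1] + power) % P)
            power = (power * 3) % P
            if sums[-1] == 0 and power == 1 % P:
                return sums             # one full period of prefix sums

    def sum_of_first_k_D_terms(k, P, sums):
        period = len(sums) - 1          # number of terms in one full period
        # the prefix sums repeat with this period, so only k mod period matters
        return sums[k % period]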
Hope this helps!

Stuck at Algorithm pseudocode generation

I do not know what to do next (and even if my approach is correct) in the following problem:
Part 1
Part 2
I have just figured out that a possible MNT (for part a) is to get a jar and test whether it breaks from height h; if so, that's the answer; if not, increase the height by 1 and keep looping.
For part b, my idea is the following. Since we know the max height equals n, we start from n (current height = n). We then go from top to bottom, adding to our broken-jar count (they are supposed to break if you start from the top), until the jars stop breaking. The answer would then be current height + 1 (because we need to go back one index).
For part c, I don't even know what my approach would be, since I am assuming that the order of the algorithm is O(n^c) where c is a fraction. I also know that O(n^c) is faster than O(n).
I also noted that there is a problem similar to this one online, but it talks about rungs instead of a robotic arm. Maybe it is similar? Here is the link
Do you have any recommendations/clues? Any help will be appreciated.
Thank you for your time and help in advance.
Cheers!
This is an answer for part (c).
The idea is to find some number k and apply the following scheme:
Drop a jar from height k:
If it breaks, drop the other one from 1 up to k-1 until we find the height at which it breaks, in no more than k tries in total.
If it doesn't break, drop it again from height k + (k-1). Again, if it breaks, drop the other one from k+1 up to (k+(k-1)-1); otherwise continue to (k + (k-1) + (k-2)).
Continue this until you find the height
(of course if at some point you need to jump to a height greater than n, you just jump to n).
This scheme ensures we'll use at most k tries. So now the question is how to find a minimal k (as a function of n) for which the scheme will work. Since, at every step, we reduce our height advancement by 1, the following inequality must hold:
k + (k-1) + (k-2) + ... + 1 >= n
Otherwise we will "run out" of steps before reaching n. We want to find the smallest k for which the inequality holds.
There's a formula to the sum:
1 + 2 + ... + k = k(k+1)/2
Using that we get the equation:
k(k+1)/2 = n ===> k^2 + k - 2n = 0
Solving this (and if it's not integral take the ceiling of it) will give us k. Quadratic equations might have two solutions, but ignoring the negative one you get:
k = (-1 + sqrt(1 + 8n))/2
Looking at the complexity, we can ignore everything but the n, which has an exponent of 1/2 (since we're taking its square root). That is actually better than the requested complexity of n to the power of 2/3.
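A possible Python sketch of this strategy (my own, with a hypothetical breaks(h) oracle standing in for actually dropping a jar from height h):

    import math

    # Returns the highest safe height in 1..n (0 if a jar breaks even at height 1),
    # using two jars and roughly sqrt(2n) drops. `breaks(h)` is a hypothetical
    # oracle: True iff a jar dropped from height h breaks.
    def highest_safe_height(n, breaks):
        # smallest k with k + (k-1) + ... + 1 >= n, i.e. k(k+1)/2 >= n
        k = math.ceil((-1 + math.sqrt(1 + 8 * n)) / 2)

        safe = 0                      # highest height already known to be safe
        step = k
        while safe < n:
            checkpoint = min(safe + step, n)
            if breaks(checkpoint):    # first jar broke: answer lies in (safe, checkpoint)
                break
            safe = checkpoint
            step = max(step - 1, 1)   # each jump is one shorter than the last
        else:
            return n                  # the jar survived even at height n

        # second jar: walk up one height at a time inside the narrowed interval
        for h in range(safe + 1, checkpoint):
            if breaks(h):
                return h - 1
            safe = h
        return safe                   # it only breaks at the checkpoint itself

    print(highest_safe_height(100, lambda h: h > 57))   # -> 57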
For part (a) you can use binary search over the height. Pseudocode for this is below:
lo = 0
hi = n
while (lo < hi) {
    mid = lo + (hi - lo + 1) / 2;   // round up so the loop always makes progress
    if (glass_breaks(mid)) {
        hi = mid - 1;
    } else {
        lo = mid;
    }
}
'lo' will contain the maximum height from which the glass does not break, found in the minimum possible number of trials. It takes log(n) steps in the worst case, whereas your approach may take N steps in the worst case.
For part (b),
you can use your approach from part (a): start from the minimum height and increase the height by 1 until the glass breaks. This breaks at most one glass to determine the required height.
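For completeness, a minimal sketch of that linear scan (again with a hypothetical breaks(h) oracle):

    # Scan upward from height 1; the first breaking height determines the answer,
    # and at most one glass is broken. `breaks(h)` is a hypothetical oracle.
    def first_breaking_height(n, breaks):
        for h in range(1, n + 1):
            if breaks(h):
                return h          # h - 1 is then the highest safe height
        return None               # it never breaks, even at height n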

Merge sort - recursion tree

So I've been studying sorting algorithms.
I am stuck on finding the complexity of merge sort.
Can someone please explain to me why h = 1 + lg(n)?
If you keep dividing n by 2, you'll eventually get to 1.
Namely, it takes log2(n) divisions by 2 to make this happen, by definition of the logarithm.
Every time we divide by 2, we add a new level to the recursion tree.
Add that to the root level (which didn't require any divisions), and we have log2(n) + 1 levels total.
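A quick numerical illustration of that count (my own sketch, assuming n is a power of two so every split is exact):

    # Count the levels of the merge sort recursion tree for a power-of-two n.
    def tree_height(n):
        levels = 1            # the root level, which needed no division
        while n > 1:
            n //= 2           # each halving adds one level
            levels += 1
        return levels

    for n in (1, 2, 4, 8, 16, 32):
        print(n, tree_height(n))   # tree_height(n) == 1 + log2(n)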
Here's a cooler proof. Notice that, rearranging, we have T(2n) - 2 T(n) = 2 c n.
If n = 2^k, then we have T(2^(k+1)) - 2 T(2^k) = 2 c 2^k.
Let's simplify the mess. Let's define U(k) = T(2^k) / (2 c).
Then we have U(k + 1) - 2 U(k) = 2^k, or, if we define U'(k) = U(k + 1) - U(k):
U'(k) - U(k) = 2^k
k is discrete here, but we can let it be continuous, and if we do, then U' is the derivative of U.
At that point the solution is obvious: if you've ever taken derivatives, then you know that if the difference of a function and its derivative is exponential, then the function itself has to be exponential (since only in that case will the derivative be some multiple of itself).
At that point you know U(k) is exponential, so you can just plug in an exponential for the unknown coefficients in the exponential, and plug it back in to solve for T.
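Whichever route you take, the resulting closed form can be checked numerically. A small sketch (my own, not part of the answer) verifying that T(n) = 2·T(n/2) + c·n with T(1) = d gives T(n) = c·n·log2(n) + d·n for powers of two:

    from math import log2

    c, d = 3.0, 5.0   # arbitrary constants for the check

    def T(n):
        # the merge sort recurrence, evaluated directly (n a power of two)
        return d if n == 1 else 2 * T(n // 2) + c * n

    for k in range(1, 11):
        n = 2 ** k
        assert abs(T(n) - (c * n * log2(n) + d * n)) < 1e-6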

Series Summation to calculate algorithm complexity

I have an algorithm, and I need to calculate its complexity. I'm close to the answer but I have a little math problem: what is the summation formula of the series
½(n^4 + n^3) where the pattern of n is 1, 2, 4, 8, ... so the series becomes:
½(1^4+1^3) + ½(2^4+2^3) + ½(4^4+4^3) + ½(8^4+8^3) + ...
It might help to express n as 2^k for k=0,1,2...
Substitute that into your original formula to get terms of the form (16^k + 8^k)/2.
You can break this up into two separate sums (one with base 16 and one with base 8), each of which is a geometric series.
S1 = 1/2(16^0 + 16^1 + 16^2 + ...)
S2 = 1/2(8^0 + 8^1 + 8^2 + ...)
The J-th partial sum of a geometric series is a(1-r^J)/(1-r), where a is the initial value and r the ratio between successive terms. For S1, a=1/2, r=16. For S2, a=1/2, r=8.
Multiply it out and I believe you will find that the sum of the first J terms is O(16^J).
You're asking about
½ Σ ((2^r)^4 + (2^r)^3) from r=1 to n
(Sorry for the ugly math; there's no LaTeX here.)
The result (for the sum without the leading ½, as computed in the link below) is (16/15)·16^n + (8/7)·8^n - 232/105.
See http://www.wolframalpha.com/input/?i=sum+%282%5Er%29%5E4%2B%282%5Er%29%5E3+from+r%3D1+to+n .
You don't need the exact formula. All you need to know is that this is an O(16^n) algorithm.
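A quick sanity check of that closed form (my own sketch; it targets the sum without the leading ½, matching the linked computation):

    from fractions import Fraction

    # Direct sum of (2^r)^4 + (2^r)^3 for r = 1..n versus the quoted closed form.
    def direct(n):
        return sum((2 ** r) ** 4 + (2 ** r) ** 3 for r in range(1, n + 1))

    def closed(n):
        return Fraction(16, 15) * 16 ** n + Fraction(8, 7) * 8 ** n - Fraction(232, 105)

    assert all(direct(n) == closed(n) for n in range(1, 12))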
Thanks to all of you. The final formula I was looking for (based on your work) was:
((1/15)·2^(4·(log2(n)+1)) + (1/7)·8^(log2(n)+1) - 232/105)/2 + 1
This gives the same result as the program that runs the algorithm.
It looks like your series does not converge; that is, the summation is infinite. Maybe your formula is wrong, or you asked the question incorrectly.

Interview question - Finding numbers

I just got this question on a SE position interview, and I'm not quite sure how to answer it, other than brute force:
Given a natural number N, find two numbers, A and P, such that:
N = A + (A+1) + (A+2) + ... + (A+P-1)
P should be the maximum possible.
Ex: For N=14, A = 2 and P = 4
N = 2 + (2+1) + (2+2) + (2+4-1)
N = 2 + 3 + 4 + 5
Any ideas?
If N is even/odd, we need an even/odd number of odd numbers in the sum. This already halves the number of possible solutions. E.g. for N=14, there is no point in checking any combination where P is odd.
Rewriting the formula given, we get:
N = A + (A+1) + (A+2) + ... + (A+P-1)
= P*A + 1 + 2 + ... + (P-1)
= P*A + (P-1)P/2 *
= P*(A + (P-1)/2)
= P/2*(2*A + P-1)
The last line means that N must be divisible by P/2; this also rules out a number of possibilities. E.g. 14 only has these divisors: 1, 2, 7, 14. So possible values for P would be 2, 4, 14 and 28. 14 and 28 are ruled out for obvious reasons (in fact, any P above N/2 can be ignored).
This should be a lot faster than the brute-force approach.
(* The sum of the first n natural numbers is n(n+1)/2)
With interview questions, it is often wise to think about what the purpose of the question probably is. If I were asking you this question, it would not be because I think you know the solution, but because I want to see you finding the solution. Reformulating the problem, making implications, working out what is known... this is what I would like to see.
If you just sit and tell me "I do not know how to solve it", you immediately fail the interview.
If you say "I know how to solve it by brute force, and I am aware it will probably be slow", I will give you some hints or help to get you started. If that does not help, you most likely fail (unless you show some extraordinary skills to compensate for the fact that you are probably lacking something in the field of general problem analysis, e.g. you show how to implement a solution parallelized across many cores or implemented on a GPU).
If you bring me a ready solution but you are unable to derive it, I will give you another similar problem, because I am not interested in the solution; I am interested in your thinking.
A + (A+1) + (A+2) + ... + (A+P-1) simplifies to P*A + (P*(P-1)/2), or equivalently P*(A+(P-1)/2).
Thus, you could just enumerate all divisors of 2N (from the second form, P*(2A+P-1) = 2N, so P must divide 2N), and then test each divisor P as follows:
Is A = (N-(P*(P-1)/2))/P (obtained by solving the first simplification for A) an integral number? (I assume A should be an integral number, otherwise the problem would be trivial.) If so, return it as a solution.
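A small Python sketch of this test (my own illustration; it assumes A must be a positive integer and tries candidates from large P downward so the first hit maximizes P):

    # Find (A, P) with P maximal such that N = A + (A+1) + ... + (A+P-1), A >= 1.
    def max_length_decomposition(N):
        # P(P-1)/2 must stay below N, so P is at most about sqrt(2N)
        p_max = 1
        while p_max * (p_max - 1) // 2 < N:
            p_max += 1
        for p in range(p_max, 0, -1):              # largest P first
            numerator = N - p * (p - 1) // 2       # this must equal P * A
            if numerator > 0 and numerator % p == 0:
                return numerator // p, p           # (A, P)

    print(max_length_decomposition(14))   # -> (2, 4), i.e. 14 = 2 + 3 + 4 + 5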
This can be solved using a 0-1 knapsack approach.
Observation: N/2 + (N/2 + 1) > N,
so our series is 1, 2, ..., N/2.
With the constraints W = N and v_i = 1 for all elements, I think this trivially maps to 0-1 knapsack, O(n^2).
Here is an O(n) solution.
It uses the formula for the sum of an arithmetic progression:
S = number_of_terms*(first_term + last_term)/2
Here our sum is N, the number of terms is P, and the first term is A.
Manipulating the above equation gives an expression for A, and we can iterate over P (from largest to smallest, so that the first valid A corresponds to the maximum P).
def solve(n, p):
    # candidate A for a given P, from N = P*A + P*(P-1)/2
    return (2 * n - p ** 2 + p) // (2 * p)

def condition(n, p, a):
    # A is valid if it satisfies 2N = 2AP + P^2 - P exactly and is positive
    return 2 * n == 2 * a * p + p ** 2 - p and a > 0

def find(n):
    for x in range(n, 0, -1):       # try P from largest to smallest
        a = solve(n, x)
        if condition(n, x, a):
            return n, x, a          # first hit has the maximum P
