Stuck at Algorithm pseudocode generation

I do not know what to do next (and even if my approach is correct) in the following problem:
Part 1
Part 2
I have just figured out that a possible MNT (for part a) is to get a jar, test whether it breaks from height h; if it does, that's the answer; if not, increase the height by 1 and keep looping.
For part b my idea is the following. Since we know the max height equals n, we start from n (current height = n). We then go from top to bottom, adding to our broken-jar count (the jars are supposed to break if you start from the top), until the jars stop breaking. Then the answer would be current height + 1 (because we need to go back one index).
For part c, I don't even know what my approach would be. I am assuming that the order of the algorithm is O(n^c) where c is a fraction, and I also know that O(n^c) is faster than O(n).
I also noted that there is a problem similar to this one online, but it talks about rungs instead of a robotic arm. Maybe it is similar? Here is the link
Do you have any recommendations/clues? Any help will be appreciated.
Thank you for your time and help in advance.
Cheers!

This is an answer for part (c).
The idea is to find some number k and apply the following scheme:
Drop a jar from height k:
If it breaks, drop the other one from 1 up to k-1 (bottom-up, since only one jar is left) until we find the height at which it breaks, in no more than k tries in total.
If it doesn't break, drop it again from height k + (k-1). Again, if it breaks, drop the other one from k+1 up to k+(k-1)-1; otherwise continue to k + (k-1) + (k-2).
Continue this until you find the height
(of course if at some point you need to jump to a height greater than n, you just jump to n).
This scheme ensures we'll use at most k tries. So now the question is how to find a minimal k (as a function of n) for which the scheme will work. Since, at every step, we reduce our height advancement by 1, the following inequality must hold:
k + (k-1) + (k-2) + ... + 1 >= n
Otherwise we will "run out" of steps before reaching n. We want to find the smallest k for which the inequality holds.
There's a formula to the sum:
1 + 2 + ... + k = k(k+1)/2
Using that we get the equation:
k(k+1)/2 = n ===> k^2 + k - 2n = 0
Solving this (and if it's not integral take the ceiling of it) will give us k. Quadratic equations might have two solutions, but ignoring the negative one you get:
k = (-1 + sqrt(1 + 8n))/2
Looking at the complexity, we can ignore everything but the n, which has an exponent of 1/2 (since we're taking its square root). That is actually better than the requested complexity of n^(2/3).
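To make the scheme concrete, here is a small Python sketch (my own illustration, not from the original answer). breaks(h) is a hypothetical oracle that reports whether a jar breaks when dropped from height h; the function returns the highest safe height using two jars and roughly sqrt(2n) drops:

from math import ceil, sqrt

def find_threshold_with_two_jars(n, breaks):
    # smallest k with k(k+1)/2 >= n
    k = ceil((-1 + sqrt(1 + 8 * n)) / 2)
    prev, h, step = 0, min(k, n), k
    # first jar: jump by k, k-1, k-2, ... (clamped to n) until it breaks
    while not breaks(h):
        if h == n:
            return n              # never breaks, even at the top
        prev = h
        step -= 1
        h = min(h + step, n)
    # the first jar broke at h: scan (prev, h) upwards with the second jar
    for x in range(prev + 1, h):
        if breaks(x):
            return x - 1
    return h - 1

For example, with n = 100 the first jar is dropped from heights 14, 27, 39, ... and at most about 14 drops are used in total.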

For part (a) you can use binary search over the height. Pseudocode for it is below:

lo = 0
hi = n
while (lo < hi) {
    mid = lo + (hi - lo + 1) / 2;   // round the midpoint up so the loop always makes progress
    if (glass_breaks(mid)) {
        hi = mid - 1;
    } else {
        lo = mid;
    }
}

'lo' will contain the maximum possible (safe) height, found in the minimum possible number of trials. It will take log(n) steps in the worst case, whereas your approach may take n steps in the worst case.
For part (b), you can use your incremental approach from part (a): start from the minimum height and increase the height by 1 until the glass breaks. This breaks at most 1 glass while determining the required height.
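A matching sketch for part (b), again with a hypothetical breaks(h) oracle (with only one jar allowed to break, we must probe bottom-up):

def find_threshold_with_one_jar(n, breaks):
    # a single jar cannot risk overshooting, so probe heights 1, 2, 3, ...
    for h in range(1, n + 1):
        if breaks(h):
            return h - 1      # highest safe height
    return n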

Related

What is the time complexity of this BFS algorithm?

I looked at LeetCode question 279. Perfect Squares:
Given an integer n, return the least number of perfect square numbers that sum to n.
A perfect square is an integer that is the square of an integer; in other words, it is the product of some integer with itself. For example, 1, 4, 9, and 16 are perfect squares while 3 and 11 are not.
Example 1:
Input: n = 12
Output: 3
Explanation: 12 = 4 + 4 + 4.
I solved it using the following algorithm:
def numSquares(n):
    squares = [i**2 for i in range(1, int(n**0.5)+1)]
    step = 1
    queue = {n}
    while queue:
        tempQueue = set()
        for node in queue:
            for square in squares:
                if node-square == 0:
                    return step
                if node < square:
                    break
                tempQueue.add(node-square)
        queue = tempQueue
        step += 1
It basically tries to go from the goal number down to 0 by subtracting each possible perfect square, which are [1, 4, 9, ..., ⌊√n⌋²], and then does the same work for each of the numbers obtained.
Question
What is the time complexity of this algorithm? The branching in every level is sqrt(n) times, but some branches are destined to end early... which makes me wonder how to derive the time complexity.
If you think about what you're doing, you can imagine that you're doing a breadth-first search over a graph with n + 1 nodes (all the natural numbers between 0 and n, inclusive) and some number of edges m, which we'll determine later on. Your graph is essentially represented as an adjacency list, since at each point you iterate over all the outgoing edges (squares less than or equal to your number) and stop as soon as you consider a square that's too large. As a result, the runtime will be O(n + m), and all we have to do now is work out what m is.
(There's another cost here in computing all the perfect squares up to and including n, but that takes time O(n^(1/2)), which is dominated by the O(n) term.)
If you think about it, the number of outgoing edges from each number k will be given by the number of perfect squares less than or equal to k. That value is equal to ⌊√k⌋ (check this for a few examples - it works!). This means that the total number of edges is upper-bounded by
√0 + √1 + √2 + ... + √n
We can show that this sum is Θ(n^(3/2)). First, we'll upper-bound the sum by O(n^(3/2)), which we can do by noting that
√0 + √1 + √2 + ... + √n
≤ √n + √n + √n + ... + √n    ((n+1) times)
= (n + 1)√n
= O(n^(3/2)).
To lower-bound this at Ω(n^(3/2)), notice that
√0 + √1 + √2 + ... + √n
≥ √(n/2) + √(n/2 + 1) + ... + √n    (drop the first half of the terms)
≥ √(n/2) + √(n/2) + ... + √(n/2)
= (n/2)·√(n/2)
= Ω(n^(3/2)).
So overall, the number of edges is Θ(n^(3/2)), so using a regular analysis of breadth-first search we can see that the runtime will be O(n^(3/2)).
This bound is likely not tight, because this assumes that you visit every single node and every single edge, which isn't going to happen. However, I'm not sure how to tighten things much beyond this.
As a note - this would be a great place to use A* search instead of breadth-first search, since you can fairly easily come up with heuristics to underestimate the remaining total distance (say, take the number and divide it by the largest perfect square less than it). That would cause the search to focus on extremely promising paths that jump rapidly toward 0 before less-good paths, like, say, always taking steps of size one.
Hope this helps!
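To illustrate the A* suggestion, here is a rough sketch of my own (not the poster's code), under the assumption that the heuristic is "remaining value divided by the largest square not exceeding it", rounded up, which never overestimates the number of terms still needed:

import heapq
from math import isqrt

def num_squares_astar(n):
    def h(v):
        if v == 0:
            return 0
        s = isqrt(v) ** 2            # largest perfect square <= v
        return -(-v // s)            # ceil(v / s), an admissible estimate
    best = {n: 0}
    frontier = [(h(n), 0, n)]        # entries are (g + h, g, node)
    while frontier:
        f, g, v = heapq.heappop(frontier)
        if v == 0:
            return g
        if g > best.get(v, float("inf")):
            continue                 # stale queue entry
        for i in range(isqrt(v), 0, -1):
            w = v - i * i
            if g + 1 < best.get(w, float("inf")):
                best[w] = g + 1
                heapq.heappush(frontier, (g + 1 + h(w), g + 1, w))

print(num_squares_astar(12))         # 3, matching the example above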
Some observations:
The number of squares up to n is √n (floored to the nearest integer)
After the first iteration of the while loop, tempQueue will have √n entries
tempQueue can never have more than n entries, since all these values are positive, less than n and unique.
Every natural number can be written as the sum of four integer squares. So that means your BFS algorithm's while loop will iterate at the most 4 times. If the return statement did not get executed during any of the first 3 iterations, it is guaranteed it will in the 4th.
Every statement (except for the initialisation of squares) runs in constant time, even the call to .add().
The initialisation of squares has a list comprehension loop that has √n iterations, and range runs in constant time, so that initialisation has a time complexity of O(√n).
Now we can set a ceiling to the number of times the if node-square == 0 statement is executed (or any other statement in the innermost loop's body):
1⋅√n + √n⋅√n + n⋅√n + n⋅√n
Each of the 4 terms corresponds to an iteration of the while loop. The left factor of each product corresponds to the maximum size of queue in that particular iteration, and the factor at the right corresponds to the size of squares (always the same). This simplifies to:
√n + n + 2n^(3/2)
In terms of time complexity this is:
O(n^(3/2))
This is the worst case time complexity. When the while loop only has to iterate twice, it is O(n), and when only once (when n is a square), it is O(√n).

Hot and Cold Binary Search Game

Hot or cold.
I think you have to do some sort of binary search but I'm not sure how.
Your goal is to guess a secret integer between 1 and N. You repeatedly guess integers between 1 and N. After each guess you learn if it equals the secret integer (and the game stops); otherwise (starting with the second guess), you learn if the guess is hotter (closer to) or colder (farther from) the secret number than your previous guess. Design an algorithm that finds the secret number in lg N + O(1) guesses.
Hint: Design an algorithm that solves the problem in lg N + O(1) guesses assuming you are permitted to guess integers in the range -N to 2N.
I've been racking my brain and I can't seem to come up with an lg N + O(1) solution.
I found this: http://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi?board=riddles_cs;action=display;num=1316188034 but could not understand the diagram and it did not describe other possible cases.
Suppose you know that your secret integer is in [a,b], and that your last guess is c.
You want to divide your interval by two, and to know whether your secret integer lies in between [a,m] or [m,b], with m=(a+b)/2.
The trick is to guess d, such that (c+d)/2 = (a+b)/2.
Without loss of generality, we can suppose that d is bigger than c. Then, if d is hotter than c, your secret integer will be bigger than (c+d)/2 = (a+b)/2 = m, and so your secret integer will lie in [m,b]. If d is cooler than c, your secret integer will belong to [a,m].
You need to be able to guess between -N and 2N because you can't guarantee that c and d as defined above will always be in [a,b]. Your first two guesses can be 1 and N.
So, you are dividing your interval by two at each guess, so the complexity is log(N) + O(1).
A short example to illustrate this (results chosen randomly):

Guess   Result   Interval of the secret number
1       ***      [1,    N    ]    // d = a + b - c
N       cooler   [1,    N/2  ]    // N = 1 + N - 1
-N/2    cooler   [N/4,  N/2  ]    // -N/2 = 1 + N/2 - N
5N/4    hotter   [3N/8, N/2  ]    // 5N/4 = N/4 + N/2 + N/2
-3N/8   hotter   [3N/8, 7N/16]    // -3N/8 = 3N/8 + N/2 - 5N/4
...
Edit, suggested by @tmyklebu:
We still need to prove that our guesses will always fall within [-N, 2N].
By induction, suppose that c (our previous guess) is in [a-(a+b), b+(a+b)] = [-b, a+2b].
Then d = a+b-c <= a+b-(-b) = a+2b and d = a+b-c >= a+b-(a+2b) = -b.
Initial case: a=1, b=N, c=1; c is indeed in [-b, a+2b].
QED
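Here is a small Python sketch of this scheme (my own illustration). oracle(g) is a hypothetical judge returning 'correct', 'hotter' or 'colder' compared with the previous guess; ties are assumed to be reported as 'colder', and the last two candidates are checked directly, since very small intervals need care:

def guess_secret(n, oracle):
    a, b = 1, n
    c = 1                              # first, reference guess
    if oracle(c) == 'correct':
        return c
    while b - a > 1:
        d = a + b - c                  # mirror c about the midpoint (a+b)/2
        res = oracle(d)
        if res == 'correct':
            return d
        m = (a + b) // 2
        if (res == 'hotter') == (d > c):
            a = m + 1                  # secret lies in the upper half
        else:
            b = m                      # secret lies in the lower half
        c = d                          # d becomes the new previous guess
    return a if oracle(a) == 'correct' else b

Each mirror guess halves [a, b], so roughly lg N guesses are used; the guesses d may stray outside [1, N] but stay within [-N, 2N], as proved above.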
This was a task at IOI 2010, for which I sat on the Host Scientific Committee. (We asked for an optimal solution instead of simply lg N + O(1), and what follows is not quite optimal.)
Not swinging outside -N .. 2N and using lg N + 2 guesses is straightforward; all you need to do is show that the obvious translation of binary search works.
Once you have something that doesn't swing outside -N .. 2N and takes lg N + 2 guesses, do this:
Guess N/2, then N/2+1. This tells you which half of the array the answer is in. Then guess the end of that half-array. You're either in one of the two "middle" quarters or you're in one of the two "end" quarters. If you're in a middle quarter, do the thing before and you win in lg N + 4 guesses. The ends are slightly trickier.
Suppose I need to guess a number in 1 .. K without straying outside 1 .. N and my last guess was 1. If I guess K/2 and I'm colder, then I next guess 1; I spent two guesses to get a similar subproblem that's 1/4 the size. If K/2 is hotter, I know the answer is in K/4 .. K. Guess K/2-1 next. The two subcases are K/4 .. K/2-1 and K/2 .. K, both of which are nice. But it took me three guesses to (in the worst-case) halve the size of the problem; if I ever do this, I wind up doing lg N + 6 guesses.
The solution is close to binary search. At each step you have an interval that the number can be in. Start with the whole interval [1, N]. First guess both ends - that is the numbers 1 and N. One of them will be closer, thus you will know that now the number you are searching for is in [1, N/2] or in [N/2 + 1, N](considering N even for simplicity). Now you go to the next step having a twice smaller interval. Continue using the same approach. Keep in mind that you've already probed one of the ends, however it may not be your last guess.
I am not sure what you mean by lg N + O(1), but the approach I suggest will perform O(log(N)) operations and in the worst case it will do exactly log4(N) probes.
Here're my two cents for this problem, since I got obsessed with it for two days. I'm not going to say anything new to what others have already said, but I'm going to explain it in a way that might get some people to understand the solution easily(or at least that was the way I managed to understand it).
Drawing from the ~ 2 lg N solution, if I knew that solution existed in [a, b] I'd want to know if it's in the left half [a, (a + b) / 2] or the right half [(a + b) / 2, b], with the point (a + b) / 2 separating the two halves. So what do I do? I guess a then b; if I get colder with b I know I'm in the first(left) half, if I get hotter I know I'm in the second(right) one. So guessing a and b is the way to know the secret integer position with respect to the mid point (a + b) / 2. However a and b aren't the only points that I can guess at to know the secret position. (a - 1, b + 1), (a - 2, b + 2), ... etc are all valid pairs of points to guess at to know the secret position, as the mid point of all these pairs is (a + b) / 2, the mid point of the original interval [a, b]. In fact any two numbers c and d such that (c + d) / 2 = (a + b) / 2 can be used.
So considering [a, b] as the interval we know the secret integer exists within, take c to be the last number we guessed. We want to determine the position of the secret with respect to the mid point (a + b) / 2, so we need a new number d to guess at to know the secret's position relative to (a + b) / 2. How do we know such a number d? By solving the equation (c + d) / 2 = (a + b) / 2, which yields d = a + b - c. Guessing at that d, we shrink the range [a, b] appropriately based on the answer (colder or hotter) and then repeat the process, taking d as our last guess and trying a new guess at a number e, for example, with the same conditions.
To establish the initial conditions, we should start with a = 1, b = N, and c = 1. We guess at c to establish a reference (since the first guess can't tell you anything useful, as there were no prior guesses). We then proceed with new guesses, adjusting the enclosing interval as appropriate with each guess. The table in @R2B2's answer explains it all.
You have to be vigilant, however, when trying to code this solution. When I tried to code it in Python, I first ran into the mistake of getting [a, b] stuck when it was small enough (like [a, a + 1]) such that neither a nor b would move inwards. I had to pull the cases where the interval size was 2 out of the loop and handle them separately (as I did with intervals of size 1).
The task is quite brain-cracking, but I'll try to keep it simple.
To solve the problem in 2 lg N guesses, make the following guesses:
1 and N: to decide which half is hotter, (1, N/2) or (N/2, N). Suppose (N/2, N) is the hotter half; then make the next guesses
N/2 and N: again to decide which half is hotter, (N/2, 3N/4) or (3N/4, N), and so on.
... so we need 2 guesses for every halving, therefore we end up making 2 lg N guesses.
Here we see that each time we need to repeat one of the previous interval's borders one more time; in our example the point N is repeated.
To solve the problem in 1 lg N guesses instead of 2 lg N, we need a way to spend only one guess for each halving step. A good illustration of such a method is the image at http://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi?board=riddles_cs;action=display;num=1316188034
The idea is to avoid repeating a border point of the interval in subsequent steps, and to be smart enough to spend only one guess per halving step by mirroring points.
The smart idea is that when we want to decide which half of the current interval is hotter, we don't need to probe its borders; we can just as well probe points located outside its borders. These guessing points must be at equal distances from (in other words, mirrored about) the center of the interval of interest, and nothing bad happens if they are negative (< 0) or greater than N (this is not prohibited by the conditions of the task).
So to find the secret we can freely make guesses using points in the interval (-N, 2N).
public class HotOrCold {
    static int num = 5000; // number to search
    private static int prev = 0;

    public static void main(String[] args) {
        System.out.println(guess(1, Integer.MAX_VALUE));
    }

    public static int guess(int lo, int hi) {
        while (hi >= lo) {
            boolean one = false;
            boolean two = false;
            if (hi == lo) {
                return lo;
            }
            if (isHot(lo)) {
                one = true;
            }
            if (isHot(hi)) {
                two = true;
            }
            if (!(one || two)) {
                return (hi + lo) / 2; // equal distance
            }
            if (two) {
                lo = hi - (hi - lo) / 2;
                continue; // checking two before as it's hotter than lo so ignoring the lo
            }
            if (one) {
                hi = lo + (hi - lo) / 2;
            }
        }
        return 0;
    }

    public static boolean isHot(int curr) {
        boolean hot = false;
        if (Math.abs(num - curr) < Math.abs(num - prev)) {
            hot = true;
        }
        prev = curr;
        return hot;
    }
}

Algorithm to compute k fractions of form 1/r summing up to 1

Given k, we need to write 1 as a sum of k fractions of the form 1/r.
For example,
For k=2, 1 can uniquely be written as 1/2 + 1/2.
For k=3, 1 can be written as 1/3 + 1/3 + 1/3 or 1/2 + 1/4 + 1/4 or 1/6 + 1/3 + 1/2
Now, we need to consider all such sets of k fractions that sum up to 1 and return the highest denominator among all such sets; for instance, in the k=3 case above, our algorithm should return 6.
I came across this problem in a coding competition and couldn't come up with an algorithm for it. A bit of Google searching later revealed that such fractions are called Egyptian Fractions, but those are probably sets of distinct fractions summing up to a particular value (not like 1/2 + 1/2). Also, I couldn't find an algorithm to compute Egyptian Fractions (if they are at all helpful for this problem) when their number is restricted by k.
If all you want to do is find the largest denominator, there's no reason to find all the possibilities. You can do this very simply:
public long largestDenominator(int k) {
    long denominator = 1;
    for (int i = 1; i < k; i++) {
        denominator *= denominator + 1;
    }
    return denominator;
}
For you recursive types:
public long largestDenominator(int k) {
    if (k == 1)
        return 1;
    long last = largestDenominator(k - 1);
    return last * (last + 1); // or (last * last) + last
}
Why is it that simple?
To create the set, you need to insert the largest fraction that will keep it under 1 at each step(except the last). By "largest fraction", I mean by value, meaning the smallest denominator.
For the simple case k=3, that means you start with 1/2. You can't fit another half, so you go with 1/3. Then 1/6 is left over, giving you three terms.
For the next case k=4, you take that 1/6 off the end, since it won't fit under one, and we need room for another term. Replace it with 1/7, since that's the biggest value that fits. The remainder is 1/42.
Repeat as needed.
For example:
2 : [2,2]
3 : [2,3,6]
4 : [2,3,7,42]
5 : [2,3,7,43,1806]
6 : [2,3,7,43,1807,3263442]
As you can see, it rapidly becomes very large. Rapidly enough that you'll overflow a long if k>7. If you need to do so, you'll need to find an appropriate container (ie. BigInteger in Java/C#).
It maps perfectly to this sequence:
a(n) = a(n-1)^2 + a(n-1), a(0)=1.
You can also see the relationship to Sylvester's sequence:
a(n+1) = a(n)^2 - a(n) + 1, a(0) = 2
Wikipedia has a very nice article explaining the relationship between the two, as pointed out by Peter in the comments.
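As a side note, the same loop works unchanged in Python, whose integers are arbitrary precision, so the overflow concern mentioned above does not arise (a quick sketch):

def largest_denominator(k):
    d = 1
    for _ in range(k - 1):
        d *= d + 1                 # the next denominator is d * (d + 1)
    return d

print([largest_denominator(k) for k in range(2, 7)])
# [2, 6, 42, 1806, 3263442]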
I never heard of Egyptian fractions before but here are some thoughts:
Idea
You can think of them geometrically:
Start with a unit square (1x1)
Draw either vertical or horizontal lines dividing the square into equal parts.
Repeat optionally the drawing of lines inside any of the sub-boxes evenly.
Stop any time you want.
The rectangles present will form a set of fractions of the form 1/n that add to 1.
You can count them and they might equal your 'k'.
Depending on how many equal sections you divided a rectangle into, it will tell you whether you have 1/2 or 1/3 or whatever. 1/6 is 1/2 of 1/3, or 1/3 of 1/2 (i.e. you divided by 2 and then one of the sub-boxes by 3, OR the other way around).
Idea 2
You start with 1 box. This is the fraction 1/1 with k=1.
When you subdivide a box into n parts, you add n to the count of boxes (k, the number of fractions summed) and subtract 1 (for the box you split).
When you subdivide any of those boxes again, you likewise subtract 1 and add n, the number of divisions. Note that n-1 is the number of lines you drew to divide them.
More
You are going to start searching for the answer with k. Obviously k * 1/k = 1 so you have one solution right there.
How about k-1?
There's a solution there: (k-2) * 1/(k-1) + 2 * (1/((k-1)*2))
How did I get that? I made k-1 equal sections (with k-2 vertical lines) and then divided the last one in half horizontally.
Each solution is going to consist of:
taking a prior solution
using j fewer lines at some stage and dividing one of the boxes or sub-boxes into j+1 equal sections.
I don't know if all solutions can be formed by repeating this rule starting from k * 1/k
I do know you can get effective duplicates this way. For example: k * 1/k with j = 1 => (k-2) * 1/(k-1) + 2 * (1/((k-1)*2)) [from above] but k * 1/k with j = (k-2) => 2 * (1/((k-1)*2)) + (k-2) * 1/(k-1) [which just reverses the order of the parts]
Interesting
k = 7 can be represented by 1/2 + 1/4 + 1/8 + ... + 1/(2^6) + 1/(2^6) and the general case is 1/2 + ... + 1/(2^(k-1)) + 1/(2^(k-1)).
Similarly, any odd k can be represented by 2·(1/3) + 2·(1/9) + ... + 2·(1/3^((k-3)/2)) + 3·(1/3^((k-1)/2)).
I suspect there are similar patterns for all integers up to k.
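These patterns are easy to check mechanically. Here is a small sketch using Python's exact fractions (the odd-k pattern is written out here as two copies of each power of 1/3 plus three copies of the last one, as in the line above):

from fractions import Fraction

def powers_of_two_pattern(k):
    # 1/2 + 1/4 + ... + 1/2^(k-1), with the last term repeated once
    parts = [Fraction(1, 2 ** i) for i in range(1, k)]
    parts.append(Fraction(1, 2 ** (k - 1)))
    assert sum(parts) == 1 and len(parts) == k
    return parts

def powers_of_three_pattern(k):        # odd k >= 3
    m = (k - 1) // 2
    parts = [Fraction(1, 3 ** i) for i in range(1, m) for _ in range(2)]
    parts += [Fraction(1, 3 ** m)] * 3
    assert sum(parts) == 1 and len(parts) == k
    return parts

print(powers_of_two_pattern(7))        # seven unit fractions summing to 1
print(powers_of_three_pattern(7))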

recursion tree method of solving recurrences

I was practicing the recursion tree method using this link: http://www.cs.cornell.edu/courses/cs3110/2012sp/lectures/lec20-master/lec20.html. The 1st example was okay, but in the second example he calculates the height of the tree as log base 3/2 of n. Can anyone tell me how he calculated that height? Maybe a dumb question, but I can't understand it! :|
Let me try explain it. The recursion formula you have is T(n) = T(n/3) + T(2n/3) + n. It says, you are making a recursion tree that splits into two subtrees of sizes n/3, 2n/3, and costs n at that level.
If you look at it, the height is determined by the height of the largest subtree (+1). Here the right subtree, the one with 2n/3 elements, will drive the height. OK?
If the above sentence is clear to you, let's calculate the height. At depth 1 we will have n*(2/3) elements, at depth 2 we have n*(2/3)^2 elements, ... and we keep splitting until we have one element left. Thus at height h:
n*(2/3)^h <= 1
(take the log of both sides)
log(n) + h*log(2/3) <= 0
(log is an increasing function)
h*log(3/2) >= log(n)
h >= log(n)/log(3/2)
h >= log_{3/2}(n)
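A quick numeric sanity check of that bound (just a sketch: follow the larger 2n/3 branch, ignoring integer rounding; the loop count comes out within one of log_{3/2}(n)):

import math

def longest_path_height(n):
    # follow the bigger (2n/3) child until at most one element remains
    h = 0
    while n > 1:
        n = 2 * n / 3
        h += 1
    return h

for n in (10, 1000, 10 ** 6):
    print(n, longest_path_height(n), math.log(n, 1.5))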
I would suggest reading Master Method for Recursion from Introduction to Algorithms - CLRS.

Running time: Finding k smallest element using Selection Sort

I suppose answer is kn? But when I try drawing the tree, it looked like
So I must have done something wrong in the more detailed analysis?
First, your work list has length k+2 when it should probably have length k. My guess is that you meant to run from n to n-(k-1) = n-k+1.
Now if you want to sum consecutive numbers, the easiest is to remember (or derive) the formula
1 + 2 + ... + a = a(a+1)/2
Use this to figure out that the sum you're after is
n(n+1)/2 - (n-k)(n-k+1)/2 = nk + (k-k^2)/2
as you correctly found. Now, think about big O. Since n > k, we know nk > k^2, so the latter term is a lower-order term, and the whole thing is O(nk).
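For concreteness, here is a sketch of the partial selection sort being analysed (each of the k passes scans the remaining tail, so the work is n + (n-1) + ... + (n-k+1), i.e. O(nk)):

def k_smallest_by_selection(a, k):
    a = list(a)
    for i in range(k):
        # scan of length n - i to find the minimum of the unsorted tail
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]
    return a[:k]

print(k_smallest_by_selection([5, 2, 8, 1, 9, 3], 3))   # [1, 2, 3]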

Resources