A question regarding the Tower of Hanoi recursive algorithm's time complexity

I did a coding exercise today. After finishing the exam, I checked the results and came across a problem whose statement is as follows:
Given 4 disks in the tower of Hanoi problem, the recursive algorithm calls the same function at most ___ times.
A. 10
B. 16
C. 22
D. 31
All I know is that I selected B. 16 and it was wrong.
I searched the internet and found that the answer should be 2^n - 1 times, or 15 times for n = 4.
However, 15 is not among the options.
Which option is correct?
Any advice will be appreciated.
Thank you.

The 4-disk puzzle takes 15 moves. The number of recursive calls, though, depends on how it's implemented.
If your recursive base case is 1 disk => 1 move, then it's 15 calls. If your recursive base case is 0 disks => 0 moves, then it's 31 calls, which matches option D.
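You can check both counts by instrumenting each variant; a minimal Python sketch (function names are mine):

def calls_base_one(n):
    # Base case: 1 disk => 1 move. Returns the number of calls made.
    if n == 1:
        return 1
    # move n-1 disks aside, move the largest disk, move n-1 disks back
    return 1 + calls_base_one(n - 1) + calls_base_one(n - 1)

def calls_base_zero(n):
    # Base case: 0 disks => 0 moves, but the call itself still counts.
    if n == 0:
        return 1
    return 1 + calls_base_zero(n - 1) + calls_base_zero(n - 1)

print(calls_base_one(4))   # 15 = 2^4 - 1
print(calls_base_zero(4))  # 31 = 2^5 - 1, i.e. option D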

Related

Identifying when greedy method gives optimum solution

I was looking at this problem on leetcode
https://leetcode.com/problems/minimum-number-of-operations-to-convert-time/description/
The hints tell us to follow a greedy approach where we convert the time difference to minutes and pick the largest possible values
For the given allowed values [1,5,15,60], greedy seems to work for all cases.
But let us assume a case where the difference between the times is 46 minutes and the allowed values are [1,23,40]; then, as per greedy, the operation would take 7 steps, but the minimum possible is 2 (23 + 23).
Why don't we have a similar case for the values given in the original problem? How can it be proved that greedy is always optimal for the original problem? And how can we know, for a given set of allowed values, whether greedy gives the optimal solution or not?
This problem is very similar to the coin change problem. In general you are quite correct and the greedy solution is not optimal. However, in this case the greedy solution is optimal, since each "coin" is an exact multiple of the next smaller one:
1 * 5 = 5
5 * 3 = 15
15 * 4 = 60
Since this is the case, each 60-step can be replaced by four 15-steps or by twelve 5-steps. Therefore the greedy solution is best in this case.
In the second example you showed, the "coins" were not multiples of each other, making the greedy solution sub-optimal in some cases.
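A small sketch makes the difference concrete (the function name is mine); "greedy" here means always taking the largest allowed value that still fits:

def greedy_ops(diff, allowed):
    # Count operations when always subtracting the largest value that fits.
    ops = 0
    for v in sorted(allowed, reverse=True):
        ops += diff // v
        diff %= v
    return ops

print(greedy_ops(46, [1, 5, 15, 60]))  # 4 ops (15+15+15+1), optimal here
print(greedy_ops(46, [1, 23, 40]))     # 7 ops (40 + 6*1), but 23+23 needs only 2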

How to find the minimum time required to solve all N problems?

I was trying to solve this problem, but even after hours I am not able
to understand it completely. I am not even able to come up with any
brute-force technique. This is the question:
There are N members and N problems, and each member must solve exactly
one problem. Only one member of the team is allowed to read the
problem statements before anyone starts to solve.
Note that not everyone has read the problems at first. So, to solve
problems a member needs to learn the statements from some teammate who
already knows them. After learning the problems once, a member is
eligible to explain them to other teammates (one teammate at a time).
You can assume that the explaining (whether 1 or N problems) will
always take D minutes. During the explaining, neither of the two
involved members will be able to do anything else.
Problems are of different difficulty levels. You can assume that it
will take Ti minutes to solve the ith problem, regardless of which
member solves it.
Given a team's data, what is the minimum possible time in which they
can solve all problems?
Input
N D
2 100
T=[1 2]
Output
102
Member 1 is allowed to read the problems before the start time. He
starts explaining the problems to Member 2 when the contest starts.
Explaining ends at the 100th minute. Then both of them immediately
start solving problems in parallel. Member 1 solves the 1st problem at
the 101st minute and Member 2 solves the 2nd problem at the 102nd
minute.
What is the best way to break down and approach this type of problem?
This reminds me of Huffman coding.
I am not sure if the following approach is optimal, but it will probably give a good answer in practice.
Pick the easiest two problems T0 and T1 and replace them by a single problem consisting of time D+max(T0,T1).
Repeat until you have a single problem left; its time is the answer (a code sketch follows the worked example below).
Finding the two easiest problems can be done in O(logN) if you store the problems in a binary heap, so overall this is O(NlogN).
Example
Suppose we have 1,3,4,6 and D=2.
We first combine 1 and 3 to make 2+max(1,3)=5. The new list is 4,5,6
We then combine 4 and 5 to make 2+max(4,5)=7. The new list is 6,7.
We then combine 6 and 7 to make 2+max(6,7)=9.
This represents the following procedure.
t=0 A shares with B
t=2 A starts problem 6, B shares with C
t=4 B starts problem 4, C shares with D
t=6 C starts problem 3, D starts problem 1
t=7 D finishes
t=8 A finishes, B finishes
t=9 C finishes
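A minimal sketch of that merging procedure with a binary heap (the function name is mine); it reproduces both the worked example above and the sample input from the question:

import heapq

def merge_heuristic(times, d):
    # Repeatedly replace the two easiest problems t0, t1 with one
    # pseudo-problem of duration d + max(t0, t1); the last survivor
    # is the estimated finish time. O(N log N) overall.
    heap = list(times)
    heapq.heapify(heap)
    while len(heap) > 1:
        t0 = heapq.heappop(heap)
        t1 = heapq.heappop(heap)
        heapq.heappush(heap, d + max(t0, t1))
    return heap[0]

print(merge_heuristic([1, 3, 4, 6], 2))  # 9, as in the example above
print(merge_heuristic([1, 2], 100))      # 102, matching the sample output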
Every member of the team (except the one who read the problems)
must hear the problems. That is, problems must be told N - 1 times.
For N = 2 this can be done in D minutes,
for 2 < N <= 4 in 2D minutes,
for 4 < N <= 8 in 3D minutes, etc.
If N is not an exact power of 2 then some people must finish telling
the problems at least D minutes sooner than others.
The ones who finish early can work on
the hardest problems, leaving easier problems for the ones who finish later.
If some of the problems take time Ti > D and N is neither an exact
power of 2 nor one less than an exact power of 2, you may want to have
someone stop telling problems more than D minutes before
the last problem-telling is finished.
If some of the problems take time Ti > 2D then you may need to consider
making some people stop telling problems and start working on the really
hard problems sooner even if N is an exact power of 2.
Since the solving of one problem is in every member's critical path,
but telling is in multiple members' critical paths,
it makes no sense for anyone to solve a problem until they are finished
with all the telling of problems they are going to do.
After each D minutes the number of people who know the problems
increases by the number who were telling problems.
The number who are telling problems increases by the number who
were telling problems (that is, the number who have just learned the
problems) minus the number who start working on problems at that time.
A good "brute force" approach might be to sort the problems
by difficulty; then find out the time until the last person hears
the problems if nobody starts working on them before then;
find out when the last person finishes;
then try starting problems D minutes earlier, or 2D minutes,
or 3D minutes, etc., but never start a shorter-running
problem before a longer-running one.
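Sketching just the baseline of that search (the start-D-minutes-earlier refinement is omitted, and the function name is mine):

import math

def baseline_finish(times, d):
    # The number of people who know the problems can double every D
    # minutes, so everyone knows after ceil(log2(N)) rounds; if nobody
    # starts solving before then, the makespan is driven by the hardest
    # problem. This is only the search's starting point, not the optimum.
    n = len(times)
    telling = math.ceil(math.log2(n)) * d if n > 1 else 0
    return telling + max(times)

print(baseline_finish([1, 2], 100))  # 100 + 2 = 102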
The problem statement is somewhat ambiguous about the explaining part. Depending on how the statement is interpreted, the following solution is possible:
If you assume that you can explain all N problems in D minutes, then it takes D/N minutes to explain one problem. Let's call that Te, for "time to explain". The time to solve problem i is Ti, which in the example below is equal to i minutes.
So at the start of the contest, Member 1 (who knows all of the problems) explains problem N to Member 2. That takes Te minutes. Member 2 then begins working on problem N (which will take N minutes to solve), and Member 1 starts explaining problem N-1 to Member 3. This continues until Member 1 explains problem 2 to Member N. At that point, Member N starts working on problem 2, and Member 1 starts working on problem 1.
Let's say that there are 4 problems, 4 team members, and D=8. So Te=2.
Time  Description
 0    Member 1 begins explaining Problem 4 to Member 2
 2    Member 2 begins working on Problem 4
      Member 1 begins explaining Problem 3 to Member 3
 4    Member 3 begins working on Problem 3
      Member 1 begins explaining Problem 2 to Member 4
 6    Member 2 completes Problem 4
      Member 4 begins working on Problem 2
      Member 1 begins working on Problem 1
 7    Member 3 completes Problem 3
      Member 1 completes Problem 1
 8    Member 4 completes Problem 2
This seems like the optimum solution regardless of the value of D or N: arrange it so that the problem that takes the longest to solve is started as early as possible.
I suspect that the problem statement is an English translation of a problem given in some other language or perhaps a re-translation of something that was originally written in English and translated into some other language. Because if that's the original problem statement, whoever wrote it should be barred from ever writing problems again.
The length of time it takes to complete any one task seems to be of the form C * D + T, where C is a positive integer less than N, and all N-1 lead times must be accounted for. Suppose we made a mistake and the optimal solution should actually couple some task with a longer lead time; then we would need some C * D + Tj < C * D + Ti with Ti < Tj, which is impossible.
Therefore, iterate once over the sums of pairs (assuming sorted input):
solution = max(T2 + (n-1)*D, T3 + (n-2)*D, ..., Tn + D)
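In code, the formula reads as follows (assuming ascending sorted times; the function name is mine):

def pairing_bound(times, d):
    # max(T2 + (n-1)*D, T3 + (n-2)*D, ..., Tn + D): the k-th hardest
    # problem is paired with the k-th shortest lead time, per the
    # exchange argument above. T1 is dominated by the T2 term.
    t = sorted(times)
    n = len(t)
    return max(t[i] + (n - i) * d for i in range(1, n))

print(pairing_bound([1, 2], 100))      # 102, matching the sample
print(pairing_bound([1, 3, 4, 6], 2))  # 9, matching the heuristic above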

Second-best solution to an assignment problem using the Hungarian Algorithm

For finding the best solution in the assignment problem it's easy to use the Hungarian Algorithm.
For example:
A | 3 4 2
B | 8 9 1
C | 7 9 5
Running the Hungarian Algorithm on this, you get:
A | 0 0 1
B | 5 5 0
C | 0 1 0
Which means A gets assigned to 'job' 2, B to job 3 and C to job 1.
However, I want to find the second-best solution, meaning the best solution with a cost strictly greater than the cost of the optimal solution. As far as I can tell, I just need to find the assignment with the minimal sum in the last matrix that is not the same as the optimal one. I could do this by searching a tree (with pruning), but I'm worried about the complexity (being O(n!)). Is there any efficient method for this that I don't know about?
I was thinking about a search in which I sort the rows first and then greedily choose the lowest costs, assuming that most of the lowest costs will make up the minimal sum, plus pruning. But since the Hungarian Algorithm can produce a matrix with a lot of zeros, the complexity is terrible again...
What you describe is a special case of the K best assignments problem -- there was in fact a solution to this problem proposed by Katta G. Murty in the following 1968 paper "An Algorithm for Ranking all the Assignments in Order of Increasing Cost." Operations Research 16(3):682-687.
Looks like there are actually a reasonable number of implementations of this, at least in Java and Matlab, available on the web (see e.g. here.)
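The branching step at the heart of Murty's method is easy to sketch with an off-the-shelf solver such as SciPy's linear_sum_assignment: forbid, in turn, each pair used in the optimum and re-solve. This is a rough sketch, not the full algorithm; note it finds the best assignment different from the optimum, so if several assignments tie at the optimal cost, the strictly-greater second best still needs Murty's full partitioning:

import numpy as np
from scipy.optimize import linear_sum_assignment

def second_best(cost):
    # Returns (optimal cost, cost of the next-best distinct assignment).
    cost = np.asarray(cost, dtype=float)
    rows, cols = linear_sum_assignment(cost)
    best = cost[rows, cols].sum()
    big = cost.sum() + 1.0            # finite stand-in for "forbidden"
    runner_up = np.inf
    for i, j in zip(rows, cols):
        c = cost.copy()
        c[i, j] = big                 # forbid one pair of the optimum
        r2, c2 = linear_sum_assignment(c)
        runner_up = min(runner_up, c[r2, c2].sum())
    return best, runner_up

print(second_best([[3, 4, 2],
                   [8, 9, 1],
                   [7, 9, 5]]))       # (12.0, 13.0)

Each of the n re-solves costs O(n^3), so the whole thing is O(n^4), a long way from the O(n!) tree search.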
In R, there is now an implementation of Murty's algorithm in the muRty package.
CRAN
GitHub
It covers:
Optimization in both minimum and maximum direction;
output by rank (similar to dense rank in SQL), and
the use of either Hungarian algorithm (as implemented in clue) or linear programming (as implemented in lpSolve) for solving the initial assignment(s).
Disclaimer: I'm the author of the package.

Generic solution to towers of Hanoi with variable number of poles?

Given D disks, P poles, the initial starting positions of the disks, and the required final destination of the disks, how can we write a generic solution to the problem?
For example,
Given D=6 and P=4, the initial starting position looks like this:
5 1
6 2 4 3
Where the number represents the disk's radius, the poles are numbered 1-4 left-right, and we want to stack all the disks on pole 1.
How do we choose the next move?
The solution is (worked out by hand):
3 1
4 3
4 1
2 1
3 1
(format: <from-pole> <to-pole>)
The first move is obvious: move the "4" on top of the "5", because that is its required position in the final solution.
Next, we probably want to move the next largest number, which would be the "3". But first we have to unbury it, which means we should move the "1" next. But how do we decide where to place it?
That's as far as I've gotten. I could write a recursive algorithm that tries all possible places, but I'm not sure if that's optimal.
We can't.
More precisely, as http://en.wikipedia.org/wiki/Tower_of_Hanoi#Four_pegs_and_beyond says, for 4+ pegs, proving what the optimal solution is remains an open problem. There is a known very good algorithm, widely believed to be optimal, for the simple case where the pile of disks sits on one peg and you want to transfer the whole pile to another. However, we do not have an algorithm, or even a known heuristic, for an arbitrary starting position.
If we did have a proposed algorithm, then the open problem would presumably be much easier.
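For the standard pile-to-pile case, the very good algorithm referred to above is Frame-Stewart: move the top k disks aside using all p pegs, move the remaining n-k disks with p-1 pegs, then move the k disks back. A memoized sketch of its move count (presumed optimal, not proven in general):

from functools import lru_cache

@lru_cache(maxsize=None)
def frame_stewart(n, p):
    # Frame-Stewart move count for n disks on p pegs; with 3 pegs it
    # reduces to the classic 2^n - 1.
    if n <= 1:
        return n
    if p == 3:
        return 2 ** n - 1
    # try every split: k disks moved aside with p pegs, n-k with p-1
    return min(2 * frame_stewart(k, p) + frame_stewart(n - k, p - 1)
               for k in range(1, n))

print(frame_stewart(6, 4))  # 17 moves for 6 disks on 4 pegs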

What is the complexity of the following grammar

I'm developing a generalized parsing algorithm and I'm testing it with the following rule:
S ::= a | SS
The algorithm shows me all the trees generated for a string composed of n a's.
For example, the following table shows the time used by the algorithm as a function of the number of a's:
length    trees   time(ms)
     1        1         1
     2        1         1
     3        2         2
     4        5         2
     5       14         2
     6       42         2
     7      132         5
     8      429        13
     9     1430        28
    10     4862        75
    11    16796       225
    12    58786       471
    13   208012      1877
    14   742900     10206
I don't know the big-O complexity of my algorithm. How can I measure it? Of course, the time depends on three things:
The length of the string to parse
The grammar complexity
The performance of the algorithm
S can match any string of all a's.
Any binary tree with n leaf nodes could be a parse tree, and the number of such trees is given by the Catalan numbers.
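You can check the trees column directly: the number of binary trees with n leaves is the (n-1)-th Catalan number. For instance:

import math

def trees(n):
    # Binary trees with n leaves: Catalan(n-1) = C(2n-2, n-1) / n.
    return math.comb(2 * n - 2, n - 1) // n

print([trees(n) for n in range(1, 15)])
# [1, 1, 2, 5, 14, 42, 132, 429, 1430, 4862, 16796, 58786, 208012, 742900]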
Big-O isn't a matter of measuring run-times; that's profiling. Big-O is a matter of algorithm analysis, which is impossible without seeing the algorithm.
Broadly speaking, break the algorithm down into basic operations, loops and recursive calls. The basic operations have a defined timing (generally, O(1)). The timing of loops is the number of iterations times the timing of the loop body. Recursion is trickier: you have to define the timing in terms of a recurrence relation, then solve for an explicit solution. Looking at the process call tree can offer hints as to what the explicit solution might be.
We can't know the complexity either, because you didn't post the algorithm. But there is certainly a chance that you have an implementation that blows up pretty badly. The problem is not necessarily in the algorithm, though, but in the grammar itself. A suitable preprocessor for the grammar could rewrite it to the more natural form
S ::= a | a S
which would be much more efficient to handle.
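Under the rewritten rule there is exactly one parse per string, so even a naive recursive-descent parser runs in linear time; a minimal sketch:

def parse(s, i=0):
    # Recursive descent for S ::= a | a S -- one call per character and
    # a single parse tree, versus the Catalan blow-up of S ::= a | SS.
    if i >= len(s) or s[i] != 'a':
        return None                       # no match
    if i + 1 == len(s):
        return ('a',)                     # S ::= a
    rest = parse(s, i + 1)
    return ('a', rest) if rest else None  # S ::= a S

print(parse('aaaa'))  # ('a', ('a', ('a', ('a',))))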
