DP approach for the n-puzzle problem - logic

Is there a DP approach for the n-puzzle problem?
thanks all, appreciate that...
rajan

Dynamic Programming is a technique used to solve problems by reducing difficult cases to simpler cases, recursively, until you reach a case simple enough to solve "by inspection". Therefore, there can only be a sensible DP approach to the n-puzzle problem if, at each stage, you can consider a move which reduces the complexity of the problem.
For instance, if the first "move" in a n-puzzle always made it into an "(n-1)-puzzle" (for some concrete definition of "move", and assuming an (n-1)-puzzle made sense), then you could apply DP, eventually solving the "1-puzzle", and composing back upwards to solve the n-puzzle.
I don't know of any such simplification process for the n-puzzle, and I can't think of one at the moment. However, that doesn't mean one doesn't exist.
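To make the shape of such an approach concrete, here is a purely hypothetical sketch in Python; solve_by_inspection, reduce_to_smaller, and compose are invented names for steps that, as noted above, are not known to exist for the n-puzzle:

    def solve_puzzle(n, state):
        # Hypothetical DP skeleton: none of these helpers are known to
        # exist for the n-puzzle; they only illustrate the
        # reduce-and-compose pattern described above.
        if n == 1:
            return solve_by_inspection(state)  # simple enough to solve directly
        # One "move" that turns the n-puzzle into an (n-1)-puzzle
        smaller_state, move = reduce_to_smaller(n, state)
        # Solve the smaller puzzle, then compose back upwards
        return compose(move, solve_puzzle(n - 1, smaller_state))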

Related

Is applying AI OK and/or practical for finding the optimal solution to an algorithmic problem?

Both in learning environments and in practice, I have from time to time had to use different algorithms to solve problems. But the more I use them, the more it seems like AI could be deployed to try to find the optimal solution, especially for NP-complete problems, since the AI's "progression" is easily tracked.
If, for example, we never knew how to solve the knapsack problem efficiently, I wonder: is applying AI practical and/or ever OK for finding the optimal solution to a given problem?
AI algorithms in general can find an approximation to basically any function. They are so powerful because this is true even for extremely complex functions with many input parameters and/or many output parameters and/or a very complicated internal structure.
On the other hand, there is no known way to solve NP-complete problems "quickly". In practice, you would often have to search through a huge solution space to find the optimal solution. This is why people use heuristic methods and approximation algorithms to efficiently find a "sufficiently good" solution.
So yes, you can use AI to find a good approximate solution (and possibly even a better one than with traditional heuristics) to a computationally hard problem.
But no, if the problem is NP-complete, you still cannot know that you have found the optimal solution.
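To make the heuristic route concrete for the knapsack problem mentioned in the question, here is a minimal sketch of the classic value-density greedy (assuming positive weights that each fit in the capacity); it is fast but, as said above, comes with no optimality guarantee:

    def greedy_knapsack(items, capacity):
        # Heuristic for 0/1 knapsack: items is a list of (value, weight)
        # pairs. Take items in decreasing value/weight order while they
        # still fit. Fast, but the result may be far from optimal.
        total_value = 0
        for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1],
                                    reverse=True):
            if weight <= capacity:
                capacity -= weight
                total_value += value
        return total_value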

when to use bottom-up DP and when to use top-down DP

I have learnt the two ways of doing DP, but I am confused now. How do we choose between them under different conditions? I also find that, most of the time, top-down feels more natural to me. Can anyone tell me how to make the choice?
PS: I have read this older post but am still confused. Please don't flag my question as a duplicate; as I mentioned, the questions are different. I want to know how to choose, and when to approach a problem top-down versus bottom-up.
To keep it simple, I will explain based on my summary of a few sources.
Top-down: the recurrence looks like a(n) = a(n-1) + a(n-2). With this equation, you can implement it in about 4-5 lines of code by making the function call itself. Its advantage, as you said, is that it is quite intuitive to most developers, but it costs more space (the memory stack) to execute.
Bottom-up: you first calculate a(0), then a(1), and save the results to some array (for instance); then you keep saving a(i) = a(i-1) + a(i-2). With this approach, you can significantly improve the performance of your code, and for big n you can avoid stack overflow.
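For concreteness, a minimal sketch of both styles for the recurrence above, assuming Fibonacci-style base cases a(0) = 0 and a(1) = 1 (a cache, discussed in the next answer, would remove the recursive version's duplicate work):

    # Top-down: the function calls itself, about 4-5 lines, but each
    # call costs stack space and, without a cache, work is duplicated.
    def a_top_down(n):
        if n < 2:
            return n
        return a_top_down(n - 1) + a_top_down(n - 2)

    # Bottom-up: compute a(0), a(1), ... in order, saving each result,
    # so there is no recursion and no stack overflow risk for big n.
    def a_bottom_up(n):
        results = [0, 1]
        for i in range(2, n + 1):
            results.append(results[i - 1] + results[i - 2])
        return results[n]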
A slightly longer answer, but I have tried to explain my own approach to dynamic programming and what I have come to understand after solving such questions. I hope future users find it helpful. Please do feel free to comment and discuss:
A top-down solution comes more naturally when thinking about a dynamic programming problem. You start with the end result and try to figure out the ways you could have gotten there. For example, for fib(n), we know that we could have gotten here only through fib(n-1) and fib(n-2). So we call the function recursively again to calculate the answer for these two cases, which goes deeper and deeper into the tree until the base case is reached. The answer is then built back up until all the stacks are popped off and we get the final result.
To reduce duplicate calculations, we use a cache that stores a new result and returns it if the function tries to calculate it again. So, if you imagine a tree, the function call does not have to go all the way down to the leaves, it already has the answer and so it returns it. This is called memoization and is usually associated with the top-down approach.
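As a minimal sketch of that cache for fib(n), here is one way to do it using Python's built-in memoization decorator:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fib(n):
        # Each distinct n is computed once; later calls return the
        # cached answer, so whole subtrees of the call tree are skipped.
        if n < 2:
            return n
        return fib(n - 1) + fib(n - 2)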
Now, one important point, I think, for the bottom-up approach is that you must know the order in which the final solution has to be built. In the top-down case, you just keep breaking one thing down into many, but in the bottom-up case, you must know the number and order of states that need to be involved in a calculation to go from one level to the next. In some simpler problems (e.g. fib(n)) this is easy to see, but in more complex cases it does not come naturally. The approach I usually follow is to think top-down, break the final case into previous states, and try to find a pattern or order so I can then build the solution back up.
Regarding when to choose either of these, I would suggest the approach above to identify how the states are related to each other and how the solution is built. One important distinction you can find this way is how many calculations are really needed and how many might just be redundant. In the bottom-up case, you have to fill an entire level before you go to the next. In the top-down case, however, an entire subtree can be skipped if it is not needed, and in that way a lot of extra calculations can be saved.
Hence, the choice obviously depends on the problem, but also on the inter-relation between states. It is usually the case that bottom-up is recommended because it saves you stack space compared to the recursive approach. However, if you feel the recursion isn't too deep but is very wide, and tabulation would lead to a lot of unnecessary calculations, you can go for the top-down approach with memoization.
For example, in this question: https://leetcode.com/problems/partition-equal-subset-sum/, if you look at the discussion, it is mentioned that top-down is faster than bottom-up: essentially the binary-tree approach with a cache versus the knapsack-style bottom-up build-up. I leave it as an exercise to understand the relation between the states.
Bottom-up and top-down DP approaches are the same for many problems in terms of time and space complexity. The differences are that bottom-up is a little bit faster, because you don't pay the overhead of recursion, while top-down is more intuitive and natural.
But the real advantage of the top-down approach shows on certain sets of tasks where you don't need to calculate the answers for all smaller subtasks! In such cases you can reduce the time complexity.
For example, you can use the top-down approach with memoization to find the N-th Fibonacci number, where the sequence is defined as a[n] = a[n-1] + a[n-2]; either way you get O(N) time for calculating it (I am not comparing with the O(log N) solution for finding this number). But look at the sequence a[n] = a[n/2] + a[n/2-1], with some edge cases for small N. With the bottom-up approach you can't do better than O(N), whereas the top-down algorithm will run in O(log N) (or maybe some poly-logarithmic complexity, I am not sure).
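A sketch of that second recurrence with memoization (the base values for small N are assumed here, since only the recurrence's structure matters): starting from n, the recursion only ever reaches a small, polylogarithmic number of distinct states, while a bottom-up table would have to fill in all N entries.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def a(n):
        # Assumed edge cases for small n; the point is that only the
        # states reachable from n are ever computed and cached.
        if n < 2:
            return 1
        return a(n // 2) + a(n // 2 - 1)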
To add on to the previous answers,
Optimal time:
if all sub-problems need to be solved
→ bottom-up approach
else
→ top-down approach
Optimal space:
Bottom-up approach
The question Nikhil_10 linked (i.e. https://leetcode.com/problems/partition-equal-subset-sum/) doesn't require all subproblems to be solved, hence the top-down approach is the better choice there.
If top-down feels natural to you, use it, as long as you know you can implement it. Bottom-up is faster than top-down, and most of the time it is just as easy to write. Make your decision based on your situation.

Numberlink/Flow Game: How to spot NP-Complete problems?

I was trying to find a way to solve the problem in the famous game Flow. http://moh97.us/flow/
After googling, I found out that this is an NP-complete problem; a good solution would make use of heuristics and cuts. How can I spot an NP-complete problem easily? Sometimes when I get stuck, I can't see the obvious solution. When this happens with an NP-complete problem, it's better to recognise it quickly and move on to the next problem.
When you have an explosion of objects (say, objects whose count grows exponentially with some parameter or parameters), this should point you in the direction of an NP-complete problem: you have to inspect and check too many objects (combinatorial or otherwise). Usually these objects are subsets or sub-spaces of some initial object space. You should build some intuition for this, but as usual, intuition sometimes lies (mine has misled me like this on 2-3 occasions).
Then, once you suspect some problem is NP-complete, just Google for it and try to find more information about it or about a similar problem.
This is what I do, at least, and I have solved quite a few algorithmic problems in the past.
Here is a nice problem which I am pretty sure is NP-complete but which can be solved with, for example, a genetic algorithm:
http://uva.onlinejudge.org/index.php?option=com_onlinejudge&Itemid=8&page=show_problem&problem=973
And as Dukeling said, there's no generic way of doing this.
There's no easy generic way of doing this. It basically comes down to reducing some known NP-complete problem to solving this problem, or showing that they are equivalent.
So, learn more about well-known NP-complete problems and get more experience with reducing a problem to solving another problem and identifying equivalent problems.
Disclaimer: You just have to be careful of special cases of NP-complete problems - while the generic case of the problem may require exponential time to be solved, these special cases may actually be solvable in polynomial time.

Memoization or Tabulation approach for Dynamic programming

There are many problems that can be solved using dynamic programming, e.g. Longest Increasing Subsequence. Such a problem can be solved by using two approaches:
Memoization (Top-Down) - Using recursion to solve the sub-problems and storing the results in some hash table.
Tabulation (Bottom-Up) - Using an iterative approach: solve the smaller sub-problems first and then use their results while solving the bigger problem.
My question is: which is the better approach in terms of time and space complexity?
Short answer: it depends on the problem!
Memoization usually requires more code and is less straightforward, but has computational advantages in some problems, mainly those which you do not need to compute all the values for the whole matrix to reach the answer.
Tabulation is more straightforward, but may compute unnecessary values. If you do need to compute all the values, this method is usually faster, though, because of the smaller overhead.
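To make the two approaches concrete for the Longest Increasing Subsequence example from the question, here is a minimal sketch of each (both O(n^2); variable names are my own):

    from functools import lru_cache

    def lis_tabulation(nums):
        # Bottom-up: dp[i] = length of the longest increasing
        # subsequence ending at index i; every entry is filled,
        # whether it turns out to be needed or not.
        dp = [1] * len(nums)
        for i in range(len(nums)):
            for j in range(i):
                if nums[j] < nums[i]:
                    dp[i] = max(dp[i], dp[j] + 1)
        return max(dp, default=0)

    def lis_memoization(nums):
        # Top-down: recurse from each end index, caching sub-results
        # so each index is solved only once.
        @lru_cache(maxsize=None)
        def ending_at(i):
            return 1 + max((ending_at(j) for j in range(i)
                            if nums[j] < nums[i]), default=0)
        return max((ending_at(i) for i in range(len(nums))), default=0)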
First, understand what dynamic programming is: if the problem at hand can be broken down into sub-problems whose solutions are also optimal and can be combined to reach the solution of the original/bigger problem, then dynamic programming can be applied.
It's a way of solving a problem by storing the results of sub-problems in program memory and reusing them instead of recalculating them at a later stage.
Remember, the ideal case for using dynamic programming is when you can reuse the solutions of sub-problems more than once; otherwise, there is no point in storing the results.
Now, dynamic programming can be applied bottom-up (Tabulation) or top-down (Memoization).
Tabulation: We start by calculating solutions to the smallest sub-problems and progress one level up at a time, following the bottom-up approach. Note that here we exhaustively find solutions for each of the sub-problems, without knowing whether they will really be needed later.
Memoization: We start with the original problem and keep breaking it down one level at a time until we reach the base cases, whose solutions we know. In most cases, such breaking down (the top-down approach) is recursive. Hence, if the problem ends up using every sub-solution anyway, memoization is slower due to the recursive calls. But when not all sub-solutions are needed, memoization performs better than tabulation.
I found this short video quite helpful: https://youtu.be/p4VRynhZYIE
Asymptotically a dynamic programming implementation that is top down is the same as going bottom up, assuming you're using the same recurrence relation. However, bottom up is generally more efficient because of the overhead of recursion which is used in memoization.
If the problem has the overlapping sub-problems property, then use memoization; otherwise, it depends on the problem.

How to spot a "greedy" algorithm?

I am reading a tutorial about "greedy" algorithms, but I have a hard time spotting them when solving real "Top Coder" problems.
If I know that a given problem can be solved with a "greedy" algorithm, it is pretty easy to code the solution. However, if I am not told that a problem is "greedy", I cannot spot it.
What are the common properties and patterns of the problems solved with "greedy" algorithms? Can I reduce them to one of the known "greedy" problems (e.g. MST)?
Formally, you'd have to prove the matroid property of course. However, I assume that in terms of topcoder you rather want to find out quickly if a problem can be approached greedily or not.
In that case, the most important point is the optimal sub-structure property. For this, you have to be able to spot that the problem can be decomposed into sub-problems and that their optimal solution is part of the optimal solution of the whole problem.
Of course, greedy problems come in such a wide variety that it's next to impossible to offer a general correct answer to your question. My best advice would hence be to think somewhere along these lines:
Do I have a choice between different alternatives at some point?
Does this choice result in sub-problems that can be solved individually?
Will I be able to use the solution of the sub-problem to derive a solution for the overall problem?
Together with loads and loads of experience (just had to say that, too), this should help you to quickly spot greedy problems. Of course, you may eventually classify a problem as greedy when it is not. In that case, you can only hope to realize it before working on the code for too long.
(Again, for reference, I assume a topcoder context... for anything more realistic and of practical consequence I strongly advise actually verifying the matroid structure before selecting a greedy algorithm.)
Part of your problem may be caused by thinking in terms of "greedy problems". There are greedy algorithms, and there are problems where a greedy algorithm leads to an optimal solution. There are other hard problems that can also be solved by greedy algorithms, but the result will not necessarily be optimal.
For example, for the bin packing problem there are several greedy algorithms, all with much better complexity than the exponential exact algorithm, but you can only be sure that you'll get a solution that is within a certain threshold of the optimal solution.
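For instance, here is a sketch of one such greedy for bin packing, first-fit decreasing, which is known to stay within a constant factor of the optimal bin count (assuming each item fits in an empty bin):

    def first_fit_decreasing(sizes, capacity):
        # Greedy bin packing: place each item, largest first, into the
        # first bin with enough room left; open a new bin if none fits.
        # Close to optimal in practice, but rarely exactly optimal.
        bins = []  # remaining free space per open bin
        for size in sorted(sizes, reverse=True):
            for i, free in enumerate(bins):
                if size <= free:
                    bins[i] -= size
                    break
            else:
                bins.append(capacity - size)
        return len(bins)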
Regarding only those problems where a greedy algorithm leads to an optimal solution, my guess would be that an inductive correctness proof feels totally natural and easy: for every single one of your greedy steps, it is quite clear that it was the best thing to do.
Typically, problems with optimal greedy solutions are easy anyway, and the others will force you to come up with a greedy heuristic because of complexity limitations. Usually a meaningful reduction would be showing that your problem is in fact at least NP-hard, so that you know you'll have to find a heuristic. For those problems, I'm a big fan of trying things out: implement your algorithm and try to find out whether its solutions are "pretty good" (ideally you also have a slow but correct algorithm to compare results against; otherwise you may need manually created ground truths). Only once you have something that works well, try to think about why, and maybe even try to come up with proofs of bounds. Maybe it works, maybe you'll spot border cases where it doesn't and needs refinement.
"A term used to describe a family of algorithms. Most algorithms try to reach some "good" configuration from some initial configuration, making only legal moves. There is often some measure of "goodness" of the solution (assuming one is found).
The greedy algorithm always tries to perform the best legal move it can. Note that this criterion is local: the greedy algorithm doesn't "think ahead", agreeing to perform some mediocre-looking move now, which will allow better moves later.
For instance, the greedy algorithm for egyptian fractions is trying to find a representation with small denominators. Instead of looking for a representation where the last denominator is small, it takes at each step the smallest legal denominator. In general, this leads to very large denominators at later steps.
The main advantage of the greedy algorithm is usually simplicity of analysis. It is usually also very easy to program. Unfortunately, it is often sub-optimal."
--- ariels
(http://www.everything2.com/title/greedy+algorithm?searchy=search)
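A minimal sketch of the greedy Egyptian-fraction expansion the quote describes: at each step it takes the smallest legal denominator, and the denominators can indeed blow up quickly at later steps.

    from fractions import Fraction
    import math

    def egyptian(fraction):
        # Greedy expansion of a fraction in (0, 1): repeatedly subtract
        # the largest unit fraction that fits, i.e. the one with the
        # smallest legal denominator.
        denominators = []
        while fraction > 0:
            d = math.ceil(1 / fraction)
            denominators.append(d)
            fraction -= Fraction(1, d)
        return denominators

    # Example: egyptian(Fraction(4, 5)) -> [2, 4, 20],
    # i.e. 4/5 = 1/2 + 1/4 + 1/20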

Resources