Tips on solving binary tree/binary search tree problems using recursion (Java)

I am reviewing how to solve binary tree/binary search tree problems in Java using recursion, and I feel like I'm struggling a little bit. Examples: manipulating/changing the nodes in the tree, counting certain patterns (number of even ints, height of the tree), etc. I pretty much always use private helper methods.
I'd like to hear some rules of thumb to follow to make my life easier to solve these problems. Thanks in advance!
edit: to be more specific, I'm talking about ANY kind of problem involving using recursion to solve a binary tree/BST problem, not any one problem in particular. I want to know STRATEGIES FOR SOLVING these problems when creating METHODS to solve them: certain things to always include or think of when solving them. I can't get more specific than that. Thanks for the downvotes.

First off, always remember that you're working on a BST, not an unsorted binary tree or a non-binary general tree. That means you can rely on the BST invariant at all times: every value in the left subtree < this node's value < every value in the right subtree (with equality allowed on one of the sides in some implementations).
Example where this is relevant: when searching a BST, if the value you're trying to find is less than the current node's value, there's no point looking in the right subtree for it; it can't be there.
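A quick illustration in Java (a minimal sketch; this Node class is an assumption for the example, not from your question):

class Node {
    int value;
    Node left, right;
}

class BstSearch {
    // The invariant lets us discard half of the remaining tree at every step.
    static boolean contains(Node node, int target) {
        if (node == null) return false;             // empty tree: trivial case
        if (node.value == target) return true;
        if (target < node.value)
            return contains(node.left, target);     // too small: skip the right subtree
        return contains(node.right, target);        // too large: skip the left subtree
    }
}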
Other than that, treat it like you would any recursion problem. That means, for a given problem:
1) Determine which cases are trivial and don't require recursion. Write code that correctly identifies these cases and returns the trivial result. Some possibilities include a height-0 tree, or no tree at all (null root).
2) For all other cases, determine which of the following makes more sense (both are usually possible, but one is often cleaner; see the sketch after this list):
What non-recursive work you could do first to reduce the problem to a sub-problem whose solution you return (recursion at the end, potentially tail recursion).
What recursive work you would need to do first in order to solve this problem (recursion at the start, not tail recursion).
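For example, counting the even values in a tree follows this recipe exactly: the null tree is the trivial case, and the recursive work happens first and is then combined (a minimal Java sketch, reusing the hypothetical Node class from above):

class CountEvens {
    static int countEvens(Node node) {
        if (node == null) return 0;                 // step 1: trivial case
        int self = (node.value % 2 == 0) ? 1 : 0;   // non-recursive work at this node
        return self + countEvens(node.left)         // step 2: recurse into both
                    + countEvens(node.right);       // subtrees, then combine
    }
}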
Having private helper methods is not necessarily a bad thing; so long as they serve a distinct and useful function, you shouldn't feel bad about writing them. There are certainly some recursion problems that are much cleaner and less redundant when split into 3-4 methods rather than crammed into one.
Take some BST problems and see if you can identify the general structure of the solution before you sit down to code it. Hope this helps! Also, you may want to add a java tag to your question if you're specifically asking about BST stuff in Java.

Related

General algorithm for partial backtracking search

Backtracking search is a well-known problem-solving technique that recurses through all possible combinations of variable assignments in search of a valid solution. The general algorithm can be abstracted into a concise higher-order function: https://en.wikipedia.org/wiki/Backtracking
Some problems require partial backtracking; that is, they have a mixture of don't-know non-determinism (a choice to make that matters: if you get it wrong, you have to backtrack) and don't-care non-determinism (a choice to make that doesn't matter for correctness: it may affect how long it takes to find the solution, but you never have to backtrack over it).
Consider, for example, the Boolean satisfiability problem, which can be solved with the DPLL algorithm. If you try to express that with the general backtracking algorithm, the result will recurse not only through all 2^N variable assignments (which is sadly necessary in the general case), but also through all N! orders of trying the variables (completely unnecessary and hopelessly inefficient).
Is there a general algorithm for partial backtracking? A concise higher-order function that takes function parameters for both don't-know and don't-care choices?
If I understand you correctly, you’re asking about symmetry-breaking in tree search. In the specific example you gave, all permutations of the list of variable assignments are equivalent.
Symmetries are going to be domain-specific. So is the more general technique of pruning the search tree by short-circuiting and backtracking eagerly. Still, there are a few symmetry-breaking techniques I've used that generalize.
One is to search the problem space in a canonical order. If the branch that sets variable 10 only tries variables 11, 12 and up, not variables 9, 8 or 7, it won’t search any permutation of the same solution. It will only test solutions that are unique up to permutation. (In the specific case of SAT-solving, this might rule out an optimal search order—although you could re-order the variables arbitrarily.)
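As a sketch of canonical ordering (Java; the isSolution and report hooks are placeholders for the problem-specific parts):

import java.util.ArrayDeque;
import java.util.Deque;

class CanonicalSearch {
    // Only variables with index >= nextVar are tried in a branch, so each
    // subset of variables is visited in exactly one order: no permutations.
    static void search(int nextVar, int numVars, Deque<Integer> chosen) {
        if (isSolution(chosen)) {
            report(chosen);
            return;
        }
        for (int v = nextVar; v < numVars; v++) {
            chosen.push(v);                    // choose
            search(v + 1, numVars, chosen);    // explore only higher indices
            chosen.pop();                      // unchoose
        }
    }

    static boolean isSolution(Deque<Integer> chosen) { return false; } // placeholder
    static void report(Deque<Integer> chosen) { System.out.println(chosen); }
}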
Another is to devise a test that only one distinct solution from each equivalence class will pass, ideally one that can be checked near the top of the search tree. The classic example is, in the 8-queens problem, checking whether the queen in the row you look at first is on the left or the right half of the chessboard. Any solution where she's on the right is the mirror image of a solution where she's on the left, so you can cut the search space in half. (You can actually do better than this with that problem.) If you only need to test for satisfiability, you can get by with a filter that merely guarantees that, if any solution exists, at least one solution will pass.
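That mirror-image filter might look like this (a Java sketch; the board representation is my own assumption):

class Queens {
    // Symmetry-breaking: the first row's queen is only placed in the left
    // half of the board, so each mirrored pair of solutions is found once.
    static void place(int row, int[] colOfRow) {
        int n = colOfRow.length;
        if (row == n) {
            System.out.println(java.util.Arrays.toString(colOfRow)); // a solution
            return;
        }
        int limit = (row == 0) ? n / 2 : n;   // left half only on the first row
        for (int col = 0; col < limit; col++) {
            if (isSafe(row, col, colOfRow)) {
                colOfRow[row] = col;
                place(row + 1, colOfRow);
            }
        }
    }

    static boolean isSafe(int row, int col, int[] colOfRow) {
        for (int r = 0; r < row; r++) {
            int c = colOfRow[r];
            if (c == col || Math.abs(c - col) == row - r) return false;
        }
        return true;
    }
}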
If you have enough memory, you might also store a set of branches that have already been searched, and check whether a branch you are about to search is equivalent to one already in the set. This is more practical for a search space with a huge number of symmetries than for one with a huge number of solutions that are unique up to symmetry.

When to use backtracking template and when to not?

I've seen a lot of posts explaining backtracking on SO, but the part that still confuses me is when I should actually use the backtracking template (choose-explore-unchoose) as opposed to a normal DFS mindset while solving problems. I understand that both of them are essentially backtracking, but from a problem-solving perspective, a traditional DFS approach feels a lot more intuitive when you see a problem of that sort. What I want to know is when my mind should go "choose-explore-unchoose".
For example: if you want to print all root-to-leaf paths, a simple DFS problem-solving approach makes a lot of sense, whereas for printing the permutations of a string we take the "choose-explore-unchoose" strategy. I am having a difficult time categorizing problems into these 2 buckets. (Categorization is the current strategy I'm using to make my mind think in a certain direction. If there is any other strategy, anyone is welcome to share.)
Here is the template I am referring to:
function dfs(node, state):
    if state is a solution:
        report(state)  # e.g. add state to final result list
        return
    for child in children:
        if child is a part of a potential solution:
            state.add(child)     # make move
            dfs(child, state)
            state.remove(child)  # backtrack
As you said, DFS is essentially backtracking: choose (a child), explore (all of its children) and unchoose (move on to the next child).
There are other approaches to backtracking, but they depend more on heuristics. If you look at graph-search algorithms (like the A* family) you will see a lot of backtracking algorithm styles that differ from DFS. The difference is in which heuristics they rely on and in the order of the search: with a small change you can get BFS instead of DFS.
There is no single rule for when to use which. Whatever is more intuitive to you is fine, as long as you know how to use everything in your toolbox.
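To make the template concrete, here is the permutations case from the question written out in Java (a sketch; the state is the partial permutation being built):

import java.util.ArrayList;
import java.util.List;

class Permutations {
    static void permute(String s, boolean[] used, StringBuilder state, List<String> out) {
        if (state.length() == s.length()) {       // state is a solution
            out.add(state.toString());            // report(state)
            return;
        }
        for (int i = 0; i < s.length(); i++) {    // for child in children
            if (used[i]) continue;                // not part of a potential solution
            used[i] = true;
            state.append(s.charAt(i));            // make move (choose)
            permute(s, used, state, out);         // explore
            state.deleteCharAt(state.length() - 1);
            used[i] = false;                      // backtrack (unchoose)
        }
    }
}

Calling permute("abc", new boolean[3], new StringBuilder(), result) fills result with all six permutations; the unchoose steps are what allow a single shared state buffer to serve every branch.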

How do I balance a BK-Tree and is it necessary?

I am looking into using an Edit Distance algorithm to implement a fuzzy search in a name database.
I've found a data structure that will supposedly help speed this up through a divide and conquer approach: Burkhard-Keller trees. The problem is that I can't find much information on this particular type of tree.
If I populate my BK-tree with arbitrary nodes, how likely am I to have a balance problem?
If it is possibly or likely for me to have a balance problem with BK-Trees, is there any way to balance such a tree after it has been constructed?
What would the algorithm look like to properly balance a BK-tree?
My thinking so far:
It seems that child nodes are distinguished by distance, so I can't simply rotate a given node in the tree without re-calibrating the entire tree under it. However, if I can find an optimal new root node, this might be precisely what I should do. I'm not sure how I'd go about finding an optimal new root node, though.
I'm also going to try a few methods to see if I can get a fairly balanced tree by starting with an empty tree, and inserting pre-distributed data.
Start with an alphabetically sorted list, then queue from the middle. (I'm not sure this is a great idea because alphabetizing is not the same as sorting on edit distance).
Completely shuffled data. (This relies heavily on luck to pick a "not so terrible" root by chance. It might fail badly and might be probabilistically guaranteed to be sub-optimal).
Start with an arbitrary word in the list and sort the rest of the items by their edit distance from that item. Then queue from the middle. (I feel this is going to be expensive, and still do poorly as it won't calculate metric space connectivity between all words - just each word and a single reference word).
Build an initial tree with any method, flatten it (basically like a pre-order traversal), and queue from the middle for a new tree. (This is also going to be expensive, and I think it may still do poorly as it won't calculate metric space connectivity between all words ahead of time, and will simply get a different and still uneven distribution).
Order by name frequency, insert the most popular first, and ditch the concept of a balanced tree. (This might make the most sense, as my data is not evenly distributed and I won't have pure random words coming in).
FYI, I am not currently worrying about the name-synonym problem (Bill vs William). I'll handle that separately, and I think completely different strategies would apply.
There is a Lisp example in the article: http://cliki.net/bk-tree. Regarding balancing: the data structure and the method seem complicated enough as they are, and the author says nothing about unbalanced trees. If you do run into a badly unbalanced tree in practice, maybe the structure just isn't the right fit for your data?
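For reference, the core of a BK-tree is quite small. Here is a hedged Java sketch of insertion and radius search (the class names and the Levenshtein implementation are mine, not from the article):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Children are keyed by their edit distance to the parent, which is why
// ordinary rotations don't apply: moving a node changes all of its keys.
class BkTree {
    private final String word;
    private final Map<Integer, BkTree> children = new HashMap<>();

    BkTree(String word) { this.word = word; }

    void insert(String w) {
        int d = editDistance(w, word);
        if (d == 0) return;                          // already present
        BkTree child = children.get(d);
        if (child == null) children.put(d, new BkTree(w));
        else child.insert(w);
    }

    // Collect every stored word within 'tolerance' edits of the query.
    void search(String query, int tolerance, List<String> out) {
        int d = editDistance(query, word);
        if (d <= tolerance) out.add(word);
        // Triangle inequality: only children whose edge distance lies in
        // [d - tolerance, d + tolerance] can contain matches.
        for (int k = d - tolerance; k <= d + tolerance; k++) {
            BkTree child = children.get(k);
            if (child != null) child.search(query, tolerance, out);
        }
    }

    // Standard two-row dynamic-programming Levenshtein distance.
    static int editDistance(String a, String b) {
        int[] prev = new int[b.length() + 1];
        int[] curr = new int[b.length() + 1];
        for (int j = 0; j <= b.length(); j++) prev[j] = j;
        for (int i = 1; i <= a.length(); i++) {
            curr[0] = i;
            for (int j = 1; j <= b.length(); j++) {
                int sub = prev[j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1);
                curr[j] = Math.min(sub, Math.min(prev[j] + 1, curr[j - 1] + 1));
            }
            int[] tmp = prev; prev = curr; curr = tmp;
        }
        return prev[b.length()];
    }
}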

Analyzing Recursive Algorithms

I've often been slightly stumped by recursive algorithms whose analysis seems to require magical leaps of logic (with a large dose of shrunken notation born of an ink shortage).
I realize that the alternative is to simply memorize the Big O notation for all the common algorithms, but at a certain point that approach fails. For example, I am happy to disclose the performance of bubble sort, insertion sort, binary tree insertion/removal, mergesort, and quicksort, but don't ask me to come up with the performance of AVL trees or Dijkstra's shortest path algorithm off the top of my head.
Where can I go to get:
A discussion of recursive algorithm analysis that uses words instead of a profusion of symbols
Practice problems to confirm that my newly-obtained understanding is actually correct
Example:
Bad:
$\sum_{v \in T} (1 + c_v)$
Possible 'good' equivalent:
The amount of work required for one node in the tree is 1 plus the number of children of that node (c_v), and this work is done once for every node v in the tree T rooted at the original node.
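For instance, that sum is exactly the cost of a plain traversal that does one unit of work at each node plus one unit per child link (a hypothetical Java sketch):

import java.util.ArrayList;
import java.util.List;

class TreeNode {
    final List<TreeNode> children = new ArrayList<>();

    // 1 unit at this node plus 1 unit per child link, summed over the
    // whole tree: \sum_{v \in T} (1 + c_v), i.e. linear in the tree size.
    int countNodes() {
        int count = 1;                    // the "1" term: work at this node
        for (TreeNode c : children)       // the "c_v" term: one step per child
            count += c.countNodes();
        return count;
    }
}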
Side commentary:
I could simply watch a video for every single algorithm because there's no way to make one's voice turn into a subscript (or any of the other contortions) but I suspect that would take an inordinate amount of time compared to reading textual descriptions.
Update:
Here's 1 source of solved problems: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/ (this tackles #2 above)
TopCoder has a great set of tutorials and thorough explanations. Have you tried them out?
http://www.topcoder.com/tc?d1=tutorials&d2=alg_index&module=Static
To 'officially' answer this question...
Source of sample problems: http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-046j-introduction-to-algorithms-sma-5503-fall-2005/
An explanation of analysis for CS: http://en.wikipedia.org/wiki/Concrete_Mathematics

N-Puzzle with 5x5 grid, theory question

I'm writing a program which solves a 24-puzzle (5x5 grid) using two heuristics. The first uses the number of blocks in the incorrect place, and the second uses the Manhattan distance between each block's current place and its desired place.
I have different functions in the program which use each heuristic with an A* search and with a greedy search and compare the results (so 4 different combinations in total).
I'm curious whether my program is wrong or whether it's a limitation of the puzzle. The puzzle is generated randomly by moving pieces around a few times, and most of the time (~70%) a solution is found by most of the searches, but sometimes they fail.
I can understand why greedy would fail, as it's not complete, but seeing as A* is complete this leads me to believe that there's an error in my code.
So could someone please tell me whether this is an error in my thinking or a limitation of the puzzle? Sorry if this is badly worded, I'll rephrase if necessary.
Thanks
EDIT:
So I"m fairly sure it's something I'm doing wrong. Here's a step-by-step list of how I'm doing the searches, is anything wrong here?
Create a new list for the fringe, sorted by whichever heuristic is being used
Create a set to store visited nodes
Add the initial state of the puzzle to the fringe
while the fringe isn't empty..
pop the first element from the fringe
if the node has been visited before, skip it
if node is the goal, return it
add the node to our visited set
expand the node and add all descendants back to the fringe
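In code, the loop looks roughly like this (a Java sketch; the Puzzle interface is hypothetical and only there to show the structure):

import java.util.Comparator;
import java.util.HashSet;
import java.util.List;
import java.util.PriorityQueue;
import java.util.Set;

interface Puzzle {
    boolean isGoal();
    List<Puzzle> successors();
}

class Search {
    // byScore orders the fringe by h(n) for greedy search, or g(n) + h(n) for A*.
    static Puzzle search(Puzzle start, Comparator<Puzzle> byScore) {
        PriorityQueue<Puzzle> fringe = new PriorityQueue<>(byScore);
        Set<Puzzle> visited = new HashSet<>();   // states need equals/hashCode
        fringe.add(start);
        while (!fringe.isEmpty()) {
            Puzzle node = fringe.poll();         // pop the best element
            if (!visited.add(node)) continue;    // skip already-visited nodes
            if (node.isGoal()) return node;
            fringe.addAll(node.successors());    // expand, push all descendants
        }
        return null;                             // fringe exhausted: no solution
    }
}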
If you mean that sliding puzzle: it becomes unsolvable if you exchange two pieces of a working solution. So if you don't find a solution, that doesn't tell you anything about the correctness of your algorithm; your seed state may simply be flawed.
Edit: If you start with the solution and make (random) legal moves, then a correct algorithm will find a solution (since reversing the order of the moves is itself a solution).
It is not completely clear who invented it, but Sam Loyd popularized the 14-15 puzzle in the late 19th century; it is the 4x4 version of your 5x5.
As the Wikipedia article explains, a parity argument shows that half of the possible configurations are unsolvable. You are probably running into something similar when your search fails.
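If your generator ever places tiles arbitrarily instead of making legal moves, you can check solvability up front. For an odd-width board like 5x5, a state is solvable exactly when the number of inversions among the numbered tiles is even (a Java sketch; the flattened-array representation is my assumption):

class Solvability {
    // tiles is the board read row by row, with 0 marking the blank.
    // For odd board widths (3x3, 5x5, ...), a configuration is solvable
    // iff the inversion count of the numbered tiles is even.
    static boolean isSolvable(int[] tiles) {
        int inversions = 0;
        for (int i = 0; i < tiles.length; i++) {
            if (tiles[i] == 0) continue;          // ignore the blank
            for (int j = i + 1; j < tiles.length; j++)
                if (tiles[j] != 0 && tiles[i] > tiles[j]) inversions++;
        }
        return inversions % 2 == 0;
    }
}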
I'm going to assume your code is correct, and you implemented all the algorithms and heuristics correctly.
This leaves us with the "generated randomly" part of your puzzle initialization. Are you sure you are generating correct states of the puzzle? If you generate an illegal state, obviously there will be no solution.
While the steps you have listed seem a little incomplete, you have listed enough to ensure that your A* will reach a solution if there is one (albeit possibly not an optimal one, as long as you are simply skipping visited nodes).
It sounds like either your puzzle generation is flawed or your algorithm isn't implemented correctly. To easily verify your puzzle generation, store the steps used to generate the puzzle, run them in reverse, and check that the result is a solution state before the puzzle is sent to the search routines. If you ever generate an invalid puzzle, dump the puzzle and the expected steps and see where the problem is. If the puzzle passes and the algorithm fails, you have at least narrowed down where the problem is.
If it turns out to be your algorithm, post a more detailed explanation of the steps you have actually implemented (not just how A* works; we all know that), for instance when you run the evaluation function and where you re-sort the list that acts as your queue. That will make it easier to pinpoint a problem in your implementation.
