Correct functional implementation of a Binomial Heap

I am reading about binomial heaps in Purely Functional Data Structures.
The implementation of the insTree function confused me quite a bit. Here is the code:
datatype Tree = Node of int * Elem.T * Tree list
fun link (t1 as Node (r, x1, c1), t2 as Node (_, x2, c2)) =
if Elem.leq (x1, x2) then Node (r+1, x1, t2::c1)
else Node (r+1, x2, t1::c2)
fun rank (Node (r, x, c)) = r
fun insTree (t, []) = [t]
| insTree (t, ts as t' :: ts') =
if rank t < rank t' then t::ts else insTree (link (t, t'), ts')
My confusion is with insTree: why does it not consider the case rank t > rank t'?
In if rank t < rank t' then t::ts else insTree (link (t, t'), ts'),
if t's rank is less than t''s rank, then t is simply placed at the front of the heap, no question asked.
The else branch covers two cases: equal and greater.
For equal ranks, yes, we can link the two trees (we only link trees of the same rank) and then try to insert the newly linked tree into the rest of the heap, no question asked.
But the greater case is handled exactly like the equal case. Why? Even if rank t > rank t', we still link them?
Edit
I thought the process of inserting a binomial tree into a binomial heap should be like this:
We get the tree t and the heap
In the heap (actually a list), we compare the rank of t with each tree in the heap.
If we find a missing rank (the heap is in increasing rank order) that matches the rank of t, we put t in that slot.
If we find a tree in the heap with the same rank as t, we link the two trees to produce a new tree of rank+1, and try again to insert that new tree into the heap.
So, I think the correct fun insTree could be like this:
fun insTree (t, []) = [t]
| insTree (t, ts as t' :: ts') =
if rank t < rank t' then t::ts
else if rank t = rank t' then insTree (link (t, t'), ts')
else t'::(insTree (t, ts'))

insTree is a helper function that is not visible to the user. The user calls insert, which in turn calls insTree with a tree of rank 0, and a list of trees of increasing rank. insTree has an invariant that the rank of t is <= the rank of the first tree in the list. So if it's not <, then it must be =.
You're right that if insTree were a general-purpose public function, rather than a special-purpose private function, then it would have to deal with the missing case.
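Since the code in the question is Standard ML, here is a minimal Python sketch of the same insert/insTree pair (the tuple encoding (rank, root, children) and the names are my own) that makes the invariant visible: insert always hands insTree a rank-0 tree, so the rank of t can never exceed the rank of the first tree in the list.

```python
def link(t1, t2):
    """Link two trees of equal rank; the smaller root becomes the new root."""
    (r, x1, c1), (_, x2, c2) = t1, t2
    if x1 <= x2:
        return (r + 1, x1, [t2] + c1)
    return (r + 1, x2, [t1] + c2)

def rank(t):
    return t[0]

def ins_tree(t, ts):
    # Invariant: rank(t) <= rank of the first tree in ts, so the
    # "rank t > rank t'" case can never arise here.
    if not ts:
        return [t]
    if rank(t) < rank(ts[0]):
        return [t] + ts
    return ins_tree(link(t, ts[0]), ts[1:])

def insert(x, ts):
    # The public entry point always starts from a rank-0 tree.
    return ins_tree((0, x, []), ts)
```

For example, after inserting 5, 3, 8 into an empty heap, the list holds trees of ranks 0 and 1 (three nodes, binary 11), in increasing rank order.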

An important detail behind this is that a binomial tree is not just any tree that happens to have k children. It's a tree that's rigorously defined as
The binomial tree of order 0 is a single node, and
The binomial tree of order n is a single node with binomial trees of order 0, 1, 2, ..., n - 1 as children.
This answer explains why the insert function, which is the one required to construct a binomial heap, does not handle these cases (in theory they cannot happen). The cases you are proposing might make sense for a merge operation (but the underlying implementation would differ).
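As a quick sanity check of the definition above, here is a small Python sketch (my own encoding: a tree is a (rank, value, children) tuple, and binomial is a helper I made up): linking two trees of rank r yields a tree of rank r+1, a rank-r tree has exactly 2^r nodes, and its children have ranks r-1, ..., 1, 0.

```python
def link(t1, t2):
    # Link two trees of equal rank r into one tree of rank r + 1.
    (r, x1, c1), (_, x2, c2) = t1, t2
    return (r + 1, x1, [t2] + c1) if x1 <= x2 else (r + 1, x2, [t1] + c2)

def size(t):
    # Total number of nodes in the tree.
    return 1 + sum(size(c) for c in t[2])

def binomial(r):
    """Build a binomial tree of rank r by repeated linking."""
    t = (0, 0, [])
    for _ in range(r):
        t = link(t, binomial(t[0]))
    return t
```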

Related

Range function for values in a splay tree?

I am a beginner at data structures.
I am trying to write some pseudocode for a range function with splay trees: Range(S, A, B), which changes S to the set of all its members whose key value C satisfies A ≤ C ≤ B. I know splay trees are a kind of binary search tree that implements its own splay operation. Basically, I am trying to return the range of values that lie between A and B. However, I am having trouble understanding how to do this, where to even begin, and what conditions to check. I've read the definition of splay trees and know they are like binary search trees with a move-to-front scheme.
This is what I have so far:
Algorithm Range(Array S, int A, int B): array Set
S = new array(size) //Initialize an empty array of some size
if (A > B) then return NULL
I just feel somewhat lost after this point. I am not sure how to check the values of splay trees. Please let me know if I can provide additional information, or what directions I should go in.
According to Wikipedia,
A splay tree is a self-adjusting binary search tree with the additional property that recently accessed elements are quick to access again. It performs basic operations such as insertion, look-up and removal in O(log n) amortized time.
However, since the "splay" operation applies only to random searches, the tree may be considered to be an ordinary "binary search tree".
The algorithm becomes,
Range (BSTree T, int A, int B)
int Array S
S ← Empty array
If A <= B then
For each C in T
If A <= C and C <= B then
Append C to S
Return S
That is, tree T is traversed, in order; and, for each item C meeting the condition, the item is added to array S. If no item meets the condition, an empty array is returned.
The For each, if not available in the implementation language, may be implemented using the standard in-order traversal algorithm
inorder(node)
if (node = null)
return
inorder(node.left)
visit(node)
inorder(node.right)
where visit(node) is the place to test whether the item meets the condition.
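For concreteness, here is a short Python rendering of this traversal-based Range (the Node class and names are mine):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def range_query(root, a, b):
    """Collect, in sorted order, every key c with a <= c <= b."""
    out = []
    def inorder(node):
        if node is None:
            return
        inorder(node.left)
        if a <= node.key <= b:   # visit(node): the membership test
            out.append(node.key)
        inorder(node.right)
    inorder(root)
    return out
```

Because the traversal is in-order, the result comes out sorted. As an optimization, one could also skip descending into a left subtree when node.key < a, or a right subtree when node.key > b, since a BST guarantees those subtrees lie entirely outside the range.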
This is pretty late, but from the term "change" in the question prompt, it seems like it is asking you to modify the S tree so that it has only the elements within range.
So I would do it like this: splay the tree around the lower bound, and drop the left subtree, since everything in the left subtree will be lower value than the lower bound. Then splay the tree around upper bound, then dropping the right subtree, since everything in the right subtree will be higher value than the upper bound.
Here is how I would write it, in pseudocode:
//assumes S is the root of an actual tree with elements
function Range(node S, int A, int B)
node temp <- Splay(A, S) //splay around lower bound
if (temp.key < A) //make sure there are elements in the tree that satisfy this
temp <- temp.right
if (temp == null) return //there are no keys greater than A, abort!
temp <- Splay(temp.key, S)
temp.left <- null //drop the left subtree, because its keys all fall below the lower bound
temp <- Splay(B, temp) //splay around upper bound
if (temp.key > B)
temp <- temp.left
if (temp == null) return //there are no keys less than B, abort!
temp <- Splay(temp.key, temp)
temp.right <- null //drop the right subtree
S <- temp
Hope this helps! This should also run in O(log n) amortized time.

Binomial Heaps: proof that merge runs in O(log n) time

I am reading through Okasaki's Purely Functional Data Structures and am trying to do some of the exercises. One of them is to prove that binomial heap merge takes O(log n) time where n is the number of nodes in the heap.
functor BinomialHeap (Element:ORDERED):HEAP=
struct
structure Elem=Element
datatype Tree = Node of int*Elem.T*Tree list
type Heap = Tree list
fun link (t1 as Node (r,x1,c1), t2 as Node (_,x2,c2))=
if Elem.leq(x1,x2)
then Node (r+1,x1,t2::c1)
else Node (r+1,x2,t1::c2)
fun insTree (t,[])=[t]
|insTree (t,ts as t'::ts')=
if rank t < rank t' then t::ts else insTree(link(t,t'),ts')
fun insert (x,ts)=insTree(Node(0,x,[]),ts) (*just for reference*)
fun merge (ts1,[])=ts1
|merge ([],ts2)=ts2
|merge (ts1 as t1::ts1', ts2 as t2::ts2')=
if rank t1 < rank t2 then t1::merge(ts1',ts2)
else if rank t2 < rank t1 then t2::merge(ts1,ts2')
else insTree (link(t1,t2), merge (ts1',ts2'))
end
It is clear that merge will call itself max(numNodes ts1, numNodes ts2) times, but since insTree is O(log n) worst case, can you explain how merge is O(log n)?
First note that merge will be called at most (number of trees in ts1 + number of trees in ts2) times, and this is O(log n) times. (Just to be clear, ts1 and ts2 are lists of binomial trees, where such a tree of rank r contains exactly 2^r nodes and each rank can occur at most once. Therefore, there are O(log n1) such trees in ts1 and O(log n2) in ts2, where n1 and n2 are the numbers of nodes in the heaps and n = n1 + n2.)
The key point to notice is that insTree is called at most once for each rank (either through merge or recursively), and the largest possible rank is log_2(n). The reason is the following:
If insTree is called from merge, then say r = rank t1 = rank t2, and link(t1,t2) will have rank r+1. So let's say insTree is called for rank r+1. Now think about what happens with merge(ts1', ts2'). Let the smallest rank that occurs as a tree in ts1' and ts2' be r' >= r+1. Then insTree will be called again from merge for rank r'+1, since the two trees of rank r' will get linked to form a tree of rank r'+1. However, the merged heap merge(ts1', ts2') can therefore not contain a tree of rank r', and the previous call to insTree can therefore not recurse further than r'.
So putting things together:
insTree is called at most O(log n) times, with each call being constant time (since we count the recursion as a separate call)
merge is called at most O(log n) times, with each call being constant time (since we count the calls to insTree separately and link is constant time)
=> The entire merge operation is O(log n).
EDIT: By the way, merging binomial heaps is very much like adding binary numbers. A heap of size n will have a tree of rank r if and only if the binary number n has a '1' at the 2^r position. When merging such heaps, you proceed from the lowest rank to highest rank -- or least significant to most significant position. Trees of the same rank need to be linked (the 'ones' added), and inserted / "carried" into the higher rank positions.
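To make the binary-addition analogy concrete, here is a compact Python sketch of the same merge (my own encoding: a tree is a (rank, root, children) tuple and a heap is a rank-ordered list of trees):

```python
def link(t1, t2):
    # Link two trees of equal rank; the smaller root wins ("add two ones").
    (r, x1, c1), (_, x2, c2) = t1, t2
    return (r + 1, x1, [t2] + c1) if x1 <= x2 else (r + 1, x2, [t1] + c2)

def ins_tree(t, ts):
    # Carry propagation: keep linking while the front tree has the same rank.
    if not ts:
        return [t]
    if t[0] < ts[0][0]:
        return [t] + ts
    return ins_tree(link(t, ts[0]), ts[1:])

def merge(ts1, ts2):
    # Like adding binary numbers from the least significant position up.
    if not ts1:
        return ts2
    if not ts2:
        return ts1
    t1, t2 = ts1[0], ts2[0]
    if t1[0] < t2[0]:
        return [t1] + merge(ts1[1:], ts2)
    if t2[0] < t1[0]:
        return [t2] + merge(ts1, ts2[1:])
    return ins_tree(link(t1, t2), merge(ts1[1:], ts2[1:]))
```

Merging a 3-node heap (ranks {0, 1}, binary 11) with a 1-node heap (rank 0, binary 1) yields a single rank-2 tree, mirroring 11 + 1 = 100 in binary.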

Should melding/merging of binomial heaps be done in one pass or two?

Okasaki's implementation in Purely Functional Data Structures (page 22) does it in two: one to merge the forest, and one to propagate the carries. This strikes me as harder to analyze, and also probably slower, than a one-pass version. Am I missing something?
Okasaki's implementation:
functor BinomialHeap (Element:ORDERED):HEAP=
struct
structure Elem=Element
datatype Tree = Node of int*Elem.T*Tree list
type Heap = Tree list
fun link (t1 as Node (r,x1,c1), t2 as Node (_,x2,c2))=
if Elem.leq(x1,x2)
then Node (r+1,x1,t2::c1)
else Node (r+1,x2,t1::c2)
fun insTree (t,[])=[t]
|insTree (t,ts as t'::ts')=
if rank t < rank t' then t::ts else insTree(link(t,t'),ts')
fun insert (x,ts)=insTree(Node(0,x,[]),ts) (*just for reference*)
fun merge (ts1,[])=ts1
|merge ([],ts2)=ts2
|merge (ts1 as t1::ts1', ts2 as t2::ts2')=
if rank t1 < rank t2 then t1::merge(ts1',ts2)
else if rank t2 < rank t1 then t2::merge(ts1,ts2')
else insTree (link(t1,t2), merge (ts1',ts2'))
end
This strikes me as hard to analyze because you have to prove an upper bound on the cost of propagating all the carries (see below). The top-down merge implementation I came up with is much more obviously O(log n) where n is the size of the larger heap:
functor BinomialHeap (Element:ORDERED):HEAP=
struct
structure Elem=Element
datatype Tree = Node of int*Elem.T*Tree list
type Heap = Tree list
fun rank (Node(r,_,_))=r
fun link (t1 as Node (r,x1,c1), t2 as Node (_,x2,c2))=
if Elem.leq(x1,x2)
then Node (r+1,x1,t2::c1)
else Node (r+1,x2,t1::c2)
fun insTree (t,[])=[t]
|insTree (t,ts as t'::ts')=
if rank t < rank t' then t::ts else insTree(link(t,t'),ts')
fun insert (x,ts)=insTree(Node(0,x,[]),ts)
fun merge(ts1,[])=ts1
|merge([],ts2)=ts2
|merge (ts1 as t1::ts1', ts2 as t2::ts2')=
if rank t1 < rank t2 then t1::merge(ts1',ts2)
else if rank t2 < rank t1 then t2::merge(ts1,ts2')
else mwc(link(t1,t2),ts1',ts2')
(*mwc=merge with carry*)
and mwc (c,ts1,[])=insTree(c,ts1)
|mwc (c,[],ts2)=insTree(c,ts2)
|mwc (c,ts1 as t1::ts1', ts2 as t2::ts2')=
if rank c < rank t1
then if rank c < rank t2 then c::merge(ts1,ts2)
else mwc(link(c,t2),ts1,ts2')
else mwc(link(c,t1),ts1',ts2)
end
Proof that Okasaki's implementation is O(log n): if a carry is "expensive" (requires one or more links), then it produces a zero. Therefore the next expensive carry will stop when it reaches that point. So the total number of links required to propagate all the carries is no more than around the total length of the binary representation before propagation, which is bounded above by ceil(log n), where n is the size of the larger heap.

How to find largest common sub-tree in the given two binary search trees?

Two BSTs (binary search trees) are given. How do we find the largest common subtree in these two binary trees?
EDIT 1:
Here is what I have thought:
Let, r1 = current node of 1st tree
r2 = current node of 2nd tree
There are some of the cases I think we need to consider:
Case 1 : r1.data < r2.data
2 subproblems to solve:
first, check r1 and r2.left
second, check r1.right and r2
Case 2 : r1.data > r2.data
2 subproblems to solve:
- first, check r1.left and r2
- second, check r1 and r2.right
Case 3 : r1.data == r2.data
Again, 2 cases to consider here:
(a) current node is part of largest common BST
compute common subtree size rooted at r1 and r2
(b)current node is NOT part of largest common BST
2 subproblems to solve:
first, solve r1.left and r2.left
second, solve r1.right and r2.right
I can think of the cases we need to check, but I am not able to code it as of now. And it is NOT a homework problem, though it may look like one.
Just hash the children and key of each node and look for duplicates. This would give a linear expected time algorithm. For example, see the following pseudocode, which assumes that there are no hash collisions (dealing with collisions would be straightforward):
ret = -1
// T is a tree node, H is a hash set, and first is a boolean flag
hashTree(T, H, first):
if (T is null):
return 0 // leaf case
h = hash(hashTree(T.left, H, first), hashTree(T.right, H, first), T.key)
if (first):
// store hashes of T1's nodes in the set H
H.insert(h)
else:
// check for hashes of T2's nodes in the set H containing T1's nodes
if H.contains(h):
ret = max(ret, size(T)) // size is recursive and memoized to get O(n) total time
return h
H = {}
hashTree(T1, H, true)
hashTree(T2, H, false)
return ret
Note that this is assuming the standard definition of a subtree of a BST, namely that a subtree consists of a node and all of its descendants.
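Here is a runnable Python version of this idea; instead of a numeric hash it uses the (hashable) canonical tuple of each subtree as the set key, which sidesteps collisions entirely at the cost of larger keys. The Node class and names are mine.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def largest_common_subtree(t1, t2):
    """Size of the largest subtree (a node plus ALL its descendants)
    appearing in both trees; 0 if there is none."""
    best = [0]
    seen = set()

    def canon(node, first):
        # Returns (canonical form, subtree size); the canonical form of a
        # subtree determines it completely, playing the role of the hash.
        if node is None:
            return None, 0
        lc, ls = canon(node.left, first)
        rc, rs = canon(node.right, first)
        c, size = (node.key, lc, rc), ls + rs + 1
        if first:
            seen.add(c)          # record every subtree of t1
        elif c in seen:          # whole subtree of t2 also occurs in t1
            best[0] = max(best[0], size)
        return c, size

    canon(t1, True)
    canon(t2, False)
    return best[0]
```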
Assuming there are no duplicate values in the trees:
LargestSubtree(Tree tree1, Tree tree2)
Int bestMatch := 0
Int bestMatchCount := 0
For each Node n in tree1 //should iterate breadth-first
//possible optimization: we can skip every node that is part of each subtree we find
Node n2 := BinarySearch(tree2(n.value))
Int matchCount := CountMatches(n, n2)
If (matchCount > bestMatchCount)
bestMatch := n.value
bestMatchCount := matchCount
End
End
Return ExtractSubtree(BinarySearch(tree1(bestMatch)), BinarySearch(tree2(bestMatch)))
End
CountMatches(Node n1, Node n2)
If (!n1 || !n2 || n1.value != n2.value)
Return 0
End
Return 1 + CountMatches(n1.left, n2.left) + CountMatches(n1.right, n2.right)
End
ExtractSubtree(Node n1, Node n2)
If (!n1 || !n2 || n1.value != n2.value)
Return nil
End
Node result := New Node(n1.value)
result.left := ExtractSubtree(n1.left, n2.left)
result.right := ExtractSubtree(n1.right, n2.right)
Return result
End
To briefly explain, this is a brute-force solution to the problem. It does a breadth-first walk of the first tree. For each node, it performs a BinarySearch of the second tree to locate the corresponding node in that tree. Then using those nodes it evaluates the total size of the common subtree rooted there. If the subtree is larger than any previously found subtree, it remembers it for later so that it can construct and return a copy of the largest subtree when the algorithm completes.
This algorithm does not handle duplicate values. It could be extended to do so by using a BinarySearch implementation that returns a list of all nodes with the given value, instead of just a single node. Then the algorithm could iterate this list and evaluate the subtree for each node and then proceed as normal.
The running time of this algorithm is O(n log m) (it traverses n nodes in the first tree, and performs a log m binary-search operation for each one), putting it on par with most common sorting algorithms. The space complexity is O(1) while running (nothing allocated beyond a few temporary variables), and O(n) when it returns its result (because it creates an explicit copy of the subtree, which may not be required depending upon exactly how the algorithm is supposed to express its result). So even this brute-force approach should perform reasonably well, although as noted by other answers an O(n) solution is possible.
There are also possible optimizations that could be applied to this algorithm, such as skipping over any nodes that were contained in a previously evaluated subtree. Because the tree-walk is breadth-first we know than any node that was part of some prior subtree cannot ever be the root of a larger subtree. This could significantly improve the performance of the algorithm in certain cases, but the worst-case running time (two trees with no common subtrees) would still be O(n log m).
I believe that I have an O(n + m)-time, O(n + m) space algorithm for solving this problem, assuming the trees are of size n and m, respectively. This algorithm assumes that the values in the trees are unique (that is, each element appears in each tree at most once), but they do not need to be binary search trees.
The algorithm is based on dynamic programming and works with the following intuition: suppose that we have some tree T with root r and children T1 and T2, and suppose the other tree is S. Now, suppose that we know the maximum common subtree of T1 and S and of T2 and S. Then the maximum common subtree of T and S either
Is completely contained in T1,
Is completely contained in T2, or
Uses both T1, T2, and r.
Therefore, we can compute the maximum common subtree (I'll abbreviate this as MCS) as follows. If MCS(T1, S) or MCS(T2, S) has the roots of T1 or T2 as roots, then the MCS we can get from T and S is given by the larger of MCS(T1, S) and MCS(T2, S). If exactly one of MCS(T1, S) and MCS(T2, S) has the root of T1 or T2 as a root (assume w.l.o.g. that it's T1), then look up r in S. If r has the root of T1 as a child, then we can extend that tree by a node and the MCS is given by the larger of this augmented tree and MCS(T2, S). Otherwise, if both MCS(T1, S) and MCS(T2, S) have the roots of T1 and T2 as roots, then look up r in S. If it has as a child the root of T1, we can extend the tree by adding in r. If it has as a child the root of T2, we can extend that tree by adding in r. Otherwise, we just take the larger of MCS(T1, S) and MCS(T2, S).
The formal version of the algorithm is as follows:
Create a new hash table mapping nodes in tree S from their value to the corresponding node in the tree. Then fill this table in with the nodes of S by doing a standard tree walk in O(m) time.
Create a new hash table mapping nodes in T from their value to the size of the maximum common subtree of the tree rooted at that node and S. Note that this means that the MCS-es stored in this table must be directly rooted at the given node. Leave this table empty.
Create a list of the nodes of T using a postorder traversal. This takes O(n) time. Note that this means that we will always process all of a node's children before the node itself; this is very important!
For each node v in the postorder traversal, in the order they were visited:
Look up the corresponding node in the hash table for the nodes of S.
If no node was found, set the size of the MCS rooted at v to 0.
If a node v' was found in S:
If neither of the children of v' match the children of v, set the size of the MCS rooted at v to 1.
If exactly one of the children of v' matches a child of v, set the size of the MCS rooted at v to 1 plus the size of the MCS of the subtree rooted at that child.
If both of the children of v' match the children of v, set the size of the MCS rooted at v to 1 plus the size of the MCS of the left subtree plus the size of the MCS of the right subtree.
(Note that step (4) runs in expected O(n) time, since it visits each node in S exactly once, makes O(n) hash table lookups, makes n hash table inserts, and does a constant amount of processing per node).
Iterate across the hash table and return the maximum value it contains. This step takes O(n) time as well. If the hash table is empty (S has size zero), return 0.
Overall, the runtime is O(n + m) time expected and O(n + m) space for the two hash tables.
To see a correctness proof, we proceed by induction on the height of the tree T. As a base case, if T has height zero, then we just return zero because the loop in (4) does not add anything to the hash table. If T has height one, then its single node's value either exists in S or it does not. If it exists in S, then that node can't have any children at all, so we execute branch 4.3.1 and record an MCS of size one; step (6) then reports that the MCS has size one, which is correct. If it does not exist, then we execute 4.2, putting zero into the hash table, so step (6) reports that the MCS has size zero as expected.
For the inductive step, assume that the algorithm works for all trees of height k' < k and consider a tree of height k. During our postorder walk of T, we will visit all of the nodes in the left subtree, then in the right subtree, and finally the root of T. By the inductive hypothesis, the table of MCS values will be filled in correctly for the left subtree and right subtree, since they have height ≤ k - 1 < k. Now consider what happens when we process the root. If the root doesn't appear in the tree S, then we put a zero into the table, and step (6) will pick the largest MCS value of some subtree of T, which must be fully contained in either its left subtree or right subtree. If the root appears in S, then we compute the size of the MCS rooted at the root of T by trying to link it with the MCS-es of its two children, which (inductively!) we've computed correctly.
Whew! That was an awesome problem. I hope this solution is correct!
EDIT: As was noted by @jonderry, this will find the largest common subgraph of the two trees, not the largest common complete subtree. However, you can restrict the algorithm to work only on subtrees quite easily: modify the inner code of the algorithm so that it records a subtree of size 0 unless both subtrees are present with nonzero size. A similar inductive argument will show that this finds the largest complete subtree.
Though, admittedly, I like the "largest common subgraph" problem a lot more. :-)
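A compact Python sketch of the two-hash-table algorithm above (assuming unique values, as stated; the Node class and names are mine). dp[v] holds the size of the MCS rooted exactly at v; as the EDIT notes, this is the "largest common subgraph" variant, since one child may match while the other does not.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def mcs_size(t, s):
    # Step 1: map each value in S to its node (O(m)).
    s_by_value = {}
    def index(node):
        if node:
            s_by_value[node.key] = node
            index(node.left)
            index(node.right)
    index(s)

    dp = {}  # Step 2: node of T -> size of the MCS rooted at that node.

    def postorder(v):
        # Steps 3-4: children are processed before v.
        if v is None:
            return
        postorder(v.left)
        postorder(v.right)
        v2 = s_by_value.get(v.key)
        if v2 is None:
            dp[v] = 0            # step 4.2: v does not occur in S
            return
        total = 1                # step 4.3: v matches v2
        # A child "matches" when the same value sits on the same side;
        # values are unique, so dp[c] was computed against that very node.
        for c, c2 in ((v.left, v2.left), (v.right, v2.right)):
            if c and c2 and c.key == c2.key:
                total += dp[c]
        dp[v] = total

    postorder(t)
    # Step 6: the answer is the best value anywhere in the table.
    return max(dp.values(), default=0)
```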
The following algorithm computes all the largest common subtrees of two binary trees (with no assumption that it is a binary search tree). Let S and T be two binary trees. The algorithm works from the bottom of the trees up, starting at the leaves. We start by identifying leaves with the same value. Then consider their parents and identify nodes with the same children. More generally, at each iteration, we identify nodes provided they have the same value and their children are isomorphic (or isomorphic after swapping the left and right children). This algorithm terminates with the collection of all pairs of maximal subtrees in T and S.
Here is a more detailed description:
Let S and T be two binary trees. For simplicity, we may assume that for each node n, the left child has value <= the right child. If exactly one child of a node n is NULL, we assume the right node is NULL. (In general, we consider two subtrees isomorphic if they are up to permutation of the left/right children for each node.)
(1) Find all leaf nodes in each tree.
(2) Define a bipartite graph B with edges from nodes in S to nodes in T, initially with no edges. Let R(S) and R(T) be empty sets. Let R(S)_next and R(T)_next also be empty sets.
(3) For each leaf node in S and each leaf node in T, create an edge in B if the nodes have the same value. For each edge created from nodeS in S to nodeT in T, add all the parents of nodeS to the set R(S) and all the parents of nodeT to the set R(T).
(4) For each node nodeS in R(S) and each node nodeT in R(T), draw an edge between them in B if they have the same value AND
{
(i): nodeS->left is connected to nodeT->left and nodeS->right is connected to nodeT->right, OR
(ii): nodeS->left is connected to nodeT->right and nodeS->right is connected to nodeT->left, OR
(iii): nodeS->left is connected to nodeT->left and nodeS->right == NULL and nodeT->right == NULL
}
(5) For each edge created in step (4), add their parents to R(S)_next and R(T)_next.
(6) If (R(S)_next) is nonempty {
(i) swap R(S) and R(S)_next and swap R(T) and R(T)_next.
(ii) Empty the contents of R(S)_next and R(T)_next.
(iii) Return to step (4).
}
When this algorithm terminates, R(S) and R(T) contain the roots of all maximal subtrees in S and T. Furthermore, the bipartite graph B identifies all pairs of nodes in S and T that give isomorphic subtrees.
I believe this algorithm has complexity O(n log n), where n is the total number of nodes in S and T, since the sets R(S) and R(T) can be stored in BSTs ordered by value; however, I would be interested to see a proof.

N-ary trees - is it symmetric or not

Given an N-ary tree, find out if it is symmetric about a vertical line drawn through the root node of the tree. It is easy to do in the case of a binary tree; for N-ary trees, however, it seems difficult.
One way to think about this problem is to notice that a tree is symmetric if it is its own reflection, where the reflection of a tree is defined recursively:
The reflection of the empty tree is itself.
The reflection of a tree with root r and children c1, c2, ..., cn is the tree with root r and children reflect(cn), ..., reflect(c2), reflect(c1).
You can then solve this problem by computing the tree's reflection and checking if it's equal to the original tree. This again can be done recursively:
The empty tree is only equal to itself.
A tree with root r and children c1, c2, ..., cn is equal to another tree T iff the other tree is nonempty, has root r, has n children, and has children that are equal to c1, ..., cn in that order.
Of course, this is a bit inefficient because it makes a full copy of the tree before doing the comparison. The memory usage is O(n + d), where n is the number of nodes in the tree (to hold the copy) and d is the height of the tree (to hold the stack frames in the recursion to check for equality). Since d = O(n), this uses O(n) memory. However, it runs in O(n) time since each phase visits each node exactly once.
A more space-efficient way of doing this would be to use the following recursive formulation:
1. The empty tree is symmetric.
2. A tree with n children is symmetric if the first and last children are mirrors, the second and penultimate children are mirrors, etc.
You can then define two trees to be mirrors as follows:
The empty tree is only a mirror of itself.
A tree with root r and children c1, c2,..., cn is a mirror of a tree with root t and children d1, d2, ..., dn iff r = t, c1 is a mirror of dn, c2 is a mirror of dn-1, etc.
This approach also runs in linear time, but doesn't make a full copy of the tree. Consequently, the memory usage is only O(d), where d is the depth of the tree. This is at worst O(n) but is in all likelihood much better.
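A direct Python transcription of this mirror-based formulation for n-ary trees (children kept in a list; the class and names are mine):

```python
class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def mirrors(a, b):
    # a mirrors b iff the roots agree and a's children, read left to
    # right, mirror b's children read right to left.
    return (a is None and b is None) or (
        a is not None and b is not None
        and a.value == b.value
        and len(a.children) == len(b.children)
        and all(mirrors(c, d)
                for c, d in zip(a.children, reversed(b.children))))

def symmetric(root):
    # A tree is symmetric iff it is a mirror of itself; pairing children
    # with their reversal checks (first, last), (second, penultimate), etc.
    return mirrors(root, root)
```

Note that the middle child of an odd-length child list gets paired with itself, which is exactly the requirement that it be its own mirror.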
I would just do a pre-order tree traversal (node, left, right) on the left sub-tree and save it to a list. Then do another traversal (node, right, left) on the right sub-tree and save it to a list. Then you can just compare the two lists; they should be the same.
Take a stack.
Start traversing from the root node: recursively call a function that pushes the elements of the left subtree one by one, level by level.
Maintain a global variable and update its value each time a left-subtree element is pushed onto the stack. Then (after the recursive call on the left subtree) recursively process the right subtree, popping on each correct match. Doing this ensures the tree is checked in a symmetric manner.
At the end, if the stack is empty, i.e. all elements have been processed and every pushed element has been popped, you are through!
One more way to answer this question is to look at each level and check whether it is a palindrome. For null children, we can add dummy nodes carrying some unique sentinel value.
It's not difficult. I'm going to play golf with this question. I got 7... anyone got better?
data Tree = Tree [Tree]
symmetrical (Tree ts) =
(even n || symmetrical (ts !! m)) &&
all mirror (zip (take m ts) (reverse $ drop (n - m) ts))
where { n = length ts; m = n `div` 2 }
mirror (Tree xs, Tree ys) =
length xs == length ys && all mirror (zip xs (reverse ys))
