Today I was studying AVL trees in Data Structures but got stuck on understanding LR and RL rotations. LL and RR rotations are quite intuitive and easy to remember, but it seems to me that LR and RL rotations do not follow common sense, so I am having a hard time remembering them. Should these rotations simply be memorized, or is there a way to understand them? The book I am reading (Data Structures by Seymour Lipschutz) says an LR rotation is a combination of an RR rotation followed by an LL rotation, but I am unable to connect the two. Here is the picture depicted in that book:
What is happening between the second image and the final image? Please explain, if possible, with this picture. I think if I understand LR, I will automatically understand RL, as the two are mirror images of one another.
You don't understand it because it isn't correct as pictured. It's not even a valid binary search tree: 37 cannot be the right (greater) child of 76, because it is smaller.
Initial insert
└── 44
├── (L) 30
│ ├── (L) 16
│ └── (R) 39
│ └── (L) 37
└── (R) 76
After a left rotation on (30): (39) gets rotated into its parent's (30) spot, and (39)'s left child (37) becomes (30)'s right child.
└── 44
├── (L) 39
│ └── (L) 30
│ ├── (L) 16
│ └── (R) 37
└── (R) 76
After a right rotation on (39): (39) is at the top of the tree and (44) becomes its right child.
└── 39
├── (L) 30
│ ├── (L) 16
│ └── (R) 37
└── (R) 44
└── (R) 76
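To see how the two single rotations compose, here is a minimal Python sketch of the same steps (the Node class and function names are illustrative, not from the book):

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_left(x):
    # x's right child y moves up; y's left subtree becomes x's right subtree
    y = x.right
    x.right = y.left
    y.left = x
    return y              # y is the new root of this subtree

def rotate_right(x):
    # mirror image: x's left child y moves up; y's right subtree becomes x's left subtree
    y = x.left
    x.left = y.right
    y.right = x
    return y

def rotate_left_right(z):
    # LR case: first a left rotation at z's left child, then a right rotation at z itself
    z.left = rotate_left(z.left)
    return rotate_right(z)

# The tree from the example above, unbalanced after inserting 37:
root = Node(44, Node(30, Node(16), Node(39, Node(37))), Node(76))
root = rotate_left_right(root)
print(root.key, root.left.key, root.right.key)   # 39 30 44

Running this reproduces the final tree shown above, with 39 at the top and 44 as its right child.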
Given a square grid with side N and a set of input points on the grid, I need to find another set of points on the grid (there can be multiple) that have the maximal Manhattan distance from the given input points.
For example with N = 4 and input [p1:{0, 0}, p2:{3, 3}] the output should be [{0, 3}, {1, 2}, {2, 1}, {3, 0}] with the distance equal to 3.
0 1 2 3
┌───┬───┬───┬───┐
0 │p1 │ │ │ x │
├───┼───┼───┼───┤
1 │ │ │ x │ │
├───┼───┼───┼───┤
2 │ │ x │ │ │
├───┼───┼───┼───┤
3 │ x │ │ │p2 │
└───┴───┴───┴───┘
My first attempt was a simple brute-force iteration: for every point in the grid, calculate the Manhattan distance to every input point, take the minimum, and finally take the maximum of these minimums. This of course works, but it is slow for large N and many input points.
In my second attempt, I first built a k-d tree. Then I iterated almost the same way as before, with the difference that now I don't calculate the distance to every input point, only to the closest one (or ones). This helped a bit, but still I was told there's a better algorithm.
You should use a breadth-first search, starting with all the given points, to simultaneously find the distance of every cell from its nearest point. This takes linear time and is pretty easy to code.
Then find the highest distance (or just remember it, since it will be the last one you wrote), and return all the cells with that distance.
That will work pretty well if your points aren't too sparse. If they're separated by vast distances, then you'll need an algorithm that grows with the number of points instead of the size of the grid. That would be based on calculating the Manhattan-distance Voronoi diagram, since all the points you want to return are on it.
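A rough Python sketch of the multi-source BFS described above (the function name and grid representation are my own):

from collections import deque

def farthest_cells(n, points):
    # dist[r][c] = Manhattan distance from (r, c) to its nearest input point
    dist = [[-1] * n for _ in range(n)]
    queue = deque()
    for r, c in points:                      # seed the BFS with every input point at distance 0
        dist[r][c] = 0
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < n and 0 <= nc < n and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1
                queue.append((nr, nc))
    best = max(max(row) for row in dist)
    return best, [(r, c) for r in range(n) for c in range(n) if dist[r][c] == best]

print(farthest_cells(4, [(0, 0), (3, 3)]))
# (3, [(0, 3), (1, 2), (2, 1), (3, 0)]) -- matches the example in the question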
I am trying to build a BST based on an array.
My question is: once I have a new value (TreeNode value) and want to insert it into my BST and return the level of this TreeNode after its insertion, is there a (quick) way to know its level/depth? Moreover, is there a way to know its level without actually building the BST? For example, I have an array [3,1,2] and I want to insert 4: is there a way to know what the depth/level of 4 will be without actually building the BST and searching for 4 to get its depth?
Thank you.
(If you don't mind, Python code is preferred. Sorry for any inconvenience.)
A complete BST has 2^k-1 nodes for some k>0. The k-bit binary fractions between 0 and 1 (not inclusive), listed in ascending order with trailing zeros removed, serve as "instructions" for deciding where in the tree to place each element of the sorted list. For example, if k=3, then the tree has 7 nodes. The binary fractions are
001
01
011
1
101
11
111
Now suppose the sorted list to be placed in the tree is [10,20,30,40,50,60,70]. Interpret a 0 to mean "go left" and every 1 except the final one to mean "go right." The final 1 means "stop here." We start at the root. The match-up is
001 <-> 10
01 <-> 20
011 <-> 30
1 <-> 40
101 <-> 50
11 <-> 60
111 <-> 70
Following the "instructions", you'll end up with the list elements in the tree as follows:
40
20 60
10 30 50 70
Voila! A BST. This works for all k. Proving why is an interesting task! I will leave it to you to turn this observation into a working algorithm. There are several ways to do that.
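One possible way to turn the observation into code, as a sketch (I am choosing a heap-style array layout, root at index 1, purely for illustration):

def complete_bst(sorted_vals):
    n = len(sorted_vals)
    k = n.bit_length()                            # n must be 2**k - 1
    assert n == 2 ** k - 1, "length must be 2**k - 1"
    tree = [None] * (n + 1)                       # index 0 unused; children of i are 2*i and 2*i + 1
    for pos, val in enumerate(sorted_vals, start=1):
        bits = format(pos, "0{}b".format(k)).rstrip("0")   # k-bit binary, trailing zeros removed
        node = 1                                  # start at the root
        for b in bits[:-1]:                       # every bit except the final 1
            node = 2 * node if b == "0" else 2 * node + 1
        tree[node] = val                          # the final 1 means "stop here"
    return tree

print(complete_bst([10, 20, 30, 40, 50, 60, 70])[1:])
# [40, 20, 60, 10, 30, 50, 70] -- level order of the tree shown above

Note that the depth of each element is simply len(bits) - 1, which also gives the "level without building the tree" asked about, at least for the complete-BST case this answer describes.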
I was working on Normalized Cross Correlation for template matching in the spatial domain.
While the method is slow, it works well enough for my purpose. But I saw a weird thing in there. Let me explain the situation below:
pattern image:        source image:
91 91                 91 91  9  9
91 91                 91 91  9  9
                       8  6  7  8
Now, when NCC goes through this, it finds the mean of the template image as 91 and the mean of the underlying source-image region also as 91, and then it subtracts the mean from each pixel's intensity, which takes essentially all terms in the formula to zero, resulting in an undefined correlation value. So no matches are found, even when there is a perfect match!
How to get around this situation?
I am using the normalized cross-correlation formula from the excellent source by J. P. Lewis.
Also, when I modified the formula to subtract (mean/2) from every pixel intensity, it seemed to work fine, but I am concerned about how vulnerable this new correlation coefficient is to illumination changes.
Edit: Things got even worse when I took a 1 x 1 pattern image that had multiple occurrences in the source image. Using the modified version above, I was unable to find appropriate matches. I would love to look into the various workarounds many of you might be using. Thanks!
The idea of the normalized cross correlation is that the similarity doesn't change if you add an arbitrary number to every pixel or multiply every pixel by an arbitrary (non-negative) number. Now take any 2x2 pixel area in the search image, e.g.
91 9
6 7
Multiply this by 0 and add 91 - and you have a perfect match. So in a nutshell: You can't match a "flat" template using normalized cross-correlation. Or a "flat" area in the search image, either.
Note that this isn't a "bug" in the normalized cross correlation. The effect you're seeing makes perfect sense. Imagine someone hands you a completely black photograph and asks you what you see. Your answer wouldn't be "I have a perfect match for the Batmobile, because it's completely black"; your answer would be "I can't tell, there's too little contrast in the image". Which is precisely what the NCC is trying to tell you with the division by zero.
Also, when I modified the formula to subtract (mean/2) from every pixel intensity
You mean, you replaced the means in the numerator and denominator by mean/2? That doesn't sound like a good idea. If the template or search image area contains only zeros, you will still get a division by zero. More importantly, you're calculating a quantity that has no real meaning (at least none that I can think of). For example, the average brightness of the search area region will influence the matching result.
I would love to look into various work arounds [...]
The obvious ad-hoc workaround would be to add a small quantity to the denominator, so a "flat" area in the search image won't lead to division by zero. Then you get (more or less) a similarity measure that doesn't change if you add an arbitrary number to every pixel or multiply every pixel by an arbitrary (non-negative) number, unless that number is very close to 0.
But that would give you a 0 match for any flat search area region or flat template. Which (as explained above) makes perfect sense. If you want different behavior in this case, you don't want a normalized cross correlation.
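A small NumPy sketch of that ad-hoc workaround (the epsilon value, array sizes, and function name are my own choices):

import numpy as np

def ncc_with_epsilon(patch, template, eps=1e-6):
    # subtract each array's own mean, then normalize; eps keeps a flat region from giving 0/0
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    num = (p * t).sum()
    den = np.sqrt((p * p).sum() * (t * t).sum()) + eps
    return num / den

flat = np.full((2, 2), 91)
print(ncc_with_epsilon(flat, flat))                      # ~0.0: flat vs flat is "no information", not a match
print(ncc_with_epsilon(np.array([[91, 9], [6, 7]]),
                       np.array([[91, 9], [6, 7]])))     # ~1.0: a genuine perfect match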
An alternative similarity measure might be the squared Euclidean distance. You can optionally subtract the mean of the template and search area before calculating the difference. Then you get a similarity measure that doesn't change if you add an arbitrary number to every pixel. But it will change if you multiply every pixel by some value.
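And a corresponding sketch of the mean-subtracted squared Euclidean distance (again just an illustration; here lower values mean more similar, and a flat patch against a flat template now scores as a perfect match):

import numpy as np

def mean_subtracted_ssd(patch, template):
    # removing each array's own mean makes the score invariant to an added constant,
    # but not to multiplying the pixels by some factor
    p = patch.astype(float) - patch.mean()
    t = template.astype(float) - template.mean()
    return ((p - t) ** 2).sum()

flat = np.full((2, 2), 91.0)
print(mean_subtracted_ssd(flat, flat))                          # 0.0: flat matches flat here
print(mean_subtracted_ssd(np.array([[91, 9], [6, 7]]), flat))   # large: contrast differences are penalized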
I am learning 2-3 trees at school, and after trying to find examples of how to insert into and build a 2-3 tree, the answers I found were different from what I learned. I want a 2-3 tree with m-1 keys per node, like the following one. I know the answer, but I don't know how to build it. Can someone please show me how to build it using these elements, which I got in this 2-3 tree, and where to begin?
45_
14 25 50_
1 3_ 14 17 _ 25 27 30 45 _ _ 50 57 _
A 2-3 tree can have a different number of elements in a particular node. The possible number of children each node can have is 2 or 3.
Now if the parent consists of one element and has 2 children, like
   (a)
   / \
 (b) (c)
Then b < a < c, which is essentially what happens in a binary search tree. If the parent consists of 2 elements (a, b) and the children are q, w, e, then q < a, a < w < b, and e > b.
These are the conditions that have to be checked when you are inserting elements into a 2-3 tree. This will help you a lot. :)
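A small Python sketch that checks exactly those ordering conditions (the node representation, with each child summarized by the (min, max) range of keys in its subtree, is my own choice for illustration):

def valid_23_ordering(keys, children):
    # keys: [a] or [a, b]; children: [] for a leaf, otherwise one (lo, hi) range per child
    if not children:
        return True
    if len(children) != len(keys) + 1:
        return False
    # child 0 lies entirely below keys[0], child i lies between keys[i-1] and keys[i],
    # and the last child lies entirely above keys[-1]
    bounds = [float("-inf")] + list(keys) + [float("inf")]
    return all(bounds[i] < lo and hi < bounds[i + 1]
               for i, (lo, hi) in enumerate(children))

print(valid_23_ordering([20], [(5, 15), (25, 30)]))                  # True:  b < a < c
print(valid_23_ordering([20, 40], [(5, 15), (25, 35), (45, 60)]))    # True:  q < a, a < w < b, e > b
print(valid_23_ordering([20, 40], [(5, 15), (45, 60), (25, 35)]))    # False: middle child out of range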
There is an implementation of a 2-3 tree at Implementing a 2 3 Tree in C++.
I am reading a book on data structures and it says that a left balanced binary tree is a tree in which the leaves only occupy the leftmost positions in the last level.
This seemed a little vague to me. Does this mean that leaves are only on the left side of a root and are distributed throughout the whole level, or that leaves exist only on the left side of the entire tree? What exactly constitutes left-balanced?
I am not sure if my guess even covers any of the answer, so if anyone could help, it would be greatly appreciated :-).
You can think of a left-balanced binary tree as a balanced binary tree where the left sub-tree of each node is filled before the right sub-tree. In more informal terms, this is a tree wherein the nodes at the bottom-most level are all on the left side of the entire tree.
Take this tree for example:
            50
          /    \
       17        72
     /    \     /  \
   12      23  54   76
  /  \    /      \
 9    14 19       67
This tree is balanced, but not left-balanced. If node 67 were removed, however, the tree would be left-balanced.
It seems to me that the definition of a left-balanced binary tree is the same as that of a complete binary tree.
I don't really know the answer, but based on the description in the book it sounds to me like this...
For starters, let's think of it this way. Every "row" in the tree has a number, starting at zero and counting up. The row with the highest number is considered to be the last level.
Remember that leaf nodes are nodes without any children. So the tree meets the condition that every leaf node in the last level must be on the left, like so:
        50
       /  \
      /    \
    35      70
   /  \    /  \
  10  34  57  90
  /   /       /
 9   7       78
If we were to add a "98" as the right-child of 90 or a "59" as the right-child of 57, then this tree would no longer be left-balanced.
Edit: After reading Brandon E Taylor's answer, you should definitely not choose this one. After looking it over and reading the description again, not only does his make more sense, but it fits the description much better.
In addition to Brandon E Taylor's answer: a left-balanced binary tree is a binary tree that, when represented in an array, has no gaps between the elements representing the tree nodes.
For a tree of size 6, for example, here are some possible array representations, with the following criteria:
1- let _ denote an empty slot in the array
2- let i be the index of a slot in the array (i is 1-based)
3- for any parent at index i, the left child is at index (2*i) and the right child is at index (2*i)+1
1 2 3 4 _ _ <- left balanced
6 3 2 4 1 <- left balanced
4 2 1 _ 6 <- not left balanced
To elaborate further, let's represent user265312's answer:
50 35 70 10 34 57 90 9 _ 7 _ _ _ 78 _ <- not left balanced
Meanwhile, Brandon's answer (after removing node 67) doesn't violate the left-balancing rule:
50 17 72 12 23 54 76 9 14 19 _ _ _ _ _ <- left balanced
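A quick sketch of that rule in code (my own tuple representation of nodes, (value, left, right), with None for a missing child):

def is_left_balanced(root):
    # number the root 1 and the children of node i as 2*i and 2*i + 1;
    # the tree is left-balanced exactly when the used indices are 1..n with no gaps
    if root is None:
        return True
    stack, count, max_index = [(root, 1)], 0, 0
    while stack:
        (_, left, right), i = stack.pop()
        count += 1
        max_index = max(max_index, i)
        if left is not None:
            stack.append((left, 2 * i))
        if right is not None:
            stack.append((right, 2 * i + 1))
    return max_index == count

# Brandon's tree with node 67 removed (the array 50 17 72 12 23 54 76 9 14 19 above):
tree = (50,
        (17, (12, (9, None, None), (14, None, None)),
             (23, (19, None, None), None)),
        (72, (54, None, None),
             (76, None, None)))
print(is_left_balanced(tree))   # True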