Why is External Path Length == Internal Path Length + 2 * (Internal Nodes)?

I want to understand this intuitively, not mathematically.
Facts I know:
This is for a strictly binary tree (every node has 0 or 2 children).
The number of leaf nodes is either half of the total nodes or 1 more
(meaning it is the same as the number of internal nodes, or 1 more).
But using just these 2 facts, I'm unable to picture this formula. I'd appreciate your view and help.
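For a concrete check (not a proof), here is a small Python sketch; the nested-tuple encoding of the tree is just an illustration of my own:

    # A strictly binary tree as nested tuples: internal node = (left, right),
    # leaf = None. This one has 2 internal nodes and 3 leaves.
    tree = ((None, None), None)

    def path_lengths(node, depth=0):
        # Returns (internal path length, external path length).
        if node is None:
            return (0, depth)              # a leaf contributes its depth to E
        i_left, e_left = path_lengths(node[0], depth + 1)
        i_right, e_right = path_lengths(node[1], depth + 1)
        return (depth + i_left + i_right, e_left + e_right)

    i, e = path_lengths(tree)              # i = 1, e = 5, and 5 == 1 + 2 * 2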

Related

How to solve this with a simple forward-backward algorithm?

I've been playing around with the forward-backward algorithm to find the most efficient path (as determined by a cost function that depends on how the current state differs from the next) to go from State 1 to State N. In the picture below, a short version of the problem is shown with just 3 States and 2 Nodes per State. I run the forward-backward algorithm on it and find the best path as normal. The red bits in the picture are the paths checked during the forward-propagation part of the code.
Now the interesting bit: I now want to find the best 3-State-length path (as before), but now only the Nodes in the first State are known. The other 4 are free-floating and can be considered to be in any State (State 2 or State 3). I want to know if you have a good idea of how to do this.
Picture: http://i.imgur.com/JrQ2tul.jpg
Note: Bear in mind the original problem consists of around 25 States and 100 Nodes per State. So you'll know the State of around 100 Nodes in State 1, but the other 24*100 Nodes are stateless. In this case, I want to find a 25-State-length path with minimum cost.
Addendum: Someone pointed out that a better algorithm would be the Viterbi algorithm. So here is the problem with more variables thrown in. Can you explain how that would be implemented? The same rules apply: the path should start from one of the Nodes in State 1 (Node a or Node b). Also, a cost function using the norm doesn't make much sense in this case, since we only have one property (the size of a node), but in the actual problem I'm expecting many more properties.
A variation of Dijkstra's algorithm might be faster for your problem than the forward-backward algorithm, because it does not analyze all nodes at once. Dijkstra's is a DP algorithm, after all.
Let a node be specified by:

    Node:
        predecessor  : Node
        totalCost    : Number
        visitedNodes : Set of nodes (e.g. a hash set or other performant set)

Initialize the algorithm with:

    open set : set of nodes, ordered by totalCost
             = the set of possible start nodes ( = {a, b} in your example),
               each with visitedNodes set to the one-element set containing itself
Then execute the algorithm:

    do
        n := pop element from open set
        if n.visitedNodes.count == stepTarget
            we're done; backtrace the path from this node via predecessor
        else
            for each n2 in available nodes
                if n2 not in n.visitedNodes
                    push a copy of n2 to the open set (the same node may
                    appear multiple times in the set) with:
                        .totalCost    := n.totalCost + norm(n2 - n)
                        .visitedNodes := n.visitedNodes u { n2 }   // u = set union
                        .predecessor  := n
            next
    loop
If calculating the norm is expensive, you might want to calculate it on demand and store it in a map.
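For reference, a minimal runnable sketch of this search in Python (the names best_path, cost, and step_target are mine, not from the original post; cost takes both endpoints instead of a norm of a difference):

    import heapq
    import itertools

    def best_path(start_nodes, all_nodes, cost, step_target):
        # Each heap entry: (total cost, tie-breaker, path so far, visited set).
        # The tie-breaker keeps heapq from ever comparing paths directly.
        counter = itertools.count()
        heap = [(0.0, next(counter), [s], frozenset([s])) for s in start_nodes]
        heapq.heapify(heap)
        while heap:
            total, _, path, visited = heapq.heappop(heap)
            if len(visited) == step_target:
                return total, path               # cheapest complete path
            for n2 in all_nodes:
                if n2 not in visited:
                    heapq.heappush(heap, (total + cost(path[-1], n2),
                                          next(counter),
                                          path + [n2], visited | {n2}))
        return None                              # no path of the requested length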

What does the beam size represent in the beam search algorithm?

I have a question about the beam search algorithm.
Let's say that n = 2 (the number of nodes we are going to expand from every node). So, at the beginning, we only have the root, and we expand 2 nodes from it. Then, from each of those two nodes, we expand two more. So, at that moment, we have 4 leaves. We continue like this until we find the answer.
Is this how beam search works? Does it expand only n = 2 children of every node, or does it keep 2 leaf nodes at all times?
I used to think that n = 2 means we should have at most 2 active nodes from each node, not two for the whole tree.
In the "standard" beam search algorithm, at every step, the total number of the nodes you currently "know about" is limited - and NOT the number of nodes you will follow from each node.
Concretely, if n = 2, it means that the "beam" will be of size at most 2, at all times. So, initially, you start from one node, then you discover all nodes that are reachable from it, but discard all of them but two, and finish step 1 with 2 nodes. At step 2, you have two nodes, and you will expand both, and discard all nodes again, except exactly 2 nodes (total, not from each!). In the next steps, similarly, you will keep 2 nodes after each step.
Choosing which node to keep is usually done by some heuristic function that evaluates which node is closest to the target.
Note that the beam search algorithm is not complete (i.e., it may not find a solution even if one exists) nor optimal (i.e., it may not find the best solution). The best way to see this is to notice that when n = 1, it basically reduces to greedy hill climbing.
In beam search, instead of choosing the best token to generate at each timestep, we keep k possible tokens at each step. This fixed-size memory footprint k is called the beam width, on the metaphor of a flashlight beam that can be parameterized to be wider or narrower.
Thus at the first step of decoding, we compute a softmax over the entire vocabulary, assigning a probability to each word. We then select the k best options from this softmax output. These initial k outputs are the search frontier, and these k initial words are called hypotheses. A hypothesis is an output sequence, a translation-so-far, together with its probability.
At subsequent steps, each of the k best hypotheses is extended incrementally by being passed to distinct decoders, which each generate a softmax over the entire vocabulary to extend the hypothesis to every possible next token. Each of these k * V hypotheses is scored by P(y_i | x, y_{<i}): the probability of the current word choice multiplied by the probability of the path that led to it. We then prune the k * V hypotheses down to the k best hypotheses, so there are never more than k hypotheses at the frontier of the search, and never more than k decoders.
The beam size (or beam width) is the k mentioned above.
Source: https://web.stanford.edu/~jurafsky/slp3/ed3book.pdf
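As a rough illustration of that pruning loop, here is a toy sketch in Python (the step_logprobs scorer and vocab are hypothetical stand-ins for a real model's softmax output):

    def beam_search_decode(step_logprobs, vocab, k, max_len):
        # Each hypothesis: (sequence so far, cumulative log-probability).
        beams = [([], 0.0)]
        for _ in range(max_len):
            candidates = []
            for seq, score in beams:
                logprobs = step_logprobs(seq)     # {token: log P(token | seq)}
                for tok in vocab:
                    candidates.append((seq + [tok], score + logprobs[tok]))
            # Prune the k * V candidates down to the k best hypotheses.
            beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:k]
        return beams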

Skip list - I really need an explanation of how it inserts and deletes

I really don't understand the probability aspect of this list. In addition, there is the statement: "we have to examine no more than n/2 + 1 nodes (where n is the length of the list). Also giving every fourth node a pointer four ahead (Figure 1c) requires that no more than n/4 + 2 nodes be examined".
I read this statement in the following link: ftp://ftp.cs.umd.edu/pub/skipLists/skiplists.pdf
What you're not understanding is that every node has a link at level 1. That is, at the lowest level, the data structure is essentially a linked list. Searching for a node using this is of course an O(n) operation.
Each node of the skip list has at least one link: the one at level 1. On average, half of the nodes also have a link at level 2. If this was the highest level at which a link existed, then you could find an arbitrary node in O(n/2). Basically, you follow the 2nd level nodes until either you find the item you're looking for, or until you get to a node whose value is larger than the one you're looking for. At that point, you step down to the level 1 nodes and search forward from the previous node (i.e. the one that's less than the one you're looking for).
Again on average, 1/4 of the nodes have a link at level 3. Using these, you can find an arbitrary node in O(n/4). You first search the level 3 nodes until you either find the node or go past it, then drop down to the level 2 nodes from that point, and to the level 1 nodes if you don't find the node at level 2.
If you follow the math, you can see that if your maximum level is m, then as long as you have less than 2^m nodes in the skip list, your amortized average search time will be O(log2(n)), where n is the number of items in the list.
So the structure of a skip list node is like this:

    struct SkiplistNode
    {
        int level;                  // number of forward links in this node
        SomeType data;              // the data held in the node
        SkiplistNode *forwards[];   // an array of 'level' forward references
    };
If a node has a level value of 1, then there will be just one item in the forwards array. If it's at level 4, then there will be four entries: one for each of levels 4, 3, 2, and 1.
As it turns out, the average size of the forwards array is 2. That follows from the series 1 + 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... = 2. That is:
Every node has a link at level 1
1/2 of the nodes have a link at level 2
1/4 of the nodes have a link at level 3
1/8 of the nodes have a link at level 4
etc.
Is that more clear now?
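To make the search procedure concrete, here is a small Python sketch (the node layout with data and forwards mirrors the struct above; the sentinel head node, with a forward link at every level, is my own assumption):

    def skiplist_search(head, target):
        # Start at the highest level and work down, as described above.
        node = head
        for level in range(len(head.forwards) - 1, -1, -1):
            # Follow level links while the next node is still < target.
            while (node.forwards[level] is not None
                   and node.forwards[level].data < target):
                node = node.forwards[level]
        node = node.forwards[0]     # candidate node at the bottom level
        return node if node is not None and node.data == target else None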
Skip lists are explained quite well in their Wikipedia article. If you have a specific question about the data structure itself, feel free to ask it, though.
Lecture from MIT about skip lists: http://video.google.com/videoplay?docid=-6710586843601387849#
A somewhat easier-to-understand explanation can be found here: http://igoro.com/archive/skip-lists-are-fascinating/

Sequentially Constructing Full B-Trees

If I have a sorted set of data which I want to store on disk in a way that is optimal both for reading sequentially and for doing random lookups, it seems that a B-Tree (or one of its variants) is a good choice, presuming the data set does not all fit in RAM.
The question is can a full B-Tree be constructed from a sorted set of data without doing any page splits? So that the sorted data can be written to disk sequentially.
Constructing a "B+ tree" to those specifications is simple.
Choose your branching factor k.
Write the sorted data to a file. This is the leaf level.
To construct the next highest level, scan the current level and write out every kth item.
Stop when the current level has k items or fewer.
Example with k = 2:
0 1|2 3|4 5|6 7|8 9
0 2 |4 6 |8
0 4 |8
0 8
Now let's look for 5. Use binary search to find the last number less than or equal to 5 in the top level, which is 0. Then look at the interval in the next lowest level corresponding to 0:
0 4
Now 4:
4 6
Now 4 again:
4 5
Found it. In general, the jth item corresponds to items jk through (j+1)k - 1 at the next level. You can also scan the leaf level linearly.
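A small Python sketch of that construction and lookup (the function names are mine):

    import bisect

    def build_levels(sorted_keys, k):
        # Leaf level is the sorted data; each higher level keeps
        # every k-th key of the level below, as described above.
        levels = [list(sorted_keys)]
        while len(levels[-1]) > k:
            levels.append(levels[-1][::k])
        return levels[::-1]                  # top level first

    def lookup(levels, target, k):
        lo, hi = 0, len(levels[0])           # search window in the current level
        for depth, level in enumerate(levels):
            # Index of the last key <= target inside the window.
            j = bisect.bisect_right(level, target, lo, hi) - 1
            if j < lo:
                return False                 # target is smaller than every key
            if depth == len(levels) - 1:
                return level[j] == target    # reached the leaf level
            lo = j * k                       # narrow to the k-wide interval below
            hi = min(lo + k, len(levels[depth + 1]))
        return False

With the k = 2 example above, lookup(build_levels(range(10), 2), 5, 2) follows the same 0 -> 4 -> 4 -> 5 path.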
We can make a B-tree in one pass, but it may not be the optimal storage method. Depending on how often you make sequential queries vs random access ones, it may be better to store it in sequence and use binary search to service a random access query.
That said: assume that each node in your B-tree holds (m - 1) keys (m > 2; the binary case is a bit different). We want all the leaves on the same level, and all the internal nodes to have at least (m - 1) / 2 keys. We know that a full B-tree of height k has (m^k - 1) keys. Assume that we have n keys in total to store. Let k be the smallest integer such that m^k - 1 >= n. Now if 2 m^(k - 1) - 1 < n, we can completely fill up the inner nodes and distribute the rest of the keys evenly to the leaf nodes, each leaf node getting either the floor or the ceiling of (n + 1 - m^(k - 1)) / m^(k - 1) keys. If we cannot do that, then we know that we have enough to fill all of the nodes at depth k - 1 at least halfway and store one key in each of the leaves.
Once we have decided the shape of our tree, we need only do an inorder traversal of the tree sequentially dropping keys into position as we go.
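As a sketch of the first step of that sizing argument (picking the height), in Python:

    def btree_height(n, m):
        # Smallest k such that a full m-way B-tree of height k
        # (which holds m**k - 1 keys) can fit all n keys.
        k, capacity = 1, m - 1
        while capacity < n:
            k += 1
            capacity = m ** k - 1
        return k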
"Optimal" here means that an inorder traversal of the data will always seek forward through the file (or mmapped region), and a random lookup is done in a minimal number of seeks.

How many elements can be held in a B-tree of order n?

Is it 2n? Just checking.
Terminology
The Order of a B-Tree is inconsistently defined in the literature.
(see for example the terminology section of Wikipedia's article on B-Trees)
Some authors consider it to be the minimum number of keys a non-leaf node may hold, while others consider it to be the maximum number of child nodes a non-leaf node may hold (which is one more than the maximum number of keys such a node could hold).
Yet many others skirt the ambiguity by assuming a fixed-length key (and fixed-size nodes), which makes the minimum and maximum the same; hence the two definitions of the order produce values that differ by 1 (as said, the number of keys is always one less than the number of children).
I define depth as the number of nodes found on the search path to a leaf record, inclusive of both the root node and the leaf node. In that sense, a very shallow tree with only a root node pointing directly to leaf nodes has depth 2. If that tree were to grow and require an intermediate level of non-leaf nodes, its depth would be 3, etc.
How many elements can be held in a B-Tree of order n?
Assuming fixed length keys, and assuming that "order" n is defined as the maximum number of child nodes, the answer is:
(Average Number of elements that fit in one Leaf-node) * n ^ (depth - 1)
How do I figure?...:
The data (the "elements") is only held in leaf nodes. So the number of element held is the average number of elements that fit in one node, times the number of leaf nodes.
The number of leaf nodes is itself driven by the number of children that fit in a non-leaf node (the order). For example, the non-leaf node just above a leaf node points to n (the order) leaf nodes. Then the non-leaf node above that one points to n similar nodes, etc., hence "to the power of (depth - 1)".
Note that the formula above generally holds using averages (of keys held in a non-leaf node, and of elements held in a leaf node) rather than assuming fixed key lengths and fixed record lengths: trees will typically have a node size that is commensurate with the key and record sizes, and hence hold a number of keys or records big enough that the effective number held in any node varies relatively little compared with the average.
Example:
A tree of depth 4 (a root node, two levels of non-leaf nodes, and one level [obviously] of leaf nodes), of order 12 (non-leaf nodes can hold up to 11 keys, hence point to 12 nodes below them), and such that leaf nodes can contain 5 elements each, would:
- have its root node point to 12 nodes below it
- have each of those nodes point to 12 nodes below them (hence there will be 12 * 12 nodes in layer "3", assuming the root is layer 1, etc.; this numbering, btw, is also ambiguously defined...)
- have each node in layer "3" point to 12 leaf nodes (hence there will be 12 * 12 * 12 leaf nodes)
- have each leaf node hold 5 elements (in this example)
Hence such a tree will hold:
Nb of elements in said tree = 5 * 12 * 12 * 12
                            = 5 * (12 ^ 3)
                            = 5 * (12 ^ (depth - 1))
                            = 8640
Recognize the formula on the 3rd line.
What is generally remarkable about B-Trees, and what makes them popular, is that a relatively shallow tree (one with a limited number of "hops" between the root and the sought record) can hold a relatively high number of records: that number is multiplied by the order at each level.
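A quick sanity check of that formula in Python (the function name is mine):

    def btree_capacity(order, depth, elements_per_leaf):
        # Elements held by a full tree: leaf capacity * order^(depth - 1).
        return elements_per_leaf * order ** (depth - 1)

    assert btree_capacity(order=12, depth=4, elements_per_leaf=5) == 8640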
My book says that the order of a B-tree is the maximum number of pointers that can be stored in a node. (p. 348) The number of "keys" is one less than the order. So a B-tree of order n can hold n-1 elements.
The book is "File Structures", second edition, by Michael J. Folk.
If your formula for the number of elements doesn't include an exponentiation somewhere, you've done it wrong.
A binary tree of order 5 can hold 2^0 + 2^1 + 2^2 + 2^3 + 2^4 elements, i.e. 31 (which is 2^order - 1).
Edit:
I appear to have gotten order and depth / length mixed up. What on earth is the order of a binary tree? You appear to discuss B-trees as if they don't, by the very nature of their definition, hold a maximum of two child elements per element.
If the order of a B-tree is m, it means a node can hold at most m - 1 keys; after that, the node splits.
For example, if the order is 3, then a node can hold at most 2 keys: on the arrival of a 3rd key, the node splits, following the property of a binary search tree / self-balancing tree.
