What is pointer-chasing and how is it related to the BFS algorithm?

While reading a PPT on BFS (Breadth First Search), I found that BFS can be used where we have "pointer-chasing". What exactly is pointer chasing and how is it related to BFS?

Pointers imply a graph on your data. BFS (breadth-first search) is an algorithm for searching that graph.
Pointer chasing is just another word for following lots of pointers.

From the hardware (CPU) perspective, pointer-chasing is bad for performance because the memory reads are effectively serialized inside the CPU (i.e. no instruction-level parallelism across them). You can't start a read (i.e. a load instruction) until the prior one is done, since the prior load produces the address for the next load, and so on.

I find it easiest to think of a linked-list example.
Let's say we have a linked list with 5 elements. To get to the 3rd element, you have to use pointer-chasing to traverse the preceding elements.
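To make that concrete, here is a minimal C++ sketch (the node layout is illustrative). Each p = p->next is a load whose address comes from the previous load, which is exactly the serialization described above:

#include <cstdio>

struct Node {
    int   value;
    Node* next;   // the address of the next element lives in memory
};

// Returns the value of the nth element (0-based) by chasing pointers.
// Each iteration's load of p->next produces the address needed by the
// next iteration's load, so the CPU cannot overlap these reads.
int nth_value(Node* head, int n) {
    Node* p = head;
    for (int i = 0; i < n; ++i)
        p = p->next;   // dependent load: must wait for the previous one
    return p->value;
}

int main() {
    Node c{3, nullptr}, b{2, &c}, a{1, &b};
    printf("%d\n", nth_value(&a, 2));   // prints 3
}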

Related

How do I balance a BK-Tree and is it necessary?

I am looking into using an Edit Distance algorithm to implement a fuzzy search in a name database.
I've found a data structure that will supposedly help speed this up through a divide and conquer approach - Burkhard-Keller Trees. The problem is that I can't find very much information on this particular type of tree.
If I populate my BK-tree with arbitrary nodes, how likely am I to have a balance problem?
If it is possible or likely for me to have a balance problem with BK-trees, is there any way to balance such a tree after it has been constructed?
What would the algorithm look like to properly balance a BK-tree?
My thinking so far:
It seems that child nodes are distinct on distance, so I can't simply rotate a given node in the tree without re-calibrating the entire tree under it. However, if I can find an optimal new root node this might be precisely what I should do. I'm not sure how I'd go about finding an optimal new root node though.
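To make the distance-keyed children concrete, here is a minimal BK-tree sketch in C++ (the names are mine; the distance function is plain Levenshtein). Each child hangs off its parent under exactly its distance from that parent, so moving a subtree under a new parent would invalidate every key in it:

#include <algorithm>
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Standard one-row Levenshtein edit distance, O(|a|*|b|).
int edit_distance(const std::string& a, const std::string& b) {
    std::vector<int> row(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) row[j] = (int)j;
    for (std::size_t i = 1; i <= a.size(); ++i) {
        int prev = row[0];
        row[0] = (int)i;
        for (std::size_t j = 1; j <= b.size(); ++j) {
            int cur = row[j];
            row[j] = std::min({row[j] + 1, row[j - 1] + 1,
                               prev + (a[i - 1] != b[j - 1])});
            prev = cur;
        }
    }
    return row[b.size()];
}

struct BkNode {
    std::string word;
    std::map<int, BkNode*> children;   // key = distance from this->word

    void insert(const std::string& w) {
        int d = edit_distance(word, w);
        auto it = children.find(d);
        if (it == children.end())
            children[d] = new BkNode{w, {}};
        else
            it->second->insert(w);   // recurse into the child at distance d
    }
};
// The child keys are distances from this particular parent, so a
// rotation would change every key in the moved subtree - hence
// rebuilding, not local rebalancing.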
I'm also going to try a few methods to see if I can get a fairly balanced tree by starting with an empty tree, and inserting pre-distributed data.
Start with an alphabetically sorted list, then queue from the middle. (I'm not sure this is a great idea because alphabetizing is not the same as sorting on edit distance).
Completely shuffled data. (This relies heavily on luck to pick a "not so terrible" root by chance. It might fail badly and might be probabilistically guaranteed to be sub-optimal).
Start with an arbitrary word in the list and sort the rest of the items by their edit distance from that item. Then queue from the middle. (I feel this is going to be expensive, and still do poorly as it won't calculate metric space connectivity between all words - just each word and a single reference word).
Build an initial tree with any method, flatten it (basically like a pre-order traversal), and queue from the middle for a new tree. (This is also going to be expensive, and I think it may still do poorly as it won't calculate metric space connectivity between all words ahead of time, and will simply get a different and still uneven distribution).
Order by name frequency, insert the most popular first, and ditch the concept of a balanced tree. (This might make the most sense, as my data is not evenly distributed and I won't have pure random words coming in).
FYI, I am not currently worrying about the name-synonym problem (Bill vs William). I'll handle that separately, and I think completely different strategies would apply.
There is a Lisp example in this article: http://cliki.net/bk-tree. Regarding balancing: the data structure and the method seem complicated enough as they are, and the author doesn't say anything about unbalanced trees. If you keep running into unbalanced trees, maybe this structure isn't the right fit for your data?

Finding a proper data structure in C++

I am looking for a simple data structure that fulfills the following needs in the least possible time (in the worst case):
(1) Pop the nth element (the relative order of the remaining elements must stay intact).
(2) Access the nth element.
I couldn't use an array because it can't pop, and I don't want a gap left after deleting the ith element. I tried to remove the gap by exchanging the nth element with the next, then that with the next, and so on until the last, but that proves time-inefficient, though an array's O(1) access is unbeatable.
I tried using a vector, with 'erase' for removal and '.at()' for access, but even this is not cheap time-wise, though it's better than the array.
What you can try is a skip list (specifically an indexable skip list) - it supports the operations you are requesting in O(log(n)). Another option would be a tiered vector, which is just slightly easier to implement and takes O(sqrt(n)). Both structures are quite cool, but alas not very popular.
Well, a tiered vector implemented on top of arrays would, I think, best fit your purpose. The tiered vector concept may be new and a little tricky to understand at first, but once you get it, it opens up a lot of possibilities and gives you a handy tool for tackling the data-structure part of many problems very efficiently. So it is recommended that you master the tiered vector's implementation.
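For what it's worth, here is a rough C++ sketch of the idea (the block size and names are my own; with the fixed block size below the block scan is linear, and it is choosing the block size near sqrt(n), rebuilding as n grows, that gives the O(sqrt(n)) bound):

#include <cstddef>
#include <deque>
#include <vector>

// Tiered-vector sketch: elements live in blocks of at most B elements.
// All blocks except the last are kept full, so erase only shifts within
// one block and then moves one element across each later block.
template <typename T>
class TieredVector {
    static const std::size_t B = 64;   // block capacity
    std::vector<std::deque<T>> blocks;

    // Finds the block holding global index i; leaves the in-block
    // offset in i.
    std::size_t find_block(std::size_t& i) const {
        std::size_t b = 0;
        while (i >= blocks[b].size()) i -= blocks[b++].size();
        return b;
    }

public:
    void push_back(const T& v) {
        if (blocks.empty() || blocks.back().size() == B)
            blocks.push_back(std::deque<T>());
        blocks.back().push_back(v);
    }

    T& at(std::size_t i) {             // access the nth element
        std::size_t b = find_block(i);
        return blocks[b][i];
    }

    void erase(std::size_t i) {        // pop the nth element
        std::size_t b = find_block(i);
        blocks[b].erase(blocks[b].begin() + (std::ptrdiff_t)i);
        // refill block b from the front of each later block so that
        // every block but the last stays full
        for (std::size_t j = b + 1; j < blocks.size(); ++j) {
            blocks[j - 1].push_back(blocks[j].front());
            blocks[j].pop_front();
        }
        if (blocks.back().empty())
            blocks.pop_back();
    }
};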
An array will give you O(1) lookup but O(n) delete of the element.
A list will give you O(n) lookup but O(1) delete of the element.
A binary search tree will give you O(log n) lookup with O(log n) delete of the element. But it doesn't preserve the relative order.
A binary search tree used in conjunction with the list will give you the best of both worlds. Insert a node into both the list (to preserve order) and the tree (fast lookup). Unlinking a found node from the list is O(1); the tree removal costs the usual O(log n).
struct node {
    node* list_next;    // doubly-linked list: preserves relative order
    node* list_prev;
    node* tree_right;   // binary search tree: fast lookup
    node* tree_left;
    // node data;
};
Note that if the nodes are inserted into the tree using the index as the sort value, you will end up with another linked list pretending to be a tree. The tree can, however, be balanced in O(n) time once it is built, a cost you only have to incur once.
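One way to do that one-time O(n) pass (a sketch; it assumes the struct above, that list_next is already populated, and that you know the list length n) is to build bottom-up in list order:

// Builds a balanced subtree over the next n list nodes, advancing
// `head` as it consumes them; nodes appear in the tree in list order.
node* build_balanced(node*& head, int n) {
    if (n == 0) return nullptr;
    node* left = build_balanced(head, n / 2);   // left half first
    node* root = head;                          // middle node is the root
    head = head->list_next;
    root->tree_left = left;
    root->tree_right = build_balanced(head, n - n / 2 - 1);
    return root;
}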
Update
Thinking about this more, this might not be the best approach for you. I'm used to doing lookups on the data itself, not on its relative position in a set; this is a data-centric approach. Using the index as the sort value will break as soon as you remove a node, since the "higher" indices will need to change.
Warning: Don't take this answer seriously.
In theory, you can do both in O(1), assuming these are the only operations you want to optimize for. The following solution will need lots of space (and it will leak space), and it will take a long time to create the data structure:
Use an array. In every entry of the array, point to another array which is the same, but with that entry removed.

Is there a name for this BFS/DFS/IDDFS-like algorithm?

Essentially, it is a depth-first search that stops at a certain depth or cost. For example, it may DFS all nodes within 10 edges from the source, then 20, then 30. The difference is that rather than starting the DFS from scratch after each iteration, I store the "perimeter" of the searched area (a list of nodes) when each iteration of the search reaches its limits.
On the next iteration, I loop through all the nodes on the perimeter, performing a DFS from each node, again to a fixed depth/cost before stopping, and again recording the perimeter of the searched area for the next iteration to start from.
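A rough C++ sketch of one iteration of this (the graph is given as children lists, since it is a tree, and all the names are mine). Calling it repeatedly with the returned perimeter, until it comes back empty, reproduces the chunk-by-chunk sweep:

#include <stack>
#include <utility>
#include <vector>

// Depth-limited DFS from every node on the old perimeter; returns the
// nodes where the limit was hit, i.e. the perimeter for the next pass.
std::vector<int> explore_chunk(const std::vector<std::vector<int>>& children,
                               const std::vector<int>& perimeter_in,
                               int limit) {
    std::vector<int> perimeter_out;
    for (int start : perimeter_in) {
        std::stack<std::pair<int, int>> st;   // (node, remaining depth)
        st.push({start, limit});
        while (!st.empty()) {
            auto [node, depth] = st.top();
            st.pop();
            // ... process `node` here ...
            if (depth == 0) {                   // cost limit reached:
                perimeter_out.push_back(node);  // hand it to the next pass
                continue;
            }
            for (int child : children[node])
                st.push({child, depth - 1});
        }
    }
    return perimeter_out;
}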
The reason I am doing this is that my graph (which is a tree) is split into a set of logical "chunks", each of which must be fully explored before its child chunks can start being explored. There are a large number of nodes but only a small number of chunks; I am essentially doing a chunk-by-chunk BFS, with each individual chunk (comprising a large number of individual nodes) being fully explored by its own mini-DFS.
Now, I just completely made this up on the spot to solve my problem, and it does, but is there anything like this in the literature? I haven't managed to find anything, but I'm sure someone else has done this before and properly analysed its performance, asymptotic behavior, disadvantages, bugs, etc. In that case, I would want to know about it.
I do not know of a name for this mixed type. I have used something similar too, but I don't think it is used very frequently or has a standard name. Often, the other algorithms make more sense:
If you want to advance slowly in chunks, why don't you use a BFS?
Often a DFS is preferred because there you get the full traces. Furthermore, an iterative-deepening DFS is simpler than your algorithm, only about twice as time-consuming, and requires much less memory.

Need some help with page walking algorithm

I am writing my own operating system and I want to validate whether dirty bits are set or not. So I want to walk through a certain virtual address range, say R1 to R2, visit the pages, and check whether each dirty bit is set or not. I am looking for a good algorithm for doing this. I can treat each page-table level as a level of a tree and walk through each level, so I can use DFS or BFS. Is there a better algorithm for doing this?
Use depth first search if you want to check each entry. DFS only requires a stack no deeper than the number of levels in the tree, and page tables are only a few levels deep.
BFS is slower and requires additional storage. It's generally most useful when the breadth-first property lets you break out early.
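As a sketch of the depth-first walk (the bit positions, level count, and the entry_to_table helper are illustrative, loosely in the style of x86-64 4 KiB paging rather than any particular OS's layout):

#include <cstdint>
#include <cstdio>

const uint64_t PRESENT = 1ull << 0;   // entry maps something
const uint64_t DIRTY   = 1ull << 6;   // dirty bit in a leaf entry
const int ENTRIES      = 512;         // entries per table

// Placeholder: turns a table entry into a pointer to the next-level
// table (your kernel's phys->virt mapping goes here).
uint64_t* entry_to_table(uint64_t entry);

// Depth-first walk from `level` down to the leaves, visiting only the
// part of the tree overlapping [lo, hi) - e.g. your range R1 to R2.
void walk(uint64_t* table, int level, uint64_t va_base,
          uint64_t lo, uint64_t hi) {
    uint64_t span = 1ull << (12 + 9 * level);  // VA covered per entry
    for (int i = 0; i < ENTRIES; ++i) {
        uint64_t va = va_base + (uint64_t)i * span;
        if (va + span <= lo || va >= hi) continue;  // out of range: prune
        uint64_t entry = table[i];
        if (!(entry & PRESENT)) continue;           // absent subtree: prune
        if (level == 0) {                           // leaf: an actual page
            if (entry & DIRTY)
                printf("dirty page at %#llx\n", (unsigned long long)va);
        } else {
            walk(entry_to_table(entry), level - 1, va, lo, hi);
        }
    }
}

The range check is also why DFS fits well here: whole subtrees outside R1 to R2 (or simply not present) get pruned without ever being touched.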

How to implement a graph-structured stack?

OK, so I would like to make a GLR parser generator. I know there exist such programs better than anything I will probably make, but I am doing this for fun/learning, so that's not important.
I have been reading about GLR parsing and I think I have a decent high level understanding of it now. But now it's time to get down to business.
The graph-structured stack (GSS) is the key data structure for use in GLR parsers. Conceptually I know how GSS works, but none of the sources I looked at so far explain how to implement GSS. I don't even have an authoritative list of operations to support. Can someone point me to some good sample code/tutorial for GSS? Google didn't help so far. I hope this question is not too vague.
Firstly, if you haven't already, you should read McPeak's paper on GLR: http://www.cs.berkeley.edu/~smcpeak/papers/elkhound_cc04.ps. It is an academic paper, but it gives good details on GSS, GLR, and the techniques used to implement them. It also explains some of the hairy issues with implementing a GLR parser.
You have three parts to implementing a graph-structured stack.
I. The graph data structure itself
II. The stacks
III. GLR's use of a GSS
You are right, Google isn't much help. And unless you like reading algorithms books, they won't be much help either.
I. The graph data structure
Rob's answer about "the direct representation" would be the easiest to implement. It's a lot like a linked list, except each node has a list of next nodes instead of just one.
This data structure is a directed graph, but, as McPeak states, the GSS may have cycles for epsilon-grammars.
II. The stacks
A graph-structured stack is conceptually just a list of regular stacks. For an unambiguous grammar, you only need one stack. You need more stacks when there is a parsing conflict, so that you can take both parsing actions at the same time and maintain the different states both actions create. Using a graph allows you to take advantage of the fact that these stacks share elements.
It may help to understand how to implement a single stack with a linked-list first. The head of the linked list is the top of the stack. Pushing an element onto the stack is just creating a new head and pointing it to the old head. Popping an element off the stack is just moving the pointer to head->next.
In a GSS, the principle is the same. Pushing an element is just creating a new head node and pointing it to the old head. If you have two shift operations, you will push two elements onto the old head and then have two head nodes. Conceptually this is just two different stacks that happen to share every element except the top ones. Popping an element is just moving the head pointer down the stack by following each of the next nodes.
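A minimal sketch of that in C++ (the node layout is my own; a real GLR parser stores more per node). Pushing never mutates existing nodes, which is what lets stacks share their tails:

#include <vector>

struct GssNode {
    int state;                     // parser state held by this element
    std::vector<GssNode*> below;   // links toward the bottom(s)
};

// Push: make a new head on top of an existing one. The old head is
// untouched, so it can remain the top of some other stack.
GssNode* push(GssNode* head, int state) {
    return new GssNode{state, {head}};
}

// Two shifts from the same head = two stacks sharing everything but
// their tops:
//   GssNode* bottom = new GssNode{0, {}};
//   GssNode* a = push(bottom, 1);
//   GssNode* b = push(bottom, 2);
// "Popping" follows a `below` link; with several links there is one
// pop per path, which is why a reduce must explore all paths.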
III. GLR's use of the GSS
This is where McPeak's paper is a useful read.
The GLR algorithm takes advantage of the GSS by merging stack heads that have the same state element. This means that one state element may have more than one child. When reducing, the GLR algorithm will have to explore all possible paths from the stack head.
You can optimize GLR by maintaining the deterministic depth of each node. This is just the distance from a split in the stack. This way you don't always have to search for a stack split.
This is a tough task! So good luck!
The question that you're asking isn't trivial. I see two main ways of doing this:
The direct representation. Your data structure is represented in memory as node objects/structures, where each node has a reference/pointer to the structs below it on the stack (one could also make the references bi-directional, as an alternative). This is the way lists and trees are normally represented in memory. It is a bit more complicated in this case because, unlike a tree or a list, where one need only maintain a reference to the root node or head node to keep track of the whole structure, here we need to maintain a list of references to all the 'top level' nodes.
The adjacency list representation. This is similar to the way that mathematicians like to think about graphs: G = (V, E). You maintain a list of edges, indexed by the vertices which are the origin and termination points for each edge.
The first option has the advantage that traversal can be quicker, as long as the GSS isn't too flat. But the structure is slightly more difficult to work with. You'll have to roll a lot of your own algorithms.
The second option has the advantage of being more straightforward to work with. Most algorithms in textbooks seem to assume some kind of adjacency-list representation, which makes it easier to apply the wealth of graph algorithms out there.
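As a tiny illustration of the second option (the identifiers are mine): vertices are numeric ids, the edge list runs from each stack element to the element(s) below it, and a separate list tracks the current heads.

#include <unordered_map>
#include <vector>

struct AdjacencyGss {
    // edge list indexed by the origin vertex: id -> the ids below it
    std::unordered_map<int, std::vector<int>> below;
    std::vector<int> heads;   // tops of all current stacks
    int next_id = 0;

    // Push a new element on top of an existing vertex.
    int push(int below_id) {
        int id = next_id++;
        below[id].push_back(below_id);
        heads.push_back(id);
        return id;
    }
};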
Some resources:
There are various types of adjacency list, e.g. hash table based, array based, etc. The wikipedia adjacency list page is a good place to start.
Here's a blog post from someone who has been grappling with the same issue. The code is Clojure, which may or may not be familiar, but the discussion is worth a look either way.
I should mention that I wish there were more information available about representing directed acyclic graphs (or graph-structured stacks, if you prefer), given the widespread application of this sort of model. I think there is room for better solutions to be found.
