Data structure for a file system (interview algorithm question)

I've encountered the following interview question online. Based on my understanding, it asks you to design a data structure to simulate a file system. Can anyone give me some hints?
// addMapping("/foo/bar/x", "XController")
// addMapping("/foo/bar/z", "ZController")
// addMapping("/foo/baz", "BazController");
//getMapping("/foo/bar/x") -> ["XController"]
//getMapping("/foo/bar") -> ["XController", "ZController"]
public void addMapping(String path, String destination) {
//candidate TODO
}
public List<String> getMapping(String path) {
//candidate TODO
}

I think the best structure to use for this mapping is a trie, or even better its compressed version, a Patricia tree (a.k.a. radix tree). The idea is the following: both structures store prefixes of dictionary words. When a user queries a given path, you traverse the structure (be it a trie or a radix tree) according to the query string. After that, you walk the subtree under the node where you end up and collect all the controllers associated with the nodes there.
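For illustration, here is a minimal Java sketch of the trie approach; the class and helper names are mine, and the order of controllers returned by the subtree walk is unspecified:

import java.util.*;

public class PathTrie {
    private static class TrieNode {
        final Map<String, TrieNode> children = new HashMap<>();
        String controller; // set only on nodes where a mapping ends
    }

    private final TrieNode root = new TrieNode();

    public void addMapping(String path, String destination) {
        TrieNode current = root;
        for (String segment : path.split("/")) {
            if (segment.isEmpty()) continue; // skip the empty segment before the leading "/"
            current = current.children.computeIfAbsent(segment, k -> new TrieNode());
        }
        current.controller = destination;
    }

    public List<String> getMapping(String path) {
        TrieNode current = root;
        for (String segment : path.split("/")) {
            if (segment.isEmpty()) continue;
            current = current.children.get(segment);
            if (current == null) return Collections.emptyList(); // unknown prefix
        }
        List<String> result = new ArrayList<>();
        collect(current, result);
        return result;
    }

    // walk the subtree and gather every controller registered at or below this node
    private void collect(TrieNode node, List<String> out) {
        if (node.controller != null) out.add(node.controller);
        for (TrieNode child : node.children.values()) collect(child, out);
    }
}

With the mappings from the question, getMapping("/foo/bar") visits the subtree under "bar" and gathers both "XController" and "ZController".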

Related

How would you unencapsulate an Unsorted Array structure's Fetch algorithm?

// access the node (assumes the node is in the structure)
i = 0;
while (targetKey != data[i].key()) {
    i++;
}
// return a copy of the node to the client
return data[i].deepCopy();
Here is a partial sample of my code (above) to help better understand the question. Any suggestions would be nice. When it says to unencapsulate the structure, does it mean that I should reveal the implementation details of the deepCopy() method? I am referring specifically to the last line, return data[i].deepCopy();.
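For what it's worth, in textbooks that use this terminology, an "unencapsulated" fetch usually means returning the internal node reference itself rather than a protective deep copy. A hypothetical contrast, with an invented Node class purely for illustration:

class Node {
    private int key;
    Node(int key) { this.key = key; }
    int key() { return key; }
    Node deepCopy() { return new Node(key); }
}

class UnsortedArray {
    private final Node[] data;
    UnsortedArray(Node[] data) { this.data = data; }

    // Encapsulated fetch: the client receives a deep copy, so it cannot
    // corrupt the node stored inside the structure.
    Node fetch(int targetKey) {
        int i = 0;
        while (targetKey != data[i].key()) {
            i++;
        }
        return data[i].deepCopy();
    }

    // "Unencapsulated" fetch: the client receives the internal reference
    // itself, so any mutation it performs is visible inside the structure.
    Node fetchUnencapsulated(int targetKey) {
        int i = 0;
        while (targetKey != data[i].key()) {
            i++;
        }
        return data[i];
    }
}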

Stanford OpenIE: How to output dependency path instead of plain text patterns?

I am looking through the Java source code and wondering whether it's easy to modify the system so that the predicate portion of each triple is the dependency path between the two entities instead of the surface form.
Since the natural logic module operates on dependency trees, I suppose there should be an easy tweak to meet this demand.
I traced the code in edu.stanford.nlp.naturalli/OpenIE.java to:
// Get the extractions
boolean empty = true;
synchronized (OUTPUT) {
    for (CoreMap sentence : ann.get(CoreAnnotations.SentencesAnnotation.class)) {
        for (RelationTriple extraction : sentence.get(NaturalLogicAnnotations.RelationTriplesAnnotation.class)) {
            // Print the extractions
            OUTPUT.println(tripleToString(extraction, docid, sentence));
            empty = false;
        }
    }
}
Please point me to the implementation of the following step:
sentence.get(NaturalLogicAnnotations.RelationTriplesAnnotation.class)
Thanks!
Each relation triple actually does store the dependency structure from which it was generated. Take a look at the asDependencyTree() function in RelationTriple.
Note that this tree is not necessarily a subtree of the original sentence -- e.g., it may be that a subject was moved around to produce a relation triple. If you're looking for a dependency path in the original sentence, you can look up tokens by their IndexAnnotation and compute a dependency path from that.
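As a minimal sketch, assuming asDependencyTree() returns an Optional<SemanticGraph> as it does in recent CoreNLP releases, the inner loop above could print the dependency fragment instead of the surface form:

// assumes: import java.util.Optional;
//          import edu.stanford.nlp.semgraph.SemanticGraph;
for (CoreMap sentence : ann.get(CoreAnnotations.SentencesAnnotation.class)) {
    for (RelationTriple extraction : sentence.get(NaturalLogicAnnotations.RelationTriplesAnnotation.class)) {
        Optional<SemanticGraph> tree = extraction.asDependencyTree();
        // print the dependency fragment when one is available, instead of the surface form
        if (tree.isPresent()) {
            OUTPUT.println(tree.get().toString(SemanticGraph.OutputFormat.LIST));
        }
    }
}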

Efficient implementation of immutable (double) LinkedList

Having read this question, Immutable or not immutable?, and the answers to my previous questions on immutability, I am still a bit puzzled about an efficient implementation of a simple LinkedList that is immutable. In terms of an array, that seems to be easy: copy the array and return a new structure based on that copy.
Supposedly we have a general class of Node:
class Node {
    private Object value;
    private Node next;
}
And a class LinkedList based on the above, allowing the user to add, remove, etc. Now, how would we ensure immutability? Should we recursively copy all the references to the list when we insert an element?
I am also curious about answers in Immutable or not immutable? that mention certain optimizations leading to log(n) time and space with the help of a binary tree. Also, I read somewhere that adding an element to the front is O(1) as well. This puzzles me greatly, since if we don't copy the references, then in reality we are modifying the same data structure from two different sources, which breaks immutability...
Would any of your answers also work on doubly-linked lists? I look forward to any replies/pointers to other questions/solutions. Thanks in advance for your help.
Supposedly we have a general class of Node and class LinkedList based on the above allowing the user to add, remove etc. Now, how would we ensure immutability?
You ensure immutability by making every field of the object readonly, and ensuring that every object referred to by one of those readonly fields is also an immutable object. If the fields are all readonly and only refer to other immutable data, then clearly the object will be immutable!
Should we recursively copy all the references to the list when we insert an element?
You could. The distinction you are getting at here is the difference between immutable and persistent. An immutable data structure cannot be changed. A persistent data structure takes advantage of the fact that a data structure is immutable in order to re-use its parts.
A persistent immutable linked list is particularly easy:
abstract class ImmutableList
{
    public static readonly ImmutableList Empty = new EmptyList();
    private ImmutableList() {}
    public abstract int Head { get; }
    public abstract ImmutableList Tail { get; }
    public abstract bool IsEmpty { get; }
    public abstract ImmutableList Add(int head);

    private sealed class EmptyList : ImmutableList
    {
        public override int Head { get { throw new Exception(); } }
        public override ImmutableList Tail { get { throw new Exception(); } }
        public override bool IsEmpty { get { return true; } }
        public override ImmutableList Add(int head)
        {
            return new List(head, this);
        }
    }

    private sealed class List : ImmutableList
    {
        private readonly int head;
        private readonly ImmutableList tail;
        // constructor added so that new List(head, this) compiles
        public List(int head, ImmutableList tail)
        {
            this.head = head;
            this.tail = tail;
        }
        public override int Head { get { return head; } }
        public override ImmutableList Tail { get { return tail; } }
        public override bool IsEmpty { get { return false; } }
        public override ImmutableList Add(int head)
        {
            return new List(head, this);
        }
    }
}
...
ImmutableList list1 = ImmutableList.Empty;
ImmutableList list2 = list1.Add(100);
ImmutableList list3 = list2.Add(400);
And there you go. Of course you would want to add better exception handling and more methods, like IEnumerable<int> methods. But there is a persistent immutable list. Every time you make a new list, you re-use the contents of an existing immutable list; list3 re-uses the contents of list2, which it can do safely because list2 is never going to change.
Would any of your answers also work on doubly-linked lists?
You can of course easily make a doubly-linked list that does a full copy of the entire data structure every time, but that would be dumb; you might as well just use an array and copy the entire array.
Making a persistent doubly-linked list is quite difficult but there are ways to do it. What I would do is approach the problem from the other direction. Rather than saying "can I make a persistent doubly-linked list?" ask yourself "what are the properties of a doubly-linked list that I find attractive?" List those properties and then see if you can come up with a persistent data structure that has those properties.
For example, if the property you like is that doubly-linked lists can be cheaply extended from either end, cheaply broken in half into two lists, and two lists can be cheaply concatenated together, then the persistent structure you want is an immutable catenable deque, not a doubly-linked list. I give an example of an immutable non-catenable deque here:
http://blogs.msdn.com/b/ericlippert/archive/2008/02/12/immutability-in-c-part-eleven-a-working-double-ended-queue.aspx
Extending it to be a catenable deque is left as an exercise; the paper I link to on finger trees is a good one to read.
UPDATE:
According to the above, we need to copy the prefix up to the insertion point. By the logic of immutability, if we delete anything from the prefix, we get a new list, as well as in the suffix... Why copy only the prefix then, and not the suffix?
Well consider an example. What if we have the list (10, 20, 30, 40), and we want to insert 25 at position 2? So we want (10, 20, 25, 30, 40).
What parts can we reuse? The tails we have in hand are (20, 30, 40), (30, 40) and (40). Clearly we can re-use (30, 40).
Drawing a diagram might help. We have:
10 ----> 20 ----> 30 -----> 40 -----> Empty
and we want
10 ----> 20 ----> 25 -----> 30 -----> 40 -----> Empty
so let's make
10 ----> 20 ----> 30 -----> 40 -----> Empty
                  ^
                  |
10 ----> 20 ----> 25
We can re-use (30, 40) because that part is in common to both lists.
UPDATE:
Would it be possible to provide the code for random insertion and deletion as well?
Here's a recursive solution:
ImmutableList InsertAt(int value, int position)
{
    if (position < 0)
        throw new Exception();
    else if (position == 0)
        return this.Add(value);
    else
        return tail.InsertAt(value, position - 1).Add(head);
}
Do you see why this works?
Now as an exercise, write a recursive DeleteAt.
Now, as an exercise, write a non-recursive InsertAt and DeleteAt. Remember, you have an immutable linked list at your disposal, so you can use one in your iterative solution!
Should we recursively copy all the references to the list when we insert an element?
You should recursively copy the prefix of the list up until the insertion point, yes.
That means that insertion into an immutable linked list is O(n), as is inserting (not overwriting) an element in an array.
For this reason insertion is usually frowned upon (along with appending and concatenation).
The usual operation on immutable linked lists is "cons", i.e. adding an element at the front, which is O(1).
You can see the complexity clearly in e.g. a Haskell implementation. Given a linked list defined as a recursive type:
data List a = Empty | Node a (List a)
we can define "cons" (inserting an element at the front) directly as:
cons a xs = Node a xs
Clearly an O(1) operation. Insertion, by contrast, must be defined recursively: walk to the insertion point, copying the prefix of the list as you go, then attach the new node, which shares a reference to the (immutable) tail.
The important thing to remember about linked lists is:
linear access
For immutable lists this means:
copying the prefix of a list
sharing the tail.
If you are frequently inserting new elements, a log-based structure, such as a tree, is preferred.
There is a way to emulate "mutation": using immutable maps.
For a linked list of Strings (in Scala style pseudocode):
case class ListItem(s: String, id: UUID, nextID: UUID)
then the ListItems can be stored in a map where the key is UUID:
type MyList = Map[UUID, ListItem]
If I want to insert a new ListItem into val list: MyList:
def insertAfter(l: MyList, e: ListItem) = {
  // pseudocode: locate the neighbours of the insertion point
  val beforeE = l.getElementBefore(e)
  val afterE = l.getElementAfter(e)
  // the new item points at the element that used to follow beforeE
  val eToInsert = e.copy(nextID = afterE.id)
  // and beforeE is re-created to point at the new item
  val beforeE_new = beforeE.copy(nextID = e.id)
  val l_tmp = l.update(beforeE.id, beforeE_new)
  l_tmp.add(eToInsert)
}
where add, update, and get take effectively constant time on an immutable Map: http://docs.scala-lang.org/overviews/collections/performance-characteristics
Implementing a doubly-linked list goes similarly.
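As a rough illustration of the doubly-linked variant (in Java, with invented names): each item carries the ids of both neighbours. Java's standard library has no persistent Map, so this sketch copies the HashMap on every update; a persistent map, as in the Scala example, would make each update effectively constant time:

import java.util.*;

public class PersistentDoubly {
    // Each item knows the ids of both neighbours; the whole list lives in a map keyed by id.
    record Item(String value, UUID id, UUID prevId, UUID nextId) {}

    // Insert a new value after the item with id 'afterId'. The input map is
    // never mutated; callers holding the old map still see the old list.
    static Map<UUID, Item> insertAfter(Map<UUID, Item> list, UUID afterId, String value) {
        Map<UUID, Item> copy = new HashMap<>(list); // a persistent map would avoid this O(n) copy
        Item before = copy.get(afterId);
        UUID id = UUID.randomUUID();
        copy.put(id, new Item(value, id, before.id(), before.nextId()));
        copy.put(before.id(), new Item(before.value(), before.id(), before.prevId(), id));
        if (before.nextId() != null) { // re-point the old successor's prev link, if any
            Item after = copy.get(before.nextId());
            copy.put(after.id(), new Item(after.value(), after.id(), id, after.nextId()));
        }
        return copy;
    }
}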

How to represent data to be used for DFS/BFS

I was assigned a problem to solve using various search techniques. The problem is very similar to the Escape From Zurg problem or the Bridge and Torch problem. My issue is that I am lost as to how to represent the data as a tree.
This is my guess as to how to do it, but it doesn't make much sense for searching.
Another way could be to use a binary tree sorted by their walking time. However, I'm still not sure if I'm attacking this problem correctly since search algorithms don't necessarily require binary trees.
Any tips on representing this data would be appreciated.
Generally when you are using a tree search to solve a problem, each node represents some possible "state" of the world (who's on what side of the bridge, for example), and the children of each node represent all possible "successor states" (new states that can be reached in one move from the previous state). A depth-first search then represents trying one option until it dead-ends, then backing up to the last state where another option was available and trying it out. A breadth-first search represents trying out lots of options in parallel and seeing when the first of them find the goal node.
In terms of the actual way of encoding this, you would represent this as a multiway tree. Each node would probably contain the current state, plus a list of pointers to child nodes.
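To make that concrete, here is a small hypothetical Java sketch of breadth-first search over such states; the State interface and its methods are assumptions for illustration, not part of the assignment:

import java.util.*;

public class StateSearch {
    interface State {
        List<State> successors(); // all states reachable in one move
        boolean isGoal();
    }

    // Breadth-first search: explore states level by level, tracking visited
    // states so cyclic state graphs do not loop forever.
    static State bfs(State start) {
        Deque<State> frontier = new ArrayDeque<>();
        Set<State> visited = new HashSet<>();
        frontier.add(start);
        visited.add(start);
        while (!frontier.isEmpty()) {
            State current = frontier.poll();
            if (current.isGoal()) return current;
            for (State next : current.successors()) {
                if (visited.add(next)) frontier.add(next);
            }
        }
        return null; // no goal reachable
    }
}

A depth-first search would use the same skeleton with a stack (push/pop) in place of the queue.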
Hope this helps!
You could use something like this:
public class Node
{
    public int root;
    public List<Node> neighbours;

    public Node(int x)
    {
        root = x;
    }

    public void setNeighboursList(List<Node> l)
    {
        neighbours = l;
    }

    public void addNeighbour(Node n)
    {
        if (neighbours == null) neighbours = new ArrayList<Node>();
        neighbours.add(n);
    }
    ...
}

public class Tree
{
    public Node root;
    ....
}

Sorted hash table (map, dictionary) data structure design

Here's a description of the data structure:
It operates like a regular map with get, put, and remove methods, but has a sort method that can be called to sort the map. However, the map remembers its sorted structure, so subsequent calls to sort can be much quicker (if the structure doesn't change too much between calls to sort).
For example:
I call the put method 1,000,000 times.
I call the sort method.
I call the put method 100 more times.
I call the sort method.
The second time I call the sort method should be a much quicker operation, as the map's structure hasn't changed much. Note that the map doesn't have to maintain sorted order between calls to sort.
I understand that it might not be possible, but I'm hoping for O(1) get, put, and remove operations. Something like TreeMap provides guaranteed O(log(n)) time cost for these operations, but always maintains a sorted order (no sort method).
So what's the design of this data structure?
Edit 1 - returning the top-K entries
Although I'd enjoy hearing the answer to the general case above, my use case has gotten more specific: I don't need the whole thing sorted, just the top K elements:
Data structure for efficiently returning the top-K entries of a hash table (map, dictionary)
Thanks!
For "O(1) get, put, and remove operations" you essentially need O(1) lookup, which implies a hash function (as you know), but the requirements of a good hash function often break the requirement to be easily sorted. (If you had a hash table where adjacent values mapped to the same bucket, it would degenerate to O(N) on lots of common data, which is a worse case you typically want a hash function to avoid.)
I can think of how to get you 90% of the way there. Set up a hashtable alongside a parallel index that is sorted. The index has a clean part (ordered) and a dirty part (unordered). The index would map keys to the values (or references to the values stored in the hashtable - whichever suits you in terms of performance or memory use). When you add to the hashtable, the new entry is pushed onto the back of the dirty list. When you remove from the hashtable, the entry is nulled/removed from the clean and dirty parts of the index. You can sort the index, which sorts the dirty entries only, then merges them into the already sorted 'clean' part of the index. And obviously you can iterate over the index.
As far as I can see, this gives you the O(1) everywhere except on the remove operation and is still fairly simple to implement with standard containers (at least as provided by C++, Java, or Python). It also gives you the "second sort is cheaper" condition by only needing to sort the dirty index entries and then letting you do an O(N) merge. The cost of all this is obviously extra memory for the index and extra indirection when using it.
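Here is a rough Java sketch of that parallel-index idea (all names are illustrative). One simplification relative to the description above: instead of removing index entries eagerly in remove(), this version drops stale keys lazily while merging in sort():

import java.util.*;

public class IndexedMap<K extends Comparable<K>, V> {
    private final Map<K, V> table = new HashMap<>();     // O(1) get/put/remove
    private List<K> clean = new ArrayList<>();           // sorted keys
    private final List<K> dirty = new ArrayList<>();     // unsorted recently added keys

    public V get(K key) { return table.get(key); }

    public V put(K key, V value) {
        V old = table.put(key, value);
        if (old == null) dirty.add(key);                 // only brand-new keys need indexing
        return old;
    }

    public V remove(K key) {
        return table.remove(key);                        // stale index entries are purged in sort()
    }

    public void sort() {
        Collections.sort(dirty);                         // O(d log d), d = keys added since last sort
        List<K> merged = new ArrayList<>(clean.size() + dirty.size());
        int i = 0, j = 0;                                // O(n) merge of two sorted lists
        while (i < clean.size() || j < dirty.size()) {
            K next;
            if (j >= dirty.size() || (i < clean.size() && clean.get(i).compareTo(dirty.get(j)) <= 0)) {
                next = clean.get(i++);
            } else {
                next = dirty.get(j++);
            }
            boolean duplicate = !merged.isEmpty() && next.equals(merged.get(merged.size() - 1));
            if (table.containsKey(next) && !duplicate) merged.add(next); // drop removed or re-added keys
        }
        clean = merged;
        dirty.clear();
    }

    public List<K> sortedKeys() { return Collections.unmodifiableList(clean); } // valid right after sort()
}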
Why exactly do you need a sort() function?
What you perhaps want and need is a red-black tree.
http://en.wikipedia.org/wiki/Red-black_tree
These trees automatically sort your input by a comparator you supply. They are complex, but have excellent O(log n) characteristics. Couple such a tree (for ordering) with a hash map (for lookup) and you get your data structure.
In Java it is implemented as TreeMap, an implementation of SortedMap.
What you're looking at is a hashtable with pointers in the entries to the next entry in sorted order. It's a lot like the LinkedHashMap in java except that the links are tracking a sort order rather than the insertion order. You can actually implement this totally by wrapping a LinkedHashMap and having the implementation of sort transfer the entries from the LinkedHashMap into a TreeMap and then back into a LinkedHashMap.
Here's an implementation that sorts the entries in an array list rather than transferring to a tree map. I think the sort algorithm used by Collections.sort will do a good job of merging the new entries into the already sorted portion.
public class SortaSortedMap<K extends Comparable<K>, V> implements Map<K, V> {
    private LinkedHashMap<K, V> innerMap;

    public SortaSortedMap() {
        this.innerMap = new LinkedHashMap<K, V>();
    }

    public SortaSortedMap(Map<K, V> map) {
        this.innerMap = new LinkedHashMap<K, V>(map);
    }

    public Collection<V> values() {
        return innerMap.values();
    }

    public int size() {
        return innerMap.size();
    }

    public V remove(Object key) {
        return innerMap.remove(key);
    }

    public V put(K key, V value) {
        return innerMap.put(key, value);
    }

    public Set<K> keySet() {
        return innerMap.keySet();
    }

    public boolean isEmpty() {
        return innerMap.isEmpty();
    }

    public Set<Entry<K, V>> entrySet() {
        return innerMap.entrySet();
    }

    public boolean containsKey(Object key) {
        return innerMap.containsKey(key);
    }

    public V get(Object key) {
        return innerMap.get(key);
    }

    public boolean containsValue(Object value) {
        return innerMap.containsValue(value);
    }

    public void clear() {
        innerMap.clear();
    }

    public void putAll(Map<? extends K, ? extends V> m) {
        innerMap.putAll(m);
    }

    public void sort() {
        List<Map.Entry<K, V>> entries = new ArrayList<Map.Entry<K, V>>(innerMap.entrySet());
        Collections.sort(entries, new KeyComparator());
        LinkedHashMap<K, V> newMap = new LinkedHashMap<K, V>();
        for (Map.Entry<K, V> e : entries) {
            newMap.put(e.getKey(), e.getValue());
        }
        innerMap = newMap;
    }

    private class KeyComparator implements Comparator<Map.Entry<K, V>> {
        public int compare(Entry<K, V> o1, Entry<K, V> o2) {
            return o1.getKey().compareTo(o2.getKey());
        }
    }
}
I don't know if there's a name for it, but you could store each item's current index in the hash map.
That is, you have a HashMap<Object, Pair<Integer, Object>>
and a List<Object> objects.
When you put, add to the tail or head of the list and insert into the hashmap with your data and the index of insertion. This is O(1).
When you get, pull from the hashmap and ignore the index. This is O(1).
When you remove, you pull from the map. Take the index and remove from the list as well: this stays O(1) if you swap the last list element into the vacated slot and update its stored index (removing from the middle of an array list directly would be O(n)).
When you sort, just sort the list. Either update the indexes in the map during the sort, or update them after the sort is complete. This does not affect the O(n log n) sort, as it's a linear step: O(n log n + n) == O(n log n).
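A hypothetical sketch of that bookkeeping, using the swap-with-last trick so remove stays O(1); the class and field names are illustrative:

import java.util.*;

public class IndexedHash<K extends Comparable<K>, V> {
    private static final class Slot<V> {
        int index; V value;
        Slot(int index, V value) { this.index = index; this.value = value; }
    }
    private final Map<K, Slot<V>> map = new HashMap<>(); // key -> (position, value)
    private final List<K> order = new ArrayList<>();     // keys in list order

    public void put(K key, V value) {
        Slot<V> s = map.get(key);
        if (s != null) { s.value = value; return; }      // overwrite keeps the position
        map.put(key, new Slot<>(order.size(), value));
        order.add(key);                                  // O(1): append at the tail
    }

    public V get(K key) {
        Slot<V> s = map.get(key);
        return s == null ? null : s.value;               // O(1): ignore the index
    }

    public V remove(K key) {
        Slot<V> s = map.remove(key);
        if (s == null) return null;
        K last = order.remove(order.size() - 1);         // O(1): pop the tail
        if (!last.equals(key)) {                         // swap the tail into the hole
            order.set(s.index, last);
            map.get(last).index = s.index;
        }
        return s.value;
    }

    public void sort() {
        order.sort(Comparator.naturalOrder());           // O(n log n)
        for (int i = 0; i < order.size(); i++)           // O(n): refresh stored indexes
            map.get(order.get(i)).index = i;
    }
}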
Ordered Dictionary
Recent versions of Python (2.7, 3.1) have "ordered dictionaries" which sound like what you're describing.
The official Python "ordered dictionary" implementation is inspired by previous 3rd-party implementations, as described in the PEP 372.
References:
collections.OrderedDict documentation for Python 2.7
collections.OrderedDict documentation for Python 3.1
PEP 372
ActiveState Ordered Dictionary recipe for Python ≥ 2.4
I'm not aware of a data structure classification with that exact behavior, at least not in Java Collections (or from nonlinear data structures class). Perhaps you can implement it, and it will henceforth be known as the RudigerMap.
