Directed Graph Versus Associative Array

I have been reading up on directed graphs. I have managed to get an abstract graph data type working in my application but I don't find it particularly intuitive and am considering replacing it with an ordinary multi-dimensional array.
My graph is sparse and acyclic. Each vertex is reachable from one particular 'master' vertex. If it were a tree, this master vertex would be the 'root'. If it were a social network, this master vertex would be 'me'.
Although my graph may have hundreds of thousands of vertices, it has a finite depth: the greatest distance between any two nodes is 3 edges.
The underlying data representation is an adjacency list. A small example would look like this:
Head | Tails
--------------
1 | 2, 3, 4
2 | 5
3 | 5
4 | 5
5 | 6
If I were using an ordinary multi-dimensional array instead of my graph data type, it would look something like this:
$me[1][2][5][6]
$me[1][3][5][6]
$me[1][4][5][6]
Now, the main things that I want to be able to do with this graph are:
Navigate it as a hierarchy. I realise that some child vertices will feature in more than one category (e.g. #5), but that is what I want for this particular use case. I can't see any real difference between an array and a graph for this point.
Lay it out as a list (alphabetical, according to vertex name), with no duplicates. I would probably do a DFS, flagging visited vertices as I go, to avoid exploring them more than once. But as far as I can see this is achievable using either the graph or the array, and at the same cost.
Do an 'all paths' analysis for any given pair of points. Because I want 'all paths' (i.e. I'm not simply checking for reachability), it seems to me that I have to traverse the entire graph, and again I can see no advantage in a graph over an array.
I get the feeling that I am missing something, but I can't put my finger on it. Can you? Any ideas, suggestions, insights or advice gratefully accepted... (By the way, I'm using PHP, and the data source is a relational DB. I don't think this makes any real difference, though.)
Thanks!

One thing you need to understand is that a directed graph (or digraph) is a concept, whereas an associative array is a data structure.
An instance of the digraph concept can be stored in many different data structures, of which you can find the most common on this wikipedia page.
I'm not sure what you are doing with your multidimensional array... storing all paths? You will end up with O(N³) space complexity, and trouble building it. A tree-based structure would be more efficient, at the very least.
Now to the things you want to do with your graph:
Navigate as a hierarchy. The basic digraph concept doesn't let you go back up the hierarchy, but you can easily store the reverse graph as well (especially with matrix-based representations: just use 3 values instead of 2 - forward, backward and nothing).
Lay it out as a list, according to name. You have to store the name somewhere (either in a side map or in the vertex object), but it shouldn't be any harder than sorting anything else according to name.
Do an 'all paths' analysis. You can probably get away with linear complexity (in the number of paths) through dynamic programming and a shared representation of paths. (A sketch of the reverse-graph and all-paths ideas follows below.)
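To make this concrete, here is a minimal PHP sketch (PHP since that's what you're using); $next, $prev and allPathsDfs are names invented for this example, and the recursion needs no cycle guard only because your graph is acyclic:

$next = [1 => [2, 3, 4], 2 => [5], 3 => [5], 4 => [5], 5 => [6]];

// Build the reverse adjacency list so the hierarchy can be walked upward too.
$prev = [];
foreach ($next as $head => $tails) {
    foreach ($tails as $tail) {
        $prev[$tail][] = $head;
    }
}

// All paths between two vertices, by depth-first search.
function allPathsDfs(array $next, int $from, int $to, array $path = []): array
{
    $path[] = $from;
    if ($from === $to) {
        return [$path];
    }
    $paths = [];
    foreach ($next[$from] ?? [] as $n) {
        $paths = array_merge($paths, allPathsDfs($next, $n, $to, $path));
    }
    return $paths;
}

// allPathsDfs($next, 1, 6) yields [1,2,5,6], [1,3,5,6] and [1,4,5,6].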

It looks like your data structure is too complicated. If you represent a directed graph as a multidimensional array, it is almost always of dimension two, so that
$array[$x][$y]
is a boolean value that is TRUE if and only if there is an edge from node $x to node $y in the graph. In your example it would be, e.g.,
$array[1][2] = TRUE
$array[1][5] = FALSE
But for sparse graphs, using this boolean matrix representation is not usually good. Typically you would have a one-dimensional array that maps every node to a set of nodes to which there is an edge, e.g.
$array[1] = { 2, 3, 4 }
where { ... } means some sort of an unordered collection data structure, which can be e.g. a binary search tree or a hash set (hash table).
This data structure enables you to quickly find the nodes to which there is an arc from a given node, which is a key feature for graph algorithms.
Sometimes you want to be able to traverse your graph backwards also; in that case you would have another array that maps nodes to the list of their predecessors.
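As a rough PHP sketch of that layout (using associative arrays keyed by neighbour as the 'unordered collection'; $succ and $pred are illustrative names), which gives the constant-time edge test described above:

$succ = [
    1 => [2 => true, 3 => true, 4 => true],
    2 => [5 => true],
    3 => [5 => true],
    4 => [5 => true],
    5 => [6 => true],
];

// Derived predecessor map, for traversing the graph backwards.
$pred = [];
foreach ($succ as $u => $vs) {
    foreach (array_keys($vs) as $v) {
        $pred[$v][$u] = true;
    }
}

var_dump(isset($succ[1][2])); // true: edge 1 -> 2 exists
var_dump(isset($succ[1][5])); // false: no direct edge 1 -> 5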

Related

Object and Pointer Graph representations

I keep seeing everywhere that there are 3 ways to represent graphs:
Objects and pointers
Adjacency matrix
Adjacency lists
However, I just plain don't understand what this objects-and-pointers representation is - yet every recruiter and many blogs cite Steve Yegge's blog post claiming that it is indeed a separate representation.
This widely accepted answer to a very similar question seems to suggest that the vertex structures themselves have no internal pointers to other vertices, and instead all edges are represented by edge structures which contain pointers to the adjacent vertices.
How does this representation offer any discernible analytical advantage in any scenario?
Off the top of my head - I hope I have the facts correct.
Conceptually, a graph represents how a set of nodes (or vertices) are related (connected) to each other (via edges).
However, in an actual physical device (memory), we have a contiguous array of memory cells.
So, in order to represent the graph, we can choose to use a matrix.
In this case, we use the vertex indices as the rows and columns, and an entry has value 1 if the corresponding vertices are adjacent to each other, 0 otherwise.
Alternatively, you can also represent a graph by allocating an object to represent the node/vertex which points to a list of all the nodes that are adjacent to it.
The matrix representation has the advantage when the graph is dense, meaning most of the nodes/vertices are connected to each other. This is because, in such cases, using a matrix entry saves us from having to allocate an extra pointer (which needs a word of memory) for each connection.
For a sparse graph, the list approach is better, because you don't need to account for the 0 entries where there is no connection between the vertices.
Hope it helps.
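As a rough illustration of that second, object-based representation, here is a sketch in PHP (GraphNode and connect are hypothetical names, not from any particular library):

// Each vertex is an object holding references to its adjacent vertices.
class GraphNode
{
    public string $label;
    public array $neighbors = []; // references to adjacent GraphNode objects

    public function __construct(string $label)
    {
        $this->label = $label;
    }

    public function connect(GraphNode $other): void
    {
        $this->neighbors[] = $other; // directed edge $this -> $other
    }
}

$a = new GraphNode('a');
$b = new GraphNode('b');
$a->connect($b);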
For now I have a hard time finding an advantage with respect to typical "graph algorithms". But it certainly is possible to represent a graph with objects and pointers, and it is a very natural thing to do if you think of it as a representation of something you just drew on a whiteboard.
Think of a scenario where you want to combine nodes of a graph in a certain order.
Nodes have payloads that contain domain data; the graph structure itself is not a core aspect of your program.
Sure, you can update your lists/matrix for every operation, but given an "objects and pointers" structure, you can do the merging locally. Further, if nodes have payloads, the lists/matrix will hold node IDs that identify the actual node objects. A combination would then mean you update your graph representation, follow the node identifiers, and do the actual processing. It may feel more intuitive to work on your actual node objects and simply remove pointers when collapsing a neighbour (and delete that node).
Besides, there are more ways to represent a graph:
E.g. just as triples, like Turtle does (sketched below)
Or as an offset representation (offsets per node into an edge array), e.g. this Boost data structure (disclaimer: I have not tested the linked implementation myself)
etc
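For instance, the triple form can be as simple as a flat list of rows, sketched here in PHP with made-up node and predicate names:

// Each row is one (subject, predicate, object) triple, loosely like Turtle/RDF.
$triples = [
    ['node1', 'linksTo', 'node2'],
    ['node1', 'linksTo', 'node3'],
    ['node2', 'linksTo', 'node4'],
];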
Here is a way I have been using to create a graph with this concept:
#include <vector>

class Node
{
public:
    Node();
    void setLink(Node *n); // pass a pointer to the node to link to
    virtual ~Node();
private:
    std::vector<Node*> m_links; // outgoing edges to adjacent nodes
};
And the function responsible for creating the link between vertices is:
void Node::setLink(Node *n)
{
    m_links.push_back(n); // record the directed edge this -> n
}
The objects-and-pointers representation reduces space complexity to exactly V+E, where V is the number of vertices and E the number of edges (down from V+2E for an adjacency list, or even 2V+2E if you store an index->Vertex mapping in a separate hash map), sacrificing time complexity: a particular edge lookup takes O(E), which equals O(V^2) in a dense graph (up from O(V) for an adjacency list). The space saving is achieved by removing the duplicated edges that appear in the adjacency list.

Graph implementation knowing edges ahead of time

I'm looking for an efficient way to implement a weighted undirected graph, knowing only the number of edges ahead of time.
sample input:
N (number of edges)
A B x (x is the distance from A to B)
.
.
I've thought of using adjacency lists of Node* (I need to know each node's neighbours) and storing the nodes in a dynamic hash table (I don't know how many nodes I'll get, so I need a container with dynamic search/insert).
Are there better ways to do it?
Sorry for my bad english! :D
Given the format you're getting the input in, a very reasonable approach would be to use a hash table of lists, where the keys are the nodes and the values are lists of (node, distance) pairs. Alternatively, if you have a dense graph and want to be able to quickly determine the distance from one node to another, it might be good to have a hash table of hash tables, where the top-level hash table maps each node to a second hash table, which in turn maps each of that node's neighbours to the cost of the edge between them. This still lets you iterate across a node's outgoing edges, but gives you faster lookup of distances.
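A minimal sketch of the hash-table-of-hash-tables variant, written in PHP for brevity (the question doesn't name a language; addEdge is a hypothetical helper):

// $graph[$a][$b] holds the distance x between $a and $b.
function addEdge(array &$graph, string $a, string $b, float $x): void
{
    $graph[$a][$b] = $x; // store the distance in both directions,
    $graph[$b][$a] = $x; // since the graph is undirected
}

$graph = [];
addEdge($graph, 'A', 'B', 3.0);

$dist = $graph['A']['B'] ?? null;    // fast distance lookup
foreach ($graph['A'] as $to => $d) { // still easy to iterate outgoing edges
    echo "A -> $to costs $d\n";
}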
Another idea (depending on the use case) would be to start off by building the first data structure (the hash table of lists), then to post-process it by building an adjacency matrix. This would be useful if you didn't need to iterate across a node's outgoing edges and needed fast random access to distances between nodes. It is similar to the hash table of hash tables, but is probably more space-efficient.
Hope this helps!

Ideal data structure for metabolic pathways

So I have a huge list of chemicals within an organism, with the data for both their precursor chemicals, and the ones they created.
I was thinking that some sort of tree structure would be appropriate; each chemical is a node, each parent is a precursor, each child is a product.
Each node could have more than one parent or more than one child, hence my confusion!
However, the main function of this structure will be to find ALL the chemical pathways that produce a given chemical, and I'm not sure a tree would be the most efficient for this sort of search.
My question is: is there a more appropriate data structure for this type of data and operation?
I think your data structure is a directed graph.
The brute-force approach for finding all the pathways from A to B would be to do a breadth-first search starting at A, covering as much of the graph as you are allowed to.
This guarantees that the paths you'll find will be ordered in length from shortest to longest.
Whenever you hit B, you should mark all of the nodes in that path as 'leading to B'. This way you can account for convergent pathways without having to walk the graph more than once.
Keep in mind that, unless you constrain it, the graph may contain loops. A loop in a pathway leading from A to B presents you with infinitely many pathways, so it's up to you how you'd like to handle these cases.
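A sketch of that search in PHP (allPathsBfs is an invented name, and the graph is assumed to be an adjacency list mapping each chemical to its products): the queue holds whole paths, so results come out shortest first, and the in_array check is one simple way to handle the loops mentioned above, by keeping every reported pathway free of repeats:

function allPathsBfs(array $graph, string $from, string $to): array
{
    $queue = [[$from]];
    $found = [];
    while ($queue) {
        $path = array_shift($queue);
        $last = end($path);
        if ($last === $to) {
            $found[] = $path; // reached B: record this pathway
            continue;
        }
        foreach ($graph[$last] ?? [] as $next) {
            if (!in_array($next, $path, true)) { // skip loops: keep paths simple
                $queue[] = array_merge($path, [$next]);
            }
        }
    }
    return $found;
}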

What are good ways of organizing directed graph data?

Here's my situation. I have a graph that has different sets of data being added at different times. For example, set1 might have a few thousand nodes, and then set2 comes in later and we apply business logic to create edges from set1 to set2 (and discard any vertices from set1 that do not have edges to set2). Then at a later point we get set3, set4, and so on, and the same process applies between each set and its previous set.
Question: what's the best way to organize this? What I did before was to name the nodes set1-xx, set2-xx, etc. The problem I faced was that when I tried to run analytics between the current set and the previous set, I had to loop through the entire graph and look for all the nodes whose names started with 'setx'. That took a long time as the graph grew, so I thought of another solution, which was to create a node called 'set1' and connect it to all the nodes of that particular set. I am testing it, but I was wondering if there is a more efficient way, or a built-in way, of handling data structures like this? Is there a way to somehow segment data like this?
I think a general solution would be applicable, but if it helps, I'm using neo4j (so any solution specific to that database would be good as well).
You have a very special type of a directed graph, called a layered graph.
The choice of the data structure depends primarily on the expected graph density (how many nodes from a previous set/layer are typically connected to a node in the current set/layer) and on the operations that you need to perform on it most of the time. It is definitely a good idea to have each layer directly represented by a numeric index (that is, the outermost structure will be an array of sets/layers), and presumably you can also use one array of vertices per layer. However, the list of edges per vertex (out only, or in and out sets of edges depending on whether you ever traverse the layers backward) may be any of the following:
Linked list of vertex identifiers; this is good if the graph is very sparse and edges are often added/removed.
Sorted array of vertex identifiers; this is good if the graph is quite sparse and immutable.
Array of booleans, indexed by vertex identifiers, determining whether a given vertex is or is not linked by an edge from the current vertex; this is good if the graph is dense.
The "vertex identifier" can take many forms. For example, it can be an index into the array of vertices on the next layer.
Your second solution is what I would do: create a setX node and connect all nodes belonging to that set to setX. That way your data is partitioned and it is easier to query.

An algorithm to check if a vertex is reachable

Is there an algorithm that can check, in a directed graph, if a vertex, let's say V2, is reachable from a vertex V1, without traversing all the vertices?
You might find a route to that node without traversing all the edges, and if so you can give a yes answer as soon as you do. Nothing short of traversing all the edges can confirm that the node isn't reachable (unless there's some other constraint you haven't stated that could be used to eliminate the possibility earlier).
Edit: I should add that it depends on how often you need to do queries versus how large (and dense) your graph is. If you need to do a huge number of queries on a relatively small graph, it may make sense to pre-process the data in the graph to produce a matrix with a bit at the intersection of any V1 and V2 to indicate whether there's a connection from V1 to V2. This doesn't avoid traversing the graph, but it can avoid traversing the graph at the time of the query. I.e., it's basically a greedy algorithm that assumes you're going to eventually use enough of the combinations that it's easiest to just traverse them all and store the result. Depending on the size of the graph, the pre-processing step may be slow, but once it's done executing a query becomes quite fast (constant time, and usually a pretty small constant at that).
Depth-first search or breadth-first search. Stop when you find the target. But there's no way to tell there's no path without going through every vertex, no. You can sometimes improve the performance with heuristics, e.g. if you have additional information about the graph. For example, if the graph represents a coordinate space like a real map, and most of the time you know that there's going to be a mostly direct path, then you can have the depth-first search look along lines that "aim towards the target". However, imagine the case where the start and end points are right next to each other, but with no direct route in between, and to find one you have to go way out of the way. You have to check every case in order to be exhaustive.
I doubt it has a name, but a breadth-first search might go like this:
Add V1 to a queue of nodes to be visited
While there are nodes in the queue:
    Take the next node from the queue
    If the node is V2, return true
    Mark the node as visited
    For every node at the end of an outgoing edge which is not yet visited:
        Add this node to the queue
    End for
End while
Return false
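A direct PHP rendering of that pseudocode (assuming the graph is an adjacency list, $graph[$node] = array of successors); marking nodes when they are enqueued, rather than when they are dequeued, keeps any node from entering the queue twice:

function reachable(array $graph, $v1, $v2): bool
{
    $queue = [$v1];
    $seen = [$v1 => true]; // mark on enqueue: each node is queued at most once
    while ($queue) {
        $node = array_shift($queue);
        if ($node === $v2) {
            return true;
        }
        foreach ($graph[$node] ?? [] as $next) {
            if (!isset($seen[$next])) {
                $seen[$next] = true;
                $queue[] = $next;
            }
        }
    }
    return false;
}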
Create an adjacency matrix when the graph is created. At the same time you do this, create matrices consisting of the powers of the adjacency matrix up to the number of nodes in the graph. To find if there is a path from node u to node v, check the matrices (starting from M^1 and going to M^n) and examine the value at (u, v) in each matrix. If, for any of the matrices checked, that value is greater than zero, you can stop the check because there is indeed a connection. (This gives you even more information as well: the power tells you the number of steps between nodes, and the value tells you how many paths there are between nodes for that step number.)
(Note that if you know the number of steps in the longest path in your graph, for whatever reason, you only need to create a number of matrices up to that power. As well, if you want to save memory, you could just store the base adjacency matrix and create the others as you go along, but for large matrices that may take a fair amount of time if you aren't using an efficient method of doing the multiplications, whether from a library or written on your own.)
It would probably be easiest to just do a depth- or breadth-first search, though, as others have suggested, not only because they're comparatively easy to implement but also because you can generate the path between nodes as you go along. (Technically you'd be generating multiple paths and discarding loops/dead-end ones along the way, but whatever.)
In principle, you can't determine that a path exists without traversing some part of the graph, because the failure case (a path does not exist) cannot be determined without traversing the entire graph.
You MAY be able to improve your performance by searching backwards (search from destination to starting point), or by alternating between forward and backward search steps.
Any good AI textbook will talk at length about search techniques. Elaine Rich's book was good in this area. Amazon is your FRIEND.
You mentioned here that the graph represents a road network. If the graph is planar, you could use Thorup's Algorithm, which creates an O(n log n)-space data structure that takes O(n log n) time to build and answers queries in O(1) time.
Another approach to this problem lets you ignore the vertices entirely. If you only look at the edges, you can produce a transitive-closure array that shows, for each vertex, which vertices are reachable from it.
Start with your list of edges:
Va -> Vc
Va -> Vd
....
Create an array with start locations as the rows and end locations as the columns. Fill the array with 0s. For each edge in the list of edges, place a 1 at the (start, end) coordinate of that edge.
Now iterate until either (V1, V2) is 1 or there are no more changes:
For each row:
    NextRowN = RowN
    For each column that is true in RowN:
        Use boolean OR to fold that column's row into the current NextRowN
    Set RowN to NextRowN
If you run this algorithm to completion, you will quickly have a complete map of all reachable vertices without traversing them individually. The runtime is proportional to the number of edges, so this would work well with a reasonable implementation and a reasonable number of edges.
A slightly more complex version of this algorithm would calculate only the vertices reachable from V1. To do this, you would restrict your scope to the vertices that are currently reachable at any given time. You can also limit folding in each row to only once, since the other rows never change.
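The iteration described above is essentially a transitive closure; one compact way to write much the same thing is the Warshall-style triple loop below, a PHP sketch assuming vertices numbered 0..n-1 and $m as the boolean adjacency matrix:

function closure(array $m, int $n): array
{
    for ($k = 0; $k < $n; $k++) {
        for ($i = 0; $i < $n; $i++) {
            if (!$m[$i][$k]) {
                continue; // row $i cannot reach $k, so nothing to OR in
            }
            for ($j = 0; $j < $n; $j++) {
                if ($m[$k][$j]) {
                    $m[$i][$j] = true; // $i reaches $k and $k reaches $j
                }
            }
        }
    }
    return $m; // $m[$u][$v] is now true iff $v is reachable from $u
}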
In order to be sure, you either have to find a path, or traverse all vertices that are reachable from V1 once.
I would recommend an implementation of depth first or breadth first search that stops when it encounters a vertex that it has already seen. The vertex will be processed on the first occurrence only. You need to make sure that the search starts at V1 and stops when it runs out of vertices or encounters V2.
