I have a graph in Mathematica 11. It contains undirected edges between nodes as well as standalone (unconnected) nodes.
Graph[{1 <-> 2, 2<-> 3, 3<-> 1, 4<-> 5, 5<-> 6, 6<-> 2, 2<-> 4}, VertexLabels -> "Name", VertexShapeFunction -> "Diamond", VertexSize -> Small]
Question:
How do I reflect (invert) the edges between the nodes?
Since they are undirected, by reflecting/inverting I mean that nodes should be linked wherever no edge existed before (and the former edges are gone).
Mathematica 11 provides the ReverseGraph function, but that only reverses the direction of directed edges. Any ideas?
Idea:
Convert the graph into an adjacency matrix and invert it; the inverted matrix could then be used to create the reflected/inverted graph.
However, I am stuck with inverting the adjacency matrix, since the result behaves strangely:
When I use Inverse(AdjacencyMatrix[data]) // MatrixForm, the 0/1 values are simply replaced by a term involving Inverse. Inverse computes the matrix inverse, which is not the entrywise 0/1 flip I'm after (and with round parentheses it isn't even applied as a function; Mathematica needs Inverse[...]).
Related:
This article covers how to reflect edge weights, but not the edges themselves.
I'm not sure I completely understand your question, so if this isn't the answer, tell us why not:
opts = {VertexLabels -> "Name", VertexShapeFunction -> "Diamond", VertexSize -> Small}
g1 = Graph[{1 <-> 2, 2 <-> 3, 3 <-> 1, 4 <-> 5, 5 <-> 6, 6 <-> 2, 2 <-> 4}];
Then you might like
GraphComplement[g1, opts]
or, if you want edges from each node to itself
AdjacencyGraph[Table[1, {VertexCount[g1]}, {VertexCount[g1]}] - AdjacencyMatrix[g1], opts]
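For comparison, here is the same all-ones-minus-adjacency idea as a plain Python sketch (not Mathematica; the vertex numbering 1..6 and edge list are taken from the question, the rest is illustrative):

```python
# Complement of g1's edge set via the flipped adjacency matrix.
edges = {(1, 2), (2, 3), (3, 1), (4, 5), (5, 6), (6, 2), (2, 4)}
n = 6

def has_edge(i, j):
    return (i, j) in edges or (j, i) in edges

# All-ones matrix minus adjacency matrix, with the diagonal zeroed
# (zeroing the diagonal drops the self-loops, like GraphComplement):
comp = [[0 if i == j else 1 - has_edge(i + 1, j + 1)
         for j in range(n)] for i in range(n)]

comp_edges = {(i + 1, j + 1) for i in range(n) for j in range(i + 1, n)
              if comp[i][j]}
print(len(comp_edges))  # K6 has 15 edges; 15 - 7 original = 8 in the complement
```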
Related
There are N buildings on the site, numbered 0 to N-1. Every employee has an office in one of the buildings. An employee may make a request to move from their current building X to another building Y. A moving request is noted by
class Request {
    String employeeName;
    int fromBuilding;
    int toBuilding;
}
Initially all buildings are full. A request to move from building X to building Y is achievable only if someone in building Y makes an achievable request to move out, thereby creating a vacancy. Given a wishlist of requests, help us plan the best set of building swaps. A plan that fulfills the maximum number of requests is considered the best.
Example 1:
Input:
["Alex", 1, 2]
["Ben", 2, 1]
["Chris", 1, 2]
["David", 2, 3]
["Ellen", 3, 1]
["Frank", 4, 5]
Output: [["Alex", "Ben"], ["Chris", "David", "Ellen"]]
Example 2:
Input:
["Adam", 1, 2]
["Brian", 2, 1]
["Carl", 4, 5]
["Dan", 5, 1]
["Eric", 2, 3]
["Fred", 3, 4]
Output: [["Adam", "Eric", "Fred", "Carl", "Dan"]]
This question was taken from leet code here:
https://leetcode.com/discuss/interview-question/325840/amazon-phone-screen-moving-requests
I am trying to do this in Python, and I figured that creating a dictionary representing the graph would be a good start, but I am not sure what to do next.
```
def findMovers(buildReqs):
    graph = {}
    for name, src, dst in buildReqs:   # each request: [name, from, to]
        graph.setdefault(src, []).append(dst)
    return graph
```
Make a bipartite graph with current offices on one side and future offices on the other.
Draw edges with score 0 for people staying in their current office, and edges with score 1 for people moving into any desired new office.
Find the maximum weight bipartite matching: https://en.wikipedia.org/wiki/Assignment_problem
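To make the matching formulation concrete, here is a brute-force sketch on Example 2 (exponential, for illustration only; the function name and the strict "stay in your own office" rule are my assumptions, a real solution would use a Hungarian-algorithm solver):

```python
from itertools import permutations

# Requests from Example 2: (name, from_building, to_building).
requests = [("Adam", 1, 2), ("Brian", 2, 1), ("Carl", 4, 5),
            ("Dan", 5, 1), ("Eric", 2, 3), ("Fred", 3, 4)]

def max_fulfilled(requests):
    """Brute-force the bipartite matching described above: person i may
    keep their own office (score 0) or take any office in their desired
    building (score 1); a perfect assignment maximizing the total score
    fulfills the most requests."""
    building = [frm for _, frm, _ in requests]   # office i is in building[i]
    n = len(requests)
    best = 0
    for perm in permutations(range(n)):          # perm[i] = office given to i
        score = 0
        for i, j in enumerate(perm):
            _, frm, to = requests[i]
            if building[j] == to:
                score += 1        # score-1 edge: person i moves
            elif j != i:
                break             # neither a stay nor a desired move
        else:
            best = max(best, score)
    return best

print(max_fulfilled(requests))  # 5 - matches the 5-person cycle in the output
```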
Looking at wiki's description on node ordering of contraction hierarchies
https://en.wikipedia.org/wiki/Contraction_hierarchies
I can't seem to understand how they come up with this "correct" order.
I have followed some heuristics from several papers on the subject. These mostly include edge difference and increasing the cost of neighbor nodes when a contraction happens (also mentioned on the wiki).
Following those heuristics, my algorithm looks like this:
First, run over all nodes in the graph and compute the edge difference (the number of shortcut edges a contraction would add minus the number of incident edges it removes).
Use this list for the contraction phase: repeatedly contract the node with the minimum value. On contraction, add +1 to the cost of its adjacent nodes.
I come up with the following contraction order, which is not the same as in the wiki's example.
node order list after edge difference = {-1, -1 ,-1 ,-1 ,-1, -1}, c = contracted.
Contract node 0, add +1 to node 1 as it's a neighbor.
node order list now = {c, 0 ,-1 ,-1 ,-1, -1}
Contract node 2, add +1 to node 1 and 3.
node order list now = {c, 1 ,c ,0 ,-1, -1}
Contract node 4, add +1 to node 3 and 5. node order list now = {c, 1 ,c ,1 ,c, 0}
Contract node 5, no neighbors. node order list now = {c, 1 ,c ,1 ,c, c}
Contract node 1, no neighbors. node order list now = {c, c ,c ,1 ,c, c}
Contract node 3, no neighbors. node order list now = {c, c ,c ,c ,c, c}
This only gives two shortcuts, one between 1 and 3 and the other between 3 and 5, but it misses the one from 1 to 5. The wiki's example gives 3 shortcuts, including that last one.
What am I missing?
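For reference, the initial edge-difference pass can be sketched like this (the 6-node path graph 0-1-2-3-4-5 is my assumption, chosen because it reproduces the all -1 start values above; the simplified shortcut test is also an assumption that only holds for path-like graphs):

```python
from itertools import combinations

def edge_difference(adj, v):
    """Shortcuts added minus incident edges removed when contracting v.
    In this sketch a shortcut (u, w) is added for every neighbor pair not
    already joined by an edge - adequate for a path, where the only
    u-v-w route is the shortest one."""
    nbrs = list(adj[v])
    shortcuts = sum(1 for u, w in combinations(nbrs, 2) if w not in adj[u])
    return shortcuts - len(nbrs)

# Path graph 0-1-2-3-4-5:
path = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 5} for i in range(6)}
print([edge_difference(path, v) for v in range(6)])
# [-1, -1, -1, -1, -1, -1] - every node starts at edge difference -1
```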
procedure explore(G, v)
Input: G = (V, E) is a graph; v ∈ V
Output: visited(u) is set to true for all nodes u reachable from v
visited(v) = true
previsit(v)
for each edge (v, u) ∈ E:
    if not visited(u): explore(u)
postvisit(v)
All this pseudocode does is find one path, right? It does nothing while backtracking, if I'm not wrong?
It just explores the graph (it doesn't return a path) - everything that's reachable from the starting vertex will be explored and have the corresponding value in visited set (not just the vertices corresponding to one of the paths).
It moves on to the next edge while backtracking ... and it does postvisit.
So if we're at a, which has edges to b, c and d, we'll start by going to b; then, when we eventually return to a, we'll go to c (if it hasn't been visited already), and then we'll similarly go to d after returning to a for the 2nd time.
It's called depth-first search, in case you were wondering. Wikipedia also gives an example of the order in which vertices get explored in a tree (the numbers correspond to the visit order; we start at 1).
In the above, you're not just exploring the vertices going down the left (1-4); after 4 you go back to 3 to visit 5, then back to 2 to visit 6, and so on, until all 12 are visited.
With regard to previsit and postvisit - previsit happens when we first reach a vertex; postvisit happens after we've explored all of its children (and their descendants in the corresponding DFS tree). So, in the above example, for 1, previsit happens right at the start, but postvisit happens only at the very end, because all the other vertices are children of 1 or descendants of those children. The order will go something like:
pre 1, pre 2, pre 3, pre 4, post 4, pre 5, post 5, post 3, pre 6, post 6, post 2, ...
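A direct Python rendering of the explore pseudocode, recording the previsit/postvisit order on a small tree shaped like the start of that trace (the tree itself is an assumption: 1 has child 2, 2 has children 3 and 6, 3 has children 4 and 5):

```python
def explore(graph, v, visited, order):
    visited.add(v)
    order.append(("pre", v))              # previsit(v)
    for u in graph.get(v, []):
        if u not in visited:
            explore(graph, u, visited, order)
    order.append(("post", v))             # postvisit(v)

graph = {1: [2], 2: [3, 6], 3: [4, 5]}
order = []
explore(graph, 1, set(), order)
print(" ".join(f"{kind} {v}" for kind, v in order))
# pre 1 pre 2 pre 3 pre 4 post 4 pre 5 post 5 post 3 pre 6 post 6 post 2 post 1
```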
What is an efficient algorithm for the enumeration of all subgraphs of a parent graph. In my particular case, the parent graph is a molecular graph, and so it will be connected and typically contain fewer than 100 vertices.
Edit: I am only interested in the connected subgraphs.
This question has a better answer in the accepted answer to this question. It avoids the computationally complex step marked "you fill in above function" in #ninjagecko's answer. It can deal efficiently with compounds where there are several rings.
See the linked question for the full details, but here's the summary. (N(v) denotes the set of neighbors of vertex v. In the "choose a vertex" step, you can choose any arbitrary vertex.)
GenerateConnectedSubgraphs(verticesNotYetConsidered, subsetSoFar, neighbors):
    if subsetSoFar is empty:
        let candidates = verticesNotYetConsidered
    else:
        let candidates = verticesNotYetConsidered intersect neighbors
    if candidates is empty:
        yield subsetSoFar
    else:
        choose a vertex v from candidates
        GenerateConnectedSubgraphs(verticesNotYetConsidered - {v},
                                   subsetSoFar,
                                   neighbors)
        GenerateConnectedSubgraphs(verticesNotYetConsidered - {v},
                                   subsetSoFar union {v},
                                   neighbors union N(v))
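A direct Python transcription of that pseudocode, tried on a triangle graph (the `adj` dict-of-neighbor-sets representation is an assumption):

```python
def connected_subgraphs(adj):
    """Yield each connected vertex subset of the graph exactly once
    (including the empty set, as in the pseudocode)."""
    def gen(not_yet, so_far, neighbors):
        candidates = not_yet if not so_far else not_yet & neighbors
        if not candidates:
            yield frozenset(so_far)
            return
        v = next(iter(candidates))       # choose any vertex from candidates
        yield from gen(not_yet - {v}, so_far, neighbors)            # exclude v
        yield from gen(not_yet - {v}, so_far | {v}, neighbors | adj[v])  # include v
    yield from gen(set(adj), set(), set())

triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
subs = list(connected_subgraphs(triangle))
print(len(subs), len(set(subs)))  # 8 8 - each connected subset exactly once
```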
Comparison with mathematical subgraphs:
You could give each element a number from 0 to N-1, then enumerate each subgraph as an N-bit binary number. You wouldn't need to scan the graph at all.
If what you really want is subgraphs with a certain property (fully connected, etc.), that is different, and you'd need to update your question. As a commenter noted, 2^100 is very large, so you definitely don't want to enumerate (as above) the mathematically-correct-but-physically-boring disconnected subgraphs. Assuming a billion enumerations per second, it would literally take you at least 40 trillion years to enumerate them all.
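As a quick sketch, the binary-number enumeration looks like this: each subset of N vertices is one N-bit number, and no graph scanning is involved.

```python
def all_vertex_subsets(vertices):
    """Enumerate every vertex subset via an N-bit counter: bit i of the
    mask decides whether vertices[i] is in the subset."""
    n = len(vertices)
    for mask in range(1 << n):
        yield {v for i, v in enumerate(vertices) if mask >> i & 1}

subsets = list(all_vertex_subsets([1, 2, 3]))
print(len(subsets))  # 2**3 = 8 subsets, connected or not
```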
Connected-subgraph-generator:
If you want some kind of enumeration that retains the DAG property of subgraphs under some metric, e.g. (1,2,3)->(2,3)->(2), (1,2,3)->(1,2)->(2), you'd just want an algorithm that could generate all CONNECTED subgraphs as an iterator (yielding each element). This can be accomplished by recursively removing a single element at a time (optionally from the "boundary"), checking if the remaining set of elements is in a cache (else adding it), yielding it, and recursing. This works fine if your molecule is very chain-like with very few cycles. For example if your element was a 5-pointed star of N elements, it would only have about (100/5)^5 = 3.2million results (less than a second). But if you start adding in more than a single ring, e.g. aromatic compounds and others, you might be in for a rough ride.
e.g. in python
class Graph(object):
    def __init__(self, vertices):
        self.vertices = frozenset(vertices)
        # add edge logic here and to methods, etc. etc.

    def subgraphs(self):
        cache = set()
        def helper(graph):
            yield graph
            for element in graph:
                if {{REMOVING ELEMENT WOULD DISCONNECT GRAPH}}:
                    # you fill in above function; easy if
                    # there is 0 or 1 ring in molecule
                    # (keep track if molecule has ring, e.g.
                    #  self.numRings, maybe even more data)
                    # if you know there are 0 rings the operation
                    # takes O(1) time
                    continue
                subgraph = Graph(graph.vertices - {element})
                if subgraph not in cache:
                    cache.add(subgraph)
                    for s in helper(subgraph):
                        yield s
        for graph in helper(self):
            yield graph

    def __eq__(self, other):
        return self.vertices == other.vertices

    def __hash__(self):
        return hash(self.vertices)

    def __iter__(self):
        return iter(self.vertices)

    def __repr__(self):
        return 'Graph(%s)' % repr(set(self.vertices))
Demonstration:
G = Graph({1,2,3,4,5})
for subgraph in G.subgraphs():
    print(subgraph)
Result:
Graph({1, 2, 3, 4, 5})
Graph({2, 3, 4, 5})
Graph({3, 4, 5})
Graph({4, 5})
Graph({5})
Graph(set())
Graph({4})
Graph({3, 5})
Graph({3})
Graph({3, 4})
Graph({2, 4, 5})
Graph({2, 5})
Graph({2})
Graph({2, 4})
Graph({2, 3, 5})
Graph({2, 3})
Graph({2, 3, 4})
Graph({1, 3, 4, 5})
Graph({1, 4, 5})
Graph({1, 5})
Graph({1})
Graph({1, 4})
Graph({1, 3, 5})
Graph({1, 3})
Graph({1, 3, 4})
Graph({1, 2, 4, 5})
Graph({1, 2, 5})
Graph({1, 2})
Graph({1, 2, 4})
Graph({1, 2, 3, 5})
Graph({1, 2, 3})
Graph({1, 2, 3, 4})
There is an algorithm called gSpan [1] that has been used to count frequent subgraphs; it can also be used to enumerate all subgraphs. You can find an implementation of it here [2].
The idea is the following: graphs are represented by so-called DFS codes. A DFS code corresponds to a depth-first search on a graph G and has an entry of the form (i, j, l(v_i), l(v_i, v_j), l(v_j)) for each edge (v_i, v_j) of the graph, where the vertex subscripts correspond to the order in which the vertices are discovered by the DFS.
It is possible to define a total order on the set of all DFS codes (as is done in [1]) and, as a consequence, to obtain a canonical label for a given graph by computing the minimum over all DFS codes representing it: if two graphs have the same minimum DFS code, they are isomorphic. Now, starting from all possible DFS codes of length 1 (one per edge), all subgraphs of a graph can be enumerated by adding one edge at a time to the codes, which gives rise to an enumeration tree in which each node corresponds to a graph. If the enumeration is done carefully (i.e., compatibly with the order on the DFS codes), minimal DFS codes are encountered first; therefore, whenever a non-minimal DFS code is encountered, its whole subtree can be pruned. Please consult [1] for further details.
[1] https://sites.cs.ucsb.edu/~xyan/papers/gSpan.pdf
[2] http://www.nowozin.net/sebastian/gboost/
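As a toy sketch, here is how *one* DFS code for a labeled triangle can be built (the graph, labels, and traversal order are illustrative assumptions; real gSpan computes the minimum over all such codes, which this sketch does not attempt):

```python
def one_dfs_code(adj, vlabel, elabel, start):
    """Emit (i, j, l(v_i), l(v_i,v_j), l(v_j)) for each edge, with i, j
    being DFS discovery indices; each edge is emitted exactly once."""
    disc, code, done = {start: 0}, [], set()
    def dfs(v):
        for u in adj[v]:
            e = frozenset((v, u))
            if e in done:
                continue
            done.add(e)
            if u not in disc:            # forward (tree) edge: discover u
                disc[u] = len(disc)
                code.append((disc[v], disc[u], vlabel[v], elabel[e], vlabel[u]))
                dfs(u)
            else:                        # backward edge to an earlier vertex
                code.append((disc[v], disc[u], vlabel[v], elabel[e], vlabel[u]))
    dfs(start)
    return code

tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
vl = {0: "a", 1: "a", 2: "a"}
el = {frozenset((0, 1)): "-", frozenset((1, 2)): "-", frozenset((0, 2)): "-"}
print(one_dfs_code(tri, vl, el, 0))
# [(0, 1, 'a', '-', 'a'), (1, 2, 'a', '-', 'a'), (2, 0, 'a', '-', 'a')]
```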
For a description of the data structure see
http://www.flipcode.com/archives/The_Half-Edge_Data_Structure.shtml
http://www.cgal.org/Manual/latest/doc_html/cgal_manual/HalfedgeDS/Chapter_main.html
A half-edge data structure involves cycles.
Is it possible to implement it in a functional language like Haskell?
Are mutable references (STRef) the way to go?
Thanks
In order to efficiently construct half-edge data structures you need an acceleration structure for the HE_vert (let's call it HE_vert_acc... but you can actually just do this in HE_vert directly) that saves all HE_edges that point to this HE_vert. Otherwise you get very bad complexity when trying to define the "HE_edge* pair" (which is the oppositely oriented adjacent half-edge), e.g. via brute-force comparison.
So, making a half-edge data structure for a single face can easily be done with the tying-the-knot method, because there are (probably) no pairs anyway. But if you add the complexity of the acceleration structure to decide on those pairs efficiently, then it becomes a bit more difficult, since you need to update the same HE_vert_acc across different faces, and then update the HE_edges to contain a valid pair. Those are actually multiple steps. How you would glue them all together via tying-the-knot is way more complex than constructing a circular doubly linked list and not really obvious.
Because of that... I wouldn't really bother much about the question "how do I construct this data structure in idiomatic haskell".
I think it's reasonable to use more imperative approaches here while trying to keep the API functional. I'd probably go for arrays and state-monads.
Not saying it isn't possible with tying-the-knot, but I haven't seen such an implementation yet. It is not an easy problem in my opinion.
EDIT: so I couldn't let go and implemented this, assuming the input is an .obj mesh file.
My approach is based on the method described here: https://wiki.haskell.org/Tying_the_Knot#Migrated_from_the_old_wiki, specifically the one from Andrew Bromage, where he explains tying the knot for a DFA without knowing the knots at compile time.
Unfortunately, the half-edge data structure is even more complex, since it actually consists of 3 data structures.
So I started with what I actually want:
data HeVert a = HeVert {
    vcoord :: a        -- the coordinates of the vertex
  , emedge :: HeEdge a -- one of the half-edges emanating from the vertex
  }

data HeFace a = HeFace {
    bordedge :: HeEdge a -- one of the half-edges bordering the face
  }

data HeEdge a = HeEdge {
    startvert :: HeVert a         -- start-vertex of the half-edge
  , oppedge   :: Maybe (HeEdge a) -- oppositely oriented adjacent half-edge
  , edgeface  :: HeFace a         -- face the half-edge borders
  , nextedge  :: HeEdge a         -- next half-edge around the face
  }
The problem is that we run into multiple issues here when constructing it efficiently, so for all these data structures we will use an "Indirect" one which basically just saves plain information given by the .obj mesh file.
So I came up with this:
data IndirectHeEdge = IndirectHeEdge {
    edgeindex  :: Int -- edge index
  , svindex    :: Int -- index of the start vertex
  , nvindex    :: Int -- index of the next vertex
  , indexf     :: Int -- index of the face
  , offsetedge :: Int -- offset to get the next edge
  }

data IndirectHeVert = IndirectHeVert {
    emedgeindex :: Int   -- emanating edge index (starts at 1)
  , edgelist    :: [Int] -- indices of edges that point to this vertex
  }

data IndirectHeFace =
  IndirectHeFace (Int, [Int]) -- (faceIndex, [vertexIndex])
A few things are probably not intuitive and can be done better, e.g. the "offsetedge" thing.
See how I didn't save the actual vertices anywhere. This is just a lot of index stuff which sort of emulates the C pointers.
We will need "edgelist" to efficiently find the oppositely oriented adjacent half-edges later.
I don't go into detail how I fill these indirect data structures, because that is really specific to the .obj file format. I'll just give an example on how things convert.
Suppose we have the following mesh file:
v 50.0 50.0
v 250.0 50.0
v 50.0 250.0
v 250.0 250.0
v 50.0 500.0
v 250.0 500.0
f 1 2 4 3
f 3 4 6 5
The indirect faces will now look like this:
[IndirectHeFace (0,[1,2,4,3]),IndirectHeFace (1,[3,4,6,5])]
The indirect edges:
[IndirectHeEdge {edgeindex = 0, svindex = 1, nvindex = 2, indexf = 0, offsetedge = 1},
IndirectHeEdge {1, 2, 4, 0, 1},
IndirectHeEdge {2, 4, 3, 0, 1},
IndirectHeEdge {3, 3, 1, 0, -3},
IndirectHeEdge {0, 3, 4, 1, 1},
IndirectHeEdge {1, 4, 6, 1, 1},
IndirectHeEdge {2, 6, 5, 1, 1},
IndirectHeEdge {3, 5, 3, 1, -3}]
And the indirect vertices:
[(1,IndirectHeVert {emedgeindex = 0, edgelist = [3]}),
(2,IndirectHeVert {1, [0]}),
(3,IndirectHeVert {4, [7,2]}),
(4,IndirectHeVert {5, [4,1]}),
(5,IndirectHeVert {7, [6]}),
(6,IndirectHeVert {6, [5]})]
Now the really interesting part is how we can turn these indirect data structures into the "direct" one we defined at the very beginning. This is a bit tricky, but is basically just index lookups and works because of laziness.
Here's the pseudo code (the actual implementation uses not just lists and has additional overhead in order to make the function safe):
indirectToDirect :: [a] -- parsed vertices, e.g. 2d points (Double, Double)
                 -> [IndirectHeEdge]
                 -> [IndirectHeFace]
                 -> [IndirectHeVert]
                 -> HeEdge a
indirectToDirect points edges faces vertices
  = thisEdge (head edges)
  where
    thisEdge edge
      = HeEdge (thisVert (vertices !! svindex edge) $ svindex edge)
               (thisOppEdge (svindex edge) $ indexf edge)
               (thisFace $ faces !! indexf edge)
               (thisEdge $ edges !! (edgeindex edge + offsetedge edge))
    thisFace face = HeFace $ thisEdge (edges !! (head . snd $ face))
    thisVert vertice coordindex
      = HeVert (points !! (coordindex - 1))
               (thisEdge $ edges !! (emedgeindex vertice - 1))
    thisOppEdge startverticeindex faceindex
      = thisEdge
          <$>
          (headMay
            . filter ((/=) faceindex . indexf)
            . fmap (edges !!)
            . edgelist -- getter
            $ vertices !! startverticeindex)
Mind that we cannot really make this return a "Maybe (HeEdge a)" because it would try to evaluate the whole thing (which is infinite) in order to know which constructor to use.
I had to add a NoVert/NoEdge/NoFace constructor for each of them to avoid the "Maybe".
Another downside is that this heavily depends on the input and isn't really a generic library thing. I'm also not entirely sure if it will re-evaluate (which is still very cheap) already visited edges.
Using Data.IntMap.Lazy seems to increase performance (at least for the list of IndirectHeVert). Data.Vector didn't really do much for me here.
There's no need for using the state monad anywhere, unless you want to use Arrays or Vectors.
Obviously the problem is that a half-edge references the next and the opposite half-edge (the other references are no problem). You can "break the cycle" e.g. by referencing not directly to other half-edges, but to reference just an ID (e.g. simple Ints). In order to look up a half-edge by ID, you can store them in a Data.Map. Of course this approach requires some book-keeping in order to avoid a big hairy mess, but it is the easiest way I can think of.
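A quick sketch of that ID-based idea, in Python for brevity (the field names, the single triangular face, and the -1 boundary convention are all my assumptions): half-edges reference each other only by integer ID in a map, so there are no cyclic in-memory references to tie.

```python
from dataclasses import dataclass

@dataclass
class HalfEdge:
    origin: int   # vertex ID
    twin: int     # opposite half-edge ID (-1 if on the boundary)
    nxt: int      # next half-edge ID around the face
    face: int     # face ID

# One triangular face (v0, v1, v2); IDs index into this dict.
edges = {
    0: HalfEdge(origin=0, twin=-1, nxt=1, face=0),
    1: HalfEdge(origin=1, twin=-1, nxt=2, face=0),
    2: HalfEdge(origin=2, twin=-1, nxt=0, face=0),
}

# Walking the face boundary is just repeated ID lookup:
start, walk = 0, []
e = start
while True:
    walk.append(edges[e].origin)
    e = edges[e].nxt
    if e == start:
        break
print(walk)  # [0, 1, 2]
```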
Stupid me, I'm not thinking lazy enough. The solution above works for strict functional languages, but is unnecessary for Haskell.
If the task in question allows you to build the half-edge structure once and then query it many times, then lazy tying-the-know approach is the way to go, as was pointed out in the comments and the other answer.
However, if you want to update your structure, then purely-functional interface might prove cumbersome to work with. Also, you need to consider O(..) requirements for update functions. It might turn out that you need mutable internal representation (probably with pure API on top) after all.
I've run into a helpful application of polymorphism for this sort of thing. You'll commonly desire both a static, non-infinite version for serialization, as well as a knot-tied version for the internal representation.
If you make one version that's polymorphic, then you can update that particular value using record syntax :
data Foo edge_type_t = Depot {
    edge_type :: edge_type_t,
    idxI, idxE, idxF, idxL :: !Int
  } deriving (Show, Read)
loadFoo edgetypes d = d { edge_type = edgetypes ! edge_type d }
unloadFoo d = d { edge_type = edgetype_id $ edge_type d }
There is however one major caveat: you cannot make a Foo (Foo (Foo (...))) type this way, because Haskell cannot express that infinitely nested type directly. :(