I need to create a random graph from a few pre-existing vertices.
I haven't found a way to do this yet.
Every tutorial uses the "VertexFactory" (which, by the way, cannot be resolved even though I import org.jgraph.generate.*) to make up vertices while creating the graph.
But I would rather generate the graph from already-existing vertices. I'm especially interested in the Watts-Strogatz algorithm, but I'm not sure yet.
Is this possible using the already-written random generation classes of JGraphT?
Thanks a lot
Yes, you can do this, but not with vertices that have already been added to the graph. If, for whatever reason, you want to use pre-existing objects as your vertices, you could do something along the following lines:
Put all your pre-existing vertex objects into a list.
Implement a custom vertex supplier (the old VertexFactory has been taken out of commission in favor of the Supplier paradigm in newer Java versions). This vertex supplier iterates over your list of pre-defined vertices and returns the next vertex each time the supplier's get() method is invoked.
Create a new graph with your custom vertex supplier. Have a look at this example on our wiki page.
Invoke any of the random graph generators on the graph.
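The supplier itself is just a function that hands out your pre-existing vertices one at a time. A minimal sketch of the idea (in TypeScript for brevity; in JGraphT it would be a java.util.function.Supplier<V> passed to the graph constructor, and the names here are illustrative):

```typescript
// Sketch of a vertex supplier backed by a list of pre-existing vertices.
// The generator calls the supplier once per vertex it wants to add, so
// each call hands out the next object from the list.
function makeVertexSupplier<V>(existing: V[]): () => V {
  let i = 0;
  return () => {
    if (i >= existing.length) {
      // The generator asked for more vertices than we prepared.
      throw new Error("no pre-existing vertices left");
    }
    return existing[i++];
  };
}

// Each call returns the next pre-existing vertex, in list order.
const nextVertex = makeVertexSupplier(["alice", "bob", "carol"]);
```

Note that the generator will consume exactly as many vertices as its size parameter dictates, so make sure your list is at least that long.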
In this force-directed graph,
https://codesandbox.io/s/3d-force-graph-forked-le6ny7
I want all nodes in group 2 to repel each other a lot, but I want all nodes in any other group combination to repel each other only a little.
I saw this answer for how to apply a custom force to only certain nodes.
https://stackoverflow.com/a/61985672
I added this in the codesandbox (for whatever reason, the code isn't applied on page load; it doesn't re-draw the graph until I make some kind of code change, like adding a newline. Weird).
But after re-drawing the graph, now the graph is in 2D - not 3D anymore.
How can I add a custom force to only certain nodes, AND keep the graph in 3D?
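As I understand it, the trick in that linked answer boils down to wrapping a force so its initialize() only ever sees a filtered subset of the nodes. A simplified, d3-free sketch of the idea (the force/node shapes here are illustrative stand-ins for d3's):

```typescript
// Sketch of the "isolate a force" trick: wrap a force so that
// initialize() only passes along the nodes matching a filter, which
// restricts the force's effect to that subset.
interface SimNode { group: number; }
interface Force {
  initialize(nodes: SimNode[]): void;
}

function isolateForce(force: Force, filter: (n: SimNode) => boolean): Force {
  const init = force.initialize.bind(force);
  force.initialize = (nodes: SimNode[]) => init(nodes.filter(filter));
  return force;
}
```

The intent would be to register one strong repulsion force isolated to group-2 nodes and a weak one for everything else, rather than replacing the simulation's forces wholesale.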
I'm working through the polygon triangulation algorithm in Computational Geometry:
Algorithms and Applications, 3rd edition, by Mark de Berg and others. The data structure used to represent the polygon is called a "doubly-connected edge list". As the book describes, "[it] contains a record for each face, edge, and vertex". Each edge is actually stored as two "half edges", representing each side of the edge. This makes it easier to walk the half edges around a face. The half edge records look like:
// Pseudocode. No particular language. The properties need to be pointers/references of some kind.
struct HalfEdge {
    Vertex origin;
    HalfEdge twin;
    Face incidentFace;
    HalfEdge next;
    HalfEdge previous;
}
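To illustrate the "walk around a face" part, collecting the vertices of a face via next might look like this (TypeScript, with a simplified record standing in for the one above):

```typescript
// Illustrative sketch: collect the origin of every half-edge bounding a
// face by following `next` pointers until we return to where we started.
interface HalfEdge {
  origin: string;   // stand-in for a Vertex record
  next: HalfEdge;
}

function faceOrigins(start: HalfEdge): string[] {
  const result: string[] = [];
  let e = start;
  do {
    result.push(e.origin);
    e = e.next;
  } while (e !== start);
  return result;
}
```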
Now imagine I'm processing this polygon, with one face (so far) called Face1. I need to add a diagonal at the dotted line. This will create a new face (Face2). It seems like I need to walk all those HalfEdges that now surround Face2 and set their incidentFace property to point to Face2.
But the book says:
The diagonals computed for the split and merge vertices are added to the doubly-connected edge list. To access the doubly-connected edge list we use cross-pointers between the edges in the status structure and the corresponding edges in the doubly-connected edge list. Adding a diagonal can then be done in constant time with some simple pointer manipulations.
I don't see any further explanation. I'm not sure what a "cross pointer" means here. The "status structure" refers to a binary search tree of edges that is used to easily find the edge to the left of a given vertex, as the polygon is processed from top to bottom.
Can anyone explain this further? How are they able to update all the edges in constant time?
The same book (page 33) says:
Even more important is to realize that in many applications the faces of the subdivision carry no interesting meaning (think of the network of rivers or roads that we looked at before). If that is the case, we can completely forget about the face records, and the IncidentFace() field of half-edges. As we will see, the algorithm of the next section doesn't need these fields (and is actually simpler to implement if we don't need to update them).
So you don't need to modify the incidentFace field of any half-edges after each diagonal insertion: you will be able to find the vertices of all the monotone polygons by following the next field of the HalfEdge records, once all the necessary diagonals have been inserted into the DCEL.
The most intricate job here is to set the next and prev fields of the newly inserted half-edges, and to modify these fields in the existing half-edges. Exactly four existing half-edges need to be updated after each insertion, but it's not easy to find them.
As you already know, each DCEL vertex record contains a field pointing to some half-edge originating at that vertex. If you keep this field pointing to an internal half-edge, or to the most recently inserted half-edge, it will be a little easier to find the half-edges whose next and prev fields need updating after each insertion.
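If it helps, here is a rough sketch of that pointer surgery in TypeScript (the record and names are simplified stand-ins for a real DCEL; eu and ew are assumed to be the face's half-edges originating at the diagonal's two endpoints):

```typescript
// Sketch of inserting a diagonal u-w into a face, where `eu` and `ew` are
// the existing half-edges of that face originating at u and w.
// Exactly four existing half-edges are touched: eu.prev and ew.prev get a
// new `next`, while eu and ew get a new `prev`.
interface HE {
  origin: string;
  twin: HE | null;
  next: HE;
  prev: HE;
}

function insertDiagonal(eu: HE, ew: HE): [HE, HE] {
  // h1 runs u -> w, h2 runs w -> u; they are twins of each other.
  const h1: HE = { origin: eu.origin, twin: null, next: ew, prev: eu.prev };
  const h2: HE = { origin: ew.origin, twin: null, next: eu, prev: ew.prev };
  h1.twin = h2;
  h2.twin = h1;
  eu.prev.next = h1;  // update 1: old edge into u now feeds the diagonal
  ew.prev.next = h2;  // update 2: old edge into w now feeds the twin
  eu.prev = h2;       // update 3
  ew.prev = h1;       // update 4
  return [h1, h2];
}
```

After the insertion, walking next from eu traces one of the two resulting faces and walking next from h1 traces the other.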
I have a directed graph (in JS/TS, but this is a general programming-patterns question) where each vertex is a child class of Shape, and the children are the different shapes, e.g. circle, rectangle, etc. I'm looking for a design pattern for the following problem:
Problem: each vertex has its own rules regarding what it can be connected to or from, and these rules are sometimes not simple:
some rules are easier to check from the target vertex class (e.g. circles must have no incoming edges) and others are easier to check from the source vertex class (e.g. a circle can have no outgoing edges)
some rules are two-way, e.g. a rectangle can be connected to/from circles and triangles. I can check this rule from the source vertex's perspective (in the rectangle class's validateEdge method, make sure the target is not any of these) or from the target vertex's perspective (in the circle and triangle classes' validateEdge methods, make sure the source is not a circle). I shouldn't be checking the same rule multiple times.
some rules take into account an attribute of one of the vertices, e.g. circles are only connectable to rectangles that are red, etc. Thus I can't just have a map of key-value pairs capturing the rules, with validation running over the map to check whether any applies.
Currently I have it implemented the naive way: given an edge, check all the rules by conditioning on the types of the source and destination, which is obviously ugly and unmaintainable.
My proposed solution
The best thing I came up with is to have a method isConnectableTo(target) on each Shape. This restricts edge validation to the source vertex's perspective, thus avoiding the problem of checking the same rule twice, once from the target's side and once from the source's.
The problem is that it doesn't fully capture the first requirement, and I still need to condition on the target type before checking which rules apply.
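A minimal sketch of that idea (the shape classes and the concrete rules are made up for illustration):

```typescript
// Sketch of the isConnectableTo approach: each Shape subclass owns the
// rules for its outgoing edges, so every rule lives in exactly one place,
// but each method still conditions on the target's type.
abstract class Shape {
  abstract isConnectableTo(target: Shape): boolean;
}

class Rectangle extends Shape {
  constructor(public color: string) { super(); }
  isConnectableTo(target: Shape): boolean {
    return target instanceof Circle || target instanceof Triangle;
  }
}

class Circle extends Shape {
  // Example attribute-dependent rule: circles only connect to red rectangles.
  isConnectableTo(target: Shape): boolean {
    return target instanceof Rectangle && target.color === "red";
  }
}

class Triangle extends Shape {
  isConnectableTo(_target: Shape): boolean {
    return false;
  }
}

function validateEdge(source: Shape, target: Shape): boolean {
  return source.isConnectableTo(target);
}
```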
Any other solutions?
Thanks
Take a simple network A=B=C, where node B has the attribute minor. When I just filter out that attribute, I get the disconnected graph A C. However, I want a visualisation of the fact that A and C are indirectly connected, i.e. A---C. How can this be achieved? Are filters even the proper way?
I could be wrong, but I don't think this is something you can achieve without processing your data first: looping through all the edges that involve B, creating an array of B's neighbours, and then creating new edges linking them.
The exact way to do this depends on the format of your data and the programming language.
You could also add edges manually in Gephi, but that is of course only realistic for small graphs.
I don't think filters are the right tool for this because they simply exclude nodes (and edges) but don't create new ones. https://github.com/gephi/gephi/wiki/Filter
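The preprocessing step could look something like this (a sketch in TypeScript; the edge format and names are illustrative, and you'd adapt it to however your data is stored):

```typescript
// Sketch: remove one node from an edge list and directly connect its
// former neighbours, so the indirect connectivity through it stays visible.
type Edge = [string, string];

function contractNode(edges: Edge[], node: string): Edge[] {
  const neighbours: string[] = [];
  const kept: Edge[] = [];
  for (const [u, v] of edges) {
    if (u === node) neighbours.push(v);
    else if (v === node) neighbours.push(u);
    else kept.push([u, v]);
  }
  // Link every pair of former neighbours with a new edge.
  for (let i = 0; i < neighbours.length; i++) {
    for (let j = i + 1; j < neighbours.length; j++) {
      kept.push([neighbours[i], neighbours[j]]);
    }
  }
  return kept;
}
```

For the A=B=C example, contracting B yields the single edge A---C; you could then export the result in a format Gephi reads, such as an edge-list CSV.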
I have a collection of lines in my diagram, and I also have a point. What I want is a collection of lines which will together form a polygon through an ordered traversal. I don't need an implementation or anything; all I want is for someone to direct me towards an algorithm I can use.
Similar problems have been asked, but their solutions won't work for me because:
one of the commonly asked problems is, given a polygon, to find whether a point lies inside it or not. This won't work for me because I don't have any polygons, only a collection of lines.
the final polygon can be non-convex, so simply drawing rays from the point in every direction and finding intersections won't work; I need something more advanced.
Sorry for all the confusion; see this for clarity: https://ibb.co/nzbxGF
You need to store your collection of segments inside a suitable data structure. Namely, the chosen data structure should support the concept of faces, as you're looking for a way to find the face in which a given point resides. One such data structure is the Doubly Connected Edge List.
The Doubly Connected Edge List is a data structure that holds a subdivision of the plane. In particular, it contains a record for each face, edge, and vertex of the subdivision. It also supports walking around a face counterclockwise, which allows you to know which segments bound a particular face (such as the face containing the point you're searching for).
You can use a Sweep Line Algorithm to construct the Doubly Connected Edge List in O(n log n + k log n), where n is the number of segments and k is the complexity of the resulting subdivision (the total number of vertices, edges, and faces). You don't have to code it from scratch, as this data structure and its construction algorithm have already been implemented many times (you can use CGAL's DCEL implementation, for example).
With the Doubly Connected Edge List data structure you can solve your problem by applying the approach you suggested in your post: given an input point, solve the Point in Polygon problem for each face in the Doubly Connected Edge List, and return the set of segments bounding the face you've found. However, this approach, while it might be good enough for fairly simple subdivisions, is not efficient for complex ones.
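For the per-face check, a standard ray-casting Point in Polygon test suffices. A minimal sketch (assuming each face's boundary is available as an ordered vertex list, which the DCEL's face walk gives you):

```typescript
// Standard ray-casting point-in-polygon test: cast a horizontal ray from
// the query point and count how many polygon edges it crosses; an odd
// count means the point is inside.
type Point = [number, number];

function pointInPolygon(p: Point, poly: Point[]): boolean {
  let inside = false;
  const [x, y] = p;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const [xi, yi] = poly[i];
    const [xj, yj] = poly[j];
    const crosses =
      (yi > y) !== (yj > y) &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}
```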
A better approach is to transform the Doubly Connected Edge List into a data structure specialized in point-location queries: the Trapezoidal Map. This data structure can be constructed in O(n log n) expected time, and for any query point the expected search time is O(log n). As with the Doubly Connected Edge List, you don't have to implement it yourself (again, you can use CGAL's Trapezoidal Map implementation).