Force two nodes to occupy the same rank in Graphviz?

Using ruby-graphviz, I've created a graph that looks like this (border added to emphasize rendering boundaries):
What I really want is for A and K to line up together at the top (or left, if rankdir="LR"). So I added an invisible node (call it X), and added invisible edges from X to A and K. And here's what I got:
X, XA, and XK have no labels, and style set to 'invis'.
X has height, width, and margin set to 0, and fixedsize set to true.
XA and XK have minlen, len, and penwidth set to 0.
But there's still that empty space at the top. Is there any way to get rid of it, short of cropping after the fact?

You do not need invisible nodes to achieve this.
This is the dot syntax to force the same rank for two nodes:
{rank=same; A; K;}
This is called a subgraph.
I don't know ruby-graphviz, so I'm not sure how to create a subgraph with it, but there is an example on GitHub:
c2 = g.subgraph { |c|
  c[:rank => "same"]
  c.mysite[:label => "\nexample.com\n ", :shape => "component", :fontname => "Arial"]
  c.dotgraph[:label => "\ndotgraph.net\n ", :shape => "component", :fontname => "Arial"]
}

Related

How to draw a spline curve between 2 points with a control point in graphviz?

I would like to create a spline curve between the blue point and the green point with the red point as a control point.
The Graphviz documentation says:
the splines attribute, of type bool or string, is valid on Graphs
splineType, with the pattern spline ( ';' spline )*, is a valid type for the pos attribute
the pos attribute is valid on Edges and Nodes
I tried this graph:
graph G{
  layout ="neato" outputorder="edgesfirst" splines="true"
  a[shape="point" pos="0,0!" color="blue"]
  b[shape="point" pos="1,0!" color="green"]
  c[shape="point" pos=" 0.5,0.5!" color="red"]
  a -- b [pos="e,1,0 s,0,0 0.5,0.5!"]
}
then, on Windows 10 PowerShell:
neato.exe -Tsvg .\spline.dot > .\spline.svg
with
neato - graphviz version 2.49.3 (20211023.0002)
result
What is the proper way of achieving this?
Thanks.
Close, but ...
! notation seems to apply only to nodes, not edges (poorly documented, but see https://graphviz.org/docs/attr-types/point/ and https://graphviz.org/docs/attrs/pin/)
by default, pos values are in points (1/72 of an inch), so your values are far too small
neato -n2 will use the node & edge position values you provide (https://graphviz.org/faq/#FaqDotWithCoords)
all edges (not just curves) are drawn as cubic B-splines, defined by 1 + 3n points, n an integer >= 1 (https://graphviz.org/docs/attr-types/splineType/) (i.e. 4 or 7 or 10 or ...)
So:
graph G{
  // default units for pos values are points (72 per inch)
  // multiplied by 100 out of laziness
  // ! only applies to nodes, not edges
  layout ="neato"
  outputorder="edgesfirst" // why???
  splines="true"
  a[shape="point" pos="0,0!" color="blue"]
  b[shape="point" pos="100,0!" color="green"]
  c[shape="point" pos="50,50!" color="red"]
  // "s" arrowhead must precede "e" arrowhead, so swapped them
  // BUT, "--" says non-directed graph, NO arrowheads, so they are deleted
  // also need 4, 7, 10, ... points NOT including arrowhead points
  // so added points (just guesses)
  a -- b [pos="0,0 30,66 70,66 100,0"]
}
Gives:
Whew

Porting d3-voronoi to d3-delaunay

What I want to do is replicate a data-structure produced by processing the output from d3-voronoi, but by using d3-delaunay instead.
The data-structure in question is the one produced by this makeMesh function:
function makeMesh(pts, extent) {
  extent = extent || defaultExtent;
  var vor = voronoi(pts, extent);
  var vxs = [];
  var vxids = {};
  var adj = [];
  var edges = [];
  var tris = [];
  for (var i = 0; i < vor.edges.length; i++) {
    var e = vor.edges[i];
    if (e == undefined) continue;
    var e0 = vxids[e[0]];
    var e1 = vxids[e[1]];
    if (e0 == undefined) {
      e0 = vxs.length;
      vxids[e[0]] = e0;
      vxs.push(e[0]);
    }
    if (e1 == undefined) {
      e1 = vxs.length;
      vxids[e[1]] = e1;
      vxs.push(e[1]);
    }
    adj[e0] = adj[e0] || [];
    adj[e0].push(e1);
    adj[e1] = adj[e1] || [];
    adj[e1].push(e0);
    edges.push([e0, e1, e.left, e.right]);
    tris[e0] = tris[e0] || [];
    if (!tris[e0].includes(e.left)) tris[e0].push(e.left);
    if (e.right && !tris[e0].includes(e.right)) tris[e0].push(e.right);
    tris[e1] = tris[e1] || [];
    if (!tris[e1].includes(e.left)) tris[e1].push(e.left);
    if (e.right && !tris[e1].includes(e.right)) tris[e1].push(e.right);
  }
  var mesh = {
    pts: pts,
    vxs: vxs,
    adj: adj,
    tris: tris,
    edges: edges,
    extent: extent
  }
  mesh.map = function (f) {
    var mapped = vxs.map(f);
    mapped.mesh = mesh;
    return mapped;
  }
  return mesh;
}
I've been trying to solve this for a while now and have finally made some progress here on observablehq:
https://observablehq.com/#folcon/original-code-by-martin-oleary-mewo2
I'm assessing how well it works by comparing the rendered images:
I want to produce these smooth colour transitions, which requires a correct mapping between vertices, heightmap and triangles.
Attempt 1:
Well, I got something rendered; if my approach below doesn't work, it might be instructive to come back to this.
Attempt 2:
This one I feel is a lot better: I don't appear to be getting extra triangles (no black ones visible), but the issue appears to be that I'm rendering in the wrong order. I was attempting to start from the leftmost point and then walk around the cell edges; this seems like the right idea, but the ordering is wrong...
I've dug into Delaunator's guide to its data structures, and at this point I feel like I'm pretty close, but missing something obvious.
Useful Notes / Assumptions:
The mesh uses the edges of the cells, not the triangles, for "edges": if you look at adj (adjacencies), no entry ever has more than 3 values, which is consistent with using the cells, since each cell vertex has no more than 3 neighbours.
Given that an edge is an edge of a cell, the left and the right of the edge should therefore be the two cells that the edge sits between.
Hopefully that's clear.
To answer #thclark:
The Output Data Structure:
{
  pts: pts,
  vxs: vxs,
  adj: adj,
  tris: tris,
  edges: edges,
  extent: extent
}
pts is the original points array, which is a list of pairs of [x, y] coordinates.
vxs is the vertices of the cell as pairs of [x, y] coordinates. The points here are unique, and their index in the array is the authoritative id for that vertex.
I will use one of redblobgame's images to clarify:
The red dots in this image are the original points; the blue ones are the vertices of the cells.
Each entry in edges is comprised of 4 values: the first two are the indexes of the vertices that make up an edge of a cell, which in the image above are in white. The second two are the left and right cells on either side of that edge, which I've illustrated in an image above, but hopefully this will be clearer:
adj is a mapping from a vertex index to the other vertex indexes connected to it; because we're using the cells, no entry in adj should have more than 3 indexes. As you can see in the image below, each vertex (red) can only have 3 neighbours (green).
tris is an array indexed by vertex index. The original data-structure does not always have complete triangles, but each entry holds the left and right cells of the edges that meet at that vertex.
You can see in the above image that, by combining the left and right cells of three edges, a triangle is described.
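Read off the makeMesh code above, the output shape is roughly the following (a TypeScript sketch only; the exact extent type depends on defaultExtent, which isn't shown, so it is left loose here):

type Point = [number, number]; // [x, y]

interface Mesh {
  pts: Point[];      // the original seed points, one per cell
  vxs: Point[];      // unique cell vertices; the array index is the vertex id
  adj: number[][];   // vertex id -> ids of the vertices connected to it (at most 3)
  tris: Point[][];   // vertex id -> the seed points (cells) touching that vertex
  edges: [number, number, Point, Point | undefined][]; // [v0, v1, left cell, right cell]
  extent: unknown;   // whatever extent / defaultExtent is
  map: <T>(f: (v: Point) => T) => T[] & { mesh: Mesh };
}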
The Input Data Structure:
Delaunator's data-structures guide has a lot more detail, but a quick overview is this:
Delaunator takes an array of [x, y] coordinates of length N and makes a points array of length 2N, where each coordinate in the original [x, y] array now sits at its original index * 2 for the x coord, and at index * 2 + 1 for the y.
For example, the coords [[1, 2], [3, 4], [5, 6]] would become: [1, 2, 3, 4, 5, 6].
It then builds a delaunay triangulation, where each triangle edge is comprised of two halfedge's.
A halfedge takes a bi-directional edge and splits it into two directional edges.
So a triangle made up of 3 edges, now has 6 halfedges like so:
It also constructs two arrays:
delaunay.triangles which takes a halfedge index and returns the point id (an index into the points array described previously) where the halfedge begins.
delaunay.halfedges which takes a halfedge index and returns the opposite halfedge in the adjacent triangle:
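To make the index arithmetic concrete, here is a small TypeScript sketch (the two helper functions follow the conventions described in the Delaunator data-structures guide; the sample coordinates are arbitrary):

import Delaunator from "delaunator";

const coords: [number, number][] = [[1, 2], [3, 4], [5, 6], [2, 5]];
const delaunay = Delaunator.from(coords); // internally flattened to [1, 2, 3, 4, 5, 6, 2, 5]

// Each triangle t owns the three halfedges 3t, 3t+1 and 3t+2.
const triangleOfEdge = (e: number) => Math.floor(e / 3);
const nextHalfedge = (e: number) => (e % 3 === 2 ? e - 2 : e + 1);

// delaunay.triangles[e] is the id of the point where halfedge e begins; the point
// where it ends is the start of the next halfedge of the same triangle.
function edgeEndpoints(e: number): [number, number] {
  return [delaunay.triangles[e], delaunay.triangles[nextHalfedge(e)]];
}

// delaunay.halfedges[e] is the opposite halfedge in the adjacent triangle, or -1 on the hull.
const oppositeOfFirstEdge = delaunay.halfedges[0];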
Hopefully that's sufficient detail?
I've tried to make the setup runnable, so if someone wants to poke around with it to test out a quick hypothesis, they can do so easily, just edit the notebook or fork it.
I've also added at the bottom a more complete example that's purely focused on the "physical" things the map derives from the mesh+heightmap, which is basically the coastline and rivers.

Terrain Quadtree LOD can't detect where T-junctions appear that generate cracks

I implemented a Quadtree that stores a single mesh of data with different LOD levels, where each group of four children has its own vertices and the corresponding indexes of the four corners (x0, y0, x1, y1, from top to bottom) that make up its LOD level.
QuadTree.prototype.fillTree = function(currentNode, depth, currentBox, currentIndex) {
  if(depth === 0 || !currentBox.checkPartition())
    return;
  let node = {
    "vertices" : [],
    "lines" : [],
    "children" : [],
    "box" : currentBox
  };
  currentNode.push(node);
  currentBox.getVerticesCoord(this.cols, currentIndex).forEach((coord) => {
    this.getPlaneVertices(node.vertices, coord);
  });
  currentBox.getLinesCoord(this.cols, currentIndex).forEach((coord) => {
    this.getPlaneVertices(node.lines, coord);
  });
  currentBox.getPartitions().forEach((box, index) => {
    this.fillTree(node["children"], depth-1, box, index);
  });
};
I also have a checkFrustumBoundaries method, where I calculate the minimum distance between a LOD level and the camera location [0, 0, 0], and also check whether it's visible by testing that all of its coordinates, after being multiplied by the projection matrix and divided by w, fall within the [-1, 1] range.
Finally, I have the method that selects the LOD levels needed for the current state: it finds the minimum distance to the camera origin and the corresponding depth, checks whether each of the 4 children is within the visible zone, and stores the visible ones in an array that will be rendered.
Note that I want children to inherit the depth of the sibling with the lowest complexity level if it is ready to be rendered, so I will always have a four-square block with the same LOD level.
QuadTree.prototype.readComplexity = function(projection, viewCamera, currentNode, currentIndex, currentDepth = 1) {
  let childrenToBeRendered = [];
  let minDepth = this.depth;
  currentNode.children.forEach((child, index) => {
    let frustumBoundaries = this.checkFrustumBoundaries(child.vertices, projection, viewCamera);
    if(frustumBoundaries.withinFrustum) {
      // Section is 1.0/this.depth
      let depth = this.depth - Math.ceil(frustumBoundaries.minDistance/this.section);
      minDepth = Math.min(minDepth, depth);
      childrenToBeRendered.push({
        "child" : child,
        "index" : index
      });
    }
  });
  childrenToBeRendered.forEach((child) => {
    if(minDepth <= currentDepth) {
      // Even complexity, or the other quarters inherit the lowest complexity level of their siblings
      this.fetchLines(projection, viewCamera, child.child, currentDepth, child.index);
    } else if(minDepth > currentDepth) {
      // Needs more complexity because it's too near to the camera origin
      this.readComplexity(projection, viewCamera, child.child, child.index, currentDepth+1);
    }
  });
};
But here I stumbled on the biggest problem: it appears T-junctions are causing cracks between different LOD levels:
I figured out that I could remove the cracks by disabling the vertices that make up the T-junction: using a stack, I append the 2 vertices that make a half diamond, then use the following child, taking the vertex whose index differs from the previous two used. I cycle from the top-left to top-right, bottom-right, bottom-left, with a flag in case there is a LOD difference between the top-right child and its left neighbor.
But before doing that, I need to know whether the child's neighbors have less or equal complexity. For example, if the child is at the top-left, I need to know whether there is a LOD level to its left and above it that takes four times more space and therefore has less complexity.
How can I manage that? How can I reach neighbors that may be located at different quad-tree levels? If I try to use the node's box to generate the two neighbors' boxes and calculate their depth, I can't compare them with the node, because during the selection process the neighbor may have inherited its sibling's depth, so the comparison would be wrong. But if I choose not to use the rule of four, I can't use the diamond-selection tactic I mentioned above.

Optimal algorithm to solve this maze?

I'm currently stuck on a challenge our lecturer gave us over at our university. We've been looking at the most popular pathfinding algorithms, such as Dijkstra and A*. However, I think this challenge requires something else, and it has me stumped.
A visual representation of the maze that needs solving:
Color legend
Blue = starting node
Gray = path
Green = destination node
The way it's supposed to be solved is that when a movement is made, it has to continue until it collides with either the edge of the maze or an obstacle (the black borders). It also needs to be solved in the smallest number of movements possible (in this case 7).
My question: could someone push me in the right direction on which algorithm to look at? I think Dijkstra/A* is not the way to go, considering the shortest path is not always the correct path given the assignment.
It is still solvable with Dijkstra/A*; what needs to change is the configuration of neighbors.
A little background, first:
Dijkstra and A* are general path finding algorithms formulated on graphs. When, instead of a graph, we have a character moving on a grid, it might not be that obvious where the graph is. But it's still there, and one way to construct a graph would be the following:
the nodes of the graph correspond to the cells of the grid
there are edges between nodes corresponding to neighboring cells.
Actually, in most problems involving some configurations and transitions between them, it is possible to construct a corresponding graph, and apply Dijkstra/A*. Thus, it is also possible to tackle problems such as sliding puzzle, rubik's cube, etc., which apparently differ significantly from a character moving on a grid. But they have states, and transitions between states, thus it is possible to try graph search methods (these methods, especially the uninformed ones such as Dijkstra's algorithm, might not always be feasible due to the large search space, but in principle it's possible to apply them).
In the problem that you mentioned, the graph would not differ much from the one with typical character movements:
the nodes can still be the cells of the grid
there will now be edges from a node to nodes corresponding to valid movements (ending near a border or an obstacle), which, in this case, will not always coincide with the four spatial immediate neighbors of the grid cell.
As pointed out by Tamas Hegedus in the comments section, it's not obvious what heuristic function should be chosen if A* is used.
The standard heuristics based on Manhattan or Euclidean distance would not be valid here, as they might over-estimate the distance to the target.
One valid heuristic would be id(row != destination_row) + id(col != destination_col), where id is the identity function, with id(false) = 0 and id(true) = 1.
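For instance (a minimal TypeScript sketch, assuming cells are identified by row and column):

// At least one more straight move is needed for each axis that still differs
// from the destination, so this never over-estimates the remaining moves.
function heuristic(row: number, col: number, destRow: number, destCol: number): number {
  return (row !== destRow ? 1 : 0) + (col !== destCol ? 1 : 0);
}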
Dijkstra/A* are fine. What you need is to carefully think about what you consider graph nodes and graph edges.
Standing in the blue cell (let's call it 5,5), you have three valid moves:
move one cell to the right (to 6,5)
move four cells to the left (to 1,5)
move five cells up (to 5,1)
Note that you can't go from 5,5 to 4,5 or 5,4. Apply the same reasoning to new nodes (e.g. from 5,1 you can go to 1,1, 10,1 and 5,5) and you will get a graph on which you can run your Dijkstra/A*.
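Here is a minimal sketch of that neighbor generation plus a search over it (the grid representation, the coordinate convention, and the plain BFS, which is equivalent to Dijkstra here because every move costs 1, are my own assumptions rather than anything from the question):

type Cell = [number, number]; // [column, row]
const DIRS: Cell[] = [[1, 0], [-1, 0], [0, 1], [0, -1]];

// From (x, y), slide in each direction until the next cell is a wall or out of bounds.
function slideNeighbors(walkable: boolean[][], x: number, y: number): Cell[] {
  const result: Cell[] = [];
  for (const [dx, dy] of DIRS) {
    let cx = x, cy = y;
    while (walkable[cy + dy] !== undefined && walkable[cy + dy][cx + dx] === true) {
      cx += dx;
      cy += dy;
    }
    if (cx !== x || cy !== y) result.push([cx, cy]);
  }
  return result;
}

// Every move has cost 1, so breadth-first search already gives the minimum number of moves.
function fewestMoves(walkable: boolean[][], start: Cell, goal: Cell): number {
  const key = (c: Cell) => `${c[0]},${c[1]}`;
  const dist = new Map<string, number>([[key(start), 0]]);
  const queue: Cell[] = [start];
  while (queue.length > 0) {
    const cur = queue.shift()!;
    if (cur[0] === goal[0] && cur[1] === goal[1]) return dist.get(key(cur))!;
    for (const next of slideNeighbors(walkable, cur[0], cur[1])) {
      if (!dist.has(key(next))) {
        dist.set(key(next), dist.get(key(cur))! + 1);
        queue.push(next);
      }
    }
  }
  return -1; // destination not reachable
}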
You need to evaluate every possible move and take the move that results in the minimum distance. Something like the following:
int minDistance(int x, int y, int prevX, int prevY, int distance) {
  if (CollisionWithBorder(x, y)) // can't take this path
    return int.MAX_VALUE;
  if (NoCollisionWithBorder(x, y)) // it's OK to take this path
  {
    // update the distance only when there is a long change in direction
    if (LongDirectionChange(x, y, prevX, prevY))
      distance = distance + 1;
  }
  if (ReachedDestination(x, y)) // we're done
    return distance;
  // find the path with the minimum distance
  return min(minDistance(x, y + 1, x, y, distance), // go right
             minDistance(x + 1, y, x, y, distance), // go up
             minDistance(x - 1, y, x, y, distance), // go down
             minDistance(x, y - 1, x, y, distance)); // go left
}

bool LongDirectionChange(x, y, prevX, prevY) {
  if ((y-2 == prevY && x == prevX) || (y == prevY && x-2 == prevX))
    return true;
  return false;
}
This is assuming diagonal moves are not allowed. If they are, add them to the min() call:
minDistance(x + 1, y + 1, x, y, distance), // go up diagonally to right
minDistance(x - 1, y - 1, x, y, distance), // go down diagonally to left
minDistance(x + 1, y - 1, x, y, distance), // go up diagonally to left
minDistance(x - 1, y + 1, x, y, distance), // go down diagonally to right

Ordering coordinates from top left to bottom right

How can I go about trying to order the points of an irregular array from top left to bottom right, such as in the image below?
Methods I've considered are:
calculate the distance of each point from the top left of the image (Pythagoras's theorem), but apply some kind of weighting to the Y coordinate in an attempt to prioritise points on the same 'row', e.g. distance = SQRT((x * x) + (weighting * (y * y))) (sketched below)
sort the points into logical rows, then sort each row.
Part of the difficulty is that I do not know how many rows and columns will be present in the image coupled with the irregularity of the array of points. Any advice would be greatly appreciated.
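For what it's worth, method 1 boils down to a one-line comparator; here is a rough TypeScript sketch (the weighting value is purely illustrative and would need tuning per image):

interface Pt { x: number; y: number; }

// Sort by weighted distance from the image's top-left corner (0, 0);
// a larger weighting pushes points on lower rows later in the ordering.
const weighting = 5; // illustrative value only
const byWeightedDistance = (points: Pt[]): Pt[] =>
  [...points].sort(
    (a, b) =>
      Math.sqrt(a.x * a.x + weighting * a.y * a.y) -
      Math.sqrt(b.x * b.x + weighting * b.y * b.y)
  );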
Even though the question is a bit older, I recently had a similar problem when calibrating a camera.
The algorithm is quite simple and based on this paper:
Find the top left point: min(x+y)
Find the top right point: max(x-y)
Create a straight line from the points.
Calculate the distance of all points to the line
If it is smaller than the radius of the circle (or a threshold): point is in the top line.
Otherwise: point is in the rest of the block.
Sort points of the top line by x value and save.
Repeat until there are no points left.
My python implementation looks like this:
#detect the keypoints
detector = cv2.SimpleBlobDetector_create(params)
keypoints = detector.detect(img)
img_with_keypoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0, 0, 255),
cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
points = []
keypoints_to_search = keypoints[:]
while len(keypoints_to_search) > 0:
a = sorted(keypoints_to_search, key=lambda p: (p.pt[0]) + (p.pt[1]))[0] # find upper left point
b = sorted(keypoints_to_search, key=lambda p: (p.pt[0]) - (p.pt[1]))[-1] # find upper right point
cv2.line(img_with_keypoints, (int(a.pt[0]), int(a.pt[1])), (int(b.pt[0]), int(b.pt[1])), (255, 0, 0), 1)
# convert opencv keypoint to numpy 3d point
a = np.array([a.pt[0], a.pt[1], 0])
b = np.array([b.pt[0], b.pt[1], 0])
row_points = []
remaining_points = []
for k in keypoints_to_search:
p = np.array([k.pt[0], k.pt[1], 0])
d = k.size # diameter of the keypoint (might be a theshold)
dist = np.linalg.norm(np.cross(np.subtract(p, a), np.subtract(b, a))) / np.linalg.norm(b) # distance between keypoint and line a->b
if d/2 > dist:
row_points.append(k)
else:
remaining_points.append(k)
points.extend(sorted(row_points, key=lambda h: h.pt[0]))
keypoints_to_search = remaining_points
Jumping on this old thread because I just dealt with the same thing: sorting a sloppily aligned grid of placed objects by left-to-right, top to bottom location. The drawing at the top in the original post sums it up perfectly, except that this solution supports rows with varying numbers of nodes.
S. Vogt's script above was super helpful (and the script below is entirely based on his/hers), but my conditions are narrower. Vogt's solution accommodates a grid that may be tilted from the horizontal axis. I assume no tilting, so I don't need to compare distances from a potentially tilted top line, but rather from a single point's y value.
TypeScript below:
interface Node {x: number; y: number; width: number; height: number;}

const sortedNodes = (nodeArray: Node[]) => {
  let sortedNodes: Node[] = []; // this is the return value
  let availableNodes = [...nodeArray]; // make copy of input array
  while (availableNodes.length > 0) {
    // find y value of topmost node in availableNodes. (Change this to a reduce if you want.)
    let minY = Number.MAX_SAFE_INTEGER;
    for (const node of availableNodes) {
      minY = Math.min(minY, node.y);
    }
    // find nodes in top row: assume a node is in the top row when its distance from minY
    // is less than its height
    const topRow: Node[] = [];
    const otherRows: Node[] = [];
    for (const node of availableNodes) {
      if (Math.abs(minY - node.y) <= node.height) {
        topRow.push(node);
      } else {
        otherRows.push(node);
      }
    }
    topRow.sort((a, b) => a.x - b.x); // we have the top row: sort it by x
    sortedNodes = [...sortedNodes, ...topRow]; // append nodes in row to sorted nodes
    availableNodes = [...otherRows]; // update available nodes to exclude handled rows
  }
  return sortedNodes;
};
The above assumes that all node heights are the same. If you have some nodes that are much taller than others, get the minimum node height of all nodes and use it instead of the iterated "node.height" value. I.e., you would change this line of the script above to use the minimum height of all nodes rather than the iterated one.
if (Math.abs(minY - node.y) <= node.height)
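That is, something like this (a small sketch; minHeight refers to the nodeArray parameter of sortedNodes above):

// computed once from the input array, before the while loop:
const minHeight = Math.min(...nodeArray.map((node) => node.height));
// ...so the row test quoted above becomes:
if (Math.abs(minY - node.y) <= minHeight)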
I propose the following idea:
1. Count the points (p).
2. For each point, round its x and y coordinates down to some number, like x = int(x/n)*n, y = int(y/m)*m for some n, m.
3. If m, n are too big, the number of distinct rounded points will drop. Determine m, n iteratively so that the number of points p is just preserved.
Starting values could be in alignment with max(x) - min(x). For the search, employ a binary search. X and Y scaling would be independent of each other.
In plain words, this would pin the individual points to grid points by stretching or shrinking the grid distances, until points share at most one coordinate (X or Y) but no 2 points overlap. You could call that classifying as well.
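A rough TypeScript sketch of that idea (simplified: it just halves the cell sizes until no two points share a grid cell, rather than binary-searching n and m independently for the largest values that preserve the count, as proposed above):

interface Pt { x: number; y: number; }

// Snap points to an n-by-m grid and order them row-major; shrink the cells
// until no two points land in the same grid cell.
function gridOrder(points: Pt[]): Pt[] {
  let n = (Math.max(...points.map((p) => p.x)) - Math.min(...points.map((p) => p.x))) || 1;
  let m = (Math.max(...points.map((p) => p.y)) - Math.min(...points.map((p) => p.y))) || 1;
  const snap = (p: Pt) => ({ gx: Math.floor(p.x / n), gy: Math.floor(p.y / m) });
  const collides = () => {
    const cells = new Set(points.map((p) => { const s = snap(p); return `${s.gx},${s.gy}`; }));
    return cells.size < points.length; // two points rounded into the same cell
  };
  while (collides()) { // cells too coarse: shrink them
    n /= 2;
    m /= 2;
  }
  // row-major order: by snapped y first, then snapped x
  return [...points].sort((a, b) => {
    const sa = snap(a), sb = snap(b);
    return sa.gy - sb.gy || sa.gx - sb.gx;
  });
}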
