I have a square-planar lattice represented as an NxN grid Graph. Is there any way in JUNG to get the symmetric counterpart of a specific vertex, given the axis of symmetry? Example: 8 -> 0, 5 -> 3.
My goal is to get distinct pairs of nodes, since the pairs (4,1), (4,7), (4,3) and (4,5) are essentially the same, (1,3) is the same as (3,7), and so on. Perhaps some algorithm can be run on a matrix and then translated back to the Graph.
General graphs aren't particularly well suited to this sort of thing, because they have no built-in notion of rows, columns, symmetry about an axis, etc.; they're all about topology, not geometry.
If you really want something like this, you should either create a subtype of Graph that has the operations you want, and create an implementation to match, or just create the corresponding matrix (and a mapping from matrix locations to graph nodes) and do the operations on that matrix instead.
So far I have been able to write an algorithm that rotates a matrix 3 times and keeps track of the nodes at fixed indices. The same can be done for any type of Graph by using each node's visual coordinates instead of indices.
fun rotateMatrix(matrix: List<IntArray>): List<IntArray> =
    // rotate 90 degrees clockwise: result[x][y] = matrix[size - 1 - y][x]
    List(matrix.size) { x -> IntArray(matrix.size) { y -> matrix[matrix.size - 1 - y][x] } }

val reflections = mutableListOf<Pair<Int, Int>>()
(0..2).fold(mat) { current, _ ->
    val rotated = rotateMatrix(current)
    mat.forEachIndexed { x, row ->
        row.forEachIndexed { y, _ ->
            // pair each original label with the label now at the same position
            reflections.add(mat[x][y] to rotated[x][y])
        }
    }
    rotated
}
The result is a relation saying that (0, 2, 8, 6) are the "same", (1, 5, 3, 7) are the same, etc. The only thing left is to use this output to determine which pairs of nodes correspond to which reflective siblings.
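The same bookkeeping can be sketched in Python (used here only for brevity; this is not JUNG code): rotate the label matrix three times and union the labels that land on the same position into rotation-equivalence classes.

```python
def rotate(matrix):
    """Rotate a square matrix 90 degrees clockwise."""
    n = len(matrix)
    return [[matrix[n - 1 - j][i] for j in range(n)] for i in range(n)]

def rotation_orbits(n):
    """Group cell labels 0..n*n-1 into classes equivalent under rotation."""
    mat = [[x * n + y for y in range(n)] for x in range(n)]
    orbit = {i: {i} for i in range(n * n)}
    current = mat
    for _ in range(3):
        rotated = rotate(current)
        for x in range(n):
            for y in range(n):
                # merge the classes of the two labels sharing this position
                merged = orbit[mat[x][y]] | orbit[rotated[x][y]]
                for v in merged:
                    orbit[v] = merged
        current = rotated
    return {frozenset(s) for s in orbit.values()}

print(sorted(sorted(s) for s in rotation_orbits(3)))
# → [[0, 2, 6, 8], [1, 3, 5, 7], [4]]  (corners, edge midpoints, center)
```

Picking one representative per class, and one pair per class of pairs, then yields the distinct node pairs.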
I have a 2D matrix with boolean values, which is updated very frequently. I want to choose a 2D index {x, y} within the matrix and find the nearest element that is "true", without going through all the elements (the matrix is massive).
For example, if I have the matrix:
0000100
0100000
0000100
0100001
and I choose a coordinate {x1, y1} such as {4, 3}, I want returned the location of the closest "true" value, which in this case is {5, 3}. The distance between the elements is measured using the standard Pythagorean equation:
distance = sqrt(distX * distX + distY * distY) where distX = x1 - x and distY = y1 - y.
I can go through all the elements in the matrix and keep a list of "true" values and select the one with the shortest distance result, but it's extremely inefficient. What algorithm can I use to reduce search time?
Details: The matrix size is 1920x1080, and around 25 queries will be made every frame. The entire matrix is updated every frame. I am trying to maintain a reasonable framerate, more than 7fps is enough.
If the matrix is updated every frame anyway, there is no need to build an auxiliary structure like a distance transform, Voronoi diagram, etc.
You can just run a search like BFS (breadth-first search) propagating outward from the query point. The only difference from a usual BFS is the Euclidean metric: generate (u, v) offset pairs ordered by u^2 + v^2, and for each pair check the symmetric points shifted by the (±u, ±v) and (±v, ±u) combinations (four points when u or v is zero, eight points otherwise).
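A minimal sketch of that offset-ordered search (Python for illustration; 0-based (row, col) indexing is assumed here, and the offset table is bounded by a hypothetical max_radius):

```python
from itertools import product

def nearest_true(grid, x, y, max_radius=32):
    """Return the (row, col) of the nearest true cell to (x, y), scanning
    candidate offsets in order of increasing Euclidean distance."""
    h, w = len(grid), len(grid[0])
    # In production, precompute this offset table once, not per query.
    offsets = sorted(product(range(-max_radius, max_radius + 1), repeat=2),
                     key=lambda d: d[0] * d[0] + d[1] * d[1])
    for du, dv in offsets:
        nx, ny = x + du, y + dv
        if 0 <= nx < h and 0 <= ny < w and grid[nx][ny]:
            return nx, ny
    return None  # nothing true within max_radius

grid = [[int(c) for c in row] for row in
        ["0000100", "0100000", "0000100", "0100001"]]
print(nearest_true(grid, 2, 3))  # → (2, 4), one step to the right
```

Since true values are usually dense in a 1920x1080 frame, the scan terminates after a few offsets in the common case.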
You could use a tree data structure like a quad-tree (see https://en.wikipedia.org/wiki/Quadtree) to store all locations with value "true". In this way it should be possible to quickly iterate over all "true" values in the neighborhood of a given location. Furthermore, the tree can be updated in logarithmic time, if the value of a location changes.
I have 3D Cartesian grid data that needs to be used to create a 3D regular mesh for an interpolation method. x, y & z are 3 vectors with the data points that form this grid. My question is: how can I efficiently give two indices to these points, say,
where c000 is indexed as point 1 at (1,1,1) and c100 is indexed as point 2 at (2,1,1) in (x,y,z)
coordinates, plus another index identifying the 8 points forming a cube? If I have a point C, I must retrieve the nearest 8 points for interpolation, i.e. for the points c000, c100, c110, c010, c001, c101, c111, c011 both a point index and a cube index. Since the available data is huge, the focus is on a fast implementation. Please give me some hints on how to proceed.
About the maths:
Identifying the cube that surrounds a point p requires a mapping
U ⊂ ℝ+**3 -> ℕ:
p' (= p - O_) -> hash_r(p');
"O_" being located at (min_x(G),min_y(G),min_z(G)) of the Grid G.
Along each axis, the cube numbering is trivial.
Given a compound cube number
n_ = (s,t,u)
and N_x, N_y, N_z being the size of your X_, Y_, Z_, a suitable hash would be
hash_n(n_) = s
| t * 2**(floor(log_2(N_x))+1)
| u * 2**(floor(log_2(N_x)) + floor(log_2(N_y)) + 2).
To calculate e.g. "s" for a point C, take
s = floor((C[0] - O_[0]) / a)
"a" being the edge length of the cubes.
About taking that to C++
Given you have enough space to allocate
(2**(floor(log_2(max(N_x, N_y, N_z))) + 1))**3
buckets, a std::unordered_map<hash_t,Cube> using that (perfect) hash would offer O(1) for finding the cube for a point p.
A less fancy std::map<hash_t,Cube>, using a comparison based on that hash, would offer O(log(N)) find complexity.
I have a bipartite graph that I would like to project onto 2 new graphs (e.g., using the Internet Movie Database network of actors and movies, projecting out the actor network and the movie network, with weights corresponding to the number of movies (actors) in common).
In regular matrix notation, one simply needs to square the adjacency matrix and ignore the diagonal (which is equal to the original graph outdegree). Presumably there are quicker edge-based algorithms for large sparse graphs, which is why I became interested in using Spark / GraphX with Hadoop and Map Reduce frameworks.
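Concretely, the matrix form looks like this (a toy NumPy illustration, not GraphX; nodes 0–1 are actors, 2–3 are movies):

```python
import numpy as np

# Symmetric bipartite adjacency: actor 0 is in movies 2 and 3; actor 1 in movie 3.
A = np.array([[0, 0, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 0, 0],
              [1, 1, 0, 0]])
A2 = A @ A                # counts paths of length 2 (shared neighbors)
np.fill_diagonal(A2, 0)   # drop the diagonal (node degrees)
# A2[0, 1] == 1: actors 0 and 1 share one movie (movie 3)
# A2[2, 3] == 1: movies 2 and 3 share one actor (actor 0)
print(A2)
```

The actor-actor block and the movie-movie block of A² are exactly the two projected graphs.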
So, how do I do this A^2 calculation in Spark GraphX?
Say I begin with this code:
import org.apache.spark.graphx._
val orig_graph = GraphLoader.edgeListFile(sc, "bipartite_network.dat")
All of the functions I see are joins on vertices, joins on edges, or maps over vertex attributes or edge attributes. How do I loop over all pairs of edges and create a new graph whose EDGES are based on the original vertex ids?
Here's some pseudocode:
for i = 1 to orig_graph.edges.count
  for j = i to orig_graph.edges.count
    var edge1 = orig_graph.edges[i]
    var edge2 = orig_graph.edges[j]
    if edge1.1 == edge2.1 then add new edge = (edge1.2, edge2.2)
    if edge1.1 == edge2.2 then add new edge = (edge1.2, edge2.1)
    if edge1.2 == edge2.1 then add new edge = (edge1.1, edge2.2)
    if edge1.2 == edge2.2 then add new edge = (edge1.1, edge2.1)
something like that.
Do I need to use Pregel and message passing, or just use the various graph.join functions?
See also http://apache-spark-developers-list.1001551.n3.nabble.com/GraphX-adjacency-matrix-example-td6579.html
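Absent the GraphX specifics, the edge-based idea (group edges on their shared endpoint instead of the O(m²) double loop over edge pairs) can be sketched in plain Python; in Spark this grouping would map to a groupBy/reduceByKey over the edge RDD:

```python
from collections import defaultdict
from itertools import combinations

def project(edges):
    """Project a bipartite edge list (left, right) onto the left node set:
    two left nodes get an edge weighted by their number of shared right nodes."""
    by_right = defaultdict(list)
    for left, right in edges:
        by_right[right].append(left)
    weights = defaultdict(int)
    for lefts in by_right.values():
        # every pair of left nodes sharing this right node gains weight 1
        for a, b in combinations(sorted(lefts), 2):
            weights[(a, b)] += 1
    return dict(weights)

edges = [("a1", "m1"), ("a2", "m1"), ("a1", "m2"), ("a2", "m2"), ("a3", "m2")]
print(project(edges))
# → {('a1', 'a2'): 2, ('a1', 'a3'): 1, ('a2', 'a3'): 1}
```

This touches each right node's neighbor list once, so the cost is driven by the squared degrees of the right side rather than m².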
I have a loosely connected graph. For every edge in this graph, I know the approximate distance d(v,w) between nodes v and w at positions p(v) and p(w) as a vector in R3, not only as a Euclidean distance. The error shall be small (let's say < 3%) and the first node is at <0,0,0>.
If there were no errors at all, I could calculate the node positions this way:
set p(first_node) = <0,0,0>
calculate_position(first_node)

calculate_position(v):
  for (v,w) in Edges:
    if p(w) is not set:
      set p(w) = p(v) + d(v,w)
      calculate_position(w)
  for (u,v) in Edges:
    if p(u) is not set:
      set p(u) = p(v) - d(u,v)
      calculate_position(u)
The distance errors are not all equal, but to keep things simple, assume the relative error (d(v,w) - d'(v,w))/E(v,w) is N(0,1)-normal-distributed. I want to minimize the sum of the squared errors
sum( ((p(v)-p(w)) - d(v,w) )^2/E(v,w)^2 ) for all edges
The graph may have a moderate number of nodes (> 100) but only a few connections between them, and it has been "prefiltered" (split into subgraphs wherever there is only one connection between them).
I have tried a simplistic "physical model" with Hooke's-law springs, but it's slow and unstable. Is there a better algorithm or heuristic for this kind of problem?
This looks like linear regression. Take error terms of the following form, i.e. without squares and split into separate coordinates:
(px(v) - px(w) - dx(v,w))/E(v,w)
(py(v) - py(w) - dy(v,w))/E(v,w)
(pz(v) - pz(w) - dz(v,w))/E(v,w)
If I understood you correctly, you are looking for values px(v), py(v) and pz(v) for all nodes v such that the sum of squares of the above terms is minimized.
You can do this by creating a matrix A and a vector b in the following way: every row corresponds to one equation of the above form, and every column of A corresponds to one variable, i.e. a single coordinate of a single node. For n vertices and m edges, the matrix A will have 3m rows (since you separate the coordinates) and 3n−3 columns (since the first node is fixed at px(0) = py(0) = pz(0) = 0).
The row for (px(v) - px(w) - dx(v,w))/E(v,w) would have an entry 1/E(v,w) in the column for px(v) and an entry -1/E(v,w) in the column for px(w). All other columns would be zero. The corresponding entry in the vector b would be dx(v,w)/E(v,w).
Now solve the normal equations (Aᵀ·A)x = Aᵀ·b, where Aᵀ denotes the transpose of A. The solution vector x contains the coordinates of your vertices. You can break this into three independent problems, one per coordinate direction, to keep the size of the linear system down.
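As a concrete sketch (Python/NumPy for illustration; one coordinate direction, node 0 fixed at the origin as described, and edges given as hypothetical (v, w, d, e) tuples with measured offset d ≈ p(w) − p(v) and error scale e):

```python
import numpy as np

def solve_positions(n, edges):
    """Weighted least-squares positions for one coordinate direction.
    Unknowns are p[1..n-1]; node 0 is fixed at 0 and gets no column."""
    a = np.zeros((len(edges), n - 1))
    b = np.zeros(len(edges))
    for row, (v, w, d, e) in enumerate(edges):
        # residual for this edge: (p[w] - p[v] - d) / e
        if v > 0:
            a[row, v - 1] = -1.0 / e
        if w > 0:
            a[row, w - 1] = 1.0 / e
        b[row] = d / e
    x, *_ = np.linalg.lstsq(a, b, rcond=None)  # solves the normal equations
    return np.concatenate([[0.0], x])

# three nodes on a line, with a slightly inconsistent long measurement
print(solve_positions(3, [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (0, 2, 2.2, 1.0)]))
```

np.linalg.lstsq is numerically preferable to forming Aᵀ·A explicitly; for a large sparse graph a sparse solver (e.g. scipy.sparse.linalg.lsqr) would be the natural substitute.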
In 3-D space I have an unordered set of, say, 6 points; something like this:
(A)*
(C)*
(E)*
(F)*
(B)*
(D)*
The points form a 3-D contour but they are unordered. For unordered I mean that they are stored in an
unorderedList = [A - B - C - D - E - F]
I just want to reorganize this list starting from an arbitrary location (let's say point A) and traversing the points clockwise or counter-clockwise. Something like this:
orderedList = [A - E - B - D - F - C]
or
orderedList = [A - C - F - D - B - E]
I'm trying to implement an algorithm that is as simple as possible, since the set of points in question corresponds to an N-ring neighborhood of each vertex on a mesh of ~420000 points, and I have to do this for each point on the mesh.
Some time ago there was a similar discussion regarding points in 2-D, but for now it's not clear for me how to go from this approach to my 3-D scenario.
The notion of "clockwise" or "counterclockwise" is not well-defined without an axis and orientation! (proof: What if you looked at those points from the other side of your monitor screen, or flipped them, for example!)
You must define an axis and orientation, and specify it as an additional input. Ways to specify it include:
a line (1x=2y=3z), using the right-hand rule
a (unit) vector (A_x, A_y, A_z), using the right-hand rule; this is the preferred way to do so
In order to determine the orientation, you have to look deeper at your problem: you must define an "up" and a "down" side of the mesh. Then, for each set of points, take the centroid (or another "inside" point) and construct a unit vector normal to the surface that points "up". (One way to do this is to find the least-squares-fit plane, take its two normal vectors through that point, and pick the one in the "up" direction.)
You will need to use any of the above suggestions to determine your axis. This will allow you to reformulate your problem as follows:
Inputs:
the set of points {P_i}
an axis, which we shall call "the z-axis" and treat as a unit vector centered on the centroid (or somewhere "inside") of the points
an orientation (e.g. counterclockwise) chosen by one of the above methods
Setup:
For all points, pick two mutually orthogonal unit vectors perpendicular to the axis, which we shall call "the x-axis" and "the y-axis". (Just rotate the z-axis unit vector 90 degrees in two directions; see http://en.wikipedia.org/wiki/Rotation_matrix#Basic_rotations )
Algorithm:
For each point P, project P onto the x-axis and the y-axis (using the dot product), then compute the angle with atan2 (http://en.wikipedia.org/wiki/Atan2).
Once you have the angles, you can just sort them.
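Those steps can be sketched as follows (Python for illustration; the choice of seed vector for building the local x/y axes via Gram-Schmidt is arbitrary):

```python
import math

def sort_around_normal(points, normal):
    """Order roughly coplanar 3-D points counterclockwise around `normal`
    (right-hand rule), using the centroid as the pivot."""
    cx = [sum(p[i] for p in points) / len(points) for i in range(3)]
    # normalize the axis
    nx, ny, nz = normal
    n = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / n, ny / n, nz / n
    # local x-axis: any vector not parallel to the axis, made orthogonal to it
    ax, ay, az = (1.0, 0.0, 0.0) if abs(nx) < 0.9 else (0.0, 1.0, 0.0)
    d = ax * nx + ay * ny + az * nz
    ux, uy, uz = ax - d * nx, ay - d * ny, az - d * nz
    u = math.sqrt(ux * ux + uy * uy + uz * uz)
    ux, uy, uz = ux / u, uy / u, uz / u
    # local y-axis: normal × x-axis, so atan2 increases counterclockwise
    vx, vy, vz = ny * uz - nz * uy, nz * ux - nx * uz, nx * uy - ny * ux

    def angle(p):
        px, py, pz = p[0] - cx[0], p[1] - cx[1], p[2] - cx[2]
        return math.atan2(px * vx + py * vy + pz * vz,
                          px * ux + py * uy + pz * uz)

    return sorted(points, key=angle)
```

For the ~420000-vertex use case, the two axis vectors can be computed once per vertex neighborhood and reused for every point in it.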
I can't vouch for the efficiency of this code, but it works, and you can optimize parts of it as needed; I'm just not good at that.
The code is in C#, using the System collection classes and LINQ.
Vector3 is a class with floats x, y, z and static vector-math functions.
Node is a class with a Vector3 field called pos.
//Sort nodes with positions in 3d space.
//Assuming the points form a convex shape.
//Assuming points are on a single plane (or close to it).
public List<Node> sortVerticies( Vector3 normal, List<Node> nodes ) {
Vector3 first = nodes[0].pos;
//Sort by distance from an arbitrary point to get 2 adjacent points.
List<Node> temp = nodes.OrderBy(n => Vector3.Distance(n.pos, first ) ).ToList();
//Create a vector from the 2 adjacent points,
//this will be used to sort all points, except the first, by the angle to this vector.
//Since the shape is convex, angle will not exceed 180 degrees, resulting in a proper sort.
Vector3 referenceVec = (temp[1].pos - first);
//Sort by angle to the reference vector, but we are still missing the first one.
List<Node> results = temp.Skip(1).OrderBy(n => Vector3.Angle(referenceVec, n.pos - first)).ToList();
//insert the first one, at index 0.
results.Insert(0,nodes[0]);
//Now that it is sorted, check whether we got the direction right; if not, reverse the list.
//We compare the given normal with the cross product of the first 3 points.
//If the magnitude of the sum of the (unit) normal and the unit cross product is less than Sqrt(2), the angle between them is more than 90 degrees.
if ( (Vector3.Cross( results[1].pos-results[0].pos, results[2].pos - results[0].pos ).normalized + normal.normalized).magnitude < 1.414f ) {
results.Reverse();
}
return results;
}