I have a long list of lines in (possibly) random order. So basically:
struct Line
{
    Vector StartPos;
    Vector EndPos;
};
Now I'm looking for an efficient way to sort these lines into spans, i.e. if line A's StartPos matches line B's EndPos, line A gets moved into the list immediately after line B. If nothing matches, the line goes to the end of the list to start a new span.
Right now I'm doing it brute force: I set a flag if anything changed, and if anything changed, I sort again. This produces a gigantic number of iterations. Is there any faster way to optimize this so that I could conceivably keep the work down to something like listsize^2?
If no two lines start or end at the same point, you can use a dictionary to reduce the lookups. Something like:
public class Line
{
    public Point StartPos;
    public Point EndPos;
    public bool isUsed = false;
};
and then (1) create a dictionary where the key is a line's StartPos and the value is the index of that element in your list, and (2) for each element of the list, follow the chain using the dictionary. Something like:
List<List<Line>> result = new List<List<Line>>();
Dictionary<Point, int> dic = new Dictionary<Point, int>();
// map each line's start point to its index in the list
for (int kk = 0; kk < mylines.Count; kk++)
{
    dic[mylines[kk].StartPos] = kk;
}
for (int kk = 0; kk < mylines.Count; kk++)
{
    if (!mylines[kk].isUsed)
    {
        var orderline = new List<Line>();
        mylines[kk].isUsed = true;
        orderline.Add(mylines[kk]);
        int mm = kk;
        // follow the chain: the next line starts where the current one ends
        while (dic.TryGetValue(mylines[mm].EndPos, out int next)
               && !mylines[next].isUsed)
        {
            mm = next;
            mylines[mm].isUsed = true;
            orderline.Add(mylines[mm]);
        }
        result.Add(orderline);
    }
}
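One caveat: the dictionary lookup relies on the key type having value equality. If Point is your own type rather than a built-in struct such as System.Drawing.Point, it needs Equals and GetHashCode overrides; a minimal sketch, assuming two points are equal exactly when their coordinates match:
public struct Point : IEquatable<Point>
{
    public double X, Y;
    public Point(double x, double y) { X = x; Y = y; }
    // value equality so Dictionary<Point, int> finds matching endpoints
    public bool Equals(Point other) => X == other.X && Y == other.Y;
    public override bool Equals(object obj) => obj is Point p && Equals(p);
    public override int GetHashCode() => (X, Y).GetHashCode();
}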
We are given n points in 3D space, and we need to count all the points that are strictly less than at least one other point in the set,
i.e.
x1 < x2 and y1 < y2 and z1 < z2
so (x1,y1,z1) would be one such point.
For example, given the points
1 4 2
4 3 2
2 5 3
(1,4,2) < (2,5,3)
So the answer for the above case is the count of such points, i.e. 1.
I know this can be solved with an O(n^2) algorithm, but I need something faster. I tried sorting along one dimension and then searching only over the part with greater keys, but it's still O(n^2) in the worst case.
What is the efficient way to do this?
There is a way to optimize your search that may be faster than O(n^2) - I would welcome counterexample input.
Keep three lists of the indexes of the points, sorted by x, y and z respectively. Make a fourth list associating each point with its place in each of the lists (indexes in the code below; e.g., indexes[0] = [5,124,789] would mean the first point is 5th in the x-sorted list, 124th in the y-sorted list, and 789th in the z-sorted list).
Now iterate over the points - pick the list where the point is ranked highest and test the point against the higher-indexed points in that list, exiting early if the point is strictly less than one of them. If a point is low in all three lists, the likelihood of finding a strictly greater point is higher; otherwise, a higher place in one of the lists means fewer iterations.
JavaScript code:
function strictlyLessThan(p1,p2){
  return p1[0] < p2[0] && p1[1] < p2[1] && p1[2] < p2[2];
}
// iterations
var it = 0;
function f(ps){
  var res = 0,
      indexes = new Array(ps.length);
  // sort by x
  var sortedX =
    ps.map(function(x,i){ return i; })
      .sort(function(a,b){ return ps[a][0] - ps[b][0]; });
  // record index of point in x-sorted list
  for (var i=0; i<sortedX.length; i++){
    indexes[sortedX[i]] = [i,null,null];
  }
  // sort by y
  var sortedY =
    ps.map(function(x,i){ return i; })
      .sort(function(a,b){ return ps[a][1] - ps[b][1]; });
  // record index of point in y-sorted list
  for (var i=0; i<sortedY.length; i++){
    indexes[sortedY[i]][1] = i;
  }
  // sort by z
  var sortedZ =
    ps.map(function(x,i){ return i; })
      .sort(function(a,b){ return ps[a][2] - ps[b][2]; });
  // record index of point in z-sorted list
  for (var i=0; i<sortedZ.length; i++){
    indexes[sortedZ[i]][2] = i;
  }
  // check for possible greater points only in the list
  // where the point is highest
  for (var i=0; i<ps.length; i++){
    var listToCheck,
        startIndex;
    if (indexes[i][0] > indexes[i][1]){
      if (indexes[i][0] > indexes[i][2]){
        listToCheck = sortedX;
        startIndex = indexes[i][0];
      } else {
        listToCheck = sortedZ;
        startIndex = indexes[i][2];
      }
    } else {
      if (indexes[i][1] > indexes[i][2]){
        listToCheck = sortedY;
        startIndex = indexes[i][1];
      } else {
        listToCheck = sortedZ;
        startIndex = indexes[i][2];
      }
    }
    var j = startIndex + 1;
    while (listToCheck[j] !== undefined){
      it++;
      var point = ps[listToCheck[j]];
      if (strictlyLessThan(ps[i],point)){
        res++;
        break;
      }
      j++;
    }
  }
  return res;
}
// var input = [[5,0,0],[4,1,0],[3,2,0],[2,3,0],[1,4,0],[0,5,0],[4,0,1],[3,1,1],[2,2,1],[1,3,1],[0,4,1],[3,0,2],[2,1,2],[1,2,2],[0,3,2],[2,0,3],[1,1,3],[0,2,3],[1,0,4],[0,1,4],[0,0,5]];
var input = new Array(10000);
for (var i=0; i<input.length; i++){
  input[i] = [Math.random(),Math.random(),Math.random()];
}
console.log(input.length + ' points');
console.log('result: ' + f(input));
console.log(it + ' iterations not including sorts');
I doubt that the worst-case complexity can be reduced below N×N, because it is possible to create input where no point is strictly less than any other point:
For any value n, consider the plane that intersects the X, Y and Z axes at (n,0,0), (0,n,0) and (0,0,n), described by the equation x+y+z=n. If the input consists of points on such a plane, none of the points is strictly less than any other point.
Example of worst-case input:
(5,0,0) (4,1,0) (3,2,0) (2,3,0) (1,4,0) (0,5,0)
(4,0,1) (3,1,1) (2,2,1) (1,3,1) (0,4,1)
(3,0,2) (2,1,2) (1,2,2) (0,3,2)
(2,0,3) (1,1,3) (0,2,3)
(1,0,4) (0,1,4)
(0,0,5)
However, the average complexity can be reduced to much less than N×N, e.g. with this approach:
Take the first point from the input and put it in a list.
Take the second point from the input and compare it to the first point in the list. If it is strictly less, discard the new point. If it is strictly greater, replace the point in the list with the new point. If it is neither, add the point to the list.
For each new point from the input, compare it to each point in the list. If it is strictly less than any point in the list, discard the new point. If it is strictly greater, replace the point in the list with the new point, and also discard any further points in the list which are strictly less than the new point. If the new point is not strictly less or greater than any point in the list, add the new point to the list.
After checking every point in the input, the result is the number of points in the input minus the number of points in the list.
Since the probability that for any two random points a and b either a < b or b < a is 25% (each coordinate comparison goes one way with probability 1/2, so P(a < b) = 1/8, and by symmetry the combined probability is 2 × 1/8 = 1/4), the list won't grow to be very large (unless the input is specifically crafted to contain few or no points that are strictly less than any other point).
Limited testing with the code below (100 cases) with 1,000,000 randomly distributed points in a cubic space shows that the average list size is around 116 (with a maximum of 160), and the number of checks whether a point is strictly less than another point is around 1,333,000 (with a maximum of 2,150,000).
(And a few tests with 10,000,000 points show that the average number of checks is around 11,000,000 with a list size around 150.)
So in practice, the average complexity is close to N rather than N×N.
function xyzLessCount(input) {
  var list = [input[0]];                       // put first point in list
  for (var i = 1; i < input.length; i++) {     // check every point in input
    var append = true;
    for (var j = 0; j < list.length; j++) {    // against every point in list
      if (xyzLess(input[i], list[j])) {        // new point < list point
        append = false;
        break;                                 // continue with next point
      }
      if (xyzLess(list[j], input[i])) {        // new point > list point
        list[j] = input[i];                    // replace list point
        for (var k = list.length - 1; k > j; k--) {
          if (xyzLess(list[k], list[j])) {     // check rest of list
            list.splice(k, 1);                 // remove list point
          }
        }
        append = false;
        break;                                 // continue with next point
      }
    }
    if (append) list.push(input[i]);           // append new point to list
  }
  return input.length - list.length;

  function xyzLess(a, b) {
    return a.x < b.x && a.y < b.y && a.z < b.z;
  }
}
var points = []; // random test data
for (var i = 0; i < 1000000; i++) {
  points.push({x: Math.random(), y: Math.random(), z: Math.random()});
}
document.write("1000000 → " + xyzLessCount(points));
I have a triangulated mesh. Assume it looks like a bumpy surface. I want to be able to find all edges that fall on the surrounding border of the mesh (forget about inner vertices).
I know I have to find the edges that are only connected to one triangle and collect all of these together, and that is the answer. But I want to be sure that the vertices of these edges are ordered clockwise around the shape.
I want to do this because I would like to get a polygon line around the outside of the mesh.
I hope this is clear enough to understand. In a sense I am trying to "de-triangulate" the mesh. Ha! If there is such a term.
Boundary edges are only referenced by a single triangle in the mesh, so to find them you need to scan through all triangles in the mesh and take the edges with a single reference count. You can do this efficiently (in O(N)) by making use of a hash table (see the sketch after the traversal steps below).
To convert the edge set to an ordered polygon loop you can use a traversal method:
1. Pick any unvisited edge segment [v_start, v_next] and add these vertices to the polygon loop.
2. Find the unvisited edge segment [v_i, v_j] that has either v_i = v_next or v_j = v_next and add the other vertex (the one not equal to v_next) to the polygon loop. Reset v_next as this newly added vertex, mark the edge as visited and continue from step 2.
3. Traversal is done when we get back to v_start.
The traversal will give a polygon loop that could have either clockwise or counter-clockwise ordering. A consistent ordering can be established by considering the signed area of the polygon. If the traversal results in the wrong orientation, you simply need to reverse the order of the polygon loop vertices.
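For the counting step, here is a minimal C# sketch of the hash-table scan, assuming the mesh is given as a flat triangle index list (three vertex indices per triangle); FindBoundaryEdges is just an illustrative name:
static List<(int, int)> FindBoundaryEdges(IList<int> triangles)
{
    // count how many triangles reference each edge; the (min, max)
    // normalization makes (a, b) and (b, a) hash to the same key
    var refCount = new Dictionary<(int, int), int>();
    for (int t = 0; t < triangles.Count; t += 3)
    {
        for (int e = 0; e < 3; e++)
        {
            int a = triangles[t + e];
            int b = triangles[t + (e + 1) % 3];
            var key = a < b ? (a, b) : (b, a);
            refCount[key] = refCount.TryGetValue(key, out int c) ? c + 1 : 1;
        }
    }
    // boundary edges are those referenced by exactly one triangle
    var boundary = new List<(int, int)>();
    foreach (var kv in refCount)
        if (kv.Value == 1)
            boundary.Add(kv.Key);
    return boundary;
}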
Well, as the saying goes: get it working, then get it working better. I noticed my example above assumes all the edges in the edges array do in fact link up into one nice border. This may not be the case in the real world (as I have discovered from the input files I am using!). In fact some of my input files have many polygons, and all of them need their borders detected. I also wanted to make sure the winding order is correct, so I have fixed that up as well; see below. (I feel I am making headway at last!)
private static List<int> OrganizeEdges(List<int> edges, List<Point> positions)
{
    var visited = new Dictionary<int, bool>();
    var edgeList = new List<int>();
    var resultList = new List<int>();
    var nextIndex = -1;
    while (resultList.Count < edges.Count)
    {
        if (nextIndex < 0)
        {
            // pick any unvisited segment to start the next loop
            for (int i = 0; i < edges.Count; i += 2)
            {
                if (!visited.ContainsKey(i))
                {
                    nextIndex = edges[i];
                    break;
                }
            }
        }
        for (int i = 0; i < edges.Count; i += 2)
        {
            if (visited.ContainsKey(i))
                continue;
            int j = i + 1;
            int k = -1;
            if (edges[i] == nextIndex)
                k = j;
            else if (edges[j] == nextIndex)
                k = i;
            if (k >= 0)
            {
                var edge = edges[k];
                visited[i] = true;
                edgeList.Add(nextIndex);
                edgeList.Add(edge);
                nextIndex = edge;
                i = -2; // restart the scan (the loop's i += 2 brings it back to 0)
            }
        }
        // calculate winding order - then add to final result.
        var borderPoints = new List<Point>();
        edgeList.ForEach(ei => borderPoints.Add(positions[ei]));
        var winding = CalculateWindingOrder(borderPoints);
        if (winding > 0)
            edgeList.Reverse();
        resultList.AddRange(edgeList);
        edgeList = new List<int>();
        nextIndex = -1;
    }
    return resultList;
}
/// <summary>
/// Returns 1 for CW, -1 for CCW, 0 for unknown.
/// </summary>
public static int CalculateWindingOrder(IList<Point> points)
{
    // the sign of the 'area' of the polygon is all we are interested in.
    var area = CalculateSignedArea(points);
    if (area < 0.0)
        return 1;
    else if (area > 0.0)
        return -1;
    return 0; // error condition - not enough verts to calculate, non-simple poly, etc.
}

public static double CalculateSignedArea(IList<Point> points)
{
    // shoelace formula: sum the cross products of consecutive points
    double area = 0.0;
    for (int i = 0; i < points.Count; i++)
    {
        int j = (i + 1) % points.Count;
        area += points[i].X * points[j].Y;
        area -= points[i].Y * points[j].X;
    }
    area /= 2.0;
    return area;
}
Traversal code (not efficient - needs to be tidied up; I will get to that at some point). Please note: I store each segment in the chain as 2 indices, rather than 1 as suggested by Darren. This is purely for my own implementation / rendering needs.
// okay now lets sort the segments so that they make a chain.
var sorted = new List<int>();
var visited = new Dictionary<int, bool>();
var startIndex = edges[0];
var nextIndex = edges[1];
sorted.Add(startIndex);
sorted.Add(nextIndex);
visited[0] = true;
visited[1] = true;
while (nextIndex != startIndex)
{
    for (int i = 0; i < edges.Count - 1; i += 2)
    {
        var j = i + 1;
        if (visited.ContainsKey(i) || visited.ContainsKey(j))
            continue;
        var iIndex = edges[i];
        var jIndex = edges[j];
        if (iIndex == nextIndex)
        {
            sorted.Add(nextIndex);
            sorted.Add(jIndex);
            nextIndex = jIndex;
            visited[j] = true;
            break;
        }
        else if (jIndex == nextIndex)
        {
            sorted.Add(nextIndex);
            sorted.Add(iIndex);
            nextIndex = iIndex;
            visited[i] = true;
            break;
        }
    }
}
return sorted;
The answer to your question actually depends on how the triangular mesh is represented in memory. If you use a half-edge data structure, then the algorithm is extremely simple, since everything was already done during the half-edge data structure's construction.
Start from any boundary half-edge HE_edge* edge0 (it can be found by a linear search over all half-edges, as the first edge without a valid face). Set the current half-edge HE_edge* edge = edge0.
Output the destination edge->vert of the current edge.
The next edge in clockwise order around the shape (and counter-clockwise order around the surrounding "hole") will be edge->next.
Stop when you reach edge0.
To efficiently enumerate the boundary edges in the opposite (counter-clockwise) order, the data structure needs to have a prev data field, which many existing implementations of the half-edge data structure provide in addition to next, e.g. MeshLib.
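For illustration, a minimal C# sketch of that boundary walk (HalfEdge and its fields are hypothetical stand-ins named after the conventions above, not any particular library's types):
class HalfEdge
{
    public int Vert;        // destination vertex (edge->vert)
    public HalfEdge Next;   // next half-edge around the same face or hole
    public object Face;     // null marks a boundary half-edge
}

static List<int> BoundaryLoop(IEnumerable<HalfEdge> halfEdges)
{
    // linear search for any boundary half-edge (one without a valid face)
    HalfEdge edge0 = null;
    foreach (var he in halfEdges)
        if (he.Face == null) { edge0 = he; break; }
    if (edge0 == null)
        return new List<int>(); // closed mesh: no boundary

    // walk next pointers, collecting destination vertices, until back at edge0
    var loop = new List<int>();
    var edge = edge0;
    do
    {
        loop.Add(edge.Vert);
        edge = edge.Next;
    } while (edge != edge0);
    return loop;
}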
Right, so let's say I want to compare two BitmapDatas. One is an image of a background (not solid - it has varying pixels), and another is of something (like a sprite) on top of the exact same background. Now what I want to do is remove the background from the second image by comparing the two images and removing all the background pixels that are present in the second image.
For clarity, basically I want to do this in AS3.
Now I came up with two ways to do this, and they both work perfectly. One compares pixels directly, while the other uses the BitmapData.compare() method first, then copies the appropriate pixels into the result. What I want to know is which way is faster.
Here are my two ways of doing it:
Method 1
for (var j:int = 0; j < layer1.height; j++)
{
    for (var i:int = 0; i < layer1.width; i++)
    {
        if (layer1.getPixel32(i, j) != layer2.getPixel32(i, j))
        {
            result.setPixel32(i, j, layer2.getPixel32(i, j));
        }
    }
}
Method 2
result = layer1.compare(layer2) as BitmapData;
for (var j:int = 0; j < layer1.height; j++)
{
    for (var i:int = 0; i < layer1.width; i++)
    {
        if (result.getPixel32(i, j) != 0x00000000)
        {
            result.setPixel32(i, j, layer2.getPixel32(i, j));
        }
    }
}
layer1 is the background, layer2 is the image the background will be removed from, and result is just a BitmapData that the result will come out on.
These are very similar, and personally I haven't noticed any difference in speed, but I was just wondering if anybody knows which would be faster. I'll probably use Method 1 either way, since BitmapData.compare() doesn't compare pixel alpha unless the colours are identical, but I still thought it wouldn't hurt to ask.
This might not be 100% applicable to your situation, but FWIW I did some research into this a while back, here's what I wrote back then:
I've been meaning to try out Grant Skinner's performance test thing for a while, so this was my opportunity.
I tested the native compare, the iterative compare, a reverse iterative compare (because I know they're a bit faster) and finally a compare using BlendMode.DIFFERENCE.
The reverse iterative compare looks like this:
for (var bx:int = _base.width - 1; bx >= 0; --bx) {
    for (var by:int = _base.height - 1; by >= 0; --by) {
        if (_base.getPixel32(bx, by) != compareTo.getPixel32(bx, by)) {
            return false;
        }
    }
}
return true;
The blendmode compare looks like this:
var test:BitmapData = _base.clone();
test.draw(compareTo, null, null, BlendMode.DIFFERENCE);
var rect:Rectangle = test.getColorBoundsRect(0xffffff, 0x000000, false);
return (rect.toString() != _base.rect.toString());
I'm not 100% sure this is completely reliable, but it seemed to work.
I ran each test for 50 iterations on 500x500 images.
Unfortunately my pixel bender toolkit is borked, so I couldn't try that method.
Each test was run in both the debug and release players for comparison, notice the massive differences!
Complete code is here: http://webbfarbror.se/dump/bitmapdata-compare.zip
You might try a shader that checks whether two images match, and if not, writes one of the images' pixels as output. Like this:
kernel NewFilter
<   namespace : "Your Namespace";
    vendor : "Your Vendor";
    version : 1;
    description : "your description";
>
{
    input image4 srcOne;
    input image4 srcTwo;
    output pixel4 dst;

    void evaluatePixel()
    {
        float2 positionHere = outCoord();
        pixel4 fromOne = sampleNearest(srcOne, positionHere);
        pixel4 fromTwo = sampleNearest(srcTwo, positionHere);
        float4 difference = fromOne - fromTwo;
        if (abs(difference.r) < 0.01 && abs(difference.g) < 0.01 && abs(difference.b) < 0.01)
            dst = pixel4(0,0,0,1);
        else
            dst = fromOne;
    }
}
This returns a black pixel where the images match and the first image's pixel where they don't. Use Pixel Bender to build it for use in Flash. Shaders are usually a factor faster than plain AS3 code.
This is a question about forming a topological tree in 3D. A bit of context: I have a physics engine with bodies, collision points and some constraints. This is not homework, but an experiment in multi-threading.
I need to sort the bodies in a bottom-to-top fashion, with groups of objects belonging to layers, as in this document (see the section on "shock propagation"):
http://www2.imm.dtu.dk/visiondag/VD05/graphical/slides/kenny.pdf
The pseudocode he uses to describe how to iterate over the tree makes perfect sense:
shock-propagation(algorithm A)
    compute contact graph
    for each stack layer in bottom-up order
        fixate bottom-most objects of layer
        apply algorithm A to layer
        un-fixate bottom-most objects of layer
    next layer
I already have algorithm A figured out (my impulse code). What would the pseudocode look like for the tree/layer sorting (a topological sort?) given a list of 3D points?
I.e., I don't know where to stop/begin the next "rung" or "branch". I guess I could just chunk it up by y position, but that seems clunky and error-prone. Do I look into topological sorting? I don't really know how to go about this in 3D. How would I get "edges" for a topo sort, if that's the way to do it?
Am I overthinking this? Could I just "connect the dots" by finding point p1, then the least distant next point p2 where p2.y > p1.y? I see a problem there: the distance from p0 to p1 could be greater than the distance to p2, so using pure distances would lead to a bad sort.
I just tackled this myself.
I found an example of how to accomplish this in the downloadable source code linked to this paper:
http://www-cs-students.stanford.edu/~eparker/files/PhysicsEngine/
Specifically the WorldState.cs file starting at line 221.
The idea is that you assign all static objects a level of -1 and every other object a different default level, say -2. Then for each collision with the bodies at level -1, you add the body it collided with to a list and set its level to 0.
After that, using a while loop while (list.Count > 0), check for the bodies that collide with each body in the list and set their levels to body.level + 1.
After that, for each body in the simulation that still has the default level (I said -2 earlier), set its level to the highest level.
There are a few more fine details, but looking at the code in the example will explain it far better than I ever could.
Hope it helps!
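For a quick orientation before reading the full source, here is a condensed C# sketch of the leveling pass described above; RigidBody, IsStatic, Level and Contacts are hypothetical stand-ins for the engine's actual types, using the -1 (static) and -2 (default) convention from the text:
class RigidBody
{
    public bool IsStatic;
    public int Level = -2;                  // -2 = unassigned default
    public List<RigidBody> Contacts = new List<RigidBody>();
}

static void AssignLevels(List<RigidBody> bodies)
{
    var queue = new Queue<RigidBody>();
    int maxLevel = 0;
    // seed: bodies touching a static (-1) body form layer 0
    foreach (var body in bodies)
    {
        if (!body.IsStatic) continue;
        body.Level = -1;
        foreach (var other in body.Contacts)
            if (!other.IsStatic && other.Level == -2)
            {
                other.Level = 0;
                queue.Enqueue(other);
            }
    }
    // breadth-first: each body sits one layer above the body it rests on
    while (queue.Count > 0)
    {
        var a = queue.Dequeue();
        if (a.Level > maxLevel) maxLevel = a.Level;
        foreach (var b in a.Contacts)
            if (b.Level == -2)
            {
                b.Level = a.Level + 1;
                queue.Enqueue(b);
            }
    }
    // bodies never reached keep the default; give them the highest level
    foreach (var body in bodies)
        if (body.Level == -2) body.Level = maxLevel;
}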
Relevant code from Evan Parker's source [Stanford]:
// topological sort (bfs)
// TODO check this
int max_level = -1;
while (queue.Count > 0)
{
    RigidBody a = queue.Dequeue() as RigidBody;
    //Console.Out.WriteLine("considering collisions with '{0}'", a.Name);
    if (a.level > max_level) max_level = a.level;
    foreach (CollisionPair cp in a.collisions)
    {
        RigidBody b = (cp.body[0] == a ? cp.body[1] : cp.body[0]);
        //Console.Out.WriteLine("considering collision between '{0}' and '{1}'", a.Name, b.Name);
        if (!b.levelSet)
        {
            b.level = a.level + 1;
            b.levelSet = true;
            queue.Enqueue(b);
            //Console.Out.WriteLine("found body '{0}' in level {1}", b.Name, b.level);
        }
    }
}
int num_levels = max_level + 1;
//Console.WriteLine("num_levels = {0}", num_levels);
ArrayList[] bodiesAtLevel = new ArrayList[num_levels];
ArrayList[] collisionsAtLevel = new ArrayList[num_levels];
for (int i = 0; i < num_levels; i++)
{
    bodiesAtLevel[i] = new ArrayList();
    collisionsAtLevel[i] = new ArrayList();
}
for (int i = 0; i < bodies.GetNumBodies(); i++)
{
    RigidBody a = bodies.GetBody(i);
    if (!a.levelSet || a.level < 0) continue; // either a static body or no contacts
    // add a to a's level
    bodiesAtLevel[a.level].Add(a);
    // add collisions involving a to a's level
    foreach (CollisionPair cp in a.collisions)
    {
        RigidBody b = (cp.body[0] == a ? cp.body[1] : cp.body[0]);
        if (b.level <= a.level) // contact with object at or below the same level as a
        {
            // make sure not to add duplicate collisions
            bool found = false;
            foreach (CollisionPair cp2 in collisionsAtLevel[a.level])
                if (cp == cp2) found = true;
            if (!found) collisionsAtLevel[a.level].Add(cp);
        }
    }
}
for (int step = 0; step < num_contact_steps; step++)
{
    for (int level = 0; level < num_levels; level++)
    {
        // process all contacts
        foreach (CollisionPair cp in collisionsAtLevel[level])
        {
            cp.ResolveContact(dt, (num_contact_steps - step - 1) * -1.0f / num_contact_steps);
        }
    }
}