Find the count of all points in a 3D space that are strictly less than at least one of the points in that space?

We are given n points in a 3D space, and we need to find the count of all points that are strictly less than at least one of the other points in that space, i.e. where
x1<x2 and y1<y2 and z1<z2,
so (x1,y1,z1) would be one such point.
For example, given the points
1 4 2
4 3 2
2 5 3
(1,4,2)<(2,5,3)
So the answer for the above case is the count of such points, i.e. 1.
I know this can be solved with an O(n^2) algorithm, but I need something faster. I tried sorting along one dimension and then searching only over the part of the list with greater keys, but it is still O(n^2) in the worst case.
What is the efficient way to do this?

There is a way to optimize your search that may be faster than O(n^2) - I would welcome counterexample input.
Keep three lists of the indexes of the points, sorted by x, y and z respectively. Make a fourth list associating each point with its place in each of the lists (indexes in the code below; e.g., indexes[0] = [5,124,789] would mean the first point is 5th in the x-sorted list, 124th in the y-sorted list, and 789th in the z-sorted list).
Now iterate over the points - pick the list where the point is highest and test the point against the higher-indexed points in that list, exiting early if the point is strictly less than one of them. If a point is low on all three lists, the likelihood of finding a strictly greater point is high. Otherwise, a higher place in one of the lists means fewer iterations.
JavaScript code:
function strictlyLessThan(p1, p2){
  return p1[0] < p2[0] && p1[1] < p2[1] && p1[2] < p2[2];
}
// iterations
var it = 0;
function f(ps){
  var res = 0,
      indexes = new Array(ps.length);
  // sort by x
  var sortedX =
    ps.map(function(x,i){ return i; })
      .sort(function(a,b){ return ps[a][0] - ps[b][0]; });
  // record index of point in x-sorted list
  for (var i=0; i<sortedX.length; i++){
    indexes[sortedX[i]] = [i,null,null];
  }
  // sort by y
  var sortedY =
    ps.map(function(x,i){ return i; })
      .sort(function(a,b){ return ps[a][1] - ps[b][1]; });
  // record index of point in y-sorted list
  for (var i=0; i<sortedY.length; i++){
    indexes[sortedY[i]][1] = i;
  }
  // sort by z
  var sortedZ =
    ps.map(function(x,i){ return i; })
      .sort(function(a,b){ return ps[a][2] - ps[b][2]; });
  // record index of point in z-sorted list
  for (var i=0; i<sortedZ.length; i++){
    indexes[sortedZ[i]][2] = i;
  }
  // check for possible greater points only in the list
  // where the point is highest
  for (var i=0; i<ps.length; i++){
    var listToCheck,
        startIndex;
    if (indexes[i][0] > indexes[i][1]){
      if (indexes[i][0] > indexes[i][2]){
        listToCheck = sortedX;
        startIndex = indexes[i][0];
      } else {
        listToCheck = sortedZ;
        startIndex = indexes[i][2];
      }
    } else {
      if (indexes[i][1] > indexes[i][2]){
        listToCheck = sortedY;
        startIndex = indexes[i][1];
      } else {
        listToCheck = sortedZ;
        startIndex = indexes[i][2];
      }
    }
    var j = startIndex + 1;
    while (listToCheck[j] !== undefined){
      it++;
      var point = ps[listToCheck[j]];
      if (strictlyLessThan(ps[i], point)){
        res++;
        break;
      }
      j++;
    }
  }
  return res;
}
// var input = [[5,0,0],[4,1,0],[3,2,0],[2,3,0],[1,4,0],[0,5,0],[4,0,1],[3,1,1], [2,2,1],[1,3,1],[0,4,1],[3,0,2],[2,1,2],[1,2,2],[0,3,2],[2,0,3], [1,1,3],[0,2,3],[1,0,4],[0,1,4],[0,0,5]];
var input = new Array(10000);
for (var i=0; i<input.length; i++){
  input[i] = [Math.random(), Math.random(), Math.random()];
}
console.log(input.length + ' points');
console.log('result: ' + f(input));
console.log(it + ' iterations not including sorts');

I doubt that the worst-case complexity can be reduced below N×N, because it is possible to create input where no point is strictly less than any other point:
For any value n, consider the plane that intersects the X, Y and Z axes at (n,0,0), (0,n,0) and (0,0,n), described by the equation x+y+z=n. If the input consists of points on such a plane, none of the points is strictly less than any other point.
Example of worst-case input:
(5,0,0) (4,1,0) (3,2,0) (2,3,0) (1,4,0) (0,5,0)
(4,0,1) (3,1,1) (2,2,1) (1,3,1) (0,4,1)
(3,0,2) (2,1,2) (1,2,2) (0,3,2)
(2,0,3) (1,1,3) (0,2,3)
(1,0,4) (0,1,4)
(0,0,5)
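For illustration, here is a small sketch (in C#, just as an example; the method name is mine) that generates this kind of worst-case input - all lattice points with non-negative coordinates on the plane x+y+z=n; n=5 yields exactly the 21 points above:
static List<(int x, int y, int z)> WorstCaseInput(int n)
{
    // All lattice points with x + y + z = n; no point strictly dominates
    // another, since all three coordinates cannot grow while the sum stays n.
    var points = new List<(int x, int y, int z)>();
    for (int x = 0; x <= n; x++)
        for (int y = 0; x + y <= n; y++)
            points.Add((x, y, n - x - y));
    return points;
}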
However, the average complexity can be reduced to much less than N×N, e.g. with this approach:
1. Take the first point from the input and put it in a list.
2. Take the second point from the input, and compare it to the first point in the list. If it is strictly less, discard the new point. If it is strictly greater, replace the point in the list with the new point. If it is neither, add the point to the list.
3. For each new point from the input, compare it to each point in the list. If it is strictly less than any point in the list, discard the new point. If it is strictly greater, replace the point in the list with the new point, and also discard any further points in the list which are strictly less than the new point. If the new point is not strictly less or greater than any point in the list, add the new point to the list.
4. After checking every point in the input, the result is the number of points in the input minus the number of points in the list.
Since the probability that for any two random points a and b either a<b or b<a is 25% (each of the three coordinate comparisons goes the required way with probability 1/2, so P(a<b) = 1/8, and the two orders together give 25%), the list won't grow to be very large (unless the input is specifically crafted to contain few or no points that are strictly less than any other point).
Limited testing with the code below (100 cases) with 1,000,000 randomly distributed points in a cubic space shows that the average list size is around 116 (with a maximum of 160), and the number of checks whether a point is strictly less than another point is around 1,333,000 (with a maximum of 2,150,000).
(And a few tests with 10,000,000 points show that the average number of checks is around 11,000,000 with a list size around 150.)
So in practice, the average complexity is close to N rather than N×N.
function xyzLessCount(input) {
    var list = [input[0]];                           // put first point in list
    for (var i = 1; i < input.length; i++) {         // check every point in input
        var append = true;
        for (var j = 0; j < list.length; j++) {      // against every point in list
            if (xyzLess(input[i], list[j])) {        // new point < list point
                append = false;
                break;                               // continue with next point
            }
            if (xyzLess(list[j], input[i])) {        // new point > list point
                list[j] = input[i];                  // replace list point
                for (var k = list.length - 1; k > j; k--) {
                    if (xyzLess(list[k], list[j])) { // check rest of list
                        list.splice(k, 1);           // remove list point
                    }
                }
                append = false;
                break;                               // continue with next point
            }
        }
        if (append) list.push(input[i]);             // append new point to list
    }
    return input.length - list.length;

    function xyzLess(a, b) {
        return a.x < b.x && a.y < b.y && a.z < b.z;
    }
}
var points = []; // random test data
for (var i = 0; i < 1000000; i++) {
    points.push({x: Math.random(), y: Math.random(), z: Math.random()});
}
document.write("1000000 → " + xyzLessCount(points));

Related

Verification of algorithm for variant of gas station

I am studying this problem and I recognise it as a variant of the gas station problem. As a result, I am using a greedy algorithm to solve it. I would like to ask if anyone can help me point out whether my algorithm is correct or not, thanks.
My algorithm
var x = input.distance, cost = input.cost, c = input.travelDistance, price = [Number.POSITIVE_INFINITY];
var result = [];
var lastFill = 0, tempMinIndex = 0, totalCost = 0;
for (var i = 1; i < x.length; i++) {
    var d = x[i] - x[lastFill];
    if (d > c) { // car can not travel to this shop, has to decide which shop to refill in the previous possible shops
        result.push(tempMinIndex);
        lastFill = tempMinIndex;
        totalCost += price[tempMinIndex];
        tempMinIndex = i;
    }
    // calculate price
    price[i] = d/c * cost[i];
    if (price[i] <= price[tempMinIndex])
        tempMinIndex = i;
}
// add last station to the list and the total cost
if (lastFill != x.length - 1) {
    result.push(x.length - 1);
    totalCost += price[price.length - 1];
}
You can try out the algorithm at this link
https://drive.google.com/file/d/0B4sd8MQwTpVnMXdCRU0xZFlVRlk/view?usp=sharing
First, regarding your solution:
There is a bug that breaks it even on the simplest inputs. When you decide that the distance has become too far and you should have refilled at some earlier point, you don't update the distance, so the gas station charges you more than it should. The fix is simple:
if (d > c) {
    // car can not travel to this shop, has to decide which shop to refill
    // in the previous possible shops
    result.push(tempMinIndex);
    lastFill = tempMinIndex;
    totalCost += price[tempMinIndex];
    tempMinIndex = i;
    // Fix: update distance
    var d = x[i] - x[lastFill];
}
Even with this fix, your algorithm fails on some input data, like this:
0 10 20 30
0 20 30 50
30
It should refill at every gas station to minimize cost, but it simply fills at the last one.
After some research, I came up with solution. I'll try to explain it as simple as possible to make it language independent.
Idea
For every gas station G we will compute the cheapest way of filling. We'll do that recursively: for each gas station, find all gas stations i from which we can reach G. For every i, take the cheapest filling possible and add the cost of the filling at G given the gasoline left. For the start gas station the cost is 0. More formally (reconstructed from the code below):
BestCost(start) = 0
BestCost(G) = min over all stations i within reach of G of [ BestCost(i) + Cost(i, G) ]
Cost(i, G) = CostOfFilling(G) × (Position(G) - Position(i)) / Capacity
where CostOfFilling(x), Capacity and Position(x) can be retrieved from the input data.
So, the answer for the problem is simply BestCost(LastGasStation).
Code
Now, solution in javascript to make things clearer.
function calculate(input)
{
    // Array for keeping calculated values of cheapest filling at each station
    best = [];
    var x = input.distance;
    var cost = input.cost;
    var capacity = input.travelDistance;
    // Array initialization
    best.push(0);
    for (var i = 0; i < x.length - 1; i++)
    {
        best.push(-1);
    }
    var answer = findBest(x, cost, capacity, x.length - 1);
    return answer;
}
// Implementation of BestCost function
var findBest = function(distances, costs, capacity, distanceIndex)
{
    // Return the value if it has already been calculated
    if (best[distanceIndex] != -1)
    {
        return best[distanceIndex];
    }
    // Find cheapest way to fill by iterating over every available gas station
    var minDistanceIndex = findMinDistance(capacity, distances, distanceIndex);
    var answer = findBest(distances, costs, capacity, minDistanceIndex) +
                 calculateCost(distances, costs, capacity, minDistanceIndex, distanceIndex);
    for (var i = minDistanceIndex + 1; i < distanceIndex; i++)
    {
        var newAnswer = findBest(distances, costs, capacity, i) +
                        calculateCost(distances, costs, capacity, i, distanceIndex);
        if (newAnswer < answer)
        {
            answer = newAnswer;
        }
    }
    // Save best result
    best[distanceIndex] = answer;
    return answer;
}
// Implementation of MinGasStation function
function findMinDistance(capacity, distances, distanceIndex)
{
    for (var i = 0; i < distances.length; i++)
    {
        if (distances[distanceIndex] - distances[i] <= capacity)
        {
            return i;
        }
    }
}
// Implementation of Cost function
function calculateCost(distances, costs, capacity, a, b)
{
    var distance = distances[b] - distances[a];
    return costs[b] * (distance / capacity);
}
Full workable html page with code is available here

Is there an efficient algorithm that could do this?

I have two lists of integers of equal length, each with no duplicates, and I need to map them to each other based on the absolute value of their differences, such that nothing could be swapped in the output to make the total of all pair differences smaller. The 'naive' approach I could think of would be this (in condensed C#, but I think it's pretty easy to follow):
Dictionary<int, int> output = new Dictionary<int, int>();
List<int> list1, list2; // the two input lists, assumed populated
while (list1.Count > 0) // While we haven't arranged all the pairs
{
    int bestDistance = Int32.MaxValue;  // best distance between numbers so far
    int bestFirst = 0, bestSecond = 0;  // best numbers so far
    foreach (int i in list1)
    {
        foreach (int j in list2)
        {
            int distance = Math.Abs(i - j);
            // if the distance is better than the best so far, make it the new best
            if (distance < bestDistance)
            {
                bestDistance = distance;
                bestFirst = i;
                bestSecond = j;
            }
        }
    }
    output[bestFirst] = bestSecond; // add the best to dictionary
    list1.Remove(bestFirst);        // remove it from the lists
    list2.Remove(bestSecond);
}
Essentially, it just finds the best pair, removes it, and then repeats until it's done. But this runs in cubic time, if I see it correctly, and would take incredibly long for large lists. Is there any faster way to do this?
This is less trivial than my initial hunch suggested. The key to keeping this O(N log(N)) is to work with sorted lists, and search for the "pivot" element in the second sorted list with the smallest difference to the first element in the first sorted list.
Thus the steps to take become:
1. Sort both input lists.
2. Find the pivot element in the second sorted list.
3. Return this pivot element together with the first element of the first sorted list.
4. Keep track of the element index to the left of the pivot and to the right of the pivot.
5. Iterate the first list in sorted order, returning either the left or right element, depending on which difference is smallest, and adjusting the left and right indexes.
As in (c# example):
public static IEnumerable<KeyValuePair<int, int>> FindSmallestDistances(List<int> first, List<int> second)
{
    Debug.Assert(first.Count == second.Count); // precondition.
    // sort the input: O(N log(N)).
    first.Sort();
    second.Sort();
    // determine pivot: O(N).
    var min_first = first[0];
    var smallest_abs_dif = Math.Abs(second[0] - min_first);
    var pivot_ndx = 0;
    for (int i = 1; i < second.Count; i++)
    {
        var abs_dif = Math.Abs(second[i] - min_first);
        if (abs_dif < smallest_abs_dif)
        {
            smallest_abs_dif = abs_dif;
            pivot_ndx = i;
        }
    }
    // return the first one.
    yield return new KeyValuePair<int, int>(min_first, second[pivot_ndx]);
    // Iterate the rest: O(N)
    var left = pivot_ndx - 1;
    var right = pivot_ndx + 1;
    for (var i = 1; i < first.Count; i++)
    {
        if (left >= 0)
        {
            if (right < first.Count && Math.Abs(first[i] - second[left]) > Math.Abs(first[i] - second[right]))
                yield return new KeyValuePair<int, int>(first[i], second[right++]);
            else
                yield return new KeyValuePair<int, int>(first[i], second[left--]);
        }
        else
            yield return new KeyValuePair<int, int>(first[i], second[right++]);
    }
}

Return the number of elements of an array that is the most "expensive"

I recently stumbled upon an interesting problem, and I am wondering if my solution is optimal.
You are given an array of zeros and ones. The goal is to return the
amount of zeros and the amount of ones in the most expensive sub-array.
The cost of an array is the amount of 1s divided by the amount of 0s. In
case there are no zeros in the sub-array, the cost is zero.
At first I tried brute-forcing, but for an array of 10,000 elements it was far too slow and I ran out of memory.
My second idea was, instead of creating those sub-arrays, to remember the start and the end of the sub-array. That way I saved a lot of memory, but the complexity was still O(n^2).
My final solution, which I think is O(n), goes like this:
Start at the beginning of the array. For each element, calculate the cost of the sub-arrays starting from the first element and ending at the current index. So we would start with a sub-array consisting of the first element, then the first and second, etc. Since the only thing we need to calculate the cost is the amount of 1s and 0s in the sub-array, I can find the optimal end of the sub-array.
The second step is to start from the end of the sub-array from step one, and repeat the same to find the optimal beginning. That way I am sure that there is no better combination in the whole array.
Is this solution correct? If not, is there a counter-example that will show that this solution is incorrect?
Edit
For clarity:
Let's say our input array is 0101.
There are 10 subarrays:
0,1,0,1,01,10,01,010,101 and 0101.
The cost of the most expensive subarray would be 2 since 101 is the most expensive subarray. So the algorithm should return 1,2
Edit 2
There is one more thing that I forgot: if 2 sub-arrays have the same cost, the longer one is "more expensive".
Let me sketch a proof for my assumption:
(a = whole array, *=zero or more, +=one or more, {n}=exactly n)
Cases a=0* and a=1+ : c=0
Cases a=01+ and a=1+0 : conforms to 1*0{1,2}1*, a is optimum
For the normal case, a contains one or more 0s and 1s.
This means there is some optimum sub-array of non-zero cost.
(S) Assume s is an optimum sub-array of a.
It contains one or more zeros. (Otherwise its cost would be zero).
(T) Let t be the longest `1*0{1,2}1*` sequence within s
(and among the equally long ones, the one with the most 1s).
(Note: There is always one such, e.g. `10` or `01`.)
Let N be the number of 1s in t.
Now, we prove that always t = s.
By showing it is not possible to add adjacent parts of s to t if (S).
(E) Assume t shorter than s.
We cannot add 1s at either side, otherwise not (T).
For each 0 we add from s, we have to add at least N more 1s
later to get at least the same cost as our `1*0+1*`.
This means: We have to add at least one run of N 1s.
If we add some run of N+1, N+2 ... somewhere, then not (T).
If we add consecutive zeros, we need to compensate
with longer runs of 1s, thus not (T).
This leaves us with the only option of adding single zeros and runs of N 1s each.
This would give (by symmetry) `1{n}0{1,2}1{m}01{n+m}...`
If m>0 then `1{m}01{n+m}` is longer than `1{n}0{1,2}1{m}`, thus not (T).
If m=0 then we get `1{n}001{n}`, thus not (T).
So assumption (E) must be wrong.
Conclusion: The optimum sub-array must conform to 1*0{1,2}1*.
Here is my O(n) impl in Java according to the assumption in my last comment (1*01* or 1*001*):
public class Q19596345 {
    public static void main(String[] args) {
        try {
            String array = "0101001110111100111111001111110";
            System.out.println("array=" + array);
            SubArray current = new SubArray();
            current.array = array;
            SubArray best = (SubArray) current.clone();
            for (int i = 0; i < array.length(); i++) {
                current.accept(array.charAt(i));
                SubArray candidate = (SubArray) current.clone();
                candidate.trim();
                if (candidate.cost() > best.cost()) {
                    best = candidate;
                    System.out.println("better: " + candidate);
                }
            }
            System.out.println("best: " + best);
        } catch (Exception ex) { ex.printStackTrace(System.err); }
    }

    static class SubArray implements Cloneable {
        String array;
        int start, leftOnes, zeros, rightOnes;

        // optimize 1*0*1* by cutting
        void trim() {
            if (zeros > 1) {
                if (leftOnes < rightOnes) {
                    start += leftOnes + (zeros - 1);
                    leftOnes = 0;
                    zeros = 1;
                } else if (leftOnes > rightOnes) {
                    zeros = 1;
                    rightOnes = 0;
                }
            }
        }

        double cost() {
            if (zeros == 0) return 0;
            else return (leftOnes + rightOnes) / (double) zeros +
                        (leftOnes + zeros + rightOnes) * 0.00001;
        }

        void accept(char c) {
            if (c == '1') {
                if (zeros == 0) leftOnes++;
                else rightOnes++;
            } else {
                if (rightOnes > 0) {
                    start += leftOnes + zeros;
                    leftOnes = rightOnes;
                    zeros = 0;
                    rightOnes = 0;
                }
                zeros++;
            }
        }

        public Object clone() throws CloneNotSupportedException { return super.clone(); }

        public String toString() {
            return String.format("%s at %d with cost %.3f with zeros,ones=%d,%d",
                array.substring(start, start + leftOnes + zeros + rightOnes),
                start, cost(), zeros, leftOnes + rightOnes);
        }
    }
}
If we can show the max array is always 1+0+1+, 1+0, or 01+ (regular expression notation), then we can calculate the number of runs.
So for the array (010011), we have (always starting with a run of 1s)
0,1,1,2,2
so the ratios are (0, 1, 0.3, 1.5, 1), which leads to an array of 10011 as the final result, ignoring the one runs.
Cost of the left edge is 0
Cost of the right edge is 2
So in this case, the right edge is the correct answer -- 011
I haven't yet been able to come up with a counterexample, but the proof isn't obvious either. Hopefully we can crowd source one :)
The degenerate cases are simpler
All 1's and 0's are obvious, as they all have the same cost.
A string of just 1+,0+ or vice versa is all the 1's and a single 0.
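Assuming that 1+0+1+ / 1+0 / 01+ claim holds, here is a minimal C# sketch of the run-based check (the name and encoding are illustrative; it returns only the best ratio and ignores the longer-wins tie-break). For "010011" the window pass yields 1.5 and the single-zero edge pass yields 2, matching the 011 answer above:
static double BestRatioFromRuns(string bits)
{
    // Run-length encode, always starting with a (possibly empty) run of 1s,
    // e.g. "010011" -> 0,1,1,2,2 as in the example above.
    var runs = new List<int>();
    int i = 0;
    char expected = '1';
    while (i < bits.Length)
    {
        int n = 0;
        while (i < bits.Length && bits[i] == expected) { n++; i++; }
        runs.Add(n);
        expected = expected == '1' ? '0' : '1';
    }
    double best = 0;
    // 1+0+1+ windows: each zeros-run (odd index) plus both neighbouring
    // ones-runs; for "010011" this gives 1/1 and 3/2.
    for (int j = 1; j < runs.Count; j += 2)
    {
        int ones = runs[j - 1] + (j + 1 < runs.Count ? runs[j + 1] : 0);
        best = Math.Max(best, ones / (double)runs[j]);
    }
    // 1+0 / 01+ edge cases: a run of 1s next to a single zero,
    // e.g. the right edge "011" of "010011" with ratio 2.
    for (int k = 0; k < runs.Count; k += 2)
    {
        bool hasAdjacentZero = k > 0 || k + 1 < runs.Count;
        if (runs[k] > 0 && hasAdjacentZero)
            best = Math.Max(best, runs[k]); // ones divided by one zero
    }
    return best; // 0 when the array contains no zeros
}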
How about this? As a C# programmer, I am thinking we can use something like a Dictionary of <int, int, int>.
The first int would be used as the key, the second as the sub-array number, and the third for the elements of the sub-array.
For your example
key|Sub-array number|elements
1|1|0
2|2|1
3|3|0
4|4|1
5|5|0
6|5|1
7|6|1
8|6|0
9|7|0
10|7|1
11|8|0
12|8|1
13|8|0
14|9|1
15|9|0
16|9|1
17|10|0
18|10|1
19|10|0
20|10|1
Then you can run through the dictionary and store the highest in a variable.
var maxcost = 0.0;
var zeros = 0;
var ones = 0;
var cost = 0.0;
for (var i = 1; i <= 20; i++)
{
    if (i > 1 && dictionary.arraynumber[i] != dictionary.arraynumber[i-1])
    {
        // a new sub-array starts: compare before resetting the counters
        if (cost > maxcost)
        {
            maxcost = cost;
        }
        zeros = 0;
        ones = 0;
        cost = 0.0;
    }
    if (dictionary.values[i] == 0)
    {
        zeros++;
    }
    else
    {
        ones++;
    }
    cost = (zeros == 0) ? 0.0 : (double)ones / zeros; // cost is 0 without zeros
}
if (cost > maxcost) // don't forget the last sub-array
{
    maxcost = cost;
}
This should run in O(n), I hope, and you just need about 3n memory for the dictionary?
I think we can modify the maximal subarray problem to fit to this question. Here's my attempt at it:
void FindMaxRatio(int[] array, out int maxNumOnes, out int maxNumZeros)
{
    maxNumOnes = 0;
    maxNumZeros = 0;
    int numOnes = 0;
    int numZeros = 0;
    double maxSoFar = 0;
    double maxEndingHere = 0;
    for (int i = 0; i < array.Length; i++) {
        if (array[i] == 0) numZeros++;
        if (array[i] == 1) numOnes++;
        if (numZeros == 0) maxEndingHere = 0;
        else maxEndingHere = numOnes / (double)numZeros;
        if (maxEndingHere < 1 && maxEndingHere > 0) {
            numZeros = 0;
            numOnes = 0;
        }
        if (maxSoFar < maxEndingHere) {
            maxSoFar = maxEndingHere;
            maxNumOnes = numOnes;
            maxNumZeros = numZeros;
        }
    }
}
I think the key is that if the ratio is less than 1, we can disregard that subsequence, because
there will always be a subsequence 01 or 10 whose ratio is 1. This seemed to work for 010011.

Get border edges of mesh - in winding order

I have a triangulated mesh. Assume it looks like a bumpy surface. I want to be able to find all edges that fall on the surrounding border of the mesh (forget about inner vertices).
I know I have to find the edges that are only connected to one triangle, and collect all of these together, and that is the answer. But I want to be sure that the vertices of these edges are ordered clockwise around the shape.
I want to do this because I would like to get a polygon line around the outside of the mesh.
I hope this is clear enough to understand. In a sense I am trying to "de-triangulate" the mesh. Ha! If there is such a term.
Boundary edges are only referenced by a single triangle in the mesh, so to find them you need to scan through all triangles in the mesh and take the edges with a single reference count. You can do this efficiently (in O(N)) by making use of a hash table.
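For that counting step, a minimal C# sketch, assuming the mesh is given as a list of vertex-index triples (the tuple-keyed dictionary and the method name are just illustrative, not from any particular mesh library):
static List<(int, int)> FindBoundaryEdges(List<int[]> triangles)
{
    // Count how many triangles reference each undirected edge.
    var counts = new Dictionary<(int, int), int>();
    foreach (var t in triangles)
    {
        for (int i = 0; i < 3; i++)
        {
            int a = t[i], b = t[(i + 1) % 3];
            // Key on the sorted pair so both windings map to the same edge.
            var key = a < b ? (a, b) : (b, a);
            counts[key] = counts.TryGetValue(key, out var c) ? c + 1 : 1;
        }
    }
    // Boundary edges are referenced by exactly one triangle.
    var boundary = new List<(int, int)>();
    foreach (var kv in counts)
        if (kv.Value == 1)
            boundary.Add(kv.Key);
    return boundary;
}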
To convert the edge set to an ordered polygon loop you can use a traversal method:
1. Pick any unvisited edge segment [v_start,v_next] and add these vertices to the polygon loop.
2. Find the unvisited edge segment [v_i,v_j] that has either v_i = v_next or v_j = v_next and add the other vertex (the one not equal to v_next) to the polygon loop. Reset v_next as this newly added vertex, mark the edge as visited and continue from 2.
3. Traversal is done when we get back to v_start.
The traversal will give a polygon loop that could have either clockwise or counter-clockwise ordering. A consistent ordering can be established by considering the signed area of the polygon. If the traversal results in the wrong orientation you simply need to reverse the order of the polygon loop vertices.
Well, as the saying goes - get it working, then get it working better. I noticed that my example above assumes all the edges in the edges array do in fact link up in a nice border. This may not be the case in the real world (as I have discovered from the input files I am using!). In fact some of my input files have many polygons, and all need borders detected. I also wanted to make sure the winding order is correct. So I have fixed that up as well. See below. (Feel I am making headway at last!)
private static List<int> OrganizeEdges(List<int> edges, List<Point> positions)
{
    var visited = new Dictionary<int, bool>();
    var edgeList = new List<int>();
    var resultList = new List<int>();
    var nextIndex = -1;
    while (resultList.Count < edges.Count)
    {
        if (nextIndex < 0)
        {
            for (int i = 0; i < edges.Count; i += 2)
            {
                if (!visited.ContainsKey(i))
                {
                    nextIndex = edges[i];
                    break;
                }
            }
        }
        for (int i = 0; i < edges.Count; i += 2)
        {
            if (visited.ContainsKey(i))
                continue;
            int j = i + 1;
            int k = -1;
            if (edges[i] == nextIndex)
                k = j;
            else if (edges[j] == nextIndex)
                k = i;
            if (k >= 0)
            {
                var edge = edges[k];
                visited[i] = true;
                edgeList.Add(nextIndex);
                edgeList.Add(edge);
                nextIndex = edge;
                i = 0;
            }
        }
        // calculate winding order - then add to final result.
        var borderPoints = new List<Point>();
        edgeList.ForEach(ei => borderPoints.Add(positions[ei]));
        var winding = CalculateWindingOrder(borderPoints);
        if (winding > 0)
            edgeList.Reverse();
        resultList.AddRange(edgeList);
        edgeList = new List<int>();
        nextIndex = -1;
    }
    return resultList;
}

/// <summary>
/// returns 1 for CW, -1 for CCW, 0 for unknown.
/// </summary>
public static int CalculateWindingOrder(IList<Point> points)
{
    // the sign of the 'area' of the polygon is all we are interested in.
    var area = CalculateSignedArea(points);
    if (area < 0.0)
        return 1;
    else if (area > 0.0)
        return -1;
    return 0; // error condition - not enough verts to calculate, non-simple poly, etc.
}

public static double CalculateSignedArea(IList<Point> points)
{
    double area = 0.0;
    for (int i = 0; i < points.Count; i++)
    {
        int j = (i + 1) % points.Count;
        area += points[i].X * points[j].Y;
        area -= points[i].Y * points[j].X;
    }
    area /= 2.0;
    return area;
}
Traversal code (not efficient, needs to be tidied up, I will get to that at some point). Please note: I store each segment in the chain as 2 indices, rather than 1 as suggested by Darren. This is purely for my own implementation / rendering needs.
// okay now lets sort the segments so that they make a chain.
var sorted = new List<int>();
var visited = new Dictionary<int, bool>();
var startIndex = edges[0];
var nextIndex = edges[1];
sorted.Add(startIndex);
sorted.Add(nextIndex);
visited[0] = true;
visited[1] = true;
while (nextIndex != startIndex)
{
    for (int i = 0; i < edges.Count - 1; i += 2)
    {
        var j = i + 1;
        if (visited.ContainsKey(i) || visited.ContainsKey(j))
            continue;
        var iIndex = edges[i];
        var jIndex = edges[j];
        if (iIndex == nextIndex)
        {
            sorted.Add(nextIndex);
            sorted.Add(jIndex);
            nextIndex = jIndex;
            visited[j] = true;
            break;
        }
        else if (jIndex == nextIndex)
        {
            sorted.Add(nextIndex);
            sorted.Add(iIndex);
            nextIndex = iIndex;
            visited[i] = true;
            break;
        }
    }
}
return sorted;
The answer to your question actually depends on how the triangular mesh is represented in memory. If you use the half-edge data structure, then the algorithm is extremely simple, since everything was already done during the half-edge data structure's construction:
1. Start from any boundary half-edge HE_edge* edge0 (it can be found by a linear search over all half-edges as the first edge without a valid face). Set the current half-edge HE_edge* edge = edge0.
2. Output the destination edge->vert of the current edge.
3. The next edge in clockwise order around the shape (and counter-clockwise order around the surrounding "hole") will be edge->next.
4. Stop when you reach edge0.
To efficiently enumerate the boundary edges in the opposite (counter-clockwise) order, the data structure needs to have a prev data field, which many existing implementations of the half-edge data structure do provide in addition to next, e.g. MeshLib.
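As a rough illustration of that loop, a minimal C# sketch, assuming a half-edge record where a null face marks a boundary half-edge (the field names mirror the steps above and are not any particular library's API):
class HalfEdge
{
    public int Vert;       // index of the destination vertex
    public HalfEdge Next;  // next half-edge along the same loop
    public object Face;    // null for boundary half-edges
}

static List<int> BoundaryLoop(IEnumerable<HalfEdge> halfEdges)
{
    // Linear search for any boundary half-edge (no valid face);
    // assumes the mesh actually has a boundary.
    HalfEdge edge0 = null;
    foreach (var he in halfEdges)
    {
        if (he.Face == null) { edge0 = he; break; }
    }
    // Walk the loop via Next until we come back to the start.
    var loop = new List<int>();
    var edge = edge0;
    do
    {
        loop.Add(edge.Vert); // output the destination vertex
        edge = edge.Next;
    } while (edge != edge0);
    return loop;
}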

Logic to randomly reorder remaining tile positions using a tile map array

The title explains most of the question.
I have a tile grid which is represented by a 2D array. Some tiles are marked as empty (but they exist in the array, for certain ongoing uses) while others are in a normal state.
What I need to do is to reorder the remaining (non-empty) tiles in the grid so that all (or most) end up in a different non-empty position. If I just iterate over all the non-empty positions and swap each tile with another random one, I might already be reordering many of them automatically (the swapped ones).
So I was wondering if there's some technique I can follow to reorder the grid satisfactorily with minimal looping. Any hints?
public void RandomizeGrid<T>(T[,] grid, Func<T,bool> isEmpty)
{
    // Create a list of the indices of all non-empty cells.
    var indices = new List<Point>();
    int width = grid.GetLength(0);
    int height = grid.GetLength(1);
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            if (!isEmpty(grid[x,y])) // function to check emptiness
            {
                indices.Add(new Point(x,y));
            }
        }
    }
    // Randomize the cells using the index-array as displacement.
    int n = indices.Count;
    var rnd = new Random();
    for (int i = 0; i < n; i++)
    {
        int j = rnd.Next(i,n); // Random index i <= j < n
        if (i != j)
        {
            // Swap the two cells
            var p1 = indices[i];
            var p2 = indices[j];
            var tmp = grid[p1.X,p1.Y];
            grid[p1.X,p1.Y] = grid[p2.X,p2.Y];
            grid[p2.X,p2.Y] = tmp;
        }
    }
}
Would it meet your needs ("satisfactorily" is a bit vague) to ensure that every non-empty tile was swapped with one other non-empty tile one time?
Say you have a list :
(1,4,7,3,8,10)
we can write down the indices of the list
(0,1,2,3,4,5)
and perform N random swaps on the indices to shuffle it - maybe some numbers move, some don't.
(5,1,3,2,4,0)
Then take these pairwise as a sequence of swaps to perform on our original list.
(8,10,3,7,1,4)
If you have an odd number of elements, the leftover is swapped with any other element in the list.
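A minimal C# sketch of that idea, assuming a flat list of tiles (for the grid case you would collect the non-empty positions first, as in the answer above); the method name is illustrative:
static void PairSwapShuffle<T>(IList<T> items)
{
    var rnd = new Random();
    // Write down the indices of the list...
    var idx = new List<int>();
    for (int i = 0; i < items.Count; i++) idx.Add(i);
    // ...and shuffle them (Fisher-Yates): maybe some move, some don't.
    for (int i = idx.Count - 1; i > 0; i--)
    {
        int j = rnd.Next(i + 1);
        (idx[i], idx[j]) = (idx[j], idx[i]);
    }
    // Take the shuffled indices pairwise as a sequence of swaps.
    for (int i = 0; i + 1 < idx.Count; i += 2)
        (items[idx[i]], items[idx[i + 1]]) = (items[idx[i + 1]], items[idx[i]]);
    // With an odd count, swap the leftover with any other element.
    if (idx.Count % 2 == 1 && idx.Count > 1)
        (items[idx[idx.Count - 1]], items[idx[0]]) = (items[idx[0]], items[idx[idx.Count - 1]]);
}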
