Pseudocode for Swept AABB Collision

I think "swept" means determining whether objects will collide at some point, not just whether they are currently colliding, but if I'm wrong, tell me.
I have objects with axis-aligned bounding boxes. The boxes can be different sizes, but they are always rectangular.
I've tried and tried to figure out an algorithm to determine whether two moving AABB objects will collide at some point, but I am having a really hard time. I read a question on here about determining the time intervals during which the two objects will overlap, and I had no problem visualizing it, but implementing it was another story: it seems like there are too many exceptions, and I don't seem to be doing it correctly.
The objects can only move in straight lines (they can change direction, e.g. turn around, but they always stay on the axis; trying to turn off the axis simply doesn't work), and their bounding boxes don't rotate or do anything like that. Velocity can change, but that doesn't matter, since the point of the method is to determine whether, given the objects' current state, they are on a "collision course". If you need any more information, let me know.
If someone could provide some pseudocode (or real code), that would be great. I read a document called Intersection of Convex Objects: The Method of Separating Axes, but I didn't understand some of the pseudocode in it (what does Union mean?).
Any help is appreciated, thanks.

When a collision occurs, the boxes touch on one side. So you can check, for each pair of facing sides (LR, RL, UD, DU), whether and when they would touch.
To simplify the problem, you can translate the boxes so that the first box is at the origin and is not moving; only the relative velocity matters.
Something like the following pseudocode:
// Work in B's frame: treat B as static and give A the relative velocity.
vX = A.xV - B.xV;
vY = A.yV - B.yV;

// Signed distances between each pair of facing sides.
dLR = B.L - A.R;   // > 0 while A is entirely left of B
dRL = A.L - B.R;   // > 0 while A is entirely right of B
dUD = B.U - A.D;   // > 0 unless A is entirely above B
dDU = A.U - B.D;   // > 0 unless A is entirely below B

// Times at which each pair of facing sides meets. This assumes
// vX != 0 and vY != 0; if a component is zero, skip those pairs and
// just test the current overlap on that axis. It also assumes the
// boxes are not already overlapping.
tLR = dLR / vX;
tRL =-dRL / vX;
tUD = dUD / vY;
tDU =-dDU / vY;

// A future touch on one axis is a real collision only if the boxes
// also overlap on the other axis at that instant.
if((tLR > 0) && (dUD - vY*tLR > 0) && (dDU + vY*tLR > 0)) return true;
if((tRL > 0) && (dUD - vY*tRL > 0) && (dDU + vY*tRL > 0)) return true;
if((tUD > 0) && (dLR - vX*tUD < 0) && (dRL + vX*tUD < 0)) return true;
if((tDU > 0) && (dLR - vX*tDU < 0) && (dRL + vX*tDU < 0)) return true;
return false;
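For anyone who wants something runnable, here is a small Python sketch of the same test in its equivalent "interval" form: per axis, compute the time interval during which the boxes' extents overlap, and report a collision iff the two intervals intersect at some t >= 0. The (left, bottom, right, top) box representation and the names are illustrative, not part of the answer above.

def swept_aabb_hit(a, va, b, vb):
    # a, b: boxes as (left, bottom, right, top); va, vb: velocities (vx, vy)
    INF = float('inf')
    vx = va[0] - vb[0]          # A's velocity relative to B
    vy = va[1] - vb[1]

    def axis_interval(alo, ahi, blo, bhi, v):
        # times during which [alo + v*t, ahi + v*t] overlaps [blo, bhi]
        if v == 0:
            return (-INF, INF) if ahi > blo and bhi > alo else (INF, -INF)
        t0 = (blo - ahi) / v
        t1 = (bhi - alo) / v
        return (min(t0, t1), max(t0, t1))

    tx0, tx1 = axis_interval(a[0], a[2], b[0], b[2], vx)
    ty0, ty1 = axis_interval(a[1], a[3], b[1], b[3], vy)
    t_enter = max(tx0, ty0)     # first time both axes overlap
    t_exit = min(tx1, ty1)      # last time both axes overlap
    return t_enter <= t_exit and t_exit >= 0

# example: A moving right towards a static B
# swept_aabb_hit((0, 0, 1, 1), (2, 0), (5, 0, 6, 1), (0, 0))  -> True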

Related

SAT Collision detection - Corners fix

I'm building a game and I'm currently working on the physics.
I'm using the SAT algorithm to detect collisions. The collisions are between the character (AxisAlignedBoundingBox) and some rectangles (with rotation).
Everything works fine, except for collisions near a corner in specific situations (this is a pretty well-known problem, but I didn't find any good solutions).
In Example 1, in the second scene, the character should move upwards (and stay on the obstacle).
Instead, it moves left.
In Example 2, in the second scene, the character should not get up onto the obstacle. Sometimes it does.
I know why this is happening: because of dx and dy, the Minimum Translation Vector isn't always the wanted one.
There are several solutions to this problem, but not a really good one (in terms of solving the problem without creating others!).
I'm willing to even use a totally different algorithm from the beginning.
Please give me a hint about an algorithm better than the SAT, or some workaround.
THANK YOU!
A picture is worth many words. The image shows the two boxes to test, the red and the black.
Note how the center of the black box is always on the darker red box when the two are just touching.
You can simplify any AABB test by increasing the size of one box by the size of the other. As long as you reference the boxes' positions by their centers, all works well.
// x,y are box centers
var bBox = { w : 100 , h : 50,  x : ?, y : ? }; // black
var rBox = { w : 200 , h : 200, x : ?, y : ? }; // red

// to test whether bBox touches rBox:
if(bBox.x > rBox.x - (rBox.w + bBox.w)/2 &&
   bBox.x < rBox.x + (rBox.w + bBox.w)/2 &&
   bBox.y > rBox.y - (rBox.h + bBox.h)/2 &&
   bBox.y < rBox.y + (rBox.h + bBox.h)/2){
    // boxes are touching
}
This also works if the boxes are moving: you just test whether the vector of bBox's movement intersects any of the expanded rBox's 4 sides (see the sketch below).
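For the moving case, that test reduces to a segment-versus-box check: sweep bBox's center along its motion and test the resulting segment against rBox expanded by bBox's size. Here is a minimal Python sketch using the standard slab method; the dict layout mirrors the snippet above, but the function itself is my own illustration:

def segment_hits_expanded_box(p0, p1, bbox, rbox):
    # p0, p1: bbox's center at the start and end of the move
    # expand rbox by bbox's size, then clip the segment against the slabs
    hw = (rbox['w'] + bbox['w']) / 2.0
    hh = (rbox['h'] + bbox['h']) / 2.0
    lo = (rbox['x'] - hw, rbox['y'] - hh)
    hi = (rbox['x'] + hw, rbox['y'] + hh)
    t0, t1 = 0.0, 1.0                      # parametric extent of the segment
    for axis in (0, 1):
        d = p1[axis] - p0[axis]
        if d == 0:
            if not (lo[axis] <= p0[axis] <= hi[axis]):
                return False               # parallel to this slab and outside it
        else:
            ta = (lo[axis] - p0[axis]) / d
            tb = (hi[axis] - p0[axis]) / d
            if ta > tb:
                ta, tb = tb, ta
            t0, t1 = max(t0, ta), min(t1, tb)
            if t0 > t1:
                return False               # the segment misses the box
    return True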

Intersection of axis-aligned rectangular cuboids (MBR) in one dimension

Currently I'm doing benchmarks on time-series indexing algorithms. Since most of the time no reference implementations are available, I have to write my own (all in Java). At the moment I am stuck on section 6.2 of a paper called Indexing multi-dimensional time-series with support for multiple distance measures, available as a PDF here: http://hadjieleftheriou.com/papers/vldbj04-2.pdf
An MBR (minimum bounding rectangle) is basically a rectangular cuboid with some coordinates and directions (extents). As an example, P and Q are two MBRs with P.coord={0,0,0}, P.dir={1,1,3}, Q.coord={0.5,0.5,1} and Q.dir={1,1,1}, where the first entries represent the time dimension.
Now I would like to calculate MINDIST(Q,P) between Q and P, as the paper defines it.
However, I am not sure how to implement the "intersection of two MBRs in the time dimension" (Dim 1), since I am not sure what the intersection in the time dimension actually means. It is also not clear what h_Q, l_Q, l_P and h_P mean, since this notation is never explained (my guess is they mean something like the highest or lowest value of a dimension in the intersection).
I would highly appreciate it if someone could explain how to calculate the intersection of two MBRs in the first dimension, and maybe enlighten me with an interpretation of the notation. Thanks!
Well, Figure 14 in your paper explains the time intersection. And since the rectangles are axis-aligned, it makes sense to use a high and a low value on each coordinate - that is presumably what h_Q, l_Q, l_P and h_P stand for.
The multiplication sign you see is not a cross product, just a normal multiplication, because on both sides of it you have scalars, not vectors.
However, I must agree that the discussion on page 14 is rather fuzzy, but it seems to say that both types of intersections (complete and partial), when they carry a t subscript, mean the length of the intersection along the t coordinate.
Thus it seems you could factor out the time intersection to get a formula of the form MINDIST(P,Q) = sqrt( l_t(P,Q) * sum_d x_d^2 ), where l_t(P,Q) is the length of the overlap of the two time intervals and x_d is the gap between the MBRs in dimension d (zero when they overlap in that dimension) - exactly what the pseudocode below computes.
It is worth noting that, maybe counter-intuitively, when your objects don't intersect in the time dimension, their MINDIST is defined to be 0.
Hence the following pseudo-code:
mindist(P, Q)
{
    if( Q.coord[0] + Q.dir[0] < P.coord[0] ||
        Q.coord[0] > P.coord[0] + P.dir[0] )
        return 0;
    time = min(Q.coord[0] + Q.dir[0], P.coord[0] + P.dir[0])
         - max(Q.coord[0], P.coord[0]);
    sum = 0;
    for(d=1; d<D; ++d)
    {
        if( Q.coord[d] + Q.dir[d] < P.coord[d] )
            x = Q.coord[d] + Q.dir[d] - P.coord[d];
        else if( P.coord[d] + P.dir[d] < Q.coord[d] )
            x = P.coord[d] + P.dir[d] - Q.coord[d];
        else
            x = 0;
        sum += x*x;
    }
    return sqrt(time * sum);
}
Note that the absolute values in the paper are unnecessary, since we just checked which values were bigger and thus know the sign of each term before squaring it.
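For completeness, here is a direct Python translation of that pseudocode, run on the example MBRs from the question; the dict representation is just for illustration.

import math

def mindist(P, Q, D):
    # no overlap in the time dimension: MINDIST is defined to be 0
    if (Q['coord'][0] + Q['dir'][0] < P['coord'][0] or
            Q['coord'][0] > P['coord'][0] + P['dir'][0]):
        return 0.0
    # length of the overlap of the two time intervals
    time = (min(Q['coord'][0] + Q['dir'][0], P['coord'][0] + P['dir'][0])
            - max(Q['coord'][0], P['coord'][0]))
    total = 0.0
    for d in range(1, D):
        if Q['coord'][d] + Q['dir'][d] < P['coord'][d]:
            x = Q['coord'][d] + Q['dir'][d] - P['coord'][d]
        elif P['coord'][d] + P['dir'][d] < Q['coord'][d]:
            x = P['coord'][d] + P['dir'][d] - Q['coord'][d]
        else:
            x = 0.0                # the intervals overlap in this dimension
        total += x * x
    return math.sqrt(time * total)

P = {'coord': [0, 0, 0], 'dir': [1, 1, 3]}
Q = {'coord': [0.5, 0.5, 1], 'dir': [1, 1, 1]}
print(mindist(P, Q, 3))   # 0.0 here: the time overlap is 0.5, but Q lies
                          # inside P in both space dimensions, so every x_d is 0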

Multilateration implementation with inaccurate distance data

I am trying to create an Android smartphone application that uses Apple's iBeacon technology to determine its own indoor location. I have already managed to discover all available beacons and calculate the distance to them from the RSSI signal.
Currently I face the problem that I am not able to find any library or implementation of an algorithm that calculates the estimated location in 2D from 3 (or more) distances to fixed points, given that these distances are not accurate (meaning that the three "trilateration circles" do not intersect in one point).
I would be deeply grateful if anybody could post a link to, or an implementation of, such an algorithm in any common programming language (Java, C++, Python, PHP, Javascript or whatever). I have already read a lot on Stack Overflow about this topic, but could not find any answer I was able to convert into code (only some mathematical approaches with matrices and inverting them, calculating with vectors and the like).
EDIT
I thought of my own approach, which works quite well for me, but is not that efficient or scientific. I iterate over every meter (or, as in my example, every 0.1 meter) of the location grid and score each grid point as a candidate for the handset's actual position, by comparing the distance from that point to every beacon with the distance calculated from the received RSSI signal.
Code example:
public Location trilaterate(ArrayList<Beacon> beacons, double maxX, double maxY)
{
    for (double x = 0; x <= maxX; x += .1)
    {
        for (double y = 0; y <= maxY; y += .1)
        {
            double currentLocationProbability = 0;
            for (Beacon beacon : beacons)
            {
                // distance difference between the rssi-calculated distance
                // to the beacon transmitter and the current location:
                // |sqrt(dX^2 + dY^2) - distanceToTransmitter|
                double distanceDifference = Math
                        .abs(Math.sqrt(Math.pow(beacon.getLocation().x - x, 2)
                                + Math.pow(beacon.getLocation().y - y, 2))
                                - beacon.getCurrentDistanceToTransmitter());
                // weight the distance difference with the beacon's rssi-calculated
                // distance. The smaller the rssi-distance, the more the difference
                // is weighted (it is assumed that nearer beacons measure the
                // distance more accurately)
                distanceDifference /= Math.pow(beacon.getCurrentDistanceToTransmitter(), 0.9);
                // sum up all weighted distance differences for every beacon
                currentLocationProbability += distanceDifference;
            }
            // keep a set of the 5 most probable locations, to estimate the
            // accuracy of the measurement afterwards. If that is not needed,
            // a simple variable assignment for the most probable location
            // would do the job as well
            addToLocationMap(currentLocationProbability, x, y);
        }
    }
    Location bestLocation = getLocationSet().first().location;
    bestLocation.accuracy = calculateLocationAccuracy();
    Log.w("TRILATERATION", "Location " + bestLocation + " best with accuracy "
            + bestLocation.accuracy);
    return bestLocation;
}
Of course, the downside is that on a 300 m² floor I have 30,000 locations to iterate over, and for each one I measure the distance to every beacon I get a signal from (with 5 beacons, that is 150,000 calculations just to determine a single location). That's a lot - so I will leave the question open and hope for further solutions, or a good improvement of this existing one that makes it more efficient.
Of course, it does not have to be a trilateration approach, as the original title of this question suggested; an algorithm that uses more than three beacons for the location determination (multilateration) is also fine.
If the current approach is fine except for being too slow, then you can speed it up by recursively subdividing the plane. This works somewhat like finding nearest neighbors in a k-d tree. Suppose that we are given an axis-aligned box and wish to find the approximate best solution in the box. If the box is small enough, then return its center.
Otherwise, divide the box in half, either by x or by y, depending on which side is longer. For both halves, compute a bound on the solution quality as follows. Since the objective function is additive, sum lower bounds for each beacon. The lower bound for a beacon is the distance from its circle to the box, times the scaling factor. Recursively find the best solution in the child with the smaller lower bound. Examine the other child only if the best solution in the first child is worse than the other child's lower bound.
Most of the implementation work here is the box-to-circle distance computation. Since the box is axis-aligned, we can use interval arithmetic to determine the precise range of distances from box points to the circle center.
P.S.: Math.hypot is a nice function for computing 2D Euclidean distances.
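Here is a rough Python sketch of that subdivision idea. Two assumptions are mine rather than the answer's: the objective is the weighted residual sum from the question's code (weight 1/r^0.9), and the search is organized best-first with a heap instead of plain recursion.

import heapq, math

def box_point_dist_range(box, cx, cy):
    # (min, max) distance from any point of the axis-aligned box to (cx, cy)
    x0, y0, x1, y1 = box
    dx_min = max(x0 - cx, 0.0, cx - x1)
    dy_min = max(y0 - cy, 0.0, cy - y1)
    dx_max = max(cx - x0, x1 - cx)
    dy_max = max(cy - y0, y1 - cy)
    return math.hypot(dx_min, dy_min), math.hypot(dx_max, dy_max)

def lower_bound(box, beacons):
    # sum of per-beacon bounds: distance from the beacon's circle to the
    # box, with the same 1/r^0.9 weighting as in the question's code
    lb = 0.0
    for bx, by, r in beacons:
        dmin, dmax = box_point_dist_range(box, bx, by)
        resid = 0.0 if dmin <= r <= dmax else min(abs(dmin - r), abs(dmax - r))
        lb += resid / r ** 0.9
    return lb

def cost(x, y, beacons):
    return sum(abs(math.hypot(bx - x, by - y) - r) / r ** 0.9
               for bx, by, r in beacons)

def best_location(beacons, max_x, max_y, eps=0.05):
    best_cost, best_point = float('inf'), None
    heap = [(0.0, (0.0, 0.0, max_x, max_y))]
    while heap:
        lb, box = heapq.heappop(heap)
        if lb >= best_cost:
            break  # no remaining box can contain a better point
        x0, y0, x1, y1 = box
        cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        c = cost(cx, cy, beacons)
        if c < best_cost:
            best_cost, best_point = c, (cx, cy)
        if x1 - x0 < eps and y1 - y0 < eps:
            continue  # box is small enough; its center was our sample
        if x1 - x0 >= y1 - y0:  # split the longer side
            children = [(x0, y0, cx, y1), (cx, y0, x1, y1)]
        else:
            children = [(x0, y0, x1, cy), (x0, cy, x1, y1)]
        for child in children:
            heapq.heappush(heap, (lower_bound(child, beacons), child))
    return best_point

# beacons as (x, y, rssi_distance) tuples, e.g.:
# print(best_location([(0, 0, 7), (0, 10, 7), (10, 5, 16)], 30, 10))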
Instead of taking the confidence levels of individual beacons into account, I would try to assign an overall confidence level to your result after you make the best guess you can with the available data. I don't think the only available metric (perceived power) is a good indicator of accuracy: with poor geometry or a misbehaving beacon, you could be trusting poor data highly. It might make better sense to come up with an overall confidence level based on how well the perceived distances to the beacons line up with the calculated point, assuming you trust all beacons equally.
I wrote some Python below that comes up with a best guess from the provided data in the 3-beacon case, by calculating the two points of intersection of the circles for the first two beacons and then choosing the point that best matches the third. It's meant to get you started on the problem and is not a final solution. If the beacons don't intersect, it slightly increases the radius of each until they do meet or a limit is reached. Likewise, it makes sure the third beacon agrees within a settable threshold. For n beacons, I would pick the 3 or 4 strongest signals and use those. There are tons of optimizations that could be done, and I think this is a trial-by-fire problem due to the unwieldy nature of beaconing.
import math

beacons = [[0.0,0.0,7.0],[0.0,10.0,7.0],[10.0,5.0,16.0]] # x, y, radius

def point_dist(x1,y1,x2,y2):
    x = x2-x1
    y = y2-y1
    return math.sqrt((x*x)+(y*y))

# determines two points of intersection for two circles [x,y,radius]
# returns None if the circles do not intersect
def circle_intersection(beacon1,beacon2):
    r1 = beacon1[2]
    r2 = beacon2[2]
    dist = point_dist(beacon1[0],beacon1[1],beacon2[0],beacon2[1])
    heron_root = (dist+r1+r2)*(-dist+r1+r2)*(dist-r1+r2)*(dist+r1-r2)
    if ( heron_root > 0 ):
        heron = 0.25*math.sqrt(heron_root)
        xbase = (0.5)*(beacon1[0]+beacon2[0]) + (0.5)*(beacon2[0]-beacon1[0])*(r1*r1-r2*r2)/(dist*dist)
        xdiff = 2*(beacon2[1]-beacon1[1])*heron/(dist*dist)
        ybase = (0.5)*(beacon1[1]+beacon2[1]) + (0.5)*(beacon2[1]-beacon1[1])*(r1*r1-r2*r2)/(dist*dist)
        ydiff = 2*(beacon2[0]-beacon1[0])*heron/(dist*dist)
        return (xbase+xdiff,ybase-ydiff),(xbase-xdiff,ybase+ydiff)
    else:
        # no intersection, need to pseudo-increase beacon power and try again
        return None

# find the two points of intersection between beacon0 and beacon1;
# beacon2 will determine the better of the two points
failing = True
power_increases = 0
while failing and power_increases < 10:
    res = circle_intersection(beacons[0],beacons[1])
    if ( res ):
        intersection = res
    else:
        beacons[0][2] *= 1.001
        beacons[1][2] *= 1.001
        power_increases += 1
        continue
    failing = False

# make sure the best fit is within x% (10% of the total distance from the
# 3rd beacon in this case), otherwise the results are too far off
THRESHOLD = 0.1
if failing:
    print 'Bad Beacon Data (Beacon0 & Beacon1 don\'t intersect after many "power increases")'
else:
    # finding best point between beacon1 and beacon2
    dist1 = point_dist(beacons[2][0],beacons[2][1],intersection[0][0],intersection[0][1])
    dist2 = point_dist(beacons[2][0],beacons[2][1],intersection[1][0],intersection[1][1])
    if ( math.fabs(dist1-beacons[2][2]) < math.fabs(dist2-beacons[2][2]) ):
        best_point = intersection[0]
        best_dist = dist1
    else:
        best_point = intersection[1]
        best_dist = dist2
    best_dist_diff = math.fabs(best_dist-beacons[2][2])
    if best_dist_diff < THRESHOLD*best_dist:
        print best_point
    else:
        print 'Bad Beacon Data (Beacon2 distance to best point not within threshold)'
If you want to trust closer beacons more, you may want to calculate the intersection points between the two closest beacons and then use the farther beacon as a tie-break. Keep in mind that almost anything you do with "confidence levels" for the individual measurements will be a hack at best. Since you will always be working with very bad data, you will definitely need to loosen up the power_increases limit and the threshold percentage.
You have 3 points, A(xA,yA,zA), B(xB,yB,zB) and C(xC,yC,zC), which are respectively at approximately dA, dB and dC from your goal point G(xG,yG,zG).
Let's say cA, cB and cC are the confidence rates (0 < cX <= 1) of each point.
Basically, you might take something really close to 1, like {0.95, 0.97, 0.99}.
If you don't know, try different coefficients depending on the average distance. If a distance is really big, you're likely to be not very confident about it.
Here is the way I'd do it:
var sum = (cA*dA) + (cB*dB) + (cC*dC);
dA = cA*dA/sum;
dB = cB*dB/sum;
dC = cC*dC/sum;
xG = (xA*dA) + (xB*dB) + (xC*dC);
yG = (yA*dA) + (yB*dB) + (yC*dC);
zG = (zA*dA) + (zB*dB) + (zC*dC);
Basic and not really smart, but it will do the job for some simple tasks.
EDIT
You can take any confidence coefficient you want in [0, inf), but IMHO restricting it to [0, 1] is a good idea to keep the result realistic.

How to convert an image to a Box2D polygon using the alpha layer and triangulation?

I'm coding a game using Box2D and SFML, and I'd like to let my users import their own textures to use as physics polygons. The polygons are created from the images' alpha layer. It doesn't need to be pixel-perfect, and that is exactly my problem: if it were pixel-perfect, it would be way too buggy when the player gets stuck between two rather complex shapes. I have a working edge-detection algorithm that traces the outline pixel by pixel (the shape it's tracing is simply a square with a dip), and after that a simplifying algorithm that removes the redundant points. That works fine for me, but if every single corner is traced at that precision, I'm going to have problems. The code for the vector simplifying is this:
//borders is a std::vector of Box2D b2Vec2 (a 2D vector class containing an x and a y)
//vector shortener: remove points that lie on the straight line between their neighbours
for(unsigned int i = 1; i < borders.size(); i++)
{
    //the step from the previous point to this one
    int x = borders[i].x - borders[i-1].x;
    int y = borders[i].y - borders[i-1].y;
    int counter = 0;
    //while points are aligned (and we stay inside the vector)...
    while(i + counter < borders.size()
       && borders[i].x + x*counter == borders[i + counter].x
       && borders[i].y + y*counter == borders[i + counter].y)
    {
        counter++;
    }
    //erase the interior of the collinear run, keeping its end points
    if(counter > 1)
    {
        borders.erase(borders.begin() + i, borders.begin() + i + counter - 1);
    }
}
So my question is: how can I transform the resulting set of vectors into something a bit less precise? Are there any rounding or polygon-simplification algorithms out there? If so, which is best? Any tips you can give me? It doesn't matter whether the resulting polygon is convex or concave; I'm triangulating it anyway.
Thanks,
AsterAlff

Calculating which tiles are lit in a tile-based game ("raytracing")

I'm writing a little tile-based game, for which I'd like to support light sources. But my algorithm-fu is too weak, hence I come to you for help.
The situation is like this: There is a tile-based map (held as a 2D array), containing a single light source and several items standing around. I want to calculate which tiles are lit up by the light source, and which are in shadow.
A visual aid of what it would look like, approximately: L is the light source, the Xs are items blocking the light, the 0s are lit tiles, and the -s are tiles in shadow.
0 0 0 0 0 0 - - 0
0 0 0 0 0 0 - 0 0
0 0 0 0 0 X 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 0 L 0 0 0 0
0 0 0 0 0 0 0 0 0
0 0 0 X X X X 0 0
0 0 0 - - - - - 0
0 0 - - - - - - -
A fractional system would be even better, of course, where a tile can be in half-shadow due to being partially obscured. The algorithm wouldn't have to be perfect - just not obviously wrong and reasonably fast.
(Of course, there would be multiple light sources, but that's just a loop.)
Any takers?
The roguelike development community has a bit of an obsession with line-of-sight, field-of-view algorithms.
Here's a link to a roguelike wiki article on the subject:
http://roguebasin.roguelikedevelopment.org/index.php?title=Field_of_Vision
For my roguelike game, I implemented a shadow casting algorithm (http://roguebasin.roguelikedevelopment.org/index.php?title=Shadow_casting) in Python. It was a bit complicated to put together, but ran reasonably efficiently (even in pure Python) and generated nice results.
The "Permissive Field of View" seems to be gaining popularity as well:
http://roguebasin.roguelikedevelopment.org/index.php?title=Permissive_Field_of_View
You can get into all sorts of complexity with occlusion calculations, or you can go for the simple brute-force method: for every cell, use a line-drawing algorithm such as the Bresenham line algorithm to examine every cell between it and the light source. If any of those is a filled cell, or (if you have only one light source) a cell that has already been tested and found to be in shadow, your cell is in shadow. If you encounter a cell known to be lit, your cell will likewise be lit. An easy optimisation is to set the state of every cell you encounter along the line to whatever the final outcome is.
This is more or less what I used in my 2004 IOCCC winning entry. Obviously that doesn't make good example code, though. ;)
Edit: As loren points out, with these optimisations, you only need to pick the pixels along the edge of the map to trace from.
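A minimal Python sketch of this brute-force method; the grid representation (blocked[y][x] booleans) and the function names are illustrative, not from the answer:

def bresenham_line(x0, y0, x1, y1):
    # yield the grid cells on the line from (x0, y0) to (x1, y1)
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if (x0, y0) == (x1, y1):
            return
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def is_lit(blocked, x, y, lx, ly):
    # a cell is lit if no blocking cell lies strictly between it and the light
    for cx, cy in bresenham_line(x, y, lx, ly):
        if (cx, cy) not in ((x, y), (lx, ly)) and blocked[cy][cx]:
            return False
    return True

The caching trick described above bolts on naturally: keep a lit/shadow/unknown state per cell and fill it in for every cell visited along each traced line.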
The algorithms presented here seem to me to be doing more calculations than needed. I have not tested this, but I think it would work:
Initially, mark all pixels as lit.
For every pixel on the edge of the map: as Arachnid suggested, use Bresenham to trace a line from that pixel to the light. If the line strikes an obstruction, mark all pixels from the edge up to just beyond the obstruction as being in shadow.
Quick and dirty (depending on how big the array is):
Loop through each tile and draw a line to the light. If any part of the line hits an X, then the tile is in shadow.
(Optional): calculate the amount of X the line passes through and do some fancy maths to determine the proportion of the tile in shadow. NB: this could be done by anti-aliasing the line between the tile and the light (thereby looking at the other tiles along the route back to the light source); during the thresholding procedure these will appear as small anomalies. Depending on the logic used, you could potentially determine how much (if at all) the tile is in shadow.
You could also keep track of which tiles have already been tested, and thereby optimise the solution a little by never testing a tile twice.
This could be done pretty well using image manipulation, by drawing straight lines between pixels (tiles): if the lines are semi-transparent and the X blocks are semi-transparent again, you can threshold the image to determine whether a line has intersected an X.
If you have the option to use a 3rd-party tool, then I'd probably take it. In the long run it might turn out to be quicker - but you'd understand less about your game.
This is just for fun:
You can replicate the Doom 3 approach in 2D if you first do a step to convert your tiles into lines. For instance,
- - - - -
- X X X -
- X X - -
- X - - -
- - - - L
...would be reduced to three lines connecting the corners of the solid object in a triangle.
Then, do what the Doom 3 engine does: From the perspective of the light source, consider each "wall" that faces the light. (In this scene, only the diagonal line would be considered.) For each such line, project it into a trapezoid whose front edge is the original line, whose sides lie on lines from the light source through each end point, and whose back is far away, past the whole scene. So, it's a trapezoid that "points at" the light. It contains all the space that the wall casts its shadow on. Fill every tile in this trapezoid with darkness.
Proceed through all such lines and you will end up with a "stencil" that includes all the tiles visible from the light source. Fill these tiles with the light color. You may wish to light a tile a little less the farther it is from the source ("attenuation"), or do other fancy stuff.
Repeat for every light source in your scene.
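Here is a small Python sketch of the projection step, assuming the tiles have already been reduced to 2D wall segments, the light is a point, and no wall passes through the light itself (all of which are my assumptions):

import math

def shadow_quad(light, p1, p2, far=1000.0):
    # project a light-facing wall (p1, p2) away from the light to build the
    # trapezoid containing everything the wall shadows
    def project(p):
        dx, dy = p[0] - light[0], p[1] - light[1]
        d = math.hypot(dx, dy)
        return (p[0] + dx / d * far, p[1] + dy / d * far)
    return [p1, p2, project(p2), project(p1)]

def point_in_convex_poly(pt, poly):
    # a point is inside a convex polygon iff all edge cross products agree in sign
    sign = 0
    n = len(poly)
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        cross = (bx - ax) * (pt[1] - ay) - (by - ay) * (pt[0] - ax)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True

Darken every tile whose center lies inside one of these quads, then light the remainder of the light's radius.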
To check if a tile is in shadow you need to draw a straight line back to the light source. If the line intersects another tile that's occupied, then the tile you were testing is in shadow. Raytracing algorithms do this for every object (in your case tile) in the view.
The Raytracing article on Wikipedia has pseudocode.
Here is a very simple but fairly effective approach that runs in time linear in the number of tiles on screen. Each tile is either opaque or transparent (that's given to us), and each can be visible or shaded (that's what we're trying to compute).
We start by marking the avatar itself as "visible".
We then apply this recursive rule to determine the visibility of the remaining tiles.
If the tile is on the same row or column as the avatar, then it is visible only if the adjacent tile nearer to the avatar is visible and transparent.
If the tile is on a 45-degree diagonal from the avatar, then it is visible only if the neighboring diagonal tile (towards the avatar) is visible and transparent.
In all other cases, consider the three neighboring tiles that are closer to the avatar than the tile in question. For example, if the tile is at (x,y) and is above and to the right of the avatar, the three tiles to consider are (x-1, y), (x, y-1) and (x-1, y-1). The tile in question is visible if any of those three tiles is visible and transparent.
In order to make this work, the tiles must be inspected in a specific order, to ensure that the recursive cases are already computed. Here is an example of a working ordering, starting from 0 (the avatar itself) and counting up:
9876789
8543458
7421247
6310136
7421247
8543458
9876789
Tiles with the same number can be inspected in any order amongst themselves.
The result is not beautiful shadow-casting, but computes believable tile visibility.
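A compact Python sketch of that rule; the grid representation and the processing order (sorted by (max(|dx|,|dy|), min(|dx|,|dy|)), which reproduces the ordering shown above) are my choices, not the answerer's:

def compute_fov(transparent, ax, ay):
    # transparent: 2D list of booleans; (ax, ay): the avatar's tile
    h, w = len(transparent), len(transparent[0])
    visible = [[False] * w for _ in range(h)]
    visible[ay][ax] = True

    def sees_through(x, y):
        return 0 <= x < w and 0 <= y < h and visible[y][x] and transparent[y][x]

    order = sorted(((x, y) for y in range(h) for x in range(w)),
                   key=lambda p: (max(abs(p[0] - ax), abs(p[1] - ay)),
                                  min(abs(p[0] - ax), abs(p[1] - ay))))
    for x, y in order:
        if (x, y) == (ax, ay):
            continue
        sx = 0 if x == ax else (1 if x > ax else -1)
        sy = 0 if y == ay else (1 if y > ay else -1)
        if sx == 0 or sy == 0 or abs(x - ax) == abs(y - ay):
            # same row/column or exact diagonal: a single dependency
            visible[y][x] = sees_through(x - sx, y - sy)
        else:
            # general case: any of the three closer neighbors will do
            visible[y][x] = (sees_through(x - sx, y) or
                             sees_through(x, y - sy) or
                             sees_through(x - sx, y - sy))
    return visible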
I know this is a years-old question, but for anyone searching for this style of stuff I'd like to offer a solution I used once for a roguelike of my own: manually "precalculated" FOV. If the field of view of a light source has a maximum outer distance, it's really not very much effort to hand-draw the shadows created by blocking objects. You only need to draw one eighth of the circle (plus the straight and diagonal directions); you can use symmetry for the other eighths. You'll have as many shadow maps as you have squares in that eighth of a circle. Then just OR them together according to the objects.
The three major pros of this are:
1. It's very quick if implemented right
2. You get to decide how the shadow should be cast; no comparing which algorithm handles which situation best
3. No weird algorithm-induced edge cases that you have to somehow fix
The con is that you don't really get to implement a fun algorithm.
TK's solution is the one that you would generally use for this sort of thing.
For the partial-lighting scenario, you could have it so that if a tile turns out to be in shadow, it is split into 4 subtiles and each of those is tested. You could then split as far down as you wanted (see the sketch below).
Edit:
You can also optimise it a bit by not testing any of the tiles adjacent to a light - this would be more important when you have multiple light sources, I guess...
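A sketch of that splitting idea: rather than literally recursing, much the same effect comes from sampling a shadowed tile at a finer resolution and lighting it fractionally. The ray-marching is_lit_at below is an assumption of mine, not code from any answer here:

def is_lit_at(blocked, px, py, lx, ly, steps=64):
    # march from the sample point towards the light, checking blocking cells
    for i in range(1, steps):
        t = i / float(steps)
        x, y = px + (lx - px) * t, py + (ly - py) * t
        if (int(x), int(y)) != (int(lx), int(ly)) and blocked[int(y)][int(x)]:
            return False
    return True

def tile_light_fraction(blocked, tx, ty, lx, ly, n=4):
    # average n*n sample points spread over the tile (tx, ty)
    hits = 0
    for i in range(n):
        for j in range(n):
            px, py = tx + (i + 0.5) / n, ty + (j + 0.5) / n
            if is_lit_at(blocked, px, py, lx + 0.5, ly + 0.5):
                hits += 1
    return hits / float(n * n)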
I actually just recently wrote this functionality into one of my projects:
void Battle::CheckSensorRange(Unit* unit, bool fog){
    int sensorRange = 0;
    for(int i = 0; i < unit->GetSensorSlots(); i++){
        if(unit->GetSensorSlot(i)->GetSlotEmpty() == false){
            sensorRange += unit->GetSensorSlot(i)->GetSensor()->GetRange() + 1;
        }
    }
    int originX = unit->GetUnitX();
    int originY = unit->GetUnitY();
    float lineLength;
    vector <Place> maxCircle;
    //get a circle around the unit
    for(int i = originX - sensorRange; i < originX + sensorRange; i++){
        if(i < 0){
            continue;
        }
        for(int j = originY - sensorRange; j < originY + sensorRange; j++){
            if(j < 0){
                continue;
            }
            lineLength = sqrt( (float)((originX - i)*(originX - i)) + (float)((originY - j)*(originY - j)) );
            if(lineLength < (float)sensorRange){
                Place tmp;
                tmp.x = i;
                tmp.y = j;
                maxCircle.push_back(tmp);
            }
        }
    }
    //if we're supposed to fog everything we don't have to do any fancy calculations
    if(fog){
        for(int circleI = 0; circleI < (int) maxCircle.size(); circleI++){
            Map->GetGrid(maxCircle[circleI].x, maxCircle[circleI].y)->SetFog(fog);
        }
    }else{
        bool LOSCheck = true;
        vector <bool> placeCheck;
        //have to check all of the tiles to begin with
        for(int circleI = 0; circleI < (int) maxCircle.size(); circleI++){
            placeCheck.push_back(true);
        }
        //for all tiles in the circle, check LOS
        for(int circleI = 0; circleI < (int) maxCircle.size(); circleI++){
            vector<Place> lineTiles;
            lineTiles = line(originX, originY, maxCircle[circleI].x, maxCircle[circleI].y);
            //check each tile in the line for LOS
            for(int lineI = 0; lineI < (int) lineTiles.size(); lineI++){
                if(false == CheckPlaceLOS(lineTiles[lineI], unit)){
                    LOSCheck = false;
                    //mark this tile not to be checked again
                    placeCheck[circleI] = false;
                }
                if(false == LOSCheck){
                    break;
                }
            }
            if(LOSCheck){
                Map->GetGrid(maxCircle[circleI].x, maxCircle[circleI].y)->SetFog(fog);
            }else{
                LOSCheck = true;
            }
        }
    }
}
There's some extra stuff in there that you wouldn't need if you're adapting it for your own use. The type Place is just defined as an x and y position, for convenience's sake.
The line function is taken from Wikipedia, with very small modifications: instead of printing out x/y coordinates, I changed it to return a vector of Places with all the points on the line. The CheckPlaceLOS function just returns true or false based on whether the tile has an object on it. There are some more optimizations that could be done, but this is fine for my needs.
I have implemented tile-based field of view in a single C function. Here it is:
https://gist.github.com/zloedi/9551625
If you don't want to spend the time reinventing/re-implementing this, there are plenty of game engines out there. Ogre3D is an open-source game engine that fully supports lighting, as well as sound and game controls.
