We use an HMM (Hidden Markov Model) to localize a robot in a windy maze with damaged sensors. If the robot attempts to move in a direction, it will do so with a high probability, and with a low probability it will accidentally slide to either side. If a movement would take it into an obstacle, it bounces back to the original tile.
From any given position, the robot can sense in all four directions. It will notice an obstacle that is there with high probability, and will report an obstacle where there is none with low probability.
We keep a probability map over all possible places the robot might be in the maze, since it knows what the maze looks like. Initially the distribution is uniform.
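For illustration, a minimal sketch of that uniform initialisation, using the same map / ROWS / COLS fields as in the code further down (not the exact code from my project):
// Sketch: spread the initial belief uniformly over all non-obstacle tiles.
var belief = new double[ROWS, COLS];
int free = 0;
foreach (var t in map) if (!t.isObstacle) free++;   // tiles the robot could occupy
foreach (var t in map)
    if (!t.isObstacle)
        belief[t.pos.y, t.pos.x] = 1.0 / free;      // uniform prior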
I have completed the motion and sensing aspect of this and am getting the proper answers, but I am stuck on smoothing (backward algorithm).
Assume that the robot performs the following sequence of actions: senses, moves, senses, moves, senses. This gives us 3 states in our HMM model. Assume that the results I have at each step of the way so far are correct.
I am having a lot of trouble performing smoothing (backward algorithm), given that there are four conditional probabilities (one for each direction).
Assume SP is for smoothing probability, BP is for backward probability
Assume Sk is for a state, and Zk is for an observation at that state. The problem for me is figuring out how to construct my backwards equation given that each Zk is only for a single direction.
I know the algorithm for smoothing is: SP(k) is proportional to BP(k+1) * P(Sk | Z1:k)
Where BP(k+1) is defined as :
if (k == n) return 1; else return Sum over next states s of P(Zk+1 | Sk+1 = s) * P(Sk+1 = s | Sk) * BP(k+2) evaluated at Sk+1 = s
This is where I am having my trouble, mainly with the conditional probability portion of this equation, because each position has observations in four different directions. In other words, each state has four different evidence variables as opposed to just one! Do I average these values? Do I do a separate summation for them? How do I account for multiple observations at a given state and properly condense them into this equation, which only has room for one conditional probability?
Here is the code I have performing the smoothing:
public static void Smoothing(List<int[]> observations) {
    int n = observations.Count; //n is Total length of evidence sequence
    int k = n - 1; //k is the state we are trying to smooth. start with n-1
    for (; k >= 1; k--) { //Smooth all the way back to the first state
        for (int dir = 0; dir < 4; dir++) {
            //We must smooth each direction separately
            SmoothDirection(dir, observations, k, n);
        }
        Console.WriteLine($"Smoothing for k = {k}\n");
        UpdateMapMotion(mapHistory[k]);
        PrintMap();
    }
}

public static void SmoothDirection(int dir, List<int[]> observations, int k, int n) {
    var alphas = new double[ROWS, COLS];
    var normalizer = 0.0;
    int row, col;
    foreach (var t in map) {
        if (t.isObstacle) continue;
        row = t.pos.y;
        col = t.pos.x;
        alphas[row, col] = mapHistory[k][row, col]
            * Backwards(k, n, t, dir, observations, moves[^(n - k)]);
        normalizer += alphas[row, col];
    }
    UpdateHistory(k, alphas, normalizer);
}

public static void UpdateHistory(int index, double[,] alphas, double normalizer) {
    for (int r = 0; r < ROWS; r++) {
        for (int c = 0; c < COLS; c++) {
            mapHistory[index][r, c] = alphas[r, c] / normalizer;
        }
    }
}

public static double Backwards(int k, int n, Tile t, int dir, List<int[]> observations, int moveDir) {
    if (k == n) return 1;
    double p = 0;
    var nextStates = GetPossibleNextStates(t, moveDir);
    foreach (var s in nextStates) {
        p += Cond_Prob(s.hasObstacle[dir], observations[^(n - k)][dir] == 1) * Trans_Prob(t, s, moveDir)
            * Backwards(k + 1, n, s, dir, observations, moves[^(n - k)]);
    }
    return p;
}

public static List<Tile> GetPossibleNextStates(Tile t, int direction) {
    var tiles = new List<Tile>(); //Next States
    var perpDirs = GetPerpendicularDir(direction); //Perpendicular Directions
    //If obstacle in front of Tile t or on the sides, Tile t is a possible next state.
    if (t.hasObstacle[direction] || t.hasObstacle[perpDirs[0]] || t.hasObstacle[perpDirs[1]])
        tiles.Add(t);
    //If there is no obstacle in front of Tile t, then that tile is a possible next state.
    if (!t.hasObstacle[direction])
        tiles.Add(GetTileAtPos(t.pos + directions[direction]));
    //If there are no obstacles on the sides of Tile t, then those are possible next states.
    foreach (var dir in perpDirs) {
        if (!t.hasObstacle[dir])
            tiles.Add(GetTileAtPos(t.pos + directions[dir]));
    }
    return tiles;
}
TL;DR : How do I perform smoothing (backward algorithm) in a Hidden Markov Model when there are 4 evidences at each state as opposed to just 1?
SOLVED!
It was actually much simpler than I imagined.
I don't actually need to do each iteration separately for each direction.
I just need to replace the Cond_Prob() function with Joint_Cond_Prob() which finds the joint probability of all directional observations at a given state.
So P(Zk|Sk) is actually P(Zk1,...,Zk4|Sk), which (since the four directional readings are conditionally independent given the state) is just P(Zk1|Sk)P(Zk2|Sk)P(Zk3|Sk)P(Zk4|Sk)
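A minimal sketch of what such a Joint_Cond_Prob could look like, with made-up sensor accuracies just to show the shape of the product (the real values and the exact signature depend on your sensor model, so this is not the exact function from my code):
// Sketch: joint probability of the four directional readings given the state (tile t).
// The readings are treated as conditionally independent given the state, so the joint
// is the product of the per-direction sensor probabilities.
public static double Joint_Cond_Prob(Tile t, int[] observation)
{
    const double hit = 0.9;          // illustrative: P(sense obstacle | obstacle really there)
    const double falseAlarm = 0.05;  // illustrative: P(sense obstacle | no obstacle there)
    double p = 1.0;
    for (int dir = 0; dir < 4; dir++)
    {
        bool sensed = observation[dir] == 1;
        p *= t.hasObstacle[dir]
            ? (sensed ? hit : 1 - hit)
            : (sensed ? falseAlarm : 1 - falseAlarm);
    }
    return p;   // P(Zk1..Zk4 | Sk) = P(Zk1|Sk) * P(Zk2|Sk) * P(Zk3|Sk) * P(Zk4|Sk)
}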
I want to model the following puzzle with a graph.
The barman gives you three
glasses whose sizes are 1000ml, 700ml, and 400ml, respectively. The 700ml and 400ml glasses start
out full of beer, but the 1000ml glass is initially empty. You can get unlimited free beer if you win
the following game:
Game rule: You can keep pouring beer from one glass into another, stopping only when the source
glass is empty or the destination glass is full. You win if there is a sequence of pourings that leaves
exactly 200ml in the 700ml or 400 ml glass.
I was a little unsure of how to translate this problem into a graph. My thought was to represent the glasses as nodes in a weighted, undirected graph, where an edge indicates that glass u can be poured into glass v and vice versa; a walk would then be a sequence of pourings leading to the correct solution.
However, this approach of having three single nodes and undirected edges doesn't quite work for Dijkstra's algorithm or the other greedy algorithms I was planning to use to solve the problem. Would modeling the permutations of the pourings as a graph be more suitable?
You should store the whole state as a vertex. That is, the amount in each glass is one component of the state, so a state is an array of glassesCount numbers. For example, the initial state is (700, 400, 0).
After that, add the initial state to a queue and run BFS. BFS is applicable because every edge has equal weight 1: the weight is the number of pourings between two adjacent states, which is obviously 1, since we only generate states directly reachable from each state in the queue.
You may also use DFS, but BFS returns the shortest sequence of pourings, because BFS gives shortest paths in graphs with unit edge weights. If you are not interested in the shortest sequence of pourings but in any solution, DFS is fine. I will describe BFS because it has the same complexity as DFS and returns a better (shorter) solution.
For each state taken from the queue you have to generate all possible new states by pouring between every ordered pair of glasses, and you should check whether each pouring is possible.
For 3 glasses there are 3*(3-1) = 6 possible branches from each state, but I implemented a more generic solution, so you can use my code for N glasses.
import java.util.*;

public class Solution {
static HashSet<State> usedStates = new HashSet<State>();
static HashMap<State,State> prev = new HashMap<State, State>();
static ArrayDeque<State> queue = new ArrayDeque<State>();
static short[] limits = new short[]{700,400,1000};
public static void main(String[] args){
State initialState = new State(new Short[]{700,400,0});
usedStates.add(initialState);
queue.add(initialState);
prev.put(initialState,null);
boolean solutionFound = false;
while(!queue.isEmpty()){
State curState = queue.poll();
if(curState.isWinning()){
printSolution(curState);
solutionFound = true;
break; //stop BFS even if queue is not empty because solution already found
}
// go to all possible states
for(int i=0;i<curState.getGlasses().length;i++)
for(int j=0;j<curState.getGlasses().length;j++) {
if (i != j) { //pouring from i-th glass to j-th glass, can't pour to itself
short glassI = curState.getGlasses()[i];
short glassJ = curState.getGlasses()[j];
short possibleToPour = (short)(limits[j]-glassJ);
short amountToPour;
if(glassI<possibleToPour) amountToPour = glassI; //pour total i-th glass
else amountToPour = possibleToPour; //pour i-th glass partially
if(glassI!=0){ //prepare new state
Short[] newGlasses = Arrays.copyOf(curState.getGlasses(), curState.getGlasses().length);
newGlasses[i] = (short)(glassI-amountToPour);
newGlasses[j] = (short)(newGlasses[j]+amountToPour);
State newState = new State(newGlasses);
if(!usedStates.contains(newState)){ // if new state not handled before mark it as used and add to queue for future handling
usedStates.add(newState);
prev.put(newState, curState);
queue.add(newState);
}
}
}
}
}
if(!solutionFound) System.out.println("Solution does not exist");
}
private static void printSolution(State curState) {
System.out.println("below is 'reversed' solution. In order to get solution from initial state read states from the end");
while(curState!=null){
System.out.println("("+curState.getGlasses()[0]+","+curState.getGlasses()[1]+","+curState.getGlasses()[2]+")");
curState = prev.get(curState);
}
}
static class State{
private Short[] glasses;
public State(Short[] glasses){
this.glasses = glasses;
}
public boolean isWinning() {
return glasses[0]==200 || glasses[1]==200;
}
public Short[] getGlasses(){
return glasses;
}
@Override
public boolean equals(Object other){
return Arrays.equals(glasses,((State)other).getGlasses());
}
@Override
public int hashCode(){
return Arrays.hashCode(glasses);
}
}
}
Output:
below is 'reversed' solution. In order to get solution from initial
state read states from the end
(700,200,200)
(500,400,200)
(500,0,600)
(100,400,600)
(100,0,1000)
(700,0,400)
(700,400,0)
Interesting fact: this problem has no solution if you replace
200ml in g1 OR g2
with
200ml in g1 AND g2.
I mean, the state (200,200,700) is unreachable from (700,400,0).
If we want to model this problem with a graph, each node should represent a possible assignment of beer volume to glasses. Suppose we represent each glass with an object like this:
{ volume: <current volume>, max: <maximum volume> }
Then the starting node is a list of three such objects:
[ { volume: 0, max: 1000 }, { volume: 700, max: 700 }, { volume: 400, max: 400 } ]
An edge represents the action of pouring one glass into another. To perform such an action, we pick a source glass and a target glass, then calculate how much we can pour from the source to the target:
function pour(indexA, indexB, glasses) { // Pour from A to B.
var a = glasses[indexA],
b = glasses[indexB],
delta = Math.min(a.volume, b.max - b.volume);
a.volume -= delta;
b.volume += delta;
}
From the starting node we try pouring from each glass to every other glass. Each of these actions results in a new assignment of beer volumes. We check each one to see if we have achieved the target volume of 200. If not, we push the assignment into a queue.
To find the shortest path from the starting node to a target node, we append newly discovered nodes to the back of the queue and take nodes off the front, so nodes are processed in breadth-first order. This ensures that when we reach a target node, it is no farther from the starting node than any other node in the queue.
To make it possible to reconstruct the shortest path, we store the predecessor of each node in a dictionary. We can use the same dictionary to make sure that we don't explore a node more than once.
The following is a JavaScript implementation of this approach.
function pour(indexA, indexB, glasses) { // Pour from A to B.
var a = glasses[indexA],
b = glasses[indexB],
delta = Math.min(a.volume, b.max - b.volume);
a.volume -= delta;
b.volume += delta;
}
function glassesToKey(glasses) {
return JSON.stringify(glasses);
}
function keyToGlasses(key) {
return JSON.parse(key);
}
function print(s) {
s = s || '';
document.write(s + '<br />');
}
function displayKey(key) {
var glasses = keyToGlasses(key);
var parts = glasses.map(function (glass) {
return glass.volume + '/' + glass.max;
});
print('volumes: ' + parts.join(', '));
}
var startGlasses = [ { volume: 0, max: 1000 },
{ volume: 700, max: 700 },
{ volume: 400, max: 400 } ];
var startKey = glassesToKey(startGlasses);
function solve(targetVolume) {
var actions = {},
queue = [ startKey ],
tail = 0;
while (tail < queue.length) {
var key = queue[tail++]; // Pop from tail.
for (var i = 0; i < startGlasses.length; ++i) { // Pick source.
for (var j = 0; j < startGlasses.length; ++j) { // Pick target.
if (i != j) {
var glasses = keyToGlasses(key);
pour(i, j, glasses);
var nextKey = glassesToKey(glasses);
if (actions[nextKey] !== undefined) {
continue;
}
actions[nextKey] = { key: key, source: i, target: j };
for (var k = 1; k < glasses.length; ++k) {
if (glasses[k].volume === targetVolume) { // Are we done?
var path = [ actions[nextKey] ];
while (key != startKey) { // Backtrack.
var action = actions[key];
path.push(action);
key = action.key;
}
path.reverse();
path.forEach(function (action) { // Display path.
displayKey(action.key);
print('pour from glass ' + (action.source + 1) +
' to glass ' + (action.target + 1));
print();
});
displayKey(nextKey);
return;
}
queue.push(nextKey);
}
}
}
}
}
}
solve(200);
I had the idea of demonstrating the elegance of constraint programming after the two independent brute force solutions above were given. It doesn't actually answer the OP's question, just solves the puzzle. Admittedly, I expected it to be shorter.
par int:N = 7; % only an alcoholic would try more than 7 moves
var 1..N: n; % the sequence of states is clearly at least length 1. ie the start state
int:X = 10; % capacities
int:Y = 7;
int:Z = 4;
int:T = Y + Z;
array[0..N] of var 0..X: x; % the amount of liquid in glass X (the biggest glass)
array[0..N] of var 0..Y: y;
array[0..N] of var 0..Z: z;
constraint x[0] = 0; % initial contents
constraint y[0] = 7;
constraint z[0] = 4;
% the total amount of liquid is the same as the initial amount at all times
constraint forall(i in 0..n)(x[i] + y[i] + z[i] = T);
% we get free unlimited beer if any of these glasses contains 2dl
constraint y[n] = 2 \/ z[n] = 2;
constraint forall(i in 0..n-1)(
% d is the amount we can pour from one glass to another: 6 ways to do it
let {var int: d = min(y[i], X-x[i])} in (x[i+1] = x[i] + d /\ y[i+1] = y[i] - d) \/ % y to x
let {var int: d = min(z[i], X-x[i])} in (x[i+1] = x[i] + d /\ z[i+1] = z[i] - d) \/ % z to x
let {var int: d = min(x[i], Y-y[i])} in (y[i+1] = y[i] + d /\ x[i+1] = x[i] - d) \/ % x to y
let {var int: d = min(z[i], Y-y[i])} in (y[i+1] = y[i] + d /\ z[i+1] = z[i] - d) \/ % z to y
let {var int: d = min(y[i], Z-z[i])} in (z[i+1] = z[i] + d /\ y[i+1] = y[i] - d) \/ % y to z
let {var int: d = min(x[i], Z-z[i])} in (z[i+1] = z[i] + d /\ x[i+1] = x[i] - d) % x to z
);
solve minimize n;
output[show(n), "\n\n", show(x), "\n", show(y), "\n", show(z)];
and the output is
[0, 4, 10, 6, 6, 2, 2]
[7, 7, 1, 1, 5, 5, 7]
[4, 0, 0, 4, 0, 4, 2]
which luckily coincides with the other solutions. Feed it to the MiniZinc solver and wait... and wait. No loops, no BFS or DFS.
In a nutshell: I want to do a non-approximate version of Bresenham's line algorithm, but for a rectangle rather than a line, and whose points aren't necessarily aligned to the grid.
Given a square grid, and a rectangle comprising four non-grid-aligned points, I want to find a list of all grid squares that are covered, partially or completely, by the rectangle.
Bresenham's line algorithm is approximate – not all partially covered squares are identified. I'm looking for a "perfect" algorithm, that has no false positives or negatives.
It's an old question, but I have solved this issue (C++):
https://github.com/feelinfine/tracer
Maybe it will be useful for someone.
(Sorry for my poor English.)
Single line tracing
template <typename PointType>
std::set<V2i> trace_line(const PointType& _start_point, const PointType& _end_point, size_t _cell_size)
{
auto point_to_grid_fnc = [_cell_size](const auto& _point)
{
return V2i(std::floor((double)_point.x / _cell_size), std::floor((double)_point.y / _cell_size));
};
V2i start_cell = point_to_grid_fnc(_start_point);
V2i last_cell = point_to_grid_fnc(_end_point);
PointType direction = _end_point - _start_point;
//Moving direction (cells)
int step_x = (direction.x >= 0) ? 1 : -1;
int step_y = (direction.y >= 0) ? 1 : -1;
//Normalize vector
double hypot = std::hypot(direction.x, direction.y);
V2d norm_direction(direction.x / hypot, direction.y / hypot);
//Distance to the nearest square side
double near_x = (step_x >= 0) ? (start_cell.x + 1)*_cell_size - _start_point.x : _start_point.x - (start_cell.x*_cell_size);
double near_y = (step_y >= 0) ? (start_cell.y + 1)*_cell_size - _start_point.y : _start_point.y - (start_cell.y*_cell_size);
//How far along the ray we must move to cross the first vertical (ray_step_to_vside) / or horizontal (ray_step_to_hside) grid line
double ray_step_to_vside = (norm_direction.x != 0) ? near_x / norm_direction.x : std::numeric_limits<double>::max();
double ray_step_to_hside = (norm_direction.y != 0) ? near_y / norm_direction.y : std::numeric_limits<double>::max();
//How far along the ray we must move for horizontal (dx)/ or vertical (dy) component of such movement to equal the cell size
double dx = (norm_direction.x != 0) ? _cell_size / norm_direction.x : std::numeric_limits<double>::max();
double dy = (norm_direction.y != 0) ? _cell_size / norm_direction.y : std::numeric_limits<double>::max();
//Tracing loop
std::set<V2i> cells;
cells.insert(start_cell);
V2i current_cell = start_cell;
size_t grid_bound_x = std::abs(last_cell.x - start_cell.x);
size_t grid_bound_y = std::abs(last_cell.y - start_cell.y);
size_t counter = 0;
while (counter != (grid_bound_x + grid_bound_y))
{
if (std::abs(ray_step_to_vside) < std::abs(ray_step_to_hside))
{
ray_step_to_vside = ray_step_to_vside + dx; //to the next vertical grid line
current_cell.x = current_cell.x + step_x;
}
else
{
ray_step_to_hside = ray_step_to_hside + dy;//to the next horizontal grid line
current_cell.y = current_cell.y + step_y;
}
++counter;
cells.insert(current_cell);
};
return cells;
}
Get all cells
template <typename Container>
std::set<V2i> pick_cells(Container&& _points, size_t _cell_size)
{
if (_points.size() < 2 || _cell_size <= 0)
return std::set<V2i>();
Container points = std::forward<Container>(_points);
auto add_to_set = [](auto& _set, const auto& _to_append)
{
_set.insert(std::cbegin(_to_append), std::cend(_to_append));
};
//Outline
std::set<V2i> cells;
/*
for (auto it = std::begin(_points); it != std::prev(std::end(_points)); ++it)
add_to_set(cells, trace_line(*it, *std::next(it), _cell_size));
add_to_set(cells, trace_line(_points.back(), _points.front(), _cell_size));
*/
//Maybe this code works faster
std::vector<std::future<std::set<V2i> > > results;
using PointType = decltype(points.cbegin())::value_type;
for (auto it = points.cbegin(); it != std::prev(points.cend()); ++it)
results.push_back(std::async(trace_line<PointType>, *it, *std::next(it), _cell_size));
results.push_back(std::async(trace_line<PointType>, points.back(), points.front(), _cell_size));
for (auto& it : results)
add_to_set(cells, it.get());
//Inner
std::set<V2i> to_add;
int last_x = cells.begin()->x;
int counter = cells.begin()->y;
for (auto& it : cells)
{
if (last_x != it.x)
{
counter = it.y;
last_x = it.x;
}
if (it.y > counter)
{
for (int i = counter; i < it.y; ++i)
to_add.insert(V2i(it.x, i));
}
++counter;
}
add_to_set(cells, to_add);
return cells;
}
Types
template <typename _T>
struct V2
{
_T x, y;
V2(_T _x = 0, _T _y = 0) : x(_x), y(_y)
{
};
V2 operator-(const V2& _rhs) const
{
return V2(x - _rhs.x, y - _rhs.y);
}
bool operator==(const V2& _rhs) const
{
return (x == _rhs.x) && (y == _rhs.y);
}
//for std::set sorting
bool operator<(const V2& _rhs) const
{
return (x == _rhs.x) ? (y < _rhs.y) : (x < _rhs.x);
}
};
using V2d = V2<double>;
using V2i = V2<int>;
Usage
std::vector<V2d> points = { {200, 200}, {400, 400}, {500,100} };
size_t cell_size = 30;
auto cells = pick_cells(points, cell_size);
for (auto& it : cells)
... //do something with cells
You can use a scanline approach. The rectangle is a closed convex polygon, so it is sufficient to store the leftmost and rightmost pixel for each horizontal scanline. (And the top and bottom scanlines, too.)
The Bresenham algorithm tries to draw a thin, visually pleasing line without adjacent cells in the smaller dimension. We need an algorithm that visits each cell that the edges of the polygon pass through. The basic idea is to find the starting cell (x, y) for each edge and then to adjust x whenever the edge intersects a vertical border and to adjust y when it intersects a horizontal border.
We can represent the intersections by means of a normalised coordinate s that travels along the edge and that is 0.0 at the first node n1 and 1.0 at the second node n2.
var x = Math.floor(n1.x / cellsize);
var y = Math.floor(n1.y / cellsize);
var s = 0;
The vertical intersections can then be represented as equidistant steps of width dsx from an initial sx.
var dx = n2.x - n1.x;
var sx = 10; // default value > 1.0
// first intersection
if (dx < 0) sx = (cellsize * x - n1.x) / dx;
if (dx > 0) sx = (cellsize * (x + 1) - n1.x) / dx;
var dsx = (dx != 0) ? cellsize / Math.abs(dx) : 0;
Likewise for the horizontal intersections. A default value greater than 1.0 catches the cases of horizontal and vertical lines. Add the first point to the scanline data:
add(scan, x, y);
Then we can visit the next adjacent cell by looking at the next intersection with the smallest s.
while (sx <= 1 || sy <= 1) {
if (sx < sy) {
sx += dsx;
if (dx > 0) x++; else x--;
} else {
sy += dsy;
if (dy > 0) y++; else y--;
}
add(scan, x, y);
}
Do this for all four edges and with the same scanline data. Then fill all cells:
for (var y in scan) {
var x = scan[y].min;
var xend = scan[y].max + 1;
while (x < xend) {
// do something with cell (x, y)
x++;
}
}
(I have only skimmed the links MBo provided. It seems that the approach presented in that paper is essentially the same as mine. If so, please excuse the redundant answer, but after working this out I thought I could as well post it.)
This is sub-optimal but might give a general idea.
First off treat the special case of the rectangle being aligned horizontally or vertically separately. This is pretty easy to test for and make the rest simpler.
You can represent the rectangle as a set of four inequalities:
a1 x + b1 y >= c1
a1 x + b1 y <= c2
a3 x + b3 y >= c3
a3 x + b3 y <= c4
Because opposite edges of the rectangle are parallel, each pair of inequalities shares the same coefficients. You also have (up to a multiple) a3 = b1 and b3 = -a1. You can multiply each inequality by a common factor so that you are working with integers.
Now consider each scan line with a fixed value of y.
For each value of y find the four points where the lines intersect the scan line. That is find the solution with each line above. A little bit of logic will find the minimum and maximum values of x. Plot all pixels between these values.
Your condition that you want all partially covered squares makes things a little trickier. You can solve this by considering two adjacent scan lines: you want to plot the points between the minimum x for both lines and the maximum x for both lines. If, say,
a1 x + b1 y >= c is the inequality for the bottom-left line in the figure, then you want to find the largest x such that a1 x + b1 y < c; this will be floor((c - b1 y) / a1). Call this minx(y); also find minx(y+1), and the left-hand point will be the minimum of these two values.
There are many easy optimisations: you can find the y-values of the top and bottom corners, reducing the range of y-values to test, and you should only need to test two sides. For each end point of each line there is one multiplication, one subtraction and one division. The division is the slowest part, I think about 4 times slower than the other ops. You might be able to remove it with the Bresenham or DDA algorithms others have mentioned.
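For what it's worth, here is a minimal C# sketch of the same scanline idea in a slightly different form: instead of working with the inequalities directly, each edge of the (convex) rectangle is clipped to the horizontal strip [y, y+1], and the row is filled between the smallest and largest clipped x, which automatically covers the "two adjacent scan lines" point above. Unit-sized cells are assumed (scale your coordinates by the cell size first), and all names are illustrative.
using System;
using System.Collections.Generic;

static class RectangleCells
{
    // Clip the segment (x0,y0)-(x1,y1) to the horizontal strip yLo <= y <= yHi and
    // report the x-extent of the clipped piece, if any.
    static bool ClipToStrip(double x0, double y0, double x1, double y1,
                            double yLo, double yHi, out double xMin, out double xMax)
    {
        xMin = xMax = 0;
        if (Math.Max(y0, y1) < yLo || Math.Min(y0, y1) > yHi) return false;
        double t0 = 0, t1 = 1, dy = y1 - y0;
        if (dy != 0)
        {
            double ta = (yLo - y0) / dy, tb = (yHi - y0) / dy;
            t0 = Math.Max(0, Math.Min(ta, tb));
            t1 = Math.Min(1, Math.Max(ta, tb));
            if (t0 > t1) return false;
        }
        double xa = x0 + t0 * (x1 - x0), xb = x0 + t1 * (x1 - x0);
        xMin = Math.Min(xa, xb);
        xMax = Math.Max(xa, xb);
        return true;
    }

    // All unit grid cells (x, y) touched, partially or completely, by the convex polygon 'corners'.
    public static IEnumerable<(int x, int y)> CoveredCells(IList<(double x, double y)> corners)
    {
        double yTop = double.MaxValue, yBot = double.MinValue;
        foreach (var c in corners) { yTop = Math.Min(yTop, c.y); yBot = Math.Max(yBot, c.y); }

        for (int y = (int)Math.Floor(yTop); y <= (int)Math.Floor(yBot); y++)
        {
            double lo = double.MaxValue, hi = double.MinValue;
            for (int i = 0; i < corners.Count; i++)
            {
                var p = corners[i];
                var q = corners[(i + 1) % corners.Count];
                if (ClipToStrip(p.x, p.y, q.x, q.y, y, y + 1, out double a, out double b))
                {
                    lo = Math.Min(lo, a);
                    hi = Math.Max(hi, b);
                }
            }
            if (lo > hi) continue;                   // this strip misses the polygon
            for (int x = (int)Math.Floor(lo); x <= (int)Math.Floor(hi); x++)
                yield return (x, y);                 // partially or fully covered cell
        }
    }
}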
There is a method by Amanatides and Woo to enumerate all intersected cells:
A Fast Voxel Traversal Algorithm for Ray Tracing.
Here is a practical implementation.
As a side effect you'll also get the points of intersection with the grid lines, which may be useful if you need the areas of partially covered cells (for antialiasing etc.).
I was trying to solve the following problem:
There is a monkey which can walk around on a planar grid. The monkey
can move one space at a time left, right, up or down. That is, from
(x, y) the monkey can go to (x+1, y), (x-1, y), (x, y+1), and (x,
y-1). Points where the sum of the digits of the absolute value of the
x coordinate plus the sum of the digits of the absolute value of the y
coordinate are lesser than or equal to 19 are accessible to the
monkey. For example, the point (59, 79) is inaccessible because 5 + 9
+ 7 + 9 = 30, which is greater than 19. Another example: the point (-5, -7) is accessible because abs(-5) + abs(-7) = 5 + 7 = 12, which
is less than 19. How many points can the monkey access if it starts at
(0, 0), including (0, 0) itself?
I came up with the following brute force solution (pseudo code):
/*
legitPoints = {}; // all the allowed points that monkey can goto
list.push( Point(0,0) ); // start exploring from origin
while(!list.empty()){
Point p = list.pop_front(); // remove point
// if p has been seen before; ignore p => continue;
// else mark it and proceed further
if(legit(p)){
// since we are only exploring points in one quadrant,
// we don't need to check for -x direction and -y direction
// hence explore the following: this is like Breadth First Search
list.push(Point(p.x+1, p.y)); // explore x+1, y
list.push(Point(p.x, p.y+1)); // explore x, y+1
legitPoints.insert(p); // during insertion, ignore duplicates
// (although no duplicates should come through after above check)
// count properly using multipliers
// Origin => count once x == 0 && y == 0 => mul : 1
// Y axis => count twice x == 0 && y != 0 => mul : 2
// X axis => count twice x != 0 && y == 0 => mul : 2
// All others => mul : 4
}
return legitPoints.count();
}
*/
This is a very brute force solution. One of the optimizations I used was to scan only one quadrant instead of looking at all four. Another was to ignore points that we've already seen before.
However, looking at the final points, I was trying to find a pattern, perhaps a mathematical solution or a different approach that would be better than what I came up with.
Any thoughts ?
PS: If you want, I can post the data somewhere. It is interesting to look at it with any one of the axis sorted.
First quadrant visual (image omitted).
Here's what the whole grid looks like as an image (image omitted): the black squares are inaccessible, white squares are accessible, and gray squares are accessible and reachable by movement from the center. There's a 600x600 bounding box of black because the digits of 299 add to 20, so we only have to consider points inside that box.
This exercise is basically a "flood fill", with a shape which is just about the worst case possible for a flood fill. You can do the symmetry speedup if you like, though that's not really where the meat of the issue is--my solution runs in 160 ms without it (under 50ms with it).
The big speed wins are (1) do a line-filling flood so you don't have to put every point on the stack, and (2) manage your own stack instead of doing recursion. I built my stack as two dynamically-allocated vectors of ints (for x and y), and they grow to about 16k, so building whole stack frames that deep would definitely be a huge loss.
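For reference, a minimal C# sketch of the explicit-stack flood fill (the plain cell-at-a-time version, not the line-filling variant mentioned above, and not the author's 160 ms implementation); the bound of 300 and all names are illustrative:
using System;
using System.Collections.Generic;

static class MonkeyFlood
{
    static int DigitSum(int n)
    {
        n = Math.Abs(n);
        int s = 0;
        while (n > 0) { s += n % 10; n /= 10; }
        return s;
    }

    static bool Accessible(int x, int y) => DigitSum(x) + DigitSum(y) <= 19;

    // Count reachable points with an explicit stack (iterative flood fill, no recursion).
    public static int CountReachable()
    {
        const int R = 300;                       // |x|, |y| < 300 is enough: 2 + 9 + 9 = 20 > 19
        var seen = new bool[2 * R + 1, 2 * R + 1];
        var stack = new Stack<(int x, int y)>();
        stack.Push((0, 0));
        seen[R, R] = true;
        int count = 0;
        int[] dx = { 1, -1, 0, 0 }, dy = { 0, 0, 1, -1 };
        while (stack.Count > 0)
        {
            var (x, y) = stack.Pop();
            count++;                             // every popped point is reachable and accessible
            for (int d = 0; d < 4; d++)
            {
                int nx = x + dx[d], ny = y + dy[d];
                if (Math.Abs(nx) >= R || Math.Abs(ny) >= R) continue;
                if (seen[nx + R, ny + R] || !Accessible(nx, ny)) continue;
                seen[nx + R, ny + R] = true;
                stack.Push((nx, ny));
            }
        }
        return count;
    }
}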
Without looking for the ideal solution, I had something similar. For each point the monkey is at, I added the next 4 possibilities to a list and did the same for each of them recursively, but only if they had not been visited. This can also be done with multiprocessing to speed up the process.
Here is my solution, more like a BFS:
#include <map>
#include <queue>
#include <vector>
using namespace std;

int DigitSum(int num)
{
int sum = 0;
num = (num >= 0) ? num : -num;
while(num) {
sum += num % 10;
num /= 10;
}
return sum;
}
struct Point {
int x,y;
Point(): x(0), y(0) {}
Point(int x1, int y1): x(x1), y(y1) {}
friend bool operator<(const Point& p1, const Point& p2)
{
if (p1.x < p2.x) {
return true;
} else if (p1.x == p2.x) {
return (p1.y < p2.y);
} else {
return false;
}
}
};
void neighbor(vector<Point>& n, const Point& p)
{
if (n.size() < 4) n.resize(4);
n[0] = Point(p.x-1, p.y);
n[1] = Point(p.x+1, p.y);
n[2] = Point(p.x, p.y-1);
n[3] = Point(p.x, p.y+1);
}
int numMoves(const Point& start)
{
map<Point, bool> m;
queue<Point> q;
int count = 1; // count the starting point (0, 0) itself
vector<Point> neigh;
q.push(start);
m[start] = true;
while (! q.empty()) {
Point c = q.front();
neighbor(neigh, c);
for (auto p: neigh) {
if ((!m[p]) && (DigitSum(p.x) + DigitSum(p.y) <= 19)) {
count++;
m[p] = true;
q.push(p);
}
}
q.pop();
}
return count;
}
I'm not sure how different this may be from brainydexter's idea... roaming the one quadrant, I instituted a single array hash (index = 299 * y + x) and built the result with another array, each index storing only the points that expand from its previous index, for example:
first iteration, result = [[(0,0)]]
second iteration, result = [[(0,0)],[(0,1),(1,0)]]
...
On an old IBM Thinkpad in JavaScript, the speed seemed to vary from 35-120 milliseconds (fiddle here).
I would like to define a polygon and implement an algorithm which would check if a point is inside or outside of it.
Does anyone know if there is an example available of a similar algorithm?
If I remember correctly, the algorithm is to draw a horizontal line through your test point and count how many edges of the polygon the line crosses before reaching your point.
If the answer is odd, you're inside. If the answer is even, you're outside.
Edit: Yeah, what he said (Wikipedia):
C# code
bool IsPointInPolygon(List<Loc> poly, Loc point)
{
int i, j;
bool c = false;
for (i = 0, j = poly.Count - 1; i < poly.Count; j = i++)
{
if ((((poly[i].Lt <= point.Lt) && (point.Lt < poly[j].Lt))
|| ((poly[j].Lt <= point.Lt) && (point.Lt < poly[i].Lt)))
&& (point.Lg < (poly[j].Lg - poly[i].Lg) * (point.Lt - poly[i].Lt)
/ (poly[j].Lt - poly[i].Lt) + poly[i].Lg))
{
c = !c;
}
}
return c;
}
Location class
public class Loc
{
private double lt;
private double lg;
public double Lg
{
get { return lg; }
set { lg = value; }
}
public double Lt
{
get { return lt; }
set { lt = value; }
}
public Loc(double lt, double lg)
{
this.lt = lt;
this.lg = lg;
}
}
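A hypothetical usage of the method above (assuming it is in scope), with made-up coordinates: a small square from (9, 9) to (11, 11), latitude first to match the Loc constructor.
// Illustrative values only.
var square = new List<Loc>
{
    new Loc(9.0, 9.0),
    new Loc(9.0, 11.0),
    new Loc(11.0, 11.0),
    new Loc(11.0, 9.0)
};
bool inside = IsPointInPolygon(square, new Loc(10.0, 10.0));   // true
bool outside = IsPointInPolygon(square, new Loc(12.0, 10.0));  // false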
After searching the web and trying various implementations and porting them from C++ to C# I finally got my code straight:
public static bool PointInPolygon(LatLong p, List<LatLong> poly)
{
int n = poly.Count();
poly.Add(new LatLong { Lat = poly[0].Lat, Lon = poly[0].Lon });
LatLong[] v = poly.ToArray();
int wn = 0; // the winding number counter
// loop through all edges of the polygon
for (int i = 0; i < n; i++)
{ // edge from V[i] to V[i+1]
if (v[i].Lat <= p.Lat)
{ // start y <= P.y
if (v[i + 1].Lat > p.Lat) // an upward crossing
if (isLeft(v[i], v[i + 1], p) > 0) // P left of edge
++wn; // have a valid up intersect
}
else
{ // start y > P.y (no test needed)
if (v[i + 1].Lat <= p.Lat) // a downward crossing
if (isLeft(v[i], v[i + 1], p) < 0) // P right of edge
--wn; // have a valid down intersect
}
}
if (wn != 0)
return true;
else
return false;
}
private static int isLeft(LatLong P0, LatLong P1, LatLong P2)
{
double calc = ((P1.Lon - P0.Lon) * (P2.Lat - P0.Lat)
- (P2.Lon - P0.Lon) * (P1.Lat - P0.Lat));
if (calc > 0)
return 1;
else if (calc < 0)
return -1;
else
return 0;
}
The isLeft function was giving me rounding problems and I spent hours without realizing that I was doing the conversion wrong, so forgive me for the lame if block at the end of that function.
BTW, this is the original code and article:
http://softsurfer.com/Archive/algorithm_0103/algorithm_0103.htm
By far the best explanation and implementation can be found at
Point In Polygon Winding Number Inclusion
There is even a C++ implementation at the end of the well explained article. This site also contains some great algorithms/solutions for other geometry based problems.
I have modified and used the C++ implementation and also created a C# implementation. You definitely want to use the Winding Number algorithm as it is more accurate than the edge crossing algorithm and it is very fast.
I think there is a simpler and more efficient solution.
Here is the code in C++. It should be simple to convert it to C#.
int pnpoly(int npol, float *xp, float *yp, float x, float y)
{
int i, j, c = 0;
for (i = 0, j = npol-1; i < npol; j = i++) {
if ((((yp[i] <= y) && (y < yp[j])) ||
((yp[j] <= y) && (y < yp[i]))) &&
(x < (xp[j] - xp[i]) * (y - yp[i]) / (yp[j] - yp[i]) + xp[i]))
c = !c;
}
return c;
}
The complete solution in ASP.NET C#. You can see the complete detail here: how to find whether a point (lat, lon) is inside or outside a polygon using latitude and longitude.
private static bool checkPointExistsInGeofencePolygon(string latlnglist, string lat, string lng)
{
List<Loc> objList = new List<Loc>();
// sample string should be like this strlatlng = "39.11495,-76.873259|39.114588,-76.872808|39.112921,-76.870373|";
string[] arr = latlnglist.Split('|');
for (int i = 0; i <= arr.Length - 1; i++)
{
string latlng = arr[i];
string[] arrlatlng = latlng.Split(',');
Loc er = new Loc(Convert.ToDouble(arrlatlng[0]), Convert.ToDouble(arrlatlng[1]));
objList.Add(er);
}
Loc pt = new Loc(Convert.ToDouble(lat), Convert.ToDouble(lng));
if (IsPointInPolygon(objList, pt) == true)
{
return true;
}
else
{
return false;
}
}
private static bool IsPointInPolygon(List<Loc> poly, Loc point)
{
int i, j;
bool c = false;
for (i = 0, j = poly.Count - 1; i < poly.Count; j = i++)
{
if ((((poly[i].Lt <= point.Lt) && (point.Lt < poly[j].Lt)) |
((poly[j].Lt <= point.Lt) && (point.Lt < poly[i].Lt))) &&
(point.Lg < (poly[j].Lg - poly[i].Lg) * (point.Lt - poly[i].Lt) / (poly[j].Lt - poly[i].Lt) + poly[i].Lg))
c = !c;
}
return c;
}
Just a heads up (using an answer as I can't comment): if you want to use point-in-polygon for geofencing, then you need to change your algorithm to work with spherical coordinates. -180 longitude is the same as 180 longitude, and point-in-polygon will break in such a situation.
Building on kober's answer, I reworked it with cleaner, more readable code and added handling for longitudes that cross the date line:
public bool IsPointInPolygon(List<PointPosition> polygon, double latitude, double longitude)
{
bool isInIntersection = false;
int actualPointIndex = 0;
int pointIndexBeforeActual = polygon.Count - 1;
var offset = calculateLonOffsetFromDateLine(polygon);
longitude = longitude < 0.0 ? longitude + offset : longitude;
foreach (var actualPointPosition in polygon)
{
var p1Lat = actualPointPosition.Latitude;
var p1Lon = actualPointPosition.Longitude;
var p0Lat = polygon[pointIndexBeforeActual].Latitude;
var p0Lon = polygon[pointIndexBeforeActual].Longitude;
if (p1Lon < 0.0) p1Lon += offset;
if (p0Lon < 0.0) p0Lon += offset;
// Jordan curve theorem - odd even rule algorithm
if (isPointLatitudeBetweenPolyLine(p0Lat, p1Lat, latitude)
&& isPointRightFromPolyLine(p0Lat, p0Lon, p1Lat, p1Lon, latitude, longitude))
{
isInIntersection = !isInIntersection;
}
pointIndexBeforeActual = actualPointIndex;
actualPointIndex++;
}
return isInIntersection;
}
private double calculateLonOffsetFromDateLine(List<PointPosition> polygon)
{
double offset = 0.0;
var maxLonPoly = polygon.Max(x => x.Longitude);
var minLonPoly = polygon.Min(x => x.Longitude);
if (Math.Abs(minLonPoly - maxLonPoly) > 180)
{
offset = 360.0;
}
return offset;
}
private bool isPointLatitudeBetweenPolyLine(double polyLinePoint1Lat, double polyLinePoint2Lat, double poiLat)
{
return polyLinePoint2Lat <= poiLat && poiLat < polyLinePoint1Lat || polyLinePoint1Lat <= poiLat && poiLat < polyLinePoint2Lat;
}
private bool isPointRightFromPolyLine(double polyLinePoint1Lat, double polyLinePoint1Lon, double polyLinePoint2Lat, double polyLinePoint2Lon, double poiLat, double poiLon)
{
// lon <(lon1-lon2)*(latp-lat2)/(lat1-lat2)+lon2
return poiLon < (polyLinePoint1Lon - polyLinePoint2Lon) * (poiLat - polyLinePoint2Lat) / (polyLinePoint1Lat - polyLinePoint2Lat) + polyLinePoint2Lon;
}
I'll add one detail to help people who live in the... south of the Earth!!
If you're in Brazil (that's my case), our GPS coordinates are all negative,
and all these algorithms gave me wrong results.
The easiest way is to use the absolute values of the latitude and longitude of every point. In that case Jan Kobersky's algorithm is perfect.
Check if a point is inside a polygon or not -
Consider a polygon with vertices a1, a2, a3, a4, a5. The following set of steps should help in ascertaining whether point P lies inside the polygon or outside.
Compute the vector area of the triangle formed by the edge a1->a2 and the vectors connecting a2 to P and P to a1. Similarly, compute the vector area of each of the possible triangles having one side as a side of the polygon and the other two sides connecting P to that side's endpoints.
For a point to be inside a convex polygon (with the vertices taken in a consistent order), each of the triangles needs to have positive area. If even one of the triangles has a negative area, then the point P lies outside the polygon.
In order to compute the area of a triangle given vectors representing its 3 edges, refer to http://www.jtaylor1142001.net/calcjat/Solutions/VCrossProduct/VCPATriangle.htm
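For a convex polygon this boils down to checking the sign of a 2-D cross product per edge. A minimal sketch of that check (names are illustrative, not from the linked page; it treats boundary points as inside and works for either vertex orientation):
using System;
using System.Collections.Generic;

static class ConvexPolygonTest
{
    // Signed (doubled) triangle area of (a, b, p); positive when p is to the
    // left of the directed edge a -> b.
    static double Cross((double x, double y) a, (double x, double y) b, (double x, double y) p)
        => (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);

    // p is inside (or on the boundary of) the convex polygon when all the
    // signed areas formed with its edges have the same sign.
    public static bool Contains(IList<(double x, double y)> poly, (double x, double y) p)
    {
        bool hasPos = false, hasNeg = false;
        for (int i = 0; i < poly.Count; i++)
        {
            double c = Cross(poly[i], poly[(i + 1) % poly.Count], p);
            if (c > 0) hasPos = true;
            if (c < 0) hasNeg = true;
        }
        return !(hasPos && hasNeg);   // mixed signs => outside
    }
}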
The problem is easier if your polygon is convex. If so, you can do a simple test for each line to see if the point is on the inside or outside of that line (extending to infinity in both directions). Otherwise, for concave polygons, draw an imaginary ray from your point out to infinity (in any direction). Count how many times it crosses a boundary line. Odd means the point is inside, even means the point is outside.
This last algorithm is trickier than it looks. You will have to be very careful about what happens when your imaginary ray exactly hits one of the polygon's vertices.
If your imaginary ray goes in the -x direction, you can choose only to count lines that include at least one point whose y coordinate is strictly less than the y coordinate of your point. This is how you get most of the weird edge cases to work correctly.
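One common way to make that vertex rule precise is a half-open test in y, essentially the same trick the pnpoly-style code elsewhere in this thread uses. A small C# sketch (names are illustrative); XOR-ing the result over all edges gives the odd/even parity described above:
// Does the edge (ax,ay)-(bx,by) cross the leftward ray from the test point (px,py)?
static bool EdgeCrossesLeftwardRay(double px, double py,
                                   double ax, double ay, double bx, double by)
{
    // Count the edge only if it straddles the horizontal line y = py,
    // treating the interval as half-open so a shared vertex is counted exactly once.
    bool straddles = (ay <= py && py < by) || (by <= py && py < ay);
    if (!straddles) return false;
    // x-coordinate where the edge crosses y = py; the leftward ray hits it
    // only if that crossing lies strictly to the left of the test point.
    double xCross = ax + (bx - ax) * (py - ay) / (by - ay);
    return xCross < px;
}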
If you have a simple polygon (none of the lines cross) and you don't have holes, you can also triangulate the polygon, which you are probably going to do anyway in a GIS app to draw a TIN, and then test for the point in each triangle. If you have a small number of edges to the polygon but a large number of points to test, this is fast.
For an interesting point-in-triangle test, see link text.
Otherwise, definitely use the winding rule rather than edge crossing; edge crossing has real problems with points on edges, which is very likely if your data is generated from a GPS with limited precision.
the polygon is defined as a sequential list of point pairs A, B, C .... A.
no side A-B, B-C ... crosses any other side
Determine box Xmin, Xmax, Ymin, Ymax
case 1: the test point P lies outside the box, so it is outside the polygon
case 2: the test point P lies inside the box:
Determine the 'diameter' D of the box {[Xmin,Ymin] - [Xmax, Ymax]} ( and add a little extra to avoid possible confusion with D being on a side)
Determine the gradients M of all sides
Find a gradient Mt most different from all gradients M
The test line runs from P at gradient Mt a distance D.
Set the count of intersections to zero
For each of the sides A-B, B-C test for the intersection of P-D with a side
from its start up to but NOT INCLUDING its end. Increment the count of intersections
if required. Note that a zero distance from P to the intersection indicates that P is ON a side
An odd count indicates P is inside the polygon
I translated the C# method into PHP and added many comments to help understand the code. Description of PolygonHelps:
Check if a point is inside or outside of a polygon. This procedure uses GPS coordinates, and it works when the polygon has a small geographic area.
INPUT:
$poly: array of Point: the polygon's vertex list; [{Point}, {Point}, ...]
$point: the point to check; Point: {"lat" => "x.xxx", "lng" => "y.yyy"}
When $c is false, the number of intersections with the polygon is even, so the point is outside the polygon. When $c is true, the number of intersections with the polygon is odd, so the point is inside. $n is the number of vertices in the polygon. For each vertex, the method considers the edge through the current vertex and the previous vertex and checks whether the horizontal line through $point intersects that edge; $c toggles each time an intersection exists.
So the method returns true if the point is inside the polygon, and false otherwise.
class PolygonHelps {
public static function isPointInPolygon(&$poly, $point){
$c = false;
$n = $j = count($poly);
for ($i = 0, $j = $n - 1; $i < $n; $j = $i++){
if ( ( ( ( $poly[$i]->lat <= $point->lat ) && ( $point->lat < $poly[$j]->lat ) )
|| ( ( $poly[$j]->lat <= $point->lat ) && ( $point->lat < $poly[$i]->lat ) ) )
&& ( $point->lng < ( $poly[$j]->lng - $poly[$i]->lng )
* ( $point->lat - $poly[$i]->lat )
/ ( $poly[$j]->lat - $poly[$i]->lat )
+ $poly[$i]->lng ) ){
$c = !$c;
}
}
return $c;
}
}
Jan's answer is great.
Here is the same code using the GeoCoordinate class instead.
using System.Device.Location;
...
public static bool IsPointInPolygon(List<GeoCoordinate> poly, GeoCoordinate point)
{
int i, j;
bool c = false;
for (i = 0, j = poly.Count - 1; i < poly.Count; j = i++)
{
if ((((poly[i].Latitude <= point.Latitude) && (point.Latitude < poly[j].Latitude))
|| ((poly[j].Latitude <= point.Latitude) && (point.Latitude < poly[i].Latitude)))
&& (point.Longitude < (poly[j].Longitude - poly[i].Longitude) * (point.Latitude - poly[i].Latitude)
/ (poly[j].Latitude - poly[i].Latitude) + poly[i].Longitude))
c = !c;
}
return c;
}
You can try this simple class: https://github.com/xopbatgh/sb-polygon-pointer
It is easy to deal with:
You just insert the polygon coordinates into an array
Then ask the library whether the desired point with lat/lng is inside the polygon
$polygonBox = [
[55.761515, 37.600375],
[55.759428, 37.651156],
[55.737112, 37.649566],
[55.737649, 37.597301],
];
$sbPolygonEngine = new sbPolygonEngine($polygonBox);
$isCrosses = $sbPolygonEngine->isCrossesWith(55.746768, 37.625605);
// $isCrosses is boolean