How to have an angled line on a chart in LiveCharts - livecharts

I'm trying to draw a trend line on a chart in LiveCharts using Windows Forms, and while I am aware of LineSeries (or GLineSeries in my case), it only goes through the points I select. Does anybody know how I can make the line go on forever, like an AxisSection but angled?
I've found a solution where I calculate the path of the line with the straight-line equation.
private void extendLineAcrossChart()
{
    // lines is a list where I keep the indexes of the lines I want to extend
    for (int i = 0; i < lines.Count; i++)
    {
        GLineSeries ls = (GLineSeries)ohlcChart.Series[lines[i]];
        ls.PointGeometry = null;
        extendLineAcrossChart((ChartValues<ObservablePoint>)ohlcChart.Series[lines[i]].Values);
    }
}
private void extendLineAcrossChart(ChartValues<ObservablePoint> ChartValues)
{
    ObservablePoint PointFirst = ChartValues[0];
    double X1 = PointFirst.X;
    double Y1 = PointFirst.Y;
    ObservablePoint PointSecond = ChartValues[ChartValues.Count - 1];
    double X2 = PointSecond.X;
    double Y2 = PointSecond.Y;
    double slope = (Y1 - Y2) / (X1 - X2);
    double coEfficient = Y1 - slope * X1;
    PointFirst.Y = slope * ohlcChart.AxisX[0].ActualMinValue + coEfficient; // Y = mX + n
    PointFirst.X = ohlcChart.AxisX[0].ActualMinValue;
    PointSecond.Y = slope * ohlcChart.AxisX[0].ActualMaxValue + coEfficient;
    PointSecond.X = ohlcChart.AxisX[0].ActualMaxValue;
}

Related

How can I read/transform the range images of the Stanford bunny .ply files?

I want to read the non-reconstructed data from the Stanford Bunny. The point data is stored as several range images, which have to be transformed and combined into one big point cloud, as described in the README:
These data files were obtained with a Cyberware 3030MS optical
triangulation scanner. They are stored as range images in the "ply"
format. The ".conf" file contains the transformations required to
bring each range image into a single coordinate system.
This is the .conf-file:
camera -0.0172 -0.0936 -0.734 -0.0461723 0.970603 -0.235889 0.0124573
bmesh bun000.ply 0 0 0 0 0 0 1
bmesh bun045.ply -0.0520211 -0.000383981 -0.0109223 0.00548449 -0.294635 -0.0038555 0.955586
bmesh bun090.ply 2.20761e-05 -3.34606e-05 -7.20881e-05 0.000335889 -0.708202 0.000602459 0.706009
bmesh bun180.ply 0.000116991 2.47732e-05 -4.6283e-05 -0.00215148 0.999996 -0.0015001 0.000892527
bmesh bun270.ply 0.000130273 1.58623e-05 0.000406764 0.000462632 0.707006 -0.00333301 0.7072
bmesh top2.ply -0.0530127 0.138516 0.0990356 0.908911 -0.0569874 0.154429 0.383126
bmesh top3.ply -0.0277373 0.0583887 -0.0796939 0.0598923 0.670467 0.68082 -0.28874
bmesh bun315.ply -0.00646017 -1.36122e-05 -0.0129064 0.00449209 0.38422 -0.00976512 0.923179
bmesh chin.ply 0.00435102 0.0882863 -0.108853 -0.441019 0.213083 0.00705734 0.871807
bmesh ear_back.ply -0.0829384 0.0353082 0.0711536 0.111743 0.925689 -0.215443 -0.290169
For each range image, seven values are stored, but I do not know what information these values contain.
I guess that three of them contain some information about the translation and maybe three contain information about the rotation, but I couldn't find anything about the order of these values or how to transform them to get one point cloud.
The wiki page doesn't deal with range images, and I found nothing more on the Stanford pages. They only mention that the method of Turk94 was used to scan this data set, but that paper says nothing about the transformations needed. (Or I was not able to get the information out of it.)
Does anybody know how to read these values correctly? Why is there a transformation for the camera position? Is this just a good initial value to view the whole point cloud?
Thanks for your help.
EDIT:
OK. At this point I have already tried to read the data and to transform it correctly, but nothing worked. I use the Boost library to handle the quaternions.
Here is my code for it:
boost::math::quaternion<double> translation, quaternionRotation;
//Get Transformation
translation = boost::math::quaternion<double>(0.0, lineData[2].toDouble(), lineData[3].toDouble(), lineData[4].toDouble());
quaternionRotation = boost::math::quaternion<double>(lineData[5].toDouble(),lineData[6].toDouble(),lineData[7].toDouble(),lineData[8].toDouble());
//do some file related stuff
//...
//for each line: read the point data and transform it and store the point in a data array
pointData[j].x = stringPointData[0].toDouble();
pointData[j].y = stringPointData[1].toDouble();
pointData[j].z = stringPointData[2].toDouble();
tmpQuat = boost::math::quaternion<double> (0.0,pointData[j].x,pointData[j].y,pointData[j].z);
//first translation
tmpQuat += translation;
//then quaternion rotation
tmpQuat = (quaternionRotation * (tmpQuat) * boost::math::conj(quaternionRotation));
//read the data from quaternion to a usual type
pointData[j].x = tmpQuat.R_component_2();
pointData[j].y = tmpQuat.R_component_3();
pointData[j].z = tmpQuat.R_component_4();
I assume that the first component of the quaternion is the w component and the others refer to x, y and z, as in equation 2 from here. If necessary I can provide screenshots of the false transformations.
EDIT: It is written in the source code of zipper, in the file zipper.c, that the 7 values are stored as follows:
transX transY transZ quatX quatY quatZ quatW
The quaternion is then transformed into a rotation matrix and then the rotation is performed with this new matrix. But even with this information, I am not able to transform it correctly. To test it, I implemented the function quat_to_mat() from zipper in my project:
glm::dmat4 cPlyObjectLoader::quat_to_mat(boost::math::quaternion<double> quat) const
{
float s;
float xs,ys,zs;
float wx,wy,wz;
float xx,xy,xz;
float yy,yz,zz;
glm::dmat4 mat(1.0);
s = 2 / (quat.R_component_2()*quat.R_component_2() +
quat.R_component_3()*quat.R_component_3() +
quat.R_component_4()*quat.R_component_4() +
quat.R_component_1()*quat.R_component_1());
xs = quat.R_component_2() * s;
ys = quat.R_component_3() * s;
zs = quat.R_component_4() * s;
wx = quat.R_component_1() * xs;
wy = quat.R_component_1() * ys;
wz = quat.R_component_1() * zs;
xx = quat.R_component_2() * xs;
xy = quat.R_component_2() * ys;
xz = quat.R_component_2() * zs;
yy = quat.R_component_3() * ys;
yz = quat.R_component_3() * zs;
zz = quat.R_component_4() * zs;
mat[0][0] = 1 - (yy + zz);
mat[0][1] = xy - wz;
mat[0][2] = xz + wy;
mat[0][3] = 0;
mat[1][0] = xy + wz;
mat[1][1] = 1 - (xx + zz);
mat[1][2] = yz - wx;
mat[1][3] = 0;
mat[2][0] = xz - wy;
mat[2][1] = yz + wx;
mat[2][2] = 1 - (xx + yy);
mat[2][3] = 0;
mat[3][0] = 0;
mat[3][1] = 0;
mat[3][2] = 0;
mat[3][3] = 1;
return mat;
}
Now I am doing the translation and rotation with a vector and this matrix:
quaternionRotation = boost::math::quaternion<double>(lineData[8].toDouble(),lineData[5].toDouble(),lineData[6].toDouble(),lineData[7].toDouble());
rotationMat = this->quat_to_mat(quaternionRotation);
translationVec = glm::dvec4(lineData[2].toDouble(), lineData[3].toDouble(), lineData[4].toDouble(),0.0);
//same stuff as above
//...
glm::dvec4 curPoint = glm::dvec4(pointData[j].x,pointData[j].y,pointData[j].z,1.0);
curPoint += translationVec;
curPoint = rotationMat*curPoint;
The result is different from my quaternion rotation (why? it should be the same), but it is still not correct.
Debug information:
the input of all transformations is correct
the input of all points is correct
As I read on the Stanford 3D Scanning Repository page:
For all the Stanford models, alignment was done using a modified ICP
algorithm, as described in this paper. These alignments are stored in
".conf" files, which list each range image in the model along with a
translation and a quaternion rotation.
Here is the link to "this paper"
Edit: The two methods are called zippering and volumetric merging
As Ello mentioned, it is written at the Stanford 3D repository:
For all the Stanford models, alignment was done using a modified ICP algorithm, as described in this paper. These alignments are stored in ".conf" files, which list each range image in the model along with a translation and a quaternion rotation.
But that is not enough to understand everything about this data file.
It is correct that the first line:
camera -0.0172 -0.0936 -0.734 -0.0461723 0.970603 -0.235889 0.0124573
stores a good initial camera position, and every other line starting with bmesh refers to a .ply file, which stores a range image.
The transformation values are stored as follows:
transX transY transZ quatX quatY quatZ quatW
where trans... refers to a translation value and quat... refers to a component of the quaternion. Currently I do not know why it doesn't work with the quaternion rotation by itself, but by transforming it into a rotation matrix with the code of zipper the transformation is correct. Be aware that the translation is stored first, but to get a correct transformation the rotation has to be applied first and the translation afterwards.
My code snippet to read the files and transform them is the following:
boost::math::quaternion<double> translation, quaternionRotation;
//Get Transformation
translationVec = glm::dvec4(lineData[2].toDouble(), lineData[3].toDouble(), lineData[4].toDouble(),0.0);
quaternionRotation = boost::math::quaternion<double>(lineData[8].toDouble(),lineData[5].toDouble(),lineData[6].toDouble(),lineData[7].toDouble());
//calculate the unit quaternion
double magnitude = std::sqrt(
quaternionRotation.R_component_1()*quaternionRotation.R_component_1()+
quaternionRotation.R_component_2()*quaternionRotation.R_component_2()+
quaternionRotation.R_component_3()*quaternionRotation.R_component_3()+
quaternionRotation.R_component_4()*quaternionRotation.R_component_4());
quaternionRotation /= magnitude;
rotationMat = this->quat_to_mat(quaternionRotation);
//do some file related stuff
//...
//for each line: read the point data and transform it and store the point in a data array
pointData[j].x = stringPointData[0].toDouble();
pointData[j].y = stringPointData[1].toDouble();
pointData[j].z = stringPointData[2].toDouble();
//transform the current point
glm::dvec4 curPoint = glm::dvec4(pointData[j].x,pointData[j].y,pointData[j].z,1.0);
//first rotation
curPoint = rotationMat*curPoint;
//then translation
curPoint += translationVec;
//store the data in a data array
pointData[j].x = curPoint.x;
pointData[j].y = curPoint.y;
pointData[j].z = curPoint.z;
I know that it's not the best code, but it works. Feel free to optimize it yourself.
Here is the file converter that I wrote. It will assemble all the scans into a single file, one point per line. It supports different file formats (including Stanford .conf files).
#include <string>
#include <vector>
#include <sstream>
#include <iostream>
#include <stdio.h>
#include <ctype.h>
#include <string.h>
#include <stdlib.h>
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265
#endif
class LineInput {
public:
LineInput(const std::string& filename) {
F_ = fopen(filename.c_str(), "r" ) ;
ok_ = (F_ != 0) ;
}
~LineInput() {
if(F_ != 0) {
fclose(F_); F_ = 0 ;
}
}
bool OK() const { return ok_ ; }
bool eof() const { return feof(F_) ; }
bool get_line() {
line_[0] = '\0' ;
// Skip the empty lines
while(!isprint(line_[0])) {
if(fgets(line_, MAX_LINE_LEN, F_) == 0) {
return false ;
}
}
// If the line ends with a backslash, append
// the next line to the current line.
bool check_multiline = true ;
int total_length = MAX_LINE_LEN ;
char* ptr = line_ ;
while(check_multiline) {
int L = strlen(ptr) ;
total_length -= L ;
ptr = ptr + L - 2;
if(*ptr == '\\' && total_length > 0) {
*ptr = ' ' ;
ptr++ ;
fgets(ptr, total_length, F_) ;
} else {
check_multiline = false ;
}
}
if(total_length < 0) {
std::cerr
<< "MultiLine longer than "
<< MAX_LINE_LEN << " bytes" << std::endl ;
}
return true ;
}
int nb_fields() const { return field_.size() ; }
char* field(int i) { return field_[i] ; }
int field_as_int(int i) {
int result ;
ok_ = ok_ && (sscanf(field(i), "%d", &result) == 1) ;
return result ;
}
double field_as_double(int i) {
double result ;
ok_ = ok_ && (sscanf(field(i), "%lf", &result) == 1) ;
return result ;
}
bool field_matches(int i, const char* s) {
return !strcmp(field(i), s) ;
}
void get_fields(const char* separators=" \t\r\n") {
field_.resize(0) ;
char* tok = strtok(line_,separators) ;
while(tok != 0) {
field_.push_back(tok) ;
tok = strtok(0,separators) ;
}
}
private:
enum { MAX_LINE_LEN = 65535 } ;
FILE* F_ ;
char line_[MAX_LINE_LEN] ;
std::vector<char*> field_ ;
bool ok_ ;
} ;
std::string to_string(int x, int mindigits) {
char buff[100] ;
sprintf(buff, "%03d", x) ;
return std::string(buff) ;
}
double M[4][4] ;
void transform(double* xyz) {
double xyzw[4] ;
for(unsigned int c=0; c<4; c++) {
xyzw[c] = M[3][c] ;
}
for(unsigned int j=0; j<4; j++) {
for(unsigned int i=0; i<3; i++) {
xyzw[j] += M[i][j] * xyz[i] ;
}
}
for(unsigned int c=0; c<3; c++) {
xyz[c] = xyzw[c] / xyzw[3] ;
}
}
bool read_frames_file(int no) {
std::string filename = "scan" + to_string(no,3) + ".frames" ;
std::cerr << "Reading frames from:" << filename << std::endl ;
LineInput in(filename) ;
if(!in.OK()) {
std::cerr << " ... not found" << std::endl ;
return false ;
}
while(!in.eof() && in.get_line()) {
in.get_fields() ;
if(in.nb_fields() == 17) {
int f = 0 ;
for(unsigned int i=0; i<4; i++) {
for(unsigned int j=0; j<4; j++) {
M[i][j] = in.field_as_double(f) ; f++ ;
}
}
}
}
return true ;
}
bool read_pose_file(int no) {
std::string filename = "scan" + to_string(no,3) + ".pose" ;
std::cerr << "Reading pose from:" << filename << std::endl ;
LineInput in(filename) ;
if(!in.OK()) {
std::cerr << " ... not found" << std::endl ;
return false ;
}
double xyz[3] ;
double euler[3] ;
in.get_line() ;
in.get_fields() ;
xyz[0] = in.field_as_double(0) ;
xyz[1] = in.field_as_double(1) ;
xyz[2] = in.field_as_double(2) ;
in.get_line() ;
in.get_fields() ;
euler[0] = in.field_as_double(0) * M_PI / 180.0 ;
euler[1] = in.field_as_double(1) * M_PI / 180.0 ;
euler[2] = in.field_as_double(2) * M_PI / 180.0 ;
double sx = sin(euler[0]);
double cx = cos(euler[0]);
double sy = sin(euler[1]);
double cy = cos(euler[1]);
double sz = sin(euler[2]);
double cz = cos(euler[2]);
M[0][0] = cy*cz;
M[0][1] = sx*sy*cz + cx*sz;
M[0][2] = -cx*sy*cz + sx*sz;
M[0][3] = 0.0;
M[1][0] = -cy*sz;
M[1][1] = -sx*sy*sz + cx*cz;
M[1][2] = cx*sy*sz + sx*cz;
M[1][3] = 0.0;
M[2][0] = sy;
M[2][1] = -sx*cy;
M[2][2] = cx*cy;
M[2][3] = 0.0;
M[3][0] = xyz[0];
M[3][1] = xyz[1];
M[3][2] = xyz[2];
M[3][3] = 1.0;
return true ;
}
void setup_transform_from_translation_and_quaternion(
double Tx, double Ty, double Tz,
double Qx, double Qy, double Qz, double Qw
) {
/* for unit q, just set s = 2 or set xs = Qx + Qx, etc. */
double s = 2.0 / (Qx*Qx + Qy*Qy + Qz*Qz + Qw*Qw);
double xs = Qx * s;
double ys = Qy * s;
double zs = Qz * s;
double wx = Qw * xs;
double wy = Qw * ys;
double wz = Qw * zs;
double xx = Qx * xs;
double xy = Qx * ys;
double xz = Qx * zs;
double yy = Qy * ys;
double yz = Qy * zs;
double zz = Qz * zs;
M[0][0] = 1.0 - (yy + zz);
M[0][1] = xy - wz;
M[0][2] = xz + wy;
M[0][3] = 0.0;
M[1][0] = xy + wz;
M[1][1] = 1 - (xx + zz);
M[1][2] = yz - wx;
M[1][3] = 0.0;
M[2][0] = xz - wy;
M[2][1] = yz + wx;
M[2][2] = 1 - (xx + yy);
M[2][3] = 0.0;
M[3][0] = Tx;
M[3][1] = Ty;
M[3][2] = Tz;
M[3][3] = 1.0;
}
bool read_points_file(int no) {
std::string filename = "scan" + to_string(no,3) + ".3d" ;
std::cerr << "Reading points from:" << filename << std::endl ;
LineInput in(filename) ;
if(!in.OK()) {
std::cerr << " ... not found" << std::endl ;
return false ;
}
while(!in.eof() && in.get_line()) {
in.get_fields() ;
double xyz[3] ;
if(in.nb_fields() >= 3) {
for(unsigned int c=0; c<3; c++) {
xyz[c] = in.field_as_double(c) ;
}
transform(xyz) ;
printf("%f %f %f\n",xyz[0],xyz[1],xyz[2]) ;
}
}
return true ;
}
/* only works for ASCII PLY files */
void read_ply_file(char* filename) {
std::cerr << "Reading points from:" << filename << std::endl;
LineInput in(filename) ;
if(!in.OK()) {
std::cerr << filename << ": could not open" << std::endl ;
return;
}
bool reading_vertices = false;
int nb_vertices = 0 ;
int nb_read_vertices = 0 ;
while(!in.eof() && in.get_line()) {
in.get_fields();
if(reading_vertices) {
double xyz[3] ;
for(unsigned int c=0; c<3; c++) {
xyz[c] = in.field_as_double(c) ;
}
transform(xyz) ;
printf("%f %f %f\n",xyz[0],xyz[1],xyz[2]) ;
++nb_read_vertices;
if(nb_read_vertices == nb_vertices) {
return;
}
} else if(
in.field_matches(0,"element") &&
in.field_matches(1,"vertex")
) {
nb_vertices = in.field_as_int(2);
} else if(in.field_matches(0,"end_header")) {
reading_vertices = true;
}
}
}
/* For Stanford scanning repository */
void read_conf_file(char* filename) {
LineInput in(filename) ;
if(!in.OK()) {
std::cerr << filename << ": could not open" << std::endl ;
return;
}
while(!in.eof() && in.get_line()) {
in.get_fields();
if(in.nb_fields() == 0) { continue ; }
if(in.field_matches(0,"bmesh")) {
char* filename = in.field(1);
// Translation vector
double Tx = in.field_as_double(2);
double Ty = in.field_as_double(3);
double Tz = in.field_as_double(4);
/// Quaternion
double Qx = in.field_as_double(5);
double Qy = in.field_as_double(6);
double Qz = in.field_as_double(7);
double Qw = in.field_as_double(8);
setup_transform_from_translation_and_quaternion(Tx,Ty,Tz,Qx,Qy,Qz,Qw);
read_ply_file(filename);
}
}
}
int main(int argc, char** argv) {
if(argc != 2) { return -1 ; }
if(strstr(argv[1],".conf")) {
read_conf_file(argv[1]);
} else {
int max_i = atoi(argv[1]) ;
for(int i=0; i<=max_i; i++) {
if(!read_frames_file(i)) {
read_pose_file(i) ;
}
read_points_file(i) ;
}
}
return 0 ;
}
Okay, so here is my solution, since none of the above worked for me (note this is in Python using Blender's bpy). It seems that I need to transpose the rotation part of my 4x4 transformation matrix (note I am using a standard way to convert a quaternion to a rotation matrix and not the one from zipper). Also note that since I am using Blender, an imported model only stores its local coordinates relative to the object's world transformation, so the step point = objWorld * point is Blender-specific and you do not need it elsewhere.
#loop
for meshName, transform in zip(plyFile, transformations):
    #Build Quaternion
    #transform structure [x, y, z, qx, qy, qz, qw]
    Rt = mathutils.Quaternion((transform[6], transform[3], transform[4], transform[5])).to_matrix().to_4x4()
    Rt.normalize()
    Rt.transpose()
    Rt[0][3] = transform[0]
    Rt[1][3] = transform[1]
    Rt[2][3] = transform[2]
    bpy.ops.object.select_all(action='DESELECT')
    #import the ply mesh into blender
    bpy.ops.import_mesh.ply(filepath=baseDir + meshName)
    #get the ply object
    obj = bpy.context.object
    #get objects world matrix
    objWorld = obj.matrix_world
    for index in range(len(obj.data.vertices)):
        #get local point
        point = mathutils.Vector([obj.data.vertices[index].co[0], obj.data.vertices[index].co[1], obj.data.vertices[index].co[2], 1.])
        #convert local point to world
        point = objWorld * point
        #apply ply transformation
        point = Rt * point
        #update the point in the mesh
        obj.data.vertices[index].co[0] = point[0]
        obj.data.vertices[index].co[1] = point[1]
        obj.data.vertices[index].co[2] = point[2]
#all vertex positions should be updated correctly
As mentioned in other answers, the Stanford 3D repository gives some info about the data organization in the '.conf' files, but the transformations for the bunny model were not working properly when using the quaternion data provided.
I was also stuck on this registration problem for the bunny model, and based on my tests I have some extra considerations to add. When applying the transformation - the rotation, to be more specific - I realized that the quaternion values were not rotating the cloud in the correct direction, but when using the corresponding Euler notation and changing the sign of one specific axis of rotation, I got the correct registration. So, back to the quaternion notation used in the '.conf' file: after some tests I noticed that by changing the sign of the 'w' component of the quaternion in each 'bmesh' row except the first (bun000.ply), the rotation by quaternion can be used.
Furthermore, for some reason, when registering the dragon (dragon_stand and dragon_side) and armadillo (armadillo_stand) Stanford point clouds, in order to get the correct result I had to use a different order when reading the quaternion data in the '.conf' file. It seems to be stored as:
tx ty tz qw qx qy qz
where 't' refers to a translation value and 'q' refers to a quaternion value. Just to be clear, I have only tested these three models, so I don't know what the default pattern for the quaternion values is. Also, for these last two point cloud models, I did not need to change the '.conf' file.
I hope this is useful for someone else trying to do the same.
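For what it's worth, negating only the 'w' component turns a quaternion into (minus) its conjugate, which represents the inverse rotation, so this observation is consistent with the transposed rotation matrix used in the Blender answer above. A minimal sketch of that flipped-sign variant in plain Python (the function names are mine):
import math
def quat_to_rot(x, y, z, w):
    # normalize, then build the standard rotation matrix for q = (x, y, z, w)
    n = math.sqrt(x*x + y*y + z*z + w*w)
    x, y, z, w = x/n, y/n, z/n, w/n
    return [[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
            [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
            [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]]
def transform_point(point, trans, quat):
    # quat is read from the .conf line as (qx, qy, qz, qw); the sign of w is flipped
    qx, qy, qz, qw = quat
    rot = quat_to_rot(qx, qy, qz, -qw)
    x, y, z = point
    # rotate first, then translate, as described in the earlier answer above
    return tuple(rot[i][0]*x + rot[i][1]*y + rot[i][2]*z + trans[i] for i in range(3))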
Just in case someone is looking for a full Python implementation on the basis of what #DanceIgel found out, here is some code in Python 3.9.1, also generating a figure in matplotlib:
# Python 3.9.1
import numpy as np
import sys
import math
import glob
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import open3d as o3d
def get_pointcloud_files(path):
    files = list()
    for f in glob.glob(path + '/*.ply'):
        files.append(f)
    return files

def get_pointcloud_from_file(path, filename):
    cloud = o3d.io.read_point_cloud(path + '/' + filename)
    return cloud

def get_transformations_from_file(path, filename):
    with open(path + '/' + filename) as f:
        lines = (line for line in f)
        source = np.loadtxt(lines, delimiter=' ', skiprows=1, dtype='str')
    source = np.delete(source, 0, 1) #remove camera
    filenames = source[:,0]
    source = source[filenames.argsort()]
    filenames = np.sort(filenames)
    translations = list()
    for row in source[:,1:4]:
        translations.append(np.reshape(row, [3,1]).astype(np.float32))
    quaternions = list()
    for row in source[:,4:]:
        quaternions.append(np.reshape(row, [4,1]).astype(np.float32))
    return filenames, translations, quaternions

def quaternion_rotation_matrix(Q):
    # Extract the values from Q
    q0 = Q[3]
    q1 = Q[0]
    q2 = Q[1]
    q3 = Q[2]
    # calculate unit quaternion
    magnitude = math.sqrt(q0*q0 + q1*q1 + q2*q2 + q3*q3)
    q0 = q0 / magnitude
    q1 = q1 / magnitude
    q2 = q2 / magnitude
    q3 = q3 / magnitude
    # First row of the rotation matrix
    r00 = 2 * (q0 * q0 + q1 * q1) - 1
    r01 = 2 * (q1 * q2 - q0 * q3)
    r02 = 2 * (q1 * q3 + q0 * q2)
    # Second row of the rotation matrix
    r10 = 2 * (q1 * q2 + q0 * q3)
    r11 = 2 * (q0 * q0 + q2 * q2) - 1
    r12 = 2 * (q2 * q3 - q0 * q1)
    # Third row of the rotation matrix
    r20 = 2 * (q1 * q3 - q0 * q2)
    r21 = 2 * (q2 * q3 + q0 * q1)
    r22 = 2 * (q0 * q0 + q3 * q3) - 1
    # 3x3 rotation matrix
    rot_matrix = np.array([[r00, r01, r02],
                           [r10, r11, r12],
                           [r20, r21, r22]])
    rot_matrix = np.transpose(rot_matrix)
    return rot_matrix

if __name__=="__main__": # $python visualization_bunny.py bunny/data
    path = sys.argv[1]
    # load transformations and filenames from file
    filenames, translations, quaternions = get_transformations_from_file(path, 'bun.conf')
    curr_transformation = np.zeros([3,4])
    clouds = list()
    for curr_filename, curr_quaternion, curr_translation in zip(filenames, quaternions, translations): # go through input files
        curr_cloud = get_pointcloud_from_file(path, curr_filename)
        # convert cloud to numpy
        curr_cloud = np.asarray(curr_cloud.points)
        # compute rotation matrix from quaternions
        curr_rotation_matr = quaternion_rotation_matrix(curr_quaternion)
        curr_rotation_matr = np.squeeze(curr_rotation_matr)
        curr_translation = np.squeeze(curr_translation)
        # create transformation matrix
        curr_transformation[:,0:3] = curr_rotation_matr
        curr_transformation[:,3] = curr_translation
        # transform current cloud
        for i in range(curr_cloud.shape[0]):
            # apply rotation
            curr_point = np.matmul(curr_rotation_matr, np.transpose(curr_cloud[i,:]))
            # apply translation
            curr_point = curr_point + curr_translation
            curr_cloud[i,0] = curr_point[0]
            curr_cloud[i,1] = curr_point[1]
            curr_cloud[i,2] = curr_point[2]
        # add current cloud to list of clouds
        clouds.append(curr_cloud)
    #plot separate point clouds in same graph
    ax = plt.axes(projection='3d')
    for cloud in clouds:
        ax.plot(cloud[:,0], cloud[:,1], cloud[:,2], 'bo', markersize=0.005)
    #ax.view_init(elev=90, azim=270)
    ax.view_init(elev=100, azim=270)
    plt.axis('off')
    plt.savefig("ZZZ_Stanford_Bunny_PointCloud.png", bbox_inches='tight')
    plt.show()

Understanding the Hough transform accumulator

I don't know if I am understanding things wrong or if it's something else.
This is how the Hough transform appears to me:
You just go over every pixel and calculate rho and theta with this formula:
r = x.cos(θ) + y.sin(θ)
Most people put this formula in a loop from 0->180 or 0->360. Why?
Afterwards you put all the values in the accumulator.
But for some dark and mysterious reason, all the points from the same line
will give the same theta and r value.
How is this possible? I really don't get it.
I tried it, and these were my results:
I just put a straight black horizontal line on an image. The line was 10px long and 1px high. So I applied the formula, but instead of getting 1 place in my accumulator with value 10, I got 5 places with value 10. Why?
This is my code, if necessary:
int outputImage[400][400];
int accumulator[5000][5000]; //random size
int i,j;
int r,t;
int x1,y1,x2,y2;
/* PUT ALL VALUES IN ACCU */
for(i=0;i<imageSize[0];i++)
{
for( j=0;j<imageSize[1];j++)
{
if (inputImage[i][j] == 0x00) //just to test everything, formula only applied on the black line
{
for( t=0;t<180;t++)
{
r = i * cos(t) + j*sin(t);
accumulator[r][t] ++;
}
}
}
}
/*READ ACCU, and draw lines */
for (i=0; i<5000;i++)
{
for (j=0;j<5000;j++)
{
if(accumulator[i][j] >= 10) // If accu value >= tresholdvalue, I believe we have a line
{
x1 = accumulator[i][j] * i / (cos(accumulator[i][j] * j));
y1 = 0;
x2 = 0;
y2 = accumulator[i][j] * i / (sin(accumulator[i][j] * j));
// now i just need to draw all the lines between x1y1 and x2y2
}
}
}
Thanks!
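For reference, a minimal sketch of the voting step in Python (my own sketch, not a fix of the code above; it assumes edge_points is a list of (x, y) pixel coordinates and diag is the length of the image diagonal in pixels). Note that cos() and sin() expect radians, and that rho can be negative, which is why the accumulator index is offset:
import math
def hough_vote(edge_points, diag, n_theta=180):
    # accumulator indexed by [rho + diag][theta bin]
    acc = [[0] * n_theta for _ in range(2 * diag + 1)]
    for (x, y) in edge_points:
        for t in range(n_theta):
            theta = t * math.pi / n_theta  # 0..pi, in radians
            rho = x * math.cos(theta) + y * math.sin(theta)
            # every point lying on one straight line votes for the same (rho, theta) bin
            acc[int(round(rho)) + diag][t] += 1
    return acc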

Rotated rectangle rasterisation algorithm

In a nutshell: I want to do a non-approximate version of Bresenham's line algorithm, but for a rectangle rather than a line, and whose points aren't necessarily aligned to the grid.
Given a square grid, and a rectangle comprising four non-grid-aligned points, I want to find a list of all grid squares that are covered, partially or completely, by the rectangle.
Bresenham's line algorithm is approximate – not all partially covered squares are identified. I'm looking for a "perfect" algorithm, that has no false positives or negatives.
It's an old question, but I have solved this issue in C++:
https://github.com/feelinfine/tracer
Maybe it will be useful for someone (sorry for my poor English).
(The snippets below assume the usual standard headers such as <set>, <cmath>, <limits>, <vector>, <future> and <iterator>, plus the V2i/V2d types defined in the Types section further down.)
Single line tracing
template <typename PointType>
std::set<V2i> trace_line(const PointType& _start_point, const PointType& _end_point, size_t _cell_size)
{
auto point_to_grid_fnc = [_cell_size](const auto& _point)
{
return V2i(std::floor((double)_point.x / _cell_size), std::floor((double)_point.y / _cell_size));
};
V2i start_cell = point_to_grid_fnc(_start_point);
V2i last_cell = point_to_grid_fnc(_end_point);
PointType direction = _end_point - _start_point;
//Moving direction (cells)
int step_x = (direction.x >= 0) ? 1 : -1;
int step_y = (direction.y >= 0) ? 1 : -1;
//Normalize vector
double hypot = std::hypot(direction.x, direction.y);
V2d norm_direction(direction.x / hypot, direction.y / hypot);
//Distance to the nearest square side
double near_x = (step_x >= 0) ? (start_cell.x + 1)*_cell_size - _start_point.x : _start_point.x - (start_cell.x*_cell_size);
double near_y = (step_y >= 0) ? (start_cell.y + 1)*_cell_size - _start_point.y : _start_point.y - (start_cell.y*_cell_size);
//How far along the ray we must move to cross the first vertical (ray_step_to_vside) / or horizontal (ray_step_to_hside) grid line
double ray_step_to_vside = (norm_direction.x != 0) ? near_x / norm_direction.x : std::numeric_limits<double>::max();
double ray_step_to_hside = (norm_direction.y != 0) ? near_y / norm_direction.y : std::numeric_limits<double>::max();
//How far along the ray we must move for horizontal (dx)/ or vertical (dy) component of such movement to equal the cell size
double dx = (norm_direction.x != 0) ? _cell_size / norm_direction.x : std::numeric_limits<double>::max();
double dy = (norm_direction.y != 0) ? _cell_size / norm_direction.y : std::numeric_limits<double>::max();
//Tracing loop
std::set<V2i> cells;
cells.insert(start_cell);
V2i current_cell = start_cell;
size_t grid_bound_x = std::abs(last_cell.x - start_cell.x);
size_t grid_bound_y = std::abs(last_cell.y - start_cell.y);
size_t counter = 0;
while (counter != (grid_bound_x + grid_bound_y))
{
if (std::abs(ray_step_to_vside) < std::abs(ray_step_to_hside))
{
ray_step_to_vside = ray_step_to_vside + dx; //to the next vertical grid line
current_cell.x = current_cell.x + step_x;
}
else
{
ray_step_to_hside = ray_step_to_hside + dy;//to the next horizontal grid line
current_cell.y = current_cell.y + step_y;
}
++counter;
cells.insert(current_cell);
};
return cells;
}
Get all cells
template <typename Container>
std::set<V2i> pick_cells(Container&& _points, size_t _cell_size)
{
if (_points.size() < 2 || _cell_size <= 0)
return std::set<V2i>();
Container points = std::forward<Container>(_points);
auto add_to_set = [](auto& _set, const auto& _to_append)
{
_set.insert(std::cbegin(_to_append), std::cend(_to_append));
};
//Outline
std::set<V2i> cells;
/*
for (auto it = std::begin(_points); it != std::prev(std::end(_points)); ++it)
add_to_set(cells, trace_line(*it, *std::next(it), _cell_size));
add_to_set(cells, trace_line(_points.back(), _points.front(), _cell_size));
*/
//Maybe this code works faster
std::vector<std::future<std::set<V2i> > > results;
using PointType = decltype(points.cbegin())::value_type;
for (auto it = points.cbegin(); it != std::prev(points.cend()); ++it)
results.push_back(std::async(trace_line<PointType>, *it, *std::next(it), _cell_size));
results.push_back(std::async(trace_line<PointType>, points.back(), points.front(), _cell_size));
for (auto& it : results)
add_to_set(cells, it.get());
//Inner
std::set<V2i> to_add;
int last_x = cells.begin()->x;
int counter = cells.begin()->y;
for (auto& it : cells)
{
if (last_x != it.x)
{
counter = it.y;
last_x = it.x;
}
if (it.y > counter)
{
for (int i = counter; i < it.y; ++i)
to_add.insert(V2i(it.x, i));
}
++counter;
}
add_to_set(cells, to_add);
return cells;
}
Types
template <typename _T>
struct V2
{
_T x, y;
V2(_T _x = 0, _T _y = 0) : x(_x), y(_y)
{
};
V2 operator-(const V2& _rhs) const
{
return V2(x - _rhs.x, y - _rhs.y);
}
bool operator==(const V2& _rhs) const
{
return (x == _rhs.x) && (y == _rhs.y);
}
//for std::set sorting
bool operator<(const V2& _rhs) const
{
return (x == _rhs.x) ? (y < _rhs.y) : (x < _rhs.x);
}
};
using V2d = V2<double>;
using V2i = V2<int>;
Usage
std::vector<V2d> points = { {200, 200}, {400, 400}, {500,100} };
size_t cell_size = 30;
auto cells = pick_cells(points, cell_size);
for (auto& it : cells)
... //do something with cells
You can use a scanline approach. The rectangle is a closed convex polygon, so it is sufficient to store the leftmost and rightmost pixel for each horizontal scanline. (And the top and bottom scanlines, too.)
The Bresenham algorithm tries to draw a thin, visually pleasing line without adjacent cells in the smaller dimension. We need an algorithm that visits each cell that the edges of the polygon pass through. The basic idea is to find the starting cell (x, y) for each edge and then to adjust x whenever the edge intersects a vertical border and to adjust y when it intersects a horizontal border.
We can represent the intersections by means of a normalised coordinate s that travels along the edge and that is 0.0 at the first node n1 and 1.0 at the second node n2.
var x = Math.floor(n1.x / cellsize);
var y = Math.floor(n1.y / cellsize);
var s = 0;
The vertical intersections can then be represented as equidistant steps of width dsx from an initial sx.
var dx = n2.x - n1.x;
var sx = 10; // default value > 1.0
// first intersection
if (dx < 0) sx = (cellsize * x - n1.x) / dx;
if (dx > 0) sx = (cellsize * (x + 1) - n1.x) / dx;
var dsx = (dx != 0) ? cellsize / Math.abs(dx) : 0;
Likewise for the horizontal intersections. A default value greater than 1.0 catches the cases of horizontal and vertical lines. Add the first point to the scanline data:
add(scan, x, y);
Then we can visit the next adjacent cell by looking at the next intersection with the smallest s.
while (sx <= 1 || sy <= 1) {
if (sx < sy) {
sx += dsx;
if (dx > 0) x++; else x--;
} else {
sy += dsy;
if (dy > 0) y++; else y--;
}
add(scan, x, y);
}
Do this for all four edges and with the same scanline data. Then fill all cells:
for (var y in scan) {
var x = scan[y].min;
var xend = scan[y].max + 1;
while (x < xend) {
// do something with cell (x, y)
x++;
}
}
(I have only skimmed the links MBo provided. It seems that the approach presented in that paper is essentially the same as mine. If so, please excuse the redundant answer, but after working this out I thought I might as well post it.)
This is sub-optimal but might give a general idea.
First off, treat the special case of the rectangle being aligned horizontally or vertically separately. This is pretty easy to test for and makes the rest simpler.
You can represent the rectangle as a set of 4 inequalities:
a1 x + b1 y >= c1
a1 x + b1 y <= c2
a3 x + b3 y >= c3
a3 x + b3 y <= c4
As the edges of the rectangle are parallel, some of the constants are the same. You also have (up to a multiple) a3 = b1 and b3 = -a1. You can multiply each inequality by a common factor so you are working with integers.
Now consider each scan line with a fixed value of y.
For each value of y, find the four points where the lines intersect the scan line, that is, solve each of the equations above for x. A little bit of logic will find the minimum and maximum values of x. Plot all pixels between these values.
Your condition that you want all partially covered squares makes things a little trickier. You can solve this by considering two adjacent scan lines. You want to plot the points between the minimum x for both lines and the maximum x for both lines. If, say,
a1 x + b1 y >= c is the inequality for the bottom-left line in the figure, you want to find the largest x such that a1 x + b1 y < c; this will be floor((c - b1 y)/a1). Call this minx(y), also find minx(y+1), and the left-hand point will be the minimum of these two values.
There are many easy optimisations: you can find the y-values of the top and bottom corners, reducing the range of y-values to test, and you should only need to test two sides. For each end point of each line there is one multiplication, one subtraction and one division. The division is the slowest part, I think about 4 times slower than the other ops. You might be able to remove it with the Bresenham or DDA algorithms others have mentioned.
There is a method by Amanatides and Woo to enumerate all intersected cells:
A Fast Voxel Traversal Algorithm for Ray Tracing.
Here is a practical implementation.
As a side effect you'll also get the points of intersection with the grid lines - this may be useful if you need the areas of partially covered cells (for antialiasing etc.).

How to compare two colors for similarity/difference

I want to design a program that can help me assess, among 5 pre-defined colors, which one is most similar to a variable color, and with what percentage. The thing is that I don't know how to do that manually step by step, so it is even more difficult to think of a program.
More details: the colors are from photographs of tubes with gel that has different colors. I have 5 tubes with different colors, where each is representative of 1 of 5 levels. I want to take photographs of other samples and, on the computer, assess which level that sample belongs to by comparing colors, and I want to know that with a percentage of approximation too. I would like a program that does something like this: http://www.colortools.net/color_matcher.html
If you can tell me what steps to take, even if they are things for me to think about and do manually, it would be very helpful.
See Wikipedia's article on Color Difference for the right leads.
Basically, you want to compute a distance metric in some multidimensional colorspace.
But RGB is not "perceptually uniform", so your Euclidean RGB distance metric suggested by Vadim will not match the human-perceived distance between colors. For a start, L*a*b* is intended to be a perceptually uniform colorspace, and the deltaE metric is commonly used. But there are more refined colorspaces and more refined deltaE formulas that get closer to matching human perception.
You'll have to learn more about colorspaces and illuminants to do the conversions. But for a quick formula that is better than the Euclidean RGB metric, just do this:
Assume that your RGB values are in the sRGB colorspace
Find the sRGB to L*a*b* conversion formulas
Convert your sRGB colors to L*a*b*
Compute deltaE between your two L*a*b* values
It's not computationally expensive, it's just some nonlinear formulas and some multiplications and additions.
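For illustration, here is a minimal sketch of those four steps in Python, assuming 0-255 sRGB inputs and a D65 white point (the function names are mine):
import math
def srgb_to_lab(r, g, b):
    # steps 1+2: sRGB -> linear RGB -> XYZ (sRGB matrix, D65 white)
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # step 3: XYZ -> L*a*b* (D65 reference white)
    xn, yn, zn = 0.95047, 1.0, 1.08883
    def f(t):
        return t ** (1.0 / 3.0) if t > (6.0 / 29.0) ** 3 else t / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
def delta_e76(rgb1, rgb2):
    # step 4: plain Euclidean distance in L*a*b* (CIE76 deltaE)
    l1, a1, b1 = srgb_to_lab(*rgb1)
    l2, a2, b2 = srgb_to_lab(*rgb2)
    return math.sqrt((l1 - l2) ** 2 + (a1 - a2) ** 2 + (b1 - b2) ** 2)
print(delta_e76((255, 0, 0), (250, 10, 5)))  # small value: two similar reds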
Just an idea that first came to my mind (sorry if stupid).
The three components of a color can be treated as the 3D coordinates of a point, and then you can calculate the distance between points.
For example:
Point1 has R1 G1 B1
Point2 has R2 G2 B2
Distance between colors is
d=sqrt((r2-r1)^2+(g2-g1)^2+(b2-b1)^2)
Percentage is
p=d/sqrt((255)^2+(255)^2+(255)^2)
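Written out as a small Python sketch (the function names are mine), with the percentage taken relative to the black-to-white diagonal:
import math
def rgb_distance(c1, c2):
    r1, g1, b1 = c1
    r2, g2, b2 = c2
    return math.sqrt((r2 - r1) ** 2 + (g2 - g1) ** 2 + (b2 - b1) ** 2)
def rgb_distance_percent(c1, c2):
    # normalise by the largest possible distance, sqrt(255^2 + 255^2 + 255^2)
    return rgb_distance(c1, c2) / math.sqrt(3 * 255 ** 2) * 100
print(rgb_distance_percent((255, 255, 255), (250, 250, 250)))  # roughly 2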
Actually, I walked the same path a couple of months ago. There is no perfect answer to this question (it has been asked here a couple of times), but there is one that is more sophisticated than the sqrt((r1-r2)^2 + ...) answer and easier to apply directly to RGB without moving to all kinds of alternate color spaces. I found this formula here, which is a low-cost approximation of the quite complicated real formula (by the CIE, which is the W3C of colors; since this is an unfinished quest, you can find older and simpler color difference equations there).
Good Luck.
Edit: For posterity, here's the relevant C code:
typedef struct {
unsigned char r, g, b;
} RGB;
double ColourDistance(RGB e1, RGB e2)
{
long rmean = ( (long)e1.r + (long)e2.r ) / 2;
long r = (long)e1.r - (long)e2.r;
long g = (long)e1.g - (long)e2.g;
long b = (long)e1.b - (long)e2.b;
return sqrt((((512+rmean)*r*r)>>8) + 4*g*g + (((767-rmean)*b*b)>>8));
}
If you have two Color objects c1 and c2, you can just compare each RGB value from c1 with that of c2.
int diffRed = Math.abs(c1.getRed() - c2.getRed());
int diffGreen = Math.abs(c1.getGreen() - c2.getGreen());
int diffBlue = Math.abs(c1.getBlue() - c2.getBlue());
You can then divide those values by the maximum possible difference per channel (255), and you will get the relative difference for each channel.
float pctDiffRed = (float)diffRed / 255;
float pctDiffGreen = (float)diffGreen / 255;
float pctDiffBlue = (float)diffBlue / 255;
After that you can find the average color difference as a percentage:
(pctDiffRed + pctDiffGreen + pctDiffBlue) / 3 * 100
Which would give you a difference in percentage between c1 and c2.
A color value has more than one dimension, so there is no intrinsic way to compare two colors. You have to determine for your use case the meaning of the colors and thereby how to best compare them.
Most likely you want to compare the hue, saturation and/or lightness properties of the colors as opposed to the red/green/blue components. If you are having trouble figuring out how you want to compare them, take some pairs of sample colors and compare them mentally, then try to justify/explain to yourself why they are similar/different.
Once you know which properties/components of the colors you want to compare, then you need to figure out how to extract that information from a color.
Most likely you will just need to convert the color from the common RedGreenBlue representation to HueSaturationLightness, and then calculate something like
avghue = (color1.hue + color2.hue)/2
distance = abs(color1.hue-avghue)
This example would give you a simple scalar value indicating how far the gradient/hue of the colors are from each other.
See HSL and HSV at Wikipedia.
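As a small Python sketch of that hue comparison, using the standard colorsys module (the wrap-around handling of hue, so that 350° and 10° count as close, is my addition and not part of the formula above):
import colorsys
def hue(rgb):
    r, g, b = [c / 255.0 for c in rgb]
    h, l, s = colorsys.rgb_to_hls(r, g, b)  # all components are in [0, 1]
    return h
def hue_distance(rgb1, rgb2):
    d = abs(hue(rgb1) - hue(rgb2))
    return min(d, 1.0 - d)  # circular distance, in [0, 0.5]
print(hue_distance((255, 0, 0), (255, 128, 0)))  # red vs orange: small
print(hue_distance((255, 0, 0), (0, 0, 255)))    # red vs blue: large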
One of the best methods to compare two colors by human perception is CIE76. The difference is called Delta E. When it is less than 1, the human eye cannot recognize the difference.
There is a wonderful color utilities class, ColorUtils (code below), which includes CIE76 comparison methods. It was written by Daniel Strebel, University of Zurich.
From ColorUtils.class I use the method:
static double colorDifference(int r1, int g1, int b1, int r2, int g2, int b2)
r1,g1,b1 - RGB values of the first color
r2,g2,b2 - RGB values ot the second color that you would like to compare
If you work with Android, you can get these values like this:
r1 = Color.red(pixel);
g1 = Color.green(pixel);
b1 = Color.blue(pixel);
ColorUtils.class by Daniel Strebel,University of Zurich:
import android.graphics.Color;
public class ColorUtil {
public static int argb(int R, int G, int B) {
return argb(Byte.MAX_VALUE, R, G, B);
}
public static int argb(int A, int R, int G, int B) {
byte[] colorByteArr = {(byte) A, (byte) R, (byte) G, (byte) B};
return byteArrToInt(colorByteArr);
}
public static int[] rgb(int argb) {
return new int[]{(argb >> 16) & 0xFF, (argb >> 8) & 0xFF, argb & 0xFF};
}
public static int byteArrToInt(byte[] colorByteArr) {
return (colorByteArr[0] << 24) + ((colorByteArr[1] & 0xFF) << 16)
+ ((colorByteArr[2] & 0xFF) << 8) + (colorByteArr[3] & 0xFF);
}
public static int[] rgb2lab(int R, int G, int B) {
//http://www.brucelindbloom.com
float r, g, b, X, Y, Z, fx, fy, fz, xr, yr, zr;
float Ls, as, bs;
float eps = 216.f / 24389.f;
float k = 24389.f / 27.f;
float Xr = 0.964221f; // reference white D50
float Yr = 1.0f;
float Zr = 0.825211f;
// RGB to XYZ
r = R / 255.f; //R 0..1
g = G / 255.f; //G 0..1
b = B / 255.f; //B 0..1
// assuming sRGB (D65)
if (r <= 0.04045)
r = r / 12;
else
r = (float) Math.pow((r + 0.055) / 1.055, 2.4);
if (g <= 0.04045)
g = g / 12;
else
g = (float) Math.pow((g + 0.055) / 1.055, 2.4);
if (b <= 0.04045)
b = b / 12;
else
b = (float) Math.pow((b + 0.055) / 1.055, 2.4);
X = 0.436052025f * r + 0.385081593f * g + 0.143087414f * b;
Y = 0.222491598f * r + 0.71688606f * g + 0.060621486f * b;
Z = 0.013929122f * r + 0.097097002f * g + 0.71418547f * b;
// XYZ to Lab
xr = X / Xr;
yr = Y / Yr;
zr = Z / Zr;
if (xr > eps)
fx = (float) Math.pow(xr, 1 / 3.);
else
fx = (float) ((k * xr + 16.) / 116.);
if (yr > eps)
fy = (float) Math.pow(yr, 1 / 3.);
else
fy = (float) ((k * yr + 16.) / 116.);
if (zr > eps)
fz = (float) Math.pow(zr, 1 / 3.);
else
fz = (float) ((k * zr + 16.) / 116);
Ls = (116 * fy) - 16;
as = 500 * (fx - fy);
bs = 200 * (fy - fz);
int[] lab = new int[3];
lab[0] = (int) (2.55 * Ls + .5);
lab[1] = (int) (as + .5);
lab[2] = (int) (bs + .5);
return lab;
}
/**
* Computes the difference between two RGB colors by converting them to the L*a*b scale and
* comparing them using the CIE76 algorithm { http://en.wikipedia.org/wiki/Color_difference#CIE76}
*/
public static double getColorDifference(int a, int b) {
int r1, g1, b1, r2, g2, b2;
r1 = Color.red(a);
g1 = Color.green(a);
b1 = Color.blue(a);
r2 = Color.red(b);
g2 = Color.green(b);
b2 = Color.blue(b);
int[] lab1 = rgb2lab(r1, g1, b1);
int[] lab2 = rgb2lab(r2, g2, b2);
return Math.sqrt(Math.pow(lab2[0] - lab1[0], 2) + Math.pow(lab2[1] - lab1[1], 2) + Math.pow(lab2[2] - lab1[2], 2));
}
}
Just another answer, although it's similar to Supr's one - just a different color space.
The thing is: humans do not perceive color differences uniformly, and the RGB color space ignores this. As a result, if you use the RGB color space and just compute the Euclidean distance between 2 colors, you may get a difference that is mathematically absolutely correct but wouldn't coincide with what humans would tell you.
This may not be a problem - the difference is not that large, I think - but if you want to solve this "better" you should convert your RGB colors into a color space that was specifically designed to avoid the above problem. There are several, each improving on earlier models (since this is based on human perception, the "correct" values have to be measured from experimental data). There's the Lab color space, which I think would be the best, although it is a bit complicated to convert to. Simpler would be CIE XYZ.
Here's a site that lists the formulas to convert between different color spaces, so you can experiment a bit.
Kotlin version where you specify how much (in percent) the colors should match.
Method call with the optional percent argument:
isMatchingColor(intColor1, intColor2, 95) // should match color if 95% similar
Method body
private fun isMatchingColor(intColor1: Int, intColor2: Int, percent: Int = 90): Boolean {
val threadSold = 255 - (255 / 100f * percent)
val diffAlpha = abs(Color.alpha(intColor1) - Color.alpha(intColor2))
val diffRed = abs(Color.red(intColor1) - Color.red(intColor2))
val diffGreen = abs(Color.green(intColor1) - Color.green(intColor2))
val diffBlue = abs(Color.blue(intColor1) - Color.blue(intColor2))
if (diffAlpha > threadSold) {
return false
}
if (diffRed > threadSold) {
return false
}
if (diffGreen > threadSold) {
return false
}
if (diffBlue > threadSold) {
return false
}
return true
}
All methods below result in a scale from 0-100.
internal static class ColorDifference
{
internal enum Method
{
Binary, // true or false, 0 is false
Square,
Dimensional,
CIE76
}
public static double Calculate(Method method, int argb1, int argb2)
{
int[] c1 = ColorConversion.ArgbToArray(argb1);
int[] c2 = ColorConversion.ArgbToArray(argb2);
return Calculate(method, c1[1], c2[1], c1[2], c2[2], c1[3], c2[3], c1[0], c2[0]);
}
public static double Calculate(Method method, int r1, int r2, int g1, int g2, int b1, int b2, int a1 = -1, int a2 = -1)
{
switch (method)
{
case Method.Binary:
return (r1 == r2 && g1 == g2 && b1 == b2 && a1 == a2) ? 0 : 100;
case Method.CIE76:
return CalculateCIE76(r1, r2, g1, g2, b1, b2);
case Method.Dimensional:
if (a1 == -1 || a2 == -1) return Calculate3D(r1, r2, g1, g2, b1, b2);
else return Calculate4D(r1, r2, g1, g2, b1, b2, a1, a2);
case Method.Square:
return CalculateSquare(r1, r2, g1, g2, b1, b2, a1, a2);
default:
throw new InvalidOperationException();
}
}
public static double Calculate(Method method, Color c1, Color c2, bool alpha)
{
switch (method)
{
case Method.Binary:
return (c1.R == c2.R && c1.G == c2.G && c1.B == c2.B && (!alpha || c1.A == c2.A)) ? 0 : 100;
case Method.CIE76:
if (alpha) throw new InvalidOperationException();
return CalculateCIE76(c1, c2);
case Method.Dimensional:
if (alpha) return Calculate4D(c1, c2);
else return Calculate3D(c1, c2);
case Method.Square:
if (alpha) return CalculateSquareAlpha(c1, c2);
else return CalculateSquare(c1, c2);
default:
throw new InvalidOperationException();
}
}
// A simple idea, based on on a Square
public static double CalculateSquare(int argb1, int argb2)
{
int[] c1 = ColorConversion.ArgbToArray(argb1);
int[] c2 = ColorConversion.ArgbToArray(argb2);
return CalculateSquare(c1[1], c2[1], c1[2], c2[2], c1[3], c2[3]);
}
public static double CalculateSquare(Color c1, Color c2)
{
return CalculateSquare(c1.R, c2.R, c1.G, c2.G, c1.B, c2.B);
}
public static double CalculateSquareAlpha(int argb1, int argb2)
{
int[] c1 = ColorConversion.ArgbToArray(argb1);
int[] c2 = ColorConversion.ArgbToArray(argb2);
return CalculateSquare(c1[1], c2[1], c1[2], c2[2], c1[3], c2[3], c1[0], c2[0]);
}
public static double CalculateSquareAlpha(Color c1, Color c2)
{
return CalculateSquare(c1.R, c2.R, c1.G, c2.G, c1.B, c2.B, c1.A, c2.A);
}
public static double CalculateSquare(int r1, int r2, int g1, int g2, int b1, int b2, int a1 = -1, int a2 = -1)
{
if (a1 == -1 || a2 == -1) return (Math.Abs(r1 - r2) + Math.Abs(g1 - g2) + Math.Abs(b1 - b2)) / 7.65;
else return (Math.Abs(r1 - r2) + Math.Abs(g1 - g2) + Math.Abs(b1 - b2) + Math.Abs(a1 - a2)) / 10.2;
}
// from:http://stackoverflow.com/questions/9018016/how-to-compare-two-colors
public static double Calculate3D(int argb1, int argb2)
{
int[] c1 = ColorConversion.ArgbToArray(argb1);
int[] c2 = ColorConversion.ArgbToArray(argb2);
return Calculate3D(c1[1], c2[1], c1[2], c2[2], c1[3], c2[3]);
}
public static double Calculate3D(Color c1, Color c2)
{
return Calculate3D(c1.R, c2.R, c1.G, c2.G, c1.B, c2.B);
}
public static double Calculate3D(int r1, int r2, int g1, int g2, int b1, int b2)
{
return Math.Sqrt(Math.Pow(Math.Abs(r1 - r2), 2) + Math.Pow(Math.Abs(g1 - g2), 2) + Math.Pow(Math.Abs(b1 - b2), 2)) / 4.41672955930063709849498817084;
}
// Same as above, but made 4D to include alpha channel
public static double Calculate4D(int argb1, int argb2)
{
int[] c1 = ColorConversion.ArgbToArray(argb1);
int[] c2 = ColorConversion.ArgbToArray(argb2);
return Calculate4D(c1[1], c2[1], c1[2], c2[2], c1[3], c2[3], c1[0], c2[0]);
}
public static double Calculate4D(Color c1, Color c2)
{
return Calculate4D(c1.R, c2.R, c1.G, c2.G, c1.B, c2.B, c1.A, c2.A);
}
public static double Calculate4D(int r1, int r2, int g1, int g2, int b1, int b2, int a1, int a2)
{
return Math.Sqrt(Math.Pow(Math.Abs(r1 - r2), 2) + Math.Pow(Math.Abs(g1 - g2), 2) + Math.Pow(Math.Abs(b1 - b2), 2) + Math.Pow(Math.Abs(a1 - a2), 2)) / 5.1;
}
/**
* Computes the difference between two RGB colors by converting them to the L*a*b scale and
* comparing them using the CIE76 algorithm { http://en.wikipedia.org/wiki/Color_difference#CIE76}
*/
public static double CalculateCIE76(int argb1, int argb2)
{
return CalculateCIE76(Color.FromArgb(argb1), Color.FromArgb(argb2));
}
public static double CalculateCIE76(Color c1, Color c2)
{
return CalculateCIE76(c1.R, c2.R, c1.G, c2.G, c1.B, c2.B);
}
public static double CalculateCIE76(int r1, int r2, int g1, int g2, int b1, int b2)
{
int[] lab1 = ColorConversion.ColorToLab(r1, g1, b1);
int[] lab2 = ColorConversion.ColorToLab(r2, g2, b2);
return Math.Sqrt(Math.Pow(lab2[0] - lab1[0], 2) + Math.Pow(lab2[1] - lab1[1], 2) + Math.Pow(lab2[2] - lab1[2], 2)) / 2.55;
}
}
internal static class ColorConversion
{
public static int[] ArgbToArray(int argb)
{
return new int[] { (argb >> 24), (argb >> 16) & 0xFF, (argb >> 8) & 0xFF, argb & 0xFF };
}
public static int[] ColorToLab(int R, int G, int B)
{
// http://www.brucelindbloom.com
double r, g, b, X, Y, Z, fx, fy, fz, xr, yr, zr;
double Ls, fas, fbs;
double eps = 216.0f / 24389.0f;
double k = 24389.0f / 27.0f;
double Xr = 0.964221f; // reference white D50
double Yr = 1.0f;
double Zr = 0.825211f;
// RGB to XYZ
r = R / 255.0f; //R 0..1
g = G / 255.0f; //G 0..1
b = B / 255.0f; //B 0..1
// assuming sRGB (D65)
if (r <= 0.04045) r = r / 12;
else r = (float)Math.Pow((r + 0.055) / 1.055, 2.4);
if (g <= 0.04045) g = g / 12;
else g = (float)Math.Pow((g + 0.055) / 1.055, 2.4);
if (b <= 0.04045) b = b / 12;
else b = (float)Math.Pow((b + 0.055) / 1.055, 2.4);
X = 0.436052025f * r + 0.385081593f * g + 0.143087414f * b;
Y = 0.222491598f * r + 0.71688606f * g + 0.060621486f * b;
Z = 0.013929122f * r + 0.097097002f * g + 0.71418547f * b;
// XYZ to Lab
xr = X / Xr;
yr = Y / Yr;
zr = Z / Zr;
if (xr > eps) fx = (float)Math.Pow(xr, 1 / 3.0);
else fx = (float)((k * xr + 16.0) / 116.0);
if (yr > eps) fy = (float)Math.Pow(yr, 1 / 3.0);
else fy = (float)((k * yr + 16.0) / 116.0);
if (zr > eps) fz = (float)Math.Pow(zr, 1 / 3.0);
else fz = (float)((k * zr + 16.0) / 116);
Ls = (116 * fy) - 16;
fas = 500 * (fx - fy);
fbs = 200 * (fy - fz);
int[] lab = new int[3];
lab[0] = (int)(2.55 * Ls + 0.5);
lab[1] = (int)(fas + 0.5);
lab[2] = (int)(fbs + 0.5);
return lab;
}
}
A simple method that only uses RGB is
cR=R1-R2
cG=G1-G2
cB=B1-B2
uR=R1+R2
distance=cR*cR*(2+uR/256) + cG*cG*4 + cB*cB*(2+(255-uR)/256)
I've used this one for a while now, and it works well enough for most purposes.
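As a direct Python translation of the formula above (a sketch; the function name is mine). It returns a squared, weighted distance, so it is best used for comparing and ranking colors rather than as an absolute value:
def weighted_rgb_distance(c1, c2):
    r1, g1, b1 = c1
    r2, g2, b2 = c2
    cr, cg, cb = r1 - r2, g1 - g2, b1 - b2
    ur = r1 + r2
    return cr*cr*(2 + ur/256) + cg*cg*4 + cb*cb*(2 + (255 - ur)/256)
print(weighted_rgb_distance((255, 0, 0), (250, 10, 5)))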
I've tried various methods like the LAB color space and HSV comparisons, and I've found that luminosity works pretty well for this purpose.
Here is the Python version:
import math
from PIL import ImageColor  # assuming Pillow, which provides ImageColor.getrgb

def lum(c):
    def factor(component):
        component = component / 255
        if component <= 0.03928:
            component = component / 12.92
        else:
            component = math.pow((component + 0.055) / 1.055, 2.4)
        return component
    components = [factor(ci) for ci in c]
    return (components[0] * 0.2126 + components[1] * 0.7152 + components[2] * 0.0722) + 0.05

def color_distance(c1, c2):
    l1 = lum(c1)
    l2 = lum(c2)
    higher = max(l1, l2)
    lower = min(l1, l2)
    return (higher - lower) / higher

c1 = ImageColor.getrgb('white')
c2 = ImageColor.getrgb('yellow')
print(color_distance(c1, c2))
Will give you
0.0687619047619048
Android, using the ColorUtils API RGBToHSL:
I had two int ARGB colors (color1, color2) and I wanted to get the distance/difference between the two colors. Here is what I did:
private float getHue(int color) {
int R = (color >> 16) & 0xff;
int G = (color >> 8) & 0xff;
int B = (color ) & 0xff;
float[] colorHue = new float[3];
ColorUtils.RGBToHSL(R, G, B, colorHue);
return colorHue[0];
}
Then I used the code below to find the distance between the two hues (the method is called as getDistance(getHue(color1), getHue(color2))):
private float getDistance(float hue1, float hue2) {
    float avgHue = (hue1 + hue2) / 2;
    return Math.abs(hue1 - avgHue);
}
The best way is delta E. Delta E is a number that shows the difference between two colors. If delta E < 1, the difference cannot be recognized by human eyes. I wrote some code in canvas and JS for converting RGB to LAB and then calculating delta E. In this example the code recognizes pixels whose color differs from a base color that I saved as LAB1, and if a pixel is different it makes that pixel red. You can increase or decrease the sensitivity of the color difference by increasing or decreasing the acceptable range of delta E. In this example I assigned 10 to delta E in the line where I wrote (deltae <= 10):
<script>
var constants = {
canvasWidth: 700, // In pixels.
canvasHeight: 600, // In pixels.
colorMap: new Array()
};
// -----------------------------------------------------------------------------------------------------
function fillcolormap(imageObj1) {
function rgbtoxyz(red1,green1,blue1){ // a converter for converting rgb model to xyz model
var red2 = red1/255;
var green2 = green1/255;
var blue2 = blue1/255;
if(red2>0.04045){
red2 = (red2+0.055)/1.055;
red2 = Math.pow(red2,2.4);
}
else{
red2 = red2/12.92;
}
if(green2>0.04045){
green2 = (green2+0.055)/1.055;
green2 = Math.pow(green2,2.4);
}
else{
green2 = green2/12.92;
}
if(blue2>0.04045){
blue2 = (blue2+0.055)/1.055;
blue2 = Math.pow(blue2,2.4);
}
else{
blue2 = blue2/12.92;
}
red2 = (red2*100);
green2 = (green2*100);
blue2 = (blue2*100);
var x = (red2 * 0.4124) + (green2 * 0.3576) + (blue2 * 0.1805);
var y = (red2 * 0.2126) + (green2 * 0.7152) + (blue2 * 0.0722);
var z = (red2 * 0.0193) + (green2 * 0.1192) + (blue2 * 0.9505);
var xyzresult = new Array();
xyzresult[0] = x;
xyzresult[1] = y;
xyzresult[2] = z;
return(xyzresult);
} //end of rgb_to_xyz function
function xyztolab(xyz){ //a convertor from xyz to lab model
var x = xyz[0];
var y = xyz[1];
var z = xyz[2];
var x2 = x/95.047;
var y2 = y/100;
var z2 = z/108.883;
if(x2>0.008856){
x2 = Math.pow(x2,1/3);
}
else{
x2 = (7.787*x2) + (16/116);
}
if(y2>0.008856){
y2 = Math.pow(y2,1/3);
}
else{
y2 = (7.787*y2) + (16/116);
}
if(z2>0.008856){
z2 = Math.pow(z2,1/3);
}
else{
z2 = (7.787*z2) + (16/116);
}
var l= 116*y2 - 16;
var a= 500*(x2-y2);
var b= 200*(y2-z2);
var labresult = new Array();
labresult[0] = l;
labresult[1] = a;
labresult[2] = b;
return(labresult);
}
var canvas = document.getElementById('myCanvas');
var context = canvas.getContext('2d');
var imageX = 0;
var imageY = 0;
context.drawImage(imageObj1, imageX, imageY, 240, 140);
var imageData = context.getImageData(0, 0, 240, 140);
var data = imageData.data;
var n = data.length;
// iterate over all pixels
var m = 0;
for (var i = 0; i < n; i += 4) {
var red = data[i];
var green = data[i + 1];
var blue = data[i + 2];
var xyzcolor = new Array();
xyzcolor = rgbtoxyz(red,green,blue);
var lab = new Array();
lab = xyztolab(xyzcolor);
constants.colorMap.push(lab); //fill up the colormap array with lab colors.
}
}
// -----------------------------------------------------------------------------------------------------
function colorize(pixqty) {
function deltae94(lab1,lab2){ //calculating Delta E 1994
var c1 = Math.sqrt((lab1[1]*lab1[1])+(lab1[2]*lab1[2]));
var c2 = Math.sqrt((lab2[1]*lab2[1])+(lab2[2]*lab2[2]));
var dc = c1-c2;
var dl = lab1[0]-lab2[0];
var da = lab1[1]-lab2[1];
var db = lab1[2]-lab2[2];
var dh = Math.sqrt((da*da)+(db*db)-(dc*dc));
var first = dl;
var second = dc/(1+(0.045*c1));
var third = dh/(1+(0.015*c1));
var deresult = Math.sqrt((first*first)+(second*second)+(third*third));
return(deresult);
} // end of deltae94 function
var lab11 = new Array("80","-4","21");
var lab12 = new Array();
var k2=0;
var canvas = document.getElementById('myCanvas');
var context = canvas.getContext('2d');
var imageData = context.getImageData(0, 0, 240, 140);
var data = imageData.data;
for (var i=0; i<pixqty; i++) {
lab12 = constants.colorMap[i];
var deltae = deltae94(lab11,lab12);
if (deltae <= 10) {
data[i*4] = 255;
data[(i*4)+1] = 0;
data[(i*4)+2] = 0;
k2++;
} // end of if
} //end of for loop
context.clearRect(0,0,240,140);
alert(k2);
context.putImageData(imageData,0,0);
}
// -----------------------------------------------------------------------------------------------------
window.addEventListener('load', function () {
var imageObj = new Image();
imageObj.onload = function() {
fillcolormap(imageObj); // build the Lab color map for every pixel
colorize(pixno2);       // then mark the pixels close to the base color
};
imageObj.src = './mixcolor.png';
});
// ---------------------------------------------------------------------------------------------------
var pixno2 = 240*140; // number of pixels in the 240x140 region scanned above
</script>
I used this in my Android app and it seems satisfactory, although RGB space is not recommended:
public double colourDistance(int red1,int green1, int blue1, int red2, int green2, int blue2)
{
double rmean = ( red1 + red2 )/2.0; // average red level, used to weight the channels
int r = red1 - red2;
int g = green1 - green2;
int b = blue1 - blue2;
double weightR = 2 + rmean/256;
double weightG = 4.0;
double weightB = 2 + (255-rmean)/256;
return Math.sqrt(weightR*r*r + weightG*g*g + weightB*b*b);
}
Then I used the following to get percent of similarity:
double maxColDist = 764.8339663572415;
double d1 = colourDistance(red1,green1,blue1,red2,green2,blue2);
String s1 = (int) Math.round(((maxColDist-d1)/maxColDist)*100) + "% match";
It works well enough.
I expect you want to analyze a whole image in the end, don't you? So you could check for the smallest/highest difference relative to the identity color matrix.
Most math operations for processing graphics use matrices, because the algorithms built on them are often faster than classical point-by-point distance and comparison calculations (e.g. for operations using DirectX, OpenGL, ...).
So I think you should start here:
http://en.wikipedia.org/wiki/Identity_matrix
http://en.wikipedia.org/wiki/Matrix_difference_equation
... and as Beska already commented above:
This may not give the best "visible" difference...
This also means that your algorithm depends on your definition of "similar to" if you are processing images.
You'll need to convert any RGB colors into the Lab color space to be able to compare them in the way that humans see them. Otherwise you'll be getting RGB colors that 'match' in some very strange ways.
The Wikipedia link on Color Difference gives you an intro to the various Lab color space difference algorithms that have been defined over the years. The simplest, which just checks the Euclidean distance between two Lab colours, works but has a few flaws.
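For reference, that plain Euclidean (CIE76) variant is only a few lines. A minimal C# sketch, assuming the two colors have already been converted to L*a*b* (for example with a conversion like the JavaScript one earlier on this page):
using System;

public static class ColorDifference
{
    // CIE76 delta E: plain Euclidean distance between two L*a*b* colors.
    // The inputs are assumed to already be in Lab space.
    public static double DeltaE76(double l1, double a1, double b1,
                                  double l2, double a2, double b2)
    {
        double dl = l1 - l2;
        double da = a1 - a2;
        double db = b1 - b2;
        return Math.Sqrt(dl * dl + da * da + db * db);
    }
}
A result below roughly 1 is usually treated as imperceptible, in line with the deltaE < 1 rule of thumb quoted earlier.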
Conveniently there's a Java implementation of the more sophisticated CIEDE2000 algorithm in the OpenIMAJ project. Provide it with your two Lab colours and it'll give you back a single distance value.
The only "right" way to compare colors is to do it with deltaE in CIELab or CIELuv.
But for a lot of applications I think this is a good enough approximation:
distance = 3 * |dR| + 4 * |dG| + 3 * |dB|
I think a weighted manhattan distance makes a lot more sense when comparing colors. Remember that color primaries are only in our head. They don't have any physical significance. CIELab and CIELuv is modelled statistically from our perception of color.
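A minimal C# sketch of that weighted Manhattan approximation, assuming 8-bit RGB channels (the 3/4/3 weights come from the formula above):
using System;

public static class RgbDistance
{
    // Weighted Manhattan distance: 3*|dR| + 4*|dG| + 3*|dB|.
    // Green carries the largest weight because the eye is most sensitive to it.
    public static int WeightedManhattan(int r1, int g1, int b1,
                                        int r2, int g2, int b2)
    {
        return 3 * Math.Abs(r1 - r2)
             + 4 * Math.Abs(g1 - g2)
             + 3 * Math.Abs(b1 - b2);
    }
}
The maximum possible value is (3 + 4 + 3) * 255 = 2550, which makes it easy to normalise into a percentage if you prefer a relative threshold.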
For quick and dirty, you can do
import java.awt.Color;
private Color dropPrecision(Color c,int threshold){
return new Color((c.getRed()/threshold),
(c.getGreen()/threshold),
(c.getBlue()/threshold));
}
public boolean inThreshold(Color _1,Color _2,int threshold){
// Use equals(): == would compare object references, not the quantized color values.
return dropPrecision(_1,threshold).equals(dropPrecision(_2,threshold));
}
making use of integer division to quantize the colors.
Swift 5 Answer
I found this thread because I needed a Swift version of this question. As nobody has answered with the solution, here's mine:
extension UIColor {
var rgba: (red: CGFloat, green: CGFloat, blue: CGFloat, alpha: CGFloat) {
var red: CGFloat = 0
var green: CGFloat = 0
var blue: CGFloat = 0
var alpha: CGFloat = 0
getRed(&red, green: &green, blue: &blue, alpha: &alpha)
return (red, green, blue, alpha)
}
func isSimilar(to colorB: UIColor) -> Bool {
let rgbA = self.rgba
let rgbB = colorB.rgba
let diffRed = abs(rgbA.red - rgbB.red)
let diffGreen = abs(rgbA.green - rgbB.green)
let diffBlue = abs(rgbA.blue - rgbB.blue)
// Average the three channel differences (each in 0...1) and express the result as a percentage.
let pct = (diffRed + diffGreen + diffBlue) / 3 * 100
return pct < 10
}
}
Usage:
let black: UIColor = UIColor.black
let white: UIColor = UIColor.white
let similar: Bool = black.isSimilar(to: white)
I set less than 10% difference to return similar colours, but you can customise this yourself.

Geo Fencing - point inside/outside polygon

I would like to determine a polygon and implement an algorithm which would check if a point is inside or outside the polygon.
Does anyone know if there is any example available of any similar algorithm?
If I remember correctly, the algorithm is to draw a horizontal line through your test point. Count how many edges of the polygon you intersect to reach your point.
If the answer is odd, you're inside. If the answer is even, you're outside.
Edit: Yeah, what he said (Wikipedia):
C# code
bool IsPointInPolygon(List<Loc> poly, Loc point)
{
int i, j;
bool c = false;
// Ray casting (even-odd rule): j trails i so that poly[j]-poly[i] is the current edge;
// c toggles every time a horizontal ray from the point crosses an edge.
for (i = 0, j = poly.Count - 1; i < poly.Count; j = i++)
{
if ((((poly[i].Lt <= point.Lt) && (point.Lt < poly[j].Lt))
|| ((poly[j].Lt <= point.Lt) && (point.Lt < poly[i].Lt)))
&& (point.Lg < (poly[j].Lg - poly[i].Lg) * (point.Lt - poly[i].Lt)
/ (poly[j].Lt - poly[i].Lt) + poly[i].Lg))
{
c = !c;
}
}
return c;
}
Location class
public class Loc
{
private double lt;
private double lg;
public double Lg
{
get { return lg; }
set { lg = value; }
}
public double Lt
{
get { return lt; }
set { lt = value; }
}
public Loc(double lt, double lg)
{
this.lt = lt;
this.lg = lg;
}
}
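A short usage sketch for the two snippets above, assuming both are in scope (plus System.Collections.Generic); the triangle reuses the sample coordinates from the geofence string shown further down this page, and the test point is a made-up value purely for illustration:
// Triangle taken from the sample coordinate string used further down this page.
var fence = new List<Loc>
{
    new Loc(39.114950, -76.873259),
    new Loc(39.114588, -76.872808),
    new Loc(39.112921, -76.870373)
};

var position = new Loc(39.1140, -76.8720);   // hypothetical test point
bool inside = IsPointInPolygon(fence, position);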
After searching the web and trying various implementations and porting them from C++ to C# I finally got my code straight:
public static bool PointInPolygon(LatLong p, List<LatLong> poly)
{
int n = poly.Count;
// Close the ring by appending the first vertex again (note: this modifies the caller's list).
poly.Add(new LatLong { Lat = poly[0].Lat, Lon = poly[0].Lon });
LatLong[] v = poly.ToArray();
int wn = 0; // the winding number counter
// loop through all edges of the polygon
for (int i = 0; i < n; i++)
{ // edge from V[i] to V[i+1]
if (v[i].Lat <= p.Lat)
{ // start y <= P.y
if (v[i + 1].Lat > p.Lat) // an upward crossing
if (isLeft(v[i], v[i + 1], p) > 0) // P left of edge
++wn; // have a valid up intersect
}
else
{ // start y > P.y (no test needed)
if (v[i + 1].Lat <= p.Lat) // a downward crossing
if (isLeft(v[i], v[i + 1], p) < 0) // P right of edge
--wn; // have a valid down intersect
}
}
return wn != 0;
}
private static int isLeft(LatLong P0, LatLong P1, LatLong P2)
{
double calc = ((P1.Lon - P0.Lon) * (P2.Lat - P0.Lat)
- (P2.Lon - P0.Lon) * (P1.Lat - P0.Lat));
if (calc > 0)
return 1;
else if (calc < 0)
return -1;
else
return 0;
}
The isLeft function was giving me rounding problems and I spent hours without realizing that I was doing the conversion wrong, so forgive me for the lame if block at the end of that function.
BTW, this is the original code and article:
http://softsurfer.com/Archive/algorithm_0103/algorithm_0103.htm
By far the best explanation and implementation can be found at
Point In Polygon Winding Number Inclusion
There is even a C++ implementation at the end of the well explained article. This site also contains some great algorithms/solutions for other geometry based problems.
I have modified and used the C++ implementation and also created a C# implementation. You definitely want to use the Winding Number algorithm as it is more accurate than the edge crossing algorithm and it is very fast.
I think there is a simpler and more efficient solution.
Here is the code in C++. It should be simple to convert it to C#.
int pnpoly(int npol, float *xp, float *yp, float x, float y)
{
int i, j, c = 0;
for (i = 0, j = npol-1; i < npol; j = i++) {
if ((((yp[i] <= y) && (y < yp[j])) ||
((yp[j] <= y) && (y < yp[i]))) &&
(x < (xp[j] - xp[i]) * (y - yp[i]) / (yp[j] - yp[i]) + xp[i]))
c = !c;
}
return c;
}
The complete solution in ASP.NET C#: you can see the full details at the article link below, including how to check whether a point (lat, lon) lies inside or outside a polygon using latitudes and longitudes.
Article Reference Link
private static bool checkPointExistsInGeofencePolygon(string latlnglist, string lat, string lng)
{
List<Loc> objList = new List<Loc>();
// sample string should be like this strlatlng = "39.11495,-76.873259|39.114588,-76.872808|39.112921,-76.870373|";
string[] arr = latlnglist.Split('|');
for (int i = 0; i <= arr.Length - 1; i++)
{
string latlng = arr[i];
string[] arrlatlng = latlng.Split(',');
Loc er = new Loc(Convert.ToDouble(arrlatlng[0]), Convert.ToDouble(arrlatlng[1]));
objList.Add(er);
}
Loc pt = new Loc(Convert.ToDouble(lat), Convert.ToDouble(lng));
return IsPointInPolygon(objList, pt);
}
private static bool IsPointInPolygon(List<Loc> poly, Loc point)
{
int i, j;
bool c = false;
for (i = 0, j = poly.Count - 1; i < poly.Count; j = i++)
{
if ((((poly[i].Lt <= point.Lt) && (point.Lt < poly[j].Lt)) ||
((poly[j].Lt <= point.Lt) && (point.Lt < poly[i].Lt))) &&
(point.Lg < (poly[j].Lg - poly[i].Lg) * (point.Lt - poly[i].Lt) / (poly[j].Lt - poly[i].Lt) + poly[i].Lg))
c = !c;
}
return c;
}
Just a heads-up (using an answer as I can't comment): if you want to use point-in-polygon for geofencing, then you need to change your algorithm to work with spherical coordinates. -180 longitude is the same as 180 longitude, and point-in-polygon will break in such a situation.
Relating to kober's answer, I reworked it with cleaner, more readable code and adjusted longitudes that cross the date line:
public bool IsPointInPolygon(List<PointPosition> polygon, double latitude, double longitude)
{
bool isInIntersection = false;
int actualPointIndex = 0;
int pointIndexBeforeActual = polygon.Count - 1;
var offset = calculateLonOffsetFromDateLine(polygon);
longitude = longitude < 0.0 ? longitude + offset : longitude;
foreach (var actualPointPosition in polygon)
{
var p1Lat = actualPointPosition.Latitude;
var p1Lon = actualPointPosition.Longitude;
var p0Lat = polygon[pointIndexBeforeActual].Latitude;
var p0Lon = polygon[pointIndexBeforeActual].Longitude;
if (p1Lon < 0.0) p1Lon += offset;
if (p0Lon < 0.0) p0Lon += offset;
// Jordan curve theorem - odd even rule algorithm
if (isPointLatitudeBetweenPolyLine(p0Lat, p1Lat, latitude)
&& isPointRightFromPolyLine(p0Lat, p0Lon, p1Lat, p1Lon, latitude, longitude))
{
isInIntersection = !isInIntersection;
}
pointIndexBeforeActual = actualPointIndex;
actualPointIndex++;
}
return isInIntersection;
}
private double calculateLonOffsetFromDateLine(List<PointPosition> polygon)
{
double offset = 0.0;
var maxLonPoly = polygon.Max(x => x.Longitude);
var minLonPoly = polygon.Min(x => x.Longitude);
if (Math.Abs(minLonPoly - maxLonPoly) > 180)
{
offset = 360.0;
}
return offset;
}
private bool isPointLatitudeBetweenPolyLine(double polyLinePoint1Lat, double polyLinePoint2Lat, double poiLat)
{
return polyLinePoint2Lat <= poiLat && poiLat < polyLinePoint1Lat || polyLinePoint1Lat <= poiLat && poiLat < polyLinePoint2Lat;
}
private bool isPointRightFromPolyLine(double polyLinePoint1Lat, double polyLinePoint1Lon, double polyLinePoint2Lat, double polyLinePoint2Lon, double poiLat, double poiLon)
{
// lon <(lon1-lon2)*(latp-lat2)/(lat1-lat2)+lon2
return poiLon < (polyLinePoint1Lon - polyLinePoint2Lon) * (poiLat - polyLinePoint2Lat) / (polyLinePoint1Lat - polyLinePoint2Lat) + polyLinePoint2Lon;
}
I'll add one detail to help people who live in the... southern half of the Earth!!
If you're in Brazil (that's my case), our GPS coordinates are all negative,
and all these algorithms gave wrong results.
The easiest way is to use the absolute values of the lat and long of every point. In that case Jan Kobersky's algo is perfect.
Check if a point is inside a polygon or not -
Consider the polygon with vertices a1, a2, a3, a4, a5. The following steps should help in ascertaining whether point P lies inside the polygon or outside.
Compute the vector area of the triangle formed by edge a1->a2 and the vectors connecting a2 to P and P to a1. Similarly, compute the vector area of each of the possible triangles having one side as a side of the polygon and the other two connecting P to that side.
For the point to be inside the polygon, each of these triangles needs to have positive area. If even one of the triangles has a negative area, then point P stands outside the polygon (a small sketch follows below the link).
In order to compute the area of a triangle given vectors representing its 3 edges, refer to http://www.jtaylor1142001.net/calcjat/Solutions/VCrossProduct/VCPATriangle.htm
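A minimal C# sketch of that idea, reusing the Loc class from earlier on this page; it checks that all the cross products have the same sign rather than insisting they are positive, which works for a convex polygon whose vertices are listed in a consistent order:
// Works for convex polygons only, with vertices in a consistent (CW or CCW) order.
public static bool IsInsideConvex(List<Loc> poly, Loc p)
{
    int sign = 0;
    for (int i = 0; i < poly.Count; i++)
    {
        Loc a = poly[i];
        Loc b = poly[(i + 1) % poly.Count];
        // z component of the cross product (b - a) x (p - a):
        // its sign is the sign of the signed area of triangle a-b-p.
        double cross = (b.Lg - a.Lg) * (p.Lt - a.Lt) - (b.Lt - a.Lt) * (p.Lg - a.Lg);
        int s = cross > 0 ? 1 : (cross < 0 ? -1 : 0);
        if (s == 0) continue;               // p lies on the line through this edge
        if (sign == 0) sign = s;
        else if (s != sign) return false;   // mixed signs -> outside
    }
    return true;
}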
The problem is easier if your polygon is convex. If so, you can do a simple test for each line to see if the point is on the inside or outside of that line (extending to infinity in both directions). Otherwise, for concave polygons, draw an imaginary ray from your point out to infinity (in any direction). Count how many times it crosses a boundary line. Odd means the point is inside, even means the point is outside.
This last algorithm is trickier than it looks. You will have to be very careful about what happens when your imaginary ray exactly hits one of the polygon's vertices.
If your imaginary ray goes in the -x direction, you can choose only to count lines that include at least one point whose y coordinate is strictly less than the y coordinate of your point. This is how you get most of the weird edge cases to work correctly.
If you have a simple polygon (none of the lines cross) and you don't have holes you can also triangulate the polygon, which you are probably going to do anyway in a GIS app to draw a TIN, then test for points in each triangle. If you have a small number of edges to the polygon but a large number of points this is fast.
For an interesting point-in-triangle test, see the sketch below.
Otherwise definitely use the winding rule rather than edge crossing; edge crossing has real problems with points on edges, which is very likely if your data is generated from a GPS with limited precision.
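For the point-in-triangle step, a minimal C# sketch of the usual sign test, again reusing the Loc class from earlier on this page (Lg as x, Lt as y):
public static bool IsInsideTriangle(Loc a, Loc b, Loc c, Loc p)
{
    // The sign of each cross product tells which side of a directed edge p lies on.
    double d1 = (b.Lg - a.Lg) * (p.Lt - a.Lt) - (b.Lt - a.Lt) * (p.Lg - a.Lg);
    double d2 = (c.Lg - b.Lg) * (p.Lt - b.Lt) - (c.Lt - b.Lt) * (p.Lg - b.Lg);
    double d3 = (a.Lg - c.Lg) * (p.Lt - c.Lt) - (a.Lt - c.Lt) * (p.Lg - c.Lg);

    bool hasNegative = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool hasPositive = (d1 > 0) || (d2 > 0) || (d3 > 0);

    // Inside (or on an edge) when the three signs never disagree.
    return !(hasNegative && hasPositive);
}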
The polygon is defined as a sequential list of points A, B, C, ..., A, and no side A-B, B-C, ... crosses any other side.
Determine the bounding box Xmin, Xmax, Ymin, Ymax (a small sketch of this pre-check follows after these steps).
Case 1: the test point P lies outside the box, so P is outside the polygon and you are done.
Case 2: the test point P lies inside the box:
Determine the 'diameter' D of the box {[Xmin,Ymin] - [Xmax,Ymax]} (and add a little extra to avoid possible confusion with D landing on a side).
Determine the gradients M of all sides.
Find a gradient Mt most different from all gradients M.
The test line runs from P at gradient Mt for a distance D.
Set the count of intersections to zero.
For each of the sides A-B, B-C, ... test for the intersection of the test line with the side from its start up to but NOT INCLUDING its end, and increment the count of intersections if required. Note that a zero distance from P to the intersection indicates that P is ON a side.
An odd count indicates P is inside the polygon.
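A minimal C# sketch of just the bounding-box pre-check (case 1) from the steps above, reusing the Loc class from earlier on this page and System.Linq; the gradient-based ray test itself is left as described:
public static bool IsOutsideBoundingBox(List<Loc> poly, Loc p)
{
    double xMin = poly.Min(v => v.Lg), xMax = poly.Max(v => v.Lg);
    double yMin = poly.Min(v => v.Lt), yMax = poly.Max(v => v.Lt);

    // Case 1: a point outside the bounding box cannot be inside the polygon.
    return p.Lg < xMin || p.Lg > xMax || p.Lt < yMin || p.Lt > yMax;
}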
I translated the C# method into PHP and added many comments to make the code easier to understand.
Description of PolygonHelps: it checks whether a point is inside or outside a polygon. This procedure uses GPS coordinates and works well when the polygon covers a small geographic area.
Input: $poly is an array of Point objects, the polygon's vertex list ([{Point}, {Point}, ...]); $point is the point to check, where a Point looks like {"lat" => "x.xxx", "lng" => "y.yyy"}.
$n is the number of vertices in the polygon. For each vertex, the method builds the line through the current vertex and the previous one and checks whether a ray from the point intersects that edge; $c toggles each time an intersection exists. When $c is false, the number of intersections with the polygon is even, so the point is outside; when $c is true, the number of intersections is odd, so the point is inside.
So the method returns true if the point is inside the polygon, and false otherwise.
class PolygonHelps {
public static function isPointInPolygon(&$poly, $point){
$c = false;
$n = $j = count($poly);
for ($i = 0, $j = $n - 1; $i < $n; $j = $i++){
if ( ( ( ( $poly[$i]->lat <= $point->lat ) && ( $point->lat < $poly[$j]->lat ) )
|| ( ( $poly[$j]->lat <= $point->lat ) && ( $point->lat < $poly[$i]->lat ) ) )
&& ( $point->lng < ( $poly[$j]->lng - $poly[$i]->lng )
* ( $point->lat - $poly[$i]->lat )
/ ( $poly[$j]->lat - $poly[$i]->lat )
+ $poly[$i]->lng ) ){
$c = !$c;
}
}
return $c;
}
}
Jan's answer is great.
Here is the same code using the GeoCoordinate class instead.
using System.Device.Location;
...
public static bool IsPointInPolygon(List<GeoCoordinate> poly, GeoCoordinate point)
{
int i, j;
bool c = false;
for (i = 0, j = poly.Count - 1; i < poly.Count; j = i++)
{
if ((((poly[i].Latitude <= point.Latitude) && (point.Latitude < poly[j].Latitude))
|| ((poly[j].Latitude <= point.Latitude) && (point.Latitude < poly[i].Latitude)))
&& (point.Longitude < (poly[j].Longitude - poly[i].Longitude) * (point.Latitude - poly[i].Latitude)
/ (poly[j].Latitude - poly[i].Latitude) + poly[i].Longitude))
c = !c;
}
return c;
}
You can try this simple class https://github.com/xopbatgh/sb-polygon-pointer
It is easy to work with:
You just insert the polygon coordinates into an array,
then ask the library whether the desired point with lat/lng is inside the polygon.
$polygonBox = [
[55.761515, 37.600375],
[55.759428, 37.651156],
[55.737112, 37.649566],
[55.737649, 37.597301],
];
$sbPolygonEngine = new sbPolygonEngine($polygonBox);
$isCrosses = $sbPolygonEngine->isCrossesWith(55.746768, 37.625605);
// $isCrosses is boolean
