Eigen with PointCloud (PCL)

I have been following the tutorial http://pointclouds.org/documentation/tutorials/pcl_visualizer.php#pcl-visualizer and could get a simple viewer working.
I looked up the documentation and found the function getMatrixXfMap which returns the Eigen::MatrixXf from a PointCloud.
// Get Eigen matrix
Eigen::MatrixXf M = basic_cloud_ptr->getMatrixXfMap();
cout << "(Eigen) #row :" << M.rows() << endl;
cout << "(Eigen) #col :" << M.cols() << endl;
Next I process M (basically rotations, translations and some other transforms). Now, how is it possible to set M back into the PointCloud efficiently? Or do I need to push_back() one point at a time?

You do not need to cast your PCL cloud to an Eigen::MatrixXf, do the transformations, and cast back. You can simply apply the transformation to your input cloud directly:
pcl::PointCloud<pcl::PointXYZ>::Ptr source_cloud (new pcl::PointCloud<pcl::PointXYZ> ());
// Fill the cloud
// .....
Eigen::Affine3f transform_2 = Eigen::Affine3f::Identity();
// Define a translation of 2.5 meters on the x axis.
transform_2.translation() << 2.5, 0.0, 0.0;
// Rotate theta radians around the Z axis
transform_2.rotate (Eigen::AngleAxisf (theta, Eigen::Vector3f::UnitZ()));
// Print the transformation
printf ("\nMethod #2: using an Affine3f\n");
std::cout << transform_2.matrix() << std::endl;
// Executing the transformation
pcl::PointCloud<pcl::PointXYZ>::Ptr transformed_cloud (new pcl::PointCloud<pcl::PointXYZ> ());
// Apply transform_2 to the source cloud
pcl::transformPointCloud (*source_cloud, *transformed_cloud, transform_2);
Taken from the PCL transformation tutorial.
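For completeness, here is a minimal self-contained sketch of that approach (assuming pcl::PointXYZ and the headers listed below; the single point is placeholder data). As a side note, getMatrixXfMap() returns an Eigen::Map that references the cloud's own storage, so if you really do want to work on the Eigen view, edits made through that map should end up in the cloud directly, without a push_back() loop.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/transforms.h>   // pcl::transformPointCloud
#include <Eigen/Geometry>
#include <iostream>

int main ()
{
  pcl::PointCloud<pcl::PointXYZ>::Ptr source_cloud (new pcl::PointCloud<pcl::PointXYZ> ());
  source_cloud->push_back (pcl::PointXYZ (1.0f, 2.0f, 3.0f));   // placeholder data

  // Translate 2.5 m on x and rotate ~45 degrees around the Z axis.
  Eigen::Affine3f transform = Eigen::Affine3f::Identity ();
  transform.translation () << 2.5f, 0.0f, 0.0f;
  transform.rotate (Eigen::AngleAxisf (0.785398f, Eigen::Vector3f::UnitZ ()));

  // Apply it directly to the cloud; no Eigen round trip needed.
  pcl::PointCloud<pcl::PointXYZ>::Ptr transformed_cloud (new pcl::PointCloud<pcl::PointXYZ> ());
  pcl::transformPointCloud (*source_cloud, *transformed_cloud, transform);

  // A writable Eigen view onto the cloud's internal buffer (one point per column).
  auto M = transformed_cloud->getMatrixXfMap ();
  std::cout << "(Eigen) " << M.rows () << " x " << M.cols () << std::endl;
  return 0;
}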

Related

Has anyone encountered this striping artifact during "Ray Tracing in One Weekend"?

I am trying to port "Ray Tracing in One Weekend" to a Metal compute shader. I am getting these stripe artifacts in my project:
Is it because my random generator does not work well?
Does anyone have a clue?
// Implemented following https://www.pcg-random.org/
typedef struct { uint64_t state; uint64_t inc; } pcg32_random_t;
uint32_t pcg32_random_r(thread pcg32_random_t* rng)
{
uint64_t oldstate = rng->state;
rng->state = oldstate * 6364136223846793005ULL + rng->inc;
uint32_t xorshifted = ((oldstate >> 18u) ^ oldstate) >> 27u;
uint32_t rot = oldstate >> 59u;
return (xorshifted >> rot) | (xorshifted << ((-rot) & 31));
}
void pcg32_srandom_r(thread pcg32_random_t* rng, uint64_t initstate, uint64_t initseq)
{
rng->state = 0U;
rng->inc = (initseq << 1u) | 1u;
pcg32_random_r(rng);
rng->state += initstate;
pcg32_random_r(rng);
}
// Generate a float in [0, 1)
float randomF(thread pcg32_random_t* rng)
{
//return pcg32_random_r(rng)/float(UINT_MAX);
return ldexp(float(pcg32_random_r(rng)), -32);
}
// Generate a float between x_min and x_max
float randomRange(thread pcg32_random_t* rng, float x_min, float x_max){
return randomF(rng) * (x_max - x_min) + x_min;
}
I found this link. It says that the primary ray hit point is either above or below the sphere's surface a little bit due to floating-point precision error. It is a z-fighting problem.
Seeing these "circles aroune the image center" artifact is almost always a dead give-away for z-fighting (rays along any such "circle" always have the same distance to any flat object you're looking at, so they either all round up or all round down, giving you this artifact).
This z-fighting then translates to this artifact because sometimes all the rays on such a circle are inside the sphere (meaning their shadow rays get self-occluded by the sphere), or all outside (they do what they should). What you want to do is offset the ray origin of the shadow rays a tiny bit (say, 1e-3f) along the normal direction at the hitpoint.
You may also want to read up on Carsten Waechter's article in Ray Tracing Gems 1 - it's for triangles, but explains the problem and potential solution very well.
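A minimal sketch of that offset, with illustrative names (hit_point, normal) and an arbitrarily chosen epsilon; the point is simply to start the shadow ray slightly off the surface so floating-point error cannot place its origin inside the sphere:
struct Vec3 { float x, y, z; };

static Vec3 add (const Vec3 &a, const Vec3 &b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul (const Vec3 &a, float s)       { return {a.x * s, a.y * s, a.z * s}; }

// Offset the shadow-ray origin along the surface normal at the hit point so
// the ray cannot immediately self-intersect the surface it started from.
Vec3 shadow_ray_origin (const Vec3 &hit_point, const Vec3 &normal)
{
    const float eps = 1e-3f;   // tune to the scene scale
    return add (hit_point, mul (normal, eps));
}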

Basic space carving algorithm

I have the following problem, as shown in the figure. I have a point cloud and a mesh generated by a tetrahedralization algorithm. How would I carve the mesh using that algorithm? Are the landmarks the points of the point cloud?
Pseudo code of the algorithm:
for every 3D feature point
convert it to 2D projected coordinates
for every 2D feature point
cast a ray toward the polygons of the mesh
get the intersection point
if z_intersection < z of the 3D feature point
for (every triangle vertex)
cull that triangle
Here is a follow-up implementation of the algorithm mentioned by the guru Spektre :)
Update code for the algorithm:
int i;
for (i = 0; i < out.numberofpoints; i++)
{
Ogre::Vector3 ray_pos = pos; // camera position
Ogre::Vector3 ray_dir = (Ogre::Vector3 (out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]) - pos).normalisedCopy(); // vertex - camera pos
Ogre::Ray ray;
ray.setOrigin(Ogre::Vector3( ray_pos.x, ray_pos.y, ray_pos.z));
ray.setDirection(Ogre::Vector3(ray_dir.x, ray_dir.y, ray_dir.z));
Ogre::Vector3 result;
unsigned int u1;
unsigned int u2;
unsigned int u3;
bool rayCastResult = RaycastFromPoint(ray.getOrigin(), ray.getDirection(), result, u1, u2, u3);
if ( rayCastResult )
{
Ogre::Vector3 targetVertex(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]);
float distanceTargetFocus = targetVertex.squaredDistance(pos);
float distanceIntersectionFocus = result.squaredDistance(pos);
if(distanceTargetFocus >= distanceIntersectionFocus) // squared distances are already non-negative
{
if ( u1 != -1 && u2 != -1 && u3 != -1)
{
std::cout << "Remove index "<< "u1 ==> " <<u1 << "u2 ==>"<<u2<<"u3 ==> "<<u3<< std::endl;
updatedIndices.erase(updatedIndices.begin()+ u1);
updatedIndices.erase(updatedIndices.begin()+ u2);
updatedIndices.erase(updatedIndices.begin()+ u3);
}
}
}
}
if ( updatedIndices.size() <= out.numberoftrifaces)
{
std::cout << "current face list===> "<< out.numberoftrifaces << std::endl;
std::cout << "deleted face list===> "<< updatedIndices.size() << std::endl;
manual->begin("Pointcloud", Ogre::RenderOperation::OT_TRIANGLE_LIST);
for (int n = 0; n < out.numberofpoints; n++)
{
Ogre::Vector3 vertexTransformed = Ogre::Vector3( out.pointlist[3*n+0], out.pointlist[3*n+1], out.pointlist[3*n+2]) - mReferencePoint;
vertexTransformed *=1000.0 ;
vertexTransformed = mDeltaYaw * vertexTransformed;
manual->position(vertexTransformed);
}
for (size_t n = 0; n + 2 < updatedIndices.size(); n += 3) // 3 indices per triangle
{
int n0 = updatedIndices[n+0];
int n1 = updatedIndices[n+1];
int n2 = updatedIndices[n+2];
if ( n0 < 0 || n1 <0 || n2 <0 )
{
std::cout<<"negative indices"<<std::endl;
break;
}
manual->triangle(n0, n1, n2);
}
manual->end();
}
Follow up with the algorithm:
I now have two versions: one is the triangulated one and the other is the carved version.
It's not a surface mesh.
Here are the two files
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_non_triangulated.obj
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_triangulated.obj
I see it like this:
So you have an image from a camera with a known matrix, FOV and focal length.
From that you know exactly where the focal point is and where the image is projected onto the camera chip (the Z_near plane). So any vertex, its corresponding pixel and the focal point lie on the same line.
So for each view, cast a ray from the focal point to each visible vertex of the point cloud and test whether any face of the mesh is hit before the face containing the target vertex. If yes, remove it, as it would block the visibility.
A landmark in this context is just a feature point corresponding to a vertex of the point cloud. It can be anything detectable (a change of intensity, color, pattern, whatever); usually SIFT/SURF is used for this. You should have them located already, as they are the input for the point cloud generation. If not, you can peek at the pixel corresponding to each vertex and test for the background color.
I am not sure how you want to do this without the input images. For that you need to decide which vertex is visible from which side/view. Maybe it is doable from nearby vertices somehow (like using vertex density or correspondence to a planar face...), or the algorithm could be changed to find unused vertices inside the mesh instead.
To cast a ray do this:
ray_pos=tm_eye*vec4(imgx/aspect,imgy,0.0,1.0);
ray_dir=ray_pos-tm_eye*vec4(0.0,0.0,-focal_length,1.0);
where tm_eye is the camera direct transform matrix and (imgx, imgy) is the 2D pixel position in the image, normalized to <-1,+1> with (0,0) in the middle of the image. The focal_length determines the FOV of the camera, and aspect is the image resolution ratio image_ys/image_xs.
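The same two lines written out with glm types, in case that helps (a sketch; tm_eye, focal_length and aspect are assumed to be set up exactly as described above, and the direction is normalized here for convenience):
#include <glm/glm.hpp>

// Build the primary ray through the normalized image point (imgx, imgy).
// tm_eye is the camera direct (camera-to-world) transform matrix.
void make_ray (const glm::mat4 &tm_eye, float imgx, float imgy,
               float focal_length, float aspect,
               glm::vec3 &ray_pos, glm::vec3 &ray_dir)
{
    glm::vec4 p = tm_eye * glm::vec4 (imgx / aspect, imgy, 0.0f, 1.0f);   // point on image plane
    glm::vec4 f = tm_eye * glm::vec4 (0.0f, 0.0f, -focal_length, 1.0f);   // focal point
    ray_pos = glm::vec3 (p);
    ray_dir = glm::normalize (glm::vec3 (p) - glm::vec3 (f));             // focal point -> image plane
}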
The ray/triangle intersection equation can be found here:
Reflection and refraction impossible without recursive ray tracing?
If I extract it:
vec3 v0,v1,v2; // input triangle vertexes
vec3 e1,e2,n,p,q,r;
float t,u,v,det,idet;
//compute ray triangle intersection
e1=v1-v0;
e2=v2-v0;
// Calculate planes normal vector
p=cross(ray[i0].dir,e2);
det=dot(e1,p);
// Ray is parallel to plane
if (abs(det)<1e-8) no intersection;
idet=1.0/det;
r=ray[i0].pos-v0;
u=dot(r,p)*idet;
if ((u<0.0)||(u>1.0)) no intersection;
q=cross(r,e1);
v=dot(ray[i0].dir,q)*idet;
if ((v<0.0)||(u+v>1.0)) no intersection;
t=dot(e2,q)*idet;
if ((t>_zero)&&(t<=tt)) // tt is distance to target vertex
{
// intersection
}
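Wrapped into a self-contained function (a sketch using glm; this is the same Möller–Trumbore style test as above, with _zero replaced by a small hard-coded epsilon):
#include <glm/glm.hpp>
#include <cmath>

// Returns true if the ray (org, dir) hits triangle (v0, v1, v2) at a distance
// 0 < t <= t_max, where t_max is the distance to the target vertex.
bool ray_hits_triangle (const glm::vec3 &org, const glm::vec3 &dir,
                        const glm::vec3 &v0, const glm::vec3 &v1, const glm::vec3 &v2,
                        float t_max, float &t)
{
    const glm::vec3 e1 = v1 - v0;
    const glm::vec3 e2 = v2 - v0;
    const glm::vec3 p  = glm::cross (dir, e2);
    const float det = glm::dot (e1, p);
    if (std::fabs (det) < 1e-8f) return false;        // ray parallel to triangle plane
    const float idet = 1.0f / det;
    const glm::vec3 r = org - v0;
    const float u = glm::dot (r, p) * idet;
    if (u < 0.0f || u > 1.0f) return false;
    const glm::vec3 q = glm::cross (r, e1);
    const float v = glm::dot (dir, q) * idet;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = glm::dot (e2, q) * idet;
    return t > 1e-6f && t <= t_max;
}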
Follow-ups:
To move between normalized image coordinates (imgx,imgy) and raw image coordinates (rawx,rawy) for an image of size (imgxs,imgys), where (0,0) is the top-left corner and (imgxs-1,imgys-1) is the bottom-right corner, you need:
imgx = (2.0*rawx / (imgxs-1)) - 1.0
imgy = 1.0 - (2.0*rawy / (imgys-1))
rawx = (imgx + 1.0)*(imgxs-1)/2.0
rawy = (1.0 - imgy)*(imgys-1)/2.0
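Those four formulas map directly to code; a small sketch (same conventions as above, no rounding or pixel-center handling):
// Normalized <-1,+1> image coordinates <-> raw pixel coordinates.
// Raw (0,0) is the top-left corner, (imgxs-1, imgys-1) the bottom-right one.
void raw_to_normalized (float rawx, float rawy, int imgxs, int imgys,
                        float &imgx, float &imgy)
{
    imgx = (2.0f * rawx / float (imgxs - 1)) - 1.0f;
    imgy = 1.0f - (2.0f * rawy / float (imgys - 1));
}

void normalized_to_raw (float imgx, float imgy, int imgxs, int imgys,
                        float &rawx, float &rawy)
{
    rawx = (imgx + 1.0f) * float (imgxs - 1) * 0.5f;
    rawy = (1.0f - imgy) * float (imgys - 1) * 0.5f;
}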
[Progress update 1]
I finally got to the point where I can compile sample test input data for this, just to get started (as you are unable to share valid data at all):
I created a small app with a hard-coded table mesh (gray) and point cloud (aqua) and simple camera control, where I can save any number of views (screenshot + camera direct matrix). When loaded back, it aligns with the mesh itself (the yellow ray goes through the aqua dot in the image and through the table mesh too). The blue lines are cast from the camera focal point to its corners. This will emulate the input you have. The second part of the app will use only these images and matrices together with the point cloud (no mesh surface anymore), tetrahedronize it (already finished), then cast a ray through each landmark in each view (aqua dot) and remove all tetrahedrons hit before the target vertex of the point cloud (this part is not even started yet, maybe on the weekend)... And lastly store only the surface triangles (easy: just use all triangles which are used only once; also already finished except the save part, but writing a Wavefront OBJ from that is easy...).
[Progress update 2]
I added landmark detection and matching with the point cloud.
As you can see, only valid rays are cast (those that are visible in the image), so some points of the point cloud do not cast rays (the singular aqua dots). So now only the ray/triangle intersection and the tetrahedron removal from the list are missing...

Eigen: convert rotation matrix to quaternion then back getting completely different matrices

Can anyone help me out with Eigen? I tried to convert a rotation matrix to a quaternion and then back, and got completely different matrices. I am not able to trust the quaternion conversion before understanding this issue.
Here is the code:
#include <Eigen/Geometry>
#include <iostream>
void Print_Quaternion(Eigen::Quaterniond &q){
std::cout<<"["<<q.w()<<" "<<q.x()<<" "<<q.y()<<" "<<q.z()<<"]"<<std::endl;
}
void Verify_Orthogonal_Matrix(Eigen::Matrix3d &m)
{
std::cout<<"|c0|="<<m.col(0).norm()<<",|c1|="<<m.col(1).norm()<<",|c2|="<<m.col(2).norm()<<std::endl;
std::cout<<"c0c1="<<m.col(0).dot(m.col(1))<<",c1c2="<<m.col(1).dot(m.col(2))<<",c0c2="<<m.col(0).dot(m.col(2))<<std::endl;
}
int main()
{
Eigen::Matrix3d m; m<<0.991601,0.102421,-0.078975,0.125398,-0.611876,0.78095,-0.0316631,0.784294,0.619581;
std::cout<<"Input matrix:"<<std::endl<<m<<std::endl;
std::cout<<"Verify_Orthogonal_Matrix:"<<std::endl;
Verify_Orthogonal_Matrix(m);
std::cout<<"Convert to quaternion q:"<<std::endl;
Eigen::Quaterniond q(m);
Print_Quaternion(q);
std::cout<<"Convert back to rotation matrix m1="<<std::endl;
Eigen::Matrix3d m1=q.normalized().toRotationMatrix();
std::cout<<m1<<std::endl;
std::cout<<"Verify_Orthogonal_Matrix:"<<std::endl;
Verify_Orthogonal_Matrix(m1);
std::cout<<"Convert again to quaternion q1="<<std::endl;
Eigen::Quaterniond q1(m1);
Print_Quaternion(q1);
}
Here is the result I got:
Input matrix:
0.991601 0.102421 -0.078975
0.125398 -0.611876 0.78095
-0.0316631 0.784294 0.619581
Verify_Orthogonal_Matrix:
|c0|=1,|c1|=1,|c2|=1
c0c1=-4.39978e-07,c1c2=4.00139e-07,c0c2=2.39639e-08
Convert to quaternion q:
[0.706984 0.00118249 -0.0167302 0.00812501]
Convert back to rotation matrix m1=
0.998617 -0.0230481 -0.047257
0.0228899 0.99973 -0.00388638
0.0473339 0.0027993 0.998875
Verify_Orthogonal_Matrix:
|c0|=1,|c1|=1,|c2|=1
c0c1=1.73472e-18,c1c2=-4.33681e-19,c0c2=6.93889e-18
Convert again to quaternion q1=
[0.999653 0.001672 -0.0236559 0.0114885]
Did I do something wrong here? I feel that this should be a well-known problem but I got stuck here. Can someone help me out?
The input matrix is not a rotation matrix; it contains mirroring. Its determinant is -1, but a rotation matrix must have determinant +1.
Check this code for orthogonalization, and look at the signs of the last column:
m.col(0).normalize();
m.col(1).normalize();
m.col(2) = m.col(0).cross(m.col(1));
m.col(2).normalize();
m.col(0) = m.col(1).cross(m.col(2));
m.col(0).normalize();
std::cout << "orthogonal matrix:" << std::endl << m << std::endl;
Input matrix:
0.991601 0.102421 -0.078975
0.125398 -0.611876 0.78095
-0.0316631 0.784294 0.619581
orthogonal matrix:
0.991601 0.102421 0.078975
0.125398 -0.611876 -0.78095
-0.0316628 0.784294 -0.619581
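A quick way to catch this case before converting is to test the determinant with Eigen; a sketch (rebuilding the third column is just one possible repair and assumes the first two columns are the trustworthy ones):
#include <Eigen/Geometry>
#include <iostream>

int main ()
{
    Eigen::Matrix3d m;
    m << 0.991601,   0.102421, -0.078975,
         0.125398,  -0.611876,  0.78095,
        -0.0316631,  0.784294,  0.619581;

    // A proper rotation matrix has determinant +1; -1 indicates a reflection.
    std::cout << "det(m) = " << m.determinant() << std::endl;

    if (m.determinant() < 0.0)
        m.col(2) = m.col(0).cross(m.col(1));   // restore a right-handed frame

    Eigen::Quaterniond q(m);                   // now safe to convert
    std::cout << "[" << q.w() << " " << q.x() << " " << q.y() << " " << q.z() << "]" << std::endl;
    return 0;
}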

HDR color-space transformations result in negative RGB values (Yxy to XYZ to sRGB)

I'm currently adding HDR to an old engine and stumbled on a color-space transformation problem.
I define my lights in the Yxy color space.
Then I convert Yxy to XYZ,
followed by the XYZ to sRGB transformation.
I use RGB values > 1.0 when rendering and normalize the result with tone mapping at the end.
I'm working with rather large numbers, since the main light source is the sun with an illuminance of up to 150k lux.
YxyToXYZ function
osg::Vec3 YxyToXYZ( const osg::Vec3& Yxy )
{
if ( Yxy[2] > 0.0f )
{
return osg::Vec3( Yxy[0] * Yxy[1] / Yxy[2] , Yxy[0] , Yxy[0] * ( 1.0f - Yxy[1] - Yxy[2]) / Yxy[2] );
}
else
{
return osg::Vec3( 0.0f , 0.0f , 0.0f );
}
}
XYZtosRGB
osg::Vec3 XYZToSpectralRGB( const osg::Vec3& XYZ )
{
// Wikipedia matrix
osg::Vec3 rgb;
rgb[0] = 3.240479 * XYZ[0] - 1.537150 * XYZ[1] - 0.498535 * XYZ[2];
rgb[1] = -0.969256 * XYZ[0] + 1.875992 * XYZ[1] + 0.041556 * XYZ[2];
rgb[2] = 0.055648 * XYZ[0] - 0.204043 * XYZ[1] + 1.057311 * XYZ[2];
std::cout << "newrgb rgb r:" << rgb[0] << " g:" << rgb[1] << " b:" << rgb[2] << std::endl;
// The matrix in pbrt book p. 235 gives unexpected results. We expect that if we have
// x = y = 0.33333 we get a white pixel but that matrix gives red. Hence we use a different
// matrix that is often used by 3D people
rgb[0] = 2.5651 * XYZ[0] -1.1665 * XYZ[1] -0.3986 * XYZ[2];
rgb[1] = -1.0217 * XYZ[0] + 1.9777 * XYZ[1] + 0.0439 * XYZ[2];
rgb[2] = 0.0753 * XYZ[0] -0.2543 * XYZ[1] + 1.1892 * XYZ[2];
std::cout << "oldrgb rgb r:" << rgb[0] << " g:" << rgb[1] << " b:" << rgb[2] << std::endl;
return rgb;
}
Test samples:
Yxy Y:1 x:1 y:1
XYZ X:1 Y:1 Z:-1
newrgb rgb r:2.20186 g:0.86518 b:-1.20571
oldrgb rgb r:1.7972 g:0.9121 b:-1.3682
Yxy Y:25 x:0.26 y:0.28
XYZ X:23.2143 Y:25 Z:41.0714
newrgb rgb r:16.3211 g:26.106 b:39.616
oldrgb rgb r:14.0134 g:27.5275 b:44.2327
Yxy Y:3100 x:0.27 y:0.29
XYZ X:2886.21 Y:3100 Z:4703.45
newrgb rgb r:2242.7 g:3213.56 b:4501.09
oldrgb rgb r:1912.47 g:3388.51 b:5022.34
Yxy Y:6e+06 x:0.33 y:0.33
XYZ X:6e+06 Y:6e+06 Z:6.18182e+06
newrgb rgb r:7.13812e+06 g:5.69731e+06 b:5.64573e+06
oldrgb rgb r:5.92753e+06 g:6.00738e+06 b:6.27742e+06
Question:
I suppose the negative values should just be clamped away? Or do I have an error in my calculations?
The two matrices produce similar but different values (primaries very close to sRGB). I would like to replace the old matrix (which I have no idea where it comes from) with the Wikipedia one. Does anyone know where the old matrix comes from and which one is correct?
I found a partial answer at "Yxy to RGB conversion", which sounds like it is from a former colleague :)... but it doesn't solve my problems.
Many Thanks
Your newrgb computations are correct; I get the same output using Colour (the colour-science Python package):
import colour
xyY = (1.0, 1.0, 1.0)
XYZ = colour.xyY_to_XYZ(xyY)
print(colour.XYZ_to_sRGB(XYZ, apply_encoding_cctf=False))
xyY = (0.26, 0.28, 25.0)
XYZ = colour.xyY_to_XYZ(xyY)
print(colour.XYZ_to_sRGB(XYZ, apply_encoding_cctf=False))
xyY = (0.27, 0.29, 3100.0)
XYZ = colour.xyY_to_XYZ(xyY)
print(colour.XYZ_to_sRGB(XYZ, apply_encoding_cctf=False))
xyY = (0.33, 0.33, 6e+06)
XYZ = colour.xyY_to_XYZ(xyY)
print(colour.XYZ_to_sRGB(XYZ, apply_encoding_cctf=False))
# [ 2.2020461 0.86530782 -1.20530687]
# [ 16.31921754 26.10605109 39.60507773]
# [ 2242.49706509 3213.58480355 4499.85141917]
# [ 7138073.69325106 5697605.86069197 5644291.15301836]
You get negative values in the first conversion because your xy chromaticity coordinates are outside the spectral locus, thus they represent imaginary colours.
The Wikipedia matrix is correct and is the one from IEC 61966-2-1:1999, which is the official standard for the sRGB colourspace.
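If your chromaticities are valid (inside the spectral locus) and you still land slightly outside the sRGB gamut, the pragmatic handling the question suggests, clamping negative components to zero after the XYZ to sRGB matrix, is a common choice (proper gamut mapping is more involved); a minimal sketch using the same osg::Vec3 type as above:
#include <osg/Vec3>
#include <algorithm>

// Clamp out-of-gamut (negative) components to zero after the XYZ -> sRGB
// matrix. This simply discards the out-of-gamut part of the colour.
osg::Vec3 clampToGamut (const osg::Vec3& rgb)
{
    return osg::Vec3 (std::max (rgb[0], 0.0f),
                      std::max (rgb[1], 0.0f),
                      std::max (rgb[2], 0.0f));
}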

Depth / Range Image to Point Cloud

I am trying to create a point cloud from a depth image / range image / displacement map (grey values which correspond to z depth).
Here is the depth image of a trailer vehicle:
http://de.tinypic.com/r/rlvp80/8
Camera Parameters, no rotation: x,y,z: 0,25,0
I get this point cloud with a sloped ground plane, which should be flat (just a quad at 0,0,0 with no rotation):
rotated:
http://de.tinypic.com/r/2mnglj6/8
Here is my code, I try to do this:
normalizedCameraRay = normalize(CameraRay);
rayFactor = zDepthValueOfPixelXY / normalizedCameraRay.zValue; // zValue is <= 1
Point_in_3D = rayFactor * normalizedCameraRay;
This doesn't seem to work. If I use zDepthValueOfPixelXY * normalizedCameraRay, I get radial distortion (everything looks a bit like a sphere).
Any ideas how to fix this? I attached my code below.
Thank you for your ideas
float startPositionX = -((float)image.rows)/2;
float startPositionY = -((float)image.cols)/2;
glm::vec3 cameraWorldPosition(x,y,z);
qDebug() << x << "," << y << ", " << z;
for(int a = 0; a < image.rows;a++)
{
for(int b = 0; b < image.cols;b++)
{
float movementX = startPositionX+b;
float movementY = startPositionY+a;
glm::vec3 ray(movementX, movementY, focal_length);
ray = glm::normalize(ray);
float rayFactor = ((float)image.at<u_int16_t>(a,b)) / ray[2];
glm::vec3 completeRay = rayFactor*ray;
pointCloud[a][b] = completeRay;
/*
ray = completeRay - cameraWorldPosition;
ray = glm::normalize(ray);
rayFactor = ((float)image.at<u_int16_t>(a,b)) / ray[2];
ray = rayFactor * ray;
pointCloud[a][b] = ray;*/
/*
ray = glm::vec3(startPositionX+b,startPositionY-a, image.at<u_int16_t>(a,b));
pointCloud[a][b] = ray;
*/
}
}
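For reference, the usual pinhole back-projection keeps the stored grey value as the z coordinate and scales x and y by z over the focal length, instead of scaling a normalized ray; a sketch under that assumption (cx, cy is the principal point, here the image centre, and the focal length is in pixels):
#include <glm/glm.hpp>

// Back-project pixel (col, row) with depth z using a pinhole camera model
// (no lens distortion). cx, cy: principal point in pixels; f: focal length in pixels.
glm::vec3 backProject (float col, float row, float z, float cx, float cy, float f)
{
    const float X = (col - cx) * z / f;
    const float Y = (row - cy) * z / f;
    return glm::vec3 (X, Y, z);   // z stays the stored depth, so a flat floor stays flat
}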
