Efficiently Transforming from Spherical Coordinates to Cartesian Coordinates using Eigen - performance

I need to transform the coordinates from spherical to Cartesian space using the Eigen C++ Library. The following code serves the purpose.
const int size = 1000;
Eigen::Array<std::pair<float, float>, Eigen::Dynamic, 1> direction(size);
for (int i = 0; i < direction.size(); i++)
{
    direction(i).first = (i + 10) % 360;  // some value for this example (denoting the azimuth angle)
    direction(i).second = (i + 20) % 360; // some value for this example (denoting the elevation angle)
}
SSPL::MatrixX<T1> transformedMatrix(3, direction.size());
for (int i = 0; i < transformedMatrix.cols(); i++)
{
    const T1 azimuthAngle = direction(i).first * M_PI / 180;    // converting to radians
    const T1 elevationAngle = direction(i).second * M_PI / 180; // converting to radians
    transformedMatrix(0, i) = std::cos(azimuthAngle) * std::cos(elevationAngle);
    transformedMatrix(1, i) = std::sin(azimuthAngle) * std::cos(elevationAngle);
    transformedMatrix(2, i) = std::sin(elevationAngle);
}
I would like to know whether a better implementation is possible to improve the speed.
I know that Eigen has supporting functions for geometrical transformations, but I have yet to see a clear example of how to apply them here.
Is it also possible to vectorize the code to improve the performance?

You could at least use the vectorized versions of sine/cosine:
void dir2vector2(Eigen::Matrix3Xf& out, const Eigen::Array2Xf& in){
    Eigen::Array2Xf sine = sin(in * (M_PI/180));
    Eigen::Array2Xf cosi = cos(in * (M_PI/180));
    out.resize(3, in.cols());
    out << cosi.row(0) * cosi.row(1),
           sine.row(0) * cosi.row(1),
           sine.row(1);
}
There would still be a lot of optimization potential, e.g., calculating both the sine and cosine of the same angle could share a lot of computation. It is also technically not necessary to store sine and cosi explicitly in temporaries (but Eigen is currently not able to automatically reuse common sub-expressions).
Also, the multiplication at the end could be vectorized better if you store your input and output in row-major format (though the Eigen comma-initializer currently does not seem to play well with vectorization).
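One way to sidestep the comma-initializer is to assign each output row directly; here is a minimal sketch of that idea (the name dir2vector3 is just for illustration, and the input is kept in the same 2xN layout as above, with angles in degrees):
#include <Eigen/Core>
#include <cmath>

// Sketch: write each row of the result directly instead of using the
// comma-initializer, so each assignment is a plain vectorizable expression.
void dir2vector3(Eigen::Matrix3Xf& out, const Eigen::Array2Xf& in)
{
    const float deg2rad = float(M_PI / 180.0);
    Eigen::Array2Xf sine = sin(in * deg2rad);
    Eigen::Array2Xf cosi = cos(in * deg2rad);
    out.resize(3, in.cols());
    out.row(0) = (cosi.row(0) * cosi.row(1)).matrix();
    out.row(1) = (sine.row(0) * cosi.row(1)).matrix();
    out.row(2) = sine.row(1).matrix();
}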

Related

How to improve FPS and overcome the memory bandwidth limit caused by random access on textures?

In my virtual reality program I am heavily bound by memory bandwidth:
#version 320 es
precision lowp float;
const int n_pool = 30;
layout(local_size_x = 8, local_size_y = 16, local_size_z = 1) in;
layout(rgba8, binding = 0) writeonly uniform lowp image2D image;
layout(rgba8, binding = 1) readonly uniform lowp image2DArray pool;
uniform mat3 RT[n_pool]; // <- this is a rotation-translation matrix
void main() {
    uint u = gl_GlobalInvocationID.y;
    uint v = gl_GlobalInvocationID.x;
    vec4 Ir = imageLoad(pool, ivec3(u,v,29));
    float cost = 1.0/0.0;
    for (int j = 0; j < 16; j++) {
        float C = 0.0;
        for (int i = 0; i < n_pool; i++) {
            vec3 w = RT[i]*vec3(u,v,j);
            C += length(imageLoad(pool, ivec3(w[0],w[1],i)) - Ir);
        }
        cost = C < cost ? C : cost;
    }
    imageStore(image, ivec2(u,v), vec4(cost, cost, cost, 1.0));
}
You can see that I have a lot of random accesses on a TEXTURE_2D_ARRAY (width = 320, height = 240, layers = 30). However, the access is not so random, because it will be in the proximity of u,v.
Here are my thoughts:
another texture format instead of RGBA floats (RGBA unsigned byte, maybe?).
the shared memory is too small to even store one grayscale image.
changing the loop order. Strangely, this ordering is faster although the other should have better caching behaviour.
resizing the work groups to fit the textures better.
using compressed images (unlikely to give a performance boost; in theory, however, it should help with the bandwidth).
What are your thoughts?
Do you have any actual data which shows that the issue you have is texture bandwidth, or is that just an assumption?
I can see a number of issues which suggest that this may well not actually be your problem. For example:
vec3 w = RT[i]*vec3(u,v,j);
... you have a mat3 array load inside your inner loop, so on most architectures I know of you are probably uniform-fetch bound, not texture bound. This should cache well in the GPU data cache, but it is probably still being refetched per loop iteration, which smells a lot more expensive than a single imageLoad() unless your texture format is exceptionally wide ...
If you are using fp16 or fp32 RGBA texture inputs, then narrower 8-bit unorm formats are always going to be faster (fp32 is particularly expensive).
For the following:
cost = C < cost ? C : cost;
... it's probably more reliable in terms of code generation to use the min() built-in function.
UPDATE 1
Moving from the conventional pixel pipeline to a compute shader brought a 3x speedup.
UPDATE 2
Using compressed formats increased FPS by 5%.
UPDATE 3
In this simplified version I didn't show that I did indeed create many temporary vectors on the fly (in both the outer and the inner loop). Removing the mat3/vec4/vec3 creation within the loops brought a 2x speedup. Very surprising to me that creating vectors in loops is that expensive.
Now I am comfortably within real-time and have fulfilled my goal...

C++ Eigen AlignedBox Transformations

I am trying to take my first steps with the C++ Eigen library. The matrix functionality was very intuitive, but I have some problems using the AlignedBox type from the Geometry module.
For an exercise I have to rotate an AlignedBox around a specific point and be able to translate it within a 2D plane using Eigen::Transform.
I have been trying things for quite a while:
#include <iostream>
#include <eigen3/Eigen/Dense>

int main()
{
    // create 1D AlignedBox
    Eigen::MatrixXf sd1(1,1);
    Eigen::MatrixXf sd2(1,1);
    sd1 << 0;
    sd2 << 3;
    Eigen::AlignedBox1f box1(sd1, sd2);

    // rotation of 45 deg
    typedef Eigen::Rotation2D<float> R2D;
    R2D r(M_PI/4.0);

    // create transformation matrix with rotation of 45 deg
    typedef Eigen::Transform<float, 2, Eigen::AffineCompact> SE2;
    SE2 t;
    t = r;

    // how to apply transformation t to box1???
    return 0;
}
I thought I had to multiply the AlignedBox by t.matrix(), but since the box is not a matrix type and I did not find any useful built-in function, I have no idea how to apply the transformation. Any help would be appreciated.
Note that the result will be a 2D box. You can compute it by applying the affine transformation to the two 2D extremities and updating the 2D box with the extend method, e.g.:
AlignedBox2f box2;
box2.extend(t * Vector2f(box1.min()(0), 0));
box2.extend(t * Vector2f(box1.max()(0), 0));
To apply another transformation to box2, you can use the same principle on the 4 corners of the box that you can get using the AlignedBox::corner method.
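A minimal sketch of that idea for the 2D case (the helper name transformBox2 and the use of Affine2f here are just for illustration):
#include <Eigen/Dense>

// Sketch: bounding box of a transformed 2D box, obtained by transforming
// its four corners and extending an empty box with each result.
Eigen::AlignedBox2f transformBox2(const Eigen::Affine2f& t,
                                  const Eigen::AlignedBox2f& box)
{
    Eigen::AlignedBox2f out; // starts empty
    out.extend(t * box.corner(Eigen::AlignedBox2f::BottomLeft));
    out.extend(t * box.corner(Eigen::AlignedBox2f::BottomRight));
    out.extend(t * box.corner(Eigen::AlignedBox2f::TopLeft));
    out.extend(t * box.corner(Eigen::AlignedBox2f::TopRight));
    return out;
}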

Clustering objects in GPU

My clustering algorithm is simple, and it goes like this.
The first object is grouped with all other objects whose distance to it is lower than X.
Then we go to the second object; if it is not included in the first group, we run the same algorithm on the remaining objects that are not in the first group,
and so on (a minimal CPU sketch of this greedy grouping is shown below).
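For reference, here is a minimal CPU sketch of this greedy grouping in plain C++ (the Point struct and function name are just for illustration; X is the grouping distance):
#include <cmath>
#include <vector>

struct Point { float x, y; };

// Sketch: greedy grouping as described above. Each ungrouped point starts a
// new group and claims every remaining point within distance X of it.
std::vector<int> greedyCluster(const std::vector<Point>& pts, float X)
{
    std::vector<int> cluster(pts.size(), -1);
    int nextId = 0;
    for (size_t i = 0; i < pts.size(); ++i) {
        if (cluster[i] != -1) continue;       // already grouped earlier
        cluster[i] = nextId;                  // point i starts a new group
        for (size_t j = i + 1; j < pts.size(); ++j) {
            if (cluster[j] != -1) continue;   // already taken by an earlier group
            float dx = pts[i].x - pts[j].x;
            float dy = pts[i].y - pts[j].y;
            if (std::sqrt(dx * dx + dy * dy) <= X)
                cluster[j] = nextId;          // grouped with reference point i
        }
        ++nextId;
    }
    return cluster;
}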
I'm trying to implement this algorithm on the GPU using a fragment shader.
First I write all the locations into an RGBA float texture, storing the location (x, y) in each pixel; z and w are free for now. Then I render my calculations to a result texture using the shader. In the end I read back the pixels of the result texture and continue in my own code.
I have tried many variations of the code, and multi-pass draws, but I'm not happy with the performance.
The question is,
Is there a way to do this in a single pass over the texture (a single draw phase)?
My latest attempt is this fragment shader:
precision highp float;
uniform sampler2D locs;
varying vec2 coord;
uniform float clusterDistance;
const float textureSize = 64.;

void main()
{
    // Getting my location
    vec4 currData = texture2D(locs, coord);
    float offsetPix = 1./textureSize/2.;
    vec2 coordIdx = (coord - offsetPix) * textureSize;
    // Getting the index of my location
    float myIdx = coordIdx.y * textureSize + coordIdx.x;
    int clusterIdx = 0;
    float clusterNum = 0.;
    // Running over all the other locations until me and finding the first close object to me
    for (float i = 0.; i < textureSize*textureSize; ++i)
    {
        clusterNum = i + 1.;
        // Which means that we didn't find any close object to me, so we stop
        if (i == myIdx)
        {
            break;
        }
        else
        {
            vec2 pntLoc = vec2(mod(i, textureSize), floor(i/textureSize)) / textureSize + offsetPix;
            vec4 pnt = texture2D(locs, pntLoc);
            if (distance(currData.xy, pnt.xy) <= clusterDistance)
            {
                break;
            }
        }
    }
    // Output the result
    gl_FragColor = vec4(currData.x, currData.y, clusterNum, 1.);
}
But the problem here is that the result can cause chain clustering. For example:
if our data is {0,0}, {4,0}, {8,0}, and the max distance to group is 4, then the first point is close to the second, and the third is close to the second but not to the first. According to my algorithm, the third returns the index of the second, although the second is out of the picture because it is already grouped with the first object, and the first is the reference object for distances.
Is it possible to read from the result texture while writing to it?
It would solve my problem, because then I could check the z value of the result when comparing distances.
No, you cannot read and write to a texture in the same pass (with standard WebGL and I think not at all in the way you intend).
Your algorithm seems rather serial in nature, not well suited for GPU/SIMD execution, but I may be misinterpreting your intent. Remember that the GPU may run a shader program for multiple data points (fragments/pixels in this case) at once, having no clue about the results of the others.
You also can't break out of a for loop on a SIMD architecture; the loop will just keep iterating, although the changes will not be written for fragments that broke out of it. In other words, there is no speed benefit. It's a different story if the break condition evaluates to the same value for all fragments.
You might want to look at other ways of clustering, like k-means.
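For comparison, here is a minimal CPU sketch of Lloyd's k-means on 2D points (fixed iteration count, centroids seeded from the first k points; the Pt struct and function name are just for illustration):
#include <cmath>
#include <vector>

struct Pt { float x, y; };

// Sketch: Lloyd's k-means on 2D points. Returns the cluster index of each
// point. Assumes pts.size() >= k; centroids are seeded from the first k points.
std::vector<int> kmeans(const std::vector<Pt>& pts, int k, int iters = 20)
{
    std::vector<Pt> centers(pts.begin(), pts.begin() + k);
    std::vector<int> label(pts.size(), 0);
    for (int it = 0; it < iters; ++it) {
        // assignment step: label each point with its nearest centroid
        for (size_t i = 0; i < pts.size(); ++i) {
            float best = INFINITY;
            for (int c = 0; c < k; ++c) {
                float dx = pts[i].x - centers[c].x;
                float dy = pts[i].y - centers[c].y;
                float d = dx * dx + dy * dy;
                if (d < best) { best = d; label[i] = c; }
            }
        }
        // update step: move each centroid to the mean of its assigned points
        std::vector<Pt> sum(k, Pt{0.f, 0.f});
        std::vector<int> cnt(k, 0);
        for (size_t i = 0; i < pts.size(); ++i) {
            sum[label[i]].x += pts[i].x;
            sum[label[i]].y += pts[i].y;
            ++cnt[label[i]];
        }
        for (int c = 0; c < k; ++c)
            if (cnt[c] > 0) centers[c] = Pt{sum[c].x / cnt[c], sum[c].y / cnt[c]};
    }
    return label;
}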

How to apply Gaussian filter to DFT output in OpenCV

I want to create a Gaussian high-pass filter after determining the correct padding size (e.g., if the image width and height are 10x10, the padded size should be 20x20).
I have Matlab code that I am trying to port to OpenCV, but I am having difficulty porting it properly. My Matlab code is shown below:
f1_seg = imread('thumb1-small-test.jpg');
Iori = f1_seg;
% Iori = imresize(Iori, 0.2);
%Convert to grayscale
I = Iori;
if length(size(I)) == 3
I = rgb2gray(Iori);
end
%
%Determine good padding for Fourier transform
PQ = paddedsize(size(I));
I = double(I);
%Create a Gaussian Highpass filter 5% the width of the Fourier transform
D0 = 0.05*PQ(1);
H = hpfilter('gaussian', PQ(1), PQ(2), D0);
% Calculate the discrete Fourier transform of the image.
F=fft2(double(I),size(H,1),size(H,2));
% Apply the highpass filter to the Fourier spectrum of the image
HPFS_I = H.*F;
I know how to use the DFT in OpenCV, and I am able to generate its image, but I am not sure how to create the Gaussian filter. Can you please guide me on how to create a high-pass Gaussian filter like the one shown above?
I believe where you are stuck is that the Gaussian filter supplied by OpenCV is created in the spatial (time) domain, but you want the filter in the frequency domain. Here is a nice article on the difference between high and low-pass filtering in the frequency domain.
Once you have a good understanding of how frequency domain filtering works, then you are ready to try to create a Gaussian Filter in the frequency domain. Here is a good lecture on creating a few different (including Gaussian) filters in the frequency domain.
If you are still having difficulty, I will try to update my post with an example a bit later!
EDIT :
Here is a short example that I wrote on implementing a Gaussian high-pass filter (based on the lecture I pointed you to):
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cmath>
#include <iostream>

using namespace cv;
using namespace std;

double pixelDistance(double u, double v)
{
    return std::sqrt(u*u + v*v);
}

double gaussianCoeff(double u, double v, double d0)
{
    double d = pixelDistance(u, v);
    return 1.0 - std::exp((-d*d) / (2*d0*d0));
}

cv::Mat createGaussianHighPassFilter(cv::Size size, double cutoffInPixels)
{
    Mat ghpf(size, CV_64F);
    cv::Point center(size.width / 2, size.height / 2);
    for(int u = 0; u < ghpf.rows; u++)
    {
        for(int v = 0; v < ghpf.cols; v++)
        {
            ghpf.at<double>(u, v) = gaussianCoeff(u - center.y, v - center.x, cutoffInPixels);
        }
    }
    return ghpf;
}

int main(int /*argc*/, char** /*argv*/)
{
    Mat ghpf = createGaussianHighPassFilter(Size(128, 128), 16.0);
    imshow("ghpf", ghpf);
    waitKey();
    return 0;
}
This is definitely not an optimized filter generator by any means, but I tried to keep it simple and straightforward to ease understanding :) Anyway, this code displays the resulting filter in a window.
NOTE : This filter is not FFT shifted (i.e., this filter works when the DC is placed in the center instead of the upper-left corner). See the OpenCV dft.cpp sample (lines 62 - 74 in particular) on how to perform FFT shifting in OpenCV.
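To finish the port of the Matlab line HPFS_I = H.*F, here is a sketch of one way to apply such a real-valued filter to OpenCV's complex DFT output (the function name applyFrequencyFilter is just for illustration; it assumes the input image is already padded to the filter size and that the filter has been rearranged so that DC sits at the top-left corner, i.e., the inverse of the FFT shift mentioned above):
#include <opencv2/core/core.hpp>

// Sketch: apply a real-valued frequency-domain filter H to an image,
// mirroring the Matlab line HPFS_I = H.*F, then transform back.
cv::Mat applyFrequencyFilter(const cv::Mat& padded8u, const cv::Mat& filter64f)
{
    CV_Assert(padded8u.size() == filter64f.size());

    cv::Mat img;
    padded8u.convertTo(img, CV_32F);

    // Forward DFT with an explicit complex (2-channel) output.
    cv::Mat spectrum;
    cv::dft(img, spectrum, cv::DFT_COMPLEX_OUTPUT);

    // Multiply both the real and imaginary planes by the real filter.
    cv::Mat filter32f;
    filter64f.convertTo(filter32f, CV_32F);
    cv::Mat planes[2];
    cv::split(spectrum, planes);
    planes[0] = planes[0].mul(filter32f);
    planes[1] = planes[1].mul(filter32f);
    cv::merge(planes, 2, spectrum);

    // Inverse DFT back to the spatial domain.
    cv::Mat filtered;
    cv::dft(spectrum, filtered, cv::DFT_INVERSE | cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);
    return filtered;
}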
Enjoy!

3D Rotation Matrix deforms over time in Processing/Java

I'm working on a project where I want to generate a 3D mesh to represent a certain amount of data.
To create this mesh I want to use transformation matrices, so I created a class based on the mathematical algorithms found on a couple of websites.
Everything seems to work (scale, translation), but as soon as I rotate a mesh on its x-axis it starts to deform after 2 to 3 complete rotations. It feels as if my scale values are increasing, which deforms my mesh. I have been struggling with this problem for a couple of days, but I can't figure out what's going wrong.
To make things more clear, you can download my complete setup here.
I defined the coordinates of a box and put them through the transformation matrix before writing them to the screen.
This is the function that rotates my object:
void appendRotation(float inXAngle, float inYAngle, float inZAngle, PVector inPivot) {
    boolean setPivot = false;
    if (inPivot.x != 0 || inPivot.y != 0 || inPivot.z != 0) {
        setPivot = true;
    }
    // If setPivot = true, translate the position
    if (setPivot) {
        // Translations for the different axes need to be set separately
        if (inPivot.x != 0) { this.appendTranslation(inPivot.x, 0, 0); }
        if (inPivot.y != 0) { this.appendTranslation(0, inPivot.y, 0); }
        if (inPivot.z != 0) { this.appendTranslation(0, 0, inPivot.z); }
    }
    // Create a rotation matrix
    Matrix3D rotationMatrix = new Matrix3D();
    // x sine and x cosine
    float xSinCal = sin(radians(inXAngle));
    float xCosCal = cos(radians(inXAngle));
    // y sine and y cosine
    float ySinCal = sin(radians(inYAngle));
    float yCosCal = cos(radians(inYAngle));
    // z sine and z cosine
    float zSinCal = sin(radians(inZAngle));
    float zCosCal = cos(radians(inZAngle));
    // Rotate around x
    rotationMatrix.setIdentity();
    // --
    rotationMatrix.matrix[1][1] = xCosCal;
    rotationMatrix.matrix[1][2] = xSinCal;
    rotationMatrix.matrix[2][1] = -xSinCal;
    rotationMatrix.matrix[2][2] = xCosCal;
    // Add rotation to the basis matrix
    this.multiplyWith(rotationMatrix);
    // Rotate around y
    rotationMatrix.setIdentity();
    // --
    rotationMatrix.matrix[0][0] = yCosCal;
    rotationMatrix.matrix[0][2] = -ySinCal;
    rotationMatrix.matrix[2][0] = ySinCal;
    rotationMatrix.matrix[2][2] = yCosCal;
    // Add rotation to the basis matrix
    this.multiplyWith(rotationMatrix);
    // Rotate around z
    rotationMatrix.setIdentity();
    // --
    rotationMatrix.matrix[0][0] = zCosCal;
    rotationMatrix.matrix[0][1] = zSinCal;
    rotationMatrix.matrix[1][0] = -zSinCal;
    rotationMatrix.matrix[1][1] = zCosCal;
    // Add rotation to the basis matrix
    this.multiplyWith(rotationMatrix);
    // Untranslate the position
    if (setPivot) {
        // Translations for the different axes need to be set separately
        if (inPivot.x != 0) { this.appendTranslation(-inPivot.x, 0, 0); }
        if (inPivot.y != 0) { this.appendTranslation(0, -inPivot.y, 0); }
        if (inPivot.z != 0) { this.appendTranslation(0, 0, -inPivot.z); }
    }
}
Does anyone have a clue?
You never want to cumulatively transform matrices. This will introduce error into your matrices and cause problems such as scaling or skewing the orthographic components.
The correct method would be to keep track of the cumulative pitch, yaw, roll angles. Then reconstruct the transformation matrix from those angles every update.
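As a minimal illustration of that idea (a C++/Eigen sketch rather than Processing code, with hypothetical member names; the point is only that the matrix is rebuilt from the stored angles on every update instead of being accumulated):
#include <Eigen/Geometry>

// Sketch: keep only the cumulative angles as state and rebuild the rotation
// matrix from scratch each update, so numerical error cannot accumulate in it.
struct Orientation {
    float pitch = 0.f, yaw = 0.f, roll = 0.f; // cumulative angles in radians

    Eigen::Matrix3f toMatrix() const {
        Eigen::Matrix3f m;
        m = Eigen::AngleAxisf(yaw,   Eigen::Vector3f::UnitY())
          * Eigen::AngleAxisf(pitch, Eigen::Vector3f::UnitX())
          * Eigen::AngleAxisf(roll,  Eigen::Vector3f::UnitZ());
        return m;
    }
};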
If there is any chance: avoid multiplying rotation matrices. Keep track of the cumulative rotation and compute a new rotation matrix at each step.
If it is impossible to avoid multiplying the rotation matrices, then renormalize them (starts on page 16). It works just fine for me, even for more than ten thousand multiplications.
However, I suspect that it will not help you; numerical errors usually require more than 2 steps to manifest themselves. It seems to me the reason for your problem is somewhere else.
Yaw, pitch and roll are not good for arbitrary rotations. Euler angles suffer from singularities and instability. Look at 38:25 (presentation by David Sachs):
http://www.youtube.com/watch?v=C7JQ7Rpwn2k
Good luck!
As #don mentions, try to avoid cumulative transformations, as you can run into all sorts of problems. Rotating one axis at a time might lead you into gimbal lock issues. Try to do all rotations in one go.
Also, bear in mind that Processing comes with its own 3D matrix class called PMatrix3D, which has a rotate() method that takes an angle (in radians) and x, y, z values for the rotation axis.
Here is an example function that would rotate a bunch of PVectors:
PVector[] rotateVerts(PVector[] verts, float angle, PVector axis) {
    int vl = verts.length;
    PVector[] clone = new PVector[vl];
    for (int i = 0; i < vl; i++) clone[i] = verts[i].get();
    // rotate using a matrix
    PMatrix3D rMat = new PMatrix3D();
    rMat.rotate(angle, axis.x, axis.y, axis.z);
    PVector[] dst = new PVector[vl];
    for (int i = 0; i < vl; i++) {
        dst[i] = new PVector();
        rMat.mult(clone[i], dst[i]);
    }
    return dst;
}
and here is an example using it.
HTH
A shot in the dark: I don't know the rules or the name of the programming language you are using, but this procedure looks suspicious:
void setIdentity() {
    this.matrix = identityMatrix;
}
Are you sure you are taking a copy of identityMatrix? If it is just a reference you are copying, then identityMatrix will be modified by later operations, and soon nothing will make sense.
Though the matrix renormalization suggested probably works fine in practice, it is a bit ad hoc from a mathematical point of view. A better way of doing it is to represent the cumulative rotations using quaternions, which are only converted to a rotation matrix upon application. The quaternions will also drift slowly away from unit norm (though more slowly), but the important thing is that they have a well-defined renormalization.
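A minimal sketch of that approach (again C++ with Eigen rather than Processing, purely to illustrate the idea):
#include <Eigen/Geometry>

// Sketch: accumulate rotation as a quaternion, renormalize it, and convert
// to a matrix only when the transform is actually applied.
struct QuatOrientation {
    Eigen::Quaternionf q = Eigen::Quaternionf::Identity();

    void rotate(float angleRad, const Eigen::Vector3f& axis) {
        q = q * Eigen::Quaternionf(Eigen::AngleAxisf(angleRad, axis.normalized()));
        q.normalize(); // well-defined renormalization keeps q a valid rotation
    }

    Eigen::Matrix3f toMatrix() const { return q.toRotationMatrix(); }
};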
Good starting information for implementing this can be:
http://www.cprogramming.com/tutorial/3d/quaternions.html
http://www.scheib.net/school/library/quaternions.pdf
A useful academic reference can be:
K. Shoemake, “Animating rotation with quaternion curves,” ACM SIGGRAPH Comput. Graph., vol. 19, no. 3, pp. 245–254, 1985. DOI: 10.1145/325165.325242
