C++ Eigen AlignedBox Transformations

I am taking my first steps with the C++ Eigen library. The Matrix functionality was very intuitive, but I am having some problems using the AlignedBox type from the Geometry module.
For an exercise I have to rotate an AlignedBox around a specific point and translate it within a 2D plane using Eigen::Transform.
I have been trying for quite a while:
#include <iostream>
#include <cmath>
#include <eigen3/Eigen/Dense>

int main()
{
    // create 1D AlignedBox
    Eigen::MatrixXf sd1(1,1);
    Eigen::MatrixXf sd2(1,1);
    sd1 << 0;
    sd2 << 3;
    Eigen::AlignedBox1f box1(sd1, sd2);

    // rotation of 45 deg
    typedef Eigen::Rotation2D<float> R2D;
    R2D r(M_PI/4.0);

    // create transformation matrix with rotation of 45 deg
    typedef Eigen::Transform< float, 2, Eigen::AffineCompact > SE2;
    SE2 t;
    t = r;

    // how to apply transformation t to box1???
    return 0;
}
I thought I had to multiply the AlignedBox by t.matrix(), but since the box is not a matrix type and I did not find any useful built-in function, I have no idea how to apply the transformation. Any help would be appreciated.

Note that the result will be a 2D box. You can compute it by applying the affine transformation to the two 2D extremities and updating the 2D box with the extend method, e.g.:
AlignedBox2f box2;
box2.extend(t * Vector2f(box1.min()(0), 0));
box2.extend(t * Vector2f(box1.max()(0), 0));
To apply another transformation to box2, you can use the same principle on the 4 corners of the box, which you can get using the AlignedBox::corner method.
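For instance, a minimal sketch of that corner-based update, assuming t2 is a second SE2 transform like t above (the corner names are Eigen's AlignedBox2f CornerType values):
AlignedBox2f box3;
box3.extend(t2 * box2.corner(AlignedBox2f::BottomLeft));
box3.extend(t2 * box2.corner(AlignedBox2f::BottomRight));
box3.extend(t2 * box2.corner(AlignedBox2f::TopLeft));
box3.extend(t2 * box2.corner(AlignedBox2f::TopRight));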

Related

Efficiently Transforming from Spherical Coordinates to Cartesian Coordinates using Eigen

I need to transform coordinates from spherical to Cartesian space using the Eigen C++ library. The following code serves the purpose:
const int size = 1000;
Eigen::Array<std::pair<float, float>, Eigen::Dynamic, 1> direction(size);
for(int i = 0; i < direction.size(); i++)
{
    direction(i).first = (i+10)%360;   // some value for this example (denoting the azimuth angle)
    direction(i).second = (i+20)%360;  // some value for this example (denoting the elevation angle)
}

SSPL::MatrixX<T1> transformedMatrix(3, direction.size());
for(int i = 0; i < transformedMatrix.cols(); i++)
{
    const T1 azimuthAngle = direction(i).first*M_PI/180;     // converting to radians
    const T1 elevationAngle = direction(i).second*M_PI/180;  // converting to radians
    transformedMatrix(0,i) = std::cos(azimuthAngle)*std::cos(elevationAngle);
    transformedMatrix(1,i) = std::sin(azimuthAngle)*std::cos(elevationAngle);
    transformedMatrix(2,i) = std::sin(elevationAngle);
}
I would like to know whether a better implementation is possible to improve the speed.
I know that Eigen has supporting functions for geometric transformations, but I have yet to see a clear example of using them for this.
Is it also possible to vectorize the code to improve the performance?
You could at least use the vectorized versions of sine/cosine:
void dir2vector2(Eigen::Matrix3Xf& out, const Eigen::Array2Xf& in){
    Eigen::Array2Xf sine = sin(in * (M_PI/180));
    Eigen::Array2Xf cosi = cos(in * (M_PI/180));
    out.resize(3, in.cols());
    out << cosi.row(0) * cosi.row(1),
           sine.row(0) * cosi.row(1),
           sine.row(1);
}
There would still be a lot of optimization potential, e.g., calculating both sine and cosine of the same angle could share a lot of computation. And it is technically not necessary to store sine and cosi explicitly in temporaries (but Eigen is currently not able to automatically re-use common sub-expressions).
Also, the multiplication at the end could be vectorized better if you store your input and output in row-major format (though the Eigen comma-initializer currently does not play well with vectorization, it seems).
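A minimal usage sketch for dir2vector2 (the angle values mirror the question's example and are otherwise arbitrary):
Eigen::Array2Xf angles(2, 1000);
for (int i = 0; i < angles.cols(); ++i)
{
    angles(0, i) = (i + 10) % 360; // azimuth in degrees
    angles(1, i) = (i + 20) % 360; // elevation in degrees
}
Eigen::Matrix3Xf directions;
dir2vector2(directions, angles);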

Boost Geometry Matrix Transformations on Polygons

Are there any examples of matrix transformations on polygons (Cartesian) using Boost Geometry? I am defining the matrix with simple std::vectors.
Also, I could only find one example of matrix_transformer using ublas, but it's way too convoluted for a simple matrix transformation. If this is the only way, though, I'll stick with it, but it would be great to have other options and do this with std::vector instead of ublas::matrix.
Here's my solution for anyone who might be interested. Boost Geometry actually added a strategy called matrix_transformer that relies on Boost's qvm::mat for matrix transformations. There are not that many examples out there, so here's my code:
#include <fstream>
#include <vector>
#include <cmath>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

using namespace boost::geometry::strategy::transform;

typedef boost::geometry::model::d2::point_xy<double> point_2f;
typedef boost::geometry::model::polygon<point_2f> polygon_2f;

int main() {
    polygon_2f pol;
    boost::geometry::read_wkt("POLYGON((10 10,10 27,24 22,22 10,10 10))", pol);
    polygon_2f polTrans;

    // Set the rotation angle (degrees converted to radians)
    double angleDeg = 45;
    double angleRad = angleDeg * 3.14159 / 180.0;
    std::vector<std::vector<double> > mat = {{std::cos(angleRad), std::sin(angleRad), 0},
                                             {-std::sin(angleRad), std::cos(angleRad), 0},
                                             {0, 0, 1}};

    // Create the matrix_transformer for a simple rotation matrix
    matrix_transformer<double, 2, 2> rotation(mat[0][0], mat[0][1], mat[0][2],
                                              mat[1][0], mat[1][1], mat[1][2],
                                              mat[2][0], mat[2][1], mat[2][2]);

    // Apply the matrix_transformer
    boost::geometry::transform(pol, polTrans, rotation);

    // Create an SVG file to show the results
    std::ofstream svg("transformationExample.svg");
    boost::geometry::svg_mapper<point_2f> mapper(svg, 400, 400);
    mapper.add(pol);
    mapper.map(pol, "fill-opacity:0.5;fill:rgb(153,204,0);stroke:rgb(153,204,0);stroke-width:2");
    mapper.add(polTrans);
    mapper.map(polTrans, "fill-opacity:0.5;fill:rgb(153,204,255);stroke:rgb(153,204,255);stroke-width:2");

    return 0;
}
And here's my result: the green polygon is the original and the blue polygon is the transformed one (remember that the rotation was about the origin).

Rotation matrix in Eigen

Can I use the Eigen library to get the rotation matrix which rotates vector A to vector B?
I have been searching for a while, but cannot find a related API.
You first have to construct a quaternion and then convert it to a matrix, for instance:
#include <Eigen/Geometry>
using namespace Eigen;

int main() {
    Vector3f A, B;   // fill A and B with the two input vectors
    Matrix3f R;
    R = Quaternionf().setFromTwoVectors(A, B);
}
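In the snippet above, A and B are left uninitialized; a minimal sanity check with arbitrary example vectors could look like this (setFromTwoVectors accepts non-normalized inputs, so R * A.normalized() should coincide with B.normalized()):
#include <iostream>
#include <Eigen/Geometry>
using namespace Eigen;

int main() {
    Vector3f A(1.0f, 0.0f, 0.0f);
    Vector3f B(0.0f, 1.0f, 0.0f);
    Matrix3f R;
    R = Quaternionf().setFromTwoVectors(A, B);
    std::cout << (R * A.normalized() - B.normalized()).norm() << std::endl; // prints a value close to zero
    return 0;
}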

How to apply Gaussian filter to DFT output in OpenCV

I want to create a Gaussian high-pass filter after determining the correct padding size (e.g., if the image is 10x10, the padded size should be 20x20).
I have Matlab code that I am trying to port to OpenCV, but I am having difficulty properly porting it. My Matlab code is shown below:
f1_seg = imread('thumb1-small-test.jpg');
Iori = f1_seg;
% Iori = imresize(Iori, 0.2);
%Convert to grayscale
I = Iori;
if length(size(I)) == 3
I = rgb2gray(Iori);
end
%
%Determine good padding for Fourier transform
PQ = paddedsize(size(I));
I = double(I);
%Create a Gaussian Highpass filter 5% the width of the Fourier transform
D0 = 0.05*PQ(1);
H = hpfilter('gaussian', PQ(1), PQ(2), D0);
% Calculate the discrete Fourier transform of the image.
F=fft2(double(I),size(H,1),size(H,2));
% Apply the highpass filter to the Fourier spectrum of the image
HPFS_I = H.*F;
I know how to use the DFT in OpenCV, and I am able to generate its image, but I am not sure how to create the Gaussian filter. Could you guide me on how to create a high-pass Gaussian filter like the one shown above?
I believe where you are stuck is that the Gaussian filter supplied by OpenCV is created in the spatial (time) domain, but you want the filter in the frequency domain. Here is a nice article on the difference between high and low-pass filtering in the frequency domain.
Once you have a good understanding of how frequency domain filtering works, then you are ready to try to create a Gaussian Filter in the frequency domain. Here is a good lecture on creating a few different (including Gaussian) filters in the frequency domain.
If you are still having difficulty, I will try to update my post with an example a bit later!
EDIT :
Here is a short example that I wrote on implementing a Gaussian high-pass filter (based on the lecture I pointed you to):
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <cmath>
#include <iostream>

using namespace cv;
using namespace std;

double pixelDistance(double u, double v)
{
    return std::sqrt(u*u + v*v);
}

double gaussianCoeff(double u, double v, double d0)
{
    double d = pixelDistance(u, v);
    return 1.0 - std::exp((-d*d) / (2*d0*d0));
}

cv::Mat createGaussianHighPassFilter(cv::Size size, double cutoffInPixels)
{
    Mat ghpf(size, CV_64F);
    cv::Point center(size.width / 2, size.height / 2);

    for(int u = 0; u < ghpf.rows; u++)
    {
        for(int v = 0; v < ghpf.cols; v++)
        {
            ghpf.at<double>(u, v) = gaussianCoeff(u - center.y, v - center.x, cutoffInPixels);
        }
    }
    return ghpf;
}

int main(int /*argc*/, char** /*argv*/)
{
    Mat ghpf = createGaussianHighPassFilter(Size(128, 128), 16.0);
    imshow("ghpf", ghpf);
    waitKey();
    return 0;
}
This is definitely not an optimized filter generator by any means, but I tried to keep it simple and straightforward to ease understanding :) Anyway, this code displays the resulting high-pass filter.
NOTE: This filter is not FFT shifted (i.e., it works when the DC is placed in the center instead of the upper-left corner). See the OpenCV dft.cpp sample (lines 62 - 74 in particular) on how to perform FFT shifting in OpenCV; a sketch of that quadrant swap follows.
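A minimal sketch of the quadrant swap (the function name fftShift is illustrative, not an OpenCV API; it assumes even-sized input like the 128x128 filter above):
void fftShift(cv::Mat& img)
{
    int cx = img.cols / 2;
    int cy = img.rows / 2;
    cv::Mat q0(img, cv::Rect(0, 0, cx, cy));   // top-left quadrant
    cv::Mat q1(img, cv::Rect(cx, 0, cx, cy));  // top-right quadrant
    cv::Mat q2(img, cv::Rect(0, cy, cx, cy));  // bottom-left quadrant
    cv::Mat q3(img, cv::Rect(cx, cy, cx, cy)); // bottom-right quadrant
    cv::Mat tmp;
    q0.copyTo(tmp); q3.copyTo(q0); tmp.copyTo(q3); // swap top-left with bottom-right
    q1.copyTo(tmp); q2.copyTo(q1); tmp.copyTo(q2); // swap top-right with bottom-left
}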
Enjoy!

What do 2 & 3 mean in this, and how can I change them? CvMat* rot = cvCreateMat(2,3,CV_32FC1)

What do 2 & 3 mean in this and how can I change them?
CvMat* rot = cvCreateMat(2,3,CV_32FC1)
When I change these two values I get an OpenCV error:
size of input arguments do not match()
in function cvConvertScale, .\cxconvert.cpp(1601)
I want to understand what that means.
Update:
The code is:
#include <cv.h>
#include <highgui.h>

int main()
{
    CvMat* rot = cvCreateMat(2,3,CV_32FC1);
    IplImage *src, *dst;
    src = cvLoadImage("doda.jpg");

    // make a copy of the source image
    dst = cvCloneImage( src );
    dst->origin = src->origin;

    // fill dst with zeros
    cvZero( dst );

    // Compute rotation matrix
    double x = 0.0;

    // loop to rotate from 0 to 360 degrees in steps of 90, advancing on any key press
    for(int i = 1; i <= 5; i++)
    {
        CvPoint2D32f center = cvPoint2D32f(src->width/2, src->height/2);
        double angle = 0 + x;
        double scale = 0.6;
        cv2DRotationMatrix( center, angle, scale, rot );

        // Do the transformation
        cvWarpAffine( src, dst, rot );

        cvNamedWindow( "Affine_Transform", 1 );
        cvShowImage( "Affine_Transform", dst );

        if (i <= 4)
            x = x + 90.0;
        else
            x = 0.0;

        cvWaitKey();
    }

    cvReleaseImage( &dst );
    cvReleaseMat( &rot );
    return 0;
}
2 and 3 are the row and column counts of the matrix you're creating.
From Introduction to programming with OpenCV:
Allocate a matrix:
CvMat* cvCreateMat(int rows, int cols, int type);
type: Type of the matrix elements. Specified in form
CV_<bit_depth>(S|U|F)C<number_of_channels>. E.g.: CV_8UC1 means an
8-bit unsigned single-channel matrix, CV_32SC2 means a 32-bit signed
matrix with two channels.
Example:
CvMat* M = cvCreateMat(4,4,CV_32FC1);
Changing them is as simple as substituting different values. But I guess you should already know that.
2 = number of rows and 3 = number of columns in your matrix, rot.
Can you post the entire code? Or maybe tell us what you want to achieve? Are you trying to rotate an image?
Also, I'd recommend upgrading to OpenCV 2.0, which has a C++ interface. With the new version, you can extensively use the Mat class, which handles everything (matrices, images, etc.) and makes things much simpler; a sketch of your rotation loop with that interface follows.
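For illustration, a minimal sketch of the question's rotation with the C++ interface (cv::getRotationMatrix2D builds the same 2x3 matrix; the file name is taken from the question and the single 45-degree step is arbitrary):
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>

int main()
{
    cv::Mat src = cv::imread("doda.jpg");
    cv::Point2f center(src.cols / 2.0f, src.rows / 2.0f);

    // 2x3 affine matrix (CV_64F): rotate by 45 degrees around the center, scale by 0.6
    cv::Mat rot = cv::getRotationMatrix2D(center, 45.0, 0.6);

    cv::Mat dst;
    cv::warpAffine(src, dst, rot, src.size());

    cv::imshow("Affine_Transform", dst);
    cv::waitKey();
    return 0;
}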
You get an error when you use any shape other than 2x3 because it is then meaningless to OpenCV when you use rot for rotation.
Take a look at Jacob's answer.
He describes the rotation matrix components in detail.
