Finding the distance between two photos of the ceiling using SIFT - location

I am currently working on a project that will allow a robot to find its location based on photos of the ceiling. The camera is mounted on the robot and faces the ceiling directly (so the center of each photo is always considered to be the robot's position). The idea is to establish the (0,0) position and the orientation of the x,y axes from the first photo, then find the distance and rotation between that photo and the next one (taken from a slightly different position), establish the new (0,0) position and axis orientation from it, and so on. I am finding the features in the photo using the following algorithm (so far on only one image):
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("ceiling.jpg");
    if (img.empty())
    {
        cout << "Cannot load an image!" << endl;
        getchar();
        return -1;
    }

    SIFT sift(10); // keep only the 10 strongest keypoints
    vector<KeyPoint> key_points;
    Mat descriptors, mascara; // mascara stays empty and is passed as the (no-op) mask
    Mat output_img;

    // detect keypoints and compute their descriptors
    sift(img, mascara, key_points, descriptors);

    // draw the keypoints on top of the original image
    drawKeypoints(img, key_points, output_img);

    namedWindow("Image");
    imshow("Image", output_img);
    imwrite("image.jpg", output_img);
    waitKey(0);
    return 0;
}
Is there any function that could help me do that?

If the images are not far apart, optical flow could be a better option. Look at the documentation for cv::calcOpticalFlowPyrLK for details on how to use it.
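For reference, here is a minimal sketch of that idea, assuming the OpenCV 2.x API used in the question and placeholder file names: features from the previous ceiling photo are tracked into the current one with cv::calcOpticalFlowPyrLK, and the rotation and translation between the two views are then estimated with cv::estimateRigidTransform:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/video/tracking.hpp>
#include <cmath>
#include <iostream>
#include <vector>
using namespace cv;
using namespace std;

int main()
{
    // placeholder file names for two consecutive ceiling photos
    Mat prev = imread("ceiling_prev.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    Mat curr = imread("ceiling_curr.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (prev.empty() || curr.empty())
        return -1;

    // detect corners in the previous frame
    vector<Point2f> prevPts, currPts;
    goodFeaturesToTrack(prev, prevPts, 200, 0.01, 10);

    // track them into the current frame with pyramidal Lucas-Kanade optical flow
    vector<uchar> status;
    vector<float> err;
    calcOpticalFlowPyrLK(prev, curr, prevPts, currPts, status, err);

    // keep only the point pairs that were tracked successfully
    vector<Point2f> p0, p1;
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i]) { p0.push_back(prevPts[i]); p1.push_back(currPts[i]); }

    // estimate a 2x3 rigid transform (rotation + translation) between the frames
    Mat T = estimateRigidTransform(p0, p1, false);
    if (T.empty())
    {
        cout << "Not enough matches" << endl;
        return -1;
    }

    double angle = atan2(T.at<double>(1, 0), T.at<double>(0, 0)); // rotation in radians
    cout << "rotation (deg): " << angle * 180.0 / CV_PI
         << ", dx: " << T.at<double>(0, 2)
         << ", dy: " << T.at<double>(1, 2) << endl;
    return 0;
}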

Related

How do I make a convex shape infinitely rotate in SFML

I am making a game where I need the player to be constantly rotating. I have tried using a vertex array instead of a convex shape so that I could use trigonometry to move the points around (which was shown to me in a tutorial), but it does not load my texture correctly. The player is a triangle, and when I try to load the texture onto the vertex array it is displayed as completely black, while I want it to actually show my texture. When I used a convex shape the texture loaded correctly, but I don't know how to make it rotate. I would prefer to use a convex shape, since it is easier for me to work with.
#include <SFML/Graphics.hpp>
#include <math.h>
#include <stdlib.h>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 800), "High noon showdown", sf::Style::Close | sf::Style::Titlebar);

    sf::Texture p1texture;
    p1texture.loadFromFile("2nd hns bandit.png");

    sf::ConvexShape p1;
    p1.setPointCount(3);
    p1.setTexture(&p1texture);

    // points for player 1
    p1.setPoint(0, sf::Vector2f(70.0f, 380.0f));
    p1.setPoint(1, sf::Vector2f(30.0f, 380.0f));
    p1.setPoint(2, sf::Vector2f(30.0f, 310.0f));

    // game loop
    while (window.isOpen()) {
        sf::Event evnt;
        while (window.pollEvent(evnt)) {
            switch (evnt.type) {
            case sf::Event::Closed:
                window.close();
                break;
            default:
                break;
            }
        }
        window.clear(sf::Color(210, 180, 140, 100));
        window.draw(p1);
        window.display();
    }
    return 0;
}
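For what it's worth, sf::ConvexShape inherits from sf::Transformable, so it can be rotated directly with setOrigin and rotate; no vertex array or manual trigonometry is needed. A minimal sketch (the origin, position, rotation speed, and window size are arbitrary choices, and the texture is omitted for brevity):
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(400, 400), "Rotation sketch");

    sf::ConvexShape p1;
    p1.setPointCount(3);
    p1.setPoint(0, sf::Vector2f(70.0f, 380.0f));
    p1.setPoint(1, sf::Vector2f(30.0f, 380.0f));
    p1.setPoint(2, sf::Vector2f(30.0f, 310.0f));

    // rotate about the shape's own centre instead of the window origin
    p1.setOrigin(43.0f, 357.0f);    // roughly the triangle's centroid, in local coordinates
    p1.setPosition(200.0f, 200.0f); // where the centre appears in the window

    sf::Clock clock;
    while (window.isOpen()) {
        sf::Event evnt;
        while (window.pollEvent(evnt))
            if (evnt.type == sf::Event::Closed)
                window.close();

        // advance the rotation by 90 degrees per second, independent of the frame rate
        float dt = clock.restart().asSeconds();
        p1.rotate(90.0f * dt);

        window.clear();
        window.draw(p1);
        window.display();
    }
    return 0;
}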

How to plot a sine wave?

I am new to C++ programming and I would like to plot a sine/cosine/square wave, but I cannot find any resources to help me with it.
My goal is to produce any wave, and then perform a Fourier transform of that wave and produce the resultant wave.
This code should work for you, as long as you build it in an environment that provides graphics.h; many newer IDEs do not ship it by default, so you have to add it first.
#include <iostream>
#include <conio.h>
#include <graphics.h>
#include <math.h>
using namespace std;

int main()
{
    initwindow(800, 600);

    // draw the coordinate axes
    line(0, 500, getmaxx(), 500);
    line(500, 0, 500, getmaxy());

    const float pi = 3.14159f;
    for (int i = -360; i < 360; i++) {
        // treat i as an angle in degrees and plot one point per degree
        int x = 500 + i;
        int y = (int)(500 - sin(i * pi / 180) * 25);
        putpixel(x, y, WHITE); // plot the point on the graph
    }

    getch(); // wait so the resultant graph stays visible
    closegraph();
    return 0;
}
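For the Fourier-transform part of the goal, the sampled wave can be transformed without any graphics library at all. Below is a minimal sketch of a naive O(N^2) DFT over one period of a sine wave (standard C++ only; the sample count, bin range, and use of std::complex are my own choices, and a real application would use an FFT library):
#include <cmath>
#include <complex>
#include <iostream>
#include <vector>

int main()
{
    const double pi = 3.14159265358979323846;
    const int N = 64;

    // sample one full period of a sine wave
    std::vector<double> samples(N);
    for (int n = 0; n < N; ++n)
        samples[n] = std::sin(2.0 * pi * n / N);

    // naive O(N^2) discrete Fourier transform; print the magnitude of each bin
    for (int k = 0; k < N / 2; ++k)
    {
        std::complex<double> X(0.0, 0.0);
        for (int n = 0; n < N; ++n)
            X += samples[n] * std::exp(std::complex<double>(0.0, -2.0 * pi * k * n / N));
        std::cout << "bin " << k << ": magnitude " << std::abs(X) << std::endl;
    }
    // for a pure sine spanning exactly one period, only bin 1 has a significant magnitude
    return 0;
}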

Preview pixel value for float-image

Say I have a floating-point image, e.g. in 32FC1 format for a thermal image, and I want to display it using (preferably) ROS or OpenCV tools, while also being able to see the current pixel value (e.g. the temperature) that my mouse is hovering over. How would I do that? Rviz can display the image, but will not show any pixel values. image_view can also display the image, but shows the pixel value as RGB.
Thank you!
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using std::cout;
using std::endl;

// global Mat holding the float image, so the callback can read from it
cv::Mat img_32FC1;

// function to be called on mouse events
// displays values on the console; it can be modified to print values on the image
void mouseEventCallBack(int event, int x, int y, int flags, void *userdata)
{
    if (event == cv::EVENT_MOUSEMOVE)
    {
        cout << "x = " << x << ", y = " << y
             << " value = " << img_32FC1.at<float>(y, x) << endl;
    }
}

int main()
{
    // original color image, CV_8UC3
    cv::Mat img_8UC3 = cv::imread("image.jpg", cv::IMREAD_UNCHANGED), img_8UC1;

    // convert the original image to gray, CV_8UC1
    cv::cvtColor(img_8UC3, img_8UC1, cv::COLOR_BGR2GRAY);

    // convert to float, CV_32FC1, scaled to [0, 1]
    img_8UC1.convertTo(img_32FC1, CV_32FC1);
    img_32FC1 /= 255.0;

    // create a window
    cv::namedWindow("window", cv::WINDOW_AUTOSIZE);

    // set the mouse callback
    cv::setMouseCallback("window", mouseEventCallBack);

    // display the 8-bit image; the callback reads the values from img_32FC1
    cv::imshow("window", img_8UC1);
    cv::waitKey(0);
    cv::destroyAllWindows();
    return 0;
}
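If the float image is a real thermal image with an arbitrary value range (rather than being derived from an 8-bit image as above), a display copy can be produced by scaling the float range into 8 bits, while the callback keeps reading the original float values. A minimal sketch, assuming img_32FC1 already holds the thermal data; these lines would replace the cv::imshow call above:
cv::Mat img_display;
// scale the full float range into 0..255 for viewing purposes only
cv::normalize(img_32FC1, img_display, 0, 255, cv::NORM_MINMAX, CV_8U);
cv::imshow("window", img_display);
// the mouse callback still reads img_32FC1, so the printed values stay in the original units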

Boost Geometry Matrix Transformations on Polygons

Are there any examples of matrix transformations on (Cartesian) polygons using Boost Geometry? I am defining the matrix with simple std::vectors.
Also, I could only find one example of matrix_transformers using ublas, but it's way too convoluted for a simple matrix transformation. If this is the only way, though, I'll stick with it, but it would be great to have other options and to do this with std::vector instead of ublas::matrix.
Here's my solution for anyone who might be interested. Boost Geometry actually added a strategy called matrix_transformer that relies on Boost's qvm::mat for matrix transformations. There aren't many examples out there, so here's my code:
#include <cmath>
#include <fstream>
#include <vector>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <boost/geometry/io/svg/svg_mapper.hpp>
using namespace boost::geometry::strategy::transform;

typedef boost::geometry::model::d2::point_xy<double> point_2f;
typedef boost::geometry::model::polygon<point_2f> polygon_2f;

int main() {
    polygon_2f pol;
    boost::geometry::read_wkt("POLYGON((10 10,10 27,24 22,22 10,10 10))", pol);
    polygon_2f polTrans;

    // Set the rotation angle (in radians)
    double angleDeg = 45;
    double angleRad = angleDeg * 3.14159 / 180.0;
    std::vector<std::vector<double> > mat = {{std::cos(angleRad), std::sin(angleRad), 0},
                                             {-std::sin(angleRad), std::cos(angleRad), 0},
                                             {0, 0, 1}};

    // Create the matrix_transformer for a simple rotation matrix
    matrix_transformer<double, 2, 2> rotation(mat[0][0], mat[0][1], mat[0][2],
                                              mat[1][0], mat[1][1], mat[1][2],
                                              mat[2][0], mat[2][1], mat[2][2]);

    // Apply the matrix_transformer
    boost::geometry::transform(pol, polTrans, rotation);

    // Create an SVG file to show the results
    std::ofstream svg("transformationExample.svg");
    boost::geometry::svg_mapper<point_2f> mapper(svg, 400, 400);
    mapper.add(pol);
    mapper.map(pol, "fill-opacity:0.5;fill:rgb(153,204,0);stroke:rgb(153,204,0);stroke-width:2");
    mapper.add(polTrans);
    mapper.map(polTrans, "fill-opacity:0.5;fill:rgb(153,204,255);stroke:rgb(153,204,255);stroke-width:2");
    return 0;
}
And here's my result: the green polygon is the original and the blue polygon is the transformed one (remember that the rotation was about the origin).
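As a side note (not part of the original solution): for a plain rotation, Boost Geometry also provides a ready-made rotate_transformer strategy, which avoids spelling out the 3x3 matrix by hand. A minimal sketch of that variant on the same polygon:
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

int main() {
    typedef boost::geometry::model::d2::point_xy<double> point_2f;
    typedef boost::geometry::model::polygon<point_2f> polygon_2f;

    polygon_2f pol, polTrans;
    boost::geometry::read_wkt("POLYGON((10 10,10 27,24 22,22 10,10 10))", pol);

    // rotate by 45 degrees about the origin
    boost::geometry::strategy::transform::rotate_transformer<boost::geometry::degree, double, 2, 2> rotation(45.0);
    boost::geometry::transform(pol, polTrans, rotation);
    return 0;
}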

C++ Eigen AlignedBox Transformations

I am trying to take my first steps with the C++ Eigen library. The Matrix functionality was very intuitive, but I have some problems using the AlignedBox type from the Geometry module.
For an exercise I have to rotate an AlignedBox around a specific point and be able to translate it within a 2D plane using Eigen::Transform.
I have been trying things for quite a while.
#include <iostream>
#include <eigen3/Eigen/Dense>

int main()
{
    // create a 1D AlignedBox
    Eigen::MatrixXf sd1(1, 1);
    Eigen::MatrixXf sd2(1, 1);
    sd1 << 0;
    sd2 << 3;
    Eigen::AlignedBox1f box1(sd1, sd2);

    // rotation of 45 deg
    typedef Eigen::Rotation2D<float> R2D;
    R2D r(M_PI / 4.0);

    // create a transformation matrix with a rotation of 45 deg
    typedef Eigen::Transform<float, 2, Eigen::AffineCompact> SE2;
    SE2 t;
    t = r;

    // how to apply transformation t to box1???
    return 0;
}
I thought I had to multiply the AlignedBox by t.matrix(), but since the box is not a matrix type and I did not find any useful built-in function, I have no idea how to apply the transformation. Any help would be appreciated.
Note that the result will be a 2D box. You can compute it by applying the affine transformation to the two 2D extremities and updating the 2D box with the extend method, e.g.:
AlignedBox2f box2;
box2.extend(t * Vector2f(box1.min()(0), 0));
box2.extend(t * Vector2f(box1.max()(0), 0));
To apply another transformation to box2, you can use the same principle on the 4 corners of the box that you can get using the AlignedBox::corner method.
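Putting the pieces together, here is a minimal self-contained sketch of this approach (the [0, 3] box and the 45-degree rotation come from the question; applying the same transform again through the four corners is only there to illustrate the corner method):
#include <cmath>
#include <iostream>
#include <eigen3/Eigen/Dense>
#include <eigen3/Eigen/Geometry>

int main()
{
    // 1D box [0, 3] as in the question
    Eigen::Matrix<float, 1, 1> lo, hi;
    lo << 0.0f;
    hi << 3.0f;
    Eigen::AlignedBox1f box1(lo, hi);

    // 45-degree rotation as a compact affine 2D transform
    Eigen::Transform<float, 2, Eigen::AffineCompact> t;
    t = Eigen::Rotation2D<float>(M_PI / 4.0f);

    // lift the two 1D extremities into 2D, transform them, and grow the 2D box
    Eigen::AlignedBox2f box2;
    box2.extend(t * Eigen::Vector2f(box1.min()(0), 0.0f));
    box2.extend(t * Eigen::Vector2f(box1.max()(0), 0.0f));

    // a further transformation uses all four corners of the 2D box
    Eigen::AlignedBox2f box3;
    box3.extend(t * box2.corner(Eigen::AlignedBox2f::BottomLeft));
    box3.extend(t * box2.corner(Eigen::AlignedBox2f::BottomRight));
    box3.extend(t * box2.corner(Eigen::AlignedBox2f::TopLeft));
    box3.extend(t * box2.corner(Eigen::AlignedBox2f::TopRight));

    std::cout << "box2 min: " << box2.min().transpose()
              << "  max: " << box2.max().transpose() << std::endl;
    std::cout << "box3 min: " << box3.min().transpose()
              << "  max: " << box3.max().transpose() << std::endl;
    return 0;
}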
