I am new to C++ programming and I would like to plot a sine/cosine/square wave, but I cannot find any resources to help me with it.
My goal is to produce any wave, then perform a Fourier transform of that wave and produce the resultant wave.
This code should work for you. Just make sure you run it in an environment that provides graphics.h. Many newer IDEs don't include it by default, so you may have to install it first.
#include <iostream>
#include <conio.h>
#include <graphics.h>
#include <math.h>
using namespace std;

int main() {
    initwindow(800, 600);

    // draw the coordinate axes
    line(0, 500, getmaxx(), 500);
    line(500, 0, 500, getmaxy());

    const float pi = 3.14159f;
    int x, y;
    for (int i = -360; i < 360; i++) {              // i is the angle in degrees
        x = 500 + i;                                // shift so the origin sits at (500,500)
        y = 500 - (int)(sin(i * pi / 180) * 25);    // degrees to radians, amplitude 25 px
        putpixel(x, y, WHITE);                      // plot the point on the graph
    }

    getch();        // wait so the resulting graph stays visible
    closegraph();
    return 0;
}
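The question also asks about taking a Fourier transform of the generated wave, which the BGI code above does not cover. A minimal sketch of a naive O(N^2) DFT over sampled sine values (no FFT library assumed), printing the magnitude of each frequency bin:
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

int main() {
    const double pi = 3.14159265358979323846;
    const int N = 256;                            // number of samples
    std::vector<double> wave(N);
    for (int n = 0; n < N; ++n)
        wave[n] = std::sin(2 * pi * 5 * n / N);   // 5 full cycles across the window

    // Naive DFT: X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)
    for (int k = 0; k < N / 2; ++k) {
        std::complex<double> X(0.0, 0.0);
        for (int n = 0; n < N; ++n)
            X += wave[n] * std::exp(std::complex<double>(0.0, -2.0 * pi * k * n / N));
        std::printf("bin %3d  magnitude %8.3f\n", k, std::abs(X));
    }
    return 0;
}
With this input, bin 5 should dominate and the other bins should be close to zero.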
Say I have a floating point image, e.g. in 32FC1 format for a thermal image, and I want to display it using (preferably) ROS or OpenCV tools, while also being able to see the value of the pixel (e.g. the temperature) my mouse is hovering over. How would I do that? RViz can display the image, but will not show any pixel values. image_view can also display the image, but shows the pixel value as RGB.
Thank you!
#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using std::cout;
using std::endl;

// global Mat so the mouse callback can read the float values
cv::Mat img_32FC1;

// called on every mouse event in the window;
// prints the value to the console, but it can be modified to draw the value on the image
void mouseEventCallBack(int event, int x, int y, int flags, void *userdata)
{
    if (event == cv::EVENT_MOUSEMOVE)
    {
        cout << "x = " << x << ", y = " << y
             << " value = " << img_32FC1.at<float>(y, x) << endl;
    }
}

int main()
{
    // original color image, CV_8UC3
    cv::Mat img_8UC3 = cv::imread("image.jpg", cv::IMREAD_UNCHANGED), img_8UC1;

    // convert the original image to gray, CV_8UC1
    cv::cvtColor(img_8UC3, img_8UC1, cv::COLOR_BGR2GRAY);

    // convert to float, CV_32FC1, scaled to [0,1]
    img_8UC1.convertTo(img_32FC1, CV_32FC1);
    img_32FC1 /= 255.0;

    // create a window and attach the mouse callback to it
    cv::namedWindow("window", cv::WINDOW_AUTOSIZE);
    cv::setMouseCallback("window", mouseEventCallBack);

    // display the gray image while the callback reads from the float image
    cv::imshow("window", img_8UC1);
    cv::waitKey(0);
    cv::destroyAllWindows();
    return 0;
}
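As noted in the callback comment, the value can also be drawn onto the image instead of printed to the console. A minimal sketch of that variant, assuming img_8UC1 is made global alongside img_32FC1 so the callback can redraw it (mouseEventCallBackOverlay is just an illustrative name):
// Hypothetical variant of the callback that overlays the value on a copy of
// the displayed image (assumes a global cv::Mat img_8UC1 as in main above).
void mouseEventCallBackOverlay(int event, int x, int y, int flags, void *userdata)
{
    if (event == cv::EVENT_MOUSEMOVE)
    {
        cv::Mat display;
        cv::cvtColor(img_8UC1, display, cv::COLOR_GRAY2BGR);   // color copy so the text can be red
        std::string text = cv::format("%.3f", img_32FC1.at<float>(y, x));
        cv::putText(display, text, cv::Point(x, y), cv::FONT_HERSHEY_SIMPLEX,
                    0.5, cv::Scalar(0, 0, 255), 1);
        cv::imshow("window", display);
    }
}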
Are there any examples of matrix transformations on polygons (Cartesian), using Boost Geometry? I am defining the matrix with simple std::vectors.
Also, I could only find one example of matrix_transformers using ublas, but it's way too convoluted for a simple matrix transformation. If this is the only way, I'll stick with it, but it would be great to have other options and do this with std::vector instead of ublas::matrix.
Here's my solution for anyone who might be interested. Boost Geometry actually added a strategy called matrix_transformer that relies on Boost's qvm::mat for matrix transformations. There aren't that many examples out there, so here's my code:
#include <cmath>
#include <fstream>
#include <vector>
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>
#include <boost/geometry/io/svg/svg_mapper.hpp>

using namespace boost::geometry::strategy::transform;

typedef boost::geometry::model::d2::point_xy<double> point_2f;
typedef boost::geometry::model::polygon<point_2f> polygon_2f;

int main() {
    polygon_2f pol;
    boost::geometry::read_wkt("POLYGON((10 10,10 27,24 22,22 10,10 10))", pol);
    polygon_2f polTrans;

    // Set the rotation angle (in radians)
    double angleDeg = 45;
    double angleRad = angleDeg * 3.14159265358979 / 180.0;
    std::vector<std::vector<double> > mat = {{std::cos(angleRad), std::sin(angleRad), 0},
                                             {-std::sin(angleRad), std::cos(angleRad), 0},
                                             {0, 0, 1}};

    // Create the matrix_transformer for a simple rotation matrix
    matrix_transformer<double, 2, 2> rotation(mat[0][0], mat[0][1], mat[0][2],
                                              mat[1][0], mat[1][1], mat[1][2],
                                              mat[2][0], mat[2][1], mat[2][2]);

    // Apply the matrix_transformer
    boost::geometry::transform(pol, polTrans, rotation);

    // Create an SVG file to show the result
    std::ofstream svg("transformationExample.svg");
    boost::geometry::svg_mapper<point_2f> mapper(svg, 400, 400);
    mapper.add(pol);
    mapper.map(pol, "fill-opacity:0.5;fill:rgb(153,204,0);stroke:rgb(153,204,0);stroke-width:2");
    mapper.add(polTrans);
    mapper.map(polTrans, "fill-opacity:0.5;fill:rgb(153,204,255);stroke:rgb(153,204,255);stroke-width:2");
    return 0;
}
And here's my result, where the green polygon is the original and the blue polygon is the transformed one (remember that the rotation is about the origin).
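As a side note, for a pure rotation Boost Geometry also ships a ready-made rotate_transformer strategy, so the matrix does not have to be spelled out by hand. A minimal sketch (the rotation convention should be checked against the manual matrix above):
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/polygon.hpp>

int main() {
    namespace bg = boost::geometry;
    typedef bg::model::d2::point_xy<double> point_2f;
    typedef bg::model::polygon<point_2f> polygon_2f;

    polygon_2f pol, polTrans;
    bg::read_wkt("POLYGON((10 10,10 27,24 22,22 10,10 10))", pol);

    // The bg::degree tag means the angle is given in degrees rather than radians
    bg::strategy::transform::rotate_transformer<bg::degree, double, 2, 2> rotation(45.0);
    bg::transform(pol, polTrans, rotation);
    return 0;
}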
I'm looking to write some code in OpenCV/MATLAB that will apply a Gabor filter to images in order to spot interesting image regions. I've read quite a lot of the literature and seen some of the previous MATLAB/OpenCV code, but I'd like to attempt it all myself to make sure I fully understand.
I have the equation for the Gabor function and an image. I am unsure of the steps I should take in my algorithm. The general idea I got was to take the discrete Fourier transform of the image, multiply (convolve?) it with the Gabor function, then take the inverse Fourier transform for the result. Any pointers appreciated. Thanks!
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <math.h>

using namespace cv;

int main(int argc, char** argv)
{
    int ks = 47;                    // kernel size
    if (ks % 2 == 0)                // make sure the kernel size is odd
    {
        ks += 1;
    }
    int hks = (ks - 1) / 2;         // half kernel size
    int kernel_size = 21;           // base size for the enlarged kernel preview
    double sig = 7;                 // sigma of the Gaussian envelope
    double th = 200;                // orientation in degrees
    double ps = 90;                 // phase offset in degrees
    double lm = 0.5 + ps / 100.0;   // wavelength
    double theta = th * CV_PI / 180;
    double psi = ps * CV_PI / 180;
    double del = 2.0 / (ks - 1);
    double sigma = sig / ks;
    double x_theta;
    double y_theta;

    Mat image = imread("C:\\users\\michael\\desktop\\tile1.tif", 1), dest, src, src_f;
    if (image.empty())
    {
        return -1;
    }
    imshow("Src", image);

    cvtColor(image, src, COLOR_BGR2GRAY);
    src.convertTo(src_f, CV_32F, 1.0 / 255, 0);

    // Build the Gabor kernel: a Gaussian envelope multiplied by a cosine carrier
    Mat kernel(ks, ks, CV_32F);
    for (int y = -hks; y <= hks; y++)
    {
        for (int x = -hks; x <= hks; x++)
        {
            x_theta = x * del * cos(theta) + y * del * sin(theta);
            y_theta = -x * del * sin(theta) + y * del * cos(theta);
            kernel.at<float>(hks + y, hks + x) =
                (float)(exp(-0.5 * (pow(x_theta, 2) + pow(y_theta, 2)) / pow(sigma, 2))
                        * cos(2 * CV_PI * x_theta / lm + psi));
        }
    }

    // Spatial convolution of the image with the Gabor kernel
    filter2D(src_f, dest, CV_32F, kernel);
    imshow("Gabor", dest);

    // Show an enlarged, rescaled copy of the kernel
    Mat Lkernel(kernel_size * 20, kernel_size * 20, CV_32F);
    resize(kernel, Lkernel, Lkernel.size());
    Lkernel /= 2.;
    Lkernel += 0.5;
    imshow("Kernel", Lkernel);

    // Squared response highlights the strongly responding regions
    Mat mag;
    pow(dest, 2.0, mag);
    imshow("Mag", mag);

    waitKey(0);
    return 0;
}
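The question describes the frequency-domain route (DFT, multiply, inverse DFT), whereas the code above does the equivalent spatial convolution with filter2D. A minimal sketch of the DFT route, assuming src_f and kernel are built as above (filterViaDFT is just an illustrative helper name):
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>

// DFT of the image, point-wise multiplication with the DFT of the
// zero-padded kernel, then the inverse DFT.
cv::Mat filterViaDFT(const cv::Mat& src_f, const cv::Mat& kernel)
{
    // zero-pad the kernel to the image size so both spectra have equal size
    cv::Mat padded = cv::Mat::zeros(src_f.size(), CV_32F);
    kernel.copyTo(padded(cv::Rect(0, 0, kernel.cols, kernel.rows)));

    cv::Mat img_dft, ker_dft, product, filtered;
    cv::dft(src_f, img_dft, cv::DFT_COMPLEX_OUTPUT);
    cv::dft(padded, ker_dft, cv::DFT_COMPLEX_OUTPUT);
    cv::mulSpectrums(img_dft, ker_dft, product, 0);
    cv::idft(product, filtered, cv::DFT_REAL_OUTPUT | cv::DFT_SCALE);

    // Note: this is a circular convolution and the result is shifted by the
    // kernel offset, so it will not match filter2D exactly at the borders.
    return filtered;
}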
I am currently writing a project that will allow a robot to find its location based on photos of the ceiling. The camera is mounted on the robot and faces the ceiling directly (meaning that the center of the photo is always considered to be the position of the robot). The idea is to establish the 0,0 position and the orientation of the x,y axes using the first photo, then find the translation and rotation between that photo and the next one (taken in a slightly different position), establish a new 0,0 position and x,y axis orientation, and so on. I am finding the features in the photos using the following algorithm (so far on only one image):
#include <opencv/cv.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <iostream>

using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("ceiling.jpg");
    if (img.empty())
    {
        cout << "Cannot load an image!" << endl;
        getchar();
        return -1;
    }

    SIFT sift(10);                      // keep only the 10 strongest keypoints
    vector<KeyPoint> key_points;
    Mat descriptors, mascara;           // mascara is an empty mask (use the whole image)
    Mat output_img;

    sift(img, mascara, key_points, descriptors);

    drawKeypoints(img, key_points, output_img);
    namedWindow("Image");
    imshow("Image", output_img);
    imwrite("image.jpg", output_img);
    waitKey(0);
    return 0;
}
Is there any function that could help me do that?
If the images are not far apart, optical flow could be a better option. Look at the documentation for cv::calcOpticalFlowPyrLK for details on how to use it.
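A minimal sketch of that idea, tracking corners from one ceiling photo into the next and recovering a rigid motion from the matched points (file names and parameters are just placeholders, written against the OpenCV 2.4-style API used above):
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/video/tracking.hpp>
#include <vector>

int main()
{
    // two consecutive ceiling photos, loaded as grayscale
    cv::Mat prev = cv::imread("ceiling_0.jpg", 0);
    cv::Mat next = cv::imread("ceiling_1.jpg", 0);
    if (prev.empty() || next.empty()) return -1;

    // pick corner-like points in the first image ...
    std::vector<cv::Point2f> prevPts, nextPts;
    cv::goodFeaturesToTrack(prev, prevPts, 200, 0.01, 10);

    // ... and track them into the second image
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prev, next, prevPts, nextPts, status, err);

    // keep only the successfully tracked point pairs
    std::vector<cv::Point2f> goodPrev, goodNext;
    for (size_t i = 0; i < status.size(); ++i)
        if (status[i]) { goodPrev.push_back(prevPts[i]); goodNext.push_back(nextPts[i]); }

    // 2x3 rigid transform (rotation + translation) between the two views
    cv::Mat motion = cv::estimateRigidTransform(goodPrev, goodNext, false);
    return 0;
}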
Can I use the Eigen library to get the rotation matrix which rotates vector A to vector B?
I have been searching for a while, but cannot find a related API.
You first have to construct a quaternion and then convert it to a matrix, for instance:
#include <Eigen/Geometry>
using namespace Eigen;

int main() {
    // example vectors; setFromTwoVectors does not require them to be normalized
    Vector3f A(1, 0, 0), B(0, 1, 0);
    Matrix3f R;
    // build the quaternion rotating A onto B, then convert it to a matrix
    R = Quaternionf().setFromTwoVectors(A, B);
    return 0;
}
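A quick sanity check with the example values above: R * A.normalized() should match B.normalized() up to floating-point error.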